
Setting the Record Straight: What is HPC? A Tech Guide by GIGABYTE

by GIGABYTE
The term HPC, which stands for high performance computing, gets thrown around a lot nowadays, as server solutions become more and more ubiquitous. It runs the risk of becoming a catchall phrase, as if anything labeled “HPC” must be the right choice for your computing needs. You may be wondering: what exactly are the benefits of HPC, and is HPC right for you? GIGABYTE Technology, an industry leader in high-performance servers, presents this tech guide to help you understand what HPC means on both a theoretical and a practical level. In doing so, we hope to help you evaluate whether HPC is right for you, while demonstrating what GIGABYTE has to offer in the field of HPC.
HPC: Supercomputing Made Accessible and Achievable
The acronym “HPC” stands for “high performance computing”. It refers broadly to a category of advanced computing that handles a larger amount of data, performs a more complex set of calculations, and runs at higher speeds than your average personal computer. This may align with the definition of a supercomputer; in fact, the two terms are sometimes used interchangeably, with some of the world’s most famous HPC systems operating at the supercomputing scale of a quadrillion floating point operations per second (petaFLOPS), or even a quintillion floating point operations per second (exaFLOPS). But an enterprise (or any other organization) can still benefit from HPC, or even build its own HPC system, without contending for a spot on the “TOP500” list of the world’s supercomputers.

The trick to transcending the boundaries of an ordinary computer isn’t necessarily to build a “better” computer, but to pool multiple computing resources and form a “computing cluster”. The synergy that different types of processors can achieve through heterogeneous computing; the aggregation of storage devices in a secure and cohesive network; the deployment of high-throughput channels that minimize latency—all these features and more enable HPC systems to perform like a supercomputer.
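To make the idea of pooling compute resources more concrete, below is a minimal sketch of a cluster-style program, assuming an MPI runtime and the mpi4py package are available (these tools are not mentioned elsewhere in this guide; they are simply one common way to program a computing cluster). Each process, typically one per core or per node, computes a slice of a numerical approximation of pi, and the partial results are combined on a single "root" process:

# Minimal cluster-style sketch using MPI via mpi4py (assumed to be installed).
# Launch with, for example: mpirun -n 4 python approximate_pi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID within the job
size = comm.Get_size()   # total number of cooperating processes

steps = 10_000_000
width = 1.0 / steps
# Each rank integrates a strided slice of the rectangles that approximate pi.
local_sum = sum(
    4.0 / (1.0 + ((i + 0.5) * width) ** 2)
    for i in range(rank, steps, size)
) * width

pi = comm.reduce(local_sum, op=MPI.SUM, root=0)  # combine the partial results
if rank == 0:
    print(f"pi is approximately {pi:.6f}")

The same pattern, splitting a large job into pieces, running the pieces on many processors at once, and merging the results, underlies most HPC workloads.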

Glossary:
What is Computing Cluster?
What is Heterogeneous Computing?
PACT: Remember the Benefits of HPC with this Acronym
The advantages of an HPC system are numerous. Below, we highlight four benefits you can remember with a handy acronym we’ve invented: PACT, which stands for performance, availability, costs, and time.
● Performance
It goes without saying that an HPC system offers better performance than an average PC, but that’s not what we’re talking about. An organization with access to HPC will outperform the competition, simply because existing tasks can be done more efficiently, and because new value can be gleaned from the data on hand.

Think about the effort that goes into designing a new product, such as a car; or a new set of regulations, such as guidelines that help airports prevent flight delays. A lot of physical testing, not to mention trial-and-error, used to be necessary to nail things down, for no other reason than the fact that too many moving parts are involved. By running simulations on HPC systems during development, new products or procedures will have a much better chance of hitting the ground running.

Learn More:
Major Automaker Uses GIGABYTE to Achieve Optimal Aerodynamic Designs
Spain’s IFISC Prevents “Flight Delay Propagation” with GIGABYTE Servers

An HPC system can also comb through the big data on hand to look for hitherto undiscovered value. One exciting example is the analysis of medical records using artificial intelligence (AI) to find common indicators of disease. In this scenario, HPC not only helps doctors work more efficiently; it also sifts through existing data to uncover new value.
● Availability
Availability refers to the concept of “high availability”, which means IT equipment should offer as much uptime as possible to let users take full advantage of its services. This is becoming a basic requirement in the digital age, as more and more of our lives revolve around technological inventions.

Glossary:
What is High Availability?
What is IT?

Because the nodes of an HPC system are usually composed of more than one computer or server, and because the nodes are designed to work together and back each other up, the availability of an HPC system is generally superior to that of a single computer.
《Glossary: What is Node?
● Costs
It may seem counter-intuitive that a cluster of servers will be more cost-efficient than a single computer; but because computation is spread across multiple resources, an HPC system is more scalable. That is, the user can scale up (upgrade the CPU, GPU, memory, or other resources) and scale out (add more nodes into the cluster) when the need arises, rather than buying more equipment than is necessary at the outset in anticipation of future growth. Renting HPC resources from a cloud service provider (CSP) can further improve scalability and lower costs.

Glossary:
What is Scalability?
What is GPU?
What is Cloud Computing?
● Time
HPC systems are fast—blindingly fast. Your average consumer-grade PC functions at the level of gigaFLOPS (a billion FLOPS) or teraFLOPS (a trillion FLOPS). But as we’ve established, HPC systems are measured on the scale of petaFLOPS, or even exaFLOPS, which are orders of magnitude faster. Being able to complete calculations in a matter of minutes or hours, instead of days or months, is obviously a game changer.
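To put those orders of magnitude in perspective, here is some back-of-the-envelope arithmetic in Python. The workload size (a hypothetical job requiring 10^18 floating point operations) and the sustained speeds are illustrative assumptions, not benchmarks of any particular system:

# Rough arithmetic only: time to finish a hypothetical 10^18-operation job
# if each system could sustain the listed rate (real workloads rarely do).
workload_flop = 1e18

systems = {
    "desktop PC (~100 gigaFLOPS)": 100e9,
    "workstation (~10 teraFLOPS)": 10e12,
    "HPC cluster (~1 petaFLOPS)": 1e15,
    "exascale system (~1 exaFLOPS)": 1e18,
}

for name, rate in systems.items():
    hours = workload_flop / rate / 3600
    print(f"{name:32s} ~{hours:10.4f} hours")

At the gigaFLOPS end the job takes months; at the petaFLOPS end it takes minutes, which is exactly the difference the Time benefit refers to.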

To use the animation industry as an example, rendering used to take up the lion’s share of production time, leaving very little room for artists to refine their designs or writers to polish their scripts. Access to HPC, which can be provided by a cloud-based “render farm”, can greatly accelerate the process, giving the creative geniuses more time to hone their craft.

Learn More:
《Glossary: What is Render Farm?
See How GIGABYTE Servers Elevate Taiwanese Animation on the World Stage
HPC Use Cases: How HPC Has Changed Our Lives
When you consider all the benefits of HPC, it shouldn’t come as a surprise that many different vertical markets have already incorporated it into their daily operations. We have all gained from having access to HPC. Here is a quick look at different vertical sectors and the ways that HPC has made a difference.
● Weather Simulation and Disaster Prevention
As climate change affects weather patterns around the world, natural disasters like tsunamis and storm surges are becoming more dangerous. There is an acute need to study extreme weather events and engineer our towns and cities to withstand them. That is what Shibayama Lab at Japan’s prestigious Waseda University, the “Center for Disaster Prevention around the World”, is doing with a powerful computing cluster built with GIGABYTE servers. Shibayama Lab uses GIGABYTE G221-Z30 and W291-Z00 servers to simulate natural disasters and test recovery plans by running highly detailed computer models at very fast speeds. Multiple simulations can be conducted simultaneously via parallel computing.
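The phrase “parallel computing” here simply means that independent scenarios can run at the same time on different processors. The sketch below shows that pattern with Python’s standard library; the simulation function is a deliberately trivial placeholder, not the lab’s actual storm-surge model:

# Run several independent disaster scenarios in parallel, one per CPU core.
# run_scenario() is a placeholder; a real model would run for hours per case.
from concurrent.futures import ProcessPoolExecutor

def run_scenario(surge_height_m: float) -> float:
    """Stand-in for one storm-surge simulation; returns a dummy impact score."""
    return 0.37 * surge_height_m ** 2

if __name__ == "__main__":
    scenarios = [1.5, 2.0, 3.0, 4.5, 6.0]          # surge heights to test, in metres
    with ProcessPoolExecutor() as pool:            # workers spread across the CPU cores
        results = list(pool.map(run_scenario, scenarios))
    for height, score in zip(scenarios, results):
        print(f"surge {height:.1f} m -> impact score {score:.2f}")

On a cluster, the same idea extends from the cores within one server to many servers working on different scenarios concurrently.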

Learn More:
《More information about GIGABYTE's GPU Server
《More information about GIGABYTE's Tower Server
《Glossary: What is Parallel Computing?
Waseda University Decodes the Storm with GIGABYTE’s Computing Cluster
● Autonomous Vehicles
One of the most exciting inventions on the horizon is the autonomous vehicle. Computer vision and deep learning methods are used to “train” the car’s AI to detect, recognize, and react to road conditions—just like a human driver. HPC systems are crucial to the effective training of self-driving algorithms.
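As a purely illustrative sketch of what “training” means here, the following Python snippet performs one optimization step of a toy image classifier on random tensors, assuming the PyTorch library is installed. It is not GIGABYTE’s or any automaker’s actual pipeline; production self-driving models are vastly larger and train on enormous labeled sensor datasets spread across many GPU nodes:

# One training step of a toy image classifier (illustrative only).
import torch
import torch.nn as nn

model = nn.Sequential(                                  # a tiny convolutional network
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 4),                                   # e.g. car / pedestrian / sign / lane
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 64, 64)                      # stand-in for a batch of camera frames
labels = torch.randint(0, 4, (8,))                      # stand-in for human-annotated labels

loss = loss_fn(model(images), labels)
optimizer.zero_grad()
loss.backward()                                          # compute gradients
optimizer.step()                                         # nudge the model's weights
print(f"training loss: {loss.item():.4f}")

Repeating a step like this billions of times over real data is what makes HPC indispensable for this use case.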

Glossary:
What is Computer Vision?
What is Deep Learning?

An Israeli developer of autonomous driving technology uses GIGABYTE G291-281 and R281-NO0 to train its fleet of self-driving cars. Thanks to HPC, the data can be processed quickly to improve the autonomous vehicle’s AI.

Learn More:
《More information about GIGABYTE’s Rack Server
Constructing the Brain of a Self-Driving Car with GIGABYTE
● Energy
The energy sector, which includes the oil and gas industry, has begun to use HPC to look for new extraction sites. Complex 2D and 3D images gathered during geological surveys can be rapidly and accurately analyzed with HPC to determine the most suitable drilling locations, reducing the costs of exploration. For example, a French geosciences research company utilizes GIGABYTE’s G-Series GPU Server, outfitted with a highly dense configuration of six GPGPUs in a 1U (one rack unit) form factor, to improve the performance of their image recognition and data analytics processing, so they can deliver more accurate data in real time.

Learn More:
《Glossary: What is GPGPU?
《Glossary: What is Rack Unit?
GIGABYTE’s GPU Servers Help Improve Oil & Gas Exploration Efficiency
● Virtual and Augmented Reality
Virtual reality (VR), augmented reality (AR), and mixed reality (MR) have already become part of our daily lives. Now that international tech giants have thrown their weight behind the creation of a “Metaverse”, it’s worth taking a look at how HPC can help.
《Glossary: What is Metaverse?

Here are two examples. In the case of the Taipei Music Center, GIGABYTE's G481-HA0 functions as part of a built-in micro data center that utilizes 5G communications and edge computing tech to offer audience members a “VR 360 Stadium Experience”—a 360-degree, 8K resolution view of a live concert in real time. Another instance is the tech company ArchiFiction in Silicon Valley, which used GIGABYTE's W281-G40 server and MW51-HP0 server motherboard to create n'Space, a projector-based platform capable of rendering photorealistic virtual environments that can be viewed with the naked eye.

Learn More:
《Glossary: What is Data Center?
《Glossary: What is 5G?
《Glossary: What is Edge Computing?
Naked-Eye Virtual Reality Made Possible with GIGABYTE Servers
● Space Exploration, Quantum Physics, and More
Since HPC is one of the most advanced ways to compute, it stands to reason that many scientific breakthroughs rely heavily on HPC. For example, Lowell Observatory in Flagstaff, Arizona, which discovered Pluto in 1930, is on a quest to find habitable exoplanets in outer space. GIGABYTE’s G482-Z50 is used to analyze changes in our Sun’s radial velocity and characterize the common signatures of stellar activity, so superfluous “stellar noise” can be filtered out to accelerate the search. The European Organization for Nuclear Research (CERN), which operates the Large Hadron Collider (LHC), is another success story. GIGABYTE’s G482-Z51 processes the 40 terabytes of raw data generated every second by the particle accelerator to detect the elusive subatomic particle known as the beauty (or bottom) quark.

Learn More:
Lowell Observatory Looks for Habitable Exoplanets with GIGABYTE Servers
CERN and the Large Hadron Collider Study Particle Physics Using GIGABYTE
Whether it is research in climate change, the COVID-19 virus, space exploration, or any other field of human knowledge, HPC has a role to play. This is why enterprises and universities alike are building their own HPC systems or renting HPC services.
Indeed, universities and research institutes around the world are building their own HPC systems. In 2020, the Institute of Theoretical and Computational Chemistry at the University of Barcelona expanded its on-campus data center by adding a new HPC cluster, the “IQTC09”, composed of dozens of GIGABYTE servers. The College of Science at Taiwan Normal University has built the Center for Cloud Computing with GIGABYTE servers; what’s more, it is providing university courses to train HPC experts. An illustrious European tech university has purchased GIGABYTE’s H262-Z63 to bolster its work in engineering science, biochemistry, natural science, civil engineering, and mathematics.

Learn More:
《More information about GIGABYTE’s High Density Server
The University of Barcelona Gets a Computing Boost by Choosing GIGABYTE
Taiwan Normal University Empowers Scientific Study with HPC
In the Quest for Higher Learning, High Density Servers Hold the Key
What are the Components of an HPC System?
At its heart, the structure of an HPC system is not dissimilar to that of a data center or server room. It contains dozens, if not hundreds or even thousands, of servers and other peripheral devices. Depending on their functions, they can be categorized into three different types of nodes: computing, storage, and networking. The computing nodes can be seen as the brains of the entire operation—they perform calculations and complete assignments. The storage nodes house the data and the results. The networking nodes help the servers talk to each other within the system, and they connect the system to the outside world. In this way, an HPC system behaves like a singular, cohesive supercomputer—a whole that is greater than the sum of its parts.
《Glossary: What is Server Room?
This simple infographic shows that an HPC system is generally composed of the same three primary layers as a data center. There is the networking node, which connects the nodes to each other and the entire system to the outside world. There is the computing node, which performs the calculations. And, there is the storage node, which stores all the data.
To achieve HPC, the prowess of the computing layer—specifically, the performance of the processors—is crucial. The two CPU giants, AMD and Intel, are in a race to produce ever more powerful processors containing higher numbers of cores and threads. An alternative to the x86 architecture is the ARM architecture, which utilizes a radically different design that allows a single processor to house an even higher number of cores. The lower power consumption of ARM processors also makes thermal management easier. Ultimately, the types of processors installed in an HPC system are largely determined by the tasks the system is designed for.
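If you want to see how many cores and threads a given node exposes, a quick check can be done in Python; the psutil package used below is our own assumption (a widely used system-inspection library, not something this guide otherwise requires):

# Inspect the local node's thread and core counts (illustrative only).
import os
import psutil   # third-party package, assumed to be installed: pip install psutil

logical_threads = os.cpu_count()                    # hardware threads visible to the OS
physical_cores = psutil.cpu_count(logical=False)    # physical cores; may be None on some systems
print(f"logical threads: {logical_threads}, physical cores: {physical_cores}")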

Learn More:
《More information about GIGABYTE’s ARM Server
《Glossary: What is Core?
《Glossary: What is Thread?

Besides the CPU, the GPU (or GPGPU) accelerators installed in the computing nodes can also be a game-changer. These accelerators are adept at parallel computing, grid computing, and other forms of distributed computing. In other words, if the tasks at hand are optimized to work with accelerators, you can achieve HPC not only by purchasing powerful CPUs, but also by adding the right GPUs. NVIDIA, AMD, and other tech leaders offer many competitive choices in the GPU market.
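The speed-up from accelerators is easy to demonstrate. The sketch below times the same large matrix multiplication on the CPU and, if one is present, on a GPU, assuming PyTorch with CUDA support is installed (an assumption for illustration; any GPU-aware numerical library would show the same effect):

# Time a 4096 x 4096 matrix multiplication on CPU and GPU (illustrative only).
import time
import torch

def timed_matmul(device: str, n: int = 4096) -> float:
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()        # make sure setup work has finished
    start = time.perf_counter()
    _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()        # wait for the GPU kernel to complete
    return time.perf_counter() - start

print(f"CPU: {timed_matmul('cpu'):.3f} s")
if torch.cuda.is_available():
    print(f"GPU: {timed_matmul('cuda'):.3f} s")
else:
    print("No CUDA-capable GPU detected on this node.")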

Glossary:
What is Grid Computing?
What is Distributed Computing?

Let’s now look at the storage node. As mentioned, high availability is a guiding principle when it comes to HPC—not only should the system be online as often as possible, the data stored in an HPC system should also be secure and accessible. RAID, a form of storage virtualization that combines multiple physical disks into one logical unit to provide redundancy, is often used in the storage node. The interface standard known as NVMe accelerates data transfer, so calculations in the computing nodes are not hampered by inadequate transfer speeds. In summary, although the storage node may not be the focal point of an HPC system, adopting the correct storage technologies can go a long way towards bolstering the system’s overall performance.
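The trade-off behind RAID can be expressed as simple arithmetic. The sketch below uses the textbook formulas for usable capacity and fault tolerance, assuming n identical disks and ignoring metadata overhead; it is a simplification, not a description of any particular GIGABYTE configuration:

# Usable capacity and fault tolerance of common RAID levels (simplified).
def raid_summary(level: str, n: int, disk_tb: float):
    if level == "RAID 0":
        return n * disk_tb, 0                  # striping only: fast, no redundancy
    if level == "RAID 1":
        return disk_tb, n - 1                  # full mirroring across all disks
    if level == "RAID 5":
        return (n - 1) * disk_tb, 1            # single parity
    if level == "RAID 6":
        return (n - 2) * disk_tb, 2            # double parity
    if level == "RAID 10":
        return n * disk_tb / 2, 1              # mirrored stripes; survives at least one failure
    raise ValueError(f"unknown level: {level}")

for level in ("RAID 0", "RAID 1", "RAID 5", "RAID 6", "RAID 10"):
    usable, survives = raid_summary(level, n=8, disk_tb=3.84)
    print(f"{level:7s} usable {usable:6.2f} TB, survives {survives} disk failure(s)")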

Glossary:
What is RAID?
What is NVMe?

The networking node comprises servers designed to host an intranet, provide access via VPN, or share a connection to the internet; it can be supplemented with other devices, such as switches, routers, and firewalls. The deployment of the latest networking standards, such as Ethernet and InfiniBand (IB), can ensure that data is transferred quickly and securely—not only between the HPC system and its users, but also among the different nodes inside the HPC system.
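Why the choice of interconnect matters can again be shown with quick arithmetic: the time to move a 10-terabyte dataset at a link’s nominal line rate, assuming no protocol overhead (real transfers are slower, so treat these figures as lower bounds):

# Rough transfer times for a 10 TB dataset at nominal line rates (no overhead).
dataset_bits = 10 * 10**12 * 8          # 10 terabytes expressed in bits

links_gbps = {
    "10 Gb/s Ethernet": 10,
    "100 Gb/s Ethernet": 100,
    "200 Gb/s InfiniBand HDR": 200,
}

for name, gbps in links_gbps.items():
    minutes = dataset_bits / (gbps * 10**9) / 60
    print(f"{name:24s} ~{minutes:6.1f} minutes")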

In addition to the hardware, it is worth noting that the software component of an HPC system is just as vital. HPC platform software, optimized frameworks, libraries, and other tools will help you get the most out of your HPC system. Suitability is the key—if your hardware and software solutions are a good fit for the tasks at hand, you will reap the full benefits of HPC.
Conclusion: Three Steps to Find Out if HPC is Right for You
We hope we’ve been able to present an informative introduction to the concept of HPC, and how different vertical sectors have all benefited from the deployment of HPC systems. As the previous section has shown, the make-up of an HPC system is not complicated—it is similar to that of a server room or computing cluster. This means you could theoretically build your own; or, if that is not feasible, you could rent cloud-based HPC services from a data center. The crux of the problem is this: judging by your current requirements, do you really need HPC? If so, how can you procure HPC?

Individual needs will differ, of course, but we’ve narrowed the criteria down to three simple questions. If your answers to these three questions are a resounding “yes”, then congratulations—you have the potential to benefit from the latest computing trend that is HPC.

1. Do you work in a data-intensive computing environment that will benefit greatly from faster processing?
2. Do you work with, or want to work with, the latest technological breakthroughs, such as AI, IoT, and MLOps?
3. Do you have the manpower to develop and manage an HPC system, whether it’s on-premises or in the cloud; and do you have the know-how to adapt your tasks to fully benefit from HPC?

Glossary:
What is IoT?
What is MLOps?

These questions should be considered carefully. HPC is not just about computing faster; it is about generating new value with supercomputing capabilities. Therefore, you may need to adjust the way you organize data and perform tasks to suit this new, advanced way of computing. There will be an “induction process”, so to speak, as the HPC system is installed (or the HPC services are rented) and your IT team is trained to work with the new tech. But once HPC is fully integrated into your workflow, you will begin to see the benefits. One day, you will look back on the time before HPC and wonder: how did you ever manage?
Recommended GIGABYTE Server Solutions
GIGABYTE has a full line of server solutions for HPC applications. Worthy of note are H-Series High Density Servers for incredible processing prowess in a compact form factor; G-Series GPU Servers for use with GPGPU accelerators; versatile R-Series Rack Servers; S-Series Storage Servers for safeguarding your data; and W-Series Tower Servers / Workstations for installation outside of server racks.
● H-Series High Density Servers
If you are looking for servers to fill the role of the control or computing node in your HPC cluster, you cannot go wrong with GIGABYTE’s H-Series High Density Servers. Optimized for HPC and hyper-converged infrastructure (HCI), H-Series products specialize in fitting as much processing power as possible into a smaller footprint. For example, if you choose a multi-node model with dual AMD EPYC™ processors in each node, you can fit as many as 512 cores and 1,024 threads in a single 2U chassis, which adds up to an astounding 10,240 cores and 20,480 threads in a fully populated 42U server rack, with some room left for networking devices.
《Glossary: What is Hyper-Converged Infrastructure?
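The rack-level figures above follow from simple multiplication; the short check below restates the same assumptions (2U chassis with 512 cores and 1,024 threads each, and 2U of the 42U rack reserved for networking gear):

# Sanity-check the rack density arithmetic quoted above.
rack_units = 42
reserved_for_networking = 2
chassis_height_u = 2
cores_per_chassis = 512
threads_per_chassis = 1024

chassis_count = (rack_units - reserved_for_networking) // chassis_height_u   # 20 chassis
print(chassis_count * cores_per_chassis)     # 10240 cores
print(chassis_count * threads_per_chassis)   # 20480 threads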
● G-Series GPU Servers
As mentioned, the path towards HPC is not necessarily through more powerful CPUs. For specialized tasks optimized for distributed computing, the addition of GPU accelerators can make all the difference. GIGABYTE’s G-Series GPU Servers are designed to support the use of GPUs, with some of the more expansive models capable of housing up to ten PCIe Gen4 GPU cards, or eight double-slot GPU cards. Special interconnect architectures designed to maximize throughput and optimize GPU-to-GPU communication in a system with multiple graphics cards, such as NVIDIA® NVLink™, can be utilized to improve scalability, bandwidth, and the number of links between the GPUs.
《Glossary: What is PCIe?
● R-Series Rack Servers
Whereas H-Series and G-Series servers were envisioned to be more specialized solutions, R-Series Rack Servers were designed with versatility in mind. An optimal balance between efficiency and reliability was achieved to produce top-of-the-line servers that are ideal for business-critical workloads. They can be used in conjunction with H-Series or G-Series products in the HPC system to help you reach performance and budget goals.
● S-Series Storage Servers
S-Series Storage Servers adopt an ultra-dense design with room for numerous drive bays, making them highly recommended for an HPC system’s storage node. In addition to RAID, GIGABYTE servers also incorporate proprietary features such as Smart Crises Management and Protection (SCMP), Smart Ride Through (SmaRT), and Dual ROM Architecture to ensure reliability and high availability.

Learn More:
《More information about GIGABYTE’s Storage Server
● W-Series Tower Servers / Workstations
Unlike the aforementioned options, which were designed to be installed inside a standard server rack, GIGABYTE’s W-Series come in stand-alone chassis that are easier to set up and customize. Despite their unassuming appearance, these servers run on some of the most advanced processors on the market, making them a favorite among university laboratories and smaller studios that may not own a server room, but would like to enjoy the full benefits of an on-premises HPC system.

We hope this tech guide has been able to explain what HPC is, how it is being used, and how you might take advantage of this exciting breakthrough in computer technology. If you are looking to incorporate HPC solutions in your work, GIGABYTE can help. As always, we encourage you to reach out to our sales representatives at marketing@gigacomputing.com for consultation.
Relation Tags
Cloud Computing
Hyper-Converged Infrastructure
Big Data
Scalability
Edge Computing
Grid Computing
IoT
5G
HPC
Data Center
Artificial Intelligence
Deep Learning
GPU
Parallel Computing
NVMe
Render Farm
Computing Cluster
Server Room
Heterogeneous Computing
PCIe
Rack Unit
MLOps
GPGPU
Computer Vision
RAID
IT
Metaverse
High Availability
Core
Distributed Computing
Node
Thread