HPC

  • What is it?
    High Performance Computing, or HPC, refers to the ability to process data and perform calculations at high speeds, especially in systems that function above a trillion floating point operations per second (teraFLOPS). The world's leading supercomputers operate on the scale of petaFLOPS (quadrillion floating point operations per second), whereas the next goalpost is “exascale computing”, which functions above a quintillion floating point operations per second (exaFLOPS).

    To achieve this level of performance, parallel computing across a large number of CPUs or GPUs is required. One common type of HPC solution is the computing cluster, which aggregates the computing power of multiple computers (referred to as "nodes") into a large group. A cluster can deliver much higher performance than a single computer, as individual nodes work together to solve a problem larger than any one computer can easily solve.
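The "nodes working together" idea can be sketched in a few lines: a large job is split into chunks that workers process in parallel, and the partial results are combined. Production clusters use frameworks such as MPI and job schedulers such as Slurm; in this minimal sketch, a local Python process pool stands in for the nodes of a cluster:

```python
# Minimal sketch of the divide-and-conquer pattern behind cluster computing.
# Four worker processes stand in for four cluster nodes, each computing a
# partial sum of squares; the partial results are combined at the end.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each "node" computes its share of the overall problem.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]  # split the work across 4 workers
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # same answer as a serial sum, computed in parallel
```

The pattern is the same at cluster scale, except the workers are separate machines exchanging partial results over a high-speed interconnect rather than processes sharing one motherboard.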

  • Why do you need it?
    Acquiring HPC capabilities for your organization is important, whether through a computing cluster or a high-end mainframe computer. HPC can solve problems in science, engineering, or business. Examples include:

    Science: HPC is used by scientists at universities and research institutes to understand the formation of our universe, conduct research into particle physics, or simulate and predict climate and weather patterns.

    Media & Entertainment: HPC solutions such as render farms can be used to render animations and special effects, edit feature films, or livestream special events globally.

    Artificial Intelligence: A particularly popular subset of HPC is machine learning, which is used to develop a myriad of artificial intelligence applications, such as self-driving vehicles, facial recognition software, speech recognition and translation, and drone technology.

    Oil and Gas: HPC is used to process data such as satellite images, ocean floor sonar readings, etc., in order to identify potential new deposits of oil or gas.

    Financial Services: HPC is used to track stock trends and perform algorithmic trading, or to analyze transaction patterns to detect fraudulent activity.

    Medicine: HPC is used to help develop cures for diseases like diabetes, or to enable faster and more accurate diagnosis methods, such as cancer screening techniques.

  • How is GIGABYTE helpful?
    GIGABYTE's H-Series High Density Servers and G-Series GPU Servers are designed especially for HPC applications, since they concentrate a large amount of computing power in a 1U, 2U, or 4U server chassis. The servers can be linked into a cluster via interconnects such as Ethernet, InfiniBand, or Omni-Path. An example of an HPC server solution is the GIGABYTE H262 Series equipped with AMD EPYC™ 7002 Series processors, which can feature up to 512 cores / 1024 threads of computing power (two 64-core AMD EPYC™ CPUs per node, four nodes per system) in a single 2U chassis. By populating a full 42U server rack with these systems (leaving some room for networking switches), the user will be able to utilize up to 10,240 cores and 20,480 threads—a massive amount of computing power.
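The rack-level figures above are straightforward to verify. The sketch below assumes 2U per system and a 2U allowance reserved for networking switches (the switch allowance is an illustrative assumption; actual rack layouts vary):

```python
# Back-of-the-envelope check of the rack-level core count above.
rack_units = 42           # standard full-height server rack
switch_units = 2          # assumed allowance for networking switches
system_units = 2          # one 2U multi-node chassis
nodes_per_system = 4
cpus_per_node = 2
cores_per_cpu = 64        # 64-core CPU, 2 threads per core

systems = (rack_units - switch_units) // system_units      # 20 systems per rack
cores = systems * nodes_per_system * cpus_per_node * cores_per_cpu
threads = cores * 2
print(f"{systems} systems: {cores} cores / {threads} threads")
# 20 systems: 10240 cores / 20480 threads
```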
