Revolutionizing Data Center Performance with NVIDIA Grace™ CPU Superchip & GH200 Grace Hopper Superchip

The breakthrough CPU accelerates data center AI & HPC with an advanced superchip design.

Bringing a Whole New Level of Performance and Efficiency to the Modern Data Center

In response to the rapidly growing demand for high performance and low power consumption, the Arm-based NVIDIA Grace™ CPU excels at modern data center workloads that emphasize massive data processing. Its extraordinary performance per watt, packaging density, and memory bandwidth set a new standard for the industry. The NVIDIA Grace™ CPU Superchip is a comprehensive upgrade of the traditional dual-processor concept: two CPUs are interconnected on a single module with NVIDIA® NVLink®-Chip-to-Chip (NVLink-C2C) technology at 900GB/s of bandwidth and paired with up to 960GB of LPDDR5X memory, so the superchip can efficiently handle a wide range of workloads, especially memory-intensive applications.
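To illustrate how software typically sees this dual-CPU module, the sketch below assumes a Linux system in which each Grace die appears as its own NUMA node and uses libnuma to place a buffer on one die; the node numbering and buffer size are illustrative assumptions, not vendor reference code.

// Minimal sketch: NUMA-aware allocation on a dual-die Grace CPU Superchip.
// Assumes Linux with libnuma installed and each Grace die exposed as a
// separate NUMA node; node IDs and buffer size are illustrative.
#include <numa.h>
#include <cstdio>
#include <cstring>

int main() {
    if (numa_available() < 0) {
        std::fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    int nodes = numa_max_node() + 1;   // expected to report one node per CPU die
    std::printf("NUMA nodes visible to the OS: %d\n", nodes);

    const size_t bytes = 1ull << 30;   // 1 GiB working buffer
    // Bind the allocation to node 0 (one die's LPDDR5X pool); the coherent
    // NVLink-C2C link lets cores on the other die access it transparently.
    void* buf = numa_alloc_onnode(bytes, 0);
    if (!buf) {
        std::fprintf(stderr, "allocation failed\n");
        return 1;
    }

    std::memset(buf, 0, bytes);        // touch pages so they are actually placed
    numa_free(buf, bytes);
    return 0;
}

Built host-side with, for example, g++ -O2 demo.cpp -lnuma; the same placement can also be steered without code changes using standard tools such as numactl.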
NVIDIA Grace™ CPU Superchip

144 Arm Neoverse V2 Cores

Armv9.0-A architecture for broad compatibility and straightforward execution of existing Arm binaries.

Up to 960GB CPU LPDDR5X memory with ECC

Balancing bandwidth, energy efficiency, capacity, and cost with the first data center implementation of LPDDR technology.

Up to 8x PCIe Gen5 x16 links

Multiple PCIe Gen5 links for flexible add-in card configurations and system communication.

NVLink-C2C Technology

Industry-leading chip-to-chip interconnect delivering up to 900GB/s, alleviating bottlenecks and making a coherent memory interconnect possible.

NVIDIA Scalable Coherency Fabric (SCF)

NVIDIA-designed mesh fabric and distributed cache architecture for higher bandwidth and better scalability.

InfiniBand Networking Systems

Designed for maximum scale-out capability, connecting superchips with up to 100GB/s of total bandwidth through InfiniBand switches, BlueField-3 DPUs, and ConnectX-7 NICs.

NVIDIA CUDA Platform

The well-established platform is optimized for the new Arm-based CPU, enabling accelerated computing on the superchips alongside add-in cards and networking systems.

Stepping Further into the Era of AI and GPU-Accelerated HPC

Moving beyond pure CPU applications, the NVIDIA GH200 Grace Hopper Superchip combines an NVIDIA Grace™ CPU with an NVIDIA H100 GPU for giant-scale AI and HPC applications. The same NVIDIA® NVLink®-C2C technology joins the CPU and GPU on a single superchip, forming an exceptionally powerful computational module. The coherent memory design pairs high-speed HBM3 or HBM3e GPU memory with large-capacity LPDDR5X CPU memory. The superchip also inherits the ability to scale out over InfiniBand networking by adopting BlueField®-3 DPUs or NICs, connecting systems at 100GB/s for ML and HPC workloads. The latest GH200 NVL32 takes deep learning and HPC workloads further by connecting up to 32 superchips through the NVLink Switch System, built on NVLink switches with 900GB/s of bandwidth between any two superchips, making the most of the powerful compute chips and the extended GPU memory.
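To make the coherent CPU-GPU memory model concrete, the sketch below uses standard CUDA managed memory so that the CPU and GPU operate on the same pointer with no explicit copies; this is the programming style that the GH200's NVLink-C2C coherency and unified HBM/LPDDR5X address space are built to serve. It is a minimal illustrative sketch, not NVIDIA sample code.

// Minimal sketch: one allocation shared by CPU and GPU via unified memory.
// On GH200, NVLink-C2C coherency backs this model across HBM and LPDDR5X.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale(float* data, float factor, size_t n) {
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const size_t n = 1 << 20;
    float* data = nullptr;

    cudaMallocManaged(&data, n * sizeof(float));      // visible to CPU and GPU

    for (size_t i = 0; i < n; ++i) data[i] = 1.0f;    // CPU writes

    scale<<<(n + 255) / 256, 256>>>(data, 2.0f, n);   // GPU updates in place
    cudaDeviceSynchronize();

    std::printf("data[0] = %f\n", data[0]);           // CPU reads back 2.0
    cudaFree(data);
    return 0;
}

Built with nvcc, the same source runs on any CUDA system; on a coherent platform such as GH200, the NVLink-C2C link is what lets this single-pointer style extend across the full LPDDR5X capacity.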
NVIDIA GH200 Grace Hopper Superchip

72 Arm Neoverse V2 Cores

Armv9.0-A architecture for broad compatibility and straightforward execution of existing Arm binaries.

Up to 480GB CPU LPDDR5X memory with ECC

Balancing bandwidth, energy efficiency, capacity, and cost with the first data center implementation of LPDDR technology.

96GB HBM3 or 144GB HBM3e GPU memory

Adoption of high-bandwidth memory (HBM) for improved performance in memory-intensive workloads.

Up to 4x PCIe Gen5 x16 links

Multiple PCIe Gen5 links for flexible add-in card configurations and system communication.

NVLink-C2C Technology

Industry-leading chip-to-chip interconnect delivering up to 900GB/s, alleviating bottlenecks and making a coherent memory interconnect possible.

InfiniBand Networking Systems

Designed for maximum scale-out capability, connecting superchips with up to 100GB/s of total bandwidth through InfiniBand switches, BlueField-3 DPUs, and ConnectX-7 NICs.

Easy-to-Program Heterogeneous Platform

Bringing preferred programming languages to the CUDA platform, together with hardware-accelerated memory coherency, for a simple transition to the new platform (see the sketch after this list).

NVIDIA CUDA Platform

The well-established platform is optimized for the new Arm-based CPU, enabling accelerated computing on the superchips alongside add-in cards and networking systems.
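As one example of what that programming-language support can look like in practice, the sketch below uses only standard ISO C++ parallel algorithms; built with the NVIDIA HPC SDK's nvc++ compiler and its -stdpar=gpu option it can be offloaded to the GPU, while an ordinary C++ compiler runs it on the CPU cores. The compiler choice and workload are illustrative assumptions, not part of GIGABYTE's or NVIDIA's documentation for these systems.

// Minimal sketch: standard ISO C++ parallel algorithms, no CUDA-specific code.
// Compiled with "nvc++ -stdpar=gpu" the transform below can run on the GPU;
// compiled with an ordinary C++17 compiler it runs on the CPU cores.
#include <algorithm>
#include <execution>
#include <vector>
#include <cstdio>

int main() {
    std::vector<double> x(1 << 20, 1.0);

    // SAXPY-like update expressed as a standard parallel algorithm.
    std::transform(std::execution::par_unseq, x.begin(), x.end(), x.begin(),
                   [](double v) { return 2.0 * v + 1.0; });

    std::printf("x[0] = %f\n", x[0]);  // expected: 3.0
    return 0;
}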

Maximize Configuration Flexibility with the GIGABYTE Server Lineup

Drawing on experience from its diverse product lines, GIGABYTE provides a range of options supporting the NVIDIA Grace™ CPU and GH200 Grace Hopper Superchips in different form factors, aimed at multiple target applications. For the highest possible computing density, GIGABYTE designed the H263 and H223 series high-density servers to deliver the most computing power in a single rack, in either 2U 2-Node or 2U 4-Node configurations and with both air cooling and direct liquid cooling solutions.

For total NVIDIA package solutions, GIGABYTE also provides its X-series servers, which are based on the NVIDIA MGX™ platform with a modularized design and support for multiple add-in cards. These servers ensure compatibility across different rack standards, enable flexible cluster configurations, and maintain compatibility with NVIDIA software and NVIDIA-defined configurations.

For ground-up total solutions, GIGABYTE offers GIGA POD for data centers looking for the highest possible performance and scalability. It accommodates multiple MGX, high-density, or GPU servers in a single rack and expands into a multi-rack configuration, with all nodes connected through InfiniBand switch systems at 100GB/s of full bi-directional bandwidth. The solution bundles flexible configuration, ease of deployment, future scale-out capability, and both software and hardware optimization. GIGA POD demonstrates the future of giant-scale computing and shows that new technologies can be adopted in an easy, cost-effective, and time-saving way, with everything from design to final-stage testing included in a single package.

Featured New Products
