Revolutionizing Data Center Performance with NVIDIA Grace™ CPU Superchip & GH200 Grace Hopper Superchip
Bringing a Whole New Level of Performance and Efficiency to the Modern Data Center
144 Arm Neoverse V2 Cores
Armv9.0-A architecture for broad software compatibility and easy execution of existing Arm binaries.
Up to 960GB CPU LPDDR5X memory with ECC
Balancing bandwidth, energy efficiency, capacity, and cost with the first data center implementation of LPDDR memory technology.
Up to 8x PCIe Gen5 x16 links
Multiple PCIe Gen5 links for flexible add-in card configurations and system communication.
NVLink-C2C Technology
Industry-leading chip-to-chip interconnect technology delivering up to 900GB/s of bandwidth, alleviating bottlenecks and enabling a coherent memory interconnect.
NVIDIA Scalable Coherency Fabric (SCF)
NVIDIA-designed mesh fabric and distributed cache architecture for higher bandwidth and better scalability.
InfiniBand Networking Systems
Designed for maximum scale-out capability, with up to 100GB/s of total bandwidth across all superchips through InfiniBand switches, BlueField-3 DPUs, and ConnectX-7 NICs.
NVIDIA CUDA Platform
The well-established CUDA platform is optimized for the new Arm-based CPU, enabling accelerated computing with the superchips alongside add-in cards and networking systems.
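Because the Grace CPU Superchip runs standard aarch64 Linux, existing CUDA code generally builds and runs unmodified. Below is a minimal, illustrative sketch, assuming an aarch64 build of the CUDA Toolkit and an NVIDIA GPU installed as a PCIe add-in card (file names and sizes are hypothetical):

// Illustrative CUDA vector add; the same source that runs on x86 builds
// unmodified for aarch64 on Grace, e.g. "nvcc vadd.cu -o vadd".
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vadd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one element per thread
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    // Managed memory gives one pointer visible to both the Grace CPU and the GPU.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    vadd<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %.1f\n", c[0]);   // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}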
Stepping Further into the Era of AI and GPU-Accelerated HPC
72 Arm Neoverse V2 Cores
Armv9.0-A architecture for broad software compatibility and easy execution of existing Arm binaries.
Up to 480GB CPU LPDDR5X memory with ECC
Balancing bandwidth, energy efficiency, capacity, and cost with the first data center implementation of LPDDR memory technology.
96GB HBM3 or 144GB HBM3e GPU memory
Adoption of high-bandwidth memory (HBM) for improved performance on memory-intensive workloads.
Up to 4x PCIe Gen5 x16 links
Multiple PCIe Gen5 links for flexible add-in card configurations and system communication.
NVLink-C2C Technology
Industry-leading chip-to-chip interconnect technology delivering up to 900GB/s of bandwidth, alleviating bottlenecks and enabling a coherent memory interconnect.
InfiniBand Networking Systems
Designed for maximum scale-out capability, with up to 100GB/s of total bandwidth across all superchips through InfiniBand switches, BlueField-3 DPUs, and ConnectX-7 NICs.
Easy-to-Program Heterogeneous Platform
Bringing developers' preferred programming languages to the CUDA platform, along with hardware-accelerated memory coherency, for simple adaptation to the new platform.
NVIDIA CUDA Platform
The well-established CUDA platform is optimized for the new Arm-based CPU, enabling accelerated computing with the superchips alongside add-in cards and networking systems.
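To illustrate what hardware-accelerated memory coherency means for developers, the hedged sketch below passes a pointer from an ordinary malloc straight to a GPU kernel. It assumes a GH200 system whose driver and OS stack expose system-allocated memory to the GPU over the coherent NVLink-C2C link; the kernel name and sizes are illustrative. On a non-coherent system the same code would need cudaMallocManaged or explicit copies instead.

// Hedged sketch: a GPU kernel operating directly on CPU-allocated (malloc) memory,
// relying on the coherent NVLink-C2C connection of the GH200 Grace Hopper Superchip.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void scale(double *data, double factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;    // GPU touches system memory directly
}

int main() {
    const int n = 1 << 20;
    // Plain system allocation, exactly as a CPU-only program would do it.
    double *data = static_cast<double *>(malloc(n * sizeof(double)));
    for (int i = 0; i < n; ++i) data[i] = 1.0;

    // The same pointer goes straight to the kernel; the coherent fabric keeps
    // the CPU and GPU views of this memory consistent, with no cudaMemcpy.
    scale<<<(n + 255) / 256, 256>>>(data, 2.0, n);
    cudaDeviceSynchronize();

    printf("data[0] = %.1f\n", data[0]);   // expect 2.0

    free(data);
    return 0;
}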
Maximize Configuration Flexibility with the GIGABYTE Server Lineup
Applications for the NVIDIA Grace™ CPU Superchip & GH200 Grace Hopper Superchip
AI
With the fast-growing adoption of AI, whether for training massive language models or running real-time, responsive inference, the Arm-based NVIDIA Superchips benefit from seamless communication between CPUs and GPUs and lower CPU power consumption. Together with high-bandwidth chip-to-chip connections and a coherent memory design for large AI model computations, these superchips fulfill the computational needs of modern AI applications.
HPC
As HPC has developed over the years, applications have gradually moved from traditional x86 platforms to more power-efficient Arm-based platforms. Building on the existing Arm ecosystem, a range of HPC applications can be ported to the new platform with ease. Together with packaged high-bandwidth, low-power memory, the superchips deliver outstanding performance at low power consumption and encourage diverse platform choices for those seeking new solutions.
Cloud Computing
Benefiting from much higher core density and better core scalability, the NVIDIA Grace CPU Superchip offers outstanding performance, power efficiency, and system scalability in an era where the cloud has become an essential part of daily life. It provides low-latency, highly scalable solutions that adapt to the increasing needs of private and public cloud computing services.
Featured New Products
NVIDIA MGX™ Arm Server - NVIDIA GH200 Grace Hopper Superchip - 2U UP 4-Bay Gen5 NVMe | Application: AI, AI Training, AI Inference, HPC & HCI
High Density Arm Server - NVIDIA Grace™ CPU Superchip - 2U 4-Node 16-Bay Gen5 NVMe | Application: HPC, HCI & Hybrid/Private Cloud Server
HPC/AI Arm Server - NVIDIA GH200 Grace Hopper Superchip - 2U 2-Node 8-Bay Gen5 NVMe | Application: AI, AI Training, AI Inference, HPC & HCI
High Density Arm Server - NVIDIA Grace™ CPU Superchip - 2U 4-Node 16-Bay Gen5 NVMe DLC | Application: HPC, HCI & Hybrid/Private Cloud Server
HPC/AI Arm Server - NVIDIA GH200 Grace Hopper Superchip - 2U 4-Node 16-Bay Gen5 NVMe DLC | Application: AI, AI Training, AI Inference, HPC & HCI