
Intel® Gaudi® 3 Platform with GIGABYTE Solutions

Leap forward in performance and efficiency with an open, Ethernet-based AI system.

Performance and Efficiency at Every Scale

Building on extensive experience in accelerator design and deep expertise in microarchitecture and software, Intel has developed its third-generation Gaudi AI accelerator, Intel® Gaudi® 3, delivering breakthrough performance and efficiency. These accelerators rival leading solutions while remaining part of an open, scalable platform. With a strong emphasis on Ethernet and an open system architecture, Intel Gaudi 3 AI accelerators set a new standard for AI infrastructure, enabling businesses to scale efficiently and meet the evolving demands of tomorrow's AI challenges.

Scaling AI with GIGABYTE Server on Intel Gaudi 3 Solution

Always striving for the right balance of performance, efficiency, stability, and scalability, GIGABYTE has developed numerous designs to fit various use cases for these AI-era accelerator powerhouses. For this newcomer to the GIGABYTE AI lineup, GIGABYTE designed a robust 8U chassis with optimized thermal capabilities to extract every ounce of its potential. It is the first GIGABYTE server to adopt an 8U air-cooling solution, fitting seamlessly into industry-standard air-cooled infrastructure.


By deploying this Ethernet-centric, scalable platform in GIGAPOD – GIGABYTE’s optimized and proven rack solution – customers can quickly adopt the latest Intel Gaudi 3 accelerators with minimal verification required. The rack solution features a four-server configuration with a Rear Door Heat Exchanger (RDHx), maximizing compute density for optimal use of limited facility space.

To learn more about GIGAPOD, please visit our GIGAPOD solution page.

Designed for the Real-World Demands of AI


Adopt with Ease

Adopt or migrate existing code effortlessly with the Intel Gaudi software suite, purpose-built for generative AI with industry-leading capabilities (see the PyTorch migration sketch after these feature highlights).

Built with Scalability

Designed around Ethernet hardware, with 1200 GB/s of open-standard RoCE connectivity among accelerators, scaling cost-effectively to even the largest and most complex deployments.

Flexible and Powerful Computing

A mix of 8 Matrix Multiplication Engines (MME) and 64 Tensor Processor Cores (TPC) on two interconnected compute dies, delivering optimal performance across a wide range of workloads.

Efficient Memory Intensive Computing

A total of 128 GB of HBM and 96 MB of L2 cache, effectively addressing the memory bottlenecks often seen in AI training and inference and accelerating memory-intensive applications such as LLMs.
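For the "Adopt with Ease" point above, migration in practice usually means re-targeting existing PyTorch code to the Gaudi device. The snippet below is a minimal sketch, assuming the Intel Gaudi software stack and its PyTorch bridge (habana_frameworks) are installed on the host; the model, tensors, and hyperparameters are illustrative only, not a reference workload.

```python
import torch
import torch.nn as nn
import habana_frameworks.torch.core as htcore  # Intel Gaudi PyTorch bridge

device = torch.device("hpu")                   # target Gaudi instead of "cuda"

model = nn.Linear(1024, 1024).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy batch; a real workload would keep its existing data pipeline.
x = torch.randn(64, 1024, device=device)
y = torch.randn(64, 1024, device=device)

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
htcore.mark_step()    # flush the accumulated graph to the device (lazy mode)
optimizer.step()
htcore.mark_step()
print(f"loss: {loss.item():.4f}")
```

In many cases the only changes to an existing GPU script are the extra import, the "hpu" device string, and the mark_step() calls; higher-level stacks built on the Gaudi software suite wrap these steps for you.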

Intel Gaudi 3 AI Accelerator Specifications

Model: Intel® Gaudi® 3 Accelerator
BF16/FP8 MME TFLOPs: 1835
BF16 Vector TFLOPs: 28.7
MME Units: 8
TPC Units: 64
HBM Capacity: 128 GB
HBM Bandwidth: 3.7 TB/s
On-die SRAM Capacity: 96 MB
On-die SRAM Bandwidth: 12.8 TB/s
Networking: 1200 GB/s bidirectional
Host Interface: PCIe Gen5 x16
Media: 14 Decoders
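The peak figures above also support a rough roofline-style estimate of when a kernel stops being limited by memory bandwidth. The sketch below uses only the table's peak numbers and standard unit conversions; it is a back-of-the-envelope calculation, not a measured or vendor-published result.

```python
# Back-of-the-envelope roofline estimate using only the peak figures above.
# Real kernels depend on data layout, precision, and achieved utilization.
peak_mme_tflops = 1835.0   # BF16/FP8 MME peak, TFLOPs
hbm_bw_tb_s     = 3.7      # HBM bandwidth, TB/s
sram_bw_tb_s    = 12.8     # on-die SRAM bandwidth, TB/s

# Ridge point: arithmetic intensity (FLOPs per byte) at which peak compute
# and memory bandwidth balance. Kernels below it are bandwidth-bound.
ridge_hbm  = peak_mme_tflops / hbm_bw_tb_s    # TFLOPs / (TB/s) = FLOPs/byte
ridge_sram = peak_mme_tflops / sram_bw_tb_s

print(f"HBM ridge point:  ~{ridge_hbm:.0f} FLOPs/byte")
print(f"SRAM ridge point: ~{ridge_sram:.0f} FLOPs/byte")
```

Kernels that fall below these ridge points are limited by memory bandwidth rather than compute, which is why the HBM and on-die SRAM bandwidth figures matter for memory-intensive workloads such as LLM inference.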

Applications

HPC

Complex problem-solving in HPC applications relies on numerical methods, simulations, and large-scale computation to deliver meaningful insights. While HPC was traditionally less dependent on GPUs, the massive parallel computing power of GPGPUs has greatly accelerated its development in recent years, making hybrid configurations a growing trend in modern supercomputers.

AI

With the rapid adoption of AI, from general applications to the fast-evolving deep learning, GPGPUs have become a game changer for the industry. The parallel processing capabilities of GPGPUs allow for the handling of massive datasets and complex algorithms, which are essential for training and deploying AI models. As a result, AI has become the key to making modern systems faster and “smarter” in the most efficient way.

Data Analytics

In data-intensive applications such as big data analytics and computational simulation, systems rely heavily on GPGPUs for highly parallel processing, low latency, and high bandwidth to facilitate data mining and large-scale data processing. The ability of GPGPUs to handle vast amounts of data simultaneously not only accelerates the processing of massive datasets but also enables more accurate and timely insights, driving informed decision-making in fields like finance, healthcare, and scientific research.

Featured New Products

G893-SG1-AAX1

HPC/AI Server - 5th/4th Gen Intel® Xeon® Scalable - 8U DP Intel® Gaudi® 3

Resources

GIGAPOD - AI Supercomputing Solution
Video: GIGAPOD: The Future of AI Computing in Data Centers
4th/5th Gen Intel Xeon Scalable Solutions
GIGABYTE POD Manager
GIGABYTE Direct Liquid Cooling Solution
Enterprise Solutions for Intel Xeon 6 Processors
AMD Instinct™ MI300 Series Platform
NVIDIA Blackwell Solutions
Topic: AI Server and AI PC Solutions for Every AI Application
Article: Concept Breakdown: What Is HPC? A Technical Guide from GIGABYTE
Article: CPU vs. GPU: Which Processor is Right for You?
GIGABYTE Showcases Advanced Cooling Solutions for Data Centers at DCW Singapore, Enhancing Green Computing Performance
NVIDIA-Certified Systems™
AMD Instinct MI200 Series Platform
GIGABYTE Showcases Future-Ready AI and HPC Technologies for High-Efficiency Computing at SCA 2025
GIGABYTE Exhibits at the OCP Global Summit 2024 to Showcase AI Solutions That Make a Real Impact Today
Article: To Harness Generative AI, You Must Learn About “Training” & “Inference”
GIGABYTE Presents Total Data Center Solutions: From Hardware to Cluster Management
Giga Computing Showcases Scalable AI Data Center Infrastructure at ISC 2025, Featuring Support for New NVIDIA Blackwell Ultra Platform