
AMD Instinct™ MI300 Series Platform

Bringing Exascale-Class Technologies to Mainstream HPC and AI

Accelerators for the Exascale Era

・Frontier is the #1 fastest supercomputer on the TOP500 list and one of the greenest on the Green500, powered by AMD EPYC™ processors and AMD Instinct™ GPUs. These technologies are now available in GIGABYTE servers for high performance computing (HPC), AI training & inference, and data-intensive workloads.

・With AMD's data center APU and discrete GPUs, GIGABYTE has created and tailored powerful air-cooled and liquid-cooled servers to deliver accelerators for the Exascale era. The AMD Instinct™ MI325X and MI300X GPUs are designed for AI training, fine-tuning, and inference. They are Open Accelerator Modules (OAMs) on a universal baseboard (UBB) housed inside GIGABYTE G-series servers. The AMD Instinct MI300A integrated CPU/GPU accelerated processing unit (APU) targets HPC and AI. It comes in an LGA socketed design, with four sockets in GIGABYTE G383 series servers.

・El Capitan is projected to be the world's most powerful supercomputer, capable of performing more than 2 exaflops. At the heart of the new machine is the AMD Instinct MI300A APU, designed to overcome performance bottlenecks from the narrow interfaces between CPU and GPU, the programming overhead of managing data, and the need to modify code for each GPU generation. The MI300A APU architecture has a chiplet design in which the AMD 'Zen 4' CPUs and AMD CDNA™ 3 GPUs share unified memory. This means the technology not only supports small deployments such as a single server, but also scales to large computing clusters. The demand for AI and HPC is here, and GIGABYTE has the technologies you need to win.

A Discrete GPU and a Data Center APU

The AMD Instinct MI300 series accelerators, including the MI325X, MI300X, and MI300A, are designed to boost AI and high-performance computing (HPC) capabilities in a compact, efficient package that reduces total cost of ownership.

Designed to power the largest AI models, AMD Instinct MI325X accelerators feature industry-leading 256GB of memory and 6 TB/s bandwidth to speed response times. Enhanced power efficiency and support for matrix sparsity further optimize AI training and inference, helping enable sustainable scaling of AI solutions across data centers.
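The matrix sparsity mentioned above typically refers to 2:4 structured sparsity, in which at most two of every four consecutive weights are non-zero so the hardware can skip the zeros. As an illustrative sketch only (not AMD's implementation), the pruning pattern can be demonstrated with NumPy by zeroing the two smallest-magnitude weights in each group of four:

```python
import numpy as np

def prune_2_to_4(weights: np.ndarray) -> np.ndarray:
    """Zero the two smallest-magnitude values in every group of four,
    producing a 2:4 structured-sparse pattern that sparse matrix
    hardware can exploit for higher effective throughput."""
    flat = weights.reshape(-1, 4)
    # Indices of the two smallest |w| in each group of four
    drop = np.argsort(np.abs(flat), axis=1)[:, :2]
    pruned = flat.copy()
    np.put_along_axis(pruned, drop, 0.0, axis=1)
    return pruned.reshape(weights.shape)

w = np.array([[0.9, -0.1, 0.4, 0.05],
              [-0.7, 0.6, 0.02, -0.3]])
print(prune_2_to_4(w))
# Each row of four keeps only its two largest-magnitude weights
```

In practice a model is pruned this way (and usually fine-tuned afterward) before the sparse matrix units can deliver their throughput advantage.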

The Instinct MI300X offers raw acceleration power with eight GPUs per node on a standard platform. AMD Instinct MI300 Series accelerators aim to improve data center efficiency, address budget and sustainability concerns, and provide a highly programmable GPU software platform. The MI300X delivers 304 high-throughput AMD CDNA 3 compute units with native hardware sparse matrix support and AI-specific functions, including new data-type support and photo and video decoding. It supports a wide range of data types and the latest generative AI models.

The world's first unified CPU/GPU accelerated processing unit (APU), the Instinct MI300A, is built to overcome performance bottlenecks from the narrow interfaces between CPU and GPU, eliminate the programming overhead of managing data, and remove the need to refactor and recompile code for every GPU generation. Deployed at scale in some of the world's fastest and greenest supercomputers, this technology is now available through GIGABYTE, right-sized for your environment.

By enhancing computational throughput and simplifying programming and deployment, the AMD Instinct MI300 Series addresses the escalating demand for AI and accelerated HPC amidst resource, complexity, speed, and architecture challenges. The AMD Instinct MI300 Series is Ready to Deploy.

Select GIGABYTE for the AMD MI300 Series Platform


High Performance

The custom 8-GPU baseboard server ensures stable, peak performance from CPUs and GPUs, with priority given to signal integrity and cooling.

Scale-out

Multiple expansion slots can be populated with Ethernet or InfiniBand NICs for high-speed communication between interconnected nodes.

Energy Efficiency

Real-time power management, automatic fan speed control, and redundant Titanium PSUs ensure excellent cooling and power efficiency. A direct liquid cooling (DLC) option is also available.

Compute Density

Offering industry-leading compute density in a 5U chassis (G593 series) and a 3U chassis (G383 series), these servers achieve greater performance per rack.

Advanced Cooling

With server models that use direct liquid cooling (DLC), CPUs and GPUs can be cooled faster than with air.

AMD Instinct™ MI300 Series Accelerators Specifications

Model | AMD Instinct™ MI325X GPU | AMD Instinct™ MI300X GPU | AMD Instinct™ MI300A APU
Form Factor | OAM module | OAM module | APU SH5 socket
AMD 'Zen 4' CPU Cores | - | - | 24
GPU Compute Units | 304 | 304 | 228
Stream Processors | 19,456 | 19,456 | 14,592
Peak FP64/FP32 Matrix* | 163.4 TFLOPS | 163.4 TFLOPS | 122.6 TFLOPS
Peak FP64/FP32 Vector* | 81.7/163.4 TFLOPS | 81.7/163.4 TFLOPS | 61.3/122.6 TFLOPS
Peak FP16/BF16* | 1307.4 TFLOPS | 1307.4 TFLOPS | 980.6 TFLOPS
Peak FP8* | 2614.9 TFLOPS | 2614.9 TFLOPS | 1961.2 TFLOPS
Dedicated Memory Size | 256 GB HBM3E | 192 GB HBM3 | 128 GB HBM3
Memory Clock | 6.0 GHz | 5.2 GHz | 5.2 GHz
Memory Bandwidth | 6 TB/s | 5.3 TB/s | 5.3 TB/s
Bus Interface | PCIe Gen5 x16 | PCIe Gen5 x16 | PCIe Gen5 x16
Infinity Fabric™ Links | 8 | 8 | 8
Maximum TDP/TBP | 1000W | 750W | 550W / 760W (Peak)
Virtualization Support | Up to 8 partitions | Up to 8 partitions | Up to 3 partitions

* Figures shown are without sparsity
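Several figures in the table above can be cross-checked with simple arithmetic: stream processors equal compute units × 64, and peak memory bandwidth equals the memory clock (in GT/s) times the bus width divided by 8. A minimal sketch, assuming the 8192-bit HBM interface (eight 1024-bit stacks) of AMD's published MI300 memory design, which the table itself does not list:

```python
# Sanity-check the spec table.
# Assumption: 8192-bit memory bus (eight 1024-bit HBM stacks);
# this figure comes from AMD's MI300 memory design, not the table above.
BUS_WIDTH_BITS = 8 * 1024

def bandwidth_tbps(mem_clock_gtps: float) -> float:
    """Peak bandwidth: transfers/s x bits per transfer / 8 bits-per-byte,
    converted from GB/s to TB/s."""
    return mem_clock_gtps * BUS_WIDTH_BITS / 8 / 1000

print(bandwidth_tbps(6.0))  # MI325X: 6.144, listed as 6 TB/s
print(bandwidth_tbps(5.2))  # MI300X / MI300A: 5.3248, listed as 5.3 TB/s
print(304 * 64, 228 * 64)   # 19456 and 14592 stream processors
```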

Applications for AMD Instinct MI300 Series

Generative AI

Servers based on the 8-GPU UBB are ideal for generative AI because of the parallel processing nature of the GPU. Parallel processing excels at massive training data sets and deep learning models such as neural networks, and it speeds up applications like natural language processing and data augmentation.

HPC

Complex problem solving in HPC applications involves simulations, modeling, and data analysis to achieve greater insights. These workloads need the parallel processing of the GPU, but they also rely heavily on the CPU for sequential processing in mathematical computations.

AI Inference

High memory bandwidth, large memory capacity, and low latency between GPUs are ideal for AI inference because they allow large data sets to be handled and data to be processed in batches. This is important for real-time and large-scale inference applications.
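The benefit of batching is that many queued requests can be served by one large matrix operation instead of many small ones, which keeps a wide accelerator and its HBM bandwidth busy. An illustrative NumPy sketch with a hypothetical single linear layer (the weights and shapes here are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 256))        # one linear layer (hypothetical)
requests = rng.standard_normal((32, 512))  # 32 queued inference requests

# Per-request: 32 separate matrix-vector products
one_by_one = np.stack([x @ W for x in requests])

# Batched: a single matrix-matrix product over the whole batch,
# producing identical results with far better hardware utilization
batched = requests @ W

assert np.allclose(one_by_one, batched)
```

Real inference servers apply the same idea across layers, grouping concurrent requests into a batch before each forward pass.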

Featured New Products

G893-ZX1-AAX2

HPC/AI Server - AMD EPYC™ 9005/9004 - 8U DP AMD Instinct™ MI325X
AI | HPC | AI Training | AI Inference

G893-ZX1-AAX1

HPC/AI Server - AMD EPYC™ 9005/9004 - 8U DP AMD Instinct™ MI300X
AI | HPC | AI Training | AI Inference

G4L3-ZX1-LAX2

HPC/AI Server - AMD EPYC™ 9005/9004 - 4U DP AMD Instinct™ MI325X DLC
AI | HPC | AI Training | AI Inference

G4L3-ZX1-LAX1

HPC/AI Server - AMD EPYC™ 9005/9004 - 4U DP AMD Instinct™ MI300X DLC
AI | HPC | AI Training | AI Inference

G383-R80-AAP1

HPC/AI Server - AMD Instinct MI300A APU - 3U 8-Bay Gen5 NVMe
AI | HPC | AI Training | AI Inference

G593-SX1-AAX1

HPC/AI Server - 5th/4th Gen Intel® Xeon® - 5U DP AMD Instinct™ MI300X 8-GPU
AI | HPC | AI Training | AI Inference

G593-ZX1-LAX1

HPC/AI Server - AMD EPYC™ 9004 - 5U DP AMD Instinct™ MI300X 8-GPU DLC
AI | HPC | AI Training | AI Inference

Resources


GIGAPOD - AI Supercomputing Solution

Scalable AI Data Center

GIGABYTE Releases Servers to Accelerate AI and LLMs with AMD EPYC™ 9005 Series Processors and AMD Instinct™ MI325X GPUs

GIGABYTE at Advancing AI 2024 to share compute solutions

AMD EPYC™ 9005 Series Solutions

5th Generation AMD EPYC is the pinnacle of the AMD SP5 platform.

GIGABYTE Unveils Next-gen HPC & AI Servers with AMD Instinct™ MI300 Series Accelerators

Leading the charge with both AMD Instinct™ MI300X GPU and MI300A APU

AI Server and AI PC Solutions for Every AI Application

Discover GIGABYTE’s AI server and PC portfolio, delivering high-density performance and reliability for all AI workloads.