Efficient Inferencing with Qualcomm
The GIGABYTE G292-Z43 was purpose-built to support a high density of Qualcomm Cloud AI 100 cards, with cooling designed so that performance is not thermally throttled. Its 2U chassis houses up to sixteen AI 100 cards and a dual AMD EPYC CPU design.
Why Qualcomm Cloud AI 100 & GIGABYTE server?
With up to 400 TOPS per card at a low 75 W power draw in a single-slot form factor, the Qualcomm Cloud AI 100 is an inference accelerator suited to edge, telco, and data center deployments.
Dense Accelerated Computing
The low-profile, half-height half-length (HHHL) design allows exceptional accelerator density over PCIe Gen4 lanes. The G292-Z43 supports up to sixteen cards in x8 slots for high throughput.
Strong power efficiency lowers total cost of ownership (TCO) for edge and telco computing, and the AI 100 adapts readily to a wide range of deployment environments.
In the MLPerf v1.1 benchmarks, the Cloud AI 100 showed leading performance in inferences per second per watt, as well as leadership in offline inferences per second.
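To put the efficiency claims in perspective, a quick back-of-the-envelope calculation using the figures quoted above (400 TOPS and 75 W per card, sixteen cards in a G292-Z43) gives the per-card efficiency and the aggregate compute of a fully populated chassis. These numbers are illustrative peaks only and exclude host CPU, memory, and cooling power.

```python
# Back-of-the-envelope figures for a fully populated G292-Z43, using the
# per-card numbers quoted above (illustrative only; excludes host CPUs,
# memory, fans, and other system power).
TOPS_PER_CARD = 400      # peak throughput per AI 100 card
WATTS_PER_CARD = 75      # per-card power envelope
CARDS_PER_SERVER = 16    # G292-Z43 card capacity

tops_per_watt = TOPS_PER_CARD / WATTS_PER_CARD
total_tops = TOPS_PER_CARD * CARDS_PER_SERVER
total_card_watts = WATTS_PER_CARD * CARDS_PER_SERVER

print(f"Per-card efficiency: {tops_per_watt:.2f} TOPS/W")
print(f"Per-chassis peak compute: {total_tops} TOPS")
print(f"Per-chassis card power: {total_card_watts} W")
```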
Robust Software Tools
Major ML frameworks are supported natively, with more than 50 models available for computer vision and natural language processing, plus Python tooling for development.
The G292-Z43 provides sixteen PCIe Gen4 x8 slots for AI 100 cards, and, for faster networking, two additional low-profile Gen4 x16 slots.
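The slot topology above implies substantial host bandwidth. A PCIe Gen4 lane signals at 16 GT/s with 128b/130b line encoding, so a rough, protocol-overhead-free upper bound on per-slot and aggregate bandwidth can be sketched as follows (real achievable throughput is somewhat lower):

```python
# Approximate one-direction PCIe Gen4 bandwidth for the sixteen x8 card
# slots in the G292-Z43. Gen4 signals at 16 GT/s per lane with 128b/130b
# encoding; protocol overhead reduces real throughput, so these are
# upper bounds.
GT_PER_S = 16                  # Gen4 transfer rate per lane
ENCODING = 128 / 130           # 128b/130b line-encoding efficiency
LANES_PER_SLOT = 8             # each AI 100 card sits in an x8 slot
SLOTS = 16                     # card slots in the chassis

gbit_per_lane = GT_PER_S * ENCODING              # Gbit/s, one direction
gbyte_per_slot = gbit_per_lane * LANES_PER_SLOT / 8
aggregate = gbyte_per_slot * SLOTS

print(f"~{gbyte_per_slot:.2f} GB/s per x8 slot")
print(f"~{aggregate:.0f} GB/s aggregate across {SLOTS} slots")
```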
Leading hyperscale companies run heavy AI inference workloads such as natural language processing, recommendation systems, and prediction engines.
Reshaping transportation and smart cities with massive MIMO (multiple-input, multiple-output) antennas.
5G Edge Box
Transforming the shopping experience, public safety, manufacturing (tracking defects), and agriculture.