Asetek Direct-to-Chip (D2C) Liquid Cooling

GIGABYTE has joined forces with Asetek to deliver flexible, proven and reliable liquid cooling solutions for a diverse range of systems and challenges, including HPC (High Performance Computing), AI (Artificial Intelligence) and Cloud Services.
GIGABYTE & Asetek provide flexible and reliable liquid cooling solutions both for air-cooled data centers without facility liquid cooling infrastructure and for data centers with such infrastructure already installed. These solutions integrate smoothly with GIGABYTE's off-the-shelf server systems that have been validated and are ready for Asetek direct-to-chip (D2C) cooling loops, providing a complete liquid cooling solution that can increase compute performance and density while decreasing energy consumption.
Usage Scenarios
High Performance Computing
HPC and AI workloads require high performance processors in dense configurations. Asetek Direct-to-Chip (D2C) technology is a field-proven solution that cools some of the world's fastest supercomputers. As AI applications become pervasive, demand for these sophisticated cooling solutions is growing among enterprise customers as well.
High Frequency Trading
High Frequency Trading (HFT) requires high-performance systems that deliver ultra-low latency. Asetek liquid cooling solutions are used to enable much higher overclocks for these extreme applications.
Data Center Challenges
Data centers around the globe are being mandated to simultaneously increase energy efficiency, consolidate operations and reduce costs. Each year, data centers account for approximately two percent of global power consumption.

In addition, with Artificial Intelligence (AI) workloads becoming more mainstream, HPC-style server configurations are moving into traditional air-cooled data centers. To accommodate these high performance, high density servers, data center operators must grapple with not only the increased power densities but also the thermal challenges that they present.

Because liquid is 4,000 times better at storing and transferring heat than air, liquid cooling solutions can provide immediate and measurable benefits to large and small data centers alike for both server / facility energy efficiency and density / performance.
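As a rough sanity check on that figure, the volumetric heat capacities of water and air at room temperature can be compared in a few lines of Python. The property values below are textbook figures, not Asetek data:

```python
# Back-of-envelope comparison of liquid vs. air heat capacity per unit volume.
# Property values are textbook figures at ~25 degrees C, not Asetek data.

WATER_DENSITY = 997.0         # kg/m^3
WATER_SPECIFIC_HEAT = 4186.0  # J/(kg*K)
AIR_DENSITY = 1.184           # kg/m^3
AIR_SPECIFIC_HEAT = 1005.0    # J/(kg*K)

def volumetric_heat_capacity(density, specific_heat):
    """Heat stored per cubic metre per kelvin, in J/(m^3*K)."""
    return density * specific_heat

water = volumetric_heat_capacity(WATER_DENSITY, WATER_SPECIFIC_HEAT)
air = volumetric_heat_capacity(AIR_DENSITY, AIR_SPECIFIC_HEAT)
print(f"Water stores ~{water / air:.0f}x more heat per unit volume than air")
```

This simple ratio lands in the same order of magnitude as the quoted figure; the exact multiple depends on temperature, pressure and whether storage or transfer (conductivity) is being compared.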

GIGABYTE Server Solutions with Asetek Liquid Cooling
Liquid Cooling Benefits & Solutions
Suitable for: Performance, Efficiency
Extreme HPC Performance
Liquid cooling lowers processor temperatures, resulting in a 4% reduction in solution times, and minimizes latency by maximizing cluster interconnect density
Suitable for: Environmental Friendliness, Environmental Protection
Dramatic Energy Savings
Reduces data center energy usage for cooling by up to 50% compared to traditional air cooling
Solution 1: InRackLAAC™ for Air-Cooled Data Centers
Asetek's InRackLAAC brings the advantages of liquid cooling to your data center without the need for facility-wide liquid cooling infrastructure, avoiding a steep initial investment in both cost and setup time.

Liquid Cooling Without Facilities Water
The InRackLAAC is a 2U cabinet, designed to be installed into an existing rack within a traditional air-cooled data center. It enables deployment of high wattage processors in clusters with high interconnect densities without the need for costly infrastructure changes.

Increased Density
Implementation of liquid cooling at its best requires an architecture that can keep up with the highest rack power densities and can be adapted quickly to the latest server designs. The InRackLAAC system from Asetek enables the deployment of high wattage processors in clusters with high interconnect densities and is capable of removing up to 6.4 kW of total processor power from liquid cooled servers.
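As a quick, hypothetical budget check, the 6.4 kW figure above can be divided by a high-wattage CPU TDP (the 280W value is the SKU limit quoted later for the H262 series) to estimate how many processors one InRackLAAC could serve:

```python
# Quick budget check: how many high-wattage processors fit within the
# InRackLAAC's quoted 6.4 kW of removable processor heat.
# Illustrative arithmetic only, not an Asetek sizing guide.

COOLING_BUDGET_W = 6_400.0
CPU_TDP_W = 280.0  # e.g. the 280W TDP SKUs mentioned for the H262 series

max_cpus = int(COOLING_BUDGET_W // CPU_TDP_W)
print(f"Up to {max_cpus} x {CPU_TDP_W:.0f}W processors per InRackLAAC")
```

Real deployments would also budget for pump overhead and headroom above TDP, so the practical count may be lower.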

Multiple InRackLAAC systems can be installed within a single rack or mixed into racks with traditional air-cooled nodes, giving data center operators the flexibility to easily manage the transition from air cooling to liquid cooling.

Asetek InRackLAAC™
How does Asetek InRackLAAC™ Work?
Solution 2: InRackCDU™ for Liquid Cooled Data Centers
The Asetek InRackCDU is a warm water CDU (Cooling Distribution Unit) system capable of removing up to 80kW of heat from the rack. The system is designed to be installed in liquid cooled data centers and enables deployment of high wattage processors in clusters with high interconnect densities while reducing overall data center cooling costs.

The InRackCDU is designed to work with Asetek Direct-to-Chip (D2C) cooling loops installed on GIGABYTE servers, which capture between 60% and 80% of server heat; this heat is then rejected by the InRackCDU to facilities water in a highly efficient all-liquid path. The InRackCDU removes heat from CPUs, GPUs, memory modules and other high-heat components within servers using water as hot as 45°C (113°F), eliminating the need for expensive and inefficient chillers.
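To get a feel for the facility-water flow this implies, the standard relation Q = ṁ·cp·ΔT can be applied to the 80 kW figure. The 10 K water temperature rise below is an assumed design point for illustration, not an Asetek specification:

```python
# Illustrative sizing of facility-water flow for a rack-level CDU,
# using Q = m_dot * c_p * dT. The 10 K temperature rise across the CDU
# is an assumed design point, not an Asetek specification.

HEAT_LOAD_W = 80_000.0  # up to 80 kW rejected by the CDU
CP_WATER = 4186.0       # specific heat of water, J/(kg*K)
DELTA_T_K = 10.0        # assumed water temperature rise across the CDU
WATER_DENSITY = 997.0   # kg/m^3

mass_flow = HEAT_LOAD_W / (CP_WATER * DELTA_T_K)         # kg/s
volume_flow_lpm = mass_flow / WATER_DENSITY * 1000 * 60  # litres per minute
print(f"Required flow: {mass_flow:.2f} kg/s (~{volume_flow_lpm:.0f} L/min)")
```

A smaller allowed temperature rise would require proportionally more flow, which is one reason warm-water designs with generous ΔT are attractive.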

InRackCDU™ Features and Benefits
• Rack-mounted 4U cabinet with liquid-to-liquid (L2L) heat exchanger
• Rejects up to 80kW of processor heat from the rack to data center liquid
• Captures 60% to 80% of server heat with Asetek D2C cooling loops
• Supports up to 3 cooling loops per RU
• 2.5x to 5x increase in rack density
• Tool-less connection on facility hoses to simplify installation
• Monitoring system reports system warnings and alarms
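The 60% to 80% capture range implies that most rack heat moves off the room-air path. A hypothetical split for an example rack load (the 40 kW figure is illustrative, not from the spec sheet) might look like:

```python
# Rough split of rack heat between the liquid and air cooling paths,
# using the 60-80% D2C capture range quoted above.
# The rack IT load is a hypothetical example, not an Asetek figure.

RACK_POWER_W = 40_000.0  # hypothetical rack IT load

for capture in (0.60, 0.70, 0.80):
    to_liquid = RACK_POWER_W * capture
    to_air = RACK_POWER_W - to_liquid
    print(f"{capture:.0%} capture: {to_liquid / 1000:.1f} kW to liquid, "
          f"{to_air / 1000:.1f} kW left for room air handling")
```

The residual air-side load is what the existing room air conditioning still has to handle, which is why D2C deployments shrink rather than eliminate CRAC capacity.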
Asetek InRackCDU™
How does Asetek InRackCDU™ work?
Compatible with a Variety of GIGABYTE Server Systems
Both InRackLAAC and InRackCDU can be seamlessly integrated and deployed with a variety of GIGABYTE server systems that are validated and ready for Asetek direct-to-chip (D2C) cooling loops. Additional GIGABYTE systems can be made ready upon request.
H262 Series with Asetek D2C Cooling Loops
H262-Z61 (rev. 100)
8 x AMD EPYC 7002 Series Processors per 2U system; liquid cooling allows CPU SKUs with a TDP of up to 280W to be integrated (air cooling is limited to 240W)
G481-S80 with Asetek D2C Cooling Loops
G481-S80 (rev. 100/200)
Up to 8 x NVIDIA V100 SXM2 GPGPUs; Liquid cooling system for CPU and GPU reduces fan power consumption by 280W
R161-R12 with Asetek ServerLSL (Server Level Sealed Loop) System
R161-R12 (rev. 100)
Closed loop liquid cooling system enables server to be used with the highest performing overclocked CPUs for applications such as High Frequency Trading (HFT)
Related Technologies
High Performance Computing, or HPC, refers to the ability to process data and perform calculations at high speeds, especially in systems that function above a trillion floating point operations per second (teraFLOPS). The world's leading supercomputers operate on the scale of petaFLOPS (a quadrillion floating point operations per second), while the next goalpost is "exascale computing", which functions above a quintillion floating point operations per second (exaFLOPS).

To achieve this level of performance, parallel computing across a number of CPUs or GPUs is required. One common type of HPC solution is a computing cluster, which aggregates the computing power of multiple computers (referred to as "nodes") into a large group. A cluster can deliver much higher performance than a single computer, as individual nodes work together to solve a problem larger than any one computer can easily solve.
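The cluster aggregation described above is essentially multiplicative: peak throughput scales with node count. A toy calculation (the per-node figure is hypothetical, not from the text):

```python
# Toy illustration of cluster aggregation: total peak throughput
# scales with node count. The per-node figure is hypothetical.

PER_NODE_TFLOPS = 50.0  # hypothetical peak of one GPU-dense node
NODES = 200

cluster_pflops = PER_NODE_TFLOPS * NODES / 1000  # teraFLOPS -> petaFLOPS
print(f"{NODES} nodes x {PER_NODE_TFLOPS:.0f} TFLOPS = "
      f"{cluster_pflops:.0f} PFLOPS peak")
```

Real sustained performance falls below this peak because of interconnect and synchronization overheads, which is why interconnect density matters so much in the cooling discussion above.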
Liquid Cooling
As the name suggests, liquid cooling technology uses liquid as a heat transfer mechanism. Two common liquid cooling technologies available on the market for servers are liquid immersion cooling and direct-to-chip liquid cooling. The former involves immersing the whole server directly into a bath of non-conductive chemical fluid or oil, and is the most efficient at heat transfer, with thermal energy from components dissipated directly into the surrounding fluid. The latter uses pipes to transport cold fluid into and around the server chassis. Heat is then transferred from components to the liquid within the pipes via conductive copper plates, and the heated liquid is then circulated back outside the server to be cooled and re-used.
Data Center
A data center is a facility that an organization uses for housing their IT equipment, including servers, storage, networking devices (such as switches, routers and firewalls), as well as the racks and cabling needed to organize and connect this equipment. This equipment also requires infrastructure to support it such as power distribution systems (including backup generators and uninterruptable power supplies) and ventilation and cooling systems (such as air conditioning systems or liquid cooling systems). A data center can range in size from a single room to a massive multi-warehouse complex. In 2005, the American National Standards Institute (ANSI) and the Telecommunications Industry Association (TIA) published standard ANSI/TIA-942, "Telecommunications Infrastructure Standard for Data Centers", which defines four tiers of data centers by various levels of reliability or resilience. For example, a Tier 1 data center is little more than a server room, while a Tier 4 data center offers redundant subsystems and high security.
You have the idea, we can help make it happen.
Contact Us