What is it?
PUE (Power Usage Effectiveness) is the ratio of the total amount of energy used by a computer data center facility to the energy delivered to computing equipment. An ideal PUE ratio would be 1.0: 100% of the energy delivered to the data center is used for computing. However, data centers require cooling systems, lighting and other overhead that will also consume some of this energy. As such, a PUE ratio for a conventional data center will always be greater than 1.0.
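The ratio described above can be expressed as a simple calculation. The sketch below is illustrative only; the function name and the sample energy figures are made up for this example.

```python
def compute_pue(total_facility_energy_kwh: float, it_equipment_energy_kwh: float) -> float:
    """Compute Power Usage Effectiveness (PUE).

    PUE = total facility energy / energy delivered to computing equipment.
    An ideal value is 1.0 (all energy reaches the IT equipment); real data
    centers are above 1.0 because cooling, lighting, and other overhead
    also consume energy.
    """
    if it_equipment_energy_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_energy_kwh / it_equipment_energy_kwh

# Hypothetical example: the facility draws 1,500 kWh in total,
# of which 1,000 kWh reaches the servers.
print(compute_pue(1500, 1000))  # 1.5
```

A facility that consumed 1,500 kWh overall while delivering 1,000 kWh to its servers would therefore have a PUE of 1.5, meaning half a kilowatt-hour of overhead for every kilowatt-hour of computing.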
Why do you need it? Who needs it?
In a large data center, operational expenses such as power bills can be enormous, often an even greater burden than the initial cost of building the data center and buying the physical equipment itself. To reduce these expenses, companies strive to cut both the energy consumed by the server equipment itself and the energy required to house and cool that equipment.
Traditional cooling infrastructure such as air conditioning uses a lot of energy, so companies are looking to reduce the PUE of their data centers in many different ways. One approach is to use closed-loop liquid cooling systems (instead of fan cooling) on the servers themselves; these remove heat more efficiently and therefore allow the servers to operate at higher ambient temperatures, requiring less or no additional air conditioning. Another is liquid immersion cooling, where the entire server rack is immersed in a non-conductive fluid.
Other companies reduce their PUE by locating their data centers in colder climates, such as northern Europe, Russia, or Alaska, where less additional energy is needed to keep ambient temperatures low; Microsoft is even experimenting with locating data centers under the ocean.
How is GIGABYTE helpful?
GIGABYTE has formed close partnerships with vendors of both closed-loop liquid cooling systems (such as CoolIT and Asetek) and immersion cooling systems (such as LiquidStack and Submer) to offer a variety of solutions that help customers reduce their PUE. These partnerships allow GIGABYTE to offer servers that have been modified, tested, and validated to work with each solution, enabling quick and easy deployment.
By using GIGABYTE servers, Spain’s Institute for Cross-Disciplinary Physics and Complex Systems is pitting the world’s foremost server solutions against some of the world’s most pressing issues, including the effects of climate change, the effects of pollution, and the COVID-19 pandemic. GIGABYTE servers are up to these diverse and daunting tasks because they are designed for high performance computing, intensive numerical simulations, AI development, and big data management.
Arizona’s Lowell Observatory is studying the Sun with GIGABYTE’s G482-Z50 GPU Server in an effort to filter out “stellar noise” when looking for habitable planets outside of our Solar System. The server’s AMD EPYC™ processors, parallel computing capabilities, excellent scalability, and industry-leading stability are all features that qualify it for this astronomical task, making the discovery of a true “Twin Earth” achievable within our lifetime.
As CPUs and GPUs continue to advance, they consume more power and generate more heat. It is vital to keep temperature control in mind when purchasing servers. A good cooling solution keeps things running smoothly without hiking up the energy bill or requiring persistent maintenance. GIGABYTE Technology, an industry leader in high-performance servers, presents this tech guide to help you choose a suitable cooling solution. We analyze three popular options—air, liquid, immersion—and demonstrate what GIGABYTE can do for you.
GIGABYTE now offers various server platforms with liquid cooling technology, such as direct-to-chip liquid cooling (liquid-to-liquid or liquid-to-air), single-phase oil immersion, and two-phase liquid immersion. Liquid cooling supports a greater density of CPUs and GPUs, enabling better compute performance in a given amount of space, and helps customers reduce the power consumed by cooling infrastructure to achieve a better data center PUE.
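The effect of cutting cooling power on PUE can be illustrated with some simple arithmetic. The overhead figures below are hypothetical and chosen purely for illustration; they are not measured values for any particular cooling product.

```python
def pue_from_overhead(cooling_kw: float, other_overhead_kw: float, it_kw: float) -> float:
    """PUE = (IT load + cooling overhead + other overhead) / IT load.

    Reducing the power drawn by cooling infrastructure lowers the
    numerator while the IT load stays the same, so PUE improves.
    """
    return (it_kw + cooling_kw + other_overhead_kw) / it_kw

IT_LOAD_KW = 1000.0  # hypothetical IT equipment load

# Air-cooled facility: air conditioning draws a large share of power.
air_pue = pue_from_overhead(cooling_kw=450.0, other_overhead_kw=50.0, it_kw=IT_LOAD_KW)

# Liquid-cooled facility: pumps and heat exchangers draw far less.
liquid_pue = pue_from_overhead(cooling_kw=80.0, other_overhead_kw=50.0, it_kw=IT_LOAD_KW)

print(f"air: {air_pue:.2f}, liquid: {liquid_pue:.2f}")  # air: 1.50, liquid: 1.13
```

Under these assumed figures, swapping air conditioning for a lower-power liquid cooling loop drops the PUE from 1.50 to 1.13 with no change to the computing workload itself.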