DPU | Data Processing Unit
What is it?
The evolution of server technology and data center applications has largely revolved around finding new helpers for the central processing unit (CPU) to make computing faster. The most obvious example is the GPU, which was created to help render computer graphics, and by extension the GPGPU, which came into existence when people realized the GPU could do more than render graphics. Through the combination of heterogeneous computing and parallel computing, GPGPUs are now widely used to extend the CPU's processing capabilities into the realms of high performance computing (HPC) and supercomputing.
The DPU, or data processing unit, is a more recent milestone in the actualization of this philosophy. Envisioned as the third pillar of the data center in addition to the CPU and the GPU, the DPU further helps out the CPU by taking over its networking and communication workloads. It uses hardware acceleration technology as well as high-performance network interfaces to excel at handling data transfers, data compression, data storage, data security, and data analytics. While these tasks have traditionally been carried out by the CPU, in a large-scale server farm or server room, delegating the tasks to DPUs can free up the CPUs for other workloads. This can make a huge difference in performance when working with data-intensive tasks, such as big data, artificial intelligence (AI), machine learning, and deep learning.
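The offloading idea described above can be illustrated with a conceptual sketch. This is not vendor code and makes no use of any real DPU API; a plain worker thread and zlib compression stand in for the DPU and its hardware-accelerated task, showing how the main "CPU" workload proceeds while the offloaded job runs elsewhere:

```python
# Conceptual sketch (hypothetical, not a DPU SDK): a worker stands in for a
# DPU, taking over data compression so the "CPU" can keep computing.
import zlib
from concurrent.futures import ThreadPoolExecutor

def compress(payload: bytes) -> bytes:
    # The kind of task a DPU would accelerate in hardware;
    # plain zlib compression stands in here.
    return zlib.compress(payload)

def main_compute(n: int) -> int:
    # Stand-in for the CPU's primary workload.
    return sum(i * i for i in range(n))

payload = b"sensor data " * 10_000

with ThreadPoolExecutor(max_workers=1) as offload_engine:
    future = offload_engine.submit(compress, payload)  # hand off to the "DPU"
    result = main_compute(100_000)                     # CPU stays busy meanwhile
    compressed = future.result()                       # collect the offloaded output

# The round trip is lossless.
assert zlib.decompress(compressed) == payload
```

In a real deployment the offloaded task would run on the DPU's own cores and accelerators rather than a host thread, so the CPU is genuinely freed rather than merely time-sliced.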
Free Download:《How to Build Your Data Center with GIGABYTE?》
Why do you need it?
Introducing DPUs into your computing cluster can boost performance, especially if you are working on large-scale projects that push the envelope of data technology. A data center that uses DPUs to move data between processors can expect faster computing, higher availability, better security, and greater shareability. Its proponents are confident that DPUs will be the linchpin of future cloud computing data centers as more and more of the world's data is poured into the cloud.
How is GIGABYTE helpful?
One of the foremost DPU products on the market is NVIDIA's BlueField®-2 DPU, which is designed to offload critical networking, storage, and security tasks from the CPUs, enabling organizations to transform their IT infrastructure into state-of-the-art data centers that are accelerated, fully programmable, and armed with "zero-trust" security features to prevent data breaches and cyberattacks. The GIGABYTE G242-P32, a G-Series GPU Server, is part of the NVIDIA Arm HPC Developer Kit, an integrated hardware and software platform for HPC, AI, and scientific computing applications; the kit is outfitted with two NVIDIA® BlueField®-2 DPUs. The Arm HPC Developer Kit has already been put to good use by customers across many different sectors, including the Graduate Institute of Networking and Multimedia at National Taiwan University (NTU), which has developed an intelligent transportation system, called a "high-precision traffic flow model", to test autonomous vehicles and identify accident-prone road sections for immediate redress.