Heterogeneous Computing | 異質運算

  • What is it?
    Traditionally, heterogeneous computing refers to a system that uses more than one type of computing core, such as CPU, GPU, DPU, VPU, FPGA, or ASIC. By assigning each workload to the specialized processor best suited to it, performance and energy efficiency can be vastly improved (a minimal code sketch at the end of this section shows this dispatch in action). The different elements are interconnected via high-throughput, low-latency channels so they can operate as a single unit.

    In recent years, the definition of heterogeneous computing has expanded to encompass processors based on different computer architectures. For example, processors based on the Arm architecture may be a better choice for some tasks, thanks to their higher core counts, better power efficiency, and compatibility with Arm-based mobile devices. Adopting an alternative architecture may reveal smarter ways to handle existing workloads and computing tasks.
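
    The idea is easiest to see in code. Below is a minimal sketch of the dispatch logic described above, written with PyTorch purely as an illustrative assumption (the article does not prescribe any particular software stack): the same data-parallel workload is routed to a GPU coprocessor when one is present and falls back to the CPU otherwise.

      # A minimal sketch of heterogeneous dispatch, assuming PyTorch as the
      # programming layer; any similar CPU/GPU framework would work the same way.
      import torch

      def pick_device() -> torch.device:
          """Prefer a GPU coprocessor if one is present, otherwise use the CPU."""
          return torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

      def run_workload(device: torch.device) -> torch.Tensor:
          """The same data-parallel workload runs unchanged on either processor type."""
          a = torch.randn(4096, 4096, device=device)
          b = torch.randn(4096, 4096, device=device)
          return a @ b  # dense matrix multiply: a natural fit for the GPU's many cores

      if __name__ == "__main__":
          device = pick_device()
          result = run_workload(device)
          print(f"ran on {device}, result shape {tuple(result.shape)}")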

  • Why do you need it?
    If your server solutions use coprocessors besides CPUs, such as GPUs or DPUs, then you are already reaping the benefits of heterogeneous computing. Coprocessors can greatly accelerate computing and reduce the time it takes to complete a task. This is especially true in the development of artificial intelligence through machine vision and deep learning, where the server must process vast amounts of data in graphical form; GPUs, which run at lower clock speeds but contain far more cores than traditional CPUs, are vital for this kind of massively parallel work (see the timing sketch at the end of this section).

    By the same logic, installing processors based on an entirely different architecture may be another way to realize the full potential of heterogeneous computing. In recent years, the trend has been to explore alternatives to the conventional x86 architecture, such as Arm processors. Industry experts are looking for ways to unleash the potential of these new products and develop new computing solutions. If your current configuration of processors and coprocessors is performing below expectations, it may be that heterogeneous computing based on a different architecture will have a better chance of fulfilling your needs.
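
    As a rough illustration of the point above, the following sketch (again assuming PyTorch and an NVIDIA GPU, neither of which is required by anything in this article) times the same dense matrix multiply on the CPU and on the GPU; on typical server hardware, the many slower GPU cores finish the data-parallel task considerably faster.

      # A rough timing sketch (assuming PyTorch and a CUDA-capable GPU) showing
      # why a coprocessor with many slower cores can beat a high-frequency CPU
      # on a data-parallel task.
      import time
      import torch

      def time_matmul(device: str, n: int = 4096, repeats: int = 10) -> float:
          a = torch.randn(n, n, device=device)
          b = torch.randn(n, n, device=device)
          _ = a @ b  # warm up so one-time setup cost is not counted
          if device == "cuda":
              torch.cuda.synchronize()
          start = time.perf_counter()
          for _ in range(repeats):
              _ = a @ b
          if device == "cuda":
              torch.cuda.synchronize()  # GPU kernels are asynchronous; wait before stopping the clock
          return (time.perf_counter() - start) / repeats

      if __name__ == "__main__":
          print(f"CPU: {time_matmul('cpu'):.4f} s per multiply")
          if torch.cuda.is_available():
              print(f"GPU: {time_matmul('cuda'):.4f} s per multiply")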

  • How is GIGABYTE helpful?
    Nearly all of GIGABYTE's server solutions support the conventional sense of heterogeneous computing, in which CPUs are paired with coprocessors such as GPGPUs to handle workloads related to parallel computing, machine learning, and the like. Whether the main processors are the latest Intel® Xeon® Scalable processors or AMD EPYC™ processors, they can transfer data to peripheral devices such as graphics cards, storage devices, and high-speed network cards over PCIe Gen 4.0, which has a maximum bandwidth of 64 GB/s, twice that of PCIe Gen 3.0 (see the transfer-rate sketch below).

    What's more, GIGABYTE has worked with Arm-based systems for many years. GIGABYTE offers a full range of servers designed for Ampere® Altra® processors, which are based on the Arm architecture. These products are highly recommended for high performance computing (HPC), cloud computing, and edge computing because they can address diverse deployment scenarios with scalability and flexibility. NVIDIA® has also unveiled the NVIDIA® Arm HPC Developer Kit, which includes an Ampere® Altra® processor, two NVIDIA® A100 Tensor Core GPUs, and two NVIDIA® BlueField®-2 DPUs, all contained inside a G242 server by GIGABYTE. This shining example of heterogeneous computing will pave the way for industry leaders to find new ways of tackling computing workloads with tools suited to the task.
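
    For readers who want to see the PCIe figure quoted above in practice, here is a back-of-the-envelope sketch, assuming PyTorch and a CUDA-capable GPU, that copies a pinned host buffer to the device and reports the observed transfer rate. Note that the 64 GB/s figure for a PCIe Gen 4.0 x16 link counts both directions, so a one-way copy should top out at roughly half of that.

      # A back-of-the-envelope sketch (assuming PyTorch and a CUDA GPU) for
      # observing the host-to-device transfer rate over the PCIe link.
      import time
      import torch

      def measure_h2d_bandwidth(size_mb: int = 1024) -> float:
          """Copy a pinned host buffer to the GPU and report the rate in GB/s."""
          n_bytes = size_mb * 1024 * 1024
          host = torch.empty(n_bytes, dtype=torch.uint8, pin_memory=True)  # pinned memory enables fast DMA
          device = torch.empty(n_bytes, dtype=torch.uint8, device="cuda")
          torch.cuda.synchronize()
          start = time.perf_counter()
          device.copy_(host, non_blocking=True)
          torch.cuda.synchronize()  # wait for the asynchronous copy to finish
          elapsed = time.perf_counter() - start
          return n_bytes / elapsed / 1e9

      if __name__ == "__main__":
          if torch.cuda.is_available():
              print(f"host-to-device: {measure_h2d_bandwidth():.1f} GB/s")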

    Free Download:
    How to Build Your Data Center with GIGABYTE? A Free Downloadable Tech Guide
