GIGABYTE’s DNN Training Appliance is a well-integrated software and hardware package that combines powerful computing performance with a user-friendly GUI, giving DNN developers an easy-to-use environment for dataset management, training job management, real-time system monitoring and model analysis. The appliance includes hardware and software optimizations that improve performance and reduce the time required for DNN training.
To generate a production-grade DNN model, a developer needs to work through many difficult and time-consuming steps, including dataset collection, dataset cleansing, dataset labeling, dataset augmentation, dataset format conversion, DNN model selection, model design, hyperparameter tuning, model training, model evaluation, and model format conversion. Each step requires different tools and configurations that take time and effort to prepare.
GIGABYTE’s DNN Training Appliance aims to reduce this complexity by providing a complete training and management platform, incorporating all these processes into a single appliance enabled with a web-browser based GUI. Users can import, convert and manage their datasets; design, train and evaluate different DNN models; and test inferencing of their trained models. Built on GIGABYTE’s G481-HA1 server, the Appliance is fully optimized to use the available bare-metal resources to deliver maximum training performance on cost-efficient hardware.
DNN models need to be trained on a large dataset to achieve an acceptable level of accuracy. Depending on the dataset size, this training could take days or even weeks. And in order to adapt to the latest business circumstances (such as new products, new regulations, etc.), the DNN model needs to be periodically retrained on the latest datasets. If running a DNN training job takes too long, it will have a serious impact on an organization’s operations, resource management and competitiveness.
GIGABYTE’s DNN Training Appliance helps to reduce training time by incorporating many different optimization features, such as GPU memory optimization to allow larger training inputs or fit larger DNN models into GPU memory, automatic hyperparameter tuning during a training job to achieve higher accuracy, and dataset cleaning features that reduce training time by removing mislabeled or duplicate training data.
GIGABYTE's DNN Training Appliance manages the training process by project grouping. Create a new project to import datasets and run training jobs via the web portal GUI. Within each project, quickly and easily keep track of your model training history, including hyperparameter modification, each training job result and the trained model from each job.
GIGABYTE’s DNN Training Appliance features a step-by-step wizard, guiding the user on how to train different types of models (for image classification, object detection, etc.). This wizard offers different datasets, DNN models, and network and hyperparameter settings according to the DNN application type. The Appliance also includes many powerful optimization features (such as memory optimization, automatic hyperparameter tuning and mixed precision training) that are “one-click” enabled.
Once training starts, keep track of training progress in real-time via the training monitoring interface. And after each training job is completed, quickly verify your DNN model with the inferencing validation feature.
The user can easily create a new training job based on the result output of the previous job, by editing hyperparameters, editing the DNN model architecture, changing the dataset or adjusting the number of GPUs utilized for the training job.
A common problem for DNN training is the lack of good quality datasets or an uneven balance of dataset classifications. The Dataset Augmentation function provides a way to overcome these problems by enhancing your existing datasets, supporting standard or custom augmentations like randomly flipping images left or right, or randomly distorting the color of images to generate more variations of a certain image.
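The Appliance’s augmentation pipeline itself is not exposed as code, but the two example transforms mentioned above (a random left/right flip and a random color distortion) can be sketched in a few lines of NumPy. The `augment` function and its probability and scale parameters are illustrative, not the Appliance’s actual implementation:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def augment(image: np.ndarray) -> np.ndarray:
    """Return a randomly augmented copy of an H x W x C image array."""
    out = image.copy()
    # Randomly flip the image left/right (axis 1 is width).
    if rng.random() < 0.5:
        out = out[:, ::-1, :]
    # Randomly distort color by scaling each channel independently.
    scale = rng.uniform(0.8, 1.2, size=(1, 1, out.shape[2]))
    out = np.clip(out * scale, 0, 255).astype(image.dtype)
    return out

# Generate several variations of one image, e.g. to rebalance a sparse class.
image = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
variants = [augment(image) for _ in range(4)]
```

Each call produces a differently flipped and color-shifted copy with the same shape and dtype as the original, which is what lets the augmented images be fed back into the same training pipeline.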
The platform supports multiple dataset formats (such as Cifar10, KITTI, COCO, ChestXray, etc.), and automatically converts the raw data into the dataset format of the deep learning framework that will be used.
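As an illustration of such a conversion, a single KITTI 2D object label line can be parsed into a generic record. The target schema below is hypothetical, since the Appliance’s internal dataset format is not documented here:

```python
def parse_kitti_label(line: str) -> dict:
    """Convert one line of a KITTI 2D object label into a generic record.

    KITTI label fields: type, truncated, occluded, alpha, then the 2D
    bounding box as left, top, right, bottom (pixel coordinates).
    The output schema here is illustrative, not the Appliance's format.
    """
    fields = line.split()
    return {
        "class": fields[0],
        "bbox": [float(v) for v in fields[4:8]],  # left, top, right, bottom
    }

record = parse_kitti_label(
    "Car 0.00 0 -1.58 587.01 173.33 614.12 200.12 "
    "1.65 1.67 3.64 -0.65 1.71 46.70 -1.59"
)
```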
By using model analysis features like the confusion matrix, the user can find suspect data in their dataset. The GUI can then be used for data re-labeling, deletion or duplication accordingly. The easy-to-use interface saves users time when cleaning up their data, and allows datasets to be managed quickly and easily.
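A confusion matrix tabulates true classes against predicted classes, so off-diagonal cells point at class pairs that are frequently confused, and therefore at data worth reviewing. A minimal NumPy sketch (the toy class counts and helper name are illustrative):

```python
import numpy as np

def confusion_matrix(labels, preds, num_classes):
    """Rows are true classes, columns are predicted classes."""
    m = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(labels, preds):
        m[t, p] += 1
    return m

# Toy validation results for a 3-class problem.
labels = [0, 0, 1, 1, 2, 2, 2]
preds  = [0, 1, 1, 1, 2, 0, 2]
m = confusion_matrix(labels, preds, num_classes=3)

# Off-diagonal cells flag (true, predicted) pairs worth reviewing:
suspect = [(t, p) for t in range(3) for p in range(3) if t != p and m[t, p] > 0]
```

Here the nonzero off-diagonal cells point at samples of class 0 predicted as 1 and of class 2 predicted as 0; inspecting exactly those samples is how mislabeled or ambiguous data gets surfaced for cleanup.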
The GPU memory optimization feature improves GPU memory utilization during DNN training. This optimization allows larger image batch sizes to be used during image classification training, reducing the total training time required for a dataset of a given size. The feature is of particular benefit when larger DNN models or GPUs with smaller memory capacities are used, and reduces the occurrence of GPU OOM (Out of Memory) errors.
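A back-of-envelope model shows why memory headroom translates into batch size: the memory left after model state is divided by the per-sample activation cost. All numbers below are hypothetical, chosen only to illustrate the arithmetic:

```python
def max_batch_size(gpu_mem_bytes, model_bytes, per_sample_bytes):
    """Rough upper bound on batch size before a GPU OOM error:
    memory left after model weights/optimizer state, divided by the
    activation memory each sample in the batch costs."""
    free = gpu_mem_bytes - model_bytes
    return max(free, 0) // per_sample_bytes

GiB, MiB = 1024 ** 3, 1024 ** 2
# Hypothetical numbers: a 16 GiB GPU, 2 GiB of model + optimizer state,
# and ~180 MiB of activations per image.
baseline = max_batch_size(16 * GiB, 2 * GiB, 180 * MiB)   # -> 79
# Halving per-sample activation memory roughly doubles the usable batch.
optimized = max_batch_size(16 * GiB, 2 * GiB, 90 * MiB)   # -> 159
```

The same arithmetic explains both failure modes mentioned above: a larger model shrinks `free`, and a smaller GPU shrinks `gpu_mem_bytes`, either of which drives the feasible batch size toward zero and an OOM error.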
It takes a lot of time and effort to find a set of hyperparameters that optimizes the training accuracy of a DNN model. GIGABYTE’s DNN Training Appliance includes a tool that can automatically discover optimal hyperparameter settings (such as batch size, learning rate, learning rate gamma, and learning rate step) during a training job to achieve the most efficient time / accuracy ratio.
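One common way such a tuner can work is random search over a discrete space of the hyperparameters listed above. The sketch below substitutes a toy scoring function for a real training run; the search space values and function names are illustrative, not the Appliance’s actual algorithm:

```python
import random

random.seed(0)

# Hypothetical discrete search space over the tuned hyperparameters.
space = {
    "batch_size": [32, 64, 128],
    "learning_rate": [0.1, 0.01, 0.001],
    "lr_gamma": [0.1, 0.5],
    "lr_step": [10, 20, 30],
}

def validation_accuracy(params):
    """Stand-in for a real training run; a real tuner would train the
    model with `params` and report accuracy on a validation set."""
    return 1.0 - abs(params["learning_rate"] - 0.01) - 0.001 * params["lr_step"]

best, best_score = None, float("-inf")
for _ in range(10):  # ten random trials
    trial = {k: random.choice(v) for k, v in space.items()}
    score = validation_accuracy(trial)
    if score > best_score:
        best, best_score = trial, score
```

Because each trial is independent, trials can also be run on different GPUs in parallel, which is how an automatic tuner can search during a training job without multiplying wall-clock time by the number of trials.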
GIGABYTE’s DNN Training Appliance features real-time GPU monitoring (of GPU utilization, memory usage and temperature), including a protection mechanism that automatically adjusts a training job in progress when the temperature of a GPU rises above a certain threshold.
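A thermal protection mechanism of this kind can be sketched as a simple threshold policy. The temperature thresholds and the batch-halving response below are assumptions for illustration, not the Appliance’s documented behavior:

```python
# Hypothetical thresholds; a real appliance would expose these as config.
SLOWDOWN_C = 85
PAUSE_C = 92

def protect(temp_c, batch_size):
    """Adjust a running training job based on GPU temperature:
    lighten the load when hot, pause when critically hot."""
    if temp_c >= PAUSE_C:
        return "pause", 0
    if temp_c >= SLOWDOWN_C:
        return "throttle", max(1, batch_size // 2)
    return "continue", batch_size
```

For example, `protect(86, 64)` returns `("throttle", 32)`: the job keeps running but with half the per-step load, giving the GPU a chance to cool before a hard pause becomes necessary.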
GIGABYTE’s DNN Training Appliance is built on the G481-HA1, a server optimized for a single-cluster DNN training appliance by employing a single-root GPU system architecture. Since DNN training requires frequent communication between each GPU in the system, utilizing a single-root architecture (all GPUs can communicate via the same CPU root) helps reduce GPU-to-GPU latency and decrease DNN training job time.
| Component | Specification |
| --- | --- |
| CPU | Dual 2nd Generation Intel Xeon Scalable Processors, TDP up to 205W |
| Memory | 6-Channel DDR4 memory, 24 x DIMMs; Intel Optane DC Persistent Memory Ready |
| Networking | 2 x 10GbE BASE-T LAN ports; 2 x 1GbE BASE-T LAN ports (Optional: 4 x Omni-Path QSFP28 LAN ports) |
| Storage | 8 x 2.5” NVMe + 2 x 2.5” SATA / SAS hot-swap SSD; 12 x 3.5” SATA / SAS hot-swap HDD |
| Expansion Slots | 10 x PCIe 3.0 x16 for GPUs; 1 x PCIe 3.0 x16 (LPHL); 1 x PCIe 3.0 x16 (LPHL, occupied by RAID card) |
| Power | 3 x 2200W 80 PLUS Platinum redundant PSUs |
| Management | 1 x dedicated management port; Aspeed AST2500 management controller; GIGABYTE Server Management remote management platform |
| Layer | Components |
| --- | --- |
| Applications | Image Classification, Segmentation, Object Detection, ... |
| Frameworks | Deep Learning Frameworks (Caffe, TensorFlow, Chainer, and more); DNN Training Optimization System |
| Libraries | Deep Learning Libraries (DIGITS, NCCL, cuDNN, CUDA, and more) |
| System Software | Ubuntu OS, GPU Drivers |
| Hardware | GPU Accelerators; CPU, Memory, Storage, Networking |