
Five Minutes to Know More About the Deep Learning Industry – Will AI Replace Humans?

by GIGABYTE
Can Artificial Intelligence really mimic a human being? And what are the most popular examples of deep learning applications in recent years?
2019 has been a year in which Artificial Intelligence (AI) has advanced by leaps and bounds, thanks in particular to the maturing hardware and software ecosystem designed for machine learning and to advances in high-performance Graphics Processing Unit (GPU) technology, which can greatly increase the speed of matrix multiplications and other numerical computations. In addition, the widespread adoption of deep learning frameworks has made AI applications easier to develop: TensorFlow, Caffe, Torch and other mainstream development frameworks have been embraced by developers worldwide. 《Glossary: What is Artificial Intelligence (AI)?》

Nowadays, using deep learning technology to enhance the functions or features of a product is already commonplace: industry leaders such as NVIDIA, Google, Amazon and IBM have invested significant resources into developing their own “deep learning systems”, giving rise to many products and services with huge development potential. These technology companies are now bringing about a revolution in traditional industries. Gartner has estimated that the AI industry was already worth around $1.2 trillion USD in 2018, a 70% increase from the previous year, and is expected to reach a combined value of $3.5 trillion USD within the next three years.

Can Artificial Intelligence Really Mimic a Human Being?

Although the literal definition of "artificial intelligence" suggests using a computer to simulate the human brain, replacing human thought and action with a low-cost, highly efficient machine, this is a misconception. At present, the ways in which artificial intelligence and the human brain function are vastly different; a computer, for example, can only process digital signals consisting of 0s and 1s.

The biggest progress in AI technology has been enabling computers to “learn” to interpret “images” and “sounds”. Objects in a real environment have a large number and variety of data features, such as the brightness and color values of each pixel in an image, the analog signal of a sound wave, or the vector characteristics of different object shapes. This data is then converted into digital signals so a computer can analyze and process it.

“Deep learning” uses multilayered models to filter the input data set, and continuously adjusts the weighted score of each data parameter at every layer to gradually increase the accuracy of predicted results, until finally the accuracy of the output value is within an ideal range. This whole process is called “training”.
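To make this idea concrete, here is a minimal training sketch written with PyTorch, one of the frameworks mentioned above; the layer sizes, data and learning rate are purely illustrative assumptions, not part of any particular product.

```python
import torch
import torch.nn as nn

# Illustrative data: 256 samples, each with 20 input features and one of 3 labels.
inputs = torch.randn(256, 20)
labels = torch.randint(0, 3, (256,))

# A small multilayered model; each layer holds the "weighted scores" (weights)
# that training will adjust.
model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Linear(64, 3),
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# "Training": repeatedly compare predictions with the known answers and nudge
# every weight slightly in the direction that reduces the error.
for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()   # work out how each weight contributed to the error
    optimizer.step()  # adjust the weighted scores accordingly
```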

After millions of such adjustments, the correct data parameters of known objects receive a higher weighted score. Then, whenever the computer encounters a similar object or sound, it compares the parameters of the new input with the verified parameters of a previously learned “image” or “sound”. If the comparison falls within the allowable range, the computer can identify the new object or sound; this process is called “inferencing”. This is the basic principle of how computers recognize images, objects and sounds. However, since a certain degree of error is tolerated, deep learning is not a strict mathematical calculation but, more precisely, a kind of educated “guess”.
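Continuing the illustrative PyTorch sketch above, inferencing simply runs new data through the already-trained weights and accepts the best-matching answer if it falls within an acceptable confidence range; the 0.8 threshold below is an arbitrary example.

```python
# "Inferencing": run a new, unseen sample through the trained model.
new_sample = torch.randn(1, 20)

with torch.no_grad():                          # no weight adjustment at this stage
    probabilities = torch.softmax(model(new_sample), dim=1)
    confidence, predicted_class = probabilities.max(dim=1)

# Accept the "guess" only if it falls within the allowable range.
if confidence.item() > 0.8:
    print(f"Predicted class {predicted_class.item()} with {confidence.item():.0%} confidence")
else:
    print("Not confident enough to classify this sample")
```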

Now that computers can guess images and sounds with a high degree of accuracy, they can complete tasks that were difficult to perform in the past and assist humans in their daily work. What’s more, popular applications such as defect detection can now be performed faster and more accurately than people can manage. The application of deep learning in areas such as autonomous driving, voice assistants, facial recognition, medical diagnosis and photography is considered the key to a new technological revolution.
What Are Some Popular Examples of Deep Learning Applications in Recent Years?
Although we may not realize it, deep learning has already become embedded in our lives. We have grown accustomed to voice assistants that speak fluently, enjoyed the convenience of autonomous driving technology, and been amazed by the image enhancement algorithms in our smartphones, which can turn a photo of a dim cityscape into a beautiful night scene. These innovations keep inviting us to imagine the brand new world that is possible in the future.

A few years ago, many of the world’s leading companies started investing considerable resources into the application of artificial intelligence, and after years of hard work deep learning technology is finally proving to have a huge commercial value for them. Here are three current mainstream examples of deep learning applications:
Image Recognition Technology
Image recognition technology imports a large image dataset and uses the vector data parsed from each image (such as pixel values and their arrangement) to train deep learning models to identify specific pictures or objects in each image. The principle is that when the computer later encounters a similar object, that object’s data will fall within a reasonable range of the parameters of similar objects seen in the training data. Within a certain margin of error, the computer can then classify any new object it encounters, such as cars, traffic lights or storefronts. For example, the verification checks on webpage forms that ask people to click on specific objects within a picture, and the companies that hire part-time workers to manually click on and classify objects within images, are both methods of labeling images for use in deep learning model training.
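As a rough sketch of how an image is turned into pixel data and then classified, here is a hedged example using PyTorch and torchvision; the file name, class labels and the small untrained network are hypothetical stand-ins for a model that would normally be trained on a large labeled dataset first.

```python
import torch
import torch.nn as nn
from PIL import Image
from torchvision import transforms

# Hypothetical class labels used only for illustration.
CLASSES = ["car", "traffic light", "storefront"]

# Turn an image file into the vector data (pixel values) a model can analyze.
preprocess = transforms.Compose([
    transforms.Resize((64, 64)),
    transforms.ToTensor(),        # pixel brightness/color values scaled to 0..1
])

# A small, untrained convolutional model; a real system would train it on a
# large labeled image dataset first, as described above.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, len(CLASSES)),
)

image = Image.open("street_photo.jpg").convert("RGB")   # hypothetical input file
pixels = preprocess(image).unsqueeze(0)                 # shape: (1, 3, 64, 64)

with torch.no_grad():
    best = model(pixels).argmax(dim=1).item()
print(f"Predicted object: {CLASSES[best]}")
```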

On this basis, many applications have been developed from image recognition technology. Autonomous driving is one of the most widely known, and it has set off a fierce arms race between NVIDIA, Tesla, Waymo, Intel’s Mobileye and other companies, all of which have invested large amounts of capital into research and development in the hope of capturing a traditional automotive market worth hundreds of billions of dollars annually.

Familiar smartphone camera features also rely on a considerable amount of deep learning-based image recognition technology. Many brands now advertise an “AI camera” feature, claiming it can recognize thousands of photographic scenes and optimize each shot so that users can easily take clear and beautiful photos.

This technology works by detecting the scene through the smartphone’s lens and then using a pre-installed algorithm to identify which scene mode should be used and adjust the picture accordingly. Although it is impossible for the smartphone to conduct “training”, it is still possible for it to perform “inferencing”. The phone manufacturer therefore installs a pre-trained image recognition model onto the smartphone, which only requires the phone’s central processing unit (CPU) or a dedicated image processing chip to run.
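A hedged sketch of what this on-device inferencing step looks like, using TensorFlow Lite’s Python interpreter as a stand-in for a phone vendor’s mobile runtime; the model file and scene labels are hypothetical, and on a real phone the camera frame would come from the image pipeline rather than random data.

```python
import numpy as np
import tensorflow as tf

# Hypothetical scene labels the pre-trained model was taught to recognize.
SCENES = ["night scene", "portrait", "food", "landscape"]

# Load a model that was trained elsewhere; the device only performs inference.
interpreter = tf.lite.Interpreter(model_path="scene_classifier.tflite")  # hypothetical file
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Fake a camera frame with random pixel data of the expected shape
# (assuming a float-input model for this illustration).
frame = np.random.rand(*input_details[0]["shape"]).astype(np.float32)

interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]["index"])[0]

print("Detected scene:", SCENES[int(np.argmax(scores))])
```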

Another mature but still relatively unknown application is image and video filtering. As social media platforms such as YouTube, Facebook and Twitter have become part of our daily lives, content that is indecent, objectionable or infringes on copyright must be removed quickly from the huge number of images and videos that are constantly uploaded.

These companies use deep learning-based image recognition technology to detect whether illegal or inappropriate content has been uploaded to their platforms. For example, YouTube and Facebook can automatically screen the content of published advertisements to prevent the promotion of fraud or scams, and to reduce the occupational harm, such as mental trauma, that human content moderators can suffer.

Learn More:
《Solution: An Autonomous Vehicles Network with 5G URLLC Technology》
Natural Language Processing
Google’s Voice Assistant, Apple’s Siri and Amazon’s Alexa can now clearly recognize our voice commands and conversations by relying on a branch of deep learning technology called “Natural Language Processing (NLP)”. Using a large amount of audio paired with text, NLP turns analog signals such as sound wavelength, voice segmentation and speech intonation into digital data for analysis and training, allowing the program to recognize human speech and grammatical structure.

Whenever we speak into our phone, the screen displays the corresponding text. What actually happens is that when the device detects audio input matching the parameters of a deep learning model, it compares the input with its training results, classifies the audio, and then extracts each corresponding word from a database, assembling pre-set responses one by one to create the smooth voice assistant dialogue in which deep learning technology plays a key role.
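The front half of that pipeline, turning an analog recording into the digital features a speech model analyzes, can be sketched as follows. This assumes the librosa audio library is installed; the file name is hypothetical, and the final recognition step is replaced by a placeholder where a pre-trained speech model would sit.

```python
import librosa

# Load a voice recording and resample it to 16 kHz; this converts the analog
# sound wave into a digital array of amplitude values.
waveform, sample_rate = librosa.load("voice_command.wav", sr=16000)  # hypothetical file

# Turn the waveform into a mel spectrogram: the frequency/intonation features
# a speech recognition model is typically trained on.
features = librosa.feature.melspectrogram(y=waveform, sr=sample_rate, n_mels=80)
log_features = librosa.power_to_db(features)

# A real assistant would now feed these features into a trained speech model
# and look up the matching words; a placeholder stands in for that step here.
def recognize(spectrogram):                  # hypothetical pre-trained model wrapper
    return "turn on the living room lights"  # illustrative transcription only

print("You said:", recognize(log_features))
```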
Recommender Systems
When using Spotify or Netflix, we often marvel at the accuracy of their recommender systems, which always seem to list our favorite music and TV shows on screen, ready for immediate enjoyment. YouTube likewise recommends different video content according to each user’s individual preferences.

In addition, the world's largest e-commerce company, Amazon, can accurately recommend products to its customers, thereby increasing online transaction rates. How are these recommender systems able to understand each user's preferences ever more accurately? The secret is deep learning-based technology.

Although Collaborative Filtering (CF) has long been used as an effective recommendation technique, the rating data it relies on can become too sparse or diluted in many situations, greatly reducing accuracy. Modern recommender systems therefore add deep learning algorithms so that recommendations are not based solely on single signals such as a user’s review score, click-through rate, page view duration or song attributes.

So how do they work now? The recommender systems of these companies also incorporate NLP models like the ones mentioned above, which crawl a large number of blogs and websites about music, evaluate the adjectives used in online reviews or comments about a particular TV show or song, and check whether other artists are mentioned alongside it. These keywords are then fed into a deep learning model and assigned different weighted scores through training, in order to better capture each user’s true preferences.

However, what if a piece of music is uncommon or unpopular? If there is not enough sample data from online information and user preferences, Spotify will instead evaluate and summarize its genre by analyzing the audio of the song itself, such as its rhythm, treble, loudness and channel count, and then compare it with samples of similar musical styles in order to make a recommendation to the user.
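A highly simplified sketch of this content-based fallback, assuming each song has already been summarized into a small numeric feature vector; the songs and feature values below are invented purely for illustration.

```python
import numpy as np

# Hypothetical audio feature vectors: [tempo, loudness, brightness, energy],
# each scaled to 0..1. A real system would extract these from the audio itself.
catalog = {
    "Obscure Jazz Track":   np.array([0.45, 0.30, 0.60, 0.40]),
    "Popular Pop Song":     np.array([0.80, 0.70, 0.75, 0.90]),
    "Ambient Instrumental": np.array([0.20, 0.25, 0.50, 0.15]),
}

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# The song the user just listened to, for which little usage data exists.
query = np.array([0.50, 0.35, 0.55, 0.45])

# Recommend the catalog entry whose audio profile is most similar.
best = max(catalog, key=lambda name: cosine_similarity(query, catalog[name]))
print("Because you listened to an unpopular track, try:", best)
```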

Deep learning technology will play a very important role in the era of online shopping, online advertising and the subscription economy. The better user preferences can be understood, the more precisely marketing can be targeted toward specific groups of consumers, increasing value and customer engagement.
The Challenges with Developing Deep Learning Technologies
After so many successful examples, why hasn’t Artificial Intelligence already been adopted by more mainstream enterprises? In fact, although AI has been in development for many decades, it has only become commercially valuable within the past three years, and due to the high cost and limited pool of technical expertise, only large companies with deep reserves of capital have so far been able to invest in developing AI technology. What’s more, training a deep learning model first requires the accumulation of a very large dataset, which also needs to be “clean and consistent” before it can be used.

Enterprises must prepare their own dataset, which can be very costly, since it can mean hiring a large number of employees to collect and organize the data needed to build a database for deep learning. It is worth mentioning that many machine learning development tools have been released for free by the AI companies that developed them, as open source libraries, frameworks and learning resources; even so, extracting the required information from a huge pool of data demands a high degree of technical proficiency on top of accumulating that data in the first place.

There’s another concern when an enterprise wants to begin developing deep learning technology: it needs to consider not only dataset collection and software stack integration, but also the compatibility and performance of its hardware. If the enterprise is unable to build its own hardware environment, it must purchase expensive cloud computing services from AWS or Google Cloud, the price of which varies greatly depending on the region, the hardware required, the complexity of the deep learning model and the network bandwidth.

Even if the most basic cloud computing configuration is adopted for a single deep learning model, with additional GPU processing added to shorten training time, a company would need to spend at least a few thousand dollars a month. A more complex deep learning model requires the purchase of even more cores and memory, and an even more common situation is to train several different deep learning models at once, multiplying this cost dozens of times and making it difficult for many small or medium enterprises to bear.

There is another option: enterprises can install their own GPU servers to train deep learning models, and most companies in Taiwan already choose to build and operate their own data centers for research and development. In addition to suppliers of high performance computing hardware, there are also many system integrators on the market that provide integrated deep learning solutions their enterprise customers can simply buy and start using immediately.

For example, GIGABYTE has collaborated with Taiwan’s Industrial Technology Research Institute (ITRI), integrating GIGABYTE’s GPU hardware solutions with ITRI’s DNN Training System software stack to launch the DNN Training Appliance, an integrated hardware and software solution for deep learning. The solution uses GIGABYTE’s G481-HA1 Deep Learning server as its hardware base, which adopts a single PCIe root system architecture to control multiple GPUs via a single CPU. 《Recommended for You: More Information About GIGABYTE’s G-Series GPU Server Products》

Deep learning uses extremely large datasets to train models, requiring a huge amount of GPU computing power, and extremely frequent GPU-to-GPU communication is needed to exchange weighted values during the training process. This is where the advantages of a single root system architecture become apparent: all GPUs can communicate via the same CPU, greatly reducing transmission latency by minimizing the amount of data that needs to be transferred between CPUs, and therefore further decreasing the training time required for the deep learning workload.
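As a hedged illustration of why this GPU-to-GPU traffic matters, here is one common way to spread a PyTorch training step across all GPUs in a single server; the model and data are illustrative, and the underlying gradient exchange is exactly the traffic that benefits from a single-root PCIe topology and from communication libraries such as NCCL.

```python
import torch
import torch.nn as nn

# A minimal model; during multi-GPU training its weights (and their gradients)
# must be kept in sync across every GPU on the server.
model = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(), nn.Linear(512, 10))

if torch.cuda.device_count() > 1:
    # DataParallel splits each batch across all visible GPUs and gathers the
    # results; the gradient exchange this requires is the GPU-to-GPU traffic
    # that a single-root PCIe topology is designed to speed up.
    model = nn.DataParallel(model)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

inputs = torch.randn(256, 1024, device=device)
labels = torch.randint(0, 10, (256,), device=device)

loss = nn.CrossEntropyLoss()(model(inputs), labels)
loss.backward()   # gradients computed on every GPU are combined here
```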
Deep Learning Hardware & Software Architecture
Take the deep learning solution stack above as an example. At the base of the stack is GIGABYTE’s G481-HA1 deep learning server, which incorporates hardware components such as a CPU (Central Processing Unit), GPUs (Graphics Processing Units) and RAM (memory). The server has also been installed with Ubuntu OS and NVIDIA GPU drivers to form the underlying operating system environment. To perform training, however, enterprises also need deep learning frameworks and libraries.

What are deep learning frameworks? Since designing a deep learning model from scratch is very difficult, developers do not rewrite code every time, but instead use existing frameworks and libraries to build deep learning models more efficiently. Publicly available frameworks are modularized, similar in concept to Lego bricks: engineers do not need to redesign models, but can combine publicly released models designed by others and fine-tune them according to their own needs, just as Lego bricks are stacked together to form the desired structure. TensorFlow, Caffe and PyTorch are examples of the most popular deep learning frameworks today.

In addition, frameworks rely on libraries for GPU acceleration to improve training efficiency. A library is a collection of sub-routines for software development and an important component of any training setup. A library is not a stand-alone executable program, but a body of code that developers can call from their own programs. Because this code exposes a pre-tuned, standard way of performing a task (such as accelerating deep learning operations), developers can improve the training performance of deep learning models simply by calling the appropriate routines from the library. Well-known libraries include DALI, NCCL and cuDNN.
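For instance, a framework user never calls cuDNN directly; asking the framework to hand convolution work to cuDNN's fastest algorithms is typically a one-line setting, as in this hedged PyTorch example.

```python
import torch

# Frameworks call into GPU libraries such as cuDNN on the developer's behalf.
# This flag lets cuDNN benchmark its convolution algorithms and pick the fastest
# one for the model's input sizes, a typical example of "referencing" a library.
torch.backends.cudnn.benchmark = True

print("cuDNN available:", torch.backends.cudnn.is_available())
print("cuDNN version:  ", torch.backends.cudnn.version())
```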

GIGABYTE’s DNN Training Appliance integrates the software and hardware stacks mentioned above to provide a turnkey deep learning development environment that can be used immediately. Enterprises no longer need to perform complicated hardware integration, compatibility testing or software optimization, and can instead focus on developing their deep learning applications.
GIGABYTE’s DNN Training Appliance Provides a User Friendly Development Environment
ITRI’s DNN Training system also includes many convenient management tools, providing GIGABYTE’s DNN Training Appliance with a simple to use deep learning development environment. For example, it supports a database management tool that can automatically convert datasets into formats compatible with deep learning models, and features a Graphical User Interface (GUI) that allows developers to easily manage the content of training datasets, visualize and edit the structure of deep learning models, adjust model hyperparameters, analyze model training results and perform version control.

GIGABYTE’s DNN Training Appliance also supports one-click hyperparameter tuning of deep learning models. This feature automatically adjusts hyperparameter settings to improve training accuracy, immediately improving training efficiency instead of relying on trial and error to adjust each hyperparameter manually, and so reducing time and labor costs. The result is a fully optimized deep learning solution for your enterprise: after purchase, the customer simply needs to import their data and perform some fine-tuning before they can quickly begin developing their deep learning applications.
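The exact tooling inside the DNN Training Appliance is proprietary, but the concept of automated hyperparameter tuning can be sketched generically: try several candidate settings, score each on held-out data, and keep the best, instead of adjusting values by hand. A minimal, purely illustrative PyTorch version follows; the data, model and candidate learning rates are assumptions for the example.

```python
import torch
import torch.nn as nn

# Illustrative data split into training and validation sets.
x_train, y_train = torch.randn(512, 20), torch.randint(0, 3, (512,))
x_val, y_val = torch.randn(128, 20), torch.randint(0, 3, (128,))

def train_and_score(lr):
    """Train a small model with one hyperparameter setting, return validation accuracy."""
    model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 3))
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(50):
        optimizer.zero_grad()
        loss_fn(model(x_train), y_train).backward()
        optimizer.step()
    with torch.no_grad():
        accuracy = (model(x_val).argmax(dim=1) == y_val).float().mean().item()
    return accuracy

# Automated search: try each candidate setting and keep the best performer,
# instead of adjusting hyperparameters manually by trial and error.
candidates = [0.001, 0.01, 0.1, 1.0]
best_lr = max(candidates, key=train_and_score)
print("Best learning rate found:", best_lr)
```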

Generally speaking, most companies need to continuously experiment with the deep learning process, accumulating knowledge and experience in order to understand which particular business problems can be solved using deep learning, and then understand how to use specific deep learning technologies to solve these problems. This type of integrated hardware and software solution therefore offers great advantages in terms of lowering the technical threshold, computing cost and training time of deep learning.

With GIGABYTE’s DNN Training Appliance, your enterprise does not need to use expensive public cloud computing services or purchase, integrate and configure hardware and software from scratch. This kind of solution is a complete computing package: to conduct research and development of deep learning technology, users need only purchase GIGABYTE’s DNN Training Appliance. An integrated solution can help your company considerably reduce development costs and remove the technical bottlenecks of deep learning. 《Learn More: Understand More about GIGABYTE’s DNN Training Appliance》
Mature Deep Learning Solutions Will Allow Major Companies to Quickly Implement Applications
Although it is still very difficult for small and medium-sized enterprises to implement deep learning technologies, the public availability of trained deep learning models from major technology companies will allow AI to be widely used in many areas of our daily lives in the future. For example, autonomous driving technology from NVIDIA, Intel and Waymo is already available on the market, allowing even traditional automotive manufacturers without a technical background to integrate advanced self-driving features into their own vehicles.

Another example is Qualcomm’s Snapdragon series of mobile phone processors, which are now available with a number of pre-installed AI algorithms. Even if the cell phone vendor does not have any background in developing image recognition algorithms, it’s now possible for them to purchase public versions of these algorithms.

As the threshold of implementing Artificial Intelligence is lowered, companies can now not only purchase pre-trained deep learning models but also fine-tune them for their own needs, combining the same frameworks together with different datasets and settings to train customized deep learning models.
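A minimal sketch of this kind of fine-tuning in PyTorch/torchvision, assuming a recent torchvision version that accepts a weights argument; the number of custom classes is hypothetical.

```python
import torch.nn as nn
from torchvision import models

# Download a publicly released, pre-trained image recognition model.
model = models.resnet18(weights="DEFAULT")

# Freeze the pre-trained layers so the knowledge already learned is reused as-is.
for parameter in model.parameters():
    parameter.requires_grad = False

# Replace only the final layer so the model predicts our own categories,
# then train just this new layer on the company's own dataset.
NUM_CUSTOM_CLASSES = 5   # hypothetical number of categories
model.fc = nn.Linear(model.fc.in_features, NUM_CUSTOM_CLASSES)
```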

Since AI technologies are predicted to be widely used across industries in the future, it is essential for any enterprise that wishes to stay at the forefront of its industry to get a head start on researching and developing AI-related applications. Then, once the technology has matured, they will be able to quickly deploy their deep learning know-how to strengthen the competitiveness of their products.

The real advantage of deep learning lies in “automation”, which can rapidly enhance the competitiveness of a product and makes it a highly scalable technology. Since each industry will adopt it at a different speed, every enterprise should try to master this new technology trend as soon as possible; otherwise, they may suddenly discover that their product's competitiveness has been lost. For example, in 2019 the cyber security industry adopted deep learning technology to detect potential virus threats, allowing the entire industry to become more competitive.

Like most new technologies, deep learning will surely find its way into our lives as it matures, helping humans complete tedious daily tasks more efficiently and giving us back more of our precious time to invest in higher value work.