Artificial Intelligence

  • What is it?
    Artificial Intelligence (AI) is a broad branch of computer science. Its goal is to create machines that can function intelligently and independently, and that can work and react in much the same way as humans. To build these abilities, machines and the software and applications that power them need to derive their intelligence much as humans do: by retaining information and becoming smarter over time.

    AI is not a new concept; the idea has been discussed since the 1950s. However, it has only recently become technically feasible to develop and deploy AI in the real world, thanks to advances in technology: we can now collect and store the huge amounts of data that machine learning requires, and rapid increases in processing speeds and computing capabilities make it possible to process that data to train a machine or application and make it "smarter".

  • Why do you need it?
    Although we tend to associate AI with the image of a self-aware robot that can move, act and think just like a human being (courtesy of countless science fiction films), you may already be using AI more than you realize. For example, YouTube and Netflix rely on AI to recommend videos to users, classify content and censor inappropriate material, while speech recognition and language translation platforms such as Amazon Alexa and Google Translate use AI to better understand real-world speech and produce translations. As users interact with these applications, they become smarter by remembering user behavior and reactions. AI will also be a key enabler of many technologies that are on the verge of mainstream deployment, such as autonomous driving or flying drones used for package delivery. For these applications, AI matters because it lets a system make decisions independently in real time based on real-world data, and learn from this data and from user and environmental feedback to become more accurate over time.

  • How is GIGABYTE helpful?
    Currently, one of the most widely adopted methods of developing artificial intelligence in machines and applications is machine learning, together with its advanced variant, Deep Learning, which uses Deep Neural Network (DNN) models: complex algorithms similar in structure and function to the human brain. Deep Learning requires not only a large amount of data (which can be stored and processed with GIGABYTE's Storage Servers and/or High Density Servers), but also massive parallel computing power to train an algorithm on this data. GIGABYTE's GPU Servers (such as the G481-S80 or G291-280) are ideal for this task; a minimal sketch of what such a training job looks like appears at the end of this section.

    GIGABYTE has also developed a DNN Training Appliance, a fully integrated software and hardware stack built on our G481-HA1 server that provides hassle-free setup, management and monitoring of a machine learning environment, and includes hardware and software optimizations that reduce the time required for DNN training jobs and improve their accuracy.
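
    To make "training a DNN" more concrete, below is a minimal, illustrative sketch in Python using the PyTorch framework. The framework choice, the network shape and the randomly generated stand-in data are assumptions made purely for illustration and are not part of GIGABYTE's products; on a GPU server, the same loop simply runs with the model and data moved onto the GPUs.

    import torch
    import torch.nn as nn

    # Use a GPU if one is available; this kind of training is what GPU servers accelerate.
    device = "cuda" if torch.cuda.is_available() else "cpu"

    # A small feed-forward deep neural network: stacked layers of weighted
    # connections, loosely analogous to the layered structure described above.
    model = nn.Sequential(
        nn.Linear(64, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, 10),
    ).to(device)

    # Stand-in training data (hypothetical): 10,000 samples with 64 features each,
    # labeled with one of 10 classes. Real workloads use far larger datasets.
    inputs = torch.randn(10_000, 64)
    labels = torch.randint(0, 10, (10_000,))

    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    # Training loop: show the data to the network repeatedly, measure its error,
    # and adjust the weights so the model gradually becomes more accurate.
    for epoch in range(5):
        for start in range(0, len(inputs), 256):
            x = inputs[start:start + 256].to(device)
            y = labels[start:start + 256].to(device)
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()   # compute gradients (the parallel-compute-heavy step)
            optimizer.step()  # update the weights
        print(f"epoch {epoch}: loss {loss.item():.4f}")

    In practice, frameworks such as PyTorch can also split each batch across several GPUs at once (data parallelism), which is why multi-GPU systems shorten training times on large datasets.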