
Artificial Intelligence (AI) Cheat Sheet

With AI’s rapid growth, it is easy to have missed some of the terminology behind the field of study, with its many subsets and different approaches. To help out, here’s a quick intro by DCD.


AI Overview

AI is the overarching term for intelligence demonstrated by machines, rather than the natural intelligence displayed by humans and other animals. It’s a broad phrase that can be used to describe anything from the Artificial General Intelligence systems that could one day have the intelligence of a human, to Artificial Narrow Intelligence that is only good at one task – such as chess, facial recognition, or translation.

Machine Learning

Within AI, the subset that you will hear the most about is machine learning. ML is when you give an algorithm as much sample data as possible – known as training data – and it uses that data to make predictions or decisions, without being explicitly programmed to perform the task.

An oft-cited example is with cats – give a machine learning system thousands of pictures of cats, and it will learn to spot what a cat looks like, without you having to define a cat for the system. However, with a ‘shallow’ ML system, you may have to spend time defining features by hand – the edges of the cat, for example – to help the system out, as sketched below.
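
To make the ‘shallow’ route concrete, here is a minimal sketch in Python using scikit-learn. The feature values and labels are invented for illustration, standing in for the kind of hand-defined features (edges and the like) mentioned above.

```python
# A minimal 'shallow' ML sketch using scikit-learn. The numbers below are
# invented: imagine each row is a hand-crafted summary of one image
# (say, edge density and ear-like corner count), with a label saying
# whether the image contains a cat.
from sklearn.linear_model import LogisticRegression

X_train = [
    [0.8, 0.9],  # strong cat-like features -> cat
    [0.7, 0.8],  # cat
    [0.2, 0.1],  # weak cat-like features -> not a cat
    [0.3, 0.2],  # not a cat
]
y_train = [1, 1, 0, 0]  # 1 = cat, 0 = not a cat

model = LogisticRegression()
model.fit(X_train, y_train)  # learn a decision rule from the training data

print(model.predict([[0.75, 0.85]]))  # unseen example -> [1], i.e. 'cat'
```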

Deep Learning

Unlike shallower forms of ML, the subset deep learning instead uses multiple layers of learning algorithms to understand data. Loosely based on the human brain, these artificial neural networks are more complex – so the first layers could work out the edges of a cat image, and later ones could focus on whiskers and eyes. This generally requires more data and more computing power than shallower approaches, but far less human intervention.
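
As a rough picture of what ‘multiple layers’ means, here is a small stack of layers defined in PyTorch. The 28x28 image size and the layer widths are arbitrary choices for the sketch, not anything prescribed by the technique.

```python
# A minimal multi-layer ('deep') network in PyTorch. Each layer feeds the
# next, so early layers can pick up simple features (edges) and later
# layers more abstract ones (whiskers, eyes).
import torch
from torch import nn

model = nn.Sequential(
    nn.Flatten(),             # 28x28 image -> 784-element vector
    nn.Linear(28 * 28, 128),  # first layer: low-level features
    nn.ReLU(),
    nn.Linear(128, 64),       # deeper layer: more abstract features
    nn.ReLU(),
    nn.Linear(64, 2),         # output: cat vs not-cat scores
)

scores = model(torch.randn(1, 28, 28))  # one random stand-in 'image'
print(scores.shape)                     # torch.Size([1, 2])
```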

Generative Adversarial Networks

This is one to watch. Introduced by Ian Goodfellow and his colleagues in 2014, GANs take two neural networks and make them compete against each other. One neural network, the generator, creates new data instances (say, made-up images of cats), and the other, the discriminator, decides whether the data passes muster, often by checking it against a set of real-world data (say, actual images of cats). Together, they produce highly realistic new data (again, a lifelike cat).
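
The adversarial loop can be sketched in a few lines of PyTorch. Everything here (the tiny fully connected networks, the random stand-in ‘real’ data, and the hyperparameters) is a simplifying assumption; a real image GAN would use convolutional networks and an actual photo dataset.

```python
# A condensed sketch of the GAN training loop in PyTorch.
import torch
from torch import nn

# Generator: random noise in, fake 'image' (784 pixel values) out.
generator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 784), nn.Sigmoid())
# Discriminator: 'image' in, probability that it is real out.
discriminator = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_images = torch.rand(32, 784)  # stand-in for a batch of real cat photos

for step in range(100):
    # 1. Train the discriminator: real images should score 1, fakes 0.
    fakes = generator(torch.randn(32, 16)).detach()
    d_loss = (loss_fn(discriminator(real_images), torch.ones(32, 1))
              + loss_fn(discriminator(fakes), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator: try to fool the discriminator into scoring 1.
    fakes = generator(torch.randn(32, 16))
    g_loss = loss_fn(discriminator(fakes), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```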

Processing

Depending on the complexity of the AI method used, and the size and type of dataset, different levels of processing power are required – put simply, the more complex the model, the more power needed. That’s why Nvidia takes a lot of credit for the current deep learning boom – the company’s GPUs were instrumental in making it possible to train deep learning workloads quickly. In 2009, using Nvidia GPUs, Andrew Ng (then at Stanford, later a co-founder of Google Brain) noted a speed-up of 100x over conventional methods, solidifying GPUs as the go-to AI accelerator for training.
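
To see how that plays out in practice, here is a small PyTorch snippet that moves the same matrix maths onto a GPU when one is available (the sizes are arbitrary). Frameworks make this a near one-line change, which is a big part of why GPUs became the default for training.

```python
# Running a layer's matrix maths on the GPU (if present) in PyTorch.
import torch
from torch import nn

device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Linear(1024, 1024).to(device)       # weights live on the GPU
batch = torch.randn(256, 1024, device=device)  # so does the data
output = model(batch)                          # the heavy maths runs there too
print(output.shape, device)
```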

Training

This is the process of actually creating the algorithm you need by using a deep learning framework. When training a neural network, data is fed into the first layer of the network and passed through to produce an output; the weights inside the network are then adjusted based on how correct or incorrect that output was, and the process repeats until the network performs well enough. GPUs lead this field, but the market has since been flooded with chips hoping to compete.
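
A bare-bones training loop, with random stand-in data, might look like the following in PyTorch; the shapes, learning rate, and number of passes are arbitrary choices for the sketch.

```python
# A minimal training loop in PyTorch: feed data in, compare the output
# with the correct labels, and nudge the weights to reduce the error.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

inputs = torch.randn(16, 4)          # 16 examples, 4 features each
labels = torch.randint(0, 2, (16,))  # correct answers: 1 = cat, 0 = not a cat

for epoch in range(20):
    outputs = model(inputs)          # forward pass through the layers
    loss = loss_fn(outputs, labels)  # how wrong was the network?
    optimizer.zero_grad()
    loss.backward()                  # work out how each weight contributed
    optimizer.step()                 # adjust the weights accordingly
```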

Inference

This is the comparatively simpler (and vastly more common) task of taking the previously trained machine learning algorithm and actually using it. So you train your algorithm with cat pictures to recognize cats, and then you show it a picture of a cat and it infers that it is indeed a picture of a cat.
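
In code, inference is just running new data through the already-trained network, with gradient tracking switched off. The architecture below mirrors the training sketch above, and the 'cat_classifier.pt' weights file is hypothetical.

```python
# A minimal inference sketch in PyTorch: load trained weights, ask about
# one new example, and read off the answer.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.load_state_dict(torch.load("cat_classifier.pt"))  # hypothetical saved weights
model.eval()                           # switch to inference mode

new_example = torch.randn(1, 4)        # one unseen input
with torch.no_grad():                  # no training, so no gradients needed
    scores = model(new_example)
    prediction = scores.argmax(dim=1)  # index of the highest score

print("cat" if prediction.item() == 1 else "not a cat")
```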