ABSTRACT

The chapter gives a brief and simple overview of Artificial Intelligence (AI). AI was first named and identified in 1955 in the US, but has a mathematical heritage going back at least as far as the 1930s, and arguably to the nineteenth century. Key categories of AI are Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI). This book is concerned with ANI, the only one that currently exists. The global economic significance of AI is substantial and growing. Machine Learning uses statistical models, such as regression, to detect patterns in data for decision making. Supervised learning uses labelled datasets to enable the categorisation of new data. Reinforcement learning uses external success measures (e.g., the score on a video game) to allow a system to train itself towards improvement. Unsupervised learning gathers unlabelled data into statistical groups which can then be named and exploited by humans. Self-supervised learning is a branch of unsupervised learning which uses findings from one part of a dataset to predict other parts of the dataset, and is increasingly being used in the media. Neural Networks (mathematical models notionally based on the human brain) use long sequences of calculations to reach solutions to detailed problems. When such networks are built with many layers, the approach is known as Deep Learning, currently the most accurate method for solving many complex problems. Computer Vision uses Neural Networks to classify images as a whole (image classification) or to find objects within an image (object detection for multiple objects, object localisation for a single object). Generative Adversarial Networks (GANs) pair two neural networks, the ‘generator’ and the ‘discriminator’, which compete with each other to create and detect fake content until that content becomes good enough to pass for the real thing. GANs are used in content generation because their competitive structure promotes verisimilitude in the eventual product. Natural Language Processing uses deep learning for the deconstruction, comprehension and replication of text, and is producing some of the most exciting ‘creative’ tools in AI in the media, including text generation and translation. Many of these systems derive from so-called ‘foundational’ AI models, which are trained on vast amounts of data and can have up to half a trillion parameters. AI’s technical and ethical limitations are also discussed.