ABSTRACT

This chapter considers the hardware, algorithms, data, applications, and economic effects of machine learning, and draws lessons from history about the pace at which new technologies are commercialized and brought to their full potential. The current machine-learning landscape is dominated by two types of processor: the ever-adaptable CPU, which powers inference in most machine-learning applications today, and the GPU, whose architecture has won it massive market share as a platform for training deep neural networks. The training of neural networks relies heavily on gradient descent, and floating-point numbers are a prerequisite for the gradient estimates that drive model updates. Conventional software has a perfect memory with stunningly reliable data storage, but it suffers from the limitation that it can only perform operations simple enough for human engineers to reduce to code. Bayesian machine learning is currently something of a backwater: it doesn't directly power the applications that are making the headlines.
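As a minimal sketch of the update rule alluded to above (illustrative only; the toy loss, learning rate, and step count below are assumptions, not taken from the chapter), a single gradient-descent step computes w <- w - eta * dL/dw, where both the learning rate eta and the gradient are fractional quantities naturally represented in floating point:

    # Minimal gradient-descent sketch on the toy loss L(w) = (w - 3)^2.
    # The loss, learning rate, and step count are illustrative assumptions.
    def grad(w):
        return 2.0 * (w - 3.0)  # dL/dw for L(w) = (w - 3)^2

    w, eta = 0.0, 0.1           # initial parameter and learning rate
    for _ in range(50):
        w -= eta * grad(w)      # update: w <- w - eta * dL/dw
    print(round(w, 4))          # approaches the minimiser w = 3.0

Because eta * grad(w) is typically a small fraction, integer arithmetic would round most updates to zero; this is why floating point is described as a prerequisite for gradient-based training.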