ABSTRACT

Artificial neural networks (ANNs) are at the root of many state-of-the-art deep learning algorithms. The term feed-forward is used because the processing units are connected in such a way that their connections go only one way, from the input to the output. This chapter focusses on this type of network, although it is worth noting that there are networks that involve feedback paths in the data processing flow and are thus not merely feed-forward. It introduces feed-forward multilayer neural networks (MLNNs) by reviewing key historical developments and illustrating the basic concepts. It also reviews some "tricks" that were commonly used during the "second wave" of ANNs, which started roughly in the mid-1980s. The chapter briefly presents the development of the multilayer perceptron (MLP). It should be evident from the discussion that two key problems are of primary concern in a neural network approach: designing the network architecture and designing a learning algorithm.