ABSTRACT

This chapter provides an introduction to the basic theory of artificial neural networks (ANNs). It focuses on four basic architectures: Multi-Layer Perceptron, Radial-Basis Function Networks, Kohonen Networks, and Reinforcement Learning Networks. The first three are probably the most widely used networks in engineering applications; the fourth is included because of its importance and its connection with stochastic learning automata. There is a vast array of possible ANN architectures and learning algorithms. Like biological neural networks, ANNs have "neurons" and "synaptic connections" that are highly simplified abstractions of their counterparts in real neural networks. Among existing ANNs, the Multi-Layer Perceptron (MLP) trained with the back-propagation learning algorithm is one of the most widely used networks because of its simplicity and powerful representation ability. Unlike the single-layer perceptron, MLP networks can approximate complicated non-linear functions because of their additional hidden layers. The chapter also focuses on network selection for pattern classification and non-linear system modelling problems.
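
As a brief illustration of the MLP and back-propagation ideas mentioned above, the following is a minimal sketch (not taken from the chapter) of a one-hidden-layer perceptron trained by back-propagation on the XOR problem; the layer sizes, learning rate, and task are illustrative assumptions only.

```python
# Minimal sketch of a one-hidden-layer MLP trained with back-propagation.
# Network size, learning rate, and the XOR task are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# XOR data: 4 samples, 2 inputs, 1 target output each
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases: hidden layer (2 -> 4) and output layer (4 -> 1)
W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for epoch in range(5000):
    # Forward pass through hidden and output layers
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Back-propagate the squared-error gradient
    d_out = (out - y) * out * (1 - out)   # output-layer delta
    d_h = (d_out @ W2.T) * h * (1 - h)    # hidden-layer delta

    # Gradient-descent weight updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 3))  # outputs approach [0, 1, 1, 0]
```

A single-layer perceptron cannot solve XOR, so this small example also illustrates why the hidden layer extends the representational power of the network.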