ABSTRACT

An artificial neural network, or simply a neural network, is composed of multiple nodes, each of which vaguely imitates a biological neuron of the brain. A node takes input data from other nodes, performs a simple operation on the data, and passes the result to other nodes. The nodes are interconnected via links, each associated with a weight. The weights of the links of a neural network are determined by training the network on a labeled data set. In this Chapter, the feedforward neural network and the backpropagation algorithm for training a neural network are presented. Subsequently, three different ways in which the backpropagation method can be applied to a training data set, namely stochastic gradient descent, batch gradient descent, and mini-batch gradient descent, are discussed. Finally, a discussion of how to avoid overfitting and how to select the hyper-parameters of a neural network is given. The Chapter concludes with a set of exercises and a neural networks project.
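As a rough orientation to the topics the Chapter covers, the sketch below (not the Chapter's own implementation) shows a small feedforward network with one hidden layer whose link weights are learned by backpropagation with mini-batch gradient descent. The layer sizes, learning rate, number of epochs, and the toy XOR data set are illustrative assumptions, not values taken from the text.

```python
# Minimal sketch: feedforward network trained by backpropagation with
# mini-batch gradient descent on a toy labeled data set (XOR).
# All hyper-parameter values below are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled data set: inputs X and target labels y.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights of the links (and biases), initialized at random.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))
b2 = np.zeros((1, 1))

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr, batch_size = 1.0, 2            # assumed learning rate and mini-batch size

for epoch in range(10000):
    order = rng.permutation(len(X))            # shuffle, then take mini-batches
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        xb, yb = X[idx], y[idx]

        # Forward pass: each node applies a simple operation to its inputs.
        h = sigmoid(xb @ W1 + b1)              # hidden-layer activations
        out = sigmoid(h @ W2 + b2)             # output-layer activation

        # Backpropagation: gradients of the squared error w.r.t. the weights.
        d_out = (out - yb) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)

        # Mini-batch gradient descent update of the link weights.
        W2 -= lr * (h.T @ d_out) / len(idx)
        b2 -= lr * d_out.sum(axis=0, keepdims=True) / len(idx)
        W1 -= lr * (xb.T @ d_h) / len(idx)
        b1 -= lr * d_h.sum(axis=0, keepdims=True) / len(idx)

# Predictions typically approach the XOR targets [0, 1, 1, 0] after training.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```

Setting the assumed batch size to 1 corresponds to stochastic gradient descent, and setting it to the full data set size corresponds to batch gradient descent; intermediate values give mini-batch gradient descent, the three variants discussed in the Chapter.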