ABSTRACT

Neural networks are among the oldest and most widely used machine learning techniques, with a lineage dating back to at least the 1950s. Inputs are supplied to a multilayer perceptron (MLP) by setting the values that represent the levels of excitation of its input nodes. These values are typically bounded to lie in the range zero to one, although the range minus one to plus one also works well with MLPs. The hidden nodes of an MLP are responsible for its ability to learn complex nonlinear relationships, and the more hidden nodes a network has, the more complex the relationships it can learn. Unfortunately, increasing the number of hidden nodes also comes at the cost of increased training time and, as with increasing the number of inputs, an increased risk of overfitting. Training an MLP usually involves repeatedly iterating through a set of training examples, each of which pairs a set of inputs with target outputs.
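
The following is a minimal sketch, not taken from the paper, of the ideas summarised above: inputs bounded in the range zero to one, a single hidden layer supplying the nonlinearity, and training by repeated iteration over paired inputs and target outputs. The data, layer sizes, learning rate, and epoch count are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training examples: each pairs an input vector (values in [0, 1]) with a target output.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])  # inputs
T = np.array([[0.0], [1.0], [1.0], [0.0]])                      # targets (XOR)

n_hidden = 4  # more hidden nodes -> more complex relationships, but longer training
W1 = rng.normal(0.0, 0.5, (2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(10_000):            # repeatedly iterate through the training set
    # Forward pass
    h = sigmoid(X @ W1 + b1)           # hidden-layer activations
    y = sigmoid(h @ W2 + b2)           # network outputs

    # Backward pass: gradients of the mean squared error
    dy = (y - T) * y * (1 - y)
    dh = (dy @ W2.T) * h * (1 - h)

    # Gradient-descent updates
    W2 -= lr * h.T @ dy; b2 -= lr * dy.sum(axis=0)
    W1 -= lr * X.T @ dh; b1 -= lr * dh.sum(axis=0)

print(np.round(y, 2))  # outputs should approach the targets after training
```

With enough hidden nodes and iterations, the outputs converge toward the targets; a single-layer network without hidden nodes could not learn this nonlinear mapping.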