ABSTRACT

Hebbian theory is often summarized as “cells that fire together wire together.” Hebb’s rule attempts to explain associative (Hebbian) learning, in which the simultaneous activation of cells leads to pronounced increases in the synaptic strength between them. Because learning in a neural network resides in its weights, an artificial neural network (ANN) should be able to learn more complicated functions as the number of weights increases. Networks are trained with backpropagation, with resilient backpropagation (with or without weight backtracking), or with a modified, globally convergent variant of the latter. After training, an ANN maps input data to the desired outputs. An ANN model consists of an input layer, one or more hidden layers, and an output layer; each layer contains nodes, the connections between nodes carry the weights, and each node applies an activation function to its weighted inputs. The ANNs considered here are directed acyclic, or feedforward, networks (vanilla ANNs).
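Two sketches may make these claims concrete. Under a common formalization (an assumption here; the abstract itself gives no equation), Hebb’s rule increments the weight between a presynaptic neuron $i$ and a postsynaptic neuron $j$ in proportion to their joint activity,

$$\Delta w_{ij} = \eta \, x_i \, y_j,$$

where $\eta$ is a learning rate and $x_i$, $y_j$ are the two activations. Likewise, a minimal feedforward ANN trained by plain backpropagation can be written in a few lines; the layer sizes, sigmoid activations, learning rate, and XOR task below are illustrative assumptions, not a setup taken from the paper:

```python
# A minimal sketch, not the paper's implementation: a feedforward ANN with one
# hidden layer, trained by plain backpropagation on XOR. All hyperparameters
# (4 hidden units, sigmoid activations, learning rate 0.5) are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR inputs and targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases: 2 inputs -> 4 hidden units -> 1 output
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))
lr = 0.5

for _ in range(10000):
    # Forward pass through the directed acyclic (feedforward) graph
    h = sigmoid(X @ W1 + b1)      # hidden-layer activations
    out = sigmoid(h @ W2 + b2)    # output-layer activations

    # Backward pass: squared-error loss; sigmoid'(z) = a * (1 - a)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates of the weights, where the learning resides
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```

The hidden layer lets the network separate XOR, a task a single layer of weights cannot learn, which illustrates why adding weights (and layers) extends what an ANN can represent.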