ABSTRACT

The concept of systems that can learn was described by Nilsson in his book Learning Machines, where he summarized many developments of that time. The publication of the Minsky and Papert book slowed artificial neural network research, and the mathematical foundation of the back-propagation algorithm by Werbos went unnoticed. Neural networks can be trained efficiently only if they are transparent, so that small changes in weight values produce measurable changes at the neural outputs. A neuron receives maximum excitation when the input pattern and the weight vector are equal. The correlation learning rule is based on a principle similar to that of the Hebbian learning rule. The winner-take-all rule is a modification of the instar algorithm, in which weights are modified only for the neuron with the highest net value. B. Widrow and M. E. Hoff developed a supervised training algorithm that allows a neuron to be trained for a desired response.
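
As a concrete illustration of the learning rules named above, the following is a minimal Python sketch of the standard textbook forms of the correlation, winner-take-all, and Widrow-Hoff (LMS) weight updates. The function names, the learning constant c, and the variable names (x for the input pattern, w or W for weights, d for the desired response) are illustrative assumptions, not taken from the original text.

```python
import numpy as np

def correlation_update(w, x, d, c=0.1):
    # Correlation rule: structurally like Hebbian learning, but the
    # desired response d takes the place of the actual neuron output.
    return w + c * d * x

def wta_update(W, x, c=0.1):
    # Winner-take-all: compute net values for all neurons, then move
    # only the winning neuron's weight vector toward the input pattern.
    nets = W @ x
    winner = np.argmax(nets)          # neuron with the highest net value
    W = W.copy()
    W[winner] += c * (x - W[winner])  # instar-style update for the winner only
    return W

def widrow_hoff_update(w, x, d, c=0.1):
    # Widrow-Hoff (LMS) rule: adjust weights in proportion to the error
    # between the desired response d and the linear net value.
    net = w @ x
    return w + c * (d - net) * x
```

Note how the sketch reflects the relationships stated in the abstract: winner-take-all reuses the instar update but applies it to a single neuron, and the Widrow-Hoff rule is supervised because the correction is driven by the desired response d.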