ABSTRACT

Within the framework of the McCulloch-Pitts neuron model, we present a simple, biologically meaningful model in which each neuron is capable of generating chaos that can be learned through a collective Hebbian interaction among neurons. To reduce complexity, we consider three variants of the neuron model: (1) an internal-threshold dynamic model with three degrees of freedom per neuron, (2) a smooth N-shaped input-output mapping based on a cubic polynomial, and (3) a piecewise negative-sigmoidal mapping. We derive an asymptotic bifurcation route to chaos in which Feigenbaum metric universality is generalized to the antisymmetric cubic polynomial function. Numerically, we investigate the Hebbian-learning information-processing capability of artificial neural networks (ANNs) consisting of large collections of such neuron models. Snapshots of several hundred thousand neuronal outputs in a single layer, called neural images, are generated to illustrate graphically the iterative neurodynamics under a global broadcast with no delay in global communication. The fixed-point attractor dynamics, based on the Hebbian learning rule for the synaptic weight matrix among all chaotic neurons, generates a mean-field iteration-feedback baseline from the other neurons, which reveals a spatially coherent neural image as the information content. In the case of N-shaped sigmoidal neurons, the resulting neural images suggest psychological phenomena such as misperception, perceptual habituation or adaptation, novelty detection, and noise-generated hallucination. To achieve the exponentially fast pattern recognition inherited from iterative-map chaos, a massively parallel design of a chaos ANN chip is suggested, and designs toward chaos chips without inductance elements are discussed.
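The two mechanisms sketched above, an antisymmetric cubic map per neuron and an outer-product Hebbian weight matrix supplying a mean-field feedback baseline, can be illustrated as follows. This is a minimal sketch, not the paper's exact model: the specific map f(x) = a x^3 + (1 - a) x, the parameter values, and the Hopfield-style outer-product rule are illustrative assumptions.

```python
import numpy as np

def cubic_map(x, a):
    # Assumed antisymmetric (odd) cubic map f(x) = a*x^3 + (1-a)*x on [-1, 1].
    # At a = 4 it reduces to the Chebyshev polynomial T3(x) = 4x^3 - 3x,
    # which is fully chaotic on [-1, 1].
    return a * x**3 + (1.0 - a) * x

def iterate(x0, a, n, discard=0):
    # Iterate the map, optionally discarding a transient first.
    x = x0
    for _ in range(discard):
        x = cubic_map(x, a)
    orbit = []
    for _ in range(n):
        x = cubic_map(x, a)
        orbit.append(x)
    return orbit

def hebbian_weights(patterns):
    # Outer-product Hebbian rule W = (1/N) * sum_mu xi_mu xi_mu^T with the
    # diagonal zeroed (no self-coupling); `patterns` is a (P, N) array of
    # stored +/-1 neural images.
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)
    return w

def mean_field(w, x):
    # Feedback baseline each chaotic neuron receives from all other neurons
    # under a delay-free global broadcast.
    return w @ x
```

For a just above the flip bifurcation of the origin at a = 2, the orbit settles onto a symmetric period-2 cycle (x alternating with -x), the first step of the period-doubling route; at a = 4 the iterates wander chaotically over [-1, 1].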