ABSTRACT

Current connectionist models oversimplify both the internal mechanisms of individual neurons and the communication between them. Although connectionist models offer significant advantages in certain respects, this oversimplification makes them inefficient at explicit symbolic processing, which has been shown to be essential to human intelligence. Our aim is a connectionist architecture capable of simple, flexible representation of high-level knowledge structures and efficient reasoning over such structures. We first propose a discrete neural network model in which each neuron carries a state variable ranging over an explicitly specified set of discrete states, instead of a continuous activation function. We then develop a technique for representing concepts in this network that uses the connections to define concepts and represents them in both verbal and compiled forms; its main advantage is that it handles variable bindings efficiently. Finally, we develop a reasoning scheme for the discrete neural network model that exploits the inherent parallelism of neural networks, performing all possible inference steps in parallel, and is implementable on a fine-grained massively parallel computer.
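The discrete-state formulation and the synchronous parallel update described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual model: `DiscreteNode`, `parallel_step`, and the toy firing rules are hypothetical names invented here to show how nodes with explicit discrete state sets (rather than continuous activations) can all be updated in one parallel sweep.

```python
from typing import Callable, Dict, List

# Hypothetical illustration: each node holds a state variable drawn from an
# explicit, discrete state set rather than a continuous activation value.
class DiscreteNode:
    def __init__(self, states: List[str], initial: str):
        assert initial in states
        self.states = states        # the explicitly specified discrete state set
        self.state = initial

# A synchronous update applies every node's transition rule to a snapshot of
# the current states, so all possible inference steps fire in parallel
# within one sweep, as a fine-grained parallel machine would execute them.
def parallel_step(nodes: Dict[str, DiscreteNode],
                  rules: Dict[str, Callable[[Dict[str, str]], str]]) -> None:
    snapshot = {name: n.state for name, n in nodes.items()}
    for name, node in nodes.items():
        new_state = rules[name](snapshot)
        assert new_state in node.states
        node.state = new_state

# Toy example: node "b" fires one step after node "a" fires.
nodes = {
    "a": DiscreteNode(["quiet", "firing"], "firing"),
    "b": DiscreteNode(["quiet", "firing"], "quiet"),
}
rules = {
    "a": lambda s: "quiet",                       # a fires once, then rests
    "b": lambda s: "firing" if s["a"] == "firing" else "quiet",
}
parallel_step(nodes, rules)
print(nodes["a"].state, nodes["b"].state)  # quiet firing
```

Because every rule reads the same snapshot, the result is independent of the order in which nodes are visited, which is what makes the sweep equivalent to a truly parallel update.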