ABSTRACT

The Lexical Distance (LD) model, presented here, functions as the front end of a connectionist Natural Language Understanding system (e.g. Sharkey, 1989a, 1989b). The lexicon consists of a vector of microfeatures which are divided among three classes: orthographic, semantic and situational. Treating lexical space as an energy landscape, the entry for each word is learned as a minimum of the energy function E (see Kawamoto, in press, for a similar treatment). Initial access to the lexicon is via the orthographic microfeatures. When these are activated by the visual presentation of a word, the lexical net is destabilised and the system begins gradient descent in the energy function until it relaxes into an attractor basin which represents the meaning of the input word. The model characterises context effects in word recognition experiments by deriving time predictions based on the movement of the system from its initial state to the target state. Two classes of context are discussed, along with their interactions with word frequency and stimulus degradation. The research demonstrates how these effects fall quite naturally out of the processing specifications of the LD model, with no need for ad hoc parameters.
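The relaxation process the abstract describes can be illustrated with a minimal Hopfield-style sketch, under assumptions not taken from the paper: the sizes (60 microfeatures, of which the first 20 stand in for the orthographic class), the Hebbian learning rule, and the asynchronous update schedule are all illustrative choices, not the LD model's actual specification. Each stored word becomes a minimum of the energy E; clamping the orthographic features and descending in E settles the net into the attractor for that word.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 60        # microfeatures per word (hypothetical size)
N_ORTHO = 20  # first 20 features stand in for the orthographic class

# Three hypothetical word entries as +/-1 microfeature vectors
words = rng.choice([-1.0, 1.0], size=(3, N))

# Hebbian weights: each stored word becomes a minimum of the energy E
W = sum(np.outer(p, p) for p in words) / N
np.fill_diagonal(W, 0.0)

def energy(s):
    """Hopfield energy E(s) = -1/2 s^T W s."""
    return -0.5 * s @ W @ s

def recognise(cue, steps=400):
    """Clamp the orthographic cue, then descend in E until the net settles."""
    s = rng.choice([-1.0, 1.0], size=N)  # semantic features start random
    s[:N_ORTHO] = cue                    # visual input activates orthography
    for _ in range(steps):
        i = rng.integers(N_ORTHO, N)     # asynchronously update unclamped units
        s[i] = 1.0 if W[i] @ s >= 0 else -1.0  # each flip never raises E
    return s

target = words[0]
settled = recognise(target[:N_ORTHO])
# Fraction of the target word's semantic features recovered from orthography alone
overlap = np.mean(settled[N_ORTHO:] == target[N_ORTHO:])
```

In this toy setting the orthographic cue pulls the state into the target's attractor basin, so `overlap` approaches 1. The abstract's time predictions would correspond, in a sketch like this, to how many update steps the descent needs from a given starting state.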