ABSTRACT

In the first chapter of this section, Dyer et al. present a method for dynamically modifying distributed representations: a separate, distributed connectionist network is maintained as a symbol memory, in which each symbol is a pattern of activation. Symbol representations start out as random patterns of activation; over time they are “recirculated” through the symbolic tasks demanded of them, and as a result gradually form distributed representations that aid performance of those tasks. These distributed symbols enter into structured relations with other symbols while exhibiting the characteristic features of distributed representations, e.g. tolerance to noise and similarity-based generalisation to novel cases. Dyer et al. discuss in detail a method of symbol recirculation in which entire weight matrices, formed in one network, serve as patterns of activation in a larger network. In the case of natural language processing, the resulting symbol memory can serve as a store for lexical entries, symbols, and relations among symbols, and thus represent semantic information.
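The core idea of symbol recirculation, as summarised above, can be illustrated with a minimal sketch. This is not Dyer et al.'s implementation; it is a hypothetical toy in which symbols begin as random vectors and a simple one-layer task network is trained on a relation between symbols, with the task error also pushed back into the input symbols' own patterns, so the representations drift toward forms that help the task. All names, dimensions, and the learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical symbol memory: each symbol starts as a random activation pattern.
DIM = 8
symbols = {name: rng.normal(scale=0.1, size=DIM)
           for name in ["john", "loves", "mary"]}

# A one-layer task network mapping a (subject, verb) pair to an object pattern.
W = rng.normal(scale=0.1, size=(DIM, 2 * DIM))
LR = 0.3  # illustrative learning rate

def recirculate(subj, verb, obj):
    """One recirculation step: update the task weights W *and* the input
    symbols' patterns from the same task error."""
    x = np.concatenate([symbols[subj], symbols[verb]])
    y = W @ x
    err = y - symbols[obj]            # task error against the target symbol
    grad_W = np.outer(err, x)         # gradient w.r.t. the task weights
    grad_x = W.T @ err                # gradient pushed back into the symbols
    W[...] -= LR * grad_W
    symbols[subj] -= LR * grad_x[:DIM]   # symbol patterns are themselves updated,
    symbols[verb] -= LR * grad_x[DIM:]   # so they gradually encode task structure
    return 0.5 * float(err @ err)

# Repeated recirculation drives the task error down while reshaping the symbols.
losses = [recirculate("john", "loves", "mary") for _ in range(200)]
```

In this sketch only one relation is trained, so the symbols simply converge to a consistent triple; in a realistic symbol memory many relations would share the same symbol vectors, forcing each pattern to become useful across all the tasks it participates in.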