ABSTRACT

Recent critiques of connectionist models have argued that connectionist representations are unstructured, atomic, and bounded (e.g., Fodor & Pylyshyn, 1988). This paper describes results with recurrent networks and distributed representations that contest these claims. Simulation results demonstrate that connectionist networks are able to learn representations that are richly structured and open-ended. These representations make use both of the high-dimensional space defined by hidden unit activation patterns and of trajectories through this space over time, and they possess a rich structure which reflects regularities in the input. Specific proposals are advanced which address the type/token distinction, the representation of hierarchical categories in language, and the representation of grammatical structure.
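
To make the notion of trajectories through hidden unit space concrete, the following is a minimal sketch (not from the paper) of the kind of simple recurrent network at issue: each hidden unit activation pattern is a point in a high-dimensional space, and processing a sequence of inputs traces a trajectory through that space. The layer sizes, weight initialization, and toy one-hot inputs are illustrative assumptions, not the paper's simulations.

```python
import numpy as np

rng = np.random.default_rng(0)

n_input, n_hidden = 5, 16  # illustrative sizes, not from the paper
W_xh = rng.normal(scale=0.5, size=(n_hidden, n_input))   # input -> hidden weights
W_hh = rng.normal(scale=0.5, size=(n_hidden, n_hidden))  # context -> hidden weights

def run(sequence):
    """Return the trajectory of hidden states for an input sequence."""
    h = np.zeros(n_hidden)          # context units start at rest
    trajectory = []
    for x in sequence:
        # the new hidden state depends on the current input and the
        # previous hidden state (copied back via the context units)
        h = np.tanh(W_xh @ x + W_hh @ h)
        trajectory.append(h.copy())
    return np.stack(trajectory)     # shape: (time steps, n_hidden)

# Toy "sentence": a sequence of one-hot word codes.
words = np.eye(n_input)[[0, 2, 1, 2]]
traj = run(words)
print(traj.shape)  # (4, 16): one point in hidden-unit space per word
```

Note that the same input code (here, word 2) yields different hidden states at its two occurrences, since the state also reflects the preceding context; this is the sense in which such trajectories, rather than single static patterns, carry the representational burden.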