ABSTRACT

We make explicit the structure of the internal representations developed through learning in recurrent networks. The analysis shows that this structure mirrors the structure of the complex external entities being represented, thus providing a basis for systematic connectionist processing. We also find that the networks can represent graded variables, indicating that the representation scheme they develop is a general one.