ABSTRACT

Evidence suggests that an early representation in the visual processing of orthography is neither visual nor phonological, but instead codes abstract letter identities (ALIs) independent of case, font, and size. How could the visual system come to develop such a representation? We propose that, because many letters look similar regardless of case and font, different visual forms of the same letter tend to appear in visually similar contexts (e.g., in the same words written in different ways), and that correlation-based learning in visual cortex exploits this similarity among contexts to produce ALIs. We present a simple self-organizing Hebbian neural network model that illustrates how this idea could work and that produces ALIs when presented with appropriate input.
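The statistical premise behind the proposal can be illustrated with a minimal sketch (this is not the paper's model; the toy vocabulary, corpus size, and similarity measure are all illustrative assumptions). Words are presented with the case of each letter randomized, and we tally which visual forms co-occur. Because 'a' and 'A' appear in the same words, their co-occurrence (context) profiles are nearly identical, whereas forms of different letters have dissimilar profiles:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative assumptions, not the paper's actual input set):
letters = "abcd"
words = ["ab", "cd"]          # 'a' only ever co-occurs with b/B, 'c' with d/D
forms = list(letters) + [l.upper() for l in letters]
idx = {f: i for i, f in enumerate(forms)}

# Present each word many times, independently randomizing the case of
# every letter, and count which visual forms co-occur within a word.
cooc = np.zeros((len(forms), len(forms)))
for _ in range(5000):
    word = words[rng.integers(len(words))]
    token = [l.upper() if rng.random() < 0.5 else l for l in word]
    for f in token:
        for g in token:
            if f.lower() != g.lower():   # context = the *other* letters
                cooc[idx[f], idx[g]] += 1

def context_sim(f, g):
    """Cosine similarity between two forms' co-occurrence profiles."""
    u, v = cooc[idx[f]], cooc[idx[g]]
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

print(context_sim("a", "A"))  # near 1: same contexts, so same cluster
print(context_sim("a", "B"))  # near 0: different contexts
```

A correlation-sensitive (Hebbian) learner that groups input units with similar profiles would therefore collapse 'a' and 'A' onto a common representation, which is the sense in which common contexts could give rise to ALIs.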