ABSTRACT

In this chapter the author focuses on Annette Karmiloff-Smith's reservations about the ability of Parallel Distributed Processing (PDP)/connectionist networks to capture representational redescription. Annette argued that such models capture only the implicit learning of regularities in the physical transitions of objects in the world or in the structures of language; they do not capture the phase in which children move beyond mastery to knowledge that is both more flexible and available to conscious access. The author examines how this insight initially led neural network modellers to consider ways of capturing the relationship between explicit and implicit learning through separate processing pathways, as exemplified by the work of Axel Cleeremans in the 1990s and early 2000s. In her inimitably frank and direct manner, Annette made no bones about her reactions to the author's own work on cognitive development when she first heard about it, shortly after the publication of the PDP volumes and her own 1986 paper.