ABSTRACT

This chapter explores the epistemic roles played by connectionist models of cognition and offers a formal analysis of how connectionist models explain. It first considers how other types of computational models explain. Classical artificial intelligence (AI) programs explain using abductive reasoning, or inference to the best explanation: they begin with the phenomena to be explained and devise rules that can produce the right outcome. The chapter then examines several examples of connectionist models of cognition, observing what sorts of constraints are used in their design and how their results are evaluated. It argues that the point of implementing networks roughly analogous to neural structures is to discover and explore the generic mechanisms at work in the brain, not to deduce the precise activities of specific structures. The chapter concludes with a formal analysis of the explanations offered, which interprets connectionist models and the cognitive theories they represent as sharing membership in a type of mechanism.