ABSTRACT

This chapter focuses on what is novel in the perspective that the prediction error minimization (PEM) framework affords on the cognitive-scientific project of explaining intelligence by appeal to internal representations. It shows how truth-conditional and resemblance-based approaches to representation in generative models may be integrated. The PEM framework in cognitive science is an approach to cognition and perception centered on a simple idea: organisms represent the world by constantly predicting their own internal states. PEM theories often stress the hierarchical structure of the generative models they posit. The novel explanatory power of the PEM account derives largely from the way in which pairs of generative and recognition models interact. “Predictive coding” refers to an encoding strategy in which predicted portions of an input signal are subtracted from the actual signal received, so that only the difference between the two is passed as output to the next stage of information processing.
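
To make the encoding strategy named at the end of the abstract concrete, the following is a minimal Python sketch, not taken from the chapter: it assumes a toy random-walk signal and a naive "previous sample" predictor, and it forwards only the residual between the actual and predicted values, which is the core of the predictive coding idea as described above.

```python
import numpy as np

def encode_predictive(signal):
    """Encode a signal as prediction errors: predict each sample as the
    previous one and pass on only the residual (actual minus predicted)."""
    prediction = np.concatenate(([0.0], signal[:-1]))  # naive "previous sample" predictor
    return signal - prediction                         # only the error is forwarded

def decode_predictive(errors):
    """Recover the signal by folding each residual back into the running
    prediction (a cumulative sum, for this simple predictor)."""
    return np.cumsum(errors)

rng = np.random.default_rng(0)
signal = np.cumsum(rng.standard_normal(200))  # slowly varying toy signal (random walk)

errors = encode_predictive(signal)
reconstructed = decode_predictive(errors)

# The residual stream has far lower variance than the raw signal, which is the
# efficiency gain that motivates predictive coding; decoding here is exact.
print(f"signal variance:   {signal.var():.2f}")
print(f"residual variance: {errors.var():.2f}")
assert np.allclose(signal, reconstructed)
```

In the hierarchical schemes the abstract alludes to, the predictions are supplied by a generative model and refined stage by stage rather than fixed in advance; the hand-coded predictor above is only an illustrative stand-in for that machinery.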