ABSTRACT

Values are critical for intelligent behavior, since values determine interests, and interests determine relevance. We therefore address relevance and its role in the intelligent behavior of animals and machines. Animals avoid exhaustive enumeration of possibilities by focusing on relevant aspects of the environment, which emerge into the (cognitive) foreground, while suppressing irrelevant aspects, which submerge into the background. Nevertheless, the background is not invisible, and aspects of it can pop into the foreground if background processing deems them potentially relevant.

This illuminates the differences between representation in natural intelligence and in (traditional) artificial intelligence. Traditionally, artificial intelligence has started with simple, primitive features and attempted to construct from them a representation of the environment. If too few features are used, then processing is imprecise and crude; however, if enough features are used to permit precise processing in all contexts, then the system is defeated by the combinatorial explosion of features. In natural intelligence, in contrast, we begin with a nervous system that can process, in real time, the “concrete space” represented by the interface between the animal’s nervous system and its environment. The separation of foreground from background then serves to increase the efficiency of this processing. Instead of trying to construct the concrete world from abstract predicates, the brain projects the very high-dimensional concrete world into lower-dimensional subspaces; this projection is context-sensitive and rapidly adaptable, and therefore it is not vulnerable to the combinatorial explosion.
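To make the contrast concrete, the following minimal sketch (our illustration, not a model developed in the paper) treats the projection as a context-indexed linear map from a high-dimensional concrete space into a low-dimensional subspace; the dimensions, the random matrices, and the context names are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 10_000, 32   # concrete (high) and projected (low) dimensions: illustrative values

# Hypothetical: each context selects its own low-dimensional subspace of the
# concrete space via a fixed linear projection (random here, for illustration).
projections = {c: rng.standard_normal((d, D)) / np.sqrt(D)
               for c in ("forage", "flee")}

def foreground(x, context):
    """Context-sensitive projection of the concrete state x (dim D)
    into the low-dimensional subspace selected by the context (dim d)."""
    return projections[context] @ x

x = rng.standard_normal(D)     # a concrete sensory state
y = foreground(x, "forage")    # its 32-dimensional, context-relative view
```

Because only the low-dimensional projection is processed precisely, the cost of precise processing scales with d rather than with the full dimension D.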

We consider the connection between these ideas and the concepts of intentionality, as discussed by Brentano and Husserl, and of information, as quantified by Shannon and Weaver. In particular, the Shannon-Weaver measure ignores relevance, which is essential to biological information. Further, Brentano and Husserl characterized intentionality in terms of the “directedness of consciousness,” which can be explained as a decrease in the entropy (disorder) of the probability distribution governing processing, a decrease produced by the separation of foreground from background.
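To illustrate what such an entropy decrease looks like, the sketch below compares a uniform distribution of processing over features with one concentrated on a few foreground features; the feature counts and probabilities are made-up numbers, used only to show the direction of the effect.

```python
import numpy as np

def entropy_bits(p):
    """Shannon entropy H(p) = -sum_i p_i log2 p_i, in bits."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

n = 100                          # features competing for processing
diffuse = np.full(n, 1 / n)      # undirected: processing spread uniformly
focused = np.full(n, 0.002)      # background features nearly suppressed
focused[:5] = 0.16               # five foreground features dominate
focused /= focused.sum()

print(entropy_bits(diffuse))   # ~6.64 bits: maximal disorder
print(entropy_bits(focused))   # ~3.84 bits: separation "directs" processing
```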

Essential to these ideas are questions of how contexts are switched, which defines cognitive/behavioral episodes, and how new contexts are created, which allows the efficiency of foreground/background processing to be extended to new behaviors and cognitive domains.

Next we consider mathematical characterizations of the foreground/background distinction, which we treat as a dynamic separation of the concrete space into (approximately) orthogonal subspaces that are processed differently. Background processing is characterized by large receptive fields, which project into a space of relatively low dimension to accomplish rough categorization and approximate localization of novel stimuli. Such background processing is partly innate and partly learned, and we discuss possible correlational (Hebbian) learning mechanisms.
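As one concrete possibility (a sketch under assumed dimensions and synthetic data, not the paper’s specific model), a normalized Hebbian rule lets a few broadly tuned background units drift toward the coarse cluster structure of the input:

```python
import numpy as np

rng = np.random.default_rng(1)
D, k = 256, 8                    # concrete dimension; number of coarse background units

def hebbian_step(W, x, lr=0.01):
    """One correlational (Hebbian) update with Oja-style weight normalization:
    units whose activity correlates with the stimulus strengthen toward it,
    so they come to cover broad, frequently recurring input patterns."""
    y = W @ x                                      # background unit activities
    W = W + lr * np.outer(y, x)                    # Hebb: strengthen co-active pairs
    return W / np.linalg.norm(W, axis=1, keepdims=True)

W = rng.standard_normal((k, D))
W /= np.linalg.norm(W, axis=1, keepdims=True)

prototypes = rng.standard_normal((3, D))           # hypothetical coarse stimulus classes
for _ in range(2000):
    x = prototypes[rng.integers(3)] + 0.1 * rng.standard_normal(D)
    W = hebbian_step(W, x)                         # units align with the rough categories
```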

Foreground processing is characterized by small receptive fields, which project into a space of comparatively high dimension to accomplish precise categorization and localization of the stimuli relevant to the context. We also consider mathematical models of valences and affordances, which are an aspect of the foreground. Cells processing foreground information have no fixed meaning (i.e., their meaning is contextual), so it is necessary to explain how the processing accomplished by foreground neurons can be made relative to the context. Thus we consider the properties of several simple mathematical models of how the contextual representation controls foreground processing.
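One simple model of such contextual control (our sketch; the bilinear form and the dimensions are assumptions) lets the context vector mix a set of basis maps, so that the same foreground cells compute different functions under different contexts:

```python
import numpy as np

rng = np.random.default_rng(2)
D, d, K = 256, 16, 4          # concrete dim, foreground dim, context dim (illustrative)

# Hypothetical bilinear model: the context vector c mixes K basis maps, so the
# effective weights of the foreground units, and hence their "meaning", depend
# on the current context.
T = rng.standard_normal((K, d, D)) / np.sqrt(D)

def foreground_response(x, c):
    """Foreground activity y = (sum_k c_k T_k) x: processing relative to context c."""
    W_c = np.tensordot(c, T, axes=1)   # context-dependent effective weights (d x D)
    return W_c @ x

x = rng.standard_normal(D)
y1 = foreground_response(x, np.array([1.0, 0.0, 0.0, 0.0]))  # one pure context
y2 = foreground_response(x, np.array([0.0, 1.0, 0.0, 0.0]))  # same cells, different function
```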

We show how simple correlational processes accomplish the contextual separation of foreground from background on the basis of differential reinforcement. That is, these processes account for the context-dependent division of the concrete space into disjoint subspaces corresponding to the foreground and the background.
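A minimal sketch of how differential reinforcement could drive this separation (the update rule, reward scheme, and threshold are our assumptions): track, per feature, a running estimate of the covariance between its activity and reinforcement, and assign features with substantial covariance to the foreground.

```python
import numpy as np

rng = np.random.default_rng(3)
D, lr = 64, 0.02

s = np.zeros(D)                    # per-feature reward-covariance estimate (salience)
x_bar, r_bar = np.zeros(D), 0.0    # running baselines

for _ in range(5000):
    x = (rng.random(D) < 0.5).astype(float)       # binary feature activities
    r = x[0] * x[1]                               # hypothetical: reward needs features 0 and 1
    x_bar += lr * (x - x_bar)
    r_bar += lr * (r - r_bar)
    s += lr * ((r - r_bar) * (x - x_bar) - s)     # correlational (three-factor) update

foreground = np.where(s > 0.05)[0]   # features 0 and 1: correlated with reinforcement
background = np.where(s <= 0.05)[0]  # the rest submerge into the background
```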

Since an episode may comprise the activation of several contexts (at varying levels of activity), we consider models, suggested by quantum mechanics, of foreground processing in superposition. That is, the contextual state may be a weighted superposition of several pure contexts, with a corresponding superposition of the foreground representations and of the processes operating on them (sketched below).

This leads us to a consideration of the nature and origin of contexts. Although some contexts are innate, many are learned. We discuss a mathematical model of contexts that allows a context to split into several contexts, to agglutinate from several contexts, or to constellate out of relatively acontextual processing. Finally, we consider the acontextual processing that occurs when the current context is no longer relevant and that may trigger a switch to another context or the formation of a new one. We relate this to the situation known as “breakdown” in phenomenology.
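Returning to the superposed-processing model mentioned above, here is a minimal sketch; the linear-mixture form, the dimensions, and the context names are our assumptions, suggested by the quantum analogy rather than derived from it.

```python
import numpy as np

rng = np.random.default_rng(4)
D, d = 256, 16

# Hypothetical pure contexts, each a linear foreground process on the concrete space.
P = {name: rng.standard_normal((d, D)) / np.sqrt(D) for name in ("A", "B", "C")}

def superposed_foreground(x, weights):
    """Foreground representation under a weighted superposition of contexts:
    y = sum_k w_k (P_k x), with the activity levels w_k normalized to sum to 1."""
    w = np.array(list(weights.values()), dtype=float)
    w /= w.sum()
    return sum(wk * (P[k] @ x) for wk, k in zip(w, weights))

x = rng.standard_normal(D)
y = superposed_foreground(x, {"A": 0.7, "B": 0.3, "C": 0.0})  # a mixed episode
```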