ABSTRACT

This chapter demonstrates the ability of artificial systems to ground their symbols in the potential activity afforded by their environment. To illustrate the grounding of symbols in affordances, the analysis is presented in terms of D. Marr's three descriptive levels. Placing an A.I. system's behavior within the broader context of its environment widens the potential for grounding and draws the focus away from inner formal-symbol operations. The symbol grounding problem arises, in theory, because the symbols in an A.I. system's representation layer are manipulated formally according to preset programmed rules and have no causal connection with the exterior world. According to Harnad, human mental symbols are grounded in our daily interactions with the exterior world. An affordance-structured analysis grounds an A.I. system's symbols by appealing to stimulus information, as opposed to the traditionalist appeal to causal energy connections. The symbol grounding problem is perspective-dependent and is, by nature, only a theoretical conflict.