ABSTRACT

Perceiving where things are in the environment poses a fundamental problem for mobile animals. The perception of external space demands the capacity to detect, represent, and transform information in multiple sensory frames of reference, since the body's sensors have different intrinsic geometries and move relative to one another. Similarly, even the simplest forms of action involve identifying a goal in one frame of reference, such as the limb's desired position in space, and transforming those spatial coordinates into motor commands in another frame of reference that move the limb into the appropriate position. Between visual perception and visually guided action lies a rich repertoire of spatial cognitive abilities that participate in transformations between signals derived from different sensory modalities and in their integration.