ABSTRACT

Until relatively direct interfaces with brain signals become widely available, users will have to play a physically active part in controlling the virtual environment (VE). This involves moving the head, eyes, limbs, or whole body. Control and perception of movement depend heavily on proprioception, which is traditionally defined as the sensation of limb and whole-body position and movement derived from somatic mechanoreceptors. This chapter presents evidence that proprioception is actually computed from somatic (muscle, joint, tendon, skin, vestibular, visceral) sensory signals, motor command signals, vision, and audition. Experimental manipulation of these signals can alter the perceived spatial position and movement of a body part, attributions about the source and magnitude of forces applied to the body, the representation of body dimensions and topography, and the localization of objects and support surfaces. The effortless and unified way these qualities are perceived in normal environments depends on multiple, interdependent adaptation mechanisms that continuously update internal models of the sensory and motor signals associated with the position, motion, and form of the body, the support surface, the force background, and the properties of movable objects in relation to intended movements.