ABSTRACT

The generation of spatial representations linking sensation to action is a primary function of the posterior parietal cortex. Recent proposals suggest that parietal neurons compute basis functions of their input signals, implementing a gain-field mechanism (i.e., the modulation of visual information by postural variables) to perform coordinate transformations through simple linear mappings. Our study builds on the basis-function approach to develop a biologically inspired computational model that accounts for sensorimotor transformations in saccadic planning and spatial attention. We show that the modulation of parietal neurons by the same postural variable to which their visual selectivity is invariant can be used to remap eye-centered representations across eye movements. We also demonstrate that the same connections involved in the generation of saccadic eye movements produce top-down attentional priming without requiring any additional mechanism or learning, providing computational support for the premotor theory of attention. Finally, the model predicts that top-down signals can produce attentional facilitation in at least two distinct spatial maps, which code spatial locations in eye-centered and head-centered coordinates.
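
To make the gain-field idea concrete, the sketch below (not from the paper) builds a basis-function layer in which each unit's Gaussian retinal tuning is multiplied by a sigmoidal eye-position gain, then fits a single linear readout that recovers the head-centered target location (retinal location plus eye position). The tuning widths, ranges, and the scalar readout are illustrative assumptions; the model described in the abstract operates on full spatial maps rather than a single decoded value.

    # Minimal gain-field basis-function sketch: Gaussian retinal tuning
    # multiplied by sigmoidal eye-position gain, with a linear readout of
    # head-centered position. Parameters are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    x_pref = np.linspace(-40, 40, 15)   # preferred retinal locations (deg)
    e_pref = np.linspace(-40, 40, 15)   # eye-position gain thresholds (deg)
    sigma = 10.0                        # width of retinal tuning (deg)

    def basis_responses(x_ret, e_pos):
        """Population response: Gaussian retinal tuning x sigmoidal eye gain."""
        vis = np.exp(-(x_ret - x_pref[:, None]) ** 2 / (2 * sigma ** 2))  # (15, 1)
        gain = 1.0 / (1.0 + np.exp(-(e_pos - e_pref[None, :]) / 8.0))     # (1, 15)
        return (vis * gain).ravel()                                        # (225,)

    # Training pairs: random retinal target and eye positions, with the
    # head-centered location (retinal + eye position) as the target output.
    X, y = [], []
    for _ in range(2000):
        x_ret = rng.uniform(-30, 30)
        e_pos = rng.uniform(-30, 30)
        X.append(basis_responses(x_ret, e_pos))
        y.append(x_ret + e_pos)
    X, y = np.array(X), np.array(y)

    # A single linear stage read out from the basis layer suffices to
    # approximate the coordinate transformation within the trained range.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Test on a novel combination of retinal and eye positions.
    test = basis_responses(12.0, -5.0)
    print(f"decoded head-centered position: {test @ w:.2f} (true: 7.00)")

Because the basis layer jointly encodes retinal location and eye position, the same population can be read out linearly into different reference frames, which is the property the abstract exploits for remapping and attentional priming.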