ABSTRACT

Cognitive models that interact with their environment are comparatively scarce. In most models, the speed of task performance is therefore governed by the speed of cognition alone, with the rate at which data can be exchanged with the external world simply ignored. Task performance is also typically too accurate: no slips of action can occur because no external actions are ever performed. To investigate ways of solving these problems, simulations of visual perception and motor action have been developed and integrated with two cognitive models. When simulated perception and action are used, the speed of task performance drops to more realistic levels. Implementing plausible simulations of visual perception and motor action requires a wide range of skills, including advanced programming techniques. The utility of the simulated perceptual capabilities is demonstrated by reimplementing them for integration with another cognitive model, using a commercially available interface toolkit.