ABSTRACT

This chapter presents a cognitive-psychology analysis of spontaneous, concurrent speech gestures in a face-to-face architectural design meeting (A1). The long-term objective is to formulate specifications for remote collaborative-design systems, especially for supporting the use of different semiotic modalities (multi-modal interaction). According to their function for design, interaction, and collaboration, we distinguish several gesture families: representational (entity-designating or entity-specifying), organisational (managing discourse, interaction, or functional design actions), focalising, discourse- and interaction-modulating, and disambiguating gestures. The discussion and conclusions concern the following points: it is impossible to attribute fixed functions to particular gesture forms; ‘designating’ gestures may also serve a design function; the gestures identified in A1 possess a certain generic character; and these gestures are neither systematically irreplaceable nor merely optional accessories to speech or drawing. We close by discussing the possibilities for gesture in computer-supported collaborative software systems.