ABSTRACT

Computational approaches to gesture recognition and synthesis tend to regard gestures as isolated carriers of propositional content. The imagistic role of gestures, in particular their cohesive use of space, has so far received little attention in human-computer interaction, despite its significance in the psycholinguistic literature. An empirical study on object descriptions revealed that gestures often convey an underspecified abstraction of shape, reducing 3D objects to lower-dimensional projections. The Imagistic Description Tree (IDT) model serves as the target representation for a gesture understanding system and as the source for gesture synthesis by an animated agent. For gesture understanding, input data is segmented and translated into a form representation of the gesture's meaningful part. Features that encode spatial extent are inserted into the imagery module as axes. The IDT is thus built up stepwise until external segmentation cues, such as lowering the hands, indicate the beginning of a new imagistic description. A formal semantics for the imagistic content of shape-related gestures is proposed.
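As an illustration only, the following Python sketch shows how such a tree might be built up stepwise from gesture segments, with a segmentation cue such as lowering the hands starting a new imagistic description. The paper presents the IDT conceptually, not as code; all names here (Axis, ImagisticDescription, build_idt) and the feature format are hypothetical assumptions.

from dataclasses import dataclass, field
from typing import Callable, Iterable, List, Tuple

# Hypothetical sketch of an IDT node; names and fields are assumptions,
# not the authors' API.

@dataclass
class Axis:
    """A spatial extent abstracted from gesture features (e.g., hand distance)."""
    label: str    # assumed labels such as "width" or "height"
    extent: float # assumed metric extent

@dataclass
class ImagisticDescription:
    """One node: an underspecified shape abstraction composed of axes."""
    axes: List[Axis] = field(default_factory=list)
    children: List["ImagisticDescription"] = field(default_factory=list)

    def add_axis(self, axis: Axis) -> None:
        # Stepwise construction: each meaningful gesture segment
        # contributes an axis encoding spatial extent.
        self.axes.append(axis)

def build_idt(segments: Iterable[Tuple[dict, str]],
              new_description_cue: Callable[[str], bool]) -> List[ImagisticDescription]:
    """Group gesture segments into imagistic descriptions.

    `segments` yields (features, cue) pairs; `new_description_cue` is a
    predicate over cues (e.g., hands lowered) signalling that a new
    imagistic description begins. Both are assumptions for illustration.
    """
    descriptions = [ImagisticDescription()]
    for features, cue in segments:
        if new_description_cue(cue):
            # External segmentation cue: start a new imagistic description.
            descriptions.append(ImagisticDescription())
        descriptions[-1].add_axis(Axis(label=features["dim"],
                                       extent=features["extent"]))
    return descriptions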