ABSTRACT

This chapter examines the computational analysis of conductors' temporal gestures in the context of three interrelated areas of research: the computational extraction of music- and movement-related features from audio and motion-capture recordings, the kinematics of conductors' temporal gestures, and musicians' ability to synchronize with conductors' gestures. The research shows that a large number of movement- and music-related features can be computationally extracted from motion-capture and audio data. Adopting a computational approach to movement and musical feature extraction increases the speed and precision of the process and makes it possible to analyse large datasets with relative ease. Regarding people's synchronization with conductors' temporal gestures, there is a dearth of relevant literature: conducting manuals tend to focus on conveying emotional expression rather than temporal information, and most research on conductors' gestures has likewise concentrated on the expressive aspects of conducting.
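
As a minimal illustration of the kind of movement-feature extraction discussed here, the sketch below derives per-frame speed and acceleration magnitude from a single 3-D marker trajectory (e.g. a baton tip). The marker choice, the 120 Hz sampling rate, and the function name are illustrative assumptions, not the specific pipeline used in the chapter.

```python
import numpy as np

def movement_features(positions, fs=120.0):
    """Derive basic kinematic features from a 3-D marker trajectory.

    positions : (n_frames, 3) array of x, y, z coordinates, sampled
                at fs frames per second (120 Hz assumed here).
    Returns per-frame speed and acceleration magnitude.
    """
    dt = 1.0 / fs
    velocity = np.gradient(positions, dt, axis=0)       # numerical time derivative
    acceleration = np.gradient(velocity, dt, axis=0)    # second derivative
    speed = np.linalg.norm(velocity, axis=1)            # |v| per frame
    accel_mag = np.linalg.norm(acceleration, axis=1)    # |a| per frame
    return speed, accel_mag

# Synthetic data standing in for a motion-capture recording:
# a marker oscillating in the x-z plane for 2 s at 120 Hz.
t = np.linspace(0.0, 2.0, 240)
demo = np.column_stack([np.sin(2 * np.pi * t),
                        np.zeros_like(t),
                        np.cos(2 * np.pi * t)])
speed, accel = movement_features(demo)
print(speed.mean(), accel.mean())
```

Analogous features (velocity extrema, jerk, periodicity) can be computed over many recordings in a loop, which is what makes the computational approach fast and scalable relative to manual annotation.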