ABSTRACT

The term music robot typically refers to a device that is digitally controlled but produces sound acoustically. For example, a percussion robot might use solenoids attached to sticks to strike a drum membrane. In this chapter, the challenges and opportunities of creating improvised music that combines human and robotic musicians are discussed. The importance of robots being able to musically understand both their own output and the collective output of the group is emphasised. This is illustrated through specific case studies showing how techniques of automatic music analysis can be used in the context of live performance. For example, we describe how to build robotic instruments that can correctly identify the instruments they actuate without explicit mapping, automatically calibrate their dynamics, and self-tune. We also discuss how to model higher-level activities such as recognising performer gestures and rhythmic patterns independently of tempo. To place these techniques in a specific context, we discuss their use in creating music pieces for the interactive Trimpin installation Canon ‘X + 4:33 = 100’, which uses reconstructed acoustic pianos and pays homage to John Cage and Conlon Nancarrow.