ABSTRACT

Interactive music systems change their behavior in response to musical input (Rowe, 1993). They can be used to accompany human players performing fully notated scores in a way that is sensitive to the soloist’s timing and expression, as well as to improvise along with other humans or programs. Descriptions of artificially intelligent applications often include a characterization of the knowledge level the task involves: “The knowledge level abstracts completely from the internal processing and the internal representation. Thus, all that is left is the content of the representations and the goals toward which that content will be used” (Newell, 1990, pp. 48–49). The content of the representations and the goals of interactive music systems are roughly equivalent to human musicianship. As such, they encompass a particularly daunting scope. My argument in this chapter will be that even a little musicianship can make computer programs engaging partners for a variety of applications and compelling members of performing and improvising ensembles. In other words, not all of musicianship has to be encoded to make these systems useful, and many incremental improvements are available to take them beyond the level they have already reached. Before discussing the organization and implementation of musically aware interactive music systems, let us first review some of their main features and the tools that are widely used in designing them.