ABSTRACT

This chapter surveys the components of nonlinear game music: where it exists in gameplay and how it is created. Within nonlinear (also known as dynamic) music, there are two types of playback structures: adaptive and interactive. These terms are still in a state of flux, with game developers, music software/hardware developers, and authors applying them inconsistently. For our purposes, we will call music that changes without user input adaptive, and music that changes in response to user input interactive. Preproduction considerations include assessing the project scope (estimating the number of individual music files needed and the overall soundtrack length) and selecting the musical palette itself. The music can be scored by a single composer using virtual instruments, or by hiring live musicians as an ensemble, orchestra, and/or choir. In some situations, the composer will score everything with virtual instruments in a MIDI sequencer, and an orchestrator can be hired to translate the MIDI compositions into a format suitable for live performers. Some soundtracks blend MIDI and live performers, which can save considerable time and money while still sounding realistic (especially if the live performers play the most prominent instruments). Playback techniques include horizontal resequencing, vertical reorchestration, generative music, and hybrid approaches. The chapter also discusses mixing and editing techniques, mastering strategies, and considerations for testing music in the game, and includes an interview with game composer Wilbert Roget II.
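
To make the playback techniques named above more concrete, the sketch below illustrates one common way vertical reorchestration can work: the score is delivered as synchronized stems, and the game raises or lowers each layer's volume as a gameplay intensity value changes. This is a minimal, hypothetical example; the stem names, thresholds, and the intensity parameter are illustrative assumptions rather than material from the chapter, and a real implementation would live inside the game engine or audio middleware.

```python
# Hypothetical vertical reorchestration sketch: layered stems fade in or out
# as a 0.0-1.0 game-state "intensity" value changes. Stem names and thresholds
# are illustrative placeholders.

STEM_THRESHOLDS = {
    "ambient_pad": 0.0,   # first layer in, fully audible by low intensity
    "percussion": 0.4,    # enters at medium intensity
    "brass_stabs": 0.8,   # enters only near maximum intensity
}

def stem_volumes(intensity: float, fade_width: float = 0.2) -> dict[str, float]:
    """Map an intensity value (0.0-1.0) to a volume (0.0-1.0) per stem,
    ramping each layer in smoothly over `fade_width` instead of hard-switching."""
    volumes = {}
    for stem, threshold in STEM_THRESHOLDS.items():
        # Linear fade from 0 to 1 as intensity rises past the stem's threshold.
        ramp = (intensity - threshold) / fade_width
        volumes[stem] = max(0.0, min(1.0, ramp))
    return volumes

if __name__ == "__main__":
    for level in (0.1, 0.5, 0.9):
        print(level, stem_volumes(level))
```

Horizontal resequencing, by contrast, would switch between entire musical sections rather than layering simultaneous stems.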