ABSTRACT

It takes many experienced people, working together in complementary roles, to create a modern game or virtual experience. This chapter explains how interactive audio development work is partitioned in large and small projects, by job role and by content category.

Contrasting the roles of sound designers with those of system and application audio programmers, and comparing attitudes to desktop, console and mobile platform differences, this chapter considers the work involved in creating audio for front-end menus, weapons, ambiences and vehicle sounds, crowds, surfaces, reflections and reverberation. Since speech recording and translation dominate the audio budget for many world-market products, their special requirements are discussed in detail.

Other topics include level design and asset management budgets, the “vertical slice,” attaching triggers to audio events, finding out why sounds aren’t sounding, suppressed rainfall, detecting damage, reporting status, avoiding monotony, when audio should control graphics and vice versa, lip sync, ducking systems, interactive music cues, localised speech and six-figure asset counts, separate text and sound locales, phrasing, splicing and word order, premature congratulations and needless repetition, and preparing for late changes.

Platform differences: exploiting commonality while playing to each platform’s strengths and minimising its weaknesses.