ABSTRACT

This chapter explains how the components of an interactive audio system fit together and what additional sub-systems can be implemented cheaply to make the foundation more practically useful.

The design splits the implementation into two layers. The outer "logical" layer is generic to all platforms, written in a portable subset of C++, and provides a rich, consistent interface for the game or application audio programmer to work with. The inner "physical" layer tailors the output of the platform-independent logical layer to the most suitable hardware or software interface provided by the platform vendor.

Concepts: eight reasons for layering audio runtimes, scheduling new platform implementations, steps in extending existing implementations, finding the nearest sound, coupling asynchronous AI systems, synchronising sound and vision, compiler differences, more context-specific optimisations.

The scheme described has been implemented and shipped on a dozen platforms, for four game engines and five asset toolchains, with software and hardware mixing, on big- and little-endian 32- and 64-bit Intel, AMD, MIPS, ARM and PowerPC processors.