ABSTRACT

Auditory interfaces and sonification, the display of information by means of nonspeech audio (Kramer et al., 1999), have been the subject of increasing interest in recent decades (for reviews, see Kramer et al., 1999; Frysinger, 2005). With the advent of ubiquitous digital technologies, high-fidelity sound samples have become increasingly easy and inexpensive to produce and implement (Hereford and Winn, 1994; Flowers et al., 2005). Perhaps more important, however, an increasing awareness of the shortcomings and limitations of traditional visual interfaces has spurred research on sound as a viable mode of information display. Nonspeech audio cues have been implemented to varying degrees in interface design, ranging from complements or supplements to existing visual displays (e.g., Brown et al., 1989; Brewster, 1997), to hybrid systems that integrate nonspeech audio with other audio technologies such as screen readers (see Morley et al., 1999; Stockman et al., 2005). Attempts have even been made to develop interfaces (usually for visually impaired users) in which feedback and interaction are driven primarily by sounds (e.g., Edwards, 1989a, 1989b; Mynatt, 1997; Bonebright and Nees, in press).