ABSTRACT

Subtitles for the deaf and hard of hearing (SDH), known as captions in the US, Canada and Australia, may take the form of intralingual or interlingual translation. They may be “burnt in (open) or superimposed (closed)”, “prepared beforehand or delivered live”, or “provided in an edited or (near) verbatim form”, and they provide access to verbal as well as non-verbal (acoustic) information. Research on SDH started in the US in the early 1970s with a series of reception studies, mostly doctoral theses that explored the benefits of captions for deaf students. Live subtitling is “the real-time transcription of spoken words, sound effects, relevant musical cues, and other relevant audio information” to enable deaf or hard-of-hearing viewers to follow a live audiovisual programme. Since their introduction in the US and Europe in the early 1980s, live subtitles have been produced through different methods: standard QWERTY keyboards, dual keyboards, Velotype, and the two most common approaches, namely stenography and respeaking.