ABSTRACT

In traditional cel animation, synchronization between the drawn images and the speech track is usually achieved through the tedious process of reading the prerecorded speech track to find the frame times of significant speech events. Key frames with corresponding mouth positions and expressions are then drawn to match these key speech events. The mouth positions used are usually based on a canonical mapping of speech sounds into mouth positions [Blair 49].