ABSTRACT

We present a framework for developing a generic talking head that reproduces, with only a few parameters, a speaker's anatomy and the facial deformations induced by speech movements. Speech-related facial movements are controlled by six parameters; skull and mandible variability are characterized by six and seven free parameters, respectively. Speaker-specific skull, jaw, and face data are normalized against generic meshes of these organs using a robust 3D-to-3D matching procedure. The normalized data are then analyzed by decomposing the 3D variance with an iterative principal component analysis, with the aim of identifying and predicting the kinematic consequences of anatomical settings.