Multimodal motion learning system for traditional arts
The present paper describes an interactive multimodal motion learning system for traditional arts. The system instructs a user through several output modalities, such as synthesized speech, video, and agent actions. For input, it relies on speech, since the user's hands are typically occupied during motion learning. The system is built on the proposed multimodal interaction system framework, which allows learning contents to be constructed easily from high-level data models. As an example illustrating the use of this framework, we apply the multimodal learning system to the Japanese tea ceremony.
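To make the idea of constructing learning contents from a high-level data model more concrete, the following is a minimal sketch, not the authors' actual framework: every name, field, and voice command here is a hypothetical illustration. Each learning step bundles the output modalities (speech prompt, video clip, agent action) with the spoken commands a hands-busy learner can use to navigate.

```python
from dataclasses import dataclass, field

# Hypothetical high-level content model: each learning step pairs an
# instruction with the output modalities used to present it, plus the
# speech commands the learner may say to control playback.
@dataclass
class LearningStep:
    name: str
    speech_prompt: str          # text for the speech synthesizer
    video_clip: str             # filename of a demonstration video (assumed)
    agent_action: str           # gesture the on-screen agent performs (assumed)
    voice_commands: list = field(
        default_factory=lambda: ["next", "repeat", "back"])

# A fragment of tea-ceremony content expressed in this model
# (step names and media files are illustrative only).
tea_ceremony = [
    LearningStep("fold_fukusa",
                 "Fold the fukusa cloth in half, then in thirds.",
                 "fold_fukusa.mp4", "demonstrate_fold"),
    LearningStep("wipe_natsume",
                 "Wipe the natsume tea caddy with the folded fukusa.",
                 "wipe_natsume.mp4", "demonstrate_wipe"),
]

def handle_command(index: int, command: str, steps: list) -> int:
    """Advance through the content using a recognized voice command."""
    if command == "next":
        return min(index + 1, len(steps) - 1)
    if command == "back":
        return max(index - 1, 0)
    return index  # "repeat" (or anything unrecognized) keeps the current step

i = handle_command(0, "next", tea_ceremony)
print(tea_ceremony[i].name)  # → wipe_natsume
```

In this sketch, authoring new content means writing only the list of `LearningStep` records; the navigation and presentation logic is shared, which is one plausible reading of "easy construction of learning contents from high-level data modeling."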