We aim to create a model of emotionally reactive virtual humans. This model will help define realistic behavior for virtual characters based on their emotions and on events in the virtual environment to which they react. A large set of pre-recorded animations will be used to build such a model. We have defined a knowledge-based system to store animations of reflex movements, taking into account personality and emotional state. Populating such a database is a complex task. In this paper we describe a multimodal authoring tool that provides a solution to this problem. Our tool makes use of motion capture equipment, a handheld device and a large projection screen.
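The knowledge-based system mentioned above stores reflex-movement animations indexed by personality and emotional state. A minimal sketch of such a store is below; all class names, fields, and example values are hypothetical illustrations, not the schema actually used in the paper:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AnimationEntry:
    """One pre-recorded reflex-movement animation (hypothetical schema)."""
    name: str         # identifier for the animation
    emotion: str      # emotional state it corresponds to, e.g. "fear"
    personality: str  # personality trait it fits, e.g. "neurotic"
    clip_file: str    # path to the motion-capture clip


class AnimationDB:
    """Toy in-memory store, indexed by (emotion, personality)."""

    def __init__(self):
        self._index = {}

    def add(self, entry):
        # Group animations sharing the same emotion/personality pair.
        key = (entry.emotion, entry.personality)
        self._index.setdefault(key, []).append(entry)

    def query(self, emotion, personality):
        # Return all animations matching the requested state, or [].
        return self._index.get((emotion, personality), [])


db = AnimationDB()
db.add(AnimationEntry("startle", "fear", "neurotic", "clips/startle.bvh"))
print([e.name for e in db.query("fear", "neurotic")])  # → ['startle']
```

A real system would of course persist the clips and support richer queries (e.g. ranking by intensity of the emotional state), but the indexing idea is the same.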