Multimodal modeling of expressiveness for human-machine interaction

Conference paper, Year: 2020

Multimodal modeling of expressiveness for human-machine interaction

Abstract

A myriad of applications involve the interaction of humans with machines, such as reception agents, home assistants, chatbots, or agents in autonomous vehicles. Humans can control these virtual agents by means of various modalities, including sound, vision, and touch. As the number of such applications grows, a key challenge is integrating all modalities to improve both the quality of the interaction and the user's experience in the virtual world. In this state-of-the-art review paper, we discuss the design of engaging virtual agents with expressive gestures and prosody. This paper is part of a larger effort to review the mechanisms that govern multimodal interaction, such as the agent's expressiveness and the adaptation of its behavior, in order to help remove technological barriers and develop a conversational agent capable of adapting naturally and coherently to its interlocutor.

Dates and versions

hal-02928055, version 1 (02-09-2020)

Identifiers

  • HAL Id: hal-02928055, version 1

Cite

Mireille Fares, Catherine Pelachaud, Nicolas Obin. Multimodal modeling of expressiveness for human-machine interaction. WACAI 2020, Jun 2020, Île d'Oléron, France. ⟨hal-02928055⟩
127 Views
105 Downloads
