C. Adam, L. Cavedon, and L. Padgham, "Hello Emily, how are you today?": Personalised dialogue in a toy to engage children, Proceedings of the 2010 Workshop on Companionable Dialogue Systems, pp.19-24, 2010.

F. Ameka, Interjections: The universal yet neglected part of speech, Journal of Pragmatics, vol.18, pp.101-118, 1992.

V. Aubergé, Enjeux et risques du robot «compagnon» [Stakes and risks of the "companion" robot], JA-SFTAG 2014, p.13, 2014.

V. Aubergé, Y. Sasa, N. Bonnefond, B. Meillon, T. Robert et al., The EEE corpus: socio-affective "glue" cues in elderly-robot interactions in a smart home with the EmOz platform, 2014.

T. Bickmore and T. Giorgino, Health dialog systems for patients and consumers, Journal of Biomedical Informatics, vol.39, pp.556-571, 2006.

T. Bickmore, D. Schulman, and L. Yin, Maintaining engagement in long-term interventions with relational agents, Applied Artificial Intelligence, vol.24, pp.648-666, 2010.

T. W. Bickmore and R. W. Picard, Establishing and Maintaining Long-Term Human-Computer Relationships, ACM Trans. Comput.-Hum. Interact., vol.12, issue 2, pp.293-327, 2005.

D. Bohus and E. Horvitz, Managing Human-Robot Engagement with Forecasts and... um... Hesitations, Proceedings of the 16th International Conference on Multimodal Interaction, 2014.

P. B. Brandtzæg, A. Følstad, and J. Heim, Enjoyment: lessons from Karasek, pp.331-341, 2018.

A. Cafaro, B. Ravenet, and C. Pelachaud, Exploiting evolutionary algorithms to model nonverbal reactions to conversational interruptions in user-agent interactions, IEEE Transactions on Affective Computing, 2019.
URL : https://hal.archives-ouvertes.fr/hal-02382570

A. Cafaro, H. H. Vilhjálmsson, and T. Bickmore, First Impressions in Human-Agent Virtual Encounters, ACM Trans. Comput.-Hum. Interact., vol.23, 2016.

S. Campano, C. Clavel, and C. Pelachaud, "I like this painting too": When an ECA Shares Appreciations to Engage Users, AAMAS, 2015.
URL : https://hal.archives-ouvertes.fr/hal-02412126

G. Castellano, M. Mancini, C. Peters, and P. W. McOwan, Expressive copying behavior for social agents: A perceptual analysis, IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, vol.42, pp.776-783, 2011.

G. Castellano, M. Mortillaro, A. Camurri, G. Volpe, and K. Scherer, Automated analysis of body movement in emotionally expressive piano performances, Music Perception: An Interdisciplinary Journal, vol.26, pp.103-119, 2008.

C. Clavel, A. Cafaro, S. Campano, and C. Pelachaud, Fostering User Engagement in Face-to-Face Human-Agent Interactions: A Survey, pp.93-120, 2016.
URL : https://hal.archives-ouvertes.fr/hal-02287405

S. Dahl and A. Friberg, Visual perception of expressiveness in musicians' body movements, Music Perception: An Interdisciplinary Journal, vol.24, pp.433-454, 2007.

S. K. D'Mello and A. C. Graesser, AutoTutor and affective AutoTutor: Learning by talking with cognitively and emotionally intelligent computers that talk back, ACM TiiS, vol.2, p.39, 2012.

S. Duncan, Some signals and rules for taking speaking turns in conversations, Journal of Personality and Social Psychology, vol.23, p.283, 1972.

R. Gockley, A. Bruce, J. Forlizzi, M. Michalowski, A. Mundell et al., Designing robots for long-term social interaction, IEEE/RSJ International Conference on Intelligent Robots and Systems, 2005.

D. Griol, J. M. Molina, and Z. Callejas, Modeling the User State for Context-Aware Spoken Interaction in Ambient Assisted Living, Applied Intelligence, vol.40, issue.4, pp.749-771, 2014.

M. L. Knapp, J. A. Hall, and T. G. Horgan, Nonverbal communication in human interaction, Cengage Learning, 2013.

S. Kopp, L. Gesellensetter, N. C. Krämer, and I. Wachsmuth, A Conversational Agent as Museum Guide -Design and Evaluation of a Real-World Application, Intelligent Virtual Agents (T. Panayiotopoulos et al., eds.), Springer Berlin Heidelberg, pp.329-343, 2005.

E. G. Krumhuber and K. R. Scherer, Affect bursts: dynamic patterns of facial expression, Emotion, vol.11, p.825, 2011.

C. Monzo, I. Iriondo, and J. C. Socoró, Voice quality modelling for expressive speech synthesis, The Scientific World Journal, 2014.

S. Norris, Analyzing multimodal interaction: A methodological framework, 2004.

N. Obin, MeLos: Analysis and modelling of speech prosody and speaking style, 2011.
URL : https://hal.archives-ouvertes.fr/tel-00694687

C. Peters, G. Castellano, and S. De-freitas, An exploration of user engagement in HCI, Proceedings of the International Workshop on Affective-Aware Virtual Agents and Social Robots, pp.1-3, 2009.

I. Rodriguez, J. M. Martínez-Otzeta, I. Irigoien, and E. Lazkano, Spontaneous talking gestures using generative adversarial networks, Robotics and Autonomous Systems, vol.114, pp.57-65, 2019.

K. R. Scherer, What are emotions? And how can they be measured?, Social Science Information, vol.44, pp.695-729, 2005.

M. Schröder, Experimental study of affect bursts, Speech Communication, vol.40, pp.99-116, 2003.

M. Schröder, Perception of non-verbal emotional listener feedback, Proc. Speech Prosody, 2006.

C. L. Sidner, C. D. Kidd, C. Lee, and N. Lesh, Where to Look: A Study of Human-Robot Engagement, Proceedings of the 9th International Conference on Intelligent User Interfaces, pp.78-84, 2004.

G. Calbris, Elements of Meaning in Gesture (Gesture Studies, series ed. J. Streeck), John Benjamins, 2011.

M. Tomasello, M. Carpenter, J. Call, T. Behne, and H. Moll, Understanding and sharing intentions: The origins of cultural cognition, Behavioral and Brain Sciences, vol.28, pp.675-691, 2005.

Y. Xu, Prosody, tone and intonation, The Routledge Handbook of Phonetics, pp.314-356, 2019.

Y. Yoon, W.-R. Ko, M. Jang, J. Lee et al., Robots learn social skills: End-to-end learning of co-speech gesture generation for humanoid robots, 2019 International Conference on Robotics and Automation (ICRA), pp.4303-4309, 2019.