2021 (v1) Publication
In this paper, we investigate whether information related to touches and rotations applied to an object can be effectively used to classify the emotion of the agent manipulating it. We specifically focus on sequences of basic actions (e.g., grasping, rotating), which are constituents of daily interactions. We use the iCube, a 5 cm cube...
Uploaded on: July 13, 2023 -
2007 (v1) Publication
In this paper we present a model of facial behaviour encompassing interpersonal relations for an Embodied Conversational Agent (ECA). Although previous solutions to this problem exist in the ECA domain, in our approach a variety of facial expressions (i.e. expressed, masked, inhibited, and fake expressions) is used for the first time. Moreover,...
Uploaded on: February 14, 2024 -
2012 (v1) Publication
Laughter is a strong social signal in human-human and human-machine communication. However, very few attempts to model it exist. In this paper we discuss several challenges regarding the generation of laughs. We focus, more particularly, on two aspects: a) modeling laughter with different intensities and b) modeling respiration behavior during...
Uploaded on: February 13, 2024 -
2007 (v1) Publication
In this paper we propose an algorithm based on fuzzy similarity which models the concept of resemblance between facial expressions of an Embodied Conversational Agent (ECA). The algorithm measures the degree of visual resemblance between any two facial expressions. We also present an evaluation study in which we compared the users' perception...
Uploaded on: February 14, 2024 -
2010 (v1) Publication
In this paper we present our embodied conversational agent (ECA) capable of displaying a vast set of facial expressions to communicate its emotional states as well as its social relations. Our agent is able to superpose and mask its emotional states as well as fake or inhibit them. We defined complex facial expressions as expressions arising...
Uploaded on: February 14, 2024 -
2024 (v1) Publication
In this paper, we investigate the impact of sonification on the willingness for physical contact. For this purpose, we introduce a novel system designed to explore this impact within a performative art setting. It consists of a MIDI controller which detects physical contact between two dancers and transforms it into sounds. We use it in a...
Uploaded on: October 24, 2024 -
2023 (v1) Publication
We investigate the recognition of the affective states of a person performing an action with an object, by processing the object-sensed data. We focus on sequences of basic actions such as grasping and rotating, which are constituents of daily-life interactions. iCube, a 5 cm cube, was used to collect tactile and kinematics data that consist of...
Uploaded on: October 11, 2023 -
2009 (v1) Publication
In this paper we present a system which allows a virtual character to display multimodal sequential expressions, i.e., expressions that are composed of different signals partially ordered in time and belonging to different nonverbal communicative channels. It is composed of a language for the description of such expressions from real data and of...
Uploaded on: February 4, 2024 -
2011 (v1) Publication
The term "believability" is often used to describe expectations concerning virtual agents. In this paper, we analyze which factors influence the believability of the agent acting as the software assistant. We consider several factors such as embodiment, communicative behavior, and emotional capabilities. We conduct a perceptive study where we...
Uploaded on: February 14, 2024 -
2010 (v1) Publication
Believability is a key issue for virtual agents. Most authors agree that emotional behavior and personality have a high impact on agents' believability. The social capacities of the agents also have an effect on users' judgment of believability. In this paper we analyze the role of plausible and/or socially appropriate emotional displays...
Uploaded on: February 14, 2024 -
2010 (v1) Publication
A smile may communicate different meanings depending on subtle characteristics of the facial expression. In this article, we have studied the morphological and dynamic characteristics of amused, polite, and embarrassed smiles displayed by a virtual agent. A web application has been developed to collect a corpus of virtual agent smile descriptions...
Uploaded on: February 13, 2024 -
2009 (v1) Publication
A model of multimodal sequential expressions of emotion for an Embodied Conversational Agent was developed. The model is based on video annotations and on descriptions found in the literature. A language has been derived to describe expressions of emotions as a sequence of facial and body movement signals. An evaluation study of our model is...
Uploaded on: February 13, 2024 -
2011 (v1) Publication
This paper focuses on the user's perception of virtual agents embedded in real and virtual worlds. In particular, we analyze the perception of spatial relations and the perception of coexistence. For this purpose, we measure the user's voice compensation, one of the automatic human behavioral adaptations to the surrounding environment. The results...
Uploaded on: February 13, 2024 -
2010 (v1) Publication
No description
Uploaded on: February 13, 2024 -
2008 (v1) Publication
Recent research has shown that empathic virtual agents can improve human-machine interaction. A virtual agent's expressions of empathy are generally defined intuitively and are not evaluated. In this paper, we propose a novel approach for the expression of empathy using complex facial expressions like superposition and masking. An evaluation...
Uploaded on: January 31, 2024 -
2024 (v1) Publication
We propose a novel dataset for studying and modeling facial expression intensity. Facial expression intensity recognition is a rarely discussed challenge, likely stemming from a lack of suitable datasets. Our dataset has been created by extracting facial expressions from actors across twelve fiction films, followed by crowd-sourced online...
Uploaded on: October 24, 2024 -
2011 (v1) Publication
Emotional expressions play a very important role in the interaction between virtual agents and human users. In this paper, we present a new constraint-based approach to the generation of multimodal emotional displays. The displays generated with our method are not limited to the face, but are composed of different signals partially ordered in...
Uploaded on: February 14, 2024 -
2022 (v1) Publication
Eating is a fundamental part of human life and is, more than anything, a social activity. A new field, known as Computational Commensality, has been created to computationally address various social aspects of food and eating. This paper illustrates a study on remote dining we conducted online in May 2021. To better understand this phenomenon,...
Uploaded on: February 22, 2023 -
2006 (v1) Publication
We propose a computational model of emotions that takes into account two aspects of emotions: the emotions triggered by an event and the expressed emotions (the displayed ones), which may differ in real life. More particularly, we present a formalization of emotion-eliciting events based on a model of the agent's mental state composed of...
Uploaded on: February 6, 2024 -
2012 (v1) Publication
A smile may communicate different communicative intentions depending on subtle characteristics of the facial expression. In this article, we propose an algorithm to determine the morphological and dynamic characteristics of virtual agent's smiles of amusement, politeness, and embarrassment. The algorithm has been defined based on a virtual...
Uploaded on: February 14, 2024 -
2013 (v1) Publication
Real-time imitation of natural facial behavior remains challenging, particularly for behaviors such as laughter and nonverbal expressions. This paper explains our ongoing work on methodologies and tools for estimating Facial Animation Parameters (FAPs) and intensities of Action Units (AUs) in order to imitate lifelike facial...
Uploaded on: February 13, 2024 -
2005 (v1) Publication
We propose an architecture of an embodied conversational agent that takes into account two aspects of emotions: the emotions triggered by an event (the felt emotions) and the expressed emotions (the displayed ones), which may differ in real life. In this paper, we present a formalization of emotion-eliciting events based on a model of the...
Uploaded on: February 14, 2024 -
2022 (v1) Publication
Eating meals together is one of the most frequent human social experiences. When eating in the company of others, we talk, joke, laugh, and celebrate. In this paper, we focus on commensal activities, i.e., the actions related to food consumption (e.g., chewing, food intake) and the social signals (e.g., smiling, speaking, gazing) that appear...
Uploaded on: October 11, 2023 -
2023 (v1) Publication
This paper addresses the need for collecting and labeling affect-related data in ecological settings. Collecting annotations in the wild is a very challenging task, which is nonetheless crucial for the creation of datasets and emotion recognition models. We propose a novel solution to collect and annotate such data: a questionnaire based on...
Uploaded on: February 16, 2024