Faces in motion: understanding and mapping the decoding of dynamic facial expressions of emotion

English title Faces in motion: understanding and mapping the decoding of dynamic facial expressions of emotion
Applicant Caldara Roberto
Number 201145
Funding scheme Project funding
Research institution Département de Psychologie Université de Fribourg
Institution of higher education University of Fribourg - FR
Main discipline Psychology
Start/End 01.03.2022 - 28.02.2026
Approved amount 1'228'671.00

Keywords (6)

Eye movements; Development; Facial expressions of emotion; Functional brain imaging (EEG and fMRI); Brain-damaged patients; Autism

Lay Summary (translated from French)

Faces in motion: understanding and mapping the decoding of dynamic facial expressions of emotion
Lay summary

Human beings communicate their emotional and motivational states through complex dynamic facial signals, which have been shaped over the course of evolution by experience and a combination of biological constraints. In this sense, our social interactions are characterized by facial expressions of emotion (FEE) that gradually unfold from a neutral to an expressive signal over a short period of time, before returning to a neutral state. Our recent work has demonstrated that, in order to establish effective communication, the spatiotemporal dynamics of FEE are finely tuned by culture and are also optimized to rapidly transmit unambiguous signals to the perceiver. Surprisingly, however, while our everyday social interactions are flooded with dynamic signals, our knowledge of facial expression recognition (FER) has been built almost exclusively on static face images. This is all the more striking because the frequency of exposure to dynamic faces, as well as evolutionary and ontogenetic evidence, all predict an advantage for the FER of dynamic over static signals.

The objectives of this research program address this problem by:

i. Tracing the development of static and dynamic FER in children and adolescents (D), adults (A), and patients (P);

ii. Isolating the perceptual strategies used to recognize static and dynamic FEE in the DAP populations by means of eye movements;

iii. Elucidating the spatiotemporal brain mechanisms underlying the processing of static and dynamic FEE.

This multidisciplinary research will constitute a major advance in the theoretical understanding of FEE and will offer new rehabilitation avenues for clinical populations suffering from FER deficits.

Last update: 27.04.2021

Humans communicate social and motivational internal states through complex dynamic facial signals that have been shaped by biological and evolutionary constraints. Everyday human social interactions are characterized by facial expressions of emotion (FEE) that progressively unfold from one expressive or neutral signal to another over a short period of time, before returning to a neutral baseline. During the last decade, we have demonstrated that the spatiotemporal dynamics of FEE are finely tuned by culture and are optimized to rapidly transmit to the decoder orthogonal, unambiguous signals for effective communication. Surprisingly, however, while real-life social interactions are flooded with dynamic signals, most of the scientific literature and knowledge on facial expression recognition (FER) has been developed from the use of static face images. This scientific bias can be partly accounted for by both the technological limitations typical of the early years of FER studies and the subsequent replicability of those studies. Nowadays, technology has massively evolved, and dynamic stimuli can easily be acquired and implemented in experimental designs. Astonishingly, this technological progress has yet to be matched by its adoption in the field, as the large majority of studies continue to use static face images instead of the more ecologically valid dynamic ones. This is even more critical, as frequency of exposure to dynamic faces, as well as evolutionary and ontogenetic evidence, all predict a special status of dynamic over static signals for FER. This dynamic-over-static advantage is objectively confirmed by emergent results in the literature, from us and others, pointing towards a dissociation in terms of development, performance, processes, and neural mechanisms engaged during the decoding of FEE. For instance, in a recent large cross-sectional study, we showed that dynamic FEE are processed better than static ones in early and late life.
With the current project, we would like to break from the prevailing practice in order to further understand the differences between static and dynamic FEE, and to boost the use of more ecological dynamic faces in scientific research. As such, the main objective of the current research proposal is to expand the understanding of how static and dynamic FEE are processed and decoded, from the primary entry point of qualitative and quantitative visual signals to their decoding at the neural level, across different populations: Developing children and adolescents, Adults, and Patients (DAP). The overarching aims of the research program include:

i. Identifying facial signal recognition thresholds for static and dynamic expressions in children and adolescents (D), adults (A), and patients (P) by using novel psychophysical approaches;

ii. Isolating the perceptual strategies used to recognize static and dynamic expressions in the DAP populations by mapping eye movement fixation patterns in typical and atypical performance;

iii. Elucidating further the underlying spatiotemporal brain mechanisms of static and dynamic FEE processing, and tracking the neural markers of specific recognition abilities and deficits.

To achieve these aims, we will use a multidisciplinary approach with novel psychophysical techniques, behavioral measures, eye tracking, electrophysiology, and functional Magnetic Resonance Imaging, as well as developing a new toolbox for the statistical analysis of eye movements with dynamic stimuli (iMap Motion). We genuinely believe that highlighting the differences between the processing of static and dynamic FEE has the potential for profound theoretical, social, clinical, and economic impact. From a theoretical point of view, investigating the use of dynamic faces will be a major step forward in the understanding of face perception in ecological settings, paving the way to reinforce scientific research in that direction.
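To give a concrete flavour of what mapping eye movement fixation patterns involves, the minimal sketch below builds a duration-weighted, Gaussian-smoothed fixation density map of the kind such analyses rest on. It is purely illustrative: the function name, parameters, and the choice of a per-fixation Gaussian kernel are assumptions for this sketch, not the actual iMap Motion toolbox or its API.

```python
import numpy as np

def fixation_density_map(fixations, width, height, sigma=25.0):
    """Illustrative fixation map: sum a duration-weighted Gaussian
    kernel centred on each fixation, then normalise to [0, 1].

    fixations : iterable of (x, y, duration) tuples in pixel coordinates.
    sigma     : kernel width in pixels (a free smoothing parameter).
    """
    ys, xs = np.mgrid[0:height, 0:width]  # pixel coordinate grids (rows = y)
    density = np.zeros((height, width))
    for x, y, duration in fixations:
        # Each fixation contributes a Gaussian bump scaled by its duration.
        density += duration * np.exp(
            -((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2)
        )
    peak = density.max()
    return density / peak if peak > 0 else density

# Hypothetical data: two fixations on one region, one elsewhere.
fixations = [(100, 50, 0.3), (100, 50, 0.2), (300, 150, 0.1)]  # (x, y, seconds)
heatmap = fixation_density_map(fixations, width=400, height=200)
```

Maps like this, computed per participant and condition, can then be compared statistically across groups or between static and dynamic stimuli; extending such comparisons to time-varying (frame-by-frame) maps is precisely what a dynamic-stimulus toolbox would add.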
The research planned here will provide a tighter link between behavioral, neuropsychological, and functional neuroimaging findings and the way in which faces are processed by humans in everyday life. Modern social interactions are characterized by the decoding of both realistic dynamic FEE and static ones, typically used in virtual social networks. Thus, at the societal level, outlining the specific processes involved in both types of signals could promote the use of dynamic signals for a given target audience, for example, elderly and fragile populations, who show more effective processing of dynamic signals. Ultimately, this could optimize and enhance intergenerational affective communication. Importantly, the knowledge and techniques developed here could also be of use in a variety of clinical settings to tailor early interventions or rehabilitation training programs, as well as in the realm of patient care. Finally, since dynamic expressions are increasingly used, the research here has the potential for knowledge transfer and economic impact, as it could attract interest in the advertising, animation, computing, and robotics industries.