Project


The neuronal mechanisms underlying the perception of auditory and visual speech

Applicant Megevand Pierre
Number 167836
Funding scheme Ambizione
Research institution Département des neurosciences fondamentales Faculté de Médecine Université de Genève
Institution of higher education University of Geneva - GE
Main discipline Neurophysiology and Brain Research
Start/End 01.06.2017 - 31.07.2020
Approved amount 629'018.00

All Disciplines (2)

Discipline
Neurophysiology and Brain Research
Neurology, Psychiatry

Keywords (7)

electrocorticography; intracranial EEG; speech perception; neuronal oscillations; multisensory integration; human subjects; audiovisual speech illusions

Lay Summary (French)

Lead
When we speak, we move: the movements we make are visible to our interlocutors and carry part of the message contained in speech. We do not understand well how the brain analyzes these visual speech signals or how it combines them with auditory information. This project aims to investigate these questions using recordings of the brain's electrical activity.
Lay summary

We all read lips, even without realizing it: our brain is sensitive to the visual signals generated by the face of the person speaking to us. The mechanisms by which the brain detects and decodes these visual signals, and integrates them with the sound of the voice to produce speech comprehension, are poorly understood.

This project proposes two experiments to study these mechanisms. We will artificially manipulate the combination of visual and auditory speech signals to alter what participants think they see or hear. This will make it possible to study the effects of visual and auditory signals on the brain separately, independently of the meaning of those signals.

We will measure the brain's electrical responses to visual and auditory speech signals with electroencephalography (EEG). Part of the EEG recordings will be made with electrodes surgically implanted inside the skull in patients with epilepsy; the rest will be made with a large number of electrodes applied to the surface of the scalp. We will analyze the role of neuronal activity and brain oscillations in the integration of visual and auditory signals and in speech comprehension.

The knowledge gained from this project could improve our ability to map the brain regions involved in language, with the goal of making brain surgery safer for patients.

Last update: 02.06.2017


Publications

Animated virtual characters to explore audio-visual speech in controlled and naturalistic environments
Thézé Raphaël, Gadiri Mehdi Ali, Albert Louis, Provost Antoine, Giraud Anne-Lise, Mégevand Pierre (2020), Animated virtual characters to explore audio-visual speech in controlled and naturalistic environments, in Scientific Reports, 10(1), 15540-15540.
Brain Recording, Mind-Reading, and Neurotechnology: Ethical Issues from Consumer Devices to Brain-Based Speech Decoding
Rainey Stephen, Martin Stéphanie, Christen Andy, Mégevand Pierre, Fourneret Eric (2020), Brain Recording, Mind-Reading, and Neurotechnology: Ethical Issues from Consumer Devices to Brain-Based Speech Decoding, in Science and Engineering Ethics, 26(4), 2295-2311.
Lateralized Rhythmic Delta Activity Synchronous with Hippocampal Epileptiform Discharges on Intracranial EEG
De Stefano Pia, Vulliémoz Serge, Seeck Margitta, Mégevand Pierre (2020), Lateralized Rhythmic Delta Activity Synchronous with Hippocampal Epileptiform Discharges on Intracranial EEG, in European Neurology, 83(2), 225-227.
Electric source imaging for presurgical epilepsy evaluation: current status and future prospects
Mégevand Pierre, Seeck Margitta (2020), Electric source imaging for presurgical epilepsy evaluation: current status and future prospects, in Expert Review of Medical Devices, 17(5), 405-412.
Focal EEG changes indicating critical illness associated cerebral microbleeds in a Covid-19 patient
De Stefano Pia, Nencha Umberto, De Stefano Ludovico, Mégevand Pierre, Seeck Margitta (2020), Focal EEG changes indicating critical illness associated cerebral microbleeds in a Covid-19 patient, in Clinical Neurophysiology Practice, 5, 125-129.
The rough sound of salience enhances aversion through neural synchronisation
Arnal Luc H., Kleinschmidt Andreas, Spinelli Laurent, Giraud Anne-Lise, Mégevand Pierre (2019), The rough sound of salience enhances aversion through neural synchronisation, in Nature Communications, 10(1), 3671-3671.
Neuroprosthetic Speech: The Ethical Significance of Accuracy, Control and Pragmatics
Rainey Stephen, Maslen Hannah, Mégevand Pierre, Arnal Luc H., Fourneret Eric, Yvert Blaise (2019), Neuroprosthetic Speech: The Ethical Significance of Accuracy, Control and Pragmatics, in Cambridge Quarterly of Healthcare Ethics, 28(04), 657-670.
Human amygdala response to unisensory and multisensory emotion input: No evidence for superadditivity from intracranial recordings
Domínguez-Borràs Judith, Guex Raphaël, Méndez-Bértolo Constantino, Legendre Guillaume, Spinelli Laurent, Moratti Stephan, Frühholz Sascha, Mégevand Pierre, Arnal Luc, Strange Bryan, Seeck Margitta, Vuilleumier Patrik (2019), Human amygdala response to unisensory and multisensory emotion input: No evidence for superadditivity from intracranial recordings, in Neuropsychologia, 131, 9-24.
Biomarkers for closed-loop deep brain stimulation in Parkinson disease and beyond
Bouthour Walid, Mégevand Pierre, Donoghue John, Lüscher Christian, Birbaumer Niels, Krack Paul (2019), Biomarkers for closed-loop deep brain stimulation in Parkinson disease and beyond, in Nature Reviews Neurology, 15(6), 343-352.
Crossmodal phase reset and evoked responses provide complementary mechanisms for the influence of visual speech in auditory cortex
Mégevand Pierre, Mercier Manuel, Groppe David, Zion Golumbic Elana, Mesgarani Nima, Beauchamp Michael, Schroeder Charles, Mehta Ashesh, Crossmodal phase reset and evoked responses provide complementary mechanisms for the influence of visual speech in auditory cortex, in Journal of Neuroscience.
The phase of cortical oscillations determines the perceptual fate of visual cues in naturalistic audiovisual speech
Thézé Raphaël, Giraud Anne-Lise, Mégevand Pierre, The phase of cortical oscillations determines the perceptual fate of visual cues in naturalistic audiovisual speech, in Science Advances.

Datasets

Virtual Characters for Audiovisual Speech > Input and output data from the behavioral experiment

Author Mégevand, Pierre
Publication date 02.10.2020
Persistent Identifier (PID) 10.26037/yareta:shp4bepp7ngv3etn5u4xkms45q
Repository Yareta
Abstract
This dataset consists of the input and output data (stored as comma-separated values files) from a custom-designed behavioral experiment on the perception of artificial but naturalistic audiovisual speech. 24 participants attended the experiment, and each participant ran 2 blocks; consequently, there are 2 input .csv files and 2 output .csv files per participant. Additionally, a MATLAB script to load the data and make them available for further analysis is provided.
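The per-participant layout described above (2 input and 2 output .csv files, one pair per block) can be sketched with a small loader. This is only an illustration: the file-naming pattern used below is a hypothetical stand-in, not the actual names in the Yareta deposit, and the column names are invented for the demonstration; the MATLAB script shipped with the dataset remains the authoritative loader.

```python
import csv
import os
import tempfile

def load_participant(data_dir, participant):
    """Return [(input_rows, output_rows), ...] for a participant's 2 blocks.

    Assumes a hypothetical naming scheme "<participant>_block<N>_<role>.csv";
    adapt the pattern to the real file names in the deposit.
    """
    blocks = []
    for block in (1, 2):
        pair = []
        for role in ("input", "output"):
            path = os.path.join(data_dir, f"{participant}_block{block}_{role}.csv")
            with open(path, newline="") as f:
                pair.append(list(csv.DictReader(f)))  # list of per-trial dicts
        blocks.append(tuple(pair))
    return blocks

# Demonstrate on synthetic files that mimic the described structure
# (invented columns "trial", "stimulus", "response").
with tempfile.TemporaryDirectory() as d:
    for block in (1, 2):
        for role, col in (("input", "stimulus"), ("output", "response")):
            with open(os.path.join(d, f"P01_block{block}_{role}.csv"), "w", newline="") as f:
                w = csv.writer(f)
                w.writerow(["trial", col])
                w.writerows([[1, "aba"], [2, "ava"]])
    blocks = load_participant(d, "P01")
    print(len(blocks), len(blocks[0]))  # prints: 2 2
```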

Virtual Characters for Audiovisual Speech > Preprocessed EEG data

Author Mégevand, Pierre
Publication date 02.10.2020
Persistent Identifier (PID) 10.26037/yareta:nickbz4mbne7levc6bqj4j7rsi
Repository Yareta
Abstract
This dataset consists of the preprocessed EEG data from 15 participants in a speech perception experiment using virtual characters and synthetic speech. The EEG data for all participants are contained in large MATLAB data files. A text file briefly describes the content of each MATLAB data file.

Collaboration

Group / person, Country, Types of collaboration
Laboratory for Cognitive Neuroscience and Neuroimaging, Columbia University, New York United States of America (North America)
- in-depth/constructive exchanges on approaches, methods or results
- Publication
Mesgarani lab, Columbia University, New York United States of America (North America)
- in-depth/constructive exchanges on approaches, methods or results
- Publication
Laboratory for Multimodal Human Brain Mapping, North Shore University Hospital, Manhasset United States of America (North America)
- in-depth/constructive exchanges on approaches, methods or results
- Publication

Scientific events

Active participation

Alpine Brain Imaging Meeting 2020, Poster: "Neuronal activity reflects the perceptual fate of auditory and visual stimuli in naturalistic speech", 12.01.2020, Champéry, Switzerland. Persons involved: Megevand Pierre; Boissonnet Raphaël
Alpine Brain Imaging Meeting 2019, Poster: "Neural correlates of cross-modal influences in top-down processing of visual speech", 06.01.2019, Champéry, Switzerland. Persons involved: Boissonnet Raphaël; Megevand Pierre
Society for Neuroscience annual meeting 2018, Poster: "Neural correlates of cross-modal influences in top-down processing of visual speech", 03.11.2018, San Diego, United States of America. Persons involved: Megevand Pierre; Boissonnet Raphaël
Alpine Brain Imaging Meeting 2018, Talk given at a conference: "Dynamics of crossmodal speech signal tracking in the human auditory and visual cortex", 11.01.2018, Champéry, Switzerland. Persons involved: Megevand Pierre; Boissonnet Raphaël


Self-organised

4th research meeting on intracranial EEG, 04.06.2019, Genève, Switzerland

Communication with the public

Media relations (radio, television): "CQFD", RTS La 1ère, Western Switzerland, 2020

Associated projects

148388: Neocortical oscillations subtend the multisensory integration of auditory and visual speech (start: 01.08.2013, funding scheme: Advanced Postdoc.Mobility)

Abstract

Speech is multimodal: we must move to speak, and these movements are visible to our interlocutors. Indeed, visual speech cues enrich the information transmitted by auditory speech. How the human brain perceives visual speech, however, remains poorly understood. Here, I present a series of experiments in cognitive neurophysiology that aim to (1) characterize the cortical representation of visual speech and (2) explore how this representation interacts with that of auditory speech and with cortical areas involved in language processing. I will introduce innovative experimental paradigms that allow auditory or visual speech inputs to be varied without altering comprehension, and vice versa. To pinpoint the representation of visual speech cues, I will perform intracranial EEG recordings in patients considered for epilepsy surgery, since this technique offers the highest spatiotemporal resolution currently available in humans. I will complement these recordings with high-density scalp EEG, a technique that affords broad coverage of the human brain. Analysis will focus on (1) pinpointing cortical areas responsive to visual speech cues, using high-gamma power as an index of local neuronal firing, and (2) assessing the role of neuronal oscillations in directed information exchanges between sensory speech representations and language-processing cortex. This project has the potential to advance our understanding of the fundamental neuronal mechanisms by which the cerebral cortex processes audiovisual speech. Furthermore, better knowledge of the roles of cortical areas in speech and language processing will improve the yield of functional brain mapping, leading to more individualized surgical plans and better functional outcomes for patients undergoing epilepsy surgery.
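The first analysis step named in the abstract, high-gamma power as an index of local neuronal firing, is conventionally estimated by band-pass filtering each channel in roughly the 70-150 Hz range and taking the envelope of the analytic signal. The sketch below illustrates that standard recipe only; the band edges, filter order, and synthetic signal are illustrative assumptions, not the project's actual pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def high_gamma_power(signal, fs, band=(70.0, 150.0), order=4):
    """Estimate the high-gamma power envelope of one EEG/iEEG channel:
    zero-phase band-pass filter, then squared magnitude of the analytic
    signal (Hilbert transform)."""
    nyq = fs / 2.0
    b, a = butter(order, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, signal)      # zero-phase band-pass
    envelope = np.abs(hilbert(filtered))   # instantaneous amplitude
    return envelope ** 2                   # power

# Synthetic 1-s channel sampled at 1 kHz: an ongoing 10 Hz rhythm,
# plus a 100 Hz burst confined to the second half of the recording.
fs = 1000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 10 * t)
sig[fs // 2:] += 0.5 * np.sin(2 * np.pi * 100 * t[fs // 2:])

hg = high_gamma_power(sig, fs)
# The burst half should carry clearly more mean high-gamma power.
print(hg[: fs // 2].mean() < hg[fs // 2:].mean())  # prints: True
```

The zero-phase filter (`filtfilt`) matters here: a causal filter would shift the envelope in time, which would bias any latency comparison between cortical sites.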