electrocorticography; intracranial EEG; speech perception; neuronal oscillations; multisensory integration; human subjects; audiovisual speech illusions
Thézé Raphaël, Gadiri Mehdi Ali, Albert Louis, Provost Antoine, Giraud Anne-Lise, Mégevand Pierre (2020), Animated virtual characters to explore audio-visual speech in controlled and naturalistic environments, in Scientific Reports, 10(1), 15540.
Rainey Stephen, Martin Stéphanie, Christen Andy, Mégevand Pierre, Fourneret Eric (2020), Brain Recording, Mind-Reading, and Neurotechnology: Ethical Issues from Consumer Devices to Brain-Based Speech Decoding, in Science and Engineering Ethics, 26(4), 2295-2311.
De Stefano Pia, Vulliémoz Serge, Seeck Margitta, Mégevand Pierre (2020), Lateralized Rhythmic Delta Activity Synchronous with Hippocampal Epileptiform Discharges on Intracranial EEG, in European Neurology, 83(2), 225-227.
Mégevand Pierre, Seeck Margitta (2020), Electric source imaging for presurgical epilepsy evaluation: current status and future prospects, in Expert Review of Medical Devices, 17(5), 405-412.
De Stefano Pia, Nencha Umberto, De Stefano Ludovico, Mégevand Pierre, Seeck Margitta (2020), Focal EEG changes indicating critical illness associated cerebral microbleeds in a Covid-19 patient, in Clinical Neurophysiology Practice, 5, 125-129.
Arnal Luc H., Kleinschmidt Andreas, Spinelli Laurent, Giraud Anne-Lise, Mégevand Pierre (2019), The rough sound of salience enhances aversion through neural synchronisation, in Nature Communications, 10(1), 3671.
Rainey Stephen, Maslen Hannah, Mégevand Pierre, Arnal Luc H., Fourneret Eric, Yvert Blaise (2019), Neuroprosthetic Speech: The Ethical Significance of Accuracy, Control and Pragmatics, in Cambridge Quarterly of Healthcare Ethics, 28(4), 657-670.
Domínguez-Borràs Judith, Guex Raphaël, Méndez-Bértolo Constantino, Legendre Guillaume, Spinelli Laurent, Moratti Stephan, Frühholz Sascha, Mégevand Pierre, Arnal Luc, Strange Bryan, Seeck Margitta, Vuilleumier Patrik (2019), Human amygdala response to unisensory and multisensory emotion input: No evidence for superadditivity from intracranial recordings, in Neuropsychologia, 131, 9-24.
Bouthour Walid, Mégevand Pierre, Donoghue John, Lüscher Christian, Birbaumer Niels, Krack Paul (2019), Biomarkers for closed-loop deep brain stimulation in Parkinson disease and beyond, in Nature Reviews Neurology, 15(6), 343-352.
Mégevand Pierre, Mercier Manuel, Groppe David, Zion Golumbic Elana, Mesgarani Nima, Beauchamp Michael, Schroeder Charles, Mehta Ashesh, Crossmodal phase reset and evoked responses provide complementary mechanisms for the influence of visual speech in auditory cortex, in Journal of Neuroscience.
Thézé Raphaël, Giraud Anne-Lise, Mégevand Pierre, The phase of cortical oscillations determines the perceptual fate of visual cues in naturalistic audiovisual speech, in Science Advances.
Virtual Characters for Audiovisual Speech > Input and output data from the behavioral experiment
Author: Mégevand, Pierre
Publication date: 02.10.2020
Persistent Identifier (PID): 10.26037/yareta:shp4bepp7ngv3etn5u4xkms45q
Repository: Yareta
Abstract: This dataset consists of the input and output data (stored as comma-separated values files) from a custom-designed behavioral experiment on the perception of artificial but naturalistic audiovisual speech. Twenty-four participants took part in the experiment. Each participant ran 2 blocks; consequently, there are 2 input .csv files and 2 output .csv files per participant. Additionally, a MATLAB script that loads the data and makes them available for further analysis is provided.
Virtual Characters for Audiovisual Speech > Preprocessed EEG data
Author: Mégevand, Pierre
Publication date: 02.10.2020
Persistent Identifier (PID): 10.26037/yareta:nickbz4mbne7levc6bqj4j7rsi
Repository: Yareta
Abstract: This dataset consists of the preprocessed EEG data from 15 participants in a speech perception experiment using virtual characters and synthetic speech. The EEG data for all participants are contained in large MATLAB data files. A text file briefly describes the content of each MATLAB data file.
Speech is multimodal: we must move to speak, and these movements are visible to our interlocutors. Indeed, visual speech cues enrich the information conveyed by auditory speech. How the human brain perceives visual speech, however, remains poorly understood. Here, I present a series of experiments in cognitive neurophysiology that aim to (1) characterize the cortical representation of visual speech and (2) explore how this representation interacts with that of auditory speech and with cortical areas involved in language processing. I will introduce innovative experimental paradigms that allow varying the auditory or visual speech input without altering comprehension, or vice versa. To pinpoint the representation of visual speech cues, I will perform intracranial EEG recordings in patients evaluated for epilepsy surgery, since this technique offers the highest spatiotemporal resolution currently available in humans. I will complement these recordings with high-density scalp EEG, a technique that affords broad coverage of the human brain. Analysis will focus on (1) pinpointing cortical areas responsive to visual speech cues, using high-gamma power as an index of local neuronal firing, and (2) assessing the role of neuronal oscillations in directed information exchanges between sensory speech representations and language-processing cortex. This project has the potential to advance our understanding of the fundamental neuronal mechanisms by which the cerebral cortex processes audiovisual speech. Furthermore, better knowledge of the role of cortical areas in speech and language processing will improve the yield of functional brain mapping, leading to more individualized surgical plans and better functional outcomes for patients undergoing epilepsy surgery.
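To illustrate the analysis approach mentioned above, high-gamma power is commonly estimated by band-pass filtering the recorded signal in the high-gamma range and squaring its Hilbert envelope. The following is a minimal sketch in Python with NumPy/SciPy, not the project's actual pipeline; the band limits (70-150 Hz), filter order, and synthetic test signal are assumptions for demonstration only:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def high_gamma_power(signal, fs, band=(70.0, 150.0), order=4):
    """Estimate high-gamma power as the squared Hilbert envelope
    of the band-pass-filtered signal (zero-phase filtering)."""
    b, a = butter(order, band, btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, signal)      # zero-phase band-pass
    envelope = np.abs(hilbert(filtered))   # instantaneous amplitude
    return envelope ** 2

# Synthetic example: background noise with a 100 Hz "burst"
# (mimicking increased neuronal firing) in the second half.
fs = 1000                                  # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
sig = 0.1 * rng.standard_normal(t.size)
sig[t >= 1.0] += np.sin(2 * np.pi * 100 * t[t >= 1.0])

power = high_gamma_power(sig, fs)
# The high-gamma power estimate is markedly higher during the burst.
```

In practice, the envelope time series would be averaged within trials and compared across experimental conditions and electrode sites to localize cortical responses.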