human voice; limbic system; neural auditory system; vocal production; functional magnetic resonance imaging; hearing limits; neural network; psychoacoustics; auditory perception; vocal emotions
Staib Matthias, Frühholz Sascha (2022), Distinct functional levels of human voice processing in the auditory cortex, in Cerebral Cortex, 1.
Steiner Florence, Bobin Marine, Frühholz Sascha (2021), Auditory cortical micro-networks show differential connectivity during voice and speech processing in humans, in Communications Biology, 4(1), 801.
Ceravolo Leonardo, Frühholz Sascha, Pierce Jordan, Grandjean Didier, Péron Julie (2021), Basal ganglia and cerebellum contributions to vocal emotion processing as revealed by high-resolution fMRI, in Scientific Reports, 11(1), 10645.
Handler Alexander, Frühholz Sascha (2021), Eyewitness Memory for Person Identification: Predicting Mugbook Recognition Accuracy According to Person Description Abilities and Subjective Confidence of Witnesses, in Frontiers in Psychology, 12, 1.
Kegel Lorena Chantal, Frühholz Sascha, Grunwald Thomas, Mersch Dieter, Rey Anton, Jokeit Hennric (2021), Temporal lobe epilepsy alters neural responses to human and avatar facial expressions in the face perception network, in Brain and Behavior, 11(6), 1.
Staib Matthias, Frühholz Sascha (2021), Cortical voice processing is grounded in elementary sound analyses for vocalization relevant sound patterns, in Progress in Neurobiology, 200, 101982.
Frühholz Sascha, Dietziker Joris, Staib Matthias, Trost Wiebke (2021), Neurocognitive processing efficiency for discriminating human non-alarm rather than alarm scream calls, in PLOS Biology, 19(4), e3000751.
Roswandowitz Claudia, Swanborough Huw, Frühholz Sascha (2021), Categorizing human vocal signals depends on an integrated auditory-frontal cortical network, in Human Brain Mapping, 42(5), 1503-1517.
Dietziker Joris, Staib Matthias, Frühholz Sascha (2021), Neural competition between concurrent speech production and other speech perception, in NeuroImage, 228, 117710.
Ceravolo L, Schaerlaeken S, Frühholz S, Glowinski D, Grandjean D (2021), Frontoparietal, Cerebellum Network Codes for Accurate Intention Prediction in Altered Perceptual Conditions, in Cerebral Cortex Communications, 2(2), 1.
Swanborough Huw, Staib Matthias, Frühholz Sascha (2020), Neurocognitive dynamics of near-threshold voice signal detection and affective voice evaluation, in Science Advances, 6(50), 1.
Gruber Thibaud, Debracque Coralie, Ceravolo Leonardo, Igloi Kinga, Marin Bosch Blanca, Frühholz Sascha, Grandjean Didier (2020), Human Discrimination and Categorization of Emotions in Voices: A Functional Near-Infrared Spectroscopy (fNIRS) Study, in Frontiers in Neuroscience, 14, 1.
Trevor Caitlyn, Arnal Luc H., Frühholz Sascha (2020), Terrifying film music mimics alarming acoustic feature of human screams, in The Journal of the Acoustical Society of America, 147(6), EL540-EL545.
Frühholz Sascha, Trost Wiebke, Constantinescu Irina, Grandjean Didier (2020), Neural Dynamics of Karaoke-Like Voice Imitation in Singing Performance, in Frontiers in Human Neuroscience, 14, 1.
Kegel Lorena C, Brugger Peter, Frühholz Sascha, Grunwald Thomas, Hilfiker Peter, Kohnen Oona, Loertscher Miriam L, Mersch Dieter, Rey Anton, Sollfrank Teresa, Steiger Bettina K, Sternagel Joerg, Weber Michel, Jokeit Hennric (2020), Dynamic human and avatar facial expressions elicit differential brain responses, in Social Cognitive and Affective Neuroscience, 15(3), 303-317.
Frühholz Sascha, Trost Wiebke, Grandjean Didier, Belin Pascal (2020), Neural oscillations in human auditory cortex revealed by fast fMRI during auditory perception, in NeuroImage, 207, 116401.
Auditory perception and communication often face considerable challenges in daily life, such as hearing acoustic signals at low intensities (i.e. low loudness) or hearing signals in extremely noisy or multi-speaker environments. Hearing under these conditions is a tightrope walk between successful and unsuccessful perception of meaningful auditory information, and failures can lead to serious social misunderstandings. In this proposal, I describe three such challenges imposed on the auditory system. The proposed projects are based on, and are a continuation of, the projects in the original proposal.

The first project part (Part A) originally aimed at investigating perceptual abilities at the lowest levels of loudness, at which auditory objects become nearly imperceptible. Interestingly, appropriate noise can improve perception at these low intensity levels. The data acquired in project A1 indeed indicate that certain kinds of noise can significantly improve the detection of nearly imperceptible voices. We accordingly hypothesize that particular properties of noise, and of random noise fluctuations, underlie these effects. Project A3 therefore aims at investigating the "voice-like" properties of some types of noise that might facilitate the detection of voices under difficult hearing conditions (see the illustrative sketch at the end of this summary).

The second project part (Part B) originally dealt with auditory perception against considerable background noise. It included new experimental perspectives derived from machine-based algorithms of speech decoding, enabling us to understand how meaningful auditory objects and object features are singled out from noise. The data acquired in project B1 revealed that the auditory cortex is involved in detecting voices and vocal emotions in noise. We now hypothesize that stimulating the auditory cortex with a brain stimulation method (i.e. transcranial direct current stimulation, tDCS) might enable the detection of voices and vocal emotions under even more severe noise conditions. The proposed project B3 thus aims at investigating how stimulating brain signals in the auditory cortex might facilitate voice-in-noise detection.

The third project part (Part C) originally dealt with conditions in which a listener hears the vocalizations of other individuals while simultaneously producing vocalizations. This condition represents an enormous multitasking challenge and introduces a competition between the production of self-vocalizations and the perception of the vocalizations of others. The data acquired in project C1 indicated that especially the right auditory cortex handles the condition in which both the speaker's own voice and the voice of another person were presented to the speaker's left ear while the speaker classified the other person's speech. In the proposed project C3 we now aim at investigating the effects of temporarily inhibiting the proper functioning of the right auditory cortex, using transcranial magnetic stimulation (TMS), on classifying the speech of another person while the speaker produces their own speech.

Overall, the newly proposed projects would allow, first, a critical test of the ecological validity of hearing mechanisms by introducing well-known environmental challenges. Second, they would reveal new principles of auditory perception, including some paradoxical effects in the auditory system that are not yet fully captured by common neurocognitive theories of auditory perception.
Third, the results will have strong implications for many fields of applied science, such as the development of hearing aid technology and of digital voice technologies.
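As a purely illustrative aside to Part A, the sketch below shows one way the contrast between stationary and "voice-like" (amplitude-modulated) noise could be operationalized in a near-threshold voice detection task, and how detection performance could be scored with the signal-detection measure d'. All concrete choices are assumptions made for the sake of a runnable example, not the stimuli, listeners, or analyses of projects A1 or A3: the 220 Hz harmonic complex standing in for a voice, the 4 Hz "syllable-like" modulation of the noise envelope, the simple "dip-listening" energy detector used as a stand-in observer, and every numeric parameter.

```python
# Hypothetical illustration only: the 220 Hz harmonic "voice proxy", the 4 Hz noise
# modulation, the toy dip-listening observer, and all numeric parameters below are
# assumptions made for the sake of a runnable sketch; they are not the stimuli or
# analyses of projects A1/A3.
import numpy as np
from scipy.stats import norm

FS, DUR = 16000, 1.0                      # sampling rate (Hz) and stimulus duration (s)
T = np.arange(int(FS * DUR)) / FS
F0, N_HARM = 220.0, 8                     # assumed fundamental and number of harmonics
RNG = np.random.default_rng(0)

def voice_proxy(rms=0.12):
    """Low-level harmonic complex standing in for a near-threshold voice."""
    sig = sum(np.sin(2 * np.pi * F0 * k * T) / k for k in range(1, N_HARM + 1))
    return rms * sig / np.sqrt(np.mean(sig ** 2))

def make_noise(voice_like, mod_rate=4.0):
    """Broadband noise; optionally amplitude-modulated at a syllable-like rate."""
    n = RNG.standard_normal(T.size)
    if voice_like:
        n *= 0.5 * (1.0 + np.sin(2 * np.pi * mod_rate * T))    # slow envelope with dips
    return n / np.sqrt(np.mean(n ** 2))                        # normalise to unit RMS

def observer_says_voice(stimulus, win=0.1, criterion=1.4):
    """Toy 'dip-listening' detector: voice-band vs. flanking-band power per window."""
    hop = int(win * FS)
    best = 0.0
    for start in range(0, stimulus.size - hop + 1, hop):
        seg = stimulus[start:start + hop]
        power = np.abs(np.fft.rfft(seg)) ** 2
        freqs = np.fft.rfftfreq(seg.size, 1 / FS)
        in_voice_band = (freqs >= 0.9 * F0) & (freqs <= 1.1 * F0 * N_HARM)
        in_flank_band = (freqs > 1.1 * F0 * N_HARM) & (freqs <= 6000)
        best = max(best, power[in_voice_band].mean() / power[in_flank_band].mean())
    return best > criterion

def dprime(hits, misses, fas, crs):
    """Sensitivity d' with a log-linear correction to avoid infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (fas + 0.5) / (fas + crs + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

def run_block(voice_like, n_trials=200):
    """Alternate voice-present and noise-only trials and score the toy observer."""
    counts = dict(hits=0, misses=0, fas=0, crs=0)
    for trial in range(n_trials):
        voice_present = trial % 2 == 0
        stim = make_noise(voice_like) + (voice_proxy() if voice_present else 0.0)
        said_voice = observer_says_voice(stim)
        if voice_present:
            counts["hits" if said_voice else "misses"] += 1
        else:
            counts["fas" if said_voice else "crs"] += 1
    return dprime(**counts)

if __name__ == "__main__":
    print(f"d' in stationary noise:   {run_block(voice_like=False):.2f}")
    print(f"d' in 'voice-like' noise: {run_block(voice_like=True):.2f}")
```

Comparing the d' values of the two blocks gives a rough, toy-model intuition for why noise with deep envelope fluctuations might aid the detection of a near-threshold voice (listening in the dips); whether real listeners benefit in this way from "voice-like" noise is exactly what project A3 is designed to test.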