New research: How the brain is fooled by ventriloquists

When watching a ventriloquist perform, the audience perceives the speech as coming from somewhere other than its true source: we feel as if the words are coming from the puppet’s moving mouth when they are actually being produced by the ventriloquist’s unmoving mouth. Likewise, when we watch TV, voices appear to come from the moving mouths of the people on the screen, not from the TV’s loudspeakers. This ventriloquism effect is one example of ‘visual capture’: in general, vision dominates, or captures, perception when spatially disparate visual and auditory stimuli are presented simultaneously. Although the ventriloquism effect has been studied extensively, its neural basis remains unresolved.

In this research, Akiko Callan and colleagues used fMRI to examine the neural basis of the ventriloquism effect. The study was performed in two steps. First, they investigated how sound locations are represented in the auditory cortex. Second, they investigated how the simultaneous presentation of spatially disparate visual stimuli affects the neural processing of sound locations. The results show that, when sounds are presented alone, activity in the posterior superior temporal gyrus (pSTG) is stronger for sounds located in contralateral space. However, when contralateral sounds are delivered together with visual stimuli displayed at the center of the participant’s field of view, this pSTG activity is attenuated. In other words, when paired with a visual stimulus, the neural responses to lateral sounds become more similar to the responses observed when the sound source is actually located at the center of the field of view.

This is the first neuroimaging study to reveal changes in pSTG activity associated with the azimuthal location of sound sources. Moreover, it shows that auditory spatial processing in the brain is fooled by the simultaneous presentation of spatially disparate visual stimuli. This suggests that when spatially discordant multimodal information is presented, the human brain integrates it by modifying the processing of the sensory modality with lower spatial acuity (here, audition) so that the result becomes more consistent with the input from the modality with higher spatial acuity (vision). In everyday life, the brain almost always processes input from multiple sensory modalities, so this study significantly advances our understanding of the neural processes underlying multisensory perception.
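The idea that the less precise modality is pulled toward the more precise one is often formalized as reliability-weighted cue combination, in which each modality’s location estimate is weighted by the inverse of its variance. The sketch below illustrates this standard textbook model with made-up numbers; it is not the analysis used in the paper, and the specific locations and variances are purely illustrative assumptions.

    # A minimal sketch (not from the paper) of reliability-weighted cue combination.
    # All numbers below are illustrative assumptions, not measured values.

    def integrate(audio_loc, audio_sigma, visual_loc, visual_sigma):
        """Combine auditory and visual location estimates, weighting each cue
        by its reliability (inverse variance). The less reliable cue is pulled
        toward the more reliable one."""
        w_a = 1.0 / audio_sigma**2
        w_v = 1.0 / visual_sigma**2
        fused_loc = (w_a * audio_loc + w_v * visual_loc) / (w_a + w_v)
        fused_sigma = (1.0 / (w_a + w_v)) ** 0.5
        return fused_loc, fused_sigma

    # Suppose the voice actually comes from 20 degrees to the right, but auditory
    # localization is coarse (sigma = 10 deg), while the puppet's mouth sits at
    # 0 degrees and vision is sharp (sigma = 1 deg).
    loc, sigma = integrate(audio_loc=20.0, audio_sigma=10.0,
                           visual_loc=0.0, visual_sigma=1.0)
    print(f"perceived location: {loc:.1f} deg (uncertainty {sigma:.1f} deg)")
    # -> perceived location: 0.2 deg (uncertainty 1.0 deg)

Because auditory spatial estimates are far less precise than visual ones, the weighted average lands almost exactly on the visual location, which is the ventriloquism effect described in computational terms.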

This paper appeared in the journal Cerebral Cortex on January 9, 2015.

Full Reference:
“An fMRI study of the ventriloquism effect”
Akiko Callan, Daniel Callan, and Hiroshi Ando
Cerebral Cortex (2015) doi: 10.1093/cercor/bhu306
http://cercor.oxfordjournals.org/content/early/2015/01/09/cercor.bhu306.full