November 21, 2017 15:00 〜 16:00
CiNet 1F Conference Room
Princeton Neuroscience Institute
Host: Masahiko Haruno (PI)
The voice is the most direct link we have to others’ minds, allowing us to communicate using a rich variety of cues. This link is particularly critical early in life, as parents draw infants into the structure of their environment using infant-directed speech (IDS), a communicative code with unique pitch and rhythmic characteristics relative to adult-directed speech (ADS). To begin breaking into language, infants must discern subtle statistical differences between people and voices in order to direct their attention toward the most relevant signals.
In a recently published study, we reveal a new defining feature of IDS: mothers significantly alter statistical properties of their vocal timbre when speaking to their infants. Timbre, or tone color, is a spectral fingerprint that helps us instantly identify and classify sound sources, such as individual people and musical instruments. We recorded 24 mothers’ naturalistic speech while they interacted with their infants and with adult experimenters in their native language. Half of the participants were English speakers, and half were not. Using a support vector machine (SVM) classifier, we found that mothers consistently shifted their timbre between ADS and IDS. Importantly, this shift was highly similar across languages (i.e., a classifier trained to discriminate IDS from ADS on English data alone could distinguish the two modes when tested on non-English data, and vice versa), suggesting that such alterations of timbre are universal. Furthermore, this shift could not be explained by differences in pitch or background noise across conditions. These findings have theoretical implications for understanding how infants tune in to their local communicative environments and could inform educational tools aimed at enhancing children’s learning.
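The cross-language generalization logic (train a classifier on one language group, test on the other) can be illustrated with a toy sketch. Everything here is invented for illustration: the synthetic "timbre" features, the language-specific offset, and a nearest-centroid rule standing in for the study's actual SVM.

```python
import random

random.seed(0)

# Hypothetical 4-dimensional "timbre" features. IDS shifts the features
# by a consistent offset in both language groups; lang_shift models a
# language-specific baseline difference.
def sample(mode, lang_shift):
    base = 1.0 if mode == "IDS" else 0.0
    return [base + lang_shift + random.gauss(0, 0.2) for _ in range(4)]

english = [(sample(m, 0.0), m) for m in ("IDS", "ADS") for _ in range(50)]
other   = [(sample(m, 0.3), m) for m in ("IDS", "ADS") for _ in range(50)]

def centroid(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def train(data):
    # Nearest-centroid classifier: one centroid per speech mode.
    ids_c = centroid([x for x, m in data if m == "IDS"])
    ads_c = centroid([x for x, m in data if m == "ADS"])
    return ids_c, ads_c

def predict(model, x):
    ids_c, ads_c = model
    d_ids = sum((a - b) ** 2 for a, b in zip(x, ids_c))
    d_ads = sum((a - b) ** 2 for a, b in zip(x, ads_c))
    return "IDS" if d_ids < d_ads else "ADS"

# Train on the "English" group only, then test on the other group.
model = train(english)
acc = sum(predict(model, x) == m for x, m in other) / len(other)
print(f"cross-language accuracy: {acc:.2f}")
```

Because the IDS/ADS shift is shared across the two synthetic groups while only the baseline differs, the classifier trained on one group still separates the modes in the other, which is the pattern the study reports.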
In ongoing work, we are using fNIRS to investigate neural coupling between caregivers and their infants. Previous research using fMRI has shown that neural synchrony between a speaker and listeners underlies successful communication during storytelling. One prediction of our multifaceted developmental project is that the prosodic cues contained in IDS are instrumental in establishing effective caregiver-child neural coupling during naturalistic interactions, and that this coupling translates into better language learning. This work could have broad implications for the origins of human communication and may eventually provide early biomarkers for disorders such as autism.
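A common way to quantify speaker-listener coupling of this kind is lagged correlation between the two brain signals, with the lag capturing the delay between production and comprehension. The sketch below is a toy illustration of that idea, not the project's analysis pipeline: the sinusoidal "caregiver" and "infant" signals and the 8-sample lag are fabricated stand-ins for fNIRS time series.

```python
import math

def pearson(x, y):
    # Pearson correlation coefficient between two equal-length series.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy signals: the "infant" signal follows the "caregiver" signal with
# a short fixed lag, standing in for hemodynamic responses.
caregiver = [math.sin(t / 5.0) for t in range(200)]
infant    = [math.sin((t - 8) / 5.0) for t in range(200)]

# Scan candidate lags and keep the one with maximal correlation;
# a strong peak at a positive lag indicates speaker-to-listener coupling.
best_lag, best_r = max(
    ((lag, pearson(caregiver[:len(caregiver) - lag], infant[lag:]))
     for lag in range(0, 30)),
    key=lambda p: p[1],
)
print(best_lag, round(best_r, 3))
```

Here the scan recovers the built-in 8-sample lag with a near-perfect correlation; on real fNIRS data the peak would of course be far weaker and would be assessed against a null distribution.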