How Infants Learn Language Using Speech Rhythm and Neuronal Oscillations
Young children spontaneously develop awareness of "big" phonological (speech sound) units such as prosodic stress patterns, syllables and rhymes. By 7.5 months, infants can use prosodic rhythm (motifs of strong and weak syllables) to segment words from continuous speech. This is a complex feat of speech engineering, requiring the child to "hack" the acoustic signal for its implicit phonological structure. In this talk, I will present converging computational and experimental evidence suggesting that infants could perform this feat through speech-to-brain coupling. This is a process by which endogenous neuronal oscillations in the cortex entrain to a temporally matched hierarchy of rhythmic patterns in the speech signal. Nursery rhymes and other forms of infant-directed speech have an enhanced and exaggerated rhythmic architecture which provides a rich substrate for acoustic-phonological extraction by the infant brain. Finally, I will present preliminary evidence that brain-to-brain coupling between adults and infants could provide an early neural mechanism for the development of joint attention, which plays an important social modulatory role in early language learning.