How the brain distinguishes speech from music

Some seemingly simple questions turn out to open onto much larger ones. Christina Vanden Bosch der Nederlanden, of the University of Toronto, made this observation. A cellist from an early age, she became a neuroscientist and has long wondered how the brain tells music and speech apart. “We know that from the age of 4, children can easily and explicitly distinguish between music and language,” she recalls. “Although it seems pretty obvious, there has been little to no data asking kids to make these sorts of distinctions.” She addresses this gap in a study presented at the annual meeting of the Cognitive Neuroscience Society (CNS) in San Francisco.

Opposite results in babies and adults

This work grew out of an experiment with 4-month-old babies, who listened to words and songs delivered either in a slightly singsong voice, like the one adults use to address children, or in a monotone. Meanwhile, the researchers recorded the babies' brain activity with an electroencephalogram (EEG). They found that babies track sentences better when they are spoken than when they are sung, whereas the opposite holds for adults, who process words better when they are sung. The researchers also noted that pitch and rhythm affect brain activity: according to them, the “lack of pitch stability” is an important acoustic feature for guiding attention in babies. As Christina Vanden Bosch der Nederlanden explains, pitch stability helps a listener identify a song, while instability tells a baby that he or she is listening to a person speaking, not singing.
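To make the notion of “pitch stability” concrete: one rough way to quantify it is to track the fundamental frequency (the pitch) of a recording over time and measure how much it wanders. The sketch below does this with the open-source librosa library; it is purely illustrative and assumes this simple measure, not the method actually used in the study.

```python
import numpy as np
import librosa

def pitch_stability_score(path):
    """Standard deviation of the estimated pitch (f0), in semitones.
    Lower values mean a more stable pitch, as in singing; higher
    values reflect the fluctuating pitch contour of speech."""
    y, sr = librosa.load(path)
    # pYIN tracks the fundamental frequency frame by frame;
    # unvoiced frames come back as NaN and are flagged False.
    f0, voiced, _ = librosa.pyin(
        y,
        fmin=librosa.note_to_hz("C2"),
        fmax=librosa.note_to_hz("C7"),
        sr=sr,
    )
    f0 = f0[voiced]                              # keep voiced frames only
    semitones = 12 * np.log2(f0 / np.mean(f0))   # pitch on a musical scale
    return float(np.std(semitones))

# Hypothetical usage: a sung clip would be expected to score lower
# (more stable pitch) than the same sentence spoken aloud.
# print(pitch_stability_score("sung.wav"), pitch_stability_score("spoken.wav"))
```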

Understanding what people feel differentiates music from speech

In an online experiment, the scientist and her colleagues asked children and adults to describe, qualitatively, how music and language differ. “Both children and adults described features such as tempo, pitch, and rhythm as important for differentiating between speech and song,” she says. “This gave me a great data set that says a lot about how people think music and language differ acoustically, and also how the functional roles of music and language vary in our daily lives.”

Future clinical applications?

Understanding the relationship between music and language can “help explore fundamental questions of human cognition, such as why humans need music and speech, and how humans communicate and interact with each other through these means,” believes Andrew Chang, another participant in the CNS meeting. These results also open a path to new clinical trials, which could, for example, investigate music as an alternative form of verbal communication for people with aphasia, that is, those who have lost part or all of their capacity for speech.
