Introducing the brain: processing spoken language (Introduction)

by David Turell @, Friday, November 25, 2022, 19:17 (65 days ago) @ David Turell

New study shows specific steps are taken:

https://medicalxpress.com/news/2022-11-brains-time-stamp-words.html

"Our brains "time-stamp" the order of incoming sounds, allowing us to correctly process the words that we hear, shows a new study by a team of psychology and linguistics researchers.

"'To understand speech, your brain needs to accurately interpret both the speech sounds' identity and the order that they were uttered to correctly recognize the words being said," explains Laura Gwilliams, the paper's lead author, an NYU doctoral student at the time of the research and now a postdoctoral fellow at the University of California, San Francisco. "We show how the brain achieves this feat: Different sounds are responded to with different neural populations. And, each sound is time-stamped with how much time has gone by since it entered the ear. This allows the listener to know both the order and the identity of the sounds that someone is saying to correctly figure out what words the person is saying."

***

"...the scientists aimed to understand how the brain processes the identity and order of speech sounds, given that they unfold so quickly. This is significant because your brain needs to accurately interpret both the speech sounds' identity (e.g., l-e-m-o-n) and the order that they were uttered (e.g., 1-2-3-4-5) to correctly recognize the words being said (e.g. "lemon" and not "melon").

***

"The researchers found that the brain processes speech using a buffer, thereby maintaining a running representation—i.e., time-stamping—of the past three speech sounds. The results also showed that the brain processes multiple sounds at the same time without mixing up the identity of each sound by passing information between neurons in the auditory cortex.
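The buffer described above can be sketched as a toy model. This is only an illustrative assumption, not the study's actual analysis: a sliding window keeps the last three sounds, and each sound is tagged with how many steps have elapsed since it arrived, so identity plus relative order together disambiguate words like "lemon" and "melon".

```python
from collections import deque

def timestamp_stream(phonemes, buffer_size=3):
    """Toy sketch (hypothetical, not the paper's method): keep a running
    buffer of the last `buffer_size` sounds, each tagged with its age
    in steps (0 = most recent)."""
    buffer = deque(maxlen=buffer_size)
    snapshots = []
    for sound in phonemes:
        buffer.append(sound)
        # Pair each buffered sound with how long ago it entered the buffer.
        snapshots.append([(s, len(buffer) - 1 - i) for i, s in enumerate(buffer)])
    return snapshots

# The same sound identities in a different order yield different
# (identity, age) pairs, so "lemon" and "melon" stay distinct.
print(timestamp_stream(list("lemon"))[-1])  # [('m', 2), ('o', 1), ('n', 0)]
print(timestamp_stream(list("melon"))[-1])  # [('l', 2), ('o', 1), ('n', 0)]
```

The point of the sketch is that a sound's identity alone is ambiguous; only the identity together with its time-stamp pins down which word was heard.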

"'We found that each speech sound initiates a cascade of neurons firing in different places in the auditory cortex,' explains Gwilliams, who will return to NYU's Department of Psychology as an assistant professor in 2023. 'This means that the information about each individual sound in the word "k-a-t" gets passed between different neural populations in a predictable way, which serves to time-stamp each sound with its relative order.'"

Comment: spoken language is a late development. The human larynx was presumably present in Erectus. Therefore, the brain mechanisms that interpret speech are a late development in brain plasticity, arising as spoken language evolved.
