The iPhone and the Brain Have Similar "Autofill" Features for Predicting Words

Link to the original article: https://www.scientificamerican.com/article/the-brain-has-its-own-ldquo-autofill-rdquo-function-for-speech/

     In the article The Brain Has Its Own "Autofill" Function for Speech, author Mo Costandi explains that the human brain automatically attenuates the surprise we feel when facing an unpredictable world. In essence, the brain takes in only part of a sensory experience and, based on previous experience, fills in the rest so we can quickly identify what we are seeing or hearing. Costandi then reports that new research has found a similar mechanism for predicting which sounds will come next when someone is speaking to us.

     Neuroscientists in England recently performed a series of experiments on both humans and macaques. The researchers taught the participants a nonsense language until both the humans and the monkeys had learned its artificial grammatical rules. They then recorded activity from hundreds of individual neurons, as well as from wider neural networks, while the participants listened to sounds that either conformed to or violated the language's grammatical rules.

     In both humans and monkeys, sounds that conformed to the grammatical rules altered the firing patterns of neurons in the auditory cortex and produced synchronized neuronal firing in other parts of the brain. When the rules of grammar were violated, a different pattern initially appeared. However, after about half a second the brain "auto-corrected" this activity, producing the same synchronized patterns as before. These findings suggest that the brain reinterpreted the sounds using predictions generated from previous experience with the language. This means that when we hear a mispronounced word, we often understand what was said to us even before we realize it was incorrectly spoken.
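     As a loose analogy only (not the researchers' model), the "auto-correct" idea can be sketched in code: a toy program learns which sound tends to follow which in a made-up language, then replaces a rule-violating sound with the one it expected. The training sequences and names below are invented for illustration.

```python
# Illustrative analogy only: a toy "autofill" that learns which sound tends
# to follow which, then "auto-corrects" an unexpected sound to the predicted
# one -- loosely mirroring how the brain is described as reinterpreting
# rule-violating sounds using prior experience with the language.
from collections import Counter, defaultdict

def learn_transitions(sequences):
    """Count how often each sound follows another in the training 'language'."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def autocorrect(sequence, counts):
    """Replace sounds that violate the learned rules with the expected sound."""
    corrected = [sequence[0]]
    for nxt in sequence[1:]:
        prev = corrected[-1]
        if counts[prev] and nxt not in counts[prev]:
            # Violation of the learned rules: fall back to the most expected sound.
            nxt = counts[prev].most_common(1)[0][0]
        corrected.append(nxt)
    return corrected

# Train on a made-up "grammar" in which 'ba' is always followed by 'ku', then 'ti'.
training = [["ba", "ku", "ti"], ["ba", "ku", "ti"], ["ba", "ku", "ti"]]
rules = learn_transitions(training)
print(autocorrect(["ba", "zo", "ti"], rules))  # ['ba', 'ku', 'ti']
```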

     This research is fascinating because it not only provides evidence for the similarity of sound processing in humans and other primates, but may also offer insights into treatments for language disorders. For example, perhaps patients with dyslexia have a deficit in this sound-prediction capability, which could inform more effective treatments. Another exciting finding is that humans and monkeys show similar brain responses when confronted with sounds as complex as language. This new evidence affords us a glimpse into the evolutionary origin of our ability to understand language.

Christopher Mullin
April 29, 2017

Comments

  1. I find myself curious about how these results pertain to artists in the music industry. Oftentimes when a note is followed or accompanied by another note that does not sound in tune with it, we take note of the mismatch and experience displeasure. Predicting what we want to hear, or what we are used to hearing, is an important part of life at a primal level: hearing a car horn and becoming alert to a potential accident, or knowing what type of cry a baby is letting out (hunger, sadness, etc.). I'm interested in the more nuanced effects of this function, such as in conversational tone or music. Potentially there is activity in the anterior cingulate gyrus that uses our attention to identify mismatches between what we experience and what we expect.

  2. This data is interesting and applicable to most people, as I'm sure many have experienced this effect at a conversational level. It would be interesting to discover what activity occurs in other areas related to speech and comprehension. For example, one might assume that in correcting a mispronounced word there would be more activity in Broca's area than there would be if the word were pronounced correctly. This could be argued on the grounds that the speech would have to be reproduced by the listener.


