Hearing Speech Requires Quiet – In More Ways Than One

By Kim Krieger

Perceiving speech requires quieting certain types of brain cells, reports a team of researchers from UConn Health and the University of Rochester in the Journal of Neurophysiology. Their research reveals a previously unknown population of brain cells and opens up a new way of understanding how the brain hears.

Your brain is never silent. Brain cells, known as neurons, constantly chatter. When a neuron gets excited, it fires up and chatters louder. Following the analogy further, a neuron at maximum excitement could be said to shout.

When a friend says your name, your ears signal cells in the middle of the brain. Those cells are attuned to something called the amplitude modulation frequency. That’s the frequency at which the amplitude, or volume, of the sound changes over time.

UConn Health researchers have gained new insights into how the brain hears, thanks to a discovery of a previously unknown population of neurons. (Courtesy of Duck Kim)

Amplitude modulation is very important to human speech. It carries a lot of the meaning. If the amplitude modulation patterns are muffled, speech becomes much harder to understand. Researchers have long known there are groups of neurons keenly attuned to specific ranges of amplitude modulation frequency; one such group might focus on sounds with amplitude modulation frequencies around 32 Hertz (Hz), another around 64 Hz or 128 Hz, or other frequencies within the range of human hearing. But many previous studies of the brain had shown that populations of neurons exposed to specific amplitude-modulated sounds got excited in seemingly disorganized patterns. The responses could seem like a raucous jumble, not the organized, predictable patterns you would expect if the theory that specific neurons are attuned to specific amplitude modulation frequencies were the whole story.
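For readers who want a concrete picture of what an amplitude-modulated sound is, the short Python sketch below builds one. The specific numbers (a 1 kHz tone, a 64 Hz modulation frequency, full modulation depth, a one-second duration) are illustrative choices for the sketch, not values taken from the study.

```python
import numpy as np

# Illustrative parameters (not from the study): a 1 kHz tone whose
# loudness rises and falls 64 times per second.
sample_rate = 44100        # samples per second
duration = 1.0             # seconds
carrier_freq = 1000.0      # Hz, the tone itself
modulation_freq = 64.0     # Hz, the amplitude modulation frequency
modulation_depth = 1.0     # 0 = no modulation, 1 = full modulation

t = np.arange(0, duration, 1.0 / sample_rate)

# The envelope is what the "amplitude modulation frequency" describes:
# how fast the volume (amplitude) of the sound changes over time.
envelope = 1.0 + modulation_depth * np.sin(2 * np.pi * modulation_freq * t)
carrier = np.sin(2 * np.pi * carrier_freq * t)
am_tone = envelope * carrier
```

Plotting or playing back am_tone reveals a steady tone whose loudness swells and fades 64 times each second; that rate of change in loudness, not the pitch of the tone itself, is the amplitude modulation frequency these neurons are tuned to.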

Shig Kuwada and Duck Kim in the lab, in April 2010 (courtesy of Duck Kim).

UConn Health neuroscientists Duck O. Kim and Shigeyuki Kuwada passionately wanted to figure out the real story. Kuwada had made many contributions to science’s understanding of binaural (two-eared) hearing, beginning in the 1970s; binaural hearing is essential to how we localize where a sound is coming from. Kuwada (or Shig, as his colleagues called him) and Kim, both professors in the School of Medicine, began collaborating in 2005 on how neural processing of amplitude modulation influences the way we recognize speech. They had a lot of experience studying individual neurons in the brain, and together with Laurel Carney at the University of Rochester, they came up with an ambitious plan: they would systematically probe how every single neuron in a specific part of the brain reacted to a certain sound when that sound was amplitude modulated and when it was not. They studied isolated single-neuron responses of 105 neurons in the inferior colliculus (a part of the brainstem) and 30 neurons in the medial geniculate body (a part of the thalamus) of rabbits. Gathering the data they needed took two hours a day, every day, over a period of years.

While they were writing up their results, Shig became ill with cancer. Still, he persisted in the research. And after years of painstaking measurement, all three researchers were amazed at the results of their analysis: there was a hitherto unknown population of neurons that did the exact opposite of what the conventional wisdom predicted. Instead of getting excited when they heard certain amplitude modulation frequencies, these neurons quieted down. The more the sound was amplitude modulated at a specific modulation frequency, the quieter they got.

It was particularly intriguing because the visual system of the brain has long been understood to operate in a similar way. One population of visual neurons (called the “ON” neurons) gets excited by certain visual stimuli while, at the same time, another population of neurons (called the “OFF” neurons) gets suppressed.

Last year, when Shig was dying, Kim made him a promise.

“In the final days of Shig, I indicated to him and his family that I will put my full effort toward having our joint research results published. I feel relieved now that it is accomplished,” Kim says. The new findings could be particularly helpful for people who have lost their ability to hear and understand spoken words. If they can be offered therapy with an implant that stimulates brain cells directly, the implant could try to match the natural behavior of the hearing brain.

“It should not excite every neuron; it should try to match how the brain responds to sounds, with some neurons excited and others suppressed,” Kim says.

This research was funded by the National Institutes of Health. This article was republished with permission from the University of Connecticut. Duck Kim, D.Sc., is a 1981, 1984, 1985, 1989, 1990, 1992, 2004, 2005, and 2006 Emerging Research Grants (ERG) scientist.
