Perceptual Decision-Making for Speech Recognition in Noise

Our study indicates that perceptual decision-making is engaged under difficult word-recognition conditions, and that frontal cortex activity may adjust how much information is collected to benefit performance on word recognition tasks.

Pinpointing How Older Adults Can Better Hear Speech in Noise

In real-world listening situations, we almost always listen to speech in the presence of masking, or competing, sounds. One of the major sources of masking in such situations is speech that the listener is not paying attention to. Understanding target speech in the presence of masking speech involves separating out the acoustic information of the target and tuning out the masker.

Study Explains ‘Cocktail Party Effect’ In Hearing Impairment

People with hearing loss find it especially difficult to understand speech in a noisy environment, a challenge commonly known as the “cocktail party effect.” New research suggests that this may have less to do with actually discerning sounds; instead, it may be a processing problem in which the two ears blend different sounds together.

Hearing Difficulties in Noise Traced to Altered Brain Dynamics Following Cochlear Neural Degeneration

Hearing in noisy environments is a challenge. In humans, evidence suggests that difficulty hearing in noisy, social settings may reflect premature auditory nerve degeneration. We report that perception in noisy environments deteriorates after bilateral, moderate auditory nerve degeneration is induced in adult mice.

A Clue Toward Understanding Difficulties With Speech Perception in Noise

While it is well known that hearing loss degrades speech perception, especially in noisy environments, less is understood as to why some individuals with typical hearing may also struggle with speech perception in noise (SPiN). Several factors appear to contribute to SPiN abilities in adults with typical hearing, including the top-down cognitive functions of attention, working memory, and inhibition.

Can I Get My Hearing Tested Online?

Online hearing tests, or tests you take yourself using a computer or smartphone, are becoming more prevalent and popular, especially alongside the market for “hearables” (smart wireless earbuds). With over-the-counter hearing aids set to become available soon, these convenient at-home tests are likely to proliferate even further.

I Would Love to Hear the Conversation

Music for Alex, and for many others with hearing loss, is both a blessing and a curse. Sometimes loud music volumes, especially in crowded spaces, can be a distraction for him. This recently became apparent at dinner in a restaurant with our parents.

Making Friends and Influencing People

By Kathi Mestayer

Lorrie Moore, the author of “Who Will Run the Frog Hospital?” was in town in Williamsburg, Virginia, giving a reading at Tucker Hall at the College of William and Mary. My friend Susan had invited me, and I actually remembered the author’s name and knew that book was somewhere on my shelf, so I said yes.

My husband Mac had read the book, and was sure I would like it. I managed to find it on our jumbled bookshelves, which are kind of in alphabetical order (for fiction, at least). And it was short, only 147 pages! Before long, I realized that I had already read it, too. Not because I remembered anything, mind you, but because my marks and scars were present: pencil lines in the margins and a few dog-eared pages. Mac never marks up a book, or dog-ears the pages, and it drives him crazy when I do. So, it’s usually easy to tell whether I’ve read a book. In this case, I was probably walking that fine line with my fine lines.

I got 33 pages into the book, and it was lively stuff. One passage I had circled (in ink!) was, “She inhaled and held the smoke deep inside, like the worst secret in the world, and then let it burst from her in a cry.” I love revisiting a book, like a stone skipping over water, hitting the high spots thanks to my notes.

So when the day of the reading arrived, I went to listen to Lorrie Moore read her favorite passages in her own voice.

Wishful Thinking

When I got to the lecture hall, I sat by Susan, who was fortunately in the second row, near the aisle. Someone introduced Lorrie Moore. I couldn’t hear most of that, but it didn’t really matter. Then she got up to read, holding a big, thick book (her latest), with a microphone clipped to her lapel.

I couldn’t hear a word of it. It seemed as though she was muttering softly, but I’m not a good judge of that. I leaned over and whispered to Susan that I was having trouble hearing and was going to sit in the front row. Well, Susan outed me immediately, and informed the guy who had introduced Lorrie that she wasn’t audible. While I tried to surreptitiously move to the very center of the front row, he asked Lorrie to hold the lapel mic in her hand, so it could be closer to her mouth.

She did that for a few minutes, but it got awkward when she needed to hold the book, too. And when she held the mic in her hand, it was so close to her mouth that her speech was distorted, with the P’s and T’s sounding like balloons popping. Tiny balloons, but enough to muddy her speech. For me.

So, at the suggestion of a young man on my right, she put the mic back on her lapel, but closer to her face. She asked, “Can everyone hear me now?” I didn’t turn around to see the response behind me, but it was clear that she got some no’s because she started playing around with the mic and saying, “How about now?” And, “Now?”

That was when one of the professors leapt over the front two rows, got on the stage, took the big, regular-mic holder (which was empty), bent it around to the front of the lectern, and clipped the tiny lapel mic to it. Okay. It was closer to her mouth, and she could use her hands for other things.

Let the reading begin. Again.

This time, she read for about 20 minutes, and I still couldn’t hear clearly enough to know what she was mumbling into the mic, with the P’s and T’s popping again due to its proximity to her mouth. I sat there patiently, not wanting to be disruptive again, and thought about other things, in between the audience’s intermittent chuckling. To my credit, I did not get my phone out to check my email.

After she was done, and the Q&A period started, I slunk out of the room, as quietly as possible. Others were doing the same, so I felt a little less rude. The next day, I got an apologetic email from Susan.

Not Just Me

A couple of days later, I was in a gift shop downtown, and a young woman behind the counter asked if I had been at the reading the other night in Tucker Hall. I said yes. Turns out, she was sitting right behind me. When I mentioned that I had a really hard time hearing in that space, she replied, “Oh, I HATE that room!  It’s the worst one on campus! I can never hear in there.”

The good news is that, the next time Susan invited me to a reading, she made a point of saying they had gotten the good mic back up and running. And, in fairness, making an entire campus of classrooms and other spaces hearing-friendly will take time, money… and attention. In fact, I’ve already managed to get an FM system installed in two auditoriums in another building on campus. So, slowly, the system is getting better, one complaint at a time.

I think of that passage I ink-circled, about inhaling smoke like a big secret and letting it burst forth. Advocating to hear can put you in the spotlight, uncomfortably, especially in a group situation, but we should let our needs burst forth to help others who are no doubt in the same situation.

Kathi Mestayer is a staff writer for Hearing Health magazine.

Detailing the Relationships Between Auditory Processing and Cognitive-Linguistic Abilities in Children

According to our framework, cognitive and linguistic factors are included along with auditory factors as potential sources of deficits that may contribute individually or in combination to cause listening difficulties in children.

ERG Grantees' Advancements in OAE Hearing Tests, Speech-in-Noise Listening

By Yishane Lee and Inyong Choi, Ph.D.

Support for a Theory Explaining Otoacoustic Emissions: Fangyi Chen, Ph.D.

It’s a remarkable feature of the ear that it not only hears sound but also generates it. These sounds, called otoacoustic emissions (OAEs), were discovered in 1978. Thanks in part to ERG research in outer hair cell motility, measuring OAEs has become a common, noninvasive hearing test, especially among infants too young to respond to sound prompts.

There are two theories about how the ear produces its own sound emanating from the interior of the cochlea out toward its base. The traditional one is the backward traveling wave theory, in which sound emissions travel slowly as a transverse wave along the basilar membrane, which divides the cochlea into two fluid-filled cavities. In a transverse wave, the wave particles move perpendicular to the wave direction. But this theory does not explain some anomalies, leading to a second hypothesis: The fast compression wave theory holds that the emissions travel as a longitudinal wave via lymph fluids around the basilar membrane. In a longitudinal wave, the wave particles travel in the same direction as the wave motion.

Figuring out how the emissions are created will promote greater accuracy of the OAE hearing test and a better understanding of cochlear mechanics. Fangyi Chen, Ph.D., a 2010 Emerging Research Grants (ERG) recipient, started investigating the issue at Oregon Health & Science University and is now at China’s Southern University of Science and Technology. His team’s paper, published in the journal Neural Plasticity in July 2018, for the first time experimentally validates the backward traveling wave theory.

Chen and his coauthors—including Allyn Hubbard, Ph.D., and Alfred Nuttall, Ph.D., who are each 1989–90 ERG recipients—directly measured the basilar membrane vibration in order to determine the wave propagation mechanism of the emissions. The team stimulated the membrane at a specific location, allowing the vibration source that initiates the backward wave to be pinpointed. The resulting vibrations along the membrane were then measured at multiple locations in vivo (in guinea pigs), showing a consistent lag as distance increased from the vibration source. The researchers also measured wave speeds on the order of tens of meters per second, much slower than the speed of a compression wave in water. The results were confirmed using a computer simulation. In addition to the wave propagation study, a mathematical model of the cochlea based on an acoustic electrical analogy was created and simulated. This was used to interpret why no peak frequency-to-place map was observed in the backward traveling wave, explaining some of the previous anomalies associated with this OAE theory.
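The logic of the measurement above—a consistent lag that grows with distance from the vibration source implies a slow traveling wave—can be illustrated with a minimal sketch. The positions and lag times below are made-up illustrative numbers, not data from the study; the point is only that fitting lag against distance yields the propagation speed.

```python
import numpy as np

# Hypothetical example: estimating wave propagation speed from
# vibration onset lags measured at several positions along a membrane.
# Positions (meters from the stimulation site) and lags (seconds) are
# illustrative values, not data from the study.
positions = np.array([0.000, 0.001, 0.002, 0.003, 0.004])  # m
lags = np.array([0.0, 5.0e-5, 1.0e-4, 1.5e-4, 2.0e-4])     # s

# A least-squares line fit of lag vs. position gives slope = 1/speed.
slope, intercept = np.polyfit(positions, lags, 1)
speed = 1.0 / slope  # meters per second

print(f"Estimated propagation speed: {speed:.1f} m/s")
```

With these illustrative numbers the fit gives a speed in the tens of meters per second, the same order of magnitude the researchers reported—far below the roughly kilometer-per-second speed of a compression wave in water, which is what distinguishes the two theories.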

Speech-in-Noise Understanding Relies on How Well You Combine Information Across Multiple Frequencies: Inyong Choi, Ph.D.

Understanding speech in noisy environments is a crucial ability for communication, yet many individuals, with or without hearing loss, struggle with it. Our study in Hearing Research, published in September 2018, finds that how well you combine information across multiple frequencies is a critical factor for good speech-in-noise understanding. We tested this with a pitch-fusion task in “hybrid” cochlear implant users, who receive both low-frequency acoustic and high-frequency electric stimulation within the same ear.

In the pitch-fusion task, subjects heard either a tone consisting of many frequencies in a simple mathematical relationship or a tone with more irregular spacing between frequencies. Subjects had to say whether the tone sounded “natural” or “unnatural” to them, since a tone consisting of frequencies in a simple mathematical relationship sounds much more natural to us. My team and I are now studying how we can improve sensitivity to this “naturalness” in listeners with hearing loss, with the goal of providing individualized therapeutic options to address difficulties in speech-in-noise understanding.
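To make the two stimulus types concrete, here is a minimal sketch of how such tones could be synthesized: a “natural” harmonic complex whose components sit at exact integer multiples of a fundamental, versus an “unnatural” complex whose component frequencies are jittered. All parameter values (sample rate, fundamental, jitter amount) are illustrative assumptions, not the study’s actual stimuli.

```python
import numpy as np

# Hypothetical sketch of pitch-fusion stimuli (parameters are illustrative).
sr = 16000          # sample rate, Hz
dur = 0.5           # duration, s
f0 = 220.0          # fundamental frequency, Hz
n_harmonics = 6
t = np.arange(int(sr * dur)) / sr

rng = np.random.default_rng(0)

# Harmonic complex: components at exact integer multiples of f0
# (a "simple mathematical relationship" -> sounds natural).
harmonic_freqs = f0 * np.arange(1, n_harmonics + 1)
harmonic_tone = sum(np.sin(2 * np.pi * f * t) for f in harmonic_freqs)

# Inharmonic complex: each component's frequency jittered by up to 8%,
# breaking the integer-multiple spacing -> sounds unnatural.
jitter = 1 + rng.uniform(-0.08, 0.08, n_harmonics)
inharmonic_freqs = harmonic_freqs * jitter
inharmonic_tone = sum(np.sin(2 * np.pi * f * t) for f in inharmonic_freqs)

# Normalize both tones to the same peak level before presentation.
harmonic_tone /= np.max(np.abs(harmonic_tone))
inharmonic_tone /= np.max(np.abs(inharmonic_tone))
```

A listener who fuses the harmonic components well hears the first tone as a single pitched object; degraded frequency integration blurs the distinction between the two, which is the sensitivity the task probes.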

2017 ERG recipient Inyong Choi, Ph.D., is an assistant professor in the department of communication sciences and disorders at the University of Iowa in Iowa City.


We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.
