I Would Love to Hear the Conversation

By Joe Mussomeli

Music is another language that calls to my brother, Alex. Though he was born with hearing loss, he experiences music as more than just sounds, as something more beautiful. He sets his daily activities—painting, doing homework, or reading—to the melodies of either classical or popular music. 

Music for Alex, and for many others with hearing loss, is both a blessing and a curse. Loud music, especially in crowded spaces, can be a distraction for him. This became apparent recently at dinner in a restaurant with our parents. At first, he appreciated everything about the restaurant: the delicious smells, the cheerful faces, and the lively music. We talked amongst ourselves until problems arose for Alex. He struggles to hear what others say under ordinary circumstances, but in a loud restaurant, conversation is virtually impossible for him.

Restaurants pack diners and staff into close quarters, all of them immersed in their own simultaneous conversations. Music adds another layer of sound on top of these many voices. In this environment, Alex hears only a tornado of noises, all scrambled together, that makes no sense to him.

That evening at the restaurant, Alex desperately tried to make sense of what we were saying, but couldn’t. The noise was too loud and too much to bear. We tried to accommodate Alex by repeating our words or speaking closer to him. Unfortunately, as the evening went on, the restaurant got more crowded and the noises, including the music, grew louder.

Eventually, Alex couldn’t manage the noise anymore, so we left. When we got home, Alex sat in his room for hours before I eventually entered to ask if he was okay. He was unhappily replaying the experience in his head. He told me, “I was lost in a storm of noise, unable to find my way out.” 

I just sat there for a moment, unsure of how to respond, but I knew I had to say something. So, I asked Alex what he was going to do about his problem. Would he find a solution or simply refuse to go to another restaurant ever again? The choice was up to him. With that, Alex reflected, and eventually, an idea came to him: The Mini-Mic. 

The Mini-Mic is an assistive device Alex had previously used at school whenever he needed to hear others more clearly in crowded, noisy spaces. When someone speaks directly into the mic, the audio feeds into Alex’s hearing aid and cochlear implant. The mic had worked well in the classroom, so Alex figured that it could work successfully in a restaurant, too. After this realization, Alex was determined to give the restaurant another try.  

Nothing had changed at the restaurant, but Alex had. The crowded restaurant buzzed with loud chatter and music. Alex was not discouraged. As soon as we were seated, my mom placed the Mini-Mic on the table. Alex connected his implant and hearing aid to it, and then, he could hear everything. Just like everyone else, Alex was able to enjoy a meal and conversation at the same time. He was able to dine with us, talk with us, and laugh with us. And he was able to enjoy the music, playing vibrantly in the background.

Joe Mussomeli is an 11th-grade student who lives in Westport, CT. His younger brother, Alex, has been featured in Hearing Health magazine and is a participant in HHF’s “Faces of Hearing Loss” campaign.

Making Friends and Influencing People

By Kathi Mestayer

Lorrie Moore, the author of “Who Will Run the Frog Hospital?” was in town in Williamsburg, Virginia, giving a reading at Tucker Hall at the College of William and Mary. My friend Susan had invited me, and I actually remembered the author’s name and knew that book was somewhere on my shelf, so I said yes.

My husband Mac had read the book, and was sure I would like it. I managed to find it on our jumbled bookshelves, which are kind of in alphabetical order (for fiction, at least). And it was short, only 147 pages! Before long, I realized that I had already read it, too. Not because I remembered anything, mind you, but because my marks and scars were present: pencil lines in the margins, and a few dog-eared pages. Mac never marks up a book, or dog-ears the pages, and it drives him crazy when I do. So, it's usually easy to tell whether I've read a book. In this case, I was probably walking that fine line with my fine lines.

I got 33 pages into the book, and it was lively stuff. One passage I had circled (in ink!) was, “She inhaled and held the smoke deep inside, like the worst secret in the world, and then let it burst from her in a cry.” I love revisiting a book, like a stone skipping over water, hitting the high spots thanks to my notes.

So when the day of the reading arrived, I went to listen to Lorrie Moore read her favorite passages in her own voice.

Wishful Thinking

When I got to the lecture hall, I sat by Susan, who was fortunately in the second row, near the aisle. Someone introduced Lorrie Moore. I couldn’t hear most of that, but it didn’t really matter. Then she got up to read, holding a big, thick book (her latest), with a microphone clipped to her lapel.

I couldn’t hear a word of it. It seemed as though she was muttering softly, but I’m not a good judge of that. I leaned over and whispered to Susan that I was having trouble hearing and was going to sit in the front row. Well, Susan outed me immediately, and informed the guy who had introduced Lorrie that she wasn’t audible. While I tried to surreptitiously move to the very center of the front row, he asked Lorrie to hold the lapel mic in her hand, so it could be closer to her mouth.

She did that for a few minutes, but it got awkward when she needed to hold the book, too. And when she held the mic in her hand, it was so close to her mouth that her speech was distorted, with the P’s and T’s sounding like balloons popping. Tiny balloons, but enough to muddy her speech. For me.

So, at the suggestion of a young man on my right, she put the mic back on her lapel, but closer to her face. She asked, “Can everyone hear me now?” I didn’t turn around to see the response behind me, but it was clear that she got some no’s because she started playing around with the mic and saying, “How about now?” And, “Now?”

That was when one of the professors leapt over the front two rows, got on the stage, took the big, regular-mic holder (which was empty), bent it around to the front of the lectern, and clipped the tiny lapel mic to it. Okay. It was closer to her mouth, and she could use her hands for other things.

Let the reading begin. Again.

This time, she read for about 20 minutes, and I still couldn’t hear clearly enough to know what she was mumbling into the mic, with the P’s and T’s popping again due to its proximity to her mouth. I sat there patiently, not wanting to be disruptive again, and thought about other things, in between the audience’s intermittent chuckling. To my credit, I did not get my phone out to check my email.

After she was done, and the Q&A period started, I slunk out of the room, as quietly as possible. Others were doing the same, so I felt a little less rude. The next day, I got an apologetic email from Susan.

Not Just Me

A couple of days later, I was in a gift shop downtown, and a young woman behind the counter asked if I had been at the reading the other night in Tucker Hall. I said yes. Turns out, she was sitting right behind me. When I mentioned that I had a really hard time hearing in that space, she replied, “Oh, I HATE that room!  It’s the worst one on campus! I can never hear in there.”

The good news is that, the next time Susan invited me to a reading, she made a point of saying they had gotten the good mic back up and running. And, in fairness, making an entire campus of classrooms and other spaces hearing-friendly will take time, money… and attention. In fact, I’ve already managed to get an FM system installed in two auditoriums in another building on campus. So, slowly, the system is getting better, one complaint at a time.

I think of that passage I ink-circled, about inhaling smoke like a big secret and letting it burst forth. Advocating to hear can put you in the spotlight, uncomfortably, especially in a group situation, but we should let our needs burst forth to help others who are no doubt in the same situation.

Kathi Mestayer is a staff writer for Hearing Health magazine.

Detailing the Relationships Between Auditory Processing and Cognitive-Linguistic Abilities in Children

By Beula Magimairaj, Ph.D.

Children suspected of having, or diagnosed with, auditory processing disorder (APD) have difficulty understanding speech despite typical-range peripheral hearing and typical intellectual abilities. Children with APD (also known as central auditory processing disorder, CAPD) may experience difficulties while listening in noise, discriminating speech and non-speech sounds, recognizing auditory patterns, identifying the location of a sound source, and processing time-related aspects of sound, such as rapid sound fluctuations, or detecting short gaps between sounds. According to 2010 clinical practice guidelines by the American Academy of Audiology and a 2005 American Speech-Language-Hearing Association (ASHA) report, developmental APD is a unique clinical entity. According to ASHA, APD is not the result of cognitive or language deficits.

In our July 2018 study in the journal Language, Speech, and Hearing Services in Schools, for its special issue on working memory, my coauthor and I present a novel framework for conceptualizing auditory processing abilities in school-age children. The framework includes cognitive and linguistic factors alongside auditory factors as potential sources of deficits that may contribute, individually or in combination, to listening difficulties in children.

We present empirical evidence from hearing, language, and cognitive science in explaining the relationships between children’s auditory processing abilities and cognitive abilities such as memory and attention. We also discuss studies that have identified auditory abilities that are unique and may benefit from assessment and intervention. Our unified framework is based on studies from typically developing children; those suspected to have APD, developmental language impairment, or attention deficit disorders; and models of attention and memory in children. In addition, the framework is based on what we know about the integrated functioning of the nervous system and evidence of multiple risk factors in developmental disorders. A schematic of this framework is shown here.

[Image: schematic of the unified framework for auditory processing and cognitive-linguistic abilities]

In our publication, for example, we discuss how traditional APD diagnostic models show remarkable overlap with models of working memory (WM). WM refers to an active memory system that individuals use to hold and manipulate information in conscious awareness. Overlapping components among the models include verbal short-term memory capacity (auditory decoding and memory), integration of audiovisual information and information from long-term memory, and central executive functions such as attention and organization. Therefore, a deficit in the WM system can also potentially mimic the APD profile.

Similarly, auditory decoding (i.e., processing speech sounds), audiovisual integration, and organization abilities can influence language processing at various levels of complexity. For example, poor phonological (speech sound) processing abilities, such as those seen in some children with primary language impairment or dyslexia, could potentially lead to auditory processing profiles that correspond to APD. Auditory memory and auditory sequencing of spoken material are often challenging for children diagnosed with APD. These are the same integral functions attributed to the verbal short-term memory component of WM. Such observations are supported by the frequent co-occurrence of language impairment, APD, and attention deficit disorders.

Furthermore, it is important to note that cognitive-linguistic and auditory systems are highly interconnected in the nervous system. Therefore, heterogeneous profiles of children with listening difficulties may reflect a combination of deficits across these systems. This calls for a unified approach to model functional listening difficulties in children.

Given the overlap in developmental trajectories of auditory skills and WM abilities, the age at evaluation must be taken into account during assessment of auditory processing. The American Academy of Audiology does not recommend APD testing for children developmentally younger than age 7. Clinicians must therefore adhere to this recommendation to save time and resources for parents and children and to avoid misdiagnosis.

However, significant listening difficulties noted in children at any age (especially at younger ages) should prompt a speech-language evaluation, a peripheral hearing assessment, and a cognitive assessment. This is because identifying deficits or areas of risk in language or cognitive processing triggers the consideration of cognitive-language enrichment opportunities for these children. Early enrichment of overall language knowledge and processing abilities (e.g., phonological/speech sound awareness, vocabulary) has the potential to improve children's functional communication abilities, especially when listening in complex auditory environments.

Given the prominence of children's difficulty listening in complex auditory environments, and emerging evidence that speech perception in noise and spatialized listening are distinct from other auditory and cognitive factors, listening training in spatialized noise appears to hold promise as an intervention. This finding needs to be systematically replicated across independent research studies.

Other evidence-based implications discussed in our publication include improving auditory access using assistive listening devices (e.g., FM systems), using a hierarchical assessment model, or employing a multidisciplinary front-end screening of sensitive areas (with minimized overlap across audition, language, memory, and attention) prior to detailed assessments in needed areas.

Finally, we emphasize that prevention should be at the forefront. This calls for integrating auditory enrichment with meaningful activities such as musical experience, play, social interaction, and rich language experience beginning early in infancy while optimizing attention and memory load. While these approaches are not new, current research evidence on neuroplasticity makes a compelling case to promote auditory enrichment experiences in infants and young children.

A 2015 Emerging Research Grants (ERG) scientist generously funded by the General Grand Chapter Royal Arch Masons International, Beula Magimairaj, Ph.D., is an assistant professor in the department of communication sciences and disorders at the University of Central Arkansas. Magimairaj’s related ERG research on working memory appears in the Journal of Communication Disorders, and she wrote about an earlier paper from her ERG grant in the Summer 2018 issue of Hearing Health.

We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.

ERG Grantees' Advancements in OAE Hearing Tests, Speech-in-Noise Listening

By Yishane Lee and Inyong Choi, Ph.D.

Support for a Theory Explaining Otoacoustic Emissions: Fangyi Chen, Ph.D.

It’s a remarkable feature of the ear that it not only hears sound but also generates it. These sounds, called otoacoustic emissions (OAEs), were discovered in 1978. Thanks in part to ERG research in outer hair cell motility, measuring OAEs has become a common, noninvasive hearing test, especially among infants too young to respond to sound prompts.

There are two theories about how these emissions travel from the interior of the cochlea out toward its base. The traditional one is the backward traveling wave theory, in which sound emissions travel slowly as a transverse wave along the basilar membrane, which divides the cochlea into two fluid-filled cavities. In a transverse wave, the wave particles move perpendicular to the wave direction. But this theory does not explain some anomalies, leading to a second hypothesis: the fast compression wave theory, which holds that the emissions travel as a longitudinal wave via lymph fluids around the basilar membrane. In a longitudinal wave, the wave particles travel in the same direction as the wave motion.

Figuring out how the emissions are created will promote greater accuracy of the OAE hearing test and a better understanding of cochlear mechanics. Fangyi Chen, Ph.D., a 2010 Emerging Research Grants (ERG) recipient, started investigating the issue at Oregon Health & Science University and is now at China’s Southern University of Science and Technology. His team’s paper, published in the journal Neural Plasticity in July 2018, for the first time experimentally validates the backward traveling wave theory.

Chen and his coauthors—including Allyn Hubbard, Ph.D., and Alfred Nuttall, Ph.D., who are each 1989–90 ERG recipients—directly measured the basilar membrane vibration in order to determine the wave propagation mechanism of the emissions. The team stimulated the membrane at a specific location, allowing the vibration source that initiates the backward wave to be pinpointed. Then the resulting vibrations along the membrane were measured at multiple locations in vivo (in guinea pigs), showing a consistent lag as distance increased from the vibration source. The researchers also measured wave speeds on the order of tens of meters per second, much slower than the speed of a compression wave in water would be. The results were confirmed using a computer simulation. In addition to the wave propagation study, a mathematical model of the cochlea based on an acoustic electrical analogy was created and simulated. This model was used to interpret why no peak frequency-to-place map was observed in the backward traveling wave, explaining some of the previous anomalies associated with this OAE theory.
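The quantitative step here, turning a lag that grows with distance into a propagation speed, can be sketched as a simple least-squares fit. This is an illustrative calculation only, not the authors' actual analysis pipeline, and the distance and lag values below are made-up examples chosen to land in the reported tens-of-meters-per-second range:

```python
def estimate_speed(distances_m, lags_s):
    """Least-squares slope (through the origin) of distance vs. lag,
    i.e., the estimated wave propagation speed in meters per second."""
    numerator = sum(d * t for d, t in zip(distances_m, lags_s))
    denominator = sum(t * t for t in lags_s)
    return numerator / denominator

# Hypothetical measurements: the vibration lag grows linearly with
# distance from the stimulation source, consistent with a slow
# traveling wave rather than a fast compression wave.
distances = [0.001, 0.002, 0.003]   # meters (1-3 mm along the membrane)
lags = [50e-6, 100e-6, 150e-6]      # seconds

speed = estimate_speed(distances, lags)
# ~20 m/s here: tens of m/s, far below the ~1,500 m/s at which a
# compression wave would travel through water
```

A fast compression wave would produce lags too small to show this linear growth over millimeter distances, which is why the slope measurement distinguishes the two theories.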

Speech-in-Noise Understanding Relies on How Well You Combine Information Across Multiple Frequencies: Inyong Choi, Ph.D.

Understanding speech in noisy environments is a crucial ability for communication, yet many individuals, with and without hearing loss, struggle with it. Our study in Hearing Research, published in September 2018, finds that how well you combine information across multiple frequencies is a critical factor for good speech-in-noise understanding. We tested this with a pitch-fusion task in "hybrid" cochlear implant users, who receive both low-frequency acoustic and high-frequency electric stimulation within the same ear.

In the pitch-fusion task, subjects heard either a tone consisting of many frequencies in a simple mathematical relationship or a tone with more irregular spacing between frequencies. Subjects had to say whether the tone sounded "natural" or "unnatural" to them, since a tone whose frequencies are in a simple mathematical relationship sounds much more natural to us. My team and I are now studying how to improve sensitivity to this "naturalness" in listeners with hearing loss, with the goal of providing individualized therapeutic options for difficulties with speech-in-noise understanding.
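Stimuli of the kind described, a harmonic complex tone versus one with irregularly spaced partials, can be sketched as follows. This is an illustrative reconstruction; the actual frequencies, number of partials, and amount of spacing irregularity used in the study are assumptions here:

```python
import math
import random

def partial_frequencies(f0, n_partials, jitter=0.0, rng=None):
    """Frequencies of a complex tone. With jitter=0, partials are exact
    integer multiples of f0 (a harmonic tone, heard as "natural");
    with jitter > 0, each partial is shifted randomly by up to
    jitter * f0 (an inharmonic tone, heard as "unnatural")."""
    rng = rng or random.Random(0)
    freqs = []
    for k in range(1, n_partials + 1):
        shift = rng.uniform(-jitter, jitter) * f0 if jitter else 0.0
        freqs.append(k * f0 + shift)
    return freqs

def synthesize(freqs, duration=0.5, sample_rate=44100):
    """Sum of equal-amplitude sine partials, as a list of samples."""
    n = int(duration * sample_rate)
    return [sum(math.sin(2 * math.pi * f * i / sample_rate) for f in freqs)
            for i in range(n)]

harmonic = partial_frequencies(200, 5)              # 200, 400, ..., 1000 Hz
inharmonic = partial_frequencies(200, 5, jitter=0.1)
tone = synthesize(harmonic, duration=0.1)
```

A listener who fuses the acoustic and electric frequency ranges well should reliably hear the first tone as natural and the second as not, which is the sensitivity the task measures.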

2017 ERG recipient Inyong Choi, Ph.D., is an assistant professor in the department of communication sciences and disorders at the University of Iowa in Iowa City.

