
9 Things to Know About Noise-Induced Hearing Loss (NIHL)

Noise-induced hearing loss is probably the biggest global public health emergency you’ve never heard of. The World Health Organization (WHO) reports that 1 in 5 U.S. teens (ages 12–19) has a measurable hearing loss, most likely caused by exposure to loud noise.


A Hope of Hearing Clearer

Interviewing an expert to learn more about a particular subject is fascinating and fun. Talking to someone accomplished and learning about them and their experiences is exhilarating. But if you have a hearing loss, this can be a stressful endeavor.


Train Your Brain to Listen

One of the most important things a person with hearing loss can do is to develop listening strategies. Auditory training, or auditory rehabilitation, is essentially a formal program for teaching the brain to recognize speech and other sounds that may not be as clear as they are with typical hearing.


A Multidisciplinary Approach to Overcome Pediatric Listening Difficulties

Listening difficulties occur in children diagnosed with auditory processing disorders (also known as central auditory processing disorders) and may co-occur in children who have developmental language disorder or attention and memory deficits. Persistent listening difficulties negatively affect children's learning and day-to-day functioning. Studying the factors that influence children’s listening performance through a unified multidisciplinary approach is crucial to better identify and manage the deficits that contribute to these difficulties.


Pinpointing How the Temporal Processing of Nerve Cell Signals Is Important in Hearing

In our study published in the Journal of Neurophysiology, we examined temporal processing in knockout mice with disrupted cholinergic signaling compared with wild-type mouse controls. The findings underscore the importance of cholinergic signaling in certain neurodevelopmental and auditory processing disorders.


ERG Grantees' Advancements in OAE Hearing Tests, Speech-in-Noise Listening

By Yishane Lee and Inyong Choi, Ph.D.

Support for a Theory Explaining Otoacoustic Emissions: Fangyi Chen, Ph.D.


It’s a remarkable feature of the ear that it not only hears sound but also generates it. These sounds, called otoacoustic emissions (OAEs), were discovered in 1978. Thanks in part to ERG research on outer hair cell motility, measuring OAEs has become a common, noninvasive hearing test, especially for infants too young to respond to sound prompts.

There are two theories about how the ear produces its own sound emanating from the interior of the cochlea out toward its base. The traditional one is the backward traveling wave theory, in which sound emissions travel slowly as a transverse wave along the basilar membrane, which divides the cochlea into two fluid-filled cavities. In a transverse wave, the wave particles move perpendicular to the wave direction. But this theory does not explain some anomalies, leading to a second hypothesis: The fast compression wave theory holds that the emissions travel as a longitudinal wave via lymph fluids around the basilar membrane. In a longitudinal wave, the wave particles travel in the same direction as the wave motion.
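To get a feel for what separates the two hypotheses, here is a rough back-of-the-envelope sketch, not taken from the study itself: with an assumed uncoiled cochlear length and assumed order-of-magnitude wave speeds, the two theories predict very different travel times for an emission.

```python
# Rough comparison of predicted travel times under each theory.
# All values below are assumed, order-of-magnitude figures for illustration.
cochlea_length_m = 0.018             # ~18 mm uncoiled basilar membrane (assumed)
traveling_wave_speed_m_s = 20.0      # slow transverse traveling wave, tens of m/s (assumed)
compression_wave_speed_m_s = 1500.0  # longitudinal compression wave in water-like lymph (assumed)

t_traveling = cochlea_length_m / traveling_wave_speed_m_s      # ~0.9 ms
t_compression = cochlea_length_m / compression_wave_speed_m_s  # ~12 microseconds

print(f"Backward traveling wave: ~{t_traveling * 1e3:.2f} ms")
print(f"Fast compression wave:   ~{t_compression * 1e6:.0f} microseconds")
```

Because the predicted delays differ by roughly two orders of magnitude, measuring how fast an emission actually propagates can discriminate between the two theories.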

Figuring out how the emissions are created will promote greater accuracy of the OAE hearing test and a better understanding of cochlear mechanics. Fangyi Chen, Ph.D., a 2010 Emerging Research Grants (ERG) recipient, started investigating the issue at Oregon Health & Science University and is now at China’s Southern University of Science and Technology. His team’s paper, published in the journal Neural Plasticity in July 2018, for the first time experimentally validates the backward traveling wave theory.

Chen and his coauthors, including Allyn Hubbard, Ph.D., and Alfred Nuttall, Ph.D. (both 1989–90 ERG recipients), directly measured basilar membrane vibration in order to determine the wave propagation mechanism of the emissions. The team stimulated the membrane at a specific location, allowing the vibration source that initiates the backward wave to be pinpointed. The resulting vibrations were then measured at multiple locations along the membrane in vivo (in guinea pigs), showing a lag that grew consistently with distance from the vibration source. The researchers also measured wave speeds on the order of tens of meters per second, far slower than a compression wave would travel in water. The results were confirmed using a computer simulation. In addition to the wave propagation study, a mathematical model of the cochlea based on an acoustic-electrical analogy was created and simulated. This model was used to interpret why no peak frequency-to-place map was observed in the backward traveling wave, explaining some of the anomalies previously associated with this OAE theory.
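As a hypothetical illustration of that logic (the measurement positions and onset lags below are invented, not the authors' data), the propagation speed falls out as the slope of distance versus lag:

```python
import numpy as np

# Hypothetical measurement positions (distance from the stimulation site, in meters)
# and vibration onset lags (in seconds); invented values for illustration only.
positions_m = np.array([0.001, 0.002, 0.003, 0.004])
lags_s = np.array([50e-6, 100e-6, 150e-6, 200e-6])

# If the lag grows linearly with distance, the slope of distance vs. lag is the wave speed.
speed_m_s, _ = np.polyfit(lags_s, positions_m, 1)
print(f"Estimated propagation speed: {speed_m_s:.0f} m/s")  # ~20 m/s, far below ~1,500 m/s in water
```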

Speech-in-Noise Understanding Relies on How Well You Combine Information Across Multiple Frequencies: Inyong Choi, Ph.D.

Understanding speech in noisy environments is crucial for communication, yet many individuals, with or without hearing loss, struggle with it. Our study in Hearing Research, published in September 2018, finds that how well a listener combines information across multiple frequencies is a critical factor in good speech-in-noise understanding. We tested this ability with a pitch-fusion task in "hybrid" cochlear implant users, who receive both low-frequency acoustic and high-frequency electric stimulation within the same ear.

In the pitch-fusion task, subjects heard either a tone consisting of many frequencies in a simple mathematical relationship or a tone with more irregular spacing between its frequencies. Subjects had to say whether the tone sounded "natural" or "unnatural" to them; a tone whose frequencies stand in a simple mathematical relationship sounds much more natural to us. My team and I are now studying how to improve sensitivity to this "naturalness" in listeners with hearing loss, with the goal of providing individualized therapeutic options for difficulties with speech-in-noise understanding.
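As a rough sketch of what such stimuli can look like (the fundamental frequency, number of components, and amount of frequency jitter below are assumed for illustration, not taken from the study), a harmonic complex and an irregularly spaced one can be generated like this:

```python
import numpy as np

sample_rate = 44100                                   # samples per second (assumed)
t = np.arange(int(sample_rate * 0.5)) / sample_rate   # 0.5-second tone
f0 = 200.0                                            # assumed fundamental frequency, Hz

def complex_tone(component_freqs):
    """Sum equal-amplitude sinusoids at the given frequencies and normalize."""
    tone = sum(np.sin(2 * np.pi * f * t) for f in component_freqs)
    return tone / np.max(np.abs(tone))

# Harmonic complex: components are integer multiples of f0 (a simple mathematical relationship).
harmonic = complex_tone([f0 * k for k in range(1, 6)])

# Inharmonic complex: the same components, each shifted by a random amount (irregular spacing).
rng = np.random.default_rng(0)
jitter_hz = rng.uniform(-40, 40, size=5)
inharmonic = complex_tone([f0 * k + j for k, j in zip(range(1, 6), jitter_hz)])
```

The harmonic version tends to fuse into a single pitch and sound "natural," while the jittered version sounds less so.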

2017 ERG recipient Inyong Choi, Ph.D., is an assistant professor in the department of communication sciences and disorders at the University of Iowa in Iowa City.


We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.


Clear Speech: It’s Not Just About Conversation

By Kathi Mestayer

In the Spring 2018 issue of Hearing Health, we talk about ways to help our conversational partners speak more clearly, so we can understand them better.

But what about public broadcast speech? It comes to us via phone, radio, television, and computer screen, as well as those echo-filled train stations, bus terminals, and airports. There’s room for improvement everywhere.


This digital oscilloscope representation of speech, with pauses, shows that gaps as short as a few milliseconds are used to separate words and syllables. According to Frank Musiek, Ph.D., CCC-A, a professor of speech, language and hearing sciences at the University of Arizona, people with some kinds of hearing difficulties require longer than normal gap intervals in order to perceive them.
Credit: Frank Musiek

In some cases, like Amtrak’s 30th Street Station in Philadelphia [LISTEN], clear speech is a real challenge. The beautiful space has towering cathedral ceilings, and is wildly reverberant, like a huge echo chamber. Even typical-hearing people can’t understand a word that comes over the PA system. Trust me; I’ve asked several times.

In that space, a large visual display in the center of the hall and the lines of people moving toward the boarding areas get the message across: It’s time to get on the train. I wonder why they even bother with the announcements, except that they signal that something is going on, so people will check the display.

Radio is very different, at least in my kitchen. There are no echoes, so I can enjoy listening to talk radio while I make my coffee in the morning. The other day, an on-air acknowledgment of one of the station’s nonprofit supporters came across as: “…supporting creative people and defective institutions…”

Huh? That couldn’t be right. It took a few seconds for me to realize what had actually been said: “supporting creative people and effective institutions.” Inter-word pauses are one of the key characteristics of clear speech. A slightly longer pause between the words “and” and “effective” would, in this case, have done the trick.
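To make that concrete, here is a hypothetical sketch (placeholder audio and assumed gap durations, not a measurement of the actual broadcast) of how lengthening the silent gap between two words separates them in the signal:

```python
import numpy as np

sample_rate = 16000  # samples per second (assumed)

def silence(duration_s):
    """A silent gap of the given duration."""
    return np.zeros(int(sample_rate * duration_s))

# Placeholder "word" audio; in a real signal these would be recorded speech segments.
word_and = 0.1 * np.random.randn(int(0.20 * sample_rate))
word_effective = 0.1 * np.random.randn(int(0.45 * sample_rate))

# A very short gap lets the final /d/ of "and" run into "effective" ("...and defective...");
# a slightly longer gap keeps the words perceptually separate. Durations are assumed.
rushed = np.concatenate([word_and, silence(0.005), word_effective])  # ~5 ms gap
clear = np.concatenate([word_and, silence(0.060), word_effective])   # ~60 ms gap
```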

In the meantime, I chuckle every time that segment airs (which is often), and wonder if anyone else thinks about the defective institutions!

Staff writer Kathi Mestayer serves on advisory boards for the Virginia Department for the Deaf and Hard of Hearing and the Greater Richmond, Virginia, chapter of the Hearing Loss Association of America.
