EVELYN DAVIES-VENN, AU.D., PH.D.
More than 460 million people worldwide live with some form of hearing loss. For most, hearing aids are the primary rehabilitation tool, yet there is no one-size-fits-all approach. As a result, many hearing aid users are frustrated by their listening experiences, especially when trying to understand speech in noise.
Evelyn Davies-Venn, Au.D., Ph.D., of the University of Minnesota, is focusing on two projects—one of which is funded by Hearing Health Foundation (HHF) through its Emerging Research Grants (ERG) program—that will enhance the customization of hearing aids. She presented the two projects at the Hearing Loss Association of America convention in June.
Davies-Venn explains that the factors behind individual variance in hearing aid outcomes in noisy environments include audibility, spectral resolution, and cognitive ability. Audibility, or how much of the speech spectrum is available to the hearing aid user, is the biggest factor. “Speech must be audible before it is intelligible,” Davies-Venn says. Another primary factor is spectral resolution, the ear’s ability to make use of spectrum, or frequency, changes in sounds; this, too, directly affects listening outcomes.
Secondary factors include the user’s working memory and the volume of the amplified speech. These affect how well someone can make sense of distortions (from ambient noise as well as from signal processing) in an incoming speech signal. Working memory helps supply context when speech fragments are missing, for instance. Needless to say, it is a challenge for conventional hearing aid technology to address all of these complex variables.
To meet this challenge, Davies-Venn highlights two projects designed to improve hearing aid success. The first focuses on an emerging technology called the “cognitive control of a hearing aid,” or COCOHA. It is an improved hearing aid that will analyze multiple sounds, complete an acoustic scene analysis, and separate the sounds into individual streams, she says.
Then, based on the cognitive/electrophysiological recordings from the individual, the COCOHA will select the specific stream that the person is interested in listening to and amplify it—such as a particular speaker’s voice. The cognitive recording is captured with a noninvasive, far-field measure of electrical signals emitted from the brain in response to sound stimuli (similar to how an electroencephalogram, or EEG, captures signals).
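The stream-selection idea described above can be illustrated with a toy sketch: correlate a (simulated) EEG trace with the envelope of each separated stream, then amplify the best-matching one. The function name, the plain-correlation decoder, and the synthetic signals below are illustrative assumptions, not the COCOHA project's actual algorithm:

```python
import numpy as np

def select_attended_stream(streams, eeg, gain=2.0):
    """Toy attention decoder: score each separated stream by the
    correlation between its envelope and the EEG trace, then boost
    the highest-scoring stream. Purely a sketch."""
    envelopes = [np.abs(s) for s in streams]          # crude amplitude envelopes
    scores = [np.corrcoef(env, eeg)[0, 1] for env in envelopes]
    attended = int(np.argmax(scores))                 # stream the listener "attends to"
    out = [s * gain if i == attended else s for i, s in enumerate(streams)]
    return attended, out

# Two synthetic "streams"; the fake EEG tracks stream 0's envelope.
rng = np.random.default_rng(0)
s0 = np.sin(np.linspace(0, 20, 500)) * (1 + 0.5 * np.sin(np.linspace(0, 4, 500)))
s1 = rng.normal(size=500)
eeg = np.abs(s0) + 0.1 * rng.normal(size=500)         # noisy envelope-following response
idx, mixed = select_attended_stream([s0, s1], eeg)
print(idx)  # stream 0 is decoded as the attended source
```

Real attention decoders use regularized regression over time-lagged EEG channels rather than a single correlation, but the selection-then-amplification structure is the same.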
Davies-Venn’s ERG grant will support research on the use of electrophysiology—far-field or distant (recorded at the scalp) electrical signals from the brain—to design hearing aid algorithms that can control individual variances due to level-induced (high intensity) distortions from hearing aids.
The second project involves sensory substitution. This project explores the conversion of speech to another sense—for example, touch—through a mobile processing device or a “skin hearing aid.” For the device to function, a vibration is relayed to the brain for speech interpretation. This technology seems cutting edge, but is believed to have been invented in the 1960s by Paul Bach-y-Rita, M.D., of the Smith-Kettlewell Institute of Visual Sciences in San Francisco. Even though it has not yet been incorporated into hearing aid technology intended for mass production, David Eagleman, Ph.D., of Stanford University, and others are hoping to make this a reality.
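The “skin hearing aid” mapping can also be sketched loosely: split a speech frame's spectrum into a handful of frequency bands and drive one vibration motor per band. The motor count, band layout, and function name here are hypothetical, chosen only to illustrate the speech-to-touch conversion:

```python
import numpy as np

def speech_to_vibration(frame, n_motors=8):
    """Hypothetical speech-to-touch mapping: divide the frame's
    magnitude spectrum into n_motors contiguous bands and return
    each band's mean energy, normalized to 0..1 motor intensities."""
    spectrum = np.abs(np.fft.rfft(frame))             # magnitude spectrum of the frame
    bands = np.array_split(spectrum, n_motors)        # contiguous frequency bands
    energy = np.array([b.mean() for b in bands])      # per-band (per-motor) energy
    peak = energy.max()
    return energy / peak if peak > 0 else energy      # normalized drive levels

# A pure 1 kHz tone (16 kHz sample rate) concentrates energy in the lowest band.
t = np.arange(1024) / 16000
intensities = speech_to_vibration(np.sin(2 * np.pi * 1000 * t))
print(intensities.round(2))  # motor 0 (lowest band) dominates
```

A deployed device would use perceptually spaced bands and smoothing across frames, but the core idea, spectral energy mapped to spatial vibration patterns, is the same.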
Davies-Venn’s research motives are inspired by a personal connection to her work. “I have a conductive hearing loss myself,” she says. “I had persistent/chronic ear infections as a child that left me a bit delayed in developing speech. I still get ear infections as an adult and have grown accustomed to the low-frequency hearing loss that results until they resolve.” Davies-Venn notes that she also has family members with hearing loss, which makes developing advanced hearing assistance technology even more important to her.
Davies-Venn’s projects are in the early stages, and it may take as long as a decade for them to move from concept to market. “The goal is to develop individualized hearing aid signal processing to improve treatment outcomes in noisy soundscapes,” she says. “We want to say, this is the most optimal treatment protocol, and it’s different from this person’s, even though you have the same hearing threshold.”
Solving hearing aid variances in a precise, individual manner that accounts for variables such as age and cognitive ability will improve communication and quality of life for the millions with hearing loss who use hearing technology.
Rachael R. Baiduc, Ph.D., MPH
Timothy Balmer, Ph.D.
Renee Banakis Hartl, M.D., Au.D.
Joseph H. Bochner, Ph.D.
Angela Yarnell Bonino, Ph.D., CCC-A
Inyong Choi, Ph.D.
Oscar Diaz-Horta, Ph.D.
David Ehrlich, Ph.D.
Alisha Lambeth Jones, Au.D., Ph.D.
David Jung, M.D., Ph.D.
Ngoc-Nhi Luu, M.D., Dr. Med.
Senthilvelan Manohar, Ph.D.
Tenzin Ngodup, Ph.D.
Clive Morgan, Ph.D.
Khaleel Razak, Ph.D.
Christina Reuterskiöld, Ph.D.
Jennifer Resnik, Ph.D.
Michael Roberts, Ph.D.
Sandeep Sheth, Ph.D.
Ian Swinburne, Ph.D.
Xiaodong Tan, Ph.D.
Joseph Toscano, Ph.D.
Babak Vazifehkhahghaffari, Ph.D.
A. Catalina Vélez-Ortega, Ph.D.
Philippe Vincent, Ph.D.