Evelyn Davies-Venn, Au.D., Ph.D.
Meet the Researcher
Evelyn Davies-Venn, Au.D., Ph.D., received her master’s degree, her doctor of audiology (Au.D.) degree, and her Ph.D. in audiology, all from the University of Washington, and completed postdoctoral training at Purdue University in Indiana. She is an assistant professor in the department of speech-language and hearing sciences at the University of Minnesota. Her ERG grant was partially funded by an anonymous donor.
More than 460 million people worldwide live with some form of hearing loss. For most, hearing aids are the primary rehabilitation tool, yet there is no one-size-fits-all approach. As a result, many hearing aid users are frustrated by their listening experiences, especially when trying to understand speech in noise.
Evelyn Davies-Venn, Au.D., Ph.D., of the University of Minnesota, is focusing on two projects—one of which is funded by Hearing Health Foundation (HHF) through its Emerging Research Grants (ERG) program—that will enhance the customization of hearing aids. She presented the two projects at the Hearing Loss Association of America convention in June.
Davies-Venn explains that some of the factors dictating individual variances in hearing aid outcomes in noisy environments include audibility, spectral resolution, and cognitive ability. Audibility—how much of the speech spectrum is available to the hearing aid user—is the biggest factor. “Speech must be audible before it is intelligible,” Davies-Venn says. Another primary factor is spectral resolution, or the ear’s ability to make use of the spectrum, or frequency changes, in sounds. This also directly affects listening outcomes.
Secondary factors include the user’s working memory and the volume of the amplified speech. These affect how well someone can make sense of distortions (from ambient noise as well as from signal processing) in an incoming speech signal. Working memory is needed to fill in context when speech fragments are missing, for instance. Needless to say, it is a challenge for conventional hearing aid technology to address all of these complex variables.
In light of this challenge for conventional hearing aids, Davies-Venn highlights two projects that are meant to improve hearing aid success. The first focuses on an emerging technology called the “cognitive control of a hearing aid,” or COCOHA. It is an improved hearing aid that will analyze multiple sounds, complete an acoustic scene analysis, and separate the sounds into individual streams, she says.
Then, based on the cognitive/electrophysiological recordings from the individual, the COCOHA will select the specific stream that the person is interested in listening to and amplify it—such as a particular speaker’s voice. The cognitive recording is captured with a noninvasive, far-field measure of electrical signals emitted from the brain in response to sound stimuli (similar to how an electroencephalogram, or EEG, captures signals).
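To make this idea concrete, below is a minimal, purely illustrative sketch of EEG-guided stream selection. It assumes the acoustic scene has already been separated into candidate streams and that an “attended” speech envelope has been reconstructed from the listener’s EEG (a common approach in auditory attention decoding research); it picks and boosts the stream whose envelope best matches the EEG-derived one. This is not COCOHA’s actual algorithm, and every function name and parameter here is hypothetical.

```python
# Toy sketch of EEG-guided stream selection (not COCOHA's real algorithm).
import numpy as np

def envelope(signal, frame=160):
    """Crude amplitude envelope: RMS over non-overlapping frames."""
    n = len(signal) // frame * frame
    frames = signal[:n].reshape(-1, frame)
    return np.sqrt((frames ** 2).mean(axis=1))

def select_attended_stream(streams, eeg_envelope, frame=160):
    """Pick the stream whose envelope correlates best with the
    envelope reconstructed from the listener's EEG."""
    scores = []
    for s in streams:
        env = envelope(s, frame)
        m = min(len(env), len(eeg_envelope))
        scores.append(np.corrcoef(env[:m], eeg_envelope[:m])[0, 1])
    return int(np.argmax(scores)), scores

def amplify_selected(streams, index, gain_db=12.0):
    """Boost the attended stream relative to the rest of the mix."""
    gain = 10 ** (gain_db / 20)
    rest = sum(s for i, s in enumerate(streams) if i != index)
    return rest + gain * streams[index]

# Toy usage with synthetic "talkers" (16 kHz, 2 seconds).
rng = np.random.default_rng(0)
fs = 16000
t = np.arange(2 * fs) / fs
talker_a = np.sin(2 * np.pi * 3 * t) * rng.standard_normal(len(t)) * 0.1
talker_b = np.sin(2 * np.pi * 5 * t) * rng.standard_normal(len(t)) * 0.1
# Pretend the EEG decoder recovered talker A's envelope, plus noise.
true_env = envelope(talker_a)
eeg_env = true_env + 0.01 * rng.standard_normal(len(true_env))
idx, scores = select_attended_stream([talker_a, talker_b], eeg_env)
output = amplify_selected([talker_a, talker_b], idx)
print(f"attended stream: {idx}, correlations: {scores}")
```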
Davies-Venn’s ERG grant will support research on the use of electrophysiology—far-field or distant (recorded at the scalp) electrical signals from the brain—to design hearing aid algorithms that can control individual variances due to level-induced (high-intensity) distortions from hearing aids.
The second project involves sensory substitution. It explores the conversion of speech to another sense—for example, touch—through a mobile processing device or a “skin hearing aid.” The device relays speech to the brain as patterns of vibration, which the wearer learns to interpret. The technology seems cutting edge, but it is believed to have been invented in the 1960s by Paul Bach-y-Rita, M.D., of the Smith-Kettlewell Institute of Visual Sciences in San Francisco. Even though it has not yet been incorporated into hearing aid technology intended for mass production, David Eagleman, Ph.D., of Stanford University, and others are hoping to make this a reality.
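As a toy illustration of how such a device could map sound onto the skin, the sketch below splits audio into frequency bands and converts each band’s energy into a drive level for one vibration motor. It demonstrates the general principle only; it is not a reconstruction of any actual device, and all names and parameters are invented for illustration.

```python
# Toy speech-to-vibration mapping (illustrative only, not a real device).
import numpy as np

def band_energies(audio, fs, n_motors=8, frame_ms=25):
    """Split each frame's spectrum into n_motors log-spaced bands and
    return one energy value per band per frame (rows = frames)."""
    frame = int(fs * frame_ms / 1000)
    n = len(audio) // frame * frame
    frames = audio[:n].reshape(-1, frame)
    spectra = np.abs(np.fft.rfft(frames, axis=1))
    freqs = np.fft.rfftfreq(frame, 1 / fs)
    edges = np.geomspace(100, fs / 2, n_motors + 1)  # log-spaced band edges
    return np.stack([
        spectra[:, (freqs >= lo) & (freqs < hi)].sum(axis=1)
        for lo, hi in zip(edges[:-1], edges[1:])
    ], axis=1)

def to_motor_levels(energies):
    """Compress band energies into 0-255 vibration drive levels."""
    db = 20 * np.log10(energies + 1e-9)
    db = np.clip(db, db.max() - 40, db.max())  # keep a 40 dB window
    scaled = (db - db.min()) / (db.max() - db.min() + 1e-9)
    return (scaled * 255).astype(np.uint8)

# Toy usage: 0.5 s of synthetic speech-like noise at 16 kHz.
rng = np.random.default_rng(1)
fs = 16000
audio = rng.standard_normal(fs // 2) * np.hanning(fs // 2)
levels = to_motor_levels(band_energies(audio, fs))
print(levels.shape)  # (frames, motors): one drive level per motor per frame
```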
Davies-Venn’s research is inspired by a personal connection to her work. “I have a conductive hearing loss myself,” she says. “I had persistent/chronic ear infections as a child that left me a bit delayed in developing speech. I still get ear infections as an adult and have grown accustomed to the low-frequency hearing loss that results until they resolve.” Davies-Venn notes that she also has family members with hearing loss, which makes developing advanced hearing assistance technology even more important to her.
Davies-Venn’s projects are in the early stages, and it may take as long as a decade for them to go from concept to market. “The goal is to develop individualized hearing aid signal processing to improve treatment outcomes in noisy soundscapes,” she says. “We want to say, this is the most optimal treatment protocol, and it’s different from this person’s, even though you have the same hearing threshold.”
Solving hearing aid variances in a precise, individual manner that accounts for variables such as age and cognitive ability will improve communication and quality of life for the millions with hearing loss who use hearing technology.
The Research
University of Minnesota
Behavioral and neural correlates of amplification outcome
What individual factors, beyond hearing thresholds, account for the high variability in hearing aid success is an important question with immediate clinical implications. This project aims to evaluate how fundamental aspects of auditory processing interact with high-intensity sounds and influence hearing aid amplification outcomes. Behavioral measures using both speech and non-speech stimuli will be used to determine how spectral auditory processing interacts with high-intensity sounds and influences amplification outcomes. This will help determine the factors that contribute to diminished speech perception in noisy environments for individuals with hearing loss, and how their perception of amplified speech can be enhanced.
Long-term goal: To better understand basic auditory mechanisms that are affected by background noise in order to improve hearing aid algorithm design and hearing loss treatment outcomes.