2023

Timothy Balmer, Ph.D.

Arizona State University
The role of unipolar brush cells in vestibular circuit processing and in balance

The cerebellum receives vestibular sensory signals and is crucial for balance, posture, and gait. Disruption of the vestibular signals that are processed by the vestibular cerebellum, as in the case of Ménière’s disease, leads to profound disability. Our lack of understanding of the circuitry and physiology of this part of the vestibular system makes developing treatments for vestibular disorders extremely difficult. This project focuses on a cell type in the vestibular cerebellum called the unipolar brush cell (UBC). UBCs process vestibular sensory signals and amplify them to downstream targets. However, the identity of these targets, and how they process UBC input, is not understood. In addition, the role of UBCs in vestibular function must be clarified. The experiments outlined here will identify the targets of UBCs, their synaptic responses, and the role of UBCs in balance. A better understanding of vestibular cerebellar circuitry and function will help us identify the causes of vestibular disorders and suggest possible treatments for them.

Long-term goal: To develop a better understanding of the neural circuits that underlie vestibular function. A more complete understanding of the circuitry and physiology of the vestibular cerebellum is necessary to develop therapies for vestibular dysfunction caused by peripheral disorders such as Ménière’s disease.

Francisco Barros-Becker, Ph.D.

University of Washington

Aminoglycoside compartmentalization and its role in hair cell death

The goals for this project are to develop new tools that will help the scientific community deepen its understanding of the vesicular network in hair cells, under both normal and stress conditions. We will develop novel fluorescent probes to mark the different compartments in the vesicular network. This will allow for visualization of the drug as it transitions through various levels within the vesicular network. To better analyze these structures, we will pair these images with custom-made image analysis algorithms that will allow us to study vesicles in greater depth. Overall, these tools will open new research avenues that will help us further understand how aminoglycosides, and other drugs with ototoxic effects, such as the cancer chemotherapy drug cisplatin, kill hair cells. Our results could help direct new research and lead to novel therapeutic treatments to avoid further hearing loss in patients undergoing these treatments.

George Burwood, Ph.D.

Oregon Health & Science University

Apical cochlear mechanics after cochlear implantation

The long-term research goal is to establish the causes of, treat, and prevent cochlear implantation-induced hearing loss. This mechanics project marks the first time the vibration of the inner ear has been measured in the presence of a cochlear implant, and there is much to discover: measuring the efficacy of drugs that help to suppress scarring, testing different electrode designs, and even extending the approach to other diseases of the inner ear such as Ménière’s disease. I believe that optical coherence tomography has a big role to play in the future of both basic hearing science and hearing restoration.

James Dewey, Ph.D.

University of Southern California
Filtering of otoacoustic emissions: a window onto cochlear frequency tuning

Healthy ears emit sounds that can be measured in the ear canal with a sensitive microphone. These otoacoustic emissions (OAEs) offer a noninvasive window onto the mechanical processes within the cochlea that confer typical hearing, and are commonly measured in the clinic to detect hearing loss. Nevertheless, their interpretation remains limited by uncertainties regarding how they are generated within the cochlea and how they propagate out of it. Through experiments in mice, this project will test theoretical relationships that suggest that OAEs are strongly shaped (or “filtered”) as they travel through the cochlea, and that this filtering is related to how well the ear can discriminate sounds at different frequencies. This may lead to novel, noninvasive tests of human cochlear function, and specifically frequency discrimination, which is important for understanding speech.

James W. Dias, Ph.D.

The Medical University of South Carolina
Neural determinants of age-related change in auditory-visual speech processing

Older adults typically have more difficulty than younger adults identifying the speech they hear, especially in noisy listening environments. However, some older adults demonstrate a preserved ability to identify speech that is both heard and seen. This preserved audiovisual speech perception by older adults is not explained by an improved ability to speechread (lipread), as speechreading also typically declines with age. Instead, older adults can exhibit an improved ability to integrate information available from across auditory and visual sources. This behavioral evidence is consistent with findings suggesting that the neural processing of audiovisual speech can improve with age. Despite the accumulating and intriguing evidence, the underlying changes in brain structure and function that support the preservation of audiovisual speech perception in older adults remain a critical knowledge gap. This project uses an innovative neural systems approach to determine how age-related changes in cortical structure and function, both within and between regions of the brain, can preserve audiovisual speech perception in older adults.

Mishaela DiNino, Ph.D.

Carnegie Mellon University
Neural mechanisms of speech sound encoding in older adults

Many older adults have trouble understanding speech in noisy environments, often to a greater extent than their hearing thresholds would predict. Age-related changes in the central auditory system, not just hearing loss, are thought to contribute to this perceptual impairment, but the exact mechanisms by which this would occur are not yet known. As individuals age, auditory neurons become less able to synchronize to the timing information in sound. This project will examine the relationship between reduced neural processing of fine timing information and older adults’ ability to encode the acoustic building blocks of speech sounds. Limited capacity to code and use these acoustic cues might impair speech perception, particularly in the presence of background noise, independent of hearing thresholds. The results of this study will provide a better understanding of how the neural mechanisms important for speech-in-noise recognition may be altered with age, laying the groundwork for development of novel treatments for older adults who experience difficulty perceiving speech in noise.

Subong Kim, Ph.D.

Purdue University
Influence of individual pathophysiology and cognitive profiles on noise tolerance and noise reduction outcomes

Listening to speech in noisy environments can be significantly challenging for people with hearing loss, even with help from hearing aids. Current digital hearing aids are commonly equipped with noise-reduction algorithms; however, noise-reduction processing introduces inevitable distortions of speech cues while attenuating noise. It is known that hearing-impaired listeners with similar audiograms react very differently to background noise and noise-reduction processing in hearing aids, but the biological mechanisms contributing to that variability are particularly understudied.

This project is focused on combining an array of physiological and psychophysical measures to obtain comprehensive hearing and cognitive profiles for listeners. We hope this approach will allow us to explain individual noise tolerance and sensitivity to speech-cue distortions induced by noise-reduction processing in hearing aids. With these distinct biological profiles, we will have a deeper understanding of individual differences in listeners and how those profiles affect communication outcomes across patients who are clinically classified with the same hearing status. This study’s results will assist in the development of objective diagnostics for hearing interventions tailored to individual needs.

Manoj Kumar, Ph.D.

University of Pittsburgh
Signaling mechanisms of auditory cortex plasticity after noise-induced hearing loss

Exposure to loud noise is the most common cause of hearing loss, which can also lead to hyperacusis and tinnitus. Despite the high prevalence and adverse consequences of noise-induced hearing loss (NIHL), treatment options are limited to cognitive behavioral therapy and hearing prosthetics. Therefore, to aid in the development of pharmacotherapeutic or rehabilitative treatment options for impaired hearing after NIHL, it is imperative to identify the precise signaling mechanisms underlying auditory cortex plasticity after NIHL. It is well established that reduced GABAergic signaling contributes to the plasticity of the auditory cortex after the onset of NIHL. However, the role and the timing of plasticity of the different subtypes of GABAergic inhibitory neurons remain unknown. Here, we will employ in vivo two-photon Ca2+ imaging to track the different subtypes of GABAergic inhibitory neurons after NIHL at single-cell resolution in awake mice. Determining the inhibitory circuit mechanisms underlying the plasticity of the auditory cortex after NIHL will reveal novel therapeutic targets for treating and rehabilitating impaired hearing after NIHL. Also, because auditory cortex plasticity is associated with hyperexcitability-related disorders such as tinnitus and hyperacusis, a detailed mechanistic understanding of auditory cortex plasticity will highlight a pathway toward the development of novel treatments for these disorders.

Matthew Masapollo, Ph.D.

University of Florida
Contributions of auditory and somatosensory feedback to speech motor control in congenitally deaf 9- to-10-year-olds and adults

Cochlear implants (CIs) have led to stunning advances in prospects for children with congenital hearing loss to acquire spoken language in a typical manner, but problems persist. In particular, children with CIs show much larger deficits in acquiring sensitivity to the individual speech sounds of language (phonological structure) than in acquiring vocabulary and syntax. This project will test the hypothesis that the acquisition of detailed phonological representations would be facilitated by a stronger emphasis on the speech motor control associated with producing those representations. This approach is novel because most interventions for children with CIs focus strongly on listening to spoken language, which may overlook the importance of practice in producing language, an idea we will examine. To achieve that objective, we will observe speech motor control directly in speakers with congenital hearing loss and CIs, with and without sensory feedback.

Carolyn McClaskey, Ph.D.

Medical University of South Carolina

Age and hearing loss effects on subcortical envelope encoding

Generously funded by Royal Arch Research Assistance

Sharlen Moore, Ph.D.

Johns Hopkins University

Modulation of neuro-glial cortical networks during tinnitus

My long-term goals are to understand the complexity and temporal sequencing of tinnitus effectors from an integrative perspective, considering the interplay of the diverse cell types that might promote the development and maintenance of tinnitus, in order to provide an updated interpretation of this disorder, and to use glial cells as a key therapeutic target to treat tinnitus.

Generously funded by the Les Paul Foundation

Z. Ellen Peng, Ph.D.

University of Wisconsin-Madison
Investigating cortical processing during comprehension of reverberant speech in adolescents and young adults with cochlear implants

Through early cochlear implant (CI) fitting, many children diagnosed with profound sensorineural hearing loss gain access to verbal communication through electrical hearing and go on to develop spoken language. Despite good speech outcomes tested in sound booths, many children experience difficulties in understanding speech in noisy and reverberant indoor environments. Although children spend up to 75 percent of their learning time in classrooms, how adverse acoustics compound the difficulty of processing degraded speech through a CI is not well understood. In this project, we examine speech understanding in classroom-like environments through immersive acoustic virtual reality. In addition to behavioral responses, we measure neural activity using functional near-infrared spectroscopy (fNIRS), a noninvasive, CI-compatible neuroimaging technique, in cortical regions that are responsible for speech understanding and sustained attention. Our findings will reveal the neural signature of speech processing by CI users, who developed language through electrical hearing, in classroom-like environments with adverse room acoustics.

Melissa Polonenko, Ph.D.

University of Minnesota–Twin Cities

Identifying hearing loss through neural responses to engaging stories

Spoken language acquisition in children with hearing loss relies on early identification of hearing loss followed by timely fitting of hearing devices, to ensure that children receive an adequate representation of the speech they need to hear. Yet current tests for young children rely on non-speech stimuli, which are processed differently by hearing aids and do not fully capture the complexity of speech. This project will develop a new and efficient test, called multiband peaky speech, that uses engaging narrated stories and records responses from the surface of the head (EEG) to identify frequency-specific hearing loss. Computer modeling and EEG experiments in adults will determine the best combination of parameters and stories to speed up the testing for use in children, and evaluate the test’s ability to identify hearing loss. This work lays the necessary groundwork for extending this method to children and paves the way for clinics to use this test as a hearing screener for young children, ultimately supporting our ability to provide timely, enhanced information for spoken language development.

The long-term goal is to develop an engaging, objective clinical test that uses stories to identify hearing loss in young children and evaluate changes to their responses to the same speech through hearing aids. This goal addresses two important needs identified by the U.S.’s Early Hearing Detection and Intervention (EHDI) program, and will positively impact the developmental trajectory of thousands of children who need monitoring of their hearing status and evaluation of outcomes with their hearing devices.

Generously funded by Royal Arch Research Assistance

Megan Beers Wood, Ph.D.

Johns Hopkins University School of Medicine
Type II auditory nerve fibers as instigators of the cochlear immune response after acoustic trauma

A subset of patients with hyperacusis experience pain in the presence of typically tolerated sound. Little is known about the origin of this pain. One hypothesis is that the type II auditory nerve fibers (type II neurons) of the inner ear may act as pain receptors after exposure to damaging levels of noise (acoustic damage). Our lab has shown that type II neurons share key characteristics with pain neurons: They respond to tissue damage; they are hyperactive after acoustic damage; and they express genes similar to pain neurons, such as the gene for CGRP-alpha. However, type II neurons are not the only cell types that respond to acoustic damage. The immune system responds quickly after damaging noise exposure. In other systems of the body such as the skin, CGRP-alpha can affect immune cell function. This project looks at the expression of CGRP-alpha in type II neurons after noise exposure. CGRP-alpha will be blocked during noise exposure to see if this affects the immune response to tissue damage.

Calvin Wu, Ph.D.

University of Michigan
Development and transmission of the tinnitus neural code

Noise overexposure is a common risk factor for tinnitus and is thus widely used to induce tinnitus in animal research. However, noise exposure does not always cause tinnitus, so researchers rely on behavioral testing to infer an animal’s subjective pathology. Such tests work only under the assumption that tinnitus is unchanging during the long testing period, which neither reflects the dynamic nature of tinnitus nor accounts for its variability. This inability to measure tinnitus within a short time window impedes our understanding of its emergence and progression. This project addresses these limitations by bypassing behavioral testing and directly identifying and localizing an objective code for tinnitus in real-time neural spiking activity. Using a novel data-driven approach, we can pinpoint exactly when and where tinnitus emerges and examine how noise trauma triggers and transmits the tinnitus signal throughout the auditory pathway.