2026

Bishara Awwad, Ph.D.

Mass Eye and Ear

Auditory-limbic circuit dynamics as therapeutic targets in hyperacusis

Our research addresses a critical gap in understanding the neural basis of hyperacusis by focusing on the emotional dimensions of sound hypersensitivity. Previous work has established that cochlear damage leads to hyperexcitability throughout the central auditory pathway, but our approach uniquely targets the circuit-specific mechanisms that link auditory processing to emotional responses.

Specifically, we investigate how noise-induced hearing loss affects two parallel pathways to the lateral amygdala: the cortico-amygdalar (CAmy) and thalamo-amygdalar (TAmy) projections. This pathway-specific investigation represents a novel approach to understanding hyperacusis, as it targets the precise neural circuits that may mediate both the perceptual and emotional components of this disorder.

Timothy Balmer, Ph.D.

Arizona State University

The role of NMDA receptors in vestibular circuit function and balance

The vestibular cerebellum is the part of the brain that integrates signals conveying head, body, and eye movements to coordinate balance. When this neural processing is disrupted by central or peripheral vestibular disorders, profound instability, vertigo, and balance errors result. We lack a basic understanding of the development and physiology of the first vestibular processing region in the cerebellum, the granule cell layer. This lack of knowledge is a major roadblock to the development of therapies that could ameliorate peripheral disorders such as Ménière’s disease. This project will examine a specific understudied cell type in the granule cell layer of the cerebellum, unipolar brush cells (UBCs). Our focus is particularly on these cells’ glutamate receptors, which control synaptic communication. It remains unclear how glutamate receptors assume their form and function during development. We hypothesize that the NMDA-type glutamate receptors expressed by developing UBCs are necessary both for the development of these cells’ remarkable dendritic brush, which slows and controls communication across the synapse, and for the cells’ function in the circuit.

Patrick Cody, Ph.D.

University of Pittsburgh

Comprehensive hearing recovery evaluation of novel targeting sequences for cell-type–specific gene therapy for hearing loss

Gene replacement therapy has the potential to restore natural hearing in individuals with congenital deafness, potentially overcoming the limitations of cochlear implants. While cochlear implants provide substantial benefits, they rely on artificial signals to bypass affected inner ear structures that are essential for accurate speech perception in noisy environments. In contrast, gene therapy treats congenital hearing loss by delivering a functional copy of affected genes to restore natural mechanisms of the inner ear. However, evaluations of current therapies fail to capture hearing recovery in complex listening situations, and hearing restoration is limited by imprecise targeting of the affected inner ear cell types. There are over 100 forms of congenital deafness that impact specific inner ear cell types, making treatments challenging to scale.

This project addresses these limitations through two innovations. First, using an established animal model of congenital deafness, we introduce a comprehensive approach to gene therapy evaluation that tracks how the auditory pathway adapts to complex sound environments and recovers over time. We will explore whether more precise cell-type targeting of gene delivery can improve how the brain recovers and adapts to sound. Second, we apply a model-based platform to design gene regulatory elements that target gene therapy to specific inner ear cell types. This generalizable approach can be tailored to target various cell types for delivery at specific ages and thus can accelerate the development of therapies for other forms of gene-linked deafness.

Margaret Cychosz, Ph.D.

University of California, Los Angeles

Leveraging automatic speech recognition algorithms to understand how the home listening environment impacts spoken language development among infants with cochlear implants

To develop spoken language, infants must rapidly process thousands of words spoken by caregivers around them each day. This is a daunting task, even for infants with typical hearing. It is even harder for infants with cochlear implants, as electrical hearing compromises many critical cues for speech perception and language development. The challenges that infants with cochlear implants face have long-term consequences: Starting in early childhood, cochlear implant users perform 1-2 standard deviations below peers with typical hearing on nearly every measure of speech, language, and literacy. My lab investigates how children with hearing loss develop spoken language despite the degraded speech signal that they hear and learn language from. This project addresses the urgent need to identify, in infancy, predictors of speech-language development for pediatric cochlear implant users.

Jia Guo, Ph.D.

Columbia University

Enhanced cochlear endolymphatic hydrops imaging for Ménière’s disease with intracochlear MRI contrast delivery via microneedle

Hui Hong, Ph.D.

Creighton University

Peripheral auditory input regulates lateral cochlear efferent system

When we think about hearing, we often picture sound traveling from the ear to the brain: a one-way sensory pathway. However, hearing also involves a lesser-known feedback system called the auditory efferent system, which sends signals from the brain back to the ear. This system helps regulate how we hear in different sound environments and plays a protective role for the inner ear. Hearing loss is a prevalent health issue in modern society and is closely associated with other auditory disorders. Most research on hearing loss has focused on the sensory pathway; in contrast, much less is known about how the efferent system contributes to these conditions.

This project focuses on the lateral olivocochlear (LOC) neurons, the most abundant auditory efferent neurons, to understand how their function changes after noise-induced hearing loss. These changes include both the neurons’ own activity and the inputs that regulate that activity. A key question is whether the observed changes are driven directly by noise exposure or by the resulting hearing loss. To address this, we will compare LOC function following noise-induced hearing loss with that following non-noise-induced hearing loss, the latter produced by targeted ear lesions. Our approach combines whole-cell patch-clamp electrophysiology, a classic technique for recording the activity of individual neurons, with state-of-the-art optogenetics, which allows precise control and study of neuronal inputs. This research will deepen our understanding of how hearing loss impacts the auditory efferent system and help resolve inconsistencies observed in clinical studies on efferent involvement in conditions such as central auditory processing disorder, hyperacusis, and tinnitus. Ultimately, these insights will guide clinicians in refining therapeutic interventions by pinpointing dysfunctions within the brain and suggesting strategies for functional recovery. They will also help tailor treatments based on the underlying cause of hearing loss.

Manoj Kumar, Ph.D.

University of Pittsburgh

KCNQ2/3 potassium channel activator mitigates noise-trauma–induced hypersensitivity to sounds in mice

Noise-induced hearing loss (NIHL) is one of the most common causes of hearing disorders. NIHL reduces the auditory sensory information relayed from the cochlea to the brain, including the primary auditory cortex (A1). To compensate for reduced peripheral sensory input, A1 undergoes homeostatic plasticity. Namely, the sound-evoked activity of A1 excitatory principal neurons (PNs) recovers or even surpasses pre-noise trauma levels and exhibits increased response gain (the slope of neuronal responses against sound levels). This increased gain of A1 PNs after NIHL is associated with highly debilitating hearing disorders, such as tinnitus (perception of phantom sounds), hyperacusis (painful perception of sounds), and hypersensitivity to sounds (increased sensitivity to everyday sounds). Despite the high prevalence of these hearing disorders, treatment options are limited to cognitive behavioral therapy and hearing prosthetics with no FDA-approved pharmacotherapeutic options available. Therefore, to aid in the development of pharmacotherapeutic options, it is imperative to 1) develop animal models of these hearing disorders, 2) identify the brain plasticity underlying these hearing disorders, and 3) test potential pharmacotherapy to rehabilitate hearing and brain plasticity after NIHL. Here, we aim to develop a novel mouse model of hypersensitivity to sounds, identify its underlying A1 plasticity, and test pharmacotherapy to mitigate it after NIHL.
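The parenthetical definition of response gain above (the slope of neuronal responses plotted against sound level) can be made concrete with a toy calculation. The following is a minimal sketch with entirely hypothetical firing-rate numbers, not data from this study: it fits a line to rate-versus-level points before and after a simulated noise trauma, and the slope of each fit is the gain.

```python
import numpy as np

# Illustrative sketch (hypothetical numbers, not the study's data):
# "response gain" = slope of neuronal response vs. sound level.

levels_db = np.array([30, 40, 50, 60, 70, 80])   # sound levels (dB SPL)

# Hypothetical mean firing rates (spikes/s) of A1 principal neurons:
rate_pre = np.array([5, 10, 15, 20, 25, 30])     # before trauma: +5 spikes/s per 10 dB
rate_post = np.array([5, 14, 23, 32, 41, 50])    # after NIHL: steeper growth with level

# Slope of a first-degree least-squares fit = response gain
gain_pre = np.polyfit(levels_db, rate_pre, 1)[0]
gain_post = np.polyfit(levels_db, rate_post, 1)[0]

print(f"gain before trauma: {gain_pre:.2f} spikes/s per dB")
print(f"gain after trauma:  {gain_post:.2f} spikes/s per dB")
```

With these made-up values the post-trauma slope (0.9) exceeds the pre-trauma slope (0.5), which is the kind of increased gain the abstract associates with hyperacusis and sound hypersensitivity.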

Ben-Zheng Li, Ph.D.

University of Colorado

Alterations in the sound localization pathway resulting in hearing deficits: an optogenetic approach

Sound localization is a key function of the brain that enables individuals to detect and focus on specific sound sources in complex acoustic environments. When spatial hearing is impaired, such as in individuals with central hearing loss, it significantly diminishes the ability to communicate effectively in noisy environments, leading to a reduced quality of life. This research aims to advance our understanding of the neural mechanisms underlying sound localization, focusing on how the brain processes very small differences in the timing of sounds reaching each ear (interaural time differences, or ITDs). These differences are processed by a nucleus of the auditory brainstem called the medial superior olive (MSO), which integrates excitatory and inhibitory inputs from both left and right ears with exceptional temporal precision, allowing for the detection of microsecond-level differences in the time of arrival of sounds. By developing a computational model of this process and validating it through optogenetic manipulation of inhibitory inputs in animal models, this project will provide new insights into how alterations in inhibition and myelination affect sound localization. Ultimately, the goal of this research is to contribute to the development of innovative therapeutic strategies aimed at restoring spatial hearing in individuals with hearing impairments, including those with autism and age-related deficits.
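The microsecond timing comparison described above can be illustrated with a toy computation. This is a minimal sketch, not the project's computational model of the MSO: it recovers a known interaural time difference by cross-correlating simulated left- and right-ear signals, and the sample rate, tone frequency, and 300 µs delay are all made-up illustration values.

```python
import numpy as np

# Illustrative sketch (not the authors' MSO model): estimating an interaural
# time difference (ITD) by cross-correlating the two ears' signals. Humans
# use ITDs on the order of tens to hundreds of microseconds.

fs = 100_000                        # sample rate (Hz): 10 µs resolution
t = np.arange(0, 0.02, 1 / fs)      # 20 ms of signal
itd_true = 300e-6                   # hypothetical 300 µs delay at the right ear

left = np.sin(2 * np.pi * 500 * t)                 # 500 Hz tone at the left ear
right = np.sin(2 * np.pi * 500 * (t - itd_true))   # delayed copy at the right ear

# Cross-correlate and take the lag where the two signals match best
lags = np.arange(-len(t) + 1, len(t))
xcorr = np.correlate(left, right, mode="full")
itd_est = -lags[np.argmax(xcorr)] / fs             # sign: right lags left

print(f"estimated ITD: {itd_est * 1e6:.0f} µs")
```

A cross-correlation like this is only a textbook stand-in; the abstract's point is that the MSO achieves the equivalent comparison biologically, by integrating precisely timed excitatory and inhibitory inputs from both ears.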

Anahita Mehta, Ph.D.

University of Michigan

Effects of age on interactions of acoustic features with timing judgments in auditory sequences

Imagine being at a busy party where everyone is talking at once, yet you can still focus on your friend’s voice. This ability to discern important sounds from noise involves integrating different features, such as the pitch (how high or low a sound is), location, and timing of these sounds. As we age, even with good hearing, this integration may become harder, affecting our ability to understand speech in noisy environments. Our brains must combine these features to make sense of our surroundings, a process known as feature integration. However, it’s not entirely clear how these features interact, especially when they conflict. For example, how does our brain handle mixed signals regarding pitch and sound location?

Previous research shows that when cues from different senses, like hearing and sight, occur simultaneously, our performance improves; when they are out of sync, it becomes harder. Less is known about how our brains integrate conflicting cues within the same sense, such as pitch and spatial location in hearing. Our study aims to explore how this ability changes with age and to develop a simple, easy-to-administer test of feature integration, especially for older adults. This research may lead to better rehabilitation strategies, making everyday listening tasks easier for everyone.

Jane Mondul, Au.D., Ph.D.

Purdue University

Sound-induced plasticity of the lateral olivocochlear efferent system

Loud sounds can damage the auditory system and cause hearing loss. But not all sound is bad – safe sound exposure can actually help the brain fine-tune how we hear, especially in noisy places. A part of the auditory system called the lateral olivocochlear (LOC) pathway may help with this. The LOC system’s chemical signals change after sound exposure, suggesting a form of “plasticity” (adaptability), but scientists don’t yet know exactly how it works. Our project will study how the LOC system changes after safe sound exposure, how this LOC plasticity affects hearing, and whether it still occurs when the ear is damaged. We will test this in mice by measuring their ability to hear sounds in noise and by looking closely at cells in the ear and brain. What we learn could guide new sound-based or drug-based therapies to protect hearing and improve communication in noisy settings.

Bruna Mussoi, Au.D., Ph.D.

University of Tennessee

Auditory neuroplasticity following experience with cochlear implants

Cochlear implants provide several benefits to older adults, though the amount of benefit varies across people. The greatest improvements in speech understanding usually happen within the first 6 months after implantation. It is generally accepted that these gains in performance result from neural changes in the auditory system, but while there is strong evidence of neural changes following cochlear implantation in children, there is limited evidence in adults with hearing loss in both ears. This study will examine how neural responses change as a function of the amount of cochlear implant use, compared with longstanding hearing aid use. Listeners who are candidates for a cochlear implant (whether they decide to pursue implantation or to keep wearing hearing aids) will be tested at several time points, from pre-implantation up to 6 months after implantation. The results of this project will improve our understanding of the impact of cochlear implant use on neural responses in older adults, and of how those responses relate to the ability to understand speech.

Melissa Papesh, Au.D., Ph.D.

Portland VA Research Foundation

Central auditory processing disorder and insomnia

At least 26 million American adults complain of hearing and communication difficulties that negatively impact their quality of life despite having no signs of hearing loss. This condition is often referred to as auditory processing disorder (APD), reflecting altered processing of auditory information in the brain. While traumatic brain injuries are the most widely known cause of APD, a host of other health conditions are also likely to significantly impact how the brain interprets sound. Our goal with the current project is to examine possible associations between APD and another common condition: insomnia. The objectives of this project are (a) to compare peripheral and central auditory system function in patients with normal hearing sensitivity, with and without diagnoses of chronic insomnia, and (b) to examine the potential for cognitive behavioral therapy (CBT) for sleep to improve auditory function in patients with chronic insomnia. In addition to standard hearing tests, we will measure responses on questionnaires to gauge self-perceived hearing difficulty, assess participants’ ability to discriminate and identify several different types of auditory stimuli, and measure brain responses to sound at multiple levels of the auditory pathway within the brain. Because chronic insomnia is associated with higher rates of cognitive impairment and mental health conditions, we will also measure cognitive function and symptoms of depression and anxiety. Auditory, cognitive, and mental health measures will be obtained in patients with diagnosed chronic insomnia both before and after completion of CBT, as well as in a group of control participants following a similar testing timeline. We hypothesize that patients with chronic insomnia will perform more poorly on clinical measures of auditory processing and will report higher rates of hearing handicap compared with controls.
In addition, we suspect that those with chronic insomnia will display abnormally high levels of brain activity, as well as poor pre-attentive filtering of auditory information compared with good sleepers, due to persistent neuronal hyperarousal. Finally, our hope is that these auditory manifestations will improve once participants complete CBT, thus providing a pathway to improved hearing in a subset of patients experiencing APD.

Patrick Parker, Ph.D.

Johns Hopkins University School of Medicine

Emergence of tonotopically organized spontaneous activity in the brain after genetic disruption of MET channel

Secondary disorders, such as the perception of phantom sounds (tinnitus) and hypersensitivity to sound (hyperacusis), develop alongside hearing loss. These disorders have no curative treatment, in part due to our poor understanding of the underlying neurobiology. Using experimental models of hearing loss, I recently discovered that sound-independent (SI) patterns of neural activity emerge in the brain’s auditory centers that resemble those elicited by sound in hearing mice. This finding indicates that the brain can self-generate neural patterns that resemble sound processing, independent of the inner ear. To understand the relevance of SI activity to human disorders like tinnitus, I’ll expand these findings to more translationally relevant models of hearing loss (loud noise exposure and age-related hearing loss), as well as determine the area of the brain that generates these patterns. The results of these studies may help to develop new approaches to treat tinnitus, hyperacusis, and related disorders.

Marina Silveira, Ph.D.

University of Texas at San Antonio

Age-related changes in neuromodulatory signaling in the auditory midbrain

Age-related hearing loss is the most common form of hearing loss in older adults. It causes difficulty understanding speech in noisy environments. Consequently, hearing loss has a major negative impact on quality of life, leading to social isolation and loneliness, and is a significant risk factor for dementia and Alzheimer’s disease. It is well known that age-related hearing loss shifts the excitatory-inhibitory balance in the inferior colliculus to favor excitability. This enhancement in excitability can contribute to pathological conditions such as tinnitus and poor temporal processing. However, the mechanisms that regulate this enhanced excitability in the inferior colliculus are unclear. A major neuromodulator called serotonin has been shown to regulate the excitatory-inhibitory balance in other brain regions. In preliminary studies for this proposal, we found that serotonin strongly influences the activity of a class of inhibitory neurons in the inferior colliculus that express neuropeptide Y. Here, we will test the hypothesis that dysfunction in serotonergic and neuropeptide Y signaling underlies enhanced excitability in the inferior colliculus in age-related hearing loss. Because the serotonergic system is a common target for pharmaceuticals, our results will also provide foundational insights that might guide future pharmacological interventions for age-related hearing loss.