Central Auditory Processing Disorders (CAPD)
Hearing Health Foundation’s Emerging Research Grants (ERG) program awards grants to researchers studying central auditory processing disorders (CAPD), including:
Normal and abnormal auditory processing
Creating testable models of auditory processing disorders
Etiology, diagnosis, and treatment of CAPD
Genetics of CAPD
Development of screening tools and diagnostic tests for CAPD, including behavioral, physiologic, and neuroimaging measures
Language, music, learning, and communication issues related to CAPD
ERG awards provide up to $50,000 per year; they are one year in length in the first instance and renewable for a second year. Find more information below about CAPD projects awarded grants in prior years.
Researchers interested in applying for an Emerging Research Grant are encouraged to review our grant policy. Please also check our ERG page and sign up for grant alerts for application cycle dates and specific grant opportunities available this year.
Recent CAPD Grantees & Projects
University of Iowa
Neural correlates of semantic structure in children who are hard of hearing
Mild to severe hearing loss places children at risk for delays in language development. One aspect of language that is affected is vocabulary development; children who are hard of hearing tend to know less about word meanings than their peers with typical hearing. This vocabulary gap matters because vocabulary is one of the strongest predictors of academic achievement. To help close the gap, it is essential to examine factors that both (1) are amenable to change through intervention and (2) influence vocabulary knowledge. One such factor is semantic memory structure (i.e., how the brain groups concepts with common properties). In essence, semantic structure determines how individuals understand and interact with the social and physical world, yet very little is known about how children with hearing loss structure semantic information in the brain. This project addresses that critical need by characterizing semantic structure in the brains of children who are hard of hearing, and the results will inform vocabulary interventions. Given the predictive validity of vocabulary knowledge for academic achievement, improving vocabulary understanding in children with hearing loss has the potential to impact all aspects of language (form, content, and use).
Johns Hopkins University
Age-related changes in neural mechanisms in the auditory cortex for learning complex sounds
In everyday environments, we encounter complex acoustic streams, yet we rapidly perceive only the relevant information with little conscious effort, such as when holding a conversation against a noisy background. With aging, this ability appears to degrade because of disrupted neural mechanisms in the brain. One key process that enables efficient auditory perception is rapid, implicit learning of new sounds through their recurrence, which allows our brains to link auditory streams with relevant memories and perceive meaningful information. This process must be carried out by populations of neurons in the relevant brain regions; for hearing, that is the auditory cortex. This project focuses on age-related changes in implicit learning. We aim to identify how neuronal activity encodes sensory signals, detects recurring stimuli, and ultimately stores those recurring sensory signals in memory. We will use optical imaging and holographic stimulation to identify changes in the groups of neurons in the auditory cortex that are involved in these processes. Our goal is to acquire a comprehensive understanding of the neural circuits involved in learning new sounds in a healthy young population, as well as to characterize the altered neural circuits caused by aging.
University of Pittsburgh
Signaling mechanisms of auditory cortex plasticity after noise-induced hearing loss
Exposure to loud noise is the most common cause of hearing loss, which can also lead to hyperacusis and tinnitus. Despite the high prevalence and adverse consequences of noise-induced hearing loss (NIHL), treatment options are limited to cognitive behavioral therapy and hearing prosthetics. To aid the development of pharmacotherapeutic or rehabilitative treatments for impaired hearing after NIHL, it is therefore imperative to identify the precise signaling mechanisms underlying auditory cortex plasticity after NIHL. It is well established that reduced GABAergic signaling contributes to plasticity of the auditory cortex after the onset of NIHL. However, the role and the timing of plasticity among the different subtypes of GABAergic inhibitory neurons remain unknown. Here, we will employ in vivo two-photon Ca2+ imaging to track the different subtypes of GABAergic inhibitory neurons after NIHL at single-cell resolution in awake mice. Determining the inhibitory circuit mechanisms underlying auditory cortex plasticity after NIHL will reveal novel therapeutic targets for treating and rehabilitating impaired hearing. Also, because auditory cortex plasticity is associated with hyperexcitability-related disorders such as tinnitus and hyperacusis, a detailed mechanistic understanding of this plasticity will highlight a pathway toward the development of novel treatments for those disorders as well.
University of Colorado
Alterations in the sound localization pathway resulting in hearing deficits: an optogenetic approach
Sound localization is a key function of the brain that enables individuals to detect and focus on specific sound sources in complex acoustic environments. When spatial hearing is impaired, such as in individuals with central hearing loss, it significantly diminishes the ability to communicate effectively in noisy environments, leading to a reduced quality of life. This research aims to advance our understanding of the neural mechanisms underlying sound localization, focusing on how the brain processes very small differences in the timing of sounds reaching each ear (interaural time differences, or ITDs). These differences are processed by a nucleus of the auditory brainstem called the medial superior olive (MSO), which integrates excitatory and inhibitory inputs from both left and right ears with exceptional temporal precision, allowing for the detection of microsecond-level differences in the time of arrival of sounds. By developing a computational model of this process and validating it through optogenetic manipulation of inhibitory inputs in animal models, this project will provide new insights into how alterations in inhibition and myelination affect sound localization. Ultimately, the goal of this research is to contribute to the development of innovative therapeutic strategies aimed at restoring spatial hearing in individuals with hearing impairments, including those with autism and age-related deficits.
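To give a rough sense of why the MSO must operate with microsecond precision, the short sketch below estimates ITD magnitudes using the classic Woodworth spherical-head approximation. This is an illustrative aside only, not part of the funded project's methods; the head radius and speed of sound are assumed textbook values.

# Illustrative sketch only: approximate ITD magnitudes from the Woodworth
# spherical-head formula, ITD = (a / c) * (theta + sin(theta)).
# The head radius and speed of sound below are assumed, typical textbook values.
import math

HEAD_RADIUS_M = 0.0875    # assumed average adult head radius, in meters
SPEED_OF_SOUND_M_S = 343  # assumed speed of sound in air, in m/s

def approx_itd_seconds(azimuth_deg: float) -> float:
    """Approximate interaural time difference for a distant source at the given azimuth."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND_M_S) * (theta + math.sin(theta))

if __name__ == "__main__":
    for azimuth in (10, 45, 90):
        itd_us = approx_itd_seconds(azimuth) * 1e6
        print(f"azimuth {azimuth:>2} deg -> ITD of roughly {itd_us:.0f} microseconds")

Under these assumptions, even a source directly to one side produces an ITD of only about 650 to 700 microseconds, which illustrates the temporal precision the MSO circuit must achieve.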
Wayne State University
Cochlear electrical stimulation-induced tinnitus suppression and related neural activity change in the rat inferior colliculus
Tinnitus is a prevalent public health problem that affects millions of people and imposes a significant economic burden on society. Cochlear electrical stimulation (CES) is one promising treatment for managing tinnitus. However, little is known about the mechanisms underlying CES-induced tinnitus suppression. In addition, electrode insertion during surgery can cause acute tissue trauma and cell loss, initiating programmed cell death within the damaged tissue of the cochlea. Recently, we demonstrated that cochlear stimulation suppressed tinnitus-like behaviors in rats, accompanied by tinnitus-related changes in neural activity. We therefore propose that CES-induced tinnitus suppression will be more robust when hearing is protected from implant trauma by intracochlear application of AM-111, a novel enzyme inhibitor.
Medical University of South Carolina
Age and hearing loss effects on subcortical envelope encoding
Generously funded by Royal Arch Research Assistance
University of Michigan
Effects of age on interactions of acoustic features with timing judgments in auditory sequences
Imagine being at a busy party where everyone is talking at once, yet you can still focus on your friend’s voice. This ability to discern important sounds from noise involves integrating different features, such as the pitch (how high or low a sound is), location, and timing of these sounds. As we age, even with good hearing, this integration may become harder, affecting our ability to understand speech in noisy environments. Our brains must combine these features to make sense of our surroundings, a process known as feature integration. However, it’s not entirely clear how these features interact, especially when they conflict. For example, how does our brain handle mixed signals regarding pitch and sound location?
Previous research shows that when cues from different senses, like hearing and sight, occur simultaneously, our performance improves; when they are out of sync, it becomes harder. Less is known about how our brains integrate conflicting cues within the same sense, such as pitch and spatial location in hearing. Our study aims to explore how this ability changes with age and to develop a simple test that could serve as an easy measure of feature integration, especially for older adults. This research may lead to better rehabilitation strategies, making everyday listening tasks easier for everyone.
University at Buffalo, the State University of New York
Potential of inhibition of poly ADP-ribose polymerase as a therapeutic approach in blast-induced cochlear and brain injury
Many potential drugs in the preclinical phase for treating different types of noise-induced hearing loss (from blast and non-blast noise) revolve around targeting oxidative stress or interfering in the cell death cascade. Although noise-induced oxidative stress and cell death are well studied in the auditory periphery, the effects of noise exposure on the central auditory system remain understudied, especially for blast noise exposure, where both auditory and non-auditory structures in the brain are affected. Impulsive noise (blast wave)-induced hearing loss differs from that caused by continuous noise exposure in that it is more likely to be accompanied by accelerated cognitive deficits, depression, anxiety, dementia, and brain atrophy. It is well established that poly ADP-ribose polymerase (PARP) is a key mediator of cell death and that it is overactivated by oxidative stress. This project will therefore explore PARP inhibition as a potential therapeutic approach for blast-induced cochlear and brain injury. Dampening PARP overactivation with its inhibitor 3-aminobenzamide is expected both to mitigate blast noise-induced oxidative stress and to interfere with the cell death cascade, thereby reducing cell death in the peripheral and central auditory systems.
University of Wisconsin-Madison
Investigating cortical processing during comprehension of reverberant speech in adolescents and young adults with cochlear implants
Through early cochlear implant (CI) fitting, many children diagnosed with profound sensorineural hearing loss gain access to verbal communication through electrical hearing and go on to develop spoken language. Despite good speech outcomes when tested in sound booths, many children have difficulty understanding speech in noisy and reverberant indoor environments. Although up to 75 percent of their learning time is spent in classrooms, how adverse acoustics compound children's processing of degraded speech from a CI is not well understood. In this project, we examine speech understanding in classroom-like environments using immersive acoustic virtual reality. In addition to behavioral responses, we measure neural activity with functional near-infrared spectroscopy (fNIRS), a noninvasive, CI-compatible neuroimaging technique, in cortical regions responsible for speech understanding and sustained attention. Our findings will reveal the neural signature of speech processing by CI users, who developed language through electrical hearing, in classroom-like environments with adverse room acoustics.
University of Minnesota–Twin Cities
Identifying hearing loss through neural responses to engaging stories
Spoken language acquisition in children with hearing loss relies on early identification of hearing loss followed by timely fitting of hearing devices to ensure that children receive an adequate representation of the speech they need to hear. Yet current tests for young children rely on non-speech stimuli, which are processed differently by hearing aids and do not fully capture the complexity of speech. This project will develop a new and efficient test, called multiband peaky speech, that uses engaging narrated stories and records responses from the surface of the head (EEG) to identify frequency-specific hearing loss. Computer modeling and EEG experiments in adults will determine the best combination of parameters and stories to speed up the testing for use in children, and will evaluate the test's ability to identify hearing loss. This work lays the necessary groundwork for extending the method to children and paves the way for clinics to use the test as a hearing screener for young children, ultimately supporting the ability to provide timely, enhanced information for spoken language development.
The long-term goal is to develop an engaging, objective clinical test that uses stories to identify hearing loss in young children and to evaluate how their responses to the same speech change when heard through hearing aids. This goal addresses two important needs identified by the U.S. Early Hearing Detection and Intervention (EHDI) program and will positively impact the developmental trajectory of thousands of children who need monitoring of their hearing status and evaluation of outcomes with their hearing devices.
Generously funded by Royal Arch Research Assistance
The Ohio State University
Electrophysiological characteristics in children with auditory neuropathy spectrum disorder
This project will focus on understanding the different sites of lesion (impairment) in children with auditory neuropathy spectrum disorder (ANSD). ANSD is a unique form of hearing loss that is thought to occur in approximately 10 to 20 percent of all children with severe to profound sensorineural hearing loss and results in abnormal auditory perception. Neural encoding by the auditory nerve will be investigated in children using acoustically and electrically evoked electrophysiologic techniques to provide objective evidence of peripheral auditory function. The results can then be used to optimize care from the very beginning of cochlear implant use in children with this impairment.
Boston Children’s Hospital
Toward better assessment of pediatric unilateral hearing loss
Although it is now more widely understood that children with unilateral hearing loss are at risk for challenges, many appear to adjust well without intervention. The range of options for audiological intervention for children with severe-to-profound hearing loss in only one ear (i.e., single-sided deafness, SSD) has increased markedly in recent years, from no intervention beyond classroom accommodations all the way to cochlear implant (CI) surgery. In the absence of clear data, current practice is based largely on the philosophy and convention at different institutions around the country. The work in our lab aims to improve assessment and management of pediatric unilateral hearing loss. This project will evaluate the validity of an expanded audiological and neuropsychological test battery in school-aged children with SSD. Performance on test measures will be compared across different subject groups: typical hearing; unaided SSD; SSD with a CROS (contralateral routing of signals) hearing aid; and SSD with a CI. This research will enhance our basic understanding of auditory and non-auditory function in children with untreated and treated SSD, and begin the work needed to translate experimental measures into viable clinical protocols.