Central Auditory Processing Disorders (CAPD)
Hearing Health Foundation’s Emerging Research Grants (ERG) program awards grants to researchers studying central auditory processing disorders (CAPD), including:
Normal and abnormal auditory processing
Creating testable models of auditory processing disorders
Etiology, diagnosis, and treatment of CAPD
Genetics of CAPD
Development of screening tools and diagnostic tests for CAPD, including behavioral, physiologic, and neuroimaging measures
Language, music, learning, and communication issues related to CAPD
ERG awards provide up to $50,000 per year; grants are for one year in the first instance and are renewable for a second year. Find more information below about CAPD projects awarded grants in prior years.
Researchers interested in applying for an Emerging Research Grant are encouraged to review our grant policy. Please also check our ERG page and sign up for grant alerts for application cycle dates and specific grant opportunities available this year.
Recent CAPD Grantees & Projects
Johns Hopkins University
Age-related changes in neural mechanisms in the auditory cortex for learning complex sounds
In everyday environments, we encounter complex acoustic streams, yet we rapidly perceive only the relevant information with little conscious effort, such as when having a conversation against a noisy background. With aging, this ability seems to degrade due to disrupted neural mechanisms in the brain. One of the key processes that enable efficient auditory perception is rapid, implicit learning of new sounds through their recurrences, allowing our brains to link auditory streams with relevant memories and perceive meaningful information. This process is carried out by populations of neurons in the relevant brain regions—for hearing, the auditory cortex. This project focuses on age-related changes in implicit learning. We aim to identify how neuronal activity encodes sensory signals, detects recurring stimuli, and ultimately stores those signals in memory. We will use optical imaging and holographic stimulation to identify changes in the groups of auditory cortex neurons involved in these processes. Our goal is to acquire a comprehensive understanding of the neural circuits involved in learning new sounds in a healthy young population, as well as to characterize the altered neural circuits caused by aging.
University of Pittsburgh
Signaling mechanisms of auditory cortex plasticity after noise-induced hearing loss
Exposure to loud noises is the most common cause of hearing loss, which can also lead to hyperacusis and tinnitus. Despite the high prevalence and adverse consequences of noise-induced hearing loss (NIHL), treatment options are limited to cognitive behavioral therapy and hearing prosthetics. Therefore, to aid in the development of pharmacotherapeutic or rehabilitative treatment options for impaired hearing after NIHL, it is imperative to identify the precise signaling mechanisms underlying auditory cortex plasticity after NIHL. It is well established that reduced GABAergic signaling contributes to the plasticity of the auditory cortex after the onset of NIHL. However, the roles and timing of plasticity in the different subtypes of GABAergic inhibitory neurons remain unknown. Here, we will employ in vivo two-photon Ca2+ imaging to track the different subtypes of GABAergic inhibitory neurons after NIHL at single-cell resolution in awake mice. Determining the inhibitory circuit mechanisms underlying the plasticity of the auditory cortex after NIHL will reveal novel therapeutic targets for treating and rehabilitating impaired hearing after NIHL. Also, because auditory cortex plasticity is associated with hyperexcitability-related disorders such as tinnitus and hyperacusis, a detailed mechanistic understanding of auditory cortex plasticity will highlight a pathway toward the development of novel treatments for these disorders.
University of Colorado
Alterations in the sound localization pathway resulting in hearing deficits: an optogenetic approach
Sound localization is a key function of the brain that enables individuals to detect and focus on specific sound sources in complex acoustic environments. When spatial hearing is impaired, such as in individuals with central hearing loss, it significantly diminishes the ability to communicate effectively in noisy environments, leading to a reduced quality of life. This research aims to advance our understanding of the neural mechanisms underlying sound localization, focusing on how the brain processes very small differences in the timing of sounds reaching each ear (interaural time differences, or ITDs). These differences are processed by a nucleus of the auditory brainstem called the medial superior olive (MSO), which integrates excitatory and inhibitory inputs from both left and right ears with exceptional temporal precision, allowing for the detection of microsecond-level differences in the time of arrival of sounds. By developing a computational model of this process and validating it through optogenetic manipulation of inhibitory inputs in animal models, this project will provide new insights into how alterations in inhibition and myelination affect sound localization. Ultimately, the goal of this research is to contribute to the development of innovative therapeutic strategies aimed at restoring spatial hearing in individuals with hearing impairments, including those with autism and age-related deficits.
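The temporally precise comparison of inputs from the two ears described above can be illustrated with a textbook coincidence-detection (cross-correlation) model. The short sketch below is a purely illustrative example rather than the project's actual MSO model: it estimates the ITD of a simulated binaural tone by testing a bank of candidate internal delays, and the sample rate, tone frequency, and delay values are hypothetical.

```python
# Minimal sketch: estimate an interaural time difference (ITD) by
# coincidence detection (cross-correlation over candidate internal delays).
# Illustrative only; parameters are hypothetical, not the project's model.
import numpy as np

fs = 100_000                      # sample rate (Hz), fine enough for microsecond ITDs
true_itd = 300e-6                 # simulated 300-microsecond ITD (sound reaches the right ear first)
t = np.arange(0, 0.05, 1 / fs)

# A 500 Hz tone at both ears; the left-ear copy is delayed by the ITD
right = np.sin(2 * np.pi * 500 * t)
left = np.sin(2 * np.pi * 500 * (t - true_itd))

# "Coincidence detection": delay the right-ear signal by each candidate ITD
# and keep the delay that best aligns it with the left-ear signal
candidate_itds = np.arange(-800e-6, 800e-6, 10e-6)
scores = [np.dot(left, np.roll(right, int(round(d * fs)))) for d in candidate_itds]
best_itd = candidate_itds[int(np.argmax(scores))]

print(f"true ITD: {true_itd * 1e6:.0f} us, estimated ITD: {best_itd * 1e6:.0f} us")
```

Restricting the candidate delays to a sub-millisecond range keeps the estimate unambiguous for a low-frequency tone, whose cross-correlation would otherwise repeat at every period.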
Medical University of South Carolina
Age and hearing loss effects on subcortical envelope encoding
Generously funded by Royal Arch Research Assistance
University of Michigan
Effects of age on interactions of acoustic features with timing judgments in auditory sequences
Imagine being at a busy party where everyone is talking at once, yet you can still focus on your friend’s voice. This ability to discern important sounds from noise involves integrating different features, such as the pitch (how high or low a sound is), location, and timing of these sounds. As we age, even with good hearing, this integration may become harder, affecting our ability to understand speech in noisy environments. Our brains must combine these features to make sense of our surroundings, a process known as feature integration. However, it’s not entirely clear how these features interact, especially when they conflict. For example, how does our brain handle mixed signals regarding pitch and sound location?
Previous research shows that when cues from different senses, like hearing and sight, occur simultaneously, our performance improves; when they are out of sync, the task becomes harder. Less is known about how our brains integrate conflicting cues within the same sense, such as pitch and spatial location in hearing. Our study aims to explore how this ability changes with age and to develop a simple test of feature integration that is easy to administer, especially for older adults. This research may lead to better rehabilitation strategies, making everyday listening tasks easier for everyone.
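To make the notion of congruent versus conflicting cues within a single sense concrete, the sketch below builds stereo tones in which a pitch cue (high versus low frequency) and a location cue (left versus right, created with an interaural level difference) either point to the same response or to opposite responses. The frequencies, the 10 dB ILD, and the cue-to-response mapping are hypothetical illustrations, not the study's actual paradigm.

```python
# Minimal sketch of congruent vs. conflicting pitch/location cues.
# All parameter choices are hypothetical examples.
import numpy as np

fs = 44_100
t = np.arange(0, 0.3, 1 / fs)         # 300 ms tones

def stereo_tone(freq_hz, side, ild_db=10.0):
    """Return an (n_samples, 2) stereo tone lateralized toward `side` via an interaural level difference."""
    tone = np.sin(2 * np.pi * freq_hz * t)
    gain = 10 ** (-ild_db / 20)       # attenuate the ear farther from the source
    left, right = (tone, tone * gain) if side == "left" else (tone * gain, tone)
    return np.column_stack([left, right])

# Suppose "high pitch" and "right side" both map to the same response key:
congruent_trial = stereo_tone(880.0, "right")    # pitch and location agree
conflicting_trial = stereo_tone(880.0, "left")   # pitch says one response, location the other

print(congruent_trial.shape, conflicting_trial.shape)
```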
Portland VA Research Foundation
Central auditory processing disorder and insomnia
At least 26 million American adults complain of hearing and communication difficulties that negatively impact their quality of life despite having no signs of hearing loss. This condition is often referred to as auditory processing disorder (APD), reflecting altered processing of auditory information in the brain. While traumatic brain injuries are the most widely known cause of APD, a host of other health conditions are also likely to significantly impact how the brain interprets sound. Our goal with the current project is to examine possible associations between APD and another common condition: insomnia. The objectives of this project are (a) to compare peripheral and central auditory system function in patients with normal hearing sensitivity with and without diagnoses of chronic insomnia, and (b) to examine the potential for cognitive behavioral therapy (CBT) for sleep to improve auditory function in patients with chronic insomnia.
In addition to standard hearing tests, we will also measure responses on questionnaires to gauge self-perceived hearing difficulty, assess participants’ ability to discriminate and identify several different types of auditory stimuli, and measure brain responses to sound at multiple levels of the auditory pathway within the brain. Because chronic insomnia is associated with higher rates of cognitive impairment and mental health conditions, we will also measure cognitive function and symptoms of depression and anxiety. Auditory, cognitive, and mental health measures will be obtained in patients with diagnosed chronic insomnia both before and after completion of CBT, as well as in a group of control participants following a similar testing timeline. We hypothesize that patients with chronic insomnia will perform more poorly on clinical measures of auditory processing and will report higher rates of hearing handicap compared with controls. In addition, we suspect that those with chronic insomnia will display abnormally high levels of brain activity as well as poor pre-attentive filtering of auditory information compared with good sleepers, due to persistent neuronal hyperarousal. Finally, our hope is that these auditory manifestations will improve once participants complete CBT, thus providing a pathway to improved hearing in a subset of patients experiencing APD.
University of Wisconsin-Madison
Investigating cortical processing during comprehension of reverberant speech in adolescents and young adults with cochlear implants
Through early cochlear implant (CI) fitting, many children diagnosed with profound sensorineural hearing loss gain access to verbal communication through electrical hearing and go on to develop spoken language. Despite good speech outcomes measured in sound booths, many children experience difficulty understanding speech in noisy and reverberant indoor environments. While children spend up to 75 percent of their learning time in classrooms, the added difficulty that adverse acoustics pose for children processing degraded speech through a CI is not well understood. In this project, we examine speech understanding in classroom-like environments through immersive acoustic virtual reality. In addition to behavioral responses, we measure neural activity using functional near-infrared spectroscopy (fNIRS), a noninvasive, CI-compatible neuroimaging technique, in cortical regions that are responsible for speech understanding and sustained attention. Our findings will reveal the neural signature of speech processing by CI users, who developed language through electrical hearing, in classroom-like environments with adverse room acoustics.
University of Minnesota–Twin Cities
Identifying hearing loss through neural responses to engaging stories
Spoken language acquisition in children with hearing loss relies on early identification of hearing loss followed by timely fitting of hearing devices to ensure children receive an adequate representation of the speech they need to hear. Yet current tests for young children rely on non-speech stimuli, which are processed differently by hearing aids and do not fully capture the complexity of speech. This project will develop a new and efficient test, called multiband peaky speech, that uses engaging narrated stories and records brain responses from the surface of the head (EEG) to identify frequency-specific hearing loss. Computer modeling and EEG experiments in adults will determine the best combination of parameters and stories to speed up testing for use in children, and will evaluate the test’s ability to identify hearing loss. This work lays the necessary groundwork for extending the method to children and paves the way for clinics to use the test as a hearing screener for young children, ultimately providing timely, enhanced information to support spoken language development.
The long-term goal is to develop an engaging, objective clinical test that uses stories to identify hearing loss in young children and to evaluate how their responses to the same speech change when heard through hearing aids. This goal addresses two important needs identified by the U.S. Early Hearing Detection and Intervention (EHDI) program and will positively impact the developmental trajectory of thousands of children who need monitoring of their hearing status and evaluation of outcomes with their hearing devices.
Generously funded by Royal Arch Research Assistance
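As a rough, self-contained illustration of the general principle behind deriving evoked responses from continuous, naturalistic stimuli like the narrated stories described above, the sketch below simulates EEG as the convolution of a stimulus pulse train with an unknown response waveform and then recovers that waveform by averaging EEG epochs time-locked to the pulses. Every signal and parameter here is a synthetic placeholder; this is not the grantees' multiband peaky speech pipeline.

```python
# Minimal simulation: recover an evoked response from "continuous" stimulation
# by averaging EEG epochs time-locked to a stimulus pulse train
# (equivalent to cross-correlating the EEG with the pulse train).
# All signals are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
fs = 10_000                                   # EEG sample rate (Hz)
dur = 60.0                                    # one minute of "story"
n = int(dur * fs)

# Stimulus pulse train standing in for pulses extracted from narrated speech
pulse_times = np.cumsum(rng.uniform(0.011, 0.030, size=4000))
pulse_times = pulse_times[pulse_times < dur - 0.05]
pulses = np.zeros(n)
pulses[(pulse_times * fs).astype(int)] = 1.0

# A made-up 10 ms "true" evoked response buried in noisy EEG
kernel_t = np.arange(0, 0.010, 1 / fs)
true_response = np.sin(2 * np.pi * 500 * kernel_t) * np.exp(-kernel_t / 0.003)
eeg = np.convolve(pulses, true_response)[:n] + rng.normal(0, 5, n)

# Recover the response by averaging epochs time-locked to each pulse
win = len(kernel_t)
idx = (pulse_times * fs).astype(int)
recovered = np.mean([eeg[i:i + win] for i in idx], axis=0)

corr = np.corrcoef(recovered, true_response)[0, 1]
print(f"correlation between recovered and true response: {corr:.2f}")
```

The actual test would additionally separate responses by frequency band (the "multiband" aspect); the simulation only shows the single-band, time-locked averaging step.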
University of Texas at San Antonio
Age-related changes in neuromodulatory signaling in the auditory midbrain
Age-related hearing loss is the most common form of hearing loss in older adults and causes difficulty understanding speech in noisy environments. Consequently, hearing loss has a major negative impact on quality of life, contributes to social isolation and loneliness, and is a significant risk factor for dementia and Alzheimer’s disease. It is well known that age-related hearing loss shifts the excitatory-inhibitory balance in the inferior colliculus to favor excitability. This enhanced excitability can contribute to pathological conditions such as tinnitus and poor temporal processing. However, the mechanisms that regulate this enhanced excitability in the inferior colliculus are unclear. A major neuromodulator called serotonin has been shown to regulate the excitatory-inhibitory balance in other brain regions. In preliminary studies for this proposal, we found that serotonin strongly influences the activity of a class of inhibitory neurons in the inferior colliculus that express neuropeptide Y. Here, we will test the hypothesis that dysfunction in serotonergic and neuropeptide Y signaling underlies enhanced excitability in the inferior colliculus in age-related hearing loss. Because the serotonergic system is a common target for pharmaceuticals, our results will also provide foundational insights that might guide future pharmacological interventions for age-related hearing loss.
Boston Children’s Hospital
Toward better assessment of pediatric unilateral hearing loss
Although it is now more widely understood that children with unilateral hearing loss are at risk for challenges, many appear to adjust well without intervention. The range of options for audiological intervention for children with severe-to-profound hearing loss in only one ear (i.e., single-sided deafness, SSD) has increased markedly in recent years, from no intervention beyond classroom accommodations all the way to cochlear implant (CI) surgery. In the absence of clear data, current practice is based largely on the philosophy and convention at different institutions around the country. The work in our lab aims to improve assessment and management of pediatric unilateral hearing loss. This current project will evaluate the validity of an expanded audiological and neuropsychological test battery in school-aged children with SSD. Performance on test measures will be compared across different subject groups: typical hearing; unaided SSD; SSD with the use of a CROS (contralateral routing of signals) hearing aid; SSD with the use of a cochlear implant. This research will enhance our basic understanding of auditory and non-auditory function in children with untreated and treated SSD, and begin the work needed to translate experimental measures into viable clinical protocols.