Hearing Loss in Children
Hearing Health Foundation’s Emerging Research Grants (ERG) program awards grants to researchers studying hearing loss in children, including:
Etiology of childhood hearing loss (e.g., genetic, infectious, traumatic)
Assessment and diagnosis of childhood hearing loss
Auditory neuropathy
Behavioral, cognitive, developmental, and psychosocial consequences of childhood hearing loss
Impact of early intervention
Education of the hearing-impaired child
Cochlear implants and auditory brainstem implants in children
ERG awards are for up to $50,000 per year, one year in length in the first instance, and renewable for a second year. Find more information below about projects on hearing loss in children awarded grants in prior years.
Researchers interested in applying for an Emerging Research Grant are encouraged to review our grant policy. Please also check our ERG page and sign up for grant alerts for application cycle dates and specific grant opportunities available this year.
Recent Hearing Loss in Children Grantees & Projects
University of Wisconsin–Madison
Neural correlates of amplified speech in children with sensorineural hearing loss
About three of every 1,000 infants are born with permanent hearing loss. With the implementation of newborn hearing screening programs worldwide, infants born with hearing loss are now identified soon after birth and provided with hearing aids as early as 3 months of age. However, until infants are 8 to 10 months of age and can participate in clinical tests, neural measures are the only feasible method to infer an infant’s hearing ability with hearing aids. This project will investigate the relationship between behavioral and neural measures of speech audibility in children ages 5 to 16 years old with congenital sensorineural hearing loss, who can reliably indicate when they hear sounds. Specifically, the project will use speech-elicited envelope-following responses—a type of scalp-recorded measure that reflects neural activity to periodicity in speech. Study findings will reveal the accuracy of the chosen neural measure in confirming whether speech sounds are audible in children with congenital hearing loss when hearing aids are used. Results will inform future investigations and the clinical feasibility of using neural measures to assess hearing aid benefit in infants with hearing loss who are unable to confirm their detection of speech behaviorally.
Boston Children’s Hospital
Toward better assessment of pediatric unilateral hearing loss
Although it is now more widely understood that children with unilateral hearing loss are at risk for challenges, many appear to adjust well without intervention. The range of options for audiological intervention for children with severe-to-profound hearing loss in only one ear (i.e., single-sided deafness, SSD) has increased markedly in recent years, from no intervention beyond classroom accommodations all the way to cochlear implant (CI) surgery. In the absence of clear data, current practice is based largely on the philosophy and convention at different institutions around the country. The work in our lab aims to improve assessment and management of pediatric unilateral hearing loss. This current project will evaluate the validity of an expanded audiological and neuropsychological test battery in school-aged children with SSD. Performance on test measures will be compared across different subject groups: typical hearing; unaided SSD; SSD with the use of a CROS (contralateral routing of signals) hearing aid; SSD with the use of a cochlear implant. This research will enhance our basic understanding of auditory and non-auditory function in children with untreated and treated SSD, and begin the work needed to translate experimental measures into viable clinical protocols.
Mass Eye and Ear
Age-specific cochlear implant programming for optimal hearing performance
Cochlear implants (CIs) offer life-altering hearing restoration for deafened individuals who no longer benefit from hearing aid technologies. Despite advances in CI technology, recipients struggle to process complex sounds in real-world environments, such as speech-in-noise and music. Poor performance results from artifacts of the implants (e.g., adjacent channel interaction, distorted signal input) and age-specific biological differences (e.g., neuronal health, auditory plasticity). Our group determined that children with CIs require a better signal input than adults with CIs to achieve the same level of performance. Additional evidence demonstrates that auditory signal blurring in adults has less impact on performance outcomes than it does in children. These findings imply that age should be considered when programming a CI. However, current clinical practice largely adopts a one-size-fits-all approach toward CI management and uses programming parameters defined by adult CI users. Our project’s main objective is to understand how to better program CIs in children to improve complex sound processing by taking into account the listening environment (e.g., complex sound processing in a crowded room), differences between age groups, and variations in needs or anatomy between individuals.
University of Florida
Contributions of auditory and somatosensory feedback to speech motor control in congenitally deaf 9- to 10-year-olds and adults
Cochlear implants have led to stunning advances in prospects for children with congenital hearing loss to acquire spoken language in a typical manner, but problems persist. In particular, children with CIs show much larger deficits in acquiring sensitivity to the individual speech sounds of language (phonological structure) than in acquiring vocabulary and syntax. This project will test the hypothesis that the acquisition of detailed phonological representations would be facilitated by a stronger emphasis on the speech motor control associated with producing those representations. This approach is novel because most interventions for children with CIs focus strongly on listening to spoken language, which may overlook the importance of practice in producing language, an idea we will examine. To achieve that objective, we will observe speech motor control directly in speakers with congenital hearing loss and CIs, with and without sensory feedback.
University of Wisconsin–Madison
Investigating cortical processing during comprehension of reverberant speech in adolescents and young adults with cochlear implants
Through early cochlear implant (CI) fitting, many children diagnosed with profound sensorineural hearing loss gain access to verbal communication through electrical hearing and go on to develop spoken language. Despite good speech outcomes tested in sound booths, many children experience difficulties understanding speech in noisy and reverberant indoor environments. Although children spend up to 75 percent of their learning time in classrooms, how adverse acoustics compound the difficulty of processing speech already degraded by the CI is not well understood. In this project, we examine speech understanding in classroom-like environments through immersive acoustic virtual reality. In addition to behavioral responses, we measure neural activity using functional near-infrared spectroscopy (fNIRS), a noninvasive, CI-compatible neuroimaging technique, in cortical regions responsible for speech understanding and sustained attention. Our findings will reveal the neural signature of speech processing in classroom-like environments with adverse room acoustics by CI users who developed language through electrical hearing.
University of Minnesota–Twin Cities
Identifying hearing loss through neural responses to engaging stories
Spoken language acquisition in children with hearing loss relies on early identification of hearing loss followed by timely fitting of hearing devices to ensure they receive an adequate representation of the speech that they need to hear. Yet current tests for young children rely on non-speech stimuli, which are processed differently by hearing aids and do not fully capture the complexity of speech. This project will develop a new, efficient test, called multiband peaky speech, that uses engaging narrated stories and records responses from the surface of the head (EEG) to identify frequency-specific hearing loss. Computer modeling and EEG experiments in adults will determine the best combination of parameters and stories to speed up the testing for use in children, and evaluate the test’s ability to identify hearing loss. This work lays the necessary groundwork for extending this method to children and paves the way for clinics to use this test as a hearing screener for young children, ultimately enabling timely, enhanced information to support spoken language development.
The long-term goal is to develop an engaging, objective clinical test that uses stories to identify hearing loss in young children and evaluate changes to their responses to the same speech through hearing aids. This goal addresses two important needs identified by the U.S.’s Early Hearing Detection and Intervention (EHDI) program, and will positively impact the developmental trajectory of thousands of children who need monitoring of their hearing status and evaluation of outcomes with their hearing devices.
Generously funded by Royal Arch Research Assistance
University of Miami Miller School of Medicine
Elucidating the development of the otic lineage using stem cell-derived organoid systems
One of the main causes of hearing loss is damage to and/or loss of the specialized cochlear hair cells and neurons that are ultimately responsible for our sense of hearing. Stem cell–derived 3D inner ear organoids (lab-grown, simplified mini-organs) provide an opportunity to study hair cells and sensory neurons in a dish. However, the system is in its infancy, and hair cell–containing organoids are difficult to produce and maintain. This project will use a stem cell–derived 3D inner ear organoid system as a model to study mammalian inner ear development. The developmental knowledge gained will then be used to optimize the efficacy of the organoid system. As such, the results will progress our understanding of how the inner ear forms and functions, with the improved organoid system then allowing us to directly elucidate the factors that cause congenital hearing loss.
University of California, Los Angeles
Leveraging automatic speech recognition algorithms to understand how the home listening environment impacts spoken language development among infants with cochlear implants
To develop spoken language, infants must rapidly process thousands of words spoken by caregivers around them each day. This is a daunting task, even for infants with typical hearing. It is even harder for infants with cochlear implants, as electrical hearing compromises many critical cues for speech perception and language development. The challenges that infants with cochlear implants face have long-term consequences: Starting in early childhood, cochlear implant users perform 1 to 2 standard deviations below peers with typical hearing on nearly every measure of speech, language, and literacy. My lab investigates how children with hearing loss develop spoken language despite the degraded speech signal that they hear and learn language from. This project addresses the urgent need to identify predictors of speech-language development for pediatric cochlear implant users in infancy.