Hearing Loss in Children

Margaret Cychosz, Ph.D.

University of California, Los Angeles
Leveraging automatic speech recognition algorithms to understand how the home listening environment impacts spoken language development among infants with cochlear implants

To develop spoken language, infants must rapidly process thousands of words spoken by the caregivers around them each day. This is a daunting task even for infants with typical hearing, and it is harder still for infants with cochlear implants, because electrical hearing compromises many of the cues critical for speech perception and language development. These challenges have long-term consequences: Starting in early childhood, cochlear implant users perform 1-2 standard deviations below peers with typical hearing on nearly every measure of speech, language, and literacy. My lab investigates how children with hearing loss develop spoken language despite the degraded speech signal from which they must learn. This project addresses the urgent need to identify, in infancy, predictors of speech-language development for pediatric cochlear implant users.
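
As a concrete illustration of the general approach named in the project title, the sketch below (not the lab's actual pipeline) uses an off-the-shelf automatic speech recognition model to estimate how many words per hour a home recording contains; the Whisper model choice, file name, and words-per-hour metric are assumptions made for illustration only.

```python
# Minimal sketch, assuming the openai-whisper package and a mono WAV file:
# estimate the rate of words heard in a home audio recording with ASR.
import whisper


def words_per_hour(audio_path: str, model_size: str = "base") -> float:
    """Transcribe a recording and return an approximate words-per-hour rate."""
    model = whisper.load_model(model_size)
    result = model.transcribe(audio_path)  # Whisper chunks long audio internally
    segments = result["segments"]
    n_words = sum(len(seg["text"].split()) for seg in segments)
    duration_h = segments[-1]["end"] / 3600 if segments else 0.0
    return n_words / duration_h if duration_h else 0.0


if __name__ == "__main__":
    # "home_recording.wav" is a placeholder file name.
    print(f"Estimated input: {words_per_hour('home_recording.wav'):.0f} words/hour")
```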

Oscar Diaz-Horta, Ph.D.

University of Miami
The role of FAM65B in the regulation of post-translational modifications of auditory hair cell proteins

Recent genetic studies have identified the FAM65B protein as an important molecule for hearing. In this study we will search for inner ear hair cell proteins that interact with FAM65B in order to further delineate FAM65B’s function, focusing on FAM65B’s role in the post-translational modification of its partner proteins. These studies will help characterize the molecular underpinnings of hearing and how hearing loss occurs when these interactions are disrupted.

Vijayalakshmi Easwar, Ph.D.

University of Wisconsin–Madison
Neural correlates of amplified speech in children with sensorineural hearing loss

About three of every 1,000 infants are born with permanent hearing loss. With the implementation of newborn hearing screening programs worldwide, infants born with hearing loss are now identified soon after birth and fitted with hearing aids as early as 3 months of age. However, until infants are 8 to 10 months of age and can participate in behavioral clinical tests, neural measures are the only feasible way to infer an infant’s hearing ability with hearing aids. This project will investigate the relationship between behavioral and neural measures of speech audibility in children ages 5 to 16 with congenital sensorineural hearing loss, who can reliably indicate when they hear sounds. Specifically, the project will use speech-elicited envelope-following responses, a scalp-recorded measure that reflects neural activity phase-locked to the periodicity of speech. Study findings will reveal how accurately the chosen neural measure confirms whether speech sounds are audible to children with congenital hearing loss when hearing aids are used. Results will inform future investigations and the clinical feasibility of using neural measures to assess hearing aid benefit in infants with hearing loss who are unable to confirm their detection of speech behaviorally.
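
For readers unfamiliar with envelope-following responses, the sketch below illustrates the basic idea on simulated data: average many EEG epochs and measure spectral amplitude at the stimulus periodicity. The sampling rate, 100 Hz fundamental, epoch count, and noise-floor band are assumed values, not the project's protocol.

```python
# Illustrative sketch on simulated data: an envelope-following response (EFR)
# estimated as spectral amplitude at the stimulus periodicity.
import numpy as np

fs = 5000                       # EEG sampling rate (Hz), assumed
f0 = 100.0                      # stimulus periodicity / fundamental (Hz), assumed
n_epochs, epoch_len = 300, fs   # 300 one-second sweeps

rng = np.random.default_rng(0)
t = np.arange(epoch_len) / fs
# Simulated EEG: a small phase-locked 100 Hz component buried in noise.
epochs = 0.05 * np.sin(2 * np.pi * f0 * t) + rng.normal(0, 1.0, (n_epochs, epoch_len))

avg = epochs.mean(axis=0)                        # averaging suppresses non-phase-locked noise
spectrum = np.abs(np.fft.rfft(avg)) / epoch_len
freqs = np.fft.rfftfreq(epoch_len, 1 / fs)

efr_amp = spectrum[np.argmin(np.abs(freqs - f0))]
noise_floor = spectrum[(freqs > f0 + 10) & (freqs < f0 + 60)].mean()
print(f"EFR amplitude at {f0:.0f} Hz: {efr_amp:.4f}  (noise floor ~{noise_floor:.4f})")
```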

Amanda Griffin, Ph.D., Au.D.

Boston Children’s Hospital

Toward better assessment of pediatric unilateral hearing loss 

Although it is now more widely understood that children with unilateral hearing loss are at risk for challenges, many appear to adjust well without intervention. The range of audiological intervention options for children with severe-to-profound hearing loss in only one ear (i.e., single-sided deafness, SSD) has increased markedly in recent years, from no intervention beyond classroom accommodations all the way to cochlear implant (CI) surgery. In the absence of clear data, current practice is based largely on the philosophy and conventions of different institutions around the country. The work in our lab aims to improve assessment and management of pediatric unilateral hearing loss. The current project will evaluate the validity of an expanded audiological and neuropsychological test battery in school-aged children with SSD. Performance on the test measures will be compared across groups: typical hearing; unaided SSD; SSD with a CROS (contralateral routing of signals) hearing aid; and SSD with a cochlear implant. This research will enhance our basic understanding of auditory and non-auditory function in children with untreated and treated SSD, and begin the work needed to translate experimental measures into viable clinical protocols.

Nicole Tin-Lok Jiam, M.D.

Mass Eye and Ear

Age-specific cochlear implant programming for optimal hearing performance

Cochlear implants (CIs) offer life-altering hearing restoration for deafened individuals who no longer benefit from hearing aid technologies. Despite advances in CI technology, recipients struggle to process complex sounds in real-world environments, such as speech in noise and music. Poor performance results from artifacts of the implants (e.g., adjacent channel interaction, distorted signal input) and age-specific biological differences (e.g., neuronal health, auditory plasticity). Our group determined that children with CIs require a better signal input than adults with CIs to achieve the same level of performance, and additional evidence shows that auditory signal blurring has less impact on performance outcomes in adults. These findings imply that age should be considered when programming a CI. However, current clinical practice largely adopts a one-size-fits-all approach to CI management and uses programming parameters defined by adult CI users. Our project’s main objective is to understand how to better program CIs in children to improve complex sound processing, taking into account the listening environment (e.g., complex sound processing in a crowded room), differences between age groups, and variations in needs or anatomy between individuals.
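
A generic research tool for simulating this kind of degraded, "blurred" signal input is the noise-band vocoder; the sketch below illustrates that technique only (it is not this group's stimuli or processing chain), with fewer or broader channels approximating greater spectral blurring.

```python
# Minimal sketch of a noise-band vocoder, a common way to simulate CI-like
# spectral degradation. Assumes a 1-D mono signal and fs well above 2 * hi.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert


def vocode(signal: np.ndarray, fs: int, n_channels: int = 8,
           lo: float = 200.0, hi: float = 7000.0) -> np.ndarray:
    """Split the signal into log-spaced bands, extract each band's envelope,
    and use the envelopes to modulate band-limited noise carriers."""
    edges = np.geomspace(lo, hi, n_channels + 1)
    noise = np.random.default_rng(0).normal(size=signal.shape)
    out = np.zeros_like(signal, dtype=float)
    for low, high in zip(edges[:-1], edges[1:]):
        sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        env = np.abs(hilbert(band))         # slowly varying band envelope
        carrier = sosfiltfilt(sos, noise)   # noise restricted to the same band
        out += env * carrier
    return out / (np.max(np.abs(out)) + 1e-12)
```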

Matthew Masapollo, Ph.D.

University of Florida
Contributions of auditory and somatosensory feedback to speech motor control in congenitally deaf 9- to 10-year-olds and adults

Cochlear implants have led to stunning advances in the prospects for children with congenital hearing loss to acquire spoken language in a typical manner, but problems persist. In particular, children with CIs show much larger deficits in acquiring sensitivity to the individual speech sounds of language (phonological structure) than in acquiring vocabulary and syntax. This project will test the hypothesis that the acquisition of detailed phonological representations would be facilitated by a stronger emphasis on the speech motor control associated with producing those representations. This approach is novel because most interventions for children with CIs focus strongly on listening to spoken language and may therefore overlook the importance of practice in producing language, an idea we will examine. To achieve that objective, we will observe speech motor control directly in speakers with congenital hearing loss and CIs, with and without sensory feedback.

Z. Ellen Peng, Ph.D.

University of Wisconsin–Madison
Investigating cortical processing during comprehension of reverberant speech in adolescents and young adults with cochlear implants

Through early cochlear implant (CI) fitting, many children diagnosed with profound sensorineural hearing loss gain access to verbal communication through electrical hearing and go on to develop spoken language. Despite good speech outcomes when tested in sound booths, many children have difficulty understanding speech in noisy and reverberant indoor environments. Although children spend up to 75 percent of their learning time in classrooms, how adverse room acoustics compound the difficulty of processing degraded speech through a CI is not well understood. In this project, we examine speech understanding in classroom-like environments through immersive acoustic virtual reality. In addition to behavioral responses, we measure neural activity using functional near-infrared spectroscopy (fNIRS), a noninvasive, CI-compatible neuroimaging technique, in cortical regions responsible for speech understanding and sustained attention. Our findings will reveal the neural signature of speech processing in classroom-like environments with adverse room acoustics among CI users who developed language through electrical hearing.
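
One basic building block of studies like this is rendering reverberant speech. The sketch below shows the standard approach of convolving clean speech with a room impulse response; the file names are placeholders and this is not the authors' immersive virtual-reality system.

```python
# Minimal sketch, assuming mono WAV files at matching sampling rates:
# simulate "classroom-like" speech by convolving clean speech with a
# measured or simulated room impulse response (RIR).
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

fs_speech, speech = wavfile.read("clean_sentence.wav")   # placeholder file names
fs_rir, rir = wavfile.read("classroom_rir.wav")
assert fs_speech == fs_rir, "resample so the sampling rates match"

speech = speech.astype(float)
rir = rir.astype(float)

reverberant = fftconvolve(speech, rir)[: len(speech)]    # reverberant rendering
reverberant /= np.max(np.abs(reverberant)) + 1e-12       # normalize to avoid clipping
wavfile.write("reverberant_sentence.wav", fs_speech, reverberant.astype(np.float32))
```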

Melissa Polonenko, Ph.D.

University of Minnesota–Twin Cities

Identifying hearing loss through neural responses to engaging stories

Spoken language acquisition in children with hearing loss relies on early identification of hearing loss followed by timely fitting of hearing devices, so that children receive an adequate representation of the speech they need to hear. Yet current tests for young children rely on non-speech stimuli, which are processed differently by hearing aids and do not fully capture the complexity of speech. This project will develop a new, efficient test, called multiband peaky speech, that uses engaging narrated stories and responses recorded from the surface of the head (EEG) to identify frequency-specific hearing loss. Computer modeling and EEG experiments in adults will determine the best combination of parameters and stories to speed up testing for use in children and will evaluate the test’s ability to identify hearing loss. This work lays the groundwork for extending the method to children and paves the way for clinics to use the test as a hearing screener for young children, ultimately enabling timely, enhanced information to support spoken language development.
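
To make the idea of deriving responses from continuous stories more concrete, the sketch below simulates the simplest version of the approach: relating EEG to a pulse train aligned with glottal-pulse-like peaks in the speech, using cross-correlation as a basic form of deconvolution. All signals and parameters are simulated assumptions, not the project's method or data.

```python
# Conceptual sketch on simulated signals: recover a response waveform by
# cross-correlating continuous EEG with a speech-derived pulse train.
import numpy as np

fs = 10000                  # sampling rate (Hz), assumed
dur_s = 60                  # one minute of simulated story listening
rng = np.random.default_rng(1)

# Pulse train standing in for glottal-pulse times in the narrated story.
pulse_times = np.cumsum(rng.uniform(0.008, 0.012, size=6000))
pulses = np.zeros(int(dur_s * fs))
idx = (pulse_times * fs).astype(int)
pulses[idx[idx < pulses.size]] = 1.0

# Simulated EEG: each pulse evokes a small damped response, plus noise.
kernel_t = np.arange(0, 0.015, 1 / fs)
kernel = np.exp(-kernel_t / 0.004) * np.sin(2 * np.pi * 600 * kernel_t)
eeg = np.convolve(pulses, kernel)[: pulses.size] + rng.normal(0, 5, pulses.size)

# Deconvolution by cross-correlation: average EEG time-locked to each pulse.
lags = int(0.015 * fs)
full = np.correlate(eeg, pulses, mode="full")
response = full[pulses.size - 1 : pulses.size - 1 + lags] / pulses.sum()
print("Recovered response peak at lag (ms):", 1000 * np.argmax(response) / fs)
```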

The long-term goal is to develop an engaging, objective clinical test that uses stories to identify hearing loss in young children and to evaluate how their responses to the same speech change when it is delivered through hearing aids. This goal addresses two important needs identified by the U.S. Early Hearing Detection and Intervention (EHDI) program and will positively impact the developmental trajectory of thousands of children who need monitoring of their hearing status and evaluation of outcomes with their hearing devices.

Generously funded by Royal Arch Research Assistance

Regie Lyn P. Santos-Cortez, M.D., Ph.D.

Baylor College of Medicine
Identification of genes that predispose to chronic otitis media in an indigenous population

This study aims to identify genes that predispose to otitis media by analyzing gene variants found in a complex pedigree from an indigenous population with a high prevalence of chronic otitis media. The study population is ideal for gene mapping because it has a limited number of founders and marriages occur only within the community. Next-generation sequencing will be performed to quickly and cost-effectively detect causal genetic variants for otitis media that fall within the mapped genomic region. Discovering gene variants that predispose to otitis media would increase knowledge of its pathophysiology, allow the likelihood of otitis media to be predicted through genetic diagnosis, and support the development of innovative treatments.
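
As a schematic of the variant-filtering step that typically follows such sequencing (hypothetical file, region, sample IDs, and column names; not the study's pipeline), one might keep rare variants inside the mapped region that are shared by all sequenced affected relatives:

```python
# Illustrative sketch with hypothetical inputs: filter annotated variants to
# those in the mapped region, rare in reference panels, and carried by all
# sequenced affected relatives.
import pandas as pd

REGION_CHROM, REGION_START, REGION_END = "chr6", 25_000_000, 34_000_000  # assumed locus
AFFECTED = ["II-3", "II-5", "III-1"]                                     # assumed sample IDs

variants = pd.read_csv("annotated_variants.tsv", sep="\t")  # one row per variant

in_region = (
    (variants["chrom"] == REGION_CHROM)
    & variants["pos"].between(REGION_START, REGION_END)
)
rare = variants["population_af"] < 0.01                      # rare in reference panels
shared = variants[[f"genotype_{s}" for s in AFFECTED]].ne("0/0").all(axis=1)

candidates = variants[in_region & rare & shared]
print(candidates[["chrom", "pos", "gene", "consequence", "population_af"]])
```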

Research areas: otitis media, genetics

Long-term goal of research: The discovery of genes predisposing to otitis media will increase knowledge of the disease process and lead to new diagnostic and treatment strategies. The findings are expected to benefit not only the indigenous study population but also otitis media patients from other populations, as the genes identified can be followed up in those populations and new therapies can be developed from knowledge of the predisposing genes.

Regie Lyn P. Santos-Cortez, M.D., Ph.D., completed both her medical education and her residency in otorhinolaryngology at the University of the Philippines Manila College of Medicine – Philippine General Hospital. She studied genetic epidemiology at Erasmus Medical Centre Rotterdam, the Netherlands, and did most of her Ph.D. work on the genetics of non-syndromic hearing impairment in the Leal lab at Baylor College of Medicine, Houston, Texas. She is now an assistant professor at the Center for Statistical Genetics, Department of Molecular and Human Genetics, at Baylor.

Pei-Ciao Tang, Ph.D.

University of Miami Miller School of Medicine
Elucidating the development of the otic lineage using stem cell-derived organoid systems

One of the main causes of hearing loss is damage to and/or loss of specialized cochlear hair cells and neurons, which are ultimately responsible for our sense of hearing. Stem cell–derived 3D inner ear organoids (lab-grown, simplified mini-organs) provide an opportunity to study hair cells and sensory neurons in a dish. However, the system is in its infancy, and hair cell–containing organoids are difficult to produce and maintain. This project will use a stem cell–derived 3D inner ear organoid system as a model to study mammalian inner ear development. The developmental knowledge gained will then be used to improve the efficiency of the organoid system. The results will advance our understanding of how the inner ear forms and functions, and the improved organoid system will allow us to directly elucidate factors that cause congenital hearing loss.

Babak Vazifehkhahghaffari, Ph.D.

Washington University in St. Louis
Enhancing cochlear implant performance through development of improved auditory nerve fiber biophysical models with a combined wet lab and dry lab approach

While the cochlear implant (CI) allows access to sound for those with severe hearing loss, the ability to perceive pitch and music and to understand speech in the presence of reverberation, multiple talkers, or background noise remains very limited. To improve the CI, it is important to understand how it affects the behavior of neurons (nerve cells) in the inner ear by uncovering the properties of neuronal excitability. Neuronal excitability depends mainly on the movement of different ions across the cell membrane and is shaped by components such as ionic currents and ion channels. A more precise model of the auditory nerve, combined with models of the CI electric field potential, will help improve CI stimulation methods by clarifying stimulus-response phenomena and their underlying biophysical mechanisms.
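
To illustrate what a biophysical model of neuronal excitability involves, the sketch below simulates a classic Hodgkin-Huxley membrane responding to a brief current pulse standing in for electrical stimulation. The parameters are textbook squid-axon values, not a fitted auditory nerve fiber model, and the pulse amplitude and timing are assumptions for illustration.

```python
# Textbook-style sketch: a single-compartment Hodgkin-Huxley membrane driven
# by a brief current pulse, integrated with a simple Euler scheme.
import numpy as np

C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3          # uF/cm^2, mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.387              # reversal potentials (mV)


def rates(V):
    """Standard HH voltage-dependent opening/closing rates (1/ms)."""
    a_m = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    b_m = 4.0 * np.exp(-(V + 65) / 18)
    a_h = 0.07 * np.exp(-(V + 65) / 20)
    b_h = 1.0 / (1 + np.exp(-(V + 35) / 10))
    a_n = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    b_n = 0.125 * np.exp(-(V + 65) / 80)
    return a_m, b_m, a_h, b_h, a_n, b_n


dt, t_end = 0.01, 30.0                          # ms
t = np.arange(0, t_end, dt)
I_stim = np.where((t >= 5) & (t < 10), 10.0, 0.0)  # 10 uA/cm^2 pulse for 5 ms (assumed)

V, m, h, n = -65.0, 0.05, 0.6, 0.32             # approximate resting state
trace = np.empty_like(t)
for i, I in enumerate(I_stim):
    a_m, b_m, a_h, b_h, a_n, b_n = rates(V)
    m += dt * (a_m * (1 - m) - b_m * m)
    h += dt * (a_h * (1 - h) - b_h * h)
    n += dt * (a_n * (1 - n) - b_n * n)
    I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
    V += dt * (I - I_ion) / C
    trace[i] = V

print(f"Peak membrane voltage: {trace.max():.1f} mV "
      f"({'spike' if trace.max() > 0 else 'no spike'})")
```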