Christian N. Paxton, Ph.D.

University of Utah

The role of Fgf4 in otic placode induction

Development and patterning of the inner ear is a complex process mediated by several signaling molecules, including members of the fibroblast growth factor (FGF) family. We recently found that Fgf4 is expressed in the ear-forming region just prior to the induction of ear development; Fgf4 has not previously been implicated in the induction or formation of the inner ear. Based on its temporal and spatial pattern of expression, we hypothesize that Fgf4 is involved in the early processes of ear development, and we propose to investigate its role(s) by determining whether it is sufficient and/or required to induce the early stages of inner ear development. We will also examine the signals responsible for localizing Fgf4 expression to the otic-forming domain.

Z. Ellen Peng, Ph.D.

University of Wisconsin-Madison
Investigating cortical processing during comprehension of reverberant speech in adolescents and young adults with cochlear implants

Through early cochlear implant (CI) fitting, many children diagnosed with profound sensorineural hearing loss gain access to verbal communication through electrical hearing and go on to develop spoken language. Despite good speech outcomes in sound-booth testing, many children have difficulty understanding speech in noisy and reverberant indoor environments. Although children spend up to 75 percent of their learning time in classrooms, how adverse classroom acoustics compound the difficulty of processing speech degraded by a CI is not well understood. In this project, we examine speech understanding in classroom-like environments through immersive acoustic virtual reality. In addition to behavioral responses, we measure neural activity using functional near-infrared spectroscopy (fNIRS), a noninvasive, CI-compatible neuroimaging technique, in cortical regions responsible for speech understanding and sustained attention. Our findings will reveal the neural signature of how CI users who developed language through electrical hearing process speech in classroom-like environments with adverse room acoustics.

Tatjana Piotrowski, Ph.D.

University of Utah Medical School

Molecular analysis of hair cell regeneration in the zebrafish lateral line

We aim to elucidate the genetic pathways underlying hair cell regeneration in zebrafish, with the long-term goal of activating these pathways in mammals. Our lab is taking a twofold approach to identify genes involved in hair cell regeneration. First, we are performing gene expression analyses on mantle cells from control larvae and from larvae in which mantle cells are proliferating to regenerate killed hair cells (as proposed in this application). Second, we are performing a mutagenesis screen for zebrafish mutants that are unable to regenerate hair cells and thus carry mutations in regeneration-specific genes. A prominent cause of deafness is the loss of hair cells due to age, noise, or antibiotic treatment. In contrast to mammalian hair cells, those of fish, birds, and amphibians turn over frequently and regenerate following hair cell death. Little is known about why lower vertebrates can regenerate hair cells while humans cannot, partly because inner ear hair cells are relatively inaccessible to direct observation and manipulation. Our aim is to take advantage of the zebrafish lateral line to define and characterize the molecular and cellular interactions that occur during hair cell regeneration. If successful, our results will set the stage for testing whether hair cell regeneration can be activated in humans.

Sarah F. Poissant, Ph.D.

University of Massachusetts, Amherst
The Impact of Total Communication on the Auditory Perception of Speech

For decades, most children with severe-to-profound hearing loss were educated in special schools for the deaf. In more recent years, increasing numbers of these children have been partially or fully mainstreamed and educated alongside their peers with normal hearing. Much debate has ensued regarding the best language of instruction for them (sign-only, sign+speech, or speech-only). It is generally thought that a symbolic, gesture-based language system, such as the manually coded English used in simultaneous communication methods, provides a facilitative benefit. However, there is not enough information about how children combine manual and spoken cues in this type of communication system to draw firm conclusions about the classroom teaching approaches that best support aural reception of spoken language. We plan to ask and answer a very specific question: What is the direct effect of simultaneously delivered sign language on the perception of speech for children with hearing loss who are developing spoken language? The research approach builds on the observation that perception of speech that has been artificially degraded (e.g., to mimic a hearing loss) improves strikingly when listeners have knowledge of the content of the message. The proposed study applies this observation to children with hearing loss to determine whether signs serve in part as a prime that improves auditory perception of speech.

Research area: Auditory Development; Congenital Hearing Loss; Fundamental Auditory Research

Long-term goal of research: To assess how total communication – the combined use of manual signs, speech, and speech-reading – can most effectively be employed as a habilitation strategy to improve auditory perceptual abilities.

Melissa Polonenko, Ph.D.

University of Minnesota–Twin Cities

Identifying hearing loss through neural responses to engaging stories

Spoken language acquisition in children with hearing loss relies on early identification of the loss followed by timely fitting of hearing devices, so that children receive an adequate representation of the speech they need to hear. Yet current tests for young children rely on non-speech stimuli, which are processed differently by hearing aids and do not fully capture the complexity of speech. This project will develop a new and efficient test, called multiband peaky speech, that uses engaging narrated stories and records electroencephalography (EEG) responses from the surface of the head to identify frequency-specific hearing loss. Computer modeling and EEG experiments in adults will determine the best combination of parameters and stories to speed up testing for use in children, and will evaluate the test's ability to identify hearing loss. This work lays the groundwork for extending the method to children and paves the way for clinics to use it as a hearing screener for young children, ultimately supporting our ability to provide timely, enhanced information for spoken language development.

The long-term goal is to develop an engaging, objective clinical test that uses stories to identify hearing loss in young children and to evaluate how their responses to the same speech change when it is heard through hearing aids. This goal addresses two important needs identified by the U.S. Early Hearing Detection and Intervention (EHDI) program and will positively impact the developmental trajectory of thousands of children who need monitoring of their hearing status and evaluation of outcomes with their hearing devices.

Generously funded by Royal Arch Research Assistance

Erin K. Purcell, Ph.D.

Sonja Pyott, Ph.D.

University of North Carolina Wilmington

Enhancement of the efferent-hair cell synapse by metabotropic glutamate receptors

This proposal aims to improve our understanding of the molecular mechanisms regulating synapses in the cochlea, specifically by characterizing how a class of molecules, metabotropic glutamate receptors (mGluRs), regulates efferent-hair cell synapses. Sensory hair cells of the cochlea communicate with the brain at specialized sites called synapses. Inner hair cells have numerous afferent synapses that relay information about sound from the hair cell to the brain. In contrast, outer hair cells are characterized by efferent synapses from the brain that regulate hair cell activity. Although these efferent and afferent synapses are normally considered independent of one another, experiments on immature inner hair cells suggest that glutamate, the neurotransmitter required for transmission at the afferent synapse, may also modify the response of the efferent synapse. Efferent innervation of the cochlea is thought to protect against noise-induced hearing loss. Considering that noise-induced hearing loss accounts for one-third of all cases of deafness, understanding the mechanisms regulating efferent synapses is of special clinical relevance. This project will investigate the hypothesis that mGluRs enhance the efferent synaptic response and should uncover novel pharmaceutical targets to modulate that response, either dampening hair cell activity to prevent noise-induced hearing loss or boosting hair cell activity to combat deafness.

Kelly Radziwon, Ph.D.

State University of New York at Buffalo
Noise-induced hyperacusis in rats with and without hearing loss

Hyperacusis is an auditory perceptual disorder in which everyday sounds are perceived as uncomfortably or even excruciatingly loud. Researchers and audiologists assess hyperacusis in the clinic by asking patients to rate sounds based on their perceived loudness, yielding a measure known as the loudness discomfort level (LDL). Loudness discomfort ratings are a useful clinical tool, but in the lab we cannot ask animals to “rate” sounds. Instead, to measure loudness perception in animals, our lab trains rats to detect a variety of sounds of varying intensity. By measuring how quickly the animals respond to each sound—faster to higher-intensity sounds and more slowly to lower-intensity sounds—we can obtain an accurate picture of perceived loudness. By comparing electrophysiological recordings with the behavioral performance of individual animals, this project aims to characterize the relationship between changes in neural activity and loudness perception in animals with and without noise-induced hearing loss.

The relationship between pain-associated proteins in the auditory pathway and hyperacusis

Hyperacusis is a condition in which sounds of moderate intensity are perceived as intolerably loud or even painful. Despite the apparent link between pain and hyperacusis in humans, little research has directly compared the presence of inflammation along the auditory pathway with the occurrence of hyperacusis. A major factor limiting this research has been the lack of a reliable animal behavioral model of hyperacusis. However, using reaction-time measurements as a marker for loudness perception, I have successfully assessed rats for drug-induced hyperacusis and, more recently, noise-induced hyperacusis. Briefly, the animals will be trained to detect noise bursts of varying intensity. As in humans, the rats will respond faster with increasing sound intensity. Following drug administration or noise exposure, rats will be deemed to have hyperacusis if they show faster-than-normal reaction times to moderate- and high-level sounds. The goal of the proposed research is therefore to correlate the presence of pain-related molecules along the auditory pathway with reliable behavioral measures of drug- and noise-induced hyperacusis.

Lavanya Rajagopalan, Ph.D.

Baylor College of Medicine

The structural and functional basis of electromotility in prestin, the outer hair cell amplifier protein

Prestin, a membrane protein of outer hair cells in the cochlea, drives electromotility and contributes to the cochlear amplification that underlies frequency sensitivity. The long-term objective of this study is to understand the molecular basis of prestin function. This understanding will provide insight into the molecular basis of prestin-related hearing loss and can lead to the rational design of therapeutics to treat such conditions.

Robert Raphael, Ph.D.

Rice University
Understanding the biophysics and protein biomarkers of Ménière’s disease via optical coherence tomography imaging

Our senses of hearing and balance depend on maintaining the proper composition and pressure of a specialized fluid in the inner ear called endolymph. Ménière’s disease is an inner ear disorder associated with increased endolymphatic fluid pressure that causes dizziness, hearing loss, and tinnitus. It is difficult to diagnose and treat clinically, a source of frustration for both physicians and patients. Part of the barrier to diagnosing and treating Ménière’s disease is the lack of imaging tools for studying the inner ear and a poor understanding of the underlying causes. The goal of this research is to develop an approach to noninvasively image the inner ear and study the internal structures of the vestibular system in typical and disease states. We will utilize optical coherence tomography (OCT), a technique capable of imaging through bone, to observe changes in the fluid compartments of the inner ear. The expected outcome of this research is the establishment of a powerful noninvasive imaging platform for the inner ear that will enable us to test hypotheses, in living animals, about how ion transport regulates the endolymph, how disorders of ion transport disrupt the endolymphatic fluid, and how the expression of different biomarkers leads to disorders of ion transport.

Khaleel Razak, Ph.D.

University of California, Riverside
Age-related hearing loss and cortical processing

Presbycusis (age-related hearing loss) is one of the most prevalent forms of hearing impairment in humans and contributes to speech recognition impairments and cognitive decline. Both peripheral and central auditory system changes are involved in presbycusis, but the relative contributions of peripheral hearing loss and brain aging to presbycusis-related declines in auditory processing remain unclear. This project will address this question by comparing two groups of genetically engineered, age-matched mice: one that experiences presbycusis and one that does not. Spectrotemporal processing (such as that required for speech) will be studied as an outcome measure.

Lina Reiss, Ph.D.

Oregon Health & Science University
Changes in Residual Hearing in a Hearing-impaired Guinea Pig Model of Hybrid Cochlear Implants (CIs)

The goal of the current study is to understand the mechanisms of hearing loss with “hybrid” or “electro-acoustic” cochlear implants (CIs), a new type of CI designed to preserve low-frequency hearing and allow combined acoustic-electric stimulation in the same ear. Hybrid CI users perform significantly better than standard CI users on musical melody recognition, voice recognition, and speech recognition in the presence of background talkers. However, approximately 10% of hybrid CI patients lose all residual hearing, and another 20% lose 20-30 dB of residual hearing after implantation. We hypothesize that in addition to surgical trauma, electrical stimulation through the hybrid CI also damages cochlear cells, leading to residual hearing loss (HL). Aim 1 is to determine the contribution of electrical stimulation to residual HL in hybrid CI guinea pigs with noise-induced steeply sloping high-frequency hearing loss (NIHFHL). Aim 2 is to examine the effect of electrical stimulation on cochlear pathology. The findings will guide the development of strategies to prevent hearing loss from electrical stimulation and allow extension of the hybrid concept to all cochlear implant recipients with usable residual hearing.

Research area: Cochlear implants

Long term goal of research: To improve residual hearing preservation with “hybrid” or “electro-acoustic” cochlear implants (CIs), a new type of CI designed to preserve low-frequency hearing and allow combined acoustic-electric stimulation in the same ear.

Jennifer Resnik, Ph.D.

Mass Eye and Ear, Harvard Medical School
Homeostatic modifications in cortical GABA circuits enable states of hyperexcitability and reduced sound level tolerance after auditory nerve degeneration

Sensorineural hearing loss due to noise exposure, aging, ototoxic drugs, or certain diseases reduces the neural activity transmitted from the cochlea to the central auditory system. These types of hearing loss often give rise to hyperacusis, an auditory hypersensitivity disorder in which low- to moderate-intensity sounds are perceived as intolerably loud or even painful. Once thought to originate in the damaged ear, hyperacusis is emerging as a complex disorder: while it can be triggered by a peripheral injury, it develops from a maladaptation of the central auditory system to the peripheral dysfunction. My research will test the hypothesis that the recovery of sound detection and speech comprehension may cause an overcompensation that leads to increased sound sensitivity and reduced tolerance of moderately loud sounds.

This hypothesis will be tested using a combination of chronic single-unit recordings, operant behavioral methods, and optogenetic interrogation of specific subclasses of cortical interneurons. By understanding how brain plasticity is modulated, we will gain deeper insight into the neuronal mechanisms underlying aberrant sound processing and its potential reversal.

Christina Reuterskiöld, Ph.D.

New York University
Rhyme Awareness in Children with Cochlear Implants: Investigating the Effect of a Degraded Auditory System on Auditory Processing, Language, and Literacy Development

Successful literacy is critical for a child’s development. Decoding written words depends largely on the child’s processing of speech sounds, which requires a certain level of awareness of speech sounds and words. If early cochlear implantation supports the development of central auditory processing skills and phonological awareness, children with cochlear implants (CIs) would be expected to acquire phonological awareness skills comparable to those of children with typical hearing.

However, past research on this topic has generated conflicting results, which this project will attempt to resolve by investigating rhyme recognition skills and vocabulary acquisition in children who received CIs early in life. With co-principal investigator Katrien Vermeire, Ph.D., we will also shed light on the importance of central auditory processing during a child’s first years of life for developing strong literacy skills.

William “Jason” Riggs, Au.D.

The Ohio State University
Electrophysiological characteristics in children with auditory neuropathy spectrum disorder

This project will focus on understanding the different sites of lesion (impairment) in children with auditory neuropathy spectrum disorder (ANSD). ANSD is a unique form of hearing loss that is thought to occur in approximately 10 to 20 percent of all children with severe to profound sensorineural hearing loss and that results in abnormal auditory perception. We will investigate the neural encoding processes of the auditory nerve in children using acoustically and electrically evoked electrophysiologic techniques, in order to provide objective evidence of peripheral auditory function. The results can then be used to optimize care from the very beginning of cochlear implant use in children with this impairment.

Michael Roberts, Ph.D.

University of Michigan
Cellular and synaptic basis of binaural gain control through the commissure of the inferior colliculus

Deficits in binaural hearing make it difficult for users of cochlear implants and hearing aids to localize sounds and follow speech in everyday situations. One of the most important sites for binaural computations is the inferior colliculus (IC). Located in the auditory midbrain, the IC is the hub of the central auditory system, receiving most of the ascending output of the auditory brainstem and much of the descending output of the auditory cortex. The left and right lobes of the IC communicate with each other through a massive connection called the commissure. Recent data from in vivo recordings show that commissural projections shape how IC neurons encode sound location. This suggests that important binaural interactions arise through the IC commissure, but the cellular and synaptic basis of these interactions is largely unknown. Understanding these interactions will provide foundational knowledge to guide future efforts to restore binaural hearing.

Susan M. Robey-Bond, Ph.D.

University of Vermont and State Agricultural College
The Role of a Mutation in Histidyl-tRNA Synthetase in Usher-like Syndrome Deafness

An Usher-like syndrome comprising deafness, blindness, and fever-induced hallucinations was recently discovered; it is caused by recessive inheritance of a mutation in histidyl-tRNA synthetase (HARS). The HARS enzyme is required for protein production in cells: it attaches the amino acid histidine to a transfer RNA molecule, which activates the amino acid and transports it to the ribosome for protein synthesis. We will measure the effects of this mutation on the molecules required for protein synthesis. Preliminary results suggest that HARS may be chemically modified by the cell, and that mutant HARS is modified differently, which is evidence that HARS may have roles in the cell separate from its known function in protein synthesis. We additionally propose to determine the interactions of HARS and mutant HARS with other cellular proteins, specifically in cells derived from embryonic mouse inner ears, as a first step in elucidating a different role for HARS in hearing.

Research area: Usher and Usher-like syndrome deafness

Long-term goal of research: Our long-term goal is to describe the specific roles that HARS, and the HARS mutation, play in sensory cell development and maintenance. With a greater understanding of the proteome (the expressed proteins and protein interactions of a cell) during different stages of development of affected cells, we hope to discover more potential avenues for therapy to prevent or alleviate symptoms of Usher and Usher-like syndromes.

Sonia M. S. Rocha-Sanchez, Ph.D.

Creighton University

Role of central auditory neurons in pathogenic mechanism of progressive high frequency hearing loss (PHFHL)

The long-term objective of this study is to assess the relative contribution of central auditory neurons (CANs) to high frequency hearing loss. Studies of the peripheral auditory system suggest that progressive hearing loss results from dysfunction of spiral ganglion neurons (SGNs) and/or inner hair cells (IHCs). This study proposes to determine the effects of the mutations using genetically engineered mice with DN-KCNQ4 expression specific to CANs. Achieving these objectives will open doors to the formulation of therapeutic modalities and possible interventions for the treatment of PHFHL.

Adrian Rodriguez-Contreras, Ph.D.

The City College of New York

Defining the role of olivo-cochlear feedback in the development of the auditory brainstem

During early brain development, auditory neurons spontaneously generate highly patterned electrical activity in the absence of sound. In this project, Rodriguez-Contreras will explore the role of cholinergic brainstem neurons in modulating the patterns of spontaneous activity. His work could provide clues for developing treatments that ameliorate hearing impairments such as tinnitus and deafness.

Merri J. Rosen, Ph.D.

Northeast Ohio Medical University
Effects of developmental conductive hearing loss on communication processing: perceptual deficits and neural correlates in an animal model

Conductive hearing loss (CHL), which reduces the sound conducted to the inner ear, is often associated with chronic ear infections (otitis media). There is growing awareness that CHL in children is a risk factor for speech and language deficits. However, children often have intermittent bouts of hearing loss and receive varying treatments. My research uses an animal model in which the duration and extent of CHL can be precisely controlled. This research will identify parameters of natural vocalizations (such as slow or fast changes in pitch or loudness) that are poorly detected after early CHL. Neural responses from the auditory cortex will be recorded while animals behaviorally distinguish vocalizations that vary in specific ways. This will reveal the specific vocalization components that are perceptually impaired by developmental hearing loss, components that can then serve as targets for intervention and remediation. Creating training paradigms for children that target these parameters should improve speech perception and comprehension.

Research area: Hearing Loss; Auditory Development; Auditory Physiology; Fundamental Auditory Research

Long-term goal of research: To identify the neural mechanisms by which hearing loss impairs auditory perception of natural sounds, and to show how the brain distinguishes sounds from different sources in complex environments. We apply neurophysiological, perceptual, and computational techniques to animal models of hearing loss. This multifaceted approach allows neural impairments to be identified in more detail than is possible in human studies, yet remains directly applicable to clarifying human hearing problems and establishing effective treatments.