Senthilvelan Manohar, Ph.D.

University at Buffalo
Behavioral Model of Loudness Intolerance

High-level noise causes discomfort for individuals with typical hearing. Following cochlear damage, however, even moderate-level noise can become intolerable and painful, a condition known as hyperacusis.

One of the critical requirements for understanding and finding a cure for hyperacusis is the development of animal models. I have developed two new animal behavior models to study the pain and annoyance components of hyperacusis. The Active Sound Avoidance Paradigm (ASAP) uses a mouse’s innate aversion to a light, open area and preference for a dark, enclosed box; in the presence of intense noise, the animal shifts its preference to the light area. The Auditory Nociception Test (ANT) is based on a traditional pain threshold assessment. Although animals show an elevated pain threshold in the presence of 90 and 100 dB noise, at 110 and 115 dB they show reduced pain tolerance. Using these two tests together will allow me to assess emotional reactions to sound as well as the neural interactions between auditory perception and pain sensation.

Adam Markaryan, Ph.D.

University of Chicago

Mitochondrial DNA deletions and cochlear element degeneration in presbycusis

The long-term goal of the Bloom Temporal Bone Laboratory is to understand the molecular mechanisms involved in age-related hearing loss and to develop a rationale for therapy based on this information. This project will quantify the mitochondrial DNA common deletion level and total deletion load in cochlear elements obtained from individuals with presbycusis and from controls with typical hearing. The relationship among deletion levels, the extent of cochlear element degeneration, and the severity of hearing loss will be explored in human archival tissues to clarify the role of deletions in presbycusis.

This research award is funded by The Burch-Safford Foundation Inc.

David Martinelli, Ph.D.

University of Connecticut Health Center
Creation and validation of a novel, genetically induced animal model for hyperacusis

Hyperacusis is a condition in which a person experiences pain at much lower sound levels than do listeners with typical hearing. Outer hair cell afferent neurons are known to exist, but what information the outer hair cells communicate to the brain through these afferents is not known. This project’s hypothesis is that the function of these mysterious afferents is to signal to the brain when sounds are intense enough to be painful and/or damaging, and that this circuitry is distinct from the cochlea-to-brain circuitry that provides general hearing. The hypothesis will be tested using a novel animal model in which a protein essential for the proposed “pain” circuit is missing. The absence of this protein is predicted to lessen the perception of auditory pain when high-intensity sounds are presented. If confirmed, this research has implications for those suffering from hyperacusis.

Matthew Masapollo, Ph.D.

University of Florida
Contributions of auditory and somatosensory feedback to speech motor control in congenitally deaf 9- to 10-year-olds and adults

Cochlear implants (CIs) have led to stunning advances in the prospects for children with congenital hearing loss to acquire spoken language in a typical manner, but problems persist. In particular, children with CIs show much larger deficits in acquiring sensitivity to the individual speech sounds of language (phonological structure) than in acquiring vocabulary and syntax. This project will test the hypothesis that the acquisition of detailed phonological representations is facilitated by a stronger emphasis on the speech motor control associated with producing those representations. This approach is novel because most interventions for children with CIs focus strongly on listening to spoken language and may overlook the importance of practice in producing language, an idea we will examine. To achieve that objective, we will observe speech motor control directly in speakers with congenital hearing loss and CIs, with and without sensory feedback.

Jameson Mattingly, M.D.

The Ohio State University
Differentiating Ménière's disease and vestibular migraine using audiometry and vestibular threshold measurements

Conditions that cause recurrent episodic vertigo (dizziness), such as Ménière's disease (MD) and vestibular migraine (VM), can present a diagnostic challenge because both can produce recurrent vertigo, tinnitus, motion intolerance, and hearing loss. Further complicating this issue is that the diagnosis of each is based upon patient history, with little contribution from objective measures. Previous attempts to better differentiate MD and VM have included a variety of auditory and vestibular tests, but these evaluations have demonstrated limitations or have not shown the sensitivity and specificity needed for clinical use. Recently, vestibular perceptual threshold testing has shown potential to better differentiate MD and VM by demonstrating different and opposite trends in the two conditions, and these evaluations are ongoing.

In addition to vestibular evaluations, audiometry (hearing testing) is a mainstay of testing in patients with vestibular symptoms, especially when MD is a concern, and is therefore widely available. Standard hearing testing alone, however, is not sensitive or specific enough to differentiate MD and VM; this project’s hypothesis is that combining audiograms with vestibular perceptual threshold testing will yield greater diagnostic power than either measure used individually. The population of patients with MD and VM is an ideal setting in which to examine similarities and differences, as MD is classically an otologic disease while VM, in theory, has little to do with auditory function. The same principle can be applied to any disease process that affects both vestibular and auditory function (such as tumors or ototoxicity).
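To illustrate the kind of combined-measure analysis this hypothesis implies, the sketch below (written in Python, using simulated values and invented feature names rather than any data or methods from this project) shows how an audiometric threshold and a vestibular perceptual threshold might be entered together into a simple logistic-regression classifier and its cross-validated diagnostic power compared against either measure alone.

    # Minimal illustrative sketch (hypothetical data): does combining an audiometric
    # threshold with a vestibular perceptual threshold separate simulated "MD" and
    # "VM" groups better than either measure alone?
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 200
    labels = rng.integers(0, 2, n)  # 0 = "VM", 1 = "MD" (simulated diagnoses)

    # Simulated measures; effect sizes and units are invented for illustration only.
    audiogram_db = 20 + 25 * labels + rng.normal(0, 15, n)          # low-frequency threshold, dB HL
    vestibular_thresh = 1.0 - 0.4 * labels + rng.normal(0, 0.5, n)  # perceptual threshold, deg/s

    def cv_auc(*features):
        """Cross-validated ROC area for a logistic regression on the given features."""
        X = np.column_stack(features)
        clf = LogisticRegression(max_iter=1000)
        return cross_val_score(clf, X, labels, cv=5, scoring="roc_auc").mean()

    print("audiogram alone :", round(cv_auc(audiogram_db), 2))
    print("vestibular alone:", round(cv_auc(vestibular_thresh), 2))
    print("combined        :", round(cv_auc(audiogram_db, vestibular_thresh), 2))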

Andrew A. McCall, M.D.

University of Pittsburgh
The Influence of Dynamic Limb Movement on Activity within the Vestibular Nuclei: the Role of the Cerebellum

Balance is inherently a multi-modal sense. To maintain balance in upright stance or during walking, input from several modalities – namely the vestibular system (from the inner ear), proprioceptive system (from muscles and joints), and visual system – must be interpreted by the central nervous system and synthesized to understand body position in space relative to gravity. Our goal is to investigate how vestibular and limb proprioceptive inputs interact in the central nervous system, with a particular focus on the brainstem and cerebellum as these are key sites of multisensory processing of balance input. We anticipate that the results of these studies will have important implications for the understanding of multi-sensory processing within central vestibular pathways and for the clinical treatment of humans with vestibular disorders.

Research area: Vestibular and Balance Disorders; Vestibular Physiology

Long-term goal of research: To elucidate the physiologic pathways responsible for integrating vestibular and proprioceptive information and to ultimately develop clinical strategies based upon these physiologic underpinnings to improve the health of humans with vestibular disorders.

Carolyn McClaskey, Ph.D.

Medical University of South Carolina

Age and hearing loss effects on subcortical envelope encoding

Generously funded by Royal Arch Research Assistance

Elizabeth McCullagh, Ph.D.

University of Colorado
The role of the MNTB in sound localization impairments in autism spectrum disorder

The processing of sound location and the establishment of spatial channels to separate several simultaneous sounds are critical for social interaction, such as carrying on a conversation in a noisy room or focusing on a person speaking. Impairments in sound localization can often result in central auditory processing disorders (CAPD). A form of CAPD is also observed clinically across autism spectrum disorders and contributes significantly to quality-of-life issues in autistic patients.

The circuit responsible for initially localizing sound sources and establishing spatial channels is located in the auditory brainstem and functions with precisely integrated neural excitation and inhibition. A recent theory posits that autism may be caused by an imbalance of excitatory and inhibitory synapses, particularly in sensory systems. An imbalance of excitation and inhibition would lead to a decreased ability to separate competing sound sources. While the current excitation-to-inhibition model of autism assumes that most inhibition in the brain is GABAergic, the sound localization pathway in the brainstem functions primarily with temporally faster and more precise glycinergic inhibition.

The role of glycinergic inhibition has never been studied in autism spectrum disorders and could be a crucial component of altered synaptic processing in autism. The brainstem is a good model in which to address this question, since its primary form of inhibition is glycinergic and the ratio of excitation to inhibition is crucial for normal processing.

Melissa McGovern, Ph.D.

University of Pittsburgh

Hair cell regeneration in the mature cochlea: investigating new models to reprogram cochlear epithelial cells into hair cells 

Sensory hair cells in the inner ear detect mechanical auditory stimulation and convert it into a signal that the brain can interpret. Hair cells are susceptible to damage from loud noises and some medications. Our lab investigates the ability of nonsensory cells in the inner ear to regenerate lost hair cells. We regenerate cells in the ear by converting nonsensory cells into sensory cells through genetic reprogramming: key hair cell-inducing genes are expressed in non-hair cells and partially convert them into hair cells. There are multiple types of nonsensory cells in the inner ear; all are important for different reasons, and they sit in different locations relative to the sensory hair cells. To better understand the ability of different groups of cells to restore hearing, we need to be able to isolate different populations of cells.

The funded project will allow us to create a new model to target specific nonsensory cells within the inner ear and to better understand how these cells can be converted into hair cells. Using this new model, we can specifically investigate cells near the sensory hair cells and understand how they can be reprogrammed. Our lab is also very interested in how the partial loss of genes in the inner ear can affect cellular identities. In addition to targeting specific cells in the ear, we will investigate whether the partial loss of a protein in nonsensory cells may improve their ability to be converted into sensory cells. This information will allow us to further explore possible therapeutic targets for hearing restoration.

Kathleen McNerney, Ph.D.

University at Buffalo, SUNY

The vestibular evoked myogenic potential: unanswered questions regarding stimulus and recording parameters

The vestibular evoked myogenic potential (VEMP) is a response that can be recorded from the sternocleidomastoid (SCM) muscle as well as from other neck muscles such as the trapezius. It is believed to be generated by the saccule, a part of the vestibular system normally responsible for our sense of balance; recent studies have shown that the saccule is also responsive to sound. Three types of stimuli used to elicit the VEMP are air-conducted (AC) stimuli, bone-conducted (BC) stimuli, and galvanic (electrical) stimuli. Although several findings have held true across previous studies, a number of questions remain unanswered. The present study will address these issues by making a direct comparison among the three types of stimuli listed above within the same subjects. In addition, input/output functions will be defined for all three types of stimuli. Finally, we will examine the repeatability of responses to the three types of stimuli across subjects and address the inconsistencies that have been found between monaural and binaural stimulation. This study will not only provide a better understanding of the VEMP but will also enhance its clinical utility.

Anahita Mehta, Ph.D.

University of Michigan

Effects of age on interactions of acoustic features with timing judgments in auditory sequences

Imagine being at a busy party where everyone is talking at once, yet you can still focus on your friend’s voice. This ability to discern important sounds from noise involves integrating different features, such as the pitch (how high or low a sound is), location, and timing of these sounds. As we age, even with good hearing, this integration may become harder, affecting our ability to understand speech in noisy environments. Our brains must combine these features to make sense of our surroundings, a process known as feature integration. However, it’s not entirely clear how these features interact, especially when they conflict. For example, how does our brain handle mixed signals regarding pitch and sound location?
Previous research shows that when cues from different senses, like hearing and sight, occur simultaneously, our performance improves; when they are out of sync, it becomes harder. Less is known about how our brains integrate conflicting cues within the same sense, such as pitch and spatial location in hearing. Our study aims to explore how this ability changes with age and to develop a simple test of feature integration that is easy to administer, especially for older adults. This research may lead to better rehabilitation strategies, making everyday listening tasks easier for everyone.

Frances Meredith, Ph.D.

University of Colorado Denver
The role of K+ conductances in coding vestibular afferent responses

Approximately 615,000 people in the United States suffer from Meniere’s disease, a disorder of the inner ear that causes episodic vertigo, tinnitus, and progressive hearing loss. The underlying etiology of the disease is not known but may include defects in ion channels and alterations in the potassium (K+) ion concentration of inner ear fluid. Specialized hair cells inside the ear detect head movement in the vestibular system and sound signals in the cochlea. A rich variety of channels is found on the membranes of hair cells as well as on the afferent nerve endings that form connections (synapses) with hair cells. Many of these channels selectively allow the passage of K+ ions and are thought to be important for maintaining the appropriate balance of K+ ions in inner ear fluids.

I study an unusual type of nerve ending called a calyx, found at the ends of afferent nerves that form synapses with type I hair cells of the vestibular system. These nerves send electrical signals to the brain about head movements. My goal is to use immunocytochemistry and electrophysiology to identify K+ channels on the calyx membrane and to explore their role in regulating electrical activity and K+ levels in inner ear fluid. I will identify potential routes for K+ entry that could influence calyx properties, and I will investigate whether altered ionic concentrations in inner ear fluid change the buffering capacity of K+ channels and whether this affects the signals that travel along the afferent vestibular nerve to the brain. Meniere’s disease is a disorder of the entire membranous labyrinth of the inner ear and thus affects both the vestibular sensory organs and the cochlea. Similar K+ ion channels are expressed in vestibular and auditory afferent neurons, so studying ion channels present in both systems will reveal properties common to both and will increase our understanding of the importance of ion channels in Meniere’s disease.

Iain M. Miller, Ph.D.

Ohio University

The distribution of glutamate receptors in the turtle utricle: a confocal and electron microscope study

When stimulated by acceleration and head tilt (gravity), sensory hair cells in the turtle utricle, an organ in the inner ear, transmit information about these stimuli to the brain. The long-term goal of this research is to understand what role synaptic structure and composition play in the spatially heterogeneous and diverse discharge properties observed in afferents supplying the vestibular end organs, and in particular the utricle. This knowledge is central to accurate diagnosis and rational treatment strategies for vestibular dysfunction.

Srikanta Mishra, Ph.D.

New Mexico State University
Medial Efferent Mechanisms in Auditory Processing Disorders

Many individuals experience difficulty listening in background noise despite clinically normal hearing and no obvious auditory pathology. This condition is often given the clinical label auditory processing disorder (APD), yet the mechanisms and pathophysiology of APD are poorly understood. One mechanism thought to aid listening in noise is medial olivocochlear (MOC) inhibition, a part of the descending auditory system. The purpose of this translational project is to evaluate whether the functioning of the MOC system is altered in individuals with APD. The benefits of measuring MOC inhibition in individuals with APD are twofold: 1) it could help to better define APD and identify its potential mechanisms, and 2) it may elucidate the functional significance of MOC efferents in listening in complex environments. The potential role of the MOC system in APD pathophysiology, should it be confirmed, would be of significant clinical interest because current APD clinical test batteries lack mechanism-based physiologic tools.

Rahul Mittal, Ph.D.

University of Miami Miller School of Medicine
Deciphering the role of Slc22a4 in the development of the stria vascularis and determining the effect of supplementation with its antioxidant substrate ergothioneine on age-related hearing loss

Since mutations in the SLC22 gene family have been implicated in various pathological conditions, there has been renewed interest in understanding their role in maintaining the normal physiological functions of cells. SLC22A4 is ubiquitously expressed in the body and transports various compounds across the cellular plasma membrane, including acetylcholine and carnitine as well as the naturally occurring antioxidant ergothioneine (ERGO). SLC22A4 is also abundantly expressed in the stria vascularis (SV), but its role in SV biology is not known.

This project will help in understanding how SLC22A4 contributes to SV development and to the SV atrophy and cochlear dysfunction that lead to hearing loss. The project also aims to determine whether ERGO supplementation can prevent SV atrophy and ameliorate age-related hearing loss (presbycusis) in a mouse model.

Sharlen Moore, Ph.D.

Johns Hopkins University

Modulation of neuro-glial cortical networks during tinnitus

My long-term goal is to understand the complexity and temporal sequencing of tinnitus effectors from an integrative perspective, considering the interplay of the diverse cell types that might promote the development and maintenance of tinnitus, in order to provide an updated interpretation of this disorder. A further goal is to use glial cells as a key therapeutic target to treat tinnitus.

Generously funded by the Les Paul Foundation

Clive Morgan, Ph.D.

Oregon Health & Science University
Characterization of Usher syndrome 1F protein complexes

Much of our current knowledge of the molecular makeup of the hair bundle has its origins in genetic studies. Several key genes have been discovered, but these are limited to genes that are absolutely required for hearing and dispensable in other systems. Many independent mutations also occur in a handful of genes, so finding new genes can be quite difficult and expensive. My colleague Peter Barr-Gillespie, Ph.D., has pioneered hair bundle isolation techniques that allow studies of the hair bundle proteome, letting us uncover many features of the hair bundle in single experiments. The next step is to examine how these proteins interact to fulfill the functions of a mechanically sensitive hair bundle, and how genetic abnormalities affect the whole bundle proteome (set of proteins). In this project I will analyze individual protein complexes using a new hair bundle isolation strategy that allows us to isolate and analyze protein complexes from the hair bundle, and I will perform a comparative analysis of the makeup of all Usher syndrome protein complexes. This will shed new light on the proteins directly involved in mechanotransduction.

Bruna Mussoi, Au.D., Ph.D.

University of Tennessee

Auditory neuroplasticity following experience with cochlear implants

Cochlear implants provide substantial benefits to older adults, though the amount of benefit varies across people. The greatest improvements in speech understanding usually occur within the first six months after implantation. It is generally accepted that these gains in performance result from neural changes in the auditory system, but while there is strong evidence of neural changes following cochlear implantation in children, evidence in adults with hearing loss in both ears is limited. This study will examine how neural responses change as a function of the amount of cochlear implant use compared with longstanding hearing aid use. Listeners who are candidates for a cochlear implant (and who either decide to pursue implantation or to keep wearing hearing aids) will be tested at several time points, from before implantation up to six months after implantation. The results of this project will improve our understanding of the impact of cochlear implant use on neural responses in older adults and of their relationship with the ability to understand speech.

Mirna Mustapha-Chaib, Ph.D.

University of Michigan

Determine the functional role of the unique amino terminus of MYO15 in hearing using genetically engineered mice

Assessing the role of the N-terminus of MYO15 in the structural development of hair cells and in the neurosensory process of hearing is expected to provide basic information about the process of hearing at the molecular level. Long term, we expect that proteins interacting with the N-terminus of MYO15 will also be defective in some forms of hearing loss. Models similar to the one we propose have been used as proof of principle for gene therapy. Mutations in humans indicate that the N-terminal portion of MYO15 is required in some way for hearing. Using our resources and experience with genetically engineered mice will advance understanding of the specific molecular function of the N-terminus of MYO15 in mammalian hearing and determine the consequences for morphological development and signal transduction within cochlear hair cells. These studies will therefore make an immediate contribution to the rapidly advancing field of molecular hearing research. The next step will be to identify the proteins that interact with the N-terminus, screen pedigrees for mutations in these genes, and work toward therapeutic intervention for genes that are common causes of deafness.

Vijaya Prakash Krishnan Muthaiah, Ph.D.

University at Buffalo, the State University of New York
Potential of inhibition of poly ADP-ribose polymerase as a therapeutic approach in blast-induced cochlear and brain injury

Many potential drugs in the preclinical phase for treating different types of noise-induced hearing loss (from blast and non-blast noise) revolve around targeting oxidative stress or interfering with the cell death cascade. Though noise-induced oxidative stress and cell death are well studied in the auditory periphery, the effects of noise exposure on the central auditory system remain understudied, especially for blast noise exposure, where both auditory and non-auditory structures in the brain are affected. Impulsive noise (blast wave)-induced hearing loss differs from that caused by continuous noise exposure in that it is more likely to be accompanied by accelerated cognitive deficits, depression, anxiety, dementia, and brain atrophy. It is well established that poly ADP-ribose polymerase (PARP) is a key mediator of cell death and that it is overactivated by oxidative stress. This project will therefore explore PARP inhibition as a potential therapeutic approach for blast-induced cochlear and brain injury. Dampening PARP overactivation with its inhibitor 3-aminobenzamide is expected both to mitigate blast noise-induced oxidative stress and to interfere with the cell death cascade, thereby reducing cell death in both the peripheral and central auditory systems.