Ross Maddox, Ph.D.

University of Washington
Relating behavior to brain in an audio-visual scene

Every day, listeners are presented with a barrage of information in multiple sensory modalities. This can be overwhelming, but it also allows redundant information to be combined across the senses. This binding is well documented but not well understood. Behavioral tests and brain imaging (magneto- and electroencephalography) will be used to study the brain activity associated with combining visual and auditory information. Particular interests include how congruent timing in auditory and visual stimuli allows them to be combined into a single sensory object, and what benefits this provides for the listener. Magneto- and electroencephalography will allow us to examine the brain’s response to our stimuli at a fine time scale and determine which parts of the brain are involved in binding auditory and visual stimuli together. Listening to speech in noisy conditions can be difficult for normal-hearing listeners, but it is even harder for impaired listeners, such as hearing aid users, cochlear implant users, and those with central auditory processing disorders (CAPD). In this first phase, we will work with normal-hearing listeners to establish a baseline and understand how an individual's brain activity relates to their perception.

Research area: Central Auditory Processing Disorder; Fundamental auditory research

Long-term goal of research: This proposal is the beginning of a line of research investigating the specific behavioral effects of audio-visual binding and its processing in the brain. Behavioral tests with brain imaging will be used to investigate the importance of combining information across the visual and auditory senses, and to establish relationships between brain activity and behavior, an effort that could inspire new audiological therapies.

Beula Magimairaj, Ph.D.

University of Central Arkansas
Moving the science forward through interdisciplinary collaborative research integrating Hearing, Language, and Cognitive Science

Clinicians and researchers lack a consensus theoretical and clinical framework for conceptualizing Central Auditory Processing Disorder (CAPD) because professionals in different disciplines characterize it differently. Children diagnosed with CAPD may have deficits in attention, language, and memory, which often go unrecognized. There is no valid and reliable front-end assessment tool that can characterize auditory processing, attention, language, and memory together. This project is an interdisciplinary effort to lay the foundation for such an assessment. Our goal is to develop an assessment that includes sensitive measures that can help build an initial profile of the source(s) of a child’s difficulties that may be manifested as auditory processing deficits. During this 1-year project, computer-based behavioral tasks that integrate theoretical and methodological advances from the CAPD literature and from hearing, language, and cognitive science will be developed. Tasks will be piloted on sixty typically developing children (ages 7-11) who have no history of auditory processing or cognitive disorders for feasibility testing. Developing an assessment that will validly characterize the abilities of affected children is a multi-stage enterprise, and this project is a critical first step.

Anna Majewska, Ph.D.

University of Rochester

Cortical synaptic plasticity in a mouse model of moderate sensorineural hearing loss

The development of cortical networks is exquisitely sensitive to patterned activity elicited through sensory stimulation. Although much is known about somatosensory and visual cortical development, very little is known about the development of auditory cortex network connectivity. Changes in hearing that occur as a result of defects in sensation at the cochlea likely affect the development of higher brain areas that process auditory information. Our research will explore changes in the cortical neural networks that process auditory stimuli in a mouse model in which prestin, a protein crucial for outer hair cell electromotile function, is absent during development. We will address this question by examining the synaptic sites that link individual neurons into networks and comparing their density, distribution, and dynamic remodeling in control and prestin-null mice. We hypothesize that changes in both static and dynamic synaptic structure will be present in the auditory cortex of prestin-null mice, suggesting that cortical auditory networks are altered by degraded hearing during development. This work will shed light on synaptic mechanisms and possible treatments of developmentally acquired hearing loss.

Ani Manichaikul, Ph.D.

University of Virginia
Susceptibility to chronic otitis media: translating gene to function

Each year in the United States, over $5 billion is spent on healthcare for inflammation of the middle ear (ME) known as Otitis Media (OM) in children. Some children develop chronic middle ear infections known as chronic otitis media with effusion and/or recurrent otitis media (COME/ROM). Our goal is to find genetic factors that increase risk for COME/ROM in children. The discovery of causal variants would increase knowledge of novel genes and pathways involved in COME/ROM pathogenesis.

Research area: Otitis Media; Genetics

Long-term goal of research: To improve the clinical prevention of chronic middle ear infections, thereby decreasing pediatric antibiotic use, surgery, and deafness.

Senthilvelan Manohar, Ph.D.

University at Buffalo
Behavioral Model of Loudness Intolerance

High-level noise causes discomfort for typical-hearing individuals. However, following cochlear damage, even moderate-level noise can become intolerable and painful, a condition known as hyperacusis.

One of the critical requirements for understanding and finding a cure for hyperacusis is the development of animal models. I have developed two new animal behavior models to study the pain and annoyance components of hyperacusis. The Active Sound Avoidance Paradigm (ASAP) uses a mouse’s innate aversion to a light, open area and preference for a dark, enclosed box. In the presence of intense noise, the animal shifts its preference to the light area. The Auditory Nociception Test (ANT) is based on a traditional pain threshold assessment. Although animals show an elevated pain threshold in the presence of 90 and 100 dB noise, at 110 and 115 dB they show a reduced pain tolerance. Using these two tests together will allow me to assess emotional reactions to sound as well as the neural interactions between auditory perception and pain sensation.

Adam Markaryan, Ph.D.

University of Chicago

Mitochondrial DNA deletions and cochlear element degeneration in presbycusis

The long-term goal of the Bloom Temporal Bone Laboratory is to understand the molecular mechanisms involved in age-related hearing loss and to develop a rationale for therapy based on this information. This project will quantify the mitochondrial DNA common deletion level and total deletion load in the cochlear elements obtained from individuals with presbycusis and from normal-hearing controls. The relationships between deletion levels, the extent of cochlear element degeneration, and the severity of hearing loss will be explored in human archival tissues to clarify the role of deletions in presbycusis.

This research award is funded by The Burch-Safford Foundation Inc.

David Martinelli, Ph.D.

University of Connecticut Health Center
Creation and validation of a novel, genetically induced animal model for hyperacusis

Hyperacusis is a condition in which a person experiences pain at much lower sound levels than listeners with typical hearing. While the existence of outer hair cell afferent neurons is well established, it is not known what information the outer hair cells communicate to the brain through these afferents. This project’s hypothesis is that the function of these mysterious afferents is to signal to the brain when sounds are intense enough to be painful and/or damaging, and that this circuitry is distinct from the cochlea-to-brain circuitry that provides general hearing. The hypothesis will be tested using a novel animal model in which a protein essential for the proposed “pain” circuit is missing. The absence of this protein is predicted to lessen the perception of auditory pain when high-intensity sounds are presented. If confirmed, this research has implications for those suffering from hyperacusis.

Matthew Masapollo, Ph.D.

University of Florida
Contributions of auditory and somatosensory feedback to speech motor control in congenitally deaf 9- to 10-year-olds and adults

Cochlear implants (CIs) have led to stunning advances in the prospects for children with congenital hearing loss to acquire spoken language in a typical manner, but problems persist. In particular, children with CIs show much larger deficits in acquiring sensitivity to the individual speech sounds of language (phonological structure) than in acquiring vocabulary and syntax. This project will test the hypothesis that the acquisition of detailed phonological representations would be facilitated by a stronger emphasis on the speech motor control associated with producing those representations. This approach is novel because most interventions for children with CIs focus strongly on listening to spoken language, which may overlook the importance of practice in producing language, an idea we will examine. To achieve that objective, we will observe speech motor control directly in speakers with congenital hearing loss and CIs, with and without sensory feedback.

Jameson Mattingly, M.D.

The Ohio State University
Differentiating Ménière's disease and vestibular migraine using audiometry and vestibular threshold measurements

Patients presenting with recurrent episodic vertigo (dizziness), such as those with Ménière's disease (MD) and vestibular migraine (VM), can pose a diagnostic challenge, as both conditions can produce recurrent vertigo, tinnitus, motion intolerance, and hearing loss. Further complicating this issue is that the diagnosis of each is based upon patient history, with little contribution from objective measures. Previous attempts to better differentiate MD and VM have included a variety of auditory and vestibular tests, but these evaluations have demonstrated limitations or have not shown the sensitivity and specificity required for clinical use. Recently, vestibular perceptual threshold testing has shown the potential to better differentiate MD and VM by demonstrating different and opposite trends with testing, and these evaluations are ongoing. In addition to vestibular evaluations, audiometry (hearing testing) is a mainstay of testing in those with vestibular symptoms, especially with any concern of MD, and is thus commonly available. Standard hearing testing, however, is not sensitive or specific enough alone to differentiate MD and VM, but this project’s hypothesis is that combining audiograms with vestibular perceptual threshold testing will yield diagnostic power greater than that of either test used individually. The population of patients with MD and VM is an ideal setting in which to examine similarities and differences, as MD is classically an otologic disease while VM, in theory, has little to do with auditory function. Additionally, this same principle can be applied to any disease process that affects both vestibular and auditory function (such as tumors or ototoxicity).

Andrew A. McCall, M.D.

University of Pittsburgh
The Influence of Dynamic Limb Movement on Activity within the Vestibular Nuclei: the Role of the Cerebellum

Balance is inherently a multi-modal sense. To maintain balance in upright stance or during walking, input from several modalities – namely the vestibular system (from the inner ear), proprioceptive system (from muscles and joints), and visual system – must be interpreted by the central nervous system and synthesized to understand body position in space relative to gravity. Our goal is to investigate how vestibular and limb proprioceptive inputs interact in the central nervous system, with a particular focus on the brainstem and cerebellum as these are key sites of multisensory processing of balance input. We anticipate that the results of these studies will have important implications for the understanding of multi-sensory processing within central vestibular pathways and for the clinical treatment of humans with vestibular disorders.

Research area: Vestibular and Balance Disorders; Vestibular Physiology

Long-term goal of research: To elucidate the physiologic pathways responsible for integrating vestibular and proprioceptive information and to ultimately develop clinical strategies based upon these physiologic underpinnings to improve the health of humans with vestibular disorders.

Carolyn McClaskey, Ph.D.

Medical University of South Carolina

Age and hearing loss effects on subcortical envelope encoding

As we get older, it often becomes difficult to understand speech in busy environments such as a crowded restaurant or large crowd. Changes in auditory temporal processing are known to partly underlie these communication difficulties. Such altered temporal processing, including distorted neural encoding of sound envelopes, may arise from age- and hearing loss-related changes to subcortical auditory structures. This project uses a combination of behavioral, electrophysiological, and neuroimaging measures to assess how the auditory midbrain changes with age and age-related hearing loss and how this affects envelope encoding and speech-in-noise recognition. 

Generously funded by Royal Arch Research Assistance

Elizabeth McCullagh, Ph.D.

University of Colorado
The role of the MNTB in sound localization impairments in autism spectrum disorder

The processing of sound location and the establishment of spatial channels to separate several simultaneous sounds are critical for social interaction, such as carrying on a conversation in a noisy room or focusing on a person speaking. Impairments in sound localization can often result in central auditory processing disorders (CAPD). A form of CAPD is also observed clinically across autism spectrum disorders and contributes significantly to quality-of-life issues in autistic patients.

The circuit in charge of initially localizing sound sources and establishing spatial channels is located in the auditory brain stem and functions with precisely integrated neural excitation and inhibition. A recent theory posits that autism may be caused by an imbalance of excitatory and inhibitory synapses, particularly in sensory systems. An imbalance of excitation and inhibition would lead to a decreased ability to separate competing sound sources. While the current excitation to inhibition model of autism assumes that most inhibition in the brain is GABAergic, the sound localization pathway in the brainstem functions primarily with temporally faster and more precise glycinergic inhibition.

The role of glycinergic inhibition has never been studied in autism spectrum disorders and could be a crucial component of altered synaptic processing in autism. The brainstem is a good model in which to address this question, since its primary form of inhibition is glycinergic and the ratio of excitation to inhibition is crucial for normal processing.

Melissa McGovern, Ph.D.

University of Pittsburgh

Hair cell regeneration in the mature cochlea: investigating new models to reprogram cochlear epithelial cells into hair cells 

Sensory hair cells in the inner ear detect mechanical auditory stimulation and convert it into a signal that the brain can interpret. Hair cells are susceptible to damage from loud noises and some medications. Our lab investigates the ability of nonsensory cells in the inner ear to regenerate lost hair cells. We regenerate cells in the ear by converting nonsensory cells into sensory cells through genetic reprogramming: key hair cell-inducing program genes are expressed in non-hair cells and partially convert them into hair cells. There are multiple types of nonsensory cells in the inner ear, each important for different reasons, and they occupy different locations relative to the sensory hair cells. In order to better understand the ability of different groups of cells to restore hearing, we need to be able to isolate different populations of cells. The funded project will allow us to create a new model to target specific nonsensory cells within the inner ear and better understand how these cells can be converted into hair cells. Using this new model, we can specifically investigate cells near the sensory hair cells and understand how they can be reprogrammed. Our lab is also very interested in how the partial loss of genes in the inner ear can affect cellular identities. In addition to targeting specific cells in the ear, we will investigate whether the partial loss of a protein in nonsensory cells may improve their ability to be converted into sensory cells. This information will allow us to further explore possible therapeutic targets for hearing restoration.

Kathleen McNerney, Ph.D.

University at Buffalo, SUNY

The vestibular evoked myogenic potential: unanswered questions regarding stimulus and recording parameters

The vestibular evoked myogenic potential (VEMP) is a response that can be recorded from the sternocleidomastoid (SCM) muscle as well as other neck muscles, such as the trapezius. It is believed to be generated by the saccule, a part of the vestibular system normally responsible for our sense of balance; recent studies have shown that it is also responsive to sound. Three types of stimuli used to elicit the VEMP are air-conducted (AC) stimuli, bone-conducted (BC) stimuli, and galvanic (electrical) stimuli. Although several findings have held true across previous studies, several questions remain unanswered. The present study will attempt to address these issues by making a direct comparison among the three types of stimuli listed above within the same subjects. In addition, input/output functions will be defined for all three types of stimuli. Finally, we will examine the repeatability of the three types of stimuli across subjects and address the inconsistencies that have been found between monaural and binaural stimulation. This study will not only provide a better understanding of the VEMP, it will also enhance its clinical utility.

Anahita Mehta, Ph.D.

University of Michigan

Effects of age on interactions of acoustic features with timing judgments in auditory sequences

Imagine being at a busy party where everyone is talking at once, yet you can still focus on your friend’s voice. This ability to discern important sounds from noise involves integrating different features, such as the pitch (how high or low a sound is), location, and timing of these sounds. As we age, even with good hearing, this integration may become harder, affecting our ability to understand speech in noisy environments. Our brains must combine these features to make sense of our surroundings, a process known as feature integration. However, it’s not entirely clear how these features interact, especially when they conflict. For example, how does our brain handle mixed signals regarding pitch and sound location?
Previous research shows that when cues from different senses, like hearing and sight, occur simultaneously, our performance improves. But if they are out of sync, it becomes harder. Less is known about how our brains integrate conflicting cues within the same sense, such as pitch and spatial location in hearing. Our study aims to explore how this ability changes with age and develop a simple test that could be used as an easy task of feature integration, especially for older adults. This research may lead to better rehabilitation strategies, making everyday listening tasks easier for everyone.

Frances Meredith, Ph.D.

University of Colorado Denver
The role of K+ conductances in coding vestibular afferent responses

Approximately 615,000 people in the United States suffer from Meniere’s disease, a disorder of the inner ear that causes episodic vertigo, tinnitus, and progressive hearing loss. The underlying etiology of the disease is not known but may include defects in ion channels and alterations in the potassium (K+) ion concentration of inner ear fluid. Specialized hair cells inside the ear detect head movement in the vestibular system and sound signals in the cochlea. A rich variety of channels is found on the membranes of hair cells as well as on the afferent nerve endings that form connections (synapses) with hair cells. Many of these channels selectively allow the passage of K+ ions and are thought to be important for maintaining the appropriate balance of K+ ions in inner ear fluids. I study an unusual type of nerve ending called a calyx, found at the ends of afferent nerves that form synapses with type I hair cells of the vestibular system. These nerves send electrical signals to the brain about head movements. My goal is to use immunocytochemistry and electrophysiology to identify K+ channels on the calyx membrane and to explore their role in regulating electrical activity and K+ levels in inner ear fluid. I will identify potential routes for K+ entry that could influence calyx properties. I will investigate whether altered ionic concentrations in inner ear fluid change the buffering capacity of K+ channels and whether this affects the signals that travel along the afferent vestibular nerve to the brain. Meniere’s disease is a disorder of the entire membranous labyrinth of the inner ear and thus affects both the vestibular sensory organs and the cochlea. Similar K+ ion channels are expressed in vestibular and auditory afferent neurons. Studying ion channels present in both auditory and vestibular systems will reveal properties common to both and will increase our understanding of the importance of ion channels in Meniere’s disease.

Iain M. Miller, Ph.D.

Ohio University

The distribution of glutamate receptors in the turtle utricle: a confocal and electron microscope study

When stimulated by acceleration and head tilt (gravity), sensory hair cells in the turtle utricle, an organ in the inner ear, transmit information about these stimuli to the brain. The long-term goal of this research is to understand what role synaptic structure and composition play in the observed spatially heterogeneous and diverse discharge properties of the afferents supplying the vestibular end organs, and in particular the utricle. This knowledge is central to accurate diagnosis and rational treatment strategies for vestibular dysfunction.

Srikanta Mishra, Ph.D.

New Mexico State University
Medial Efferent Mechanisms in Auditory Processing Disorders

Many individuals experience listening difficulty in background noise despite clinically normal hearing and no obvious auditory pathology. This condition has often received the clinical label of auditory processing disorder (APD). However, the mechanisms and pathophysiology of APD are poorly understood. One mechanism thought to aid listening in noise is medial olivocochlear (MOC) inhibition, a part of the descending auditory system. The purpose of this translational project is to evaluate whether the functioning of the MOC system is altered in individuals with APD. The benefits of measuring MOC inhibition in individuals with APD are twofold: 1) it could be useful to better define APD and identify its potential mechanisms, and 2) it may elucidate the functional significance of MOC efferents in listening in complex environments. The potential role of the MOC system in APD pathophysiology, should it be confirmed, would be of significant clinical interest because current APD clinical test batteries lack mechanism-based physiologic tools.

Rahul Mittal, Ph.D.

University of Miami Miller School of Medicine
Deciphering the role of Slc22a4 in the development of the stria vascularis and determining the effect of supplementation with its antioxidant substrate, ergothioneine, on age-related hearing loss

Since mutations in the SLC22 gene family have been implicated in various pathological conditions, there has been renewed interest in understanding their role in maintaining normal physiological functions of cells. SLC22A4 is ubiquitously expressed in the body and transports various compounds across the cellular plasma membrane, including acetylcholine and carnitine as well as the naturally occurring antioxidant ergothioneine (ERGO). In addition, SLC22A4 is abundantly expressed in the stria vascularis (SV), but its role in SV biology is not known.

This project will help in understanding how SLC22A4 contributes to SV development, atrophy, and dysfunction of the cochlea, leading to hearing loss. The project also aims to determine whether ERGO supplementation can prevent SV atrophy and ameliorate age-related hearing loss (presbycusis) in a mouse model.

Jane Mondul, Au.D., Ph.D., CCC-A

Purdue University

Sound-induced plasticity of the lateral olivocochlear efferent system

Loud sounds can damage the auditory system and cause hearing loss. But not all sound is bad – safe sound exposure can actually help the brain fine-tune how we hear, especially in noisy places. A part of the auditory system called the lateral olivocochlear (LOC) pathway may help with this. The LOC system’s chemical signals change after sound exposure, suggesting a form of “plasticity” (adaptability), but scientists don’t yet know exactly how it works. Our project will study how the LOC system changes after safe sound exposure, how this LOC plasticity affects hearing, and whether it still occurs when the ear is damaged. We will test this in mice by measuring their ability to hear sounds in noise and by looking closely at cells in the ear and brain. What we learn could guide new sound-based or drug-based therapies to protect hearing and improve communication in noisy settings.