Central Auditory Processing Disorder

Ross Maddox, Ph.D.

University of Washington
Relating behavior to brain in an audio-visual scene

Every day, listeners are presented with a barrage of sensory information in multiple sensory modalities. This can be overwhelming, but it also allows redundant information to be combined across the senses. This binding is well documented but not well understood. Behavioral tests and brain imaging (magneto- and electroencephalography) will be used to study the brain activity associated with combining visual and auditory information. Particular interests include how congruent timing in auditory and visual stimuli allows them to be combined into a single sensory object, and what benefits this has for the listener. Magneto- and electroencephalography will allow us to examine the brain's response to our stimuli on a fine time scale and to determine which parts of the brain are involved in binding auditory and visual stimuli together. Listening to speech in noisy conditions can be difficult for normal-hearing listeners, but it is even harder for impaired listeners, such as hearing aid users, cochlear implant users, and those with central auditory processing disorders (CAPD). In this first phase, we will work with normal-hearing listeners to establish a baseline and understand how an individual's brain activity is related to their perception.
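
Purely as an illustration of what "congruent timing" between auditory and visual stimuli can mean in practice (this is not taken from the proposal), the sketch below drives a tone's amplitude and a visual luminance value from the same slowly varying envelope, with a time-shifted envelope as an incongruent control. All parameter values are arbitrary choices for demonstration.

```python
import numpy as np

# Illustrative sketch: build a temporally congruent audio-visual stimulus pair
# by driving a tone's amplitude and a visual luminance value from one envelope.
# All parameter values are arbitrary.

fs_audio = 44100        # audio sample rate (Hz)
fs_video = 60           # video frame rate (Hz)
duration = 2.0          # seconds
carrier_hz = 1000.0     # tone carrier frequency

rng = np.random.default_rng(0)

# A slowly varying, low-pass envelope shared by both modalities.
n_env = int(duration * fs_video)
env = rng.standard_normal(n_env)
kernel = np.hanning(15)
env = np.convolve(env, kernel / kernel.sum(), mode="same")
env = (env - env.min()) / (env.max() - env.min())   # normalize to 0..1

# Auditory stream: amplitude-modulated tone (envelope upsampled to audio rate).
t_audio = np.arange(int(duration * fs_audio)) / fs_audio
env_audio = np.interp(t_audio, np.arange(n_env) / fs_video, env)
audio = env_audio * np.sin(2 * np.pi * carrier_hz * t_audio)

# Visual stream: per-frame luminance following the same envelope (congruent),
# versus a time-shifted copy of the envelope (incongruent control).
luminance_congruent = env
luminance_incongruent = np.roll(env, n_env // 2)

print(audio.shape, luminance_congruent.shape, luminance_incongruent.shape)
```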

Research area: Central Auditory Processing Disorder; fundamental auditory research

Long-term goal of research: This proposal is the beginning of a line of research investigating the specific behavioral effects of audio-visual binding and its processing in the brain. Behavioral tests with brain imaging will be used to investigate the importance of combining information across the visual and auditory senses and to establish relationships between brain activity and behavior, an effort that could inspire new audiological therapies.

Beula Magimairaj, Ph.D.

University of Central Arkansas
Moving the science forward through interdisciplinary collaborative research integrating Hearing, Language, and Cognitive Science

Clinicians and researchers lack a consensus theoretical and clinical framework for conceptualizing Central Auditory Processing Disorder (CAPD) because professionals in different disciplines characterize it differently. Children diagnosed with CAPD may have deficits in attention, language, and memory, which often go unrecognized. There is no valid and reliable front-end assessment tool that can characterize auditory processing, attention, language, and memory together. This project is an interdisciplinary effort to lay the foundation for such an assessment. Our goal is to develop an assessment that includes sensitive measures to help build an initial profile of the source(s) of a child's difficulties that may manifest as auditory processing deficits. During this one-year project, computer-based behavioral tasks that integrate theoretical and methodological advances from the CAPD literature and from hearing, language, and cognitive science will be developed. Tasks will be piloted for feasibility on sixty typically developing children (ages 7-11) with no history of auditory processing or cognitive disorders. Developing an assessment that validly characterizes the abilities of affected children is a multi-stage enterprise, and this project is a critical first step.

Carolyn McClaskey, Ph.D.

Medical University of South Carolina

Age and hearing loss effects on subcortical envelope encoding

Generously funded by Royal Arch Research Assistance

Elizabeth McCullagh, Ph.D.

University of Colorado
The role of the MNTB in sound localization impairments in autism spectrum disorder

The processing of sound location and the establishment of spatial channels to separate several simultaneous sounds are critical for social interaction, such as carrying on a conversation in a noisy room or focusing on one person speaking. Impairments in sound localization often result in central auditory processing disorders (CAPD). A form of CAPD is also observed clinically across autism spectrum disorders and contributes significantly to quality-of-life issues in autistic patients.

The circuit in charge of initially localizing sound sources and establishing spatial channels is located in the auditory brain stem and functions with precisely integrated neural excitation and inhibition. A recent theory posits that autism may be caused by an imbalance of excitatory and inhibitory synapses, particularly in sensory systems. An imbalance of excitation and inhibition would lead to a decreased ability to separate competing sound sources. While the current excitation to inhibition model of autism assumes that most inhibition in the brain is GABAergic, the sound localization pathway in the brainstem functions primarily with temporally faster and more precise glycinergic inhibition.

The role of glycinergic inhibition has never been studied in autism spectrum disorders and could be a crucial component of altered synaptic processing in autism. The brainstem is a good model for addressing this question, since its primary form of inhibition is glycinergic and the ratio of excitation to inhibition is crucial for normal processing.
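
As an illustration of the excitation/inhibition balance argument above (this is not the investigator's model), the sketch below implements a simple subtractive unit in the style of the lateral superior olive, which receives excitation driven by the ipsilateral ear and MNTB-relayed glycinergic inhibition driven by the contralateral ear. Weakening the inhibitory gain flattens the unit's sensitivity to interaural level differences. All gains and slopes are arbitrary.

```python
import numpy as np

# Minimal sketch: a subtractive model of an LSO-like unit. Excitation grows
# with the ipsilateral ear's level, inhibition (relayed via the MNTB) grows
# with the contralateral ear's level, and the output is the rectified
# difference. Reducing the inhibitory gain mimics weakened glycinergic
# inhibition.

def lso_rate(ild_db, inhib_gain=1.0, slope=0.2):
    """Firing-rate proxy: excitation minus scaled inhibition, rectified.

    ild_db > 0 means the ipsilateral (excitatory) ear is more intense.
    """
    excitation = 1.0 / (1.0 + np.exp(-slope * ild_db))   # grows with ipsi level
    inhibition = 1.0 / (1.0 + np.exp(+slope * ild_db))   # grows with contra level
    return np.maximum(excitation - inhib_gain * inhibition, 0.0)

ilds = np.linspace(-30, 30, 13)
balanced = lso_rate(ilds, inhib_gain=1.0)   # normal inhibition
weakened = lso_rate(ilds, inhib_gain=0.3)   # reduced glycinergic inhibition

# The balanced curve changes steeply with ILD near the midline (good spatial
# sensitivity); with weakened inhibition the curve is shallower, so the same
# change in ILD produces a smaller change in output.
for ild, b, w in zip(ilds, balanced, weakened):
    print(f"ILD {ild:+5.1f} dB  balanced {b:.2f}  weakened {w:.2f}")
```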

Srikanta Mishra, Ph.D.

New Mexico State University
Medial Efferent Mechanisms in Auditory Processing Disorders

Many individuals experience listening difficulty in background noise despite clinically normal hearing and no obvious auditory pathology. This condition is often given the clinical label auditory processing disorder (APD). However, the mechanisms and pathophysiology of APD are poorly understood. One mechanism thought to aid listening in noise is medial olivocochlear (MOC) inhibition, part of the descending auditory system. The purpose of this translational project is to evaluate whether the functioning of the MOC system is altered in individuals with APD. The benefits of measuring MOC inhibition in individuals with APD are twofold: 1) it could help better define APD and identify its potential mechanisms, and 2) it may elucidate the functional significance of MOC efferents in listening in complex environments. The potential role of the MOC system in APD pathophysiology, should it be confirmed, would be of significant clinical interest because current APD clinical test batteries lack mechanism-based physiologic tools.

Vijaya Prakash Krishnan Muthaiah, Ph.D.

University at Buffalo, the State University of New York
Potential of inhibition of poly ADP-ribose polymerase as a therapeutic approach in blast-induced cochlear and brain injury

Many potential drugs in the preclinical phase for treating different types of noise-induced hearing loss (from blast and non-blast noise) revolve around targeting oxidative stress or interfering with the cell death cascade. Though noise-induced oxidative stress and cell death are well studied in the auditory periphery, the effects of noise exposure on the central auditory system remain understudied, especially for blast exposure, where both auditory and non-auditory structures in the brain are affected. Impulse noise (blast wave)-induced hearing loss differs from continuous noise exposure in that it is more likely to be accompanied by accelerated cognitive deficits, depression, anxiety, dementia, and brain atrophy. It is well established that poly ADP-ribose polymerase (PARP) is a key mediator of cell death and is overactivated by oxidative stress. This project will therefore explore PARP inhibition as a potential therapeutic approach for blast-induced cochlear and brain injury. Dampening PARP overactivation with its inhibitor 3-aminobenzamide is expected both to mitigate blast-induced oxidative stress and to interfere with the cell death cascade, thereby reducing cell death in both the peripheral and central auditory systems.

Kirill Vadimovich Nourski, M.D., Ph.D.

University of Iowa
Temporal Processing in the Human Auditory Cortex

My research area is the function of the auditory cortex, the hearing center in the brain. Some neurosurgical patients undergo an operation in which arrays of electrodes are temporarily implanted in the brain for clinical diagnostic purposes. This provides a unique opportunity to study how the auditory cortex works by measuring its activity ("brain waves") directly from the brain. My project involves measuring the brain's responses to the timing information in sounds and its ability to accurately follow this timing and use it to build a coherent percept of the environment. I want to understand where and how, specifically, timing cues are processed in the auditory cortex. Patients with cochlear implants are largely dependent on timing and rhythm cues to understand speech and communicate. People who have auditory processing disorders, on the other hand, may be impaired in their ability to process that kind of information. In order to come up with new ways of assisting people with auditory processing disorders, it is important to understand how the timing and rhythm of speech are usually handled by the brain.
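
As a generic illustration of how one can quantify how faithfully neural activity follows stimulus timing (this is not the investigator's actual analysis), the sketch below computes inter-trial phase coherence at a stimulus repetition rate from simulated trials; all parameters are arbitrary.

```python
import numpy as np

# Illustrative sketch: inter-trial phase coherence (ITPC) at the repetition
# rate of a periodic stimulus, computed from simulated trial data standing in
# for intracranial recordings.

fs = 1000.0            # sampling rate (Hz)
rate_hz = 8.0          # repetition rate of the stimulus (e.g., a click train)
n_trials = 50
n_samples = int(2.0 * fs)
t = np.arange(n_samples) / fs

rng = np.random.default_rng(1)
# Each trial: a small response locked to the stimulus rate plus noise.
trials = (0.3 * np.sin(2 * np.pi * rate_hz * t)
          + rng.standard_normal((n_trials, n_samples)))

# Fourier coefficient of each trial at the stimulation frequency.
freqs = np.fft.rfftfreq(n_samples, d=1 / fs)
bin_idx = np.argmin(np.abs(freqs - rate_hz))
coeffs = np.fft.rfft(trials, axis=1)[:, bin_idx]

# ITPC: length of the mean unit phase vector across trials
# (0 = no phase locking, 1 = identical phase on every trial).
itpc = np.abs(np.mean(coeffs / np.abs(coeffs)))
print(f"ITPC at {rate_hz:.1f} Hz: {itpc:.2f}")
```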

Research area: fundamental auditory research

Long-term goal of research: Beyond this study, my long-term goal is to better understand how the different areas that comprise the auditory cortex in humans are organized and what specific roles they play in processing information about sounds, particularly as it relates to the perception of speech. Ultimately, this knowledge will contribute to finding new and/or improved solutions for people with hearing loss and auditory processing disorders.

Z. Ellen Peng, Ph.D.

University of Wisconsin-Madison
Investigating cortical processing during comprehension of reverberant speech in adolescents and young adults with cochlear implants

Through early cochlear implant (CI) fitting, many children diagnosed with profound sensorineural hearing loss gain access to verbal communication through electrical hearing and go on to develop spoken language. Despite good speech outcomes tested in sound booths, many children experience difficulties understanding speech in noisy and reverberant indoor environments. Although children spend up to 75 percent of their learning time in classrooms, how adverse classroom acoustics compound the difficulty of processing speech already degraded by the CI is not well understood. In this project, we examine speech understanding in classroom-like environments through immersive acoustic virtual reality. In addition to behavioral responses, we measure neural activity using functional near-infrared spectroscopy (fNIRS), a noninvasive, CI-compatible neuroimaging technique, in cortical regions responsible for speech understanding and sustained attention. Our findings will reveal the neural signature of speech processing in classroom-like environments with adverse room acoustics by CI users who developed language through electrical hearing.
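
As a minimal illustration of how reverberant listening conditions can be simulated (this is not the project's virtual-acoustics system), the sketch below convolves a signal with a synthetic room impulse response whose decay is set by an assumed reverberation time; all values are placeholders.

```python
import numpy as np

# Minimal sketch: approximate classroom-like reverberation by convolving a
# clean signal with a room impulse response (RIR). The RIR here is modeled as
# exponentially decaying noise reaching -60 dB at an assumed T60.

fs = 16000                      # sample rate (Hz)
t60 = 0.8                       # assumed reverberation time (seconds)
rng = np.random.default_rng(2)

# Synthetic RIR: white noise shaped by an exponential amplitude decay.
n_rir = int(t60 * fs)
t = np.arange(n_rir) / fs
decay = 10.0 ** (-3.0 * t / t60)          # -60 dB amplitude decay at t = t60
rir = rng.standard_normal(n_rir) * decay
rir /= np.max(np.abs(rir))

# Stand-in for a clean speech waveform (replace with a real recording).
speech = rng.standard_normal(fs * 1) * np.hanning(fs * 1)

# Reverberant signal = clean signal convolved with the RIR.
reverberant = np.convolve(speech, rir)
print(len(speech), len(rir), len(reverberant))
```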

Melissa Polonenko, Ph.D.

University of Minnesota–Twin Cities

Identifying hearing loss through neural responses to engaging stories

Spoken language acquisition in children with hearing loss relies on early identification of hearing loss followed by timely fitting of hearing devices to ensure they receive an adequate representation of the speech they need to hear. Yet current tests for young children rely on non-speech stimuli, which are processed differently by hearing aids and do not fully capture the complexity of speech. This project will develop a new and efficient test, called multiband peaky speech, that uses engaging narrated stories and responses recorded from the surface of the head (EEG) to identify frequency-specific hearing loss. Computer modeling and EEG experiments in adults will determine the best combination of parameters and stories to speed up testing for use in children and will evaluate the test's ability to identify hearing loss. This work lays the necessary groundwork for extending the method to children and paves the way for clinics to use it as a hearing screener for young children, ultimately supporting our ability to provide timely, enhanced information for spoken language development.
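
As a simplified sketch of the general deconvolution idea behind deriving responses to continuous speech (this is not the exact multiband peaky speech method), the code below recovers an evoked waveform by regularized frequency-domain deconvolution of simulated EEG with a stimulus-derived pulse train; all names and values are illustrative.

```python
import numpy as np

# Minimal sketch: deconvolve EEG with a stimulus-derived regressor (a pulse
# train marking roughly speech-like pulse times) to recover a derived evoked
# response. Data are simulated.

fs = 10000                      # sampling rate (Hz)
duration = 60.0                 # seconds of "story" listening
n = int(fs * duration)
rng = np.random.default_rng(3)

# Regressor: pulse train with ~100 pulses per second at jittered intervals.
pulse_times = np.cumsum(rng.uniform(0.008, 0.012, size=int(duration * 100)))
pulse_times = pulse_times[pulse_times < duration]
x = np.zeros(n)
x[(pulse_times * fs).astype(int)] = 1.0

# Simulated EEG: each pulse evokes a small damped-sine wave buried in noise.
kernel_t = np.arange(int(0.02 * fs)) / fs
kernel = np.exp(-kernel_t / 0.004) * np.sin(2 * np.pi * 300 * kernel_t)
eeg = np.convolve(x, kernel, mode="full")[:n] + 10 * rng.standard_normal(n)

# Regularized deconvolution: W = conj(X) * Y / (|X|^2 + lambda).
X = np.fft.rfft(x)
Y = np.fft.rfft(eeg)
W = np.conj(X) * Y / (np.abs(X) ** 2 + 1e3)
w = np.fft.irfft(W, n)

# The first ~20 ms of w should roughly recover the evoked waveform's shape.
print(w[: int(0.02 * fs)].round(3)[:10])
```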

The long-term goal is to develop an engaging, objective clinical test that uses stories to identify hearing loss in young children and to evaluate changes in their responses to the same speech heard through hearing aids. This goal addresses two important needs identified by the U.S. Early Hearing Detection and Intervention (EHDI) program and will positively impact the developmental trajectory of thousands of children who need monitoring of their hearing status and evaluation of outcomes with their hearing devices.

Generously funded by Royal Arch Research Assistance

Erin K. Purcell, Ph.D.

Khaleel Razak, Ph.D.

University of California, Riverside
Age-related hearing loss and cortical processing

Presbycusis (age-related hearing loss) is one of the most prevalent forms of hearing impairment in humans and contributes to speech recognition impairments and cognitive decline. Both peripheral and central auditory system changes are involved in presbycusis, but the relative contributions of peripheral hearing loss and brain aging to presbycusis-related declines in auditory processing remain unclear. This project will address this question by comparing two groups of genetically engineered, age-matched mice: one that develops presbycusis and one that does not. Spectrotemporal processing (of the kind required for speech processing) will be studied as an outcome measure.

Christina Reuterskiöld, Ph.D.

New York University
Rhyme Awareness in Children with Cochlear Implants: Investigating the Effect of a Degraded Auditory System on Auditory Processing, Language, and Literacy Development

Successful literacy is critical for a child's development. Decoding written words depends largely on the child's processing of speech sounds, and a certain level of awareness of speech sounds and words is required to develop literacy skills. If the benefits of early cochlear implantation support the development of central auditory processing skills and phonological awareness, children with cochlear implants (CIs) would be expected to acquire phonological awareness skills comparable to those of children with typical hearing.

However, past research has generated conflicting results on this topic, which this project will attempt to remedy by investigating rhyme recognition skills and vocabulary acquisition in children who received CIs early in life. With co-principal investigator Katrien Vermeire, Ph.D., we will also shed light on the importance of central auditory processing during a child's first years of life for developing strong literacy skills.

William “Jason” Riggs, Au.D.

The Ohio State University
Electrophysiological characteristics in children with auditory neuropathy spectrum disorder

This project will focus on understanding different sites of lesion (impairment) in children with auditory neuropathy spectrum disorder (ANSD). ANSD is a unique form of hearing loss that is thought to occur in approximately 10 to 20 percent of all children with severe to profound sensorineural hearing loss and results in abnormal auditory perception. Neural encoding by the auditory nerve will be investigated in children using electrophysiologic techniques (acoustically and electrically evoked) in order to provide objective evidence of peripheral auditory function. Results can then be used to optimize care from the very beginning of cochlear implant use in children with this impairment.

Merri J. Rosen, Ph.D.

Northeast Ohio Medical University
Effects of developmental conductive hearing loss on communication processing: perceptual deficits and neural correlates in an animal model

Conductive hearing loss (CHL), which reduces the sound conducted to the inner ear, is often associated with chronic ear infections (otitis media). There is growing awareness that CHL in children is a risk factor for speech and language deficits. However, children often have intermittent bouts of hearing loss and receive varying treatments. My research uses an animal model in which the duration and extent of CHL can be effectively controlled. This research will identify parameters of natural vocalizations (such as slow or fast changes in pitch or loudness) that are poorly detected after early CHL. Neural responses from the auditory cortex will be recorded while animals behaviorally distinguish vocalizations that vary in specific ways. This will reveal the specific vocalization components that are perceptually impaired by developmental hearing loss. These components should be used as targets for intervention and remediation. Creating training paradigms for children that target these parameters should improve speech perception and comprehension.
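
As a generic illustration of the kinds of parametric stimuli described above (this is not the lab's stimulus code), the sketch below generates tones with slow versus fast sinusoidal changes in pitch (frequency modulation) and loudness (amplitude modulation); all values are arbitrary examples.

```python
import numpy as np

# Illustrative sketch: tones with controllable "slow" versus "fast" changes in
# pitch (FM) and loudness (AM), standing in for vocalization parameters.

fs = 44100
dur = 1.0
t = np.arange(int(fs * dur)) / fs

def modulated_tone(carrier_hz, fm_rate_hz, fm_depth_hz, am_rate_hz, am_depth):
    """Tone with sinusoidal frequency and amplitude modulation."""
    # Instantaneous phase: cumulative sum (integral) of the instantaneous frequency.
    inst_freq = carrier_hz + fm_depth_hz * np.sin(2 * np.pi * fm_rate_hz * t)
    phase = 2 * np.pi * np.cumsum(inst_freq) / fs
    envelope = 1.0 + am_depth * np.sin(2 * np.pi * am_rate_hz * t)
    return envelope * np.sin(phase)

slow = modulated_tone(carrier_hz=4000, fm_rate_hz=2, fm_depth_hz=400,
                      am_rate_hz=2, am_depth=0.8)
fast = modulated_tone(carrier_hz=4000, fm_rate_hz=40, fm_depth_hz=400,
                      am_rate_hz=40, am_depth=0.8)
print(slow.shape, fast.shape)
```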

Research area: Hearing Loss; Auditory Development; Auditory Physiology; Fundamental Auditory Research

Long-term goal of research: To identify the neural mechanisms by which hearing loss impairs auditory perception of natural sounds, and thereby show how the brain distinguishes sounds from different sources in complex environments. Neurophysiological, perceptual, and computational techniques are applied to animal models of hearing loss. This multifaceted approach allows neural impairments to be identified in more detail than would be possible in studies of humans, yet remains directly applicable to clarifying human hearing problems and establishing effective treatments.

Nirmal Kumar Srinivasan, Ph.D.

Towson University
Understanding and decoding CAPD in adults

Understanding speech in complex listening environments involves both top-down and bottom-up processes. Central auditory processing disorder (CAPD) refers to a reduction in the efficiency and effectiveness with which the central nervous system uses the auditory information presented to it. It is characterized by diminished perception of speech and non-speech sounds that is not attributable to peripheral hearing loss or intellectual impairment. Hearing loss and CAPD can adversely affect everyday communication, learning, and physical well-being.

A substantial number of adults evaluated for CAPD report difficulties resolving auditory events that are similar to those reported by individuals with hearing impairment, yet their audiograms are similar to those of age-matched individuals. Since the audiogram is the primary clinical tool for identifying hearing loss, it is imperative to understand the fundamental differences observed in behavioral experiments between individuals with CAPD and individuals with hearing loss.

Joseph Toscano, Ph.D.

Villanova University
Cortical EEG measure of speech sound encoding for hearing assessment

Accurate speech recognition depends on fine-grained acoustic cues in the speech signal. Deficits in how these cues are processed may be informative for detecting hearing loss, and particularly for identifying auditory neuropathy, a problem with the way the brain processes sounds. Diagnosing auditory neuropathy in newborns and infants is particularly challenging, as it is often difficult to distinguish it from sensorineural hearing loss using current measurement approaches. Speech tests that measure cortical responses may allow us to overcome this problem. The current project uses electroencephalogram (EEG) techniques to measure brain responses to specific acoustic cues in speech (e.g., the difference between “d” and “t”). These data will be compared with listeners’ speech recognition accuracy, pure-tone audiograms, and self-reported hearing difficulty to determine how these responses vary as a function of hearing status and may be used to detect early stages of hearing loss.
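
As a generic sketch of the analysis style implied here (this is not the study's pipeline), the code below epochs simulated EEG around stimulus onsets, baseline-corrects, and averages within condition, for example tokens from the "d"-like versus "t"-like ends of an acoustic cue continuum; names and values are illustrative.

```python
import numpy as np

# Minimal sketch: compute event-related potentials (ERPs) for two stimulus
# conditions from simulated continuous EEG and simulated event onsets.

fs = 500                        # EEG sampling rate (Hz)
epoch = (-0.1, 0.5)             # epoch window in seconds around each onset
rng = np.random.default_rng(4)

# Fake continuous EEG and event onsets (sample indices) for two conditions.
eeg = rng.standard_normal(int(120 * fs))
onsets_d = np.sort(rng.integers(fs, len(eeg) - fs, size=60))
onsets_t = np.sort(rng.integers(fs, len(eeg) - fs, size=60))

def erp(continuous, onsets, fs, window):
    """Average epochs time-locked to `onsets`, baseline-corrected to the pre-stimulus period."""
    pre, post = int(window[0] * fs), int(window[1] * fs)
    epochs = np.stack([continuous[o + pre:o + post] for o in onsets])
    baseline = epochs[:, :-pre].mean(axis=1, keepdims=True)  # mean over pre-stimulus samples
    return (epochs - baseline).mean(axis=0)

erp_d = erp(eeg, onsets_d, fs, epoch)
erp_t = erp(eeg, onsets_t, fs, epoch)
# Comparing erp_d and erp_t (e.g., early component amplitudes) indexes how the cue is encoded.
print(erp_d.shape, erp_t.shape)
```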

Kenneth Vaden, Ph.D.

Medical University of South Carolina
Adaptive control of auditory representations in listeners with central auditory processing disorder

Central Auditory Processing Disorder (CAPD) is typically defined as an impairment in the ability to listen to and use auditory information because of atypical function within the central auditory system. The current study uses neuroimaging to characterize CAPD in older adults, whose impaired auditory processing abilities could be driven by cognitive and hearing-related declines in addition to differences in central auditory nervous system function. Functional neuroimaging experiments will test the hypothesis that older adults with CAPD fail to benefit from top-down enhancement of auditory cortex representations of speech. In particular, activation of the adaptive control system in cingulo-opercular cortex is predicted to enhance speech representations in auditory cortex for normal listeners, but not to the same extent for older adults with CAPD. This project aims to develop methods to assess the quality of speech representations from brain activity and to characterize the top-down control systems that interact with auditory cortex. The results will improve our understanding of a specific top-down control mechanism and clarify when and how adaptive control enhances speech recognition for people with CAPD.

Daniel Winkowski, Ph.D.

University of Maryland
Noise trauma induced reorganization of the auditory cortex

Tinnitus ("ringing in the ears") is a debilitating condition experienced by millions of people worldwide and is frequently seen after noise trauma to the ear. One core hypothesis about the etiology of tinnitus is that the percept of ringing is generated by changes in patterns of neural activity in brain circuits at many levels of the auditory pathway. One brain area thought to be at least partly responsible for the tinnitus percept is the primary auditory cortex (A1). However, the precise changes in neural activity within local neuron populations have not been investigated directly. The goal of the proposed project is to probe how noise trauma affects both the large-scale and local-scale organization of A1 circuits with unprecedented spatial and cellular resolution in an animal model of tinnitus. The proposed experiments will use state-of-the-art optical imaging approaches to investigate how entire auditory cortical areas (large scale) and auditory cortical microcircuits (local scale) are disrupted by noise trauma. A multi-level understanding of the circuit dynamics underlying tinnitus (from single neurons to complete representations) will enhance our understanding of precisely how cortical circuits remodel after noise trauma and, in turn, help develop and identify strategies by which this debilitating condition can be repaired.
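
As a generic illustration of how large- and local-scale cortical organization can be quantified from imaging data (this is not the project's code), the sketch below assigns each simulated neuron a best frequency from its tone responses and then measures how orderly best frequency varies across the map and how much it varies among neighboring neurons; all names and values are placeholders.

```python
import numpy as np

# Illustrative sketch: best-frequency (BF) maps from simulated tone responses.
# Large-scale organization: how BF changes with position across the map.
# Local-scale organization: how variable BF is among nearby neurons.

rng = np.random.default_rng(5)
tone_freqs = np.array([4, 8, 16, 32, 64], dtype=float)  # kHz, assumed test tones
n_neurons = 200

# Simulated neuron positions along one cortical axis and their tone responses.
positions_um = np.sort(rng.uniform(0, 1000, n_neurons))
true_bf = np.interp(positions_um, [0, 1000], [np.log2(4), np.log2(64)])
responses = (np.exp(-(np.log2(tone_freqs)[None, :] - true_bf[:, None]) ** 2)
             + 0.3 * rng.standard_normal((n_neurons, len(tone_freqs))))

# Best frequency per neuron: the tone evoking the largest response.
best_freq = tone_freqs[np.argmax(responses, axis=1)]

# Large-scale: correlation of (log) best frequency with map position.
tonotopy_r = np.corrcoef(positions_um, np.log2(best_freq))[0, 1]
# Local-scale: spread of best frequency among the first 20 (nearby) neurons.
local_spread = np.std(np.log2(best_freq[:20]))
print(f"tonotopic correlation: {tonotopy_r:.2f}, local BF spread (octaves): {local_spread:.2f}")
```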