Detailing the Relationships Between Auditory Processing and Cognitive-Linguistic Abilities in Children

By Beula Magimairaj, Ph.D.

Children suspected of having, or diagnosed with, auditory processing disorder (APD) present with difficulty understanding speech despite typical-range peripheral hearing and typical intellectual abilities. Children with APD (also known as central auditory processing disorder, CAPD) may experience difficulties while listening in noise, discriminating speech and non-speech sounds, recognizing auditory patterns, identifying the location of a sound source, and processing time-related aspects of sound, such as rapid sound fluctuations or short gaps between sounds. According to 2010 clinical practice guidelines by the American Academy of Audiology and a 2005 American Speech-Language-Hearing Association (ASHA) report, developmental APD is a unique clinical entity. According to ASHA, APD is not the result of cognitive or language deficits.


In our July 2018 study in the journal Language, Speech, and Hearing Services in Schools, part of its special issue on working memory, my coauthor and I present a novel framework for conceptualizing auditory processing abilities in school-age children. Our framework includes cognitive and linguistic factors, along with auditory factors, as potential sources of deficits that may contribute, individually or in combination, to listening difficulties in children.

We present empirical evidence from hearing, language, and cognitive science to explain the relationships between children’s auditory processing abilities and cognitive abilities such as memory and attention. We also discuss studies that have identified auditory abilities that are unique and may benefit from assessment and intervention. Our unified framework is based on studies of typically developing children; of children suspected of having APD, developmental language impairment, or attention deficit disorders; and on models of attention and memory in children. In addition, the framework draws on what we know about the integrated functioning of the nervous system and evidence of multiple risk factors in developmental disorders. A schematic of this framework is shown here.

APD framework schematic

In our publication, for example, we discuss how traditional APD diagnostic models show remarkable overlap with models of working memory (WM). WM refers to an active memory system that individuals use to hold and manipulate information in conscious awareness. Overlapping components among the models include verbal short-term memory capacity (auditory decoding and memory), integration of audiovisual information and information from long-term memory, and central executive functions such as attention and organization. Therefore, a deficit in the WM system can also potentially mimic the APD profile.

Similarly, auditory decoding (i.e., processing speech sounds), audiovisual integration, and organization abilities can influence language processing at various levels of complexity. For example, poor phonological (speech sound) processing abilities, such as those seen in some children with primary language impairment or dyslexia, could potentially lead to auditory processing profiles that correspond to APD. Auditory memory and auditory sequencing of spoken material are often challenging for children diagnosed with APD. These are the same integral functions attributed to the verbal short-term memory component of WM. Such observations are supported by the frequent co-occurrence of language impairment, APD, and attention deficit disorders.

Furthermore, it is important to note that cognitive-linguistic and auditory systems are highly interconnected in the nervous system. Therefore, heterogeneous profiles of children with listening difficulties may reflect a combination of deficits across these systems. This calls for a unified approach to model functional listening difficulties in children.

Given the overlap in developmental trajectories of auditory skills and WM abilities, the age at evaluation must be taken into account during assessment of auditory processing. The American Academy of Audiology does not recommend APD testing for children developmentally younger than age 7. Clinicians must therefore adhere to this recommendation to save time and resources for parents and children and to avoid misdiagnosis.

However, significant listening difficulties noted in children at any age (especially at younger ages) should prompt a speech-language evaluation, a peripheral hearing assessment, and a cognitive assessment. Identifying deficits or areas of risk in language or cognitive processing triggers the consideration of cognitive-language enrichment opportunities for the child. Early enrichment of overall language knowledge and processing abilities (e.g., phonological/speech sound awareness, vocabulary) has the potential to improve children's functional communication abilities, especially when listening in complex auditory environments.

Given the prominence of children's difficulty listening in complex auditory environments, and emerging evidence that speech perception in noise and spatialized listening are distinct from other auditory and cognitive factors, listening training in spatialized noise appears to hold promise as an intervention. This finding still needs to be systematically replicated across independent research studies.

Other evidence-based implications discussed in our publication include improving auditory access using assistive listening devices (e.g., FM systems), using a hierarchical assessment model, or employing a multidisciplinary front-end screening of sensitive areas (with minimized overlap across audition, language, memory, and attention) prior to detailed assessments in needed areas.

Finally, we emphasize that prevention should be at the forefront. This calls for integrating auditory enrichment with meaningful activities such as musical experience, play, social interaction, and rich language experience beginning early in infancy while optimizing attention and memory load. While these approaches are not new, current research evidence on neuroplasticity makes a compelling case to promote auditory enrichment experiences in infants and young children.


A 2015 Emerging Research Grants (ERG) scientist generously funded by the General Grand Chapter Royal Arch Masons International, Beula Magimairaj, Ph.D., is an assistant professor in the department of communication sciences and disorders at the University of Central Arkansas. Magimairaj’s related ERG research on working memory appears in the Journal of Communication Disorders, and she wrote about an earlier paper from her ERG grant in the Summer 2018 issue of Hearing Health.


Measuring Brain Signals Leads to Insights Into Mild Tinnitus

By Julia Campbell, Au.D., Ph.D.

Tinnitus, or the perception of sound where none is present, has been estimated to affect approximately 15 percent of adults. Unfortunately, there is no cure for tinnitus, nor is there an objective measure of the disorder, with professionals relying instead upon patient report.

There are several theories as to why tinnitus occurs, with one of the more prevalent hypotheses involving what is termed decreased inhibition. Neural inhibition is a normal function throughout the nervous system and works in tandem with excitatory neural signals to accomplish tasks ranging from motor output to the processing of sensory input. In sensory processing, such as hearing, both inhibitory and excitatory neural signals depend on external input.

For example, if an auditory signal cannot be relayed through the central auditory pathways due to cochlear damage resulting in hearing loss, both central excitation and inhibition may be reduced. This reduction in auditory-related inhibitory function may result in several changes in the central nervous system, including increased spontaneous neural firing, neural synchrony, and reorganization of cortical regions in the brain. Such changes, or plasticity, could possibly result in the perception of tinnitus, allowing signals that are normally suppressed to be perceived by the affected individual. Indeed, tinnitus has been reported in an estimated 30 percent of those with clinical hearing loss over the frequency range of 0.25 to 8 kilohertz (kHz), suggesting that cochlear damage and tinnitus may be interconnected.

However, many individuals with clinically normal hearing report tinnitus. It is therefore possible that in this specific population, inhibitory dysfunction does not underlie these phantom perceptions, or that it arises from a trigger other than hearing loss.

One measure of central inhibition is sensory gating. Sensory gating involves filtering out signals that are repetitive and therefore unimportant for conscious perception. This automatic process can be measured through electrical responses in the brain, termed cortical auditory evoked potentials (CAEPs). CAEPs are recorded via electroencephalography (EEG) using noninvasive sensors to record electrical activity from the brain at the level of the scalp.


In healthy gating function, it is expected that the CAEP response to an initial auditory signal will be larger in amplitude when compared with a secondary CAEP response elicited by the same auditory signal. This illustrates the inhibition of repetitive information by the central nervous system. If inhibitory processes are dysfunctional, CAEP responses are similar in amplitude, reflecting decreased inhibition and the reduced filtering of incoming auditory information.
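As a rough numerical sketch of this comparison (the amplitude values below are hypothetical, and the amplitude ratio is one common convention for quantifying gating, not a measure taken from this article):

```python
def gating_ratio(s1_uv: float, s2_uv: float) -> float:
    """Ratio of the second (repeated-stimulus) CAEP peak amplitude to the
    first, both in microvolts. A ratio well below 1 suggests the repeated
    sound was suppressed (intact gating); a ratio near 1 suggests reduced
    inhibitory filtering."""
    return s2_uv / s1_uv

# Hypothetical peak amplitudes for illustration only:
intact = gating_ratio(s1_uv=4.0, s2_uv=1.5)    # second response suppressed
reduced = gating_ratio(s1_uv=4.0, s2_uv=3.8)   # second response near-equal

print(f"intact gating ratio:  {intact:.2f}")   # well below 1
print(f"reduced gating ratio: {reduced:.2f}")  # near 1
```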

Due to the hypothesis that atypical inhibition may play a role in tinnitus, we conducted a study to evaluate inhibitory function in adults with normal hearing, with and without mild tinnitus, using sensory gating measures. To our knowledge, sensory gating had not been used to investigate central inhibition in individuals with tinnitus. We also evaluated extended high-frequency auditory sensitivity in participants at 10, 12.5, and 16 kHz—which are frequencies not included in the usual clinical evaluation—to determine if participants with mild tinnitus showed hearing loss in these regions.

Tinnitus severity was measured subjectively using the Tinnitus Handicap Inventory. This score was correlated with measures of gating function to determine whether tinnitus severity worsens with decreased inhibition.

Our results, published in Audiology Research on Oct. 2, 2018, showed that gating function was impaired in adults with typical hearing and mild tinnitus, and that decreased gating was significantly correlated with tinnitus severity. In addition, those with tinnitus did not show significantly different extended high-frequency thresholds compared with participants without tinnitus, though better hearing in this frequency range was associated with worse tinnitus severity.

This result conflicts with the theory that hearing loss may trigger tinnitus, at least in adults with typical hearing, and may indicate that these individuals possess heightened auditory awareness, although this hypothesis should be directly tested.


Overall, it appears that central inhibition is atypical in adults with typical hearing and tinnitus, and that this is not related to hearing loss as measured in clinically or non-clinically tested frequency regions. The cause of decreased inhibition in this population remains unknown, but genetic factors may play a role. We are currently investigating the use of sensory gating as an objective clinical measure of tinnitus, particularly in adults with hearing loss, as well as the networks in the brain that may underlie dysfunctional gating processes.

2016 Emerging Research Grants scientist Julia Campbell, Au.D., Ph.D., CCC-A, F-AAA, received the Les Paul Foundation Award for Tinnitus Research. She is an assistant professor in communication sciences and disorders in the Central Sensory Processes Laboratory at the University of Texas at Austin.


ERG Grantees' Advancements in OAE Hearing Tests, Speech-in-Noise Listening

By Yishane Lee and Inyong Choi, Ph.D.

Support for a Theory Explaining Otoacoustic Emissions: Fangyi Chen, Ph.D.


It’s a remarkable feature of the ear that it not only hears sound but also generates it. These sounds, called otoacoustic emissions (OAEs), were discovered in 1978. Thanks in part to ERG research on outer hair cell motility, measuring OAEs has become a common, noninvasive hearing test, especially among infants too young to respond to sound prompts.

There are two theories about how the ear produces its own sound emanating from the interior of the cochlea out toward its base. The traditional one is the backward traveling wave theory, in which sound emissions travel slowly as a transverse wave along the basilar membrane, which divides the cochlea into two fluid-filled cavities. In a transverse wave, the wave particles move perpendicular to the wave direction. But this theory does not explain some anomalies, leading to a second hypothesis: The fast compression wave theory holds that the emissions travel as a longitudinal wave via lymph fluids around the basilar membrane. In a longitudinal wave, the wave particles travel in the same direction as the wave motion.

Figuring out how the emissions are created will improve the accuracy of the OAE hearing test and deepen our understanding of cochlear mechanics. Fangyi Chen, Ph.D., a 2010 Emerging Research Grants (ERG) recipient, started investigating the issue at Oregon Health & Science University and is now at China’s Southern University of Science and Technology. His team’s paper, published in the journal Neural Plasticity in July 2018, for the first time experimentally validates the backward traveling wave theory.

Chen and his coauthors—including Allyn Hubbard, Ph.D., and Alfred Nuttall, Ph.D., who are each 1989–90 ERG recipients—directly measured the basilar membrane vibration in order to determine the wave propagation mechanism of the emissions. The team stimulated the membrane at a specific location, allowing the vibration source that initiates the backward wave to be pinpointed. The resulting vibrations along the membrane were then measured at multiple locations in vivo (in guinea pigs), showing a consistent lag as distance from the vibration source increased. The researchers also measured wave speeds on the order of tens of meters per second, far slower than a compression wave would travel in water. The results were confirmed using a computer simulation. In addition to the wave propagation study, a mathematical model of the cochlea, based on an acoustic electrical analogy, was created and simulated. This model was used to interpret why no peak frequency-to-place map was observed in the backward traveling wave, explaining some of the anomalies previously associated with this OAE theory.
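The speed estimate described above can be illustrated with a toy calculation. The numbers below are invented for illustration, not the study's measurements; the point is only that a straight-line fit of recording-site distance against arrival lag yields the propagation speed:

```python
import numpy as np

# Hypothetical recording sites: distance of each basilar-membrane
# measurement point from the vibration source (mm), and the observed
# arrival lag of the backward wave at that site (microseconds).
distance_mm = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
lag_us = np.array([0.0, 12.5, 25.0, 37.5, 50.0])

# The slope of distance vs. lag is the wave speed; 1 mm/us = 1,000 m/s.
slope_mm_per_us = np.polyfit(lag_us, distance_mm, 1)[0]
speed_m_per_s = slope_mm_per_us * 1000.0

print(f"estimated wave speed: {speed_m_per_s:.0f} m/s")
# Tens of m/s, far below the ~1,500 m/s of a compression wave in water.
```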

Speech-in-Noise Understanding Relies on How Well You Combine Information Across Multiple Frequencies: Inyong Choi, Ph.D.

Understanding speech in noisy environments is a crucial communication ability, yet many individuals, with or without hearing loss, struggle with it. Our study in Hearing Research, published in September 2018, finds that how well you combine information across multiple frequencies is a critical factor in good speech-in-noise understanding. We tested this ability with a pitch-fusion task in "hybrid" cochlear implant users, who receive both low-frequency acoustic and high-frequency electric stimulation within the same ear.

In the pitch-fusion task, subjects heard either a tone consisting of many frequencies in a simple mathematical relationship or a tone with more irregular spacing between frequencies. Subjects had to say whether the tone sounded "natural" or "unnatural" to them; a tone whose frequencies are in a simple mathematical relationship sounds much more natural to us. My team and I are now studying how we can improve sensitivity to this "naturalness" in listeners with hearing loss, with the goal of providing individualized therapeutic options that address difficulties in speech-in-noise understanding.

2017 ERG recipient Inyong Choi, Ph.D., is an assistant professor in the department of communication sciences and disorders at the University of Iowa in Iowa City.

We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.


Meet the Researcher: A. Catalina Vélez-Ortega

By Yishane Lee

2018 Emerging Research Grants (ERG) awardee A. Catalina Vélez-Ortega received a master’s in biology from the University of Antioquia, Colombia, and a doctorate in physiology from the University of Kentucky, where she completed postdoctoral training and is now an assistant professor in the department of physiology.



TRPA1 is an ion channel known for its role as an “irritant sensor” in pain-sensing neurons (nerve cells). Noise exposure leads to the production of cellular “irritants” that activate TRPA1 channels in the inner ear. The role of TRPA1 channels in the inner ear has been puzzling, with most experiments raising more questions to pursue. My current project seeks to uncover how TRPA1 activation modifies cochlear mechanics and hearing sensitivity, in order to find new therapeutic targets to prevent hearing loss or tinnitus.

My father, our town’s surgeon, fueled my desire to learn. When I asked him how the human heart works, he called the butcher, got a pig’s heart, and we dissected it together. I was about 5 when I learned how the heart’s chambers are connected and how valves work. He also set up an astronomy class at home with a flashlight, globe, and ball when I asked, “Why does the moon change shape?” My father’s excitement kept my curiosity from fading as I grew older. That eager-to-learn personality now drives my career in science and teaching.

My training in biomedical engineering guided my interest into hearing science. The field of inner ear research mixes physics and mechanics with molecular biology and genetics in a way I find extremely attractive. Analytics also intrigues me. People who work with me know how complex my calendar and spreadsheets can get. I absolutely love logging all kinds of data and looking for correlations. I also like to plan ahead—passport renewal 10 years from now? Already in my calendar!

I take dance lessons and participate in flash mobs and other dance performances. But I used to be extremely shy. As a child I simply could not look anyone in the eye when talking to them. I was also terrified of being onstage. It was only after college that I decided to finally correct the problem. Interestingly, taking sign language lessons was very helpful. Sign language forced me to stare at people to be able to communicate. It was terrifying at first, but it started to feel very natural after just a few months.

Vélez-Ortega’s 2018 ERG grant was generously funded by cochlear implant manufacturer Cochlear Americas.

We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.


Quantifying the Effects of a Hyperacusis Treatment

By Xiying Guan, Ph.D.

A typical inner ear has two mobile windows: the oval window and the round window (RW). The flexible, membrane-covered RW allows fluid in the cochlea to move as the oval window vibrates in response to movement of the stapes bone during sound stimulation.


Superior canal dehiscence (SCD), a pathological opening in the bony wall of the superior semicircular canal, forms a third window of the inner ear. This structural anomaly results in various auditory and vestibular symptoms. One common symptom is increased sensitivity to self-generated sounds or external vibrations, such as hearing one’s own pulse, neck and joint movement, and even eye movement. This hypersensitive hearing associated with SCD has been termed conductive hyperacusis.

Surgically stiffening the RW has recently emerged as a treatment for hyperacusis in patients with and without SCD. However, the postsurgical results are mixed: Some patients experience improvement, while others complain of worsening symptoms and have asked to have the RW treatment reversed. Although this “experimental” surgical treatment for hyperacusis is increasingly reported, its efficacy has not been studied scientifically.

In the present study, we experimentally tested how RW reinforcement affects air-conduction sound transmission in the typical ear (that is, without an SCD). We measured the sound pressures in two fluid-filled cochlear cavities—the scala vestibuli (Psv) and the scala tympani (Pst)—together with the stapes velocity in response to sound at the ear canal. We estimated hearing ability based on the “cochlear input drive” (Pdiff = Psv – Pst) before and after RW reinforcement in a human cadaveric ear.
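To make the input-drive formula concrete, here is a minimal sketch. The pressure values are hypothetical, chosen only to illustrate how the drive Pdiff = Psv – Pst can rise even when stapes motion (and hence Psv) falls:

```python
def cochlear_input_drive(psv: float, pst: float) -> float:
    """Cochlear input drive, Pdiff = Psv - Pst: the pressure difference
    across the cochlear partition that drives the hearing response."""
    return psv - pst

# Hypothetical scala pressures (arbitrary units) at a very low frequency,
# before and after round-window reinforcement:
before = cochlear_input_drive(psv=1.00, pst=0.40)
after = cochlear_input_drive(psv=0.95, pst=0.20)

# A smaller Psv (reduced stapes motion) can still yield a larger drive
# if Pst drops by more, consistent with the low-frequency result reported.
print(after > before)  # True
```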

We found that RW reinforcement can affect the cochlear input drive in unexpected ways. At very low frequencies, below 200 Hz, it resulted in reduced stapes motion but an increase in the cochlear input drive that would be consistent with improved hearing. At 200 to 1,000 Hz, the stapes motion and input drive were both slightly decreased. Above 1,000 Hz, stiffening the RW had no effect.


The results suggest that RW reinforcement has the potential to worsen low-frequency hyperacusis while causing some hearing loss in the mid-frequencies. Although this preliminary study shows that the RW treatment does not have much effect on air-conduction hearing, the effect on bone-conduction hearing is unknown and is one of our future areas for experimentation.

A 2017 ERG scientist funded by Hyperacusis Research Ltd., Xiying Guan, Ph.D., is a postdoctoral fellow at Massachusetts Eye and Ear, Harvard Medical School, in Boston.

We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.


Introducing the 2018 Emerging Research Grantees

By Lauren McGrath


Hearing Health Foundation (HHF) is pleased to present our Emerging Research Grants (ERG) awardees for the 2018 project cycle.

Grantee Tenzin Ngodup, Ph.D., will investigate neuronal activity in the ventral cochlear nucleus to help prevent and treat tinnitus.

Fifteen individuals at institutions nationwide—including Johns Hopkins School of Medicine, the University of Minnesota, and the National Cancer Institute—will conduct innovative research in the following topic areas:

  • Central Auditory Processing Disorder (CAPD)
  • General Hearing Health
  • Hearing Loss in Children
  • Hyperacusis
  • Tinnitus
  • Usher Syndrome

Our grantees’ research investigations seek to solve specific auditory and vestibular problems such as declines in complex sound processing in age-related hearing loss (presbycusis), ototoxicity caused by the life-saving chemotherapy drug cisplatin, and noise-induced hearing loss.

HHF looks forward to the advancements that will come about from these promising scientific endeavors. The foundation owes many thanks to the General Grand Chapter Royal Arch Masons International, Cochlear, Hyperacusis Research, the Les Paul Foundation, and several generous, anonymous donors who have collectively empowered this important work.

We are currently planning for our 2019 ERG grant cycle, for which applications will open September 1. Learn more about the application process.


Grantee Rachael R. Baiduc, Ph.D., will identify cardiovascular disease risk factors that may contribute to hearing loss.


In Memoriam: David J. Lim, M.D.

By Nadine Dehgan

Credit: UCLA Head and Neck Surgery

We recognize with profound sadness the recent passing of David J. Lim, M.D., who was pivotal to the establishment of Hearing Health Foundation (HHF) and remained committed to our research throughout his life.

As a member of our Council of Scientific Trustees (CST)—the governing body of HHF’s Emerging Research Grants (ERG) program—and as a Centurion donor, Lim worked tirelessly to ensure the most promising auditory and vestibular science was championed.

Prior to his appointment to the CST, “Lim contributed to our understanding of the mechanics of hearing through his excellent scanning electron micrographs of the inner ear,” says Elizabeth Keithley, Ph.D., chair of the Board of HHF. Lim pursued this critical work beginning in 1970 through the first of his many ERG grants.

“Lim was also one of the founding members of the Association for Research in Otolaryngology (ARO) and served as the historian of this esteemed scientific organization,” says Judy Dubno, Ph.D., of HHF’s Board of Directors. “Along with HHF, he cared deeply about ARO and will be missed by many.”

Most recently, Lim was a surgeon-scientist and a director of the UCLA Pathogenesis of Ear Diseases Laboratory, where he was considered an authority on temporal bone histopathology, morphology and cell biology of the ear, and the innate immunity of the middle and inner ear.

We, the HHF community, are grateful to have known and to have benefited from Lim’s wisdom, good humor, and kind spirit. HHF will honor his legacy by continuing our mission, knowing we are indebted to his leadership and dedication to advancements in hearing health.


New Research Shows Hearing Aids Improve Brain Function and Memory in Older Adults

By University of Maryland Department of Hearing and Speech Sciences

One of the most prevalent health conditions among older adults, age-related hearing loss, can lead to cognitive decline, social isolation, and depression. However, new research from the University of Maryland (UMD) Department of Hearing and Speech Sciences (HESP) shows that the use of hearing aids not only restores the capacity to hear but can also improve brain function and working memory.


The UMD-led research team monitored a group of first-time hearing aid users with mild-to-moderate hearing loss over a period of six months. The researchers used a variety of behavioral and cognitive tests designed to assess participants’ hearing as well as their working memory, attention and processing speed. They also measured electrical activity produced in response to speech sounds in the auditory cortex and midbrain.

At the end of the six months, participants showed improved memory, improved neural speech processing, and greater ease of listening as a result of the hearing aid use. Findings from the study were published recently in Clinical Neurophysiology and Neuropsychologia.

“Our results suggest that the benefits of auditory rehabilitation through the use of hearing aids may extend beyond better hearing and could include improved working memory and auditory brain function,” says HESP Assistant Professor Samira Anderson, Ph.D., who led the research team. “In effect, hearing aids can actually help reverse several of the major problems with communication that are common as we get older.”

According to the National Institutes of Health, as many as 28.8 million Americans could benefit from wearing hearing aids, but less than a third of that population actually uses them. Several barriers prevent more widespread use of hearing aids—namely, their high cost and the fact that many people find it difficult to adjust to wearing them. A growing body of evidence has demonstrated a link between hearing loss and cognitive decline in older adults. Aging and hearing loss can also lead to changes in the brain’s ability to efficiently process speech, leading to decreased ability to understand what others are saying, especially in noisy backgrounds.

The UMD researchers say the results of their study provide hope that hearing aid use can at least partially restore deficits in cognitive function and auditory brain function in older adults.

“We hope our findings underscore the need to not only make hearing aids more accessible and affordable for older adults, but also to improve fitting procedures to ensure that people continue to wear them and benefit from them,” Anderson says.


The research team is working on developing better procedures for fitting people with hearing aids for the first time. The study was funded by Hearing Health Foundation and the National Institutes of Health (NIDCD R21DC015843).

This is republished with permission from the University of Maryland’s press office. Samira Anderson, Au.D., Ph.D., is a 2014 Emerging Research Grants (ERG) researcher generously funded by the General Grand Chapter Royal Arch Masons International. We thank the Royal Arch Masons for their ongoing support of research in the area of central auditory processing disorder. These two new published papers and an earlier paper by Anderson all stemmed from Anderson’s ERG project.


Read more about Anderson in Meet the Researcher and “A Closer Look,” in the Winter 2014 issue of Hearing Health.


Receive updates on life-changing hearing research and resources by subscribing to HHF's free quarterly magazine and e-newsletter.


Sound Processing in Early Brain Regions

By Yishane Lee    

Standard hearing tests may not account for the difficulty some individuals have understanding speech, especially in noisy environments, even though the sounds are loud enough to hear. To better identify and treat these central auditory processing disorders that appear despite normal ear function, 2016 Emerging Research Grants (ERG) scientist Richard A. Felix II, Ph.D., and colleagues have been investigating how the brain processes complex sounds such as speech.

In the past, speech processing research has focused on higher-level brain regions like the auditory cortex, but there is strong evidence showing that lower-level subcortical areas may play a significant role in hearing disorders. In their paper “Subcortical Pathways: Toward a Better Understanding of Auditory Disorders,” published online in the journal Hearing Research on Jan. 31, 2018, Felix and team review studies that examine the auditory brainstem and midbrain and their functional effect on hearing ability.

The illustration shows the major inhibitory and excitatory, ascending and descending, neurotransmitter connections of subcortical pathways. The table lists features of auditory processing, with the contribution (or potential impairment) of each structure depicted with increasing strength represented by darker colors.

Speech contains various acoustic hallmarks, such as pitch, timbre, and gaps between starts and stops of sound energy, that the brain uses to create distinct auditory “objects”—for example, one voice among multiple talkers in a noisy room. Our brains extract these acoustic cues by decoding spectral, temporal, and spatial information in order to identify and understand complex sounds.

Studies of mammalian species show that these sound features are extracted at the level of the midbrain by nerve cells in a region called the inferior colliculus, and through the integration of multiple ascending (“bottom-up”) pathways: from inner ear hair cells to the auditory nerve; to the brainstem’s cochlear nucleus and superior olivary complex; to the midbrain’s inferior colliculus; to the forebrain’s thalamus; and to the auditory cortex.

For instance, several key functions of auditory processing previously attributed to the cortex, such as the selectivity of neurons to particular vocalizations, have now been demonstrated in subcortical pathways. The cortex builds upon these coding strategies to produce typical hearing and communication abilities in most individuals.

Felix and team go on to detail auditory disorders that may result in large part from subcortical processing failures. Because neurotransmitters regulate signaling throughout the brain, including its subcortical regions, an imbalance in these chemicals' excitatory or inhibitory actions (as typically happens with aging) can degrade the ability to hear complex sounds.

Disruptions of bottom-up processing may lead to hearing difficulties that are not revealed using standard hearing tests. This includes auditory synaptopathy and auditory neuropathy (terms sometimes used interchangeably), also called “hidden hearing loss.” One concern with hidden hearing loss is that subcortical processing may be affected by noise levels previously thought to be relatively safe (as low as 80 decibels).

Likewise, central auditory processing disorders may be a result of abnormal top-down processing, leading to problems with selective attention and other hearing-related tasks.

The authors conclude, “Subcortical pathways represent early-stage processing on which sound perception is built; therefore problems with understanding complex sounds such as speech often have neural correlates of dysfunction in the auditory brainstem, midbrain, and thalamus.” The hope is that further study of animal models as well as human subjects will lead to tools to aid in the diagnosis and treatment of hearing disorders caused by problems with subcortical sound processing.


Richard A. Felix II, Ph.D., is a postdoctoral researcher in the Hearing and Communications Lab at Washington State University Vancouver. A 2016 Emerging Research Grants recipient, he was generously funded by the General Grand Chapter Royal Arch Masons International. Read more about Felix in "Meet the Researcher," in the Summer 2017 issue of Hearing Health.

We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.


New Data-Driven Analysis Procedure for Diagnostic Hearing Test

By Carol Stoll

Stimulus frequency otoacoustic emissions (SFOAEs) are sounds generated by the inner ear in response to a pure-tone stimulus. Hearing tests that measure SFOAEs are noninvasive and effective for those who are unable to participate in behavioral hearing tests, such as infants and young children. They also give valuable insight into cochlear function and can be used to diagnose specific types and causes of hearing loss. Though SFOAEs are simpler to interpret than other types of otoacoustic emissions, it is difficult to extract them from the same-frequency stimulus and from background noise caused by patient movement and microphone slippage in the ear canal.

2014 Emerging Research Grants (ERG) recipient Srikanta Mishra, Ph.D., and colleagues have addressed SFOAE analysis issues by developing an efficient data-driven analysis procedure. Their new method considers and rejects irrelevant background noise such as breathing, yawning, and subtle movements of the subject and/or microphone cable. The researchers used their new analysis procedure to characterize the standard features of SFOAEs in typical-hearing young adults and published their results in Hearing Research.

Mishra and team recruited 50 young adults with typical hearing for their study. Instead of using a discrete-tone procedure that measures SFOAEs one frequency at a time, they used a more efficient method: a single sweep-tone stimulus that changes frequency smoothly between 500 and 4,000 Hz, upward or downward, over durations of 16 and 24 seconds. The sweep tones were interspersed with suppressor tones that reduce the response to the previous tone. The tester manually paused and restarted the sweep recording upon detecting background noise from the subject's movements.
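To make the sweep-tone idea concrete, here is a minimal sketch of how such a stimulus could be synthesized. This is illustrative only, not the study's actual stimulus code: the logarithmic sweep trajectory, sampling rate, and function name are assumptions.

```python
import numpy as np

def log_sweep(f_start, f_end, duration_s, fs=44100):
    """Generate a logarithmic sweep tone from f_start to f_end Hz.

    The instantaneous frequency moves smoothly (exponentially)
    between the two endpoints, so equal time covers equal ratios
    of frequency, as in sweep-tone OAE stimuli.
    """
    t = np.arange(int(duration_s * fs)) / fs
    k = np.log(f_end / f_start) / duration_s  # sweep rate (1/s)
    # Phase is the integral of instantaneous frequency f(t) = f_start * exp(k*t)
    phase = 2 * np.pi * f_start * (np.exp(k * t) - 1) / k
    return np.sin(phase)

# Example: a 16-second upward sweep from 500 to 4,000 Hz
stim = log_sweep(500, 4000, 16)
```

Swapping `f_start` and `f_end` produces the downward sweep; a 24-second duration simply slows the traversal of the same frequency range.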


The SFOAEs generated were analyzed using a mathematical model called a least-squares fit (LSF), together with a series of algorithms based on statistical analysis of the data. This model objectively minimized the potential error from extraneous noise. Conventional SFOAE features, such as level, noise floor, and signal-to-noise ratio (SNR), were described for the typical-hearing subjects.
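The core of a least-squares fit at a known stimulus frequency can be sketched as follows. This is a simplified illustration of the general technique, not the authors' algorithm: their model tracks the sweeping frequency in short analysis windows, whereas this toy version fits a single fixed frequency; the function name and test values are invented for the example.

```python
import numpy as np

def lsf_sinusoid(signal, freq, fs):
    """Least-squares fit of a sinusoid at a known frequency.

    Solves signal ~ a*cos(2*pi*f*t) + b*sin(2*pi*f*t) for (a, b) and
    returns the fitted amplitude and phase. Energy uncorrelated with
    the target frequency (breathing, movement noise) is left in the
    residual rather than biasing the estimate.
    """
    t = np.arange(len(signal)) / fs
    X = np.column_stack([np.cos(2 * np.pi * freq * t),
                         np.sin(2 * np.pi * freq * t)])
    (a, b), *_ = np.linalg.lstsq(X, signal, rcond=None)
    amplitude = np.hypot(a, b)
    phase = np.arctan2(-b, a)  # convention: signal = A*cos(2*pi*f*t + phase)
    return amplitude, phase

# Example: recover a faint 1 kHz component buried in broadband noise
fs = 16000
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
rec = 0.01 * np.cos(2 * np.pi * 1000 * t + 0.5) + 0.05 * rng.standard_normal(fs)
amp, ph = lsf_sinusoid(rec, 1000, fs)  # amp close to 0.01, ph close to 0.5
```

An SNR in the spirit of the reported features could then be formed by comparing the fitted component's level against the residual noise floor in nearby frequency bins.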

Overall, the results of this study demonstrate the effectiveness of the automated noise-rejection procedure for sweep-tone–evoked SFOAEs in adults. The SFOAE features characterized in this large group of young adults with typical hearing should aid the development of clinical and laboratory tests of cochlear function.

Srikanta Mishra, Ph.D., was a 2014 Emerging Research Grants scientist and a General Grand Chapter Royal Arch Masons International award recipient. For more, see "Sweep-tone evoked stimulus frequency otoacoustic emissions in humans: Development of a noise-rejection algorithm and normative features" in Hearing Research.


