Detailing the Relationships Between Auditory Processing and Cognitive-Linguistic Abilities in Children

By Beula Magimairaj, Ph.D.

Children suspected to have or diagnosed with auditory processing disorder (APD) present with difficulty understanding speech despite typical-range peripheral hearing and typical intellectual abilities. Children with APD (also known as central auditory processing disorder, CAPD) may experience difficulties while listening in noise, discriminating speech and non-speech sounds, recognizing auditory patterns, identifying the location of a sound source, and processing time-related aspects of sound, such as rapid sound fluctuations or detecting short gaps between sounds. According to 2010 clinical practice guidelines by the American Academy of Audiology and a 2005 American Speech-Language-Hearing Association (ASHA) report, developmental APD is a unique clinical entity. According to ASHA, APD is not the result of cognitive or language deficits.

In our July 2018 study in Language, Speech, and Hearing Services in Schools, part of the journal’s special issue on “working memory,” my coauthor and I present a novel framework for conceptualizing auditory processing abilities in school-age children. The framework includes cognitive and linguistic factors alongside auditory factors as potential sources of deficits that may, individually or in combination, cause listening difficulties in children.

We present empirical evidence from hearing, language, and cognitive science to explain the relationships between children’s auditory processing abilities and cognitive abilities such as memory and attention. We also discuss studies that have identified auditory abilities that are unique and may benefit from assessment and intervention. Our unified framework is based on studies of typically developing children and of children suspected to have APD, developmental language impairment, or attention deficit disorders, as well as on models of attention and memory in children. In addition, the framework draws on what we know about the integrated functioning of the nervous system and on evidence of multiple risk factors in developmental disorders. A schematic of the framework appears in our paper.

In our publication, for example, we discuss how traditional APD diagnostic models show remarkable overlap with models of working memory (WM). WM refers to an active memory system that individuals use to hold and manipulate information in conscious awareness. Overlapping components among the models include verbal short-term memory capacity (auditory decoding and memory), integration of audiovisual information and information from long-term memory, and central executive functions such as attention and organization. Therefore, a deficit in the WM system can also potentially mimic the APD profile.

Similarly, auditory decoding (i.e., processing speech sounds), audiovisual integration, and organization abilities can influence language processing at various levels of complexity. For example, poor phonological (speech sound) processing abilities, such as those seen in some children with primary language impairment or dyslexia, could potentially lead to auditory processing profiles that correspond to APD. Auditory memory and auditory sequencing of spoken material are often challenging for children diagnosed with APD. These are the same integral functions attributed to the verbal short-term memory component of WM. Such observations are supported by the frequent co-occurrence of language impairment, APD, and attention deficit disorders.

Furthermore, it is important to note that cognitive-linguistic and auditory systems are highly interconnected in the nervous system. Therefore, heterogeneous profiles of children with listening difficulties may reflect a combination of deficits across these systems. This calls for a unified approach to model functional listening difficulties in children.

Given the overlap in the developmental trajectories of auditory skills and WM abilities, the age at evaluation must be taken into account when assessing auditory processing. The American Academy of Audiology does not recommend APD testing for children developmentally younger than age 7. Adhering to this recommendation saves time and resources for parents and children and helps avoid misdiagnosis.

However, significant listening difficulties noted in children at any age (especially at younger ages) should prompt a speech-language evaluation, a peripheral hearing assessment, and a cognitive assessment. Identifying deficits or areas of risk in language or cognitive processing triggers consideration of cognitive-language enrichment opportunities for the child. Early enrichment of overall language knowledge and processing abilities (e.g., phonological/speech sound awareness, vocabulary) has the potential to improve children's functional communication abilities, especially when listening in complex auditory environments.

Given the prominence of children's difficulty listening in complex auditory environments, and emerging evidence that speech perception in noise and spatialized listening are distinct from other auditory and cognitive factors, listening training in spatialized noise appears to hold promise as an intervention. This finding still needs systematic replication across independent research studies.

Other evidence-based implications discussed in our publication include improving auditory access using assistive listening devices (e.g., FM systems), using a hierarchical assessment model, and employing a multidisciplinary front-end screening of sensitive areas (with minimized overlap across audition, language, memory, and attention) prior to detailed assessments in areas of need.

Finally, we emphasize that prevention should be at the forefront. This calls for integrating auditory enrichment with meaningful activities such as musical experience, play, social interaction, and rich language experience beginning early in infancy while optimizing attention and memory load. While these approaches are not new, current research evidence on neuroplasticity makes a compelling case to promote auditory enrichment experiences in infants and young children.

A 2015 Emerging Research Grants (ERG) scientist generously funded by the General Grand Chapter Royal Arch Masons International, Beula Magimairaj, Ph.D., is an assistant professor in the department of communication sciences and disorders at the University of Central Arkansas. Magimairaj’s related ERG research on working memory appears in the Journal of Communication Disorders, and she wrote about an earlier paper from her ERG grant in the Summer 2018 issue of Hearing Health.

Accomplishments by ERG Alumni

By Elizabeth Crofts

Progress Investigating Potential Causes and Treatments of Ménière’s Disease

Gail Ishiyama, M.D., a clinician-scientist and associate professor of neurology at UCLA’s David Geffen School of Medicine, has been investigating balance disorders for nearly two decades and recently coauthored two studies on the topic. While the studies were not directly funded by HHF, Ishiyama is a 2016 Emerging Research Grants (ERG) recipient and also received a Ménière’s Disease Grant in 2017.

Ishiyama and colleagues’ December 2018 paper in the journal Brain Research investigated oxidative stress, which plays a large role in several inner ear diseases as well as in aging. Oxidative stress is an imbalance between the production of free radicals and antioxidant defenses. The gene responsible for reducing oxidative stress throughout the body is called nuclear factor (erythroid-derived 2)-like 2, or NRF2. Ishiyama’s study examined the localization of the NRF2 protein in the cells of the human cochlea and vestibule. The team found that NRF2 immunoreactivity (IR) was localized in the organ of Corti of the cochlea, and that NRF2-IR decreases significantly in the cochlea of older individuals. The team postulates that, in future studies, modulation of NRF2 expression may be shown to protect against hearing loss resulting from exposure to noise and ototoxic drugs.

In a January 2018 report in the journal Otology & Neurotology, Ishiyama and team researched endolymphatic hydrops (EH), a ballooning of the endolymphatic fluid system in the inner ear that is associated with Ménière’s disease. Symptoms include fluctuating hearing loss, as well as vertigo, tinnitus, and pressure in the ear.

For the study, patients with both EH and vestibular schwannoma were tested to evaluate their clinical outcomes when the EH was treated medically. Vestibular schwannomas, also known as acoustic neuromas, are benign tumors that grow on the vestibular nerve, which serves the inner ear’s balance system. When patients with a known diagnosis of vestibular schwannoma develop episodic vertigo spells, surgeons often recommend surgical intervention, attributing the symptoms to the schwannoma. However, a noninvasive treatment may hold promise. Through the use of high-resolution MRI scans, the researchers found that when EH coexists with vestibular schwannoma in a patient, and the patient also experiences vertigo spells, a medical treatment for EH—that is, the use of diuretics to relieve inner ear fluid buildup—may alleviate the vestibular symptoms.


A 2016 ERG scientist funded by The Estate of Howard F. Schum, Gail Ishiyama, M.D., is an associate professor of neurology at UCLA’s David Geffen School of Medicine. She also received a 2017 Ménière’s Disease Grant.


New Insights Into Aging Effects on Speech Recognition

Age-related changes in perceptual organization have received less attention than other potential sources of decline in hearing ability. Perceptual organization is the process by which the auditory system interprets acoustic input from multiple sources and creates an auditory scene. This ability is essential in daily life, because speech communication occurs in environments where background sounds fluctuate and can mask the intended message.

Perceptual organization includes three interrelated auditory processes: glimpsing, speech segregation, and phonemic restoration. Glimpsing is the process of identifying recognizable fragments of speech and connecting them across gaps to create a coherent stream. Speech segregation is the process by which these glimpses (speech fragments) are separated from background speech, allowing the listener to focus on a single target when the background includes multiple talkers. Phonemic restoration is the process of filling in missing information using prior knowledge of language, conversational context, and acoustic cues.

A July 2018 study in The Journal of the Acoustical Society of America by William J. Bologna, Au.D., Ph.D., Kenneth I. Vaden, Jr., Ph.D., Jayne B. Ahlstrom, M.S., and Judy R. Dubno, Ph.D., investigated these three components of perceptual organization to determine the extent to which their declines may be the source of increased difficulty in speech recognition with age. Younger and older adults with typical hearing listened to sentences interrupted with either silence or envelope-modulated noise, presented in quiet or with a competing talker.
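
To make the stimulus design concrete, here is a minimal sketch of how a sentence waveform might be interrupted with silence or with envelope-modulated noise. The sample rate, segment durations, and crude envelope extraction are illustrative assumptions, not the study’s actual parameters.

```python
import numpy as np

SAMPLE_RATE = 16000  # Hz; assumed for illustration

def interrupt(speech, glimpse_dur=0.1, gap_dur=0.1, filler="silence", seed=0):
    """Alternate short 'glimpses' of intact speech with gaps that are either
    silent or filled with noise modulated by the speech's own envelope."""
    rng = np.random.default_rng(seed)
    out = speech.copy()
    glimpse = int(glimpse_dur * SAMPLE_RATE)
    gap = int(gap_dur * SAMPLE_RATE)
    i = glimpse
    while i < len(out):
        segment = out[i:i + gap]
        if filler == "silence":
            out[i:i + gap] = 0.0
        else:
            # Envelope-modulated noise keeps the amplitude contour of the
            # replaced speech (a crude rectified envelope; real studies
            # typically low-pass filter it).
            envelope = np.abs(segment)
            out[i:i + gap] = envelope * rng.uniform(-1.0, 1.0, size=len(segment))
        i += gap + glimpse
    return out
```

In the silence condition, glimpses must be bridged across true gaps; envelope-modulated noise restores some amplitude continuity between them.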

As expected, older adults performed more poorly than younger adults across all speech conditions. The interaction between age and glimpse duration indicated that, compared with younger adults, older adults were less able to make efficient use of limited speech information to recognize keywords. This points to a decline in glimpsing: interruptions in speech had a larger effect on the older adults.

Older adults showed a greater improvement in speech recognition when envelope modulations were partially restored to provide better continuity, suggesting that age brings a poorer ability to resolve temporal distortions in the envelope. For speech segregation, the decline in performance with a competing talker was expected to be greater for older adults than for younger adults, but this was not supported by the data.

A 2015 Emerging Research Grants scientist, Kenneth I. Vaden, Jr., Ph.D., is a research assistant professor in the department of otolaryngology–head and neck surgery at the Medical University of South Carolina.

A 1986–88 ERG scientist, Judy R. Dubno, Ph.D., is a member of HHF’s Board of Directors. The study’s lead author, William Bologna, Au.D., Ph.D., is a postdoctoral research fellow at the National Center for Rehabilitative Auditory Research in Portland, Oregon.

Author Elizabeth Crofts, a 2018 HHF intern, is a junior at Boston University studying biomedical engineering. For our continually updated list of published papers by ERG alumni, see hhf.org/erg-alumni.

ERG Grantees' Advancements in OAE Hearing Tests, Speech-in-Noise Listening

By Yishane Lee and Inyong Choi, Ph.D.

Support for a Theory Explaining Otoacoustic Emissions: Fangyi Chen, Ph.D.

It’s a remarkable feature of the ear that it not only hears sound but also generates it. These sounds, called otoacoustic emissions (OAEs), were discovered in 1978. Thanks in part to ERG research on outer hair cell motility, measuring OAEs has become a common, noninvasive hearing test, especially among infants too young to respond to sound prompts.

There are two theories about how these emissions travel from the interior of the cochlea out toward its base. The traditional one is the backward traveling wave theory, in which sound emissions travel slowly as a transverse wave along the basilar membrane, which divides the cochlea into two fluid-filled cavities. In a transverse wave, the wave particles move perpendicular to the wave direction. But this theory does not explain some anomalies, leading to a second hypothesis: the fast compression wave theory, which holds that the emissions travel as a longitudinal wave via the lymph fluids around the basilar membrane. In a longitudinal wave, the wave particles move in the same direction as the wave motion.

Figuring out how the emissions are created will promote greater accuracy of the OAE hearing test and a better understanding of cochlear mechanics. Fangyi Chen, Ph.D., a 2010 Emerging Research Grants (ERG) recipient, started investigating the issue at Oregon Health & Science University and is now at China’s Southern University of Science and Technology. His team’s paper, published in the journal Neural Plasticity in July 2018, for the first time experimentally validates the backward traveling wave theory.

Chen and his coauthors—including Allyn Hubbard, Ph.D., and Alfred Nuttall, Ph.D., who are each 1989–90 ERG recipients—directly measured the basilar membrane vibration to determine the wave propagation mechanism of the emissions. The team stimulated the membrane at a specific location, allowing the vibration source that initiates the backward wave to be pinpointed. The resulting vibrations along the membrane were then measured at multiple locations in vivo (in guinea pigs), showing a consistent lag as distance from the vibration source increased. The researchers also measured wave speeds on the order of tens of meters per second, much slower than the speed a compression wave would have in water (roughly 1,500 meters per second). The results were confirmed using a computer simulation. In addition to the wave propagation study, the team created and simulated a mathematical model of the cochlea based on an acoustic-electrical analogy, and used it to interpret why no peak frequency-to-place map was observed in the backward traveling wave, explaining some of the previous anomalies associated with this OAE theory.
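
To see why the measured lags favor the slow traveling wave, consider a back-of-the-envelope comparison of the delays each theory predicts. The speeds below are assumed, order-of-magnitude values for illustration, not figures from the study.

```python
# Rough comparison of propagation delays predicted by the two OAE theories.
TRAVELING_WAVE_SPEED = 20.0      # m/s: slow transverse wave on the basilar membrane (assumed)
COMPRESSION_WAVE_SPEED = 1500.0  # m/s: approximate speed of sound in water/lymph

def delay_us(distance_mm, speed_m_per_s):
    """Time to travel distance_mm at speed_m_per_s, in microseconds."""
    return distance_mm / 1000.0 / speed_m_per_s * 1e6

for d_mm in (1.0, 2.0, 4.0):
    print(f"{d_mm:.0f} mm from source: traveling wave {delay_us(d_mm, TRAVELING_WAVE_SPEED):.0f} µs, "
          f"compression wave {delay_us(d_mm, COMPRESSION_WAVE_SPEED):.1f} µs")
```

A lag that grows by tens of microseconds per millimeter, as observed, fits a slow traveling wave; a compression wave would reach every measurement point almost simultaneously.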

Speech-in-Noise Understanding Relies on How Well You Combine Information Across Multiple Frequencies: Inyong Choi, Ph.D.

Understanding speech in noisy environments is crucial for communication, yet many individuals, with and without hearing loss, struggle with this ability. Our study in Hearing Research, published in September 2018, finds that how well a listener combines information across multiple frequencies is a critical factor in good speech-in-noise understanding. We tested this with a pitch-fusion task in "hybrid" cochlear implant users, who receive both low-frequency acoustic and high-frequency electric stimulation within the same ear.

In the pitch-fusion task, subjects heard either a tone consisting of many frequencies in a simple mathematical relationship or a tone with more irregular spacing between its frequencies. Subjects had to say whether the tone sounded "natural" or "unnatural," given that a tone whose frequencies stand in a simple mathematical relationship sounds much more natural to us. My team and I are now studying how to improve sensitivity to this "naturalness" in listeners with hearing loss, with the aim of providing individualized therapeutic options for difficulties with speech-in-noise understanding.
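
As a concrete illustration of the two kinds of tones, here is a minimal sketch that synthesizes a harmonic complex (components at integer multiples of a fundamental) and an inharmonic variant with jittered spacing. The fundamental, component count, and jitter range are assumptions for illustration, not the study’s parameters.

```python
import numpy as np

SAMPLE_RATE = 44100  # Hz
DURATION = 0.5       # seconds
F0 = 220.0           # fundamental frequency in Hz (assumed)

t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE

def complex_tone(freqs):
    """Sum equal-amplitude sinusoids at the given frequencies, normalized to +/-1."""
    tone = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    return tone / np.max(np.abs(tone))

# Harmonic tone: components in a simple mathematical relationship (integer multiples of F0).
harmonic_freqs = [F0 * k for k in range(1, 9)]

# Inharmonic tone: the same components, each shifted by random jitter of up to 6 percent.
rng = np.random.default_rng(0)
inharmonic_freqs = [f * (1 + rng.uniform(-0.06, 0.06)) for f in harmonic_freqs]

natural = complex_tone(harmonic_freqs)      # tends to fuse into one pitch ("natural")
unnatural = complex_tone(inharmonic_freqs)  # tends to sound rough ("unnatural")
```

Played back, the harmonic version tends to fuse into a single pitch, while the jittered version sounds rough or out of tune, which is the judgment the task asks listeners to make.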

2017 ERG recipient Inyong Choi, Ph.D., is an assistant professor in the department of communication sciences and disorders at the University of Iowa in Iowa City.

We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.


New Research Shows Hearing Aids Improve Brain Function and Memory in Older Adults

By University of Maryland Department of Hearing and Speech Sciences

One of the most prevalent health conditions among older adults, age-related hearing loss can lead to cognitive decline, social isolation, and depression. However, new research from the University of Maryland (UMD) Department of Hearing and Speech Sciences (HESP) shows that the use of hearing aids not only restores the capacity to hear but can also improve brain function and working memory.

The UMD-led research team monitored a group of first-time hearing aid users with mild-to-moderate hearing loss over a period of six months. The researchers used a variety of behavioral and cognitive tests designed to assess participants’ hearing as well as their working memory, attention, and processing speed. They also measured electrical activity produced in response to speech sounds in the auditory cortex and midbrain.

At the end of the six months, participants showed improved memory, improved neural speech processing, and greater ease of listening as a result of the hearing aid use. Findings from the study were published recently in Clinical Neurophysiology and Neuropsychologia.

“Our results suggest that the benefits of auditory rehabilitation through the use of hearing aids may extend beyond better hearing and could include improved working memory and auditory brain function,” says HESP Assistant Professor Samira Anderson, Ph.D., who led the research team. “In effect, hearing aids can actually help reverse several of the major problems with communication that are common as we get older.”

According to the National Institutes of Health, as many as 28.8 million Americans could benefit from wearing hearing aids, but less than a third of that population actually uses them. Several barriers prevent more widespread use of hearing aids—namely, their high cost and the fact that many people find it difficult to adjust to wearing them. A growing body of evidence has demonstrated a link between hearing loss and cognitive decline in older adults. Aging and hearing loss can also lead to changes in the brain’s ability to efficiently process speech, leading to decreased ability to understand what others are saying, especially in noisy backgrounds.

The UMD researchers say the results of their study provide hope that hearing aid use can at least partially restore deficits in cognitive function and auditory brain function in older adults.

“We hope our findings underscore the need to not only make hearing aids more accessible and affordable for older adults, but also to improve fitting procedures to ensure that people continue to wear them and benefit from them,” Anderson says.

The research team is working on developing better procedures for fitting people with hearing aids for the first time. The study was funded by Hearing Health Foundation and the National Institutes of Health (NIDCD R21DC015843).

This is republished with permission from the University of Maryland’s press office. Samira Anderson, Au.D., Ph.D., is a 2014 Emerging Research Grants (ERG) researcher generously funded by the General Grand Chapter Royal Arch Masons International. We thank the Royal Arch Masons for their ongoing support of research in the area of central auditory processing disorder. These two new published papers and an earlier paper by Anderson all stemmed from Anderson’s ERG project.

Read more about Anderson in Meet the Researcher and in “A Closer Look” in the Winter 2014 issue of Hearing Health.

WE NEED YOUR HELP IN FUNDING THE EXCITING WORK OF HEARING AND BALANCE SCIENTISTS. DONATE TODAY TO HEARING HEALTH FOUNDATION AND SUPPORT GROUNDBREAKING RESEARCH: HHF.ORG/DONATE.

Receive updates on life-changing hearing research and resources by subscribing to HHF's free quarterly magazine and e-newsletter.


Clear Speech: It’s Not Just About Conversation

By Kathi Mestayer

In the Spring 2018 issue of Hearing Health, we talk about ways to help our conversational partners speak more clearly, so we can understand them better.

But what about public broadcast speech? It comes to us via phone, radio, television, and computer screen, as well as those echo-filled train stations, bus terminals, and airports. There’s room for improvement everywhere.

This digital oscilloscope representation of speech, with pauses, shows that gaps as short as a few milliseconds are used to separate words and syllables. According to Frank Musiek, Ph.D., CCC-A, a professor of speech, language and hearing sciences at the University of Arizona, people with some kinds of hearing difficulties require longer than normal gap intervals in order to perceive them.
Credit: Frank Musiek

In some cases, like Amtrak’s 30th Street Station in Philadelphia, clear speech is a real challenge. The beautiful space has towering cathedral ceilings and is wildly reverberant, like a huge echo chamber. Even typical-hearing people can’t understand a word that comes over the PA system. Trust me; I’ve asked several times.

In that space, a large visual display in the center of the hall and the lines of people moving toward the boarding areas get the message across: It’s time to get on the train. I wonder why they even bother with the announcements, except that they signal that something is going on, so people will check the display.

Radio is very different, at least in my kitchen. There are no echoes, so I can enjoy listening to talk radio while I make my coffee in the morning. The other day, one of the station’s nonprofit supporters was described on air as: “…supporting creative people and defective institutions…”

Huh? That couldn’t be right. It took a few seconds for me to realize what had actually been said: “supporting creative people and effective institutions.” Inter-word pauses are one of the key characteristics of clear speech. A slightly longer pause between the words “and” and “effective” would, in this case, have done the trick.
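
For the technically curious, here is a minimal sketch of that fix, assuming the announcement exists as digitized audio in a NumPy array; the sample rate and pause length are illustrative values.

```python
import numpy as np

SAMPLE_RATE = 16000  # Hz; illustrative value

def lengthen_pause(audio, boundary_s, extra_ms=50.0):
    """Insert extra_ms of silence at a word boundary (given in seconds),
    e.g., between "and" and "effective," to sharpen the next word's onset."""
    idx = int(boundary_s * SAMPLE_RATE)
    silence = np.zeros(int(extra_ms / 1000.0 * SAMPLE_RATE), dtype=audio.dtype)
    return np.concatenate([audio[:idx], silence, audio[idx:]])
```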

In the meantime, I chuckle every time that segment airs (which is often), and wonder if anyone else thinks about the defective institutions!

Staff writer Kathi Mestayer serves on advisory boards for the Virginia Department for the Deaf and Hard of Hearing and the Greater Richmond, Virginia, chapter of the Hearing Loss Association of America.
