
Lasting Effects From Head and Brain Injury

By Elliott Kozin, M.D.

Traumatic brain injury (TBI) is a major public health issue and contributes to injury-related morbidity and mortality worldwide. The economic cost of TBI in the United States is estimated to exceed $76 billion per year. The health effects of TBI are profound: it can lead to chronic and debilitating physical and psychosocial symptoms, such as loss of cognitive, sensory, and psychological function. Auditory and vestibular dysfunction has long been recognized as a consequence of head injury, including TBI.

Our research examined auditory complaints following traumatic brain injury, as well as changes that occur to the peripheral vestibular system in the postmortem setting. In “Patient‐Reported Auditory Handicap Measures Following Mild Traumatic Brain Injury,” published in The Laryngoscope, we used patient-reported outcome measures to assess auditory complaints in patients with mild traumatic brain injury (mTBI). The team found that auditory symptoms and the associated handicap were common in patients with non-blast mTBI.


For another paper in The Laryngoscope, “Peripheral Vestibular Organ Degeneration After Temporal Bone Fracture: A Human Otopathology Study,” we evaluated postmortem specimens from donors with head injury in the National Temporal Bone Pathology Registry. In a cohort of patients with temporal bone fractures, there were distinct peripheral vestibular changes. Collectively, these findings have implications for the pathophysiology and management of symptoms in this patient population.


Elliott Kozin, M.D., is a neurotology fellow at Eaton Peabody Laboratories, Massachusetts Eye and Ear/Harvard Medical School, and a 2018 Emerging Research Grants recipient generously funded by the General Grand Chapter Royal Arch Masons International.


Developing Better Tests for Discovering “Hidden” Hearing Loss

By Hari Bharadwaj, Ph.D., with Inyong Choi, Ph.D.

Conventionally, hearing loss is thought to be a consequence of damage to delicate sensory hair cells in the inner ear (cochlea). However, over the past decade animal studies have shown that nerve endings in the cochlea are considerably more vulnerable to damage than the sensory hair cells, and that such nerve damage is likely to happen before conventionally recognized forms of hearing loss occur.

Emerging Research Grants (ERG) recipients Bharadwaj and Choi, and colleagues, systematically investigated the many sources of variability that obscure cochlear nerve damage (“synaptopathy”) to provide recommendations for how best to measure such nerve damage.

Unfortunately, damage to cochlear nerve endings cannot be detected by current clinical hearing tests. Yet this “hidden” damage can hypothetically still affect hearing in everyday noisy environments such as crowded restaurants and busy streets. It is therefore important to develop tests that can detect such damage in humans, and there is considerable interest among hearing scientists in doing so.

In our paper published in Neuroscience on March 8, 2019, we considered noninvasive tests that can potentially reveal such nerve damage and systematically investigated extraneous sources of variability that might reduce the sensitivity and specificity of these tests. This helped us develop recommendations for how best to apply them. Funding from Hearing Health Foundation’s Emerging Research Grants contributed to experiments that helped us understand and articulate the role of two key variables: how variations in individual anatomy (e.g., brain shape and size) affect our noninvasive tests, and how cognitive factors such as attention may affect hearing independently of how well the inner ear captures the information in sounds.

Armed with the knowledge about these variables and other factors described in the paper, we anticipate that hearing scientists will be able to design more powerful experiments to understand the effects of damage to cochlear nerve endings, and build more powerful tests to detect such damage in the clinic. This work is crucial in enabling clinical translation of the basic science that has been uncovered over the past decade.


A 2015 Emerging Research Grants (ERG) scientist, Hari Bharadwaj, Ph.D., is an assistant professor at Purdue University in Indiana with a joint appointment in speech, language, and hearing sciences, and biomedical engineering. Inyong Choi, Ph.D., is an assistant professor in the department of communication sciences and disorders at the University of Iowa. Choi’s 2017 ERG grant was generously funded by the General Grand Chapter Royal Arch Masons International.


Detailing the Relationships Between Auditory Processing and Cognitive-Linguistic Abilities in Children

By Beula Magimairaj, Ph.D.

Children suspected to have or diagnosed with auditory processing disorder (APD) present with difficulty understanding speech despite typical-range peripheral hearing and typical intellectual abilities. Children with APD (also known as central auditory processing disorder, CAPD) may experience difficulties while listening in noise, discriminating speech and non-speech sounds, recognizing auditory patterns, identifying the location of a sound source, and processing time-related aspects of sound, such as rapid sound fluctuations or detecting short gaps between sounds. According to 2010 clinical practice guidelines by the American Academy of Audiology and a 2005 American Speech-Language-Hearing Association (ASHA) report, developmental APD is a unique clinical entity. According to ASHA, APD is not the result of cognitive or language deficits.


In our July 2018 study in the journal Language, Speech, and Hearing Services in Schools, part of its special issue on working memory, my coauthor and I present a novel framework for conceptualizing auditory processing abilities in school-age children. According to our framework, cognitive and linguistic factors are included along with auditory factors as potential sources of deficits that may contribute, individually or in combination, to listening difficulties in children.

We present empirical evidence from hearing, language, and cognitive science in explaining the relationships between children’s auditory processing abilities and cognitive abilities such as memory and attention. We also discuss studies that have identified auditory abilities that are unique and may benefit from assessment and intervention. Our unified framework is based on studies from typically developing children; those suspected to have APD, developmental language impairment, or attention deficit disorders; and models of attention and memory in children. In addition, the framework is based on what we know about the integrated functioning of the nervous system and evidence of multiple risk factors in developmental disorders. A schematic of this framework is shown here.

[Figure: schematic of the unified framework]

In our publication, for example, we discuss how traditional APD diagnostic models show remarkable overlap with models of working memory (WM). WM refers to an active memory system that individuals use to hold and manipulate information in conscious awareness. Overlapping components among the models include verbal short-term memory capacity (auditory decoding and memory), integration of audiovisual information and information from long-term memory, and central executive functions such as attention and organization. Therefore, a deficit in the WM system can also potentially mimic the APD profile.

Similarly, auditory decoding (i.e., processing speech sounds), audiovisual integration, and organization abilities can influence language processing at various levels of complexity. For example, poor phonological (speech sound) processing abilities, such as those seen in some children with primary language impairment or dyslexia, could potentially lead to auditory processing profiles that correspond to APD. Auditory memory and auditory sequencing of spoken material are often challenging for children diagnosed with APD. These are the same integral functions attributed to the verbal short-term memory component of WM. Such observations are supported by the frequent co-occurrence of language impairment, APD, and attention deficit disorders.

Furthermore, it is important to note that cognitive-linguistic and auditory systems are highly interconnected in the nervous system. Therefore, heterogeneous profiles of children with listening difficulties may reflect a combination of deficits across these systems. This calls for a unified approach to model functional listening difficulties in children.

Given the overlap in developmental trajectories of auditory skills and WM abilities, the age at evaluation must be taken into account during assessment of auditory processing. The American Academy of Audiology does not recommend APD testing for children developmentally younger than age 7. Clinicians must therefore adhere to this recommendation to save time and resources for parents and children and to avoid misdiagnosis.

However, any significant listening difficulties noted in children at any age (especially at younger ages) must call for a speech-language evaluation, a peripheral hearing assessment, and cognitive assessment. This is because identification of deficits or areas of risk in language or cognitive processing triggers the consideration of cognitive-language enrichment opportunities for the children. Early enrichment of overall language knowledge and processing abilities (e.g., phonological/speech sound awareness, vocabulary) has the potential to improve children's functional communication abilities, especially when listening in complex auditory environments. 

Given how prominent children’s difficulty listening in complex auditory environments is, and given emerging evidence that speech perception in noise and spatialized listening are distinct from other auditory and cognitive factors, listening training in spatialized noise appears to hold promise as an intervention. This finding needs to be systematically replicated across independent research studies.

Other evidence-based implications discussed in our publication include improving auditory access using assistive listening devices (e.g., FM systems), using a hierarchical assessment model, or employing a multidisciplinary front-end screening of sensitive areas (with minimized overlap across audition, language, memory, and attention) prior to detailed assessments in needed areas.

Finally, we emphasize that prevention should be at the forefront. This calls for integrating auditory enrichment with meaningful activities such as musical experience, play, social interaction, and rich language experience beginning early in infancy while optimizing attention and memory load. While these approaches are not new, current research evidence on neuroplasticity makes a compelling case to promote auditory enrichment experiences in infants and young children.


A 2015 Emerging Research Grants (ERG) scientist generously funded by the General Grand Chapter Royal Arch Masons International, Beula Magimairaj, Ph.D., is an assistant professor in the department of communication sciences and disorders at the University of Central Arkansas. Magimairaj’s related ERG research on working memory appears in the Journal of Communication Disorders, and she wrote about an earlier paper from her ERG grant in the Summer 2018 issue of Hearing Health.

 

We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.

 

Introducing the 2018 Emerging Research Grantees

By Lauren McGrath


Hearing Health Foundation (HHF) is pleased to present our Emerging Research Grants (ERG) awardees for the 2018 project cycle.

Grantee Tenzin Ngodup, Ph.D., will investigate neuronal activity in the ventral cochlear nucleus to help prevent and treat tinnitus.

Fifteen individuals at various institutions nationwide—including Johns Hopkins School of Medicine, the University of Minnesota, and the National Cancer Institute—will conduct innovative research in the following topic areas:

  • Central Auditory Processing Disorder (CAPD)

  • General Hearing Health

  • Hearing Loss in Children

  • Hyperacusis

  • Tinnitus

  • Usher Syndrome

Our grantees’ research investigations seek to solve specific auditory and vestibular problems such as declines in complex sound processing in age-related hearing loss (presbycusis), ototoxicity caused by the life-saving chemotherapy drug cisplatin, and noise-induced hearing loss.

HHF looks forward to the advancements that will come about from these promising scientific endeavors. The foundation owes many thanks to the General Grand Chapter Royal Arch Masons International, Cochlear, Hyperacusis Research, the Les Paul Foundation, and several generous, anonymous donors who have collectively empowered this important work.

We are currently planning for our 2019 ERG grant cycle, for which applications will open September 1. Learn more about the application process.

WE NEED YOUR HELP IN FUNDING THE EXCITING WORK OF HEARING AND BALANCE SCIENTISTS. DONATE TODAY TO HEARING HEALTH FOUNDATION AND SUPPORT GROUNDBREAKING RESEARCH: HHF.ORG/DONATE.

Grantee Rachael R. Baiduc, Ph.D., will identify cardiovascular disease risk factors that may contribute to hearing loss.


New Research Shows Hearing Aids Improve Brain Function and Memory in Older Adults

By University of Maryland Department of Hearing and Speech Sciences

One of the most prevalent health conditions among older adults, age-related hearing loss can lead to cognitive decline, social isolation, and depression. However, new research from the University of Maryland (UMD) Department of Hearing and Speech Sciences (HESP) shows that the use of hearing aids not only restores the capacity to hear but can also improve brain function and working memory.


The UMD-led research team monitored a group of first-time hearing aid users with mild-to-moderate hearing loss over a period of six months. The researchers used a variety of behavioral and cognitive tests designed to assess participants’ hearing as well as their working memory, attention, and processing speed. They also measured electrical activity produced in response to speech sounds in the auditory cortex and midbrain.

At the end of the six months, participants showed improved memory, improved neural speech processing, and greater ease of listening as a result of the hearing aid use. Findings from the study were published recently in Clinical Neurophysiology and Neuropsychologia.

“Our results suggest that the benefits of auditory rehabilitation through the use of hearing aids may extend beyond better hearing and could include improved working memory and auditory brain function,” says HESP Assistant Professor Samira Anderson, Ph.D., who led the research team. “In effect, hearing aids can actually help reverse several of the major problems with communication that are common as we get older.”

According to the National Institutes of Health, as many as 28.8 million Americans could benefit from wearing hearing aids, but less than a third of that population actually uses them. Several barriers prevent more widespread use of hearing aids—namely, their high cost and the fact that many people find it difficult to adjust to wearing them. A growing body of evidence has demonstrated a link between hearing loss and cognitive decline in older adults. Aging and hearing loss can also lead to changes in the brain’s ability to efficiently process speech, leading to decreased ability to understand what others are saying, especially in noisy backgrounds.

The UMD researchers say the results of their study provide hope that hearing aid use can at least partially restore deficits in cognitive function and auditory brain function in older adults.

“We hope our findings underscore the need to not only make hearing aids more accessible and affordable for older adults, but also to improve fitting procedures to ensure that people continue to wear them and benefit from them,” Anderson says.


The research team is working on developing better procedures for fitting people with hearing aids for the first time. The study was funded by Hearing Health Foundation and the National Institutes of Health (NIDCD R21DC015843).

This article is republished with permission from the University of Maryland’s press office. Samira Anderson, Au.D., Ph.D., is a 2014 Emerging Research Grants (ERG) researcher generously funded by the General Grand Chapter Royal Arch Masons International. We thank the Royal Arch Masons for their ongoing support of research in the area of central auditory processing disorder. These two newly published papers and an earlier paper by Anderson all stemmed from her ERG project.


Read more about Anderson in Meet the Researcher and “A Closer Look,” in the Winter 2014 issue of Hearing Health.

WE NEED YOUR HELP IN FUNDING THE EXCITING WORK OF HEARING AND BALANCE SCIENTISTS. DONATE TODAY TO HEARING HEALTH FOUNDATION AND SUPPORT GROUNDBREAKING RESEARCH: HHF.ORG/DONATE.

Receive updates on life-changing hearing research and resources by subscribing to HHF's free quarterly magazine and e-newsletter.

 
 

New Data-Driven Analysis Procedure for Diagnostic Hearing Test

By Carol Stoll

Stimulus frequency otoacoustic emissions (SFOAEs) are sounds generated by the inner ear in response to a pure-tone stimulus. Hearing tests that measure SFOAEs are noninvasive and effective for those unable to actively participate in testing, such as infants and young children. They also give valuable insight into cochlear function and can be used to diagnose specific types and causes of hearing loss. Though interpreting SFOAEs is simpler than for other types of emissions, it is difficult to separate the SFOAE from the same-frequency stimulus and from background noise caused by patient movement and microphone slippage in the ear canal.

2014 Emerging Research Grants (ERG) recipient Srikanta Mishra, Ph.D., and colleagues have addressed SFOAE analysis issues by developing an efficient data-driven analysis procedure. Their new method considers and rejects irrelevant background noise such as breathing, yawning, and subtle movements of the subject and/or microphone cable. The researchers used their new analysis procedure to characterize the standard features of SFOAEs in typical-hearing young adults and published their results in Hearing Research.

Mishra and team chose 50 typical-hearing young adults to participate in their study. Instead of using a discrete-tone procedure that measures SFOAEs one frequency at a time, they used a more efficient method: a single sweep-tone stimulus that seamlessly changes frequencies from 500 to 4,000 Hz, and vice versa, over 16 and 24 seconds. The sweep tones were interspersed with suppressor tones that reduce the response to the previous tone. The tester manually paused and restarted the sweep recording when they detected background noises from the subject’s movements.
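For readers curious what such a sweep-tone stimulus looks like in practice, here is a minimal sketch in Python. The sampling rate and the logarithmic sweep type are our assumptions for illustration, not parameters taken from the paper.

```python
import numpy as np
from scipy.signal import chirp

FS = 48_000  # sampling rate in Hz (our assumption, not from the study)

def make_sweep(f_start, f_end, duration, fs=FS):
    """Generate a sweep tone gliding from f_start to f_end Hz over `duration` seconds."""
    t = np.linspace(0.0, duration, int(fs * duration), endpoint=False)
    # A logarithmic frequency trajectory is one common choice; the paper's
    # exact sweep type is not specified here.
    return chirp(t, f0=f_start, t1=duration, f1=f_end, method="logarithmic")

upsweep = make_sweep(500, 4_000, 16)    # 500 -> 4,000 Hz over 16 seconds
downsweep = make_sweep(4_000, 500, 24)  # 4,000 -> 500 Hz over 24 seconds
```

Because the frequency changes continuously, a single sweep samples the whole 500 to 4,000 Hz range in one recording, which is what makes this approach faster than measuring discrete tones one frequency at a time.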


The SFOAEs generated were analyzed using a mathematical model called a least square fit (LSF) and a series of algorithms based on statistical analysis of the data. This model objectively minimized the potential error from extraneous noises. Conventional SFOAE features such as level, noise floor, and signal-to-noise ratio (SNR) were described for the typical-hearing subjects.
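The least-squares idea can be illustrated on a toy single-frequency case: model the recording as a sine plus a cosine at the known stimulus frequency, solve for the coefficients, and treat the residual as the noise floor. This is only a schematic of the general technique, not the authors' actual algorithm; all parameters below are our illustrative choices.

```python
import numpy as np

def lsf_amplitude(x, freq, fs):
    """Estimate the level of a known-frequency component in recording x
    by a least-squares fit of sine and cosine regressors."""
    t = np.arange(len(x)) / fs
    # Design matrix: sine, cosine, and a DC column
    M = np.column_stack([np.sin(2 * np.pi * freq * t),
                         np.cos(2 * np.pi * freq * t),
                         np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(M, x, rcond=None)
    amplitude = np.hypot(coef[0], coef[1])          # component amplitude
    residual = x - M @ coef                         # everything else
    noise_rms = np.sqrt(np.mean(residual ** 2))     # "noise floor" estimate
    return amplitude, noise_rms

# Toy check: a 1 kHz tone at amplitude 0.5 buried in noise
fs = 16_000
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
recording = 0.5 * np.sin(2 * np.pi * 1_000 * t + 0.3) + 0.1 * rng.standard_normal(fs)
amp, noise_rms = lsf_amplitude(recording, 1_000.0, fs)
```

In this toy run `amp` recovers the 0.5 tone amplitude despite the added noise, and `noise_rms` approximates the noise level; a rejection rule could then discard recording segments whose residual exceeds a statistical threshold.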

Overall, the results of this study demonstrate the effectiveness of the automated noise-rejection procedure for sweep-tone–evoked SFOAEs in adults. The SFOAE features characterized in this large group of typical-hearing young adults should be useful for developing tests of cochlear function for both the clinic and the laboratory.

Srikanta Mishra, Ph.D., was a 2014 Emerging Research Grants scientist and a General Grand Chapter Royal Arch Masons International award recipient. For more, see “Sweep-tone evoked stimulus frequency otoacoustic emissions in humans: Development of a noise-rejection algorithm and normative features” in Hearing Research.

We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.

 
 



Cortical Alpha Oscillations Predict Speech Intelligibility

By Andrew Dimitrijevic, Ph.D.

Hearing Health Foundation Emerging Research Grants recipient Andrew Dimitrijevic, Ph.D., and colleagues recently published “Cortical Alpha Oscillations Predict Speech Intelligibility” in the journal Frontiers in Human Neuroscience.

The scientists measured brain activity that originates from the cortex, known as alpha rhythms. Previous research has linked these rhythms to sensory processes involving working memory and attention, two crucial faculties for listening to speech in noise. However, no previous research had studied alpha rhythms directly during a clinical speech-in-noise perception task. The purpose of this study was to measure alpha rhythms during attentive listening in a commonly used speech-in-noise task, known as digits-in-noise (DiN), to better understand the neural processes associated with hearing speech in noise.

Fourteen typical-hearing young adult subjects performed the DiN test while wearing electrode caps to measure alpha rhythms. All subjects completed the task in active and passive listening conditions. The active condition mimicked attentive listening and asked the subject to repeat the digits heard in varying levels of background noise. In the passive condition, the subjects were instructed to ignore the digits and watch a movie of their choice, with captions and no audio.
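As a rough illustration of the underlying measurement, alpha-band power can be estimated from an EEG trace with a standard spectral method. The 8 to 12 Hz band edges, the sampling rate, and the synthetic signals below are our assumptions for demonstration, not the study's processing pipeline.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # EEG sampling rate in Hz (our assumption)

def alpha_power(eeg, fs=FS, band=(8.0, 12.0)):
    """Mean power spectral density in the alpha band (8-12 Hz)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Synthetic 10-second "recordings": one with a 10 Hz oscillation, one without
t = np.arange(10 * FS) / FS
rng = np.random.default_rng(1)
active = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
passive = 0.5 * rng.standard_normal(t.size)
```

Comparing `alpha_power(active)` against `alpha_power(passive)` mirrors, in a very simplified way, the kind of condition contrast the study performed on real EEG recordings.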

Two key findings emerged from this study regarding the influence of attention, individual variability, and the predictability of correct recall.

First, the authors concluded that the active condition produced alpha rhythms, while passive listening yielded no such activity. Selective auditory attention can therefore be indexed through this measurement. This result also illustrates that these alpha rhythms arise from neural processes associated with selective attention, rather than from the physical characteristics of sound. To the authors’ knowledge, these differences between passive and active conditions have not previously been reported.

Second, all participants showed similar brain activation that predicted when one was going to make a mistake on the DiN task. Specifically, the magnitude of one particular aspect of the alpha rhythms correlated with comprehension: a larger magnitude was observed on correct trials relative to incorrect trials. This finding was consistent throughout the study and has great potential for clinical use.

Dimitrijevic and his colleagues’ novel findings advance the field’s understanding of the neural activity related to speech-in-noise tasks. They inform the assessment of clinical populations with speech-in-noise deficits, such as those with auditory neuropathy spectrum disorder or central auditory processing disorder (CAPD).

Future research will attempt to use this alpha rhythms paradigm in typically developing children and those with CAPD. Ultimately, the scientists hope to develop a clinical tool to better assess listening in a more real-world situation, such as in the presence of background noise, to augment traditional audiological testing.

Andrew Dimitrijevic, Ph.D., is a 2015 Emerging Research Grantee and General Grand Chapter Royal Arch Masons International award recipient. Hearing Health Foundation would like to thank the Royal Arch Masons for their generous contributions to Emerging Research Grants scientists working in the area of central auditory processing disorders (CAPD). We appreciate their ongoing commitment to funding CAPD research.

We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.

 
 

Introducing HHF's 2016 Emerging Research Grant Recipients

By Morgan Leppla

We are excited to announce the 2016 Emerging Research Grant recipients. This year, HHF funded five research areas:

  • Central Auditory Processing Disorder (CAPD): research investigating a range of disorders within the ear and brain that affect the processing of auditory information. HHF thanks the General Grand Chapter Royal Arch Masons International for enabling us to fund four grants in the area of CAPD. 
     
  • Hyperacusis: research that explores the mechanisms, causes, and diagnosis of loudness intolerance. One grant was generously funded by Hyperacusis Research.
     
  • Ménière’s Disease: research that investigates the inner ear and balance disorder. One grant was funded by the Estate of Howard F. Schum.
     
  • Stria: research that furthers our understanding of the stria vascularis, strial atrophy, and/or development of the stria. One grant was funded by an anonymous family foundation interested in this research.
     
  • Tinnitus: research to understand the perception of sound in the ear in the absence of an acoustic stimulus. Two grants were awarded, thanks to the generosity of the Les Paul Foundation and the Barbara Epstein Foundation.

To learn more about our 2016 ERG grantees and their research goals, please visit hhf.org/2016_researchers.

HHF is also currently planning for our 2017 ERG grant cycle. If you're interested in naming a research grant in any discipline within the hearing and balance space, please contact development@hhf.org.


Neural Sensitivity to Binaural Cues With Bilateral Cochlear Implants

By Massachusetts Eye and Ear/Harvard Medical School

Many profoundly deaf people wearing cochlear implants (CIs) still face challenges in everyday situations, such as understanding conversations in noise. Even with CIs in both ears, they have difficulty making full use of subtle differences in the sounds reaching the two ears, known as interaural time differences (ITDs), to identify where a sound is coming from. This problem is especially acute at the high stimulation rates used in clinical CI processors.

A team of researchers from Massachusetts Eye and Ear/Harvard Medical School, including past Emerging Research Grants recipient Yoojin Chung, Ph.D., studied how neurons in the auditory midbrain encode binaural cues delivered by bilateral CIs in an animal model. They found that the majority of neurons in the auditory midbrain were sensitive to ITDs; however, this sensitivity degraded with increasing pulse rate. This degradation paralleled the pulse-rate dependence of perceptual limits in human CI users.

This study provides a better understanding of the neural mechanisms underlying the limitations of current clinical bilateral CIs and suggests directions for improvement, such as delivering ITD information in low-rate pulse trains.

The full paper was published in The Journal of Neuroscience and is available here. This article was republished with permission of the Massachusetts Eye and Ear/Harvard Medical School.

Yoojin Chung, Ph.D., was a 2012 and 2013 General Grand Chapter Royal Arch Masons International award recipient through our Emerging Research Grants program. Hearing Health Foundation would like to thank the Royal Arch Masons for their generous contributions to Emerging Research Grantees working in the area of central auditory processing disorders (CAPD). We appreciate their ongoing commitment to funding CAPD research.

We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.

 
 

Defining Auditory-Visual Objects

By Molly McElroy, Ph.D.

If you've ever been to a crowded bar, you may notice that it's easier to hear your friend if you watch his face and mouth movements. And if you want to pick out the melody of the first violin in a string quartet, it helps to watch the strokes of the players' bow.

I-LABS faculty member Adrian KC Lee and co-authors use these examples to illustrate auditory-visual objects, the topic of the researchers' recently published opinion paper in the prestigious journal Trends in Neurosciences.

Lee, who is an associate professor in the UW Department of Speech & Hearing Sciences, studies brain mechanisms that underlie hearing. With an engineering background, Lee is particularly interested in understanding how to improve hearing prosthetics.

Previous I-LABS research has shown that audio-visual processing is evident as early as 18 weeks of age, suggesting it is a fundamental part of how the human brain processes speech. Those findings, published in 1982 in the journal Science, showed that infants recognize the correspondence between the sight and the sound of speech.

In the new paper, Lee and co-authors Jennifer Bizley, of University College London, and Ross Maddox, of I-LABS, discuss how the brain integrates auditory and visual information—a type of multisensory processing that has been referred to by various terms but with no clear delineation.

The researchers wrote the paper to provide their field with a more standard nomenclature for what an audio-visual object is and give experimental paradigms for testing it.

“That we combine sounds and visual stimuli in our brains is typically taken for granted, but the specifics of how we do that aren’t really known," said Maddox, a postdoctoral researcher working with Lee. “Before we can figure that out we need a common framework for talking about these issues. That’s what we hoped to provide in this piece.”

Trends in Neurosciences is a leading peer-reviewed journal that publishes articles it invites from leading experts in the field and focuses on topics that are of current interest or under debate in the neuroscience field.

Multisensory, especially audio-visual, work is of importance for several reasons, Maddox said. Being able to see someone talking offers huge performance improvements, which is relevant to making hearing aids that take visual information into account and in studying how people with developmental disorders like autism spectrum disorders or central auditory processing disorders (CAPD) may combine audio-visual information differently.

"The issues are debated because we think studying audio-visual phenomena would benefit from new paradigms, and here we hoped to lay out a framework for those paradigms based on hypotheses of how the brain functions," Maddox said.

Read the full paper online. This article was republished with permission of the Institute for Learning & Brain Sciences at the University of Washington.

Ross Maddox, Ph.D., was a 2013 General Grand Chapter Royal Arch Masons International award recipient. Hearing Health Foundation would like to thank the Royal Arch Masons for their generous contributions to Emerging Research Grantees working in the area of central auditory processing disorders (CAPD). We appreciate their ongoing commitment to funding CAPD research.

We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.

 
 