
New Insights into the Development of the Hair Cell Bundle

By Yishane Lee

Recent genetic studies have identified the protein Ripor2 (formerly known as Fam65b) as an important molecule for hearing. It localizes to the stereocilia of auditory hair cells, and mutations that disrupt its function cause deafness.

In a study published in the Journal of Molecular Medicine in November 2018, Oscar Diaz-Horta, Ph.D., a 2017 Emerging Research Grants (ERG) scientist, and colleagues further detail the protein’s role, demonstrating how it interacts with other proteins during the development of the hair cell bundle. The team found that the absence of Ripor2 changes the orientation of the hair cell bundle, which in turn affects hearing ability.

Ripor2 interacts with Myh9, a protein encoded by a known deafness gene; Myh9 is expressed in the stereocilia of the hair cell bundle as well as in its kinocilia (located at the cells’ apices). The team found that in the absence of Ripor2, Myh9 is low in abundance: Ripor2-deficient mice developed hair cell bundles with atypically localized kinocilia and a reduced abundance of a phosphorylated form of Myh9. (Phosphorylation is a cellular process critical for protein function.)


Another specific kinociliary protein, acetylated alpha tubulin, helps stabilize cell structures. The researchers found it is also reduced in the absence of Ripor2.

The study concludes that Ripor2 deficiency affects the abundance and/or role of proteins in stereocilia and kinocilia, which negatively affects the structure and function of the auditory hair cell bundle. These newly detailed molecular aspects of hearing will help researchers better understand how hearing loss occurs when these molecular interactions are disrupted.

A 2017 ERG scientist funded by the Children’s Hearing Institute (CHI), Oscar Diaz-Horta, Ph.D., was an assistant scientist in the department of human genetics at the University of Miami. He passed away suddenly in August 2018, while this paper was in production. HHF and CHI both send our deepest condolences to Diaz-Horta’s family and colleagues.

We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.

 
 

Disrupted Nerve Cell Function and Tinnitus

By Xiping Zhan, Ph.D.

Tinnitus is a condition in which one hears a ringing and/or buzzing sound in the ear without an external sound source, and as a chronic condition it can be associated with depression, anxiety, and stress. Tinnitus has been linked to hearing loss, with the majority of tinnitus cases occurring in the presence of hearing loss. For military service members and individuals who are constantly in an environment where loud noise is generated, it is a major health issue.

This figure shows the quinine effect on the physiology of dopaminergic neurons in the substantia nigra, a structure in the midbrain.

During this phantom ringing/buzzing sensation, neurons in the auditory cortex continue to fire in the absence of a sound source, or even after deafferentation following the loss of auditory hair cells. The underlying mechanisms of tinnitus are not yet known.

In our paper published in the journal Neurotoxicity Research in July 2018, my team and I examined chemical-induced tinnitus, a side effect of some medications. Patients with chemical-induced tinnitus make up a significant portion of all tinnitus sufferers, and studying this type of tinnitus can help us understand tinnitus in general.

We focused on quinine, an antimalarial drug that also causes hearing loss and tinnitus. We theorized that this is due to the disruption of dopamine neurons, rather than cochlear hair cells, through the blockade of neuronal ion channels in the auditory system. We found that dopamine neurons are more sensitive to quinine than the hair cells or ganglion neurons of the auditory system. To a lesser extent, quinine also causes muscle reactions such as tremors and spasms (dystonia) and the loss of control over body movements (ataxia).


As dopaminergic neurons (nerve cells that produce the neurotransmitter dopamine) are implicated in all of these conditions, we tested the toxicity of quinine on induced dopaminergic neurons derived from human pluripotent stem cells and on dopaminergic neurons isolated from the mouse brain.


We found that quinine can affect the basic physiological function of dopamine neurons in humans and mice. Specifically, we found it can target and disturb the hyperpolarization-dependent ion channels in dopamine neurons. This toxicity of quinine may underlie the movement disorders and depression seen in quinine overdose (cinchonism), and understanding this mechanism will help us learn how dopamine plays a role in tinnitus modulation.

A 2015 ERG scientist, Xiping Zhan, Ph.D., received the Les Paul Foundation Award for Tinnitus Research. He is an assistant professor of physiology and biophysics at Howard University in Washington, D.C. One figure from the paper appeared on the cover of the July 2018 issue of Neurotoxicity Research.

We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.

 
 

Recording Electrical Responses to Improve the Diagnosis of Hearing Conditions

By Yishane Lee

Electrocochleography (ECochG) is a method to record electrical responses from the inner ear and the auditory nerve in the first 5 milliseconds after a sound stimulus, such as a click or tone burst. These stimuli can be adjusted for repetition rate and polarity, and recordings can be taken from the ear canal, from the eardrum, or through the eardrum. The main components of the ECochG response are the summating potential (SP), from the sensory hair cells in the cochlea, and the action potential (AP) of the auditory nerve fibers. In an October 2018 paper in the journal Canadian Audiologist, 2015 ERG scientist Wafaa Kaf, Ph.D., reviews the diagnostic applications of ECochG for Ménière’s disease and cochlear synaptopathy, two conditions that can be difficult to pinpoint, especially early in the disease, and suggests how to improve the use of ECochG as a clinical tool.


Endolymphatic hydrops, or abnormal fluctuations in inner ear fluid, is believed to be the underlying cause of Ménière’s disease and its associated hearing and balance disorder. ECochG collects information about the SP/AP amplitude and area ratios that can be used to confirm a Ménière’s diagnosis, without relying solely on clinical symptoms.

Since the SP/AP amplitude ratio can vary among known Ménière’s patients, Kaf suggests that including data about the SP/AP area ratio as well can help with diagnosing the disease. To further distinguish Ménière’s, Kaf suggests using ECochG AP latencies and, building on her prior research, the effect of fast click rates on the auditory nerve’s latency and amplitude. The continuous loop averaging deconvolution technique makes various properties of the SP and AP waveforms easier to identify and parse. Results suggest that the functions of the cochlear nerve and/or cochlear synapses are damaged in Ménière’s. Earlier research showing an abnormal acoustic reflex decay in about a quarter of Ménière’s patients, and a reduced number of synapses between inner hair cells and auditory nerve fibers, underscores the presence of nerve damage in Ménière’s.
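For technically inclined readers, the sketch below shows one way the two metrics might be computed in Python from a digitized ECochG trace. It is an illustration only, not code from Kaf’s paper; the baseline, marker indices, and analysis window are assumed to be supplied by a clinician or peak-picking software, and exact definitions of the area measure vary across labs.

```python
import numpy as np

def sp_ap_ratios(waveform, fs, baseline, sp_idx, ap_idx, window_end_idx):
    """Illustrative SP/AP amplitude and area ratios from an averaged ECochG trace.

    waveform        : 1-D array of the averaged response (microvolts)
    fs              : sampling rate in Hz
    baseline        : pre-stimulus baseline voltage
    sp_idx, ap_idx  : sample indices of the SP shoulder and the AP peak
    window_end_idx  : end of the SP/AP complex used for the area measure
    """
    sp_amp = abs(waveform[sp_idx] - baseline)
    ap_amp = abs(waveform[ap_idx] - baseline)
    amplitude_ratio = sp_amp / ap_amp

    # Area ratio: area of the SP segment relative to the whole SP/AP complex,
    # both measured as deviation from baseline.
    dt = 1.0 / fs
    sp_area = np.trapz(np.abs(waveform[sp_idx:ap_idx] - baseline), dx=dt)
    total_area = np.trapz(np.abs(waveform[sp_idx:window_end_idx] - baseline), dx=dt)
    area_ratio = sp_area / total_area

    return amplitude_ratio, area_ratio
```

Because the area ratio integrates across the whole SP/AP complex rather than relying on two single points, it offers a complementary view to the amplitude ratio, which is one rationale for reporting both.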

Cochlear synaptopathy is a noise-induced or age-related dysfunction that also reduces the number of synapses between inner ear hair cells and auditory nerve fibers, resulting in tinnitus, hyperacusis, and difficulty hearing in noise despite normal hearing sensitivity. ECochG may help with its diagnosis, especially given that traditional audiograms and hearing tests have been found to miss this “hidden hearing loss.” The use of both the SP/AP amplitude and area ratios and specific auditory brainstem responses can help confirm this condition and distinguish it from Ménière’s disease.


ECochG can also be used to help confirm the diagnosis of auditory neuropathy spectrum disorder, a problem with the way sound is transmitted between the inner ear and the brain, as well as other inner ear disorders. The technique can also monitor ear responses in real time during surgeries such as stapedectomy, endolymphatic shunt placement, and cochlear implantation, additional instances that demonstrate how ECochG holds promise for expanded use in the clinic.

A 2015 ERG scientist funded by The Estate of Howard F. Schum, Wafaa Kaf, Ph.D., is a professor of audiology at Missouri State University.

We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.

 
 

Detailing the Relationships Between Auditory Processing and Cognitive-Linguistic Abilities in Children

By Beula Magimairaj, Ph.D.

Children suspected to have or diagnosed with auditory processing disorder (APD) present with difficulty understanding speech despite typical-range peripheral hearing and typical intellectual abilities. Children with APD (also known as central auditory processing disorder, CAPD) may experience difficulties while listening in noise, discriminating speech and non-speech sounds, recognizing auditory patterns, identifying the location of a sound source, and processing time-related aspects of sound, such as rapid sound fluctuations or detecting short gaps between sounds. According to 2010 clinical practice guidelines by the American Academy of Audiology and a 2005 American Speech-Language-Hearing Association (ASHA) report, developmental APD is a unique clinical entity. According to ASHA, APD is not the result of cognitive or language deficits.


In our July 2018 study in the journal Language Speech and Hearing Services in the Schools for its special issue on “working memory,” my coauthor and I present a novel framework for conceptualizing auditory processing abilities in school-age children. According to our framework, cognitive and linguistic factors are included along with auditory factors as potential sources of deficits that may contribute individually or in combination to cause listening difficulties in children.

We present empirical evidence from hearing, language, and cognitive science in explaining the relationships between children’s auditory processing abilities and cognitive abilities such as memory and attention. We also discuss studies that have identified auditory abilities that are unique and may benefit from assessment and intervention. Our unified framework is based on studies from typically developing children; those suspected to have APD, developmental language impairment, or attention deficit disorders; and models of attention and memory in children. In addition, the framework is based on what we know about the integrated functioning of the nervous system and evidence of multiple risk factors in developmental disorders. A schematic of this framework is shown here.

Schematic of the authors’ unified framework, in which auditory, cognitive, and linguistic factors may each contribute to children’s listening difficulties.

In our publication, for example, we discuss how traditional APD diagnostic models show remarkable overlap with models of working memory (WM). WM refers to an active memory system that individuals use to hold and manipulate information in conscious awareness. Overlapping components among the models include verbal short-term memory capacity (auditory decoding and memory), integration of audiovisual information and information from long-term memory, and central executive functions such as attention and organization. Therefore, a deficit in the WM system can also potentially mimic the APD profile.

Similarly, auditory decoding (i.e., processing speech sounds), audiovisual integration, and organization abilities can influence language processing at various levels of complexity. For example, poor phonological (speech sound) processing abilities, such as those seen in some children with primary language impairment or dyslexia, could potentially lead to auditory processing profiles that correspond to APD. Auditory memory and auditory sequencing of spoken material are often challenging for children diagnosed with APD. These are the same integral functions attributed to the verbal short-term memory component of WM. Such observations are supported by the frequent co-occurrence of language impairment, APD, and attention deficit disorders.

Furthermore, it is important to note that cognitive-linguistic and auditory systems are highly interconnected in the nervous system. Therefore, heterogeneous profiles of children with listening difficulties may reflect a combination of deficits across these systems. This calls for a unified approach to model functional listening difficulties in children.

Given the overlap in developmental trajectories of auditory skills and WM abilities, the age at evaluation must be taken into account during assessment of auditory processing. The American Academy of Audiology does not recommend APD testing for children developmentally younger than age 7. Clinicians must therefore adhere to this recommendation to save time and resources for parents and children and to avoid misdiagnosis.

However, any significant listening difficulties noted in children at any age (especially at younger ages) must call for a speech-language evaluation, a peripheral hearing assessment, and cognitive assessment. This is because identification of deficits or areas of risk in language or cognitive processing triggers the consideration of cognitive-language enrichment opportunities for the children. Early enrichment of overall language knowledge and processing abilities (e.g., phonological/speech sound awareness, vocabulary) has the potential to improve children's functional communication abilities, especially when listening in complex auditory environments. 

Given the prominence of children's difficulty listening in complex auditory environments and emerging evidence suggesting a distinction of speech perception in noise and spatialized listening from other auditory and cognitive factors, listening training in spatialized noise appears to hold promise in terms of intervention. This needs to be systematically replicated across independent research studies. 

Other evidence-based implications discussed in our publication include improving auditory access using assistive listening devices (e.g., FM systems), using a hierarchical assessment model, or employing a multidisciplinary front-end screening of sensitive areas (with minimized overlap across audition, language, memory, and attention) prior to detailed assessments in needed areas.

Finally, we emphasize that prevention should be at the forefront. This calls for integrating auditory enrichment with meaningful activities such as musical experience, play, social interaction, and rich language experience beginning early in infancy while optimizing attention and memory load. While these approaches are not new, current research evidence on neuroplasticity makes a compelling case to promote auditory enrichment experiences in infants and young children.


A 2015 Emerging Research Grants (ERG) scientist generously funded by the General Grand Chapter Royal Arch Masons International, Beula Magimairaj, Ph.D., is an assistant professor in the department of communication sciences and disorders at the University of Central Arkansas. Magimairaj’s related ERG research on working memory appears in the Journal of Communication Disorders, and she wrote about an earlier paper from her ERG grant in the Summer 2018 issue of Hearing Health.


Measuring Brain Signals Leads to Insights Into Mild Tinnitus

By Julia Campbell, Au.D., Ph.D.

Tinnitus, or the perception of sound where none is present, has been estimated to affect approximately 15 percent of adults. Unfortunately, there is no cure for tinnitus, nor is there an objective measure of the disorder, with professionals relying instead upon patient report.

There are several theories as to why tinnitus occurs, with one of the more prevalent hypotheses involving what is termed decreased inhibition. Neural inhibition is a normal function throughout the nervous system, and works in tandem with excitatory neural signals for accomplishing tasks ranging from motor output to the processing of sensory input. In sensory processing, such as hearing, both inhibitory and excitatory neural signals depend on external input.

For example, if an auditory signal cannot be relayed through the central auditory pathways due to cochlear damage resulting in hearing loss, both central excitation and inhibition may be reduced. This reduction in auditory-related inhibitory function may result in several changes in the central nervous system, including increased spontaneous neural firing, neural synchrony, and reorganization of cortical regions in the brain. Such changes, or plasticity, could possibly result in the perception of tinnitus, allowing signals that are normally suppressed to be perceived by the affected individual. Indeed, tinnitus has been reported in an estimated 30 percent of those with clinical hearing loss over the frequency range of 0.25 to 8 kilohertz (kHz), suggesting that cochlear damage and tinnitus may be interconnected.

However, many individuals with clinically normal hearing report tinnitus. Therefore, it is possible that in this specific population, inhibitory dysfunction may not underlie these phantom perceptions, or may arise from a different trigger other than hearing loss.

One measure of central inhibition is sensory gating. Sensory gating involves filtering out signals that are repetitive and therefore unimportant for conscious perception. This automatic process can be measured through electrical responses in the brain, termed cortical auditory evoked potentials (CAEPs). CAEPs are recorded via electroencephalography (EEG) using noninvasive sensors to record electrical activity from the brain at the level of the scalp.
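For readers curious how a CAEP emerges from ongoing EEG, here is a minimal, simplified Python sketch of stimulus-locked averaging: an epoch of EEG is cut out after each stimulus and the epochs are averaged so that activity time-locked to the sound stands out from background brain activity. The function and variable names are ours for illustration; real pipelines add filtering, artifact rejection, and baseline correction.

```python
import numpy as np

def average_caep(eeg, trigger_samples, fs, window_s=0.5):
    """Average stimulus-locked EEG epochs to estimate the CAEP.

    eeg             : 1-D array from one scalp sensor (volts)
    trigger_samples : sample indices at which each auditory stimulus began
    fs              : sampling rate in Hz
    window_s        : length of each post-stimulus epoch in seconds
    """
    n = int(window_s * fs)
    # Keep only epochs that fit entirely within the recording.
    epochs = [eeg[t:t + n] for t in trigger_samples if t + n <= len(eeg)]
    # Averaging cancels activity not locked to the stimulus, leaving the CAEP.
    return np.mean(epochs, axis=0)
```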

Cortical auditory evoked potentials (CAEPs) are recorded via electroencephalography (EEG) using noninvasive sensors to record electrical activity from the brain.

In healthy gating function, it is expected that the CAEP response to an initial auditory signal will be larger in amplitude when compared with a secondary CAEP response elicited by the same auditory signal. This illustrates the inhibition of repetitive information by the central nervous system. If inhibitory processes are dysfunctional, CAEP responses are similar in amplitude, reflecting decreased inhibition and the reduced filtering of incoming auditory information.
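As a rough illustration of how gating can be quantified, the sketch below computes a simple suppression ratio from the two CAEP amplitudes. The numbers are hypothetical, and in practice the peak amplitudes would first be extracted from the averaged EEG responses.

```python
def gating_ratio(first_amplitude, second_amplitude):
    """Ratio of the CAEP amplitude to the repeated stimulus over the amplitude
    to the initial stimulus. Values well below 1 suggest effective suppression
    (healthy gating); values near 1 suggest reduced inhibition."""
    return second_amplitude / first_amplitude

# Hypothetical examples (amplitudes in microvolts):
print(gating_ratio(4.0, 1.5))  # ~0.38: strong suppression of the repeated sound
print(gating_ratio(4.0, 3.8))  # ~0.95: little suppression, consistent with reduced gating
```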

Due to the hypothesis that atypical inhibition may play a role in tinnitus, we conducted a study to evaluate inhibitory function in adults with normal hearing, with and without mild tinnitus, using sensory gating measures. To our knowledge, sensory gating had not been used to investigate central inhibition in individuals with tinnitus. We also evaluated extended high-frequency auditory sensitivity in participants at 10, 12.5, and 16 kHz—which are frequencies not included in the usual clinical evaluation—to determine if participants with mild tinnitus showed hearing loss in these regions.

Tinnitus severity was measured subjectively using the Tinnitus Handicap Index. This score was correlated with measures of gating function to determine if tinnitus severity may be worse with decreased inhibition.

Our results, published in Audiology Research on Oct. 2, 2018, showed that gating function was impaired in adults with typical hearing and mild tinnitus, and that decreased gating was significantly correlated with tinnitus severity. In addition, those with tinnitus did not show significantly different extended high-frequency thresholds in comparison to the participants without tinnitus, but it was found that better hearing in this frequency range related to worse tinnitus severity.

This result conflicts with the theory that hearing loss may trigger tinnitus, at least in adults with typical hearing, and may indicate that these individuals possess heightened auditory awareness, although this hypothesis should be directly tested.


Overall, it appears that central inhibition is atypical in adults with typical hearing and tinnitus, and that this is not related to hearing loss as measured in clinically or non-clinically tested frequency regions. The cause of decreased inhibition in this population remains unknown, but genetic factors may play a role. We are currently investigating the use of sensory gating as an objective clinical measure of tinnitus, particularly in adults with hearing loss, as well as the networks in the brain that may underlie dysfunctional gating processes.

2016 Emerging Research Grants scientist Julia Campbell, Au.D., Ph.D., CCC-A, F-AAA, received the Les Paul Foundation Award for Tinnitus Research. She is an assistant professor in communication sciences and disorders in the Central Sensory Processes Laboratory at the University of Texas at Austin.


ERG Grantees' Advancements in OAE Hearing Tests, Speech-in-Noise Listening

By Yishane Lee and Inyong Choi, Ph.D.

Support for a Theory Explaining Otoacoustic Emissions: Fangyi Chen, Ph.D.


It’s a remarkable feature of the ear that it not only hears sound but also generates it. These sounds, called otoacoustic emissions (OAEs), were discovered in 1978. Thanks in part to ERG research on outer hair cell motility, measuring OAEs has become a common, noninvasive hearing test, especially among infants too young to respond to sound prompts.

There are two theories about how these emissions travel from the interior of the cochlea out toward its base. The traditional one is the backward traveling wave theory, in which sound emissions travel slowly as a transverse wave along the basilar membrane, which divides the cochlea into two fluid-filled cavities. In a transverse wave, the wave particles move perpendicular to the wave direction. But this theory does not explain some anomalies, leading to a second hypothesis: the fast compression wave theory, which holds that the emissions travel as a longitudinal wave via the lymph fluids around the basilar membrane. In a longitudinal wave, the wave particles move in the same direction as the wave motion.

Figuring out how the emissions are created will promote greater accuracy of the OAE hearing test and a better understanding of cochlear mechanics. Fangyi Chen, Ph.D., a 2010 Emerging Research Grants (ERG) recipient, started investigating the issue at Oregon Health & Science University and is now at China’s Southern University of Science and Technology. His team’s paper, published in the journal Neural Plasticity in July 2018, for the first time experimentally validates the backward traveling wave theory.

Chen and his coauthors—including Allyn Hubbard, Ph.D., and Alfred Nuttall, Ph.D., who are each 1989–90 ERG recipients—directly measured the basilar membrane vibration in order to determine the wave propagation mechanism of the emissions. The team stimulated the membrane at a specific location, allowing the vibration source that initiates the backward wave to be pinpointed. The resulting vibrations along the membrane were then measured at multiple locations in vivo (in guinea pigs), showing a consistent lag as distance from the vibration source increased. The researchers also measured the waves traveling at speeds on the order of tens of meters per second, much slower than a compression wave would travel in water. The results were confirmed using a computer simulation. In addition to the wave propagation study, a mathematical model of the cochlea based on an acoustic-electrical analogy was created and simulated. This was used to interpret why no peak frequency-to-place map was observed in the backward traveling wave, explaining some of the previous anomalies associated with this OAE theory.
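For readers curious about the arithmetic behind such a speed estimate, here is a small illustrative Python calculation: if the vibration lag grows linearly with distance from the source, the slope of lag versus distance is the reciprocal of the propagation speed. The distances and lags below are invented for illustration and are not the paper’s data.

```python
import numpy as np

# Hypothetical basilar-membrane measurement sites and their vibration lags.
distance_mm = np.array([0.0, 0.5, 1.0, 1.5, 2.0])       # distance from the vibration source
lag_ms = np.array([0.000, 0.019, 0.041, 0.062, 0.079])  # measured lag at each site

# Linear fit: lag = slope * distance + intercept; slope is in ms per mm.
slope_ms_per_mm, intercept = np.polyfit(distance_mm, lag_ms, 1)
speed_m_per_s = 1.0 / slope_ms_per_mm  # mm/ms is numerically equal to m/s
print(f"estimated propagation speed: about {speed_m_per_s:.0f} m/s")
```

A result on the order of tens of meters per second, as in this example, is far slower than the roughly 1,500 m/s of a compression wave in water, which is the contrast that supports the backward traveling wave interpretation.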

Speech-in-Noise Understanding Relies on How Well You Combine Information Across Multiple Frequencies: Inyong Choi, Ph.D.

Understanding speech in noisy environments is a crucial ability for communication, yet many individuals, with or without hearing loss, struggle with it. Our study in Hearing Research, published in September 2018, finds that a critical factor for good speech-in-noise understanding is how well you combine information across multiple frequencies. We tested this with a pitch-fusion task in "hybrid" cochlear implant users, who receive both low-frequency acoustic and high-frequency electric stimulation within the same ear.

In the pitch-fusion task, subjects heard either a tone consisting of many frequencies in a simple mathematical relationship or a tone with more irregular spacing between its frequencies. Subjects had to say whether the tone sounded "natural" or "unnatural" to them, given that a tone whose frequencies are in a simple mathematical relationship sounds much more natural to us. My team and I are now studying how we can improve sensitivity to this "naturalness" in listeners with hearing loss, with the aim of providing individualized therapeutic options that address difficulties in speech-in-noise understanding.
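To illustrate the kind of stimuli such a pitch-fusion task might use, the sketch below synthesizes a harmonic complex (partials at exact integer multiples of a fundamental) and an inharmonic complex (partials jittered away from those multiples). The parameters are illustrative and are not taken from the published study.

```python
import numpy as np

def complex_tone(f0, n_partials, fs=44100, dur=0.5, jitter=0.0, seed=0):
    """Sum of sinusoidal partials. With jitter=0 the partials are exact
    harmonics of f0 (tends to sound 'natural', fusing into one pitch);
    with jitter>0 each partial is displaced by up to jitter*f0, giving
    irregular spacing (tends to sound 'unnatural')."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * dur)) / fs
    tone = np.zeros_like(t)
    for k in range(1, n_partials + 1):
        freq = k * f0 + rng.uniform(-jitter, jitter) * f0
        tone += np.sin(2 * np.pi * freq * t)
    return tone / n_partials  # normalize so level does not grow with n_partials

natural_tone = complex_tone(200, 8)                # harmonics at 200, 400, ..., 1600 Hz
unnatural_tone = complex_tone(200, 8, jitter=0.3)  # partials shifted irregularly
```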

2017 ERG recipient Inyong Choi, Ph.D., is an assistant professor in the department of communication sciences and disorders at the University of Iowa in Iowa City.

We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.

 
 

Meet the Researcher: A. Catalina Vélez-Ortega

By Yishane Lee

2018 Emerging Research Grants (ERG) awardee A. Catalina Vélez-Ortega received a master’s in biology from the University of Antioquia, Colombia, and a doctorate in physiology from the University of Kentucky, where she completed postdoctoral training and is now an assistant professor in the department of physiology.


IN HER WORDS:

TRPA1 is an ion channel known for its role as an “irritant sensor” in pain-sensing neurons (nerve cells). Noise exposure leads to the production of some cellular “irritants” that activate TRPA1 channels in the inner ear. The role of TRPA1 channels has been a puzzling project, with most experiments leaving more questions to pursue. My current project seeks to uncover how TRPA1 activation modifies cochlear mechanics and hearing sensitivity, in order to find new therapeutic targets to prevent hearing loss or tinnitus.

My father, our town’s surgeon, fueled my desire to learn. When I asked him how the human heart works, he called the butcher, got a pig’s heart, and we dissected it together. I was about 5 when I learned how the heart’s chambers are connected and how valves work. He also set up an astronomy class at home with a flashlight, globe, and ball when I asked, “Why does the moon change shape?” My father’s excitement kept my curiosity from fading as I grew older. That eager-to-learn personality now drives my career in science and teaching.

My training in biomedical engineering guided my interest into hearing science. The field of inner ear research mixes physics and mechanics with molecular biology and genetics in a way I find extremely attractive. Analytics also intrigues me. People who work with me know how complex my calendar and spreadsheets can get. I absolutely love logging all kinds of data and looking for correlations. I also like to plan ahead—passport renewal 10 years from now? Already in my calendar!

I take dance lessons and participate in flash mobs and other dance performances. But I used to be extremely shy. As a child I simply could not look anyone in the eye when talking to them. I was also terrified of being onstage. It was only after college that I decided to finally correct the problem. Interestingly, taking sign language lessons was very helpful. Sign language forced me to stare at people to be able to communicate. It was terrifying at first, but it started to feel very natural after just a few months.

Vélez-Ortega’s 2018 ERG grant was generously funded by cochlear implant manufacturer Cochlear Americas.

We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.

 
 

Quantifying the Effects of a Hyperacusis Treatment

By Xiying Guan, Ph.D.

A typical inner ear has two mobile windows: the oval window and the round window (RW). The flexible, membrane-covered RW allows fluid in the cochlea to move as the oval window vibrates in response to motion of the stapes bone during sound stimulation.


Superior canal dehiscence (SCD), a pathological opening in the bony wall of the superior semicircular canal, forms a third window of the inner ear. This structural anomaly results in various auditory and vestibular symptoms. One common symptom is increased sensitivity to self-generated sounds or external vibrations, such as hearing one’s own pulse, neck and joint movement, and even eye movement. This hypersensitive hearing associated with SCD has been termed conductive hyperacusis.

Recently, surgical stiffening of the RW has emerged as a treatment for hyperacusis in patients with and without SCD. However, the postsurgical results are mixed: Some patients experience improvement while others complain of worsening symptoms and have asked to have the RW treatment reversed. Although this “experimental” surgical treatment for hyperacusis is increasingly reported, its efficacy has not been studied scientifically.

In the present study, we experimentally tested how RW reinforcement affects air-conduction sound transmission in the typical ear (that is, without an SCD). We measured the sound pressures in two cochlear fluid-filled cavities, the scala vestibuli (“Psv”) and the scala tympani (“Pst”), together with the stapes velocity in response to sound at the ear canal. We estimated hearing ability based on the “cochlear input drive” (Pdiff = Psv – Pst) before and after RW reinforcement in a human cadaveric ear.
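As a simple illustration of that formula, the sketch below compares the magnitude of the input drive before and after reinforcement at each measured frequency and expresses the change in decibels. The variable names and the dB convention are ours for illustration; the measured pressures are complex values (magnitude and phase) from intracochlear pressure sensors.

```python
import numpy as np

def input_drive_change_db(psv_pre, pst_pre, psv_post, pst_post):
    """Change in cochlear input drive, Pdiff = Psv - Pst, after RW reinforcement.

    Inputs are arrays of complex sound pressures (one value per stimulus
    frequency) in the scala vestibuli (Psv) and scala tympani (Pst), measured
    before ("pre") and after ("post") reinforcement. Returns the change in
    |Pdiff| in dB at each frequency: positive values mean a larger drive after
    reinforcement, negative values a smaller one.
    """
    pdiff_pre = np.abs(np.asarray(psv_pre) - np.asarray(pst_pre))
    pdiff_post = np.abs(np.asarray(psv_post) - np.asarray(pst_post))
    return 20 * np.log10(pdiff_post / pdiff_pre)
```

Under this convention, the study’s low-frequency finding would appear as positive values below 200 Hz (a boost in drive) and the mid-frequency finding as small negative values between 200 and 1,000 Hz.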

We found that RW reinforcement can affect the cochlear input drive in unexpected ways. At very low frequencies, below 200 Hz, it resulted in reduced stapes motion but an increase in the cochlear input drive that would be consistent with improved hearing. At 200 to 1,000 Hz, the stapes motion and the input drive were both slightly decreased. Above 1,000 Hz, stiffening the RW had no effect.


The results suggest that RW reinforcement has the potential to worsen low-frequency hyperacusis while causing some hearing loss in the mid-frequencies. Although this preliminary study shows that the RW treatment does not have much effect on air-conduction hearing, the effect on bone-conduction hearing is unknown and is one of our future areas for experimentation.

A 2017 ERG scientist funded by Hyperacusis Research Ltd., Xiying Guan, Ph.D., is a postdoctoral fellow at Massachusetts Eye and Ear, Harvard Medical School, in Boston.

We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.

 
 

Introducing the 2018 Emerging Research Grantees

By Lauren McGrath


Hearing Health Foundation (HHF) is pleased to present our Emerging Research Grants (ERG) awardees for the 2018 project cycle.

Grantee Tenzin Ngodup, Ph.D., will investigate neuronal activity in the ventral cochlear nucleus to help prevent and treat tinnitus.

Fifteen individuals at various institutions nationwide—including Johns Hopkins School of Medicine, the University of Minnesota, and the National Cancer Institute—will conduct innovative research in the following topic areas:

  • Central Auditory Processing Disorder (CAPD)
  • General Hearing Health
  • Hearing Loss in Children
  • Hyperacusis
  • Tinnitus
  • Usher Syndrome

Our grantees’ research investigations seek to solve specific auditory and vestibular problems such as declines in complex sound processing in age-related hearing loss (presbycusis), ototoxicity caused by the life-saving chemotherapy drug cisplatin, and noise-induced hearing loss.

HHF looks forward to the advancements that will come about from these promising scientific endeavors. The foundation owes many thanks to the General Grand Chapter Royal Arch Masons International, Cochlear, Hyperacusis Research, the Les Paul Foundation, and several generous, anonymous donors who have collectively empowered this important work.

We are currently planning for our 2019 ERG grant cycle, for which applications will open September 1. Learn more about the application process.

WE NEED YOUR HELP IN FUNDING THE EXCITING WORK OF HEARING AND BALANCE SCIENTISTS. DONATE TODAY TO HEARING HEALTH FOUNDATION AND SUPPORT GROUNDBREAKING RESEARCH: HHF.ORG/DONATE.

Grantee Rachael R. Baiduc, Ph.D., will identify cardiovascular disease risk factors that may contribute to hearing loss.


In Memoriam: David J. Lim, M.D.

By Nadine Dehgan


We recognize with profound sadness the recent passing of David J. Lim, M.D., who was pivotal to the establishment of Hearing Health Foundation (HHF) and remained committed to our research throughout his life.

As a member of our Council of Scientific Trustees (CST)—the governing body of HHF’s Emerging Research Grants (ERG) program—and as a Centurion donor, Lim worked tirelessly to ensure the most promising auditory and vestibular science was championed.

Prior to his appointment to the CST, “Lim contributed to our understanding of the mechanics of hearing through his excellent scanning electron micrographs of the inner ear,” says Elizabeth Keithley, Ph.D., Chair of the Board of HHF. Lim pursued this critical work in 1970 through his first of many ERG grants.

“Lim was also one of the founding members of the Association for Research in Otolaryngology (ARO) and served as the historian of this esteemed scientific organization,” says Judy Dubno, Ph.D., of HHF’s Board of Directors. “Along with HHF, he cared deeply about ARO and will be missed by many.”

Most recently, Lim was a surgeon-scientist and a director of the UCLA Pathogenesis of Ear Diseases Laboratory, where he was considered an authority on temporal bone histopathology, morphology and cell biology of the ear, and the innate immunity of the middle and inner ear.

We, the HHF community, are grateful to have known and to have benefited from Lim’s wisdom, good humor, and kind spirit. HHF will honor his legacy by continuing our mission, knowing we are indebted to his leadership and dedication to advancements in hearing health.
