Research

Detailing the Relationships Between Auditory Processing and Cognitive-Linguistic Abilities in Children

Our framework includes cognitive and linguistic factors, alongside auditory factors, as potential sources of deficits that may contribute, individually or in combination, to listening difficulties in children.


Measuring Brain Signals Leads to Insights Into Mild Tinnitus

By Julia Campbell, Au.D., Ph.D.

Tinnitus, or the perception of sound where none is present, has been estimated to affect approximately 15 percent of adults. Unfortunately, there is no cure for tinnitus, nor is there an objective measure of the disorder, with professionals relying instead upon patient report.

There are several theories as to why tinnitus occurs, with one of the more prevalent hypotheses involving what is termed decreased inhibition. Neural inhibition is a normal function throughout the nervous system, and works in tandem with excitatory neural signals for accomplishing tasks ranging from motor output to the processing of sensory input. In sensory processing, such as hearing, both inhibitory and excitatory neural signals depend on external input.

For example, if an auditory signal cannot be relayed through the central auditory pathways due to cochlear damage resulting in hearing loss, both central excitation and inhibition may be reduced. This reduction in auditory-related inhibitory function may result in several changes in the central nervous system, including increased spontaneous neural firing, neural synchrony, and reorganization of cortical regions in the brain. Such changes, or plasticity, could possibly result in the perception of tinnitus, allowing signals that are normally suppressed to be perceived by the affected individual. Indeed, tinnitus has been reported in an estimated 30 percent of those with clinical hearing loss over the frequency range of 0.25 to 8 kilohertz (kHz), suggesting that cochlear damage and tinnitus may be interconnected.

However, many individuals with clinically normal hearing report tinnitus. Therefore, it is possible that in this specific population, inhibitory dysfunction may not underlie these phantom perceptions, or may arise from a trigger other than hearing loss.

One measure of central inhibition is sensory gating. Sensory gating involves filtering out signals that are repetitive and therefore unimportant for conscious perception. This automatic process can be measured through electrical responses in the brain, termed cortical auditory evoked potentials (CAEPs). CAEPs are recorded via electroencephalography (EEG) using noninvasive sensors to record electrical activity from the brain at the level of the scalp.

Cortical auditory evoked potentials (CAEPs) are recorded via electroencephalography (EEG) using noninvasive sensors to record electrical activity from the brain.


In healthy gating function, it is expected that the CAEP response to an initial auditory signal will be larger in amplitude when compared with a secondary CAEP response elicited by the same auditory signal. This illustrates the inhibition of repetitive information by the central nervous system. If inhibitory processes are dysfunctional, CAEP responses are similar in amplitude, reflecting decreased inhibition and the reduced filtering of incoming auditory information.
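In practice, gating is often summarized as the ratio of the second response amplitude to the first. A minimal sketch, for illustration only (the function name and amplitude values are hypothetical, not taken from the study):

```python
def gating_ratio(p1_uv: float, p2_uv: float) -> float:
    """Ratio of the repeated-stimulus CAEP amplitude to the initial one.

    Values well below 1.0 reflect suppression of the repeated signal
    (intact gating); values near 1.0 reflect reduced inhibition.
    """
    return p2_uv / p1_uv

# Hypothetical peak amplitudes in microvolts.
print(gating_ratio(p1_uv=4.0, p2_uv=2.0))   # strong suppression
print(gating_ratio(p1_uv=4.0, p2_uv=3.8))   # little suppression
```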

Due to the hypothesis that atypical inhibition may play a role in tinnitus, we conducted a study to evaluate inhibitory function in adults with normal hearing, with and without mild tinnitus, using sensory gating measures. To our knowledge, sensory gating had not been used to investigate central inhibition in individuals with tinnitus. We also evaluated extended high-frequency auditory sensitivity in participants at 10, 12.5, and 16 kHz—which are frequencies not included in the usual clinical evaluation—to determine if participants with mild tinnitus showed hearing loss in these regions.

Tinnitus severity was measured subjectively using the Tinnitus Handicap Inventory. This score was correlated with measures of gating function to determine whether greater tinnitus severity was associated with decreased inhibition.

Our results, published in Audiology Research on Oct. 2, 2018, showed that gating function was impaired in adults with typical hearing and mild tinnitus, and that decreased gating was significantly correlated with tinnitus severity. In addition, those with tinnitus did not show significantly different extended high-frequency thresholds in comparison to the participants without tinnitus, but it was found that better hearing in this frequency range related to worse tinnitus severity.

This result conflicts with the theory that hearing loss may trigger tinnitus, at least in adults with typical hearing, and may indicate that these individuals possess heightened auditory awareness, although this hypothesis should be directly tested.


Overall, it appears that central inhibition is atypical in adults with typical hearing and tinnitus, and that this is not related to hearing loss as measured in clinically or non-clinically tested frequency regions. The cause of decreased inhibition in this population remains unknown, but genetic factors may play a role. We are currently investigating the use of sensory gating as an objective clinical measure of tinnitus, particularly in adults with hearing loss, as well as the networks in the brain that may underlie dysfunctional gating processes.

2016 Emerging Research Grants scientist Julia Campbell, Au.D., Ph.D., CCC-A, F-AAA, received the Les Paul Foundation Award for Tinnitus Research. She is an assistant professor in communication sciences and disorders in the Central Sensory Processes Laboratory at the University of Texas at Austin.

 

You can empower work toward better treatments and cures for hearing loss and tinnitus. If you are able, please make a contribution today.


Accomplishments by ERG Alumni

Gail Ishiyama, M.D., a clinician-scientist who is a neurology associate professor at UCLA’s David Geffen School of Medicine, has been investigating balance disorders for nearly two decades and recently coauthored two studies on the topic.


The Hearing Restoration Project: Update on the Seattle Plan and More

By Peter G. Barr-Gillespie, Ph.D.

Hearing Health Foundation launched the Hearing Restoration Project (HRP) to understand how to regenerate inner ear sensory cells in humans to restore hearing. These sensory hair cells detect sound waves and turn them into electrical impulses that are sent to the brain for decoding. Once hair cells are damaged or die, hearing is impaired, but in most species, such as birds and fish, hair cells spontaneously regrow and hearing is restored.

The overarching principle of the HRP consortium is cross-discipline collaboration: open sharing of data and ideas. By having almost immediate access to one another’s data, HRP scientists are able to perform follow-up experiments much faster, rather than having to wait years until data is published.

Regenerated hair cells from chicken auditory organs, with the cell body, nucleus and hair bundle labeled with various colored markers. Image courtesy of Jennifer Stone, Ph.D.


You may remember that two years ago, we changed how we develop our projects. We decided together on a group of four projects—the “Seattle Plan”—that are the most fundamental to the consortium’s progress. These projects, which grew out of previous HRP projects, have now been funded for two years, and considerable progress has been made. We have also funded several other projects that have bubbled up out of new observations and capabilities, and they have added considerably to our knowledge base. With this in mind, I am pleased to share with you the latest updates for our 2018–19 projects.

SEATTLE PLAN PROJECTS

Transcriptome changes in single chick cells
Stefan Heller, Ph.D.

  • Found that all “tall” hair cells are exclusively regenerated mitotically in this animal model.

  • Compiled evidence for different supporting cell subtypes.

  • Obtained good-quality single-cell RNA sequencing (scRNA-seq) data and are developing an analysis strategy for the baseline cell types (control group). Identified about 50 novel marker genes for hair cells, supporting cells, and homogene cells, including subgroups.

  • Developed a strategy to finish all scRNA-seq using a novel peeling technique and latest generation library construction methods.

  • Established two methods for spatial and temporal mRNA expression validation: PLISH (proximity ligation in situ hybridization), a multi-color in situ hybridization technique, and SGA (sequential genomic analysis).

Epigenetics of the mouse inner ear
Michael Lovett, Ph.D., David Raible, Ph.D., Neil Segil, Ph.D., Jennifer Stone, Ph.D.

  • Completed epigenetic, chromatin structure, and RNA-seq datasets for FACS-purified cochlear hair cells and supporting cells from postnatal day 1 and postnatal day 6 mice, and provided these datasets to the gEAR (gene Expression Analysis Resource) portal for mounting on its webpage through EpiViz, for access by the HRP consortium.

  • Established a webpage (EarCode) so that HRP consortium members can access the current data directly through a University of California, Santa Cruz, genome browser.

  • Discovered that maintenance of the transcriptionally silent state of the hair cell gene regulatory network in perinatal supporting cells depends on a combination of H3K27me3 and active H3K27 deacetylation, and that during transdifferentiation, these epigenetic marks are modified to an active state.



Mouse functional testing
John Brigande, Ph.D.

  • Defined in vitro and in vivo model systems to interrogate genome editing efficacy using CRISPR/Cas9.

Implementing the gEAR for data sharing within the HRP
Ronna Hertzano, M.D., Ph.D.

  • Added an scRNA-seq workbench for easy sharing and viewing of scRNA-seq data. Such data, which are now driving the field forward, have been particularly difficult to share.

  • Created additional public datasets to improve data sharing.

  • Completely rewrote the gEAR backbone to use the latest technologies, allowing the portal to handle a much larger number of datasets and users.

  • Performed hands-on gEAR workshops at the Association for Research in Otolaryngology and the Gordon Research Conference, increasing the number of users with accounts to greater than 300.


Single Cell RNA-seq of homeostatic neuromasts
Tatjana Piotrowski, Ph.D.

  • Optimized protocols for fluorescence-activated cell sorting and scRNA-seq; obtained high-quality scRNA-seq transcriptome results from 1,400 neuromast cells; clustered all cells into seven groups; and performed analyses to align the cells along developmental time, providing a temporal readout of gene expression during hair cell development.

OTHER PROJECTS

Integrated systems biology of hearing restoration
Seth Ament, Ph.D.

  • Discovered 29 novel risk loci for age-related hearing difficulty through new analyses of genome-wide association studies of multiple hearing-related traits in the U.K. Biobank (comprising 330,000 people), and predicted the causal genes and variants at these loci through integration with transcriptomics and epigenomics data from HRP consortium members.

  • Generated scRNA-seq of 9,472 cells in the neonatal mouse cochlea and utricle (postnatal days 2 and 7).

  • Conducted systems biology analyses that integrate multiple HRP datasets to characterize gene regulatory networks and predict driver genes associated with the development and regeneration of hair cells. These analyses utilize scRNA-seq of sensory epithelial cells in mouse, chicken, and zebrafish hearing and vestibular organs, as well as epigenomic data (ATAC-seq) from hair cells, support cells, and non-epithelial cells in the mouse cochlea.


Comparison of three reprogramming cocktails
Andy Groves, Ph.D.

  • Created and validated transgenic mouse lines expressing three different combinations of reprogramming transcription factors.

  • Demonstrated these lines can produce new hair cell–like cells in the undamaged and damaged cochlea of the immature mouse.

  • Compiled preliminary data showing Atoh1 and Gfi1 genes can create ectopic hair cells in the adult mouse cochlea.


Signaling molecules controlling avian auditory hair cell regeneration
Jennifer Stone, Ph.D.

  • Identified four molecular pathways (FGF, BMP, VEGF, and Wnt) that control hair cell regeneration in the bird auditory organ. These pathways were identified in Phase I (gene discovery) as being transcriptionally dynamic in birds, fish, and mice during regeneration, which indicated they may be universal regulators of hair cell regeneration.

  • Determined that the Notch signaling pathway (a powerful inhibitor of stem cells) also blocks supporting cell division in the chicken auditory organ after damage. This discovery shows that Notch is a negative regulator of regeneration, conserved in birds, fish, and mice.

  • Identified signaling molecules in birds that are correlated with either mitotic or non-mitotic modes of hair cell regeneration, and are now exploring how these signaling molecules interact to determine which mode of regeneration occurs. Since mammals only exhibit non-mitotic regeneration, we are particularly interested in determining how this mode is controlled.

UP NEXT

We look forward to our annual meeting, which will be held in Seattle in November. There we will discuss and integrate these data to develop our plans for our 2019–20 projects.


As always we are very grateful for the donations we receive to fund this groundbreaking research to find better treatments for hearing loss and related conditions. Every dollar counts, and we sincerely thank our supporters.

HRP scientific director Peter G. Barr-Gillespie, Ph.D., is a professor of otolaryngology at the Oregon Hearing Research Center, a senior scientist at the Vollum Institute, and the interim senior vice president for research, all at Oregon Health & Science University. For more, see hhf.org/hrp.

 

Empower the life-changing research of the Hearing Restoration Project and other scientists today.


ERG Grantees' Advancements in OAE Hearing Tests, Speech-in-Noise Listening

By Yishane Lee and Inyong Choi, Ph.D.

Support for a Theory Explaining Otoacoustic Emissions: Fangyi Chen, Ph.D.


It’s a remarkable feature of the ear that it not only hears sound but also generates it. These sounds, called otoacoustic emissions (OAEs), were discovered in 1978. Thanks in part to ERG research in outer hair cell motility, measuring OAEs has become a common, noninvasive hearing test, especially among infants too young to respond to sound prompts.

There are two theories about how the ear produces its own sound, which emanates from the interior of the cochlea out toward its base. The traditional one is the backward traveling wave theory, in which sound emissions travel slowly as a transverse wave along the basilar membrane, which divides the cochlea into two fluid-filled cavities. In a transverse wave, the wave particles move perpendicular to the wave direction. But this theory does not explain some anomalies, leading to a second hypothesis: The fast compression wave theory holds that the emissions travel as a longitudinal wave via lymph fluids around the basilar membrane. In a longitudinal wave, the wave particles travel in the same direction as the wave motion.

Figuring out how the emissions are created will promote greater accuracy of the OAE hearing test and a better understanding of cochlear mechanics. Fangyi Chen, Ph.D., a 2010 Emerging Research Grants (ERG) recipient, started investigating the issue at Oregon Health & Science University and is now at China’s Southern University of Science and Technology. His team’s paper, published in the journal Neural Plasticity in July 2018, for the first time experimentally validates the backward traveling wave theory.

Chen and his coauthors—including Allyn Hubbard, Ph.D., and Alfred Nuttall, Ph.D., who are each 1989–90 ERG recipients—directly measured the basilar membrane vibration in order to determine the wave propagation mechanism of the emissions. The team stimulated the membrane at a specific location, allowing for the vibration source that initiates the backward wave to be pinpointed. Then the resulting vibrations along the membrane were measured at multiple locations in vivo (in guinea pigs), showing a consistent lag as distance increased from the vibration source. The researchers also measured the waves at speeds in the order of tens of meters per second, much slower than would be the speed of a compression wave in water. The results were confirmed using a computer simulation. In addition to the wave propagation study, a mathematical model of the cochlea based on an acoustic electrical analogy was created and simulated. This was used to interpret why no peak frequency-to-place map was observed in the backward traveling wave, explaining some of the previous anomalies associated with this OAE theory.
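The speed estimate described above amounts to fitting the measured lags against distance from the vibration source. A schematic sketch with invented numbers (not the paper's data), assuming a perfectly linear lag:

```python
import numpy as np

# Hypothetical basilar-membrane measurement points: distance from the
# vibration source (mm) and the response lag observed there (ms).
distance_mm = np.array([0.5, 1.0, 1.5, 2.0])
lag_ms = np.array([0.020, 0.045, 0.070, 0.095])

# The slope of a linear fit of distance against lag is the propagation
# speed; mm/ms is numerically equal to m/s.
speed_m_per_s, _ = np.polyfit(lag_ms, distance_mm, 1)
print(f"estimated wave speed: {speed_m_per_s:.0f} m/s")  # tens of m/s
```

A speed in the tens of m/s is consistent with a slow traveling wave; a compression wave in water would move closer to 1,500 m/s.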

Speech-in-Noise Understanding Relies on How Well You Combine Information Across Multiple Frequencies: Inyong Choi, Ph.D.

Understanding speech in noisy environments is a crucial ability for communications, although many individuals with or without hearing loss suffer from dysfunctions in that ability. Our study in Hearing Research, published in September 2018, finds that how well you combine information across multiple frequencies, tested by a pitch-fusion task in "hybrid" cochlear implant users who receive both low-frequency acoustic and high-frequency electric stimulation within the same ear, is a critical factor for good speech-in-noise understanding.

In the pitch-fusion task, subjects heard either a tone consisting of many frequencies in a simple mathematical relationship or a tone with more irregular spacing between frequencies. Subjects had to say whether the tone sounded "natural" or "unnatural" to them; a tone consisting of frequencies in a simple mathematical relationship sounds much more natural to us. My team and I are now studying how we can improve sensitivity to this "naturalness" in listeners with hearing loss, with the aim of providing individualized therapeutic options to address difficulties in speech-in-noise understanding.
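As a rough illustration of the two stimulus classes, the sketch below builds a harmonic complex (frequencies at integer multiples of a fundamental) and an inharmonic one with irregular spacing. The sampling rate, fundamental, and jitter values are invented for the example and are not the study's parameters:

```python
import numpy as np

FS = 16_000  # sampling rate, Hz

def complex_tone(freqs_hz, dur_s=0.5, fs=FS):
    """Equal-amplitude sum of sinusoids at the given frequencies."""
    t = np.arange(int(dur_s * fs)) / fs
    return sum(np.sin(2 * np.pi * f * t) for f in freqs_hz) / len(freqs_hz)

f0 = 200.0
harmonic = complex_tone([f0 * k for k in range(1, 6)])  # 200..1000 Hz
inharmonic = complex_tone([f0 * k for k in (1.0, 2.13, 2.92, 4.08, 5.11)])
```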

2017 ERG recipient Inyong Choi, Ph.D., is an assistant professor in the department of communication sciences and disorders at the University of Iowa in Iowa City.


We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.


Meet the Researcher: A. Catalina Vélez-Ortega

By Yishane Lee

2018 Emerging Research Grants (ERG) awardee A. Catalina Vélez-Ortega received a master’s in biology from the University of Antioquia, Colombia, and a doctorate in physiology from the University of Kentucky, where she completed postdoctoral training and is now an assistant professor in the department of physiology.


IN HER WORDS:

TRPA1 is an ion channel known for its role as an “irritant sensor” in pain-sensing neurons (nerve cells). Noise exposure leads to the production of some cellular “irritants” that activate TRPA1 channels in the inner ear. The role of TRPA1 channels has been a puzzling project, with most experiments leaving more questions to pursue. My current project seeks to uncover how TRPA1 activation modifies cochlear mechanics and hearing sensitivity, in order to find new therapeutic targets to prevent hearing loss or tinnitus.

My father, our town’s surgeon, fueled my desire to learn. When I asked him how the human heart works, he called the butcher, got a pig’s heart, and we dissected it together. I was about 5 when I learned how the heart’s chambers are connected and how valves work. He also set up an astronomy class at home with a flashlight, globe, and ball when I asked, “Why does the moon change shape?” My father’s excitement kept my curiosity from fading as I grew older. That eager-to-learn personality now drives my career in science and teaching.

My training in biomedical engineering guided my interest toward hearing science. The field of inner ear research mixes physics and mechanics with molecular biology and genetics in a way I find extremely attractive. Analytics also intrigues me. People who work with me know how complex my calendar and spreadsheets can get. I absolutely love logging all kinds of data and looking for correlations. I also like to plan ahead—passport renewal 10 years from now? Already in my calendar!

I take dance lessons and participate in flash mobs and other dance performances. But I used to be extremely shy. As a child I simply could not look anyone in the eye when talking to them. I was also terrified of being onstage. It was only after college that I decided to finally correct the problem. Interestingly, taking sign language lessons was very helpful. Sign language forced me to stare at people to be able to communicate. It was terrifying at first, but it started to feel very natural after just a few months.

Vélez-Ortega’s 2018 ERG grant was generously funded by cochlear implant manufacturer Cochlear Americas.


We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.


HHF 2019 Grant Applications Open

By Lauren McGrath

We are excited to inform you that the applications for Hearing Health Foundation (HHF)'s 2019 Emerging Research Grants (ERG) and Ménière's Disease Grants (MDG) programs are officially open as of September 1.


HHF's ERG grants provide seed money to stimulate data collection that leads to a continuing, independently fundable line of research. According to a 2017 analysis, every $1 of funding that HHF awards to ERG grantees is matched by the NIH with $91.

ERG grant funding shall not exceed $30,000 for the one-year project period, and only research proposals in the following topics will be considered for the 2019 ERG cycle: General Hearing Health (GHH)*, [Central] Auditory Processing Disorders, Hearing Loss in Children, Hyperacusis, Ménière’s Disease, Ototoxic Medications, Tinnitus, and Usher Syndrome.

More Information About ERG
Begin Your ERG Application

The highly competitive Ménière’s Disease Grants (MDG) program funds scientists to better our understanding of this complicated condition, with an eye toward better treatments and cures for those who suffer from Ménière’s disease.

MDG grant funding shall not exceed $125,000 for the two-year project period. Areas of interest for the 2019 MDG Cycle include: the mechanisms of endolymphatic hydrops; genetics of Ménière’s disease; development and validation of biomarkers, including imaging and/or electrophysiologic and behavioral measures for its diagnosis and measurement of therapeutic effectiveness; animal models of Ménière’s disease; and the development of novel therapeutics.

More Information About MDG
Begin Your MDG Application

Applications for both ERG and MDG will close Tuesday, January 15.

If you have any questions about the grant program and processes, contact us at grants@hhf.org.  
Please forward and share this information with your colleagues who may be interested.


Understanding Individual Variances in Hearing Aid Outcomes in Quiet and Noisy Environments

By Elizabeth Crofts

Evelyn Davies-Venn, Au.D., Ph.D.


More than 460 million people worldwide live with some form of hearing loss. For most, hearing aids are the primary rehabilitation tool, yet there is no one-size-fits-all approach. As a result, many hearing aid users are frustrated by their listening experiences, especially understanding speech in noise.

Evelyn Davies-Venn, Au.D., Ph.D., of the University of Minnesota is currently focusing on two projects, one of which is funded by Hearing Health Foundation (HHF) through its Emerging Research Grants (ERG) program, that will enhance the customization of hearing aids. She presented the two projects at the Hearing Loss Association of America (HLAA) convention in June.

Davies-Venn explains that the factors dictating individual variance in hearing aid listening outcomes in noisy environments include audibility, spectral resolution, and cognitive ability. Audibility, or how much of the speech spectrum is available to the hearing aid user, is the biggest factor. “Speech must be audible before it is intelligible,” Davies-Venn says. Another primary factor is spectral resolution, the ear’s ability to make use of spectral, or frequency, changes in sounds; this also directly affects listening outcomes.

Secondary factors include the user’s working memory and the volume of the amplified speech. These impact how well someone can handle making sense of distortions (from ambient noise as well as from signal processing) in an incoming speech signal. Working memory is needed to provide context in the event of missing speech fragments, for instance. Needless to say, it is a challenge for conventional hearing aid technology to address all of these complex variables.

Davies-Venn highlights two emerging projects that take an innovative approach to resolving this challenge. The first, which aims to improve hearing aid success, focuses on an emerging technology called the “cognitive control of a hearing aid,” or COCOHA: an improved hearing aid that will analyze multiple sounds, complete an acoustic scene analysis, and separate the sounds into individual streams, she says.

Then, based on the cognitive/electrophysiological recordings from the individual, the COCOHA will select the specific stream that the person is interested in listening to and amplify it—such as a particular speaker’s voice. The cognitive recording is captured with a noninvasive, far-field measure of electrical signals emitted from the brain in response to sound stimuli (similar to how an electroencephalogram, EEG, captures signals).

Davies-Venn’s ERG grant from HHF will support research on the use of electrophysiology, far-field or distant (i.e. recorded at the scalp) electrical signals from the brain, to design hearing aid algorithms that can control individual variances due to level-induced (i.e. high intensity) distortions from hearing aids.

The other project involves sensory substitution. This project explores the conversion of speech to another sense—for example, touch—through a mobile processing device or a “skin hearing aid.” For the device to function, a vibration is relayed to the brain for speech understanding. This technology seems cutting edge, but is believed to have been invented in the 1960s by Paul Bach-y-Rita, M.D., of the Smith-Kettlewell Institute of Visual Sciences in San Francisco. Even though it has not yet been incorporated into hearing aid technology intended for mass production, David Eagleman, Ph.D., of Stanford University and others are hoping to make this a reality.

Davies-Venn’s research motives are inspired by a personal connection to her work. “I have a conductive hearing loss myself,” she says. “I had persistent/chronic ear infections as a child that left me a bit delayed in developing speech, and still get ear infections as an adult and have grown accustomed to the low-frequency hearing loss that results until they resolve.” She also has family members with hearing loss and understands the importance of developing more advanced hearing assistance technology.

The projects are in the early stages, and it may take as long as a decade for them to go from concept to market. “The goal is to develop individualized hearing aid signal processing to improve treatment outcomes in noisy soundscapes,” Davies-Venn says. “We want to say, this is the most optimal treatment protocol, and it’s different from this person’s, even though you have the same hearing threshold.” Solving hearing aid variances in a precise, individual manner that accounts for variables such as age and cognitive ability will improve communication and quality of life for the millions with hearing loss who use hearing technology.


We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.


Quantifying the Effects of a Hyperacusis Treatment

By Xiying Guan, Ph.D.

A typical inner ear has two mobile windows: the oval and round window (RW). The flexible, membrane-covered RW allows fluid in the cochlea to move as the oval window vibrates in response to movement from the stapes bone during sound stimulation.


Superior canal dehiscence (SCD), a pathological opening in the bony wall of the superior semicircular canal, forms a third window of the inner ear. This structural anomaly results in various auditory and vestibular symptoms. One common symptom is increased sensitivity to self-generated sounds or external vibrations, such as hearing one’s own pulse, neck and joint movement, and even eye movement. This hypersensitive hearing associated with SCD has been termed conductive hyperacusis.

Recently, surgical stiffening of the RW has emerged as a treatment for hyperacusis in patients with and without SCD. However, the postsurgical results are mixed: Some patients experience improvement, while others complain of worsening symptoms and have asked to have the RW treatment reversed. Although this “experimental” surgical treatment for hyperacusis is increasingly reported, its efficacy has not been studied scientifically.

In the present study, we experimentally tested how RW reinforcement affects air-conduction sound transmission in the typical ear (that is, without a SCD). We measured the sound pressures in two cochlear fluid-filled cavities—the scala vestibuli (assigned the value “Psv”) and the scala tympani (“Pst”)—together with the stapes velocity in response to sound at the ear canal. We estimated hearing ability based on a formula for the “cochlear input drive” (Pdiff = Psv – Pst) before and after RW reinforcement in a human cadaveric ear.

We found that RW reinforcement can affect the cochlear input drive in unexpected ways. At very low frequencies, below 200 Hz, it reduced stapes motion but increased the cochlear input drive, a change that would be consistent with improved hearing. At 200 to 1,000 Hz, the stapes motion and input drive were both slightly decreased. Above 1,000 Hz, stiffening the RW had no effect.
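The before-and-after comparison can be expressed as a change in the cochlear input drive on a decibel scale. A simplified sketch (real intracochlear measurements are complex-valued, with phase; the pressure magnitudes below are hypothetical):

```python
import numpy as np

def input_drive_db(psv_pa: float, pst_pa: float, ref_pa: float = 20e-6) -> float:
    """Cochlear input drive Pdiff = Psv - Pst, expressed in dB re 20 uPa."""
    return 20 * np.log10(abs(psv_pa - pst_pa) / ref_pa)

# Hypothetical low-frequency pressure magnitudes (Pa), before and after
# round-window reinforcement.
before = input_drive_db(psv_pa=0.10, pst_pa=0.04)  # Pdiff = 0.06 Pa
after = input_drive_db(psv_pa=0.12, pst_pa=0.03)   # Pdiff = 0.09 Pa
print(f"change in input drive: {after - before:+.1f} dB")
```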


The results suggest that RW reinforcement has the potential to worsen low-frequency hyperacusis while causing some hearing loss in the mid-frequencies. Although this preliminary study shows that the RW treatment does not have much effect on air-conduction hearing, the effect on bone-conduction hearing is unknown and is one of our future areas for experimentation.

A 2017 ERG scientist funded by Hyperacusis Research Ltd., Xiying Guan, Ph.D., is a postdoctoral fellow at Massachusetts Eye and Ear, Harvard Medical School, in Boston.


We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.


Introducing the 2018 Emerging Research Grantees

Our grantees’ research investigations seek to solve specific auditory and vestibular problems such as declines in complex sound processing in age-related hearing loss (presbycusis), ototoxicity caused by the life-saving chemotherapy drug cisplatin, and noise-induced hearing loss.
