Research

The People Behind the Science

By Yishane Lee

Eight years ago we introduced a column called “Meet the Researcher.” Placed on the last page of the magazine (prime editorial real estate!), the MTR column was designed as a way to give our Emerging Research Grants (ERG) scientists a place to talk about their ERG project in more detail—and in lay terms for our readers—including its genesis, planned execution, and future goals.

Credit: Jane G. Photography

“Meet the Researcher” also gives us a glimpse of the person behind the science, with the researchers sharing how they became interested in their field and whether they have any personal connection to hearing conditions. Perhaps not surprisingly, many researchers become interested in hearing and balance science as a result of their own experience with hearing loss. For instance, 2010 ERG scientist Judith Kempfle, M.D., told us she received an artificial eardrum at age 13, after the many ear infections she and her brother experienced as kids growing up in Germany. With her ERG funded by the Royal Arch Masons General Grand Chapter International, Kempfle has gone on to work on many papers with Hearing Restoration Project member Albert Edge, Ph.D. (including a recent one about the effort to deliver drugs directly to the inner ear).

Ed Bartlett, Ph.D., Purdue University

Also a Royal Arch Masons grantee, 2011 ERG scientist Ed Bartlett, Ph.D., who published research on the lasting effects of blast shock waves on auditory processing, remembers asking his teacher whether we actually hear our thoughts or if it is something else. “So, I guess I was destined for auditory neuroscience,” says Bartlett, who also earned ERG funding in 2003, 2004, and 2009.

2011 and 2012 ERG scientist Regie Santos-Cortez, M.D., Ph.D., who earned the Collette Ramsey Baker Award named after HHF’s founder, spoke about the challenges of getting access to genetic information for her study that eventually pinpointed a gene mutation linked to a predisposition for ear infections. 2012 ERG scientist Bradley J. Walters, Ph.D., says he started out studying evolutionary biology, switched to studying regenerating damaged brain tissue, and then switched to hearing research. “I realized a lot of the ideas I had been working on in the brain could be applied to the ear,” he says. A 2017 paper he coauthored described successfully using gene therapy to regenerate hair cells in adult mice.

Alan Kan, Ph.D., University of Wisconsin, Madison

For 2013 ERG scientist Alan Kan, Ph.D., a Royal Arch Masons grantee, an early love of logic puzzles led to the study of audio engineering and a 2018 paper on how to improve speech understanding among people who use bilateral cochlear implants. Fellow 2013 Royal Arch Masons recipient Ross Maddox, Ph.D., remembers varying how he cupped his hands over his ears to get different sounds, leading to an interest in auditory processes and, eventually, research on how auditory and visual input is synthesized to understand sound.

After 26 years as a clinical audiologist, Royal Arch Masons 2014 ERG scientist Samira Anderson, Ph.D., switched to research. “Part of my motivation came from working with patients who struggled with their hearing aids,” she says. “I was frustrated that I was unable to predict who would benefit from hearing aids based on the results of audiological evaluations.” She produced three papers on the topic, bringing us closer to improving fit for and increasing the use of hearing aids.

Likewise, fellow Royal Arch Masons grantee Srikanta Mishra, Ph.D., produced two papers, one in 2017 and one in 2018, on children’s hearing that stemmed from his 2014 ERG grant—work that also led to a prestigious National Institute on Deafness and Other Communication Disorders grant. And we liked the backstory for 2014 ERG scientist Brad Buran, Ph.D., so much that we put him on the cover of our magazine. Buran, who wears cochlear implants, teaches his colleagues Cued Speech during happy hour. “In an environment where it’s hard to hear,” he says, “within an hour they have all the information they need to use Cued Speech,” which uses visual representations of phonemes.

Beula Magimairaj, Ph.D., University of Central Arkansas

In 2015, we expanded our coverage of ERG recipients, so that every grantee is profiled in a “Meet the Researcher” column, all available online. Three papers resulted from the Royal Arch Masons grant received by 2015 ERG scientist Beula Magimairaj, Ph.D., and her research into children’s speech perception in noise and auditory processing (the third paper is in press). Funded by Hyperacusis Research Ltd., 2015 ERG scientist Kelly Radziwon, Ph.D., has managed to create a reliable animal model for loudness hyperacusis (essentially, inducing loudness intolerance in a rat and making sure it reacts to gradually increasing sound intensities) as well as to find a potential link between neuroinflammation and hyperacusis. 2015 and 2016 ERG scientist Wafaa Kaf, Ph.D.—who has 18 other family members (and counting!) who work in science—has been investigating Ménière’s disease, publishing on improving its diagnosis, since it can be mistaken for other conditions, and on the use of electrocochleography (ECochG) for diagnosing and monitoring the hearing and balance disorder.

Elizabeth McCullagh, Ph.D., University of Colorado

A karaoke fan who admits he “cannot resist Celine Dion,” Royal Arch Masons 2016 ERG scientist Richard Felix II, Ph.D., published on the greater-than-expected role of lower-level brain regions in speech processing. Fellow Royal Arch Masons grantee, 2016 ERG scientist Elizabeth McCullagh, Ph.D., makes her own cheese and beer in between uncovering new clues to sound localization problems in the genetic condition known as Fragile X syndrome, which can lead to autism.

Rahul Mittal, Ph.D., University of Miami Miller School of Medicine

2016 ERG scientist Harrison Lin, Ph.D., funded by The Barbara Epstein Foundation Inc., credits his older brother, also an otolaryngologist, with developing his love for science. He coauthored a January 2018 JAMA Otolaryngology–Head & Neck Surgery paper that detailed the gap between hearing loss diagnoses and treatments. 2016 ERG scientist Rahul Mittal, Ph.D., who says he’d write fiction if not doing research, published an overview of hair cell regeneration, and Julia Campbell, Au.D., Ph.D., whose 2016 grant was funded by the Les Paul Foundation, understands firsthand what it feels like to have tinnitus, a topic she recently explored in a paper investigating mild tinnitus in young patients with typical hearing. Hyperacusis Research-funded 2016 ERG scientist Xiying Guan, Ph.D., whose parents grew up doing manual labor in China, published a paper evaluating a treatment for conductive hyperacusis.

Some of our 2017 ERG scientists are already publishing. Royal Arch Masons grantee Inyong Choi, Ph.D., produced research on hybrid cochlear implants, which make use of residual hearing to produce more natural hearing. Oscar Diaz-Horta, Ph.D., whose 2017 ERG grant was funded by the Children’s Hearing Institute, investigated hair cell bundle structure and orientation. Sadly, Diaz-Horta died unexpectedly just as this paper was published.

Ian Swinburne, Ph.D., Harvard Medical School

Ian Swinburne, Ph.D., one of our Ménière’s Disease Grants scientists during its inaugural year in 2017, published a paper detailing one possible cause of Ménière’s disease. Swinburne and team discovered that a structure in the inner ear’s endolymphatic sac acts as a pressure-sensitive relief valve. Its failure may account for problems with inner ear fluid pressure and volume that may lead to hearing and balance disorders, including Ménière’s. “One activity I loved as a child was waterworks: building canals and aqueducts out of sand or dirt and then pouring water through them just to watch it flow,” he says. “Now I recognize an echo of that play in my study of water pressure and flow within the ear.”

We very much look forward to published research from all of our ERG scientists, including our latest crop of 2018 ERG scientists, whose ranks include a former college mascot, a violinist, a horse rider (of a horse named Gandalf), a Tibetan neuroscientist (and cookbook writer), a cricket player, a nonprofit cook who has prepared meals for 50,000 people, a dancer (including in flash mobs), and a builder of airplane scale models. Our ERG scientists deliver surprises of all sorts, from their backgrounds and how they got to where they are to the ground-breaking science they are spearheading in the lab.

 

We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.

 

On a Data-Driven Mission

By Peter G. Barr-Gillespie, Ph.D.

The annual meeting of Hearing Health Foundation’s (HHF) Hearing Restoration Project was held in Seattle November 11-12, 2018. We used this meeting to update one another on recent progress on HHF-funded projects, discuss in detail the implications of new data, evaluate the directions of ongoing projects, and plan for the next funding period.

As you may recall, in November 2016 the Hearing Restoration Project (HRP) made a deliberate turn toward funding only the highest-impact science that our group leads the world in researching—we have termed this the “Seattle Plan.” We therefore devoted a substantial portion of our efforts to cross-species comparisons that contrast molecular responses to inner ear sensory hair cell damage in species that regenerate their hair cells, especially chickens and fish, with responses in mice, which like other mammals do not regenerate their hair cells. We also have been examining the “epigenetic” structure of key genes in the mouse, as one hypothesis is that epigenetic modifications of the DNA—that is, the inactivation of genes through chemical changes to the DNA—cause mouse (and human) cells of the cochlea to no longer respond to hair cell damage by regenerating hair cells.

Avian and mammal supporting cell subtypes differ, but Stefan Heller, Ph.D., and team are investigating whether an evolutionarily homologous equivalent exists in the organ of Corti, and whether this knowledge could be used for hair cell regeneration. Credit: Chris Gralapp / Otolaryngology Head and Neck Surgery (OHNS), Stanford University School of Medicine

I am happy to report that progress over the past two years on these two major projects has been outstanding. For the cross-species comparisons, Stefan Heller, Ph.D., and Tatjana Piotrowski, Ph.D., reported on single cell analysis of, respectively, chick and fish hair cell organs responding to damage. Using single cell analysis—isolating hundreds to thousands of individual cells and quantifying all of the protein-assembly messages they express—we can determine the molecular pathways by which hair cells are formed during development and regeneration. This approach has always been promising, but this year we have begun to reap the expected benefits, as those projects have given us an unprecedented view of hair cell formation.

The epigenetics project overseen by Neil Segil, Ph.D., has now reached maturity, and using the voluminous data acquired over the past several years his lab has shown how supporting cells (from which we intend to regenerate hair cells) change the epigenetic modification of their DNA so they no longer are able to switch on key genes used for turning them into hair cells. A topic of great interest at the meeting was that of genetic reprogramming: Can we use genes (like transcription factors, proteins that control the transfer of genetic information) or small molecules (which often can be taken orally and still reach their targets) to overcome the epigenetic modification and push supporting cells to turn into hair cells? Preliminary results from Segil’s lab and from others in the field make us optimistic that the reprogramming approach will eventually be part of a regeneration strategy.

We also heard from Seth Ament, Ph.D., a bioinformatics expert we recently recruited to the HRP to explicitly compare our various datasets and find the common threads between them. Ament has used gene expression data from the chick, fish, and mouse, as well as the epigenetic data from the mouse, to hypothesize which genes may be important for hair cell regeneration. As a systems biology specialist, Ament brings a fresh eye to the field of auditory science and has not only identified some of the genes we expected to be important, but new ones as well. His success nicely justifies our cross-species approach, and the bioinformatics comparisons that he has been able to achieve in his initial HRP project have been impressive.

Finally, two other Seattle Plan projects have gone well, including our data-sharing platform called the gEAR (gene Expression Analysis Resource), developed by Ronna Hertzano, M.D., Ph.D., which allows us to analyze our data privately but also to efficiently share data with the public. In addition, John Brigande, Ph.D., reported on his project developing mouse models for testing interesting new genes; his group will be adding several powerful models in the year to come.

The excitement at the meeting extended to our future plans. We agreed that the Seattle Plan was still the proper course, and we eagerly anticipate more data and results to come from our consortium of researchers. We are truly getting a clearer picture of hair cell regeneration due to the HRP’s efforts. That said, there is a long way to go; our efforts show us how surprisingly intricate biology is, despite knowing from the start that systems like the inner ear are remarkably complex. Nature always has surprises for us, by turns dashing treasured hypotheses while revealing unexpected mechanisms. The HRP is most definitely on track for success, and all of us in the HRP sincerely thank you for your continued support.


HRP scientific director Peter G. Barr-Gillespie, Ph.D., is a professor of otolaryngology at the Oregon Hearing Research Center, a senior scientist at the Vollum Institute, and the interim senior vice president for research, all at Oregon Health & Science University. For more, see hhf.org/hrp.

 

Empower the Hearing Restoration Project's life-changing research. If you are able, please make a contribution today.

 

Detailing the Relationships Between Auditory Processing and Cognitive-Linguistic Abilities in Children

By Beula Magimairaj, Ph.D.

Children suspected to have or diagnosed with auditory processing disorder (APD) present with difficulty understanding speech despite typical-range peripheral hearing and typical intellectual abilities. Children with APD (also known as central auditory processing disorder, CAPD) may experience difficulties while listening in noise, discriminating speech and non-speech sounds, recognizing auditory patterns, identifying the location of a sound source, and processing time-related aspects of sound, such as rapid sound fluctuations or detecting short gaps between sounds. According to 2010 clinical practice guidelines by the American Academy of Audiology and a 2005 American Speech-Language-Hearing Association (ASHA) report, developmental APD is a unique clinical entity. According to ASHA, APD is not the result of cognitive or language deficits.

In our July 2018 study in the journal Language Speech and Hearing Services in the Schools for its special issue on “working memory,” my coauthor and I present a novel framework for conceptualizing auditory processing abilities in school-age children. According to our framework, cognitive and linguistic factors are included along with auditory factors as potential sources of deficits that may contribute individually or in combination to cause listening difficulties in children.

We present empirical evidence from hearing, language, and cognitive science in explaining the relationships between children’s auditory processing abilities and cognitive abilities such as memory and attention. We also discuss studies that have identified auditory abilities that are unique and may benefit from assessment and intervention. Our unified framework is based on studies from typically developing children; those suspected to have APD, developmental language impairment, or attention deficit disorders; and models of attention and memory in children. In addition, the framework is based on what we know about the integrated functioning of the nervous system and evidence of multiple risk factors in developmental disorders. A schematic of this framework is shown here.

APD chart.png

In our publication, for example, we discuss how traditional APD diagnostic models show remarkable overlap with models of working memory (WM). WM refers to an active memory system that individuals use to hold and manipulate information in conscious awareness. Overlapping components among the models include verbal short-term memory capacity (auditory decoding and memory), integration of audiovisual information and information from long-term memory, and central executive functions such as attention and organization. Therefore, a deficit in the WM system can also potentially mimic the APD profile.

Similarly, auditory decoding (i.e., processing speech sounds), audiovisual integration, and organization abilities can influence language processing at various levels of complexity. For example, poor phonological (speech sound) processing abilities, such as those seen in some children with primary language impairment or dyslexia, could potentially lead to auditory processing profiles that correspond to APD. Auditory memory and auditory sequencing of spoken material are often challenging for children diagnosed with APD. These are the same integral functions attributed to the verbal short-term memory component of WM. Such observations are supported by the frequent co-occurrence of language impairment, APD, and attention deficit disorders.

Furthermore, it is important to note that cognitive-linguistic and auditory systems are highly interconnected in the nervous system. Therefore, heterogeneous profiles of children with listening difficulties may reflect a combination of deficits across these systems. This calls for a unified approach to model functional listening difficulties in children.

Given the overlap in developmental trajectories of auditory skills and WM abilities, the age at evaluation must be taken into account during assessment of auditory processing. The American Academy of Audiology does not recommend APD testing for children developmentally younger than age 7. Clinicians must therefore adhere to this recommendation to save time and resources for parents and children and to avoid misdiagnosis.

However, any significant listening difficulties noted in children at any age (especially at younger ages) must call for a speech-language evaluation, a peripheral hearing assessment, and cognitive assessment. This is because identification of deficits or areas of risk in language or cognitive processing triggers the consideration of cognitive-language enrichment opportunities for the children. Early enrichment of overall language knowledge and processing abilities (e.g., phonological/speech sound awareness, vocabulary) has the potential to improve children's functional communication abilities, especially when listening in complex auditory environments. 

Given the prominence of children's difficulty listening in complex auditory environments and emerging evidence suggesting a distinction of speech perception in noise and spatialized listening from other auditory and cognitive factors, listening training in spatialized noise appears to hold promise in terms of intervention. This needs to be systematically replicated across independent research studies. 

Other evidence-based implications discussed in our publication include improving auditory access using assistive listening devices (e.g., FM systems), using a hierarchical assessment model, or employing a multidisciplinary front-end screening of sensitive areas (with minimized overlap across audition, language, memory, and attention) prior to detailed assessments in needed areas.

Finally, we emphasize that prevention should be at the forefront. This calls for integrating auditory enrichment with meaningful activities such as musical experience, play, social interaction, and rich language experience beginning early in infancy while optimizing attention and memory load. While these approaches are not new, current research evidence on neuroplasticity makes a compelling case to promote auditory enrichment experiences in infants and young children.

A 2015 Emerging Research Grants (ERG) scientist generously funded by the General Grand Chapter Royal Arch Masons International, Beula Magimairaj, Ph.D., is an assistant professor in the department of communication sciences and disorders at the University of Central Arkansas. Magimairaj’s related ERG research on working memory appears in the Journal of Communication Disorders, and she wrote about an earlier paper from her ERG grant in the Summer 2018 issue of Hearing Health.

 

We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.

 

Measuring Brain Signals Leads to Insights Into Mild Tinnitus

By Julia Campbell, Au.D., Ph.D.

Tinnitus, or the perception of sound where none is present, has been estimated to affect approximately 15 percent of adults. Unfortunately, there is no cure for tinnitus, nor is there an objective measure of the disorder, with professionals relying instead upon patient report.

There are several theories as to why tinnitus occurs, with one of the more prevalent hypotheses involving what is termed decreased inhibition. Neural inhibition is a normal function throughout the nervous system, and works in tandem with excitatory neural signals for accomplishing tasks ranging from motor output to the processing of sensory input. In sensory processing, such as hearing, both inhibitory and excitatory neural signals depend on external input.

For example, if an auditory signal cannot be relayed through the central auditory pathways due to cochlear damage resulting in hearing loss, both central excitation and inhibition may be reduced. This reduction in auditory-related inhibitory function may result in several changes in the central nervous system, including increased spontaneous neural firing, neural synchrony, and reorganization of cortical regions in the brain. Such changes, or plasticity, could possibly result in the perception of tinnitus, allowing signals that are normally suppressed to be perceived by the affected individual. Indeed, tinnitus has been reported in an estimated 30 percent of those with clinical hearing loss over the frequency range of 0.25 to 8 kilohertz (kHz), suggesting that cochlear damage and tinnitus may be interconnected.

However, many individuals with clinically normal hearing report tinnitus. Therefore, it is possible that in this specific population, inhibitory dysfunction may not underlie these phantom perceptions, or may arise from a different trigger other than hearing loss.

One measure of central inhibition is sensory gating. Sensory gating involves filtering out signals that are repetitive and therefore unimportant for conscious perception. This automatic process can be measured through electrical responses in the brain, termed cortical auditory evoked potentials (CAEPs). CAEPs are recorded via electroencephalography (EEG) using noninvasive sensors to record electrical activity from the brain at the level of the scalp.

Cortical auditory evoked potentials (CAEPs) are recorded via electroencephalography (EEG) using noninvasive sensors to record electrical activity from the brain.

In healthy gating function, it is expected that the CAEP response to an initial auditory signal will be larger in amplitude when compared with a secondary CAEP response elicited by the same auditory signal. This illustrates the inhibition of repetitive information by the central nervous system. If inhibitory processes are dysfunctional, CAEP responses are similar in amplitude, reflecting decreased inhibition and the reduced filtering of incoming auditory information.
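For readers curious how this amplitude comparison might be quantified, here is a minimal illustrative sketch in Python. It is not code from the study; the function name and the microvolt values are hypothetical. It simply computes the second-response/first-response ratio often used in the sensory gating literature, where a ratio well below 1.0 reflects strong suppression of the repeated sound and a ratio near 1.0 reflects reduced inhibition:

```python
# Illustrative only: a simple sensory gating ratio from two CAEP peak
# amplitudes (in microvolts). Smaller ratios indicate stronger
# suppression of the repeated auditory signal.
def gating_ratio(first_amplitude_uv, second_amplitude_uv):
    """Return the second-response/first-response amplitude ratio."""
    return second_amplitude_uv / first_amplitude_uv

# Healthy gating: the repeated sound evokes a much smaller response.
print(gating_ratio(8.0, 2.0))  # prints 0.25
# Impaired gating: the two responses are similar in amplitude.
print(gating_ratio(8.0, 7.2))  # prints 0.9
```

In practice, gating studies derive these amplitudes from averaged EEG recordings across many stimulus pairs; this sketch only illustrates the comparison described above.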

Due to the hypothesis that atypical inhibition may play a role in tinnitus, we conducted a study to evaluate inhibitory function in adults with normal hearing, with and without mild tinnitus, using sensory gating measures. To our knowledge, sensory gating had not been used to investigate central inhibition in individuals with tinnitus. We also evaluated extended high-frequency auditory sensitivity in participants at 10, 12.5, and 16 kHz—which are frequencies not included in the usual clinical evaluation—to determine if participants with mild tinnitus showed hearing loss in these regions.

Tinnitus severity was measured subjectively using the Tinnitus Handicap Index. This score was correlated with measures of gating function to determine if tinnitus severity may be worse with decreased inhibition.

Our results, published in Audiology Research on Oct. 2, 2018, showed that gating function was impaired in adults with typical hearing and mild tinnitus, and that decreased gating was significantly correlated with tinnitus severity. In addition, those with tinnitus did not show significantly different extended high-frequency thresholds in comparison with the participants without tinnitus, but better hearing in this frequency range was associated with worse tinnitus severity.

This result conflicts with the theory that hearing loss may trigger tinnitus, at least in adults with typical hearing, and may indicate that these individuals possess heightened auditory awareness, although this hypothesis should be directly tested.


Overall, it appears that central inhibition is atypical in adults with typical hearing and tinnitus, and that this is not related to hearing loss as measured in clinically or non-clinically tested frequency regions. The cause of decreased inhibition in this population remains unknown, but genetic factors may play a role. We are currently investigating the use of sensory gating as an objective clinical measure of tinnitus, particularly in adults with hearing loss, as well as the networks in the brain that may underlie dysfunctional gating processes.

2016 Emerging Research Grants scientist Julia Campbell, Au.D., Ph.D., CCC-A, F-AAA, received the Les Paul Foundation Award for Tinnitus Research. She is an assistant professor in communication sciences and disorders in the Central Sensory Processes Laboratory at the University of Texas at Austin.

 

You can empower work toward better treatments and cures for hearing loss and tinnitus. If you are able, please make a contribution today.

 

Accomplishments by ERG Alumni

By Elizabeth Crofts

Progress Investigating Potential Causes and Treatments of Ménière’s Disease

Gail Ishiyama, M.D., a clinician-scientist who is a neurology associate professor at UCLA’s David Geffen School of Medicine, has been investigating balance disorders for nearly two decades and recently coauthored two studies on the topic. While not directly funded by HHF, Ishiyama is a 2016 Emerging Research Grants recipient and also received a Ménière’s Disease Grant in 2017.

Ishiyama and colleagues’ December 2018 paper in the journal Brain Research investigated oxidative stress, which plays a large role in several inner ear diseases as well as in aging. Oxidative stress is an imbalance between the production of free radicals and antioxidant defenses. The gene responsible for reducing oxidative stress throughout the body is called nuclear factor (erythroid-derived 2)-like 2, or NRF2. Ishiyama’s study looked at the localization of the NRF2 protein in the cells of the human cochlea and vestibule. The team found that NRF2-immunoreactivity (IR) was localized in the organ of Corti of the cochlea, and that NRF2-IR decreases significantly in the cochlea of older individuals. The team postulates for future studies that modulation of NRF2 expression may protect against hearing loss that results from exposure to noise and ototoxic drugs.

In a January 2018 report in the journal Otology & Neurotology, Ishiyama and team researched endolymphatic hydrops (EH), a ballooning of the endolymphatic fluid system in the inner ear that is associated with Ménière’s disease. Symptoms include fluctuating hearing loss, as well as vertigo, tinnitus, and pressure in the ear.

For the study, patients with EH and vestibular schwannoma were evaluated to determine their clinical outcomes when EH is treated medically. Vestibular schwannomas, also known as acoustic neuromas, are benign tumors that grow in the vestibular system of the inner ear, which controls balance. Often when patients develop episodic vertigo spells and have a known diagnosis of vestibular schwannoma, surgeons recommend surgical intervention, as they attribute the symptoms to the vestibular schwannoma. However, a noninvasive treatment may hold promise. Through the use of high-resolution MRI scans, the researchers found that when EH coexists with vestibular schwannoma in a patient, and the patient also experiences vertigo spells, a medical treatment for EH—that is, the use of diuretics to relieve inner ear fluid buildup—may alleviate the vestibular symptoms.


A 2016 ERG scientist funded by The Estate of Howard F. Schum, Gail Ishiyama, M.D., is an associate professor of neurology at UCLA’s David Geffen School of Medicine. She also received a 2017 Ménière’s Disease Grant.


New Insights Into Aging Effects on Speech Recognition

Age-related changes in perceptual organization have received less attention than other potential sources of decline in hearing ability. Perceptual organization is the process by which the auditory system interprets acoustic input from multiple sources, and creates an auditory scene. In daily life this is essential, because speech communication occurs in environments in which background sounds fluctuate and can mask the intended message.

Perceptual organization includes three interrelated auditory processes: glimpsing, speech segregation, and phonemic restoration. Glimpsing is the process of identifying recognizable fragments of speech and connecting them across gaps to create a coherent stream. Speech segregation refers to the process where the glimpses (speech fragments) are separated from background speech, to focus on a single target when the background includes multiple talkers. Phonemic restoration refers to the process of filling in missing information using prior knowledge of language, conversational context, and acoustic cues.

A July 2018 study in The Journal of the Acoustical Society of America by William J. Bologna, Au.D., Ph.D., Kenneth I. Vaden, Jr., Ph.D., Jayne B. Ahlstrom, M.S., and Judy R. Dubno, Ph.D., investigated these three components of perceptual organization to determine the extent to which their declines may be the source of increased difficulty in speech recognition with age. Younger and older adults with typical hearing listened to sentences interrupted with either silence or envelope-modulated noise, presented in quiet or with a competing talker.
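The two interruption conditions in such experiments can be sketched in a few lines of code. The sample rate, gating rate, and stand-in "speech" signal below are illustrative choices, not the stimulus parameters from the study:

```python
import numpy as np

fs = 16000                               # sample rate (Hz), illustrative
t = np.arange(fs) / fs                   # 1 second of signal
# Stand-in for speech: a low-frequency carrier with a slow envelope modulation.
speech = np.sin(2 * np.pi * 150 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))

# Square-wave gating: alternating 125 ms on / 125 ms off segments.
gate = (np.floor(t * 8) % 2 == 0).astype(float)

# Condition 1: interruptions filled with silence.
interrupted_silence = speech * gate

# Condition 2: interruptions filled with envelope-modulated noise,
# which partially restores the speech envelope across the gaps.
envelope = np.abs(speech)
noise = np.random.default_rng(0).standard_normal(len(t))
interrupted_noise = speech * gate + envelope * noise * (1 - gate)
```

Listeners must then glimpse the surviving speech fragments and, in the noise condition, use the restored envelope to bridge the gaps.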

As expected, older adults performed more poorly than younger adults across all speech conditions. The interaction between age and the duration of glimpses indicated that, compared with younger adults, older adults were less able to make efficient use of limited speech information to recognize keywords. There was an apparent decline in glimpsing, where interruptions in speech had a larger effect on the older adult group.

Older adults saw a greater improvement in speech recognition when envelope modulations were partially restored, leading to better continuity. This demonstrated that with age comes a poorer ability to resolve temporal distortions in the envelope. In speech segregation, the decline in performance with a competing talker was expected to be greater for older adults than younger adults, but this was not supported by the data.


A 2015 Emerging Research Grants scientist, Kenneth I. Vaden, Jr., Ph.D., is a research assistant professor in the department of otolaryngology–head and neck surgery at the Medical University of South Carolina.

A 1986–88 ERG scientist, Judy R. Dubno, Ph.D., is a member of HHF’s Board of Directors. The study’s lead author, William Bologna, Au.D., Ph.D., is a postdoctoral research fellow at the National Center for Rehabilitative Auditory Research in Portland, Oregon.

Author Elizabeth Crofts, a 2018 HHF intern, is a junior at Boston University studying biomedical engineering. For our continually updated list of published papers by ERG alumni, see hhf.org/erg-alumni.

 

We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.

 
 

The Hearing Restoration Project: Update on the Seattle Plan and More

By Peter G. Barr-Gillespie, Ph.D.

Hearing Health Foundation launched the Hearing Restoration Project (HRP) to understand how to regenerate inner ear sensory cells in humans to restore hearing. These sensory hair cells detect and turn sound waves into electrical impulses that are sent to the brain for decoding. Once hair cells are damaged or die, hearing is impaired, but in most species, such as birds and fish, hair cells spontaneously regrow and hearing is restored.

The overarching principle of the HRP consortium is cross-discipline collaboration: open sharing of data and ideas. By having almost immediate access to one another’s data, HRP scientists are able to perform follow-up experiments much faster, rather than having to wait years until data is published.

Regenerated hair cells from chicken auditory organs, with the cell body, nucleus and hair bundle labeled with various colored markers. Image courtesy of Jennifer Stone, Ph.D.


You may remember that two years ago, we changed how we develop our projects. We decided together on a group of four projects—the “Seattle Plan”—that are the most fundamental to the consortium’s progress. These projects, which grew out of previous HRP projects, have now been funded for two years, and considerable progress has been made. We have also funded several other projects that have bubbled up out of new observations and capabilities, and they have added considerably to our knowledge base. With this in mind, I am pleased to share with you the latest updates for our 2018–19 projects.

SEATTLE PLAN PROJECTS

Transcriptome changes in single chick cells
Stefan Heller, Ph.D.

  • Found that all “tall” hair cells are exclusively regenerated mitotically in this animal model.

  • Compiled evidence for different supporting cell subtypes.

  • Obtained good quality single cell RNA sequencing (scRNA-seq) data and are in the process of evolving an analysis strategy for the baseline cell types (control group). Identified about 50 novel marker genes for hair cells, supporting cells, and homogene cells, including subgroups.

  • Developed a strategy to finish all scRNA-seq using a novel peeling technique and latest generation library construction methods.

  • Established two methods for spatial and temporal validation of mRNA expression: PLISH (proximity ligation in situ hybridization), a multi-color in situ hybridization technique, and SGA (sequential genomic analysis).

Epigenetics of the mouse inner ear
Michael Lovett, Ph.D., David Raible, Ph.D., Neil Segil, Ph.D., Jennifer Stone, Ph.D.

  • Completed epigenetic, chromatin structure, and RNA-seq datasets for FACS-purified cochlear hair cells and supporting cells from postnatal day 1 and postnatal day 6 mice, and provided these datasets to the gEAR (gene Expression Analysis Resource) portal, where they are mounted through EpiViz for access by the HRP consortium.

  • Established a webpage (EarCode) so that HRP consortium members can access the current data directly through a University of California, Santa Cruz, genome browser.

  • Discovered that maintenance of the transcriptionally silent state of the hair cell gene regulatory network in perinatal supporting cells depends on a combination of H3K27me3 and active H3K27 deacetylation, and that during transdifferentiation these epigenetic marks are modified to an active state.



Mouse functional testing
John Brigande, Ph.D.

  • Defined in vitro and in vivo model systems to interrogate genome editing efficacy using CRISPR/Cas9.

Implementing the gEAR for data sharing within the HRP
Ronna Hertzano, M.D., Ph.D.

  • Added a scRNA-seq workbench for easy sharing and viewing of scRNA-seq data. Such data, which are now driving the field forward, have been particularly difficult to share.

  • Created additional public datasets to improve data sharing.

  • Completely rewrote the gEAR backbone to use the latest technologies, allowing the portal to handle a much larger number of datasets and users.

  • Performed hands-on gEAR workshops at the Association for Research in Otolaryngology and the Gordon Research Conference, increasing the number of users with accounts to greater than 300.


Single Cell RNA-seq of homeostatic neuromasts
Tatjana Piotrowski, Ph.D.

  • Optimized protocols for fluorescence-activated cell sorting and scRNA-seq; obtained high-quality scRNA-seq transcriptome results from 1,400 neuromast cells; clustered all cells into seven groups; and performed analyses to align the cells along developmental time, providing a temporal readout of gene expression during hair cell development.

OTHER PROJECTS

Integrated systems biology of hearing restoration
Seth Ament, Ph.D.

  • Discovered 29 novel risk loci for age-related hearing difficulty through new analyses of genome-wide association studies of multiple hearing-related traits in the U.K. Biobank (comprising 330,000 people), and predicted the causal genes and variants at these loci through integration with transcriptomics and epigenomics data from HRP consortium members.

  • Generated scRNA-seq of 9,472 cells in the neonatal mouse cochlea and utricle (postnatal days 2 and 7).

  • Conducted systems biology analyses that integrate multiple HRP datasets to characterize gene regulatory networks and predict driver genes associated with the development and regeneration of hair cells. These analyses utilize scRNA-seq of sensory epithelial cells in mouse, chicken, and zebrafish hearing and vestibular organs, as well as epigenomic data (ATAC-seq) from hair cells, support cells, and non-epithelial cells in the mouse cochlea.


Comparison of three reprogramming cocktails
Andy Groves, Ph.D.

  • Created and validated transgenic mouse lines expressing three different combinations of reprogramming transcription factors.

  • Demonstrated these lines can produce new hair cell–like cells in the undamaged and damaged cochlea of the immature mouse.

  • Compiled preliminary data showing Atoh1 and Gfi1 genes can create ectopic hair cells in the adult mouse cochlea.


Signaling molecules controlling avian auditory hair cell regeneration
Jennifer Stone, Ph.D.

  • Identified four molecular pathways (FGF, BMP, VEGF, and Wnt) that control hair cell regeneration in the bird auditory organ. These pathways were identified in Phase I (gene discovery) as being transcriptionally dynamic in birds, fish, and mice during regeneration, which indicated they may be universal regulators of hair cell regeneration.

  • Determined that the Notch signaling pathway (a powerful inhibitor of stem cells) also blocks supporting cell division in the chicken auditory organ after damage. This discovery shows that Notch is a negative regulator of regeneration, conserved in birds, fish, and mice.

  • Identified signaling molecules in birds that are correlated with either mitotic or non-mitotic modes of hair cell regeneration, and are now exploring how these signaling molecules interact to determine which mode of regeneration occurs. Since mammals only exhibit non-mitotic regeneration, we are particularly interested in determining how this mode is controlled.

UP NEXT

We look forward to our annual meeting, which will be held in Seattle in November. There we will discuss and integrate these data to develop our plans for our 2019–20 projects.


As always we are very grateful for the donations we receive to fund this groundbreaking research to find better treatments for hearing loss and related conditions. Every dollar counts, and we sincerely thank our supporters.

HRP scientific director Peter G. Barr-Gillespie, Ph.D., is a professor of otolaryngology at the Oregon Hearing Research Center, a senior scientist at the Vollum Institute, and the interim senior vice president for research, all at Oregon Health & Science University. For more, see hhf.org/hrp.

 

Empower the Hearing Restoration Project's life-changing research. If you are able, please make a contribution today.

 
 

ERG Grantees' Advancements in OAE Hearing Tests, Speech-in-Noise Listening

By Yishane Lee and Inyong Choi, Ph.D.

Support for a Theory Explaining Otoacoustic Emissions: Fangyi Chen, Ph.D.


It’s a remarkable feature of the ear that it not only hears sound but also generates it. These sounds, called otoacoustic emissions (OAEs), were discovered in 1978. Thanks in part to ERG research on outer hair cell motility, measuring OAEs has become a common, noninvasive hearing test, especially among infants too young to respond to sound prompts.

There are two theories about how the ear produces its own sound emanating from the interior of the cochlea out toward its base. The traditional one is the backward traveling wave theory, in which sound emissions travel slowly as a transverse wave along the basilar membrane, which divides the cochlea into two fluid-filled cavities. In a transverse wave, the wave particles move perpendicular to the wave direction. But this theory does not explain some anomalies, leading to a second hypothesis: The fast compression wave theory holds that the emissions travel as a longitudinal wave via lymph fluids around the basilar membrane. In a longitudinal wave, the wave particles travel in the same direction as the wave motion.

Figuring out how the emissions are created will promote greater accuracy of the OAE hearing test and a better understanding of cochlear mechanics. Fangyi Chen, Ph.D., a 2010 Emerging Research Grants (ERG) recipient, started investigating the issue at Oregon Health & Science University and is now at China’s Southern University of Science and Technology. His team’s paper, published in the journal Neural Plasticity in July 2018, for the first time experimentally validates the backward traveling wave theory.

Chen and his coauthors—including Allyn Hubbard, Ph.D., and Alfred Nuttall, Ph.D., who are each 1989–90 ERG recipients—directly measured the basilar membrane vibration in order to determine the wave propagation mechanism of the emissions. The team stimulated the membrane at a specific location, allowing the vibration source that initiates the backward wave to be pinpointed. The resulting vibrations along the membrane were then measured at multiple locations in vivo (in guinea pigs), showing a consistent lag as distance increased from the vibration source. The researchers also measured wave speeds on the order of tens of meters per second, much slower than a compression wave would travel in water. The results were confirmed using a computer simulation. In addition to the wave propagation study, a mathematical model of the cochlea based on an acoustic-electrical analogy was created and simulated. This model was used to explain why no peak frequency-to-place map was observed in the backward traveling wave, resolving some of the anomalies previously associated with this OAE theory.
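For a traveling wave, the lag grows linearly with distance from the source, so the inverse slope of a linear fit of lag against distance gives the propagation speed. A minimal sketch of that calculation, using made-up numbers rather than the study's measurements:

```python
import numpy as np

# Hypothetical measurement points: distance (mm) from the vibration source
# and the observed response lag (ms) at each point. Illustrative values only.
distance_mm = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
lag_ms = np.array([0.02, 0.045, 0.07, 0.09, 0.115])

# Linear fit: slope is in ms per mm; its inverse (mm/ms == m/s) is the speed.
slope, intercept = np.polyfit(distance_mm, lag_ms, 1)
speed_m_per_s = 1.0 / slope

print(f"estimated wave speed: {speed_m_per_s:.1f} m/s")
```

With these illustrative numbers the fit yields a speed in the tens of meters per second, consistent with a slow transverse wave rather than a fast compression wave.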

Speech-in-Noise Understanding Relies on How Well You Combine Information Across Multiple Frequencies: Inyong Choi, Ph.D.

Understanding speech in noisy environments is a crucial communication ability, yet many individuals, with or without hearing loss, struggle with it. Our study in Hearing Research, published in September 2018, finds that a critical factor for good speech-in-noise understanding is how well you combine information across multiple frequencies. We tested this with a pitch-fusion task in "hybrid" cochlear implant users, who receive both low-frequency acoustic and high-frequency electric stimulation within the same ear.

In the pitch-fusion task, subjects heard either a tone consisting of many frequencies in a simple mathematical relationship or a tone with more irregular spacing between frequencies. Subjects had to say whether the tone sounded "natural" or "unnatural" to them, since a tone whose frequencies are in a simple mathematical relationship sounds much more natural to us. My team and I are now studying how we can improve sensitivity to this "naturalness" in listeners with hearing loss, with the goal of providing individualized therapeutic options for difficulties in speech-in-noise understanding.
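The two tone classes in such a task can be sketched in a few lines. The sample rate, fundamental frequency, number of partials, and jitter range below are illustrative choices, not the study's stimulus parameters:

```python
import numpy as np

fs = 16000                     # sample rate (Hz), illustrative
dur = 0.5                      # tone duration (seconds)
f0 = 200.0                     # fundamental frequency (Hz)
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(0)

def complex_tone(freqs):
    """Equal-amplitude sum of sinusoids at the given frequencies."""
    return sum(np.sin(2 * np.pi * f * t) for f in freqs) / len(freqs)

# "Natural" tone: partials at exact integer multiples of f0.
harmonic = complex_tone([f0 * k for k in range(1, 7)])

# "Unnatural" tone: each partial jittered off its harmonic frequency.
jittered = complex_tone([f0 * k * (1 + rng.uniform(-0.06, 0.06))
                         for k in range(1, 7)])
```

A listener who fuses frequency information well hears the first tone as a single natural pitch and the second as subtly wrong.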

2017 ERG recipient Inyong Choi, Ph.D., is an assistant professor in the department of communication sciences and disorders at the University of Iowa in Iowa City.

We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.

 
 

Meet the Researcher: A. Catalina Vélez-Ortega

By Yishane Lee

2018 Emerging Research Grants (ERG) awardee A. Catalina Vélez-Ortega received a master’s in biology from the University of Antioquia, Colombia, and a doctorate in physiology from the University of Kentucky, where she completed postdoctoral training and is now an assistant professor in the department of physiology.


IN HER WORDS:

TRPA1 is an ion channel known for its role as an “irritant sensor” in pain-sensing neurons (nerve cells). Noise exposure leads to the production of some cellular “irritants” that activate TRPA1 channels in the inner ear. The role of TRPA1 channels in the inner ear has been puzzling, with most experiments leaving more questions to pursue. My current project seeks to uncover how TRPA1 activation modifies cochlear mechanics and hearing sensitivity, in order to find new therapeutic targets to prevent hearing loss or tinnitus.

My father, our town’s surgeon, fueled my desire to learn. When I asked him how the human heart works, he called the butcher, got a pig’s heart, and we dissected it together. I was about 5 when I learned how the heart’s chambers are connected and how valves work. He also set up an astronomy class at home with a flashlight, globe, and ball when I asked, “Why does the moon change shape?” My father’s excitement kept my curiosity from fading as I grew older. That eager-to-learn personality now drives my career in science and teaching.

My training in biomedical engineering guided my interest into hearing science. The field of inner ear research mixes physics and mechanics with molecular biology and genetics in a way I find extremely attractive. Analytics also intrigues me. People who work with me know how complex my calendar and spreadsheets can get. I absolutely love logging all kinds of data and looking for correlations. I also like to plan ahead—passport renewal 10 years from now? Already in my calendar!

I take dance lessons and participate in flash mobs and other dance performances. But I used to be extremely shy. As a child I simply could not look anyone in the eye when talking to them. I was also terrified of being onstage. It was only after college that I decided to finally correct the problem. Interestingly, taking sign language lessons was very helpful. Sign language forced me to stare at people to be able to communicate. It was terrifying at first, but it started to feel very natural after just a few months.

Vélez-Ortega’s 2018 ERG grant was generously funded by cochlear implant manufacturer Cochlear Americas.

We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.

 
 

HHF 2019 Grant Applications Open

By Lauren McGrath

We are excited to inform you that the applications for Hearing Health Foundation (HHF)'s 2019 Emerging Research Grants (ERG) and Ménière's Disease Grants (MDG) programs are officially open as of September 1.


HHF's ERG grants provide seed money to stimulate data collection that leads to a continuing, independently fundable line of research. According to a 2017 analysis, every $1 of funding that HHF awards to ERG grantees is matched by the NIH with $91.

ERG grant funding shall not exceed $30,000 for the one-year project period, and only research proposals in the following topics will be considered for the 2019 ERG cycle: General Hearing Health (GHH)*, [Central] Auditory Processing Disorders, Hearing Loss in Children, Hyperacusis, Ménière’s Disease, Ototoxic Medications, Tinnitus, and Usher Syndrome.

More Information About ERG
Begin Your ERG Application

The highly competitive Ménière’s Disease Grants (MDG) program funds scientists to better our understanding of this complicated condition, with an eye toward better treatments and cures for those who suffer from Ménière’s disease.

MDG grant funding shall not exceed $125,000 for the two-year project period. Areas of interest for the 2019 MDG Cycle include: the mechanisms of endolymphatic hydrops; genetics of Ménière’s disease; development and validation of biomarkers, including imaging and/or electrophysiologic and behavioral measures for its diagnosis and measurement of therapeutic effectiveness; animal models of Ménière’s disease; and the development of novel therapeutics.

More Information About MDG
Begin Your MDG Application

Applications for both ERG and MDG will close Tuesday, January 15.

If you have any questions about the grant program and processes, contact us at grants@hhf.org.  
Please forward and share this information with your colleagues who may be interested.


Understanding Individual Variances in Hearing Aid Outcomes in Quiet and Noisy Environments

By Elizabeth Crofts

Evelyn Davies-Venn, Au.D., Ph.D.


More than 460 million people worldwide live with some form of hearing loss. For most, hearing aids are the primary rehabilitation tool, yet there is no one-size-fits-all approach. As a result, many hearing aid users are frustrated by their listening experiences, especially understanding speech in noise.

Evelyn Davies-Venn, Au.D., Ph.D., of the University of Minnesota is currently focusing on two projects, one of which is funded by Hearing Health Foundation (HHF) through its Emerging Research Grants (ERG) program, that will enhance the customization of hearing aids. She presented the two projects at the Hearing Loss Association of America (HLAA) convention in June.

Davies-Venn explains that some of the factors dictating individual variance in hearing aid listening outcomes in noisy environments include audibility, spectral resolution, and cognitive ability. Audibility—how much of the speech spectrum is available to the hearing aid user—is the biggest factor. “Speech must be audible before it is intelligible,” Davies-Venn says. Another primary factor is spectral resolution, the ear’s ability to make use of spectral, or frequency, changes in sounds. This also directly affects listening outcomes.

Secondary factors include the user’s working memory and the volume of the amplified speech. These impact how well someone can handle making sense of distortions (from ambient noise as well as from signal processing) in an incoming speech signal. Working memory is needed to provide context in the event of missing speech fragments, for instance. Needless to say, it is a challenge for conventional hearing aid technology to address all of these complex variables.

Davies-Venn highlights two emerging projects that take an innovative approach to resolving this challenge. The first project, which aims to improve hearing aid success, focuses on an emerging technology called the “cognitive control of a hearing aid,” or COCOHA. It is an improved hearing aid that will analyze multiple sounds, complete an acoustic scene analysis, and separate the sounds into individual streams, she says.

Then, based on the cognitive/electrophysiological recordings from the individual, the COCOHA will select the specific stream that the person is interested in listening to and amplify it—such as a particular speaker’s voice. The cognitive recording is captured with a noninvasive, far-field measure of electrical signals emitted from the brain in response to sound stimuli (similar to how an electroencephalogram, EEG, captures signals).
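A simplified version of this selection step, borrowed from auditory attention decoding research, correlates the EEG recording with each candidate speech envelope and amplifies the best match. The sketch below uses simulated signals and made-up coupling strengths; it is a toy illustration, not the COCOHA algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000  # samples of band-limited envelope signal (illustrative)

# Two candidate speech envelopes produced by the acoustic scene analysis
# stage, simulated here as smoothed noise.
env_a = np.convolve(rng.standard_normal(n), np.ones(20) / 20, mode="same")
env_b = np.convolve(rng.standard_normal(n), np.ones(20) / 20, mode="same")

# Simulated scalp EEG: tracks the attended stream (A) plus unrelated noise.
eeg = env_a + 0.3 * rng.standard_normal(n)

def envelope_correlation(eeg_sig, envelope):
    """Pearson correlation between the EEG and a candidate envelope."""
    return np.corrcoef(eeg_sig, envelope)[0, 1]

scores = {"A": envelope_correlation(eeg, env_a),
          "B": envelope_correlation(eeg, env_b)}
attended = max(scores, key=scores.get)
print("amplify stream", attended)
```

Real systems must decode attention from far noisier EEG, typically with regularized regression over many electrodes and time lags, but the principle of matching brain signals to candidate streams is the same.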

Davies-Venn’s ERG grant from HHF will support research on the use of electrophysiology, far-field or distant (i.e. recorded at the scalp) electrical signals from the brain, to design hearing aid algorithms that can control individual variances due to level-induced (i.e. high intensity) distortions from hearing aids.

The other project involves sensory substitution. This project explores the conversion of speech to another sense—for example, touch—through a mobile processing device or a “skin hearing aid.” For the device to function, a vibration is relayed to the brain for speech understanding. This technology seems cutting edge, but is believed to have been invented in the 1960s by Paul Bach-y-Rita, M.D., of the Smith-Kettlewell Institute of Visual Sciences in San Francisco. Even though it has not yet been incorporated into hearing aid technology intended for mass production, David Eagleman, Ph.D., of Stanford University and others are hoping to make this a reality.

Davies-Venn’s research is inspired by a personal connection to her work. “I have a conductive hearing loss myself,” she says. “I had persistent/chronic ear infections as a child that left me a bit delayed in developing speech, and still get ear infections as an adult and have grown accustomed to the low-frequency hearing loss that results until they resolve.” She also has family members with hearing loss and understands the importance of developing more advanced hearing assistance technology.

The projects are in the early stages, and it may take as long as a decade for them to go from concept to market. “The goal is to develop individualized hearing aid signal processing to improve treatment outcomes in noisy soundscapes,” Davies-Venn says. “We want to say, this is the most optimal treatment protocol, and it’s different from this person’s, even though you have the same hearing threshold.” Solving hearing aid variances in a precise, individual manner that accounts for variables such as age and cognitive ability will improve communication and quality of life for the millions with hearing loss who use hearing technology.

We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.

 
 