Research

Researcher Discovers Gene Mutation Related to Usher Syndrome Type 3

By Pranav Parikh

Usher syndrome type 3 is an inherited disease in which an individual is born with typical hearing and develops hearing loss in early childhood, most often progressing to complete hearing loss by adulthood. Though cases of Usher syndrome type 3 (and its subtypes) are quite infrequent, representing 2 percent of total Usher syndrome cases, its symptoms have damaging and often irreversible consequences that severely disrupt the lives of those living with the condition. There is currently no cure for the disease, but cochlear implants have seen some success in restoring partial hearing function in patients.

A 3D model of the HARS enzyme, including the catalytic site (where the reaction occurs) and the anticodon binding site (the part that recognizes a tRNA's anticodon during protein synthesis).


Susan Robey-Bond, Ph.D., a 2012 Emerging Research Grants scientist, and her team at the University of Vermont College of Medicine were able to isolate a mechanism involved in the development of Usher syndrome. Histidyl-tRNA synthetase, abbreviated HARS, is an enzyme instrumental in protein synthesis and is thought to be involved in the presentation of Usher syndrome type 3B in patients. The early symptoms of temporary hearing and vision loss, hallucinations, and sometimes sudden fatal buildup of fluid in the lungs may be triggered by a fever-causing illness. The hearing and vision loss eventually become severe and permanent.

A graphical representation depicting temperature variation between the wild-type and mutant version of the HARS enzyme.


Usher syndrome type 3B is autosomal recessive, meaning a child must inherit two copies of the mutated gene to develop the disease; when both parents are symptomless carriers, each child has a 25 percent (1 in 4) chance of being affected. The disease is caused by the USH3B mutation, which substitutes a serine for a tyrosine in HARS. The team studying the biochemical properties of the gene compared the Y454S mutation in the HARS enzyme with its wild-type (non-mutated) form and found similar functional biochemical characteristics, as stated in the researchers’ recent paper in Biochemistry.
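For an autosomal recessive condition with two symptomless carrier parents, the odds for each child follow from enumerating the equally likely allele combinations. A minimal sketch (the allele labels are illustrative, not genetic nomenclature):

```python
from itertools import product

def offspring_genotypes(parent1, parent2):
    """Enumerate equally likely offspring genotypes from two parents.

    Each parent is a 2-character string of alleles: 'A' = functional
    copy, 'a' = mutant copy. Each child inherits one allele from each
    parent, with every combination equally likely.
    """
    return [''.join(sorted(a + b)) for a, b in product(parent1, parent2)]

# Two symptomless carrier parents (each 'Aa'):
children = offspring_genotypes("Aa", "Aa")
affected = children.count("aa")          # needs two mutant copies
carriers = children.count("Aa")

print(affected / len(children))  # 0.25 -> 1-in-4 chance of the disease
print(carriers / len(children))  # 0.5  -> 1-in-2 chance of being a carrier
```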

The amino acid activation, aminoacylation, and tRNA binding functions were all consistent between the mutant and wild-type enzymes. In later analysis, though, the team found that at an elevated temperature the Y454S substitution was less stable than the wild-type. More specifically, cells from patients containing the Y454S mutation displayed lower levels of protein synthesis, which could explain the onset of deafness these patients experience. Understanding how these proteins are implicated in hearing will eventually help develop cures or better treatments for Usher syndrome.

Susan Robey-Bond, Ph.D., was a 2012 Emerging Research Grants recipient. For more, see her Biochemistry paper, “The Usher Syndrome Type IIIB Histidyl-tRNA Synthetase Mutation Confers Temperature Sensitivity.”



Early Detection Improved Vocabulary Scores in Kids with Hearing Loss

By Molly Walker

Children with hearing loss in both ears had improved vocabulary skills if they met all of the Early Hearing Detection and Intervention guidelines, a small cross-sectional study found.

Those children with bilateral hearing loss who met all three components of the Early Hearing Detection and Intervention guidelines (hearing screening by 1 month of age, diagnosis of hearing loss by 3 months, and intervention by 6 months) had significantly higher vocabulary quotients than those who did not, reported Christine Yoshinaga-Itano, PhD, of the University of Colorado Boulder, writing in Pediatrics.

The authors added that recent research reported better language outcomes for children born in areas of the country during years when universal newborn hearing screening programs were implemented, and that these children also experienced long-term benefits in reading ability. The authors said that studies in the U.S. also reported better language outcomes for children whose hearing loss was identified early, who received hearing aids earlier, or who began intervention services earlier. But those studies were limited in geographic scope or contained outdated definitions of "early" hearing loss.

"To date, no studies have reported vocabulary or other language outcomes of children meeting all three components of the [Early Hearing Detection and Intervention] guidelines," they wrote.

Researchers examined a cohort of 448 children with bilateral prelingual hearing loss between 8 and 39 months of age (mean 25.3 months), who participated in the National Early Childhood Assessment Project -- a large multistate study. About 80% of children had no additional disabilities that interfered with their language capabilities, while over half of the children with additional disabilities reported cognitive impairment. Expressive vocabulary was measured with the MacArthur-Bates Communicative Development Inventories.

While meeting all three components of the Early Hearing Detection and Intervention guidelines was the primary variable, the authors entered five other independent predictor variables into the analysis:

  • Chronological age

  • Disability status

  • Mother's level of education

  • Degree of loss

  • Adult who is deaf/hard of hearing

They wrote that the overall model was significantly predictive, with the combination of the six factors explaining 41% of the variance in vocabulary outcomes. Higher vocabulary quotients were predicted by higher maternal levels of education, lesser degrees of hearing loss, the presence of a parent who was deaf or hard of hearing, and the absence of additional disabilities, the authors said. But even after controlling for these factors, meeting all three components of the Early Hearing Detection and Intervention guidelines had "a meaningful impact" on vocabulary outcomes.
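The "41% of the variance" figure is the regression model's R², the proportion of outcome variance the predictors jointly explain. A minimal sketch of how R² falls out of a six-predictor least-squares fit, using entirely synthetic data (the coefficients, predictors, and noise level below are invented for illustration, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 448  # cohort size from the study; the data below are synthetic

# Six illustrative predictors (e.g., met-guidelines flag, age, disability
# status, maternal education, degree of loss, deaf/hh adult in the home).
X = rng.normal(size=(n, 6))
true_coefs = np.array([4.0, -2.0, -6.0, 3.0, -3.0, 2.0])  # made up
y = 74.4 + X @ true_coefs + rng.normal(scale=9.0, size=n)  # "vocab quotient"

# Ordinary least squares with an intercept column prepended.
A = np.column_stack([np.ones(n), X])
coefs, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coefs

# R^2: share of outcome variance explained by the predictors.
r2 = 1 - resid.var() / y.var()
print(round(r2, 2))  # roughly 0.4-0.5 for these invented parameters
```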

The authors also said that mean vocabulary quotients decreased as a child's chronological age increased, and this gap was greater for older children. They argued that this complements previous findings, where children with hearing loss fail to acquire vocabulary at the pace of hearing children.

Overall, the mean vocabulary quotient was 74.4. For children without disabilities, the mean vocabulary quotient was 77.6, and for those with additional disabilities, it was 59.8.

Even those children without additional disabilities who met the guidelines had a mean vocabulary quotient of 82, which the authors noted was "considerably less" than the expected mean of 100. They added that 37% of this subgroup had vocabulary quotients below the 10th percentile (<75).

"Although this percentage is substantially better than for those who did not meet [Early Hearing Detection and Intervention] guidelines ... it points to the importance of identifying additional factors that may lead to improved vocabulary outcomes," they wrote.

Limitations of the study included that only expressive vocabulary was examined; the authors recommended that future studies consider additional language components. Another limitation was that disability status was determined by parent report, with the potential for misclassification.

The authors said that the results of their study emphasize the importance of pediatricians and other medical professionals helping to identify children with hearing loss at a younger age, adding that "only one-half to two-thirds of children met the guidelines" across participating states.

This article was republished with permission from MedPage Today.


Small Solution, Large Impact: Updating Hearing Aid Technology

By Apoorva Murarka

For many people, the sound quality and battery life of their devices are often no more than a second thought. But for hearing aid users, these are pivotal factors in being able to interact with the world around them.

One possible way to update existing technology – which has gone unchanged for decades – is small in size but monumental in impact. Apoorva Murarka, a Ph.D. candidate in electrical engineering at MIT, has developed an award-winning microspeaker to improve the functions of devices that emit sound. Murarka sees hearing aids as one of the most important applications of his new technology.

The Current Problem – Feeling the Heat

Most hearing aids have long used a system of coils and magnets to produce sound within the ear canal. These microspeakers use battery power to operate, and lots of it. Valuable battery life is wasted in the form of heat as an electric current works hard to travel through the coil to eventually help produce sound. The more limited a user’s hearing is, the more the speaker must work to produce sound, and ultimately that much more battery is used up. 
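The heat loss described here is ordinary resistive (Joule) dissipation, P = I²R, in the coil wire. A back-of-the-envelope sketch with assumed, order-of-magnitude values rather than measurements from any actual hearing aid:

```python
def resistive_loss_watts(current_amps: float, coil_resistance_ohms: float) -> float:
    """Power dissipated as heat in a coil: P = I^2 * R (Joule heating)."""
    return current_amps ** 2 * coil_resistance_ohms

# Assumed, purely illustrative values for a tiny coil driver:
current = 0.005        # 5 mA drive current
resistance = 100.0     # ohms of fine coil wire

heat = resistive_loss_watts(current, resistance)
print(f"{heat * 1e6:.0f} microwatts lost as heat")  # 2500 microwatts
```

Because the loss grows with the square of the current, driving the speaker harder (as a user with greater hearing loss must) wastes disproportionately more battery as heat.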

As a result, research has shown that many hearing aid users in the United States use about 80 to 120 batteries a year or have to recharge batteries daily. Aside from the anxiety that can accompany the varying dependability of this old technology, the cost of constantly replacing these batteries can quickly add up. 
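A quick arithmetic check of the running cost, using the battery figures cited above and an assumed per-battery price (retail prices vary widely):

```python
price_per_battery = 1.00  # assumed USD per disposable battery; illustrative

# 80 to 120 batteries per year, per the research cited above:
low, high = 80 * price_per_battery, 120 * price_per_battery
print(f"${low:.0f}-${high:.0f} per year in batteries")  # $80-$120 per year
```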

But battery life is not the only factor to consider. Because the coil and magnet system has not been updated in decades, the quality of sound produced by hearing aid speakers (without additional signal processing) has been just as limited. Even small upgrades in sound quality could make a world of difference for users.

The Future Solution – Going Smaller and Smarter

Apoorva Murarka has invented an alternative to the old coil and magnet system, removing those components completely from the picture. In their stead, he has developed an electrostatic transducer that relies on electrostatic force instead of magnetic force to vibrate the sound-producing diaphragm. This way of producing sound wastes much less energy, meaning significantly longer battery life in hearing aids. Apoorva was recently awarded the $15,000 Lemelson-MIT Student Prize for this groundbreaking development.

The biggest difference? Size. You would need to look closely to even see this microspeaker’s membrane – its thickness is about 1/1,000 the width of a human hair. 

Additionally, the microspeaker’s ultrathin membrane and micro-structured design enhance the quality of sound reproduced in the ear. Power savings due to the microspeaker’s electrostatic drive can be used to optimize other existing features in hearing aids such as noise filtration, directionality, and wireless streaming. This could pave the way for energy-efficient “smart” hearing aids that improve the quality of life for users significantly. 

This invention is being developed further and Apoorva hopes to work with the hard-of-hearing community, relevant organizations and hearing aid companies to understand the needs of users and explore how his invention can be adapted within hearing aids.

You can read more about Apoorva and his invention here.


An Animal Behavioral Model of Loudness Hyperacusis

By Kelly Radziwon, Ph.D., and Richard Salvi, Ph.D.

One of the defining features of hyperacusis is reduced sound level tolerance; individuals with “loudness hyperacusis” experience everyday sound volumes as uncomfortably loud and potentially painful. Given that loudness perception is a key behavioral correlate of hyperacusis, our lab at the University at Buffalo has developed a rat behavioral model of loudness estimation utilizing a reaction time paradigm. In this model, the rats were trained to remove their noses from a hole whenever a sound was heard. This task is similar to asking a human listener to raise his/her hand when a sound is played (the rats receive food rewards upon correctly detecting the sound).
 


FIGURE: Reaction time-Intensity functions for broadband noise bursts for 7 rats.

The rats are significantly faster following high-dose (300 mg/kg) salicylate administration (left panel; red squares) for moderate and high level sounds, indicative of temporary loudness hyperacusis. The rats showed no behavioral effect following low-dose (50 mg/kg) salicylate.

By establishing this trained behavioral response, we measured reaction time, or how fast the animal responds to a variety of sounds of varying intensities. Previous studies have established that the more intense a sound is, the faster a listener will respond to it. As a result, we thought having hyperacusis would influence reaction time due to an enhanced sensitivity to sound.
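The intensity-to-reaction-time relationship the paragraph describes is often summarized by Piéron's law, RT = R0 + k·I^(−β): louder sounds produce faster responses. A sketch with assumed parameter values (Piéron's law is a standard psychophysics model, not necessarily the exact function used in the study):

```python
def pieron_rt(intensity, r0=150.0, k=2000.0, beta=0.5):
    """Piéron's law: reaction time falls as stimulus intensity rises.

    r0   -- asymptotic (irreducible) reaction time, ms (assumed value)
    k    -- scaling constant (assumed value)
    beta -- exponent controlling how fast RT drops with intensity
    """
    return r0 + k * intensity ** -beta

# Louder sounds -> faster responses:
for level in (10, 100, 1000):
    print(level, round(pieron_rt(level)), "ms")
```

Under this picture, hyperacusis would shift the curve downward at moderate and high intensities: the same sound is perceived as louder, so the response comes faster.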

In our recent paper published in Hearing Research, we tested the hypothesis that high-dose sodium salicylate, the active ingredient in aspirin, can induce hyperacusis-like changes in rats trained in our behavioral paradigm. High-dose aspirin has long been known to induce temporary hearing loss and acute tinnitus in both humans and animals, and it has served as an extremely useful model to investigate the neural and biological mechanisms underlying tinnitus and hearing loss. Therefore, if the rats’ responses to sound were faster than typical following salicylate administration, we would have a relevant animal model of loudness hyperacusis.

Although prior hyperacusis research utilizing salicylate has demonstrated that high-dose sodium salicylate induced hyperacusis-like behavior, the effect of dosage and the stimulus frequency were not considered. We wanted to determine how the dosage of salicylate as well as the frequency of the tone bursts affected reaction time.

We found that salicylate caused a reduction in behavioral reaction time in a dose-dependent manner and across a range of stimulus frequencies, suggesting that both our behavioral paradigm and the salicylate model are useful tools in the broader study of hyperacusis. In addition, our behavioral results appear highly correlated with the physiological changes in the auditory system shown in earlier studies following both salicylate treatment and noise exposure, which points to a common neural mechanism in the generation of hyperacusis.

Although people with hyperacusis rarely attribute their hyperacusis to aspirin, the use of the salicylate model of hyperacusis in animals provides the necessary groundwork for future studies of noise-induced hyperacusis and loudness intolerance.


Kelly Radziwon, Ph.D., is a 2015 Emerging Research Grants recipient. Her grant was generously funded by Hyperacusis Research Ltd. Learn more about Radziwon and her work in “Meet the Researcher.”



NIH Researchers Show Protein in Inner Ear Is Key to How Cells That Help With Hearing and Balance Are Positioned

By the National Institute on Deafness and Other Communication Disorders (NIDCD)

Line of polarity reversal (LPR) and location of Emx2 within two inner ear structures. Arrows indicate hair bundle orientation. Source: eLife


Using animal models, scientists have demonstrated that a protein called Emx2 is critical to how specialized cells that are important for maintaining hearing and balance are positioned in the inner ear. Emx2 is a transcription factor, a type of protein that plays a role in how genes are regulated. Conducted by scientists at the National Institute on Deafness and Other Communication Disorders (NIDCD), part of the National Institutes of Health (NIH), the research offers new insight into how specialized sensory hair cells develop and function, providing opportunities for scientists to explore novel ways to treat hearing loss, balance disorders, and deafness. The results are published March 7, 2017, in eLife.

Our ability to hear and maintain balance relies on thousands of sensory hair cells in various parts of the inner ear. On top of these hair cells are clusters of tiny hair-like extensions called hair bundles. When triggered by sound, head movements, or other input, the hair bundles bend, opening channels that turn on the hair cells and create electrical signals to send information to the brain. These signals carry, for example, sound vibrations so the brain can tell us what we’ve heard or information about how our head is positioned or how it is moving, which the brain uses to help us maintain balance.

NIDCD researchers Doris Wu, Ph.D., chief of the Section on Sensory Cell Regeneration and Development and member of HHF’s Scientific Advisory Board, which provides oversight and guidance to our Hearing Restoration Project (HRP) consortium; Katie Kindt, Ph.D., acting chief of the Section on Sensory Cell Development and Function; and Tao Jiang, a doctoral student at the University of Maryland College Park, sought to describe how the hair cells and hair bundles in the inner ear are formed by exploring the role of Emx2, a protein known to be essential for the development of inner ear structures. They turned first to mice, which have been critical to helping scientists understand how intricate parts of the inner ear function in people.

Each hair bundle in the inner ear bends in only one direction to turn on the hair cell; when the bundle bends in the opposite direction, it is deactivated, or turned off, and the channels that sense vibrations close. Hair bundles in various sensory organs of the inner ear are oriented in a precise pattern. Scientists are just beginning to understand how the hair cells determine in which direction to point their hair bundles so that they perform their jobs.

In the parts of the inner ear where hair cells and their hair bundles convert sound vibrations into signals to the brain, the hair bundles are oriented in the same direction. The same is true for hair bundles involved in some aspects of balance, known as angular acceleration. But for hair cells involved in linear acceleration—or how the head senses the direction of forward and backward movement—the hair bundles divide into two regions that are oriented in opposite directions, which scientists call reversed polarity. The hair bundles face either toward or away from each other, depending on whether they are in the utricle or the saccule, two of the inner ear structures involved in balance. In mammals, the dividing line at which the hair bundles are oriented in opposite directions is called the line of polarity reversal (LPR).

Using gene expression analysis and loss- and gain-of-function analyses in mice that either lacked Emx2 or possessed extra amounts of the protein, the scientists found that Emx2 is expressed on only one side of the LPR. In addition, they discovered that Emx2 reversed hair bundle polarity by 180 degrees, thereby orienting hair bundles in the Emx2 region in opposite directions from hair bundles on the other side of the LPR. When the Emx2 was missing, the hair bundles in the same location were positioned to face the same direction.

Looking to other animals to see if Emx2 played the same role, they found that Emx2 reversed hair bundle orientation in the zebrafish neuromast, the organ where hair cells with reversed polarity that are sensitive to water movement reside.

These results suggest that Emx2 plays a key role in establishing the structural basis of hair bundle polarity and establishing the LPR. If Emx2 is found to function similarly in humans, as expected, the findings could help advance therapies for hearing loss and balance disorders. They could also advance research into understanding the mechanisms underlying sensory hair cell development within organs other than the inner ear.

This work was supported within the intramural laboratories of the NIDCD (ZIA DC000021 and ZIA DC000085).

Doris Wu, Ph.D., is a member of HHF’s Scientific Advisory Board, which provides oversight and guidance to our Hearing Restoration Project (HRP) consortium. This article was repurposed with permission from the National Institute on Deafness and Other Communication Disorders.



The Connection Between Hearing Loss and Dementia

By Alycia Gordan


June is Alzheimer's & Brain Awareness Month, and Hearing Health Foundation would like to shine a light on the effects untreated hearing loss can have on our brains and memory. Hearing loss is often linked with dementia, and research is being conducted to establish the exact link between the two. Evidence suggests that by treating hearing loss, the risk of dementia can be mitigated.

Dementia is a medical term that is used to describe a host of symptoms, characterized by a deterioration in a patient’s cognitive abilities. The degeneration of brain cells causes neurons to stop functioning, leading to a series of dysfunctions.

A person may have dementia if at least two of their mental faculties are affected, with symptoms such as loss of memory and focus, difficulty communicating, short or interrupted attention spans, impaired judgment, or an inability to perform everyday tasks.

Frank Lin, M.D., Ph.D., an associate professor of otolaryngology and epidemiology at Johns Hopkins University, conducted a study in 2011 in which the mental abilities of 639 cognitively stable individuals were monitored regularly for 12 to 18 years. The results indicated that volunteers with normal hearing were much less susceptible to developing dementia, while those with mild, moderate, and severe hearing loss were two, three, and five times more susceptible to the disorder, respectively.

Another study conducted by Lin in 2013 involved observing the cognitive abilities of 1,984 older adults over six years. The research concluded that older adults with hearing loss tended to experience cognitive decline 30 to 40 percent faster than those with normal hearing and were at a higher risk of developing dementia.
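As a rough illustration of what such risk multipliers mean in absolute terms, the arithmetic below applies the reported two-, three-, and five-fold figures to an assumed, purely illustrative baseline risk (the 5% baseline is not from either study):

```python
# Risk multipliers by degree of hearing loss, as reported in the 2011 study:
multipliers = {"normal": 1, "mild": 2, "moderate": 3, "severe": 5}

baseline_risk = 0.05  # assumed illustrative baseline dementia risk (5%)

for group, m in multipliers.items():
    print(f"{group:>8}: {baseline_risk * m:.0%} risk")
```

The point of the exercise: a fivefold multiplier turns even a modest baseline into a substantial absolute risk.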

What Is the Cause?

While the exact link between hearing loss and dementia remains a mystery, there are theories about how the former may aggravate the latter.

One theory suggests that when the brain struggles to cope with degraded sounds, its resources are diverted to processing those sounds, and this “cognitive load” decreases overall cognitive functioning. Moreover, hearing loss accelerates atrophy in parts of the cerebrum that are involved not only in processing sound but also in memory. In addition, it is speculated that the social isolation resulting from hearing loss causes stress and depression and exacerbates cognitive deterioration.

What Is the Solution?

Few studies have examined whether treating hearing loss can help prevent or slow dementia. However, the studies conducted so far do provide considerable hope.

One way to address profound hearing loss is with cochlear implants. French researcher Isabelle Mosnier, M.D., of the Assistance Publique-Hôpitaux de Paris, evaluated the effect of cochlear implants on cognitive functioning in 94 elderly people with profound deafness in at least one ear.

Mosnier found that hearing rehabilitation improved not only cognitive functioning of the elderly, but their speech perception as well.

The most direct link between auditory impairment and memory loss is the brain: any stimulus that keeps the brain alert helps keep the person active too. Hence, researchers are considering the use of music therapy to restore cognitive function in people who suffer from memory loss.

Concetta Tomaino, a cofounder of the Institute for Music and Neurological Function, found that music stimulates parts of the brain made inactive by dementia. In a pilot study, music therapy sessions were conducted with 45 individuals with chronic dementia and the results showed that neurological and cognitive abilities improved significantly for those in the music group.

This research shows there are techniques that can aid individuals with dementia and hearing loss. If you or a loved one has hearing problems, please see a hearing health professional to get a hearing test in order to potentially prevent future cognitive issues. 

Alycia Gordan writes for Brain Blog.



Success of Sensory Cell Regeneration Raises Hope for Hearing Restoration

By St. Jude Children's Research Hospital

Jian Zuo, Ph.D., and his colleagues induced supporting cells located in the inner ear of adult mice to take on the appearance of immature hair cells and to begin producing some of the signature proteins of hair cells.


In an apparent first, St. Jude Children's Research Hospital investigators have used genetic manipulation to regenerate auditory hair cells in adult mice. The research marks a possible advance in treatment of hearing loss in humans. The study appears today in the journal Cell Reports.

Loss of auditory hair cells due to prolonged exposure to loud noise, accidents, illness, aging or medication is a leading cause of hearing loss and long-term disability in adults worldwide. Some childhood cancer survivors are also at risk because of hair cell damage due to certain chemotherapy agents. Treatment has focused on electronic devices like hearing aids or cochlear implants because once lost, human auditory hair cells do not grow back.

"In this study, we looked to Mother Nature for answers and we were rewarded," said corresponding author Jian Zuo, Ph.D., a member of the St. Jude Department of Developmental Neurobiology. "Unlike in humans, auditory hair cells do regenerate in fish and chicken. The process involves down-regulating expression of the protein p27 and up-regulating the expression of the protein Atoh1. So we tried the same approach in specially bred mice."

By manipulating the same genes, Zuo and his colleagues induced supporting cells located in the inner ear of adult mice to take on the appearance of immature hair cells and to begin producing some of the signature proteins of hair cells.

The scientists also identified a genetic pathway for hair cell regeneration and detailed how proteins in that pathway cooperate to foster the process. The pathway includes the proteins GATA3 and POU4F3 along with p27 and ATOH1. In fact, investigators found that POU4F3 alone was sufficient to regenerate hair cells, but that more hair cells were regenerated when both ATOH1 and POU4F3 were involved.

"Work in other organs has shown that reprogramming cells is rarely accomplished by manipulating a single factor," Zuo said. "This study suggests that supporting cells in the cochlea are no exception and may benefit from therapies that target the proteins identified in this study."

The findings have implications for a phase 1 clinical trial now underway that uses gene therapy to restart expression of ATOH1 to regenerate hair cells for treatment of hearing loss.

ATOH1 is a transcription factor necessary for hair cell development. In humans and other mammals, the gene is switched off when the process is complete. In humans, ATOH1 production ceases before birth.

"This study suggests that targeting p27, GATA3 and POU4F3 may enhance the outcome of gene therapy and other approaches that aim to restart ATOH1 expression," Zuo said.

The research also revealed a novel role for p27. The protein is best known as serving as a check on cell proliferation. However, in this study p27 suppressed GATA3 production. Since GATA3 and ATOH1 work together to increase expression of POU4F3, reducing GATA3 levels also reduced expression of POU4F3. When the p27 gene was deleted in mice, GATA3 levels increased along with expression of POU4F3. Hair cell regeneration increased as well.
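The regulatory logic described in this and the preceding paragraphs (p27 suppresses GATA3; GATA3 and ATOH1 cooperate to raise POU4F3) can be caricatured as a toy qualitative model. This boolean sketch is illustrative only; real gene regulation is quantitative, and this is not how the authors modeled the pathway:

```python
def pou4f3_level(p27_present: bool, atoh1_active: bool) -> int:
    """Toy qualitative model of the pathway described in the article.

    Returns a POU4F3 level: 0 (low), 1 (moderate), or 2 (high).
    p27 suppresses GATA3; GATA3 and ATOH1 together boost POU4F3.
    """
    gata3 = 0 if p27_present else 1   # deleting p27 de-represses GATA3
    if not atoh1_active:
        return 0                      # ATOH1 is required for hair cell fate
    return 1 + gata3                  # ATOH1 alone -> 1; with GATA3 -> 2

print(pou4f3_level(p27_present=True,  atoh1_active=True))   # 1
print(pou4f3_level(p27_present=False, atoh1_active=True))   # 2 (p27 deleted)
```

The model reproduces the reported observation in caricature: knocking out p27 raises GATA3 and, with ATOH1 active, pushes POU4F3 (and hence regeneration) higher.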

"Work continues to identify the other factors, including small molecules, necessary to not only promote the maturation and survival of the newly generated hair cells, but also increase their number," Zuo said.

Bradley J. Walters, Ph.D., was a 2012 Hearing Health Foundation Emerging Research Grants recipient. This article was repurposed with permission.


A Balancing Act Before the Onset of Hearing

By Sonja J. Pyott, Ph.D.


Our ability to hear relies on the proper connections between the sensory hair cells in the inner ear and the brain. Activity of the sensory hair cells (red) and these connections (green) before hearing begins is essential for the proper development of hearing. The research conducted by Sonja J. Pyott, Ph.D., and colleagues investigated the mechanisms that regulate this activity.

The development of the auditory system begins in the womb and culminates in a newborn’s ability to hear upon entering the world. While the age at which hearing begins varies across mammals, the sensory structures of the inner ears are active before the onset of hearing. This activity instructs the maturation of the neural connections between the inner ear and brain, an essential component of the proper development of hearing. However, we still know very little about the mechanisms regulating the activity of these sensory structures and their neural connections, specifically during the critical period just before the onset of hearing.

In our paper, “mGluR1 enhances efferent inhibition of inner hair cells in the developing rat cochlea,” soon to be published in an upcoming issue of The Journal of Physiology, we investigate the role of glutamate, a neurotransmitter, in regulating activity of the sensory structures and their connections in the inner ear before the start of hearing.

Neurotransmitters assist in communication between neurons and are typically classified as either excitatory or inhibitory based on their action: excitatory action stimulates neural activity, while inhibitory action dampens it. Our research found that although glutamate typically excites activity, it can also elicit inhibitory activity. This dual role occurs because glutamate activates two distinct classes of glutamate receptors: ionotropic glutamate receptors (iGluRs) and metabotropic glutamate receptors (mGluRs).

Importantly, this dual activation balances excitatory and inhibitory activity of the sensory structures, a balance that is likely important in the final refinement of the neural connections between the inner ear and brain prior to the onset of hearing.

As part of future research, we will further investigate the role of mGluRs, one of the two classes of glutamate receptors, in the development of hearing. We will also investigate whether mGluRs balance excitatory and inhibitory activity in the adult inner ear, similar to their role prior to the onset of hearing. Insights into these mechanisms may identify new ways to modulate activity and prevent congenital or acquired hearing loss.

Study coauthor Sonja J. Pyott, Ph.D., was a 2007 and 2008 Hearing Health Foundation Emerging Research Grants recipient.

We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.

 
 

Cortical Alpha Oscillations Predict Speech Intelligibility

By Andrew Dimitrijevic, Ph.D.

Hearing Health Foundation Emerging Research Grants recipient Andrew Dimitrijevic, Ph.D., and colleagues recently published “Cortical Alpha Oscillations Predict Speech Intelligibility” in the journal Frontiers in Human Neuroscience.

The scientists measured brain activity that originates from the cortex, known as alpha rhythms. Previous research has linked these rhythms to sensory processes involving working memory and attention, two tasks crucial to listening to speech in noise. However, no previous research had studied alpha rhythms directly during a clinical speech-in-noise perception task. The purpose of this study was to measure alpha rhythms during attentive listening in a commonly used speech-in-noise task, known as digits-in-noise (DiN), to better understand the neural processes associated with hearing speech in noise.

Fourteen typical-hearing young adult subjects performed the DiN test while wearing electrode caps to measure alpha rhythms. All subjects completed the task in active and passive listening conditions. The active condition mimicked attentive listening and asked the subject to repeat the digits heard in varying levels of background noise. In the passive condition, the subjects were instructed to ignore the digits and watch a movie of their choice, with captions and no audio.

Two key findings emerged from this study regarding the influence of attention, individual variability, and the predictability of correct recall.

First, the authors concluded that the active condition produced alpha rhythms, while passive listening yielded no such activity. Selective auditory attention can therefore be indexed through this measurement. This result also illustrates that these alpha rhythms arise from neural processes associated with selective attention, rather than from the physical characteristics of sound. To the authors’ knowledge, these differences between passive and active conditions have not previously been reported.

Second, all participants showed similar brain activation that predicted when one was going to make a mistake on the DiN task. Specifically, the magnitude of one particular aspect of alpha rhythms correlated with comprehension: it was larger on correct trials than on incorrect trials. This finding was consistent throughout the study and has great potential for clinical use.

Dimitrijevic and his colleagues’ novel findings propel the field’s understanding of the neural activity related to speech-in-noise tasks. They also inform the assessment of clinical populations with speech-in-noise deficits, such as those with auditory neuropathy spectrum disorder or central auditory processing disorder (CAPD).

Future research will attempt to use this alpha rhythms paradigm in typically developing children and those with CAPD. Ultimately, the scientists hope to develop a clinical tool to better assess listening in a more real-world situation, such as in the presence of background noise, to augment traditional audiological testing.

Andrew Dimitrijevic, Ph.D., is a 2015 Emerging Research Grantee and General Grand Chapter Royal Arch Masons International award recipient. Hearing Health Foundation would like to thank the Royal Arch Masons for their generous contributions to Emerging Research Grants scientists working in the area of central auditory processing disorders (CAPD). We appreciate their ongoing commitment to funding CAPD research.

We need your help supporting innovative hearing and balance science. Please make a contribution today.


John Brigande provides commentary: Hearing in the mouse of Usher

Oregon Health & Science University

The March issue of Nature Biotechnology brings together a set of articles that provide an overview of promising RNA-based therapies and the challenges of clinical validation and commercialization. In his News and Views essay, “Hearing in the mouse of Usher,” John V. Brigande, Ph.D., provides commentary on two studies in the issue that report important progress in research on gene therapy for the inner ear.

One in eight people in the United States aged 12 years or older has hearing loss in both ears. That figure suggests that, if you don’t have hearing loss, you likely know someone who does. Worldwide, hearing loss profoundly interferes with life tasks like learning and interpersonal communication for an estimated 32 million children and 328 million adults. Inherited genetic mutations cause about 50 percent of these cases.

The challenge in developing gene therapy for the inner ear isn’t a lack of known genes associated with hearing loss, but a lack of vectors to deliver DNA into cells. Brigande, associate professor of otolaryngology and cell, developmental, and cancer biology at the OHSU School of Medicine, provides perspective on companion studies demonstrating that adeno-associated viral vectors are potent gene transfer agents for cochlear cell targets.

The first study demonstrates safe and efficient gene transfer to hair cells of the mouse inner ear using a synthetic adeno-associated viral vector that promises to be a powerful starting point for developing appropriate vectors for use in the human inner ear. The second study demonstrates that a single neonatal treatment with this viral vector successfully delivers a healthy gene to the inner ear to achieve unprecedented recovery of hearing and balance in a mouse model of a disease called Usher syndrome. Individuals with Usher syndrome type 1c are born deaf and with profound balance issues and experience vision loss by early adolescence. The research teams were led by scientists from Harvard Medical School.

Brigande sees these new studies as potentially spurring investment and kickstarting the development of new approaches to correct a diverse set of deafness genes. 

Hearing Restoration Project consortium member John V. Brigande, Ph.D., is a developmental neurobiologist at the Oregon Hearing Research Center. He also teaches in the Neuroscience Graduate Program and in the Program in Molecular and Cellular Biology at the Oregon Health & Science University. This blog was reposted with the permission of Oregon Health & Science University.
