Research

Cellular Changes and Ménière’s Disease Symptoms

By Carol Stoll

Ménière’s disease is characterized by fluctuating hearing loss, vertigo, tinnitus, and ear fullness, but the causes of these symptoms are not well understood. Past research has suggested that a damaged blood labyrinthine barrier (BLB) in the inner ear may be involved in the pathophysiology of inner ear disorders. Hearing Health Foundation (HHF)’s 2016 Emerging Research Grants (ERG) recipient Gail Ishiyama, M.D., was the first to test this proposition, using electron microscopy to analyze the BLB in patients with and without Ménière’s disease. Ishiyama’s research was fully funded by HHF and was recently published in Scientific Reports, a Nature Publishing Group journal.


The BLB in a Ménière’s disease capillary. a) Capillary located in the stroma of the macula utricle from a Ménière’s subject (55-year-old male). The lumen (lu) of the capillary is narrow, vascular endothelial cells (vec) are swollen, and the cytoplasm is vacuolated (pink asterisks). b) Diagram showing the alterations in the swollen vec; microvacuoles are also abundant (v). Abbreviations: rbc, red blood cells; tj, tight junctions; m, mitochondria; n, cell nucleus; pp, pericyte process; pvbm, perivascular basement membrane. Bar is 2 microns.

The BLB is composed of a network of vascular endothelial cells (VECs) that line all capillaries in the inner ear organs to separate the vasculature (blood vessels) from the inner ear fluids. A critical function of the BLB is to maintain the proper composition and levels of inner ear fluid via selective permeability. However, the inner ear fluid space in patients with Ménière’s has been shown to be ballooned out due to excess fluid. Additionally, Ishiyama’s group had identified permeability changes in magnetic resonance imaging studies of Ménière’s patients, which may be an indication of BLB malfunction.

Ishiyama’s research team used transmission electron microscopy (TEM) to investigate the fine cellular structure of the BLB in the utricle, a balance-regulating organ of the inner ear. Two utricles were obtained at autopsy from individuals with no vestibular or auditory disease. Five utricles were surgically removed from patients with severe stage IV Ménière’s disease, with profound hearing loss and intractable recurrent vertigo spells, who were undergoing surgery as a curative treatment.

Microscopic examination revealed significant structural differences of the BLB within the utricle between individuals with and without Ménière’s disease. In the normal utricle samples, the VECs of the BLB contained numerous mitochondria and very few fluid-containing organelles called vesicles and vacuoles. The cells were connected by tight junctions to form a smooth, continuous lining, and were surrounded by a uniform membrane.

However, samples with confirmed Ménière’s disease showed varying degrees of structural changes within the VECs; while the VECs remained connected by tight junctions, an increased number of vesicles and vacuoles was found, which may cause swelling and degeneration of other organelles. In the most severe case, there was complete VEC necrosis, or cell death, and a severe thickening of the basal membrane surrounding the VECs.

The documentation of the cellular changes in the utricle of Ménière’s patients was the first of its kind and has important implications for future treatments. Ishiyama’s study concluded that the alteration and degeneration of the BLB likely contributes to fluid changes in the inner ear organs that regulate hearing and balance, thus causing the Ménière’s symptoms. Further scientific understanding of the specific cellular and molecular components affected by Ménière’s can lead to the development of new drug therapies that target the BLB to decrease vascular damage in the inner ear.

Gail Ishiyama, M.D., is a 2016 Emerging Research Grants recipient. Her grant was generously funded by The Estate of Howard F. Schum.

WE NEED YOUR HELP IN FUNDING THE EXCITING WORK OF HEARING AND BALANCE SCIENTISTS. DONATE TODAY TO HEARING HEALTH FOUNDATION AND SUPPORT GROUNDBREAKING RESEARCH: HHF.ORG/DONATE.


Therapies for Hearing Loss: What Is Being Tested?

By Pranav Parikh


Untreated hearing loss is linked to a lower quality of life, reduced physical functioning, and impaired communication. The most common type of hearing loss, sensorineural, is often the result of damage to the delicate sensory hair cells in the inner ear. Because hair cell loss is irreversible, and the resulting hearing impairment therefore permanent, new treatment strategies are welcome. In the July 2017 issue of Otology & Neurotology, Hearing Restoration Project (HRP) consortium member Ronna Hertzano, M.D., Ph.D., and Debara L. Tucci, M.D., a member of Hearing Health Foundation’s Council of Scientific Trustees (CST), along with Matthew Gordon Crowson, M.D., examined the field of emerging therapies for sensorineural hearing loss.

The team identified 22 active clinical drug trials in the U.S. and reviewed six potential therapies. Four use mechanisms to reduce the oxidative stress believed to be involved in inner ear cell death. Three of the therapeutic molecules being tested—D-methionine, N-acetylcysteine (NAC), and a glutathione peroxidase mimic (ebselen)—act as antioxidants to mop up free radicals caused by noise or other trauma to the inner ear. (For more about D-methionine, see page 11.) The fourth, sodium thiosulfate, is a chemical found to counteract the ototoxic effects of chemotherapy drugs.

The fifth approach is to manipulate the “cell death cascade.” This cascade occurs when cells endure significant stress or injury, leading to the release of free radicals and changes in pH and proteins that then kill the cell. Since hair cells do not regenerate like other cells, the cell death cascade causes permanent hearing loss. A trial is underway to make the cochlear neuroepithelium (inner ear tissue) more resilient to cell death signaling, using an inhibitor called AM-111 to block the chain of events leading to cell death. Finally, the sixth approach is a novel hair cell replacement therapy using the gene Atoh1, a vital regulator of hair cell regeneration that causes cells to differentiate (change) into hair cells. Mouse models have shown that if Atoh1 is blocked, hair cell differentiation does not occur, and if it is induced, hair cell formation occurs, at least in the ears of very young mice.

Drug delivery methods to the inner ear are also being investigated. In addition to oral administration, delivery methods include a topical ear gel, intravenous infusion, and, most radically, direct injection of viruses to deliver genes to the inner ear. And while many of the drugs had to overcome hurdles to reach late-phase clinical trials, questions about safety, efficacy, and side effects remain, in addition to whether animal model results translate to human biology.


HRP consortium member Ronna Hertzano, M.D., Ph.D. (far left), is an assistant professor at the University of Maryland School of Medicine. HHF CST member Debara L. Tucci, M.D., is a professor at Duke University Medical Center in North Carolina.

This article originally appeared in the Fall 2017 issue of Hearing Health magazine. Find it here, along with many other innovative research updates. 

Empower the Hearing Restoration Project's life-changing research. If you are able, please make a contribution today.

 
 

Gaining Better Clarity of Neural Networks

By Pranav Parikh

The ear, like any other organ in the human body, relies on nerves to function properly. One of the most vital is the cochlear nerve, which connects the inner ear to the brain, more specifically to the tonotopically organized regions of the cochlear nuclear complex in the brainstem. This nerve shares the shape and design of most nerves in the body: dendrites gather information from various sources, the signal travels down the axon as action potentials, and it terminates at a synapse so the message can be passed on. To allow this process to occur expediently, the nerve undergoes myelination, the addition of a myelin sheath that propagates signals faster. This is carried out by a type of glial cell known as an oligodendrocyte. Oligodendrocytes form a layer of lipid (fat) and protein around the axon to provide insulation, allowing signals to be sent to the brain more efficiently.


The immunoreactivity of Olig2 was detected during postnatal days (PND) 0 to 7 and became weaker after PND 10. Before PND 7, the majority of Olig2-expressing cells were found within the modiolus at the basal cochlear turn, while a few cells were located peripherally to the DIC-PCTZ and in close proximity to the spiral lamina at the basal cochlear turn. After PND 7, Olig2-expressing cells fully overlapped with the DIC-PCTZ within the modiolus at the spiral lamina in the basal cochlea.

A team of scientists led by Zhengqing Hu, M.D., Ph.D., funded by Hearing Health Foundation through its Emerging Research Grants program (2010 and 2011), analyzed oligodendrocyte protein expression in the cochlear nerve of postnatal mice. Using differential interference contrast (DIC) microscopy, they investigated the cochlear nerve at successive postnatal days, the period following birth.

Their findings indicate that oligodendrocytes migrate along the transition zone between the central and peripheral nervous systems. As the animal develops after birth, and myelination occurs in the nerves connecting to the brain, the oligodendrocyte protein marker Olig2 was observed. This suggests that loss of hearing function could be connected to unmyelinated axons. Many other neurodegenerative autoimmune diseases, such as multiple sclerosis, are caused by demyelination, and hearing loss could potentially be added to that list. Hu’s work improves our understanding of the neural network connecting the inner ear and the brain.

Zhengqing Hu, M.D., Ph.D., is a 2010 and 2011 Emerging Research Grants recipient. Hu's research was published in Otolaryngology–Head and Neck Surgery on July 11, 2017.

We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.

 
 

New Clues to Sound Localization Issues in Fragile X Syndrome

By Pranav Parikh, Kathleen Wallace, and Elizabeth McCullagh, Ph.D.

Fragile X syndrome (FXS), the most common genetic form of autism, is characterized by impaired cognition, hyperactivity, seizures, attention deficits, and hypersensitivity to sensory stimuli, specifically auditory stimuli.

Individuals with FXS also experience difficulty determining the source of a sound, known as sound localization. Sound localization is essential for listening in the presence of background noise, such as in a noisy classroom. The ability to localize sound properly depends on precise excitatory and inhibitory inputs to areas of the brain. 2016 Emerging Research Grants recipient Elizabeth McCullagh, Ph.D., and colleagues hypothesize that the auditory symptoms seen in FXS, specifically issues with sound localization, are due to an overall imbalance of excitatory and inhibitory synaptic input in these brain areas.


Investigators compared the number and size of synaptic structures in different areas of the brainstem responsible for sound localization, for several inhibitory neurotransmitters (glycine and GABA) and the primary excitatory neurotransmitter (glutamate), between a mouse model of FXS and a control group. The areas of the brainstem responsible for sound localization are connected to one another in a frequency-specific manner, with low-frequency sounds stimulating similar areas, and likewise for high frequencies. Most areas of the brainstem examined showed no changes in the number or size of structures, but one area—the medial nucleus of the trapezoid body (MNTB)—had alterations to inhibitory inputs that were specific to the frequency encoded by that region. Glycinergic inhibition was decreased in the high-frequency region of the MNTB, while GABAergic inhibition was decreased in the low-frequency region.

The study by McCullagh and team in The Journal of Comparative Neurology is the first to explore alterations in glycinergic inhibition in the auditory brainstem of FXS mice. Due to the well-characterized functional roles of excitatory and inhibitory neurotransmitters in the auditory brainstem, the sound localization pathway is an ideal circuit to measure the sensory alterations of FXS. Given the findings in this study, further knowledge of the alterations in the lower auditory areas, such as the tonotopic differences in inhibition to the MNTB, may be necessary to better understand the altered sound processing found in those with FXS.

Elizabeth McCullagh, Ph.D., was a 2016 Emerging Research Grants scientist and a General Grand Chapter Royal Arch Masons International award recipient. For more, see “Tonotopic alterations in inhibitory input to the medial nucleus of the trapezoid body in a mouse model of Fragile X syndrome” in The Journal of Comparative Neurology.

We need your help supporting innovative hearing and balance science through our Emerging Research Grants program. Please make a contribution today.

 
 

New Study Suggests Serotonin May Worsen Tinnitus

Millions of people suffer from tinnitus, the constant sensation of ringing or buzzing in the ears, which creates constant irritation for some and severe anxiety for others. New research by scientists at Oregon Health & Science University (OHSU) shows why a common antidepressant medication may actually worsen the condition.


Researcher Discovers Gene Mutation Related to Usher Syndrome Type 3

By Pranav Parikh

Usher syndrome type 3 is an inherited disease in which an individual is born with typical hearing and develops hearing loss in early childhood, most likely progressing to complete hearing loss by adulthood. Though cases of Usher syndrome type 3 (and its subtypes) are quite infrequent, representing 2 percent of total Usher syndrome cases, the symptoms at onset have damaging and often irreversible consequences that severely disrupt the lives of those living with the condition. There is currently no cure for the disease, but cochlear implants have had some success in providing partial hearing function in patients.

A 3D model of the HARS enzyme, including the catalytic site (where the reaction occurs) and the anticodon site (the part that starts protein synthesis through RNA transcription).


Susan Robey-Bond, Ph.D., a 2012 Emerging Research Grants scientist, and her team at the University of Vermont College of Medicine isolated a mechanism involved in the development of Usher syndrome. Histidyl-tRNA synthetase, abbreviated HARS, is an enzyme that is instrumental in protein synthesis and is thought to be involved in the presentation of Usher syndrome type 3B. The early symptoms (temporary hearing and vision loss, hallucinations, and sometimes a sudden, fatal buildup of fluid in the lungs) may be triggered by a fever-causing illness. The hearing and vision loss eventually become severe and permanent.

A graphical representation depicting temperature variation between the wild-type and mutant version of the HARS enzyme.


Usher syndrome type 3B is autosomal recessive, meaning a child must inherit the mutated gene from both parents; carrier parents typically display no symptoms, and each of their children has a one-in-four chance of developing the disease. It is caused by the USH3B mutation, which substitutes a serine for a tyrosine in HARS. Studying the biochemical properties of the enzyme, the team compared the Y454S mutant of HARS with its wild-type (non-mutated) form and found similar functional biochemical characteristics, as reported in the researchers’ recent paper in Biochemistry.

The amino acid activation, aminoacylation, and tRNA binding functions were all comparable between the mutant and wild-type enzymes. In later analysis, though, the team found that at an elevated temperature the Y454S substitution was less stable than the wild-type. More specifically, cells from patients carrying the Y454S mutation displayed lower levels of protein synthesis, which could explain the onset of deafness these patients experience. Understanding how these proteins are implicated in hearing will eventually help in developing cures or better treatments for Usher syndrome.

Susan Robey-Bond, Ph.D., was a 2012 Emerging Research Grants recipient. For more, see her Biochemistry paper, “The Usher Syndrome Type IIIB Histidyl-tRNA Synthetase Mutation Confers Temperature Sensitivity.”



Early Detection Improved Vocabulary Scores in Kids with Hearing Loss

By Molly Walker

Children with hearing loss in both ears had improved vocabulary skills if they met all of the Early Hearing Detection and Intervention guidelines, a small cross-sectional study found.

Those children with bilateral hearing loss who met all three components of the Early Hearing Detection and Intervention guidelines (hearing screening by 1 month, diagnosis of hearing loss by 3 months, and intervention by 6 months) had significantly higher vocabulary quotients, reported Christine Yoshinaga-Itano, PhD, of the University of Colorado Boulder, writing in Pediatrics.

The authors added that recent research reported better language outcomes for children born in areas of the country during years when universal newborn hearing screening programs were implemented, and that these children also experienced long-term benefits in reading ability. The authors said that studies in the U.S. also reported better language outcomes for children whose hearing loss was identified early, who received hearing aids earlier, or who began intervention services earlier. But those studies were limited in geographic scope or contained outdated definitions of "early" hearing loss.

"To date, no studies have reported vocabulary or other language outcomes of children meeting all three components of the [Early Hearing Detection and Intervention] guidelines," they wrote.

Researchers examined a cohort of 448 children with bilateral prelingual hearing loss between 8 and 39 months of age (mean 25.3 months), who participated in the National Early Childhood Assessment Project, a large multistate study. About 80% of children had no additional disabilities that interfered with their language capabilities, while over half of the children with additional disabilities had cognitive impairment. Expressive vocabulary was measured with the MacArthur-Bates Communicative Development Inventories.

While meeting all three components of the Early Hearing Detection and Intervention guidelines was the primary variable, the authors entered five other independent predictor variables into the analysis:

  • Chronological age

  • Disability status

  • Mother's level of education

  • Degree of loss

  • Adult who is deaf/hard of hearing

They wrote that the overall model was significantly predictive, with the combination of the six factors explaining 41% of the variance in vocabulary outcomes. Higher vocabulary quotients were predicted by higher maternal levels of education, lesser degrees of hearing loss, and the presence of a parent who was deaf or hard of hearing, in addition to the absence of additional disabilities, the authors said. But even after controlling for these factors, meeting all three components of the Early Hearing Detection and Intervention guidelines had "a meaningful impact" on vocabulary outcomes.
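The phrase "explaining 41% of the variance" comes from multiple regression. The sketch below illustrates how that figure (R²) is computed; every predictor coding, coefficient, and data value here is invented for illustration and is not taken from the study.

```python
# Toy multiple-regression sketch: how "percent of variance explained" (R^2)
# is computed from predictors and an outcome. All numbers are synthetic;
# they are NOT the study's data.
import numpy as np

rng = np.random.default_rng(0)
n = 448  # same cohort size as the study; the data itself is made up

# Hypothetical predictors, coded numerically for the sketch
age_months = rng.uniform(8, 39, n)
met_guidelines = rng.integers(0, 2, n)    # met all three EHDI components?
maternal_edu = rng.integers(1, 6, n)      # education level, 1-5
degree_of_loss = rng.uniform(25, 110, n)  # hearing loss in dB

# Synthetic outcome: a vocabulary quotient driven partly by the predictors
vq = (90 - 0.4 * age_months + 8 * met_guidelines
      + 2 * maternal_edu - 0.1 * degree_of_loss
      + rng.normal(0, 10, n))

# Ordinary least squares fit
X = np.column_stack([np.ones(n), age_months, met_guidelines,
                     maternal_edu, degree_of_loss])
beta, *_ = np.linalg.lstsq(X, vq, rcond=None)
pred = X @ beta

# R^2 = 1 - residual variance / total variance
r2 = 1 - np.sum((vq - pred) ** 2) / np.sum((vq - vq.mean()) ** 2)
print(f"variance explained (R^2): {r2:.2f}")
```

R² compares the residual spread left after the fit with the total spread of the outcome; an R² of 0.41 would mean the predictors jointly account for 41% of the variation in vocabulary quotients.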

The authors also said that mean vocabulary quotients decreased as a child's chronological age increased, and this gap was greater for older children. They argued that this complements previous findings, where children with hearing loss fail to acquire vocabulary at the pace of hearing children.

Overall, the mean vocabulary quotient was 74.4. For children without disabilities, the mean vocabulary quotient was 77.6, and for those with additional disabilities, it was 59.8.

Even those children without additional disabilities who met the guidelines had a mean vocabulary quotient of 82, which the authors noted was "considerably less" than the expected mean of 100. They added that 37% of this subgroup had vocabulary quotients below the 10th percentile (<75).

"Although this percentage is substantially better than for those who did not meet [Early Hearing Detection and Intervention] guidelines ... it points to the importance of identifying additional factors that may lead to improved vocabulary outcomes," they wrote.

Limitations of the study included that only expressive vocabulary was examined; the authors recommended that future studies consider additional language components. Another limitation was that disability status was determined by parent report, with the potential for misclassification.

The authors said that the results of their study emphasize the importance of pediatricians and other medical professionals helping to identify children with hearing loss at a younger age, adding that "only one-half to two-thirds of children met the guidelines" across participating states.

This article was republished with permission from MedPage Today.


Small Solution, Large Impact: Updating Hearing Aid Technology

By Apoorva Murarka

For many people, the sound quality and battery life of their devices are often no more than a second thought. But for hearing aid users, these are pivotal factors in being able to interact with the world around them.

One possible way to update existing technology – which has gone unchanged for decades – is small in size but monumental in impact. Apoorva Murarka, a Ph.D. candidate in electrical engineering at MIT, has developed an award-winning microspeaker to improve the functions of devices that emit sound. Murarka sees hearing aids as one of the most important applications of his new technology.

The Current Problem – Feeling the Heat

Most hearing aids have long used a system of coils and magnets to produce sound within the ear canal. These microspeakers use battery power to operate, and lots of it. Valuable battery life is wasted in the form of heat as an electric current works hard to travel through the coil to eventually help produce sound. The more limited a user’s hearing is, the more the speaker must work to produce sound, and ultimately that much more battery is used up. 

As a result, research has shown that many hearing aid users in the United States use about 80 to 120 batteries a year or have to recharge batteries daily. Aside from the anxiety that can accompany the varying dependability of this old technology, the cost of constantly replacing these batteries can quickly add up. 
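The heat loss described above is ordinary resistive dissipation (P = I²R). As a rough back-of-the-envelope sketch, with every number invented rather than taken from any real hearing aid or battery specification:

```python
# Back-of-the-envelope resistive-loss estimate for a coil-driven microspeaker.
# Every figure here is hypothetical, chosen only to illustrate the arithmetic.
current_a = 0.001          # 1 mA drive current (hypothetical)
coil_resistance_ohm = 150  # coil resistance (hypothetical)

# Power dissipated as heat in the coil: P = I^2 * R
heat_w = current_a ** 2 * coil_resistance_ohm

# Hypothetical small hearing aid cell
battery_capacity_mah = 70
battery_voltage_v = 1.4
# Energy budget in joules: (Ah) * (seconds per hour) * (volts)
energy_j = battery_capacity_mah / 1000 * 3600 * battery_voltage_v

print(f"heat dissipated in coil: {heat_w * 1e6:.0f} microwatts")
print(f"battery energy budget:   {energy_j:.0f} joules")
```

Any power dissipated as heat in the coil is drawn from the same small energy budget that must also run the amplifier and signal processing, which is why cutting drive losses translates directly into longer battery life.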

But battery life is not the only factor to consider. Because the coil and magnet system has not been updated in decades, the quality of sound produced by hearing aid speakers (without additional signal processing) has been just as limited. Even small upgrades in sound quality could make a world of difference for users.

The Future Solution – Going Smaller and Smarter

Apoorva Murarka has invented an alternative to the old coil and magnet system, removing those components completely from the picture. In their stead, he has developed an electrostatic transducer that relies on electrostatic force instead of magnetic force to vibrate the sound-producing diaphragm. This way of producing sound wastes much less energy, meaning significantly longer battery life in hearing aids. Apoorva was recently awarded the $15,000 Lemelson-MIT Student Prize for this groundbreaking development.

The biggest difference? Size. You would need to look closely to even see this microspeaker’s membrane – its thickness is about 1/1,000 the width of a human hair. 

Additionally, the microspeaker’s ultrathin membrane and micro-structured design enhance the quality of sound reproduced in the ear. Power savings due to the microspeaker’s electrostatic drive can be used to optimize other existing features in hearing aids such as noise filtration, directionality, and wireless streaming. This could pave the way for energy-efficient “smart” hearing aids that improve the quality of life for users significantly. 

This invention is being developed further, and Apoorva hopes to work with the hard-of-hearing community, relevant organizations, and hearing aid companies to understand the needs of users and explore how his invention can be adapted within hearing aids.

You can read more about Apoorva and his invention here.


An Animal Behavioral Model of Loudness Hyperacusis

By Kelly Radziwon, Ph.D., and Richard Salvi, Ph.D.

One of the defining features of hyperacusis is reduced sound level tolerance; individuals with “loudness hyperacusis” experience everyday sound volumes as uncomfortably loud and potentially painful. Given that loudness perception is a key behavioral correlate of hyperacusis, our lab at the University at Buffalo has developed a rat behavioral model of loudness estimation utilizing a reaction time paradigm. In this model, the rats were trained to remove their noses from a hole whenever a sound was heard. This task is similar to asking a human listener to raise a hand when a sound is played (the rats receive food rewards for correctly detecting the sound).
 


FIGURE: Reaction time-Intensity functions for broadband noise bursts for 7 rats.

The rats are significantly faster following high-dose (300 mg/kg) salicylate administration (left panel; red squares) for moderate and high level sounds, indicative of temporary loudness hyperacusis. The rats showed no behavioral effect following low-dose (50 mg/kg) salicylate.

By establishing this trained behavioral response, we could measure reaction time: how fast the animal responds to sounds of varying intensities. Previous studies have established that the more intense a sound is, the faster a listener responds to it. We therefore expected hyperacusis, with its enhanced sensitivity to sound, to influence reaction time.

In our recent paper published in Hearing Research, we tested the hypothesis that high-dose sodium salicylate, the active ingredient in aspirin, can induce hyperacusis-like changes in rats trained in our behavioral paradigm. High-dose aspirin has long been known to induce temporary hearing loss and acute tinnitus in both humans and animals, and it has served as an extremely useful model for investigating the neural and biological mechanisms underlying tinnitus and hearing loss. Therefore, if the rats responded to sound faster than usual following salicylate administration, we would have a relevant animal model of loudness hyperacusis.
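As a toy illustration of this reaction-time logic, the snippet below compares hypothetical mean reaction times before and after treatment at each sound level; the numbers are invented for illustration and are not the lab's data.

```python
# Sketch of a reaction-time-vs-intensity comparison. All values are
# invented for illustration; they are not data from the Buffalo lab.
from statistics import mean

# Hypothetical reaction times (ms) per sound level (dB SPL),
# before ("baseline") and after a hypothetical salicylate dose
trials = {
    60: {"baseline": [410, 395, 420], "salicylate": [350, 340, 360]},
    80: {"baseline": [310, 300, 305], "salicylate": [250, 245, 255]},
    100: {"baseline": [240, 235, 245], "salicylate": [190, 185, 195]},
}

for level, conditions in sorted(trials.items()):
    base = mean(conditions["baseline"])
    sal = mean(conditions["salicylate"])
    # Faster (shorter) reaction times after treatment at the same sound
    # level are read as a hyperacusis-like change in loudness perception
    print(f"{level} dB: baseline {base:.0f} ms, salicylate {sal:.0f} ms, "
          f"speed-up {base - sal:.0f} ms")
```

In this framing, a consistent speed-up at a given sound level is interpreted as the animal perceiving that level as louder than before.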

Although prior hyperacusis research utilizing salicylate has demonstrated that high-dose sodium salicylate induced hyperacusis-like behavior, the effect of dosage and the stimulus frequency were not considered. We wanted to determine how the dosage of salicylate as well as the frequency of the tone bursts affected reaction time.

We found that salicylate caused a reduction in behavioral reaction time in a dose-dependent manner and across a range of stimulus frequencies, suggesting that both our behavioral paradigm and the salicylate model are useful tools in the broader study of hyperacusis. In addition, our behavioral results appear highly correlated with the physiological changes in the auditory system shown in earlier studies following both salicylate treatment and noise exposure, which points to a common neural mechanism in the generation of hyperacusis.

Although people with hyperacusis rarely attribute their hyperacusis to aspirin, the use of the salicylate model of hyperacusis in animals provides the necessary groundwork for future studies of noise-induced hyperacusis and loudness intolerance.


Kelly Radziwon, Ph.D., is a 2015 Emerging Research Grants recipient. Her grant was generously funded by Hyperacusis Research Ltd. Learn more about Radziwon and her work in “Meet the Researcher.”



NIH Researchers Show Protein in Inner Ear Is Key to How Cells That Help With Hearing and Balance Are Positioned

By the National Institute on Deafness and Other Communication Disorders (NIDCD)

Line of polarity reversal (LPR) and location of Emx2 within two inner ear structures. Arrows indicate hair bundle orientation. Source: eLife


Using animal models, scientists have demonstrated that a protein called Emx2 is critical to how specialized cells that are important for maintaining hearing and balance are positioned in the inner ear. Emx2 is a transcription factor, a type of protein that plays a role in how genes are regulated. Conducted by scientists at the National Institute on Deafness and Other Communication Disorders (NIDCD), part of the National Institutes of Health (NIH), the research offers new insight into how specialized sensory hair cells develop and function, providing opportunities for scientists to explore novel ways to treat hearing loss, balance disorders, and deafness. The results are published March 7, 2017, in eLife.

Our ability to hear and maintain balance relies on thousands of sensory hair cells in various parts of the inner ear. On top of these hair cells are clusters of tiny hair-like extensions called hair bundles. When triggered by sound, head movements, or other input, the hair bundles bend, opening channels that turn on the hair cells and create electrical signals to send information to the brain. These signals carry, for example, sound vibrations so the brain can tell us what we’ve heard or information about how our head is positioned or how it is moving, which the brain uses to help us maintain balance.

NIDCD researchers Doris Wu, Ph.D., chief of the Section on Sensory Cell Regeneration and Development and member of HHF’s Scientific Advisory Board, which provides oversight and guidance to our Hearing Restoration Project (HRP) consortium; Katie Kindt, Ph.D., acting chief of the Section on Sensory Cell Development and Function; and Tao Jiang, a doctoral student at the University of Maryland College Park, sought to describe how the hair cells and hair bundles in the inner ear are formed by exploring the role of Emx2, a protein known to be essential for the development of inner ear structures. They turned first to mice, which have been critical to helping scientists understand how intricate parts of the inner ear function in people.

Each hair bundle in the inner ear bends in only one direction to turn on the hair cell; when the bundle bends in the opposite direction, it is deactivated, or turned off, and the channels that sense vibrations close. Hair bundles in various sensory organs of the inner ear are oriented in a precise pattern. Scientists are just beginning to understand how the hair cells determine in which direction to point their hair bundles so that they perform their jobs.

In the parts of the inner ear where hair cells and their hair bundles convert sound vibrations into signals to the brain, the hair bundles are oriented in the same direction. The same is true for hair bundles involved in some aspects of balance, known as angular acceleration. But for hair cells involved in linear acceleration—or how the head senses the direction of forward and backward movement—the hair bundles divide into two regions that are oriented in opposite directions, which scientists call reversed polarity. The hair bundles face either toward or away from each other, depending on whether they are in the utricle or the saccule, two of the inner ear structures involved in balance. In mammals, the dividing line at which the hair bundles are oriented in opposite directions is called the line of polarity reversal (LPR).

Using gene expression analysis and loss- and gain-of-function studies in mice that either lacked Emx2 or possessed extra amounts of the protein, the scientists found that Emx2 is expressed on only one side of the LPR. In addition, they discovered that Emx2 reverses hair bundle polarity by 180 degrees, thereby orienting hair bundles in the Emx2 region opposite to hair bundles on the other side of the LPR. When Emx2 was missing, hair bundles in the same location were all positioned to face the same direction.

Looking to other animals to see if Emx2 played the same role, they found that Emx2 reversed hair bundle orientation in the zebrafish neuromast, the organ where hair cells with reversed polarity that are sensitive to water movement reside.

These results suggest that Emx2 plays a key role in establishing the structural basis of hair bundle polarity and establishing the LPR. If Emx2 is found to function similarly in humans, as expected, the findings could help advance therapies for hearing loss and balance disorders. They could also advance research into understanding the mechanisms underlying sensory hair cell development within organs other than the inner ear.

This work was supported within the intramural laboratories of the NIDCD (ZIA DC000021 and ZIA DC000085).

Doris Wu, Ph.D., is a member of HHF’s Scientific Advisory Board, which provides oversight and guidance to our Hearing Restoration Project (HRP) consortium. This article was republished with permission from the National Institute on Deafness and Other Communication Disorders.

