Hearing Loss

How We Got Our Dad to Say Yes to Hearing Aids (With Some Help From Aristotle)

Aristotle’s three appeals—ethos, logos, and pathos—helped us get our dad into an audiologist's office.


You Are a Masterpiece

This retinal specialist—who was the first person to utter, “You have Usher syndrome,” to me—had the worst bedside manner. Immediately after I left his office I cried—a lot—but then regained my composure and made a few calls to arrange a second opinion with another retinal specialist.


The Strength of Our Olympians

By Vicky Chan

The competitors in this year’s Winter Olympics are full of drive and determination. Olympians throughout history have overcome many challenges, including hearing loss, for a chance to win the gold. Hearing loss has played a big role in the lives of some Olympians, and in spite of their disability, or perhaps because of it, hard-of-hearing Olympians have thrived as athletes. Rather than viewing their hearing loss as a limitation, these Olympians—our very own Gold Medalists—have said that compromised hearing shaped their work ethic and contributed to their success.

Adam Rippon

American figure skater Adam Rippon. Credit: Jim Gensheimer/Bay Area News Group.

Adam Rippon is a figure skater competing in the 2018 Olympics. He was born with an eye infection and 80% hearing loss, and before his first birthday he had major surgeries to correct both issues. At age 5, he survived a burst appendix and a severe respiratory condition. Despite these early health difficulties, he won a gold medal at the Four Continents Championships and the national title in 2016.

Tamika Catchings

Tamika Catchings is a retired American WNBA star who was born with hearing loss. She played in more than 15 WNBA seasons and won four Olympic Gold Medals. Catchings has attributed her success to her hearing loss—compared with her typical-hearing opponents, she is more observant on the court, which allows her to react faster than they can. Catchings said, “As a young child, I remember being teased for...my big, clunky hearing aids, and the speech problems...Every day was a challenge for me...I outworked [the kids who made fun of me], plain and simple.”

Frank Bartolillo

Frank Bartolillo is an Australian fencer who competed in the 2004 Olympics. He was born with hearing loss, but Bartolillo states that his hearing loss has actually helped him improve his fencing skills by allowing him to fully focus on his opponent.

Carlo Orlandi

Carlo Orlandi was an Italian boxer. At age 18, Orlandi became the first deaf athlete to compete in the Olympics, winning a Gold Medal at the 1928 Games. He later turned professional, with a boxing career that spanned 15 years and included nearly 100 wins.

David Smith

David Smith is an American volleyball player who was born with severe hearing loss. At age three, he was fitted with hearing aids in both ears. As an athlete, he relies heavily on hand signals and lip reading to communicate with his teammates. On the court, Smith can’t wear his hearing aids, so his coach, John Speraw, uses the “David Smith Rule.” This rule mandates that “when David wants it, David takes it,” says Speraw. “Because in the middle of a play, you can't call him off...He's mitigated any issues he has by being a great all-around volleyball player.”

Chris Colwill

Chris Colwill is an American diver who was born with hearing loss. Although his hearing aid allows him to hear at an 85-90% level, he cannot use it while diving and relies on the scoreboard for his cue to dive. But Colwill has said that this is an advantage for him—noise from the crowd doesn’t break his concentration while diving.

Katherine Merry

Katherine Merry is a former English sprinter who won a Bronze Medal in the 2000 Olympics. At age 30, she developed tinnitus when a nurse made a mistake during a routine ear cleaning procedure. Ever since, she has lived with a constant high-pitched buzzing sound in her ears. It becomes worse when she is tired, overworked, or on a flight. Today, Merry works as a BBC sports presenter.

These Olympians prove that those affected by hearing loss can pursue successful careers in sports. Refusing to let anything hold them back, they turned their disabilities into advantages in their respective competitions. For many of them, hearing loss helps block out distractions and sharpen focus on the sport. Their disability has shaped their determination, pushing them to become stronger and better athletes.


Help Us Move Beyond Grateful

By Nadine Dehgan

Thank you for your partnership as we progress toward our dream of cures for hearing loss and tinnitus.

Our researchers are hard at work discovering how reptiles, birds, and fish are able to restore their hearing after being deafened, so that this knowledge can be translated into cures for humans and other mammals.

When better treatments and cures are discovered, I know Jamie—pictured below with her four children—will be incredibly grateful for the opportunity to have her hearing restored. We will all be grateful.

Jamie, pictured with her four children.

Jamie's life changed one year ago when her daily activities were suddenly compromised. Words turned into mere muffled sounds—and then silence. She found herself increasingly dependent on lip-reading to avoid asking people to repeat themselves, a request that embarrassed her.

Her fears were confirmed when her doctor determined that Jamie, 32, has severe hearing loss in both ears. The doctor was astonished by how sharply Jamie’s hearing had declined.

Jamie is fortunate to have a supportive and loving husband and family. But she lives in fear she may never be able to hear her beautiful children and other important sounds in her life.

Can you help bring us closer to better treatments and a cure for hearing loss for Jamie and 48 million other Americans with hearing loss?

Please, if you are able, give to HHF today. 100% of your generous gift will be directed to the area of your designation. 

Thank you and happy holidays!


New Hearing Implant Changes Life of Born This Way Star Sean McElwee

By Carol Stoll and Lauren McGrath

“It could happen” is Sean McElwee’s mantra. Born with Down syndrome, a collapsed right ear canal, and three speech disorders, Sean has drawn on his natural optimism to overcome these medical obstacles and become a television star.

At age 22, Sean was discovered and cast on A&E’s Emmy-winning TV series Born This Way, which follows the lives of seven young adults living with Down syndrome in Los Angeles. Sean’s radiant personality made him a favorite on the show, but his progressive hearing loss eventually negatively affected his on-camera communication. Deaf in his right ear since age six and now losing hearing in his left, Sean resolved to make a change. Hearing rejuvenation “could happen”—and it did happen—thanks to Sean’s positive attitude and a Cochlear Baha System.

Sean enjoying the sights and sounds of penguins at the zoo. Photo by the McElwee family.

Sean grew up going to mainstream public schools in Orange County, CA, because his mom wanted him to experience life like every other child. Throughout his childhood, he developed a plethora of hobbies and talents. Sean has been singing and dancing since age three, when he joined his church and school choir. He still sings, though now mostly in the shower at home to Adam Lambert songs. Sean also loves to break dance to rap and hip-hop music, and can even put both feet behind his head. He plays many sports, including basketball, baseball, flag football, swimming, and golf. He is also an expert bowler and has even bowled a perfect game of 300!

In addition to keeping up with his hobbies, starring on Born This Way, and traveling to public speaking engagements, Sean works at a trampoline park where he enjoys talking to the customers. Sean’s new Baha 5 Sound Processor has enabled him to hear clearly while continuing to work and engage in sports and the arts. The new device is convenient because it can connect directly to an iPhone and stream audio straight to his sound processor. Most notably, Sean’s girlfriend can now sit on either side of him during a conversation and he can still hear her.

Sean, now 24, takes his work very seriously and recently started his own clothing design company called Seanese (named after his own language) to further spread awareness of Down syndrome and general positive messages. “It could happen” was the slogan on his first T-shirt, and now he has added dozens of phrases, designs, and clothing items. He is especially excited that he hired an artist to design new Halloween shirts this October with images of a mummy, a skeleton, and a zombie.

Beyond furthering his clothing line, Sean’s personal goals include going to all 50 states (he only has 14 left to go!), appearing on The Ellen DeGeneres Show, going to Atlantis Paradise Island in the Bahamas, and working out to develop his abdominal muscles. He hopes that in the future, “everyone will accept people with Down syndrome and see that we’re just like everyone else.”

Receive updates on life-changing hearing research and resources by subscribing to HHF's free quarterly magazine and e-newsletter.

 
 

Early Detection Improved Vocabulary Scores in Kids with Hearing Loss

By Molly Walker

Children with hearing loss in both ears had improved vocabulary skills if they met all of the Early Hearing Detection and Intervention guidelines, a small cross-sectional study found.

Children with bilateral hearing loss who met all three components of the Early Hearing Detection and Intervention guidelines (hearing screening by 1 month, diagnosis of hearing loss by 3 months, and intervention by 6 months) had significantly higher vocabulary quotients, reported Christine Yoshinaga-Itano, PhD, of the University of Colorado Boulder, writing in Pediatrics.

The authors added that recent research reported better language outcomes for children born in areas of the country during years when universal newborn hearing screening programs were implemented, and that these children also experienced long-term benefits in reading ability. The authors said that studies in the U.S. also reported better language outcomes for children whose hearing loss was identified early, who received hearing aids earlier, or who began intervention services earlier. But those studies were limited in geographic scope or relied on outdated definitions of "early" hearing loss.

"To date, no studies have reported vocabulary or other language outcomes of children meeting all three components of the [Early Hearing Detection and Intervention] guidelines," they wrote.

Researchers examined a cohort of 448 children with bilateral prelingual hearing loss between 8 and 39 months of age (mean 25.3 months) who participated in the National Early Childhood Assessment Project, a large multistate study. About 80% of the children had no additional disabilities that interfered with their language capabilities; among those who did have additional disabilities, more than half had a reported cognitive impairment. Expressive vocabulary was measured with the MacArthur-Bates Communicative Development Inventories.

While meeting all three components of the Early Hearing Detection and Intervention guidelines was the primary variable, the authors entered five other independent predictor variables into the analysis:

  • Chronological age

  • Disability status

  • Mother's level of education

  • Degree of loss

  • Adult who is deaf/hard of hearing

They wrote that the overall model was significantly predictive, with the combination of the six factors explaining 41% of the variance in vocabulary outcomes. Higher vocabulary quotients were predicted by higher maternal levels of education, lesser degrees of hearing loss, the presence of a parent who was deaf or hard of hearing, and the absence of additional disabilities, the authors said. But even after controlling for these factors, meeting all three components of the Early Hearing Detection and Intervention guidelines had "a meaningful impact" on vocabulary outcomes.
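
To make the "variance explained" figure concrete, here is a minimal sketch of the kind of six-predictor regression described above. It runs on simulated data; the variable codings and effect sizes are assumptions chosen only to match the direction of the reported findings, not the study's dataset or analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 448  # cohort size reported above

# Hypothetical predictors (codings are illustrative assumptions):
met_ehdi    = rng.integers(0, 2, n)   # met all three 1-3-6 guideline components
age_months  = rng.uniform(8, 39, n)   # chronological age
disability  = rng.integers(0, 2, n)   # additional disability present
mother_edu  = rng.integers(1, 6, n)   # maternal education level (ordinal)
loss_degree = rng.integers(1, 5, n)   # degree of hearing loss (ordinal)
deaf_adult  = rng.integers(0, 2, n)   # deaf/hard-of-hearing adult in the family

# Simulated vocabulary quotient, with predictor signs matching the reported findings
vq = (70 + 8*met_ehdi - 0.3*age_months - 12*disability
      + 2.5*mother_edu - 3*loss_degree + 4*deaf_adult
      + rng.normal(0, 10, n))

# Ordinary least squares fit of the six-predictor model
X = np.column_stack([np.ones(n), met_ehdi, age_months, disability,
                     mother_edu, loss_degree, deaf_adult])
coef, *_ = np.linalg.lstsq(X, vq, rcond=None)
residuals = vq - X @ coef

# R^2 is the share of variance in vocabulary quotients explained by the six
# factors (the study reported about 41% for its real data).
r_squared = 1 - residuals.var() / vq.var()
print(f"R^2 = {r_squared:.2f}")
```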

The authors also said that mean vocabulary quotients decreased as a child's chronological age increased, meaning the gap was greater for older children. They argued that this is consistent with previous findings that children with hearing loss fail to acquire vocabulary at the pace of hearing children.

Overall, the mean vocabulary quotient was 74.4. For children without disabilities, the mean vocabulary quotient was 77.6, and for those with additional disabilities, it was 59.8.

Even those children without additional disabilities who met the guidelines had a mean vocabulary quotient of 82, which the authors noted was "considerably less" than the expected mean of 100. They added that 37% of this subgroup had vocabulary quotients below the 10th percentile (<75).

"Although this percentage is substantially better than for those who did not meet [Early Hearing Detection and Intervention] guidelines ... it points to the importance of identifying additional factors that may lead to improved vocabulary outcomes," they wrote.

Limitations of the study included that only expressive vocabulary was examined; the authors recommended that future studies consider additional language components. Another limitation was that disability status was determined by parent report, with the potential for misclassification.

The authors said that the results of their study emphasize how important it is for pediatricians and other medical professionals to help identify children with hearing loss at a younger age, adding that "only one-half to two-thirds of children met the guidelines" across participating states.

This article was republished with permission from MedPage Today.


Small Solution, Large Impact: Updating Hearing Aid Technology

By Apoorva Murarka

For many people, the sound quality and battery life of their devices are little more than an afterthought. But for hearing aid users, these are pivotal factors in being able to interact with the world around them.

One possible way to update existing technology – which has gone unchanged for decades – is small in size but monumental in impact. Apoorva Murarka, a Ph.D. candidate in electrical engineering at MIT, has developed an award-winning microspeaker to improve the functions of devices that emit sound. Murarka sees hearing aids as one of the most important applications of his new technology.

The Current Problem – Feeling the Heat

Most hearing aids have long used a system of coils and magnets to produce sound within the ear canal. These microspeakers use battery power to operate, and lots of it. Valuable battery life is wasted as heat when electric current passes through the resistance of the coil on its way to producing sound. The more limited a user’s hearing is, the harder the speaker must work to produce sound, and the more battery is used up.

As a result, research has shown that many hearing aid users in the United States use about 80 to 120 batteries a year or have to recharge batteries daily. Aside from the anxiety that can accompany the varying dependability of this old technology, the cost of constantly replacing these batteries can quickly add up. 
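
As a rough illustration of how those replacements add up, here is a short back-of-the-envelope sketch; the per-battery price is an assumed figure, since actual prices vary by battery size and retailer.

```python
# Back-of-the-envelope estimate of annual battery replacement cost.
batteries_per_year = (80, 120)   # range cited above
price_per_battery = 0.75         # assumed average price, in US dollars

for count in batteries_per_year:
    cost = count * price_per_battery
    print(f"{count} batteries/year -> about ${cost:.0f} per year")
```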

But battery life is not the only factor to consider. Because the coil and magnet system has not been updated in decades, the quality of sound produced by hearing aid speakers (without additional signal processing) has been just as limited. Even small upgrades in sound quality could make a world of difference for users.

The Future Solution – Going Smaller and Smarter

Apoorva Murarka has invented an alternative to the old coil and magnet system, removing those components completely from the picture. In their stead, he has developed an electrostatic transducer that relies on electrostatic force instead of magnetic force to vibrate the sound-producing diaphragm. This way of producing sound wastes much less energy, meaning significantly longer battery life in hearing aids. Apoorva was recently awarded the $15,000 Lemelson-MIT Student Prize for this groundbreaking development.

The biggest difference? Size. You would need to look closely to even see this microspeaker’s membrane – its thickness is about 1/1,000 the width of a human hair. 

Additionally, the microspeaker’s ultrathin membrane and micro-structured design enhance the quality of sound reproduced in the ear. Power savings due to the microspeaker’s electrostatic drive can be used to optimize other existing features in hearing aids such as noise filtration, directionality, and wireless streaming. This could pave the way for energy-efficient “smart” hearing aids that improve the quality of life for users significantly. 

This invention is being developed further, and Apoorva hopes to work with the hard-of-hearing community, relevant organizations, and hearing aid companies to understand the needs of users and explore how his invention can be adapted within hearing aids.

You can read more about Apoorva and his invention here.


More People = Less Noise?

By Kathi Mestayer

Beautiful, open, echoey space.

In the summer, attendance at our church falls noticeably as people go on vacation and spend weekend mornings doing other seasonal things, like birdwatching. After the service on a recent Sunday, we all headed out of the sanctuary, toward the atrium. Normally, this is a time when it’s really difficult for me to talk with anyone because of the reverberant nature of our building. It’s an architectural masterpiece and wonderful for music—and an acoustical nightmare, at least for speech comprehension.

To be fair, our church is not the only one with a large, open worship space where sound bounces around for what can seem like… forever. It’s actually becoming more common; when churches get bigger, sound challenges follow. As the authors of a research paper on the topic point out, “We are witnessing a paradigm shift from small church enclosures to very large church auditoriums. Most of these auditoriums fall short of providing good sound quality and… sooner or later it becomes a very serious problem because such buildings are places for communication to an audience….”

So, I’ve gotten used to the reverberation, and just try to avoid conversation until we’re out of the sanctuary. That summer day, however, as I worked my way toward the exit, I noticed that the noise level was significantly higher than usual. “That’s weird,” I thought. “Fewer people, but more noise?” I checked with a couple of friends, and they had also noticed that the noise level seemed much higher than usual. So it wasn’t just me.

When I got home, I told my (physicist) husband about it, and he asked me how many people were at the service. I said, “Way fewer, less than half the usual number…probably vacations.” He replied, “Oh, that’s probably why it was noisier. People absorb sound.” But at such a noticeable level?

Ask an Acoustician

In search of a second opinion, I contacted Rich Peppin, the president of Engineers for Change, a nonprofit acoustics and vibrations consulting firm. Rich had helped me with a Hearing Health article, “Caution: Noise at Work,” so I knew he’d have the answer. I posited our working hypothesis in my email to him: that a reverberant space would be noticeably noisier if there were fewer people in it.

Rich replied: “Yes. Because people absorb sound and hence reduce reflections. We can calculate the reduction of reverberation if we know before and after numbers of people.” Now, we’re getting somewhere.

The calculations Rich was talking about are based, in part, on how much sound humans absorb. In addition to the sound absorption by human bodies, other variables affect reverberation, such as what the people are wearing, whether they are sitting or standing, whether there are padded seats in the room, and the size and shape of the room.

In my church example, however, most of the major variables were unchanged between winter and summer: lightly padded seats with metal frames; hard floor, walls, and ceiling; and no drapes. And everyone was standing up, walking out to the atrium, where conversation is a little more possible.

So, how much sound can people absorb? The study Rich shared with me had the results of controlled tests of sound absorption with different numbers of people (zero, one, two, three). The results varied widely for different frequencies (more sound absorption per added person at the higher frequencies tested).  

Human speech, however, was the source of the sound in our church sanctuary, and its fundamental frequency averages about 125 Hz for male voices and 200 Hz for female voices.

And the result? Sound absorption increased by about 5 to 20 percent (depending on the frequency) with each person added to the test chamber.

Even though I didn’t know the exact number of people at my church, there was a big difference between the winter months, when it’s close to full, and that summer day, with its small attendance; I estimate at least 75 fewer people. So it was not so surprising that the sanctuary was noisier that day, as I, and a few others, noticed. The bottom line? My husband was right—again. Oh, me of little faith!
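
For anyone who wants to see the arithmetic behind Rich’s point, here is a minimal sketch using the classic Sabine reverberation equation, RT60 = 0.161 × V / A (V is the room volume in cubic meters; A is the total absorption in metric sabins). The room volume, baseline absorption, and per-person absorption below are assumed, illustrative numbers, not measurements of our sanctuary.

```python
# Sabine equation: more people -> more absorption -> shorter reverberation time
# (a "drier," less noisy-sounding room).
def rt60(volume_m3: float, absorption_sabins: float) -> float:
    """Sabine reverberation time, in seconds."""
    return 0.161 * volume_m3 / absorption_sabins

VOLUME = 4000.0            # assumed sanctuary volume, m^3
ROOM_ABSORPTION = 250.0    # assumed absorption of seats, floor, and walls, m^2 sabins
PER_PERSON = 0.45          # assumed absorption added per standing person, m^2 sabins

for people in (50, 125):   # roughly "summer" vs. "winter" attendance
    total_absorption = ROOM_ABSORPTION + people * PER_PERSON
    print(f"{people:3d} people: RT60 = {rt60(VOLUME, total_absorption):.2f} s")
```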

Kathi Mestayer is a staff writer for Hearing Health magazine.


An Animal Behavioral Model of Loudness Hyperacusis

By Kelly Radziwon, Ph.D., and Richard Salvi, Ph.D.

One of the defining features of hyperacusis is reduced sound level tolerance; individuals with “loudness hyperacusis” experience everyday sound volumes as uncomfortably loud and potentially painful. Given that loudness perception is a key behavioral correlate of hyperacusis, our lab at the University at Buffalo has developed a rat behavioral model of loudness estimation utilizing a reaction time paradigm. In this model, the rats were trained to remove their noses from a hole whenever a sound was heard. This task is similar to asking a human listener to raise his/her hand when a sound is played (the rats receive food rewards upon correctly detecting the sound).
 

FIGURE: Reaction time-intensity functions for broadband noise bursts for 7 rats. The rats are significantly faster following high-dose (300 mg/kg) salicylate administration (left panel; red squares) for moderate and high-level sounds, indicative of temporary loudness hyperacusis. The rats showed no behavioral effect following low-dose (50 mg/kg) salicylate.

Once this trained behavioral response was established, we measured reaction time, or how fast the animal responds to sounds of varying intensities. Previous studies have established that the more intense a sound is, the faster a listener responds to it. We therefore reasoned that hyperacusis, with its enhanced sensitivity to sound, would influence reaction time.
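
To illustrate the kind of reaction time-intensity function described above, here is a small simulated sketch. It uses Piéron's law, a standard description of how reaction time shortens as stimulus intensity grows; the parameter values and the "salicylate" shift are assumptions chosen only to mimic the pattern in the figure, not the study's fitted values.

```python
import numpy as np

def reaction_time_ms(intensity_db, rt0=180.0, k=4000.0, beta=0.9):
    """Piéron-style reaction time (ms) as a function of sound level (dB)."""
    return rt0 + k * np.power(intensity_db, -beta)

levels_db = np.arange(30, 100, 10)
baseline = reaction_time_ms(levels_db)

# Hypothetical hyperacusis-like condition: sounds are treated as if they were
# ~10 dB more intense, so responses at moderate and high levels get faster.
salicylate = reaction_time_ms(levels_db + 10)

for db, rt_b, rt_s in zip(levels_db, baseline, salicylate):
    print(f"{db:3d} dB: baseline {rt_b:6.1f} ms, high-dose salicylate {rt_s:6.1f} ms")
```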

In our recent paper published in Hearing Research, we tested the hypothesis that high-dose sodium salicylate, the active ingredient in aspirin, can induce hyperacusis-like changes in rats trained in our behavioral paradigm. High-dose aspirin has long been known to induce temporary hearing loss and acute tinnitus in both humans and animals, and it has served as an extremely useful model for investigating the neural and biological mechanisms underlying tinnitus and hearing loss. Therefore, if the rats’ responses to sound following salicylate administration are faster than they typically are, then we have developed a relevant animal model of loudness hyperacusis.

Although prior hyperacusis research utilizing salicylate has demonstrated that high-dose sodium salicylate induced hyperacusis-like behavior, the effect of dosage and the stimulus frequency were not considered. We wanted to determine how the dosage of salicylate as well as the frequency of the tone bursts affected reaction time.

We found that salicylate caused a reduction in behavioral reaction time in a dose-dependent manner and across a range of stimulus frequencies, suggesting that both our behavioral paradigm and the salicylate model are useful tools in the broader study of hyperacusis. In addition, our behavioral results appear highly correlated with the physiological changes in the auditory system shown in earlier studies following both salicylate treatment and noise exposure, which points to a common neural mechanism in the generation of hyperacusis.

Although people with hyperacusis rarely attribute their hyperacusis to aspirin, the use of the salicylate model of hyperacusis in animals provides the necessary groundwork for future studies of noise-induced hyperacusis and loudness intolerance.


Kelly Radziwon, Ph.D., is a 2015 Emerging Research Grants recipient. Her grant was generously funded by Hyperacusis Research Ltd. Learn more about Radziwon and her work in “Meet the Researcher.”



NIH Researchers Show Protein in Inner Ear Is Key to How Cells That Help With Hearing and Balance Are Positioned

By the National Institute on Deafness and Other Communication Disorders (NIDCD)

Line of polarity reversal (LPR) and location of Emx2 within two inner ear structures. Arrows indicate hair bundle orientation. Source: eLife

Using animal models, scientists have demonstrated that a protein called Emx2 is critical to how specialized cells that are important for maintaining hearing and balance are positioned in the inner ear. Emx2 is a transcription factor, a type of protein that plays a role in how genes are regulated. Conducted by scientists at the National Institute on Deafness and Other Communication Disorders (NIDCD), part of the National Institutes of Health (NIH), the research offers new insight into how specialized sensory hair cells develop and function, providing opportunities for scientists to explore novel ways to treat hearing loss, balance disorders, and deafness. The results are published March 7, 2017, in eLife.

Our ability to hear and maintain balance relies on thousands of sensory hair cells in various parts of the inner ear. On top of these hair cells are clusters of tiny hair-like extensions called hair bundles. When triggered by sound, head movements, or other input, the hair bundles bend, opening channels that turn on the hair cells and create electrical signals to send information to the brain. These signals carry, for example, sound vibrations so the brain can tell us what we’ve heard or information about how our head is positioned or how it is moving, which the brain uses to help us maintain balance.

NIDCD researchers Doris Wu, Ph.D., chief of the Section on Sensory Cell Regeneration and Development and member of HHF’s Scientific Advisory Board, which provides oversight and guidance to our Hearing Restoration Project (HRP) consortium; Katie Kindt, Ph.D., acting chief of the Section on Sensory Cell Development and Function; and Tao Jiang, a doctoral student at the University of Maryland College Park, sought to describe how the hair cells and hair bundles in the inner ear are formed by exploring the role of Emx2, a protein known to be essential for the development of inner ear structures. They turned first to mice, which have been critical to helping scientists understand how intricate parts of the inner ear function in people.

Each hair bundle in the inner ear bends in only one direction to turn on the hair cell; when the bundle bends in the opposite direction, it is deactivated, or turned off, and the channels that sense vibrations close. Hair bundles in various sensory organs of the inner ear are oriented in a precise pattern. Scientists are just beginning to understand how the hair cells determine in which direction to point their hair bundles so that they perform their jobs.

In the parts of the inner ear where hair cells and their hair bundles convert sound vibrations into signals to the brain, the hair bundles are oriented in the same direction. The same is true for hair bundles involved in some aspects of balance, known as angular acceleration. But for hair cells involved in linear acceleration—or how the head senses the direction of forward and backward movement—the hair bundles divide into two regions that are oriented in opposite directions, which scientists call reversed polarity. The hair bundles face either toward or away from each other, depending on whether they are in the utricle or the saccule, two of the inner ear structures involved in balance. In mammals, the dividing line at which the hair bundles are oriented in opposite directions is called the line of polarity reversal (LPR).

Using gene expression analysis and loss- and gain-of-function analyses in mice that either lacked Emx2 or possessed extra amounts of the protein, the scientists found that Emx2 is expressed on only one side of the LPR. In addition, they discovered that Emx2 reversed hair bundle polarity by 180 degrees, thereby orienting hair bundles in the Emx2 region in opposite directions from hair bundles on the other side of the LPR. When Emx2 was missing, the hair bundles in the same location were positioned to face the same direction.

Looking to other animals to see if Emx2 played the same role, they found that Emx2 reversed hair bundle orientation in the zebrafish neuromast, the organ where hair cells with reversed polarity that are sensitive to water movement reside.

These results suggest that Emx2 plays a key role in establishing the structural basis of hair bundle polarity and establishing the LPR. If Emx2 is found to function similarly in humans, as expected, the findings could help advance therapies for hearing loss and balance disorders. They could also advance research into understanding the mechanisms underlying sensory hair cell development within organs other than the inner ear.

This work was supported within the intramural laboratories of the NIDCD (ZIA DC000021 and ZIA DC000085).

Doris Wu, Ph.D., is a member of HHF’s Scientific Advisory Board, which provides oversight and guidance to our Hearing Restoration Project (HRP) consortium. This article was repurposed with permission from the National Institute on Deafness and Other Communication Disorders.

