Age-Related Hearing Loss

James W. Dias, Ph.D.

The Medical University of South Carolina
Neural determinants of age-related change in auditory-visual speech processing

Older adults typically have more difficulty than younger adults identifying the speech they hear, especially in noisy listening environments. However, some older adults demonstrate a preserved ability to identify speech that is both heard and seen. This preserved audiovisual speech perception by older adults is not explained by an improved ability to speechread (lipread), as speechreading also typically declines with age. Instead, older adults can exhibit an improved ability to integrate information across auditory and visual sources. This behavioral evidence is consistent with findings suggesting that the neural processing of audiovisual speech can improve with age. Despite the accumulating and intriguing evidence, the underlying changes in brain structure and function that support the preservation of audiovisual speech perception in older adults remain a critical knowledge gap. This project uses an innovative neural systems approach to determine how age-related changes in cortical structure and function, both within and between regions of the brain, can preserve audiovisual speech perception in older adults.

Mishaela DiNino, Ph.D.

Carnegie Mellon University
Neural mechanisms of speech sound encoding in older adults

Many older adults have trouble understanding speech in noisy environments, often to a greater extent than their hearing thresholds would predict. Age-related changes in the central auditory system, not just hearing loss, are thought to contribute to this perceptual impairment, but the exact mechanisms by which this occurs are not yet known. As individuals age, auditory neurons become less able to synchronize to the timing information in sound. This project will examine the relationship between reduced neural processing of fine timing information and older adults’ ability to encode the acoustic building blocks of speech sounds. A limited capacity to encode and use these acoustic cues might impair speech perception, particularly in the presence of background noise, independent of hearing thresholds. The results of this study will provide a better understanding of how the neural mechanisms important for speech-in-noise recognition may be altered with age, laying the groundwork for the development of novel treatments for older adults who experience difficulty perceiving speech in noise.

Anahita Mehta, Ph.D.

University of Michigan
Effects of age on interactions of acoustic features with timing judgments in auditory sequences

Imagine being at a busy party where everyone is talking at once, yet you can still focus on your friend’s voice. This ability to discern important sounds from noise involves integrating different features, such as the pitch (how high or low a sound is), location, and timing of these sounds. As we age, even with good hearing, this integration may become harder, affecting our ability to understand speech in noisy environments. Our brains must combine these features to make sense of our surroundings, a process known as feature integration. However, it’s not entirely clear how these features interact, especially when they conflict. For example, how does our brain handle mixed signals regarding pitch and sound location?
Previous research shows that when cues from different senses, like hearing and sight, occur simultaneously, our performance improves. But if they are out of sync, it becomes harder. Less is known about how our brains integrate conflicting cues within the same sense, such as pitch and spatial location in hearing. Our study aims to explore how this ability changes with age and to develop a simple, easy-to-administer test of feature integration, especially for older adults. This research may lead to better rehabilitation strategies, making everyday listening tasks easier for everyone.