Why Children With Autism May Experience Auditory Sensory Overload

By Hari Bharadwaj, Ph.D.

The successful navigation of complex everyday environments with multiple sensory inputs—such as restaurants, busy streets, and other social settings—relies on the brain’s ability to organize the barrage of information into discrete perceptual objects on which cognitive processes, such as selective attention, can act. Failure of this scene-segregation process, where one sound source stands out as the foreground “figure” and the remaining stimuli form the “background,” can result in an overwhelming sensory experience that makes it difficult to focus attention selectively on one source of interest while suppressing the others.

Sensory overload and difficulty selectively listening to a foreground sound source of interest are ubiquitous in autism spectrum disorder (ASD). To correctly group sound elements scattered across different frequencies into individual sources (e.g., components of a speaker’s voice vs. components of background traffic noise on a busy street), the brain must carefully analyze the temporal coherence across the different sound elements in the mixture.
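To make the temporal-coherence idea concrete, here is a minimal, hypothetical sketch (not the analysis used in the study): frequency channels whose amplitude envelopes rise and fall together are candidates for grouping into the same auditory object. All signal parameters below are made up for illustration.

```python
# Illustrative sketch: temporal coherence as a grouping cue.
# Channels whose envelopes co-vary (e.g., from the same voice) correlate
# highly with each other and weakly with unrelated background channels.
import numpy as np

rng = np.random.default_rng(0)
fs = 100                              # envelope sampling rate (Hz), illustrative
t = np.arange(0, 2.0, 1.0 / fs)

# Hypothetical envelopes: two "voice" channels share one modulation pattern,
# two "traffic" channels share another, independent pattern.
voice_mod = np.abs(np.sin(2 * np.pi * 4 * t))
traffic_mod = rng.random(t.size)
envelopes = np.vstack([
    voice_mod + 0.05 * rng.standard_normal(t.size),    # voice, channel 1
    voice_mod + 0.05 * rng.standard_normal(t.size),     # voice, channel 2
    traffic_mod + 0.05 * rng.standard_normal(t.size),   # traffic, channel 1
    traffic_mod + 0.05 * rng.standard_normal(t.size),   # traffic, channel 2
])

# Pairwise envelope correlations: high within a source, low across sources.
coherence = np.corrcoef(envelopes)
print(np.round(coherence, 2))
```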

A leading hypothesis about sensory processing abnormalities in ASD is that this kind of temporal “synthesis” of sensory information is atypical. This hypothesis stems from behavioral data indicating that individuals with ASD often show impaired processing of analogous dynamic stimuli in the visual domain, such as the coherent motion of visual dots. Although behavioral evidence in ASD is consistent with the impaired-temporal-synthesis hypothesis, direct neural correlates have not been identified. 

In our study published in PLOS Biology in February 2022, we investigated whether auditory temporal coherence processing is atypical in ASD by employing a novel auditory paradigm. The paradigm uses synthetic sounds in which temporal coherence is manipulated so that the acoustic features of the stimulus perceptually bind into auditory objects of varying salience, with the salience of the foreground “figure” object increasing parametrically with increasing temporal coherence.
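As a rough illustration of this kind of stimulus design (a simplified sketch in the spirit of figure-ground tone clouds, not the exact stimuli or parameters used in the study), one can repeat a subset of tone frequencies coherently across successive chords while the remaining frequencies change at random; the number of repeated components then controls how strongly the “figure” pops out.

```python
# Hypothetical figure-ground tone-cloud generator (illustrative parameters only).
import numpy as np

fs = 16000                                   # audio sampling rate (Hz)
chord_dur = 0.05                             # duration of each tone chord (s)
n_chords = 40                                # number of consecutive chords
freq_pool = np.geomspace(200, 4000, 60)      # candidate tone frequencies (Hz)
rng = np.random.default_rng(1)

def figure_ground(n_coherent, n_background=10):
    """Return a stimulus whose 'figure' consists of n_coherent frequencies
    repeated in every chord; background frequencies are redrawn each chord."""
    figure_freqs = rng.choice(freq_pool, n_coherent, replace=False)
    t = np.arange(int(fs * chord_dur)) / fs
    chords = []
    for _ in range(n_chords):
        bg_freqs = rng.choice(freq_pool, n_background, replace=False)
        freqs = np.concatenate([figure_freqs, bg_freqs])
        chord = np.sum([np.sin(2 * np.pi * f * t) for f in freqs], axis=0)
        chords.append(chord / len(freqs))    # keep amplitude comparable
    return np.concatenate(chords)

# More coherently repeated components -> a more salient foreground figure.
low_coherence = figure_ground(n_coherent=2)
high_coherence = figure_ground(n_coherent=8)
```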

Responses evoked by overall stimulus onset. (A) Averaged left hemisphere evoked responses relative to stimulus onset, in source space, based on individually identified regions of interest (inset), for each group. (B) Same as (A), for the right hemisphere. Shaded areas show standard error per group. (ASD, autism spectrum disorder; L, left; R, right; TD, typically developing.) Credit: PLOS Biology

As hypothesized, we found that children with ASD had significantly reduced evoked responses to the pop-out of the foreground figure, alongside a lower magnitude of induced gamma band activity. Importantly, the cortical measures were not correlated with the behaviorally assessed ability to suppress attention to distractors, suggesting that lower-level auditory processes contribute to the observed abnormalities. Instead, the cortical measures correlated with both ASD severity and the degree of auditory sensory processing abnormality, predicting how children within the ASD group stratified behaviorally.

These results suggest that neural processing of the temporal coherence of the acoustic constituents of an auditory stimulus is atypical in ASD. Given the importance of temporal coherence as a binding and scene-segregation cue in natural sounds, atypical processing of temporal coherence in ASD could contribute to poorer object binding, as has been suggested and demonstrated indirectly in the visual domain in ASD.

In scenes with multiple sound sources, the reduced growth of the response with temporal coherence could lead to foreground sounds “standing out” less saliently and a reduced ability to filter out extraneous sounds, contributing to a feeling of sensory overload. This would inevitably also impact speech and language processing, which are highly temporally sensitive, especially in environments with multiple sound sources. Indeed, speech perception impairments in noisy environments in particular have been documented in ASD.

One key advantage of our novel stimuli and the passive design is that the paradigm is translatable to patient populations where behavioral assessments cannot easily be performed (such as infants and toddlers). The paradigm is also applicable to animal models of ASD and other neurodevelopmental disorders. 

In sum, our observations of reduced growth of neural responses with increasing temporal coherence of auditory stimuli in ASD provide new insights into the mechanisms that may ultimately contribute to the sensory processing deficits and social-communication challenges that are characteristic of ASD. If borne out in larger-scale studies, our findings also raise the possibility that audiological interventions (such as remote microphones in classroom settings) that help those with central auditory processing challenges may also be beneficial for children with ASD.

A 2015 Emerging Research Grants scientist generously funded by the General Grand Chapter Royal Arch Masons International, Hari Bharadwaj, Ph.D., is an assistant professor at Purdue University with a joint appointment in speech, language, and hearing sciences, and biomedical engineering.
