University of California, Los Angeles
Leveraging automatic speech recognition algorithms to understand how the home listening environment impacts spoken language development among infants with cochlear implants
To develop spoken language, infants must rapidly process the thousands of words that caregivers speak around them each day. This is a daunting task even for infants with typical hearing. It is harder still for infants with cochlear implants, because electrical hearing compromises many of the cues critical for speech perception and language development. The challenges that infants with cochlear implants face have long-term consequences: beginning in early childhood, cochlear implant users perform 1-2 standard deviations below peers with typical hearing on nearly every measure of speech, language, and literacy. My lab investigates how children with hearing loss develop spoken language despite the degraded speech signal from which they hear and learn. This project addresses the urgent need to identify, in infancy, predictors of speech-language development for pediatric cochlear implant users.