The modern world can be a chaotic listening environment, full of noise from radios, televisions, and children. Adults with Autism Spectrum Disorders (ASD) may have particular difficulty recognizing speech in acoustically hostile environments (e.g., Alcantara et al., 2004), but the underlying cause is unknown. Toddlers with ASD are at elevated risk for language delay or disorder, but no studies have explored whether they have greater difficulty processing spoken language in the presence of noise. If children with ASD, like adults with ASD, are less adept at separating speech from auditory distractors, they may be unable to process language in most typical daily environments. Moreover, when faced with a noisy environment, individuals without ASD typically watch the speaker's face to help them understand speech, but individuals with ASD do not attend to faces in the same way. This could place children with ASD in a “Catch-22”: they have a greater need for linguistic input (as they are already at risk for language deficits), yet are simultaneously less equipped to profit from typical communicative settings. This project examines the ability of children with autism to understand speech in the presence of background noise; if toddlers with ASD have difficulty processing speech amid acoustic distraction, this would suggest a need to modify the speech environment to reduce such distractions. In collaboration with Elizabeth Redcay (Psychology) and Nan Ratner (HESP).