Abstract: From their earliest months, infants are deeply engaged in learning from others. Yet we have a relatively impoverished understanding of the social information that infants see in their daily lives. By equipping infants with head-mounted cameras, researchers have begun to document their egocentric perspective; in this talk, I’ll discuss our recent work using automated detection methods to analyze these videos and quantify the social information in the infant view. We use a pose detection model (OpenPose) to detect the faces and hands of caregivers seen from the infant perspective, analyzing hundreds of hours of footage from both a dense, longitudinal corpus of home recordings and in-lab play sessions. Overall, these findings point to slow developmental changes in the social information in view, and highlight activity contexts (e.g., mealtime vs. storytime) as well as postural developments as strong drivers of variability across individuals.