How do we design artificial systems that learn as humans do early in life, as "scientists in the crib" who explore and experiment with their surroundings? How do we make AI "curious," so that it explores without explicit external feedback? Topics draw from cognitive science (intuitive physics and psychology, developmental differences), computational theory (active learning, optimal experimental design), and AI practice (self-supervised learning, deep reinforcement learning). Students present readings and complete both an introductory computational project (e.g., training a neural network on a self-supervised task) and a deeper-dive project in either cognitive science (e.g., designing a novel human-subjects experiment) or AI (e.g., implementing and testing a curiosity variant in an RL environment). Prerequisites: familiarity with Python and practical data science (e.g., scikit-learn or R).
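To make the "curiosity" idea concrete, here is a minimal sketch (a hypothetical illustration, not a course assignment) of one common curiosity variant: intrinsic reward defined as the prediction error of a learned forward model. The agent sweeps along a small deterministic chain of states; transitions are rewarding while they are still novel, and the reward vanishes once the dynamics are learned. The environment, state count, and tabular update rule are all assumptions chosen for brevity.

```python
import numpy as np

n_states = 6
# Forward model: predicted next state for each (state, action) pair,
# initialized to zero (i.e., the agent knows nothing about the dynamics).
pred = np.zeros((n_states, 2))

def step(s, a):
    # Deterministic chain: action 0 moves left, action 1 moves right.
    return max(0, min(n_states - 1, s + (1 if a == 1 else -1)))

def sweep():
    # One pass right then left across the chain, summing intrinsic reward.
    s, total = 0, 0.0
    for a in [1] * (n_states - 1) + [0] * (n_states - 1):
        s_next = step(s, a)
        total += (pred[s, a] - s_next) ** 2  # curiosity = prediction error
        pred[s, a] = s_next                  # tabular update (learning rate 1)
        s = s_next
    return total

first, second = sweep(), sweep()
print(first, second)  # intrinsic reward is large at first, then drops to zero
```

Deep RL versions of this idea replace the table with a neural forward model over learned features, but the intrinsic-reward logic is the same: reward the agent where its model is still wrong.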