There has been remarkable recent progress in artificial intelligence and machine learning. Computers can drive cars, defeat Go masters, and hold simple conversations, yet the best example of intelligence is clearly still natural intelligence. My research program explores the many aspects of human cognition that elude machine systems. For example, people can learn a new concept from just one or a few examples, whereas machine learning algorithms typically need tens or hundreds of examples to reach similar levels of classification performance. People can also use their learned concepts in richer and more flexible ways than current machine systems -- for imagination, extrapolation, and explanation. Finally, people can learn by asking sophisticated and creative questions, whereas current algorithms pose much simpler and more stereotyped queries. My talk will present behavioral experiments and computational models that examine these uniquely human characteristics across domains such as learning handwritten characters, learning complex visual concepts (e.g., fractals), and asking questions in simple games. I identify three key cognitive principles -- compositionality, causality, and learning-to-learn -- that are critical ingredients of human-level intelligence. Computational models that integrate these three principles offer insight into how humans solve these impressive tasks while also suggesting new approaches to machine intelligence.