Andrew Nam, PhD student with Professor James L. McClelland, Department of Psychology, Stanford University
Title: "Can humans and neural networks learn to reason fungibly?"
Abstract: Humans possess a remarkable ability to learn complex relationships from very few examples, sometimes as few as one. By inferring specific properties of the task domain, people can generalize to the point where the task is no longer defined by the exemplars but by the principles underlying them. For instance, when learning a mathematical formula, examples may aid learning, but the student is expected to solve problems regardless of the specific numbers involved. For tasks of this kind, where the reasoning procedure is invariant to the particular inputs, I explore whether humans and machine learning algorithms can learn to reason in a way that I call fungible.
In this talk, I will describe empirical work I have conducted with human subjects characterizing performance in learning the puzzle game Sudoku, a task with highly fungible structure, along with computational work using neural networks on similar tasks. First, I will present a study in which participants new to Sudoku are taught a basic technique and tasked with solving problems that demand fungible reasoning. Second, I will demonstrate an approach to designing neural network architectures that also exhibit fungible problem solving. As these studies are highly exploratory and far from complete, I will conclude with future directions for understanding extensible reasoning.