Amarjot Singh, a postdoc working with James McClelland, Department of Psychology, Stanford University
Title: Continual Learning using the SHDL Framework and Replay

Abstract: Continual learning (CL) is the ability of a model to learn continually from a stream of data, building on what was learnt previously (hence exhibiting positive transfer) while also remembering previously seen tasks. CL is a fundamental step towards artificial intelligence, as it allows an agent to adapt to a continuously changing environment, a hallmark of natural intelligence. However, traditional machine learning systems are not, in general, capable of this, and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. In this talk, I will first present the broader concept of an efficient continual learning system and the different components required to construct such a system. In the second half of the talk, I will detail the proposed learning architecture of the system and the proposed data presentation policy used to avoid catastrophic forgetting and achieve positive transfer. These claims are supported with experiments on the CIFAR-100 dataset.
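For readers unfamiliar with replay, the general idea the abstract alludes to can be sketched as follows: store a bounded sample of earlier examples and interleave them with each new task's batches, so the model keeps rehearsing old tasks while learning new ones. This is a minimal generic illustration of replay with reservoir sampling, not the specific SHDL architecture or data presentation policy from the talk; all names and parameters here are illustrative assumptions.

```python
import random

class ReplayBuffer:
    """Fixed-size buffer of past examples, filled by reservoir sampling
    so it holds a uniform sample over the whole stream seen so far.
    Illustrative only -- not the talk's actual presentation policy."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0                     # total examples observed
        self.rng = random.Random(seed)

    def add(self, example):
        # Reservoir sampling: each observed example ends up in the
        # buffer with probability capacity / seen.
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def mix(self, new_batch, replay_fraction=0.5):
        # Interleave replayed old examples with the new task's batch;
        # training on the mixture is what counteracts forgetting.
        k = min(len(self.buffer), int(len(new_batch) * replay_fraction))
        return new_batch + self.rng.sample(self.buffer, k)

# Stream examples from a first task, then build a mixed batch for a second.
buf = ReplayBuffer(capacity=100)
for i in range(1000):
    buf.add(("task_A", i))
mixed = buf.mix([("task_B", i) for i in range(32)])
```

Here each batch for the new task carries a configurable fraction of rehearsed old examples, so gradient updates never see only the current task's distribution.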