Hannah Payne: Interactions between circuit architecture and plasticity in a closed-loop system; Ben Poole: Time-warped PCA: simultaneous alignment and dimensionality reduction of neural data

Date and Time: 
Monday, February 6, 2017 - 5:10pm - 6:30pm

Sloan Hall, Bldg 380, Room 380-C

Payne Abstract: Plasticity at specific synapses is thought to underlie learning and memory. However, determining the site(s) and direction of plasticity in intact animals is challenging, because feedback loops can produce non-intuitive changes in neural spike output that obscure the underlying synaptic changes. We studied interactions between feedback loops and plasticity in the context of a form of motor learning that calibrates the vestibulo-ocular reflex (VOR), which stabilizes gaze during head movements. Although the underlying neural circuits in the cerebellum and brainstem are relatively well characterized, previous studies disagree on the loci and direction of the synaptic changes that drive VOR learning.

To address this controversy, we constructed a set of models that differed in the strength of positive feedback between the cerebellum and its premotor targets. Models across this range produced highly similar learned changes in Purkinje cell spike output, despite opposite changes in the strength of the excitatory synapses onto these cells. In particular, all models correctly reproduced the so-called paradoxical changes in Purkinje cell activity recorded over the course of learning, despite making opposite predictions about the direction of synaptic plasticity. These results demonstrate how feedback can create changes in spike output that appear to contradict the sign of the underlying plasticity, and we suggest an approach for disambiguating such confounding effects of feedback using transient electrical perturbations.
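To see how feedback can decouple spike output from the sign of synaptic plasticity, consider a toy linear rate model (our own illustration, not the models presented in the talk): a Purkinje-like unit P receives a sensory input through a plastic weight w plus efference-copy feedback with gain f, and inhibits a brainstem pathway whose output E it helps shape. For fixed target outputs before and after "learning", the required change in w flips sign depending on the feedback gain:

```python
def weights(P, E, f, c=1.0):
    """Toy linear loop (hypothetical, for illustration only), with unit input x = 1:
        P = w*x + f*E   (Purkinje rate: plastic input weight w, efference copy gain f)
        E = v*x - c*P   (eye-velocity output: brainstem weight v, inhibited by P)
    Given target outputs (P, E), solve for the weights (w, v)."""
    w = P - f * E
    v = E + c * P
    return w, v

P0, E0 = 1.0, -1.0   # baseline: Purkinje modulation and VOR output
P1, E1 = 1.2, -0.5   # after gain-down learning: same targets for every model

for f in (0.0, 1.0):  # weak vs. strong efference-copy feedback
    w0, _ = weights(P0, E0, f)
    w1, _ = weights(P1, E1, f)
    print(f"feedback f={f}: delta_w = {w1 - w0:+.2f}")
```

With f = 0 the weight must increase (delta_w = +0.20), while with f = 1 it must decrease (delta_w = -0.30), even though both models produce identical changes in P and E. All numbers and the circuit itself are invented for this sketch; the talk's models are biophysically grounded.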

Poole Abstract: Analysis of multi-trial neural data often relies on rigid alignment of neural activity to stimulus triggers or behavioral events. However, activity on a single trial may be shifted and skewed in time due to differences in attentional state, biophysical kinetics, and other unobserved latent variables. This temporal variability can inflate the apparent dimensionality of the data and impair our ability to recover inherently simple, low-dimensional structure. For example, small temporal shifts on each trial introduce illusory dimensions as revealed by principal component analysis (PCA). We demonstrate the prevalence of these issues in spike-triggered analysis of retinal ganglion cells and in primate motor cortical neurons during a reaching task. To address these challenges, we develop a novel method, twPCA (time-warped PCA), that simultaneously identifies time warps of individual trials and low-dimensional structure across neurons and time. Our method contains a single hyperparameter that trades off the complexity of the temporal warps against the dimensionality of the aligned data. Furthermore, we identify the temporal warping in a data-driven, unsupervised manner, removing the need for explicit alignment with external variables. We apply twPCA to motor cortical data recorded from a monkey performing a center-out delayed reaching task. The learned warpings can explain 70% of the variability in reaction time. Time-warped PCA is broadly applicable to a variety of neural systems as a method for disentangling temporal variability across trials as well as discovering underlying neural dynamics and structure of interest.
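The alignment problem motivating twPCA can be reproduced in a few lines (our own sketch of the phenomenon, not the twPCA algorithm): trials containing the same waveform, jittered slightly in time, require several principal components to explain, whereas perfectly aligned trials need only one.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(-3.0, 3.0, 200)

def n_components_for(X, frac=0.95):
    """Number of principal components needed to capture `frac` of the variance."""
    s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
    explained = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(explained, frac) + 1)

# Aligned trials: one waveform with trial-to-trial amplitude variation
# -> the centered data are truly one-dimensional.
amps = 1.0 + 0.2 * rng.standard_normal(50)
X_aligned = amps[:, None] * np.exp(-t**2)[None, :]

# Jittered trials: the SAME waveform, shifted slightly in time on each trial
# -> the shifts masquerade as extra dimensions under PCA.
shifts = 0.3 * rng.standard_normal(50)
X_jittered = np.stack([np.exp(-(t - s)**2) for s in shifts])

print(n_components_for(X_aligned))   # 1 component suffices
print(n_components_for(X_jittered))  # several components needed
```

The Gaussian waveform, jitter scale, and 95% variance threshold are arbitrary choices for this illustration; twPCA itself jointly fits the warps and the low-dimensional structure rather than diagnosing the inflation after the fact.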

Public Access: 
Not open to the public