FriSem

Date
Friday, November 6, 2020, 3:15–4:30 pm
Event Sponsor
Department of Psychology
Location
Zoom

Insub Kim, PhD student with Professor Kalanit Grill-Spector, Department of Psychology, Stanford University

Abstract: The visual scenes we experience contain objects that move across space and change over time. To interpret these dynamic changes, our visual system requires representations of both space and time. However, previous population receptive field (pRF) models have focused only on the spatial domain of the stimulus. Here, we developed a spatiotemporal pRF model that captures both the spatial and temporal components of the stimulus within a single computational framework. To ensure the model's validity and reproducibility, we implemented simulation software that (i) generates synthetic BOLD responses from known ground-truth pRF parameters and (ii) recovers the spatiotemporal pRF parameters from those synthetic responses. After generating concrete predictions with this simulation software, we tested the spatiotemporal model's performance on real fMRI data. We found that incorporating the temporal component improved model performance when stimuli were presented briefly and transiently. This suggests that the spatiotemporal pRF model accounts for a wide range of spatial and temporal stimulus modulations.
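To give a rough sense of the simulation-and-recovery idea described in the abstract, below is a minimal illustrative sketch, not the speakers' actual model or code. It assumes a 2D Gaussian spatial pRF, a simple exponential temporal impulse response, and a double-gamma HRF; the parameter names (x0, y0, sigma, tau), stimulus, and fitting strategy are all hypothetical choices made for illustration.

```python
# Toy spatiotemporal pRF forward model and parameter recovery (illustrative only).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gamma

RES, N_T, DT = 32, 200, 0.1  # spatial grid size, number of time points, seconds per frame
xs, ys = np.meshgrid(np.linspace(-10, 10, RES), np.linspace(-10, 10, RES))

def spatial_prf(x0, y0, sigma):
    """2D Gaussian receptive field over the visual field (assumed form)."""
    return np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))

def temporal_irf(tau, length=50):
    """Exponential temporal impulse response with time constant tau in seconds (assumed form)."""
    t = np.arange(length) * DT
    irf = np.exp(-t / tau)
    return irf / irf.sum()

def hrf(length=200):
    """A simple double-gamma haemodynamic response function."""
    t = np.arange(length) * DT
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)

def predict_bold(stim, x0, y0, sigma, tau):
    """Forward model: spatial filtering -> temporal filtering -> HRF convolution."""
    drive = np.tensordot(stim, spatial_prf(x0, y0, sigma), axes=([1, 2], [0, 1]))
    neural = np.convolve(drive, temporal_irf(tau))[:N_T]
    bold = np.convolve(neural, hrf())[:N_T]
    return bold / (np.abs(bold).max() + 1e-12)

# (i) Generate ground-truth synthetic BOLD for a briefly flashed, sweeping-bar stimulus.
rng = np.random.default_rng(0)
stim = np.zeros((N_T, RES, RES))
for t in range(N_T):
    col = int((t / N_T) * RES)   # bar sweeps left to right
    if t % 4 == 0:               # brief, transient presentations
        stim[t, :, col] = 1.0
truth = dict(x0=2.0, y0=-1.0, sigma=1.5, tau=0.3)
y_true = predict_bold(stim, **truth) + 0.05 * rng.standard_normal(N_T)

# (ii) Recover the spatiotemporal pRF parameters from the synthetic BOLD.
def loss(p):
    return np.sum((y_true - predict_bold(stim, *p)) ** 2)

fit = minimize(loss, x0=[0.0, 0.0, 2.0, 0.5], method="Nelder-Mead")
print("recovered (x0, y0, sigma, tau):", np.round(fit.x, 2))
```

In this sketch, the simulation plays the same role described in the abstract: because the synthetic BOLD is generated from known parameters, recovery accuracy can be checked directly before the model is applied to real fMRI data.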