Identifying the format of neural codes for orientation WM by predictive modeling of fMRI activation patterns

Abstract

Activation patterns measured from primary visual cortex can be used to decode stimulus values held in working memory (WM; Serences et al., 2009; Harrison & Tong, 2009), and it has been theorized that this is possible because the neurons responsible for representing perceived visual features are recruited to represent those features during WM (Serences, 2016). However, when performing an orientation WM task, participants might strategically recode the remembered grating orientation using a spatial code (e.g., attending to a location on the screen and/or imagining a line), which could also support successful orientation decoding. Here, we tested whether participants use a spatial code during an orientation WM task by building a forward model based on spatial voxel receptive field (vRF) models. Participants (n = 5) maintained the precise orientation of a grating (0.5 s stimulus duration, followed by a 1 s filtered-noise mask), then reported that orientation after a 12 s delay. We identified each voxel's spatial selectivity during a vRF mapping session. First, we employed an inverted encoding model to successfully decode orientation representations in early visual cortex during the WM task. Then, to test the spatial recoding hypothesis, on each trial we sorted voxels into 'parallel' and 'orthogonal' groups based on their spatial selectivity relative to the remembered orientation and compared their mean activation levels during the delay. If orientation information in WM is converted into spatial information, voxels with vRF positions aligned parallel to the remembered orientation should show higher activation than voxels with vRF positions aligned orthogonal to it. Consistent with the spatial recoding hypothesis, parallel voxels exhibited greater delay-period activation than orthogonal voxels in early visual cortex. These results align with previous reports that participants store information in the code most meaningful for upcoming behavior (Lee et al., 2013; Henderson et al., 2021).
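To make the voxel-sorting analysis concrete, the sketch below illustrates one way the 'parallel' versus 'orthogonal' comparison could be implemented; it is not the authors' code, and all array names (vrf_x, vrf_y, delay_betas, remembered_ori) and the angular tolerance are hypothetical assumptions. The idea is to fold each voxel's vRF center into a polar angle about fixation, label voxels whose vRF axis lies near the remembered orientation as 'parallel' and those near the perpendicular axis as 'orthogonal', and compare mean delay-period activation between the two groups.

```python
import numpy as np

def angular_distance(a, b, period=180.0):
    """Smallest absolute difference between two angles with the given period (deg)."""
    d = np.abs(a - b) % period
    return np.minimum(d, period - d)

def parallel_vs_orthogonal(delay_betas, remembered_ori, vrf_x, vrf_y, tol=22.5):
    """
    Compare delay-period activation for voxels whose vRF centers lie along the
    remembered orientation ('parallel') vs. perpendicular to it ('orthogonal').

    delay_betas    : (n_trials, n_voxels) delay-period activation estimates
    remembered_ori : (n_trials,) remembered grating orientation in degrees [0, 180)
    vrf_x, vrf_y   : (n_voxels,) vRF center coordinates (deg), fixation at (0, 0)
    tol            : angular tolerance (deg) for assigning a voxel to either group
    """
    # Polar angle of each voxel's vRF center, folded into [0, 180) so that a
    # position and its mirror across fixation map onto the same axis.
    vrf_angle = np.degrees(np.arctan2(vrf_y, vrf_x)) % 180.0

    n_trials = delay_betas.shape[0]
    parallel_mean = np.full(n_trials, np.nan)
    orthogonal_mean = np.full(n_trials, np.nan)

    for t in range(n_trials):
        d_par = angular_distance(vrf_angle, remembered_ori[t])
        d_orth = angular_distance(vrf_angle, (remembered_ori[t] + 90.0) % 180.0)

        parallel = d_par <= tol      # vRF axis near the remembered orientation
        orthogonal = d_orth <= tol   # vRF axis near the perpendicular orientation

        if parallel.any():
            parallel_mean[t] = delay_betas[t, parallel].mean()
        if orthogonal.any():
            orthogonal_mean[t] = delay_betas[t, orthogonal].mean()

    # Spatial recoding predicts parallel > orthogonal on average across trials.
    return np.nanmean(parallel_mean), np.nanmean(orthogonal_mean)
```

Under the spatial recoding hypothesis described in the abstract, the first returned value (parallel-voxel activation) should exceed the second (orthogonal-voxel activation) during the memory delay.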

Authors
Vu-Cheung, K. and Sprague, T.C.
Type
Peer-Reviewed Article
Journal
Journal of Vision
Volume
22
Number
14
Pages
4479