Recurrent neural network models for working memory of
continuous variables: activity manifolds, connectivity
patterns, and dynamic codes
Christopher J. Cueva, Adel Ardalan, Misha Tsodyks, and Ning Qian,
arXiv, 2021, arXiv:2111.01275.
Download the full paper (PDF file)
Abstract
Many daily activities and psychophysical experiments involve keeping multiple items in
working memory. When the items take continuous values (e.g., orientation, direction, contrast, length, weight, loudness), they must be stored in a continuous structure of appropriate
dimensions. We investigate how such a structure might be represented in neural circuits by
training recurrent networks to report two previously flashed stimulus orientations. We find
that the activity manifold for the two orientations resembles a Clifford torus. Although a
Clifford torus and a standard torus (the surface of a donut) are topologically equivalent,
they have important functional differences. A Clifford torus treats the two orientations
equally and keeps them in orthogonal subspaces, as demanded by the task, whereas a standard torus does not. We further find that the Clifford-torus-like manifold is realized by two
different sets of locally-excitatory/globally-inhibitory connectivity patterns. Moreover, in
addition to attractors that store information via persistent activity, our networks also use
a dynamic coding scheme such that many units change their tuning to prevent the new
sensory input from overwriting the previously stored one. We argue that such dynamic
codes are generally required whenever multiple inputs enter a memory system via shared
connections. Finally, we apply our framework to a human psychophysics experiment in
which subjects reported two remembered orientations. We demonstrate that not all RNNs
reproduce human behavior. By varying the training conditions of the RNNs, we test and
support the hypothesis that human behavior is a product of both neural noise and reliance
on the more stable and behaviorally relevant memory of the ordinal relationship between
the two orientations. This suggests that suitable inductive biases in RNNs are important
for uncovering how the human brain implements working memory. Together, these results
offer an understanding of the neural computations underlying a class of visual decoding
tasks, bridging the scales from human behavior to synaptic connectivity.
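For readers unfamiliar with the geometric distinction drawn in the abstract, the following is a minimal illustrative sketch (not code from the paper): the standard parametrizations of a Clifford torus in R^4 and a standard (donut) torus in R^3. On the Clifford torus, each angle lives in its own orthogonal 2D subspace, so varying one stored orientation leaves the coordinates encoding the other untouched; on the standard torus, the two angles share coordinates.

```python
import numpy as np

def clifford_torus(theta1, theta2):
    """Clifford torus in R^4: each angle occupies its own orthogonal 2D plane."""
    return np.stack([np.cos(theta1), np.sin(theta1),
                     np.cos(theta2), np.sin(theta2)], axis=-1) / np.sqrt(2)

def standard_torus(theta1, theta2, R=2.0, r=1.0):
    """Standard (donut) torus in R^3: the two angles mix in shared coordinates."""
    return np.stack([(R + r * np.cos(theta2)) * np.cos(theta1),
                     (R + r * np.cos(theta2)) * np.sin(theta1),
                     r * np.sin(theta2)], axis=-1)

t = np.linspace(0.0, 2.0 * np.pi, 100)

# Vary theta1 only: on the Clifford torus, dims 2-3 (theta2's subspace) stay fixed.
p1 = clifford_torus(t, np.zeros_like(t))
assert np.allclose(p1[:, 2:], p1[0, 2:])

# Vary theta2 only: dims 0-1 (theta1's subspace) stay fixed.
p2 = clifford_torus(np.zeros_like(t), t)
assert np.allclose(p2[:, :2], p2[0, :2])

# On the standard torus, varying theta2 alone changes all three coordinates
# (except at isolated points), so the two orientations are entangled.
q = standard_torus(np.full_like(t, 0.5), t)
assert not np.allclose(q[:, 0], q[0, 0])
```

The assertions make the abstract's point concrete: orthogonal subspaces mean the memory of one orientation can be updated or read out without perturbing the other, which a donut-shaped embedding cannot guarantee.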