
Computation on the Transient

Reza Moazzezi, Peter Dayan

Gatsby Computational Neuroscience Unit, UCL, UK

Line attractor networks have become standard workhorses of computational accounts of neural population processing for optimal perceptual inference, working memory, decision making and more. Such networks possess a one-dimensional (line) or multi-dimensional (surface) attracting manifold in the high-dimensional space of the activities of all the neurons in the network; the network's dynamics nonlinearly project the state of the network onto a point on this manifold. The standard view, that the network represents information by the location of the point on this manifold at which it sits [1], is only appropriate if the computation to be performed by the network is aligned with the underlying symmetry implied by the manifold. In interesting cases, the computation that must be performed is orthogonal to this symmetry structure, and so an alternative computational view is required. Here, we illustrate the problem using a well-studied visual hyperacuity task, and suggest solutions involving different classes of computations during the network's transient evolution.

We consider the bisection task, in which the observer must decide to which of the two end bars the middle of three parallel visual bars is closer [2]. As with other hyperacuity tasks, performance is impressively invariant to factors such as positional deviations arising from various forms of eye movement, which can at most have been only partially trained. The natural noise-removing line attractor for the population-coded representation of this task exhibits exactly this problem: the required computation (determining the sign of the minuscule displacement of the central bar) is orthogonal to the positional symmetry implied by the line. Li & Dayan [3] suggested that learning might create in V1 a different sort of attractor network, in which there are two different attractive lines, one for each possible decision. Unfortunately, this network performs well only over a limited range of retinal positions [3,4].

Here we suggest a completely different computational approach, in which the decision is based on how a readily computed statistic of the activity (the population centre of mass) changes over the network's transient evolution. This significantly improves performance, to nearly the level of an ideal observer for the model input, even when the network contains only a single attractive line.
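To make the proposed readout concrete, the following is a minimal Python sketch of the idea: a rate-based network on a one-dimensional sheet of neurons, whose difference-of-Gaussians recurrent connectivity yields (approximately) translation-invariant bump dynamics, driven by a three-bar input whose middle bar is displaced by a hyperacuity-scale offset. The decision statistic is the drift of the population centre of mass, c(t) = Σ_i x_i r_i(t) / Σ_i r_i(t), over the transient, rather than the location at which the state finally settles. All specifics here (network size, kernel widths and gains, the relaxation schedule, and the sign convention mapping drift to decision) are illustrative assumptions, not the model details of the work described above.

import numpy as np

# Minimal sketch: bump dynamics on a 1-D sheet of rate neurons, with the
# bisection decision read out from the transient drift of the population
# centre of mass. All parameter values here are illustrative assumptions.

N = 200                          # neurons tiling the retinal axis
x = np.linspace(-1.0, 1.0, N)    # preferred positions
dx = x[1] - x[0]

def bar(center, width=0.05):
    """Gaussian activity profile evoked by a single visual bar."""
    return np.exp(-0.5 * ((x - center) / width) ** 2)

# Difference-of-Gaussians recurrence: local excitation, broader inhibition --
# a standard recipe for (approximately) translation-invariant bump dynamics.
D = x[:, None] - x[None, :]
W = dx * (2.0 * np.exp(-0.5 * (D / 0.08) ** 2)
          - 1.2 * np.exp(-0.5 * (D / 0.30) ** 2))

def com_trace(offset, steps=150, dt=0.1, tau=1.0, noise=0.02, seed=0):
    """Relax from a three-bar input (middle bar displaced by `offset`),
    recording the population centre of mass at every step."""
    rng = np.random.default_rng(seed)
    stim = bar(-0.5) + bar(0.5) + bar(offset)       # the three bars
    r = np.maximum(stim + noise * rng.standard_normal(N), 0.0)
    trace = []
    for _ in range(steps):
        drive = np.maximum(W @ r + stim, 0.0)       # rectified recurrent drive
        r = r + (dt / tau) * (-r + drive)           # leaky rate dynamics
        trace.append(np.sum(x * r) / np.sum(r))     # centre of mass c(t)
    return np.array(trace)

def decide(offset, **kw):
    """Report the direction the centre of mass drifts during the transient
    (the sign convention is an assumption here), not its final resting place."""
    c = com_trace(offset, **kw)
    return "right" if c[-1] - c[0] > 0 else "left"

if __name__ == "__main__":
    for off in (-0.01, +0.01):   # hyperacuity-scale displacements
        print(f"middle bar offset {off:+.3f} -> decide {decide(off)}")

Note the contrast with the standard line attractor readout, which would wait for the state to settle and report the final location on the manifold; in the scheme sketched here it is the direction of motion during the transient that carries the decision-relevant signal.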

References:

[1] Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: a theory. K. Zhang, J. Neurosci. 16:2112-2126, 1996.

[2] Perceptual learning with spatial uncertainties. T. U. Otto, M. H. Herzog, M. Fahle, L. Zhaoping, Vision Research 46:3223-3233, 2006.

[3] Position variance, recurrence and perceptual learning. Z. Li and P. Dayan, Neural Information Processing Systems, eds. T. K. Leen, T. G. Dietterich and V. Tresp, pp. 31-37, 2000.

[4] Nonlinear ideal observation and recurrent preprocessing in perceptual learning. L. Zhaoping, M. H. Herzog and P. Dayan, Network: Comput. Neural Syst. 14:233-247, 2003.