Continuous parameter working memory in a balanced chaotic neural network
Nimrod Shaham1 and Yoram Burak1,2
1Racah Institute of Physics 2Edmond and Lily Safra Center of Neuroscience
The Hebrew University of Jerusalem

There has been considerable theoretical and experimental interest in the storage in working memory of continuous parameters such as direction, color, or frequency. A leading model posits that such parameters are stored in local circuits that exhibit continuous attractor dynamics, in which different values of a stimulus are represented by different locations along a continuum of steady states. However, it has been unclear whether this theoretical idea is compatible with another prominent proposal for the architecture of cortical circuits: the balanced network ([1], [2]). In previous work ([3], [4], [5]), slow dynamics in a balanced network were achieved by introducing additional mechanisms, such as multiple synaptic time scales or short-term plasticity.

In this work we study a network with random connectivity that generates a balanced state. By introducing mutual inhibition between two balanced populations, we find an architecture in which the network sustains slow dynamics along a particular direction in mean-activity space. This persistence is achieved without short-term plasticity or multiple synaptic time scales. The slow dynamics make this balanced network a suitable candidate for the storage of working memory.
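As a concrete illustration, the following sketch (in Python, using NumPy) builds a sparse random connectivity matrix for two populations that couple to themselves and inhibit each other, with weights scaled by 1/sqrt(K) as in standard balanced networks. The abstract does not specify the internal structure of each population or any parameter values, so the sizes, weights, and external drive below are illustrative assumptions only, not the authors' parameters.

    import numpy as np

    rng = np.random.default_rng(0)

    N = 1000           # neurons per population (the study simulates up to ~10^5)
    K = 100            # mean number of inputs per neuron, with 1 << K << N
    p = K / N          # connection probability for the sparse random connectivity

    J_self = 1.0 / np.sqrt(K)    # assumed effective within-population coupling, O(1/sqrt(K))
    J_cross = -1.1 / np.sqrt(K)  # assumed inhibitory coupling between the two populations

    def block(weight):
        # Erdos-Renyi block: each of the N x N possible connections exists
        # with probability p = K/N and carries the same weight.
        return weight * (rng.random((N, N)) < p)

    # 2N x 2N connectivity: each population couples to itself and inhibits the other.
    W = np.block([[block(J_self),  block(J_cross)],
                  [block(J_cross), block(J_self)]])

    # In the balanced regime each neuron also receives an external excitatory
    # drive of order sqrt(K), which the recurrent inhibition cancels on average.
    h_ext = 1.0 * np.sqrt(K) * np.ones(2 * N)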

In the limit 1 << K << N (where K is the average number of inputs per neuron and N is the population size), we show analytically that a continuum of balanced steady states exists. For finite K and infinite population size, we perform noisy numerical simulations of the mean-field equations, which show slow dynamics along this line of steady states. For finite population size, we perform full numerical simulations of up to ~10^5 binary neurons. We find that the chaotic dynamics of neural activity in the finite balanced network drive diffusive motion along the attractor, similar to the dynamics of networks in which noise arises from intrinsic neural or synaptic mechanisms [6]. Moreover, we can relate the diffusion coefficient to the statistics of single-neuron activity in a classical balanced network (without mutual inhibition). In addition to this diffusive motion, the network exhibits systematic drift when the network parameters are mistuned, and the overall dynamics along the attractor follow statistics similar to those of an Ornstein-Uhlenbeck process.
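The Ornstein-Uhlenbeck description can be made concrete with a short sketch: given a recorded trajectory of the coordinate along the attractor (for example, the difference between the two populations' mean activities), the relaxation rate and the diffusion coefficient follow from a regression of the increments on the current position. The trajectory below is synthetic, generated from an OU process with hypothetical parameters, purely to demonstrate the estimator; in the study the trajectory would come from the network simulation.

    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic stand-in for the attractor coordinate x(t); all values are
    # hypothetical and serve only to check the estimator against known truth.
    dt = 1e-3          # time step, in units of the single-neuron time scale
    theta_true = 0.2   # relaxation rate caused by parameter mistuning
    D_true = 0.05      # diffusion coefficient along the attractor
    T = 200_000

    x = np.empty(T)
    x[0] = 0.0
    noise = rng.normal(0.0, np.sqrt(2.0 * D_true * dt), size=T - 1)
    for t in range(T - 1):
        x[t + 1] = x[t] - theta_true * x[t] * dt + noise[t]

    # OU increments obey  dx = -theta * x * dt + sqrt(2 D dt) * xi,
    # so theta follows from regressing dx on x, and D from the residual variance.
    dx = np.diff(x)
    theta_hat = -np.dot(x[:-1], dx) / (np.dot(x[:-1], x[:-1]) * dt)
    residuals = dx + theta_hat * x[:-1] * dt
    D_hat = residuals.var() / (2.0 * dt)

    print(f"theta: true {theta_true:.3f}, estimated {theta_hat:.3f}")
    print(f"D:     true {D_true:.3f}, estimated {D_hat:.3f}")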

Combining analytical calculations with numerical simulations, we show that the diffusion coefficient along the attractor is inversely proportional to the network size. Thus, the persistence of the memory can be improved by increasing the number of neurons. In practice, ~10^5 neurons suffice in our model (with proper tuning of the synaptic weights) to achieve persistence times of several seconds, several orders of magnitude longer than the single-neuron time scale.
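The scaling of persistence with network size can be illustrated with a back-of-the-envelope sketch: if the diffusion coefficient scales as D = c/N, the time for the diffusive root-mean-square displacement sqrt(2 D t) to reach a tolerated error delta grows linearly with N, since t = delta^2 / (2 D). The constant c, the tolerance delta, and the 10 ms single-neuron time scale below are hypothetical values chosen only to show the trend, not numbers from the study.

    import numpy as np

    c = 1.0           # hypothetical constant in D = c / N (in units of 1/tau)
    delta = 0.1       # tolerated displacement along the attractor
    tau = 10e-3       # assumed single-neuron time scale, 10 ms, for conversion to seconds

    for N in (10**3, 10**4, 10**5):
        D = c / N
        t_persist = delta**2 / (2.0 * D)      # persistence time in units of tau
        print(f"N = {N:>6d}:  D = {D:.1e},  persistence ~ {t_persist * tau:.2f} s")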

[1] C. van Vreeswijk and H. Sompolinsky, Science 274(5293): 1724-1726 (1996).
[2] C. van Vreeswijk and H. Sompolinsky, Neural Comput. 10(6): 1321-1371 (1998).
[3] D. Hansel and G. Mato, J. Neurosci. 33(1): 133-149 (2013).
[4] S. Lim and M. Goldman, Nat. Neurosci. 16(9): 1306-1314 (2013).
[5] S. Lim and M. Goldman, J. Neurosci. 34(20): 6790-6806 (2014).
[6] Y. Burak and I. Fiete, PNAS 109(43): 17645-17650 (2012).