FORCE learning on spiking networks
Ben Dongsung Huh, Peter Latham
Gatsby Computational Neuroscience Unit, UCL

Neurons in the brain often exhibit complex activity patterns, with fluctuations on time scales of several seconds. The generation of complex patterns is critical for directing movements and is likely to be involved in processing time-varying input (such as speech). However, it is not yet understood how networks of spiking neurons, whose time constants are only a few milliseconds, could exhibit such slow dynamics. This stands in contrast to rate-based neural networks, which can be trained to generate arbitrarily complex activity patterns by an iterative training method (FORCE learning [1]). So far, however, FORCE learning has not led to successful training of spiking neural networks.
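
To make the training method concrete, here is a minimal sketch of FORCE learning on a rate network, following the recursive-least-squares (RLS) scheme of Sussillo and Abbott [1]. The network size, gain, target function, and all other parameter values are illustrative assumptions, not those used by the authors.

```python
import numpy as np

N = 500          # network size (assumed)
g = 1.5          # recurrent gain; g > 1 puts the rate network in the chaotic regime
dt = 0.1         # integration step, in units of the neural time constant
alpha = 1.0      # RLS regularization (assumed)
T = 20000        # number of training steps

rng = np.random.default_rng(0)
J = g * rng.standard_normal((N, N)) / np.sqrt(N)  # fixed random recurrent weights
w = np.zeros(N)                                   # readout weights (trained)
u = rng.uniform(-1, 1, N)                         # fixed feedback weights
P = np.eye(N) / alpha                             # running inverse correlation matrix for RLS

x = 0.5 * rng.standard_normal(N)                  # network state
t_axis = np.arange(T) * dt
f_target = np.sin(0.05 * t_axis)                  # assumed periodic target signal

for t in range(T):
    r = np.tanh(x)                    # firing rates
    z = w @ r                         # linear readout
    # leaky rate dynamics with the readout fed back into the network
    x += dt * (-x + J @ r + u * z)
    # RLS update: nudge w so that z tracks the target
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)
    P -= np.outer(k, Pr)
    w -= (z - f_target[t]) * k
```

After training, the loop is run with the weight updates switched off; if learning has succeeded, the readout z continues to reproduce the target autonomously.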

Both rate networks and spiking networks exhibit chaotic activity that grows with the strength of the recurrent connectivity. In rate networks, a fluctuating input signal efficiently suppresses this chaotic activity [2], which is critical for successful learning. Surprisingly, however, such chaos suppression does not occur in strongly recurrent spiking networks: the microscopic chaos in spike timing is not tamed by slow input fluctuations, which makes FORCE learning challenging in spiking networks.
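
A rough way to probe input-driven chaos suppression, in the spirit of Rajan, Abbott and Sompolinsky [2], is to run two copies of the same chaotic rate network from nearly identical initial states and check whether the perturbation grows. The sketch below is one such assumed diagnostic; the gain, drive amplitude, and frequency are illustrative choices, not values from the abstract.

```python
import numpy as np

N, g, dt, T = 500, 1.5, 0.1, 5000
rng = np.random.default_rng(1)
J = g * rng.standard_normal((N, N)) / np.sqrt(N)  # shared recurrent weights
b = rng.standard_normal(N)                        # input projection (assumed)

def divergence(amplitude, freq=0.1):
    """Final distance between two copies of the network that start
    a tiny perturbation apart and receive the same sinusoidal drive."""
    x1 = rng.standard_normal(N)
    x2 = x1 + 1e-6 * rng.standard_normal(N)
    for t in range(T):
        drive = amplitude * np.sin(freq * t * dt) * b
        x1 += dt * (-x1 + J @ np.tanh(x1) + drive)
        x2 += dt * (-x2 + J @ np.tanh(x2) + drive)
    return np.linalg.norm(x1 - x2)

# Without drive the perturbation should grow (chaos); with a sufficiently
# strong drive it should shrink, signaling chaos suppression [2].
print("no drive:     ", divergence(0.0))
print("strong drive: ", divergence(2.0))
```

The abstract's point is that the spiking analogue of this experiment behaves differently: in strongly recurrent spiking networks the spike-timing divergence persists even under slow drive.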

We investigate how recurrent connectivity and input signal strength affect chaos suppression in spiking networks, and show that in the weakly recurrent regime, spiking networks can indeed be trained to generate arbitrary periodic signals as well as chaotic signals of the Lorenz system.
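
For the chaotic targets mentioned above, a trajectory of the Lorenz system can serve as the training signal. Below is a minimal sketch of one way to generate such a target; the standard chaotic parameters (sigma = 10, rho = 28, beta = 8/3) are well known, while the step size, initial condition, and normalization are assumptions for illustration.

```python
import numpy as np

def lorenz_target(T, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz equations with forward Euler and return the
    trajectory, rescaled to order one for use as a training target."""
    xyz = np.array([1.0, 1.0, 1.0])
    traj = np.empty((T, 3))
    for t in range(T):
        x, y, z = xyz
        dxyz = np.array([sigma * (y - x),
                         x * (rho - z) - y,
                         x * y - beta * z])
        xyz = xyz + dt * dxyz
        traj[t] = xyz
    return traj / np.abs(traj).max(axis=0)  # normalize each component

target = lorenz_target(10000)  # e.g., use target[:, 0] as the readout target
```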

[1] D. Sussillo and L.F. Abbott, Neuron 63(4):544-557 (2009).
[2] K. Rajan, L.F. Abbott and H. Sompolinsky, Physical Review E 82(1):011903 (2010).