Superposition of information in large ensembles of neurons in primary visual cortex
Stefan Häusler1, Wolf Singer2,3, Wolfgang Maass1 and Danko Nikolic2,3
1 Inst. for Theoretical Computer Science, Graz Univ. of Technology, Austria; 2 Department of Neurophysiology, Max-Planck-Institute for Brain Research, Frankfurt (Main), Germany; 3 Frankfurt Inst. for Advanced Studies, Wolfgang Goethe Univ., Frankfurt (Main), Germany

We applied methods from machine learning to analyze the temporal evolution of stimulus-related information in the spiking activity of large ensembles of around 100 neurons in the primary visual cortex of anesthetized cats. We presented sequences of up to 3 different visual stimuli (letters), each lasting 100 ms and following one another at intervals of 100 ms. We found that most of the information about the visual stimuli extractable by advanced machine-learning methods (e.g., Support Vector Machines) could also be extracted by simple linear classifiers (perceptrons); hence, in principle, this information can be extracted by a biological neuron. A surprising result was that new stimuli did not erase information about previous stimuli. In fact, information about the identity of the preceding stimulus remained as high as the information about the current stimulus. Separately trained linear readouts could retrieve information about both the current and the preceding stimulus from responses to the current stimulus. This information was encoded both in the discharge rates (response amplitudes) of the ensemble of neurons and in the precise timing of individual spikes, and it persisted for several hundred milliseconds beyond stimulus offset.
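The following is a minimal sketch, on synthetic data, of the type of decoding analysis described above: separately trained readouts recover the current and the preceding stimulus from population responses, comparing a linear perceptron with a nonlinear SVM. The use of scikit-learn's Perceptron and SVC, the sample sizes, the noise levels, and the way the "population responses" are generated are illustrative assumptions, not the study's actual recording or analysis pipeline.

```python
# Sketch of decoding current and preceding stimuli from population activity.
# All data are synthetic; classifiers and parameters are illustrative only.
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import Perceptron
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons, n_letters = 600, 100, 3

# Hypothetical trial labels: letter shown in the current frame and letter
# shown in the preceding frame of the sequence.
current_stim = rng.integers(0, n_letters, size=n_trials)
preceding_stim = rng.integers(0, n_letters, size=n_trials)

# Synthetic "population responses": spike counts in a window after the
# current stimulus, carrying signatures of both stimuli plus noise.
tuning_cur = rng.normal(0, 1, size=(n_letters, n_neurons))
tuning_pre = rng.normal(0, 1, size=(n_letters, n_neurons))
X = (5.0
     + tuning_cur[current_stim]
     + 0.8 * tuning_pre[preceding_stim]
     + rng.normal(0, 1.5, size=(n_trials, n_neurons)))

# Separately trained readouts for the current vs. the preceding stimulus.
for label, y in [("current stimulus", current_stim),
                 ("preceding stimulus", preceding_stim)]:
    acc_lin = cross_val_score(Perceptron(max_iter=1000), X, y, cv=5).mean()
    acc_svm = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
    print(f"{label}: perceptron {acc_lin:.2f}, SVM {acc_svm:.2f}")
```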

This superposition of information about sequentially presented stimuli constrains computational models of visual processing. It poses a conundrum for models that assume a separate classification process for each frame of visual input, and it supports models of cortical computation ([1], [2]) which argue that frame-by-frame processing is neither feasible within highly recurrent networks nor useful for classifying and predicting rapidly changing stimulus sequences. Specific predictions of these alternative computational models are that i) information from different frames of visual input is superimposed in recurrent circuits and ii) nonlinear combinations of different information components are immediately available in the spike output. Our results indicate that the network from which we recorded provided nonlinear combinations of information from sequential frames. Such nonlinear preprocessing increases the discrimination capability of any linear readout neurons receiving distributed input from the kind of cells we recorded from. These readout neurons could be implemented within V1 and/or at subsequent processing levels.
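One way to probe whether a population already provides nonlinear combinations of information from sequential frames is to ask whether a purely linear readout can decode a nonlinear function of the two stimulus labels, for example "were the current and preceding letters the same?". The sketch below illustrates this test on synthetic data; the multiplicative interaction built into the fake responses, the Perceptron readout, and the balanced-accuracy score are assumptions for illustration, not the study's recorded activity or reported analysis.

```python
# Sketch: can a *linear* readout decode a *nonlinear* combination of the
# current and preceding stimulus? Synthetic data, illustrative only.
import numpy as np
from sklearn.linear_model import Perceptron
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_neurons, n_letters = 600, 100, 3
cur = rng.integers(0, n_letters, n_trials)
pre = rng.integers(0, n_letters, n_trials)

# Hypothetical population response with a joint (nonlinear) dependence on
# the stimulus pair, standing in for recurrently mixed cortical activity.
mix = rng.normal(0, 1, (n_letters, n_letters, n_neurons))
X = 5.0 + mix[cur, pre] + rng.normal(0, 1.5, (n_trials, n_neurons))

# Target is a nonlinear function of the two labels (same letter or not).
y = (cur == pre).astype(int)

# Balanced accuracy is used because "same" trials are rarer than "different".
acc = cross_val_score(Perceptron(max_iter=1000), X, y, cv=5,
                      scoring="balanced_accuracy").mean()
print(f"linear readout of 'same letter?': balanced accuracy {acc:.2f}")
```

If the population merely represented each frame linearly and independently, a linear readout could not solve such an XOR-like target; above-chance performance therefore indicates nonlinear mixing upstream of the readout.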

[1] D.V. Buonomano and M.M. Merzenich. Science. 267:1028-1030 (1995).
[2] W. Maass, T. Natschläger and H. Markram. Neural Computation. 14(11):2531-2560 (2002).