Probabilistic Inference with Spiking Neurons: A New Role for Divisive Inhibition
Approaches based on the assumption that neural systems learn the underlying causes of their sensory input have been successful in accounting for response properties in the very first stages of sensory processing. Thus, in the visual domain, center/surround receptive fields (RFs) and simple-cell RFs arise from the hypothesis that images are composed of a linear combination of independent causes.
Assuming a linear combination has the advantage of mathematical tractability, but it is a relatively poor description of natural stimuli. For example, natural images are typically composed of superimposed objects: the luminance of a pixel is determined by one object or another, not by a combination of both. Moreover, these models are most often static, whereas the sensory system has to estimate the state of constantly changing variables (e.g., the presence of an oriented edge in a movie) from noisy streams of stochastic data (photons, spikes).
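The occlusion argument can be illustrated in a few lines. This is a toy sketch with hypothetical 1-D binary "object" patterns, not data from the study: where two objects overlap, a pixel is lit if either object covers it, whereas a linear model would add their contributions.

```python
import numpy as np

# Two hypothetical binary "object" patterns (illustrative values only).
a = np.array([1, 1, 0, 0], dtype=bool)
b = np.array([0, 1, 1, 0], dtype=bool)

# Linear superposition adds contributions where objects overlap.
linear = a.astype(int) + b.astype(int)   # [1, 2, 1, 0]

# Occlusion is an "or": a pixel is lit by one object or the other, not both.
occluded = (a | b).astype(int)           # [1, 1, 1, 0]

print(linear.tolist())    # [1, 2, 1, 0]
print(occluded.tolist())  # [1, 1, 1, 0]
```

The overlapping pixel is where the two generative assumptions disagree, which is what makes the linear model a poor fit for occluding objects.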
We propose as a first approximation to consider the sensory input as an "or" combination of independent binary hidden Markov processes. We found that this model, applied to natural stimuli, accounts for a large set of extra-classical RF effects, in particular the RF shape of visual neurons as a function of time and contrast. Here, we show that inference and learning in such a model can be approximated by a network of leaky integrate-and-fire neurons, where each neuron specializes for a particular time-varying cause. Feedforward connections represent the basic components of the images, while lateral connections implement competition between the causes: units (causes) that are already activated should prevent other units from taking into account the input they already explain. This "explaining away" corresponds to a divisive (shunting) inhibition, as widely observed in primary visual cortex.
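The scheme above can be sketched as a small simulation. This is a minimal illustration under stated assumptions, not the authors' implementation: two binary hidden Markov causes generate the input by "or"-ing their pixel templates, and two leaky integrate-and-fire units receive a feedforward drive that is divisively inhibited by how much of the input the other unit's activity already explains. All templates, transition probabilities, and time constants are hypothetical values chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feedforward templates: which pixels each cause lights up.
W = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
p_on, p_off = 0.05, 0.05   # assumed Markov transition probabilities per step

T, dt, tau = 500, 0.1, 1.0
x = np.zeros(2)            # hidden binary causes
u = np.zeros(2)            # membrane potentials of the two units
rates = np.zeros(2)        # low-pass filtered spike trains, used for inhibition

for t in range(T):
    # One Markov step per binary cause.
    flip = rng.random(2) < np.where(x == 1, p_off, p_on)
    x = np.where(flip, 1 - x, x)
    # Input image: "or" combination of the active causes' templates.
    image = (W.T @ x > 0).astype(float)

    # Divisive (shunting) inhibition: each unit's feedforward input is divided
    # by how much of that input the OTHER units' activity already explains.
    explained = rates @ W                          # per-pixel explanation
    for i in range(2):
        others = np.clip(explained - rates[i] * W[i], 0.0, None)
        drive = W[i] @ (image / (1.0 + others))    # explained-away input
        u[i] += dt / tau * (-u[i] + drive)         # leaky integration
    spikes = u > 1.0                               # fire on threshold crossing
    u[spikes] = 0.0                                # reset after a spike
    rates = 0.95 * rates + 0.05 * spikes
```

The divisive term plays the role of "explaining away": when one unit is active and its template covers a pixel, the effective input that pixel contributes to the other unit shrinks, so competition is implemented by division rather than subtraction, consistent with shunting inhibition.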