Motor Programming with Population Code-Based Networks
A growing body of experiments has shown that the brain integrates sensory and motor information in a nearly optimal fashion. The Bayesian approach provides a principled theoretical framework for understanding this integration. Several theoretical studies have investigated how probability distributions could be represented in the activity of a population of neurons. In this study, we propose how recurrent networks could build, over time, a representation of the probability distributions of relevant sensory and motor variables, combining information about the executed motor commands and the sensory feedback. We explore the implications of this model for neural responses in brain areas involved in sensorimotor integration and motor control.
The sensorimotor network model is a layer of units i representing various preferred combinations of sensory and motor states xi, at various time shifts ti relative to the actual time (time shifts can be negative, i.e. lagging behind the true state of the system, or positive, i.e. anticipating the future state of the system). Sensory inputs, in the form of noisy population codes, and motor commands are applied to this layer. In parallel, recurrent connections within the layer predict the sensorimotor state at various time shifts, such that the connection between neurons i and j corresponds to the probability of being in state j at time tj given that the system was in state i at time ti. When tj>ti, these connections predict the future given the current state, and implement a probabilistic forward model of the motor dynamics. When tj<ti, the connections implement a probabilistic inverse model, able to compute the sequence of motor commands leading from an initial state to a desired final state. The propagation of activity in this network is equivalent to the forward-backward algorithm used for inference in hidden Markov models.
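The equivalence with the forward-backward algorithm can be sketched on a small discrete state space. In this hypothetical illustration (all sizes and matrices are arbitrary assumptions, not the model's actual parameters), the row-stochastic transition matrix T plays the role of the recurrent connections (T[i, j] approximating the probability of state j at the next step given state i now), and the likelihood vectors stand in for the noisy population-coded sensory inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_steps = 5, 8

# Row-stochastic transition matrix: the probabilistic forward model.
T = rng.random((n_states, n_states))
T /= T.sum(axis=1, keepdims=True)

# Noisy sensory likelihoods P(observation_t | state), one row per time step.
lik = rng.random((n_steps, n_states))

# Forward pass: alpha_t is proportional to P(state_t | observations up to t).
alpha = np.zeros((n_steps, n_states))
alpha[0] = lik[0] / lik[0].sum()
for t in range(1, n_steps):
    a = (alpha[t - 1] @ T) * lik[t]
    alpha[t] = a / a.sum()

# Backward pass: beta_t is proportional to P(future observations | state_t).
beta = np.ones((n_steps, n_states))
for t in range(n_steps - 2, -1, -1):
    b = T @ (beta[t + 1] * lik[t + 1])
    beta[t] = b / b.sum()

# Smoothed posterior over states at each time step, combining both passes.
posterior = alpha * beta
posterior /= posterior.sum(axis=1, keepdims=True)
```

In the network analogy, the forward pass corresponds to activity propagating through connections with tj>ti (prediction), and the backward pass to propagation through connections with tj<ti (inference of past or desired states).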
Using such a network architecture, a sensorimotor area could compute the most precise estimate of the current sensorimotor state given past motor commands and sensory feedback, perform optimal trajectory planning given the motor noise, and learn the forward and inverse dynamics of the sensorimotor system with simple Hebbian learning rules. We applied this approach to the control of a simplified arm, with priors on low accelerations. As reported previously, this predicts smooth bell-shaped velocity profiles, as observed in human arm movements.
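A closed-form illustration of the predicted velocity shape (not the network computation itself) is the classical minimum-jerk trajectory of Flash and Hogan, which a smoothness prior of this kind favors; the function name and parameters below are our own:

```python
import numpy as np

def min_jerk(x0, xf, duration, n=101):
    """Position and velocity along a straight minimum-jerk reach."""
    t = np.linspace(0.0, duration, n)
    tau = t / duration  # normalized time in [0, 1]
    # Minimum-jerk position polynomial and its time derivative.
    pos = x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
    vel = (xf - x0) / duration * (30 * tau**2 - 60 * tau**3 + 30 * tau**4)
    return t, pos, vel

# Example: a 20 cm reach lasting 0.5 s.
t, pos, vel = min_jerk(0.0, 0.2, 0.5)

# The speed profile is unimodal and symmetric, peaking at mid-movement,
# i.e. the bell-shaped profile observed in human arm movements.
peak_index = int(np.argmax(vel))
```

The velocity starts and ends at zero and peaks exactly at mid-movement, reproducing the bell shape mentioned above.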