Learning dynamics of reaching movements
Reza Shadmehr
Johns Hopkins University
When we move the hand from one point to another, the brain guides
the arm by relying on neural structures that estimate the physical
dynamics of the task and transform the desired motion into motor
commands. If the hand is holding an object, these neural structures
take the resulting subtle changes in the dynamics of the arm into
account, and the motor commands are altered accordingly. These
observations have suggested that in generating motor commands, the
brain relies on internal models that predict physical dynamics of the
external world. Here, I will review the neural and computational data
on how the brain learns internal models of reaching movements. The
data suggest that internal models are sensorimotor transformations
that map the sensory state of the arm into an estimate of force. If
we assume that this neural computation is performed via a population
code, the properties of the tuning curves of the computational
elements can be inferred from the patterns of generalization and from
trial-to-trial changes in performance. A new theory is presented that
quantifies generalization from trial-to-trial changes in performance.
The patterns of generalization appear consistent with computational
elements whose tuning is bimodal in velocity space and whose
discharge is modulated linearly as a function of the static position
of the hand. These gain-field properties are consistent with the
tuning curves of some cells in the primary motor cortex and the
cerebellum.
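To make the computational idea concrete, the following is a minimal sketch (not the model from the paper) of an internal model implemented as a population code: basis elements tuned to hand velocity, with discharge scaled linearly by hand position (a gain field), whose weighted sum predicts the force on the hand. For simplicity the velocity tuning here is a unimodal Gaussian rather than the bimodal tuning inferred in the paper, and all parameter values (number of elements, tuning width, learning rate, field strength) are illustrative assumptions. The weights adapt trial by trial with an LMS (delta) rule, so performance changes from one trial to the next generalize to nearby states through the overlap of the tuning curves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Basis elements: Gaussian velocity tuning (unimodal here, for simplicity)
# multiplied by a linear function of static hand position -- a gain field.
n_basis = 64
centers = rng.uniform(-0.5, 0.5, size=(n_basis, 2))    # preferred velocities (m/s)
sigma = 0.15                                           # velocity tuning width (m/s)
pos_slope = rng.uniform(-2.0, 2.0, size=(n_basis, 2))  # gain-field slopes (1/m)

def activity(vel, pos):
    """Population activity for a given hand state (velocity, position)."""
    tuning = np.exp(-np.sum((centers - vel) ** 2, axis=1) / (2 * sigma ** 2))
    return tuning * (1.0 + pos_slope @ pos)

# Environment: a viscous "curl" force field, f = B v, a standard
# perturbation in reach-adaptation experiments.
B = np.array([[0.0, 13.0], [-13.0, 0.0]])  # N*s/m

W = np.zeros((2, n_basis))  # readout weights: population activity -> force
eta = 0.05                  # learning rate (illustrative)

errors = []
for trial in range(1000):
    vel = rng.uniform(-0.5, 0.5, size=2)   # hand velocity on this trial
    pos = rng.uniform(-0.1, 0.1, size=2)   # static hand position
    g = activity(vel, pos)
    f_hat = W @ g                          # internal-model force prediction
    err = B @ vel - f_hat                  # prediction error on this trial
    W += eta * np.outer(err, g)            # trial-to-trial LMS update
    errors.append(np.linalg.norm(err))
```

Because each basis element is active over a neighborhood of velocities, an error experienced in one state also updates the prediction for neighboring states; the shape of that spillover is exactly the pattern of generalization from which tuning properties can be inferred.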