Princeton Neuroscience Institute and Department of Psychology, Princeton University, USA
Wednesday 16 April 2008
Seminar Room B10 (Basement)
Alexandra House, 17 Queen Square, London, WC1N 3AR
A computational substrate for goal-directed behavior
Behavioral and neuroscientific research increasingly suggests that action selection occurs within two distinct systems: a ‘habit’ system, which exploits pre-established situation-action associations, and a ‘goal-directed’ system, which predicts and compares the consequences of available lines of behavior. Goal-directed action selection is believed to depend critically on both dorsolateral and orbital prefrontal cortex. However, the computations performed by these areas in the generation of goal-directed behavior are not yet well understood. Raw materials for a computational account can be found in operations research, which offers a set of techniques for solving sequential decision problems. One particularly germane approach characterizes decision-making as a form of probabilistic inference. We propose a model of prefrontal function that builds on this computational foundation. The model takes the form of a probabilistic graphical model, encoding the dependencies among three sets of variables: decision variables, which, like some dorsolateral prefrontal neurons, code for behavioral decision rules; utility variables, which, like some orbitofrontal neurons, code for incentive value; and state variables, which together encode a causal model of the environment. Applying standard procedures for Bayesian inference, networks of this kind can be proven to converge on optimal behavioral policies. More importantly, the framework provides an account of numerous empirical phenomena, including latent learning, detour behavior, and the effects of devaluation and effort on instrumental choice.
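As a rough illustration of the planning-as-inference idea described in the abstract (a minimal sketch, not the speaker's actual model; the task, outcome names, and probabilities below are all hypothetical), one can treat utility as the probability of a binary reward variable and infer a distribution over actions by marginalizing over a causal model of state transitions. Because the environment model and the utilities are represented separately, devaluing an outcome immediately changes the inferred policy without any relearning of the transition structure, mirroring the devaluation effects mentioned above:

```python
# Hypothetical two-step task: from "start", the agent chooses "left" or
# "right", each leading stochastically to an outcome.

# State variables: a causal model of the environment,
# P(outcome | state, action).
T = {
    ("start", "left"):  {"fruit": 0.9, "empty": 0.1},
    ("start", "right"): {"sweet": 0.9, "empty": 0.1},
}

def utility(outcome, devalued=None):
    """Utility variable: incentive value of each outcome, read as the
    probability that a binary 'reward' variable equals 1."""
    base = {"fruit": 0.6, "sweet": 1.0, "empty": 0.0}
    return 0.0 if outcome == devalued else base[outcome]

def policy(devalued=None):
    """Decision variable: infer P(action | reward = 1) by Bayes' rule,
    assuming a uniform prior over actions."""
    # Likelihood of reward under each action:
    # P(reward = 1 | action) = sum_s P(s | start, action) * U(s)
    likelihood = {
        action: sum(p * utility(s, devalued) for s, p in dist.items())
        for (state, action), dist in T.items()
    }
    z = sum(likelihood.values())
    return {a: l / z for a, l in likelihood.items()}

print(policy())                  # favors "right" (sweet outcome valued higher)
print(policy(devalued="sweet"))  # after devaluation, preference flips to "left"
```

In this toy version, conditioning on the reward variable and inverting the model yields the behavioral policy; devaluation alters only the utility term, so the policy shifts instantly, as a goal-directed (rather than habitual) system would predict.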