
Gatsby Computational Neuroscience Unit




Paul Schrater
http://gandalf.psych.umn.edu/users/schrater/PaulSchrater.htm
(Psychology & Computer Science)

Wednesday 25th April 2012, 16:00
B10 Seminar Room, Basement,
Alexandra House, 17 Queen Square, London, WC1N 3AR

"Rational Learning may be minimalist"

 

What sorts of things should you learn about the environment? To be concrete, if I expose you to the projectile motions in a game like Angry Birds, how much will you learn about the trajectories of the birds? At one extreme, you might use extensive feedback to hone strategies for controlling the bird's destructive desires without understanding the details of the trajectory; at the other extreme, you may acquire a highly accurate predictive model of trajectories that allows for complex and novel interactions, like mid-flight control. Current learning theory offers little guidance for predicting which aspects of the environment will be learned, and what sorts of task and feedback will facilitate or inhibit learning richer internal models. I will describe experimental results from our lab that support a kind of minimalist learning strategy we might phrase as "only learn what you need". Minimalist learning predicts that internal models will only be acquired when the task requires prediction or counterfactual reasoning, and when model improvement can be anticipated to improve performance. I will show counter-intuitive experimental evidence for this hypothesis in a family of Angry Birds-like tasks, including more learning with less-reliable data, no learning with full feedback, and how subtle changes to the predictive requirements of a task can lead to large differences in internal model learning. I will show how modeling learning from a Bayesian adaptive control perspective with cognitive costs can provide a normative framework for minimalist learning, and will argue that minimalist learning may be critical for skill formation.
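The minimalist-learning criterion described in the abstract (acquire an internal model only when the task requires prediction and model improvement can be anticipated to pay off) can be sketched as a toy decision rule. This is an illustrative sketch, not the speaker's actual model; all names and numeric values below are hypothetical.

```python
def should_learn_model(task_requires_prediction: bool,
                       anticipated_gain: float,
                       cognitive_cost: float) -> bool:
    """Toy minimalist-learning rule: invest in a richer internal model
    only when BOTH conditions from the abstract hold -- the task demands
    prediction/counterfactual reasoning, AND the anticipated performance
    gain from a better model exceeds its cognitive cost."""
    return task_requires_prediction and anticipated_gain > cognitive_cost


# Feedback-only control task: strategies can be tuned without prediction,
# so no model learning is predicted, even with full feedback.
print(should_learn_model(False, anticipated_gain=0.8, cognitive_cost=0.2))  # False

# Prediction task where better trajectory estimates are expected to help.
print(should_learn_model(True, anticipated_gain=0.8, cognitive_cost=0.2))   # True

# Prediction task where modeling effort is not expected to pay off.
print(should_learn_model(True, anticipated_gain=0.1, cognitive_cost=0.2))   # False
```

On this reading, the counter-intuitive findings (e.g. no learning with full feedback) fall out of the first condition: when feedback alone suffices for control, prediction is never required, so the model is never acquired.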

Gatsby Computational Neuroscience Unit - Alexandra House - 17 Queen Square - London - WC1N 3AR - Telephone: +44 (0)20 7679 1176

© UCL 1999–2011