Q-learning

Chris Watkins and Peter Dayan
Machine Learning, 8, 279-292 (1992).


Abstract

Q-learning (Watkins, 1989) is a simple way for agents to learn how to act optimally in controlled Markovian domains. It amounts to an incremental method for dynamic programming which imposes limited computational demands. It works by successively improving its evaluations of the quality of particular actions at particular states. The paper presents and proves in detail a convergence theorem for Q-learning. It shows that Q-learning converges to the optimum action-values with probability 1 so long as all actions are repeatedly sampled in all states and the action-values are represented discretely. Extensions are also sketched to the cases of non-discounted, but absorbing, Markov environments, and where many Q values can be updated each iteration, rather than just one.
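The one-step update at the heart of the method moves Q(s, a) toward the sampled backup r + gamma * max_a' Q(s', a'). Below is a minimal tabular sketch in Python, not the paper's own notation: the environment interface (reset(), step(action) returning (next_state, reward, done), actions(state)) and the epsilon-greedy exploration scheme are illustrative assumptions.

import random
from collections import defaultdict

def q_learning(env, n_episodes=5000, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Return a table of learned action-values q[state][action]."""
    q = defaultdict(lambda: defaultdict(float))
    for _ in range(n_episodes):
        state = env.reset()
        done = False
        while not done:
            # Epsilon-greedy exploration keeps every action being sampled
            # in every state, as the convergence theorem requires.
            if random.random() < epsilon or not q[state]:
                action = random.choice(env.actions(state))
            else:
                action = max(q[state], key=q[state].get)
            next_state, reward, done = env.step(action)
            # One-step Q-learning update: nudge Q(s, a) toward the
            # sampled backup r + gamma * max_a' Q(s', a').
            best_next = max(q[next_state].values(), default=0.0)
            q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
            state = next_state
    return q

Under the theorem's conditions (discrete table, all state-action pairs visited infinitely often, suitably decaying learning rates), the values in q converge to the optimal action-values with probability 1.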
