The Infinite Hidden Markov Model

Matthew J. Beal, Gatsby Computational Neuroscience Unit, UCL
Zoubin Ghahramani, Gatsby Computational Neuroscience Unit, UCL
Carl Edward Rasmussen, Gatsby Computational Neuroscience Unit, UCL

We show that it is possible to extend hidden Markov models to have a countably infinite number of hidden states. By using the theory of Dirichlet processes we can implicitly integrate out the infinitely many transition parameters, leaving only three hyperparameters which can be learned from data. These three hyperparameters define a hierarchical Dirichlet process capable of capturing a rich set of transition dynamics; they control the time scale of the dynamics, the sparsity of the underlying state-transition matrix, and the expected number of distinct hidden states in a finite sequence. In this framework it is also natural to allow the alphabet of emitted symbols to be countably infinite: consider, for example, symbols being possible words appearing in English text.
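The abstract does not spell out the construction, but the hierarchical Dirichlet process described in the paper can be simulated as a two-level Polya-urn ("oracle") scheme over transition counts. The Python sketch below draws a hidden-state trajectory from such a prior; the function name, the default hyperparameter values, and the bookkeeping details are illustrative assumptions rather than the authors' reference implementation. Roughly, alpha adds mass to self-transitions, beta is the mass each state routes to a shared oracle urn, and gamma is the new-state mass at the oracle.

    import random
    from collections import defaultdict

    def sample_ihmm_trajectory(T, alpha=1.0, beta=1.0, gamma=1.0, seed=0):
        """Draw a length-T state sequence from a two-level urn prior.

        Hyperparameter roles (illustrative, following the paper's scheme):
        alpha -- extra mass on self-transitions (sets the time scale),
        beta  -- mass routed from every state to a shared 'oracle' urn,
        gamma -- new-state mass at the oracle (governs state creation).
        """
        rng = random.Random(seed)
        counts = defaultdict(lambda: defaultdict(int))  # counts[i][j]: observed i -> j transitions
        oracle = defaultdict(int)                       # oracle visit counts per state
        num_states, s = 1, 1                            # states labelled 1, 2, ...
        trajectory = [s]
        for _ in range(T - 1):
            # Level 1: reuse past transitions, favour staying put, or ask the oracle.
            w = {j: c for j, c in counts[s].items()}
            w[s] = w.get(s, 0) + alpha
            w["oracle"] = beta
            nxt = rng.choices(list(w), weights=list(w.values()))[0]
            if nxt == "oracle":
                # Level 2: the oracle reuses globally popular states or creates a new one.
                ow = dict(oracle)
                ow["new"] = gamma
                nxt = rng.choices(list(ow), weights=list(ow.values()))[0]
                if nxt == "new":
                    num_states += 1
                    nxt = num_states
                oracle[nxt] += 1
            counts[s][nxt] += 1
            s = nxt
            trajectory.append(s)
        return trajectory

    print(sample_ihmm_trajectory(25, alpha=2.0, beta=1.0, gamma=2.0))

Consistent with the abstract's claims, raising gamma tends to increase the number of distinct states visited in a fixed-length sequence, raising alpha slows the dynamics by favouring self-transitions, and the count-proportional reuse of past transitions yields sparse effective transition matrices.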

Presented at NIPS*2001; to appear in Advances in Neural Information Processing Systems 14, eds. T. Dietterich, S. Becker, and Z. Ghahramani, MIT Press (2002).

Available in PostScript (ps) and PDF formats.