Laplace's method in neural decoding
Shinsuke Koyama1,2 , Lucia Castellanos Perez-Bolde3 , Cosma Rohilla Shalizi1 and Robert E. Kass1,2
1Department of Statistics, 2Center for the Neural Basis of Cognition, 3Department of Machine Learning, Carnegie Mellon University, USA

State-space models are a promising technique for neural decoding, especially in domains like neural prostheses where the signal to be reconstructed has significant temporal structure. The optimal estimate of the state is its conditional expectation given the observed spike-train histories, but taking this expectation is computationally hard, especially when nonlinearities are present. Existing filtering methods, including sequential Monte Carlo, tend to be either inaccurate or slow. In this paper, we propose a new nonlinear filter which uses Laplace's method, an asymptotic series expansion, to approximate the conditional mean and variance, together with a Gaussian approximation to the conditional distribution of the state. This "Laplace-Gaussian filter (LGF)" gives fast, recursive, deterministic state estimates, with an approximation error whose size is set by the stochastic characteristics of the model and which, as we show, remains stable over time. We illustrate the decoding ability of the LGF by applying it to a simulation of the cortical control of hand motion, where it delivers superior results to sequential Monte Carlo in a fraction of the time.
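The sketch below illustrates the first-order version of this idea for a single filtering step: propagate the current Gaussian belief through the state dynamics, then use Laplace's method, locating the posterior mode by Newton's method and taking the curvature there, to form a new Gaussian approximation. The specific model (linear-Gaussian dynamics, Poisson spike counts with log-linear tuning) and all parameter values are illustrative assumptions, not the particular decoding model used in the paper.

```python
import numpy as np

# Minimal one-step sketch of a Laplace-Gaussian filter update (illustrative only).
# Assumed model (not from the paper): linear-Gaussian state transition
#   x_t = A x_{t-1} + noise,  noise ~ N(0, Q),
# and Poisson spike counts with log-linear rates lambda_i = exp(B[i] @ x_t + c[i]).

def lgf_step(mu_prev, Sigma_prev, y, A, Q, B, c, n_newton=10):
    """One predict/update cycle of a first-order Laplace-Gaussian filter.

    mu_prev, Sigma_prev : Gaussian approximation to p(x_{t-1} | spikes so far)
    y                   : vector of spike counts observed in the current bin
    A, Q                : state transition matrix and process-noise covariance
    B, c                : tuning parameters; neuron i fires at rate exp(B[i] @ x + c[i])
    """
    # Predict: push the Gaussian belief through the linear dynamics.
    mu_pred = A @ mu_prev
    Sigma_pred = A @ Sigma_prev @ A.T + Q
    P = np.linalg.inv(Sigma_pred)  # prior precision for this step

    # Update: Laplace's method. Find the mode of
    #   log p(x | y) = y'(Bx + c) - sum_i exp(B[i]x + c[i])
    #                  - 0.5 (x - mu_pred)' P (x - mu_pred) + const
    # by Newton iterations starting from the predicted mean.
    x = mu_pred.copy()
    for _ in range(n_newton):
        rate = np.exp(B @ x + c)
        grad = B.T @ (y - rate) - P @ (x - mu_pred)
        hess = -(B.T * rate) @ B - P          # Hessian of the log posterior
        x = x - np.linalg.solve(hess, grad)   # Newton step toward the mode

    # Gaussian approximation: mean at the mode, covariance from the
    # inverse of the negative Hessian evaluated at the mode.
    rate = np.exp(B @ x + c)
    Sigma_post = np.linalg.inv((B.T * rate) @ B + P)
    return x, Sigma_post


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, n_neurons = 2, 20                      # state dimension, number of neurons
    A = 0.95 * np.eye(d)                      # slowly decaying dynamics (made up)
    Q = 0.01 * np.eye(d)
    B = rng.normal(size=(n_neurons, d))       # random tuning directions
    c = np.full(n_neurons, np.log(5.0))       # baseline of ~5 spikes per bin
    x_true = np.array([0.5, -0.3])
    y = rng.poisson(np.exp(B @ x_true + c))   # simulated spike counts for one bin
    mu, Sigma = lgf_step(np.zeros(d), np.eye(d), y, A, Q, B, c)
    print("posterior mean:", mu)
```

Because each step reduces to a handful of Newton iterations and small matrix solves, the update is deterministic and far cheaper than propagating a particle cloud, which is the source of the speed advantage over sequential Monte Carlo described above.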