GATSBY COMPUTATIONAL NEUROSCIENCE UNIT

Matthias Seeger

 

Wednesday 27th July 2011

4pm

 

EPF Lausanne, Switzerland

 

 

Approximate Bayesian Inference for Large Scale Inverse Problems:

A Computational Viewpoint

 

Joint work with Hannes Nickisch

 

 

Tomographic sparse linear inverse problems are at the core of medical imaging (MRI, CT), astronomy, the analysis of large-scale networks, and many other applications. Viewed as probabilistic graphical models, they are characterized by a densely, non-locally coupled likelihood and a non-Gaussian sparsity prior. Even MAP estimation is challenging for these models, yet intense recent research has produced a range of competitive MAP algorithms. However, for these underdetermined problems there are compelling reasons to move beyond MAP towards Bayesian inference and decision making, such as increased robustness and interpretability, built-in mechanisms for fitting linear or nonlinear hyperparameters, and adaptation of the measurement operator by experimental design (active learning). Unfortunately, current approximate inference algorithms are many orders of magnitude too slow to meet this challenge.
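To fix ideas, here is a minimal sketch of the model class in question (my notation, not part of the original abstract): with measurement operator $X$, sparsity transform $B$ (rows $b_i^\top$), Laplace potentials of scale $\tau$, and noise variance $\sigma^2$,

\[
  y = X u + \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, \sigma^2 I), \qquad
  P(u) \propto \prod_i e^{-\tau |b_i^\top u|},
\]
\[
  \hat{u}_{\mathrm{MAP}} = \operatorname*{argmin}_u \; \tfrac{1}{2\sigma^2} \| y - X u \|_2^2 + \tau \| B u \|_1,
\]

whereas Bayesian inference requires (an approximation to) the full posterior $P(u \mid y)$, including the marginal variances needed for experimental design.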

 

A key strategy for narrowing the gap to MAP is to reduce approximate inference to subproblems with penalized-likelihood structure. Using tools from convex duality, I show how to achieve such iterative decoupling for a range of commonly used variational inference relaxations. The resulting double-loop algorithms are orders of magnitude faster than previous coordinate-descent (or "message-passing") algorithms. Not surprisingly, approximate inference remains harder than MAP, but the added difficulties are transparent and amenable to fast techniques from signal processing and numerical mathematics.
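To make the double-loop structure concrete, here is a toy sketch in Python/NumPy (my own illustration under simplifying assumptions, not the authors' implementation): the sparsity transform is the identity, the potentials are Laplace, and the outer-loop marginal variances are computed exactly, which is only feasible because the example is tiny.

import numpy as np

def double_loop_vb(X, y, tau=1.0, sigma2=1.0, n_outer=10, n_inner=20):
    """Toy double-loop variational inference for y = X u + noise with a
    Laplace sparsity prior on u (illustrative sketch only)."""
    n, d = X.shape
    u = np.zeros(d)
    gamma = np.ones(d)                              # variational site variances
    XtX, Xty = X.T @ X / sigma2, X.T @ y / sigma2

    for _ in range(n_outer):
        # Outer step: marginal variances z under the current Gaussian posterior
        # (the expensive step; exact inversion only works at this toy size).
        A = XtX + np.diag(1.0 / gamma)
        z = np.diag(np.linalg.inv(A)).copy()

        # Inner loop: a smoothed, MAP-like penalized least-squares problem,
        #   min_u ||y - X u||^2 / sigma2 + 2 * tau * sum_i sqrt(z_i + u_i^2),
        # solved by iteratively reweighted least squares (each sqrt term is
        # majorized by a quadratic in u_i).
        for _ in range(n_inner):
            w = tau / np.sqrt(z + u ** 2)
            u = np.linalg.solve(XtX + np.diag(w), Xty)

        # Refit the site variances from the inner-loop solution.
        gamma = np.sqrt(z + u ** 2) / tau

    return u, gamma

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 100))              # underdetermined operator
    u_true = np.zeros(100)
    u_true[:5] = 3.0
    y = X @ u_true + 0.1 * rng.standard_normal(50)
    u_hat, _ = double_loop_vb(X, y, tau=5.0, sigma2=0.01)
    print("largest |posterior mean| entries:", np.argsort(-np.abs(u_hat))[:5])

In the large-scale problems targeted by the talk, the exact matrix inversion in the outer loop is the dominant cost and must be replaced by approximate variance estimation, while the inner loop can reuse fast MAP-style solvers; this is the sense in which approximate inference is reduced to penalized-likelihood subproblems.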

 

Time permitting, I will comment on work in progress on integrating factorization assumptions and on approximate Bayesian blind deconvolution.