
Gatsby Computational Neuroscience Unit, CSML, University College London

Contact
arthur.gretton@gmail.com
Gatsby Computational Neuroscience Unit
Sainsbury Wellcome Centre
25 Howland Street
London W1T 4JG UK

Phone
+44 (0)7795 291 705


Arthur Gretton

I am a Reader (Associate Professor) at the Gatsby Computational Neuroscience Unit, part of the Centre for Computational Statistics and Machine Learning (CSML) at UCL. A short biography.

My current research focus is on using kernel methods to reveal properties and relations in data. A first application is in measuring distances between probability distributions. These distances can be used to measure the strength of dependence, for example how strongly two bodies of text in different languages are related; to test for similarities between two datasets, which can be used in attribute matching for databases (that is, automatically finding which fields of two databases correspond); and to test for conditional dependence, which is useful in detecting redundant variables that carry no additional predictive information given the variables already observed. I am also working on applications of kernel methods to inference in graphical models, where the relations between variables are learned directly from training data: applications include cross-language document retrieval, depth prediction from still images, and protein configuration prediction.
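To make the notion of a kernel distance between distributions concrete, here is a minimal Python sketch, written for this page rather than taken from any of the papers linked here, of an unbiased estimate of the squared Maximum Mean Discrepancy (MMD) with a Gaussian kernel. The bandwidth choice and the toy data are illustrative assumptions.

```python
# Minimal sketch: unbiased estimate of the squared Maximum Mean Discrepancy
# (MMD) between two samples, with a Gaussian kernel. Bandwidth and example
# data below are arbitrary illustrative choices.
import numpy as np

def gaussian_kernel(X, Y, bandwidth):
    """Kernel matrix k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2))."""
    sq_dists = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq_dists / (2 * bandwidth**2))

def mmd2_unbiased(X, Y, bandwidth=1.0):
    """Unbiased estimate of MMD^2 between samples X (m x d) and Y (n x d)."""
    m, n = len(X), len(Y)
    Kxx = gaussian_kernel(X, X, bandwidth)
    Kyy = gaussian_kernel(Y, Y, bandwidth)
    Kxy = gaussian_kernel(X, Y, bandwidth)
    # Drop diagonal terms so the within-sample averages are unbiased.
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2 * Kxy.mean()

# Samples from the same distribution give an estimate near zero; samples
# from different distributions give a larger value.
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 2))
Y = rng.normal(0.5, 1.0, size=(500, 2))
print(mmd2_unbiased(X, Y))
```

This quadratic-cost estimator underlies the two-sample and dependence tests mentioned above; the linear-time variants in the news items below trade some power for speed.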

Recent news

• I'm co-chairing AISTATS 2016 with Christian Robert. It will take place on 9-11 May 2016 in Cadiz, Spain.
• A kernel test of goodness-of-fit. Paper.
• A Test of Relative Similarity for Model Selection in Generative Models, in ICLR 2016. Paper.
• Updated paper (as of Jan 2016) on Learning Theory for Distribution Regression. We now show that minimax rates for regression are attainable in the two-stage sampled setting (where only samples from the sampled distributions are observable).
• Slides for my NIPS 2015 workshop talks are online: see the talks page.
• Gradient-free Hamiltonian Monte Carlo with Efficient Kernel Exponential Families, NIPS 2015. Adaptive Hamiltonian Monte Carlo, where the gradient of the target is learned from past chain samples. Demonstrated in experimental studies on Approximate Bayesian Computation and exact-approximate MCMC. See also Heiko's blog post. Code is now online.
• Fast Two-Sample Testing with Analytic Representations of Probability Measures, NIPS 2015. A class of powerful nonparametric two-sample tests with cost linear in the sample size. Code. A rough sketch of the idea appears below.
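To give a flavour of the linear-time tests in the last item, here is a minimal Python sketch, not the authors' released code: it compares Gaussian-kernel features of the two samples at a handful of random test locations, and refers a Hotelling-style statistic to its asymptotic chi-squared null. The function names, regularisation constant, bandwidth, and example data are illustrative assumptions.

```python
# Illustrative sketch of a mean-embedding-style two-sample test with cost
# linear in the sample size; not the authors' code.
import numpy as np
from scipy import stats

def me_test(X, Y, locations, bandwidth=1.0):
    """Two-sample test at J random locations; X and Y must have equal length."""
    def feats(Z):
        # Gaussian-kernel features of each point at the J test locations.
        sq = np.sum((Z[:, None, :] - locations[None, :, :]) ** 2, axis=2)
        return np.exp(-sq / (2 * bandwidth ** 2))

    D = feats(X) - feats(Y)                     # (n x J) feature differences
    n, J = D.shape
    zbar = D.mean(axis=0)
    S = np.atleast_2d(np.cov(D.T)) + 1e-8 * np.eye(J)  # regularised covariance
    stat = n * zbar @ np.linalg.solve(S, zbar)  # Hotelling-style statistic
    pval = stats.chi2.sf(stat, df=J)            # asymptotic null: chi2, J dof
    return stat, pval

# Distributions differing in mean should give a small p-value; the whole
# computation touches each sample point once, hence the linear cost.
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(2000, 1))
Y = rng.normal(0.3, 1.0, size=(2000, 1))
T = rng.normal(0.0, 2.0, size=(3, 1))           # J = 3 random test locations
print(me_test(X, Y, T))
```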

Older news

• Slides are online for the kernel course at the Machine Learning Summer School in Tuebingen. See the teaching page.
• Kernel-Based Just-In-Time Learning for Passing Expectation Propagation Messages. A fast, online algorithm for nonparametric learning of EP message updates (UAI 2015). Code.
