Gatsby Computational Neuroscience Unit
Sainsbury Wellcome Centre
25 Howland Street
London W1T 4JG UK
+44 (0)7795 291 705
I am a Reader (Associate Professor) at the Gatsby Computational Neuroscience Unit, part of the Centre for Computational Statistics and Machine Learning at UCL. A short biography.
My current research focus is on using kernel methods to reveal properties and relations in data. A first application is in measuring distances between probability distributions. These distances can be used to measure the strength of dependence, for example how strongly two bodies of text in different languages are related; to test for similarities between two datasets, which is useful in attribute matching for databases (that is, automatically finding which fields of two databases correspond); and to test for conditional dependence, which helps detect redundant variables that carry no additional predictive information given the variables already observed. I am also working on applications of kernel methods to inference in graphical models, where the relations between variables are learned directly from training data: applications include cross-language document retrieval, depth prediction from still images, and protein configuration prediction.
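As an illustration of a kernel distance between probability distributions, here is a minimal sketch of the standard unbiased quadratic-time estimator of the squared Maximum Mean Discrepancy (MMD) with a Gaussian kernel. The function name and the fixed bandwidth are my own choices for illustration; in practice the bandwidth would be set by a heuristic or chosen to maximize test power.

```python
import numpy as np

def mmd2_unbiased(X, Y, sigma=1.0):
    """Unbiased estimate of the squared MMD between samples X and Y
    (rows are observations), using a Gaussian kernel of width sigma."""
    def k(A, B):
        # Gaussian (RBF) kernel matrix between rows of A and rows of B
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-d2 / (2 * sigma**2))

    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    # Drop diagonal terms of the within-sample matrices for unbiasedness
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2 * Kxy.mean()
```

The statistic is near zero when X and Y come from the same distribution, and grows with the discrepancy between them, which is what makes it usable for two-sample testing and dependence measurement.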
I'm co-chairing AISTATS 2016 with Christian Robert.
This will take place 9–11 May 2016 in Cádiz, Spain.
A Test of Relative Similarity for Model Selection in Generative Models. Paper.
Slides online for the kernel course at the Machine Learning Summer School in Tübingen. See the teaching page.
Gradient-free Hamiltonian Monte Carlo with Efficient Kernel Exponential Families. Adaptive Hamiltonian Monte Carlo, where the target gradient is learned from past chain samples. Demonstrated in experimental studies on Approximate Bayesian Computation and exact-approximate MCMC. See also Heiko's blog post.
Fast Two-Sample Testing with Analytic Representations of Probability Measures. A class of powerful nonparametric two-sample tests with a cost linear in the sample size. Code.
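To sketch the idea behind a linear-time test: rather than comparing full kernel matrices, one can compare the two samples' kernel mean embeddings evaluated at a small number of test locations, at cost linear in the sample size. The sketch below is my own illustrative version, not the paper's exact construction; the function name, the Gaussian features, the fixed bandwidth, and the equal-sample-size pairing are all assumptions made for brevity.

```python
import numpy as np

def me_statistic(X, Y, locations, sigma=1.0):
    """Hotelling-style statistic comparing kernel mean embeddings of X and Y
    at a few fixed test locations; cost is linear in the sample size.
    Assumes X and Y have the same number of rows (paired differences)."""
    def feat(A):
        # n x J matrix: Gaussian kernel between each sample and each location
        d2 = (np.sum(A**2, 1)[:, None] + np.sum(locations**2, 1)[None, :]
              - 2 * A @ locations.T)
        return np.exp(-d2 / (2 * sigma**2))

    diff = feat(X) - feat(Y)          # per-sample embedding differences
    mu = diff.mean(0)                 # mean difference at each location
    Sigma = np.cov(diff, rowvar=False)
    n = len(X)
    # Small ridge keeps the covariance invertible; statistic is approximately
    # chi-squared with J degrees of freedom under the null
    return n * mu @ np.linalg.solve(Sigma + 1e-8 * np.eye(len(mu)), mu)
```

Under the null the statistic stays small (on the order of the number of test locations), while a genuine difference between the distributions drives it up with the sample size.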
Kernel-Based Just-In-Time Learning for Passing Expectation Propagation Messages. A fast, online algorithm for nonparametric learning of EP message updates (UAI 2015). Code.
A low variance consistent test of relative dependency. The test enables us to determine whether one source variable is significantly more dependent on one target variable than on another (ICML 2015).
Update to: Kernel Mean Shrinkage Estimators (March 2015)
A Wild Bootstrap for Degenerate Kernel Tests, in NIPS 2014 (accepted for a full oral presentation). Code.
NIPS 2014 Workshop, Modern Nonparametrics 3: Automating the Learning Pipeline.
Updated paper (as of Nov. 2014) on infinite dimensional exponential families.
Updated paper on regressing from probability distributions. Code.
Course notes from my April 2014 tutorial at the Nonparametric Measures of Dependence workshop at Columbia. See the teaching page.