
Gatsby Computational Neuroscience Unit, CSML, University College London

Contact: arthur.gretton@gmail.com
Gatsby Computational Neuroscience Unit
Sainsbury Wellcome Centre
25 Howland Street
London W1T 4JG, UK

Phone: +44 (0)7795 291 705



Arthur Gretton. I am a Professor with the Gatsby Computational Neuroscience Unit, part of the Centre for Computational Statistics and Machine Learning (CSML) at UCL. A short biography is available.

My research focuses on using kernel methods to reveal properties and relations in data. A first application is measuring distances between probability distributions. These distances can be used to measure the strength of dependence, for example how strongly two bodies of text in different languages are related; to test whether two datasets are similar, which is useful in attribute matching for databases (that is, automatically finding which fields of two databases correspond); and to test for conditional dependence, which helps detect redundant variables that carry no additional predictive information given the variables already observed. I am also working on applications of kernel methods to inference in graphical models, where the relations between variables are learned directly from training data.
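
One concrete instance of such a distance, used in several of the papers below, is the maximum mean discrepancy (MMD). The short sketch below is purely illustrative: it assumes a Gaussian kernel with a fixed bandwidth, and omits the bandwidth selection and significance thresholds (for example by permutation) that a real two-sample test requires.

    import numpy as np

    def gaussian_kernel(A, B, sigma=1.0):
        # k(a, b) = exp(-||a - b||^2 / (2 sigma^2)) for all pairs of rows of A and B
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-sq / (2 * sigma**2))

    def mmd2_unbiased(X, Y, sigma=1.0):
        # unbiased estimate of the squared MMD between samples X ~ P and Y ~ Q
        Kxx = gaussian_kernel(X, X, sigma)
        Kyy = gaussian_kernel(Y, Y, sigma)
        Kxy = gaussian_kernel(X, Y, sigma)
        n, m = len(X), len(Y)
        np.fill_diagonal(Kxx, 0.0)   # drop diagonal terms for the unbiased estimator
        np.fill_diagonal(Kyy, 0.0)
        return Kxx.sum() / (n * (n - 1)) + Kyy.sum() / (m * (m - 1)) - 2 * Kxy.mean()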

Recent news

GANs with integral probability metrics: some results and conjectures. Talk slides from MILA, October 2019. Use Acrobat Reader to play the animations.
Kernel Instrumental Variable Regression. If measurements of input X and output Y are confounded, the causal relationship can be identified via an instrumental variable Z that influences X directly, but is conditionally independent of Y given X. We generalise classical two-stage least squares regression for this setting to nonlinear relationships among X, Y, and Z (a schematic two-stage sketch follows this list). At NeurIPS 2019, oral presentation.
Exponential Family Estimation via Adversarial Dynamics Embedding simultaneously learns both an exponential family model and an HMC-like sampling procedure. At NeurIPS 2019.
Maximum Mean Discrepancy Gradient Flow, a Wasserstein gradient flow for the Maximum Mean Discrepancy, with noise injected into the gradient that greatly improves convergence (a schematic particle-update sketch follows this list). We gain useful insights into GAN training, and into the dynamics of gradient descent for large neural networks. At NeurIPS 2019.
A kernel Stein test for comparing latent variable models: paper and talk slides from the ICML 2019 workshop on Stein's method.
Machine Learning Summer School co-organised with Marc Deisenroth in London, July 2019. All slides, videos, and tutorials are available.
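
The two-stage structure of kernel instrumental variable regression (mentioned above) can be conveyed with a small sketch: stage 1 regresses features of X onto features of Z, and stage 2 regresses Y onto the stage-1 predictions. The version below uses random Fourier features and plain ridge regression as a finite-dimensional stand-in; the paper's estimator instead works with full kernel matrices, split samples, and its own regularisation scheme, so treat this only as an illustration of the idea.

    import numpy as np

    def rff(A, W, b):
        # random Fourier features approximating a Gaussian kernel
        return np.sqrt(2.0 / W.shape[1]) * np.cos(A @ W + b)

    def two_stage_iv_sketch(X, Y, Z, n_feat=100, lam1=1e-3, lam2=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        Wx, bx = rng.normal(size=(X.shape[1], n_feat)), rng.uniform(0, 2 * np.pi, n_feat)
        Wz, bz = rng.normal(size=(Z.shape[1], n_feat)), rng.uniform(0, 2 * np.pi, n_feat)
        PhiX, PsiZ = rff(X, Wx, bx), rff(Z, Wz, bz)

        # stage 1: ridge regression of the X features onto the Z features,
        # i.e. an estimate of the conditional mean features E[phi(X) | Z]
        A = np.linalg.solve(PsiZ.T @ PsiZ + lam1 * np.eye(n_feat), PsiZ.T @ PhiX)
        PhiX_hat = PsiZ @ A

        # stage 2: ridge regression of Y onto the predicted X features
        beta = np.linalg.solve(PhiX_hat.T @ PhiX_hat + lam2 * np.eye(n_feat),
                               PhiX_hat.T @ Y)
        return lambda X_new: rff(X_new, Wx, bx) @ beta   # structural function estimate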
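
The noisy-gradient idea behind the Maximum Mean Discrepancy gradient flow item above can likewise be sketched: each particle moves against the gradient of the MMD witness function between the current particles and a target sample, with that gradient evaluated at a noise-perturbed copy of the particle. The kernel, step size, and noise level here are arbitrary illustrative choices rather than those used in the paper.

    import numpy as np

    def witness_grad(x, X, Y, sigma=1.0):
        # gradient in x of the witness f(x) = mean_j k(x, X_j) - mean_j k(x, Y_j),
        # with a Gaussian kernel k(x, a) = exp(-||x - a||^2 / (2 sigma^2))
        def grad_term(A):
            diff = x[None, :] - A
            k = np.exp(-np.sum(diff**2, axis=1) / (2 * sigma**2))
            return (-(diff / sigma**2) * k[:, None]).mean(axis=0)
        return grad_term(X) - grad_term(Y)

    def noisy_mmd_flow_step(X, Y, step=0.1, noise=0.1, sigma=1.0, rng=None):
        # move each particle against the witness gradient, evaluated at a
        # noise-perturbed copy of the particle (the injected noise)
        rng = rng if rng is not None else np.random.default_rng(0)
        X_new = X.copy()
        for i in range(len(X)):
            x_noisy = X[i] + noise * rng.normal(size=X.shape[1])
            X_new[i] = X[i] - step * witness_grad(x_noisy, X, Y, sigma)
        return X_new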

Older news

ICML 2019 workshops co-chaired with Honglak Lee.
Learning deep kernels for exponential family densities: a scheme for learning a kernel parameterized by a deep network, which can find complex location-dependent local features of the data geometry. Code, talk slides, and a high-level explanation. At ICML 2019.
Kernel Exponential Family Estimation via Doubly Dual Embedding at AISTATS 2019.
A maximum-mean-discrepancy goodness-of-fit test for censored data at AISTATS 2019.
Antithetic and Monte Carlo kernel estimators for partial rankings, appearing in Statistics and Computing.
Course slides for the Machine Learning with Kernels (2019) guest lecture at the University of Paris-Saclay, also given at the Greek Stochastics Workshop, now linked on the teaching page.
