Gatsby Computational Neuroscience Unit, CSML, University College London

Contact
arthur.gretton@gmail.com
Gatsby Computational Neuroscience Unit
Sainsbury Wellcome Centre
25 Howland Street
London W1T 4JG UK

Phone
+44 (0)7795 291 705

Arthur Gretton

I am a Professor with the Gatsby Computational Neuroscience Unit, and director of the Centre for Computational Statistics and Machine Learning at UCL. A short biography.

My recent research interests in machine learning include the design and training of generative models, both implicit (e.g. GANs) and explicit (high- or infinite-dimensional exponential family models and energy-based models); nonparametric hypothesis testing; survival analysis; causality; and kernel methods.

Recent news

• Talk slides for the Institute for Advanced Studies lecture on Generalized Energy-Based Models, covering this paper. The talk video may be found here.
• Talk slides for the Machine Learning Summer School 2020: Part 1, Part 2. All course videos are here.
• Talk slides for the NeurIPS 2019 tutorial: Part 1, Part 2, Part 3.
• GANs with integral probability metrics: some results and conjectures. Talk slides from Oxford, February 2020. The talk covers MMD GANs, Wasserstein GANs, and variational f-GANs. It also covers the purpose of gradient regularization: to ensure that the gradient signal from the critic to the generator remains informative during all stages of training (a minimal gradient-penalty sketch appears after this list). Based on an earlier talk at MILA (October 2019); the Oxford talk contains a lot of new material, but the MILA talk contains a few interesting slides dropped from the Oxford version.
• Kernelized Wasserstein Natural Gradient, a general framework to approximate the natural gradient for the Wasserstein metric, by leveraging a dual formulation of the metric restricted to a Reproducing Kernel Hilbert Space. The gradient trades off accuracy and computational cost, with theoretical guarantees. At ICLR 2020.
• GANs with integral probability metrics: some results and conjectures. Talk slides from MILA, October 2019. Use Acrobat Reader to play the animations.
• Kernel Instrumental Variable Regression. If measurements of input X and output Y are confounded, the causal relationship can be identified via an instrumental variable Z that influences X directly, but is conditionally independent of Y given X. We generalise classical two-stage least squares regression for this setting to nonlinear relationships among X, Y, and Z (a sketch of the classical linear two-stage procedure appears after this list). At NeurIPS 2019, oral presentation.
• Exponential Family Estimation via Adversarial Dynamics Embedding, which simultaneously learns both an exponential family model and an HMC-like sampling procedure. At NeurIPS 2019.
• Maximum Mean Discrepancy Gradient Flow, a Wasserstein gradient flow for the Maximum Mean Discrepancy, with noise injected into the gradient that greatly improves convergence. We gain useful insights into GAN training, and into the dynamics of gradient descent for large neural networks (a small particle sketch of the noisy flow appears after this list). At NeurIPS 2019.
• A kernel Stein test for comparing latent variable models: paper and talk slides from the ICML 2019 workshop on Stein's method.
• Machine Learning Summer School, co-organised with Marc Deisenroth in London, July 2019. All slides, videos, and tutorials are available.
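
As promised above, a minimal sketch of the gradient regularization discussed in the GAN talk: a penalty on the critic's gradient norm at points interpolated between real and generated samples, of the kind used in Wasserstein and MMD GANs, so that the critic keeps supplying an informative gradient to the generator. The tiny critic, toy data, and penalty weight below are illustrative assumptions, not the talk's exact setup.

```python
# Sketch: gradient penalty on a GAN critic (illustrative toy example).
import torch

torch.manual_seed(0)
critic = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))

real = torch.randn(128, 2) + 2.0        # stand-in for a batch of real data
fake = torch.randn(128, 2) - 2.0        # stand-in for a batch of generator samples

def gradient_penalty(critic, real, fake, target_norm=1.0):
    """Penalty on the critic's gradient norm at random interpolates of real and fake."""
    eps = torch.rand(real.size(0), 1)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads, = torch.autograd.grad(scores.sum(), interp, create_graph=True)
    return ((grads.norm(dim=1) - target_norm) ** 2).mean()

# Critic loss: maximise the score gap between real and fake, minus the penalty.
penalty_weight = 10.0
critic_loss = -(critic(real).mean() - critic(fake).mean()) \
              + penalty_weight * gradient_penalty(critic, real, fake)
critic_loss.backward()                  # gradients for one critic update step
print("critic loss:", float(critic_loss))
```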
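To accompany the Kernel Instrumental Variable Regression item: a minimal sketch of the classical linear two-stage least squares procedure that KIV generalises, run on simulated confounded data. The data-generating process and coefficients below are purely illustrative.

```python
# Sketch: classical two-stage least squares (2SLS) on confounded toy data.
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hidden confounder e affects both X and Y, so naive regression of Y on X is biased.
e = rng.normal(size=n)
z = rng.normal(size=n)                                 # instrument: drives X, affects Y only through X
x = 2.0 * z + e + 0.1 * rng.normal(size=n)
y = 3.0 * x + 2.0 * e + 0.1 * rng.normal(size=n)       # true causal effect of X on Y is 3

def add_intercept(v):
    return np.column_stack([np.ones_like(v), v])

# Stage 1: regress X on Z, keep the fitted values (the part of X explained by Z).
Z = add_intercept(z)
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

# Stage 2: regress Y on the stage-1 fitted values to recover the causal coefficient.
beta_2sls = np.linalg.lstsq(add_intercept(x_hat), y, rcond=None)[0]
beta_naive = np.linalg.lstsq(add_intercept(x), y, rcond=None)[0]

print("naive OLS slope:", beta_naive[1])   # biased away from 3 by the confounder
print("2SLS slope:     ", beta_2sls[1])    # close to the true effect 3
```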
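And for the MMD Gradient Flow item: a minimal one-dimensional particle sketch of the flow with injected noise, in which particles follow the negative gradient of the MMD witness function towards a target sample, with the gradient evaluated at noise-perturbed particle positions. The kernel bandwidth, step size, and noise level are illustrative assumptions rather than the paper's settings.

```python
# Sketch: noisy MMD gradient flow on 1-D particles (Gaussian kernel).
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0          # Gaussian kernel bandwidth
step = 0.5           # gradient step size
noise_level = 0.5    # scale of the injected noise

target = rng.normal(loc=2.0, scale=0.5, size=300)       # samples from the target distribution
particles = rng.normal(loc=-2.0, scale=0.5, size=200)   # initial particle positions

def witness_grad(z, x, y, sigma):
    """Gradient at points z of the MMD witness function between samples x and y."""
    def mean_kernel_grad(z, u):
        diff = z[:, None] - u[None, :]                   # (len(z), len(u))
        k = np.exp(-diff**2 / (2 * sigma**2))
        return np.mean(-(diff / sigma**2) * k, axis=1)   # average of d/dz k(z_i, u_j)
    return mean_kernel_grad(z, x) - mean_kernel_grad(z, y)

for t in range(500):
    noisy = particles + noise_level * rng.normal(size=particles.shape)   # noise injection
    particles = particles - step * witness_grad(noisy, particles, target, sigma)

print("particle mean:", particles.mean())   # moves from -2 towards the target mean 2
```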

Older news

• ICML 2019 workshops, co-chaired with Honglak Lee.
• Learning deep kernels for exponential family densities: a scheme for learning a kernel parameterized by a deep network, which can find complex location-dependent local features of the data geometry (a small sketch of the deep-kernel construction appears after this list). Code, talk slides, and high-level explanation. At ICML 2019.
• Kernel Exponential Family Estimation via Doubly Dual Embedding. At AISTATS 2019.
• A maximum-mean-discrepancy goodness-of-fit test for censored data. At AISTATS 2019.
• Antithetic and Monte Carlo kernel estimators for partial rankings, appearing in Statistics and Computing.
• Course slides for the Machine Learning with Kernels (2019) guest lecture at the University of Paris Saclay, also given at the Greek Stochastics Workshop; now linked on the teaching page.
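
To accompany the deep-kernels item above: a minimal sketch of a kernel parameterized by a network, namely a Gaussian kernel applied to learned features, k(x, y) = exp(-||phi(x) - phi(y)||^2 / (2 sigma^2)), which lets the kernel adapt its local geometry to the data. Here phi is a small fixed random-weight network purely for illustration; in the paper the network and bandwidth are learned jointly with the density model.

```python
# Sketch: a "deep kernel" = Gaussian kernel on network features (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, d_feat = 2, 32, 8
W1, b1 = rng.normal(size=(d_in, d_hidden)), rng.normal(size=d_hidden)
W2, b2 = rng.normal(size=(d_hidden, d_feat)), rng.normal(size=d_feat)

def phi(x):
    """Feature map: a two-layer network with a softplus nonlinearity."""
    h = np.logaddexp(0.0, x @ W1 + b1)      # softplus keeps the features smooth
    return h @ W2 + b2

def deep_kernel(x, y, sigma=1.0):
    """Gram matrix of the deep kernel between sample sets x and y."""
    fx, fy = phi(x), phi(y)
    sq_dists = np.sum((fx[:, None, :] - fy[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2 * sigma**2))

x = rng.normal(size=(5, d_in))
K = deep_kernel(x, x)
print(K.shape)                                                        # (5, 5) Gram matrix
print(np.allclose(K, K.T), np.all(np.linalg.eigvalsh(K) > -1e-9))     # symmetric, PSD
```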
