Gatsby Computational Neuroscience Unit, CSML, University College London

Contact arthur.gretton@gmail.com
Gatsby Computational Neuroscience Unit
Sainsbury Wellcome Centre
25 Howland Street
London W1T 4JG UK

Phone
+44 (0)7795 291 705

Arthur Gretton

I am a Professor with the Gatsby Computational Neuroscience Unit and director of the Centre for Computational Statistics and Machine Learning (CSML) at UCL. A short biography is available.

My recent research interests in machine learning include the design and training of generative models, both implicit (e.g. GANs) and explicit (high- or infinite-dimensional exponential family models and energy-based models); nonparametric hypothesis testing; survival analysis; causality; and kernel methods.

Recent news

Learning Deep Features in Instrumental Variable Regression. Instrumental variable (IV) regression learns causal relationships between confounded treatment and outcome variables by using an instrumental variable, which affects the outcome only through the treatment. Deep neural nets are trained to define informative nonlinear features on the instruments and treatments. The method achieves competitive performance on challenging IV benchmarks, including settings with high-dimensional image data, and in off-policy policy evaluation for reinforcement learning. At ICLR 2021.
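To make the two-stage structure of IV regression concrete, below is a toy two-stage least-squares sketch on synthetic confounded data. The data-generating process and all parameter values are invented for illustration; the DFIV method in the paper replaces the fixed linear features used here with deep features learned from the instruments and treatments.

```python
# Toy two-stage least-squares (2SLS) sketch on synthetic confounded data.
# Illustrative only: not the paper's DFIV method.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

u = rng.normal(size=n)                                # unobserved confounder
z = rng.normal(size=n)                                # instrument: affects y only through x
x = 0.8 * z + u + 0.1 * rng.normal(size=n)            # confounded treatment
y = 2.0 * x - 1.5 * u + 0.1 * rng.normal(size=n)      # outcome; true causal effect of x is 2.0

# Stage 1: regress the treatment on the instrument, keep the predicted (exogenous) part.
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

# Stage 2: regress the outcome on the stage-1 prediction of the treatment.
X_hat = np.column_stack([np.ones(n), x_hat])
beta_iv = np.linalg.lstsq(X_hat, y, rcond=None)[0]

beta_ols = np.linalg.lstsq(np.column_stack([np.ones(n), x]), y, rcond=None)[0]
print("naive OLS slope (biased by the confounder):", beta_ols[1])
print("two-stage IV slope (should be close to 2.0):", beta_iv[1])
```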
Efficient Wasserstein Natural Gradients for Reinforcement Learning: a computationally efficient Wasserstein natural gradient (WNG) approach to policy gradients, which exploits the geometry induced by a Wasserstein penalty to speed up optimization. At ICLR 2021.
Generalized energy-based models and talk slides, combining a GAN generator, used as the base measure, with an energy function derived from the GAN critic. Samples are drawn using a kinetic Langevin MCMC procedure. At ICLR 2021.
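As a sketch of the sampling step only, the fragment below runs a kinetic (underdamped) Langevin sampler on a toy quadratic energy. The energy, discretisation, step size and friction coefficient are placeholders chosen for illustration; they are not the settings used in the paper, where the energy is derived from the GAN critic.

```python
# Kinetic (underdamped) Langevin sampler sketch with a toy quadratic energy.
import numpy as np

def grad_energy(x):
    # Toy energy U(x) = 0.5 * ||x||^2, whose Gibbs distribution is a standard Gaussian.
    return x

def kinetic_langevin(n_steps=5000, dim=2, dt=0.05, gamma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=dim)     # position (the sample)
    v = np.zeros(dim)            # velocity (momentum)
    samples = []
    for _ in range(n_steps):
        noise = rng.normal(size=dim)
        # Euler-Maruyama discretisation of the underdamped Langevin SDE:
        # dv = -gamma * v dt - grad U(x) dt + sqrt(2 * gamma) dW,  dx = v dt.
        v = v - gamma * v * dt - grad_energy(x) * dt + np.sqrt(2.0 * gamma * dt) * noise
        x = x + v * dt
        samples.append(x.copy())
    return np.array(samples)

samples = kinetic_langevin()
print("sample mean (target 0):", samples[1000:].mean(axis=0))
print("sample std  (target 1):", samples[1000:].std(axis=0))
```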
A kernel test for quasi-independence, where data correspond to pairs of ordered times (e.g., the times at which a new user creates an account and then makes a first purchase on a website). In these settings the two times are not independent (the second occurs after the first), and we test whether there is significant dependence beyond their ordering in time. Spotlight presentation at NeurIPS 2020.
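For background on the kernel-testing machinery only (this is not the quasi-independence test itself, whose statistic and null distribution are adapted to the ordering constraint), a plain HSIC-style independence test with a permutation null might look as follows; the kernel, bandwidth and synthetic data are arbitrary.

```python
# Generic kernel independence (HSIC) permutation test sketch; background only.
import numpy as np

def rbf_gram(a, bandwidth=1.0):
    # RBF (Gaussian) kernel Gram matrix for a one-dimensional sample.
    d2 = (a[:, None] - a[None, :]) ** 2
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def hsic(x, y, bandwidth=1.0):
    # Biased empirical HSIC: (1/n^2) * trace(K H L H), with centring matrix H.
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n
    K, L = rbf_gram(x, bandwidth), rbf_gram(y, bandwidth)
    return np.trace(K @ H @ L @ H) / n ** 2

def permutation_test(x, y, n_perm=500, seed=0):
    # Null distribution obtained by shuffling y, which breaks any dependence on x.
    rng = np.random.default_rng(seed)
    stat = hsic(x, y)
    null = np.array([hsic(x, y[rng.permutation(len(y))]) for _ in range(n_perm)])
    return stat, float((null >= stat).mean())   # statistic and permutation p-value

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = x + 0.5 * rng.normal(size=200)              # dependent pair: expect a small p-value
print(permutation_test(x, y))
```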
A Non-Asymptotic Analysis for Stein Variational Gradient Descent, which optimises a set of particles to approximate a target probability distribution. We provide a novel finite-time analysis of the SVGD algorithm: a descent lemma establishing that the algorithm decreases the objective at each iteration, and rates of convergence for the average Stein Fisher divergence (also referred to as the Kernel Stein Discrepancy). At NeurIPS 2020.
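The SVGD update itself is compact. As a purely illustrative sketch, the fragment below runs it on a toy one-dimensional Gaussian target; the target, kernel bandwidth and step size are arbitrary choices, not values taken from the paper.

```python
# Stein variational gradient descent (SVGD) sketch for a toy 1-d Gaussian target.
import numpy as np

def grad_log_p(x, mean=2.0, std=1.0):
    # Score function of the Gaussian target N(mean, std^2).
    return -(x - mean) / std ** 2

def svgd_step(particles, step=0.1, bandwidth=0.5):
    # One SVGD update: phi(x_i) = mean_j [ k(x_j, x_i) * score(x_j) + d/dx_j k(x_j, x_i) ].
    diff = particles[:, None] - particles[None, :]        # diff[j, i] = x_j - x_i
    K = np.exp(-diff ** 2 / (2 * bandwidth ** 2))          # RBF Gram matrix
    grad_K = -diff / bandwidth ** 2 * K                    # derivative of k(x_j, x_i) w.r.t. x_j
    phi = (K * grad_log_p(particles)[:, None]).mean(axis=0) + grad_K.mean(axis=0)
    return particles + step * phi

particles = np.random.default_rng(0).normal(size=50)       # particles initialised from N(0, 1)
for _ in range(1000):
    particles = svgd_step(particles)
print("particle mean (target 2.0):", particles.mean())
print("particle std  (target 1.0):", particles.std())
```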

Older news

• Talk slides for the Institute for Advanced Studies lecture on Generalized Energy-Based Models, covering this paper. The talk video may be found here.
• Talk slides for the Machine Learning Summer School 2020: Part 1, Part 2. All course videos are here.
• Talk slides for the NeurIPS 2019 tutorial: Part 1, Part 2, Part 3.
• GANs with integral probability metrics: some results and conjectures. Talk slides from Oxford, February 2020. The talk covers MMD GANs, Wasserstein GANs, and variational f-GANs. It also covers the purpose of gradient regularization: to ensure that the gradient signal from the critic to the generator remains informative during all stages of training. Based on an earlier talk at MILA (Oct 2019). The Oxford talk contains a lot of new material, but the MILA talk does contain a few interesting slides dropped from the Oxford talk. (A minimal MMD estimator sketch appears after this list.)
• Machine Learning Summer School, co-organised with Marc Deisenroth in London, July 2019. All slides, videos, and tutorials are available.
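To accompany the MMD GAN material above, here is a minimal unbiased estimator of the squared MMD with an RBF kernel. The bandwidth and synthetic data are arbitrary, and in an MMD GAN the kernel would be applied on top of learned critic features rather than on raw inputs as done here.

```python
# Unbiased squared-MMD estimator sketch with an RBF kernel.
import numpy as np

def rbf(a, b, bandwidth=1.0):
    # RBF kernel matrix between two samples of shape (m, d) and (n, d).
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd2_unbiased(x, y, bandwidth=1.0):
    # Unbiased MMD^2: within-sample kernel means (diagonals excluded)
    # minus twice the cross-sample kernel mean.
    m, n = len(x), len(y)
    Kxx, Kyy, Kxy = rbf(x, x, bandwidth), rbf(y, y, bandwidth), rbf(x, y, bandwidth)
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * Kxy.mean()

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(500, 2))
y_same = rng.normal(0.0, 1.0, size=(500, 2))
y_shifted = rng.normal(0.5, 1.0, size=(500, 2))
print("MMD^2, same distribution (near zero):", mmd2_unbiased(x, y_same))
print("MMD^2, shifted distribution (larger):", mmd2_unbiased(x, y_shifted))
```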

