
Gatsby Computational Neuroscience Unit
Sainsbury Wellcome Centre
25 Howland Street
London W1T 4JG UK

+44 (0)7795 291 705



Arthur Gretton

I am a Professor with the Gatsby Computational Neuroscience Unit, part of the Centre for Computational Statistics and Machine Learning at UCL. A short biography.

My research focuses on using kernel methods to reveal properties and relations in data. A first application is measuring distances between probability distributions. These distances can be used to determine the strength of dependence, for example in measuring how strongly two bodies of text in different languages are related; to test for similarities between two datasets, which can be used in attribute matching for databases (that is, automatically finding which fields of two databases correspond); and to test for conditional dependence, which is useful in detecting redundant variables that carry no additional predictive information given the variables already observed. I am also working on applications of kernel methods to inference in graphical models, where the relations between variables are learned directly from training data.
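As a minimal sketch of the distance-between-distributions idea, the code below computes an unbiased estimate of the squared Maximum Mean Discrepancy (MMD) between two samples. The Gaussian kernel, unit bandwidth, and function names are illustrative choices, not taken from any particular package.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd2_unbiased(X, Y, sigma=1.0):
    """Unbiased U-statistic estimate of the squared MMD."""
    m, n = len(X), len(Y)
    Kxx = gaussian_kernel(X, X, sigma)
    Kyy = gaussian_kernel(Y, Y, sigma)
    Kxy = gaussian_kernel(X, Y, sigma)
    # Exclude diagonal terms for the unbiased within-sample averages.
    term_xx = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_yy = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    term_xy = 2.0 * Kxy.mean()
    return term_xx + term_yy - term_xy

rng = np.random.default_rng(0)
# Near zero when both samples come from the same distribution,
# clearly positive when the distributions differ (here, a mean shift).
same = mmd2_unbiased(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)))
diff = mmd2_unbiased(rng.normal(size=(200, 2)),
                     rng.normal(2.0, 1.0, size=(200, 2)))
```

A test based on this statistic rejects the null hypothesis of equal distributions when the estimate exceeds a threshold calibrated to its null distribution.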

Recent news

• Demystifying MMD GANs. Wasserstein GANs, MMD GANs, and Cramer GANs all have the same bias properties: the gradients are unbiased, but the critic may still give biased losses when trained. Also: simpler discriminator networks than in WGANs, dynamic adaptive learning rate adjustment, and a new Kernel Inception Distance. To appear, ICLR 2018.
• Conditional infinite exponential family: learns conditional density models which can be sampled by HMC. To appear, AISTATS 2018.
• An efficient density estimator for the infinite dimensional exponential family. Contains a comparison with a score estimator based on autoencoders. Oral presentation, AISTATS 2018.
• Talk slides on Conditional Densities and Efficient Models in Infinite Exponential Families, from the NIPS 2017 workshop on Modeling and Learning Interactions from Complex Data.
• NIPS 2017 best paper award: a linear time kernel goodness-of-fit test. Gives a linear time test for assessing the quality of a model against a reference sample. Code, including a demonstration notebook from the ML Train NIPS workshop.
• UAI 2017 Tutorial on representing and comparing probabilities (August 2017): Slides 1 and Slides 2, and Video.
• Some notes on the Cramer GAN, showing that it is a generative moment matching network with a particular kernel, and describing a problem with the critic.
• Density Estimation in Infinite Dimensional Exponential Families in JMLR (July 2017).
• GP-Select: Accelerating EM Using Adaptive Subspace Preselection in Neural Computation (August 2017).
• ICML 2017 paper: linear time kernel independence test based on covariance of analytic features, with code
• Criticizing and training generative models using MMD: ICLR 2017 paper and code. Also presented at the adversarial learning workshop at NIPS (see below).

Older news

• Talk slides for the NIPS 2016 workshop on generative adversarial networks (more detailed slides from the Dagstuhl workshop). Adaptive MMD test paper and code; linear-time ME test paper and code.
• NIPS 2016 oral presentation: adaptive linear-time two-sample tests, with power matching quadratic-time tests: Paper and code.
• I co-chaired AISTATS 2016 with Christian Robert. It took place on 9-11 May 2016 in Cadiz, Spain.
• Slides online for the kernel courses at the Machine Learning Summer Schools in Tübingen, Cadiz, and Arequipa. See the teaching page.
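Several items above concern linear-time kernel tests. As a rough illustration of how a kernel two-sample statistic can be made linear in the sample size, the sketch below implements the classic linear-time MMD estimate (Gretton et al., 2012), which averages a kernel h-statistic over disjoint sample pairs; this is not the adaptive ME test from the papers above, and all names and the bandwidth are illustrative.

```python
import numpy as np

def rbf(x, y, sigma=1.0):
    """Gaussian kernel between two single points."""
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma**2))

def mmd2_linear(X, Y, sigma=1.0):
    """Linear-time unbiased MMD^2 estimate over disjoint sample pairs.

    Each pair of indices (i, i+1) contributes one h-statistic, so the
    cost is O(n) rather than the O(n^2) of the full U-statistic.
    """
    n = (min(len(X), len(Y)) // 2) * 2  # use an even number of points
    h = [
        rbf(X[i], X[i + 1], sigma) + rbf(Y[i], Y[i + 1], sigma)
        - rbf(X[i], Y[i + 1], sigma) - rbf(X[i + 1], Y[i], sigma)
        for i in range(0, n, 2)
    ]
    return float(np.mean(h))

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
near_zero = mmd2_linear(X, rng.normal(size=(1000, 2)))
positive = mmd2_linear(X, rng.normal(2.0, 1.0, size=(1000, 2)))
```

The price of linear cost is higher variance per sample than the quadratic-time estimator, which is why adaptive variants that optimize test power are of interest.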
