
Gatsby Computational Neuroscience Unit, CSML, University College London

Contact: arthur.gretton@gmail.com
Gatsby Computational Neuroscience Unit
Sainsbury Wellcome Centre
25 Howland Street
London W1T 4JG UK

Phone: +44 (0)7795 291 705



Arthur Gretton

I am a Professor with the Gatsby Computational Neuroscience Unit, part of the Centre for Computational Statistics and Machine Learning at UCL. A short biography.

My research focus is on using kernel methods to reveal properties and relations in data. A first application is in measuring distances between probability distributions. These distances can be used to determine the strength of dependence, for example in measuring how strongly two bodies of text in different languages are related; to test for similarities between two datasets, which can be used in attribute matching for databases (that is, automatically finding which fields of two databases correspond); and to test for conditional dependence, which is useful in detecting redundant variables that carry no additional predictive information given the variables already observed. I am also working on applications of kernel methods to inference in graphical models, where the relations between variables are learned directly from training data.
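As a concrete illustration of such a distance (a minimal sketch, not code from any particular paper linked here), below is a numpy estimate of the squared Maximum Mean Discrepancy (MMD) between two samples with a Gaussian kernel; the bandwidth sigma and the toy samples are arbitrary choices for the example.

import numpy as np

def gaussian_kernel_matrix(X, Y, sigma=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 sigma^2)), evaluated for all pairs of rows
    sq_dists = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq_dists / (2 * sigma**2))

def mmd2_unbiased(X, Y, sigma=1.0):
    # Unbiased estimate of the squared MMD between the distributions of X and Y
    m, n = X.shape[0], Y.shape[0]
    Kxx = gaussian_kernel_matrix(X, X, sigma)
    Kyy = gaussian_kernel_matrix(Y, Y, sigma)
    Kxy = gaussian_kernel_matrix(X, Y, sigma)
    # Drop the diagonal terms of Kxx and Kyy for the unbiased (U-statistic) form
    term_xx = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_yy = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_xx + term_yy - 2 * Kxy.mean()

# Toy example: two Gaussians with shifted means, so the estimate should be clearly above zero
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 2))
Y = rng.normal(0.5, 1.0, size=(500, 2))
print(mmd2_unbiased(X, Y, sigma=1.0))

A two-sample test would compare this statistic against a threshold obtained, for example, by permuting the pooled sample.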

Recent news

• Course at the Machine Learning Summer School, Madrid, on representing and comparing probabilities (September 2018): Slides 1, Slides 2, and Slides 3.
• Talk slides from the ICML 2018 Workshop on Theoretical Foundations and Applications of Deep Generative Models. The focus is on the papers On gradient regularizers for MMD GANs and Demystifying MMD GANs.
• Tutorial at the Data Science Summer School, Paris, on representing and comparing probabilities (June 2018): Slides 1 and Slides 2. Accompanying practical session by Heiko Strathmann.
• On gradient regularizers for MMD GANs. A new gradient regulariser for MMD GANs with state-of-the-art performance (as of June 2018) on 160x160 CelebA and 64x64 ImageNet. And there's code.
• Demystifying MMD GANs, ICLR 2018. Wasserstein GANs, MMD GANs, and Cramer GANs all have the same bias properties: all gradients are unbiased, although the critic may still give biased losses when trained. Also: simpler discriminator networks than WGANs, dynamic adaptive learning rate adjustment, and a new Kernel Inception Distance (see the sketch after this list). Code.
• Conditional infinite exponential family, AISTATS 2018. Learns conditional density models which can be sampled with HMC. Code.
• Efficient density estimator for the infinite dimensional exponential family (oral presentation, AISTATS 2018). Contains a comparison with a score estimator based on autoencoders. Code.
• Talk slides on Conditional Densities and Efficient Models in Infinite Exponential Families, from the NIPS 2017 workshop on Modeling and Learning Interactions from Complex Data.
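For the Kernel Inception Distance mentioned in the Demystifying MMD GANs item above, here is a rough numpy sketch rather than the reference implementation: it assumes Inception (pool3) features have already been extracted into the arrays feat_real and feat_fake, uses a cubic polynomial kernel with gamma = 1/d, and omits the averaging over subsets used in the paper to report variability.

import numpy as np

def polynomial_kernel(X, Y, degree=3, gamma=None, coef0=1.0):
    # k(x, y) = (gamma * <x, y> + coef0)^degree, with gamma defaulting to 1/dimension
    if gamma is None:
        gamma = 1.0 / X.shape[1]
    return (gamma * X @ Y.T + coef0) ** degree

def kid(feat_real, feat_fake):
    # Unbiased squared-MMD estimate with a cubic polynomial kernel on Inception features
    m, n = feat_real.shape[0], feat_fake.shape[0]
    Kxx = polynomial_kernel(feat_real, feat_real)
    Kyy = polynomial_kernel(feat_fake, feat_fake)
    Kxy = polynomial_kernel(feat_real, feat_fake)
    term_xx = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_yy = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_xx + term_yy - 2 * Kxy.mean()

Unlike the Fréchet Inception Distance, this statistic has an unbiased estimator, which is one of the reasons the paper proposes it for GAN evaluation.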

Older news

• NIPS 2017 best paper award: a linear time kernel goodness-of-fit test, which assesses the quality of a model relative to a reference sample in time linear in the sample size (see the sketch after this list). Code, including a demonstration notebook from the ML Train NIPS workshop.
• UAI 2017 Tutorial on representing and comparing probabilities (August 2017): Slides 1 and Slides 2, and Video.
• Some notes on the Cramer GAN, showing that it is a generative moment matching network with a particular kernel, and describing a problem with the critic.
• Density Estimation in Infinite Dimensional Exponential Families in JMLR (July 2017).
• GP-Select: Accelerating EM Using Adaptive Subspace Preselection in Neural Computation (August 2017).
• ICML 2017 paper: a linear time kernel independence test based on the covariance of analytic features, with code.
• Criticizing and training generative models using MMD: ICLR 2017 paper and code. Also presented at the adversarial learning workshop at NIPS (see below).
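The linear-time goodness-of-fit test in the NIPS 2017 best paper item above is built on a Stein discrepancy evaluated at a small set of test locations. The following is a simplified plug-in sketch of that statistic only, assuming a Gaussian kernel and user-supplied test locations V; the paper's unbiased estimator, the null distribution, and the optimization of the test locations for power are all omitted.

import numpy as np

def gaussian_kernel(x, v, sigma):
    # k(x, v) = exp(-||x - v||^2 / (2 sigma^2)) for single points x and v
    return np.exp(-np.sum((x - v) ** 2) / (2 * sigma ** 2))

def fssd2_plugin(X, score_fn, V, sigma=1.0):
    # Plug-in estimate of the squared statistic: average, over test locations, of the
    # squared norm of the empirical Stein witness
    # xi(x, v) = grad_x log p(x) * k(x, v) + grad_x k(x, v).
    n, d = X.shape
    J = V.shape[0]
    total = 0.0
    for j in range(J):
        g = np.zeros(d)
        for i in range(n):
            k = gaussian_kernel(X[i], V[j], sigma)
            grad_k = -(X[i] - V[j]) / sigma ** 2 * k
            g += score_fn(X[i]) * k + grad_k
        g /= n                      # empirical Stein witness at V[j]
        total += np.sum(g ** 2)
    return total / (d * J)          # cost is linear in the sample size n

# Toy example: test a standard normal model (score function -x) against a shifted sample
rng = np.random.default_rng(1)
X = rng.normal(0.3, 1.0, size=(1000, 2))
V = rng.normal(0.0, 1.0, size=(5, 2))   # arbitrary test locations for illustration
print(fssd2_plugin(X, lambda x: -x, V, sigma=1.0))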
