Gatsby Computational Neuroscience Unit, CSML, University College London

Contact: arthur.gretton@gmail.com
Gatsby Computational Neuroscience Unit
Sainsbury Wellcome Centre
25 Howland Street
London W1T 4JG, UK

Phone: +44 (0)7795 291 705

Arthur Gretton

I am a Professor with the Gatsby Computational Neuroscience Unit, part of the Centre for Computational Statistics and Machine Learning (CSML) at UCL. A short biography is available.

My research focuses on using kernel methods to reveal properties of, and relations in, data. A first application is measuring distances between probability distributions. These distances can be used to measure the strength of dependence, for example how strongly two bodies of text in different languages are related; to test whether two datasets are similar, which is useful for attribute matching in databases (that is, automatically finding which fields of two databases correspond); and to test for conditional dependence, which helps detect redundant variables that carry no additional predictive information given the variables already observed. I also work on applications of kernel methods to inference in graphical models, where the relations between variables are learned directly from training data.
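As a concrete illustration of a kernel distance between distributions, the sketch below (plain Python with NumPy; the function names and the fixed bandwidth sigma are choices made for this example, not taken from any published code) computes the standard unbiased estimate of the squared maximum mean discrepancy (MMD) with a Gaussian kernel. A clearly positive value indicates the two samples come from different distributions; a value near zero is consistent with the same distribution.

import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise Gaussian kernel matrix between the rows of A and the rows of B.
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def mmd2_unbiased(X, Y, sigma=1.0):
    # Unbiased estimate of the squared MMD between the distributions
    # that generated the samples X and Y.
    m, n = len(X), len(Y)
    Kxx = gaussian_kernel(X, X, sigma)
    Kyy = gaussian_kernel(Y, Y, sigma)
    Kxy = gaussian_kernel(X, Y, sigma)
    # Within-sample sums exclude the diagonal; this removes the bias.
    return ((Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
            + (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
            - 2 * Kxy.mean())

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 2))   # sample from P
Y = rng.normal(0.5, 1.0, size=(500, 2))   # sample from Q: shifted mean
Z = rng.normal(0.0, 1.0, size=(500, 2))   # second independent sample from P
print(mmd2_unbiased(X, Y))   # clearly positive: P and Q differ
print(mmd2_unbiased(X, Z))   # near zero: same distribution

In practice the bandwidth would be chosen by a heuristic such as the median pairwise distance, or optimised for test power.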

Recent news

Machine Learning Summer School to take place July 2019 in London, co-organised with Marc Deisenroth.
ICML 2019 workshops now posted, co-chaired with Honglak Lee.
Kernel Exponential Family Estimation via Doubly Dual Embedding at AISTATS 2019.
A maximum-mean-discrepancy goodness-of-fit test for censored data at AISTATS 2019.
Antithetic and Monte Carlo kernel estimators for partial rankings, appearing in Statistics and Computing.
Course slides for the Machine Learning with Kernels (2019) guest lecture at the University of Paris-Saclay, also given at the Greek Stochastics Workshop, are now linked on the teaching page.
Learning deep kernels for exponential family densities: a scheme for learning a kernel parameterized by a deep network, which can find complex location-dependent local features of the data geometry. Code, talk slides, and a high-level explanation are available; see the sketch below for the kernel's form.
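To make the kernel's form concrete, here is a minimal sketch (Python with NumPy; the two-layer network and its random weights are illustrative placeholders, since in the paper the network parameters are trained rather than random) of a Gaussian kernel computed on the output of a feature network, k(x, y) = exp(-||phi(x) - phi(y)||^2 / 2), which lets the kernel's effective geometry vary across input space.

import numpy as np

rng = np.random.default_rng(1)

# Illustrative two-layer ReLU feature network; in the real method these
# weights would be learned, here they are fixed at random.
W1, b1 = rng.normal(size=(2, 32)), rng.normal(size=32)
W2 = rng.normal(size=(32, 8))

def phi(X):
    # Map raw 2-d inputs into an 8-d feature space.
    return np.maximum(X @ W1 + b1, 0.0) @ W2

def deep_kernel(X, Y):
    # Gaussian kernel on network features rather than raw inputs, so the
    # effective smoothness can depend on location in input space.
    FX, FY = phi(X), phi(Y)
    sq = np.sum(FX**2, 1)[:, None] + np.sum(FY**2, 1)[None, :] - 2 * FX @ FY.T
    return np.exp(-0.5 * sq)

X = rng.normal(size=(5, 2))
print(deep_kernel(X, X))   # 5x5 kernel matrix with ones on the diagonal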

Older news

On gradient regularizers for MMD GANs, NeurIPS 2018. A new gradient regulariser for MMD GANs, with state-of-the-art performance (as of June 2018) on 160x160 CelebA and 64x64 ImageNet. Code and talk slides are available.
Informative Features for Model Comparison, NeurIPS 2018. When comparing complex generative models in high dimensions, the question to ask is not "which model is correct?" (neither is), nor "which model is better?", but rather "where does each model do better than the other?" Code is available.
BRUNO: A Deep Recurrent Model for Exchangeable Data, NeurIPS 2018. A deep generative model which is provably exchangeable, meaning that the joint distribution over observations is invariant under permutation. The model does not require variational approximations to train, and can be used for generalisation from short observed sequences. Code is available; a small numerical check of the invariance property appears after this list.
Course slides and videos for the 2018 Machine Learning Summer School Madrid and the 2018 Data Science Summer School Paris are now linked on the teaching page.
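On the exchangeability property mentioned for BRUNO above: the quick check below (Python with NumPy/SciPy; the equicorrelated Gaussian is a standalone textbook example of an exchangeable model, not the BRUNO model itself) verifies numerically that permuting the observations leaves the joint density unchanged.

import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)
n = 6
# A shared latent mean induces covariance s2*I + t2*(all ones), which any
# permutation of coordinates preserves, so the joint is exchangeable.
s2, t2 = 1.0, 0.5
cov = s2 * np.eye(n) + t2 * np.ones((n, n))
joint = multivariate_normal(mean=np.zeros(n), cov=cov)

x = rng.normal(size=n)
perm = rng.permutation(n)
print(joint.logpdf(x))        # log density of the observed sequence
print(joint.logpdf(x[perm]))  # identical after permuting the observations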
