Contact
arthur.gretton@gmail.com
Gatsby Computational Neuroscience Unit
Sainsbury Wellcome Centre
25 Howland Street
London W1T 4JG UK

info
I am a Professor with the Gatsby Computational Neuroscience Unit; director of the
Centre for Computational Statistics and Machine Learning
at UCL; and a Research Scientist at Google DeepMind. A short biography.
My recent research interests in machine learning include causal inference and representation learning, design and training of implicit and explicit generative models, and nonparametric hypothesis testing.
Recent papers
Spectral Representation for Causal Estimation with Hidden Confounders. A spectral method for causal effect estimation with hidden confounders, applicable to both instrumental variable and proxy causal learning. AISTATS 2025
Kernel Single Proxy Control for Deterministic Confounding. Proxy causal learning generally requires two proxy variables: a treatment proxy and an outcome proxy. When is it possible to use just one? AISTATS 2025
Density Ratio-based Proxy Causal Learning Without Density Ratios. Proxy Causal Learning (PCL) estimates causal effects from observed data in the presence of hidden confounding. We propose an alternative bridge function to achieve this. AISTATS 2025
Credal Two-Sample Tests of Epistemic Uncertainty . A new framework for comparing credal sets -- convex sets of probability measures where each element captures aleatoric uncertainty and the set itself represents epistemic uncertainty. AISTATS 2025
Deep MMD Gradient Flow without adversarial training. Adaptive MMD gradient flow trained on samples from a forward diffusion process, with competitive performance on image generation, and the ability to efficiently generate one sample at a time. ICLR 2025
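For intuition, a minimal sketch of the objects driving an MMD gradient flow (standard notation, assumed rather than taken from the paper: k is the kernel, \mu the current particle distribution, P the target):

\mathrm{MMD}^2(\mu,P) = \mathbb{E}_{x,x'\sim\mu}[k(x,x')] - 2\,\mathbb{E}_{x\sim\mu,\,y\sim P}[k(x,y)] + \mathbb{E}_{y,y'\sim P}[k(y,y')],
\qquad f_{\mu,P}(x) = \mathbb{E}_{x'\sim\mu}[k(x,x')] - \mathbb{E}_{y\sim P}[k(x,y)],

and particles follow the negative gradient of the witness function, \dot X_t = -\nabla f_{\mu_t,P}(X_t). The adaptive, diffusion-trained kernels of the paper are not reflected in this sketch.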
Optimality and Adaptivity of Deep Neural Features for Instrumental Variable Regression. Convergence analysis of deep feature instrumental variable (DFIV) regression, a nonparametric approach to IV regression using data-adaptive features learned by deep neural networks in two stages. Shows that neural network approaches outperform fixed-feature (kernel or sieve) approaches when the target function has low spatial homogeneity, and that they are more sample-efficient with respect to the Stage 1 samples. ICLR 2025
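As a rough illustration of the two-stage structure, here is a minimal sketch with fixed, precomputed feature maps; DFIV as analysed in the paper instead learns the feature maps with neural networks, and all names and hyperparameters below are placeholders.

import numpy as np

def two_stage_iv(phi_z, psi_x, y, lam1=1e-3, lam2=1e-3):
    # Stage 1: ridge-regress treatment features psi(X) on instrument features phi(Z),
    # giving an estimate of the conditional mean features E[psi(X) | Z].
    d1 = phi_z.shape[1]
    W = np.linalg.solve(phi_z.T @ phi_z + lam1 * np.eye(d1), phi_z.T @ psi_x)
    psi_hat = phi_z @ W
    # Stage 2: ridge-regress the outcome on the predicted treatment features;
    # the structural function is then estimated as x -> psi(x) @ beta.
    d2 = psi_hat.shape[1]
    beta = np.linalg.solve(psi_hat.T @ psi_hat + lam2 * np.eye(d2), psi_hat.T @ y)
    return W, beta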
Near-Optimality of Contrastive Divergence Algorithms . Contrastive divergence learns energy-based models with the same rates (and almost the same constant) as maximum likelihood! NeurIPS 2024
Optimal Rates for Vector-Valued Spectral Regularization Learning Algorithms . Learn conditional mean embeddings with spectral regularizers beyond Tikhonov, and avoid the saturation effect for smooth target functions. NeurIPS 2024
Mind the Graph When Balancing Data for Fairness or Robustness. Data balancing for fairness: when does it work? When does it not? NeurIPS 2024
Foundations of Multivariate Distributional Reinforcement Learning. Distributional successor features: zero-shot generalization of return distribution functions across finite-dimensional reward function classes. NeurIPS 2024
Towards Optimal Sobolev Norm Rates for the Vector-Valued Regularized Least-Squares Algorithm. The first optimal rates for infinite-dimensional vector-valued kernel ridge regression, including the misspecified case. JMLR 2024
Kernel methods for causal functions: dose, heterogeneous and incremental response curves. Kernel ridge regression estimators for nonparametric causal functions, with uniform consistency and finite sample rates. Includes causal functions identified by front- and back-door criteria. Biometrika 2024
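A hedged sketch of the back-door (covariate-averaging) flavour of such an estimator, using an off-the-shelf kernel ridge regression; variable names and hyperparameters are illustrative and not the paper's.

import numpy as np
from sklearn.kernel_ridge import KernelRidge

def dose_response_curve(D, X, Y, d_grid, alpha=1e-2, gamma=1.0):
    # Regress the outcome on (treatment, covariates) with kernel ridge regression.
    model = KernelRidge(alpha=alpha, kernel="rbf", gamma=gamma)
    model.fit(np.column_stack([D, X]), Y)
    # Back-door adjustment: for each treatment level d, average the fitted
    # regression over the observed covariate distribution.
    curve = []
    for d in d_grid:
        inputs = np.column_stack([np.full(len(X), d), X])
        curve.append(model.predict(inputs).mean())
    return np.array(curve)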
Recent talks and courses
Gradient Flows on the Maximum Mean Discrepancy Slides from the RSS/Turing Workshop on Gradient Flows for Sampling, Inference, and Learning, London (March 2025).
Causal Effect Estimation with Context and Confounders Slides from the presentation at ESSEC Business School, Paris (March 2).
Learning to act in noisy contexts using deep proxy learning: keynote Slides and video from the NeurIPS Workshop on Causal Representation Learning.
Learning to act in noisy contexts using deep proxy learning Talk Slides from the University of Stuttgart ELLIS Unit.
Causal Effect Estimation with Context and Confounders. Talk Slides from University of Warwick.
Causal Effect Estimation with Context and Confounders. Course Slides 1 and Slides 2 from MLSS 2024, Okinawa.
Learning to act in noisy contexts using deep proxy learning: slides from the Tandon Lecture at NYU, March 2024.
Proxy Methods for Causal Effect Estimation with Hidden Confounders: Talk slides from the UCL Centre for Data Science Symposium, November 2023.
Adaptive two-sample testing: Talk slides from a seminar at the Cambridge Centre for Mathematical Sciences, October 2023.
Causal Effect Estimation with Hidden Confounders using Instruments and Proxies: Talk slides from the ELLIS RobustML Workshop, September 2023.
Course on hypothesis testing, causality, and generative models at the Columbia Statistics Department, July 2023 (10 lectures). Slides and related reading.
Causal Effect Estimation with Context and Confounders. Slides from keynote, AISTATS 2023.
Kernel Methods for Two-Sample and Goodness-Of-Fit Testing. Slides from PhyStat 2023.
Older news
A Distributional Analogue to the Successor Representation formulates the distributional successor measure (SM) as a distribution over distributions on states, and develops theory connecting it with distributional and model-based RL. The distributional SM is learned from data by minimizing a two-level MMD. Spotlight presentation, ICML 2024
Distributional Bellman Operators over Mean Embeddings, a novel algorithmic framework for distributional RL based on learning finite-dimensional mean embeddings of return distributions. Includes new methods for TD learning, asymptotic convergence theory, and a new deep RL agent that improves over baselines on the Arcade Learning Environment. ICML 2024
Conditional Bayesian Quadrature, for estimating conditional or parametric expectations in the setting where obtaining samples or evaluating integrands is costly. Applications in Bayesian sensitivity analysis, computational finance and decision making under uncertainty.
UAI 2024
Proxy Methods for Domain Adaptation. Domain adaptation under distribution shift, where the shift is due to a change in the distribution of an unobserved, latent variable that confounds both the covariates and the labels. We employ proximal causal learning, demonstrating that proxy variables allow for adaptation to distribution shift without explicitly recovering or modeling latent variables. We consider two settings: (i) Concept Bottleneck, where an additional "concept" variable is observed that mediates the relationship between the covariates and labels; (ii) Multi-domain, where training data from multiple source domains is available, and each source domain exhibits a different distribution over the latent confounder.
AISTATS 2024
MMD-FUSE: Learning and Combining Kernels for Two-Sample Testing Without Data Splitting. A procedure to maximise the power of a two-sample test based on the Maximum Mean Discrepancy (MMD), by adapting over the set of kernels used in defining it. For finite sets, this reduces to combining (normalised) MMD values under each of these kernels via a weighted soft maximum. The kernels can be chosen in a data-dependent but permutation-independent way, yielding a well-calibrated test without data splitting. Deep kernels can also be used, with features learnt by unsupervised models such as auto-encoders.
Spotlight presentation, NeurIPS 2023
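A minimal sketch of the two ingredients described above: an unbiased squared-MMD estimate for a single kernel, and a weighted soft maximum (log-sum-exp) over per-kernel statistics. The normalisation and the permutation-based calibration used in the paper are omitted, and the function names are placeholders.

import numpy as np

def mmd2_unbiased(K_xx, K_yy, K_xy):
    # Unbiased estimate of squared MMD from precomputed kernel matrices.
    n, m = K_xx.shape[0], K_yy.shape[0]
    term_x = (K_xx.sum() - np.trace(K_xx)) / (n * (n - 1))
    term_y = (K_yy.sum() - np.trace(K_yy)) / (m * (m - 1))
    return term_x + term_y - 2.0 * K_xy.mean()

def weighted_soft_max(stats, weights, temperature=1.0):
    # Weighted soft maximum of per-kernel (normalised) MMD statistics.
    v = temperature * np.asarray(stats)
    return np.log(np.sum(np.asarray(weights) * np.exp(v))) / temperature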
Fast and scalable score-based kernel calibration tests. A nonparametric, kernel-based test for assessing the calibration of probabilistic models with well-defined scores (which may not be normalized, e.g. posterior estimates in Bayesian inference). We use a new family of kernels for score-based probabilities that can be estimated without probability density samples.
Spotlight presentation, UAI 2023
A kernel Stein test of goodness of fit for sequential models.
A goodness-of-fit measure for probability densities modeling observations with varying dimensionality, such as text documents of differing lengths or variable-length sequences. The proposed measure is an instance of the kernel Stein discrepancy (KSD), which has been used to construct goodness-of-fit tests for unnormalized densities. The test does not require the density to be normalized, allowing the evaluation of a large class of models. At ICML, 2023.
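For context, the standard fixed-dimension kernel Stein discrepancy on which such tests build can be written as follows, with score s_p(x) = \nabla_x \log p(x) (which requires only the unnormalised density) and kernel k:

u_p(x,x') = s_p(x)^\top s_p(x')\,k(x,x') + s_p(x)^\top \nabla_{x'} k(x,x') + s_p(x')^\top \nabla_{x} k(x,x') + \operatorname{tr}\!\big(\nabla_x \nabla_{x'} k(x,x')\big),
\qquad \mathrm{KSD}^2(p,q) = \mathbb{E}_{x,x'\sim q}[u_p(x,x')].

The extension to observations of varying dimensionality is the paper's contribution and is not captured by this display.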
Efficient Conditionally Invariant Representation Learning.
We introduce the Conditional Independence Regression CovariancE (CIRCE), a measure of conditional independence for multivariate continuous-valued variables. CIRCE applies as a regularizer in settings where we wish to learn neural features of data X to estimate a target Y, while being conditionally independent of a distractor Z given Y. Both Z and Y are assumed to be continuous-valued but relatively low dimensional, whereas X and its features may be complex and high dimensional. Relevant settings include domain-invariant learning, fairness, and causal learning.
Top 5% paper, ICLR 2023.
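One plausible way to instantiate such a regularizer, as a hedged sketch of the general pattern only (not the paper's exact CIRCE estimator): regress features of Z on Y once, then penalize the cross-covariance between the learned X-features and the Y-residualized Z-features. All names below are placeholders.

import numpy as np

def conditional_independence_penalty(feat_x, feat_z, feat_y, lam=1e-3):
    # Residualize Z-features on Y-features with ridge regression; this step does
    # not involve the learned X-features, so it can be precomputed.
    dy = feat_y.shape[1]
    W = np.linalg.solve(feat_y.T @ feat_y + lam * np.eye(dy), feat_y.T @ feat_z)
    resid_z = feat_z - feat_y @ W
    # Penalize the empirical cross-covariance between X-features and the residuals;
    # in population this vanishes when X is independent of Z given Y and the ridge
    # fit recovers E[psi(Z) | Y].
    n = feat_x.shape[0]
    xc = feat_x - feat_x.mean(axis=0)
    cov = xc.T @ resid_z / n
    return np.sum(cov ** 2)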
A Neural Mean Embedding Approach for Back-door and Front-door Adjustment.
Estimates average and counterfactual treatment effects without access to a hidden confounder, by applying two-stage regression on learned neural net features. At ICLR, 2023.
