About me

I am a Senior Lecturer (equivalent to a US Assistant Professor) in Computer Science at the University of Bristol's Computational Neuroscience Unit. I use Bayesian approaches to uncover the theoretical principles behind biological and artificial intelligence. I also develop novel algorithms for data analysis and apply them to datasets ranging from calcium imaging to human behaviour.

Prior to my move to Bristol, I did a postdoc with Prof. Máté Lengyel at the University of Cambridge, and a PhD with Prof. Peter Latham at the Gatsby Unit. Throughout that time, I worked simultaneously on pure machine learning, theoretical neuroscience, and the analysis of experimental data ranging from calcium imaging to behaviour.

PhD Applications

I have PhD positions available, including for a *March 2020* start. Please contact me if you are interested, or consider the Interactive AI Centre for Doctoral Training. I am also happy to consider China Scholarship Council students.

Research Interests

The theory of deep neural networks

Deep neural networks have revolutionised machine learning. But what makes deep networks so effective? We develop rigorous theory describing how the flexibility in finite (but not infinite) neural networks shapes representations to solve difficult tasks.

Adaptive stochastic gradient descent as Bayesian filtering

How should we train our neural networks? There is no easy answer: many algorithms have been proposed, and at present there is no easy way to choose between them. Remarkably, we can show that three of the most important techniques (Adam, decoupled weight decay, and RAdam) arise by treating stochastic gradient descent as a Bayesian inference problem.
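For readers unfamiliar with the optimisers named above, here is the standard, textbook Adam update (Kingma & Ba) in plain Python. This is only a reference sketch of the algorithm itself, not the Bayesian-filtering derivation of it:

```python
import math

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One standard Adam update on a scalar parameter (reference sketch only)."""
    m = beta1 * m + (1 - beta1) * grad         # exponential moving average of the gradient
    v = beta2 * v + (1 - beta2) * grad ** 2    # exponential moving average of its square
    m_hat = m / (1 - beta1 ** t)               # bias-correct the moment estimates
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimise f(theta) = theta^2, whose gradient is 2 * theta.
theta, m, v = 1.0, 0.0, 0.0
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, lr=0.01)
# theta ends up near the minimum at 0
```

The per-parameter rescaling by `sqrt(v_hat)` is exactly the kind of adaptive step size that a filtering view of SGD must explain.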

Tensor Monte Carlo

How can we perform accurate inference in large-scale models with rich statistical structure? Here, we apply insights from classical approaches such as particle filtering and message passing to obtain exponentially many importance samples in state-of-the-art deep variational autoencoders.
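The basic building block here is the self-normalised importance-sampling estimator. The sketch below shows that generic estimator on a toy one-dimensional problem; it is standard material, not the tensorised scheme itself, and all function names are illustrative:

```python
import math
import random

def snis_mean(target_logpdf, proposal_sample, proposal_logpdf, n=20000, seed=0):
    """Self-normalised importance sampling: estimate E_p[x] with samples from q."""
    rng = random.Random(seed)
    xs = [proposal_sample(rng) for _ in range(n)]
    logw = [target_logpdf(x) - proposal_logpdf(x) for x in xs]   # log importance weights
    mx = max(logw)
    w = [math.exp(lw - mx) for lw in logw]                       # stabilise before exponentiating
    return sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)

# Toy example: target N(2, 1), proposal N(0, 2); the true target mean is 2.
target_logpdf = lambda x: -0.5 * (x - 2.0) ** 2
proposal_logpdf = lambda x: -0.5 * (x / 2.0) ** 2 - math.log(2.0)
proposal_sample = lambda rng: rng.gauss(0.0, 2.0)
est = snis_mean(target_logpdf, proposal_sample, proposal_logpdf)
# est is close to 2
```

Normalising constants cancel in the weight ratio, which is why unnormalised log-densities suffice; the log-sum-exp stabilisation matters whenever the weights span many orders of magnitude.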

Bayesian neural networks

What do our neural networks know? And more importantly, what don't they know? Here, we apply ideas from areas ranging from neuroscience to the theory of deep networks to the problem of reasoning accurately about uncertainty in neural network parameters.

Flow-based models with structured priors

How can deep models learn about the structure of the world without explicit supervision? Here, we impose high-level, interpretable structure on the neural representations induced by state-of-the-art flow-based models of natural stimuli.

Variability and uncertainty in neural systems

How can the brain compute efficiently under energetic constraints? And how can the brain represent uncertainty about the world? Remarkably, we have been able to show that the solutions to these problems are one and the same: efficient computation automatically reasons about uncertainty, and reasoning about uncertainty allows the brain to compute efficiently.