Hello! I am a Senior Staff Research Scientist (Tech Lead, Manager) at Google DeepMind in Mountain View (USA), where I lead a team of research scientists and engineers. Prior to that, I was a Senior Staff Research Scientist at Google Brain and Staff Research Scientist at DeepMind.
My research interests are in scalable, probabilistic machine learning. My PhD thesis [17] focused on exploring (and exploiting :)) connections between neat mathematical ideas in (non-parametric) Bayesian land and computationally efficient tricks in decision tree land, to get the best of both worlds.
More recently, I have focused on probabilistic deep learning:
Uncertainty and robustness in deep learning [21,30,32,35-44,47-69]
Out-of-distribution robustness of generative models [29,33,34,45]
Deep generative models including generative adversarial networks (GANs), normalizing flows and variational auto-encoders (VAEs) [20,22,23,24,25,26,46]
Applying probabilistic deep learning ideas in healthcare [28,47] and Google products [31]
E-mail: balaji (at) gatsby.ucl.ac.uk or balajiln (at) google.com
Links: Google Scholar, GitHub, Twitter, LinkedIn
Building One-class Detector for Anything: Open-vocabulary Zero-shot OOD Detection Using Text-image Models
Yunhao Ge, Jie Ren, Jiaping Zhao, Kaifeng Chen, Andrew Gallagher, Laurent Itti, Balaji Lakshminarayanan
[arXiv]
A Simple Zero-shot Prompt Weighting Technique to Improve Prompt Ensembling in Text-Image Models
James Urquhart Allingham, Jie Ren, Michael W Dusenberry, Jeremiah Zhe Liu, Xiuye Gu, Yin Cui, Dustin Tran, Balaji Lakshminarayanan
[arXiv]
ICML, 2023
Improving Zero-shot Generalization and Robustness of Multi-modal Models
Yunhao Ge, Jie Ren, Yuxiao Wang, Andrew Gallagher, Ming-Hsuan Yang, Laurent Itti, Hartwig Adam, Balaji Lakshminarayanan, Jiaping Zhao
[arXiv]
CVPR, 2023
Note: A short version of this work was presented at the NeurIPS ML Safety Workshop, 2022.
Improving the Robustness of Conditional Language Models by Detecting and Removing Input Noise
Kundan Krishna, Yao Zhao, Jie Ren, Balaji Lakshminarayanan, Jiaming Luo, Mohammad Saleh, Peter J Liu
[arXiv]
Note: A short version of this work was presented at the NeurIPS ML Safety Workshop, 2022.
Pushing the Accuracy-Group Robustness Frontier with Introspective Self-play
Jeremiah Zhe Liu, Krishnamurthy Dj Dvijotham, Jihyeon Lee, Quan Yuan, Martin Strobel, Balaji Lakshminarayanan, Deepak Ramachandran
[link]
ICLR, 2023
Note: A short version of this work was presented at the NeurIPS Workshop on Distribution Shifts, 2022.
Out-of-Distribution Detection and Selective Generation for Conditional Language Models
Jie Ren, Jiaming Luo, Yao Zhao, Kundan Krishna, Mohammad Saleh, Balaji Lakshminarayanan, Peter J Liu
[arXiv]
ICLR, 2023
Note: A short version of this work was presented at the NeurIPS workshop on Robustness in Sequence Modeling, 2022.
Plex: Towards Reliability using Pretrained Large Model Extensions
Dustin Tran, Jeremiah Liu, Mike Dusenberry, Du Phan, Mark Collier, Jie Ren, Kehang Han, Zi Wang, Zelda Mariet, Huiyi Hu, Neil Band, Tim GJ Rudner, Karan Singhal, Zachary Nado, Joost van Amersfoort, Andreas Kirsch, Rodolphe Jenatton, Nithum Thain, Honglin Yuan, Kelly Buchanan, Kevin Murphy, D Sculley, Yarin Gal, Zoubin Ghahramani, Jasper Snoek, Balaji Lakshminarayanan
[arXiv] [code] [blog]
Reliability benchmarks for image segmentation
E Kelly Buchanan, Michael W Dusenberry, Jie Ren, Kevin Patrick Murphy, Balaji Lakshminarayanan, Dustin Tran
[link]
NeurIPS workshop on Distribution Shifts, 2022.
A Simple Approach to Improve Single-Model Deep Uncertainty via Distance-Awareness
Jeremiah Zhe Liu, Shreyas Padhy, Jie Ren, Zi Lin, Yeming Wen, Ghassen Jerfel, Zack Nado, Jasper Snoek, Dustin Tran and Balaji Lakshminarayanan
JMLR, 2022
[arXiv] [code]
Reliable Graph Neural Networks for Drug Discovery under Distributional Shift
Kehang Han, Balaji Lakshminarayanan, Jeremiah Liu
NeurIPS workshop on Distribution Shifts, 2021.
[arXiv] [code]
Understanding and Improving Robustness of Vision Transformers through Patch-based Negative Augmentation
Yao Qin, Chiyuan Zhang, Ting Chen, Balaji Lakshminarayanan, Alex Beutel, Xuezhi Wang
NeurIPS, 2022
Deep Classifiers with Label Noise Modeling and Distance Awareness
Vincent Fortuin, Mark Collier, Florian Wenzel, James Allingham, Jeremiah Liu, Dustin Tran, Balaji Lakshminarayanan, Jesse Berent, Rodolphe Jenatton, Effrosyni Kokiopoulou
TMLR, 2022
[arXiv] [code]
Sparse MoEs meet Efficient Ensembles
James Urquhart Allingham, Florian Wenzel, Zelda E Mariet, Basil Mustafa, Joan Puigcerver, Neil Houlsby, Ghassen Jerfel, Vincent Fortuin, Balaji Lakshminarayanan, Jasper Snoek, Dustin Tran, Carlos Riquelme Ruiz, Rodolphe Jenatton
TMLR, 2022
[arXiv]
Soft Calibration Objectives for Neural Networks
Archit Karandikar, Nicholas Cain, Dustin Tran, Balaji Lakshminarayanan, Jonathon Shlens, Michael C. Mozer and Becca Roelofs
NeurIPS, 2021
[arXiv] [code]
BEDS-Bench: Behavior of EHR-models under Distributional Shift–A Benchmark
Anand Avati, Martin Seneviratne, Emily Xue, Zhen Xu, Balaji Lakshminarayanan and Andrew M. Dai
NeurIPS workshop on Distribution Shifts, 2021.
[arXiv] [code]
Exploring the Limits of Out-of-Distribution Detection
Stanislav Fort, Jie Ren and Balaji Lakshminarayanan
NeurIPS, 2021
[arXiv] [code]
Note: A short version of this work was presented at the ICML workshop on Uncertainty and Robustness in Deep Learning, 2021.
A Simple Fix to Mahalanobis Distance for Improving Near-OOD Detection
Jie Ren, Stanislav Fort, Jeremiah Liu, Abhijit Guha Roy, Shreyas Padhy and Balaji Lakshminarayanan
ICML workshop on Uncertainty and Robustness in Deep Learning, 2021.
[arXiv] [code]
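The core idea of the relative Mahalanobis distance can be sketched in a few lines (a minimal illustration under simplifying assumptions; the paper's exact recipe, feature extraction, and covariance fitting differ): subtract the Mahalanobis distance under a single "background" Gaussian fit to all training features from the distance to the nearest class-conditional Gaussian.

```python
import numpy as np

def mahalanobis(x, mean, cov_inv):
    """Squared Mahalanobis distance of x to a Gaussian with the given mean
    and inverse covariance."""
    d = x - mean
    return float(d @ cov_inv @ d)

def relative_mahalanobis_score(x, class_means, shared_cov_inv, bg_mean, bg_cov_inv):
    """Lower score => more in-distribution. A simplified sketch of the
    relative Mahalanobis distance idea."""
    # Distance to the nearest class-conditional Gaussian (shared covariance).
    md_class = min(mahalanobis(x, mu, shared_cov_inv) for mu in class_means)
    # Distance under a single background Gaussian fit to all training features.
    md_bg = mahalanobis(x, bg_mean, bg_cov_inv)
    return md_class - md_bg
```

In practice the Gaussians are fit to features from a trained network; the identity covariances below are only for illustration.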
Test Sample Accuracy Scales with Training Sample Density in Neural Networks
Xu Ji, Razvan Pascanu, Devon Hjelm, Balaji Lakshminarayanan, Andrea Vedaldi
CoLLAs, 2022
[arXiv]
What are effective labels for augmented data? Improving robustness with AutoLabel
Yao Qin, Xuezhi Wang, Balaji Lakshminarayanan, Ed Chi, Alex Beutel
[link]
SATML, 2023
Note: A short version of this work was presented at the ICML workshop on Uncertainty and Robustness in Deep Learning, 2021.
Uncertainty Baselines: Benchmarks for Uncertainty & Robustness in Deep Learning
Zachary Nado, Neil Band, Mark Collier, Josip Djolonga, Michael Dusenberry, Sebastian Farquhar, Angelos Filos, Marton Havasi, Rodolphe Jenatton, Ghassen Jerfel, Jeremiah Liu, Zelda Mariet, Jeremy Nixon, Shreyas Padhy, Jie Ren, Tim Rudner, Yeming Wen, Florian Wenzel, Kevin Murphy, D. Sculley, Balaji Lakshminarayanan, Jasper Snoek, Yarin Gal, Dustin Tran
NeurIPS workshop on Bayesian deep learning, 2021.
[arXiv] [code]
An Instance-Dependent Simulation Framework for Learning with Label Noise
Keren Gu, Xander Masotto, Vandana Bachani, Balaji Lakshminarayanan, Jack Nikodem, Dong Yin
Machine Learning, 2022
[arXiv]
Task-agnostic Continual Learning with Hybrid Probabilistic Models
Polina Kirichenko, Mehrdad Farajtabar, Dushyant Rao, Balaji Lakshminarayanan, Nir Levine, Ang Li, Huiyi Hu, Andrew Gordon Wilson, Razvan Pascanu
[link]
Note: A short version of this work was presented at the ICML Workshop on Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models, 2021.
Does Your Dermatology Classifier Know What It Doesn't Know? Detecting the Long-Tail of Unseen Conditions
Abhijit Guha Roy, Jie Ren, Shekoofeh Azizi, Aaron Loh, Vivek Natarajan, Basil Mustafa, Nick Pawlowski, Jan Freyberg, Yuan Liu, Zach Beaver, Nam Vo, Peggy Bui, Samantha Winter, Patricia MacWilliams, Greg S. Corrado, Umesh Telang, Yun Liu, Taylan Cemgil, Alan Karthikesalingam, Balaji Lakshminarayanan, Jim Winkens
Medical Image Analysis, 2022
[link] [arXiv pre-print]
Normalizing flows for probabilistic modeling and inference
George Papamakarios, Eric Nalisnick, Danilo J. Rezende, Shakir Mohamed and Balaji Lakshminarayanan
JMLR, 2021
[arXiv]
Density of States Estimation for Out-of-Distribution Detection
Warren R. Morningstar, Cusuh Ham, Andrew G. Gallagher, Balaji Lakshminarayanan, Alexander A. Alemi and Josh Dillon
AISTATS, 2021
[arXiv]
Training independent subnetworks for robust prediction
Marton Havasi, Rodolphe Jenatton, Stanislav Fort, Jeremiah Liu, Jasper Snoek, Balaji Lakshminarayanan, Andrew Dai, Dustin Tran
ICLR, 2021
[arXiv] [code]
Combining Ensembles and Data Augmentation can Harm your Calibration
Yeming Wen, Ghassen Jerfel, Rafael Muller, Michael Dusenberry, Jasper Snoek, Balaji Lakshminarayanan and Dustin Tran
ICLR, 2021
[arXiv] [code]
Note: A short version of this work was presented at the ICML workshop on Uncertainty and Robustness in Deep Learning, 2020.
Why Aren’t Bootstrapped Neural Networks Better?
Jeremy Nixon, Dustin Tran and Balaji Lakshminarayanan
“I Can’t Believe It’s Not Better!” workshop at NeurIPS 2020.
[pdf]
Bayesian Deep Ensembles via the Neural Tangent Kernel
Bobby He, Balaji Lakshminarayanan and Yee Whye Teh
NeurIPS, 2020
[arXiv] [code]
Note: A short version of this work was presented at the ICML workshop on Uncertainty and Robustness in Deep Learning, 2020.
Simple and Principled Uncertainty Estimation with Deterministic Deep Learning via Distance Awareness
Jeremiah Liu, Zi Lin, Shreyas Padhy, Dustin Tran, Tania Bedrax-Weiss and Balaji Lakshminarayanan
NeurIPS, 2020
[arXiv] [code]
Note: A short version of this work was presented at the ICML workshop on Uncertainty and Robustness in Deep Learning, 2020.
Revisiting One-vs-All Classifiers for Predictive Uncertainty and Out-of-Distribution Detection in Neural Networks
Shreyas Padhy, Zachary Nado, Jie Ren, Jeremiah Liu, Jasper Snoek and Balaji Lakshminarayanan
ICML workshop on Uncertainty and Robustness in Deep Learning, 2020.
[arXiv] [code]
Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift
Zachary Nado, Shreyas Padhy, D Sculley, Alexander D'Amour, Balaji Lakshminarayanan and Jasper Snoek
ICML workshop on Uncertainty and Robustness in Deep Learning, 2020.
[arXiv]
Efficient and Scalable Bayesian Neural Nets with Rank-1 Factors
Michael Dusenberry, Ghassen Jerfel, Yeming Wen, Yi-an Ma, Jasper Snoek, Katherine Heller, Balaji Lakshminarayanan and Dustin Tran
ICML, 2020
[arXiv] [code]
AugMix: A simple data processing method to improve robustness and uncertainty
Dan Hendrycks, Norman Mu, Ekin D. Cubuk, Barret Zoph, Justin Gilmer and Balaji Lakshminarayanan
ICLR, 2020
[arXiv] [code]
Deep ensembles: A loss landscape perspective
Stanislav Fort, Clara Huiyi Hu and Balaji Lakshminarayanan
Contributed oral talk at the NeurIPS workshop on Bayesian deep learning, 2019.
[arXiv] [code] [poster] [slides]
Detecting out-of-distribution inputs to deep generative models using a test for typicality
Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh and Balaji Lakshminarayanan
[arXiv] [poster]
Note: A short version of this work was presented at the NeurIPS workshop on Bayesian deep learning, 2019.
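The typicality-test idea above can be sketched as follows (a hypothetical minimal version with a bootstrap threshold; the paper's test statistic and calibration differ): a batch is flagged as out-of-distribution when its mean log-likelihood under the generative model falls outside the typical range of training-set batches, rather than simply when its likelihood is low.

```python
import numpy as np

def typicality_ood_test(batch_loglik, train_loglik, alpha=0.01, n_boot=1000):
    """Flag a batch as OOD if its mean log-likelihood falls outside the
    empirical [alpha/2, 1 - alpha/2] interval of bootstrapped training-batch
    means. Simplified sketch of the typicality-test idea."""
    rng = np.random.default_rng(0)
    n = len(batch_loglik)
    # Bootstrap the distribution of mean log-likelihood for in-distribution
    # batches of the same size.
    boot = np.array([
        rng.choice(train_loglik, size=n, replace=True).mean()
        for _ in range(n_boot)
    ])
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    m = np.mean(batch_loglik)
    # OOD inputs can have atypically HIGH likelihood too, so test both tails.
    return bool(m < lo or m > hi)
```

Note the two-sided test: as the paper observes, OOD inputs to deep generative models can receive higher likelihood than in-distribution data, so a low-likelihood threshold alone is not enough.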
Likelihood ratios for out-of-distribution detection
Jie Ren, Peter Liu, Emily Fertig, Jasper Snoek, Ryan Poplin, Mark DePristo, Josh Dillon and Balaji Lakshminarayanan
NeurIPS, 2019
[arXiv] [code] [poster] [3-minute video] [blog]
Note: A short version of this work was presented at the ICML workshop on Uncertainty and Robustness in Deep Learning, 2019.
Can you trust your model's uncertainty? Evaluating predictive uncertainty under dataset shift
Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, D Sculley, Sebastian Nowozin, Josh Dillon, Balaji Lakshminarayanan and Jasper Snoek
NeurIPS, 2019
[arXiv] [code] [poster] [blog]
Note: A short version of this work was presented at the ICML workshop on Uncertainty and Robustness in Deep Learning, 2019.
Learning from delayed outcomes via proxies with applications to recommender systems
Timothy Mann*, Sven Gowal*, Andras Gyorgy, Ray Jiang, Clara Huiyi Hu, Balaji Lakshminarayanan and Prav Srinivasan
ICML, 2019
[link]
Hybrid models with deep and invertible features
Eric Nalisnick*, Akihiro Matsukawa*, Yee Whye Teh, Dilan Gorur and Balaji Lakshminarayanan
ICML, 2019
[arXiv] [poster]
Note: A short version of this work was presented at the NeurIPS workshop on Bayesian deep learning, 2018.
Do deep generative models know what they don't know?
Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur and Balaji Lakshminarayanan
[arXiv] [poster]
ICLR, 2019
Note: A short version of this work was presented at the NeurIPS workshop on Bayesian deep learning, 2018.
Adapting auxiliary losses using gradient similarity
Yunshu Du, Wojciech Czarnecki, Siddhant Jayakumar, Razvan Pascanu and Balaji Lakshminarayanan
[arXiv] [poster]
Note: A short version of this work was presented at the NeurIPS workshop on continual learning, 2018.
Clinically applicable deep learning for diagnosis and referral in retinal disease
Jeffrey De Fauw, Joseph R Ledsam, Bernardino Romera-Paredes, Stanislav Nikolov, Nenad Tomasev, Sam Blackwell, Harry Askham, Xavier Glorot, Brendan O'Donoghue, Daniel Visentin, George van den Driessche, Balaji Lakshminarayanan, Clemens Meyer, Faith Mackinder, Simon Bouton, Kareem Ayoub, Reena Chopra, Dominic King, Alan Karthikesalingam, Cían O Hughes, Rosalind Raine, Julian Hughes, Dawn A Sim, Catherine Egan, Adnan Tufail, Hugh Montgomery, Demis Hassabis, Geraint Rees, Trevor Back, Peng T. Khaw, Mustafa Suleyman, Julien Cornebise, Pearse A. Keane, Olaf Ronneberger
Nature Medicine, 2018
[link]
[blog]
Distribution matching in variational inference
Mihaela Rosca, Balaji Lakshminarayanan and Shakir Mohamed
[arXiv]
Many paths to equilibrium: GANs do not need to decrease a divergence at every step
William Fedus*, Mihaela Rosca*, Balaji Lakshminarayanan, Andrew Dai, Shakir Mohamed and Ian Goodfellow
ICLR, 2018
[arXiv]
Variational approaches for auto-encoding generative adversarial networks
Mihaela Rosca*, Balaji Lakshminarayanan*, David Warde-Farley and Shakir Mohamed
* denotes equal contribution
[arXiv]
The Cramer distance as a solution to biased Wasserstein gradients
Marc G. Bellemare, Ivo Danihelka, Will Dabney, Shakir Mohamed, Balaji Lakshminarayanan, Stephan Hoyer and Remi Munos
[arXiv]
Comparison of maximum likelihood and GAN-based training of Real NVPs
Ivo Danihelka, Balaji Lakshminarayanan, Benigno Uria, Daan Wierstra and Peter Dayan
[arXiv]
Simple and scalable predictive uncertainty estimation using deep ensembles
Balaji Lakshminarayanan, Alexander Pritzel and Charles Blundell
NeurIPS, 2017
[arXiv] [slides] [poster]
Note: A version of this work was presented at the NeurIPS workshop on Bayesian deep learning, 2016.
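The deep ensembles recipe is simple enough to sketch (a minimal illustration, not the paper's full training setup, which also uses proper scoring rules and, optionally, adversarial training): train M networks independently from different random initializations, then average their predictive distributions; disagreement between members shows up as higher entropy of the averaged prediction.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(member_logits):
    """Average the predictive distributions of M independently trained members.

    member_logits: array of shape (M, N, K) -- M members, N inputs, K classes.
    Returns averaged probabilities of shape (N, K).
    """
    return softmax(np.asarray(member_logits)).mean(axis=0)

def predictive_entropy(probs):
    """Entropy of the averaged prediction; higher => more uncertain."""
    return -(probs * np.log(probs + 1e-12)).sum(axis=-1)
```

When members agree, the averaged distribution stays peaked; when they disagree, it flattens and the entropy rises, which is the signal the paper exploits for uncertainty estimation.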
Learning in implicit generative models
Shakir Mohamed* and Balaji Lakshminarayanan*
* denotes equal contribution
[arXiv]
Note: A version of this work was presented at the NeurIPS workshop on adversarial training, 2016.
Learning deep nearest neighbor representations using differentiable boundary trees
Daniel Zoran, Balaji Lakshminarayanan and Charles Blundell
[arXiv]
Distributed Bayesian learning with stochastic natural-gradient expectation propagation and the posterior server
Leonard Hasenclever, Stefan Webb, Thibaut Lienart, Sebastian Vollmer, Balaji Lakshminarayanan, Charles Blundell and Yee Whye Teh
JMLR, 2017
[arXiv] [code]
Decision trees and forests: a probabilistic perspective
Balaji Lakshminarayanan
Ph.D. thesis, University College London, 2016
[pdf]
The Mondrian kernel
Matej Balog, Balaji Lakshminarayanan, Zoubin Ghahramani, Daniel M. Roy and Yee Whye Teh
UAI, 2016
[arXiv] [code] [slides] [poster]
Approximate inference with the variational Holder bound
Guillaume Bouchard and Balaji Lakshminarayanan
[arXiv]
Mondrian forests for large-scale regression when uncertainty matters
Balaji Lakshminarayanan, Daniel M. Roy and Yee Whye Teh
AISTATS, 2016
[pdf] [code] [slides]
Kernel-based just-in-time learning for passing expectation propagation messages
Wittawat Jitkrittum, Arthur Gretton, Nicolas Heess, Ali Eslami, Balaji Lakshminarayanan, Dino Sejdinovic and Zoltan Szabo
UAI, 2015
[arXiv] [code]
Particle Gibbs for Bayesian additive regression trees
Balaji Lakshminarayanan, Daniel M. Roy and Yee Whye Teh
AISTATS, 2015
[pdf] [code]
Mondrian forests: Efficient online random forests
Balaji Lakshminarayanan, Daniel M. Roy and Yee Whye Teh
NeurIPS, 2014
[pdf] [code] [slides]
Note: See here for a much faster, scikit-learn compatible re-implementation of Mondrian forests.
Distributed Bayesian posterior sampling via moment sharing
Minjie Xu, Balaji Lakshminarayanan, Yee Whye Teh, Jun Zhu and Bo Zhang
NeurIPS, 2014
[pdf] [code]
Latent IBP compound Dirichlet allocation
Cedric Archambeau, Balaji Lakshminarayanan, and Guillaume Bouchard
TPAMI special issue on Bayesian Nonparametrics, 2015
[pdf] [code available upon request] [IEEE link]
Note: A short version of this work appeared at the NeurIPS workshop on Bayesian nonparametrics, 2011.
Inferring ground truth from multi-annotator ordinal data: a probabilistic approach
Balaji Lakshminarayanan and Yee Whye Teh
[arXiv] [code]
Top-down particle filtering for Bayesian decision trees
Balaji Lakshminarayanan, Daniel M. Roy and Yee Whye Teh
ICML, 2013
[pdf] [code] [slides]
Acoustic classification of multiple simultaneous bird species: A multi-instance multi-label approach
Forrest Briggs, Balaji Lakshminarayanan, Lawrence Neal, Xiaoli Z. Fern, Raviv Raich, Sarah J. K. Hadley, Adam S. Hadley, and Matthew G. Betts
JASA, 2012
[link]
Inference in supervised latent Dirichlet allocation
Balaji Lakshminarayanan and Raviv Raich
MLSP, 2011
[pdf] [code available upon request] [IEEE link]
Robust Bayesian matrix factorization
Balaji Lakshminarayanan, Guillaume Bouchard, and Cedric Archambeau
AISTATS, 2011
[revised pdf] [code available upon request]
Note: the updates for a_n, c_m were wrong in the original version of the pdf. The increments ought to be ell_n and ell_m respectively instead of 1.
Probabilistic models for classification of bioacoustic data
Balaji Lakshminarayanan
M.S. thesis, Oregon State University, 2010
[pdf]
Non-negative matrix factorization for parameter estimation in Hidden Markov models
Balaji Lakshminarayanan and Raviv Raich
MLSP, 2010
[pdf] [IEEE link]
A syllable-level probabilistic framework for bird species identification
Balaji Lakshminarayanan, Raviv Raich and Xiaoli Z. Fern
ICMLA, 2009
[pdf] [IEEE link]
Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. Personal use of this material is permitted. Permission must be obtained from the copyright holder for all other uses.
Practical Tutorial on Uncertainty and Out-of-distribution Robustness in Deep Learning
Open Data Science Conference (ODSC) West, 2022
Building Neural Networks That Know What They Don’t Know
Invited Talk at SIAM Conference on Uncertainty Quantification, 2022
Reliable Deep Anomaly Detection
Applied Machine Learning Days, 2021
Introduction to Uncertainty in Deep Learning
CIFAR Deep Learning + Reinforcement Learning DLRL Summer School, 2021
Practical uncertainty estimation and out-of-distribution robustness in deep learning
Video of NeurIPS Tutorial, 2020
Uncertainty and Out-of-Distribution Robustness in Deep Learning
Talk at Harvard ML theory, 2020
Uncertainty in Deep Learning
Guest lecture at Deep Learning for Science School, Lawrence Berkeley National Laboratory, Berkeley, CA, 2020
Detecting out-of-distribution inputs using deep generative models: Pitfalls and promises
ReWork summit at San Francisco, 2020
Uncertainty and Out-of-Distribution Robustness in Deep Learning
Plenary Talk at MLUQ workshop at USC, 2019
Do Deep Generative Models Know What They Don't Know?
Stanford University RL forum, 2019
Uncertainty and Out-of-Distribution Robustness in Deep Learning
Guest lecture at UC Berkeley course on Trustworthy Deep Learning, 2019
Probabilistic model ensembles for predictive uncertainty estimation
Bayesian deep learning workshop, NeurIPS 2018
Understanding Generative Adversarial Networks
Age of AI conference, San Francisco, 2018
Mondrian Forests
Bayesian nonparametrics in the North workshop, Lille, 2015
Ph.D. in Machine Learning, University College London, London, UK
I worked with Yee Whye Teh, and was part of the Gatsby Unit.
M.S., Oregon State University, Corvallis, USA
I worked with Raviv Raich, and was part of the Bioacoustics research group.
B.Eng, College of Engineering, Guindy, Anna University, Chennai, India
I was part of the Integrated Systems laboratory, where I worked on communication sub-systems for the Anna University Micro-satellite project, India's first student-built satellite.
May 2023 - present: Senior Staff Research Scientist at Google DeepMind.
Apr 2020 - May 2023: Senior Staff Research Scientist at Google Brain.
Sep 2015 - Apr 2020: Staff Research Scientist at Google DeepMind. I was in London, UK until Aug 2017 and moved to Mountain View, USA after that.
Jan - Oct 2011: Yandex Labs, Palo Alto, USA. I worked with Dmitry Pavlov and the Machine Learning Ranking team.
Summer 2010: Machine Learning group, Xerox Research Centre Europe, Grenoble, France. I worked with Cedric Archambeau and Guillaume Bouchard.
Summer 2009: Center for Advanced Research, PricewaterhouseCoopers, San Jose, USA.
Action Editor for TMLR
Area Chair/Senior PC member for NeurIPS (2019-2022), ICML (2019-2022), ICLR (2020-2023), AISTATS (2019)
Reviewer for the following journals: JMLR, TPAMI, JRSS-B, Bayesian analysis, Statistics and Computing, Neural Computation, IJCV
Reviewer/PC member for the following conferences: NeurIPS (2013-2018), ICML (2015-2018), AISTATS (2015-2018), UAI (2015-2019), ICLR (2017-2019)
Co-organized ICML workshop on Uncertainty and Robustness in Deep Learning, 2021
Co-presented NeurIPS 2020 Tutorial on Practical uncertainty estimation and out-of-distribution robustness in deep learning (video, slides)
Co-organized ICML workshop on Uncertainty and Robustness in Deep Learning, 2020
Co-organized ICML workshop on Uncertainty and Robustness in Deep Learning, 2019
Co-organized UAI workshop on Uncertainty in Deep Learning, 2018
Co-organized ICML workshop on Implicit Models, 2017
Teaching assistant for Probabilistic and Unsupervised Learning course (2012) at Gatsby Unit, University College London
Teaching assistant for Statistical Machine Learning and Data Mining course (2014) at Dept of Statistics, University of Oxford
Co-organized CSML lunch talk series at University College London from March 2012 till August 2013