Gatsby Computational Neuroscience Unit

Eric Nalisnick

Monday 9th March 2020
Time: 12 - 1pm

Ground Floor Seminar Room
25 Howland Street, London, W1T 4JG

Building and Critiquing Models for Probabilistic Deep Learning

Deep neural networks have demonstrated impressive performance in predictive tasks.  However, these models have been found to be opaque, brittle, and data-hungry---which frustrates their use for scientific, medical, and other safety-critical applications.  In this talk, I describe how imposing additional probabilistic structure on the network makes it more amenable to the best practices of traditional statistical modeling.  For instance, I show that the deep learning regularization strategy known as “dropout” can be interpreted as a Bayesian structured shrinkage prior.  Taking this perspective both illuminates the underlying modeling assumptions and improves performance in small-data settings.  For a second example, I show how to constrain the deep neural network to encode only bijective functions.  This constraint endows the network with additional capabilities, such as the ability to detect covariate shift.  I close the talk by highlighting open problems in model specification, posterior inference, and data-efficient criticism.
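To make the second example concrete: because a bijective network admits an exact change-of-variables density, log p(x) = log p_z(f(x)) + log|det ∂f/∂x|, unusually low log-density under the model can serve as a covariate-shift signal.  The following is a minimal, hypothetical sketch (a single untrained affine coupling layer in the style of RealNVP-like flows, not the speaker's implementation); the parameter names W_s and W_t are illustrative.

```python
# Minimal sketch: one affine coupling layer is bijective by construction,
# so the exact log-density of an input is available via the
# change-of-variables formula.  Toy, untrained parameters; in practice one
# would stack many such layers and fit them by maximum likelihood.
import numpy as np

rng = np.random.default_rng(0)
W_s, W_t = rng.normal(size=(1, 1)), rng.normal(size=(1, 1))  # hypothetical params

def coupling_forward(x):
    """Map x -> z bijectively; return z and log|det Jacobian|."""
    x1, x2 = x[:, :1], x[:, 1:]
    s = np.tanh(x1 @ W_s)                   # log-scale for x2, kept bounded
    t = x1 @ W_t                            # shift for x2
    z = np.concatenate([x1, x2 * np.exp(s) + t], axis=1)
    return z, s.sum(axis=1)                 # log|det J| = sum of log-scales

def log_prob(x):
    """log p(x) = log N(f(x); 0, I) + log|det J|."""
    z, log_det = coupling_forward(x)
    log_base = -0.5 * (z ** 2).sum(axis=1) - 0.5 * z.shape[1] * np.log(2 * np.pi)
    return log_base + log_det

# Shifted inputs receive markedly lower log-density, which is the
# covariate-shift signal the abstract alludes to.
x_in = rng.normal(size=(5, 2))              # "in-distribution" draws
x_shift = rng.normal(loc=5.0, size=(5, 2))  # covariate-shifted draws
print(log_prob(x_in).mean(), log_prob(x_shift).mean())
```

Because the map is invertible, no probability mass is lost or created; the Jacobian term accounts exactly for how the layer stretches or compresses space, which is what makes the density computation exact rather than a bound.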

Bio:
Eric Nalisnick is a postdoctoral researcher at the University of Cambridge.  His research interests span statistical machine learning, with a current emphasis on Bayesian deep learning, generative modeling, and out-of-distribution detection.  He received his PhD from the University of California, Irvine, where he was supervised by Padhraic Smyth.  He was previously a research scientist at DeepMind and has held internships at DeepMind, Microsoft, Twitter, and Amazon.