
Gatsby Computational Neuroscience Unit


Yingzhen Li


Friday 21st February 2020


Time: 12 - 1pm


Ground Floor Seminar Room

25 Howland Street, London, W1T 4JG


Deep probabilistic modelling for reliable machine learning systems

Machine learning, and deep learning in particular, has seen many recent successes, including beating the human champion at the board game Go. However, as deep learning models are deployed to assist high-risk decision making, we naturally question the reliability of their predictions. In this regard, deep probabilistic modelling is an emerging research field that shows great promise for reliable machine learning systems. On the one hand, it provides a principled way to incorporate domain knowledge into model design and to quantify uncertainty in predictions; on the other hand, the expressiveness of probabilistic models is enhanced by deep neural networks. This talk will describe both the computational and the modelling aspects of deep probabilistic modelling, with defence against adversarial examples as a running application. First, I will motivate the use of Bayesian inference in neural networks to better defend against and detect adversarial attacks, and present a unified framework for fast and accurate approximate Bayesian inference in deep probabilistic models. Then, I will discuss the differences between generative and discriminative approaches to classification, and present an empirical study of the impact of graphical model structure on adversarial robustness and detection. Finally, I will conclude with an outlook on better probabilistic modelling for reliable machine learning pipelines.
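
To make the uncertainty-based detection idea concrete, here is a minimal Python sketch using Monte Carlo dropout, one common approximation to Bayesian inference in neural networks (not necessarily the framework presented in the talk). Repeated stochastic forward passes give a Monte Carlo estimate of the predictive distribution, and high predictive entropy can flag inputs the model is unsure about, such as candidate adversarial examples. The model architecture, dimensions, and inputs below are illustrative assumptions.

import torch
import torch.nn as nn

# Toy classifier; the dropout layers stay active at test time so that
# repeated forward passes sample from an approximate posterior over
# networks (Monte Carlo dropout).
class MCDropoutMLP(nn.Module):
    def __init__(self, in_dim=20, hidden=64, n_classes=3, p=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def predictive_distribution(model, x, n_samples=50):
    """Average softmax outputs over stochastic forward passes."""
    model.train()  # keep dropout on, i.e. sample network weights
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(dim=0)  # Monte Carlo estimate of p(y | x, data)

def predictive_entropy(probs, eps=1e-12):
    """High entropy marks inputs the model is uncertain about,
    e.g. candidates for adversarial examples."""
    return -(probs * (probs + eps).log()).sum(dim=-1)

if __name__ == "__main__":
    model = MCDropoutMLP()
    x = torch.randn(4, 20)  # stand-in for real inputs
    probs = predictive_distribution(model, x)
    print(predictive_entropy(probs))

In practice a detection rule would threshold this entropy (or a related uncertainty measure) and reject or escalate inputs above it; the talk's unified framework concerns how to compute such approximate posteriors quickly and accurately.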