Thursday 19th September 2019
Ground Floor Seminar Room
25 Howland Street, London, W1T 4JG
Learning-Algorithms from Bayesian Principles
In machine learning, new learning algorithms are designed by borrowing ideas from optimization and statistics, followed by extensive empirical effort to make them practical. However, there is a lack of underlying principles to guide this process. I will present a stochastic learning algorithm derived from a Bayesian principle. Using this algorithm, we can obtain a range of existing algorithms: from classical methods such as least-squares, Newton's method, and the Kalman filter to deep-learning algorithms such as RMSprop and Adam. Surprisingly, using the same principles, new algorithms can be naturally obtained even for challenging learning tasks such as online learning, continual learning, and reinforcement learning. This talk will summarize recent work and outline future directions on how this principle can be used to make algorithms that mimic the learning behaviour of living beings.
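As a rough illustration of the flavour of such derivations (my sketch, not material from the talk): one way classical optimizers emerge from a Bayesian update is to maintain a Gaussian "posterior" q = N(m, 1/s) over the unknown minimiser and update its natural parameters by approximate natural-gradient steps. Evaluating the expectations at the mean and taking step size rho = 1 reduces the update to Newton's method. A minimal one-dimensional sketch, using f(x) = cosh(x) as a stand-in objective:

```python
import math

# Illustrative sketch only: a Gaussian q = N(m, 1/s) over the minimiser,
# updated by (delta-approximated) natural-gradient steps on its natural
# parameters. With rho = 1 this collapses to Newton's method.

def grad(x):          # gradient of f(x) = cosh(x)
    return math.sinh(x)

def hess(x):          # second derivative of f(x) = cosh(x)
    return math.cosh(x)

m, s = 4.0, 1.0       # mean and precision of q
rho = 1.0             # step size; rho = 1 recovers the exact Newton step
for _ in range(20):
    s = (1 - rho) * s + rho * hess(m)   # precision <- E_q[Hessian] (at mean)
    m = m - rho * grad(m) / s           # mean <- mean - gradient / precision

print(m)              # converges to 0, the minimiser of cosh
```

Smaller values of rho give damped, uncertainty-weighted steps rather than the raw Newton update; replacing the exact Hessian with cheaper curvature estimates is one route to Adam-style methods.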
I am a team leader at the RIKEN Center for Advanced Intelligence Project (AIP) in Tokyo, where I lead the Approximate Bayesian Inference (ABI) Team. Since April 2018, I have also been a visiting professor at the EE department of the Tokyo University of Agriculture and Technology (TUAT). I am an Action Editor for the Journal of Machine Learning Research (JMLR). From 2014 to 2016, I was a scientist at EPFL in Matthias Grossglauser's lab. During my time at EPFL, I taught two large machine learning courses, for which I received a teaching award. I first joined EPFL as a post-doc with Matthias Seeger in 2013, and before that I finished my PhD at UBC in 2012 under the supervision of Kevin Murphy.