Gatsby Computational Neuroscience Unit CSML University College London

Gatsby Computational Neuroscience Unit
Sainsbury Wellcome Centre
25 Howland Street
London W1T 4JG UK

+44 (0)7795 291 705


Recent courses

  • COMP GI13/COMP M050: Advanced Topics in Machine Learning
    This course comprises 15 hours on kernel methods, and 15 hours on learning theory. The kernel methods part covers: construction of RKHS, in terms of feature spaces and smoothing properties; simple linear algorithms in RKHS (PCA, ridge regression); kernel methods for hypothesis testing (two-sample, independence); support vector machines for classification, including both the C-SVM and nu-SVM; and further applications of kernels (feature selection, clustering, ICA). There is an additional component (not assessed) on theory of reproducing kernel Hilbert spaces.
    Course web page
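
A minimal sketch of one of the simple linear algorithms in RKHS listed above, kernel ridge regression with a Gaussian kernel (function and parameter names here are my own, not taken from the course materials):

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Gaussian kernel matrix: k(x, y) = exp(-||x - y||^2 / (2 sigma^2))
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq / (2 * sigma**2))

def kernel_ridge_fit(X, y, lam=0.1, sigma=1.0):
    # Dual coefficients alpha = (K + n*lam*I)^{-1} y, with ridge penalty lam > 0
    n = X.shape[0]
    K = rbf_kernel(X, X, sigma)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def kernel_ridge_predict(X_train, alpha, X_test, sigma=1.0):
    # Prediction f(x) = sum_i alpha_i k(x_i, x)
    return rbf_kernel(X_test, X_train, sigma) @ alpha
```

The same Gram-matrix machinery underlies the kernel PCA topic: centre the kernel matrix and take its leading eigenvectors, rather than solving a linear system.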

  • Machine Learning Summer School (Cadiz and Arequipa 2016, Tuebingen 2015)
    A short course on kernels for the Machine Learning Summer Schools in Tuebingen, Cadiz, and Arequipa. The first lecture covers the fundamentals of reproducing kernel Hilbert spaces. The second lecture introduces distribution embeddings, characteristic kernels, hypothesis testing, and optimal kernel choice for testing. The third lecture covers advanced topics: three-variable interactions, covariance in feature spaces, kernels that induce energy distances, and Bayesian inference with kernels.
    First lecture and video
    Second lecture and video
    Third lecture and video
    Note: videos are from the Tuebingen summer school, but some of the slides are from more recent summer schools and have updates.
    The UAI 2017 tutorial slides (from my main homepage) are more recent still, and are less technically detailed but more polished.

  • Short Course for the Workshop on Nonparametric Measures of Dependence, Columbia 2014
    A short course on kernels for the Nonparametric Measures of Dependence workshop at Columbia. The course covers three nonparametric hypothesis testing problems: (1) given samples from distributions p and q, a homogeneity test determines whether to accept or reject p=q; (2) given a joint distribution p_xy over random variables x and y, an independence test investigates whether p_xy = p_x p_y; (3) given a joint distribution over several variables, we may test whether there exists a factorization (e.g., p_xyz = p_xy p_z, or, for the case of total independence, p_xyz = p_x p_y p_z). The tests benefit from many years of machine learning research on kernels for various domains, and thus apply to distributions on high dimensional vectors, images, strings, graphs, groups, and semigroups, among others. The energy distance and distance covariance statistics are also shown to fall within the RKHS family.
    Course web page
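
As an illustration of the homogeneity test above, a sketch of a biased quadratic-time MMD statistic with a permutation test under the null p=q. This is a simplified version of the kernel two-sample test covered in the course; the function names, the fixed Gaussian kernel, and the defaults are my own choices:

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Gaussian kernel matrix: k(x, y) = exp(-||x - y||^2 / (2 sigma^2))
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq / (2 * sigma**2))

def mmd2_biased(X, Y, sigma=1.0):
    # Biased estimate of the squared maximum mean discrepancy between samples X, Y
    return (rbf_kernel(X, X, sigma).mean()
            - 2 * rbf_kernel(X, Y, sigma).mean()
            + rbf_kernel(Y, Y, sigma).mean())

def mmd_permutation_test(X, Y, sigma=1.0, n_perm=200, seed=0):
    # p-value by randomly re-splitting the pooled sample, valid under H0: p = q
    rng = np.random.default_rng(seed)
    Z = np.vstack([X, Y])
    n = X.shape[0]
    stat = mmd2_biased(X, Y, sigma)
    exceed = 0
    for _ in range(n_perm):
        idx = rng.permutation(Z.shape[0])
        if mmd2_biased(Z[idx[:n]], Z[idx[n:]], sigma) >= stat:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)
```

In practice the kernel bandwidth sigma would be set by a heuristic (e.g., the median pairwise distance) or by the optimal kernel choice methods discussed in the lectures, rather than fixed in advance.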

  • Short Course for the Workshop on Kernel Methods for Big Data, Lille 2014
    A short course on kernels for the Kernel Methods for Big Data workshop. The first lecture is an introduction to RKHS. The second covers embeddings of probabilities to RKHS and characteristic kernels. The third lecture covers advanced topics: relation of RKHS embeddings of probabilities and energy distances, optimal kernel choice for two-sample testing, testing three-way interactions, and Bayesian inference without models.
    Note that the Columbia course covers the topics of Lectures 1 and 2 in greater depth, but does not cover all the topics in Lecture 3.
    First lecture
    Second lecture
    Third lecture
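
The independence testing problem described in the Columbia course above is commonly addressed with the Hilbert-Schmidt Independence Criterion (HSIC). A minimal sketch of its biased estimator, HSIC = tr(K H L H) / n^2 with centring matrix H = I - (1/n) 11^T, assuming Gaussian kernels on both variables (names and defaults are my own):

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Gaussian kernel matrix: k(x, y) = exp(-||x - y||^2 / (2 sigma^2))
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq / (2 * sigma**2))

def hsic_biased(X, Y, sigma=1.0):
    # Biased HSIC estimate: large values indicate dependence between X and Y.
    # K and L are Gram matrices on the two samples; H centres them in feature space.
    n = X.shape[0]
    K = rbf_kernel(X, X, sigma)
    L = rbf_kernel(Y, Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / n**2
```

As with the two-sample case, a practical test calibrates the statistic with a permutation null (shuffling one sample to break the pairing), which the same permutation machinery covers.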

  • Introduction to Machine Learning, short course on kernel methods
    This course comprises three hours of lectures, and a three hour practical session. Material includes construction of RKHS, in terms of feature spaces and smoothing properties; simple linear algorithms in RKHS (maximum mean discrepancy, ridge regression); and support vector machines for classification.
    Course web page
