
Gatsby Computational Neuroscience Unit, CSML, University College London

Collaborations
  • I am affiliated with the MPI for Biological Cybernetics as a research scientist.
  • I collaborate with the Select Lab at CMU.

Contact
arthur.gretton@gmail.com
Gatsby Computational Neuroscience Unit
Alexandra House
17 Queen Square
London WC1N 3AR

Phone
207-679 1186


workshops (2012)

  • NIPS 2012 Workshop on the Confluence between Kernel Methods and Graphical Models
    This workshop addresses two main research questions. First, how may kernel methods be used to address difficult learning problems for graphical models, such as inference for multi-modal continuous distributions over many variables, or dealing with non-conjugate priors? Second, how might kernel methods be advanced by bringing in concepts from graphical models, for instance by incorporating sophisticated conditional independence structures, latent variables, and prior information?
    Web page

  • NIPS 2012 Workshop on Modern Nonparametric Methods in Machine Learning
    Statistical analysis of large, high-dimensional data sets has become common in many scientific fields, ranging from biology, genomics, and the health sciences to astronomy, economics, and machine learning. The aim of this workshop is to bring together practitioners who work on specialized applications and theoreticians interested in providing sound methodology. We hope to advertise recent successes of nonparametric methods on large-scale, high-dimensional problems in a number of domains, and to dispel the common belief that nonparametric methods are unsuited to the challenges arising from big data. (A minimal sketch of one such method, the kernel two-sample test, appears after this list.)
    Web page

  • ICML 2012 Workshop on RKHS and kernel-based methods: theoretical topics and recent advances
    The workshop brings together mathematicians, researchers in probability theory, and machine learning researchers working on RKHS methods. Its goals are threefold: first, to provide an accessible review and synthesis of classical results in RKHS theory from the point of view of functional analysis, probability theory, and numerical analysis; second, to cover recent advances in RKHS theory relevant to machine learning researchers (for instance, operator-valued RKHSs, kernels on time series, and kernel embeddings of conditional probabilities); third, to provide a forum for open problems, to clear up misconceptions that sometimes occur in the literature, and to discuss technical challenges.
    Web page
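
To give a concrete feel for the nonparametric methods discussed above, here is a minimal sketch of a kernel two-sample test statistic: the (biased) estimate of the squared maximum mean discrepancy (MMD). The Gaussian kernel, the bandwidth sigma, and the sample sizes are illustrative assumptions, not choices prescribed by the workshop.

    import numpy as np

    def gaussian_kernel(X, Y, sigma=1.0):
        # Pairwise Gaussian kernel: k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
        sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
        return np.exp(-sq / (2 * sigma**2))

    def mmd2_biased(X, Y, sigma=1.0):
        # Biased estimate of squared MMD between samples X ~ P and Y ~ Q:
        # mean k(X, X) + mean k(Y, Y) - 2 mean k(X, Y).
        return (gaussian_kernel(X, X, sigma).mean()
                + gaussian_kernel(Y, Y, sigma).mean()
                - 2.0 * gaussian_kernel(X, Y, sigma).mean())

    rng = np.random.default_rng(0)
    X = rng.normal(0.0, 1.0, size=(200, 2))  # draws from P
    Y = rng.normal(0.5, 1.0, size=(200, 2))  # draws from Q (shifted mean)
    print(mmd2_biased(X, Y))  # noticeably above zero when P and Q differ

In practice the statistic is compared against a null distribution (e.g., obtained by permuting the pooled sample) to set a test threshold.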

workshops (2010)

  • NIPS 2010 Workshop on Low-rank Methods for Large-scale Machine Learning
    There has been a growing body of research devoted to developing randomized methods that efficiently and accurately generate low-rank approximations to large matrices. This workshop aims to survey recent developments, with an emphasis on usefulness for practical large-scale machine learning problems. Questions to be addressed include: What are the state-of-the-art approximation techniques? How does the heterogeneity of data affect the randomization aspects of these algorithms? Which methods are appropriate for which machine learning tasks? How do these methods work in practice on large-scale machine learning tasks? What is the tradeoff between numerical precision and time/space efficiency on the one hand, and performance (e.g., classification or clustering accuracy) on the other? (A toy sketch of one such technique, the Nystroem approximation, follows below.)
    Web page
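
As an illustration of the low-rank techniques surveyed above, the following is a minimal sketch of the Nystroem approximation, which reconstructs an n x n kernel matrix from a random subset of m landmark columns. The Gaussian kernel, bandwidth, and landmark count are illustrative assumptions.

    import numpy as np

    def gaussian_kernel(A, B, sigma=1.0):
        # Pairwise Gaussian kernel matrix between the rows of A and B.
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-sq / (2 * sigma**2))

    def nystroem(X, m, sigma=1.0, seed=0):
        # Return factors C (n x m) and pinv(W) (m x m) such that the full
        # Gram matrix K is approximated by C @ pinv(W) @ C.T.
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(X), size=m, replace=False)
        Z = X[idx]                               # landmark points
        C = gaussian_kernel(X, Z, sigma)         # n x m slice of K
        W = gaussian_kernel(Z, Z, sigma)         # m x m landmark block
        return C, np.linalg.pinv(W)

    X = np.random.default_rng(1).normal(size=(1000, 5))
    C, W_pinv = nystroem(X, m=50)
    K_approx = C @ W_pinv @ C.T                  # rank <= 50 approximation

In applications one keeps the factors rather than forming K_approx explicitly, so the full n x n matrix is never materialized.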

workshops (2009)

  • NIPS 2009 Workshop on Temporal Segmentation
    Data with temporal (or sequential) structure arise in many applications, such as speaker diarization, human action segmentation, network intrusion detection, DNA copy number analysis, and neural activity modelling, to name a few. The purpose of this workshop is to bring together experts on temporal segmentation from the statistics, machine learning, and signal processing communities, to address a broad range of applications from robotics to neuroscience, and to define the current and future challenges in the field. (A toy change-point detection sketch appears after this list.)
    Videolectures
    Web page

  • NIPS 2009 Workshop on Large-Scale Machine Learning: Parallelism and Massive Datasets
    Prior NIPS workshops have focused on the topic of scaling up machine learning, which remains an important developing area. We introduce a new perspective by focusing on how large-scale machine learning algorithms should be informed by future parallel architectures. By bringing together experts in computer architecture, parallel algorithms, scientific computing, and machine learning, we will develop a concrete research agenda for large-scale learning on parallel architectures.
    Videolectures
    Web page
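
As a toy illustration of temporal segmentation (not a method from the workshop itself), the sketch below locates a single change in the mean of a one-dimensional signal by scanning for the split that maximizes a length-weighted squared difference of segment means; practical segmentation methods handle multiple change points and richer signal models.

    import numpy as np

    def single_changepoint(x):
        # Scan all splits of the 1-D signal x and score each by a
        # length-weighted squared gap between the two segment means.
        n = len(x)
        best_t, best_score = None, -np.inf
        for t in range(1, n):
            left, right = x[:t], x[t:]
            score = (t * (n - t) / n) * (left.mean() - right.mean())**2
            if score > best_score:
                best_t, best_score = t, score
        return best_t, best_score

    rng = np.random.default_rng(2)
    x = np.concatenate([rng.normal(0, 1, 100), rng.normal(2, 1, 100)])
    print(single_changepoint(x))  # change point estimated near t = 100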

workshops (2004-2008)

  • NIPS 2008 Workshop on Kernel Learning: Automatic Selection of Optimal Kernels
    Videolectures
    Web page

  • NIPS 2007 Workshop on Representations and Inference on Probability Distributions
    Videolectures

  • NIPS 2005 Workshop on Kernel Methods and Structured Domains
    Videolectures

  • NIPS 2004 Workshop on Learning with Structured Outputs
