Gatsby Computational Neuroscience Unit


Research


The Unit's core strengths are in computationally and probabilistically oriented theoretical neuroscience, and in statistical machine learning. In neuroscience, we have particular interests in plasticity, neuromodulation, population coding and neural dynamics, applied to the fields of audition, control/action selection and vision. In machine learning, we work on parametric and non-parametric Bayesian methods, graphical models, and sampling-based and deterministic methods for approximate inference and learning, applied to neuroscience problems as well as to other areas.

Research publications are listed on a separate page.


1) Theoretical Neuroscience


Please click to jump to:

Dynamics, Neuromodulation, Neural coding, Plasticity, Vision and Audition.


2) Machine Learning


Please click to jump to:

Bayesian statistics, Graphical models, Kernel methods, Reinforcement learning, Neural data analysis, Bioinformatics and Natural language processing.


1) Theoretical Neuroscience

Dynamics: Biological neural networks exhibit rich dynamical behaviours, whose importance for computation is under constant debate. We study the role of oscillatory excitatory-inhibitory systems in such areas as preventing spontaneous symmetry breaking in neural activity, perceptual learning, neural plasticity, associative memory, the representation of interval time, and the oscillatory coordination between the hippocampus and neocortex. We also study the dynamical properties of active membrane processes associated with spiking.
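As a toy illustration of such oscillatory excitatory-inhibitory dynamics, the sketch below integrates a rate-based E-I pair in the style of Wilson and Cowan; the nonlinearity, weights and time constants are illustrative assumptions, not a model drawn from the Unit's publications.

```python
# A minimal sketch (illustrative parameters): a rate-based
# excitatory-inhibitory pair integrated with forward Euler.
import numpy as np

def simulate_ei(steps=2000, dt=0.1, tau_e=10.0, tau_i=20.0,
                w_ee=12.0, w_ei=10.0, w_ie=10.0, w_ii=2.0,
                input_e=2.0, input_i=0.5):
    """Simulate coupled E/I firing rates; returns their trajectories."""
    f = lambda x: 1.0 / (1.0 + np.exp(-x))   # sigmoidal rate nonlinearity
    r_e, r_i = 0.1, 0.1
    traj = np.zeros((steps, 2))
    for t in range(steps):
        dr_e = (-r_e + f(w_ee * r_e - w_ei * r_i + input_e)) / tau_e
        dr_i = (-r_i + f(w_ie * r_e - w_ii * r_i + input_i)) / tau_i
        r_e += dt * dr_e
        r_i += dt * dr_i
        traj[t] = (r_e, r_i)
    return traj

rates = simulate_ei()
print(rates[-5:])   # late-time E/I rates; oscillations arise for suitable weights
```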
Neuromodulation: Neuromodulators such as acetylcholine, norepinephrine, serotonin and dopamine play critical roles in controlling neural circuits and in regulating their plasticity, and have particular associations with reinforcement and attention. Starting from a computational analysis of appetitive conditioning, which suggests that the phasic release of dopamine reports a (temporal difference) prediction error for summed future reward, we are extending our studies to consider attentional aspects of dopamine and the opponency between serotonin and dopamine. We also study how neuromodulators may affect perceptual processing: for example, how acetylcholine and norepinephrine might report uncertainty and novelty to control the integration of bottom-up and top-down information in inference and learning, and how the different modulatory systems affect representational learning in perceptual systems. We are starting to consider the role played by neuromodulators in addiction.
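The temporal-difference prediction error mentioned above has a compact algorithmic form. A minimal sketch of TD(0) value learning on a toy chain of states with a single rewarded transition (the states, learning rate and discount are illustrative assumptions):

```python
# A minimal sketch of the temporal-difference (TD) prediction error that the
# phasic dopamine signal is proposed to report. All parameters are
# illustrative assumptions.
import numpy as np

n_states, alpha, gamma = 5, 0.1, 0.95
V = np.zeros(n_states)               # learned value of each state

def td_update(s, r, s_next):
    """One TD(0) step: delta is the 'dopamine-like' prediction error."""
    delta = r + gamma * V[s_next] - V[s]
    V[s] += alpha * delta
    return delta

# Repeatedly traverse a chain whose final transition delivers reward 1.
for episode in range(200):
    for s in range(n_states - 1):
        r = 1.0 if s == n_states - 2 else 0.0
        td_update(s, r, s + 1)
print(V)   # values propagate backwards from the rewarded transition
```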
Neural coding: Understanding the relationship between stimuli and neural spiking activity is one of the most fundamental questions in neuroscience. We approach the question in many ways: on the one hand, working with empirical data to understand, process and formalise the information they contain; on the other, investigating theoretical issues associated with sophisticated versions of population codes. We also study how early sensory codes may be derived from the efficient-coding principles of information theory.
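As one concrete instance of reading out a population code, the sketch below performs maximum-likelihood decoding of a scalar stimulus from Poisson spike counts under Gaussian tuning curves; the tuning parameters and stimulus grid are illustrative assumptions.

```python
# A minimal sketch of population decoding: maximum-likelihood estimation of a
# stimulus from Poisson spike counts. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
prefs = np.linspace(-np.pi, np.pi, 20)            # preferred stimuli

def tuning(s, gain=10.0, width=0.5):
    """Gaussian tuning curves: expected spike count of each neuron."""
    return gain * np.exp(-0.5 * ((s - prefs) / width) ** 2)

s_true = 0.3
counts = rng.poisson(tuning(s_true))              # observed spike counts

grid = np.linspace(-np.pi, np.pi, 1001)
# Poisson log-likelihood, dropping the count-dependent constant.
loglik = np.array([np.sum(counts * np.log(tuning(s) + 1e-12) - tuning(s))
                   for s in grid])
print("ML estimate:", grid[np.argmax(loglik)], "true:", s_true)
```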
Plasticity: A remarkable feature of the brain is its ability to adapt to, and learn from, experience. This learning has measurable physiological correlates in the form of changes at individual synapses, as well as in resulting modifications of the stimulus-response properties of individual neurons. We study the theoretical significance of these changes at a number of levels, including the interpretation of spike-timing-dependent update rules for synaptic strength, the interaction of reinforcement and neuromodulation with receptive-field plasticity, and the consequences of plastic changes for perceptual learning.
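Spike-timing-dependent update rules of the kind mentioned above are often summarised by an exponential STDP window. A minimal sketch, with illustrative amplitudes and time constants:

```python
# A minimal sketch of a spike-timing-dependent plasticity (STDP) window:
# potentiation when the presynaptic spike precedes the postsynaptic one,
# depression otherwise. Amplitudes and time constants are illustrative.
import numpy as np

def stdp(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for post-minus-pre spike time difference dt_ms."""
    if dt_ms > 0:                       # pre before post -> potentiate
        return a_plus * np.exp(-dt_ms / tau_plus)
    else:                               # post before pre -> depress
        return -a_minus * np.exp(dt_ms / tau_minus)

for dt in (-40, -10, 10, 40):
    print(dt, round(stdp(dt), 5))
```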
Vision: We study the organisational and computational principles that lie behind physiological, anatomical and psychophysical observations in biological vision. Using both theoretical models and psychophysical experiments, we focus on coding principles that can help elucidate the information-processing function of receptive fields in the retina and cortex, on the mechanisms of visual grouping, adaptation and segmentation in early visual cortex, and on visual inference and attentional mechanisms.
Audition: Starting with only a one- or two-dimensional time series (the sound wave at one or two ears), the auditory system extracts a rich portrait of the auditory environment, accurately segmenting and locating auditory objects in the presence of noise, distortion, echoes and other signal imperfections. We study the question of how this is done, applying both algorithmic and neuroscientific tools.
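One classical localisation cue, the interaural time difference, can be estimated by cross-correlating the two ear signals. A minimal sketch on a synthetic broadband source (the signal, sampling rate and delay are illustrative assumptions):

```python
# A minimal sketch of one localisation cue: estimating the interaural time
# difference (ITD) by cross-correlation. All signals are synthetic.
import numpy as np

fs = 44100                                   # samples per second
rng = np.random.default_rng(1)
source = rng.standard_normal(int(0.05 * fs)) # 50 ms broadband source
delay = 15                                   # true lag in samples (~0.34 ms)
left = source
right = np.concatenate([np.zeros(delay), source[:-delay]])

xcorr = np.correlate(right, left, mode="full")
lag = np.argmax(xcorr) - (left.size - 1)     # peak location gives the ITD
print("estimated ITD:", lag / fs * 1e3, "ms")
```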


2) Machine Learning

Bayesian statistics: Bayesian statistics is a framework for performing inference by combining prior knowledge with data, and as such has been influential in the understanding of intelligent learning systems. We work on many areas of Bayesian statistics, including variational methods for efficient inference in complex domains, model selection and non-parametric modelling, novel Markov chain Monte Carlo methods, semi-supervised learning, and the modelling of temporal sequences.
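At its simplest, combining prior knowledge with data looks like the conjugate update below: a Beta prior over a coin's bias combined with Bernoulli observations (the prior and data are illustrative assumptions).

```python
# A minimal sketch of Bayesian updating with a conjugate pair.
import numpy as np

alpha, beta = 2.0, 2.0                       # Beta(2, 2) prior: mildly favours 0.5
data = np.array([1, 0, 1, 1, 1, 0, 1, 1])    # observed heads/tails

# Conjugacy makes the posterior another Beta distribution.
alpha_post = alpha + data.sum()
beta_post = beta + (data.size - data.sum())
print("posterior mean:", alpha_post / (alpha_post + beta_post))
```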
Graphical models: Realistic models often require representing the dependencies among many random variables. Graphical models provide an elegant formalism for representing these dependencies and for performing efficient probabilistic inference and decision making. We study novel algorithms for approximate inference, and methods for learning both the parameters and the structure of graphical models from data.
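For tree-structured graphs, exact inference reduces to passing messages. A minimal sketch of sum-product message passing along a three-variable binary chain, with illustrative potentials:

```python
# A minimal sketch of exact inference in a chain-structured graphical model
# x1 - x2 - x3 with binary variables. The potentials are illustrative.
import numpy as np

phi1 = np.array([0.9, 0.1])                  # unary evidence on x1
psi12 = np.array([[1.0, 0.5], [0.5, 1.0]])   # pairwise potential on (x1, x2)
psi23 = np.array([[1.0, 0.2], [0.2, 1.0]])   # pairwise potential on (x2, x3)

# Pass messages along the chain, summing out the sending variable each time.
m12 = phi1 @ psi12    # m12[x2] = sum_x1 phi1[x1] * psi12[x1, x2]
m23 = m12 @ psi23     # m23[x3] = sum_x2 m12[x2] * psi23[x2, x3]

p_x3 = m23 / m23.sum()                       # normalised marginal P(x3)
print("P(x3):", p_x3)
```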
Kernel methods: Difficult real-world pattern-recognition and function-learning problems require the learning system to be highly flexible. Kernel methods such as Gaussian processes and support vector machines are one way of defining highly flexible non-parametric models based on similarities between data points. Gaussian processes, which correspond to neural networks with infinitely many hidden neurons, have proved powerful at avoiding common pitfalls of learning such as 'overfitting'. We focus on how to make kernel methods even more flexible and efficient, how to learn the kernel from data, and how to use kernel methods in a variety of applications.
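A minimal sketch of Gaussian-process regression with a squared-exponential kernel, computing the posterior mean and pointwise uncertainty at test inputs; the hyperparameters and toy data are illustrative assumptions:

```python
# A minimal sketch of Gaussian-process regression. Hyperparameters and the
# toy data are illustrative.
import numpy as np

def rbf(a, b, lengthscale=0.5, variance=1.0):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

x = np.array([-1.0, 0.0, 1.0])               # training inputs
y = np.sin(3 * x)                            # training targets
xs = np.linspace(-2, 2, 5)                   # test inputs
noise = 1e-2

K = rbf(x, x) + noise * np.eye(x.size)
Ks = rbf(x, xs)
Kss = rbf(xs, xs)

mean = Ks.T @ np.linalg.solve(K, y)          # posterior mean
cov = Kss - Ks.T @ np.linalg.solve(K, Ks)    # posterior covariance
print(mean)
print(np.sqrt(np.diag(cov)))                 # pointwise uncertainty
```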
Reinforcement learning: Reinforcement learning studies how systems can actively learn about the transition and reward structure of their environments and come to choose appropriate actions. Apart from the links with conditioning and neuromodulation, we have studied various aspects of the trade-off between exploration and exploitation, the effects of approximation, and the discovery of hierarchical structure.
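The exploration-exploitation trade-off shows up already in the simplest bandit setting. A minimal sketch of epsilon-greedy action selection with incremental value estimates (arm payoffs and epsilon are illustrative assumptions):

```python
# A minimal sketch of the exploration-exploitation trade-off on a
# multi-armed bandit. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
true_means = np.array([0.2, 0.5, 0.8])       # unknown to the learner
Q = np.zeros(3)                              # running value estimates
counts = np.zeros(3)
epsilon = 0.1

for t in range(2000):
    if rng.random() < epsilon:               # explore
        a = int(rng.integers(3))
    else:                                    # exploit
        a = int(np.argmax(Q))
    r = float(rng.random() < true_means[a])  # Bernoulli reward
    counts[a] += 1
    Q[a] += (r - Q[a]) / counts[a]           # incremental mean update

print("estimates:", Q.round(2), "pulls:", counts)
```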
Neural data analysis: The brain is perhaps the most complex subject of empirical investigation in scientific history. The scale is staggering: over 10^11 neurons, each making an average of 10^3 synapses, with computation occurring on scales ranging from a single dendritic spine to an entire cortical area. Slowly, we are beginning to acquire experimental tools that can gather the massive amounts of data needed to characterise this system. However, understanding and interpreting these data will also require substantial strides in inferential and statistical techniques. In collaboration with experimental laboratories, we have adapted machine learning techniques to characterise data from multiple extracellular electrodes, from identified single cells, and from local-field and magnetoencephalographic recordings. These studies have the potential to introduce powerful new, theoretically motivated ways of looking at neural data.
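A common first step with such recordings is a trial-averaged peri-stimulus time histogram. A minimal sketch on synthetic spike trains (the rate profile and bin width are illustrative assumptions):

```python
# A minimal sketch of a peri-stimulus time histogram (PSTH) averaged over
# trials. The spike trains are synthetic and all parameters illustrative.
import numpy as np

rng = np.random.default_rng(3)
dt, T, n_trials = 0.001, 1.0, 50             # 1 ms steps, 1 s trials
t = np.arange(0, T, dt)
rate = 5 + 40 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))   # Hz, transient

spikes = rng.random((n_trials, t.size)) < rate * dt   # Poisson-like spiking

bin_width = 0.02                             # 20 ms PSTH bins
bins = int(bin_width / dt)
psth = spikes.reshape(n_trials, -1, bins).sum(axis=2).mean(axis=0) / bin_width
print(psth.round(1))                         # trial-averaged firing rate (Hz)
```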
Bioinformatics: Recent advances in biology have produced a wealth of data on the structure and function of genes and proteins. These data can advance our understanding of life and of the causes and cures of disease. However, because the thousands of genes and proteins involved interact in unknown and complex ways, understanding them, and posing and testing hypotheses about their interactions, is a challenging problem. We use statistical machine learning methods, such as graphical models, non-parametric models and state-space models, to model the interactions between genes, to model the structure of proteins, and to classify the function of new proteins.
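As one instance of the state-space models mentioned above, the sketch below runs scalar Kalman filtering on a synthetic time series, as one might track a latent regulatory signal from noisy expression measurements; the dynamics and noise parameters are illustrative assumptions.

```python
# A minimal sketch of filtering in a scalar linear-Gaussian state-space
# model. All dynamics and noise parameters are illustrative.
import numpy as np

rng = np.random.default_rng(4)
a, q, r = 0.95, 0.1, 0.5               # state transition, process/obs noise
x, obs = 0.0, []
for _ in range(100):                   # simulate latent state and observations
    x = a * x + rng.normal(0, np.sqrt(q))
    obs.append(x + rng.normal(0, np.sqrt(r)))

mu, var = 0.0, 1.0                     # filtering posterior N(mu, var)
for y in obs:
    mu_pred, var_pred = a * mu, a * a * var + q    # predict
    k = var_pred / (var_pred + r)                  # Kalman gain
    mu = mu_pred + k * (y - mu_pred)               # update with observation
    var = (1 - k) * var_pred
print("final estimate:", round(mu, 3), "latest truth:", round(x, 3))
```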
Natural language processing: Building systems for processing, understanding and generating natural languages both sheds light on how we ourselves learn and use language, and has important applications in improving human-computer interaction and in analysing data in the form of written text (e.g. most of the web). Given the complexities and intricacies of human languages, it is not surprising that problems in natural language processing are difficult. We use machine learning methods to build statistical models of languages and documents, and for the analysis of sentences (e.g. part-of-speech tagging, word-sense disambiguation, parsing and machine translation).
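A minimal sketch of one of the simplest statistical language models: bigram probabilities with add-one smoothing, estimated from a toy corpus (the corpus is an illustrative assumption):

```python
# A minimal sketch of a bigram language model with add-one smoothing.
from collections import Counter

corpus = "the cat sat on the mat the cat ate".split()
vocab = sorted(set(corpus))
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def p_bigram(w2, w1):
    """P(w2 | w1) with add-one (Laplace) smoothing."""
    return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + len(vocab))

print(p_bigram("cat", "the"))   # high: 'the cat' occurs twice
print(p_bigram("mat", "cat"))   # low: unseen bigram, smoothed
```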
