GATSBY COMPUTATIONAL NEUROSCIENCE UNIT
Peter Sollich & Matthew Urry

Department of Mathematics, King’s College London, UK


Wednesday 17 February 2010

16.00


Seminar Room B10 (Basement)

Alexandra House, 17 Queen Square, London, WC1N 3AR



Kernels and learning curves for Gaussian process regression on random graphs

We investigate how well Gaussian process regression can learn functions defined on graphs, using large regular random graphs as a paradigmatic example. Random-walk-based kernels then have some non-trivial properties: within the standard approximation of a locally tree-like graph structure, the kernel does not become constant, i.e. neighbouring function values do not become fully correlated, when the lengthscale $\sigma$ of the kernel is made large. Instead, the kernel attains a non-trivial limiting form, which we calculate. The fully correlated limit is reached only once loops become relevant, and we estimate where the crossover to this regime occurs.
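As a concrete illustration (not the speakers' own code), the following is a minimal Python sketch of one standard random-walk kernel construction on a graph, $K \propto (aI - \tilde L)^p$ with $\tilde L$ the normalised graph Laplacian, in the style of Smola and Kondor's kernels on graphs; the precise kernel and lengthscale parametrisation used in the talk may differ, and all parameter values below are illustrative assumptions.

```python
# Sketch (assumption): p-step random-walk kernel K = (a*I - L_norm)^p on a
# random regular graph, normalised to unit average prior variance.
import numpy as np
import networkx as nx

def random_walk_kernel(G, a=2.0, p=10):
    """Random-walk kernel (a*I - L_norm)^p, with L_norm the normalised
    graph Laplacian; the step number p sets the effective lengthscale."""
    L = nx.normalized_laplacian_matrix(G).toarray()
    V = L.shape[0]
    K = np.linalg.matrix_power(a * np.eye(V) - L, p)
    return K * V / np.trace(K)  # normalise so the mean prior variance is 1

# Large regular random graphs are locally tree-like.
G = nx.random_regular_graph(d=3, n=500, seed=0)
K = random_walk_kernel(G, a=2.0, p=10)

# Correlation between neighbouring function values: on a locally tree-like
# graph this stays below 1 even for large p, approaching a non-trivial limit
# before loop effects eventually become relevant.
u, v = next(iter(G.edges()))
print(K[u, v] / np.sqrt(K[u, u] * K[v, v]))
```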

Our main subject is learning curves of Bayes error versus training set size. We show that these are qualitatively well predicted by a simple approximation borrowed from the case of Euclidean inputs, using only the spectrum of a large tree as input, and that they generically scale with $n/V$, the number of training examples per vertex. Finally, we consider how the results depend on the graph type by comparing with Poisson (Erdős-Rényi) graphs. We also show initial results from an improved theoretical approach that is based on belief propagation, or equivalently the cavity method, and that should become exact for large graphs.
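To make the flavour of such an eigenvalue-based prediction concrete, here is a hedged Python sketch of a self-consistent learning-curve approximation of the Euclidean-input type, $\epsilon = g(n/(\epsilon + \sigma^2))$ with $g(h) = V^{-1} \sum_\alpha (\lambda_\alpha^{-1} + h)^{-1}$ over the kernel eigenvalues $\lambda_\alpha$; the exact equation used in the talk may differ, and this fixed-point form is an assumption.

```python
# Sketch (assumption): self-consistent learning-curve approximation
#   eps = g(n / (eps + sigma2)),  g(h) = (1/V) * sum_a 1/(1/lam_a + h),
# iterated to a fixed point for each training set size n.
import numpy as np
import networkx as nx

def gp_learning_curve(eigvals, n_values, sigma2=0.1, iters=200):
    """Predicted Bayes error for each n by fixed-point iteration."""
    V = len(eigvals)
    errors = []
    for n in n_values:
        eps = eigvals.sum() / V            # start from the prior variance
        for _ in range(iters):
            h = n / (eps + sigma2)
            eps = np.sum(1.0 / (1.0 / eigvals + h)) / V
        errors.append(eps)
    return np.array(errors)

# Spectrum of a random-walk kernel on a 3-regular graph (as sketched above).
G = nx.random_regular_graph(d=3, n=500, seed=0)
L = nx.normalized_laplacian_matrix(G).toarray()
K = np.linalg.matrix_power(2.0 * np.eye(500) - L, 10)
K *= 500 / np.trace(K)
lam = np.clip(np.linalg.eigvalsh(K), 1e-12, None)  # guard tiny negatives

n_over_V = np.linspace(0.01, 5.0, 25)
eps = gp_learning_curve(lam, n_over_V * 500)
```

Plotting the predicted error against $n/V$ rather than $n$ makes the generic per-vertex scaling of the learning curves visible.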