Here you'll find a list of additional material to help you understand the course and go deeper into the concepts.

This is a growing selection accumulated over many previous years. It is by no means exhaustive, and some of it may no longer be relevant. You shouldn't expect the exam to be constrained by this material.

(most of this is adapted from previous years)

## Past exam questions

Some past exam papers can be found on the (old) Gatsby internal wiki. Note that this is only accessible from within the Gatsby network. If any papers are missing, please chase up the PIs and update the wiki.

## General reading

- The main course textbook is Theoretical Neuroscience by Dayan and Abbott. The appendix of the book covers a fair amount of the maths needed for the course.
- One recommended and comprehensive maths textbook is Mathematical Methods for Physics and Engineering by Riley, Hobson, and Bence. Strang's Linear Algebra and Its Applications is also an excellent text.
- For some reading on neuroscience, the most comprehensive textbook is probably Principles of Neural Science by Kandel. If you want something more concise, these Instant Notes in Neuroscience are really good and efficient.
- A nice readable introduction to dynamical systems and phase plane analysis with particular emphasis on single neuron models can be found in Dynamical Systems in Neuroscience by Izhikevich.
- Liam Paninski also has some truly wonderful lecture notes here, if you want to read a little more about the statistical analysis of neural data.
- Some quick references: background cribsheet, matrix identities, Fourier transforms and matrix cookbook.

## Plasticity

- Review of plasticity; see in particular Fig. 2:

L. F. Abbott and Sacha B. Nelson (2000). Synaptic plasticity: taming the beast. Nat Neurosci, 3: 1178-1183.

## Systems

Kandel's Principles of Neural Science (see above) is a great source of information about many aspects of neuroscience. Some additional reading:

- Crick, F. and Asanuma, C. (1986). Certain aspects of the anatomy and physiology of the cerebral cortex. In J. L. McClelland and D. E. Rumelhart (eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. II, chapter 20, pp. 333-371. MIT Press
- Hubel, DH. and Wiesel, T. (1962). Receptive fields, binocular interaction, and functional architecture in the cat’s visual cortex. J. Physiol, 160: 106-154.
- King, AJ. and Nelken, I. (2009). Unraveling the principles of auditory cortical processing: can we learn from the visual system?. Nat Neurosci, 12(6): 698-701

## Neural Coding

- An amazing review of the spike-triggered average and related techniques can be found in the following paper:

Schwartz O, Pillow JW, Rust NC, Simoncelli EP. (2006). Spike-triggered neural characterization. Journal of Vision, 6(4):484-507.

- Liam Paninski's notes on the statistical analysis of neural data are also particularly relevant for this part of the course.
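As a quick illustration of the spike-triggered average itself, here is a minimal sketch (all sizes, parameter values, and the "true" filter are made up for the example, not taken from the review): simulate a rectifying linear-nonlinear neuron driven by white noise, then recover its filter by averaging the stimulus segments that precede spikes.

```python
import numpy as np

# Illustrative spike-triggered average (STA) sketch; all names and
# parameter values here are assumptions made for the example.
rng = np.random.default_rng(0)
T, d = 50_000, 20                        # time bins, filter length
stimulus = rng.standard_normal(T)        # white-noise stimulus

true_filter = np.exp(-np.arange(d) / 5.0)
true_filter /= np.linalg.norm(true_filter)

# Simulate a rectifying linear-nonlinear neuron (Poisson counts per bin).
# Each row of X holds the d stimulus samples preceding one time bin.
X = np.stack([stimulus[t - d:t] for t in range(d, T)])
rate = np.maximum(X @ true_filter[::-1], 0.0)
spikes = rng.poisson(rate)

# STA: spike-count-weighted average of the preceding stimulus segments.
sta = (spikes @ X) / spikes.sum()
sta /= np.linalg.norm(sta)

# For a white-noise stimulus the STA aligns with the (time-reversed) filter.
print(np.dot(sta, true_filter[::-1]))
```

For correlated stimuli the raw STA is biased by the stimulus covariance; the corrections discussed in the review above address exactly that.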

Here are some additional papers selected by R. Williamson:

- This first set of papers provides an excellent background to spike-triggered neural characterization and the use of generalized linear models in neural data analysis.
- Pillow JW, Shlens J, Paninski L, Sher A, Litke AM, Chichilnisky EJ, Simoncelli EP. (2008). Spatio-temporal correlations and visual signaling in a complete neuronal population. Nature, 454: 995-999.
- Paninski L, Pillow JW, and Lewi J. (2007). Statistical models for neural encoding, decoding, and optimal stimulus design. In Computational Neuroscience: Theoretical Insights Into Brain Function, eds. P Cisek, T Drew, & J Kalaska.
- Pillow JW. (2007). Likelihood-based approaches to modeling the neural code. In Bayesian Brain: Probabilistic Approaches to Neural Coding, eds. K Doya, S Ishii, A Pouget & R Rao. MIT press. 53-70.

- This paper provides an excellent review of the use of nonlinear systems identification in neural coding:

Wu MC, David SV, and Gallant JL. (2006). Complete functional characterization of sensory neurons by systems identification. Annu Rev Neurosci, 29:477-505.

- Here are a number of general references on receptive field estimation (and the issue of nonlinearities); the two Sahani & Linden papers are certainly worth a read:
- Sahani M, and Linden JF. (2003). Evidence optimization techniques for estimating stimulus-response functions. In S. Becker, S. Thrun, and K. Obermayer, eds., Advances in Neural Information Processing Systems, vol. 15, pp. 301-308. Cambridge, MA: MIT Press.
- **[Mentioned in class]** Sahani M, and Linden JF. (2003). How linear are auditory cortical responses? In S. Becker, S. Thrun, and K. Obermayer, eds., Advances in Neural Information Processing Systems, vol. 15, pp. 109-116. Cambridge, MA: MIT Press.
- Suzuki K. (2004). Determining the receptive field of a neural filter. J. Neural Eng. 1: 228-237.
- Klein DJ, Depireux DA, Simon JZ & Shamma, SA. (2000). Robust spectrotemporal reverse correlation for the auditory system: Optimizing stimulus design. J. Comp. Neurosci. 9, 85-111.
- Aertsen, AM. & Johannesma, PI. (1981). The spectro-temporal receptive field. A functional characteristic of auditory neurons. Biol Cybern, 42: 133-43.
- Linden JF, Liu RC, Sahani M, Schreiner CE and Merzenich MM (2003). Spectrotemporal Structure of Receptive Fields in Areas AI and AAF of Mouse Auditory Cortex. J Neurophysiol 90:2660-2675.
- Depireux DA, Simon JZ, Klein DJ, Shamma SA. (2001). Spectro-temporal response field characterization with dynamic ripples in ferret primary auditory cortex. J Neurophysiol, 85(3):1220-34.
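To make the generalized-linear-model papers in the first set above more concrete, here is a minimal sketch of fitting a Poisson GLM encoding model by gradient ascent on its log-likelihood. This is an illustrative toy, not the fitting code from any of these papers; the sizes and "true" filter are made-up assumptions.

```python
import numpy as np

# Toy Poisson GLM encoding model: rate = exp(X @ k).
rng = np.random.default_rng(2)
T, d = 20_000, 5
X = rng.standard_normal((T, d))                 # stimulus design matrix
k_true = np.array([0.5, -0.3, 0.2, 0.0, 0.1])   # assumed "true" filter
y = rng.poisson(np.exp(X @ k_true))             # observed spike counts

# Maximum likelihood by gradient ascent; with the exponential nonlinearity
# the Poisson log-likelihood sum(y * (X @ k) - exp(X @ k)) is concave.
k = np.zeros(d)
lr = 2.5e-5
for _ in range(200):
    rate = np.exp(X @ k)
    k += lr * (X.T @ (y - rate))                # gradient of log-likelihood

print(np.round(k, 2))                           # estimate of k_true
```

In practice one would add a spike-history term and a regularizer (or use a Newton-type solver), which is where the papers above pick up.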

## Biophysics

- Dayan and Abbott cover biophysics thoroughly, though the notation differs from the lectures.
- Dynamical Systems in Neuroscience is also a good source of information, specifically for phase plane analysis and an overview of many neuron models.
- Spiking Neuron Models by W. Gerstner and W. Kistler covers these models as well. The mathematical derivations are well explained, and it covers population dynamics too.
- Here are Peter Latham's handwritten notes. He used them to teach the course, but they are a few years old now, so the material no longer corresponds closely to the lectures. They could still serve as a supplement, but they are probably impossible to understand without attending the lectures, and they may contain mistakes.
- Lecture 1 - passive neurons and Hodgkin Huxley
- Lecture 2 - simplified model, suitable for nullcline analysis
- Lecture 3 - a little philosophy, and nullcline analysis
- Lecture 4 - passive dendrites and axons
- Lecture 5 - synapses
- Lecture 6 - also synapses (but an older set of notes)
- Lecture 7 - philosophy and phase plane analysis
- Lecture 8 - grand summary of biophysics

- Also here is an introductory lecture from another course courtesy of Peter Latham:

biophysics.pptx
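To illustrate the nullcline analysis that recurs in these lectures, here is a small sketch using the FitzHugh-Nagumo model, a standard two-variable simplification suitable for phase-plane analysis. The parameter values are illustrative assumptions, not taken from the notes.

```python
import numpy as np

# FitzHugh-Nagumo model:
#   dv/dt = v - v**3 / 3 - w + I
#   dw/dt = eps * (v + a - b * w)
# (eps sets the time-scale separation; it does not affect the nullclines)
a, b, eps, I = 0.7, 0.8, 0.08, 0.5

v = np.linspace(-2.5, 2.5, 501)
v_nullcline = v - v**3 / 3 + I        # where dv/dt = 0 (cubic)
w_nullcline = (v + a) / b             # where dw/dt = 0 (line)

# Fixed points lie at nullcline intersections; find sign changes of the gap.
gap = v_nullcline - w_nullcline
crossings = np.where(np.diff(np.sign(gap)) != 0)[0]
for i in crossings:
    print(f"fixed point near v = {v[i]:.2f}")
```

Plotting the two nullclines in the (v, w) plane and checking how trajectories cross them is exactly the procedure carried out by hand in the phase-plane lectures.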

## Networks

- Peter Latham's handwritten notes. Same comments as above.
- Lecture 9 - Wilson-Cowan equations
- Lecture 10 - Computations
- Lecture 11 - Hopfield networks 1
- Lecture 12 - Hopfield networks 2
- Lecture 13 - Line attractors
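To make the Hopfield-network lectures concrete, here is a minimal sketch (the network size, number of patterns, and corruption level are made-up illustrations): store random binary patterns with the Hebbian outer-product rule, then recover one from a corrupted cue by asynchronous updates.

```python
import numpy as np

# Minimal Hopfield network sketch with +1/-1 units.
rng = np.random.default_rng(1)
N, P = 100, 3                          # neurons, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian weights: W_ij = (1/N) * sum_mu xi_i^mu xi_j^mu, no self-coupling.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

# Corrupt one stored pattern by flipping 10% of its bits.
state = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
state[flip] *= -1

# Asynchronous updates: sweep neurons in random order; the network descends
# an energy function, so it settles into a stored attractor when P << N.
for _ in range(10):
    for i in rng.permutation(N):
        state[i] = 1 if W[i] @ state >= 0 else -1

print((state @ patterns[0]) / N)       # overlap of 1.0 means perfect recall
```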