A collection of 10-15 minute talks I have given on a range of topics:
- Important pitfalls of Bayesian model comparison: Why Andrew Gelman "hates" Bayesian model comparison.
- Object-oriented matlab: talk and an example class definition for an unnormalised Gaussian in natural-parameter form.
- Why version control is an essential tool for scientists: talk, brief.
- Publication-quality figures using matlab: Why I do as much as possible in matlab: talk, materials.
- Unit Testing: Why I write more test code than regular code.
- A generalisation of the von Mises distribution for circular time-series data, talk.
- An even more cautionary tale about variational methods, talk.
- Learning 3-D Scene Structure from a Single Still Image, Ashutosh Saxena, Min Sun and Andrew Y. Ng: slides, and the website for the project.
- Image inpainting: Scene Completion Using Millions of Photographs, Hays and Efros, 2007: slides.
- A cautionary tale about variational Bayes, notes, figs, and code.
- "Fair" Elections, Arrows Theorem and non-deterministic systems: How making an electoral system more random can make it more fair.
- The Sleeping Beauty paradox: an inference problem about memory wiping, or universe size.
- Joint work with the CNBH on VTL inference using a factor analysis model with a mixture of Gaussians prior.
- Candes and Tao's work on recovery of sparse signals from a small number of linear measurements.
- Efficient Auditory Coding, Smith and Lewicki, Nature Feb. 2006 - A generative model for sound waveforms that predicts auditory-nerve filter shapes.
- The evolution of eyes, Land and Fernald, 1992 - a talk focussing on Figure 1 of this beautiful paper about the convergent evolution of optical devices in nature.
- Multilinear Analysis of Image Ensembles: TensorFaces - or how to do multilinear PCA.
- Bubbles: A unifying Framework for Low-Level Statistical Properties of Natural Image Sequences. Hyvarinen et al, J. Opt. Soc. Am. 2003 - A paper that should have been included in the journal club on Lewicki and Karklin's work
- Optimal, Unsupervised Learning in Invariant Object Recognition, Wallis and Baddeley, Neural Comp. 9, 883-894 (1997) - A summary and some thoughts on their theory.
- Model comparison - a note about Iain's note about David MacKay's picture.
- A brief introduction to slow feature analysis - at the end of the talk I gave a sketch of the basic intuition for a probabilistic modelling interpretation, but this is absent from the slides.
- Sudoku musings
- Why do most people find the maximum likelihood covariance matrix of a multivariate Gaussian via a fluke? Derivatives of functions of covariance matrices. For a more complete treatment, see my tensor notes.
- Why there are more 1s in the world than 2s: Benford's law.
- How to turn twelve people into thirteen by borrowing body parts (and other tricks): Carroll and Dodgson
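As a small aside on the Benford's law talk above, the "more 1s than 2s" claim is easy to check numerically. The sketch below (my own illustration, not material from the talk) compares the leading-digit frequencies of the powers of 2, a classic Benford sequence, against the predicted probabilities log10(1 + 1/d):

```python
import math

# Benford's law: P(leading digit = d) = log10(1 + 1/d), for d = 1..9
def benford_pmf(d):
    return math.log10(1 + 1 / d)

def leading_digit(n):
    return int(str(n)[0])

# Empirical check on the leading digits of 2**k for k = 1..N
N = 10000
counts = {d: 0 for d in range(1, 10)}
n = 1
for _ in range(N):
    n *= 2
    counts[leading_digit(n)] += 1

# Leading digit 1 turns up roughly 30% of the time, digit 9 under 5%
for d in range(1, 10):
    print(d, counts[d] / N, round(benford_pmf(d), 4))
```

The empirical frequencies match the predicted ones to a few decimal places, so 1s really do outnumber 2s, which in turn outnumber 9s.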