Recent experiments [1,2] have suggested that single synapses may behave like noisy binary switches. Binary synapses have the advantage of robustness to noise and hence could preserve memory over longer time scales than analog systems. However, learning in systems with discrete synapses is known to be a computationally hard problem. We developed and studied a neurobiologically plausible on-line learning algorithm derived from Belief Propagation algorithms.
This algorithm performs remarkably well in a model neuron with N binary synapses, each with a discrete number of `hidden' states, that has to learn either a random classification problem or a classification rule from a teacher. In the first case, the system is able to learn a number of associations close to the information-theoretic limit, in a time sub-linear in the system size, corresponding to very few presentations of each pattern. In the second case, both numerical experiments and analytical calculations show that perfect generalization is achieved in a finite (and short) time.
The algorithm is similar to the standard `perceptron' learning algorithm, but with an additional rule for synaptic transitions which occur only when a currently presented pattern is `barely correct' (that is, a single synaptic flip would have caused an error). In this case, the synaptic changes are meta-plastic only (they affect the hidden states but not the actual synaptic states), and act to stabilize each synapse in its current state. This rule is crucial to the algorithm's performance in both learning protocols, and we suggest that it is simple enough to be easily implemented by neurobiological systems.
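The rule described above might be sketched as follows. This is a minimal illustration, not the authors' exact implementation: the hidden-state range (odd integers in [-7, 7]), the number of synapses and patterns, and the stopping criterion are all arbitrary choices made for the sketch. Weights are the signs of hidden states; an error triggers a clipped perceptron-like step on the hidden states, while a `barely correct' pattern (stability equal to 1, so one flip would cause an error) triggers a purely meta-plastic step that pushes correctly-voting synapses deeper into their current state without flipping any weight.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 101          # number of binary synapses (odd, so the output is never zero)
P = 10           # number of random input/label associations to store
H = 7            # hidden states are odd integers in [-H, H] (illustrative choice)

patterns = rng.choice([-1, 1], size=(P, N))
labels = rng.choice([-1, 1], size=P)
h = rng.choice([-1, 1], size=N)   # hidden synaptic states (kept odd by +/-2 updates)

def weights(h):
    # Actual binary synaptic state is just the sign of the hidden state.
    return np.sign(h)

def count_errors(h):
    return int(np.sum(labels * (patterns @ weights(h)) < 0))

for epoch in range(500):
    for mu in rng.permutation(P):
        xi, sigma = patterns[mu], labels[mu]
        delta = sigma * (xi @ weights(h))       # stability of this pattern
        if delta < 0:
            # Error: perceptron-like step on the hidden states, clipped.
            h = np.clip(h + 2 * sigma * xi, -H, H)
        elif delta == 1:
            # 'Barely correct': a single synaptic flip would cause an error.
            # Meta-plastic step only: deepen the hidden state of synapses
            # that currently vote correctly; no actual weight changes sign.
            agree = weights(h) * sigma * xi > 0
            h[agree] = np.clip(h[agree] + 2 * sigma * xi[agree], -H, H)
    if count_errors(h) == 0:
        break

final_errors = count_errors(h)
```

Because the meta-plastic step only moves a hidden state further in the direction of its current sign, the visible synaptic weight is untouched, matching the distinction between plastic and meta-plastic transitions drawn above.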
[1] C.C. Petersen, R.C. Malenka, R.A. Nicoll and J.J. Hopfield, Proc. Natl. Acad. Sci. USA 95:4732-4737 (1998).
[2] D.H. O'Connor, G.M. Wittenberg and S.S-H. Wang, Proc. Natl. Acad. Sci. USA 102:9679-9684 (2005).