The synfire chain has been proposed as the elementary building block of information processing in the cortex [1,2]. If such a subnetwork is stimulated appropriately, synchronized spiking activity propagates through its consecutive neuron groups. Despite intense research over the past 20 years, statistical tools to reliably and unambiguously detect synfire activity are still not available. Existing methods either cannot cope with massively parallel spike trains (say, 100 neurons or more), or they analyze individual spatial or spatio-temporal spike patterns [e.g. 3,4] without respecting the variability expected in synfire activity. Here we present a new approach that overcomes both problems and allows us to directly detect and visualize the signature of synfire activity in large but finite data sets. Analysis of random parallel recordings from large-scale neural network simulations with embedded synfire chains demonstrates the sensitivity and specificity of the method.
The first step in the analysis is to discretize time with a bin size h appropriate for the expected precision of spike synchronization, e.g. h=2ms. Each bin is identified by an integer i corresponding to the left border of the time interval. Let S(i) be the subset of all recorded neurons active in bin i. We now compare network activity at discretized times i and j by computing the number of elements in the intersection of S(i) and S(j), i.e. the number of neurons active at both instances of time. This value is normalized to the range between 0 and 1 by dividing by the size of the smaller of the two sets. The two time axes i and j span a matrix M(i,j) of the new measure. Clearly M(i,i) = 1 and M(j,i) = M(i,j); it therefore suffices to study the lower triangular matrix i > j.
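The construction of the overlap matrix described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name and the parallel-array input format (spike times in ms with matching integer neuron ids) are assumptions made for the example.

```python
import numpy as np

def intersection_matrix(spike_times, neuron_ids, h=2.0, t_max=None):
    """Normalized-overlap matrix M(i, j) from parallel spike trains.

    spike_times, neuron_ids: parallel arrays (times in ms, integer ids).
    Bin index i corresponds to the left border i*h of each time bin.
    """
    if t_max is None:
        t_max = spike_times.max()
    n_bins = int(np.ceil(t_max / h))
    # S(i): set of neurons active in bin i
    bins = (spike_times // h).astype(int)
    S = [set() for _ in range(n_bins)]
    for b, nid in zip(bins, neuron_ids):
        if b < n_bins:
            S[b].add(nid)
    M = np.zeros((n_bins, n_bins))
    for i in range(n_bins):
        for j in range(i + 1):  # lower triangle suffices: M is symmetric
            smaller = min(len(S[i]), len(S[j]))
            if smaller > 0:  # empty bins are left at 0 by convention
                M[i, j] = len(S[i] & S[j]) / smaller
            M[j, i] = M[i, j]
    return M
```

For large recordings the double loop would be replaced by a vectorized computation on a binary neuron-by-bin matrix, but the set formulation mirrors the definition in the text directly.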
The signature of a synfire chain that fires twice is a short diagonal stripe of large entries. The stripe starts at a particular matrix element M(i,j) corresponding to the start times of the synfire chain. The overlap M(i,j) is large because the set S(i) includes all active members of the first neuron group in the chain, and set S(j) includes the active neurons of that group in the second run. The two sets corresponding to M(i+d,j+d) include the active neurons of the second group, and so on, where d depends on the synaptic delay. If a particular chain is activated at discretized times k1 to kn, n(n-1)/2 stripes are found, originating at matrix elements (k1,k2), (k1,k3), ..., (k1,kn); (k2,k3), ..., (k2,kn); ...; (kn-1,kn). Finally, if several chains are active, each produces its own set of diagonal stripes. Synfire activity is detectable if the overlap activity is distinguishable from chance overlap. The temporal aspect of the synfire signature is taken into account by filtering the matrix with an appropriate diagonal filter. This improves visualization and considerably enhances sensitivity. The amplitude distribution of the matrix elements allows us to define a threshold separating signal from noise.
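The diagonal filtering and thresholding steps can be illustrated as below. The abstract only speaks of "an appropriate diagonal filter" and a threshold derived from the amplitude distribution; the straight boxcar kernel and the quantile-based threshold used here are assumptions chosen for concreteness.

```python
import numpy as np

def diagonal_filter(M, length=5):
    """Average M along its diagonals with a boxcar of `length` bins.

    A stripe of large entries parallel to the main diagonal survives the
    averaging, while isolated chance coincidences are attenuated.
    """
    n = M.shape[0]
    F = np.zeros_like(M)
    for i in range(n):
        for j in range(n):
            k = min(length, n - max(i, j))  # clip the kernel at the matrix edge
            F[i, j] = np.mean([M[i + d, j + d] for d in range(k)])
    return F

def synfire_threshold(F, q=0.999):
    # Threshold from the amplitude distribution of the filtered entries;
    # the quantile level q is an illustrative choice, not from the text.
    return np.quantile(F, q)
```

An isolated unit entry is reduced to 1/length by the filter, whereas a full diagonal stripe of unit entries keeps its amplitude, which is what makes the subsequent thresholding effective.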
We calibrate the method using recordings from a balanced neural network model (50,000 integrate-and-fire neurons, 80% excitatory, 20% inhibitory, connection probability 0.1). In contrast to the standard wiring scheme, all excitatory-to-excitatory synapses are organized into synfire chains. The network embeds 50 chains, each composed of 20 consecutive groups of 100 neurons. The chains are activated by pulse packets at random times and at a low rate. Including all 50,000 neurons in the analysis reveals clear signatures of the synfire runs. However, even when the number of recorded neurons is reduced to 200, synfire activity is reliably detected.
This level of sensitivity should allow experimental testing (e.g. with optically recorded data) in the fairly near future, enabling either verification or rejection of the synfire hypothesis.
[1] M Abeles (1991) Corticonics: Neural Circuits of the Cerebral Cortex. Cambridge University Press.
[2] E Bienenstock (1995) Network: Comput Neural Systems 6, 179-224.
[3] M Abeles and GL Gerstein (1988) J Neurophysiol 60(3), 909-924.
[4] S Grün, M Diesmann, and A Aertsen (2002) Neural Comput 14(1), 81-119.