Most of these studies compare information theoretic measurements from data with those from hypothetical Poisson neurons. However, real neurons have refractory periods, which cannot be captured naturally by Poisson processes. Thus one cannot exclude that the differences in information theoretic measures between the experimental data and the Poisson process are simply due to this well-known refractory period.
To gain insight into the effect of refractoriness, I study information theory in a model in which the neuronal spiking is described by an arbitrary rate-dependent renewal process. In this model, the probability density of firing at time t, given that the previous spike was at time t_l, is given by

    rho(t | t_l) = R(t) P( int_{t_l}^{t} R(t') dt' ),

where R is the rate and P(x) is some probability density for which E(x) = 1. For a constant rate R, the rescaled interval is simply x = R (t - t_l).
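As a concrete illustration of this model, the sketch below samples a constant-rate renewal spike train by drawing rescaled intervals x with E(x) = 1 and dividing them by the rate. The gamma density used here is an arbitrary choice of P(x) for the demonstration; the model allows any unit-mean density.

```python
import numpy as np

rng = np.random.default_rng(0)

def renewal_spike_train(rate, duration, shape=4.0):
    """Spike times of a constant-rate renewal process on [0, duration].

    Rescaled intervals x are gamma distributed with shape `shape` and
    scale 1/shape, so E(x) = 1; real intervals are t - t_l = x / rate.
    """
    times = []
    t = 0.0
    while True:
        x = rng.gamma(shape, 1.0 / shape)  # E(x) = 1
        t += x / rate                      # unscale by the rate
        if t > duration:
            break
        times.append(t)
    return np.array(times)

spikes = renewal_spike_train(rate=20.0, duration=200.0)
print(len(spikes) / 200.0)  # empirical rate, close to 20
```

With shape = 1 the gamma density is exponential and the process reduces to a Poisson process; larger shape parameters mimic refractoriness, since very short intervals become unlikely.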
I show that, even for a neuron with a constant rate, one generally loses information if one uses the total spike count rather than the precise timing of the spikes to estimate the firing rate. I also derive a simple estimator for the rate, which is asymptotically optimal. Using numerical simulations, I show that, in a situation in which a neuron has one of two rates, binning changes the mutual information between the firing rate and the output, with the mutual information decreasing as the bin width increases. Finally, I show that, if one measures the information in the occurrence of 'words' spanning a varying number of bins, the information rate varies non-trivially with the word length unless the renewal process is Poisson.
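The two-rate binning experiment can be sketched as follows. This is an illustrative setup, not the paper's exact protocol: the rates, window length, trial count, and gamma shape are arbitrary choices, and the mutual information is estimated by the naive plug-in method from the empirical joint distribution of the rate and the binned word. By the data-processing inequality, coarser binning can only reduce the true mutual information.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)

def spike_train(rate, duration, shape=4.0):
    """Constant-rate gamma renewal spike train (rescaled intervals have mean 1)."""
    t, times = 0.0, []
    while True:
        t += rng.gamma(shape, 1.0 / shape) / rate
        if t > duration:
            return np.array(times)
        times.append(t)

def binned_word(times, duration, n_bins):
    """Response 'word': tuple of spike counts in n_bins equal bins."""
    counts, _ = np.histogram(times, bins=n_bins, range=(0.0, duration))
    return tuple(counts)

def mutual_information(rates, duration, n_bins, n_trials=4000):
    """Plug-in estimate (in bits) of I(rate; word), rates equiprobable."""
    joint = Counter()
    for _ in range(n_trials):
        r = rng.choice(rates)
        joint[(r, binned_word(spike_train(r, duration), duration, n_bins))] += 1
    p = {k: v / n_trials for k, v in joint.items()}
    pr, pw = Counter(), Counter()
    for (r, w), q in p.items():
        pr[r] += q
        pw[w] += q
    return sum(q * np.log2(q / (pr[r] * pw[w])) for (r, w), q in p.items())

results = {}
for n_bins in (1, 2, 4):
    results[n_bins] = mutual_information((20.0, 24.0), duration=0.5, n_bins=n_bins)
    print(n_bins, round(results[n_bins], 3))
```

Note that the plug-in estimator is biased upward, and the bias grows with the number of distinct words; careful comparisons across bin widths therefore require bias correction or extrapolation in the number of trials.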