The way in which groups of neurons can represent single values using a population code has been a subject of great interest to theoretical neuroscientists. However, little work has addressed how such a population might encode a probability distribution over those values. Anderson (1994) and Zemel, Dayan, and Pouget (1998) have studied the population encoding of functions, which may in some cases be interpreted as probability distributions. Yet neuronal populations must also be able to represent functions over sensory dimensions in cases where the stimulus is, in fact, extended over that dimension: for example, because of the spatial extent of objects, or because of transparent motion in multiple directions.
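To make the encoding idea concrete, here is a minimal sketch in the spirit of the function-encoding scheme of Zemel, Dayan and Pouget (1998): each neuron's expected firing rate is its tuning curve averaged under the encoded distribution p(x), so the rate vector carries p(x) smoothed by the tuning curves. The Gaussian tuning curves and all parameter values are illustrative assumptions, not the scheme proposed in this paper.

```python
import numpy as np

N = 20                                  # number of neurons (assumed)
x = np.linspace(-10.0, 10.0, 501)       # stimulus dimension (e.g. position)
dx = x[1] - x[0]
centers = np.linspace(-8.0, 8.0, N)     # preferred values, evenly tiled
width = 1.5                             # tuning-curve width (assumed)

# Tuning curves f_i(x): one row per neuron.
f = np.exp(-0.5 * ((x[None, :] - centers[:, None]) / width) ** 2)

def encode(p):
    """Expected rates r_i = integral of f_i(x) p(x) dx for a density p."""
    p = p / (p.sum() * dx)              # normalise to a density over x
    return f @ p * dx

# A near-certain, point-like stimulus at x = 2: a narrow bump of activity.
p_point = np.exp(-0.5 * ((x - 2.0) / 0.1) ** 2)
r_point = encode(p_point)

# An uncertain stimulus (broad distribution around x = 2): the same
# population produces a broader, lower-amplitude profile of activity.
p_broad = np.exp(-0.5 * ((x - 2.0) / 3.0) ** 2)
r_broad = encode(p_broad)
```

Note that in this naive scheme the broad rate profile is ambiguous in exactly the way the next paragraph describes: the same flat, wide activity could have come from an uncertain point stimulus or from a genuinely extended one.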
This leads to a problem: should a broad area of activation in a population code be interpreted as indicating an extended stimulus, or as uncertainty about the stimulus value? What about cases where stimuli are both extended and uncertain? Can the population represent uncertainty about the location of a spatially extended object, or about the directions of transparent motion?
I will suggest a natural scheme by which information about simultaneously extended and uncertain stimuli can be represented in the firing rates of a single population. The encoding of probability distributions over extended functions is efficient, and although decoding is computationally expensive, it is not an operation of biological interest. Instead, I will argue that computations using uncertainty can be carried out efficiently by almost linear transforms, and suggest an algorithm by which the appropriate mapping may be learned.
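The claim that computations on encoded distributions can be carried out by (almost) linear transforms of firing rates can be illustrated with a hedged sketch. Because the rate vector r = F p is linear in the encoded distribution p, a linear operation on p (here a shift p(x) → p(x − s)) corresponds to a linear map W acting on the rates, and W can be fit from example pairs. The tuning curves, the shift operation, and the least-squares fit below are illustrative stand-ins, not the learning algorithm proposed in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-10.0, 10.0, 501)             # stimulus dimension
dx = x[1] - x[0]
centers = np.linspace(-8.0, 8.0, 20)          # preferred values (assumed)
F = np.exp(-0.5 * ((x[None, :] - centers[:, None]) / 1.5) ** 2)

def encode(p):
    """Rates r_i = integral of f_i(x) p(x) dx, with p normalised."""
    p = p / (p.sum() * dx)
    return F @ p * dx

def gauss(mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2)

s = 2.0  # the computation to learn: shift the encoded distribution by s

# Training pairs: rates for random Gaussian distributions and rates for
# their shifted counterparts.
R_in, R_out = [], []
for _ in range(200):
    mu, sig = rng.uniform(-5, 5), rng.uniform(0.5, 3.0)
    R_in.append(encode(gauss(mu, sig)))
    R_out.append(encode(gauss(mu + s, sig)))
R_in, R_out = np.array(R_in), np.array(R_out)

# Least-squares fit of the linear rate-to-rate map (a stand-in for a
# biologically learned mapping).
W, *_ = np.linalg.lstsq(R_in, R_out, rcond=None)

# Held-out test: transforming the rates should match the rates of the
# shifted distribution.
r_pred = encode(gauss(1.0, 2.0)) @ W
r_true = encode(gauss(1.0 + s, 2.0))
err = np.linalg.norm(r_pred - r_true) / np.linalg.norm(r_true)
```

The point of the sketch is that the operation never requires decoding p(x) from the rates: the transform acts directly on the rate vector.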