Nonlinearities and contextual influences in the auditory cortex modeled with multilinear spectrotemporal methods
Gatsby Computational Neuroscience Unit, UCL, UK
The relationship between a sound and its neural representation in the auditory cortex remains elusive. Simple measures such as the frequency response area or tuning curves provide little insight into the function of the auditory cortex in complex sound environments. Spectrotemporal receptive field (STRF) models, despite their descriptive potential, perform poorly when used to predict auditory cortical responses, showing that nonlinear features of cortical response functions, which are not captured by STRFs, are functionally important. We introduce a new approach to the description of auditory cortical responses, using multilinear modeling methods. These descriptions simultaneously account for several nonlinearities in the stimulus-response function, including adaptation, spectral interactions, and nonlinear sensitivity to sound level. The models reveal multiple inseparabilities in cortical processing of time lag, frequency, and sound level, and provide functional mechanisms by which auditory cortical neurons are sensitive to stimulus context. By explicitly modeling these contextual influences, the models are able to predict auditory cortical responses much more accurately than STRF models do. In addition, they can explain some forms of stimulus dependence in STRFs that were previously poorly understood.
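To illustrate the multilinear idea in its simplest form, the sketch below fits a rank-1 separable STRF to synthetic data by alternating least squares: the response is modeled as an outer product of a temporal kernel and a spectral kernel applied to the lagged stimulus spectrogram, so the model is linear in each factor when the other is held fixed. This is a minimal toy version of the general approach, not the paper's actual models (which include context and level terms); all array sizes, kernels, and variable names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: time bins, frequency channels, time lags
T, F, L = 2000, 16, 10
S = rng.standard_normal((T, F))          # synthetic stimulus spectrogram

# Lagged stimulus tensor X[t, lag, f] = S[t - lag, f]
X = np.zeros((T, L, F))
for lag in range(L):
    X[lag:, lag, :] = S[:T - lag, :]

# Ground-truth separable (rank-1) STRF: temporal kernel (outer) spectral kernel
w_t_true = np.exp(-np.arange(L) / 3.0)                       # decaying in lag
w_f_true = np.exp(-0.5 * (np.arange(F) - 8.0) ** 2 / 4.0)    # tuned in frequency
r = np.einsum('tlf,l,f->t', X, w_t_true, w_f_true)
r += 0.1 * rng.standard_normal(T)                            # observation noise

# Alternating least squares: with one factor fixed, the model is an
# ordinary linear regression in the other, so we alternate two regressions.
w_t = rng.standard_normal(L)
w_f = rng.standard_normal(F)
for _ in range(20):
    A = np.einsum('tlf,f->tl', X, w_f)   # design matrix for the temporal kernel
    w_t = np.linalg.lstsq(A, r, rcond=None)[0]
    B = np.einsum('tlf,l->tf', X, w_t)   # design matrix for the spectral kernel
    w_f = np.linalg.lstsq(B, r, rcond=None)[0]

pred = np.einsum('tlf,l,f->t', X, w_t, w_f)
corr = np.corrcoef(pred, r)[0, 1]
```

The factors are only identified up to a shared scale, which is why the fit is assessed by correlation rather than by comparing the kernels directly; the paper's richer multilinear models (e.g. with level or context dimensions) extend this same alternating-regression scheme to more than two factors.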