Fitting meaningful models to the stimulus-response functions of neurons is often hampered by two competing demands: compact models may lack the flexibility to capture the nonlinear dynamics of neural responses, while elaborate models can be hard to estimate, both for lack of good estimation algorithms and because of their large numbers of parameters. Here we describe a class of nonlinear neural encoding models based on multilinear (tensor) mathematics. These models share many of the conveniences of linear models -- such as robust estimation algorithms and small numbers of parameters -- yet are able to capture nonlinear effects such as short-term stimulus-specific adaptation. They achieve this through an interpretable multiplicative factorization in an extended stimulus space. The effectiveness of the methods is illustrated on data from primary auditory cortex.
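The estimation convenience of multilinear models can be sketched with a minimal hypothetical example: a rank-1 (fully separable) model in which the response is predicted by the product of a temporal weight vector and a spectral weight vector applied to a lagged stimulus tensor. Because such a model is linear in each factor when the other is held fixed, it can be fit by alternating least squares. All dimensions, variable names, and synthetic data below are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: T time bins, L time lags, F frequency channels.
T, L, F = 500, 8, 12

# Synthetic spectrogram stimulus; lagged tensor X[t, l, f] = stim[t - l, f].
stim = rng.standard_normal((T + L, F))
X = np.stack([stim[L - l : T + L - l, :] for l in range(L)], axis=1)  # (T, L, F)

# Ground-truth separable weights: temporal profile a, spectral profile b.
a_true = np.exp(-np.arange(L) / 2.0)
b_true = np.sin(np.linspace(0.0, np.pi, F))
r = np.einsum('tlf,l,f->t', X, a_true, b_true)  # noiseless synthetic response

# Alternating least squares: with one factor fixed, the model is linear
# in the other, so each subproblem is an ordinary least-squares solve.
a = rng.standard_normal(L)
b = rng.standard_normal(F)
for _ in range(20):
    Mb = np.einsum('tlf,f->tl', X, b)            # design matrix given b
    a = np.linalg.lstsq(Mb, r, rcond=None)[0]
    Ma = np.einsum('tlf,l->tf', X, a)            # design matrix given a
    b = np.linalg.lstsq(Ma, r, rcond=None)[0]

pred = np.einsum('tlf,l,f->t', X, a, b)
rel_err = np.linalg.norm(pred - r) / np.linalg.norm(r)
```

On noiseless synthetic data the alternating solves converge quickly, and the recovered outer product `np.outer(a, b)` matches the true separable kernel up to an arbitrary scaling split between the two factors (the usual multilinear identifiability caveat).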