Linear Heteroencoders
Sam Roweis, Gatsby Computational Neuroscience Unit
Carlos Brody, Computation and Neural Systems, California Institute of Technology
GCNU TR 1999-002 [September 1999]
Abstract
This note gives a closed-form expression for the linear transform computed by an
optimally trained linear heteroencoder network of arbitrary topology trained to
minimize squared error. The transform can be thought of as a restricted-rank
version of the basic linear least-squares regression (discrete Wiener filter)
between input and output. The rank restriction is set by the 'bottleneck' size of
the network, i.e. the minimum number of hidden units in any layer. A special case
of this expression is the well-known result that linear autoencoders with a
bottleneck of size r perform a transform equivalent to projecting onto the
subspace spanned by the first r principal components of the data. This result
eliminates the need to explicitly train linear heteroencoder networks.
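The closed-form transform described above can be sketched numerically. The following is a minimal numpy illustration (not the report's own code), assuming zero-mean data and full-rank inputs: the unrestricted least-squares map is W = Y Xᵀ (X Xᵀ)⁻¹, and the rank-r restriction projects its fitted outputs onto their top r left singular directions. In the autoencoder special case (Y = X), this reduces to projection onto the first r principal components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: inputs X (d_x x n) and targets Y (d_y x n), made zero-mean.
n, d_x, d_y, r = 200, 8, 6, 3
X = rng.standard_normal((d_x, n))
Y = rng.standard_normal((d_y, d_x)) @ X + 0.1 * rng.standard_normal((d_y, n))
X -= X.mean(axis=1, keepdims=True)
Y -= Y.mean(axis=1, keepdims=True)

# Unrestricted least-squares map (discrete Wiener filter): W_ols = Y X^T (X X^T)^{-1}.
W_ols = Y @ X.T @ np.linalg.inv(X @ X.T)

# Rank-r restriction: project the fitted outputs onto their top-r left
# singular vectors. W_r is the transform a bottleneck-r network computes.
U, _, _ = np.linalg.svd(W_ols @ X, full_matrices=False)
W_r = U[:, :r] @ U[:, :r].T @ W_ols

# Autoencoder special case (Y = X): the same construction yields the
# projection onto the subspace of the first r principal components of X.
Ux, _, _ = np.linalg.svd(X, full_matrices=False)
W_pca = Ux[:, :r] @ Ux[:, :r].T                   # PCA projection
W_ols_auto = X @ X.T @ np.linalg.inv(X @ X.T)     # identity for full-rank X
U2, _, _ = np.linalg.svd(W_ols_auto @ X, full_matrices=False)
W_r_auto = U2[:, :r] @ U2[:, :r].T @ W_ols_auto
print(np.allclose(W_r_auto, W_pca))               # True
```

No gradient training is needed: the SVD-based construction gives the same transform an optimally trained network would compute.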