Paper ID: 2311.13580
$\sigma$-PCA: a unified neural model for linear and nonlinear principal component analysis
Fahdi Kanavati, Lucy Katsnith, Masayuki Tsuneki
Linear principal component analysis (PCA), nonlinear PCA, and linear independent component analysis (ICA) -- these are three methods with single-layer autoencoder formulations for learning special linear transformations from data. Linear PCA learns orthogonal transformations (rotations) that orient axes to maximise variance, but it suffers from a subspace rotational indeterminacy: it fails to find a unique rotation for axes that share the same variance. Both nonlinear PCA and linear ICA reduce the subspace indeterminacy from rotational to permutational by maximising statistical independence under the assumption of unit variance. The main difference between them is that nonlinear PCA learns only rotations, while linear ICA learns not just rotations but any linear transformation with unit variance. The relationship between all three can be understood through the singular value decomposition of the linear ICA transformation into a sequence of rotation, scale, rotation: linear PCA learns the first rotation, nonlinear PCA learns the second, and the scale is simply the inverse of the standard deviations. The problem is that, in contrast to linear PCA, conventional nonlinear PCA cannot be applied directly to the data to learn the first rotation, the first being special in that it reduces dimensionality and orders axes by variance. In this paper, as a solution to this problem, we propose $\sigma$-PCA: a unified neural model for linear and nonlinear PCA as single-layer autoencoders. Essentially, we propose a modification that allows nonlinear PCA to learn not just the second but also the first rotation -- by maximising both variance and statistical independence. Like linear PCA, nonlinear PCA can then learn a semi-orthogonal transformation that reduces dimensionality and orders by variance, but, unlike linear PCA, it can also eliminate the subspace rotational indeterminacy.
Submitted: Nov 22, 2023
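
To make the rotation, scale, rotation decomposition described in the abstract concrete, here is a minimal NumPy sketch (not the paper's $\sigma$-PCA model): it recovers the first rotation and the scale via linear PCA whitening, and notes where the second rotation, fixed by nonlinear PCA or linear ICA through independence maximisation, would act. The variable names and the Laplace source distribution are illustrative assumptions.

```python
# Minimal sketch of the rotation-scale-rotation view, assuming synthetic data.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: independent non-Gaussian sources mixed linearly.
n, d = 10000, 3
sources = rng.laplace(size=(n, d))      # independent, non-Gaussian sources
mixing = rng.normal(size=(d, d))        # arbitrary linear mixing
x = sources @ mixing.T                  # observed data

# Linear PCA: eigendecomposition of the covariance gives the first rotation
# (principal axes) and the variances along them.
cov = np.cov(x, rowvar=False)
variances, first_rotation = np.linalg.eigh(cov)
order = np.argsort(variances)[::-1]     # order axes by variance
variances, first_rotation = variances[order], first_rotation[:, order]

# The scale is the inverse of the standard deviations; with the first
# rotation it whitens the data (uncorrelated axes, unit variance).
scale = 1.0 / np.sqrt(variances)
z = (x @ first_rotation) * scale

# Whitening leaves a rotational indeterminacy: any further rotation of z
# keeps unit variance. Nonlinear PCA / linear ICA resolve it by learning
# the second rotation through statistical independence (not shown here).
print(np.round(np.cov(z, rowvar=False), 2))  # approximately the identity
```

The sketch only covers the linear PCA part of the story; the paper's contribution is to let a single nonlinear PCA autoencoder learn the first rotation (and hence the dimensionality reduction and variance ordering) as well as the second.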