Paper ID: 2307.12937

Unsupervised Learning of Invariance Transformations

Aleksandar Vučković, Benedikt Stock, Alexander V. Hopp, Mathias Winkel, Helmut Linde

The need for large amounts of training data is one of the biggest challenges of modern machine learning. Compared to the brain, current artificial algorithms are much less capable of learning invariance transformations and of employing them to extrapolate knowledge from small sample sets. It has recently been proposed that the brain might encode perceptual invariances as approximate graph symmetries in the network of synaptic connections, and that such symmetries may arise naturally through a biologically plausible process of unsupervised Hebbian learning. In the present paper, we illustrate this proposal with numerical examples, showing that invariance transformations can indeed be recovered from the structure of the recurrent synaptic connections that form within a layer of feature-detector neurons via a simple Hebbian learning rule. To recover the invariance transformations numerically from the resulting recurrent network, we develop a general algorithmic framework for finding approximate graph automorphisms and discuss how it can be applied to weighted graphs in general.
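
As a rough illustration of the kind of Hebbian mechanism the abstract refers to (not the paper's actual model; the layer size, learning rate, and random stimulus stand-in below are all assumptions), here is a minimal sketch in Python/NumPy of recurrent weights accumulating co-activation statistics within a feature-detector layer:

```python
import numpy as np

# Minimal Hebbian sketch (illustrative only): recurrent weights between
# feature detectors grow in proportion to their co-activation.
rng = np.random.default_rng(0)

n_neurons = 32          # size of the feature-detector layer (assumed)
eta = 0.01              # learning rate (assumed)
W = np.zeros((n_neurons, n_neurons))  # recurrent synaptic weights

for _ in range(10_000):
    # Stand-in for the layer's responses to one stimulus; in the paper's
    # setting these would come from feature detectors driven by real inputs.
    x = rng.random(n_neurons)
    # Hebbian update: strengthen connections between co-active neurons.
    W += eta * np.outer(x, x)
    np.fill_diagonal(W, 0.0)  # no self-connections

# Rescale so weights stay bounded. W now encodes the co-activation
# structure of the detectors; approximate symmetries of this weighted
# graph reflect invariances of the stimulus distribution.
W /= W.max()
```

The design point is that the rule uses only locally available information (the pre- and postsynaptic activities), which is what makes it biologically plausible.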
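Likewise, one naive way to make "approximate graph automorphism" concrete is to score a node permutation by the Frobenius-norm mismatch between the weight matrix and its relabeled copy. The random-restart search below is only a toy baseline under that assumed objective, not the algorithmic framework developed in the paper:

```python
import numpy as np

def automorphism_defect(W: np.ndarray, perm: np.ndarray) -> float:
    """Mismatch between W and its relabeling under perm.

    A permutation with defect 0 is an exact automorphism of the weighted
    graph W (W[perm[i], perm[j]] == W[i, j] for all i, j); small defects
    indicate approximate automorphisms.
    """
    return float(np.linalg.norm(W - W[np.ix_(perm, perm)]))

# Toy demonstration: random restarts keep the best nontrivial permutation.
rng = np.random.default_rng(1)
n = 8
W = rng.random((n, n))
W = (W + W.T) / 2  # symmetric weighted graph

best_perm, best_defect = None, np.inf
for _ in range(5_000):
    perm = rng.permutation(n)
    d = automorphism_defect(W, perm)
    if d < best_defect and not np.array_equal(perm, np.arange(n)):
        best_perm, best_defect = perm, d

print(best_perm, best_defect)
```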

Submitted: Jul 24, 2023