Equivariant Self-Supervised Learning

Equivariant self-supervised learning aims to train neural networks whose outputs transform predictably under transformations of the input (equivariance), using only unlabeled data. Current research focuses on self-supervised objectives that enforce this property, often combined with architectures such as capsule networks or group-equivariant convolutional neural networks. Because it exploits structural information already present in the data and reduces reliance on expensive labeled datasets, this approach shows promise on downstream tasks that require geometric understanding, such as 3D object detection and medical image analysis.
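
To make the objective concrete, the sketch below (assuming PyTorch and torchvision; the `EquivariantSSL` class, `encoder`, `predictor`, and `feat_dim` names are illustrative, not taken from any specific paper) shows one common way to enforce equivariance without labels: a predictor, conditioned on the known transformation parameter (here a rotation angle), must map the embedding of an image to the embedding of its transformed version.

```python
# Minimal sketch of an equivariance-enforcing self-supervised loss (assumed setup).
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF


class EquivariantSSL(nn.Module):
    def __init__(self, encoder: nn.Module, feat_dim: int = 128):
        super().__init__()
        self.encoder = encoder  # any backbone mapping images to feat_dim vectors (assumed)
        # Predictor conditioned on the transformation parameter (rotation angle here).
        self.predictor = nn.Sequential(
            nn.Linear(feat_dim + 1, 256), nn.ReLU(),
            nn.Linear(256, feat_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sample a random rotation angle for each image in the batch.
        angles = torch.rand(x.size(0), device=x.device) * 360.0
        x_rot = torch.stack([TF.rotate(img, float(a)) for img, a in zip(x, angles)])

        z = self.encoder(x)          # embedding of the original view
        z_rot = self.encoder(x_rot)  # embedding of the transformed view

        # Predict how the embedding should change under the known transformation.
        z_pred = self.predictor(torch.cat([z, angles.unsqueeze(1) / 360.0], dim=1))

        # Equivariance loss: the predicted embedding should match the embedding
        # of the transformed input (target detached, as is common in SSL).
        return nn.functional.mse_loss(z_pred, z_rot.detach())
```

Conditioning the predictor on the transformation parameter is what distinguishes this from invariance-based objectives: instead of forcing the two embeddings to be identical, the representation is trained to change in a predictable, transformation-dependent way.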

Papers