Equivariant Self-Supervised Learning
Equivariant self-supervised learning aims to train neural networks whose outputs transform predictably under input transformations (equivariance), using only unlabeled data. Current research focuses on self-supervised objectives that enforce this equivariance, often in combination with architectures such as capsule networks or group-equivariant convolutional neural networks. By exploiting structural information inherent in the data and reducing reliance on expensive labeled datasets, this approach shows promise on downstream tasks that require geometric understanding, such as 3D object detection and medical image analysis.
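As a minimal illustration of the core idea (a hypothetical toy setup, not taken from any specific paper), an equivariant self-supervised objective can penalize the mismatch between encoding a transformed input and transforming the encoding, with no labels involved:

```python
import numpy as np

def rot90(x):
    # The input transformation: a 90-degree rotation of a 2D "image".
    return np.rot90(x)

def encoder(x):
    # Toy encoder: 2x2 average pooling. Averaging within each block
    # commutes with 90-degree rotation, so this map is rotation-equivariant.
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def equivariance_loss(x):
    # Self-supervised objective: encode-then-transform should match
    # transform-then-encode.
    return float(np.mean((encoder(rot90(x)) - rot90(encoder(x))) ** 2))

x = np.arange(16.0).reshape(4, 4)
print(equivariance_loss(x))  # 0.0, since this toy encoder is exactly equivariant
```

In practice the encoder is a learned network and this mismatch (or a related prediction task over the applied transformation) is minimized during training, pushing the representation toward equivariance rather than building it in by construction.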