Paper ID: 2301.05865
Gated Self-supervised Learning For Improving Supervised Learning
Erland Hilman Fuadi, Aristo Renaldo Ruslim, Putu Wahyu Kusuma Wardhana, Novanto Yudistira
In past research on self-supervised learning for image classification, rotation has commonly been used as the augmentation. However, relying solely on rotation as a self-supervised transformation can limit the model's ability to learn rich features from the data. In this paper, we propose a novel approach to self-supervised learning for image classification that combines several localizable augmentations with a gating method. In addition to rotation, our approach uses flip and channel-shuffle augmentations, exposing the model to a richer set of pretext signals. Furthermore, a gated mixture network weighs the contribution of each self-supervised task to the loss function, allowing the model to focus on the transformations most relevant to classification.
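The gating idea in the abstract can be sketched as follows. This is a minimal PyTorch illustration, not the authors' released code: the `GatedSSL` class, the head sizes (4 rotations, 2 flips, 6 channel orders), and the pretext-label conventions are all assumptions made for illustration.

```python
import itertools
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedSSL(nn.Module):
    """Hypothetical sketch: shared backbone, one head per pretext task,
    and a gating network that softmax-weights the per-task SSL losses."""

    def __init__(self, feat_dim=64, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.classifier = nn.Linear(feat_dim, num_classes)
        # One head per pretext task: predict which transform was applied.
        self.heads = nn.ModuleDict({
            "rotation": nn.Linear(feat_dim, 4),   # 0/90/180/270 degrees
            "flip": nn.Linear(feat_dim, 2),       # flipped or not
            "shuffle": nn.Linear(feat_dim, 6),    # 3! RGB channel orders
        })
        # Gating network: one softmax weight per pretext loss.
        self.gate = nn.Linear(feat_dim, len(self.heads))

    def forward(self, x, ssl_batches):
        feats = self.backbone(x)
        logits = self.classifier(feats)
        # Per-task pretext losses on their transformed inputs.
        task_losses = torch.stack([
            F.cross_entropy(self.heads[name](self.backbone(xt)), yt)
            for name, (xt, yt) in zip(self.heads, ssl_batches)])
        # Gate weights come from clean-image features, averaged over the batch.
        w = F.softmax(self.gate(feats), dim=-1).mean(0)
        return logits, (w * task_losses).sum()

# Toy usage with random data (all shapes are illustrative).
model = GatedSSL()
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
# Rotation pretext: rotate each image by k*90 degrees, predict k.
k = torch.randint(0, 4, (8,))
x_rot = torch.stack([torch.rot90(img, int(r), dims=(1, 2))
                     for img, r in zip(x, k)])
# Flip pretext: horizontally flip half the batch, predict the flip label.
f = torch.randint(0, 2, (8,))
x_flip = torch.where(f.view(-1, 1, 1, 1).bool(), x.flip(-1), x)
# Channel-shuffle pretext: permute RGB channels, predict the permutation id.
perms = list(itertools.permutations(range(3)))
p = torch.randint(0, 6, (8,))
x_shuf = torch.stack([img[list(perms[int(i)])] for img, i in zip(x, p)])

logits, ssl_loss = model(x, [(x_rot, k), (x_flip, f), (x_shuf, p)])
loss = F.cross_entropy(logits, y) + ssl_loss  # supervised + gated SSL
loss.backward()
```

Under this reading, the gate lets gradient descent downweight pretext tasks whose signals conflict with the supervised objective, rather than fixing equal loss weights by hand.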
Submitted: Jan 14, 2023