Feature Decorrelation
In machine learning, feature decorrelation aims to reduce redundancy and statistical dependence between the features of learned representations, improving model performance and generalization. Current research applies decorrelation techniques within a range of architectures, including deep neural networks and self-supervised learning frameworks, typically by adding regularizers or loss terms that encourage feature independence. This improves model robustness, efficiency, and interpretability across diverse applications such as image compression, time series forecasting, and reinforcement learning, yielding higher accuracy and lower computational cost. The impact extends to mitigating dataset bias and improving generalization to unseen data.
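To make the idea of a decorrelation regularizer concrete, below is a minimal sketch of one common variant: penalizing the off-diagonal entries of the empirical feature covariance matrix, in the spirit of redundancy-reduction terms used in self-supervised methods such as Barlow Twins. This is an illustrative PyTorch implementation, not the method of any specific paper listed here; the function name and the weighting hyperparameter are assumptions for the example.

```python
import torch

def decorrelation_loss(z: torch.Tensor) -> torch.Tensor:
    """Penalize off-diagonal entries of the feature covariance matrix.

    z: (batch_size, feature_dim) batch of representations.
    Returns a scalar that is zero when features are pairwise uncorrelated.
    """
    # Center each feature so the Gram matrix below estimates covariance.
    z = z - z.mean(dim=0, keepdim=True)
    n, d = z.shape
    cov = (z.T @ z) / (n - 1)                      # (d, d) empirical covariance
    off_diag = cov - torch.diag(torch.diag(cov))   # zero out the diagonal (variances)
    return (off_diag ** 2).sum() / d               # scaled squared off-diagonal penalty
```

In training, such a term is typically added to the task objective with a small weight, e.g. `loss = task_loss + lam * decorrelation_loss(features)`, where `lam` trades off task accuracy against feature independence.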
Papers
Exploring the Equivalence of Siamese Self-Supervised Learning via A Unified Gradient Framework
Chenxin Tao, Honghui Wang, Xizhou Zhu, Jiahua Dong, Shiji Song, Gao Huang, Jifeng Dai
Mimicking the Oracle: An Initial Phase Decorrelation Approach for Class Incremental Learning
Yujun Shi, Kuangqi Zhou, Jian Liang, Zihang Jiang, Jiashi Feng, Philip Torr, Song Bai, Vincent Y. F. Tan