Feature Redundancy

Feature redundancy, the presence of duplicated or unneeded information within data or models, is a central challenge in machine learning; reducing it improves both efficiency and performance. Current research focuses on identifying and mitigating redundancy in deep learning architectures such as convolutional neural networks (CNNs), transformers, and implicit neural representations (INRs), often through filter pruning, attention mechanism modification, and feature selection algorithms. Addressing feature redundancy yields smaller, faster models with improved generalization, reducing computational resource usage and improving the interpretability of complex models across diverse applications such as image processing, natural language processing, and bioinformatics.
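As a concrete illustration of the feature-selection approach mentioned above, one common baseline drops any feature that is highly correlated with a feature already kept. The sketch below is a minimal, generic version of this idea (the function name and the 0.95 threshold are illustrative choices, not taken from any specific paper):

```python
import numpy as np

def drop_redundant_features(X, threshold=0.95):
    """Greedily keep columns of X, dropping any whose absolute Pearson
    correlation with an already-kept column exceeds `threshold`.
    Returns the indices of the kept columns."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    kept = []
    for j in range(X.shape[1]):
        # keep column j only if it is not near-duplicate of a kept column
        if all(corr[j, k] <= threshold for k in kept):
            kept.append(j)
    return kept

# Synthetic example: column 3 is a noisy copy of column 0.
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 3))
X = np.column_stack([base, base[:, 0] + 1e-6 * rng.normal(size=200)])
print(drop_redundant_features(X))  # column 3 is flagged as redundant
```

Greedy correlation filtering only captures pairwise linear redundancy; the pruning and attention-modification methods surveyed here target redundancy inside learned model parameters rather than in the input features.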

Papers