Feature Redundancy
Feature redundancy, the presence of duplicated or unneeded information in data or models, is a central challenge in machine learning; reducing it improves both efficiency and performance. Current research focuses on identifying and mitigating redundancy across deep learning architectures, including convolutional neural networks (CNNs), transformers, and implicit neural representations (INRs), often through techniques such as filter pruning, modified attention mechanisms, and feature selection algorithms. Addressing feature redundancy yields smaller, faster models with better generalization, reducing computational resource usage and improving the interpretability of complex models across applications such as image processing, natural language processing, and bioinformatics.
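As a concrete illustration of filter pruning, the sketch below ranks a convolutional layer's filters by L1 norm, a common proxy for redundancy, and copies only the strongest filters into a smaller layer. It assumes PyTorch; the layer sizes and the 75% keep ratio are illustrative choices, not drawn from any of the papers listed here.

```python
# Minimal filter-pruning sketch (assumes PyTorch). Layer sizes and the
# keep ratio are illustrative, not taken from any specific paper.
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3)

# L1 norm of each output filter: shape (32,). Small-norm filters are
# commonly treated as redundant candidates for removal.
l1_norms = conv.weight.detach().abs().sum(dim=(1, 2, 3))

# Keep the 75% of filters with the largest norms.
keep = int(0.75 * conv.out_channels)
keep_idx = torch.argsort(l1_norms, descending=True)[:keep]

# Build a smaller layer and copy over the surviving filters.
pruned = nn.Conv2d(conv.in_channels, keep, kernel_size=3)
pruned.weight.data = conv.weight.data[keep_idx].clone()
pruned.bias.data = conv.bias.data[keep_idx].clone()
```

In practice a pruned network is usually fine-tuned afterward to recover any accuracy lost to the removed filters.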
Papers
Change Is the Only Constant: Dynamic LLM Slicing based on Layer Redundancy
Razvan-Gabriel Dumitru, Paul-Ioan Clotan, Vikas Yadav, Darius Peteleaza, Mihai Surdeanu
Kernel Orthogonality does not necessarily imply a Decrease in Feature Map Redundancy in CNNs: Convolutional Similarity Minimization
Zakariae Belmekki, Jun Li, Patrick Reuter, David Antonio Gómez Jáuregui, Karl Jenkins
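The second paper's title argues that kernel orthogonality and feature-map redundancy are distinct quantities. The sketch below (assuming PyTorch) computes a generic version of each side by side: off-diagonal mass of the filters' Gram matrix for kernel orthogonality, and mean pairwise cosine similarity between channel activations for feature-map redundancy. Both are standard proxies, not necessarily the paper's exact metrics.

```python
# Hedged sketch: kernel orthogonality vs. feature-map redundancy as two
# separate measurements. Neither is necessarily the paper's own metric.
import torch
import torch.nn as nn
import torch.nn.functional as F

conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
x = torch.randn(8, 3, 32, 32)  # dummy input batch

# Kernel (non-)orthogonality: off-diagonal mass of the Gram matrix of the
# flattened, unit-normalized filters. Zero would mean perfectly orthogonal.
w = F.normalize(conv.weight.reshape(conv.out_channels, -1), dim=1)
gram = w @ w.T
kernel_offdiag = (gram - torch.eye(conv.out_channels)).abs().mean()

# Feature-map redundancy: mean |cosine similarity| between the activation
# maps of distinct output channels, averaged over the batch.
fmaps = conv(x)                                   # (N, C, H, W)
n, c = fmaps.shape[:2]
flat = F.normalize(fmaps.reshape(n, c, -1), dim=-1)
sim = torch.bmm(flat, flat.transpose(1, 2))       # (N, C, C)
fmap_redundancy = (sim - torch.eye(c)).abs().sum() / (n * c * (c - 1))

print(f"kernel off-diagonality: {kernel_offdiag.item():.3f}")
print(f"feature-map redundancy: {fmap_redundancy.item():.3f}")
```

Comparing the two numbers on real activations makes the paper's point observable: driving the first toward zero does not automatically drive the second down.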