Feature Sparsity
Feature sparsity, the presence of many irrelevant or redundant features in data, is a central challenge across machine learning applications because it hinders model efficiency and interpretability. Current research focuses on feature selection and sparsity-promoting regularization, employing techniques such as divide-and-learn frameworks, graph regularization, and novel sparsity-inducing norms within model families including neural networks and matrix factorization. These advances aim to improve predictive accuracy, reduce computational cost, and make predictions easier to explain, with applications ranging from software performance prediction to image recognition and anomaly detection.
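As a minimal illustration of how a sparsity-inducing norm performs feature selection, the sketch below fits an L1-penalized linear model (Lasso) to synthetic data: the L1 penalty drives the coefficients of irrelevant features exactly to zero, so selection falls out of model fitting. The data-generation setup and the regularization strength `alpha` are assumptions chosen for illustration, not taken from any specific paper on this topic.

```python
# Minimal sketch: L1 (Lasso) regularization as a sparsity-promoting penalty.
# The synthetic data and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_samples, n_features, n_informative = 200, 50, 5

# Only the first few features influence the target; the rest are noise.
X = rng.standard_normal((n_samples, n_features))
true_coef = np.zeros(n_features)
true_coef[:n_informative] = rng.uniform(1.0, 3.0, size=n_informative)
y = X @ true_coef + 0.1 * rng.standard_normal(n_samples)

# The L1 penalty zeroes out coefficients of irrelevant features,
# performing feature selection as a by-product of fitting the model.
model = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(model.coef_)
print(f"Non-zero coefficients: {len(selected)} of {n_features}")
print("Selected feature indices:", selected)
```

With a suitably chosen `alpha`, the fitted model typically retains only the informative features; increasing `alpha` makes the solution sparser at the cost of shrinking the surviving coefficients.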