Feature Selection
Feature selection aims to identify the most relevant subset of features from a larger feature set, improving model performance, interpretability, and efficiency. Current research emphasizes novel selection algorithms, including those based on neural networks (e.g., RelChaNet), genetic algorithms, and large language models (LLMs), often incorporating techniques such as causal inference and uncertainty quantification. These advances are crucial for applications such as medical diagnosis, financial prediction, and recommender systems, where reducing dimensionality and improving model explainability are paramount. The field is also actively exploring new evaluation metrics and addressing challenges such as fairness and privacy.
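To make the core idea concrete, the sketch below shows a generic filter-based selection step with scikit-learn: features are scored by mutual information with the label and only the top-scoring ones are kept before model fitting. It is a minimal illustration of the general technique, not an implementation of any method cited in the papers below; the dataset, the score function, and the choice of k=10 are all illustrative assumptions.

```python
# Illustrative sketch only: generic filter-based feature selection,
# not the method of any paper listed below.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic data: 50 features, only a handful of which are informative.
X, y = make_classification(n_samples=500, n_features=50, n_informative=5,
                           n_redundant=5, random_state=0)

# Score each feature by mutual information with the label; keep the top 10.
selector = SelectKBest(mutual_info_classif, k=10)
X_reduced = selector.fit_transform(X, y)

# Compare a baseline classifier on all features vs. the reduced feature set.
full = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
reduced = cross_val_score(LogisticRegression(max_iter=1000), X_reduced, y, cv=5).mean()
print(f"Accuracy, all 50 features:      {full:.3f}")
print(f"Accuracy, 10 selected features: {reduced:.3f}")
print("Selected feature indices:", np.flatnonzero(selector.get_support()))
```

In practice, the scoring criterion and the number of retained features are the main design choices; the papers below replace this simple univariate filter with neural, evolutionary, or LLM-driven selection strategies.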
Papers
DiabML: AI-assisted diabetes diagnosis method with meta-heuristic-based feature selection
Vahideh Hayyolalam, Öznur Özkasap
FoLDTree: A ULDA-Based Decision Tree Framework for Efficient Oblique Splits and Feature Selection
Siyu Wang
Automatic feature selection and weighting using Differentiable Information Imbalance
Romina Wild, Vittorio Del Tatto, Felix Wodaczek, Bingqing Cheng, Alessandro Laio