Sparse Classification
Sparse classification focuses on building efficient and interpretable classification models by selecting only the most relevant features from high-dimensional data. Current research emphasizes faster algorithms, such as those based on majorization-minimization or splicing iteration, that can handle massive datasets and high-dimensional feature spaces, often within distributed computing frameworks. This work also addresses practical challenges such as catastrophic forgetting in online learning and communication bottlenecks in distributed settings, with the goal of improving both training throughput and predictive accuracy. The resulting sparse models offer computational efficiency, interpretability, and reduced storage requirements, benefits that are especially valuable in fields like bioinformatics and precision medicine where high-dimensional data is prevalent.
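As a minimal sketch of the underlying feature-selection idea (not of the specific majorization-minimization or splicing algorithms mentioned above), the following NumPy example fits an L1-penalized logistic regression by proximal gradient descent: the soft-thresholding step drives most coefficients exactly to zero, so the surviving nonzero entries act as the selected features. The function names, hyperparameters, and simulated data here are illustrative assumptions, not taken from any particular paper.

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding: the proximal operator of the L1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_logistic_regression(X, y, lam=0.1, step=None, n_iter=500):
    """Illustrative L1-penalized logistic regression via proximal gradient descent.

    X : (n, p) feature matrix, y : (n,) labels in {0, 1}.
    Returns a weight vector in which many entries are exactly zero,
    i.e. the corresponding features are dropped from the model.
    """
    n, p = X.shape
    w = np.zeros(p)
    if step is None:
        # The gradient of the mean logistic loss is Lipschitz with constant
        # at most ||X||_2^2 / (4n), so 4n / ||X||_2^2 is a safe step size.
        step = 4.0 * n / (np.linalg.norm(X, 2) ** 2)
    for _ in range(n_iter):
        prob = 1.0 / (1.0 + np.exp(-(X @ w)))      # predicted probabilities
        grad = X.T @ (prob - y) / n                # gradient of mean logistic loss
        w = soft_threshold(w - step * grad, step * lam)  # gradient step + L1 prox
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p, k = 200, 1000, 5                         # p >> n; only k features are informative
    X = rng.standard_normal((n, p))
    w_true = np.zeros(p)
    w_true[:k] = 2.0
    y = (1.0 / (1.0 + np.exp(-X @ w_true)) > rng.random(n)).astype(float)
    w_hat = sparse_logistic_regression(X, y, lam=0.05)
    print("nonzero coefficients selected:", np.count_nonzero(w_hat), "of", p)
```

The sparsity level is controlled by the penalty weight `lam`: larger values zero out more coefficients, trading some accuracy for a smaller, more interpretable, and cheaper-to-store model, which is the core trade-off the methods above aim to resolve more efficiently at scale.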