Sparse Optimization

Sparse optimization seeks solutions with as few non-zero elements as possible, which improves both efficiency and interpretability in high-dimensional data analysis. Current research emphasizes robust and efficient algorithms, such as iterative hard thresholding and iteratively reweighted L1 methods, often integrated with deep learning architectures like autoencoders or combined with structured pruning for neural network compression. These advances support applications including feature selection, model compression, robust regression, and the analysis of complex dynamical systems, yielding more efficient and accurate models across scientific and engineering domains.
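
To make the iterative hard thresholding idea concrete, below is a minimal sketch of the algorithm for sparse linear regression. It assumes a linear measurement model y ≈ Ax with a known sparsity level k; the function name, step-size choice, and demo data are illustrative, not taken from any specific paper in this collection.

```python
import numpy as np

def iterative_hard_thresholding(A, y, k, step=None, n_iters=200):
    """Recover an (approximately) k-sparse x from y ~ A @ x.

    Alternates a gradient step on the least-squares loss with a hard
    thresholding step that keeps only the k largest-magnitude entries.
    """
    m, n = A.shape
    if step is None:
        # Conservative step size based on the spectral norm of A
        step = 1.0 / (np.linalg.norm(A, 2) ** 2)
    x = np.zeros(n)
    for _ in range(n_iters):
        # Gradient step on 0.5 * ||y - A x||^2
        x = x + step * (A.T @ (y - A @ x))
        # Hard thresholding: zero out all but the k largest-magnitude entries
        small = np.argsort(np.abs(x))[:-k]
        x[small] = 0.0
    return x

# Small synthetic demo (illustrative only)
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, size=5, replace=False)] = rng.standard_normal(5)
y = A @ x_true
x_hat = iterative_hard_thresholding(A, y, k=5)
print("support recovered:",
      set(np.flatnonzero(x_hat)) == set(np.flatnonzero(x_true)))
```

Iteratively reweighted L1 methods follow a similar outer loop but replace the hard thresholding step with a weighted L1-regularized subproblem whose weights are updated from the previous iterate.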

Papers