Non-Convex Penalty
Non-convex penalties are increasingly used in optimization problems across scientific fields to encourage sparsity and improve model interpretability. Current research focuses on efficient algorithms, such as smoothing proximal gradient methods and the alternating direction method of multipliers (ADMM), that handle the computational challenges these penalties pose, particularly in high-dimensional and distributed data settings. These advances improve performance in applications such as quantile regression, image segmentation, and causal inference by supporting the identification of key features and robust model estimation. Developing faster and more robust algorithms for non-convex penalized models remains an active area of research.
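To make the algorithmic discussion above concrete, here is a minimal sketch of proximal gradient descent for least squares with the MCP penalty, one common non-convex choice. The penalty choice, the function names (`mcp_prox`, `prox_gradient_mcp`), and the toy data are illustrative assumptions rather than a reference implementation from any particular work.

```python
import numpy as np

def mcp_prox(z, lam, gamma, eta):
    """Elementwise proximal operator of the MCP penalty with step size eta.

    Uses the firm-thresholding rule; assumes gamma > eta so the scalar
    subproblem remains well posed.
    """
    return np.where(
        np.abs(z) <= eta * lam,
        0.0,                                            # small inputs are set to zero
        np.where(
            np.abs(z) <= gamma * lam,
            np.sign(z) * (np.abs(z) - eta * lam) / (1.0 - eta / gamma),
            z,                                          # large inputs are left unshrunk
        ),
    )

def prox_gradient_mcp(X, y, lam=0.1, gamma=3.0, n_iter=500):
    """Proximal gradient descent for
        minimize (1/2n) ||y - X b||^2 + sum_j MCP(b_j; lam, gamma).
    """
    n, p = X.shape
    # Step size from the Lipschitz constant of the smooth least-squares part.
    L = np.linalg.norm(X, ord=2) ** 2 / n
    eta = 1.0 / L
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = -X.T @ (y - X @ b) / n                   # gradient of the loss
        b = mcp_prox(b - eta * grad, lam, gamma, eta)   # gradient step, then prox
    return b

# Toy usage: sparse ground truth, Gaussian design.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
b_true = np.zeros(50)
b_true[:3] = [2.0, -1.5, 1.0]
y = X @ b_true + 0.1 * rng.standard_normal(200)
b_hat = prox_gradient_mcp(X, y)
print(np.nonzero(np.abs(b_hat) > 1e-6)[0])              # indices of selected features
```

The same gradient-step-plus-proximal-map template underlies the smoothing proximal gradient methods mentioned above; ADMM-based approaches instead split the loss and penalty into separate subproblems coupled by a consensus constraint, which is what makes them attractive in distributed data settings.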