Minimax Concave Penalty
The minimax concave penalty (MCP) is a non-convex regularizer used to obtain sparse, nearly unbiased estimates in high-dimensional statistical models, reducing the shrinkage bias that convex penalties such as the lasso impose on large coefficients. Current research focuses on applying MCP within optimization frameworks such as ADMM and proximal gradient methods, often in conjunction with smoothing techniques, to address the challenges posed by its non-convexity, especially in distributed or federated learning settings for quantile regression. This work aims to improve the efficiency and convergence properties of algorithms that employ MCP, yielding more robust and accurate model estimation in applications ranging from large-scale IoT datasets to tensor recovery problems. The resulting gains in accuracy and computational efficiency carry over to signal processing, machine learning, and data analysis more broadly.
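To make the penalty concrete, the sketch below implements the standard MCP form (lam*|t| - t^2/(2*gamma) for |t| <= gamma*lam, flat at gamma*lam^2/2 beyond that), its proximal operator (firm thresholding), and a plain proximal gradient loop for penalized least squares. This is a minimal illustration under assumptions of my own: the function names, the quadratic loss, and the default gamma = 3 are illustrative choices, not taken from the specific works summarized above.

```python
import numpy as np

def mcp_penalty(theta, lam, gamma):
    """Elementwise MCP: lam*|t| - t^2/(2*gamma) for |t| <= gamma*lam,
    then constant at gamma*lam^2/2 (flat tail, hence nearly unbiased)."""
    a = np.abs(theta)
    return np.where(a <= gamma * lam,
                    lam * a - a ** 2 / (2.0 * gamma),
                    0.5 * gamma * lam ** 2)

def mcp_prox(z, lam, gamma, step):
    """Proximal operator of step * MCP (firm thresholding).
    Requires gamma > step so the rescaling below is well defined."""
    soft = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    inner = soft / (1.0 - step / gamma)          # rescaled soft threshold
    return np.where(np.abs(z) <= gamma * lam, inner, z)

def prox_gradient_mcp(X, y, lam, gamma=3.0, n_iter=500):
    """Proximal gradient for 0.5/n * ||y - X b||^2 + MCP(b)."""
    n, p = X.shape
    step = n / (np.linalg.norm(X, 2) ** 2)       # 1 / Lipschitz constant of the smooth part
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n             # gradient of the least-squares term
        b = mcp_prox(b - step * grad, lam, gamma, step)
    return b
```

The parameter gamma controls how quickly the penalty flattens: values near 1 make it more aggressively concave, while gamma around 3 is a common default for standardized designs. The same proximal map can replace soft thresholding inside ADMM subproblems of the kind discussed above.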