Projected Gradient Descent
Projected Gradient Descent (PGD) is an iterative algorithm for constrained optimization: at each step it takes a gradient step and then projects the result back onto the feasible set. Current research applies PGD in diverse contexts, including adversarial training for improved model robustness, calibration of complex models such as finite element simulations, and the generation of adversarial examples for evaluating the security of large language models. Its efficiency and adaptability make PGD a valuable tool across numerous fields, with impact on machine learning security, medical image analysis, and the development of more efficient optimization techniques.
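The step-then-project loop described above can be sketched in a few lines. The following is an illustrative example, not taken from any specific paper: it minimizes a simple quadratic subject to a box constraint, where `project_onto_box` and the step size are choices made for this sketch.

```python
import numpy as np

def project_onto_box(x, lo=0.0, hi=1.0):
    # Euclidean projection onto the box [lo, hi]^n is just clipping.
    return np.clip(x, lo, hi)

def pgd(grad, project, x0, step=0.1, iters=100):
    # Projected gradient descent: a gradient step followed by projection.
    x = x0.copy()
    for _ in range(iters):
        x = project(x - step * grad(x))
    return x

# Example: minimize ||x - c||^2 over the box [0, 1]^2,
# where c = (1.5, -0.5) lies outside the feasible set.
c = np.array([1.5, -0.5])
grad = lambda x: 2.0 * (x - c)
x_star = pgd(grad, project_onto_box, x0=np.zeros(2))
# The constrained minimizer clips c into the box: (1.0, 0.0)
```

The same loop underlies PGD adversarial attacks, where the projection is onto an epsilon-ball around the input rather than a box.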