Lipschitz Regularization
Lipschitz regularization constrains how rapidly a neural network's output can change with its input, with the primary aims of improving robustness and generalization. Current research applies it to reinforcement learning agents, adversarial defenses, and deep classifiers, typically by controlling the Lipschitz constant (an upper bound on the ratio of output change to input change) through methods such as spectral norm regularization or purpose-built loss functions. The approach has shown promise for improving training stability, reducing overfitting, and increasing certified robustness against adversarial attacks, with applications in computer vision, robotics, and machine learning more broadly.
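As a minimal sketch of one of the methods mentioned above, the snippet below illustrates spectral norm regularization in PyTorch: it penalizes the sum of squared spectral norms of the weight matrices, since for 1-Lipschitz activations (e.g. ReLU) the product of per-layer spectral norms upper-bounds the network's Lipschitz constant. The model, the penalty coefficient `lam`, and the flattening of convolution kernels are illustrative assumptions, not a specific paper's method.

```python
import torch
import torch.nn as nn

def spectral_penalty(model):
    # Sum of squared spectral norms (largest singular values) of the
    # weight matrices. Shrinking them tightens the per-layer Lipschitz
    # bounds and thus smooths the learned function.
    penalty = 0.0
    for module in model.modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            # Flattening a conv kernel (out, in, kh, kw) -> (out, in*kh*kw)
            # is a common approximation to its operator norm.
            weight = module.weight.flatten(1)
            penalty = penalty + torch.linalg.matrix_norm(weight, ord=2) ** 2
    return penalty

# Hypothetical training step: add the penalty to the task loss, with a
# small coefficient `lam` controlling the regularization strength.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 1e-3

x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(x), y) + lam * spectral_penalty(model)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

A soft penalty like this trades off smoothness against task loss; when a hard per-layer constraint is wanted instead, PyTorch's built-in `torch.nn.utils.parametrizations.spectral_norm` divides each weight by its estimated spectral norm at every forward pass.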