State of the Art Robustness
Robustness in machine learning, particularly for deep neural networks, concerns maintaining reliable performance under perturbations such as adversarial attacks, noisy data, and variations in training conditions. Current research improves robustness through techniques such as gradient clipping in distributed learning, compositional estimation of Lipschitz constants for tighter certification bounds, and computationally efficient defenses against adversarial patches. These advances are crucial for deploying AI systems in real-world settings where unexpected inputs and data shifts are common, in fields ranging from autonomous driving to medical diagnosis.
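As a concrete illustration of one of these techniques, the sketch below shows global-norm gradient clipping, a common building block in robust and distributed training. This is a minimal NumPy illustration of the general idea, not the specific algorithm of any paper surveyed here; the function name and the `max_norm` parameter are illustrative choices.

```python
import numpy as np

def clip_by_global_norm(grads, max_norm):
    """Rescale a list of gradient arrays so their joint L2 norm
    does not exceed max_norm (illustrative sketch)."""
    # Global L2 norm across all gradient arrays.
    total_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    # Scale down uniformly only when the norm exceeds the threshold;
    # the small epsilon guards against division by zero.
    scale = min(1.0, max_norm / (total_norm + 1e-12))
    return [g * scale for g in grads], total_norm

# Example: two gradient arrays with global norm sqrt(3^2 + 4^2) = 5.
grads = [np.array([3.0, 4.0]), np.array([0.0])]
clipped, norm = clip_by_global_norm(grads, 1.0)
```

Clipping by the global norm (rather than per-array) preserves the direction of the overall update, which is why it is often preferred when bounding the influence of any single worker or sample.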