Adversarial Setting
Adversarial settings in machine learning study how algorithms perform under malicious attacks or in unpredictable environments, with the goal of building robust, reliable systems. Current research focuses on improving model robustness through techniques such as adversarial training (including variants like class-wise calibrated fair adversarial training) and on developing algorithms with strong theoretical guarantees in both stochastic and adversarial regimes (e.g., best-of-both-worlds algorithms for bandits and partial monitoring). This work is crucial for the trustworthiness and safety of machine learning systems deployed in real-world applications, particularly in high-stakes domains such as finance, healthcare, and security.
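To make the adversarial-training idea concrete, here is a minimal sketch (not any specific paper's method) of FGSM-style adversarial training for logistic regression in NumPy: at each step the training inputs are perturbed in the direction that increases the loss, and the model is updated on those perturbed examples. The toy dataset, step sizes, and perturbation budget `eps` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian blobs in 2D, labels in {0, 1} (illustrative only).
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

w = np.zeros(2)
b = 0.0
eps, lr = 0.1, 0.5  # assumed perturbation budget and learning rate


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


for _ in range(200):
    # FGSM step: for logistic regression, the gradient of the cross-entropy
    # loss w.r.t. the input x is (p - y) * w, so the worst-case l_inf
    # perturbation of size eps is eps * sign((p - y) * w).
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]
    X_adv = X + eps * np.sign(grad_x)

    # Standard gradient step, but taken on the adversarial examples.
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * (X_adv.T @ (p_adv - y)) / len(y)
    b -= lr * np.mean(p_adv - y)

acc = np.mean((sigmoid(X @ w + b) > 0.5) == (y == 1))
print(f"clean accuracy after adversarial training: {acc:.2f}")
```

Training on worst-case perturbations rather than clean inputs is the core robustness trade-off this line of research studies: a small hit to clean accuracy in exchange for stability under attack.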