Adversarial Setting

Adversarial settings in machine learning study how algorithms perform under malicious attacks or adversarially chosen inputs, with the goal of building robust and reliable systems. Current research focuses on improving model robustness through techniques such as adversarial training (including variants like class-wise calibrated fair adversarial training) and on designing algorithms with strong theoretical guarantees in both stochastic and adversarial regimes (e.g., best-of-both-worlds algorithms for bandits and partial monitoring). This work is crucial for the trustworthiness and safety of machine learning systems deployed in real-world applications, particularly in high-stakes domains such as finance, healthcare, and security.
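To make the adversarial-training idea concrete, here is a minimal sketch on a toy logistic-regression problem: at each step, inputs are perturbed in the direction that increases their loss (an FGSM-style attack), and the model is trained on the perturbed examples. The data, step sizes, and perturbation budget `eps` are all hypothetical choices for illustration, not from any specific paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary data: labels in {-1, +1}, class-dependent mean, Gaussian noise.
n, d = 200, 2
y = rng.choice([-1.0, 1.0], size=n)
X = y[:, None] * 1.5 + rng.normal(size=(n, d))

w = np.zeros(d)       # linear model, loss_i = log(1 + exp(-y_i * w @ x_i))
eps, lr = 0.3, 0.1    # hypothetical perturbation budget and step size

def grad_w(w, X, y):
    """Gradient of the mean logistic loss with respect to w."""
    s = 1.0 / (1.0 + np.exp(y * (X @ w)))   # sigmoid(-margin)
    return -(s * y) @ X / len(y)

for _ in range(500):
    # FGSM attack: d(loss_i)/d(x_i) = -y_i * sigmoid(-margin_i) * w,
    # then step each input by eps in the sign of that gradient.
    s = 1.0 / (1.0 + np.exp(y * (X @ w)))
    gx = -(s * y)[:, None] * w[None, :]
    X_adv = X + eps * np.sign(gx)
    # Adversarial training: descend on the loss of the perturbed batch.
    w -= lr * grad_w(w, X_adv, y)

acc = np.mean(np.sign(X @ w) == y)          # clean accuracy after training
```

Training on `X_adv` rather than `X` trades a little clean accuracy for robustness inside the `eps`-ball; deep-learning variants (including calibrated or class-wise schemes) follow the same inner-attack / outer-descent pattern with stronger attacks such as PGD.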
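On the sequential-decision side, the simplest algorithm with adversarial-regime guarantees is EXP3 for multi-armed bandits; best-of-both-worlds methods such as Tsallis-INF refine the same importance-weighted template to also achieve logarithmic regret in the stochastic regime. The sketch below runs EXP3 on a hypothetical Bernoulli loss sequence; the horizon, learning rate, and arm means are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
K, T = 3, 5000
eta = np.sqrt(2 * np.log(K) / (T * K))   # standard fixed-horizon tuning

means = np.array([0.2, 0.5, 0.6])        # hypothetical Bernoulli loss means
L_hat = np.zeros(K)                      # cumulative importance-weighted losses
counts = np.zeros(K, dtype=int)
total_loss = 0.0

for t in range(T):
    # Exponential weights over estimated losses (shifted for stability).
    w = np.exp(-eta * (L_hat - L_hat.min()))
    p = w / w.sum()
    arm = rng.choice(K, p=p)
    counts[arm] += 1
    loss = float(rng.random() < means[arm])
    total_loss += loss
    # Importance-weighted estimate: unbiased for the unobserved full loss vector.
    L_hat[arm] += loss / p[arm]

regret_rate = total_loss / T - means.min()   # per-round regret vs. best arm
```

Because the loss estimates are unbiased for any (even adaptively chosen) loss sequence, EXP3's O(sqrt(TK log K)) regret bound holds against adversaries; Tsallis-INF replaces the exponential potential with a Tsallis-entropy regularizer to get the stochastic-regime improvement as well.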

Papers