Adversarial Constraint
Adversarial constraint research focuses on designing algorithms and models that perform well despite adversarially chosen inputs or constraints, aiming to minimize both regret (cumulative loss relative to the best fixed decision in hindsight) and cumulative constraint violation. Current research emphasizes "best-of-both-worlds" algorithms that handle stochastic and adversarial constraints alike, often combining techniques such as optimistic constraint estimation and Lyapunov (drift-plus-penalty) optimization within frameworks like online convex optimization and bandit problems. This work matters for the robustness and reliability of machine learning systems in settings ranging from network security to online decision-making, where inputs may be malicious or unpredictable.
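A common building block behind these regret-and-violation guarantees is a primal-dual (Lagrangian) update: the learner takes a gradient step on the current loss while a dual variable accumulates observed constraint violation and pushes later decisions back toward feasibility. The sketch below is a minimal illustration of this idea for online convex optimization with a long-term constraint; the function names, the feasible set, the loss and constraint choices, and the step sizes eta and mu are illustrative assumptions rather than the method of any particular paper.

    import numpy as np

    def project_to_ball(x, radius=1.0):
        # Project onto a Euclidean ball of the given radius (the feasible set).
        norm = np.linalg.norm(x)
        return x if norm <= radius else x * (radius / norm)

    def primal_dual_oco(loss_grad, constraint, constraint_grad, T, dim,
                        eta=0.1, mu=0.1):
        # Primal-dual online gradient descent for OCO with one long-term
        # constraint g(x) <= 0.  Step sizes eta (primal) and mu (dual) are
        # illustrative; theory typically tunes them with the horizon T.
        x = np.zeros(dim)    # primal decision
        lam = 0.0            # dual variable (Lagrange multiplier)
        iterates, total_violation = [], 0.0
        for t in range(T):
            grad_f = loss_grad(t, x)                       # adversarial loss gradient
            grad_L = grad_f + lam * constraint_grad(x)     # Lagrangian gradient
            x = project_to_ball(x - eta * grad_L)          # primal descent step
            lam = max(0.0, lam + mu * constraint(x))       # dual ascent on violation
            iterates.append(x.copy())
            total_violation += max(0.0, constraint(x))
        return iterates, total_violation

    # Example: adversarially drawn linear losses, constraint ||x||_1 - 0.5 <= 0.
    rng = np.random.default_rng(0)
    loss_grad = lambda t, x: rng.uniform(-1, 1, size=2)
    constraint = lambda x: np.sum(np.abs(x)) - 0.5
    constraint_grad = lambda x: np.sign(x)                 # subgradient of the L1 norm
    xs, violation = primal_dual_oco(loss_grad, constraint, constraint_grad,
                                    T=1000, dim=2)

The dual variable lam acts as an adaptive penalty: the more the constraint has been violated so far, the harder subsequent primal steps are pushed toward the feasible region, which is the same mechanism that Lyapunov-based analyses formalize.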
Papers
Related papers on this topic were published between February 21, 2023 and October 28, 2024.