Adaptive Adversary
Adaptive adversaries pose a significant challenge in machine learning: unlike oblivious attackers, they adjust their strategies based on the system's observed responses, so an algorithm's guarantees must hold even when the input sequence depends on its own past decisions. Current research investigates the impact of adaptive adversaries across a range of settings, including online learning, federated learning, and bandit problems, often using tools such as game theory and submodular optimization to model the interaction between learner and attacker. Understanding and mitigating the effects of adaptive adversaries is crucial for building robust and secure machine learning systems, and it directly affects the reliability and trustworthiness of AI in deployed applications.
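To make the distinction concrete, the sketch below contrasts an oblivious adversary (losses fixed independently of the learner's behavior) with an adaptive one (losses chosen after observing the learner's past actions) against a standard Hedge / multiplicative-weights learner. This is a minimal illustrative simulation under assumed parameters, not an implementation from any of the surveyed papers; the adversary and learner designs here are simple stand-ins for the models studied in the literature.

```python
"""Minimal sketch: oblivious vs. adaptive adversary in full-information
online learning with a Hedge (multiplicative-weights) learner.
All constants and the loss constructions are illustrative assumptions."""

import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS = 5                               # actions available to the learner
T = 2000                                    # number of rounds
ETA = np.sqrt(np.log(N_ACTIONS) / T)        # standard Hedge learning rate


def oblivious_adversary(t, history):
    """Oblivious: losses drawn from a fixed process, ignoring the learner."""
    return rng.uniform(0.0, 1.0, size=N_ACTIONS)


def adaptive_adversary(t, history):
    """Adaptive: reacts to the learner's observed past actions by putting
    full loss on the action played most often so far."""
    losses = np.zeros(N_ACTIONS)
    if history:
        counts = np.bincount(history, minlength=N_ACTIONS)
        losses[np.argmax(counts)] = 1.0
    return losses


def run(adversary):
    """Play T rounds of Hedge against the given adversary and return regret."""
    weights = np.ones(N_ACTIONS)
    history = []                            # past actions visible to the adversary
    total_loss = 0.0
    cumulative = np.zeros(N_ACTIONS)        # cumulative loss of each fixed action
    for t in range(T):
        probs = weights / weights.sum()
        action = int(rng.choice(N_ACTIONS, p=probs))
        losses = adversary(t, history)      # adversary sees only past rounds
        total_loss += losses[action]
        cumulative += losses
        weights *= np.exp(-ETA * losses)    # full-information Hedge update
        history.append(action)
    return total_loss - cumulative.min()    # regret vs. best fixed action


print("regret vs. oblivious adversary:", run(oblivious_adversary))
print("regret vs. adaptive adversary: ", run(adaptive_adversary))
```

The key modeling choice is that the adversary selects each round's losses after observing the learner's previous actions but before the current one is drawn, which is the usual definition of an adaptive adversary; replacing that dependence with a fixed loss sequence recovers the oblivious case.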