Attack Strategy
Attack strategies in machine learning and related fields exploit vulnerabilities in models and systems to achieve malicious objectives such as data theft, model manipulation, or performance degradation. Current research emphasizes several attack types, including adversarial examples (crafted inputs that cause misclassification), backdoor attacks (injected triggers that control model outputs), and membership inference attacks (determining whether a data point was used in training). These studies target deep neural networks, large language models, and reinforcement learning agents, and their findings inform the design of more robust and secure systems across applications ranging from cybersecurity to AI safety.
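As a concrete illustration of the adversarial-example category mentioned above, the sketch below implements the fast gradient sign method (FGSM), a standard single-step attack; it is not drawn from any of the listed papers. The model, input tensor, labels, and the epsilon value are hypothetical placeholders chosen for illustration.

```python
# Minimal FGSM sketch (assumes a PyTorch classifier; model, x, y, and
# epsilon are illustrative placeholders, not from the papers below).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft an adversarial example with one signed-gradient step."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Perturb each input element in the direction that increases the loss.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Clamp to the valid input range (here assumed to be [0, 1] images).
    return x_adv.clamp(0.0, 1.0).detach()
```

The epsilon parameter controls the attack budget: larger values flip more predictions but make the perturbation easier to detect, which is the central trade-off evaluated in most adversarial-robustness studies.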
Papers
Eroding Trust In Aerial Imagery: Comprehensive Analysis and Evaluation Of Adversarial Attacks In Geospatial Systems
Michael Lanier, Aayush Dhakal, Zhexiao Xiong, Arthur Li, Nathan Jacobs, Yevgeniy Vorobeychik
Cost Aware Untargeted Poisoning Attack against Graph Neural Networks
Yuwei Han, Yuni Lai, Yulin Zhu, Kai Zhou