Attack Strategy
Attack strategies in machine learning and related fields exploit vulnerabilities in models and systems to achieve malicious objectives such as data theft, model manipulation, or performance degradation. Current research covers several attack types, including adversarial examples (inputs perturbed to cause misclassification), backdoor attacks (injecting triggers that control model outputs), and membership inference attacks (determining whether a data point was used in training). These studies typically target deep neural networks, large language models, and reinforcement learning algorithms, and their findings inform the design of more robust and secure systems across applications ranging from cybersecurity to AI safety.
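To make the first category concrete, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), a standard way to craft adversarial examples. The PyTorch framing, the fgsm_attack name, and the epsilon value are illustrative assumptions, not details drawn from any particular paper surveyed here.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    # Craft an adversarial example by stepping the input in the direction
    # of the sign of the loss gradient, which tends to increase the loss
    # and push the model toward misclassification.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    # Clamp to the valid input range (assumed here to be [0, 1] pixel values).
    return x_adv.clamp(0.0, 1.0).detach()

In practice, epsilon bounds the perturbation size: larger values make the attack more effective but also more perceptible, which is why evaluations typically report success rates across a range of epsilon budgets.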