Targeted Attack
Targeted attacks in machine learning manipulate model inputs or training data so that a model produces a specific, attacker-chosen output, for example misclassifying an input into a particular class, rather than merely degrading overall accuracy as untargeted attacks do. Such attacks undermine model reliability and security. Current research focuses on developing increasingly sophisticated attack methods against various models, including deep learning architectures for image recognition, large language models, and time-series forecasters, often employing gradient-based optimization or data-manipulation techniques. Understanding and mitigating these attacks is crucial for ensuring the trustworthiness and robustness of machine learning systems across diverse applications, from cybersecurity to healthcare. The field is actively exploring both improved attack strategies and robust defenses.
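To make the gradient-based flavor of these attacks concrete, below is a minimal sketch of a targeted fast gradient sign method (FGSM) step in PyTorch. The model, epsilon value, and target class are illustrative assumptions, not taken from any particular paper in this collection; the key idea is that the loss is computed against the attacker-chosen target label and the input is stepped in the direction that decreases that loss.

```python
# A minimal sketch of a targeted gradient-based attack (targeted FGSM).
# The model, epsilon, and target label are hypothetical placeholders.
import torch
import torch.nn.functional as F

def targeted_fgsm(model, x, target_class, epsilon=0.03):
    """Perturb input x to push the model toward predicting target_class."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    # Loss measured against the attacker-chosen target, not the true label.
    loss = F.cross_entropy(logits, target_class)
    loss.backward()
    # Step *against* the gradient to decrease the loss on the target class.
    x_adv = x - epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

# Usage with a hypothetical image classifier:
# model.eval()
# targets = torch.full((images.size(0),), 7, dtype=torch.long)
# x_adv = targeted_fgsm(model, images, targets)
# print(model(x_adv).argmax(dim=1))  # ideally shifted toward class 7
```

Iterative variants (e.g., projected gradient descent) repeat this step with a smaller step size and project the perturbation back into an epsilon-ball after each iteration, which is the common basis for stronger targeted attacks.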
Papers
Sixteen papers in this collection, published between June 25, 2022 and October 31, 2024.