Model Attacks
Model attacks exploit vulnerabilities in machine learning models either to extract sensitive training data or to manipulate model outputs, undermining their reliability and security. Current research focuses on increasingly sophisticated attack methods, including those built on generative adversarial networks (GANs) and reinforcement learning, targeting architectures such as large language models (LLMs), convolutional neural networks (CNNs), and graph neural networks. These attacks highlight critical security risks in deploying machine learning systems across diverse applications, from healthcare to security systems, and motivate the development of robust defense mechanisms.
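To make the "manipulate model outputs" class of attack concrete, here is a minimal sketch of a one-step evasion attack in the style of the fast gradient sign method (FGSM), applied to a toy logistic-regression model. The model weights, input, and epsilon below are illustrative assumptions, not drawn from any paper listed here; real attacks of this kind target deep networks such as CNNs and LLMs.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """One-step FGSM: perturb x along the sign of the loss gradient.

    For logistic regression with cross-entropy loss, the gradient of
    the loss with respect to the input is (sigmoid(w.x + b) - y) * w.
    """
    z = w @ x + b
    grad_x = (sigmoid(z) - y) * w
    return x + eps * np.sign(grad_x)

# Toy model and input (illustrative values)
w = np.array([1.0, -1.0])
b = 0.0
x = np.array([0.1, -0.1])
y = 1.0  # true label

clean_pred = int(sigmoid(w @ x + b) > 0.5)   # model is correct on x
x_adv = fgsm_attack(x, y, w, b, eps=0.2)
adv_pred = int(sigmoid(w @ x_adv + b) > 0.5) # prediction flips on x_adv
```

Even this linear toy shows the core mechanism: a small, gradient-guided perturbation of the input is enough to flip the model's decision, which is the same principle adversarial attacks exploit at scale against deep networks.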