Model Attacks
Model attacks exploit vulnerabilities in machine learning models either to extract sensitive training data or to manipulate model outputs, undermining the models' reliability and security. Current research focuses on increasingly sophisticated attack methods, including techniques built on generative adversarial networks (GANs) and reinforcement learning, targeting a range of architectures such as large language models (LLMs), convolutional neural networks (CNNs), and graph neural networks (GNNs). These attacks highlight critical security risks in deploying machine learning systems across diverse applications, from healthcare to security systems, and motivate the development of robust defense mechanisms.
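As a concrete illustration of output manipulation, one of the simplest evasion attacks is the Fast Gradient Sign Method (FGSM): perturb the input in the direction of the sign of the loss gradient with respect to the input. The sketch below is not drawn from any specific paper on this page; it applies FGSM to a toy logistic-regression "model" with hand-picked weights, purely to show the mechanics.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """FGSM on a binary logistic-regression model.

    For cross-entropy loss L, the gradient w.r.t. the input is
    dL/dx = (p - y) * w, where p = sigmoid(w.x + b).
    The adversarial input steps eps in the sign of that gradient.
    """
    p = sigmoid(w @ x + b)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

# Toy model and a correctly classified input (illustrative values).
w = np.array([2.0, -3.0])
b = 0.0
x = np.array([1.0, 0.5])   # model predicts class 1 here
y = 1.0                    # true label

x_adv = fgsm_attack(x, y, w, b, eps=0.5)
# A small perturbation per coordinate flips the prediction to class 0.
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))
```

Real attacks apply the same idea to deep networks (computing the input gradient by backpropagation) and often constrain the perturbation so it stays imperceptible to humans.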