New Attack
Research on attacks against large language models (LLMs) and related AI systems is rapidly expanding, focusing on vulnerabilities that can be exploited to elicit harmful outputs or extract sensitive information. Current efforts concentrate on developing and evaluating attack methods such as jailbreaking, data poisoning, prompt injection, and membership inference, often targeting specific model families like transformer-based LLMs and diffusion models. This research is crucial for understanding and mitigating the risks posed by increasingly powerful AI systems, and it informs the development of more robust and trustworthy AI applications across diverse sectors.
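To make one of these attack classes concrete, the following is a minimal sketch of a loss-threshold membership-inference test, assuming the attacker can obtain a per-sequence loss from the target model. The names `sequence_loss`, `membership_scores`, and the toy scorer are illustrative assumptions, not APIs from any of the papers listed below.

```python
from typing import Callable, List, Tuple


def membership_scores(
    sequence_loss: Callable[[str], float],
    candidates: List[str],
    threshold: float,
) -> List[Tuple[str, float, bool]]:
    """Score candidate texts with the target model and flag those whose
    loss falls below `threshold` as likely training-set members
    (lower loss generally suggests the model has seen the text before)."""
    results = []
    for text in candidates:
        loss = sequence_loss(text)
        results.append((text, loss, loss < threshold))
    return results


if __name__ == "__main__":
    # Toy stand-in scorer; a real attack would query the target LLM's
    # per-token negative log-likelihood instead.
    toy_loss = lambda t: 1.0 / (1 + len(set(t.split())))
    for text, loss, is_member in membership_scores(
        toy_loss, ["the quick brown fox", "zxqv plorb"], threshold=0.25
    ):
        print(f"member={is_member}  loss={loss:.3f}  {text!r}")
```

In a realistic setting the threshold is calibrated on reference texts known to be outside the training set, and more refined variants compare the target model's loss against that of a reference model rather than using a fixed cutoff.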
Papers
Eighteen papers, published between September 11, 2023 and January 25, 2024.