Benchmark Attack
Benchmark attacks evaluate the robustness of machine learning models, particularly large language models and object detectors, against various adversarial manipulations. Current research focuses on developing comprehensive benchmark frameworks that encompass diverse attack types (e.g., prompt injection, physical attacks, model poisoning) and evaluate their effectiveness across different model architectures and datasets, often using simulation to control experimental conditions. These benchmarks are crucial for identifying vulnerabilities and driving the development of more secure and reliable AI systems, impacting fields ranging from autonomous driving to finance.
Papers
October 3, 2024
August 17, 2024
March 6, 2024
February 29, 2024
October 20, 2023
June 26, 2023
May 22, 2023
September 7, 2022