Robust Defense
Robust defense in machine learning develops methods that protect models against adversarial attacks, including backdoors, jailbreaks, data poisoning, and evasion, across architectures ranging from CNNs to LLMs. Current research emphasizes defenses that are both effective against diverse attack strategies and efficient under resource and privacy constraints, drawing on techniques such as randomized smoothing, gradient masking, and reinforcement learning-based approaches. These advances are crucial for the reliability and trustworthiness of AI systems in applications from autonomous driving to medical diagnosis.
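Of the techniques named above, randomized smoothing is the most self-contained to illustrate: a "smoothed" classifier labels an input by majority vote over many Gaussian-noised copies, which makes the prediction provably stable under small input perturbations. The sketch below is a minimal, illustrative version assuming a toy NumPy classifier; the function names and threshold are assumptions, not from any paper on this page.

```python
import numpy as np

def base_classifier(x):
    # Toy base classifier (assumed for illustration):
    # label 1 if the coordinates sum past a threshold, else 0.
    return int(x.sum() > 1.0)

def smoothed_classifier(x, sigma=0.5, n_samples=1000, seed=0):
    # Randomized smoothing: classify n noisy copies of x under
    # Gaussian noise N(0, sigma^2 I) and return the majority vote.
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n_samples, x.shape[0]))
    votes = np.array([base_classifier(x + eps) for eps in noise])
    counts = np.bincount(votes, minlength=2)
    return int(counts.argmax()), counts

if __name__ == "__main__":
    x = np.array([0.8, 0.8])  # clean input; base label is 1
    label, counts = smoothed_classifier(x)
    print(label, counts)
```

In the full method, the vote counts also yield a certified radius: the more lopsided the vote, the larger the input perturbation the smoothed prediction is guaranteed to withstand.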