Robust Defense
Robust defense in machine learning focuses on protecting models against adversarial attacks, including backdoors, jailbreaks, data poisoning, and evasion, across model architectures such as LLMs and CNNs. Current research emphasizes defenses that are both effective against diverse attack strategies and efficient, addressing challenges like resource constraints and privacy concerns through techniques such as randomized smoothing, gradient masking, and reinforcement learning-based approaches. These advances are crucial for ensuring the reliability and trustworthiness of AI systems across applications ranging from autonomous driving to medical diagnosis.
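To make one of these techniques concrete, the following is a minimal sketch of randomized smoothing: the smoothed classifier returns the majority vote of a base classifier over many Gaussian perturbations of the input, which trades a little clean accuracy for certifiable robustness to small perturbations. The `toy_classifier` and all parameter values here are hypothetical illustrations, not drawn from any particular paper on this page.

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=100, rng=None):
    """Randomized-smoothing prediction: majority vote of the base
    classifier over Gaussian perturbations of the input x."""
    rng = np.random.default_rng(rng)
    noise = rng.normal(0.0, sigma, size=(n_samples,) + x.shape)
    votes = np.array([base_classifier(x + n) for n in noise])
    classes, counts = np.unique(votes, return_counts=True)
    return classes[np.argmax(counts)]

# Hypothetical toy base classifier: thresholds the mean of the input.
def toy_classifier(x):
    return int(x.mean() > 0.5)

x = np.full(16, 0.8)  # clean input comfortably above the threshold
print(smoothed_predict(toy_classifier, x, sigma=0.25, n_samples=500, rng=0))
```

Larger `sigma` certifies robustness to larger perturbations but degrades clean accuracy; in practice the base classifier is also trained on noisy inputs so its votes remain informative under the smoothing noise.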