Zero-Shot Adversarial Robustness
Zero-shot adversarial robustness concerns making large-scale models, particularly vision-language models such as CLIP, resilient to adversarial attacks without retraining them on each target task. Current research emphasizes techniques such as adversarial prompt learning, contrastive adversarial training, and model-guided fine-tuning, which harden the model while preserving its zero-shot generalization. These methods aim to improve the accuracy and reliability of such models in real-world scenarios where unseen or maliciously perturbed inputs are common. Advances in this area are crucial for building more trustworthy and dependable AI systems across a wide range of applications.
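As a concrete illustration, the sketch below shows one common recipe from this line of work, text-guided contrastive adversarial fine-tuning in the spirit of TeCoA: adversarial images are crafted with PGD against the image-text contrastive loss, and only CLIP's image encoder is updated on them while the zero-shot text embeddings stay frozen. This is a minimal sketch, assuming the OpenAI `clip` package and PyTorch; the class names, perturbation budget, step sizes, and learning rate are illustrative placeholders, and the perturbation is expressed in preprocessed-pixel space for simplicity.

```python
# Minimal sketch of text-guided contrastive adversarial fine-tuning
# for CLIP (TeCoA-style). Hyperparameters are illustrative, not tuned.
import torch
import torch.nn.functional as F
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

# Frozen zero-shot text embeddings, one per class (hypothetical class names).
class_names = ["cat", "dog", "car"]
with torch.no_grad():
    tokens = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)
    text_emb = F.normalize(model.encode_text(tokens), dim=-1)

def clip_logits(images):
    """Cosine similarities between image embeddings and frozen text embeddings."""
    img_emb = F.normalize(model.encode_image(images), dim=-1)
    return 100.0 * img_emb @ text_emb.T  # 100.0 mimics CLIP's logit scale

def pgd_attack(images, labels, eps=4 / 255, alpha=1 / 255, steps=3):
    """L-inf PGD that maximizes the image-text contrastive (cross-entropy) loss."""
    adv = images.clone().detach() + torch.empty_like(images).uniform_(-eps, eps)
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(clip_logits(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()
        adv = images + (adv - images).clamp(-eps, eps)  # project back into the ball
    return adv.detach()

# Fine-tune only the image encoder; a real run would use fp32 or mixed precision.
optimizer = torch.optim.SGD(model.visual.parameters(), lr=1e-5)

def train_step(images, labels):
    """One adversarial fine-tuning step on a preprocessed, labeled image batch."""
    adv = pgd_attack(images, labels)
    loss = F.cross_entropy(clip_logits(adv), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Freezing the text embeddings is what preserves the zero-shot interface: at test time the hardened image encoder can still be paired with prompts for classes never seen during fine-tuning.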