Adversarial Vulnerability
Research on adversarial vulnerability explores how seemingly minor input manipulations can drastically alter the predictions of machine learning models, particularly deep neural networks (DNNs), including vision transformers (ViTs) and vision-language pre-training (VLP) models. Current work focuses on developing more effective adversarial attacks, evaluating model robustness across architectures and tasks (e.g., image classification, semantic segmentation, multi-task learning, and large language models), and designing defenses such as adversarial training and robust proxy learning. Understanding and mitigating adversarial vulnerability is crucial for ensuring the reliability and safety of AI systems deployed in high-stakes applications, from autonomous vehicles to medical diagnosis.
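To make the core idea concrete, a minimal sketch of one classic attack, the Fast Gradient Sign Method (FGSM), is shown below on a toy logistic-regression classifier rather than a deep network. The weights, input, and epsilon value are illustrative assumptions, not values from any referenced work: the perturbation nudges each input coordinate by at most eps in the direction that increases the loss, and even this small, bounded change flips the model's prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """FGSM on a logistic-regression model: perturb x by eps times the
    sign of the gradient of the cross-entropy loss w.r.t. the input.
    For this model the input gradient is (sigmoid(w.x + b) - y) * w."""
    grad = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad)

# Toy classifier and a correctly classified input (illustrative values).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.3, 0.2])  # w.x + b = 0.4 > 0, so predicted class 1
y = 1.0                   # true label

x_adv = fgsm_attack(x, y, w, b, eps=0.5)

clean_pred = int(sigmoid(w @ x + b) > 0.5)      # 1: correct
adv_pred = int(sigmoid(w @ x_adv + b) > 0.5)    # 0: prediction flipped
```

The same sign-of-gradient step, computed by backpropagation through a DNN instead of an analytic formula, is what makes imperceptible image perturbations effective against large vision models; adversarial training defends by including such perturbed examples in the training loss.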