Adversarial Perturbation
Adversarial perturbation research studies the vulnerability of machine learning models to maliciously crafted inputs designed to cause misclassification or other errors, and develops methods both to expose and to mitigate it. Current work emphasizes improving the robustness of diverse model architectures, including deep convolutional neural networks, vision transformers, and graph neural networks, often through techniques such as adversarial training, vector quantization, and optimal transport. By identifying and addressing these vulnerabilities, the field helps ensure the reliability and security of AI systems across applications ranging from image classification and face recognition to robotics and natural language processing.
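To make the attack model concrete, here is a minimal sketch of one of the simplest perturbation attacks, the fast gradient sign method (FGSM). It is an illustrative example only, not the method of any paper listed below; the names `model`, `x`, `y`, and `epsilon` are assumed placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=8 / 255):
    # `model` (a differentiable classifier), `x` (an input batch in [0, 1]),
    # `y` (true labels), and `epsilon` (the perturbation budget) are
    # illustrative assumptions, not taken from the papers listed here.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)     # loss the attacker wants to raise
    (grad,) = torch.autograd.grad(loss, x)  # gradient w.r.t. the input only
    x_adv = x + epsilon * grad.sign()       # one signed-gradient step
    return x_adv.clamp(0.0, 1.0).detach()   # keep pixels in a valid range
```

Adversarial training, mentioned above, typically folds examples like `x_adv` back into the training loss so the model learns to classify them correctly.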
423 papers
Papers - Page 3
December 25, 2024
SurvAttack: Black-Box Attack On Survival Models through Ontology-Informed EHR Perturbation
Mohsen Nayebi Kerdabadi, Arya Hadizadeh Moghaddam, Bin Liu, Mei Liu, Zijun Yao

December 24, 2024
Robustness-aware Automatic Prompt Optimization
Zeru Shi, Zhenting Wang, Yongye Su, Weidi Luo, Hang Gao, Fan Yang, Ruixiang Tang, Yongfeng Zhang
December 23, 2024
On Adversarial Robustness and Out-of-Distribution Robustness of Large Language Models
April Yang, Jordan Tab, Parth Shah, Paul Kotchavong

December 17, 2024
A²RNet: Adversarial Attack Resilient Network for Robust Infrared and Visible Image Fusion
Jiawei Li, Hongwei Yu, Jiansheng Chen, Xinlong Ding, Jinlong Wang, Jinyuan Liu, Bochao Zou, Huimin Ma

December 13, 2024
Prompt2Perturb (P2P): Text-Guided Diffusion-Based Adversarial Attacks on Breast Ultrasound Images
Yasamin Medghalchi, Moein Heidari, Clayton Allard, Leonid Sigal, Ilker Hacihaliloglu