Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, and on exploring attack strategies across model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for ensuring the reliability and security of AI systems in diverse applications, from autonomous vehicles to medical diagnosis and cybersecurity.
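As a concrete illustration of the core idea, below is a minimal PyTorch sketch of the classic Fast Gradient Sign Method (FGSM), one of the simplest gradient-based attacks: it nudges each input in the direction that increases the model's loss, bounded by a small budget epsilon. The specific model, labels, epsilon value, and 0-1 pixel range are illustrative assumptions, not taken from any paper listed here.

```python
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Craft adversarial examples with the Fast Gradient Sign Method (FGSM).

    x: batch of inputs (e.g., images scaled to [0, 1]); y: true class labels.
    """
    # Work on a detached copy so gradients flow to the input, not the graph.
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    x_adv = x + epsilon * x.grad.sign()
    # Clamp assumes inputs live in [0, 1]; adjust for other normalizations.
    return x_adv.clamp(0.0, 1.0).detach()

# Example usage with a hypothetical classifier and data batch:
# x_adv = fgsm_attack(classifier, images, labels, epsilon=8 / 255)
```

Many of the attacks studied below are far more sophisticated (black-box, multi-view, or query-free), but they share this basic goal: a small, structured perturbation that flips the model's output.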
Papers
Light-weight Fine-tuning Method for Defending Adversarial Noise in Pre-trained Medical Vision-Language Models
Xu Han, Linghao Jin, Xuezhe Ma, Xiaofeng Liu
EvolBA: Evolutionary Boundary Attack under Hard-label Black Box condition
Ayane Tajima, Satoshi Ono
MALT Powers Up Adversarial Attacks
Odelia Melamed, Gilad Yehudai, Adi Shamir
Formal Verification of Object Detection
Avraham Raviv, Yizhak Y. Elboher, Michelle Aluf-Medina, Yael Leibovich Weiss, Omer Cohen, Roy Assa, Guy Katz, Hillel Kugler
Multi-View Black-Box Physical Attacks on Infrared Pedestrian Detectors Using Adversarial Infrared Grid
Kalibinuer Tiliwalidi, Chengyin Hu, Weiwen Shi
DiffuseDef: Improved Robustness to Adversarial Attacks
Zhenhao Li, Marek Rei, Lucia Specia
Emotion Loss Attacking: Adversarial Attack Perception for Skeleton based on Multi-dimensional Features
Feng Liu, Qing Xu, Qijian Zheng
IDT: Dual-Task Adversarial Attacks for Privacy Protection
Pedro Faustini, Shakila Mahjabin Tonni, Annabelle McIver, Qiongkai Xu, Mark Dras
Data-Driven Lipschitz Continuity: A Cost-Effective Approach to Improve Adversarial Robustness
Erh-Chung Chen, Pin-Yu Chen, I-Hsin Chung, Che-Rung Lee
Zero-Query Adversarial Attack on Black-box Automatic Speech Recognition Systems
Zheng Fang, Tao Wang, Lingchen Zhao, Shenyi Zhang, Bowen Li, Yunjie Ge, Qi Li, Chao Shen, Qian Wang
Dysca: A Dynamic and Scalable Benchmark for Evaluating Perception Ability of LVLMs
Jie Zhang, Zhongqi Wang, Mengqi Lei, Zheng Yuan, Bei Yan, Shiguang Shan, Xilin Chen
Detecting Brittle Decisions for Free: Leveraging Margin Consistency in Deep Robust Classifiers
Jonas Ngnawé, Sabyasachi Sahoo, Yann Pequignot, Frédéric Precioso, Christian Gagné
Artificial Immune System of Secure Face Recognition Against Adversarial Attacks
Min Ren, Yunlong Wang, Yuhao Zhu, Yongzhen Huang, Zhenan Sun, Qi Li, Tieniu Tan
Diffusion-based Adversarial Purification for Intrusion Detection
Mohamed Amine Merzouk, Erwan Beurier, Reda Yaich, Nora Boulahia-Cuppens, Frédéric Cuppens
Treatment of Statistical Estimation Problems in Randomized Smoothing for Adversarial Robustness
Vaclav Voracek
Detection of Synthetic Face Images: Accuracy, Robustness, Generalization
Nela Petrzelkova, Jan Cech
CuDA2: An approach for Incorporating Traitor Agents into Cooperative Multi-Agent Systems
Zhen Chen, Yong Liao, Youpeng Zhao, Zipeng Dai, Jian Zhao
Automated Adversarial Discovery for Safety Classifiers
Yash Kumar Lal, Preethi Lahoti, Aradhana Sinha, Yao Qin, Ananth Balashankar
UNICAD: A Unified Approach for Attack Detection, Noise Reduction and Novel Class Identification
Alvaro Lopez Pellicer, Kittipos Giatgong, Yi Li, Neeraj Suri, Plamen Angelov