Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, and on exploring attack strategies across a range of model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for ensuring the reliability and security of AI systems in diverse applications, from autonomous vehicles to medical diagnosis and cybersecurity.
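To make the core idea concrete, the sketch below shows a minimal FGSM-style (fast gradient sign method) perturbation in PyTorch: the input is nudged in the direction that increases the classifier's loss, so a visually similar image can flip the prediction. This is an illustrative example only, not the method of any paper listed here; the function name, the `epsilon` budget, and the assumption of a pretrained `model` with inputs scaled to [0, 1] are all assumptions for the sketch.

```python
# Minimal FGSM-style sketch (illustrative only; not from any listed paper).
# Assumes a pretrained classifier `model`, an input batch `x` in [0, 1],
# and ground-truth labels `y`.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Return a copy of x perturbed to increase the classification loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the sign of the input gradient, then clamp to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Many of the attacks and defenses in the papers below build on variants of this gradient-guided perturbation idea, extending it to other modalities, threat models, and robustness measures.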
Papers
Cross-Modal Transferable Image-to-Video Attack on Video Quality Metrics
Georgii Gotin, Ekaterina Shumitskaya, Anastasia Antsiferova, Dmitriy Vatolin
Towards an End-to-End (E2E) Adversarial Learning and Application in the Physical World
Dudi Biton, Jacob Shams, Koda Satoru, Asaf Shabtai, Yuval Elovici, Ben Nassi
VENOM: Text-driven Unrestricted Adversarial Example Generation with Diffusion Models
Hui Kuurila-Zhang, Haoyu Chen, Guoying Zhao
MOS-Attack: A Scalable Multi-objective Adversarial Attack Framework
Ping Guo, Cheng Gong, Xi Lin, Fei Liu, Zhichao Lu, Qingfu Zhang, Zhenkun Wang
Protego: Detecting Adversarial Examples for Vision Transformers via Intrinsic Capabilities
Jialin Wu, Kaikai Pan, Yanjiao Chen, Jiangyi Deng, Shengyuan Pang, Wenyuan Xu
Enforcing Fundamental Relations via Adversarial Attacks on Input Parameter Correlations
Timo Saala, Lucie Flek, Alexander Jung, Akbar Karimi, Alexander Schmidt, Matthias Schott, Philipp Soldin, Christopher Wiebusch
CROPS: Model-Agnostic Training-Free Framework for Safe Image Synthesis with Latent Diffusion Models
Junha Park, Ian Ryu, Jaehui Hwang, Hyungkeun Park, Jiyoon Kim, Jong-Seok Lee
DiffAttack: Diffusion-based Timbre-reserved Adversarial Attack in Speaker Identification
Qing Wang, Jixun Yao, Zhaokai Sun, Pengcheng Guo, Lei Xie, John H.L. Hansen
On Measuring Unnoticeability of Graph Adversarial Attacks: Observations, New Measure, and Applications
Hyeonsoo Jo, Hyunjin Hwang, Fanchen Bu, Soo Yong Lee, Chanyoung Park, Kijung Shin
Synthetic Data Privacy Metrics
Amy Steier, Lipika Ramaswamy, Andre Manoel, Alexa Haushalter
Rethinking Adversarial Attacks in Reinforcement Learning from Policy Distribution Perspective
Tianyang Duan, Zongyuan Zhang, Zheng Lin, Yue Gao, Ling Xiong, Yong Cui, Hongbin Liang, Xianhao Chen, Heming Cui, Dong Huang