Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, and on exploring attack strategies across model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for ensuring the reliability and security of AI systems in diverse applications, from autonomous vehicles to medical diagnosis and cybersecurity.
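To make the idea of "subtly altering input data" concrete, below is a minimal sketch of the fast gradient sign method (FGSM), a classic gradient-based attack of this kind (not drawn from any of the papers listed here). The model, input, label, and epsilon are illustrative assumptions: the perturbation steps each input feature in the direction that increases the model's loss, bounded in size by epsilon so the change stays visually subtle.

```python
# Minimal FGSM sketch in PyTorch (hypothetical model/data; epsilon is illustrative).
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 8 / 255) -> torch.Tensor:
    """Return an adversarially perturbed copy of x intended to induce misclassification."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step each pixel by +/- epsilon in the direction that increases the loss,
    # then clamp back to the valid input range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Example usage with a toy classifier on a random "image" (all values assumed in [0, 1]).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)   # clean input
y = torch.tensor([3])          # assumed true label
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())  # perturbation magnitude is bounded by epsilon
```

The key design point is that the attacker needs only the gradient of the loss with respect to the input, not the model's weights themselves; stronger iterative attacks (e.g., PGD) repeat this step with projection back into the epsilon-ball.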
Papers
Log-normal Mutations and their Use in Detecting Surreptitious Fake Images
Ismail Labiad, Thomas Bäck, Pierre Fernandez, Laurent Najman, Tom Sanders, Furong Ye, Mariia Zameshina, Olivier Teytaud
Improving Adversarial Robustness for 3D Point Cloud Recognition at Test-Time through Purified Self-Training
Jinpeng Lin, Xulei Yang, Tianrui Li, Xun Xu
Attack Atlas: A Practitioner's Perspective on Challenges and Pitfalls in Red Teaming GenAI
Ambrish Rawat, Stefan Schoepf, Giulio Zizzo, Giandomenico Cornacchia, Muhammad Zaid Hameed, Kieran Fraser, Erik Miehling, Beat Buesser, Elizabeth M. Daly, Mark Purcell, Prasanna Sattigeri, Pin-Yu Chen, Kush R. Varshney
ViTGuard: Attention-aware Detection against Adversarial Examples for Vision Transformer
Shihua Sun, Kenechukwu Nwodo, Shridatt Sugrim, Angelos Stavrou, Haining Wang
Efficient Visualization of Neural Networks with Generative Models and Adversarial Perturbations
Athanasios Karagounis
Deterministic versus stochastic dynamical classifiers: opposing random adversarial attacks with noise
Lorenzo Chicchi, Duccio Fanelli, Diego Febbe, Lorenzo Buffoni, Francesca Di Patti, Lorenzo Giambagli, Raffaele Marino
Relationship between Uncertainty in DNNs and Adversarial Attacks
Abigail Adeniran, Adewale Adeyemo
Hidden Activations Are Not Enough: A General Approach to Neural Network Predictions
Samuel Leblanc, Aiky Rasolomanana, Marco Armenta
Deep generative models as an adversarial attack strategy for tabular machine learning
Salijona Dyrmishi, Mihaela Cătălina Stoian, Eleonora Giunchiglia, Maxime Cordy
TEAM: Temporal Adversarial Examples Attack Model against Network Intrusion Detection System Applied to RNN
Ziyi Liu, Dengpan Ye, Long Tang, Yunming Zhang, Jiacheng Deng
ITPatch: An Invisible and Triggered Physical Adversarial Patch against Traffic Sign Recognition
Shuai Yuan, Hongwei Li, Xingshuo Han, Guowen Xu, Wenbo Jiang, Tao Ni, Qingchuan Zhao, Yuguang Fang
Enhancing 3D Robotic Vision Robustness by Minimizing Adversarial Mutual Information through a Curriculum Training Approach
Nastaran Darabi, Dinithi Jayasuriya, Devashri Naik, Theja Tulabandhula, Amit Ranjan Trivedi
Golden Ratio Search: A Low-Power Adversarial Attack for Deep Learning based Modulation Classification
Deepsayan Sadhukhan, Nitin Priyadarshini Shankar, Sheetal Kalyani
EIA: Environmental Injection Attack on Generalist Web Agents for Privacy Leakage
Zeyi Liao, Lingbo Mo, Chejian Xu, Mintong Kang, Jiawei Zhang, Chaowei Xiao, Yuan Tian, Bo Li, Huan Sun