Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, exploring various attack strategies across different model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for ensuring the reliability and security of AI systems in diverse applications, from autonomous vehicles to medical diagnosis and cybersecurity.
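To make the "subtle alteration" idea concrete, the sketch below implements the classic Fast Gradient Sign Method (FGSM): each input feature is nudged by a small step eps in the direction that increases the model's loss. The logistic-regression classifier, weights, and inputs here are hypothetical toy values chosen only to show the mechanics, not taken from any of the papers listed.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_attack(x, y, w, eps):
    """FGSM on a logistic-regression classifier with label y in {-1, +1}:
    move each feature by eps in the sign of the loss gradient."""
    # Loss L = -log sigmoid(y * w.x);  dL/dx_i = -y * sigmoid(-y * w.x) * w_i
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    coef = -y * sigmoid(-margin)
    grad = [coef * wi for wi in w]
    # Step by eps in the sign of the gradient (0 where the gradient is 0)
    return [xi + eps * (1 if gi > 0 else -1 if gi < 0 else 0)
            for xi, gi in zip(x, grad)]

def predict(x, w):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1

w = [2.0, -1.0]                          # toy linear classifier
x, y = [0.3, 0.2], 1                     # clean input, true label +1
print(predict(x, w))                     # -> 1 (correctly classified)
x_adv = fgsm_attack(x, y, w, eps=0.5)
print(predict(x_adv, w))                 # -> -1 (misclassified)
```

With eps = 0.5 the perturbation flips the prediction from +1 to -1, which is exactly the misclassification behavior the overview describes; real attacks use the same principle with much smaller, visually imperceptible eps on high-dimensional inputs.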
Papers
VIC-KD: Variance-Invariance-Covariance Knowledge Distillation to Make Keyword Spotting More Robust Against Adversarial Attacks
Heitor R. Guimarães, Arthur Pimentel, Anderson Avila, Tiago H. Falk
QAL-BP: An Augmented Lagrangian Quantum Approach for Bin Packing
Lorenzo Cellini, Antonio Macaluso, Michele Lombardi
Improving Machine Learning Robustness via Adversarial Training
Long Dang, Thushari Hapuarachchi, Kaiqi Xiong, Jing Lin
AudioFool: Fast, Universal and synchronization-free Cross-Domain Attack on Speech Recognition
Mohamad Fakih, Rouwaida Kanj, Fadi Kurdahi, Mohammed E. Fouda
PRAT: PRofiling Adversarial aTtacks
Rahul Ambati, Naveed Akhtar, Ajmal Mian, Yogesh Singh Rawat
It's Simplex! Disaggregating Measures to Improve Certified Robustness
Andrew C. Cullen, Paul Montague, Shijie Liu, Sarah M. Erfani, Benjamin I. P. Rubinstein
Extreme Image Transformations Facilitate Robust Latent Object Representations
Girik Malik, Dakarai Crowder, Ennio Mingolla
Adversarial Attacks Against Uncertainty Quantification
Emanuele Ledda, Daniele Angioni, Giorgio Piras, Giorgio Fumera, Battista Biggio, Fabio Roli
Language Guided Adversarial Purification
Himanshu Singh, A V Subramanyam
Transferable Adversarial Attack on Image Tampering Localization
Yuqi Wang, Gang Cao, Zijie Lou, Haochen Zhu
Mitigating Adversarial Attacks in Federated Learning with Trusted Execution Environments
Simon Queyrut, Valerio Schiavoni, Pascal Felber
PhantomSound: Black-Box, Query-Efficient Audio Adversarial Attack via Split-Second Phoneme Injection
Hanqing Guo, Guangjing Wang, Yuanda Wang, Bocheng Chen, Qiben Yan, Li Xiao