Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, while exploring attack strategies across different model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for ensuring the reliability and security of AI systems in diverse applications, from autonomous vehicles to medical diagnosis and cybersecurity.
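As a concrete illustration of "subtly altering input data," the sketch below implements the fast gradient sign method (FGSM), one of the simplest gradient-based evasion attacks: it nudges each input feature in the direction that increases the model's loss. The toy classifier, epsilon value, and input shapes are illustrative assumptions and are not taken from any of the papers listed on this page.

```python
# Minimal FGSM sketch in PyTorch (illustrative; model and epsilon are assumptions).
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return adversarial examples produced by one signed-gradient step."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move each input component in the direction that increases the loss,
    # then clamp back to the valid input range [0, 1].
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy linear classifier and random "images" purely for demonstration.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(4, 1, 28, 28)       # batch of 4 grayscale 28x28 inputs in [0, 1]
    y = torch.randint(0, 10, (4,))     # arbitrary labels
    x_adv = fgsm_attack(model, x, y)
    print("max perturbation:", (x_adv - x).abs().max().item())
```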
Papers
Adversarial Evasion Attacks Practicality in Networks: Testing the Impact of Dynamic Learning
Mohamed el Shehaby, Ashraf Matrawy
Boosting Adversarial Transferability by Achieving Flat Local Maxima
Zhijin Ge, Hongying Liu, Xiaosen Wang, Fanhua Shang, Yuanyuan Liu
FedSecurity: Benchmarking Attacks and Defenses in Federated Learning and Federated LLMs
Shanshan Han, Baturalp Buyukates, Zijian Hu, Han Jin, Weizhao Jin, Lichao Sun, Xiaoyang Wang, Wenxuan Wu, Chulin Xie, Yuhang Yao, Kai Zhang, Qifan Zhang, Yuhui Zhang, Carlee Joe-Wong, Salman Avestimehr, Chaoyang He
Expanding Scope: Adapting English Adversarial Attacks to Chinese
Hanyu Liu, Chengyuan Cai, Yanjun Qi
A Linearly Convergent GAN Inversion-based Algorithm for Reverse Engineering of Deceptions
Darshan Thaker, Paris Giampouras, René Vidal
Divide and Repair: Using Options to Improve Performance of Imitation Learning Against Adversarial Demonstrations
Prithviraj Dasgupta
CFDP: Common Frequency Domain Pruning
Samir Khaki, Weihan Luo
Adversarial attacks and defenses in explainable artificial intelligence: A survey
Hubert Baniecki, Przemyslaw Biecek
Adversarial Attacks and Defenses for Semantic Communication in Vehicular Metaverses
Jiawen Kang, Jiayi He, Hongyang Du, Zehui Xiong, Zhaohui Yang, Xumin Huang, Shengli Xie
Revisiting the Trade-off between Accuracy and Robustness via Weight Distribution of Filters
Xingxing Wei, Shiji Zhao, Bo Li
A Robust Likelihood Model for Novelty Detection
Ranya Almohsen, Shivang Patel, Donald A. Adjeroh, Gianfranco Doretto
Adversarial alignment: Breaking the trade-off between the strength of an attack and its relevance to human perception
Drew Linsley, Pinyuan Feng, Thibaut Boissin, Alekh Karkada Ashok, Thomas Fel, Stephanie Olaiya, Thomas Serre
Adversarial Ink: Componentwise Backward Error Attacks on Deep Learning
Lucas Beerens, Desmond J. Higham
Poisoning Network Flow Classifiers
Giorgio Severi, Simona Boboila, Alina Oprea, John Holodnak, Kendra Kratkiewicz, Jason Matterer
VoteTRANS: Detecting Adversarial Text without Training by Voting on Hard Labels of Transformations
Hoang-Quoc Nguyen-Son, Seira Hidano, Kazuhide Fukushima, Shinsaku Kiyomoto, Isao Echizen
Adversarial Attack Based on Prediction-Correction
Chen Wan, Fangjun Huang