Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, exploring various attack strategies across different model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for ensuring the reliability and security of AI systems in diverse applications, from autonomous vehicles to medical diagnosis and cybersecurity.
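To make the mechanism concrete, below is a minimal sketch of the classic Fast Gradient Sign Method (FGSM; Goodfellow et al., 2015), one of the simplest evasion attacks of the kind catalogued on this page: each input is nudged by a small step epsilon in the direction that increases the model's loss. This is a generic illustration, not the method of any specific paper listed here; the PyTorch classifier `model`, inputs normalized to [0, 1], and the `epsilon` value are assumptions for the sketch.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return an adversarial copy of x intended to flip the model's prediction.

    Assumes `model` is a PyTorch classifier returning logits and that
    inputs live in the range [0, 1]; both are illustrative assumptions.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the sign of the input gradient to maximize the loss,
    # then clamp back to the valid input range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage with a batch of images and integer class labels:
# adv_images = fgsm_attack(model, images, labels, epsilon=8 / 255)
```

The perturbation is bounded elementwise by epsilon, which is why such examples can look unchanged to a human while still causing the misclassifications described above.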
1,813 papers
June 8, 2023
Adversarial Evasion Attacks Practicality in Networks: Testing the Impact of Dynamic Learning
Mohamed elShehaby, Ashraf Matrawy
Boosting Adversarial Transferability by Achieving Flat Local Maxima
Zhijin Ge, Hongying Liu, Xiaosen Wang, Fanhua Shang, Yuanyuan Liu
FedSecurity: Benchmarking Attacks and Defenses in Federated Learning and Federated LLMs
Shanshan Han, Baturalp Buyukates, Zijian Hu, Han Jin, Weizhao Jin, Lichao Sun, Xiaoyang Wang, Wenxuan Wu, Chulin Xie, Yuhang Yao, Kai Zhang +5
Expanding Scope: Adapting English Adversarial Attacks to Chinese
Hanyu Liu, Chengyuan Cai, Yanjun Qi
June 7, 2023
A Linearly Convergent GAN Inversion-based Algorithm for Reverse Engineering of Deceptions
Darshan Thaker, Paris Giampouras, René Vidal
Divide and Repair: Using Options to Improve Performance of Imitation Learning Against Adversarial Demonstrations
Prithviraj Dasgupta
CFDP: Common Frequency Domain Pruning
Samir Khaki, Weihan Luo
June 6, 2023
Adversarial attacks and defenses in explainable artificial intelligence: A survey
Hubert Baniecki, Przemyslaw Biecek
Adversarial Attacks and Defenses for Semantic Communication in Vehicular Metaverses
Jiawen Kang, Jiayi He, Hongyang Du, Zehui Xiong, Zhaohui Yang, Xumin Huang, Shengli Xie
Revisiting the Trade-off between Accuracy and Robustness via Weight Distribution of Filters
Xingxing Wei, Shiji Zhao, Bo Li
A Robust Likelihood Model for Novelty Detection
Ranya Almohsen, Shivang Patel, Donald A. Adjeroh, Gianfranco Doretto
June 5, 2023
Adversarial alignment: Breaking the trade-off between the strength of an attack and its relevance to human perception
Drew Linsley, Pinyuan Feng, Thibaut Boissin, Alekh Karkada Ashok, Thomas Fel, Stephanie Olaiya, Thomas Serre
Adversarial Ink: Componentwise Backward Error Attacks on Deep Learning
Lucas Beerens, Desmond J. Higham
June 2, 2023
Poisoning Network Flow Classifiers
Giorgio Severi, Simona Boboila, Alina Oprea, John Holodnak, Kendra Kratkiewicz, Jason Matterer
VoteTRANS: Detecting Adversarial Text without Training by Voting on Hard Labels of Transformations
Hoang-Quoc Nguyen-Son, Seira Hidano, Kazuhide Fukushima, Shinsaku Kiyomoto, Isao Echizen
Adversarial Attack Based on Prediction-Correction
Chen Wan, Fangjun Huang