Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, exploring attack strategies across model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks are crucial for ensuring the reliability and security of AI systems in diverse applications, from autonomous vehicles to medical diagnosis and cybersecurity.
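To make the core idea concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM, Goodfellow et al., 2015), one of the attacks evaluated in the papers below. It assumes a PyTorch classifier; the names `model`, `x`, `y`, and the budget `epsilon` are illustrative placeholders, and inputs are assumed normalized to [0, 1].

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb input x by one signed-gradient step so the model's
    loss on the true labels y increases (illustrative sketch)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that maximally increases the loss,
    # then clamp back to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

The perturbation is bounded in the L-infinity norm by `epsilon`, so the adversarial image stays visually close to the original; defenses surveyed below, such as adversarial purification, randomized smoothing, and distillation, are typically benchmarked against this kind of gradient-based perturbation.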
Papers
AED-PADA: Improving Generalizability of Adversarial Example Detection via Principal Adversarial Domain Adaptation
Heqi Peng, Yunhong Wang, Ruijie Yang, Beichen Li, Rui Wang, Yuanfang Guo
SA-Attack: Speed-adaptive stealthy adversarial attack on trajectory prediction
Huilin Yin, Jiaxiang Li, Pengju Zhen, Jun Yan
Proteus: Preserving Model Confidentiality during Graph Optimizations
Yubo Gao, Maryam Haghifam, Christina Giannoula, Renbo Tu, Gennady Pekhimenko, Nandita Vijaykumar
Advancing the Robustness of Large Language Models through Self-Denoised Smoothing
Jiabao Ji, Bairu Hou, Zhen Zhang, Guanhua Zhang, Wenqi Fan, Qing Li, Yang Zhang, Gaowen Liu, Sijia Liu, Shiyu Chang
Exploring DNN Robustness Against Adversarial Attacks Using Approximate Multipliers
Mohammad Javad Askarizadeh, Ebrahim Farahmand, Jorge Castro-Godinez, Ali Mahani, Laura Cabrera-Quiros, Carlos Salazar-Garcia
GenFighter: A Generative and Evolutive Textual Attack Removal
Md Athikul Islam, Edoardo Serra, Sushil Jajodia
PASA: Attack Agnostic Unsupervised Adversarial Detection using Prediction & Attribution Sensitivity Analysis
Dipkamal Bhusal, Md Tanvirul Alam, Monish K. Veerabhadran, Michael Clifford, Sara Rampazzi, Nidhi Rastogi
A Survey of Neural Network Robustness Assessment in Image Recognition
Jie Wang, Jun Ai, Minyan Lu, Haoran Su, Dan Yu, Yutao Zhang, Junda Zhu, Jingyu Liu
Struggle with Adversarial Defense? Try Diffusion
Yujie Li, Yanbin Wang, Haitao Xu, Bin Liu, Jianguo Sun, Zhenhao Guo, Wenrui Ma
Practical Region-level Attack against Segment Anything Models
Yifan Shen, Zhengyuan Li, Gang Wang
Adversarial purification for no-reference image-quality metrics: applicability study and new methods
Aleksandr Gushchin, Anna Chistyakova, Vladislav Minashkin, Anastasia Antsiferova, Dmitriy Vatolin
Logit Calibration and Feature Contrast for Robust Federated Learning on Non-IID Data
Yu Qiao, Chaoning Zhang, Apurba Adhikary, Choong Seon Hong
Quantum Adversarial Learning for Kernel Methods
Giuseppe Montalbano, Leonardo Banchi
Out-of-Distribution Data: An Acquaintance of Adversarial Examples -- A Survey
Naveen Karunanayake, Ravin Gunawardena, Suranga Seneviratne, Sanjay Chawla
Semantic Stealth: Adversarial Text Attacks on NLP Using Several Methods
Roopkatha Dey, Aivy Debnath, Sayak Kumar Dutta, Kaustav Ghosh, Arijit Mitra, Arghya Roy Chowdhury, Jaydip Sen
Evaluating Adversarial Robustness: A Comparison of FGSM, Carlini-Wagner Attacks, and the Role of Distillation as Defense Mechanism
Trilokesh Ranjan Sarkar, Nilanjan Das, Pralay Sankar Maitra, Bijoy Some, Ritwik Saha, Orijita Adhikary, Bishal Bose, Jaydip Sen
Re-pseudonymization Strategies for Smart Meter Data Are Not Robust to Deep Learning Profiling Attacks
Ana-Maria Cretu, Miruna Rusu, Yves-Alexandre de Montjoye