Small Perturbation
Small perturbation analysis studies how minor changes to inputs or model parameters affect the behavior of machine learning models, particularly deep neural networks. Current research leverages perturbations to improve model robustness (e.g., against adversarial attacks or noisy data), enhance interpretability, and achieve efficient model compression through techniques such as quantization and weight pruning. Understanding and controlling the effects of small perturbations is crucial for building reliable, trustworthy, and efficient AI systems across diverse applications, from image recognition and natural language processing to robotics and medical image analysis.
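To make the idea concrete, here is a minimal sketch of small-perturbation analysis on inputs using the Fast Gradient Sign Method (FGSM), a standard probe of adversarial robustness. The linear model, random data, and the `fgsm_perturb` helper are illustrative stand-ins, not taken from the papers listed below.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return x shifted by a small worst-case perturbation of size epsilon."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that most increases the loss: the sign of the
    # input gradient, scaled by a small epsilon.
    return (x + epsilon * x.grad.sign()).detach()

# Toy setup: a linear classifier on random data (purely illustrative).
torch.manual_seed(0)
model = nn.Linear(10, 3)
x = torch.randn(4, 10)
y = torch.randint(0, 3, (4,))

x_adv = fgsm_perturb(model, x, y)
with torch.no_grad():
    clean = model(x).argmax(dim=1)
    perturbed = model(x_adv).argmax(dim=1)
print("predictions flipped:", (clean != perturbed).sum().item(), "of", len(y))
```

Even a perturbation that is imperceptibly small per coordinate can flip predictions, which is why the papers below study how such perturbations transfer across models and inputs.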
Papers
Perturbation Towards Easy Samples Improves Targeted Adversarial Transferability
Junqi Gao, Biqing Qi, Yao Li, Zhichang Guo, Dong Li, Yuming Xing, Dazhi Zhang
One Perturbation is Enough: On Generating Universal Adversarial Perturbations against Vision-Language Pre-training Models
Hao Fang, Jiawei Kong, Wenbo Yu, Bin Chen, Jiawei Li, Shutao Xia, Ke Xu