Small Perturbation
Small perturbation analysis investigates how minor changes to inputs or model parameters affect the behavior of machine learning models, particularly deep neural networks. Current research leverages perturbation analysis to improve model robustness (e.g., against adversarial attacks or noisy data), enhance interpretability, and achieve efficient model compression through techniques such as quantization and weight pruning. This work matters because understanding and controlling the effects of small perturbations is crucial for building reliable, trustworthy, and efficient AI systems across diverse applications, from image recognition and natural language processing to robotics and medical image analysis.
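As an illustration of the robustness side of this analysis, the sketch below applies a small, worst-case input perturbation to a toy model. It uses a hand-rolled logistic-regression "network" (a hypothetical stand-in for a deep network) and perturbs the input in the direction of the sign of the loss gradient, in the style of FGSM-type adversarial perturbations; the model, weights, and budget are all illustrative assumptions, not drawn from any specific paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    # Toy "model": logistic regression, p(y=1 | x)
    return sigmoid(np.dot(w, x) + b)

def input_gradient(w, b, x, y_true):
    # Gradient of the cross-entropy loss w.r.t. the input x:
    # for logistic regression this is (p - y) * w
    p = predict(w, b, x)
    return (p - y_true) * w

# Hypothetical fixed weights and a single input with true label 1
w = np.array([2.0, -1.0, 0.5])
b = 0.1
x = np.array([0.4, 0.2, -0.3])
y_true = 1.0

# Small perturbation budget (L-infinity norm), chosen for illustration
eps = 0.1

# FGSM-style perturbation: step along the sign of the input gradient,
# i.e., the direction that most increases the loss per unit of budget
x_adv = x + eps * np.sign(input_gradient(w, b, x, y_true))

p_clean = predict(w, b, x)
p_adv = predict(w, b, x_adv)
# Even this tiny perturbation lowers the model's confidence
# in the true class (p_adv < p_clean).
```

The same pattern scales to deep networks by replacing the closed-form gradient with automatic differentiation; the key point is that the perturbation is bounded (here, at most `eps` per coordinate) yet chosen adversarially rather than at random.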