Small Perturbation

Small perturbation analysis investigates how minor changes to inputs or model parameters affect the behavior of machine learning models, particularly deep neural networks. Current research focuses on leveraging perturbations for tasks such as improving model robustness (e.g., against adversarial attacks or noisy data), enhancing interpretability, and achieving efficient model compression through techniques like quantization and weight pruning. This work is significant because understanding and controlling the effects of small perturbations is crucial for building reliable, trustworthy, and efficient AI systems across diverse applications, from image recognition and natural language processing to robotics and medical image analysis.
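As a minimal sketch of the core idea, the snippet below (a hypothetical two-layer ReLU network with random weights, not from any cited paper) perturbs an input by a small amount and checks that the resulting output change stays within the network's Lipschitz bound, the product of the layers' spectral norms:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights for a tiny 2-layer ReLU network: f(x) = W2 @ relu(W1 @ x)
W1 = rng.standard_normal((16, 8))
W2 = rng.standard_normal((4, 16))

def model(x):
    return W2 @ np.maximum(W1 @ x, 0.0)

x = rng.standard_normal(8)

# Small input perturbation, normalized so that ||delta|| = eps
eps = 1e-3
delta = rng.standard_normal(8)
delta *= eps / np.linalg.norm(delta)

out_change = np.linalg.norm(model(x + delta) - model(x))

# ReLU is 1-Lipschitz, so the network's Lipschitz constant is at most
# ||W2||_2 * ||W1||_2 (spectral norms); the output change cannot exceed
# this constant times ||delta||.
bound = np.linalg.norm(W2, 2) * np.linalg.norm(W1, 2) * eps
print(out_change <= bound)  # True: the small perturbation stays controlled
```

Adversarial-attack research turns this picture around, searching for the perturbation direction that maximizes `out_change` rather than sampling it at random.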

Papers