Additive Perturbation

Additive perturbation research studies how small additive changes to input data affect the performance and robustness of models, with the aims of improving reliability and understanding how models make decisions. Current research investigates these effects across diverse fields, employing techniques such as adversarial training, randomized smoothing, and variational methods in architectures ranging from large language models to graph neural networks. This work is crucial for enhancing the trustworthiness and reliability of machine learning systems in safety-critical applications and for gaining deeper insight into model behavior and vulnerabilities.
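
The sketch below illustrates, under stated assumptions, the additive perturbations at the heart of two of the techniques named above: an FGSM-style signed-gradient step (the inner step of many adversarial-training schemes) and Gaussian-noise voting in the style of randomized smoothing. The toy two-layer classifier and the hyperparameters `eps`, `sigma`, and `n_samples` are illustrative choices, not settings taken from any surveyed paper.

```python
# Minimal sketch of two common additive-perturbation probes (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy classifier standing in for any differentiable model under study.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
model.eval()

x = torch.randn(1, 20)   # a single clean input
y = torch.tensor([1])    # its (assumed) true label


def fgsm_perturbation(model, x, y, eps=0.1):
    """Additive adversarial perturbation: eps times the sign of the input gradient."""
    x_adv = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x + eps * x_adv.grad.sign()).detach()


def smoothed_prediction(model, x, sigma=0.25, n_samples=100):
    """Randomized-smoothing-style estimate: majority class under additive Gaussian noise."""
    with torch.no_grad():
        noise = sigma * torch.randn(n_samples, *x.shape[1:])
        votes = model(x + noise).argmax(dim=1)
    return torch.bincount(votes, minlength=3).argmax().item()


clean_pred = model(x).argmax(dim=1).item()
adv_pred = model(fgsm_perturbation(model, x, y)).argmax(dim=1).item()
smooth_pred = smoothed_prediction(model, x)

print(f"clean: {clean_pred}  adversarial: {adv_pred}  smoothed: {smooth_pred}")
```

In practice, the FGSM-style step would be folded into a training loop for adversarial training, while the noise-vote estimate is the quantity that randomized smoothing uses to derive certified robustness guarantees against additive perturbations.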

Papers