Additive Perturbation
Additive perturbation research studies how small, additive changes to input data affect model performance and robustness, with the goals of improving reliability and understanding model decision-making. Current work spans diverse fields and employs techniques such as adversarial training, randomized smoothing, and variational methods, applied to architectures ranging from large language models to graph neural networks. These efforts matter for building trustworthy machine learning systems in safety-critical applications and for gaining deeper insight into model behavior and vulnerabilities.
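To make the idea concrete, the sketch below is a minimal, hypothetical NumPy example (not drawn from any of the listed papers): it contrasts a worst-case additive perturbation of a toy linear classifier with a randomized-smoothing-style prediction that averages votes under additive Gaussian noise. The weights, the perturbation budget `eps`, and the noise scale `sigma` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier: scores = W x + b (illustrative weights, not from any paper)
W = rng.normal(size=(3, 5))   # 3 classes, 5 input features
b = rng.normal(size=3)

def predict(x):
    """Return the predicted class for input x."""
    return int(np.argmax(W @ x + b))

x = rng.normal(size=5)        # clean input
y = predict(x)

# Worst-case additive perturbation (FGSM-style, exact for a linear model):
# move the input along the sign of the score gap toward a competing class.
eps = 0.5                     # L-infinity perturbation budget (illustrative)
rival = (y + 1) % 3           # any class other than the current prediction
delta = eps * np.sign(W[rival] - W[y])
y_adv = predict(x + delta)

# Randomized-smoothing-style prediction: classify many noisy copies of the
# input and take the majority vote instead of trusting a single forward pass.
sigma = 0.25                  # additive Gaussian noise scale (illustrative)
votes = np.bincount(
    [predict(x + sigma * rng.normal(size=x.shape)) for _ in range(1000)],
    minlength=3,
)
y_smooth = int(np.argmax(votes))

print(f"clean: {y}, adversarial: {y_adv}, smoothed: {y_smooth}")
```

The same additive structure underlies both sides of the literature summarized above: adversarial methods search for the perturbation that most changes the output, while smoothing-based defenses average the model's behavior over random additive noise.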