Input Perturbation
Input perturbation research investigates how machine learning models, particularly deep neural networks and large language models, respond to variations in their input data. Current work evaluates model robustness to perturbations such as random noise, adversarial attacks, synonym replacement, and image blurring, often using benchmarks tailored to specific tasks (e.g., named entity recognition, slot filling). This line of research is central to improving model reliability and trustworthiness in real-world applications, where noisy or manipulated inputs are common.
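To make the idea concrete, the sketch below shows two of the perturbation types mentioned above: additive Gaussian noise on an image-like array and synonym replacement on tokenized text. It is a minimal illustration, not a benchmark implementation; the function names, the assumption that image values lie in [0, 1], and the toy synonym lexicon are all hypothetical choices for this example.

```python
import numpy as np


def add_gaussian_noise(image, sigma=0.05, rng=None):
    """Perturb an image (values assumed in [0, 1]) with zero-mean Gaussian noise."""
    rng = rng or np.random.default_rng()
    noisy = image + rng.normal(0.0, sigma, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)


def replace_synonyms(tokens, synonyms):
    """Swap each token for a synonym where the lexicon provides one."""
    return [synonyms.get(tok, tok) for tok in tokens]


if __name__ == "__main__":
    rng = np.random.default_rng(0)

    # Image perturbation: measure how far the noisy input drifts from the clean one.
    image = rng.random((28, 28))  # stand-in for a grayscale input
    noisy_image = add_gaussian_noise(image, sigma=0.1, rng=rng)
    print("mean absolute perturbation:", np.abs(noisy_image - image).mean())

    # Text perturbation: a hypothetical synonym lexicon, purely illustrative.
    sentence = "the model is quick and accurate".split()
    synonyms = {"quick": "fast", "accurate": "precise"}
    print("perturbed text:", " ".join(replace_synonyms(sentence, synonyms)))
```

In a robustness evaluation, perturbed inputs like these would be fed to the model alongside the clean originals, and metrics such as prediction agreement or accuracy drop would quantify the model's sensitivity to the perturbation.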