Input Perturbation
Input perturbation research investigates how machine learning models, particularly deep neural networks and large language models, respond to variations in their input data. Current work evaluates robustness to perturbations such as random noise, adversarial attacks, and semantics-preserving modifications like synonym replacement or image blurring, often using benchmarks tailored to specific tasks (e.g., named entity recognition, slot filling). Such evaluation is crucial for model reliability and trustworthiness in real-world applications, where noisy or manipulated inputs are common.
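As a concrete illustration of the perturbations mentioned above, the sketch below implements additive Gaussian noise and a box blur for images, plus a simple prediction-consistency check. This is a minimal sketch rather than any specific paper's protocol: `model` stands for a hypothetical classifier mapping a 2-D image to a vector of class scores, and all function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_noise(x, sigma=0.1):
    """Additive Gaussian noise, clipped back to the valid [0, 1] pixel range."""
    return np.clip(x + rng.normal(0.0, sigma, size=x.shape), 0.0, 1.0)

def box_blur(x, k=3):
    """Crude k-by-k box blur over a 2-D image (H, W); stands in for image blurring."""
    pad = k // 2
    padded = np.pad(x, pad, mode="edge")
    # Average the k*k shifted copies of the image to produce the blur.
    return np.stack([
        padded[i:i + x.shape[0], j:j + x.shape[1]]
        for i in range(k) for j in range(k)
    ]).mean(axis=0)

def prediction_consistency(model, x, perturb, n_trials=20):
    """Fraction of trials where the perturbed prediction matches the clean one."""
    clean_label = np.argmax(model(x))
    return np.mean([
        np.argmax(model(perturb(x))) == clean_label for _ in range(n_trials)
    ])

# Usage (hypothetical `model` and image):
# score = prediction_consistency(model, img, lambda x: gaussian_noise(x, 0.05))
# score = prediction_consistency(model, img, box_blur, n_trials=1)  # deterministic
```

A consistency score near 1.0 suggests the model's prediction is stable under that perturbation; sweeping the noise level (e.g., `sigma`) traces out a simple robustness curve.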