Text Perturbation
Text perturbation involves intentionally altering text data—through word substitutions, syntactic changes, or other modifications—to evaluate the robustness and fairness of natural language processing (NLP) models. Current research focuses on using various perturbation techniques to assess model vulnerability to adversarial attacks, improve model generalization and fairness, and enhance the reliability of explanations generated by explainable AI (XAI) methods. This research is crucial for building more reliable and robust NLP systems, impacting areas like machine translation, question answering, and text classification by identifying and mitigating biases and vulnerabilities.
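To make the idea concrete, below is a minimal sketch of a word-substitution perturbation. It uses only a toy, hand-written synonym table (an illustrative assumption, not part of any particular paper or library); real robustness benchmarks typically draw substitutes from resources such as WordNet or word embeddings, for example via tools like TextAttack or nlpaug.

```python
import random

# Toy synonym table used only for illustration; real perturbation tools
# draw substitutes from WordNet, embeddings, or masked language models.
SYNONYMS = {
    "quick": ["fast", "rapid"],
    "happy": ["glad", "pleased"],
    "movie": ["film", "picture"],
}

def perturb(text: str, rate: float = 0.3, seed: int = 0) -> str:
    """Randomly replace a fraction of known words with listed synonyms."""
    rng = random.Random(seed)
    out = []
    for word in text.split():
        key = word.lower()
        if key in SYNONYMS and rng.random() < rate:
            out.append(rng.choice(SYNONYMS[key]))  # substitute a synonym
        else:
            out.append(word)  # leave the word unchanged
    return " ".join(out)

if __name__ == "__main__":
    original = "the quick fox watched a happy movie"
    perturbed = perturb(original, rate=0.5)
    print(original)
    print(perturbed)
    # A robust classifier should assign similar predictions to both inputs;
    # a large prediction shift under such small edits signals brittleness.
```

In a robustness or fairness evaluation, a model is run on both the original and perturbed inputs, and the change in its predictions (or in its explanations, for XAI studies) is measured to quantify sensitivity to these semantically minor edits.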