Text Perturbation

Text perturbation is the deliberate alteration of text data—through word substitutions, syntactic changes, or other modifications—used to evaluate the robustness and fairness of natural language processing (NLP) models. Current research applies such perturbations to probe model vulnerability to adversarial attacks, improve generalization and fairness, and test the stability of explanations produced by explainable AI (XAI) methods. This work is crucial for building more reliable and robust NLP systems, impacting areas like machine translation, question answering, and text classification by exposing and mitigating biases and vulnerabilities.
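As a concrete illustration of the word-substitution style of perturbation mentioned above, the sketch below swaps words for synonyms and compares the perturbed input against the original; a robust model should behave similarly on both. The tiny hand-made synonym lexicon is purely illustrative (real work typically draws candidates from WordNet, embedding neighborhoods, or masked language models), and `perturb` is a hypothetical helper, not an API from any particular library.

```python
import random

# Illustrative synonym lexicon (an assumption for this sketch; in practice
# substitution candidates come from resources like WordNet or embeddings).
SYNONYMS = {
    "good": ["great", "fine"],
    "movie": ["film", "picture"],
    "bad": ["poor", "awful"],
}

def perturb(text, rate=0.5, seed=0):
    """Replace each word found in SYNONYMS with a synonym, with probability `rate`."""
    rng = random.Random(seed)  # fixed seed so perturbations are reproducible
    out = []
    for word in text.split():
        key = word.lower()
        if key in SYNONYMS and rng.random() < rate:
            out.append(rng.choice(SYNONYMS[key]))
        else:
            out.append(word)
    return " ".join(out)

original = "a good movie with a bad ending"
perturbed = perturb(original, rate=1.0)
# A robustness check would feed both strings to a model and compare predictions.
print(perturbed)
```

Evaluating a classifier on many such perturbed variants, and measuring how often its prediction flips, gives a simple estimate of its robustness to meaning-preserving edits.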

Papers