Level Perturbation
Level perturbation research investigates the robustness of natural language processing (NLP) models by introducing carefully designed modifications at various levels: word, phrase, sentence, or even internal model components such as attention mechanisms. Current research focuses both on perturbation methods (attacks) that expose model vulnerabilities and on defense mechanisms that improve resilience, often employing techniques such as adversarial training and contrastive learning within transformer-based architectures. This work is crucial for building more reliable and trustworthy NLP systems: it mitigates the risks posed by adversarial examples and improves models' ability to generalize across diverse real-world scenarios.
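As a minimal illustration of what a word-level perturbation looks like, the sketch below applies random word deletion and adjacent-word swaps to an input sentence. The function name `word_level_perturb` and its probability parameters are illustrative assumptions, not drawn from any specific paper; published attacks typically use synonym substitution or gradient-guided token replacement rather than purely random edits.

```python
import random

def word_level_perturb(sentence: str,
                       drop_prob: float = 0.1,
                       swap_prob: float = 0.1,
                       seed: int = 0) -> str:
    """Return a perturbed copy of `sentence` via random word
    deletion and adjacent-word swaps (illustrative only)."""
    rng = random.Random(seed)
    words = sentence.split()

    # Randomly drop words, but always keep at least one.
    kept = [w for w in words if rng.random() > drop_prob] or words[:1]

    # Randomly swap adjacent word pairs.
    i = 0
    while i < len(kept) - 1:
        if rng.random() < swap_prob:
            kept[i], kept[i + 1] = kept[i + 1], kept[i]
            i += 2  # skip past the swapped pair
        else:
            i += 1
    return " ".join(kept)

if __name__ == "__main__":
    print(word_level_perturb("the quick brown fox jumps over the lazy dog"))
```

In adversarial training, perturbed inputs like these are typically mixed into each training batch so the model learns to produce consistent predictions on both clean and perturbed text.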