Level Perturbation
Level perturbation research investigates the robustness of natural language processing (NLP) models by introducing carefully designed modifications at various linguistic levels (word, phrase, sentence, or even attention mechanisms within models). Current research focuses on developing both perturbation methods (attacks) to expose vulnerabilities and defense mechanisms to improve model resilience, often employing techniques like adversarial training and contrastive learning within transformer-based architectures. This work is crucial for building more reliable and trustworthy NLP systems, mitigating risks associated with adversarial examples and improving the generalization capabilities of models across diverse real-world scenarios.
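To make the idea of perturbation levels concrete, below is a minimal, illustrative Python sketch of simple word-level (token swap, token deletion) and character-level (typo injection) perturbations that can be used to probe a model's robustness. The function names and the choice of operations are hypothetical examples, not taken from any particular paper or library.

```python
import random


def swap_adjacent_words(tokens, rng):
    """Word-level perturbation: swap two adjacent tokens at random."""
    if len(tokens) < 2:
        return list(tokens)
    i = rng.randrange(len(tokens) - 1)
    out = list(tokens)
    out[i], out[i + 1] = out[i + 1], out[i]
    return out


def drop_random_word(tokens, rng):
    """Word-level perturbation: delete one token at random."""
    if len(tokens) < 2:
        return list(tokens)
    i = rng.randrange(len(tokens))
    return tokens[:i] + tokens[i + 1:]


def inject_typo(tokens, rng):
    """Character-level perturbation: transpose two characters inside one token."""
    out = list(tokens)
    i = rng.randrange(len(out))
    word = out[i]
    if len(word) > 1:
        j = rng.randrange(len(word) - 1)
        out[i] = word[:j] + word[j + 1] + word[j] + word[j + 2:]
    return out


def perturb(sentence, n_variants=3, seed=0):
    """Generate perturbed variants of a sentence for robustness evaluation."""
    rng = random.Random(seed)
    tokens = sentence.split()
    ops = [swap_adjacent_words, drop_random_word, inject_typo]
    return [" ".join(rng.choice(ops)(tokens, rng)) for _ in range(n_variants)]


if __name__ == "__main__":
    for variant in perturb("the quick brown fox jumps over the lazy dog"):
        print(variant)
```

In adversarial-training or contrastive-learning setups, variants produced this way (or by stronger, gradient- or search-based attacks) are typically mixed into the training data or paired with the original sentence so the model learns to produce consistent predictions under perturbation.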