Robust Text Classification
Robust text classification aims to develop text classifiers that maintain accuracy even when faced with noisy data, adversarial attacks, or distributional shifts across datasets. Current research focuses on improving model robustness through techniques like randomized smoothing, prototype-based networks, and data augmentation strategies such as counterfactual generation and backdoor adjustment, often applied to large language models. These advancements are crucial for deploying reliable text classifiers in real-world applications, particularly in safety-critical domains like healthcare and finance, where spurious correlations and biases can have significant consequences.
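Among the techniques named above, randomized smoothing has a particularly simple core idea: classify many randomly perturbed copies of an input and take a majority vote, so that no single small change (e.g. an adversarial word swap) can flip the prediction. A minimal sketch, assuming a hypothetical keyword-based `toy_classifier` as a stand-in for a real model and random word deletion as the perturbation:

```python
import random
from collections import Counter

def toy_classifier(text: str) -> str:
    # Hypothetical stand-in for a trained text classifier.
    return "toxic" if any(w in text.lower() for w in ("attack", "hate")) else "benign"

def smoothed_predict(text: str, classifier, num_samples: int = 25,
                     drop_prob: float = 0.2, seed: int = 0) -> str:
    """Randomized smoothing for text: classify many randomly perturbed
    copies (here, via random word deletion) and return the majority vote."""
    rng = random.Random(seed)
    words = text.split()
    votes = Counter()
    for _ in range(num_samples):
        # Drop each word independently with probability drop_prob;
        # fall back to the full text if everything was dropped.
        kept = [w for w in words if rng.random() > drop_prob] or words
        votes[classifier(" ".join(kept))] += 1
    return votes.most_common(1)[0][0]

print(smoothed_predict("they launched an attack on the system", toy_classifier))
```

In certified variants of this idea, the vote margin over the perturbation distribution is used to bound how many token-level edits an attacker would need to change the smoothed prediction; the sketch above shows only the empirical voting step.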