Human-Written Text Perturbation
Human-written text perturbation research investigates how humans subtly alter text and how those alterations affect applications such as hate speech detection and logical reasoning systems. Current work focuses on building benchmark datasets of these perturbations, often from crowdsourced data, and on evaluating the robustness of machine learning models (including transformers such as BERT and RoBERTa) against them. This research is crucial for improving the reliability and resilience of AI systems in real-world settings where adversarial or noisy text is common.
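To make the idea concrete, here is a minimal sketch of the kinds of subtle edits such benchmarks catalogue. The specific operations (an adjacent-character typo swap and a homoglyph substitution) are illustrative assumptions, not drawn from any particular dataset in this line of work.

```python
# Illustrative human-style text perturbations; the operations and
# character mappings below are assumptions for demonstration only.

# Latin letters mapped to visually similar Cyrillic look-alikes.
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e"}

def swap_adjacent(text: str, index: int) -> str:
    """Simulate a typo by swapping the characters at index and index+1."""
    if index < 0 or index + 1 >= len(text):
        return text
    chars = list(text)
    chars[index], chars[index + 1] = chars[index + 1], chars[index]
    return "".join(chars)

def substitute_homoglyphs(text: str) -> str:
    """Replace Latin letters with look-alike Cyrillic characters.

    The result renders almost identically to a human reader but uses
    different Unicode codepoints, which can evade exact-match filters.
    """
    return "".join(HOMOGLYPHS.get(c, c) for c in text)

if __name__ == "__main__":
    print(swap_adjacent("hate", 1))       # "htae"
    print(substitute_homoglyphs("hate"))  # looks like "hate", differs in codepoints
```

Perturbations like these preserve human readability while changing the token sequence a model sees, which is why robustness evaluations measure whether classifier predictions stay stable under them.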