NLP Model
Natural Language Processing (NLP) models aim to enable computers to understand, interpret, and generate human language. Current research focuses on improving model robustness to noisy or user-generated text; enhancing explainability and interpretability through techniques such as counterfactual explanations and latent concept attribution; and addressing concerns around fairness and privacy. These advances are crucial for building reliable and trustworthy NLP systems with broad applications across domains, including legal tech, healthcare, and social media analysis.
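As a rough illustration of the counterfactual-based robustness checks mentioned above, the sketch below measures how often a classifier's prediction flips when a label-relevant span is edited. The keyword-based sentiment stub is a toy assumption standing in for a real model (it is not the method of any paper listed here); the point is the evaluation loop, where a low flip rate suggests the model is inattentive to the edited span.

```python
# Hedged sketch: probing a classifier with counterfactual edits.
# toy_sentiment is a hypothetical stand-in for a real NLP model.

def toy_sentiment(text: str) -> str:
    """Toy model: predicts 'positive' if the word 'great' appears."""
    return "positive" if "great" in text.lower() else "negative"

def counterfactual_flip_rate(pairs, predict) -> float:
    """Fraction of (original, counterfactual) pairs whose prediction flips.
    A rate well below 1.0 indicates the model ignores the edited span."""
    flips = sum(predict(orig) != predict(cf) for orig, cf in pairs)
    return flips / len(pairs)

# Each counterfactual minimally edits the label-relevant phrase.
pairs = [
    ("The movie was great.", "The movie was terrible."),
    ("Great acting throughout.", "Awful acting throughout."),
]
rate = counterfactual_flip_rate(pairs, toy_sentiment)
print(f"flip rate: {rate:.2f}")  # 1.00 here: every edit changes the prediction
```

In practice, `predict` would wrap a trained model, and the counterfactual pairs would come from human or LLM-generated edits, as studied in several of the papers below.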
Papers
Measuring and Improving Attentiveness to Partial Inputs with Counterfactuals
Yanai Elazar, Bhargavi Paranjape, Hao Peng, Sarah Wiegreffe, Khyathi Raghavi, Vivek Srikumar, Sameer Singh, Noah A. Smith
What if you said that differently?: How Explanation Formats Affect Human Feedback Efficacy and User Perception
Chaitanya Malaviya, Subin Lee, Dan Roth, Mark Yatskar
Hierarchical Classification System for Breast Cancer Specimen Report (HCSBC) -- an end-to-end model for characterizing severity and diagnosis
Thiago Santos, Harish Kamath, Christopher R. McAdams, Mary S. Newell, Marina Mosunjac, Gabriela Oprea-Ilies, Geoffrey Smith, Constance Lehman, Judy Gichoya, Imon Banerjee, Hari Trivedi
People Make Better Edits: Measuring the Efficacy of LLM-Generated Counterfactually Augmented Data for Harmful Language Detection
Indira Sen, Dennis Assenmacher, Mattia Samory, Isabelle Augenstein, Wil van der Aalst, Claudia Wagner