NLP Model
Natural Language Processing (NLP) models aim to enable computers to understand, interpret, and generate human language. Current research focuses on improving model robustness to noisy or user-generated content, enhancing explainability and interpretability through techniques like counterfactual explanations and latent concept attribution, and addressing biases related to fairness and privacy. These advancements are crucial for building reliable and trustworthy NLP systems with broad applications across various domains, including legal tech, healthcare, and social media analysis.
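The overview above mentions counterfactual explanations as one interpretability technique. The sketch below is a toy illustration of the idea only: find a minimal edit to the input that flips a classifier's prediction. The classifier, lexicon, and substitution table are hypothetical placeholders and are not drawn from any of the papers listed on this page.

```python
# Toy sketch of a counterfactual explanation for a text classifier.
# All names below (POSITIVE, NEGATIVE, SUBSTITUTIONS) are illustrative
# placeholders, not an actual published method or library API.

POSITIVE = {"great", "excellent", "helpful"}
NEGATIVE = {"poor", "terrible", "useless"}

def classify(tokens):
    """Toy sentiment classifier: 'positive' if positive words outnumber negative ones."""
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score > 0 else "negative"

# Candidate single-word substitutions used to search for a counterfactual input.
SUBSTITUTIONS = {"great": "poor", "excellent": "terrible", "helpful": "useless"}

def counterfactual(tokens):
    """Return the smallest single-token edit that changes the prediction, if any."""
    original = classify(tokens)
    for i, tok in enumerate(tokens):
        if tok in SUBSTITUTIONS:
            edited = tokens[:i] + [SUBSTITUTIONS[tok]] + tokens[i + 1:]
            if classify(edited) != original:
                return edited, f"replacing '{tok}' with '{SUBSTITUTIONS[tok]}' flips the label"
    return None, "no single-token counterfactual found"

if __name__ == "__main__":
    text = "the model gave a great answer".split()
    print(classify(text))        # positive
    print(counterfactual(text))  # edit that turns the prediction negative
```

The edited sentence serves as the explanation: it shows which part of the input the (toy) model's decision hinges on.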
Papers
June 8, 2023
Overview of the Problem List Summarization (ProbSum) 2023 Shared Task on Summarizing Patients' Active Diagnoses and Problems from Electronic Health Record Progress Notes
Yanjun Gao, Dmitriy Dligach, Timothy Miller, Matthew M. Churpek, Majid Afshar
Assessing Phrase Break of ESL Speech with Pre-trained Language Models and Large Language Models
Zhiyi Wang, Shaoguang Mao, Wenshan Wu, Yan Xia, Yan Deng, Jonathan Tien