Clinical Note Summarization
Clinical note summarization aims to automatically condense lengthy medical records into concise, informative summaries, improving clinician efficiency and patient care. Current research relies heavily on large language models (LLMs), typically combined with fine-tuning and prompt engineering to improve accuracy and faithfulness while mitigating hallucinations and omissions. The field matters for reducing clinician workload, improving diagnostic accuracy, and supporting better communication and decision-making in healthcare settings. A key focus is developing robust evaluation metrics that accurately capture the quality and factual accuracy of generated summaries; a minimal illustrative sketch of the prompt-and-evaluate pattern follows below.
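As a rough illustration of the workflow described above, the sketch below shows prompt-based summarization of a progress note with a lexical-overlap (ROUGE) check afterward. The `call_llm` function and the prompt wording are hypothetical placeholders, not the method of any specific paper listed on this page; only the `rouge_score` usage reflects a real library API.

```python
# Minimal sketch: prompt-based clinical note summarization plus a ROUGE check.
# Assumptions: `call_llm` is a hypothetical stand-in for an LLM client; the
# prompt template is illustrative only.
from rouge_score import rouge_scorer


PROMPT_TEMPLATE = (
    "Summarize the following progress note as a concise problem list. "
    "Include only diagnoses and problems explicitly stated in the note; "
    "do not add information that is not present.\n\nNote:\n{note}\n\nSummary:"
)


def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for an LLM completion call."""
    raise NotImplementedError("Plug in your LLM client here.")


def summarize_note(note: str) -> str:
    # Prompt engineering step: constrain the model to the source note
    # to reduce hallucinated content.
    return call_llm(PROMPT_TEMPLATE.format(note=note))


def rouge_eval(reference: str, candidate: str) -> dict:
    # Lexical-overlap scores only; factual accuracy typically needs further
    # checks (e.g., entailment-based metrics or clinician review).
    scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
    return scorer.score(reference, candidate)
```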
Papers
Towards Evaluating and Building Versatile Large Language Models for Medicine
Chaoyi Wu, Pengcheng Qiu, Jinxin Liu, Hongfei Gu, Na Li, Ya Zhang, Yanfeng Wang, Weidi Xie
uMedSum: A Unified Framework for Advancing Medical Abstractive Summarization
Aishik Nagar, Yutong Liu, Andy T. Liu, Viktor Schlegel, Vijay Prakash Dwivedi, Arun-Kumar Kaliya-Perumal, Guna Pratheep Kalanchiam, Yili Tang, Robby T. Tan
CUED at ProbSum 2023: Hierarchical Ensemble of Summarization Models
Potsawee Manakul, Yassir Fathullah, Adian Liusie, Vyas Raina, Vatsal Raina, Mark Gales
Overview of the Problem List Summarization (ProbSum) 2023 Shared Task on Summarizing Patients' Active Diagnoses and Problems from Electronic Health Record Progress Notes
Yanjun Gao, Dmitriy Dligach, Timothy Miller, Matthew M. Churpek, Majid Afshar