Structured Summary
Structured summarization research aims to automatically generate concise, informative summaries from diverse text sources while preserving factual accuracy and coherence. Current work concentrates on improving the faithfulness and informativeness of Large Language Models (LLMs) for summarization, mitigating hallucination and bias, and developing more robust evaluation metrics that go beyond simple lexical-overlap measures. The field helps manage the growing volume of digital information, with applications spanning healthcare, finance, scientific literature review, and broader accessibility of information. Progress on summarization techniques is in turn driving advances in both LLM architecture and evaluation methodology.
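To make "simple overlap measures" concrete, the minimal sketch below computes a unigram-overlap F1 in the spirit of ROUGE-1; it is an illustration only and is not taken from any of the papers listed. It shows why overlap alone is considered insufficient: a summary that changes a key fact but reuses the reference's words still scores highly, which motivates the faithfulness-oriented metrics (e.g., entailment- or QA-based checks) discussed above. The function name and example texts are assumptions chosen for this sketch.

    from collections import Counter

    def unigram_f1(reference: str, summary: str) -> float:
        """Token-overlap F1 between a reference and a candidate summary
        (a simplified, unstemmed stand-in for ROUGE-1)."""
        ref_tokens = Counter(reference.lower().split())
        sum_tokens = Counter(summary.lower().split())
        overlap = sum((ref_tokens & sum_tokens).values())
        if overlap == 0:
            return 0.0
        precision = overlap / sum(sum_tokens.values())
        recall = overlap / sum(ref_tokens.values())
        return 2 * precision * recall / (precision + recall)

    reference = "the trial reported a 12 percent reduction in mortality"
    faithful = "the trial reported a 12 percent reduction in mortality"
    hallucinated = "the trial reported a 21 percent reduction in mortality"

    print(unigram_f1(reference, faithful))      # 1.0
    print(unigram_f1(reference, hallucinated))  # ~0.89: high score despite a factual error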
Papers
Factual Consistency Evaluation of Summarisation in the Era of Large Language Models
Zheheng Luo, Qianqian Xie, Sophia Ananiadou
The Lay Person's Guide to Biomedicine: Orchestrating Large Language Models
Zheheng Luo, Qianqian Xie, Sophia Ananiadou
Ranking Large Language Models without Ground Truth
Amit Dhurandhar, Rahul Nair, Moninder Singh, Elizabeth Daly, Karthikeyan Natesan Ramamurthy
MedSumm: A Multimodal Approach to Summarizing Code-Mixed Hindi-English Clinical Queries
Akash Ghosh, Arkadeep Acharya, Prince Jha, Aniket Gaudgaul, Rajdeep Majumdar, Sriparna Saha, Aman Chadha, Raghav Jain, Setu Sinha, Shivani Agarwal
Question-Answering Based Summarization of Electronic Health Records using Retrieval Augmented Generation
Walid Saba, Suzanne Wendelken, James Shanahan