Structured Summary
Structured summarization research focuses on automatically generating concise, informative summaries from diverse text sources while prioritizing factual accuracy and coherence. Current efforts concentrate on improving the faithfulness and informativeness of Large Language Models (LLMs) for summarization, mitigating issues such as hallucination and bias, and developing evaluation metrics more robust than simple lexical-overlap measures. The field is crucial for managing the ever-growing volume of digital information, with applications spanning healthcare, finance, scientific literature review, and improved accessibility. Progress on more effective summarization techniques is in turn driving advances in both LLM architectures and evaluation methodologies.
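To make the point about overlap metrics concrete, here is a minimal, self-contained Python sketch (not taken from any of the papers below; the example sentences and the helper name rouge1_recall are illustrative assumptions) showing how a ROUGE-1-style unigram recall can give a hallucinated summary the same score as a faithful one:

```python
# Illustrative sketch: why n-gram overlap alone can miss hallucinations.
# Both candidates share almost all unigrams with the reference, so a
# ROUGE-1-style recall cannot distinguish the faithful summary from one
# that inverts a key fact.
from collections import Counter

def rouge1_recall(reference: str, candidate: str) -> float:
    """Unigram recall: fraction of reference tokens also produced by the candidate."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    overlap = sum(min(count, cand_counts[tok]) for tok, count in ref_counts.items())
    return overlap / max(sum(ref_counts.values()), 1)

reference = "the trial showed the drug reduced symptoms in most patients"
faithful = "the drug reduced symptoms in most patients in the trial"
hallucinated = "the trial showed the drug increased symptoms in most patients"

print(f"faithful:     {rouge1_recall(reference, faithful):.2f}")   # 0.90
print(f"hallucinated: {rouge1_recall(reference, hallucinated):.2f}")  # 0.90
# The hallucinated summary scores just as high despite contradicting the
# source, which is why the work surveyed here explores entailment- and
# QA-based faithfulness checks in addition to overlap metrics.
```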
Papers
MedSumm: A Multimodal Approach to Summarizing Code-Mixed Hindi-English Clinical Queries
Akash Ghosh, Arkadeep Acharya, Prince Jha, Aniket Gaudgaul, Rajdeep Majumdar, Sriparna Saha, Aman Chadha, Raghav Jain, Setu Sinha, Shivani Agarwal
Question-Answering Based Summarization of Electronic Health Records using Retrieval Augmented Generation
Walid Saba, Suzanne Wendelken, James Shanahan
Zero-shot Conversational Summarization Evaluations with small Large Language Models
Ramesh Manuvinakurike, Saurav Sahay, Sangeeta Manepalli, Lama Nachman
Reinforcement Replaces Supervision: Query focused Summarization using Deep Reinforcement Learning
Swaroop Nath, Harshad Khadilkar, Pushpak Bhattacharyya
LLM aided semi-supervision for Extractive Dialog Summarization
Nishant Mishra, Gaurav Sahu, Iacer Calixto, Ameen Abu-Hanna, Issam H. Laradji
Leveraging Generative AI for Clinical Evidence Summarization Needs to Ensure Trustworthiness
Gongbo Zhang, Qiao Jin, Denis Jered McInerney, Yong Chen, Fei Wang, Curtis L. Cole, Qian Yang, Yanshan Wang, Bradley A. Malin, Mor Peleg, Byron C. Wallace, Zhiyong Lu, Chunhua Weng, Yifan Peng