Structured Summary
Structured summarization research aims to automatically generate concise, informative summaries from diverse text sources while preserving factual accuracy and coherence. Current work concentrates on improving the faithfulness and informativeness of Large Language Models (LLMs) for summarization, mitigating hallucination and bias, and developing evaluation metrics more robust than simple lexical-overlap measures such as ROUGE. The field is crucial for managing the ever-growing volume of digital information, with applications spanning healthcare, finance, scientific literature review, and improved accessibility of information. Progress in summarization techniques is in turn driving advances in both LLM architecture and evaluation methodology.
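To make concrete what "simple overlap measures" means, here is a minimal sketch of ROUGE-1 F1, the standard unigram-overlap metric; this is an illustrative reimplementation, not code from any of the papers listed below, and it deliberately shows the metric's blind spot to factual errors.

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: unigram overlap between a candidate summary and a reference.

    Illustrative sketch only: real evaluations typically add stemming,
    tokenization rules, and multi-reference handling.
    """
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# A factually wrong summary can still score highly on word overlap
# ("increased" vs. "decreased" flips the meaning but changes one token),
# which is why the field seeks metrics beyond lexical overlap.
score = rouge1_f1("the model increased accuracy",
                  "the model decreased accuracy")
```

Here the two sentences share 3 of 4 unigrams, so ROUGE-1 F1 is 0.75 even though the candidate contradicts the reference, illustrating why overlap-based metrics cannot capture faithfulness.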
Papers
Neural Natural Language Processing for Long Texts: A Survey on Classification and Summarization
Dimitrios Tsirmpas, Ioannis Gkionis, Georgios Th. Papadopoulos, Ioannis Mademlis
Abstractive Summary Generation for the Urdu Language
Ali Raza, Hadia Sultan Raja, Usman Maratib
Do You Hear The People Sing? Key Point Analysis via Iterative Clustering and Abstractive Summarisation
Hao Li, Viktor Schlegel, Riza Batista-Navarro, Goran Nenadic
Neural Summarization of Electronic Health Records
Koyena Pal, Seyed Ali Bahrainian, Laura Mercurio, Carsten Eickhoff
SummIt: Iterative Text Summarization via ChatGPT
Haopeng Zhang, Xiao Liu, Jiawei Zhang
AWESOME: GPU Memory-constrained Long Document Summarization using Memory Mechanism and Global Salient Content
Shuyang Cao, Lu Wang