Abstractive Summarization
Abstractive summarization aims to generate concise, coherent summaries that capture the essence of a source text; unlike extractive methods, which simply copy existing sentences, it composes new ones. Current research emphasizes improving the accuracy and faithfulness of these summaries, particularly by addressing hallucination (generating information unsupported by the source) and by ensuring coverage of diverse perspectives. Work in this area leverages large language models (LLMs) and transformer-based architectures, often combined with reinforcement learning, attention mechanisms, and multi-modal learning, and explores ways to incorporate user preferences or to focus summaries on specific aspects. These advances have significant implications for applications such as information retrieval, document processing, and healthcare, where they enable efficient and accurate synthesis of information from large volumes of text and other data modalities.
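The extractive/abstractive contrast above can be made concrete with a toy extractive baseline: it can only copy sentences verbatim from the source, whereas an abstractive model (e.g. an LLM fine-tuned for summarization) would generate new phrasing. This sketch is purely illustrative and is not drawn from any of the papers listed below; the scoring heuristic (average word frequency) is an assumption chosen for simplicity.

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=1):
    """Toy extractive summarizer: score each sentence by the average
    frequency of its words in the whole document and return the
    top-scoring sentences verbatim -- no new text is generated,
    which is exactly what an abstractive model would do differently."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"\w+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"\w+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    # Select the top-scoring sentences, then emit them in document order.
    chosen = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    return " ".join(s for s in sentences if s in chosen)

doc = ("Abstractive summarization generates new sentences. "
       "Extractive summarization copies sentences from the source. "
       "Cats are unrelated to summarization.")
print(extractive_summary(doc, n_sentences=1))
```

Because the output is always a verbatim subset of the input, such a baseline can never hallucinate, but it also cannot compress or rephrase; abstractive systems accept that trade-off, which is why faithfulness is a central research concern.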
Papers
Factorizing Content and Budget Decisions in Abstractive Summarization of Long Documents
Marcio Fonseca, Yftah Ziser, Shay B. Cohen
Counterfactual Data Augmentation improves Factuality of Abstractive Summarization
Dheeraj Rajagopal, Siamak Shakeri, Cicero Nogueira dos Santos, Eduard Hovy, Chung-Ching Chang