Abstractive Summarization
Abstractive summarization aims to generate concise, coherent summaries that capture the essence of a source text, unlike extractive methods, which simply select existing sentences. Current research emphasizes improving the accuracy and faithfulness of these summaries, particularly by addressing hallucination (generating information unsupported by the source) and ensuring coverage of diverse perspectives. This work leverages large language models (LLMs) and transformer-based architectures, often combined with reinforcement learning, attention mechanisms, and multi-modal learning, and explores ways to incorporate user preferences or focus on specific aspects of the source. These advances have significant implications for applications such as information retrieval, document processing, and healthcare, where they enable efficient and accurate synthesis of information from large volumes of text and other data modalities.
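As a minimal sketch of the transformer-based approach described above, the following example runs a pretrained sequence-to-sequence model through the Hugging Face transformers summarization pipeline. The facebook/bart-large-cnn checkpoint and the input text are illustrative choices, not tied to any paper listed below; any seq2seq summarization model would work, and the example assumes transformers and a backend such as PyTorch are installed.

    # Minimal abstractive summarization sketch (assumes: pip install transformers torch).
    from transformers import pipeline

    # Load a pretrained abstractive summarizer; BART-large fine-tuned on CNN/DailyMail
    # is one commonly used checkpoint.
    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

    article = (
        "Abstractive summarization systems generate new sentences rather than "
        "copying them from the source. Unlike extractive methods, they can "
        "paraphrase, compress, and fuse information, but they also risk "
        "hallucinating content that the source does not support."
    )

    # do_sample=False uses deterministic (beam search) decoding, which tends to
    # produce more faithful output than sampling-based generation.
    result = summarizer(article, max_length=60, min_length=15, do_sample=False)
    print(result[0]["summary_text"])

Unlike an extractive system, the model here generates the summary token by token, so the output sentences need not appear anywhere in the input.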
Papers
Systematic Exploration of Dialogue Summarization Approaches for Reproducibility, Comparative Assessment, and Methodological Innovations for Advancing Natural Language Processing in Abstractive Summarization
Yugandhar Reddy Gogireddy, Jithendra Reddy Gogireddy
DomainSum: A Hierarchical Benchmark for Fine-Grained Domain Shift in Abstractive Text Summarization
Haohan Yuan, Haopeng Zhang
L3Cube-MahaSum: A Comprehensive Dataset and BART Models for Abstractive Text Summarization in Marathi
Pranita Deshmukh, Nikita Kulkarni, Sanhita Kulkarni, Kareena Manghani, Raviraj Joshi
Extra Global Attention Designation Using Keyword Detection in Sparse Transformer Architectures
Evan Lucas, Dylan Kangas, Timothy C Havens