Abstractive Summarization
Abstractive summarization aims to generate concise, coherent summaries that capture the essence of a source text, unlike extractive methods, which simply select existing sentences. Current research emphasizes the accuracy and faithfulness of generated summaries, in particular reducing hallucination (generating content the source does not support) and ensuring coverage of diverse perspectives. This work typically builds on large language models (LLMs) and transformer-based architectures, often combined with reinforcement learning, attention mechanisms, and multi-modal learning, and it explores ways to accommodate user preferences or to focus a summary on specific aspects of the source. These advances matter for applications such as information retrieval, document processing, and healthcare, where they enable efficient and accurate synthesis of information from large volumes of text and other data modalities.
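As a concrete starting point, the sketch below shows how a pretrained sequence-to-sequence transformer can produce an abstractive summary in a few lines. The choice of the Hugging Face `transformers` library and the `facebook/bart-large-cnn` checkpoint is an illustrative assumption, not something prescribed by the papers listed here.

```python
# Minimal abstractive summarization sketch using a pretrained
# encoder-decoder transformer (BART fine-tuned on CNN/DailyMail).
# Assumes: pip install transformers torch
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "Abstractive summarizers generate new sentences rather than copying "
    "them from the source. Modern systems are built on transformer "
    "encoder-decoder models and trained on large news corpora, but they "
    "can hallucinate facts that the source never states."
)

# max_length / min_length bound the summary length in tokens; beam search
# (do_sample=False) keeps the output deterministic across runs.
result = summarizer(article, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```

Because the model generates tokens freely rather than copying spans, its output can contradict the source, which is exactly the faithfulness problem the post-editing work below targets.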
Papers
Correcting Diverse Factual Errors in Abstractive Summarization via Post-Editing and Language Model Infilling
Vidhisha Balachandran, Hannaneh Hajishirzi, William W. Cohen, Yulia Tsvetkov
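The sketch below illustrates only the general idea of language-model infilling for post-editing, not the specific correction system proposed in this paper: a suspect span in a draft summary is replaced with a mask token, and a denoising sequence-to-sequence model regenerates the sentence. The choice of BART and the example sentence are assumptions for illustration.

```python
# Generic illustration of language-model infilling for post-editing a
# draft summary; NOT the correction system described in the paper above.
# Assumes: pip install transformers torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

# A draft summary sentence whose entity was flagged as unsupported by the
# source; the suspect span is replaced with BART's mask token.
draft = "The agreement was signed in <mask> by both governments."
inputs = tokenizer(draft, return_tensors="pt")

# BART is pretrained as a denoiser, so generation fills in the masked span.
output_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=30)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```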
Salience Allocation as Guidance for Abstractive Summarization
Fei Wang, Kaiqiang Song, Hongming Zhang, Lifeng Jin, Sangwoo Cho, Wenlin Yao, Xiaoyang Wang, Muhao Chen, Dong Yu
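As a generic illustration of salience-guided summarization, and not the allocation method proposed in the paper above, one can score source sentences for salience and feed only the top-ranked ones to an abstractive summarizer. The TF-IDF scoring and the top-2 cutoff here are illustrative assumptions.

```python
# Generic sketch of using sentence salience as guidance for an abstractive
# summarizer; NOT the specific allocation method of the paper above.
# Assumes: pip install scikit-learn transformers torch
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import pipeline

sentences = [
    "The council approved the new transit budget on Monday.",
    "Several members praised the plan's focus on accessibility.",
    "Unrelated construction noise briefly interrupted the session.",
    "The budget allocates funds for two new light-rail lines.",
]

# Score each sentence by TF-IDF similarity to the document as a whole;
# higher scores mark more salient sentences.
vectorizer = TfidfVectorizer()
sent_vecs = vectorizer.fit_transform(sentences)
doc_vec = vectorizer.transform([" ".join(sentences)])
scores = cosine_similarity(sent_vecs, doc_vec).ravel()

# Keep the two most salient sentences, restored to original order, and
# pass only those to the summarizer as guidance.
top = sorted(sorted(range(len(sentences)), key=lambda i: -scores[i])[:2])
guided_input = " ".join(sentences[i] for i in top)

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
print(summarizer(guided_input, max_length=30, min_length=5)[0]["summary_text"])
```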