Abstractive Summarization
Abstractive summarization aims to generate concise, coherent summaries that capture the essence of a source text, unlike extractive methods, which simply select existing sentences. Current research emphasizes improving the accuracy and faithfulness of generated summaries, particularly by addressing hallucination (generating information unsupported by the source) and by ensuring coverage of diverse perspectives. This work leverages large language models (LLMs) and transformer-based architectures, often combined with reinforcement learning, attention mechanisms, and multi-modal learning, and explores ways to incorporate user preferences or to focus on specific aspects of the source. These advances have significant implications for applications such as information retrieval, document processing, and healthcare, where they enable efficient and accurate synthesis of information from large volumes of text and other data modalities.
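In practice, abstractive summarization is commonly exercised through pretrained sequence-to-sequence transformers. The sketch below is a minimal, hedged example; the Hugging Face transformers library and the facebook/bart-large-cnn checkpoint are assumptions chosen for illustration, not something the papers listed here prescribe.

    # Minimal abstractive summarization sketch (assumes the Hugging Face
    # transformers library; facebook/bart-large-cnn is one common
    # summarization checkpoint, any seq2seq equivalent would do).
    from transformers import pipeline

    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

    source = (
        "Abstractive summarizers generate new sentences rather than copying "
        "them from the source, which makes them fluent but also prone to "
        "hallucinating content that the source does not support."
    )

    # max_length / min_length bound the summary length in tokens.
    result = summarizer(source, max_length=40, min_length=10, do_sample=False)
    print(result[0]["summary_text"])

Unlike an extractive system, the generated sentence need not appear verbatim in the source, which is precisely why faithfulness checks such as those studied in the papers below are needed.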
Papers
Factually Consistent Summarization via Reinforcement Learning with Textual Entailment Feedback
Paul Roit, Johan Ferret, Lior Shani, Roee Aharoni, Geoffrey Cideron, Robert Dadashi, Matthieu Geist, Sertan Girgin, Léonard Hussenot, Orgad Keller, Nikola Momchev, Sabela Ramos, Piotr Stanczyk, Nino Vieillard, Olivier Bachem, Gal Elidan, Avinatan Hassidim, Olivier Pietquin, Idan Szpektor
IDAS: Intent Discovery with Abstractive Summarization
Maarten De Raedt, Fréderic Godin, Thomas Demeester, Chris Develder
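The first paper above casts factual consistency as a reinforcement-learning problem whose reward comes from a textual entailment (NLI) model: a summary is rewarded when the source document entails it. As a hedged illustration of that reward signal only (not the authors' exact training setup), the sketch below scores a candidate summary with an off-the-shelf NLI checkpoint; the model name and its contradiction/neutral/entailment label order are assumptions about the roberta-large-mnli checkpoint.

    # Sketch of an entailment-based reward: the probability that the
    # source (premise) entails the summary (hypothesis). Illustrative
    # only; not the paper's exact reward or training procedure.
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
    model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

    def entailment_reward(source: str, summary: str) -> float:
        inputs = tokenizer(source, summary, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        probs = logits.softmax(dim=-1)[0]
        # Assumed label order for this checkpoint:
        # 0 = contradiction, 1 = neutral, 2 = entailment.
        return probs[2].item()

    # A faithful summary should score higher than a hallucinated one.
    doc = "The company reported a 10% rise in quarterly revenue."
    print(entailment_reward(doc, "Quarterly revenue rose by 10%."))
    print(entailment_reward(doc, "The company's CEO resigned."))

In an RL loop, a score like this would be computed for each sampled summary and used as (part of) the reward when updating the summarization policy.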