Text Summarization
Text summarization aims to condense large amounts of text into concise, informative summaries, automating a task central to information processing and retrieval. Current research relies heavily on large language models (LLMs) and explores both extractive methods (selecting existing sentences) and abstractive methods (generating new text), often incorporating attention mechanisms, reinforcement learning, and fine-tuning strategies to improve accuracy and coherence. The field is significant because of its broad applications, from news aggregation and scientific literature review to professional workflows, and ongoing research targets challenges such as hallucination (factual inaccuracies in generated summaries) and the design of better evaluation metrics.
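To make the extractive/abstractive distinction concrete, here is a minimal sketch of the extractive side: score each sentence by the frequency of its words and return the top-scoring sentences verbatim. This is an illustrative baseline, not any of the methods from the papers below; the function name, the naive sentence splitter, and the length-based stopword filter are all assumptions of this sketch.

```python
import re
from collections import Counter

def extractive_summary(text, num_sentences=2):
    """Frequency-based extractive summarization (illustrative baseline)."""
    # Naive sentence split on terminal punctuation; a real system
    # would use a proper sentence tokenizer.
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    # Word frequencies over the whole document; dropping very short
    # tokens acts as a crude stopword filter.
    words = re.findall(r'[a-z]+', text.lower())
    freq = Counter(w for w in words if len(w) > 3)

    def score(sentence):
        # A sentence's score is the summed frequency of its words.
        return sum(freq[w] for w in re.findall(r'[a-z]+', sentence.lower()))

    # Keep the top-scoring sentences, but emit them in original order
    # so the summary stays coherent.
    top = sorted(sentences, key=score, reverse=True)[:num_sentences]
    return ' '.join(s for s in sentences if s in top)
```

An abstractive system would instead generate new sentences (today, typically with an LLM), which is what makes hallucination a concern there but not for purely extractive approaches.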
Papers
Evaluating Factual Consistency of Texts with Semantic Role Labeling
Jing Fan, Dennis Aumiller, Michael Gertz
InheritSumm: A General, Versatile and Compact Summarizer by Distilling from GPT
Yichong Xu, Ruochen Xu, Dan Iter, Yang Liu, Shuohang Wang, Chenguang Zhu, Michael Zeng
Enhancing Coherence of Extractive Summarization with Multitask Learning
Renlong Jie, Xiaojun Meng, Lifeng Shang, Xin Jiang, Qun Liu
Learning to Rank Utterances for Query-Focused Meeting Summarization
Xingxian Liu, Yajing Xu