Text Summarization
Text summarization condenses large amounts of text into concise, informative summaries, automating a task central to information processing and retrieval. Current research relies heavily on large language models (LLMs), exploring both extractive methods (selecting existing sentences) and abstractive methods (generating new text), often combined with attention mechanisms, reinforcement learning, and fine-tuning strategies to improve accuracy and coherence. The field matters because of its broad applications, from news aggregation and scientific literature review to efficiency gains in professional settings; ongoing work targets challenges such as hallucination (factually inaccurate generated content) and better evaluation metrics.
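To make the extractive/abstractive distinction concrete, here is a minimal sketch of the extractive approach: rank each sentence by the average corpus frequency of its words and keep the top-k in their original order. This is a toy frequency baseline for illustration only, not a method from any of the papers listed below; the function name and scoring heuristic are this sketch's own assumptions.

```python
import re
from collections import Counter

def extractive_summary(text: str, k: int = 2) -> str:
    """Toy extractive summarizer (illustrative baseline).

    Scores each sentence by the average document-wide frequency of
    its words, then returns the top-k sentences in original order.
    """
    # Naive sentence split on terminal punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # Document-wide word frequencies (lowercased).
    freq = Counter(re.findall(r"\w+", text.lower()))
    scored = []
    for i, sent in enumerate(sentences):
        words = re.findall(r"\w+", sent.lower())
        score = sum(freq[w] for w in words) / max(len(words), 1)
        scored.append((score, i, sent))
    # Keep the k highest-scoring sentences, then restore document order.
    top = sorted(sorted(scored, reverse=True)[:k], key=lambda t: t[1])
    return " ".join(sent for _, _, sent in top)

# Example: "Cats" recurs, so cat-related sentences outrank the dog one.
summary = extractive_summary(
    "Cats sleep a lot. Cats like fish. Dogs bark loudly.", k=2
)
# → "Cats sleep a lot. Cats like fish."
```

An abstractive system, by contrast, would generate new wording (e.g. "Cats sleep and eat fish") rather than copying sentences verbatim, which is where LLMs, and the hallucination risks studied in several papers below, come in.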
Papers
Investigating Hallucinations in Pruned Large Language Models for Abstractive Summarization
George Chrysostomou, Zhixue Zhao, Miles Williams, Nikolaos Aletras
Benchmarking Generation and Evaluation Capabilities of Large Language Models for Instruction Controllable Summarization
Yixin Liu, Alexander R. Fabbri, Jiawen Chen, Yilun Zhao, Simeng Han, Shafiq Joty, Pengfei Liu, Dragomir Radev, Chien-Sheng Wu, Arman Cohan
Integrating Summarization and Retrieval for Enhanced Personalization via Large Language Models
Chris Richardson, Yao Zhang, Kellen Gillespie, Sudipta Kar, Arshdeep Singh, Zeynab Raeesy, Omar Zia Khan, Abhinav Sethy
Improving Factual Consistency of Text Summarization by Adversarially Decoupling Comprehension and Embellishment Abilities of LLMs
Huawen Feng, Yan Fan, Xiong Liu, Ting-En Lin, Zekun Yao, Yuchuan Wu, Fei Huang, Yongbin Li, Qianli Ma