Text Summarization
Text summarization aims to condense large volumes of text into concise, informative summaries, automating a task crucial for information processing and retrieval. Current research relies heavily on large language models (LLMs), exploring both extractive methods (selecting existing sentences) and abstractive methods (generating new text), and often incorporates attention mechanisms, reinforcement learning, and various fine-tuning strategies to improve accuracy and coherence. The field is significant because of its broad applications, from news aggregation and scientific literature review to efficiency gains in professional settings; ongoing research focuses on challenges such as hallucination (factual inaccuracies in generated text) and on developing better evaluation metrics.
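To make the extractive/abstractive distinction concrete, the sketch below implements the simplest form of extractive summarization: score each sentence by the frequency of its words in the document and return the top-scoring sentences in their original order. This is a minimal illustrative baseline, not the method of any paper listed here; the function name and scoring scheme are assumptions for illustration.

```python
import re
from collections import Counter

def extractive_summary(text, num_sentences=2):
    """Frequency-based extractive summarization: keep the sentences whose
    words are most frequent in the document, in their original order."""
    # Naive sentence split on terminal punctuation followed by whitespace.
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text.strip())
                 if s.strip()]
    # Word frequencies over the whole document (lowercased, letters only).
    freq = Counter(re.findall(r'[a-z]+', text.lower()))

    def score(sentence):
        # Average word frequency, so long sentences are not favored unfairly.
        tokens = re.findall(r'[a-z]+', sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    top = set(sorted(sentences, key=score, reverse=True)[:num_sentences])
    # Re-emit the selected sentences in document order for readability.
    return ' '.join(s for s in sentences if s in top)
```

An abstractive system would instead generate new wording (typically with an LLM), which is why it can be more fluent but also prone to the hallucination problem noted above.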
Papers
Integrating Summarization and Retrieval for Enhanced Personalization via Large Language Models
Chris Richardson, Yao Zhang, Kellen Gillespie, Sudipta Kar, Arshdeep Singh, Zeynab Raeesy, Omar Zia Khan, Abhinav Sethy
Improving Factual Consistency of Text Summarization by Adversarially Decoupling Comprehension and Embellishment Abilities of LLMs
Huawen Feng, Yan Fan, Xiong Liu, Ting-En Lin, Zekun Yao, Yuchuan Wu, Fei Huang, Yongbin Li, Qianli Ma
Generating Summaries with Controllable Readability Levels
Leonardo F. R. Ribeiro, Mohit Bansal, Markus Dreyer
On Context Utilization in Summarization with Large Language Models
Mathieu Ravaut, Aixin Sun, Nancy F. Chen, Shafiq Joty
Text Summarization Using Large Language Models: A Comparative Study of MPT-7b-instruct, Falcon-7b-instruct, and OpenAI Chat-GPT Models
Lochan Basyal, Mihir Sanghvi