Text Summarization
Text summarization aims to condense long texts into concise, informative summaries, automating a task that is central to information processing and retrieval. Current research relies heavily on large language models (LLMs) and explores both extractive methods (selecting existing sentences) and abstractive methods (generating new text), often combining attention mechanisms, reinforcement learning, and various fine-tuning strategies to improve accuracy and coherence; a minimal sketch of the two approaches appears below. The field is significant because of its broad applications, from news aggregation and scientific literature review to everyday professional workflows, and ongoing work focuses on reducing hallucination (factual inaccuracies) and improving evaluation metrics.
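To make the extractive/abstractive distinction concrete, here is a minimal Python sketch. It is illustrative only and not drawn from any paper listed below: the frequency-based sentence scoring is an assumed heuristic, and the model name facebook/bart-large-cnn is an assumed choice of pretrained summarizer from the Hugging Face transformers library.

```python
# Minimal sketch contrasting extractive and abstractive summarization.
# The scoring heuristic and model name are illustrative assumptions.
import re
from collections import Counter


def extractive_summary(text: str, num_sentences: int = 2) -> str:
    """Select the highest-scoring existing sentences (extractive)."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"\w+", text.lower()))

    def score(sentence: str) -> float:
        # Average word frequency as a crude salience measure.
        tokens = re.findall(r"\w+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    top = set(sorted(sentences, key=score, reverse=True)[:num_sentences])
    # Keep the selected sentences in their original order.
    return " ".join(s for s in sentences if s in top)


def abstractive_summary(text: str) -> str:
    """Generate new text with a pretrained seq2seq model (abstractive)."""
    from transformers import pipeline  # requires: pip install transformers

    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
    result = summarizer(text, max_length=60, min_length=15, do_sample=False)
    return result[0]["summary_text"]


if __name__ == "__main__":
    document = (
        "Text summarization condenses long documents into short summaries. "
        "Extractive methods copy salient sentences verbatim. "
        "Abstractive methods generate new sentences with a language model. "
        "Evaluation remains difficult because summaries can hallucinate facts."
    )
    print(extractive_summary(document))
    # print(abstractive_summary(document))  # needs transformers and a model download
```

The extractive function runs with the standard library alone; the abstractive function defers its import so the sketch stays runnable without downloading a model.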
Papers
Multi-Dimensional Evaluation of Text Summarization with In-Context Learning
Sameer Jain, Vaishakh Keshava, Swarnashree Mysore Sathyendra, Patrick Fernandes, Pengfei Liu, Graham Neubig, Chunting Zhou
Hybrid Long Document Summarization using C2F-FAR and ChatGPT: A Practical Study
Guang Lu, Sylvia B. Larcher, Tu Tran
UMSE: Unified Multi-scenario Summarization Evaluation
Shen Gao, Zhitao Yao, Chongyang Tao, Xiuying Chen, Pengjie Ren, Zhaochun Ren, Zhumin Chen
Incorporating Distributions of Discourse Structure for Long Document Abstractive Summarization
Dongqi Pu, Yifan Wang, Vera Demberg
AaKOS: Aspect-adaptive Knowledge-based Opinion Summarization
Guan Wang, Weihua Li, Edmund M-K. Lai, Quan Bai