Text Summarization
Text summarization aims to condense large amounts of text into concise, informative summaries, automating a task crucial for information processing and retrieval. Current research relies heavily on large language models (LLMs), exploring both extractive methods (selecting existing sentences) and abstractive methods (generating new text), often incorporating techniques such as attention mechanisms, reinforcement learning, and various fine-tuning strategies to improve accuracy and coherence. The field matters because of its broad applications, from news aggregation and scientific literature review to improving efficiency in professional settings. Ongoing research focuses on challenges such as hallucination (factual inaccuracies) and on improving evaluation metrics.
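To make the extractive/abstractive distinction concrete, here is a minimal sketch of a classic extractive baseline: score each sentence by the frequency of its words across the document and keep the top-ranked sentences in their original order. The function name, scoring rule, and regex tokenization are illustrative assumptions, not a method from any of the papers listed below; modern extractive systems use learned sentence representations instead.

```python
import re
from collections import Counter

def extractive_summary(text: str, num_sentences: int = 2) -> list[str]:
    """Frequency-based extractive baseline (illustrative, not from the papers below).

    Scores each sentence by the average document-level frequency of its
    words, then returns the top sentences in their original order.
    """
    # Naive sentence splitting on end-of-sentence punctuation.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # Document-wide word frequencies (lowercased, letters only).
    freq = Counter(re.findall(r"[a-z]+", text.lower()))

    def score(sentence: str) -> float:
        tokens = re.findall(r"[a-z]+", sentence.lower())
        # Length-normalize so long sentences are not automatically favored.
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    top = set(sorted(sentences, key=score, reverse=True)[:num_sentences])
    # Preserve the sentences' original order in the summary.
    return [s for s in sentences if s in top]
```

An abstractive system would instead generate new wording, which is why it needs a generative model and is where hallucination risks arise.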
Papers
On the Role of Summary Content Units in Text Summarization Evaluation
Marcel Nawrath, Agnieszka Nowak, Tristan Ratz, Danilo C. Walenta, Juri Opitz, Leonardo F. R. Ribeiro, João Sedoc, Daniel Deutsch, Simon Mille, Yixin Liu, Lining Zhang, Sebastian Gehrmann, Saad Mahamood, Miruna Clinciu, Khyathi Chandu, Yufang Hou
Hallucination Diversity-Aware Active Learning for Text Summarization
Yu Xia, Xu Liu, Tong Yu, Sungchul Kim, Ryan A. Rossi, Anup Rao, Tung Mai, Shuai Li
Automatic Summarization of Doctor-Patient Encounter Dialogues Using Large Language Model through Prompt Tuning
Mengxian Lyu, Cheng Peng, Xiaohan Li, Patrick Balian, Jiang Bian, Yonghui Wu
Investigating Text Shortening Strategy in BERT: Truncation vs Summarization
Mirza Alim Mutasodirin, Radityo Eko Prasojo