Structured Summary
Structured summarization research aims to automatically generate concise, informative summaries from diverse text sources while preserving factual accuracy and coherence. Current efforts concentrate on improving the faithfulness and informativeness of Large Language Models (LLMs) for summarization, addressing issues such as hallucination and bias, and developing evaluation metrics more robust than simple overlap measures. This work is crucial for managing the ever-increasing volume of digital information, with applications ranging from healthcare and finance to scientific literature review and broader information accessibility. Progress in summarization is in turn driving advances in both LLM architecture and evaluation methodology.
Papers
Needle in a Haystack: An Analysis of High-Agreement Workers on MTurk for Summarization
Lining Zhang, Simon Mille, Yufang Hou, Daniel Deutsch, Elizabeth Clark, Yixin Liu, Saad Mahamood, Sebastian Gehrmann, Miruna Clinciu, Khyathi Chandu, João Sedoc
BUMP: A Benchmark of Unfaithful Minimal Pairs for Meta-Evaluation of Faithfulness Metrics
Liang Ma, Shuyang Cao, Robert L. Logan, Di Lu, Shihao Ran, Ke Zhang, Joel Tetreault, Alejandro Jaimes
Inverse Reinforcement Learning for Text Summarization
Yu Fu, Deyi Xiong, Yue Dong
What to Read in a Contract? Party-Specific Summarization of Legal Obligations, Entitlements, and Prohibitions
Abhilasha Sancheti, Aparna Garimella, Balaji Vasan Srinivasan, Rachel Rudinger
LR-Sum: Summarization for Less-Resourced Languages
Chester Palen-Michel, Constantine Lignos
OASum: Large-Scale Open Domain Aspect-based Summarization
Xianjun Yang, Kaiqiang Song, Sangwoo Cho, Xiaoyang Wang, Xiaoman Pan, Linda Petzold, Dong Yu