Edit Summary
Edit summarization, the task of automatically generating concise and informative descriptions of text edits, is crucial for maintaining large-scale collaborative projects such as Wikipedia and for improving the efficiency of text summarization systems. Current research focuses on developing and evaluating language models, often fine-tuned on curated datasets of human-written and synthetic summaries, both to generate high-quality edit summaries and to detect inconsistencies in existing ones. A significant open challenge is designing automated evaluation metrics that accurately reflect human judgments of summary quality, particularly in complex domains such as biomedical literature reviews, where contradictions between source documents are common. Better automated evaluation and summarization methods would improve the efficiency and accuracy of information management across these applications.
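As a minimal illustration of the kind of automated metric whose agreement with human judgment is at issue, the sketch below implements a toy ROUGE-1-style unigram-overlap F1 score between a candidate edit summary and a reference. This is a simplified stand-in, not the evaluation protocol of either paper; real ROUGE implementations add stemming, multiple references, and n-gram variants.

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Toy ROUGE-1 F1: unigram overlap between candidate and reference.
    Simplified for illustration; production metrics also apply stemming,
    tokenization rules, and support for multiple references."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # shared unigram count
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Two plausible summaries of the same hypothetical Wikipedia edit:
score = rouge1_f1("fixed typo in the intro",
                  "corrected a typo in the introduction")
print(round(score, 3))
```

A metric like this rewards surface word overlap, which is exactly why it can disagree with human evaluations: two summaries can share many words yet differ in factual consistency, the failure mode the inconsistency-detection and medical-summarization papers below examine.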
Papers
Interpretable Automatic Fine-grained Inconsistency Detection in Text Summarization
Hou Pong Chan, Qi Zeng, Heng Ji
Automated Metrics for Medical Multi-Document Summarization Disagree with Human Evaluations
Lucy Lu Wang, Yulia Otmakhova, Jay DeYoung, Thinh Hung Truong, Bailey E. Kuehl, Erin Bransom, Byron C. Wallace