Abstractive Summary
Abstractive summarization aims to produce concise, coherent summaries that paraphrase the source text, in contrast to extractive methods, which copy sentences directly. Current research focuses on improving the factual accuracy and coherence of abstractive summaries generated by large language models (LLMs), often employing techniques such as reinforcement learning, contrastive learning, and knowledge distillation to mitigate hallucinations and inconsistencies. These advances matter for applications that require reliable summaries of diverse data types, including user activity, legal documents, and scientific literature, improving information access and analysis across fields.
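One of the techniques mentioned above, contrastive learning for factual consistency, can be sketched with an InfoNCE-style loss: the embedding of a faithful summary is pulled toward the source document's embedding, while embeddings of hallucinated variants are pushed away. The toy embeddings, function names, and temperature below are illustrative assumptions, not from any specific paper; a real system would use encoder outputs rather than hand-set vectors.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss: low when the positive (faithful summary) is more
    similar to the anchor (source) than the negatives (hallucinations)."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / temperature for s in sims]
    m = max(logits)  # subtract the max to stabilize the softmax
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[0] / sum(exps))

# Hypothetical toy embeddings: the faithful summary aligns with the source,
# the hallucinated variants do not.
src      = [0.9, 0.1, 0.0]
faithful = [0.8, 0.2, 0.1]
halluc1  = [0.1, 0.9, 0.2]
halluc2  = [0.0, 0.3, 0.9]

loss = info_nce(src, faithful, [halluc1, halluc2])
```

Minimizing this loss during fine-tuning encourages the model to score faithful summaries above fabricated ones; variants of this idea underlie many of the contrastive approaches the survey paragraph refers to.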
Papers
(19 papers, dated February 19, 2024 through November 6, 2024; titles not preserved in this extraction.)