Abstractive Summary
Abstractive summarization aims to create concise, coherent summaries that paraphrase the source text, unlike extractive methods, which select sentences directly from it. Current research focuses on improving the factual accuracy and coherence of abstractive summaries generated by large language models (LLMs), often employing techniques such as reinforcement learning, contrastive learning, and knowledge distillation to address issues like hallucinations and inconsistencies. These advances matter for applications that require reliable summaries of diverse data types, including user activity, legal documents, and scientific literature, improving information access and analysis across fields.
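To make the contrast concrete, here is a minimal sketch of the extractive baseline the paragraph describes: a naive frequency-based summarizer that scores each sentence by how often its words appear in the whole text and copies the top-scoring sentences verbatim. An abstractive system, by contrast, would generate new phrasing rather than copy sentences. The function name and scoring heuristic are illustrative, not a specific published method.

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=1):
    """Naive extractive summarizer: score each sentence by the
    total corpus frequency of its words, then return the top-scoring
    sentences verbatim, in their original order."""
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]
    freq = Counter(re.findall(r'\w+', text.lower()))
    # Rank sentence indices by summed word frequency (highest first).
    ranked = sorted(
        range(len(sentences)),
        key=lambda i: sum(freq[w] for w in re.findall(r'\w+', sentences[i].lower())),
        reverse=True,
    )
    keep = sorted(ranked[:n_sentences])  # restore document order
    return ' '.join(sentences[i] for i in keep)

text = ("Abstractive methods paraphrase the source. "
        "Extractive methods copy sentences verbatim. "
        "Extractive methods are simple but extractive methods can be redundant.")
print(extractive_summary(text, 1))
# → Extractive methods are simple but extractive methods can be redundant.
```

Note how the output is an unaltered source sentence; the redundancy it carries over is exactly the kind of artifact abstractive methods avoid by paraphrasing.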