Abstractive Summary
Abstractive summarization aims to create concise, coherent summaries that paraphrase the source text, unlike extractive methods, which copy sentences directly from the source. Current research focuses on improving the factual accuracy and coherence of abstractive summaries generated by large language models (LLMs), often employing techniques such as reinforcement learning, contrastive learning, and knowledge distillation to address hallucinations and inconsistencies. These advances matter for applications that require reliable summaries of diverse data types, including user activity, legal documents, and scientific literature, and they improve information access and analysis across many fields.
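As a concrete illustration of the abstractive setting described above, the sketch below generates a paraphrased summary with a pretrained sequence-to-sequence model. It is a minimal sketch, assuming the Hugging Face transformers library and the facebook/bart-large-cnn checkpoint; neither is tied to any particular paper listed on this page.

```python
# Minimal abstractive summarization sketch, assuming the Hugging Face
# `transformers` library (plus a PyTorch backend) and the public
# `facebook/bart-large-cnn` checkpoint are installed and downloadable.
from transformers import pipeline

# Load a summarization pipeline; the model paraphrases the input rather
# than copying sentences verbatim, which is the defining trait of
# abstractive (as opposed to extractive) summarization.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

document = (
    "Large language models can produce fluent summaries, but they "
    "sometimes introduce facts that are not supported by the source. "
    "Recent work applies reinforcement learning, contrastive learning, "
    "and knowledge distillation to reduce such hallucinations."
)

# Greedy decoding (do_sample=False) keeps the output deterministic;
# the length bounds control how aggressively the text is compressed.
result = summarizer(document, max_length=60, min_length=15, do_sample=False)
print(result[0]["summary_text"])
```

The factuality-oriented techniques mentioned above (reinforcement learning, contrastive learning, knowledge distillation) are typically applied by fine-tuning or post-processing a base summarizer like this one rather than replacing it.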
Papers
Eighteen papers, dated from July 11, 2022 to May 31, 2023.