Transformer-Based Summarization Models
Transformer-based models are revolutionizing text summarization by generating abstractive summaries that go beyond simple extraction. Current research focuses on improving these models' accuracy, particularly in addressing factual errors and handling long documents, often through techniques like ensemble methods, attention head manipulation, and curriculum learning. These advancements are crucial for various applications, including automatic minuting, literature review generation, and efficient information retrieval across diverse domains, ultimately enhancing human productivity and knowledge synthesis.
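As background, the scaled dot-product attention that underlies all of these transformer summarizers can be sketched in a few lines of plain Python. This is a toy, dependency-free illustration of the formula softmax(QKᵀ/√d_k)·V, not any particular model's implementation; the matrices and dimensions are made up for the example:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.

    Q, K, V are lists of row vectors, toy stand-ins for the per-head
    query/key/value matrices a transformer summarizer computes.
    """
    d_k = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d_k).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)
        # Output is the attention-weighted average of the value rows.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Toy example: 2 query positions attending over 3 key/value positions.
Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
V = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
result = attention(Q, K, V)
```

Because the attention weights for each query sum to one, every output row here is a convex combination of the value rows; stacking many such heads and layers (plus a decoder that generates tokens) is what turns this primitive into an abstractive summarizer.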