Abstractive Summarisation
Abstractive summarisation aims to generate concise, coherent summaries that capture the essence of a source text, unlike extractive methods, which simply select portions of the original. Current research focuses on improving the accuracy and fluency of generated summaries, particularly on factual consistency (avoiding "hallucinations") and on adapting models to diverse data types (e.g., video, medical records, legal documents). Common architectures include transformers and variational autoencoders, often combined with techniques such as reinforcement learning and pre-training on specialised datasets. The field is significant for its potential to streamline information access across domains ranging from healthcare and legal proceedings to news aggregation and accessibility for visually impaired readers.
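To make the extractive/abstractive contrast concrete, the sketch below implements a minimal extractive baseline in pure Python: it scores each sentence by the average document frequency of its words and copies the top-ranked sentence verbatim. An abstractive model would instead generate new sentences (e.g., with a sequence-to-sequence transformer). The function name and scoring scheme here are illustrative assumptions, not drawn from any of the papers listed below.

```python
import re
from collections import Counter

def extractive_summary(text: str, n_sentences: int = 1) -> str:
    """Toy extractive baseline: select the highest-scoring sentences
    verbatim, where a sentence's score is the average frequency of
    its words in the whole document."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"\w+", text.lower()))

    def score(sentence: str) -> float:
        tokens = re.findall(r"\w+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    # Rank sentences, then emit the chosen ones in original document order.
    chosen = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    return " ".join(s for s in sentences if s in chosen)

doc = ("The model was trained on news articles. "
       "The model summarises news articles well. "
       "Evaluation used ROUGE.")
print(extractive_summary(doc, n_sentences=1))
```

Because the output is always a verbatim span of the input, such a baseline can never paraphrase or compress across sentences; that gap is precisely what abstractive methods target, at the cost of the factual-consistency risks noted above.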
Papers
Towards Abstractive Timeline Summarisation using Preference-based Reinforcement Learning
Yuxuan Ye, Edwin Simpson
Discharge Summary Hospital Course Summarisation of In Patient Electronic Health Record Text with Clinical Concept Guided Deep Pre-Trained Transformer Models
Thomas Searle, Zina Ibrahim, James Teo, Richard Dobson