Summarization Task
Text summarization research aims to automatically generate concise and informative summaries of text, focusing on improving the alignment of machine-generated summaries with human preferences and addressing challenges in evaluation. Current research emphasizes the use of large language models (LLMs) and explores various architectures like BART and Mixture-of-Experts models, along with novel prompting techniques and fine-tuning strategies to enhance summarization quality across diverse domains and input types (e.g., medical reports, financial documents, code). This field is crucial for managing information overload and has significant implications for various applications, including information retrieval, content recommendation, and knowledge synthesis.
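The overview above concerns LLM-based abstractive summarization; as a point of contrast, the classic extractive approach the field builds on can be sketched in a few lines. The snippet below is a minimal, illustrative Luhn-style baseline (frequency-scored sentence selection) and is not drawn from any of the listed papers; the function name, stop-word list, and scoring scheme are choices made for this sketch.

```python
import re
from collections import Counter

def extractive_summary(text, num_sentences=2):
    """Score sentences by average content-word frequency and return the
    top-scoring ones in their original order (simple Luhn-style baseline)."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    # Tiny illustrative stop-word list; real systems use a fuller one.
    stop = {"the", "a", "an", "of", "to", "and", "in", "is", "are",
            "for", "on", "with", "that", "this"}
    words = re.findall(r'[a-z]+', text.lower())
    freq = Counter(w for w in words if w not in stop)

    def score(sentence):
        toks = [t for t in re.findall(r'[a-z]+', sentence.lower())
                if t not in stop]
        # Normalize by length so long sentences are not favored outright.
        return sum(freq[t] for t in toks) / (len(toks) or 1)

    ranked = sorted(range(len(sentences)),
                    key=lambda i: score(sentences[i]), reverse=True)
    chosen = sorted(ranked[:num_sentences])  # restore document order
    return " ".join(sentences[i] for i in chosen)
```

Abstractive LLM summarizers instead generate new text conditioned on the input, which is what makes alignment with human preferences and fine-grained evaluation (the focus of the papers below) substantially harder than for extractive selection.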
Papers
Are Large Language Models In-Context Personalized Summarizers? Get an iCOPERNICUS Test Done!
Divya Patel, Pathik Patel, Ankush Chander, Sourish Dasgupta, Tanmoy Chakraborty
UniSumEval: Towards Unified, Fine-Grained, Multi-Dimensional Summarization Evaluation for LLMs
Yuho Lee, Taewon Yun, Jason Cai, Hang Su, Hwanjun Song