Response Generation
Response generation, the task of producing text outputs in response to diverse inputs, aims for accuracy, coherence, and relevance. Current research focuses on improving personalization, mitigating issues such as hallucination and ambiguity, and increasing efficiency through techniques such as retrieval-augmented generation (RAG), reinforcement learning, and fine-tuning of large language models (LLMs). These advances are driven by the need for more reliable and contextually appropriate responses across applications ranging from conversational AI to question answering and information retrieval. Developing robust evaluation metrics and benchmarks remains a key area of ongoing investigation.
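To make the RAG idea concrete, the following is a minimal, self-contained sketch of the retrieve-then-generate pattern: documents are ranked against the query by similarity, and the top matches are packed into the prompt handed to an LLM. The bag-of-words cosine scoring here is a deliberately simple stand-in for the learned dense embeddings used in practice, and the function names (`retrieve`, `build_prompt`) are illustrative, not taken from any specific paper or library.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase word tokens; real systems use subword tokenizers.
    return re.findall(r"[a-z0-9]+", text.lower())

def embed(text):
    # Toy bag-of-words vector; a stand-in for a learned embedding model.
    return Counter(tokenize(text))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, corpus, k=2):
    # Ground the generator by prepending retrieved context to the question.
    context = "\n".join(retrieve(query, corpus, k))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "Paris is the capital of France.",
    "The mitochondria is the powerhouse of the cell.",
    "Python is a programming language.",
]
prompt = build_prompt("What is the capital of France?", corpus, k=1)
```

In a full system, `build_prompt`'s output would be sent to an LLM; grounding the model in retrieved text is what lets RAG reduce hallucination relative to generating from parametric memory alone.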
Papers
Information for Conversation Generation: Proposals Utilising Knowledge Graphs
Alex Clay, Ernesto Jiménez-Ruiz
Policy-driven Knowledge Selection and Response Generation for Document-grounded Dialogue
Longxuan Ma, Jiapeng Li, Mingda Li, Wei-Nan Zhang, Ting Liu
Developing Retrieval Augmented Generation (RAG) based LLM Systems from PDFs: An Experience Report
Ayman Asad Khan, Md Toufique Hasan, Kai Kristian Kemell, Jussi Rasku, Pekka Abrahamsson