Response Generation
Response generation, the task of producing text in response to an input such as a user query or dialogue context, aims for outputs that are accurate, coherent, and relevant. Current research focuses on improving personalization, mitigating issues such as hallucination and ambiguity, and enhancing efficiency through techniques including retrieval-augmented generation (RAG), reinforcement learning, and fine-tuning of large language models (LLMs). These advances are driven by the need for reliable, contextually appropriate responses across diverse applications, from conversational AI to question answering and information retrieval. Developing robust evaluation metrics and benchmarks remains a key area of ongoing investigation.
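As a concrete illustration of the RAG pattern mentioned above, the following sketch retrieves the passage most similar to the query from a small in-memory corpus and prepends it to the generation prompt. It is a minimal sketch only: the corpus, the bag-of-words retriever, and the generate() stub are hypothetical stand-ins for a real vector index and LLM client.

```python
# Minimal retrieval-augmented generation (RAG) sketch. The corpus,
# scoring function, and generate() stub are illustrative stand-ins
# for a real embedding index and LLM call.
from collections import Counter
import math

CORPUS = [
    "Granite models are trained for function calling via multi-task learning.",
    "Retrieval augmented generation grounds LLM outputs in retrieved documents.",
    "Task-oriented dialogue systems fill API arguments from user utterances.",
]

def bow(text: str) -> Counter:
    """Bag-of-words vector; a real system would use dense embeddings."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    q = bow(query)
    return sorted(CORPUS, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

def generate(prompt: str) -> str:
    """Placeholder for an LLM call; here it just echoes the grounded prompt."""
    return f"[LLM output conditioned on]\n{prompt}"

query = "How does retrieval augmented generation reduce hallucination?"
context = "\n".join(retrieve(query))
print(generate(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"))
```

Grounding the prompt in retrieved text is what lets RAG-style systems trade free-form recall for verifiable context, which is why it is commonly paired with hallucination mitigation.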
Papers
Granite-Function Calling Model: Introducing Function Calling Abilities via Multi-task Learning of Granular Tasks
Ibrahim Abdelaziz, Kinjal Basu, Mayank Agarwal, Sadhana Kumaravel, Matthew Stallone, Rameswar Panda, Yara Rizk, GP Bhargav, Maxwell Crouse, Chulaka Gunasekara, Shajith Ikbal, Sachin Joshi, Hima Karanam, Vineet Kumar, Asim Munawar, Sumit Neelam, Dinesh Raghu, Udit Sharma, Adriana Meza Soria, Dheeraj Sreedhar, Praveen Venkateswaran, Merve Unuvar, David Cox, Salim Roukos, Luis Lastras, Pavan Kapanipathi
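This paper trains models to emit structured function calls; its training recipe is not reproduced here, but the sketch below shows the generic inference-time pattern such models support: the model emits a JSON call that the application parses, validates, and dispatches to a registered tool. The tool registry, the get_weather stub, and the simulated model output are all hypothetical.

```python
# Generic function-calling dispatch pattern: the model emits a JSON
# call, the application validates and executes it. The registry and
# the simulated model output are illustrative, not the paper's API.
import json
from typing import Callable

TOOLS: dict[str, Callable[..., str]] = {}

def tool(fn: Callable[..., str]) -> Callable[..., str]:
    """Register a Python function so the model can call it by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stub; a real tool would hit a weather API

# Pretend the model produced this structured call for the user
# query "What's the weather in Paris?".
model_output = '{"name": "get_weather", "arguments": {"city": "Paris"}}'

call = json.loads(model_output)
if call["name"] in TOOLS:
    print(TOOLS[call["name"]](**call["arguments"]))  # -> Sunny in Paris
else:
    raise ValueError(f"Model requested unknown tool: {call['name']}")
```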
LLM-based Frameworks for API Argument Filling in Task-Oriented Conversational Systems
Jisoo Mok, Mohammad Kachuee, Shuyang Dai, Shayan Ray, Tara Taghavi, Sungroh Yoon
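The sketch below illustrates the general argument-filling pattern this paper studies, under assumed details: a prompt is built from an API schema and the dialogue, an LLM (stubbed here) returns the argument values as JSON, and the result is checked against the schema's required fields. The schema, dialogue, and stubbed reply are hypothetical.

```python
# Sketch of LLM-based API argument filling: prompt from schema plus
# dialogue, then validate the model's JSON. The schema, dialogue,
# and call_llm() stub are illustrative assumptions.
import json

SCHEMA = {
    "api": "book_flight",
    "arguments": {"origin": "string", "destination": "string", "date": "string"},
}
DIALOGUE = "User: I need a flight from Boston to Denver on May 3rd."

prompt = (
    f"Fill the arguments of `{SCHEMA['api']}` from the dialogue.\n"
    f"Arguments: {json.dumps(SCHEMA['arguments'])}\n"
    f"Dialogue: {DIALOGUE}\n"
    "Reply with JSON only."
)

def call_llm(prompt: str) -> str:
    """Stub standing in for a real LLM call."""
    return '{"origin": "Boston", "destination": "Denver", "date": "2024-05-03"}'

filled = json.loads(call_llm(prompt))
missing = set(SCHEMA["arguments"]) - set(filled)
if missing:
    # A production system would re-prompt or ask the user for missing slots.
    raise ValueError(f"Unfilled arguments: {missing}")
print(filled)
```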