Non-Negative Textual Response
Non-negative textual response research focuses on generating accurate, helpful, and unbiased text outputs from large language models (LLMs), addressing issues such as hallucination (fabricating information) and bias. Current work emphasizes improving the faithfulness of LLM responses to the input context, often using retrieval-augmented generation (RAG) or fine-tuning to ground answers in external evidence rather than in potentially biased or outdated parametric knowledge. This line of research is crucial for building trustworthy LLMs in fields such as healthcare, education, and customer service, where reliable, unbiased information is paramount.
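To make the RAG pattern concrete, the sketch below shows the two steps it adds around a model call: retrieve supporting passages, then prompt the model to answer only from that retrieved context. It is illustrative only; the toy corpus, the bag-of-words retriever, and the `call_llm` stub are hypothetical stand-ins for a real vector index and model API, not the method of any paper listed here.

```python
# Minimal, self-contained sketch of the retrieval-augmented generation (RAG)
# pattern. Everything here (corpus, scoring, call_llm) is an illustrative
# placeholder, not a production retriever or a real LLM client.
from collections import Counter
import math

CORPUS = [
    "Aspirin is a nonsteroidal anti-inflammatory drug used to reduce pain.",
    "The capital of Australia is Canberra, not Sydney.",
    "Photosynthesis converts light energy into chemical energy in plants.",
]

def _bow(text: str) -> Counter:
    """Lowercased bag-of-words representation of a string."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    q = _bow(query)
    ranked = sorted(CORPUS, key=lambda doc: _cosine(q, _bow(doc)), reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (hosted API or local model)."""
    return f"[model response grounded in a prompt of {len(prompt)} chars]"

def rag_answer(question: str) -> str:
    """Ground the model in retrieved context so it favors external evidence
    over (possibly stale or biased) parametric knowledge."""
    context = "\n".join(retrieve(question, k=1))
    prompt = (
        "Answer using ONLY the context below. If the context is "
        "insufficient, say so instead of guessing.\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return call_llm(prompt)

print(rag_answer("What is the capital of Australia?"))
```

The load-bearing design choice is the prompt's instruction to answer only from the retrieved context and to admit when that context is insufficient; this is what steers the model toward the provided evidence and away from fabricated or parametric-only answers.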
Papers
From RAGs to rich parameters: Probing how language models utilize external knowledge over parametric information for factual queries
Hitesh Wadhwa, Rahul Seetharaman, Somyaa Aggarwal, Reshmi Ghosh, Samyadeep Basu, Soundararajan Srinivasan, Wenlong Zhao, Shreyas Chaudhari, Ehsan Aghazadeh
PSLM: Parallel Generation of Text and Speech with LLMs for Low-Latency Spoken Dialogue Systems
Kentaro Mitsui, Koh Mitsuda, Toshiaki Wakatsuki, Yukiya Hono, Kei Sawada