Non-Negative Textual Response
Non-negative textual response research focuses on generating accurate, helpful, and unbiased text outputs from large language models (LLMs), addressing issues such as hallucination (fabricating information) and bias. Current research emphasizes improving the faithfulness of LLM responses to the input context, often using retrieval-augmented generation (RAG) or fine-tuning to improve accuracy and reduce the influence of biases inherited from training data. This work is crucial for building trustworthy LLMs in fields such as healthcare, education, and customer service, where reliable and unbiased information is paramount.
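To make the retrieval-augmented generation idea concrete, here is a minimal sketch of the RAG pattern: retrieve the passages most relevant to a query, then build a prompt that instructs the model to answer only from that context. The corpus, the keyword-overlap retriever, and the prompt wording are illustrative assumptions, not any specific system's implementation; a real pipeline would use dense embeddings and an actual LLM call where the prompt is printed here.

```python
def retrieve(query, corpus, k=2):
    """Rank documents by naive keyword overlap with the query (toy retriever)."""
    q_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, passages):
    """Ground the model in retrieved context to discourage hallucination."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the context below. If the context is "
        "insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

corpus = [
    "Aspirin is commonly used to reduce fever and relieve mild pain.",
    "The Eiffel Tower is located in Paris, France.",
    "Photosynthesis converts light energy into chemical energy in plants.",
]

query = "What is aspirin used for?"
prompt = build_prompt(query, retrieve(query, corpus))
print(prompt)
```

The instruction to answer only from the supplied context, with an explicit fallback when the context is insufficient, is what ties retrieval to faithfulness: the model is steered away from relying on (possibly biased or fabricated) parametric knowledge.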
Papers
16 papers, November 25, 2021 – November 26, 2022.