Non-Negative Textual Response
Non-negative textual response research focuses on generating accurate, helpful, and unbiased text from large language models (LLMs), addressing issues such as hallucinations (fabricated information) and bias. Current work emphasizes improving the faithfulness of LLM responses to the input context, often through retrieval-augmented generation (RAG) or fine-tuning, so that answers stay grounded in the provided evidence rather than in the model's inherent biases. This research is crucial for building trustworthy LLMs in fields such as healthcare, education, and customer service, where reliable and unbiased information is paramount.
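As a rough illustration of the RAG approach mentioned above, the sketch below grounds an answer in retrieved passages so the model is less likely to hallucinate. It is a minimal sketch, not any specific paper's method: the retriever is a naive word-overlap ranker standing in for a real dense or sparse retriever, and `call_llm` is a hypothetical placeholder for whatever LLM API is actually used.

```python
# Minimal RAG sketch: retrieve supporting passages, then instruct the model
# to answer only from that context (faithful, non-hallucinated responses).
from collections import Counter

DOCUMENTS = [
    "Aspirin is commonly used to reduce fever and relieve mild pain.",
    "Retrieval-augmented generation conditions an LLM on retrieved passages.",
    "Customer refunds are processed within 5 business days of approval.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (stand-in for a real retriever)."""
    q_words = Counter(query.lower().split())
    scored = [(sum(q_words[w] for w in d.lower().split()), d) for d in docs]
    return [d for score, d in sorted(scored, reverse=True)[:k] if score > 0]

def build_prompt(query: str, passages: list[str]) -> str:
    """Instruct the model to answer only from the provided context."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using only the context below. "
        "If the context is insufficient, say you do not know.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

def call_llm(prompt: str) -> str:
    # Hypothetical stub; replace with a call to an actual LLM API.
    return "(model output)"

if __name__ == "__main__":
    question = "How long do customer refunds take?"
    prompt = build_prompt(question, retrieve(question, DOCUMENTS))
    print(call_llm(prompt))
```

The key design choice is the prompt instruction to refuse when the retrieved context is insufficient, which trades a small amount of helpfulness for a reduction in fabricated answers.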
Papers
Nineteen papers, dated June 21, 2023 through February 12, 2024.