Non-Negative Textual Response
Non-negative textual response research focuses on generating accurate, helpful, and unbiased text outputs from large language models (LLMs), addressing issues such as hallucinations (fabricated information) and biases. Current work emphasizes improving the faithfulness of LLM responses to the input context, often using retrieval-augmented generation (RAG) or fine-tuning to enhance accuracy and reduce the influence of biases learned during pretraining. This work is crucial for building trustworthy LLMs in fields such as healthcare, education, and customer service, where reliable and unbiased information is paramount.
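To make the RAG idea concrete, here is a minimal sketch of the retrieve-then-prompt pattern. The keyword-overlap retriever, the example documents, and the prompt wording are all illustrative assumptions (a real system would use dense embeddings and an actual LLM call); the point is only how retrieved context is injected to ground the model's answer.

```python
# Minimal RAG sketch. The retriever below uses naive keyword overlap
# purely for illustration; production systems use vector search, and
# the final prompt would be sent to an LLM (omitted here).

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Instruct the model to answer only from retrieved context,
    which is the mechanism RAG uses to curb hallucination."""
    ctx = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using ONLY the context below; if the answer is not "
        "in the context, say 'unknown'.\n"
        f"Context:\n{ctx}\n"
        f"Question: {query}\nAnswer:"
    )

docs = [
    "Aspirin is commonly used to reduce fever and relieve mild pain.",
    "The Eiffel Tower is located in Paris, France.",
    "Retrieval-augmented generation grounds LLM outputs in retrieved text.",
]
query = "What is aspirin used for?"
prompt = build_prompt(query, retrieve(query, docs))
print(prompt)
```

The "answer only from the context" instruction, combined with retrieval, is what shifts the model from relying on (possibly biased or outdated) parametric memory toward the supplied evidence.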