Great Truth
Research on "truth" in the context of large language models (LLMs) focuses on developing methods to assess and improve the factual accuracy and reliability of LLM outputs. Current efforts involve analyzing the underlying causes of LLM inaccuracies (e.g., multi-step reasoning failures, biases in training data), designing game-theoretic approaches to enhance consistency and reliability during decoding, and developing robust lie detection methods using techniques like spectral analysis of model activations. This research is crucial for mitigating the risks of misinformation spread by LLMs and building more trustworthy AI systems across various applications, from healthcare to social media.
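To make the lie-detection idea concrete, below is a minimal sketch of a spectral (PCA-style) probe on model activations. It is an illustration only: the activations are simulated with NumPy so the script runs standalone, and the helper fake_activations, the probe construction, and all parameter values are assumptions made for this sketch rather than the method of any particular paper. In a real setting, the simulated vectors would be replaced by hidden-state activations extracted from an LLM for labeled true and false statements.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulated activations (stand-in for real LLM hidden states) ---
# Assume each statement yields a d-dimensional activation vector; true and
# false statements are shifted along some unknown direction plus noise.
d = 64
truth_dir = rng.normal(size=d)
truth_dir /= np.linalg.norm(truth_dir)

def fake_activations(n, label, scale=2.0, noise=1.0):
    """Simulate activations; label=+1 for true statements, -1 for false."""
    base = rng.normal(scale=noise, size=(n, d))
    return base + label * scale * truth_dir

X_true = fake_activations(200, +1)
X_false = fake_activations(200, -1)

# --- Spectral probe: leading principal component of centered activations ---
X = np.vstack([X_true, X_false])
X_centered = X - X.mean(axis=0)
# SVD of the centered data; the top right-singular vector is the direction of
# maximum variance, which in this toy setup separates true from false statements.
_, _, vt = np.linalg.svd(X_centered, full_matrices=False)
probe = vt[0]

# Orient the probe so that true statements project positively on average.
if (X_true @ probe).mean() < (X_false @ probe).mean():
    probe = -probe

# --- Evaluate on held-out simulated statements ---
test_true = fake_activations(100, +1)
test_false = fake_activations(100, -1)
scores = np.concatenate([test_true @ probe, test_false @ probe])
labels = np.concatenate([np.ones(100), np.zeros(100)])
accuracy = ((scores > 0).astype(float) == labels).mean()
print(f"Held-out accuracy of the spectral truth probe: {accuracy:.2%}")
```

The design choice is the standard one for spectral probes: the top singular direction of the centered activation matrix captures the dominant axis of variation, which in this toy setup is, by construction, the true/false axis. With real model activations the same recipe is typically applied layer by layer, and the best-separating layer is chosen on held-out data.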