LLM Credence
LLM credence research investigates the reliability and trustworthiness of large language models (LLMs), focusing on how confident these models are in their outputs and whether that confidence is justified. Current work explores methods for eliciting and calibrating LLM confidence, analyzes the linguistic styles of different models to improve attribution and surface biases, and develops techniques to enhance model safety and mitigate deployment risks. This research is central to building trust in LLMs and to their responsible application across domains such as healthcare, where bias and safety are paramount concerns.
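One common way to test whether a model's stated credence is justified is to compare its reported confidence against its empirical accuracy, for instance via expected calibration error (ECE). The sketch below is a minimal illustration of that check, assuming per-question credences (e.g., verbalized probabilities elicited alongside each answer) and graded correctness are already available; the function name and example inputs are hypothetical, not drawn from any specific paper.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: size-weighted gap between stated credence and empirical accuracy per bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # Assign each credence to a bin index in 0 .. n_bins-1.
    bin_ids = np.digitize(confidences, edges[1:-1], right=True)
    ece = 0.0
    for b in range(n_bins):
        in_bin = bin_ids == b
        if not in_bin.any():
            continue
        avg_conf = confidences[in_bin].mean()   # what the model claimed
        avg_acc = correct[in_bin].mean()        # how often it was actually right
        ece += in_bin.mean() * abs(avg_conf - avg_acc)
    return ece

# Hypothetical usage: credences from a model asked to state a probability with
# each answer, correctness judged against a gold reference set.
confs = [0.95, 0.80, 0.60, 0.99, 0.70]
right = [1, 1, 0, 1, 0]
print(f"ECE = {expected_calibration_error(confs, right):.3f}")
```

A low ECE indicates the model's credences track its actual hit rate; a high ECE signals over- or under-confidence that downstream safety and deployment decisions would need to correct for.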