LLM Explanation
Research on Large Language Model (LLM) explanations focuses on understanding how LLMs generate explanations, evaluating the quality and faithfulness of those explanations, and mitigating risks such as bias and memorization. Current work explores a range of LLM architectures and prompting techniques to improve explanation generation across diverse domains, including medicine, quantum computing, and code. This research is crucial for building trust in LLMs, improving their usability in high-stakes applications, and addressing ethical concerns around transparency and accountability.