Critique Ability
Critique ability in large language models (LLMs) refers to their capacity to identify and correct errors in their own reasoning and generated outputs. Current research emphasizes benchmarking this ability across diverse tasks, using metrics beyond simple accuracy to assess aspects such as the quality of reasoning steps, constraint satisfaction, and handling of complex instructions, often employing techniques like chain-of-thought prompting and self-critique mechanisms. This work is crucial for improving LLM reliability and trustworthiness, with impact spanning automated reasoning, code generation, and more nuanced applications that require robust and explainable AI.
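The self-critique mechanisms mentioned above typically follow a generate-critique-revise loop. Below is a minimal sketch of such a loop, assuming a generic `llm(prompt)` completion function as a stand-in for whatever model API is used; the function name, prompts, and stopping heuristic are illustrative assumptions, not the method of any specific paper.

```python
def llm(prompt: str) -> str:
    """Placeholder for a model call; substitute any completion or chat API."""
    raise NotImplementedError("Plug in a real model call here.")


def self_critique(task: str, max_rounds: int = 2) -> str:
    """Generate an answer, ask the model to critique it, then revise.

    Illustrative generate-critique-revise loop; the prompts and the
    stopping heuristic are assumptions, not a fixed protocol.
    """
    answer = llm(f"Solve the following task step by step:\n{task}")
    for _ in range(max_rounds):
        critique = llm(
            "Review the answer below for reasoning errors or unmet constraints. "
            "List any problems you find, or reply 'no problems'.\n\n"
            f"Task: {task}\nAnswer: {answer}"
        )
        # Stop early if the critique reports no problems (simple heuristic).
        if "no problems" in critique.lower():
            break
        answer = llm(
            "Revise the answer to address the critique.\n\n"
            f"Task: {task}\nAnswer: {answer}\nCritique: {critique}"
        )
    return answer
```

Benchmarks of critique ability then score not only the final answer but also whether the intermediate critiques correctly identify genuine errors.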