Contrastive Reasoner
Contrastive reasoners are large language models (LLMs) augmented to reason more reliably, primarily by addressing the limitations of existing methods such as chain-of-thought prompting. Current research focuses on techniques such as multi-agent systems with validator agents, process supervision during pre-training, and contrastive prompting, in which the model is shown both correct and incorrect rationales so it can learn what to avoid; these approaches are often evaluated with metrics beyond simple accuracy. This work is significant because it aims to create more trustworthy and explainable AI systems, with potential applications in fields that demand complex reasoning, such as legal analysis, scientific discovery, and decision-making in autonomous systems.
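To make the contrastive-prompting idea concrete, here is a minimal sketch of how such a prompt can be assembled: a demonstration pairs a correct rationale with a plausible-but-wrong one, and the new question is appended afterward. The example question, rationales, and prompt wording are illustrative placeholders, not taken from any specific paper; the resulting string would be sent to whatever LLM completion endpoint is in use.

```python
# Sketch of contrastive chain-of-thought prompting: the demonstration
# contrasts a correct rationale with an incorrect one, annotated with
# the mistake, before posing a new question. All content below is a
# hypothetical illustration of the technique, not a reference prompt.

CONTRASTIVE_DEMO = """\
Question: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. \
How many balls does he have now?

Correct reasoning: Roger starts with 5 balls. 2 cans of 3 balls is \
2 * 3 = 6 balls. 5 + 6 = 11. The answer is 11.

Incorrect reasoning: Roger starts with 5 balls. He buys 2 cans, so he \
has 5 + 2 = 7 balls. The answer is 7.
(The mistake: it counts the cans instead of the balls inside them.)
"""

def build_contrastive_prompt(question: str) -> str:
    """Prepend the contrastive demonstration to a new question."""
    return (
        CONTRASTIVE_DEMO
        + "\nNow answer the new question, reasoning step by step and "
          "avoiding the kind of mistake shown above.\n"
        + f"Question: {question}\nCorrect reasoning:"
    )

if __name__ == "__main__":
    prompt = build_contrastive_prompt(
        "A baker makes 4 trays of 6 muffins and sells 9. How many are left?"
    )
    print(prompt)  # pass this string to any LLM completion API
```

The design intuition is that showing the model a labeled failure mode alongside the correct derivation constrains its generation more tightly than correct examples alone, which is the core claim behind contrastive variants of chain-of-thought prompting.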