Reasoning Bias
Reasoning bias research investigates systematic errors in logical thinking, both in humans and, increasingly, in large language models (LLMs). Current studies analyze these biases using syllogistic reasoning tasks and explore how techniques such as chain-of-thought prompting and bias-augmented consistency training can mitigate them in LLMs, particularly in models like GPT-3 and GPT-4. Understanding these biases is crucial both for improving the reliability and trustworthiness of AI systems and for gaining insight into the cognitive processes underlying human reasoning, with implications for building more robust, less biased AI across a range of applications.
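One classic syllogistic-reasoning paradigm used in this literature is the belief-bias design, which crosses logical validity with conclusion believability: a biased reasoner endorses believable conclusions more often than unbelievable ones, regardless of validity. Below is a minimal sketch of such a probe, with a chain-of-thought prompt of the kind mentioned above; the items, function names, and bias index are illustrative assumptions, not from any specific paper.

```python
# Hedged sketch (hypothetical items and helper names) of a belief-bias probe
# for syllogistic reasoning. Items cross logical validity with conclusion
# believability; a belief-biased reasoner accepts believable conclusions
# regardless of whether they actually follow from the premises.

from dataclasses import dataclass

@dataclass(frozen=True)
class Syllogism:
    premises: tuple
    conclusion: str
    valid: bool        # conclusion follows logically from the premises
    believable: bool   # conclusion matches world knowledge

ITEMS = [
    # Invalid but believable (affirming the consequent):
    Syllogism(("All flowers need water.", "Roses need water."),
              "Roses are flowers.", valid=False, believable=True),
    # Valid but unbelievable (sound form, false premise):
    Syllogism(("All mammals walk.", "Whales are mammals."),
              "Whales walk.", valid=True, believable=False),
]

def cot_prompt(item: Syllogism) -> str:
    """Chain-of-thought style prompt: ask for step-by-step reasoning
    about logical validity only (one mitigation the text mentions)."""
    return (
        f"Premises: {' '.join(item.premises)}\n"
        f"Conclusion: {item.conclusion}\n"
        "Think step by step about whether the conclusion follows "
        "logically from the premises alone, ignoring real-world truth. "
        "Answer 'valid' or 'invalid'."
    )

def belief_bias_index(endorsed: dict) -> float:
    """Endorsement-rate gap: P(accept | believable) - P(accept | unbelievable).
    `endorsed` maps each Syllogism to True if the reasoner accepted it."""
    def rate(flag: bool) -> float:
        group = [i for i in endorsed if i.believable is flag]
        return sum(endorsed[i] for i in group) / len(group)
    return rate(True) - rate(False)

# A maximally belief-biased reasoner accepts exactly the believable items:
biased = {item: item.believable for item in ITEMS}
print(belief_bias_index(biased))  # → 1.0
```

An unbiased, logic-following reasoner (accepting exactly the valid items) would score -1.0 on this two-item set, so the index separates belief-driven from validity-driven responding.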