Program-Based Reasoning
Program-based reasoning enhances large language models (LLMs) by coupling them with programming capabilities, improving their ability to solve complex reasoning tasks that require data analysis and multi-step operations. Current research emphasizes benchmarks that evaluate these models on diverse quantitative reasoning problems, including statistical and causal inference, and explores techniques such as reverse curriculum reinforcement learning and few-shot prompting to improve accuracy and calibration. This approach holds significant promise for advancing AI's capacity for complex reasoning, with applications in fact-checking, data analysis, and even neural network verification.
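The core loop described above can be sketched as follows. This is a minimal, illustrative example of the program-generation-and-execution pattern (in the spirit of Program-of-Thoughts prompting), not any specific paper's implementation: the LLM call is stubbed with a hardcoded program, and all function names are hypothetical.

```python
def stub_llm_generate_program(question: str) -> str:
    """Stand-in for an LLM call: in practice, a model would generate
    Python code from the natural-language question (an assumption here)."""
    return (
        "price = 12.50\n"
        "quantity = 4\n"
        "discount = 0.10\n"
        "answer = price * quantity * (1 - discount)\n"
    )

def solve_with_program(question: str) -> float:
    """Generate a program for the question, execute it in a restricted
    namespace, and read back the `answer` variable."""
    program = stub_llm_generate_program(question)
    namespace: dict = {}
    # Empty __builtins__ limits what the generated code can touch;
    # a production system would need proper sandboxing.
    exec(program, {"__builtins__": {}}, namespace)
    return namespace["answer"]

result = solve_with_program(
    "Four items cost $12.50 each with a 10% discount. What is the total?"
)
print(result)  # 45.0
```

Delegating arithmetic to an executed program, rather than having the model produce the number directly in text, is what lets these systems handle multi-step quantitative reasoning reliably.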