Chain of Thought Prompting
Chain-of-thought (CoT) prompting is a technique that enhances the reasoning abilities of large language models (LLMs) by guiding them to solve problems step by step, mimicking human thought processes. Current research focuses on improving the effectiveness of CoT prompting through various methods, including iterative refinement, contrastive learning, and the integration of external knowledge sources, often applied to models such as GPT-3/4 and Llama. The approach improves LLM performance across diverse tasks, from mathematical problem solving and question answering to more complex domains such as planning and clinical reasoning, yielding more reliable and interpretable AI systems.
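The step-by-step guidance described above is, in its simplest zero-shot form, just a change to the prompt text: a reasoning trigger such as "Let's think step by step" is appended so the model writes out intermediate steps before its final answer. A minimal sketch is below; the model call itself is omitted, since any LLM API could be substituted, and the prompt-building helpers here are illustrative rather than taken from any specific paper's code.

```python
def build_direct_prompt(question: str) -> str:
    """Plain prompt: the model is asked for the answer directly."""
    return f"Q: {question}\nA:"


def build_cot_prompt(question: str) -> str:
    """Zero-shot CoT prompt: append a reasoning trigger so the model
    produces intermediate reasoning steps before the final answer."""
    return f"Q: {question}\nA: Let's think step by step."


# Example question where direct answers often go wrong but step-by-step
# reasoning tends to help.
question = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)
print(build_direct_prompt(question))
print(build_cot_prompt(question))
```

Few-shot CoT variants work the same way, except the prompt is prefixed with worked examples whose answers include written-out reasoning, so the model imitates that format on the new question.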
Papers
In Context Learning and Reasoning for Symbolic Regression with Large Language Models
Samiha Sharlin, Tyler R. Josephson
SG-FSM: A Self-Guiding Zero-Shot Prompting Paradigm for Multi-Hop Question Answering Based on Finite State Machine
Xiaochen Wang, Junqing He, Liang Chen, Reza Haf, Zhe Yang, Yiru Wang, Xiangdi Meng, Kunhao Pan, Zhifang Sui