Chain-of-Thought Prompting
Chain-of-thought (CoT) prompting is a technique that enhances the reasoning abilities of large language models (LLMs) by guiding them to solve problems step by step, mimicking human thought processes. Current research focuses on improving the effectiveness of CoT prompting through various methods, including iterative refinement, contrastive learning, and the integration of external knowledge sources, often applied to models such as GPT-3/4 and Llama. This approach significantly improves LLM performance across diverse tasks, from mathematical problem-solving and question answering to more complex domains such as planning and clinical reasoning, leading to more reliable and interpretable AI systems.
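To make the idea concrete, the sketch below builds a few-shot CoT prompt: a worked, step-by-step exemplar is prepended to a new question so the model imitates the reasoning style. This is a minimal illustration only; the exemplar and the `build_cot_prompt` helper are hypothetical, and no particular model API is assumed, so the code just constructs the prompt string.

```python
# Minimal sketch of few-shot chain-of-thought prompting.
# The exemplar question/answer pair is illustrative; in practice you would
# pass the resulting prompt to an LLM of your choice.

COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of tennis balls; "
    "each can has 3 balls. How many tennis balls does he have now?\n"
    "A: Roger starts with 5 balls. 2 cans of 3 balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend a worked step-by-step exemplar, then invite stepwise reasoning."""
    return f"{COT_EXEMPLAR}\nQ: {question}\nA: Let's think step by step."

prompt = build_cot_prompt(
    "A cafeteria had 23 apples. It used 20 to make lunch and bought 6 more. "
    "How many apples does it have now?"
)
print(prompt)
```

The trailing "Let's think step by step." cue is the zero-shot CoT variant; combining it with a worked exemplar, as here, is a common few-shot formulation.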