Chain of Thought
Chain of Thought (CoT) prompting enhances the reasoning abilities of large language models (LLMs) by encouraging them to generate intermediate reasoning steps before arriving at a final answer. Current research focuses on improving CoT's effectiveness through techniques like multi-perspective verification, incorporating external knowledge (e.g., symbolic knowledge or multi-modal information), and optimizing the efficiency of the reasoning process (e.g., through compressed representations or adaptive sampling). This work is significant because it addresses limitations in LLMs' reasoning capabilities, leading to improved performance on complex tasks across diverse domains, including question answering, translation, and even medical diagnosis.
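To make the core idea concrete, below is a minimal sketch of zero-shot CoT prompting. The `query_llm` function is a hypothetical placeholder for any LLM completion call (e.g., an OpenAI-compatible client), not part of any specific library; the contrast between the two prompts is the point.

```python
# Minimal sketch of zero-shot Chain-of-Thought (CoT) prompting.
# `query_llm` is a hypothetical stand-in for a real LLM API call;
# swap in your provider's client before running.

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real API request."""
    raise NotImplementedError("Plug in your LLM provider here.")

question = (
    "A cafeteria had 23 apples. It used 20 for lunch and bought 6 more. "
    "How many apples does it have now?"
)

# Direct prompting: the model is asked for the answer immediately.
direct_prompt = f"Q: {question}\nA:"

# Zero-shot CoT prompting: the trigger phrase elicits intermediate
# reasoning steps before the final answer (Kojima et al., 2022).
cot_prompt = f"Q: {question}\nA: Let's think step by step."

# answer = query_llm(cot_prompt)
```

Few-shot CoT works the same way but prepends worked examples containing explicit reasoning traces; the trigger-phrase variant above is the lighter-weight zero-shot form.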
Papers
Fine-Tuning with Divergent Chains of Thought Boosts Reasoning Through Self-Correction in Language Models
Haritz Puerto, Tilek Chubakov, Xiaodan Zhu, Harish Tayyar Madabushi, Iryna Gurevych
FSM: A Finite State Machine Based Zero-Shot Prompting Paradigm for Multi-Hop Question Answering
Xiaochen Wang, Junqing He, Zhe Yang, Yiru Wang, Xiangdi Meng, Kunhao Pan, Zhifang Sui