Chain of Thought
Chain-of-Thought (CoT) prompting enhances the reasoning abilities of large language models (LLMs) by encouraging them to generate intermediate reasoning steps before committing to a final answer. Current research focuses on improving CoT's effectiveness through techniques such as multi-perspective verification, incorporation of external knowledge (e.g., symbolic knowledge or multi-modal information), and more efficient reasoning (e.g., compressed representations or adaptive sampling). This line of work matters because it addresses known weaknesses in LLM reasoning, yielding improved performance on complex tasks across diverse domains, including question answering, translation, and medical diagnosis.
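To make the basic mechanism concrete, below is a minimal sketch of zero-shot CoT prompting. The `call_llm` helper is a hypothetical placeholder for whatever chat-completion client you use; the trigger phrase and two-pass answer extraction follow the common zero-shot CoT recipe, not the method of any specific paper listed here.

```python
# Minimal sketch of zero-shot chain-of-thought prompting.
# `call_llm` is a hypothetical stand-in for any chat-completion API;
# swap in your provider's client (OpenAI, Anthropic, a local model, ...).

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real API client."""
    raise NotImplementedError("plug in your model provider here")

def direct_answer(question: str) -> str:
    # Baseline: ask for the answer directly, no intermediate reasoning.
    return call_llm(f"Q: {question}\nA:")

def cot_answer(question: str) -> str:
    # The trigger phrase elicits intermediate reasoning steps
    # before the model commits to a final answer.
    reasoning = call_llm(f"Q: {question}\nA: Let's think step by step.")
    # Second pass extracts a concise final answer from the reasoning trace.
    return call_llm(
        f"Q: {question}\nReasoning: {reasoning}\n"
        "Therefore, the final answer is:"
    )
```

The two-pass structure (free-form reasoning, then answer extraction) is one common way to separate the reasoning trace from the final prediction, which is also the part that techniques like verification and adaptive sampling typically operate on.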
Papers
Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?
Zhanke Zhou, Rong Tao, Jianing Zhu, Yiwen Luo, Zengmao Wang, Bo Han
OCEAN: Offline Chain-of-thought Evaluation and Alignment in Large Language Models
Junda Wu, Xintong Li, Ruoyu Wang, Yu Xia, Yuxin Xiong, Jianing Wang, Tong Yu, Xiang Chen, Branislav Kveton, Lina Yao, Jingbo Shang, Julian McAuley
A Theoretical Understanding of Chain-of-Thought: Coherent Reasoning and Error-Aware Demonstration
Yingqian Cui, Pengfei He, Xianfeng Tang, Qi He, Chen Luo, Jiliang Tang, Yue Xing
CoT-TL: Low-Resource Temporal Knowledge Representation of Planning Instructions Using Chain-of-Thought Reasoning
Kumar Manas, Stefan Zwicklbauer, Adrian Paschke
Improve Vision Language Model Chain-of-thought Reasoning
Ruohong Zhang, Bowen Zhang, Yanghao Li, Haotian Zhang, Zhiqing Sun, Zhe Gan, Yinfei Yang, Ruoming Pang, Yiming Yang