Chain of Thought
Chain of Thought (CoT) prompting enhances the reasoning abilities of large language models (LLMs) by encouraging them to generate intermediate reasoning steps before arriving at a final answer. Current research focuses on improving CoT's effectiveness through techniques like multi-perspective verification, incorporating external knowledge (e.g., symbolic knowledge or multi-modal information), and optimizing the efficiency of the reasoning process (e.g., through compressed representations or adaptive sampling). This work is significant because it addresses limitations in LLMs' reasoning capabilities, leading to improved performance on complex tasks across diverse domains, including question answering, translation, and even medical diagnosis.
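The core mechanic described above — prompting the model to emit intermediate steps and then reading the answer off the end of the trace — can be sketched in a few lines. This is a minimal illustration, not any specific paper's method: the `Final answer:` convention and the example trace are assumptions for demonstration, and the actual model call is omitted.

```python
import re

def build_cot_prompt(question: str) -> str:
    # Zero-shot CoT: append a cue that elicits intermediate reasoning
    # steps before the final answer ("Let's think step by step").
    return f"Q: {question}\nA: Let's think step by step."

def extract_final_answer(reasoning: str) -> str:
    # Assumed convention: the chain of thought ends with a line
    # "Final answer: <answer>"; otherwise fall back to the last line.
    match = re.search(r"Final answer:\s*(.+)", reasoning)
    if match:
        return match.group(1).strip()
    return reasoning.strip().splitlines()[-1]

# A chain of thought a model might return (illustrative, not real output).
trace = (
    "There are 3 boxes with 4 apples each, so 3 * 4 = 12 apples.\n"
    "Eating 2 leaves 12 - 2 = 10.\n"
    "Final answer: 10"
)
print(extract_final_answer(trace))  # -> 10
```

The value of the intermediate steps is that each one is a smaller, easier prediction than jumping straight to the answer; the extraction step simply separates the reasoning trace from the answer the task is scored on.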
Papers
Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought Reasoning
Hao Shao, Shengju Qian, Han Xiao, Guanglu Song, Zhuofan Zong, Letian Wang, Yu Liu, Hongsheng Li
DriveCoT: Integrating Chain-of-Thought Reasoning with End-to-End Driving
Tianqi Wang, Enze Xie, Ruihang Chu, Zhenguo Li, Ping Luo
MasonTigers at SemEval-2024 Task 9: Solving Puzzles with an Ensemble of Chain-of-Thoughts
Md Nishat Raihan, Dhiman Goswami, Al Nahian Bin Emran, Sadiya Sayara Chowdhury Puspo, Amrita Ganguly, Marcos Zampieri
Stance Reasoner: Zero-Shot Stance Detection on Social Media with Explicit Reasoning
Maksym Taranukhin, Vered Shwartz, Evangelos Milios