Chain of Thought
Chain of Thought (CoT) prompting enhances the reasoning abilities of large language models (LLMs) by encouraging them to generate intermediate reasoning steps before arriving at a final answer. Current research focuses on improving CoT's effectiveness through techniques such as multi-perspective verification, the incorporation of external knowledge (e.g., symbolic knowledge or multi-modal information), and the optimization of reasoning efficiency (e.g., via compressed representations or adaptive sampling). This work is significant because it addresses limitations in LLMs' reasoning capabilities, yielding improved performance on complex tasks across diverse domains, including question answering, translation, and even medical diagnosis.
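As a minimal illustration of the basic technique (a hedged sketch, not drawn from any of the papers below), a CoT prompt prepends a worked example with explicit reasoning steps and asks the model to reason before answering. Here, `call_model`, `build_cot_prompt`, and `extract_answer` are hypothetical names; the placeholder `call_model` stands in for whatever LLM client is actually used.

```python
# Minimal sketch of Chain-of-Thought (CoT) prompting.
# `call_model` is a hypothetical placeholder for a real LLM API call.

# One worked example with explicit intermediate steps, in the style of
# few-shot CoT prompting (Wei et al.).
FEW_SHOT_COT = """\
Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. How many balls does he have now?
A: Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11. The answer is 11.
"""


def build_cot_prompt(question: str) -> str:
    """Prepend the worked example so the model imitates step-by-step reasoning."""
    return f"{FEW_SHOT_COT}\nQ: {question}\nA: Let's think step by step."


def extract_answer(response: str) -> str:
    """Return the text after the final 'The answer is' marker, if present."""
    marker = "The answer is"
    if marker in response:
        return response.rsplit(marker, 1)[-1].strip(" .")
    return response.strip()


def call_model(prompt: str) -> str:
    # Hypothetical stand-in: replace with an actual LLM client call.
    raise NotImplementedError


if __name__ == "__main__":
    prompt = build_cot_prompt("A bakery sells 12 muffins per tray. How many muffins are on 4 trays?")
    print(prompt)  # In practice: extract_answer(call_model(prompt))
```

Efficiency-oriented variants like those surveyed above change how the reasoning trace is produced or consumed (e.g., sampling several traces and voting over the extracted answers), but the prompt construction itself stays this simple.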
Papers
UBENCH: Benchmarking Uncertainty in Large Language Models with Multiple Choice Questions
Xunzhi Wang, Zhuowei Zhang, Qiongyu Li, Gaonan Chen, Mengting Hu, Zhiyu Li, Bitong Luo, Hang Gao, Zhixin Han, Haotian Wang
Nash CoT: Multi-Path Inference with Preference Equilibrium
Ziqi Zhang, Cunxiang Wang, Xiong Xiao, Yue Zhang, Donglin Wang
Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs
Xuan Zhang, Chao Du, Tianyu Pang, Qian Liu, Wei Gao, Min Lin
Chain-of-Thought (CoT) prompting strategies for medical error detection and correction
Zhaolong Wu, Abul Hasan, Jinge Wu, Yunsoo Kim, Jason P. Y. Cheung, Teng Zhang, Honghan Wu