Chain of Thought
Chain-of-thought (CoT) prompting improves the reasoning abilities of large language models (LLMs) by eliciting intermediate reasoning steps before a final answer. Current research focuses on making CoT more effective and efficient through techniques such as multi-perspective verification, incorporation of external knowledge (e.g., symbolic knowledge or multimodal information), and optimization of the reasoning process itself (e.g., compressed reasoning representations or adaptive sampling). This work is significant because it addresses known limitations in LLM reasoning, improving performance on complex tasks across diverse domains, including question answering, translation, and even medical diagnosis.
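Since the summary above describes the technique only abstractly, a minimal sketch of the two-stage zero-shot CoT pattern ("Let's think step by step", from Kojima et al., 2022) may help. This is illustrative only: the model callable is a placeholder for whatever LLM client you use, and the prompt wording is one common convention, not the method of any specific paper listed below.

from typing import Callable

# Any function mapping a prompt string to a completion string,
# e.g. a thin wrapper around your provider's chat or completion API.
Model = Callable[[str], str]

COT_SUFFIX = "\nA: Let's think step by step."

def cot_answer(model: Model, question: str) -> tuple[str, str]:
    """Zero-shot chain-of-thought in two calls.

    First call elicits intermediate reasoning; second call asks for
    the final answer conditioned on that reasoning. Returns the pair
    (reasoning, final_answer).
    """
    prompt = f"Q: {question}{COT_SUFFIX}"
    reasoning = model(prompt)
    final = model(f"{prompt} {reasoning}\nTherefore, the final answer is:")
    return reasoning, final.strip()

# Usage (hypothetical client): reasoning, answer = cot_answer(my_client,
# "A bat and a ball cost $1.10 in total. The bat costs $1.00 more than
# the ball. How much does the ball cost?")

Keeping the model as a plain callable keeps the sketch provider-agnostic; verification, external-knowledge, and sampling extensions mentioned above would wrap or repeat these two calls.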
Papers
Mind's Mirror: Distilling Self-Evaluation Capability and Comprehensive Thinking from Large Language Models
Weize Liu, Guocong Li, Kai Zhang, Bang Du, Qiyuan Chen, Xuming Hu, Hongxia Xu, Jintai Chen, Jian Wu
The Role of Chain-of-Thought in Complex Vision-Language Reasoning Task
Yifan Wu, Pengchuan Zhang, Wenhan Xiong, Barlas Oguz, James C. Gee, Yixin Nie
Ask more, know better: Reinforce-Learned Prompt Questions for Decision Making with Large Language Models
Xue Yan, Yan Song, Xinyu Cui, Filippos Christianos, Haifeng Zhang, David Henry Mguni, Jun Wang
Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory
Niloofar Mireshghallah, Hyunwoo Kim, Xuhui Zhou, Yulia Tsvetkov, Maarten Sap, Reza Shokri, Yejin Choi
MuSR: Testing the Limits of Chain-of-thought with Multistep Soft Reasoning
Zayne Sprague, Xi Ye, Kaj Bostrom, Swarat Chaudhuri, Greg Durrett
FANToM: A Benchmark for Stress-testing Machine Theory of Mind in Interactions
Hyunwoo Kim, Melanie Sclar, Xuhui Zhou, Ronan Le Bras, Gunhee Kim, Yejin Choi, Maarten Sap