Thought Reasoning
Thought reasoning in artificial intelligence focuses on enabling large language models (LLMs) to perform complex, multi-step reasoning tasks that mirror human cognitive processes. Current research emphasizes improving the reliability and interpretability of LLM reasoning through techniques such as chain-of-thought prompting, graph-based reasoning structures (e.g., Tree of Thoughts, Graph of Thoughts), and the integration of symbolic logic and code execution. These advances are crucial for building more trustworthy and explainable AI systems, with implications for applications ranging from scientific discovery and medical diagnosis to decision support in high-stakes domains.
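The core chain-of-thought idea is simple enough to sketch in a few lines: prompt the model to emit its intermediate reasoning before committing to an answer, then separate the trace from the conclusion. The sketch below is illustrative only; the `generate` wrapper and the last-line answer-extraction convention are assumptions for demonstration, not the method of any paper listed here.

```python
# Minimal sketch of chain-of-thought (CoT) prompting, assuming a generic
# completion function generate(prompt) -> str stands in for any LLM backend.

def generate(prompt: str) -> str:
    """Hypothetical placeholder; wire this to an actual LLM completion API."""
    raise NotImplementedError("connect to an LLM backend")

COT_TEMPLATE = (
    "Q: {question}\n"
    "A: Let's think step by step."
)

def chain_of_thought(question: str) -> tuple[str, str]:
    """Elicit a visible reasoning trace, then extract the final answer.

    CoT prompting asks the model to write out intermediate steps before
    answering; the trace often improves multi-step accuracy and makes the
    output easier to inspect.
    """
    trace = generate(COT_TEMPLATE.format(question=question))
    # Assumed convention for this sketch: the last non-empty line carries
    # the final answer; everything before it is the reasoning trace.
    lines = [ln for ln in trace.strip().splitlines() if ln.strip()]
    answer = lines[-1] if lines else ""
    return trace, answer
```

Structured variants such as Tree of Thoughts generalize this by branching and scoring multiple partial traces rather than committing to a single linear one.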
Papers
Internalizing ASR with Implicit Chain of Thought for Efficient Speech-to-Speech Conversational LLM
Robin Shing-Hei Yuen, Timothy Tin-Long Tse, Jian Zhu
Proof of Thought: Neurosymbolic Program Synthesis allows Robust and Interpretable Reasoning
Debargha Ganguly, Srinivasan Iyengar, Vipin Chaudhary, Shivkumar Kalyanaraman
Judgment of Thoughts: Courtroom of the Binary Logical Reasoning in Large Language Models
Sungjune Park, Daeseon Choi