Tree of Thought
Tree of Thought (ToT) is an approach for enhancing the reasoning capabilities of large language models (LLMs) by guiding them to explore multiple solution paths in a tree-like structure, rather than a single linear chain of thought. Current research focuses on improving ToT's efficiency and accuracy through various techniques, including multi-agent systems with validator agents, stochastic methods for exploring diverse reasoning paths, and the integration of symbolic logic or external knowledge bases. The framework holds significant promise for advancing LLM performance on complex reasoning tasks, particularly multi-hop question answering and scientific problem-solving, and for making AI systems more reliable and explainable.
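The core loop behind most ToT variants is a search over partial reasoning paths: propose several candidate next "thoughts", score each partial path, and keep only the most promising ones before expanding further. The sketch below illustrates this with a simple breadth-first search; it is a minimal illustration, not the canonical implementation, and the `propose` and `evaluate` callables are hypothetical stand-ins for LLM calls that generate candidate thoughts and score partial reasoning paths.

```python
import heapq
from typing import Callable, List

def tree_of_thoughts(
    problem: str,
    propose: Callable[[str, str], List[str]],  # hypothetical LLM call: propose next thoughts for a path
    evaluate: Callable[[str, str], float],     # hypothetical LLM call: score a partial reasoning path
    breadth: int = 3,                          # how many partial paths to keep per level
    depth: int = 4,                            # how many reasoning steps to take
) -> str:
    """Breadth-first Tree of Thought: expand each kept path into candidate
    next thoughts, score the resulting paths, and retain the top `breadth`
    paths for the next level."""
    frontier = [""]  # partial reasoning paths, starting from the empty path
    for _ in range(depth):
        candidates = []
        for path in frontier:
            for thought in propose(problem, path):
                new_path = f"{path}\n{thought}" if path else thought
                candidates.append((evaluate(problem, new_path), new_path))
        if not candidates:
            break
        # prune: keep only the highest-scoring partial paths
        frontier = [p for _, p in heapq.nlargest(breadth, candidates, key=lambda c: c[0])]
    # return the best reasoning path found
    return max(frontier, key=lambda p: evaluate(problem, p))
```

Swapping the breadth-first loop for depth-first search with backtracking, or for a stochastic sampler over candidate thoughts, recovers the other common ToT variants mentioned above without changing the propose/evaluate interface.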