Monte Carlo Tree Search
Monte Carlo Tree Search (MCTS) is a decision-making algorithm that incrementally builds a search tree by simulating possible future outcomes, balancing exploration of untried actions against exploitation of actions that have performed well. Each iteration runs four phases: selection (descend the tree by a bandit rule such as UCT), expansion (add a new node), simulation (play out to a terminal state), and backpropagation (update statistics along the path). Current research focuses on enhancing MCTS's efficiency and applicability in diverse domains, including quantum computing, mathematical reasoning, and autonomous agent control, often integrating it with large language models and reinforcement learning. These advances are impacting fields like AI planning, game playing, and robotics by enabling more effective decision-making in complex, uncertain environments.
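The four phases can be sketched in a short, self-contained example. The toy game here (single-pile Nim: take 1-3 stones, the player who takes the last stone wins) and all names in the code are illustrative choices, not drawn from any of the papers below:

```python
import math
import random

# Toy game for the sketch: single-pile Nim. Players alternate removing
# 1-3 stones; whoever takes the last stone wins. (Illustrative only.)

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones = stones            # remaining stones (game state)
        self.parent = parent
        self.move = move                # move that produced this state
        self.children = []
        self.untried = legal_moves(stones)
        self.wins = 0.0                 # wins for the player who just moved
        self.visits = 0

    def uct_child(self, c=1.4):
        # UCT: win rate (exploitation) plus a visit-count bonus (exploration).
        return max(self.children,
                   key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def mcts(root_stones, iterations=3000, seed=0):
    rng = random.Random(seed)
    root = Node(root_stones)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend by UCT while the node is fully expanded.
        while not node.untried and node.children:
            node = node.uct_child()
        # 2. Expansion: add one unexplored child.
        if node.untried:
            move = rng.choice(node.untried)
            node.untried.remove(move)
            child = Node(node.stones - move, parent=node, move=move)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout to the end of the game.
        stones, moves_made = node.stones, 0
        while stones > 0:
            stones -= rng.choice(legal_moves(stones))
            moves_made += 1
        # An even number of rollout moves means the player who moved
        # into `node` took the last stone and wins.
        reward = 1.0 if moves_made % 2 == 0 else 0.0
        # 4. Backpropagation: alternate the reward up the two-player tree.
        while node is not None:
            node.visits += 1
            node.wins += reward
            reward = 1.0 - reward
            node = node.parent
    # Recommend the most-visited move from the root.
    return max(root.children, key=lambda ch: ch.visits).move
```

In this game a position is losing when the pile is a multiple of 4, so from 5 stones the search should converge on taking 1, and from 7 on taking 3. The negamax-style reward flip in backpropagation is what lets a single statistic serve both players; many of the papers below keep this skeleton and replace the random rollout with a learned value estimate.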
Papers
SWE-Search: Enhancing Software Agents with Monte Carlo Tree Search and Iterative Refinement
Antonis Antoniades, Albert Örwall, Kexun Zhang, Yuxi Xie, Anirudh Goyal, William Wang
DAWN-ICL: Strategic Planning of Problem-solving Trajectories for Zero-Shot In-Context Learning
Xinyu Tang, Xiaolei Wang, Wayne Xin Zhao, Ji-Rong Wen
ExACT: Teaching AI Agents to Explore with Reflective-MCTS and Exploratory Learning
Xiao Yu, Baolin Peng, Vineeth Vajipey, Hao Cheng, Michel Galley, Jianfeng Gao, Zhou Yu
Interpretable Contrastive Monte Carlo Tree Search Reasoning
Zitian Gao, Boye Niu, Xuzheng He, Haotian Xu, Hongzhang Liu, Aiwei Liu, Xuming Hu, Lijie Wen
Finding path and cycle counting formulae in graphs with Deep Reinforcement Learning
Jason Piquenot, Maxime Bérar, Pierre Héroux, Jean-Yves Ramel, Romain Raveaux, Sébastien Adam