Multi-Agent Debate
Multi-agent debate leverages multiple large language models (LLMs) to collaboratively solve complex reasoning tasks, aiming to improve accuracy and reliability beyond the capabilities of single LLMs. Current research focuses on optimizing debate efficiency (e.g., through group discussions and sparse communication topologies), mitigating LLM hallucinations (e.g., via uncertainty estimation and counterfactual arguments), and enhancing the trustworthiness of LLM-generated explanations. This approach holds significant promise for improving the reliability and explainability of AI systems across various applications, from question answering and fact-checking to more complex decision-making processes in fields like healthcare.
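The core debate loop described above (independent first answers, rounds of peer review, then aggregation) can be sketched as follows. This is a minimal, deterministic toy: `make_agent` and `debate` are illustrative names, and the stub agents stand in for what would be separate LLM calls in a real system.

```python
from collections import Counter

def make_agent(initial_answer, revised_answer):
    """Stub agent: answers with initial_answer, but switches to
    revised_answer if a majority of its peers disagree.
    A real agent would be a prompted LLM seeing peers' responses."""
    def agent(question, peer_answers):
        if peer_answers:
            majority = Counter(peer_answers).most_common(1)[0][0]
            if majority != initial_answer:
                return revised_answer
        return initial_answer
    return agent

def debate(question, agents, rounds=2):
    """Simple multi-agent debate: each agent first answers independently,
    then for a fixed number of rounds sees the other agents' latest
    answers and may revise. The final answer is a majority vote."""
    answers = [agent(question, []) for agent in agents]  # independent pass
    for _ in range(rounds):
        answers = [
            agent(question, answers[:i] + answers[i + 1:])  # peers only
            for i, agent in enumerate(agents)
        ]
    return Counter(answers).most_common(1)[0][0]
```

For example, with three agents where one initially answers incorrectly, the dissenting agent sees that its peers agree with each other and revises, so `debate("What is 2+2?", agents)` converges on the majority answer. Sparse communication topologies, as mentioned above, would restrict which peers' answers each agent sees in the inner loop.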
Papers
SWE-Search: Enhancing Software Agents with Monte Carlo Tree Search and Iterative Refinement
Antonis Antoniades, Albert Örwall, Kexun Zhang, Yuxi Xie, Anirudh Goyal, William Wang
MAD-Sherlock: Multi-Agent Debates for Out-of-Context Misinformation Detection
Kumud Lakara, Juil Sock, Christian Rupprecht, Philip Torr, John Collomosse, Christian Schroeder de Witt