Multi-Agent Debate
Multi-agent debate has multiple large language models (LLMs) answer the same problem, exchange and critique one another's responses over several rounds, and converge on a final answer, with the goal of improving accuracy and reliability beyond what a single model achieves on its own. Current research focuses on making debate more efficient (e.g., through group discussions and sparse communication topologies), mitigating hallucinations (e.g., via uncertainty estimation and counterfactual arguments), and improving the trustworthiness of LLM-generated explanations. The approach shows promise for making AI systems more reliable and explainable across applications ranging from question answering and fact-checking to higher-stakes decision-making in fields such as healthcare.
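As a concrete illustration of the basic protocol, the sketch below runs a few rounds in which each agent first answers independently, then reads the other agents' latest answers and revises its own. It assumes a fully connected topology and a hypothetical `query_llm` helper standing in for whatever chat-completion API is available; it is a minimal sketch of the general technique, not the exact method of any particular paper.

```python
from typing import List

NUM_AGENTS = 3
NUM_ROUNDS = 2

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in a real chat-completion client here."""
    raise NotImplementedError

def debate(question: str,
           num_agents: int = NUM_AGENTS,
           num_rounds: int = NUM_ROUNDS) -> List[str]:
    # Round 0: each agent answers the question independently.
    answers = [
        query_llm(f"Question: {question}\nGive your answer and reasoning.")
        for _ in range(num_agents)
    ]

    # Debate rounds: each agent sees the other agents' answers and may revise.
    for _ in range(num_rounds):
        new_answers = []
        for i in range(num_agents):
            others = "\n\n".join(a for j, a in enumerate(answers) if j != i)
            prompt = (
                f"Question: {question}\n"
                f"Other agents answered:\n{others}\n\n"
                f"Your previous answer:\n{answers[i]}\n\n"
                "Treating the other answers as additional evidence, "
                "give your updated answer and reasoning."
            )
            new_answers.append(query_llm(prompt))
        answers = new_answers

    # The final answers are typically aggregated by majority vote or a judge model.
    return answers
```

The sparse communication topologies mentioned above correspond to restricting which entries of `answers` each agent is shown in a round, which reduces token cost while preserving most of the accuracy gains.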