Multi-Agent Discussion
Multi-agent discussion leverages multiple large language models (LLMs) to collaboratively address complex tasks, aiming to improve accuracy, calibration, and reasoning capabilities beyond what single LLMs can achieve. Current research focuses on designing effective discussion frameworks, including strategies like chain-of-thought prompting and mechanisms to manage agent persona consistency and confidence calibration. This approach shows promise for enhancing various applications, such as open-ended text evaluation, question answering, and financial sentiment analysis, by harnessing the "collective wisdom" of multiple LLMs to produce more reliable and nuanced results.
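To make the typical discussion loop concrete, the sketch below shows one common pattern: agents answer independently with chain-of-thought prompting, review each other's answers over a few rounds, and a final answer is chosen by majority vote. The `call_llm` helper, the agent count, and the prompt wording are illustrative assumptions rather than any specific paper's method.

```python
# Minimal sketch of a multi-agent discussion loop (assumptions noted above).
from collections import Counter

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call (e.g. a chat-completion request)."""
    raise NotImplementedError("Wire this to your LLM provider of choice.")

def multi_agent_discussion(question: str, n_agents: int = 3, n_rounds: int = 2) -> str:
    # Round 0: each agent answers independently, with chain-of-thought prompting.
    answers = [
        call_llm(f"Question: {question}\nThink step by step, then give a final answer.")
        for _ in range(n_agents)
    ]
    # Discussion rounds: each agent sees the other agents' answers and may revise its own.
    for _ in range(n_rounds):
        new_answers = []
        for i in range(n_agents):
            peers = "\n".join(
                f"Agent {j}: {a}" for j, a in enumerate(answers) if j != i
            )
            prompt = (
                f"Question: {question}\n"
                f"Other agents answered:\n{peers}\n"
                f"Your previous answer: {answers[i]}\n"
                "Considering these, think step by step and give an updated final answer."
            )
            new_answers.append(call_llm(prompt))
        answers = new_answers
    # Aggregate the final-round answers by simple majority vote.
    return Counter(answers).most_common(1)[0][0]
```

Variations reported in the literature swap the majority vote for a judge agent, assign distinct personas to each agent, or weight votes by each agent's stated confidence; the loop structure stays essentially the same.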
Papers