LLM-Based Multi-Agent Systems
LLM-based multi-agent systems aim to extend the capabilities of large language models by enabling collaboration among multiple agents, each specializing in a different task or possessing distinct knowledge. Current research focuses on improving inter-agent communication, task decomposition strategies (such as meta-task planning), and robust handling of diverse input modalities (including visual and textual data), often employing architectures inspired by internet protocols or assembly-line paradigms. The field is significant because it addresses the limitations of single-agent LLMs on complex tasks, paving the way for more sophisticated applications in areas such as data science automation, software development, and urban planning.
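To make the collaboration pattern concrete, the sketch below shows one plausible shape for such a system: a planner agent that decomposes a task into subtasks and routes each one to a specialist agent. This is a minimal illustration under assumed names (call_llm, PlannerAgent, SpecialistAgent) and a hard-coded two-step plan; it is not the architecture of any of the papers listed below.

```python
# Minimal sketch of an LLM-based multi-agent loop: a planner decomposes a task
# into subtasks and routes each one to a specialist agent. All names here are
# hypothetical placeholders, not an API from any specific framework or paper.

from dataclasses import dataclass


def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned response for illustration."""
    return f"[model output for prompt: {prompt[:40]}...]"


@dataclass
class SpecialistAgent:
    name: str
    system_prompt: str  # describes the agent's specialty, e.g. "You write SQL."

    def run(self, subtask: str) -> str:
        # Each specialist prepends its own instructions before querying the model.
        return call_llm(f"{self.system_prompt}\nSubtask: {subtask}")


class PlannerAgent:
    def __init__(self, specialists: dict[str, SpecialistAgent]):
        self.specialists = specialists

    def decompose(self, task: str) -> list[tuple[str, str]]:
        # A real planner would ask the LLM to split the task; here the plan is
        # hard-coded so the example stays self-contained and runnable.
        return [
            ("analyst", f"Outline the data needed for: {task}"),
            ("coder", f"Draft code that addresses: {task}"),
        ]

    def solve(self, task: str) -> list[str]:
        # Execute the plan by dispatching each subtask to the matching specialist.
        return [self.specialists[role].run(subtask)
                for role, subtask in self.decompose(task)]


if __name__ == "__main__":
    planner = PlannerAgent({
        "analyst": SpecialistAgent("analyst", "You analyze data requirements."),
        "coder": SpecialistAgent("coder", "You write Python code."),
    })
    for output in planner.solve("summarize monthly sales by region"):
        print(output)
```

In a real system the stubbed call_llm would be replaced by an actual model client, and the planner's decomposition would itself be produced by the model rather than hard-coded; the point of the sketch is only the division of labor between a coordinating agent and specialized workers.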
Papers
Human-In-the-Loop Software Development Agents
Wannita Takerngsaksiri, Jirat Pasuksmit, Patanamon Thongtanunam, Chakkrit Tantithamthavorn, Ruixiong Zhang, Fan Jiang, Jing Li, Evan Cook, Kun Chen, Ming Wu
DynFocus: Dynamic Cooperative Network Empowers LLMs with Video Understanding
Yudong Han, Qingpei Guo, Liyuan Pan, Liu Liu, Yu Guan, Ming Yang