Agent System
Agent systems are autonomous software entities that plan and carry out multi-step tasks, with the goal of improving efficiency and decision-making across diverse fields. Current research focuses on making agents more controllable and safe, typically by building multi-agent frameworks around large language models (LLMs) and applying techniques such as chain-of-thought reasoning, hierarchical task delegation, and adversarial training to improve robustness and accuracy. These advances could automate work in areas such as software engineering, materials science, and scientific research itself, streamlining workflows and accelerating progress.
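To make the hierarchical task delegation pattern concrete, the sketch below shows a minimal two-level planner/worker loop in Python. It is an illustration only, not a description of any method in the papers listed below: the names `call_llm`, `PlannerAgent`, and `WorkerAgent` are hypothetical placeholders, and the task decomposition is hard-coded where a real system would parse it from a model response.

```python
# Minimal sketch of hierarchical task delegation in a two-level agent system.
# `call_llm` is a hypothetical placeholder for any chat-completion backend;
# the planner/worker split and prompts are illustrative assumptions.
from dataclasses import dataclass
from typing import List


def call_llm(prompt: str) -> str:
    """Placeholder LLM call; swap in a real chat-completion client here."""
    return f"[model output for: {prompt[:60]}...]"


@dataclass
class WorkerAgent:
    """Low-level agent that executes a single concrete subtask."""
    name: str

    def run(self, subtask: str) -> str:
        return call_llm(f"You are {self.name}. Complete this subtask:\n{subtask}")


@dataclass
class PlannerAgent:
    """High-level agent that decomposes a task and delegates to workers."""
    workers: List[WorkerAgent]

    def plan(self, task: str) -> List[str]:
        # In practice the subtask list would be parsed from the LLM response;
        # here a fixed two-step decomposition stands in for illustration.
        _ = call_llm(f"Break this task into ordered subtasks:\n{task}")
        return [f"Step 1 of: {task}", f"Step 2 of: {task}"]

    def solve(self, task: str) -> List[str]:
        results = []
        for i, subtask in enumerate(self.plan(task)):
            worker = self.workers[i % len(self.workers)]  # round-robin delegation
            results.append(worker.run(subtask))
        return results


if __name__ == "__main__":
    planner = PlannerAgent(workers=[WorkerAgent("UI-agent"), WorkerAgent("Tool-agent")])
    for output in planner.solve("Automate filing an expense report on a mobile device"):
        print(output)
```

The round-robin assignment is a stand-in for whatever routing policy a real system would use (for example, matching a subtask to the worker whose tools or model size fit it best); the point of the sketch is only the separation between a high-level planner and low-level executors.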
Papers
MobA: A Two-Level Agent System for Efficient Mobile Task Automation
Zichen Zhu, Hao Tang, Yansi Li, Kunyao Lan, Yixuan Jiang, Hao Zhou, Yixiao Wang, Situo Zhang, Liangtai Sun, Lu Chen, Kai Yu
AdaSwitch: Adaptive Switching between Small and Large Agents for Effective Cloud-Local Collaborative Learning
Hao Sun, Jiayi Wu, Hengyi Cai, Xiaochi Wei, Yue Feng, Bo Wang, Shuaiqiang Wang, Yan Zhang, Dawei Yin
Aegis: An Advanced LLM-Based Multi-Agent for Intelligent Functional Safety Engineering
Lu Shi, Bin Qi, Jiarui Luo, Yang Zhang, Zhanzhao Liang, Zhaowei Gao, Wenke Deng, Yang Sun
PRefLexOR: Preference-based Recursive Language Modeling for Exploratory Optimization of Reasoning and Agentic Thinking
Markus J. Buehler
Proactive Agent: Shifting LLM Agents from Reactive Responses to Active Assistance
Yaxi Lu, Shenzhi Yang, Cheng Qian, Guirong Chen, Qinyu Luo, Yesai Wu, Huadong Wang, Xin Cong, Zhong Zhang, Yankai Lin, Weiwen Liu, Yasheng Wang, Zhiyuan Liu, Fangming Liu, Maosong Sun