Open-Ended Learning
Research on open-ended learning focuses on developing AI agents that can continuously learn and adapt to novel, unforeseen tasks and environments, moving beyond predefined goals and fixed datasets. Current efforts leverage large language models (LLMs) and reinforcement learning (RL), often combined with retrieval-augmented generation (RAG) and mixture-of-experts architectures, to build more robust and generalizable agents. This research is significant because it addresses the brittleness of current AI systems, paving the way for more adaptable and versatile agents with applications in education, robotics, and human-computer interaction.
Papers
(Ir)rationality in AI: State of the Art, Research Challenges and Open Questions
Olivia Macmillan-Scott, Mirco Musolesi
COLE: A Hierarchical Generation Framework for Multi-Layered and Editable Graphic Design
Peidong Jia, Chenxuan Li, Yuhui Yuan, Zeyu Liu, Yichao Shen, Bohan Chen, Xingru Chen, Yinglin Zheng, Dong Chen, Ji Li, Xiaodong Xie, Shanghang Zhang, Baining Guo