Open-Ended Learning
Research on open-ended learning focuses on developing AI agents that can continuously learn and adapt to novel, unforeseen tasks and environments, moving beyond predefined goals and fixed datasets. Current efforts concentrate on combining large language models (LLMs) with reinforcement learning (RL), often augmented by retrieval-augmented generation (RAG) and architectures such as mixture-of-experts models, to build more robust and generalizable agents. This research is significant because it addresses key limitations of current AI systems, paving the way for more adaptable and versatile agents with applications in education, robotics, and human-computer interaction.
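To make the LLM-plus-RL-plus-RAG combination described above concrete, the sketch below shows a minimal, toy open-ended agent loop: a placeholder LLM step proposes the next task, a naive retrieval step pulls previously learned skills into context, and a scalar environment reward decides whether the new skill is kept. This is an illustrative assumption, not the method of any paper listed here; all function names (propose_task, generate_solution, run_in_environment) and their bodies are hypothetical stand-ins.

```python
"""Minimal conceptual sketch of an open-ended agent loop (toy placeholders only)."""

import random
from dataclasses import dataclass, field


@dataclass
class SkillLibrary:
    """Stores (task, solution) pairs; retrieval is a naive keyword overlap (toy RAG)."""
    skills: dict = field(default_factory=dict)

    def retrieve(self, task: str, k: int = 3) -> list:
        # Rank stored skills by word overlap with the new task description.
        scored = sorted(
            self.skills.items(),
            key=lambda kv: -len(set(kv[0].split()) & set(task.split())),
        )
        return [solution for _, solution in scored[:k]]

    def add(self, task: str, solution: str) -> None:
        self.skills[task] = solution


def propose_task(history: list) -> str:
    """Hypothetical stand-in for an LLM curriculum step: suggest a novel task."""
    return f"task {len(history)}: explore variant {random.randint(0, 9)}"


def generate_solution(task: str, retrieved: list) -> str:
    """Hypothetical stand-in for LLM solution generation conditioned on retrieval."""
    return f"solution for '{task}' reusing {len(retrieved)} prior skills"


def run_in_environment(solution: str) -> float:
    """Hypothetical stand-in for an environment rollout; returns a scalar reward."""
    return random.random()


def open_ended_loop(iterations: int = 5, reward_threshold: float = 0.5) -> SkillLibrary:
    library, history = SkillLibrary(), []
    for _ in range(iterations):
        task = propose_task(history)            # LLM proposes what to learn next
        retrieved = library.retrieve(task)      # RAG: pull relevant prior skills
        solution = generate_solution(task, retrieved)
        reward = run_in_environment(solution)   # RL-style feedback signal
        if reward >= reward_threshold:          # keep only skills that work
            library.add(task, solution)
        history.append((task, reward))
    return library


if __name__ == "__main__":
    library = open_ended_loop()
    print(f"learned {len(library.skills)} skills")
```

The design choice to gate additions to the skill library on environment feedback is what distinguishes this kind of loop from a fixed-dataset pipeline: the agent's curriculum and knowledge base grow as a function of its own experience rather than a predefined task list.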
Papers
An Overview and Discussion on Using Large Language Models for Implementation Generation of Solutions to Open-Ended Problems
Hashmath Shaik, Alex Doboli
EQUATOR: A Deterministic Framework for Evaluating LLM Reasoning with Open-Ended Questions. # v1.0.0-beta
Raymond Bernard, Shaina Raza (PhD), Subhabrata Das (PhD), Rahul Murugan