Skill Learning
Skill learning in robotics and AI focuses on enabling agents to acquire complex behaviors efficiently, often through reinforcement learning and imitation learning. Current research emphasizes more efficient and robust skill acquisition, including hierarchical approaches, the use of large language models for task decomposition and reward design, and the integration of multimodal data (vision, tactile, trajectory) within transformer and diffusion model architectures. These advances are crucial for building more adaptable and versatile robots that can perform a wider range of tasks in dynamic, unstructured environments, with implications for fields including manufacturing, healthcare, and domestic assistance.
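To make the hierarchical idea mentioned above concrete, here is a minimal, self-contained sketch in which a high-level policy learns, via tabular Q-learning, which of a few fixed low-level skills to invoke in a toy 1-D reaching task. The environment, skill set, and hyperparameters are illustrative assumptions and are not taken from any of the papers listed below.

```python
import random

# Toy 1-D grid world: the agent starts at 0 and must reach GOAL.
# This is an illustrative stand-in for a robot task, not any paper's benchmark.
GOAL = 10
N_STATES = GOAL + 1

# Hand-coded low-level "skills": each maps a state to a primitive action
# (-1 = step left, +1 = step right, 0 = hold). In practice these would be
# learned controllers (e.g. acquired by imitation learning).
SKILLS = {
    "step_right": lambda s: +1,
    "step_left": lambda s: -1,
    "hold": lambda s: 0,
}
SKILL_NAMES = list(SKILLS)
SKILL_HORIZON = 3  # each selected skill runs for a few primitive steps

def run_skill(state, skill_name):
    """Execute one skill for SKILL_HORIZON primitive steps; return new state and reward."""
    reward = 0.0
    for _ in range(SKILL_HORIZON):
        state = max(0, min(GOAL, state + SKILLS[skill_name](state)))
        reward += 1.0 if state == GOAL else -0.1  # sparse goal bonus, small step cost
        if state == GOAL:
            break
    return state, reward

# High-level policy: tabular Q-learning over (state, skill) pairs.
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.2
Q = [[0.0] * len(SKILL_NAMES) for _ in range(N_STATES)]

def select_skill(state):
    """Epsilon-greedy selection of a skill index for the current state."""
    if random.random() < EPSILON:
        return random.randrange(len(SKILL_NAMES))
    return max(range(len(SKILL_NAMES)), key=lambda a: Q[state][a])

for episode in range(500):
    state = 0
    for _ in range(20):  # high-level decision steps per episode
        a = select_skill(state)
        next_state, r = run_skill(state, SKILL_NAMES[a])
        # Standard Q-learning update applied to the high-level transition.
        Q[state][a] += ALPHA * (r + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state
        if state == GOAL:
            break

# After training, the greedy high-level policy should pick "step_right" everywhere.
greedy = [SKILL_NAMES[max(range(len(SKILL_NAMES)), key=lambda a: Q[s][a])] for s in range(N_STATES)]
print(greedy)
```

In the research surveyed here, the low-level skills would themselves be learned controllers and the high-level selector would typically be a neural policy (possibly guided by a language model for task decomposition or reward design), but the two-level structure of choosing among reusable skills is the same idea.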
Papers
Modulating Reservoir Dynamics via Reinforcement Learning for Efficient Robot Skill Synthesis
Zahra Koulaeizadeh, Erhan Oztop
Multi-Modal Self-Supervised Learning for Surgical Feedback Effectiveness Assessment
Arushi Gupta, Rafal Kocielnik, Jiayun Wang, Firdavs Nasriddinov, Cherine Yang, Elyssa Wong, Anima Anandkumar, Andrew Hung
Versatile Demonstration Interface: Toward More Flexible Robot Demonstration Collection
Michael Hagenow, Dimosthenis Kontogiorgos, Yanwei Wang, Julie Shah
SkiLD: Unsupervised Skill Discovery Guided by Factor Interactions
Zizhao Wang, Jiaheng Hu, Caleb Chuck, Stephen Chen, Roberto Martín-Martín, Amy Zhang, Scott Niekum, Peter Stone
SLIDE: A Framework Integrating Small and Large Language Models for Open-Domain Dialogues Evaluation
Kun Zhao, Bohao Yang, Chen Tang, Chenghua Lin, Liang Zhan
An Adaptive Framework for Manipulator Skill Reproduction in Dynamic Environments
Ryan Donald, Brendan Hertel, Stephen Misenti, Yan Gu, Reza Azadeh