Textual Demonstration
Textual demonstration, encompassing learning from demonstration (LfD) and in-context learning (ICL), focuses on enabling agents (robots, AI models) to acquire skills or knowledge from human-provided examples rather than solely through explicit programming or reinforcement learning. Current research emphasizes improving the robustness and efficiency of LfD and ICL, exploring techniques such as hypernetworks, Linear Quadratic Regulators (LQR), neural network architectures like Transformers, and control formulations like Dynamic Movement Primitives to handle complex tasks and noisy data. This field is significant for advancing robotics, AI, and human-computer interaction, offering the potential for more adaptable and efficient automation in diverse domains, including manufacturing, healthcare, and agriculture.
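A recurring ICL subproblem in this literature is demonstration selection: choosing which examples to place in an LLM's prompt for a given query. The sketch below is a minimal, illustrative take on similarity-based selection using a toy bag-of-words embedding; the function names and toy corpus are assumptions for illustration, not taken from any of the papers listed here.

```python
# Minimal sketch of similarity-based demonstration selection for
# in-context learning (ICL). The embedding and corpus are toy
# stand-ins; real systems typically use learned sentence embeddings.
from collections import Counter
import math

def embed(text):
    """Toy bag-of-words embedding: token -> count."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_demonstrations(query, pool, k=2):
    """Rank candidate demonstrations by similarity to the query and
    keep the top k; the chosen examples would be prepended to the
    prompt ahead of the query."""
    q = embed(query)
    ranked = sorted(pool, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

demos = [
    "translate cat to French: chat",
    "translate dog to French: chien",
    "add 2 and 3: 5",
]
picked = select_demonstrations("translate bird to French:", demos, k=2)
```

Here the two translation examples outrank the arithmetic one for a translation query, which is the basic behavior a demonstration-selection algorithm aims for; the papers below compare far more sophisticated scoring strategies.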
Papers
Comparative Analysis of Demonstration Selection Algorithms for LLM In-Context Learning
Dong Shu, Mengnan Du
Explainable Behavior Cloning: Teaching Large Language Model Agents through Learning by Demonstration
Yanchu Guan, Dong Wang, Yan Wang, Haiqing Wang, Renen Sun, Chenyi Zhuang, Jinjie Gu, Zhixuan Chu
A Practical Roadmap to Learning from Demonstration for Robotic Manipulators in Manufacturing
Alireza Barekatain, Hamed Habibi, Holger Voos
Joint Demonstration and Preference Learning Improves Policy Alignment with Human Feedback
Chenliang Li, Siliang Zeng, Zeyi Liao, Jiaxiang Li, Dongyeop Kang, Alfredo Garcia, Mingyi Hong