Textual Demonstration
Textual demonstration, encompassing learning from demonstration (LfD) and in-context learning (ICL), focuses on enabling agents (robots, AI models) to acquire skills or knowledge from human-provided examples rather than solely through explicit programming or reinforcement learning. Current research emphasizes improving the robustness and efficiency of LfD and ICL, exploring techniques such as hypernetworks, Linear Quadratic Regulators (LQR), Dynamic Movement Primitives, and neural network architectures such as Transformers to handle complex tasks and noisy data. This field is significant for advancing robotics, AI, and human-computer interaction, offering the potential for more adaptable and efficient automation across diverse domains, including manufacturing, healthcare, and agriculture.
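To make the in-context learning side of this concrete, the sketch below shows the basic mechanic: human-provided demonstrations are formatted directly into a prompt rather than used to update model weights, and the model is expected to infer the task from those examples. This is a minimal illustration, not any specific paper's method; the sentiment-labeling task and the demonstration pairs are invented placeholders, and the commented-out query_model call stands in for whichever LLM backend would actually be used.

def build_icl_prompt(demonstrations, query, instruction="Label the sentiment."):
    """Format (input, output) demonstration pairs into a few-shot ICL prompt."""
    lines = [instruction, ""]
    for text, label in demonstrations:
        # Each demonstration becomes an input/output pair in the prompt.
        lines.append(f"Input: {text}")
        lines.append(f"Output: {label}")
        lines.append("")
    # The new query is appended in the same format, leaving the output blank
    # for the model to complete.
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)


if __name__ == "__main__":
    demos = [
        ("The robot completed the task smoothly.", "positive"),
        ("The gripper dropped the part twice.", "negative"),
    ]
    prompt = build_icl_prompt(demos, "Calibration finished without errors.")
    print(prompt)
    # response = query_model(prompt)  # hypothetical call to an LLM backend

Swapping in different demonstrations changes the task the model is asked to perform without any retraining, which is what makes ICL attractive for quickly adapting agents to new tasks.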
Papers
Demonstration of Robust and Efficient Quantum Property Learning with Shallow Shadows
Hong-Ye Hu, Andi Gu, Swarnadeep Majumder, Hang Ren, Yipei Zhang, Derek S. Wang, Yi-Zhuang You, Zlatko Minev, Susanne F. Yelin, Alireza Seif
Demonstrating and Reducing Shortcuts in Vision-Language Representation Learning
Maurits Bleeker, Mariya Hendriksen, Andrew Yates, Maarten de Rijke