Noisy Demonstration
Noisy demonstration research focuses on improving machine learning models' ability to learn effectively from imperfect or incomplete training data, a common challenge in real-world applications. Current research emphasizes developing algorithms and model architectures that can filter noise, infer underlying patterns from diverse or suboptimal demonstrations, and leverage techniques like inverse reinforcement learning, behavior cloning, and various forms of few-shot learning to enhance performance. This work is significant because it addresses a critical limitation in many machine learning approaches, paving the way for more robust and reliable systems in fields ranging from robotics and natural language processing to autonomous driving and healthcare.
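To make the filtering idea concrete, the sketch below is a minimal, hypothetical illustration (not taken from any of the papers listed here): an expert's actions on a synthetic 1-D task are partially corrupted, a simple nearest-neighbour majority vote discards demonstrations that disagree with their neighbours, and a behavior-cloning-style threshold policy is then fit to the filtered set. All names, the task, and the k-nearest-neighbour heuristic are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic demonstrations: the expert picks action 1 when state > 0, else 0.
states = rng.uniform(-1, 1, size=200)
expert_actions = (states > 0).astype(int)

# Corrupt roughly 20% of the demonstrated actions to simulate a noisy teacher.
noise_mask = rng.random(200) < 0.2
noisy_actions = np.where(noise_mask, 1 - expert_actions, expert_actions)

def filter_noisy(states, actions, k=7):
    """Illustrative noise filter: keep a demonstration only if its action
    matches the majority action of its k nearest neighbours in state space."""
    keep = []
    for i, s in enumerate(states):
        nn = np.argsort(np.abs(states - s))[1:k + 1]  # exclude the point itself
        majority_is_one = actions[nn].mean() > 0.5
        keep.append(actions[i] == int(majority_is_one))
    keep = np.array(keep)
    return states[keep], actions[keep]

filtered_states, filtered_actions = filter_noisy(states, noisy_actions)

# Behavior cloning on the filtered set: fit a 1-D threshold policy halfway
# between the lowest action-1 state and the highest action-0 state.
threshold = 0.5 * (filtered_states[filtered_actions == 1].min()
                   + filtered_states[filtered_actions == 0].max())
cloned_actions = (states > threshold).astype(int)

print("noisy-label accuracy:   ", (noisy_actions == expert_actions).mean())
print("filtered-label accuracy:", (filtered_actions == (filtered_states > 0)).mean())
print("cloned-policy agreement:", (cloned_actions == expert_actions).mean())
```

The neighbour-vote filter stands in for the more sophisticated noise-handling mechanisms the papers develop; the point is only that removing demonstrations inconsistent with their local context lets even a trivial cloned policy recover the expert's behavior more closely than training on the raw noisy labels would.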
Papers
BC-IRL: Learning Generalizable Reward Functions from Demonstrations
Andrew Szot, Amy Zhang, Dhruv Batra, Zsolt Kira, Franziska Meier
Coordinated Multi-Robot Shared Autonomy Based on Scheduling and Demonstrations
Michael Hagenow, Emmanuel Senft, Nitzan Orr, Robert Radwin, Michael Gleicher, Bilge Mutlu, Dylan P. Losey, Michael Zinn
Learning Rational Subgoals from Demonstrations and Instructions
Zhezheng Luo, Jiayuan Mao, Jiajun Wu, Tomás Lozano-Pérez, Joshua B. Tenenbaum, Leslie Pack Kaelbling
ICL-D3IE: In-Context Learning with Diverse Demonstrations Updating for Document Information Extraction
Jiabang He, Lei Wang, Yi Hu, Ning Liu, Hui Liu, Xing Xu, Heng Tao Shen