Human Instruction
Research on human instruction following aims to develop AI models that accurately and reliably execute complex tasks specified through diverse instructions spanning text, images, and audio. Current work emphasizes improving model alignment through techniques such as instruction tuning and response tuning, often building on large language models (LLMs) and diffusion transformers, and explores new evaluation metrics for multi-modal, multi-turn interactions. The field is central to advancing human-computer interaction, enabling more intuitive and effective collaboration between humans and AI systems across domains ranging from robotics and manufacturing to healthcare and education.
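For readers unfamiliar with the instruction-tuning paradigm mentioned above, the minimal sketch below shows, in plain Python, how an (instruction, response) pair is typically serialized into a single supervised training example. The prompt template and function name are illustrative assumptions for demonstration, not the format used by any specific paper listed here.

```python
# Minimal sketch of instruction-tuning data preparation (illustrative only).
# The "### Instruction:" / "### Response:" template below is an assumed,
# commonly seen convention, not any particular paper's exact format.

def format_example(instruction: str, response: str) -> str:
    """Serialize an (instruction, response) pair into one training string.

    During supervised fine-tuning, the loss is usually applied only to the
    response tokens, while the instruction tokens serve as conditioning.
    """
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
        f"{response}"
    )

if __name__ == "__main__":
    example = format_example(
        instruction="Summarize the following sentence in five words.",
        response="Models learn to follow instructions.",
    )
    print(example)
```

In practice, a corpus of such formatted examples is fed to a standard supervised fine-tuning loop; the template itself varies across projects, which is one reason evaluation of instruction-following remains an active research question.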
Papers
MineDreamer: Learning to Follow Instructions via Chain-of-Imagination for Simulated-World Control
Enshen Zhou, Yiran Qin, Zhenfei Yin, Yuzhou Huang, Ruimao Zhang, Lu Sheng, Yu Qiao, Jing Shao
Can LLMs Generate Human-Like Wayfinding Instructions? Towards Platform-Agnostic Embodied Instruction Synthesis
Vishnu Sashank Dorbala, Sanjoy Chowdhury, Dinesh Manocha
InsCL: A Data-efficient Continual Learning Paradigm for Fine-tuning Large Language Models with Instructions
Yifan Wang, Yafei Liu, Chufan Shi, Haoling Li, Chen Chen, Haonan Lu, Yujiu Yang
Pragmatic Instruction Following and Goal Assistance via Cooperative Language-Guided Inverse Planning
Tan Zhi-Xuan, Lance Ying, Vikash Mansinghka, Joshua B. Tenenbaum
Follow My Instruction and Spill the Beans: Scalable Data Extraction from Retrieval-Augmented Generation Systems
Zhenting Qi, Hanlin Zhang, Eric Xing, Sham Kakade, Himabindu Lakkaraju
AmbigNLG: Addressing Task Ambiguity in Instructions for NLG
Ayana Niwa, Hayate Iso