Human Instruction Following
Human instruction following in AI focuses on developing models that accurately and reliably execute complex tasks based on diverse instructions spanning text, images, and audio. Current research emphasizes improving model alignment through techniques such as instruction tuning and response tuning, often built on large language models (LLMs) and diffusion transformers, and explores novel evaluation metrics for multi-modal, multi-turn interactions. This field is crucial for advancing human-computer interaction, enabling more intuitive and effective collaboration between humans and AI systems across domains ranging from robotics and manufacturing to healthcare and education.
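To make the instruction-tuning idea concrete, the sketch below shows how a single supervised training example is commonly assembled: the instruction and the target response are concatenated into one token sequence, and the instruction portion of the labels is masked out so the loss is computed only on the response tokens. This is an illustrative sketch, not the method of any particular paper listed here; the whitespace `tokenize` helper and `build_example` function are hypothetical stand-ins, and `-100` is the ignore index used by common cross-entropy implementations.

```python
# Illustrative sketch of instruction-tuning data preparation.
# Assumptions: a toy whitespace tokenizer stands in for a real
# subword tokenizer, and -100 is the label value ignored by the
# loss function (as in common cross-entropy implementations).

IGNORE_INDEX = -100

def tokenize(text):
    # Hypothetical stand-in for a real subword tokenizer.
    return text.split()

def build_example(instruction, response):
    prompt_tokens = tokenize(instruction)
    response_tokens = tokenize(response)
    # The model sees the full concatenated sequence as input.
    input_ids = prompt_tokens + response_tokens
    # Mask the instruction so gradients flow only through the response.
    labels = [IGNORE_INDEX] * len(prompt_tokens) + list(response_tokens)
    return input_ids, labels

ids, labels = build_example("Translate to French: hello", "bonjour")
```

Masking the prompt in this way is what distinguishes training the model to *follow* an instruction from training it to merely reproduce the whole prompt-plus-answer string.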
Papers
Efficiently Enhancing Zero-Shot Performance of Instruction Following Model via Retrieval of Soft Prompt
Seonghyeon Ye, Joel Jang, Doyoung Kim, Yongrae Jo, Minjoon Seo
Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners
Seonghyeon Ye, Doyoung Kim, Joel Jang, Joongbo Shin, Minjoon Seo