Natural Language Instruction
Natural language instruction focuses on enabling artificial intelligence agents to understand and execute commands expressed in human language, bridging the gap between human communication and machine action. Current research emphasizes improving the robustness and accuracy of large language models (LLMs) in interpreting nuanced instructions, often employing techniques such as chain-of-thought prompting, contrastive learning, and reinforcement learning to improve performance across diverse tasks, including embodied AI and code generation. This field advances human-computer interaction and enables more intuitive control of complex systems across domains ranging from robotics and data science to healthcare and software development.
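Of the techniques named above, chain-of-thought prompting is the simplest to illustrate: exemplars in the prompt show step-by-step reasoning before each answer, encouraging the model to reason the same way on a new instruction. The sketch below is a minimal, hypothetical illustration; the prompt wording, example task, and action syntax are assumptions for demonstration, not drawn from any specific paper listed here.

```python
# Minimal sketch of chain-of-thought (CoT) prompt construction for an
# instruction-following task. All task strings and the action syntax
# (pick/place_on) are hypothetical illustrations.

def build_cot_prompt(instruction: str, examples: list[tuple[str, str]]) -> str:
    """Assemble a few-shot prompt whose exemplars include an explicit
    reasoning trace, then append the new instruction with the same cue."""
    parts = []
    for task, reasoning in examples:
        parts.append(
            f"Instruction: {task}\nLet's think step by step.\n{reasoning}\n"
        )
    parts.append(f"Instruction: {instruction}\nLet's think step by step.\n")
    return "\n".join(parts)

# One hypothetical exemplar: a reasoning trace paired with its task.
examples = [
    (
        "Move the red block onto the blue block.",
        "1. Locate the red block. 2. Pick it up. 3. Locate the blue block. "
        "4. Place the red block on top.\nAnswer: pick(red); place_on(blue)",
    )
]

prompt = build_cot_prompt("Stack the green block on the red block.", examples)
print(prompt)
```

The assembled prompt would then be sent to an LLM; the reasoning cue before the final answer is what distinguishes CoT prompting from plain few-shot prompting.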
Papers
A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications
Pranab Sahoo, Ayush Kumar Singh, Sriparna Saha, Vinija Jain, Samrat Mondal, Aman Chadha
Point and Instruct: Enabling Precise Image Editing by Unifying Direct Manipulation and Text Instructions
Alec Helbling, Seongmin Lee, Polo Chau
LLF-Bench: Benchmark for Interactive Learning from Language Feedback
Ching-An Cheng, Andrey Kolobov, Dipendra Misra, Allen Nie, Adith Swaminathan
Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions
Federico Cassano, Luisa Li, Akul Sethi, Noah Shinn, Abby Brennan-Jones, Jacob Ginesin, Edward Berman, George Chakhnashvili, Anton Lozhkov, Carolyn Jane Anderson, Arjun Guha
Which way is 'right'?: Uncovering limitations of Vision-and-Language Navigation model
Meera Hahn, Amit Raj, James M. Rehg
InstructSeq: Unifying Vision Tasks with Instruction-conditioned Multi-modal Sequence Generation
Rongyao Fang, Shilin Yan, Zhaoyang Huang, Jingqiu Zhou, Hao Tian, Jifeng Dai, Hongsheng Li