Natural Language Instruction
Research on natural language instruction aims to enable artificial intelligence agents to understand and execute commands expressed in human language, bridging the gap between human communication and machine action. Current work emphasizes improving the robustness and accuracy of large language models (LLMs) in interpreting nuanced instructions, often employing techniques such as chain-of-thought prompting, contrastive learning, and reinforcement learning to enhance performance across diverse tasks, including embodied AI and code generation. This field is significant for advancing human-computer interaction and enabling more intuitive control of complex systems in domains ranging from robotics and data science to healthcare and software development.
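One of the techniques mentioned above, chain-of-thought prompting, can be illustrated with a minimal sketch: the instruction is wrapped in a prompt that elicits intermediate reasoning before the final action, optionally preceded by few-shot worked examples. The helper name, the cue phrase, and the example instructions below are illustrative assumptions, not drawn from any of the listed papers.

```python
# Minimal sketch of chain-of-thought (CoT) prompting for instruction
# interpretation. All function names and example instructions are
# hypothetical; a real system would send the prompt to an LLM.

def build_cot_prompt(instruction, examples=None):
    """Wrap a natural-language instruction in a chain-of-thought prompt.

    `examples` is an optional list of (instruction, worked_reasoning)
    pairs used as few-shot demonstrations.
    """
    parts = []
    for ex_instruction, ex_reasoning in examples or []:
        parts.append(f"Instruction: {ex_instruction}\nReasoning: {ex_reasoning}")
    # The trailing cue elicits step-by-step reasoning from the model.
    parts.append(f"Instruction: {instruction}\nReasoning: Let's think step by step.")
    return "\n\n".join(parts)

prompt = build_cot_prompt(
    "Pick up the red block and place it on the shelf.",
    examples=[(
        "Open the door.",
        "First locate the handle, then grasp it, then pull. "
        "Action plan: [locate, grasp, pull].",
    )],
)
print(prompt)
```

Compared with a direct prompt, the few-shot reasoning traces bias the model toward decomposing the command into sub-steps, which is what makes this style useful for embodied-agent planning.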
Papers
NSP: A Neuro-Symbolic Natural Language Navigational Planner
William English, Dominic Simon, Rickard Ewetz, Sumit Jha
Enhancing Emotional Text-to-Speech Controllability with Natural Language Guidance through Contrastive Learning and Diffusion Models
Xin Jing, Kun Zhou, Andreas Triantafyllopoulos, Björn W. Schuller
Large Language Model for Verilog Generation with Golden Code Feedback
Ning Wang, Bingkun Yao, Jie Zhou, Xi Wang, Zhe Jiang, Nan Guan
Towards Automated Data Sciences with Natural Language and SageCopilot: Practices and Lessons Learned
Yuan Liao, Jiang Bian, Yuhui Yun, Shuo Wang, Yubo Zhang, Jiaming Chu, Tao Wang, Kewei Li, Yuchen Li, Xuhong Li, Shilei Ji, Haoyi Xiong
Open (Clinical) LLMs are Sensitive to Instruction Phrasings
Alberto Mario Ceballos Arroyo, Monica Munnangi, Jiuding Sun, Karen Y. C. Zhang, Denis Jered McInerney, Byron C. Wallace, Silvio Amir
IDAT: A Multi-Modal Dataset and Toolkit for Building and Evaluating Interactive Task-Solving Agents
Shrestha Mohanty, Negar Arabzadeh, Andrea Tupini, Yuxuan Sun, Alexey Skrynnik, Artem Zholus, Marc-Alexandre Côté, Julia Kiseleva