Interaction Generation
Interaction generation research focuses on enabling computers and robots to understand and respond appropriately to human interactions, aiming to create more natural and effective human-machine communication. Current efforts concentrate on developing models that leverage large language models (LLMs), diffusion models, and reinforcement learning algorithms to generate realistic and contextually relevant responses in diverse scenarios, including human-robot dialogue, multi-agent collaboration, and embodied AI tasks. This field is crucial for advancing human-computer interaction, robotics, and AI safety, with applications ranging from personalized virtual assistants and improved human-robot collaboration to more intuitive and explainable AI systems.
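To make the idea of contextually relevant response generation concrete, below is a minimal sketch of an LLM-driven interaction loop that conditions on a user profile and the running dialogue history. It is illustrative only and not taken from any of the listed papers: the names `InteractionContext`, `llm_complete`, and `generate_response` are hypothetical, and `llm_complete` is a placeholder to be swapped for a real completion client.

```python
# Minimal sketch of LLM-based interaction response generation (assumptions noted above).

from dataclasses import dataclass, field
from typing import List


@dataclass
class InteractionContext:
    """Running state of a human-machine exchange (hypothetical structure)."""
    persona: str                      # e.g. user preferences inferred so far
    history: List[str] = field(default_factory=list)


def llm_complete(prompt: str) -> str:
    """Hypothetical LLM call; replace with an actual model or API client."""
    return "Sure, I can help with that."  # placeholder output


def generate_response(ctx: InteractionContext, user_utterance: str) -> str:
    """Condition the model on persona and dialogue history so replies stay
    contextually relevant, then append the new turn to the history."""
    prompt = (
        f"User profile: {ctx.persona}\n"
        + "\n".join(ctx.history)
        + f"\nUser: {user_utterance}\nAssistant:"
    )
    reply = llm_complete(prompt)
    ctx.history.extend([f"User: {user_utterance}", f"Assistant: {reply}"])
    return reply


if __name__ == "__main__":
    ctx = InteractionContext(persona="prefers concise, step-by-step answers")
    print(generate_response(ctx, "Can you walk me through resetting the robot arm?"))
```

In practice, the placeholder call would be replaced with whichever model family a given system uses (an instruction-tuned LLM, a diffusion-based motion generator, or an RL-trained policy), but the structure of maintaining interaction state and conditioning generation on it is common across these approaches.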
Papers
Aligning LLMs with Individual Preferences via Interaction
Shujin Wu, May Fung, Cheng Qian, Jeonghwan Kim, Dilek Hakkani-Tur, Heng Ji
A Service Robot in the Wild: Analysis of Users Intentions, Robot Behaviors, and Their Impact on the Interaction
Simone Arreghini, Gabriele Abbate, Alessandro Giusti, Antonio Paolillo