Interaction Generation
Interaction generation research aims to enable computers and robots to understand and respond appropriately to human behavior, making human-machine communication more natural and effective. Current efforts center on models that leverage large language models (LLMs), diffusion models, and reinforcement learning to generate realistic, contextually relevant responses across diverse scenarios, including human-robot dialogue, multi-agent collaboration, and embodied AI tasks. The field is central to advances in human-computer interaction, robotics, and AI safety, with applications ranging from personalized virtual assistants and improved human-robot collaboration to more intuitive and explainable AI systems.
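To make the diffusion-based approach concrete, the following is a minimal toy sketch of reactive auto-regressive sampling in the spirit of the two-person interaction work listed below: each new "pose" is generated by iterative denoising conditioned on the agent's own previous frames and the partner's current motion. Everything here is an assumption for illustration; the 1-D pose, the hand-written `toy_denoiser`, and the noise schedule stand in for the trained neural networks and real motion representations such systems actually use.

```python
import random

def toy_denoiser(x, t, context):
    # Stand-in for a learned denoising network (assumption): predicts a
    # cleaner sample by pulling the noisy value toward the conditioning
    # context, more strongly as the noise level t approaches 0.
    target = sum(context) / len(context)
    return x + (target - x) * (1.0 - t)

def sample_frame(context, steps=20, rng=random):
    # Start from Gaussian noise and iteratively denoise, conditioned on
    # previous own frames (auto-regressive) and partner motion (reactive).
    x = rng.gauss(0.0, 1.0)
    for i in reversed(range(steps)):
        t = i / steps
        x = toy_denoiser(x, t, context)
        x += rng.gauss(0.0, 0.05) * t  # shrinking noise re-injection
    return x

def generate_motion(partner_motion, seed_pose=0.0):
    # Generate one frame at a time so the agent can react to the
    # partner's motion as it streams in.
    frames = [seed_pose]
    for partner_pose in partner_motion:
        context = frames[-2:] + [partner_pose]
        frames.append(sample_frame(context))
    return frames[1:]

motion = generate_motion([0.2, 0.4, 0.6, 0.8, 1.0])
print(motion)
```

Because conditioning happens per frame rather than over a whole pre-recorded sequence, this kind of loop can in principle run in real time, which is the design point the reactive auto-regressive formulation targets.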
Papers
Unified Understanding of Environment, Task, and Human for Human-Robot Interaction in Real-World Environments
Yuga Yano, Akinobu Mizutani, Yukiya Fukuda, Daiju Kanaoka, Tomohiro Ono, Hakaru Tamukoh
Large Language Model Enhanced Recommender Systems: Taxonomy, Trend, Application and Future
Qidong Liu, Xiangyu Zhao, Yuhao Wang, Yejing Wang, Zijian Zhang, Yuqi Sun, Xiang Li, Maolin Wang, Pengyue Jia, Chong Chen, Wei Huang, Feng Tian
It Takes Two: Real-time Co-Speech Two-person's Interaction Generation via Reactive Auto-regressive Diffusion Model
Mingyi Shi, Dafei Qin, Leo Ho, Zhouyingcheng Liao, Yinghao Huang, Junichi Yamagishi, Taku Komura
Cascaded Multi-Scale Attention for Enhanced Multi-Scale Feature Extraction and Interaction with Low-Resolution Images
Xiangyong Lu, Masanori Suganuma, Takayuki Okatani