Human-Robot Interaction
Human-robot interaction (HRI) research focuses on designing robots that can interact with humans effectively and naturally, aiming to improve collaboration, communication, and overall user experience. Current research emphasizes developing robots capable of understanding and responding to diverse human behaviors, including speech, gestures, and even physiological signals. To achieve this, researchers often employ machine learning models such as vision transformers, convolutional neural networks, and reinforcement learning algorithms. These advances matter because they pave the way for safer, more intuitive, and more productive human-robot collaboration across domains ranging from industrial settings to assistive technologies and service robotics.
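To make the reinforcement learning angle concrete, here is a minimal, purely illustrative sketch (not taken from any paper listed below): tabular Q-learning on a hypothetical object-handover task, where a robot learns to wait until a human appears ready before handing over an item. The states, rewards, and dynamics are toy assumptions chosen for brevity.

```python
import random

# Toy MDP, purely illustrative: states encode human readiness
# (0 = busy, 1 = attentive, 2 = reaching); the robot waits or hands over.
STATES = [0, 1, 2]
ACTIONS = ["wait", "handover"]

def reward(state, action):
    # Handing over succeeds only when the human is reaching (state 2).
    if action == "handover":
        return 1.0 if state == 2 else -1.0
    return -0.1  # small cost for making the human wait

def step(state, action):
    if action == "handover":
        return None  # episode ends after a handover attempt
    return min(state + 1, 2)  # waiting lets the human get ready

def train(episodes=2000, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)
        while s is not None:
            # Epsilon-greedy action selection.
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])
            r = reward(s, a)
            s2 = step(s, a)
            target = r if s2 is None else r + gamma * max(q[(s2, x)] for x in ACTIONS)
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)  # the learned policy waits until the human is reaching
```

The learned policy waits in states 0 and 1 and hands over in state 2, illustrating how reward shaping can encode socially appropriate timing; real HRI systems would replace the discrete readiness state with perception outputs (e.g., gaze or gesture recognition).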
Papers
Are Large Language Models Aligned with People's Social Intuitions for Human-Robot Interactions?
Lennart Wachowiak, Andrew Coles, Oya Celiktutan, Gerard Canal
Safe Spot: Perceived safety of dominant and submissive appearances of quadruped robots in human-robot interactions
Nanami Hashimoto, Emma Hagens, Arkady Zgonnikov, Maria Luce Lupetti
EgoPAT3Dv2: Predicting 3D Action Target from 2D Egocentric Vision for Human-Robot Interaction
Irving Fang, Yuzhong Chen, Yifan Wang, Jianghan Zhang, Qiushi Zhang, Jiali Xu, Xibo He, Weibo Gao, Hao Su, Yiming Li, Chen Feng
NatSGD: A Dataset with Speech, Gestures, and Demonstrations for Robot Learning in Natural Human-Robot Interaction
Snehesh Shrestha, Yantian Zha, Saketh Banagiri, Ge Gao, Yiannis Aloimonos, Cornelia Fermuller
Paper index: Designing an introductory HRI course (workshop at HRI 2024)
Henny Admoni, Daniel Szafir, Wafa Johal, Anara Sandygulova
Improving Visual Perception of a Social Robot for Controlled and In-the-wild Human-robot Interaction
Wangjie Zhong, Leimin Tian, Duy Tho Le, Hamid Rezatofighi