Human-Robot Interaction
Human-robot interaction (HRI) research focuses on designing robots that interact with humans effectively and naturally, with the goals of improving collaboration, communication, and overall user experience. Current work emphasizes robots that can understand and respond to diverse human behaviors, including speech, gestures, and even physiological signals, often by employing machine learning models such as vision transformers, convolutional neural networks, and reinforcement learning algorithms. These advances matter because they pave the way for safer, more intuitive, and more productive human-robot collaboration across domains ranging from industrial settings to assistive technologies and service robotics.
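To make the multimodal idea concrete, here is a minimal, purely illustrative sketch (not the method of any paper listed below) of late fusion for intention recognition: each input channel (e.g. speech, gesture) produces a probability distribution over possible operator intentions, and the system weights each channel by a reliability score before combining them. All names (`fuse_intentions`, the modality and intention labels, the weights) are hypothetical.

```python
# Illustrative sketch: confidence-weighted late fusion of per-modality
# intention estimates in an HRI system. Assumed setup: each modality
# yields a probability distribution over candidate intentions, plus a
# reliability weight in [0, 1] reflecting how much to trust it.

def fuse_intentions(modality_probs, reliabilities):
    """Combine per-modality distributions into one fused distribution.

    modality_probs: dict mapping modality name -> {intention: prob}
    reliabilities:  dict mapping modality name -> weight in [0, 1]
    Returns a normalized fused distribution over intentions.
    """
    fused = {}
    for modality, probs in modality_probs.items():
        weight = reliabilities[modality]
        for intention, p in probs.items():
            fused[intention] = fused.get(intention, 0.0) + weight * p
    total = sum(fused.values()) or 1.0  # guard against all-zero input
    return {k: v / total for k, v in fused.items()}

# Example: speech is confident about a "handover" command, while the
# gesture channel is ambiguous; the fused estimate follows speech
# because its reliability weight is higher.
fused = fuse_intentions(
    {"speech":  {"handover": 0.8, "stop": 0.2},
     "gesture": {"handover": 0.5, "stop": 0.5}},
    {"speech": 0.9, "gesture": 0.4},
)
best = max(fused, key=fused.get)  # -> "handover"
```

Real systems (and the papers below) use far richer models, e.g. learned confidence estimators per modality rather than fixed weights, but the fusion step often reduces to a weighted combination like this.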
Papers
Ego-Foresight: Agent Visuomotor Prediction as Regularization for RL
Manuel S. Nunes, Atabak Dehban, Yiannis Demiris, José Santos-Victor
Advancing Behavior Generation in Mobile Robotics through High-Fidelity Procedural Simulations
Victor A. Kich, Jair A. Bottega, Raul Steinmetz, Ricardo B. Grando, Ayanori Yorozu, Akihisa Ohya
ROB 204: Introduction to Human-Robot Systems at the University of Michigan, Ann Arbor
Leia Stirling, Joseph Montgomery, Mark Draelos, Christoforos Mavrogiannis, Lionel P. Robert, Odest Chadwicke Jenkins
Learning Multimodal Confidence for Intention Recognition in Human-Robot Interaction
Xiyuan Zhao, Huijun Li, Tianyuan Miao, Xianyi Zhu, Zhikai Wei, Aiguo Song
A Minimum-Jerk Approach to Handle Singularities in Virtual Fixtures
Giovanni Braglia, Sylvain Calinon, Luigi Biagiotti
CushSense: Soft, Stretchable, and Comfortable Tactile-Sensing Skin for Physical Human-Robot Interaction
Boxin Xu, Luoyan Zhong, Grace Zhang, Xiaoyu Liang, Diego Virtue, Rishabh Madan, Tapomayukh Bhattacharjee