Human-Robot Interaction
Human-robot interaction (HRI) research focuses on designing robots that can collaborate effectively with humans across physical tasks, communication, and shared decision-making. Current work emphasizes improving robot perception and control through techniques such as model predictive control and reinforcement learning, along with the integration of large language models for natural communication and intent recognition. These advances aim to create safer, more efficient, and more intuitive human-robot teams for applications ranging from industrial assembly to assistive robotics, with impact on both the robotics and human-computer interaction fields.
Papers
SCOUT: A Situated and Multi-Modal Human-Robot Dialogue Corpus
Stephanie M. Lukin, Claire Bonial, Matthew Marge, Taylor Hudson, Cory J. Hayes, Kimberly A. Pollard, Anthony Baker, Ashley N. Foots, Ron Artstein, Felix Gervits, Mitchell Abrams, Cassidy Henry, Lucia Donatelli, Anton Leuski, Susan G. Hill, David Traum, Clare R. Voss
Human-Robot Dialogue Annotation for Multi-Modal Common Ground
Claire Bonial, Stephanie M. Lukin, Mitchell Abrams, Anthony Baker, Lucia Donatelli, Ashley Foots, Cory J. Hayes, Cassidy Henry, Taylor Hudson, Matthew Marge, Kimberly A. Pollard, Ron Artstein, David Traum, Clare R. Voss
PARTNR: A Benchmark for Planning and Reasoning in Embodied Multi-agent Tasks
Matthew Chang, Gunjan Chhablani, Alexander Clegg, Mikael Dallaire Cote, Ruta Desai, Michal Hlavac, Vladimir Karashchuk, Jacob Krantz, Roozbeh Mottaghi, Priyam Parashar, Siddharth Patki, Ishita Prasad, Xavier Puig, Akshara Rai, Ram Ramrakhya, Daniel Tran, Joanne Truong, John M. Turner, Eric Undersander, Tsung-Yen Yang
Simulating User Agents for Embodied Conversational-AI
Daniel Philipov, Vardhan Dongre, Gokhan Tur, Dilek Hakkani-Tür