Robot Manipulation
Robot manipulation research focuses on enabling robots to dexterously interact with their environment, primarily through the development of robust and generalizable control policies. Current efforts concentrate on improving learning efficiency through techniques such as reinforcement learning (RL) combined with large language models (LLMs) for task decomposition and feedback, and on leveraging advanced simulation methods (e.g., Gaussian splatting, model reduction) to bridge the sim-to-real gap. These advances are crucial for expanding robot capabilities across diverse applications, from industrial automation to assistive technologies and home robotics.
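To make the LLM-plus-RL pattern mentioned above concrete, the following is a minimal, hypothetical Python sketch: an LLM decomposes a high-level instruction into subtasks, and each subtask is executed by a pre-trained RL skill policy. All names here (decompose_with_llm, run_skill, execute_task) are illustrative assumptions, not drawn from any of the listed papers or a specific library.

```python
# Hypothetical sketch of LLM task decomposition feeding RL skill policies.
# Function and variable names are illustrative, not from any specific system.
from typing import List, Dict


def decompose_with_llm(instruction: str) -> List[str]:
    """Stand-in for an LLM call that returns an ordered list of subtasks.

    A real system would prompt a language model with the instruction and
    scene description; here a fixed decomposition keeps the sketch runnable
    without external services.
    """
    return ["locate_object", "grasp_object", "place_object"]


def run_skill(skill_name: str, observation: Dict) -> Dict:
    """Stand-in for rolling out one RL skill policy on the robot.

    In practice this would load a policy trained in simulation (e.g., with
    domain randomization or a Real2Sim2Real pipeline) and execute it,
    returning the updated observation.
    """
    print(f"executing skill: {skill_name}")
    observation["last_skill"] = skill_name
    return observation


def execute_task(instruction: str) -> Dict:
    """Decompose the instruction, then execute each subtask in sequence."""
    observation: Dict = {"last_skill": None}
    for subtask in decompose_with_llm(instruction):
        observation = run_skill(subtask, observation)
    return observation


if __name__ == "__main__":
    execute_task("put the red block in the bin")
```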
Papers
Caging in Time: A Framework for Robust Object Manipulation under Uncertainties and Limited Robot Perception
Gaotian Wang, Kejia Ren, Andrew S. Morgan, Kaiyu Hang
ARCADE: Scalable Demonstration Collection and Generation via Augmented Reality for Imitation Learning
Yue Yang, Bryce Ikeda, Gedas Bertasius, Daniel Szafir
Constraining Gaussian Process Implicit Surfaces for Robot Manipulation via Dataset Refinement
Abhinav Kumar, Peter Mitrano, Dmitry Berenson
RL-GSBridge: 3D Gaussian Splatting Based Real2Sim2Real Method for Robotic Manipulation Learning
Yuxuan Wu, Lei Pan, Wenhua Wu, Guangming Wang, Yanzi Miao, Hesheng Wang