Robotic Grasping
Robotic grasping research aims to enable robots to grasp objects reliably and efficiently, a prerequisite for broader automation. Current efforts focus on improving the accuracy and robustness of grasp detection in cluttered scenes, typically with deep learning models such as convolutional neural networks and transformers, and increasingly with vision-language models for richer object understanding and language-guided manipulation. These advances improve the dexterity and adaptability of robotic systems across applications ranging from industrial automation and warehouse logistics to assistive robotics and surgery.
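For illustration, the sketch below shows one common way grasp detection is set up with a convolutional network: a small fully convolutional model maps a depth image to dense grasp-quality, grasp-angle, and gripper-width maps, and the best planar grasp is read off at the highest-quality pixel. This is a minimal PyTorch sketch under assumed conventions; the model name TinyGraspCNN, its layers, and the output heads are illustrative choices, not the architecture or method of any paper listed below.

# Minimal illustrative sketch of per-pixel planar grasp detection from a depth
# image. All architectural choices here are assumptions made for the example.
import torch
import torch.nn as nn


class TinyGraspCNN(nn.Module):
    """Depth image in; per-pixel (quality, cos 2θ, sin 2θ, width) maps out."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 16, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Four dense output channels: grasp quality, cos(2θ), sin(2θ), gripper width.
        self.heads = nn.Conv2d(16, 4, 1)

    def forward(self, depth):
        feat = self.decoder(self.encoder(depth))
        out = self.heads(feat)
        quality = torch.sigmoid(out[:, 0:1])           # grasp success score per pixel
        cos2t, sin2t = out[:, 1:2], out[:, 2:3]
        angle = 0.5 * torch.atan2(sin2t, cos2t)        # recover θ in (-π/2, π/2]
        width = torch.sigmoid(out[:, 3:4])             # normalized gripper opening
        return quality, angle, width


if __name__ == "__main__":
    model = TinyGraspCNN()
    depth = torch.rand(1, 1, 224, 224)                 # stand-in for a real depth image
    quality, angle, width = model(depth)
    # Select the pixel with the highest predicted grasp quality.
    idx = torch.argmax(quality.flatten())
    v, u = divmod(idx.item(), quality.shape[-1])
    print(f"best grasp at pixel ({u}, {v}), "
          f"angle={angle.flatten()[idx].item():.2f} rad, "
          f"width={width.flatten()[idx].item():.2f}")

Predicting the angle via cos(2θ) and sin(2θ) is a common choice because a parallel-jaw grasp is symmetric under 180° rotation; regressing θ directly would penalize equivalent grasps.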
Papers
NeuralGrasps: Learning Implicit Representations for Grasps of Multiple Robotic Hands
Ninad Khargonkar, Neil Song, Zesheng Xu, Balakrishnan Prabhakaran, Yu Xiang
Deep Learning Approaches to Grasp Synthesis: A Review
Rhys Newbury, Morris Gu, Lachlan Chumbley, Arsalan Mousavian, Clemens Eppner, Jürgen Leitner, Jeannette Bohg, Antonio Morales, Tamim Asfour, Danica Kragic, Dieter Fox, Akansel Cosgun
Sim-to-Real 6D Object Pose Estimation via Iterative Self-training for Robotic Bin Picking
Kai Chen, Rui Cao, Stephen James, Yichuan Li, Yun-Hui Liu, Pieter Abbeel, Qi Dou
GloCAL: Glocalized Curriculum-Aided Learning of Multiple Tasks with Application to Robotic Grasping
Anil Kurkcu, Cihan Acar, Domenico Campolo, Keng Peng Tee