Grasp Anything
"Grasp Anything" research aims to enable robots to robustly grasp a wide variety of objects in cluttered environments, approaching human dexterity. Current efforts concentrate on vision-based systems built on deep learning models, such as diffusion models, transformer networks, and graph neural networks, which often fuse multimodal data (vision and language) and apply techniques such as prompt engineering and hierarchical policy learning to improve grasp detection and planning. This field is crucial for advancing robotics in sectors including manufacturing, logistics, and assistive technologies by enabling more versatile and adaptable robotic manipulation.
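To make the grasp-detection setting concrete, here is a minimal sketch of the planar grasp representation that many vision-based detectors predict: an image-plane centre, a gripper rotation, a jaw opening width, and a quality score, with the detector's output reduced to picking the highest-scoring candidate. All names here (`PlanarGrasp`, `best_grasp`) are illustrative assumptions, not any specific paper's API.

```python
import math
from dataclasses import dataclass

@dataclass
class PlanarGrasp:
    # Hypothetical minimal top-down grasp, as commonly predicted by
    # vision-based grasp detectors: image-plane centre (x, y), gripper
    # rotation theta (radians), jaw opening width, and a quality score.
    x: float
    y: float
    theta: float
    width: float
    quality: float = 0.0

    def fingertips(self):
        """Return the two fingertip positions implied by the grasp pose."""
        dx = 0.5 * self.width * math.cos(self.theta)
        dy = 0.5 * self.width * math.sin(self.theta)
        return (self.x - dx, self.y - dy), (self.x + dx, self.y + dy)

def best_grasp(candidates):
    """Pick the highest-quality candidate, as a detector's scoring head would."""
    return max(candidates, key=lambda g: g.quality)

# Example: two candidate grasps; the second scores higher and is selected.
cands = [PlanarGrasp(40, 60, 0.0, 20, quality=0.3),
         PlanarGrasp(80, 50, math.pi / 2, 30, quality=0.9)]
g = best_grasp(cands)
print(g.fingertips())
```

Deep models in this area typically regress these parameters densely over the image (or decode them from a transformer or diffusion process); the selection step above stands in for that learned scoring.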
Papers
(Paper list entries dated February 6, 2024 through November 7, 2024; titles and links were not preserved in this extract.)