Grasp Anything
"Grasp Anything" research focuses on enabling robots to robustly grasp a wide variety of objects in cluttered environments, mimicking human dexterity. Current efforts concentrate on developing vision-based systems that leverage deep learning models, such as those based on diffusion models, transformer networks, and graph neural networks, often incorporating multimodal data (vision and language) and incorporating techniques like prompt engineering and hierarchical policy learning to improve grasp detection and planning. This field is crucial for advancing robotics in various sectors, including manufacturing, logistics, and assistive technologies, by enabling more versatile and adaptable robotic manipulation.