Hand-Object Interaction
Hand-object interaction research focuses on accurately modeling and reconstructing how humans manipulate objects, aiming to improve computer vision, robotics, and virtual/augmented reality applications. Current research heavily utilizes graph neural networks, diffusion models, and variational autoencoders to address challenges like occlusion, contact modeling, and physical plausibility in 3D hand and object pose estimation and interaction synthesis. This field is significant due to its potential to enable more realistic human-computer interaction, improve robotic manipulation capabilities, and advance our understanding of human motor control and perception. The development of large, diverse datasets capturing hand-object interactions in various contexts is also a key area of ongoing effort.
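As a concrete illustration of the contact-modeling objectives mentioned above, the sketch below shows a minimal attraction-style contact loss in PyTorch. The function name, the threshold value, and the exact formulation are illustrative assumptions for this overview, not the method of any specific paper listed below.

```python
import torch

def contact_loss(hand_verts, obj_points, contact_thresh=0.005):
    """Pull hand vertices that are already near the object onto its surface.

    hand_verts:  (V, 3) predicted hand mesh vertices, in meters
    obj_points:  (P, 3) points sampled on the object surface, in meters
    """
    # Distance from every hand vertex to its nearest object point.
    dists = torch.cdist(hand_verts, obj_points)        # (V, P)
    nearest = dists.min(dim=1).values                  # (V,)
    # Only vertices within the contact threshold contribute; distant
    # vertices receive zero loss (and zero gradient).
    per_vertex = torch.where(nearest < contact_thresh,
                             nearest,
                             torch.zeros_like(nearest))
    return per_vertex.mean()

# Usage: add this term to a pose-reconstruction loss during training.
hand_verts = torch.rand(778, 3, requires_grad=True)   # MANO hand meshes have 778 vertices
obj_points = torch.rand(2048, 3)                      # points sampled on the object surface
loss = contact_loss(hand_verts, obj_points)
loss.backward()
```

In practice such a term is typically combined with a penetration penalty and a pose prior so that encouraging contact does not push the hand through the object; the hypothetical threshold of 5 mm here simply decides which vertices count as candidate contact points.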
Papers
GRIP: Generating Interaction Poses Using Latent Consistency and Spatial Cues
Omid Taheri, Yi Zhou, Dimitrios Tzionas, Yang Zhou, Duygu Ceylan, Soren Pirk, Michael J. Black
Novel-view Synthesis and Pose Estimation for Hand-Object Interaction from Sparse Views
Wentian Qu, Zhaopeng Cui, Yinda Zhang, Chenyu Meng, Cuixia Ma, Xiaoming Deng, Hongan Wang