3D Hand
3D hand research focuses on accurately estimating and reconstructing three-dimensional hand poses and shapes from various input modalities, primarily images and videos, with the aim of improving human-computer interaction and related applications. Current research emphasizes robust and efficient models, often built on deep learning architectures such as transformers and diffusion models, to address challenges including occlusion (in particular self-occlusion by the hand's own fingers) and hand-object interaction. These advances matter for fields like sign language recognition, virtual reality, robotics, and assistive technologies, enabling more natural and intuitive interaction between humans and machines.
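To make the estimation target concrete: many hand pose methods output a hand as 21 3D joints (wrist plus four joints per finger, a convention popularized by datasets and models in this area). The sketch below, a minimal illustration rather than any specific paper's code, shows that representation and how per-bone lengths can be derived from it along the kinematic chain; the joint ordering and helper names are assumptions for illustration.

```python
# Minimal sketch (illustrative, not from any cited paper): a 3D hand pose
# as 21 joints, and bone lengths derived along the kinematic chain.

# Parent index of each joint; joint 0 is the wrist (root).
# Assumed ordering: thumb 1-4, index 5-8, middle 9-12, ring 13-16, pinky 17-20.
HAND_PARENTS = [
    -1,                # 0: wrist
    0, 1, 2, 3,        # thumb
    0, 5, 6, 7,        # index
    0, 9, 10, 11,      # middle
    0, 13, 14, 15,     # ring
    0, 17, 18, 19,     # pinky
]

def bone_lengths(joints):
    """joints: list of 21 (x, y, z) tuples -> 20 parent-to-child bone lengths."""
    lengths = []
    for j, parent in enumerate(HAND_PARENTS):
        if parent < 0:
            continue  # the wrist root has no parent bone
        dx = joints[j][0] - joints[parent][0]
        dy = joints[j][1] - joints[parent][1]
        dz = joints[j][2] - joints[parent][2]
        lengths.append((dx * dx + dy * dy + dz * dz) ** 0.5)
    return lengths

# Toy example: each finger chain gets unit-length bones along the x-axis.
pose = [(0.0, 0.0, 0.0)] * 21
for finger_start in (1, 5, 9, 13, 17):
    for step in range(4):
        pose[finger_start + step] = (float(step + 1), 0.0, 0.0)

print(bone_lengths(pose))  # 20 bone lengths; in this toy pose, all 1.0
```

Quantities like these bone lengths are what shape-aware methods constrain so that a predicted pose remains anatomically plausible, e.g. under the occlusions discussed above.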
Papers
Mesh Represented Recycle Learning for 3D Hand Pose and Mesh Estimation
Bosang Kim, Jonghyun Kim, Hyotae Lee, Lanying Jin, Jeongwon Ha, Dowoo Kwon, Jungpyo Kim, Wonhyeok Im, KyungMin Jin, Jungho Lee
ShapeGraFormer: GraFormer-Based Network for Hand-Object Reconstruction from a Single Depth Map
Ahmed Tawfik Aboukhadra, Jameel Malik, Nadia Robertini, Ahmed Elhayek, Didier Stricker