Egocentric 3D Hand Pose Estimation
Egocentric 3D hand pose estimation aims to accurately recover the 3D positions of a person's hand joints from a first-person (head-mounted) viewpoint, primarily using RGB video. Current research focuses on improving accuracy through techniques such as multi-view fusion, pseudo-depth generation from single RGB images, and advanced architectures like Vision Transformers (ViTs) and state-space models, often combined with uncertainty estimation. The field is central to human-computer interaction in virtual and augmented reality, robotics, and activity recognition, with ongoing efforts toward robust, efficient methods that generalize across diverse camera setups and lighting conditions.
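To make the multi-view fusion and uncertainty-estimation ideas concrete, here is a minimal sketch (not taken from any specific paper in this collection) of inverse-variance fusion: per-view 3D keypoint predictions, each with a per-joint uncertainty, are combined by weighting each view by the inverse of its predicted variance. The function name, array shapes, and the assumption that predictions are already in a shared world frame are illustrative choices, not an established API.

```python
import numpy as np

def fuse_multiview_keypoints(preds, sigmas):
    """Fuse per-view 3D hand keypoints by inverse-variance weighting.

    preds:  (V, J, 3) array -- J keypoints predicted from V camera views,
            assumed already transformed into a shared world frame.
    sigmas: (V, J) array -- per-view, per-joint uncertainty (std dev),
            e.g. produced by an uncertainty-estimation head.
    Returns the (J, 3) fused keypoints and the (J,) fused uncertainty.
    """
    w = 1.0 / (sigmas ** 2 + 1e-8)                        # inverse-variance weights, (V, J)
    fused = (w[..., None] * preds).sum(0) / w.sum(0)[..., None]
    fused_sigma = np.sqrt(1.0 / w.sum(0))                 # std dev of the weighted mean
    return fused, fused_sigma

# Toy example: two views of a 21-joint hand; the second view is noisier,
# so its predictions receive a correspondingly lower weight.
rng = np.random.default_rng(0)
gt = rng.normal(size=(21, 3))                             # synthetic ground-truth joints
preds = np.stack([gt + 0.01 * rng.normal(size=gt.shape),
                  gt + 0.10 * rng.normal(size=gt.shape)])
sigmas = np.array([[0.01] * 21, [0.10] * 21])
fused, fused_sigma = fuse_multiview_keypoints(preds, sigmas)
```

One property of this scheme is that the fused uncertainty is always lower than that of the best single view, which is why uncertainty-aware fusion tends to help even when one camera dominates.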