Human Hand
Research on the human hand focuses on understanding its complex mechanics and capabilities, primarily through advanced computer vision and machine learning techniques. Current efforts concentrate on accurate 3D hand pose estimation and tracking, often employing transformer-based networks, convolutional neural networks, and diffusion models to analyze images and videos, including egocentric perspectives. These advancements are driving progress in areas like robotic hand design, prosthetic control, human-computer interaction, and assistive technologies for individuals with disabilities, particularly stroke survivors. The ultimate goal is to create more realistic and functional robotic hands and improve the diagnosis and treatment of hand-related conditions.
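To make the pose-estimation goal concrete: a common convention in this literature (used, for example, by MediaPipe Hands) represents the hand as 21 keypoints in 3D, and models are typically scored with Mean Per-Joint Position Error (MPJPE), the average Euclidean distance between predicted and ground-truth joints. The sketch below is a minimal illustration of that metric, not code from any of the papers listed here; the random arrays stand in for hypothetical model outputs.

```python
import numpy as np

# Standard 21-keypoint hand skeleton: 1 wrist + 4 joints per finger,
# each joint an (x, y, z) coordinate.
NUM_KEYPOINTS = 21

def mpjpe(pred, target):
    """Mean Per-Joint Position Error: the average Euclidean distance
    between predicted and ground-truth joints (often reported in mm)."""
    return np.linalg.norm(pred - target, axis=-1).mean()

# Hypothetical prediction and ground truth, shaped (21, 3).
rng = np.random.default_rng(0)
pred = rng.standard_normal((NUM_KEYPOINTS, 3))
target = rng.standard_normal((NUM_KEYPOINTS, 3))

print(f"MPJPE: {mpjpe(pred, target):.3f}")
```

A perfect prediction gives an MPJPE of zero; lower is better. Real systems report this in millimetres after aligning the predicted skeleton to the camera or wrist frame.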
Papers
Bidimensional Leaderboards: Generate and Evaluate Language Hand in Hand
Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Lavinia Dunagan, Jacob Morrison, Alexander R. Fabbri, Yejin Choi, Noah A. Smith
Learning to Localize, Grasp, and Hand Over Unmodified Surgical Needles
Albert Wilcox, Justin Kerr, Brijen Thananjeyan, Jeffrey Ichnowski, Minho Hwang, Samuel Paradis, Danyal Fer, Ken Goldberg