Human Pose Estimation
Human pose estimation aims to accurately locate human body joints from various input modalities, such as images, videos, or sensor data. Current research focuses on improving accuracy and efficiency, particularly in challenging scenarios like occlusion and low-resolution input, through the development and refinement of transformer-based models, graph convolutional networks, and other deep learning architectures. These advances enable more robust movement analysis across numerous applications, including human-robot interaction, healthcare, sports analysis, and augmented/virtual reality.
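To make the task concrete: most deep 2D pose estimators output one confidence heatmap per body joint, and the predicted keypoint is read off as the peak of each map. The sketch below (an illustrative decoder, not taken from any of the papers listed here) shows this decoding step on synthetic heatmaps.

```python
import numpy as np

def decode_heatmaps(heatmaps):
    """Return a (J, 2) array of (x, y) peak locations, one per joint.

    heatmaps: array of shape (J, H, W), one confidence map per joint.
    """
    num_joints, h, w = heatmaps.shape
    # Flatten each map and take the argmax as the keypoint location.
    flat = heatmaps.reshape(num_joints, -1)
    peak_idx = flat.argmax(axis=1)
    ys, xs = np.unravel_index(peak_idx, (h, w))
    return np.stack([xs, ys], axis=1)

# Synthetic example: two 8x8 heatmaps with known peaks.
hm = np.zeros((2, 8, 8))
hm[0, 3, 5] = 1.0  # joint 0 peaks at (x=5, y=3)
hm[1, 6, 2] = 1.0  # joint 1 peaks at (x=2, y=6)
print(decode_heatmaps(hm))  # [[5 3] [2 6]]
```

Real systems refine the integer argmax with sub-pixel offsets and rescale coordinates from heatmap resolution back to the input image, but the core readout is this per-joint peak search.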
Papers
Improved 2D Keypoint Detection in Out-of-Balance and Fall Situations -- combining input rotations and a kinematic model
Michael Zwölfer, Dieter Heinrich, Kurt Schindelwig, Bastian Wandt, Helge Rhodin, Joerg Spoerri, Werner Nachbauer
Bottom-up approaches for multi-person pose estimation and its applications: A brief review
Milan Kresović, Thong Duy Nguyen
Rethinking Keypoint Representations: Modeling Keypoints and Poses as Objects for Multi-Person Human Pose Estimation
William McNally, Kanav Vats, Alexander Wong, John McPhee
Pose Recognition in the Wild: Animal pose estimation using Agglomerative Clustering and Contrastive Learning
Samayan Bhattacharya, Sk Shahnawaz