Pruned Pose Transformer
Pruned Pose Transformers are a class of efficient deep learning models designed to reduce the computational cost of human pose estimation and related tasks, such as 3D mesh recovery and video anomaly detection, while preserving accuracy. Current research focuses on slimming transformer architectures through techniques such as token pruning and self-distillation, aiming to cut inference cost without sacrificing performance. These advances matter because they enable sophisticated pose estimation models to run on resource-constrained devices and improve robustness in challenging scenarios such as occlusion or cluttered scenes, with applications in robotics, human-computer interaction, and video surveillance.
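To make the token-pruning idea concrete, here is a minimal sketch of score-based token pruning, assuming each token carries an importance score (for example, the mean attention it receives). The function name `prune_tokens` and the `keep_ratio` parameter are illustrative, not taken from any specific paper in this area.

```python
import numpy as np

def prune_tokens(tokens: np.ndarray, scores: np.ndarray, keep_ratio: float = 0.5) -> np.ndarray:
    """Keep only the highest-scoring tokens.

    tokens: (num_tokens, dim) token embeddings
    scores: (num_tokens,) importance score per token (assumed precomputed)
    """
    k = max(1, int(round(len(tokens) * keep_ratio)))
    # Top-k indices by score, sorted so original token order is preserved
    keep = np.sort(np.argsort(scores)[-k:])
    return tokens[keep]

# Toy example: 8 tokens of dimension 4 with random importance scores
rng = np.random.default_rng(0)
tokens = rng.standard_normal((8, 4))
scores = rng.random(8)
pruned = prune_tokens(tokens, scores, keep_ratio=0.5)
print(pruned.shape)  # (4, 4)
```

Dropping low-scoring tokens between transformer blocks shrinks the quadratic attention cost on every subsequent layer, which is where the speedup on resource-constrained devices comes from; real systems compute the scores inside the network rather than receiving them as input.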