Keypoint Normalization
Keypoint normalization is a technique for standardizing the representation of keypoints extracted from data sources such as images, point clouds, and videos. By removing nuisance variation in position, scale, or orientation, it improves the robustness and generalizability of machine learning models. Current research focuses on normalization methods tailored to specific data types and tasks, including affine transformations for object pose estimation, missing-keypoint imputation for sign language recognition, and azimuth-based normalization for point cloud analysis. These advancements improve the performance and efficiency of algorithms across diverse applications, ranging from 3D object recognition and scene understanding to sign language translation and human-computer interaction.
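As a minimal sketch of the general idea (not any specific paper's method), a common baseline normalization removes translation and isotropic scale: shift the keypoints to zero mean and rescale so the root-mean-square distance from the centroid is one. The function name `normalize_keypoints` is illustrative.

```python
import numpy as np

def normalize_keypoints(points: np.ndarray) -> np.ndarray:
    """Translate keypoints to zero mean and scale to unit RMS distance.

    points: (N, D) array of keypoint coordinates (e.g. D=2 for images,
    D=3 for point clouds). Returns the normalized (N, D) array.
    Illustrative baseline, not a method from the cited papers.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Root-mean-square distance of keypoints from the centroid.
    scale = np.sqrt((centered ** 2).sum(axis=1).mean())
    if scale == 0:
        # Degenerate case: all keypoints coincide; only translation applies.
        return centered
    return centered / scale

# Example: three 2D keypoints
kps = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 2.0]])
norm = normalize_keypoints(kps)
```

After normalization, models see keypoint configurations in a canonical frame, so two detections of the same object at different positions or scales map to nearly identical inputs.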
Papers
December 2, 2024
November 15, 2023
June 30, 2023
September 18, 2022
September 15, 2022
July 13, 2022
April 22, 2022