Joint Localization
Joint localization, the simultaneous estimation of the positions of multiple related entities, is a core problem across fields such as robotics, computer vision, and signal processing. Current research focuses on improving accuracy and robustness through approaches such as diffusion models for end-to-end navigation and planning, hierarchical relation networks for multi-person pose estimation, and optimal transport formulations for handling data association in multi-source localization. These advances underpin applications such as autonomous navigation, human-computer interaction, and medical image analysis, where precise and efficient localization of multiple objects or features is essential.
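To make the optimal-transport idea concrete, below is a minimal, illustrative sketch of entropy-regularized optimal transport (Sinkhorn iterations) used as soft data association between predicted source positions and incoming measurements. The function name `sinkhorn_plan`, the uniform marginals, and the squared-Euclidean cost are assumptions chosen for illustration, not the specific formulation used in any of the papers listed below.

```python
import numpy as np

def sinkhorn_plan(cost, reg=0.05, n_iters=500):
    """Entropy-regularized optimal transport via Sinkhorn iterations.

    cost : (n, m) matrix of pairwise association costs.
    Returns an (n, m) soft-assignment (transport) plan whose rows and
    columns approximately match uniform marginals.
    """
    n, m = cost.shape
    # Scale the cost so the regularization strength is problem-independent.
    K = np.exp(-cost / (reg * cost.max()))
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)  # uniform marginals
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

# Toy example: associate 3 estimated source positions with 3 measurements.
sources = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])
measurements = np.array([[5.1, 4.8], [0.2, -0.1], [9.9, 0.3]])
cost = ((sources[:, None, :] - measurements[None, :, :]) ** 2).sum(-1)
plan = sinkhorn_plan(cost)
print(plan.round(3))        # near-permutation matrix: soft data association
print(plan.argmax(axis=1))  # hardened assignment: [1, 0, 2]
```

One reason optimal transport is attractive here is that the entropic regularization makes the assignment differentiable, so a soft association step like this can sit inside an end-to-end trained localization pipeline and be hardened (e.g. by argmax) only at inference time.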
Papers
Joint localization and classification of breast tumors on ultrasound images using a novel auxiliary attention-based framework
Zong Fan, Ping Gong, Shanshan Tang, Christine U. Lee, Xiaohui Zhang, Pengfei Song, Shigao Chen, Hua Li
CASAPose: Class-Adaptive and Semantic-Aware Multi-Object Pose Estimation
Niklas Gard, Anna Hilsmann, Peter Eisert