Exocentric Video
Exocentric video research focuses on bridging the gap between first-person (egocentric) and third-person (exocentric) views of the same action, either by generating one view from the other or by learning view-invariant action representations. Current work relies on generative models, particularly diffusion models, alongside techniques such as multi-view stereo matching, self-supervised learning, and contrastive learning to achieve cross-view translation and action understanding. Progress here matters for embodied AI, augmented reality, and human-computer interaction, since it lets AI systems understand and interact with the world from diverse viewpoints.
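The contrastive-learning approach mentioned above can be illustrated with a minimal sketch: given paired embeddings of the same action clip seen from the egocentric and exocentric views, a symmetric InfoNCE-style loss pulls matched pairs together and pushes mismatched pairs apart, encouraging a view-invariant representation. The function below is a hypothetical NumPy illustration of that objective, not any specific paper's method; the names `info_nce`, `ego`, and `exo` are assumptions for this example.

```python
import numpy as np

def info_nce(ego, exo, temperature=0.1):
    """Symmetric InfoNCE loss aligning paired ego/exo clip embeddings.

    ego, exo: (N, D) arrays; row i of each is the same action seen
    from the two viewpoints, forming the positive pair. All other
    rows in the batch serve as negatives.
    """
    # L2-normalize so the dot product is cosine similarity.
    ego = ego / np.linalg.norm(ego, axis=1, keepdims=True)
    exo = exo / np.linalg.norm(exo, axis=1, keepdims=True)
    logits = ego @ exo.T / temperature   # (N, N); positives on the diagonal
    idx = np.arange(len(ego))

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_p = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_p[idx, idx].mean()        # -log p(correct pair)

    # Average the ego->exo and exo->ego directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Minimizing this loss over many batches makes embeddings of the same action similar regardless of viewpoint; in practice the arrays would come from a trainable video encoder rather than raw NumPy data.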