Egocentric Video
Egocentric video, which captures the world from a first-person perspective, enables the analysis of human activities and interactions in their natural context. Current research focuses on developing robust multimodal models, often built on transformer architectures and large language models, to understand and reason about egocentric video, addressing challenges such as motion estimation, action recognition, and affordance prediction. The field is significant for advancing artificial intelligence, particularly embodied AI and human-computer interaction, with applications ranging from assistive technologies and virtual reality to robotics and the study of human behavior. Progress is further driven by the development of large-scale datasets and standardized evaluation metrics.
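To make the typical transformer-based setup concrete, below is a minimal, hypothetical sketch of an egocentric action classifier: a temporal transformer applied over per-frame features extracted by an image backbone. The class name `EgoActionClassifier`, the dimensions, and the layer counts are illustrative assumptions for this sketch, not the method of any paper listed here.

```python
# Minimal illustrative sketch (hypothetical, not from any listed paper):
# a temporal transformer over per-frame features for action recognition.
import torch
import torch.nn as nn

class EgoActionClassifier(nn.Module):
    def __init__(self, feat_dim=512, num_frames=16, num_classes=100):
        super().__init__()
        # Learnable [CLS] token and temporal position embeddings.
        self.cls_token = nn.Parameter(torch.zeros(1, 1, feat_dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_frames + 1, feat_dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=8, batch_first=True
        )
        self.temporal_encoder = nn.TransformerEncoder(encoder_layer, num_layers=4)
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, frame_feats):
        # frame_feats: (batch, num_frames, feat_dim), e.g. features from a
        # frozen image backbone applied to each egocentric video frame.
        b = frame_feats.size(0)
        cls = self.cls_token.expand(b, -1, -1)
        x = torch.cat([cls, frame_feats], dim=1) + self.pos_embed
        x = self.temporal_encoder(x)
        return self.head(x[:, 0])  # classify from the [CLS] token

model = EgoActionClassifier()
logits = model(torch.randn(2, 16, 512))  # two clips of 16 frame features each
print(logits.shape)  # torch.Size([2, 100])
```

Real systems in this area typically extend such a temporal model with additional modalities (audio, gaze, IMU, or text from a language model), but the frame-features-plus-transformer pattern is the common core this sketch illustrates.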
Papers
ReLER@ZJU Submission to the Ego4D Moment Queries Challenge 2022
Jiayi Shao, Xiaohan Wang, Yi Yang
InternVideo-Ego4D: A Pack of Champion Solutions to Ego4D Challenges
Guo Chen, Sen Xing, Zhe Chen, Yi Wang, Kunchang Li, Yizhuo Li, Yi Liu, Jiahao Wang, Yin-Dong Zheng, Bingkun Huang, Zhiyu Zhao, Junting Pan, Yifei Huang, Zun Wang, Jiashuo Yu, Yinan He, Hongjie Zhang, Tong Lu, Yali Wang, Limin Wang, Yu Qiao