Zero-Shot Action Recognition
Zero-shot action recognition aims to identify actions in videos without training on those specific action categories, by generalizing knowledge learned on seen classes to novel ones. Current research relies heavily on vision-language models (VLMs), typically matching video features against text embeddings of action descriptions, with techniques such as dual visual-text alignment, multimodal prompting, and information compensation used to narrow the semantic gap between visual features and textual action descriptions. The field is significant because it addresses the scalability and generalization limits of traditional action recognition methods, with potential applications in robotics, video surveillance, and assistive technologies.
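In the common VLM-based formulation, this reduces to nearest-neighbor matching in a shared embedding space. The sketch below is a minimal illustration using a CLIP image-text model via Hugging Face transformers: frame embeddings are mean-pooled into a video-level embedding and compared against text embeddings of candidate action prompts. The checkpoint name, prompt template, action labels, and dummy frames are illustrative assumptions, not drawn from any particular method discussed here.

```python
# Minimal sketch of CLIP-style zero-shot action recognition.
# Assumptions: the checkpoint, the prompt template, and the stand-in
# frames are illustrative; real pipelines sample frames from a video.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Stand-in frames; in practice these are sampled from a video clip.
frames = [Image.new("RGB", (224, 224)) for _ in range(8)]
actions = ["archery", "juggling", "playing piano"]  # unseen at training time
prompts = [f"a video of a person {a}" for a in actions]

with torch.no_grad():
    # Encode frames and mean-pool into a single video-level embedding.
    image_inputs = processor(images=frames, return_tensors="pt")
    frame_emb = model.get_image_features(**image_inputs)   # (8, 512)
    video_emb = frame_emb.mean(dim=0, keepdim=True)        # (1, 512)

    # Encode the textual action descriptions.
    text_inputs = processor(text=prompts, return_tensors="pt", padding=True)
    text_emb = model.get_text_features(**text_inputs)      # (3, 512)

# Cosine similarity between the video and each action description,
# turned into a distribution over candidate (novel) actions.
video_emb = video_emb / video_emb.norm(dim=-1, keepdim=True)
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
probs = (100.0 * video_emb @ text_emb.T).softmax(dim=-1)
print({a: round(p.item(), 3) for a, p in zip(actions, probs[0])})
```

Mean-pooling is the simplest possible temporal aggregation; the dual visual-text alignment and multimodal prompting techniques mentioned above can be viewed as refinements of the visual and text sides of this same matching scheme.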