Paper ID: 2312.13330

SOVC: Subject-Oriented Video Captioning

Chang Teng, Yunchuan Ma, Guorong Li, Yuankai Qi, Laiyun Qing, Qingming Huang

Describing video content according to users' needs is a long-held goal. Although existing video captioning methods have made significant progress, the generated captions may not focus on the entity that a user is particularly interested in. To address this problem, we propose a new video captioning task, Subject-Oriented Video Captioning (SOVC), which allows users to specify the target to be described via a bounding box. To support this task, we construct two subject-oriented video captioning datasets based on two widely used video captioning datasets, MSVD and MSR-VTT, by annotating the subject of each caption in each video. These datasets pave the way for describing the targets users are interested in. To tackle this task, we introduce SOVCNet, a method tailored to it. SOVCNet consists of two key components: a subject-oriented sampling module that samples frames related to the subject to minimize irrelevant information, and a subject-oriented encoding module that uses the subject area as a hard prompt and integrates learnable soft prompts, enhancing the model's focus on the subject's activities and facilitating adaptation to the downstream generation task. Extensive experimental results demonstrate the effectiveness of our method on this new task.
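To make the two components concrete, below is a minimal PyTorch sketch of how subject-oriented sampling and subject-oriented encoding with hard and soft prompts might be wired together. It is not the authors' implementation: the class names, feature dimensions, cosine-similarity frame scoring, and the Transformer encoder are all illustrative assumptions based only on the abstract.

```python
# A minimal sketch (not the authors' implementation) of the two components
# described in the abstract. Module names, dimensions, and the cosine-similarity
# frame scoring are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SubjectOrientedSampler(nn.Module):
    """Keep the k frames most related to the subject region feature."""

    def __init__(self, num_keep: int = 8):
        super().__init__()
        self.num_keep = num_keep

    def forward(self, frame_feats: torch.Tensor, subject_feat: torch.Tensor):
        # frame_feats: (T, D) per-frame features; subject_feat: (D,) feature of
        # the user-specified bounding-box region (assumed pooled from a backbone).
        scores = F.cosine_similarity(frame_feats, subject_feat.unsqueeze(0), dim=-1)
        k = min(self.num_keep, frame_feats.size(0))
        top_idx = scores.topk(k).indices.sort().values  # keep temporal order
        return frame_feats[top_idx], top_idx


class SubjectOrientedEncoder(nn.Module):
    """Prepend the subject feature (hard prompt) and learnable soft prompts."""

    def __init__(self, dim: int = 512, num_soft_prompts: int = 4, num_layers: int = 2):
        super().__init__()
        self.soft_prompts = nn.Parameter(torch.randn(num_soft_prompts, dim) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, frame_feats: torch.Tensor, subject_feat: torch.Tensor):
        # Sequence layout: [soft prompts; subject hard prompt; sampled frames]
        tokens = torch.cat(
            [self.soft_prompts, subject_feat.unsqueeze(0), frame_feats], dim=0
        ).unsqueeze(0)  # add batch dim -> (1, L, D)
        return self.encoder(tokens)  # contextualized features for a caption decoder


if __name__ == "__main__":
    T, D = 32, 512
    frames, subject = torch.randn(T, D), torch.randn(D)
    sampled, _ = SubjectOrientedSampler(num_keep=8)(frames, subject)
    encoded = SubjectOrientedEncoder(dim=D)(sampled, subject)
    print(encoded.shape)  # (1, 13, 512): 4 soft prompts + 1 hard prompt + 8 frames
```

In this sketch the encoder output would feed a standard caption decoder; how the paper actually fuses the prompts with the video features is left to the full method section.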

Submitted: Dec 20, 2023