Paper ID: 2305.16698

Detect Any Shadow: Segment Anything for Video Shadow Detection

Yonghui Wang, Wengang Zhou, Yunyao Mao, Houqiang Li

The segment anything model (SAM) has achieved great success in the field of natural image segmentation. Nevertheless, SAM tends to treat shadows as background and therefore does not segment them. In this paper, we propose ShadowSAM, a simple yet effective framework for fine-tuning SAM to detect shadows. Furthermore, by combining it with a long short-term attention mechanism, we extend its capability to efficient video shadow detection. Specifically, we first fine-tune SAM on the ViSha training dataset, utilizing bounding boxes obtained from the ground-truth shadow masks. During inference, we simulate user interaction by providing a bounding box to detect shadows in a specific frame (e.g., the first frame). Then, using the detected shadow mask as a prior, we employ a long short-term network to learn spatial correlations between distant frames and temporal consistency between adjacent frames, thereby achieving precise propagation of shadow information across video frames. Extensive experimental results demonstrate the effectiveness of our method, which outperforms state-of-the-art approaches by a notable margin in terms of MAE and IoU metrics. Moreover, our method achieves faster inference than previous video shadow detection approaches, validating both its effectiveness and efficiency. The source code is publicly available at https://github.com/harrytea/Detect-AnyShadow.
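The abstract mentions that fine-tuning uses bounding boxes derived from the ground-truth shadow masks. A minimal sketch of that derivation step might look as follows (illustrative only; the function name and details are assumptions, not taken from the paper's code):

```python
import numpy as np

def mask_to_box(mask: np.ndarray) -> np.ndarray:
    """Derive an [x0, y0, x1, y1] box prompt from a binary shadow mask.

    The tightest axis-aligned box around the nonzero (shadow) pixels
    serves as the prompt fed to SAM during fine-tuning or inference.
    """
    ys, xs = np.nonzero(mask)  # row/column indices of shadow pixels
    return np.array([xs.min(), ys.min(), xs.max(), ys.max()])

# Toy 6x6 mask with a shadow region covering rows 1-3, columns 2-4.
mask = np.zeros((6, 6), dtype=np.uint8)
mask[1:4, 2:5] = 1
print(mask_to_box(mask))  # -> [2 1 4 3]
```

Such a box could then be passed as the `box` argument of the official `segment_anything` package's `SamPredictor.predict`, which is how box prompts are typically supplied to SAM.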

Submitted: May 26, 2023