Paper ID: 2408.07867

Continuous Perception Benchmark

Zeyu Wang, Zhenzhen Weng, Serena Yeung-Levy

Humans continuously perceive and process visual signals. However, current video models typically either sample key frames sparsely or divide videos into chunks and densely sample within each chunk. This approach stems from the fact that most existing video benchmarks can be addressed by analyzing key frames or by aggregating information from separate chunks. We anticipate that the next generation of vision models will emulate human perception by processing visual input continuously and holistically. To facilitate the development of such models, we propose the Continuous Perception Benchmark, a video question answering task that cannot be solved by focusing on a few key frames or by captioning small chunks and then summarizing with a language model. Extensive experiments demonstrate that existing models, whether commercial or open-source, struggle with these tasks, indicating the need for new technical advances in this direction.
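To make the contrast in the abstract concrete, here is a minimal Python sketch (not from the paper; all function names and parameter values are hypothetical) of the two frame-sampling strategies it describes: sparse key-frame sampling across the whole video versus dense sampling within fixed-length chunks.

```python
# Illustrative sketch of the two sampling strategies the abstract contrasts.
# Names and defaults are hypothetical, chosen only for demonstration.
import numpy as np

def sparse_keyframes(num_frames: int, k: int = 8) -> np.ndarray:
    """Sample k frame indices uniformly across the entire video."""
    return np.linspace(0, num_frames - 1, k).round().astype(int)

def chunked_dense(num_frames: int, chunk_len: int = 64,
                  per_chunk: int = 16) -> list[np.ndarray]:
    """Split the video into fixed-length chunks and sample densely in each."""
    chunks = []
    for start in range(0, num_frames, chunk_len):
        end = min(start + chunk_len, num_frames)
        k = min(per_chunk, end - start)
        chunks.append(np.linspace(start, end - 1, k).round().astype(int))
    return chunks

if __name__ == "__main__":
    n = 3000  # e.g., a 100-second clip at 30 fps
    print("sparse key frames:", sparse_keyframes(n))
    print("first chunk (dense):", chunked_dense(n)[0])
```

Both strategies discard information between the sampled frames or across chunk boundaries, which is precisely what the benchmark's tasks are designed to penalize: answering them requires integrating the video continuously rather than from isolated frames or per-chunk summaries.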

Submitted: Aug 15, 2024