Video Generation Benchmark
Video generation benchmarks evaluate how well algorithms produce realistic and coherent videos, focusing on metrics such as visual fidelity and temporal consistency. Current research emphasizes improving the efficiency and quality of video generation with diffusion models, exploring novel architectures such as generative deformation fields and hybrid pixel-latent approaches, and leveraging advances in large language models and tokenization techniques. These benchmarks are crucial for driving progress in video synthesis, with impact on entertainment, animation, and scientific visualization through the creation of high-quality, diverse, and easily controllable video content.
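Temporal consistency, in particular, lends itself to simple numerical proxies. The snippet below is a minimal sketch, assuming a clip is available as a NumPy array of decoded frames, that scores it by the mean correlation between consecutive frames; it is not the metric of any particular benchmark, and the function name temporal_consistency is purely illustrative. Real benchmarks typically compare learned features from a pretrained video network rather than raw pixels.

```python
import numpy as np

def temporal_consistency(frames: np.ndarray) -> float:
    """Mean Pearson-style correlation between consecutive frames.

    frames: array of shape (T, H, W, C) with float pixel values.
    Returns a value in [-1, 1]; higher means smoother frame-to-frame change.
    """
    # Flatten each frame into a vector and centre it (zero mean per frame).
    flat = frames.reshape(frames.shape[0], -1).astype(np.float64)
    flat -= flat.mean(axis=1, keepdims=True)
    # L2-normalise each frame vector (epsilon guards against constant frames).
    norms = np.linalg.norm(flat, axis=1, keepdims=True) + 1e-8
    unit = flat / norms
    # Correlation between frame t and frame t+1, averaged over the clip.
    sims = np.sum(unit[:-1] * unit[1:], axis=1)
    return float(sims.mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # A pure-noise clip scores near 0; a static clip scores near 1.0.
    noisy = rng.random((16, 64, 64, 3))
    static = np.tile(rng.random((1, 64, 64, 3)), (16, 1, 1, 1))
    print(f"noisy clip:  {temporal_consistency(noisy):.3f}")
    print(f"static clip: {temporal_consistency(static):.3f}")
```

Note that a correlation-based score like this rewards static clips, which is why benchmark suites pair temporal-consistency measures with fidelity and diversity metrics rather than reporting any one of them in isolation.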