Video-Based Generative Tasks
Video-based generative tasks aim to create or manipulate video content with machine learning, with the goal of improving the quality, realism, and flexibility of generated videos. Current research emphasizes novel architectures, such as diffusion models and implicit neural representations (INRs), to achieve better temporal coherence and to handle diverse tasks like video outpainting, inpainting, and interpolation. These advances are already benefiting fields such as biomedical research (e.g., augmenting cell-tracking datasets) and are poised to improve many applications that require video generation or manipulation.
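As a toy illustration of the video interpolation task mentioned above, the sketch below linearly blends two frames to synthesize an intermediate one. This is only a baseline for the task's interface; the learned methods discussed here (e.g., diffusion models) instead predict motion and content, so treat the function and frame format as illustrative assumptions, not any paper's method.

```python
def interpolate_frames(frame_a, frame_b, t):
    """Blend two frames at time t in [0, 1].

    Frames are nested lists of grayscale pixel intensities.
    Linear blending ghosts moving objects; learned interpolators
    avoid this, but the input/output contract is the same.
    """
    return [
        [(1 - t) * a + t * b for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(frame_a, frame_b)
    ]

# Two tiny 2x2 "frames": all-black and all-white
f0 = [[0.0, 0.0], [0.0, 0.0]]
f1 = [[1.0, 1.0], [1.0, 1.0]]
mid = interpolate_frames(f0, f1, 0.5)  # halfway frame, all 0.5
```

A real interpolation model would take the same pair of frames and a target timestamp but produce the in-between frame with a network rather than a pixel-wise average.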