Scene Representation Transformer
Scene Representation Transformers (SRTs) encode 2D images or video into efficient, accurate 3D scene representations, enabling novel view synthesis and other downstream tasks. Current research focuses on improving SRT architectures: incorporating relative pose information for scalability, leveraging external knowledge bases for richer scene understanding, and developing methods that handle dynamic scenes and unposed imagery. These advances matter for applications such as autonomous driving, 3D scene generation, and visual grounding, offering gains in speed, accuracy, and data efficiency over traditional methods.
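The basic pipeline behind this family of models is compact enough to sketch: a transformer encoder turns posed image patches into a set-latent scene representation, and a decoder cross-attends to that latent set with per-ray queries to predict pixel colors for a novel view. The snippet below is a minimal, illustrative sketch in PyTorch; the `MiniSRT` class, its module names, and all dimensions are assumptions chosen for brevity, not any paper's actual implementation.

```python
import torch
import torch.nn as nn

class MiniSRT(nn.Module):
    """Toy SRT-style model: posed patches -> set latent -> per-ray RGB."""
    def __init__(self, d_model=256, n_heads=8, n_enc=4, n_dec=2,
                 patch_dim=3 * 8 * 8, ray_dim=6):
        super().__init__()
        self.patch_embed = nn.Linear(patch_dim, d_model)  # flattened RGB patches -> tokens
        self.pose_embed = nn.Linear(12, d_model)          # flattened 3x4 camera pose per patch
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_enc)
        self.ray_embed = nn.Linear(ray_dim, d_model)      # ray origin + direction -> query token
        dec_layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, n_dec)
        self.to_rgb = nn.Linear(d_model, 3)

    def forward(self, patches, poses, rays):
        # patches: (B, V*P, patch_dim), poses: (B, V*P, 12), rays: (B, R, ray_dim)
        tokens = self.patch_embed(patches) + self.pose_embed(poses)
        scene = self.encoder(tokens)              # set-latent scene representation
        queries = self.ray_embed(rays)
        feats = self.decoder(queries, scene)      # rays cross-attend to the scene latents
        return torch.sigmoid(self.to_rgb(feats))  # per-ray RGB for the novel view

# Usage sketch: 5 input views of 16 patches each, rendering 1024 query rays.
model = MiniSRT()
patches = torch.randn(2, 5 * 16, 3 * 8 * 8)
poses = torch.randn(2, 5 * 16, 12)
rays = torch.randn(2, 1024, 6)
rgb = model(patches, poses, rays)  # shape (2, 1024, 3)
```

Because rendering reduces to a batched attention query rather than per-ray volumetric integration, this style of model trades some geometric fidelity for much faster inference, which is the usual motivation cited for SRT-like architectures.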