Dense Scene Understanding
Dense scene understanding aims to comprehensively analyze and reconstruct 3D scenes from diverse sensor inputs, enabling applications such as autonomous driving and virtual/augmented reality. Current research focuses heavily on multi-task learning, pairing transformer backbones with novel decoder designs (e.g., Mamba-based decoders) to efficiently model long-range dependencies and cross-task interactions in complex scenes. These advances target accurate depth estimation, robust pose estimation, and scalable scene representation, particularly in large-scale environments and under incomplete or noisy data, and ultimately improve the accuracy and robustness of 3D scene reconstruction.
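To make the multi-task setup concrete, below is a minimal, illustrative PyTorch sketch of the pattern described above: a shared transformer encoder models long-range context once, and lightweight task-specific decoder heads produce dense predictions. The module names, dimensions, and the choice of depth estimation plus semantic segmentation as the two example tasks are assumptions for illustration, not the architecture of any particular paper; a Mamba-style decoder could stand in for the heads where such layers are available.

```python
# Illustrative sketch only: shared transformer encoder + per-task dense heads.
# All names, sizes, and the two tasks (depth, segmentation) are assumptions.
import torch
import torch.nn as nn


class PatchEmbed(nn.Module):
    """Split an image into non-overlapping patches and project them to tokens."""
    def __init__(self, in_ch=3, embed_dim=256, patch=16):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, embed_dim, kernel_size=patch, stride=patch)

    def forward(self, x):
        x = self.proj(x)                                  # (B, C, H/16, W/16)
        b, c, h, w = x.shape
        return x.flatten(2).transpose(1, 2), (h, w)       # (B, N, C), token grid


class MultiTaskDenseModel(nn.Module):
    """Shared encoder captures long-range context; one decoder head per task."""
    def __init__(self, embed_dim=256, depth=4, num_classes=19):
        super().__init__()
        self.embed = PatchEmbed(embed_dim=embed_dim)
        layer = nn.TransformerEncoderLayer(embed_dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # Task-specific heads: refine shared features and upsample to pixels.
        self.depth_head = self._make_head(embed_dim, out_ch=1)
        self.seg_head = self._make_head(embed_dim, out_ch=num_classes)

    @staticmethod
    def _make_head(dim, out_ch):
        return nn.Sequential(
            nn.Conv2d(dim, dim // 2, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(dim // 2, out_ch, 1),
            nn.Upsample(scale_factor=16, mode="bilinear", align_corners=False),
        )

    def forward(self, x):
        tokens, (h, w) = self.embed(x)
        tokens = self.encoder(tokens)                     # features shared across tasks
        feat = tokens.transpose(1, 2).reshape(x.size(0), -1, h, w)
        return {"depth": self.depth_head(feat), "seg": self.seg_head(feat)}


if __name__ == "__main__":
    model = MultiTaskDenseModel()
    out = model(torch.randn(1, 3, 224, 224))
    print({k: tuple(v.shape) for k, v in out.items()})   # dense maps at input resolution
```

In this kind of design the encoder is the expensive, shared component, so cross-task interaction happens implicitly through its features, while each decoder stays small; training typically sums a per-task loss (e.g., L1 for depth, cross-entropy for segmentation) over the dictionary of outputs.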