Dyna-DepthFormer
DepthFormer, and variants such as Dyna-DepthFormer, are a family of transformer-based models for depth estimation from images and videos. These models use self- and cross-attention mechanisms to capture both local and long-range dependencies within and across multiple frames, improving accuracy over previous methods. Current research focuses on refining these architectures to handle dynamic scenes and on integrating multimodal data (e.g., combining color and depth information) for better performance. The improved accuracy and efficiency of these models have significant implications for applications such as autonomous driving, robotics, and 3D scene reconstruction.
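To make the cross-attention idea concrete, the sketch below shows scaled dot-product attention where queries come from the current frame and keys/values come from a reference frame, so each current-frame token aggregates context from across frames. This is a minimal illustrative example in NumPy, not the actual DepthFormer architecture; the function and variable names are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_frame_attention(query_feats, ref_feats):
    """Toy cross-attention: current-frame tokens attend to reference-frame tokens.

    query_feats: (N_q, d) features from the current frame
    ref_feats:   (N_r, d) features from a reference frame
    Returns:     (N_q, d) context aggregated from the reference frame
    """
    d = query_feats.shape[-1]
    scores = query_feats @ ref_feats.T / np.sqrt(d)  # (N_q, N_r) similarity
    weights = softmax(scores, axis=-1)               # attention over reference tokens
    return weights @ ref_feats                       # weighted sum of reference features

# Toy usage: 4 query tokens and 6 reference tokens with 8-dim features.
rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))
r = rng.standard_normal((6, 8))
out = cross_frame_attention(q, r)
print(out.shape)  # (4, 8)
```

In a full model, the queries, keys, and values would each pass through learned linear projections, and multiple attention heads would run in parallel; this sketch keeps only the core attention computation.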