Scale-Aware Depth
Scale-aware depth estimation aims to predict the metric distance of objects in images, overcoming the inherent scale ambiguity of monocular vision. Current research focuses on enforcing scale consistency across multiple views (e.g., full-surround monodepth or multi-view geometry), incorporating sparse depth priors from other sensors or from geometric constraints, and developing robust architectures such as transformers and encoder-decoder networks that handle diverse scenes and preserve fine detail. These advances are crucial for applications like autonomous driving, robotic navigation, and medical imaging, enabling more reliable 3D scene understanding and better performance in tasks that require precise depth information.
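As a minimal sketch of how a sparse depth prior can resolve monocular scale ambiguity, the snippet below aligns a scale-ambiguous predicted depth map to a handful of metric measurements (e.g., LiDAR returns) via median scaling. The function name and toy data are illustrative, not taken from any specific paper:

```python
import numpy as np

def recover_metric_scale(pred_depth, sparse_depth):
    """Rescale a relative monocular depth map to metric units.

    pred_depth   -- HxW relative depth prediction (arbitrary units)
    sparse_depth -- HxW metric depth, 0 where no measurement exists
    """
    valid = sparse_depth > 0
    # Per-pixel ratios between metric samples and the prediction;
    # the median is robust to outliers in either source.
    scale = np.median(sparse_depth[valid] / pred_depth[valid])
    return scale * pred_depth

# Toy example: a flat relative-depth prediction and one metric sample.
pred = np.full((4, 4), 2.0)   # network output, arbitrary units
sparse = np.zeros((4, 4))
sparse[0, 0] = 10.0           # single LiDAR-style measurement: 10 m
metric = recover_metric_scale(pred, sparse)   # scaled by 10/2 = 5
```

Multi-view or full-surround methods instead bake scale consistency into training, but post-hoc alignment like this remains the standard way to evaluate, or deploy, scale-ambiguous models when sparse metric data is available.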