Cross View
Cross-view research focuses on bridging the large visual discrepancies between images captured from different viewpoints, with the primary aim of improving the accuracy and robustness of tasks such as geo-localization, scene understanding, and 3D reconstruction. Current work relies heavily on deep learning models, including transformers, autoencoders, and diffusion models, and often incorporates techniques such as contrastive learning, bird's-eye-view (BEV) transformations, and geometric constraints to align and fuse information across views. The field is central to autonomous navigation, remote sensing, and human-computer interaction, where it enables more reliable and efficient processing of multi-perspective data.
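To make the contrastive-learning idea concrete: a common recipe in cross-view matching is to embed the two views (e.g. a ground image and an aerial image of the same place) with separate encoders and pull matching pairs together with an InfoNCE-style loss. The sketch below is illustrative only, not taken from any specific paper listed here; it uses random toy vectors in place of encoder outputs, and all function names (`infonce_loss`, `l2_normalize`) are made up for the example.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    # Unit-normalize so dot products become cosine similarities.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def infonce_loss(ground, aerial, temperature=0.07):
    """Symmetric InfoNCE: row i of `ground` matches row i of `aerial`."""
    g = l2_normalize(ground)
    a = l2_normalize(aerial)
    logits = g @ a.T / temperature      # pairwise cosine-similarity logits
    labels = np.arange(len(g))          # matching pairs sit on the diagonal

    def xent(lg):
        # Numerically stable cross-entropy against the diagonal labels.
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Average the ground->aerial and aerial->ground directions.
    return 0.5 * (xent(logits) + xent(logits.T))

# Toy "embeddings": paired views are small perturbations of a shared vector.
rng = np.random.default_rng(0)
base = rng.normal(size=(8, 32))
ground = base + 0.05 * rng.normal(size=(8, 32))
aerial = base + 0.05 * rng.normal(size=(8, 32))

paired = infonce_loss(ground, aerial)
shuffled = infonce_loss(ground, aerial[rng.permutation(8)])
print(paired < shuffled)  # aligned pairs should yield the lower loss
```

In a real cross-view pipeline the toy vectors would be replaced by encoder outputs (often a transformer per view, sometimes after a BEV transformation of the ground image), but the alignment objective keeps this same shape.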
Papers
Any Way You Look At It: Semantic Crossview Localization and Mapping with LiDAR
Ian D. Miller, Anthony Cowley, Ravi Konkimalla, Shreyas S. Shivakumar, Ty Nguyen, Trey Smith, Camillo Jose Taylor, Vijay Kumar
Multi-focus thermal image fusion
Radek Benes, Pavel Dvorak, Marcos Faundez-Zanuy, Virginia Espinosa-Duro, Jiri Mekyska