Remote Sensing
Remote sensing uses satellite and aerial imagery to analyze Earth's surface, extracting actionable information for applications such as environmental monitoring and urban planning. Current research emphasizes deep learning techniques, particularly transformer-based architectures and masked autoencoders, to improve the accuracy and efficiency of tasks such as semantic segmentation, object detection, and image-text retrieval. These advances are crucial for understanding Earth's systems and informing decisions in areas ranging from climate change mitigation to resource management. The field also shows growing interest in multimodal fusion, few-shot learning, and explainable AI as responses to data scarcity and limited model interpretability.
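To make the masked-autoencoder idea mentioned above concrete: such models split an image into fixed-size patches, hide a large fraction of them, and train an encoder-decoder to reconstruct the hidden patches from the visible ones. The sketch below shows only the patch-masking step, using illustrative names (`random_patch_mask` is not from any of the listed papers); a minimal sketch, not any specific paper's implementation.

```python
import random

def random_patch_mask(num_patches, mask_ratio=0.75, seed=0):
    """Split patch indices into visible and masked sets, as in
    masked-autoencoder-style self-supervised pretraining.

    Only the visible patches are fed to the encoder; the model is
    trained to reconstruct the masked ones.
    """
    rng = random.Random(seed)
    indices = list(range(num_patches))
    rng.shuffle(indices)
    num_masked = int(num_patches * mask_ratio)
    masked = sorted(indices[:num_masked])
    visible = sorted(indices[num_masked:])
    return visible, masked

# Example: a 224x224 satellite tile cut into 16x16 patches
# yields a 14x14 grid, i.e. 196 patches in total.
visible, masked = random_patch_mask(196, mask_ratio=0.75)
print(len(visible), len(masked))  # 49 visible, 147 masked
```

A high mask ratio (around 75%) is what makes this pretraining objective effective: reconstructing most of the scene from a small visible subset forces the encoder to learn spatial structure rather than interpolate locally.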
Papers
Change Captioning in Remote Sensing: Evolution to SAT-Cap -- A Single-Stage Transformer Approach
Yuduo Wang, Weikang Yu, Pedram Ghamisi
EarthView: A Large Scale Remote Sensing Dataset for Self-Supervision
Diego Velazquez, Pau Rodriguez López, Sergio Alonso, Josep M. Gonfaus, Jordi Gonzalez, Gerardo Richarte, Javier Marin, Yoshua Bengio, Alexandre Lacoste
Threshold Attention Network for Semantic Segmentation of Remote Sensing Images
Wei Long, Yongjun Zhang, Zhongwei Cui, Yujie Xu, Xuexue Zhang
GeoPix: Multi-Modal Large Language Model for Pixel-level Image Understanding in Remote Sensing
Ruizhe Ou, Yuan Hu, Fan Zhang, Jiaxin Chen, Yu Liu
RSRefSeg: Referring Remote Sensing Image Segmentation with Foundation Models
Keyan Chen, Jiafan Zhang, Chenyang Liu, Zhengxia Zou, Zhenwei Shi
Multi-Label Scene Classification in Remote Sensing Benefits from Image Super-Resolution
Ashitha Mudraje, Brian B. Moser, Stanislav Frolov, Andreas Dengel