Paper ID: 2407.11381

Leveraging Segment Anything Model in Identifying Buildings within Refugee Camps (SAM4Refugee) from Satellite Imagery for Humanitarian Operations

Yunya Gao

Updated building footprints within refugee camps derived from high-resolution satellite imagery can support related humanitarian operations. This study explores the use of the "Segment Anything Model" (SAM) and one of its derivatives, SAM-Adapter, for the semantic segmentation task of extracting buildings from satellite imagery. SAM-Adapter, a lightweight adaptation of SAM, emerges as a powerful tool for this extraction task across diverse refugee camps. Our research shows that SAM-Adapter outperforms classic (e.g., U-Net) and more advanced semantic segmentation models (e.g., Transformer-based models) in scenarios where training data are limited. Furthermore, the study highlights the impact of upscaling techniques on model performance, with methods such as super-resolution (SR) models proving valuable for improving segmentation accuracy. The study also reveals an intriguing phenomenon: the model converges rapidly within the first training epoch when trained on upscaled image data, suggesting opportunities for future research. The code covering each step, from data preparation and model training to inference and the generation of Shapefiles for predicted masks, is available in a GitHub repository to benefit the broader scientific community and humanitarian operations.

Submitted: Jul 16, 2024