Paper ID: 2404.06637
GeoSynth: Contextually-Aware High-Resolution Satellite Image Synthesis
Srikumar Sastry, Subash Khanal, Aayush Dhakal, Nathan Jacobs
We present GeoSynth, a model for synthesizing satellite images with global style and image-driven layout control. Global style is controlled via textual prompts or geographic location, which specify scene semantics or regional appearance, respectively, and can be used together. We train our model on a large dataset of satellite imagery paired with automatically generated captions and OpenStreetMap data. We evaluate various combinations of control inputs, including different types of layout control. Results demonstrate that our model can generate diverse, high-quality images and exhibits excellent zero-shot generalization. The code and model checkpoints are available at https://github.com/mvrl/GeoSynth.
Submitted: Apr 9, 2024
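Since the abstract describes text- and layout-conditioned satellite image synthesis with released checkpoints, below is a minimal sketch of how such a ControlNet-style pipeline could be invoked with Hugging Face diffusers. The checkpoint identifiers, the base model, and the input file names are assumptions for illustration, not the actual GeoSynth release names; the linked repository documents the real interface, including any geolocation conditioning.

```python
# Hedged sketch: layout- and text-conditioned synthesis with a ControlNet-style
# pipeline in diffusers. Checkpoint IDs and file names are placeholders, not
# the actual GeoSynth artifacts.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Hypothetical identifiers -- replace with the released GeoSynth checkpoints.
CONTROLNET_CKPT = "path/to/geosynth-layout-controlnet"  # layout (e.g., OSM) branch
BASE_MODEL = "runwayml/stable-diffusion-v1-5"           # assumed diffusion base model

controlnet = ControlNetModel.from_pretrained(CONTROLNET_CKPT, torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    BASE_MODEL, controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# The layout control is an image (e.g., a rendered OpenStreetMap tile), while the
# text prompt supplies the global style / scene semantics described in the abstract.
layout = load_image("osm_tile.png")
prompt = "satellite image of a dense residential neighborhood with tree-lined streets"

image = pipe(prompt=prompt, image=layout, num_inference_steps=50).images[0]
image.save("synthesized_satellite.png")
```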