Paper ID: 2206.09242

GaLeNet: Multimodal Learning for Disaster Prediction, Management and Relief

Rohit Saha, Mengyi Fang, Angeline Yasodhara, Kyryl Truskovskyi, Azin Asgarian, Daniel Homola, Raahil Shah, Frederik Dieleman, Jack Weatheritt, Thomas Rogers

After a natural disaster, such as a hurricane, millions are left in need of emergency assistance. To allocate resources optimally, human planners need to accurately analyze data that can flow in large volumes from several sources. This motivates the development of multimodal machine learning frameworks that can integrate multiple data sources and leverage them efficiently. To date, the research community has mainly focused on unimodal reasoning to provide granular assessments of the damage. Moreover, previous studies mostly rely on post-disaster images, which may take several days to become available. In this work, we propose a multimodal framework (GaLeNet) for assessing the severity of damage by complementing pre-disaster images with weather data and the trajectory of the hurricane. Through extensive experiments on data from two hurricanes, we demonstrate (i) the merits of multimodal approaches compared to unimodal methods, and (ii) the effectiveness of GaLeNet at fusing various modalities. Furthermore, we show that GaLeNet can leverage pre-disaster images in the absence of post-disaster images, preventing substantial delays in decision making.
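
The abstract describes fusing pre-disaster imagery with weather and hurricane-trajectory data to predict damage severity. The sketch below is a minimal late-fusion example of that idea, not the authors' actual GaLeNet architecture: a pretrained image backbone embeds the pre-disaster image, small MLPs embed the weather and trajectory features, and a classifier head consumes the concatenated embeddings. All layer sizes, feature dimensions, and the 4-class output are illustrative assumptions.

import torch
import torch.nn as nn
from torchvision import models

class LateFusionDamageClassifier(nn.Module):
    """Hypothetical late-fusion model for damage-severity prediction."""

    def __init__(self, weather_dim=8, trajectory_dim=6, num_classes=4):
        super().__init__()
        # Pretrained CNN backbone as a feature extractor for pre-disaster imagery.
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        backbone.fc = nn.Identity()  # expose the 512-d pooled features
        self.image_encoder = backbone
        # Small MLP encoders for the tabular modalities (dimensions are assumed).
        self.weather_encoder = nn.Sequential(nn.Linear(weather_dim, 32), nn.ReLU())
        self.trajectory_encoder = nn.Sequential(nn.Linear(trajectory_dim, 32), nn.ReLU())
        # Fusion head over the concatenated embeddings.
        self.head = nn.Sequential(
            nn.Linear(512 + 32 + 32, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, image, weather, trajectory):
        # Concatenate per-modality embeddings, then classify damage severity.
        fused = torch.cat(
            [
                self.image_encoder(image),            # (B, 512)
                self.weather_encoder(weather),        # (B, 32)
                self.trajectory_encoder(trajectory),  # (B, 32)
            ],
            dim=1,
        )
        return self.head(fused)  # (B, num_classes) severity logits

# Usage with random tensors: one 224x224 RGB pre-disaster image per sample.
model = LateFusionDamageClassifier()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 8), torch.randn(2, 6))
print(logits.shape)  # torch.Size([2, 4])

A late-fusion design like this is only one of several ways such modalities could be combined; the paper's experiments compare multimodal fusion against unimodal baselines, which a structure of this kind makes straightforward to ablate by dropping individual encoders.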

Submitted: Jun 18, 2022