Explainable social media disaster image classification using a lightweight attention-based deep learning approach

Rashmi Kangokar Taranath, Geeta Chidanandappa Mara

Abstract


In recent years, the rapid dissemination of social media content during natural and man-made disasters has created a need for automated and accurate disaster image classification systems. This paper proposes the lightweight explainable attention-based disaster network (LEAD-Net), a deep learning (DL) model designed to classify disaster-related images with high accuracy and interpretability. The system integrates an EfficientNet-B0 backbone enhanced with squeeze-and-excitation (SE) attention modules and a lightweight neural architecture search (NAS-lite) strategy for tuning the classifier head and training hyperparameters. The model was evaluated on two benchmark datasets, the comprehensive disaster dataset (CDD) and the damage multimodal dataset (DMD), achieving 96% and 87% accuracy, respectively, and outperforming several established convolutional neural network (CNN) baselines. To ensure transparency, gradient-weighted class activation mapping (Grad-CAM) was employed to generate visual explanations of the model’s decisions, confirming its focus on semantically relevant image regions.
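The abstract does not give implementation details of the SE attention modules it references, but the standard squeeze-and-excitation mechanism can be illustrated with a minimal NumPy sketch: global-average-pool each channel ("squeeze"), pass the result through a small bottleneck MLP with a sigmoid gate ("excitation"), and rescale the channels. All names, shapes, and the reduction ratio below are hypothetical, not taken from the paper.

```python
import numpy as np

def squeeze_excite(feature_map, w1, w2):
    """Squeeze-and-excitation channel attention on a (C, H, W) feature map.

    w1: (C // r, C) reduction weights; w2: (C, C // r) expansion weights,
    where r is the channel-reduction ratio of the bottleneck.
    """
    # Squeeze: global average pooling over the spatial dims -> (C,)
    z = feature_map.mean(axis=(1, 2))
    # Excitation: bottleneck MLP, ReLU then sigmoid -> per-channel gate in (0, 1)
    s = np.maximum(w1 @ z, 0.0)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))
    # Scale: reweight each channel of the original feature map
    return feature_map * gate[:, None, None]

# Toy example: 8 channels, 5x5 spatial map, reduction ratio r = 4
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 5, 5))
w1 = rng.standard_normal((2, 8)) * 0.1
w2 = rng.standard_normal((8, 2)) * 0.1
y = squeeze_excite(x, w1, w2)
print(y.shape)  # same shape as the input: (8, 5, 5)
```

Because the sigmoid gate lies in (0, 1), the block can only attenuate channels, never amplify them; in LEAD-Net such gating would let the network emphasize channels responding to damage-relevant regions, consistent with the Grad-CAM findings reported in the abstract.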

Keywords


Attention mechanism; Computational-efficiency; Deep learning; Disaster image classification; Explainable AI; Real-time deployment; Transfer learning


DOI: http://doi.org/10.11591/ijai.v15.i2.pp1464-1472


Copyright (c) 2026 Rashmi Kangokar Taranath, Geeta Chidanandappa Mara

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

IAES International Journal of Artificial Intelligence (IJ-AI)
ISSN/e-ISSN 2089-4872/2252-8938
This journal is published by the Institute of Advanced Engineering and Science (IAES).
