Image captioning to aid blind and visually impaired outdoor navigation

Ruvita Faurina, Anisa Jelita, Arie Vatresia, Indra Agustian


Artificial intelligence technology has dramatically improved the quality of services for human needs. One such application is technology that improves services for the blind and visually impaired, particularly technology that helps them understand visual scenes to facilitate navigation in their daily lives. This study developed an image captioning model to aid the blind and visually impaired in outdoor navigation. The model employs the encoder-decoder method, with a convolutional neural network (CNN) for feature extraction and an attention layer as the encoder, and a long short-term memory (LSTM) network as the decoder. ResNet101 and ResNet152 are used in the encoder to extract image features. The extracted features and captions are forwarded to the attention layer and the LSTM network. The attention layer uses the Bahdanau attention mechanism. Model accuracy is evaluated using the bilingual evaluation understudy (BLEU) score, the metric for evaluation of translation with explicit ordering (METEOR), and recall-oriented understudy for gisting evaluation-longest common subsequence (ROUGE-L). ResNet101 performed best, scoring 91.811% on BLEU-4 and 94.0337% on METEOR. The captioning results show that the model is quite successful in producing a simple caption suitable for each image.
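The Bahdanau (additive) attention step described above can be sketched as follows. This is a minimal numpy illustration, not the authors' implementation: the dimensions (a 7x7 ResNet feature grid flattened to 49 regions of 2,048 channels, a 512-unit LSTM state, a 256-unit attention layer) and all variable names are assumptions chosen for the example. At each decoding step, the previous LSTM hidden state is scored against every CNN feature vector, the scores are softmax-normalized, and the weighted sum of features forms the context vector fed to the LSTM.

```python
import numpy as np

def bahdanau_attention(features, hidden, W1, W2, v):
    """Additive attention: score = v^T tanh(W1 @ feature + W2 @ hidden)."""
    # features: (num_regions, feat_dim) CNN feature vectors for image regions
    # hidden:   (hid_dim,) previous LSTM decoder hidden state
    scores = np.tanh(features @ W1 + hidden @ W2) @ v   # (num_regions,)
    alphas = np.exp(scores - scores.max())              # stable softmax
    alphas /= alphas.sum()                              # attention weights
    context = alphas @ features                         # (feat_dim,) weighted sum
    return context, alphas

# Hypothetical dimensions for illustration only
rng = np.random.default_rng(0)
feat_dim, hid_dim, attn_dim, num_regions = 2048, 512, 256, 49
features = rng.normal(size=(num_regions, feat_dim))
hidden = rng.normal(size=hid_dim)
W1 = rng.normal(size=(feat_dim, attn_dim)) * 0.01
W2 = rng.normal(size=(hid_dim, attn_dim)) * 0.01
v = rng.normal(size=attn_dim)

context, alphas = bahdanau_attention(features, hidden, W1, W2, v)
```

In a full model, `context` would be concatenated with the previous word embedding as input to the LSTM cell, and the weights `W1`, `W2`, `v` would be learned jointly with the rest of the network.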


Attention mechanism; Convolutional neural network; Image captioning; Long short-term memory; Visually impaired

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

IAES International Journal of Artificial Intelligence (IJ-AI)
ISSN/e-ISSN 2089-4872/2252-8938 
This journal is published by the Institute of Advanced Engineering and Science (IAES) in collaboration with Intelektual Pustaka Media Utama (IPMU).
