Android-based application for the visually impaired using a deep learning approach

Haslinah Mohd Nasir, Noor Mohd Ariff Brahin, Mai Mariam Mohamed Aminuddin, Mohd Syafiq Mispan, Mohd Faizal Zulkifli

Abstract


Visually impaired people face difficulties in activities related to their environment, social life, and technology, and they struggle to remain independent and safe in their daily routines. This research proposes a deep learning-based visual object recognition model, delivered as an Android application, to assist visually impaired people in their daily lives. The research focuses mainly on recognising money, clothing, and other everyday items to make their lives easier. A convolutional neural network (CNN) based visual recognition model is developed with the TensorFlow object detection application programming interface (API), using a single shot detector (SSD) with a MobileNet V2 model pre-trained on a Google dataset. The visually impaired person captures an image, which is compared against the preloaded image dataset for recognition. A verbal message announcing the name of the recognised object informs the blind user of what was captured. The object recognition achieves high accuracy and works without an internet connection; visually impaired users in particular benefit greatly from this research.
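The pipeline described in the abstract ends by converting detector output into a spoken message. As an illustrative sketch only (not the authors' code), the snippet below shows how SSD-style detector outputs, which typically arrive as parallel arrays of class ids and confidence scores, might be turned into the sentence a text-to-speech engine would read aloud. The label map and function name are hypothetical.

```python
# Illustrative sketch: turning SSD-style detection outputs into the verbal
# message described in the abstract. A TFLite SSD model typically emits
# parallel arrays of class ids and confidence scores; detections above a
# threshold are kept and named in a sentence for a text-to-speech engine.

LABELS = {0: "banknote", 1: "cloth", 2: "cup"}  # hypothetical label map


def detections_to_message(class_ids, scores, threshold=0.5):
    """Return a spoken-style message naming confidently detected objects."""
    names = [LABELS[c] for c, s in zip(class_ids, scores) if s >= threshold]
    if not names:
        return "No object recognized"
    return "Detected: " + ", ".join(names)


if __name__ == "__main__":
    # Two confident detections, one below threshold.
    print(detections_to_message([0, 2, 1], [0.91, 0.42, 0.77]))
    # → Detected: banknote, cloth
```

On the device itself, the resulting string would be passed to Android's TextToSpeech engine; the thresholding step is what keeps low-confidence detections from being announced.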

Keywords


Aided engineering, Android application, Convolutional neural network, Deep learning, Visually impaired



DOI: http://doi.org/10.11591/ijai.v10.i4.pp879-888




This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.