Deep learning approaches for Braille detection and classification: comparative analysis
Abstract
This study proposes a hybrid approach to Braille translation that leverages YOLO for object detection together with a range of classification models (ResNet, DenseNet, MobileNetV2, ResNeXt, and a pooling-based CNN) for accurate Braille character classification from images. When the models were compared across several performance metrics, ResNet and DenseNet outperformed the others, achieving high accuracy (0.9487 and 0.9647, respectively) and F1-scores (0.9481 and 0.9666), owing to their deep, densely connected architectures that capture intricate Braille patterns. The CNN with pooling showed balanced results, while MobileNetV2's lightweight design limited its performance on this more complex classification task. ResNeXt's multi-path learning achieved respectable performance but lagged behind ResNet and DenseNet. Future work could extend these results to contracted Braille recognition, adapt them to other Braille codes, and optimize the models for mobile devices to enable real-time Braille detection and translation on smartphones.
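For illustration, the sketch below shows one plausible shape of the hybrid pipeline described above: a YOLO detector localizes Braille cells, and a CNN classifier labels each cropped cell. The specific detector weights (yolov8n.pt), the choice of ResNet-18, and the 26-class output are assumptions for this example, not the authors' exact configuration.

```python
# Minimal sketch of a YOLO-detection + CNN-classification Braille pipeline.
# Model choices and class count are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image
from ultralytics import YOLO  # assumed detector implementation

NUM_CLASSES = 26  # assumption: one class per uncontracted Braille letter

# Detector: any YOLO checkpoint fine-tuned on Braille cells would go here.
detector = YOLO("yolov8n.pt")  # placeholder weights

# Classifier: ResNet-18 with its final layer resized to the Braille classes.
classifier = models.resnet18(weights=None)
classifier.fc = nn.Linear(classifier.fc.in_features, NUM_CLASSES)
classifier.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def translate(image_path: str) -> list[int]:
    """Detect Braille cells, then classify each crop; returns class indices."""
    image = Image.open(image_path).convert("RGB")
    detections = detector(image)[0]
    labels = []
    for box in detections.boxes.xyxy.tolist():  # [x1, y1, x2, y2]
        x1, y1, x2, y2 = map(int, box)
        crop = preprocess(image.crop((x1, y1, x2, y2))).unsqueeze(0)
        with torch.no_grad():
            labels.append(int(classifier(crop).argmax(dim=1)))
    return labels
```

In practice, the same loop would also sort detected cells into reading order and map class indices to text, but those steps are omitted here for brevity.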
Keywords
Braille classification; Braille detection; Convolutional neural network; DenseNet; MobileNetV2; ResNet; ResNeXt
DOI: http://doi.org/10.11591/ijai.v14.i6.pp4652-4660
Copyright (c) 2025 Surekha Janrao, Tavion Fernandes, Ojas Golatkar, Swaraj Dusane

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
IAES International Journal of Artificial Intelligence (IJ-AI)
ISSN/e-ISSN 2089-4872/2252-8938
This journal is published by the Institute of Advanced Engineering and Science (IAES).