DistractNet: a deep convolutional neural network architecture for distracted driver classification

Ismail Nasri, Mohammed Karrouchi, Hajar Snoussi, Kamal Kassmi, Abdelhafid Messaoudi

Abstract


Distracted driving is considered one of the leading causes of traffic accidents. The American National Highway Traffic Safety Administration (NHTSA) defines distracted driving as any activity that takes attention away from driving, such as doing makeup, texting, calling, or reaching behind. Many deaths, physical injuries, and economic losses could be prevented if the distracted driver were alerted in time. This paper proposes a new convolutional neural network (CNN), called DistractNet, to detect drivers' distractions. The proposed model was trained and tested on the State Farm distracted driver detection image dataset available on Kaggle, which contains images of drivers performing the most common activities that lead to distraction while driving, divided into ten classes. We also studied the performance of the proposed CNN model in terms of accuracy, training time, and model size, and compared it with four pre-trained networks (ResNet-50, GoogLeNet, InceptionV3, and AlexNet) using transfer learning techniques. The experimental results show that the developed CNN-based model achieves an average accuracy of more than 99.32% with 93 min of training time and a model size of 7.99 MB. The resulting model classifies driver states into ten classes, reporting the predicted label and a probability for each class.
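The predicted label and per-class probability mentioned above are typically obtained by applying a softmax to the network's ten output logits. A minimal, library-free Python sketch of that final step (the logit values below are illustrative assumptions, not results from the paper; the class names follow the State Farm dataset's usual c0–c9 labeling):

```python
import math

# Class names following the State Farm dataset's c0-c9 scheme (assumed here).
CLASSES = [
    "safe driving", "texting - right", "talking on the phone - right",
    "texting - left", "talking on the phone - left", "operating the radio",
    "drinking", "reaching behind", "hair and makeup", "talking to passenger",
]

def softmax(logits):
    """Convert raw network outputs into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict(logits):
    """Return (predicted label, probability %) from the ten output logits."""
    probs = softmax(logits)
    idx = max(range(len(probs)), key=probs.__getitem__)
    return CLASSES[idx], 100.0 * probs[idx]

# Example with made-up logits where class 1 ("texting - right") dominates.
label, pct = predict([0.1, 5.2, 0.3, 0.2, 0.1, 0.4, 0.1, 0.2, 0.1, 0.3])
print(f"{label}: {pct:.1f}%")
```

In a real pipeline the logits would come from the trained DistractNet (or one of the transfer-learning baselines), but the label-and-probability readout is the same.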

Keywords


convolutional neural network; deep learning; distracted driver; safe driving; transfer learning


DOI: http://doi.org/10.11591/ijai.v11.i2.pp494-503




This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.