Forward feature selection for toxic speech classification using support vector machine and random forest

Agustinus Bimo Gumelar, Astri Yogatama, Derry Pramono Adi, Frismanda Frismanda, Indar Sugiarto

Abstract


This study describes methods for eliminating irrelevant features from speech data to enhance toxic speech classification accuracy and reduce the complexity of the learning process. A wrapper method is introduced that applies the forward selection technique with support vector machine (SVM) and random forest (RF) classifier algorithms. Eight main speech features were extracted, each with nine statistical sub-features, yielding 72 features from the extraction process. The classifier algorithms were implemented in Python on 2,000 toxic speech samples collected from YouTube, the world's largest video-sharing platform. The experiments show that classification performance with both SVM and RF improves to an excellent extent after feature selection: the forward feature selection method reduced the original 72-feature set to 10 speech features, achieving 99.5% classification accuracy with RF and 99.2% with SVM.
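The wrapper approach described above can be sketched with scikit-learn's `SequentialFeatureSelector`, which greedily adds one feature at a time based on cross-validated classifier accuracy. This is a minimal illustration, not the authors' implementation: the data below is synthetic (the paper's YouTube speech corpus is not available here), and the dimensions merely mirror those in the abstract (2,000 samples, 72 features, 10 selected).

```python
# Hedged sketch of wrapper-based forward feature selection with a random
# forest, assuming scikit-learn as a stand-in for the paper's own code.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector

# Synthetic stand-in for the speech feature matrix: 72 columns, mirroring
# the paper's 8 main speech features x 9 statistical sub-features.
X, y = make_classification(n_samples=400, n_features=72,
                           n_informative=10, random_state=0)

rf = RandomForestClassifier(n_estimators=20, random_state=0)

# Forward selection: at each step, add the candidate feature that most
# improves cross-validated accuracy, stopping at 10 kept features
# (the subset size reported in the abstract).
selector = SequentialFeatureSelector(rf, n_features_to_select=10,
                                     direction="forward", cv=2, n_jobs=-1)
selector.fit(X, y)

selected = selector.get_support(indices=True)
print("selected feature indices:", list(selected))
```

Swapping `rf` for `sklearn.svm.SVC()` reproduces the SVM variant of the same wrapper; the selected subset can then be passed to either classifier for final training.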

Keywords


Feature selection; Forward selection; Random forest; Support vector machine; Toxic speech



DOI: http://doi.org/10.11591/ijai.v11.i2.pp717-726



This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

IAES International Journal of Artificial Intelligence (IJ-AI)
ISSN/e-ISSN 2089-4872/2252-8938 
This journal is published by the Institute of Advanced Engineering and Science (IAES) in collaboration with Intelektual Pustaka Media Utama (IPMU).
