Evaluating text classification with explainable artificial intelligence

Kanwal Zahoor, Narmeen Zakaria Bawany, Tehreem Qamar


Nowadays, artificial intelligence (AI) in general, and machine learning techniques in particular, have been widely employed in automated systems. The increasing complexity of these machine learning based systems has given rise to black-box models that are typically not understandable or explainable by humans. There is a need to understand the logic and reasoning behind these automated decision-making black-box models, as they are involved in day-to-day activities such as driving, facial-recognition identity systems, and online recruitment. Explainable artificial intelligence (XAI) is an evolving field that makes it possible for humans to evaluate machine learning models for their correctness, fairness, and reliability. We extend our previous research work and perform a detailed analysis of the model created for text classification and sentiment analysis using a popular XAI tool named local interpretable model-agnostic explanations (LIME). The results verify that it is essential to evaluate machine learning models using explainable AI tools, as accuracy and other related metrics do not ensure the correctness, fairness, and reliability of the model. We also present a comparison of the explainability and interpretability of various machine learning algorithms using LIME.


Explainable artificial intelligence; LIME; Model interpretability; Text classification

DOI: http://doi.org/10.11591/ijai.v13.i1.pp278-286


This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

IAES International Journal of Artificial Intelligence (IJ-AI)
ISSN: 2089-4872, e-ISSN: 2252-8938
This journal is published by the Institute of Advanced Engineering and Science (IAES) in collaboration with Intelektual Pustaka Media Utama (IPMU).
