Fine-tuning multilingual transformers for Hinglish sentiment analysis: a comparative evaluation with BiLSTM

Jyoti S. Verma, Jaimin N. Undavia

Abstract


The growing trend of code-mixing in language, exemplified by Hinglish, poses a serious challenge to conventional sentiment analysis tools. This research contributes a fine-tuned multilingual transformer model built specifically for classifying the sentiment of Hinglish customer reviews. Starting from the pre-trained bert-base-multilingual-cased architecture, the model is fine-tuned on a synthetically prepared, balanced dataset simulating positive, negative, and neutral sentiments. The training process incorporates focal loss to address class imbalance and mixed-precision training to improve computational efficiency. Experimental results suggest that the proposed method captures the fine-grained linguistic patterns of code-mixed text, improving sentiment classification accuracy. The results show promising potential for enhancing customer feedback analysis in e-commerce, social media monitoring, and customer support, where understanding the sentiment behind code-mixed reviews is crucial.
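The focal loss used to counter class imbalance can be sketched as follows. This is a minimal, generic NumPy illustration of the standard formulation, FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t), not the authors' implementation; the array shapes and the scalar `alpha` are assumptions made for the sketch.

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0, alpha=1.0):
    """Focal loss for multi-class classification.

    probs:   (N, C) predicted class probabilities (rows sum to 1)
    targets: (N,) integer class labels
    gamma:   focusing parameter; gamma=0 recovers plain cross-entropy
    alpha:   balancing weight (scalar here; per-class weights are common too)
    """
    # Probability assigned to the true class of each example
    p_t = probs[np.arange(len(targets)), targets]
    # (1 - p_t)^gamma down-weights well-classified (easy) examples,
    # so training focuses on hard, often minority-class, examples
    loss = -alpha * (1.0 - p_t) ** gamma * np.log(p_t)
    return loss.mean()
```

With `gamma=0` the function reduces to mean cross-entropy; raising `gamma` shrinks the contribution of confident predictions, which is why the loss is useful when one sentiment class dominates the training data.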

Keywords


Boosting algorithm; Hindi-English rating; Natural language processing; Sentiment analysis; Stacking ensemble

DOI: http://doi.org/10.11591/ijai.v14.i6.pp4684-4693



Copyright (c) 2025 Jyoti S. Verma, Jaimin N. Undavia

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

IAES International Journal of Artificial Intelligence (IJ-AI)
ISSN/e-ISSN 2089-4872/2252-8938 
This journal is published by the Institute of Advanced Engineering and Science (IAES).
