Javanese part-of-speech tagging using cross-lingual transfer learning

Gabriel Enrique, Ika Alfina, Evi Yulianti

Abstract


Large, publicly available POS-tagging datasets do not exist for every language. One such language is Javanese, a local language of Indonesia that is considered a low-resource language. This research examines the effectiveness of cross-lingual transfer learning for Javanese POS tagging by fine-tuning state-of-the-art Transformer-based models (IndoBERT, mBERT, and XLM-RoBERTa) on various higher-resource source languages (Indonesian, English, Uyghur, Latin, and Hungarian), and then fine-tuning them again on Javanese as the target language. We found that models trained with cross-lingual transfer learning improve accuracy over models trained without it: by 14.3%–15.3% compared with LSTM-based models, and by 0.21%–3.95% compared with Transformer-based models. Our results show that the most accurate Javanese POS tagger is XLM-RoBERTa fine-tuned in two stages (first on Indonesian as the source language, then on Javanese as the target language), achieving an accuracy of 87.65%.
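The two-stage schedule described in the abstract can be illustrated with a toy experiment. The sketch below is NOT the paper's setup (the paper fine-tunes IndoBERT, mBERT, and XLM-RoBERTa on real treebanks); it uses a simple softmax tagger over synthetic word vectors, purely to show the transfer-learning pattern: train first on abundant "source language" data, then continue training from those weights on a small "target language" set, and compare against training on the target set alone. All data, dimensions, and hyperparameters here are invented for the illustration.

```python
# Schematic two-stage (cross-lingual) fine-tuning for POS tagging.
# A toy softmax classifier stands in for the Transformer taggers in the
# paper; the point is the training schedule, not the model.
import numpy as np

rng = np.random.default_rng(0)
DIM, TAGS = 16, 4                        # toy embedding size and tagset size
W_TRUE = rng.normal(size=(DIM, TAGS))    # tag structure shared across "languages"

def make_data(n, shift=0.0):
    """Synthetic (embedding, tag) pairs; `shift` mimics a language change."""
    X = rng.normal(size=(n, DIM)) + shift
    y = np.argmax(X @ W_TRUE, axis=1)    # gold tags from the shared structure
    return X, y

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def train(X, y, W=None, epochs=200, lr=0.5):
    """Full-batch gradient descent; pass W to continue from earlier weights."""
    if W is None:
        W = np.zeros((DIM, TAGS))
    Y = np.eye(TAGS)[y]                  # one-hot gold tags
    for _ in range(epochs):
        P = softmax(X @ W)
        W -= lr * (X.T @ (P - Y)) / len(y)
    return W

def accuracy(W, X, y):
    return float(np.mean(np.argmax(X @ W, axis=1) == y))

# Stage 1 data: plenty of "source language" examples (Indonesian in the paper).
Xs, ys = make_data(2000)
# "Target language" (Javanese in the paper): few examples, slight shift.
Xt, yt = make_data(20, shift=0.2)
Xtest, ytest = make_data(1000, shift=0.2)

W_scratch  = train(Xt, yt)                      # target only, no transfer
W_source   = train(Xs, ys)                      # stage 1: fine-tune on source
W_transfer = train(Xt, yt, W=W_source.copy())   # stage 2: fine-tune on target

acc_scratch  = accuracy(W_scratch, Xtest, ytest)
acc_transfer = accuracy(W_transfer, Xtest, ytest)
print(f"target-only accuracy: {acc_scratch:.3f}")
print(f"two-stage accuracy:   {acc_transfer:.3f}")
```

Because the two synthetic "languages" share the same underlying tag structure, the stage-1 weights land near a good solution, and the brief stage-2 run adapts them to the shifted target distribution; the target-only baseline has to estimate everything from 20 examples. This mirrors, in miniature, why the paper's two-stage fine-tuning outperforms fine-tuning on Javanese alone.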

Keywords


Cross-lingual transfer learning; Deep learning; Low-resource language; Part-of-speech tagging; Transformer


DOI: http://doi.org/10.11591/ijai.v13.i3.pp3498-3509


This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

IAES International Journal of Artificial Intelligence (IJ-AI)
ISSN 2089-4872, e-ISSN 2252-8938
This journal is published by the Institute of Advanced Engineering and Science (IAES) in collaboration with Intelektual Pustaka Media Utama (IPMU).