Transliteration and translation of Hindi language using integrated domain-based Auto-encoder
Abstract
The main objective of translation is to convey the meaning of words from one language to another, whereas transliteration does not carry any contextual meaning between languages; it operates only on the individual letters that make up each word. In this paper, an integrated deep neural network transliteration and translation (NNTT) model based on an autoencoder is developed. The model is divided into a transliteration component and a translation component. The transliteration component, which converts Hindi text from one script to another, uses a sequence-to-sequence model with an attention mechanism and is evaluated on the Dakshina dataset. The translation component, which translates text from one language to another, likewise uses a sequence-to-sequence model with an attention mechanism, similar to the one used in the transliteration model, and is evaluated on the Workshop on Asian Translation (WAT) 2021 dataset. The proposed NNTT model merges the in-domain and out-of-domain frameworks into a single training framework so that information is transferred between the domains. The evaluation results show that the proposed model performs effectively for the Hindi language in comparison with existing systems.
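To make the sequence-to-sequence-with-attention setup described above concrete, the sketch below shows a minimal character-level encoder-decoder with dot-product attention, of the kind used for Hindi transliteration on the Dakshina dataset. This is an illustrative assumption, not the authors' reported configuration: the layer sizes, vocabulary sizes, GRU cells, and the Luong-style attention variant are all placeholders chosen for brevity.

```python
# Minimal character-level seq2seq with attention (PyTorch).
# All hyperparameters and vocabulary sizes are illustrative assumptions.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    def __init__(self, src_vocab, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(src_vocab, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)

    def forward(self, src):
        # src: (batch, src_len) of source-script character ids
        outputs, hidden = self.rnn(self.embed(src))
        return outputs, hidden  # outputs: (batch, src_len, hid_dim)


class AttnDecoder(nn.Module):
    def __init__(self, tgt_vocab, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(tgt_vocab, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim * 2, tgt_vocab)

    def forward(self, tgt_step, hidden, enc_outputs):
        # tgt_step: (batch, 1) previous target character id
        dec_out, hidden = self.rnn(self.embed(tgt_step), hidden)
        # Dot-product attention over encoder states
        scores = torch.bmm(dec_out, enc_outputs.transpose(1, 2))  # (batch, 1, src_len)
        weights = torch.softmax(scores, dim=-1)
        context = torch.bmm(weights, enc_outputs)                 # (batch, 1, hid_dim)
        logits = self.out(torch.cat([dec_out, context], dim=-1))  # (batch, 1, tgt_vocab)
        return logits, hidden


if __name__ == "__main__":
    # Toy forward pass with made-up vocabulary sizes (e.g. Latin input,
    # Devanagari output); real sizes depend on the Dakshina preprocessing.
    enc, dec = Encoder(src_vocab=30), AttnDecoder(tgt_vocab=70)
    src = torch.randint(0, 30, (4, 12))          # batch of 4 romanised words
    enc_out, hidden = enc(src)
    step = torch.zeros(4, 1, dtype=torch.long)   # <sos> token id assumed to be 0
    logits, hidden = dec(step, hidden, enc_out)
    print(logits.shape)                          # torch.Size([4, 1, 70])
```

The translation component described in the abstract would follow the same encoder-decoder pattern at the word or subword level on the WAT 2021 parallel data, with the integrated NNTT training then sharing information between the two domains.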
Keywords
Dakshina dataset; Neural network transliteration and translation; Sequence-to-sequence; Translation; Transliteration; Workshop on Asian Translation 2021
DOI: http://doi.org/10.11591/ijai.v13.i4.pp4906-4914
Copyright (c) 2024 Institute of Advanced Engineering and Science
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
IAES International Journal of Artificial Intelligence (IJ-AI)
ISSN/e-ISSN 2089-4872/2252-8938