SMU Data Science Review
Abstract
Neural Machine Translation (NMT) leverages one or more trained neural networks for the translation of phrases. Sutskever et al. introduced a sequence-to-sequence encoder-decoder model which became the standard for NMT-based systems. Attention mechanisms were later introduced to address the translation of long sentences and to improve overall accuracy. In this paper, we propose two improvements to the encoder-decoder based NMT approach. First, whereas most translation models are trained as one model per language pair, we introduce a neutral/universal model representation that can predict more than one target language depending on the source and a provided target. Second, we improve the attention model by adding an overall learning vector to the attention score computation. With these two improvements, we demonstrate the feasibility of machine translation using a single universal model for more than one language.
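To make the second contribution concrete, the sketch below shows an additive (Bahdanau-style) attention module augmented with an extra learned vector that is added into every attention score before the softmax. This is only an illustration under assumptions: the class name EffectiveAttention, the parameter global_vec, and the exact placement of the learned vector inside the score are hypothetical and may differ from the formulation in the full paper.

```python
import torch
import torch.nn as nn


class EffectiveAttention(nn.Module):
    """Additive attention with an extra learned vector in the score.

    Hypothetical sketch: a shared, trainable vector (global_vec) is added
    to the query/key combination before scoring, as one possible reading
    of "an overall learning vector to improve the attention score".
    """

    def __init__(self, hidden_dim):
        super().__init__()
        self.w_query = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.w_keys = nn.Linear(hidden_dim, hidden_dim, bias=False)
        # Overall learning vector shared across all source positions (assumption).
        self.global_vec = nn.Parameter(torch.zeros(hidden_dim))
        self.v = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, decoder_state, encoder_outputs):
        # decoder_state: (batch, hidden); encoder_outputs: (batch, src_len, hidden)
        query = self.w_query(decoder_state).unsqueeze(1)   # (batch, 1, hidden)
        keys = self.w_keys(encoder_outputs)                # (batch, src_len, hidden)
        scores = self.v(torch.tanh(query + keys + self.global_vec)).squeeze(-1)
        weights = torch.softmax(scores, dim=-1)            # (batch, src_len)
        # Context vector: attention-weighted sum of encoder outputs.
        context = torch.bmm(weights.unsqueeze(1), encoder_outputs).squeeze(1)
        return context, weights


# Minimal usage example with random tensors.
attn = EffectiveAttention(hidden_dim=8)
decoder_state = torch.randn(2, 8)
encoder_outputs = torch.randn(2, 5, 8)
context, weights = attn(decoder_state, encoder_outputs)
print(context.shape, weights.shape)  # torch.Size([2, 8]) torch.Size([2, 5])
```

In this reading, the learned vector contributes a global, position-independent term to every score, so the network can shift attention behavior as a whole during training; the paper's actual mechanism should be consulted for the precise definition.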
Recommended Citation
Yi, Joshua; Mylapore, Satish; Paul, Ryan; and Slater, Robert (2020) "Universal Vector Neural Machine Translation with Effective Attention," SMU Data Science Review: Vol. 3: No. 1, Article 10.
Available at: https://scholar.smu.edu/datasciencereview/vol3/iss1/10
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 License.