SMU Data Science Review

Abstract

Neural Machine Translation (NMT) leverages one or more trained neural networks to translate phrases. Sutskever introduced a sequence-to-sequence encoder-decoder model which became the standard for NMT-based systems. Attention mechanisms were later introduced to address issues with the translation of long sentences and to improve overall accuracy. In this paper, we propose two improvements to the encoder-decoder NMT approach. Most translation models are trained as one model for one translation direction. First, we introduce a neutral/universal model representation that can predict more than one language depending on the source and a provided target. Second, we improve an attention model by adding an overall learning vector to the attention score. With these two improvements, we demonstrate the feasibility of machine translation using a universal model for more than one language.
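The abstract does not spell out how the overall learning vector enters the attention computation, so the following is only a minimal sketch of one plausible reading: an additive (Bahdanau-style) attention score with an extra learned vector shared across all source positions. All names (W_a, U_a, v_a, g), shapes, and the placement of the extra term are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (assumed formulation, not the paper's code): additive attention
# with a hypothetical shared "overall learning vector" g added before the tanh.
import numpy as np

rng = np.random.default_rng(0)
hidden = 8   # assumed hidden size
src_len = 5  # assumed source sentence length

# Encoder states (one per source token) and the current decoder state.
enc_states = rng.standard_normal((src_len, hidden))
dec_state = rng.standard_normal(hidden)

# Standard additive-attention parameters ...
W_a = rng.standard_normal((hidden, hidden))
U_a = rng.standard_normal((hidden, hidden))
v_a = rng.standard_normal(hidden)
# ... plus the assumed overall learning vector shared across positions.
g = rng.standard_normal(hidden)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Score each source position; "+ g" is the assumed overall-vector term.
scores = np.array([
    v_a @ np.tanh(W_a @ dec_state + U_a @ h + g) for h in enc_states
])
weights = softmax(scores)        # attention distribution over source tokens
context = weights @ enc_states   # context vector fed to the decoder
print(weights.round(3), context.shape)
```

In a universal multi-language setup such as the one the abstract describes, the provided target could, for example, be supplied as a target-language token prepended to the source sequence, though the abstract does not confirm that mechanism.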

Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial 4.0 License
