Neural machine translation (NMT) is an approach to machine translation that uses an artificial neural network to predict the likelihood of a sequence of words, typically modeling entire sentences in a single integrated model.


Properties

NMT models require only a fraction of the memory needed by traditional statistical machine translation (SMT) models. Furthermore, unlike conventional translation systems, all parts of the neural translation model are trained jointly (end-to-end) to maximize translation performance.


History

Deep learning applications appeared first in speech recognition in the 1990s. The first scientific paper on using neural networks in machine translation appeared in 2014. That year, Bahdanau et al. and Sutskever et al. proposed end-to-end neural network translation models and formally used the term "neural machine translation" (Bahdanau et al. 2015; Sutskever et al. 2014). The first large-scale NMT system was launched by Baidu in 2015. The next year Google launched an NMT system too, followed by others (Wang et al. 2021).
Many advances followed in the next few years: large-vocabulary NMT, application to image captioning, subword NMT, multilingual NMT, multi-source NMT, character-level decoding, zero-resource NMT, Google's NMT, fully character-level NMT, and, in 2017, zero-shot NMT. In 2015 an NMT system appeared for the first time in a public machine translation competition (OpenMT'15). WMT'15 also had an NMT contender for the first time; the following year, NMT systems already made up 90% of its winners. Since 2017, neural machine translation has been used by the European Patent Office to make information from the global patent system instantly accessible. The system, developed in collaboration with Google, is paired with 31 languages, and as of 2018 it has translated over nine million documents.


Workings

NMT departs from phrase-based statistical approaches that use separately engineered subcomponents. It is not a drastic step beyond what has traditionally been done in statistical machine translation (SMT); its main departure is the use of vector representations ("embeddings", "continuous space representations") for words and internal states. The structure of the models is simpler than that of phrase-based models: there is no separate language model, translation model, and reordering model, just a single sequence model that predicts one word at a time. However, this sequence prediction is conditioned on the entire source sentence and the entire already-produced target sequence.
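In standard notation, for a source sentence x and target words y_1, ..., y_T, this means the model defines the translation probability as a product of per-word predictions:

    p(y_1, \ldots, y_T \mid x) = \prod_{t=1}^{T} p(y_t \mid y_1, \ldots, y_{t-1}, x)

Training maximizes this probability over a parallel corpus, and decoding searches for a target sequence that scores highly under it.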
NMT models use deep learning and representation learning. Word sequence modeling was at first typically done with a recurrent neural network (RNN). A bidirectional RNN, known as an ''encoder'', is used by the neural network to encode a source sentence for a second RNN, known as a ''decoder'', that is used to predict words in the target language. Recurrent neural networks face difficulties in encoding long inputs into a single vector. This can be compensated for by an attention mechanism, which allows the decoder to focus on different parts of the input while generating each word of the output (see the sketch after this paragraph). Coverage models further address a weakness of such attention mechanisms: because past alignment information is ignored, phrases can be translated repeatedly (over-translation) or missed entirely (under-translation). Convolutional neural networks (convnets) are in principle somewhat better for long continuous sequences, but were initially not used due to several weaknesses that were successfully compensated for in 2017 by attention mechanisms.
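The following is a minimal sketch in Python/NumPy of a single attention step during decoding. It is an illustration of the idea under assumptions, not any system's actual implementation: the toy dimensions are invented, and it scores source positions with a plain dot product, whereas Bahdanau et al. use a small feed-forward network.

    # Minimal sketch (assumed details) of one attention step in an
    # RNN encoder-decoder, using NumPy only.
    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def attention_step(encoder_states, decoder_state):
        """encoder_states: (src_len, hidden) -- one vector per source word.
        decoder_state: (hidden,) -- the decoder's current hidden state.
        Returns a context vector plus the attention distribution."""
        # Score each source position against the decoder state.
        # Dot-product scoring is an assumption here; Bahdanau et al.
        # use a small feed-forward network instead.
        scores = encoder_states @ decoder_state    # (src_len,)
        weights = softmax(scores)                  # sums to 1 over source words
        context = weights @ encoder_states         # weighted sum: (hidden,)
        return context, weights

    # Toy example: a 5-word source sentence encoded into 8-dimensional states.
    rng = np.random.default_rng(0)
    context, weights = attention_step(rng.normal(size=(5, 8)),
                                      rng.normal(size=(8,)))
    print(weights.round(3))  # which source words the decoder attends to

Because the weights form a probability distribution over source words, each decoding step can draw on a different part of the input instead of a single fixed-size summary vector.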
The Transformer, an attention-based model, remains the dominant architecture for several language pairs. Its self-attention layers learn the dependencies between words in a sequence by examining links between all the words in the paired sequences and by directly modeling those relationships. This is a simpler approach than the gating mechanism that RNNs employ, and its simplicity has enabled researchers to develop high-quality translation models even in low-resource settings.
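The following is a minimal Python/NumPy sketch of scaled dot-product self-attention for a single head. The projection matrices are random stand-ins for parameters that a real Transformer learns, and multi-head attention, masking, and layer normalization are omitted; it is a sketch of the core computation, not a full Transformer layer.

    # Minimal sketch of scaled dot-product self-attention (one head),
    # using NumPy only. Random matrices stand in for learned parameters.
    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def self_attention(X, W_q, W_k, W_v):
        """X: (seq_len, d_model) word vectors for one sequence."""
        Q, K, V = X @ W_q, X @ W_k, X @ W_v
        # Every position is scored against every other position at once,
        # so dependencies between distant words are modeled directly
        # rather than carried step by step through a recurrent state.
        scores = Q @ K.T / np.sqrt(K.shape[-1])    # (seq_len, seq_len)
        return softmax(scores) @ V                 # (seq_len, d_v)

    rng = np.random.default_rng(0)
    seq_len, d_model, d_k = 6, 16, 8
    X = rng.normal(size=(seq_len, d_model))
    out = self_attention(X,
                         rng.normal(size=(d_model, d_k)),
                         rng.normal(size=(d_model, d_k)),
                         rng.normal(size=(d_model, d_k)))
    print(out.shape)  # (6, 8): a new representation for each position
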



References

Bahdanau D, Cho K, Bengio Y. Neural machine translation by jointly learning to align and translate. In: Proceedings of the 3rd International Conference on Learning Representations; 2015 May 7–9; San Diego, CA, USA; 2015.

Sutskever I, Vinyals O, Le QV. Sequence to sequence learning with neural networks. In: Proceedings of the 27th International Conference on Neural Information Processing Systems; 2014 Dec 8–13; Montreal, QC, Canada; 2014.

Wang H, Wu H, He Z, Huang L, Church KW. Progress in Machine Translation. Engineering (2021). doi: https://doi.org/10.1016/j.eng.2021.03.023