<s>
Neural	B-General_Concept
machine	I-General_Concept
translation	I-General_Concept
(	O
NMT	O
)	O
is	O
an	O
approach	O
to	O
machine	B-Application
translation	I-Application
that	O
uses	O
an	O
artificial	B-Architecture
neural	I-Architecture
network	I-Architecture
to	O
predict	O
the	O
likelihood	O
of	O
a	O
sequence	O
of	O
words	O
,	O
typically	O
modeling	O
entire	O
sentences	O
in	O
a	O
single	O
integrated	O
model	O
.	O
</s>
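A standard way to make this precise (the notation below is assumed here, not given in the text) is to factor the sentence probability autoregressively:

```latex
% Sentence likelihood under a single integrated NMT model:
% x = source sentence, y = (y_1, ..., y_T) = target word sequence,
% \theta = the parameters of the neural network.
p(y \mid x;\, \theta) \;=\; \prod_{t=1}^{T} p\left(y_t \mid y_{<t},\, x;\, \theta\right)
```

Each factor is the network's predicted likelihood of the next word given the source sentence and the words produced so far, so the entire sentence is scored by one model.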
<s>
NMT	O
models	O
require	O
only	O
a	O
fraction	O
of	O
the	O
memory	O
needed	O
by	O
traditional	O
statistical	B-General_Concept
machine	I-General_Concept
translation	I-General_Concept
(	O
SMT	O
)	O
models	O
.	O
</s>
<s>
Furthermore	O
,	O
unlike	O
conventional	O
translation	B-Application
systems	I-Application
,	O
all	O
parts	O
of	O
the	O
neural	O
translation	O
model	O
are	O
trained	O
jointly	O
(	O
end-to-end	O
)	O
to	O
maximize	O
the	O
translation	O
performance	O
.	O
</s>
<s>
Deep	B-Algorithm
learning	I-Algorithm
applications	O
appeared	O
first	O
in	O
speech	B-Application
recognition	I-Application
in	O
the	O
1990s	O
.	O
</s>
<s>
The	O
first	O
scientific	O
paper	O
on	O
using	O
neural	B-Architecture
networks	I-Architecture
in	O
machine	B-Application
translation	I-Application
appeared	O
in	O
2014	O
.	O
</s>
<s>
Its	O
authors	O
proposed	O
end-to-end	O
neural	B-Architecture
network	I-Architecture
translation	O
models	O
and	O
formally	O
used	O
the	O
term	O
"	O
neural	B-General_Concept
machine	I-General_Concept
translation	I-General_Concept
"	O
.	O
</s>
<s>
The	O
following	O
year	O
,	O
Google	B-Application
launched	O
an	O
NMT	O
system	O
too	O
,	O
followed	O
by	O
others	O
(	O
Large-vocabulary	O
NMT	O
,	O
application	O
to	O
Image	O
captioning	O
,	O
Subword-NMT	O
,	O
Multilingual	O
NMT	O
,	O
Multi-Source	O
NMT	O
,	O
Character-dec	O
NMT	O
,	O
Zero-Resource	O
NMT	O
,	O
Google	B-Application
,	O
Fully	O
Character-NMT	O
,	O
Zero-Shot	O
NMT	O
in	O
2017	O
)	O
.	O
</s>
<s>
In	O
2015	O
there	O
was	O
the	O
first	O
appearance	O
of	O
an	O
NMT	O
system	O
in	O
a	O
public	O
machine	B-Application
translation	I-Application
competition	O
(	O
OpenMT'15	O
)	O
.	O
</s>
<s>
Since	O
2017	O
,	O
neural	B-General_Concept
machine	I-General_Concept
translation	I-General_Concept
has	O
been	O
used	O
by	O
the	O
European	O
Patent	O
Office	O
to	O
make	O
information	O
from	O
the	O
global	O
patent	O
system	O
instantly	O
accessible	O
.	O
</s>
<s>
The	O
system	O
,	O
developed	O
in	O
collaboration	O
with	O
Google	B-Application
,	O
is	O
paired	O
with	O
31	O
languages	O
,	O
and	O
as	O
of	O
2018	O
,	O
it	O
has	O
translated	O
over	O
nine	O
million	O
documents	O
.	O
</s>
<s>
NMT	O
departs	O
from	O
phrase-based	O
statistical	B-General_Concept
approaches	O
that	O
use	O
separately	O
engineered	O
subcomponents	O
.	O
</s>
<s>
Neural	B-General_Concept
machine	I-General_Concept
translation	I-General_Concept
(	O
NMT	O
)	O
is	O
not	O
a	O
drastic	O
step	O
beyond	O
what	O
has	O
been	O
traditionally	O
done	O
in	O
statistical	B-General_Concept
machine	I-General_Concept
translation	I-General_Concept
(	O
SMT	O
)	O
.	O
</s>
<s>
NMT	O
models	O
use	O
deep	B-Algorithm
learning	I-Algorithm
and	O
representation	B-General_Concept
learning	I-General_Concept
.	O
</s>
<s>
Word	O
sequence	O
modeling	O
was	O
at	O
first	O
typically	O
done	O
using	O
a	O
recurrent	B-Algorithm
neural	I-Algorithm
network	I-Algorithm
(	O
RNN	O
)	O
.	O
</s>
<s>
A	O
bidirectional	O
recurrent	B-Algorithm
neural	I-Algorithm
network	I-Algorithm
,	O
known	O
as	O
an	O
encoder	O
,	O
is	O
used	O
by	O
the	O
neural	B-Architecture
network	I-Architecture
to	O
encode	O
a	O
source	O
sentence	O
for	O
a	O
second	O
RNN	O
,	O
known	O
as	O
a	O
decoder	O
,	O
that	O
is	O
used	O
to	O
predict	O
words	O
in	O
the	O
target	O
language	O
.	O
</s>
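As an illustration of this encoder-decoder setup, here is a minimal sketch in PyTorch; the module name Seq2SeqRNN, the choice of GRU cells, and all hyperparameters are illustrative assumptions, not details from the text:

```python
import torch
import torch.nn as nn

class Seq2SeqRNN(nn.Module):
    """Minimal encoder-decoder sketch: a bidirectional GRU encodes the
    source sentence; a second GRU decodes target words.
    Hyperparameters are assumptions for illustration only."""

    def __init__(self, src_vocab, tgt_vocab, emb=64, hidden=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        # Encoder: bidirectional RNN over the source sentence.
        self.encoder = nn.GRU(emb, hidden, bidirectional=True, batch_first=True)
        # Decoder: unidirectional RNN predicting target-language words.
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        # Merge the encoder's two directions into one decoder start state.
        self.bridge = nn.Linear(2 * hidden, hidden)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        _, h = self.encoder(self.src_emb(src_ids))  # h: (2, batch, hidden)
        h0 = torch.tanh(self.bridge(torch.cat([h[0], h[1]], dim=-1))).unsqueeze(0)
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), h0)
        return self.out(dec_out)  # logits over the target vocabulary

# Toy usage: batch of 2 sentences, source length 7, target length 5.
model = Seq2SeqRNN(src_vocab=1000, tgt_vocab=1200)
src = torch.randint(0, 1000, (2, 7))
tgt = torch.randint(0, 1200, (2, 5))
print(model(src, tgt).shape)  # torch.Size([2, 5, 1200])
```

In this sketch the decoder sees the source sentence only through the single bridged state vector, which is exactly the bottleneck the next sentence describes.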
<s>
Recurrent	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
face	O
difficulties	O
in	O
encoding	O
long	O
inputs	O
into	O
a	O
single	O
vector	O
.	O
</s>
<s>
Convolutional	O
Neural	B-Architecture
Networks	I-Architecture
(	O
Convnets	O
)	O
are	O
in	O
principle	O
somewhat	O
better	O
for	O
long	O
continuous	O
sequences	O
,	O
but	O
were	O
initially	O
not	O
used	O
due	O
to	O
several	O
weaknesses	O
.	O
</s>
<s>
The	O
Transformer	B-Algorithm
,	O
an	O
attention-based	O
model	O
,	O
remains	O
the	O
dominant	O
architecture	O
for	O
several	O
language	O
pairs	O
.	O
</s>
<s>
The	O
self-attention	O
layers	O
of	O
the	O
Transformer	B-Algorithm
model	I-Algorithm
learn	O
the	O
dependencies	O
between	O
words	O
in	O
a	O
sequence	O
by	O
examining	O
links	O
between	O
all	O
the	O
words	O
in	O
the	O
paired	O
sequences	O
and	O
by	O
directly	O
modeling	O
those	O
relationships	O
.	O
</s>
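A minimal NumPy sketch of single-head scaled dot-product self-attention makes this concrete; the function names and dimensions below are illustrative assumptions, not the exact formulation from any particular implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.
    X          : (seq_len, d_model) token representations
    Wq, Wk, Wv : (d_model, d_k) projection matrices (toy sizes)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Every token is scored against every other token, so pairwise
    # dependencies across the whole sequence are modeled directly.
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (seq_len, seq_len)
    weights = softmax(scores, axis=-1)
    return weights @ V                       # (seq_len, d_k)

# Toy usage with random weights (illustrative dimensions only).
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                 # 5 tokens, d_model = 16
Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)   # (5, 8)
```

Because the score matrix compares every position with every other position, each output row is a weighted mixture over the whole sequence, which is how dependencies between all word pairs are captured without recurrence.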
<s>
Moreover	O
,	O
its	O
simplicity	O
has	O
enabled	O
researchers	O
to	O
develop	O
high-quality	O
translation	O
models	O
with	O
the	O
Transformer	B-Algorithm
model	I-Algorithm
,	O
even	O
in	O
low-resource	O
settings	O
.	O
</s>
