<s>
Seq2seq	B-Algorithm
is	O
a	O
family	O
of	O
machine	O
learning	O
approaches	O
used	O
for	O
natural	B-Language
language	I-Language
processing	I-Language
.	O
</s>
<s>
Applications	O
include	O
language	O
translation	O
,	O
image	O
captioning	O
,	O
conversational	O
models	O
and	O
text	B-Application
summarization	I-Application
.	O
</s>
<s>
The	O
algorithm	O
was	O
later	O
developed	O
by	O
Google	O
for	O
use	O
in	O
machine	B-Application
translation	I-Application
.	O
</s>
<s>
In	O
2019	O
,	O
Facebook	B-Application
announced	O
its	O
use	O
in	O
symbolic	B-Algorithm
integration	I-Algorithm
and	O
resolution	O
of	O
differential	O
equations	O
.	O
</s>
<s>
The	O
company	O
claimed	O
that	O
it	O
could	O
solve	O
complex	O
equations	O
more	O
rapidly	O
and	O
with	O
greater	O
accuracy	O
than	O
commercial	O
solutions	O
such	O
as	O
Mathematica	B-Application
,	O
MATLAB	B-Language
and	O
Maple	B-Language
.	O
</s>
<s>
An	O
LSTM	B-Algorithm
neural	B-Architecture
network	I-Architecture
then	O
applies	O
its	O
standard	O
pattern	O
recognition	O
facilities	O
to	O
process	O
the	O
tree	O
.	O
</s>
<s>
In	O
2020	O
,	O
Google	O
released	O
Meena	O
,	O
a	O
2.6	O
billion	O
parameter	B-Architecture
seq2seq-based	O
chatbot	B-Application
trained	O
on	O
a	O
341	O
GB	O
data	O
set	O
.	O
</s>
<s>
Google	O
claimed	O
that	O
the	O
chatbot	B-Application
has	O
1.7	O
times	O
greater	O
model	O
capacity	O
than	O
OpenAI	O
's	O
GPT-2	B-General_Concept
,	O
whose	O
May	O
2020	O
successor	O
,	O
the	O
175	O
billion	O
parameter	B-Architecture
GPT-3	B-General_Concept
,	O
trained	O
on	O
a	O
"	O
45TB	O
dataset	O
of	O
plaintext	O
words	O
(	O
45,000	O
GB	O
)	O
that	O
was	O
...	O
filtered	O
down	O
to	O
570	O
GB.	O
"	O
</s>
<s>
In	O
2022	O
,	O
Amazon	B-Application
introduced	O
AlexaTM	O
20B	O
,	O
a	O
moderate-sized	O
(	O
20	O
billion	O
parameter	B-Architecture
)	O
seq2seq	B-Algorithm
language	B-Language
model	I-Language
.	O
</s>
<s>
The	O
model	O
outperforms	O
the	O
much	O
larger	O
GPT-3	B-General_Concept
in	O
language	O
translation	O
and	O
summarization	O
.	O
</s>
<s>
AlexaTM	O
20B	O
achieved	O
state-of-the-art	O
performance	O
in	O
few-shot-learning	O
tasks	O
across	O
all	O
Flores-101	O
language	O
pairs	O
,	O
outperforming	O
GPT-3	B-General_Concept
on	O
several	O
tasks	O
.	O
</s>
<s>
Seq2seq	B-Algorithm
turns	O
one	O
sequence	O
into	O
another	O
sequence	O
(	O
sequence	O
transformation	O
)	O
.	O
</s>
<s>
It	O
does	O
so	O
by	O
use	O
of	O
a	O
recurrent	B-Algorithm
neural	I-Algorithm
network	I-Algorithm
(	O
RNN	O
)	O
or	O
more	O
often	O
LSTM	B-Algorithm
or	O
GRU	B-Algorithm
to	O
avoid	O
the	O
problem	O
of	O
vanishing	B-Algorithm
gradient	I-Algorithm
.	O
</s>
<s>
Attention	B-General_Concept
:	O
The	O
input	O
to	O
the	O
decoder	O
is	O
a	O
single	O
vector	O
which	O
stores	O
the	O
entire	O
context	O
.	O
</s>
<s>
Attention	B-General_Concept
allows	O
the	O
decoder	O
to	O
look	O
at	O
the	O
input	O
sequence	O
selectively	O
.	O
</s>
<s>
Beam	B-Algorithm
Search	I-Algorithm
:	O
Instead	O
of	O
picking	O
the	O
single	O
output	O
(	O
word	O
)	O
as	O
the	O
output	O
,	O
multiple	O
highly	O
probable	O
choices	O
are	O
retained	O
,	O
structured	O
as	O
a	O
tree	O
(	O
using	O
a	O
Softmax	B-Algorithm
on	O
the	O
set	O
of	O
attention	B-General_Concept
scores	O
)	O
.	O
</s>
<s>
Average	O
the	O
encoder	O
states	O
weighted	O
by	O
the	O
attention	B-General_Concept
distribution	O
.	O
</s>
<s>
Software	O
adopting	O
similar	O
approaches	O
includes	O
OpenNMT	O
(	O
Torch	B-Algorithm
)	O
,	O
Neural	O
Monkey	O
(	O
TensorFlow	B-Language
)	O
and	O
NEMATUS	O
(	O
Theano	B-Algorithm
)	O
.	O
</s>
