A transformer is a deep learning model that adopts the mechanism of self-attention, differentially weighting the significance of each part of the input data (which includes the recursive output). It is used primarily in the fields of natural language processing (NLP) and computer vision (CV).
Like recurrent neural networks (RNNs), transformers are designed to process sequential input data, such as natural language, with applications towards tasks such as translation and text summarization. However, unlike RNNs, transformers process the entire input all at once. The attention mechanism provides context for any position in the input sequence. For example, if the input data is a natural language sentence, the transformer does not have to process one word at a time. This allows for more parallelization than RNNs and therefore reduces training times.
Transformers were introduced in 2017 by a team at Google Brain and are increasingly becoming the model of choice for NLP problems, replacing RNN models such as long short-term memory (LSTM). The "linear" Transformer goes back to Schmidhuber's work (1992). The additional training parallelization allows training on larger datasets. This led to the development of pretrained systems such as BERT (Bidirectional Encoder Representations from Transformers) and GPT (Generative Pre-trained Transformer), which were trained with large language datasets, such as the Wikipedia Corpus and Common Crawl, and can be fine-tuned for specific tasks.
Before transformers, most state-of-the-art NLP systems relied on gated RNNs, such as LSTMs and gated recurrent units (GRUs), with added attention mechanisms. Transformers also make use of attention mechanisms but, unlike RNNs, do not have a recurrent structure. This means that, provided with enough training data, attention mechanisms alone can match the performance of RNNs with attention.
In practice, the sequential-processing mechanism of gated RNNs is flawed: the vanishing gradient problem leaves the model's state at the end of a long sentence without precise, extractable information about preceding tokens. The dependency of token computations on the results of previous token computations also makes it hard to parallelize computation on modern deep-learning hardware. These problems were addressed by attention mechanisms.
Attention mechanisms let a model draw from the state at any preceding point along the sequence. The attention layer can access all previous states and weigh them according to a learned measure of relevance, providing relevant information about far-away tokens.
A clear example of the value of attention is in language translation, where context is essential to assign the meaning of a word in a sentence. In an English-to-French translation system, the first word of the French output most probably depends heavily on the first few words of the English input. However, in a classic LSTM model, in order to produce the first word of the French output, the model is given only the state vector after processing the last English word. In practice, this information is often poorly preserved by the LSTM.
An attention mechanism can be added to address this problem: the decoder is given access to the state vectors of every English input word, not just the last, and can learn attention weights that dictate how much to attend to each English input state vector. When added to RNNs, attention mechanisms increase performance.
The development of the Transformer architecture revealed that attention mechanisms were powerful in themselves and that sequential recurrent processing of data was not necessary to achieve the quality gains of RNNs with attention. Transformers use an attention mechanism without an RNN, processing all tokens simultaneously and calculating attention weights between them in successive layers. Since the attention mechanism only uses information about other tokens from lower layers, it can be computed for all tokens in parallel, which leads to improved training speed.
The modern Transformer was published by Ashish Vaswani et al. in their 2017 paper "Attention Is All You Need." It is now frequently used in natural language processing problems, replacing recurrent neural networks (RNNs) such as long short-term memory (LSTM).
Basic ideas for this go back a long way: in 1992, Juergen Schmidhuber published the Transformer with "linearized self-attention" (save for a normalization operator), which is also called the "linear Transformer." He advertised it as an "alternative to RNNs" that can learn "internal spotlights of attention," and experimentally applied it to problems of variable binding.
Here, a slow feedforward neural network learns by gradient descent to control the fast weights of another neural network through outer products of self-generated activation patterns called "FROM" and "TO", which in Transformer terminology are called "key" and "value" (terms borrowed from key–value databases) for "self-attention." This fast weight "attention mapping" is applied to queries. The 2017 Transformer combines this with a softmax operator and a projection matrix.
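To make the fast-weight idea concrete, the following is a minimal NumPy sketch (variable names and dimensions are illustrative assumptions, not Schmidhuber's original formulation): the fast weight matrix accumulates outer products of "TO" (value) and "FROM" (key) patterns, and the resulting "attention mapping" is then applied to a query pattern.

```python
import numpy as np

d = 4  # dimensionality of the key/value/query patterns (arbitrary for this sketch)
rng = np.random.default_rng(0)

# Self-generated "FROM" (key) and "TO" (value) activation patterns, one pair per step.
keys = [rng.standard_normal(d) for _ in range(3)]
values = [rng.standard_normal(d) for _ in range(3)]

# The slow network would produce these patterns; the fast weights simply
# accumulate their outer products, value (x) key.
fast_weights = np.zeros((d, d))
for k, v in zip(keys, values):
    fast_weights += np.outer(v, k)

# The fast-weight "attention mapping" applied to a query:
# output = sum_i v_i * (k_i . q), i.e. un-normalized ("linearized") self-attention.
query = rng.standard_normal(d)
output = fast_weights @ query
print(output)
```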
Further developments of Transformers include Perceivers by Andrew Jaegle et al. (2021), which can learn from large amounts of heterogeneous data, and Vision Transformers by Jean-Baptiste Cordonnier et al. The vision transformer breaks down input images into a series of patches which, once transformed into vectors, are treated like words in a standard transformer.
The input text is parsed into tokens by a byte pair encoding tokenizer, and each token is converted via a word embedding into a vector. Then, positional information of the token is added to the word embedding.
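As an illustration, here is a minimal sketch of this input pipeline in PyTorch. The toy vocabulary, the whitespace split, and the use of a learned positional embedding are assumptions made for brevity; a real system would use a trained byte pair encoding tokenizer, and the original Transformer used fixed sinusoidal positional encodings (shown further below).

```python
import torch
import torch.nn as nn

vocab = {"<unk>": 0, "the": 1, "cat": 2, "sat": 3}   # toy vocabulary (assumption)
d_model = 8                                          # embedding dimension (assumption)

embed_tokens = nn.Embedding(len(vocab), d_model)     # word embedding table
embed_positions = nn.Embedding(512, d_model)         # positional information (learned, for brevity)

def encode(text: str) -> torch.Tensor:
    # Stand-in for a byte pair encoding tokenizer: map each word to a token id.
    ids = torch.tensor([vocab.get(w, vocab["<unk>"]) for w in text.split()])
    positions = torch.arange(ids.size(0))
    # Each token's word embedding plus its positional information.
    return embed_tokens(ids) + embed_positions(positions)

x = encode("the cat sat")   # shape: (3, d_model), one vector per token
```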
Like earlier seq2seq models, the original Transformer model used an encoder–decoder architecture. Each encoder and decoder layer makes use of an attention mechanism. For each part of the input, attention weighs the relevance of every other part and draws from them to produce the output. Each decoder layer also has an additional attention mechanism that draws information from the outputs of previous decoders, before the decoder layer draws information from the encodings. Both the encoder and decoder layers have a feed-forward neural network for additional processing of the outputs, and contain residual connections and layer normalization steps.
The transformer building blocks are scaled dot-product attention units. When a sentence is passed into a transformer model, attention weights are calculated between every token simultaneously. The attention unit produces embeddings for every token in context that contain information about the token itself along with a weighted combination of other relevant tokens, each weighted by its attention weight.
For each attention unit, the transformer model learns three weight matrices: the query weights $W_Q$, the key weights $W_K$, and the value weights $W_V$. For each token $i$, the input word embedding $x_i$ is multiplied with each of the three weight matrices to produce a query vector $q_i = x_i W_Q$, a key vector $k_i = x_i W_K$, and a value vector $v_i = x_i W_V$.
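A minimal sketch of this projection step in NumPy, with made-up dimensions:

```python
import numpy as np

d_model, d_k = 8, 4                         # embedding and key/query dimensions (assumptions)
rng = np.random.default_rng(0)

W_Q = rng.standard_normal((d_model, d_k))   # learned query weights
W_K = rng.standard_normal((d_model, d_k))   # learned key weights
W_V = rng.standard_normal((d_model, d_k))   # learned value weights

x_i = rng.standard_normal(d_model)          # word embedding of token i

q_i = x_i @ W_Q   # query vector
k_i = x_i @ W_K   # key vector
v_i = x_i @ W_V   # value vector
```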
Attention weights are calculated using the query and key vectors: the attention weight $a_{ij}$ from token $i$ to token $j$ is the dot product between $q_i$ and $k_j$. The attention weights are divided by the square root of the dimension of the key vectors, $\sqrt{d_k}$, which stabilizes gradients during training, and passed through a softmax which normalizes the weights.
The fact that $W_Q$ and $W_K$ are different matrices allows attention to be non-symmetric: if token $i$ attends to token $j$ (i.e. $q_i \cdot k_j$ is large), this does not necessarily mean that token $j$ will attend to token $i$ (i.e. $q_j \cdot k_i$ could be small). The output of the attention unit for token $i$ is the weighted sum of the value vectors of all tokens, weighted by $a_{ij}$, the attention from token $i$ to each token $j$.
The attention calculation for all tokens can be expressed as one large matrix calculation using the softmax function, which is useful for training because matrix operations are computed quickly on modern hardware. The matrices $Q$, $K$ and $V$ are defined as the matrices whose $i$th rows are the vectors $q_i$, $k_i$ and $v_i$ respectively:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\mathsf{T}}}{\sqrt{d_k}}\right)V,$$

where the softmax is taken over the horizontal axis.
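A minimal NumPy sketch of this single-matrix formulation (dimensions and names are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (n_tokens, d_k) matrices whose i-th rows are q_i, k_i, v_i.
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)       # raw attention weights, scaled by sqrt(d_k)
    weights = softmax(scores, axis=-1)    # softmax over the horizontal (key) axis
    return weights @ V                    # weighted sum of value vectors, one row per token

rng = np.random.default_rng(0)
Q = rng.standard_normal((5, 4))
K = rng.standard_normal((5, 4))
V = rng.standard_normal((5, 4))
out = scaled_dot_product_attention(Q, K, V)   # shape (5, 4): one output vector per token
```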
One set of $(W_Q, W_K, W_V)$ matrices is called an attention head, and each layer in a transformer model has multiple attention heads. While each attention head attends to the tokens that are relevant to each token, with multiple attention heads the model can do this for different definitions of "relevance." Many transformer attention heads encode relevance relations that are meaningful to humans.
For example, some attention heads can attend mostly to the next word, while others mainly attend from verbs to their direct objects. The computations for each attention head can be performed in parallel, which allows for fast processing. The outputs of the attention layer are concatenated to pass into the feed-forward neural network layers.
Concretely, let the multiple attention heads be indexed by $i$. Then

$$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}_{i}\big(\mathrm{Attention}(X W_i^Q,\; X W_i^K,\; X W_i^V)\big)\, W^O,$$

where the matrices $W_i^Q$, $W_i^K$, $W_i^V$ are "projection matrices" owned by individual attention head $i$, and $W^O$ is a final projection matrix owned by the whole multi-headed attention head.
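A minimal NumPy sketch of this multi-headed combination (head count and dimensions are illustrative; the attention function is the same scaled dot-product attention as in the earlier sketch, redefined compactly here so the example runs on its own):

```python
import numpy as np

def softmax(x):  # softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):  # scaled dot-product attention, as above
    return softmax(Q @ K.T / np.sqrt(K.shape[-1])) @ V

def multi_head_attention(X, heads, W_O):
    # X: (n_tokens, d_model); heads: list of (W_Q, W_K, W_V) triples, one per attention head.
    per_head = [attention(X @ W_Q, X @ W_K, X @ W_V) for W_Q, W_K, W_V in heads]
    # Concatenate the per-head outputs, then apply the final projection matrix W_O.
    return np.concatenate(per_head, axis=-1) @ W_O

rng = np.random.default_rng(0)
d_model, d_k, n_heads, n_tokens = 8, 4, 2, 5
heads = [tuple(rng.standard_normal((d_model, d_k)) for _ in range(3)) for _ in range(n_heads)]
W_O = rng.standard_normal((n_heads * d_k, d_model))
X = rng.standard_normal((n_tokens, d_model))
out = multi_head_attention(X, heads, W_O)   # shape (n_tokens, d_model)
```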
It may be necessary to cut out attention links between some word-pairs. This may be accomplished before the softmax stage by adding a mask matrix that is negative infinity at entries where the attention link must be cut, and zero at other places.
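For example, a causal mask (one particular choice of cut links, used in the decoder described later; the mechanism itself works for any pattern) prevents each token from attending to later tokens. A minimal NumPy sketch:

```python
import numpy as np

n_tokens = 5
# Mask matrix: -inf where the attention link from token i to token j must be cut
# (here: j > i, i.e. future tokens), and 0 everywhere else.
mask = np.triu(np.full((n_tokens, n_tokens), -np.inf), k=1)

scores = np.random.randn(n_tokens, n_tokens)   # Q K^T / sqrt(d_k), before the softmax stage
masked_scores = scores + mask                  # mask added before the softmax

e = np.exp(masked_scores - masked_scores.max(axis=-1, keepdims=True))
weights = e / e.sum(axis=-1, keepdims=True)    # masked entries receive zero attention weight
```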
Each encoder consists of two major components: a self-attention mechanism and a feed-forward neural network. The self-attention mechanism accepts input encodings from the previous encoder and weights their relevance to each other to generate output encodings. The feed-forward neural network further processes each output encoding individually. The first encoder takes positional information and embeddings of the input sequence as its input, rather than encodings.
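A compact PyTorch sketch of one encoder layer, combining the self-attention and feed-forward components with the residual connections and layer normalization mentioned earlier. The hyperparameters are illustrative, and `nn.MultiheadAttention` is used as a stand-in for the attention unit described above.

```python
import torch
import torch.nn as nn

class EncoderLayer(nn.Module):
    def __init__(self, d_model=512, n_heads=8, d_ff=2048):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):
        # Self-attention sublayer with residual connection and layer normalization.
        attn_out, _ = self.self_attn(x, x, x)
        x = self.norm1(x + attn_out)
        # Position-wise feed-forward sublayer, again with residual and normalization.
        x = self.norm2(x + self.ff(x))
        return x

layer = EncoderLayer()
x = torch.randn(1, 10, 512)   # (batch, tokens, d_model): embeddings plus positional information
y = layer(x)                  # output encodings, same shape
```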
The positional information is necessary for the transformer to make use of the order of the sequence, because no other part of the transformer makes use of this. Attention can be placed on tokens before and after the current token.
A positional encoding is a fixed-size vector representation that encapsulates the relative positions of tokens within a target sequence: it provides the transformer model with information about where the words are in the input sequence. This allows the transformer to take any encoded position and find the encoding of the position n steps ahead or n steps behind, by a matrix multiplication.
It also allows the transformer to take any encoded position and find a linear sum of the encoded locations of its neighbors. This sum of encoded positions, when fed into the attention mechanism, would create attention weights on its neighbors, much like what happens in a convolutional neural network language model.
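The text above does not give an explicit encoding formula; the original "Attention Is All You Need" Transformer used fixed sinusoidal positional encodings, which have the property described (a fixed linear transformation maps the encoding of one position to that of a position a fixed offset away). A minimal NumPy sketch:

```python
import numpy as np

def sinusoidal_positional_encoding(n_positions, d_model):
    # PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
    # PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))
    positions = np.arange(n_positions)[:, None]
    dims = np.arange(0, d_model, 2)[None, :]
    angles = positions / np.power(10000.0, dims / d_model)
    pe = np.zeros((n_positions, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

pe = sinusoidal_positional_encoding(50, 8)   # one fixed-size vector per position
```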
Each decoder consists of three major components: a self-attention mechanism, an attention mechanism over the encodings, and a feed-forward neural network. The decoder functions in a similar fashion to the encoder, but an additional attention mechanism is inserted which instead draws relevant information from the encodings generated by the encoders. This mechanism can also be called the encoder–decoder attention.
Like the first encoder, the first decoder takes positional information and embeddings of the output sequence as its input, rather than encodings. The transformer must not use the current or future output to predict an output, so the output sequence must be partially masked to prevent this reverse information flow. This allows for autoregressive text generation. For all attention heads, attention cannot be placed on following tokens.
The last decoder is followed by a final linear transformation and softmax layer, to produce the output probabilities over the vocabulary.
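A minimal PyTorch sketch of this final step (vocabulary size and dimensions are illustrative):

```python
import torch
import torch.nn as nn

d_model, vocab_size = 512, 32000              # illustrative sizes
to_logits = nn.Linear(d_model, vocab_size)    # final linear transformation

decoder_output = torch.randn(1, 10, d_model)  # (batch, tokens, d_model) from the last decoder
probs = torch.softmax(to_logits(decoder_output), dim=-1)  # probabilities over the vocabulary
next_token = probs[0, -1].argmax()            # e.g. greedy choice of the next token
```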
GPT has a decoder-only architecture.
Training transformer-based architectures can be expensive, especially for long inputs. Alternative architectures such as the Reformer reduce this cost; this is done using locality-sensitive hashing and reversible layers.
Ordinary transformers require a memory size that is quadratic in the size of the context window. Attention Free Transformers reduce this to a linear dependence while still retaining the advantages of a transformer by linking the key to the value.
A benchmark for comparing transformer architectures was introduced in late 2020 under the name Long Range Arena.
The plain Transformer architecture has difficulty converging. A 2020 paper found that using layer normalization before (instead of after) multiheaded attention and feedforward layers stabilizes training, not requiring learning rate warmup.
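A sketch of the difference in PyTorch, with toy stand-ins for the attention and feed-forward sublayers (an assumption made to keep the example self-contained): the pre-LN variant normalizes the sublayer input, while the original post-LN design normalizes after the residual addition.

```python
import torch
import torch.nn as nn

class PreLNBlock(nn.Module):
    """Layer normalization applied *before* each sublayer; reported to stabilize training."""
    def __init__(self, d_model, attn, ff):
        super().__init__()
        self.attn, self.ff = attn, ff
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

    def forward(self, x):
        x = x + self.attn(self.norm1(x))   # residual around the normalized attention sublayer
        x = x + self.ff(self.norm2(x))     # residual around the normalized feed-forward sublayer
        return x

class PostLNBlock(nn.Module):
    """Original post-LN ordering: normalize *after* adding each residual."""
    def __init__(self, d_model, attn, ff):
        super().__init__()
        self.attn, self.ff = attn, ff
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

    def forward(self, x):
        x = self.norm1(x + self.attn(x))
        x = self.norm2(x + self.ff(x))
        return x

# Toy stand-ins for the sublayers, just to make the sketch runnable.
block = PreLNBlock(64, attn=nn.Linear(64, 64), ff=nn.Linear(64, 64))
y = block(torch.randn(2, 10, 64))
```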
Transformers typically undergo semi-supervised learning involving unsupervised pretraining followed by supervised fine-tuning.
The transformer has had great success in natural language processing (NLP), for example the tasks of machine translation and time series prediction.
Many pretrained models such as GPT-2, GPT-3, GPT-4, BERT, XLNet, RoBERTa and ChatGPT demonstrate the ability of transformers to perform a wide variety of such NLP-related tasks, and have the potential to find real-world applications.
Beyond NLP, transformers have also been applied to tasks such as video understanding.
The transformer model has been implemented in standard deep learning frameworks such as TensorFlow and PyTorch.
Transformers is a library produced by Hugging Face that supplies transformer-based architectures and pretrained models.
