<s>
In	O
artificial	B-Architecture
neural	I-Architecture
networks	I-Architecture
,	O
attention	O
is	O
a	O
technique	O
that	O
is	O
meant	O
to	O
mimic	O
cognitive	O
attention	O
.	O
</s>
<s>
Learning	O
which	O
part	O
of	O
the	O
data	O
is	O
more	O
important	O
than	O
another	O
depends	O
on	O
the	O
context	O
,	O
and	O
this	O
is	O
trained	O
by	O
gradient	B-Algorithm
descent	I-Algorithm
.	O
</s>
<s>
Uses	O
of	O
attention	O
include	O
memory	O
in	O
fast	O
weight	O
controllers	O
that	O
can	O
learn	O
"	O
internal	O
spotlights	O
of	O
attention	O
"	O
(	O
also	O
known	O
as	O
Transformers	B-Algorithm
with	O
"	O
linearized	O
self-attention	O
"	O
)	O
,	O
neural	B-Algorithm
Turing	I-Algorithm
machines	I-Algorithm
,	O
reasoning	O
tasks	O
in	O
differentiable	B-Algorithm
neural	I-Algorithm
computers	I-Algorithm
,	O
language	O
processing	O
in	O
transformers	B-Algorithm
,	O
and	O
LSTMs	B-Algorithm
,	O
and	O
multi-sensory	O
data	O
processing	O
(	O
sound	O
,	O
images	O
,	O
video	O
,	O
and	O
text	O
)	O
in	O
perceivers	B-General_Concept
.	O
</s>
<s>
Given	O
a	O
sequence	O
of	O
tokens	O
labeled	O
by	O
the	O
index	O
,	O
a	O
neural	B-Architecture
network	I-Architecture
computes	O
a	O
soft	O
weight	O
for	O
each	O
with	O
the	O
property	O
that	O
is	O
non-negative	O
and	O
.	O
</s>
<s>
Each	O
is	O
assigned	O
a	O
value	O
vector	O
which	O
is	O
computed	O
from	O
the	O
word	B-General_Concept
embedding	I-General_Concept
of	O
the	O
th	O
token	O
.	O
</s>
<s>
The	O
weighted	O
average	O
is	O
the	O
output	O
of	O
the	O
attention	B-General_Concept
mechanism	I-General_Concept
.	O
</s>
<s>
From	O
the	O
word	B-General_Concept
embedding	I-General_Concept
of	O
each	O
token	O
,	O
it	O
computes	O
its	O
corresponding	O
query	O
vector	O
and	O
key	O
vector	O
.	O
</s>
<s>
The	O
weights	O
are	O
obtained	O
by	O
taking	O
the	O
softmax	B-Algorithm
function	I-Algorithm
of	O
the	O
dot	O
product	O
where	O
represents	O
the	O
current	O
token	O
and	O
represents	O
the	O
token	O
that	O
's	O
being	O
attended	O
to	O
.	O
</s>
<s>
In	O
some	O
architectures	O
,	O
there	O
are	O
multiple	O
"	O
heads	O
"	O
of	O
attention	O
(	O
termed	O
'	O
multi-head	B-General_Concept
attention	I-General_Concept
 '	O
)	O
,	O
each	O
operating	O
independently	O
with	O
their	O
own	O
queries	O
,	O
keys	O
,	O
and	O
values	O
.	O
</s>
<s>
To	O
build	O
a	O
machine	O
that	O
translates	O
English	O
to	O
French	O
,	O
one	O
takes	O
the	O
basic	O
Encoder-Decoder	O
and	O
grafts	O
an	O
attention	B-General_Concept
unit	I-General_Concept
to	O
it	O
(	O
diagram	O
below	O
)	O
.	O
</s>
<s>
In	O
the	O
simplest	O
case	O
,	O
the	O
attention	B-General_Concept
unit	I-General_Concept
consists	O
of	O
dot	O
products	O
of	O
the	O
recurrent	O
encoder	O
states	O
and	O
does	O
not	O
need	O
training	O
.	O
</s>
<s>
In	O
practice	O
,	O
the	O
attention	B-General_Concept
unit	I-General_Concept
consists	O
of	O
3	O
fully-connected	O
neural	B-Architecture
network	I-Architecture
layers	O
called	O
query-key-value	O
that	O
need	O
to	O
be	O
trained	O
.	O
</s>
<s>
sentence	O
length	O
300	O
Embedding	B-General_Concept
size	O
(	O
word	O
dimension	O
)	O
500	O
Length	O
of	O
hidden	O
vector	O
9k	O
,	O
10k	O
Dictionary	O
size	O
of	O
input	O
&	O
output	O
languages	O
respectively	O
.	O
</s>
<s>
x	O
300-long	O
word	B-General_Concept
embedding	I-General_Concept
vector	O
.	O
</s>
<s>
The	O
vectors	O
are	O
usually	O
pre-calculated	O
from	O
other	O
projects	O
such	O
as	O
GloVe	B-General_Concept
or	O
Word2Vec	B-Algorithm
.	O
</s>
<s>
The	O
final	O
h	O
can	O
be	O
viewed	O
as	O
a	O
"	O
sentence	O
"	O
vector	O
,	O
or	O
a	O
thought	B-Algorithm
vector	I-Algorithm
as	O
Hinton	O
calls	O
it	O
.	O
</s>
<s>
E	O
500	O
neuron	O
RNN	B-Algorithm
encoder	O
.	O
</s>
<s>
Input	O
count	O
is	O
800	O
–	O
300	O
from	O
source	O
embedding	B-General_Concept
+	O
500	O
from	O
recurrent	O
connections	O
.	O
</s>
<s>
This	O
view	O
of	O
the	O
attention	O
weights	O
addresses	O
the	O
"	O
explainability	O
"	O
problem	O
that	O
neural	B-Architecture
networks	I-Architecture
are	O
criticized	O
for	O
.	O
</s>
<s>
The	O
off-diagonal	O
dominance	O
shows	O
that	O
the	O
attention	B-General_Concept
mechanism	I-General_Concept
is	O
more	O
nuanced	O
.	O
</s>
<s>
There	O
are	O
many	O
variants	O
of	O
attention	O
that	O
implements	O
soft	O
weights	O
,	O
including	O
Juergen	O
Schmidhuber	O
's	O
"	O
internal	O
spotlights	O
of	O
attention	O
"	O
generated	O
by	O
fast	O
weight	O
programmers	O
or	O
fast	O
weight	O
controllers	O
(	O
1992	O
)	O
(	O
also	O
known	O
as	O
Transformers	B-Algorithm
with	O
"	O
linearized	O
self-attention	O
"	O
)	O
.	O
</s>
<s>
Here	O
a	O
slow	O
neural	B-Architecture
network	I-Architecture
learns	O
by	O
gradient	B-Algorithm
descent	I-Algorithm
to	O
program	O
the	O
fast	O
weights	O
of	O
another	O
neural	B-Architecture
network	I-Architecture
through	O
outer	O
products	O
of	O
self-generated	O
activation	O
patterns	O
called	O
"	O
FROM	O
"	O
and	O
"	O
TO	O
"	O
which	O
in	O
Transformer	B-Algorithm
terminology	O
are	O
called	O
"	O
key	O
"	O
and	O
"	O
value.	O
"	O
</s>
<s>
(	O
b	O
)	O
Bahdanau	O
Attention	O
,	O
also	O
referred	O
to	O
as	O
additive	O
attention	O
,	O
and	O
(	O
c	O
)	O
Luong	O
Attention	O
which	O
is	O
known	O
as	O
multiplicative	O
attention	O
,	O
built	O
on	O
top	O
of	O
additive	O
attention	O
,	O
and	O
(	O
d	O
)	O
self-attention	O
introduced	O
in	O
transformers	B-Algorithm
.	O
</s>
<s>
For	O
convolutional	B-Architecture
neural	I-Architecture
networks	I-Architecture
,	O
the	O
attention	B-General_Concept
mechanisms	I-General_Concept
can	O
also	O
be	O
distinguished	O
by	O
the	O
dimension	O
on	O
which	O
they	O
operate	O
,	O
namely	O
:	O
spatial	O
attention	O
,	O
channel	O
attention	O
,	O
or	O
combinations	O
of	O
both	O
.	O
</s>
<s>
S	O
,	O
T	O
S	O
,	O
decoder	O
hidden	O
state	O
;	O
T	O
,	O
target	O
word	B-General_Concept
embedding	I-General_Concept
.	O
</s>
<s>
In	O
the	O
Pytorch	O
Tutorial	O
variant	O
training	O
phase	O
,	O
T	O
alternates	O
between	O
2	O
sources	O
depending	O
on	O
the	O
level	O
of	O
teacher	B-Algorithm
forcing	I-Algorithm
used	O
.	O
</s>
<s>
T	O
could	O
be	O
the	O
embedding	B-General_Concept
of	O
the	O
network	O
's	O
output	O
word	O
;	O
i.e.	O
</s>
<s>
embedding(argmax(FC output )	O
)	O
.	O
</s>
<s>
Alternatively	O
with	O
teacher	B-Algorithm
forcing	I-Algorithm
,	O
T	O
could	O
be	O
the	O
embedding	B-General_Concept
of	O
the	O
known	O
correct	O
word	O
which	O
can	O
occur	O
with	O
a	O
constant	O
forcing	O
probability	O
,	O
say	O
1/2	O
.	O
</s>
<s>
X	O
,	O
H	O
H	O
,	O
encoder	O
hidden	O
state	O
;	O
X	O
,	O
input	O
word	B-General_Concept
embeddings	I-General_Concept
.	O
</s>
