In machine learning, backpropagation is a widely used algorithm for training feedforward artificial neural networks or other parameterized networks with differentiable nodes.
It is a special case of the reverse mode of automatic differentiation, or reverse accumulation, due to Seppo Linnainmaa (1970).
The term "back-propagating error correction" was introduced in 1962 by Frank Rosenblatt, but he did not know how to implement this, although Henry J. Kelley had a continuous precursor of backpropagation already in 1960 in the context of control theory.
Backpropagation computes the gradient of a loss function with respect to the weights of the network for a single input–output example, and does so efficiently, computing the gradient one layer at a time, iterating backward from the last layer to avoid redundant calculations of intermediate terms in the chain rule; this can be derived through dynamic programming. Gradient descent, or variants such as stochastic gradient descent, are commonly used.
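The layer-by-layer backward pass described above can be sketched in NumPy. Everything concrete here (a two-layer sigmoid/linear network, squared-error loss, the shapes and names) is an illustrative assumption, not something specified in the text:

```python
import numpy as np

# Hypothetical 2-layer network: x -> W1, sigmoid -> W2, linear output,
# trained against a squared-error loss. Shapes are illustrative.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 2))
W2 = rng.normal(size=(1, 3))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    z1 = W1 @ x        # weighted input of layer 1
    a1 = sigmoid(z1)   # activation of layer 1
    y = W2 @ a1        # linear output of layer 2
    return z1, a1, y

def backprop(x, t):
    """Gradient of L = 0.5*(y - t)^2 w.r.t. W1 and W2, computed
    backward one layer at a time, reusing intermediate terms."""
    z1, a1, y = forward(x)
    delta2 = y - t                              # dL/dz2 (linear output)
    dW2 = np.outer(delta2, a1)
    delta1 = (W2.T @ delta2) * a1 * (1 - a1)    # chain rule, reusing delta2
    dW1 = np.outer(delta1, x)
    return dW1, dW2

x, t = np.array([0.5, -1.0]), np.array([1.0])
dW1, dW2 = backprop(x, t)
```

Note how `delta2` is computed once and reused for both `dW2` and `delta1`; that reuse is exactly the redundancy-avoidance the text describes.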
The term backpropagation strictly refers only to the algorithm for computing the gradient, not how the gradient is used; however, the term is often used loosely to refer to the entire learning algorithm, including how the gradient is used, such as by stochastic gradient descent. This contributed to the popularization of backpropagation and helped to initiate an active period of research in multilayer perceptrons.
Backpropagation computes the gradient in weight space of a feedforward neural network, with respect to a loss function. For classification, the output will be a vector of class probabilities, and the target output is a specific class, encoded by the one-hot/dummy variable.
For classification, the loss function is usually cross-entropy (XC, log loss), while for regression it is usually squared error loss (SEL). For classification the last layer is usually the logistic function for binary classification, and softmax (softargmax) for multi-class classification, while for the hidden layers this was traditionally a sigmoid function (logistic function or others) on each node (coordinate), but today is more varied, with the rectifier (ramp, ReLU) being common.
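These standard choices can be made concrete in a short sketch; the logits and one-hot target below are made-up values for illustration:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())   # shift logits for numerical stability
    return e / e.sum()

def cross_entropy(p, target_onehot):
    # XC / log loss against a one-hot target
    return -float(np.sum(target_onehot * np.log(p)))

def squared_error(y, t):
    # SEL, the usual choice for regression
    return 0.5 * float(np.sum((y - t) ** 2))

z = np.array([2.0, 1.0, 0.1])    # illustrative logits
t = np.array([1.0, 0.0, 0.0])    # one-hot target: class 0
p = softmax(z)
ce = cross_entropy(p, t)
se = squared_error(p, t)
```

Since the target is one-hot, the cross-entropy here reduces to the negative log-probability assigned to the correct class.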
In the derivation of backpropagation, other intermediate quantities are used; they are introduced as needed below. For the purpose of backpropagation, the specific loss function and activation functions do not matter, as long as they and their derivatives can be evaluated efficiently. Traditional activation functions include, but are not limited to, sigmoid, tanh, and ReLU.
Swish, mish, and other activation functions have since been proposed as well.
The overall network is a combination of function composition and matrix multiplication:

$$g(x) := f^L(W^L f^{L-1}(W^{L-1} \cdots f^1(W^1 x) \cdots))$$
Backpropagation computes the gradient for a fixed input–output pair, where the weights can vary. Backpropagation efficiently computes the gradient by avoiding duplicate calculations and not computing unnecessary intermediate values: it computes the gradient of each layer (specifically, the gradient with respect to the weighted input of each layer, denoted $\delta^l$) from back to front.
Backpropagation can be expressed for simple feedforward networks in terms of matrix multiplication, or more generally in terms of the adjoint graph. For the basic case of a feedforward network, where nodes in each layer are connected only to nodes in the immediately next layer (without skipping any layers), and there is a loss function that computes a scalar loss for the final output, backpropagation can be understood simply by matrix multiplication.
Essentially, backpropagation evaluates the expression for the derivative of the cost function as a product of derivatives between each layer from right to left ("backwards"), with the gradient of the weights between each layer being a simple modification of the partial products (the "backwards propagated error").
For backpropagation, the activations, as well as the derivatives of the activation functions (evaluated at the weighted inputs), must be cached for use during the backwards pass. These terms are: the derivative of the loss function; the derivatives of the activation functions; and the matrices of weights:

$$\frac{dC}{dx} = \frac{dC}{da^L} \cdot (f^L)' \cdot W^L \cdot (f^{L-1})' \cdot W^{L-1} \cdots (f^1)' \cdot W^1$$

where $(f^l)'$ denotes the diagonal matrix of activation derivatives evaluated at the weighted inputs of layer $l$.
Backpropagation then consists essentially of evaluating this expression from right to left (equivalently, multiplying the previous expression for the derivative from left to right), computing the gradient at each layer on the way; there is an added step, because the gradient of the weights isn't just a subexpression: there's an extra multiplication. The gradients of the weights can thus be computed using a few matrix multiplications for each level; this is backpropagation.
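Why right-to-left evaluation is cheap can be sketched with made-up per-layer Jacobians: because the loss is scalar, working inward from the loss end needs only matrix–vector products, never a matrix–matrix product. The layer count, sizes, and random Jacobians below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
# Made-up Jacobians of four layers, J_1 ... J_4 (layer l maps a^{l-1} to a^l).
Js = [rng.normal(size=(n, n)) for _ in range(4)]
g = rng.normal(size=n)   # gradient of the scalar loss at the output, dC/da^L

# Reverse accumulation: multiply from the loss end inward ("right to left").
v = g
for J in reversed(Js):
    v = J.T @ v          # one O(n^2) matrix-vector product per layer
# v is now the gradient at the input, (J_4 J_3 J_2 J_1)^T g
```

Evaluating the same product left to right would instead chain full matrix–matrix multiplications at O(n^3) per layer; this is the redundancy backpropagation avoids.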
There are two key differences with backpropagation:
For more general graphs, and other advanced variations, backpropagation can be understood in terms of automatic differentiation, where backpropagation is a special case of reverse accumulation (or "reverse mode").
The goal of any supervised learning algorithm is to find a function that best maps a set of inputs to their correct output. The motivation for backpropagation is to train a multi-layered neural network such that it can learn the appropriate internal representations to allow it to learn any arbitrary mapping of input to output.
To understand the mathematical derivation of the backpropagation algorithm, it helps to first develop some intuition about the relationship between the actual output of a neuron and the correct output for a particular training example.
Consider a simple neural network with two input units, one output unit, and no hidden units, in which each neuron uses a linear output (unlike most work on neural networks, in which the mapping from inputs to outputs is non-linear) that is the weighted sum of its inputs.
Then the neuron learns from training examples, which in this case consist of a set of tuples $(x_1, x_2, t)$ where $x_1$ and $x_2$ are the inputs to the network and $t$ is the correct output (the output the network should produce given those inputs, when it has been trained).
For regression analysis problems the squared error can be used as a loss function; for classification the categorical cross-entropy can be used. One commonly used algorithm to find the set of weights that minimizes the error is gradient descent.
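The two-input linear neuron above can be trained by gradient descent on the squared error in a few lines. The training tuples, learning rate, and iteration count below are made up for illustration (the data are consistent with the target function $t = 2x_1 + x_2$):

```python
import numpy as np

# Made-up training tuples (x1, x2, t), consistent with t = 2*x1 + 1*x2.
examples = [(np.array([0.0, 1.0]), 1.0),
            (np.array([1.0, 0.0]), 2.0),
            (np.array([1.0, 1.0]), 3.0)]

w = np.zeros(2)    # the two weights of the neuron
eta = 0.1          # learning rate (an assumed value)
for _ in range(500):
    for x, t in examples:
        y = w @ x                   # linear output: weighted sum of inputs
        w -= eta * (y - t) * x      # dE/dw for E = 0.5*(y - t)^2
```

After training, `w` should be close to `[2, 1]`, the weights that reproduce the targets exactly.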
By backpropagation, the steepest descent direction of the loss function with respect to the present synaptic weights is calculated. Then, the weights can be modified along the steepest descent direction, and the error is minimized in an efficient way.
The gradient descent method involves calculating the derivative of the loss function with respect to the weights of the network. This is normally done using backpropagation.
Here the activation function is non-linear and differentiable over the activation region (the ReLU is not differentiable at one point).
A historically used activation function is the logistic function:

$$\varphi(z) = \frac{1}{1 + e^{-z}}$$

The derivative of the output of a neuron with respect to its input is simply the partial derivative of the activation function:

$$\frac{d\varphi}{dz}(z) = \varphi(z)\,(1 - \varphi(z))$$
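The logistic derivative identity $\varphi'(z) = \varphi(z)(1 - \varphi(z))$ can be checked numerically with a central difference (the sample points and step size below are arbitrary choices):

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.linspace(-4.0, 4.0, 9)   # arbitrary sample points
eps = 1e-6

# Central-difference approximation of the derivative...
numeric = (logistic(z + eps) - logistic(z - eps)) / (2 * eps)
# ...versus the closed-form identity phi'(z) = phi(z) * (1 - phi(z)).
analytic = logistic(z) * (1 - logistic(z))
```

The two arrays agree to high precision, which is what makes the logistic function convenient for backpropagation: its derivative is available directly from the already-computed activation.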
This is the reason why backpropagation requires that the activation function be differentiable. (Nevertheless, the ReLU activation function, which is non-differentiable at 0, has become quite popular.)
To update the weight using gradient descent, one must choose a learning rate, $\eta$.
Using a Hessian matrix of second-order derivatives of the error function, the Levenberg–Marquardt algorithm often converges faster than first-order gradient descent, especially when the topology of the error function is complicated.
For backpropagation, the loss function calculates the difference between the network output and its expected output, after a training example has propagated through the network. The mathematical expression of the loss function must fulfill two conditions in order for it to be usable in backpropagation.
The reason for this assumption is that the backpropagation algorithm calculates the gradient of the error function for a single training example, which needs to be generalized to the overall error function. The second assumption is that the loss can be written as a function of the outputs from the neural network.
Gradient descent with backpropagation is not guaranteed to find the global minimum of the error function, but only a local minimum; also, it has trouble crossing plateaus in the error function landscape. This issue, caused by the non-convexity of error functions in neural networks, was long thought to be a major drawback, but Yann LeCun et al. argue that in many practical problems it is not.
Backpropagation learning does not require normalization of input vectors; however, normalization could improve performance. Backpropagation requires the derivatives of activation functions to be known at network design time.
Modern backpropagation is Seppo Linnainmaa's reverse mode of automatic differentiation (1970) for discrete connected networks of nested differentiable functions.
The terminology "back-propagating error correction" was introduced in 1962 by Frank Rosenblatt, but he did not know how to implement this, although Henry J. Kelley had a continuous precursor of backpropagation already in 1960 in the context of control theory.
The first deep learning multilayer perceptron (MLP) trained by stochastic gradient descent was published in 1967 by Shun'ichi Amari. In 1982, Paul Werbos applied backpropagation to MLPs in the way that has become standard.
This contributed to the popularization of backpropagation and helped to initiate an active period of research in multilayer perceptrons.
Kelley (1960) and Arthur E. Bryson (1961) used principles of dynamic programming to derive the above-mentioned continuous precursor of the method. Yann LeCun proposed an alternative form of backpropagation for neural networks in his PhD thesis in 1987.
In 1993, Eric Wan won an international pattern recognition contest through backpropagation. During the 2000s it fell out of favour, but returned in the 2010s, benefitting from cheap, powerful GPU-based computing systems.
This has been especially so in speech recognition, machine vision, natural language processing, and language structure learning research (in which it has been used to explain a variety of phenomena related to first and second language learning).
Error backpropagation has been suggested to explain human brain ERP components like the N400 and P600.
