<s>
In	O
machine	O
learning	O
,	O
the	O
vanishing	B-Algorithm
gradient	I-Algorithm
problem	I-Algorithm
is	O
encountered	O
when	O
training	O
artificial	B-Architecture
neural	I-Architecture
networks	I-Architecture
with	O
gradient-based	B-Algorithm
learning	I-Algorithm
methods	I-Algorithm
and	O
backpropagation	B-Algorithm
.	O
</s>
<s>
In	O
such	O
methods	O
,	O
during	O
each	O
iteration	O
of	O
training	O
each	O
of	O
the	O
neural	B-Architecture
network	I-Architecture
's	O
weights	O
receives	O
an	O
update	O
proportional	O
to	O
the	O
partial	O
derivative	O
of	O
the	O
error	O
function	O
with	O
respect	O
to	O
the	O
current	O
weight	O
.	O
</s>
<s>
In	O
the	O
worst	O
case	O
,	O
this	O
may	O
completely	O
stop	O
the	O
neural	B-Architecture
network	I-Architecture
from	O
further	O
training	O
.	O
</s>
<s>
As	O
one	O
example	O
of	O
the	O
problem	O
cause	O
,	O
traditional	O
activation	B-Algorithm
functions	I-Algorithm
such	O
as	O
the	O
hyperbolic	O
tangent	O
function	O
have	O
gradients	O
in	O
the	O
range	O
,	O
and	O
backpropagation	B-Algorithm
computes	O
gradients	O
by	O
the	O
chain	O
rule	O
.	O
</s>
<s>
Back-propagation	B-Algorithm
allowed	O
researchers	O
to	O
train	O
supervised	B-General_Concept
deep	O
artificial	B-Architecture
neural	I-Architecture
networks	I-Architecture
from	O
scratch	O
,	O
initially	O
with	O
little	O
success	O
.	O
</s>
<s>
Hochreiter	O
's	O
diplom	O
thesis	O
of	O
1991	O
formally	O
identified	O
the	O
reason	O
for	O
this	O
failure	O
in	O
the	O
"	O
vanishing	B-Algorithm
gradient	I-Algorithm
problem	I-Algorithm
"	O
,	O
which	O
not	O
only	O
affects	O
many-layered	B-Algorithm
feedforward	B-Algorithm
networks	I-Algorithm
,	O
but	O
also	O
recurrent	O
networks	O
.	O
</s>
<s>
The	O
latter	O
are	O
trained	O
by	O
unfolding	O
them	O
into	O
very	O
deep	O
feedforward	B-Algorithm
networks	I-Algorithm
,	O
where	O
a	O
new	O
layer	O
is	O
created	O
for	O
each	O
time	O
step	O
of	O
an	O
input	O
sequence	O
processed	O
by	O
the	O
network	O
.	O
</s>
<s>
(	O
The	O
combination	O
of	O
unfolding	O
and	O
backpropagation	B-Algorithm
is	O
termed	O
backpropagation	B-Algorithm
through	I-Algorithm
time	I-Algorithm
.	O
)	O
</s>
<s>
When	O
activation	B-Algorithm
functions	I-Algorithm
are	O
used	O
whose	O
derivatives	O
can	O
take	O
on	O
larger	O
values	O
,	O
one	O
risks	O
encountering	O
the	O
related	O
exploding	B-Algorithm
gradient	I-Algorithm
problem	I-Algorithm
.	O
</s>
<s>
A	O
generic	O
recurrent	B-Algorithm
network	I-Algorithm
has	O
hidden	O
states	O
inputs	O
,	O
and	O
outputs	O
.	O
</s>
<s>
The	O
vanishing	B-Algorithm
gradient	I-Algorithm
problem	I-Algorithm
already	O
presents	O
itself	O
clearly	O
when	O
,	O
so	O
we	O
simplify	O
our	O
notation	O
to	O
the	O
special	O
case	O
with	O
:	O
Now	O
,	O
take	O
its	O
differential:Training	O
the	O
network	O
requires	O
us	O
to	O
define	O
a	O
loss	O
function	O
to	O
be	O
minimized	O
.	O
</s>
<s>
where	O
is	O
the	O
network	O
parameter	O
,	O
is	O
the	O
sigmoid	B-Algorithm
activation	I-Algorithm
function	I-Algorithm
,	O
applied	O
to	O
each	O
vector	O
coordinate	O
separately	O
,	O
and	O
is	O
the	O
bias	O
vector	O
.	O
</s>
<s>
This	O
is	O
the	O
prototypical	O
vanishing	B-Algorithm
gradient	I-Algorithm
problem	I-Algorithm
.	O
</s>
<s>
The	O
effect	O
of	O
a	O
vanishing	B-Algorithm
gradient	I-Algorithm
is	O
that	O
the	O
network	O
cannot	O
learn	O
long-range	O
effects	O
.	O
</s>
<s>
For	O
the	O
prototypical	O
exploding	B-Algorithm
gradient	I-Algorithm
problem	I-Algorithm
,	O
the	O
next	O
model	O
is	O
clearer	O
.	O
</s>
<s>
Following	O
(	O
Doya	O
,	O
1993	O
)	O
,	O
consider	O
this	O
one-neuron	O
recurrent	B-Algorithm
network	I-Algorithm
with	O
sigmoid	O
activation:At	O
the	O
small	O
limit	O
,	O
the	O
dynamics	O
of	O
the	O
network	O
becomesConsider	O
first	O
the	O
autonomous	O
case	O
,	O
with	O
.	O
</s>
<s>
If	O
puts	O
the	O
system	O
far	O
from	O
an	O
unstable	O
point	O
,	O
then	O
a	O
small	O
variation	O
in	O
would	O
have	O
no	O
effect	O
on	O
,	O
making	O
,	O
a	O
case	O
of	O
the	O
vanishing	B-Algorithm
gradient	I-Algorithm
.	O
</s>
<s>
Batch	B-General_Concept
normalization	I-General_Concept
is	O
a	O
standard	O
method	O
for	O
solving	O
both	O
the	O
exploding	O
and	O
the	O
vanishing	B-Algorithm
gradient	I-Algorithm
problems	I-Algorithm
.	O
</s>
<s>
This	O
does	O
not	O
solve	O
the	O
vanishing	B-Algorithm
gradient	I-Algorithm
problem	I-Algorithm
.	O
</s>
<s>
One	O
is	O
Jürgen	O
Schmidhuber	O
's	O
multi-level	O
hierarchy	O
of	O
networks	O
(	O
1992	O
)	O
pre-trained	O
one	O
level	O
at	O
a	O
time	O
through	O
unsupervised	B-General_Concept
learning	I-General_Concept
,	O
fine-tuned	O
through	O
backpropagation	B-Algorithm
.	O
</s>
<s>
Similar	O
ideas	O
have	O
been	O
used	O
in	O
feed-forward	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
for	O
unsupervised	O
pre-training	O
to	O
structure	O
a	O
neural	B-Architecture
network	I-Architecture
,	O
making	O
it	O
first	O
learn	O
generally	O
useful	O
feature	O
detectors	O
.	O
</s>
<s>
Then	O
the	O
network	O
is	O
trained	O
further	O
by	O
supervised	B-General_Concept
backpropagation	B-Algorithm
to	O
classify	O
labeled	O
data	O
.	O
</s>
<s>
The	O
deep	B-Algorithm
belief	I-Algorithm
network	I-Algorithm
model	O
by	O
Hinton	O
et	O
al	O
.	O
</s>
<s>
It	O
uses	O
a	O
restricted	B-Algorithm
Boltzmann	I-Algorithm
machine	I-Algorithm
to	O
model	O
each	O
new	O
layer	O
of	O
higher	O
level	O
features	O
.	O
</s>
<s>
Another	O
technique	O
particularly	O
used	O
for	O
recurrent	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
is	O
the	O
long	B-Algorithm
short-term	I-Algorithm
memory	I-Algorithm
(	O
LSTM	B-Algorithm
)	O
network	O
of	O
1997	O
by	O
Hochreiter	O
&	O
Schmidhuber	O
.	O
</s>
<s>
In	O
2009	O
,	O
deep	O
multidimensional	O
LSTM	B-Algorithm
networks	O
demonstrated	O
the	O
power	O
of	O
deep	B-Algorithm
learning	I-Algorithm
with	O
many	O
nonlinear	O
layers	O
,	O
by	O
winning	O
three	O
ICDAR	O
2009	O
competitions	O
in	O
connected	O
handwriting	B-Application
recognition	I-Application
,	O
without	O
any	O
prior	O
knowledge	O
about	O
the	O
three	O
different	O
languages	O
to	O
be	O
learned	O
.	O
</s>
<s>
Hardware	O
advances	O
have	O
meant	O
that	O
from	O
1991	O
to	O
2015	O
,	O
computer	O
power	O
(	O
especially	O
as	O
delivered	O
by	O
GPUs	B-Architecture
)	O
has	O
increased	O
around	O
a	O
million-fold	O
,	O
making	O
standard	O
backpropagation	B-Algorithm
feasible	O
for	O
networks	O
several	O
layers	O
deeper	O
than	O
when	O
the	O
vanishing	B-Algorithm
gradient	I-Algorithm
problem	I-Algorithm
was	O
recognized	O
.	O
</s>
<s>
Schmidhuber	O
notes	O
that	O
this	O
"	O
is	O
basically	O
what	O
is	O
winning	O
many	O
of	O
the	O
image	O
recognition	O
competitions	O
now	O
"	O
,	O
but	O
that	O
it	O
"	O
does	O
not	O
really	O
overcome	O
the	O
problem	O
in	O
a	O
fundamental	O
way	O
"	O
since	O
the	O
original	O
models	O
tackling	O
the	O
vanishing	B-Algorithm
gradient	I-Algorithm
problem	I-Algorithm
by	O
Hinton	O
and	O
others	O
were	O
trained	O
in	O
a	O
Xeon	B-Device
processor	I-Device
,	O
not	O
GPUs	B-Architecture
.	O
</s>
<s>
One	O
of	O
the	O
newest	O
and	O
most	O
effective	O
ways	O
to	O
resolve	O
the	O
vanishing	B-Algorithm
gradient	I-Algorithm
problem	I-Algorithm
is	O
with	O
residual	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
,	O
or	O
ResNets	B-Algorithm
(	O
not	O
to	O
be	O
confused	O
with	O
recurrent	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
)	O
.	O
</s>
<s>
ResNets	B-Algorithm
refer	O
to	O
neural	B-Architecture
networks	I-Architecture
where	O
skip	B-Algorithm
connections	I-Algorithm
or	O
residual	O
connections	O
are	O
part	O
of	O
the	O
network	O
architecture	O
.	O
</s>
<s>
These	O
skip	B-Algorithm
connections	I-Algorithm
allow	O
gradient	O
information	O
to	O
pass	O
through	O
the	O
layers	O
,	O
by	O
creating	O
"	O
highways	O
"	O
of	O
information	O
,	O
where	O
the	O
output	O
of	O
a	O
previous	O
layer/activation	O
is	O
added	O
to	O
the	O
output	O
of	O
a	O
deeper	O
layer	O
.	O
</s>
<s>
Skip	B-Algorithm
connections	I-Algorithm
are	O
a	O
critical	O
component	O
of	O
what	O
allowed	O
successful	O
training	O
of	O
deeper	O
neural	B-Architecture
networks	I-Architecture
.	O
</s>
<s>
ResNets	B-Algorithm
yielded	O
lower	O
training	O
error	O
(	O
and	O
test	O
error	O
)	O
than	O
their	O
shallower	O
counterparts	O
simply	O
by	O
reintroducing	O
outputs	O
from	O
shallower	O
layers	O
in	O
the	O
network	O
to	O
compensate	O
for	O
the	O
vanishing	O
data	O
.	O
</s>
<s>
Note	O
that	O
ResNets	B-Algorithm
are	O
an	O
ensemble	O
of	O
relatively	O
shallow	O
nets	O
and	O
do	O
not	O
resolve	O
the	O
vanishing	B-Algorithm
gradient	I-Algorithm
problem	I-Algorithm
by	O
preserving	O
gradient	O
flow	O
throughout	O
the	O
entire	O
depth	O
of	O
the	O
network	O
–	O
rather	O
,	O
they	O
avoid	O
the	O
problem	O
simply	O
by	O
constructing	O
ensembles	O
of	O
many	O
short	O
networks	O
together	O
.	O
</s>
<s>
Rectifiers	B-Algorithm
such	O
as	O
ReLU	B-Algorithm
suffer	O
less	O
from	O
the	O
vanishing	B-Algorithm
gradient	I-Algorithm
problem	I-Algorithm
,	O
because	O
they	O
only	O
saturate	O
in	O
one	O
direction	O
.	O
</s>
<s>
Weight	O
initialization	O
is	O
another	O
approach	O
that	O
has	O
been	O
proposed	O
to	O
reduce	O
the	O
vanishing	B-Algorithm
gradient	I-Algorithm
problem	I-Algorithm
in	O
deep	O
networks	O
.	O
</s>
<s>
Kumar	O
suggested	O
that	O
the	O
distribution	O
of	O
initial	O
weights	O
should	O
vary	O
according	O
to	O
activation	B-Algorithm
function	I-Algorithm
used	O
and	O
proposed	O
to	O
initialize	O
the	O
weights	O
in	O
networks	O
with	O
the	O
logistic	O
activation	B-Algorithm
function	I-Algorithm
using	O
a	O
Gaussian	O
distribution	O
with	O
a	O
zero	O
mean	O
and	O
a	O
standard	O
deviation	O
of	O
3.6/sqrt	O
( N	O
)	O
,	O
where	O
N	O
is	O
the	O
number	O
of	O
neurons	O
in	O
a	O
layer	O
.	O
</s>
<s>
Recently	O
,	O
Yilmaz	O
and	O
Poli	O
performed	O
a	O
theoretical	O
analysis	O
on	O
how	O
gradients	O
are	O
affected	O
by	O
the	O
mean	O
of	O
the	O
initial	O
weights	O
in	O
deep	O
neural	B-Architecture
networks	I-Architecture
using	O
the	O
logistic	O
activation	B-Algorithm
function	I-Algorithm
and	O
found	O
that	O
gradients	O
do	O
not	O
vanish	O
if	O
the	O
mean	O
of	O
the	O
initial	O
weights	O
is	O
set	O
according	O
to	O
the	O
formula	O
:	O
max( −1	O
,	O
-8/N	O
)	O
.	O
</s>
<s>
This	O
simple	O
strategy	O
allows	O
networks	O
with	O
10	O
or	O
15	O
hidden	O
layers	O
to	O
be	O
trained	O
very	O
efficiently	O
and	O
effectively	O
using	O
the	O
standard	O
backpropagation	B-Algorithm
.	O
</s>
<s>
Behnke	O
relied	O
only	O
on	O
the	O
sign	O
of	O
the	O
gradient	O
(	O
Rprop	B-Algorithm
)	O
when	O
training	O
his	O
Neural	O
Abstraction	O
Pyramid	O
to	O
solve	O
problems	O
like	O
image	O
reconstruction	O
and	O
face	O
localization	O
.	O
</s>
<s>
Neural	B-Architecture
networks	I-Architecture
can	O
also	O
be	O
optimized	O
by	O
using	O
a	O
universal	O
search	O
algorithm	O
on	O
the	O
space	O
of	O
neural	B-Architecture
network	I-Architecture
's	O
weights	O
,	O
e.g.	O
,	O
random	O
guess	O
or	O
more	O
systematically	O
genetic	B-Algorithm
algorithm	I-Algorithm
.	O
</s>
<s>
This	O
approach	O
is	O
not	O
based	O
on	O
gradient	O
and	O
avoids	O
the	O
vanishing	B-Algorithm
gradient	I-Algorithm
problem	I-Algorithm
.	O
</s>
