<s>
In	O
(	O
unconstrained	O
)	O
mathematical	O
optimization	O
,	O
a	O
backtracking	B-Algorithm
line	I-Algorithm
search	I-Algorithm
is	O
a	O
line	B-Algorithm
search	I-Algorithm
method	I-Algorithm
to	O
determine	O
the	O
amount	O
to	O
move	O
along	O
a	O
given	O
search	O
direction	O
.	O
</s>
<s>
The	O
method	O
involves	O
starting	O
with	O
a	O
relatively	O
large	O
estimate	O
of	O
the	O
step	B-General_Concept
size	I-General_Concept
for	O
movement	O
along	O
the	O
line	B-Algorithm
search	I-Algorithm
direction	O
,	O
and	O
iteratively	O
shrinking	O
the	O
step	B-General_Concept
size	I-General_Concept
(	O
i.e.	O
,	O
"	O
backtracking	O
"	O
)	O
until	O
a	O
decrease	O
of	O
the	O
objective	O
function	O
is	O
observed	O
that	O
adequately	O
corresponds	O
to	O
the	O
amount	O
of	O
decrease	O
that	O
is	O
expected	O
,	O
based	O
on	O
the	O
step	B-General_Concept
size	I-General_Concept
and	O
the	O
local	O
gradient	O
of	O
the	O
objective	O
function	O
.	O
</s>
<s>
A	O
common	O
stopping	O
criterion	O
is	O
the	O
Armijo	B-Algorithm
–	I-Algorithm
Goldstein	I-Algorithm
condition	I-Algorithm
.	O
</s>
<s>
Backtracking	B-Algorithm
line	I-Algorithm
search	I-Algorithm
is	O
typically	O
used	O
for	O
gradient	B-Algorithm
descent	I-Algorithm
(	O
GD	O
)	O
,	O
but	O
it	O
can	O
also	O
be	O
used	O
in	O
other	O
contexts	O
.	O
</s>
<s>
For	O
example	O
,	O
it	O
can	O
be	O
used	O
with	O
Newton	B-Algorithm
's	I-Algorithm
method	I-Algorithm
if	O
the	O
Hessian	O
matrix	O
is	O
positive	B-Algorithm
definite	I-Algorithm
.	O
</s>
<s>
Given	O
a	O
starting	O
position	O
and	O
a	O
search	O
direction	O
,	O
the	O
task	O
of	O
a	O
line	B-Algorithm
search	I-Algorithm
is	O
to	O
determine	O
a	O
step	B-General_Concept
size	I-General_Concept
that	O
adequately	O
reduces	O
the	O
objective	O
function	O
(	O
assumed	O
smooth	O
)	O
.	O
</s>
<s>
Once	O
an	O
improved	O
starting	O
point	O
has	O
been	O
identified	O
by	O
the	O
line	B-Algorithm
search	I-Algorithm
,	O
another	O
subsequent	O
line	B-Algorithm
search	I-Algorithm
will	O
ordinarily	O
be	O
performed	O
in	O
a	O
new	O
direction	O
.	O
</s>
<s>
The	O
backtracking	B-Algorithm
line	I-Algorithm
search	I-Algorithm
starts	O
with	O
a	O
large	O
estimate	O
of	O
and	O
iteratively	O
shrinks	O
it	O
.	O
</s>
<s>
This	O
condition	O
,	O
when	O
used	O
appropriately	O
as	O
part	O
of	O
a	O
line	B-Algorithm
search	I-Algorithm
,	O
can	O
ensure	O
that	O
the	O
step	B-General_Concept
size	I-General_Concept
is	O
not	O
excessively	O
large	O
.	O
</s>
<s>
However	O
,	O
this	O
condition	O
is	O
not	O
sufficient	O
on	O
its	O
own	O
to	O
ensure	O
that	O
the	O
step	B-General_Concept
size	I-General_Concept
is	O
nearly	O
optimal	O
,	O
since	O
any	O
value	O
of	O
that	O
is	O
sufficiently	O
small	O
will	O
satisfy	O
the	O
condition	O
.	O
</s>
<s>
Thus	O
,	O
the	O
backtracking	B-Algorithm
line	I-Algorithm
search	I-Algorithm
strategy	O
starts	O
with	O
a	O
relatively	O
large	O
step	B-General_Concept
size	I-General_Concept
,	O
and	O
repeatedly	O
shrinks	O
it	O
by	O
a	O
factor	O
until	O
the	O
Armijo	B-Algorithm
–	I-Algorithm
Goldstein	I-Algorithm
condition	I-Algorithm
is	O
fulfilled	O
.	O
</s>
<s>
Starting	O
with	O
a	O
maximum	O
candidate	O
step	B-General_Concept
size	I-General_Concept
value	O
,	O
using	O
search	O
control	O
parameters	O
and	O
,	O
the	O
backtracking	B-Algorithm
line	I-Algorithm
search	I-Algorithm
algorithm	O
can	O
be	O
expressed	O
as	O
follows	O
:	O
</s>
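As a concrete illustration of the algorithm just described, here is a minimal Python sketch of backtracking line search. The names `alpha0`, `c` and `tau` stand in for the maximum candidate step size and the two search control parameters, which are left unnamed in the text above; they are illustrative assumptions, not notation from the article.

```python
import numpy as np

def backtracking_line_search(f, grad_f, x, p, alpha0=1.0, c=0.5, tau=0.5):
    """Shrink the step size alpha by a factor tau until the
    Armijo-Goldstein condition is fulfilled.

    f: objective function, grad_f: its gradient,
    x: current point, p: a descent direction.
    c and tau in (0, 1) are the search control parameters.
    """
    alpha = alpha0
    fx = f(x)
    # m is the local slope of f along p; negative for a descent direction
    m = np.dot(grad_f(x), p)
    # Armijo-Goldstein condition: f(x + alpha*p) <= f(x) + c*alpha*m
    while f(x + alpha * p) > fx + c * alpha * m:
        alpha *= tau  # backtrack: reduce alpha by the factor tau
    return alpha
```

For gradient descent, `p` would be chosen as the negative gradient at `x`.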
<s>
In	O
other	O
words	O
,	O
reduce	O
by	O
a	O
factor	O
of	O
in	O
each	O
iteration	O
until	O
the	O
Armijo	B-Algorithm
–	I-Algorithm
Goldstein	I-Algorithm
condition	I-Algorithm
is	O
fulfilled	O
.	O
</s>
<s>
In	O
practice	O
,	O
the	O
above	O
algorithm	O
is	O
typically	O
iterated	O
to	O
produce	O
a	O
sequence	O
,	O
,	O
to	O
converge	B-Algorithm
to	O
a	O
minimum	O
,	O
provided	O
such	O
a	O
minimum	O
exists	O
and	O
is	O
selected	O
appropriately	O
in	O
each	O
step	O
.	O
</s>
<s>
For	O
gradient	B-Algorithm
descent	I-Algorithm
,	O
is	O
selected	O
as	O
.	O
</s>
<s>
The	O
value	O
of	O
for	O
the	O
that	O
fulfills	O
the	O
Armijo	B-Algorithm
–	I-Algorithm
Goldstein	I-Algorithm
condition	I-Algorithm
depends	O
on	O
and	O
,	O
and	O
is	O
thus	O
denoted	O
below	O
by	O
.	O
</s>
<s>
This	O
addresses	O
the	O
question	O
of	O
whether	O
there	O
is	O
a	O
systematic	O
way	O
to	O
find	O
a	O
positive	O
number	O
-	O
depending	O
on	O
the	O
function	O
f	O
,	O
the	O
point	O
and	O
the	O
descent	O
direction	O
-	O
so	O
that	O
all	O
learning	B-General_Concept
rates	I-General_Concept
satisfy	O
Armijo	O
's	O
condition	O
.	O
</s>
<s>
In	O
the	O
same	O
situation	O
where	O
,	O
an	O
interesting	O
question	O
is	O
how	O
large	O
learning	B-General_Concept
rates	I-General_Concept
can	O
be	O
chosen	O
in	O
Armijo	O
's	O
condition	O
(	O
that	O
is	O
,	O
when	O
one	O
has	O
no	O
limit	B-Algorithm
on	O
as	O
defined	O
in	O
the	O
section	O
"	O
Function	O
minimization	O
using	O
backtracking	B-Algorithm
line	I-Algorithm
search	I-Algorithm
in	O
practice	O
"	O
)	O
,	O
since	O
larger	O
learning	B-General_Concept
rates	I-General_Concept
when	O
is	O
closer	O
to	O
the	O
limit	B-Algorithm
point	O
(	O
if	O
it	O
exists	O
)	O
can	O
make	O
convergence	B-Algorithm
faster	O
.	O
</s>
<s>
An	O
upper	O
bound	O
for	O
learning	B-General_Concept
rates	I-General_Concept
is	O
shown	O
to	O
exist	O
if	O
one	O
wants	O
the	O
constructed	O
sequence	O
to	O
converge	B-Algorithm
to	O
a	O
non-degenerate	O
critical	O
point	O
,	O
see	O
:	O
The	O
learning	B-General_Concept
rates	I-General_Concept
must	O
be	O
bounded	O
from	O
above	O
roughly	O
by	O
.	O
</s>
<s>
Here	O
H	O
is	O
the	O
Hessian	O
of	O
the	O
function	O
at	O
the	O
limit	B-Algorithm
point	O
,	O
is	O
its	O
inverse	O
,	O
and	O
is	O
the	O
norm	O
of	O
a	O
linear	O
operator	O
.	O
</s>
<s>
Thus	O
,	O
this	O
result	O
applies	O
for	O
example	O
when	O
one	O
uses	O
Backtracking	B-Algorithm
line	I-Algorithm
search	I-Algorithm
for	O
Morse	O
functions	O
.	O
</s>
<s>
Note	O
that	O
in	O
dimension	O
1	O
,	O
is	O
a	O
number	O
and	O
hence	O
this	O
upper	O
bound	O
is	O
of	O
the	O
same	O
size	O
as	O
the	O
lower	O
bound	O
in	O
the	O
section	O
"	O
Lower	O
bound	O
for	O
learning	B-General_Concept
rates	I-General_Concept
"	O
.	O
</s>
<s>
On	O
the	O
other	O
hand	O
,	O
if	O
the	O
limit	B-Algorithm
point	O
is	O
degenerate	O
,	O
then	O
learning	B-General_Concept
rates	I-General_Concept
can	O
be	O
unbounded	O
.	O
</s>
<s>
For	O
example	O
,	O
a	O
modification	O
of	O
backtracking	B-Algorithm
line	I-Algorithm
search	I-Algorithm
known	O
as	O
unbounded	O
backtracking	O
gradient	B-Algorithm
descent	I-Algorithm
(	O
see	O
)	O
allows	O
the	O
learning	B-General_Concept
rate	I-General_Concept
to	O
be	O
half	O
the	O
size	O
,	O
where	O
is	O
a	O
constant	O
.	O
</s>
<s>
Experiments	O
with	O
simple	O
functions	O
such	O
as	O
show	O
that	O
unbounded	O
backtracking	O
gradient	B-Algorithm
descent	I-Algorithm
converges	B-Algorithm
much	O
faster	O
than	O
the	O
basic	O
version	O
described	O
in	O
the	O
section	O
"	O
Function	O
minimization	O
using	O
backtracking	B-Algorithm
line	I-Algorithm
search	I-Algorithm
in	O
practice	O
"	O
.	O
</s>
<s>
An	O
argument	O
against	O
the	O
use	O
of	O
Backtracking	B-Algorithm
line	I-Algorithm
search	I-Algorithm
,	O
in	O
particular	O
in	O
Large	O
scale	O
optimisation	O
,	O
is	O
that	O
satisfying	O
Armijo	O
's	O
condition	O
is	O
expensive	O
.	O
</s>
<s>
There	O
is	O
a	O
way	O
(	O
so-called	O
Two-way	O
Backtracking	O
)	O
to	O
get	O
around	O
this	O
,	O
which	O
has	O
good	O
theoretical	O
guarantees	O
and	O
has	O
been	O
tested	O
with	O
good	O
results	O
on	O
deep	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
,	O
see	O
.	O
</s>
<s>
One	O
observes	O
that	O
if	O
the	O
sequence	O
converges	B-Algorithm
(	O
as	O
wished	O
when	O
one	O
makes	O
use	O
of	O
an	O
iterative	O
optimisation	O
method	O
)	O
,	O
then	O
the	O
sequence	O
of	O
learning	B-General_Concept
rates	I-General_Concept
should	O
vary	O
little	O
when	O
n	O
is	O
large	O
enough	O
.	O
</s>
<s>
The	O
second	O
observation	O
is	O
that	O
could	O
be	O
larger	O
than	O
,	O
and	O
hence	O
one	O
should	O
allow	O
the	O
learning	B-General_Concept
rate	I-General_Concept
to	O
increase	O
(	O
and	O
not	O
just	O
decrease	O
as	O
in	O
the	O
section	O
Algorithm	O
)	O
.	O
</s>
<s>
(	O
Increase	O
learning	B-General_Concept
rate	I-General_Concept
if	O
Armijo	O
's	O
condition	O
is	O
satisfied	O
.	O
)	O
</s>
<s>
(	O
Otherwise	O
,	O
reduce	O
the	O
learning	B-General_Concept
rate	I-General_Concept
if	O
Armijo	O
's	O
condition	O
is	O
not	O
satisfied	O
.	O
)	O
</s>
<s>
Return	O
for	O
the	O
learning	B-General_Concept
rate	I-General_Concept
.	O
</s>
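The two-way variant described in the steps above (start from the previous iteration's learning rate; increase it while Armijo's condition holds, otherwise decrease it) can be sketched as follows. All names and defaults are illustrative assumptions, not taken from the cited paper.

```python
import numpy as np

def two_way_backtracking(f, grad_f, x, p, alpha_prev,
                         alpha_max=1.0, c=0.5, tau=0.5):
    """Two-way backtracking sketch: begin from the learning rate of the
    previous iteration instead of alpha_max. If Armijo's condition
    already holds, try to *increase* the rate (up to alpha_max);
    otherwise shrink it until the condition holds."""
    fx = f(x)
    m = np.dot(grad_f(x), p)
    armijo = lambda a: f(x + a * p) <= fx + c * a * m
    alpha = alpha_prev
    if armijo(alpha):
        # grow the rate while the condition keeps holding
        while alpha / tau <= alpha_max and armijo(alpha / tau):
            alpha /= tau
    else:
        # shrink the rate until the condition holds
        while not armijo(alpha):
            alpha *= tau
    return alpha
```

Because consecutive iterates change little near convergence, starting from `alpha_prev` usually needs far fewer condition checks than restarting from `alpha_max` each time.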
<s>
(	O
In	O
one	O
can	O
find	O
a	O
description	O
of	O
an	O
algorithm	O
with	O
1	O
)	O
,	O
3	O
)	O
and	O
4	O
)	O
above	O
,	O
which	O
was	O
not	O
tested	O
in	O
deep	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
before	O
the	O
cited	O
paper	O
.	O
)	O
</s>
<s>
One	O
can	O
save	O
further	O
time	O
by	O
using	O
a	O
hybrid	O
mixture	O
of	O
two-way	O
backtracking	O
and	O
the	O
basic	O
standard	O
gradient	B-Algorithm
descent	I-Algorithm
algorithm	O
.	O
</s>
<s>
Roughly	O
speaking	O
,	O
we	O
run	O
two-way	O
backtracking	O
a	O
few	O
times	O
,	O
then	O
use	O
the	O
learning	B-General_Concept
rate	I-General_Concept
so	O
obtained	O
,	O
unchanged	O
from	O
then	O
on	O
,	O
except	O
if	O
the	O
function	O
value	O
increases	O
.	O
</s>
<s>
(	O
So	O
,	O
in	O
this	O
case	O
,	O
use	O
the	O
learning	B-General_Concept
rate	I-General_Concept
unchanged	O
.	O
)	O
</s>
<s>
Indeed	O
,	O
so	O
far	O
backtracking	B-Algorithm
line	I-Algorithm
search	I-Algorithm
and	O
its	O
modifications	O
are	O
the	O
most	O
theoretically	O
guaranteed	O
methods	O
among	O
all	O
numerical	O
optimization	O
algorithms	O
concerning	O
convergence	B-Algorithm
to	O
critical	O
points	O
and	O
avoidance	O
of	O
saddle	O
points	O
,	O
see	O
below	O
.	O
</s>
<s>
In	O
the	O
setting	O
of	O
deep	B-Algorithm
learning	I-Algorithm
,	O
saddle	O
points	O
are	O
also	O
prevalent	O
,	O
see	O
.	O
</s>
<s>
Thus	O
,	O
to	O
apply	O
in	O
deep	B-Algorithm
learning	I-Algorithm
,	O
one	O
needs	O
results	O
for	O
non-convex	O
functions	O
.	O
</s>
<s>
For	O
convergence	B-Algorithm
to	O
critical	O
points	O
:	O
For	O
example	O
,	O
if	O
the	O
cost	O
function	O
is	O
a	O
real	B-Language
analytic	I-Language
function	I-Language
,	O
then	O
it	O
is	O
shown	O
in	O
that	O
convergence	B-Algorithm
is	O
guaranteed	O
.	O
</s>
<s>
The	O
main	O
idea	O
is	O
to	O
use	O
the	O
Łojasiewicz	O
inequality	O
,	O
which	O
is	O
satisfied	O
by	O
any	O
real	B-Language
analytic	I-Language
function	I-Language
.	O
</s>
<s>
For	O
non-smooth	O
functions	O
satisfying	O
Łojasiewicz	O
inequality	O
,	O
the	O
above	O
convergence	B-Algorithm
guarantee	O
is	O
extended	O
,	O
see	O
.	O
</s>
<s>
In	O
,	O
there	O
is	O
a	O
proof	O
that	O
for	O
every	O
sequence	O
constructed	O
by	O
backtracking	B-Algorithm
line	I-Algorithm
search	I-Algorithm
,	O
a	O
cluster	O
point	O
(	O
i.e.	O
the	O
limit	B-Algorithm
of	O
one	O
subsequence	O
,	O
if	O
the	O
subsequence	O
converges	B-Algorithm
)	O
is	O
a	O
critical	O
point	O
.	O
</s>
<s>
For	O
the	O
case	O
of	O
a	O
function	O
with	O
at	O
most	O
countably	O
many	O
critical	O
points	O
(	O
such	O
as	O
a	O
Morse	O
function	O
)	O
and	O
compact	O
sublevels	B-Algorithm
,	O
as	O
well	O
as	O
with	O
Lipschitz	O
continuous	O
gradient	O
where	O
one	O
uses	O
standard	O
GD	O
with	O
learning	B-General_Concept
rate	I-General_Concept
<	O
1/L	O
(	O
see	O
the	O
section	O
"	O
Stochastic	B-Algorithm
gradient	I-Algorithm
descent	I-Algorithm
"	O
)	O
,	O
convergence	B-Algorithm
is	O
guaranteed	O
,	O
see	O
for	O
example	O
Chapter	O
12	O
in	O
.	O
</s>
<s>
Here	O
the	O
assumption	O
about	O
compact	O
sublevels	B-Algorithm
is	O
to	O
make	O
sure	O
that	O
one	O
deals	O
with	O
compact	O
sets	O
of	O
the	O
Euclidean	O
space	O
only	O
.	O
</s>
<s>
In	O
the	O
general	O
case	O
,	O
where	O
is	O
only	O
assumed	O
to	O
be	O
and	O
have	O
at	O
most	O
countably	O
many	O
critical	O
points	O
,	O
convergence	B-Algorithm
is	O
guaranteed	O
,	O
see	O
.	O
</s>
<s>
In	O
the	O
same	O
reference	O
,	O
convergence	B-Algorithm
is	O
similarly	O
guaranteed	O
for	O
other	O
modifications	O
of	O
Backtracking	B-Algorithm
line	I-Algorithm
search	I-Algorithm
(	O
such	O
as	O
Unbounded	O
backtracking	O
gradient	B-Algorithm
descent	I-Algorithm
mentioned	O
in	O
the	O
section	O
"	O
Upper	O
bound	O
for	O
learning	B-General_Concept
rates	I-General_Concept
"	O
)	O
,	O
and	O
even	O
if	O
the	O
function	O
has	O
uncountably	O
many	O
critical	O
points	O
still	O
one	O
can	O
deduce	O
some	O
non-trivial	O
facts	O
about	O
convergence	B-Algorithm
behaviour	O
.	O
</s>
<s>
In	O
the	O
stochastic	O
setting	O
,	O
under	O
the	O
same	O
assumption	O
that	O
the	O
gradient	O
is	O
Lipschitz	O
continuous	O
and	O
one	O
uses	O
a	O
more	O
restrictive	O
version	O
(	O
requiring	O
in	O
addition	O
that	O
the	O
sum	O
of	O
learning	B-General_Concept
rates	I-General_Concept
is	O
infinite	O
and	O
the	O
sum	O
of	O
squares	O
of	O
learning	B-General_Concept
rates	I-General_Concept
is	O
finite	O
)	O
of	O
diminishing	O
learning	B-General_Concept
rate	I-General_Concept
scheme	O
(	O
see	O
section	O
"	O
Stochastic	B-Algorithm
gradient	I-Algorithm
descent	I-Algorithm
"	O
)	O
and	O
moreover	O
the	O
function	O
is	O
strictly	O
convex	O
,	O
then	O
the	O
convergence	B-Algorithm
is	O
established	O
in	O
the	O
well-known	O
result	O
,	O
see	O
for	O
generalisations	O
to	O
less	O
restrictive	O
versions	O
of	O
a	O
diminishing	O
learning	B-General_Concept
rate	I-General_Concept
scheme	O
.	O
</s>
<s>
For	O
avoidance	O
of	O
saddle	O
points	O
:	O
For	O
example	O
,	O
if	O
the	O
gradient	O
of	O
the	O
cost	O
function	O
is	O
Lipschitz	O
continuous	O
and	O
one	O
chooses	O
standard	O
GD	O
with	O
learning	B-General_Concept
rate	I-General_Concept
<	O
1/L	O
,	O
then	O
with	O
a	O
random	O
choice	O
of	O
initial	O
point	O
(	O
more	O
precisely	O
,	O
outside	O
a	O
set	O
of	O
Lebesgue	O
measure	O
zero	O
)	O
,	O
the	O
sequence	O
constructed	O
will	O
not	O
converge	B-Algorithm
to	O
a	O
non-degenerate	O
saddle	O
point	O
(	O
proven	O
in	O
)	O
,	O
and	O
more	O
generally	O
it	O
is	O
also	O
true	O
that	O
the	O
sequence	O
constructed	O
will	O
not	O
converge	B-Algorithm
to	O
a	O
degenerate	O
saddle	O
point	O
(	O
proven	O
in	O
)	O
.	O
</s>
<s>
Under	O
the	O
same	O
assumption	O
that	O
the	O
gradient	O
is	O
Lipschitz	O
continuous	O
and	O
one	O
uses	O
a	O
diminishing	O
learning	B-General_Concept
rate	I-General_Concept
scheme	O
(	O
see	O
the	O
section	O
"	O
Stochastic	B-Algorithm
gradient	I-Algorithm
descent	I-Algorithm
"	O
)	O
,	O
then	O
avoidance	O
of	O
saddle	O
points	O
is	O
established	O
in	O
.	O
</s>
<s>
While	O
it	O
is	O
trivial	O
to	O
mention	O
,	O
if	O
the	O
gradient	O
of	O
a	O
cost	O
function	O
is	O
Lipschitz	O
continuous	O
,	O
with	O
Lipschitz	O
constant	O
L	O
,	O
then	O
by	O
choosing	O
the	O
learning	B-General_Concept
rate	I-General_Concept
to	O
be	O
constant	O
and	O
of	O
the	O
size	O
,	O
one	O
has	O
a	O
special	O
case	O
of	O
backtracking	B-Algorithm
line	I-Algorithm
search	I-Algorithm
(	O
for	O
gradient	B-Algorithm
descent	I-Algorithm
)	O
.	O
</s>
<s>
This	O
scheme	O
,	O
however	O
,	O
requires	O
a	O
good	O
estimate	O
for	O
L	O
,	O
otherwise	O
if	O
the	O
learning	B-General_Concept
rate	I-General_Concept
is	O
too	O
big	O
(	O
relative	O
to	O
1/L	O
)	O
then	O
the	O
scheme	O
has	O
no	O
convergence	B-Algorithm
guarantee	O
.	O
</s>
<s>
Also	O
,	O
if	O
the	O
gradient	O
of	O
the	O
function	O
is	O
not	O
globally	O
Lipschitz	O
continuous	O
,	O
then	O
this	O
scheme	O
has	O
no	O
convergence	B-Algorithm
guarantee	O
.	O
</s>
<s>
For	O
example	O
,	O
this	O
is	O
similar	O
to	O
an	O
exercise	O
in	O
,	O
for	O
the	O
cost	O
function	O
and	O
for	O
whatever	O
constant	O
learning	B-General_Concept
rate	I-General_Concept
one	O
chooses	O
,	O
with	O
a	O
random	O
initial	O
point	O
the	O
sequence	O
constructed	O
by	O
this	O
special	O
scheme	O
does	O
not	O
converge	B-Algorithm
to	O
the	O
global	O
minimum	O
0	O
.	O
</s>
<s>
If	O
one	O
does	O
not	O
care	O
about	O
the	O
condition	O
that	O
learning	B-General_Concept
rate	I-General_Concept
must	O
be	O
bounded	O
by	O
1/L	O
,	O
then	O
this	O
special	O
scheme	O
has	O
been	O
in	O
use	O
much	O
longer	O
,	O
at	O
least	O
since	O
1847	O
by	O
Cauchy	B-Algorithm
,	O
which	O
can	O
be	O
called	O
standard	O
GD	O
(	O
not	O
to	O
be	O
confused	O
with	O
stochastic	B-Algorithm
gradient	I-Algorithm
descent	I-Algorithm
,	O
which	O
is	O
abbreviated	O
herein	O
as	O
SGD	O
)	O
.	O
</s>
<s>
In	O
the	O
stochastic	O
setting	O
(	O
such	O
as	O
in	O
the	O
mini-batch	O
setting	O
in	O
deep	B-Algorithm
learning	I-Algorithm
)	O
,	O
standard	O
GD	O
is	O
called	O
stochastic	B-Algorithm
gradient	I-Algorithm
descent	I-Algorithm
,	O
or	O
SGD	O
.	O
</s>
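A minimal sketch of SGD in the mini-batch setting just mentioned, under the assumption that `grad_f_batch(x, batch)` returns an unbiased mini-batch estimate of the full gradient (all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd(grad_f_batch, x0, lr, n_steps, batch_size, data):
    """Minimal SGD sketch: at each step, estimate the gradient from a
    random mini-batch and take a step of constant size lr."""
    x = x0
    for _ in range(n_steps):
        # sample a mini-batch without replacement
        batch = data[rng.choice(len(data), size=batch_size, replace=False)]
        x = x - lr * grad_f_batch(x, batch)
    return x
```

With a constant `lr` the iterates fluctuate around a minimum rather than converge exactly, which is the motivation for the diminishing learning-rate schemes discussed below.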
<s>
Even	O
if	O
the	O
cost	O
function	O
has	O
globally	O
continuous	O
gradient	O
,	O
a	O
good	O
estimate	O
of	O
the	O
Lipschitz	O
constant	O
for	O
the	O
cost	O
functions	O
in	O
deep	B-Algorithm
learning	I-Algorithm
may	O
not	O
be	O
feasible	O
or	O
desirable	O
,	O
given	O
the	O
very	O
high	O
dimensions	O
of	O
deep	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
.	O
</s>
<s>
Hence	O
,	O
there	O
is	O
a	O
technique	O
of	O
fine-tuning	O
learning	B-General_Concept
rates	I-General_Concept
in	O
applying	O
standard	O
GD	O
or	O
SGD	O
.	O
</s>
<s>
One	O
way	O
is	O
to	O
choose	O
many	O
learning	B-General_Concept
rates	I-General_Concept
from	O
a	O
grid	O
search	O
,	O
with	O
the	O
hope	O
that	O
some	O
of	O
the	O
learning	B-General_Concept
rates	I-General_Concept
can	O
give	O
good	O
results	O
.	O
</s>
<s>
Another	O
way	O
is	O
the	O
so-called	O
adaptive	O
standard	O
GD	O
or	O
SGD	O
,	O
some	O
representatives	O
are	O
Adam	O
,	O
Adadelta	O
,	O
RMSProp	O
and	O
so	O
on	O
,	O
see	O
the	O
article	O
on	O
Stochastic	B-Algorithm
gradient	I-Algorithm
descent	I-Algorithm
.	O
</s>
<s>
In	O
adaptive	O
standard	O
GD	O
or	O
SGD	O
,	O
learning	B-General_Concept
rates	I-General_Concept
are	O
allowed	O
to	O
vary	O
at	O
each	O
iterate	O
step	O
n	O
,	O
but	O
in	O
a	O
different	O
manner	O
from	O
Backtracking	B-Algorithm
line	I-Algorithm
search	I-Algorithm
for	O
gradient	B-Algorithm
descent	I-Algorithm
.	O
</s>
<s>
At	O
first	O
sight	O
,	O
it	O
would	O
be	O
more	O
expensive	O
to	O
use	O
Backtracking	B-Algorithm
line	I-Algorithm
search	I-Algorithm
for	O
gradient	B-Algorithm
descent	I-Algorithm
,	O
since	O
one	O
needs	O
to	O
do	O
a	O
loop	O
search	O
until	O
Armijo	O
's	O
condition	O
is	O
satisfied	O
,	O
while	O
for	O
adaptive	O
standard	O
GD	O
or	O
SGD	O
no	O
loop	O
search	O
is	O
needed	O
.	O
</s>
<s>
Most	O
of	O
these	O
adaptive	O
standard	O
GD	O
or	O
SGD	O
do	O
not	O
have	O
the	O
descent	O
property	O
,	O
for	O
all	O
n	O
,	O
as	O
Backtracking	B-Algorithm
line	I-Algorithm
search	I-Algorithm
for	O
gradient	B-Algorithm
descent	I-Algorithm
.	O
</s>
<s>
Only	O
a	O
few	O
have	O
this	O
property	O
together	O
with	O
good	O
theoretical	O
guarantees	O
,	O
but	O
they	O
turn	O
out	O
to	O
be	O
special	O
cases	O
of	O
Backtracking	B-Algorithm
line	I-Algorithm
search	I-Algorithm
or	O
more	O
generally	O
Armijo	O
's	O
condition	O
.	O
</s>
<s>
The	O
first	O
one	O
is	O
when	O
one	O
chooses	O
learning	B-General_Concept
rate	I-General_Concept
to	O
be	O
a	O
constant	O
<	O
1/L	O
,	O
as	O
mentioned	O
above	O
,	O
if	O
one	O
can	O
have	O
a	O
good	O
estimate	O
of	O
L	O
.	O
</s>
<s>
The	O
second	O
is	O
the	O
so	O
called	O
diminishing	O
learning	B-General_Concept
rate	I-General_Concept
,	O
used	O
in	O
the	O
well-known	O
paper	O
by	O
,	O
if	O
again	O
the	O
function	O
has	O
globally	O
Lipschitz	O
continuous	O
gradient	O
(	O
but	O
the	O
Lipschitz	O
constant	O
may	O
be	O
unknown	O
)	O
and	O
the	O
learning	B-General_Concept
rates	I-General_Concept
converge	B-Algorithm
to	O
0	O
.	O
</s>
<s>
This	O
section	O
describes	O
some	O
main	O
points	O
to	O
be	O
noted	O
in	O
the	O
more	O
theoretical	O
setting	O
of	O
stochastic	O
optimization	O
and	O
the	O
more	O
realistic	O
setting	O
of	O
mini-batch	O
in	O
deep	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
.	O
</s>
<s>
In	O
general	O
,	O
it	O
is	O
very	O
difficult	O
to	O
compute	O
the	O
expectation	O
,	O
in	O
particular	O
when	O
working	O
for	O
example	O
with	O
deep	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
(	O
see	O
the	O
next	O
part	O
)	O
.	O
</s>
<s>
We	O
need	O
to	O
modify	O
Armijo	O
's	O
backtracking	B-Algorithm
line	I-Algorithm
search	I-Algorithm
as	O
follows	O
:	O
Then	O
the	O
descent	O
direction	O
is	O
chosen	O
as	O
and	O
Armijo	O
's	O
learning	B-General_Concept
rate	I-General_Concept
is	O
chosen	O
relative	O
to	O
the	O
function	O
.	O
</s>
<s>
However	O
,	O
it	O
will	O
be	O
explained	O
in	O
the	O
section	O
about	O
implementing	O
in	O
deep	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
that	O
there	O
are	O
difficulties	O
and	O
not	O
very	O
good	O
performance	O
with	O
implementing	O
this	O
scheme	O
.	O
</s>
<s>
Then	O
,	O
one	O
can	O
check	O
with	O
Python	O
that	O
if	O
we	O
choose	O
and	O
for	O
all	O
step	O
n	O
,	O
and	O
run	O
Armijo	O
's	O
Backtracking	B-Algorithm
line	I-Algorithm
search	I-Algorithm
with	O
maximum	O
learning	B-General_Concept
rate	I-General_Concept
,	O
in	O
this	O
setting	O
as	O
described	O
above	O
,	O
then	O
we	O
have	O
divergence	O
(	O
not	O
convergence	B-Algorithm
as	O
one	O
would	O
hope	O
for	O
this	O
too	O
simple	O
convex	O
function	O
)	O
!	O
</s>
<s>
[	O
What	O
happens	O
here	O
is	O
that	O
,	O
while	O
small	O
,	O
there	O
is	O
a	O
non-zero	O
probability	O
that	O
the	O
random	O
value	O
of	O
is	O
,	O
in	O
which	O
case	O
what	O
Armijo	O
's	O
backtracking	B-Algorithm
line	I-Algorithm
search	I-Algorithm
will	O
do	O
is	O
to	O
push	O
away	O
from	O
the	O
point	O
0	O
by	O
a	O
large	O
amount	O
.	O
]	O
</s>
<s>
Another	O
way	O
is	O
,	O
originating	O
from	O
the	O
classical	O
work	O
,	O
in	O
SGD	O
to	O
keep	O
constant	O
(	O
one	O
can	O
even	O
choose	O
to	O
be	O
always	O
1	O
)	O
,	O
but	O
let	O
the	O
learning	B-General_Concept
rates	I-General_Concept
go	O
to	O
0	O
.	O
</s>
<s>
[	O
Even	O
if	O
the	O
random	O
value	O
of	O
is	O
,	O
since	O
the	O
learning	B-General_Concept
rate	I-General_Concept
is	O
smaller	O
and	O
smaller	O
,	O
this	O
scheme	O
will	O
push	O
away	O
from	O
0	O
less	O
than	O
backtracking	O
.	O
]	O
</s>
<s>
While	O
this	O
scheme	O
works	O
well	O
also	O
for	O
this	O
example	O
,	O
again	O
in	O
the	O
setting	O
of	O
deep	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
the	O
experimental	O
results	O
are	O
not	O
very	O
good	O
.	O
</s>
<s>
It	O
is	O
also	O
worth	O
noting	O
that	O
if	O
one	O
uses	O
SGD	O
and	O
keeps	O
constant	O
,	O
but	O
also	O
learning	B-General_Concept
rates	I-General_Concept
constant	O
,	O
then	O
one	O
can	O
observe	O
divergence	O
even	O
for	O
some	O
values	O
of	O
the	O
learning	B-General_Concept
rates	I-General_Concept
for	O
which	O
there	O
is	O
convergence	B-Algorithm
guarantee	O
for	O
.	O
</s>
<s>
Hence	O
,	O
by	O
classical	O
theory	O
,	O
for	O
learning	B-General_Concept
rate	I-General_Concept
,	O
SGD	O
with	O
constant	O
learning	B-General_Concept
rate	I-General_Concept
will	O
converge	B-Algorithm
for	O
F(x)	O
.	O
</s>
<s>
However	O
,	O
one	O
can	O
check	O
with	O
Python	O
that	O
for	O
the	O
associated	O
stochastic	O
optimization	O
problem	O
,	O
even	O
with	O
the	O
choice	O
of	O
a	O
smaller	O
learning	B-General_Concept
rate	I-General_Concept
0.1	O
,	O
one	O
observes	O
divergence	O
to	O
infinity	O
!	O
</s>
<s>
On	O
the	O
other	O
hand	O
,	O
with	O
the	O
learning	B-General_Concept
rate	I-General_Concept
0.05	O
then	O
one	O
observes	O
convergence	B-Algorithm
to	O
the	O
minimum	O
0	O
.	O
</s>
<s>
Therefore	O
,	O
it	O
seems	O
that	O
even	O
for	O
this	O
simple	O
function	O
,	O
the	O
choice	O
of	O
(	O
constant	O
)	O
learning	B-General_Concept
rate	I-General_Concept
must	O
take	O
into	O
account	O
the	O
distribution	O
,	O
and	O
not	O
just	O
that	O
from	O
the	O
deterministic	O
function	O
F(x)	O
,	O
in	O
order	O
to	O
obtain	O
convergence	B-Algorithm
for	O
the	O
stochastic	O
optimization	O
problem	O
.	O
</s>
<s>
[	O
Note	O
:	O
if	O
here	O
one	O
runs	O
Armijo	O
's	O
Backtracking	B-Algorithm
line	I-Algorithm
search	I-Algorithm
with	O
maximum	O
learning	B-General_Concept
rate	I-General_Concept
,	O
then	O
one	O
observes	O
convergence	B-Algorithm
!	O
</s>
<s>
Hence	O
,	O
still	O
Armijo	O
's	O
Backtracking	B-Algorithm
line	I-Algorithm
search	I-Algorithm
helps	O
with	O
choosing	O
a	O
good	O
learning	B-General_Concept
rate	I-General_Concept
to	O
use	O
,	O
compared	O
to	O
SGD	O
with	O
constant	O
learning	O
rates	O
.	O
]	O
</s>
<s>
Then	O
both	O
of	O
the	O
following	O
schemes	O
guarantee	O
convergence	B-Algorithm
:	O
Scheme	O
1	O
:	O
Backtracking	O
with	O
increasing	O
"	O
mini-batch	O
"	O
sizes	O
.	O
</s>
<s>
Scheme	O
2	O
:	O
SGD	O
with	O
learning	B-General_Concept
rates	I-General_Concept
decreasing	O
to	O
0	O
.	O
</s>
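Scheme 2 can be sketched with the classical choice of learning rate c/n at step n, which makes the sum of the learning rates infinite while the sum of their squares is finite, as the convergence results above require. Here `grad_sample` is an assumed noisy, unbiased gradient oracle; everything else is illustrative.

```python
import numpy as np

def sgd_diminishing(grad_sample, x0, n_steps, c=1.0, rng=None):
    """SGD with the diminishing learning-rate scheme lr_n = c / n:
    sum(lr_n) diverges while sum(lr_n**2) converges.
    grad_sample(x, rng) returns a noisy unbiased gradient estimate."""
    rng = rng or np.random.default_rng(0)
    x = x0
    for n in range(1, n_steps + 1):
        x = x - (c / n) * grad_sample(x, rng)
    return x
```

Unlike the constant-rate SGD discussed earlier, the shrinking steps average out the gradient noise, so the iterates settle down instead of fluctuating indefinitely.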
<s>
The	O
reason	O
why	O
these	O
schemes	O
yield	O
not	O
very	O
good	O
experimental	O
results	O
in	O
deep	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
may	O
be	O
that	O
:	O
</s>
<s>
-	O
The	O
implementation	O
does	O
not	O
take	O
into	O
account	O
the	O
differences	O
between	O
stochastic	O
optimization	O
and	O
deep	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
(	O
see	O
more	O
in	O
the	O
next	O
section	O
)	O
;	O
</s>
<s>
-	O
The	O
assumptions	O
of	O
these	O
theoretical	O
results	O
are	O
not	O
satisfied	O
(	O
too	O
strong	O
to	O
be	O
satisfied	O
)	O
by	O
the	O
cost	O
functions	O
in	O
deep	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
.	O
</s>
<s>
It	O
is	O
worth	O
noticing	O
that	O
current	O
successful	O
implementations	O
of	O
backtracking	O
in	O
deep	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
use	O
constant	O
mini-batch	O
sizes	O
.	O
</s>
<s>
If	O
one	O
chooses	O
for	O
all	O
n	O
,	O
then	O
one	O
can	O
check	O
with	O
Python	O
that	O
Armijo	O
's	O
backtracking	B-Algorithm
line	I-Algorithm
search	I-Algorithm
with	O
maximum	O
learning	B-General_Concept
rate	I-General_Concept
in	O
this	O
stochastic	O
setting	O
converges	B-Algorithm
!	O
</s>
<s>
The	O
same	O
happens	O
with	O
deep	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
:	O
one	O
needs	O
to	O
choose	O
a	O
big	O
enough	O
mini-batch	O
size	O
depending	O
on	O
the	O
specific	O
question	O
and	O
setting	O
(	O
see	O
next	O
section	O
)	O
.	O
</s>
<s>
2	O
)	O
deep	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
:	O
</s>
<s>
While	O
stochastic	O
optimization	O
is	O
meant	O
to	O
be	O
a	O
theoretical	O
model	O
for	O
what	O
happens	O
in	O
training	O
deep	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
,	O
in	O
practice	O
there	O
are	O
important	O
differences	O
in	O
many	O
aspects	O
:	O
assumptions	O
,	O
resources	O
,	O
ways	O
to	O
proceed	O
and	O
goals	O
.	O
</s>
<s>
In	O
summary	O
,	O
backtracking	B-Algorithm
line	I-Algorithm
search	I-Algorithm
(	O
and	O
its	O
modifications	O
)	O
is	O
a	O
method	O
which	O
is	O
easy	O
to	O
implement	O
,	O
is	O
applicable	O
for	O
very	O
general	O
functions	O
,	O
has	O
very	O
good	O
theoretical	O
guarantees	O
(	O
for	O
both	O
convergence	B-Algorithm
to	O
critical	O
points	O
and	O
avoidance	O
of	O
saddle	O
points	O
)	O
and	O
works	O
well	O
in	O
practice	O
.	O
</s>
<s>
Several	O
other	O
methods	O
which	O
have	O
good	O
theoretical	O
guarantees	O
,	O
such	O
as	O
diminishing	O
learning	B-General_Concept
rates	I-General_Concept
or	O
standard	O
GD	O
with	O
learning	B-General_Concept
rate	I-General_Concept
<	O
1/L	O
(	O
both	O
require	O
the	O
gradient	O
of	O
the	O
objective	O
function	O
to	O
be	O
Lipschitz	O
continuous	O
)	O
,	O
turn	O
out	O
to	O
be	O
a	O
special	O
case	O
of	O
Backtracking	B-Algorithm
line	I-Algorithm
search	I-Algorithm
or	O
satisfy	O
Armijo	O
's	O
condition	O
.	O
</s>
