Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable or subdifferentiable). It can be regarded as a stochastic approximation of gradient descent optimization, since it replaces the actual gradient (calculated from the entire data set) by an estimate thereof (calculated from a randomly selected subset of the data). While the basic idea behind stochastic approximation can be traced back to the Robbins–Monro algorithm of the 1950s, stochastic gradient descent has become an important optimization method in machine learning.
Both statistical estimation and machine learning consider the problem of minimizing an objective function that has the form of a sum:

    Q(w) = (1/n) Σ_{i=1}^n Q_i(w),

where the parameter w that minimizes Q(w) is to be estimated. Each summand function Q_i is typically associated with the i-th observation in the data set (used for training). In classical statistics, sum-minimization problems arise in least squares and in maximum-likelihood estimation (for independent observations). The sum-minimization problem also arises for empirical risk minimization.
When used to minimize the above function, a standard (or "batch") gradient descent method would perform the following iterations:

    w := w − η ∇Q(w) = w − (η/n) Σ_{i=1}^n ∇Q_i(w),

where η is a step size (sometimes called the learning rate in machine learning).
In many cases, the summand functions have a simple form that enables inexpensive evaluations of the sum-function and the sum gradient. For example, in statistics, one-parameter exponential families allow economical function-evaluations and gradient-evaluations. However, in other cases, evaluating the sum-gradient may require expensive evaluations of the gradients from all summand functions.
When the training set is enormous and no simple formulas exist, evaluating the sums of gradients becomes very expensive, because evaluating the gradient requires evaluating all the summand functions' gradients. To economize on the computational cost at every iteration, stochastic gradient descent samples a subset of summand functions at every step.
In stochastic (or "on-line") gradient descent, the true gradient of Q(w) is approximated by a gradient at a single sample:

    w := w − η ∇Q_i(w).

As the algorithm sweeps through the training set, it performs the above update for each training sample. Typical implementations may use an adaptive learning rate so that the algorithm converges. In pseudocode, stochastic gradient descent can be presented as:

    Choose an initial vector of parameters w and a learning rate η.
    Repeat until an approximate minimum is obtained:
        Randomly shuffle the samples in the training set.
        For i = 1, 2, ..., n, do:
            w := w − η ∇Q_i(w).
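The pseudocode above can be sketched directly in Python. The quadratic objective below is a hypothetical stand-in for the summands Q_i (each Q_i(w) = (w − t_i)²/2, so the minimizer of the sum is the mean of the targets); the constants are illustrative.

```python
import random

def sgd(grad_i, w, n, eta=0.1, epochs=200, seed=0):
    """Plain SGD, mirroring the pseudocode: shuffle the sample indices
    each epoch, then take one gradient step per sample."""
    rng = random.Random(seed)
    indices = list(range(n))
    for _ in range(epochs):
        rng.shuffle(indices)              # randomly shuffle the training samples
        for i in indices:
            g = grad_i(w, i)              # gradient of the i-th summand Q_i at w
            w = [wj - eta * gj for wj, gj in zip(w, g)]
    return w

# Toy summands: Q_i(w) = (w - t_i)^2 / 2, whose sum is minimized at mean(t).
targets = [1.0, 2.0, 3.0, 4.0]
grad = lambda w, i: [w[0] - targets[i]]
w_star = sgd(grad, [0.0], n=len(targets))
```

With a constant step size the iterate oscillates around the minimizer (here the mean, 2.5) rather than converging exactly; a decreasing learning rate schedule removes this residual noise.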
A compromise between computing the true gradient and the gradient at a single sample is to compute the gradient against more than one training sample (called a "mini-batch") at each step. This can perform significantly better than "true" stochastic gradient descent as described, because the code can make use of vectorization libraries rather than computing each step separately, as was first shown in the work where it was called "the bunch-mode back-propagation algorithm". It may also result in smoother convergence, as the gradient computed at each step is averaged over more training samples.
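A minimal sketch of a mini-batch step for least squares follows. The data, batch size, and learning rate are illustrative; the per-sample gradients are averaged explicitly here for self-containedness, whereas in practice that average is computed in one vectorized matrix product (e.g. with NumPy), which is where the speed advantage comes from.

```python
import random

# Noiseless toy data: y = 1.5*x1 - 0.5*x2 on a grid of feature pairs.
data = [((x1, x2), 1.5 * x1 - 0.5 * x2)
        for x1 in [i / 5 for i in range(10)]
        for x2 in [j / 5 for j in range(10)]]

def batch_gradient(w, batch):
    """Average of the per-sample gradients of (w·x - y)^2 / 2 over a batch."""
    g = [0.0, 0.0]
    for (x1, x2), y in batch:
        err = w[0] * x1 + w[1] * x2 - y
        g[0] += err * x1 / len(batch)
        g[1] += err * x2 / len(batch)
    return g

rng = random.Random(0)
w = [0.0, 0.0]
for _ in range(4000):
    batch = rng.sample(data, 10)          # draw a mini-batch of 10 samples
    g = batch_gradient(w, batch)          # averaged (smoother) gradient estimate
    w = [wj - 0.2 * gj for wj, gj in zip(w, g)]
```

Averaging over the batch reduces the variance of the gradient estimate, which is the "smoother convergence" effect described above.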
The convergence of stochastic gradient descent has been analyzed using the theories of convex minimization and of stochastic approximation. Briefly, when the learning rates decrease at an appropriate rate, and subject to relatively mild assumptions, stochastic gradient descent converges almost surely to a global minimum when the objective function is convex or pseudoconvex, and otherwise converges almost surely to a local minimum.
Suppose we want to fit a straight line to a training set with observations and corresponding estimated responses using least squares. Note that in each iteration (also called an update), the gradient is only evaluated at a single point instead of at the set of all samples. The key difference compared to standard (batch) gradient descent is that only one piece of data from the data set is used to calculate the step, and the piece of data is picked randomly at each step.
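The line-fitting example can be sketched as follows; the data are a hypothetical noiseless toy set (exactly y = 1 + 2x), and the learning rate and iteration count are illustrative. Note that each update touches exactly one randomly chosen data point.

```python
import random

# Fit y ≈ a + b*x by minimizing sum_i (a + b*x_i - y_i)^2 / 2 with SGD.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]       # exactly y = 1 + 2x

a, b = 0.0, 0.0
eta = 0.05
rng = random.Random(42)
for _ in range(5000):
    i = rng.randrange(len(xs))        # pick a single sample at random
    err = a + b * xs[i] - ys[i]       # residual at that one point only
    a -= eta * err                    # dQ_i/da = err
    b -= eta * err * xs[i]            # dQ_i/db = err * x_i
```

Because the toy data contain no noise, the randomly sampled updates drive (a, b) all the way to the exact least-squares solution (1, 2).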
Stochastic gradient descent is a popular algorithm for training a wide range of models in machine learning, including (linear) support vector machines, logistic regression (see, e.g., Vowpal Wabbit) and graphical models. When combined with the backpropagation algorithm, it is the de facto standard algorithm for training artificial neural networks.
Stochastic gradient descent competes with the L-BFGS algorithm, which is also widely used. Stochastic gradient descent has been used since at least 1960 for training linear regression models, originally under the name ADALINE. Another stochastic gradient descent algorithm is the least mean squares (LMS) adaptive filter.
Many improvements on the basic stochastic gradient descent algorithm have been proposed and used. In particular, in machine learning, the need to set a learning rate (step size) has been recognized as problematic. Setting this parameter too high can cause the algorithm to diverge; setting it too low makes it slow to converge.
A conceptually simple extension of stochastic gradient descent makes the learning rate a decreasing function of the iteration number, giving a learning rate schedule, so that the first iterations cause large changes in the parameters, while the later ones do only fine-tuning. Practical guidance on choosing the step size in several variants of SGD is given by Spall.
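A decreasing schedule of this kind can be sketched as a simple function of the iteration number; the inverse-decay form and the constants here are one common choice, not prescribed by the text.

```python
def lr_schedule(eta0, t, decay=0.01):
    """Hypothetical inverse-decay learning rate schedule: starts at eta0
    and shrinks as the iteration number t grows, so early iterations make
    large parameter changes and later ones only fine-tune."""
    return eta0 / (1.0 + decay * t)

# The step size decreases monotonically with the iteration number.
rates = [lr_schedule(0.5, t) for t in (0, 100, 1000)]
```

Other common choices include step decay (halving η every k epochs) and exponential decay; the Robbins–Monro conditions (Σ η_t = ∞, Σ η_t² < ∞) characterize schedules that guarantee convergence.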
As mentioned earlier, classical stochastic gradient descent is generally sensitive to the learning rate η. Fast convergence requires large learning rates, but this may induce numerical instability. The problem can be largely solved by considering implicit updates, whereby the stochastic gradient is evaluated at the next iterate rather than the current one:

    w_new := w_old − η ∇Q_i(w_new).

This equation is implicit, since w_new appears on both sides. Classical stochastic gradient descent proceeds as follows:

    w_new := w_old − η ∇Q_i(w_old).

In contrast, implicit stochastic gradient descent (shortened as ISGD) can be solved in closed form in the least-squares case; with features x_i and response y_i, the update is:

    w_new := w_old − (η / (1 + η ‖x_i‖²)) (x_iᵀ w_old − y_i) x_i.
This procedure will remain numerically stable for virtually all η, as the learning rate is now normalized. Even though a closed-form solution for ISGD is only possible in least squares, the procedure can be efficiently implemented in a wide range of models.
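The stability difference can be sketched on a one-dimensional least-squares toy (y = 2x, noiseless, with a deliberately oversized learning rate); the data and constants are illustrative. The ISGD step uses the closed form above, in which the step is normalized by 1 + η·x², so it stays stable where the explicit update blows up.

```python
import random

def sgd_step(w, x, y, eta):
    """Explicit SGD: gradient of (x*w - y)^2 / 2 evaluated at the current w."""
    return w - eta * (x * w - y) * x

def isgd_step(w, x, y, eta):
    """Implicit SGD in closed form: the learning rate is normalized
    by 1 + eta * x^2, keeping the update stable for any eta."""
    return w - (eta / (1.0 + eta * x * x)) * (x * w - y) * x

rng = random.Random(1)
samples = [(x, 2.0 * x) for x in (rng.uniform(1.0, 3.0) for _ in range(50))]

eta = 5.0                             # far too large for the explicit update
w_explicit = w_implicit = 0.0
for x, y in samples:
    w_explicit = sgd_step(w_explicit, x, y, eta)
    w_implicit = isgd_step(w_implicit, x, y, eta)
```

The implicit iterate converges to the true slope 2, while the explicit iterate diverges by many orders of magnitude on the same data and learning rate.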
Least squares obeys this rule, and so does logistic regression, and most generalized linear models. For instance, in least squares and in logistic regression, the summand gradient depends on the parameters only through a scalar linear predictor of the features; in the logistic case, the mean response is given by the logistic function.
Further proposals include the momentum method or the heavy ball method, which in the ML context appeared in Rumelhart, Hinton and Williams' paper on backpropagation learning and borrowed the idea from Soviet mathematician Boris Polyak's 1964 article on solving functional equations.
Stochastic gradient descent with momentum remembers the update Δw at each iteration, and determines the next update as a linear combination of the gradient and the previous update:

    Δw := α Δw − η ∇Q_i(w)
    w := w + Δw,

where the parameter w which minimizes Q(w) is to be estimated, η is a step size (sometimes called the learning rate in machine learning) and α is an exponential decay factor between 0 and 1 that determines the relative contribution of the current gradient and earlier gradients to the weight change.
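The two-line momentum update can be sketched as follows. The toy problem is hypothetical: every summand shares the same minimizer w = 2 (only the curvature differs), so the stochastic iterates converge exactly; the constants α = 0.9 and η = 0.1 are conventional, not prescribed by the text.

```python
import random

def sgd_momentum(grad_i, w, n, eta=0.1, alpha=0.9, steps=4000, seed=0):
    """SGD with momentum: each update delta is a linear combination of
    the previous update and the current stochastic gradient."""
    rng = random.Random(seed)
    delta = [0.0] * len(w)
    for _ in range(steps):
        i = rng.randrange(n)
        g = grad_i(w, i)
        # delta := alpha * delta - eta * grad Q_i(w)
        delta = [alpha * d - eta * gj for d, gj in zip(delta, g)]
        # w := w + delta
        w = [wj + d for wj, d in zip(w, delta)]
    return w

# Toy summands: grad Q_i(w) = c_i * (w - 2), all minimized at w = 2.
coeffs = [0.5, 1.0, 1.5]
grad = lambda w, i: [coeffs[i] * (w[0] - 2.0)]
w_final = sgd_momentum(grad, [10.0], n=3)
```

The previous update acts as a velocity term: successive gradients pointing the same way accumulate, while oscillating components cancel.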
The name momentum stems from an analogy to momentum in physics: the weight vector, thought of as a particle traveling through parameter space, incurs acceleration from the gradient of the loss ("force"). Unlike in classical stochastic gradient descent, it tends to keep traveling in the same direction, preventing oscillations.
Momentum has been used successfully by computer scientists in the training of artificial neural networks for several decades. The momentum method is closely related to underdamped Langevin dynamics, and may be combined with simulated annealing.
In the mid-1980s the method was modified by Yurii Nesterov to use the gradient predicted at the next point, and the resulting so-called Nesterov Accelerated Gradient was sometimes used in ML in the 2010s.
Averaged stochastic gradient descent, invented independently by Ruppert and Polyak in the late 1980s, is ordinary stochastic gradient descent that records an average of its parameter vector over time. When optimization is done, this averaged parameter vector takes the place of the final iterate.
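Polyak–Ruppert averaging can be sketched as ordinary SGD plus a running mean of the iterates; the toy summands and constants below are illustrative.

```python
import random

def averaged_sgd(grad_i, w, n, eta=0.05, steps=4000, seed=0):
    """Ordinary SGD that also records a running average of its iterates
    (Polyak-Ruppert averaging); returns both the last iterate and the
    average, and the average is what replaces the final iterate."""
    rng = random.Random(seed)
    w_bar = list(w)
    for t in range(1, steps + 1):
        i = rng.randrange(n)
        g = grad_i(w, i)
        w = [wj - eta * gj for wj, gj in zip(w, g)]
        # Running mean: w_bar_t = w_bar_{t-1} + (w_t - w_bar_{t-1}) / t
        w_bar = [wb + (wj - wb) / t for wb, wj in zip(w_bar, w)]
    return w, w_bar

# Toy summands Q_i(w) = (w - t_i)^2 / 2; the sum is minimized at mean(t) = 2.
targets = [1.0, 2.0, 3.0]
grad = lambda w, i: [w[0] - targets[i]]
w_last, w_avg = averaged_sgd(grad, [0.0], n=3)
```

The last iterate keeps bouncing around the minimizer because of the constant step size, while the averaged iterate smooths that sampling noise out.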
AdaGrad (for adaptive gradient algorithm) is a modified stochastic gradient descent algorithm with a per-parameter learning rate, first published in 2011. Informally, this increases the learning rate for sparser parameters and decreases the learning rate for ones that are less sparse. This strategy often improves convergence performance over standard stochastic gradient descent in settings where data is sparse and sparse parameters are more informative.
The update rule is

    w := w − η diag(G)^(−1/2) ⊙ g,

where g_τ = ∇Q_i(w) is the gradient at iteration τ, and G accumulates the squared gradients, with G_{j,j} = Σ_{τ=1}^t g_{τ,j}². This accumulator essentially stores a historical sum of gradient squares by dimension and is updated after every iteration.
Or, written as per-parameter updates,

    w_j := w_j − (η / √(G_{j,j})) g_j.

Each G_{j,j} gives rise to a scaling factor for the learning rate that applies to a single parameter w_j.
Since the denominator in this factor, √(G_{j,j}) = √(Σ_{τ=1}^t g_{τ,j}²), is the ℓ2 norm of previous derivatives, extreme parameter updates get dampened, while parameters that get few or small updates receive higher learning rates.
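The per-parameter rule above can be sketched as follows; the deterministic two-coordinate toy problem and the base learning rate are illustrative.

```python
import math

def adagrad(grad_fn, w, eta=0.5, steps=3000, eps=1e-8):
    """AdaGrad: each parameter j accumulates the sum of its squared
    gradients G[j] and is updated with the scaled step eta / sqrt(G[j])."""
    G = [0.0] * len(w)
    for _ in range(steps):
        g = grad_fn(w)
        for j in range(len(w)):
            G[j] += g[j] * g[j]                          # historical sum of squares
            w[j] -= eta * g[j] / (math.sqrt(G[j]) + eps) # per-parameter step
    return w

# Deterministic toy: independent quadratics per coordinate, minimized at (1, -3).
target = [1.0, -3.0]
grad = lambda w: [w[0] - target[0], w[1] - target[1]]
w_out = adagrad(grad, [0.0, 0.0])
```

The coordinate with the larger gradients accumulates a larger G and therefore takes proportionally smaller steps, which is exactly the dampening described above.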
RMSProp (for Root Mean Square Propagation) is a method invented by Geoffrey Hinton in 2012, in which the learning rate is, like in Adagrad, adapted for each of the parameters. The idea is to divide the learning rate for a weight by a running average of the magnitudes of recent gradients for that weight.
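One RMSProp step can be sketched as follows; the decay factor γ = 0.9 and learning rate are conventional defaults, and the deterministic quadratic is an illustrative toy.

```python
import math

def rmsprop_step(w, g, v, eta=0.01, gamma=0.9, eps=1e-8):
    """One RMSProp update: keep an exponentially weighted running average
    v of squared gradients per weight, then divide the learning rate by
    its square root."""
    v = [gamma * vj + (1 - gamma) * gj * gj for vj, gj in zip(v, g)]
    w = [wj - eta * gj / (math.sqrt(vj) + eps)
         for wj, gj, vj in zip(w, g, v)]
    return w, v

# Toy check: minimize f(w) = (w - 4)^2 / 2 with deterministic gradients.
w, v = [0.0], [0.0]
for _ in range(2000):
    g = [w[0] - 4.0]
    w, v = rmsprop_step(w, g, v)
```

Unlike AdaGrad's ever-growing sum, the running average "forgets" old gradients at rate γ, so the effective step size does not shrink toward zero.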
The concept of storing the historical gradient as a sum of squares is borrowed from Adagrad, but "forgetting" is introduced to solve Adagrad's diminishing learning rates in non-convex problems by gradually decreasing the influence of old data. RMSProp has shown good adaptation of learning rate in different applications. RMSProp can be seen as a generalization of Rprop and is capable of working with mini-batches as well, as opposed to only full batches.
Adam (short for Adaptive Moment Estimation) is a 2014 update to the RMSProp optimizer, combining it with the main feature of the momentum method. Given parameters w^(t) and a loss function L^(t), where t indexes the current training iteration (indexed at 0), Adam's parameter update is given by:

    m^(t+1) := β1 m^(t) + (1 − β1) ∇_w L^(t)
    v^(t+1) := β2 v^(t) + (1 − β2) (∇_w L^(t))²
    m_hat = m^(t+1) / (1 − β1^(t+1))
    v_hat = v^(t+1) / (1 − β2^(t+1))
    w^(t+1) := w^(t) − η m_hat / (√v_hat + ε),

where ε is a small scalar used to prevent division by zero, and β1 and β2 are the forgetting factors for gradients and second moments of gradients, respectively.
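The update above can be sketched as a single step function; β1 = 0.9, β2 = 0.999 and ε = 1e-8 are the conventional defaults, and the deterministic quadratic run is an illustrative toy, not part of the original text.

```python
import math

def adam_step(w, g, m, v, t, eta=0.05, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: per-parameter first (m) and second (v) moment
    estimates with bias correction; t is the 1-based iteration index."""
    m = [beta1 * mj + (1 - beta1) * gj for mj, gj in zip(m, g)]
    v = [beta2 * vj + (1 - beta2) * gj * gj for vj, gj in zip(v, g)]
    m_hat = [mj / (1 - beta1 ** t) for mj in m]     # bias-corrected momentum
    v_hat = [vj / (1 - beta2 ** t) for vj in v]     # bias-corrected 2nd moment
    w = [wj - eta * mh / (math.sqrt(vh) + eps)
         for wj, mh, vh in zip(w, m_hat, v_hat)]
    return w, m, v

# Toy run on f(w) = (w - 5)^2 / 2 with deterministic gradients.
w, m, v = [0.0], [0.0], [0.0]
for t in range(1, 3001):
    g = [w[0] - 5.0]
    w, m, v = adam_step(w, g, m, v, t)
```

The bias correction matters early on, when m and v are still dominated by their zero initialization; without it the first steps would be much too small.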
The profound influence of this algorithm inspired multiple newer, less well-known momentum-based optimization schemes using Nesterov-enhanced gradients (e.g. NAdam and FASFA) and varying interpretations of second-order information (e.g. Powerpropagation and AdaSqrt).
Backtracking line search is another variant of gradient descent. It is based on a condition known as the Armijo–Goldstein condition. Both methods allow learning rates to change at each iteration; however, the manner of the change is different.
Backtracking line search uses function evaluations to check Armijo's condition, and in principle the loop in the algorithm for determining the learning rates can be long and unknown in advance.
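A minimal one-dimensional sketch of this loop follows; the sufficient-decrease form f(w − ηg) ≤ f(w) − cη·g² and the constants c = 0.5, τ = 0.5 are conventional choices, and the quadratic objective is an illustrative toy.

```python
def backtracking_line_search(f, grad_f, w, eta0=1.0, c=0.5, tau=0.5):
    """Shrink the step until the Armijo sufficient-decrease condition
    f(w - eta*g) <= f(w) - c*eta*g^2 holds, then take the step.
    The inner while-loop length is not known in advance."""
    g = grad_f(w)
    eta = eta0
    while f(w - eta * g) > f(w) - c * eta * g * g:
        eta *= tau                     # backtrack: reduce the candidate step
    return w - eta * g

f = lambda w: (w - 2.0) ** 2           # toy objective, minimized at w = 2
grad_f = lambda w: 2.0 * (w - 2.0)

w = 10.0
for _ in range(50):
    w = backtracking_line_search(f, grad_f, w)
```

Every accepted step satisfies the sufficient-decrease condition, so f(w) is non-increasing across iterations; this is the "descent property" discussed below.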
Adaptive SGD does not need a loop in determining learning rates. On the other hand, adaptive SGD does not guarantee the "descent property" – which backtracking line search enjoys – which is that f(w_{n+1}) ≤ f(w_n) for all n. If the gradient of the cost function is globally Lipschitz continuous, with Lipschitz constant L, and the learning rate is chosen of the order 1/L, then the standard version of SGD is a special case of backtracking line search.
A stochastic analogue of the standard (deterministic) Newton–Raphson algorithm (a "second-order" method) provides an asymptotically optimal or near-optimal form of iterative optimization in the setting of stochastic approximation.
Another approach to approximating the Hessian matrix is replacing it with the Fisher information matrix, which transforms the usual gradient to the natural gradient.
