<s>
In	O
mathematics	O
,	O
gradient	B-Algorithm
descent	I-Algorithm
(	O
also	O
often	O
called	O
steepest	B-Algorithm
descent	I-Algorithm
)	O
is	O
a	O
first-order	O
iterative	B-Algorithm
optimization	O
algorithm	O
for	O
finding	O
a	O
local	O
minimum	O
of	O
a	O
differentiable	O
function	O
.	O
</s>
<s>
The	O
idea	O
is	O
to	O
take	O
repeated	O
steps	O
in	O
the	O
opposite	O
direction	O
of	O
the	O
gradient	O
(	O
or	O
approximate	O
gradient	O
)	O
of	O
the	O
function	O
at	O
the	O
current	O
point	O
,	O
because	O
this	O
is	O
the	O
direction	O
of	O
steepest	B-Algorithm
descent	I-Algorithm
.	O
</s>
<s>
Conversely	O
,	O
stepping	O
in	O
the	O
direction	O
of	O
the	O
gradient	O
will	O
lead	O
to	O
a	O
local	O
maximum	O
of	O
that	O
function	O
;	O
the	O
procedure	O
is	O
then	O
known	O
as	O
gradient	B-Algorithm
ascent	I-Algorithm
.	O
</s>
<s>
Despite	O
its	O
simplicity	O
and	O
efficiency	O
,	O
gradient	B-Algorithm
descent	I-Algorithm
has	O
some	O
limitations	O
and	O
variations	O
have	O
been	O
developed	O
to	O
overcome	O
these	O
limitations	O
.	O
</s>
<s>
Overall	O
,	O
gradient	B-Algorithm
descent	I-Algorithm
has	O
revolutionized	O
various	O
fields	O
and	O
continues	O
to	O
be	O
an	O
active	O
area	O
of	O
research	O
and	O
development	O
.	O
</s>
<s>
Gradient	B-Algorithm
descent	I-Algorithm
is	O
generally	O
attributed	O
to	O
Augustin-Louis	O
Cauchy	O
,	O
who	O
first	O
suggested	O
it	O
in	O
1847	O
.	O
</s>
<s>
Its	O
convergence	B-Algorithm
properties	O
for	O
non-linear	O
optimization	O
problems	O
were	O
first	O
studied	O
by	O
Haskell	O
Curry	O
in	O
1944	O
,	O
with	O
the	O
method	O
becoming	O
increasingly	O
well-studied	O
and	O
used	O
in	O
the	O
following	O
decades	O
.	O
</s>
<s>
Gradient	B-Algorithm
descent	I-Algorithm
is	O
based	O
on	O
the	O
observation	O
that	O
if	O
the	O
multi-variable	O
function	O
is	O
defined	O
and	O
differentiable	O
in	O
a	O
neighborhood	O
of	O
a	O
point	O
,	O
then	O
decreases	O
fastest	O
if	O
one	O
goes	O
from	O
in	O
the	O
direction	O
of	O
the	O
negative	O
gradient	O
of	O
at	O
.	O
</s>
<s>
for	O
a	O
small	O
enough	O
step	B-General_Concept
size	I-General_Concept
or	O
learning	B-General_Concept
rate	I-General_Concept
,	O
then	O
.	O
</s>
<s>
Note	O
that	O
the	O
value	O
of	O
the	O
step	B-General_Concept
size	I-General_Concept
is	O
allowed	O
to	O
change	O
at	O
every	O
iteration	O
.	O
</s>
<s>
With	O
certain	O
assumptions	O
on	O
the	O
function	O
(	O
for	O
example	O
,	O
convex	O
and	O
Lipschitz	O
)	O
and	O
particular	O
choices	O
of	O
(	O
e.g.	O
,	O
chosen	O
either	O
via	O
a	O
line	B-Algorithm
search	I-Algorithm
that	O
satisfies	O
the	O
Wolfe	O
conditions	O
,	O
or	O
the	O
Barzilai-Borwein	O
method	O
shown	O
as	O
following	O
)	O
,	O
</s>
<s>
convergence	B-Algorithm
to	O
a	O
local	O
minimum	O
can	O
be	O
guaranteed	O
.	O
</s>
<s>
When	O
the	O
function	O
is	O
convex	O
,	O
all	O
local	O
minima	O
are	O
also	O
global	O
minima	O
,	O
so	O
in	O
this	O
case	O
gradient	B-Algorithm
descent	I-Algorithm
can	O
converge	O
to	O
the	O
global	O
solution	O
.	O
</s>
<s>
We	O
see	O
that	O
gradient	B-Algorithm
descent	I-Algorithm
leads	O
us	O
to	O
the	O
bottom	O
of	O
the	O
bowl	O
,	O
that	O
is	O
,	O
to	O
the	O
point	O
where	O
the	O
value	O
of	O
the	O
function	O
is	O
minimal	O
.	O
</s>
<s>
The	O
basic	O
intuition	O
behind	O
gradient	B-Algorithm
descent	I-Algorithm
can	O
be	O
illustrated	O
by	O
a	O
hypothetical	O
scenario	O
.	O
</s>
<s>
They	O
can	O
use	O
the	O
method	O
of	O
gradient	B-Algorithm
descent	I-Algorithm
,	O
which	O
involves	O
looking	O
at	O
the	O
steepness	O
of	O
the	O
hill	O
at	O
their	O
current	O
position	O
,	O
then	O
proceeding	O
in	O
the	O
direction	O
with	O
the	O
steepest	B-Algorithm
descent	I-Algorithm
(	O
i.e.	O
,	O
downhill	O
)	O
.	O
</s>
<s>
If	O
they	O
were	O
trying	O
to	O
find	O
the	O
top	O
of	O
the	O
mountain	O
(	O
i.e.	O
,	O
the	O
maximum	O
)	O
,	O
then	O
they	O
would	O
proceed	O
in	O
the	O
direction	O
of	O
steepest	B-Algorithm
ascent	I-Algorithm
(	O
i.e.	O
,	O
uphill	O
)	O
.	O
</s>
<s>
The	O
instrument	O
used	O
to	O
measure	O
steepness	O
is	O
differentiation	B-Algorithm
.	O
</s>
<s>
The	O
amount	O
of	O
time	O
they	O
travel	O
before	O
taking	O
another	O
measurement	O
is	O
the	O
step	B-General_Concept
size	I-General_Concept
.	O
</s>
<s>
Since	O
using	O
a	O
step	B-General_Concept
size	I-General_Concept
that	O
is	O
too	O
small	O
would	O
slow	O
convergence	B-Algorithm
,	O
and	O
a	O
too	O
large	O
would	O
lead	O
to	O
divergence	O
,	O
finding	O
a	O
good	O
setting	O
of	O
is	O
an	O
important	O
practical	O
problem	O
.	O
</s>
<s>
Whilst	O
using	O
a	O
direction	O
that	O
deviates	O
from	O
the	O
steepest	B-Algorithm
descent	I-Algorithm
direction	O
may	O
seem	O
counter-intuitive	O
,	O
the	O
idea	O
is	O
that	O
the	O
smaller	O
slope	O
may	O
be	O
compensated	O
for	O
by	O
being	O
sustained	O
over	O
a	O
much	O
longer	O
distance	O
.	O
</s>
<s>
To	O
reason	O
about	O
this	O
mathematically	O
,	O
consider	O
a	O
direction	O
and	O
step	B-General_Concept
size	I-General_Concept
and	O
consider	O
the	O
more	O
general	O
update	O
:	O
</s>
<s>
In	O
principle	O
inequality	O
(	O
)	O
could	O
be	O
optimized	O
over	O
and	O
to	O
choose	O
an	O
optimal	O
step	B-General_Concept
size	I-General_Concept
and	O
direction	O
.	O
</s>
<s>
Forgo	O
the	O
benefits	O
of	O
a	O
clever	O
descent	O
direction	O
by	O
setting	O
,	O
and	O
use	O
line	B-Algorithm
search	I-Algorithm
to	O
find	O
a	O
suitable	O
step-size	O
,	O
such	O
as	O
one	O
that	O
satisfies	O
the	O
Wolfe	O
conditions	O
.	O
</s>
<s>
A	O
more	O
economic	O
way	O
of	O
choosing	O
learning	B-General_Concept
rates	I-General_Concept
is	O
backtracking	B-Algorithm
line	I-Algorithm
search	I-Algorithm
,	O
a	O
method	O
that	O
has	O
both	O
good	O
theoretical	O
guarantees	O
and	O
experimental	O
results	O
.	O
</s>
<s>
Usually	O
by	O
following	O
one	O
of	O
the	O
recipes	O
above	O
,	O
convergence	B-Algorithm
to	O
a	O
local	O
minimum	O
can	O
be	O
guaranteed	O
.	O
</s>
<s>
When	O
the	O
function	O
is	O
convex	O
,	O
all	O
local	O
minima	O
are	O
also	O
global	O
minima	O
,	O
so	O
in	O
this	O
case	O
gradient	B-Algorithm
descent	I-Algorithm
can	O
converge	O
to	O
the	O
global	O
solution	O
.	O
</s>
<s>
The	O
line	B-Algorithm
search	I-Algorithm
minimization	O
,	O
finding	O
the	O
locally	O
optimal	O
step	B-General_Concept
size	I-General_Concept
on	O
every	O
iteration	O
,	O
can	O
be	O
performed	O
analytically	O
for	O
quadratic	O
functions	O
,	O
and	O
explicit	O
formulas	O
for	O
the	O
locally	O
optimal	O
are	O
known	O
.	O
</s>
<s>
For	O
example	O
,	O
for	O
real	O
symmetric	B-Algorithm
and	O
positive-definite	B-Algorithm
matrix	I-Algorithm
,	O
a	O
simple	O
algorithm	O
can	O
be	O
as	O
follows	O
,	O
</s>
<s>
The	O
method	O
is	O
rarely	O
used	O
for	O
solving	O
linear	O
equations	O
,	O
with	O
the	O
conjugate	B-Algorithm
gradient	I-Algorithm
method	I-Algorithm
being	O
one	O
of	O
the	O
most	O
popular	O
alternatives	O
.	O
</s>
<s>
The	O
number	O
of	O
gradient	B-Algorithm
descent	I-Algorithm
iterations	O
is	O
commonly	O
proportional	O
to	O
the	O
spectral	O
condition	B-Algorithm
number	I-Algorithm
of	O
the	O
system	O
matrix	O
(	O
the	O
ratio	O
of	O
the	O
maximum	O
to	O
minimum	O
eigenvalues	O
of	O
,	O
while	O
the	O
convergence	B-Algorithm
of	O
conjugate	B-Algorithm
gradient	I-Algorithm
method	I-Algorithm
is	O
typically	O
determined	O
by	O
a	O
square	O
root	O
of	O
the	O
condition	B-Algorithm
number	I-Algorithm
,	O
i.e.	O
,	O
is	O
much	O
faster	O
.	O
</s>
<s>
Both	O
methods	O
can	O
benefit	O
from	O
preconditioning	O
,	O
where	O
gradient	B-Algorithm
descent	I-Algorithm
may	O
require	O
less	O
assumptions	O
on	O
the	O
preconditioner	O
.	O
</s>
<s>
Gradient	B-Algorithm
descent	I-Algorithm
can	O
also	O
be	O
used	O
to	O
solve	O
a	O
system	O
of	O
nonlinear	O
equations	O
.	O
</s>
<s>
Below	O
is	O
an	O
example	O
that	O
shows	O
how	O
to	O
use	O
the	O
gradient	B-Algorithm
descent	I-Algorithm
to	O
solve	O
for	O
three	O
unknown	O
variables	O
,	O
x1	O
,	O
x2	O
,	O
and	O
x3	O
.	O
</s>
<s>
This	O
example	O
shows	O
one	O
iteration	O
of	O
the	O
gradient	B-Algorithm
descent	I-Algorithm
.	O
</s>
<s>
This	O
can	O
be	O
done	O
with	O
any	O
of	O
a	O
variety	O
of	O
line	B-Algorithm
search	I-Algorithm
algorithms	O
.	O
</s>
<s>
Gradient	B-Algorithm
descent	I-Algorithm
works	O
in	O
spaces	O
of	O
any	O
number	O
of	O
dimensions	O
,	O
even	O
in	O
infinite-dimensional	O
ones	O
.	O
</s>
<s>
In	O
the	O
latter	O
case	O
,	O
the	O
search	O
space	O
is	O
typically	O
a	O
function	B-Algorithm
space	I-Algorithm
,	O
and	O
one	O
calculates	O
the	O
Fréchet	O
derivative	B-Algorithm
of	O
the	O
functional	O
to	O
be	O
minimized	O
to	O
determine	O
the	O
descent	O
direction	O
.	O
</s>
<s>
That	O
gradient	B-Algorithm
descent	I-Algorithm
works	O
in	O
any	O
number	O
of	O
dimensions	O
(	O
finite	O
number	O
at	O
least	O
)	O
can	O
be	O
seen	O
as	O
a	O
consequence	O
of	O
the	O
Cauchy-Schwarz	O
inequality	O
.	O
</s>
<s>
In	O
the	O
case	O
of	O
gradient	B-Algorithm
descent	I-Algorithm
,	O
that	O
would	O
be	O
when	O
the	O
vector	O
of	O
independent	O
variable	O
adjustments	O
is	O
proportional	O
to	O
the	O
gradient	O
vector	O
of	O
partial	O
derivatives	B-Algorithm
.	O
</s>
<s>
The	O
gradient	B-Algorithm
descent	I-Algorithm
can	O
take	O
many	O
iterations	O
to	O
compute	O
a	O
local	O
minimum	O
with	O
a	O
required	O
accuracy	O
,	O
if	O
the	O
curvature	O
in	O
different	O
directions	O
is	O
very	O
different	O
for	O
the	O
given	O
function	O
.	O
</s>
<s>
For	O
such	O
functions	O
,	O
preconditioning	O
,	O
which	O
changes	O
the	O
geometry	O
of	O
the	O
space	O
to	O
shape	O
the	O
function	O
level	B-Algorithm
sets	I-Algorithm
like	O
concentric	O
circles	O
,	O
cures	O
the	O
slow	O
convergence	B-Algorithm
.	O
</s>
<s>
The	O
gradient	B-Algorithm
descent	I-Algorithm
can	O
be	O
combined	O
with	O
a	O
line	B-Algorithm
search	I-Algorithm
,	O
finding	O
the	O
locally	O
optimal	O
step	B-General_Concept
size	I-General_Concept
on	O
every	O
iteration	O
.	O
</s>
<s>
Performing	O
the	O
line	B-Algorithm
search	I-Algorithm
can	O
be	O
time-consuming	O
.	O
</s>
<s>
Conversely	O
,	O
using	O
a	O
fixed	O
small	O
can	O
yield	O
poor	O
convergence	B-Algorithm
.	O
</s>
<s>
Methods	O
based	O
on	O
Newton	B-Algorithm
's	I-Algorithm
method	I-Algorithm
and	O
inversion	O
of	O
the	O
Hessian	O
using	O
conjugate	B-Algorithm
gradient	I-Algorithm
techniques	O
can	O
be	O
better	O
alternatives	O
.	O
</s>
<s>
An	O
example	O
is	O
the	O
BFGS	B-Algorithm
method	I-Algorithm
which	O
consists	O
in	O
calculating	O
on	O
every	O
step	O
a	O
matrix	O
by	O
which	O
the	O
gradient	O
vector	O
is	O
multiplied	O
to	O
go	O
into	O
a	O
"	O
better	O
"	O
direction	O
,	O
combined	O
with	O
a	O
more	O
sophisticated	O
line	B-Algorithm
search	I-Algorithm
algorithm	O
,	O
to	O
find	O
the	O
"	O
best	O
"	O
value	O
of	O
For	O
extremely	O
large	O
problems	O
,	O
where	O
the	O
computer-memory	O
issues	O
dominate	O
,	O
a	O
limited-memory	O
method	O
such	O
as	O
L-BFGS	B-Algorithm
should	O
be	O
used	O
instead	O
of	O
BFGS	B-Algorithm
or	O
the	O
steepest	B-Algorithm
descent	I-Algorithm
.	O
</s>
<s>
Gradient	B-Algorithm
descent	I-Algorithm
can	O
be	O
viewed	O
as	O
applying	O
Euler	B-Algorithm
's	I-Algorithm
method	I-Algorithm
for	O
solving	O
ordinary	O
differential	O
equations	O
to	O
a	O
gradient	B-Algorithm
flow	I-Algorithm
.	O
</s>
<s>
It	O
can	O
be	O
shown	O
that	O
there	O
is	O
a	O
correspondence	O
between	O
neuroevolution	B-Algorithm
and	O
gradient	B-Algorithm
descent	I-Algorithm
.	O
</s>
<s>
Gradient	B-Algorithm
descent	I-Algorithm
can	O
converge	O
to	O
a	O
local	O
minimum	O
and	O
slow	O
down	O
in	O
a	O
neighborhood	O
of	O
a	O
saddle	O
point	O
.	O
</s>
<s>
Even	O
for	O
unconstrained	O
quadratic	O
minimization	O
,	O
gradient	B-Algorithm
descent	I-Algorithm
develops	O
a	O
zig-zag	O
pattern	O
of	O
subsequent	O
iterates	O
as	O
iterations	O
progress	O
,	O
resulting	O
in	O
slow	O
convergence	B-Algorithm
.	O
</s>
<s>
Multiple	O
modifications	O
of	O
gradient	B-Algorithm
descent	I-Algorithm
have	O
been	O
proposed	O
to	O
address	O
these	O
deficiencies	O
.	O
</s>
<s>
Yurii	O
Nesterov	O
has	O
proposed	O
a	O
simple	O
modification	O
that	O
enables	O
faster	O
convergence	B-Algorithm
for	O
convex	O
problems	O
and	O
has	O
been	O
since	O
further	O
generalized	O
.	O
</s>
<s>
Specifically	O
,	O
if	O
the	O
differentiable	O
function	O
is	O
convex	O
and	O
is	O
Lipschitz	O
,	O
and	O
it	O
is	O
not	O
assumed	O
that	O
is	O
strongly	O
convex	O
,	O
then	O
the	O
error	O
in	O
the	O
objective	O
value	O
generated	O
at	O
each	O
step	O
by	O
the	O
gradient	B-Algorithm
descent	I-Algorithm
method	I-Algorithm
will	O
be	O
bounded	O
by	O
.	O
</s>
<s>
For	O
constrained	O
or	O
non-smooth	O
problems	O
,	O
Nesterov	O
's	O
FGM	O
is	O
called	O
the	O
fast	O
proximal	B-Algorithm
gradient	I-Algorithm
method	I-Algorithm
(	O
FPGM	O
)	O
,	O
an	O
acceleration	O
of	O
the	O
proximal	B-Algorithm
gradient	I-Algorithm
method	I-Algorithm
.	O
</s>
<s>
Trying	O
to	O
break	O
the	O
zig-zag	O
pattern	O
of	O
gradient	B-Algorithm
descent	I-Algorithm
,	O
the	O
momentum	O
or	O
heavy	O
ball	O
method	O
uses	O
a	O
momentum	O
term	O
in	O
analogy	O
to	O
a	O
heavy	O
ball	O
sliding	O
on	O
the	O
surface	O
of	O
values	O
of	O
the	O
function	O
being	O
minimized	O
,	O
or	O
to	O
mass	O
movement	O
in	O
Newtonian	O
dynamics	O
through	O
a	O
viscous	O
medium	O
in	O
a	O
conservative	O
force	O
field	O
.	O
</s>
<s>
Gradient	B-Algorithm
descent	I-Algorithm
with	O
momentum	O
remembers	O
the	O
solution	O
update	O
at	O
each	O
iteration	O
,	O
and	O
determines	O
the	O
next	O
update	O
as	O
a	O
linear	O
combination	O
of	O
the	O
gradient	O
and	O
the	O
previous	O
update	O
.	O
</s>
<s>
For	O
unconstrained	O
quadratic	O
minimization	O
,	O
a	O
theoretical	O
convergence	B-Algorithm
rate	O
bound	O
of	O
the	O
heavy	O
ball	O
method	O
is	O
asymptotically	O
the	O
same	O
as	O
that	O
for	O
the	O
optimal	O
conjugate	B-Algorithm
gradient	I-Algorithm
method	I-Algorithm
.	O
</s>
<s>
This	O
technique	O
is	O
used	O
in	O
stochastic	B-Algorithm
gradient	I-Algorithm
descent	I-Algorithm
and	O
as	O
an	O
extension	O
to	O
the	O
backpropagation	B-Algorithm
algorithms	O
used	O
to	O
train	O
artificial	B-Architecture
neural	I-Architecture
networks	I-Architecture
.	O
</s>
<s>
In	O
the	O
direction	O
of	O
updating	O
,	O
stochastic	B-Algorithm
gradient	I-Algorithm
descent	I-Algorithm
adds	O
a	O
stochastic	O
property	O
.	O
</s>
<s>
The	O
weights	O
can	O
be	O
used	O
to	O
calculate	O
the	O
derivatives	B-Algorithm
.	O
</s>
<s>
Gradient	B-Algorithm
descent	I-Algorithm
can	O
be	O
extended	O
to	O
handle	O
constraints	B-Application
by	O
including	O
a	O
projection	B-Algorithm
onto	O
the	O
set	O
of	O
constraints	B-Application
.	O
</s>
<s>
This	O
method	O
is	O
only	O
feasible	O
when	O
the	O
projection	B-Algorithm
is	O
efficiently	O
computable	O
on	O
a	O
computer	O
.	O
</s>
<s>
This	O
method	O
is	O
a	O
specific	O
case	O
of	O
the	O
forward-backward	B-Algorithm
algorithm	I-Algorithm
for	O
monotone	O
inclusions	O
(	O
which	O
includes	O
convex	O
programming	O
and	O
variational	O
inequalities	O
)	O
.	O
</s>
<s>
Gradient	B-Algorithm
descent	I-Algorithm
is	O
a	O
special	O
case	O
of	O
mirror	B-Algorithm
descent	I-Algorithm
using	O
the	O
squared	O
Euclidean	O
distance	O
as	O
the	O
given	O
Bregman	B-Algorithm
divergence	I-Algorithm
.	O
</s>
