In calculus, Newton's method (also called Newton–Raphson) is an iterative method for finding the roots of a differentiable function $f$, which are solutions to the equation $f(x) = 0$.
As such, Newton's method can be applied to the derivative $f'$ of a twice-differentiable function $f$ to find the roots of the derivative (solutions to $f'(x) = 0$), also known as the critical points of $f$.
Newton's method attempts to solve this problem by constructing a sequence $\{x_k\}$ from an initial guess (starting point) $x_0$ that converges towards a minimizer $x_*$ of $f$ by using a sequence of second-order Taylor approximations of $f$ around the iterates $x_k$.
The second-order Taylor expansion of $f$ around $x_k$ is $f(x_k + t) \approx f(x_k) + f'(x_k)\,t + \tfrac{1}{2} f''(x_k)\,t^2$. The next iterate $x_{k+1}$ is defined so as to minimize this quadratic approximation in $t$, setting $x_{k+1} = x_k + t$.
If the second derivative is positive, the quadratic approximation is a convex function of $t$, and its minimum can be found by setting the derivative to zero. Since $f'(x_k) + f''(x_k)\,t = 0$ at the minimum, the minimizing step is $t = -\frac{f'(x_k)}{f''(x_k)}$, so Newton's method performs the iteration $x_{k+1} = x_k - \frac{f'(x_k)}{f''(x_k)}$.
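For concreteness, a minimal Python sketch of this one-dimensional iteration is given below; the helper `newton_1d` and the example derivatives are illustrative placeholders, not a reference implementation.

```python
def newton_1d(f1, f2, x0, tol=1e-10, max_iter=100):
    """Find a critical point of f by applying Newton's method to f',
    given its first derivative f1 and second derivative f2."""
    x = x0
    for _ in range(max_iter):
        step = f1(x) / f2(x)          # x_{k+1} = x_k - f'(x_k) / f''(x_k)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: f(x) = x**4 - 3*x**3 + 2 has a minimizer at x = 9/4.
x_star = newton_1d(lambda x: 4*x**3 - 9*x**2,   # f'(x)
                   lambda x: 12*x**2 - 18*x,    # f''(x)
                   x0=3.0)
```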
The geometric interpretation of Newton's method is that at each iteration it amounts to fitting a parabola to the graph of $f(x)$ at the trial value $x_k$, having the same slope and curvature as the graph at that point, and then proceeding to the maximum or minimum of that parabola (in higher dimensions, this may also be a saddle point); see below.
The above iterative scheme can be generalized to $d > 1$ dimensions by replacing the derivative with the gradient (different authors use different notation for the gradient, including $f'(x) = \nabla f(x) \in \mathbb{R}^d$), and the reciprocal of the second derivative with the inverse of the Hessian matrix (different authors use different notation for the Hessian, including $f''(x) = \nabla^2 f(x) = H(x) \in \mathbb{R}^{d \times d}$). One thus obtains the scheme $x_{k+1} = x_k - [f''(x_k)]^{-1} f'(x_k)$, $k \ge 0$.
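A short sketch of this multidimensional scheme, assuming callables `grad` and `hess` that return the gradient and Hessian of $f$ (the names and the toy example are illustrative):

```python
import numpy as np

def newton_nd(grad, hess, x0, tol=1e-10, max_iter=100):
    """Newton's method for optimization in d dimensions."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # Solve f''(x_k) d = -f'(x_k) rather than forming the inverse Hessian.
        d = np.linalg.solve(hess(x), -grad(x))
        x = x + d
        if np.linalg.norm(d) < tol:
            break
    return x

# Example: minimize f(x, y) = (x - 1)**2 + 10*(y + 2)**2.
grad = lambda v: np.array([2*(v[0] - 1), 20*(v[1] + 2)])
hess = lambda v: np.array([[2.0, 0.0], [0.0, 20.0]])
x_min = newton_nd(grad, hess, x0=[0.0, 0.0])   # converges to (1, -2)
```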
Often Newton's method is modified to include a small step size $0 < \gamma \le 1$ instead of $\gamma = 1$: $x_{k+1} = x_k - \gamma\,[f''(x_k)]^{-1} f'(x_k)$.
This is often done to ensure that the Wolfe conditions, or the much simpler and more efficient Armijo condition, are satisfied at each step of the method.
For step sizes other than 1, the method is often referred to as the relaxed or damped Newton's method.
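The sketch below illustrates one common way to choose the damping factor $\gamma$: backtrack until Armijo's sufficient-decrease condition holds. The constants `c` and `shrink` are typical illustrative values rather than part of the method's definition.

```python
import numpy as np

def damped_newton(f, grad, hess, x0, c=1e-4, shrink=0.5, tol=1e-10, max_iter=100):
    """Newton's method with a backtracking (Armijo) choice of step size."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        d = np.linalg.solve(hess(x), -g)              # Newton direction
        gamma = 1.0
        # Require f(x + gamma*d) <= f(x) + c * gamma * <g, d>  (Armijo condition).
        while gamma > 1e-12 and f(x + gamma * d) > f(x) + c * gamma * g.dot(d):
            gamma *= shrink
        x = x + gamma * d
        if np.linalg.norm(gamma * d) < tol:
            break
    return x
```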
Rather than inverting the Hessian explicitly, the Newton step $\Delta x = x_{k+1} - x_k$ is usually obtained as the solution of the system of linear equations $f''(x_k)\,\Delta x = -f'(x_k)$, which may be solved by various factorizations or approximately (but to great accuracy) using iterative methods.
Many of these methods are only applicable to certain types of equations; for example, the Cholesky factorization and conjugate gradient will only work if $f''(x_k)$ is a positive definite matrix.
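As a sketch of the factorization route, the Newton step can be computed with SciPy's Cholesky routines; `cho_factor` raises an error when the matrix is not positive definite, which is exactly the diagnostic discussed next (the matrices below are illustrative):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def newton_step_cholesky(H, g):
    """Solve H dx = -g via a Cholesky factorization of the (SPD) Hessian H."""
    c, low = cho_factor(H)          # fails with LinAlgError if H is not SPD
    return cho_solve((c, low), -g)

H = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
g = np.array([1.0, -2.0])
dx = newton_step_cholesky(H, g)
```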
While this may seem like a limitation, it is often a useful indicator that something has gone wrong; for example, if a minimization problem is being approached and $f''(x_k)$ is not positive definite, then the iterations are converging to a saddle point and not a minimum.
On the other hand, if a constrained optimization is done (for example, with Lagrange multipliers), the problem may become one of saddle point finding, in which case the Hessian will be symmetric indefinite and the solution of $x_{k+1}$ will need to be done with a method that works for such systems, such as the $LDL^\top$ variant of Cholesky factorization or the conjugate residual method.
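As a hedged illustration, a symmetric indefinite system of this kind can be solved with MINRES, a Krylov method closely related to the conjugate residual method and available in SciPy; the small matrix below is only a toy example.

```python
import numpy as np
from scipy.sparse.linalg import minres

H = np.array([[ 2.0,  0.0, 1.0],
              [ 0.0, -1.0, 1.0],     # symmetric but indefinite
              [ 1.0,  1.0, 0.0]])
g = np.array([1.0, 2.0, -1.0])

dx, info = minres(H, -g)             # info == 0 indicates convergence
```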
There also exist various quasi-Newton methods, where an approximation for the Hessian (or its inverse directly) is built up from changes in the gradient.
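For example, the widely used BFGS rule updates an inverse-Hessian approximation from the step $s = x_{k+1} - x_k$ and the gradient change $y = \nabla f(x_{k+1}) - \nabla f(x_k)$; a minimal sketch of a single update (variable names are illustrative):

```python
import numpy as np

def bfgs_update(Hinv, s, y):
    """One BFGS update of an inverse-Hessian approximation Hinv."""
    rho = 1.0 / y.dot(s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    # H_{k+1} = (I - rho s y^T) H_k (I - rho y s^T) + rho s s^T
    return V @ Hinv @ V.T + rho * np.outer(s, s)
```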
An approach exploited in the Levenberg–Marquardt algorithm (which uses an approximate Hessian) is to add a scaled identity matrix to the Hessian, $\mu I$, with the scale adjusted at every iteration as needed.
For large $\mu$ and a small Hessian, the iterations will behave like gradient descent with step size $1/\mu$.
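A sketch of this damping idea, written as a generic heuristic in the spirit of Levenberg–Marquardt rather than the algorithm as originally published (the update factors for $\mu$ are illustrative):

```python
import numpy as np

def damped_step(f, grad, hess, x, mu, grow=10.0, shrink=0.1):
    """Take one step with a mu*I-damped Hessian and adapt mu."""
    g, H = grad(x), hess(x)
    d = np.linalg.solve(H + mu * np.eye(len(g)), -g)
    if f(x + d) < f(x):
        return x + d, mu * shrink    # success: trust the quadratic model more
    return x, mu * grow              # failure: lean further toward gradient descent
```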
The popular modifications of Newton's method, such as the quasi-Newton methods or the Levenberg–Marquardt algorithm mentioned above, also have caveats:
If one looks at the papers by Levenberg and Marquardt cited as references for the Levenberg–Marquardt algorithm, which are the original sources for that method, one sees that there is essentially no theoretical analysis in Levenberg's paper, while Marquardt's paper only analyses a local situation and does not prove a global convergence result.
One can compare this with the backtracking line search method for gradient descent, which has good theoretical guarantees under more general assumptions, and which can be implemented and works well in practical large-scale problems such as deep neural networks.
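For comparison with the damped Newton sketch above, a minimal backtracking (Armijo) line search for gradient descent might look as follows; the constants are illustrative.

```python
import numpy as np

def gradient_descent_backtracking(f, grad, x0, alpha0=1.0, c=1e-4,
                                  shrink=0.5, tol=1e-8, max_iter=1000):
    """Gradient descent with a backtracking (Armijo) line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        alpha = alpha0
        # Shrink alpha until f(x - alpha*g) <= f(x) - c * alpha * ||g||^2.
        while f(x - alpha * g) > f(x) - c * alpha * g.dot(g):
            alpha *= shrink
        x = x - alpha * g
    return x
```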
