In machine learning and statistics, the learning rate is a tuning parameter in an optimization algorithm that determines the step size at each iteration while moving toward a minimum of a loss function. In the adaptive control literature, the learning rate is commonly referred to as gain.
In setting a learning rate, there is a trade-off between the rate of convergence and overshooting. While the descent direction is usually determined from the gradient of the loss function, the learning rate determines how big a step is taken in that direction. Too high a learning rate will make the learning jump over minima, but too low a learning rate will either take too long to converge or get stuck in an undesirable local minimum.
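To make the trade-off concrete, here is a minimal sketch (illustrative names, not from any library) of gradient descent on the one-dimensional loss f(x) = x², showing how the same loop converges, diverges, or stalls depending only on the learning rate:

```python
# Gradient descent on f(x) = x**2, whose gradient is 2*x; the learning
# rate lr scales how far each iteration moves against the gradient.

def gradient_descent(lr, steps=20, x=5.0):
    for _ in range(steps):
        grad = 2 * x       # gradient of f at the current point
        x = x - lr * grad  # step of size lr in the descent direction
    return x

print(gradient_descent(lr=0.1))   # approaches the minimum at x = 0
print(gradient_descent(lr=1.1))   # too high: the iterates overshoot and diverge
print(gradient_descent(lr=1e-4))  # too low: barely moves in 20 steps
```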
To achieve faster convergence and to prevent oscillations and getting stuck in undesirable local minima, the learning rate is often varied during training, either in accordance with a learning rate schedule or by using an adaptive learning rate. The learning rate and its adjustments may also differ per parameter, in which case they form a diagonal matrix that can be interpreted as an approximation to the inverse of the Hessian matrix in Newton's method. The learning rate is related to the step length determined by inexact line search in quasi-Newton methods and related optimization algorithms.
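To line these updates up symbolically (writing $\theta$ for the parameters, $L$ for the loss and $H$ for the Hessian): plain gradient descent takes $\theta_{n+1} = \theta_n - \eta\,\nabla L(\theta_n)$ with a scalar rate $\eta$; per-parameter rates replace $\eta$ with a diagonal matrix $D$, giving $\theta_{n+1} = \theta_n - D\,\nabla L(\theta_n)$; and Newton's method takes $\theta_{n+1} = \theta_n - H^{-1}\nabla L(\theta_n)$, so the diagonal case amounts to choosing $D \approx H^{-1}$.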
A learning rate schedule changes the learning rate during learning, most often between epochs/iterations. There are many different learning rate schedules, but the most common are time-based, step-based and exponential. Decay serves to settle the learning near a minimum and avoid oscillations, a situation that may arise when a too-high constant learning rate makes the learning jump back and forth over a minimum; the amount of decay is controlled by a hyperparameter.
Momentum both speeds up the learning (increasing the learning rate) when the error cost gradient is heading in the same direction for a long time, and avoids local minima by 'rolling over' small bumps. The formula for factoring in the momentum is more complex than for decay, but is most often built into deep learning libraries such as Keras.
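As a rough sketch of the idea (the exact formula differs between libraries; the names and the momentum coefficient beta here are assumptions for illustration):

```python
# Classical momentum update: a velocity term accumulates past gradients, so
# steps lengthen when gradients keep pointing the same way, and small bumps
# in the loss surface are "rolled over".

def momentum_step(x, velocity, grad, lr=0.01, beta=0.9):
    velocity = beta * velocity - lr * grad  # decayed running sum of gradients
    x = x + velocity                        # move along the accumulated velocity
    return x, velocity
```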
Time-based learning schedules alter the learning rate depending on the learning rate of the previous time iteration. Factoring in the decay, the mathematical formula for the learning rate is:

$$\eta_{n+1} = \frac{\eta_n}{1 + dn}$$

where $\eta_n$ is the learning rate, $d$ is a decay parameter and $n$ is the iteration step.
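A direct translation of this formula into a short sketch (the value d = 0.01 is an arbitrary choice for illustration):

```python
# Time-based decay: the new rate is computed from the previous iteration's
# rate as eta_{n+1} = eta_n / (1 + d * n).

def time_based_decay(prev_lr, n, d=0.01):
    return prev_lr / (1.0 + d * n)

lr = 0.1
for n in range(5):
    lr = time_based_decay(lr, n)
    print(n, lr)  # the rate shrinks a little more each iteration
```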
Step-based learning schedules change the learning rate according to some predefined steps:

$$\eta_n = \eta_0 d^{\left\lfloor \frac{1+n}{r} \right\rfloor}$$

where $\eta_n$ is the learning rate at iteration $n$, $\eta_0$ is the initial learning rate, $d$ is how much the learning rate should change at each drop (0.5 corresponds to a halving) and $r$ corresponds to the drop rate, or how often the rate should be dropped (10 corresponds to a drop every 10 iterations).
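The same schedule as a sketch, with d and r mirroring the examples just given:

```python
import math

# Step-based decay: eta_n = eta_0 * d ** floor((1 + n) / r); with d = 0.5 and
# r = 10 the rate is halved roughly every 10 iterations.

def step_based_decay(lr0, n, d=0.5, r=10):
    return lr0 * d ** math.floor((1 + n) / r)

print(step_based_decay(0.1, n=0))   # 0.1
print(step_based_decay(0.1, n=10))  # 0.05
print(step_based_decay(0.1, n=20))  # 0.025
```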
The issue with learning rate schedules is that they all depend on hyperparameters that must be manually chosen for each given learning session and may vary greatly depending on the problem at hand or the model used. To combat this, there are many different types of adaptive gradient descent algorithms such as Adagrad, Adadelta, RMSprop, and Adam, which are generally built into deep learning libraries such as Keras.
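For instance, a minimal sketch of selecting one of these optimizers in Keras (the tiny model here is only a placeholder):

```python
import tensorflow as tf

# Picking an adaptive optimizer in Keras: Adam keeps running estimates of
# gradient moments and derives a per-parameter step size, so no hand-tuned
# schedule is required.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="mse")
```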
