<s>
In	O
machine	O
learning	O
,	O
hyperparameter	B-General_Concept
optimization	I-General_Concept
or	O
tuning	O
is	O
the	O
problem	O
of	O
choosing	O
a	O
set	O
of	O
optimal	O
hyperparameters	B-General_Concept
for	O
a	O
learning	O
algorithm	O
.	O
</s>
<s>
A	O
hyperparameter	B-General_Concept
is	O
a	O
parameter	O
whose	O
value	O
is	O
used	O
to	O
control	O
the	O
learning	O
process	O
.	O
</s>
<s>
These	O
measures	O
are	O
called	O
hyperparameters	B-General_Concept
,	O
and	O
have	O
to	O
be	O
tuned	O
so	O
that	O
the	O
model	O
can	O
optimally	O
solve	O
the	O
machine	O
learning	O
problem	O
.	O
</s>
<s>
Hyperparameter	B-General_Concept
optimization	I-General_Concept
finds	O
a	O
tuple	O
of	O
hyperparameters	B-General_Concept
that	O
yields	O
an	O
optimal	O
model	O
which	O
minimizes	O
a	O
predefined	O
loss	O
function	O
on	O
given	O
independent	O
data	O
.	O
</s>
<s>
The	O
objective	O
function	O
takes	O
a	O
tuple	O
of	O
hyperparameters	B-General_Concept
and	O
returns	O
the	O
associated	O
loss	O
.	O
</s>
<s>
Cross-validation	B-Application
is	O
often	O
used	O
to	O
estimate	O
this	O
generalization	O
performance	O
.	O
</s>
<s>
The	O
traditional	O
way	O
of	O
performing	O
hyperparameter	B-General_Concept
optimization	I-General_Concept
has	O
been	O
grid	O
search	O
,	O
or	O
a	O
parameter	O
sweep	O
,	O
which	O
is	O
simply	O
an	O
exhaustive	B-Algorithm
search	I-Algorithm
through	O
a	O
manually	O
specified	O
subset	O
of	O
the	O
hyperparameter	B-General_Concept
space	O
of	O
a	O
learning	O
algorithm	O
.	O
</s>
<s>
For	O
example	O
,	O
a	O
typical	O
soft-margin	O
SVM	B-Algorithm
classifier	B-General_Concept
equipped	O
with	O
an	O
RBF	B-Algorithm
kernel	I-Algorithm
has	O
at	O
least	O
two	O
hyperparameters	B-General_Concept
that	O
need	O
to	O
be	O
tuned	O
for	O
good	O
performance	O
on	O
unseen	O
data	O
:	O
a	O
regularization	O
constant	O
C	O
and	O
a	O
kernel	O
hyperparameter	B-General_Concept
γ	O
.	O
</s>
<s>
Grid	O
search	O
then	O
trains	O
an	O
SVM	B-Algorithm
with	O
each	O
pair	O
(	O
C	O
,	O
γ	O
)	O
in	O
the	O
Cartesian	O
product	O
of	O
these	O
two	O
sets	O
and	O
evaluates	O
their	O
performance	O
on	O
a	O
held-out	O
validation	O
set	O
(	O
or	O
by	O
internal	O
cross-validation	B-Application
on	O
the	O
training	O
set	O
,	O
in	O
which	O
case	O
multiple	O
SVMs	B-Algorithm
are	O
trained	O
per	O
pair	O
)	O
.	O
</s>
<s>
Grid	O
search	O
suffers	O
from	O
the	O
curse	B-General_Concept
of	I-General_Concept
dimensionality	I-General_Concept
,	O
but	O
is	O
often	O
embarrassingly	B-General_Concept
parallel	I-General_Concept
because	O
the	O
hyperparameter	B-General_Concept
settings	O
it	O
evaluates	O
are	O
typically	O
independent	O
of	O
each	O
other	O
.	O
</s>
<s>
Random	O
search	O
can	O
outperform	O
grid	O
search	O
,	O
especially	O
when	O
only	O
a	O
small	O
number	O
of	O
hyperparameters	B-General_Concept
affects	O
the	O
final	O
performance	O
of	O
the	O
machine	O
learning	O
algorithm	O
.	O
</s>
<s>
Random	O
search	O
is	O
also	O
embarrassingly	B-General_Concept
parallel	I-General_Concept
,	O
and	O
additionally	O
allows	O
the	O
inclusion	O
of	O
prior	O
knowledge	O
by	O
specifying	O
the	O
distribution	O
from	O
which	O
to	O
sample	O
.	O
</s>
<s>
Despite	O
its	O
simplicity	O
,	O
random	O
search	O
remains	O
one	O
of	O
the	O
important	O
baselines	O
against	O
which	O
to	O
compare	O
the	O
performance	O
of	O
new	O
hyperparameter	B-General_Concept
optimization	I-General_Concept
methods	O
.	O
</s>
<s>
Applied	O
to	O
hyperparameter	B-General_Concept
optimization	I-General_Concept
,	O
Bayesian	O
optimization	O
builds	O
a	O
probabilistic	O
model	O
of	O
the	O
function	O
mapping	O
from	O
hyperparameter	B-General_Concept
values	O
to	O
the	O
objective	O
evaluated	O
on	O
a	O
validation	O
set	O
.	O
</s>
<s>
By	O
iteratively	O
evaluating	O
a	O
promising	O
hyperparameter	B-General_Concept
configuration	O
based	O
on	O
the	O
current	O
model	O
,	O
and	O
then	O
updating	O
it	O
,	O
Bayesian	O
optimization	O
aims	O
to	O
gather	O
observations	O
revealing	O
as	O
much	O
information	O
as	O
possible	O
about	O
this	O
function	O
and	O
,	O
in	O
particular	O
,	O
the	O
location	O
of	O
the	O
optimum	O
.	O
</s>
<s>
It	O
tries	O
to	O
balance	O
exploration	O
(	O
hyperparameters	B-General_Concept
for	O
which	O
the	O
outcome	O
is	O
most	O
uncertain	O
)	O
and	O
exploitation	O
(	O
hyperparameters	B-General_Concept
expected	O
close	O
to	O
the	O
optimum	O
)	O
.	O
</s>
<s>
For	O
specific	O
learning	O
algorithms	O
,	O
it	O
is	O
possible	O
to	O
compute	O
the	O
gradient	O
with	O
respect	O
to	O
hyperparameters	B-General_Concept
and	O
then	O
optimize	O
the	O
hyperparameters	B-General_Concept
using	O
gradient	B-Algorithm
descent	I-Algorithm
.	O
</s>
<s>
Since	O
then	O
,	O
these	O
methods	O
have	O
been	O
extended	O
to	O
other	O
models	O
such	O
as	O
support	B-Algorithm
vector	I-Algorithm
machines	I-Algorithm
or	O
logistic	O
regression	O
.	O
</s>
<s>
A	O
different	O
approach	O
to	O
obtaining	O
a	O
gradient	O
with	O
respect	O
to	O
hyperparameters	B-General_Concept
consists	O
in	O
differentiating	O
the	O
steps	O
of	O
an	O
iterative	O
optimization	O
algorithm	O
using	O
automatic	B-Algorithm
differentiation	I-Algorithm
.	O
</s>
<s>
The	O
method	O
scales	O
to	O
millions	O
of	O
hyperparameters	B-General_Concept
and	O
requires	O
constant	O
memory	O
.	O
</s>
<s>
One	O
of	O
the	O
advantages	O
of	O
this	O
method	O
is	O
that	O
it	O
can	O
handle	O
discrete	O
hyperparameters	B-General_Concept
as	O
well	O
.	O
</s>
<s>
Apart	O
from	O
hypernetwork	O
approaches	O
,	O
gradient-based	O
methods	O
can	O
be	O
used	O
to	O
optimize	O
discrete	O
hyperparameters	B-General_Concept
also	O
by	O
adopting	O
a	O
continuous	O
relaxation	O
of	O
the	O
parameters	O
.	O
</s>
<s>
Such	O
methods	O
have	O
been	O
extensively	O
used	O
for	O
the	O
optimization	O
of	O
architecture	O
hyperparameters	B-General_Concept
in	O
neural	B-General_Concept
architecture	I-General_Concept
search	I-General_Concept
.	O
</s>
<s>
In	O
hyperparameter	B-General_Concept
optimization	I-General_Concept
,	O
evolutionary	O
optimization	O
uses	O
evolutionary	B-Algorithm
algorithms	I-Algorithm
to	O
search	O
the	O
space	O
of	O
hyperparameters	B-General_Concept
for	O
a	O
given	O
algorithm	O
.	O
</s>
<s>
Evolutionary	O
hyperparameter	B-General_Concept
optimization	I-General_Concept
follows	O
a	O
process	O
inspired	O
by	O
the	O
biological	O
concept	O
of	O
evolution	O
:	O
</s>
<s>
Evolutionary	O
optimization	O
has	O
been	O
used	O
in	O
hyperparameter	B-General_Concept
optimization	I-General_Concept
for	O
statistical	O
machine	O
learning	O
algorithms	O
,	O
automated	B-General_Concept
machine	I-General_Concept
learning	I-General_Concept
,	O
typical	O
neural	O
network	O
and	O
deep	O
neural	O
network	O
architecture	B-General_Concept
search	I-General_Concept
,	O
as	O
well	O
as	O
training	O
of	O
the	O
weights	O
in	O
deep	O
neural	O
networks	O
.	O
</s>
<s>
Population	O
Based	O
Training	O
(	O
PBT	O
)	O
learns	O
both	O
hyperparameter	B-General_Concept
values	O
and	O
network	O
weights	O
.	O
</s>
<s>
Multiple	O
learning	O
processes	O
operate	O
independently	O
,	O
using	O
different	O
hyperparameters	B-General_Concept
.	O
</s>
<s>
As	O
with	O
evolutionary	B-Algorithm
methods	I-Algorithm
,	O
poorly	O
performing	O
models	O
are	O
iteratively	O
replaced	O
with	O
models	O
that	O
adopt	O
modified	O
hyperparameter	B-General_Concept
values	O
and	O
weights	O
based	O
on	O
the	O
better	O
performers	O
.	O
</s>
<s>
This	O
warm	O
starting	O
of	O
replacement	O
models	O
is	O
the	O
primary	O
differentiator	O
between	O
PBT	O
and	O
other	O
evolutionary	B-Algorithm
methods	I-Algorithm
.	O
</s>
<s>
PBT	O
thus	O
allows	O
the	O
hyperparameters	B-General_Concept
to	O
evolve	O
and	O
eliminates	O
the	O
need	O
for	O
manual	O
hypertuning	O
.	O
</s>
<s>
PBT	O
and	O
its	O
variants	O
are	O
adaptive	O
methods	O
:	O
they	O
update	O
hyperparameters	B-General_Concept
during	O
the	O
training	O
of	O
the	O
models	O
.	O
</s>
<s>
In	O
contrast	O
,	O
non-adaptive	O
methods	O
have	O
the	O
sub-optimal	O
strategy	O
of	O
assigning	O
a	O
constant	O
set	O
of	O
hyperparameters	B-General_Concept
for	O
the	O
whole	O
training	O
.	O
</s>
<s>
A	O
class	O
of	O
early	O
stopping-based	O
hyperparameter	B-General_Concept
optimization	I-General_Concept
algorithms	O
is	O
purpose-built	O
for	O
large	O
search	O
spaces	O
of	O
continuous	O
and	O
discrete	O
hyperparameters	B-General_Concept
,	O
particularly	O
when	O
the	O
computational	O
cost	O
to	O
evaluate	O
the	O
performance	O
of	O
a	O
set	O
of	O
hyperparameters	B-General_Concept
is	O
high	O
.	O
</s>
<s>
Another	O
early	O
stopping	O
hyperparameter	B-General_Concept
optimization	I-General_Concept
algorithm	O
is	O
successive	O
halving	O
(	O
SHA	O
)	O
,	O
which	O
begins	O
as	O
a	O
random	O
search	O
but	O
periodically	O
prunes	O
low-performing	O
models	O
,	O
thereby	O
focusing	O
computational	O
resources	O
on	O
more	O
promising	O
models	O
.	O
</s>
<s>
RBF	B-Algorithm
and	O
spectral	B-Algorithm
approaches	O
have	O
also	O
been	O
developed	O
.	O
</s>
