<s>
Neural	B-General_Concept
architecture	I-General_Concept
search	I-General_Concept
(	O
NAS	O
)	O
is	O
a	O
technique	O
for	O
automating	O
the	O
design	O
of	O
artificial	B-Architecture
neural	I-Architecture
networks	I-Architecture
(	O
ANN	O
)	O
,	O
a	O
widely	O
used	O
model	O
in	O
the	O
field	O
of	O
machine	O
learning	O
.	O
</s>
<s>
NAS	O
is	O
closely	O
related	O
to	O
hyperparameter	B-General_Concept
optimization	I-General_Concept
and	O
meta-learning	B-General_Concept
and	O
is	O
a	O
subfield	O
of	O
automated	B-General_Concept
machine	I-General_Concept
learning	I-General_Concept
(	O
AutoML	B-General_Concept
)	O
.	O
</s>
<s>
Barret	O
Zoph	O
and	O
Quoc	O
Viet	O
Le	O
applied	O
NAS	O
with	O
RL	O
targeting	O
the	O
CIFAR-10	B-General_Concept
dataset	O
and	O
achieved	O
a	O
network	O
architecture	O
that	O
rivals	O
the	O
best	O
manually-designed	O
architecture	O
for	O
accuracy	O
,	O
with	O
an	O
error	O
rate	O
of	O
3.65	O
,	O
0.09	O
percent	O
better	O
and	O
1.05x	O
faster	O
than	O
a	O
related	O
hand-designed	O
model	O
.	O
</s>
<s>
On	O
the	O
Penn	B-General_Concept
Treebank	I-General_Concept
dataset	O
,	O
that	O
model	O
composed	O
a	O
recurrent	O
cell	O
that	O
outperforms	O
LSTM	B-Algorithm
,	O
reaching	O
a	O
test	O
set	O
perplexity	O
of	O
62.4	O
,	O
or	O
3.6	O
perplexity	O
better	O
than	O
the	O
prior	O
leading	O
system	O
.	O
</s>
<s>
The	O
design	O
was	O
constrained	O
to	O
use	O
two	O
types	O
of	O
convolutional	B-Architecture
cells	O
to	O
return	O
feature	O
maps	O
that	O
serve	O
two	O
main	O
functions	O
when	O
convolving	O
an	O
input	O
feature	O
map	O
:	O
normal	O
cells	O
that	O
return	O
maps	O
of	O
the	O
same	O
extent	O
(	O
height	O
and	O
width	O
)	O
and	O
reduction	O
cells	O
in	O
which	O
the	O
returned	O
feature	O
map	O
height	O
and	O
width	O
are	O
reduced	O
by	O
a	O
factor	O
of	O
two	O
.	O
</s>
<s>
In	O
the	O
studied	O
example	O
,	O
the	O
best	O
convolutional	B-Architecture
layer	O
(	O
or	O
"	O
cell	O
"	O
)	O
was	O
designed	O
for	O
the	O
CIFAR-10	B-General_Concept
dataset	O
and	O
then	O
applied	O
to	O
the	O
ImageNet	B-General_Concept
dataset	O
by	O
stacking	O
copies	O
of	O
this	O
cell	O
,	O
each	O
with	O
its	O
own	O
parameters	O
.	O
</s>
<s>
In	O
the	O
so-called	O
Efficient	B-General_Concept
Neural	I-General_Concept
Architecture	I-General_Concept
Search	I-General_Concept
(	O
ENAS	O
)	O
,	O
a	O
controller	O
discovers	O
architectures	O
by	O
learning	O
to	O
search	O
for	O
an	O
optimal	O
subgraph	O
within	O
a	O
large	O
graph	O
.	O
</s>
<s>
On	O
CIFAR-10	B-General_Concept
,	O
the	O
ENAS	O
design	O
achieved	O
a	O
test	O
error	O
of	O
2.89	O
%	O
,	O
comparable	O
to	O
NASNet	O
.	O
</s>
<s>
An	O
alternative	O
approach	O
to	O
NAS	O
is	O
based	O
on	O
evolutionary	B-Algorithm
algorithms	I-Algorithm
,	O
which	O
have	O
been	O
employed	O
by	O
several	O
groups	O
.	O
</s>
<s>
An	O
Evolutionary	B-Algorithm
Algorithm	I-Algorithm
for	O
Neural	B-General_Concept
Architecture	I-General_Concept
Search	I-General_Concept
generally	O
performs	O
the	O
following	O
procedure	O
.	O
</s>
<s>
On	O
CIFAR-10	B-General_Concept
and	O
ImageNet	B-General_Concept
,	O
evolution	O
and	O
RL	O
performed	O
comparably	O
,	O
while	O
both	O
slightly	O
outperformed	O
random	B-Algorithm
search	I-Algorithm
.	O
</s>
<s>
Bayesian	B-Algorithm
Optimization	I-Algorithm
,	O
which	O
has	O
proven	O
to	O
be	O
an	O
efficient	O
method	O
for	O
hyperparameter	B-General_Concept
optimization	I-General_Concept
,	O
can	O
also	O
be	O
applied	O
to	O
NAS	O
.	O
</s>
<s>
Another	O
group	O
used	O
a	O
hill	B-Algorithm
climbing	I-Algorithm
procedure	O
that	O
applies	O
network	O
morphisms	O
,	O
followed	O
by	O
short	O
cosine-annealing	O
optimization	O
runs	O
.	O
</s>
<s>
E.g.	O
,	O
on	O
CIFAR-10	B-General_Concept
,	O
the	O
method	O
designed	O
and	O
trained	O
a	O
network	O
with	O
an	O
error	O
rate	O
below	O
5%	O
in	O
12	O
hours	O
on	O
a	O
single	O
GPU	O
.	O
</s>
<s>
LEMONADE	O
is	O
an	O
evolutionary	B-Algorithm
algorithm	I-Algorithm
that	O
adopted	O
Lamarckism	O
to	O
efficiently	O
optimize	O
multiple	O
objectives	O
.	O
</s>
<s>
For	O
example	O
,	O
FBNet	O
(	O
which	O
is	O
short	O
for	O
Facebook	O
Berkeley	O
Network	O
)	O
demonstrated	O
that	O
supernetwork-based	O
search	O
produces	O
networks	O
that	O
outperform	O
the	O
speed-accuracy	O
tradeoff	O
curve	O
of	O
mNASNet	O
and	O
MobileNetV2	O
on	O
the	O
ImageNet	B-General_Concept
image-classification	O
dataset	O
.	O
</s>
<s>
Further	O
,	O
SqueezeNAS	O
demonstrated	O
that	O
supernetwork-based	O
NAS	O
produces	O
neural	B-Architecture
networks	I-Architecture
that	O
outperform	O
the	O
speed-accuracy	O
tradeoff	O
curve	O
of	O
MobileNetV3	O
on	O
the	O
Cityscapes	B-General_Concept
semantic	O
segmentation	O
dataset	O
,	O
and	O
SqueezeNAS	O
uses	O
over	O
100x	O
less	O
search	O
time	O
than	O
was	O
used	O
in	O
the	O
MobileNetV3	O
authors	O
'	O
RL-based	O
search	O
.	O
</s>
<s>
Neural	B-General_Concept
architecture	I-General_Concept
search	I-General_Concept
often	O
requires	O
large	O
computational	O
resources	O
,	O
due	O
to	O
its	O
expensive	O
training	O
and	O
evaluation	O
phases	O
.	O
</s>
<s>
A	O
surrogate	O
benchmark	O
uses	O
a	O
surrogate	O
model	O
(	O
e.g.	O
a	O
neural	B-Architecture
network	I-Architecture
)	O
to	O
predict	O
the	O
performance	O
of	O
an	O
architecture	O
from	O
the	O
search	O
space	O
.	O
</s>
