<s>
In	O
machine	O
learning	O
,	O
knowledge	B-Algorithm
distillation	I-Algorithm
is	O
the	O
process	O
of	O
transferring	O
knowledge	O
from	O
a	O
large	O
model	O
to	O
a	O
smaller	O
one	O
.	O
</s>
<s>
While	O
large	O
models	O
(	O
such	O
as	O
very	O
deep	O
neural	B-Architecture
networks	I-Architecture
or	O
ensembles	O
of	O
many	O
models	O
)	O
have	O
higher	O
knowledge	O
capacity	O
than	O
small	O
models	O
,	O
this	O
capacity	O
might	O
not	O
be	O
fully	O
utilized	O
.	O
</s>
<s>
Knowledge	B-Algorithm
distillation	I-Algorithm
transfers	O
knowledge	O
from	O
a	O
large	O
model	O
to	O
a	O
smaller	O
model	O
without	O
loss	O
of	O
validity	O
.	O
</s>
<s>
As	O
smaller	O
models	O
are	O
less	O
expensive	O
to	O
evaluate	O
,	O
they	O
can	O
be	O
deployed	O
on	O
less	O
powerful	O
hardware	O
(	O
such	O
as	O
a	O
mobile	B-Application
device	I-Application
)	O
.	O
</s>
<s>
Knowledge	B-Algorithm
distillation	I-Algorithm
has	O
been	O
successfully	O
used	O
in	O
several	O
applications	O
of	O
machine	O
learning	O
such	O
as	O
object	B-General_Concept
detection	I-General_Concept
,	O
acoustic	B-General_Concept
models	I-General_Concept
,	O
and	O
natural	B-General_Concept
language	I-General_Concept
processing	I-General_Concept
.	O
</s>
<s>
Recently	O
,	O
it	O
has	O
also	O
been	O
introduced	O
to	O
graph	O
neural	B-Architecture
networks	I-Architecture
applicable	O
to	O
non-grid	O
data	O
.	O
</s>
<s>
However	O
,	O
some	O
information	O
about	O
a	O
concise	O
knowledge	O
representation	O
is	O
encoded	O
in	O
the	O
pseudolikelihoods	B-General_Concept
assigned	O
to	O
its	O
output	O
:	O
when	O
a	O
model	O
correctly	O
predicts	O
a	O
class	O
,	O
it	O
assigns	O
a	O
large	O
value	O
to	O
the	O
output	O
variable	O
corresponding	O
to	O
such	O
class	O
,	O
and	O
smaller	O
values	O
to	O
the	O
other	O
output	O
variables	O
.	O
</s>
<s>
Therefore	O
,	O
the	O
goal	O
of	O
economical	O
deployment	O
of	O
a	O
valid	O
model	O
can	O
be	O
achieved	O
by	O
training	O
only	O
the	O
large	O
model	O
on	O
the	O
data	O
,	O
exploiting	O
its	O
better	O
ability	O
to	O
learn	O
concise	O
knowledge	O
representations	O
,	O
and	O
then	O
distilling	O
such	O
knowledge	O
into	O
the	O
smaller	O
model	O
,	O
that	O
would	O
not	O
be	O
able	O
to	O
learn	O
it	O
on	O
its	O
own	O
,	O
by	O
training	O
it	O
to	O
learn	O
the	O
soft	O
output	O
of	O
the	O
large	O
model	O
.	O
</s>
<s>
A	O
first	O
example	O
of	O
distilling	O
an	O
artificial	B-Architecture
neural	I-Architecture
network	I-Architecture
into	O
another	O
network	O
dates	O
back	O
to	O
1992	O
,	O
when	O
Juergen	O
Schmidhuber	O
compressed	O
or	O
collapsed	O
a	O
hierarchy	O
of	O
recurrent	B-Architecture
neural	I-Architecture
networks	I-Architecture
(	O
RNNs	O
)	O
into	O
a	O
single	O
RNN	O
,	O
by	O
distilling	O
a	O
higher	O
level	O
chunker	O
network	O
into	O
a	O
lower	O
level	O
automatizer	O
network	O
.	O
</s>
<s>
A	O
related	O
methodology	O
to	O
compress	O
the	O
knowledge	O
of	O
multiple	O
models	O
into	O
a	O
single	O
neural	B-Architecture
network	I-Architecture
was	O
called	O
model	O
compression	O
in	O
2006	O
.	O
</s>
<s>
Compression	O
was	O
achieved	O
by	O
training	O
a	O
smaller	O
model	O
on	O
large	O
amounts	O
of	O
pseudo-data	O
labelled	O
by	O
a	O
higher-performing	O
ensemble	B-General_Concept
,	O
optimising	O
to	O
match	O
the	O
logit	O
of	O
the	O
compressed	O
model	O
to	O
the	O
logit	O
of	O
the	O
ensemble	B-General_Concept
.	O
</s>
<s>
Knowledge	B-Algorithm
distillation	I-Algorithm
is	O
a	O
generalisation	O
of	O
such	O
approach	O
,	O
introduced	O
by	O
Geoffrey	O
Hinton	O
et	O
al	O
.	O
in	O
2015	O
,	O
in	O
a	O
preprint	O
that	O
formulated	O
the	O
concept	O
and	O
showed	O
some	O
results	O
achieved	O
in	O
the	O
task	O
of	O
image	B-General_Concept
classification	I-General_Concept
.	O
</s>
<s>
Knowledge	B-Algorithm
distillation	I-Algorithm
is	O
also	O
related	O
to	O
the	O
concept	O
of	O
behavioral	B-General_Concept
cloning	I-General_Concept
discussed	O
by	O
Faraz	O
Torabi	O
et	O
al	O
.	O
</s>
<s>
where	O
is	O
a	O
parameter	O
called	O
temperature	O
,	O
that	O
for	O
a	O
standard	O
softmax	B-Algorithm
is	O
normally	O
set	O
to	O
1	O
.	O
</s>
<s>
The	O
softmax	B-Algorithm
operator	O
converts	O
the	O
logit	O
values	O
to	O
pseudo-probabilities	O
,	O
and	O
higher	O
values	O
of	O
temperature	O
have	O
the	O
effect	O
of	O
generating	O
a	O
softer	O
distribution	O
of	O
pseudo-probabilities	O
among	O
the	O
output	O
classes	O
.	O
</s>
<s>
In	O
this	O
context	O
,	O
a	O
high	O
temperature	O
increases	O
the	O
entropy	O
of	O
the	O
output	O
,	O
and	O
therefore	O
provides	O
more	O
information	O
to	O
learn	O
for	O
the	O
distilled	O
model	O
compared	O
to	O
hard	O
targets	O
,	O
at	O
the	O
same	O
time	O
reducing	O
the	O
variance	O
of	O
the	O
gradient	O
between	O
different	O
records	O
and	O
therefore	O
allowing	O
higher	O
learning	B-General_Concept
rates	I-General_Concept
.	O
</s>
<s>
Under	O
the	O
assumption	O
that	O
the	O
logits	O
have	O
zero	O
mean	O
,	O
it	O
is	O
possible	O
to	O
show	O
that	O
model	O
compression	O
is	O
a	O
special	O
case	O
of	O
knowledge	B-Algorithm
distillation	I-Algorithm
.	O
</s>
