In information theory, perplexity is a measurement of how well a probability distribution or probability model predicts a sample.
A low perplexity indicates the probability distribution is good at predicting the sample.
The perplexity of a discrete probability distribution p may be defined as 2^H(p) = 2^(−Σ_x p(x) log₂ p(x)), where H(p) is the entropy of the distribution in bits. (The base need not be 2: the perplexity is independent of the base, provided that the entropy and the exponentiation use the same base.)
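The base independence can be checked numerically. Below is a minimal sketch of a `perplexity` helper (the function name and the example distribution are illustrative, not from the source):

```python
import math

def perplexity(p, base=2.0):
    """Perplexity of a discrete distribution p: base ** entropy(p),
    with the entropy computed in that same base."""
    entropy = -sum(px * math.log(px, base) for px in p if px > 0)
    return base ** entropy

dist = [0.5, 0.25, 0.125, 0.125]
print(perplexity(dist, base=2))       # 2 ** 1.75 ≈ 3.364
print(perplexity(dist, base=math.e))  # same value with base e
```

Both calls return the same number, since the change of base in the entropy is exactly undone by the change of base in the exponentiation.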
The perplexity of a random variable X may be defined as the perplexity of the distribution over its possible values x.
In the special case where p models a fair k-sided die (a uniform distribution over k discrete events), its perplexity is k. A random variable with perplexity k has the same uncertainty as a fair k-sided die, and one is said to be "k-ways perplexed" about the value of the random variable.
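The fair-die special case makes a convenient sanity check. A minimal sketch (helper name and die size chosen for illustration):

```python
import math

def perplexity(p):
    """Base-2 perplexity: 2 ** H(p), with H(p) in bits."""
    return 2 ** -sum(px * math.log2(px) for px in p if px > 0)

# A fair 6-sided die: uniform distribution over k = 6 outcomes.
k = 6
fair_die = [1.0 / k] * k
print(perplexity(fair_die))  # 6.0 (up to floating-point error)
```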
Perplexity is sometimes used as a measure of how hard a prediction problem is.
This is not always accurate: consider a distribution with two outcomes, one with probability 0.9 and the other with probability 0.1, so that an optimal strategy guesses correctly 90 percent of the time. The perplexity is 2^(−0.9 log₂ 0.9 − 0.1 log₂ 0.1) ≈ 1.38.
The inverse of the perplexity (which, in the case of the fair k-sided die, represents the probability of guessing correctly) is 1/1.38 = 0.72, not 0.9.
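The two-outcome numbers above are easy to reproduce directly:

```python
import math

# Two outcomes with probabilities 0.9 and 0.1.
p = [0.9, 0.1]
ppl = 2 ** -sum(px * math.log2(px) for px in p)
print(round(ppl, 2))      # 1.38
print(round(1 / ppl, 2))  # 0.72 -- not 0.9
```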
The perplexity is the exponentiation of the entropy, which is a more clear-cut quantity.
A model q of an unknown probability distribution p may be evaluated by asking how well it predicts a separate test sample x_1, x_2, ..., x_N drawn from p; the perplexity of the model q is 2^(−(1/N) Σ_i log₂ q(x_i)). Better models assign higher probabilities q(x_i) to the test events. Thus, they have lower perplexity: they are less surprised by the test sample.
The exponent above may be regarded as the average number of bits needed to represent a test event x_i if one uses an optimal code based on q. Low-perplexity models do a better job of compressing the test sample, requiring few bits per test element on average because q(x_i) tends to be high.
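Model perplexity over a test sample can be sketched as follows (the model, symbols, and sample here are hypothetical toy values):

```python
import math

def model_perplexity(q, test_sample):
    """Perplexity of model q (a dict event -> probability) on a test
    sample: 2 ** (average negative log2-probability per test event)."""
    n = len(test_sample)
    cross_entropy = -sum(math.log2(q[x]) for x in test_sample) / n
    return 2 ** cross_entropy

# Hypothetical model over three symbols, evaluated on a 4-event sample.
q = {"a": 0.5, "b": 0.25, "c": 0.25}
sample = ["a", "a", "b", "c"]
print(model_perplexity(q, sample))  # 2 ** 1.5, i.e. about 2.83
```

The exponent (1.5 bits here) is exactly the average code length an optimal code based on q would spend per test event.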
The exponent may also be written as a cross-entropy, H(p̃, q) = −Σ_x p̃(x) log₂ q(x), where p̃ denotes the empirical distribution of the test sample (i.e., p̃(x) = n/N if x appeared n times in the test sample of size N).
Consequently, the perplexity is minimized when q = p̃.
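This minimization can be illustrated on a toy sample by comparing the empirical distribution against any other model (sample and distributions below are made up for illustration):

```python
import math

def model_perplexity(q, sample):
    """Base-2 perplexity of model q on a sample of discrete events."""
    n = len(sample)
    return 2 ** (-sum(math.log2(q[x]) for x in sample) / n)

sample = ["a", "a", "a", "b"]
empirical = {"a": 0.75, "b": 0.25}  # p~ : counts / N
other = {"a": 0.5, "b": 0.5}        # any other model

print(model_perplexity(empirical, sample))  # about 1.75 -- the minimum
print(model_perplexity(other, sample))      # 2.0 -- strictly higher
```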
In natural language processing, a corpus is a set of sentences or texts, and a language model is a probability distribution over entire sentences or texts.
Consequently, we can define the perplexity of a language model over a corpus.
However, in NLP, the more commonly used measure is perplexity per word, defined as (Π_i q(s_i))^(−1/N) = 2^(−(1/N) Σ_i log₂ q(s_i)), where s_1, ..., s_m are the sentences in the corpus and N is the number of words in the corpus.
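Normalizing by the word count rather than the sentence count can be sketched like this (the sentence log-probabilities and word count are hypothetical):

```python
def per_word_perplexity(sentence_log2_probs, num_words):
    """Per-word perplexity from log2-probabilities of whole sentences,
    normalized by the total number of words N in the corpus."""
    return 2 ** (-sum(sentence_log2_probs) / num_words)

# Hypothetical corpus: two sentences totalling 8 words, with model
# log2-probabilities of -10 and -14 bits respectively.
print(per_word_perplexity([-10.0, -14.0], num_words=8))  # 2 ** 3 = 8.0
```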
Suppose the average sentence x_i in the corpus has a probability of 2^(−190) according to the language model.
This would give an enormous model perplexity of 2^190 per sentence.
Thus, if the test sample's sentences comprised a total of 1,000 words, and could be coded using a total of 7.95 bits per word, one could report a model perplexity of 2^7.95 = 247 per word.
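The conversion between bits per word (cross-entropy) and per-word perplexity is a one-liner in either direction:

```python
import math

bits_per_word = 7.95
perplexity = 2 ** bits_per_word        # cross-entropy -> perplexity
print(round(perplexity))               # 247
print(math.log2(perplexity))           # back to 7.95 bits per word
```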
The lowest perplexity that has been published on the Brown Corpus (1 million words of American English of varying topics and genres) as of 1992 is indeed about 247 per word, corresponding to a cross-entropy of log₂ 247 = 7.95 bits per word or 1.75 bits per letter, using a trigram model.
It is often possible to achieve lower perplexity on more specialized corpora, as they are more predictable.
Again, simply guessing that the next word in the Brown corpus is the word "the" will have an accuracy of 7 percent, not 1/247 = 0.4 percent, as a naive use of perplexity as a measure of predictiveness might lead one to believe.
This guess is based on the unigram statistics of the Brown corpus, not on the trigram statistics, which yielded the word perplexity of 247.
Using trigram statistics would further improve the chances of a correct guess.
