<s>
A	O
cache	B-General_Concept
language	I-General_Concept
model	I-General_Concept
is	O
a	O
type	O
of	O
statistical	B-Language
language	I-Language
model	I-Language
.	O
</s>
<s>
These	O
occur	O
in	O
the	O
natural	B-Language
language	I-Language
processing	I-Language
subfield	O
of	O
computer	B-General_Concept
science	I-General_Concept
and	O
assign	O
probabilities	O
to	O
given	O
sequences	O
of	O
words	O
by	O
means	O
of	O
a	O
probability	O
distribution	O
.	O
</s>
<s>
Statistical	B-Language
language	I-Language
models	I-Language
are	O
key	O
components	O
of	O
speech	B-Application
recognition	I-Application
systems	O
and	O
of	O
many	O
machine	B-Application
translation	I-Application
systems	O
:	O
they	O
tell	O
such	O
systems	O
which	O
possible	O
output	O
word	O
sequences	O
are	O
probable	O
and	O
which	O
are	O
improbable	O
.	O
</s>
<s>
The	O
particular	O
characteristic	O
of	O
a	O
cache	B-General_Concept
language	I-General_Concept
model	I-General_Concept
is	O
that	O
it	O
contains	O
a	O
cache	B-General_Concept
component	I-General_Concept
and	O
assigns	O
relatively	O
high	O
probabilities	O
to	O
words	O
or	O
word	O
sequences	O
that	O
occur	O
elsewhere	O
in	O
a	O
given	O
text	O
.	O
</s>
<s>
The	O
primary	O
,	O
but	O
by	O
no	O
means	O
sole	O
,	O
use	O
of	O
cache	B-General_Concept
language	I-General_Concept
models	I-General_Concept
is	O
in	O
speech	B-Application
recognition	I-Application
systems	O
.	O
</s>
<s>
To	O
understand	O
why	O
it	O
is	O
a	O
good	O
idea	O
for	O
a	O
statistical	B-Language
language	I-Language
model	I-Language
to	O
contain	O
a	O
cache	B-General_Concept
component	I-General_Concept
one	O
might	O
consider	O
someone	O
who	O
is	O
dictating	O
a	O
letter	O
about	O
elephants	O
to	O
a	O
speech	B-Application
recognition	I-Application
system	O
.	O
</s>
<s>
Standard	O
(	O
non-cache	O
)	O
N-gram	B-Language
language	I-Language
models	I-Language
will	O
assign	O
a	O
very	O
low	O
probability	O
to	O
the	O
word	O
"	O
elephant	O
"	O
because	O
it	O
is	O
a	O
very	O
rare	O
word	O
in	O
English	O
.	O
</s>
<s>
If	O
the	O
speech	B-Application
recognition	I-Application
system	O
does	O
not	O
contain	O
a	O
cache	B-General_Concept
component	I-General_Concept
,	O
the	O
person	O
dictating	O
the	O
letter	O
may	O
be	O
annoyed	O
:	O
each	O
time	O
the	O
word	O
"	O
elephant	O
"	O
is	O
spoken	O
another	O
sequence	O
of	O
words	O
with	O
a	O
higher	O
probability	O
according	O
to	O
the	O
N-gram	B-Language
language	I-Language
model	I-Language
may	O
be	O
recognized	O
(	O
e.g.	O
,	O
"	O
tell	O
a	O
plan	O
"	O
)	O
.	O
</s>
<s>
If	O
the	O
system	O
has	O
a	O
cache	B-General_Concept
language	I-General_Concept
model	I-General_Concept
,	O
"	O
elephant	O
"	O
will	O
still	O
probably	O
be	O
misrecognized	O
the	O
first	O
time	O
it	O
is	O
spoken	O
and	O
will	O
have	O
to	O
be	O
entered	O
into	O
the	O
text	O
manually	O
;	O
however	O
,	O
from	O
this	O
point	O
on	O
the	O
system	O
is	O
aware	O
that	O
"	O
elephant	O
"	O
is	O
likely	O
to	O
occur	O
again	O
–	O
the	O
estimated	O
probability	O
of	O
occurrence	O
of	O
"	O
elephant	O
"	O
has	O
been	O
increased	O
,	O
making	O
it	O
more	O
likely	O
that	O
if	O
it	O
is	O
spoken	O
it	O
will	O
be	O
recognized	O
correctly	O
.	O
</s>
<s>
There	O
exist	O
variants	O
of	O
the	O
cache	B-General_Concept
language	I-General_Concept
model	I-General_Concept
in	O
which	O
not	O
only	O
single	O
words	O
but	O
also	O
multi-word	O
sequences	O
that	O
have	O
occurred	O
previously	O
are	O
assigned	O
higher	O
probabilities	O
(	O
e.g.	O
,	O
if	O
"	O
San	O
Francisco	O
"	O
occurred	O
near	O
the	O
beginning	O
of	O
the	O
text	O
subsequent	O
instances	O
of	O
it	O
would	O
be	O
assigned	O
a	O
higher	O
probability	O
)	O
.	O
</s>
<s>
The	O
cache	B-General_Concept
language	I-General_Concept
model	I-General_Concept
was	O
first	O
proposed	O
in	O
a	O
paper	O
published	O
in	O
1990	O
,	O
after	O
which	O
the	O
IBM	O
speech-recognition	B-Application
group	O
experimented	O
with	O
the	O
concept	O
.	O
</s>
<s>
The	O
group	O
found	O
that	O
implementation	O
of	O
a	O
form	O
of	O
cache	B-General_Concept
language	I-General_Concept
model	I-General_Concept
yielded	O
a	O
24%	O
drop	O
in	O
word-error	B-General_Concept
rates	I-General_Concept
once	O
the	O
first	O
few	O
hundred	O
words	O
of	O
a	O
document	O
had	O
been	O
dictated	O
.	O
</s>
<s>
A	O
detailed	O
survey	O
of	O
language	B-Language
modeling	I-Language
techniques	O
concluded	O
that	O
the	O
cache	B-General_Concept
language	I-General_Concept
model	I-General_Concept
was	O
one	O
of	O
the	O
few	O
new	O
language	B-Language
modeling	I-Language
techniques	O
that	O
yielded	O
improvements	O
over	O
the	O
standard	O
N-gram	B-Language
approach	O
:	O
"	O
Our	O
caching	B-General_Concept
results	O
show	O
that	O
caching	B-General_Concept
is	O
by	O
far	O
the	O
most	O
useful	O
technique	O
for	O
perplexity	O
reduction	O
at	O
small	O
and	O
medium	O
training	O
data	O
sizes	O
"	O
.	O
</s>
<s>
The	O
development	O
of	O
the	O
cache	B-General_Concept
language	I-General_Concept
model	I-General_Concept
has	O
generated	O
considerable	O
interest	O
among	O
those	O
concerned	O
with	O
computational	O
linguistics	O
in	O
general	O
and	O
statistical	O
natural	B-Language
language	I-Language
processing	I-Language
in	O
particular	O
:	O
recently	O
,	O
there	O
has	O
been	O
interest	O
in	O
applying	O
the	O
cache	B-General_Concept
language	I-General_Concept
model	I-General_Concept
in	O
the	O
field	O
of	O
statistical	B-General_Concept
machine	I-General_Concept
translation	I-General_Concept
.	O
</s>
<s>
The	O
success	O
of	O
the	O
cache	B-General_Concept
language	I-General_Concept
model	I-General_Concept
in	O
improving	O
word	O
prediction	O
rests	O
on	O
the	O
human	O
tendency	O
to	O
use	O
words	O
in	O
a	O
"	O
bursty	O
"	O
fashion	O
:	O
when	O
one	O
is	O
discussing	O
a	O
certain	O
topic	O
in	O
a	O
certain	O
context	O
,	O
the	O
frequency	O
with	O
which	O
one	O
uses	O
certain	O
words	O
will	O
be	O
quite	O
different	O
from	O
their	O
frequencies	O
when	O
one	O
is	O
discussing	O
other	O
topics	O
in	O
other	O
contexts	O
.	O
</s>
<s>
The	O
traditional	O
N-gram	B-Language
language	I-Language
models	I-Language
,	O
which	O
rely	O
entirely	O
on	O
information	O
from	O
a	O
very	O
small	O
number	O
(	O
four	O
,	O
three	O
,	O
or	O
two	O
)	O
of	O
words	O
preceding	O
the	O
word	O
to	O
which	O
a	O
probability	O
is	O
to	O
be	O
assigned	O
,	O
do	O
not	O
adequately	O
model	O
this	O
"	O
burstiness	O
"	O
.	O
</s>
<s>
Recently	O
,	O
the	O
cache	B-General_Concept
language	I-General_Concept
model	I-General_Concept
concept	O
-	O
originally	O
conceived	O
for	O
the	O
N-gram	B-Language
statistical	I-Language
language	I-Language
model	I-Language
paradigm	O
-	O
has	O
been	O
adapted	O
for	O
use	O
in	O
the	O
neural	O
paradigm	O
.	O
</s>
