<s>
Katz	B-General_Concept
back-off	I-General_Concept
is	O
a	O
generative	O
n-gram	B-Language
language	I-Language
model	I-Language
that	O
estimates	O
the	O
conditional	O
probability	O
of	O
a	O
word	O
given	O
its	O
history	O
in	O
the	O
n-gram	B-Language
.	O
</s>
<s>
Prior	O
to	O
that	O
,	O
n-gram	B-Language
language	I-Language
models	I-Language
were	O
constructed	O
by	O
training	O
individual	O
models	O
for	O
different	O
n-gram	B-Language
orders	O
using	O
maximum	O
likelihood	O
estimation	O
and	O
then	O
interpolating	O
them	O
together	O
.	O
</s>
<s>
The	O
equation	O
for	O
Katz	B-General_Concept
's	I-General_Concept
back-off	I-General_Concept
model	I-General_Concept
is	O
:	O
</s>
<s>
Essentially	O
,	O
this	O
means	O
that	O
if	O
the	O
n-gram	B-Language
has	O
been	O
seen	O
more	O
than	O
k	O
times	O
in	O
training	O
,	O
the	O
conditional	O
probability	O
of	O
a	O
word	O
given	O
its	O
history	O
is	O
proportional	O
to	O
the	O
maximum	O
likelihood	O
estimate	O
of	O
that	O
n-gram	B-Language
.	O
</s>
<s>
For	O
example	O
,	O
suppose	O
that	O
the	O
bigram	B-Language
"	O
a	O
b	O
"	O
and	O
the	O
unigram	B-Language
"	O
c	O
"	O
are	O
very	O
common	O
,	O
but	O
the	O
trigram	B-Language
"	O
a	O
b	O
c	O
"	O
is	O
never	O
seen	O
.	O
</s>
