In the field of computational linguistics, an n-gram (sometimes also called a Q-gram) is a contiguous sequence of n items from a given sample of text or speech. The items can be phonemes, syllables, letters, words or base pairs according to the application. The n-grams typically are collected from a text or speech corpus.

Using Latin numerical prefixes, an n-gram of size 1 is referred to as a "unigram"; size 2 is a "bigram" (or, less commonly, a "digram"); size 3 is a "trigram". English cardinal numbers are sometimes used, e.g., "four-gram", "five-gram", and so on. In computational biology, a polymer or oligomer of a known size is called a k-mer instead of an n-gram, with specific names using Greek numerical prefixes such as "monomer", "dimer", "trimer", "tetramer", "pentamer", etc., or English cardinal numbers, "one-mer", "two-mer", "three-mer", etc.
Figure 1: n-gram examples from various disciplines.

| Field | Unit | Sample sequence | 1-gram sequence | 2-gram sequence | 3-gram sequence |
| --- | --- | --- | --- | --- | --- |
| Vernacular name | | | unigram | bigram | trigram |
| Order of resulting Markov model | | | 0 | 1 | 2 |
| Protein sequencing | amino acid | ... Cys-Gly-Leu-Ser-Trp ... | ..., Cys, Gly, Leu, Ser, Trp, ... | ..., Cys-Gly, Gly-Leu, Leu-Ser, Ser-Trp, ... | ..., Cys-Gly-Leu, Gly-Leu-Ser, Leu-Ser-Trp, ... |
| DNA sequencing | base pair | AGCTTCGA ... | ..., A, G, C, T, T, C, G, A, ... | ..., AG, GC, CT, TT, TC, CG, GA, ... | ..., AGC, GCT, CTT, TTC, TCG, CGA, ... |
| Computational linguistics | character | to_be_or_not_to_be ... | ..., t, o, _, b, e, _, o, r, _, n, o, t, _, t, o, _, b, e, ... | ..., to, o_, _b, be, e_, _o, or, r_, _n, no, ot, t_, _t, to, o_, _b, be, ... | ..., to_, o_b, _be, be_, e_o, _or, or_, r_n, _no, not, ot_, t_t, _to, to_, o_b, _be, ... |
| Computational linguistics | word | ... to be or not to be ... | ..., to, be, or, not, to, be, ... | ..., to be, be or, or not, not to, to be, ... | ..., to be or, be or not, or not to, not to be, ... |
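To make the sliding-window construction in Figure 1 concrete, here is a minimal sketch in Python (the function name `ngrams` and the example strings are illustrative, not from any particular library):

```python
def ngrams(sequence, n):
    """Return the list of contiguous n-grams of a sequence.

    Works for any indexable sequence: a string yields character
    n-grams, a list of words yields word n-grams.
    """
    return [sequence[i:i + n] for i in range(len(sequence) - n + 1)]

# Character-level trigrams, with spaces written as "_" as in Figure 1.
print(ngrams("to_be_or_not_to_be", 3))
# ['to_', 'o_b', '_be', 'be_', 'e_o', '_or', ...]

# Word-level bigrams.
print(ngrams("to be or not to be".split(), 2))
# [['to', 'be'], ['be', 'or'], ['or', 'not'], ['not', 'to'], ['to', 'be']]
```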
Here are further examples; these are word-level 3-grams and 4-grams (and counts of the number of times they appeared) from the Google n-gram corpus.
An n-gram model models sequences, notably natural languages, using the statistical properties of n-grams. More concisely, an n-gram model predicts $x_i$ based on $x_{i-(n-1)}, \ldots, x_{i-1}$; in probability terms, it estimates $P(x_i \mid x_{i-(n-1)}, \ldots, x_{i-1})$.
When used for language modeling, independence assumptions are made so that each word depends only on the last n − 1 words. This Markov model is used as an approximation of the true underlying language. This assumption is important because it massively simplifies the problem of estimating the language model from data.
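As a minimal sketch of what that estimation looks like under the Markov assumption with n = 2 (a toy corpus and illustrative names, not a reference implementation), each conditional probability is a relative frequency of observed bigrams:

```python
from collections import Counter

corpus = "the dog smelled like a skunk the dog ran".split()

# Count each bigram and each word's occurrences as a history.
bigram_counts = Counter(zip(corpus, corpus[1:]))
history_counts = Counter(corpus[:-1])

def p(word, prev):
    """Maximum-likelihood estimate P(word | prev) = c(prev, word) / c(prev)."""
    return bigram_counts[(prev, word)] / history_counts[prev]

print(p("dog", "the"))      # 1.0 -- "the" is always followed by "dog" here
print(p("smelled", "dog"))  # 0.5 -- "dog" is followed by "smelled" once out of twice
```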
In addition, because of the open nature of language, it is common to group words unknown to the language model together.
Note that in a simple n-gram language model, the probability of a word, conditioned on some number of previous words (one word in a bigram model, two words in a trigram model, etc.), can be described as following a categorical distribution.
In practice, the probability distributions are smoothed by assigning non-zero probabilities to unseen words or n-grams; see smoothing techniques.
Applications and considerations

n-gram models are now widely used in probability, communication theory, computational linguistics (for instance, statistical natural language processing), computational biology (for instance, biological sequence analysis), and data compression.
Two benefits of n-gram models (and algorithms that use them) are simplicity and scalability – with larger n, a model can store more context with a well-understood space–time tradeoff, enabling small experiments to scale up efficiently. n-gram models are widely used in statistical natural language processing.
In speech recognition, phonemes and sequences of phonemes are modeled using an n-gram distribution.
For parsing, words are modeled such that each n-gram is composed of n words. For language identification, sequences of characters/graphemes (e.g., letters of the alphabet) are modeled for different languages.
For sequences of characters, the 3-grams (sometimes referred to as "trigrams") that can be generated from "good morning" are "goo", "ood", "od_", "d_m", "_mo", "mor" and so forth, counting the space character (written here as "_") as a gram (sometimes the beginning and end of a text are modeled explicitly, adding "__g", "_go", "ng_", and "g__").
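A small sketch of that padding convention (the function name is illustrative; spaces are written as "_" as above):

```python
def char_trigrams(text, pad="_"):
    """Character trigrams with explicit begin/end-of-text padding."""
    padded = pad * 2 + text.replace(" ", pad) + pad * 2
    return [padded[i:i + 3] for i in range(len(padded) - 2)]

grams = char_trigrams("good morning")
print(grams[:4])   # ['__g', '_go', 'goo', 'ood']
print(grams[-2:])  # ['ng_', 'g__']
```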
For sequences of words, the trigrams (shingles) that can be generated from "the dog smelled like a skunk" are "# the dog", "the dog smelled", "dog smelled like", "smelled like a", "like a skunk" and "a skunk #".
Punctuation is also commonly reduced or removed by preprocessing and is frequently used to trigger functionality. n-grams can also be used for sequences of words or almost any type of data.
For example, they have been used for extracting features for clustering large sets of satellite earth images and for determining what part of the Earth a particular image came from.
n-gram models are often criticized because they lack any explicit representation of long range dependency. This is because the only explicit dependency range is (n − 1) tokens for an n-gram model, and since natural languages incorporate many cases of unbounded dependencies (such as wh-movement), this means that an n-gram model cannot in principle distinguish unbounded dependencies from noise (since long range correlations drop exponentially with distance for any Markov model). For this reason, n-gram models have not made much impact on linguistic theory, where part of the explicit goal is to model such dependencies.

Another criticism that has been made is that Markov models of language, including n-gram models, do not explicitly capture the performance/competence distinction. This is because n-gram models are not designed to model linguistic knowledge as such, and make no claims to being (even potentially) complete models of linguistic knowledge; instead, they are used in practical applications.

In practice, n-gram models have been shown to be extremely effective in modeling language data, which is a core component in modern statistical language applications. Most modern applications that rely on n-gram based models, such as machine translation applications, do not rely exclusively on such models; instead, they typically also incorporate Bayesian inference.
When a language model is used, it is used as part of the prior distribution (e.g. to gauge the inherent "goodness" of a possible translation), and even then it is often not the only component in this distribution.
Handcrafted features of various sorts are also used, for example variables that represent the position of a word in a sentence or the general topic of discourse.
An issue when using n-gram language models is out-of-vocabulary (OOV) words.
They are encountered in computational linguistics and natural language processing when the input includes words which were not present in a system's dictionary or database during its preparation. By default, when a language model is estimated, the entire observed vocabulary is used. In some cases, it may be necessary to estimate the language model with a specific fixed vocabulary. In such a scenario, the n-grams in the corpus that contain an out-of-vocabulary word are ignored. The n-gram probabilities are smoothed over all the words in the vocabulary even if they were not observed.
Alternatively, a special token (e.g. <unk>) can be introduced into the vocabulary to model out-of-vocabulary words explicitly. Out-of-vocabulary words in the corpus are effectively replaced with this special <unk> token before n-gram counts are accumulated.
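A minimal sketch of that replacement step (the <unk> token follows the text above; the tiny vocabulary is illustrative):

```python
vocab = {"to", "be", "or", "not", "<unk>"}

def map_oov(tokens, vocab):
    """Replace out-of-vocabulary tokens with <unk> before counting n-grams."""
    return [t if t in vocab else "<unk>" for t in tokens]

print(map_oov("to be or not to embiggen".split(), vocab))
# ['to', 'be', 'or', 'not', 'to', '<unk>']
```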
With this option, it is possible to estimate the transition probabilities of n-grams involving out-of-vocabulary words.

n-grams can also be used for efficient approximate matching. By converting a sequence of items to a set of n-grams, it can be embedded in a vector space, thus allowing the sequence to be compared to other sequences in an efficient manner.
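For instance, one simple set-based comparison is Jaccard similarity over the character trigram sets of two strings (a sketch; cosine similarity over n-gram count vectors is another common choice, and the function names here are illustrative):

```python
def trigram_set(s):
    """The set of character trigrams of a string."""
    return {s[i:i + 3] for i in range(len(s) - 2)}

def jaccard(a, b):
    """Jaccard similarity of two trigram sets: |A ∩ B| / |A ∪ B|."""
    ga, gb = trigram_set(a), trigram_set(b)
    return len(ga & gb) / len(ga | gb)

print(jaccard("color", "colour"))  # 0.4 -- shares 'col' and 'olo' out of 5 trigrams
print(jaccard("night", "nacht"))   # 0.0 -- no trigram in common
```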
Other metrics have also been applied to vectors of n-grams with varying, sometimes better, results. For example, z-scores have been used to compare documents by examining how many standard deviations each n-gram differs from its mean occurrence in a large collection, or text corpus, of documents (which form the "background" vector).
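In symbols (a restatement of the sentence above; $c_g$, $\mu_g$ and $\sigma_g$ are illustrative notation for a document's count of n-gram $g$ and the background mean and standard deviation of that count over the collection):

$$z_g = \frac{c_g - \mu_g}{\sigma_g}$$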
In the event of small counts, the g-score (also known as the g-test) may give better results for comparing alternative models.
It is also possible to take a more principled approach to the statistics of n-grams, modeling similarity as the likelihood that two strings came from the same source directly in terms of a problem in Bayesian inference.

n-gram-based searching can also be used for plagiarism detection.
n-grams find use in several areas of computer science, computational linguistics, and applied mathematics. They have been used, for example, to:

- find likely candidates for the correct spelling of a misspelled word;
- predict letters or words at random in order to create text, as in the dissociated press algorithm (a minimal sketch follows this list).
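As an illustration of the dissociated-press idea, here is a toy bigram text generator (the corpus and names are illustrative, not the original algorithm's implementation):

```python
import random

def generate(tokens, length=8, seed=None):
    """Generate text by repeatedly sampling an observed successor of the current word."""
    rng = random.Random(seed)
    successors = {}
    for prev, nxt in zip(tokens, tokens[1:]):
        successors.setdefault(prev, []).append(nxt)
    word = rng.choice(tokens)
    out = [word]
    for _ in range(length - 1):
        choices = successors.get(word)
        # Dead end (last word of the corpus): restart from a random word.
        word = rng.choice(choices) if choices else rng.choice(tokens)
        out.append(word)
    return " ".join(out)

print(generate("to be or not to be that is the question".split(), seed=0))
```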
Consider an n-gram where the units are characters and a text with t characters, where $n \le t$. Sliding a window of n characters along the text yields $t - n + 1$ n-grams, each occupying n characters. Thus, the total space required for this n-gram inventory is $n(t - n + 1)$, which is simplified to: $nt - n^2 + n$.
To choose a value for n in an n-gram model, it is necessary to find the right trade-off between the stability of the estimate against its appropriateness.
This means that trigram models (i.e. triples of words) are a common choice with large training corpora, whereas bigram models are often used with smaller ones.
Also, items not seen in the training data will be given a probability of 0.0 without smoothing. In practice it is necessary to smooth the probability distributions by also assigning non-zero probabilities to unseen words or n-grams.
The reason is that models derived directly from the n-gram frequency counts have severe problems when confronted with any n-grams that have not explicitly been seen before – the zero-frequency problem.
Various smoothing methods are used, from simple "add-one" (Laplace) smoothing (assign a count of 1 to unseen n-grams; see Rule of succession) to more sophisticated models, such as Good–Turing discounting or back-off models.
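A sketch of add-one (Laplace) smoothing for bigram probabilities (toy corpus; `V` is the vocabulary size, and the names are illustrative):

```python
from collections import Counter

corpus = "to be or not to be".split()
V = len(set(corpus))  # vocabulary size, here 4

bigram_counts = Counter(zip(corpus, corpus[1:]))
history_counts = Counter(corpus[:-1])

def p_laplace(word, prev):
    """Add-one smoothed estimate: (c(prev, word) + 1) / (c(prev) + V)."""
    return (bigram_counts[(prev, word)] + 1) / (history_counts[prev] + V)

print(p_laplace("be", "to"))   # seen bigram: (2 + 1) / (2 + 4) = 0.5
print(p_laplace("not", "to"))  # unseen bigram gets non-zero mass: 1 / 6
```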
Some of these methods are equivalent to assigning a prior distribution to the probabilities of the n-grams and using Bayesian inference to compute the resulting posterior n-gram probabilities. However, the more sophisticated smoothing models were typically not derived in this fashion, but instead through independent considerations.
In the field of computational linguistics, in particular language modeling, skip-grams are a generalization of n-grams in which the components (typically words) need not be consecutive in the text under consideration, but may leave gaps that are skipped over.
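A minimal sketch of skip-bigram extraction (this assumes the common convention that a k-skip-bigram allows a gap of at most k items between the two components; the function name is illustrative):

```python
def skip_bigrams(tokens, k=1):
    """All ordered pairs (tokens[i], tokens[j]) with a gap of at most k, i.e. 0 < j - i <= k + 1."""
    pairs = []
    for i, left in enumerate(tokens):
        for j in range(i + 1, min(i + k + 2, len(tokens))):
            pairs.append((left, tokens[j]))
    return pairs

print(skip_bigrams("the rain in Spain".split(), k=1))
# [('the', 'rain'), ('the', 'in'), ('rain', 'in'),
#  ('rain', 'Spain'), ('in', 'Spain')]
```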
They provide one way of overcoming the data sparsity problem found with conventional n-gram analysis.
In the area of computer security, skip-grams have proven more robust to attack than n-grams.
Syntactic n-grams are n-grams defined by paths in syntactic dependency or constituent trees rather than the linear structure of the text.
For example, the sentence "economic news has little effect on financial markets" can be transformed to syntactic n-grams following the tree structure of its dependency relations: news-economic, effect-little, effect-on-markets-financial.
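A sketch of how such n-grams can be read off a dependency tree (the hand-written head-to-dependent structure below approximates a parse of the example sentence; a real application would obtain the tree from a parser):

```python
# Hand-written dependency structure: head -> list of dependents.
tree = {
    "has": ["news", "effect"],
    "news": ["economic"],
    "effect": ["little", "on"],
    "on": ["markets"],
    "markets": ["financial"],
}

def syntactic_paths(tree, node, path=None):
    """Yield every downward path from `node` as a hyphen-joined syntactic n-gram."""
    path = (path or []) + [node]
    if len(path) > 1:
        yield "-".join(path)
    for child in tree.get(node, []):
        yield from syntactic_paths(tree, child, path)

print(list(syntactic_paths(tree, "effect")))
# ['effect-little', 'effect-on', 'effect-on-markets',
#  'effect-on-markets-financial']
```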
Syntactic n-grams are intended to reflect syntactic structure more faithfully than linear n-grams, and have many of the same applications, especially as features in a vector space model.
For certain tasks, syntactic n-grams give better results than standard n-grams, for example, for authorship attribution.
Another type of syntactic n-grams are part-of-speech n-grams, defined as fixed-length contiguous overlapping subsequences that are extracted from part-of-speech sequences of text. Part-of-speech n-grams have several applications, most commonly in information retrieval.
