<s>
A	O
language	B-Language
model	I-Language
is	O
a	O
probability	O
distribution	O
over	O
sequences	O
of	O
words	O
.	O
</s>
<s>
Given	O
any	O
sequence	O
of	O
words	O
of	O
length	O
,	O
a	O
language	B-Language
model	I-Language
assigns	O
a	O
probability	O
to	O
the	O
whole	O
sequence	O
.	O
</s>
<s>
Language	B-Language
models	I-Language
generate	O
probabilities	O
by	O
training	O
on	O
text	O
corpora	O
in	O
one	O
or	O
many	O
languages	O
.	O
</s>
<s>
Given	O
that	O
languages	O
can	O
be	O
used	O
to	O
express	O
an	O
infinite	O
variety	O
of	O
valid	O
sentences	O
(	O
the	O
property	O
of	O
digital	O
infinity	O
)	O
,	O
language	B-Language
modeling	I-Language
faces	O
the	O
problem	O
of	O
assigning	O
non-zero	O
probabilities	O
to	O
linguistically	O
valid	O
sequences	O
that	O
may	O
never	O
be	O
encountered	O
in	O
the	O
training	O
data	O
.	O
</s>
<s>
Several	O
modeling	O
approaches	O
have	O
been	O
designed	O
to	O
surmount	O
this	O
problem	O
,	O
such	O
as	O
applying	O
the	O
Markov	O
assumption	O
or	O
using	O
neural	O
architectures	O
such	O
as	O
recurrent	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
or	O
transformers	B-Algorithm
.	O
</s>
<s>
Language	B-Language
models	I-Language
are	O
useful	O
for	O
a	O
variety	O
of	O
problems	O
in	O
computational	O
linguistics	O
:	O
from	O
initial	O
applications	O
in	O
speech	B-Application
recognition	I-Application
to	O
ensure	O
nonsensical	O
(	O
i.e.	O
low-probability	O
)	O
word	O
sequences	O
are	O
not	O
predicted	O
,	O
to	O
wider	O
use	O
in	O
machine	B-Application
translation	I-Application
(	O
e.g.	O
scoring	O
candidate	O
translations	O
)	O
,	O
natural	B-General_Concept
language	I-General_Concept
generation	I-General_Concept
(	O
generating	O
more	O
human-like	O
text	O
)	O
,	O
part-of-speech	B-Application
tagging	I-Application
,	O
parsing	B-Application
,	O
optical	B-Application
character	I-Application
recognition	I-Application
,	O
handwriting	B-Application
recognition	I-Application
,	O
grammar	B-Application
induction	I-Application
,	O
information	B-Application
retrieval	I-Application
,	O
and	O
other	O
applications	O
.	O
</s>
<s>
Language	B-Language
models	I-Language
are	O
used	O
in	O
information	B-Application
retrieval	I-Application
in	O
the	O
query	O
likelihood	O
model	O
.	O
</s>
<s>
There	O
,	O
a	O
separate	O
language	B-Language
model	I-Language
is	O
associated	O
with	O
each	O
document	O
in	O
a	O
collection	O
.	O
</s>
<s>
Documents	O
are	O
ranked	O
based	O
on	O
the	O
probability	O
of	O
the	O
query	O
in	O
the	O
document	O
's	O
language	B-Language
model	I-Language
:	O
P(Q|M_d)	O
.	O
</s>
<s>
Commonly	O
,	O
the	O
unigram	B-Language
language	I-Language
model	I-Language
is	O
used	O
for	O
this	O
purpose	O
.	O
</s>
<s>
Since	O
2018	O
,	O
large	O
language	B-Language
models	I-Language
(	O
LLMs	O
)	O
consisting	O
of	O
deep	B-Architecture
neural	I-Architecture
networks	I-Architecture
with	O
billions	O
of	O
trainable	O
parameters	O
,	O
trained	O
on	O
massive	O
datasets	O
of	O
unlabelled	O
text	O
,	O
have	O
demonstrated	O
impressive	O
results	O
on	O
a	O
wide	O
variety	O
of	O
natural	O
language	O
processing	O
tasks	O
.	O
</s>
<s>
Maximum	O
entropy	O
language	B-Language
models	I-Language
encode	O
the	O
relationship	O
between	O
a	O
word	O
and	O
the	O
n-gram	B-Language
history	O
using	O
feature	O
functions	O
.	O
</s>
<s>
In	O
the	O
simplest	O
case	O
,	O
the	O
feature	O
function	O
is	O
just	O
an	O
indicator	O
of	O
the	O
presence	O
of	O
a	O
certain	O
n-gram	B-Language
.	O
</s>
<s>
The	O
log-bilinear	O
model	O
is	O
another	O
example	O
of	O
an	O
exponential	O
language	B-Language
model	I-Language
.	O
</s>
<s>
Neural	O
language	B-Language
models	I-Language
(	O
or	O
continuous	O
space	O
language	B-Language
models	I-Language
)	O
use	O
continuous	O
representations	O
or	O
embeddings	B-General_Concept
of	I-General_Concept
words	I-General_Concept
to	O
make	O
their	O
predictions	O
.	O
</s>
<s>
These	O
models	O
make	O
use	O
of	O
neural	B-Architecture
networks	I-Architecture
.	O
</s>
<s>
Continuous	O
space	O
embeddings	O
help	O
to	O
alleviate	O
the	O
curse	B-General_Concept
of	I-General_Concept
dimensionality	I-General_Concept
in	O
language	B-Language
modeling	I-Language
:	O
as	O
language	B-Language
models	I-Language
are	O
trained	O
on	O
larger	O
and	O
larger	O
texts	O
,	O
the	O
number	O
of	O
unique	O
words	O
(	O
the	O
vocabulary	O
)	O
increases	O
.	O
</s>
<s>
Neural	B-Architecture
networks	I-Architecture
avoid	O
this	O
problem	O
by	O
representing	O
words	O
in	O
a	O
distributed	O
way	O
,	O
as	O
non-linear	O
combinations	O
of	O
weights	O
in	O
a	O
neural	B-Architecture
net	I-Architecture
.	O
</s>
<s>
An	O
alternate	O
description	O
is	O
that	O
a	O
neural	B-Architecture
net	I-Architecture
approximates	O
the	O
language	O
function	O
.	O
</s>
<s>
The	O
neural	B-Architecture
net	I-Architecture
architecture	O
might	O
be	O
feed-forward	B-Algorithm
or	O
recurrent	B-Algorithm
,	O
and	O
while	O
the	O
former	O
is	O
simpler	O
,	O
the	O
latter	O
is	O
more	O
common	O
.	O
</s>
<s>
This	O
is	O
done	O
using	O
standard	O
neural	B-Architecture
net	I-Architecture
training	O
algorithms	O
such	O
as	O
stochastic	B-Algorithm
gradient	I-Algorithm
descent	I-Algorithm
with	O
backpropagation	B-Algorithm
.	O
</s>
<s>
The	O
network	O
predicts	O
a	O
probability	O
distribution	O
from	O
a	O
feature	B-Algorithm
vector	I-Algorithm
representing	O
the	O
previous	O
words	O
.	O
</s>
<s>
This	O
is	O
called	O
a	O
bag-of-words	B-General_Concept
model	I-General_Concept
.	O
</s>
<s>
When	O
the	O
feature	B-Algorithm
vectors	I-Algorithm
for	O
the	O
words	O
in	O
the	O
context	O
are	O
combined	O
by	O
a	O
continuous	O
operation	O
,	O
this	O
model	O
is	O
referred	O
to	O
as	O
the	O
continuous	O
bag-of-words	B-General_Concept
architecture	O
(	O
CBOW	O
)	O
.	O
</s>
<s>
A	O
third	O
option	O
that	O
trains	O
slower	O
than	O
the	O
CBOW	O
but	O
performs	O
slightly	O
better	O
is	O
to	O
invert	O
the	O
previous	O
problem	O
and	O
make	O
a	O
neural	B-Architecture
network	I-Architecture
learn	O
the	O
context	O
,	O
given	O
a	O
word	O
.	O
</s>
<s>
This	O
is	O
called	O
a	O
skip-gram	O
language	B-Language
model	I-Language
.	O
</s>
<s>
Bag-of-words	B-General_Concept
and	O
skip-gram	B-General_Concept
models	O
are	O
the	O
basis	O
of	O
the	O
word2vec	B-Algorithm
program	O
.	O
</s>
<s>
Instead	O
of	O
using	O
neural	B-Architecture
net	I-Architecture
language	B-Language
models	I-Language
to	O
produce	O
actual	O
probabilities	O
,	O
it	O
is	O
common	O
to	O
instead	O
use	O
the	O
distributed	O
representation	O
encoded	O
in	O
the	O
networks	O
'	O
"	O
hidden	O
"	O
layers	O
as	O
representations	O
of	O
words	O
;	O
each	O
word	O
is	O
then	O
mapped	O
onto	O
an	O
-dimensional	O
real	O
vector	O
called	O
the	O
word	B-General_Concept
embedding	I-General_Concept
,	O
where	O
is	O
the	O
size	O
of	O
the	O
layer	O
just	O
before	O
the	O
output	O
layer	O
.	O
</s>
<s>
Word	O
embeddings	O
capture	O
analogies	O
such	O
as	O
king	O
−	O
man	O
+	O
woman	O
≈	O
queen	O
,	O
where	O
≈	O
is	O
made	O
precise	O
by	O
stipulating	O
that	O
its	O
right-hand	O
side	O
must	O
be	O
the	O
nearest	B-Algorithm
neighbor	I-Algorithm
of	O
the	O
value	O
of	O
the	O
left-hand	O
side	O
.	O
</s>
<s>
A	O
positional	O
language	B-Language
model	I-Language
assesses	O
the	O
probability	O
of	O
given	O
words	O
occurring	O
close	O
to	O
one	O
another	O
in	O
a	O
text	O
,	O
not	O
necessarily	O
immediately	O
adjacent	O
.	O
</s>
<s>
Despite	O
the	O
limited	O
successes	O
in	O
using	O
neural	B-Architecture
networks	I-Architecture
,	O
authors	O
acknowledge	O
the	O
need	O
for	O
other	O
techniques	O
when	O
modeling	O
sign	O
languages	O
.	O
</s>
<s>
Evaluation	O
of	O
the	O
quality	O
of	O
language	B-Language
models	I-Language
is	O
mostly	O
done	O
by	O
comparison	O
to	O
human-created	O
sample	O
benchmarks	O
derived	O
from	O
typical	O
language-oriented	O
tasks	O
.	O
</s>
<s>
Other	O
,	O
less	O
established	O
,	O
quality	O
tests	O
examine	O
the	O
intrinsic	O
character	O
of	O
a	O
language	B-Language
model	I-Language
or	O
compare	O
two	O
such	O
models	O
.	O
</s>
<s>
Since	O
language	B-Language
models	I-Language
are	O
typically	O
intended	O
to	O
be	O
dynamic	O
and	O
to	O
learn	O
from	O
the	O
data	O
they	O
see	O
,	O
some	O
proposed	O
models	O
investigate	O
the	O
rate	O
of	O
learning	O
,	O
e.g.	O
</s>
<s>
Although	O
contemporary	O
language	B-Language
models	I-Language
,	O
such	O
as	O
GPT-3	O
,	O
can	O
be	O
shown	O
to	O
match	O
human	O
performance	O
on	O
some	O
tasks	O
,	O
it	O
is	O
not	O
clear	O
they	O
are	O
plausible	O
cognitive	O
models	O
.	O
</s>
<s>
For	O
instance	O
,	O
recurrent	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
have	O
been	O
shown	O
to	O
learn	O
patterns	O
humans	O
do	O
not	O
learn	O
and	O
fail	O
to	O
learn	O
patterns	O
that	O
humans	O
do	O
learn	O
.	O
</s>
