<s>
Word2vec	B-Algorithm
is	O
a	O
technique	O
for	O
natural	B-Language
language	I-Language
processing	I-Language
(	O
NLP	B-Language
)	O
published	O
in	O
2013	O
.	O
</s>
<s>
The	O
word2vec	B-Algorithm
algorithm	O
uses	O
a	O
neural	B-Architecture
network	I-Architecture
model	O
to	O
learn	O
word	O
associations	O
from	O
a	O
large	O
corpus	O
of	O
text	O
.	O
</s>
<s>
As	O
the	O
name	O
implies	O
,	O
word2vec	B-Algorithm
represents	O
each	O
distinct	O
word	O
with	O
a	O
particular	O
list	O
of	O
numbers	O
called	O
a	O
vector	O
.	O
</s>
<s>
Word2vec	B-Algorithm
is	O
a	O
group	O
of	O
related	O
models	O
that	O
are	O
used	O
to	O
produce	O
word	B-General_Concept
embeddings	I-General_Concept
.	O
</s>
<s>
These	O
models	O
are	O
shallow	O
,	O
two-layer	O
neural	B-Architecture
networks	I-Architecture
that	O
are	O
trained	O
to	O
reconstruct	O
linguistic	O
contexts	O
of	O
words	O
.	O
</s>
<s>
Word2vec	B-Algorithm
takes	O
as	O
its	O
input	O
a	O
large	O
corpus	O
of	O
text	O
and	O
produces	O
a	O
vector	O
space	O
,	O
typically	O
of	O
several	O
hundred	O
dimensions	O
,	O
with	O
each	O
unique	O
word	O
in	O
the	O
corpus	O
being	O
assigned	O
a	O
corresponding	O
vector	O
in	O
the	O
space	O
.	O
</s>
<s>
Word2vec	B-Algorithm
can	O
utilize	O
either	O
of	O
two	O
model	O
architectures	O
to	O
produce	O
these	O
distributed	O
representations	O
of	O
words	O
:	O
continuous	O
bag-of-words	B-General_Concept
(	O
CBOW	O
)	O
or	O
continuous	O
skip-gram	O
.	O
</s>
<s>
In	O
both	O
architectures	O
,	O
word2vec	B-Algorithm
considers	O
both	O
individual	O
words	O
and	O
a	O
sliding	O
window	O
of	O
context	O
words	O
surrounding	O
individual	O
words	O
as	O
it	O
iterates	O
over	O
the	O
entire	O
corpus	O
.	O
</s>
<s>
In	O
the	O
continuous	O
bag-of-words	B-General_Concept
architecture	O
,	O
the	O
model	O
predicts	O
the	O
current	O
word	O
from	O
the	O
window	O
of	O
surrounding	O
context	O
words	O
.	O
</s>
<s>
The	O
order	O
of	O
context	O
words	O
does	O
not	O
influence	O
prediction	O
(	O
bag-of-words	B-General_Concept
assumption	O
)	O
.	O
</s>
<s>
After	O
the	O
model	O
has	O
trained	O
,	O
the	O
learned	O
word	B-General_Concept
embeddings	I-General_Concept
are	O
positioned	O
in	O
the	O
vector	O
space	O
such	O
that	O
words	O
that	O
share	O
common	O
contexts	O
in	O
the	O
corpus	O
—	O
that	O
is	O
,	O
words	O
that	O
are	O
semantically	O
and	O
syntactically	O
similar	O
—	O
are	O
located	O
close	O
to	O
one	O
another	O
in	O
the	O
space	O
.	O
</s>
<s>
Word2vec	B-Algorithm
was	O
created	O
,	O
patented	O
,	O
and	O
published	O
in	O
2013	O
by	O
a	O
team	O
of	O
researchers	O
led	O
by	O
Tomas	O
Mikolov	O
at	O
Google	B-Application
over	O
two	O
papers	O
.	O
</s>
<s>
Embedding	O
vectors	O
created	O
using	O
the	O
Word2vec	B-Algorithm
algorithm	O
have	O
some	O
advantages	O
compared	O
to	O
earlier	O
algorithms	O
such	O
as	O
latent	O
semantic	O
analysis	O
.	O
</s>
<s>
By	O
2022	O
,	O
the	O
Word2vec	B-Algorithm
approach	O
was	O
described	O
as	O
"	O
dated	O
"	O
,	O
with	O
transformer	B-Algorithm
models	I-Algorithm
being	O
regarded	O
as	O
the	O
state	O
of	O
the	O
art	O
in	O
NLP	B-Language
.	O
</s>
<s>
Results	O
of	O
word2vec	B-Algorithm
training	O
can	O
be	O
sensitive	O
to	O
parametrization	O
.	O
</s>
<s>
The	O
following	O
are	O
some	O
important	O
parameters	O
in	O
word2vec	B-Algorithm
training	O
.	O
</s>
<s>
A	O
Word2vec	B-Algorithm
model	O
can	O
be	O
trained	O
with	O
hierarchical	O
softmax	B-Algorithm
and/or	O
negative	O
sampling	O
.	O
</s>
<s>
To	O
approximate	O
the	O
conditional	O
log-likelihood	O
a	O
model	O
seeks	O
to	O
maximize	O
,	O
the	O
hierarchical	O
softmax	B-Algorithm
method	O
uses	O
a	O
Huffman	B-General_Concept
tree	I-General_Concept
to	O
reduce	O
calculation	O
.	O
</s>
<s>
According	O
to	O
the	O
authors	O
,	O
hierarchical	O
softmax	B-Algorithm
works	O
better	O
for	O
infrequent	O
words	O
while	O
negative	O
sampling	O
works	O
better	O
for	O
frequent	O
words	O
and	O
better	O
with	O
low	O
dimensional	O
vectors	O
.	O
</s>
<s>
As	O
training	O
epochs	O
increase	O
,	O
hierarchical	O
softmax	B-Algorithm
stops	O
being	O
useful	O
.	O
</s>
<s>
Quality	O
of	O
word	B-General_Concept
embedding	I-General_Concept
increases	O
with	O
higher	O
dimensionality	O
.	O
</s>
<s>
There	O
are	O
a	O
variety	O
of	O
extensions	O
to	O
word2vec	B-Algorithm
.	O
</s>
<s>
doc2vec	O
has	O
been	O
implemented	O
in	O
the	O
C	B-Language
,	O
Python	B-Language
and	O
Java/Scala	O
tools	O
(	O
see	O
below	O
)	O
,	O
with	O
the	O
Java	B-Language
and	O
Python	B-Language
versions	O
also	O
supporting	O
inference	O
of	O
document	O
embeddings	O
on	O
new	O
,	O
unseen	O
documents	O
.	O
</s>
<s>
doc2vec	O
estimates	O
the	O
distributed	O
representations	O
of	O
documents	O
much	O
like	O
how	O
word2vec	B-Algorithm
estimates	O
representations	O
of	O
words	O
:	O
doc2vec	O
utilizes	O
either	O
of	O
two	O
model	O
architectures	O
,	O
both	O
of	O
which	O
are	O
allegories	O
to	O
the	O
architectures	O
used	O
in	O
word2vec	B-Algorithm
.	O
</s>
<s>
The	O
second	O
architecture	O
,	O
Distributed	O
Bag	B-General_Concept
of	I-General_Concept
Words	I-General_Concept
version	O
of	O
Paragraph	O
Vector	O
(	O
PV-DBOW	O
)	O
,	O
is	O
identical	O
to	O
the	O
skip-gram	O
model	O
except	O
that	O
it	O
attempts	O
to	O
predict	O
the	O
window	O
of	O
surrounding	O
context	O
words	O
from	O
the	O
paragraph	O
identifier	O
instead	O
of	O
the	O
current	O
word	O
.	O
</s>
<s>
Another	O
extension	O
of	O
word2vec	B-Algorithm
is	O
top2vec	O
,	O
which	O
leverages	O
both	O
document	O
and	O
word	B-General_Concept
embeddings	I-General_Concept
to	O
estimate	O
distributed	O
representations	O
of	O
topics	O
.	O
</s>
<s>
top2vec	O
takes	O
document	O
embeddings	O
learned	O
from	O
a	O
doc2vec	O
model	O
and	O
reduces	B-Algorithm
them	O
into	O
a	O
lower	O
dimension	O
(	O
typically	O
using	O
UMAP	O
)	O
.	O
</s>
<s>
Finally	O
,	O
top2vec	O
searches	O
the	O
semantic	O
space	O
for	O
word	B-General_Concept
embeddings	I-General_Concept
located	O
near	O
to	O
the	O
topic	O
vector	O
to	O
ascertain	O
the	O
'	O
meaning	O
 '	O
of	O
the	O
topic	O
.	O
</s>
<s>
The	O
word	O
with	O
embeddings	O
most	O
similar	O
to	O
the	O
topic	O
vector	O
might	O
be	O
assigned	O
as	O
the	O
topic	O
's	O
title	O
,	O
whereas	O
far	O
away	O
word	B-General_Concept
embeddings	I-General_Concept
may	O
be	O
considered	O
unrelated	O
.	O
</s>
<s>
An	O
extension	O
of	O
word	B-General_Concept
vectors	I-General_Concept
for	O
n-grams	O
in	O
biological	O
sequences	O
(	O
e.g.	O
</s>
<s>
A	O
similar	O
variant	O
,	O
dna2vec	O
,	O
has	O
shown	O
that	O
there	O
is	O
correlation	O
between	O
Needleman	B-Algorithm
–	I-Algorithm
Wunsch	I-Algorithm
similarity	O
score	O
and	O
cosine	O
similarity	O
of	O
dna2vec	O
word	B-General_Concept
vectors	I-General_Concept
.	O
</s>
<s>
An	O
extension	O
of	O
word	B-General_Concept
vectors	I-General_Concept
for	O
creating	O
a	O
dense	O
vector	O
representation	O
of	O
unstructured	O
radiology	O
reports	O
has	O
been	O
proposed	O
by	O
Banerjee	O
et	O
al	O
.	O
</s>
<s>
One	O
of	O
the	O
biggest	O
challenges	O
with	O
Word2vec	B-Algorithm
is	O
how	O
to	O
handle	O
unknown	O
or	O
out-of-vocabulary	O
(	O
OOV	O
)	O
words	O
and	O
morphologically	O
similar	O
words	O
.	O
</s>
<s>
If	O
the	O
Word2vec	B-Algorithm
model	O
has	O
not	O
encountered	O
a	O
particular	O
word	O
before	O
,	O
it	O
will	O
be	O
forced	O
to	O
use	O
a	O
random	O
vector	O
,	O
which	O
is	O
generally	O
far	O
from	O
its	O
ideal	O
representation	O
.	O
</s>
<s>
This	O
can	O
particularly	O
be	O
an	O
issue	O
in	O
domains	O
like	O
medicine	O
where	O
synonyms	B-Application
and	O
related	O
words	O
can	O
be	O
used	O
depending	O
on	O
the	O
preferred	O
style	O
of	O
radiologist	O
,	O
and	O
words	O
may	O
have	O
been	O
used	O
infrequently	O
in	O
a	O
large	O
corpus	O
.	O
</s>
<s>
IWE	O
combines	O
Word2vec	B-Algorithm
with	O
a	O
semantic	O
dictionary	O
mapping	O
technique	O
to	O
tackle	O
the	O
major	O
challenges	O
of	O
information	B-General_Concept
extraction	I-General_Concept
from	O
clinical	O
texts	O
,	O
which	O
include	O
ambiguity	O
of	O
free	O
text	O
narrative	O
style	O
,	O
lexical	O
variations	O
,	O
use	O
of	O
ungrammatical	O
and	O
telegraphic	O
phases	O
,	O
arbitrary	O
ordering	O
of	O
words	O
,	O
and	O
frequent	O
appearance	O
of	O
abbreviations	O
and	O
acronyms	O
.	O
</s>
<s>
The	O
reasons	O
for	O
successful	O
word	B-General_Concept
embedding	I-General_Concept
learning	O
in	O
the	O
word2vec	B-Algorithm
framework	O
are	O
poorly	O
understood	O
.	O
</s>
<s>
Goldberg	O
and	O
Levy	O
point	O
out	O
that	O
the	O
word2vec	B-Algorithm
objective	O
function	O
causes	O
words	O
that	O
occur	O
in	O
similar	O
contexts	O
to	O
have	O
similar	O
embeddings	O
(	O
as	O
measured	O
by	O
cosine	O
similarity	O
)	O
and	O
note	O
that	O
this	O
is	O
in	O
line	O
with	O
J	O
.	O
R	O
.	O
Firth	O
's	O
distributional	O
hypothesis	O
.	O
</s>
<s>
(	O
2015	O
)	O
show	O
that	O
much	O
of	O
the	O
superior	O
performance	O
of	O
word2vec	B-Algorithm
or	O
similar	O
embeddings	O
in	O
downstream	O
tasks	O
is	O
not	O
a	O
result	O
of	O
the	O
models	O
per	O
se	O
,	O
but	O
of	O
the	O
choice	O
of	O
specific	O
hyperparameters	O
.	O
</s>
<s>
(	O
2016	O
)	O
explain	O
word2vec	B-Algorithm
and	O
related	O
algorithms	O
as	O
performing	O
inference	O
for	O
a	O
simple	O
generative	O
model	O
for	O
text	O
,	O
which	O
involves	O
a	O
random	O
walk	O
generation	O
process	O
based	O
upon	O
loglinear	O
topic	O
model	O
.	O
</s>
<s>
They	O
use	O
this	O
to	O
explain	O
some	O
properties	O
of	O
word	B-General_Concept
embeddings	I-General_Concept
,	O
including	O
their	O
use	O
to	O
solve	O
analogies	O
.	O
</s>
<s>
The	O
word	B-General_Concept
embedding	I-General_Concept
approach	O
is	O
able	O
to	O
capture	O
multiple	O
different	O
degrees	O
of	O
similarity	O
between	O
words	O
.	O
</s>
<s>
This	O
facet	O
of	O
word2vec	B-Algorithm
has	O
been	O
exploited	O
in	O
a	O
variety	O
of	O
other	O
contexts	O
.	O
</s>
<s>
For	O
example	O
,	O
word2vec	B-Algorithm
has	O
been	O
used	O
to	O
map	O
a	O
vector	O
space	O
of	O
words	O
in	O
one	O
language	O
to	O
a	O
vector	O
space	O
constructed	O
from	O
another	O
language	O
.	O
</s>
<s>
Relationships	O
between	O
translated	O
words	O
in	O
both	O
spaces	O
can	O
be	O
used	O
to	O
assist	O
with	O
machine	B-Application
translation	I-Application
of	O
new	O
words	O
.	O
</s>
<s>
(	O
2013	O
)	O
develop	O
an	O
approach	O
to	O
assessing	O
the	O
quality	O
of	O
a	O
word2vec	B-Algorithm
model	O
which	O
draws	O
on	O
the	O
semantic	O
and	O
syntactic	O
patterns	O
discussed	O
above	O
.	O
</s>
<s>
When	O
assessing	O
the	O
quality	O
of	O
a	O
vector	O
model	O
,	O
a	O
user	O
may	O
draw	O
on	O
this	O
accuracy	O
test	O
which	O
is	O
implemented	O
in	O
word2vec	B-Algorithm
,	O
or	O
develop	O
their	O
own	O
test	O
set	O
which	O
is	O
meaningful	O
to	O
the	O
corpora	O
which	O
make	O
up	O
the	O
model	O
.	O
</s>
<s>
The	O
use	O
of	O
different	O
model	O
parameters	O
and	O
different	O
corpus	O
sizes	O
can	O
greatly	O
affect	O
the	O
quality	O
of	O
a	O
word2vec	B-Algorithm
model	O
.	O
</s>
<s>
Altszyler	O
and	O
coauthors	O
(	O
2017	O
)	O
studied	O
Word2vec	B-Algorithm
performance	O
in	O
two	O
semantic	O
tests	O
for	O
different	O
corpus	O
size	O
.	O
</s>
<s>
They	O
found	O
that	O
Word2vec	B-Algorithm
has	O
a	O
steep	O
learning	B-General_Concept
curve	I-General_Concept
,	O
outperforming	O
another	O
word-embedding	O
technique	O
,	O
latent	O
semantic	O
analysis	O
(	O
LSA	O
)	O
,	O
when	O
it	O
is	O
trained	O
with	O
medium	O
to	O
large	O
corpus	O
size	O
(	O
more	O
than	O
10	O
million	O
words	O
)	O
.	O
</s>
