Bidirectional Encoder Representations from Transformers (BERT) is a family of masked-language models introduced in 2018 by researchers at Google.
A 2020 literature survey concluded that "in a little over a year, BERT has become a ubiquitous baseline in Natural Language Processing (NLP) experiments counting over 150 research publications analyzing and improving the model."
BERT was originally implemented in the English language at two model sizes: (1) BERTBASE: 12 encoders with 12 bidirectional self-attention heads totaling 110 million parameters, and (2) BERTLARGE: 24 encoders with 16 bidirectional self-attention heads totaling 340 million parameters.
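As a rough illustration of the two sizes, the sketch below builds configurations with the corresponding depth, width, and head counts using the Hugging Face transformers library (an assumption; the original release was a separate TensorFlow implementation) and counts their parameters.

```python
# Sketch: build BERTBASE- and BERTLARGE-sized encoders and count their parameters.
# Assumes the `transformers` and `torch` packages are installed; no weights are downloaded.
from transformers import BertConfig, BertModel

def count_parameters(config: BertConfig) -> int:
    model = BertModel(config)  # randomly initialized, architecture only
    return sum(p.numel() for p in model.parameters())

base = BertConfig(hidden_size=768, num_hidden_layers=12,
                  num_attention_heads=12, intermediate_size=3072)
large = BertConfig(hidden_size=1024, num_hidden_layers=24,
                   num_attention_heads=16, intermediate_size=4096)

print(f"BERTBASE-sized model:  ~{count_parameters(base) / 1e6:.0f}M parameters")
print(f"BERTLARGE-sized model: ~{count_parameters(large) / 1e6:.0f}M parameters")
```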
BERT is based on the transformer architecture. Specifically, BERT is composed of Transformer encoder layers.
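To make the stack of encoder layers concrete, here is a minimal architectural sketch using PyTorch's generic encoder layer with BERTBASE-like dimensions; it is not the original BERT implementation, which additionally uses learned positional and segment embeddings and task-specific heads.

```python
import torch
from torch import nn

# A generic 12-layer Transformer encoder with BERTBASE-like dimensions (sketch only).
encoder_layer = nn.TransformerEncoderLayer(
    d_model=768,           # hidden size
    nhead=12,              # bidirectional self-attention heads per layer
    dim_feedforward=3072,  # feed-forward width
    activation="gelu",
    batch_first=True,
)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=12)

# One batch of 8 sequences of 128 token embeddings each.
token_embeddings = torch.randn(8, 128, 768)
contextual_states = encoder(token_embeddings)  # same shape: (8, 128, 768)
print(contextual_states.shape)
```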
BERT was pre-trained simultaneously on two tasks: language modeling (15% of tokens were masked, and the training objective was to predict the original token given its context) and next sentence prediction (the training objective was to classify if two spans of text appeared sequentially in the training corpus).
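A minimal sketch of the masked-language-modeling side of this setup follows; the 15% masking rate comes from the description above, while the checkpoint name, the simplification of always substituting [MASK] (the original recipe sometimes keeps or randomizes selected tokens), and the omission of next sentence prediction are assumptions made for brevity.

```python
import torch
from transformers import BertTokenizerFast, BertForMaskedLM

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")  # assumed checkpoint
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

text = "BERT learns representations of words in context."
enc = tokenizer(text, return_tensors="pt")
input_ids = enc["input_ids"].clone()
labels = input_ids.clone()

# Randomly select ~15% of the non-special tokens and replace them with [MASK].
special = torch.tensor(
    tokenizer.get_special_tokens_mask(input_ids[0].tolist(), already_has_special_tokens=True),
    dtype=torch.bool,
)
mask = (torch.rand(input_ids.shape) < 0.15) & ~special
if not mask.any():  # make sure at least one position is masked in this tiny example
    mask[0, torch.nonzero(~special)[0]] = True
input_ids[mask] = tokenizer.mask_token_id
labels[~mask] = -100  # only masked positions contribute to the loss

loss = model(input_ids=input_ids, attention_mask=enc["attention_mask"], labels=labels).loss
print(float(loss))
```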
As a result of this training process, BERT learns latent representations of words and sentences in context. After pre-training, BERT can be fine-tuned with fewer resources on smaller datasets to optimize its performance on specific NLP tasks (language inference, text classification) and sequence-to-sequence based language generation tasks (question answering, conversational response generation).
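As an illustration of this fine-tuning step, the sketch below attaches a sequence-classification head to a pre-trained encoder and runs a few gradient steps on a toy dataset; the checkpoint name, label count, learning rate, and data are illustrative assumptions rather than values from the original paper.

```python
import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")  # assumed checkpoint
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# A toy two-example "dataset" for a text-classification task.
texts = ["the movie was great", "the movie was terrible"]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few steps, just to show the shape of the loop
    out = model(**batch, labels=labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
print(float(out.loss))
```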
When BERT was published, it achieved state-of-the-art performance on a number of natural language understanding tasks.
The reasons for BERT's state-of-the-art performance on these natural language understanding tasks are not yet well understood. Current research has focused on investigating the relationship between BERT's output and carefully chosen input sequences, analysis of internal vector representations through probing classifiers, and the relationships represented by attention weights.
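These probing and attention analyses amount to reading out BERT's internal states. A minimal sketch, assuming the Hugging Face transformers API and the bert-base-uncased checkpoint, is shown below; a real probing study would go on to train a small classifier on the extracted hidden states.

```python
import torch
from transformers import BertTokenizerFast, BertModel

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")  # assumed checkpoint
model = BertModel.from_pretrained(
    "bert-base-uncased", output_hidden_states=True, output_attentions=True
)

enc = tokenizer("He is running a marathon", return_tensors="pt")
with torch.no_grad():
    out = model(**enc)

hidden_states = out.hidden_states  # tuple: embedding layer + one tensor per encoder layer
attentions = out.attentions        # one (batch, heads, tokens, tokens) map per layer

# Features for a probing classifier: e.g. every token's layer-8 vector.
probe_features = hidden_states[8][0]  # shape (num_tokens, 768)

# Attention analysis: how strongly each head in the first layer attends from "running".
running_index = enc.input_ids[0].tolist().index(tokenizer.convert_tokens_to_ids("running"))
print(attentions[0][0, :, running_index])  # shape (12 heads, num_tokens)
```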
The high performance of the BERT model could also be attributed to the fact that it is bidirectionally trained. This means that BERT, based on the Transformer model architecture, applies its self-attention mechanism to learn information from a text from the left and right side during training, and consequently gains a deep understanding of the context. BERT considers the words surrounding a target word (such as "fine") from both the left and the right side.
However, this comes at a cost: because its encoder-only architecture lacks a decoder, BERT cannot be prompted and cannot generate text. Bidirectional models in general do not work effectively without right-side context, which makes them difficult to prompt; even short text generation requires sophisticated, computationally expensive techniques.
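What an encoder-only model can do instead of free-form generation is fill in masked positions using context from both sides. A minimal sketch, assuming the Hugging Face fill-mask pipeline and the bert-base-uncased checkpoint:

```python
from transformers import pipeline

# BERT cannot be prompted to continue text, but it can predict a masked token
# from both its left and right context.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")  # assumed checkpoint

for candidate in fill_mask("The capital of France is [MASK]."):
    print(f"{candidate['token_str']:>10}  score={candidate['score']:.3f}")
```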
In contrast to deep learning models trained from scratch, which require very large amounts of data, BERT has already been pre-trained, which means that it has learnt representations of words and sentences as well as the underlying semantic relations that connect them. BERT can then be fine-tuned on smaller datasets for specific tasks such as sentiment classification.
The weights of the original pre-trained models were released on GitHub. BERT was originally published by Google researchers Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.
The design has its origins in pre-training contextual representations, including semi-supervised sequence learning, generative pre-training, ELMo, and ULMFiT. Unlike previous models, BERT is a deeply bidirectional, unsupervised language representation, pre-trained using only a plain text corpus.
Context-free models such as word2vec or GloVe generate a single word embedding representation for each word in the vocabulary, whereas BERT takes into account the context for each occurrence of a given word. For instance, whereas the vector for "running" will have the same word2vec vector representation for both of its occurrences in the sentences "He is running a company" and "He is running a marathon", BERT will provide a contextualized embedding that will be different according to the sentence.
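This contrast can be checked directly by extracting BERT's two contextual vectors for "running" and comparing them; the checkpoint, the use of the final layer, and cosine similarity as the comparison are illustrative assumptions.

```python
import torch
from transformers import BertTokenizerFast, BertModel

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")  # assumed checkpoint
model = BertModel.from_pretrained("bert-base-uncased")

def running_vector(sentence: str) -> torch.Tensor:
    """Return BERT's final-layer vector for the token "running" in `sentence`."""
    enc = tokenizer(sentence, return_tensors="pt")
    index = enc.input_ids[0].tolist().index(tokenizer.convert_tokens_to_ids("running"))
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state  # shape (1, num_tokens, 768)
    return hidden[0, index]

v_company = running_vector("He is running a company")
v_marathon = running_vector("He is running a marathon")

# Unlike a single static word2vec vector, the two contextual vectors differ.
similarity = torch.nn.functional.cosine_similarity(v_company, v_marathon, dim=0)
print(f"cosine similarity between the two 'running' vectors: {similarity.item():.3f}")
```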
On October 25, 2019, Google announced that they had started applying BERT models for English language search queries within the US. On December 9, 2019, it was reported that BERT had been adopted by Google Search for over 70 languages. In October 2020, almost every single English-based query was processed by a BERT model.
The research paper describing BERT won the Best Long Paper Award at the 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL).
