<s>
Generative	O
Pre-trained	O
Transformer	B-Algorithm
2	O
(	O
GPT-2	B-General_Concept
)	O
is	O
an	O
open-source	B-License
artificial-intelligence	B-Application
large	O
language	B-Language
model	I-Language
created	O
by	O
OpenAI	O
in	O
February	O
2019	O
.	O
</s>
<s>
GPT-2	B-General_Concept
translates	B-Application
text	O
,	O
answers	O
questions	O
,	O
summarizes	B-Application
passages	O
,	O
and	O
generates	O
text	O
output	O
on	O
a	O
level	O
that	O
,	O
while	O
sometimes	O
indistinguishable	O
from	O
that	O
of	O
humans	O
,	O
can	O
become	O
repetitive	O
or	O
nonsensical	O
when	O
generating	O
long	O
passages	O
.	O
</s>
<s>
GPT-2	B-General_Concept
was	O
created	O
as	O
a	O
"	O
direct	O
scale-up	O
"	O
of	O
OpenAI	O
's	O
2018	O
GPT	O
model	O
,	O
with	O
a	O
ten-fold	O
increase	O
in	O
both	O
its	O
parameter	O
count	O
and	O
the	O
size	O
of	O
its	O
training	O
dataset	B-General_Concept
.	O
</s>
<s>
GPT-2	B-General_Concept
has	O
a	O
generative	O
pre-trained	O
transformer	B-Algorithm
architecture	O
:	O
it	O
implements	O
a	O
deep	O
neural	B-Architecture
network	I-Architecture
,	O
specifically	O
a	O
transformer	B-Algorithm
model	I-Algorithm
,	O
which	O
uses	O
attention	B-General_Concept
in	O
place	O
of	O
previous	O
recurrence	O
-	O
and	O
convolution-based	O
architectures	O
.	O
</s>
<s>
Attention	B-General_Concept
mechanisms	I-General_Concept
allow	O
the	O
model	O
to	O
selectively	O
focus	O
on	O
segments	O
of	O
input	O
text	O
it	O
predicts	O
to	O
be	O
the	O
most	O
relevant	O
.	O
</s>
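
As an illustration, the scaled dot-product attention behind such mechanisms can be sketched in a few lines of NumPy; this is a minimal, generic sketch (names and shapes are chosen for the example), not OpenAI's implementation:

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Q, K, V: (sequence_length, d_k) arrays of queries, keys, values.
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)          # pairwise relevance scores
        # Row-wise softmax: weights over input positions that sum to 1.
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)
        # Output: weighted averages of the values -- the model "selectively
        # focuses" on the positions it predicts to be most relevant.
        return w @ V

    # Toy usage: a sequence of 4 tokens with 8-dimensional states.
    rng = np.random.default_rng(0)
    Q = K = V = rng.normal(size=(4, 8))
    print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
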
<s>
This	O
model	O
allows	O
for	O
greatly	O
increased	O
parallelization	B-General_Concept
,	O
and	O
outperforms	O
previous	O
benchmarks	O
for	O
RNN/CNN/LSTM-based	O
models	O
.	O
</s>
<s>
OpenAI	O
released	O
the	O
complete	O
version	O
of	O
the	O
GPT-2	B-General_Concept
language	B-Language
model	I-Language
(	O
with	O
1.5	O
billion	O
parameters	O
)	O
in	O
November	O
2019	O
.	O
</s>
<s>
GPT-2	B-General_Concept
was	O
to	O
be	O
followed	O
by	O
the	O
175-billion-parameter	O
GPT-3	B-General_Concept
,	O
revealed	O
to	O
the	O
public	O
in	O
2020	O
(	O
whose	O
source	O
code	O
has	O
never	O
been	O
made	O
available	O
)	O
.	O
</s>
<s>
Access	O
to	O
GPT-3	B-General_Concept
is	O
provided	O
exclusively	O
through	O
APIs	B-Application
offered	O
by	O
OpenAI	O
and	O
Microsoft	O
.	O
</s>
<s>
GPT-1	O
used	O
a	O
12-level	O
,	O
12-headed	O
Transformer	B-Algorithm
decoder	O
(	O
no	O
encoder	O
)	O
,	O
followed	O
by	O
linear-softmax	O
;	O
it	O
had	O
0.12	O
billion	O
parameters	O
and	O
was	O
trained	O
on	O
BookCorpus	O
:	O
4.5	O
GB	O
of	O
text	O
from	O
7,000	O
unpublished	O
books	O
of	O
various	O
genres	O
.	O
</s>
<s>
GPT-2	B-General_Concept
used	O
the	O
GPT-1	O
architecture	O
,	O
but	O
with	O
modified	O
normalization	O
;	O
it	O
had	O
1.5	O
billion	O
parameters	O
and	O
was	O
trained	O
on	O
WebText	O
:	O
40	O
GB	O
of	O
text	O
(	O
8	O
million	O
documents	O
)	O
from	O
45	O
million	O
webpages	O
upvoted	O
on	O
Reddit	B-Application
.	O
</s>
<s>
GPT-3	B-General_Concept
used	O
the	O
GPT-2	B-General_Concept
architecture	O
,	O
but	O
with	O
modifications	O
to	O
allow	O
larger	O
scaling	O
;	O
it	O
had	O
175	O
billion	O
parameters	O
and	O
was	O
trained	O
on	O
570	O
GB	O
of	O
plaintext	O
(	O
0.4	O
trillion	O
tokens	O
)	O
.	O
</s>
<s>
On	O
June	O
11	O
,	O
2018	O
,	O
OpenAI	O
released	O
a	O
paper	O
entitled	O
"	O
Improving	O
Language	O
Understanding	O
by	O
Generative	O
Pre-Training	O
"	O
,	O
in	O
which	O
they	O
introduced	O
the	O
first	O
Generative	O
Pre-trained	O
Transformer	B-Algorithm
(	O
"	O
GPT-1	O
"	O
)	O
.	O
</s>
<s>
Up	O
to	O
this	O
point	O
,	O
the	O
best-performing	O
neural	O
NLP	O
models	O
primarily	O
employed	O
supervised	B-General_Concept
learning	I-General_Concept
from	O
large	O
amounts	O
of	O
manually	O
labeled	O
data	O
.	O
</s>
<s>
This	O
reliance	O
on	O
supervised	B-General_Concept
learning	I-General_Concept
limited	O
their	O
use	O
on	O
datasets	B-General_Concept
that	O
were	O
not	O
well-annotated	O
,	O
in	O
addition	O
to	O
making	O
it	O
prohibitively	O
expensive	O
and	O
time-consuming	O
to	O
train	O
extremely	O
large	O
models	O
;	O
many	O
languages	O
(	O
such	O
as	O
Swahili	O
or	O
Haitian	O
Creole	O
)	O
are	O
difficult	O
to	O
translate	O
and	O
interpret	O
using	O
such	O
models	O
due	O
to	O
a	O
lack	O
of	O
available	O
text	O
for	O
corpus-building	O
.	O
</s>
<s>
In	O
contrast	O
,	O
the	O
GPT	O
's	O
"	O
semi-supervised	O
"	O
approach	O
involved	O
two	O
stages	O
:	O
an	O
unsupervised	B-General_Concept
generative	O
"	O
pre-training	O
"	O
stage	O
in	O
which	O
a	O
language	B-Language
modeling	I-Language
objective	O
was	O
used	O
to	O
set	O
initial	O
parameters	O
,	O
and	O
a	O
supervised	O
discriminative	O
"	O
fine-tuning	O
"	O
stage	O
in	O
which	O
these	O
parameters	O
were	O
adapted	O
to	O
a	O
target	O
task	O
.	O
</s>
<s>
The	O
use	O
of	O
a	O
transformer	B-Algorithm
architecture	O
,	O
as	O
opposed	O
to	O
previous	O
techniques	O
involving	O
attention-augmented	O
RNNs	O
,	O
provided	O
GPT	O
models	O
with	O
a	O
more	O
structured	O
memory	O
than	O
could	O
be	O
achieved	O
through	O
recurrent	O
mechanisms	O
;	O
this	O
resulted	O
in	O
"	O
robust	O
transfer	O
performance	O
across	O
diverse	O
tasks	O
"	O
.	O
</s>
<s>
During	O
transfer	O
,	O
we	O
utilize	O
task-specific	O
input	O
adaptations	O
derived	O
from	O
traversal-style	O
approaches	O
,	O
which	O
process	O
structured	O
text	O
input	O
as	O
a	O
single	O
contiguous	O
sequence	O
of	O
tokens	O
.	O
</s>
<s>
The	O
unsupervised	B-General_Concept
pre-training	O
was	O
performed	O
using	O
BookCorpus	O
,	O
a	O
dataset	B-General_Concept
of	O
over	O
7,000	O
unpublished	O
fiction	O
books	O
from	O
various	O
genres	O
;	O
this	O
dataset	B-General_Concept
was	O
chosen	O
in	O
part	O
because	O
its	O
long	O
passages	O
of	O
continuous	O
text	O
conditioned	O
the	O
model	O
to	O
handle	O
long-range	O
information	O
.	O
</s>
<s>
Other	O
available	O
datasets	B-General_Concept
,	O
while	O
larger	O
,	O
were	O
rejected	O
on	O
the	O
basis	O
that	O
they	O
lacked	O
this	O
long-range	O
structure	O
(	O
being	O
"	O
shuffled	O
"	O
at	O
a	O
sentence	O
level	O
)	O
.	O
</s>
<s>
The	O
GPT	O
architecture	O
itself	O
was	O
a	O
twelve-layer	O
decoder-only	O
transformer	B-Algorithm
,	O
using	O
twelve	O
masked	O
self-attention	O
heads	O
,	O
with	O
64-dimensional	O
states	O
each	O
(	O
for	O
a	O
total	O
of	O
768	O
)	O
.	O
</s>
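
The quoted dimensions are mutually consistent; a quick worked check (the configuration dictionary is hypothetical, purely for illustration):

    # Hypothetical configuration mirroring the GPT-1 figures above.
    gpt1 = {"n_layers": 12, "n_heads": 12, "d_head": 64}
    # The model width is the concatenation of all per-head states:
    # 12 heads x 64 dimensions each = 768.
    assert gpt1["n_heads"] * gpt1["d_head"] == 768
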
<s>
Rather	O
than	O
simple	O
stochastic	B-Algorithm
gradient	I-Algorithm
descent	I-Algorithm
,	O
the	O
Adam	B-Algorithm
optimization	O
algorithm	O
was	O
used	O
;	O
the	O
learning	O
rate	O
was	O
increased	O
linearly	O
from	O
zero	O
over	O
the	O
first	O
2,000	O
updates	O
,	O
to	O
a	O
maximum	O
of	O
2.5	O
×10^−4	O
,	O
and	O
annealed	O
to	O
0	O
using	O
a	O
cosine	O
schedule	O
.	O
</s>
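
The described schedule translates directly into code; a minimal sketch, assuming some given total number of updates (the exact training length is not stated here):

    import math

    def learning_rate(step, total_steps, max_lr=2.5e-4, warmup_steps=2000):
        """Linear warmup from zero, then cosine annealing back to zero."""
        if step < warmup_steps:
            # Increased linearly from zero over the first 2,000 updates.
            return max_lr * step / warmup_steps
        # Annealed to 0 using a cosine schedule over the remaining updates.
        progress = (step - warmup_steps) / (total_steps - warmup_steps)
        return max_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

    print(learning_rate(1000, 100_000))   # mid-warmup: 1.25e-04
    print(learning_rate(2000, 100_000))   # peak: 2.5e-04
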
<s>
We	O
used	O
a	O
bytepair	O
encoding	O
(	O
BPE	O
)	O
vocabulary	O
with	O
40,000	O
merges	O
[53]	O
and	O
residual	O
,	O
embedding	O
,	O
and	O
attention	B-General_Concept
dropouts	O
with	O
a	O
rate	O
of	O
0.1	O
for	O
regularization	O
.	O
</s>
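
For illustration, the merge-learning step of byte-pair encoding can be written as a short loop; this toy version (illustrative names, tiny corpus) follows the standard BPE algorithm rather than GPT's exact tokenizer:

    from collections import Counter

    def learn_bpe_merges(words, n_merges):
        """Learn byte-pair-encoding merges from a word-frequency dict."""
        # Represent each word as a tuple of symbols (initially characters).
        vocab = {tuple(w): f for w, f in words.items()}
        merges = []
        for _ in range(n_merges):
            # Count how often each adjacent symbol pair occurs.
            pairs = Counter()
            for symbols, freq in vocab.items():
                for pair in zip(symbols, symbols[1:]):
                    pairs[pair] += freq
            if not pairs:
                break
            best = max(pairs, key=pairs.get)  # most frequent pair
            merges.append(best)
            # Replace every occurrence of the pair with a merged symbol.
            new_vocab = {}
            for symbols, freq in vocab.items():
                out, i = [], 0
                while i < len(symbols):
                    if tuple(symbols[i:i + 2]) == best:
                        out.append(symbols[i] + symbols[i + 1])
                        i += 2
                    else:
                        out.append(symbols[i])
                        i += 1
                new_vocab[tuple(out)] = freq
            vocab = new_vocab
        return merges

    # GPT-1 used 40,000 merges; a toy corpus makes do with far fewer.
    print(learn_bpe_merges({"low": 5, "lower": 2, "newest": 6}, 3))
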
<s>
[...]	O
Unless	O
specified	O
,	O
we	O
reuse	O
the	O
hyperparameter	O
settings	O
from	O
unsupervised	B-General_Concept
pre-training	O
.	O
</s>
<s>
On	O
natural	O
language	O
inference	O
(	O
also	O
known	O
as	O
textual	O
entailment	O
)	O
tasks	O
,	O
models	O
are	O
evaluated	O
on	O
their	O
ability	O
to	O
interpret	O
pairs	O
of	O
sentences	O
from	O
various	O
datasets	B-General_Concept
and	O
classify	O
the	O
relationship	O
between	O
them	O
as	O
"	O
entailment	O
"	O
,	O
"	O
contradiction	O
"	O
or	O
"	O
neutral	O
"	O
.	O
</s>
<s>
Examples	O
of	O
such	O
datasets	B-General_Concept
include	O
QNLI	O
(	O
Wikipedia	O
articles	O
)	O
and	O
MultiNLI	O
(	O
transcribed	O
speech	O
,	O
popular	O
fiction	O
and	O
government	O
reports	O
,	O
among	O
other	O
sources	O
)	O
;	O
on	O
these	O
GPT	O
achieved	O
,	O
respectively	O
,	O
a	O
5.8	O
%	O
and	O
1.5	O
%	O
improvement	O
over	O
previous	O
best	O
results	O
.	O
</s>
<s>
It	O
similarly	O
outperformed	O
previous	O
models	O
on	O
two	O
tasks	O
related	O
to	O
question	B-Algorithm
answering	I-Algorithm
and	O
commonsense	O
reasoning	O
—	O
by	O
5.7	O
%	O
on	O
RACE	O
,	O
a	O
dataset	B-General_Concept
of	O
written	O
question	O
–	O
answer	O
pairs	O
from	O
middle	O
and	O
high	O
school	O
exams	O
,	O
and	O
by	O
8.9	O
%	O
on	O
the	O
Story	O
Cloze	O
Test	O
.	O
</s>
<s>
Another	O
task	O
,	O
semantic	O
similarity	O
(	O
or	O
paraphrase	O
detection	O
)	O
,	O
assesses	O
whether	O
a	O
model	O
can	O
predict	O
whether	O
two	O
sentences	O
are	O
paraphrases	O
of	O
one	O
another	O
;	O
on	O
the	O
Quora	O
Question	O
Pairs	O
(	O
QQP	O
)	O
dataset	B-General_Concept
,	O
GPT	O
improved	O
on	O
previous	O
best-performing	O
models	O
by	O
4.2	O
%	O
.	O
</s>
<s>
GPT-2	B-General_Concept
was	O
created	O
as	O
a	O
direct	O
scale-up	O
of	O
GPT	O
,	O
with	O
both	O
its	O
parameter	O
count	O
and	O
dataset	B-General_Concept
size	O
increased	O
by	O
a	O
factor	O
of	O
10	O
.	O
</s>
<s>
Both	O
are	O
unsupervised	B-General_Concept
transformer	B-Algorithm
models	I-Algorithm
trained	O
to	O
generate	O
text	O
by	O
predicting	O
the	O
next	O
word	O
in	O
a	O
sequence	O
of	O
tokens	O
.	O
</s>
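
As a scale model of that objective, the following toy "language model" (illustrative only, not a transformer) generates text purely by predicting the most likely next word from observed counts:

    from collections import Counter, defaultdict

    # Count which word follows which in a tiny training corpus.
    corpus = "the cat sat on the mat and the cat slept".split()
    next_word_counts = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        next_word_counts[current][nxt] += 1

    def generate(start, length):
        words = [start]
        for _ in range(length):
            counts = next_word_counts[words[-1]]
            if not counts:
                break
            # Greedily continue with the most likely next word.
            words.append(counts.most_common(1)[0][0])
        return " ".join(words)

    print(generate("the", 5))  # "the cat sat on the cat"
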
<s>
The	O
GPT-2	B-General_Concept
model	O
has	O
1.5	O
billion	O
parameters	O
,	O
and	O
was	O
trained	O
on	O
a	O
dataset	B-General_Concept
of	O
8	O
million	O
web	O
pages	O
.	O
</s>
<s>
While	O
GPT-2	B-General_Concept
was	O
reinforced	O
on	O
very	O
simple	O
criteria	O
(	O
interpreting	O
a	O
sequence	O
of	O
words	O
in	O
a	O
text	O
sample	O
and	O
predicting	O
the	O
most	O
likely	O
next	O
word	O
)	O
,	O
it	O
produces	O
full	O
sentences	O
and	O
paragraphs	O
by	O
continuing	O
to	O
predict	O
additional	O
words	O
,	O
generating	O
fully	O
comprehensible	O
(	O
and	O
semantically	O
meaningful	O
)	O
statements	O
in	O
natural	O
language	O
.	O
</s>
<s>
Notably	O
,	O
GPT-2	B-General_Concept
was	O
evaluated	O
on	O
its	O
performance	O
on	O
tasks	O
in	O
a	O
zero-shot	B-General_Concept
setting	I-General_Concept
.	O
</s>
<s>
Since	O
the	O
transformer	B-Algorithm
architecture	O
enabled	O
massive	O
parallelization	B-General_Concept
,	O
GPT-series	O
models	O
could	O
be	O
trained	O
on	O
larger	O
corpora	O
than	O
previous	O
NLP	O
models	O
.	O
</s>
<s>
While	O
the	O
initial	O
GPT	O
model	O
demonstrated	O
that	O
the	O
approach	O
was	O
viable	O
,	O
GPT-2	B-General_Concept
would	O
further	O
explore	O
the	O
emergent	O
properties	O
of	O
networks	O
trained	O
on	O
extremely	O
large	O
corpora	O
.	O
</s>
<s>
CommonCrawl	O
,	O
a	O
large	O
corpus	O
produced	O
by	O
web	B-Application
crawling	I-Application
and	O
previously	O
used	O
in	O
training	O
NLP	O
systems	O
,	O
was	O
considered	O
due	O
to	O
its	O
large	O
size	O
,	O
but	O
was	O
rejected	O
after	O
further	O
review	O
revealed	O
large	O
amounts	O
of	O
unintelligible	O
content	O
.	O
</s>
<s>
Instead	O
,	O
OpenAI	O
developed	O
a	O
new	O
corpus	O
,	O
known	O
as	O
WebText	O
;	O
rather	O
than	O
scraping	O
content	O
indiscriminately	O
from	O
the	O
World	O
Wide	O
Web	O
,	O
WebText	O
was	O
generated	O
by	O
scraping	O
only	O
pages	O
linked	O
to	O
by	O
Reddit	B-Application
posts	O
that	O
had	O
received	O
at	O
least	O
three	O
upvotes	O
prior	O
to	O
December	O
2017	O
.	O
</s>
<s>
The	O
corpus	O
was	O
subsequently	O
cleaned	O
;	O
HTML	B-Language
documents	O
were	O
parsed	O
into	O
plain	O
text	O
,	O
duplicate	O
pages	O
were	O
eliminated	O
,	O
and	O
Wikipedia	O
pages	O
were	O
removed	O
(	O
since	O
their	O
presence	O
in	O
many	O
other	O
datasets	B-General_Concept
could	O
have	O
induced	O
overfitting	B-Error_Name
)	O
.	O
</s>
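
A minimal sketch of that cleaning pipeline using only the Python standard library (OpenAI's actual WebText tooling was not released, so the URLs and steps here are illustrative):

    import html.parser

    class TextExtractor(html.parser.HTMLParser):
        """Collect the text content of an HTML document."""
        def __init__(self):
            super().__init__()
            self.chunks = []
        def handle_data(self, data):
            self.chunks.append(data)

    def clean_corpus(pages):
        """pages: iterable of (url, html_source) pairs."""
        seen, cleaned = set(), []
        for url, source in pages:
            # Wikipedia pages are dropped, since their presence in many
            # other datasets could have induced overfitting.
            if "wikipedia.org" in url:
                continue
            parser = TextExtractor()
            parser.feed(source)          # parse HTML into plain text
            text = " ".join(" ".join(parser.chunks).split())
            if text in seen:             # eliminate duplicate pages
                continue
            seen.add(text)
            cleaned.append(text)
        return cleaned

    print(clean_corpus([
        ("https://example.com/a", "<p>hello world</p>"),
        ("https://example.com/b", "<p>hello world</p>"),   # duplicate
        ("https://en.wikipedia.org/wiki/X", "<p>wiki</p>"),
    ]))  # -> ['hello world']
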
<s>
While	O
the	O
cost	O
of	O
training	O
GPT-2	B-General_Concept
is	O
known	O
to	O
have	O
been	O
$256	O
per	O
hour	O
,	O
the	O
number	O
of	O
hours	O
it	O
took	O
to	O
complete	O
training	O
is	O
unknown	O
;	O
therefore	O
,	O
the	O
overall	O
training	O
cost	O
cannot	O
be	O
estimated	O
accurately	O
.	O
</s>
<s>
However	O
,	O
comparable	O
large	O
language	B-Language
models	I-Language
using	O
transformer	B-Algorithm
architectures	O
have	O
had	O
their	O
costs	O
documented	O
in	O
more	O
detail	O
;	O
the	O
training	O
processes	O
for	O
BERT	B-General_Concept
and	O
XLNet	B-General_Concept
consumed	O
,	O
respectively	O
,	O
$	O
6,912	O
and	O
$	O
245,000	O
of	O
resources	O
.	O
</s>
<s>
GPT-2	B-General_Concept
became	O
capable	O
of	O
performing	O
a	O
variety	O
of	O
tasks	O
beyond	O
simple	O
text	O
production	O
due	O
to	O
the	O
breadth	O
of	O
its	O
dataset	B-General_Concept
and	O
technique	O
:	O
answering	O
questions	O
,	O
summarizing	O
,	O
and	O
even	O
translating	B-Application
between	O
languages	O
in	O
a	O
variety	O
of	O
specific	O
domains	O
,	O
without	O
being	O
instructed	O
in	O
anything	O
beyond	O
how	O
to	O
predict	O
the	O
next	O
word	O
in	O
a	O
sequence	O
.	O
</s>
<s>
One	O
example	O
of	O
generalized	O
learning	O
is	O
GPT-2	B-General_Concept
'	O
s	O
ability	O
to	O
perform	O
machine	B-Application
translation	I-Application
between	O
French	O
and	O
English	O
,	O
for	O
which	O
task	O
GPT-2	B-General_Concept
'	O
s	O
performance	O
was	O
assessed	O
using	O
WMT-14	O
translation	O
tasks	O
.	O
</s>
<s>
GPT-2	B-General_Concept
'	O
s	O
training	O
corpus	O
included	O
virtually	O
no	O
French	O
text	O
;	O
non-English	O
text	O
was	O
deliberately	O
removed	O
while	O
cleaning	O
the	O
dataset	B-General_Concept
prior	O
to	O
training	O
,	O
and	O
as	O
a	O
consequence	O
,	O
only	O
10	O
MB	O
of	O
French	O
text	O
out	O
of	O
the	O
remaining	O
40,000	O
MB	O
was	O
available	O
for	O
the	O
model	O
to	O
learn	O
from	O
(	O
mostly	O
from	O
foreign-language	O
quotations	O
in	O
English	O
posts	O
and	O
articles	O
)	O
.	O
</s>
<s>
Despite	O
this	O
,	O
GPT-2	B-General_Concept
achieved	O
5	O
BLEU	O
on	O
the	O
WMT-14	O
English-to-French	O
test	O
set	O
(	O
slightly	O
below	O
the	O
score	O
of	O
a	O
translation	O
via	O
word-for-word	O
substitution	O
)	O
.	O
</s>
<s>
It	O
was	O
also	O
able	O
to	O
outperform	O
several	O
contemporary	O
(	O
2017	O
)	O
unsupervised	B-General_Concept
machine	B-Application
translation	I-Application
baselines	O
on	O
the	O
French-to-English	O
test	O
set	O
,	O
where	O
GPT-2	B-General_Concept
achieved	O
11.5	O
BLEU	O
.	O
</s>
<s>
This	O
remained	O
below	O
the	O
highest-performing	O
contemporary	O
unsupervised	B-General_Concept
approach	I-General_Concept
(	O
2019	O
)	O
,	O
which	O
had	O
achieved	O
33.5	O
BLEU	O
.	O
</s>
<s>
However	O
,	O
other	O
models	O
used	O
large	O
amounts	O
of	O
French	O
text	O
to	O
achieve	O
these	O
results	O
;	O
GPT-2	B-General_Concept
was	O
estimated	O
to	O
have	O
used	O
a	O
monolingual	O
French	O
corpus	O
approximately	O
1/500	O
the	O
size	O
of	O
those	O
used	O
by	O
comparable	O
approaches	O
.	O
</s>
<s>
GPT-2	B-General_Concept
was	O
first	O
announced	O
on	O
14	O
February	O
2019	O
.	O
</s>
<s>
A	O
February	O
2019	O
article	O
in	O
The	O
Verge	O
by	O
James	O
Vincent	O
said	O
that	O
,	O
while	O
"[the]	O
writing	O
it	O
produces	O
is	O
usually	O
easily	O
identifiable	O
as	O
non-human	O
"	O
,	O
it	O
remained	O
"	O
one	O
of	O
the	O
most	O
exciting	O
examples	O
yet	O
"	O
of	O
language	B-General_Concept
generation	I-General_Concept
programs	O
:	O
</s>
<s>
The	O
Guardian	O
described	O
this	O
output	O
as	O
"	O
plausible	O
newspaper	O
prose	O
"	O
;	O
Kelsey	O
Piper	O
of	O
Vox	O
said	O
"	O
one	O
of	O
the	O
coolest	O
AI	B-Application
systems	O
I	O
’ve	O
ever	O
seen	O
may	O
also	O
be	O
the	O
one	O
that	O
will	O
kick	O
me	O
out	O
of	O
my	O
job	O
"	O
.	O
</s>
<s>
GPT-2	B-General_Concept
'	O
s	O
flexibility	O
was	O
described	O
as	O
"	O
impressive	O
"	O
by	O
The	O
Verge	O
;	O
specifically	O
,	O
its	O
ability	O
to	O
translate	B-Application
text	I-Application
between	O
languages	O
,	O
summarize	O
long	O
articles	O
,	O
and	O
answer	O
trivia	O
questions	O
were	O
noted	O
.	O
</s>
<s>
A	O
study	O
by	O
the	O
University	O
of	O
Amsterdam	O
employing	O
a	O
modified	O
Turing	O
test	O
found	O
that	O
at	O
least	O
in	O
some	O
scenarios	O
,	O
participants	O
were	O
unable	O
to	O
distinguish	O
poems	O
generated	O
by	O
GPT-2	B-General_Concept
from	O
those	O
written	O
by	O
humans	O
.	O
</s>
<s>
While	O
previous	O
OpenAI	O
models	O
had	O
been	O
made	O
immediately	O
available	O
to	O
the	O
public	O
,	O
OpenAI	O
initially	O
refused	O
to	O
make	O
a	O
public	O
release	O
of	O
GPT-2	B-General_Concept
'	O
s	O
source	O
code	O
when	O
announcing	O
it	O
in	O
February	O
,	O
citing	O
the	O
risk	O
of	O
malicious	O
use	O
;	O
limited	O
access	O
to	O
the	O
model	O
(	O
i.e.	O
an	O
interface	O
that	O
allowed	O
input	O
and	O
provided	O
output	O
,	O
not	O
the	O
source	O
code	O
itself	O
)	O
was	O
allowed	O
for	O
selected	O
press	O
outlets	O
on	O
announcement	O
.	O
</s>
<s>
One	O
commonly-cited	O
justification	O
was	O
that	O
,	O
since	O
generated	O
text	O
was	O
usually	O
completely	O
novel	O
,	O
it	O
could	O
be	O
used	O
by	O
spammers	O
to	O
evade	O
automated	O
filters	O
;	O
OpenAI	O
demonstrated	O
a	O
version	O
of	O
GPT-2	B-General_Concept
fine-tuned	O
to	O
"	O
generate	O
infinite	O
positive	O
–	O
or	O
negative	O
–	O
reviews	O
of	O
products	O
"	O
.	O
</s>
<s>
Another	O
justification	O
was	O
that	O
GPT-2	B-General_Concept
could	O
be	O
used	O
to	O
generate	O
text	O
that	O
was	O
obscene	O
or	O
racist	O
.	O
</s>
<s>
The	O
Allen	O
Institute	O
for	O
Artificial	B-Application
Intelligence	I-Application
,	O
in	O
response	O
to	O
GPT-2	B-General_Concept
,	O
announced	O
a	O
tool	O
to	O
detect	O
"	O
neural	O
fake	O
news	O
"	O
.	O
</s>
<s>
A	O
February	O
2019	O
article	O
in	O
The	O
Verge	O
argued	O
that	O
the	O
threat	O
posed	O
by	O
GPT-2	B-General_Concept
had	O
been	O
exaggerated	O
;	O
Anima	O
Anandkumar	O
,	O
a	O
professor	O
at	O
Caltech	O
and	O
director	O
of	O
machine	O
learning	O
research	O
at	O
Nvidia	O
,	O
said	O
that	O
there	O
was	O
no	O
evidence	O
that	O
GPT-2	B-General_Concept
had	O
the	O
capabilities	O
to	O
pose	O
the	O
threats	O
described	O
by	O
OpenAI	O
,	O
and	O
that	O
what	O
they	O
did	O
was	O
the	O
"	O
opposite	O
of	O
open	O
"	O
,	O
characterizing	O
their	O
refusal	O
to	O
release	O
the	O
full	O
model	O
as	O
"	O
malicious	O
BS	O
"	O
.	O
</s>
<s>
The	O
Gradient	O
published	O
an	O
open	O
letter	O
to	O
OpenAI	O
requesting	O
that	O
they	O
release	O
the	O
model	O
publicly	O
,	O
comparing	O
the	O
threat	O
posed	O
by	O
text-generation	O
AI	B-Application
to	O
the	O
threat	O
posed	O
by	O
the	O
printing	O
press	O
,	O
and	O
giving	O
Photoshop	B-Application
as	O
an	O
example	O
of	O
"	O
a	O
technology	O
that	O
has	O
(	O
thankfully	O
)	O
not	O
destroyed	O
modern	O
society	O
despite	O
its	O
potential	O
for	O
chaos	O
"	O
:	O
</s>
<s>
Thirty	O
years	O
later	O
,	O
society	O
has	O
emerged	O
relatively	O
unscathed	O
despite	O
Photoshop	B-Application
being	O
simple	O
enough	O
for	O
high	O
school	O
students	O
to	O
use	O
and	O
ubiquitous	O
enough	O
to	O
commandeer	O
its	O
own	O
verb	O
.	O
</s>
<s>
Precisely	O
because	O
everyone	O
knows	O
about	O
Photoshop	B-Application
.	O
</s>
<s>
While	O
OpenAI	O
did	O
not	O
release	O
the	O
fully-trained	O
model	O
or	O
the	O
corpora	O
it	O
was	O
trained	O
on	O
,	O
description	O
of	O
their	O
methods	O
in	O
prior	O
publications	O
(	O
and	O
the	O
free	O
availability	O
of	O
underlying	O
technology	O
)	O
made	O
it	O
possible	O
for	O
GPT-2	B-General_Concept
to	O
be	O
replicated	O
by	O
others	O
as	O
free	B-Application
software	I-Application
;	O
one	O
such	O
replication	O
,	O
OpenGPT-2	O
,	O
was	O
released	O
in	O
August	O
2019	O
,	O
in	O
conjunction	O
with	O
a	O
freely	O
licensed	O
version	O
of	O
WebText	O
called	O
OpenWebText	O
.	O
</s>
<s>
On	O
August	O
20	O
,	O
2019	O
,	O
OpenAI	O
released	O
a	O
partial	O
version	O
of	O
GPT-2	B-General_Concept
,	O
with	O
774	O
million	O
parameters	O
(	O
roughly	O
half	O
the	O
size	O
of	O
the	O
full	O
1.5	O
billion	O
parameter	O
model	O
)	O
.	O
</s>
<s>
Initial	O
concerns	O
that	O
GPT-2	B-General_Concept
would	O
lend	O
itself	O
to	O
widespread	O
misuse	O
did	O
not	O
come	O
to	O
pass	O
;	O
The	O
Verge	O
said	O
that	O
"	O
there	O
are	O
reasons	O
to	O
be	O
skeptical	O
about	O
claims	O
that	O
AI	B-Application
technology	O
will	O
usher	O
in	O
some	O
sort	O
of	O
‘	O
infopocalypse.	O
’	O
For	O
a	O
start	O
,	O
we	O
already	O
have	O
programs	O
that	O
can	O
generate	O
plausible	O
text	O
at	O
high	O
volume	O
for	O
little	O
cost	O
:	O
humans.	O
"	O
</s>
<s>
While	O
GPT-2	B-General_Concept
'	O
s	O
ability	O
to	O
generate	O
plausible	O
passages	O
of	O
natural	O
language	O
text	O
were	O
generally	O
remarked	O
on	O
positively	O
,	O
its	O
shortcomings	O
were	O
noted	O
as	O
well	O
,	O
especially	O
when	O
generating	O
texts	O
longer	O
than	O
a	O
couple	O
paragraphs	O
;	O
Vox	O
said	O
"	O
the	O
prose	O
is	O
pretty	O
rough	O
,	O
there	O
’s	O
the	O
occasional	O
non-sequitur	O
,	O
and	O
the	O
articles	O
get	O
less	O
coherent	O
the	O
longer	O
they	O
get	O
"	O
.	O
</s>
<s>
The	O
Verge	O
similarly	O
noted	O
that	O
longer	O
samples	O
of	O
GPT-2	B-General_Concept
writing	O
tended	O
to	O
"	O
stray	O
off	O
topic	O
"	O
and	O
lack	O
overall	O
coherence	O
;	O
The	O
Register	O
opined	O
that	O
"	O
a	O
human	O
reading	O
it	O
should	O
,	O
after	O
a	O
short	O
while	O
,	O
realize	O
something	O
's	O
up	O
"	O
,	O
and	O
noted	O
that	O
"	O
GPT-2	B-General_Concept
does	O
n't	O
answer	O
questions	O
as	O
well	O
as	O
other	O
systems	O
that	O
rely	O
on	O
algorithms	O
to	O
extract	O
and	O
retrieve	O
information.	O
"	O
</s>
<s>
GPT-2	B-General_Concept
deployment	O
is	O
resource-intensive	O
;	O
the	O
full	O
version	O
of	O
the	O
model	O
is	O
larger	O
than	O
five	O
gigabytes	O
,	O
making	O
it	O
difficult	O
to	O
embed	O
locally	O
into	O
applications	O
,	O
and	O
consumes	O
large	O
amounts	O
of	O
RAM	O
.	O
</s>
<s>
In	O
addition	O
,	O
performing	O
a	O
single	O
prediction	O
"	O
can	O
occupy	O
a	O
CPU	O
at	O
100%	O
utilization	O
for	O
several	O
minutes	O
"	O
,	O
and	O
even	O
with	O
GPU	B-Architecture
processing	O
,	O
"	O
a	O
single	O
prediction	O
can	O
take	O
seconds	O
"	O
.	O
</s>
<s>
To	O
alleviate	O
these	O
issues	O
,	O
the	O
company	O
Hugging	B-Application
Face	I-Application
created	O
DistilGPT2	O
,	O
using	O
knowledge	B-Algorithm
distillation	I-Algorithm
to	O
produce	O
a	O
smaller	O
model	O
that	O
"	O
scores	O
a	O
few	O
points	O
lower	O
on	O
some	O
quality	O
benchmarks	O
"	O
,	O
but	O
is	O
"	O
33%	O
smaller	O
and	O
twice	O
as	O
fast	O
"	O
.	O
</s>
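
The core of knowledge distillation is training the small model to match the large model's output distribution; a minimal NumPy sketch of that loss term (temperature and logits are illustrative, and the actual DistilGPT2 training recipe is more involved):

    import numpy as np

    def softmax(z, T=1.0):
        z = z / T
        e = np.exp(z - z.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    def distillation_loss(student_logits, teacher_logits, T=2.0):
        """Cross-entropy of the student against the teacher's softened
        next-token distribution; a temperature T > 1 exposes the teacher's
        relative preferences among less likely tokens."""
        teacher_probs = softmax(teacher_logits, T)
        log_student = np.log(softmax(student_logits, T))
        # Standard T**2 scaling keeps gradient magnitudes comparable.
        return -(T ** 2) * np.sum(teacher_probs * log_student, axis=-1).mean()

    # Toy check: the loss falls as the student matches the teacher.
    teacher = np.array([[2.0, 0.5, -1.0]])
    print(distillation_loss(np.array([[0.0, 0.0, 0.0]]), teacher))
    print(distillation_loss(teacher.copy(), teacher))  # lower
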
<s>
Possible	O
applications	O
of	O
GPT-2	B-General_Concept
described	O
by	O
journalists	O
included	O
aiding	O
humans	O
in	O
writing	O
text	O
like	O
news	O
articles	O
.	O
</s>
<s>
Even	O
before	O
the	O
release	O
of	O
the	O
full	O
version	O
,	O
GPT-2	B-General_Concept
was	O
used	O
for	O
a	O
variety	O
of	O
applications	O
and	O
services	O
,	O
as	O
well	O
as	O
for	O
entertainment	O
.	O
</s>
<s>
In	O
June	O
2019	O
,	O
a	O
subreddit	B-Application
named	O
r/SubSimulatorGPT2	O
was	O
created	O
in	O
which	O
a	O
variety	O
of	O
GPT-2	B-General_Concept
instances	O
trained	O
on	O
different	O
subreddits	O
made	O
posts	O
and	O
replied	O
to	O
each	O
other	O
's	O
comments	O
,	O
creating	O
a	O
situation	O
where	O
one	O
could	O
observe	O
"	O
an	O
AI	B-Application
personification	O
of	O
r/Bitcoin	O
argue	O
with	O
the	O
machine	O
learning-derived	O
spirit	O
of	O
r/ShittyFoodPorn	O
"	O
;	O
by	O
July	O
of	O
that	O
year	O
,	O
a	O
GPT-2-based	O
software	O
program	O
released	O
to	O
autocomplete	O
lines	O
of	O
code	O
in	O
a	O
variety	O
of	O
programming	O
languages	O
was	O
described	O
by	O
users	O
as	O
a	O
"	O
game-changer	O
"	O
.	O
</s>
<s>
In	O
2019	O
,	O
AI	B-Application
Dungeon	I-Application
was	O
launched	O
,	O
which	O
used	O
GPT-2	B-General_Concept
to	O
generate	O
dynamic	O
text	O
adventures	O
based	O
on	O
user	O
input	O
.	O
</s>
<s>
AI	B-Application
Dungeon	I-Application
now	O
offers	O
access	O
to	O
the	O
largest	O
release	O
of	O
GPT-3	B-General_Concept
API	B-Application
as	O
an	O
optional	O
paid	O
upgrade	O
;	O
the	O
free	O
version	O
of	O
the	O
site	O
uses	O
the	O
second-largest	O
release	O
of	O
GPT-3	B-General_Concept
.	O
</s>
<s>
Latitude	O
,	O
the	O
company	O
formed	O
around	O
AI	B-Application
Dungeon	I-Application
,	O
raised	O
$3.3	O
million	O
in	O
seed	O
funding	O
in	O
2021	O
.	O
</s>
<s>
Several	O
websites	O
host	O
interactive	O
demonstrations	O
of	O
different	O
instances	O
of	O
GPT-2	B-General_Concept
and	O
other	O
transformer	B-Algorithm
models	I-Algorithm
.	O
</s>
<s>
In	O
February	O
2021	O
,	O
a	O
crisis	O
center	O
for	O
troubled	O
teens	O
announced	O
that	O
they	O
would	O
begin	O
using	O
a	O
GPT-2-derived	O
chatbot	O
to	O
help	O
train	O
counselors	O
by	O
allowing	O
them	O
to	O
have	O
conversations	O
with	O
simulated	O
teens	O
(	O
this	O
use	O
was	O
purely	O
for	O
internal	O
purposes	O
,	O
and	O
did	O
not	O
involve	O
having	O
GPT-2	B-General_Concept
communicate	O
with	O
the	O
teens	O
themselves	O
)	O
.	O
</s>
