<s>
Generative	O
Pre-trained	O
Transformer	B-Algorithm
3	O
(	O
GPT-3	B-General_Concept
)	O
is	O
an	O
autoregressive	B-Algorithm
language	B-Language
model	I-Language
released	O
in	O
2020	O
that	O
uses	O
deep	B-Algorithm
learning	I-Algorithm
to	O
produce	O
human-like	O
text	O
.	O
</s>
<s>
The	O
architecture	O
is	O
a	O
decoder-only	O
transformer	B-Algorithm
network	I-Algorithm
with	O
a	O
2048-token-long	O
context	O
and	O
then-unprecedented	O
size	O
of	O
175	O
billion	O
parameters	B-Architecture
,	O
requiring	O
800GB	O
to	O
store	O
.	O
</s>
<s>
The	O
model	O
was	O
trained	O
using	O
generative	O
pre-training	O
;	O
it	O
is	O
trained	O
to	O
predict	O
what	O
the	O
next	O
token	O
is	O
based	O
on	O
previous	O
tokens	O
.	O
</s>
<s>
The	O
model	O
demonstrated	O
strong	O
zero-shot	B-Algorithm
and	O
few-shot	O
learning	O
on	O
many	O
tasks	O
.	O
</s>
<s>
The	O
successor	O
to	O
GPT-2	B-General_Concept
,	O
GPT-3	B-General_Concept
is	O
the	O
third-generation	O
language	O
prediction	O
model	O
in	O
a	O
GPT	O
series	O
created	O
by	O
OpenAI	O
,	O
a	O
San	O
Francisco-based	O
artificial	B-Application
intelligence	I-Application
research	O
laboratory	O
.	O
</s>
<s>
GPT-3	B-General_Concept
,	O
which	O
was	O
introduced	O
in	O
May	O
2020	O
,	O
and	O
was	O
in	O
beta	O
testing	O
as	O
of	O
July	O
2020	O
,	O
is	O
part	O
of	O
a	O
trend	O
in	O
natural	B-Language
language	I-Language
processing	I-Language
(	O
NLP	B-Language
)	O
systems	O
of	O
pre-trained	O
language	O
representations	O
.	O
</s>
<s>
The	O
quality	O
of	O
the	O
text	O
generated	O
by	O
GPT-3	B-General_Concept
is	O
so	O
high	O
that	O
it	O
can	O
be	O
difficult	O
to	O
determine	O
whether	O
or	O
not	O
it	O
was	O
written	O
by	O
a	O
human	O
,	O
which	O
has	O
both	O
benefits	O
and	O
risks	O
.	O
</s>
<s>
Thirty-one	O
OpenAI	O
researchers	O
and	O
engineers	O
presented	O
the	O
original	O
May	O
28	O
,	O
2020	O
paper	O
introducing	O
GPT-3	B-General_Concept
.	O
</s>
<s>
In	O
their	O
paper	O
,	O
they	O
warned	O
of	O
GPT-3	B-General_Concept
'	O
s	O
potential	O
dangers	O
and	O
called	O
for	O
research	O
to	O
mitigate	O
risk	O
.	O
</s>
<s>
David	O
Chalmers	O
,	O
an	O
Australian	O
philosopher	O
,	O
described	O
GPT-3	B-General_Concept
as	O
"	O
one	O
of	O
the	O
most	O
interesting	O
and	O
important	O
AI	B-Application
systems	O
ever	O
produced.	O
"	O
</s>
<s>
An	O
April	O
2022	O
review	O
in	O
The	O
New	O
York	O
Times	O
described	O
GPT-3	B-General_Concept
'	O
s	O
capabilities	O
as	O
being	O
able	O
to	O
write	O
original	O
prose	O
with	O
fluency	O
equivalent	O
to	O
that	O
of	O
a	O
human	O
.	O
</s>
<s>
Microsoft	O
announced	O
on	O
September	O
22	O
,	O
2020	O
,	O
that	O
it	O
had	O
licensed	O
"	O
exclusive	O
"	O
use	O
of	O
GPT-3	B-General_Concept
;	O
others	O
can	O
still	O
use	O
the	O
public	O
API	B-General_Concept
to	O
receive	O
output	O
,	O
but	O
only	O
Microsoft	O
has	O
access	O
to	O
GPT-3	B-General_Concept
'	O
s	O
underlying	O
model	O
.	O
</s>
<s>
One	O
architecture	O
used	O
in	O
natural	B-Language
language	I-Language
processing	I-Language
(	O
NLP	B-Language
)	O
is	O
a	O
neural	B-Architecture
network	I-Architecture
based	O
on	O
a	O
deep	B-Algorithm
learning	I-Algorithm
model	O
that	O
was	O
first	O
introduced	O
in	O
2017	O
—	O
the	O
transformer	B-Algorithm
.	O
</s>
<s>
GPT-n	O
models	O
are	O
transformer-based	O
deep	B-Algorithm
learning	I-Algorithm
neural	B-Architecture
network	I-Architecture
architectures	O
.	O
</s>
<s>
There	O
are	O
a	O
number	O
of	O
NLP	B-Language
systems	O
capable	O
of	O
processing	O
,	O
mining	O
,	O
organizing	O
,	O
connecting	O
and	O
contrasting	O
textual	O
input	O
,	O
as	O
well	O
as	O
correctly	O
answering	B-Algorithm
questions	I-Algorithm
.	O
</s>
<s>
On	O
June	O
11	O
,	O
2018	O
,	O
OpenAI	O
researchers	O
and	O
engineers	O
posted	O
their	O
original	O
paper	O
on	O
generative	O
models	O
—	O
language	B-Language
models	I-Language
—	O
artificial	B-Application
intelligence	I-Application
systems	O
—	O
that	O
could	O
be	O
pre-trained	O
with	O
an	O
enormous	O
and	O
diverse	O
corpus	O
of	O
text	O
via	O
datasets	O
,	O
in	O
a	O
process	O
they	O
called	O
generative	O
pre-training	O
(	O
GP	O
)	O
.	O
</s>
<s>
The	O
authors	O
described	O
how	O
language	O
understanding	O
performances	O
in	O
natural	B-Language
language	I-Language
processing	I-Language
(	O
NLP	B-Language
)	O
were	O
improved	O
in	O
GPT-n	O
through	O
a	O
process	O
of	O
"	O
generative	O
pre-training	O
of	O
a	O
language	B-Language
model	I-Language
on	O
a	O
diverse	O
corpus	O
of	O
unlabeled	O
text	O
,	O
followed	O
by	O
discriminative	O
fine-tuning	O
on	O
each	O
specific	O
task.	O
"	O
</s>
<s>
This	O
eliminated	O
the	O
need	O
for	O
human	O
supervision	O
and	O
for	O
time-intensive	O
hand-labeling	O
.	O
</s>
<s>
In	O
February	O
2020	O
,	O
Microsoft	O
introduced	O
its	O
Turing	O
Natural	O
Language	O
Generation	O
(	O
T-NLG	O
)	O
,	O
which	O
was	O
claimed	O
to	O
be	O
the	O
"	O
largest	O
language	B-Language
model	I-Language
ever	O
published	O
at	O
17	O
billion	O
parameters.	O
"	O
</s>
<s>
It	O
performed	O
better	O
than	O
any	O
other	O
language	B-Language
model	I-Language
at	O
a	O
variety	O
of	O
tasks	O
which	O
included	O
summarizing	B-Application
texts	I-Application
and	O
answering	B-Algorithm
questions	I-Algorithm
.	O
</s>
<s>
On	O
May	O
28	O
,	O
2020	O
,	O
an	O
arXiv	O
preprint	O
by	O
a	O
group	O
of	O
31	O
engineers	O
and	O
researchers	O
at	O
OpenAI	O
described	O
the	O
development	O
of	O
GPT-3	B-General_Concept
,	O
a	O
third-generation	O
"	O
state-of-the-art	O
language	B-Language
model	I-Language
"	O
.	O
</s>
<s>
The	O
team	O
increased	O
the	O
capacity	O
of	O
GPT-3	B-General_Concept
by	O
over	O
two	O
orders	O
of	O
magnitude	O
from	O
that	O
of	O
its	O
predecessor	O
,	O
GPT-2	B-General_Concept
,	O
making	O
GPT-3	B-General_Concept
the	O
largest	O
non-sparse	O
language	B-Language
model	I-Language
to	O
date	O
.	O
</s>
<s>
Because	O
GPT-3	B-General_Concept
is	O
structurally	O
similar	O
to	O
its	O
predecessors	O
,	O
its	O
greater	O
accuracy	O
is	O
attributed	O
to	O
its	O
increased	O
capacity	O
and	O
greater	O
number	O
of	O
parameters	B-Architecture
.	O
</s>
<s>
GPT-3	B-General_Concept
'	O
s	O
capacity	O
is	O
ten	O
times	O
larger	O
than	O
that	O
of	O
Microsoft	O
's	O
Turing	O
NLG	O
,	O
the	O
next	O
largest	O
NLP	B-Language
model	O
known	O
at	O
the	O
time	O
.	O
</s>
<s>
Lambdalabs	O
estimated	O
a	O
hypothetical	O
cost	O
of	O
around	O
$4.6	O
million	O
US	O
dollars	O
and	O
355	O
years	O
to	O
train	O
GPT-3	B-General_Concept
on	O
a	O
single	O
GPU	B-Architecture
in	O
2020	O
,	O
with	O
lower	O
actual	O
training	O
time	O
by	O
using	O
more	O
GPUs	B-Architecture
in	O
parallel	O
.	O
</s>
<s>
Sixty	O
percent	O
of	O
the	O
weighted	O
pre-training	O
dataset	O
for	O
GPT-3	B-General_Concept
comes	O
from	O
a	O
filtered	O
version	O
of	O
Common	O
Crawl	O
consisting	O
of	O
410	O
billion	O
byte-pair-encoded	B-Algorithm
tokens	O
.	O
</s>
<s>
GPT-3	B-General_Concept
was	O
trained	O
on	O
hundreds	O
of	O
billions	O
of	O
words	O
and	O
is	O
also	O
capable	O
of	O
coding	O
in	O
CSS	O
,	O
JSX	O
,	O
and	O
Python	O
,	O
among	O
others	O
.	O
</s>
<s>
Since	O
GPT-3	B-General_Concept
'	O
s	O
training	O
data	O
was	O
all-encompassing	O
,	O
it	O
does	O
not	O
require	O
further	O
training	O
for	O
distinct	O
language	O
tasks	O
.	O
</s>
<s>
The	O
training	O
data	O
contains	O
occasional	O
toxic	O
language	O
and	O
GPT-3	B-General_Concept
occasionally	O
generates	O
toxic	O
language	O
as	O
a	O
result	O
of	O
mimicking	O
its	O
training	O
data	O
.	O
</s>
<s>
A	O
study	O
from	O
the	O
University	O
of	O
Washington	O
found	O
that	O
GPT-3	B-General_Concept
produced	O
toxic	O
language	O
at	O
a	O
toxicity	O
level	O
comparable	O
to	O
the	O
similar	O
natural	B-Language
language	I-Language
processing	I-Language
models	O
of	O
GPT-2	B-General_Concept
and	O
CTRL	O
.	O
</s>
<s>
OpenAI	O
has	O
implemented	O
several	O
strategies	O
to	O
limit	O
the	O
amount	O
of	O
toxic	O
language	O
generated	O
by	O
GPT-3	B-General_Concept
.	O
</s>
<s>
As	O
a	O
result	O
,	O
GPT-3	B-General_Concept
produced	O
less	O
toxic	O
language	O
compared	O
to	O
its	O
predecessor	O
model	O
,	O
GPT-1	B-General_Concept
,	O
although	O
it	O
produced	O
both	O
more	O
generations	O
and	O
a	O
higher	O
toxicity	O
of	O
toxic	O
language	O
compared	O
to	O
CTRL	O
Wiki	O
,	O
a	O
language	B-Language
model	I-Language
trained	O
entirely	O
on	O
Wikipedia	O
data	O
.	O
</s>
<s>
On	O
June	O
11	O
,	O
2020	O
,	O
OpenAI	O
announced	O
that	O
users	O
could	O
request	O
access	O
to	O
its	O
user-friendly	O
GPT-3	B-General_Concept
API	B-General_Concept
—	O
a	O
"	O
machine	O
learning	O
toolset	O
"	O
—	O
to	O
help	O
OpenAI	O
"	O
explore	O
the	O
strengths	O
and	O
limits	O
"	O
of	O
this	O
new	O
technology	O
.	O
</s>
<s>
The	O
invitation	O
described	O
how	O
this	O
API	B-General_Concept
had	O
a	O
general-purpose	O
"	O
text	O
in	O
,	O
text	O
out	O
"	O
interface	O
that	O
can	O
complete	O
almost	O
"	O
any	O
English	O
language	O
task	O
"	O
,	O
instead	O
of	O
the	O
usual	O
single	O
use-case	O
.	O
</s>
<s>
According	O
to	O
one	O
user	O
,	O
who	O
had	O
access	O
to	O
a	O
private	O
early	O
release	O
of	O
the	O
OpenAI	O
GPT-3	B-General_Concept
API	B-General_Concept
,	O
GPT-3	B-General_Concept
was	O
"	O
eerily	O
good	O
"	O
at	O
writing	O
"	O
amazingly	O
coherent	O
text	O
"	O
with	O
only	O
a	O
few	O
simple	O
prompts	O
.	O
</s>
<s>
In	O
an	O
initial	O
experiment	O
80	O
US	O
subjects	O
were	O
asked	O
to	O
judge	O
if	O
short	O
~	O
200	O
word	O
articles	O
were	O
written	O
by	O
humans	O
or	O
GPT-3	B-General_Concept
.	O
</s>
<s>
On	O
November	O
18	O
,	O
2021	O
,	O
OpenAI	O
announced	O
that	O
enough	O
safeguards	O
had	O
been	O
implemented	O
that	O
access	O
to	O
its	O
API	B-General_Concept
would	O
be	O
unrestricted	O
.	O
</s>
<s>
On	O
January	O
27	O
,	O
2022	O
,	O
OpenAI	O
announced	O
that	O
its	O
newest	O
GPT-3	B-General_Concept
language	B-Language
models	I-Language
,	O
collectively	O
referred	O
to	O
as	O
InstructGPT	O
,	O
was	O
now	O
the	O
default	O
language	B-Language
model	I-Language
used	O
on	O
their	O
API	B-General_Concept
.	O
</s>
<s>
Because	O
GPT-3	B-General_Concept
can	O
"	O
generate	O
news	O
articles	O
which	O
human	O
evaluators	O
have	O
difficulty	O
distinguishing	O
from	O
articles	O
written	O
by	O
humans	O
,	O
"	O
GPT-3	B-General_Concept
has	O
the	O
"	O
potential	O
to	O
advance	O
both	O
the	O
beneficial	O
and	O
harmful	O
applications	O
of	O
language	O
models.	O
"	O
</s>
<s>
In	O
their	O
May	O
28	O
,	O
2020	O
paper	O
,	O
the	O
researchers	O
described	O
in	O
detail	O
the	O
potential	O
"	O
harmful	O
effects	O
of	O
GPT-3	B-General_Concept
"	O
which	O
include	O
"	O
misinformation	O
,	O
spam	O
,	O
phishing	O
,	O
abuse	O
of	O
legal	O
and	O
governmental	O
processes	O
,	O
fraudulent	O
academic	O
essay	O
writing	O
and	O
social	O
engineering	O
pretexting	O
"	O
.	O
</s>
<s>
GPT-3	B-General_Concept
is	O
capable	O
of	O
performing	O
zero-shot	B-Algorithm
and	O
few-shot	O
learning	O
(	O
including	O
one-shot	O
)	O
.	O
</s>
<s>
In	O
June	O
2022	O
,	O
Almira	O
Osmanovic	O
Thunström	O
wrote	O
that	O
GPT-3	B-General_Concept
was	O
the	O
primary	O
author	O
on	O
an	O
article	O
on	O
itself	O
,	O
that	O
they	O
had	O
submitted	O
it	O
for	O
publication	O
,	O
and	O
that	O
it	O
had	O
been	O
pre-published	O
while	O
waiting	O
for	O
completion	O
of	O
its	O
review	O
.	O
</s>
<s>
On	O
March	O
15	O
,	O
2022	O
,	O
OpenAI	O
made	O
available	O
new	O
versions	O
of	O
GPT-3	B-General_Concept
and	O
Codex	O
in	O
its	O
API	B-General_Concept
with	O
edit	O
and	O
insert	O
capabilities	O
under	O
the	O
names	O
"	O
text-davinci-002	O
"	O
and	O
"	O
code-davinci-002	O
"	O
.	O
</s>
<s>
On	O
November	O
30	O
,	O
2022	O
,	O
OpenAI	O
began	O
referring	O
to	O
these	O
models	O
as	O
belonging	O
to	O
the	O
"	O
GPT-3.5	O
"	O
series	O
,	O
and	O
released	O
ChatGPT	B-General_Concept
,	O
which	O
was	O
fine-tuned	O
from	O
a	O
model	O
in	O
the	O
GPT-3.5	O
series	O
.	O
</s>
<s>
GPT-3	B-General_Concept
,	O
specifically	O
the	O
Codex	O
model	O
,	O
is	O
the	O
basis	O
for	O
GitHub	B-Application
Copilot	I-Application
,	O
a	O
code	O
completion	O
and	O
generation	O
software	O
that	O
can	O
be	O
used	O
in	O
various	O
code	O
editors	O
and	O
IDEs	O
.	O
</s>
<s>
GPT-3	B-General_Concept
is	O
used	O
in	O
certain	O
Microsoft	O
products	O
to	O
translate	O
conventional	O
language	O
into	O
formal	O
computer	O
code	O
.	O
</s>
<s>
GPT-3	B-General_Concept
has	O
been	O
used	O
in	O
CodexDB	O
to	O
generate	O
query-specific	O
code	O
for	O
SQL	O
processing	O
.	O
</s>
<s>
GPT-3	B-General_Concept
has	O
been	O
used	O
by	O
Jason	O
Rohrer	O
in	O
a	O
retro-themed	O
chatbot	B-Application
project	O
named	O
"	O
Project	O
December	O
"	O
,	O
which	O
is	O
accessible	O
online	O
and	O
allows	O
users	O
to	O
converse	O
with	O
several	O
AIs	B-Application
using	O
GPT-3	B-General_Concept
technology	O
.	O
</s>
<s>
GPT-3	B-General_Concept
was	O
used	O
by	O
The	O
Guardian	O
to	O
write	O
an	O
article	O
about	O
AI	B-Application
being	O
harmless	O
to	O
human	O
beings	O
.	O
</s>
<s>
GPT-3	B-General_Concept
was	O
used	O
in	O
AI	B-Application
Dungeon	I-Application
,	O
which	O
generates	O
text-based	O
adventure	O
games	O
.	O
</s>
<s>
GPT-3	B-General_Concept
is	O
used	O
to	O
aid	O
in	O
writing	O
copy	O
and	O
other	O
marketing	O
materials	O
.	O
</s>
<s>
A	O
2022	O
study	O
from	O
Drexel	O
University	O
suggested	O
that	O
GPT-3-based	O
systems	O
could	O
be	O
used	O
to	O
screen	O
for	O
early	O
signs	O
of	O
Alzheimer	O
's	O
disease	O
.	O
</s>
<s>
In	O
a	O
July	O
2020	O
review	O
in	O
The	O
New	O
York	O
Times	O
,	O
Farhad	O
Manjoo	O
said	O
that	O
GPT-3	B-General_Concept
'	O
s	O
ability	O
to	O
generate	O
computer	O
code	O
,	O
poetry	O
,	O
and	O
prose	O
is	O
not	O
just	O
"	O
amazing	O
"	O
,	O
"	O
spooky	O
"	O
,	O
and	O
"	O
humbling	O
"	O
,	O
but	O
also	O
"	O
more	O
than	O
a	O
little	O
terrifying	O
"	O
.	O
</s>
<s>
Daily	O
Nous	O
presented	O
a	O
series	O
of	O
articles	O
by	O
nine	O
philosophers	O
on	O
GPT-3	B-General_Concept
.	O
</s>
<s>
Australian	O
philosopher	O
David	O
Chalmers	O
described	O
GPT-3	B-General_Concept
as	O
"	O
one	O
of	O
the	O
most	O
interesting	O
and	O
important	O
AI	B-Application
systems	O
ever	O
produced	O
"	O
.	O
</s>
<s>
A	O
review	O
in	O
Wired	O
said	O
that	O
GPT-3	B-General_Concept
was	O
"	O
provoking	O
chills	O
across	O
Silicon	O
Valley	O
"	O
.	O
</s>
<s>
The	O
National	O
Law	O
Review	O
said	O
that	O
GPT-3	B-General_Concept
is	O
an	O
"	O
impressive	O
step	O
in	O
the	O
larger	O
process	O
"	O
,	O
with	O
OpenAI	O
and	O
others	O
finding	O
"	O
useful	O
applications	O
for	O
all	O
of	O
this	O
power	O
"	O
while	O
continuing	O
to	O
"	O
work	O
toward	O
a	O
more	O
general	O
intelligence	O
"	O
.	O
</s>
<s>
An	O
article	O
in	O
the	O
MIT	O
Technology	O
Review	O
,	O
cowritten	O
by	O
Deep	B-Algorithm
Learning	I-Algorithm
critic	O
Gary	O
Marcus	O
,	O
stated	O
that	O
GPT-3	B-General_Concept
'	O
s	O
"	O
comprehension	O
of	O
the	O
world	O
is	O
often	O
seriously	O
off	O
,	O
which	O
means	O
you	O
can	O
never	O
really	O
trust	O
what	O
it	O
says.	O
"	O
</s>
<s>
According	O
to	O
the	O
authors	O
,	O
GPT-3	B-General_Concept
models	O
relationships	O
between	O
words	O
without	O
having	O
an	O
understanding	O
of	O
the	O
meaning	O
behind	O
each	O
word	O
.	O
</s>
<s>
Jerome	O
Pesenti	O
,	O
head	O
of	O
the	O
Facebook	O
AI	B-Application
lab	O
,	O
said	O
GPT-3	B-General_Concept
is	O
"	O
unsafe	O
,	O
"	O
pointing	O
to	O
the	O
sexist	O
,	O
racist	O
and	O
other	O
biased	O
and	O
negative	O
language	O
generated	O
by	O
the	O
system	O
when	O
it	O
was	O
asked	O
to	O
discuss	O
Jews	O
,	O
women	O
,	O
black	O
people	O
,	O
and	O
the	O
Holocaust	O
.	O
</s>
<s>
Nabla	O
,	O
a	O
French	O
start-up	O
specializing	O
in	O
healthcare	O
technology	O
,	O
tested	O
GPT-3	B-General_Concept
as	O
a	O
medical	O
chatbot	B-Application
,	O
though	O
OpenAI	O
itself	O
warned	O
against	O
such	O
use	O
.	O
</s>
<s>
As	O
expected	O
,	O
GPT-3	B-General_Concept
showed	O
several	O
limitations	O
.	O
</s>
<s>
For	O
example	O
,	O
while	O
testing	O
GPT-3	B-General_Concept
responses	O
about	O
mental	O
health	O
issues	O
,	O
the	O
AI	B-Application
advised	O
a	O
simulated	O
patient	O
to	O
commit	O
suicide	O
.	O
</s>
<s>
Noam	O
Chomsky	O
expressed	O
his	O
skepticism	O
about	O
GPT-3	B-General_Concept
'	O
s	O
scientific	O
value	O
:	O
"	O
It	O
's	O
not	O
a	O
language	B-Language
model	I-Language
.	O
</s>
<s>
It	O
is	O
therefore	O
refuted	O
,	O
if	O
intended	O
as	O
a	O
language	B-Language
model	I-Language
,	O
by	O
normal	O
scientific	O
criteria	O
.	O
"	O
</s>
<s>
OpenAI	O
's	O
Sam	O
Altman	O
himself	O
criticized	O
what	O
he	O
called	O
"	O
GPT-3	B-General_Concept
hype	O
"	O
,	O
acknowledging	O
GPT-3	B-General_Concept
"	O
has	O
serious	O
weaknesses	O
and	O
sometimes	O
makes	O
very	O
silly	O
mistakes	O
...	O
AI	B-Application
is	O
going	O
to	O
change	O
the	O
world	O
,	O
but	O
GPT-3	B-General_Concept
is	O
just	O
a	O
very	O
early	O
glimpse.	O
"	O
</s>
<s>
GPT-3	B-General_Concept
'	O
s	O
builder	O
,	O
OpenAI	O
,	O
was	O
initially	O
founded	O
as	O
a	O
non-profit	O
in	O
2015	O
.	O
</s>
<s>
In	O
2019	O
,	O
OpenAI	O
broke	O
from	O
its	O
usual	O
open-source	O
standards	O
by	O
not	O
publicly	O
releasing	O
GPT-3	B-General_Concept
'	O
s	O
predecessor	O
model	O
,	O
citing	O
concerns	O
that	O
the	O
model	O
could	O
facilitate	O
the	O
propagation	O
of	O
fake	O
news	O
.	O
</s>
<s>
OpenAI	O
eventually	O
released	O
a	O
version	O
of	O
GPT-2	B-General_Concept
that	O
was	O
8%	O
of	O
the	O
original	O
model	O
's	O
size	O
.	O
</s>
<s>
In	O
2020	O
,	O
Microsoft	O
announced	O
the	O
company	O
had	O
exclusive	O
licensing	O
of	O
GPT-3	B-General_Concept
for	O
Microsoft	O
's	O
products	O
and	O
services	O
following	O
a	O
multi-billion	O
dollar	O
investment	O
in	O
OpenAI	O
.	O
</s>
<s>
The	O
agreement	O
permits	O
OpenAI	O
to	O
offer	O
a	O
public-facing	O
API	B-General_Concept
such	O
that	O
users	O
can	O
send	O
text	O
to	O
GPT-3	B-General_Concept
to	O
receive	O
the	O
model	O
's	O
output	O
,	O
but	O
only	O
Microsoft	O
will	O
have	O
access	O
to	O
GPT-3	B-General_Concept
'	O
s	O
source	O
code	O
.	O
</s>
<s>
Large	O
language	B-Language
models	I-Language
,	O
such	O
as	O
GPT-3	B-General_Concept
,	O
have	O
come	O
under	O
criticism	O
from	O
a	O
few	O
of	O
Google	O
's	O
AI	B-Application
ethics	O
researchers	O
for	O
the	O
environmental	O
impact	O
of	O
training	O
and	O
storing	O
the	O
models	O
,	O
detailed	O
in	O
a	O
paper	O
co-authored	O
by	O
Timnit	O
Gebru	O
and	O
Emily	O
M.	O
Bender	O
in	O
2021	O
.	O
</s>
<s>
The	O
growing	O
use	O
of	O
automated	O
writing	O
technologies	O
based	O
on	O
GPT-3	B-General_Concept
and	O
other	O
language	O
generators	O
has	O
raised	O
concerns	O
regarding	O
academic	O
integrity	O
and	O
raised	O
the	O
stakes	O
of	O
how	O
universities	O
and	O
schools	O
will	O
gauge	O
what	O
constitutes	O
academic	O
misconduct	O
such	O
as	O
plagiarism	O
.	O
</s>
<s>
TechCrunch	O
reports	O
this	O
training	O
data	O
includes	O
copyrighted	O
material	O
from	O
the	O
BBC	O
,	O
The	O
New	O
York	O
Times	O
,	O
Reddit	B-Application
,	O
the	O
full	O
text	O
of	O
online	O
books	O
,	O
and	O
more	O
.	O
</s>
<s>
In	O
its	O
response	O
to	O
a	O
2019	O
Request	O
for	O
Comments	O
on	O
Intellectual	O
Property	O
Protection	O
for	O
Artificial	B-Application
Intelligence	I-Application
Innovation	O
from	O
the	O
United	O
States	O
Patent	O
and	O
Trademark	O
Office	O
(	O
USPTO	O
)	O
,	O
OpenAI	O
argued	O
that	O
"	O
Under	O
current	O
law	O
,	O
training	O
AI	B-Application
systems	O
[	O
such	O
as	O
its	O
GPT	O
models	O
]	O
constitutes	O
fair	O
use	O
,	O
"	O
but	O
that	O
"	O
given	O
the	O
lack	O
of	O
case	O
law	O
on	O
point	O
,	O
OpenAI	O
and	O
other	O
AI	B-Application
developers	O
like	O
us	O
face	O
substantial	O
legal	O
uncertainty	O
and	O
compliance	O
costs.	O
"	O
</s>
