<s>
Generative	O
pre-trained	O
transformers	B-Algorithm
(	O
GPT	B-Language
)	O
are	O
a	O
family	O
of	O
large	O
language	O
models	O
(	O
LLMs	O
)	O
,	O
which	O
was	O
introduced	O
in	O
2018	O
by	O
the	O
American	O
artificial	B-Application
intelligence	I-Application
organization	O
OpenAI	O
.	O
</s>
<s>
GPT	B-Language
models	O
are	O
artificial	B-Architecture
neural	I-Architecture
networks	I-Architecture
that	O
are	O
based	O
on	O
the	O
transformer	B-Algorithm
architecture	O
,	O
pre-trained	O
on	O
large	O
datasets	O
of	O
unlabelled	O
text	O
,	O
and	O
able	O
to	O
generate	O
novel	O
human-like	O
text	O
.	O
</s>
<s>
As	O
of	O
2023	O
,	O
most	O
LLMs	O
have	O
these	O
characteristics	O
and	O
are	O
sometimes	O
referred	O
to	O
broadly	O
as	O
GPTs	B-Language
.	O
</s>
<s>
Between	O
2018	O
and	O
2023	O
,	O
OpenAI	O
released	O
four	O
major	O
numbered	O
GPT	B-Language
foundational	O
models	O
,	O
with	O
each	O
being	O
significantly	O
more	O
capable	O
than	O
the	O
previous	O
due	O
to	O
increased	O
size	O
(	O
number	O
of	O
trainable	O
parameters	O
)	O
and	O
training	O
.	O
</s>
<s>
The	O
GPT-3	B-General_Concept
model	O
(	O
2020	O
)	O
has	O
175	O
billion	O
parameters	O
and	O
was	O
trained	O
on	O
400	O
billion	O
tokens	O
of	O
text	O
.	O
</s>
<s>
OpenAI	O
declined	O
to	O
publish	O
the	O
size	O
or	O
training	O
details	O
of	O
its	O
GPT-4	O
model	O
(	O
2023	O
)	O
,	O
citing	O
"	O
the	O
competitive	O
landscape	O
and	O
the	O
safety	O
implications	O
of	O
large-scale	O
models	O
"	O
.	O
</s>
<s>
These	O
"	O
GPT-n	O
"	O
models	O
have	O
been	O
the	O
basis	O
for	O
various	O
other	O
products	O
and	O
technologies	O
,	O
including	O
models	O
fine-tuned	O
for	O
instruction	O
following	O
,	O
which	O
in	O
turn	O
power	O
the	O
ChatGPT	B-General_Concept
chatbot	O
service	O
.	O
</s>
<s>
The	O
term	O
"	O
GPT	B-Language
"	O
is	O
also	O
used	O
in	O
the	O
names	O
of	O
some	O
generative	O
LLMs	O
developed	O
by	O
others	O
,	O
such	O
as	O
a	O
series	O
of	O
GPT-3	B-General_Concept
inspired	O
models	O
created	O
by	O
EleutherAI	O
,	O
and	O
recently	O
a	O
series	O
of	O
seven	O
models	O
created	O
by	O
Cerebras	O
.	O
</s>
<s>
Companies	O
in	O
other	O
industries	O
(	O
e.g.	O
sales	O
,	O
finance	O
)	O
also	O
use	O
the	O
term	O
"	O
GPT	B-Language
"	O
in	O
the	O
names	O
of	O
their	O
services	O
involving	O
or	O
utilizing	O
a	O
GPT	B-Language
technology	O
,	O
like	O
"	O
EinsteinGPT	O
"	O
and	O
"	O
BloombergGPT	O
"	O
.	O
</s>
<s>
On	O
June	O
11	O
,	O
2018	O
,	O
OpenAI	O
published	O
a	O
paper	O
entitled	O
"	O
Improving	O
Language	O
Understanding	O
by	O
Generative	O
Pre-Training	O
,	O
"	O
in	O
which	O
it	O
introduced	O
the	O
first	O
GPT	B-Language
system	O
.	O
</s>
<s>
Up	O
to	O
that	O
point	O
,	O
the	O
best-performing	O
neural	O
NLP	B-Language
(	O
natural	B-Language
language	I-Language
processing	I-Language
)	O
models	O
mostly	O
employed	O
supervised	B-General_Concept
learning	I-General_Concept
from	O
large	O
amounts	O
of	O
manually-labeled	O
data	O
.	O
</s>
<s>
The	O
reliance	O
on	O
supervised	B-General_Concept
learning	I-General_Concept
limited	O
their	O
use	O
on	O
datasets	O
that	O
were	O
not	O
well-annotated	O
,	O
and	O
also	O
made	O
it	O
prohibitively	O
expensive	O
and	O
time-consuming	O
to	O
train	O
extremely	O
large	O
language	O
models	O
.	O
</s>
<s>
The	O
particular	O
semi-supervised	B-General_Concept
approach	O
OpenAI	O
employed	O
to	O
make	O
a	O
large	O
scale	O
generative	O
system	O
(	O
and	O
was	O
first	O
to	O
do	O
with	O
a	O
transformer	B-Algorithm
model	O
)	O
involved	O
two	O
stages	O
:	O
an	O
unsupervised	B-General_Concept
generative	O
"	O
pre-training	O
"	O
stage	O
to	O
set	O
initial	O
parameters	O
using	O
a	O
language	O
modeling	O
objective	O
,	O
and	O
a	O
supervised	B-General_Concept
discriminative	O
"	O
fine-tuning	O
"	O
stage	O
to	O
adapt	O
these	O
parameters	O
to	O
a	O
target	O
task	O
.	O
</s>
<s>
+	O
GPT	B-Language
foundational	O
modelsModelArchitectureParameter	O
countTraining	O
dataRelease	O
dateOriginal	O
GPT	B-Language
(	O
GPT-1	O
)	O
12-level	O
,	O
12-headed	O
Transformer	B-Algorithm
decoder	O
(	O
no	O
encoder	O
)	O
,	O
followed	O
by	O
linear-softmax	O
.	O
117	O
million	O
BookCorpus	O
:	O
4.5	O
GB	O
of	O
text	O
,	O
from	O
7000	O
unpublished	O
books	O
of	O
various	O
genres	O
.	O
GPT-2	O
GPT-1	O
,	O
but	O
with	O
modified	O
normalization	O
1.5	O
billion	O
WebText	O
:	O
40	O
GB	O
of	O
text	O
,	O
8	O
million	O
documents	O
,	O
from	O
45	O
million	O
webpages	O
upvoted	O
on	O
Reddit	O
.	O
GPT-3	O
GPT-2	O
,	O
but	O
with	O
modification	O
to	O
allow	O
larger	O
scaling	O
175	O
billion	O
570	O
GB	O
plaintext	O
,	O
0.4	O
trillion	O
tokens	O
.	O
</s>
<s>
(	O
then	O
March	O
15	O
,	O
2022	O
,	O
for	O
a	O
revision	O
ultimately	O
termed	O
GPT-3.5	O
)	O
GPT-4	O
Also	O
trained	O
with	O
both	O
text	O
prediction	O
and	O
RLHF	O
;	O
accepts	O
both	O
text	O
and	O
images	O
as	O
input	O
.	O
</s>
<s>
In	O
January	O
2022	O
,	O
OpenAI	O
introduced	O
InstructGPT	O
,	O
a	O
series	O
of	O
models	O
which	O
were	O
fine-tuned	O
to	O
follow	O
instructions	O
using	O
a	O
combination	O
of	O
supervised	B-General_Concept
training	O
and	O
reinforcement	O
learning	O
from	O
human	O
feedback	O
(	O
RLHF	O
)	O
on	O
base	O
GPT-3	B-General_Concept
language	O
models	O
.	O
</s>
<s>
In	O
November	O
2022	O
,	O
OpenAI	O
launched	O
ChatGPT	B-General_Concept
,	O
an	O
online	O
chat	O
interface	O
powered	O
by	O
an	O
instruction-tuned	O
language	O
model	O
trained	O
in	O
a	O
similar	O
fashion	O
to	O
InstructGPT	O
.	O
</s>
