<s>
A	O
foundation	B-General_Concept
model	I-General_Concept
is	O
a	O
large	O
artificial	B-Application
intelligence	I-Application
model	O
trained	O
on	O
a	O
vast	O
quantity	O
of	O
unlabeled	O
data	O
at	O
scale	O
(	O
usually	O
by	O
self-supervised	B-General_Concept
learning	I-General_Concept
)	O
,	O
resulting	O
in	O
a	O
model	O
that	O
can	O
be	O
adapted	O
to	O
a	O
wide	O
range	O
of	O
downstream	O
tasks	O
.	O
</s>
<s>
Foundation	B-General_Concept
models	I-General_Concept
have	O
helped	O
bring	O
about	O
a	O
major	O
transformation	O
in	O
how	O
AI	B-Application
systems	O
are	O
built	O
since	O
their	O
introduction	O
in	O
2018	O
.	O
</s>
<s>
Early	O
examples	O
of	O
foundation	B-General_Concept
models	I-General_Concept
were	O
pre-trained	O
large	O
language	O
models	O
(	O
LLMs	O
)	O
including	O
BERT	B-General_Concept
and	O
the	O
GPT-n	B-General_Concept
series	O
.	O
</s>
<s>
Subsequently	O
,	O
several	O
multimodal	O
foundation	B-General_Concept
models	I-General_Concept
have	O
been	O
produced	O
including	O
DALL-E	B-General_Concept
,	O
Flamingo	B-General_Concept
,	O
Florence	B-General_Concept
and	O
NOOR	B-General_Concept
.	O
</s>
<s>
The	O
Stanford	O
Institute	O
for	O
Human-Centered	O
Artificial	B-Application
Intelligence	I-Application
's	O
(	O
HAI	O
)	O
Center	O
for	O
Research	O
on	O
Foundation	B-General_Concept
Models	I-General_Concept
(	O
CRFM	O
)	O
coined	O
and	O
popularized	O
the	O
term	O
foundation	B-General_Concept
model	I-General_Concept
to	O
refer	O
to	O
"	O
any	O
model	O
that	O
is	O
trained	O
on	O
broad	O
data	O
(	O
generally	O
using	O
self-supervision	O
at	O
scale	O
)	O
that	O
can	O
be	O
adapted	O
(	O
e.g.	O
,	O
fine-tuned	O
)	O
to	O
a	O
wide	O
range	O
of	O
downstream	O
tasks	O
"	O
.	O
</s>
<s>
This	O
is	O
not	O
a	O
new	O
technique	O
in	O
itself	O
,	O
as	O
it	O
is	O
based	O
on	O
deep	O
neural	O
networks	O
and	O
self-supervised	B-General_Concept
learning	I-General_Concept
,	O
but	O
the	O
scale	O
at	O
which	O
it	O
has	O
been	O
developed	O
in	O
recent	O
years	O
,	O
and	O
the	O
potential	O
for	O
one	O
model	O
to	O
be	O
used	O
for	O
many	O
different	O
purposes	O
,	O
warrants	O
a	O
new	O
term	O
,	O
the	O
Stanford	O
group	O
argues	O
.	O
</s>
<s>
A	O
foundation	B-General_Concept
model	I-General_Concept
is	O
a	O
"	O
paradigm	O
for	O
building	O
AI	B-Application
systems	O
"	O
in	O
which	O
a	O
model	O
trained	O
on	O
a	O
large	O
amount	O
of	O
unlabeled	O
data	O
can	O
be	O
adapted	O
to	O
many	O
applications	O
.	O
</s>
<s>
Foundation	B-General_Concept
models	I-General_Concept
are	O
"	O
designed	O
to	O
be	O
adapted	O
(	O
e.g.	O
,	O
finetuned	O
)	O
to	O
various	O
downstream	O
cognitive	O
tasks	O
by	O
pre-training	O
on	O
broad	O
data	O
at	O
scale	O
"	O
.	O
</s>
<s>
Key	O
characteristics	O
of	O
foundation	B-General_Concept
models	I-General_Concept
are	O
emergence	O
and	O
homogenization	O
.	O
</s>
<s>
Since	O
foundation	B-General_Concept
models	I-General_Concept
are	O
pre-trained	O
on	O
a	O
massive	O
dataset	O
,	O
they	O
are	O
not	O
capable	O
of	O
handling	O
specific	O
"	O
personal	O
"	O
concepts	O
that	O
a	O
user	O
may	O
be	O
interested	O
in	O
.	O
</s>
<s>
A	O
series	O
of	O
methods	O
were	O
designed	O
to	O
augment	O
a	O
foundation	B-General_Concept
model	I-General_Concept
with	O
personal	O
,	O
specific	O
items	O
without	O
retraining	O
the	O
full	O
model	O
.	O
</s>
<s>
For	O
example	O
,	O
for	O
few-shot	O
image	B-General_Concept
retrieval	I-General_Concept
it	O
was	O
shown	O
how	O
to	O
adapt	O
a	O
vision-language	O
foundation	B-General_Concept
model	I-General_Concept
(	O
CLIP	B-General_Concept
)	O
by	O
adding	O
a	O
new	O
concept	O
to	O
its	O
vocabulary	O
.	O
</s>
<s>
For	O
text-to-image	B-General_Concept
generation	I-General_Concept
,	O
an	O
approach	O
called	O
textual	O
inversion	O
can	O
be	O
similarly	O
used	O
to	O
teach	O
the	O
system	O
a	O
new	O
concept	O
that	O
can	O
later	O
be	O
generated	O
in	O
conjunction	O
with	O
the	O
concepts	O
that	O
the	O
foundation	B-General_Concept
model	I-General_Concept
is	O
already	O
familiar	O
with	O
.	O
</s>
<s>
A	O
2021	O
arXiv	O
report	O
listed	O
foundation	B-General_Concept
models	I-General_Concept
'	O
capabilities	O
with	O
regard	O
to	O
"	O
language	O
,	O
vision	O
,	O
robotics	O
,	O
reasoning	O
,	O
and	O
human	O
interaction	O
"	O
,	O
technical	O
principles	O
,	O
such	O
as	O
"	O
model	O
architectures	O
,	O
training	O
procedures	O
,	O
data	O
,	O
systems	O
,	O
security	O
,	O
evaluation	O
,	O
and	O
theory	O
"	O
,	O
their	O
applications	O
,	O
for	O
example	O
in	O
law	O
,	O
healthcare	O
,	O
and	O
education	O
,	O
and	O
their	O
potential	O
impact	O
on	O
society	O
,	O
including	O
"	O
inequity	O
,	O
misuse	O
,	O
economic	O
and	O
environmental	O
impact	O
,	O
legal	O
and	O
ethical	O
considerations	O
"	O
.	O
</s>
<s>
An	O
article	O
about	O
foundation	B-General_Concept
models	I-General_Concept
in	O
The	O
Economist	O
notes	O
that	O
"	O
some	O
worry	O
that	O
the	O
technology	O
's	O
heedless	O
spread	O
will	O
further	O
concentrate	O
economic	O
and	O
political	O
power	O
"	O
.	O
</s>
