<s>
Wu	B-General_Concept
Dao	I-General_Concept
(	O
)	O
is	O
a	O
multimodal	O
artificial	B-Application
intelligence	I-Application
developed	O
by	O
the	O
Beijing	O
Academy	O
of	O
Artificial	B-Application
Intelligence	I-Application
(	O
BAAI	O
)	O
.	O
</s>
<s>
Wu	B-General_Concept
Dao	I-General_Concept
1.0	O
was	O
first	O
announced	O
on	O
January	O
11	O
,	O
2021	O
;	O
an	O
improved	O
version	O
,	O
Wu	B-General_Concept
Dao	I-General_Concept
2.0	O
,	O
was	O
announced	O
on	O
May	O
31	O
.	O
</s>
<s>
It	O
has	O
been	O
compared	O
to	O
GPT-3	B-General_Concept
,	O
and	O
is	O
built	O
on	O
a	O
similar	O
architecture	O
;	O
in	O
comparison	O
,	O
GPT-3	B-General_Concept
has	O
175	O
billion	O
parameters	O
—	O
variables	O
and	O
inputs	O
within	O
the	O
machine	O
learning	O
model	O
—	O
while	O
Wu	B-General_Concept
Dao	I-General_Concept
has	O
1.75	O
trillion	O
parameters	O
.	O
</s>
<s>
Wu	B-General_Concept
Dao	I-General_Concept
was	O
trained	O
on	O
4.9	O
terabytes	O
of	O
images	O
and	O
texts	O
(	O
which	O
included	O
1.2	O
terabytes	O
of	O
Chinese	O
text	O
and	O
1.2	O
terabytes	O
of	O
English	O
text	O
)	O
,	O
while	O
GPT-3	B-General_Concept
was	O
trained	O
on	O
45	O
terabytes	O
of	O
text	O
data	O
.	O
</s>
<s>
The	O
chairman	O
of	O
BAAI	O
said	O
that	O
Wu	B-General_Concept
Dao	I-General_Concept
was	O
an	O
attempt	O
to	O
"	O
create	O
the	O
biggest	O
,	O
most	O
powerful	O
AI	B-Application
model	O
possible	O
"	O
;	O
however	O
,	O
direct	O
comparisons	O
between	O
models	O
based	O
on	O
parameter	O
count	O
(	O
i.e.	O
between	O
Wu	B-General_Concept
Dao	I-General_Concept
and	O
GPT-3	B-General_Concept
)	O
do	O
not	O
directly	O
correlate	O
with	O
quality	O
.	O
</s>
<s>
Wu	B-General_Concept
Dao	I-General_Concept
2.0	O
was	O
called	O
"	O
the	O
biggest	O
language	O
A.I.	B-Application
"	O
.	O
</s>
<s>
Notably	O
,	O
the	O
type	O
of	O
architecture	O
used	O
for	O
Wu	B-General_Concept
Dao	I-General_Concept
2.0	O
is	O
a	O
mixture-of-experts	O
(	O
MoE	O
)	O
model	O
,	O
unlike	O
GPT-3	B-General_Concept
,	O
which	O
is	O
a	O
"	O
dense	O
"	O
model	O
:	O
while	O
MoE	O
models	O
require	O
much	O
less	O
computational	O
power	O
to	O
train	O
than	O
dense	O
models	O
with	O
the	O
same	O
numbers	O
of	O
parameters	O
,	O
trillion-parameter	O
MoE	O
models	O
have	O
shown	O
comparable	O
performance	O
to	O
models	O
that	O
are	O
hundreds	O
of	O
times	O
smaller	O
.	O
</s>
<s>
Wu	B-General_Concept
Dao	I-General_Concept
's	O
creators	O
demonstrated	O
its	O
ability	O
to	O
perform	O
natural	B-Language
language	I-Language
processing	I-Language
and	O
image	O
recognition	O
,	O
in	O
addition	O
to	O
generation	O
of	O
text	O
and	O
images	O
.	O
</s>
<s>
Wu	B-General_Concept
Dao	I-General_Concept
also	O
showed	O
off	O
its	O
ability	O
to	O
power	O
virtual	O
idols	O
(	O
with	O
a	O
little	O
help	O
from	O
Microsoft-spinoff	O
Xiaoice	B-Protocol
)	O
and	O
predict	O
the	O
3D	O
structures	O
of	O
proteins	O
like	O
AlphaFold	B-Application
.	O
</s>
<s>
Wu	B-General_Concept
Dao	I-General_Concept
's	O
development	O
began	O
in	O
October	O
2020	O
,	O
several	O
months	O
after	O
the	O
May	O
2020	O
release	O
of	O
GPT-3	B-General_Concept
.	O
</s>
<s>
The	O
first	O
iteration	O
of	O
the	O
model	O
,	O
Wu	B-General_Concept
Dao	I-General_Concept
1.0	O
,	O
"	O
initiated	O
large-scale	O
research	O
projects	O
"	O
via	O
four	O
related	O
models	O
.	O
</s>
<s>
Wu	B-General_Concept
Dao	I-General_Concept
–	O
Wen	O
Yuan	O
,	O
a	O
2.6-billion-parameter	O
pretrained	O
language	O
model	O
,	O
was	O
designed	O
for	O
tasks	O
like	O
open-domain	O
question	O
answering	O
,	O
sentiment	O
analysis	O
,	O
and	O
grammar	O
correction	O
.	O
</s>
<s>
Wu	B-General_Concept
Dao	I-General_Concept
–	O
Wen	O
Lan	O
,	O
a	O
1-billion-parameter	O
multimodal	O
graphic	O
model	O
,	O
was	O
trained	O
on	O
50	O
million	O
image-text	O
pairs	O
to	O
perform	O
image	O
captioning	O
.	O
</s>
<s>
Wu	B-General_Concept
Dao	I-General_Concept
–	O
Wen	O
Hui	O
,	O
an	O
11.3-billion-parameter	O
generative	O
language	O
model	O
,	O
was	O
designed	O
for	O
"	O
essential	O
problems	O
in	O
general	O
artificial	B-Application
intelligence	I-Application
from	O
a	O
cognitive	O
perspective	O
"	O
;	O
Synced	O
says	O
that	O
it	O
can	O
"	O
generate	O
poetry	O
,	O
make	O
videos	O
,	O
draw	O
pictures	O
,	O
retrieve	O
text	O
,	O
perform	O
complex	O
reasoning	O
,	O
etc	O
"	O
.	O
</s>
<s>
Wu	B-General_Concept
Dao	I-General_Concept
–	O
Wen	O
Su	O
,	O
based	O
on	O
Google	B-Application
's	I-Application
BERT	O
language	O
model	O
and	O
trained	O
on	O
the	O
100-gigabyte	O
UniParc	O
database	O
(	O
as	O
well	O
as	O
thousands	O
of	O
gene	O
sequences	O
)	O
,	O
was	O
designed	O
for	O
biomolecular	O
structure	O
prediction	O
and	O
protein	O
folding	O
tasks	O
.	O
</s>
<s>
WuDao	O
Corpora	O
(	O
also	O
written	O
as	O
WuDaoCorpora	O
)	O
,	O
as	O
of	O
version	O
2.0	O
,	O
was	O
a	O
large	O
dataset	O
constructed	O
for	O
training	O
Wu	B-General_Concept
Dao	I-General_Concept
2.0	O
.	O
</s>
<s>
Wu	B-General_Concept
Dao	I-General_Concept
2.0	O
was	O
trained	O
using	O
FastMoE	O
,	O
a	O
variant	O
of	O
the	O
mixture	O
of	O
experts	O
architecture	O
published	O
by	O
Google	B-Application
.	O
</s>
<s>
TheNextWeb	O
said	O
in	O
June	O
2021	O
that	O
"	O
details	O
as	O
to	O
exactly	O
how	O
Wu	B-General_Concept
Dao	I-General_Concept
was	O
trained	O
,	O
what	O
was	O
in	O
its	O
various	O
datasets	O
,	O
and	O
what	O
practical	O
applications	O
it	O
can	O
be	O
used	O
for	O
remain	O
scarce	O
"	O
.	O
</s>
<s>
OpenAI	O
's	O
policy	O
director	O
called	O
Wu	B-General_Concept
Dao	I-General_Concept
an	O
example	O
of	O
"	O
model	O
diffusion	O
"	O
,	O
a	O
neologism	O
describing	O
a	O
situation	O
in	O
which	O
multiple	O
entities	O
develop	O
models	O
similar	O
to	O
OpenAI	O
's	O
.	O
</s>
