<s>
Nvidia	B-Operating_System
DGX	I-Operating_System
is	O
a	O
line	O
of	O
Nvidia-produced	O
servers	O
and	O
workstations	O
which	O
specialize	O
in	O
using	O
GPGPU	B-Architecture
to	O
accelerate	O
deep	B-Algorithm
learning	I-Algorithm
applications	O
.	O
</s>
<s>
The	O
typical	O
design	O
of	O
a	O
DGX	O
system	O
is	O
based	O
upon	O
a	O
rackmount	B-Application
chassis	O
with	O
motherboard	B-Device
that	O
carries	O
high	O
performance	O
x86	B-Operating_System
server	O
CPUs	O
(	O
typically	O
Intel	B-Device
Xeons	I-Device
,	O
with	O
the	O
exception	O
of	O
the	O
DGX	O
A100	O
and	O
DGX	O
Station	O
A100	O
,	O
which	O
both	O
utilize	O
AMD	O
EPYC	O
CPUs	O
)	O
.	O
</s>
<s>
The	O
main	O
component	O
of	O
a	O
DGX	O
system	O
is	O
a	O
set	O
of	O
4	O
to	O
16	O
Nvidia	B-Device
Tesla	I-Device
GPU	B-Architecture
modules	O
on	O
an	O
independent	O
system	B-Device
board	I-Device
.	O
</s>
<s>
The	O
GPU	B-Architecture
modules	O
are	O
typically	O
integrated	O
into	O
the	O
system	O
using	O
a	O
version	O
of	O
the	O
SXM	B-Application
socket	I-Application
.	O
</s>
<s>
DGX-1	O
servers	O
feature	O
8	O
GPUs	B-Architecture
based	O
on	O
the	O
Pascal	B-General_Concept
or	O
Volta	B-General_Concept
daughter	B-Device
cards	I-Device
with	O
128GB	O
of	O
total	O
HBM2	O
memory	O
,	O
connected	O
by	O
an	O
NVLink	O
mesh	B-Architecture
network	I-Architecture
.	O
</s>
<s>
All	O
models	O
are	O
based	O
on	O
a	O
dual	O
socket	O
configuration	O
of	O
Intel	B-Device
Xeon	I-Device
E5	O
CPUs	O
,	O
and	O
are	O
equipped	O
with	O
the	O
following	O
features	O
.	O
</s>
<s>
The	O
product	O
line	O
is	O
intended	O
to	O
bridge	O
the	O
gap	O
between	O
GPUs	B-Architecture
and	O
AI	B-General_Concept
accelerators	I-General_Concept
in	O
that	O
the	O
device	O
has	O
specific	O
features	O
specializing	O
it	O
for	O
deep	B-Algorithm
learning	I-Algorithm
workloads	O
.	O
</s>
<s>
The	O
initial	O
Pascal	B-General_Concept
based	O
DGX-1	O
delivered	O
170	O
teraflops	O
of	O
half	O
precision	O
processing	O
,	O
while	O
the	O
Volta-based	O
upgrade	O
increased	O
this	O
to	O
960	O
teraflops	O
.	O
</s>
<s>
The	O
DGX-1	O
was	O
first	O
available	O
only	O
with	O
the	O
Pascal	B-General_Concept
based	O
configuration	O
,	O
with	O
the	O
first	O
generation	O
SXM	B-Application
socket	I-Application
.	O
</s>
<s>
The	O
later	O
revision	O
of	O
the	O
DGX-1	O
offered	O
support	O
for	O
first	O
generation	O
Volta	B-General_Concept
cards	O
via	O
the	O
SXM-2	O
socket	O
.	O
</s>
<s>
Nvidia	O
offered	O
upgrade	O
kits	O
that	O
allowed	O
users	O
with	O
a	O
Pascal	B-General_Concept
based	O
DGX-1	O
to	O
upgrade	O
to	O
a	O
Volta	B-General_Concept
based	O
DGX-1	O
.	O
</s>
<s>
The	O
Pascal	B-General_Concept
based	O
DGX-1	O
has	O
two	O
variants	O
,	O
one	O
with	O
an	O
Intel	B-Device
Xeon	I-Device
E5-2698	O
V3	O
,	O
and	O
one	O
with	O
an	O
E5-2698	O
V4	O
.	O
</s>
<s>
The	O
Volta	B-General_Concept
based	O
DGX-1	O
is	O
equipped	O
with	O
an	O
E5-2698	O
V4	O
and	O
was	O
priced	O
at	O
launch	O
at	O
$	O
149,000	O
.	O
</s>
<s>
Designed	O
as	O
a	O
turnkey	O
deskside	O
AI	O
supercomputer	B-Architecture
,	O
the	O
DGX	O
Station	O
is	O
a	O
tower	B-General_Concept
computer	I-General_Concept
that	O
can	O
function	O
completely	O
independently	O
without	O
typical	O
datacenter	O
infrastructure	O
such	O
as	O
cooling	O
,	O
redundant	O
power	O
,	O
or	O
19	B-Application
inch	I-Application
racks	I-Application
.	O
</s>
<s>
This	O
,	O
among	O
other	O
features	O
,	O
made	O
this	O
system	O
a	O
compelling	O
purchase	O
for	O
customers	O
without	O
the	O
infrastructure	O
to	O
run	O
rackmount	B-Application
DGX	O
systems	O
,	O
which	O
can	O
be	O
loud	O
,	O
output	O
a	O
lot	O
of	O
heat	O
,	O
and	O
take	O
up	O
a	O
large	O
area	O
.	O
</s>
<s>
This	O
was	O
Nvidia	O
's	O
first	O
venture	O
into	O
bringing	O
high	B-Architecture
performance	I-Architecture
computing	I-Architecture
deskside	O
,	O
which	O
has	O
since	O
remained	O
a	O
prominent	O
marketing	O
strategy	O
for	O
Nvidia	O
.	O
</s>
<s>
The	O
successor	O
of	O
the	O
Nvidia	B-Operating_System
DGX-1	I-Operating_System
is	O
the	O
Nvidia	B-Operating_System
DGX-2	I-Operating_System
,	O
which	O
uses	O
sixteen	O
Volta-based	O
V100	O
32GB	O
(	O
second	O
generation	O
)	O
cards	O
in	O
a	O
single	O
unit	O
.	O
</s>
<s>
Also	O
present	O
are	O
eight	O
100Gb/s	O
InfiniBand	B-Architecture
cards	O
and	O
30.72	O
TB	O
of	O
SSD	B-Device
storage	O
,	O
all	O
enclosed	O
within	O
a	O
massive	O
10U	O
rackmount	B-Application
chassis	O
and	O
drawing	O
up	O
to	O
10kW	O
under	O
maximum	O
load	O
.	O
</s>
<s>
The	O
DGX-2	O
differs	O
from	O
other	O
DGX	O
models	O
in	O
that	O
it	O
contains	O
two	O
separate	O
GPU	B-Architecture
daughterboards	O
,	O
each	O
with	O
eight	O
GPUs	B-Architecture
.	O
</s>
<s>
These	O
boards	O
are	O
connected	O
by	O
an	O
NVSwitch	O
system	O
that	O
allows	O
for	O
full	O
bandwidth	O
communication	O
across	O
all	O
GPUs	B-Architecture
in	O
the	O
system	O
,	O
without	O
additional	O
latency	O
between	O
boards	O
.	O
</s>
<s>
The	O
DGX-2H	O
replaced	O
the	O
DGX-2	O
's	O
dual	O
Intel	B-Device
Xeon	I-Device
Platinum	I-Device
8168	O
's	O
with	O
upgraded	O
dual	O
Intel	B-Device
Xeon	I-Device
Platinum	I-Device
8174	O
's	O
.	O
</s>
<s>
The	O
DGX	O
A100	O
was	O
the	O
3rd	O
generation	O
of	O
DGX	O
server	O
,	O
including	O
8	O
Ampere-based	O
A100	O
accelerators	O
.	O
</s>
<s>
Also	O
included	O
are	O
15TB	O
of	O
PCIe	O
gen	O
4	O
NVMe	B-Application
storage	O
,	O
1	O
TB	O
of	O
RAM	B-Architecture
,	O
and	O
eight	O
Mellanox-powered	O
200Gb/s	O
HDR	O
InfiniBand	B-Architecture
ConnectX-6	O
NICs	B-Protocol
.	O
</s>
<s>
The	O
DGX	O
A100	O
also	O
moved	O
to	O
an	O
AMD	O
EPYC	O
7742	O
CPU	O
,	O
the	O
first	O
DGX	O
server	O
to	O
not	O
be	O
built	O
with	O
an	O
Intel	B-Device
Xeon	I-Device
CPU	I-Device
.	O
</s>
<s>
As	O
the	O
successor	O
to	O
the	O
original	O
DGX	O
Station	O
,	O
the	O
DGX	O
Station	O
A100	O
aims	O
to	O
fill	O
the	O
same	O
niche	O
as	O
the	O
DGX	O
Station	O
in	O
being	O
a	O
quiet	O
,	O
efficient	O
,	O
turnkey	O
cluster-in-a-box	B-Architecture
solution	O
that	O
can	O
be	O
purchased	O
,	O
leased	O
,	O
or	O
rented	O
by	O
smaller	O
companies	O
or	O
individuals	O
who	O
want	O
to	O
utilize	O
machine	O
learning	O
.	O
</s>
<s>
It	O
follows	O
many	O
of	O
the	O
design	O
choices	O
of	O
the	O
original	O
DGX	O
Station	O
,	O
such	O
as	O
the	O
tower	B-General_Concept
orientation	O
,	O
a	O
single	O
socket	O
CPU	O
mainboard	B-Device
,	O
a	O
new	O
refrigerant-based	O
cooling	O
system	O
,	O
and	O
a	O
reduced	O
number	O
of	O
accelerators	O
compared	O
to	O
the	O
corresponding	O
rackmount	B-Application
DGX	O
A100	O
of	O
the	O
same	O
generation	O
.	O
</s>
<s>
Four	O
Ampere-based	O
A100	O
accelerators	O
are	O
included	O
,	O
configured	O
with	O
40GB	O
(	O
HBM	O
)	O
or	O
80GB	O
(	O
HBM2e	O
)	O
memory	O
,	O
thus	O
giving	O
a	O
total	O
of	O
160GB	O
or	O
320GB	O
,	O
resulting	O
in	O
the	O
DGX	O
Station	O
A100	O
variants	O
160G	O
or	O
320G	O
.	O
</s>
<s>
Announced	O
March	O
22	O
,	O
2022	O
and	O
planned	O
for	O
release	O
in	O
Q3	O
2022	O
,	O
the	O
DGX	O
H100	O
is	O
the	O
4th	O
generation	O
of	O
DGX	O
servers	O
,	O
built	O
with	O
8	O
Hopper-based	O
H100	O
accelerators	O
,	O
for	O
a	O
total	O
of	O
32	O
PFLOPs	O
of	O
FP8	B-Algorithm
AI	O
compute	O
and	O
640GB	O
of	O
HBM3	O
Memory	O
,	O
an	O
upgrade	O
over	O
the	O
DGX	O
A100	O
's	O
HBM2	O
memory	O
.	O
</s>
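A minimal sketch checking the DGX H100 arithmetic above; the per-GPU figures are derived from the stated totals and are not given in the text:

```python
# Derive per-GPU figures from the stated DGX H100 totals
# (8 H100 accelerators, 32 PFLOPs of FP8, 640GB of HBM3).
gpus = 8
total_fp8_pflops = 32
total_hbm3_gb = 640

fp8_pflops_per_gpu = total_fp8_pflops / gpus  # 4.0 PFLOPs of FP8 per H100
hbm3_gb_per_gpu = total_hbm3_gb / gpus        # 80.0 GB of HBM3 per H100
print(fp8_pflops_per_gpu, hbm3_gb_per_gpu)
```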
<s>
The	O
DGX	O
H100	O
increases	O
the	O
rackmount	B-Application
size	O
to	O
8U	O
to	O
accommodate	O
the	O
700W	O
TDP	O
of	O
each	O
H100	O
SXM	B-Application
card	O
.	O
</s>
<s>
The	O
DGX	O
H100	O
also	O
has	O
two	O
1.92TB	O
SSDs	B-Device
for	O
Operating	B-General_Concept
System	I-General_Concept
storage	O
,	O
and	O
30.72	O
TB	O
of	O
solid	B-Device
state	I-Device
storage	I-Device
for	O
application	O
data	O
.	O
</s>
<s>
One	O
more	O
notable	O
addition	O
is	O
the	O
presence	O
of	O
two	O
Nvidia	B-Device
Bluefield	I-Device
3	O
DPUs	B-General_Concept
,	O
and	O
the	O
upgrade	O
to	O
400Gb/s	O
InfiniBand	B-Architecture
via	O
Mellanox	O
ConnectX-7	O
NICs	B-Protocol
,	O
double	O
the	O
bandwidth	O
of	O
the	O
DGX	O
A100	O
.	O
</s>
<s>
This	O
gives	O
the	O
DGX	O
H100	O
3.2Tb/s	O
of	O
fabric	O
bandwidth	O
across	O
InfiniBand	B-Architecture
.	O
</s>
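The 3.2Tb/s figure is consistent with eight 400Gb/s links; a quick check, assuming the DGX H100 keeps the DGX A100's count of eight InfiniBand NICs:

```python
# Fabric bandwidth check: eight ConnectX-7 NICs at 400Gb/s each.
# The eight-NIC count is carried over from the DGX A100 description above.
nics = 8
gbps_per_nic = 400
fabric_tbps = nics * gbps_per_nic / 1000  # 3.2 Tb/s of fabric bandwidth
print(fabric_tbps)
```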
<s>
The	O
DGX	O
H100	O
has	O
two	O
currently	O
unspecified	O
4th	O
generation	O
Xeon	B-Device
Scalable	O
CPUs	O
(	O
codenamed	O
Sapphire	B-Device
Rapids	I-Device
)	O
and	O
2	O
Terabytes	O
of	O
System	B-Architecture
Memory	I-Architecture
.	O
</s>
<s>
The	O
DGX	O
Superpod	O
is	O
a	O
high	O
performance	O
turnkey	O
supercomputer	B-Architecture
solution	O
provided	O
by	O
Nvidia	O
using	O
DGX	O
hardware	O
.	O
</s>
<s>
This	O
tightly	O
integrated	O
system	O
combines	O
high	O
performance	O
DGX	O
compute	O
nodes	O
with	O
fast	O
storage	O
and	O
high	O
bandwidth	O
networking	B-Architecture
to	O
provide	O
a	O
unique	O
plug	O
&	O
play	O
solution	O
to	O
extremely	O
high	O
demand	O
machine	O
learning	O
workloads	O
.	O
</s>
<s>
The	O
Selene	B-Device
Supercomputer	I-Device
,	O
at	O
the	O
Argonne	O
National	O
Laboratory	O
,	O
is	O
one	O
example	O
of	O
a	O
DGX	O
SuperPod	O
based	O
system	O
.	O
</s>
<s>
Selene	B-Device
,	O
built	O
from	O
280	O
DGX	O
A100	O
nodes	O
,	O
ranked	O
5th	O
on	O
the	O
Top500	B-Operating_System
list	O
for	O
most	O
powerful	O
supercomputers	B-Architecture
at	O
the	O
time	O
of	O
its	O
completion	O
,	O
and	O
has	O
continued	O
to	O
remain	O
high	O
in	O
performance	O
.	O
</s>
<s>
This	O
same	O
integration	O
is	O
available	O
to	O
any	O
customer	O
with	O
minimal	O
effort	O
on	O
their	O
behalf	O
,	O
and	O
the	O
new	O
Hopper	B-General_Concept
based	O
SuperPod	O
can	O
scale	O
to	O
32	O
DGX	O
H100	O
nodes	O
,	O
for	O
a	O
total	O
of	O
256	O
H100	O
GPUs	B-Architecture
and	O
64	O
x86	B-Operating_System
CPUs	O
.	O
</s>
<s>
This	O
gives	O
the	O
complete	O
SuperPod	O
20TB	O
of	O
HBM3	O
memory	O
,	O
70.4	O
TB/s	O
of	O
bisection	O
bandwidth	O
,	O
and	O
up	O
to	O
1	O
ExaFLOP	O
of	O
FP8	B-Algorithm
AI	O
compute	O
.	O
</s>
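A worked check of the SuperPod totals, assuming 8 GPUs and 2 CPUs per DGX H100 node and 80GB of HBM3 per H100, as described above:

```python
# Hopper-based SuperPod totals derived from per-node figures.
nodes = 32
gpus = nodes * 8                # 256 H100 GPUs
cpus = nodes * 2                # 64 x86 CPUs
hbm3_tb = gpus * 80 / 1024      # 20.0 TB of HBM3 (80GB per H100)
fp8_eflops = nodes * 32 / 1000  # ~1 EFLOP of FP8 (32 PFLOPs per node)
print(gpus, cpus, hbm3_tb, fp8_eflops)
```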
<s>
These	O
SuperPods	O
can	O
then	O
be	O
further	O
joined	O
to	O
create	O
even	O
larger	O
supercomputers	B-Architecture
.	O
</s>
<s>
The	O
upcoming	O
Eos	O
supercomputer	B-Architecture
,	O
designed	O
,	O
built	O
,	O
and	O
operated	O
by	O
Nvidia	O
,	O
will	O
be	O
constructed	O
of	O
18	O
H100	O
based	O
SuperPods	O
,	O
totaling	O
576	O
DGX	O
H100	O
systems	O
,	O
500	O
Quantum-2	O
InfiniBand	B-Architecture
switches	O
,	O
and	O
360	O
NVLink	O
switches	O
.	O
</s>
<s>
This	O
will	O
allow	O
Eos	O
to	O
deliver	O
18	O
EFLOPs	O
of	O
FP8	B-Algorithm
compute	O
,	O
and	O
9	O
EFLOPs	O
of	O
FP16	O
compute	O
,	O
making	O
Eos	O
the	O
fastest	O
AI	O
supercomputer	B-Architecture
in	O
the	O
world	O
.	O
</s>
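The Eos figures above can be reproduced from the per-system numbers; a sketch assuming 32 PFLOPs of FP8 per DGX H100 and an FP16 rate of half the FP8 rate:

```python
# Eos totals derived from the SuperPod count and per-system figures.
superpods = 18
systems = superpods * 32          # 576 DGX H100 systems
fp8_eflops = systems * 32 / 1000  # ~18 EFLOPs of FP8
fp16_eflops = fp8_eflops / 2      # ~9 EFLOPs of FP16 (half the FP8 rate)
print(systems, round(fp8_eflops), round(fp16_eflops))
```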
