A deep learning processor (DLP), or deep learning accelerator, is an electronic circuit designed for deep learning algorithms, usually with separate data memory and a dedicated instruction set architecture. Deep learning processors range from mobile devices, such as the neural processing units (NPUs) in Huawei cellphones, to cloud computing servers, such as the tensor processing units (TPUs) in the Google Cloud Platform.

The goal of DLPs is to provide higher efficiency and performance for deep learning algorithms than general-purpose central processing units (CPUs) and graphics processing units (GPUs) would. Most DLPs employ a large number of computing components to leverage high data-level parallelism, a relatively large on-chip buffer/memory to leverage data reuse patterns, and limited-data-width operators to exploit the error resilience of deep learning.

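The limited-data-width point can be made concrete with a small sketch of symmetric INT8 quantization (in Python with NumPy; the function names are illustrative, not from any particular library). Narrowing values from 32 bits to 8 introduces only a small rounding error, which deep learning models typically tolerate:

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor quantization of float32 values to INT8.

    Returns the quantized tensor and the scale needed to dequantize.
    Illustrative sketch of why narrow operators suffice: the rounding
    error introduced here is small relative to the value range.
    """
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)

# Worst-case rounding error is half a quantization step (scale / 2),
# so the relative error stays below about 1/254 despite using 8 bits.
rel_err = np.max(np.abs(w - w_hat)) / np.max(np.abs(w))
```

This is why DLPs can afford INT8 (or narrower) multipliers, which are far cheaper in area and energy than FP32 units.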
Deep learning processors differ from AI accelerators in that they are specialized for running learning algorithms, while AI accelerators are typically more specialized for inference. However, the two terms (DLP vs. AI accelerator) are not used rigorously, and there is often overlap between the two.

Initially, general-purpose CPUs were adopted to perform deep learning algorithms. For example, in 2012, Alex Krizhevsky adopted two GPUs to train a deep learning network, AlexNet, which won the ILSVRC-2012 competition. As interest in deep learning algorithms and DLPs kept increasing, GPU manufacturers began to add deep learning related features in both hardware (e.g., INT8 operators) and software (e.g., the cuDNN library). For example, Nvidia released the Turing Tensor Core, a DLP, to accelerate deep learning processing.

…of the accepted papers are architecture designs about deep learning.

With the rapid evolution of deep learning algorithms and DLPs, many architectures have been explored. Regarding the computation component, because most operations in deep learning can be aggregated into vector operations, the most common way to build computation components in digital DLPs is the MAC-based (multiply-accumulate) organization, either with vector MACs or scalar MACs. Rather than the SIMD or SIMT of general-purpose processors, deep-learning domain-specific parallelism is better exploited in these MAC-based organizations.

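As a minimal sketch of what a MAC-based organization computes, the following Python model expresses a fully connected layer as chains of multiply-accumulate operations. In hardware the outer loop would be parallel MAC lanes and the inner loop a vector MAC firing each cycle; the names and structure here are illustrative assumptions, not any vendor's design:

```python
import numpy as np

def scalar_mac(acc, a, b):
    # One scalar MAC unit: multiply two operands, accumulate into a register.
    return acc + a * b

def vector_mac_layer(inputs, weights):
    """Fully connected layer computed the way a MAC-based DLP would:
    each output neuron is produced by a chain of multiply-accumulates.

    inputs:  (n_in,)        activation vector
    weights: (n_out, n_in)  weight matrix
    """
    n_out, n_in = weights.shape
    out = np.zeros(n_out)
    for o in range(n_out):        # in hardware: parallel MAC lanes
        acc = 0.0
        for i in range(n_in):     # in hardware: one vector MAC per cycle
            acc = scalar_mac(acc, inputs[i], weights[o, i])
        out[o] = acc
    return out

x = np.array([1.0, 2.0, 3.0])
W = np.array([[1.0, 0.0, 0.0],
              [0.5, 0.5, 0.5]])
y = vector_mac_layer(x, W)   # same result as W @ x
```

The domain-specific parallelism mentioned above is visible here: every output lane runs the same MAC chain independently, with no need for the general-purpose control machinery of SIMD/SIMT.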
Regarding the memory hierarchy, because deep learning algorithms require high bandwidth to supply the computation component with sufficient data, DLPs usually employ a relatively large on-chip buffer (tens of kilobytes to several megabytes), together with dedicated on-chip data reuse and data exchange strategies, to alleviate the burden on memory bandwidth.

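The data reuse idea can be sketched as loop tiling: a blocked matrix multiply in which each tile, once resident in the on-chip buffer, is reused many times before eviction. The `tile` parameter below is a hypothetical stand-in for buffer capacity, not a real device's dimension:

```python
import numpy as np

def tiled_matmul(A, B, tile=2):
    """Blocked matrix multiply, modeling the tiling DLPs use so that a
    tile fetched into the on-chip buffer is reused across many partial
    products before being evicted, reducing off-chip bandwidth demand.
    """
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for k0 in range(0, k, tile):
                # These slices model tiles held in the on-chip buffer;
                # each A tile is reused for every j0, and each B tile
                # for every i0, instead of being refetched per element.
                C[i0:i0+tile, j0:j0+tile] += (
                    A[i0:i0+tile, k0:k0+tile] @ B[k0:k0+tile, j0:j0+tile]
                )
    return C

A = np.arange(16.0).reshape(4, 4)
B = np.eye(4)
C = tiled_matmul(A, B)   # matches A @ B
```

The result is numerically identical to an untiled multiply; only the order of data movement changes, which is exactly the knob the on-chip buffer exploits.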
Instead of the caches widely used in general-purpose processors, DLPs typically use scratchpad memory, which provides higher data reuse opportunities by leveraging the relatively regular data access patterns of deep learning algorithms.

Regarding the control logic, because deep learning algorithms keep evolving at a dramatic speed, DLPs have begun to leverage dedicated ISAs (instruction set architectures) to support the deep learning domain flexibly. At first, DianNao used a VLIW-style instruction set in which each instruction could finish a layer of a DNN. Cambricon introduced the first deep-learning domain-specific ISA, which could support more than ten different deep learning algorithms.

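As an illustration of layer-granularity, domain-specific instructions in the spirit of DianNao and Cambricon, the toy interpreter below executes whole-layer opcodes. The mnemonics (`FC`, `RELU`, `POOL`) are invented for this sketch and are not the actual instruction encodings of either chip:

```python
import numpy as np

# A toy, hypothetical layer-granularity ISA: each instruction performs a
# whole DNN layer rather than one scalar operation, in the VLIW-style
# spirit described in the text.

def run_program(program, x):
    """Execute a list of (opcode, operand) layer instructions on input x."""
    for op, arg in program:
        if op == "FC":        # fully connected layer: x <- W @ x
            x = arg @ x
        elif op == "RELU":    # activation layer: x <- max(x, 0)
            x = np.maximum(x, 0.0)
        elif op == "POOL":    # average pooling over groups of size `arg`
            x = x.reshape(-1, arg).mean(axis=1)
        else:
            raise ValueError(f"unknown opcode {op}")
    return x

W1 = np.array([[1.0, -1.0],
               [2.0,  0.0],
               [0.0,  1.0],
               [-1.0, 1.0]])
program = [("FC", W1), ("RELU", None), ("POOL", 2)]
x = np.array([1.0, 2.0])
y = run_program(program, x)
```

Because each instruction covers an entire layer, a short program suffices to describe a full network, which is what makes a layer-level, domain-specific ISA both compact and flexible across algorithms.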
Besides DLPs, GPUs and FPGAs are also used as accelerators to speed up the execution of deep learning algorithms. For example, Summit, a supercomputer from IBM for Oak Ridge National Laboratory, contains 27,648 Nvidia Tesla V100 cards, which can be used to accelerate deep learning algorithms. Microsoft builds its deep learning platform on FPGAs in its Azure cloud to support real-time deep learning services.

In Table 2 we compare DLPs against GPUs and FPGAs in terms of target, performance, energy efficiency, and flexibility.

Atomically thin semiconductors are considered promising for energy-efficient deep learning hardware in which the same basic device structure is used for both logic operations and data storage.

One research group published experiments with a large-area active channel material for developing logic-in-memory devices and circuits based on floating-gate field-effect transistors (FGFETs).

Another group proposed an integrated photonic hardware accelerator for parallel convolutional processing. The authors identify two key advantages of integrated photonics over its electronic counterparts: (1) massively parallel data transfer through wavelength-division multiplexing in conjunction with frequency combs, and (2) extremely high data modulation speeds.
