Latency oriented processor architecture is the microarchitecture of a microprocessor designed to serve a serial computing thread with low latency. This is typical of most central processing units (CPUs) developed since the 1970s.
These architectures, in general, aim to execute as many instructions as possible belonging to a single serial thread in a given window of time; however, the time to execute a single instruction completely, from the fetch to the retire stage, may vary from a few cycles to a few hundred cycles in some cases.
Latency oriented processor architectures are the opposite of throughput-oriented processors, which concern themselves more with the total throughput of the system than with the service latencies of the individual threads they work on.
Typically, latency oriented processor architectures execute a single task operating on a single data stream, and so they are SISD under Flynn's taxonomy. Latency oriented processor architectures might also include SIMD instruction set extensions such as Intel MMX and SSE; even though these extensions operate on large data sets, their primary goal is to reduce overall latency.
Techniques for reducing latency typically involve adding additional hardware in the pipeline to serve instructions as soon as they are fetched from memory or the instruction cache.
A notable characteristic of these architectures is that a significant area of the chip is used up in parts other than the execution units themselves. This is because the microprocessor spends much of its time doing tasks other than the calculations required by the individual instructions themselves.
If the hazards encountered during computation are not resolved quickly, then latency for the thread increases. This is because hazards stall execution of subsequent instructions and, depending upon the pipeline implementation, may either stall progress completely until the dependency is resolved or lead to an avalanche of further hazards in future instructions, further exacerbating execution time for the thread.
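As an illustrative toy model (not a description of any specific processor), the stall penalty of a read-after-write hazard can be expressed in terms of when the producing stage makes a result available and when the consuming stage needs it; a forwarding path that makes the result available earlier shrinks the penalty. The stage numbers below assume a classic 5-stage pipeline (fetch, decode, execute, memory, writeback) purely for the example:

```python
def stall_cycles(producer_ready_stage: int, consumer_need_stage: int, distance: int) -> int:
    """Cycles the consumer must stall so that it does not reach its
    operand-reading stage before the producer's result exists.

    producer_ready_stage: 1-based stage at whose end the result is available
                          (e.g. 5 for writeback, 3 for an EX-stage forwarding path)
    consumer_need_stage:  stage at which the consumer reads the operand
    distance:             instructions between producer and consumer (1 = adjacent)
    """
    # Without stalls, the consumer reaches any given stage `distance` cycles
    # after the producer did; the remaining gap to the ready point is covered
    # by inserting bubbles.
    return max(0, producer_ready_stage + 1 - (consumer_need_stage + distance))

# Adjacent ALU instructions, no forwarding: result written back in stage 5,
# read in decode (stage 2) -> 3 bubbles.
print(stall_cycles(5, 2, 1))
# With an EX->EX forwarding path (ready after stage 3, needed in stage 3): 0.
print(stall_cycles(3, 3, 1))
```

With this model, the classic load-use hazard (result ready after the memory stage, forwarded into execute) costs a single bubble, and hazards between sufficiently distant instructions cost nothing.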
Below are some of the most commonly employed techniques to reduce the overall latency for a thread.
Most architectures today use shorter and simpler instructions, like the load/store architecture, which help in optimizing the instruction pipeline for faster execution. Such an ISA is called a RISC architecture.
Pipelining overlaps execution of multiple instructions from the same executing thread in order to increase clock frequency or to increase the number of instructions that complete per unit time, thereby reducing the overall execution time for a thread. Instead of waiting for a single instruction to complete all its execution stages, multiple instructions are processed simultaneously, each at its respective stage inside the pipeline.
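The benefit of overlap can be sketched with the standard idealized timing formula, assuming one stage per cycle and no hazards (a simplification; real pipelines stall):

```python
def total_cycles(n_instructions: int, n_stages: int, pipelined: bool) -> int:
    """Idealized cycle count for a straight-line instruction sequence."""
    if pipelined:
        # The first instruction fills the pipeline (n_stages cycles),
        # after which one instruction completes every cycle.
        return n_stages + (n_instructions - 1)
    # Unpipelined: each instruction occupies the whole datapath in turn.
    return n_stages * n_instructions

# 100 instructions on a 5-stage datapath:
print(total_cycles(100, 5, pipelined=False))  # 500
print(total_cycles(100, 5, pipelined=True))   # 104
```

For long sequences the speedup approaches the number of stages, which is why deeper pipelines were historically used to raise clock frequency.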
Register renaming is used to effectively increase the total register file size beyond that specified in the ISA to programmers, and to eliminate false dependencies. Suppose, for example, that a later instruction intends to write to a register that an earlier, still in-flight instruction has yet to read, a Write-After-Read (WAR) dependency. To eliminate this dependency, the pipeline would 'rename' the instruction internally by assigning it to an internal register.
Similarly, if both instructions simply meant to write to the same register, a Write-After-Write (WAW) dependency, the pipeline would rename them and ensure that their results are available to future instructions without the need to serialize their execution.
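A minimal sketch of the idea, with hypothetical register names (`r…` architectural, `p…` physical) and no claim to match any real design: every write is given a fresh physical register, so WAR and WAW conflicts between instructions vanish, while true read-after-write dependencies are preserved through the mapping table.

```python
from itertools import count

def rename(instructions):
    """Rename architectural destination registers to fresh physical registers.

    instructions: list of (dest, src1, src2) architectural register names.
    """
    phys = count()        # endless supply of physical registers
    mapping = {}          # architectural register -> current physical register
    renamed = []
    for dest, src1, src2 in instructions:
        # Sources read the *current* mapping, preserving true RAW dependencies.
        s1 = mapping.get(src1, src1)
        s2 = mapping.get(src2, src2)
        # Every write gets a brand-new physical register.
        mapping[dest] = f"p{next(phys)}"
        renamed.append((mapping[dest], s1, s2))
    return renamed

prog = [("r1", "r2", "r3"),   # r1 = r2 op r3
        ("r4", "r1", "r5"),   # RAW on r1 (kept, via p0)
        ("r1", "r6", "r7")]   # WAW on r1 (removed: gets its own p2)
print(rename(prog))
```

After renaming, the third instruction no longer conflicts with the first, so the two writes can execute in any order.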
The different levels of memory, including caches, main memory, and non-volatile storage like hard disks (where the program instructions and data reside), are designed to exploit spatial locality and temporal locality to reduce the total memory access time.
The less time the processor spends waiting for data to be fetched from memory, the fewer instructions consume pipeline resources while just sitting idle and doing no useful work.
The instruction pipeline will be completely stalled if all its internal buffers (for example, reservation stations) are filled to their respective capacities.
Hence, if instructions consume fewer idle cycles while inside the pipeline, there is a greater chance of exploiting instruction level parallelism (ILP), as the fetch logic can pull in a greater number of instructions from the cache/memory per unit time.
A major cause of pipeline stalls is control flow dependencies, i.e. cases where the pipeline does not know which instruction to fetch next until a branch instruction resolves. Branch predictors guess the outcome of the branch and let the pipeline continue fetching and executing instructions speculatively along the predicted path. If the guess turns out to be correct, then the instructions are allowed to complete successfully and to update their results back to the register file/memory.
If the guess was incorrect, then all speculative instructions are flushed from the pipeline and execution (re)starts along the actual correct path for the program. By maintaining a high prediction accuracy, the pipeline is able to significantly increase throughput for the executing thread.
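One classic predictor is a 2-bit saturating counter, which tolerates a single deviation (such as a loop exit) before flipping its prediction. A minimal sketch, with the initial counter state an arbitrary assumption:

```python
def two_bit_accuracy(outcomes):
    """Fraction of branch outcomes predicted correctly by one 2-bit
    saturating counter (states 0-3; predict taken when state >= 2)."""
    state, correct = 2, 0              # start in weakly-taken (assumed)
    for taken in outcomes:
        if (state >= 2) == taken:
            correct += 1
        # Saturate the counter toward the observed outcome.
        state = min(3, state + 1) if taken else max(0, state - 1)
    return correct / len(outcomes)

# A loop branch that is taken 9 times and then falls through, repeated:
loop = ([True] * 9 + [False]) * 10
print(two_bit_accuracy(loop))
```

The single not-taken outcome at each loop exit only moves the counter from strongly- to weakly-taken, so the predictor stays correct on all nine taken iterations, mispredicting once per loop.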
Not all instructions in a thread take the same amount of time to execute. Superscalar pipelines usually have multiple possible paths for instructions, depending upon the current state and the instruction type itself.
Hence, to increase instructions per cycle (IPC), the pipeline allows execution of instructions out-of-order, so that instructions later in the program are not stalled behind an instruction which will take longer to complete.
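The payoff can be sketched with a toy single-issue model (one instruction issued per cycle, dependencies and latencies given explicitly; this is a simplification, not how real schedulers are built). An in-order machine stalls behind a slow head-of-queue instruction, while an out-of-order machine slips independent work underneath it:

```python
def finish_time(latency, deps, in_order):
    """Cycle at which the last instruction finishes on a single-issue machine.

    latency: list, cycles each instruction takes once issued
    deps:    list of sets, deps[i] = indices instruction i must wait for
    """
    n = len(latency)
    done = [None] * n                 # completion cycle once issued
    cycle = 0
    while None in done:
        cycle += 1
        # Instructions whose inputs all completed before this cycle.
        ready = [i for i in range(n) if done[i] is None
                 and all(done[d] is not None and done[d] < cycle for d in deps[i])]
        if in_order:
            # May only issue the oldest unissued instruction.
            head = done.index(None)
            ready = [i for i in ready if i == head]
        if ready:
            i = min(ready)            # oldest-first issue policy
            done[i] = cycle + latency[i] - 1
    return max(done)

# i0 is a 10-cycle load; i1 depends on it; i2 and i3 are independent 1-cycle ops.
lat  = [10, 1, 1, 1]
deps = [set(), {0}, set(), set()]
print(finish_time(lat, deps, in_order=True))
print(finish_time(lat, deps, in_order=False))
```

In-order issue serializes i2 and i3 behind the stalled i1, while out-of-order issue executes them in the load's shadow and finishes two cycles sooner.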
All instructions are registered in a re-order buffer when they are fetched by the pipeline and are allowed to retire (i.e. commit their results back to the register file/memory) only in original program order, so that the thread still appears to have executed sequentially.
A superscalar instruction pipeline pulls in multiple instructions in every clock cycle, as opposed to a simple scalar pipeline. This multiplies the available instruction level parallelism (ILP) by the number of instructions fetched in each cycle, except when the pipeline is stalled due to data or control flow dependencies.
Even though the retire rate of superscalar pipelines is usually less than their fetch rate, the overall number of instructions executed per unit time (> 1) is generally greater than that of a scalar pipeline.
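The idealized cycle count extends naturally to a superscalar pipeline of a given issue width (again assuming no stalls, which real code never achieves):

```python
import math

def cycles(n_instructions, width, n_stages=5):
    """Idealized cycles for an n_stages pipeline issuing up to `width`
    instructions per cycle, with no dependence or control stalls."""
    # Fill the pipeline once, then retire `width` instructions per cycle.
    return n_stages + math.ceil(n_instructions / width) - 1

# 1000 instructions: scalar vs. a 4-wide superscalar pipeline.
print(cycles(1000, width=1))  # 1004
print(cycles(1000, width=4))  # 254
```

The 4-wide machine sustains an IPC just under 4 here; in practice, dependencies and mispredictions keep the achieved retire rate well below the fetch width, as noted above.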
On throughput oriented processors, the total time required to complete a single execution is significantly larger than on a latency oriented processor architecture; however, the total time to complete a large set of calculations is significantly reduced.
Latency oriented processors expend a substantial chip area on sophisticated control structures like branch prediction, data forwarding, the re-order buffer, large register files, and caches in each processor. These structures help reduce operational latency and memory access time per instruction, and make results available as soon as possible.
Throughput oriented architectures, on the other hand, usually have a multitude of processors with much smaller caches and simpler control logic. This helps to efficiently utilize the memory bandwidth and increase the total number of execution units on the same chip area.
GPUs are a typical example of throughput oriented processor architectures.
