<s>
Deep	B-Algorithm
reinforcement	I-Algorithm
learning	I-Algorithm
(	O
deep	B-Algorithm
RL	I-Algorithm
)	O
is	O
a	O
subfield	O
of	O
machine	O
learning	O
that	O
combines	O
reinforcement	O
learning	O
(	O
RL	O
)	O
and	O
deep	B-Algorithm
learning	I-Algorithm
.	O
</s>
<s>
Deep	B-Algorithm
RL	I-Algorithm
incorporates	O
deep	B-Algorithm
learning	I-Algorithm
into	O
the	O
solution	O
,	O
allowing	O
agents	O
to	O
make	O
decisions	O
from	O
unstructured	O
input	O
data	O
without	O
manual	O
engineering	O
of	O
the	O
state	O
space	O
.	O
</s>
<s>
Deep	B-Algorithm
reinforcement	I-Algorithm
learning	I-Algorithm
has	O
been	O
used	O
for	O
a	O
diverse	O
set	O
of	O
applications	O
including	O
but	O
not	O
limited	O
to	O
robotics	O
,	O
video	O
games	O
,	O
natural	B-Application
language	I-Application
processing	I-Application
,	O
computer	B-Application
vision	I-Application
,	O
education	O
,	O
transportation	O
,	O
finance	O
and	O
healthcare	O
.	O
</s>
<s>
Deep	B-Algorithm
learning	I-Algorithm
is	O
a	O
form	O
of	O
machine	O
learning	O
that	O
utilizes	O
a	O
neural	B-Architecture
network	I-Architecture
to	O
transform	O
a	O
set	O
of	O
inputs	O
into	O
a	O
set	O
of	O
outputs	O
.	O
</s>
<s>
Deep	B-Algorithm
learning	I-Algorithm
methods	O
,	O
often	O
using	O
supervised	B-General_Concept
learning	I-General_Concept
with	O
labeled	O
datasets	O
,	O
have	O
been	O
shown	O
to	O
solve	O
tasks	O
that	O
involve	O
handling	O
complex	O
,	O
high-dimensional	O
raw	O
input	O
data	O
such	O
as	O
images	O
,	O
with	O
less	O
manual	O
feature	B-General_Concept
engineering	I-General_Concept
than	O
prior	O
methods	O
,	O
enabling	O
significant	O
progress	O
in	O
several	O
fields	O
including	O
computer	B-Application
vision	I-Application
and	O
natural	B-Application
language	I-Application
processing	I-Application
.	O
</s>
<s>
Deep	B-Algorithm
reinforcement	I-Algorithm
learning	I-Algorithm
algorithms	O
incorporate	O
deep	B-Algorithm
learning	I-Algorithm
to	O
solve	O
such	O
MDPs	O
,	O
often	O
representing	O
the	O
policy	O
or	O
other	O
learned	O
functions	O
as	O
a	O
neural	B-Architecture
network	I-Architecture
and	O
developing	O
specialized	O
algorithms	O
that	O
perform	O
well	O
in	O
this	O
setting	O
.	O
</s>
<s>
Along	O
with	O
rising	O
interest	O
in	O
neural	B-Architecture
networks	I-Architecture
beginning	O
in	O
the	O
mid-1980s	O
,	O
interest	O
grew	O
in	O
deep	B-Algorithm
reinforcement	I-Algorithm
learning	I-Algorithm
,	O
where	O
a	O
neural	B-Architecture
network	I-Architecture
is	O
used	O
in	O
reinforcement	O
learning	O
to	O
represent	O
policies	O
or	O
value	O
functions	O
.	O
</s>
<s>
Because	O
in	O
such	O
a	O
system	O
,	O
the	O
entire	O
decision	O
making	O
process	O
from	O
sensors	O
to	O
motors	O
in	O
a	O
robot	O
or	O
agent	O
involves	O
a	O
single	O
neural	B-Architecture
network	I-Architecture
,	O
it	O
is	O
also	O
sometimes	O
called	O
end-to-end	B-Algorithm
reinforcement	I-Algorithm
learning	I-Algorithm
.	O
</s>
<s>
One	O
of	O
the	O
first	O
successful	O
applications	O
of	O
reinforcement	O
learning	O
with	O
neural	B-Architecture
networks	I-Architecture
was	O
TD-Gammon	B-Application
,	O
a	O
computer	O
program	O
developed	O
in	O
1992	O
for	O
playing	O
backgammon	B-Application
.	O
</s>
<s>
Starting	O
around	O
2012	O
,	O
the	O
so-called	O
deep	B-Algorithm
learning	I-Algorithm
revolution	O
led	O
to	O
an	O
increased	O
interest	O
in	O
using	O
deep	B-Architecture
neural	I-Architecture
networks	I-Architecture
as	O
function	O
approximators	O
across	O
a	O
variety	O
of	O
domains	O
.	O
</s>
<s>
This	O
led	O
to	O
a	O
renewed	O
interest	O
among	O
researchers	O
in	O
using	O
deep	B-Architecture
neural	I-Architecture
networks	I-Architecture
to	O
learn	O
the	O
policy	O
,	O
value	O
,	O
and/or	O
Q	O
functions	O
present	O
in	O
existing	O
reinforcement	O
learning	O
algorithms	O
.	O
</s>
<s>
Beginning	O
around	O
2013	O
,	O
DeepMind	O
showed	O
impressive	O
learning	O
results	O
using	O
deep	B-Algorithm
RL	I-Algorithm
to	O
play	O
Atari	O
video	O
games	O
.	O
</s>
<s>
The	O
computer	O
player	O
was	O
a	O
neural	B-Architecture
network	I-Architecture
trained	O
using	O
a	O
deep	B-Algorithm
RL	I-Algorithm
algorithm	O
,	O
a	O
deep	O
version	O
of	O
Q-learning	B-Algorithm
they	O
termed	O
deep	B-Algorithm
Q-networks	I-Algorithm
(	O
DQN	B-Algorithm
)	O
,	O
with	O
the	O
game	O
score	O
as	O
the	O
reward	O
.	O
</s>
<s>
They	O
used	O
a	O
deep	B-Architecture
convolutional	I-Architecture
neural	I-Architecture
network	I-Architecture
to	O
process	O
4	O
frames	O
of	O
RGB	O
pixels	O
(	O
84×84	O
)	O
as	O
inputs	O
.	O
</s>
<s>
Deep	B-Algorithm
reinforcement	I-Algorithm
learning	I-Algorithm
reached	O
another	O
milestone	O
in	O
2015	O
when	O
AlphaGo	B-Application
,	O
a	O
computer	O
program	O
trained	O
with	O
deep	B-Algorithm
RL	I-Algorithm
to	O
play	O
Go	B-Application
,	O
became	O
the	O
first	O
computer	O
Go	B-Application
program	O
to	O
beat	O
a	O
human	O
professional	O
Go	B-Application
player	O
without	O
handicap	O
on	O
a	O
full-sized	O
19×19	O
board	O
.	O
</s>
<s>
In	O
a	O
subsequent	O
project	O
in	O
2017	O
,	O
AlphaZero	B-Application
improved	O
performance	O
on	O
Go	B-Application
while	O
also	O
demonstrating	O
that	O
the	O
same	O
algorithm	O
could	O
learn	O
to	O
play	O
chess	B-Application
and	O
shogi	B-Application
at	O
a	O
level	O
competitive	O
or	O
superior	O
to	O
existing	O
computer	O
programs	O
for	O
those	O
games	O
,	O
and	O
again	O
improved	O
in	O
2019	O
with	O
MuZero	B-Application
.	O
</s>
<s>
Separately	O
,	O
another	O
milestone	O
was	O
achieved	O
by	O
researchers	O
from	O
Carnegie	O
Mellon	O
University	O
in	O
2019	O
developing	O
Pluribus	B-Application
,	O
a	O
computer	O
program	O
to	O
play	O
poker	B-Application
that	O
was	O
the	O
first	O
to	O
beat	O
professionals	O
at	O
multiplayer	O
games	O
of	O
no-limit	O
Texas	B-Application
hold	I-Application
'	I-Application
em	I-Application
.	O
</s>
<s>
OpenAI	B-Application
Five	I-Application
,	O
a	O
program	O
for	O
playing	O
five-on-five	O
Dota	B-Application
2	I-Application
beat	O
the	O
previous	O
world	O
champions	O
in	O
a	O
demonstration	O
match	O
in	O
2019	O
.	O
</s>
<s>
Deep	B-Algorithm
reinforcement	I-Algorithm
learning	I-Algorithm
has	O
also	O
been	O
applied	O
to	O
many	O
domains	O
beyond	O
games	O
.	O
</s>
<s>
Various	O
techniques	O
exist	O
to	O
train	O
policies	O
to	O
solve	O
tasks	O
with	O
deep	B-Algorithm
reinforcement	I-Algorithm
learning	I-Algorithm
algorithms	O
,	O
each	O
with	O
its	O
own	O
benefits	O
.	O
</s>
<s>
In	O
model-based	O
deep	B-Algorithm
reinforcement	I-Algorithm
learning	I-Algorithm
algorithms	O
,	O
a	O
forward	O
model	O
of	O
the	O
environment	O
dynamics	O
is	O
estimated	O
,	O
usually	O
by	O
supervised	B-General_Concept
learning	I-General_Concept
using	O
a	O
neural	B-Architecture
network	I-Architecture
.	O
</s>
<s>
The	O
actions	O
selected	O
may	O
be	O
optimized	O
using	O
Monte	B-Algorithm
Carlo	I-Algorithm
methods	I-Algorithm
such	O
as	O
the	O
cross-entropy	B-Algorithm
method	I-Algorithm
,	O
or	O
a	O
combination	O
of	O
model-learning	O
with	O
model-free	O
methods	O
.	O
</s>
<s>
In	O
model-free	O
deep	B-Algorithm
reinforcement	I-Algorithm
learning	I-Algorithm
algorithms	O
,	O
a	O
policy	O
is	O
learned	O
without	O
explicitly	O
modeling	O
the	O
forward	O
dynamics	O
.	O
</s>
<s>
Another	O
class	O
of	O
model-free	O
deep	B-Algorithm
reinforcement	I-Algorithm
learning	I-Algorithm
algorithms	O
relies	O
on	O
dynamic	B-Algorithm
programming	I-Algorithm
,	O
inspired	O
by	O
temporal	B-Algorithm
difference	I-Algorithm
learning	I-Algorithm
and	O
Q-learning	B-Algorithm
.	O
</s>
<s>
In	O
discrete	O
action	O
spaces	O
,	O
these	O
algorithms	O
usually	O
learn	O
a	O
neural	B-Architecture
network	I-Architecture
Q-function	O
that	O
estimates	O
the	O
future	O
returns	O
of	O
taking	O
a	O
given	O
action	O
from	O
a	O
given	O
state	O
.	O
</s>
<s>
Deep	B-Algorithm
reinforcement	I-Algorithm
learning	I-Algorithm
is	O
an	O
active	O
area	O
of	O
research	O
,	O
with	O
several	O
lines	O
of	O
inquiry	O
.	O
</s>
<s>
Generally	O
,	O
value-function	O
based	O
methods	O
such	O
as	O
Q-learning	B-Algorithm
are	O
better	O
suited	O
for	O
off-policy	O
learning	O
and	O
have	O
better	O
sample-efficiency	O
:	O
the	O
amount	O
of	O
data	O
required	O
to	O
learn	O
a	O
task	O
is	O
reduced	O
because	O
data	O
is	O
re-used	O
for	O
learning	O
.	O
</s>
<s>
Inverse	B-Algorithm
reinforcement	I-Algorithm
learning	I-Algorithm
can	O
be	O
used	O
for	O
learning	O
from	O
demonstrations	O
(	O
or	O
apprenticeship	B-General_Concept
learning	I-General_Concept
)	O
by	O
inferring	O
the	O
demonstrator	O
's	O
reward	O
and	O
then	O
optimizing	O
a	O
policy	O
to	O
maximize	O
returns	O
with	O
RL	O
.	O
</s>
<s>
Deep	B-Algorithm
learning	I-Algorithm
approaches	O
have	O
been	O
used	O
for	O
various	O
forms	O
of	O
imitation	B-General_Concept
learning	I-General_Concept
and	O
inverse	B-Algorithm
RL	I-Algorithm
.	O
</s>
<s>
Multi-agent	B-Algorithm
reinforcement	I-Algorithm
learning	I-Algorithm
studies	O
the	O
problems	O
introduced	O
in	O
this	O
setting	O
.	O
</s>
<s>
The	O
promise	O
of	O
using	O
deep	B-Algorithm
learning	I-Algorithm
tools	O
in	O
reinforcement	O
learning	O
is	O
generalization	O
:	O
the	O
ability	O
to	O
operate	O
correctly	O
on	O
previously	O
unseen	O
inputs	O
.	O
</s>
<s>
For	O
instance	O
,	O
neural	B-Architecture
networks	I-Architecture
trained	O
for	O
image	O
recognition	O
can	O
recognize	O
that	O
a	O
picture	O
contains	O
a	O
bird	O
even	O
if	O
it	O
has	O
never	O
seen	O
that	O
particular	O
image	O
or	O
even	O
that	O
particular	O
bird	O
.	O
</s>
<s>
With	O
this	O
layer	O
of	O
abstraction	O
,	O
deep	B-Algorithm
reinforcement	I-Algorithm
learning	I-Algorithm
algorithms	O
can	O
be	O
designed	O
in	O
a	O
way	O
that	O
allows	O
them	O
to	O
be	O
general	O
and	O
the	O
same	O
model	O
can	O
be	O
used	O
for	O
different	O
tasks	O
.	O
</s>
<s>
One	O
method	O
of	O
increasing	O
the	O
ability	O
of	O
policies	O
trained	O
with	O
deep	B-Algorithm
RL	I-Algorithm
to	O
generalize	O
is	O
to	O
incorporate	O
representation	B-General_Concept
learning	I-General_Concept
.	O
</s>
