<s>
Q-learning	B-Algorithm
is	O
a	O
model-free	B-Algorithm
reinforcement	O
learning	O
algorithm	O
to	O
learn	O
the	O
value	O
of	O
an	O
action	O
in	O
a	O
particular	O
state	O
.	O
</s>
<s>
It	O
does	O
not	O
require	O
a	O
model	O
of	O
the	O
environment	O
(	O
hence	O
"	O
model-free	B-Algorithm
"	O
)	O
,	O
and	O
it	O
can	O
handle	O
problems	O
with	O
stochastic	O
transitions	O
and	O
rewards	O
without	O
requiring	O
adaptations	O
.	O
</s>
<s>
For	O
any	O
finite	O
Markov	O
decision	O
process	O
(	O
FMDP	O
)	O
,	O
Q-learning	B-Algorithm
finds	O
an	O
optimal	O
policy	O
in	O
the	O
sense	O
of	O
maximizing	O
the	O
expected	O
value	O
of	O
the	O
total	O
reward	O
over	O
any	O
and	O
all	O
successive	O
steps	O
,	O
starting	O
from	O
the	O
current	O
state	O
.	O
</s>
<s>
Q-learning	B-Algorithm
can	O
identify	O
an	O
optimal	O
action-selection	O
policy	O
for	O
any	O
given	O
FMDP	O
,	O
given	O
infinite	O
exploration	O
time	O
and	O
a	O
partly-random	O
policy	O
.	O
</s>
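The "partly-random policy" mentioned above is commonly realized as ε-greedy action selection. A minimal sketch in Python (the function name, dictionary-based Q-table, and environment interface are illustrative assumptions, not from the annotated text):

```python
import random

def epsilon_greedy(q_table, state, actions, epsilon=0.1):
    """Pick a random action with probability epsilon (exploration),
    otherwise the action with the highest Q-value (exploitation)."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q_table.get((state, a), 0.0))
```

With epsilon > 0 every action keeps a nonzero selection probability, which is what "infinite exploration time and a partly-random policy" requires for the convergence guarantee.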
<s>
Reinforcement	O
learning	O
involves	O
an	O
agent	B-General_Concept
,	O
a	O
set	O
of	O
states	O
,	O
and	O
a	O
set	O
of	O
actions	O
per	O
state	O
.	O
</s>
<s>
By	O
performing	O
an	O
action	O
,	O
the	O
agent	B-General_Concept
transitions	O
from	O
state	O
to	O
state	O
.	O
</s>
<s>
Executing	O
an	O
action	O
in	O
a	O
specific	O
state	O
provides	O
the	O
agent	B-General_Concept
with	O
a	O
reward	O
(	O
a	O
numerical	O
score	O
)	O
.	O
</s>
<s>
The	O
goal	O
of	O
the	O
agent	B-General_Concept
is	O
to	O
maximize	O
its	O
total	O
reward	O
.	O
</s>
<s>
After	O
Δt	O
steps	O
into	O
the	O
future	O
the	O
agent	B-General_Concept
will	O
decide	O
some	O
next	O
step	O
.	O
</s>
<s>
Then	O
,	O
at	O
each	O
time	O
the	O
agent	B-General_Concept
selects	O
an	O
action	O
,	O
observes	O
a	O
reward	O
,	O
enters	O
a	O
new	O
state	O
(	O
that	O
may	O
depend	O
on	O
both	O
the	O
previous	O
state	O
and	O
the	O
selected	O
action	O
)	O
,	O
and	O
Q	O
is	O
updated	O
.	O
</s>
<s>
where	O
r_t	O
is	O
the	O
reward	O
received	O
when	O
moving	O
from	O
the	O
state	O
s_t	O
to	O
the	O
state	O
s_{t+1}	O
,	O
and	O
α	O
is	O
the	O
learning	B-General_Concept
rate	I-General_Concept
.	O
</s>
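The update rule described above (reward r_t, learning rate α, discount factor γ) can be sketched as a tabular update. A minimal Python illustration; the dictionary-based Q-table and function name are assumptions of this sketch:

```python
def q_update(q_table, s, a, reward, s_next, actions, alpha=0.1, gamma=0.9):
    """One Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q_table.get((s_next, a2), 0.0) for a2 in actions)
    old = q_table.get((s, a), 0.0)
    q_table[(s, a)] = old + alpha * (reward + gamma * best_next - old)
    return q_table[(s, a)]
```

Note how the two extremes of the learning rate fall out of the formula: with alpha = 0 the stored value never changes (the agent learns nothing), while with alpha = 1 the old value is fully overwritten by the new target.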
<s>
However	O
,	O
Q-learning	B-Algorithm
can	O
also	O
learn	O
in	O
non-episodic	O
tasks	O
(	O
as	O
a	O
result	O
of	O
the	O
property	O
of	O
convergent	O
infinite	O
series	O
)	O
.	O
</s>
<s>
The	O
learning	B-General_Concept
rate	I-General_Concept
or	O
step	B-General_Concept
size	I-General_Concept
determines	O
to	O
what	O
extent	O
newly	O
acquired	O
information	O
overrides	O
old	O
information	O
.	O
</s>
<s>
A	O
factor	O
of	O
0	O
makes	O
the	O
agent	B-General_Concept
learn	O
nothing	O
(	O
exclusively	O
exploiting	O
prior	O
knowledge	O
)	O
,	O
while	O
a	O
factor	O
of	O
1	O
makes	O
the	O
agent	B-General_Concept
consider	O
only	O
the	O
most	O
recent	O
information	O
(	O
ignoring	O
prior	O
knowledge	O
to	O
explore	O
possibilities	O
)	O
.	O
</s>
<s>
In	O
fully	O
deterministic	O
environments	O
,	O
a	O
learning	B-General_Concept
rate	I-General_Concept
of	O
α_t	O
=	O
1	O
is	O
optimal	O
.	O
</s>
<s>
When	O
the	O
problem	O
is	O
stochastic	O
,	O
the	O
algorithm	O
converges	O
under	O
some	O
technical	O
conditions	O
on	O
the	O
learning	B-General_Concept
rate	I-General_Concept
that	O
require	O
it	O
to	O
decrease	O
to	O
zero	O
.	O
</s>
<s>
In	O
practice	O
,	O
often	O
a	O
constant	O
learning	B-General_Concept
rate	I-General_Concept
is	O
used	O
,	O
such	O
as	O
α_t	O
=	O
0.1	O
for	O
all	O
t	O
.	O
</s>
<s>
A	O
factor	O
of	O
0	O
will	O
make	O
the	O
agent	B-General_Concept
"	O
myopic	O
"	O
(	O
or	O
short-sighted	O
)	O
by	O
only	O
considering	O
current	O
rewards	O
,	O
i.e.	O
r_t	O
,	O
while	O
a	O
factor	O
approaching	O
1	O
will	O
make	O
it	O
strive	O
for	O
a	O
long-term	O
high	O
reward	O
.	O
</s>
<s>
For	O
γ	O
=	O
1	O
,	O
without	O
a	O
terminal	O
state	O
,	O
or	O
if	O
the	O
agent	B-General_Concept
never	O
reaches	O
one	O
,	O
all	O
environment	O
histories	O
become	O
infinitely	O
long	O
,	O
and	O
utilities	O
with	O
additive	O
,	O
undiscounted	O
rewards	O
generally	O
become	O
infinite	O
.	O
</s>
<s>
Even	O
with	O
a	O
discount	O
factor	O
only	O
slightly	O
lower	O
than	O
1	O
,	O
Q-function	O
learning	O
leads	O
to	O
propagation	O
of	O
errors	O
and	O
instabilities	O
when	O
the	O
value	O
function	O
is	O
approximated	O
with	O
an	O
artificial	B-Architecture
neural	I-Architecture
network	I-Architecture
.	O
</s>
<s>
Since	O
Q-learning	B-Algorithm
is	O
an	O
iterative	O
algorithm	O
,	O
it	O
implicitly	O
assumes	O
an	O
initial	O
condition	O
before	O
the	O
first	O
update	O
occurs	O
.	O
</s>
<s>
Q-learning	B-Algorithm
at	O
its	O
simplest	O
stores	O
data	O
in	O
tables	O
.	O
</s>
<s>
This	O
approach	O
falters	O
with	O
increasing	O
numbers	O
of	O
states/actions	O
since	O
the	O
likelihood	O
of	O
the	O
agent	B-General_Concept
visiting	O
a	O
particular	O
state	O
and	O
performing	O
a	O
particular	O
action	O
is	O
increasingly	O
small	O
.	O
</s>
<s>
Q-learning	B-Algorithm
can	O
be	O
combined	O
with	O
function	O
approximation	O
.	O
</s>
<s>
One	O
solution	O
is	O
to	O
use	O
an	O
(	O
adapted	O
)	O
artificial	B-Architecture
neural	I-Architecture
network	I-Architecture
as	O
a	O
function	O
approximator	O
.	O
</s>
<s>
Another	O
possibility	O
is	O
to	O
integrate	O
Fuzzy	B-General_Concept
Rule	I-General_Concept
Interpolation	O
(	O
FRI	O
)	O
and	O
use	O
sparse	O
fuzzy	B-General_Concept
rule-bases	I-General_Concept
instead	O
of	O
discrete	O
Q-tables	O
or	O
ANNs	O
,	O
which	O
has	O
the	O
advantage	O
of	O
being	O
a	O
human-readable	O
knowledge	O
representation	O
form	O
.	O
</s>
<s>
Q-learning	B-Algorithm
was	O
introduced	O
by	O
Chris	O
Watkins	O
in	O
1989	O
.	O
</s>
<s>
The	O
memory	O
matrix	O
was	O
the	O
same	O
as	O
the	O
Q-table	O
of	O
Q-learning	B-Algorithm
introduced	O
eight	O
years	O
later	O
.	O
</s>
<s>
The	O
crossbar	O
learning	O
algorithm	O
,	O
written	O
in	O
mathematical	O
pseudocode	B-Language
in	O
the	O
paper	O
,	O
in	O
each	O
iteration	O
performs	O
the	O
following	O
computation	O
:	O
</s>
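The paper's pseudocode is not reproduced in this text. Based on the description that follows (the state value of the consequence situation is backpropagated to previously encountered situations), one iteration might be sketched roughly as below; the names `memory`, `state_values`, and the exact update form are assumptions for illustration, not the published algorithm:

```python
def crossbar_iteration(memory, state_values, s, choose_action, step):
    """One illustrative iteration: act, observe the consequence situation,
    evaluate it, and feed that value back into the crossbar memory entry."""
    a = choose_action(memory, s)        # pick action from crossbar memory row
    s_next = step(s, a)                 # consequence situation
    v = state_values[s_next]            # "secondary reinforcement" of s_next
    memory[(a, s)] = memory.get((a, s), 0.0) + v  # backpropagate to (a, s)
    return s_next
```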
<s>
The	O
term	O
“	O
secondary	O
reinforcement	O
”	O
is	O
borrowed	O
from	O
animal	O
learning	O
theory	O
,	O
to	O
model	O
state	O
values	O
via	O
backpropagation	B-Algorithm
:	O
the	O
state	O
value	O
of	O
the	O
consequence	O
situation	O
is	O
backpropagated	O
to	O
the	O
previously	O
encountered	O
situations	O
.	O
</s>
<s>
This	O
learning	O
system	O
was	O
a	O
forerunner	O
of	O
the	O
Q-learning	B-Algorithm
algorithm	O
.	O
</s>
<s>
In	O
2014	O
,	O
Google	B-Application
DeepMind	I-Application
patented	O
an	O
application	O
of	O
Q-learning	B-Algorithm
to	O
deep	B-Algorithm
learning	I-Algorithm
,	O
titled	O
"	O
deep	O
reinforcement	O
learning	O
"	O
or	O
"	O
deep	O
Q-learning	B-Algorithm
"	O
,	O
that	O
can	O
play	O
Atari	B-General_Concept
2600	I-General_Concept
games	O
at	O
expert	O
human	O
levels	O
.	O
</s>
<s>
The	O
DeepMind	B-Application
system	O
used	O
a	O
deep	B-Architecture
convolutional	I-Architecture
neural	I-Architecture
network	I-Architecture
,	O
with	O
layers	O
of	O
tiled	O
convolutional	O
filters	O
to	O
mimic	O
the	O
effects	O
of	O
receptive	O
fields	O
.	O
</s>
<s>
Reinforcement	O
learning	O
is	O
unstable	O
or	O
divergent	O
when	O
a	O
nonlinear	O
function	O
approximator	O
such	O
as	O
a	O
neural	B-Architecture
network	I-Architecture
is	O
used	O
to	O
represent	O
Q	O
.	O
</s>
<s>
This	O
instability	O
comes	O
from	O
the	O
correlations	O
present	O
in	O
the	O
sequence	O
of	O
observations	O
,	O
the	O
fact	O
that	O
small	O
updates	O
to	O
Q	O
may	O
significantly	O
change	O
the	O
policy	O
of	O
the	O
agent	B-General_Concept
and	O
the	O
data	O
distribution	O
,	O
and	O
the	O
correlations	O
between	O
Q	O
and	O
the	O
target	O
values	O
.	O
</s>
<s>
Because	O
the	O
future	O
maximum	O
approximated	O
action	O
value	O
in	O
Q-learning	B-Algorithm
is	O
evaluated	O
using	O
the	O
same	O
Q	O
function	O
as	O
in	O
current	O
action	O
selection	O
policy	O
,	O
in	O
noisy	O
environments	O
Q-learning	B-Algorithm
can	O
sometimes	O
overestimate	O
the	O
action	O
values	O
,	O
slowing	O
the	O
learning	O
.	O
</s>
<s>
A	O
variant	O
called	O
Double	O
Q-learning	B-Algorithm
was	O
proposed	O
to	O
correct	O
this	O
.	O
</s>
<s>
Double	O
Q-learning	B-Algorithm
is	O
an	O
off-policy	O
reinforcement	O
learning	O
algorithm	O
,	O
where	O
a	O
different	O
policy	O
is	O
used	O
for	O
value	O
evaluation	O
than	O
what	O
is	O
used	O
to	O
select	O
the	O
next	O
action	O
.	O
</s>
<s>
The	O
double	O
Q-learning	B-Algorithm
update	O
step	O
is	O
then	O
as	O
follows	O
:	O
</s>
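The update step itself does not appear in this text. A sketch based on the standard double Q-learning formulation (an assumption of this illustration, not taken from the source): two tables are kept, one is picked at random to update, the greedy action is selected with that table, but its value is evaluated with the other one.

```python
import random

def double_q_update(qa, qb, s, a, reward, s_next, actions, alpha=0.1, gamma=0.9):
    """One double Q-learning step: randomly update one table, using the
    other table for evaluation, decoupling selection from evaluation."""
    if random.random() < 0.5:
        sel, ev = qa, qb
    else:
        sel, ev = qb, qa
    # argmax chosen with the table being updated...
    a_star = max(actions, key=lambda x: sel.get((s_next, x), 0.0))
    # ...but evaluated with the other table
    target = reward + gamma * ev.get((s_next, a_star), 0.0)
    old = sel.get((s, a), 0.0)
    sel[(s, a)] = old + alpha * (target - old)
```

Because the evaluating table's noise is independent of the selecting table's argmax, the overestimation bias described above is reduced.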
<s>
This	O
algorithm	O
was	O
later	O
modified	O
in	O
2015	O
and	O
combined	O
with	O
deep	B-Algorithm
learning	I-Algorithm
,	O
as	O
in	O
the	O
DQN	O
algorithm	O
,	O
resulting	O
in	O
Double	O
DQN	O
,	O
which	O
outperforms	O
the	O
original	O
DQN	O
algorithm	O
.	O
</s>
<s>
Delayed	O
Q-learning	B-Algorithm
is	O
an	O
alternative	O
implementation	O
of	O
the	O
online	O
Q-learning	B-Algorithm
algorithm	O
,	O
with	O
probably	O
approximately	O
correct	O
(	O
PAC	O
)	O
learning	O
.	O
</s>
<s>
Greedy	O
GQ	O
is	O
a	O
variant	O
of	O
Q-learning	B-Algorithm
to	O
use	O
in	O
combination	O
with	O
(	O
linear	O
)	O
function	O
approximation	O
.	O
</s>
<s>
Distributional	O
Q-learning	B-Algorithm
is	O
a	O
variant	O
of	O
Q-learning	B-Algorithm
which	O
seeks	O
to	O
model	O
the	O
distribution	O
of	O
returns	O
rather	O
than	O
the	O
expected	O
return	O
of	O
each	O
action	O
.	O
</s>
<s>
It	O
has	O
been	O
observed	O
to	O
facilitate	O
estimation	O
by	O
deep	O
neural	B-Architecture
networks	I-Architecture
and	O
can	O
enable	O
alternative	O
control	O
methods	O
,	O
such	O
as	O
risk-sensitive	O
control	O
.	O
</s>
<s>
Q-learning	B-Algorithm
has	O
been	O
proposed	O
in	O
the	O
multi-agent	O
setting	O
(	O
see	O
Section	O
4.1.2	O
)	O
.	O
</s>
<s>
Littman	O
proposes	O
the	O
minimax	O
Q	B-Algorithm
learning	I-Algorithm
algorithm	O
.	O
</s>
<s>
The	O
standard	O
Q-learning	B-Algorithm
algorithm	O
(	O
using	O
a	O
table	O
)	O
applies	O
only	O
to	O
discrete	O
action	O
and	O
state	O
spaces	O
.	O
</s>
<s>
Discretization	B-Algorithm
of	O
these	O
values	O
leads	O
to	O
inefficient	O
learning	O
,	O
largely	O
due	O
to	O
the	O
curse	B-Algorithm
of	I-Algorithm
dimensionality	I-Algorithm
.	O
</s>
<s>
However	O
,	O
there	O
are	O
adaptations	O
of	O
Q-learning	B-Algorithm
that	O
attempt	O
to	O
solve	O
this	O
problem	O
such	O
as	O
Wire-fitted	O
Neural	B-Architecture
Network	I-Architecture
Q-Learning	B-Algorithm
.	O
</s>
