<s>
State	B-Algorithm
–	I-Algorithm
action	I-Algorithm
–	I-Algorithm
reward	I-Algorithm
–	I-Algorithm
state	I-Algorithm
–	I-Algorithm
action	I-Algorithm
(	O
SARSA	B-Algorithm
)	O
is	O
an	O
algorithm	O
for	O
learning	O
a	O
Markov	O
decision	O
process	O
policy	O
,	O
used	O
in	O
the	O
reinforcement	O
learning	O
area	O
of	O
machine	O
learning	O
.	O
</s>
<s>
It	O
was	O
proposed	O
by	O
Rummery	O
and	O
Niranjan	O
in	O
a	O
technical	O
note	O
with	O
the	O
name	O
"	O
Modified	B-Algorithm
Connectionist	I-Algorithm
Q-Learning	I-Algorithm
"	O
(	O
MCQ-L	O
)	O
.	O
</s>
<s>
The	O
Q	O
value	O
for	O
a	O
state-action	O
is	O
updated	O
by	O
an	O
error	O
,	O
adjusted	O
by	O
the	O
learning	B-General_Concept
rate	I-General_Concept
alpha	O
.	O
</s>
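The update described in the sentence above can be sketched in Python. The tabular Q representation, the discount factor gamma, and the concrete numbers are illustrative assumptions, not part of the source text.

```python
# Sketch of the SARSA update rule (tabular Q, discrete states/actions).
# gamma (discount factor) is an assumed parameter for illustration.
def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    # TD error: the target uses the action actually taken next (on-policy).
    td_error = r + gamma * Q[s_next][a_next] - Q[s][a]
    # The Q value is adjusted by the error, scaled by the learning rate alpha.
    Q[s][a] += alpha * td_error
    return Q

# Minimal usage: two states, two actions, all estimates start at zero.
Q = {0: {0: 0.0, 1: 0.0}, 1: {0: 0.0, 1: 0.0}}
Q = sarsa_update(Q, s=0, a=0, r=1.0, s_next=1, a_next=1)
```

With all estimates at zero, the TD error equals the reward, so the updated entry is alpha times the reward.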
<s>
Watkins	O
's	O
Q-learning	B-Algorithm
updates	O
an	O
estimate	O
of	O
the	O
optimal	O
state-action	O
value	O
function	O
based	O
on	O
the	O
maximum	O
reward	O
of	O
available	O
actions	O
.	O
</s>
<s>
While	O
SARSA	B-Algorithm
learns	O
the	O
Q	O
values	O
associated	O
with	O
taking	O
the	O
policy	O
it	O
follows	O
itself	O
,	O
Watkins	O
's	O
Q-learning	B-Algorithm
learns	O
the	O
Q	O
values	O
associated	O
with	O
taking	O
the	O
optimal	O
policy	O
while	O
following	O
an	O
exploration/exploitation	O
policy	O
.	O
</s>
<s>
Some	O
optimizations	O
of	O
Watkins	O
's	O
Q-learning	B-Algorithm
may	O
be	O
applied	O
to	O
SARSA	B-Algorithm
.	O
</s>
<s>
The	O
learning	B-General_Concept
rate	I-General_Concept
determines	O
to	O
what	O
extent	O
newly	O
acquired	O
information	O
overrides	O
old	O
information	O
.	O
</s>
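The role of the learning rate described above can be seen in isolation as a blend between the old estimate and the new target; this helper and its values are purely illustrative.

```python
# Illustrative effect of the learning rate alpha on a single value update:
# it interpolates between the old estimate and the newly acquired target.
def blend(old, target, alpha):
    # alpha = 0: new information is ignored; alpha = 1: it fully overrides.
    return (1 - alpha) * old + alpha * target

keep_old = blend(10.0, 0.0, 0.0)   # 10.0: old information kept entirely
override = blend(10.0, 0.0, 1.0)   # 0.0: new information overrides entirely
halfway  = blend(10.0, 0.0, 0.5)   # 5.0: equal weight to old and new
```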
