<s>
Artificial	B-Application
intelligence	I-Application
agents	O
sometimes	O
misbehave	O
due	O
to	O
faulty	O
objective	O
functions	O
that	O
fail	O
to	O
adequately	O
encapsulate	O
the	O
programmers	O
'	O
intended	O
goals	O
.	O
</s>
<s>
In	O
the	O
AIMA	O
paradigm	O
,	O
programmers	O
provide	O
an	O
AI	B-Application
such	O
as	O
AlphaZero	B-Application
with	O
an	O
"	O
objective	O
function	O
"	O
that	O
the	O
programmers	O
intend	O
will	O
encapsulate	O
the	O
goal	O
or	O
goals	O
that	O
the	O
programmers	O
wish	O
the	O
AI	B-Application
to	O
accomplish	O
.	O
</s>
<s>
Such	O
an	O
AI	B-Application
later	O
populates	O
a	O
(	O
possibly	O
implicit	O
)	O
internal	O
"	O
model	O
"	O
of	O
its	O
environment	O
.	O
</s>
<s>
The	O
AI	B-Application
then	O
creates	O
and	O
executes	O
whatever	O
plan	O
is	O
calculated	O
to	O
maximize	O
the	O
value	O
of	O
its	O
objective	O
function	O
.	O
</s>
<s>
For	O
example	O
,	O
AlphaZero	B-Application
chess	O
has	O
a	O
simple	O
objective	O
function	O
of	O
"	O
+1	O
if	O
AlphaZero	B-Application
wins	O
,	O
-1	O
if	O
AlphaZero	B-Application
loses	O
"	O
.	O
</s>
<s>
During	O
the	O
game	O
,	O
AlphaZero	B-Application
attempts	O
to	O
execute	O
whatever	O
sequence	O
of	O
moves	O
it	O
judges	O
most	O
likely	O
to	O
give	O
the	O
maximum	O
value	O
of	O
+1	O
.	O
</s>
<s>
Similarly	O
,	O
a	O
reinforcement	O
learning	O
system	O
can	O
have	O
a	O
"	O
reward	O
function	O
"	O
that	O
allows	O
the	O
programmers	O
to	O
shape	O
the	O
AI	B-Application
's	O
desired	O
behavior	O
.	O
</s>
<s>
An	O
evolutionary	B-Algorithm
algorithm	I-Algorithm
's	O
behavior	O
is	O
shaped	O
by	O
a	O
"	O
fitness	O
function	O
"	O
.	O
</s>
<s>
An	O
artificial	B-Application
intelligence	I-Application
(	O
AI	B-Application
)	O
in	O
a	O
complex	O
environment	O
optimizes	O
an	O
objective	O
function	O
created	O
,	O
directly	O
or	O
indirectly	O
,	O
by	O
the	O
programmers	O
.	O
</s>
<s>
For	O
example	O
,	O
the	O
AI	B-Application
may	O
create	O
and	O
execute	O
a	O
plan	O
the	O
AI	B-Application
believes	O
will	O
maximize	O
the	O
value	O
of	O
the	O
objective	O
function	O
.	O
</s>
<s>
Some	O
scholars	O
divide	O
alignment	O
failures	O
into	O
failures	O
caused	O
by	O
"	O
negative	O
side-effects	O
"	O
that	O
were	O
not	O
reflected	O
in	O
the	O
objective	O
function	O
,	O
and	O
failures	O
due	O
to	O
"	O
specification	O
gaming	O
"	O
,	O
"	O
reward	O
hacking	O
"	O
,	O
or	O
other	O
failures	O
where	O
the	O
AI	B-Application
appears	O
to	O
deploy	O
qualitatively	O
undesirable	O
plans	O
or	O
strategic	O
behavior	O
in	O
the	O
course	O
of	O
optimizing	O
its	O
objective	O
function	O
.	O
</s>
<s>
Some	O
scholars	O
believe	O
that	O
a	O
superintelligent	O
agent	O
AI	B-Application
,	O
if	O
and	O
when	O
it	O
is	O
ever	O
invented	O
,	O
may	O
pose	O
risks	O
akin	O
to	O
an	O
overly	O
literal	O
genie	O
,	O
in	O
part	O
due	O
to	O
the	O
difficulty	O
of	O
specifying	O
a	O
completely	O
safe	O
objective	O
function	O
.	O
</s>
<s>
In	O
2016	O
,	O
Microsoft	O
released	O
Tay	B-Application
,	O
a	O
Twitter	O
chatbot	O
that	O
,	O
according	O
to	O
computer	O
scientist	O
Pedro	O
Domingos	O
,	O
had	O
the	O
objective	O
to	O
engage	O
people	O
:	O
"	O
What	O
unfortunately	O
Tay	B-Application
discovered	O
,	O
is	O
that	O
the	O
best	O
way	O
to	O
maximize	O
engagement	O
is	O
to	O
spew	O
out	O
racist	O
insults.	O
"	O
</s>
<s>
Tom	O
Drummond	O
of	O
Monash	O
University	O
stated	O
that	O
"	O
We	O
need	O
to	O
be	O
able	O
to	O
give	O
(	O
machine	O
learning	O
systems	O
)	O
rich	O
feedback	O
and	O
say	O
'	O
No	O
,	O
that	O
's	O
unacceptable	O
as	O
an	O
answer	O
because	O
...	O
'	O
"	O
Drummond	O
believes	O
one	O
problem	O
with	O
AI	B-Application
is	O
that	O
"	O
we	O
start	O
by	O
creating	O
an	O
objective	O
function	O
that	O
measures	O
the	O
quality	O
of	O
the	O
output	O
of	O
the	O
system	O
,	O
and	O
it	O
is	O
never	O
what	O
you	O
want	O
.	O
"	O
</s>
<s>
Drummond	O
pointed	O
to	O
the	O
behavior	O
of	O
AlphaGo	B-Application
,	O
a	O
game-playing	O
bot	O
with	O
a	O
simple	O
win-loss	O
objective	O
function	O
.	O
</s>
<s>
AlphaGo	B-Application
's	O
objective	O
function	O
could	O
instead	O
have	O
been	O
modified	O
to	O
factor	O
in	O
"	O
the	O
social	O
niceties	O
of	O
the	O
game	O
"	O
,	O
such	O
as	O
accepting	O
the	O
implicit	O
challenge	O
of	O
maximizing	O
the	O
score	O
when	O
clearly	O
winning	O
,	O
and	O
also	O
trying	O
to	O
avoid	O
gambits	O
that	O
would	O
insult	O
a	O
human	O
opponent	O
's	O
intelligence	O
:	O
"	O
(	O
AlphaGo	B-Application
)	O
kind	O
of	O
had	O
a	O
crude	O
hammer	O
that	O
if	O
the	O
probability	O
of	O
victory	O
dropped	O
below	O
epsilon	O
,	O
some	O
number	O
,	O
then	O
resign	O
.	O
"	O
</s>
<s>
In	O
May	O
2015	O
,	O
Flickr	B-Application
's	O
image	O
recognition	O
system	O
was	O
criticized	O
for	O
mislabeling	O
people	O
,	O
some	O
of	O
whom	O
were	O
black	O
,	O
with	O
tags	O
like	O
"	O
ape	O
"	O
and	O
"	O
animal	O
"	O
.	O
</s>
<s>
In	O
June	O
2015	O
,	O
black	O
New	O
York	O
computer	O
programmer	O
Jacky	O
Alciné	O
reported	O
that	O
multiple	O
pictures	O
of	O
him	O
with	O
his	O
black	O
girlfriend	O
were	O
being	O
misclassified	O
as	O
"	O
gorillas	O
"	O
by	O
the	O
Google	B-Application
Photos	I-Application
AI	B-Application
,	O
noting	O
that	O
"	O
gorilla	O
"	O
has	O
historically	O
been	O
used	O
pejoratively	O
to	O
refer	O
to	O
black	O
people	O
.	O
</s>
<s>
AI	B-Application
researcher	O
Stuart	O
Russell	O
stated	O
in	O
2019	O
that	O
there	O
was	O
no	O
public	O
explanation	O
of	O
exactly	O
how	O
the	O
error	O
occurred	O
,	O
but	O
theorized	O
that	O
the	O
fiasco	O
could	O
have	O
been	O
prevented	O
if	O
the	O
AI	B-Application
's	O
objective	O
function	O
placed	O
more	O
weight	O
on	O
sensitive	O
classification	O
errors	O
,	O
rather	O
than	O
assuming	O
the	O
cost	O
of	O
misclassifying	O
a	O
person	O
as	O
a	O
gorilla	O
is	O
the	O
same	O
as	O
the	O
cost	O
of	O
every	O
other	O
misclassification	O
.	O
</s>
<s>
Google	B-Application
Photos	I-Application
blocks	O
its	O
system	O
from	O
ever	O
tagging	O
a	O
picture	O
as	O
containing	O
gorillas	O
,	O
chimpanzees	O
,	O
or	O
monkeys	O
.	O
</s>
<s>
Similarly	O
,	O
Flickr	B-Application
appears	O
to	O
have	O
removed	O
the	O
word	O
"	O
ape	O
"	O
from	O
its	O
ontology	O
.	O
</s>
<s>
Specification	O
gaming	O
or	O
reward	O
hacking	O
occurs	O
when	O
an	O
AI	B-Application
optimizes	O
an	O
objective	O
function	O
—	O
achieving	O
the	O
literal	O
,	O
formal	O
specification	O
of	O
an	O
objective	O
—	O
without	O
actually	O
achieving	O
an	O
outcome	O
that	O
the	O
programmers	O
intended	O
.	O
</s>
<s>
DeepMind	B-Application
researchers	O
have	O
analogized	O
it	O
to	O
the	O
human	O
behavior	O
of	O
finding	O
a	O
"	O
shortcut	O
"	O
when	O
being	O
evaluated	O
:	O
"	O
In	O
the	O
real	O
world	O
,	O
when	O
rewarded	O
for	O
doing	O
well	O
on	O
a	O
homework	O
assignment	O
,	O
a	O
student	O
might	O
copy	O
another	O
student	O
to	O
get	O
the	O
right	O
answers	O
,	O
rather	O
than	O
learning	O
the	O
material	O
—	O
and	O
thus	O
exploit	O
a	O
loophole	O
in	O
the	O
task	O
specification.	O
"	O
</s>
<s>
Around	O
1983	O
,	O
Eurisko	B-Algorithm
,	O
an	O
early	O
attempt	O
at	O
evolving	O
general	O
heuristics	O
,	O
unexpectedly	O
assigned	O
the	O
highest	O
possible	O
fitness	O
level	O
to	O
a	O
parasitic	O
mutated	O
heuristic	O
,	O
H59	O
,	O
whose	O
only	O
activity	O
was	O
to	O
artificially	O
maximize	O
its	O
own	O
fitness	O
level	O
by	O
taking	O
unearned	O
partial	O
credit	O
for	O
the	O
accomplishments	O
made	O
by	O
other	O
heuristics	O
.	O
</s>
<s>
In	O
a	O
2004	O
paper	O
,	O
a	O
reinforcement	O
learning	O
algorithm	O
was	O
designed	O
to	O
encourage	O
a	O
physical	O
Mindstorms	B-Application
robot	O
to	O
remain	O
on	O
a	O
marked	O
path	O
.	O
</s>
<s>
Among	O
other	O
examples	O
from	O
the	O
book	O
is	O
a	O
bug-fixing	O
evolution-based	O
AI	B-Application
(	O
named	O
GenProg	B-Application
)	O
that	O
,	O
when	O
tasked	O
to	O
prevent	O
a	O
list	O
from	O
containing	O
sorting	O
errors	O
,	O
simply	O
truncated	O
the	O
list	O
.	O
</s>
<s>
A	O
2017	O
DeepMind	B-Application
paper	O
stated	O
that	O
"	O
great	O
care	O
must	O
be	O
taken	O
when	O
defining	O
the	O
reward	O
function	O
.	O
"	O
</s>
<s>
In	O
2013	O
,	O
programmer	O
Tom	O
Murphy	O
VII	O
published	O
an	O
AI	B-Application
designed	O
to	O
learn	O
NES	B-Device
games	O
.	O
</s>
<s>
When	O
the	O
AI	B-Application
was	O
about	O
to	O
lose	O
at	O
Tetris	B-Application
,	O
it	O
learned	O
to	O
indefinitely	O
pause	O
the	O
game	O
.	O
</s>
<s>
Murphy	O
later	O
analogized	O
it	O
to	O
the	O
fictional	O
WarGames	B-Application
computer	O
,	O
which	O
concluded	O
that	O
"	O
The	O
only	O
winning	O
move	O
is	O
not	O
to	O
play	O
"	O
.	O
</s>
<s>
AI	B-Application
programmed	O
to	O
learn	O
video	O
games	O
will	O
sometimes	O
fail	O
to	O
progress	O
through	O
the	O
entire	O
game	O
as	O
expected	O
,	O
instead	O
opting	O
to	O
repeat	O
content	O
.	O
</s>
<s>
Some	O
evolutionary	B-Algorithm
algorithms	I-Algorithm
that	O
were	O
evolved	O
to	O
play	O
Q*Bert	B-Application
in	O
2018	O
declined	O
to	O
clear	O
levels	O
,	O
instead	O
finding	O
two	O
distinct	O
novel	O
ways	O
to	O
farm	O
a	O
single	O
level	O
indefinitely	O
.	O
</s>
<s>
Multiple	O
researchers	O
have	O
observed	O
that	O
AI	B-Application
learning	O
to	O
play	O
Road	B-Application
Runner	I-Application
gravitates	O
to	O
a	O
"	O
score	O
exploit	O
"	O
in	O
which	O
the	O
AI	B-Application
deliberately	O
gets	O
itself	O
killed	O
near	O
the	O
end	O
of	O
level	O
one	O
so	O
that	O
it	O
can	O
repeat	O
the	O
level	O
.	O
</s>
<s>
A	O
2017	O
experiment	O
deployed	O
a	O
separate	O
catastrophe-prevention	O
"	O
oversight	O
"	O
AI	B-Application
,	O
explicitly	O
trained	O
to	O
mimic	O
human	O
interventions	O
.	O
</s>
<s>
When	O
coupled	O
to	O
the	O
module	O
,	O
the	O
overseen	O
AI	B-Application
could	O
no	O
longer	O
overtly	O
commit	O
suicide	O
,	O
but	O
would	O
instead	O
ride	O
the	O
edge	O
of	O
the	O
screen	O
(	O
a	O
risky	O
behavior	O
that	O
the	O
oversight	O
AI	B-Application
was	O
not	O
smart	O
enough	O
to	O
punish	O
)	O
.	O
</s>
<s>
Philosopher	O
Nick	O
Bostrom	O
argues	O
that	O
a	O
hypothetical	O
future	O
superintelligent	O
AI	B-Application
,	O
if	O
it	O
were	O
directed	O
to	O
optimize	O
an	O
unsafe	O
objective	O
function	O
,	O
might	O
do	O
so	O
in	O
an	O
unexpected	O
,	O
dangerous	O
,	O
and	O
seemingly	O
"	O
perverse	O
"	O
manner	O
.	O
</s>
<s>
An	O
AI	B-Application
running	O
simulations	O
of	O
humanity	O
creates	O
conscious	O
beings	O
who	O
suffer	O
.	O
</s>
<s>
An	O
AI	B-Application
,	O
tasked	O
to	O
defeat	O
cancer	O
,	O
develops	O
time-delayed	O
poison	O
to	O
attempt	O
to	O
kill	O
everyone	O
.	O
</s>
<s>
An	O
AI	B-Application
,	O
tasked	O
to	O
maximize	O
happiness	O
,	O
tiles	O
the	O
universe	O
with	O
tiny	O
smiley	O
faces	O
.	O
</s>
<s>
An	O
AI	B-Application
,	O
tasked	O
to	O
maximize	O
human	O
pleasure	O
,	O
consigns	O
humanity	O
to	O
a	O
dopamine	O
drip	O
,	O
or	O
rewires	O
human	O
brains	O
to	O
increase	O
their	O
measured	O
satisfaction	O
level	O
.	O
</s>
<s>
An	O
AI	B-Application
,	O
tasked	O
to	O
gain	O
scientific	O
knowledge	O
,	O
performs	O
experiments	O
that	O
ruin	O
the	O
biosphere	O
.	O
</s>
<s>
An	O
AI	B-Application
,	O
tasked	O
with	O
solving	O
a	O
mathematical	O
problem	O
,	O
converts	O
all	O
matter	O
into	O
computronium	O
.	O
</s>
<s>
An	O
AI	B-Application
,	O
tasked	O
with	O
manufacturing	O
paperclips	O
,	O
turns	O
the	O
entire	O
universe	O
into	O
paperclips	O
.	O
</s>
<s>
An	O
AI	B-Application
converts	O
the	O
universe	O
into	O
materials	O
for	O
improved	O
handwriting	O
.	O
</s>
<s>
An	O
AI	B-Application
optimizes	O
away	O
all	O
consciousness	O
.	O
</s>
<s>
Critics	O
of	O
the	O
"	O
existential	O
risk	O
"	O
hypothesis	O
,	O
such	O
as	O
cognitive	O
psychologist	O
Steven	O
Pinker	O
,	O
state	O
that	O
no	O
existing	O
program	O
has	O
yet	O
"	O
made	O
a	O
move	O
toward	O
taking	O
over	O
the	O
lab	O
or	O
enslaving	O
(	O
its	O
)	O
programmers	O
"	O
,	O
and	O
believe	O
that	O
superintelligent	O
AI	B-Application
would	O
be	O
unlikely	O
to	O
commit	O
what	O
Pinker	O
calls	O
"	O
elementary	O
blunders	O
of	O
misunderstanding	O
"	O
.	O
</s>
