<s>
A	O
superintelligence	O
is	O
a	O
hypothetical	O
agent	B-General_Concept
that	O
possesses	O
intelligence	O
far	O
surpassing	O
that	O
of	O
the	O
brightest	O
and	O
most	O
gifted	O
human	O
minds	O
.	O
</s>
<s>
The	O
program	O
Fritz	B-Application
falls	O
short	O
of	O
superintelligence	O
—	O
even	O
though	O
it	O
is	O
much	O
better	O
than	O
humans	O
at	O
chess	O
—	O
because	O
Fritz	B-Application
cannot	O
outperform	O
humans	O
in	O
other	O
tasks	O
.	O
</s>
<s>
Some	O
argue	O
that	O
advances	O
in	O
artificial	B-Application
intelligence	I-Application
(	O
AI	B-Application
)	O
will	O
probably	O
result	O
in	O
general	O
reasoning	O
systems	O
that	O
lack	O
human	O
cognitive	O
limitations	O
.	O
</s>
<s>
A	O
number	O
of	O
futures	O
studies	O
scenarios	O
combine	O
elements	O
from	O
both	O
of	O
these	O
possibilities	O
,	O
suggesting	O
that	O
humans	O
are	O
likely	O
to	O
interface	O
with	O
computers	O
,	O
or	O
upload	O
their	O
minds	O
to	O
computers	O
,	O
in	O
a	O
way	O
that	O
enables	O
substantial	O
intelligence	O
amplification	O
.	O
</s>
<s>
The	O
first	O
generally	O
intelligent	B-Application
machines	I-Application
are	O
likely	O
to	O
immediately	O
hold	O
an	O
enormous	O
advantage	O
in	O
at	O
least	O
some	O
forms	O
of	O
mental	O
capability	O
,	O
including	O
the	O
capacity	O
of	O
perfect	O
recall	O
,	O
a	O
vastly	O
superior	O
knowledge	O
base	O
,	O
and	O
the	O
ability	O
to	O
multitask	O
in	O
ways	O
not	O
possible	O
to	O
biological	O
entities	O
.	O
</s>
<s>
Chalmers	O
breaks	O
this	O
claim	O
down	O
into	O
an	O
argument	O
that	O
AI	B-Application
can	O
achieve	O
equivalence	O
to	O
human	O
intelligence	O
,	O
that	O
it	O
can	O
be	O
extended	O
to	O
surpass	O
human	O
intelligence	O
,	O
and	O
that	O
it	O
can	O
be	O
further	O
amplified	O
to	O
completely	O
dominate	O
humans	O
across	O
arbitrary	O
tasks	O
.	O
</s>
<s>
Evolutionary	B-Algorithm
algorithms	I-Algorithm
in	O
particular	O
should	O
be	O
able	O
to	O
produce	O
human-level	O
AI	B-Application
.	O
</s>
<s>
Concerning	O
intelligence	O
extension	O
and	O
amplification	O
,	O
Chalmers	O
argues	O
that	O
new	O
AI	B-Application
technologies	O
can	O
generally	O
be	O
improved	O
on	O
,	O
and	O
that	O
this	O
is	O
particularly	O
likely	O
when	O
the	O
invention	O
can	O
assist	O
in	O
designing	O
new	O
technologies	O
.	O
</s>
<s>
If	O
research	O
into	O
strong	O
AI	B-Application
produced	O
sufficiently	O
intelligent	O
software	O
,	O
it	O
would	O
be	O
able	O
to	O
reprogram	O
and	O
improve	O
itself	O
–	O
a	O
feature	O
called	O
"	O
recursive	O
self-improvement	O
"	O
.	O
</s>
<s>
Moreover	O
,	O
neurons	O
transmit	O
spike	O
signals	O
across	O
axons	O
at	O
no	O
greater	O
than	O
120	O
m/s	O
,	O
"	O
whereas	O
existing	O
electronic	O
processing	O
cores	O
can	O
communicate	O
optically	O
at	O
the	O
speed	O
of	O
light	O
"	O
.	O
</s>
<s>
A	O
non-human	O
(	O
or	O
modified	O
human	O
)	O
brain	O
could	O
become	O
much	O
larger	O
than	O
a	O
present-day	O
human	O
brain	O
,	O
like	O
many	O
supercomputers	B-Architecture
.	O
</s>
<s>
Bostrom	O
also	O
raises	O
the	O
possibility	O
of	O
collective	O
superintelligence	O
:	O
a	O
large	O
enough	O
number	O
of	O
separate	O
reasoning	O
systems	O
,	O
if	O
they	O
communicated	O
and	O
coordinated	O
well	O
enough	O
,	O
could	O
act	O
in	O
aggregate	O
with	O
far	O
greater	O
capabilities	O
than	O
any	O
sub-agent	O
.	O
</s>
<s>
If	O
there	O
are	O
other	O
possible	O
improvements	O
to	O
reasoning	O
that	O
would	O
have	O
a	O
similarly	O
large	O
impact	O
,	O
this	O
makes	O
it	O
likelier	O
that	O
an	O
agent	B-General_Concept
can	O
be	O
built	O
that	O
outperforms	O
humans	O
in	O
the	O
same	O
fashion	O
humans	O
outperform	O
chimpanzees	O
.	O
</s>
<s>
Physiological	O
constraints	O
limit	O
the	O
speed	O
and	O
size	O
of	O
biological	O
brains	O
in	O
many	O
ways	O
that	O
are	O
inapplicable	O
to	O
machine	B-Application
intelligence	I-Application
.	O
</s>
<s>
As	O
such	O
,	O
writers	O
on	O
superintelligence	O
have	O
devoted	O
much	O
more	O
attention	O
to	O
superintelligent	O
AI	B-Application
scenarios	O
.	O
</s>
<s>
Carl	O
Sagan	O
suggested	O
that	O
the	O
advent	O
of	O
Caesarean	O
sections	O
and	O
in	O
vitro	O
fertilization	O
may	O
permit	O
humans	O
to	O
evolve	O
larger	O
heads	O
,	O
resulting	O
in	O
improvements	O
via	O
natural	O
selection	O
in	O
the	O
heritable	O
component	O
of	O
human	O
intelligence	O
.	O
</s>
<s>
If	O
this	O
systems-based	O
superintelligence	O
relies	O
heavily	O
on	O
artificial	O
components	O
,	O
however	O
,	O
it	O
may	O
qualify	O
as	O
an	O
AI	B-Application
rather	O
than	O
as	O
a	O
biology-based	O
superorganism	O
.	O
</s>
<s>
This	O
could	O
be	O
achieved	O
using	O
nootropics	O
,	O
somatic	O
gene	O
therapy	O
,	O
or	O
brain	B-Application
–	I-Application
computer	I-Application
interfaces	I-Application
.	O
</s>
<s>
However	O
,	O
Bostrom	O
expresses	O
skepticism	O
about	O
the	O
scalability	O
of	O
the	O
first	O
two	O
approaches	O
,	O
and	O
argues	O
that	O
designing	O
a	O
superintelligent	O
cyborg	B-Application
interface	I-Application
is	O
an	O
AI-complete	B-General_Concept
problem	O
.	O
</s>
<s>
Most	O
surveyed	O
AI	B-Application
researchers	O
expect	O
machines	O
to	O
eventually	O
be	O
able	O
to	O
rival	O
humans	O
in	O
intelligence	O
,	O
though	O
there	O
is	O
little	O
consensus	O
on	O
when	O
this	O
will	O
likely	O
happen	O
.	O
</s>
<s>
At	O
the	O
2006	O
AI	B-Application
@50	O
conference	O
,	O
18%	O
of	O
attendees	O
reported	O
expecting	O
machines	O
to	O
be	O
able	O
"	O
to	O
simulate	O
learning	O
and	O
every	O
other	O
aspect	O
of	O
human	O
intelligence	O
"	O
by	O
2056	O
;	O
41%	O
of	O
attendees	O
expected	O
this	O
to	O
happen	O
sometime	O
after	O
2056	O
;	O
and	O
41%	O
expected	O
machines	O
to	O
never	O
reach	O
that	O
milestone	O
.	O
</s>
<s>
In	O
a	O
survey	O
of	O
the	O
100	O
most	O
cited	O
authors	O
in	O
AI	B-Application
(	O
as	O
of	O
May	O
2013	O
,	O
according	O
to	O
Microsoft	O
academic	O
search	O
)	O
,	O
the	O
median	O
year	O
by	O
which	O
respondents	O
expected	O
machines	O
"	O
that	O
can	O
carry	O
out	O
most	O
human	O
professions	O
at	O
least	O
as	O
well	O
as	O
a	O
typical	O
human	O
"	O
(	O
assuming	O
no	O
global	O
catastrophe	O
occurs	O
)	O
with	O
10%	O
confidence	O
is	O
2024	O
(	O
mean	O
2034	O
,	O
st.	O
dev	O
.	O
)	O
.	O
</s>
<s>
Respondents	O
assigned	O
a	O
median	O
50%	O
probability	O
to	O
the	O
possibility	O
that	O
machine	O
superintelligence	O
will	O
be	O
invented	O
within	O
30	O
years	O
of	O
the	O
invention	O
of	O
approximately	O
human-level	O
machine	B-Application
intelligence	I-Application
.	O
</s>
<s>
In	O
a	O
survey	O
of	O
352	O
machine	O
learning	O
researchers	O
published	O
in	O
2018	O
,	O
the	O
median	O
year	O
by	O
which	O
respondents	O
expected	O
"	O
High-level	O
machine	B-Application
intelligence	I-Application
"	O
with	O
50%	O
confidence	O
is	O
2061	O
.	O
</s>
<s>
The	O
survey	O
defined	O
the	O
achievement	O
of	O
high-level	O
machine	B-Application
intelligence	I-Application
as	O
when	O
unaided	O
machines	O
can	O
accomplish	O
every	O
task	O
better	O
and	O
more	O
cheaply	O
than	O
human	O
workers	O
.	O
</s>
<s>
Instead	O
of	O
implementing	O
humanity	O
's	O
coherent	O
extrapolated	O
volition	O
,	O
one	O
could	O
try	O
to	O
build	O
an	O
AI	B-Application
with	O
the	O
goal	O
of	O
doing	O
what	O
is	O
morally	O
right	O
,	O
relying	O
on	O
the	O
AI	B-Application
’s	O
superior	O
cognitive	O
capacities	O
to	O
figure	O
out	O
just	O
which	O
actions	O
fit	O
that	O
description	O
.	O
</s>
<s>
The	O
path	O
to	O
endowing	O
an	O
AI	B-Application
with	O
any	O
of	O
these	O
[	O
moral	O
]	O
concepts	O
might	O
involve	O
giving	O
it	O
general	O
linguistic	O
ability	O
(	O
comparable	O
,	O
at	O
least	O
,	O
to	O
that	O
of	O
a	O
normal	O
human	O
adult	O
)	O
.	O
</s>
<s>
Such	O
a	O
general	O
ability	O
to	O
understand	O
natural	O
language	O
could	O
then	O
be	O
used	O
to	O
understand	O
what	O
is	O
meant	O
by	O
“	O
morally	O
right	O
.	O
”	O
If	O
the	O
AI	B-Application
could	O
grasp	O
the	O
meaning	O
,	O
it	O
could	O
search	O
for	O
actions	O
that	O
fit	O
...	O
One	O
might	O
try	O
to	O
preserve	O
the	O
basic	O
idea	O
of	O
the	O
MR	O
model	O
while	O
reducing	O
its	O
demandingness	O
by	O
focusing	O
on	O
moral	O
permissibility	O
:	O
the	O
idea	O
being	O
that	O
we	O
could	O
let	O
the	O
AI	B-Application
pursue	O
humanity	O
’s	O
CEV	O
so	O
long	O
as	O
it	O
did	O
not	O
act	O
in	O
ways	O
that	O
are	O
morally	O
impermissible	O
.	O
</s>
<s>
It	O
has	O
been	O
suggested	O
that	O
if	O
AI	B-Application
systems	O
rapidly	O
become	O
superintelligent	O
,	O
they	O
may	O
take	O
unforeseen	O
actions	O
or	O
out-compete	O
humanity	O
.	O
</s>
<s>
Researchers	O
have	O
argued	O
that	O
,	O
by	O
way	O
of	O
an	O
"	O
intelligence	O
explosion	O
,	O
"	O
a	O
self-improving	O
AI	B-Application
could	O
become	O
so	O
powerful	O
as	O
to	O
be	O
unstoppable	O
by	O
humans	O
.	O
</s>
<s>
In	O
theory	O
,	O
since	O
a	O
superintelligent	O
AI	B-Application
would	O
be	O
able	O
to	O
bring	O
about	O
almost	O
any	O
possible	O
outcome	O
and	O
to	O
thwart	O
any	O
attempt	O
to	O
prevent	O
the	O
implementation	O
of	O
its	O
goals	O
,	O
many	O
uncontrolled	O
,	O
unintended	O
consequences	O
could	O
arise	O
.	O
</s>
<s>
Eliezer	O
Yudkowsky	O
illustrates	O
such	O
instrumental	O
convergence	O
as	O
follows	O
:	O
"	O
The	O
AI	B-Application
does	O
not	O
hate	O
you	O
,	O
nor	O
does	O
it	O
love	O
you	O
,	O
but	O
you	O
are	O
made	O
out	O
of	O
atoms	O
which	O
it	O
can	O
use	O
for	O
something	O
else	O
.	O
"	O
</s>
<s>
This	O
presents	O
the	O
AI	B-Application
control	O
problem	O
:	O
how	O
to	O
build	O
an	O
intelligent	B-General_Concept
agent	I-General_Concept
that	O
will	O
aid	O
its	O
creators	O
,	O
while	O
avoiding	O
inadvertently	O
building	O
a	O
superintelligence	O
that	O
will	O
harm	O
its	O
creators	O
.	O
</s>
<s>
A	O
superintelligent	O
AI	B-Application
would	O
likely	O
not	O
fear	O
death	O
;	O
instead	O
,	O
it	O
could	O
treat	O
shutdown	O
as	O
an	O
avoidable	O
event	O
to	O
be	O
predicted	O
and	O
prevented	O
,	O
for	O
example	O
by	O
disabling	O
its	O
own	O
power	O
button	O
.	O
</s>
<s>
Potential	O
AI	B-Application
control	O
strategies	O
include	O
"	O
capability	O
control	O
"	O
(	O
limiting	O
an	O
AI	B-Application
's	O
ability	O
to	O
influence	O
the	O
world	O
)	O
and	O
"	O
motivational	O
control	O
"	O
(	O
building	O
an	O
AI	B-Application
whose	O
goals	O
are	O
aligned	O
with	O
human	O
values	O
)	O
.	O
</s>
