<s>
In	O
statistics	O
and	O
machine	O
learning	O
,	O
the	O
bias	B-General_Concept
–	I-General_Concept
variance	I-General_Concept
tradeoff	I-General_Concept
is	O
the	O
property	O
of	O
a	O
model	O
that	O
the	O
variance	O
of	O
the	O
parameter	O
estimated	O
across	O
samples	O
can	O
be	O
reduced	O
by	O
increasing	O
the	O
bias	O
in	O
the	O
estimated	O
parameters	O
.	O
</s>
<s>
The	O
bias	B-General_Concept
–	I-General_Concept
variance	I-General_Concept
dilemma	I-General_Concept
or	O
bias	O
–	O
variance	O
problem	O
is	O
the	O
conflict	O
in	O
trying	O
to	O
simultaneously	O
minimize	O
these	O
two	O
sources	O
of	O
error	O
that	O
prevent	O
supervised	B-General_Concept
learning	I-General_Concept
algorithms	O
from	O
generalizing	O
beyond	O
their	O
training	O
set	O
:	O
</s>
<s>
High	O
bias	O
can	O
cause	O
an	O
algorithm	O
to	O
miss	O
the	O
relevant	O
relations	O
between	O
features	O
and	O
target	O
outputs	O
(	O
underfitting	B-Error_Name
)	O
.	O
</s>
<s>
High	O
variance	O
may	O
result	O
from	O
an	O
algorithm	O
modeling	O
the	O
random	O
noise	B-Algorithm
in	O
the	O
training	O
data	O
(	O
overfitting	B-Error_Name
)	O
.	O
</s>
<s>
The	O
bias	B-General_Concept
–	I-General_Concept
variance	I-General_Concept
decomposition	I-General_Concept
is	O
a	O
way	O
of	O
analyzing	O
a	O
learning	O
algorithm	O
's	O
expected	O
generalization	B-Algorithm
error	I-Algorithm
with	O
respect	O
to	O
a	O
particular	O
problem	O
as	O
a	O
sum	O
of	O
three	O
terms	O
,	O
the	O
bias	O
,	O
variance	O
,	O
and	O
a	O
quantity	O
called	O
the	O
irreducible	O
error	O
,	O
resulting	O
from	O
noise	B-Algorithm
in	O
the	O
problem	O
itself	O
.	O
</s>
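As a compact reference, the three-term decomposition described above can be written as follows (a standard formulation for squared-error loss; here $\hat f$ denotes the learned model, $f$ the true function, and $\sigma^2$ the noise variance — symbols assumed, not taken from this text):

```latex
\mathbb{E}\big[(y - \hat{f}(x))^2\big]
= \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2}
+ \underbrace{\mathbb{E}\big[(\hat{f}(x) - \mathbb{E}[\hat{f}(x)])^2\big]}_{\text{variance}}
+ \underbrace{\sigma^2}_{\text{irreducible error}}
```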
<s>
The	O
bias	B-General_Concept
–	I-General_Concept
variance	I-General_Concept
tradeoff	I-General_Concept
is	O
a	O
central	O
problem	O
in	O
supervised	B-General_Concept
learning	I-General_Concept
.	O
</s>
<s>
Ideally	O
,	O
one	O
wants	O
to	O
choose	O
a	O
model	O
that	O
both	O
accurately	O
captures	O
the	O
regularities	O
in	O
its	O
training	O
data	O
,	O
but	O
also	O
generalizes	B-Algorithm
well	O
to	O
unseen	O
data	O
.	O
</s>
<s>
High-variance	O
learning	O
methods	O
may	O
be	O
able	O
to	O
represent	O
their	O
training	O
set	O
well	O
but	O
are	O
at	O
risk	O
of	O
overfitting	B-Error_Name
to	O
noisy	O
or	O
unrepresentative	O
training	O
data	O
.	O
</s>
<s>
have	O
low	O
bias	O
)	O
under	O
the	O
aforementioned	O
selection	O
conditions	O
,	O
but	O
may	O
result	O
in	O
underfitting	B-Error_Name
.	O
</s>
<s>
The	O
limiting	O
case	O
where	O
only	O
a	O
finite	O
number	O
of	O
data	O
points	O
are	O
selected	O
over	O
a	O
broad	O
sample	O
space	O
may	O
result	O
in	O
improved	O
precision	O
and	O
lower	O
variance	O
overall	O
,	O
but	O
may	O
also	O
result	O
in	O
an	O
overreliance	O
on	O
the	O
training	O
data	O
(	O
overfitting	B-Error_Name
)	O
.	O
</s>
<s>
To	O
mitigate	O
how	O
much	O
information	O
is	O
used	O
from	O
neighboring	O
observations	O
,	O
a	O
model	O
can	O
be	O
smoothed	B-Application
via	O
explicit	O
regularization	O
,	O
such	O
as	O
shrinkage	O
.	O
</s>
<s>
We	O
assume	O
that	O
there	O
is	O
a	O
function	O
f(x)	O
such	O
that	O
y	O
=	O
f(x)	O
+	O
ε	O
,	O
where	O
the	O
noise	B-Algorithm
,	O
ε	O
,	O
has	O
zero	O
mean	O
and	O
variance	O
σ²	O
.	O
</s>
<s>
We	O
make	O
"	O
as	O
well	O
as	O
possible	O
"	O
precise	O
by	O
measuring	O
the	O
mean	B-Algorithm
squared	I-Algorithm
error	I-Algorithm
between	O
y	O
and	O
f̂(x)	O
:	O
we	O
want	O
(y−f̂(x))²	O
to	O
be	O
minimal	O
,	O
both	O
for	O
x₁,…,xₙ	O
and	O
for	O
points	O
outside	O
of	O
our	O
sample	O
.	O
</s>
<s>
Of	O
course	O
,	O
we	O
cannot	O
hope	O
to	O
do	O
so	O
perfectly	O
,	O
since	O
the	O
yᵢ	O
contain	O
noise	B-Algorithm
ε	O
;	O
this	O
means	O
we	O
must	O
be	O
prepared	O
to	O
accept	O
an	O
irreducible	O
error	O
in	O
any	O
function	O
we	O
come	O
up	O
with	O
.	O
</s>
<s>
Finding	O
an	O
f̂	O
that	O
generalizes	B-Algorithm
to	O
points	O
outside	O
of	O
the	O
training	O
set	O
can	O
be	O
done	O
with	O
any	O
of	O
the	O
countless	O
algorithms	O
used	O
for	O
supervised	B-General_Concept
learning	I-General_Concept
.	O
</s>
<s>
The	O
expectation	O
ranges	O
over	O
different	O
choices	O
of	O
the	O
training	O
set	O
,	O
all	O
sampled	O
from	O
the	O
same	O
joint	O
distribution	O
P(x,y)	O
,	O
which	O
can	O
for	O
example	O
be	O
done	O
via	O
bootstrapping	B-Algorithm
.	O
</s>
<s>
E.g.	O
,	O
when	O
approximating	O
a	O
non-linear	O
function	O
using	O
a	O
learning	O
method	O
for	O
linear	B-Algorithm
models	I-Algorithm
,	O
there	O
will	O
be	O
error	O
in	O
the	O
estimates	O
due	O
to	O
this	O
assumption	O
;	O
</s>
<s>
The	O
derivation	O
of	O
the	O
bias	B-General_Concept
–	I-General_Concept
variance	I-General_Concept
decomposition	I-General_Concept
for	O
squared	O
error	O
proceeds	O
as	O
follows	O
.	O
</s>
<s>
Let	O
us	O
write	O
the	O
mean-squared	B-Algorithm
error	I-Algorithm
of	O
our	O
model	O
:	O
</s>
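A sketch of the derivation this sentence introduces (standard for squared error, using $y = f(x) + \varepsilon$ with $\mathbb{E}[\varepsilon] = 0$ and $\mathrm{Var}[\varepsilon] = \sigma^2$, as assumed above):

```latex
\begin{aligned}
\mathbb{E}\big[(y - \hat{f})^2\big]
&= \mathbb{E}\big[(f + \varepsilon - \hat{f})^2\big] \\
&= \mathbb{E}\big[(f - \hat{f})^2\big] + \sigma^2
   && (\varepsilon \text{ independent, zero mean}) \\
&= \big(f - \mathbb{E}[\hat{f}]\big)^2 + \mathrm{Var}\big[\hat{f}\big] + \sigma^2
   && (\text{add and subtract } \mathbb{E}[\hat{f}])
\end{aligned}
```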
<s>
Dimensionality	B-Algorithm
reduction	I-Algorithm
and	O
feature	B-General_Concept
selection	I-General_Concept
can	O
decrease	O
variance	O
by	O
simplifying	O
models	O
.	O
</s>
<s>
Linear	B-Algorithm
and	O
generalized	O
linear	B-Algorithm
models	I-Algorithm
can	O
be	O
regularized	O
to	O
decrease	O
their	O
variance	O
at	O
the	O
cost	O
of	O
increasing	O
their	O
bias	O
.	O
</s>
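This regularization effect can be sketched with closed-form ridge regression (a minimal illustration; the data-generating coefficients, noise level, and λ values below are assumptions for the demo, not from the source):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ridge(X, y, lam):
    """Closed-form ridge solution: beta = (X'X + lam*I)^(-1) X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# True linear relationship with noise; refit on many resampled training
# sets and compare the spread (variance) and shift (bias) of the estimates.
beta_true = np.array([2.0, -1.0])

def sample_fit(lam, n=30, trials=500):
    estimates = []
    for _ in range(trials):
        X = rng.normal(size=(n, 2))
        y = X @ beta_true + rng.normal(scale=1.0, size=n)
        estimates.append(fit_ridge(X, y, lam))
    est = np.array(estimates)
    variance = est.var(axis=0).sum()                    # spread across training sets
    bias = np.abs(est.mean(axis=0) - beta_true).sum()   # systematic shrinkage
    return bias, variance

bias_ols, var_ols = sample_fit(lam=0.0)       # ordinary least squares
bias_ridge, var_ridge = sample_fit(lam=50.0)  # heavily regularized

print(var_ridge < var_ols)    # regularization lowers variance...
print(bias_ridge > bias_ols)  # ...at the cost of higher bias
```

With a heavy penalty the estimates are pulled toward zero: their spread across training sets shrinks while their average moves away from the true coefficients, which is exactly the trade the sentence describes.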
<s>
In	O
artificial	B-Architecture
neural	I-Architecture
networks	I-Architecture
,	O
the	O
variance	O
increases	O
and	O
the	O
bias	O
decreases	O
as	O
the	O
number	O
of	O
hidden	O
units	O
increases	O
,	O
although	O
this	O
classical	O
assumption	O
has	O
been	O
the	O
subject	O
of	O
recent	O
debate	O
.	O
</s>
<s>
In	O
k-nearest	B-General_Concept
neighbor	I-General_Concept
models	O
,	O
a	O
high	O
value	O
of	O
k	O
leads	O
to	O
high	O
bias	O
and	O
low	O
variance	O
(	O
see	O
below	O
)	O
.	O
</s>
<s>
In	O
instance-based	B-General_Concept
learning	I-General_Concept
,	O
regularization	O
can	O
be	O
achieved	O
by	O
varying	O
the	O
mixture	O
of	O
prototypes	B-Application
and	O
exemplars	O
.	O
</s>
<s>
In	O
decision	B-Algorithm
trees	I-Algorithm
,	O
the	O
depth	O
of	O
the	O
tree	O
determines	O
the	O
variance	O
.	O
</s>
<s>
Decision	B-Algorithm
trees	I-Algorithm
are	O
commonly	O
pruned	O
to	O
control	O
variance	O
.	O
</s>
<s>
One	O
way	O
of	O
resolving	O
the	O
trade-off	O
is	O
to	O
use	O
mixture	O
models	O
and	O
ensemble	B-Algorithm
learning	I-Algorithm
.	O
</s>
<s>
For	O
example	O
,	O
boosting	B-Algorithm
combines	O
many	O
"	O
weak	O
"	O
(	O
high	O
bias	O
)	O
models	O
in	O
an	O
ensemble	O
that	O
has	O
lower	O
bias	O
than	O
the	O
individual	O
models	O
,	O
while	O
bagging	B-Algorithm
combines	O
"	O
strong	O
"	O
learners	O
in	O
a	O
way	O
that	O
reduces	O
their	O
variance	O
.	O
</s>
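The variance-reducing side of this (bagging) can be sketched with a toy simulation (assumptions: the base learner is a bootstrap-sample mean, and the ensemble size of 25 is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

def bagged_mean(data, n_boot, rng):
    """Bagging: average the base estimator over n_boot bootstrap resamples."""
    n = len(data)
    ests = [data[rng.integers(0, n, size=n)].mean() for _ in range(n_boot)]
    return float(np.mean(ests))

# Variance across independent training sets: a single bootstrap-sample mean
# (the base learner) vs. the bagged average of 25 such estimates.
single, bagged = [], []
for _ in range(300):
    data = rng.exponential(scale=1.0, size=10)            # small, noisy training set
    single.append(bagged_mean(data, n_boot=1, rng=rng))   # one base learner
    bagged.append(bagged_mean(data, n_boot=25, rng=rng))  # bagged ensemble

print(np.var(bagged) < np.var(single))  # averaging the resampled learners reduces variance
```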
<s>
Model	O
validation	O
methods	O
such	O
as	O
cross-validation	B-Application
can	O
be	O
used	O
to	O
tune	O
models	O
so	O
as	O
to	O
optimize	O
the	O
trade-off	O
.	O
</s>
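A minimal sketch of how cross-validation can tune such a trade-off, here selecting a ridge penalty λ (the λ grid, fold count, and data generator are assumptions for the demo):

```python
import numpy as np

rng = np.random.default_rng(3)

def ridge_fit(X, y, lam):
    """Closed-form ridge solution."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def cv_error(X, y, lam, n_folds=5):
    """Mean validation MSE over n_folds held-out splits."""
    n = len(y)
    idx = rng.permutation(n)
    folds = np.array_split(idx, n_folds)
    errs = []
    for fold in folds:
        train = np.setdiff1d(idx, fold)
        beta = ridge_fit(X[train], y[train], lam)
        errs.append(np.mean((y[fold] - X[fold] @ beta) ** 2))
    return float(np.mean(errs))

# Noisy linear data with mostly irrelevant features; pick the lambda
# with the lowest cross-validated error.
X = rng.normal(size=(60, 8))
beta_true = np.zeros(8)
beta_true[:2] = [3.0, -2.0]
y = X @ beta_true + rng.normal(scale=2.0, size=60)

grid = [0.0, 0.1, 1.0, 10.0, 100.0]
scores = {lam: cv_error(X, y, lam) for lam in grid}
best = min(scores, key=scores.get)
print(best)  # the lambda balancing bias against variance on this data
```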
<s>
In	O
the	O
case	O
of	O
k-nearest	O
neighbors	O
regression	O
,	O
when	O
the	O
expectation	O
is	O
taken	O
over	O
the	O
possible	O
labeling	O
of	O
a	O
fixed	O
training	O
set	O
,	O
a	O
closed-form	O
expression	O
exists	O
that	O
relates	O
the	O
bias	B-General_Concept
–	I-General_Concept
variance	I-General_Concept
decomposition	I-General_Concept
to	O
the	O
parameter	O
k	O
:	O
</s>
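The closed-form expression referred to here can be written as follows (as standardly given for k-NN regression under squared error; $N_i(x)$ denotes the $i$-th nearest neighbor of $x$ in the training set, and $\sigma^2$ the noise variance):

```latex
\mathbb{E}\big[(y - \hat{f}(x))^2 \mid x\big]
= \Big(f(x) - \frac{1}{k}\sum_{i=1}^{k} f\big(N_i(x)\big)\Big)^2
+ \frac{\sigma^2}{k}
+ \sigma^2
```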
<s>
where	O
N₁(x),…,N_k(x)	O
are	O
the	O
k	O
nearest	B-General_Concept
neighbors	I-General_Concept
of	O
x	O
in	O
the	O
training	O
set	O
.	O
</s>
<s>
The	O
bias	B-General_Concept
–	I-General_Concept
variance	I-General_Concept
decomposition	I-General_Concept
forms	O
the	O
conceptual	O
basis	O
for	O
regression	O
regularization	O
methods	O
such	O
as	O
Lasso	B-Algorithm
and	O
ridge	O
regression	O
.	O
</s>
<s>
Regularization	O
methods	O
introduce	O
bias	O
into	O
the	O
regression	O
solution	O
that	O
can	O
reduce	O
variance	O
considerably	O
relative	O
to	O
the	O
ordinary	B-General_Concept
least	I-General_Concept
squares	I-General_Concept
(	O
OLS	O
)	O
solution	O
.	O
</s>
<s>
The	O
bias	B-General_Concept
–	I-General_Concept
variance	I-General_Concept
decomposition	I-General_Concept
was	O
originally	O
formulated	O
for	O
least-squares	O
regression	O
.	O
</s>
<s>
For	O
the	O
case	O
of	O
classification	B-General_Concept
under	O
the	O
0-1	O
loss	O
(	O
misclassification	O
rate	O
)	O
,	O
it	O
is	O
possible	O
to	O
find	O
a	O
similar	O
decomposition	O
.	O
</s>
<s>
Alternatively	O
,	O
if	O
the	O
classification	B-General_Concept
problem	O
can	O
be	O
phrased	O
as	O
probabilistic	B-General_Concept
classification	I-General_Concept
,	O
then	O
the	O
expected	O
squared	O
error	O
of	O
the	O
predicted	O
probabilities	O
with	O
respect	O
to	O
the	O
true	O
probabilities	O
can	O
be	O
decomposed	O
as	O
before	O
.	O
</s>
<s>
Even	O
though	O
the	O
bias	B-General_Concept
–	I-General_Concept
variance	I-General_Concept
decomposition	I-General_Concept
does	O
not	O
directly	O
apply	O
in	O
reinforcement	O
learning	O
,	O
a	O
similar	O
tradeoff	O
can	O
also	O
characterize	O
generalization	B-Algorithm
.	O
</s>
<s>
When	O
an	O
agent	O
has	O
limited	O
information	O
on	O
its	O
environment	O
,	O
the	O
suboptimality	O
of	O
an	O
RL	O
algorithm	O
can	O
be	O
decomposed	O
into	O
the	O
sum	O
of	O
two	O
terms	O
:	O
a	O
term	O
related	O
to	O
an	O
asymptotic	O
bias	O
and	O
a	O
term	O
due	O
to	O
overfitting	B-Error_Name
.	O
</s>
<s>
The	O
asymptotic	O
bias	O
is	O
directly	O
related	O
to	O
the	O
learning	O
algorithm	O
(	O
independently	O
of	O
the	O
quantity	O
of	O
data	O
)	O
while	O
the	O
overfitting	B-Error_Name
term	O
comes	O
from	O
the	O
fact	O
that	O
the	O
amount	O
of	O
data	O
is	O
limited	O
.	O
</s>
<s>
While	O
widely	O
discussed	O
in	O
the	O
context	O
of	O
machine	O
learning	O
,	O
the	O
bias	B-General_Concept
–	I-General_Concept
variance	I-General_Concept
dilemma	I-General_Concept
has	O
been	O
examined	O
in	O
the	O
context	O
of	O
human	O
cognition	O
,	O
most	O
notably	O
by	O
Gerd	O
Gigerenzer	O
and	O
co-workers	O
in	O
the	O
context	O
of	O
learned	O
heuristics	O
.	O
</s>
<s>
Geman	O
et	O
al.	O
argue	O
that	O
the	O
bias	B-General_Concept
–	I-General_Concept
variance	I-General_Concept
dilemma	I-General_Concept
implies	O
that	O
abilities	O
such	O
as	O
generic	O
object	O
recognition	O
cannot	O
be	O
learned	O
from	O
scratch	O
,	O
but	O
require	O
a	O
certain	O
degree	O
of	O
"	O
hard	O
wiring	O
"	O
that	O
is	O
later	O
tuned	O
by	O
experience	O
.	O
</s>
