<s>
Cross-validation	B-Application
,	O
sometimes	O
called	O
rotation	B-Application
estimation	I-Application
or	O
out-of-sample	B-Application
testing	I-Application
,	O
is	O
any	O
of	O
various	O
similar	O
model	O
validation	O
techniques	O
for	O
assessing	O
how	O
the	O
results	O
of	O
a	O
statistical	O
analysis	O
will	O
generalize	B-Algorithm
to	O
an	O
independent	O
data	O
set	O
.	O
</s>
<s>
Cross-validation	B-Application
is	O
a	O
resampling	B-General_Concept
method	O
that	O
uses	O
different	O
portions	O
of	O
the	O
data	O
to	O
test	O
and	O
train	O
a	O
model	O
on	O
different	O
iterations	O
.	O
</s>
<s>
It	O
is	O
mainly	O
used	O
in	O
settings	O
where	O
the	O
goal	O
is	O
prediction	O
,	O
and	O
one	O
wants	O
to	O
estimate	O
how	O
accurately	O
a	O
predictive	B-General_Concept
model	I-General_Concept
will	O
perform	O
in	O
practice	O
.	O
</s>
<s>
In	O
a	O
prediction	O
problem	O
,	O
a	O
model	O
is	O
usually	O
given	O
a	O
dataset	O
of	O
known	O
data	O
on	O
which	O
training	O
is	O
run	O
(	O
training	O
dataset	O
)	O
,	O
and	O
a	O
dataset	O
of	O
unknown	O
data	O
(	O
or	O
first	O
seen	O
data	O
)	O
against	O
which	O
the	O
model	O
is	O
tested	O
(	O
called	O
the	O
validation	B-General_Concept
dataset	I-General_Concept
or	O
testing	O
set	O
)	O
.	O
</s>
<s>
The	O
goal	O
of	O
cross-validation	B-Application
is	O
to	O
test	O
the	O
model	O
's	O
ability	O
to	O
predict	O
new	O
data	O
that	O
was	O
not	O
used	O
in	O
estimating	O
it	O
,	O
in	O
order	O
to	O
flag	O
problems	O
like	O
overfitting	B-Error_Name
or	O
selection	O
bias	O
and	O
to	O
give	O
an	O
insight	O
on	O
how	O
the	O
model	O
will	O
generalize	B-Algorithm
to	O
an	O
independent	O
dataset	O
(	O
i.e.	O
,	O
an	O
unknown	O
dataset	O
,	O
for	O
instance	O
from	O
a	O
real	O
problem	O
)	O
.	O
</s>
<s>
One	O
round	O
of	O
cross-validation	B-Application
involves	O
partitioning	O
a	O
sample	O
of	O
data	O
into	O
complementary	O
subsets	O
,	O
performing	O
the	O
analysis	O
on	O
one	O
subset	O
(	O
called	O
the	O
training	O
set	O
)	O
,	O
and	O
validating	O
the	O
analysis	O
on	O
the	O
other	O
subset	O
(	O
called	O
the	O
validation	B-General_Concept
set	I-General_Concept
or	O
testing	O
set	O
)	O
.	O
</s>
<s>
To	O
reduce	O
variability	O
,	O
in	O
most	O
methods	O
multiple	O
rounds	O
of	O
cross-validation	B-Application
are	O
performed	O
using	O
different	O
partitions	O
,	O
and	O
the	O
validation	O
results	O
are	O
combined	O
(	O
e.g.	O
averaged	O
)	O
.	O
</s>
<s>
In	O
summary	O
,	O
cross-validation	B-Application
combines	O
(	O
averages	O
)	O
measures	O
of	O
fitness	O
in	O
prediction	O
to	O
derive	O
a	O
more	O
accurate	O
estimate	O
of	O
model	O
prediction	O
performance	O
.	O
</s>
<s>
Cross-validation	B-Application
is	O
a	O
way	O
to	O
estimate	O
the	O
size	O
of	O
this	O
effect	O
.	O
</s>
<s>
In	O
linear	B-General_Concept
regression	I-General_Concept
,	O
there	O
exist	O
real	O
response	O
values	O
y1	O
,	O
...	O
,	O
yn	O
,	O
and	O
n	O
p-dimensional	O
vector	O
covariates	O
x1	O
,	O
...	O
,	O
xn	O
.	O
</s>
<s>
If	O
least	B-Algorithm
squares	I-Algorithm
is	O
used	O
to	O
fit	O
a	O
function	O
in	O
the	O
form	O
of	O
a	O
hyperplane	O
ŷ	O
=	O
a	O
+	O
βᵀx	O
to	O
the	O
data	O
(	O
xi	O
,	O
yi	O
)	O
1	O
≤	O
i	O
≤	O
n	O
,	O
then	O
the	O
fit	O
can	O
be	O
assessed	O
using	O
the	O
mean	B-Algorithm
squared	I-Algorithm
error	I-Algorithm
(	O
MSE	O
)	O
.	O
</s>
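A minimal sketch of the step just described, assuming toy data: fit ŷ = a + b·x by ordinary least squares for p = 1 and assess the fit with the in-sample mean squared error (the data values here are illustrative, not from the text).

```python
# Toy data (assumed for illustration).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Closed-form least-squares slope and intercept for the line yhat = a + b*x.
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x
# In-sample mean squared error (MSE) of the fitted line.
mse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / n
```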
<s>
If	O
the	O
model	O
is	O
correctly	O
specified	O
,	O
it	O
can	O
be	O
shown	O
under	O
mild	O
assumptions	O
that	O
the	O
expected	O
value	O
of	O
the	O
MSE	O
for	O
the	O
training	O
set	O
is	O
(	O
n-p-1	O
)	O
/	O
(	O
n+p+1	O
)	O
<	O
1	O
times	O
the	O
expected	O
value	O
of	O
the	O
MSE	O
for	O
the	O
validation	B-General_Concept
set	I-General_Concept
(	O
the	O
expected	O
value	O
is	O
taken	O
over	O
the	O
distribution	O
of	O
training	O
sets	O
)	O
.	O
</s>
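The factor quoted above can be sketched numerically; this assumes only the formula stated in the text, (n − p − 1) / (n + p + 1), which is below 1 whenever n > p + 1.

```python
# Factor by which the expected training-set MSE underestimates the
# expected validation-set MSE for a correctly specified linear model,
# per the formula in the text: (n - p - 1) / (n + p + 1).
def underestimation_factor(n, p):
    return (n - p - 1) / (n + p + 1)

factor = underestimation_factor(100, 5)   # n = 100 observations, p = 5
```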
<s>
This	O
biased	O
estimate	O
is	O
called	O
the	O
in-sample	O
estimate	O
of	O
the	O
fit	O
,	O
whereas	O
the	O
cross-validation	B-Application
estimate	O
is	O
an	O
out-of-sample	O
estimate	O
.	O
</s>
<s>
Since	O
in	O
linear	B-General_Concept
regression	I-General_Concept
it	O
is	O
possible	O
to	O
directly	O
compute	O
the	O
factor	O
(	O
n-p-1	O
)	O
/	O
(	O
n+p+1	O
)	O
by	O
which	O
the	O
training	O
MSE	O
underestimates	O
the	O
validation	O
MSE	O
under	O
the	O
assumption	O
that	O
the	O
model	O
specification	O
is	O
valid	O
,	O
cross-validation	B-Application
can	O
be	O
used	O
for	O
checking	O
whether	O
the	O
model	O
has	O
been	O
overfitted	B-Error_Name
,	O
in	O
which	O
case	O
the	O
MSE	O
in	O
the	O
validation	B-General_Concept
set	I-General_Concept
will	O
substantially	O
exceed	O
its	O
anticipated	O
value	O
.	O
</s>
<s>
(	O
Cross-validation	B-Application
in	O
the	O
context	O
of	O
linear	B-General_Concept
regression	I-General_Concept
is	O
also	O
useful	O
in	O
that	O
it	O
can	O
be	O
used	O
to	O
select	O
an	O
optimally	O
regularized	O
cost	O
function	O
.	O
)	O
</s>
<s>
Cross-validation	B-Application
is	O
,	O
thus	O
,	O
a	O
generally	O
applicable	O
way	O
to	O
predict	O
the	O
performance	O
of	O
a	O
model	O
on	O
unavailable	O
data	O
using	O
numerical	O
computation	O
in	O
place	O
of	O
theoretical	O
analysis	O
.	O
</s>
<s>
Two	O
types	O
of	O
cross-validation	B-Application
can	O
be	O
distinguished	O
:	O
exhaustive	O
and	O
non-exhaustive	O
cross-validation	B-Application
.	O
</s>
<s>
Exhaustive	O
cross-validation	B-Application
methods	O
are	O
cross-validation	B-Application
methods	O
which	O
learn	O
and	O
test	O
on	O
all	O
possible	O
ways	O
to	O
divide	O
the	O
original	O
sample	O
into	O
a	O
training	O
and	O
a	O
validation	B-General_Concept
set	I-General_Concept
.	O
</s>
<s>
Leave-p-out	O
cross-validation	B-Application
(	O
LpO	O
CV	O
)	O
involves	O
using	O
p	O
observations	O
as	O
the	O
validation	B-General_Concept
set	I-General_Concept
and	O
the	O
remaining	O
observations	O
as	O
the	O
training	O
set	O
.	O
</s>
<s>
This	O
is	O
repeated	O
on	O
all	O
ways	O
to	O
cut	O
the	O
original	O
sample	O
on	O
a	O
validation	B-General_Concept
set	I-General_Concept
of	O
p	O
observations	O
and	O
a	O
training	O
set	O
.	O
</s>
<s>
LpO	O
cross-validation	B-Application
requires	O
training	O
and	O
validating	O
the	O
model	O
C(n,p)	O
times	O
,	O
where	O
n	O
is	O
the	O
number	O
of	O
observations	O
in	O
the	O
original	O
sample	O
,	O
and	O
where	O
C(n,p)	O
is	O
the	O
binomial	O
coefficient	O
.	O
</s>
<s>
A	O
variant	O
of	O
LpO	O
cross-validation	B-Application
with	O
p	O
=	O
2	O
known	O
as	O
leave-pair-out	O
cross-validation	B-Application
has	O
been	O
recommended	O
as	O
a	O
nearly	O
unbiased	O
method	O
for	O
estimating	O
the	O
area	O
under	O
ROC	B-Algorithm
curve	I-Algorithm
of	O
binary	B-General_Concept
classifiers	I-General_Concept
.	O
</s>
<s>
Leave-one-out	O
cross-validation	B-Application
(	O
LOOCV	O
)	O
is	O
a	O
particular	O
case	O
of	O
leave-p-out	O
cross-validation	B-Application
with	O
p	O
=	O
1	O
.	O
</s>
<s>
The	O
process	O
looks	O
similar	O
to	O
jackknife	B-Algorithm
;	O
however	O
,	O
with	O
cross-validation	B-Application
one	O
computes	O
a	O
statistic	O
on	O
the	O
left-out	O
sample(s)	O
,	O
while	O
with	O
jackknifing	O
one	O
computes	O
a	O
statistic	O
from	O
the	O
kept	O
samples	O
only	O
.	O
</s>
<s>
LOO	O
cross-validation	B-Application
requires	O
less	O
computation	O
time	O
than	O
LpO	O
cross-validation	B-Application
because	O
there	O
are	O
only	O
n	O
passes	O
rather	O
than	O
C(n,p)	O
.	O
</s>
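The pass counts behind this comparison can be checked directly: leave-p-out must cover every way to choose the p-observation validation set, i.e. C(n, p) passes, while leave-one-out needs only C(n, 1) = n.

```python
from math import comb

# Number of train/validate passes for leave-p-out vs leave-one-out
# on a sample of n observations.
n, p = 20, 3
lpo_passes = comb(n, p)   # C(n, p) ways to choose the validation set
loo_passes = comb(n, 1)   # = n
```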
<s>
These	O
methods	O
are	O
approximations	O
of	O
leave-p-out	O
cross-validation	B-Application
.	O
</s>
<s>
In	O
k-fold	O
cross-validation	B-Application
,	O
the	O
original	O
sample	O
is	O
randomly	O
partitioned	O
into	O
k	O
equal	O
sized	O
subsamples	O
.	O
</s>
<s>
The	O
cross-validation	B-Application
process	O
is	O
then	O
repeated	O
k	O
times	O
,	O
with	O
each	O
of	O
the	O
k	O
subsamples	O
used	O
exactly	O
once	O
as	O
the	O
validation	O
data	O
.	O
</s>
<s>
10-fold	O
cross-validation	B-Application
is	O
commonly	O
used	O
,	O
but	O
in	O
general	O
k	O
remains	O
an	O
unfixed	O
parameter	O
.	O
</s>
<s>
For	O
example	O
,	O
setting	O
k	O
=	O
2	O
results	O
in	O
2-fold	O
cross-validation	B-Application
.	O
</s>
<s>
In	O
2-fold	O
cross-validation	B-Application
,	O
we	O
randomly	O
shuffle	O
the	O
dataset	O
into	O
two	O
sets	O
d0	O
and	O
d1	O
,	O
so	O
that	O
both	O
sets	O
are	O
of	O
equal	O
size	O
(	O
this	O
is	O
usually	O
implemented	O
by	O
shuffling	O
the	O
data	O
array	O
and	O
then	O
splitting	O
it	O
in	O
two	O
)	O
.	O
</s>
<s>
When	O
k	O
=	O
n	O
(	O
the	O
number	O
of	O
observations	O
)	O
,	O
k-fold	O
cross-validation	B-Application
is	O
equivalent	O
to	O
leave-one-out	O
cross-validation	B-Application
.	O
</s>
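The k-fold procedure above can be sketched with a deliberately tiny "model" that predicts the training mean (an assumption for illustration, not a method named in the text): shuffle, split into k folds, hold each fold out once, and average the per-fold errors.

```python
import random

def k_fold_mse(values, k, seed=0):
    """k-fold CV sketch: each fold serves once as the validation set;
    the 'model' is simply the mean of the training folds, and the
    per-fold MSEs are averaged into one estimate."""
    rng = random.Random(seed)
    data = values[:]
    rng.shuffle(data)                              # random partition
    folds = [data[i::k] for i in range(k)]
    scores = []
    for i in range(k):
        train = [v for j, fold in enumerate(folds) if j != i for v in fold]
        mean = sum(train) / len(train)             # "training" step
        test = folds[i]
        scores.append(sum((v - mean) ** 2 for v in test) / len(test))
    return sum(scores) / k                         # combined estimate

estimate = k_fold_mse([1.0, 2.0, 3.0, 4.0, 5.0,
                       6.0, 7.0, 8.0, 9.0, 10.0], k=5)
```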
<s>
In	O
stratified	O
k-fold	O
cross-validation	B-Application
,	O
the	O
partitions	O
are	O
selected	O
so	O
that	O
the	O
mean	O
response	O
value	O
is	O
approximately	O
equal	O
in	O
all	O
the	O
partitions	O
.	O
</s>
<s>
In	O
the	O
case	O
of	O
binary	B-General_Concept
classification	I-General_Concept
,	O
this	O
means	O
that	O
each	O
partition	O
contains	O
roughly	O
the	O
same	O
proportions	O
of	O
the	O
two	O
types	O
of	O
class	O
labels	O
.	O
</s>
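A minimal stratified-split sketch for binary labels (toy label counts assumed): dealing each class's indices out round-robin keeps the class proportions of every fold close to those of the full dataset.

```python
from collections import Counter

# Toy binary labels: 8 of class 0 and 4 of class 1.
labels = [0] * 8 + [1] * 4
folds = ([], [])
for cls in (0, 1):
    idxs = [i for i, y in enumerate(labels) if y == cls]
    for j, i in enumerate(idxs):
        folds[j % 2].append(i)       # deal each class out round-robin
# Each fold should hold roughly the same class proportions: 4 and 2.
counts = [Counter(labels[i] for i in fold) for fold in folds]
```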
<s>
In	O
repeated	O
cross-validation	B-Application
the	O
data	O
is	O
randomly	O
split	O
into	O
k	O
partitions	O
several	O
times	O
.	O
</s>
<s>
When	O
many	O
different	O
statistical	O
or	O
machine	O
learning	O
models	O
are	O
being	O
considered	O
,	O
greedy	O
k-fold	O
cross-validation	B-Application
can	O
be	O
used	O
to	O
quickly	O
identify	O
the	O
most	O
promising	O
candidate	O
models	O
.	O
</s>
<s>
In	O
typical	O
cross-validation	B-Application
,	O
results	O
of	O
multiple	O
runs	O
of	O
model-testing	O
are	O
averaged	O
together	O
;	O
in	O
contrast	O
,	O
the	O
holdout	O
method	O
,	O
in	O
isolation	O
,	O
involves	O
a	O
single	O
run	O
.	O
</s>
<s>
Similarly	O
,	O
indicators	O
of	O
the	O
specific	O
role	O
played	O
by	O
various	O
predictor	O
variables	O
(	O
e.g.	O
,	O
values	O
of	O
regression	B-General_Concept
coefficients	I-General_Concept
)	O
will	O
tend	O
to	O
be	O
unstable	O
.	O
</s>
<s>
While	O
the	O
holdout	O
method	O
can	O
be	O
framed	O
as	O
"	O
the	O
simplest	O
kind	O
of	O
cross-validation	B-Application
"	O
,	O
many	O
sources	O
instead	O
classify	O
holdout	O
as	O
a	O
type	O
of	O
simple	O
validation	O
,	O
rather	O
than	O
a	O
simple	O
or	O
degenerate	O
form	O
of	O
cross-validation	B-Application
.	O
</s>
<s>
This	O
method	O
,	O
also	O
known	O
as	O
Monte	B-Algorithm
Carlo	I-Algorithm
cross-validation	B-Application
,	O
creates	O
multiple	O
random	O
splits	O
of	O
the	O
dataset	O
into	O
training	O
and	O
validation	O
data	O
.	O
</s>
<s>
This	O
method	O
also	O
exhibits	O
Monte	B-Algorithm
Carlo	I-Algorithm
variation	O
,	O
meaning	O
that	O
the	O
results	O
will	O
vary	O
if	O
the	O
analysis	O
is	O
repeated	O
with	O
different	O
random	O
splits	O
.	O
</s>
<s>
As	O
the	O
number	O
of	O
random	O
splits	O
approaches	O
infinity	O
,	O
the	O
result	O
of	O
repeated	O
random	O
sub-sampling	O
validation	O
tends	O
towards	O
that	O
of	O
leave-p-out	O
cross-validation	B-Application
.	O
</s>
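Repeated random sub-sampling (Monte Carlo CV) can be sketched as follows, again with a training-mean "model" assumed purely for illustration; the spread of the per-split results is the Monte Carlo variation described above.

```python
import random
import statistics

def holdout_mse(values, frac=0.8, seed=0):
    """One random train/validation split; the 'model' is the training mean."""
    rng = random.Random(seed)
    data = values[:]
    rng.shuffle(data)
    cut = int(len(data) * frac)
    train, valid = data[:cut], data[cut:]
    mean = sum(train) / len(train)
    return sum((v - mean) ** 2 for v in valid) / len(valid)

data = [float(i % 7) for i in range(50)]
# Monte Carlo CV: many random splits, averaged; each split's result
# depends on the random seed, so repeats with different seeds vary.
estimates = [holdout_mse(data, seed=s) for s in range(100)]
mc_estimate = statistics.mean(estimates)
```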
<s>
A	O
method	O
that	O
applies	O
repeated	O
random	O
sub-sampling	O
is	O
RANSAC	B-Algorithm
.	O
</s>
<s>
When	O
cross-validation	B-Application
is	O
used	O
simultaneously	O
for	O
selection	O
of	O
the	O
best	O
set	O
of	O
hyperparameters	B-General_Concept
and	O
for	O
error	O
estimation	O
(	O
and	O
assessment	O
of	O
generalization	O
capacity	O
)	O
,	O
a	O
nested	O
cross-validation	B-Application
is	O
required	O
.	O
</s>
<s>
The	O
inner	O
training	O
sets	O
are	O
used	O
to	O
fit	O
model	O
parameters	O
,	O
while	O
the	O
outer	O
test	O
set	O
is	O
used	O
as	O
a	O
validation	B-General_Concept
set	I-General_Concept
to	O
provide	O
an	O
unbiased	O
evaluation	O
of	O
the	O
model	O
fit	O
.	O
</s>
<s>
Typically	O
,	O
this	O
is	O
repeated	O
for	O
many	O
different	O
hyperparameters	B-General_Concept
(	O
or	O
even	O
different	O
model	O
types	O
)	O
and	O
the	O
validation	B-General_Concept
set	I-General_Concept
is	O
used	O
to	O
determine	O
the	O
best	O
hyperparameter	B-General_Concept
set	O
(	O
and	O
model	O
type	O
)	O
for	O
this	O
inner	O
training	O
set	O
.	O
</s>
<s>
After	O
this	O
,	O
a	O
new	O
model	O
is	O
fit	O
on	O
the	O
entire	O
outer	O
training	O
set	O
,	O
using	O
the	O
best	O
set	O
of	O
hyperparameters	B-General_Concept
from	O
the	O
inner	O
cross-validation	B-Application
.	O
</s>
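The nested procedure just described can be sketched end to end. The model here is an assumption for illustration: it predicts shrink × (training mean), with the shrinkage factor as the hyperparameter; the inner folds pick the factor, and each outer test fold scores a model refit on the full outer training set with that choice.

```python
import random

def mse(pred, vals):
    return sum((v - pred) ** 2 for v in vals) / len(vals)

def nested_cv(values, k_outer=5, k_inner=4, grid=(0.0, 0.5, 1.0), seed=0):
    """Nested CV sketch: inner CV selects the hyperparameter, outer CV
    estimates the error of the whole selection-plus-refit procedure."""
    rng = random.Random(seed)
    data = values[:]
    rng.shuffle(data)
    outer = [data[i::k_outer] for i in range(k_outer)]
    scores = []
    for i in range(k_outer):
        train = [v for j, f in enumerate(outer) if j != i for v in f]
        inner = [train[j::k_inner] for j in range(k_inner)]

        def inner_score(shrink):
            total = 0.0
            for m in range(k_inner):
                tr = [v for j, f in enumerate(inner) if j != m for v in f]
                total += mse(shrink * sum(tr) / len(tr), inner[m])
            return total / k_inner

        best = min(grid, key=inner_score)          # hyperparameter choice
        refit = best * sum(train) / len(train)     # refit on outer train
        scores.append(mse(refit, outer[i]))        # score on outer test
    return sum(scores) / k_outer

outer_estimate = nested_cv([4.0, 5.0, 6.0] * 10)
```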
<s>
This	O
is	O
a	O
type	O
of	O
k*l-fold	O
cross-validation	B-Application
when	O
l	O
=	O
k-1	O
.	O
</s>
<s>
A	O
single	O
k-fold	O
cross-validation	B-Application
is	O
used	O
with	O
both	O
a	O
validation	O
and	O
test	O
set	O
.	O
</s>
<s>
Then	O
,	O
one	O
by	O
one	O
,	O
one	O
of	O
the	O
remaining	O
sets	O
is	O
used	O
as	O
a	O
validation	B-General_Concept
set	I-General_Concept
and	O
the	O
other	O
k-2	O
sets	O
are	O
used	O
as	O
training	O
sets	O
until	O
all	O
possible	O
combinations	O
have	O
been	O
evaluated	O
.	O
</s>
<s>
Similar	O
to	O
the	O
k*l-fold	O
cross-validation	B-Application
,	O
the	O
training	O
set	O
is	O
used	O
for	O
model	O
fitting	O
and	O
the	O
validation	B-General_Concept
set	I-General_Concept
is	O
used	O
for	O
model	O
evaluation	O
for	O
each	O
of	O
the	O
hyperparameter	B-General_Concept
sets	O
.	O
</s>
<s>
Here	O
,	O
two	O
variants	O
are	O
possible	O
:	O
either	O
evaluating	O
the	O
model	O
that	O
was	O
trained	O
on	O
the	O
training	O
set	O
or	O
evaluating	O
a	O
new	O
model	O
that	O
was	O
fit	O
on	O
the	O
combination	O
of	O
the	O
training	O
and	O
the	O
validation	B-General_Concept
set	I-General_Concept
.	O
</s>
<s>
The	O
goal	O
of	O
cross-validation	B-Application
is	O
to	O
estimate	O
the	O
expected	O
level	O
of	O
fit	O
of	O
a	O
model	O
to	O
a	O
data	O
set	O
that	O
is	O
independent	O
of	O
the	O
data	O
that	O
were	O
used	O
to	O
train	O
the	O
model	O
.	O
</s>
<s>
For	O
example	O
,	O
for	O
binary	B-General_Concept
classification	I-General_Concept
problems	O
,	O
each	O
case	O
in	O
the	O
validation	B-General_Concept
set	I-General_Concept
is	O
either	O
predicted	O
correctly	O
or	O
incorrectly	O
.	O
</s>
<s>
When	O
the	O
value	O
being	O
predicted	O
is	O
continuously	O
distributed	O
,	O
the	O
mean	B-Algorithm
squared	I-Algorithm
error	I-Algorithm
,	O
root	B-General_Concept
mean	I-General_Concept
squared	I-General_Concept
error	I-General_Concept
or	O
median	B-General_Concept
absolute	I-General_Concept
deviation	I-General_Concept
could	O
be	O
used	O
to	O
summarize	O
the	O
errors	O
.	O
</s>
<s>
When	O
users	O
apply	O
cross-validation	B-Application
to	O
select	O
a	O
good	O
configuration	O
,	O
then	O
they	O
might	O
want	O
to	O
balance	O
the	O
cross-validated	O
choice	O
with	O
their	O
own	O
estimate	O
of	O
the	O
configuration	O
.	O
</s>
<s>
In	O
this	O
way	O
,	O
they	O
can	O
attempt	O
to	O
counter	O
the	O
volatility	O
of	O
cross-validation	B-Application
when	O
the	O
sample	O
size	O
is	O
small	O
and	O
include	O
relevant	O
information	O
from	O
previous	O
research	O
.	O
</s>
<s>
In	O
a	O
forecasting	O
combination	O
exercise	O
,	O
for	O
instance	O
,	O
cross-validation	B-Application
can	O
be	O
applied	O
to	O
estimate	O
the	O
weights	O
that	O
are	O
assigned	O
to	O
each	O
forecast	O
.	O
</s>
<s>
Or	O
,	O
if	O
cross-validation	B-Application
is	O
applied	O
to	O
assign	O
individual	O
weights	O
to	O
observations	O
,	O
then	O
one	O
can	O
penalize	O
deviations	O
from	O
equal	O
weights	O
to	O
avoid	O
wasting	O
potentially	O
relevant	O
information	O
.	O
</s>
<s>
Hoornweg	O
(	O
2018	O
)	O
shows	O
how	O
a	O
tuning	O
parameter	O
can	O
be	O
defined	O
so	O
that	O
a	O
user	O
can	O
intuitively	O
balance	O
between	O
the	O
accuracy	O
of	O
cross-validation	B-Application
and	O
the	O
simplicity	O
of	O
sticking	O
to	O
a	O
reference	O
parameter	O
that	O
is	O
defined	O
by	O
the	O
user	O
.	O
</s>
<s>
Relative	O
accuracy	O
can	O
be	O
quantified	O
as	O
a	O
ratio	O
,	O
so	O
that	O
the	O
mean	B-Algorithm
squared	I-Algorithm
error	I-Algorithm
of	O
a	O
candidate	O
parameter	O
is	O
made	O
relative	O
to	O
that	O
of	O
a	O
user-specified	O
reference	O
parameter	O
.	O
</s>
<s>
With	O
this	O
tuning	O
parameter	O
,	O
the	O
user	O
determines	O
how	O
high	O
the	O
influence	O
of	O
the	O
reference	O
parameter	O
is	O
relative	O
to	O
cross-validation	B-Application
.	O
</s>
<s>
Hoornweg	O
(	O
2018	O
)	O
shows	O
that	O
a	O
loss	O
function	O
with	O
such	O
an	O
accuracy-simplicity	O
tradeoff	O
can	O
also	O
be	O
used	O
to	O
intuitively	O
define	O
shrinkage	O
estimators	O
like	O
the	O
(	O
adaptive	O
)	O
lasso	O
and	O
Bayesian	B-General_Concept
/	O
ridge	O
regression	O
.	O
</s>
<s>
Suppose	O
we	O
choose	O
a	O
measure	O
of	O
fit	O
F	O
,	O
and	O
use	O
cross-validation	B-Application
to	O
produce	O
an	O
estimate	O
F*	O
of	O
the	O
expected	O
fit	O
EF	O
of	O
a	O
model	O
to	O
an	O
independent	O
data	O
set	O
drawn	O
from	O
the	O
same	O
population	O
as	O
the	O
training	O
data	O
.	O
</s>
<s>
The	O
cross-validation	B-Application
estimator	O
F*	O
is	O
very	O
nearly	O
unbiased	O
for	O
EF	O
.	O
</s>
<s>
The	O
reason	O
that	O
it	O
is	O
slightly	O
biased	O
is	O
that	O
the	O
training	O
set	O
in	O
cross-validation	B-Application
is	O
slightly	O
smaller	O
than	O
the	O
actual	O
data	O
set	O
(	O
e.g.	O
for	O
LOOCV	O
the	O
training	O
set	O
size	O
is	O
n-1	O
when	O
there	O
are	O
n	O
observed	O
cases	O
)	O
.	O
</s>
<s>
For	O
this	O
reason	O
,	O
if	O
two	O
statistical	O
procedures	O
are	O
compared	O
based	O
on	O
the	O
results	O
of	O
cross-validation	B-Application
,	O
the	O
procedure	O
with	O
the	O
better	O
estimated	O
performance	O
may	O
not	O
actually	O
be	O
the	O
better	O
of	O
the	O
two	O
procedures	O
(	O
i.e.	O
it	O
may	O
not	O
have	O
the	O
better	O
value	O
of	O
EF	O
)	O
.	O
</s>
<s>
Some	O
progress	O
has	O
been	O
made	O
on	O
constructing	O
confidence	O
intervals	O
around	O
cross-validation	B-Application
estimates	O
,	O
but	O
this	O
is	O
considered	O
a	O
difficult	O
problem	O
.	O
</s>
<s>
Most	O
forms	O
of	O
cross-validation	B-Application
are	O
straightforward	O
to	O
implement	O
as	O
long	O
as	O
an	O
implementation	O
of	O
the	O
prediction	O
method	O
being	O
studied	O
is	O
available	O
.	O
</s>
<s>
If	O
the	O
prediction	O
method	O
is	O
expensive	O
to	O
train	O
,	O
cross-validation	B-Application
can	O
be	O
very	O
slow	O
since	O
the	O
training	O
must	O
be	O
carried	O
out	O
repeatedly	O
.	O
</s>
<s>
In	O
some	O
cases	O
such	O
as	O
least	B-Algorithm
squares	I-Algorithm
and	O
kernel	B-Algorithm
regression	I-Algorithm
,	O
cross-validation	B-Application
can	O
be	O
sped	O
up	O
significantly	O
by	O
pre-computing	O
certain	O
values	O
that	O
are	O
needed	O
repeatedly	O
in	O
the	O
training	O
,	O
or	O
by	O
using	O
fast	O
"	O
updating	O
rules	O
"	O
such	O
as	O
the	O
Sherman	O
–	O
Morrison	O
formula	O
.	O
</s>
<s>
However	O
,	O
one	O
must	O
be	O
careful	O
to	O
preserve	O
the	O
"	O
total	O
blinding	O
"	O
of	O
the	O
validation	B-General_Concept
set	I-General_Concept
from	O
the	O
training	O
procedure	O
,	O
otherwise	O
bias	O
may	O
result	O
.	O
</s>
<s>
An	O
extreme	O
example	O
of	O
accelerating	O
cross-validation	B-Application
occurs	O
in	O
linear	B-General_Concept
regression	I-General_Concept
,	O
where	O
the	O
results	O
of	O
cross-validation	B-Application
have	O
a	O
closed-form	O
expression	O
known	O
as	O
the	O
prediction	O
residual	O
error	O
sum	O
of	O
squares	O
(	O
PRESS	O
)	O
.	O
</s>
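The PRESS idea can be sketched for the simplest linear model (intercept only, assumed here for brevity), where every leverage h_ii equals 1/n: the leave-one-out residual is the ordinary residual divided by (1 − h_ii), so PRESS needs no refitting.

```python
# Toy response values (assumed for illustration).
ys = [2.0, 3.0, 5.0, 7.0, 11.0]
n = len(ys)
mean = sum(ys) / n
# Closed form: inflate each ordinary residual by 1 / (1 - h_ii),
# with leverage h_ii = 1/n for the intercept-only model.
press_closed = sum(((y - mean) / (1 - 1 / n)) ** 2 for y in ys)

# Brute force: refit (recompute the mean) with each point left out
# and sum the squared prediction errors; it matches exactly.
press_brute = sum((y - (sum(ys) - y) / (n - 1)) ** 2 for y in ys)
```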
<s>
Cross-validation	B-Application
only	O
yields	O
meaningful	O
results	O
if	O
the	O
validation	B-General_Concept
set	I-General_Concept
and	O
training	O
set	O
are	O
drawn	O
from	O
the	O
same	O
population	O
and	O
only	O
if	O
human	O
biases	O
are	O
controlled	O
.	O
</s>
<s>
In	O
many	O
applications	O
of	O
predictive	B-General_Concept
modeling	I-General_Concept
,	O
the	O
structure	O
of	O
the	O
system	O
being	O
studied	O
evolves	O
over	O
time	O
(	O
i.e.	O
it	O
is	O
non-stationary	O
)	O
.	O
</s>
<s>
Both	O
of	O
these	O
can	O
introduce	O
systematic	O
differences	O
between	O
the	O
training	O
and	O
validation	B-General_Concept
sets	I-General_Concept
.	O
</s>
<s>
If	O
a	O
model	O
is	O
fit	O
using	O
data	O
from	O
only	O
certain	O
subgroups	O
(	O
e.g.	O
young	O
people	O
or	O
males	O
)	O
,	O
but	O
is	O
then	O
applied	O
to	O
the	O
general	O
population	O
,	O
the	O
cross-validation	B-Application
results	O
from	O
the	O
training	O
set	O
could	O
differ	O
greatly	O
from	O
the	O
actual	O
predictive	O
performance	O
.	O
</s>
<s>
New	O
evidence	O
is	O
that	O
cross-validation	B-Application
by	O
itself	O
is	O
not	O
very	O
predictive	O
of	O
external	O
validity	O
,	O
whereas	O
a	O
form	O
of	O
experimental	O
validation	O
known	O
as	O
swap	O
sampling	O
that	O
does	O
control	O
for	O
human	O
bias	O
can	O
be	O
much	O
more	O
predictive	O
of	O
external	O
validity	O
.	O
</s>
<s>
As	O
defined	O
by	O
this	O
large	O
MAQC-II	O
study	O
across	O
30,000	O
models	O
,	O
swap	O
sampling	O
incorporates	O
cross-validation	B-Application
in	O
the	O
sense	O
that	O
predictions	O
are	O
tested	O
across	O
independent	O
training	O
and	O
validation	O
samples	O
.	O
</s>
<s>
When	O
there	O
is	O
a	O
mismatch	O
in	O
these	O
models	O
developed	O
across	O
these	O
swapped	O
training	O
and	O
validation	O
samples	O
as	O
happens	O
quite	O
frequently	O
,	O
MAQC-II	O
shows	O
that	O
this	O
will	O
be	O
much	O
more	O
predictive	O
of	O
poor	O
external	O
predictive	O
validity	O
than	O
traditional	O
cross-validation	B-Application
.	O
</s>
<s>
In	O
addition	O
to	O
placing	O
too	O
much	O
faith	O
in	O
predictions	O
that	O
may	O
vary	O
across	O
modelers	O
and	O
lead	O
to	O
poor	O
external	O
validity	O
due	O
to	O
these	O
confounding	O
modeler	O
effects	O
,	O
these	O
are	O
some	O
other	O
ways	O
that	O
cross-validation	B-Application
can	O
be	O
misused	O
:	O
</s>
<s>
By	O
performing	O
an	O
initial	O
analysis	O
to	O
identify	O
the	O
most	O
informative	O
features	B-Algorithm
using	O
the	O
entire	O
data	O
set	O
–	O
if	O
feature	B-General_Concept
selection	I-General_Concept
or	O
model	O
tuning	O
is	O
required	O
by	O
the	O
modeling	O
procedure	O
,	O
this	O
must	O
be	O
repeated	O
on	O
every	O
training	O
set	O
.	O
</s>
<s>
If	O
cross-validation	B-Application
is	O
used	O
to	O
decide	O
which	O
features	B-Algorithm
to	O
use	O
,	O
an	O
inner	O
cross-validation	B-Application
to	O
carry	O
out	O
the	O
feature	B-General_Concept
selection	I-General_Concept
on	O
every	O
training	O
set	O
must	O
be	O
performed	O
.	O
</s>
<s>
This	O
is	O
why	O
traditional	O
cross-validation	B-Application
needs	O
to	O
be	O
supplemented	O
with	O
controls	O
for	O
human	O
bias	O
and	O
confounded	O
model	O
specification	O
like	O
swap	O
sampling	O
and	O
prospective	O
studies	O
.	O
</s>
<s>
Since	O
the	O
order	O
of	O
the	O
data	O
is	O
important	O
,	O
cross-validation	B-Application
might	O
be	O
problematic	O
for	O
time-series	O
models	O
.	O
</s>
<s>
A	O
more	O
appropriate	O
approach	O
might	O
be	O
to	O
use	O
rolling	O
cross-validation	B-Application
.	O
</s>
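A minimal sketch of rolling cross-validation (the series and the naive last-value forecaster are assumptions for illustration): each held-out point is forecast only from earlier observations, so the temporal order is never violated.

```python
# Toy time series (assumed).
series = [1.0, 2.0, 2.5, 3.5, 4.0, 5.5, 6.0, 7.5]
min_train = 3                            # minimum history before scoring
errors = []
for t in range(min_train, len(series)):
    history = series[:t]                 # only the past is visible
    forecast = history[-1]               # naive last-value forecast
    errors.append((series[t] - forecast) ** 2)
rolling_mse = sum(errors) / len(errors)  # averaged over the rolling origin
```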
<s>
However	O
,	O
if	O
performance	O
is	O
described	O
by	O
a	O
single	O
summary	O
statistic	O
,	O
it	O
is	O
possible	O
that	O
the	O
approach	O
described	O
by	O
Politis	O
and	O
Romano	O
as	O
a	O
stationary	O
bootstrap	B-Application
will	O
work	O
.	O
</s>
<s>
The	O
statistic	O
of	O
the	O
bootstrap	B-Application
needs	O
to	O
accept	O
an	O
interval	O
of	O
the	O
time	O
series	O
and	O
return	O
the	O
summary	O
statistic	O
on	O
it	O
.	O
</s>
<s>
The	O
call	O
to	O
the	O
stationary	O
bootstrap	B-Application
needs	O
to	O
specify	O
an	O
appropriate	O
mean	O
interval	O
length	O
.	O
</s>
<s>
Cross-validation	B-Application
can	O
be	O
used	O
to	O
compare	O
the	O
performances	O
of	O
different	O
predictive	B-General_Concept
modeling	I-General_Concept
procedures	O
.	O
</s>
<s>
For	O
example	O
,	O
suppose	O
we	O
are	O
interested	O
in	O
optical	B-Application
character	I-Application
recognition	I-Application
,	O
and	O
we	O
are	O
considering	O
using	O
either	O
a	O
Support	B-Algorithm
Vector	I-Algorithm
Machine	I-Algorithm
(	O
SVM	B-Algorithm
)	O
or	O
k-nearest	B-General_Concept
neighbors	I-General_Concept
(	O
KNN	O
)	O
to	O
predict	O
the	O
true	O
character	O
from	O
an	O
image	O
of	O
a	O
handwritten	O
character	O
.	O
</s>
<s>
Using	O
cross-validation	B-Application
,	O
we	O
could	O
objectively	O
compare	O
these	O
two	O
methods	O
in	O
terms	O
of	O
their	O
respective	O
fractions	O
of	O
misclassified	O
characters	O
.	O
</s>
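The comparison above can be sketched with stand-in classifiers (a 1-D nearest-neighbor rule and a majority-class baseline, both assumptions for illustration rather than the SVM/KNN of the example): k-fold CV scores each by its fraction of misclassified held-out points.

```python
import random

def cv_error(points, predictor, k=5, seed=0):
    """Fraction of misclassified held-out points under k-fold CV."""
    rng = random.Random(seed)
    data = points[:]
    rng.shuffle(data)
    folds = [data[i::k] for i in range(k)]
    wrong = total = 0
    for i in range(k):
        train = [p for j, f in enumerate(folds) if j != i for p in f]
        for x, y in folds[i]:
            wrong += int(predictor(train, x) != y)
            total += 1
    return wrong / total

def nearest_neighbor(train, x):          # 1-NN in one dimension
    return min(train, key=lambda p: abs(p[0] - x))[1]

def majority_class(train, x):            # baseline: ignores x entirely
    labels = [y for _, y in train]
    return max(set(labels), key=labels.count)

# Two well-separated toy classes on the real line.
rng = random.Random(1)
points = [(rng.gauss(0.0, 1.0), 0) for _ in range(20)] + \
         [(rng.gauss(10.0, 1.0), 1) for _ in range(20)]
err_nn = cv_error(points, nearest_neighbor)
err_base = cv_error(points, majority_class)
```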
<s>
If	O
we	O
simply	O
compared	O
the	O
methods	O
based	O
on	O
their	O
in-sample	O
error	O
rates	O
,	O
one	O
method	O
would	O
likely	O
appear	O
to	O
perform	O
better	O
,	O
since	O
it	O
is	O
more	O
flexible	O
and	O
hence	O
more	O
prone	O
to	O
overfitting	B-Error_Name
compared	O
to	O
the	O
other	O
method	O
.	O
</s>
<s>
Cross-validation	B-Application
can	O
also	O
be	O
used	O
in	O
variable	B-General_Concept
selection	I-General_Concept
.	O
</s>
<s>
A	O
practical	O
goal	O
would	O
be	O
to	O
determine	O
which	O
subset	O
of	O
the	O
20	O
features	B-Algorithm
should	O
be	O
used	O
to	O
produce	O
the	O
best	O
predictive	B-General_Concept
model	I-General_Concept
.	O
</s>
<s>
For	O
most	O
modeling	O
procedures	O
,	O
if	O
we	O
compare	O
feature	O
subsets	O
using	O
the	O
in-sample	O
error	O
rates	O
,	O
the	O
best	O
performance	O
will	O
occur	O
when	O
all	O
20	O
features	B-Algorithm
are	O
used	O
.	O
</s>
<s>
However	O
,	O
under	O
cross-validation	B-Application
,	O
the	O
model	O
with	O
the	O
best	O
fit	O
will	O
generally	O
include	O
only	O
a	O
subset	O
of	O
the	O
features	B-Algorithm
that	O
are	O
deemed	O
truly	O
informative	O
.	O
</s>
