In mathematical modeling, overfitting is "the production of an analysis that corresponds too closely or exactly to a particular set of data, and may therefore fail to fit to additional data or predict future observations reliably".
The essence of overfitting is to have unknowingly extracted some of the residual variation (i.e., the noise) as if that variation represented underlying model structure.
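As a minimal NumPy sketch of this idea (toy data; the trend, noise level, and polynomial degrees are assumptions for the example), a flexible model can drive its training residuals well below the noise level, which means it has absorbed noise as if it were structure:

```python
import numpy as np

rng = np.random.default_rng(0)

# A linear trend contaminated by noise (the residual variation).
x = np.linspace(0, 1, 12)
y = 2.0 * x + rng.normal(scale=0.2, size=x.size)

# A straight line leaves noise-sized residuals behind ...
line = np.polyfit(x, y, 1)
line_mse = np.mean((np.polyval(line, x) - y) ** 2)

# ... while a flexible degree-8 polynomial drives the training error far
# lower: the extra wiggles it learned are noise, not structure.
flexible = np.polyfit(x, y, 8)
flexible_mse = np.mean((np.polyval(flexible, x) - y) ** 2)
```

The lower training error of the flexible fit is not evidence of a better model; it is exactly the extracted noise described above.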
Underfitting occurs when a mathematical model cannot adequately capture the underlying structure of the data. An under-fitted model is one in which some parameters or terms that would appear in a correctly specified model are missing. Under-fitting would occur, for example, when fitting a linear model to non-linear data.
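The linear-on-non-linear example can be checked directly (an illustrative NumPy sketch with noise-free quadratic data):

```python
import numpy as np

# Noise-free quadratic data: y = x^2 on [0, 1].
x = np.linspace(0, 1, 50)
y = x ** 2

# A linear model cannot represent the curvature: the missing quadratic
# term leaves systematic residuals (underfitting).
lin = np.polyval(np.polyfit(x, y, 1), x)
mse_lin = np.mean((lin - y) ** 2)

# A correctly specified model (degree 2) fits essentially exactly.
quad = np.polyval(np.polyfit(x, y, 2), x)
mse_quad = np.mean((quad - y) ** 2)
```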
The possibility of over-fitting exists because the criterion used for selecting the model is not the same as the criterion used to judge the suitability of a model. For example, a model might be selected by maximizing its performance on some set of training data, yet its suitability might be determined by its ability to perform well on unseen data; over-fitting then occurs when a model begins to "memorize" training data rather than "learning" to generalize from a trend.
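The mismatch between the two criteria can be sketched as follows (a toy NumPy illustration; the cubic trend, sample sizes, and candidate degrees are assumptions for the example):

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n):
    """Samples from a cubic trend plus noise."""
    x = rng.uniform(-1, 1, n)
    return x, x ** 3 + rng.normal(scale=0.1, size=n)

x_train, y_train = make_data(30)
x_val, y_val = make_data(30)

def mse(degree, x_eval, y_eval):
    """Fit on the training split, measure error on (x_eval, y_eval)."""
    c = np.polyfit(x_train, y_train, degree)
    return float(np.mean((np.polyval(c, x_eval) - y_eval) ** 2))

degrees = range(1, 12)
# Selection criterion: training error -- it can only decrease with degree.
train_err = [mse(d, x_train, y_train) for d in degrees]
# Suitability criterion: error on unseen data -- it eventually turns up.
val_err = [mse(d, x_val, y_val) for d in degrees]

best_by_val = 1 + int(np.argmin(val_err))
```

Choosing the degree by training error always picks the most complex candidate; choosing by validation error is the honest proxy for suitability.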
The potential for overfitting depends not only on the number of parameters and the amount of data but also on the conformability of the model structure to the data shape, and on the magnitude of model error relative to the expected level of noise or error in the data.
To lessen the chance or amount of overfitting, several techniques are available (e.g., model comparison, cross-validation, regularization, early stopping, pruning, Bayesian priors, or dropout).
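Cross-validation, one of the techniques listed above, can be sketched in a few lines (an illustrative NumPy example, not a production implementation; the data and fold count are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(-1, 1, 40)
y = np.sin(2 * x) + rng.normal(scale=0.1, size=x.size)

def cv_mse(degree, k=5):
    """k-fold cross-validation error for a polynomial of the given degree:
    each fold is held out in turn and scored by a model fit on the rest."""
    idx = np.arange(x.size)
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        c = np.polyfit(x[train], y[train], degree)
        errs.append(np.mean((np.polyval(c, x[fold]) - y[fold]) ** 2))
    return float(np.mean(errs))

scores = {d: cv_mse(d) for d in range(1, 10)}
best_degree = min(scores, key=scores.get)
```

Because every point is scored only by a model that never saw it, the cross-validation score penalizes degrees that merely memorize the training folds.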
<s>
Burnham&	O
Anderson	O
,	O
in	O
their	O
much-cited	O
text	O
on	O
model	O
selection	O
,	O
argue	O
that	O
to	O
avoid	O
overfitting	B-Error_Name
,	O
we	O
should	O
adhere	O
to	O
the	O
"	O
Principle	O
of	O
Parsimony	O
"	O
.	O
</s>
Overfitting is more likely to be a serious concern when there is little theory available to guide the analysis, in part because there then tend to be a large number of models to select from.
In regression analysis, overfitting occurs frequently. As an extreme example, if there are p variables in a linear regression with p data points, the fitted line can go exactly through every point.
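This extreme case can be checked directly: with as many parameters as points, the least-squares problem becomes a square linear system with an exact solution (a small NumPy sketch with illustrative sizes):

```python
import numpy as np

rng = np.random.default_rng(2)
p = 5                         # p data points and p regressors
X = rng.normal(size=(p, p))   # design matrix: one row per data point
y = rng.normal(size=p)        # responses that are pure noise

# The normal equations reduce to a square system, so the fitted values
# match the observations exactly -- a perfect "fit" to pure noise.
beta = np.linalg.solve(X, y)
max_residual = float(np.max(np.abs(X @ beta - y)))
```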
In the process of regression model selection, the mean squared error of the random regression function can be split into random noise, approximation bias, and variance in the estimate of the regression function.
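Under squared-error loss with independent noise, this split is the standard textbook identity; writing f for the true regression function, f-hat for its estimate, and sigma-squared for the noise variance:

```latex
\mathbb{E}\!\left[(y - \hat f(x))^2\right]
  = \underbrace{\sigma^2}_{\text{random noise}}
  + \underbrace{\left(\mathbb{E}[\hat f(x)] - f(x)\right)^2}_{\text{approximation bias}^2}
  + \underbrace{\operatorname{Var}\!\left[\hat f(x)\right]}_{\text{variance of the estimate}}
```

Only the last two terms depend on the chosen model; the noise term is irreducible.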
The bias–variance tradeoff is often used to overcome overfit models.
With a large set of explanatory variables that actually have no relation to the dependent variable being predicted, some variables will in general be falsely found to be statistically significant, and the researcher may thus retain them in the model, thereby overfitting the model.
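A small simulation can illustrate this (a NumPy sketch; the sample sizes and the normal-approximation screening test are assumptions for the example):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 200, 100
X = rng.normal(size=(n, k))      # 100 explanatory variables ...
y = rng.normal(size=n)           # ... with no relation to the response

# Per-variable screening: under the null hypothesis, sqrt(n) times the
# sample correlation is approximately standard normal, so |z| > 1.96
# looks "significant at the 5% level" purely by chance -- on average
# about 5 of the 100 irrelevant variables.
Xc = X - X.mean(axis=0)
yc = y - y.mean()
r = (Xc.T @ yc) / np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum())
z = np.sqrt(n) * r
n_significant = int(np.sum(np.abs(z) > 1.96))
```

Retaining the variables that clear the threshold builds spurious structure into the model.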
Overfitting is the use of models or procedures that violate Occam's razor, for example by including more adjustable parameters than are ultimately optimal, or by using a more complicated approach than is ultimately optimal.
If the new, more complicated function is selected instead of the simple function, and if there was not a large enough gain in training-data fit to offset the complexity increase, then the new complex function "overfits" the data. The complex overfitted function will likely perform worse than the simpler function on validation data outside the training dataset, even though it performed as well, or perhaps even better, on the training dataset.
Overfitting is especially likely when learning is performed for too long or when training examples are rare, causing the learner to adjust to very specific random features of the training data that have no causal relation to the target function. In this process of overfitting, performance on the training examples still increases while performance on unseen data becomes worse.
Generally, a learning algorithm is said to overfit relative to a simpler one if it is more accurate in fitting known data (hindsight) but less accurate in predicting new data (foresight).
One can intuitively understand overfitting from the fact that information from all past experience can be divided into two groups: information that is relevant for the future, and irrelevant information ("noise"). Everything else being equal, the more difficult a criterion is to predict (i.e., the higher its uncertainty), the more noise exists in past information that needs to be ignored.
A learning algorithm that can reduce the risk of fitting noise is called "robust."
The most obvious consequence of overfitting is poor performance on the validation dataset. A further cost is lost portability: at one extreme, a one-variable linear regression is so portable that, if necessary, it could even be done by hand.
This phenomenon also presents problems in the area of artificial intelligence and copyright, with the developers of some generative deep learning models, such as Stable Diffusion and GitHub Copilot, being sued for copyright infringement because these models have been found to be capable of reproducing certain copyrighted items from their training data.
Dropout regularisation can also improve robustness, and therefore reduce over-fitting, by probabilistically removing inputs to a layer.
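A minimal sketch of the mechanism, in the common "inverted dropout" form (a NumPy illustration; shapes and the drop rate are assumptions for the example):

```python
import numpy as np

rng = np.random.default_rng(5)

def dropout(activations, rate, training=True):
    """Inverted dropout: zero each input with probability `rate` during
    training, and rescale the survivors by 1/(1-rate) so the expected
    value of each unit is unchanged. At inference time, pass through."""
    if not training or rate == 0.0:
        return activations
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

a = np.ones((4, 8))            # a batch of layer inputs, all ones
dropped = dropout(a, rate=0.5) # entries become exactly 0.0 or 2.0
```

Randomly removing inputs prevents any single unit from being relied on too specifically, which is the source of the robustness gain.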
Underfitting is the inverse of overfitting: the statistical model or machine learning algorithm is too simplistic to accurately capture the patterns in the data.
A sign of underfitting is high bias and low variance in the current model or algorithm (the inverse of overfitting: low bias and high variance).
This can be gathered from the bias–variance tradeoff, which is the method of analyzing a model or algorithm for bias error, variance error, and irreducible error.
With high bias and low variance, the model will represent the data points inaccurately and thus be insufficiently able to predict future data results (see generalization error).
There are multiple ways to deal with underfitting:
Increase the complexity of the model: A richer model class can capture more of the underlying structure. However, this should be done carefully to avoid overfitting. For example, a neural network may be more effective than a linear regression model for some types of data.
Increase the amount of training data: If the model is underfitting due to lack of data, increasing the amount of training data may help.
Regularization: Regularization is a technique used to prevent overfitting by adding a penalty term to the loss function that discourages large parameter values. It can also be used to prevent underfitting by controlling the complexity of the model.
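The penalty term can be sketched with ridge (L2) regression, whose closed form makes the shrinkage explicit (an illustrative NumPy example; the data and penalty strength are assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(50, 10))
y = X[:, 0] + rng.normal(scale=0.5, size=50)

def ridge(X, y, lam):
    """Minimise ||y - X b||^2 + lam * ||b||^2. The lam * ||b||^2 term is
    the penalty that discourages large parameter values; the minimiser
    has the closed form (X'X + lam I)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

b_free = ridge(X, y, 0.0)    # ordinary least squares (no penalty)
b_reg = ridge(X, y, 10.0)    # penalised: coefficients are shrunk
```

Increasing lam trades a little extra bias for less variance, which is how the same knob can be tuned against either overfitting or (by lowering it) underfitting.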
Ensemble methods: Ensemble methods combine multiple models to create a more accurate prediction. This can help to reduce underfitting by allowing multiple models to work together to capture the underlying patterns in the data.
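One common ensemble recipe, bagging, can be sketched as follows (an illustrative NumPy example; the base model class, ensemble size, and data are assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(-1, 1, 60)
y = np.sin(3 * x) + rng.normal(scale=0.3, size=x.size)

def bootstrap_predict(degree, n_models=25):
    """Bagging: fit the same base model on bootstrap resamples of the
    data and average the resulting predictions."""
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, x.size, x.size)   # sample with replacement
        c = np.polyfit(x[idx], y[idx], degree)
        preds.append(np.polyval(c, x))
    return np.mean(preds, axis=0)

ensemble = bootstrap_predict(degree=5)
mse_ensemble = float(np.mean((ensemble - np.sin(3 * x)) ** 2))
```

Averaging smooths out the idiosyncrasies of any single fit, so the combined prediction tracks the underlying pattern more reliably than its members.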
Feature engineering: Feature engineering involves creating new model features from the existing ones that may be more relevant to the problem at hand. This can help to improve the accuracy of the model and prevent underfitting.
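A minimal sketch of an engineered feature curing underfitting (an illustrative NumPy example; the quadratic target is an assumption):

```python
import numpy as np

rng = np.random.default_rng(8)
x = rng.uniform(-2, 2, 100)
y = x ** 2 + rng.normal(scale=0.1, size=x.size)

# A linear model on the raw feature underfits a quadratic target ...
raw = np.column_stack([np.ones_like(x), x])
b_raw, res_raw, *_ = np.linalg.lstsq(raw, y, rcond=None)

# ... but adding an engineered feature x**2 makes the relationship
# linear in the new feature space, so the same model class now fits.
engineered = np.column_stack([np.ones_like(x), x, x ** 2])
b_eng, res_eng, *_ = np.linalg.lstsq(engineered, y, rcond=None)
```

The model class never changed; only the representation of the data did, which is the essential point of feature engineering.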
