Empirical risk minimization (ERM) is a principle in statistical learning theory which defines a family of learning algorithms and is used to give theoretical bounds on their performance.
Consider the following situation, which is a general setting of many supervised learning problems: examples $(x, y)$ consist of an input $x \in X$ and a label $y \in Y$, drawn from an unknown joint probability distribution $P(x, y)$, and a non-negative loss function $L(\hat{y}, y)$ measures the cost of predicting $\hat{y}$ when the true label is $y$. The goal is to find a hypothesis $h : X \to Y$ whose risk, its expected loss under $P$, is small; since $P(x, y)$ is unknown, the risk is approximated by the empirical risk, the average loss over a training sample.
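The two central quantities in this setting are the true risk and the empirical risk; the passage above refers to them without writing them out, so the standard definitions are stated here for concreteness:

```latex
% True risk: expected loss under the unknown distribution P(x, y)
R(h) = \mathbb{E}_{(x, y) \sim P}\bigl[ L\bigl(h(x), y\bigr) \bigr]

% Empirical risk: average loss over the sample (x_1, y_1), ..., (x_n, y_n)
R_{\mathrm{emp}}(h) = \frac{1}{n} \sum_{i=1}^{n} L\bigl(h(x_i), y_i\bigr)
```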
For classification problems, the Bayes classifier is defined to be the classifier minimizing the risk defined with the 0–1 loss function.
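In symbols (standard definitions, stated here for concreteness rather than taken from the original passage):

```latex
% 0-1 loss: 1 if the prediction is wrong, 0 if it is correct
L_{0\text{-}1}(\hat{y}, y) = \mathbf{1}[\hat{y} \neq y]

% Bayes classifier: minimizes the true risk under the 0-1 loss,
% equivalently, it predicts the most probable label given x
h^{*}(x) = \arg\max_{y \in Y} P(y \mid x)
```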
The empirical risk minimization principle states that the learning algorithm should choose a hypothesis $\hat{h}$ from a fixed hypothesis class $\mathcal{H}$ which minimizes the empirical risk:

$$\hat{h} = \arg\min_{h \in \mathcal{H}} R_{\mathrm{emp}}(h).$$
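As a minimal sketch of the principle (an illustration, not any particular library's API), the following Python snippet performs ERM by exhaustive search over a small finite hypothesis class of one-dimensional threshold classifiers; the data and the hypothesis class are assumptions made for the example.

```python
import numpy as np

def empirical_risk(h, X, y):
    """Average 0-1 loss of hypothesis h on the sample (X, y)."""
    return np.mean(h(X) != y)

def erm(hypotheses, X, y):
    """Return the hypothesis in a finite class with minimal empirical risk."""
    return min(hypotheses, key=lambda h: empirical_risk(h, X, y))

# Illustrative data: 1-D inputs whose labels are set by an unknown threshold.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=100)
y = (X > 0.2).astype(int)

# Hypothesis class: threshold classifiers x -> 1[x > t] over a grid of t.
hypotheses = [lambda X, t=t: (X > t).astype(int) for t in np.linspace(-1, 1, 41)]

h_hat = erm(hypotheses, X, y)
print("minimal empirical risk:", empirical_risk(h_hat, X, y))
```

Because the class here is finite, the exhaustive search is exact; the hardness result discussed next concerns richer classes such as linear classifiers in higher dimensions.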
Empirical risk minimization for a classification problem with a 0–1 loss function is known to be an NP-hard problem even for a relatively simple class of functions such as linear classifiers.
In practice, machine learning algorithms cope with this issue either by employing a convex approximation to the 0–1 loss function (like the hinge loss for SVM), which is easier to optimize, or by imposing assumptions on the distribution (and thus ceasing to be agnostic learning algorithms, to which the above result applies).
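To see why the convex surrogate helps, the sketch below compares the 0–1 loss and the hinge loss as functions of the margin $m = y \cdot f(x)$ with labels $y \in \{-1, +1\}$ (the standard SVM convention; the snippet is illustrative, not drawn from the original text):

```python
import numpy as np

def zero_one_loss(margin):
    """0-1 loss as a function of the margin m = y * f(x): 1 iff m <= 0."""
    return (margin <= 0).astype(float)

def hinge_loss(margin):
    """Hinge loss max(0, 1 - m): a convex upper bound on the 0-1 loss."""
    return np.maximum(0.0, 1.0 - margin)

# The hinge loss dominates the 0-1 loss everywhere, so minimizing the
# (convex) hinge empirical risk also controls the 0-1 empirical risk.
margins = np.linspace(-2.0, 2.0, 9)
for m, z, h in zip(margins, zero_one_loss(margins), hinge_loss(margins)):
    print(f"margin={m:+.1f}  0-1 loss={z:.0f}  hinge loss={h:.1f}")
```

Unlike the 0–1 loss, the hinge loss is convex, so the surrogate minimization can be solved with standard convex optimization methods, which is exactly the trade-off described above.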
