<s>
In	O
machine	O
learning	O
and	O
mathematical	O
optimization	O
,	O
loss	B-Algorithm
functions	I-Algorithm
for	I-Algorithm
classification	I-Algorithm
are	O
computationally	O
feasible	O
loss	O
functions	O
representing	O
the	O
price	O
paid	O
for	O
inaccuracy	O
of	O
predictions	O
in	O
classification	O
problems	O
(	O
problems	O
of	O
identifying	O
which	O
category	O
a	O
particular	O
observation	O
belongs	O
to	O
)	O
.	O
</s>
<s>
In	O
addition	O
to	O
their	O
computational	O
tractability	O
,	O
one	O
can	O
show	O
that	O
the	O
solutions	O
to	O
the	O
learning	O
problem	O
using	O
these	O
loss	O
surrogates	O
allow	O
for	O
the	O
recovery	O
of	O
the	O
actual	O
solution	O
to	O
the	O
original	O
classification	B-General_Concept
problem	I-General_Concept
.	O
</s>
<s>
(	O
See	O
statistical	B-General_Concept
learning	I-General_Concept
theory	I-General_Concept
for	O
a	O
more	O
detailed	O
description	O
.	O
)	O
</s>
<s>
This	O
holds	O
even	O
for	O
the	O
nonconvex	O
loss	O
functions	O
,	O
which	O
means	O
that	O
gradient	B-Algorithm
descent	I-Algorithm
based	O
algorithms	O
such	O
as	O
gradient	B-Algorithm
boosting	I-Algorithm
can	O
be	O
used	O
to	O
construct	O
the	O
minimizer	O
.	O
</s>
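The sentence above describes minimizing a surrogate loss with gradient-descent-style methods. A minimal Python sketch of plain gradient descent on the logistic loss over a toy 1-D dataset (the data, learning rate, and function names are my own illustrative assumptions, not from the source):

```python
import math

def logistic_loss(y, score):
    # V(y, f(x)) = log(1 + exp(-y * f(x))), a smooth convex surrogate
    return math.log(1.0 + math.exp(-y * score))

def dloss_dscore(y, score):
    # Derivative of the logistic loss with respect to the score f(x)
    return -y / (1.0 + math.exp(y * score))

# Toy 1-D data: model f(x) = w * x, labels in {-1, +1}
data = [(1.0, 1.0), (2.0, 1.0), (-1.0, -1.0), (-1.5, -1.0)]
w = 0.0
for _ in range(200):  # plain gradient descent on the empirical risk
    grad = sum(dloss_dscore(y, w * x) * x for x, y in data) / len(data)
    w -= 0.5 * grad

empirical_risk = sum(logistic_loss(y, w * x) for x, y in data) / len(data)
print(w, empirical_risk)
```

Because the surrogate is differentiable everywhere, each step only needs the gradient; gradient boosting applies the same idea in function space rather than parameter space.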
<s>
For	O
proper	O
loss	O
functions	O
,	O
the	O
loss	O
margin	O
can	O
be	O
defined	O
as	O
and	O
shown	O
to	O
be	O
directly	O
related	O
to	O
the	O
regularization	O
properties	O
of	O
the	O
classifier	B-General_Concept
.	O
</s>
<s>
It	O
is	O
shown	O
that	O
this	O
is	O
directly	O
equivalent	O
to	O
decreasing	O
the	O
learning	O
rate	O
in	O
gradient	B-Algorithm
boosting	I-Algorithm
where	O
decreasing	O
improves	O
the	O
regularization	O
of	O
the	O
boosted	O
classifier	B-General_Concept
.	O
</s>
<s>
In	O
conclusion	O
,	O
by	O
choosing	O
a	O
loss	O
function	O
with	O
larger	O
margin	O
(	O
smaller	O
)	O
we	O
increase	O
regularization	O
and	O
improve	O
our	O
estimates	O
of	O
the	O
posterior	O
probability	O
which	O
in	O
turn	O
improves	O
the	O
ROC	O
curve	O
of	O
the	O
final	O
classifier	B-General_Concept
.	O
</s>
<s>
Specifically	O
for	O
Tikhonov	B-Algorithm
regularization	I-Algorithm
,	O
one	O
can	O
solve	O
for	O
the	O
regularization	O
parameter	O
using	O
leave-one-out	B-Application
cross-validation	I-Application
in	O
the	O
same	O
time	O
as	O
it	O
would	O
take	O
to	O
solve	O
a	O
single	O
problem	O
.	O
</s>
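The claim above (all leave-one-out folds for the price of one solve) follows from the closed form of the ridge leave-one-out residual, (y_i - ŷ_i) / (1 - H_ii), where H is the hat matrix. A hedged NumPy sketch under that standard result (data and names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=30)

def loocv_error(X, y, lam):
    # Tikhonov (ridge) hat matrix H = X (X^T X + lam I)^{-1} X^T.
    # The leave-one-out residual has the closed form (y_i - yhat_i) / (1 - H_ii),
    # so all n folds cost no more than a single ridge solve.
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)
    residuals = (y - H @ y) / (1.0 - np.diag(H))
    return float(np.mean(residuals ** 2))

# Sweep the regularization parameter at one solve per candidate value.
errors = {lam: loocv_error(X, y, lam) for lam in (1e-3, 1e-1, 1e1)}
print(errors)
```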
<s>
The	O
logistic	O
loss	O
is	O
used	O
in	O
the	O
LogitBoost	B-Algorithm
algorithm	I-Algorithm
.	O
</s>
<s>
The	O
cross	O
entropy	O
loss	O
is	O
ubiquitous	O
in	O
modern	O
deep	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
.	O
</s>
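The cross-entropy loss mentioned above can be illustrated with a few lines of Python; this is a minimal sketch of the binary case, with function names of my own choosing:

```python
import math

def cross_entropy(y_true, p_pred, eps=1e-12):
    # Binary cross-entropy: -[y*log(p) + (1-y)*log(1-p)]
    p_pred = min(max(p_pred, eps), 1 - eps)  # clip for numerical stability
    return -(y_true * math.log(p_pred) + (1 - y_true) * math.log(1 - p_pred))

# A confident correct prediction incurs a small loss...
low = cross_entropy(1.0, 0.99)
# ...while a confident wrong prediction is penalized heavily.
high = cross_entropy(1.0, 0.01)
print(low, high)
```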
<s>
The	O
exponential	O
loss	O
is	O
used	O
in	O
the	O
AdaBoost	B-Algorithm
algorithm	I-Algorithm
.	O
</s>
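The exponential loss that AdaBoost minimizes can be sketched directly; a small Python illustration (the example scores are my own):

```python
import math

def exponential_loss(y, score):
    # V(y, f(x)) = exp(-y * f(x)), the surrogate minimized by AdaBoost
    return math.exp(-y * score)

# The exponential loss penalizes confident mistakes much more harshly
# than the logistic loss, which makes AdaBoost sensitive to label noise.
print(exponential_loss(1.0, 2.0))   # correct and confident: small loss
print(exponential_loss(1.0, -2.0))  # wrong and confident: large loss
```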
<s>
The	O
Savage	O
loss	O
has	O
been	O
used	O
in	O
gradient	B-Algorithm
boosting	I-Algorithm
and	O
the	O
SavageBoost	B-Algorithm
algorithm	I-Algorithm
.	O
</s>
<s>
The	O
Tangent	O
loss	O
has	O
been	O
used	O
in	O
gradient	B-Algorithm
boosting	I-Algorithm
,	O
the	O
TangentBoost	B-Algorithm
algorithm	I-Algorithm
and	O
Alternating	B-Algorithm
Decision	I-Algorithm
Forests	I-Algorithm
.	O
</s>
<s>
In	O
addition	O
,	O
the	O
empirical	B-General_Concept
risk	I-General_Concept
minimization	I-General_Concept
of	O
this	O
loss	O
is	O
equivalent	O
to	O
the	O
classical	O
formulation	O
for	O
support	B-Algorithm
vector	I-Algorithm
machines	I-Algorithm
(	O
SVMs	B-Algorithm
)	O
.	O
</s>
<s>
Consequently	O
,	O
the	O
hinge	O
loss	O
function	O
cannot	O
be	O
used	O
with	O
gradient	B-Algorithm
descent	I-Algorithm
methods	I-Algorithm
or	O
stochastic	B-Algorithm
gradient	I-Algorithm
descent	I-Algorithm
methods	I-Algorithm
which	O
rely	O
on	O
differentiability	O
over	O
the	O
entire	O
domain	O
.	O
</s>
<s>
However	O
,	O
the	O
hinge	O
loss	O
does	O
have	O
a	O
subgradient	O
at	O
,	O
which	O
allows	O
for	O
the	O
utilization	O
of	O
subgradient	B-Algorithm
descent	I-Algorithm
methods	I-Algorithm
.	O
</s>
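The subgradient argument above can be made concrete. A minimal Python sketch of one subgradient-descent step on the hinge loss for a single example (the scalar model and the choice of subgradient at the kink are my own illustrative conventions):

```python
def hinge_loss(y, score):
    # Hinge loss V(y, f(x)) = max(0, 1 - y * f(x))
    return max(0.0, 1.0 - y * score)

def hinge_subgradient(y, score):
    # Subgradient w.r.t. the score: -y where the margin is violated,
    # 0 where it is satisfied; at the kink any value in [-y, 0] is
    # valid, and we pick -y (a common convention).
    return -y if y * score <= 1.0 else 0.0

# One subgradient-descent step on a single example, with f(x) = w * x.
w, x, y, lr = 0.0, 2.0, 1.0, 0.1
g = hinge_subgradient(y, w * x) * x   # chain rule through f(x) = w * x
w -= lr * g
print(w)  # the weight moves toward satisfying the margin
```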
<s>
SVMs	B-Algorithm
utilizing	O
the	O
hinge	O
loss	O
function	O
can	O
also	O
be	O
solved	O
using	O
quadratic	B-Algorithm
programming	I-Algorithm
.	O
</s>
