<s>
AdaBoost	B-Algorithm
,	O
short	O
for	O
Adaptive	O
Boosting	B-Algorithm
,	O
is	O
a	O
statistical	B-General_Concept
classification	I-General_Concept
meta-algorithm	B-Algorithm
formulated	O
by	O
Yoav	O
Freund	O
and	O
Robert	O
Schapire	O
in	O
1995	O
,	O
who	O
won	O
the	O
2003	O
Gödel	O
Prize	O
for	O
their	O
work	O
.	O
</s>
<s>
The	O
output	O
of	O
the	O
other	O
learning	O
algorithms	O
(	O
'	O
weak	B-Algorithm
learners	I-Algorithm
'	O
)	O
is	O
combined	O
into	O
a	O
weighted	O
sum	O
that	O
represents	O
the	O
final	O
output	O
of	O
the	O
boosted	O
classifier	B-General_Concept
.	O
</s>
<s>
Usually	O
,	O
AdaBoost	B-Algorithm
is	O
presented	O
for	O
binary	B-General_Concept
classification	I-General_Concept
,	O
although	O
it	O
can	O
be	O
generalized	O
to	O
multiple	O
classes	O
or	O
bounded	O
intervals	O
on	O
the	O
real	O
line	O
.	O
</s>
<s>
AdaBoost	B-Algorithm
is	O
adaptive	O
in	O
the	O
sense	O
that	O
subsequent	O
weak	B-Algorithm
learners	I-Algorithm
are	O
tweaked	O
in	O
favor	O
of	O
those	O
instances	O
misclassified	O
by	O
previous	O
classifiers	B-General_Concept
.	O
</s>
<s>
In	O
some	O
problems	O
it	O
can	O
be	O
less	O
susceptible	O
to	O
the	O
overfitting	B-Error_Name
problem	O
than	O
other	O
learning	O
algorithms	O
.	O
</s>
<s>
Although	O
AdaBoost	B-Algorithm
is	O
typically	O
used	O
to	O
combine	O
weak	O
base	O
learners	O
(	O
such	O
as	O
decision	B-Algorithm
stumps	I-Algorithm
)	O
,	O
it	O
has	O
been	O
shown	O
that	O
it	O
can	O
also	O
effectively	O
combine	O
strong	O
base	O
learners	O
(	O
such	O
as	O
deep	O
decision	B-Algorithm
trees	I-Algorithm
)	O
,	O
producing	O
an	O
even	O
more	O
accurate	O
model	O
.	O
</s>
<s>
AdaBoost	B-Algorithm
(	O
with	O
decision	B-Algorithm
trees	I-Algorithm
as	O
the	O
weak	B-Algorithm
learners	I-Algorithm
)	O
is	O
often	O
referred	O
to	O
as	O
the	O
best	O
out-of-the-box	O
classifier	B-General_Concept
.	O
</s>
<s>
When	O
used	O
with	O
decision	B-Algorithm
tree	I-Algorithm
learning	I-Algorithm
,	O
information	O
gathered	O
at	O
each	O
stage	O
of	O
the	O
AdaBoost	B-Algorithm
algorithm	O
about	O
the	O
relative	O
'	O
hardness	O
'	O
of	O
each	O
training	O
sample	O
is	O
fed	O
into	O
the	O
tree	O
growing	O
algorithm	O
such	O
that	O
later	O
trees	O
tend	O
to	O
focus	O
on	O
harder-to-classify	O
examples	O
.	O
</s>
<s>
AdaBoost	B-Algorithm
refers	O
to	O
a	O
particular	O
method	O
of	O
training	O
a	O
boosted	O
classifier	B-General_Concept
.	O
</s>
<s>
where	O
each	O
is	O
a	O
weak	B-Algorithm
learner	I-Algorithm
that	O
takes	O
an	O
object	O
as	O
input	O
and	O
returns	O
a	O
value	O
indicating	O
the	O
class	O
of	O
the	O
object	O
.	O
</s>
<s>
For	O
example	O
,	O
in	O
the	O
two-class	O
problem	O
,	O
the	O
sign	O
of	O
the	O
weak	B-Algorithm
learner	I-Algorithm
's	O
output	O
identifies	O
the	O
predicted	O
object	O
class	O
and	O
the	O
absolute	O
value	O
gives	O
the	O
confidence	O
in	O
that	O
classification	O
.	O
</s>
<s>
Similarly	O
,	O
the	O
-th	O
classifier	B-General_Concept
is	O
positive	O
if	O
the	O
sample	O
is	O
in	O
a	O
positive	O
class	O
and	O
negative	O
otherwise	O
.	O
</s>
<s>
Each	O
weak	B-Algorithm
learner	I-Algorithm
produces	O
an	O
output	O
hypothesis	O
which	O
fixes	O
a	O
prediction	O
for	O
each	O
sample	O
in	O
the	O
training	O
set	O
.	O
</s>
<s>
At	O
each	O
iteration	O
,	O
a	O
weak	B-Algorithm
learner	I-Algorithm
is	O
selected	O
and	O
assigned	O
a	O
coefficient	O
such	O
that	O
the	O
total	O
training	O
error	O
of	O
the	O
resulting	O
-stage	O
boosted	O
classifier	B-General_Concept
is	O
minimized	O
.	O
</s>
<s>
Here	O
is	O
the	O
boosted	O
classifier	B-General_Concept
that	O
has	O
been	O
built	O
up	O
to	O
the	O
previous	O
stage	O
of	O
training	O
and	O
is	O
the	O
weak	B-Algorithm
learner	I-Algorithm
that	O
is	O
being	O
considered	O
for	O
addition	O
to	O
the	O
final	O
classifier	B-General_Concept
.	O
</s>
<s>
These	O
weights	O
can	O
be	O
used	O
in	O
the	O
training	O
of	O
the	O
weak	B-Algorithm
learner	I-Algorithm
.	O
</s>
<s>
For	O
instance	O
,	O
decision	B-Algorithm
trees	I-Algorithm
can	O
be	O
grown	O
which	O
favor	O
the	O
splitting	O
of	O
sets	O
of	O
samples	O
with	O
large	O
weights	O
.	O
</s>
<s>
Suppose	O
we	O
have	O
a	O
data	O
set	O
where	O
each	O
item	O
has	O
an	O
associated	O
class	O
,	O
and	O
a	O
set	O
of	O
weak	B-Algorithm
classifiers	I-Algorithm
each	O
of	O
which	O
outputs	O
a	O
classification	O
for	O
each	O
item	O
.	O
</s>
<s>
After	O
the	O
-th	O
iteration	O
our	O
boosted	O
classifier	B-General_Concept
is	O
a	O
linear	O
combination	O
of	O
the	O
weak	B-Algorithm
classifiers	I-Algorithm
of	O
the	O
form	O
:	O
</s>
<s>
At	O
the	O
-th	O
iteration	O
we	O
want	O
to	O
extend	O
this	O
to	O
a	O
better	O
boosted	O
classifier	B-General_Concept
by	O
adding	O
another	O
weak	B-Algorithm
classifier	I-Algorithm
,	O
with	O
another	O
weight	O
:	O
</s>
<s>
So	O
it	O
remains	O
to	O
determine	O
which	O
weak	B-Algorithm
classifier	I-Algorithm
is	O
the	O
best	O
choice	O
for	O
,	O
and	O
what	O
its	O
weight	O
should	O
be	O
.	O
</s>
<s>
We	O
define	O
the	O
total	O
error	O
of	O
as	O
the	O
sum	O
of	O
its	O
exponential	B-Algorithm
loss	I-Algorithm
on	O
each	O
data	O
point	O
,	O
given	O
as	O
follows	O
:	O
</s>
<s>
the	O
weak	B-Algorithm
classifier	I-Algorithm
with	O
the	O
lowest	O
weighted	O
error	O
(	O
with	O
weights	O
)	O
.	O
</s>
<s>
We	O
calculate	O
the	O
weighted	O
error	O
rate	O
of	O
the	O
weak	B-Algorithm
classifier	I-Algorithm
to	O
be	O
,	O
so	O
it	O
follows	O
that	O
:	O
</s>
<s>
Note	O
:	O
This	O
derivation	O
only	O
applies	O
when	O
,	O
though	O
it	O
can	O
be	O
a	O
good	O
starting	O
guess	O
in	O
other	O
cases	O
,	O
such	O
as	O
when	O
the	O
weak	B-Algorithm
learner	I-Algorithm
is	O
biased	O
(	O
)	O
,	O
has	O
multiple	O
leaves	O
(	O
)	O
or	O
is	O
some	O
other	O
function	O
.	O
</s>
<s>
Thus	O
we	O
have	O
derived	O
the	O
AdaBoost	B-Algorithm
algorithm	O
:	O
At	O
each	O
iteration	O
,	O
choose	O
the	O
classifier	B-General_Concept
,	O
which	O
minimizes	O
the	O
total	O
weighted	O
error	O
,	O
use	O
this	O
to	O
calculate	O
the	O
error	O
rate	O
,	O
use	O
this	O
to	O
calculate	O
the	O
weight	O
,	O
and	O
finally	O
use	O
this	O
to	O
improve	O
the	O
boosted	O
classifier	B-General_Concept
to	O
.	O
</s>
<s>
Boosting	B-Algorithm
is	O
a	O
form	O
of	O
linear	O
regression	O
in	O
which	O
the	O
features	O
of	O
each	O
sample	O
are	O
the	O
outputs	O
of	O
some	O
weak	B-Algorithm
learner	I-Algorithm
applied	O
to	O
.	O
</s>
<s>
While	O
regression	O
tries	O
to	O
fit	O
to	O
as	O
precisely	O
as	O
possible	O
without	O
loss	O
of	O
generalization	O
,	O
typically	O
using	O
least	B-Algorithm
square	I-Algorithm
error	O
,	O
the	O
AdaBoost	B-Algorithm
error	O
function	O
takes	O
into	O
account	O
the	O
fact	O
that	O
only	O
the	O
sign	O
of	O
the	O
final	O
result	O
is	O
used	O
,	O
thus	O
can	O
be	O
far	O
larger	O
than	O
1	O
without	O
increasing	O
error	O
.	O
</s>
<s>
Thus	O
it	O
can	O
be	O
seen	O
that	O
the	O
weight	O
update	O
in	O
the	O
AdaBoost	B-Algorithm
algorithm	O
is	O
equivalent	O
to	O
recalculating	O
the	O
error	O
on	O
after	O
each	O
stage	O
.	O
</s>
<s>
As	O
long	O
as	O
the	O
loss	O
function	O
is	O
monotonic	O
and	O
continuously	O
differentiable	O
,	O
the	O
classifier	B-General_Concept
is	O
always	O
driven	O
toward	O
purer	O
solutions	O
.	O
</s>
<s>
Zhang	O
(	O
2004	O
)	O
provides	O
a	O
loss	O
function	O
based	O
on	O
least	B-Algorithm
squares	I-Algorithm
,	O
a	O
modified	O
Huber	O
loss	O
function	O
:	O
</s>
<s>
This	O
function	O
is	O
more	O
well-behaved	O
than	O
LogitBoost	B-Algorithm
for	O
close	O
to	O
1	O
or	O
-1	O
,	O
does	O
not	O
penalise	O
'	O
overconfident	O
'	O
predictions	O
(	O
)	O
,	O
unlike	O
unmodified	O
least	B-Algorithm
squares	I-Algorithm
,	O
and	O
only	O
penalises	O
samples	O
misclassified	O
with	O
confidence	O
greater	O
than	O
1	O
linearly	O
,	O
as	O
opposed	O
to	O
quadratically	O
or	O
exponentially	O
,	O
and	O
is	O
thus	O
less	O
susceptible	O
to	O
the	O
effects	O
of	O
outliers	O
.	O
</s>
<s>
Boosting	B-Algorithm
can	O
be	O
seen	O
as	O
minimization	O
of	O
a	O
convex	O
loss	O
function	O
over	O
a	O
convex	O
set	O
of	O
functions	O
.	O
</s>
<s>
In	O
the	O
gradient	B-Algorithm
descent	I-Algorithm
analogy	O
,	O
the	O
output	O
of	O
the	O
classifier	B-General_Concept
for	O
each	O
training	O
point	O
is	O
considered	O
a	O
point	O
in	O
n-dimensional	O
space	O
,	O
where	O
each	O
axis	O
corresponds	O
to	O
a	O
training	O
sample	O
,	O
each	O
weak	B-Algorithm
learner	I-Algorithm
corresponds	O
to	O
a	O
vector	O
of	O
fixed	O
orientation	O
and	O
length	O
,	O
and	O
the	O
goal	O
is	O
to	O
reach	O
the	O
target	O
point	O
(	O
or	O
any	O
region	O
where	O
the	O
value	O
of	O
loss	O
function	O
is	O
less	O
than	O
the	O
value	O
at	O
that	O
point	O
)	O
,	O
in	O
the	O
fewest	O
steps	O
.	O
</s>
<s>
Thus	O
AdaBoost	B-Algorithm
algorithms	O
perform	O
either	O
Cauchy	B-Algorithm
(	O
find	O
with	O
the	O
steepest	O
gradient	O
,	O
choose	O
to	O
minimize	O
test	O
error	O
)	O
or	O
Newton	B-Algorithm
(	O
choose	O
some	O
target	O
point	O
,	O
find	O
that	O
brings	O
closest	O
to	O
that	O
point	O
)	O
optimization	O
of	O
training	O
error	O
.	O
</s>
<s>
The	O
output	O
of	O
decision	B-Algorithm
trees	I-Algorithm
is	O
a	O
class	O
probability	O
estimate	O
,	O
the	O
probability	O
that	O
is	O
in	O
the	O
positive	O
class	O
.	O
</s>
<s>
Friedman	O
,	O
Hastie	O
and	O
Tibshirani	O
derive	O
an	O
analytical	O
minimizer	O
for	O
for	O
some	O
fixed	O
(	O
typically	O
chosen	O
using	O
weighted	O
least	B-Algorithm
squares	I-Algorithm
error	O
)	O
:	O
</s>
<s>
LogitBoost	B-Algorithm
represents	O
an	O
application	O
of	O
established	O
logistic	O
regression	O
techniques	O
to	O
the	O
AdaBoost	B-Algorithm
method	O
.	O
</s>
<s>
That	O
is	O
,	O
is	O
the	O
Newton	O
–	O
Raphson	O
approximation	O
of	O
the	O
minimizer	O
of	O
the	O
log-likelihood	O
error	O
at	O
stage	O
,	O
and	O
the	O
weak	B-Algorithm
learner	I-Algorithm
is	O
chosen	O
as	O
the	O
learner	O
that	O
best	O
approximates	O
by	O
weighted	O
least	B-Algorithm
squares	I-Algorithm
.	O
</s>
<s>
As	O
p	O
approaches	O
either	O
1	O
or	O
0	O
,	O
the	O
value	O
of	O
becomes	O
very	O
small	O
and	O
the	O
z	O
term	O
,	O
which	O
is	O
large	O
for	O
misclassified	O
samples	O
,	O
can	O
become	O
numerically	O
unstable	O
,	O
due	O
to	O
machine	O
precision	O
rounding	O
errors	O
.	O
</s>
<s>
While	O
previous	O
boosting	B-Algorithm
algorithms	O
choose	O
greedily	O
,	O
minimizing	O
the	O
overall	O
test	O
error	O
as	O
much	O
as	O
possible	O
at	O
each	O
step	O
,	O
GentleBoost	B-Algorithm
features	O
a	O
bounded	O
step	O
size	O
.	O
</s>
<s>
Thus	O
,	O
in	O
the	O
case	O
where	O
a	O
weak	B-Algorithm
learner	I-Algorithm
exhibits	O
perfect	O
classification	O
performance	O
,	O
GentleBoost	B-Algorithm
chooses	O
exactly	O
equal	O
to	O
,	O
while	O
steepest	B-Algorithm
descent	I-Algorithm
algorithms	O
try	O
to	O
set	O
.	O
</s>
<s>
A	O
technique	O
for	O
speeding	O
up	O
processing	O
of	O
boosted	O
classifiers	B-General_Concept
,	O
early	B-Algorithm
termination	I-Algorithm
refers	O
to	O
only	O
testing	O
each	O
potential	O
object	O
with	O
as	O
many	O
layers	O
of	O
the	O
final	O
classifier	B-General_Concept
necessary	O
to	O
meet	O
some	O
confidence	O
threshold	O
,	O
speeding	O
up	O
computation	O
for	O
cases	O
where	O
the	O
class	O
of	O
the	O
object	O
can	O
easily	O
be	O
determined	O
.	O
</s>
<s>
One	O
such	O
scheme	O
is	O
the	O
object	O
detection	O
framework	O
introduced	O
by	O
Viola	O
and	O
Jones	O
:	O
in	O
an	O
application	O
with	O
significantly	O
more	O
negative	O
samples	O
than	O
positive	O
,	O
a	O
cascade	O
of	O
separate	O
boost	O
classifiers	B-General_Concept
is	O
trained	O
,	O
the	O
output	O
of	O
each	O
stage	O
biased	O
such	O
that	O
some	O
acceptably	O
small	O
fraction	O
of	O
positive	O
samples	O
is	O
mislabeled	O
as	O
negative	O
,	O
and	O
all	O
samples	O
marked	O
as	O
negative	O
after	O
each	O
stage	O
are	O
discarded	O
.	O
</s>
<s>
If	O
50%	O
of	O
negative	O
samples	O
are	O
filtered	O
out	O
by	O
each	O
stage	O
,	O
only	O
a	O
very	O
small	O
number	O
of	O
objects	O
would	O
pass	O
through	O
the	O
entire	O
classifier	B-General_Concept
,	O
reducing	O
computation	O
effort	O
.	O
</s>
<s>
In	O
the	O
field	O
of	O
statistics	O
,	O
where	O
AdaBoost	B-Algorithm
is	O
more	O
commonly	O
applied	O
to	O
problems	O
of	O
moderate	O
dimensionality	O
,	O
early	B-Algorithm
stopping	I-Algorithm
is	O
used	O
as	O
a	O
strategy	O
to	O
reduce	O
overfitting	B-Error_Name
.	O
</s>
<s>
A	O
validation	O
set	O
of	O
samples	O
is	O
separated	O
from	O
the	O
training	O
set	O
,	O
performance	O
of	O
the	O
classifier	B-General_Concept
on	O
the	O
samples	O
used	O
for	O
training	O
is	O
compared	O
to	O
performance	O
on	O
the	O
validation	O
samples	O
,	O
and	O
training	O
is	O
terminated	O
if	O
performance	O
on	O
the	O
validation	O
sample	O
is	O
seen	O
to	O
decrease	O
even	O
as	O
performance	O
on	O
the	O
training	O
set	O
continues	O
to	O
improve	O
.	O
</s>
<s>
For	O
steepest	B-Algorithm
descent	I-Algorithm
versions	O
of	O
AdaBoost	B-Algorithm
,	O
where	O
is	O
chosen	O
at	O
each	O
layer	O
t	O
to	O
minimize	O
test	O
error	O
,	O
the	O
next	O
layer	O
added	O
is	O
said	O
to	O
be	O
maximally	O
independent	O
of	O
layer	O
t	O
:	O
it	O
is	O
unlikely	O
to	O
choose	O
a	O
weak	B-Algorithm
learner	I-Algorithm
t+1	O
that	O
is	O
similar	O
to	O
learner	O
t	O
.	O
</s>
<s>
However	O
,	O
there	O
remains	O
the	O
possibility	O
that	O
t+1	O
produces	O
similar	O
information	O
to	O
some	O
other	O
earlier	O
layer	O
.	O
</s>
<s>
Totally	O
corrective	O
algorithms	O
,	O
such	O
as	O
LPBoost	B-Algorithm
,	O
optimize	O
the	O
value	O
of	O
every	O
coefficient	O
after	O
each	O
step	O
,	O
such	O
that	O
new	O
layers	O
added	O
are	O
always	O
maximally	O
independent	O
of	O
every	O
previous	O
layer	O
.	O
</s>
<s>
This	O
can	O
be	O
accomplished	O
by	O
backfitting	B-Algorithm
,	O
linear	B-Algorithm
programming	I-Algorithm
or	O
some	O
other	O
method	O
.	O
</s>
<s>
Pruning	O
is	O
the	O
process	O
of	O
removing	O
poorly	O
performing	O
weak	B-Algorithm
classifiers	I-Algorithm
to	O
improve	O
memory	O
and	O
execution-time	O
cost	O
of	O
the	O
boosted	O
classifier	B-General_Concept
.	O
</s>
<s>
The	O
simplest	O
methods	O
,	O
which	O
can	O
be	O
particularly	O
effective	O
in	O
conjunction	O
with	O
totally	O
corrective	O
training	O
,	O
are	O
weight	O
-	O
or	O
margin-trimming	O
:	O
when	O
the	O
coefficient	O
,	O
or	O
the	O
contribution	O
to	O
the	O
total	O
test	O
error	O
,	O
of	O
some	O
weak	B-Algorithm
classifier	I-Algorithm
falls	O
below	O
a	O
certain	O
threshold	O
,	O
that	O
classifier	B-General_Concept
is	O
dropped	O
.	O
</s>
<s>
Margineantu	O
&	O
Dietterich	O
suggested	O
an	O
alternative	O
criterion	O
for	O
trimming	O
:	O
weak	B-Algorithm
classifiers	I-Algorithm
should	O
be	O
selected	O
such	O
that	O
the	O
diversity	O
of	O
the	O
ensemble	O
is	O
maximized	O
.	O
</s>
<s>
If	O
two	O
weak	B-Algorithm
learners	I-Algorithm
produce	O
very	O
similar	O
outputs	O
,	O
efficiency	O
can	O
be	O
improved	O
by	O
removing	O
one	O
of	O
them	O
and	O
increasing	O
the	O
coefficient	O
of	O
the	O
remaining	O
weak	B-Algorithm
learner	I-Algorithm
.	O
</s>
