<s>
Gradient	B-Algorithm
boosting	I-Algorithm
is	O
a	O
machine	O
learning	O
technique	O
used	O
in	O
regression	O
and	O
classification	B-General_Concept
tasks	O
,	O
among	O
others	O
.	O
</s>
<s>
It	O
gives	O
a	O
prediction	O
model	O
in	O
the	O
form	O
of	O
an	O
ensemble	B-Algorithm
of	O
weak	B-Algorithm
prediction	O
models	O
,	O
which	O
are	O
typically	O
decision	B-Algorithm
trees	I-Algorithm
.	O
</s>
<s>
When	O
a	O
decision	B-Algorithm
tree	I-Algorithm
is	O
the	O
weak	B-Algorithm
learner	I-Algorithm
,	O
the	O
resulting	O
algorithm	O
is	O
called	O
gradient-boosted	B-Algorithm
trees	I-Algorithm
;	O
it	O
usually	O
outperforms	O
random	B-Algorithm
forest	I-Algorithm
.	O
</s>
<s>
A	O
gradient-boosted	B-Algorithm
trees	I-Algorithm
model	O
is	O
built	O
in	O
a	O
stage-wise	O
fashion	O
as	O
in	O
other	O
boosting	B-Algorithm
methods	O
,	O
but	O
it	O
generalizes	O
the	O
other	O
methods	O
by	O
allowing	O
optimization	O
of	O
an	O
arbitrary	O
differentiable	O
loss	O
function	O
.	O
</s>
<s>
The	O
idea	O
of	O
gradient	B-Algorithm
boosting	I-Algorithm
originated	O
in	O
the	O
observation	O
by	O
Leo	O
Breiman	O
that	O
boosting	B-Algorithm
can	O
be	O
interpreted	O
as	O
an	O
optimization	O
algorithm	O
on	O
a	O
suitable	O
cost	O
function	O
.	O
</s>
<s>
Explicit	O
regression	O
gradient	B-Algorithm
boosting	I-Algorithm
algorithms	O
were	O
subsequently	O
developed	O
,	O
by	O
Jerome	O
H	O
.	O
Friedman	O
,	O
simultaneously	O
with	O
the	O
more	O
general	O
functional	O
gradient	B-Algorithm
boosting	I-Algorithm
perspective	O
of	O
Llew	O
Mason	O
,	O
Jonathan	O
Baxter	O
,	O
Peter	O
Bartlett	O
and	O
Marcus	O
Frean	O
.	O
</s>
<s>
The	O
latter	O
two	O
papers	O
introduced	O
the	O
view	O
of	O
boosting	B-Algorithm
algorithms	O
as	O
iterative	O
functional	O
gradient	B-Algorithm
descent	I-Algorithm
algorithms	O
.	O
</s>
<s>
That	O
is	O
,	O
algorithms	O
that	O
optimize	O
a	O
cost	O
function	O
over	O
function	O
space	O
by	O
iteratively	O
choosing	O
a	O
function	O
(	O
weak	B-Algorithm
hypothesis	I-Algorithm
)	O
that	O
points	O
in	O
the	O
negative	O
gradient	O
direction	O
.	O
</s>
<s>
This	O
functional	O
gradient	O
view	O
of	O
boosting	B-Algorithm
has	O
led	O
to	O
the	O
development	O
of	O
boosting	B-Algorithm
algorithms	O
in	O
many	O
areas	O
of	O
machine	O
learning	O
and	O
statistics	O
beyond	O
regression	O
and	O
classification	B-General_Concept
.	O
</s>
<s>
(	O
This	O
section	O
follows	O
the	O
exposition	O
of	O
gradient	B-Algorithm
boosting	I-Algorithm
by	O
Cheng	O
Li	O
.	O
)	O
</s>
<s>
Like	O
other	O
boosting	B-Algorithm
methods	O
,	O
gradient	B-Algorithm
boosting	I-Algorithm
combines	O
weak	B-Algorithm
"	O
learners	O
"	O
into	O
a	O
single	O
strong	O
learner	O
in	O
an	O
iterative	O
fashion	O
.	O
</s>
<s>
It	O
is	O
easiest	O
to	O
explain	O
in	O
the	O
least-squares	B-Algorithm
regression	O
setting	O
,	O
where	O
the	O
goal	O
is	O
to	O
"	O
teach	O
"	O
a	O
model	O
to	O
predict	O
values	O
of	O
the	O
form	O
by	O
minimizing	O
the	O
mean	B-Algorithm
squared	I-Algorithm
error	I-Algorithm
,	O
where	O
indexes	O
over	O
some	O
training	O
set	O
of	O
size	O
of	O
actual	O
values	O
of	O
the	O
output	O
variable	O
:	O
</s>
<s>
Now	O
,	O
let	O
us	O
consider	O
a	O
gradient	B-Algorithm
boosting	I-Algorithm
algorithm	O
with	O
stages	O
.	O
</s>
<s>
At	O
each	O
stage	O
(	O
)	O
of	O
gradient	B-Algorithm
boosting	I-Algorithm
,	O
suppose	O
some	O
imperfect	O
model	O
(	O
for	O
low	O
,	O
this	O
model	O
may	O
simply	O
return	O
,	O
where	O
the	O
RHS	O
is	O
the	O
mean	O
of	O
)	O
.	O
</s>
<s>
Therefore	O
,	O
gradient	B-Algorithm
boosting	I-Algorithm
will	O
fit	O
to	O
the	O
residual	O
.	O
</s>
<s>
As	O
in	O
other	O
boosting	B-Algorithm
variants	O
,	O
each	O
attempts	O
to	O
correct	O
the	O
errors	O
of	O
its	O
predecessor	O
.	O
</s>
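The stage-wise idea in the sentences above (start from the mean of y, then have each new weak learner fit the residuals of the current model) can be sketched in pure Python. The decision-stump base learner and all function names here are illustrative assumptions, not taken from the source:

```python
def fit_stump(xs, residuals):
    """Pick the single threshold split minimizing squared error."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda x, t=t, lo=lmean, hi=rmean: lo if x <= t else hi

def gradient_boost(xs, ys, n_stages=10):
    """F_0 is the mean of y; each stage fits a stump to the residuals."""
    f0 = sum(ys) / len(ys)
    stumps = []
    for _ in range(n_stages):
        preds = [f0 + sum(s(x) for s in stumps) for x in xs]
        residuals = [y - p for y, p in zip(ys, preds)]
        stumps.append(fit_stump(xs, residuals))
    return lambda x: f0 + sum(s(x) for s in stumps)
```

With squared error and a full step, fitting the residuals exactly is the pseudo-residual fit of the generic algorithm.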
<s>
A	O
generalization	O
of	O
this	O
idea	O
to	O
loss	O
functions	O
other	O
than	O
squared	O
error	O
,	O
and	O
to	O
classification	B-General_Concept
and	O
ranking	O
problems	O
,	O
follows	O
from	O
the	O
observation	O
that	O
residuals	O
for	O
a	O
given	O
model	O
are	O
proportional	O
to	O
the	O
negative	O
gradients	O
of	O
the	O
mean	B-Algorithm
squared	I-Algorithm
error	I-Algorithm
(	O
MSE	O
)	O
loss	O
function	O
(	O
with	O
respect	O
to	O
)	O
:	O
</s>
<s>
So	O
,	O
gradient	B-Algorithm
boosting	I-Algorithm
could	O
be	O
specialized	O
to	O
a	O
gradient	B-Algorithm
descent	I-Algorithm
algorithm	O
,	O
and	O
generalizing	O
it	O
entails	O
"	O
plugging	O
in	O
"	O
a	O
different	O
loss	O
and	O
its	O
gradient	O
.	O
</s>
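The proportionality between residuals and negative MSE gradients can be written out; assuming the conventional per-example loss $L = \frac{1}{2}(y_i - F(x_i))^2$ (the factor $\frac{1}{2}$ is a common convention that makes the constant of proportionality exactly 1):

```latex
-\frac{\partial L}{\partial F(x_i)}
  = -\frac{\partial}{\partial F(x_i)}\,\frac{1}{2}\bigl(y_i - F(x_i)\bigr)^{2}
  = y_i - F(x_i)
```

so the residual is itself the negative gradient, and fitting the next learner to the residuals is a gradient-descent step in function space.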
<s>
In	O
many	O
supervised	B-General_Concept
learning	I-General_Concept
problems	O
there	O
is	O
an	O
output	O
variable	O
and	O
a	O
vector	O
of	O
input	O
variables	O
,	O
related	O
to	O
each	O
other	O
with	O
some	O
probabilistic	O
distribution	O
.	O
</s>
<s>
The	O
gradient	B-Algorithm
boosting	I-Algorithm
method	O
assumes	O
a	O
real-valued	O
.	O
</s>
<s>
It	O
seeks	O
an	O
approximation	O
in	O
the	O
form	O
of	O
a	O
weighted	O
sum	O
of	O
functions	O
from	O
some	O
class	O
,	O
called	O
base	O
(	O
or	O
weak	B-Algorithm
)	O
learners	O
:	O
</s>
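In standard notation (the symbols here follow common expositions of gradient boosting and are not reconstructed from the source's elided formula), the sought approximation is:

```latex
\hat{F}(x) = \sum_{m=1}^{M} \gamma_m h_m(x) + \mathrm{const},
\qquad h_m \in \mathcal{H},
```

where $\mathcal{H}$ is the class of base learners and $\gamma_m$ are the stage weights.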
<s>
In	O
accordance	O
with	O
the	O
empirical	B-General_Concept
risk	I-General_Concept
minimization	I-General_Concept
principle	O
,	O
the	O
method	O
tries	O
to	O
find	O
an	O
approximation	O
that	O
minimizes	O
the	O
average	O
value	O
of	O
the	O
loss	O
function	O
on	O
the	O
training	O
set	O
,	O
i.e.	O
,	O
minimizes	O
the	O
empirical	O
risk	O
.	O
</s>
<s>
It	O
does	O
so	O
by	O
starting	O
with	O
a	O
model	O
,	O
consisting	O
of	O
a	O
constant	O
function	O
,	O
and	O
incrementally	O
expands	O
it	O
in	O
a	O
greedy	B-Algorithm
fashion	O
:	O
</s>
<s>
The	O
idea	O
is	O
to	O
apply	O
a	O
steepest	B-Algorithm
descent	I-Algorithm
step	O
to	O
this	O
minimization	O
problem	O
(	O
functional	O
gradient	B-Algorithm
descent	I-Algorithm
)	O
.	O
</s>
<s>
The	O
basic	O
idea	O
behind	O
the	O
steepest	B-Algorithm
descent	I-Algorithm
is	O
to	O
find	O
a	O
local	O
minimum	O
of	O
the	O
loss	O
function	O
by	O
iterating	O
on	O
.	O
</s>
<s>
This	O
is	O
the	O
direction	O
of	O
steepest	B-Algorithm
ascent	I-Algorithm
and	O
hence	O
we	O
must	O
move	O
in	O
the	O
opposite	O
(	O
i.e.	O
,	O
negative	O
)	O
direction	O
in	O
order	O
to	O
move	O
in	O
the	O
direction	O
of	O
steepest	B-Algorithm
descent	I-Algorithm
.	O
</s>
<s>
when	O
the	O
set	O
is	O
finite	O
,	O
we	O
choose	O
the	O
candidate	O
function	O
closest	O
to	O
the	O
gradient	O
of	O
for	O
which	O
the	O
coefficient	O
may	O
then	O
be	O
calculated	O
with	O
the	O
aid	O
of	O
line	B-Algorithm
search	I-Algorithm
on	O
the	O
above	O
equations	O
.	O
</s>
<s>
In	O
pseudocode	O
,	O
the	O
generic	O
gradient	B-Algorithm
boosting	I-Algorithm
method	O
is	O
:	O
</s>
<s>
Fit	O
a	O
base	O
learner	O
(	O
or	O
weak	B-Algorithm
learner	I-Algorithm
,	O
e.g.	O
</s>
<s>
Compute	O
multiplier	O
by	O
solving	O
the	O
following	O
one-dimensional	B-Algorithm
optimization	I-Algorithm
problem	O
:	O
</s>
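The one-dimensional optimization for the multiplier mentioned above can be sketched as a scalar minimization over a grid; the function name and the grid bounds are illustrative assumptions, and a real implementation would use a proper scalar optimizer or a closed form where one exists:

```python
# Sketch of the line-search step: given the current predictions F(x_i) and
# the fitted base learner's outputs h(x_i), choose the multiplier gamma
# minimizing the total loss. A coarse grid stands in for a real optimizer.

def line_search(loss, ys, preds, h_outputs):
    grid = [i / 100 for i in range(0, 301)]  # candidate gammas in [0, 3]
    def total_loss(g):
        return sum(loss(y, p + g * h) for y, p, h in zip(ys, preds, h_outputs))
    return min(grid, key=total_loss)

# Squared-error example: the grid search lands on the least-squares gamma.
ys = [1.0, 2.0, 3.0]
preds = [0.0, 0.0, 0.0]            # F_{m-1} predictions
h = [0.5, 1.0, 1.5]                # base learner outputs
gamma = line_search(lambda y, f: (y - f) ** 2, ys, preds, h)  # -> 2.0
```

With squared-error loss this recovers the least-squares multiplier sum(y_i * h(x_i)) / sum(h(x_i)^2), which is 2.0 for the data above; plugging in a different loss is exactly the "plugging in" generalization described earlier.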
<s>
Gradient	B-Algorithm
boosting	I-Algorithm
is	O
typically	O
used	O
with	O
decision	B-Algorithm
trees	I-Algorithm
(	O
especially	O
CARTs	B-Algorithm
)	O
of	O
a	O
fixed	O
size	O
as	O
base	O
learners	O
.	O
</s>
<s>
For	O
this	O
special	O
case	O
,	O
Friedman	O
proposes	O
a	O
modification	O
to	O
the	O
gradient	B-Algorithm
boosting	I-Algorithm
which	O
improves	O
the	O
quality	O
of	O
fit	O
of	O
each	O
base	O
learner	O
.	O
</s>
<s>
Generic	O
gradient	B-Algorithm
boosting	I-Algorithm
at	O
the	O
m-th	O
step	O
would	O
fit	O
a	O
decision	B-Algorithm
tree	I-Algorithm
to	O
pseudo-residuals	O
.	O
</s>
<s>
Then	O
the	O
coefficients	O
are	O
multiplied	O
by	O
some	O
value	O
,	O
chosen	O
using	O
line	B-Algorithm
search	I-Algorithm
so	O
as	O
to	O
minimize	O
the	O
loss	O
function	O
,	O
and	O
the	O
model	O
is	O
updated	O
as	O
follows	O
:	O
</s>
<s>
,	O
the	O
number	O
of	O
terminal	O
nodes	O
in	O
trees	B-Algorithm
,	O
is	O
the	O
method	O
's	O
parameter	B-General_Concept
which	O
can	O
be	O
adjusted	O
for	O
a	O
data	O
set	O
at	O
hand	O
.	O
</s>
<s>
With	O
(	O
decision	B-Algorithm
stumps	I-Algorithm
)	O
,	O
no	O
interaction	O
between	O
variables	O
is	O
allowed	O
.	O
</s>
<s>
comment	O
that	O
typically	O
work	O
well	O
for	O
boosting	B-Algorithm
and	O
results	O
are	O
fairly	O
insensitive	O
to	O
the	O
choice	O
of	O
in	O
this	O
range	O
,	O
is	O
insufficient	O
for	O
many	O
applications	O
,	O
and	O
is	O
unlikely	O
to	O
be	O
required	O
.	O
</s>
<s>
Several	O
so-called	O
regularization	O
techniques	O
reduce	O
this	O
overfitting	B-Error_Name
effect	O
by	O
constraining	O
the	O
fitting	O
procedure	O
.	O
</s>
<s>
One	O
natural	O
regularization	O
parameter	B-General_Concept
is	O
the	O
number	O
of	O
gradient	B-Algorithm
boosting	I-Algorithm
iterations	O
M	O
(	O
i.e.	O
</s>
<s>
the	O
number	O
of	O
trees	B-Algorithm
in	O
the	O
model	O
when	O
the	O
base	O
learner	O
is	O
a	O
decision	B-Algorithm
tree	I-Algorithm
)	O
.	O
</s>
<s>
Increasing	O
M	O
reduces	O
the	O
error	O
on	O
the	O
training	O
set	O
,	O
but	O
setting	O
it	O
too	O
high	O
may	O
lead	O
to	O
overfitting	B-Error_Name
.	O
</s>
<s>
Another	O
regularization	O
parameter	B-General_Concept
is	O
the	O
depth	O
of	O
the	O
trees	B-Algorithm
.	O
</s>
<s>
The	O
higher	O
this	O
value	O
the	O
more	O
likely	O
the	O
model	O
will	O
overfit	B-Error_Name
the	O
training	O
data	O
.	O
</s>
<s>
An	O
important	O
part	O
of	O
the	O
gradient	B-Algorithm
boosting	I-Algorithm
method	O
is	O
regularization	O
by	O
shrinkage	O
which	O
consists	O
in	O
modifying	O
the	O
update	O
rule	O
as	O
follows	O
:	O
</s>
<s>
where	O
parameter	B-General_Concept
is	O
called	O
the	O
"	O
learning	B-General_Concept
rate	I-General_Concept
"	O
.	O
</s>
<s>
Empirically	O
it	O
has	O
been	O
found	O
that	O
using	O
small	O
learning	B-General_Concept
rates	I-General_Concept
(	O
such	O
as	O
)	O
yields	O
dramatic	O
improvements	O
in	O
models	O
'	O
generalization	O
ability	O
over	O
gradient	B-Algorithm
boosting	I-Algorithm
without	O
shrinking	O
(	O
)	O
.	O
</s>
<s>
However	O
,	O
it	O
comes	O
at	O
the	O
price	O
of	O
increasing	O
computational	O
time	O
both	O
during	O
training	O
and	O
querying	O
:	O
lower	O
learning	B-General_Concept
rate	I-General_Concept
requires	O
more	O
iterations	O
.	O
</s>
<s>
Soon	O
after	O
the	O
introduction	O
of	O
gradient	B-Algorithm
boosting	I-Algorithm
,	O
Friedman	O
proposed	O
a	O
minor	O
modification	O
to	O
the	O
algorithm	O
,	O
motivated	O
by	O
Breiman	O
's	O
bootstrap	B-Algorithm
aggregation	I-Algorithm
(	O
"	O
bagging	B-Algorithm
"	O
)	O
method	O
.	O
</s>
<s>
Friedman	O
observed	O
a	O
substantial	O
improvement	O
in	O
gradient	B-Algorithm
boosting	I-Algorithm
's	O
accuracy	O
with	O
this	O
modification	O
.	O
</s>
<s>
Smaller	O
values	O
of	O
introduce	O
randomness	O
into	O
the	O
algorithm	O
and	O
help	O
prevent	O
overfitting	B-Error_Name
,	O
acting	O
as	O
a	O
kind	O
of	O
regularization	O
.	O
</s>
<s>
The	O
algorithm	O
also	O
becomes	O
faster	O
,	O
because	O
regression	B-Algorithm
trees	I-Algorithm
have	O
to	O
be	O
fit	O
to	O
smaller	O
datasets	O
at	O
each	O
iteration	O
.	O
</s>
<s>
Also	O
,	O
like	O
in	O
bagging	B-Algorithm
,	O
subsampling	O
allows	O
one	O
to	O
define	O
an	O
out-of-bag	B-Algorithm
error	I-Algorithm
of	O
the	O
prediction	O
performance	O
improvement	O
by	O
evaluating	O
predictions	O
on	O
those	O
observations	O
which	O
were	O
not	O
used	O
in	O
the	O
building	O
of	O
the	O
next	O
base	O
learner	O
.	O
</s>
<s>
Gradient	B-Algorithm
tree	I-Algorithm
boosting	I-Algorithm
implementations	O
often	O
also	O
use	O
regularization	O
by	O
limiting	O
the	O
minimum	O
number	O
of	O
observations	O
in	O
trees	B-Algorithm
'	O
terminal	O
nodes	O
.	O
</s>
<s>
Another	O
useful	O
regularization	O
technique	O
for	O
gradient	B-Algorithm
boosted	I-Algorithm
trees	I-Algorithm
is	O
to	O
penalize	O
model	O
complexity	O
of	O
the	O
learned	O
model	O
.	O
</s>
<s>
The	O
model	O
complexity	O
can	O
be	O
defined	O
as	O
the	O
proportional	O
number	O
of	O
leaves	O
in	O
the	O
learned	O
trees	B-Algorithm
.	O
</s>
<s>
Other	O
kinds	O
of	O
regularization	O
such	O
as	O
an	O
penalty	O
on	O
the	O
leaf	O
values	O
can	O
also	O
be	O
added	O
to	O
avoid	O
overfitting	B-Error_Name
.	O
</s>
<s>
Gradient	B-Algorithm
boosting	I-Algorithm
can	O
be	O
used	O
in	O
the	O
field	O
of	O
learning	O
to	O
rank	O
.	O
</s>
<s>
The	O
commercial	O
web	O
search	O
engines	O
Yahoo	B-Application
and	O
Yandex	B-Application
use	O
variants	O
of	O
gradient	B-Algorithm
boosting	I-Algorithm
in	O
their	O
machine-learned	O
ranking	O
engines	O
.	O
</s>
<s>
Gradient	B-Algorithm
boosting	I-Algorithm
is	O
also	O
utilized	O
in	O
High	O
Energy	O
Physics	O
in	O
data	O
analysis	O
.	O
</s>
<s>
At	O
the	O
Large	O
Hadron	O
Collider	O
(	O
LHC	O
)	O
,	O
variants	O
of	O
gradient	B-Algorithm
boosting	I-Algorithm
Deep	O
Neural	O
Networks	O
(	O
DNN	O
)	O
were	O
successful	O
in	O
reproducing	O
the	O
results	O
of	O
non-machine	O
learning	O
methods	O
of	O
analysis	O
on	O
datasets	O
used	O
to	O
discover	O
the	O
Higgs	O
boson	O
.	O
</s>
<s>
Gradient	B-Algorithm
boosting	I-Algorithm
decision	B-Algorithm
tree	I-Algorithm
was	O
also	O
applied	O
in	O
earth	O
and	O
geological	O
studies	O
–	O
for	O
example	O
quality	O
evaluation	O
of	O
sandstone	O
reservoir	O
.	O
</s>
<s>
Friedman	O
introduced	O
his	O
regression	O
technique	O
as	O
a	O
"	O
Gradient	B-Algorithm
Boosting	I-Algorithm
Machine	I-Algorithm
"	O
(	O
GBM	O
)	O
.	O
</s>
<s>
described	O
the	O
generalized	O
abstract	O
class	O
of	O
algorithms	O
as	O
"	O
functional	O
gradient	B-Algorithm
boosting	I-Algorithm
"	O
.	O
</s>
<s>
describe	O
an	O
advancement	O
of	O
gradient	O
boosted	O
models	O
as	O
Multiple	B-Algorithm
Additive	I-Algorithm
Regression	I-Algorithm
Trees	I-Algorithm
(	O
MART	O
)	O
;	O
Elith	O
et	O
al	O
.	O
</s>
<s>
describe	O
that	O
approach	O
as	O
"	O
Boosted	B-Algorithm
Regression	I-Algorithm
Trees	I-Algorithm
"	O
(	O
BRT	O
)	O
.	O
</s>
<s>
A	O
popular	O
open-source	O
implementation	O
for	O
R	B-Language
calls	O
it	O
a	O
"	O
Generalized	B-Algorithm
Boosting	I-Algorithm
Model	I-Algorithm
"	O
,	O
however	O
packages	O
expanding	O
this	O
work	O
use	O
BRT	O
.	O
</s>
<s>
XGBoost	B-Library
is	O
another	O
popular	O
modern	O
implementation	O
of	O
the	O
method	O
with	O
some	O
extensions	O
,	O
like	O
second-order	O
optimization	O
.	O
</s>
<s>
While	O
boosting	B-Algorithm
can	O
increase	O
the	O
accuracy	O
of	O
a	O
base	O
learner	O
,	O
such	O
as	O
a	O
decision	B-Algorithm
tree	I-Algorithm
or	O
linear	O
regression	O
,	O
it	O
sacrifices	O
intelligibility	O
and	O
interpretability	O
.	O
</s>
<s>
For	O
example	O
,	O
following	O
the	O
path	O
that	O
a	O
decision	B-Algorithm
tree	I-Algorithm
takes	O
to	O
make	O
its	O
decision	O
is	O
trivial	O
and	O
self-explained	O
,	O
but	O
following	O
the	O
paths	O
of	O
hundreds	O
or	O
thousands	O
of	O
trees	B-Algorithm
is	O
much	O
harder	O
.	O
</s>
<s>
To	O
achieve	O
both	O
performance	O
and	O
interpretability	O
,	O
some	O
model	O
compression	O
techniques	O
allow	O
transforming	O
an	O
XGBoost	B-Library
into	O
a	O
single	O
"	O
born-again	O
"	O
decision	B-Algorithm
tree	I-Algorithm
that	O
approximates	O
the	O
same	O
decision	O
function	O
.	O
</s>
