<s>
Dilution	B-Algorithm
and	O
dropout	B-Algorithm
(	O
also	O
called	O
DropConnect	O
)	O
are	O
regularization	O
techniques	O
for	O
reducing	O
overfitting	B-Error_Name
in	O
artificial	B-Architecture
neural	I-Architecture
networks	I-Architecture
by	O
preventing	O
complex	O
co-adaptations	O
on	O
training	O
data	O
.	O
</s>
<s>
They	O
are	O
an	O
efficient	O
way	O
of	O
performing	O
model	O
averaging	O
with	O
neural	B-Architecture
networks	I-Architecture
.	O
</s>
<s>
Dilution	B-Algorithm
refers	O
to	O
thinning	O
weights	O
,	O
while	O
dropout	B-Algorithm
refers	O
to	O
randomly	O
"	O
dropping	O
out	O
"	O
,	O
or	O
omitting	O
,	O
units	O
(	O
both	O
hidden	O
and	O
visible	O
)	O
during	O
the	O
training	O
process	O
of	O
a	O
neural	B-Architecture
network	I-Architecture
.	O
</s>
<s>
Dilution	B-Algorithm
is	O
usually	O
split	O
in	O
weak	O
dilution	B-Algorithm
and	O
strong	O
dilution	B-Algorithm
.	O
</s>
<s>
Weak	O
dilution	B-Algorithm
describes	O
the	O
process	O
in	O
which	O
the	O
finite	O
fraction	O
of	O
removed	O
connections	O
is	O
small	O
,	O
and	O
strong	O
dilution	B-Algorithm
refers	O
to	O
when	O
this	O
fraction	O
is	O
large	O
.	O
</s>
<s>
There	O
is	O
no	O
clear	O
distinction	O
on	O
where	O
the	O
limit	O
between	O
strong	O
and	O
weak	O
dilution	B-Algorithm
is	O
,	O
and	O
often	O
the	O
distinction	O
is	O
dependent	O
on	O
the	O
precedent	O
of	O
a	O
specific	O
use-case	O
and	O
has	O
implications	O
for	O
how	O
to	O
solve	O
for	O
exact	O
solutions	O
.	O
</s>
<s>
Sometimes	O
dilution	B-Algorithm
is	O
used	O
for	O
adding	O
damping	O
noise	O
to	O
the	O
inputs	O
.	O
</s>
<s>
In	O
that	O
case	O
,	O
weak	O
dilution	B-Algorithm
refers	O
to	O
adding	O
a	O
small	O
amount	O
of	O
damping	O
noise	O
,	O
while	O
strong	O
dilution	B-Algorithm
refers	O
to	O
adding	O
a	O
greater	O
amount	O
of	O
damping	O
noise	O
.	O
</s>
<s>
Both	O
can	O
be	O
rewritten	O
as	O
variants	O
of	O
weight	O
dilution	B-Algorithm
.	O
</s>
<s>
Dilution	B-Algorithm
and	O
dropout	B-Algorithm
both	O
refer	O
to	O
an	O
iterative	O
process	O
.	O
</s>
<s>
The	O
pruning	O
of	O
weights	O
typically	O
does	O
not	O
imply	O
that	O
the	O
network	O
continues	O
learning	O
,	O
while	O
in	O
dilution/dropout	O
,	O
the	O
network	O
continues	O
to	O
learn	O
after	O
the	O
technique	O
is	O
applied	O
.	O
</s>
<s>
During	O
weak	O
dilution	B-Algorithm
,	O
the	O
finite	O
fraction	O
of	O
removed	O
connections	O
(	O
the	O
weights	O
)	O
is	O
small	O
,	O
giving	O
rise	O
to	O
a	O
tiny	O
uncertainty	O
.	O
</s>
<s>
where	O
the	O
function	O
imposes	O
the	O
previous	O
dilution	B-Algorithm
.	O
</s>
<s>
In	O
weak	O
dilution	B-Algorithm
only	O
a	O
small	O
and	O
fixed	O
fraction	O
of	O
the	O
weights	O
are	O
diluted	O
.	O
</s>
<s>
When	O
the	O
dilution	B-Algorithm
is	O
strong	O
,	O
the	O
finite	O
fraction	O
of	O
removed	O
connections	O
(	O
the	O
weights	O
)	O
is	O
large	O
,	O
giving	O
rise	O
to	O
a	O
huge	O
uncertainty	O
.	O
</s>
<s>
Because	O
dropout	B-Algorithm
removes	O
a	O
whole	O
row	O
from	O
the	O
vector	O
matrix	O
,	O
the	O
previous	O
(	O
unlisted	O
)	O
assumptions	O
for	O
weak	O
dilution	B-Algorithm
and	O
the	O
use	O
of	O
mean	O
field	O
theory	O
are	O
not	O
applicable	O
.	O
</s>
<s>
If	O
the	O
neural	B-Architecture
net	I-Architecture
is	O
processed	O
by	O
a	O
high-performance	O
digital	O
array-multiplicator	O
,	O
then	O
it	O
is	O
likely	O
more	O
effective	O
to	O
drive	O
the	O
value	O
to	O
zero	O
late	O
in	O
the	O
process	O
graph	O
.	O
</s>
<s>
Although	O
there	O
have	O
been	O
examples	O
of	O
randomly	O
removing	O
connections	O
between	O
neurons	B-Algorithm
in	O
a	O
neural	B-Architecture
network	I-Architecture
to	O
improve	O
models	O
,	O
this	O
technique	O
was	O
first	O
introduced	O
with	O
the	O
name	O
dropout	B-Algorithm
by	O
Geoffrey	O
Hinton	O
,	O
et	O
al	O
.	O
</s>
<s>
Google	B-Application
currently	O
holds	O
the	O
patent	O
for	O
the	O
dropout	B-Algorithm
technique	O
.	O
</s>
