<s>
In	O
deep	B-Algorithm
learning	I-Algorithm
,	O
a	O
convolutional	B-Architecture
neural	I-Architecture
network	I-Architecture
(	O
CNN	B-Architecture
)	O
is	O
a	O
class	O
of	O
artificial	B-Architecture
neural	I-Architecture
network	I-Architecture
most	O
commonly	O
applied	O
to	O
analyze	O
visual	O
imagery	O
.	O
</s>
<s>
CNNs	B-Architecture
use	O
a	O
mathematical	O
operation	O
called	O
convolution	O
in	O
place	O
of	O
general	O
matrix	O
multiplication	O
in	O
at	O
least	O
one	O
of	O
their	O
layers	O
.	O
</s>
<s>
They	O
have	O
applications	O
in	O
image	B-Application
and	I-Application
video	I-Application
recognition	I-Application
,	O
recommender	B-Application
systems	I-Application
,	O
image	B-Application
classification	I-Application
,	O
image	B-Application
segmentation	I-Application
,	O
medical	B-Application
image	I-Application
analysis	I-Application
,	O
natural	B-Application
language	I-Application
processing	I-Application
,	O
brain	B-Application
–	I-Application
computer	I-Application
interfaces	I-Application
,	O
and	O
financial	O
time	O
series	O
.	O
</s>
<s>
CNNs	B-Architecture
are	O
also	O
known	O
as	O
Shift	O
Invariant	O
or	O
Space	O
Invariant	O
Artificial	B-Architecture
Neural	I-Architecture
Networks	I-Architecture
(	O
SIANN	O
)	O
,	O
based	O
on	O
the	O
shared-weight	O
architecture	O
of	O
the	O
convolution	O
kernels	O
or	O
filters	O
that	O
slide	O
along	O
input	O
features	O
and	O
provide	O
translation-equivariant	O
responses	O
known	O
as	O
feature	B-Algorithm
maps	I-Algorithm
.	O
</s>
<s>
Counter-intuitively	O
,	O
most	O
convolutional	B-Architecture
neural	I-Architecture
networks	I-Architecture
are	O
not	O
invariant	O
to	O
translation	O
,	O
due	O
to	O
the	O
downsampling	B-Algorithm
operation	O
they	O
apply	O
to	O
the	O
input	O
.	O
</s>
<s>
CNNs	B-Architecture
are	O
regularized	O
versions	O
of	O
multilayer	B-Algorithm
perceptrons	I-Algorithm
.	O
</s>
<s>
Multilayer	B-Algorithm
perceptrons	I-Algorithm
are	O
usually	O
fully	O
connected	O
networks	O
,	O
that	O
is	O
,	O
each	O
neuron	O
in	O
one	O
layer	B-Algorithm
is	O
connected	O
to	O
all	O
neurons	B-Algorithm
in	O
the	O
next	O
layer	B-Algorithm
.	O
</s>
<s>
The	O
"	O
full	O
connectivity	O
"	O
of	O
these	O
networks	O
makes	O
them	O
prone	O
to	O
overfitting	B-Error_Name
data	O
.	O
</s>
<s>
Typical	O
ways	O
of	O
regularization	O
,	O
or	O
preventing	O
overfitting	B-Error_Name
,	O
include	O
:	O
penalizing	O
parameters	O
during	O
training	O
(	O
such	O
as	O
weight	O
decay	O
)	O
or	O
trimming	O
connectivity	O
(	O
skipped	O
connections	O
,	O
dropout	B-Algorithm
,	O
etc	O
.	O
)	O
</s>
<s>
Developing	O
robust	O
datasets	O
also	O
increases	O
the	O
probability	O
that	O
CNNs	B-Architecture
will	O
learn	O
the	O
generalized	O
principles	O
that	O
characterize	O
a	O
given	O
dataset	O
rather	O
than	O
the	O
biases	O
of	O
a	O
poorly-populated	O
set	O
.	O
</s>
<s>
CNNs	B-Architecture
take	O
a	O
different	O
approach	O
towards	O
regularization	O
:	O
they	O
take	O
advantage	O
of	O
the	O
hierarchical	O
pattern	O
in	O
data	O
.	O
</s>
<s>
Meaning	O
,	O
CNNs	B-Architecture
utilize	O
the	O
hierarchical	O
structure	O
of	O
the	O
data	O
they	O
are	O
processing	O
.	O
</s>
<s>
Instead	O
of	O
trying	O
to	O
process	O
the	O
entire	O
image	O
or	O
input	O
at	O
once	O
,	O
CNNs	B-Architecture
break	O
it	O
down	O
into	O
smaller	O
,	O
simpler	O
features	O
,	O
which	O
are	O
represented	O
by	O
filters	O
.	O
</s>
<s>
This	O
hierarchical	O
approach	O
allows	O
CNNs	B-Architecture
to	O
efficiently	O
learn	O
complex	O
patterns	O
in	O
data	O
,	O
while	O
minimizing	O
the	O
risk	O
of	O
overfitting	B-Error_Name
.	O
</s>
<s>
Therefore	O
,	O
on	O
a	O
scale	O
of	O
connectivity	O
and	O
complexity	O
,	O
CNNs	B-Architecture
are	O
on	O
the	O
lower	O
extreme	O
.	O
</s>
<s>
Convolutional	O
networks	O
were	O
inspired	O
by	O
biological	O
processes	O
in	O
that	O
the	O
connectivity	O
pattern	O
between	O
neurons	B-Algorithm
resembles	O
the	O
organization	O
of	O
the	O
animal	O
visual	O
cortex	O
.	O
</s>
<s>
Individual	O
cortical	O
neurons	B-Algorithm
respond	O
to	O
stimuli	O
only	O
in	O
a	O
restricted	O
region	O
of	O
the	O
visual	O
field	O
known	O
as	O
the	O
receptive	O
field	O
.	O
</s>
<s>
The	O
receptive	O
fields	O
of	O
different	O
neurons	B-Algorithm
partially	O
overlap	O
such	O
that	O
they	O
cover	O
the	O
entire	O
visual	O
field	O
.	O
</s>
<s>
CNNs	B-Architecture
use	O
relatively	O
little	O
pre-processing	O
compared	O
to	O
other	O
image	O
classification	O
algorithms	O
.	O
</s>
<s>
This	O
means	O
that	O
the	O
network	O
learns	O
to	O
optimize	O
the	O
filters	O
(	O
or	O
kernels	O
)	O
through	O
automated	O
learning	O
,	O
whereas	O
in	O
traditional	O
algorithms	O
these	O
filters	O
are	O
hand-engineered	O
.	O
</s>
<s>
This	O
independence	O
from	O
prior	O
knowledge	O
and	O
human	O
intervention	O
in	O
feature	B-Algorithm
extraction	I-Algorithm
is	O
a	O
major	O
advantage	O
.	O
</s>
<s>
A	O
convolutional	B-Architecture
neural	I-Architecture
network	I-Architecture
consists	O
of	O
an	O
input	O
layer	B-Algorithm
,	O
hidden	O
layers	O
and	O
an	O
output	O
layer	B-Algorithm
.	O
</s>
<s>
In	O
a	O
convolutional	B-Architecture
neural	I-Architecture
network	I-Architecture
,	O
the	O
hidden	O
layers	O
include	O
one	O
or	O
more	O
layers	O
that	O
perform	O
convolutions	O
.	O
</s>
<s>
Typically	O
this	O
includes	O
a	O
layer	B-Algorithm
that	O
performs	O
a	O
dot	O
product	O
of	O
the	O
convolution	O
kernel	O
with	O
the	O
layer	B-Algorithm
's	O
input	O
matrix	O
.	O
</s>
<s>
This	O
product	O
is	O
usually	O
the	O
Frobenius	O
inner	O
product	O
,	O
and	O
its	O
activation	B-Algorithm
function	I-Algorithm
is	O
commonly	O
ReLU	B-Algorithm
.	O
</s>
<s>
As	O
the	O
convolution	O
kernel	O
slides	O
along	O
the	O
input	O
matrix	O
for	O
the	O
layer	B-Algorithm
,	O
the	O
convolution	O
operation	O
generates	O
a	O
feature	B-Algorithm
map	I-Algorithm
,	O
which	O
in	O
turn	O
contributes	O
to	O
the	O
input	O
of	O
the	O
next	O
layer	B-Algorithm
.	O
</s>
<s>
In	O
a	O
CNN	B-Architecture
,	O
the	O
input	O
is	O
a	O
tensor	O
with	O
shape	O
:	O
(	O
number	O
of	O
inputs	O
)	O
×	O
(	O
input	O
height	O
)	O
×	O
(	O
input	O
width	O
)	O
×	O
(	O
input	O
channels	B-Algorithm
)	O
.	O
</s>
<s>
After	O
passing	O
through	O
a	O
convolutional	O
layer	B-Algorithm
,	O
the	O
image	O
becomes	O
abstracted	O
to	O
a	O
feature	B-Algorithm
map	I-Algorithm
,	O
also	O
called	O
an	O
activation	B-Algorithm
map	I-Algorithm
,	O
with	O
shape	O
:	O
(	O
number	O
of	O
inputs	O
)	O
×	O
(	O
feature	B-Algorithm
map	I-Algorithm
height	O
)	O
×	O
(	O
feature	B-Algorithm
map	I-Algorithm
width	O
)	O
×	O
(	O
feature	B-Algorithm
map	I-Algorithm
channels	B-Algorithm
)	O
.	O
</s>
<s>
Convolutional	O
layers	O
convolve	O
the	O
input	O
and	O
pass	O
its	O
result	O
to	O
the	O
next	O
layer	B-Algorithm
.	O
</s>
<s>
Although	O
fully	B-Algorithm
connected	I-Algorithm
feedforward	I-Algorithm
neural	I-Algorithm
networks	I-Algorithm
can	O
be	O
used	O
to	O
learn	O
features	O
and	O
classify	O
data	O
,	O
this	O
architecture	O
is	O
generally	O
impractical	O
for	O
larger	O
inputs	O
(	O
e.g.	O
,	O
high-resolution	O
images	O
)	O
,	O
which	O
would	O
require	O
massive	O
numbers	O
of	O
neurons	B-Algorithm
because	O
each	O
pixel	O
is	O
a	O
relevant	O
input	O
feature	O
.	O
</s>
<s>
A	O
fully	O
connected	O
layer	B-Algorithm
for	O
an	O
image	O
of	O
size	O
100	O
×	O
100	O
has	O
10,000	O
weights	O
for	O
each	O
neuron	O
in	O
the	O
second	O
layer	B-Algorithm
.	O
</s>
<s>
Convolution	O
reduces	O
the	O
number	O
of	O
free	O
parameters	O
,	O
allowing	O
the	O
network	O
to	O
be	O
deeper	O
.	O
</s>
<s>
For	O
example	O
,	O
using	O
a	O
5	O
×	O
5	O
tiling	O
region	O
,	O
each	O
with	O
the	O
same	O
shared	O
weights	O
,	O
requires	O
only	O
25	O
learnable	O
parameters	O
.	O
</s>
<s>
Using	O
regularized	O
weights	O
over	O
fewer	O
parameters	O
avoids	O
the	O
vanishing	O
gradients	O
and	O
exploding	O
gradients	O
problems	O
seen	O
during	O
backpropagation	B-Algorithm
in	O
earlier	O
neural	B-Architecture
networks	I-Architecture
.	O
</s>
<s>
To	O
speed	O
processing	O
,	O
standard	O
convolutional	O
layers	O
can	O
be	O
replaced	O
by	O
depthwise	O
separable	O
convolutional	O
layers	O
,	O
which	O
are	O
based	O
on	O
a	O
depthwise	O
convolution	O
followed	O
by	O
a	O
pointwise	O
convolution	O
.	O
</s>
<s>
The	O
depthwise	O
convolution	O
is	O
a	O
spatial	O
convolution	O
applied	O
independently	O
over	O
each	O
channel	O
of	O
the	O
input	O
tensor	O
,	O
while	O
the	O
pointwise	O
convolution	O
is	O
a	O
standard	O
convolution	O
restricted	O
to	O
the	O
use	O
of	O
1x1	O
kernels	O
.	O
</s>
<s>
Pooling	O
layers	O
reduce	O
the	O
dimensions	O
of	O
data	O
by	O
combining	O
the	O
outputs	O
of	O
neuron	O
clusters	O
at	O
one	O
layer	B-Algorithm
into	O
a	O
single	O
neuron	O
in	O
the	O
next	O
layer	B-Algorithm
.	O
</s>
<s>
Global	O
pooling	O
acts	O
on	O
all	O
the	O
neurons	B-Algorithm
of	O
the	O
feature	B-Algorithm
map	I-Algorithm
.	O
</s>
<s>
Max	O
pooling	O
uses	O
the	O
maximum	O
value	O
of	O
each	O
local	O
cluster	O
of	O
neurons	B-Algorithm
in	O
the	O
feature	B-Algorithm
map	I-Algorithm
,	O
while	O
average	O
pooling	O
takes	O
the	O
average	O
value	O
.	O
</s>
<s>
Fully	O
connected	O
layers	O
connect	O
every	O
neuron	O
in	O
one	O
layer	B-Algorithm
to	O
every	O
neuron	O
in	O
another	O
layer	B-Algorithm
.	O
</s>
<s>
It	O
is	O
the	O
same	O
as	O
a	O
traditional	O
multilayer	B-Algorithm
perceptron	I-Algorithm
neural	B-Architecture
network	I-Architecture
(	O
MLP	O
)	O
.	O
</s>
<s>
The	O
flattened	O
matrix	O
goes	O
through	O
a	O
fully	O
connected	O
layer	B-Algorithm
to	O
classify	O
the	O
images	O
.	O
</s>
<s>
In	O
neural	B-Architecture
networks	I-Architecture
,	O
each	O
neuron	O
receives	O
input	O
from	O
some	O
number	O
of	O
locations	O
in	O
the	O
previous	O
layer	B-Algorithm
.	O
</s>
<s>
In	O
a	O
convolutional	O
layer	B-Algorithm
,	O
each	O
neuron	O
receives	O
input	O
from	O
only	O
a	O
restricted	O
area	O
of	O
the	O
previous	O
layer	B-Algorithm
called	O
the	O
neuron	O
's	O
receptive	O
field	O
.	O
</s>
<s>
Typically	O
the	O
area	O
is	O
a	O
square	O
(	O
e.g.	O
,	O
5	O
by	O
5	O
neurons	B-Algorithm
)	O
.	O
</s>
<s>
Whereas	O
,	O
in	O
a	O
fully	O
connected	O
layer	B-Algorithm
,	O
the	O
receptive	O
field	O
is	O
the	O
entire	O
previous	O
layer	B-Algorithm
.	O
</s>
<s>
Thus	O
,	O
in	O
each	O
convolutional	O
layer	B-Algorithm
,	O
each	O
neuron	O
takes	O
input	O
from	O
a	O
larger	O
area	O
in	O
the	O
input	O
than	O
previous	O
layers	O
.	O
</s>
<s>
This	O
is	O
due	O
to	O
applying	O
the	O
convolution	O
over	O
and	O
over	O
,	O
which	O
takes	O
into	O
account	O
the	O
value	O
of	O
a	O
pixel	O
,	O
as	O
well	O
as	O
its	O
surrounding	O
pixels	O
.	O
</s>
<s>
To	O
manipulate	O
the	O
receptive	O
field	O
size	O
as	O
desired	O
,	O
there	O
are	O
some	O
alternatives	O
to	O
the	O
standard	O
convolutional	O
layer	B-Algorithm
.	O
</s>
<s>
For	O
example	O
,	O
atrous	O
or	O
dilated	O
convolution	O
expands	O
the	O
receptive	O
field	O
size	O
without	O
increasing	O
the	O
number	O
of	O
parameters	O
by	O
interleaving	O
visible	O
and	O
blind	O
regions	O
.	O
</s>
<s>
Moreover	O
,	O
a	O
single	O
dilated	O
convolutional	O
layer	B-Algorithm
can	O
comprise	O
filters	O
with	O
multiple	O
dilation	O
ratios	O
,	O
thus	O
having	O
a	O
variable	O
receptive	O
field	O
size	O
.	O
</s>
<s>
Each	O
neuron	O
in	O
a	O
neural	B-Architecture
network	I-Architecture
computes	O
an	O
output	O
value	O
by	O
applying	O
a	O
specific	O
function	O
to	O
the	O
input	O
values	O
received	O
from	O
the	O
receptive	O
field	O
in	O
the	O
previous	O
layer	B-Algorithm
.	O
</s>
<s>
A	O
distinguishing	O
feature	O
of	O
CNNs	B-Architecture
is	O
that	O
many	O
neurons	B-Algorithm
can	O
share	O
the	O
same	O
filter	O
.	O
</s>
<s>
CNN	B-Architecture
are	O
often	O
compared	O
to	O
the	O
way	O
the	O
brain	O
achieves	O
vision	O
processing	O
in	O
living	O
organisms	O
.	O
</s>
<s>
Work	O
by	O
Hubel	O
and	O
Wiesel	O
in	O
the	O
1950s	O
and	O
1960s	O
showed	O
that	O
cat	O
visual	O
cortices	O
contain	O
neurons	B-Algorithm
that	O
individually	O
respond	O
to	O
small	O
regions	O
of	O
the	O
visual	O
field	O
.	O
</s>
<s>
The	O
"	O
neocognitron	B-Algorithm
"	O
was	O
introduced	O
by	O
Kunihiko	O
Fukushima	O
in	O
1980	O
.	O
</s>
<s>
The	O
neocognitron	B-Algorithm
introduced	O
the	O
two	O
basic	O
types	O
of	O
layers	O
in	O
CNNs	B-Architecture
:	O
convolutional	O
layers	O
,	O
and	O
downsampling	B-Algorithm
layers	O
.	O
</s>
<s>
A	O
convolutional	O
layer	B-Algorithm
contains	O
units	O
whose	O
receptive	O
fields	O
cover	O
a	O
patch	O
of	O
the	O
previous	O
layer	B-Algorithm
.	O
</s>
<s>
Downsampling	B-Algorithm
layers	O
contain	O
units	O
whose	O
receptive	O
fields	O
cover	O
patches	O
of	O
previous	O
convolutional	O
layers	O
.	O
</s>
<s>
This	O
downsampling	B-Algorithm
helps	O
to	O
correctly	O
classify	O
objects	O
in	O
visual	O
scenes	O
even	O
when	O
the	O
objects	O
are	O
shifted	O
.	O
</s>
<s>
In	O
1969	O
,	O
Kunihiko	O
Fukushima	O
also	O
introduced	O
the	O
ReLU	B-Algorithm
(	O
rectified	B-Algorithm
linear	I-Algorithm
unit	I-Algorithm
)	O
activation	B-Algorithm
function	I-Algorithm
.	O
</s>
<s>
The	O
rectifier	B-Algorithm
has	O
become	O
the	O
most	O
popular	O
activation	B-Algorithm
function	I-Algorithm
for	O
CNNs	B-Architecture
and	O
deep	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
in	O
general	O
.	O
</s>
<s>
In	O
a	O
variant	O
of	O
the	O
neocognitron	B-Algorithm
called	O
the	O
cresceptron	O
,	O
instead	O
of	O
using	O
Fukushima	O
's	O
spatial	O
averaging	O
,	O
J	O
.	O
Weng	O
et	O
al	O
.	O
in	O
1993	O
introduced	O
a	O
method	O
called	O
max-pooling	O
where	O
a	O
downsampling	B-Algorithm
unit	O
computes	O
the	O
maximum	O
of	O
the	O
activations	O
of	O
the	O
units	O
in	O
its	O
patch	O
.	O
</s>
<s>
Max-pooling	O
is	O
often	O
used	O
in	O
modern	O
CNNs	B-Architecture
.	O
</s>
<s>
Several	O
supervised	O
and	O
unsupervised	B-General_Concept
learning	I-General_Concept
algorithms	O
have	O
been	O
proposed	O
over	O
the	O
decades	O
to	O
train	O
the	O
weights	O
of	O
a	O
neocognitron	B-Algorithm
.	O
</s>
<s>
Today	O
,	O
however	O
,	O
the	O
CNN	B-Architecture
architecture	O
is	O
usually	O
trained	O
through	O
backpropagation	B-Algorithm
.	O
</s>
<s>
The	O
neocognitron	B-Algorithm
is	O
the	O
first	O
CNN	B-Architecture
which	O
requires	O
units	O
located	O
at	O
multiple	O
network	O
positions	O
to	O
have	O
shared	O
weights	O
.	O
</s>
<s>
Convolutional	B-Architecture
neural	I-Architecture
networks	I-Architecture
were	O
presented	O
at	O
the	O
Neural	O
Information	O
Processing	O
Workshop	O
in	O
1987	O
,	O
automatically	O
analyzing	O
time-varying	O
signals	O
by	O
replacing	O
learned	O
multiplication	O
with	O
convolution	O
in	O
time	O
,	O
and	O
demonstrated	O
for	O
speech	O
recognition	O
.	O
</s>
<s>
The	O
time	B-Algorithm
delay	I-Algorithm
neural	I-Algorithm
network	I-Algorithm
(	O
TDNN	B-Algorithm
)	O
was	O
introduced	O
in	O
1987	O
by	O
Alex	O
Waibel	O
et	O
al	O
.	O
</s>
<s>
It	O
did	O
so	O
by	O
utilizing	O
weight	O
sharing	O
in	O
combination	O
with	O
backpropagation	B-Algorithm
training	O
.	O
</s>
<s>
Thus	O
,	O
while	O
also	O
using	O
a	O
pyramidal	O
structure	O
as	O
in	O
the	O
neocognitron	B-Algorithm
,	O
it	O
performed	O
a	O
global	O
optimization	O
of	O
the	O
weights	O
instead	O
of	O
a	O
local	O
one	O
.	O
</s>
<s>
TDNNs	B-Algorithm
are	O
convolutional	O
networks	O
that	O
share	O
weights	O
along	O
the	O
temporal	O
dimension	O
.	O
</s>
<s>
In	O
1990	O
Hampshire	O
and	O
Waibel	O
introduced	O
a	O
variant	O
which	O
performs	O
a	O
two	O
dimensional	O
convolution	O
.	O
</s>
<s>
Since	O
these	O
TDNNs	B-Algorithm
operated	O
on	O
spectrograms	O
,	O
the	O
resulting	O
phoneme	O
recognition	O
system	O
was	O
invariant	O
to	O
both	O
shifts	O
in	O
time	O
and	O
in	O
frequency	O
.	O
</s>
<s>
This	O
inspired	O
translation	O
invariance	O
in	O
image	O
processing	O
with	O
CNNs	B-Architecture
.	O
</s>
<s>
TDNNs	B-Algorithm
now	O
achieve	O
the	O
best	O
performance	O
in	O
far	O
distance	O
speech	O
recognition	O
.	O
</s>
<s>
They	O
did	O
so	O
by	O
combining	O
TDNNs	B-Algorithm
with	O
max	O
pooling	O
in	O
order	O
to	O
realize	O
a	O
speaker	O
independent	O
isolated	O
word	O
recognition	O
system	O
.	O
</s>
<s>
In	O
their	O
system	O
they	O
used	O
several	O
TDNNs	B-Algorithm
per	O
word	O
,	O
one	O
for	O
each	O
syllable	B-General_Concept
.	O
</s>
<s>
The	O
results	O
of	O
each	O
TDNN	B-Algorithm
over	O
the	O
input	O
signal	O
were	O
combined	O
using	O
max	O
pooling	O
and	O
the	O
outputs	O
of	O
the	O
pooling	O
layers	O
were	O
then	O
passed	O
on	O
to	O
networks	O
performing	O
the	O
actual	O
word	O
classification	O
.	O
</s>
<s>
A	O
system	O
to	O
recognize	O
hand-written	O
ZIP	O
Code	O
numbers	O
involved	O
convolutions	O
in	O
which	O
the	O
kernel	O
coefficients	O
had	O
been	O
laboriously	O
hand	O
designed	O
.	O
</s>
<s>
(	O
1989	O
)	O
used	O
back-propagation	B-Algorithm
to	O
learn	O
the	O
convolution	O
kernel	O
coefficients	O
directly	O
from	O
images	O
of	O
hand-written	O
numbers	O
.	O
</s>
<s>
(	O
1988	O
)	O
used	O
back-propagation	B-Algorithm
to	O
train	O
the	O
convolution	O
kernels	O
of	O
a	O
CNN	B-Architecture
for	O
alphabets	O
recognition	O
.	O
</s>
<s>
The	O
model	O
was	O
called	O
Shift-Invariant	O
Artificial	B-Architecture
Neural	I-Architecture
Network	I-Architecture
(	O
SIANN	O
)	O
before	O
the	O
name	O
CNN	B-Architecture
was	O
coined	O
later	O
in	O
the	O
early	O
1990s	O
.	O
</s>
<s>
also	O
applied	O
the	O
same	O
CNN	B-Architecture
without	O
the	O
last	O
fully	O
connected	O
layer	B-Algorithm
for	O
medical	O
image	O
object	O
segmentation	B-Algorithm
(	O
1991	O
)	O
and	O
breast	O
cancer	O
detection	O
in	O
mammograms	O
(	O
1994	O
)	O
.	O
</s>
<s>
This	O
approach	O
became	O
a	O
foundation	O
of	O
modern	O
computer	B-Application
vision	I-Application
.	O
</s>
<s>
The	O
ability	O
to	O
process	O
higher-resolution	O
images	O
requires	O
more	O
and	O
larger	O
layers	O
of	O
convolutional	B-Architecture
neural	I-Architecture
networks	I-Architecture
,	O
so	O
this	O
technique	O
is	O
constrained	O
by	O
the	O
availability	O
of	O
computing	O
resources	O
.	O
</s>
<s>
A	O
shift-invariant	O
neural	B-Architecture
network	I-Architecture
was	O
proposed	O
by	O
Wei	O
Zhang	O
et	O
al	O
.	O
</s>
<s>
It	O
is	O
a	O
modified	O
Neocognitron	B-Algorithm
by	O
keeping	O
only	O
the	O
convolutional	O
interconnections	O
between	O
the	O
image	O
feature	B-Algorithm
layers	O
and	O
the	O
last	O
fully	O
connected	O
layer	B-Algorithm
.	O
</s>
<s>
The	O
model	O
was	O
trained	O
with	O
back-propagation	B-Algorithm
.	O
</s>
<s>
The	O
model	O
architecture	O
was	O
modified	O
by	O
removing	O
the	O
last	O
fully	O
connected	O
layer	B-Algorithm
and	O
applied	O
for	O
medical	O
image	B-Algorithm
segmentation	I-Algorithm
(	O
1991	O
)	O
and	O
automatic	O
detection	O
of	O
breast	O
cancer	O
in	O
mammograms	O
(	O
1994	O
)	O
.	O
</s>
<s>
A	O
different	O
convolution-based	O
design	O
was	O
proposed	O
in	O
1988	O
for	O
application	O
to	O
decomposition	O
of	O
one-dimensional	O
electromyography	O
convolved	O
signals	O
via	O
de-convolution	O
.	O
</s>
<s>
This	O
design	O
was	O
modified	O
in	O
1989	O
to	O
other	O
de-convolution-based	O
designs	O
.	O
</s>
<s>
The	O
feed-forward	O
architecture	O
of	O
convolutional	B-Architecture
neural	I-Architecture
networks	I-Architecture
was	O
extended	O
in	O
the	O
neural	O
abstraction	O
pyramid	O
by	O
lateral	O
and	O
feedback	O
connections	O
.	O
</s>
<s>
The	O
resulting	O
recurrent	B-Architecture
convolutional	I-Architecture
network	I-Architecture
allows	O
for	O
the	O
flexible	O
incorporation	O
of	O
contextual	O
information	O
to	O
iteratively	O
resolve	O
local	O
ambiguities	O
.	O
</s>
<s>
In	O
contrast	O
to	O
previous	O
models	O
,	O
image-like	O
outputs	O
at	O
the	O
highest	O
resolution	O
were	O
generated	O
,	O
e.g.	O
,	O
for	O
semantic	B-Algorithm
segmentation	I-Algorithm
,	O
image	O
reconstruction	O
,	O
and	O
object	O
localization	O
tasks	O
.	O
</s>
<s>
Although	O
CNNs	B-Architecture
were	O
invented	O
in	O
the	O
1980s	O
,	O
their	O
breakthrough	O
in	O
the	O
2000s	O
required	O
fast	O
implementations	O
on	O
graphics	B-Architecture
processing	I-Architecture
units	I-Architecture
(	O
GPUs	B-Architecture
)	O
.	O
</s>
<s>
In	O
2004	O
,	O
it	O
was	O
shown	O
by	O
K	O
.	O
S	O
.	O
Oh	O
and	O
K	O
.	O
Jung	O
that	O
standard	O
neural	B-Architecture
networks	I-Architecture
can	O
be	O
greatly	O
accelerated	O
on	O
GPUs	B-Architecture
.	O
</s>
<s>
Their	O
implementation	O
was	O
20	O
times	O
faster	O
than	O
an	O
equivalent	O
implementation	O
on	O
CPU	B-Device
.	O
</s>
<s>
In	O
2005	O
,	O
another	O
paper	O
also	O
emphasised	O
the	O
value	O
of	O
GPGPU	B-Architecture
for	O
machine	O
learning	O
.	O
</s>
<s>
The	O
first	O
GPU-implementation	O
of	O
a	O
CNN	B-Architecture
was	O
described	O
in	O
2006	O
by	O
K	O
.	O
Chellapilla	O
et	O
al	O
.	O
</s>
<s>
Their	O
implementation	O
was	O
4	O
times	O
faster	O
than	O
an	O
equivalent	O
implementation	O
on	O
CPU	B-Device
.	O
</s>
<s>
Subsequent	O
work	O
also	O
used	O
GPUs	B-Architecture
,	O
initially	O
for	O
other	O
types	O
of	O
neural	B-Architecture
networks	I-Architecture
(	O
different	O
from	O
CNNs	B-Architecture
)	O
,	O
especially	O
unsupervised	O
neural	B-Architecture
networks	I-Architecture
.	O
</s>
<s>
at	O
IDSIA	O
showed	O
that	O
even	O
deep	O
standard	O
neural	B-Architecture
networks	I-Architecture
with	O
many	O
layers	O
can	O
be	O
quickly	O
trained	O
on	O
GPU	B-Architecture
by	O
supervised	O
learning	O
through	O
the	O
old	O
method	O
known	O
as	O
backpropagation	B-Algorithm
.	O
</s>
<s>
Their	O
network	O
outperformed	O
previous	O
machine	O
learning	O
methods	O
on	O
the	O
MNIST	B-General_Concept
handwritten	O
digits	O
benchmark	O
.	O
</s>
<s>
In	O
2011	O
,	O
they	O
extended	O
this	O
GPU	B-Architecture
approach	O
to	O
CNNs	B-Architecture
,	O
achieving	O
an	O
acceleration	O
factor	O
of	O
60	O
,	O
with	O
impressive	O
results	O
.	O
</s>
<s>
In	O
2011	O
,	O
they	O
used	O
such	O
CNNs	B-Architecture
on	O
GPU	B-Architecture
to	O
win	O
an	O
image	O
recognition	O
contest	O
where	O
they	O
achieved	O
superhuman	O
performance	O
for	O
the	O
first	O
time	O
.	O
</s>
<s>
Between	O
May	O
15	O
,	O
2011	O
and	O
September	O
30	O
,	O
2012	O
,	O
their	O
CNNs	B-Architecture
won	O
no	O
less	O
than	O
four	O
image	O
competitions	O
.	O
</s>
<s>
In	O
2012	O
,	O
they	O
also	O
significantly	O
improved	O
on	O
the	O
best	O
performance	O
in	O
the	O
literature	O
for	O
multiple	O
image	O
databases	O
,	O
including	O
the	O
MNIST	B-General_Concept
database	I-General_Concept
,	O
the	O
NORB	O
database	O
,	O
the	O
HWDB1.0	O
dataset	O
(	O
Chinese	O
characters	O
)	O
and	O
the	O
CIFAR10	B-General_Concept
dataset	I-General_Concept
(	O
dataset	O
of	O
60000	O
32x32	O
labeled	O
RGB	O
images	O
)	O
.	O
</s>
<s>
Subsequently	O
,	O
a	O
similar	O
GPU-based	O
CNN	B-Architecture
by	O
Alex	O
Krizhevsky	O
et	O
al	O
won	O
the	O
ImageNet	O
2012	O
contest	O
.	O
</s>
<s>
A	O
very	O
deep	O
CNN	B-Architecture
with	O
over	O
100	O
layers	O
by	O
Microsoft	O
won	O
the	O
ImageNet	O
2015	O
contest	O
.	O
</s>
<s>
Compared	O
to	O
the	O
training	O
of	O
CNNs	B-Architecture
using	O
GPUs	B-Architecture
,	O
not	O
much	O
attention	O
was	O
given	O
to	O
the	O
Intel	B-Device
Xeon	I-Device
Phi	I-Device
coprocessor	I-Device
.	O
</s>
<s>
A	O
notable	O
development	O
is	O
a	O
parallelization	O
method	O
for	O
training	O
convolutional	B-Architecture
neural	I-Architecture
networks	I-Architecture
on	O
the	O
Intel	B-Device
Xeon	I-Device
Phi	I-Device
,	O
named	O
Controlled	O
Hogwild	O
with	O
Arbitrary	O
Order	O
of	O
Synchronization	O
(	O
CHAOS	O
)	O
.	O
</s>
<s>
CHAOS	O
exploits	O
both	O
the	O
thread	O
-	O
and	O
SIMD-level	O
parallelism	O
that	O
is	O
available	O
on	O
the	O
Intel	B-Device
Xeon	I-Device
Phi	I-Device
.	O
</s>
<s>
In	O
the	O
past	O
,	O
traditional	O
multilayer	B-Algorithm
perceptron	I-Algorithm
(	O
MLP	O
)	O
models	O
were	O
used	O
for	O
image	O
recognition	O
.	O
</s>
<s>
However	O
,	O
the	O
full	O
connectivity	O
between	O
nodes	O
caused	O
the	O
curse	B-General_Concept
of	I-General_Concept
dimensionality	I-General_Concept
,	O
and	O
was	O
computationally	O
intractable	O
with	O
higher-resolution	O
images	O
.	O
</s>
<s>
A	O
1000×	O
1000-pixel	O
image	O
with	O
RGB	O
color	B-Algorithm
channels	I-Algorithm
has	O
3	O
million	O
weights	O
per	O
fully-connected	O
neuron	O
,	O
which	O
is	O
too	O
many	O
to	O
process	O
efficiently	O
at	O
scale	O
.	O
</s>
<s>
For	O
example	O
,	O
in	O
CIFAR-10	B-General_Concept
,	O
images	O
are	O
only	O
of	O
size	O
32×32×3	O
(	O
32	O
wide	O
,	O
32	O
high	O
,	O
3	O
color	B-Algorithm
channels	I-Algorithm
)	O
,	O
so	O
a	O
single	O
fully	O
connected	O
neuron	O
in	O
the	O
first	O
hidden	O
layer	B-Algorithm
of	O
a	O
regular	O
neural	B-Architecture
network	I-Architecture
would	O
have	O
32*32*3	O
=	O
3,072	O
weights	O
.	O
</s>
<s>
A	O
200×200	O
image	O
,	O
however	O
,	O
would	O
lead	O
to	O
neurons	B-Algorithm
that	O
have	O
200*200*3	O
=	O
120,000	O
weights	O
.	O
</s>
<s>
This	O
ignores	O
locality	B-General_Concept
of	I-General_Concept
reference	I-General_Concept
in	O
data	O
with	O
a	O
grid-topology	O
(	O
such	O
as	O
images	O
)	O
,	O
both	O
computationally	O
and	O
semantically	O
.	O
</s>
<s>
Thus	O
,	O
full	O
connectivity	O
of	O
neurons	B-Algorithm
is	O
wasteful	O
for	O
purposes	O
such	O
as	O
image	O
recognition	O
that	O
are	O
dominated	O
by	O
spatially	O
local	O
input	O
patterns	O
.	O
</s>
<s>
Convolutional	B-Architecture
neural	I-Architecture
networks	I-Architecture
are	O
variants	O
of	O
multilayer	B-Algorithm
perceptrons	I-Algorithm
,	O
designed	O
to	O
emulate	O
the	O
behavior	O
of	O
a	O
visual	O
cortex	O
.	O
</s>
<s>
As	O
opposed	O
to	O
MLPs	O
,	O
CNNs	B-Architecture
have	O
the	O
following	O
distinguishing	O
features	O
:	O
</s>
<s>
3D	O
volumes	O
of	O
neurons	B-Algorithm
.	O
</s>
<s>
The	O
layers	O
of	O
a	O
CNN	B-Architecture
have	O
neurons	B-Algorithm
arranged	O
in	O
3	O
dimensions	O
:	O
width	O
,	O
height	O
and	O
depth	O
,	O
where	O
each	O
neuron	O
inside	O
a	O
convolutional	O
layer	B-Algorithm
is	O
connected	O
to	O
only	O
a	O
small	O
region	O
of	O
the	O
layer	B-Algorithm
before	O
it	O
,	O
called	O
a	O
receptive	O
field	O
.	O
</s>
<s>
Distinct	O
types	O
of	O
layers	O
,	O
both	O
locally	O
and	O
completely	O
connected	O
,	O
are	O
stacked	O
to	O
form	O
a	O
CNN	B-Architecture
architecture	O
.	O
</s>
<s>
Local	O
connectivity	O
:	O
following	O
the	O
concept	O
of	O
receptive	O
fields	O
,	O
CNNs	B-Architecture
exploit	O
spatial	O
locality	O
by	O
enforcing	O
a	O
local	O
connectivity	O
pattern	O
between	O
neurons	B-Algorithm
of	O
adjacent	O
layers	O
.	O
</s>
<s>
Shared	O
weights	O
:	O
In	O
CNNs	B-Architecture
,	O
each	O
filter	O
is	O
replicated	O
across	O
the	O
entire	O
visual	O
field	O
.	O
</s>
<s>
These	O
replicated	O
units	O
share	O
the	O
same	O
parameterization	O
(	O
weight	O
vector	O
and	O
bias	O
)	O
and	O
form	O
a	O
feature	B-Algorithm
map	I-Algorithm
.	O
</s>
<s>
This	O
means	O
that	O
all	O
the	O
neurons	B-Algorithm
in	O
a	O
given	O
convolutional	O
layer	B-Algorithm
respond	O
to	O
the	O
same	O
feature	O
within	O
their	O
specific	O
response	O
field	O
.	O
</s>
<s>
Replicating	O
units	O
in	O
this	O
way	O
allows	O
for	O
the	O
resulting	O
activation	B-Algorithm
map	I-Algorithm
to	O
be	O
equivariant	O
under	O
shifts	O
of	O
the	O
locations	O
of	O
input	O
features	O
in	O
the	O
visual	O
field	O
,	O
i.e.	O
they	O
grant	O
translational	O
equivariance	O
-	O
given	O
that	O
the	O
layer	B-Algorithm
has	O
a	O
stride	B-Data_Structure
of	O
one	O
.	O
</s>
<s>
Pooling	O
:	O
In	O
a	O
CNN	B-Architecture
's	O
pooling	O
layers	O
,	O
feature	B-Algorithm
maps	I-Algorithm
are	O
divided	O
into	O
rectangular	O
sub-regions	O
,	O
and	O
the	O
features	O
in	O
each	O
rectangle	O
are	O
independently	O
down-sampled	O
to	O
a	O
single	O
value	O
,	O
commonly	O
by	O
taking	O
their	O
average	O
or	O
maximum	O
value	O
.	O
</s>
<s>
In	O
addition	O
to	O
reducing	O
the	O
sizes	O
of	O
feature	B-Algorithm
maps	I-Algorithm
,	O
the	O
pooling	O
operation	O
grants	O
a	O
degree	O
of	O
local	O
translational	O
invariance	O
to	O
the	O
features	O
contained	O
therein	O
,	O
allowing	O
the	O
CNN	B-Architecture
to	O
be	O
more	O
robust	O
to	O
variations	O
in	O
their	O
positions	O
.	O
</s>
<s>
Together	O
,	O
these	O
properties	O
allow	O
CNNs	B-Architecture
to	O
achieve	O
better	O
generalization	O
on	O
vision	B-Application
problems	I-Application
.	O
</s>
<s>
A	O
CNN	B-Architecture
architecture	O
is	O
formed	O
by	O
a	O
stack	O
of	O
distinct	O
layers	O
that	O
transform	O
the	O
input	O
volume	O
into	O
an	O
output	O
volume	O
(	O
e.g.	O
holding	O
the	O
class	O
scores	O
)	O
.	O
</s>
<s>
The	O
convolutional	O
layer	B-Algorithm
is	O
the	O
core	O
building	O
block	O
of	O
a	O
CNN	B-Architecture
.	O
</s>
<s>
The	O
layer	B-Algorithm
's	O
parameters	O
consist	O
of	O
a	O
set	O
of	O
learnable	O
filters	O
(	O
or	O
kernels	O
)	O
,	O
which	O
have	O
a	O
small	O
receptive	O
field	O
,	O
but	O
extend	O
through	O
the	O
full	O
depth	O
of	O
the	O
input	O
volume	O
.	O
</s>
<s>
During	O
the	O
forward	O
pass	O
,	O
each	O
filter	O
is	O
convolved	O
across	O
the	O
width	O
and	O
height	O
of	O
the	O
input	O
volume	O
,	O
computing	O
the	O
dot	O
product	O
between	O
the	O
filter	O
entries	O
and	O
the	O
input	O
,	O
producing	O
a	O
2-dimensional	O
activation	B-Algorithm
map	I-Algorithm
of	O
that	O
filter	O
.	O
</s>
<s>
As	O
a	O
result	O
,	O
the	O
network	O
learns	O
filters	O
that	O
activate	O
when	O
it	O
detects	O
some	O
specific	O
type	O
of	O
feature	O
at	O
some	O
spatial	O
position	O
in	O
the	O
input	O
.	O
</s>
<s>
Stacking	O
the	O
activation	O
maps	O
for	O
all	O
filters	O
along	O
the	O
depth	O
dimension	O
forms	O
the	O
full	O
output	O
volume	O
of	O
the	O
convolution	O
layer	B-Algorithm
.	O
</s>
<s>
Every	O
entry	O
in	O
the	O
output	O
volume	O
can	O
thus	O
also	O
be	O
interpreted	O
as	O
an	O
output	O
of	O
a	O
neuron	O
that	O
looks	O
at	O
a	O
small	O
region	O
in	O
the	O
input	O
and	O
shares	O
parameters	O
with	O
neurons	B-Algorithm
in	O
the	O
same	O
activation	B-Algorithm
map	I-Algorithm
.	O
</s>
<s>
Self-supervised	B-General_Concept
learning	I-General_Concept
has	O
been	O
adapted	O
for	O
use	O
in	O
convolutional	O
layers	O
by	O
using	O
sparse	O
patches	O
with	O
a	O
high-mask	O
ratio	O
and	O
a	O
global	O
response	O
normalization	O
layer	B-Algorithm
.	O
</s>
<s>
When	O
dealing	O
with	O
high-dimensional	O
inputs	O
such	O
as	O
images	O
,	O
it	O
is	O
impractical	O
to	O
connect	O
neurons	B-Algorithm
to	O
all	O
neurons	B-Algorithm
in	O
the	O
previous	O
volume	O
because	O
such	O
a	O
network	O
architecture	O
does	O
not	O
take	O
the	O
spatial	O
structure	O
of	O
the	O
data	O
into	O
account	O
.	O
</s>
<s>
Convolutional	O
networks	O
exploit	O
spatially	O
local	O
correlation	O
by	O
enforcing	O
a	O
sparse	O
local	O
connectivity	O
pattern	O
between	O
neurons	B-Algorithm
of	O
adjacent	O
layers	O
:	O
each	O
neuron	O
is	O
connected	O
to	O
only	O
a	O
small	O
region	O
of	O
the	O
input	O
volume	O
.	O
</s>
<s>
The	O
extent	O
of	O
this	O
connectivity	O
is	O
a	O
hyperparameter	B-General_Concept
called	O
the	O
receptive	O
field	O
of	O
the	O
neuron	O
.	O
</s>
<s>
Three	O
hyperparameters	B-General_Concept
control	O
the	O
size	O
of	O
the	O
output	O
volume	O
of	O
the	O
convolutional	O
layer	B-Algorithm
:	O
the	O
depth	O
,	O
stride	B-Data_Structure
,	O
and	O
padding	O
size	O
:	O
</s>
<s>
The	O
depth	O
of	O
the	O
output	O
volume	O
controls	O
the	O
number	O
of	O
neurons	B-Algorithm
in	O
a	O
layer	B-Algorithm
that	O
connect	O
to	O
the	O
same	O
region	O
of	O
the	O
input	O
volume	O
.	O
</s>
<s>
These	O
neurons	B-Algorithm
learn	O
to	O
activate	O
for	O
different	O
features	O
in	O
the	O
input	O
.	O
</s>
<s>
For	O
example	O
,	O
if	O
the	O
first	O
convolutional	O
layer	B-Algorithm
takes	O
the	O
raw	O
image	O
as	O
input	O
,	O
then	O
different	O
neurons	B-Algorithm
along	O
the	O
depth	O
dimension	O
may	O
activate	O
in	O
the	O
presence	O
of	O
various	O
oriented	O
edges	O
,	O
or	O
blobs	O
of	O
color	O
.	O
</s>
<s>
Stride	B-Data_Structure
controls	O
how	O
depth	O
columns	O
around	O
the	O
width	O
and	O
height	O
are	O
allocated	O
.	O
</s>
<s>
If	O
the	O
stride	B-Data_Structure
is	O
1	O
,	O
then	O
we	O
move	O
the	O
filters	O
one	O
pixel	O
at	O
a	O
time	O
.	O
</s>
<s>
For	O
any	O
integer	O
S	O
>	O
0	O
,	O
a	O
stride	B-Data_Structure
of	O
S	O
means	O
that	O
the	O
filter	O
is	O
translated	O
S	O
units	O
at	O
a	O
time	O
per	O
output	O
.	O
</s>
<s>
A	O
greater	O
stride	B-Data_Structure
means	O
smaller	O
overlap	O
of	O
receptive	O
fields	O
and	O
smaller	O
spatial	O
dimensions	O
of	O
the	O
output	O
volume	O
.	O
</s>
<s>
The	O
size	O
of	O
this	O
padding	O
is	O
a	O
third	O
hyperparameter	B-General_Concept
.	O
</s>
<s>
The	O
spatial	O
size	O
of	O
the	O
output	O
volume	O
is	O
a	O
function	O
of	O
the	O
input	O
volume	O
size	O
,	O
the	O
kernel	B-Operating_System
field	O
size	O
of	O
the	O
convolutional	O
layer	B-Algorithm
neurons	B-Algorithm
,	O
the	O
stride	B-Data_Structure
,	O
and	O
the	O
amount	O
of	O
zero	O
padding	O
on	O
the	O
border	O
.	O
</s>
<s>
The	O
number	O
of	O
neurons	B-Algorithm
that	O
"	O
fit	O
"	O
in	O
a	O
given	O
volume	O
is	O
then	O
(	O
W	O
−	O
K	O
+	O
2	O
P	O
)	O
/	O
S	O
+	O
1	O
,	O
where	O
W	O
is	O
the	O
input	O
volume	O
size	O
,	O
K	O
the	O
kernel	B-Operating_System
field	O
size	O
,	O
P	O
the	O
zero	O
padding	O
,	O
and	O
S	O
the	O
stride	B-Data_Structure
:	O
</s>
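As an illustrative sketch (not part of the annotated text), the tiling rule above can be written in Python, assuming the standard quantities W for input size, K for kernel field size, S for stride and P for zero padding:

```python
def conv_output_size(input_size, kernel_size, stride, padding):
    """Number of neurons that fit along one spatial dimension:
    (W - K + 2P) / S + 1. The division must be exact, otherwise the
    neurons cannot be tiled symmetrically across the input volume."""
    span = input_size - kernel_size + 2 * padding
    if span % stride != 0:
        raise ValueError("strides are incorrect: neurons cannot be tiled symmetrically")
    return span // stride + 1

# e.g. a 7-pixel input, 3-pixel kernel, stride 1, no padding gives 5 outputs
```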
<s>
If	O
this	O
number	O
is	O
not	O
an	O
integer	O
,	O
then	O
the	O
strides	O
are	O
incorrect	O
and	O
the	O
neurons	B-Algorithm
cannot	O
be	O
tiled	O
to	O
fit	O
across	O
the	O
input	O
volume	O
in	O
a	O
symmetric	O
way	O
.	O
</s>
<s>
In	O
general	O
,	O
setting	O
zero	O
padding	O
to	O
be	O
P	O
=	O
(	O
K	O
−	O
1	O
)	O
/	O
2	O
when	O
the	O
stride	B-Data_Structure
is	O
S	O
=	O
1	O
ensures	O
that	O
the	O
input	O
volume	O
and	O
output	O
volume	O
will	O
have	O
the	O
same	O
size	O
spatially	O
.	O
</s>
<s>
However	O
,	O
it	O
is	O
not	O
always	O
completely	O
necessary	O
to	O
use	O
all	O
of	O
the	O
neurons	B-Algorithm
of	O
the	O
previous	O
layer	B-Algorithm
.	O
</s>
<s>
For	O
example	O
,	O
a	O
neural	B-Architecture
network	I-Architecture
designer	O
may	O
decide	O
to	O
use	O
just	O
a	O
portion	O
of	O
padding	O
.	O
</s>
<s>
It	O
relies	O
on	O
the	O
assumption	O
that	O
if	O
a	O
patch	O
feature	B-Algorithm
is	O
useful	O
to	O
compute	O
at	O
some	O
spatial	O
position	O
,	O
then	O
it	O
should	O
also	O
be	O
useful	O
to	O
compute	O
at	O
other	O
positions	O
.	O
</s>
<s>
Denoting	O
a	O
single	O
2-dimensional	O
slice	O
of	O
depth	O
as	O
a	O
depth	O
slice	O
,	O
the	O
neurons	B-Algorithm
in	O
each	O
depth	O
slice	O
are	O
constrained	O
to	O
use	O
the	O
same	O
weights	O
and	O
bias	O
.	O
</s>
<s>
Since	O
all	O
neurons	B-Algorithm
in	O
a	O
single	O
depth	O
slice	O
share	O
the	O
same	O
parameters	O
,	O
the	O
forward	O
pass	O
in	O
each	O
depth	O
slice	O
of	O
the	O
convolutional	O
layer	B-Algorithm
can	O
be	O
computed	O
as	O
a	O
convolution	B-Language
of	O
the	O
neuron	O
's	O
weights	O
with	O
the	O
input	O
volume	O
.	O
</s>
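The depth-slice forward pass described above, convolving the shared weights with the input, can be sketched in plain Python (stride 1, no padding, pure lists; a didactic sketch, not an efficient implementation):

```python
def conv2d(image, kernel):
    """Slide the shared-weight kernel over the input, producing one
    activation map: every output position reuses the same weights."""
    H, W = len(image), len(image[0])
    kH, kW = len(kernel), len(kernel[0])
    out = []
    for i in range(H - kH + 1):
        row = []
        for j in range(W - kW + 1):
            s = 0.0
            for di in range(kH):
                for dj in range(kW):
                    s += image[i + di][j + dj] * kernel[di][dj]
            row.append(s)
        out.append(row)
    return out
```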
<s>
Therefore	O
,	O
it	O
is	O
common	O
to	O
refer	O
to	O
the	O
sets	O
of	O
weights	O
as	O
a	O
filter	O
(	O
or	O
a	O
kernel	B-Operating_System
)	O
,	O
which	O
is	O
convolved	B-Language
with	O
the	O
input	O
.	O
</s>
<s>
The	O
result	O
of	O
this	O
convolution	B-Language
is	O
an	O
activation	B-Algorithm
map	I-Algorithm
,	O
and	O
the	O
set	O
of	O
activation	O
maps	O
for	O
each	O
different	O
filter	O
are	O
stacked	O
together	O
along	O
the	O
depth	O
dimension	O
to	O
produce	O
the	O
output	O
volume	O
.	O
</s>
<s>
Parameter	O
sharing	O
contributes	O
to	O
the	O
translation	O
invariance	O
of	O
the	O
CNN	B-Architecture
architecture	O
.	O
</s>
<s>
This	O
is	O
especially	O
the	O
case	O
when	O
the	O
input	O
images	O
to	O
a	O
CNN	B-Architecture
have	O
some	O
specific	O
centered	O
structure	O
;	O
for	O
which	O
we	O
expect	O
completely	O
different	O
features	O
to	O
be	O
learned	O
on	O
different	O
spatial	O
locations	O
.	O
</s>
<s>
In	O
that	O
case	O
it	O
is	O
common	O
to	O
relax	O
the	O
parameter	O
sharing	O
scheme	O
,	O
and	O
instead	O
simply	O
call	O
the	O
layer	B-Algorithm
a	O
"	O
locally	O
connected	O
layer	B-Algorithm
"	O
.	O
</s>
<s>
Another	O
important	O
concept	O
of	O
CNNs	B-Architecture
is	O
pooling	O
,	O
which	O
is	O
a	O
form	O
of	O
non-linear	O
down-sampling	B-Algorithm
.	O
</s>
<s>
Intuitively	O
,	O
the	O
exact	O
location	O
of	O
a	O
feature	B-Algorithm
is	O
less	O
important	O
than	O
its	O
rough	O
location	O
relative	O
to	O
other	O
features	O
.	O
</s>
<s>
This	O
is	O
the	O
idea	O
behind	O
the	O
use	O
of	O
pooling	O
in	O
convolutional	B-Architecture
neural	I-Architecture
networks	I-Architecture
.	O
</s>
<s>
The	O
pooling	O
layer	B-Algorithm
serves	O
to	O
progressively	O
reduce	O
the	O
spatial	O
size	O
of	O
the	O
representation	O
,	O
to	O
reduce	O
the	O
number	O
of	O
parameters	O
,	O
memory	O
footprint	O
and	O
amount	O
of	O
computation	O
in	O
the	O
network	O
,	O
and	O
hence	O
to	O
also	O
control	O
overfitting	B-Error_Name
.	O
</s>
<s>
This	O
is	O
known	O
as	O
down-sampling	B-Algorithm
.	O
</s>
<s>
It	O
is	O
common	O
to	O
periodically	O
insert	O
a	O
pooling	O
layer	B-Algorithm
between	O
successive	O
convolutional	O
layers	O
(	O
each	O
one	O
typically	O
followed	O
by	O
an	O
activation	B-Algorithm
function	I-Algorithm
,	O
such	O
as	O
a	O
ReLU	B-Algorithm
layer	B-Algorithm
)	O
in	O
a	O
CNN	B-Architecture
architecture	O
.	O
</s>
<s>
While	O
pooling	O
layers	O
contribute	O
to	O
local	O
translation	O
invariance	O
,	O
they	O
do	O
not	O
provide	O
global	O
translation	O
invariance	O
in	O
a	O
CNN	B-Architecture
,	O
unless	O
a	O
form	O
of	O
global	O
pooling	O
is	O
used	O
.	O
</s>
<s>
The	O
pooling	O
layer	B-Algorithm
commonly	O
operates	O
independently	O
on	O
every	O
depth	O
,	O
or	O
slice	O
,	O
of	O
the	O
input	O
and	O
resizes	O
it	O
spatially	O
.	O
</s>
<s>
A	O
very	O
common	O
form	O
of	O
max	O
pooling	O
is	O
a	O
layer	B-Algorithm
with	O
filters	O
of	O
size	O
2×2	O
,	O
applied	O
with	O
a	O
stride	B-Data_Structure
of	O
2	O
,	O
which	O
subsamples	O
every	O
depth	O
slice	O
in	O
the	O
input	O
by	O
2	O
along	O
both	O
width	O
and	O
height	O
,	O
discarding	O
75%	O
of	O
the	O
activations	O
:	O
</s>
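A minimal sketch of the 2×2, stride-2 max pooling just described, on one depth slice (pure Python lists for illustration):

```python
def max_pool_2x2(x):
    """2x2 max pooling with stride 2: halves width and height of one
    depth slice, keeping one of every four activations (75% discarded)."""
    H, W = len(x), len(x[0])
    return [[max(x[i][j], x[i][j + 1], x[i + 1][j], x[i + 1][j + 1])
             for j in range(0, W - 1, 2)]
            for i in range(0, H - 1, 2)]
```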
<s>
The	O
depth	O
dimension	O
remains	O
unchanged	O
(	O
this	O
is	O
true	B-General_Concept
for	O
other	O
forms	O
of	O
pooling	O
as	O
well	O
)	O
.	O
</s>
<s>
"	O
Region	B-Algorithm
of	I-Algorithm
Interest	I-Algorithm
"	O
pooling	O
(	O
also	O
known	O
as	O
RoI	O
pooling	O
)	O
is	O
a	O
variant	O
of	O
max	O
pooling	O
,	O
in	O
which	O
output	O
size	O
is	O
fixed	O
and	O
input	O
rectangle	O
is	O
a	O
parameter	O
.	O
</s>
<s>
Pooling	O
is	O
a	O
downsampling	B-Algorithm
method	O
and	O
an	O
important	O
component	O
of	O
convolutional	B-Architecture
neural	I-Architecture
networks	I-Architecture
for	O
object	B-General_Concept
detection	I-General_Concept
based	O
on	O
the	O
Fast	O
R-CNN	O
architecture	O
.	O
</s>
<s>
A	O
CMP	O
operation	O
layer	B-Algorithm
conducts	O
the	O
MP	O
operation	O
along	O
the	O
channel	O
side	O
among	O
the	O
corresponding	O
positions	O
of	O
the	O
consecutive	O
feature	B-Algorithm
maps	O
for	O
the	O
purpose	O
of	O
redundant	O
information	O
elimination	O
.	O
</s>
<s>
The	O
CMP	O
makes	O
the	O
significant	O
features	O
gather	O
together	O
within	O
fewer	O
channels	B-Algorithm
,	O
which	O
is	O
important	O
for	O
fine-grained	O
image	O
classification	O
that	O
needs	O
more	O
discriminating	O
features	O
.	O
</s>
<s>
Meanwhile	O
,	O
another	O
advantage	O
of	O
the	O
CMP	O
operation	O
is	O
to	O
make	O
the	O
channel	O
number	O
of	O
feature	B-Algorithm
maps	O
smaller	O
before	O
it	O
connects	O
to	O
the	O
first	O
fully	O
connected	O
(	O
FC	O
)	O
layer	B-Algorithm
.	O
</s>
<s>
Similar	O
to	O
the	O
MP	O
operation	O
,	O
we	O
denote	O
the	O
input	O
feature	B-Algorithm
maps	O
and	O
output	O
feature	B-Algorithm
maps	O
of	O
a	O
CMP	O
layer	B-Algorithm
as	O
F	O
∈	O
R^( C×M×N	O
)	O
and	O
C	B-Language
∈	O
R^( c×M×N	O
)	O
,	O
respectively	O
,	O
where	O
C	B-Language
and	O
c	B-Language
are	O
the	O
channel	O
numbers	O
of	O
the	O
input	O
and	O
output	O
feature	B-Algorithm
maps	O
,	O
M	O
and	O
N	O
are	O
the	O
widths	O
and	O
the	O
height	O
of	O
the	O
feature	B-Algorithm
maps	O
,	O
respectively	O
.	O
</s>
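The CMP shapes above (F in R^(C×M×N) in, output in R^(c×M×N)) can be sketched in Python; the grouping of consecutive channels via a `group` parameter is an assumption for illustration, not taken from the text:

```python
def channel_max_pool(F, group):
    """Channel max pooling (CMP) sketch: take the max along the channel
    axis over consecutive groups of `group` feature maps. The spatial
    size M x N is unchanged; only the channel count shrinks to C // group."""
    C, M, N = len(F), len(F[0]), len(F[0][0])
    out = []
    for c0 in range(0, C, group):
        out.append([[max(F[c][i][j] for c in range(c0, min(c0 + group, C)))
                     for j in range(N)] for i in range(M)])
    return out
```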
<s>
Note	O
that	O
the	O
CMP	O
operation	O
only	O
changes	O
the	O
channel	O
number	O
of	O
the	O
feature	B-Algorithm
maps	O
.	O
</s>
<s>
The	O
width	O
and	O
the	O
height	O
of	O
the	O
feature	B-Algorithm
maps	O
are	O
not	O
changed	O
,	O
which	O
is	O
different	O
from	O
the	O
MP	O
operation	O
.	O
</s>
<s>
ReLU	B-Algorithm
is	O
the	O
abbreviation	O
of	O
rectified	B-Algorithm
linear	I-Algorithm
unit	I-Algorithm
introduced	O
by	O
Kunihiko	O
Fukushima	O
in	O
1969	O
.	O
</s>
<s>
ReLU	B-Algorithm
applies	O
the	O
non-saturating	O
activation	B-Algorithm
function	I-Algorithm
.	O
</s>
<s>
It	O
effectively	O
removes	O
negative	O
values	O
from	O
an	O
activation	B-Algorithm
map	I-Algorithm
by	O
setting	O
them	O
to	O
zero	O
.	O
</s>
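The ReLU behaviour just described, zeroing negative values in an activation map, is one line of Python:

```python
def relu(activation_map):
    """ReLU: f(x) = max(0, x), applied elementwise. Negative activations
    become zero; positive ones pass through unchanged."""
    return [[max(0.0, v) for v in row] for row in activation_map]
```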
<s>
It	O
introduces	O
nonlinearities	O
to	O
the	O
decision	B-General_Concept
function	I-General_Concept
and	O
in	O
the	O
overall	O
network	O
without	O
affecting	O
the	O
receptive	O
fields	O
of	O
the	O
convolution	B-Language
layers	O
.	O
</s>
<s>
In	O
2011	O
,	O
Xavier	O
Glorot	O
,	O
Antoine	O
Bordes	O
and	O
Yoshua	O
Bengio	O
found	O
that	O
ReLU	B-Algorithm
enables	O
better	O
training	O
of	O
deeper	O
networks	O
,	O
compared	O
to	O
widely	O
used	O
activation	B-Algorithm
functions	I-Algorithm
prior	O
to	O
2011	O
.	O
</s>
<s>
Other	O
functions	O
can	O
also	O
be	O
used	O
to	O
increase	O
nonlinearity	O
,	O
for	O
example	O
the	O
saturating	O
hyperbolic	O
tangent	O
f	O
(	O
x	O
)	O
=	O
tanh	O
(	O
x	O
)	O
,	O
and	O
the	O
sigmoid	B-Algorithm
function	I-Algorithm
.	O
</s>
<s>
ReLU	B-Algorithm
is	O
often	O
preferred	O
to	O
other	O
functions	O
because	O
it	O
trains	O
the	O
neural	B-Architecture
network	I-Architecture
several	O
times	O
faster	O
without	O
a	O
significant	O
penalty	O
to	O
generalization	O
accuracy	O
.	O
</s>
<s>
Neurons	B-Algorithm
in	O
a	O
fully	O
connected	O
layer	B-Algorithm
have	O
connections	O
to	O
all	O
activations	O
in	O
the	O
previous	O
layer	B-Algorithm
,	O
as	O
seen	O
in	O
regular	O
(	O
non-convolutional	O
)	O
artificial	B-Architecture
neural	I-Architecture
networks	I-Architecture
.	O
</s>
<s>
Their	O
activations	O
can	O
thus	O
be	O
computed	O
as	O
an	O
affine	B-Algorithm
transformation	I-Algorithm
,	O
with	O
matrix	O
multiplication	O
followed	O
by	O
a	O
bias	O
offset	O
(	O
vector	O
addition	O
of	O
a	O
learned	O
or	O
fixed	O
bias	O
term	O
)	O
.	O
</s>
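The affine transformation of a fully connected layer, matrix multiplication plus a bias offset, sketched without any library:

```python
def fully_connected(x, W, b):
    """Fully connected layer as an affine transformation: y = W x + b
    (matrix-vector product followed by vector addition of the bias)."""
    return [sum(wij * xj for wij, xj in zip(row, x)) + bi
            for row, bi in zip(W, b)]
```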
<s>
The	O
"	O
loss	O
layer	B-Algorithm
"	O
,	O
or	O
"	O
loss	O
function	O
"	O
,	O
specifies	O
how	O
training	O
penalizes	O
the	O
deviation	O
between	O
the	O
predicted	O
output	O
of	O
the	O
network	O
,	O
and	O
the	O
true	B-General_Concept
data	O
labels	O
(	O
during	O
supervised	O
learning	O
)	O
.	O
</s>
<s>
The	O
Softmax	B-Algorithm
loss	O
function	O
is	O
used	O
for	O
predicting	O
a	O
single	O
class	O
of	O
K	O
mutually	O
exclusive	O
classes	O
.	O
</s>
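A minimal sketch of the Softmax loss for K mutually exclusive classes: normalize the scores into a probability distribution, then penalize the negative log-probability of the true class (the max-shift for numerical stability is a standard implementation detail, not stated in the text):

```python
import math

def softmax(z):
    """Softmax: exponentiate the K logits (shifted by their max for
    numerical stability) and normalize so the outputs sum to 1."""
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(probs, true_class):
    """Softmax loss: negative log-probability assigned to the true label."""
    return -math.log(probs[true_class])
```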
<s>
Sigmoid	B-Algorithm
cross-entropy	O
loss	O
is	O
used	O
for	O
predicting	O
K	O
independent	O
probability	O
values	O
in	O
[	O
0	O
,	O
1	O
]	O
.	O
</s>
<s>
Hyperparameters	B-General_Concept
are	O
various	O
settings	O
that	O
are	O
used	O
to	O
control	O
the	O
learning	O
process	O
.	O
</s>
<s>
CNNs	B-Architecture
use	O
more	O
hyperparameters	B-General_Concept
than	O
a	O
standard	O
multilayer	B-Algorithm
perceptron	I-Algorithm
(	O
MLP	O
)	O
.	O
</s>
<s>
The	O
kernel	B-Operating_System
is	O
the	O
number	O
of	O
pixels	O
processed	O
together	O
.	O
</s>
<s>
It	O
is	O
typically	O
expressed	O
as	O
the	O
kernel	B-Operating_System
's	O
dimensions	O
,	O
e.g.	O
,	O
2x2	O
,	O
or	O
3x3	O
.	O
</s>
<s>
The	O
padding	O
applied	O
is	O
typically	O
one	O
less	O
than	O
the	O
corresponding	O
kernel	B-Operating_System
dimension	O
.	O
</s>
<s>
For	O
example	O
,	O
a	O
convolutional	O
layer	B-Algorithm
using	O
3x3	O
kernels	B-Operating_System
would	O
receive	O
a	O
2-pixel	O
pad	O
,	O
that	O
is	O
1	O
pixel	O
on	O
each	O
side	O
of	O
the	O
image	O
.	O
</s>
<s>
The	O
stride	B-Data_Structure
is	O
the	O
number	O
of	O
pixels	O
that	O
the	O
analysis	O
window	O
moves	O
on	O
each	O
iteration	O
.	O
</s>
<s>
A	O
stride	B-Data_Structure
of	O
2	O
means	O
that	O
each	O
kernel	B-Operating_System
is	O
offset	O
by	O
2	O
pixels	O
from	O
its	O
predecessor	O
.	O
</s>
<s>
Since	O
feature	B-Algorithm
map	O
size	O
decreases	O
with	O
depth	O
,	O
layers	O
near	O
the	O
input	O
layer	B-Algorithm
tend	O
to	O
have	O
fewer	O
filters	O
while	O
higher	O
layers	O
can	O
have	O
more	O
.	O
</s>
<s>
To	O
equalize	O
computation	O
at	O
each	O
layer	B-Algorithm
,	O
the	O
product	O
of	O
feature	B-Algorithm
values	O
va	O
with	O
pixel	O
position	O
is	O
kept	O
roughly	O
constant	O
across	O
layers	O
.	O
</s>
<s>
Preserving	O
more	O
information	O
about	O
the	O
input	O
would	O
require	O
keeping	O
the	O
total	O
number	O
of	O
activations	O
(	O
number	O
of	O
feature	B-Algorithm
maps	O
times	O
number	O
of	O
pixel	O
positions	O
)	O
non-decreasing	O
from	O
one	O
layer	B-Algorithm
to	O
the	O
next	O
.	O
</s>
<s>
The	O
number	O
of	O
feature	B-Algorithm
maps	O
directly	O
controls	O
the	O
capacity	O
and	O
depends	O
on	O
the	O
number	O
of	O
available	O
examples	O
and	O
task	O
complexity	O
.	O
</s>
<s>
The	O
challenge	O
is	O
to	O
find	O
the	O
right	O
level	O
of	O
granularity	O
so	O
as	O
to	O
create	O
abstractions	O
at	O
the	O
proper	O
scale	O
,	O
given	O
a	O
particular	O
data	O
set	O
,	O
and	O
without	O
overfitting	B-Error_Name
.	O
</s>
<s>
This	O
implies	O
that	O
the	O
input	O
is	O
drastically	O
downsampled	B-Algorithm
,	O
reducing	O
processing	O
cost	O
.	O
</s>
<s>
Greater	O
pooling	O
reduces	B-Algorithm
the	I-Algorithm
dimension	I-Algorithm
of	O
the	O
signal	O
,	O
and	O
may	O
result	O
in	O
unacceptable	O
information	B-General_Concept
loss	I-General_Concept
.	O
</s>
<s>
Dilation	O
involves	O
ignoring	O
pixels	O
within	O
a	O
kernel	B-Operating_System
.	O
</s>
<s>
A	O
dilation	O
of	O
2	O
on	O
a	O
3x3	O
kernel	B-Operating_System
expands	O
the	O
kernel	B-Operating_System
to	O
5x5	O
,	O
while	O
still	O
processing	O
9	O
(	O
evenly	O
spaced	O
)	O
pixels	O
.	O
</s>
<s>
Accordingly	O
,	O
dilation	O
of	O
4	O
expands	O
the	O
kernel	B-Operating_System
to	O
9x9	O
.	O
</s>
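The dilation examples above (a 3x3 kernel reaching 5x5 at dilation 2 and 9x9 at dilation 4) follow the effective-extent formula k + (k − 1)(d − 1), sketched as:

```python
def dilated_kernel_extent(kernel_size, dilation):
    """Effective spatial extent of a dilated kernel:
    k + (k - 1) * (d - 1). The number of pixels actually processed
    stays k * k; the skipped pixels inside the extent are ignored."""
    return kernel_size + (kernel_size - 1) * (dilation - 1)
```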
<s>
It	O
is	O
commonly	O
assumed	O
that	O
CNNs	B-Architecture
are	O
invariant	O
to	O
shifts	O
of	O
the	O
input	O
.	O
</s>
<s>
Convolution	B-Language
or	O
pooling	O
layers	O
within	O
a	O
CNN	B-Architecture
that	O
do	O
not	O
have	O
a	O
stride	B-Data_Structure
greater	O
than	O
one	O
are	O
indeed	O
equivariant	O
to	O
translations	O
of	O
the	O
input	O
.	O
</s>
<s>
However	O
,	O
layers	O
with	O
a	O
stride	B-Data_Structure
greater	O
than	O
one	O
ignore	O
the	O
Nyquist-Shannon	O
sampling	O
theorem	O
and	O
might	O
lead	O
to	O
aliasing	B-Error_Name
of	O
the	O
input	O
signal	O
.	O
</s>
<s>
While	O
,	O
in	O
principle	O
,	O
CNNs	B-Architecture
are	O
capable	O
of	O
implementing	O
anti-aliasing	O
filters	O
,	O
it	O
has	O
been	O
observed	O
that	O
this	O
does	O
not	O
happen	O
in	O
practice	O
and	O
yields	O
models	O
that	O
are	O
not	O
equivariant	O
to	O
translations	O
.	O
</s>
<s>
Furthermore	O
,	O
if	O
a	O
CNN	B-Architecture
makes	O
use	O
of	O
fully	O
connected	O
layers	O
,	O
translation	O
equivariance	O
does	O
not	O
imply	O
translation	O
invariance	O
,	O
as	O
the	O
fully	O
connected	O
layers	O
are	O
not	O
invariant	O
to	O
shifts	O
of	O
the	O
input	O
.	O
</s>
<s>
One	O
solution	O
for	O
complete	O
translation	O
invariance	O
is	O
avoiding	O
any	O
down-sampling	B-Algorithm
throughout	O
the	O
network	O
and	O
applying	O
global	O
average	O
pooling	O
at	O
the	O
last	O
layer	B-Algorithm
.	O
</s>
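A sketch of the global average pooling step mentioned above: each feature map is collapsed to a single number, so the output no longer depends on where in the map a feature fired (pure Python, C maps of size M x N as nested lists):

```python
def global_average_pool(feature_maps):
    """Global average pooling: reduce each M x N feature map to its mean,
    yielding one value per channel regardless of spatial position."""
    return [sum(sum(row) for row in fmap) / (len(fmap) * len(fmap[0]))
            for fmap in feature_maps]
```

Note that shifting the activation inside a map leaves the pooled value unchanged, which is the invariance the text appeals to.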
<s>
Additionally	O
,	O
several	O
other	O
partial	O
solutions	O
have	O
been	O
proposed	O
,	O
such	O
as	O
anti-aliasing	O
before	O
downsampling	B-Algorithm
operations	O
,	O
spatial	O
transformer	O
networks	O
,	O
data	B-General_Concept
augmentation	I-General_Concept
,	O
subsampling	O
combined	O
with	O
pooling	O
,	O
and	O
capsule	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
.	O
</s>
<s>
Other	O
times	O
methods	O
such	O
as	O
k-fold	O
cross-validation	B-Application
are	O
applied	O
.	O
</s>
<s>
Other	O
strategies	O
include	O
using	O
conformal	B-Algorithm
prediction	I-Algorithm
.	O
</s>
<s>
Regularization	O
is	O
a	O
process	O
of	O
introducing	O
additional	O
information	O
to	O
solve	O
an	O
ill-posed	B-Algorithm
problem	I-Algorithm
or	O
to	O
prevent	O
overfitting	B-Error_Name
.	O
</s>
<s>
CNNs	B-Architecture
use	O
various	O
types	O
of	O
regularization	O
.	O
</s>
<s>
Because	O
a	O
fully	O
connected	O
layer	B-Algorithm
occupies	O
most	O
of	O
the	O
parameters	O
,	O
it	O
is	O
prone	O
to	O
overfitting	B-Error_Name
.	O
</s>
<s>
One	O
method	O
to	O
reduce	O
overfitting	B-Error_Name
is	O
dropout	B-Algorithm
.	O
</s>
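A minimal dropout sketch; the "inverted" scaling by 1/(1 − p) at training time is a common implementation choice (assumed here, not stated in the text) that lets the single test-time network run without rescaling:

```python
import random

def dropout(activations, p, rng=random.Random(0)):
    """Dropout sketch: zero each unit with probability p during training;
    scale survivors by 1/(1-p) so the expected activation is unchanged.
    The seeded generator is only for reproducibility of this sketch."""
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0 for a in activations]
```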
<s>
This	O
is	O
the	O
biggest	O
contribution	O
of	O
the	O
dropout	B-Algorithm
method	O
:	O
although	O
it	O
effectively	O
generates	O
neural	B-Architecture
nets	I-Architecture
,	O
and	O
as	O
such	O
allows	O
for	O
model	O
combination	O
,	O
at	O
test	O
time	O
only	O
a	O
single	O
network	O
needs	O
to	O
be	O
tested	O
.	O
</s>
<s>
By	O
avoiding	O
training	O
all	O
nodes	O
on	O
all	O
training	O
data	O
,	O
dropout	B-Algorithm
decreases	O
overfitting	B-Error_Name
.	O
</s>
<s>
This	O
makes	O
the	O
model	O
combination	O
practical	O
,	O
even	O
for	O
deep	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
.	O
</s>
<s>
DropConnect	O
is	O
the	O
generalization	O
of	O
dropout	B-Algorithm
in	O
which	O
each	O
connection	O
,	O
rather	O
than	O
each	O
output	O
unit	O
,	O
can	O
be	O
dropped	O
with	O
probability	O
.	O
</s>
<s>
Each	O
unit	O
thus	O
receives	O
input	O
from	O
a	O
random	O
subset	O
of	O
units	O
in	O
the	O
previous	O
layer	B-Algorithm
.	O
</s>
<s>
DropConnect	O
is	O
similar	O
to	O
dropout	B-Algorithm
as	O
it	O
introduces	O
dynamic	O
sparsity	O
within	O
the	O
model	O
,	O
but	O
differs	O
in	O
that	O
the	O
sparsity	O
is	O
on	O
the	O
weights	O
,	O
rather	O
than	O
the	O
output	O
vectors	O
of	O
a	O
layer	B-Algorithm
.	O
</s>
<s>
In	O
other	O
words	O
,	O
the	O
fully	O
connected	O
layer	B-Algorithm
with	O
DropConnect	O
becomes	O
a	O
sparsely	O
connected	O
layer	B-Algorithm
in	O
which	O
the	O
connections	O
are	O
chosen	O
at	O
random	O
during	O
the	O
training	O
stage	O
.	O
</s>
<s>
A	O
major	O
drawback	O
to	O
Dropout	B-Algorithm
is	O
that	O
it	O
does	O
not	O
have	O
the	O
same	O
benefits	O
for	O
convolutional	O
layers	O
,	O
where	O
the	O
neurons	B-Algorithm
are	O
not	O
fully	O
connected	O
.	O
</s>
<s>
In	O
stochastic	O
pooling	O
,	O
the	O
conventional	O
deterministic	B-General_Concept
pooling	O
operations	O
are	O
replaced	O
with	O
a	O
stochastic	O
procedure	O
,	O
where	O
the	O
activation	O
within	O
each	O
pooling	O
region	O
is	O
picked	O
randomly	O
according	O
to	O
a	O
multinomial	O
distribution	O
,	O
given	O
by	O
the	O
activities	O
within	O
the	O
pooling	O
region	O
.	O
</s>
<s>
This	O
approach	O
is	O
free	O
of	O
hyperparameters	B-General_Concept
and	O
can	O
be	O
combined	O
with	O
other	O
regularization	O
approaches	O
,	O
such	O
as	O
dropout	B-Algorithm
and	O
data	B-General_Concept
augmentation	I-General_Concept
.	O
</s>
<s>
This	O
is	O
similar	O
to	O
explicit	O
elastic	O
deformations	O
of	O
the	O
input	O
images	O
,	O
which	O
delivers	O
excellent	O
performance	O
on	O
the	O
MNIST	B-General_Concept
data	I-General_Concept
set	I-General_Concept
.	O
</s>
<s>
Because	O
the	O
degree	O
of	O
model	O
overfitting	B-Error_Name
is	O
determined	O
by	O
both	O
its	O
power	O
and	O
the	O
amount	O
of	O
training	O
it	O
receives	O
,	O
providing	O
a	O
convolutional	O
network	O
with	O
more	O
training	O
examples	O
can	O
reduce	O
overfitting	B-Error_Name
.	O
</s>
<s>
One	O
of	O
the	O
simplest	O
methods	O
to	O
prevent	O
overfitting	B-Error_Name
of	O
a	O
network	O
is	O
to	O
simply	O
stop	O
the	O
training	O
before	O
overfitting	B-Error_Name
has	O
had	O
a	O
chance	O
to	O
occur	O
.	O
</s>
<s>
Another	O
simple	O
way	O
to	O
prevent	O
overfitting	B-Error_Name
is	O
to	O
limit	O
the	O
number	O
of	O
parameters	O
,	O
typically	O
by	O
limiting	O
the	O
number	O
of	O
hidden	O
units	O
in	O
each	O
layer	B-Algorithm
or	O
limiting	O
network	O
depth	O
.	O
</s>
<s>
Limiting	O
the	O
number	O
of	O
parameters	O
restricts	O
the	O
predictive	O
power	O
of	O
the	O
network	O
directly	O
,	O
reducing	O
the	O
complexity	O
of	O
the	O
function	O
that	O
it	O
can	O
perform	O
on	O
the	O
data	O
,	O
and	O
thus	O
limits	O
the	O
amount	O
of	O
overfitting	B-Error_Name
.	O
</s>
<s>
The	O
level	O
of	O
acceptable	O
model	O
complexity	O
can	O
be	O
reduced	O
by	O
increasing	O
the	O
proportionality	O
constant	O
(	O
the	O
'	O
alpha	O
'	O
hyperparameter	B-General_Concept
)	O
,	O
thus	O
increasing	O
the	O
penalty	O
for	O
large	O
weight	O
vectors	O
.	O
</s>
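The penalty for large weight vectors described above can be sketched as the usual L2 (weight decay) term added to the loss, with the 'alpha' proportionality constant scaling it:

```python
def l2_penalty(weights, alpha):
    """L2 regularization term added to the loss: alpha * sum(w^2).
    A larger alpha penalizes large weight vectors more heavily,
    reducing the acceptable model complexity."""
    return alpha * sum(w * w for w in weights)
```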
<s>
In	O
other	O
words	O
,	O
neurons	B-Algorithm
with	O
L1	O
regularization	O
end	O
up	O
using	O
only	O
a	O
sparse	O
subset	O
of	O
their	O
most	O
important	O
inputs	O
and	O
become	O
nearly	O
invariant	O
to	O
the	O
noisy	O
inputs	O
.	O
</s>
<s>
Overlapping	O
the	O
pools	O
so	O
that	O
each	O
feature	B-Algorithm
occurs	O
in	O
multiple	O
pools	O
,	O
helps	O
retain	O
the	O
information	O
.	O
</s>
<s>
The	O
alternative	O
is	O
to	O
use	O
a	O
hierarchy	O
of	O
coordinate	O
frames	O
and	O
use	O
a	O
group	O
of	O
neurons	B-Algorithm
to	O
represent	O
a	O
conjunction	O
of	O
the	O
shape	O
of	O
the	O
feature	B-Algorithm
and	O
its	O
pose	O
relative	O
to	O
the	O
retina	O
.	O
</s>
<s>
CNNs	B-Architecture
are	O
often	O
used	O
in	O
image	O
recognition	O
systems	O
.	O
</s>
<s>
In	O
2012	O
an	O
error	O
rate	O
of	O
0.23	O
%	O
on	O
the	O
MNIST	B-General_Concept
database	I-General_Concept
was	O
reported	O
.	O
</s>
<s>
Another	O
paper	O
on	O
using	O
CNN	B-Architecture
for	O
image	O
classification	O
reported	O
that	O
the	O
learning	O
process	O
was	O
"	O
surprisingly	O
fast	O
"	O
;	O
in	O
the	O
same	O
paper	O
,	O
the	O
best	O
published	O
results	O
as	O
of	O
2011	O
were	O
achieved	O
in	O
the	O
MNIST	B-General_Concept
database	I-General_Concept
and	O
the	O
NORB	O
database	O
.	O
</s>
<s>
AlexNet	B-Algorithm
won	O
the	O
ImageNet	O
Large	O
Scale	O
Visual	O
Recognition	O
Challenge	O
2012	O
.	O
</s>
<s>
When	O
applied	O
to	O
facial	O
recognition	O
,	O
CNNs	B-Architecture
achieved	O
a	O
large	O
decrease	O
in	O
error	O
rate	O
.	O
</s>
<s>
CNNs	B-Architecture
were	O
used	O
to	O
assess	O
video	B-Device
quality	I-Device
in	O
an	O
objective	O
way	O
after	O
manual	O
training	O
;	O
the	O
resulting	O
system	O
had	O
a	O
very	O
low	O
root	B-General_Concept
mean	I-General_Concept
square	I-General_Concept
error	I-General_Concept
.	O
</s>
<s>
In	O
the	O
ILSVRC	O
2014	O
,	O
a	O
large-scale	O
visual	O
recognition	O
challenge	O
,	O
almost	O
every	O
highly	O
ranked	O
team	O
used	O
CNN	B-Architecture
as	O
their	O
basic	O
framework	O
.	O
</s>
<s>
The	O
winner	O
GoogLeNet	O
(	O
the	O
foundation	O
of	O
DeepDream	B-Application
)	O
increased	O
the	O
mean	O
average	O
precision	O
of	O
object	B-General_Concept
detection	I-General_Concept
to	O
0.439329	O
,	O
and	O
reduced	O
classification	O
error	O
to	O
0.06656	O
,	O
the	O
best	O
result	O
to	O
date	O
.	O
</s>
<s>
That	O
performance	O
of	O
convolutional	B-Architecture
neural	I-Architecture
networks	I-Architecture
on	O
the	O
ImageNet	O
tests	O
was	O
close	O
to	O
that	O
of	O
humans	O
.	O
</s>
<s>
For	O
example	O
,	O
humans	O
are	O
not	O
good	O
at	O
classifying	O
objects	O
into	O
fine-grained	O
categories	O
such	O
as	O
the	O
particular	O
breed	O
of	O
dog	O
or	O
species	O
of	O
bird	O
,	O
whereas	O
convolutional	B-Architecture
neural	I-Architecture
networks	I-Architecture
handle	O
this	O
.	O
</s>
<s>
In	O
2015	O
a	O
many-layered	O
CNN	B-Architecture
demonstrated	O
the	O
ability	O
to	O
spot	O
faces	O
from	O
a	O
wide	O
range	O
of	O
angles	O
,	O
including	O
upside	O
down	O
,	O
even	O
when	O
partially	O
occluded	O
,	O
with	O
competitive	O
performance	O
.	O
</s>
<s>
Compared	O
to	O
image	O
data	O
domains	O
,	O
there	O
is	O
relatively	O
little	O
work	O
on	O
applying	O
CNNs	B-Architecture
to	O
video	O
classification	O
.	O
</s>
<s>
However	O
,	O
some	O
extensions	O
of	O
CNNs	B-Architecture
into	O
the	O
video	O
domain	O
have	O
been	O
explored	O
.	O
</s>
<s>
One	O
approach	O
is	O
to	O
treat	O
space	O
and	O
time	O
as	O
equivalent	O
dimensions	O
of	O
the	O
input	O
and	O
perform	O
convolutions	B-Language
in	O
both	O
time	O
and	O
space	O
.	O
</s>
<s>
Another	O
way	O
is	O
to	O
fuse	O
the	O
features	O
of	O
two	O
convolutional	B-Architecture
neural	I-Architecture
networks	I-Architecture
,	O
one	O
for	O
the	O
spatial	O
and	O
one	O
for	O
the	O
temporal	O
stream	O
.	O
</s>
<s>
Long	B-Algorithm
short-term	I-Algorithm
memory	I-Algorithm
(	O
LSTM	B-Algorithm
)	O
recurrent	B-Algorithm
units	O
are	O
typically	O
incorporated	O
after	O
the	O
CNN	B-Architecture
to	O
account	O
for	O
inter-frame	O
or	O
inter-clip	O
dependencies	O
.	O
</s>
<s>
Unsupervised	B-General_Concept
learning	I-General_Concept
schemes	O
for	O
training	O
spatio-temporal	O
features	O
have	O
been	O
introduced	O
,	O
based	O
on	O
Convolutional	O
Gated	O
Restricted	O
Boltzmann	B-Algorithm
Machines	I-Algorithm
and	O
Independent	O
Subspace	O
Analysis	O
.	O
</s>
<s>
Its	O
application	O
can	O
be	O
seen	O
in	O
Text-to-Video	B-Algorithm
model	I-Algorithm
.	O
</s>
<s>
CNNs	B-Architecture
have	O
also	O
been	O
explored	O
for	O
natural	B-Language
language	I-Language
processing	I-Language
.	O
</s>
<s>
CNN	B-Architecture
models	O
are	O
effective	O
for	O
various	O
NLP	B-Language
problems	O
and	O
achieved	O
excellent	O
results	O
in	O
semantic	B-Application
parsing	I-Application
,	O
search	O
query	O
retrieval	O
,	O
sentence	O
modeling	O
,	O
classification	O
,	O
prediction	O
and	O
other	O
traditional	O
NLP	B-Language
tasks	O
.	O
</s>
<s>
Compared	O
to	O
traditional	O
language	O
processing	O
methods	O
such	O
as	O
recurrent	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
,	O
CNNs	B-Architecture
can	O
represent	O
different	O
contextual	O
realities	O
of	O
language	O
that	O
do	O
not	O
rely	O
on	O
a	O
series-sequence	O
assumption	O
,	O
while	O
RNNs	O
are	O
better	O
suited	O
when	O
classical	O
time	O
series	O
modeling	O
is	O
required	O
.	O
</s>
<s>
A	O
CNN	B-Architecture
with	O
1-D	O
convolutions	B-Language
was	O
used	O
on	O
time	O
series	O
in	O
the	O
frequency	O
domain	O
(	O
spectral	O
residual	O
)	O
by	O
an	O
unsupervised	O
model	O
to	O
detect	O
anomalies	O
in	O
the	O
time	O
domain	O
.	O
</s>
<s>
CNNs	B-Architecture
have	O
been	O
used	O
in	O
drug	O
discovery	O
.	O
</s>
<s>
In	O
2015	O
,	O
Atomwise	O
introduced	O
AtomNet	O
,	O
the	O
first	O
deep	B-Algorithm
learning	I-Algorithm
neural	B-Architecture
network	I-Architecture
for	O
structure-based	O
drug	O
design	O
.	O
</s>
<s>
CNNs	B-Architecture
have	O
been	O
used	O
in	O
the	O
game	O
of	O
checkers	O
.	O
</s>
<s>
From	O
1999	O
to	O
2001	O
,	O
Fogel	O
and	O
Chellapilla	O
published	O
papers	O
showing	O
how	O
a	O
convolutional	B-Architecture
neural	I-Architecture
network	I-Architecture
could	O
learn	O
to	O
play	O
checkers	O
using	O
co-evolution	O
.	O
</s>
<s>
Ultimately	O
,	O
the	O
program	O
(	O
Blondie24	B-General_Concept
)	O
was	O
tested	O
on	O
165	O
games	O
against	O
players	O
and	O
ranked	O
in	O
the	O
highest	O
0.4	O
%	O
.	O
</s>
<s>
It	O
also	O
earned	O
a	O
win	O
against	O
the	O
program	O
Chinook	B-General_Concept
at	O
its	O
"	O
expert	O
"	O
level	O
of	O
play	O
.	O
</s>
<s>
CNNs	B-Architecture
have	O
been	O
used	O
in	O
computer	B-Application
Go	I-Application
.	O
</s>
<s>
In	O
December	O
2014	O
,	O
Clark	O
and	O
Storkey	O
published	O
a	O
paper	O
showing	O
that	O
a	O
CNN	B-Architecture
trained	O
by	O
supervised	O
learning	O
from	O
a	O
database	O
of	O
human	O
professional	O
games	O
could	O
outperform	O
GNU	B-Application
Go	I-Application
and	O
win	O
some	O
games	O
against	O
Monte	B-Application
Carlo	I-Application
tree	I-Application
search	I-Application
Fuego	O
1.1	O
in	O
a	O
fraction	O
of	O
the	O
time	O
it	O
took	O
Fuego	O
to	O
play	O
.	O
</s>
<s>
Later	O
it	O
was	O
announced	O
that	O
a	O
large	O
12-layer	O
convolutional	B-Architecture
neural	I-Architecture
network	I-Architecture
had	O
correctly	O
predicted	O
the	O
professional	O
move	O
in	O
55%	O
of	O
positions	O
,	O
equalling	O
the	O
accuracy	O
of	O
a	O
6	O
dan	O
human	O
player	O
.	O
</s>
<s>
When	O
the	O
trained	O
convolutional	O
network	O
was	O
used	O
directly	O
to	O
play	O
games	O
of	O
Go	O
,	O
without	O
any	O
search	O
,	O
it	O
beat	O
the	O
traditional	O
search	O
program	O
GNU	B-Application
Go	I-Application
in	O
97%	O
of	O
games	O
,	O
and	O
matched	O
the	O
performance	O
of	O
the	O
Monte	B-Application
Carlo	I-Application
tree	I-Application
search	I-Application
program	O
Fuego	O
simulating	O
ten	O
thousand	O
playouts	O
(	O
about	O
a	O
million	O
positions	O
)	O
per	O
move	O
.	O
</s>
<s>
A	O
couple	O
of	O
CNNs	B-Architecture
for	O
choosing	O
moves	O
to	O
try	O
(	O
"	O
policy	O
network	O
"	O
)	O
and	O
evaluating	O
positions	O
(	O
"	O
value	O
network	O
"	O
)	O
driving	O
MCTS	O
were	O
used	O
by	O
AlphaGo	B-Application
,	O
the	O
first	O
to	O
beat	O
the	O
best	O
human	O
player	O
at	O
the	O
time	O
.	O
</s>
<s>
Recurrent	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
are	O
generally	O
considered	O
the	O
best	O
neural	B-Architecture
network	I-Architecture
architectures	O
for	O
time	O
series	O
forecasting	O
(	O
and	O
sequence	O
modeling	O
in	O
general	O
)	O
,	O
but	O
recent	O
studies	O
show	O
that	O
convolutional	O
networks	O
can	O
perform	O
comparably	O
or	O
even	O
better	O
.	O
</s>
<s>
Dilated	O
convolutions	B-Language
might	O
enable	O
one-dimensional	O
convolutional	B-Architecture
neural	I-Architecture
networks	I-Architecture
to	O
effectively	O
learn	O
time	O
series	O
dependencies	O
.	O
</s>
<s>
Convolutions	B-Language
can	O
be	O
implemented	O
more	O
efficiently	O
than	O
RNN-based	O
solutions	O
,	O
and	O
they	O
do	O
not	O
suffer	O
from	O
vanishing	O
(	O
or	O
exploding	O
)	O
gradients	O
.	O
</s>
<s>
CNNs	B-Architecture
can	O
also	O
be	O
applied	O
to	O
further	O
tasks	O
in	O
time	O
series	O
analysis	O
(	O
e.g.	O
,	O
time	O
series	O
classification	O
or	O
quantile	O
forecasting	O
)	O
.	O
</s>
<s>
As	O
archaeological	O
findings	O
like	O
clay	O
tablets	O
with	O
cuneiform	B-Language
writing	I-Language
are	O
increasingly	O
acquired	O
using	O
3D	B-Algorithm
scanners	I-Algorithm
,	O
first	O
benchmark	O
datasets	O
are	O
becoming	O
available	O
like	O
HeiCuBeDa	O
providing	O
almost	O
2,000	O
normalized	O
2D	O
-	O
and	O
3D-datasets	O
prepared	O
with	O
the	O
GigaMesh	B-Language
Software	I-Language
Framework	I-Language
.	O
</s>
<s>
So	O
curvature-based	O
measures	O
are	O
used	O
in	O
conjunction	O
with	O
Geometric	O
Neural	B-Architecture
Networks	I-Architecture
(	O
GNNs	O
)	O
.	O
</s>
<s>
Convolutional	B-Architecture
neural	I-Architecture
networks	I-Architecture
usually	O
require	O
a	O
large	O
amount	O
of	O
training	O
data	O
in	O
order	O
to	O
avoid	O
overfitting	B-Error_Name
.	O
</s>
<s>
Once	O
the	O
network	O
parameters	O
have	O
converged	O
an	O
additional	O
training	O
step	O
is	O
performed	O
using	O
the	O
in-domain	O
data	O
to	O
fine-tune	O
the	O
network	O
weights	O
;	O
this	O
is	O
known	O
as	O
transfer	B-General_Concept
learning	I-General_Concept
.	O
</s>
<s>
End-to-end	O
training	O
and	O
prediction	O
are	O
common	O
practice	O
in	O
computer	B-Application
vision	I-Application
.	O
</s>
<s>
However	O
,	O
human	O
interpretable	O
explanations	O
are	O
required	O
for	O
critical	B-Application
systems	I-Application
such	O
as	O
self-driving	O
cars	O
.	O
</s>
<s>
With	O
recent	O
advances	O
in	O
visual	O
salience	O
,	O
spatial	O
attention	O
,	O
and	O
temporal	B-Application
attention	I-Application
,	O
the	O
most	O
critical	O
spatial	O
regions/temporal	O
instants	O
could	O
be	O
visualized	O
to	O
justify	O
the	O
CNN	B-Architecture
predictions	O
.	O
</s>
<s>
A	O
deep	O
Q-network	O
(	O
DQN	O
)	O
is	O
a	O
type	O
of	O
deep	B-Algorithm
learning	I-Algorithm
model	O
that	O
combines	O
a	O
deep	O
neural	B-Architecture
network	I-Architecture
with	O
Q-learning	B-Algorithm
,	O
a	O
form	O
of	O
reinforcement	O
learning	O
.	O
</s>
<s>
Unlike	O
earlier	O
reinforcement	O
learning	O
agents	O
,	O
DQNs	O
that	O
utilize	O
CNNs	B-Architecture
can	O
learn	O
directly	O
from	O
high-dimensional	O
sensory	O
inputs	O
via	O
reinforcement	O
learning	O
.	O
</s>
<s>
The	O
research	O
described	O
an	O
application	O
to	O
Atari	B-General_Concept
2600	I-General_Concept
gaming	O
.	O
</s>
<s>
Convolutional	O
deep	B-Algorithm
belief	I-Algorithm
networks	I-Algorithm
(	O
CDBN	O
)	O
have	O
structure	O
very	O
similar	O
to	O
convolutional	B-Architecture
neural	I-Architecture
networks	I-Architecture
and	O
are	O
trained	O
similarly	O
to	O
deep	B-Algorithm
belief	I-Algorithm
networks	I-Algorithm
.	O
</s>
<s>
Therefore	O
,	O
they	O
exploit	O
the	O
2D	O
structure	O
of	O
images	O
,	O
like	O
CNNs	B-Architecture
do	O
,	O
and	O
make	O
use	O
of	O
pre-training	O
like	O
deep	B-Algorithm
belief	I-Algorithm
networks	I-Algorithm
.	O
</s>
<s>
Caffe	B-Algorithm
:	O
A	O
library	O
for	O
convolutional	B-Architecture
neural	I-Architecture
networks	I-Architecture
.	O
</s>
<s>
It	O
supports	O
both	O
CPU	B-Device
and	O
GPU	B-Device
.	O
</s>
<s>
Developed	O
in	O
C++	B-Language
and	O
has	O
Python	B-Language
and	O
MATLAB	B-Language
wrappers	O
.	O
</s>
<s>
Deeplearning4j	B-Library
:	O
Deep	B-Algorithm
learning	I-Algorithm
in	O
Java	B-Language
and	O
Scala	B-Language
on	O
multi-GPU-enabled	O
Spark	B-Library
.	O
</s>
<s>
A	O
general-purpose	O
deep	B-Algorithm
learning	I-Algorithm
library	O
for	O
the	O
JVM	O
production	O
stack	O
running	O
on	O
a	O
C++	B-Language
scientific	O
computing	O
engine	O
.	O
</s>
<s>
Dlib	B-Library
:	O
A	O
toolkit	O
for	O
making	O
real	O
world	O
machine	O
learning	O
and	O
data	O
analysis	O
applications	O
in	O
C++	B-Language
.	O
</s>
<s>
Microsoft	B-Library
Cognitive	I-Library
Toolkit	I-Library
:	O
A	O
deep	B-Algorithm
learning	I-Algorithm
toolkit	O
written	O
by	O
Microsoft	O
with	O
several	O
unique	O
features	O
enhancing	O
scalability	O
over	O
multiple	O
nodes	O
.	O
</s>
<s>
It	O
supports	O
full-fledged	O
interfaces	O
for	O
training	O
in	O
C++	B-Language
and	O
Python	B-Language
and	O
with	O
additional	O
support	O
for	O
model	O
inference	O
in	O
C#	B-Language
and	O
Java	B-Language
.	O
</s>
<s>
TensorFlow	B-Library
:	O
Apache	O
2.0-licensed	O
Theano-like	O
library	O
with	O
support	O
for	O
CPU	B-Device
,	O
GPU	B-Device
,	O
Google	O
's	O
proprietary	O
tensor	B-Device
processing	I-Device
unit	I-Device
(	O
TPU	O
)	O
,	O
and	O
mobile	O
devices	O
.	O
</s>
<s>
Theano	B-Library
:	O
The	O
reference	O
deep-learning	B-Algorithm
library	O
for	O
Python	B-Language
with	O
an	O
API	O
largely	O
compatible	O
with	O
the	O
popular	O
NumPy	B-Library
library	O
.	O
</s>
<s>
Allows	O
the	O
user	O
to	O
write	O
symbolic	O
mathematical	O
expressions	O
,	O
then	O
automatically	O
generates	O
their	O
derivatives	O
,	O
saving	O
the	O
user	O
from	O
having	O
to	O
code	O
gradients	O
or	O
backpropagation	B-Algorithm
.	O
</s>
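The idea of deriving gradients automatically from user-written expressions, which Theano performs symbolically, can be illustrated with forward-mode dual numbers. This is only a conceptual sketch of automatic differentiation, not Theano's actual mechanism or API:

```python
# Forward-mode automatic differentiation with dual numbers: the user
# writes the expression, and the derivative comes for free.
class Dual:
    def __init__(self, val, eps=0.0):
        self.val, self.eps = val, eps      # value and derivative part

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.eps + o.eps)

    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.eps * o.val + self.val * o.eps)

    __rmul__ = __mul__

def grad(f, x):
    # seed the derivative part with 1.0 to obtain df/dx at x
    return f(Dual(x, 1.0)).eps

f = lambda x: x * x * x + 2 * x    # f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2
print(grad(f, 2.0))                # 3*4 + 2 = 14.0
```

Theano instead builds a symbolic expression graph and applies differentiation rules to the graph before compilation, but the user-facing benefit is the same: no hand-coded gradients or backpropagation.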
<s>
These	O
symbolic	O
expressions	O
are	O
automatically	O
compiled	O
to	O
CUDA	B-Architecture
code	O
for	O
a	O
fast	O
,	O
on-the-GPU	O
implementation	O
.	O
</s>
<s>
Torch	B-Library
:	O
A	O
scientific	O
computing	O
framework	O
with	O
wide	O
support	O
for	O
machine	O
learning	O
algorithms	O
,	O
written	O
in	O
C	B-Language
and	O
Lua	B-Language
.	O
</s>
