<s>
There	O
are	O
many	O
types	B-Algorithm
of	I-Algorithm
artificial	I-Algorithm
neural	I-Algorithm
networks	I-Algorithm
(	O
ANN	O
)	O
.	O
</s>
<s>
Artificial	B-Architecture
neural	I-Architecture
networks	I-Architecture
are	O
computational	B-Application
models	I-Application
inspired	O
by	O
biological	B-General_Concept
neural	I-General_Concept
networks	I-General_Concept
,	O
and	O
are	O
used	O
to	O
approximate	B-Algorithm
functions	O
that	O
are	O
generally	O
unknown	O
.	O
</s>
<s>
Most	O
artificial	B-Architecture
neural	I-Architecture
networks	I-Architecture
bear	O
only	O
some	O
resemblance	O
to	O
their	O
more	O
complex	O
biological	O
counterparts	O
,	O
but	O
are	O
very	O
effective	O
at	O
their	O
intended	O
tasks	O
(	O
e.g.	O
classification	B-General_Concept
or	O
segmentation	O
)	O
.	O
</s>
<s>
Some	O
artificial	B-Architecture
neural	I-Architecture
networks	I-Architecture
are	O
adaptive	O
systems	O
and	O
are	O
used	O
for	O
example	O
to	O
model	O
populations	O
and	O
environments	O
,	O
which	O
constantly	O
change	O
.	O
</s>
<s>
Neural	B-Architecture
networks	I-Architecture
can	O
be	O
hardware	O
-	O
(	O
neurons	O
are	O
represented	O
by	O
physical	O
components	O
)	O
or	O
software-based	B-Algorithm
(	O
computer	O
models	O
)	O
,	O
and	O
can	O
use	O
a	O
variety	O
of	O
topologies	O
and	O
learning	O
algorithms	O
.	O
</s>
<s>
The	O
feedforward	O
neural	B-Architecture
network	I-Architecture
was	O
the	O
first	O
and	O
simplest	O
type	O
.	O
</s>
<s>
Feedforward	O
networks	O
can	O
be	O
constructed	O
with	O
various	O
types	O
of	O
units	B-Algorithm
,	O
such	O
as	O
binary	O
McCulloch	B-Algorithm
–	I-Algorithm
Pitts	I-Algorithm
neurons	I-Algorithm
,	O
the	O
simplest	O
of	O
which	O
is	O
the	O
perceptron	B-Algorithm
.	O
</s>
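The perceptron named in the sentence above, the simplest feedforward unit, can be sketched as a single hard-threshold neuron trained with the classic error-correction rule. This is a minimal illustration; the AND dataset and all names below are assumptions for the example, not from the text.

```python
import numpy as np

def perceptron_train(X, y, epochs=10, lr=1.0):
    """Train a single hard-threshold unit with the classic perceptron rule."""
    w = np.zeros(X.shape[1])  # connection weights
    b = 0.0                   # bias (negative threshold)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0  # binary threshold activation
            w += lr * (yi - pred) * xi         # nudge weights toward the target
            b += lr * (yi - pred)
    return w, b

def perceptron_predict(X, w, b):
    return (X @ w + b > 0).astype(int)

# Toy linearly separable problem (logical AND) -- illustrative data only.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = perceptron_train(X, y)
```

Because AND is linearly separable, the rule converges after a handful of epochs.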
<s>
Continuous	O
neurons	O
,	O
frequently	O
with	O
sigmoidal	B-Algorithm
activation	O
,	O
are	O
used	O
in	O
the	O
context	O
of	O
backpropagation	B-Algorithm
.	O
</s>
<s>
The	O
Group	O
Method	O
of	O
Data	O
Handling	O
(	O
GMDH	O
)	O
features	B-Algorithm
fully	O
automatic	O
structural	O
and	O
parametric	O
model	O
optimization	O
.	O
</s>
<s>
The	O
node	O
activation	B-Algorithm
functions	I-Algorithm
are	O
Kolmogorov	O
–	O
Gabor	O
polynomials	O
that	O
permit	O
additions	O
and	O
multiplications	O
.	O
</s>
<s>
It	O
uses	O
a	O
deep	O
multilayer	B-Algorithm
perceptron	I-Algorithm
with	O
eight	O
layers	O
.	O
</s>
<s>
It	O
is	O
a	O
supervised	B-General_Concept
learning	I-General_Concept
network	O
that	O
grows	O
layer	O
by	O
layer	O
,	O
where	O
each	O
layer	O
is	O
trained	O
by	O
regression	O
analysis	O
.	O
</s>
<s>
Useless	O
items	O
are	O
detected	O
using	O
a	O
validation	B-General_Concept
set	I-General_Concept
,	O
and	O
pruned	O
through	O
regularization	O
.	O
</s>
<s>
An	O
autoencoder	B-Algorithm
,	O
autoassociator	B-Algorithm
or	O
Diabolo	B-Algorithm
network	I-Algorithm
is	O
similar	O
to	O
the	O
multilayer	B-Algorithm
perceptron	I-Algorithm
(	O
MLP	O
)	O
–	O
with	O
an	O
input	O
layer	O
,	O
an	O
output	O
layer	O
and	O
one	O
or	O
more	O
hidden	O
layers	O
connecting	O
them	O
.	O
</s>
<s>
However	O
,	O
the	O
output	O
layer	O
has	O
the	O
same	O
number	O
of	O
units	B-Algorithm
as	O
the	O
input	O
layer	O
.	O
</s>
<s>
Therefore	O
,	O
autoencoders	B-Algorithm
are	O
unsupervised	B-General_Concept
learning	I-General_Concept
models	O
.	O
</s>
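The defining constraint above -- an output layer with exactly as many units as the input layer, with a narrower hidden code in between -- can be sketched in a few lines. This is only a forward pass with random weights to show the shapes; the class and sizes are illustrative assumptions, not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class Autoencoder:
    """Input -> smaller hidden code -> output of the same width as the input."""
    def __init__(self, n_in, n_hidden):
        self.W_enc = rng.normal(scale=0.1, size=(n_in, n_hidden))
        self.W_dec = rng.normal(scale=0.1, size=(n_hidden, n_in))

    def encode(self, x):
        return sigmoid(x @ self.W_enc)      # compressed code (dim. reduction)

    def forward(self, x):
        return self.encode(x) @ self.W_dec  # reconstruction, same width as x

ae = Autoencoder(n_in=8, n_hidden=3)
x = rng.normal(size=(5, 8))
code = ae.encode(x)
recon = ae.forward(x)
```

Training would minimize the reconstruction error between `recon` and `x`, which is why no labels are needed.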
<s>
An	O
autoencoder	B-Algorithm
is	O
used	O
for	O
unsupervised	B-General_Concept
learning	I-General_Concept
of	O
efficient	B-General_Concept
codings	I-General_Concept
,	O
typically	O
for	O
the	O
purpose	O
of	O
dimensionality	B-Algorithm
reduction	I-Algorithm
and	O
for	O
learning	O
generative	O
models	O
of	O
data	O
.	O
</s>
<s>
A	O
probabilistic	B-Algorithm
neural	I-Algorithm
network	I-Algorithm
(	O
PNN	O
)	O
is	O
a	O
four-layer	O
feedforward	O
neural	B-Architecture
network	I-Architecture
.	O
</s>
<s>
In	O
the	O
PNN	O
algorithm	O
,	O
the	O
parent	O
probability	O
distribution	O
function	O
(	O
PDF	O
)	O
of	O
each	O
class	O
is	O
approximated	O
by	O
a	O
Parzen	B-General_Concept
window	I-General_Concept
and	O
a	O
non-parametric	B-General_Concept
function	O
.	O
</s>
<s>
It	O
was	O
derived	O
from	O
the	O
Bayesian	O
network	O
and	O
a	O
statistical	O
algorithm	O
called	O
Kernel	B-General_Concept
Fisher	I-General_Concept
discriminant	I-General_Concept
analysis	I-General_Concept
.	O
</s>
<s>
It	O
is	O
used	O
for	O
classification	B-General_Concept
and	O
pattern	O
recognition	O
.	O
</s>
<s>
A	O
time	B-Algorithm
delay	I-Algorithm
neural	I-Algorithm
network	I-Algorithm
(	O
TDNN	B-Algorithm
)	O
is	O
a	O
feedforward	O
architecture	O
for	O
sequential	O
data	O
that	O
recognizes	O
features	B-Algorithm
independent	O
of	O
sequence	O
position	O
.	O
</s>
<s>
It	O
has	O
been	O
implemented	O
using	O
a	O
perceptron	B-Algorithm
network	O
whose	O
connection	O
weights	O
were	O
trained	O
with	O
back	B-Algorithm
propagation	I-Algorithm
(	O
supervised	B-General_Concept
learning	I-General_Concept
)	O
.	O
</s>
<s>
A	O
convolutional	B-Architecture
neural	I-Architecture
network	I-Architecture
(	O
CNN	B-Architecture
,	O
or	O
ConvNet	B-Architecture
or	O
shift	O
invariant	O
or	O
space	O
invariant	O
)	O
is	O
a	O
class	O
of	O
deep	O
network	O
,	O
composed	O
of	O
one	O
or	O
more	O
convolutional	O
layers	O
with	O
fully	O
connected	O
layers	O
(	O
matching	O
those	O
in	O
typical	O
ANNs	O
)	O
on	O
top	O
.	O
</s>
<s>
It	O
is	O
often	O
structured	B-General_Concept
via	O
Fukushima	O
's	O
convolutional	O
architecture	O
.	O
</s>
<s>
They	O
are	O
variations	O
of	O
multilayer	B-Algorithm
perceptrons	I-Algorithm
that	O
use	O
minimal	O
preprocessing	B-General_Concept
.	O
</s>
<s>
This	O
architecture	O
allows	O
CNNs	B-Architecture
to	O
take	O
advantage	O
of	O
the	O
2D	O
structure	O
of	O
input	O
data	O
.	O
</s>
<s>
Units	B-Algorithm
respond	O
to	O
stimuli	O
in	O
a	O
restricted	O
region	O
of	O
space	O
known	O
as	O
the	O
receptive	O
field	O
.	O
</s>
<s>
Unit	O
response	O
can	O
be	O
approximated	O
mathematically	O
by	O
a	O
convolution	B-Language
operation	O
.	O
</s>
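The convolution operation mentioned above can be made concrete with a small "valid" 2-D cross-correlation: each output unit responds only to one kernel-sized receptive field of the input. The image and kernel are illustrative assumptions.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' 2-D cross-correlation: each output unit sums one
    kernel-sized receptive field of the input, weighted by the kernel."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16.0).reshape(4, 4)
kernel = np.ones((2, 2))  # simple summing filter
response = conv2d_valid(image, kernel)
```

Sliding the same kernel across the whole image is what gives CNNs their shift invariance and their small parameter count.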
<s>
CNNs	B-Architecture
are	O
suitable	O
for	O
processing	O
visual	O
and	O
other	O
two-dimensional	O
data	O
.	O
</s>
<s>
They	O
can	O
be	O
trained	O
with	O
standard	O
backpropagation	B-Algorithm
.	O
</s>
<s>
CNNs	B-Architecture
are	O
easier	O
to	O
train	O
than	O
other	O
regular	O
,	O
deep	O
,	O
feed-forward	O
neural	B-Architecture
networks	I-Architecture
and	O
have	O
many	O
fewer	O
parameters	O
to	O
estimate	O
.	O
</s>
<s>
Capsule	B-Algorithm
Neural	I-Algorithm
Networks	I-Algorithm
(	O
CapsNet	O
)	O
add	O
structures	O
called	O
capsules	O
to	O
a	O
CNN	B-Architecture
and	O
reuse	O
output	O
from	O
several	O
capsules	O
to	O
form	O
more	O
stable	O
(	O
with	O
respect	O
to	O
various	O
perturbations	O
)	O
representations	O
.	O
</s>
<s>
Examples	O
of	O
applications	O
in	O
computer	B-Application
vision	I-Application
include	O
DeepDream	B-Application
and	O
robot	B-General_Concept
navigation	I-General_Concept
.	O
</s>
<s>
They	O
have	O
wide	O
applications	O
in	O
image	B-Application
and	I-Application
video	I-Application
recognition	I-Application
,	O
recommender	B-Application
systems	I-Application
and	O
natural	B-Language
language	I-Language
processing	I-Language
.	O
</s>
<s>
A	O
deep	O
stacking	O
network	O
(	O
DSN	O
)	O
(	O
deep	O
convex	O
network	O
)	O
is	O
based	O
on	O
a	O
hierarchy	O
of	O
blocks	O
of	O
simplified	O
neural	B-Architecture
network	I-Architecture
modules	O
.	O
</s>
<s>
It	O
formulates	O
the	O
learning	O
as	O
a	O
convex	O
optimization	O
problem	O
with	O
a	O
closed-form	O
solution	O
,	O
emphasizing	O
the	O
mechanism	O
's	O
similarity	O
to	O
stacked	B-Algorithm
generalization	I-Algorithm
.	O
</s>
<s>
Each	O
DSN	O
block	O
is	O
a	O
simple	O
module	O
that	O
is	O
easy	O
to	O
train	O
by	O
itself	O
in	O
a	O
supervised	B-General_Concept
fashion	O
without	O
backpropagation	B-Algorithm
for	O
the	O
entire	O
blocks	O
.	O
</s>
<s>
Each	O
block	O
consists	O
of	O
a	O
simplified	O
multi-layer	B-Algorithm
perceptron	I-Algorithm
(	O
MLP	O
)	O
with	O
a	O
single	O
hidden	O
layer	O
.	O
</s>
<s>
The	O
hidden	O
layer	O
h	O
has	O
logistic	O
sigmoidal	B-Algorithm
units	B-Algorithm
,	O
and	O
the	O
output	O
layer	O
has	O
linear	O
units	B-Algorithm
.	O
</s>
<s>
The	O
matrix	O
of	O
hidden	O
units	B-Algorithm
is	O
.	O
</s>
<s>
Unlike	O
other	O
deep	O
architectures	O
,	O
such	O
as	O
DBNs	B-Algorithm
,	O
the	O
goal	O
is	O
not	O
to	O
discover	O
the	O
transformed	O
feature	B-Algorithm
representation	O
.	O
</s>
<s>
In	O
purely	O
discriminative	O
tasks	O
,	O
DSNs	O
outperform	O
conventional	O
DBNs	B-Algorithm
.	O
</s>
<s>
TDSNs	O
use	O
covariance	O
statistics	O
in	O
a	O
bilinear	O
mapping	O
from	O
each	O
of	O
two	O
distinct	O
sets	O
of	O
hidden	O
units	B-Algorithm
in	O
the	O
same	O
layer	O
to	O
predictions	O
,	O
via	O
a	O
third-order	O
tensor	B-Device
.	O
</s>
<s>
The	O
basic	O
architecture	O
is	O
suitable	O
for	O
diverse	O
tasks	O
such	O
as	O
classification	B-General_Concept
and	O
regression	O
.	O
</s>
<s>
Regulatory	O
feedback	O
networks	O
started	O
as	O
a	O
model	O
to	O
explain	O
brain	O
phenomena	O
found	O
during	O
recognition	O
including	O
network-wide	O
bursting	B-Algorithm
and	O
difficulty	O
with	O
similarity	O
found	O
universally	O
in	O
sensory	O
recognition	O
.	O
</s>
<s>
Radial	O
basis	O
functions	O
have	O
been	O
applied	O
as	O
a	O
replacement	O
for	O
the	O
sigmoidal	B-Algorithm
hidden	O
layer	O
transfer	O
characteristic	O
in	O
multi-layer	B-Algorithm
perceptrons	I-Algorithm
.	O
</s>
<s>
In	O
classification	B-General_Concept
problems	O
the	O
output	O
layer	O
is	O
typically	O
a	O
sigmoid	B-Algorithm
function	I-Algorithm
of	O
a	O
linear	O
combination	O
of	O
hidden	O
layer	O
values	O
,	O
representing	O
a	O
posterior	O
probability	O
.	O
</s>
<s>
RBF	O
networks	O
have	O
the	O
advantage	O
of	O
avoiding	O
local	O
minima	O
in	O
the	O
same	O
way	O
as	O
multi-layer	B-Algorithm
perceptrons	I-Algorithm
.	O
</s>
<s>
In	O
classification	B-General_Concept
problems	O
the	O
fixed	O
non-linearity	O
introduced	O
by	O
the	O
sigmoid	O
output	O
function	O
is	O
most	O
efficiently	O
dealt	O
with	O
using	O
iteratively	B-Algorithm
re-weighted	I-Algorithm
least	I-Algorithm
squares	I-Algorithm
.	O
</s>
<s>
A	O
common	O
solution	O
is	O
to	O
associate	O
each	O
data	O
point	O
with	O
its	O
own	O
centre	O
,	O
although	O
this	O
can	O
expand	O
the	O
linear	O
system	O
to	O
be	O
solved	O
in	O
the	O
final	O
layer	O
and	O
requires	O
shrinkage	O
techniques	O
to	O
avoid	O
overfitting	B-Error_Name
.	O
</s>
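The scheme above -- centring an RBF on every data point and shrinking the resulting linear system to avoid overfitting -- can be sketched with ridge regularization as the shrinkage technique. The data, `gamma`, and `alpha` values are illustrative assumptions.

```python
import numpy as np

def rbf_design(X, centers, gamma=1.0):
    """Gaussian RBF features: one column per centre."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def fit_rbf_ridge(X, y, gamma=1.0, alpha=1e-8):
    """Centre an RBF on every data point; the ridge penalty (shrinkage)
    keeps the resulting square linear system from overfitting."""
    Phi = rbf_design(X, X, gamma)
    # Regularized normal equations: (Phi^T Phi + alpha I) w = Phi^T y
    return np.linalg.solve(Phi.T @ Phi + alpha * np.eye(len(X)), Phi.T @ y)

X = np.linspace(0, 1, 20).reshape(-1, 1)
y = np.sin(2 * np.pi * X[:, 0])
w = fit_rbf_ridge(X, y, gamma=200.0)
pred = rbf_design(X, X, gamma=200.0) @ w
```

With one centre per point the system is square, so without the `alpha` term the fit would interpolate the data exactly, noise included.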
<s>
Associating	O
each	O
input	O
datum	O
with	O
an	O
RBF	O
leads	O
naturally	O
to	O
kernel	O
methods	O
such	O
as	O
support	B-Algorithm
vector	I-Algorithm
machines	I-Algorithm
(	O
SVM	B-Algorithm
)	O
and	O
Gaussian	O
processes	O
(	O
the	O
RBF	O
is	O
the	O
kernel	O
function	O
)	O
.	O
</s>
<s>
Like	O
Gaussian	O
processes	O
,	O
and	O
unlike	O
SVMs	B-Algorithm
,	O
RBF	O
networks	O
are	O
typically	O
trained	O
in	O
a	O
maximum	O
likelihood	O
framework	O
by	O
maximizing	O
the	O
probability	O
(	O
minimizing	O
the	O
error	O
)	O
.	O
</s>
<s>
SVMs	B-Algorithm
avoid	O
overfitting	B-Error_Name
by	O
maximizing	O
instead	O
a	O
margin	O
.	O
</s>
<s>
SVMs	B-Algorithm
outperform	O
RBF	O
networks	O
in	O
most	O
classification	B-General_Concept
applications	O
.	O
</s>
<s>
RBF	O
neural	B-Architecture
networks	I-Architecture
are	O
conceptually	O
similar	O
to	O
K-Nearest	B-General_Concept
Neighbor	I-General_Concept
(	O
k-NN	B-General_Concept
)	O
models	O
.	O
</s>
<s>
The	O
nearest	B-General_Concept
neighbor	I-General_Concept
classification	B-General_Concept
performed	O
for	O
this	O
example	O
depends	O
on	O
how	O
many	O
neighboring	O
points	O
are	O
considered	O
.	O
</s>
<s>
Alternatively	O
,	O
if	O
9-NN	O
classification	B-General_Concept
is	O
used	O
and	O
the	O
closest	O
9	O
points	O
are	O
considered	O
,	O
then	O
the	O
effect	O
of	O
the	O
surrounding	O
8	O
positive	O
points	O
may	O
outweigh	O
the	O
closest	O
9th	O
(	O
negative	O
)	O
point	O
.	O
</s>
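The voting effect described above -- a ring of 8 positive points outweighing the single closest negative point once k grows to 9 -- can be reproduced directly. The point layout below is a hypothetical geometry echoing the passage, not data from the text.

```python
import numpy as np
from collections import Counter

def knn_classify(X, y, query, k):
    """Majority vote among the k nearest training points."""
    dists = np.linalg.norm(X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(y[i] for i in nearest).most_common(1)[0][0]

# One negative point right next to the query, ringed by 8 positives
# slightly further away (illustrative layout).
query = np.array([0.0, 0.0])
neg = query + np.array([[0.05, 0.0]])
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
pos = 0.3 * np.column_stack([np.cos(angles), np.sin(angles)])
X = np.vstack([neg, pos])
y = np.array([0] + [1] * 8)
```

With k=1 the lone negative neighbour decides; with k=9 the 8 surrounding positives outvote it.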
<s>
The	O
input	O
neurons	O
standardize	O
the	O
value	O
ranges	O
by	O
subtracting	O
the	O
median	O
and	O
dividing	O
by	O
the	O
interquartile	B-General_Concept
range	I-General_Concept
.	O
</s>
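The standardization step just described (subtract the median, divide by the interquartile range) is a short computation; the outlier-heavy column below is an illustrative assumption showing why the robust statistics matter.

```python
import numpy as np

def robust_standardize(X):
    """Standardize each column by subtracting its median and dividing
    by its interquartile range, as the input neurons do."""
    median = np.median(X, axis=0)
    q75, q25 = np.percentile(X, [75, 25], axis=0)
    return (X - median) / (q75 - q25)

X = np.array([[1.0], [2.0], [3.0], [4.0], [100.0]])  # one extreme outlier
Z = robust_standardize(X)
```

Unlike mean/standard-deviation scaling, the median and IQR are barely moved by the outlier, so the four typical values keep a sensible scale.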
<s>
For	O
classification	B-General_Concept
problems	O
,	O
one	O
output	O
is	O
produced	O
(	O
with	O
a	O
separate	O
set	O
of	O
weights	O
and	O
summation	O
unit	O
)	O
for	O
each	O
target	O
category	O
.	O
</s>
<s>
One	O
approach	O
first	O
uses	O
K-means	B-Algorithm
clustering	I-Algorithm
to	O
find	O
cluster	O
centers	O
which	O
are	O
then	O
used	O
as	O
the	O
centers	O
for	O
the	O
RBF	O
functions	O
.	O
</s>
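The first stage of that approach -- running K-means and taking the cluster centres as RBF centres -- can be sketched with a plain Lloyd iteration. The two-blob data and all parameter choices are illustrative assumptions.

```python
import numpy as np

def kmeans_centers(X, k, iters=20, seed=0):
    """Plain K-means; the resulting cluster centres can serve as RBF centres."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centre ...
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # ... then move each centre to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

# Two well-separated blobs (illustrative data).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (30, 2)),
               rng.normal(5.0, 0.1, (30, 2))])
centers = kmeans_centers(X, k=2)
```

As the sentence that follows notes, `k` must be chosen up front, and K-means offers no guarantee that this number of centres is optimal.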
<s>
However	O
,	O
K-means	B-Algorithm
clustering	I-Algorithm
is	O
computationally	O
intensive	O
and	O
it	O
often	O
does	O
not	O
generate	O
the	O
optimal	O
number	O
of	O
centers	O
.	O
</s>
<s>
It	O
determines	O
when	O
to	O
stop	O
adding	O
neurons	O
to	O
the	O
network	O
by	O
monitoring	O
the	O
estimated	O
leave-one-out	O
(	O
LOO	O
)	O
error	O
and	O
terminating	O
when	O
the	O
LOO	O
error	O
begins	O
to	O
increase	O
because	O
of	O
overfitting	B-Error_Name
.	O
</s>
<s>
A	O
GRNN	O
is	O
an	O
associative	O
memory	O
neural	B-Architecture
network	I-Architecture
that	O
is	O
similar	O
to	O
the	O
probabilistic	B-Algorithm
neural	I-Algorithm
network	I-Algorithm
but	O
it	O
is	O
used	O
for	O
regression	O
and	O
approximation	O
rather	O
than	O
classification	B-General_Concept
.	O
</s>
<s>
A	O
deep	B-Algorithm
belief	I-Algorithm
network	I-Algorithm
(	O
DBN	O
)	O
is	O
a	O
probabilistic	O
,	O
generative	O
model	O
made	O
up	O
of	O
multiple	O
hidden	O
layers	O
.	O
</s>
<s>
It	O
can	O
be	O
considered	O
a	O
composition	B-Application
of	O
simple	O
learning	O
modules	O
.	O
</s>
<s>
A	O
DBN	O
can	O
be	O
used	O
to	O
generatively	O
pre-train	O
a	O
deep	O
neural	B-Architecture
network	I-Architecture
(	O
DNN	O
)	O
by	O
using	O
the	O
learned	O
DBN	O
weights	O
as	O
the	O
initial	O
DNN	O
weights	O
.	O
</s>
<s>
Recurrent	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
(	O
RNN	O
)	O
propagate	O
data	O
forward	O
,	O
but	O
also	O
backwards	O
,	O
from	O
later	O
processing	O
stages	O
to	O
earlier	O
stages	O
.	O
</s>
<s>
Its	O
network	O
creates	O
a	O
directed	O
connection	O
between	O
every	O
pair	O
of	O
units	B-Algorithm
.	O
</s>
<s>
For	O
supervised	B-General_Concept
learning	I-General_Concept
in	O
discrete	O
time	O
settings	O
,	O
training	O
sequences	O
of	O
real-valued	O
input	O
vectors	O
become	O
sequences	O
of	O
activations	O
of	O
the	O
input	O
nodes	O
,	O
one	O
input	O
vector	O
at	O
a	O
time	O
.	O
</s>
<s>
At	O
each	O
time	O
step	O
,	O
each	O
non-input	O
unit	O
computes	O
its	O
current	O
activation	O
as	O
a	O
nonlinear	O
function	O
of	O
the	O
weighted	O
sum	O
of	O
the	O
activations	O
of	O
all	O
units	B-Algorithm
from	O
which	O
it	O
receives	O
connections	O
.	O
</s>
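The update just described -- each non-input unit applying a nonlinear function to the weighted sum of the activations it receives -- is the core RNN step. A minimal sketch, with tanh as the (assumed) nonlinearity and random illustrative weights:

```python
import numpy as np

def rnn_step(x_t, h_prev, W_in, W_rec, b):
    """One discrete time step: each non-input unit's activation is a
    nonlinear function (tanh here) of the weighted sum of activations
    arriving from the input nodes and the recurrent connections."""
    return np.tanh(x_t @ W_in + h_prev @ W_rec + b)

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 4
W_in = rng.normal(scale=0.5, size=(n_in, n_hidden))
W_rec = rng.normal(scale=0.5, size=(n_hidden, n_hidden))
b = np.zeros(n_hidden)

# Feed a training sequence one input vector at a time, as described above.
h = np.zeros(n_hidden)
for x_t in rng.normal(size=(5, n_in)):
    h = rnn_step(x_t, h, W_in, W_rec, b)
```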
<s>
The	O
system	O
can	O
explicitly	O
activate	O
(	O
independent	O
of	O
incoming	O
signals	O
)	O
some	O
output	O
units	B-Algorithm
at	O
certain	O
time	O
steps	O
.	O
</s>
<s>
To	O
minimize	O
total	O
error	O
,	O
gradient	B-Algorithm
descent	I-Algorithm
can	O
be	O
used	O
to	O
change	O
each	O
weight	O
in	O
proportion	O
to	O
its	O
derivative	O
with	O
respect	O
to	O
the	O
error	O
,	O
provided	O
the	O
non-linear	O
activation	B-Algorithm
functions	I-Algorithm
are	O
differentiable	O
.	O
</s>
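The rule above -- change each weight in proportion to its derivative with respect to the error -- is gradient descent in its plainest form. A sketch on a single linear unit with squared error; the toy target `y = 2*x0 - x1` is an illustrative assumption.

```python
import numpy as np

def train_linear_unit(X, y, lr=0.1, steps=200):
    """Minimize E = mean((Xw - y)^2) by moving each weight against
    its partial derivative dE/dw."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        err = X @ w - y
        grad = 2 * X.T @ err / len(y)  # dE/dw, element per weight
        w -= lr * grad                 # step proportional to the derivative
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, -1.0])  # noiseless recoverable target
w = train_linear_unit(X, y)
```

The squared error and the linear unit are differentiable, which is exactly the condition the sentence above places on the activation functions.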
<s>
The	O
standard	O
method	O
is	O
called	O
"	O
backpropagation	B-Algorithm
through	I-Algorithm
time	I-Algorithm
"	O
or	O
BPTT	B-Algorithm
,	O
a	O
generalization	O
of	O
back-propagation	B-Algorithm
for	O
feedforward	O
networks	O
.	O
</s>
<s>
Unlike	O
BPTT	B-Algorithm
this	O
algorithm	O
is	O
local	O
in	O
time	O
but	O
not	O
local	O
in	O
space	O
.	O
</s>
<s>
An	O
online	O
hybrid	O
between	O
BPTT	B-Algorithm
and	O
RTRL	O
with	O
intermediate	O
complexity	O
exists	O
,	O
with	O
variants	O
for	O
continuous	O
time	O
.	O
</s>
<s>
A	O
major	O
problem	O
with	O
gradient	B-Algorithm
descent	I-Algorithm
for	O
standard	O
RNN	O
architectures	O
is	O
that	O
error	O
gradients	O
vanish	O
exponentially	O
quickly	O
with	O
the	O
size	O
of	O
the	O
time	O
lag	O
between	O
important	O
events	O
.	O
</s>
<s>
The	O
Long	B-Algorithm
short-term	I-Algorithm
memory	I-Algorithm
architecture	O
overcomes	O
these	O
problems	O
.	O
</s>
<s>
Instead	O
a	O
fitness	O
function	O
or	O
reward	O
function	O
or	O
utility	O
function	O
is	O
occasionally	O
used	O
to	O
evaluate	O
performance	O
,	O
which	O
influences	O
its	O
input	O
stream	O
through	O
output	O
units	B-Algorithm
connected	O
to	O
actuators	O
that	O
affect	O
the	O
environment	O
.	O
</s>
<s>
The	O
Hopfield	B-Algorithm
network	I-Algorithm
(	O
like	O
similar	O
attractor-based	O
networks	O
)	O
is	O
of	O
historic	O
interest	O
although	O
it	O
is	O
not	O
a	O
general	O
RNN	O
,	O
as	O
it	O
is	O
not	O
designed	O
to	O
process	O
sequences	O
of	O
patterns	O
.	O
</s>
<s>
If	O
the	O
connections	O
are	O
trained	O
using	O
Hebbian	O
learning	O
the	O
Hopfield	B-Algorithm
network	I-Algorithm
can	O
perform	O
as	O
robust	O
content-addressable	B-Data_Structure
memory	I-Data_Structure
,	O
resistant	O
to	O
connection	O
alteration	O
.	O
</s>
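The behaviour described above -- Hebbian-trained connections turning a Hopfield network into robust content-addressable memory -- can be demonstrated on two stored patterns. The 8-bit patterns and synchronous update schedule are illustrative assumptions.

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian rule: W is the (scaled) sum of outer products of the
    stored +/-1 patterns, with self-connections zeroed."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=10):
    """Synchronous sign updates until the state settles on an attractor."""
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

patterns = np.array([[1, 1, 1, 1, -1, -1, -1, -1],
                     [1, -1, 1, -1, 1, -1, 1, -1]])
W = hebbian_weights(patterns)

# Corrupt one bit of the first stored pattern; recall restores it.
probe = patterns[0].copy()
probe[0] = -1
restored = recall(W, probe)
```

Addressing the memory with a partial or corrupted pattern and getting the full stored pattern back is what "content-addressable" means here.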
<s>
The	O
Boltzmann	B-Algorithm
machine	I-Algorithm
can	O
be	O
thought	O
of	O
as	O
a	O
noisy	O
Hopfield	B-Algorithm
network	I-Algorithm
.	O
</s>
<s>
It	O
is	O
one	O
of	O
the	O
first	O
neural	B-Architecture
networks	I-Architecture
to	O
demonstrate	O
learning	O
of	O
latent	O
variables	O
(	O
hidden	O
units	B-Algorithm
)	O
.	O
</s>
<s>
Boltzmann	B-Algorithm
machine	I-Algorithm
learning	O
was	O
at	O
first	O
slow	O
to	O
simulate	O
,	O
but	O
the	O
contrastive	B-Algorithm
divergence	I-Algorithm
algorithm	O
speeds	O
up	O
training	O
for	O
Boltzmann	B-Algorithm
machines	I-Algorithm
and	O
Products	B-General_Concept
of	I-General_Concept
Experts	I-General_Concept
.	O
</s>
<s>
The	O
self-organizing	O
map	O
(	O
SOM	O
)	O
uses	O
unsupervised	B-General_Concept
learning	I-General_Concept
.	O
</s>
<s>
Learning	B-Algorithm
vector	I-Algorithm
quantization	I-Algorithm
(	O
LVQ	B-Algorithm
)	O
can	O
be	O
interpreted	O
as	O
a	O
neural	B-Architecture
network	I-Architecture
architecture	O
.	O
</s>
<s>
Prototypical	O
representatives	O
of	O
the	O
classes	O
parameterize	O
,	O
together	O
with	O
an	O
appropriate	O
distance	O
measure	O
,	O
in	O
a	O
distance-based	O
classification	B-General_Concept
scheme	O
.	O
</s>
<s>
Simple	O
recurrent	O
networks	O
have	O
three	O
layers	O
,	O
with	O
the	O
addition	O
of	O
a	O
set	O
of	O
"	O
context	O
units	B-Algorithm
"	O
in	O
the	O
input	O
layer	O
.	O
</s>
<s>
These	O
units	B-Algorithm
connect	O
from	O
the	O
hidden	O
layer	O
or	O
the	O
output	O
layer	O
with	O
a	O
fixed	O
weight	O
of	O
one	O
.	O
</s>
<s>
At	O
each	O
time	O
step	O
,	O
the	O
input	O
is	O
propagated	O
in	O
a	O
standard	O
feedforward	O
fashion	O
,	O
and	O
then	O
a	O
backpropagation-like	O
learning	O
rule	O
is	O
applied	O
(	O
not	O
performing	O
gradient	B-Algorithm
descent	I-Algorithm
)	O
.	O
</s>
<s>
The	O
fixed	O
back	O
connections	O
leave	O
a	O
copy	O
of	O
the	O
previous	O
values	O
of	O
the	O
hidden	O
units	B-Algorithm
in	O
the	O
context	O
units	B-Algorithm
(	O
since	O
they	O
propagate	O
over	O
the	O
connections	O
before	O
the	O
learning	O
rule	O
is	O
applied	O
)	O
.	O
</s>
<s>
Reservoir	O
computing	O
is	O
a	O
computation	O
framework	O
that	O
may	O
be	O
viewed	O
as	O
an	O
extension	O
of	O
neural	B-Architecture
networks	I-Architecture
.	O
</s>
<s>
Liquid-state	B-Algorithm
machines	I-Algorithm
are	O
a	O
type	O
of	O
reservoir	O
computing	O
.	O
</s>
<s>
The	O
long	B-Algorithm
short-term	I-Algorithm
memory	I-Algorithm
(	O
LSTM	B-Algorithm
)	O
avoids	O
the	O
vanishing	B-Algorithm
gradient	I-Algorithm
problem	I-Algorithm
.	O
</s>
<s>
LSTM	B-Algorithm
RNN	O
outperformed	O
other	O
RNN	O
and	O
other	O
sequence	O
learning	O
methods	O
such	O
as	O
HMM	O
in	O
applications	O
such	O
as	O
language	O
learning	O
and	O
connected	O
handwriting	O
recognition	O
.	O
</s>
<s>
This	O
technique	O
proved	O
to	O
be	O
especially	O
useful	O
when	O
combined	O
with	O
LSTM	B-Algorithm
.	O
</s>
<s>
An	O
RNN	O
(	O
often	O
a	O
LSTM	B-Algorithm
)	O
where	O
a	O
series	O
is	O
decomposed	O
into	O
a	O
number	O
of	O
scales	O
where	O
every	O
scale	O
informs	O
the	O
primary	O
length	O
between	O
two	O
consecutive	O
points	O
.	O
</s>
<s>
This	O
realization	O
gave	O
birth	O
to	O
the	O
concept	O
of	O
modular	O
neural	B-Architecture
networks	I-Architecture
,	O
in	O
which	O
several	O
small	O
networks	O
cooperate	O
or	O
compete	O
to	O
solve	O
problems	O
.	O
</s>
<s>
A	O
committee	O
of	O
machines	O
(	O
CoM	O
)	O
is	O
a	O
collection	O
of	O
different	O
neural	B-Architecture
networks	I-Architecture
that	O
together	O
"	O
vote	O
"	O
on	O
a	O
given	O
example	O
.	O
</s>
<s>
Because	O
neural	B-Architecture
networks	I-Architecture
suffer	O
from	O
local	O
minima	O
,	O
starting	O
with	O
the	O
same	O
architecture	O
and	O
training	O
but	O
using	O
randomly	O
different	O
initial	O
weights	O
often	O
gives	O
vastly	O
different	O
results	O
.	O
</s>
<s>
The	O
CoM	O
is	O
similar	O
to	O
the	O
general	O
machine	O
learning	O
bagging	B-Algorithm
method	O
,	O
except	O
that	O
the	O
necessary	O
variety	O
of	O
machines	O
in	O
the	O
committee	O
is	O
obtained	O
by	O
training	O
from	O
different	O
starting	O
weights	O
rather	O
than	O
training	O
on	O
different	O
randomly	O
selected	O
subsets	O
of	O
the	O
training	O
data	O
.	O
</s>
<s>
The	O
associative	O
neural	B-Architecture
network	I-Architecture
(	O
ASNN	O
)	O
is	O
an	O
extension	O
of	O
committee	O
of	O
machines	O
that	O
combines	O
multiple	O
feedforward	O
neural	B-Architecture
networks	I-Architecture
and	O
the	O
k-nearest	B-General_Concept
neighbor	I-General_Concept
technique	O
.	O
</s>
<s>
This	O
corrects	O
the	O
bias	O
of	O
the	O
neural	B-Architecture
network	I-Architecture
ensemble	O
.	O
</s>
<s>
An	O
associative	O
neural	B-Architecture
network	I-Architecture
has	O
a	O
memory	O
that	O
can	O
coincide	O
with	O
the	O
training	O
set	O
.	O
</s>
<s>
If	O
new	O
data	O
become	O
available	O
,	O
the	O
network	O
instantly	O
improves	O
its	O
predictive	B-General_Concept
ability	O
and	O
provides	O
data	O
approximation	O
(	O
self-learns	O
)	O
without	O
retraining	O
.	O
</s>
<s>
Another	O
important	O
feature	B-Algorithm
of	O
ASNN	O
is	O
the	O
possibility	O
to	O
interpret	O
neural	B-Architecture
network	I-Architecture
results	O
by	O
analysis	O
of	O
correlations	O
between	O
data	O
cases	O
in	O
the	O
space	O
of	O
models	O
.	O
</s>
<s>
A	O
physical	O
neural	B-Architecture
network	I-Architecture
includes	O
electrically	O
adjustable	O
resistance	O
material	O
to	O
simulate	O
artificial	O
synapses	O
.	O
</s>
<s>
Examples	O
include	O
the	O
ADALINE	B-Algorithm
memristor-based	O
neural	B-Architecture
network	I-Architecture
.	O
</s>
<s>
Dynamic	O
neural	B-Architecture
networks	I-Architecture
address	O
nonlinear	O
multivariate	O
behaviour	O
and	O
include	O
(	O
learning	O
of	O
)	O
time-dependent	O
behaviour	O
,	O
such	O
as	O
transient	O
phenomena	O
and	O
delay	O
effects	O
.	O
</s>
<s>
Cascade	O
correlation	O
is	O
an	O
architecture	O
and	O
supervised	B-General_Concept
learning	I-General_Concept
algorithm	O
.	O
</s>
<s>
Instead	O
of	O
just	O
adjusting	O
the	O
weights	O
in	O
a	O
network	O
of	O
fixed	O
topology	O
,	O
Cascade-Correlation	O
begins	O
with	O
a	O
minimal	O
network	O
,	O
then	O
automatically	O
trains	O
and	O
adds	O
new	O
hidden	O
units	B-Algorithm
one	O
by	O
one	O
,	O
creating	O
a	O
multi-layer	O
structure	O
.	O
</s>
<s>
This	O
unit	O
then	O
becomes	O
a	O
permanent	O
feature-detector	O
in	O
the	O
network	O
,	O
available	O
for	O
producing	O
outputs	O
or	O
for	O
creating	O
other	O
,	O
more	O
complex	O
feature	B-Algorithm
detectors	O
.	O
</s>
<s>
The	O
Cascade-Correlation	O
architecture	O
has	O
several	O
advantages	O
:	O
It	O
learns	O
quickly	O
,	O
determines	O
its	O
own	O
size	O
and	O
topology	O
,	O
retains	O
the	O
structures	O
it	O
has	O
built	O
even	O
if	O
the	O
training	O
set	O
changes	O
and	O
requires	O
no	O
backpropagation	B-Algorithm
.	O
</s>
<s>
A	O
neuro-fuzzy	O
network	O
is	O
a	O
fuzzy	O
inference	O
system	O
in	O
the	O
body	O
of	O
an	O
artificial	B-Architecture
neural	I-Architecture
network	I-Architecture
.	O
</s>
<s>
Compositional	O
pattern-producing	O
networks	O
(	O
CPPNs	O
)	O
are	O
a	O
variation	O
of	O
artificial	B-Architecture
neural	I-Architecture
networks	I-Architecture
which	O
differ	O
in	O
their	O
set	O
of	O
activation	B-Algorithm
functions	I-Algorithm
and	O
how	O
they	O
are	O
applied	O
.	O
</s>
<s>
While	O
typical	O
artificial	B-Architecture
neural	I-Architecture
networks	I-Architecture
often	O
contain	O
only	O
sigmoid	B-Algorithm
functions	I-Algorithm
(	O
and	O
sometimes	O
Gaussian	O
functions	O
)	O
,	O
CPPNs	O
can	O
include	O
both	O
types	O
of	O
functions	O
and	O
many	O
others	O
.	O
</s>
<s>
Furthermore	O
,	O
unlike	O
typical	O
artificial	B-Architecture
neural	I-Architecture
networks	I-Architecture
,	O
CPPNs	O
are	O
applied	O
across	O
the	O
entire	O
space	O
of	O
possible	O
inputs	O
so	O
that	O
they	O
can	O
represent	O
a	O
complete	O
image	O
.	O
</s>
<s>
These	O
models	O
have	O
been	O
applied	O
in	O
the	O
context	O
of	O
question	B-Algorithm
answering	I-Algorithm
(	O
QA	O
)	O
where	O
the	O
long-term	O
memory	O
effectively	O
acts	O
as	O
a	O
(	O
dynamic	O
)	O
knowledge	O
base	O
and	O
the	O
output	O
is	O
a	O
textual	O
response	O
.	O
</s>
<s>
In	O
sparse	B-Architecture
distributed	I-Architecture
memory	I-Architecture
or	O
hierarchical	B-Algorithm
temporal	I-Algorithm
memory	I-Algorithm
,	O
the	O
patterns	O
encoded	O
by	O
neural	B-Architecture
networks	I-Architecture
are	O
used	O
as	O
addresses	O
for	O
content-addressable	B-Data_Structure
memory	I-Data_Structure
,	O
with	O
"	O
neurons	O
"	O
essentially	O
serving	O
as	O
address	O
encoders	O
and	O
decoders	O
.	O
</s>
<s>
The	O
network	O
offers	O
real-time	O
pattern	O
recognition	O
and	O
high	O
scalability	O
;	O
this	O
requires	O
parallel	O
processing	O
and	O
is	O
thus	O
best	O
suited	O
for	O
platforms	O
such	O
as	O
wireless	B-Architecture
sensor	I-Architecture
networks	I-Architecture
,	O
grid	B-Architecture
computing	I-Architecture
,	O
and	O
GPGPUs	B-Architecture
.	O
</s>
<s>
Hierarchical	B-Algorithm
temporal	I-Algorithm
memory	I-Algorithm
(	O
HTM	O
)	O
models	O
some	O
of	O
the	O
structural	O
and	O
algorithmic	O
properties	O
of	O
the	O
neocortex	O
.	O
</s>
<s>
HTM	O
is	O
a	O
biomimetic	B-Application
model	O
based	O
on	O
memory-prediction	O
theory	O
.	O
</s>
<s>
HTM	O
combines	O
and	O
extends	O
approaches	O
used	O
in	O
Bayesian	O
networks	O
,	O
spatial	O
and	O
temporal	O
clustering	O
algorithms	O
,	O
while	O
using	O
a	O
tree-shaped	O
hierarchy	O
of	O
nodes	O
that	O
is	O
common	O
in	O
neural	B-Architecture
networks	I-Architecture
.	O
</s>
<s>
Apart	O
from	O
long	B-Algorithm
short-term	I-Algorithm
memory	I-Algorithm
(	O
LSTM	B-Algorithm
)	O
,	O
other	O
approaches	O
also	O
added	O
differentiable	O
memory	O
to	O
recurrent	O
functions	O
.	O
</s>
<s>
Neural	O
Turing	B-Architecture
machines	I-Architecture
couple	O
LSTM	B-Algorithm
networks	O
to	O
external	O
memory	O
resources	O
,	O
with	O
which	O
they	O
can	O
interact	O
by	O
attentional	O
processes	O
.	O
</s>
<s>
The	O
combined	O
system	O
is	O
analogous	O
to	O
a	O
Turing	B-Architecture
machine	I-Architecture
but	O
is	O
differentiable	O
end-to-end	O
,	O
allowing	O
it	O
to	O
be	O
efficiently	O
trained	O
by	O
gradient	B-Algorithm
descent	I-Algorithm
.	O
</s>
<s>
Preliminary	O
results	O
demonstrate	O
that	O
neural	O
Turing	B-Architecture
machines	I-Architecture
can	O
infer	O
simple	O
algorithms	O
such	O
as	O
copying	O
,	O
sorting	O
and	O
associative	O
recall	O
from	O
input	O
and	O
output	O
examples	O
.	O
</s>
<s>
Differentiable	B-Algorithm
neural	I-Algorithm
computers	I-Algorithm
(	O
DNC	O
)	O
are	O
an	O
NTM	O
extension	O
.	O
</s>
<s>
They	O
outperformed	O
Neural	O
Turing	I-Architecture
machines	I-Architecture
,	O
long	B-Algorithm
short-term	I-Algorithm
memory	I-Algorithm
systems	O
and	O
memory	O
networks	O
on	O
sequence-processing	O
tasks	O
.	O
</s>
<s>
Approaches	O
that	O
represent	O
previous	O
experiences	O
directly	O
and	O
use	B-General_Concept
a	I-General_Concept
similar	I-General_Concept
experience	I-General_Concept
to	I-General_Concept
form	I-General_Concept
a	I-General_Concept
local	I-General_Concept
model	I-General_Concept
are	O
often	O
called	O
nearest	B-General_Concept
neighbour	I-General_Concept
or	O
k-nearest	B-General_Concept
neighbors	I-General_Concept
methods	O
.	O
</s>
<s>
Unlike	O
sparse	B-Architecture
distributed	I-Architecture
memory	I-Architecture
that	O
operates	O
on	O
1000-bit	O
addresses	O
,	O
semantic	O
hashing	O
works	O
on	O
32	O
or	O
64-bit	O
addresses	O
found	O
in	O
a	O
conventional	O
computer	B-General_Concept
architecture	I-General_Concept
.	O
</s>
<s>
Deep	O
neural	B-Architecture
networks	I-Architecture
can	O
be	O
potentially	O
improved	O
by	O
deepening	O
and	O
parameter	O
reduction	O
,	O
while	O
maintaining	O
trainability	O
.	O
</s>
<s>
While	O
training	O
extremely	O
deep	O
(	O
e.g.	O
,	O
1	O
million	O
layers	O
)	O
neural	B-Architecture
networks	I-Architecture
might	O
not	O
be	O
practical	O
,	O
CPU-like	O
architectures	O
such	O
as	O
pointer	O
networks	O
and	O
neural	O
random-access	O
machines	O
overcome	O
this	O
limitation	O
by	O
using	O
external	O
random-access	B-Architecture
memory	I-Architecture
and	O
other	O
components	O
that	O
typically	O
belong	O
to	O
a	O
computer	B-General_Concept
architecture	I-General_Concept
such	O
as	O
registers	B-General_Concept
,	O
ALU	B-General_Concept
and	O
pointers	O
.	O
</s>
<s>
Such	O
systems	O
operate	O
on	O
probability	O
distribution	O
vectors	O
stored	O
in	O
memory	O
cells	O
and	O
registers	B-General_Concept
.	O
</s>
<s>
Encoder	O
–	O
decoder	O
frameworks	O
are	O
based	O
on	O
neural	B-Architecture
networks	I-Architecture
that	O
map	O
highly	O
structured	B-General_Concept
input	O
to	O
highly	O
structured	B-General_Concept
output	O
.	O
</s>
<s>
The	O
approach	O
arose	O
in	O
the	O
context	O
of	O
machine	B-Application
translation	I-Application
,	O
where	O
the	O
input	O
and	O
output	O
are	O
written	O
sentences	O
in	O
two	O
natural	O
languages	O
.	O
</s>
<s>
In	O
that	O
work	O
,	O
an	O
LSTM	B-Algorithm
RNN	O
or	O
CNN	B-Architecture
was	O
used	O
as	O
an	O
encoder	O
to	O
summarize	O
a	O
source	O
sentence	O
,	O
and	O
the	O
summary	O
was	O
decoded	O
using	O
a	O
conditional	O
RNN	O
language	B-Language
model	I-Language
to	O
produce	O
the	O
translation	O
.	O
</s>
<s>
These	O
systems	O
share	O
building	O
blocks	O
:	O
gated	O
RNNs	O
and	O
CNNs	B-Architecture
and	O
trained	O
attention	O
mechanisms	O
.	O
</s>
<s>
Instantaneously	B-Algorithm
trained	I-Algorithm
neural	I-Algorithm
networks	I-Algorithm
(	O
ITNN	O
)	O
were	O
inspired	O
by	O
the	O
phenomenon	O
of	O
short-term	O
learning	O
that	O
seems	O
to	O
occur	O
instantaneously	O
.	O
</s>
<s>
Spiking	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
(	O
SNN	O
)	O
explicitly	O
consider	O
the	O
timing	O
of	O
inputs	O
.	O
</s>
<s>
SNN	O
are	O
also	O
a	O
form	O
of	O
pulse	B-General_Concept
computer	I-General_Concept
.	O
</s>
<s>
Spiking	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
with	O
axonal	O
conduction	O
delays	O
exhibit	O
polychronization	O
,	O
and	O
hence	O
could	O
have	O
a	O
very	O
large	O
memory	O
capacity	O
.	O
</s>
<s>
The	O
feedback	O
is	O
used	O
to	O
find	O
the	O
optimal	O
activation	O
of	O
units	B-Algorithm
.	O
</s>
<s>
It	O
is	O
most	O
similar	O
to	O
a	O
non-parametric	B-General_Concept
method	I-General_Concept
but	O
is	O
different	O
from	O
K-nearest	B-General_Concept
neighbor	I-General_Concept
in	O
that	O
it	O
mathematically	O
emulates	O
feedforward	O
networks	O
.	O
</s>
<s>
The	O
neocognitron	B-Algorithm
is	O
a	O
hierarchical	O
,	O
multilayered	O
network	O
that	O
was	O
modeled	O
after	O
the	O
visual	O
cortex	O
.	O
</s>
<s>
It	O
uses	O
multiple	O
types	O
of	O
units	B-Algorithm
,	O
(	O
originally	O
two	O
,	O
called	O
simple	O
and	O
complex	O
cells	O
)	O
,	O
as	O
a	O
cascading	O
model	O
for	O
use	O
in	O
pattern	O
recognition	O
tasks	O
.	O
</s>
<s>
Local	O
features	B-Algorithm
are	O
extracted	O
by	O
S-cells	O
whose	O
deformation	O
is	O
tolerated	O
by	O
C-cells	O
.	O
</s>
<s>
Local	O
features	B-Algorithm
in	O
the	O
input	O
are	O
integrated	O
gradually	O
and	O
classified	O
at	O
higher	O
layers	O
.	O
</s>
<s>
Among	O
the	O
various	O
kinds	O
of	O
neocognitron	B-Algorithm
are	O
systems	O
that	O
can	O
detect	O
multiple	O
patterns	O
in	O
the	O
same	O
input	O
by	O
using	O
back	B-Algorithm
propagation	I-Algorithm
to	O
achieve	O
selective	O
attention	O
.	O
</s>
<s>
It	O
has	O
been	O
used	O
for	O
pattern	O
recognition	O
tasks	O
and	O
inspired	O
convolutional	B-Architecture
neural	I-Architecture
networks	I-Architecture
.	O
</s>
<s>
Compound	O
hierarchical-deep	O
models	O
compose	O
deep	O
networks	O
with	O
non-parametric	B-General_Concept
Bayesian	O
models	O
.	O
</s>
<s>
Features	B-Algorithm
can	O
be	O
learned	O
using	O
deep	O
architectures	O
such	O
as	O
DBNs	B-Algorithm
,	O
deep	O
Boltzmann	B-Algorithm
machines	I-Algorithm
(	O
DBM	O
)	O
,	O
deep	O
auto	B-Algorithm
encoders	I-Algorithm
,	O
convolutional	O
variants	O
,	O
ssRBMs	O
,	O
deep	O
coding	O
networks	O
,	O
DBNs	B-Algorithm
with	O
sparse	O
feature	B-General_Concept
learning	I-General_Concept
,	O
RNNs	O
,	O
conditional	O
DBNs	B-Algorithm
,	O
de-noising	O
auto	B-Algorithm
encoders	I-Algorithm
.	O
</s>
<s>
This	O
provides	O
a	O
better	O
representation	O
,	O
allowing	O
faster	O
learning	O
and	O
more	O
accurate	O
classification	B-General_Concept
with	O
high-dimensional	O
data	O
.	O
</s>
<s>
However	O
,	O
these	O
architectures	O
are	O
poor	O
at	O
learning	O
novel	O
classes	O
with	O
few	O
examples	O
,	O
because	O
all	O
network	O
units	B-Algorithm
are	O
involved	O
in	O
representing	O
the	O
input	O
(	O
distributed	O
representation	O
)	O
and	O
must	O
be	O
adjusted	O
together	O
(	O
high	O
degree	O
of	O
freedom	O
)	O
.	O
</s>
<s>
Hierarchical	O
Bayesian	O
(	O
HB	O
)	O
models	O
allow	O
learning	O
from	O
few	O
examples	O
,	O
for	O
example	O
for	O
computer	B-Application
vision	I-Application
,	O
statistics	O
and	O
cognitive	O
science	O
.	O
</s>
<s>
The	O
compound	O
HDP-DBM	O
architecture	O
uses	O
a	O
hierarchical	B-General_Concept
Dirichlet	I-General_Concept
process	I-General_Concept
(	O
HDP	O
)	O
as	O
a	O
hierarchical	O
model	O
,	O
incorporating	O
DBM	O
architecture	O
.	O
</s>
<s>
where	O
is	O
the	O
set	O
of	O
hidden	O
units	B-Algorithm
,	O
and	O
are	O
the	O
model	O
parameters	O
,	O
representing	O
visible-hidden	O
and	O
hidden-hidden	O
symmetric	O
interaction	O
terms	O
.	O
</s>
<s>
A	O
deep	O
predictive	B-General_Concept
coding	O
network	O
(	O
DPCN	O
)	O
is	O
a	O
predictive	B-General_Concept
coding	O
scheme	O
that	O
uses	O
top-down	O
information	O
to	O
empirically	O
adjust	O
the	O
priors	O
needed	O
for	O
a	O
bottom-up	O
inference	O
procedure	O
by	O
means	O
of	O
a	O
deep	O
,	O
locally	O
connected	O
,	O
generative	O
model	O
.	O
</s>
<s>
This	O
works	O
by	O
extracting	O
sparse	O
features	B-Algorithm
from	O
time-varying	O
observations	O
using	O
a	O
linear	O
dynamical	O
model	O
.	O
</s>
<s>
Then	O
,	O
a	O
pooling	O
strategy	O
is	O
used	O
to	O
learn	O
invariant	O
feature	B-Algorithm
representations	O
.	O
</s>
<s>
These	O
units	B-Algorithm
compose	O
to	O
form	O
a	O
deep	O
architecture	O
and	O
are	O
trained	O
by	O
greedy	B-Algorithm
layer-wise	O
unsupervised	B-General_Concept
learning	I-General_Concept
.	O
</s>
<s>
DPCNs	O
can	O
be	O
extended	O
to	O
form	O
a	O
convolutional	B-Architecture
network	I-Architecture
.	O
</s>
<s>
They	O
use	O
kernel	B-Algorithm
principal	I-Algorithm
component	I-Algorithm
analysis	I-Algorithm
(	O
KPCA	O
)	O
as	O
a	O
method	O
for	O
the	O
unsupervised	B-General_Concept
greedy	B-Algorithm
layer-wise	O
pre-training	O
step	O
of	O
deep	O
learning	O
.	O
</s>
<s>
Layer	O
learns	O
the	O
representation	O
of	O
the	O
previous	O
layer	O
,	O
extracting	O
the	O
principal	B-Application
component	I-Application
(	O
PC	O
)	O
of	O
the	O
projection	O
layer	O
output	O
in	O
the	O
feature	B-Algorithm
domain	O
induced	O
by	O
the	O
kernel	O
.	O
</s>
<s>
To	O
reduce	O
the	O
dimensionality	B-Algorithm
of	O
the	O
updated	O
representation	O
in	O
each	O
layer	O
,	O
a	O
supervised	B-General_Concept
strategy	I-General_Concept
selects	O
the	O
most	O
informative	O
features	B-Algorithm
among	O
features	B-Algorithm
extracted	O
by	O
KPCA	O
.	O
</s>
<s>
rank	O
the	O
features	B-Algorithm
according	O
to	O
their	O
mutual	O
information	O
with	O
the	O
class	O
labels	O
;	O
</s>
<s>
for	O
different	O
values	O
of	O
K	O
and	O
,	O
compute	O
the	O
classification	B-General_Concept
error	O
rate	O
of	O
a	O
K-nearest	B-General_Concept
neighbor	I-General_Concept
(	O
K-NN	B-General_Concept
)	O
classifier	B-General_Concept
using	O
only	O
the	O
most	O
informative	O
features	B-Algorithm
on	O
a	O
validation	B-General_Concept
set	I-General_Concept
;	O
</s>
<s>
the	O
value	O
of	O
with	O
which	O
the	O
classifier	B-General_Concept
has	O
reached	O
the	O
lowest	O
error	O
rate	O
determines	O
the	O
number	O
of	O
features	B-Algorithm
to	O
retain	O
.	O
</s>
<s>
The	O
main	O
idea	O
is	O
to	O
use	O
a	O
kernel	O
machine	O
to	O
approximate	B-Algorithm
a	O
shallow	O
neural	B-Architecture
net	I-Architecture
with	O
an	O
infinite	O
number	O
of	O
hidden	O
units	B-Algorithm
,	O
then	O
use	O
a	O
deep	O
stacking	O
network	O
to	O
splice	O
the	O
output	O
of	O
the	O
kernel	O
machine	O
and	O
the	O
raw	O
input	O
in	O
building	O
the	O
next	O
,	O
higher	O
level	O
of	O
the	O
kernel	O
machine	O
.	O
</s>
