<s>
A	O
feedforward	B-Algorithm
neural	I-Algorithm
network	I-Algorithm
(	O
FNN	O
)	O
is	O
an	O
artificial	B-Architecture
neural	I-Architecture
network	I-Architecture
wherein	O
connections	O
between	O
the	O
nodes	O
do	O
not	O
form	O
a	O
cycle	O
.	O
</s>
<s>
As	O
such	O
,	O
it	O
is	O
different	O
from	O
its	O
descendant	O
:	O
recurrent	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
.	O
</s>
<s>
The	O
feedforward	B-Algorithm
neural	I-Algorithm
network	I-Algorithm
was	O
the	O
first	O
and	O
simplest	O
type	O
of	O
artificial	B-Architecture
neural	I-Architecture
network	I-Architecture
devised	O
.	O
</s>
<s>
The	O
simplest	O
kind	O
of	O
feedforward	B-Algorithm
neural	I-Algorithm
network	I-Algorithm
is	O
a	O
linear	O
network	O
,	O
which	O
consists	O
of	O
a	O
single	O
layer	O
of	O
output	O
nodes	O
;	O
the	O
inputs	O
are	O
fed	O
directly	O
to	O
the	O
outputs	O
via	O
a	O
series	O
of	O
weights	O
.	O
</s>
<s>
The	O
mean	B-Algorithm
squared	I-Algorithm
errors	I-Algorithm
between	O
these	O
calculated	O
outputs	O
and	O
given	O
target	O
values	O
are	O
minimized	O
by	O
creating	O
an	O
adjustment	O
to	O
the	O
weights	O
.	O
</s>
<s>
This	O
technique	O
has	O
been	O
known	O
for	O
over	O
two	O
centuries	O
as	O
the	O
method	B-Algorithm
of	I-Algorithm
least	I-Algorithm
squares	I-Algorithm
or	O
linear	B-General_Concept
regression	I-General_Concept
.	O
</s>
<s>
It	O
was	O
used	O
as	O
a	O
means	O
of	O
finding	O
a	O
good	O
rough	O
linear	B-General_Concept
fit	I-General_Concept
to	O
a	O
set	O
of	O
points	O
by	O
Legendre	O
(	O
1805	O
)	O
and	O
Gauss	O
(	O
1795	O
)	O
for	O
the	O
prediction	O
of	O
planetary	O
movement	O
.	O
</s>
<s>
The	O
single-layer	O
perceptron	B-Algorithm
combines	O
a	O
linear	O
neural	B-Architecture
network	I-Architecture
with	O
a	O
threshold	O
function	O
.	O
</s>
<s>
Neurons	O
with	O
this	O
kind	O
of	O
activation	B-Algorithm
function	I-Algorithm
are	O
often	O
called	O
linear	O
threshold	O
units	O
.	O
</s>
<s>
In	O
the	O
literature	O
the	O
term	O
perceptron	B-Algorithm
often	O
refers	O
to	O
networks	O
consisting	O
of	O
just	O
one	O
of	O
these	O
units	O
.	O
</s>
<s>
A	O
perceptron	B-Algorithm
can	O
be	O
created	O
using	O
any	O
values	O
for	O
the	O
activated	O
and	O
deactivated	O
states	O
as	O
long	O
as	O
the	O
threshold	O
value	O
lies	O
between	O
the	O
two	O
.	O
</s>
<s>
Perceptrons	B-Algorithm
can	O
be	O
trained	O
by	O
a	O
simple	O
learning	O
algorithm	O
that	O
is	O
usually	O
called	O
the	O
delta	B-Algorithm
rule	I-Algorithm
.	O
</s>
<s>
It	O
calculates	O
the	O
errors	O
between	O
calculated	O
output	O
and	O
sample	O
output	O
data	O
,	O
and	O
uses	O
this	O
to	O
create	O
an	O
adjustment	O
to	O
the	O
weights	O
,	O
thus	O
implementing	O
a	O
form	O
of	O
gradient	B-Algorithm
descent	I-Algorithm
.	O
</s>
<s>
Single-layer	O
perceptrons	B-Algorithm
are	O
only	O
capable	O
of	O
learning	O
linearly	O
separable	O
patterns	O
;	O
in	O
1969	O
in	O
a	O
famous	O
monograph	O
titled	O
Perceptrons	B-Algorithm
,	O
Marvin	O
Minsky	O
and	O
Seymour	O
Papert	O
showed	O
that	O
it	O
was	O
impossible	O
for	O
a	O
single-layer	O
perceptron	B-Algorithm
network	I-Algorithm
to	O
learn	O
an	O
XOR	O
function	O
.	O
</s>
<s>
Nonetheless	O
,	O
it	O
was	O
known	O
that	O
multi-layer	O
perceptrons	B-Algorithm
(	O
MLPs	O
)	O
are	O
capable	O
of	O
producing	O
any	O
possible	O
boolean	O
function	O
.	O
</s>
<s>
For	O
example	O
,	O
already	O
in	O
1967	O
,	O
Shun'ichi	O
Amari	O
trained	O
an	O
MLP	O
by	O
stochastic	B-Algorithm
gradient	I-Algorithm
descent	I-Algorithm
.	O
</s>
<s>
Although	O
a	O
single	O
threshold	O
unit	O
is	O
quite	O
limited	O
in	O
its	O
computational	O
power	O
,	O
it	O
has	O
been	O
shown	O
that	O
networks	O
of	O
parallel	O
threshold	O
units	O
can	O
approximate	O
any	O
continuous	O
function	O
from	O
a	O
compact	O
interval	O
of	O
the	O
real	O
numbers	O
into	O
the	O
interval	O
[	O
-1	O
,	O
1	O
]	O
.	O
</s>
<s>
This	O
result	O
can	O
be	O
found	O
in	O
Peter	O
Auer	O
,	O
Harald	O
Burgsteiner	O
and	O
Wolfgang	O
Maass	O
"	O
A	O
learning	O
rule	O
for	O
very	O
simple	O
universal	B-Algorithm
approximators	I-Algorithm
consisting	O
of	O
a	O
single	O
layer	O
of	O
perceptrons	B-Algorithm
"	O
.	O
</s>
<s>
A	O
single-layer	O
neural	B-Architecture
network	I-Architecture
can	O
compute	O
a	O
continuous	O
output	O
instead	O
of	O
a	O
step	O
function	O
.	O
</s>
<s>
The	O
logistic	O
function	O
is	O
one	O
of	O
the	O
family	O
of	O
functions	O
called	O
sigmoid	B-Algorithm
functions	I-Algorithm
because	O
their	O
S-shaped	O
graphs	O
resemble	O
the	O
final-letter	O
lower	O
case	O
of	O
the	O
Greek	O
letter	O
Sigma	O
.	O
</s>
<s>
It	O
has	O
a	O
continuous	O
derivative	O
,	O
which	O
allows	O
it	O
to	O
be	O
used	O
in	O
backpropagation	B-Algorithm
.	O
</s>
<s>
If	O
a	O
single-layer	O
neural	B-Architecture
network	I-Architecture
's	O
activation	B-Algorithm
function	I-Algorithm
is	O
modulo	O
1	O
,	O
then	O
this	O
network	O
can	O
solve	O
the	O
XOR	O
problem	O
with	O
a	O
single	O
neuron	O
.	O
</s>
<s>
In	O
many	O
applications	O
the	O
units	O
of	O
these	O
networks	O
apply	O
a	O
sigmoid	B-Algorithm
function	I-Algorithm
as	O
an	O
activation	B-Algorithm
function	I-Algorithm
.	O
</s>
<s>
However	O
,	O
sigmoidal	O
activation	B-Algorithm
functions	I-Algorithm
have	O
very	O
small	O
derivative	O
values	O
outside	O
a	O
small	O
range	O
and	O
do	O
not	O
work	O
well	O
in	O
deep	O
neural	B-Architecture
networks	I-Architecture
due	O
to	O
the	O
vanishing	B-Algorithm
gradient	I-Algorithm
problem	I-Algorithm
.	O
</s>
<s>
The	O
universal	B-Algorithm
approximation	I-Algorithm
theorem	I-Algorithm
for	O
neural	B-Architecture
networks	I-Architecture
states	O
that	O
every	O
continuous	O
function	O
that	O
maps	O
intervals	O
of	O
real	O
numbers	O
to	O
some	O
output	O
interval	O
of	O
real	O
numbers	O
can	O
be	O
approximated	O
arbitrarily	O
closely	O
by	O
a	O
multi-layer	O
perceptron	B-Algorithm
with	O
just	O
one	O
hidden	O
layer	O
.	O
</s>
<s>
This	O
result	O
holds	O
for	O
a	O
wide	O
range	O
of	O
activation	B-Algorithm
functions	I-Algorithm
,	O
e.g.	O
</s>
<s>
for	O
the	O
sigmoidal	B-Algorithm
functions	I-Algorithm
.	O
</s>
<s>
The	O
first	O
deep	B-Algorithm
learning	I-Algorithm
MLP	O
was	O
published	O
by	O
Alexey	O
Grigorevich	O
Ivakhnenko	O
and	O
Valentin	O
Lapa	O
in	O
1965	O
.	O
</s>
<s>
The	O
first	O
deep	B-Algorithm
learning	I-Algorithm
MLP	O
trained	O
by	O
stochastic	B-Algorithm
gradient	I-Algorithm
descent	I-Algorithm
was	O
published	O
in	O
1967	O
by	O
Shun'ichi	O
Amari	O
.	O
</s>
<s>
Today	O
,	O
the	O
most	O
popular	O
method	O
for	O
training	O
MLPs	O
is	O
back-propagation	B-Algorithm
.	O
</s>
<s>
The	O
terminology	O
"	O
back-propagating	O
errors	O
"	O
was	O
introduced	O
in	O
1962	O
by	O
Frank	O
Rosenblatt	O
,	O
but	O
he	O
did	O
not	O
know	O
how	O
to	O
implement	O
this	O
,	O
although	O
Henry	O
J	O
.	O
Kelley	O
had	O
a	O
continuous	O
precursor	O
of	O
backpropagation	B-Algorithm
already	O
in	O
1960	O
in	O
the	O
context	O
of	O
control	O
theory	O
.	O
</s>
<s>
Modern	O
backpropagation	B-Algorithm
is	O
actually	O
Seppo	O
Linnainmaa	O
's	O
general	O
reverse	O
mode	O
of	O
automatic	B-Algorithm
differentiation	I-Algorithm
(	O
1970	O
)	O
for	O
discrete	O
connected	O
networks	O
of	O
nested	O
differentiable	O
functions	O
.	O
</s>
<s>
In	O
1982	O
,	O
Paul	O
Werbos	O
applied	O
backpropagation	B-Algorithm
to	O
MLPs	O
in	O
the	O
way	O
that	O
has	O
become	O
standard	O
.	O
</s>
<s>
During	O
backpropagation	B-Algorithm
,	O
the	O
output	O
values	O
are	O
compared	O
with	O
the	O
correct	O
answer	O
to	O
compute	O
the	O
value	O
of	O
some	O
predefined	O
error-function	O
.	O
</s>
<s>
To	O
adjust	O
weights	O
properly	O
,	O
one	O
applies	O
a	O
general	O
method	O
for	O
non-linear	O
optimization	O
that	O
is	O
called	O
gradient	B-Algorithm
descent	I-Algorithm
,	O
due	O
to	O
Augustin-Louis	O
Cauchy	O
,	O
who	O
first	O
suggested	O
it	O
in	O
1847	O
.	O
</s>
<s>
For	O
this	O
reason	O
,	O
back-propagation	B-Algorithm
can	O
only	O
be	O
applied	O
on	O
networks	O
with	O
differentiable	O
activation	B-Algorithm
functions	I-Algorithm
.	O
</s>
<s>
The	O
danger	O
is	O
that	O
the	O
network	O
overfits	B-Error_Name
the	O
training	O
data	O
and	O
fails	O
to	O
capture	O
the	O
true	O
statistical	O
process	O
generating	O
the	O
data	O
.	O
</s>
<s>
In	O
the	O
context	O
of	O
neural	B-Architecture
networks	I-Architecture
a	O
simple	O
heuristic	B-Algorithm
,	O
called	O
early	B-Algorithm
stopping	I-Algorithm
,	O
often	O
ensures	O
that	O
the	O
network	O
will	O
generalize	O
well	O
to	O
examples	O
not	O
in	O
the	O
training	O
set	O
.	O
</s>
<s>
Other	O
typical	O
problems	O
of	O
the	O
back-propagation	B-Algorithm
algorithm	I-Algorithm
are	O
the	O
speed	O
of	O
convergence	O
and	O
the	O
possibility	O
of	O
ending	O
up	O
in	O
a	O
local	O
minimum	O
of	O
the	O
error	O
function	O
.	O
</s>
<s>
Today	O
,	O
there	O
are	O
practical	O
methods	O
that	O
make	O
back-propagation	B-Algorithm
in	O
multi-layer	O
perceptrons	B-Algorithm
the	O
tool	O
of	O
choice	O
for	O
many	O
machine	O
learning	O
tasks	O
.	O
</s>
<s>
One	O
also	O
can	O
use	O
a	O
series	O
of	O
independent	O
neural	B-Architecture
networks	I-Architecture
moderated	O
by	O
some	O
intermediary	O
,	O
similar	O
to	O
behavior	O
that	O
happens	O
in	O
the	O
brain	O
.	O
</s>
<s>
Various	O
activation	B-Algorithm
functions	I-Algorithm
can	O
be	O
used	O
,	O
and	O
there	O
can	O
be	O
relations	O
between	O
weights	O
,	O
as	O
in	O
convolutional	B-Architecture
neural	I-Architecture
networks	I-Architecture
.	O
</s>
<s>
Examples	O
of	O
other	O
feedforward	O
networks	O
include	O
radial	B-Algorithm
basis	I-Algorithm
function	I-Algorithm
networks	I-Algorithm
,	O
which	O
use	O
a	O
different	O
activation	B-Algorithm
function	I-Algorithm
.	O
</s>
<s>
Sometimes	O
multi-layer	O
perceptron	B-Algorithm
is	O
used	O
loosely	O
to	O
refer	O
to	O
any	O
feedforward	B-Algorithm
neural	I-Algorithm
network	I-Algorithm
,	O
while	O
in	O
other	O
cases	O
it	O
is	O
restricted	O
to	O
specific	O
ones	O
(	O
e.g.	O
,	O
with	O
specific	O
activation	B-Algorithm
functions	I-Algorithm
,	O
or	O
with	O
fully	O
connected	O
layers	O
,	O
or	O
trained	O
by	O
the	O
perceptron	B-Algorithm
algorithm	I-Algorithm
)	O
.	O
</s>
