<s>
A	O
recurrent	B-Algorithm
neural	I-Algorithm
network	I-Algorithm
(	O
RNN	O
)	O
is	O
a	O
class	O
of	O
artificial	B-Architecture
neural	I-Architecture
networks	I-Architecture
where	O
connections	O
between	O
nodes	O
can	O
create	O
a	O
cycle	O
,	O
allowing	O
output	O
from	O
some	O
nodes	O
to	O
affect	O
subsequent	O
input	O
to	O
the	O
same	O
nodes	O
.	O
</s>
<s>
Derived	O
from	O
feedforward	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
,	O
RNNs	O
can	O
use	O
their	O
internal	O
state	O
(	O
memory	O
)	O
to	O
process	O
variable	O
length	O
sequences	O
of	O
inputs	O
.	O
</s>
<s>
This	O
makes	O
them	O
applicable	O
to	O
tasks	O
such	O
as	O
unsegmented	O
,	O
connected	O
handwriting	B-Application
recognition	I-Application
or	O
speech	B-Application
recognition	I-Application
.	O
</s>
<s>
Recurrent	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
are	O
theoretically	O
Turing	B-Algorithm
complete	I-Algorithm
and	O
can	O
run	O
arbitrary	O
programs	O
to	O
process	O
arbitrary	O
sequences	O
of	O
inputs	O
.	O
</s>
<s>
The	O
term	O
"	O
recurrent	B-Algorithm
neural	I-Algorithm
network	I-Algorithm
"	O
is	O
used	O
to	O
refer	O
to	O
the	O
class	O
of	O
networks	O
with	O
an	O
infinite	O
impulse	O
response	O
,	O
whereas	O
"	O
convolutional	B-Architecture
neural	I-Architecture
network	I-Architecture
"	O
refers	O
to	O
the	O
class	O
of	O
finite	O
impulse	O
response	O
.	O
</s>
<s>
A	O
finite	O
impulse	O
recurrent	O
network	O
is	O
a	O
directed	O
acyclic	O
graph	O
that	O
can	O
be	O
unrolled	O
and	O
replaced	O
with	O
a	O
strictly	O
feedforward	B-Algorithm
neural	I-Algorithm
network	I-Algorithm
,	O
while	O
an	O
infinite	O
impulse	O
recurrent	O
network	O
is	O
a	O
directed	O
cyclic	O
graph	O
that	O
can	O
not	O
be	O
unrolled	O
.	O
</s>
<s>
Both	O
finite	O
impulse	O
and	O
infinite	O
impulse	O
recurrent	O
networks	O
can	O
have	O
additional	O
stored	O
states	O
,	O
and	O
the	O
storage	O
can	O
be	O
under	O
direct	O
control	O
by	O
the	O
neural	B-Architecture
network	I-Architecture
.	O
</s>
<s>
Such	O
controlled	O
states	O
are	O
referred	O
to	O
as	O
gated	O
state	O
or	O
gated	O
memory	O
,	O
and	O
are	O
part	O
of	O
long	B-Algorithm
short-term	I-Algorithm
memory	I-Algorithm
networks	O
(	O
LSTMs	B-Algorithm
)	O
and	O
gated	B-Algorithm
recurrent	I-Algorithm
units	I-Algorithm
.	O
</s>
<s>
This	O
is	O
also	O
called	O
Feedback	O
Neural	B-Architecture
Network	I-Architecture
(	O
FNN	O
)	O
.	O
</s>
<s>
This	O
was	O
also	O
called	O
the	O
Hopfield	B-Algorithm
network	I-Algorithm
(	O
1982	O
)	O
.	O
</s>
<s>
In	O
1993	O
,	O
a	O
neural	O
history	O
compressor	O
system	O
solved	O
a	O
"	O
Very	O
Deep	B-Algorithm
Learning	I-Algorithm
"	O
task	O
that	O
required	O
more	O
than	O
1000	O
subsequent	O
layers	B-Algorithm
in	O
an	O
RNN	O
unfolded	O
in	O
time	O
.	O
</s>
<s>
Long	B-Algorithm
short-term	I-Algorithm
memory	I-Algorithm
(	O
LSTM	B-Algorithm
)	O
networks	O
were	O
invented	O
by	O
Hochreiter	O
and	O
Schmidhuber	O
in	O
1997	O
and	O
set	O
accuracy	O
records	O
in	O
multiple	O
application	O
domains	O
.	O
</s>
<s>
Around	O
2007	O
,	O
LSTM	B-Algorithm
started	O
to	O
revolutionize	O
speech	B-Application
recognition	I-Application
,	O
outperforming	O
traditional	O
models	O
in	O
certain	O
speech	O
applications	O
.	O
</s>
<s>
In	O
2009	O
,	O
a	O
Connectionist	B-Algorithm
Temporal	I-Algorithm
Classification	I-Algorithm
(	O
CTC	O
)	O
-trained	O
LSTM	B-Algorithm
network	O
was	O
the	O
first	O
RNN	O
to	O
win	O
pattern	O
recognition	O
contests	O
when	O
it	O
won	O
several	O
competitions	O
in	O
connected	O
handwriting	B-Application
recognition	I-Application
.	O
</s>
<s>
In	O
2014	O
,	O
the	O
Chinese	O
company	O
Baidu	B-Application
used	O
CTC-trained	O
RNNs	O
to	O
break	O
the	O
2000	O
Switchboard	O
Hub5'00	O
speech	B-Application
recognition	I-Application
dataset	O
benchmark	O
without	O
using	O
any	O
traditional	O
speech	O
processing	O
methods	O
.	O
</s>
<s>
LSTM	B-Algorithm
also	O
improved	O
large-vocabulary	O
speech	B-Application
recognition	I-Application
and	O
text-to-speech	B-Application
synthesis	O
and	O
was	O
used	O
in	O
Google	B-Application
Android	I-Application
.	O
</s>
<s>
In	O
2015	O
,	O
Google	O
's	O
speech	B-Application
recognition	I-Application
reportedly	O
experienced	O
a	O
dramatic	O
performance	O
jump	O
of	O
49%	O
through	O
CTC-trained	O
LSTM	B-Algorithm
.	O
</s>
<s>
LSTM	B-Algorithm
broke	O
records	O
for	O
improved	O
machine	B-Application
translation	I-Application
,	O
Language	B-Language
Modeling	I-Language
and	O
Multilingual	O
Language	O
Processing	O
.	O
</s>
<s>
LSTM	B-Algorithm
combined	O
with	O
convolutional	B-Architecture
neural	I-Architecture
networks	I-Architecture
(	O
CNNs	B-Architecture
)	O
improved	O
automatic	O
image	O
captioning	O
.	O
</s>
<s>
Fully	O
recurrent	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
(	O
FRNN	O
)	O
connect	O
the	O
outputs	O
of	O
all	O
neurons	O
to	O
the	O
inputs	O
of	O
all	O
neurons	O
.	O
</s>
<s>
This	O
is	O
the	O
most	O
general	O
neural	B-Architecture
network	I-Architecture
topology	O
because	O
all	O
other	O
topologies	O
can	O
be	O
represented	O
by	O
setting	O
some	O
connection	O
weights	O
to	O
zero	O
to	O
simulate	O
the	O
lack	O
of	O
connections	O
between	O
those	O
neurons	O
.	O
</s>
<s>
The	O
illustration	O
to	O
the	O
right	O
may	O
be	O
misleading	O
to	O
many	O
because	O
practical	O
neural	B-Architecture
network	I-Architecture
topologies	O
are	O
frequently	O
organized	O
in	O
"	O
layers	B-Algorithm
"	O
and	O
the	O
drawing	O
gives	O
that	O
appearance	O
.	O
</s>
<s>
However	O
,	O
what	O
appears	O
to	O
be	O
layers	B-Algorithm
are	O
,	O
in	O
fact	O
,	O
different	O
steps	O
in	O
time	O
of	O
the	O
same	O
fully	O
recurrent	B-Algorithm
neural	I-Algorithm
network	I-Algorithm
.	O
</s>
<s>
It	O
is	O
"	O
unfolded	O
"	O
in	O
time	O
to	O
produce	O
the	O
appearance	O
of	O
layers	B-Algorithm
.	O
</s>
<s>
At	O
each	O
time	O
step	O
,	O
the	O
input	O
is	O
fed	O
forward	O
and	O
a	O
learning	B-Algorithm
rule	I-Algorithm
is	O
applied	O
.	O
</s>
<s>
The	O
fixed	O
back-connections	O
save	O
a	O
copy	O
of	O
the	O
previous	O
values	O
of	O
the	O
hidden	O
units	O
in	O
the	O
context	O
units	O
(	O
since	O
they	O
propagate	O
over	O
the	O
connections	O
before	O
the	O
learning	B-Algorithm
rule	I-Algorithm
is	O
applied	O
)	O
.	O
</s>
<s>
Thus	O
the	O
network	O
can	O
maintain	O
a	O
sort	O
of	O
state	O
,	O
allowing	O
it	O
to	O
perform	O
such	O
tasks	O
as	O
sequence-prediction	O
that	O
are	O
beyond	O
the	O
power	O
of	O
a	O
standard	O
multilayer	B-Algorithm
perceptron	I-Algorithm
.	O
</s>
<s>
The	O
Hopfield	B-Algorithm
network	I-Algorithm
is	O
an	O
RNN	O
in	O
which	O
all	O
connections	O
across	O
layers	B-Algorithm
are	O
equally	O
sized	O
.	O
</s>
<s>
It	O
requires	O
stationary	O
inputs	O
and	O
is	O
thus	O
not	O
a	O
general	O
RNN	O
,	O
as	O
it	O
does	O
not	O
process	O
sequences	O
of	O
patterns	O
.	O
</s>
<s>
If	O
the	O
connections	O
are	O
trained	O
using	O
Hebbian	O
learning	O
then	O
the	O
Hopfield	B-Algorithm
network	I-Algorithm
can	O
perform	O
as	O
robust	B-Application
content-addressable	B-Data_Structure
memory	I-Data_Structure
,	O
resistant	O
to	O
connection	O
alteration	O
.	O
</s>
<s>
Introduced	O
by	O
Bart	O
Kosko	O
,	O
a	O
bidirectional	O
associative	O
memory	O
(	O
BAM	O
)	O
network	O
is	O
a	O
variant	O
of	O
a	O
Hopfield	B-Algorithm
network	I-Algorithm
that	O
stores	O
associative	O
data	O
as	O
a	O
vector	O
.	O
</s>
<s>
A	O
BAM	O
network	O
has	O
two	O
layers	B-Algorithm
,	O
either	O
of	O
which	O
can	O
be	O
driven	O
as	O
an	O
input	O
to	O
recall	O
an	O
association	O
and	O
produce	O
an	O
output	O
on	O
the	O
other	O
layer	O
.	O
</s>
<s>
A	O
variant	O
for	O
spiking	B-Algorithm
neurons	I-Algorithm
is	O
known	O
as	O
a	O
liquid	B-Algorithm
state	I-Algorithm
machine	I-Algorithm
.	O
</s>
<s>
The	O
Independently	O
recurrent	B-Algorithm
neural	I-Algorithm
network	I-Algorithm
(	O
IndRNN	O
)	O
addresses	O
the	O
gradient	O
vanishing	O
and	O
exploding	O
problems	O
in	O
the	O
traditional	O
fully	O
connected	O
RNN	O
.	O
</s>
<s>
The	O
gradient	O
backpropagation	B-Algorithm
can	O
be	O
regulated	O
to	O
avoid	O
gradient	O
vanishing	O
and	O
exploding	O
in	O
order	O
to	O
keep	O
long	O
or	O
short-term	O
memory	O
.	O
</s>
<s>
The	O
cross-neuron	O
information	O
is	O
explored	O
in	O
the	O
next	O
layers	B-Algorithm
.	O
</s>
<s>
A	O
recursive	B-Algorithm
neural	I-Algorithm
network	I-Algorithm
is	O
created	O
by	O
applying	O
the	O
same	O
set	O
of	O
weights	O
recursively	O
over	O
a	O
differentiable	B-Algorithm
graph-like	O
structure	O
by	O
traversing	O
the	O
structure	O
in	O
topological	B-Algorithm
order	I-Algorithm
.	O
</s>
<s>
Such	O
networks	O
are	O
typically	O
also	O
trained	O
by	O
the	O
reverse	O
mode	O
of	O
automatic	B-Algorithm
differentiation	I-Algorithm
.	O
</s>
<s>
A	O
special	O
case	O
of	O
recursive	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
is	O
the	O
RNN	O
whose	O
structure	O
corresponds	O
to	O
a	O
linear	O
chain	O
.	O
</s>
<s>
Recursive	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
have	O
been	O
applied	O
to	O
natural	B-Language
language	I-Language
processing	I-Language
.	O
</s>
<s>
The	O
Recursive	B-Algorithm
Neural	I-Algorithm
Tensor	I-Algorithm
Network	I-Algorithm
uses	O
a	O
tensor-based	O
composition	O
function	O
for	O
all	O
nodes	O
in	O
the	O
tree	O
.	O
</s>
<s>
A	O
generative	O
model	O
partially	O
overcame	O
the	O
vanishing	B-Algorithm
gradient	I-Algorithm
problem	I-Algorithm
of	O
automatic	B-Algorithm
differentiation	I-Algorithm
or	O
backpropagation	B-Algorithm
in	O
neural	B-Architecture
networks	I-Architecture
in	O
1992	O
.	O
</s>
<s>
In	O
1993	O
,	O
such	O
a	O
system	O
solved	O
a	O
"	O
Very	O
Deep	B-Algorithm
Learning	I-Algorithm
"	O
task	O
that	O
required	O
more	O
than	O
1000	O
subsequent	O
layers	B-Algorithm
in	O
an	O
RNN	O
unfolded	O
in	O
time	O
.	O
</s>
<s>
This	O
allows	O
a	O
direct	O
mapping	O
to	O
a	O
finite-state	B-Architecture
machine	I-Architecture
in	O
training	O
,	O
stability	O
,	O
and	O
representation	O
.	O
</s>
<s>
Long	B-Algorithm
short-term	I-Algorithm
memory	I-Algorithm
is	O
an	O
example	O
of	O
this	O
but	O
has	O
no	O
such	O
formal	O
mappings	O
or	O
proof	O
of	O
stability	O
.	O
</s>
<s>
Long	B-Algorithm
short-term	I-Algorithm
memory	I-Algorithm
(	O
LSTM	B-Algorithm
)	O
is	O
a	O
deep	B-Algorithm
learning	I-Algorithm
system	O
that	O
avoids	O
the	O
vanishing	B-Algorithm
gradient	I-Algorithm
problem	I-Algorithm
.	O
</s>
<s>
LSTM	B-Algorithm
is	O
normally	O
augmented	O
by	O
recurrent	O
gates	O
called	O
"	O
forget	O
gates	O
"	O
.	O
</s>
<s>
LSTM	B-Algorithm
prevents	O
backpropagated	O
errors	O
from	O
vanishing	O
or	O
exploding	O
.	O
</s>
<s>
Instead	O
,	O
errors	O
can	O
flow	O
backwards	O
through	O
unlimited	O
numbers	O
of	O
virtual	O
layers	B-Algorithm
unfolded	O
in	O
space	O
.	O
</s>
<s>
That	O
is	O
,	O
LSTM	B-Algorithm
can	O
learn	O
tasks	O
that	O
require	O
memories	O
of	O
events	O
that	O
happened	O
thousands	O
or	O
even	O
millions	O
of	O
discrete	O
time	O
steps	O
earlier	O
.	O
</s>
<s>
Problem-specific	O
LSTM-like	O
topologies	O
can	O
be	O
evolved	O
.	O
</s>
<s>
LSTM	B-Algorithm
works	O
even	O
given	O
long	O
delays	O
between	O
significant	O
events	O
and	O
can	O
handle	O
signals	O
that	O
mix	O
low	O
and	O
high	O
frequency	O
components	O
.	O
</s>
<s>
Many	O
applications	O
use	O
stacks	O
of	O
LSTM	B-Algorithm
RNNs	O
and	O
train	O
them	O
by	O
Connectionist	B-Algorithm
Temporal	I-Algorithm
Classification	I-Algorithm
(	O
CTC	O
)	O
to	O
find	O
an	O
RNN	O
weight	O
matrix	O
that	O
maximizes	O
the	O
probability	O
of	O
the	O
label	O
sequences	O
in	O
a	O
training	O
set	O
,	O
given	O
the	O
corresponding	O
input	O
sequences	O
.	O
</s>
<s>
LSTM	B-Algorithm
can	O
learn	O
to	O
recognize	O
context-sensitive	O
languages	O
unlike	O
previous	O
models	O
based	O
on	O
hidden	O
Markov	O
models	O
(	O
HMM	O
)	O
and	O
similar	O
concepts	O
.	O
</s>
<s>
Gated	B-Algorithm
recurrent	I-Algorithm
units	I-Algorithm
(	O
GRUs	O
)	O
are	O
a	O
gating	O
mechanism	O
in	O
recurrent	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
introduced	O
in	O
2014	O
.	O
</s>
<s>
Their	O
performance	O
on	O
polyphonic	O
music	O
modeling	O
and	O
speech	O
signal	O
modeling	O
was	O
found	O
to	O
be	O
similar	O
to	O
that	O
of	O
long	B-Algorithm
short-term	I-Algorithm
memory	I-Algorithm
.	O
</s>
<s>
They	O
have	O
fewer	O
parameters	O
than	O
LSTM	B-Algorithm
,	O
as	O
they	O
lack	O
an	O
output	O
gate	O
.	O
</s>
<s>
This	O
technique	O
has	O
been	O
proven	O
to	O
be	O
especially	O
useful	O
when	O
combined	O
with	O
LSTM	B-Algorithm
RNNs	O
.	O
</s>
<s>
A	O
continuous-time	O
recurrent	B-Algorithm
neural	I-Algorithm
network	I-Algorithm
(	O
CTRNN	O
)	O
uses	O
a	O
system	O
of	O
ordinary	O
differential	O
equations	O
to	O
model	O
the	O
effects	O
on	O
a	O
neuron	O
of	O
the	O
incoming	O
inputs	O
.	O
</s>
<s>
Note	O
that	O
,	O
by	O
the	O
Shannon	O
sampling	O
theorem	O
,	O
discrete	O
time	O
recurrent	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
can	O
be	O
viewed	O
as	O
continuous-time	O
recurrent	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
where	O
the	O
differential	O
equations	O
have	O
transformed	O
into	O
equivalent	O
difference	O
equations	O
.	O
</s>
<s>
This	O
transformation	O
can	O
be	O
thought	O
of	O
as	O
occurring	O
after	O
the	O
post-synaptic	O
node	O
activation	B-Algorithm
functions	I-Algorithm
have	O
been	O
low-pass	O
filtered	O
but	O
prior	O
to	O
sampling	O
.	O
</s>
<s>
Hierarchical	O
recurrent	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
(	O
HRNN	O
)	O
connect	O
their	O
neurons	O
in	O
various	O
ways	O
to	O
decompose	O
hierarchical	O
behavior	O
into	O
useful	O
subprograms	O
.	O
</s>
<s>
Generally	O
,	O
a	O
recurrent	O
multilayer	B-Algorithm
perceptron	I-Algorithm
network	O
(	O
RMLP	O
)	O
consists	O
of	O
cascaded	O
subnetworks	O
,	O
each	O
of	O
which	O
contains	O
multiple	O
layers	B-Algorithm
of	O
nodes	O
.	O
</s>
<s>
A	O
multiple	O
timescales	O
recurrent	B-Algorithm
neural	I-Algorithm
network	I-Algorithm
(	O
MTRNN	O
)	O
is	O
a	O
neural-based	O
computational	O
model	O
that	O
can	O
simulate	O
the	O
functional	O
hierarchy	O
of	O
the	O
brain	O
through	O
self-organization	O
that	O
depends	O
on	O
spatial	O
connection	O
between	O
neurons	O
and	O
on	O
distinct	O
types	O
of	O
neuron	O
activities	O
,	O
each	O
with	O
distinct	O
time	O
properties	O
.	O
</s>
<s>
Neural	O
Turing	B-Architecture
machines	I-Architecture
(	O
NTMs	O
)	O
are	O
a	O
method	O
of	O
extending	O
recurrent	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
by	O
coupling	O
them	O
to	O
external	O
memory	O
resources	O
which	O
they	O
can	O
interact	O
with	O
by	O
attentional	O
processes	O
.	O
</s>
<s>
The	O
combined	O
system	O
is	O
analogous	O
to	O
a	O
Turing	B-Architecture
machine	I-Architecture
or	O
Von	B-Architecture
Neumann	I-Architecture
architecture	I-Architecture
but	O
is	O
differentiable	B-Algorithm
end-to-end	O
,	O
allowing	O
it	O
to	O
be	O
efficiently	O
trained	O
with	O
gradient	B-Algorithm
descent	I-Algorithm
.	O
</s>
<s>
Differentiable	B-Algorithm
neural	I-Algorithm
computers	I-Algorithm
(	O
DNCs	O
)	O
are	O
an	O
extension	O
of	O
Neural	O
Turing	B-Architecture
machines	I-Architecture
,	O
allowing	O
for	O
the	O
usage	O
of	O
fuzzy	O
amounts	O
of	O
each	O
memory	O
address	O
and	O
a	O
record	O
of	O
chronology	O
.	O
</s>
<s>
Neural	B-Architecture
network	I-Architecture
pushdown	O
automata	O
(	O
NNPDA	O
)	O
are	O
similar	O
to	O
NTMs	O
,	O
but	O
tapes	O
are	O
replaced	O
by	O
analogue	O
stacks	O
that	O
are	O
differentiable	B-Algorithm
and	O
that	O
are	O
trained	O
.	O
</s>
<s>
DARPA	O
's	O
SyNAPSE	B-Application
project	I-Application
has	O
funded	O
IBM	O
Research	O
and	O
HP	O
Labs	O
,	O
in	O
collaboration	O
with	O
the	O
Boston	O
University	O
Department	O
of	O
Cognitive	O
and	O
Neural	O
Systems	O
(	O
CNS	O
)	O
,	O
to	O
develop	O
neuromorphic	O
architectures	O
which	O
may	O
be	O
based	O
on	O
memristive	O
systems	O
.	O
</s>
<s>
Memristive	O
networks	O
are	O
a	O
particular	O
type	O
of	O
physical	B-Algorithm
neural	I-Algorithm
network	I-Algorithm
that	O
have	O
very	O
similar	O
properties	O
to	O
(	O
Little	O
-	O
)	O
Hopfield	B-Algorithm
networks	I-Algorithm
,	O
as	O
they	O
have	O
continuous	O
dynamics	O
,	O
have	O
a	O
limited	O
memory	O
capacity	O
and	O
they	O
naturally	O
relax	O
via	O
the	O
minimization	O
of	O
a	O
function	O
which	O
is	O
asymptotic	O
to	O
the	O
Ising	O
model	O
.	O
</s>
<s>
Gradient	B-Algorithm
descent	I-Algorithm
is	O
a	O
first-order	O
iterative	B-Algorithm
optimization	O
algorithm	O
for	O
finding	O
the	O
minimum	O
of	O
a	O
function	O
.	O
</s>
<s>
In	O
neural	B-Architecture
networks	I-Architecture
,	O
it	O
can	O
be	O
used	O
to	O
minimize	O
the	O
error	O
term	O
by	O
changing	O
each	O
weight	O
in	O
proportion	O
to	O
the	O
derivative	O
of	O
the	O
error	O
with	O
respect	O
to	O
that	O
weight	O
,	O
provided	O
the	O
non-linear	O
activation	B-Algorithm
functions	I-Algorithm
are	O
differentiable	B-Algorithm
.	O
</s>
<s>
The	O
standard	O
method	O
is	O
called	O
"	O
backpropagation	B-Algorithm
through	I-Algorithm
time	I-Algorithm
"	O
or	O
BPTT	B-Algorithm
,	O
and	O
is	O
a	O
generalization	O
of	O
back-propagation	B-Algorithm
for	O
feed-forward	B-Algorithm
networks	I-Algorithm
.	O
</s>
<s>
Like	O
that	O
method	O
,	O
it	O
is	O
an	O
instance	O
of	O
automatic	B-Algorithm
differentiation	I-Algorithm
in	O
the	O
reverse	O
accumulation	O
mode	O
of	O
Pontryagin	O
's	O
minimum	O
principle	O
.	O
</s>
<s>
A	O
more	O
computationally	O
expensive	O
online	O
variant	O
is	O
called	O
"	O
Real-Time	O
Recurrent	O
Learning	O
"	O
or	O
RTRL	O
,	O
which	O
is	O
an	O
instance	O
of	O
automatic	B-Algorithm
differentiation	I-Algorithm
in	O
the	O
forward	O
accumulation	O
mode	O
with	O
stacked	O
tangent	O
vectors	O
.	O
</s>
<s>
Unlike	O
BPTT	B-Algorithm
,	O
this	O
algorithm	O
is	O
local	O
in	O
time	O
but	O
not	O
local	O
in	O
space	O
.	O
</s>
<s>
Local	O
in	O
time	O
means	O
that	O
the	O
updates	O
take	O
place	O
continually	O
(	O
on-line	O
)	O
and	O
depend	O
only	O
on	O
the	O
most	O
recent	O
time	O
step	O
rather	O
than	O
on	O
multiple	O
time	O
steps	O
within	O
a	O
given	O
time	O
horizon	O
as	O
in	O
BPTT	B-Algorithm
.	O
</s>
<s>
Biological	O
neural	B-Architecture
networks	I-Architecture
appear	O
to	O
be	O
local	O
with	O
respect	O
to	O
both	O
time	O
and	O
space	O
.	O
</s>
<s>
For	O
recursively	O
computing	O
the	O
partial	O
derivatives	O
,	O
RTRL	O
has	O
a	O
time-complexity	O
of	O
O(number of hidden units x number of weights)	O
per	O
time	O
step	O
for	O
computing	O
the	O
Jacobian	O
matrices	O
,	O
while	O
BPTT	B-Algorithm
only	O
takes	O
O(number of weights)	O
per	O
time	O
step	O
,	O
at	O
the	O
cost	O
of	O
storing	O
all	O
forward	O
activations	O
within	O
the	O
given	O
time	O
horizon	O
.	O
</s>
<s>
An	O
online	O
hybrid	O
between	O
BPTT	B-Algorithm
and	O
RTRL	O
with	O
intermediate	O
complexity	O
exists	O
,	O
along	O
with	O
variants	O
for	O
continuous	O
time	O
.	O
</s>
<s>
A	O
major	O
problem	O
with	O
gradient	B-Algorithm
descent	I-Algorithm
for	O
standard	O
RNN	O
architectures	O
is	O
that	O
error	B-Algorithm
gradients	I-Algorithm
vanish	I-Algorithm
exponentially	O
quickly	O
with	O
the	O
size	O
of	O
the	O
time	O
lag	O
between	O
important	O
events	O
.	O
</s>
<s>
LSTM	B-Algorithm
combined	O
with	O
a	O
BPTT/RTRL	O
hybrid	O
learning	O
method	O
attempts	O
to	O
overcome	O
these	O
problems	O
.	O
</s>
<s>
This	O
problem	O
is	O
also	O
solved	O
in	O
the	O
independently	O
recurrent	B-Algorithm
neural	I-Algorithm
network	I-Algorithm
(	O
IndRNN	O
)	O
by	O
reducing	O
the	O
context	O
of	O
a	O
neuron	O
to	O
its	O
own	O
past	O
state	O
and	O
the	O
cross-neuron	O
information	O
can	O
then	O
be	O
explored	O
in	O
the	O
following	O
layers	B-Algorithm
.	O
</s>
<s>
The	O
on-line	O
algorithm	O
called	O
causal	O
recursive	O
backpropagation	B-Algorithm
(	O
CRBP	O
)	O
implements	O
and	O
combines	O
BPTT	B-Algorithm
and	O
RTRL	O
paradigms	O
for	O
locally	O
recurrent	O
networks	O
.	O
</s>
<s>
It	O
uses	O
the	O
BPTT	B-Algorithm
batch	O
algorithm	O
,	O
based	O
on	O
Lee	O
's	O
theorem	O
for	O
network	O
sensitivity	O
calculations	O
.	O
</s>
<s>
Training	O
the	O
weights	O
in	O
a	O
neural	B-Architecture
network	I-Architecture
can	O
be	O
modeled	O
as	O
a	O
non-linear	O
global	O
optimization	O
problem	O
.	O
</s>
<s>
The	O
most	O
common	O
global	O
optimization	O
method	O
for	O
training	O
RNNs	O
is	O
genetic	B-Algorithm
algorithms	I-Algorithm
,	O
especially	O
in	O
unstructured	O
networks	O
.	O
</s>
<s>
Initially	O
,	O
the	O
genetic	B-Algorithm
algorithm	I-Algorithm
is	O
encoded	O
with	O
the	O
neural	B-Architecture
network	I-Architecture
weights	O
in	O
a	O
predefined	O
manner	O
where	O
one	O
gene	O
in	O
the	O
chromosome	O
represents	O
one	O
weight	O
link	O
.	O
</s>
<s>
Many	O
chromosomes	O
make	O
up	O
the	O
population	O
;	O
therefore	O
,	O
many	O
different	O
neural	B-Architecture
networks	I-Architecture
are	O
evolved	O
until	O
a	O
stopping	O
criterion	O
is	O
satisfied	O
.	O
</s>
<s>
Therefore	O
,	O
the	O
goal	O
of	O
the	O
genetic	B-Algorithm
algorithm	I-Algorithm
is	O
to	O
maximize	O
the	O
fitness	O
function	O
,	O
reducing	O
the	O
mean-squared-error	O
.	O
</s>
<s>
Other	O
global	O
(	O
and/or	O
evolutionary	O
)	O
optimization	O
techniques	O
may	O
be	O
used	O
to	O
seek	O
a	O
good	O
set	O
of	O
weights	O
,	O
such	O
as	O
simulated	B-Algorithm
annealing	I-Algorithm
or	O
particle	B-Algorithm
swarm	I-Algorithm
optimization	I-Algorithm
.	O
</s>
<s>
They	O
are	O
in	O
fact	O
recursive	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
with	O
a	O
particular	O
structure	O
:	O
that	O
of	O
a	O
linear	O
chain	O
.	O
</s>
<s>
Whereas	O
recursive	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
operate	O
on	O
any	O
hierarchical	O
structure	O
,	O
combining	O
child	O
representations	O
into	O
parent	O
representations	O
,	O
recurrent	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
operate	O
on	O
the	O
linear	O
progression	O
of	O
time	O
,	O
combining	O
the	O
previous	O
time	O
step	O
and	O
a	O
hidden	O
representation	O
into	O
the	O
representation	O
for	O
the	O
current	O
time	O
step	O
.	O
</s>
<s>
The	O
framework	O
has	O
the	O
advantage	O
of	O
having	O
been	O
generated	O
from	O
an	O
extensive	O
analysis	O
of	O
the	O
literature	O
and	O
dedicated	O
to	O
recurrent	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
and	O
their	O
variations	O
.	O
</s>
<s>
Caffe	B-Algorithm
:	O
Created	O
by	O
the	O
Berkeley	O
Vision	O
and	O
Learning	O
Center	O
(	O
BVLC	O
)	O
.	O
</s>
<s>
Developed	O
in	O
C++	B-Language
,	O
and	O
has	O
Python	B-Language
and	O
MATLAB	B-Language
wrappers	O
.	O
</s>
<s>
Chainer	B-Algorithm
:	O
Fully	O
in	O
Python	B-Language
,	O
production	O
support	O
for	O
CPU	O
,	O
GPU	O
,	O
distributed	O
training	O
.	O
</s>
<s>
Deeplearning4j	B-Library
:	O
Deep	B-Algorithm
learning	I-Algorithm
in	O
Java	B-Language
and	O
Scala	B-Language
on	O
multi-GPU-enabled	O
Spark	B-Language
.	O
</s>
<s>
Flux	B-Application
:	O
includes	O
interfaces	O
for	O
RNNs	O
,	O
including	O
GRUs	O
and	O
LSTMs	B-Algorithm
,	O
written	O
in	O
Julia	B-Language
.	O
</s>
<s>
Keras	B-Algorithm
:	O
High-level	O
API	O
,	O
providing	O
a	O
wrapper	O
to	O
many	O
other	O
deep	B-Algorithm
learning	I-Algorithm
libraries	O
.	O
</s>
<s>
MXNet	B-Algorithm
:	O
an	O
open-source	O
deep	B-Algorithm
learning	I-Algorithm
framework	O
used	O
to	O
train	O
and	O
deploy	O
deep	O
neural	B-Architecture
networks	I-Architecture
.	O
</s>
<s>
PyTorch	B-Algorithm
:	O
Tensors	B-Device
and	O
Dynamic	O
neural	B-Architecture
networks	I-Architecture
in	O
Python	B-Language
with	O
GPU	O
acceleration	O
.	O
</s>
<s>
Theano	B-Algorithm
:	O
A	O
deep-learning	B-Algorithm
library	O
for	O
Python	B-Language
with	O
an	O
API	O
largely	O
compatible	O
with	O
the	O
NumPy	B-Application
library	O
.	O
</s>
<s>
Torch	B-Algorithm
:	O
A	O
scientific	O
computing	O
framework	O
with	O
support	O
for	O
machine	O
learning	O
algorithms	O
,	O
written	O
in	O
C	B-Language
and	O
Lua	B-Language
.	O
</s>
<s>
Applications	O
of	O
recurrent	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
include	O
:	O
</s>
