<s>
Leabra	B-Algorithm
stands	O
for	O
local	O
,	O
error-driven	B-Algorithm
and	O
associative	O
,	O
biologically	O
realistic	O
algorithm	O
.	O
</s>
<s>
It	O
is	O
a	O
model	B-Application
of	O
learning	O
which	O
is	O
a	O
balance	O
between	O
Hebbian	O
and	O
error-driven	B-Algorithm
learning	I-Algorithm
with	O
other	O
network-derived	O
characteristics	O
.	O
</s>
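The balance described above can be sketched as a single mixed weight update. This is an illustrative sketch, not the reference implementation: the function name `leabra_dwt`, the mixing constant `k_hebb`, and the default parameter values are assumptions; only the general form (a convex combination of a CPCA-style Hebbian term and a CHL-style error-driven term) comes from the text.

```python
# Hedged sketch of Leabra's mixed learning rule (illustrative, not the
# reference implementation).  `k_hebb` blends the Hebbian and
# error-driven components; its value here is an assumption.

def leabra_dwt(x_minus, y_minus, x_plus, y_plus, w, lrate=0.01, k_hebb=0.01):
    """Weight change for one sending (x) / receiving (y) unit pair."""
    hebb = y_plus * (x_plus - w)                # CPCA-style Hebbian term
    err = x_plus * y_plus - x_minus * y_minus   # CHL-style error-driven term
    return lrate * (k_hebb * hebb + (1.0 - k_hebb) * err)
```

With a small `k_hebb`, the error-driven term dominates while the Hebbian term acts as a regularizer toward representing the input statistics.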
<s>
This	O
model	B-Application
is	O
used	O
to	O
mathematically	O
predict	O
outcomes	O
based	O
on	O
inputs	O
and	O
previous	O
learning	O
influences	O
.	O
</s>
<s>
This	O
model	B-Application
is	O
heavily	O
influenced	O
by	O
and	O
contributes	O
to	O
neural	B-Architecture
network	I-Architecture
designs	O
and	O
models	O
.	O
</s>
<s>
This	O
algorithm	O
is	O
the	O
default	O
algorithm	O
in	O
emergent	B-Algorithm
(	O
successor	O
of	O
PDP++	B-Algorithm
)	O
when	O
making	O
a	O
new	O
project	O
,	O
and	O
is	O
extensively	O
used	O
in	O
various	O
simulations	O
.	O
</s>
<s>
Hebbian	O
learning	O
is	O
performed	O
using	O
the	O
conditional	B-Application
principal	I-Application
components	I-Application
analysis	I-Application
(	O
CPCA	B-Application
)	O
algorithm	O
with	O
a	O
correction	O
factor	O
for	O
sparse	O
expected	O
activity	O
levels	O
.	O
</s>
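A minimal sketch of a CPCA-style Hebbian update follows. This is an assumption-laden illustration, not the reference implementation: the asymptotic weight value approaches the conditional probability of the sender being active given the receiver is active, and `m` stands in for the sparse-activity correction factor (e.g. something like 0.5 divided by the expected sending-layer activity); the function name and parameter values are invented for illustration.

```python
# Hedged sketch of a CPCA-style Hebbian update (illustrative).
# `m` is an assumed correction factor compensating for sparse
# expected activity levels in the sending layer.

def cpca_dwt(x, y, w, lrate=0.01, m=1.0):
    # weight moves toward 1 when sender and receiver are both active,
    # toward 0 when only the receiver is active; gated by receiver activity y
    return lrate * y * (m * x * (1.0 - w) - (1.0 - x) * w)
```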
<s>
Error-driven	B-Algorithm
learning	I-Algorithm
is	O
performed	O
using	O
GeneRec	O
,	O
which	O
is	O
a	O
generalization	O
of	O
the	O
recirculation	O
algorithm	O
,	O
and	O
approximates	O
Almeida	O
–	O
Pineda	O
recurrent	O
backpropagation	O
.	O
</s>
<s>
The	O
symmetric	O
,	O
midpoint	O
version	O
of	O
GeneRec	O
is	O
used	O
,	O
which	O
is	O
equivalent	O
to	O
the	O
contrastive	B-Algorithm
Hebbian	I-Algorithm
learning	I-Algorithm
algorithm	O
(	O
CHL	B-Algorithm
)	O
.	O
</s>
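The CHL form of the error-driven component can be sketched directly from its definition: the difference between plus-phase (outcome) and minus-phase (expectation) activation coproducts. The function and variable names below are illustrative assumptions.

```python
# Hedged sketch of contrastive Hebbian learning (CHL), the symmetric
# midpoint form of GeneRec.  Activations are taken from a "minus"
# (expectation) phase and a "plus" (outcome) phase of settling.

def chl_dwt(x_minus, y_minus, x_plus, y_plus, lrate=0.01):
    # plus-phase coproduct minus minus-phase coproduct
    return lrate * (x_plus * y_plus - x_minus * y_minus)
```

When the two phases agree, the update is zero, so learning stops once expectations match outcomes.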
<s>
The	O
activation	O
function	O
is	O
a	O
point-neuron	O
approximation	O
with	O
both	O
discrete	O
spiking	B-Algorithm
and	O
continuous	O
rate-code	O
output	O
.	O
</s>
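The continuous rate-code output can be sketched with an X-over-X-plus-1 style saturating function of the drive above threshold, in the spirit of the point-neuron approximation. This sketch omits the noise convolution used in the actual model, and the `gain` and `theta` values are illustrative assumptions.

```python
# Hedged sketch of a rate-code output in the spirit of Leabra's
# X/(X+1) activation function (no noise convolution; parameter
# values are assumptions).

def xx1_rate(v_m, theta=0.5, gain=100.0):
    # rectified distance above threshold drives a saturating rate code
    x = gain * max(v_m - theta, 0.0)
    return x / (x + 1.0)
```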
<s>
Layer	O
or	O
unit-group	O
level	O
inhibition	O
can	O
be	O
computed	O
directly	O
using	O
a	O
k-winners-take-all	B-Algorithm
(	O
KWTA	B-Algorithm
)	O
function	O
,	O
producing	O
sparse	O
distributed	O
representations	O
.	O
</s>
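The kWTA computation can be sketched as placing a single layer-wide inhibitory conductance between the k-th and (k+1)-th most excited units, so that roughly k units remain above threshold. The placement fraction `q` and all names below are illustrative assumptions, not the reference implementation.

```python
# Hedged sketch of k-winners-take-all (kWTA) inhibition.  A single
# inhibition level is set between the k-th and (k+1)-th highest
# excitatory drives; the placement fraction `q` is an assumption.

def kwta_inhibition(g_excite, k, q=0.5):
    srt = sorted(g_excite, reverse=True)
    # inhibition sits q of the way from the (k+1)-th up to the k-th value
    return srt[k] + q * (srt[k - 1] - srt[k])

g = [0.9, 0.2, 0.7, 0.4, 0.1]
g_i = kwta_inhibition(g, k=2)       # lands between 0.7 and 0.4
active = [x > g_i for x in g]       # roughly k units exceed inhibition
```

Because only about k units stay active, the resulting layer activity is sparse, which is what makes the CPCA correction factor for sparse expected activity necessary.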
<s>
The	O
pseudocode	O
for	O
Leabra	B-Algorithm
is	O
given	O
here	O
,	O
showing	O
exactly	O
how	O
the	O
pieces	O
of	O
the	O
algorithm	O
described	O
in	O
more	O
detail	O
in	O
the	O
subsequent	O
sections	O
fit	O
together	O
.	O
</s>
<s>
-	O
Compute	O
kWTA	B-Algorithm
inhibition	O
for	O
each	O
layer	O
,	O
based	O
on	O
g_i^Q	O
:	O
</s>
<s>
emergent	B-Algorithm
is	O
the	O
original	O
implementation	O
of	O
Leabra	B-Algorithm
;	O
its	O
most	O
recent	O
implementation	O
is	O
written	O
in	O
Go	B-Application
.	O
</s>
<s>
Although	O
emergent	B-Algorithm
has	O
a	O
graphical	O
user	O
interface	O
,	O
it	O
is	O
very	O
complex	O
and	O
has	O
a	O
steep	O
learning	O
curve	O
.	O
</s>
<s>
There	O
is	O
also	O
an	O
R	O
package	O
available	O
that	O
can	O
be	O
easily	O
installed	O
via	O
install.packages("leabRa")	O
in	O
R	O
and	O
has	O
an	O
introduction	O
to	O
how	O
the	O
package	O
is	O
used	O
.	O
</s>
<s>
Temporal	O
differences	O
(	O
TD	O
)	O
is	O
widely	O
used	O
as	O
a	O
model	B-Application
of	O
midbrain	O
dopaminergic	O
firing	O
.	O
</s>
<s>
Prefrontal	B-Algorithm
cortex	I-Algorithm
basal	I-Algorithm
ganglia	I-Algorithm
working	I-Algorithm
memory	I-Algorithm
(	O
PBWM	B-Algorithm
)	O
.	O
</s>
<s>
PBWM	B-Algorithm
uses	O
PVLV	O
to	O
train	O
the	O
prefrontal	O
cortex	O
working	O
memory	O
updating	O
system	O
,	O
based	O
on	O
the	O
biology	O
of	O
the	O
prefrontal	O
cortex	O
and	O
basal	O
ganglia	O
.	O
</s>
