<s>
Hopfield	O
networks	O
are	O
recurrent	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
with	O
dynamical	O
trajectories	O
converging	O
to	O
fixed	O
point	O
attractor	O
states	O
and	O
described	O
by	O
an	O
energy	O
function	O
.	O
</s>
<s>
The	O
energy	O
in	O
the	O
continuous	O
case	O
has	O
one	O
term	O
which	O
is	O
quadratic	O
in	O
the	O
(	O
as	O
in	O
the	O
binary	O
model	O
)	O
,	O
and	O
a	O
second	O
term	O
which	O
depends	O
on	O
the	O
gain	O
function	O
(	O
neuron	O
's	O
activation	B-Algorithm
function	I-Algorithm
)	O
.	O
</s>
<s>
Dense	O
Associative	O
Memories	O
(	O
also	O
known	O
as	O
the	O
modern	B-General_Concept
Hopfield	I-General_Concept
networks	I-General_Concept
)	O
are	O
generalizations	O
of	O
the	O
classical	O
Hopfield	O
networks	O
that	O
break	O
the	O
linear	O
scaling	O
relationship	O
between	O
the	O
number	O
of	O
input	O
features	O
and	O
the	O
number	O
of	O
stored	O
memories	O
.	O
</s>
<s>
This	O
is	O
achieved	O
by	O
introducing	O
stronger	O
non-linearities	O
(	O
either	O
in	O
the	O
energy	O
function	O
or	O
neurons’	O
activation	B-Algorithm
functions	I-Algorithm
)	O
leading	O
to	O
super-linear	O
(	O
even	O
an	O
exponential	O
)	O
memory	O
storage	O
capacity	O
as	O
a	O
function	O
of	O
the	O
number	O
of	O
feature	O
neurons	O
.	O
</s>
<s>
The	O
key	O
theoretical	O
idea	O
behind	O
the	O
Modern	B-General_Concept
Hopfield	I-General_Concept
networks	I-General_Concept
is	O
to	O
use	O
an	O
energy	O
function	O
and	O
an	O
update	O
rule	O
that	O
is	O
more	O
sharply	O
peaked	O
around	O
the	O
stored	O
memories	O
in	O
the	O
space	O
of	O
neuron	O
’s	O
configurations	O
compared	O
to	O
the	O
classical	O
Hopfield	O
network	O
.	O
</s>
<s>
A	O
simple	O
example	O
of	O
the	O
Modern	B-General_Concept
Hopfield	I-General_Concept
network	I-General_Concept
can	O
be	O
written	O
in	O
terms	O
of	O
binary	O
variables	O
that	O
represent	O
the	O
active	O
and	O
inactive	O
state	O
of	O
the	O
model	O
neuron	O
.In	O
this	O
formula	O
the	O
weights	O
represent	O
the	O
matrix	O
of	O
memory	O
vectors	O
(	O
index	O
enumerates	O
different	O
memories	O
,	O
and	O
index	O
enumerates	O
the	O
content	O
of	O
each	O
memory	O
corresponding	O
to	O
the	O
-th	O
feature	O
neuron	O
)	O
,	O
and	O
the	O
function	O
is	O
a	O
rapidly	O
growing	O
non-linear	O
function	O
.	O
</s>
<s>
Modern	B-General_Concept
Hopfield	I-General_Concept
networks	I-General_Concept
or	O
Dense	O
Associative	O
Memories	O
can	O
be	O
best	O
understood	O
in	O
continuous	O
variables	O
and	O
continuous	O
time	O
.	O
</s>
<s>
It	O
is	O
convenient	O
to	O
define	O
these	O
activation	B-Algorithm
function	I-Algorithm
as	O
derivatives	O
of	O
the	O
Lagrangian	O
functions	O
for	O
the	O
two	O
groups	O
of	O
neuronsThis	O
way	O
the	O
specific	O
form	O
of	O
the	O
equations	O
for	O
neuron	O
's	O
states	O
is	O
completely	O
defined	O
once	O
the	O
Lagrangian	O
functions	O
are	O
specified	O
.	O
</s>
<s>
Classical	O
formulation	O
of	O
continuous	O
Hopfield	O
networks	O
can	O
be	O
understood	O
as	O
a	O
special	O
limiting	O
case	O
of	O
the	O
Modern	B-General_Concept
Hopfield	I-General_Concept
networks	I-General_Concept
with	O
one	O
hidden	O
layer	O
.	O
</s>
<s>
Continuous	O
Hopfield	O
Networks	O
for	O
neurons	O
with	O
graded	O
response	O
are	O
typically	O
described	O
by	O
the	O
dynamical	O
equations	O
and	O
the	O
energy	O
function	O
where	O
,	O
and	O
is	O
the	O
inverse	O
of	O
the	O
activation	B-Algorithm
function	I-Algorithm
.	O
</s>
<s>
This	O
model	O
is	O
a	O
special	O
limit	O
of	O
the	O
class	O
of	O
models	O
that	O
is	O
called	O
models	O
A	O
,	O
with	O
the	O
following	O
choice	O
of	O
the	O
Lagrangian	O
functions	O
that	O
,	O
according	O
to	O
the	O
definition	O
(	O
)	O
,	O
leads	O
to	O
the	O
activation	B-Algorithm
functions	I-Algorithm
If	O
we	O
integrate	O
out	O
the	O
hidden	O
neurons	O
the	O
system	O
of	O
equations	O
(	O
)	O
reduces	O
to	O
the	O
equations	O
on	O
the	O
feature	O
neurons	O
(	O
)	O
with	O
,	O
and	O
the	O
general	O
expression	O
for	O
the	O
energy	O
(	O
)	O
reduces	O
to	O
the	O
effective	O
energy	O
While	O
the	O
first	O
two	O
terms	O
in	O
equation	O
(	O
)	O
are	O
the	O
same	O
as	O
those	O
in	O
equation	O
(	O
)	O
,	O
the	O
third	O
terms	O
look	O
superficially	O
different	O
.	O
</s>
<s>
In	O
equation	O
(	O
)	O
it	O
is	O
a	O
Legendre	O
transform	O
of	O
the	O
Lagrangian	O
for	O
the	O
feature	O
neurons	O
,	O
while	O
in	O
(	O
)	O
the	O
third	O
term	O
is	O
an	O
integral	O
of	O
the	O
inverse	O
activation	B-Algorithm
function	I-Algorithm
.	O
</s>
<s>
This	O
completes	O
the	O
proof	O
that	O
the	O
classical	O
Hopfield	O
network	O
with	O
continuous	O
states	O
is	O
a	O
special	O
limiting	O
case	O
of	O
the	O
modern	B-General_Concept
Hopfield	I-General_Concept
network	I-General_Concept
(	O
)	O
with	O
energy	O
(	O
)	O
.	O
</s>
<s>
This	O
section	O
describes	O
a	O
mathematical	O
model	O
of	O
a	O
fully	O
connected	O
Modern	B-General_Concept
Hopfield	I-General_Concept
network	I-General_Concept
assuming	O
the	O
extreme	O
degree	O
of	O
heterogeneity	O
:	O
every	O
single	O
neuron	O
is	O
different	O
.	O
</s>
<s>
Specifically	O
,	O
an	O
energy	O
function	O
and	O
the	O
corresponding	O
dynamical	O
equations	O
are	O
described	O
assuming	O
that	O
each	O
neuron	O
has	O
its	O
own	O
activation	B-Algorithm
function	I-Algorithm
and	O
kinetic	O
time	O
scale	O
.	O
</s>
<s>
The	O
activation	B-Algorithm
function	I-Algorithm
for	O
each	O
neuron	O
is	O
defined	O
as	O
a	O
partial	O
derivative	O
of	O
the	O
Lagrangian	O
with	O
respect	O
to	O
that	O
neuron	O
's	O
activity	O
From	O
the	O
biological	O
perspective	O
one	O
can	O
think	O
about	O
as	O
an	O
axonal	O
output	O
of	O
the	O
neuron	O
.	O
</s>
<s>
For	O
non-additive	O
Lagrangians	O
this	O
activation	B-Algorithm
function	I-Algorithm
can	O
depend	O
on	O
the	O
activities	O
of	O
a	O
group	O
of	O
neurons	O
.	O
</s>
<s>
The	O
advantage	O
of	O
formulating	O
this	O
network	O
in	O
terms	O
of	O
the	O
Lagrangian	O
functions	O
is	O
that	O
it	O
makes	O
it	O
possible	O
to	O
easily	O
experiment	O
with	O
different	O
choices	O
of	O
the	O
activation	B-Algorithm
functions	I-Algorithm
and	O
different	O
architectural	O
arrangements	O
of	O
neurons	O
.	O
</s>
<s>
The	O
neurons	O
can	O
be	O
organized	O
in	O
layers	O
so	O
that	O
every	O
neuron	O
in	O
a	O
given	O
layer	O
has	O
the	O
same	O
activation	B-Algorithm
function	I-Algorithm
and	O
the	O
same	O
dynamic	O
time	O
scale	O
.	O
</s>
<s>
It	O
has	O
layers	O
of	O
recurrently	O
connected	O
neurons	O
with	O
the	O
states	O
described	O
by	O
continuous	O
variables	O
and	O
the	O
activation	B-Algorithm
functions	I-Algorithm
,	O
index	O
enumerates	O
the	O
layers	O
of	O
the	O
network	O
,	O
and	O
index	O
enumerates	O
individual	O
neurons	O
in	O
that	O
layer	O
.	O
</s>
<s>
The	O
activation	B-Algorithm
functions	I-Algorithm
can	O
depend	O
on	O
the	O
activities	O
of	O
all	O
the	O
neurons	O
in	O
the	O
layer	O
.	O
</s>
<s>
The	O
activation	B-Algorithm
functions	I-Algorithm
in	O
that	O
layer	O
can	O
be	O
defined	O
as	O
partial	O
derivatives	O
of	O
the	O
Lagrangian	O
With	O
these	O
definitions	O
the	O
energy	O
(	O
Lyapunov	O
)	O
function	O
is	O
given	O
by	O
If	O
the	O
Lagrangian	O
functions	O
,	O
or	O
equivalently	O
the	O
activation	B-Algorithm
functions	I-Algorithm
,	O
are	O
chosen	O
in	O
such	O
a	O
way	O
that	O
the	O
Hessians	O
for	O
each	O
layer	O
are	O
positive	O
semi-definite	O
and	O
the	O
overall	O
energy	O
is	O
bounded	O
from	O
below	O
,	O
this	O
system	O
is	O
guaranteed	O
to	O
converge	O
to	O
a	O
fixed	O
point	O
attractor	O
state	O
.	O
</s>
