In the study of artificial neural networks (ANNs), the neural tangent kernel (NTK) is a kernel that describes the evolution of deep artificial neural networks during their training by gradient descent. It allows ANNs to be studied using theoretical tools from kernel methods.
In general, a kernel is a positive-semidefinite symmetric function of two inputs which represents some notion of similarity between the two inputs. The NTK is a specific kernel derived from a given neural network; in general, when the neural network parameters change during training, the NTK evolves as well.
However, in the limit of large layer width the NTK is constant over training, unveiling a duality between training the wide neural network and kernel regression: gradient descent in the infinite-width limit is fully equivalent to kernel gradient descent with the NTK.
As a result, the trained neural network converges to the kernel regression estimator (with the NTK). This duality enables simple closed-form statements to be made about the predictions, training dynamics, generalization, and loss surfaces of wide neural networks.
The NTK was introduced in 2018 by Arthur Jacot, Franck Gabriel and Clément Hongler, who used it to study the convergence and generalization properties of fully connected neural networks. Later works extended the NTK results to other neural network architectures.
The parameters of a wide neural network change negligibly during training (which causes the NTK to be constant). However, this implies that infinite-width neural networks cannot exhibit feature learning, which is widely considered to be an important property of realistic deep neural networks.
Recent works address this shortcoming by taking an alternate kind of infinite-width limit in which there is no duality with kernel regression, but feature learning occurs during training.
For a neural network \(f(x; \theta)\) with parameters \(\theta = (\theta_1, \ldots, \theta_P)\), the NTK is a kernel defined by
\[
\Theta(x, x'; \theta) = \nabla_\theta f(x; \theta) \cdot \nabla_\theta f(x'; \theta) = \sum_{p=1}^{P} \partial_{\theta_p} f(x; \theta)\, \partial_{\theta_p} f(x'; \theta).
\]
In the language of kernel methods, the NTK \(\Theta\) is the kernel associated with the feature map \(x \mapsto \nabla_\theta f(x; \theta)\).
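The definition above can be checked numerically. The following sketch (an illustrative example, not from the source; it assumes NumPy and a hypothetical one-hidden-layer network \(f(x) = w_2 \cdot \tanh(W_1 x)\)) assembles the parameter gradients explicitly, computes the empirical NTK as their inner product, and verifies that the resulting Gram matrix is symmetric and positive semidefinite:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 3, 16                              # input dim, hidden width
W1 = rng.normal(size=(m, d)) / np.sqrt(d)
w2 = rng.normal(size=m) / np.sqrt(m)

def grad_f(x):
    """Gradient of f(x) = w2 . tanh(W1 x) w.r.t. all parameters, flattened."""
    a = np.tanh(W1 @ x)
    dW1 = np.outer(w2 * (1 - a**2), x)    # df/dW1 (chain rule through tanh)
    dw2 = a                               # df/dw2
    return np.concatenate([dW1.ravel(), dw2])

def ntk(x, xp):
    """Empirical NTK: inner product of the parameter-gradient feature maps."""
    return grad_f(x) @ grad_f(xp)

X = rng.normal(size=(5, d))
K = np.array([[ntk(x, xp) for xp in X] for x in X])

assert np.allclose(K, K.T)                        # symmetric
assert np.linalg.eigvalsh(K).min() > -1e-9        # positive semidefinite
```

Symmetry and positive semidefiniteness follow immediately from the Gram-matrix structure, which is exactly what makes the NTK a kernel.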
When optimizing the parameters of an ANN to minimize an empirical loss through gradient descent, the NTK governs the dynamics of the ANN output function throughout the training.
When the parameters \(\theta(t)\) are trained to minimize the empirical loss \(\mathcal{L}\) via continuous-time gradient descent, they evolve through the ordinary differential equation:
\[
\partial_t \theta(t) = -\nabla_\theta \mathcal{L}(\theta(t)).
\]
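Discretizing this ODE with a small step size \(\eta\) (forward Euler) recovers ordinary gradient descent, \(\theta \leftarrow \theta - \eta \nabla_\theta \mathcal{L}(\theta)\). A minimal sketch, assuming NumPy and a hypothetical least-squares problem:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 4))            # hypothetical data
y = rng.normal(size=20)
theta = np.zeros(4)

def loss(theta):
    r = X @ theta - y
    return 0.5 * r @ r

def grad(theta):
    return X.T @ (X @ theta - y)        # gradient of the empirical loss

eta = 1e-3                              # Euler step size for dtheta/dt = -grad
losses = [loss(theta)]
for _ in range(200):
    theta = theta - eta * grad(theta)   # forward-Euler discretization
    losses.append(loss(theta))

assert losses[-1] < losses[0]           # the loss decreases along the flow
```

For a small enough step size the discrete iterates track the gradient flow, and the loss decreases monotonically.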
For a dataset \((x_i)_{i=1}^n\) with vector labels \((y_i)_{i=1}^n\) and a loss function \(c(\hat{y}, y)\), the corresponding empirical loss, defined on functions \(f\), is defined by
\[
\mathcal{L}(f) = \sum_{i=1}^{n} c\big(f(x_i), y_i\big).
\]
The training of \(f(\cdot\,; \theta(t))\) through continuous-time gradient descent yields the following evolution in function space driven by the NTK:
\[
\partial_t f\big(x; \theta(t)\big) = -\sum_{i=1}^{n} \Theta\big(x, x_i; \theta(t)\big)\, \nabla_{\hat{y}} c\big(f(x_i; \theta(t)), y_i\big).
\]
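For a linear model \(f(x) = \theta \cdot x\) the NTK is simply \(\Theta(x, x') = x \cdot x'\) and is exactly constant, so the function-space equation can be checked directly. The sketch below (illustrative, not from the source; assumes NumPy and the squared-error cost \(c(\hat{y}, y) = \tfrac{1}{2}(\hat{y} - y)^2\)) verifies that one gradient-descent step on the parameters changes the output at a test point by \(-\eta \sum_i \Theta(x, x_i)\,(f(x_i) - y_i)\), exactly, since the model is linear in \(\theta\):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(10, 3))             # training inputs x_i
y = rng.normal(size=10)                  # labels y_i
theta = rng.normal(size=3)
eta = 0.01

f = X @ theta                            # current outputs f(x_i)
theta_new = theta - eta * X.T @ (f - y)  # one GD step on 0.5 * sum (f(x_i)-y_i)^2

x_test = rng.normal(size=3)
lhs = x_test @ theta_new - x_test @ theta   # actual change in f(x_test)

Theta = X @ x_test                       # NTK values Theta(x_test, x_i) = x_test . x_i
rhs = -eta * Theta @ (f - y)             # change predicted by the NTK dynamics

assert np.isclose(lhs, rhs)              # exact agreement for a linear model
```

For nonlinear networks the same identity holds to first order in the step size, with the NTK evaluated at the current parameters.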
The NTK represents the influence of the loss gradient with respect to example \(x_i\) on the evolution of the ANN output \(f(x)\) through a gradient descent step: in the scalar case, this reads
\[
\partial_t f\big(x; \theta(t)\big) = -\sum_{i=1}^{n} \Theta\big(x, x_i; \theta(t)\big)\, \frac{\partial c\big(f(x_i; \theta(t)), y_i\big)}{\partial \hat{y}}.
\]
In particular, each data point \(x_i\) influences the evolution of the output \(f(x)\) for each \(x\) throughout the training, in a way that is captured by the NTK.
Recent theoretical and empirical work in deep learning has shown the performance of ANNs to strictly improve as their layer widths grow larger. For various ANN architectures, the NTK yields precise insight into the training in this large-width regime.
Consider an ANN with fully-connected layers of widths \(n_0, n_1, \ldots, n_L\), so that \(f(x; \theta) = \big(f^{(L)} \circ \cdots \circ f^{(1)}\big)(x)\), where each \(f^{(\ell)}\) is the composition of an affine transformation with the pointwise application of a nonlinearity \(\sigma\), and where \(\theta\) parametrizes the affine maps.
In the infinite-width limit, the layerwise kernels obey a recursion, where \(\Sigma^{(\ell+1)}\) denotes the kernel defined in terms of the Gaussian expectation:
\[
\Sigma^{(\ell+1)}(x, x') = \mathbb{E}_{f \sim \mathcal{N}(0,\, \Sigma^{(\ell)})}\big[\sigma(f(x))\, \sigma(f(x'))\big].
\]
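For \(\sigma = \mathrm{ReLU}\) this Gaussian expectation has a closed form (the degree-1 arc-cosine kernel), which makes it easy to sanity-check by Monte Carlo. The sketch below is illustrative, not from the source: it assumes NumPy and takes the pre-activations at the first layer to be \(f(x) = w \cdot x\) with \(w \sim \mathcal{N}(0, I)\), whose covariance is the linear kernel \(\Sigma(x, x') = x \cdot x'\):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4
x, xp = rng.normal(size=d), rng.normal(size=d)

def arccos_kernel(u, v):
    """Closed form for E_w[relu(w.u) relu(w.v)], w ~ N(0, I)."""
    nu, nv = np.linalg.norm(u), np.linalg.norm(v)
    cos_t = np.clip(u @ v / (nu * nv), -1.0, 1.0)
    t = np.arccos(cos_t)
    return nu * nv / (2 * np.pi) * (np.sin(t) + (np.pi - t) * cos_t)

# Monte Carlo estimate of the Gaussian expectation E[sigma(f(x)) sigma(f(x'))]
W = rng.normal(size=(1_000_000, d))
mc = np.mean(np.maximum(W @ x, 0) * np.maximum(W @ xp, 0))

assert np.isclose(mc, arccos_kernel(x, xp), rtol=2e-2)
```

Closed forms of this kind are what allow the infinite-width NTK to be computed layer by layer without ever sampling a network.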
The NTK describes the evolution of neural networks under gradient descent in function space. Dual to this perspective is an understanding of how neural networks evolve in parameter space, since the NTK is defined in terms of the gradient of the ANN's outputs with respect to its parameters.
The NTK can be studied for various ANN architectures, in particular convolutional neural networks (CNNs), recurrent neural networks (RNNs) and transformers. In such settings, the large-width limit corresponds to letting the number of parameters grow, while keeping the number of layers fixed: for CNNs, this involves letting the number of channels grow.
The NTK gives a rigorous connection between the inference performed by infinite-width ANNs and that performed by kernel methods: when the loss function is the least-squares loss, the inference performed by an ANN is in expectation equal to the kernel ridge regression (with zero ridge) with respect to the NTK. This suggests that the performance of large ANNs in the NTK parametrization can be replicated by kernel methods for suitably chosen kernels.
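The kernel-regression side of this correspondence is easy to state concretely. The following sketch is illustrative, not from the source: it uses NumPy, an RBF kernel as a stand-in for the (fixed, infinite-width) NTK, and a tiny jitter for numerical stability, and it checks that the ridgeless kernel-regression estimator \(f(x) = k(x, X)\, K^{-1} y\) interpolates the training data:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(8, 2))              # training inputs
y = rng.normal(size=8)                   # training labels

def k(a, b, ell=1.0):
    """RBF kernel, standing in for the fixed infinite-width NTK."""
    return np.exp(-np.sum((a - b) ** 2) / (2 * ell**2))

K = np.array([[k(a, b) for b in X] for a in X])
# zero-ridge solve (jitter only, for numerical stability)
alpha = np.linalg.solve(K + 1e-12 * np.eye(len(X)), y)

def predict(x):
    return np.array([k(x, b) for b in X]) @ alpha

preds = np.array([predict(a) for a in X])
assert np.allclose(preds, y, atol=1e-5)  # ridgeless estimator interpolates
```

The trained infinite-width network (in expectation, under least-squares loss) makes the same predictions as this estimator, with the NTK in place of the RBF kernel.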
Neural Tangents is a free and open-source Python library used for computing and doing inference with the infinite-width NTK and neural network Gaussian process (NNGP) corresponding to various common ANN architectures.
