<s>
Deep	B-Algorithm
learning	I-Algorithm
and	O
artificial	B-Architecture
neural	I-Architecture
networks	I-Architecture
are	O
approaches	O
used	O
in	O
machine	O
learning	O
to	O
build	O
computational	O
models	O
which	O
learn	O
from	O
training	O
examples	O
.	O
</s>
<s>
Bayesian	O
neural	B-Architecture
networks	I-Architecture
merge	O
these	O
fields	O
.	O
</s>
<s>
They	O
are	O
a	O
type	O
of	O
artificial	B-Architecture
neural	I-Architecture
network	I-Architecture
whose	O
parameters	O
and	O
predictions	O
are	O
both	O
probabilistic	O
.	O
</s>
<s>
While	O
standard	O
artificial	B-Architecture
neural	I-Architecture
networks	I-Architecture
often	O
assign	O
high	O
confidence	O
even	O
to	O
incorrect	O
predictions	O
,	O
Bayesian	O
neural	B-Architecture
networks	I-Architecture
can	O
more	O
accurately	O
evaluate	O
how	O
likely	O
their	O
predictions	O
are	O
to	O
be	O
correct	O
.	O
</s>
<s>
Neural	B-Architecture
Network	I-Architecture
Gaussian	B-General_Concept
Processes	I-General_Concept
(	O
NNGPs	O
)	O
are	O
equivalent	O
to	O
Bayesian	O
neural	B-Architecture
networks	I-Architecture
in	O
a	O
particular	O
limit	O
,	O
and	O
provide	O
a	O
closed	O
form	O
way	O
to	O
evaluate	O
Bayesian	O
neural	B-Architecture
networks	I-Architecture
.	O
</s>
<s>
They	O
are	O
a	O
Gaussian	B-General_Concept
process	I-General_Concept
probability	O
distribution	O
which	O
describes	O
the	O
distribution	O
over	O
predictions	O
made	O
by	O
the	O
corresponding	O
Bayesian	O
neural	B-Architecture
network	I-Architecture
.	O
</s>
<s>
Computation	O
in	O
artificial	B-Architecture
neural	I-Architecture
networks	I-Architecture
is	O
usually	O
organized	O
into	O
sequential	O
layers	O
of	O
artificial	B-Algorithm
neurons	I-Algorithm
.	O
</s>
<s>
The	O
equivalence	O
between	O
NNGPs	O
and	O
Bayesian	O
neural	B-Architecture
networks	I-Architecture
occurs	O
when	O
the	O
layers	O
in	O
a	O
Bayesian	O
neural	B-Architecture
network	I-Architecture
become	O
infinitely	O
wide	O
(	O
see	O
figure	O
)	O
.	O
</s>
<s>
This	O
large	B-Algorithm
width	I-Algorithm
limit	I-Algorithm
is	O
of	O
practical	O
interest	O
,	O
since	O
finite	O
width	O
neural	B-Architecture
networks	I-Architecture
typically	O
perform	O
strictly	O
better	O
as	O
layer	O
width	O
is	O
increased	O
.	O
</s>
<s>
The	O
NNGP	O
also	O
appears	O
in	O
several	O
other	O
contexts	O
:	O
it	O
describes	O
the	O
distribution	O
over	O
predictions	O
made	O
by	O
wide	O
non-Bayesian	O
artificial	B-Architecture
neural	I-Architecture
networks	I-Architecture
after	O
random	O
initialization	O
of	O
their	O
parameters	O
,	O
but	O
before	O
training	O
;	O
it	O
appears	O
as	O
a	O
term	O
in	O
neural	B-Algorithm
tangent	I-Algorithm
kernel	I-Algorithm
prediction	O
equations	O
;	O
it	O
is	O
used	O
in	O
deep	O
information	O
propagation	O
to	O
characterize	O
whether	O
hyperparameters	O
and	O
architectures	O
will	O
be	O
trainable	O
.	O
</s>
<s>
It	O
is	O
related	O
to	O
other	O
large	B-Algorithm
width	I-Algorithm
limits	I-Algorithm
of	I-Algorithm
neural	I-Algorithm
networks	I-Algorithm
.	O
</s>
<s>
Every	O
setting	O
of	O
a	O
neural	B-Architecture
network	I-Architecture
's	O
parameters	O
corresponds	O
to	O
a	O
specific	O
function	O
computed	O
by	O
the	O
neural	B-Architecture
network	I-Architecture
.	O
</s>
<s>
A	O
prior	O
distribution	O
over	O
neural	B-Architecture
network	I-Architecture
parameters	O
therefore	O
corresponds	O
to	O
a	O
prior	O
distribution	O
over	O
functions	O
computed	O
by	O
the	O
network	O
.	O
</s>
<s>
As	O
neural	B-Architecture
networks	I-Architecture
are	O
made	O
infinitely	O
wide	O
,	O
this	O
distribution	O
over	O
functions	O
converges	O
to	O
a	O
Gaussian	B-General_Concept
process	I-General_Concept
for	O
many	O
architectures	O
.	O
</s>
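The convergence of the prior over functions to a Gaussian process can be checked numerically. The sketch below is illustrative, not from this text: the ReLU nonlinearity, the width, and the variances sigma_w^2 = 1, sigma_b^2 = 0.1 are all assumed. It draws many random one-hidden-layer networks, evaluates each on two fixed inputs, and inspects the paired outputs, which approach a zero-mean bivariate Gaussian as the width grows.

```python
import numpy as np

# Assumed setup: one-hidden-layer ReLU networks with i.i.d. Gaussian
# parameters, weight variance sigma_w^2 / fan_in and bias variance sigma_b^2.
rng = np.random.default_rng(0)
width, n_draws = 2048, 4000
sigma_w2, sigma_b2 = 1.0, 0.1
x1, x2 = np.array([1.0, 0.0]), np.array([0.6, 0.8])  # two unit-norm inputs

outs = np.zeros((n_draws, 2))
for i in range(n_draws):
    W0 = rng.normal(0, np.sqrt(sigma_w2 / 2), size=(width, 2))
    b0 = rng.normal(0, np.sqrt(sigma_b2), size=width)
    W1 = rng.normal(0, np.sqrt(sigma_w2 / width), size=width)
    b1 = rng.normal(0, np.sqrt(sigma_b2))
    f = lambda x: W1 @ np.maximum(W0 @ x + b0, 0.0) + b1  # network function
    outs[i] = [f(x1), f(x2)]

# The scatter of (f(x1), f(x2)) across parameter draws is approximately a
# zero-mean bivariate Gaussian; its empirical covariance is the NNGP kernel.
print(np.mean(outs, axis=0))       # approx. [0, 0]
print(np.cov(outs, rowvar=False))  # approx. the 2x2 NNGP kernel
```

This is the scatter-plot experiment the figure description refers to: each draw of the parameters contributes one point in the plane of paired outputs.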
<s>
The	O
figure	O
to	O
the	O
right	O
plots	O
the	O
one-dimensional	O
outputs	O
of	O
a	O
neural	B-Architecture
network	I-Architecture
for	O
two	O
inputs	O
and	O
against	O
each	O
other	O
.	O
</s>
<s>
The	O
black	O
dots	O
show	O
the	O
function	O
computed	O
by	O
the	O
neural	B-Architecture
network	I-Architecture
on	O
these	O
inputs	O
for	O
random	O
draws	O
of	O
the	O
parameters	O
from	O
.	O
</s>
<s>
For	O
infinitely	O
wide	O
neural	B-Architecture
networks	I-Architecture
,	O
since	O
the	O
distribution	O
over	O
functions	O
computed	O
by	O
the	O
neural	B-Architecture
network	I-Architecture
is	O
a	O
Gaussian	B-General_Concept
process	I-General_Concept
,	O
the	O
joint	O
distribution	O
over	O
network	O
outputs	O
is	O
a	O
multivariate	O
Gaussian	O
for	O
any	O
finite	O
set	O
of	O
network	O
inputs	O
.	O
</s>
<s>
The	O
equivalence	O
between	O
infinitely	O
wide	O
Bayesian	O
neural	B-Architecture
networks	I-Architecture
and	O
NNGPs	O
has	O
been	O
shown	O
to	O
hold	O
for	O
:	O
single	O
hidden	O
layer	O
and	O
deep	O
fully	O
connected	O
networks	O
as	O
the	O
number	O
of	O
units	O
per	O
layer	O
is	O
taken	O
to	O
infinity	O
;	O
convolutional	B-Architecture
neural	I-Architecture
networks	I-Architecture
as	O
the	O
number	O
of	O
channels	O
is	O
taken	O
to	O
infinity	O
;	O
transformer	O
networks	O
as	O
the	O
number	O
of	O
attention	O
heads	O
is	O
taken	O
to	O
infinity	O
;	O
recurrent	B-Algorithm
networks	I-Algorithm
as	O
the	O
number	O
of	O
units	O
is	O
taken	O
to	O
infinity	O
.	O
</s>
<s>
This	O
in	O
particular	O
includes	O
all	O
feedforward	O
or	O
recurrent	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
composed	O
of	O
multilayer	O
perceptron	O
,	O
recurrent	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
(	O
e.g.	O
LSTMs	B-Algorithm
,	O
GRUs	B-Algorithm
)	O
,	O
(	O
nD	O
or	O
graph	O
)	O
convolution	B-Architecture
,	O
pooling	O
,	O
skip	O
connection	O
,	O
attention	O
,	O
batch	B-General_Concept
normalization	I-General_Concept
,	O
and/or	O
layer	O
normalization	O
.	O
</s>
<s>
This	O
section	O
expands	O
on	O
the	O
correspondence	O
between	O
infinitely	O
wide	O
neural	B-Architecture
networks	I-Architecture
and	O
Gaussian	B-General_Concept
processes	I-General_Concept
for	O
the	O
specific	O
case	O
of	O
a	O
fully	O
connected	O
architecture	O
.	O
</s>
<s>
Consider	O
a	O
fully	O
connected	O
artificial	B-Architecture
neural	I-Architecture
network	I-Architecture
with	O
inputs	O
,	O
parameters	O
consisting	O
of	O
weights	O
and	O
biases	O
for	O
each	O
layer	O
in	O
the	O
network	O
,	O
pre-activations	O
(	O
pre-nonlinearity	O
)	O
,	O
activations	O
(	O
post-nonlinearity	O
)	O
,	O
pointwise	O
nonlinearity	O
,	O
and	O
layer	O
widths	O
.	O
</s>
<s>
We	O
first	O
observe	O
that	O
the	O
pre-activations	O
are	O
described	O
by	O
a	O
Gaussian	B-General_Concept
process	I-General_Concept
conditioned	O
on	O
the	O
preceding	O
activations	O
.	O
</s>
<s>
Since	O
the	O
are	O
jointly	O
Gaussian	O
for	O
any	O
set	O
of	O
,	O
they	O
are	O
described	O
by	O
a	O
Gaussian	B-General_Concept
process	I-General_Concept
conditioned	O
on	O
the	O
preceding	O
activations	O
.	O
</s>
<s>
The	O
covariance	O
or	O
kernel	O
of	O
this	O
Gaussian	B-General_Concept
process	I-General_Concept
depends	O
on	O
the	O
weight	O
and	O
bias	O
variances	O
and	O
,	O
as	O
well	O
as	O
the	O
second	O
moment	O
matrix	O
of	O
the	O
preceding	O
activations	O
,	O
</s>
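This conditional Gaussian structure can be sketched directly. In the example below (hyperparameter values and vectors are illustrative assumptions), the preceding activations for two inputs are held fixed while the weights and biases are resampled; each pre-activation is then a sum of i.i.d. Gaussian terms, and its covariance across the two inputs matches sigma_b^2 + sigma_w^2 * <a(x), a(x')> / n.

```python
import numpy as np

# With the preceding activations a(x), a(x') fixed, z_i = sum_j W_ij a_j + b_i
# is Gaussian, with cross-input covariance set by the second moment of a.
rng = np.random.default_rng(1)
n, draws = 512, 20000
sigma_w2, sigma_b2 = 1.5, 0.2
ax = rng.normal(size=n)   # fixed preceding activations for input x
axp = rng.normal(size=n)  # fixed preceding activations for input x'

W = rng.normal(0, np.sqrt(sigma_w2 / n), size=(draws, n))
b = rng.normal(0, np.sqrt(sigma_b2), size=draws)
zx = W @ ax + b    # pre-activations at x, one per parameter draw
zxp = W @ axp + b  # pre-activations at x'

analytic = sigma_b2 + sigma_w2 * (ax @ axp) / n
empirical = np.mean(zx * zxp) - np.mean(zx) * np.mean(zxp)
print(analytic, empirical)  # the two estimates should agree closely
```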
<s>
Because	O
of	O
this	O
,	O
we	O
can	O
say	O
that	O
is	O
a	O
Gaussian	B-General_Concept
process	I-General_Concept
conditioned	O
on	O
,	O
rather	O
than	O
conditioned	O
on	O
,	O
</s>
<s>
We	O
have	O
already	O
determined	O
that	O
is	O
a	O
Gaussian	B-General_Concept
process	I-General_Concept
.	O
</s>
<s>
This	O
means	O
that	O
the	O
sum	O
defining	O
is	O
an	O
average	O
over	O
samples	O
from	O
a	O
Gaussian	B-General_Concept
process	I-General_Concept
which	O
is	O
a	O
function	O
of	O
,	O
</s>
<s>
As	O
the	O
layer	O
width	O
goes	O
to	O
infinity	O
,	O
this	O
average	O
over	O
samples	O
from	O
the	O
Gaussian	B-General_Concept
process	I-General_Concept
can	O
be	O
replaced	O
with	O
an	O
integral	O
over	O
the	O
Gaussian	B-General_Concept
process	I-General_Concept
:	O
</s>
<s>
There	O
are	O
a	O
number	O
of	O
situations	O
where	O
this	O
has	O
been	O
solved	O
analytically	O
,	O
such	O
as	O
when	O
is	O
a	O
ReLU	B-Algorithm
,	O
</s>
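For the ReLU case the Gaussian integral has the arc-cosine closed form of Cho and Saul. The sketch below implements that closed form and checks it against a Monte Carlo estimate of the same expectation (the specific covariance values are arbitrary test inputs, not from this text).

```python
import numpy as np

def relu_expectation(k11, k12, k22):
    # Closed form for E[relu(u) relu(v)] with (u, v) zero-mean Gaussian and
    # covariance [[k11, k12], [k12, k22]]: the arc-cosine kernel, which is
    # the Gaussian integral that arises when the nonlinearity is a ReLU.
    c = np.clip(k12 / np.sqrt(k11 * k22), -1.0, 1.0)
    theta = np.arccos(c)
    return np.sqrt(k11 * k22) * (np.sin(theta) + (np.pi - theta) * c) / (2 * np.pi)

# Monte Carlo check of the closed form on assumed test values.
rng = np.random.default_rng(2)
K = np.array([[1.0, 0.4], [0.4, 0.7]])
u, v = rng.multivariate_normal([0, 0], K, size=500000).T
mc = np.mean(np.maximum(u, 0) * np.maximum(v, 0))
print(relu_expectation(1.0, 0.4, 0.7), mc)  # should agree to a few decimals
```

A sanity check: for perfectly correlated unit-variance inputs the formula gives 1/2, which is E[relu(z)^2] for standard Gaussian z.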
<s>
By	O
combining	O
this	O
expression	O
with	O
the	O
further	O
observations	O
that	O
the	O
input	O
layer	O
second	O
moment	O
matrix	O
is	O
a	O
deterministic	O
function	O
of	O
the	O
input	O
,	O
and	O
that	O
is	O
a	O
Gaussian	B-General_Concept
process	I-General_Concept
,	O
the	O
output	O
of	O
the	O
neural	B-Architecture
network	I-Architecture
can	O
be	O
expressed	O
as	O
a	O
Gaussian	B-General_Concept
process	I-General_Concept
in	O
terms	O
of	O
its	O
input	O
,	O
</s>
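Putting these observations together yields a layer-by-layer kernel recursion. The following sketch (names, hyperparameters, and the use of the ReLU arc-cosine closed form are assumptions rather than this text's notation) starts from the input-layer second-moment matrix, which is a deterministic function of the inputs, and iterates the Gaussian-integral map once per layer to obtain the NNGP kernel of a deep fully connected ReLU network.

```python
import numpy as np

def nngp_kernel(X, depth, sigma_w2=2.0, sigma_b2=0.0):
    """NNGP kernel of a deep fully connected ReLU network (illustrative sketch).

    X has one input per row; depth counts hidden layers."""
    # Input-layer second-moment matrix: a deterministic function of the input.
    K = sigma_b2 + sigma_w2 * (X @ X.T) / X.shape[1]
    for _ in range(depth):
        d = np.sqrt(np.diag(K))
        c = np.clip(K / np.outer(d, d), -1.0, 1.0)
        theta = np.arccos(c)
        # E[relu(u) relu(v)] over the Gaussian pre-activations, in closed form.
        ev = np.outer(d, d) * (np.sin(theta) + (np.pi - theta) * c) / (2 * np.pi)
        K = sigma_b2 + sigma_w2 * ev
    return K

X = np.array([[1.0, 0.0], [0.6, 0.8], [0.0, 1.0]])
K = nngp_kernel(X, depth=3)
print(K)  # 3x3 covariance of the network outputs under the NNGP prior
```

With sigma_w^2 = 2 and sigma_b^2 = 0 (the so-called critical initialization for ReLU), unit-norm inputs keep unit variance at every depth, which is why those values are used in the example.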
<s>
is	O
a	O
free	B-License
and	I-License
open-source	I-License
Python	B-Language
library	O
used	O
for	O
computing	O
and	O
doing	O
inference	O
with	O
the	O
NNGP	O
and	O
neural	B-Algorithm
tangent	I-Algorithm
kernel	I-Algorithm
corresponding	O
to	O
various	O
common	O
ANN	O
architectures	O
.	O
</s>
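Inference with an NNGP kernel is ordinary Gaussian process regression. The sketch below is self-contained and does not use the library's API (all names are illustrative): it builds the one-hidden-layer ReLU NNGP kernel via the arc-cosine closed form, then computes the exact GP posterior mean and covariance on hypothetical training data.

```python
import numpy as np

def relu_nngp(A, B, sigma_w2=2.0, sigma_b2=0.1):
    # One-hidden-layer ReLU NNGP kernel via the arc-cosine closed form
    # (illustrative hyperparameters; rows of A and B are inputs).
    d = A.shape[1]
    kab = sigma_b2 + sigma_w2 * (A @ B.T) / d
    kaa = sigma_b2 + sigma_w2 * np.sum(A * A, axis=1) / d
    kbb = sigma_b2 + sigma_w2 * np.sum(B * B, axis=1) / d
    norm = np.sqrt(np.outer(kaa, kbb))
    c = np.clip(kab / norm, -1.0, 1.0)
    th = np.arccos(c)
    return sigma_b2 + sigma_w2 * norm * (np.sin(th) + (np.pi - th) * c) / (2 * np.pi)

rng = np.random.default_rng(3)
X = rng.normal(size=(20, 3)); y = np.sin(X[:, 0])  # toy training data
Xs = rng.normal(size=(5, 3)); s2 = 1e-2            # test inputs, noise variance

# Standard zero-mean GP posterior:
#   mean = Ks (K + s2 I)^-1 y,   cov = Kss - Ks (K + s2 I)^-1 Ks^T
K = relu_nngp(X, X) + s2 * np.eye(len(X))
Ks = relu_nngp(Xs, X)
mean = Ks @ np.linalg.solve(K, y)
cov = relu_nngp(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)
print(mean)          # posterior predictive means at Xs
print(np.diag(cov))  # posterior predictive variances (nonnegative)
```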
