<s>
Stochastic	B-Algorithm
gradient	I-Algorithm
Langevin	I-Algorithm
dynamics	I-Algorithm
(	O
SGLD	O
)	O
is	O
an	O
optimization	O
and	O
sampling	O
technique	O
composed	O
of	O
characteristics	O
from	O
Stochastic	B-Algorithm
gradient	I-Algorithm
descent	I-Algorithm
,	O
a	O
Robbins	O
–	O
Monro	O
optimization	O
algorithm	O
,	O
and	O
Langevin	O
dynamics	O
,	O
a	O
mathematical	O
extension	O
of	O
molecular	O
dynamics	O
models	O
.	O
</s>
<s>
Like	O
stochastic	B-Algorithm
gradient	I-Algorithm
descent	I-Algorithm
,	O
SGLD	O
is	O
an	O
iterative	O
optimization	O
algorithm	O
which	O
uses	O
minibatching	O
to	O
create	O
a	O
stochastic	O
gradient	O
estimator	O
,	O
as	O
used	O
in	O
SGD	O
to	O
optimize	O
a	O
differentiable	O
objective	O
function	O
.	O
</s>
<s>
Stochastic	B-Algorithm
gradient	I-Algorithm
Langevin	I-Algorithm
dynamics	I-Algorithm
uses	O
a	O
modified	O
update	O
procedure	O
with	O
minibatched	O
likelihood	O
terms	O
:	O
</s>
<s>
For	O
early	O
iterations	O
of	O
the	O
algorithm	O
,	O
each	O
parameter	O
update	O
mimics	O
Stochastic	B-Algorithm
Gradient	I-Algorithm
Descent	I-Algorithm
;	O
however	O
,	O
as	O
the	O
algorithm	O
approaches	O
a	O
local	O
minimum	O
or	O
maximum	O
,	O
the	O
gradient	O
shrinks	O
to	O
zero	O
and	O
the	O
chain	O
produces	O
samples	O
surrounding	O
the	O
maximum	O
a	O
posteriori	O
mode	O
allowing	O
for	O
posterior	O
inference	O
.	O
</s>
<s>
In	O
doing	O
so	O
,	O
the	O
method	O
maintains	O
the	O
computational	O
efficiency	O
of	O
stochastic	B-Algorithm
gradient	I-Algorithm
descent	I-Algorithm
when	O
compared	O
to	O
traditional	O
gradient	B-Algorithm
descent	I-Algorithm
while	O
providing	O
additional	O
information	O
regarding	O
the	O
landscape	O
around	O
the	O
critical	O
point	O
of	O
the	O
objective	O
function	O
.	O
</s>
<s>
In	O
practice	O
,	O
SGLD	O
can	O
be	O
applied	O
to	O
the	O
training	O
of	O
Bayesian	O
Neural	B-Architecture
Networks	I-Architecture
in	O
Deep	B-Algorithm
Learning	I-Algorithm
,	O
a	O
task	O
in	O
which	O
the	O
method	O
provides	O
a	O
distribution	O
over	O
model	O
parameters	O
.	O
</s>
<s>
Additionally	O
,	O
obtaining	O
samples	O
from	O
a	O
posterior	O
distribution	O
permits	O
uncertainty	O
quantification	O
by	O
means	O
of	O
confidence	O
intervals	O
,	O
a	O
feature	O
which	O
is	O
not	O
possible	O
using	O
traditional	O
stochastic	B-Algorithm
gradient	I-Algorithm
descent	I-Algorithm
.	O
</s>
<s>
This	O
algorithm	O
is	O
also	O
a	O
reduction	O
of	O
Hamiltonian	B-Algorithm
Monte	I-Algorithm
Carlo	I-Algorithm
,	O
consisting	O
of	O
a	O
single	O
leapfrog	O
step	O
proposal	O
rather	O
than	O
a	O
series	O
of	O
steps	O
.	O
</s>
<s>
Since	O
SGLD	O
can	O
be	O
formulated	O
as	O
a	O
modification	O
of	O
both	O
stochastic	B-Algorithm
gradient	I-Algorithm
descent	I-Algorithm
and	O
MCMC	O
methods	O
,	O
the	O
method	O
lies	O
at	O
the	O
intersection	O
between	O
optimization	O
and	O
sampling	O
algorithms	O
;	O
the	O
method	O
maintains	O
SGD	O
's	O
ability	O
to	O
quickly	O
converge	O
to	O
regions	O
of	O
low	O
cost	O
while	O
providing	O
samples	O
to	O
facilitate	O
posterior	O
inference	O
.	O
</s>
<s>
Considering	O
relaxed	O
constraints	O
on	O
the	O
step	O
sizes	O
such	O
that	O
they	O
do	O
not	O
approach	O
zero	O
asymptotically	O
,	O
SGLD	O
fails	O
to	O
produce	O
samples	O
for	O
which	O
the	O
Metropolis	B-Algorithm
Hastings	I-Algorithm
rejection	O
rate	O
is	O
zero	O
,	O
and	O
thus	O
a	O
MH	O
rejection	O
step	O
becomes	O
necessary	O
.	O
</s>
<s>
where	O
is	O
a	O
normal	O
distribution	O
centered	O
one	O
gradient	B-Algorithm
descent	I-Algorithm
step	O
from	O
and	O
is	O
our	O
target	O
distribution	O
.	O
</s>
<s>
Under	O
some	O
regularity	O
conditions	O
of	O
an	O
L-Lipschitz	O
smooth	O
objective	O
function	O
which	O
is	O
m-strongly	O
convex	O
outside	O
of	O
a	O
region	O
of	O
radius	O
with	O
condition	B-Algorithm
number	I-Algorithm
,	O
we	O
have	O
mixing	O
rate	O
bounds	O
:	O
</s>
