<s>
In	O
statistics	O
,	O
Gibbs	B-Algorithm
sampling	I-Algorithm
or	O
a	O
Gibbs	B-Algorithm
sampler	I-Algorithm
is	O
a	O
Markov	B-General_Concept
chain	I-General_Concept
Monte	I-General_Concept
Carlo	I-General_Concept
(	O
MCMC	O
)	O
algorithm	O
for	O
obtaining	O
a	O
sequence	O
of	O
observations	O
which	O
are	O
approximated	O
from	O
a	O
specified	O
multivariate	O
probability	O
distribution	O
,	O
when	O
direct	O
sampling	O
is	O
difficult	O
.	O
</s>
<s>
Gibbs	B-Algorithm
sampling	I-Algorithm
is	O
commonly	O
used	O
as	O
a	O
means	O
of	O
statistical	O
inference	O
,	O
especially	O
Bayesian	O
inference	O
.	O
</s>
<s>
It	O
is	O
a	O
randomized	B-General_Concept
algorithm	I-General_Concept
(	O
i.e.	O
an	O
algorithm	O
that	O
makes	O
use	O
of	O
random	O
numbers	O
)	O
,	O
and	O
is	O
an	O
alternative	O
to	O
deterministic	B-General_Concept
algorithms	I-General_Concept
for	O
statistical	O
inference	O
such	O
as	O
the	O
expectation-maximization	B-Algorithm
algorithm	I-Algorithm
(	O
EM	O
)	O
.	O
</s>
<s>
As	O
with	O
other	O
MCMC	O
algorithms	O
,	O
Gibbs	B-Algorithm
sampling	I-Algorithm
generates	O
a	O
Markov	O
chain	O
of	O
samples	O
,	O
each	O
of	O
which	O
is	O
correlated	O
with	O
nearby	O
samples	O
.	O
</s>
<s>
Gibbs	B-Algorithm
sampling	I-Algorithm
is	O
named	O
after	O
the	O
physicist	O
Josiah	O
Willard	O
Gibbs	O
,	O
in	O
reference	O
to	O
an	O
analogy	O
between	O
the	O
sampling	O
algorithm	O
and	O
statistical	O
physics	O
.	O
</s>
<s>
In	O
its	O
basic	O
version	O
,	O
Gibbs	B-Algorithm
sampling	I-Algorithm
is	O
a	O
special	O
case	O
of	O
the	O
Metropolis	B-Algorithm
–	I-Algorithm
Hastings	I-Algorithm
algorithm	I-Algorithm
.	O
</s>
<s>
However	O
,	O
in	O
its	O
extended	O
versions	O
(	O
see	O
below	O
)	O
,	O
it	O
can	O
be	O
considered	O
a	O
general	O
framework	O
for	O
sampling	O
from	O
a	O
large	O
set	O
of	O
variables	O
by	O
sampling	O
each	O
variable	O
(	O
or	O
in	O
some	O
cases	O
,	O
each	O
group	O
of	O
variables	O
)	O
in	O
turn	O
,	O
and	O
can	O
incorporate	O
the	O
Metropolis	B-Algorithm
–	I-Algorithm
Hastings	I-Algorithm
algorithm	I-Algorithm
(	O
or	O
methods	O
such	O
as	O
slice	B-Algorithm
sampling	I-Algorithm
)	O
to	O
implement	O
one	O
or	O
more	O
of	O
the	O
sampling	O
steps	O
.	O
</s>
<s>
Gibbs	B-Algorithm
sampling	I-Algorithm
is	O
applicable	O
when	O
the	O
joint	O
distribution	O
is	O
not	O
known	O
explicitly	O
or	O
is	O
difficult	O
to	O
sample	O
from	O
directly	O
,	O
but	O
the	O
conditional	O
distribution	O
of	O
each	O
variable	O
is	O
known	O
and	O
is	O
easy	O
(	O
or	O
at	O
least	O
,	O
easier	O
)	O
to	O
sample	O
from	O
.	O
</s>
<s>
The	O
Gibbs	B-Algorithm
sampling	I-Algorithm
algorithm	O
generates	O
an	O
instance	O
from	O
the	O
distribution	O
of	O
each	O
variable	O
in	O
turn	O
,	O
conditional	O
on	O
the	O
current	O
values	O
of	O
the	O
other	O
variables	O
.	O
</s>
<s>
Gibbs	B-Algorithm
sampling	I-Algorithm
is	O
particularly	O
well-adapted	O
to	O
sampling	O
the	O
posterior	O
distribution	O
of	O
a	O
Bayesian	O
network	O
,	O
since	O
Bayesian	O
networks	O
are	O
typically	O
specified	O
as	O
a	O
collection	O
of	O
conditional	O
distributions	O
.	O
</s>
<s>
Gibbs	B-Algorithm
sampling	I-Algorithm
,	O
in	O
its	O
basic	O
incarnation	O
,	O
is	O
a	O
special	O
case	O
of	O
the	O
Metropolis	B-Algorithm
–	I-Algorithm
Hastings	I-Algorithm
algorithm	I-Algorithm
.	O
</s>
<s>
The	O
point	O
of	O
Gibbs	B-Algorithm
sampling	I-Algorithm
is	O
that	O
given	O
a	O
multivariate	O
distribution	O
it	O
is	O
simpler	O
to	O
sample	O
from	O
a	O
conditional	O
distribution	O
than	O
to	O
marginalize	O
by	O
integrating	O
over	O
a	O
joint	O
distribution	O
.	O
</s>
<s>
The	O
initial	O
values	O
of	O
the	O
variables	O
can	O
be	O
determined	O
randomly	O
or	O
by	O
some	O
other	O
algorithm	O
such	O
as	O
expectation-maximization	B-Algorithm
.	O
</s>
<s>
The	O
process	O
of	O
simulated	B-Algorithm
annealing	I-Algorithm
is	O
often	O
used	O
to	O
reduce	O
the	O
"	O
random	O
walk	O
"	O
behavior	O
in	O
the	O
early	O
part	O
of	O
the	O
sampling	O
process	O
.	O
</s>
<s>
Other	O
techniques	O
that	O
may	O
reduce	O
autocorrelation	O
are	O
collapsed	O
Gibbs	B-Algorithm
sampling	I-Algorithm
,	O
blocked	O
Gibbs	B-Algorithm
sampling	I-Algorithm
,	O
and	O
ordered	O
overrelaxation	O
;	O
see	O
below	O
.	O
</s>
<s>
Gibbs	B-Algorithm
sampling	I-Algorithm
is	O
commonly	O
used	O
for	O
statistical	O
inference	O
.	O
</s>
<s>
The	O
most	O
likely	O
value	O
of	O
a	O
desired	O
parameter	O
(	O
the	O
mode	O
)	O
could	O
then	O
simply	O
be	O
selected	O
by	O
choosing	O
the	O
sample	O
value	O
that	O
occurs	O
most	O
commonly	O
;	O
this	O
is	O
essentially	O
equivalent	O
to	O
maximum	B-General_Concept
a	I-General_Concept
posteriori	I-General_Concept
estimation	I-General_Concept
of	O
a	O
parameter	O
.	O
</s>
<s>
More	O
commonly	O
,	O
however	O
,	O
the	O
expected	O
value	O
(	O
mean	O
or	O
average	O
)	O
of	O
the	O
sampled	O
values	O
is	O
chosen	O
;	O
this	O
is	O
a	O
Bayes	B-General_Concept
estimator	I-General_Concept
that	O
takes	O
advantage	O
of	O
the	O
additional	O
data	O
about	O
the	O
entire	O
distribution	O
that	O
is	O
available	O
from	O
Bayesian	O
sampling	O
,	O
whereas	O
a	O
maximization	O
algorithm	O
such	O
as	O
expectation	B-Algorithm
maximization	I-Algorithm
(	O
EM	O
)	O
is	O
capable	O
of	O
only	O
returning	O
a	O
single	O
point	O
from	O
the	O
distribution	O
.	O
</s>
<s>
For	O
example	O
,	O
for	O
a	O
unimodal	O
distribution	O
the	O
mean	O
(	O
expected	O
value	O
)	O
is	O
usually	O
similar	O
to	O
the	O
mode	O
(	O
most	O
common	O
value	O
)	O
,	O
but	O
if	O
the	O
distribution	O
is	O
skewed	B-General_Concept
in	O
one	O
direction	O
,	O
the	O
mean	O
will	O
be	O
moved	O
in	O
that	O
direction	O
,	O
which	O
effectively	O
accounts	O
for	O
the	O
extra	O
probability	O
mass	O
in	O
that	O
direction	O
.	O
</s>
<s>
Supervised	B-General_Concept
learning	I-General_Concept
,	O
unsupervised	B-General_Concept
learning	I-General_Concept
and	O
semi-supervised	B-General_Concept
learning	I-General_Concept
(	O
aka	O
learning	O
with	O
missing	O
values	O
)	O
can	O
all	O
be	O
handled	O
by	O
simply	O
fixing	O
the	O
values	O
of	O
all	O
variables	O
whose	O
values	O
are	O
known	O
,	O
and	O
sampling	O
from	O
the	O
remainder	O
.	O
</s>
<s>
Instead	O
,	O
in	O
such	O
a	O
case	O
there	O
will	O
be	O
variables	O
representing	O
the	O
unknown	O
true	O
mean	O
and	O
true	O
variance	O
,	O
and	O
the	O
determination	O
of	O
sample	O
values	O
for	O
these	O
variables	O
results	O
automatically	O
from	O
the	O
operation	O
of	O
the	O
Gibbs	B-Algorithm
sampler	I-Algorithm
.	O
</s>
<s>
variations	O
of	O
linear	B-General_Concept
regression	I-General_Concept
)	O
can	O
sometimes	O
be	O
handled	O
by	O
Gibbs	B-Algorithm
sampling	I-Algorithm
as	O
well	O
.	O
</s>
<s>
For	O
example	O
,	O
probit	B-General_Concept
regression	I-General_Concept
for	O
determining	O
the	O
probability	O
of	O
a	O
given	O
binary	O
(	O
yes/no	O
)	O
choice	O
,	O
with	O
normally	O
distributed	O
priors	O
placed	O
over	O
the	O
regression	B-General_Concept
coefficients	I-General_Concept
,	O
can	O
be	O
implemented	O
with	O
Gibbs	B-Algorithm
sampling	I-Algorithm
because	O
it	O
is	O
possible	O
to	O
add	O
additional	O
variables	O
and	O
take	O
advantage	O
of	O
conjugacy	O
.	O
</s>
<s>
More	O
commonly	O
,	O
however	O
,	O
Metropolis	B-Algorithm
–	I-Algorithm
Hastings	I-Algorithm
is	O
used	O
instead	O
of	O
Gibbs	B-Algorithm
sampling	I-Algorithm
.	O
</s>
<s>
The	O
following	O
algorithm	O
details	O
a	O
generic	O
Gibbs	B-Algorithm
sampler	I-Algorithm
:	O
</s>
<s>
Note	O
that	O
the	O
Gibbs	B-Algorithm
sampler	I-Algorithm
operates	O
via	O
an	O
iterative	O
Monte	O
Carlo	O
scheme	O
within	O
a	O
cycle	O
.	O
</s>
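The generic per-variable cycle described above can be sketched for a toy target. This is a minimal sketch, assuming a standard bivariate normal with correlation rho, whose full conditionals are univariate normals; the function name `gibbs_bivariate_normal` and all parameter defaults are illustrative, not from the source.

```python
import math
import random

def gibbs_bivariate_normal(rho, n_iter=5000, burn_in=500, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.

    Each full conditional is a univariate normal:
        x | y ~ N(rho * y, 1 - rho**2)
        y | x ~ N(rho * x, 1 - rho**2)
    """
    rng = random.Random(seed)
    sd = math.sqrt(1.0 - rho * rho)
    x, y = 0.0, 0.0              # arbitrary initial values
    samples = []
    for t in range(n_iter):
        x = rng.gauss(rho * y, sd)   # sample x conditional on current y
        y = rng.gauss(rho * x, sd)   # sample y conditional on current x
        if t >= burn_in:             # discard early "random walk" phase
            samples.append((x, y))
    return samples

samples = gibbs_bivariate_normal(rho=0.8)
mean_x = sum(s[0] for s in samples) / len(samples)
```

As the text notes, successive draws are correlated with nearby samples, so the effective sample size is smaller than `len(samples)`.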
<s>
The	O
mutual	O
information	O
can	O
be	O
interpreted	O
as	O
the	O
quantity	O
that	O
is	O
transmitted	O
from	O
the	O
-th	O
step	O
to	O
the	O
-th	O
step	O
within	O
a	O
single	O
cycle	O
of	O
the	O
Gibbs	B-Algorithm
sampler	I-Algorithm
.	O
</s>
<s>
Numerous	O
variations	O
of	O
the	O
basic	O
Gibbs	B-Algorithm
sampler	I-Algorithm
exist	O
.	O
</s>
<s>
A	O
blocked	O
Gibbs	B-Algorithm
sampler	I-Algorithm
groups	O
two	O
or	O
more	O
variables	O
together	O
and	O
samples	O
from	O
their	O
joint	O
distribution	O
conditioned	O
on	O
all	O
other	O
variables	O
,	O
rather	O
than	O
sampling	O
from	O
each	O
one	O
individually	O
.	O
</s>
<s>
For	O
example	O
,	O
in	O
a	O
hidden	O
Markov	O
model	O
,	O
a	O
blocked	O
Gibbs	B-Algorithm
sampler	I-Algorithm
might	O
sample	O
from	O
all	O
the	O
latent	O
variables	O
making	O
up	O
the	O
Markov	O
chain	O
in	O
one	O
go	O
,	O
using	O
the	O
forward-backward	B-Algorithm
algorithm	I-Algorithm
.	O
</s>
<s>
A	O
collapsed	O
Gibbs	B-Algorithm
sampler	I-Algorithm
integrates	O
out	O
(	O
marginalizes	O
over	O
)	O
one	O
or	O
more	O
variables	O
when	O
sampling	O
for	O
some	O
other	O
variable	O
.	O
</s>
<s>
For	O
example	O
,	O
imagine	O
that	O
a	O
model	O
consists	O
of	O
three	O
variables	O
A	O
,	O
B	O
,	O
and	O
C	O
.	O
A	O
simple	O
Gibbs	B-Algorithm
sampler	I-Algorithm
would	O
sample	O
from	O
p( A|B	O
,	O
C	O
)	O
,	O
then	O
p( B|A	O
,	O
C	O
)	O
,	O
then	O
p( C|A	O
,	O
B	O
)	O
.	O
</s>
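The three-variable cycle just described, sampling p(A|B,C), then p(B|A,C), then p(C|A,B), can be sketched for a toy discrete joint. The Ising-like `weight` function over three binary variables is a hypothetical example chosen so the full conditionals are easy to compute exactly; none of these names come from the source.

```python
import math
import random

def weight(a, b, c):
    # hypothetical unnormalized joint: favors configurations whose
    # neighboring variables agree (a toy Ising-like model)
    return math.exp(1.0 * ((a == b) + (b == c)))

def conditional_draw(rng, idx, state):
    # sample state[idx] from its full conditional given the other two,
    # by evaluating the unnormalized joint at both candidate values
    w = []
    for v in (0, 1):
        s = list(state)
        s[idx] = v
        w.append(weight(*s))
    state[idx] = 0 if rng.random() < w[0] / (w[0] + w[1]) else 1

rng = random.Random(1)
state = [0, 0, 0]
counts = {}
for t in range(20000):
    for idx in range(3):          # the cycle: A, then B, then C
        conditional_draw(rng, idx, state)
    key = tuple(state)
    counts[key] = counts.get(key, 0) + 1
```

After many sweeps the visit frequencies approximate the joint, so the all-agree configurations (0,0,0) and (1,1,1) appear far more often than disagreeing ones.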
<s>
A	O
collapsed	O
Gibbs	B-Algorithm
sampler	I-Algorithm
might	O
replace	O
the	O
sampling	O
step	O
for	O
A	O
with	O
a	O
sample	O
taken	O
from	O
the	O
marginal	O
distribution	O
p( A|C	O
)	O
,	O
with	O
variable	O
B	O
integrated	O
out	O
in	O
this	O
case	O
.	O
</s>
<s>
In	O
hierarchical	O
Bayesian	O
models	O
with	O
categorical	O
variables	O
,	O
such	O
as	O
latent	O
Dirichlet	O
allocation	O
and	O
various	O
other	O
models	O
used	O
in	O
natural	B-Language
language	I-Language
processing	I-Language
,	O
it	O
is	O
quite	O
common	O
to	O
collapse	O
out	O
the	O
Dirichlet	O
distributions	O
that	O
are	O
typically	O
used	O
as	O
prior	O
distributions	O
over	O
the	O
categorical	O
variables	O
.	O
</s>
<s>
The	O
conditional	O
distribution	O
of	O
a	O
given	O
categorical	O
variable	O
in	O
this	O
distribution	O
,	O
conditioned	O
on	O
the	O
others	O
,	O
assumes	O
an	O
extremely	O
simple	O
form	O
that	O
makes	O
Gibbs	B-Algorithm
sampling	I-Algorithm
even	O
easier	O
than	O
if	O
the	O
collapsing	O
had	O
not	O
been	O
done	O
.	O
</s>
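The simple collapsed conditional mentioned above can be sketched for a symmetric Dirichlet-categorical model: with the Dirichlet prior integrated out, the conditional for one variable given the others is proportional to (count of that value among the others) + alpha. The `resample` helper, the prior value `alpha`, and the problem sizes are all hypothetical illustration, not from the source.

```python
import random

def resample(rng, i, z, K, alpha):
    # collapsed conditional: p(z[i] = k | z[-i]) is proportional to
    # (number of other variables equal to k) + alpha
    counts = [alpha] * K
    for j, zj in enumerate(z):
        if j != i:                # exclude the variable being resampled
            counts[zj] += 1
    total = sum(counts)
    r = rng.random() * total
    acc = 0.0
    for k in range(K):
        acc += counts[k]
        if r < acc:
            return k
    return K - 1

rng = random.Random(2)
K, alpha = 3, 0.5
z = [rng.randrange(K) for _ in range(30)]
for sweep in range(200):
    for i in range(len(z)):
        z[i] = resample(rng, i, z, K, alpha)
```

Note the counts exclude the variable being resampled, which is the "summed across all the other dependent nodes" bookkeeping the text warns about for partial-count methods.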
<s>
The	O
same	O
rule	O
applies	O
in	O
other	O
iterative	O
inference	O
methods	O
,	O
such	O
as	O
variational	O
Bayes	O
or	O
expectation	B-Algorithm
maximization	I-Algorithm
;	O
however	O
,	O
if	O
the	O
method	O
involves	O
keeping	O
partial	O
counts	O
,	O
then	O
the	O
partial	O
counts	O
for	O
the	O
value	O
in	O
question	O
must	O
be	O
summed	O
across	O
all	O
the	O
other	O
dependent	O
nodes	O
.	O
</s>
<s>
A	O
Gibbs	B-Algorithm
sampler	I-Algorithm
with	O
ordered	O
overrelaxation	O
samples	O
a	O
given	O
odd	O
number	O
of	O
candidate	O
values	O
for	O
at	O
any	O
given	O
step	O
and	O
sorts	O
them	O
,	O
along	O
with	O
the	O
single	O
value	O
for	O
according	O
to	O
some	O
well-defined	O
ordering	O
.	O
</s>
<s>
It	O
is	O
also	O
possible	O
to	O
extend	O
Gibbs	B-Algorithm
sampling	I-Algorithm
in	O
various	O
ways	O
.	O
</s>
<s>
For	O
example	O
,	O
in	O
the	O
case	O
of	O
variables	O
whose	O
conditional	O
distribution	O
is	O
not	O
easy	O
to	O
sample	O
from	O
,	O
a	O
single	O
iteration	O
of	O
slice	B-Algorithm
sampling	I-Algorithm
or	O
the	O
Metropolis	B-Algorithm
–	I-Algorithm
Hastings	I-Algorithm
algorithm	I-Algorithm
can	O
be	O
used	O
to	O
sample	O
from	O
the	O
variables	O
in	O
question	O
.	O
</s>
<s>
There	O
are	O
two	O
ways	O
that	O
Gibbs	B-Algorithm
sampling	I-Algorithm
can	O
fail	O
.	O
</s>
<s>
Gibbs	B-Algorithm
sampling	I-Algorithm
will	O
become	O
trapped	O
in	O
one	O
of	O
the	O
two	O
high-probability	O
vectors	O
,	O
and	O
will	O
never	O
reach	O
the	O
other	O
one	O
.	O
</s>
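The trapping failure can be demonstrated with a minimal toy joint, assumed here for illustration: probability mass only on the vectors (0, 0) and (1, 1). The full conditionals are then deterministic, so a sampler started at one high-probability vector can never reach the other.

```python
# Toy joint with mass only on (0, 0) and (1, 1).  Given y, the
# conditional for x puts all its mass on x == y, and vice versa, so
# each Gibbs update just copies the other coordinate: the chain
# started at (0, 0) is trapped and never visits (1, 1).
x, y = 0, 0                      # initial state
visited = set()
for t in range(1000):
    x = y                        # sample x | y: all mass on x == y
    y = x                        # sample y | x: all mass on y == x
    visited.add((x, y))
```

This is the perfectly correlated case from the text; with correlation slightly below 1 the chain can move between the modes, but only after very long waits.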
<s>
More	O
generally	O
,	O
for	O
any	O
distribution	O
over	O
high-dimensional	O
,	O
real-valued	O
vectors	O
,	O
if	O
two	O
particular	O
elements	O
of	O
the	O
vector	O
are	O
perfectly	O
correlated	O
(	O
or	O
perfectly	O
anti-correlated	O
)	O
,	O
those	O
two	O
elements	O
will	O
become	O
stuck	O
,	O
and	O
Gibbs	B-Algorithm
sampling	I-Algorithm
will	O
never	O
be	O
able	O
to	O
change	O
them	O
.	O
</s>
<s>
But	O
you	O
would	O
probably	O
have	O
to	O
take	O
more	O
than	O
samples	O
from	O
Gibbs	B-Algorithm
sampling	I-Algorithm
to	O
get	O
the	O
same	O
result	O
.	O
</s>
<s>
But	O
Gibbs	B-Algorithm
sampling	I-Algorithm
will	O
alternate	O
between	O
returning	O
only	O
the	O
zero	O
vector	O
for	O
long	O
periods	O
(	O
about	O
in	O
a	O
row	O
)	O
,	O
then	O
only	O
nonzero	O
vectors	O
for	O
long	O
periods	O
(	O
about	O
in	O
a	O
row	O
)	O
.	O
</s>
<s>
The	O
slow	O
convergence	O
here	O
can	O
be	O
seen	O
as	O
a	O
consequence	O
of	O
the	O
curse	B-General_Concept
of	I-General_Concept
dimensionality	I-General_Concept
.	O
</s>
<s>
If	O
this	O
vector	O
is	O
the	O
only	O
thing	O
being	O
sampled	O
,	O
then	O
block	O
sampling	O
is	O
equivalent	O
to	O
not	O
doing	O
Gibbs	B-Algorithm
sampling	I-Algorithm
at	O
all	O
,	O
which	O
by	O
hypothesis	O
would	O
be	O
difficult	O
.	O
)	O
</s>
<s>
The	O
OpenBUGS	B-Application
software	O
(	O
Bayesian	O
inference	O
Using	O
Gibbs	B-Algorithm
Sampling	I-Algorithm
)	O
does	O
a	O
Bayesian	O
analysis	O
of	O
complex	O
statistical	O
models	O
using	O
Markov	B-General_Concept
chain	I-General_Concept
Monte	I-General_Concept
Carlo	I-General_Concept
.	O
</s>
<s>
JAGS	B-Application
(	O
Just	B-Application
another	I-Application
Gibbs	I-Application
sampler	I-Application
)	O
is	O
a	O
GPL	O
program	O
for	O
analysis	O
of	O
Bayesian	O
hierarchical	O
models	O
using	O
Markov	B-General_Concept
Chain	I-General_Concept
Monte	I-General_Concept
Carlo	I-General_Concept
.	O
</s>
<s>
Church	B-Language
is	O
free	O
software	O
for	O
performing	O
Gibbs	O
inference	O
over	O
arbitrary	O
distributions	O
that	O
are	O
specified	O
as	O
probabilistic	O
programs	O
.	O
</s>
<s>
PyMC	B-Application
is	O
an	O
open	O
source	O
Python	B-Language
library	O
for	O
Bayesian	O
learning	O
of	O
general	O
Probabilistic	O
Graphical	O
Models	O
.	O
</s>
<s>
is	O
an	O
open	O
source	O
Julia	B-Language
library	O
for	O
Bayesian	O
Inference	O
using	O
probabilistic	O
programming	O
.	O
</s>
