In statistics, a maximum-entropy Markov model (MEMM), or conditional Markov model (CMM), is a graphical model for sequence labeling that combines features of hidden Markov models (HMMs) and maximum entropy (MaxEnt) models. An MEMM is a discriminative model that extends a standard maximum entropy classifier by assuming that the unknown values to be learnt are connected in a Markov chain rather than being conditionally independent of each other. MEMMs find applications in natural language processing, specifically in part-of-speech tagging and information extraction.
In an MEMM, the probability of a label sequence given an observation sequence is factored into Markov transition probabilities, where the probability of transitioning to a particular label depends only on the observation at that position and the previous position's label:

P(s_1, ..., s_n | o_1, ..., o_n) = ∏_{t=1}^{n} P(s_t | s_{t-1}, o_t),

where each P(s_t | s_{t-1}, o_t) is a maximum-entropy distribution over the possible labels.
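The factorization just described can be sketched in code. The following is a minimal illustration, not the original implementation: the label set, feature names, weights, and the `<s>` start symbol are all invented for the example.

```python
import math

LABELS = ["N", "V"]  # hypothetical tag set

def features(obs, label):
    """Binary feature functions f_a(o, s); the names are illustrative."""
    return {
        f"word={obs}&tag={label}": 1.0,
        f"tag={label}": 1.0,
    }

# One weight vector per previous label: each transition distribution
# P(. | s', o) is its own maximum-entropy model.
WEIGHTS = {
    "<s>": {"word=dog&tag=N": 2.0, "word=runs&tag=V": 2.0},
    "N":   {"word=runs&tag=V": 3.0, "tag=N": 0.5},
    "V":   {"word=dog&tag=N": 3.0},
}

def transition_prob(prev, obs, label):
    """P(label | prev, obs) = exp(sum_a w_a f_a(obs, label)) / Z(obs, prev)."""
    def score(lab):
        w = WEIGHTS[prev]
        return math.exp(sum(w.get(name, 0.0) * v
                            for name, v in features(obs, lab).items()))
    z = sum(score(lab) for lab in LABELS)  # local normalizer Z(o, s')
    return score(label) / z

def sequence_prob(observations, labels):
    """P(s_1..s_n | o_1..o_n) = product over t of P(s_t | s_{t-1}, o_t)."""
    p, prev = 1.0, "<s>"  # distinguished start state
    for obs, lab in zip(observations, labels):
        p *= transition_prob(prev, obs, lab)
        prev = lab
    return p

print(sequence_prob(["dog", "runs"], ["N", "V"]))
```

Because each local distribution is normalized, the probabilities of all possible label sequences for a fixed observation sequence sum to one.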
Furthermore, a variant of the Baum–Welch algorithm, which is used for training HMMs, can be used to estimate parameters when training data has incomplete or missing labels. The optimal state sequence can be found using a Viterbi algorithm very similar to the one used for HMMs.
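Decoding uses the same dynamic program as HMM Viterbi, with the emission-times-transition score replaced by the locally normalized transition probability. A minimal sketch, with a hand-made (hypothetical) transition table standing in for trained MaxEnt models:

```python
import math

LABELS = ["N", "V"]

def trans(prev, obs):
    """Hypothetical locally normalized P(label | prev, obs)."""
    if obs == "runs":
        return {"N": 0.2, "V": 0.8}
    if prev == "V":
        return {"N": 0.9, "V": 0.1}
    return {"N": 0.7, "V": 0.3}

def viterbi(observations):
    """Most probable label sequence under the MEMM factorization."""
    # best[lab] = (log-probability of the best path ending in lab, that path)
    best = {lab: (math.log(trans("<s>", observations[0])[lab]), [lab])
            for lab in LABELS}
    for obs in observations[1:]:
        new = {}
        for lab in LABELS:
            # Pick the predecessor maximizing path score plus log P(lab | prev, obs).
            prev = max(LABELS,
                       key=lambda p: best[p][0] + math.log(trans(p, obs)[lab]))
            score, path = best[prev]
            new[lab] = (score + math.log(trans(prev, obs)[lab]), path + [lab])
        best = new
    return max(best.values(), key=lambda sp: sp[0])[1]

print(viterbi(["dog", "runs", "dog"]))  # -> ['N', 'V', 'N']
```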
An advantage of MEMMs over HMMs for sequence tagging is that they offer increased freedom in choosing features to represent observations. In the original paper introducing MEMMs, the authors write that "when trying to extract previously unseen company names from a newswire article, the identity of a word alone is not very predictive; however, knowing that the word is capitalized, that it is a noun, that it is used in an appositive, and that it appears near the top of the article would all be quite predictive (in conjunction with the context provided by the state-transition structure)." Therefore, MEMMs allow the user to specify many correlated but informative features.
Another advantage of MEMMs versus HMMs and conditional random fields (CRFs) is that training can be considerably more efficient. In HMMs and CRFs, one needs to use some version of the forward–backward algorithm as an inner loop in training. However, in MEMMs, estimating the parameters of the maximum-entropy distributions used for the transition probabilities can be done for each transition distribution in isolation.
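That per-transition independence can be seen in a small training sketch. Everything here is invented for illustration (corpus, feature set), and plain gradient ascent is used in place of the generalized iterative scaling procedure of the original paper: the point is only that transitions are grouped by previous label and each group's classifier is fit in isolation, with no forward–backward inner loop.

```python
import math
from collections import defaultdict

LABELS = ["N", "V"]

def features(obs, label):
    """Toy feature set: one word-tag conjunction per label."""
    return {f"word={obs}&tag={label}": 1.0}

def fit_maxent(examples, steps=200, lr=0.5):
    """Gradient ascent on the conditional log-likelihood of one
    transition distribution (a stand-in for iterative scaling)."""
    w = defaultdict(float)
    for _ in range(steps):
        grad = defaultdict(float)
        for obs, gold in examples:
            scores = {lab: math.exp(sum(w[k] * v
                                        for k, v in features(obs, lab).items()))
                      for lab in LABELS}
            z = sum(scores.values())
            for lab in LABELS:
                for k, v in features(obs, lab).items():
                    # empirical count minus expected count of each feature
                    grad[k] += ((lab == gold) - scores[lab] / z) * v
        for k, g in grad.items():
            w[k] += lr * g
    return w

# Invented tagged corpus: one sentence, tags aligned with words.
corpus = [(["the", "dog", "runs"], ["N", "N", "V"])]

# Split the observed transitions by previous label...
by_prev = defaultdict(list)
for words, tags in corpus:
    prev = "<s>"
    for obs, tag in zip(words, tags):
        by_prev[prev].append((obs, tag))
        prev = tag

# ...and train each transition distribution independently.
models = {prev: fit_maxent(exs) for prev, exs in by_prev.items()}

def predict(prev, obs):
    w = models[prev]
    scores = {lab: sum(w[k] * v for k, v in features(obs, lab).items())
              for lab in LABELS}
    return max(scores, key=scores.get)
```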
A drawback of MEMMs is that they potentially suffer from the "label bias problem," where states with low-entropy transition distributions "effectively ignore their observations."
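A toy numeric illustration of the label bias problem (all numbers invented): with per-state local normalization, a state with a single allowed successor assigns it probability 1 no matter what the observation says, while a state with two successors still responds to the same evidence.

```python
import math

def local_softmax(raw_scores):
    """Per-state normalization over the allowed successor labels."""
    z = sum(math.exp(s) for s in raw_scores.values())
    return {lab: math.exp(s) / z for lab, s in raw_scores.items()}

# State A allows only successor X: the observation's score cannot matter.
for evidence in (-5.0, 0.0, 5.0):
    assert local_softmax({"X": evidence})["X"] == 1.0  # observation ignored

# State B allows X and Y: the same evidence now shifts the distribution.
print(local_softmax({"X": -5.0, "Y": 0.0})["X"])  # ~0.0067
print(local_softmax({"X": 5.0, "Y": 0.0})["X"])   # ~0.9933
```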
Conditional random fields were designed to overcome this weakness.
