<s>
An	O
autoencoder	B-Algorithm
is	O
a	O
type	O
of	O
artificial	B-Architecture
neural	I-Architecture
network	I-Architecture
used	O
to	O
learn	O
efficient	B-General_Concept
codings	I-General_Concept
of	O
unlabeled	O
data	O
(	O
unsupervised	B-General_Concept
learning	I-General_Concept
)	O
.	O
</s>
<s>
An	O
autoencoder	B-Algorithm
learns	O
two	O
functions	O
:	O
an	O
encoding	O
function	O
that	O
transforms	O
the	O
input	O
data	O
,	O
and	O
a	O
decoding	O
function	O
that	O
recreates	O
the	O
input	O
data	O
from	O
the	O
encoded	O
representation	O
.	O
</s>
<s>
The	O
autoencoder	B-Algorithm
learns	O
an	O
efficient	B-General_Concept
representation	I-General_Concept
(	O
encoding	O
)	O
for	O
a	O
set	O
of	O
data	O
,	O
typically	O
for	O
dimensionality	B-Algorithm
reduction	I-Algorithm
.	O
</s>
<s>
Examples	O
are	O
regularized	O
autoencoders	B-Algorithm
(	O
Sparse	O
,	O
Denoising	O
and	O
Contractive	O
)	O
,	O
which	O
are	O
effective	O
in	O
learning	B-General_Concept
representations	I-General_Concept
for	O
subsequent	O
classification	B-General_Concept
tasks	O
,	O
and	O
Variational	B-Algorithm
autoencoders	I-Algorithm
,	O
with	O
applications	O
as	O
generative	O
models	O
.	O
</s>
<s>
Autoencoders	B-Algorithm
are	O
applied	O
to	O
many	O
problems	O
,	O
including	O
facial	O
recognition	O
,	O
feature	O
detection	O
,	O
anomaly	B-Algorithm
detection	I-Algorithm
and	O
acquiring	O
the	O
meaning	O
of	O
words	O
.	O
</s>
<s>
Autoencoders	B-Algorithm
are	O
also	O
generative	O
models	O
which	O
can	O
randomly	O
generate	O
new	O
data	O
that	O
is	O
similar	O
to	O
the	O
input	O
data	O
(	O
training	O
data	O
)	O
.	O
</s>
<s>
An	O
autoencoder	B-Algorithm
is	O
defined	O
by	O
the	O
following	O
components	O
:	O
Two	O
sets	O
:	O
the	O
space	O
of	O
decoded	O
messages	O
;	O
the	O
space	O
of	O
encoded	O
messages	O
.	O
</s>
<s>
Usually	O
,	O
both	O
the	O
encoder	O
and	O
the	O
decoder	O
are	O
defined	O
as	O
multilayer	B-Algorithm
perceptrons	I-Algorithm
.	O
</s>
<s>
where	O
is	O
an	O
element-wise	O
activation	B-Algorithm
function	I-Algorithm
such	O
as	O
a	O
sigmoid	B-Algorithm
function	I-Algorithm
or	O
a	O
rectified	B-Algorithm
linear	I-Algorithm
unit	I-Algorithm
,	O
is	O
a	O
matrix	O
called	O
"	O
weight	O
"	O
,	O
and	O
is	O
a	O
vector	O
called	O
"	O
bias	O
"	O
.	O
</s>
<s>
An	O
autoencoder	B-Algorithm
,	O
by	O
itself	O
,	O
is	O
simply	O
a	O
tuple	O
of	O
two	O
functions	O
.	O
</s>
<s>
With	O
those	O
,	O
we	O
can	O
define	O
the	O
loss	O
function	O
for	O
the	O
autoencoder	B-Algorithm
as	O
.	O
</s>
<s>
The	O
optimal	O
autoencoder	B-Algorithm
for	O
the	O
given	O
task	O
is	O
then	O
.	O
</s>
<s>
The	O
search	O
for	O
the	O
optimal	O
autoencoder	B-Algorithm
can	O
be	O
accomplished	O
by	O
any	O
mathematical	O
optimization	O
technique	O
,	O
but	O
usually	O
by	O
gradient	B-Algorithm
descent	I-Algorithm
.	O
</s>
<s>
This	O
search	O
process	O
is	O
referred	O
to	O
as	O
"	O
training	O
the	O
autoencoder	B-Algorithm
"	O
.	O
</s>
<s>
Then	O
the	O
problem	O
of	O
searching	O
for	O
the	O
optimal	O
autoencoder	B-Algorithm
is	O
just	O
a	O
least-squares	B-Algorithm
optimization	O
:	O
</s>
<s>
An	O
autoencoder	B-Algorithm
has	O
two	O
main	O
parts	O
:	O
an	O
encoder	O
that	O
maps	O
the	O
message	O
to	O
a	O
code	O
,	O
and	O
a	O
decoder	O
that	O
reconstructs	O
the	O
message	O
from	O
the	O
code	O
.	O
</s>
<s>
An	O
optimal	O
autoencoder	B-Algorithm
would	O
perform	O
as	O
close	O
to	O
perfect	O
reconstruction	O
as	O
possible	O
,	O
with	O
"	O
close	O
to	O
perfect	O
"	O
defined	O
by	O
the	O
reconstruction	O
quality	O
function	O
.	O
</s>
<s>
Such	O
an	O
autoencoder	B-Algorithm
is	O
called	O
undercomplete	O
.	O
</s>
<s>
It	O
can	O
be	O
interpreted	O
as	O
compressing	B-General_Concept
the	O
message	O
,	O
or	O
reducing	B-Algorithm
its	I-Algorithm
dimensionality	I-Algorithm
.	O
</s>
<s>
At	O
the	O
limit	O
of	O
an	O
ideal	O
undercomplete	O
autoencoder	B-Algorithm
,	O
every	O
possible	O
code	O
in	O
the	O
code	O
space	O
is	O
used	O
to	O
encode	O
a	O
message	O
that	O
really	O
appears	O
in	O
the	O
distribution	O
,	O
and	O
the	O
decoder	O
is	O
also	O
perfect	O
:	O
.	O
</s>
<s>
This	O
ideal	O
autoencoder	B-Algorithm
can	O
then	O
be	O
used	O
to	O
generate	O
messages	O
indistinguishable	O
from	O
real	O
messages	O
,	O
by	O
feeding	O
its	O
decoder	O
arbitrary	O
code	O
and	O
obtaining	O
,	O
which	O
is	O
a	O
message	O
that	O
really	O
appears	O
in	O
the	O
distribution	O
.	O
</s>
<s>
If	O
the	O
code	O
space	O
has	O
dimension	O
larger	O
than	O
(	O
overcomplete	O
)	O
,	O
or	O
equal	O
to	O
,	O
the	O
message	O
space	O
,	O
or	O
the	O
hidden	O
units	O
are	O
given	O
enough	O
capacity	O
,	O
an	O
autoencoder	B-Algorithm
can	O
learn	O
the	O
identity	O
function	O
and	O
become	O
useless	O
.	O
</s>
<s>
However	O
,	O
experimental	O
results	O
found	O
that	O
overcomplete	O
autoencoders	B-Algorithm
might	O
still	O
learn	B-General_Concept
useful	I-General_Concept
features	I-General_Concept
.	O
</s>
<s>
A	O
standard	O
way	O
to	O
do	O
so	O
is	O
to	O
add	O
modifications	O
to	O
the	O
basic	O
autoencoder	B-Algorithm
,	O
to	O
be	O
detailed	O
below	O
.	O
</s>
<s>
The	O
autoencoder	B-Algorithm
was	O
first	O
proposed	O
as	O
a	O
nonlinear	O
generalization	O
of	O
principal	B-Application
components	I-Application
analysis	I-Application
(	O
PCA	O
)	O
by	O
Kramer	O
.	O
</s>
<s>
The	O
autoencoder	B-Algorithm
has	O
also	O
been	O
called	O
the	O
autoassociator	B-Algorithm
,	O
or	O
Diabolo	B-Algorithm
network	I-Algorithm
.	O
</s>
<s>
Their	O
most	O
traditional	O
application	O
was	O
dimensionality	B-Algorithm
reduction	I-Algorithm
or	O
feature	B-General_Concept
learning	I-General_Concept
,	O
but	O
the	O
concept	O
became	O
widely	O
used	O
for	O
learning	O
generative	O
models	O
of	O
data	O
.	O
</s>
<s>
Some	O
of	O
the	O
most	O
powerful	O
AIs	B-Application
in	O
the	O
2010s	O
involved	O
autoencoders	B-Algorithm
stacked	O
inside	O
deep	B-Architecture
neural	I-Architecture
networks	I-Architecture
.	O
</s>
<s>
Various	O
techniques	O
exist	O
to	O
prevent	O
autoencoders	B-Algorithm
from	O
learning	O
the	O
identity	O
function	O
and	O
to	O
improve	O
their	O
ability	O
to	O
capture	O
important	O
information	O
and	O
learn	O
richer	O
representations	O
.	O
</s>
<s>
Inspired	O
by	O
the	O
sparse	O
coding	O
hypothesis	O
in	O
neuroscience	O
,	O
sparse	O
autoencoders	B-Algorithm
are	O
variants	O
of	O
autoencoders	B-Algorithm
,	O
such	O
that	O
the	O
codes	O
for	O
messages	O
tend	O
to	O
be	O
sparse	O
codes	O
,	O
that	O
is	O
,	O
is	O
close	O
to	O
zero	O
in	O
most	O
entries	O
.	O
</s>
<s>
Sparse	O
autoencoders	B-Algorithm
may	O
include	O
more	O
(	O
rather	O
than	O
fewer	O
)	O
hidden	O
units	O
than	O
inputs	O
,	O
but	O
only	O
a	O
small	O
number	O
of	O
the	O
hidden	O
units	O
are	O
allowed	O
to	O
be	O
active	O
at	O
the	O
same	O
time	O
.	O
</s>
<s>
Encouraging	O
sparsity	O
improves	O
performance	O
on	O
classification	B-General_Concept
tasks	O
.	O
</s>
<s>
This	O
is	O
the	O
k-sparse	O
autoencoder	B-Algorithm
.	O
</s>
<s>
The	O
k-sparse	O
autoencoder	B-Algorithm
inserts	O
the	O
following	O
"	O
k-sparse	O
function	O
"	O
in	O
the	O
latent	O
layer	O
of	O
a	O
standard	O
autoencoder	O
:	O
where	O
if	O
ranks	O
in	O
the	O
top	O
k	O
,	O
and	O
0	O
otherwise	O
.	O
</s>
<s>
This	O
is	O
essentially	O
a	O
generalized	O
ReLU	B-Algorithm
function	O
.	O
</s>
<s>
The	O
other	O
way	O
is	O
a	O
relaxed	O
version	O
of	O
the	O
k-sparse	O
autoencoder	B-Algorithm
.	O
</s>
<s>
Let	O
the	O
autoencoder	B-Algorithm
architecture	O
have	O
layers	O
.	O
</s>
<s>
In	O
this	O
case	O
,	O
one	O
can	O
define	O
the	O
sparsity	O
regularization	O
loss	O
as	O
where	O
is	O
the	O
activation	O
vector	O
in	O
the	O
-th	O
layer	O
of	O
the	O
autoencoder	B-Algorithm
.	O
</s>
<s>
The	O
norm	O
is	O
usually	O
the	O
L1	O
norm	O
(	O
giving	O
the	O
L1	O
sparse	O
autoencoder	B-Algorithm
)	O
or	O
the	O
L2	O
norm	O
(	O
giving	O
the	O
L2	O
sparse	O
autoencoder	B-Algorithm
)	O
.	O
</s>
<s>
Denoising	O
autoencoders	B-Algorithm
(	O
DAE	O
)	O
try	O
to	O
achieve	O
a	O
good	O
representation	O
by	O
changing	O
the	O
reconstruction	O
criterion	O
.	O
</s>
<s>
A	O
DAE	O
is	O
defined	O
by	O
adding	O
a	O
noise	O
process	O
to	O
the	O
standard	O
autoencoder	B-Algorithm
.	O
</s>
<s>
A	O
contractive	O
autoencoder	B-Algorithm
adds	O
the	O
contractive	O
regularization	O
loss	O
to	O
the	O
standard	O
autoencoder	B-Algorithm
loss	O
:	O
where	O
measures	O
how	O
much	O
contractive-ness	O
we	O
want	O
to	O
enforce	O
.	O
</s>
<s>
The	O
concrete	O
autoencoder	B-Algorithm
is	O
designed	O
for	O
discrete	O
feature	O
selection	O
.	O
</s>
<s>
A	O
concrete	O
autoencoder	B-Algorithm
forces	O
the	O
latent	O
space	O
to	O
consist	O
only	O
of	O
a	O
user-specified	O
number	O
of	O
features	O
.	O
</s>
<s>
The	O
concrete	O
autoencoder	B-Algorithm
uses	O
a	O
continuous	O
relaxation	O
of	O
the	O
categorical	O
distribution	O
to	O
allow	O
gradients	O
to	O
pass	O
through	O
the	O
feature	O
selector	O
layer	O
,	O
which	O
makes	O
it	O
possible	O
to	O
use	O
standard	O
backpropagation	B-Algorithm
to	O
learn	O
an	O
optimal	O
subset	O
of	O
input	O
features	O
that	O
minimize	O
reconstruction	O
loss	O
.	O
</s>
<s>
Variational	B-Algorithm
autoencoders	I-Algorithm
(	O
VAEs	O
)	O
belong	O
to	O
the	O
families	O
of	O
variational	O
Bayesian	O
methods	O
.	O
</s>
<s>
Despite	O
the	O
architectural	O
similarities	O
with	O
basic	O
autoencoders	B-Algorithm
,	O
VAEs	O
are	O
architectures	O
with	O
different	O
goals	O
and	O
with	O
a	O
completely	O
different	O
mathematical	O
formulation	O
.	O
</s>
<s>
Autoencoders	B-Algorithm
are	O
often	O
trained	O
with	O
a	O
single	O
layer	O
encoder	O
and	O
a	O
single	O
layer	O
decoder	O
,	O
but	O
using	O
many-layered	O
(	O
deep	B-Algorithm
)	O
encoders	O
and	O
decoders	O
offers	O
many	O
advantages	O
.	O
</s>
<s>
Experimentally	O
,	O
deep	B-Algorithm
autoencoders	I-Algorithm
yield	O
better	O
compression	O
compared	O
to	O
shallow	O
or	O
linear	O
autoencoders	B-Algorithm
.	O
</s>
<s>
Geoffrey	O
Hinton	O
developed	O
the	O
deep	B-Algorithm
belief	I-Algorithm
network	I-Algorithm
technique	O
for	O
training	O
many-layered	O
deep	B-Algorithm
autoencoders	I-Algorithm
.	O
</s>
<s>
His	O
method	O
involves	O
treating	O
each	O
neighbouring	O
set	O
of	O
two	O
layers	O
as	O
a	O
restricted	B-Algorithm
Boltzmann	I-Algorithm
machine	I-Algorithm
so	O
that	O
pretraining	O
approximates	O
a	O
good	O
solution	O
,	O
then	O
using	O
backpropagation	B-Algorithm
to	O
fine-tune	O
the	O
results	O
.	O
</s>
<s>
Joint	O
training	O
(	O
training	O
the	O
whole	O
architecture	O
together	O
with	O
a	O
single	O
global	O
reconstruction	O
objective	O
to	O
optimize	O
)	O
would	O
be	O
better	O
for	O
deep	B-Algorithm
auto-encoders	I-Algorithm
.	O
</s>
<s>
A	O
2015	O
study	O
showed	O
that	O
joint	O
training	O
learns	O
better	O
data	O
models	O
along	O
with	O
more	O
representative	O
features	O
for	O
classification	B-General_Concept
as	O
compared	O
to	O
the	O
layerwise	O
method	O
.	O
</s>
<s>
The	O
two	O
main	O
applications	O
of	O
autoencoders	B-Algorithm
are	O
dimensionality	B-Algorithm
reduction	I-Algorithm
and	O
information	B-Library
retrieval	I-Library
,	O
but	O
modern	O
variations	O
have	O
been	O
applied	O
to	O
other	O
tasks	O
.	O
</s>
<s>
Dimensionality	B-Algorithm
reduction	I-Algorithm
was	O
one	O
of	O
the	O
first	O
deep	B-Algorithm
learning	I-Algorithm
applications	O
.	O
</s>
<s>
For	O
Hinton	O
's	O
2006	O
study	O
,	O
he	O
pretrained	O
a	O
multi-layer	O
autoencoder	B-Algorithm
with	O
a	O
stack	O
of	O
RBMs	B-Algorithm
and	O
then	O
used	O
their	O
weights	O
to	O
initialize	O
a	O
deep	B-Algorithm
autoencoder	I-Algorithm
with	O
gradually	O
smaller	O
hidden	O
layers	O
until	O
hitting	O
a	O
bottleneck	O
of	O
30	O
neurons	O
.	O
</s>
<s>
The	O
resulting	O
30	O
dimensions	O
of	O
the	O
code	O
yielded	O
a	O
smaller	O
reconstruction	O
error	O
compared	O
to	O
the	O
first	O
30	O
components	O
of	O
a	O
principal	B-Application
component	I-Application
analysis	I-Application
(	O
PCA	O
)	O
,	O
and	O
learned	O
a	O
representation	O
that	O
was	O
qualitatively	O
easier	O
to	O
interpret	O
,	O
clearly	O
separating	O
data	O
clusters	O
.	O
</s>
<s>
Representing	O
dimensions	O
can	O
improve	O
performance	O
on	O
tasks	O
such	O
as	O
classification	B-General_Concept
.	O
</s>
<s>
Indeed	O
,	O
the	O
hallmark	O
of	O
dimensionality	B-Algorithm
reduction	I-Algorithm
is	O
to	O
place	O
semantically	O
related	O
examples	O
near	O
each	O
other	O
.	O
</s>
<s>
If	O
linear	O
activations	O
are	O
used	O
,	O
or	O
only	O
a	O
single	O
sigmoid	O
hidden	O
layer	O
,	O
then	O
the	O
optimal	O
solution	O
to	O
an	O
autoencoder	B-Algorithm
is	O
strongly	O
related	O
to	O
principal	B-Application
component	I-Application
analysis	I-Application
(	O
PCA	O
)	O
.	O
</s>
<s>
The	O
weights	O
of	O
an	O
autoencoder	B-Algorithm
with	O
a	O
single	O
hidden	O
layer	O
of	O
size	O
(	O
where	O
is	O
less	O
than	O
the	O
size	O
of	O
the	O
input	O
)	O
span	O
the	O
same	O
vector	O
subspace	O
as	O
the	O
one	O
spanned	O
by	O
the	O
first	O
principal	B-Application
components	I-Application
,	O
and	O
the	O
output	O
of	O
the	O
autoencoder	B-Algorithm
is	O
an	O
orthogonal	O
projection	O
onto	O
this	O
subspace	O
.	O
</s>
<s>
The	O
autoencoder	B-Algorithm
weights	O
are	O
not	O
equal	O
to	O
the	O
principal	B-Application
components	I-Application
,	O
and	O
are	O
generally	O
not	O
orthogonal	O
,	O
yet	O
the	O
principal	B-Application
components	I-Application
may	O
be	O
recovered	O
from	O
them	O
using	O
the	O
singular	O
value	O
decomposition	O
.	O
</s>
<s>
However	O
,	O
the	O
potential	O
of	O
autoencoders	B-Algorithm
resides	O
in	O
their	O
non-linearity	O
,	O
allowing	O
the	O
model	O
to	O
learn	O
more	O
powerful	O
generalizations	O
compared	O
to	O
PCA	O
,	O
and	O
to	O
reconstruct	O
the	O
input	O
with	O
significantly	O
lower	O
information	O
loss	O
.	O
</s>
<s>
Information	B-Library
retrieval	I-Library
benefits	O
particularly	O
from	O
dimensionality	B-Algorithm
reduction	I-Algorithm
in	O
that	O
search	O
can	O
become	O
more	O
efficient	O
in	O
certain	O
kinds	O
of	O
low	O
dimensional	O
spaces	O
.	O
</s>
<s>
Autoencoders	B-Algorithm
were	O
indeed	O
applied	O
to	O
semantic	O
hashing	O
,	O
proposed	O
by	O
Salakhutdinov	O
and	O
Hinton	O
in	O
2007	O
.	O
</s>
<s>
By	O
training	O
the	O
algorithm	O
to	O
produce	O
a	O
low-dimensional	O
binary	O
code	O
,	O
all	O
database	O
entries	O
could	O
be	O
stored	O
in	O
a	O
hash	B-Algorithm
table	I-Algorithm
mapping	O
binary	O
code	O
vectors	O
to	O
entries	O
.	O
</s>
<s>
This	O
table	O
would	O
then	O
support	O
information	B-Library
retrieval	I-Library
by	O
returning	O
all	O
entries	O
with	O
the	O
same	O
binary	O
code	O
as	O
the	O
query	B-Library
,	O
or	O
slightly	O
less	O
similar	O
entries	O
by	O
flipping	O
some	O
bits	O
from	O
the	O
query	B-Library
encoding	O
.	O
</s>
<s>
Another	O
application	O
for	O
autoencoders	B-Algorithm
is	O
anomaly	B-Algorithm
detection	I-Algorithm
.	O
</s>
<s>
In	O
most	O
cases	O
,	O
only	O
data	O
with	O
normal	O
instances	O
are	O
used	O
to	O
train	O
the	O
autoencoder	B-Algorithm
;	O
in	O
others	O
,	O
the	O
frequency	O
of	O
anomalies	O
is	O
small	O
compared	O
to	O
the	O
observation	O
set	O
so	O
that	O
its	O
contribution	O
to	O
the	O
learned	O
representation	O
could	O
be	O
ignored	O
.	O
</s>
<s>
After	O
training	O
,	O
the	O
autoencoder	B-Algorithm
will	O
accurately	O
reconstruct	O
"	O
normal	O
"	O
data	O
,	O
while	O
failing	O
to	O
do	O
so	O
with	O
unfamiliar	O
anomalous	O
data	O
.	O
</s>
<s>
Recent	O
literature	O
has	O
however	O
shown	O
that	O
certain	O
autoencoding	B-Algorithm
models	O
can	O
,	O
counterintuitively	O
,	O
be	O
very	O
good	O
at	O
reconstructing	O
anomalous	O
examples	O
and	O
consequently	O
not	O
able	O
to	O
reliably	O
perform	O
anomaly	B-Algorithm
detection	I-Algorithm
.	O
</s>
<s>
The	O
characteristics	O
of	O
autoencoders	B-Algorithm
are	O
useful	O
in	O
image	O
processing	O
.	O
</s>
<s>
One	O
example	O
can	O
be	O
found	O
in	O
lossy	O
image	B-General_Concept
compression	I-General_Concept
,	O
where	O
autoencoders	B-Algorithm
outperformed	O
other	O
approaches	O
and	O
proved	O
competitive	O
against	O
JPEG	O
2000	O
.	O
</s>
<s>
Another	O
useful	O
application	O
of	O
autoencoders	B-Algorithm
in	O
image	O
preprocessing	O
is	O
image	O
denoising	O
.	O
</s>
<s>
Autoencoders	B-Algorithm
found	O
use	O
in	O
more	O
demanding	O
contexts	O
such	O
as	O
medical	B-Application
imaging	I-Application
where	O
they	O
have	O
been	O
used	O
for	O
image	O
denoising	O
as	O
well	O
as	O
super-resolution	B-Algorithm
.	O
</s>
<s>
In	O
image-assisted	O
diagnosis	O
,	O
experiments	O
have	O
applied	O
autoencoders	B-Algorithm
for	O
breast	O
cancer	O
detection	O
and	O
for	O
modelling	O
the	O
relation	O
between	O
the	O
cognitive	O
decline	O
of	O
Alzheimer	O
's	O
disease	O
and	O
the	O
latent	O
features	O
of	O
an	O
autoencoder	B-Algorithm
trained	O
with	O
MRI	B-Algorithm
.	O
</s>
<s>
In	O
2019	O
molecules	O
generated	O
with	O
variational	B-Algorithm
autoencoders	I-Algorithm
were	O
validated	O
experimentally	O
in	O
mice	O
.	O
</s>
<s>
Recently	O
,	O
a	O
stacked	O
autoencoder	B-Algorithm
framework	O
produced	O
promising	O
results	O
in	O
predicting	O
popularity	O
of	O
social	O
media	O
posts	O
,	O
which	O
is	O
helpful	O
for	O
online	O
advertising	O
strategies	O
.	O
</s>
<s>
Autoencoders	B-Algorithm
have	O
been	O
applied	O
to	O
machine	B-Application
translation	I-Application
,	O
which	O
is	O
usually	O
referred	O
to	O
as	O
neural	B-General_Concept
machine	I-General_Concept
translation	I-General_Concept
(	O
NMT	O
)	O
.	O
</s>
<s>
Unlike	O
traditional	O
autoencoders	B-Algorithm
,	O
the	O
output	O
does	O
not	O
match	O
the	O
input	O
-	O
it	O
is	O
in	O
another	O
language	O
.	O
</s>
<s>
Language-specific	O
autoencoders	B-Algorithm
incorporate	O
further	O
linguistic	O
features	O
into	O
the	O
learning	O
procedure	O
,	O
such	O
as	O
Chinese	O
decomposition	O
features	O
.	O
</s>
<s>
Machine	B-Application
translation	I-Application
is	O
rarely	O
still	O
done	O
with	O
autoencoders	B-Algorithm
,	O
but	O
rather	O
transformer	B-Algorithm
networks	I-Algorithm
.	O
</s>
