<s>
Paraphrase	B-General_Concept
or	O
paraphrasing	B-General_Concept
in	O
computational	O
linguistics	O
is	O
the	B-Language
natural	I-Language
language	I-Language
processing	I-Language
task	O
of	O
detecting	O
and	O
generating	O
paraphrases	B-General_Concept
.	O
</s>
<s>
Applications	O
of	O
paraphrasing	B-General_Concept
are	O
varied	O
including	O
information	O
retrieval	O
,	O
question	B-Algorithm
answering	I-Algorithm
,	O
text	B-Application
summarization	I-Application
,	O
and	O
plagiarism	O
detection	O
.	O
</s>
<s>
Paraphrasing	B-General_Concept
is	O
also	O
useful	O
in	O
the	O
evaluation	B-General_Concept
of	I-General_Concept
machine	I-General_Concept
translation	I-General_Concept
,	O
as	O
well	O
as	O
semantic	B-Application
parsing	I-Application
and	O
generation	B-General_Concept
of	O
new	O
samples	O
to	O
expand	O
existing	O
corpora	O
.	O
</s>
<s>
Barzilay	O
and	O
Lee	O
proposed	O
a	O
method	O
to	O
generate	O
paraphrases	B-General_Concept
through	O
the	O
usage	O
of	O
monolingual	O
parallel	O
corpora	O
,	O
namely	O
news	O
articles	O
covering	O
the	O
same	O
event	O
on	O
the	O
same	O
day	O
.	O
</s>
<s>
Training	O
consists	O
of	O
using	O
multi-sequence	O
alignment	O
to	O
generate	O
sentence-level	O
paraphrases	B-General_Concept
from	O
an	O
unannotated	O
corpus	O
.	O
</s>
<s>
This	O
is	O
achieved	O
by	O
first	O
clustering	O
similar	O
sentences	O
together	O
using	O
n-gram	B-Language
overlap	O
.	O
</s>
<s>
Finally	O
,	O
new	O
paraphrases	B-General_Concept
can	O
be	O
generated	O
by	O
choosing	O
a	O
matching	O
cluster	O
for	O
a	O
source	O
sentence	O
,	O
then	O
substituting	O
the	O
source	O
sentence	O
's	O
argument	O
into	O
any	O
number	O
of	O
patterns	O
in	O
the	O
cluster	O
.	O
</s>
<s>
Paraphrase	B-General_Concept
can	O
also	O
be	O
generated	O
through	O
the	O
use	O
of	O
phrase-based	O
translation	O
as	O
proposed	O
by	O
Bannard	O
and	O
Callison-Burch	O
.	O
</s>
<s>
The	O
chief	O
concept	O
consists	O
of	O
aligning	O
phrases	O
in	O
a	O
pivot	O
language	O
to	O
produce	O
potential	O
paraphrases	B-General_Concept
in	O
the	O
original	O
language	O
.	O
</s>
<s>
The	O
phrase	O
"	O
unter	O
kontrolle	O
"	O
is	O
then	O
found	O
in	O
another	O
German	O
sentence	O
with	O
the	O
aligned	O
English	O
phrase	O
being	O
"	O
in	O
check	O
,	O
"	O
a	O
paraphrase	B-General_Concept
of	O
"	O
under	O
control.	O
"	O
</s>
<s>
The	O
probability	O
distribution	O
can	O
be	O
modeled	O
as	O
,	O
the	O
probability	O
phrase	O
is	O
a	O
paraphrase	B-General_Concept
of	O
,	O
which	O
is	O
equivalent	O
to	O
summed	O
over	O
all	O
,	O
a	O
potential	O
phrase	O
translation	O
in	O
the	O
pivot	O
language	O
.	O
</s>
<s>
Additionally	O
,	O
the	O
sentence	O
is	O
added	O
as	O
a	O
prior	O
to	O
add	O
context	O
to	O
the	O
paraphrase	B-General_Concept
.	O
</s>
<s>
Thus	O
the	O
optimal	O
paraphrase	B-General_Concept
,	O
can	O
be	O
modeled	O
as	O
:	O
</s>
<s>
There	O
has	O
been	O
success	O
in	O
using	O
long	B-Algorithm
short-term	I-Algorithm
memory	I-Algorithm
(	O
LSTM	B-Algorithm
)	O
models	O
to	O
generate	O
paraphrases	B-General_Concept
.	O
</s>
<s>
In	O
short	O
,	O
the	O
model	O
consists	O
of	O
an	O
encoder	O
and	O
decoder	O
component	O
,	O
both	O
implemented	O
using	O
variations	O
of	O
a	O
stacked	O
residual	O
LSTM	B-Algorithm
.	O
</s>
<s>
First	O
,	O
the	O
encoding	O
LSTM	B-Algorithm
takes	O
a	O
one-hot	O
encoding	O
of	O
all	O
the	O
words	O
in	O
a	O
sentence	O
as	O
input	O
and	O
produces	O
a	O
final	O
hidden	O
vector	O
,	O
which	O
can	O
represent	O
the	O
input	O
sentence	O
.	O
</s>
<s>
The	O
decoding	O
LSTM	B-Algorithm
takes	O
the	O
hidden	O
vector	O
as	O
input	O
and	O
generates	O
a	O
new	O
sentence	O
,	O
terminating	O
in	O
an	O
end-of-sentence	O
token	O
.	O
</s>
<s>
The	O
encoder	O
and	O
decoder	O
are	O
trained	O
to	O
take	O
a	O
phrase	O
and	O
reproduce	O
the	O
one-hot	O
distribution	O
of	O
a	O
corresponding	O
paraphrase	B-General_Concept
by	O
minimizing	O
perplexity	B-Application
using	O
simple	O
stochastic	B-Algorithm
gradient	I-Algorithm
descent	I-Algorithm
.	O
</s>
<s>
New	O
paraphrases	B-General_Concept
are	O
generated	O
by	O
inputting	O
a	O
new	O
phrase	O
to	O
the	O
encoder	O
and	O
passing	O
the	O
output	O
to	O
the	O
decoder	O
.	O
</s>
<s>
With	O
the	O
introduction	O
of	O
Transformer	B-Algorithm
models	I-Algorithm
,	O
paraphrase	B-General_Concept
generation	B-General_Concept
approaches	O
improved	O
their	O
ability	O
to	O
generate	O
text	O
by	O
scaling	O
neural	B-Architecture
network	I-Architecture
parameters	O
and	O
heavily	O
parallelizing	O
training	O
through	O
feed-forward	B-Algorithm
layers	I-Algorithm
.	O
</s>
<s>
Transformer-based	O
paraphrase	B-General_Concept
generation	B-General_Concept
relies	O
on	O
autoencoding	B-Algorithm
,	O
autoregressive	B-Algorithm
,	O
or	O
sequence-to-sequence	B-Algorithm
methods	O
.	O
</s>
<s>
Autoencoder	B-Algorithm
models	O
predict	O
word	O
replacement	O
candidates	O
with	O
a	O
one-hot	O
distribution	O
over	O
the	O
vocabulary	O
,	O
while	O
autoregressive	B-Algorithm
and	O
seq2seq	B-Algorithm
models	O
generate	O
new	O
text	O
based	O
on	O
the	O
source	O
predicting	O
one	O
word	O
at	O
a	O
time	O
.	O
</s>
<s>
More	O
advanced	O
efforts	O
also	O
exist	O
to	O
make	O
paraphrasing	B-General_Concept
controllable	O
according	O
to	O
predefined	O
quality	O
dimensions	O
,	O
such	O
as	O
semantic	O
preservation	O
or	O
lexical	O
diversity	O
.	O
</s>
<s>
Many	O
Transformer-based	O
paraphrase	B-General_Concept
generation	B-General_Concept
methods	O
rely	O
on	O
unsupervised	O
learning	O
to	O
leverage	O
large	O
amounts	O
of	O
training	O
data	O
and	O
scale	O
their	O
methods	O
.	O
</s>
<s>
Paraphrase	B-General_Concept
recognition	O
has	O
been	O
attempted	O
by	O
Socher	O
et	O
al	O
through	O
the	O
use	O
of	O
recursive	O
autoencoders	B-Algorithm
.	O
</s>
<s>
The	O
main	O
concept	O
is	O
to	O
produce	O
a	O
vector	O
representation	O
of	O
a	O
sentence	O
and	O
its	O
components	O
by	O
recursively	O
using	O
an	O
autoencoder	B-Algorithm
.	O
</s>
<s>
The	O
vector	O
representations	O
of	O
paraphrases	B-General_Concept
should	O
have	O
similar	O
vector	O
representations	O
;	O
they	O
are	O
processed	O
,	O
then	O
fed	O
as	O
input	O
into	O
a	O
neural	B-Architecture
network	I-Architecture
for	O
classification	O
.	O
</s>
<s>
Given	O
a	O
sentence	O
with	O
words	O
,	O
the	O
autoencoder	B-Algorithm
is	O
designed	O
to	O
take	O
2	O
-dimensional	O
word	B-General_Concept
embeddings	I-General_Concept
as	O
input	O
and	O
produce	O
an	O
-dimensional	O
vector	O
as	O
output	O
.	O
</s>
<s>
The	O
same	O
autoencoder	B-Algorithm
is	O
applied	O
to	O
every	O
pair	O
of	O
words	O
in	O
to	O
produce	O
vectors	O
.	O
</s>
<s>
The	O
autoencoder	B-Algorithm
is	O
then	O
applied	O
recursively	O
with	O
the	O
new	O
vectors	O
as	O
inputs	O
until	O
a	O
single	O
vector	O
is	O
produced	O
.	O
</s>
<s>
The	O
autoencoder	B-Algorithm
is	O
trained	O
to	O
reproduce	O
every	O
vector	O
in	O
the	O
full	O
recursion	O
tree	O
,	O
including	O
the	O
initial	O
word	B-General_Concept
embeddings	I-General_Concept
.	O
</s>
<s>
Given	O
two	O
sentences	O
and	O
of	O
length	O
4	O
and	O
3	O
respectively	O
,	O
the	O
autoencoders	B-Algorithm
would	O
produce	O
7	O
and	O
5	O
vector	O
representations	O
including	O
the	O
initial	O
word	B-General_Concept
embeddings	I-General_Concept
.	O
</s>
<s>
The	O
output	O
is	O
then	O
normalized	O
to	O
have	O
mean	O
0	O
and	O
standard	O
deviation	O
1	O
and	O
is	O
fed	O
into	O
a	O
fully	O
connected	O
layer	O
with	O
a	O
softmax	B-Algorithm
output	O
.	O
</s>
<s>
The	O
dynamic	O
pooling	O
to	O
softmax	B-Algorithm
model	O
is	O
trained	O
using	O
pairs	O
of	O
known	O
paraphrases	B-General_Concept
.	O
</s>
<s>
Skip-thought	O
vectors	O
are	O
an	O
attempt	O
to	O
create	O
a	O
vector	O
representation	O
of	O
the	O
semantic	O
meaning	O
of	O
a	O
sentence	O
,	O
similarly	O
to	O
the	O
skip	B-Algorithm
gram	I-Algorithm
model	I-Algorithm
.	O
</s>
<s>
The	O
encoder	O
and	O
decoder	O
can	O
be	O
implemented	O
through	O
the	O
use	O
of	O
a	O
recursive	B-Algorithm
neural	I-Algorithm
network	I-Algorithm
(	O
RNN	O
)	O
or	O
an	O
LSTM	B-Algorithm
.	O
</s>
<s>
Since	O
paraphrases	B-General_Concept
carry	O
the	O
same	O
semantic	O
meaning	O
between	O
one	O
another	O
,	O
they	O
should	O
have	O
similar	O
skip-thought	O
vectors	O
.	O
</s>
<s>
Similar	O
to	O
how	O
Transformer	B-Algorithm
models	I-Algorithm
influenced	O
paraphrase	B-General_Concept
generation	B-General_Concept
,	O
their	O
application	O
in	O
identifying	O
paraphrases	B-General_Concept
showed	O
great	O
success	O
.	O
</s>
<s>
Models	O
such	O
as	O
BERT	O
can	O
be	O
adapted	O
with	O
a	O
binary	B-General_Concept
classification	I-General_Concept
layer	O
and	O
trained	O
end-to-end	O
on	O
identification	O
tasks	O
.	O
</s>
<s>
Transformers	B-Algorithm
achieve	O
strong	O
results	O
when	O
transferring	O
between	O
domains	O
and	O
paraphrasing	B-General_Concept
techniques	O
compared	O
to	O
more	O
traditional	O
machine	O
learning	O
methods	O
such	O
as	O
logistic	O
regression	O
.	O
</s>
<s>
Other	O
successful	O
methods	O
based	O
on	O
the	O
Transformer	B-Algorithm
architecture	O
include	O
using	O
adversarial	B-General_Concept
learning	I-General_Concept
and	O
meta-learning	B-General_Concept
.	O
</s>
<s>
Multiple	O
methods	O
can	O
be	O
used	O
to	O
evaluate	O
paraphrases	B-General_Concept
.	O
</s>
<s>
Since	O
paraphrase	B-General_Concept
recognition	O
can	O
be	O
posed	O
as	O
a	O
classification	O
problem	O
,	O
most	O
standard	O
evaluations	O
metrics	O
such	O
as	O
accuracy	O
,	O
f1	B-General_Concept
score	I-General_Concept
,	O
or	O
an	O
ROC	B-Algorithm
curve	I-Algorithm
do	O
relatively	O
well	O
.	O
</s>
<s>
However	O
,	O
there	O
is	O
difficulty	O
calculating	O
f1-scores	O
due	O
to	O
trouble	O
producing	O
a	O
complete	O
list	O
of	O
paraphrases	B-General_Concept
for	O
a	O
given	O
phrase	O
and	O
the	O
fact	O
that	O
good	O
paraphrases	B-General_Concept
are	O
dependent	O
upon	O
context	O
.	O
</s>
<s>
ParaMetric	O
aims	O
to	O
calculate	O
the	O
precision	O
and	O
recall	O
of	O
an	O
automatic	O
paraphrase	B-General_Concept
system	O
by	O
comparing	O
the	O
automatic	O
alignment	O
of	O
paraphrases	B-General_Concept
to	O
a	O
manual	O
alignment	O
of	O
similar	O
phrases	O
.	O
</s>
<s>
Since	O
ParaMetric	O
is	O
simply	O
rating	O
the	O
quality	O
of	O
phrase	O
alignment	O
,	O
it	O
can	O
be	O
used	O
to	O
rate	O
paraphrase	B-General_Concept
generation	B-General_Concept
systems	O
,	O
assuming	O
it	O
uses	O
phrase	O
alignment	O
as	O
part	O
of	O
its	O
generation	B-General_Concept
process	O
.	O
</s>
<s>
The	O
evaluation	O
of	O
paraphrase	B-General_Concept
generation	B-General_Concept
has	O
similar	O
difficulties	O
as	O
the	O
evaluation	B-General_Concept
of	I-General_Concept
machine	I-General_Concept
translation	I-General_Concept
.	O
</s>
<s>
The	O
quality	O
of	O
a	O
paraphrase	B-General_Concept
depends	O
on	O
its	O
context	O
,	O
whether	O
it	O
is	O
being	O
used	O
as	O
a	O
summary	O
,	O
and	O
how	O
it	O
is	O
generated	O
,	O
among	O
other	O
factors	O
.	O
</s>
<s>
Additionally	O
,	O
a	O
good	O
paraphrase	B-General_Concept
usually	O
is	O
lexically	O
dissimilar	O
from	O
its	O
source	O
phrase	O
.	O
</s>
<s>
The	O
simplest	O
method	O
used	O
to	O
evaluate	O
paraphrase	B-General_Concept
generation	B-General_Concept
would	O
be	O
through	O
the	O
use	O
of	O
human	O
judges	O
.	O
</s>
<s>
Automated	O
approaches	O
to	O
evaluation	O
prove	O
to	O
be	O
challenging	O
as	O
it	O
is	O
essentially	O
a	O
problem	O
as	O
difficult	O
as	O
paraphrase	B-General_Concept
recognition	O
.	O
</s>
<s>
While	O
originally	O
used	O
to	O
evaluate	O
machine	B-Application
translations	I-Application
,	O
bilingual	O
evaluation	O
understudy	O
(	O
BLEU	O
)	O
has	O
been	O
used	O
successfully	O
to	O
evaluate	O
paraphrase	B-General_Concept
generation	B-General_Concept
models	O
as	O
well	O
.	O
</s>
<s>
However	O
,	O
paraphrases	B-General_Concept
often	O
have	O
several	O
lexically	O
different	O
but	O
equally	O
valid	O
solutions	O
,	O
hurting	O
BLEU	O
and	O
other	O
similar	O
evaluation	O
metrics	O
.	O
</s>
<s>
Metrics	O
specifically	O
designed	O
to	O
evaluate	O
paraphrase	B-General_Concept
generation	B-General_Concept
include	O
paraphrase	B-General_Concept
in	O
n-gram	B-Language
change	O
(	O
PINC	O
)	O
and	O
paraphrase	B-General_Concept
evaluation	O
metric	O
(	O
PEM	O
)	O
along	O
with	O
the	O
aforementioned	O
ParaMetric	O
.	O
</s>
<s>
Since	O
BLEU	O
has	O
difficulty	O
measuring	O
lexical	O
dissimilarity	O
,	O
PINC	O
is	O
a	O
measurement	O
of	O
the	O
lack	O
of	O
n-gram	B-Language
overlap	O
between	O
a	O
source	O
sentence	O
and	O
a	O
candidate	O
paraphrase	B-General_Concept
.	O
</s>
<s>
It	O
is	O
essentially	O
the	O
Jaccard	O
distance	O
between	O
the	O
sentence	O
,	O
excluding	O
n-grams	B-Language
that	O
appear	O
in	O
the	O
source	O
sentence	O
to	O
maintain	O
some	O
semantic	O
equivalence	O
.	O
</s>
<s>
PEM	O
,	O
on	O
the	O
other	O
hand	O
,	O
attempts	O
to	O
evaluate	O
the	O
"	O
adequacy	O
,	O
fluency	O
,	O
and	O
lexical	O
dissimilarity	O
"	O
of	O
paraphrases	B-General_Concept
by	O
returning	O
a	O
single	O
value	O
heuristic	O
calculated	O
using	O
N-grams	B-Language
overlap	O
in	O
a	O
pivot	O
language	O
.	O
</s>
<s>
It	O
is	O
equivalent	O
to	O
training	O
a	O
paraphrase	B-General_Concept
recognition	O
to	O
evaluate	O
a	O
paraphrase	B-General_Concept
generation	B-General_Concept
system	O
.	O
</s>
<s>
The	O
Quora	O
Question	O
Pairs	O
Dataset	O
,	O
which	O
contains	O
hundreds	O
of	O
thousands	O
of	O
duplicate	O
questions	O
,	O
has	O
become	O
a	O
common	O
dataset	O
for	O
the	O
evaluation	O
of	O
paraphrase	B-General_Concept
detectors	O
.	O
</s>
<s>
The	O
best	O
performing	O
models	O
for	O
paraphrase	B-General_Concept
detection	O
for	O
the	O
last	O
three	O
years	O
have	O
all	O
used	O
the	O
Transformer	B-Algorithm
architecture	O
and	O
all	O
have	O
relied	O
on	O
large	O
amounts	O
of	O
pre-training	O
with	O
more	O
general	O
data	O
before	O
fine-tuning	O
with	O
the	O
question	O
pairs	O
.	O
</s>
