In machine learning, feature learning or representation learning is a set of techniques that allows a system to automatically discover the representations needed for feature detection or classification from raw data. This replaces manual feature engineering and allows a machine to both learn the features and use them to perform a specific task.

Feature learning is motivated by the fact that machine learning tasks such as classification often require input that is mathematically and computationally convenient to process. However, real-world data such as images, video, and sensor data has not yielded to attempts to algorithmically define specific features. An alternative is to discover such features or representations through examination, without relying on explicit algorithms.

Feature learning can be either supervised, unsupervised, or self-supervised.

In supervised feature learning, features are learned using labeled input data. The labels can be leveraged to generate feature representations with the model which result in high label prediction accuracy. Examples include supervised neural networks, the multilayer perceptron, and (supervised) dictionary learning.

In unsupervised feature learning, features are learned with unlabeled input data by analyzing the relationships between points in the dataset. Examples include dictionary learning, independent component analysis, matrix factorization, and various forms of clustering.

In self-supervised feature learning, features are learned using unlabeled data, as in unsupervised learning; however, input-label pairs are constructed from each data point, which enables learning the structure of the data through supervised methods such as gradient descent. Classical examples include word embeddings and autoencoders. Self-supervised learning (SSL) has since been applied to many modalities through the use of deep neural network architectures such as CNNs and transformers.
Supervised feature learning is learning features from labeled data.

Dictionary learning develops a set (dictionary) of representative elements from the input data such that each data point can be represented as a weighted sum of the representative elements. Supervised dictionary learning exploits both the structure underlying the input data and the labels to optimize the dictionary elements. For example, one supervised dictionary learning technique applies dictionary learning to classification problems by jointly optimizing the dictionary elements, the weights for representing data points, and the parameters of the classifier based on the input data. In particular, a minimization problem is formulated, where the objective function consists of the classification error, the representation error, an L1 regularization on the representing weights for each data point (to enable sparse representation of the data), and an L2 regularization on the parameters of the classifier.
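In notation assumed here for illustration (not fixed by the text), with data points $x_i$ labeled $y_i$, dictionary $D$, sparse codes $w_i$, and classifier parameters $\theta$, such an objective can be written as:

$$\min_{D,\,\{w_i\},\,\theta} \;\; \sum_i \Big( \ell\big(y_i, f(w_i; \theta)\big) \;+\; \|x_i - D w_i\|_2^2 \;+\; \lambda \|w_i\|_1 \Big) \;+\; \mu \|\theta\|_2^2$$

where $\ell$ is the classification loss, the second term is the representation error, the L1 term encourages sparse codes, and $\mu \|\theta\|_2^2$ is the L2 regularization on the classifier.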
Neural networks are a family of learning algorithms that use a "network" consisting of multiple layers of interconnected nodes. A network function associated with a neural network characterizes the relationship between the input and output layers, and is parameterized by the weights. Multilayer neural networks can be used to perform feature learning, since they learn a representation of their input at the hidden layer(s), which is subsequently used for classification or regression at the output layer. A popular network architecture of this type is the Siamese network.

Unsupervised feature learning is learning features from unlabeled data. The goal of unsupervised feature learning is often to discover low-dimensional features that capture some structure underlying the high-dimensional input data. When feature learning is performed in an unsupervised way, it enables a form of semi-supervised learning where features learned from an unlabeled dataset are then employed to improve performance in a supervised setting with labeled data.
K-means clustering is an approach for vector quantization. In particular, given a set of n vectors, k-means clustering groups them into k clusters (i.e., subsets) in such a way that each vector belongs to the cluster with the closest mean. The problem is computationally NP-hard, although suboptimal greedy algorithms have been developed.
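The standard greedy heuristic is Lloyd's algorithm, which alternates between assigning each vector to its nearest centroid and recomputing each centroid as the mean of its assigned vectors. A minimal NumPy sketch (function name, defaults, and the optional `init` parameter are illustrative choices, not from the text):

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0, init=None):
    """Lloyd's algorithm: alternate assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centroids = (X[rng.choice(len(X), size=k, replace=False)]
                 if init is None else np.asarray(init, float))
    for _ in range(n_iters):
        # Assign each vector to the cluster with the closest mean.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned vectors.
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break                    # converged: assignments are stable
        centroids = new
    return centroids, labels
```

Each iteration can only lower the total within-cluster squared distance, which is why the procedure converges, though possibly to a suboptimal local solution.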
K-means clustering can be used to group an unlabeled set of inputs into k clusters, and then use the centroids of these clusters to produce features. These features can be produced in several ways. The simplest is to add k binary features to each sample, where each feature j has value one iff the jth centroid learned by k-means is the closest to the sample under consideration. It is also possible to use the distances to the clusters as features, perhaps after transforming them through a radial basis function (a technique that has been used to train RBF networks).
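Both feature constructions can be sketched directly from a set of learned centroids (a sketch; the RBF width `gamma` is an assumed free parameter):

```python
import numpy as np

def kmeans_features(X, centroids, gamma=1.0):
    """Turn k learned centroids into per-sample features."""
    # Distance of every sample to every centroid.
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    # (1) k binary features: feature j is 1 iff centroid j is closest.
    onehot = np.zeros_like(dists)
    onehot[np.arange(len(X)), dists.argmin(axis=1)] = 1.0
    # (2) distances passed through a radial basis function.
    rbf = np.exp(-gamma * dists ** 2)
    return onehot, rbf
```

The binary encoding keeps only the hard cluster assignment, while the RBF features retain a soft notion of how close a sample is to every centroid.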
Coates and Ng note that certain variants of k-means behave similarly to sparse coding algorithms. In a comparative evaluation of unsupervised feature learning methods, Coates, Lee and Ng found that k-means clustering with an appropriate transformation outperforms the more recently invented autoencoders and RBMs on an image classification task. K-means also improves performance in the domain of NLP, specifically for named-entity recognition; there, it competes with Brown clustering, as well as with distributed word representations (also known as neural word embeddings).
Principal component analysis (PCA) is often used for dimension reduction. In particular, given a set of input data vectors, PCA generates p singular vectors corresponding to the p largest singular values of the data matrix, where p is much smaller than the dimension of the data. These p singular vectors are the feature vectors learned from the input data, and they represent the directions along which the data has the largest variations. PCA is a linear feature learning approach, since the p singular vectors are linear functions of the data matrix.
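This can be sketched with a plain SVD (a minimal sketch; centering the data and the choice of `p` are standard conventions assumed here, not prescribed by the text):

```python
import numpy as np

def pca_features(X, p):
    """Learn the top-p principal directions and project data onto them."""
    Xc = X - X.mean(axis=0)          # center the data
    # Rows of Vt are right singular vectors, sorted by singular value.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:p]              # p directions of largest variation
    return Xc @ components.T, components
```

The projection `Xc @ components.T` gives each sample's coordinates along the p learned directions, i.e., its low-dimensional feature vector.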
Local linear embedding (LLE) is a nonlinear learning approach for generating low-dimensional, neighbor-preserving representations from (unlabeled) high-dimensional input. The first step is "neighbor preserving": each input data point Xi is reconstructed as a weighted sum of its K nearest neighbor data points, and the optimal weights are found by minimizing the average squared reconstruction error (i.e., the difference between an input point and its reconstruction) under the constraint that the weights associated with each point sum to one. Note that in this first step, the weights are optimized with the data fixed, so the problem can be solved as a least squares problem.
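For a single point, this constrained least squares step has a closed form via the local Gram matrix (a sketch following the standard LLE derivation; the regularizer `reg` is an assumed stabilizer for degenerate neighborhoods):

```python
import numpy as np

def lle_weights(x, neighbors, reg=1e-3):
    """Reconstruction weights of x from its K nearest neighbors,
    constrained to sum to one (one point of LLE's first step)."""
    Z = neighbors - x                        # shift neighborhood to x
    G = Z @ Z.T                              # local Gram matrix (K x K)
    G += reg * np.trace(G) * np.eye(len(G))  # regularize if G is singular
    w = np.linalg.solve(G, np.ones(len(G)))  # solve G w = 1
    return w / w.sum()                       # enforce the sum-to-one constraint
```

Solving against the all-ones vector and renormalizing is the standard trick for enforcing the sum-to-one constraint with a Lagrange multiplier.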
It is assumed that the original data lie on a smooth lower-dimensional manifold, and that the "intrinsic geometric properties" captured by the weights of the original data also hold on the manifold.

Independent component analysis (ICA) is a technique for forming a data representation using a weighted sum of independent non-Gaussian components.
Unsupervised dictionary learning does not utilize data labels; instead, it exploits the structure underlying the data to optimize the dictionary elements. An example of unsupervised dictionary learning is sparse coding, which aims to learn basis functions (dictionary elements) for data representation from unlabeled input data. Sparse coding can be applied to learn overcomplete dictionaries, where the number of dictionary elements is larger than the dimension of the input data.
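Given a fixed dictionary, the sparse code for one input can be found by iterative shrinkage-thresholding (ISTA); a minimal sketch under the usual objective 0.5·‖x − Dw‖² + λ‖w‖₁ (the step size rule and iteration count are common choices assumed here):

```python
import numpy as np

def sparse_code(x, D, lam=0.1, n_iters=200):
    """ISTA for min_w 0.5 * ||x - D w||^2 + lam * ||w||_1 (D fixed)."""
    step = 1.0 / np.linalg.norm(D, ord=2) ** 2   # 1 / Lipschitz constant
    w = np.zeros(D.shape[1])
    for _ in range(n_iters):
        z = w - step * (D.T @ (D @ w - x))       # gradient step on the fit term
        w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return w
```

The soft-threshold is what the L1 penalty contributes: small coefficients are driven exactly to zero, which is the source of the sparsity. For an overcomplete dictionary, `D.shape[1]` simply exceeds `D.shape[0]`.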
The hierarchical architecture of the biological neural system inspires deep learning architectures for feature learning, which stack multiple layers of learning nodes. In a deep learning architecture, the output of each intermediate layer can be viewed as a representation of the original input data. The input at the bottom layer is the raw data, and the output of the final layer is the final low-dimensional feature or representation.
Restricted Boltzmann machines (RBMs) are often used as a building block for multilayer learning architectures. An RBM can be represented by an undirected bipartite graph consisting of a group of binary hidden variables, a group of visible variables, and edges connecting the hidden and visible nodes. It is a special case of the more general Boltzmann machine, with the constraint of no intra-layer connections. An RBM can be viewed as a single-layer architecture for unsupervised feature learning. In particular, the visible variables correspond to the input data, and the hidden variables correspond to feature detectors. The weights can be trained by maximizing the probability of the visible variables using Hinton's contrastive divergence (CD) algorithm.
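One CD-1 update can be sketched as a single Gibbs up-down-up pass followed by the weight update ΔW ∝ ⟨v·h⟩_data − ⟨v·h⟩_recon (a sketch; the learning rate and Bernoulli sampling follow common practice, and biases are omitted for brevity):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_step(v0, W, lr=0.1, rng=None):
    """One contrastive divergence (CD-1) update of RBM weights W
    given a batch of binary visible vectors v0 (biases omitted)."""
    rng = rng or np.random.default_rng(0)
    # Up: hidden probabilities and a binary sample given the data.
    ph0 = sigmoid(v0 @ W)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Down-up: reconstruct the visibles, then hidden probabilities again.
    pv1 = sigmoid(h0 @ W.T)
    ph1 = sigmoid(pv1 @ W)
    # Positive minus negative statistics, averaged over the batch.
    dW = (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    return W + lr * dW
```

The positive statistics pull the model toward the data, while the negative statistics, computed from the one-step reconstruction rather than a full Gibbs chain, approximate the model's own expectations; that one-step shortcut is what makes CD practical.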
An autoencoder, consisting of an encoder and a decoder, is a paradigm for deep learning architectures. An example is provided by Hinton and Salakhutdinov, where the encoder uses raw data (e.g., an image) as input and produces a feature or representation as output, and the decoder uses the extracted feature from the encoder as input and reconstructs the original input raw data as output.
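The encoder-decoder structure can be sketched as a tiny linear autoencoder trained by gradient descent on the reconstruction error (a minimal sketch; the linear layers, MSE loss, and hyperparameters are illustrative assumptions, far simpler than the deep networks the text describes):

```python
import numpy as np

def train_autoencoder(X, hidden_dim, lr=0.01, n_iters=500, seed=0):
    """Tiny linear autoencoder: the encoder maps input -> feature,
    the decoder maps feature -> reconstruction of the input."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0.0, 0.1, (d, hidden_dim))    # encoder weights
    W2 = rng.normal(0.0, 0.1, (hidden_dim, d))    # decoder weights
    for _ in range(n_iters):
        H = X @ W1                  # encode: feature at the hidden layer
        Xhat = H @ W2               # decode: reconstruct the raw input
        E = Xhat - X                # reconstruction error
        gW2 = H.T @ E / n           # gradient of 0.5*||E||^2/n w.r.t. W2
        gW1 = X.T @ (E @ W2.T) / n  # gradient w.r.t. W1 (chain rule)
        W1 -= lr * gW1
        W2 -= lr * gW2
    return W1, W2, float(np.mean(E ** 2))
```

Because the only training signal is reconstructing the input, the hidden activations `H` must compress the data into `hidden_dim` dimensions, which is exactly the learned feature representation.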
The parameters involved in the architecture were originally trained in a greedy layer-by-layer manner: after one layer of feature detectors is learned, its outputs are fed as the visible variables for training the next RBM. Current approaches typically apply end-to-end training with stochastic gradient descent methods.
Self-supervised representation learning is learning features by training on the structure of unlabeled data rather than relying on explicit labels for an information signal. This approach has enabled the combined use of deep neural network architectures and larger unlabeled datasets to produce deep feature representations. Contrastive representation learning trains representations for associated data pairs, called positive samples, to be aligned, while pairs with no relation, called negative samples, are contrasted, i.e., their representations are pushed apart.
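A common instantiation of this idea is the InfoNCE loss, which scores the positive pair against the negatives with a softmax over similarities (a sketch; cosine similarity and the temperature `tau` follow common practice, not this text specifically):

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """Contrastive loss for one anchor: the positive should score
    higher than every negative under cosine similarity."""
    cands = np.vstack([positive[None, :], negatives])
    # Cosine similarities between the anchor and all candidates.
    sims = cands @ anchor / (np.linalg.norm(cands, axis=1)
                             * np.linalg.norm(anchor))
    logits = sims / tau
    # Cross-entropy with the positive at index 0.
    return -logits[0] + np.log(np.sum(np.exp(logits)))
```

Minimizing this loss simultaneously aligns the anchor with its positive and contrasts it against the negatives, which is the alignment/contrast behavior described above.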
Generative representation learning tasks the model with producing the correct data to either match a restricted input or reconstruct the full input from a lower-dimensional representation.

A common setup for self-supervised representation learning of a certain data type produces either a set of discrete units (e.g. words) into which new data can be broken, or a neural network able to convert each new data point (e.g. an image) into a set of lower-dimensional features. Specialization of the model to specific tasks is typically done with supervised learning, either by fine-tuning the model/representations with the labels as the signal, or by freezing the representations and training an additional model which takes them as input.

Many self-supervised training schemes have been developed for use in representation learning of various modalities, often first showing successful application in text or image before being transferred to other data types.
Word2vec is a word embedding technique which learns to represent words through self-supervision over each word and its neighboring words in a sliding window across a large corpus of text.
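The sliding-window supervision can be sketched as extracting (target, context) training pairs from raw text (a sketch; the window radius is an assumed hyperparameter):

```python
def skipgram_pairs(tokens, window=2):
    """Build (target, context) pairs from neighbors in a sliding window,
    as used to self-supervise word2vec-style training."""
    pairs = []
    for i, target in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:                      # a word is not its own context
                pairs.append((target, tokens[j]))
    return pairs
```

Each pair is an input-label example constructed from the data itself, which is precisely what makes the scheme self-supervised rather than unsupervised.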
The model has two possible training schemes to produce word vector representations, one generative and one contrastive. A limitation of word2vec is that only the pairwise co-occurrence structure of the data is used, and not the ordering or the entire set of context words. More recent transformer-based representation learning approaches attempt to solve this with word prediction tasks. GPT pretrains on next-word prediction using prior input words as context, whereas BERT masks random tokens in order to provide bidirectional context.
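BERT-style masked prediction can be sketched as replacing a random subset of tokens with a mask symbol and keeping the originals as prediction targets (a sketch; the 15% rate is BERT's published convention, and the "[MASK]" symbol is illustrative):

```python
import random

def mask_tokens(tokens, rate=0.15, seed=0):
    """Replace a random subset of tokens with "[MASK]"; the originals
    become the prediction targets for self-supervised training."""
    rng = random.Random(seed)
    masked, targets = list(tokens), {}
    for i in range(len(tokens)):
        if rng.random() < rate:
            targets[i] = tokens[i]        # label: the original token
            masked[i] = "[MASK]"          # input: the masked token
    return masked, targets
```

Because masked positions can occur anywhere in the sequence, the model must use context from both directions to predict them, which is the bidirectionality the text refers to.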
Other self-supervised techniques extend word embeddings by finding representations for larger text structures, such as sentences or paragraphs, in the input data. Doc2vec extends the generative training approach in word2vec by adding an additional input to the word prediction task based on the paragraph the word is within, and is therefore intended to represent paragraph-level context.
The domain of image representation learning has employed many different self-supervised training techniques, including transformation, inpainting, patch discrimination, and clustering. Examples of generative approaches are Context Encoders, which trains an AlexNet CNN architecture to generate a removed image region given the masked image as input, and iGPT, which applies the GPT-2 language model architecture to images by training on pixel prediction after reducing the image resolution. Many other self-supervised methods use Siamese networks, which generate different views of the image through various augmentations that are then aligned to have similar representations. SimCLR is a contrastive approach which uses negative examples in order to generate image representations with a ResNet CNN.
The goal of many graph representation learning techniques is to produce an embedded representation of each node based on the overall network topology. node2vec extends the word2vec training technique to nodes in a graph by using co-occurrence in random walks through the graph as the measure of association.
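Generating the random-walk "sentences" that node2vec-style methods feed into word2vec-like training can be sketched as follows (a sketch of uniform walks; node2vec proper biases the walk with return and in-out parameters not modeled here):

```python
import random

def random_walks(adj, walk_len=5, walks_per_node=2, seed=0):
    """Uniform random walks over an adjacency dict {node: [neighbors]};
    co-occurrence within a walk plays the role of word co-occurrence."""
    rng = random.Random(seed)
    walks = []
    for start in adj:
        for _ in range(walks_per_node):
            walk, node = [start], start
            for _ in range(walk_len - 1):
                if not adj[node]:          # dead end: stop this walk
                    break
                node = rng.choice(adj[node])
                walk.append(node)
            walks.append(walk)
    return walks
```

Treating each walk as a sentence and each node as a word, the skip-gram machinery of word2vec then applies unchanged.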
Another approach is to maximize the mutual information, a measure of similarity, between the representations of associated structures within the graph. An example is Deep Graph Infomax, which uses contrastive self-supervision based on the mutual information between the representation of a "patch" around each node and a summary representation of the entire graph. Negative samples are obtained by pairing the graph representation with either representations from another graph in a multigraph training setting, or corrupted patch representations in single-graph training.
With analogous results in masked prediction and clustering, video representation learning approaches are often similar to image techniques but must utilize the temporal sequence of video frames as an additional learned structure. Examples include VCP, which masks video clips and trains to choose the correct one given a set of clip options, and Xu et al., who train a 3D-CNN to identify the original order given a shuffled set of video clips.

Self-supervised representation techniques have also been applied to many audio data formats, particularly for speech processing. Wav2vec 2.0 discretizes the audio waveform into timesteps via temporal convolutions, and then trains a transformer on masked prediction of random timesteps using a contrastive loss.
This is similar to the BERT language model, except that, as in many SSL approaches to video, the model chooses among a set of options rather than over the entire word vocabulary.

Self-supervised learning has also been used to develop joint representations of multiple data types. Approaches usually rely on some natural or human-derived association between the modalities as an implicit label, for instance video clips of animals or objects with characteristic sounds, or captions written to describe images.
MERLOT Reserve trains a transformer-based encoder to jointly represent audio, subtitles, and video frames from a large dataset of videos through three joint pretraining tasks: contrastive masked prediction of either audio or text segments given the video frames and the surrounding audio and text context, along with contrastive alignment of video frames with their corresponding captions.

Multimodal representation models are typically unable to assume direct correspondence of representations in the different modalities, since the precise alignment can often be noisy or ambiguous. This limitation means that downstream tasks may require an additional generative mapping network between modalities to achieve optimal performance, such as in DALL-E 2 for text-to-image generation.
