<s>
Self-supervised	B-General_Concept
learning	I-General_Concept
(	O
SSL	O
)	O
refers	O
to	O
a	O
machine	O
learning	O
paradigm	O
,	O
and	O
corresponding	O
methods	O
,	O
for	O
processing	O
unlabelled	O
data	O
to	O
obtain	O
useful	O
representations	O
that	O
can	O
help	O
with	O
downstream	O
learning	O
tasks	O
.	O
</s>
<s>
The	O
typical	O
SSL	O
pipeline	O
consists	O
of	O
learning	O
supervisory	O
signals	O
(	O
labels	O
generated	O
automatically	O
)	O
in	O
a	O
first	O
stage	O
,	O
which	O
are	O
then	O
used	O
for	O
some	O
supervised	B-General_Concept
learning	I-General_Concept
task	O
in	O
the	O
second	O
and	O
later	O
stages	O
.	O
</s>
<s>
For	O
this	O
reason	O
,	O
SSL	O
can	O
be	O
described	O
as	O
an	O
intermediate	O
form	O
of	O
unsupervised	B-General_Concept
and	O
supervised	B-General_Concept
learning	I-General_Concept
.	O
</s>
<s>
The	O
typical	O
SSL	O
method	O
is	O
based	O
on	O
an	O
artificial	B-Architecture
neural	I-Architecture
network	I-Architecture
or	O
other	O
model	O
such	O
as	O
a	O
decision	B-General_Concept
list	I-General_Concept
.	O
</s>
<s>
In	O
the	O
second	O
stage	O
,	O
the	O
actual	O
task	O
is	O
performed	O
with	O
supervised	O
or	O
unsupervised	B-General_Concept
learning	I-General_Concept
.	O
</s>
<s>
Self-supervised	B-General_Concept
learning	I-General_Concept
was	O
referred	O
to	O
as	O
"	O
self-labeling	O
"	O
in	O
2013	O
.	O
</s>
<s>
Self-labeling	O
generates	O
labels	O
based	O
on	O
values	O
of	O
the	O
input	O
variables	O
,	O
for	O
example	O
,	O
to	O
allow	O
the	O
application	O
of	O
supervised	B-General_Concept
learning	I-General_Concept
methods	O
on	O
unlabeled	O
time-series	O
.	O
</s>
<s>
Self-supervised	B-General_Concept
learning	I-General_Concept
has	O
produced	O
promising	O
results	O
in	O
recent	O
years	O
and	O
has	O
found	O
practical	O
application	O
in	O
audio	B-Algorithm
processing	I-Algorithm
,	O
and	O
is	O
being	O
used	O
by	O
Facebook	B-Application
and	O
others	O
for	O
speech	B-Application
recognition	I-Application
.	O
</s>
<s>
Self-supervised	B-General_Concept
learning	I-General_Concept
more	O
closely	O
imitates	O
the	O
way	O
humans	O
learn	O
to	O
classify	O
objects	O
.	O
</s>
<s>
Contrastive	O
self-supervised	B-General_Concept
learning	I-General_Concept
uses	O
both	O
positive	O
and	O
negative	O
examples	O
.	O
</s>
<s>
Contrastive	B-General_Concept
learning	I-General_Concept
's	O
loss	O
function	O
minimizes	O
the	O
distance	O
between	O
positive	O
samples	O
while	O
maximizing	O
the	O
distance	O
between	O
negative	O
samples	O
.	O
</s>
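The minimize/maximize behaviour described above can be sketched with a toy InfoNCE-style objective, a common form of contrastive loss. This is a minimal sketch, not from the source: the embeddings, the temperature value, and the cosine-similarity choice are all illustrative assumptions.

```python
import numpy as np

def contrastive_loss(anchor, positive, negatives, temperature=0.5):
    """InfoNCE-style contrastive loss for one anchor.

    Low when the anchor is close (similar) to the positive and far
    from the negatives; high otherwise.
    """
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    pos_sim = cos(anchor, positive) / temperature
    neg_sims = np.array([cos(anchor, n) / temperature for n in negatives])
    # softmax cross-entropy: -log p(positive | anchor, candidates)
    logits = np.concatenate(([pos_sim], neg_sims))
    return -pos_sim + np.log(np.sum(np.exp(logits)))

# Toy embeddings: the positive is a slightly perturbed copy of the anchor,
# the negatives are unrelated random vectors.
rng = np.random.default_rng(0)
anchor = rng.normal(size=8)
positive = anchor + 0.05 * rng.normal(size=8)
negatives = [rng.normal(size=8) for _ in range(5)]

# The loss is low when the designated positive really is similar to the
# anchor, and high when an unrelated vector is (wrongly) used as positive.
loss_close = contrastive_loss(anchor, positive, negatives)
loss_far = contrastive_loss(anchor, negatives[0], [positive] + negatives[1:])
print(loss_close, loss_far)
```

Minimizing this quantity pulls positive pairs together and pushes negative pairs apart in embedding space, which is the behaviour the sentence above describes.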
<s>
Non-contrastive	O
self-supervised	B-General_Concept
learning	I-General_Concept
(	O
NCSSL	O
)	O
uses	O
only	O
positive	O
examples	O
.	O
</s>
<s>
SSL	O
belongs	O
to	O
supervised	B-General_Concept
learning	I-General_Concept
methods	O
insofar	O
as	O
the	O
goal	O
is	O
to	O
generate	O
a	O
classified	O
output	O
from	O
the	O
input	O
.	O
</s>
<s>
SSL	O
is	O
similar	O
to	O
unsupervised	B-General_Concept
learning	I-General_Concept
in	O
that	O
it	O
does	O
not	O
require	O
labels	O
in	O
the	O
sample	O
data	O
.	O
</s>
<s>
Unlike	O
unsupervised	B-General_Concept
learning	I-General_Concept
,	O
however	O
,	O
learning	O
is	O
not	O
done	O
using	O
inherent	O
data	O
structures	O
.	O
</s>
<s>
Semi-supervised	B-General_Concept
learning	I-General_Concept
combines	O
supervised	O
and	O
unsupervised	B-General_Concept
learning	I-General_Concept
,	O
requiring	O
only	O
a	O
small	O
portion	O
of	O
the	O
learning	O
data	O
be	O
labeled	O
.	O
</s>
<s>
In	O
transfer	B-General_Concept
learning	I-General_Concept
a	O
model	O
designed	O
for	O
one	O
task	O
is	O
reused	O
on	O
a	O
different	O
task	O
.	O
</s>
<s>
Training	O
an	O
autoencoder	B-Algorithm
intrinsically	O
constitutes	O
a	O
self-supervised	O
process	O
,	O
because	O
the	O
output	O
pattern	O
needs	O
to	O
become	O
an	O
optimal	O
reconstruction	O
of	O
the	O
input	O
pattern	O
itself	O
.	O
</s>
<s>
Pretext-task	O
setups	O
,	O
by	O
contrast	O
,	O
require	O
the	O
human	O
design	O
of	O
auxiliary	O
tasks	O
,	O
unlike	O
the	O
case	O
of	O
fully	O
self-contained	O
autoencoder	B-Algorithm
training	O
.	O
</s>
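The self-supervised character of autoencoder training can be sketched directly: the reconstruction target is the input itself, so plain gradient descent needs no external labels. The linear encoder/decoder, dimensions, learning rate, and step count below are all illustrative assumptions, not from the source.

```python
import numpy as np

# A linear autoencoder trained by gradient descent. The "label" for each
# example is the example itself, so the data needs no annotation.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))          # unlabelled data
W_enc = 0.1 * rng.normal(size=(6, 3))  # encoder: 6 -> 3 (bottleneck)
W_dec = 0.1 * rng.normal(size=(3, 6))  # decoder: 3 -> 6

def mse(W_enc, W_dec):
    R = X @ W_enc @ W_dec              # reconstruction of X
    return np.mean((R - X) ** 2)

loss_before = mse(W_enc, W_dec)
lr = 0.01
for _ in range(200):
    Z = X @ W_enc                      # latent codes
    R = Z @ W_dec                      # reconstruction
    G = 2 * (R - X) / X.size           # d(loss)/d(R)
    grad_dec = Z.T @ G                 # d(loss)/d(W_dec)
    grad_enc = X.T @ (G @ W_dec.T)     # d(loss)/d(W_enc)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
loss_after = mse(W_enc, W_dec)
print(loss_before, loss_after)
```

The reconstruction error falls over training even though no human-provided labels appear anywhere: the supervisory signal is entirely self-contained in the input.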
<s>
Self-supervised	B-General_Concept
learning	I-General_Concept
is	O
particularly	O
suitable	O
for	O
speech	B-Application
recognition	I-Application
.	O
</s>
<s>
For	O
example	O
,	O
Facebook	B-Application
developed	O
wav2vec	O
,	O
a	O
self-supervised	O
algorithm	O
,	O
to	O
perform	O
speech	B-Application
recognition	I-Application
using	O
two	O
deep	B-Architecture
convolutional	I-Architecture
neural	I-Architecture
networks	I-Architecture
that	O
build	O
on	O
each	O
other	O
.	O
</s>
<s>
Google	B-Application
's	I-Application
Bidirectional	B-General_Concept
Encoder	I-General_Concept
Representations	I-General_Concept
from	I-General_Concept
Transformers	I-General_Concept
(	O
BERT	B-General_Concept
)	O
model	O
is	O
used	O
to	O
better	O
understand	O
the	O
context	O
of	O
search	O
queries	O
.	O
</s>
<s>
OpenAI	O
's	O
GPT-3	B-General_Concept
is	O
an	O
autoregressive	O
language	B-Language
model	I-Language
that	O
can	O
be	O
used	O
in	O
language	O
processing	O
.	O
</s>
<s>
Bootstrap	O
Your	O
Own	O
Latent	O
is	O
an	O
NCSSL	O
that	O
produced	O
excellent	O
results	O
on	O
ImageNet	B-General_Concept
and	O
on	O
transfer	O
and	O
semi-supervised	O
benchmarks	O
.	O
</s>
<s>
The	O
Yarowsky	O
algorithm	O
is	O
an	O
example	O
of	O
self-supervised	B-General_Concept
learning	I-General_Concept
in	O
natural	B-Language
language	I-Language
processing	I-Language
.	O
</s>
<s>
From	O
a	O
small	O
number	O
of	O
labeled	O
examples	O
,	O
it	O
learns	O
to	O
predict	O
which	O
word	B-General_Concept
sense	I-General_Concept
of	O
a	O
polysemous	O
word	O
is	O
being	O
used	O
at	O
a	O
given	O
point	O
in	O
text	O
.	O
</s>
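The bootstrapping loop behind this kind of self-labeling can be sketched as self-training on toy data. This is a sketch only: a nearest-centroid rule stands in for the decision lists the Yarowsky algorithm actually uses, and the data, seed choice, and confidence threshold are illustrative assumptions.

```python
import numpy as np

# Yarowsky-style self-training: start from a few labeled seed examples,
# repeatedly label the unlabeled points the current model is most
# confident about, and retrain on the grown labeled set.
rng = np.random.default_rng(2)
n = 100
X = np.vstack([rng.normal(-2, 1, size=(n, 2)),   # class 0 cluster
               rng.normal(+2, 1, size=(n, 2))])  # class 1 cluster
y_true = np.array([0] * n + [1] * n)

labels = np.full(2 * n, -1)            # -1 means "still unlabeled"
labels[[0, n]] = y_true[[0, n]]        # one labeled seed per class

for _ in range(25):
    # Retrain: class centroids from the currently labeled points.
    cents = np.array([X[labels == c].mean(axis=0) for c in (0, 1)])
    d = np.linalg.norm(X[:, None, :] - cents[None, :, :], axis=2)
    pred = d.argmin(axis=1)
    margin = np.abs(d[:, 0] - d[:, 1])  # confidence proxy
    unl = np.flatnonzero(labels == -1)
    if unl.size == 0:
        break
    # Self-label the 10 most confident still-unlabeled points.
    take = unl[np.argsort(-margin[unl])[:10]]
    labels[take] = pred[take]

accuracy = (labels[labels != -1] == y_true[labels != -1]).mean()
print(accuracy)
```

Because only high-confidence predictions are promoted to labels at each round, the labeled set grows outward from the seeds without needing further annotation, which is the core idea the algorithm applies to word-sense disambiguation.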
<s>
DirectPred	O
is	O
an	O
NCSSL	O
that	O
directly	O
sets	O
the	O
predictor	O
weights	O
instead	O
of	O
learning	O
them	O
via	O
gradient	B-Algorithm
update	I-Algorithm
.	O
</s>
