<s>
A	O
Vision	B-Algorithm
Transformer	I-Algorithm
(	O
ViT	B-Algorithm
)	O
is	O
a	O
transformer	B-Algorithm
that	O
is	O
targeted	O
at	O
vision	O
processing	O
tasks	O
such	O
as	O
image	O
recognition	O
.	O
</s>
<s>
Transformers	B-Algorithm
found	O
their	O
initial	O
applications	O
in	O
natural	B-Language
language	I-Language
processing	I-Language
(	O
NLP	B-Language
)	O
tasks	O
,	O
as	O
demonstrated	O
by	O
language	B-Language
models	I-Language
such	O
as	O
BERT	B-General_Concept
and	O
GPT-3	B-General_Concept
.	O
</s>
<s>
By	O
contrast	O
,	O
the	O
typical	O
image	O
processing	O
system	O
uses	O
a	O
convolutional	B-Architecture
neural	I-Architecture
network	I-Architecture
(	O
CNN	B-Architecture
)	O
.	O
</s>
<s>
Well-known	O
projects	O
include	O
Xception	O
,	O
ResNet	B-Algorithm
,	O
EfficientNet	O
,	O
DenseNet	O
,	O
and	O
Inception	O
.	O
</s>
<s>
Transformers	B-Algorithm
measure	O
the	O
relationships	O
between	O
pairs	O
of	O
input	O
tokens	O
(	O
words	O
in	O
the	O
case	O
of	O
text	O
strings	O
)	O
,	O
termed	O
attention	B-General_Concept
.	O
</s>
<s>
For	O
images	O
,	O
the	O
basic	O
unit	O
of	O
analysis	O
is	O
the	O
pixel	B-Algorithm
.	O
</s>
<s>
However	O
,	O
computing	O
relationships	O
for	O
every	O
pixel	B-Algorithm
pair	O
in	O
a	O
typical	O
image	O
is	O
prohibitive	O
in	O
terms	O
of	O
memory	O
and	O
computation	O
.	O
</s>
<s>
Instead	O
,	O
ViT	B-Algorithm
computes	O
relationships	O
among	O
pixels	B-Algorithm
in	O
various	O
small	O
sections	O
of	O
the	O
image	O
(	O
e.g.	O
,	O
16x16	O
pixels	B-Algorithm
)	O
,	O
at	O
a	O
drastically	O
reduced	O
cost	O
.	O
</s>
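A rough back-of-the-envelope sketch of that saving, assuming a 224x224 input image (an assumed size; only the 16x16 patch size comes from the text above):

# Self-attention cost grows with the square of the number of tokens.
image_side = 224
patch_side = 16

num_pixels = image_side * image_side            # 50,176 pixels
num_patches = (image_side // patch_side) ** 2   # 196 patches

pixel_pairs = num_pixels ** 2                    # ~2.5 billion pixel pairs
patch_pairs = num_patches ** 2                   # 38,416 patch pairs

print(pixel_pairs // patch_pairs)                # 65536x fewer pairs to relate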
<s>
The	O
result	O
,	O
with	O
the	O
position	O
embedding	O
,	O
is	O
fed	O
to	O
the	O
transformer	B-Algorithm
.	O
</s>
<s>
As	O
in	O
the	O
case	O
of	O
BERT	B-General_Concept
,	O
a	O
fundamental	O
role	O
in	O
classification	O
tasks	O
is	O
played	O
by	O
the	O
class	O
token	O
.	O
</s>
<s>
This	O
special	O
token	O
is	O
used	O
as	O
the	O
only	O
input	O
of	O
the	O
final	O
MLP	B-Algorithm
Head	O
,	O
as	O
it	O
has	O
been	O
influenced	O
by	O
all	O
the	O
others	O
.	O
</s>
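A minimal sketch of the flow just described (patch embedding, position embedding, class token, Transformer Encoder, MLP head). The layer sizes and class name are illustrative assumptions, not taken from the text, and PyTorch's generic encoder layer is used in place of the exact ViT block:

import torch
import torch.nn as nn

class MiniViT(nn.Module):
    """Illustrative Vision Transformer sketch, not a reference implementation."""
    def __init__(self, image_size=224, patch_size=16, dim=768,
                 depth=12, heads=12, num_classes=1000):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # Patch embedding: a 16x16 convolution with stride 16 maps each patch to one vector.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        # Learnable class token and position embeddings (one per patch, plus the class token).
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # MLP head applied only to the class token output.
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                         # x: (batch, 3, H, W)
        x = self.patch_embed(x)                   # (batch, dim, H/16, W/16)
        x = x.flatten(2).transpose(1, 2)          # (batch, num_patches, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        x = self.encoder(x)
        return self.head(x[:, 0])                 # classify from the class token only

logits = MiniViT()(torch.randn(1, 3, 224, 224))   # (1, 1000) class scores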
<s>
The	O
architecture	O
for	O
image	O
classification	O
is	O
the	O
most	O
common	O
and	O
uses	O
only	O
the	O
Transformer	B-Algorithm
Encoder	O
in	O
order	O
to	O
transform	O
the	O
various	O
input	O
tokens	O
.	O
</s>
<s>
However	O
,	O
there	O
are	O
also	O
other	O
applications	O
in	O
which	O
the	O
decoder	O
part	O
of	O
the	O
traditional	O
Transformer	B-Algorithm
Architecture	O
is	O
also	O
used	O
.	O
</s>
<s>
The	O
general	O
transformer	B-Algorithm
architecture	O
was	O
initially	O
introduced	O
in	O
2017	O
in	O
the	O
well-known	O
paper	O
"	O
Attention	B-General_Concept
Is	O
All	O
You	O
Need	O
"	O
.	O
</s>
<s>
They	O
have	O
spread	O
widely	O
in	O
the	O
field	O
of	O
Natural	B-Language
Language	I-Language
Processing	I-Language
and	O
have	O
become	O
one	O
of	O
the	O
most	O
widely	O
used	O
and	O
promising	O
neural	O
network	O
architectures	O
in	O
the	O
field	O
.	O
</s>
<s>
In	O
2019	O
the	O
Vision	B-Algorithm
Transformer	I-Algorithm
architecture	O
for	O
processing	O
images	O
without	O
the	O
need	O
for	O
any	O
convolutions	O
was	O
proposed	O
by	O
Cordonnier	O
et	O
al.	O
,	O
and	O
later	O
empirically	O
evaluated	O
more	O
extensively	O
in	O
the	O
well-known	O
paper	O
"	O
An	O
image	O
is	O
worth	O
16x16	O
words	O
"	O
.	O
</s>
<s>
The	O
idea	O
is	O
basically	O
to	O
break	O
down	O
input	O
images	O
into	O
a	O
series	O
of	O
patches	O
which	O
,	O
once	O
transformed	O
into	O
vectors	O
,	O
are	O
seen	O
as	O
words	O
in	O
a	O
normal	O
transformer	B-Algorithm
.	O
</s>
<s>
Whereas	O
in	O
the	O
field	O
of	O
Natural	B-Language
Language	I-Language
Processing	I-Language
the	O
attention	B-General_Concept
mechanism	I-General_Concept
of	O
the	O
Transformers	B-Algorithm
tries	O
to	O
capture	O
the	O
relationships	O
between	O
different	O
words	O
of	O
the	O
text	O
to	O
be	O
analysed	O
,	O
in	O
Computer	O
Vision	O
the	O
Vision	B-Algorithm
Transformers	I-Algorithm
try	O
instead	O
to	O
capture	O
the	O
relationships	O
between	O
different	O
portions	O
of	O
an	O
image	O
.	O
</s>
<s>
In	O
2021	O
a	O
pure	O
transformer	B-Algorithm
model	I-Algorithm
demonstrated	O
better	O
performance	O
and	O
greater	O
efficiency	O
than	O
CNNs	B-Architecture
on	O
image	O
classification	O
.	O
</s>
<s>
A	O
study	O
in	O
June	O
2021	O
added	O
a	O
transformer	B-Algorithm
backend	O
to	O
ResNet	B-Algorithm
,	O
which	O
dramatically	O
reduced	O
costs	O
and	O
increased	O
accuracy	O
.	O
</s>
<s>
In	O
the	O
same	O
year	O
,	O
some	O
important	O
variants	O
of	O
the	O
Vision	B-Algorithm
Transformers	I-Algorithm
were	O
proposed	O
.	O
</s>
<s>
Among	O
the	O
most	O
relevant	O
is	O
the	O
Swin	O
Transformer	B-Algorithm
,	O
which	O
through	O
some	O
modifications	O
to	O
the	O
attention	B-General_Concept
mechanism	I-General_Concept
and	O
a	O
multi-stage	O
approach	O
achieved	O
state-of-the-art	O
results	O
on	O
some	O
object	B-General_Concept
detection	I-General_Concept
datasets	O
such	O
as	O
COCO	O
.	O
</s>
<s>
Another	O
interesting	O
variant	O
is	O
the	O
TimeSformer	O
,	O
designed	O
for	O
video	O
understanding	O
tasks	O
and	O
able	O
to	O
capture	O
spatial	O
and	O
temporal	O
information	O
through	O
the	O
use	O
of	O
divided	O
space-time	O
attention	B-General_Concept
.	O
</s>
<s>
Vision	B-Algorithm
Transformers	I-Algorithm
were	O
also	O
able	O
to	O
get	O
out	O
of	O
the	O
lab	O
and	O
into	O
one	O
of	O
the	O
most	O
important	O
fields	O
of	O
Computer	O
Vision	O
,	O
autonomous	O
driving	O
.	O
</s>
<s>
ViT	B-Algorithm
performance	O
depends	O
on	O
decisions	O
such	O
as	O
the	O
choice	O
of	O
optimizer	O
,	O
dataset-specific	O
hyperparameters	B-General_Concept
,	O
and	O
network	O
depth	O
.	O
</s>
<s>
CNNs	B-Architecture
are	O
much	O
easier	O
to	O
optimize	O
.	O
</s>
<s>
A	O
variation	O
on	O
a	O
pure	O
transformer	B-Algorithm
is	O
to	O
marry	O
a	O
transformer	B-Algorithm
to	O
a	O
CNN	B-Architecture
stem/front	O
end	O
.	O
</s>
<s>
A	O
typical	O
ViT	B-Algorithm
stem	O
uses	O
a	O
16x16	O
convolution	O
with	O
a	O
stride	O
of	O
16	O
.	O
</s>
<s>
The	O
CNN	B-Architecture
translates	O
from	O
the	O
basic	O
pixel	B-Algorithm
level	O
to	O
a	O
feature	O
map	O
.	O
</s>
<s>
A	O
tokenizer	O
translates	O
the	O
feature	O
map	O
into	O
a	O
series	O
of	O
tokens	O
that	O
are	O
then	O
fed	O
into	O
the	O
transformer	B-Algorithm
,	O
which	O
applies	O
the	O
attention	B-General_Concept
mechanism	I-General_Concept
to	O
produce	O
a	O
series	O
of	O
output	O
tokens	O
.	O
</s>
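A sketch of the hybrid pipeline just described: a small CNN stem producing a feature map, a tokenizer flattening it into tokens, and a Transformer Encoder applying attention. The stem layout (channel counts, number of layers) is an assumed example, not a specific published design; position embeddings and a classification head are omitted for brevity:

import torch
import torch.nn as nn

dim = 256

# CNN stem: translates from the pixel level to a feature map.
cnn_stem = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(128, dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
)

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
    num_layers=6,
)

image = torch.randn(1, 3, 224, 224)
feature_map = cnn_stem(image)                    # (1, dim, 28, 28) feature map
tokens = feature_map.flatten(2).transpose(1, 2)  # tokenizer: (1, 784, dim) tokens
output_tokens = encoder(tokens)                  # attention produces output tokens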
<s>
This	O
hybrid	O
approach	O
allows	O
the	O
analysis	O
to	O
exploit	O
potentially	O
significant	O
pixel-level	O
details	O
.	O
</s>
<s>
The	O
differences	O
between	O
CNNs	B-Architecture
and	O
Vision	B-Algorithm
Transformers	I-Algorithm
are	O
many	O
and	O
lie	O
mainly	O
in	O
their	O
architectures	O
.	O
</s>
<s>
In	O
fact	O
,	O
CNNs	B-Architecture
achieve	O
excellent	O
results	O
even	O
with	O
training	O
based	O
on	O
data	O
volumes	O
that	O
are	O
not	O
as	O
large	O
as	O
those	O
required	O
by	O
Vision	B-Algorithm
Transformers	I-Algorithm
.	O
</s>
<s>
This	O
different	O
behaviour	O
seems	O
to	O
derive	O
from	O
the	O
presence	O
in	O
CNNs	B-Architecture
of	O
certain	O
inductive	B-General_Concept
biases	I-General_Concept
,	O
which	O
these	O
networks	O
can	O
exploit	O
to	O
grasp	O
the	O
particularities	O
of	O
the	O
analysed	O
images	O
more	O
quickly	O
,	O
even	O
though	O
those	O
same	O
biases	O
end	O
up	O
limiting	O
them	O
and	O
making	O
it	O
more	O
complex	O
to	O
grasp	O
global	O
relations	O
.	O
</s>
<s>
On	O
the	O
other	O
hand	O
,	O
the	O
Vision	B-Algorithm
Transformers	I-Algorithm
are	O
free	O
from	O
these	O
biases	O
,	O
which	O
allows	O
them	O
to	O
capture	O
global	O
and	O
wider-range	O
relations	O
as	O
well	O
,	O
but	O
at	O
the	O
cost	O
of	O
more	O
onerous	O
training	O
in	O
terms	O
of	O
data	O
.	O
</s>
<s>
Vision	B-Algorithm
Transformers	I-Algorithm
also	O
proved	O
to	O
be	O
much	O
more	O
robust	O
to	O
input	O
image	O
distortions	O
such	O
as	O
adversarial	O
patches	O
or	O
permutations	O
.	O
</s>
<s>
However	O
,	O
choosing	O
one	O
architecture	O
over	O
another	O
is	O
not	O
always	O
the	O
wisest	O
choice	O
,	O
and	O
excellent	O
results	O
have	O
been	O
obtained	O
in	O
several	O
Computer	O
Vision	O
tasks	O
through	O
hybrid	O
architectures	O
combining	O
convolutional	O
layers	O
with	O
Vision	B-Algorithm
Transformers	I-Algorithm
.	O
</s>
<s>
The	O
considerable	O
need	O
for	O
data	O
during	O
the	O
training	O
phase	O
has	O
made	O
it	O
essential	O
to	O
find	O
alternative	O
methods	O
to	O
train	O
these	O
models	O
,	O
and	O
a	O
central	O
role	O
is	O
now	O
played	O
by	O
self-supervised	B-General_Concept
methods	I-General_Concept
.	O
</s>
<s>
Being	O
able	O
to	O
train	O
a	O
Vision	B-Algorithm
Transformer	I-Algorithm
without	O
needing	O
a	O
huge	O
vision	O
dataset	O
at	O
its	O
disposal	O
could	O
be	O
the	O
key	O
to	O
the	O
widespread	O
dissemination	O
of	O
this	O
promising	O
new	O
architecture	O
.	O
</s>
<s>
Vision	B-Algorithm
Transformers	I-Algorithm
have	O
been	O
used	O
in	O
many	O
Computer	O
Vision	O
tasks	O
with	O
excellent	O
results	O
and	O
in	O
some	O
cases	O
even	O
state-of-the-art	O
.	O
</s>
<s>
There	O
are	O
many	O
implementations	O
of	O
Vision	B-Algorithm
Transformers	I-Algorithm
and	O
their	O
variants	O
available	O
in	O
open	O
source	O
online	O
.	O
</s>
<s>
The	O
main	O
versions	O
of	O
this	O
architecture	O
have	O
been	O
implemented	O
in	O
PyTorch	B-Algorithm
,	O
but	O
implementations	O
have	O
also	O
been	O
made	O
available	O
for	O
TensorFlow	B-Language
.	O
</s>
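As an assumed example of such an open-source implementation (torchvision is not named in the text and is only one of several options, assuming torchvision >= 0.13 is installed), a pre-trained ViT can be loaded and run in a few lines:

import torch
from torchvision.models import vit_b_16, ViT_B_16_Weights

# Load ViT-B/16 with pre-trained weights and run it on a dummy 224x224 image.
model = vit_b_16(weights=ViT_B_16_Weights.DEFAULT)
model.eval()

with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 1000])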
