<s>
Adversarial	B-General_Concept
machine	I-General_Concept
learning	I-General_Concept
is	O
the	O
study	O
of	O
the	O
attacks	O
on	O
machine	O
learning	O
algorithms	O
,	O
and	O
of	O
the	O
defenses	O
against	O
such	O
attacks	O
.	O
</s>
<s>
Some	O
of	O
the	O
most	O
common	O
threat	O
models	O
in	O
adversarial	B-General_Concept
machine	I-General_Concept
learning	I-General_Concept
include	O
evasion	O
attacks	O
,	O
data	O
poisoning	O
attacks	O
,	O
Byzantine	B-Operating_System
attacks	I-Operating_System
and	O
model	O
extraction	O
.	O
</s>
<s>
In	O
2004	O
,	O
Nilesh	O
Dalvi	O
and	O
others	O
noted	O
that	O
linear	B-General_Concept
classifiers	I-General_Concept
used	O
in	O
spam	O
filters	O
could	O
be	O
defeated	O
by	O
simple	O
"	O
evasion	O
attacks	O
"	O
as	O
spammers	O
inserted	O
"	O
good	O
words	O
"	O
into	O
their	O
spam	O
emails	O
.	O
</s>
<s>
(	O
Around	O
2007	O
,	O
some	O
spammers	O
added	O
random	O
noise	O
to	O
fuzz	O
words	O
within	O
"	O
image	O
spam	O
"	O
in	O
order	O
to	O
defeat	O
OCR-based	O
filters	O
.	O
)	O
</s>
<s>
As	O
late	O
as	O
2013	O
many	O
researchers	O
continued	O
to	O
hope	O
that	O
non-linear	O
classifiers	O
(	O
such	O
as	O
support	B-Algorithm
vector	I-Algorithm
machines	I-Algorithm
and	O
neural	B-Architecture
networks	I-Architecture
)	O
might	O
be	O
robust	O
to	O
adversaries	O
,	O
until	O
Battista	O
Biggio	O
and	O
others	O
demonstrated	O
the	O
first	O
gradient-based	O
attacks	O
on	O
such	O
machine-learning	O
models	O
(	O
2012	O
–	O
2013	O
)	O
.	O
</s>
<s>
In	O
2012	O
,	O
deep	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
began	O
to	O
dominate	O
computer	O
vision	O
problems	O
;	O
starting	O
in	O
2014	O
,	O
Christian	O
Szegedy	O
and	O
others	O
demonstrated	O
that	O
deep	B-Algorithm
neural	I-Algorithm
networks	I-Algorithm
could	O
be	O
fooled	O
by	O
adversaries	O
,	O
again	O
using	O
a	O
gradient-based	O
attack	O
to	O
craft	O
adversarial	O
perturbations	O
.	O
</s>
<s>
Frosst	O
also	O
believes	O
that	O
the	O
adversarial	B-General_Concept
machine	I-General_Concept
learning	I-General_Concept
community	O
incorrectly	O
assumes	O
models	O
trained	O
on	O
a	O
certain	O
data	O
distribution	O
will	O
also	O
perform	O
well	O
on	O
a	O
completely	O
different	O
data	O
distribution	O
.	O
</s>
<s>
He	O
suggests	O
that	O
a	O
new	O
approach	O
to	O
machine	O
learning	O
should	O
be	O
explored	O
,	O
and	O
is	O
currently	O
working	O
on	O
a	O
unique	O
neural	B-Architecture
network	I-Architecture
that	O
has	O
characteristics	O
more	O
similar	O
to	O
human	O
perception	O
than	O
state	O
of	O
the	O
art	O
approaches	O
.	O
</s>
<s>
While	O
adversarial	B-General_Concept
machine	I-General_Concept
learning	I-General_Concept
continues	O
to	O
be	O
heavily	O
rooted	O
in	O
academia	O
,	O
large	O
tech	O
companies	O
such	O
as	O
Google	O
,	O
Microsoft	O
,	O
and	O
IBM	O
have	O
begun	O
curating	O
documentation	O
and	O
open	O
source	O
code	O
bases	O
to	O
allow	O
others	O
to	O
concretely	O
assess	O
the	O
robustness	B-Application
of	O
machine	O
learning	O
models	O
and	O
minimize	O
the	O
risk	O
of	O
adversarial	O
attacks	O
.	O
</s>
<s>
Examples	O
include	O
attacks	O
in	O
spam	O
filtering	O
,	O
where	O
spam	O
messages	O
are	O
obfuscated	O
through	O
the	O
misspelling	O
of	O
"	O
bad	O
"	O
words	O
or	O
the	O
insertion	O
of	O
"	O
good	O
"	O
words	O
;	O
attacks	O
in	O
computer	O
security	O
,	O
such	O
as	O
obfuscating	O
malware	O
code	O
within	O
network	B-Protocol
packets	I-Protocol
or	O
modifying	O
the	O
characteristics	O
of	O
a	O
network	B-Operating_System
flow	I-Operating_System
to	O
mislead	O
intrusion	O
detection	O
;	O
attacks	O
in	O
biometric	O
recognition	O
where	O
fake	O
biometric	O
traits	O
may	O
be	O
exploited	O
to	O
impersonate	O
a	O
legitimate	O
user	O
;	O
or	O
to	O
compromise	O
users	O
 '	O
template	O
galleries	O
that	O
adapt	O
to	O
updated	O
traits	O
over	O
time	O
.	O
</s>
<s>
Researchers	O
showed	O
that	O
by	O
changing	O
only	O
one-pixel	O
it	O
was	O
possible	O
to	O
fool	O
deep	B-Algorithm
learning	I-Algorithm
algorithms	O
.	O
</s>
<s>
Others	O
3-D	O
printed	O
a	O
toy	O
turtle	O
with	O
a	O
texture	O
engineered	O
to	O
make	O
Google	O
's	O
object	O
detection	O
AI	B-Application
classify	O
it	O
as	O
a	O
rifle	O
regardless	O
of	O
the	O
angle	O
from	O
which	O
the	O
turtle	O
was	O
viewed	O
.	O
</s>
<s>
McAfee	O
attacked	O
Tesla	O
's	O
former	O
Mobileye	B-Application
system	O
,	O
fooling	O
it	O
into	O
driving	O
50mph	O
over	O
the	O
speed	O
limit	O
,	O
simply	O
by	O
adding	O
a	O
two-inch	O
strip	O
of	O
black	O
tape	O
to	O
a	O
speed	O
limit	O
sign	O
.	O
</s>
<s>
An	O
adversarial	O
attack	O
on	O
a	O
neural	B-Architecture
network	I-Architecture
can	O
allow	O
an	O
attacker	O
to	O
inject	O
algorithms	O
into	O
the	O
target	O
system	O
.	O
</s>
<s>
In	O
federated	B-Operating_System
learning	I-Operating_System
,	O
for	O
instance	O
,	O
edge	O
devices	O
collaborate	O
with	O
a	O
central	O
server	O
,	O
typically	O
by	O
sending	O
gradients	O
or	O
model	O
parameters	O
.	O
</s>
<s>
On	O
the	O
other	O
hand	O
,	O
if	O
the	O
training	O
is	O
performed	O
on	O
a	O
single	O
machine	O
,	O
then	O
the	O
model	O
is	O
very	O
vulnerable	O
to	O
a	O
failure	O
of	O
the	O
machine	O
,	O
or	O
an	O
attack	O
on	O
the	O
machine	O
;	O
the	O
machine	O
is	O
a	O
single	B-Architecture
point	I-Architecture
of	I-Architecture
failure	I-Architecture
.	O
</s>
<s>
Byzantine	B-Operating_System
)	O
participants	O
are	O
based	O
on	O
robust	O
gradient	O
aggregation	O
rules	O
.	O
</s>
<s>
Evasion	O
attacks	O
can	O
be	O
generally	O
split	O
into	O
two	O
different	O
categories	O
:	O
black	B-Device
box	I-Device
attacks	I-Device
and	O
white	B-General_Concept
box	I-General_Concept
attacks	I-General_Concept
.	O
</s>
<s>
Model	O
extraction	O
involves	O
an	O
adversary	O
probing	O
a	O
black	B-Device
box	I-Device
machine	O
learning	O
system	O
in	O
order	O
to	O
extract	O
the	O
data	O
it	O
was	O
trained	O
on	O
.	O
</s>
<s>
On	O
the	O
other	O
hand	O
,	O
membership	O
inference	O
is	O
a	O
targeted	O
model	O
extraction	O
attack	O
,	O
which	O
infers	O
the	O
owner	O
of	O
a	O
data	O
point	O
,	O
often	O
by	O
leveraging	O
the	O
overfitting	B-Error_Name
resulting	O
from	O
poor	O
machine	O
learning	O
practices	O
.	O
</s>
<s>
With	O
the	O
emergence	O
of	O
transfer	B-General_Concept
learning	I-General_Concept
and	O
public	O
accessibility	O
of	O
many	O
state	O
of	O
the	O
art	O
machine	O
learning	O
models	O
,	O
tech	O
companies	O
are	O
increasingly	O
drawn	O
to	O
create	O
models	O
based	O
on	O
public	O
ones	O
,	O
giving	O
attackers	O
freely	O
accessible	O
information	O
to	O
the	O
structure	O
and	O
type	O
of	O
model	O
being	O
used	O
.	O
</s>
<s>
Many	O
of	O
these	O
work	O
on	O
both	O
deep	B-Algorithm
learning	I-Algorithm
systems	O
as	O
well	O
as	O
traditional	O
machine	O
learning	O
models	O
such	O
as	O
SVMs	B-Algorithm
and	O
linear	B-General_Concept
regression	I-General_Concept
.	O
</s>
<s>
Black	B-Device
box	I-Device
attacks	I-Device
in	O
adversarial	B-General_Concept
machine	I-General_Concept
learning	I-General_Concept
assumes	O
that	O
the	O
adversary	O
can	O
only	O
get	O
outputs	O
for	O
provided	O
inputs	O
and	O
has	O
no	O
knowledge	O
of	O
the	O
model	O
structure	O
or	O
parameters	O
.	O
</s>
<s>
In	O
either	O
case	O
,	O
the	O
objective	O
of	O
these	O
attacks	O
are	O
to	O
create	O
adversarial	O
examples	O
that	O
are	O
able	O
to	O
transfer	O
to	O
the	O
black	B-Device
box	I-Device
model	O
in	O
question	O
.	O
</s>
<s>
The	O
Square	O
Attack	O
was	O
introduced	O
in	O
2020	O
as	O
a	O
black	B-Device
box	I-Device
evasion	O
adversarial	O
attack	O
based	O
on	O
querying	O
classification	O
scores	O
without	O
the	O
need	O
of	O
gradient	O
information	O
.	O
</s>
<s>
As	O
a	O
score	O
based	O
black	B-Device
box	I-Device
attack	O
,	O
this	O
adversarial	O
approach	O
is	O
able	O
to	O
query	O
probability	O
distributions	O
across	O
model	O
output	O
classes	O
,	O
but	O
has	O
no	O
other	O
access	O
to	O
the	O
model	O
itself	O
.	O
</s>
<s>
According	O
to	O
the	O
paper	O
's	O
authors	O
,	O
the	O
proposed	O
Square	O
Attack	O
required	O
less	O
queries	O
than	O
when	O
compared	O
to	O
state	O
of	O
the	O
art	O
score	O
based	O
black	B-Device
box	I-Device
attacks	I-Device
at	O
the	O
time	O
.	O
</s>
<s>
The	O
paper	O
then	O
defines	O
loss	B-Algorithm
as	O
and	O
proposes	O
the	O
solution	O
to	O
finding	O
adversarial	O
example	O
as	O
solving	O
the	O
below	O
constrained	B-Application
optimization	I-Application
problem	I-Application
:	O
</s>
<s>
To	O
find	O
such	O
example	O
,	O
Square	O
Attack	O
utilizes	O
the	O
iterative	O
random	B-Algorithm
search	I-Algorithm
technique	O
to	O
randomly	O
perturb	O
the	O
image	O
in	O
hopes	O
of	O
improving	O
the	O
objective	O
function	O
.	O
</s>
<s>
This	O
black	B-Device
box	I-Device
attack	O
was	O
also	O
proposed	O
as	O
a	O
query	O
efficient	O
attack	O
,	O
but	O
one	O
that	O
relies	O
solely	O
on	O
access	O
to	O
any	O
input	O
's	O
predicted	O
output	O
class	O
.	O
</s>
<s>
However	O
,	O
since	O
HopSkipJump	O
is	O
a	O
proposed	O
black	B-Device
box	I-Device
attack	O
and	O
the	O
iterative	O
algorithm	O
above	O
requires	O
the	O
calculation	O
of	O
a	O
gradient	O
in	O
the	O
second	O
iterative	O
step	O
(	O
which	O
black	B-Device
box	I-Device
attacks	I-Device
do	O
not	O
have	O
access	O
to	O
)	O
,	O
the	O
authors	O
propose	O
a	O
solution	O
to	O
gradient	O
calculation	O
that	O
requires	O
only	O
the	O
model	O
's	O
output	O
predictions	O
alone	O
.	O
</s>
<s>
The	O
result	O
of	O
the	O
equation	O
above	O
gives	O
a	O
close	O
approximation	O
of	O
the	O
gradient	O
required	O
in	O
step	O
2	O
of	O
the	O
iterative	O
algorithm	O
,	O
completing	O
HopSkipJump	O
as	O
a	O
black	B-Device
box	I-Device
attack	O
.	O
</s>
<s>
White	B-General_Concept
box	I-General_Concept
attacks	I-General_Concept
assumes	O
that	O
the	O
adversary	O
has	O
access	O
to	O
model	O
parameters	O
on	O
top	O
of	O
being	O
able	O
to	O
get	O
labels	O
for	O
provided	O
inputs	O
.	O
</s>
<s>
Shown	O
below	O
is	O
the	O
equation	O
to	O
generate	O
an	O
adversarial	O
example	O
where	O
is	O
the	O
original	O
image	O
,	O
is	O
a	O
very	O
small	O
number	O
,	O
is	O
the	O
gradient	O
function	O
,	O
is	O
the	O
loss	B-Algorithm
function	O
,	O
is	O
the	O
model	O
weights	O
,	O
and	O
is	O
the	O
true	O
label	O
.	O
</s>
<s>
One	O
important	O
property	O
of	O
this	O
equation	O
is	O
that	O
the	O
gradient	O
is	O
calculated	O
with	O
respect	O
to	O
the	O
input	O
image	O
since	O
the	O
goal	O
is	O
to	O
generate	O
an	O
image	O
that	O
maximizes	O
the	O
loss	B-Algorithm
for	O
the	O
original	O
image	O
of	O
true	O
label	O
.	O
</s>
<s>
In	O
traditional	O
gradient	B-Algorithm
descent	I-Algorithm
(	O
for	O
model	O
training	O
)	O
,	O
the	O
gradient	O
is	O
used	O
to	O
update	O
the	O
weights	O
of	O
the	O
model	O
since	O
the	O
goal	O
is	O
to	O
minimize	O
the	O
loss	B-Algorithm
for	O
the	O
model	O
on	O
a	O
ground	O
truth	O
dataset	O
.	O
</s>
<s>
The	O
Fast	O
Gradient	O
Sign	O
Method	O
was	O
proposed	O
as	O
a	O
fast	O
way	O
to	O
generate	O
adversarial	O
examples	O
to	O
evade	O
the	O
model	O
,	O
based	O
on	O
the	O
hypothesis	O
that	O
neural	B-Architecture
networks	I-Architecture
cannot	O
resist	O
even	O
linear	O
amounts	O
of	O
perturbation	O
to	O
the	O
input	O
.	O
</s>
<s>
When	O
solved	O
using	O
gradient	B-Algorithm
descent	I-Algorithm
,	O
this	O
equation	O
is	O
able	O
to	O
produce	O
stronger	O
adversarial	O
examples	O
when	O
compared	O
to	O
fast	O
gradient	O
sign	O
method	O
that	O
is	O
also	O
able	O
to	O
bypass	O
defensive	O
distillation	O
,	O
a	O
defense	O
that	O
was	O
once	O
proposed	O
to	O
be	O
effective	O
against	O
adversarial	O
examples	O
.	O
</s>
<s>
AI-written	O
algorithms	O
.	O
</s>
<s>
AIs	B-Application
that	O
explore	O
the	O
training	O
environment	O
;	O
for	O
example	O
,	O
in	O
image	O
recognition	O
,	O
actively	O
navigating	O
a	O
3D	O
environment	O
rather	O
than	O
passively	O
scanning	O
a	O
fixed	O
set	O
of	O
2D	O
images	O
.	O
</s>
<s>
Ensembles	B-Algorithm
of	O
models	O
have	O
been	O
proposed	O
in	O
the	O
literature	O
but	O
caution	O
should	O
be	O
applied	O
when	O
relying	O
on	O
them	O
:	O
usually	O
ensembling	O
weak	O
classifiers	O
results	O
in	O
a	O
more	O
accurate	O
model	O
but	O
it	O
does	O
not	O
seem	O
to	O
apply	O
in	O
the	O
adversarial	O
context	O
.	O
</s>
