<s>
Federated	B-Operating_System
learning	I-Operating_System
(	O
also	O
known	O
as	O
collaborative	O
learning	O
)	O
is	O
a	O
machine	O
learning	O
technique	O
that	O
trains	O
an	O
algorithm	O
via	O
multiple	O
independent	O
sessions	O
,	O
each	O
using	O
its	O
own	O
dataset	O
.	O
</s>
<s>
Federated	B-Operating_System
learning	I-Operating_System
enables	O
multiple	O
actors	O
to	O
build	O
a	O
common	O
,	O
robust	O
machine	O
learning	O
model	O
without	O
sharing	O
data	O
,	O
thus	O
addressing	O
critical	O
issues	O
such	O
as	O
data	O
privacy	O
,	O
data	O
security	O
,	O
data	O
access	O
rights	O
and	O
access	O
to	O
heterogeneous	O
data	O
.	O
</s>
<s>
Its	O
applications	O
engage	O
industries	O
including	O
defense	O
,	O
telecommunications	O
,	O
Internet	B-Operating_System
of	I-Operating_System
Things	I-Operating_System
,	O
and	O
pharmaceuticals	O
.	O
</s>
<s>
A	O
major	O
open	O
question	O
is	O
when/whether	O
federated	B-Operating_System
learning	I-Operating_System
is	O
preferable	O
to	O
pooled	O
data	O
learning	O
.	O
</s>
<s>
Federated	B-Operating_System
learning	I-Operating_System
aims	O
at	O
training	O
a	O
machine	O
learning	O
algorithm	O
,	O
for	O
instance	O
deep	O
neural	B-Architecture
networks	I-Architecture
,	O
on	O
multiple	O
local	O
datasets	O
contained	O
in	O
local	O
nodes	O
without	O
explicitly	O
exchanging	O
data	O
samples	O
.	O
</s>
<s>
The	O
general	O
principle	O
consists	O
of	O
training	O
local	O
models	O
on	O
local	O
data	O
samples	O
and	O
exchanging	O
parameters	O
(	O
e.g.	O
the	O
weights	O
and	O
biases	O
of	O
a	O
deep	O
neural	B-Architecture
network	I-Architecture
)	O
between	O
these	O
local	O
nodes	O
at	O
some	O
frequency	O
to	O
generate	O
a	O
global	O
model	O
shared	O
by	O
all	O
nodes	O
.	O
</s>
<s>
The	O
main	O
difference	O
between	O
federated	B-Operating_System
learning	I-Operating_System
and	O
distributed	O
learning	O
lies	O
in	O
the	O
assumptions	O
made	O
on	O
the	O
properties	O
of	O
the	O
local	O
datasets	O
,	O
as	O
distributed	O
learning	O
originally	O
aims	O
at	O
parallelizing	O
computing	O
power	O
where	O
federated	B-Operating_System
learning	I-Operating_System
originally	O
aims	O
at	O
training	O
on	O
heterogeneous	B-General_Concept
datasets	I-General_Concept
.	O
</s>
<s>
None	O
of	O
these	O
hypotheses	O
are	O
made	O
for	O
federated	B-Operating_System
learning	I-Operating_System
;	O
instead	O
,	O
the	O
datasets	O
are	O
typically	O
heterogeneous	O
and	O
their	O
sizes	O
may	O
span	O
several	O
orders	O
of	O
magnitude	O
.	O
</s>
<s>
Moreover	O
,	O
the	O
clients	O
involved	O
in	O
federated	B-Operating_System
learning	I-Operating_System
may	O
be	O
unreliable	O
as	O
they	O
are	O
subject	O
to	O
more	O
failures	O
or	O
drop	O
out	O
since	O
they	O
commonly	O
rely	O
on	O
less	O
powerful	O
communication	O
media	O
(	O
i.e.	O
</s>
<s>
smartphones	B-Application
and	O
IoT	B-Operating_System
devices	O
)	O
compared	O
to	O
distributed	O
learning	O
where	O
nodes	O
are	O
typically	O
datacenters	B-Operating_System
that	O
have	O
powerful	O
computational	O
capabilities	O
and	O
are	O
connected	O
to	O
one	O
another	O
with	O
fast	O
networks	O
.	O
</s>
<s>
The	O
objective	O
function	O
for	O
federated	B-Operating_System
learning	I-Operating_System
is	O
as	O
follows	O
:	O
</s>
<s>
The	O
goal	O
of	O
federated	B-Operating_System
learning	I-Operating_System
is	O
to	O
train	O
a	O
common	O
model	O
on	O
all	O
of	O
the	O
nodes	O
 '	O
local	O
datasets	O
,	O
in	O
other	O
words	O
:	O
</s>
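The two formulas elided above can be restated explicitly. A standard formulation (an assumption here, since this excerpt defines no symbols) with K nodes, n_k samples on node k and n the total sample count is:

```latex
% Global federated objective (assumed standard form; symbols are not defined in this excerpt)
\min_{w}\; f(w) \;=\; \sum_{k=1}^{K} \frac{n_k}{n}\, F_k(w),
\qquad
F_k(w) \;=\; \frac{1}{n_k} \sum_{i=1}^{n_k} \ell\bigl(w;\, x_{k,i},\, y_{k,i}\bigr)

% Consensus: all K local models agree on a common parameter vector
w_1 = w_2 = \dots = w_K = w
```

Here F_k is the local empirical risk on node k's dataset, and consensus means every node ends up holding the same global parameters w.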
<s>
Achieving	O
consensus	B-Operating_System
on	O
.	O
</s>
<s>
In	O
the	O
centralized	O
federated	B-Operating_System
learning	I-Operating_System
setting	O
,	O
a	O
central	O
server	O
is	O
used	O
to	O
orchestrate	O
the	O
different	O
steps	O
of	O
the	O
algorithms	O
and	O
coordinate	O
all	O
the	O
participating	O
nodes	O
during	O
the	O
learning	O
process	O
.	O
</s>
<s>
In	O
the	O
decentralized	O
federated	B-Operating_System
learning	I-Operating_System
setting	O
,	O
the	O
nodes	O
are	O
able	O
to	O
coordinate	O
themselves	O
to	O
obtain	O
the	O
global	O
model	O
.	O
</s>
<s>
Nevertheless	O
,	O
the	O
specific	O
network	B-Architecture
topology	I-Architecture
may	O
affect	O
the	O
performance	O
of	O
the	O
learning	O
process	O
.	O
</s>
<s>
See	O
blockchain-based	O
federated	B-Operating_System
learning	I-Operating_System
and	O
the	O
references	O
therein	O
.	O
</s>
<s>
An	O
increasing	O
number	O
of	O
application	O
domains	O
involve	O
a	O
large	O
set	O
of	O
heterogeneous	O
clients	O
,	O
e.g.	O
,	O
mobile	O
phones	O
and	O
IoT	B-Operating_System
devices	O
.	O
</s>
<s>
Most	O
of	O
the	O
existing	O
Federated	B-Operating_System
learning	I-Operating_System
strategies	O
assume	O
that	O
local	O
models	O
share	O
the	O
same	O
global	O
model	O
architecture	O
.	O
</s>
<s>
Recently	O
,	O
a	O
new	O
federated	B-Operating_System
learning	I-Operating_System
framework	O
named	O
HeteroFL	O
was	O
developed	O
to	O
address	O
heterogeneous	O
clients	O
equipped	O
with	O
very	O
different	O
computation	O
and	O
communication	O
capabilities	O
.	O
</s>
<s>
To	O
ensure	O
good	O
task	O
performance	O
of	O
a	O
final	O
,	O
central	O
machine	O
learning	O
model	O
,	O
federated	B-Operating_System
learning	I-Operating_System
relies	O
on	O
an	O
iterative	O
process	O
broken	O
up	O
into	O
an	O
atomic	O
set	O
of	O
client-server	O
interactions	O
known	O
as	O
a	O
federated	B-Operating_System
learning	I-Operating_System
round	O
.	O
</s>
<s>
However	O
,	O
other	O
strategies	O
lead	O
to	O
the	O
same	O
results	O
without	O
central	O
servers	O
,	O
in	O
a	O
peer-to-peer	O
approach	O
,	O
using	O
gossip	B-Operating_System
or	O
consensus	B-Operating_System
methodologies	O
.	O
</s>
<s>
Initialization	O
:	O
according	O
to	O
the	O
server	O
inputs	O
,	O
a	O
machine	O
learning	O
model	O
(	O
e.g.	O
,	O
linear	B-General_Concept
regression	I-General_Concept
,	O
neural	B-Architecture
network	I-Architecture
,	O
boosting	B-Algorithm
)	O
is	O
chosen	O
to	O
be	O
trained	O
on	O
local	O
nodes	O
and	O
initialized	O
.	O
</s>
<s>
Configuration	O
:	O
the	O
central	O
server	O
orders	O
selected	O
nodes	O
to	O
undergo	O
training	O
of	O
the	O
model	O
on	O
their	O
local	O
data	O
in	O
a	O
pre-specified	O
fashion	O
(	O
e.g.	O
,	O
for	O
some	O
mini-batch	O
updates	O
of	O
gradient	B-Algorithm
descent	I-Algorithm
)	O
.	O
</s>
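The initialization and configuration steps above can be sketched as one federated round: the server broadcasts a model, selected nodes run a few mini-batch gradient-descent updates on their own data, and the server averages the results weighted by node size. Everything below (the 1-D least-squares model, data, learning rate) is invented for illustration, not taken from the text.

```python
import random

def local_update(w, data, lr=0.05, batch_size=2, steps=10):
    """One node: a few mini-batch gradient-descent steps of 1-D least squares."""
    for _ in range(steps):
        batch = random.sample(data, min(batch_size, len(data)))
        grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
        w -= lr * grad
    return w

def federated_round(w_global, node_datasets):
    """Broadcast w_global, train locally, then average weighted by node size."""
    local_models = [local_update(w_global, d) for d in node_datasets]
    total = sum(len(d) for d in node_datasets)
    return sum(w * len(d) / total for w, d in zip(local_models, node_datasets))

random.seed(0)
# Two hypothetical nodes, each holding samples of y = 3x.
nodes = [[(x, 3 * x) for x in (1.0, 2.0)],
         [(x, 3 * x) for x in (0.5, 1.5, 2.5)]]
w = 0.0
for _ in range(20):
    w = federated_round(w, nodes)
# w approaches the true slope 3.0 without any node sharing its raw samples
```

Note that only the scalar model, never the (x, y) pairs, crosses between nodes and server.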
<s>
Recent	O
federated	B-Operating_System
learning	I-Operating_System
developments	O
introduced	O
novel	O
techniques	O
to	O
tackle	O
asynchronicity	O
during	O
the	O
training	O
process	O
,	O
or	O
training	O
with	O
dynamically	O
varying	O
models	O
.	O
</s>
<s>
Compared	O
to	O
synchronous	O
approaches	O
where	O
local	O
models	O
are	O
exchanged	O
once	O
the	O
computations	O
have	O
been	O
performed	O
for	O
all	O
layers	O
of	O
the	O
neural	B-Architecture
network	I-Architecture
,	O
asynchronous	O
ones	O
leverage	O
the	O
properties	O
of	O
neural	B-Architecture
networks	I-Architecture
to	O
exchange	O
model	O
updates	O
as	O
soon	O
as	O
the	O
computations	O
of	O
a	O
certain	O
layer	O
are	O
available	O
.	O
</s>
<s>
These	O
techniques	O
are	O
also	O
commonly	O
referred	O
to	O
as	O
split	O
learning	O
and	O
they	O
can	O
be	O
applied	O
both	O
at	O
training	O
and	O
inference	O
time	O
regardless	O
of	O
centralized	O
or	O
decentralized	O
federated	B-Operating_System
learning	I-Operating_System
settings	O
.	O
</s>
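As a minimal sketch of the split-learning idea just described: the client evaluates the layers up to a cut point and transmits only those activations; the server completes the forward pass as soon as they arrive. All weights and layer sizes here are hypothetical.

```python
def client_forward(x, w1):
    """Client-side half of the model: compute activations up to the cut layer."""
    h = [sum(wij * xj for wij, xj in zip(row, x)) for row in w1]
    return [max(0.0, v) for v in h]  # ReLU at the cut layer

def server_forward(h, w2):
    """Server-side half: finish the forward pass from the received activations."""
    return [sum(wij * hj for wij, hj in zip(row, h)) for row in w2]

# Hypothetical weights for a tiny 2-4-1 network; only the cut-layer
# activations h ever leave the client, never the raw input x.
w1 = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [-1.0, 1.0]]
w2 = [[0.25, 0.25, 0.25, 0.25]]
x = [1.0, 2.0]
y = server_forward(client_forward(x, w1), w2)
```

The same split works at training time by sending gradients back across the cut in the reverse direction.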
<s>
In	O
most	O
cases	O
,	O
the	O
assumption	O
of	O
independent	O
and	O
identically	O
distributed	O
samples	O
across	O
local	O
nodes	O
does	O
not	O
hold	O
for	O
federated	B-Operating_System
learning	I-Operating_System
setups	O
.	O
</s>
<s>
Under	O
this	O
setting	O
,	O
the	O
performance	O
of	O
the	O
training	O
process	O
may	O
vary	O
significantly	O
according	O
to	O
the	O
unbalanced	O
local	O
data	O
samples	O
as	O
well	O
as	O
the	O
particular	O
probability	O
distribution	O
of	O
the	O
training	O
examples	O
(	O
i.e.	O
,	O
features	B-Algorithm
and	O
labels	B-General_Concept
)	O
stored	O
at	O
the	O
local	O
nodes	O
.	O
</s>
<s>
The	O
description	O
of	O
non-IID	O
data	O
relies	O
on	O
the	O
analysis	O
of	O
the	O
joint	O
probability	O
between	O
features	B-Algorithm
and	O
labels	B-General_Concept
for	O
each	O
node	O
.	O
</s>
<s>
An	O
example	O
occurs	O
in	O
natural	B-Language
language	I-Language
processing	I-Language
datasets	O
where	O
people	O
typically	O
write	O
the	O
same	O
digits/letters	O
with	O
different	O
stroke	O
widths	O
or	O
slants	O
.	O
</s>
<s>
Prior	O
probability	O
shift	O
:	O
local	O
nodes	O
may	O
store	O
labels	B-General_Concept
that	O
have	O
different	O
statistical	O
distributions	O
compared	O
to	O
other	O
nodes	O
.	O
</s>
<s>
Concept	O
drift	O
(	O
same	O
label	O
,	O
different	O
features	B-Algorithm
)	O
:	O
local	O
nodes	O
may	O
share	O
the	O
same	O
labels	B-General_Concept
but	O
some	O
of	O
them	O
correspond	O
to	O
different	O
features	B-Algorithm
at	O
different	O
local	O
nodes	O
.	O
</s>
<s>
Concept	O
shift	O
(	O
same	O
features	B-Algorithm
,	O
different	O
labels	B-General_Concept
)	O
:	O
local	O
nodes	O
may	O
share	O
the	O
same	O
features	B-Algorithm
but	O
some	O
of	O
them	O
correspond	O
to	O
different	O
labels	B-General_Concept
at	O
different	O
local	O
nodes	O
.	O
</s>
<s>
For	O
example	O
,	O
in	O
natural	B-Language
language	I-Language
processing	I-Language
,	O
the	O
sentiment	O
analysis	O
may	O
yield	O
different	O
sentiments	O
even	O
if	O
the	O
same	O
text	O
is	O
observed	O
.	O
</s>
<s>
The	O
loss	O
in	O
accuracy	O
due	O
to	O
non-IID	O
data	O
can	O
be	O
bounded	O
by	O
using	O
more	O
sophisticated	O
data	O
normalization	O
methods	O
rather	O
than	O
batch	B-General_Concept
normalization	I-General_Concept
.	O
</s>
<s>
This	O
leads	O
to	O
a	O
variety	O
of	O
federated	B-Operating_System
learning	I-Operating_System
approaches	O
:	O
for	O
instance	O
no	O
central	O
orchestrating	O
server	O
,	O
or	O
stochastic	O
communication	O
.	O
</s>
<s>
Once	O
the	O
topology	B-Architecture
of	O
the	O
node	O
network	O
is	O
chosen	O
,	O
one	O
can	O
control	O
different	O
parameters	O
of	O
the	O
federated	B-Operating_System
learning	I-Operating_System
process	O
(	O
as	O
opposed	O
to	O
the	O
machine	O
learning	O
model	O
's	O
own	O
hyperparameters	O
)	O
to	O
optimize	O
learning	O
:	O
</s>
<s>
Number	O
of	O
federated	B-Operating_System
learning	I-Operating_System
rounds	O
:	O
</s>
<s>
Local	O
batch	B-General_Concept
size	I-General_Concept
used	O
at	O
each	O
learning	O
iteration	O
:	O
</s>
<s>
For	O
instance	O
,	O
stochastically	O
choosing	O
a	O
limited	O
fraction	O
of	O
nodes	O
for	O
each	O
iteration	O
diminishes	O
computing	O
cost	O
and	O
may	O
prevent	O
overfitting	B-Error_Name
,	O
in	O
the	O
same	O
way	O
that	O
stochastic	B-Algorithm
gradient	I-Algorithm
descent	I-Algorithm
can	O
reduce	O
overfitting	B-Error_Name
.	O
</s>
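The stochastic node selection described above can be sketched as follows; the pool size and fraction are made up for illustration.

```python
import random

def select_clients(client_ids, fraction, rng):
    """Stochastically pick a fraction of the nodes for one round (at least one)."""
    k = max(1, int(fraction * len(client_ids)))
    return rng.sample(client_ids, k)

rng = random.Random(42)        # fixed seed for reproducibility
clients = list(range(100))     # hypothetical pool of 100 nodes
chosen = select_clients(clients, 0.1, rng)   # 10 nodes participate this round
```

Each round draws a fresh subset, so over many rounds every node still contributes while per-round computation and communication stay bounded.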
<s>
Federated	B-Operating_System
learning	I-Operating_System
requires	O
frequent	O
communication	O
between	O
nodes	O
during	O
the	O
learning	O
process	O
.	O
</s>
<s>
Nevertheless	O
,	O
the	O
devices	O
typically	O
employed	O
in	O
federated	B-Operating_System
learning	I-Operating_System
are	O
communication-constrained	O
,	O
for	O
example	O
,	O
IoT	B-Operating_System
devices	O
or	O
smartphones	B-Application
are	O
generally	O
connected	O
to	O
Wi-Fi	O
networks	O
.	O
</s>
<s>
Thus	O
,	O
even	O
if	O
the	O
models	O
are	O
commonly	O
less	O
expensive	O
to	O
transmit	O
compared	O
to	O
raw	O
data	O
,	O
federated	B-Operating_System
learning	I-Operating_System
mechanisms	O
may	O
not	O
be	O
suitable	O
in	O
their	O
general	O
form	O
.	O
</s>
<s>
Federated	B-Operating_System
learning	I-Operating_System
raises	O
several	O
statistical	O
challenges	O
:	O
</s>
<s>
Heterogeneity	B-General_Concept
between	O
the	O
different	O
local	O
datasets	O
:	O
each	O
node	O
may	O
have	O
some	O
bias	O
with	O
respect	O
to	O
the	O
general	O
population	O
,	O
and	O
the	O
size	O
of	O
the	O
datasets	O
may	O
vary	O
significantly	O
;	O
</s>
<s>
Temporal	O
heterogeneity	B-General_Concept
:	O
each	O
local	O
dataset	O
's	O
distribution	O
may	O
vary	O
with	O
time	O
;	O
</s>
<s>
Lack	O
of	O
annotations	O
or	O
labels	B-General_Concept
on	O
the	O
client	O
side	O
.	O
</s>
<s>
Deep	B-Algorithm
learning	I-Algorithm
training	O
mainly	O
relies	O
on	O
variants	O
of	O
stochastic	B-Algorithm
gradient	I-Algorithm
descent	I-Algorithm
,	O
where	O
gradients	O
are	O
computed	O
on	O
a	O
random	O
subset	O
of	O
the	O
total	O
dataset	O
and	O
then	O
used	O
to	O
make	O
one	O
step	O
of	O
the	O
gradient	B-Algorithm
descent	I-Algorithm
.	O
</s>
<s>
Federated	O
stochastic	B-Algorithm
gradient	I-Algorithm
descent	I-Algorithm
is	O
the	O
direct	O
transposition	O
of	O
this	O
algorithm	O
to	O
the	O
federated	O
setting	O
,	O
but	O
using	O
a	O
random	O
fraction	O
of	O
the	O
nodes	O
and	O
all	O
the	O
data	O
on	O
each	O
selected	O
node	O
.	O
</s>
<s>
The	O
gradients	O
are	O
averaged	O
by	O
the	O
server	O
proportionally	O
to	O
the	O
number	O
of	O
training	O
samples	O
on	O
each	O
node	O
,	O
and	O
used	O
to	O
make	O
a	O
gradient	B-Algorithm
descent	I-Algorithm
step	O
.	O
</s>
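The weighted averaging step just described can be sketched as a single federated SGD update; the gradients, sample counts and learning rate below are invented for illustration.

```python
def fedsgd_step(w, node_grads, node_sizes, lr=0.1):
    """Average node gradients weighted by their sample counts, then descend."""
    total = sum(node_sizes)
    avg = [sum(g[i] * n for g, n in zip(node_grads, node_sizes)) / total
           for i in range(len(w))]
    return [wi - lr * gi for wi, gi in zip(w, avg)]

# Hypothetical gradients reported by three nodes holding 10, 30 and 60 samples:
# larger nodes pull the average harder, matching the proportional weighting.
w = fedsgd_step([1.0, -2.0],
                [[0.2, 0.0], [0.1, 0.4], [0.0, 0.1]],
                [10, 30, 60])
```

With equal node sizes this reduces to a plain gradient average, recovering ordinary mini-batch SGD over the pooled data.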
<s>
Federated	B-Operating_System
learning	I-Operating_System
methods	O
suffer	O
when	O
the	O
device	O
datasets	O
are	O
heterogeneously	O
distributed	O
.	O
</s>
<s>
Since	O
the	O
local	O
losses	O
are	O
aligned	O
,	O
FedDyn	O
is	O
robust	O
to	O
the	O
different	O
heterogeneity	B-General_Concept
levels	O
and	O
it	O
can	O
safely	O
perform	O
full	O
minimization	O
in	O
each	O
device	O
.	O
</s>
<s>
Theoretically	O
,	O
FedDyn	O
converges	O
to	O
the	O
optimum	O
(	O
a	O
stationary	O
point	O
for	O
nonconvex	O
losses	O
)	O
by	O
being	O
agnostic	O
to	O
the	O
heterogeneity	B-General_Concept
levels	O
.	O
</s>
<s>
Minimizing	O
the	O
number	O
of	O
communications	O
is	O
the	O
gold-standard	O
for	O
comparison	O
in	O
federated	B-Operating_System
learning	I-Operating_System
.	O
</s>
<s>
Federated	B-Operating_System
Learning	I-Operating_System
methods	O
cannot	O
achieve	O
good	O
global	O
performance	O
under	O
Non-IID	O
settings	O
,	O
which	O
motivates	O
the	O
participating	O
clients	O
to	O
yield	O
personalized	O
models	O
in	O
federation	O
.	O
</s>
<s>
Sub-FedAvg	O
is	O
the	O
first	O
work	O
which	O
shows	O
the	O
existence	O
of	O
personalized	O
winning	O
tickets	O
for	O
clients	O
in	O
federated	B-Operating_System
learning	I-Operating_System
through	O
experiments	O
.	O
</s>
<s>
Sub-FedAvg	O
tries	O
to	O
extend	O
the	O
"	O
Lottery	O
Ticket	O
Hypothesis	O
"	O
,	O
which	O
applies	O
to	O
centrally	O
trained	O
neural	B-Architecture
networks	I-Architecture
,	O
to	O
neural	B-Architecture
networks	I-Architecture
trained	O
with	O
federated	B-Operating_System
learning	I-Operating_System
,	O
leading	O
to	O
this	O
open	O
research	O
problem	O
:	O
“	O
Do	O
winning	O
tickets	O
exist	O
for	O
clients’	O
neural	B-Architecture
networks	I-Architecture
being	O
trained	O
in	O
federated	B-Operating_System
learning	I-Operating_System
?	O
”	O
</s>
<s>
Very	O
few	O
methods	O
for	O
hybrid	O
federated	B-Operating_System
learning	I-Operating_System
,	O
where	O
clients	O
only	O
hold	O
subsets	O
of	O
both	O
features	B-Algorithm
and	O
samples	O
,	O
exist	O
.	O
</s>
<s>
(	O
2017	O
)	O
,	O
to	O
the	O
case	O
where	O
both	O
samples	O
and	O
features	B-Algorithm
are	O
partitioned	O
across	O
clients	O
.	O
</s>
<s>
However	O
,	O
HyFEM	O
is	O
suitable	O
for	O
a	O
vast	O
array	O
of	O
architectures	O
including	O
deep	B-Algorithm
learning	I-Algorithm
architectures	O
,	O
whereas	O
HyFDCA	O
is	O
designed	O
for	O
convex	O
problems	O
like	O
logistic	O
regression	O
and	O
support	O
vector	O
machines	O
.	O
</s>
<s>
Federated	B-Operating_System
Learning	I-Operating_System
(	O
FL	O
)	O
provides	O
training	O
of	O
a	O
global	O
shared	O
model	O
using	O
decentralized	O
data	O
sources	O
on	O
edge	O
nodes	O
while	O
preserving	O
data	O
privacy	O
.	O
</s>
<s>
However	O
,	O
its	O
performance	O
in	O
the	O
computer	B-Application
vision	I-Application
applications	O
using	O
Convolutional	O
neural	B-Architecture
network	I-Architecture
(	O
CNN	O
)	O
lags	O
considerably	O
behind	O
that	O
of	O
centralized	O
training	O
due	O
to	O
limited	O
communication	O
resources	O
and	O
low	O
processing	O
capability	O
at	O
edge	O
nodes	O
.	O
</s>
<s>
Alternatively	O
,	O
Pure	O
Vision	B-Algorithm
transformer	I-Algorithm
models	O
(	O
ViT	B-Algorithm
)	O
outperform	O
CNNs	O
by	O
almost	O
four	O
times	O
when	O
it	O
comes	O
to	O
computational	O
efficiency	O
and	O
accuracy	O
.	O
</s>
<s>
Hence	O
,	O
we	O
propose	O
a	O
new	O
FL	O
model	O
with	O
a	O
reconstructive	O
strategy	O
called	O
FED-REV	O
,	O
which	O
illustrates	O
how	O
attention-based	O
structures	O
(	O
pure	O
Vision	B-Algorithm
Transformers	I-Algorithm
)	O
enhance	O
FL	O
accuracy	O
over	O
large	O
and	O
diverse	O
data	O
distributed	O
over	O
edge	O
nodes	O
,	O
in	O
addition	O
to	O
the	O
proposed	O
reconstruction	O
strategy	O
that	O
determines	O
the	O
dimensional	O
influence	O
of	O
each	O
stage	O
of	O
the	O
vision	B-Algorithm
transformer	I-Algorithm
and	O
then	O
reduces	O
its	O
dimension	O
complexity	O
,	O
which	O
reduces	O
computation	O
cost	O
of	O
edge	O
devices	O
in	O
addition	O
to	O
preserving	O
accuracy	O
achieved	O
due	O
to	O
using	O
the	O
pure	O
Vision	B-Algorithm
transformer	I-Algorithm
.	O
</s>
<s>
Federated	B-Operating_System
learning	I-Operating_System
has	O
started	O
to	O
emerge	O
as	O
an	O
important	O
research	O
topic	O
in	O
2015	O
and	O
2016	O
,	O
with	O
the	O
first	O
publications	O
on	O
federated	O
averaging	O
in	O
telecommunication	O
settings	O
.	O
</s>
<s>
Another	O
important	O
aspect	O
of	O
active	O
research	O
is	O
the	O
reduction	O
of	O
the	O
communication	O
burden	O
during	O
the	O
federated	B-Operating_System
learning	I-Operating_System
process	O
.	O
</s>
<s>
In	O
2017	O
and	O
2018	O
,	O
publications	O
have	O
emphasized	O
the	O
development	O
of	O
resource	O
allocation	O
strategies	O
,	O
especially	O
to	O
reduce	O
communication	O
requirements	O
between	O
nodes	O
with	O
gossip	B-Operating_System
algorithms	O
as	O
well	O
as	O
on	O
the	O
characterization	O
of	O
the	O
robustness	O
to	O
differential	O
privacy	O
attacks	O
.	O
</s>
<s>
Developing	O
ultra-light	O
DNN	O
architectures	O
is	O
essential	O
for	O
device/edge	O
learning	O
and	O
recent	O
work	O
recognises	O
both	O
the	O
energy	O
efficiency	O
requirements	O
for	O
future	O
federated	B-Operating_System
learning	I-Operating_System
and	O
the	O
need	O
to	O
compress	O
deep	B-Algorithm
learning	I-Algorithm
,	O
especially	O
during	O
learning	O
.	O
</s>
<s>
Another	O
active	O
direction	O
of	O
research	O
is	O
to	O
develop	O
Federated	B-Operating_System
learning	I-Operating_System
for	O
training	O
heterogeneous	O
local	O
models	O
with	O
varying	O
computation	O
complexities	O
and	O
producing	O
a	O
single	O
powerful	O
global	O
inference	O
model	O
.	O
</s>
<s>
Compared	O
with	O
Federated	B-Operating_System
learning	I-Operating_System
that	O
often	O
requires	O
a	O
central	B-Operating_System
controller	I-Operating_System
to	O
orchestrate	O
the	O
learning	O
and	O
optimization	O
,	O
decentralized	O
approaches	O
aim	O
to	O
provide	O
protocols	O
for	O
the	O
agents	O
to	O
optimize	O
and	O
learn	O
among	O
themselves	O
without	O
a	O
global	O
model	O
.	O
</s>
<s>
Federated	B-Operating_System
learning	I-Operating_System
typically	O
applies	O
when	O
individual	O
actors	O
need	O
to	O
train	O
models	O
on	O
larger	O
datasets	O
than	O
their	O
own	O
,	O
but	O
cannot	O
afford	O
to	O
share	O
the	O
data	O
in	O
itself	O
with	O
others	O
(	O
e.g.	O
,	O
for	O
legal	O
,	O
strategic	O
or	O
economic	O
reasons	O
)	O
.	O
</s>
<s>
Self-driving	O
cars	O
encapsulate	O
many	O
machine	O
learning	O
technologies	O
to	O
function	O
:	O
computer	B-Application
vision	I-Application
for	O
analyzing	O
obstacles	O
,	O
machine	O
learning	O
for	O
adapting	O
their	O
pace	O
to	O
the	O
environment	O
(	O
e.g.	O
,	O
bumpiness	O
of	O
the	O
road	O
)	O
.	O
</s>
<s>
Federated	B-Operating_System
learning	I-Operating_System
can	O
represent	O
a	O
solution	O
for	O
limiting	O
the	O
volume	O
of	O
data	O
transfer	O
and	O
accelerating	O
learning	O
processes	O
.	O
</s>
<s>
In	O
Industry	B-General_Concept
4.0	I-General_Concept
,	O
there	O
is	O
a	O
widespread	O
adoption	O
of	O
machine	O
learning	O
techniques	O
to	O
improve	O
the	O
efficiency	O
and	O
effectiveness	O
of	O
industrial	O
processes	O
while	O
guaranteeing	O
a	O
high	O
level	O
of	O
safety	O
.	O
</s>
<s>
Federated	B-Operating_System
learning	I-Operating_System
algorithms	O
can	O
be	O
applied	O
to	O
these	O
problems	O
as	O
they	O
do	O
not	O
disclose	O
any	O
sensitive	O
data	O
.	O
</s>
<s>
Federated	B-Operating_System
learning	I-Operating_System
seeks	O
to	O
address	O
the	O
problem	O
of	O
data	O
governance	O
and	O
privacy	O
by	O
training	O
algorithms	O
collaboratively	O
without	O
exchanging	O
the	O
data	O
itself	O
.	O
</s>
<s>
Nature	O
Digital	O
Medicine	O
published	O
the	O
paper	O
"	O
The	O
Future	O
of	O
Digital	O
Health	O
with	O
Federated	B-Operating_System
Learning	I-Operating_System
"	O
in	O
September	O
2020	O
,	O
in	O
which	O
the	O
authors	O
explore	O
how	O
federated	B-Operating_System
learning	I-Operating_System
may	O
provide	O
a	O
solution	O
for	O
the	O
future	O
of	O
digital	O
health	O
,	O
and	O
highlight	O
the	O
challenges	O
and	O
considerations	O
that	O
need	O
to	O
be	O
addressed	O
.	O
</s>
<s>
Recently	O
,	O
a	O
collaboration	O
of	O
20	O
different	O
institutions	O
around	O
the	O
world	O
validated	O
the	O
utility	O
of	O
training	O
AI	O
models	O
using	O
federated	B-Operating_System
learning	I-Operating_System
.	O
</s>
<s>
In	O
a	O
paper	O
published	O
in	O
Nature	O
Medicine	O
"	O
Federated	B-Operating_System
learning	I-Operating_System
for	O
predicting	O
clinical	O
outcomes	O
in	O
patients	O
with	O
COVID-19	O
"	O
,	O
they	O
showcased	O
the	O
accuracy	O
and	O
generalizability	O
of	O
a	O
federated	O
AI	O
model	O
for	O
the	O
prediction	O
of	O
oxygen	O
needs	O
in	O
patients	O
with	O
COVID-19	O
infections	O
.	O
</s>
<s>
Furthermore	O
,	O
in	O
a	O
published	O
paper	O
"	O
A	O
Systematic	O
Review	O
of	O
Federated	B-Operating_System
Learning	I-Operating_System
in	O
the	O
Healthcare	O
Area	O
:	O
From	O
the	O
Perspective	O
of	O
Data	O
Properties	O
and	O
Applications	O
"	O
,	O
the	O
authors	O
aim	O
to	O
provide	O
a	O
set	O
of	O
FL	O
challenges	O
from	O
a	O
medical	O
data-centric	O
perspective	O
.	O
</s>
<s>
Federated	B-Operating_System
Learning	I-Operating_System
provides	O
a	O
solution	O
to	O
improve	O
over	O
conventional	O
machine	O
learning	O
training	O
methods	O
.	O
</s>
<s>
In	O
the	O
paper	O
,	O
Federated	B-Operating_System
Learning	I-Operating_System
is	O
applied	O
to	O
improve	O
multi-robot	O
navigation	O
under	O
limited	O
communication	O
bandwidth	O
scenarios	O
,	O
which	O
is	O
a	O
current	O
challenge	O
in	O
real-world	O
learning-based	O
robotic	O
tasks	O
.	O
</s>
<s>
In	O
the	O
paper	O
,	O
Federated	B-Operating_System
Learning	I-Operating_System
is	O
used	O
to	O
learn	O
Vision-based	O
navigation	O
,	O
helping	O
better	O
sim-to-real	O
transfer	O
.	O
</s>
