<s>
Variance	O
reduction	O
approaches	O
are	O
widely	O
used	O
for	O
training	O
machine	O
learning	O
models	O
such	O
as	O
logistic	O
regression	O
and	O
support	B-Algorithm
vector	I-Algorithm
machines	I-Algorithm
as	O
these	O
problems	O
have	O
finite-sum	O
structure	O
and	O
uniform	O
conditioning	B-Algorithm
that	O
make	O
them	O
ideal	O
candidates	O
for	O
variance	O
reduction	O
.	O
</s>
<s>
Although	O
variance	O
reduction	O
methods	O
can	O
be	O
applied	O
for	O
any	O
positive	O
and	O
any	O
structure	O
,	O
their	O
favorable	O
theoretical	O
and	O
practical	O
properties	O
arise	O
when	O
is	O
large	O
compared	O
to	O
the	O
condition	B-Algorithm
number	I-Algorithm
of	O
each	O
,	O
and	O
when	O
the	O
have	O
similar	O
(	O
but	O
not	O
necessarily	O
identical	O
)	O
Lipschitz	O
smoothness	O
and	O
strong	O
convexity	O
constants	O
.	O
</s>
<s>
Stochastic	B-Algorithm
variance	I-Algorithm
reduction	I-Algorithm
methods	O
converge	O
almost	O
as	O
fast	O
as	O
the	O
gradient	O
descent	O
method	O
's	O
rate	O
,	O
despite	O
using	O
only	O
a	O
stochastic	O
gradient	O
,	O
at	O
a	O
lower	O
cost	O
than	O
gradient	O
descent	O
.	O
</s>
<s>
Exploiting	O
the	O
dual	B-Algorithm
representation	I-Algorithm
of	O
the	O
objective	O
leads	O
to	O
another	O
variance	O
reduction	O
approach	O
that	O
is	O
particularly	O
suited	O
to	O
finite-sums	O
where	O
each	O
term	O
has	O
a	O
structure	O
that	O
makes	O
computing	O
the	O
convex	O
conjugate	O
or	O
its	O
proximal	O
operator	O
tractable	O
.	O
</s>
<s>
by	O
a	O
stochastic	O
coordinate	B-Algorithm
ascent	I-Algorithm
procedure	O
,	O
where	O
at	O
each	O
step	O
the	O
objective	O
is	O
optimized	O
with	O
respect	O
to	O
a	O
randomly	O
chosen	O
coordinate	O
,	O
leaving	O
all	O
other	O
coordinates	O
the	O
same	O
.	O
</s>
