Batch normalization (also known as batch norm) is a method used to make the training of artificial neural networks faster and more stable through normalization of the layers' inputs by re-centering and re-scaling. While the effect of batch normalization is evident, the reasons behind its effectiveness remain under discussion. It was believed that it can mitigate the problem of internal covariate shift, where parameter initialization and changes in the distribution of the inputs of each layer affect the learning rate of the network. Recently, some scholars have argued that batch normalization does not reduce internal covariate shift, but rather smooths the objective function, which in turn improves performance. However, at initialization, batch normalization in fact induces severe gradient explosion in deep networks, which is only alleviated by skip connections in residual networks. Others maintain that batch normalization achieves length-direction decoupling and thereby accelerates neural networks. More recently, a normalized gradient clipping technique and smart hyperparameter tuning have been introduced in Normalizer-Free Nets, so-called "NF-Nets", which mitigate the need for batch normalization.
Each layer of a neural network has inputs with a corresponding distribution, which is affected during the training process by the randomness in the parameter initialization and the randomness in the input data. Batch normalization was initially proposed to mitigate internal covariate shift: the method aims to reduce these unwanted shifts in the layers' input distributions, so as to speed up training and to produce more reliable models. Besides reducing internal covariate shift, batch normalization is believed to introduce many other benefits.
With this additional operation, the network can use a higher learning rate without vanishing or exploding gradients. Furthermore, batch normalization seems to have a regularizing effect, such that the network improves its generalization properties, and it is thus unnecessary to use dropout to mitigate overfitting. It has also been observed that with batch norm the network becomes more robust to different initialization schemes and learning rates.
In a neural network, batch normalization is achieved through a normalization step that fixes the means and variances of each layer's inputs. Ideally, the normalization would be conducted over the entire training set, but to use this step jointly with stochastic optimization methods, it is impractical to use the global information; normalization is therefore restricted to each mini-batch during training. Formally, the operation that implements batch normalization is a transform called the Batch Normalizing transform.
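As a concrete sketch, the transform can be written in a few lines of NumPy, with a learnable scale gamma and shift beta applied after per-feature standardization over the mini-batch (the shapes and the stability constant eps below are illustrative assumptions):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Batch Normalizing transform over a mini-batch x of shape
    (batch_size, num_features): standardize each feature using the
    mini-batch statistics, then apply the learnable scale and shift."""
    mu = x.mean(axis=0)                      # mini-batch mean
    var = x.var(axis=0)                      # mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)    # normalized activations
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(64, 8))
y = batch_norm(x, gamma=np.ones(8), beta=np.zeros(8))
print(y.mean(axis=0).round(6), y.std(axis=0).round(3))  # ~0 and ~1 per feature
```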
Specifically, the gradient of the loss with respect to the transform's output depends on the choice of activation function, and the gradients with respect to the other quantities in the transform can be expressed as functions of it.
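The chain-rule gradients through the transform are standard; in the usual notation, with mini-batch mean $\mu_B$, variance $\sigma_B^2$, normalized inputs $\hat{x}_i$, and outputs $y_i = \gamma\hat{x}_i + \beta$ over a mini-batch of size $m$, they read:

$$\frac{\partial \ell}{\partial \hat{x}_i} = \frac{\partial \ell}{\partial y_i}\,\gamma, \qquad \frac{\partial \ell}{\partial \sigma_B^2} = \sum_{i=1}^{m} \frac{\partial \ell}{\partial \hat{x}_i}\,(x_i-\mu_B)\left(-\tfrac{1}{2}\right)\left(\sigma_B^2+\epsilon\right)^{-3/2},$$

$$\frac{\partial \ell}{\partial \mu_B} = \sum_{i=1}^{m} \frac{\partial \ell}{\partial \hat{x}_i}\,\frac{-1}{\sqrt{\sigma_B^2+\epsilon}} + \frac{\partial \ell}{\partial \sigma_B^2}\,\frac{1}{m}\sum_{i=1}^{m} -2\,(x_i-\mu_B),$$

$$\frac{\partial \ell}{\partial x_i} = \frac{\partial \ell}{\partial \hat{x}_i}\,\frac{1}{\sqrt{\sigma_B^2+\epsilon}} + \frac{\partial \ell}{\partial \sigma_B^2}\,\frac{2(x_i-\mu_B)}{m} + \frac{\partial \ell}{\partial \mu_B}\,\frac{1}{m},$$

$$\frac{\partial \ell}{\partial \gamma} = \sum_{i=1}^{m} \frac{\partial \ell}{\partial y_i}\,\hat{x}_i, \qquad \frac{\partial \ell}{\partial \beta} = \sum_{i=1}^{m} \frac{\partial \ell}{\partial y_i}.$$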
Since the parameters are fixed in this transformation, the batch normalization procedure essentially applies a linear transform to the activation.
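At inference time in particular, where fixed population statistics stand in for the mini-batch statistics, this linearity lets the whole transform be folded into a single scale and shift; a minimal sketch (the statistics below are made-up values):

```python
import numpy as np

def fold_bn(gamma, beta, mean, var, eps=1e-5):
    """Collapse a fixed-parameter batch norm into y = a * x + b."""
    a = gamma / np.sqrt(var + eps)
    b = beta - a * mean
    return a, b

gamma, beta = np.full(8, 1.5), np.full(8, 0.2)
mean, var = np.full(8, 3.0), np.full(8, 4.0)   # assumed population statistics
a, b = fold_bn(gamma, beta, mean, var)

x = np.random.default_rng(0).normal(size=(4, 8))
bn = gamma * (x - mean) / np.sqrt(var + 1e-5) + beta   # the original transform
assert np.allclose(a * x + b, bn)                       # identical linear map
```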
Although batch normalization has become popular due to its strong empirical performance, the working mechanism of the method is not yet well understood. One alternative explanation is that the improvement with batch normalization is instead due to its producing a smoother parameter space and smoother gradients, as formalized by a smaller Lipschitz constant.
Consider two otherwise identical networks, one of which contains batch normalization layers while the other does not; the behaviors of these two networks are then compared. In the second network, the activation additionally goes through a batch normalization layer.
If the loss is locally convex, then the Hessian is positive semi-definite, while the inner product is positive if the gradient points in the direction towards the minimum of the loss. It could thus be concluded from this inequality that the gradient generally becomes more predictive with the batch normalization layer.
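Gradient predictiveness can be probed empirically: take a step along the current gradient and check how much the gradient changes. The sketch below does this for a small network with and without a batch normalization layer (a toy PyTorch setup; the architecture, data, and step size are arbitrary assumptions):

```python
import torch
import torch.nn as nn

def make_net(use_bn):
    layers = [nn.Linear(32, 64)]
    if use_bn:
        layers.append(nn.BatchNorm1d(64))
    layers += [nn.ReLU(), nn.Linear(64, 1)]
    return nn.Sequential(*layers)

def grad_vector(net, x, y, loss_fn):
    net.zero_grad()
    loss_fn(net(x), y).backward()
    return torch.cat([p.grad.flatten() for p in net.parameters()])

torch.manual_seed(0)
x, y = torch.randn(256, 32), torch.randn(256, 1)
loss_fn = nn.MSELoss()

for use_bn in (False, True):
    net = make_net(use_bn)
    g0 = grad_vector(net, x, y, loss_fn).clone()
    with torch.no_grad():                  # move along the gradient direction
        i = 0
        for p in net.parameters():
            n = p.numel()
            p -= 0.1 * g0[i:i + n].view_as(p)
            i += n
    g1 = grad_vector(net, x, y, loss_fn)
    cos = nn.functional.cosine_similarity(g0, g1, dim=0)
    print(f"batch norm={use_bn}: gradient cosine similarity = {cos:.3f}")
```

A higher similarity means the gradient computed before the step remained informative after it, i.e. a smoother, more predictable landscape.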
In addition to the smoother landscape, it is further shown that batch normalization could result in a better initialization, in the sense that the initial weights are provably closer to the optimal solution than in the unnormalized network.
Some scholars argue that the above analysis cannot fully capture the performance of batch normalization, because the proof only concerns the largest eigenvalue, or equivalently, one direction in the landscape at all points.
Since it is hypothesized that batch normalization layers could reduce internal covariate shift, an experiment is set up to measure quantitatively how much covariate shift is reduced.
The correlation between the gradients is computed for four models: a standard VGG network, a VGG network with batch normalization layers, a 25-layer deep linear network (DLN) trained with full-batch gradient descent, and a DLN with batch normalization layers.
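One way to make such a measurement concrete is to compare a layer's gradient before and after the layers beneath it are updated; the sketch below computes this correlation for one probe layer (a simplified PyTorch stand-in for the published protocol, with arbitrary sizes and learning rate):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))   # add nn.BatchNorm1d(64) layers to compare
probe = net[2]                          # the layer whose gradient we track
x, y = torch.randn(256, 32), torch.randn(256, 1)
loss_fn = nn.MSELoss()

def probe_grad():
    net.zero_grad()
    loss_fn(net(x), y).backward()
    return probe.weight.grad.flatten().clone()

g_before = probe_grad()
with torch.no_grad():                   # update only the preceding layer
    for p in net[0].parameters():
        p -= 0.1 * p.grad
g_after = probe_grad()
print(nn.functional.cosine_similarity(g_before, g_after, dim=0))
```

A correlation near one means the preceding update barely shifted the probe layer's effective loss landscape, i.e. little internal covariate shift.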
Interestingly, it is shown that the standard VGG and DLN models both have higher correlations of gradients compared with their counterparts, indicating that the additional batch normalization layers are not reducing internal covariate shift.
Even though batchnorm was originally introduced to alleviate gradient vanishing or explosion problems, a deep batchnorm network in fact suffers from gradient explosion at initialization time, no matter what it uses for nonlinearity. This gradient explosion on the surface contradicts the smoothness property explained in the previous section, but in fact they are consistent. The previous section studies the effect of inserting a single batchnorm in a network, while the gradient explosion depends on stacking batchnorms typical of modern deep neural networks.
Another possible reason for the success of batch normalization is that it decouples the length and direction of the weight vectors and thus facilitates better training.
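The decoupling rests on a simple invariance: with batch normalization, rescaling a unit's weight vector leaves its normalized pre-activation unchanged, so only the weight direction can matter. A minimal NumPy check (the batch and layer sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(256, 32))    # a mini-batch of inputs
w = rng.normal(size=32)           # weight vector of a single unit

def bn_preactivation(w):
    """Batch-normalized pre-activation w^T x over the mini-batch."""
    z = x @ w
    return (z - z.mean()) / z.std()

# Scaling w by any positive constant leaves the output unchanged.
print(np.allclose(bn_preactivation(w), bn_preactivation(10.0 * w)))  # True
```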
For a particular neural network unit with input x and weight vector w, denote its output as f_w(x) = φ(wᵀx), where φ is the activation function, and denote S = E[xxᵀ].
Assume that E[x] = 0 and that the spectrum of the matrix S is bounded, 0 < μ = λ_min(S) and L = λ_max(S) < ∞, such that S is symmetric positive definite.
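Under these assumptions the decoupling can be written out explicitly. Since x is centered, the variance of the pre-activation is Var(wᵀx) = wᵀSw, so a batch-normalized unit (with learnable scale γ and shift β) computes

$$f_w^{\mathrm{BN}}(x) = \phi\!\left(\gamma\,\frac{w^\top x}{\sqrt{w^\top S w}} + \beta\right) = \phi\!\left(\gamma\,\tilde{w}^\top x + \beta\right), \qquad \tilde{w} = \frac{w}{\sqrt{w^\top S w}},$$

so the output depends on w only through its direction, while the length is carried entirely by γ.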
This property could then be used to prove the faster convergence of problems with batch normalization.
With the reparametrization interpretation, it could then be proved that applying batch normalization to the ordinary least squares problem achieves a linear convergence rate in gradient descent, which is faster than the merely sub-linear convergence of regular gradient descent.
In the normalized parameterization, the least squares objective takes the form of a generalized Rayleigh quotient, whose numerator is defined by a symmetric matrix and whose denominator is defined by a symmetric positive definite matrix.
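Written out generically (naming the two matrices B and S as just described), such a quotient is

$$\rho(\tilde{w}) = \frac{\tilde{w}^\top B\,\tilde{w}}{\tilde{w}^\top S\,\tilde{w}},$$

and a standard computation shows that any critical point satisfies $B\tilde{w} = \rho(\tilde{w})\,S\tilde{w}$, so minimizing the normalized objective amounts to a generalized eigenvalue problem.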
The problem of learning halfspaces refers to the training of the Perceptron, which is the simplest form of neural network.
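For reference, the classic perceptron update on a linearly separable toy problem looks as follows (a minimal NumPy sketch; the synthetic data are an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = rng.normal(size=3)
X = rng.normal(size=(200, 3))
y = np.sign(X @ w_true)           # labels generated by a true halfspace

w = np.zeros(3)
for _ in range(100):              # epochs over the data
    for xi, yi in zip(X, y):
        if yi * (xi @ w) <= 0:    # misclassified: move w toward yi * xi
            w += yi * xi

print("training accuracy:", np.mean(np.sign(X @ w) == y))
```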
First, a variation of gradient descent with batch normalization, Gradient Descent in Normalized Parameterization (GDNP), is designed for the objective function, such that the direction and length of the weights are updated separately.
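Schematically, the decoupled loop can be sketched as below. This is a loose illustration of the idea of separate direction and length updates, not the published GDNP algorithm, which chooses its step sizes and stopping times much more carefully:

```python
import numpy as np

def decoupled_gd(f_grad, w0, steps=100, lr_dir=0.1, lr_len=0.1):
    """Maintain the weight as w = g * v with ||v|| = 1 and update the
    direction v and the length g separately."""
    g = np.linalg.norm(w0)                  # length parameter
    v = w0 / g                              # direction parameter
    for _ in range(steps):
        grad = f_grad(g * v)                # gradient of f at w = g * v
        grad_v = grad - (grad @ v) * v      # tangential (direction) component
        v = v - lr_dir * g * grad_v         # direction step ...
        v = v / np.linalg.norm(v)           # ... followed by re-normalization
        g = g - lr_len * (grad @ v)         # length step along the direction
    return g * v
```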
The GDNP algorithm thus slightly modifies the batch normalization step for the ease of mathematical analysis.
When the analysis is extended to a network with hidden units, each unit contributes aᵢ φ(wᵢᵀx) to the output, where wᵢ and aᵢ are the input and output weights of unit i correspondingly, and φ is the activation function, which is assumed to be a tanh function.
