<s>
WaveNet	B-Application
is	O
a	O
deep	O
neural	B-Architecture
network	I-Architecture
for	O
generating	O
raw	O
audio	O
.	O
</s>
<s>
It	O
was	O
created	O
by	O
researchers	O
at	O
London-based	O
AI	B-Application
firm	O
DeepMind	B-Application
.	O
</s>
<s>
The	O
technique	O
,	O
outlined	O
in	O
a	O
paper	O
in	O
September	O
2016	O
,	O
is	O
able	O
to	O
generate	O
relatively	O
realistic-sounding	O
human-like	O
voices	O
by	O
directly	O
modelling	O
waveforms	O
using	O
a	O
neural	B-Architecture
network	I-Architecture
method	O
trained	O
with	O
recordings	O
of	O
real	O
speech	O
.	O
</s>
<s>
WaveNet	B-Application
's	O
ability	O
to	O
generate	O
raw	O
waveforms	O
means	O
that	O
it	O
can	O
model	O
any	O
kind	O
of	O
audio	O
,	O
including	O
music	O
.	O
</s>
<s>
Generating	O
speech	O
from	O
text	O
is	O
an	O
increasingly	O
common	O
task	O
thanks	O
to	O
the	O
popularity	O
of	O
software	O
such	O
as	O
Apple	O
's	O
Siri	B-Application
,	O
Microsoft	O
's	O
Cortana	B-Application
,	O
Amazon	B-Application
Alexa	I-Application
and	O
the	O
Google	B-Application
Assistant	I-Application
.	O
</s>
<s>
The	O
characteristics	O
of	O
the	O
output	O
speech	O
are	O
controlled	O
via	O
the	O
inputs	O
to	O
the	O
model	O
,	O
while	O
the	O
speech	O
is	O
typically	O
created	O
using	O
a	O
voice	O
synthesiser	O
known	O
as	O
a	O
vocoder	B-Application
.	O
</s>
<s>
WaveNet	B-Application
is	O
a	O
type	O
of	O
feedforward	B-Algorithm
neural	I-Algorithm
network	I-Algorithm
known	O
as	O
a	O
deep	B-Architecture
convolutional	I-Architecture
neural	I-Architecture
network	I-Architecture
(	O
CNN	B-Architecture
)	O
.	O
</s>
<s>
In	O
WaveNet	B-Application
,	O
the	O
CNN	B-Architecture
takes	O
a	O
raw	O
signal	O
as	O
an	O
input	O
and	O
synthesises	O
an	O
output	O
one	O
sample	O
at	O
a	O
time	O
.	O
</s>
<s>
It	O
does	O
so	O
by	O
sampling	O
from	O
a	O
softmax	B-Algorithm
(	O
i.e.	O
</s>
<s>
categorical	O
)	O
distribution	O
of	O
a	O
signal	O
value	O
that	O
is	O
encoded	O
using	O
μ-law	B-Algorithm
companding	O
transformation	O
and	O
quantized	B-Algorithm
to	O
256	O
possible	O
values	O
.	O
</s>
<s>
According	O
to	O
the	O
original	O
September	O
2016	O
DeepMind	B-Application
research	O
paper	O
WaveNet	B-Application
:	O
A	O
Generative	O
Model	O
for	O
Raw	O
Audio	O
,	O
the	O
network	O
was	O
fed	O
real	O
waveforms	O
of	O
speech	O
in	O
English	O
and	O
Mandarin	O
.	O
</s>
<s>
WaveNet	B-Application
is	O
able	O
to	O
accurately	O
model	O
different	O
voices	O
,	O
with	O
the	O
accent	O
and	O
tone	O
of	O
the	O
input	O
correlating	O
with	O
the	O
output	O
.	O
</s>
<s>
The	O
capability	O
also	O
means	O
that	O
if	O
the	O
WaveNet	B-Application
is	O
fed	O
other	O
inputs	O
–	O
such	O
as	O
music	O
–	O
its	O
output	O
will	O
be	O
musical	O
.	O
</s>
<s>
At	O
the	O
time	O
of	O
its	O
release	O
,	O
DeepMind	B-Application
showed	O
that	O
WaveNet	B-Application
could	O
produce	O
waveforms	O
that	O
sound	O
like	O
classical	O
music	O
.	O
</s>
<s>
According	O
to	O
the	O
June	O
2018	O
paper	O
Disentangled	O
Sequential	O
Autoencoder	B-Algorithm
,	O
DeepMind	B-Application
has	O
successfully	O
used	O
WaveNet	B-Application
for	O
audio	O
and	O
voice	O
"	O
content	O
swapping	O
"	O
:	O
the	O
network	O
can	O
swap	O
the	O
voice	O
on	O
an	O
audio	O
recording	O
for	O
another	O
,	O
pre-existing	O
voice	O
while	O
maintaining	O
the	O
text	O
and	O
other	O
features	O
from	O
the	O
original	O
recording	O
.	O
</s>
<s>
(	O
p.1	O
)	O
According	O
to	O
the	O
paper	O
,	O
a	O
two-digit	O
minimum	O
amount	O
of	O
hours	O
(	O
c	O
.	O
50	O
hours	O
)	O
of	O
pre-existing	O
speech	O
recordings	O
of	O
both	O
source	O
and	O
target	O
voice	O
are	O
required	O
to	O
be	O
fed	O
into	O
WaveNet	B-Application
for	O
the	O
program	O
to	O
learn	O
their	O
individual	O
features	O
before	O
it	O
is	O
able	O
to	O
perform	O
the	O
conversion	O
from	O
one	O
voice	O
to	O
another	O
at	O
a	O
satisfying	O
quality	O
.	O
</s>
<s>
(	O
p.8	O
)	O
,	O
i	O
.	O
e	O
.	O
WaveNet	B-Application
is	O
capable	O
of	O
distinguishing	O
between	O
the	O
spoken	O
text	O
and	O
modes	O
of	O
delivery	O
(	O
modulation	O
,	O
speed	O
,	O
pitch	O
,	O
mood	O
,	O
etc	O
.	O
)	O
</s>
<s>
The	O
January	O
2019	O
follow-up	O
paper	O
Unsupervised	O
speech	O
representation	O
learning	O
using	O
WaveNet	B-Application
autoencoders	B-Algorithm
details	O
a	O
method	O
to	O
successfully	O
enhance	O
the	O
proper	O
automatic	O
recognition	O
and	O
discrimination	O
between	O
dynamical	O
and	O
static	O
features	O
for	O
"	O
content	O
swapping	O
"	O
,	O
notably	O
including	O
swapping	O
voices	O
on	O
existing	O
audio	O
recordings	O
,	O
in	O
order	O
to	O
make	O
it	O
more	O
reliable	O
.	O
</s>
<s>
Another	O
follow-up	O
paper	O
,	O
Sample	O
Efficient	O
Adaptive	O
Text-to-Speech	O
,	O
dated	O
September	O
2018	O
(	O
latest	O
revision	O
January	O
2019	O
)	O
,	O
states	O
that	O
DeepMind	B-Application
has	O
successfully	O
reduced	O
the	O
minimum	O
amount	O
of	O
real-life	O
recordings	O
required	O
to	O
sample	O
an	O
existing	O
voice	O
via	O
WaveNet	B-Application
to	O
"	O
merely	O
a	O
few	O
minutes	O
of	O
audio	O
data	O
"	O
while	O
maintaining	O
high-quality	O
results	O
.	O
</s>
<s>
Its	O
ability	O
to	O
clone	O
voices	O
has	O
raised	O
ethical	O
concerns	O
about	O
WaveNet	B-Application
's	O
ability	O
to	O
mimic	O
the	O
voices	O
of	O
living	O
and	O
dead	O
persons	O
.	O
</s>
<s>
According	O
to	O
a	O
2016	O
BBC	O
article	O
,	O
companies	O
working	O
on	O
similar	O
voice-cloning	O
technologies	O
(	O
such	O
as	O
Adobe	B-Application
Voco	I-Application
)	O
intend	O
to	O
insert	O
watermarking	O
inaudible	O
to	O
humans	O
to	O
prevent	O
counterfeiting	O
,	O
while	O
maintaining	O
that	O
voice	O
cloning	O
satisfying	O
,	O
for	O
instance	O
,	O
the	O
needs	O
of	O
entertainment-industry	O
purposes	O
would	O
be	O
of	O
a	O
far	O
lower	O
complexity	O
and	O
use	O
different	O
methods	O
than	O
required	O
to	O
fool	O
forensic	O
evidencing	O
methods	O
and	O
electronic	O
ID	O
devices	O
,	O
so	O
that	O
natural	O
voices	O
and	O
voices	O
cloned	O
for	O
entertainment-industry	O
purposes	O
could	O
still	O
be	O
easily	O
told	O
apart	O
by	O
technological	O
analysis	O
.	O
</s>
<s>
At	O
the	O
time	O
of	O
its	O
release	O
,	O
DeepMind	B-Application
said	O
that	O
WaveNet	B-Application
required	O
too	O
much	O
computational	O
processing	O
power	O
to	O
be	O
used	O
in	O
real	O
world	O
applications	O
.	O
</s>
<s>
WaveNet	B-Application
was	O
then	O
used	O
to	O
generate	O
Google	B-Application
Assistant	I-Application
voices	O
for	O
US	O
English	O
and	O
Japanese	O
across	O
all	O
Google	O
platforms	O
.	O
</s>
<s>
In	O
November	O
2017	O
,	O
DeepMind	B-Application
researchers	O
released	O
a	O
research	O
paper	O
detailing	O
a	O
proposed	O
method	O
of	O
"	O
generating	O
high-fidelity	O
speech	O
samples	O
at	O
more	O
than	O
20	O
times	O
faster	O
than	O
real-time	O
"	O
,	O
called	O
"	O
Probability	O
Density	O
Distillation	O
"	O
.	O
</s>
<s>
At	O
the	O
annual	O
I/O	B-Application
developer	I-Application
conference	I-Application
in	O
May	O
2018	O
,	O
it	O
was	O
announced	O
that	O
new	O
Google	B-Application
Assistant	I-Application
voices	O
were	O
available	O
and	O
made	O
possible	O
by	O
WaveNet	B-Application
;	O
WaveNet	B-Application
greatly	O
reduced	O
the	O
number	O
of	O
audio	O
recordings	O
that	O
were	O
required	O
to	O
create	O
a	O
voice	O
model	O
by	O
modeling	O
the	O
raw	O
audio	O
of	O
the	O
voice	O
actor	O
samples	O
.	O
</s>
