<s>
Time	B-Algorithm
delay	I-Algorithm
neural	I-Algorithm
network	I-Algorithm
(	O
TDNN	B-Algorithm
)	O
is	O
a	O
multilayer	O
artificial	B-Architecture
neural	I-Architecture
network	I-Architecture
architecture	O
whose	O
purpose	O
is	O
to	O
1	O
)	O
classify	O
patterns	O
with	O
shift-invariance	O
,	O
and	O
2	O
)	O
model	O
context	O
at	O
each	O
layer	O
of	O
the	O
network	O
.	O
</s>
<s>
For	O
the	O
classification	O
of	O
a	O
temporal	O
pattern	O
(	O
such	O
as	O
speech	O
)	O
,	O
the	O
TDNN	B-Algorithm
thus	O
avoids	O
having	O
to	O
determine	O
the	O
beginning	O
and	O
end	O
points	O
of	O
sounds	O
before	O
classifying	O
them	O
.	O
</s>
<s>
For	O
contextual	O
modelling	O
in	O
a	O
TDNN	B-Algorithm
,	O
each	O
neural	O
unit	O
at	O
each	O
layer	O
receives	O
input	O
not	O
only	O
from	O
activations/features	O
at	O
the	O
layer	O
below	O
,	O
but	O
also	O
from	O
a	O
pattern	O
of	O
unit	O
output	O
and	O
its	O
context	O
.	O
</s>
<s>
Applied	O
to	O
two-dimensional	O
classification	O
(	O
images	O
,	O
time-frequency	O
patterns	O
)	O
,	O
the	O
TDNN	B-Algorithm
can	O
be	O
trained	O
with	O
shift-invariance	O
in	O
the	O
coordinate	O
space	O
and	O
avoids	O
precise	O
segmentation	O
in	O
the	O
coordinate	O
space	O
.	O
</s>
<s>
The	O
TDNN	B-Algorithm
was	O
introduced	O
in	O
the	O
late	O
1980s	O
and	O
applied	O
to	O
a	O
task	O
of	O
phoneme	B-Language
classification	O
for	O
automatic	B-Application
speech	I-Application
recognition	I-Application
in	O
speech	O
signals	O
where	O
the	O
automatic	O
determination	O
of	O
precise	O
segments	O
or	O
feature	O
boundaries	O
was	O
difficult	O
or	O
impossible	O
.	O
</s>
<s>
Because	O
the	O
TDNN	B-Algorithm
recognizes	O
phonemes	B-Language
and	O
their	O
underlying	O
acoustic/phonetic	O
features	O
,	O
independent	O
of	O
position	O
in	O
time	O
,	O
it	O
improved	O
performance	O
over	O
static	O
classification	O
.	O
</s>
<s>
They	O
did	O
so	O
by	O
combining	O
TDNNs	B-Algorithm
with	O
max	O
pooling	O
in	O
order	O
to	O
realize	O
a	O
speaker	O
independent	O
isolated	O
word	O
recognition	O
system	O
.	O
</s>
<s>
The	O
Time	B-Algorithm
Delay	I-Algorithm
Neural	I-Algorithm
Network	I-Algorithm
,	O
like	O
other	O
neural	B-Architecture
networks	I-Architecture
,	O
operates	O
with	O
multiple	O
interconnected	O
layers	O
of	O
perceptrons	B-Algorithm
,	O
and	O
is	O
implemented	O
as	O
a	O
feedforward	B-Algorithm
neural	I-Algorithm
network	I-Algorithm
.	O
</s>
<s>
All	O
neurons	O
(	O
at	O
each	O
layer	O
)	O
of	O
a	O
TDNN	B-Algorithm
receive	O
inputs	O
from	O
the	O
outputs	O
of	O
neurons	O
at	O
the	O
layer	O
below	O
but	O
with	O
two	O
differences	O
:	O
</s>
<s>
Unlike	O
regular	O
Multi-Layer	B-Algorithm
perceptrons	I-Algorithm
,	O
all	O
units	O
in	O
a	O
TDNN	B-Algorithm
,	O
at	O
each	O
layer	O
,	O
obtain	O
inputs	O
from	O
a	O
contextual	O
window	O
of	O
outputs	O
from	O
the	O
layer	O
below	O
.	O
</s>
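The contextual window described above can be sketched in a few lines of Python (illustrative names and shapes, not from any particular toolkit): each unit applies one weight vector, shared across all time positions, to a sliding window of frames from the layer below.

```python
# Sketch of one TDNN layer: each output unit at time t sees a
# contextual window of activations from the layer below
# (frames t, t+1, ..., t+window-1). All names are illustrative.

def tdnn_layer(inputs, weights, bias, window):
    """inputs: list of feature vectors over time (one per frame).
    weights: flat weight list of length window * feature_dim,
    shared across every time position (weight sharing is what
    yields shift-invariance). Returns one activation per shift."""
    outputs = []
    for t in range(len(inputs) - window + 1):
        # Concatenate the windowed context into one input vector.
        context = [x for frame in inputs[t:t + window] for x in frame]
        pre = bias + sum(w * x for w, x in zip(weights, context))
        outputs.append(max(0.0, pre))  # ReLU-style nonlinearity
    return outputs

# Example: 5 frames of 2-dim features, a context window of 3 frames.
frames = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0], [1.0, 0.0]]
acts = tdnn_layer(frames, weights=[0.5] * 6, bias=-1.0, window=3)
```

Because the same `weights` are reused at every `t`, the layer responds to a pattern regardless of where in the input range it occurs.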
<s>
Shift-invariance	O
is	O
achieved	O
by	O
explicitly	O
removing	O
position	O
dependence	O
during	O
backpropagation	B-Algorithm
training	O
.	O
</s>
<s>
The	O
error	O
gradient	O
is	O
then	O
computed	O
by	O
backpropagation	B-Algorithm
through	O
all	O
these	O
networks	O
from	O
an	O
overall	O
target	O
vector	O
,	O
but	O
before	O
performing	O
the	O
weight	O
update	O
,	O
the	O
error	O
gradients	O
associated	O
with	O
shifted	O
copies	O
are	O
averaged	O
and	O
thus	O
shared	O
and	O
constrained	O
to	O
be	O
equal	O
.	O
</s>
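A minimal sketch of this gradient-averaging step (illustrative names, not any specific framework's API): each time-shifted copy contributes its own gradient, and averaging before the update keeps the copies' weights identical, i.e. shared.

```python
# Shift-invariance trick in miniature: gradients from each
# time-shifted copy of the network are computed separately,
# then averaged before the weight update, so every copy keeps
# exactly the same (shared) weights afterwards.

def averaged_update(weights, per_copy_gradients, lr=0.1):
    """per_copy_gradients: one gradient vector per shifted copy.
    Averaging ties the copies together and removes position
    dependence from the update."""
    n = len(per_copy_gradients)
    avg = [sum(g[i] for g in per_copy_gradients) / n
           for i in range(len(weights))]
    return [w - lr * gi for w, gi in zip(weights, avg)]

w = [0.2, -0.4]
grads = [[0.3, 0.1], [0.1, -0.1], [0.2, 0.0]]  # three shifted copies
w_new = averaged_update(w, grads)
```

The second weight is left unchanged here because its per-copy gradients cancel on average, which is exactly the behaviour the averaging is meant to produce.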
<s>
Thus	O
,	O
all	O
position	O
dependence	O
from	O
backpropagation	B-Algorithm
training	O
through	O
the	O
shifted	O
copies	O
is	O
removed	O
and	O
the	O
copied	O
networks	O
learn	O
the	O
most	O
salient	O
hidden	O
features	O
shift-invariantly	O
,	O
i.e.	O
without	O
first	O
requiring	O
precise	O
localization	O
,	O
the	O
TDNN	B-Algorithm
is	O
trained	O
time-shift-invariantly	O
.	O
</s>
<s>
Time-shift	O
invariance	O
is	O
achieved	O
through	O
weight	O
sharing	O
across	O
time	O
during	O
training	O
:	O
Time	O
shifted	O
copies	O
of	O
the	O
TDNN	B-Algorithm
are	O
made	O
over	O
the	O
input	O
range	O
(	O
from	O
left	O
to	O
right	O
in	O
Fig.1	O
)	O
.	O
</s>
<s>
Backpropagation	B-Algorithm
is	O
then	O
performed	O
from	O
an	O
overall	O
classification	O
target	O
vector	O
(	O
see	O
TDNN	B-Algorithm
diagram	O
,	O
three	O
phoneme	B-Language
class	O
targets	O
(	O
/b/	O
,	O
/d/	O
,	O
/g/	O
)	O
are	O
shown	O
in	O
the	O
output	O
layer	O
)	O
,	O
resulting	O
in	O
gradients	O
that	O
will	O
generally	O
vary	O
for	O
each	O
of	O
the	O
time-shifted	O
network	O
copies	O
.	O
</s>
<s>
TDNNs	B-Algorithm
could	O
also	O
be	O
combined	O
or	O
grown	O
by	O
way	O
of	O
pre-training	O
.	O
</s>
<s>
The	O
precise	O
architecture	O
of	O
TDNNs	B-Algorithm
(	O
time-delays	O
,	O
number	O
of	O
layers	O
)	O
is	O
mostly	O
determined	O
by	O
the	O
designer	O
depending	O
on	O
the	O
classification	O
problem	O
and	O
the	O
most	O
useful	O
context	O
sizes	O
.	O
</s>
<s>
Work	O
has	O
also	O
been	O
done	O
to	O
create	O
adaptable	O
time-delay	O
TDNNs	B-Algorithm
where	O
this	O
manual	O
tuning	O
is	O
eliminated	O
.	O
</s>
<s>
TDNN-based	O
phoneme	B-Language
recognizers	O
compared	O
favourably	O
in	O
early	O
comparisons	O
with	O
HMM-based	O
phone	O
models	O
.	O
</s>
<s>
Modern	O
deep	O
TDNN	B-Algorithm
architectures	O
include	O
many	O
more	O
hidden	O
layers	O
and	O
sub-sample	O
or	O
pool	O
connections	O
over	O
broader	O
contexts	O
at	O
higher	O
layers	O
.	O
</s>
<s>
While	O
the	O
different	O
layers	O
of	O
TDNNs	B-Algorithm
are	O
intended	O
to	O
learn	O
features	O
of	O
increasing	O
context	O
width	O
,	O
they	O
still	O
model	O
only	O
local	O
contexts	O
.	O
</s>
<s>
When	O
longer-distance	O
relationships	O
and	O
pattern	O
sequences	O
have	O
to	O
be	O
processed	O
,	O
learning	O
states	O
and	O
state-sequences	O
is	O
important	O
and	O
TDNNs	B-Algorithm
can	O
be	O
combined	O
with	O
other	O
modelling	O
techniques	O
.	O
</s>
<s>
TDNNs	B-Algorithm
,	O
introduced	O
in	O
1989	O
,	O
were	O
used	O
to	O
solve	O
problems	O
in	O
speech	B-Application
recognition	I-Application
,	O
initially	O
focusing	O
on	O
shift-invariant	O
phoneme	B-Application
recognition	I-Application
.	O
</s>
<s>
Speech	O
lends	O
itself	O
nicely	O
to	O
TDNNs	B-Algorithm
as	O
spoken	O
sounds	O
are	O
rarely	O
of	O
uniform	O
length	O
and	O
precise	O
segmentation	O
is	O
difficult	O
or	O
impossible	O
.	O
</s>
<s>
By	O
scanning	O
a	O
sound	O
over	O
past	O
and	O
future	O
,	O
the	O
TDNN	B-Algorithm
is	O
able	O
to	O
construct	O
a	O
model	O
for	O
the	O
key	O
elements	O
of	O
that	O
sound	O
in	O
a	O
time-shift	O
invariant	O
manner	O
.	O
</s>
<s>
Large	O
phonetic	O
TDNNs	B-Algorithm
can	O
be	O
constructed	O
modularly	O
through	O
pre-training	O
and	O
combining	O
smaller	O
networks	O
.	O
</s>
<s>
Large	O
vocabulary	O
speech	B-Application
recognition	I-Application
requires	O
recognizing	O
sequences	O
of	O
phonemes	B-Language
that	O
make	O
up	O
words	O
subject	O
to	O
the	O
constraints	O
of	O
a	O
large	O
pronunciation	O
vocabulary	O
.	O
</s>
<s>
Integration	O
of	O
TDNNs	B-Algorithm
into	O
large	O
vocabulary	O
speech	B-Application
recognizers	I-Application
is	O
possible	O
by	O
introducing	O
state	O
transitions	O
and	O
search	O
between	O
phonemes	B-Language
that	O
make	O
up	O
a	O
word	O
.	O
</s>
<s>
The	O
resulting	O
Multi-State	O
Time-Delay	O
Neural	B-Architecture
Network	I-Architecture
(	O
MS-TDNN	O
)	O
can	O
be	O
trained	O
discriminatively	O
from	O
the	O
word	O
level	O
,	O
thereby	O
optimizing	O
the	O
entire	O
arrangement	O
toward	O
word	O
recognition	O
instead	O
of	O
phoneme	B-Language
classification	O
.	O
</s>
<s>
Two-dimensional	O
variants	O
of	O
the	O
TDNNs	B-Algorithm
were	O
proposed	O
for	O
speaker	O
independence	O
.	O
</s>
<s>
One	O
of	O
the	O
persistent	O
problems	O
in	O
speech	B-Application
recognition	I-Application
is	O
recognizing	O
speech	O
when	O
it	O
is	O
corrupted	O
by	O
echo	O
and	O
reverberation	O
(	O
as	O
is	O
the	O
case	O
in	O
large	O
rooms	O
and	O
distant	O
microphones	O
)	O
.	O
</s>
<s>
The	O
TDNN	B-Algorithm
was	O
shown	O
to	O
be	O
effective	O
at	O
recognizing	O
speech	O
robustly	O
despite	O
different	O
levels	O
of	O
reverberation	O
.	O
</s>
<s>
TDNNs	B-Algorithm
were	O
also	O
successfully	O
used	O
in	O
early	O
demonstrations	O
of	O
audio-visual	O
speech	O
recognition	O
,	O
where	O
the	O
sounds	O
of	O
speech	O
are	O
complemented	O
by	O
visually	O
reading	O
lip	O
movement	O
.	O
</s>
<s>
Here	O
,	O
TDNN-based	O
recognizers	O
used	O
visual	O
and	O
acoustic	O
features	O
jointly	O
to	O
achieve	O
improved	O
recognition	O
accuracy	O
,	O
particularly	O
in	O
the	O
presence	O
of	O
noise	O
,	O
where	O
complementary	O
information	O
from	O
an	O
alternate	O
modality	O
could	O
be	O
fused	O
nicely	O
in	O
a	O
neural	B-Architecture
net	I-Architecture
.	O
</s>
<s>
TDNNs	B-Algorithm
have	O
been	O
used	O
effectively	O
in	O
compact	O
and	O
high-performance	O
handwriting	B-Application
recognition	I-Application
systems	I-Application
.	O
</s>
<s>
Shift-invariance	O
was	O
also	O
adapted	O
to	O
spatial	O
patterns	O
(	O
x/y	O
-axes	O
)	O
in	O
image	O
offline	O
handwriting	B-Application
recognition	I-Application
.	O
</s>
<s>
Video	O
has	O
a	O
temporal	O
dimension	O
that	O
makes	O
a	O
TDNN	B-Algorithm
an	O
ideal	O
solution	O
for	O
analysing	O
motion	O
patterns	O
.	O
</s>
<s>
When	O
examining	O
videos	O
,	O
subsequent	O
images	O
are	O
fed	O
into	O
the	O
TDNN	B-Algorithm
as	O
input	O
where	O
each	O
image	O
is	O
the	O
next	O
frame	O
in	O
the	O
video	O
.	O
</s>
<s>
The	O
strength	O
of	O
the	O
TDNN	B-Algorithm
comes	O
from	O
its	O
ability	O
to	O
examine	O
objects	O
shifted	O
in	O
time	O
forward	O
and	O
backward	O
to	O
define	O
an	O
object	O
that	O
is	O
detectable	O
as	O
the	O
time	O
is	O
altered	O
.	O
</s>
<s>
Two-dimensional	O
TDNNs	B-Algorithm
were	O
later	O
applied	O
to	O
other	O
image-recognition	O
tasks	O
under	O
the	O
name	O
of	O
"	O
Convolutional	B-Architecture
Neural	I-Architecture
Networks	I-Architecture
"	O
,	O
where	O
shift-invariant	O
training	O
is	O
applied	O
to	O
the	O
x/y	O
axes	O
of	O
an	O
image	O
.	O
</s>
<s>
TDNNs	B-Algorithm
can	O
be	O
implemented	O
in	O
virtually	O
all	O
machine-learning	O
frameworks	O
using	O
one-dimensional	O
convolutional	B-Architecture
neural	I-Architecture
networks	I-Architecture
,	O
due	O
to	O
the	O
equivalence	O
of	O
the	O
methods	O
.	O
</s>
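That equivalence is easy to see in miniature (pure-Python sketch; real frameworks expose this as a `Conv1d`-style layer): a "valid" 1-D convolution slides one shared kernel over the input, which is exactly the TDNN's shared contextual window over time.

```python
# A "valid" 1-D convolution: one kernel, applied at every time
# shift of the input. This is the TDNN's contextual window with
# shared weights, expressed in convolutional form.

def conv1d_valid(signal, kernel):
    k = len(kernel)
    return [sum(kernel[j] * signal[t + j] for j in range(k))
            for t in range(len(signal) - k + 1)]

# Each output element is the same kernel evaluated at one shift.
out = conv1d_valid([1.0, 2.0, 3.0, 4.0], [0.5, 0.5])
```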
<s>
Matlab	B-Language
:	O
The	O
neural	B-Architecture
network	I-Architecture
toolbox	O
has	O
explicit	O
functionality	O
designed	O
to	O
produce	O
a	O
time	B-Algorithm
delay	I-Algorithm
neural	I-Algorithm
network	I-Algorithm
given	O
the	O
step	O
size	O
of	O
time	O
delays	O
and	O
an	O
optional	O
training	O
function	O
.	O
</s>
<s>
The	O
default	O
training	O
algorithm	O
is	O
a	O
supervised	O
learning	O
back-propagation	B-Algorithm
algorithm	O
that	O
updates	O
filter	O
weights	O
based	O
on	O
the	O
Levenberg-Marquardt	O
optimization	O
.	O
</s>
<s>
The	O
function	O
is	O
timedelaynet(delays, hidden_layers, train_fnc)	O
and	O
returns	O
a	O
time-delay	O
neural	B-Architecture
network	I-Architecture
architecture	O
that	O
a	O
user	O
can	O
train	O
and	O
provide	O
inputs	O
to	O
.	O
</s>
<s>
The	O
Kaldi	B-General_Concept
ASR	I-General_Concept
Toolkit	I-General_Concept
has	O
an	O
implementation	O
of	O
TDNNs	B-Algorithm
with	O
several	O
optimizations	O
for	O
speech	B-Application
recognition	I-Application
.	O
</s>
