$$
\left\langle \sum_{i=1}^{n} a_i \lambda_{s_i},\ \sum_{j=1}^{n} a_j \lambda_{s_j} \right\rangle_{L_2(\mathcal{T})}
= \left\| \sum_{i=1}^{n} a_i \lambda_{s_i} \right\|_{L_2(\mathcal{T})}^{2} \ge 0,
\tag{1.45}
$$
since the norm is nonnegative.
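The nonnegativity in Equation (1.45) is easy to check numerically. The sketch below is a minimal illustration, assuming the CI kernel is evaluated as the $L_2(\mathcal{T})$ inner product of intensity functions estimated by Gaussian smoothing of simulated spike trains on a discrete grid over $\mathcal{T} = [0, 1]$; the helper names `intensity` and `ci_kernel`, the smoothing width, and the simulated data are illustrative choices, not part of the text.

```python
# Minimal numerical sketch of Equation (1.45) -- illustrative assumptions, not from the text.
# The CI kernel is approximated as the L2(T) inner product of Gaussian-smoothed
# intensity estimates on a discrete grid over T = [0, 1]; spike trains are simulated.
import numpy as np

def intensity(spike_times, t_grid, sigma=0.02):
    """Estimate lambda_s(t) by placing a normalized Gaussian bump at each spike time."""
    d = t_grid[:, None] - np.asarray(spike_times)[None, :]
    return np.exp(-0.5 * (d / sigma) ** 2).sum(axis=1) / (sigma * np.sqrt(2 * np.pi))

def ci_kernel(s_a, s_b, t_grid, sigma=0.02):
    """I(s_a, s_b) = <lambda_sa, lambda_sb>_{L2(T)}, approximated by a Riemann sum."""
    dt = t_grid[1] - t_grid[0]
    return float(np.sum(intensity(s_a, t_grid, sigma) * intensity(s_b, t_grid, sigma)) * dt)

t_grid = np.linspace(0.0, 1.0, 2000)
rng = np.random.default_rng(0)
trains = [np.sort(rng.uniform(0.0, 1.0, rng.integers(5, 15))) for _ in range(4)]

# Gram matrix of CI kernel evaluations and the quadratic form of Eq. (1.45).
G = np.array([[ci_kernel(sa, sb, t_grid) for sb in trains] for sa in trains])
a = rng.normal(size=len(trains))
print("quadratic form a^T G a:", a @ G @ a)                  # expected: >= 0
print("smallest eigenvalue of G:", np.linalg.eigvalsh(G).min())
```

Both printed quantities should be nonnegative up to numerical error, mirroring the argument above.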
Proof (Property 1.5). Consider the $2 \times 2$ CI kernel matrix,
$$
V = \begin{bmatrix} I(s_i, s_i) & I(s_i, s_j) \\ I(s_j, s_i) & I(s_j, s_j) \end{bmatrix}.
$$
From Property 1.2, this matrix is symmetric and nonnegative definite. Hence, its
determinant is nonnegative [7, p. 245]. Mathematically,
$$
\det(V) = I(s_i, s_i)\, I(s_j, s_j) - I^{2}(s_i, s_j) \ge 0,
$$
which proves the result of Equation (1.16).
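Continuing the numerical sketch after Equation (1.45), and reusing its hypothetical `ci_kernel` helper and simulated trains, the lines below check Property 1.5 by forming the $2 \times 2$ matrix $V$ and confirming that its determinant is nonnegative.

```python
# Continues the sketch after Eq. (1.45): form the 2x2 CI kernel matrix V for two
# simulated spike trains and check det(V) >= 0, which is the Cauchy-Schwarz
# inequality I^2(s_i, s_j) <= I(s_i, s_i) I(s_j, s_j).
s_i, s_j = trains[0], trains[1]
V = np.array([[ci_kernel(s_i, s_i, t_grid), ci_kernel(s_i, s_j, t_grid)],
              [ci_kernel(s_j, s_i, t_grid), ci_kernel(s_j, s_j, t_grid)]])
print("det(V):", np.linalg.det(V))                           # expected: >= 0
print("I(si,si)*I(sj,sj) - I(si,sj)^2:", V[0, 0] * V[1, 1] - V[0, 1] ** 2)
```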
Proof (Property 1.6). Consider two spike trains, $s_i, s_j \in \mathcal{S}(\mathcal{T})$. The norm of the sum of two spike trains is
$$
\begin{aligned}
\left\|\lambda_{s_i} + \lambda_{s_j}\right\|^{2}
&= \left\langle \lambda_{s_i} + \lambda_{s_j},\, \lambda_{s_i} + \lambda_{s_j} \right\rangle &&\text{(1.46a)}\\
&= \left\langle \lambda_{s_i}, \lambda_{s_i} \right\rangle + 2\left\langle \lambda_{s_i}, \lambda_{s_j} \right\rangle + \left\langle \lambda_{s_j}, \lambda_{s_j} \right\rangle &&\text{(1.46b)}\\
&\le \left\|\lambda_{s_i}\right\|^{2} + 2\left\|\lambda_{s_i}\right\|\left\|\lambda_{s_j}\right\| + \left\|\lambda_{s_j}\right\|^{2} &&\text{(1.46c)}\\
&= \left(\left\|\lambda_{s_i}\right\| + \left\|\lambda_{s_j}\right\|\right)^{2}, &&\text{(1.46d)}
\end{aligned}
$$
with the upper bound in step (1.46c) established by the Cauchy-Schwarz inequality (Property 1.5). Taking square roots of both sides yields the triangle inequality.
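The same sketch extends to Property 1.6: with the norm induced by the CI kernel, $\|\lambda_s\|^{2} = I(s, s)$, the expansion in (1.46b) lets the left-hand side be computed from three kernel evaluations.

```python
# Continues the same sketch: check the triangle inequality of Property 1.6,
# with the norm induced by the CI kernel, ||lambda_s||^2 = I(s, s).
norm_i = np.sqrt(ci_kernel(s_i, s_i, t_grid))
norm_j = np.sqrt(ci_kernel(s_j, s_j, t_grid))
norm_sum = np.sqrt(ci_kernel(s_i, s_i, t_grid)
                   + 2.0 * ci_kernel(s_i, s_j, t_grid)
                   + ci_kernel(s_j, s_j, t_grid))
print(norm_sum <= norm_i + norm_j)                           # expected: True
```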
References
1. Aronszajn, N.: Theory of reproducing kernels. Trans. Am. Math. Soc. 68(3), 337-404 (1950)
2. Berg, C., Christensen, J.P.R., Ressel, P.: Harmonic Analysis on Semigroups: Theory of Positive Definite and Related Functions. Springer-Verlag, New York (1984)
3. Bohte, S.M., Kok, J.N., Poutre, H.L.: Error-backpropagation in temporally encoded networks of spiking neurons. Neurocomputing 48(1-4), 17-37 (2002). DOI 10.1016/S0925-2312(01)00658-0
4. Carnell, A., Richardson, D.: Linear algebra for time series of spikes. In: Proceedings of the European Symposium on Artificial Neural Networks, pp. 363-368. Bruges, Belgium (2005)
5. Dayan, P., Abbott, L.F.: Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. MIT Press, Cambridge, MA (2001)
6. Diggle, P., Marron, J.S.: Equivalence of smoothing parameter selectors in density and intensity estimation. J. Am. Stat. Assoc. 83(403), 793-800 (1988)
7. Harville, D.A.: Matrix Algebra from a Statistician's Perspective. Springer, New York (1997)