$$
J(\tilde{\mathbf{b}}) = \sum_{i=1}^{N} \left( \sum_{j=1}^{N} b_j \big\langle \tilde{\lambda}_{s_i}, \tilde{\lambda}_{s_j} \big\rangle \right) \left( \sum_{k=1}^{N} b_k \big\langle \tilde{\lambda}_{s_i}, \tilde{\lambda}_{s_k} \big\rangle \right) + \gamma \left( 1 - \tilde{\mathbf{b}}^{T} \tilde{I} \tilde{\mathbf{b}} \right) = \tilde{\mathbf{b}}^{T} \tilde{I}^{2} \tilde{\mathbf{b}} + \gamma \left( 1 - \tilde{\mathbf{b}}^{T} \tilde{I} \tilde{\mathbf{b}} \right), \tag{1.43}
$$

where $\tilde{I}$ is the Gram matrix of the centered intensity functions (i.e., $\tilde{I}_{ij} = \langle \tilde{\lambda}_{s_i}, \tilde{\lambda}_{s_j} \rangle_{L_2}$).

As expected, since the inner product is the same and the two spaces are congruent, this cost function yields the same solution. However, unlike the previous one, this presentation has the advantage of showing the role of the eigenvectors of the Gram matrix and, most importantly, how to obtain the principal component functions in the space of intensity functions. From Equation (1.42), the coefficients of the eigenvectors of the Gram matrix provide a weighting for the intensity function of each spike train and, therefore, express how important a spike train is for representing the others. From a different perspective, this suggests that the principal component functions should reveal general trends in the intensity functions.
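The eigendecomposition of the centered Gram matrix described above can be sketched in a few lines. This is a generic kernel-PCA computation, not the chapter's own code; the names `kernel_pca`, `gram`, and the explicit centering step are ours, and the columns of the returned eigenvector matrix play the role of the coefficient vectors $\tilde{\mathbf{b}}$:

```python
import numpy as np

def kernel_pca(gram):
    """Center a Gram matrix and eigendecompose it, returning eigenvalues
    and coefficient vectors b (as columns), sorted by decreasing eigenvalue."""
    n = gram.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    I_tilde = H @ gram @ H                # Gram matrix of centered functions
    eigvals, eigvecs = np.linalg.eigh(I_tilde)
    order = np.argsort(eigvals)[::-1]     # descending order
    return eigvals[order], eigvecs[:, order]
```

Each column of the returned eigenvector matrix weights the intensity functions of the spike trains, which is exactly how the principal component functions are assembled in the text.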
1.7.3 Results
To illustrate the algorithm just derived, we performed a simple experiment. We generated two template spike trains, each comprising 10 spikes distributed uniformly at random over an interval of 0.25 s. In a specific application, these template spike trains could correspond, for example, to the average response of a culture of neurons to two distinct but fixed input stimuli. For the computation of the coefficients of the eigendecomposition (the "training set"), we generated a total of 50 spike trains, half for each template, by randomly copying each spike from the template with probability 0.8 and adding zero-mean Gaussian jitter with a standard deviation of 3 ms. To test the obtained coefficients, 200 spike trains were generated following the same procedure. The simulated spike trains are shown in Fig. 1.2.
According to the PCA algorithm derived previously, we computed the eigendecomposition of the matrix $\tilde{I}$, as given by Equation (1.35), so that it solves Equation (1.37). The mCI kernel was estimated from the spike trains according to Equation (1.12) and computed with a Gaussian kernel of size 2 ms. The eigenvalues and the first two eigenvectors are shown in Fig. 1.3.
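A plausible implementation of this estimate is a double sum of a Gaussian kernel over all pairs of spike times. This is our reading of the construction; the exact normalization in Equation (1.12) may differ, and the function names are ours:

```python
import numpy as np

def mci_kernel(s1, s2, sigma=0.002):
    """mCI kernel estimate between two spike trains: a Gaussian kernel
    (size sigma, here 2 ms) summed over all pairs of spike times."""
    if len(s1) == 0 or len(s2) == 0:
        return 0.0
    d = np.subtract.outer(np.asarray(s1), np.asarray(s2))
    return float(np.exp(-d ** 2 / (2 * sigma ** 2)).sum())

def gram_matrix(trains, sigma=0.002):
    """Pairwise mCI kernel evaluations, exploiting symmetry."""
    n = len(trains)
    G = np.empty((n, n))
    for i in range(n):
        for j in range(i, n):
            G[i, j] = G[j, i] = mci_kernel(trains[i], trains[j], sigma)
    return G
```

The resulting Gram matrix is what the eigendecomposition step above operates on (after centering).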
The first eigenvalue alone accounts for more than 26% of the variance of the dataset in the RKHS. Although this value is not impressive by itself, its importance is clear since it is nearly four times larger than the second eigenvalue (6.6%). Furthermore, notice that the first eigenvector clearly shows the separation between spike trains generated from the two templates (Fig. 1.3b). This can again be seen in the first principal component function, shown in Fig. 1.4, which reveals the locations of the spike times used to generate the templates while discriminating between them with opposite signs.
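The variance percentages quoted above come from normalizing the eigenvalues of the centered Gram matrix; a trivial sketch (ours, not the chapter's):

```python
import numpy as np

def explained_variance(eigvals):
    """Fraction of RKHS variance captured by each principal component."""
    v = np.clip(np.asarray(eigvals, dtype=float), 0.0, None)  # guard tiny negative eigenvalues
    return v / v.sum()
```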