\[
\begin{aligned}
\max\quad & \frac{1}{2}\,\|\omega\|^{2} + C\sum_{i=1}^{N}\xi_{i}
  - \sum_{i=1}^{N}\lambda_{i}\Bigl[f(x_{i})\bigl(\langle \omega, x_{i}\rangle + b\bigr) - 1 + \xi_{i}\Bigr]
  - \sum_{i=1}^{N}\mu_{i}\,\xi_{i}, & (5.5)\\
\text{s.t.}\quad & \omega = \sum_{i=1}^{N}\lambda_{i}\, f(x_{i})\, x_{i}, & (5.6)\\
& \sum_{i=1}^{N}\lambda_{i}\, f(x_{i}) = 0, & (5.7)\\
& C - \mu_{i} - \lambda_{i} = 0, \quad i = 1, 2, \ldots, N, & (5.8)\\
& \lambda_{i} \geq 0, \;\; \mu_{i} \geq 0, \quad i = 1, 2, \ldots, N. & (5.9)
\end{aligned}
\]
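Substituting the stationarity conditions (5.6)-(5.8) into (5.5) eliminates ω, b, and ξ and reduces the problem to the familiar dual in the multipliers λ alone. This standard reduction is not spelled out in the passage above; it is stated here for clarity:

\[
\max_{\lambda}\ \sum_{i=1}^{N}\lambda_{i}
 - \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\lambda_{i}\lambda_{j}\, f(x_{i})\, f(x_{j})\,\langle x_{i}, x_{j}\rangle,
\qquad \text{s.t.}\ \sum_{i=1}^{N}\lambda_{i}\, f(x_{i}) = 0,\ \ 0 \leq \lambda_{i} \leq C.
\]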
In order to obtain the best value of the parameter C, the search space of the SVM is set to C = 2^{-5} to 2^{5}. We also investigated a nonlinear variant with a radial basis function (RBF) kernel, which yielded similar results to the linear SVM.
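As a concrete illustration of this grid search, here is a minimal sketch using scikit-learn; it is not the authors' original code, and the synthetic data stands in for the fMRI feature matrix and stimulus labels:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in for the fMRI feature matrix and labels (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = rng.integers(0, 2, size=100)

param_grid = {"C": 2.0 ** np.arange(-5, 6)}  # C = 2^-5, 2^-4, ..., 2^5

# Linear SVM as in the text; swap kernel="rbf" for the nonlinear variant.
search = GridSearchCV(SVC(kernel="linear"), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_)  # cross-validated best C
```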
5.4.3 Gaussian Naïve Bayes
The GNB classifier uses the training data to estimate the probability distribution over fMRI observations, conditioned on the stimuli. Responses conditioned on the stimuli were modeled as Gaussians, under the assumption that each voxel is independent of the others. The class-conditional Gaussian means and variances were estimated by maximum likelihood estimation. With the obtained model, the decision boundary for classification is the Bayes-optimal boundary, and the predicted class on test data was the most probable class under this model.
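A minimal sketch of such a per-voxel Gaussian Naïve Bayes fit and prediction follows; the array shapes and function names are illustrative assumptions, not the study's implementation:

```python
import numpy as np

def fit_gnb(X, y):
    # X: (n_trials, n_voxels) responses; y: class labels.
    # Maximum-likelihood mean/variance per class, per voxel (independence assumed).
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0),
                     Xc.var(axis=0) + 1e-9,        # small floor avoids division by zero
                     np.log(len(Xc) / len(X)))     # log class prior
    return params

def predict_gnb(params, x):
    # Log-posterior up to a constant; independence turns it into a sum over voxels.
    def log_post(c):
        mu, var, log_prior = params[c]
        return log_prior - 0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
    return max(params, key=log_post)  # most probable class under the model
```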
5.4.4 Correlation Analysis
The responses in the training set for active and inactive voxels were averaged separately to compute the mean response for each category as a template. For prediction, the correlation coefficient between each test point (the time series of a voxel) and each of the templates was computed. Each test point was then assigned to Class 1 if its correlation coefficient with the Class 1 template was larger than that with the Class 2 template, and to Class 2 otherwise.
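The template-correlation rule can be sketched as follows; the names train1, train2 and the array shapes are assumptions for illustration:

```python
import numpy as np

def fit_templates(train1, train2):
    # train1, train2: (n_examples, T) time series for Class 1 and Class 2.
    # Averaging within each class yields one template per class.
    return train1.mean(axis=0), train2.mean(axis=0)

def predict(template1, template2, x):
    # Correlate the test time series x with each template and pick the larger.
    r1 = np.corrcoef(x, template1)[0, 1]
    r2 = np.corrcoef(x, template2)[0, 1]
    return 1 if r1 > r2 else 2
```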
5.4.5 k-Nearest Neighbor
The algorithm for the k-nearest neighbor classifier is summarized as follows. Given an unknown feature vector (voxel), its k nearest neighbors in the training set are identified, irrespective of class label. Out of these k samples, the number of vectors belonging to each class is counted, and the test vector is assigned to the class receiving the majority of the votes.
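A minimal sketch of this rule, assuming Euclidean distance (the excerpt does not specify the distance metric):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    # X_train: (n_samples, n_features) array; y_train: NumPy array of labels.
    dists = np.linalg.norm(X_train - x, axis=1)   # distance to every training vector
    nearest = np.argsort(dists)[:k]               # indices of the k nearest neighbors
    return Counter(y_train[nearest]).most_common(1)[0][0]  # majority vote
```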