feel' (a_33 = 0.06) (see Table 4). In this case, all
the other criteria weights could be considered
equally important and, according to the normalization
Formula (2), each gets a weight a_i = 0.026. The choice
of the particular values of the weights usually
depends on the experts (decision-makers).
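The normalization step described above can be sketched in Python. The criterion count (37) is an assumption chosen so that the shared weight comes out near the 0.026 quoted in the text; Formula (2) itself is not reproduced in this excerpt, so a simple "fix some weights, split the remainder equally" rule is assumed:

```python
# Sketch of the weight normalization described in the text (Formula (2) is
# assumed, not reproduced here): one criterion ('customizable look and feel')
# is fixed at 0.06 and the remaining criteria share the rest of the unit
# weight equally. The total of 37 criteria is an illustrative assumption.

def normalize_weights(fixed, n_total):
    """Return a weight vector: fixed weights kept, the remaining unit mass
    split equally among the other criteria, so all weights sum to 1."""
    remaining = 1.0 - sum(fixed.values())
    n_free = n_total - len(fixed)
    equal = remaining / n_free
    weights = {i: equal for i in range(n_total) if i not in fixed}
    weights.update(fixed)
    return weights

w = normalize_weights({33: 0.06}, 37)
print(round(w[0], 3))             # each free criterion: (1 - 0.06)/36 ≈ 0.026
print(round(sum(w.values()), 6))  # weights sum to 1.0
```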
In this particular scenario, applying the util-
ity function (3) would give us Fedora (2009) as
the optimal LOR for users with special needs,
as compared with DSpace (2009) and EPrints
(2009) LOR packages. The main reasons for this
outcome are the following: (1) a modular approach;
(2) a metadata schema extensible without restrictions;
(3) all UI projects are open source and can
be adapted; and (4) a high ability to customize
'look and feel'.
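Since the utility function (3) is not reproduced in this excerpt, the selection step can be illustrated with an assumed additive weighted-sum utility: each LOR gets criterion ratings in [0, 1], and the alternative with the highest weighted sum is chosen. The ratings and weights below are invented for illustration, not the study's actual data:

```python
# Hypothetical sketch of applying a utility function like Equation (3)
# (assumed here to be a weighted sum over normalized criterion ratings).
# All numbers below are illustrative assumptions, not the authors' data.

def utility(ratings, weights):
    """Weighted-sum utility over the criteria present in `weights`."""
    return sum(weights[c] * ratings[c] for c in weights)

weights = {"modularity": 0.3, "metadata_extensibility": 0.3,
           "open_ui": 0.2, "look_and_feel": 0.2}
alternatives = {
    "Fedora":  {"modularity": 0.9, "metadata_extensibility": 0.9,
                "open_ui": 0.8, "look_and_feel": 0.9},
    "DSpace":  {"modularity": 0.6, "metadata_extensibility": 0.5,
                "open_ui": 0.7, "look_and_feel": 0.6},
    "EPrints": {"modularity": 0.5, "metadata_extensibility": 0.6,
                "open_ui": 0.7, "look_and_feel": 0.5},
}

best = max(alternatives, key=lambda a: utility(alternatives[a], weights))
print(best)  # → Fedora
```

With these invented ratings Fedora scores 0.88 against 0.59 for DSpace and 0.57 for EPrints, mirroring the outcome reported in the text.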
where r is the number of experts; m is the number
of parameters under evaluation; and S is the sum
of the squared deviations of the importance ratings
from the experts' aggregate average. In its turn,

λ² = W·r·(m − 1) = 12S / (r·m·(m + 1)),     (5)

The compatibility of the experts' assessments is
considered satisfactory if the value of the concordance
rate W is 0.6-0.7 (Kendall, 1979).
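Assuming Equations (4)-(5) follow Kendall's classic formulation, the concordance check can be sketched as follows. Here each of r experts ranks m parameters, S is the sum of squared deviations of the rank sums from their mean, and the rank matrix is invented for illustration:

```python
# Sketch of the concordance check, assuming Kendall's classic formulation
# of W (Equation (4)) and the derived statistic λ² (Equation (5)).
# The rank matrix below is an invented example, not the study's data.

def concordance(ranks):
    """ranks: list of r rows, each a ranking (1..m) of the m parameters.
    Returns (W, λ²)."""
    r, m = len(ranks), len(ranks[0])
    rank_sums = [sum(row[j] for row in ranks) for j in range(m)]
    mean = r * (m + 1) / 2
    S = sum((Rj - mean) ** 2 for Rj in rank_sums)
    W = 12 * S / (r ** 2 * (m ** 3 - m))   # Equation (4)
    chi2 = W * r * (m - 1)                 # Equation (5): 12S / (r m (m + 1))
    return W, chi2

ranks = [[1, 2, 3, 4],
         [1, 3, 2, 4],
         [2, 1, 3, 4]]
W, chi2 = concordance(ranks)
print(round(W, 3), round(chi2, 3))
```

For this example W ≈ 0.78, above the 0.6-0.7 threshold, so the three invented rankings would be considered satisfactorily compatible.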
4. FUTURE RESEARCH TRENDS
The authors have analyzed several well-known
tools and methods for multiple criteria evaluation
of learning software, such as LOs, LORs,
and VLEs.
Future research will concentrate on further
analysis of more learning software quality evaluation
models and tools, with a view to creating comprehensive
sets of learning software evaluation
criteria according to the principle presented in
the introductory section. Additional research is
also needed to avoid overlap among the learning
software technological quality evaluation criteria.
Furthermore, all the new models are to be validated.
Along this line, validation of the proposed
LORs quality evaluation model is scheduled for
autumn 2010 in Lithuania, involving three
researchers and software engineering experts to
validate the 'Internal quality' criteria, and 12 (3 for
each of the 4 groups) programmers and users to
validate the 'Quality in use' criteria.
The authors have analyzed the application of
only one scientific method, represented
by Equation (3), for multiple criteria evaluation
and optimization of the learning software. Other
methods of vector optimization could be used in
future research, and their efficiency should
be compared.
3.2.4 Minimization of
Experts' Subjectivity
Another very complicated problem for such mul-
tiple criteria evaluation and optimization tasks is
the minimization of the experts' (decision makers')
subjectivity. Experts' subjectivity can influence the
quality criteria ratings (values) and their weights.
There are scientific approaches to alleviating
this problem, such as the one formulated
by Kendall (1979). According to Kendall (1979),
the experts' influence generally differs, and this
importance should therefore be assessed using
an appropriate methodology. It is important
to form the expert group strictly in line with the
experts' competences. Furthermore, according to Kendall
(1979), extreme expert assessments of the ratings
and weights should be eliminated. To pursue
the compatibility of the experts' assessments,
we should calculate the so-called
concordance rate W and the distribution λ²:
W = 12S / (r²·(m³ − m)),     (4)