Classical approaches in the field of operations research consider only a single objective function to be optimized; such an approach models one aspect of the decision problem, or aggregates the relevant aspects into a single criterion (the aggregation usually being rather simplistic).
Many multidimensional approaches have been proposed as extensions of the classical ones. The first was Multicriteria Decision Making (MCDM), developed by the so-called American School. More recently, the European School has created a new type of approach to these problems, called Multicriteria Decision Aid (MCDA). Many real-life applications have successfully validated the feasibility of this approach.
MCDM/MCDA deal with different classes of decision problems (choice, classification, sorting, ranking), explicitly taking into consideration several points of view (multiple attributes or criteria, i.e. attributes with an ordered domain), in order to support decision makers in finding a consistent solution to the problem at hand (MCDM, 2009).
Despite recent advances in electronic technologies for e-learning, a consolidated evaluation methodology for e-learning applications is not yet available. The evaluation of educational software must consider its usability and, more generally, its accessibility, as well as its didactic effectiveness (Ardito et al., 2006).
Despite the widespread use of e-learning systems and the considerable investment in purchasing or developing them, there is no consensus on a standard framework for evaluating the system quality (Chua & Dyson, 2004).
The authors' approach is to use a multiple criteria evaluation method expressed by an experts' utility function, presented below in the section “Experts' Additive Utility Function”, which includes the evaluation criteria of the alternatives, their ratings (values) and their weights for evaluating the technological quality of learning software. According to this method, in order to evaluate the e-learning system components, we should identify their evaluation criteria, ratings (values) and weights.
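To make the method concrete, the following is a minimal sketch, assuming the experts' utility function takes the usual additive form f(X) = a1*x1 + ... + am*xm, where each x_i is a criterion rating on a normalised scale and the weights a_i sum to 1. The criterion names, ratings and weights below are illustrative assumptions, not values from the source.

```python
# Hedged sketch of an experts' additive utility function for evaluating
# one alternative (e.g. one VLE). All criterion names and numbers are
# hypothetical placeholders, not the authors' actual data.

def additive_utility(ratings, weights):
    """Weighted sum of criterion ratings; weights are normalised to sum to 1."""
    total = sum(weights[c] for c in ratings)
    return sum(weights[c] / total * ratings[c] for c in ratings)

# Hypothetical criterion ratings (x_i) and expert weights (a_i):
ratings = {"interoperability": 0.8, "usability": 0.6, "stability": 0.9}
weights = {"interoperability": 0.5, "usability": 0.3, "stability": 0.2}

print(round(additive_utility(ratings, weights), 2))  # 0.76
```

Under this form, alternatives can be ranked by their utility values: the higher f(X), the better the alternative scores against the weighted set of criteria.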
First, the authors' comprehensive sets (tools) of criteria for the evaluation of e-learning system components, created according to this principle, are described in the next section. Then, the ratings and weights, together with an example of an experimental evaluation of VLEs according to the method, are presented.
3.1. Comprehensive Technological Evaluation Models for Learning Software

3.1.1. Comprehensive Technological Evaluation Model for Learning Objects
While analysing the aforementioned LO evaluation criteria, it was necessary to exclude all the evaluation criteria that do not deal directly with LO technological quality problems, on the one hand, and to estimate interconnected/overlapping criteria, on the other hand. This analysis has shown that all the analysed sets of LO evaluation criteria have a number of limitations from a technological point of view:
• The LORI (Vargo et al., 2003), Paulsson and Naeve (2006) and MELT (2008) criteria do not examine the different LO life cycle stages.
• The Q4R (2008) set of criteria insufficiently examines technological evaluation criteria before LO inclusion into the LO repository.
• All these criteria insufficiently examine LO reusability, including interoperability.
It is obvious that a more comprehensive set of criteria for LO technological evaluation is needed, one that comprises both LO quality evaluation criteria suitable for the different LO life cycle stages, including criteria for before, during and after LO inclusion into the repository, and LO reusability (including interoperability) evaluation criteria.
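As a rough illustration only, such a comprehensive set could be organised per life cycle stage, as in the sketch below; the stage groupings and criterion names are hypothetical placeholders, not the authors' model, which is described in the following sections.

```python
# Hypothetical organisation of technological LO evaluation criteria by
# life cycle stage (before / during / after inclusion into a repository).
# All criterion names are illustrative assumptions.

LO_EVALUATION_MODEL = {
    "before_inclusion": ["technical quality", "metadata completeness"],
    "during_inclusion": ["packaging conformance", "repository validation"],
    "after_inclusion": ["reusability", "interoperability", "maintainability"],
}

for stage, criteria in LO_EVALUATION_MODEL.items():
    print(f"{stage}: {', '.join(criteria)}")
```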