here. The original set of LO evaluation criteria is presented in Table 4.

Additional LO evaluation criteria, interconnected with the technological criteria, could be (1) licensing (clear rules, e.g. compliance with Creative Commons (2009)), and (2) economic efficiency, which takes into account the probable LO reusability level (Kurilovas, 2007).
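These two additional criteria could be checked mechanically. The sketch below is a minimal illustration only: the license whitelist, the cost-per-reuse reading of economic efficiency, and all names are assumptions, not part of the cited criteria sets.

```python
# Minimal sketch of checking the two additional LO criteria named above.
# The license whitelist, the cost-per-reuse reading of economic
# efficiency, and all names below are illustrative assumptions only.

# Licenses treated here as having "clear rules" (hypothetical whitelist).
ACCEPTED_LICENSES = {"CC-BY", "CC-BY-SA", "CC-BY-NC"}

def licensing_ok(license_id: str) -> bool:
    """Criterion (1): does the LO carry an accepted, clearly ruled license?"""
    return license_id in ACCEPTED_LICENSES

def cost_per_reuse(cost: float, expected_reuses: int) -> float:
    """Criterion (2): economic efficiency as cost divided by the probable
    number of reuses (lower means more efficient)."""
    return cost / max(expected_reuses, 1)

print(licensing_ok("CC-BY"))        # True
print(cost_per_reuse(300.0, 10))    # 30.0
```

A higher probable reusability level thus directly improves the economic-efficiency figure, which is the relationship the text describes.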
3.1.2. Comprehensive Technological Evaluation Model for Learning Object Repositories

The principle presented in the Introductory Section claims that there exist both 'internal quality' and 'quality in use' evaluation criteria for software packages (such as LORs). The analysis shows that none of the tools presented in the previous section clearly divides the LOR quality evaluation criteria into two separate groups: LOR 'internal quality' evaluation criteria and 'quality in use' criteria. It is therefore difficult to understand which criteria reflect the basic LOR quality aspects suitable for all software package alternatives, and which are suitable only for a particular project or user and therefore need the users' feedback.

While analysing the LOR quality evaluation criteria presented previously, we noticed that several tools pay more attention to the general software 'internal quality' evaluation criteria (such as the 'Architecture' group criteria), and some of them to the 'customizable' 'quality in use' evaluation criteria groups suitable for a particular project or user: 'Metadata', 'Storage', 'Graphical user interface' and 'Other'. According to the principle, a comprehensive LOR quality evaluation tool should include both the general software 'internal quality' evaluation criteria and the 'quality in use' evaluation criteria suitable for a particular project or user.

The LOR quality evaluation tool proposed by the authors is presented in Table 5. It is mostly similar to the SWITCH tool (cf. Table 1), but it also includes criteria from the other presented tools as well as from the authors' own research. The main ideas behind the constitution of this tool are to divide the LOR quality evaluation criteria clearly in conformity with the principle, to ensure the comprehensiveness of the tool, and to avoid overlap among the criteria.

The advantage of the proposed tool is its comprehensiveness and the clear division of the criteria: 'internal quality' criteria are mainly the area of interest of software engineers, while 'quality in use' criteria are mostly to be analysed by programmers, taking into account the users' feedback on the usability of the software.

Two of the criteria in Table 5 could be interpreted from different perspectives: 'Accessibility: access for all' could be placed in the 'Architecture' group, but as it requires users' evaluation it has been included in the 'Quality in use' criteria group. Likewise, 'Property and metadata inheritance' could also be placed in the 'Metadata' group, although it deals with 'Storage' issues as well.

In any case, the model (set of criteria) contains 34 different evaluation criteria, of which 11 deal with 'Internal quality' (or 'Architecture') and 23 deal with 'Quality in use'. The twenty-three 'Quality in use' criteria are further divided into four groups to increase precision and convenience in practical evaluation. Different experts (programmers and users) could be assigned to different groups of the 'Quality in use' criteria; indeed, the 'Metadata', 'Storage' and 'Graphical user interface' criteria require different kinds of evaluators' expertise.
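The 34-criterion model described above (11 'Internal quality' criteria plus 23 'Quality in use' criteria in four expert-specific groups) can be sketched as a simple aggregation scheme. This is only an illustration: the 1-5 rating scale, the equal weighting, the example ratings and all function names are assumptions, not part of the published tool.

```python
# Sketch of aggregating expert ratings under the two-level model above.
# The group names come from the text; the 1-5 rating scale, the equal
# weights and all function names are illustrative assumptions.
from statistics import mean

# The four 'Quality in use' subgroups named in the text.
QUALITY_IN_USE_GROUPS = ("Metadata", "Storage", "Graphical user interface", "Other")

def group_score(ratings):
    """Average the 1-5 ratings one expert group gave for its criteria."""
    return mean(ratings)

def overall_score(internal_ratings, in_use_ratings, internal_weight=0.5):
    """Combine 'Internal quality' and 'Quality in use' into one score.

    in_use_ratings maps each subgroup name to the ratings collected from
    the experts assigned to that subgroup.
    """
    internal = group_score(internal_ratings)
    in_use = mean(group_score(r) for r in in_use_ratings.values())
    return internal_weight * internal + (1 - internal_weight) * in_use

# Example: software engineers rate the 11 'Architecture' criteria, while
# different evaluators rate each 'Quality in use' subgroup.
score = overall_score(
    internal_ratings=[4, 5, 3, 4, 4, 5, 4, 3, 4, 4, 5],   # 11 ratings
    in_use_ratings={
        "Metadata": [4, 4, 5],
        "Storage": [3, 4],
        "Graphical user interface": [5, 4, 4],
        "Other": [4],
    },
)
print(round(score, 2))
```

Averaging within each subgroup first means each kind of evaluator expertise contributes one score, matching the text's point that different experts can be used for different 'Quality in use' groups.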
3.1.3. Comprehensive Technological Evaluation Model for Virtual Learning Environments

While analysing the VLE evaluation methods in the previous section, it was necessary to exclude all the evaluation criteria that do not deal directly with VLE technological quality problems, on the one