Strategies after LO inclusion in the LOR are based on the following principles:

• Provide interesting and easily understood user statistics, such as stars, percentages, and voting systems.
• Include recommendations for reuse by the user, addressed both to the next user and to the designer.
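As an illustration of the first principle, the sketch below aggregates raw per-user votes into the kind of easily understood statistics mentioned above (stars and percentages). It is a hypothetical example; the function and field names are illustrative assumptions, not taken from any LOR implementation.

```python
def summarize_votes(votes, max_stars=5):
    """Turn raw per-user star votes (1..max_stars) into display statistics."""
    if not votes:
        return {"stars": 0.0, "percent": 0.0, "count": 0}
    average = sum(votes) / len(votes)
    return {
        "stars": round(average, 1),                      # e.g. 4.2 of 5 stars
        "percent": round(100 * average / max_stars, 1),  # same rating as a percentage
        "count": len(votes),                             # how many users voted
    }

print(summarize_votes([5, 4, 4, 3, 5]))
# → {'stars': 4.2, 'percent': 84.0, 'count': 5}
```

Showing both a star value and a percentage, together with the vote count, lets a prospective user judge at a glance how reliable the rating is.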
2.2. Existing Technological Evaluation Tools for Learning Objects Repositories
2.2.1. The SWITCH Learning Object Repository Quality Evaluation Grid

The first LOR quality evaluation tool presented here is the SWITCH project tool (SWITCH collection, 2008), developed while evaluating the DSpace (2009) and Fedora (2009) LORs in 2008.

An overview of the tool in Table 1 shows that there is no clear division of the criteria into 'internal quality' and 'quality in use' criteria. According to the principle (see the Introductory Section), 'internal quality' criteria should mainly be the area of interest of the software engineers, while 'quality in use' criteria should mostly be analyzed by the programmers taking into account the users' feedback on the usability of the software. Nevertheless, we can notice that the 'Architecture' group's sub-criteria are mainly engineering criteria, and could therefore be analyzed as 'internal quality' criteria, while all other criteria are mainly user-related and could therefore be treated as 'quality in use' criteria.
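The split between 'internal quality' and 'quality in use' recurs throughout this comparison, so it may help to see it made concrete. The sketch below represents the classification as a simple mapping from criteria groups to quality dimensions; only the 'Architecture' group name comes from the text, while the other group names and the data structure itself are assumptions for illustration.

```python
# Hypothetical mapping of criteria groups to quality dimensions.
# 'Architecture' is named in the text; the other groups are placeholders.
CRITERIA_GROUPS = {
    "Architecture": "internal quality",   # engineering-oriented sub-criteria
    "Usability": "quality in use",        # user-related criteria
    "Content management": "quality in use",
}

def groups_for(dimension, groups=CRITERIA_GROUPS):
    """Return the criteria groups classified under a given quality dimension."""
    return sorted(g for g, d in groups.items() if d == dimension)

print(groups_for("internal quality"))  # → ['Architecture']
```

A mapping like this makes it easy to route each group of criteria to the right reviewers: engineers for the 'internal quality' groups, user-facing evaluators for the rest.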
2.2.2. The Catalyst IT Technical Evaluation Tool for Open Source Repositories

The second LOR quality evaluation tool presented here is the tool developed by Catalyst IT while evaluating the DSpace (2009), EPrints (2009) and Fedora (2009) LORs in the “Technical Evaluation of Selected Open Source Repository Solutions” (2006).

As the overview in Table 2 shows, there is no division of the criteria into 'internal quality' and 'quality in use' criteria in this tool. We can notice that the 'Scalability', 'Security', 'Interoperability', and 'Ease of deployment' criteria are mainly engineering criteria, and could therefore be considered 'internal quality' criteria. All other criteria in Table 2 are mainly user-related and could be interpreted as 'quality in use' criteria.

2.2.3. The OMII Evaluation Criteria of Software Repository

The next tool presented here is the “Software Repository - Evaluation Criteria and Dissemination” prepared by S. Newhouse of the Open Middleware Infrastructure Institute (OMII). Newhouse (2005) specified three critical phases of the software repository process:

1. Information that must be captured when a product is created within the repository and a specific release is submitted to the repository.
2. Assessment criteria that should be used to review the software contribution.
3. How product and release information, coupled with the evaluation results, is presented within the LOR.

The tool combines three types of criteria: documentation, technical and management (see Table 3). Although, as in the other tools described above, there is no clear division of the criteria into 'internal quality' and 'quality in use' criteria, we can notice that the 'Technical' group's sub-criteria are mainly engineering criteria, and could therefore be analyzed as 'internal quality' criteria. The 'Documentation' and 'Management' criteria are mainly user-related, and could be considered 'quality in use' criteria.
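Newhouse's three phases can be sketched as a minimal data model: phase 1 captures product and release information at submission, phase 2 attaches assessment results, and phase 3 presents both together. All class and field names below are illustrative assumptions, not taken from the OMII documents.

```python
from dataclasses import dataclass, field

@dataclass
class Release:
    version: str
    submitted_info: dict                              # phase 1: captured at submission
    assessments: dict = field(default_factory=dict)   # phase 2: criterion -> score

@dataclass
class Product:
    name: str
    releases: list = field(default_factory=list)

    def present(self):
        """Phase 3: product and release information coupled with the results."""
        return {
            "product": self.name,
            "releases": [
                {"version": r.version, **r.submitted_info, "assessments": r.assessments}
                for r in self.releases
            ],
        }

p = Product("example-repo")  # hypothetical product name
p.releases.append(Release("1.0", {"license": "BSD"},
                          {"documentation": 4, "technical": 5, "management": 3}))
print(p.present())
```

Keeping the three phases as separate fields mirrors the tool's structure: the documentation, technical and management scores of Table 3 would populate the `assessments` mapping of each release.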