such measurements analysts should be able to determine whether the project is
reasonably sound, financially speaking, for a given schedule. Data from the past
projects used to build the baseline should be complete and thorough, not rough
estimates. The projects should also be similar to one another, so that the
metrics have some relevance. A final characteristic of a good baseline is
that it is composed of as many data sets as possible. Using these techniques it is
possible to gauge a project's progress accurately and cost-effectively.
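The baseline comparison described above can be sketched in a few lines. This is a minimal illustration, not a prescribed procedure, and every figure in it is hypothetical: a cost-per-feature metric is collected from several similar past projects, and the current project is flagged when it falls far outside that historical range.

```python
from statistics import mean, stdev

# Hypothetical cost per delivered feature (in $1000s) from several
# similar, completed projects -- the baseline data sets.
baseline = [12.0, 14.5, 13.2, 11.8, 15.1]

current = 19.0  # cost per feature of the project under review (hypothetical)

avg = mean(baseline)
spread = stdev(baseline)

# Flag the project if its cost lies more than two standard deviations
# above the historical average.
if current > avg + 2 * spread:
    print("warning: project cost is well outside the baseline")
else:
    print("project cost is within the historical range")
```

The more past projects the baseline contains, the more stable the mean and standard deviation become, which is why a good baseline is composed of as many data sets as possible.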
12.1.6 Attributes of a Good Metric
Metrics, like the software they evaluate, must themselves be evaluated; such
meta-metrics describe the quality of a metric. Evaluating a metric is at times a
subjective process, because metrics are often analyzed subjectively. The quality
of a metric can be judged in five categories (Conte et al. 1986):
• Simplicity: A simple metric provides information in a manner that is easily
comprehended by anyone reading it, such as the number of faults in a program or
the time taken to develop a component. A metric that is not simple is, for
example, a complex derivative-based analysis of complexity. In a nutshell,
simple means easy to understand.
• Validity: For a metric to be valid, it should measure what it claims to
measure. For instance, counting lines of code can be a measure of program
size, but using it as a measure of a programmer's amount of work would be
misleading, as lines of code vary with programming style. Furthermore,
developing a sound and correct algorithm is more difficult than coding the
solution.
• Robustness: A robust measurement is not easily tampered with, whether the
tampering is merely a side effect of some modification to the project, or is
deliberate, what could be coined creative analysis: the modification of
superficial components in order to elicit the desired results.
• Prescriptiveness: Is the test in the right place at the right time? Just as
you do not see signs past a sharp curve on the highway informing you that you
just passed through the curve successfully, software metrics should be
deployed at the correct time. If algorithms are analyzed only after they have
been coded and integrated, it may be difficult to repair them, but if they are
tested before integration they will be repaired more easily. Measures must be
applied before they become obsolete, and their results must be presented to
all stakeholders.
• Analyzability: Can statistics be applied to the metric? Normally, a metric
that can be properly statistically analyzed will be less prone to subjectivity.
Measurements should be just that, measurements: depth of nesting, number of
global constructs, count of classes, number of memory leaks. All of those
previously listed can be