We express this metric as M defects per N requirements. Let me illustrate
this with an example.
Assume that there are 100 requirements in the traceability matrix and that 5
defects uncovered across all quality control activities are attributable to
requirements engineering. We express this as: the DIR is one defect for every 20
requirements.
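The computation above can be sketched in a few lines of Python; the function name and return format are illustrative, not prescribed by the text:

```python
def defect_injection_rate(defects, requirements):
    """Express DIR as "one defect for every N requirements".

    `defects` are those attributable to requirements engineering,
    uncovered across all quality control activities.
    """
    if defects == 0:
        return "0 defects"
    ratio = requirements / defects
    return f"one defect for every {ratio:g} requirements"

# Example from the text: 100 requirements, 5 attributable defects.
print(defect_injection_rate(5, 100))  # one defect for every 20 requirements
```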
Now what does this mean to us? No metric is meaningful without a comparable
benchmark. While we do not have an industry benchmark for DIR, we do have an
ideal benchmark for delivered defects, and that is 3 defects per one million
opportunities. This is the ideal situation, and we refer to an organization that
achieves this level of quality as a ''Six Sigma'' quality organization. The
level of quality is referred to as five sigma if the delivered defects are 3 per
one hundred thousand opportunities, and four sigma if there are 3 delivered defects
per ten thousand opportunities. Most professional organizations that have
implemented processes to drive the organization would be between four sigma and
five sigma levels at a minimum.
The philosophy of quality is to aim for zero defects. We are now in the era of
the total quality management philosophy, which states that it is better to prevent
an error than to spend effort uncovering and fixing it. Therefore, the DIR must
be as close to the sigma level of the organization as possible. However,
recognizing that there will always be some defects left in an artifact by its
author, the industry accepts a variance of up to 20 %. That is, if we are at the
four sigma level, then the DIR can be 3.6 defects per ten thousand opportunities,
or 36 defects per one hundred thousand opportunities, or 360 defects per one
million opportunities.
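The allowed threshold per sigma level can be worked out mechanically. A minimal sketch, using the sigma figures as stated in this text (3 defects per million, hundred thousand, or ten thousand opportunities) and the 20 % industry variance:

```python
# Sigma levels as (defects, opportunities), per the figures in the text.
SIGMA_DEFECTS = {
    6: (3, 1_000_000),  # six sigma: 3 defects per one million opportunities
    5: (3, 100_000),    # five sigma: 3 per one hundred thousand
    4: (3, 10_000),     # four sigma: 3 per ten thousand
}

def max_acceptable_dpmo(sigma_level, variance=0.20):
    """Maximum acceptable defects per million opportunities,
    after allowing the industry variance of up to 20 %."""
    defects, opportunities = SIGMA_DEFECTS[sigma_level]
    dpmo = defects * 1_000_000 / opportunities
    return dpmo * (1 + variance)

# At four sigma: 300 DPMO nominal, 360 DPMO with the 20 % variance,
# matching the text's 3.6 defects per ten thousand opportunities.
print(max_acceptable_dpmo(4))
```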
We can compute this metric only after the project is completed, because defects
attributable to the requirements stage may be uncovered during design, coding,
or testing, in addition to requirements engineering itself.
10.3.3.2 Delivered Defect Density
We compute this metric for the overall project. We get this data only after the
software product is put into production and is being used by the end users. At
that point, the defect reports come from the end users, and it would normally be
difficult to trace the origin of a defect. Another aspect is that unless all the
accepted requirements of the end users are met, the product would not be accepted.
Therefore, once the product is in production, we would not expect defect
reports whose origin lies in the requirements engineering stage of software
development. For that reason, I am not discussing this metric in this topic. This
metric is more relevant to the software design and construction activities of
software development than to requirements engineering.