whereas in CTQ flow-up, data on the various X's, obtained via simulation or empirical methods, are used to predict the final performance of Y.
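Because a closed-form transfer function from the X's to Y is rarely available for a software design, this prediction is often done by simulation. The following is a minimal Monte Carlo sketch of CTQ flow-up; the transfer function, the input distributions, and the specification limit are hypothetical placeholders, not values from the text:

# Minimal sketch of CTQ flow-up via Monte Carlo simulation.
# The transfer function and the distributions of the X's are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Assumed distributions of the input X's.
x1 = rng.normal(loc=10.0, scale=0.5, size=n)   # e.g., message arrival rate
x2 = rng.normal(loc=2.0, scale=0.1, size=n)    # e.g., per-message processing time

# Hypothetical transfer function relating the X's to the CTQ output Y.
y = x1 * x2 + 0.5 * x2**2

# Predicted distribution of Y, compared against an assumed upper spec limit.
print(f"mean(Y) = {y.mean():.3f}, std(Y) = {y.std():.3f}")
usl = 25.0
print(f"P(Y > USL) = {(y > usl).mean():.4f}")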
Predicting design behavior also brings to the fore another critical component of the DFSS methodology: understanding process variation, part variation, and measurement variation. For instance, a change in the value of a factor (X1) may affect the outputs of interest (Y1 and Y2) in opposite ways. How do we study the effect of these interactions in a software design? Main effects plots and interaction plots, available through Minitab (Minitab Inc., State College, PA), the most widely used Six Sigma analysis tool, are often used to study the nature of such interactions.
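Minitab produces these plots through its graphical interface; for readers working in code, the sketch below builds an equivalent interaction plot with pandas and matplotlib. The factor names, levels, and response values are hypothetical:

# Minimal sketch of an interaction plot for two two-level factors.
import pandas as pd
import matplotlib.pyplot as plt

# Illustrative measurements of response Y1 at the levels of factors X1 and X2.
data = pd.DataFrame({
    "X1": ["low", "low", "high", "high", "low", "low", "high", "high"],
    "X2": ["low", "high", "low", "high", "low", "high", "low", "high"],
    "Y1": [12.1, 14.8, 13.0, 19.5, 11.9, 15.2, 12.7, 19.9],
})

# Mean of Y1 for each X1 level, traced separately per X2 level.
means = data.groupby(["X1", "X2"])["Y1"].mean().unstack("X2")
means.plot(marker="o", title="Interaction of X1 and X2 on Y1")
plt.ylabel("mean Y1")
plt.show()

# Non-parallel lines in the resulting plot suggest that X1 and X2 interact.

A main effects plot is obtained the same way by averaging the response over each level of a single factor.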
FMEA is often carried out during this phase to identify potential failure modes of the design and to plan how to overcome them. FMEA involves computing a risk priority number (RPN) for every cause that is a source of variation in the process. For each cause, severity and occurrence are rated on a scale of 1 to 10, with 1 being the best and 10 the worst. The detection aspect of each cause is also rated on a scale of 1 to 10, but here a rating of 10 is most desirable, whereas 1 is least desirable.
Severity —How significant is the impact of the cause on the output?
Occurrence —How likely is it that the cause of the failure mode will occur?
Detection —How likely is it that the current design will be able to detect the cause or mode of failure should it occur?
Risk Priority Number = Severity × Occurrence × Detection
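A minimal sketch of this computation for a small FMEA worksheet follows; the cause names and ratings are illustrative only and do not come from the text:

# Compute the RPN for each cause as Severity x Occurrence x Detection.
causes = [
    # (cause, severity, occurrence, detection), each rated 1..10 as described above
    ("unvalidated input",        8, 5, 3),
    ("race condition on queue",  9, 3, 2),
    ("stale configuration data", 6, 6, 7),
]

for name, severity, occurrence, detection in causes:
    rpn = severity * occurrence * detection   # Risk Priority Number
    print(f"{name:<26} RPN = {rpn}")

Causes with the highest RPN values are the first candidates for corrective action.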
If data from an earlier design are available, regression is a possible option; when past data are not available, design of experiments (DOE), inputs from domain experts, factorial designs, simulation, or a combination of these is often adopted.
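As a sketch of the regression option, the following fits an ordinary least-squares model of Y on two X's using numpy; the historical observations and the new design point are illustrative assumptions:

# Minimal sketch: regress Y on X1, X2 from an earlier design, then predict.
import numpy as np

# Historical observations from a previous design (columns X1, X2) and response y.
X = np.array([[1.0, 2.0], [2.0, 1.5], [3.0, 3.0], [4.0, 2.5], [5.0, 4.0]])
y = np.array([5.1, 6.0, 9.2, 10.1, 13.8])

# Add an intercept column and fit y ~ b0 + b1*X1 + b2*X2 by least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept, b1, b2 =", coef)

# Predict Y for a proposed new design point (X1 = 3.5, X2 = 3.0).
new_point = np.array([1.0, 3.5, 3.0])
print("predicted Y =", new_point @ coef)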
Businesses could also use techniques such as the ATAM (Kazman et al., 2000), which places emphasis on performance, modifiability, and availability characteristics, to determine the viability of a software design from an architectural standpoint. The ATAM offers a structured framework for evaluating designs with a view to determining design tradeoffs, and it makes for interesting study.
Each quality attribute characterization is divided into three categories: external stimuli, architectural decisions, and responses. External stimuli (or just stimuli for short) are the events that cause the architecture to respond or change. To analyze an architecture for adherence to quality requirements, those requirements need to be expressed in terms that are concrete and measurable or observable. These measurable/observable quantities are described in the responses section of the attribute characterization. Architectural decisions are those aspects of an architecture (components, connectors, and their properties) that have a direct impact on achieving attribute responses. For example, the external stimuli for performance are events such as messages, interrupts, or user keystrokes that result in computation being initiated. Performance architectural decisions include processor and network arbitration mechanisms; concurrency structures, including processes, threads, and processors; and properties, including process priorities and execution times. Responses are characterized by measurable quantities such as latency and throughput. For modifiability, the external stimuli are change requests to the system's software.