Histograms 19 are graphs of a distribution of data designed to show the centering,
dispersion (spread), and shape (relative frequency) of the data. Histograms can pro-
vide a visual display of large amounts of data that are difficult to understand in a
tabular, or spreadsheet, form. They are used to understand how the output of a process
relates to customer expectations (targets and specifications) and to help answer the
question: “Is the process capable of meeting customer requirements?” In other words,
how does the voice of the process (VOP) measure up to the voice of the customer (VOC)?
Histograms are used to plot the density of data and often for density estimation:
estimating the probability density function of the underlying variable. The total area
of a density-normalized histogram equals 1. If, in addition, the interval lengths on
the x-axis are all 1, such a histogram is identical to a relative frequency plot. An alternative to the
histogram is kernel density estimation, which uses a kernel to smooth samples.
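As a rough illustration, the short Python sketch below (using NumPy, SciPy, and Matplotlib, with synthetic data and an arbitrary bin count, all assumptions for the example) plots a density-normalized histogram whose bars integrate to 1 and overlays a Gaussian kernel density estimate:

import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=1)
samples = rng.normal(loc=50.0, scale=5.0, size=1000)  # assumed process output data

# density=True scales bar heights so that the total area of the histogram equals 1
counts, bin_edges, _ = plt.hist(samples, bins=20, density=True, alpha=0.5,
                                label="density histogram")

# Kernel density estimation: smooth the samples with a Gaussian kernel
kde = stats.gaussian_kde(samples)
x = np.linspace(samples.min(), samples.max(), 200)
plt.plot(x, kde(x), label="kernel density estimate")

plt.xlabel("measured value")
plt.ylabel("density")
plt.legend()
plt.show()

# Sanity check: the bars of the density histogram integrate to ~1
print(np.sum(counts * np.diff(bin_edges)))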
The DFSS scorecard (El-Haik & Yang, 2003) is the repository for all managed CTQ
information. At the top level, the scorecard predicts the defect level for each CTQ.
The input sheets record the process capability for each key input. The scorecard
calculates short-term Z scores and long-term DPMO (see Chapter 7). When scorecards
are layered, they become a systems integration tool for the project team and manager.
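The scorecard arithmetic itself is simple. The following sketch (with a hypothetical CTQ, spec limit, mean, and standard deviation, and assuming the conventional 1.5-sigma long-term shift discussed in Chapter 7) shows one way the short-term Z score and the corresponding long-term DPMO might be computed:

from scipy.stats import norm

def short_term_z(mean, sigma_st, lsl=None, usl=None):
    """Short-term Z: distance from the mean to the nearest spec limit, in sigmas."""
    z_vals = []
    if usl is not None:
        z_vals.append((usl - mean) / sigma_st)
    if lsl is not None:
        z_vals.append((mean - lsl) / sigma_st)
    return min(z_vals)

def long_term_dpmo(z_st, shift=1.5):
    """Long-term DPMO, applying the conventional 1.5-sigma mean shift."""
    z_lt = z_st - shift
    return norm.sf(z_lt) * 1e6  # defects per million opportunities

# Hypothetical CTQ: response time with USL = 200 ms, mean 170 ms, sigma 10 ms
z_st = short_term_z(mean=170.0, sigma_st=10.0, usl=200.0)
print(f"Z_st = {z_st:.2f}, long-term DPMO = {long_term_dpmo(z_st):.0f}")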
If a model can be created to predict the performance of the team's design with respect
to a critical requirement, and if this model can be computed relatively quickly, then
powerful statistical analyses become available that allow the software DFSS team to
reap the full benefits of DFSS. They can predict the probability of the software design
meeting the requirement given environmental variation and usage variation using
statistical analysis techniques (see Chapter 6). If this probability is not sufficiently
large, then the team can determine the maximum allowable variation on the model
inputs to achieve the desired output probability using statistical allocation techniques.
And if the input variation cannot be controlled, they can explore new input parameter
values that may improve their design's statistical performance with respect to multiple
requirements simultaneously using optimization techniques (see Chapters 17 and 18).
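As an illustration of this kind of analysis, the sketch below uses a hypothetical, fast-to-evaluate transfer function and assumed input distributions to estimate, by Monte Carlo sampling, the probability that a design meets a response-time requirement under environmental and usage variation:

import numpy as np

def response_time_model(cpu_load, request_rate):
    """Hypothetical, fast-to-evaluate model of a CTQ (response time in ms)."""
    return 80.0 + 0.9 * cpu_load + 0.4 * request_rate

rng = np.random.default_rng(seed=7)
n = 100_000
cpu_load = rng.normal(60.0, 8.0, n)        # assumed environmental variation (%)
request_rate = rng.normal(120.0, 25.0, n)  # assumed usage variation (requests/s)

y = response_time_model(cpu_load, request_rate)
usl = 200.0  # requirement: response time must stay below 200 ms
prob_meeting = np.mean(y < usl)
print(f"Estimated probability of meeting the requirement: {prob_meeting:.4f}")

# If this probability is too low, the same model can drive allocation
# (tightening input standard deviations) or optimization (shifting nominal values).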
Risk is a natural part of the business landscape, and the software industry is no
different. Left unmanaged, uncertainty can spread like weeds; managed
effectively, losses can be avoided and benefits obtained. Too often, software
risk (risk related to the use of software) is overlooked. Other business risks, such as
market risk, credit risk, and operational risk have long been incorporated into
corporate decision-making processes. Risk Management 20 is a methodology based
on a set of guiding principles for effective management of software risk.
Failure Mode and Effect Analysis (FMEA) 21 is a proactive tool, technique, and
quality method that enables the identification and prevention of process or software
product errors before they occur. As a tool embedded within DFSS methodology,
FMEA can help identify and eliminate concerns early in the development of a process
or new service delivery. It is a systematic way to examine a process prospectively
for possible ways in which failure can occur, and then to redesign the product so
that those failure modes are eliminated or their impact is reduced.
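One common way to rank such failure modes, sketched below with hypothetical failure modes and the usual 1-to-10 rating scales, is the Risk Priority Number (RPN), the product of the severity, occurrence, and detection ratings:

from dataclasses import dataclass

@dataclass
class FailureMode:
    description: str
    severity: int    # 1 (no effect) .. 10 (catastrophic)
    occurrence: int  # 1 (rare) .. 10 (almost certain)
    detection: int   # 1 (always detected) .. 10 (never detected)

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("Unhandled null input crashes the service", 8, 4, 3),
    FailureMode("Slow database query under peak load", 6, 7, 5),
    FailureMode("Silent loss of audit log entries", 9, 3, 8),
]

# Address the highest-RPN failure modes first, then redesign to reduce them
for m in sorted(modes, key=lambda fm: fm.rpn, reverse=True):
    print(f"RPN={m.rpn:4d}  {m.description}")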
19 See Chapter 5.
20 See Chapter 15.
21 See Chapter 16.