SIX SIGMA AND BEYOND
Design for Six Sigma

SIX SIGMA AND BEYOND
A series by D.H. Stamatis
Volume I
Foundations of Excellent Performance
Volume II
Problem Solving and Basic Mathematics
Volume III
Statistics and Probability
Volume IV
Statistical Process Control
Volume V
Design of Experiments
Volume VI
Design for Six Sigma
Volume VII
The Implementation Process
D. H. Stamatis
SIX SIGMA AND BEYOND
Design for Six Sigma
This book contains information obtained from authentic and highly regarded sources. Reprinted material
is quoted with permission, and sources are indicated. A wide variety of references are listed. Reasonable
efforts have been made to publish reliable data and information, but the author and the publisher cannot
assume responsibility for the validity of all materials or for the consequences of their use.
Neither this book nor any part may be reproduced or transmitted in any form or by any means, electronic
or mechanical, including photocopying, microfilming, and recording, or by any information storage or
retrieval system, without prior permission in writing from the publisher.
The consent of CRC Press LLC does not extend to copying for general distribution, for promotion, for
creating new works, or for resale. Specific permission must be obtained in writing from CRC Press LLC
for such copying.
Direct all inquiries to CRC Press LLC, 2000 N.W. Corporate Blvd., Boca Raton, Florida 33431.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are
used only for identification and explanation, without intent to infringe.
To
Christine
Preface
A collage of historical facts brings us to the realization that concerns about quality
are present not only in the minds of top management when things go wrong but also
in the minds of customers when they buy something and it does not work.
We begin the collage 20 years ago, with Wayne’s (1982) proclamation in the
New York Times of “management gospel gone wrong.” Wayne quoted two Harvard
professors, Hayes and Abernathy, as saying, “You may have your eye on the wrong
ball.” In a discussion of the cost differential between American and Japanese com-
panies, Wayne said that American business executives argue that the Japanese advan-
tage is largely rooted in factors unique to Japan: lower labor costs, more automated
and newer factories, strong government support, and a homogeneous culture.
The professors, though, argue differently, Wayne said. They claim that Japanese
businesses are better because they pay attention to such basics as a clean workplace,
preventive maintenance for machinery, a desire to make their production process
error free, and an attitude that “thinks quality.”
Other authors writing in the early 1980s made similar points. Blotnick (1982)
wrote, “If it’s American, it must be bad.” The headline of an anonymous article in
The Sentinel Star (1982) referred to “retailers relearning lesson of customer’s always
right.” Ohmae (1982) wrote an article titled “Quality control circles: They work and
don’t work.” Imai (1982) wrote that unless organizations control (eliminate) their
waste, they would have problems. He identified waste as:
Holusha (1983) wrote of the “U.S. striving for efficiency.” Serrin (1983) described
a study that showed that the work ethic is “alive but neglected.” Halloran (1983) wrote
that an “army staff chief faults industry as producing defective materials.”
Almost twenty years later, Zachary (2001) reported that Toyota strives to retain
its benchmark status by continuing its focus on the Kaizen approach and genchi
genbutsu (a “go and see” attitude). Winter (2001) wrote that GM is “now trying to show
it understands importance of product.” McElroy (2001) wrote, “Customers don’t
care how well your stock is performing. They do not care you are the lowest producer.
They do not care you are the fastest to market. All they care about is the car they
are buying. That is why it all comes down to product.”
Morais (2001), quoting O’Connell (2000), claimed that over 100,000 focus
groups were fielded in 1999, even though marketing and advertising professionals
have mixed feelings about their value. Steel (1998, pp. 79 and 202–205) expressed
industry’s ambivalence about focus groups. Among other things, he claimed that
they are not very representative at all. The odd thing about focus groups is that we
still use them to predict the sales potential of new products, primarily because of
their instant judgments and comparatively low costs, even though their conclusions
are not projectable and even though we know better — that is, we know that we could
do better by learning about consumers’ product needs and attitudes and understanding their lives.
In the automotive industry, the evidence that something is wrong is abundantly
clear, as Mayne et al. (2001) have reported. Here are some key points:
beyond. The only requirement is that we must take advantage of the future before
we are ready for it. I am reminded of Flint’s (2001), Visnic’s (2001), and Mayne’s
(2001) comments, respectively. American automotive companies, for example, have
abandoned the car market because they do not make money on cars. They forget
that the Japanese companies not only sell cars but make money from them. So what
does Detroit do to sell? It focuses on price — rebates, discounts, 0% finance, lease
subsidies, and so on. What does the competition do? Not only have they developed
an engine/transmission with variable valve breathing, they are already using it. We
are trying to perfect the five-speed, and the competition is installing six speeds; we
talk about CVTs, and Audi is putting one in its new A4. We are focusing on 10
years and 150,000 miles reliability, and our competitors are pushing for 15 years
and 200,000 miles reliability.
In diesel technology, the Europeans and Americans are worlds apart. Even in
this age of globalization, the light duty diesel markets in Europe have become more
sophisticated and demanding to the point where policy makers have recognized the
environmental advantages of diesel and have allowed new diesel vehicles to prove
themselves as efficient, quiet, and powerful alternatives. What do we do? Our policy
makers have created a regulatory structure that greatly impedes the widespread use
of diesel vehicles. Consequently, Americans may be denied the performance, fuel
economy, and environmental benefits of advanced diesel technology.
A third example comes again from the automotive world in reference to fuel
economy. One of the issues in fuel economy is the underbody design. Early on,
American companies paid great attention to the design of the underbody. As time
went on, the emphasis shifted to shapes that channel airflow over the bodywork,
instead of what lies beneath. But while U.S. automakers were accustomed to being
on top, BMW AG was redefining airflow from the ground up. Underbodies have
been a priority with the Munich-based automaker since 1980. That is when BMW
acquired its first wind tunnel and began development of the 1986 7-series — code
named E32. Today, underbodies rank second behind rear ends, wheel housing and
cooling airflow. As of right now, the initiative for BMW has gained them 2 miles
per hour.
When we talk about customer satisfaction, we must do certain things that will
help improve the image of the organization in the perception of the customer. We
are talking about prestige and reputation. Prestige and reputation differ from each
other in three ways:
It is prestige that we are interested in from a Design for Six Sigma perspective.
The reason for this is that prestige compels each organization to perform better than
its competitors, thereby promoting excellence and continuously raising industry
standards not only for the customer but also for the competitors. To achieve prestige,
we must be cognizant of some basic yet inherent items, including the following:
• Treat your customers as close friends. You need to keep in touch. You need to be honest. You need
to tell people they matter to you. To facilitate this “special attitude,” an
organization may have special days for the customer, special product
anniversaries and so on. However, in every special situation, representa-
tives of the organization should be identifying new functionalities for new
or pending products and shortcomings of the current products.
• Never try to “understand” your customer. (This is not a contradiction
of the above points. Rather, it emphasizes the notion of change in expec-
tations.) Customers are fickle. They change. As a consequence, the orga-
nization must be vigilant in tracking the changes, the wants, and the
expectations of customers. To make sure that customers are being satisfied
and that they will continue to be loyal to your products and services, make
sure you have a system that allows you to listen, listen, and then listen
some more to what they have to say.
• Shrink the globe. The world is shrinking. It has become commonplace
to discuss the information revolution in terms of the creation of “global
markets.” To “think global” is in vogue with the majority of large corpo-
rations. But global thinking presupposes that we also understand the
“global customer.” Do we really understand? Or do we merely think we
do? How do we treat all our customers as though they live right next
door? One way, of course, is through a combination of modern commu-
nication technology and old-fashioned neighborliness. You need good,
solid, two-way conversation with someone half a globe away that is as
immediate, as powerful, and as intimate as a conversation with someone
right in front of you. This obviously is difficult and demanding, and in
the following chapters we are going to establish a flow of disciplines that
perhaps can help us in formulating those “global customers” with their
specific needs, wants, and expectations.
• Design for customer satisfaction and loyalty. Some time ago I heard a
saying that is quite appropriate here. The saying goes something like
“everything is in a state of flux, including the status quo.” I happen to
agree. Never in human history has so much change affected so many
people in so many ways.
The winds of change keep building, blowing harder than ever, hitting more
people, reshaping all kinds of organizations. Incredible as it may sound, all these
changes are happening even in organizations that think that they have understood
the customer and the market. To their surprise, they have not. How else can we
explain some of the latest statistics that tell us the following:
1. Business failures topped 400,000 in the first half of the 1990s and exceeded
500,000 by the end of the decade. That is double the number of the
previous decade. The same trend is projected for the first decade of the
new century.
What can be done to reverse this trend? Well, some will ride the wind based only
on their size, some will not make it, and some will make it. The ones that will make it
must learn to operate under different conditions — conditions that delight the customer
with the service or the product that an organization is offering. The organization must
learn to design services or products so that the customer sees value in them and
cannot wait to take possession of one. As the customer’s desire for the service or
product increases, the demand for quality will increase.
Designing for six sigma is not a small thing, nor should it be a lighthearted
undertaking. It is a very difficult road to follow, but the results are worthwhile.
The structure of this volume is straightforward and follows the pattern of the
model of DFSS, which is Recognize, Define, Characterize, Optimize, and Verify
(RDCOV). Specifically, with each stage of the model, we will explain some of the
most important tools and methodologies.
Our introduction is the stage where we address the basic and fundamental
characteristics of any DFSS program. It is our version of the Recognize step.
Specifically, we address:
1. Partnering
2. Robust teams
3. Systems engineering
4. Advanced quality planning
We follow with the Define stage, where we discuss customer concerns by first
explaining the notion of “function” and then continuing with three very important
methodologies in the pursuit of satisfying the customer. Those methodologies are:
1. Kano model
2. Quality function deployment (QFD)
3. Conjoint analysis
1. Monte Carlo
2. Finite element analysis
3. Excel’s solver
4. Failure mode and effect analysis (FMEA)
5. Reliability and R&M
6. DOE
7. Parameter design
8. Tolerance design
1. Theory of constraints
2. Design review
3. Trade-off analysis
4. Cost of quality
5. Reengineering
6. GD&T
7. Metrology
1. Define
2. Characterize
3. Optimize
4. Verify
REFERENCES
Anon., Retailers Relearning Lesson of Customer’s Always Right, The Sentinel Star, Jan. 17,
1982, p. 4.
Blotnick, S., If It’s American, It Must Be Bad, Forbes, Feb. 1, 1982, p. 146.
Flint, J., Where’s the Cars? You Can Make Money on Cars If You Really Want To, Ward’s
AUTOWORLD, Sept. 2001, p. 21.
Halloran, R., Chief of Army Assails Industry on Arms Flaw, The New York Times, Aug. 9,
1983, p. 1.
Holusha, J., Why G.M. Needs Toyota: U.S. Striving for Efficiency, The New York Times, Feb.
16, 1983, p. 1 (of business section).
Imai, M., From Taylor to Ford to Toyota: Kanban System — Another Challenge from Japan,
The Japan Economic Journal, Mar. 30, 1982, p. 12.
Lewin, T., Japanese Bosses Ponder Mysterious U.S. Workers, The New York Times, Nov. 7,
1982, p. 2 (of business section).
Lohr, S., Japan’s Hard Look at Software, The New York Times, Jan. 9, 1983, p. 3 (of business
section).
Mayne, E., Bottoms Up! Fuel Economy Pressure Underscores Underbody Debate. Ward’s
AUTOWORLD, Sept. 2001, p. 58.
Mayne, E. et al., Quality Crunch, Ward’s AUTOWORLD, July 2001, p. 14.
McElroy, J., Rendezvous Captures Consumer Interest, Ward’s AUTOWORLD, Jan. 2001, p. 12.
Morais, R., The End of Focus Groups, Quirk’s Marketing Research Review, May 2001, p. 154.
O’Connell, V., advertising column, Wall Street Journal, Nov. 27, 2000, p. B21.
Ohmae, K., Quality Control Circles: They Work and Don’t Work, The Wall Street Journal,
Mar. 29, 1982, p. 2.
Serrin, W., Study Says Work Ethic Is Alive But Neglected, The New York Times, Sept. 5,
1983, p. 4.
Steel, J., Truth, Lies and Advertising, Wiley, New York, 1998.
Visnic, B., Super Diesel! Anyone in the Industry Will Tell You: Forget Hybrids; Diesels Are
Our One Stop Cure All, Ward’s AUTOWORLD, Sept. 2001, p. 34.
Wayne, L., Management Gospel Gone Wrong, The New York Times, May 30, 1982, p. 1 (of
business section).
Wight, O.W., Learning To Tell the Truth, Purchasing, May 13, 1982, p. 5.
Winter, D., One Last Speed, Ward’s AUTOWORLD, July 2001, p. 9.
Zachary, K., Toyota Strives To Retain Its Benchmark Status, Supplement to Ward’s AUTO-
WORLD, Aug. 6–10, 2001, p. 11.
Acknowledgments
I want to thank Dr. A. Stuart for granting me permission to use some of the material
in Chapter 14. The summaries of the different distributions and reliability have added
much to the volume. I am really indebted for his contribution.
As with the other volumes in this series, many people have helped in many ways
to make this book a reality. I am afraid that I will miss some, even though their help
was invaluable.
Dr. H. Hatzis, Dr. E. Panos, and Dr. E. Kelly have been indispensable in review-
ing and commenting freely on previous drafts and throughout this project.
I would like to thank Dr. L. Lamberson for his thoughtful comments and sug-
gestions on reliability, G. Burke for his suggestions on R&M, and R. Kapur for his
valuable comments about the flow and content of the material.
I want to thank Ford Motor Company and especially Richard Rossier and David
Kelley for their efforts to obtain permission for using the introductory material on
“robust teams.”
I want to thank Prentice Hall for granting me permission to use the material on
conjoint and MANOVA analysis in Chapter 2. That material was taken from the
1998 book Multivariate Data Analysis, 5th ed., by J.F. Hair, R.E. Anderson, R.L.
Tatham, and W.C. Black.
I want to thank McGraw-Hill and D.R. Bothe for granting me permission to use
some material on six sigma taken from the 1997 book Measuring Process Capability,
by D.R. Bothe.
I want to thank J. Wiley and the Buffa Foundation for granting me permission
to use material on the Monte Carlo method from the 1973 book Modern Production
Management, 4th ed., by E.S. Buffa.
I want to thank the American Supplier Institute for granting me permission to
use the L8 interaction table as well as some of their OA and linear graphs.
I want to thank M.A. Anleitner, from Livonia Technical Services, for his con-
tribution to the topic of “function” in Chapter 2, for helping me articulate some of
the key points on APQP, and for serving as a sounding board on issues of value
analysis. Thanks, Mike.
I also want to thank J. Ondrus, from General Dynamics — Land System Divi-
sion, for introducing me to Value Analysis and serving as a reviewer for earlier drafts
on this topic.
I want to thank T. Panson, P. Rageas, and J. Golematis, all of them certified
public accountants, for their guidance and help in articulating the basics of account-
ing and financial concerns presented in Chapter 15. Of course, the ultimate respon-
sibility for interpreting their guidance is solely mine.
Special thanks go to the editors at CRC for putting up with me, as well as for
transforming my notes and the manuscript into a user-friendly product.
I want to thank the participants in my seminars for their comments and recom-
mendations. They actually piloted the material in their own organizations and saw
firsthand the results of some of the techniques and methodologies discussed in this
particular volume. Their comments were incorporated with much appreciation.
Finally, as always, this volume would not have been completed without the
support of my family and especially my navigator, chief editor, and supporter —
my wife, Carla.
List of Figures
Figure 2.1 Paper pencil assembly.
Figure 2.2 Function diagram for a mechanical pencil.
Figure 2.3 Ten symbols for process flow charting.
Figure 2.4 Process flow for complaint handling.
Figure 2.5 Kano model framework.
Figure 2.6 Basic quality depicted in the Kano model.
Figure 2.7 Performance quality depicted in the Kano model.
Figure 2.8 Excitement quality depicted in the Kano model.
Figure 2.9 Excitement quality depicted over time in the Kano model.
Figure 2.10 A typical House of Quality matrix.
Figure 2.11 The initial “what” of the customer.
Figure 2.12 The iterative process of “what” to “how.”
Figure 2.13 The relationship matrix.
Figure 2.14 The conversion of “how” to “how much.”
Figure 2.15 The flow of information in the process of developing the final “House
of Quality.”
Figure 2.16 Alternative method of calculating importance.
Figure 2.17 The development of QFD.
Figure 3.1 The benchmarking continuum process.
Figure 5.1 Trade-off relationships between program objectives (balance design).
Figure 5.2 Sequential approach.
Figure 5.3 Simultaneous approach.
Figure 5.4 Tomorrow’s approach … if not today’s.
Figure 5.5 The product development map/guide.
Figure 5.6 Manufacturing system schematic.
Figure 5.7 Approaches to mistake proofing.
Figure 5.8 Major inspection techniques.
Figure 5.9 Function of mistake-proofing devices.
Figure 6.1 Types of FMEA.
Figure 6.2 Payback effort.
Figure 6.3 Kano model.
Figure 6.4 A Pugh matrix — shaving with a razor.
Figure 6.5 Scope for DFMEA — braking system.
Figure 6.6 Scope for PFMEA — printed circuit board screen printing process.
Figure 6.7 Typical FMEA header.
Figure 6.8 Typical FMEA body.
Figure 6.9 Function tree process.
Figure 6.10 Example of ballpoint pen.
Figure 6.11 FMEA body.
List of Tables
Table I.1 Probability of a completely conforming product.
Table 1.1 Customer/supplier expanded partnering interface meetings.
Table 1.2 A typical questionnaire.
Table 1.3 A general questionnaire.
Table 2.1 Characteristic matrix for a machining process.
Table 2.2 Benefits of improved total development process.
Table 2.3 Stimuli descriptions and respondent rankings for conjoint analysis of
industrial cleanser.
Table 2.4 Average ranks and deviations for respondents 1 and 2.
Table 2.5 Estimated part-worths and factor importance for respondents 1 and 2.
Table 2.6 Predicted part-worth totals and comparison of actual and estimated
preference rankings.
Table 4.1 Simulated samples of 20 performance time values for operations A and B.
Table 4.2 Simulated operation of the two-station assembly line when operation
A precedes operation B.
Table 4.3 Simulated operation of the two-station assembly line when operation
B precedes operation A.
Table 5.1 Customer attributes for a car door.
Table 5.2 Relative importance of weights.
Table 5.3 Customer’s evaluation of competitive products.
Table 5.4 Examples of mistakes and defects.
Table 6.1 DFMEA — severity rating.
Table 6.2 PFMEA — severity rating.
Table 6.3 DFMEA — occurrence rating.
Table 6.4 PFMEA — occurrence rating.
Table 6.5 DFMEA detection table.
Table 6.6 PFMEA detection table.
Table 6.7 Special characteristics for both design and process.
Table 6.8 Manufacturing process control matrix.
Table 6.9 Machinery guidelines for severity, occurrence, and detection.
Table 7.1 Failure rates with median ranks.
Table 7.2 Median ranks.
Table 7.3 Five percent rank table.
Table 7.4 Ninety-five percent rank table.
Table 7.5 Department of Defense reliability and maintainability — standards and
data items.
Table 8.1 Activities in the first three phases of the R&M process.
Table 8.2 Cost comparison of two machines.
Table 8.3 Thermal calculation values.
Contents
Introduction Understanding the Six Sigma Philosophy.......................................1
A Static versus a Dynamic Process ..........................................................................1
Products with Multiple Characteristics .....................................................................2
Short- and Long-Term Six Sigma Capability ...........................................................4
Design for Six Sigma and the Six Sigma Philosophy..............................................5
Design Phase.........................................................................................................5
Internal Manufacturing .........................................................................................5
External Manufacturing ........................................................................................6
References..................................................................................................................7
References..............................................................................................................219
Selected Bibliography............................................................................................219
Requirements ....................................................................................................263
Discussion .........................................................................................................263
Forming the Appropriate Team....................................................................263
Describing the Function of the Design/Product..........................................264
Describing the Failure Mode Anticipated ...................................................264
Describing the Effect of the Failure ............................................................264
Describing the Cause of the Failure ............................................................264
Estimating the Frequency of Occurrence of Failure ...................................265
Estimating the Severity of the Failure.........................................................265
Identifying System and Design Controls ....................................................265
Estimating the Detection of the Failure ......................................................266
Calculating the Risk Priority Number .........................................................267
Recommending Corrective Action...............................................................267
Strategies for Lowering Risk: (System/Design) — High Severity
or Occurrence ..........................................................................................267
Strategies for Lowering Risk: (System/Design) — High Detection
Rating ......................................................................................................267
Process Failure Mode and Effects Analysis (FMEA)...........................................268
Objective ...........................................................................................................268
Timing ...............................................................................................................268
Requirements ....................................................................................................268
Discussion .........................................................................................................269
Forming the Team ........................................................................................269
Describing the Process Function .................................................................269
Manufacturing Process Functions...........................................................269
The PFMEA Function Questions............................................................270
Describing the Failure Mode Anticipated ...................................................270
Describing the Effect(s) of the Failure........................................................271
Describing the Cause(s) of the Failure........................................................272
Estimating the Frequency of Occurrence of Failure ...................................273
Estimating the Severity of the Failure.........................................................273
Identifying Manufacturing Process Controls...............................................273
Estimating the Detection of the Failure ......................................................274
Calculating the Risk Priority Number .........................................................275
Recommending Corrective Action...............................................................275
Strategies for Lowering Risk: (Manufacturing) — High Severity
or Occurrence ..........................................................................................275
Strategies for Lowering Risk: (Manufacturing) — High Detection
Rating ......................................................................................................276
Machinery FMEA (MFMEA) ...............................................................................277
Identify the Scope of the MFMEA ..................................................................277
Identify the Function ........................................................................................277
Failure Mode.....................................................................................................277
Potential Effects ................................................................................................278
Severity Rating..................................................................................................279
Classification .....................................................................................................279
Potential Causes................................................................................................279
Occurrence Ratings...........................................................................................282
Surrogate MFMEAs..........................................................................................282
Current Controls...........................................................................................282
Detection Rating ..........................................................................................282
Risk Priority Number (RPN)............................................................................282
Recommended Actions .....................................................................................283
Date, Responsible Party....................................................................................283
Actions Taken/Revised RPN.............................................................................283
Revised RPN.....................................................................................................284
Summary ................................................................................................................284
Selected Bibliography............................................................................................284
AST/PASS..............................................................................................................310
Purpose of AST.................................................................................................310
AST Pre-Test Requirements .............................................................................311
Objective and Benefits of AST.........................................................................311
Purpose of PASS...............................................................................................311
Objective and Benefits of PASS .......................................................................312
Characteristics of a Reliability Demonstration Test .............................................312
The Operating Characteristic Curve.................................................................313
Attributes Tests .................................................................................................313
Variables Tests ..................................................................................................314
Fixed-Sample Tests ...........................................................................................314
Sequential Tests ................................................................................................314
Reliability Demonstration Test Methods...............................................................314
Small Populations — Fixed-Sample Test
Using the Hypergeometric Distribution ...........................................................315
Large Population — Fixed-Sample Test
Using the Binomial Distribution ......................................................................315
Large Population — Fixed-Sample Test
Using the Poisson Distribution.........................................................................316
Success Testing ......................................................................................................316
Sequential Test Plan for the Binomial Distribution .........................................317
Graphical Solution ............................................................................................318
Variables Demonstration Tests ..............................................................................318
Failure-Truncated Test Plans — Fixed-Sample Test
Using the Exponential Distribution ..................................................................318
Time-Truncated Test Plans — Fixed-Sample Test
Using the Exponential Distribution ..................................................................319
Weibull and Normal Distributions....................................................................320
Sequential Test Plans .............................................................................................321
Exponential Distribution Sequential Test Plan.................................................321
Weibull and Normal Distributions....................................................................323
Interference (Tail) Testing ................................................................................323
Reliability Vision ..............................................................................................323
Reliability Block Diagrams ..............................................................................323
Weibull Distribution — Instructions for Plotting and Analyzing Failure
Data on a Weibull Probability Chart ................................................................325
Instructions for Plotting Failure and Suspended Items Data
on a Weibull Probability Chart.........................................................................331
Additional Notes on the Use of the Weibull....................................................334
Design of Experiments in Reliability Applications ..............................................335
Reliability Improvement through Parameter Design ............................................336
Department of Defense Reliability and Maintainability — Standards
and Data Items.......................................................................................................337
References..............................................................................................................342
Selected Bibliography............................................................................................343
Chapter 16 Closing Thoughts about Design for Six Sigma (DFSS) ...............715
Index ......................................................................................................................737
Introduction —
Understanding the
Six Sigma Philosophy
Much discussion in recent years has been devoted to the concept of “six sigma”
quality. The company most often associated with this philosophy is Motorola, Inc.,
whose definition of this principle is stated by Harry (1997, p. 3) as follows:
A product is said to have six sigma quality when it exhibits no more than 3.4 npmo
at the part and process step levels.
Confusion often exists about the relationship between six sigma and this defi-
nition of producing no more than 3.4 nonconformities per million opportunities.
From a typical normal distribution table, one may find that the area underneath the
normal curve beyond six sigma from the average is 1.248 × 10⁻⁹, or .001248 ppm,
which is about 1 part per billion. Considering both tails of the process distribution,
this would be a total of .002 ppm. This process has the potential capability of fitting
two six sigma spreads within the tolerance, or equivalently, having 12 σ equal the
tolerance.
However, the 3.4 ppm value corresponds to the area under the curve at a distance
of only 4.5 sigma from the process average. Why this apparent discrepancy? It is
due to the difference between a static and a dynamic process. (The reader is encour-
aged to review Volume I of this series.)
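These tail areas are easy to verify numerically. The short sketch below (Python, standard library only; the function name is mine, not from the text) computes the fraction outside a specification limit for a centered ±6 sigma process and for one whose average has drifted 1.5 sigma toward a limit; exact figures vary slightly with the normal table used.

```python
from math import erfc, sqrt

def upper_tail_ppm(z):
    """Area under the standard normal curve beyond z, in parts per million."""
    return 0.5 * erfc(z / sqrt(2)) * 1e6

# Centered (static) process: nonconformities beyond both +/-6 sigma limits
static_both_tails = 2 * upper_tail_ppm(6.0)   # about .002 ppm (roughly 1 part per billion per tail)

# Dynamic process: a 1.5 sigma drift leaves only 4.5 sigma to the nearer limit
shifted_one_tail = upper_tail_ppm(6.0 - 1.5)  # about 3.4 ppm

print(f"Centered +/-6 sigma: {static_both_tails:.5f} ppm nonconforming")
print(f"With a 1.5 sigma shift: {shifted_one_tail:.2f} ppm nonconforming")
```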
TABLE I.1
Probability of a Completely Conforming Product (Percent), with a 1.5σ Shift

Number of          Cp = 1.33    Cp = 2.00
Characteristics    (±4σ)        (±6σ)
  1                99.3790      99.99966
  2                98.7618      99.99932
  5                96.9333      99.9983
 10                93.9607      99.9966
 25                85.5787      99.9915
 50                73.2371      99.9830
100                53.6367      99.9660
150                39.2820      99.9490
250                21.0696      99.9150
500                 4.4393      99.8301
Table I.1 lists the probability of producing a completely conforming product for
various numbers of characteristics at two levels of potential capability. The
processes producing the features are assumed to be dynamic, with up to a
1.5-sigma shift in average possible.
Suppose a product has only one feature, which is produced on a process having
±4 sigma potential capability. We can then calculate that a maximum of .6210 percent
of these parts will be non-conforming under the dynamic model. Conversely, at least
99.3790 percent will be conforming, as is listed in the first line of Table I.1. If this
single characteristic is instead produced on a process with ±6 sigma potential capa-
bility, at most .00034 percent of the finished product will be out of specification,
with at least 99.99966 percent within specification.
If a product has two characteristics, the probability that both are within speci-
fication (assuming independence) is .993790 times .993790, or 98.7618 percent when
each is produced on a ±4 sigma process. If they are produced on a ±6 sigma process,
this probability increases to 99.99932 percent (.9999966 times .9999966). The
remainder of the table is computed in a similar manner.
When each characteristic is produced with ±4 sigma capability (and assuming
a maximum drift of 1.5 sigma), a product with 10 characteristics will average about
939 conforming parts out of every 1000 made, with the 61 nonconforming ones
having at least one characteristic out of specification. If all characteristics are man-
ufactured with ±6 sigma capability, it would be very unlikely to see even one
nonconforming part out of these 1000.
For a product having 50 characteristics, 268 out of 1000 parts will have at least
one nonconforming characteristic when each is produced with ±4 sigma capability.
If these 50 characteristics were manufactured with ±6 sigma capability, it would still
be improbable to see one nonconforming part. In fact, with ±6 sigma capability, a
product must have 150 characteristics before you would expect to find even one
nonconforming part out of 1000. Contrast this to the ±4 sigma capability level, where
60.7 percent of these parts would be rejected, and the rationale for adopting the six
sigma philosophy becomes quite evident.
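The entries in Table I.1 follow directly from this independence argument: raise the single-characteristic conforming fraction to the power of the number of characteristics. A minimal sketch of that computation (Python, standard library only; the function name is illustrative, not the author's):

```python
from math import erfc, sqrt

def percent_conforming(z_capability, n_characteristics, shift=1.5):
    """Percent of product with every characteristic in specification, assuming each
    characteristic comes from a process with +/-z_capability potential capability
    whose average may drift by `shift` sigma toward one limit."""
    p_nonconforming = 0.5 * erfc((z_capability - shift) / sqrt(2))
    return 100.0 * (1.0 - p_nonconforming) ** n_characteristics

# Reproduce the two columns of Table I.1
for n in (1, 2, 5, 10, 25, 50, 100, 150, 250, 500):
    print(f"{n:4d}  {percent_conforming(4, n):8.4f}  {percent_conforming(6, n):9.5f}")
```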
σ_LT = c σ_ST

where

c = 1/(1 – k)    and    k = (µ – M)/[(USL – LSL)/2]

If a process has a Cp of 2.00 and is centered at the middle of the tolerance, then
there is a distance of 6σ_ST from the average to the USL. When the process average
shifts up by 1.5σ_ST, it has moved off target by 25 percent of one-half the tolerance
(1.5/6.0 = .25). For this k factor of .25, c is calculated as 1.33.
The long-term standard deviation for this process would then be estimated from
σ_ST as:

σ̂_LT = c σ_ST = 1.33 σ_ST
The value 1.33 is quite commonly adopted as the relationship between short-
and long-term process variation (Koons, 1992). This factor implies that long-term
variation is approximately 33 percent greater than short-term variation. Other authors
are more conservative and assume a c factor between 1.40 and 1.60, which translates
to a k factor ranging from .286 to .375 (Harry and Lawson, 1992, pp. 6–12, 7–6).
For a c factor of 1.50, k is .333.
1.50 = 1/(1 – k)
1 – k = 1/1.50
k = 1 – (1/1.50) = .333
This assumption allows for a shift in the process average of up to 33.3 percent of
one-half the tolerance. With six sigma capability, there is 6σ_ST from M to the
specification limit, a distance that equals one-half the tolerance. A k factor of
.333 therefore represents a maximum shift in the process average of 2.0σ_ST, a
number derived by multiplying one-half the tolerance, or 6σ_ST, by .333.
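To make the k and c arithmetic concrete, here is a brief sketch (Python; the helper names are mine, not from the text) that reproduces the two cases discussed above:

```python
def c_factor(k):
    """Inflation factor relating long-term to short-term standard deviation: c = 1/(1 - k)."""
    return 1.0 / (1.0 - k)

def k_factor(mean_shift, half_tolerance):
    """Shift of the process average expressed as a fraction of one-half the tolerance."""
    return mean_shift / half_tolerance

# Cp = 2.00 process (6 sigma_ST from M to a limit) with a 1.5 sigma_ST shift
k = k_factor(1.5, 6.0)                          # 0.25
print(f"k = {k:.3f}, c = {c_factor(k):.2f}")    # c = 1.33, so sigma_LT = 1.33 * sigma_ST

# The more conservative c = 1.50 implies k = 1 - 1/1.50 = .333,
# i.e. a shift of .333 * 6 sigma_ST = 2.0 sigma_ST
print(f"c = 1.50 -> k = {1 - 1/1.50:.3f}")
```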
DESIGN PHASE
1. Design in ±6σ tolerances for all critical product and process parameters.
For additional information on this topic, read Six Sigma Mechanical
Design Tolerancing by Harry and Stewart (1988).
2. Develop designs robust to unexpected changes in both manufacturing and
customer environments (see Harry and Lawson, 1992).
3. Minimize part count and number of processing steps.
4. Standardize parts and processes.
INTERNAL MANUFACTURING
A special warning here is appropriate. Even if the first two strategies are adopted,
a company will never achieve six sigma quality unless it has the full cooperation
and participation of all its suppliers.
EXTERNAL MANUFACTURING
1. Qualify suppliers.
2. Minimize the number of suppliers.
3. Develop long-term partnerships with remaining suppliers.
4. Require documented process control plans.
5. Insist on continuous process improvement.
Craig (1993) shows how Dupont Connector Systems utilized this set of strategies
to introduce new products into the data processing and telecommunications indus-
tries. Noguera (1992) discusses how the six sigma doctrine applies to chip connection
technology in electronics manufacturing, while Fontenot et al. (1994) explain how
these six sigma principles pertain to improving customer service. Daskalantonakis
et al. (1990–1991) describe how software measurement technology can identify areas
of improvement and help track progress toward attaining six sigma quality in soft-
ware development.
As all these authors conclude, the rewards for achieving the six sigma quality
goals are shorter cycle times, shorter lead times, reduced costs, higher yields,
improved product reliability, increased profitability, and most important of all, highly
satisfied customers.
We have reviewed the principles of six sigma here to make sure the reader
understands the ramifications of poor quality and the significance of implementing
the six sigma philosophy. In Volume I of this series, we discussed this philosophy
in much more detail. However, it is imperative to summarize some of the inherent
advantages, as follows:
1. As quality improves to the six sigma level, profits will follow with a
margin of about 8% higher prices.
2. The difference between a six sigma company and a non–six sigma com-
pany is that the six sigma company is three times more profitable. Most
of that profitability is through elimination of variability — waste.
3. Companies with improved quality gain market share continuously at the
expense of companies that do not improve.
The focus of all these great results is on manufacturing. However, most of
the cost reduction does not lie in manufacturing. We know from many studies and the
experience of management consultants that about 80% of quality problems are
actually designed into the product without any conscious attempt to do so. We also
know that about 70% of a product’s cost is determined by its design.
Yet, most of the “hoopla” about six sigma in the last several years has been
about the DMAIC model. To be sure, in the absence of anything else, the DMAIC
model is great. But it still focuses on after-the-fact problems, issues, and concerns.
As we keep on fixing problems, we continually generate problems to be fixed. That
is why Stamatis (2000), Tavormina and Buckley (1992), and the first volume of
this series proclaimed that six sigma is not any different from any other tool already
in the tool box of the practitioner. We still believe that, but with a major caveat.
The benefit of the six sigma philosophy and its application is in the design phase
of the product or service. It is unconscionable to think that in this day and age there
are organizations that allow their people to chase their tails and give accolades to
so many for fixing problems. Never mind that the problems they are fixing are
repeatable problems. It is an abomination to think that the more we talk about quality,
the more it seems that we regress. We believe that a certification program will work
its magic, when in fact nothing will lead to real improvement unless we focus on
the design.
This volume is dedicated to Design for Six Sigma, and we are going to talk
about some of the most essential tools for improvement in “real” terms. Specifically,
we are going to focus on resource efficiency, robust designs, and production of
products and services that are directly correlated with customer needs, wants, and
expectations.
REFERENCES
Bender, A., Benderizing Tolerances — A Simple Practical Probability Method of Handling
Tolerances for Limit-Stack-Ups, Graphic Science, Dec. 1962, pp. 17–21.
Bender, A., Statistical Tolerancing as It Relates to Quality Control and the Designer, SAE
Paper No. 680490, Society of Automotive Engineers, Southfield, MI, 1968.
Bothe, D.R., Reducing Process Variation, International Quality Institute, Inc., Sacramento,
CA, 1993.
Bothe, D.R., Measuring Process Capability, McGraw-Hill, New York, 1997.
Craig, R.J., Six-Sigma Quality, the Key to Customer Satisfaction, 47th ASQC Annual Quality
Congress Transactions, Boston, 1993, pp. 206–212.
Daskalantonakis, M.K., Yacobellis, R.H., and Basili, V.R., A method for assessing software
measurement technology, Quality Engineering 3, 27–40, 1990–1991.
Delott, C. and Gupta, P., Characterization of copperplating process for ceramic substrates,
Quality Engineering, 2, 269–284, 1990.
de Treville, S., Edelson, N.M., and Watson, R., Getting six sigma back to basics, Quality
Digest, 15, 42–47, 1995.
Evans, D.H., Statistical tolerancing formulation, Journal of Quality Technology, 2, 188–195,
1970.
Evans, D.H., Statistical tolerancing: the state of the art, Part I: Background, Journal of Quality
Technology, 6, 188–195, 1974.
Evans, D.H., Statistical tolerancing: the state of the art, Part II: Methods for estimating
moments, Journal of Quality Technology, 7, 1–12. 1975 (a).
Evans, D.H., Statistical tolerancing: the state of the art, Part III: Shifts and drifts, Journal of
Quality Technology, 7, 72–76, 1975 (b).
Fan, J.Y., Achieving Six Sigma in Design, 44th ASQC Annual Quality Congress Transactions,
San Francisco, May 1990, pp. 851–856.
Fontenot, G., Behara, R., and Gresham, A., Six sigma in customer satisfaction, Quality
Progress, 27, 73–76, 1994.
Gilson, J., A New Approach to Engineering Tolerances, Machinery Publishing Co., London,
1951.
Harry, M., The Nature of Six Sigma Quality, Motorola Univ. Press, Schaumburg, IL, 1997.
Harry, M. and Stewart, R., Six Sigma Mechanical Design Tolerancing, Motorola University
Press, Schaumburg, IL, 1988.
Harry, M., The Vision of Six Sigma: Case Studies and Applications, 2nd ed., Sigma Publishing
Co., Phoenix, 1994.
Harry, M. and Lawson, J.R., Six Sigma Producibility Analysis and Process Characterization,
Addison-Wesley Publishing Co., Reading, MA, 1992.
Kelly, H.W. and Seymour, L.A., Data Display. Addison-Wesley Publishing Co., Reading,
MA, 1993.
Koons, J., Indices of Capability: Classical and Six Sigma Tools, Addison-Wesley Publishing
Co., Reading, MA, 1992.
Mader, D.P., Seymour, L.A., Brauer, D.C., and Gallemore, M.L., Process Control Methods,
Addison-Wesley Publishing Co., Reading, MA, 1993.
McFadden, F.R., Six-sigma quality programs, Quality Progress, 26, 37–42, 1993.
Noguera, J., Implementing Six Sigma for Interconnect Technology, 46th ASQC Annual
Quality Congress Transactions, Nashville, TN, May 1992, pp. 538–544.
Pena, E., Motorola’s secret to total quality control, Quality Progress, 23, 43–45, 1990.
Stamatis, D.H., Six sigma: point/counterpoint: who needs six sigma anyway, Quality Digest,
33–38, May, 2000.
Tadikamalla, P.R., The confusion over six-sigma quality, Quality Progress, 21, 83–85, 1994.
Tavormina, J.J., and Buckley, S., SPC and six-sigma, Quality, 31, 47, 1992.
Tomas, S., Motorola’s Six Steps to Six Sigma, 34th International Conference Proceedings,
APICS, Seattle, WA, 1991, pp. 166–169.
Prerequisites to Design
1 for Six Sigma (DFSS)
So far in this series we have presented an overview of the six sigma methodology
(DMAIC) and some of the tools and specific methodologies for addressing problems
in manufacturing. Although this is a commendable endeavor for anyone to pursue —
as mentioned in Volume I of this series — it is not an efficient way to use resources
to pursue improvement. The reason for this is the same as the reason you do not
apply an atomic bomb to demolish a two-story building. It can be done, but it is a
very expensive way to go.
As we proposed in Volume I, if an organization really means business and wants
quality improvement to go beyond six sigma constraints, it must focus on the design
phase of its products or services. It is the design that produces results. It is the design
that allows the organization to have flexibility. It is the design that convinces the
customer of the existence of quality in a product. Of course, in order for this design
to be appropriate and applicable for customer use, it must be perceived by the
customer as functional, not by the organization’s definition but by the customer’s
personal perceived understanding and application of that product or service.
Design for Six Sigma (DFSS) is an approach in which engineers interpret and
design the functionality of the customer need, want, and expectation into require-
ments that are based on a win-win proposition between customer and organization.
Why is this important? It is important because only through improved quality and
perceived value will the customer be satisfied. In turn, only if the customer is satisfied
will the competitive advantage of a given organization increase.
There are four prerequisites to DFSS and beyond. The first is the recognition
that improvement must be a collaboration between organization and supplier (part-
nering). The second is the recognition that true DFSS and beyond will only be
achieved if in a given organization there are “real” teams and those teams are really
“robust.” The third prerequisite is that improvement on such a large scale can only
be achieved by recognizing that systems engineering must be in place. Its function
has to be to make sure that the customer’s needs, wants, and expectations are
cascaded all the way to the component level. The fourth prerequisite is the imple-
mentation of at least a rudimentary system of Advanced Quality Planning (AQP).
In this chapter we will address each of these prerequisites in a cursory format.
(Here we must note that these prerequisites have also been called the “recognize”
phase of the six sigma methodology.) In the follow-up chapters, we will discuss
specific tools that we need in the pursuit of DFSS and beyond.
PARTNERING
Partnering and cooperation must be our watchwords. In any industry, better com-
munication up and down the supply chain is mandatory. In the past — and in a few
instances even today — U.S. companies have bought almost solely on the basis of
price through competitive bidding. We need to change our attitude. Price is important,
but it is not the only consideration. Partnering with both customers and suppliers is
just as important.
The Japanese have created a competitive edge through vertical integration. We
can learn from them by establishing “virtual” vertical integration through partnering
with customers and suppliers. Just as in a marriage, we need to give more than we
get and believe that it will all work out better in the end. We need to give preferential
treatment to local suppliers. We should take a long-term view, understanding their
need for profitability and looking beyond this year’s buy.
To begin our thinking in that direction we must change our current paradigm.
The first paradigm shift must be in the following definitions: from “vendor” to
“supplier,” and from “procurement” to “business strategy.”
These are small changes indeed, but they mean totally different things. For
example: “supplier” implies working together in a win-win situation, while “vendor”
implies a one-time benefit — usually price. “Procurement” implies price orientation
based on bidding of some sort, while “business strategy” takes into account the
concern(s) of the entire organization. We all know that price alone is not the sole
reason we buy. If we do buy on the basis of price alone, we pay the consequences
later on.
So, what is partnering? Partnering is a business culture that fosters open com-
munication and mutually beneficial relationships in a supportive environment built
on trust. Partnering relationships stimulate continuous quality improvement and a
reduction in the total cost of ownership.
Partnering starts with:
1. Teaming
2. Sharing resources
3. Melding of customer and supplier
4. Eliminating the we/they approach to conducting business
1. Customer satisfaction
2. Mutual profitability
3. Improved product, service, and operational quality
4. A desire for and a commitment to excellence through continuous improve-
ments in communication skills, quality, delivery, administration, and ser-
vice performance
5. The factors that contribute to customer satisfaction and the lowest total
cost of ownership
6. A situation in which each partner enhances its own competitive position
through the knowledge and resources shared by the other
Traditional                                   Expanded
Lowest price                                  Total cost of ownership
Specification-driven                          End customer–driven
Short-term, reacts to market                  Long-term
Trouble avoidance                             Opportunity maximization
Purchasing’s responsibility                   Cross-functional teams and top management involvement
Tactical                                      Strategic
Little sharing of information on both sides   Both supplier and buyer share short- and long-term plans
                                              Share risk and opportunity
                                              Standardization
                                              Joint venture
                                              Share data
How can this partnership develop? There are prerequisites. Some are listed here.
The prerequisites for basic partnering include:
1. Mutual respect
2. Honesty
3. Trust
4. Open and frequent communication
5. Understanding of each other’s needs
6. Long-term commitment
7. Recognition of continuing improvement — objective and factual
8. Passion to help each other succeed
9. High priority on relationship
10. Shared risk and opportunity
11. Shared strategies/technology road maps
12. Management commitment
Of course, there are different levels of partnering just as there are different levels
of results. For example:
Why is partnering so important in DFSS, even though it may mean different
things to different people? It is because the goals of most customers who
advocate “partnerships” are to reduce the time to get a new product to market by
eliminating the bid cycle and to extend the customer’s capability without adding
personnel.
Partnering is joining together to accomplish an objective that can best be met
by two individuals or corporations rather than one. For a partnership to work well,
it requires that both partners understand the objective, each partner complements
the other in skills necessary to meet the objective, and each recognizes the value of
the other in the relationship. A true partnership occurs when both partners make a
conscious decision to enter into a unique relationship. As the partnership develops,
trust and respect build to a degree that both share the joy and rewards of success
and, when things do not go so well, both work hard together to resolve the issues
to mutual satisfaction.
In a customer/supplier partnership, the customer must define the objective (or
the scope of the project) and identify the needs. The supplier must have the capability
to meet the customer’s needs and become an extension of the customer’s resources.
To be more specific, the customer must be able to quantify and share the desired
needs in terms of the quantity of services required, the timeline or critical path
desired, and targeted costs — including up-front engineering as well as unit cost
and capital investment. The supplier must determine whether it can commit the
resources required to meet those needs and whether it is capable of reaching the
targets. A mutual commitment must be made early in the program, and it must be
for the life of the program.
In a more practical sense, the customer in a customer/supplier partnership must
be the leader and be in a position to guide the partners to the objective — no different
than a project leader or a team leader of a program that is 100 percent internal to
the customer. The leader also must monitor the progress in terms of cost and time
with input from the supplier. Our experience would indicate that longer projects
should be broken into “phases” so that there are milestones that are mutually agreed
to in advance by the partners and that mark the points at which the supplier is paid
for its services.
For a partnership to work well, customer/supplier communications must be open
and frequent. With the availability of CAD, e-mail, Internet, Web sites, fax, and
voice mail, there should be no reason not to communicate within minutes of recog-
nition of an issue critical to the program, but there is also a need for regular meetings
at predetermined intervals at either the customer’s or supplier’s location (probably
with some meetings at each location to expose both partners to as many of the team
players as possible).
IMPLEMENTING PARTNERING
There are five steps to partnering. They are:
There are several options in this phase. However, the most common are:
Option 1: Supplier Partnering Manager
A staff supplier partnering manager is appointed to a full-time position (for a
minimum of two years). This manager will be responsible for:
Perhaps one of the most important functions in this step is to establish credibility with each other, as well as confidentiality requirements. The process of this exchange must be truthful and conducted with integrity. Some characteristics of this exchange are:
1. Each party provides the other with the information needed to be success-
ful.
2. The supplier needs to know the customer’s requirements and expectations
in order to meet them on a long-term basis.
TABLE 1.1
Customer/Supplier Expanded Partnering Interface Meetings

Internal Preparation Meeting
Participants: Customer team(a); supplier team(a); executive partners (if appointed)
Topics: Partner meeting; meeting purpose; objectives; issues; participant responsibilities

Kick-off Meeting
Participants: Customer team; supplier team
Topics: Introduce program; obtain mutual agreement and commitment; discuss issues; identify teams; introduce/suggest executive partners; present/discuss customer objectives, supplier objectives, proposed objectives, and business objectives; definition of responsibilities; expectations

Monthly Team Meeting
Participants: Purchasing; Technical; Quality/Reliability; (other team members)
Topics: Establish/update mutual key results, goals, objectives, and action plans; review performance; review/discuss on-time deliveries, required actions of both parties, quality indicators, quality action plan, and business issues

Quarterly/Semiannual Management Meeting
Participants: Purchasing; Technical; Quality/Reliability; (other team members); executive partners(b)
Topics: Major issues; performance review; "health check"

Annual Management Review
Participants: Purchasing; Technical; Quality/Reliability; (other team members); executive partners
Topics: At supplier location, with tour; maintain key contacts; major performance review; objectives; expectations; actual performance; technology trends; business trends; program direction

(a) The team includes personnel from Purchasing, Quality, Material Control, and Engineering. When needed, it also can include personnel from Sales, Safety, Manufacturing, Process Area Management, Planning, Training, Legal, Risk Management, Finance, and Project Management.
(b) Optional as part of quarterly and semiannual meetings.
Success in this exchange requires time, because building trust is a function of time: the longer you work with someone, the more you get to know that person. To expedite the process of gaining trust, suppliers and
customers may want to share in:
1. Non-disclosure agreements
2. Quality improvement process
3. Technology development roadmaps
4. Specification development
5. Should-cost/Total-cost model
6. Forecasts/Frozen schedules
7. Executive partners
8. Job rotation with suppliers
Be aware of, adhere to, and respect the sensitive/confidential nature of propri-
etary information, both yours and your partner’s. Always remember: recognize the
differences in company cultures. Find ways to do things without imposing your
value system.
Compromise...
Find the common ground...
Work out the differences...
Move forward…
Negotiate...
COOPERATE!
People cannot improve unless they know where they are. Evaluation of the partnering
process is a way to benchmark the progress of the relationship and to set priorities
for future improvement. Questionnaires with five-point rating criteria provide a
means for this evaluation in which both customers and suppliers take an active role.
A typical questionnaire may look like Table 1.2.
Sometimes the questionnaires provide detailed definitions of certain words or
criteria that are being used in the instrument. The following is a brief supplement
to explain/define the rating categories and some of the terms used in Table 1.2:
Ratings
TABLE 1.2
A Typical Questionnaire
Please select one of the following ratings for each question:
Ratings:
(1) Does not meet (2) Marginally meets (3) Meets (4) Exceeds (5) Superior
1. Rate the relationship’s impact in focusing both parties on strategic and tactical goals to foster mutual
success.
Strategic 1 2 3 4 5
Tactical 1 2 3 4 5
Comments:
2. Have all established communication channels within Intel, from executive sponsor down, enabled the
partners to improve their effectiveness/competitiveness as a company?
Technical Issues 1 2 3 4 5
Business Issues 1 2 3 4 5
Comments:
4. Rate the effectiveness of the Key Supplier Program team in generating high quality solutions.
Time of Solutions 1 2 3 4 5
Quality of Solutions 1 2 3 4 5
Cost-Effective Solutions 1 2 3 4 5
Comments:
Question 1
Strategic Goals — Long-range objectives (i.e., next-generation technology)
Tactical Goals — Operational, day-to-day problem solving, etc.
Question 3
Management Team — Executive sponsors plus upper/middle managers
Working Team — Commodity/product teams, task forces, user groups
Performance Reviews — Grading joint MBOs, other indicators (e.g.,
quality, customer satisfaction survey)
Question 4
Time of Solution — Meets or exceeds time requirements/expectations
Quality of Solution — Meets or exceeds quality requirements/expectations
Cost-Effective Solution — Improves total cost effectiveness/fosters mutual profitability
Question 5
Meaningful Support — Active participation and involvement during and between business meetings
Question 6
Resource Commitment — Adequate support (people, tools, space...) to allow successful results
Formal Communication Tools — Meetings, reports, MBOs, technology exchange; correct topics, timely, worthwhile
Information Sharing — Plans, technology, data; useful, timely, fosters profitability
Total Cost Focus — Model in place and used to support decisions to apply resources
Dealing with "The Best" — Process contributes to world-class performance
TABLE 1.3
A General Questionnaire
Evaluate the following categories based on a rating of 1 to 5, with 1 being low and 5 being excellent.
(Yet another variation of the criteria may be 1 = Much improvement needed, 5 = Little or no improvement
needed.)
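Because both instruments reduce to ratings on a 1-to-5 scale, the scoring itself is simple; what matters is using the scores to set priorities for the next period. The short sketch below (Python, with invented category names and ratings rather than the actual items of Tables 1.2 and 1.3) shows one way to average the ratings and flag the categories that fall below "Meets."

from statistics import mean

def score_questionnaire(responses: dict[str, list[int]]) -> dict[str, float]:
    """Average the 1-5 ratings given for each category."""
    return {category: round(mean(ratings), 2) for category, ratings in responses.items()}

def improvement_priorities(scores: dict[str, float], threshold: float = 3.0) -> list[str]:
    """Categories scoring below 'Meets' (3) become priorities for the next period."""
    return sorted((c for c, s in scores.items() if s < threshold), key=scores.get)

# Hypothetical responses from the last evaluation round
last_round = {"Strategic goals": [3, 4, 3], "Communication": [2, 3, 2], "Cost focus": [4, 4, 5]}
scores = score_questionnaire(last_round)
print(scores)                          # e.g. {'Strategic goals': 3.33, ...}
print(improvement_priorities(scores))  # ['Communication']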
Issues or concerns of specific nature may develop when any of the following
situations exist:
We can benefit from creating a “mentoring” attitude toward our suppliers. Tra-
ditionally we say, “Do this because we need it.” Start saying (and thinking), “Do
this because it will make you a stronger company, and that will in turn make us a
stronger company.” Become a mentor in the Partnering for Total Quality assessment
process with your suppliers.
Clearly define expectations by:
1. Organization itself
2. Internal, interfunctional communication
3. Customer orientation
4. World-class definition
5. Skills development
1. Leadership
Our management:
Our organization:
Our organization:
Our organization:
Our organization:
Our organization:
1. Shares meaningful information and data with our customers and suppliers,
with frequent and timely feedback on problems as well as successes
2. Provides guidance to suppliers in defining improvement efforts that
address all problems
1. Recognizes mutual dependencies with our customers and the need to work
together; understands that partnering does not end with the signing of the
purchase order.
2. Engages in win/win, non-adversarial negotiations and purchasing deci-
sions based on total cost of ownership
3. Provides prompt disclosure to customers of any inability of the organiza-
tion to meet current or future requirements; makes realistic commitments
to customers
1. Leadership
Our organization:
1. Shares short- and long-term improvement plans and priorities with sup-
pliers and customers
2. Works with customers and suppliers to understand their quality needs and
plans for continuous improvements
1. Share mutual joint performance measures that are written, measured, and
tracked
2. Work toward standardization of quality and certification programs
3. Develop and implement valid quality assurance systems for products,
processes, service, and administration
TEAM SYSTEMS
Many social psychologists consider a collection of people to be a group only if their activities relate to one another in a systematic fashion. However, it is easier to define
a group as a collection of individuals. The word “team,” however, as mentioned in
Volume I, Part II, is reserved for those groups that constitute a system whose parts
interrelate and whose members share a common goal. Some groups can easily be
viewed according to this criterion. A soccer club, its manager, and its players
constitute a set of parts necessary to the functioning of the whole — the common
aim being to win soccer games. However, when does a newly established team
become a good or effective team? To answer this question, let us examine the team from a systems approach.
Input
A team has an input or signal. The input is the information, energy, resources, etc.,
that enter into the system and are transformed through its structures and processes.
A broad spectrum of inputs into the system can exist and, depending on the per-
spective one chooses to take, the boundaries that are drawn around the system can
be more or less inclusive of these elements.
A system in which the boundary is closely defined will have only the fixed
structures and extant processes within it and will have a wide range of inputs, many
of which may enter the system simultaneously. A system that has a very broad
boundary might include people, materials, resources, and most information as a part
of the system, with the input defined very narrowly as a discrete piece of information
or energy.
Signal
The signal as developed in the Taguchi model has a more specific and limited
definition. It is an input into the system, but it is limited to the means by which the
user conveys to the system a deliberate intention to change (or adjust) the system
output. In more general terms, it is the variable to which the system must respond
in order to fulfill the user’s intent. From this perspective, most of what are tradi-
tionally considered inputs into the system, i.e., people, materials, information, and
so on, are already part of the system itself, and the signal is the discrete piece of
information that determines the amount of energy transformed by the system.
The System
The structure of a system comprises aspects of the system that are relatively static
or enduring. Process, on the other hand, refers to the behavior of the system.
Consequently, process refers to those relatively dynamic or transient aspects of a
system that are observable by virtue of change or instability. Traditional models of
a system are based upon an input-process-output model. The system acts to transform
the energy from the input into the output. This process, once established, is subject
to variation due to internal and external factors that produce “error states” or outputs
other than the desired output. These outputs can simply be wasted energy or may
actually reduce the functional ability of the system itself.
If a particular team has a task to perform, e.g., solving a problem, you can
consider the team to be a system that has inputs, output, and a process that allows
the team members to transform their energy into the desired outputs. Team process
can be defined as any activity (for example, meetings) that utilizes resources (the
team) to transform inputs (ideas, skills, and qualities of team members) into outputs
(discoveries, solutions to problems, proposals, actions, design ideas, products, etc.).
Often the energy that the team brings to the process is not used to best effect.
For example, in a meeting, time may be wasted reiterating points because individuals
have not paid attention to what is being discussed or because there is cross talk.
This in turn leaves people annoyed and frustrated. These are examples of “error
states” or undesirable outputs from the team process.
Output/Response
In traditional systems models, the output is whatever the system transforms, pro-
duces, or expresses into the environment as a consequence of the impact its structures
and processes have on the input. An output can be anything from a newborn baby
to well-done barbecued ribs to a presentation to a tax return. This is very important
to understand because teams, by their nature, are complex and multifunctional. They
cannot and should not be configured to produce one kind of response. Most teams
will have a whole range of outputs with accompanying measures that will be used
to identify how successful they are and how effective they are in transferring energy.
The key is to identify appropriate measures that can be used to monitor the team’s
progress.
The Environment
External Variation
In teams, external variation factors may include such things as change in team
membership, the environment in which the team is working, changing demands from
management, corporate cultural, racial, and gender factors, and so on. In developing
a group process, it is important to develop group systems and processes that are
robust to these factors. In addition, team goals exert a considerable influence on the
behavior of individual members, and goals can vary enormously. They could be
output targets that will vary in accordance with the team’s task — problem-solving
teams puzzling over the root cause of a problem; design teams considering the
optimization of a particular system design to achieve robustness; a marketing team
attempting to understand the exact details of customer requirements; or sports teams,
each of which will have an entirely different set of performance goals depending
upon the sport: soccer, football, tennis, golf, and so on.
Any analysis of working teams should take into account the objectives of the
team and the situation in which the team performs because both will have a profound
effect on the team functioning.
Internal Variation
Internal variation, on the other hand, relates to factors that are in the team system
and its members. People may bring predetermined ideas about the correct design
solution. They may have biases about other team members depending on their race,
gender, function, grade, and so on. Certain team members may not get along with
other team members and will regularly question, challenge, or contradict the others
for no apparent reason. The team may not manage its time well and consequently
may find itself chronically short of time at the end of meetings.
Team members may not know how to ask open questions that will open up fresh
avenues of information. Closed questions will result in familiar dead ends or non-
productive and previously rejected ideas. Team members may not know how to build
on the ideas of other team members and, consequently, good ideas may be regularly
lost. If the reader needs help in this area, we recommend a review of Volume I, Part II.
The Boundary
At the simplest level, boundaries can be put around almost anything, thereby defining
it as a system. In practice, the identification of the boundary is the key to successful
system analysis. The classification of factors (signal, control, and variation) that
impact on the system is dependent on the way in which the boundary is defined.
For example, by setting the boundary of the system fairly wide, to include the team
members, environment, resources, information, and so on, leaving only the directive
from the champion or the monthly output target outside, more factors would be
considered as control factors and fewer as variation. In this case, the directive from
the champion would be the signal factor. The team members, environment (or aspect
of it), and so on would be control factors.
External variations would then include disruptions to the team process from
sources outside the team boundary. Internal variations would include attitudinal,
cultural, and intellectual variations among and between team members and variations
in environmental conditions (e.g., temperature). By setting a narrower boundary,
many of the factors such as environment and resources would be considered external
to the system and therefore would become noise factors rather than control factors.
These issues are important because they determine the team’s strategy for dealing
with variations and establishing a means of becoming robust to them.
Even the simplest model of the effective team includes this concept of feedback
loops. By employing information feedback loops, systems may behave in ways that
can be described as “goal seeking” or “purposive.”
Negative feedback allows a system to maintain stability as in the case of the
most commonly quoted example, a thermostat. A thermostat is controlled by negative
feedback so that when the temperature increases above a certain level the heating
is switched off, but when the temperature decreases sufficiently the heating is
switched on. The process of maintaining stability is called “homeostasis.” The
capacity for such control is engineered into some mechanical systems and occurs
naturally in all biological and social systems. Threats to the stability of the system
will be countered in a powerful attempt to maintain homeostasis.
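A minimal simulation makes the thermostat analogy concrete. The sketch below (Python; the set point, switching band, and heating and cooling rates are arbitrary illustrative values) shows negative feedback holding the output near its target despite continual disturbance.

def simulate_thermostat(set_point=20.0, start_temp=15.0, steps=60):
    """Negative feedback: switch heat off above the set point, on below it."""
    temp, heater_on = start_temp, True
    history = []
    for _ in range(steps):
        if temp > set_point + 0.5:      # too warm: feedback switches the heater off
            heater_on = False
        elif temp < set_point - 0.5:    # too cool: feedback switches the heater on
            heater_on = True
        temp += 0.4 if heater_on else -0.3   # heating gain vs. ambient loss per step
        history.append(round(temp, 2))
    return history

print(simulate_thermostat()[-5:])   # the last readings sit in a narrow band around 20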
System Feedback
One alternative approach is to monitor those aspects of team behavior that are
observable (i.e., gather “the voice of the process”). Descriptive Feedback offers a
non-judgmental method of monitoring what happens in working groups. It allows
team members to notice when team process is in control and meeting or exceeding
predetermined expectations or drifting out of control and reducing potential. Descrip-
tive Feedback provides three basic functions:
The Parameter Design approach used in quality engineering — see Volume V of this
series — is concerned with minimizing the effect of variation factors by making the
system robust. This involves identifying control factors — in this case, aspects of
the team process that are within the control of the team and that can be used to
reduce the impact of variation factors without eliminating or controlling the variation
factors themselves. An example of a “control factor” functioning in this way is the
use of Warm-Up and its consideration of “place” (layout, heating, lighting, ventila-
tion) so that best use is made of the facility provided and distractions are minimized,
even though the place itself and many of its features cannot be changed.
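One way to express this numerically is Taguchi's nominal-the-best signal-to-noise ratio, S/N = 10 log10(mean^2 / variance): the control-factor setting that keeps the output on target with little variation under noise scores higher. The sketch below compares a team process run with and without a structured Warm-Up; the meeting data are invented and purely illustrative.

import math
from statistics import mean, variance

def sn_nominal_the_best(observations):
    """Taguchi nominal-the-best S/N ratio: larger means more robust to noise."""
    m, v = mean(observations), variance(observations)
    return 10 * math.log10(m**2 / v)

# Minutes of productive work per 60-minute meeting under varying (noise) conditions:
with_warm_up    = [52, 49, 51, 50, 53]   # control factor: structured Warm-Up used
without_warm_up = [55, 38, 47, 30, 52]   # same noise conditions, no Warm-Up

print(round(sn_nominal_the_best(with_warm_up), 1))      # higher S/N: more robust
print(round(sn_nominal_the_best(without_warm_up), 1))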
The key to a successful team lies not only in identifying those parameters that
are critical for the efficient transformation of inputs to the team process into outputs
but also in doing this with minimal loss of energy in error states and maximum
robustness to variation factors in the environment. Different types of teams with
different outputs required of them would have different parameters established for
their most efficient performance. Many of the structures, processes and skills that
could be used as control factors in a team process have been identified in Volume I,
Part II of this series.
Through this process of observation, it is possible to establish control limits in
a wider area of team performance. A number of the factors that have an impact on
team performance can be observed and regulated through feedback, and “tolerance”
for them can be established depending upon the makeup and objective of the team.
These factors include warming up and down, place, task, maintenance, process
management, team roles, agenda management, communication skills, speaking
guidelines, meeting management, exploratory thinking guidelines, experimental
thinking guidelines, change management, action planning, and team parameters.
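For any one of these observable factors, the control limits can be set from the team's own history, for example as the mean plus or minus three standard deviations of recent observations. The following sketch uses hypothetical meeting start-delay data to illustrate the calculation.

from statistics import mean, stdev

def control_limits(observations, sigmas=3):
    """Center line and limits at mean +/- sigmas * standard deviation."""
    center = mean(observations)
    spread = stdev(observations)
    return center - sigmas * spread, center, center + sigmas * spread

start_delays = [2, 5, 3, 4, 6, 2, 3, 5, 4, 3]        # minutes late, last ten meetings
lcl, center, ucl = control_limits(start_delays)
print(f"center {center:.1f} min, limits ({max(lcl, 0):.1f}, {ucl:.1f})")

new_meeting_delay = 14
print("out of control" if not (lcl <= new_meeting_delay <= ucl) else "in control")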
The traditional approach to engineering waits until the end of the design process
to address the optimization of a system’s performance — in other words, after
parameter values are selected and tolerances determined, often at the extremes of
conditions and often without considering interactions among different components
or subsystems. When the components and subsystems are integrated, and if perfor-
mance does not meet the target value or the customer’s requirements, parameter
values are altered. Consequently, though the system may be adjusted to operate
within tolerance, this process does not guarantee that the system is producing its
ideal performance.
Similarly, traditional approaches to building teams have selected team members
according to a number of factors: predetermined skills and knowledge, established
roles for team members, and implemented structured norms. They also have waited
until the end of the process of team design in order to optimize performance. If the
team does not perform within the accepted values of these parameters, then it is
adjusted: team members are changed, roles are redefined, norms are more strictly
enforced. This adjustment, however, is made against performance criteria that do not necessarily optimize the team's performance or add to the motivation or job satisfaction of the team members.
The shift suggested in Parameter Design in engineering (and that may be applied
to teams as well) is to move from establishing parameter values to identifying those
parameters that are most important for the function of the process and then determining through experimental design the correct values for those parameters. The key is to
establish the values that use the energy of the system most efficiently and that are
resistant to uncontrollable impact from other factors internal or external to the system
itself.
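A small designed experiment is often enough to make this shift tangible. The sketch below runs a hypothetical two-level, two-factor full factorial on two team parameters and estimates their main effects; the factors, levels, and responses are invented for illustration only.

from itertools import product

# Factors: agenda sent in advance (A), timekeeper assigned (B); levels coded -1 / +1
runs = list(product([-1, 1], repeat=2))
# Response: problems resolved per meeting, one (hypothetical) observation per run
response = {(-1, -1): 2, (1, -1): 4, (-1, 1): 3, (1, 1): 6}

def main_effect(factor_index):
    """Average response at the high level minus average response at the low level."""
    high = [response[r] for r in runs if r[factor_index] == 1]
    low = [response[r] for r in runs if r[factor_index] == -1]
    return sum(high) / len(high) - sum(low) / len(low)

print("effect of agenda-in-advance:", main_effect(0))   # 2.5
print("effect of timekeeper:", main_effect(1))          # 1.5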
System Interrelationships
A systems model of processes differs from traditional models in many ways, one
of which is the notion of circular causality. In the non-systems view, every event
has its cause or causes in preceding events and its effects on subsequent events: the
scientist seeks the cause or effect. Using the linear method of causality, ultimate
causes are sought by tracing back through proximate causes. However, many phe-
nomena do not “fit” the linear model: the relationships between them — and the
relationships between the attributes or characteristics of the elements — do not
conform to this linear approach to causality.
In engineering systems, a direct cause and effect relationship often exists
between the component of the system and the transformation of the input into an
output. A steering wheel channels the input of the vehicle operator directly into the
output of the system. That is, turning the steering wheel to the right or left actually
turns the wheels of the vehicle to the right or the left. However, it is equally clear
that error states or phenomena are nowhere near as simple or linear in the causal
relationship. Feedback loops and circular causality create very complex interactions.
For example, the choice of lubricants may not affect the performance of the system
until months or years later, when early deterioration of a transmission would result
in difficulty shifting gears.
Similarly, in teams, some cause and effect relationships are clearly related in
time and others are not. Interventions by a timekeeper will affect the ability of the
team to stick to its agenda. But other factors have more circular relationships. In a
global problem-solving team, changing seating arrangements from the long-tabled
boardroom style to a circular arrangement will result in more universal eye contact
among team members, which may increase the team’s communication. This leads
to enhanced exchange of information, which may lead to a clearer identification of
the problem which will, in turn, lead to a more targeted search for relevant data,
which will finally lead to a root-cause identification for the problem. Changing the
seating arrangement may enhance finding a root cause more quickly than might have
been the case in boardroom seating, and the cause and effect chain may be quite
intricate.
SYSTEMS ENGINEERING
An emerging basis for unifying and relating the complexities of managerial problems
is the system concept and its methodology. This concept has been applied more to
the analysis of productive systems than to other fields, but it is clear that the value
of the concept in management is pervasive.
The word “system” has become so commonplace in the general literature as
well as in the field that one often wants to scream, for its common use almost
depreciates its value. Yet the word itself is so descriptive of the general interacting
nature of the myriad of elements that enter managerial problems that we can no
longer talk of complex problems without using the term “systems.” Indeed, we must
learn to distinguish the general use of the term from its specific use as a mode of
structuring and analyzing problems.
One of the great values of the system concept is that it helps us to take a very
complex situation and lend order and structure to it by using statistics, probability,
and mathematical modeling. A major contribution of the concept is the reduction of
complexity in managerial problems to a block diagram showing the relationship and
interacting effects of the various elements that affect the problem at hand. At its
present state of development and application, the systems concept is most useful in
helping us gain insight into problems. At a second and very powerful level of
contribution, however, systems analysis is gaining prominence as a basis for gener-
ating solutions to problems and evaluating their effects, and for designing alternate
systems.
"SYSTEMS" DEFINED
We have been using the term systems without defining it. Though nearly everyone
may have a general understanding of the term, it may be useful to be more precise.
Webster defines a system as a regularly interacting or interdependent group of items
forming a unified whole. Thus a system may have many components and objects,
but they are united in the pursuit of some common goal. They are in some sense
unified, organized, or coordinated. The components of a system contribute to the
production of a set of outputs from given inputs that may or may not be optimal or
best with respect to some appropriate measure of effectiveness. Systems are often
complex, although the definition does not specify that they need to be.
It is probably correct to say that some of the most interesting systems for study
are complex and that a change in one variable within the system will affect many
other variables of the system. Thus in productive systems, a change in production
rate may affect inventories, hours worked per week, overtime hours, facility layout,
and so on. Understanding and predicting these complex interactions among variables
is one of our main objectives in this section.
One of the elusive aspects of the systems concept is in the definition of a specific
system. The fact that we can define the system that we wish to consider and draw
boundaries around it is important. We can then look inside the defined system to
see what happens, but it is just as important to see how the system is affected by
its environment.
1. Medical
2. Compensation
The point here is that with most injuries the focus becomes the direct cost,
thereby dismissing the indirect costs. It has been estimated time and again that the
cost relationship of direct to indirect cost is one to three, yet we continue to ignore
the real problems of injury. An appropriate system design for injury prevention would
minimize if not eliminate the hidden costs. Generally speaking, the system should
include (a) engineering, (b) education, and (c) enforcement considerations. Some
specific considerations should be:
PRE-FEASIBILITY ANALYSIS
Before the actual analysis takes place there is a preliminary trade-off analysis as to
what the customer needs and wants and what the organization is willing to provide.
This is done under the rubric of preliminary feasibility. When the feasibility is
complete, then the actual requirement analysis takes place.
REQUIREMENT ANALYSIS
The requirement analysis involves the following steps:
At the end of the requirement analysis the results are moved to the second stage
of the system engineering model, which is design synthesis. However, before the
synthesis actually takes place, another feasibility analysis is completed to find out
whether the organization is capable of designing the requirements of the customer.
This feasibility analysis takes into consideration the organization’s knowledge from
previous or similar designs and incorporates it into the new. The idea of this feasi-
bility analysis is to make sure the designers optimize reusability and carry over parts
and/or complete designs.
DESIGN SYNTHESIS
Design synthesis involves the following steps:
At the end of the design synthesis a very important analysis takes place. This
analysis tests the integrity of the design against the customer’s requirements. If it is
found that the requirements are not addressed (design gap), a redesign or a review
takes place and a fix is issued. If, on the other hand, everything is as planned, the
process moves to the third link of the model — verification.
VERIFICATION
The final stage is verification. It involves the following:
At the end of this stage, if problems are found, they are fed back to the design; if there are no problems, the design is released to manufacturing, ready to fulfill the customer's expectations. This final stage in essence tests
the integrity of the design against the actual hardware. In other words, the questions
often heard in verification are: Does the design work? Can you prove it?
The beauty of this model is that it is an iterative model, meaning that the
process — no matter where you are in the model — iterates until a balanced optimum
design is achieved. This is because the goal is to produce a customer-friendly design with compatibility, carryover, reusability, and low complexity requirements compared to other, similar designs. Iterations happen because of human oversights,
poorly defined requirements, or an increase in knowledge.
Another special characteristic of systems engineering is the notion of traceability.
Traceability is reverse cascading and is used throughout the design process to make
sure that the voices of the customer, regulator, and corporate or lower-level design
are heard and accounted for in the overall design. With traceability, extra caution is
given to the trade-off analysis. This is because by definition trade-off analysis
accounts for designs with certain priority levels among the needs and wants of the
customer. In a trade-off analysis, we choose among stated design alternatives.
However, a trade-off analysis is also an iterative process, and usually none of
the alternatives is perfect [R(t) = 1 – F(t)]. This is important to remember because
all trade-off analyses assess risk, both external and internal, of the given alternatives
so as to make robust designs.
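The bracketed relation R(t) = 1 − F(t) simply says that reliability is the probability that failure has not yet occurred by time t. Under the common (here assumed) exponential life model, F(t) = 1 − e^(−t/MTBF), competing alternatives can be compared directly, as in this illustrative sketch with made-up MTBF figures.

import math

def reliability(t_hours: float, mtbf_hours: float) -> float:
    """R(t) = 1 - F(t) for an exponential life distribution: R(t) = exp(-t/MTBF)."""
    return math.exp(-t_hours / mtbf_hours)

mission_time = 5000.0
alternatives = {"Design A (carryover)": 20000.0, "Design B (new)": 35000.0}  # assumed MTBFs

for name, mtbf in alternatives.items():
    print(f"{name}: R({mission_time:.0f} h) = {reliability(mission_time, mtbf):.3f}")
# Neither alternative reaches R = 1; the trade-off weighs this risk against cost,
# carryover, complexity, and the other stated priorities.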
A final word about verification and systems engineering: As we already men-
tioned, the intent of verification is to make sure that the hardware meets the require-
ments of the design. The process for conducting this verification is done —
generally — in five steps, which are:
The focus of this process is to make sure that the requirements are driving the process and not the tests, regardless of how sophisticated the tests are. To be sure, tests are an integral part of verification, but they are the means, not the end. The intent of the tests is to verify each requirement, and there is no wrong way as long as the testing method is linked to real-world usage. The reason for doing all this is to:
In essence then, the customer appears satisfied, but a product, service, or process
is not improved at all. This is precisely why it is imperative for organizations to
look at quality planning as a totally integrated activity that involves the entire
organization. The organization must expect changes in its operations by employing
cross-functional and multidisciplinary teams to exceed customer desires — not just
meet requirements. A quality plan includes, but is not limited to:
1. In the auto industry, demand is so high that Chrysler, Ford, and General Motors have developed a standardized approach to AQP. That standardized approach is a requirement for QS-9000 and/or ISO/TS 16949 certification. In addition, each company has its own way of measuring success in the implementation and reporting phase of AQP tasks.
2. Auto suppliers are expected to demonstrate the ability to participate in
early design activities from concept through prototype and on to produc-
tion.
3. Quality planning is initiated as early as possible, well before print release.
4. Planning for quality is needed particularly when a company’s management
establishes a policy of “prevention” as opposed to “detection.”
5. When you use AQP, you provide for the organization and resources needed
to accomplish the quality improvement task.
6. Early planning prevents waste (scrap, rework, and repair), identifies
required engineering changes, improves timing for new product introduc-
tion, and lowers costs.
The supplier — as in the case of certification programs such as ISO 9000, QS-9000, ISO/TS 16949, and so on — is to maintain evidence of the use of defect prevention
techniques prior to production launch. The defect prevention methods used are to
be implemented as soon as possible in the new product development cycle. It follows
then, that the basic requirements for appropriate and complete AQP are:
1. Team approach
2. Systematic development of products/services and processes
3. Reduction in variation (this must be done, even before the customer
requests improvement of any kind)
4. Development of a control plan
As AQP is continuously used in a given organization, the obvious need for its
implementation becomes stronger and stronger. That need may be demonstrated
through:
to the process of strategy making, to the strategies that result from that process, and
ultimately to the taking of effective actions by the organization, and (3) whether the
very nature of planning actually fosters managerial commitment to itself.
Another pitfall, of equal importance, is the cultural attitude of “fighting fires.”
In most organizations, we reward problem solvers rather than planners. As a consequence, the emphasis is on low-risk "fire fighting," when in fact it should be on planning a course of action that will be realistic, productive,
and effective. Planning may be tedious in the early stages of conceptual design, but
it is certainly less expensive and much more effective than corrective action in the
implementation stage of any product or service development.
1. Begin with the end in mind. This may be obvious; however, it is how most
goals are achieved. This is the stage where the experimenter determines
how the study results will be implemented. What courses of action can
the customer take and how will they be influenced by the study results?
Clearly understanding the goal defines the study problem and report
structure. To ensure implementation, determine what the report should
look like and what it should contain.
2. Determine what is important. All resources are limited and therefore we
cannot do everything. However, we can do the most important things. We
must learn to use the Pareto principle (the vital few as opposed to the
trivial many). To identify what is important, we have many methods,
including asking about advantages and disadvantages, benefits desired,
likes and dislikes, importance ratings, preference regression, key driver
analysis, conjoint and discrete choice analysis, force field analysis, value
analysis, and many others. The focus of these approaches is to improve
performance in areas in which a competitor is ahead or in areas where
Application of APQP in the DFSS process provides a company with the oppor-
tunity to achieve the following benefits:
Once program content has been clarified, the following information can be
discerned:
Using the APQP process to stabilize program timing and content, the opportu-
nities for cost improvement are dramatically increased. When we are aware of the
timing and concerns that may occur during the course of a program, it provides us
the opportunity to reduce costs in the following areas:
REFERENCES
Automotive Industry Action Group (AIAG), Advanced Product Quality Planning and Control
Plan. Chrysler Co., Ford Motor Co., and General Motors. Distributed by AIAG,
Southfield, MI, 1995.
Mayne, E. et al., Quality Crunch, Ward’s AUTOWORLD, July 2001, pp. 14–18.
Mintzberg, H., The Rise and Fall of Strategic Planning, The Free Press, New York, 1994.
Stamatis, D.H., Advanced Quality Planning: A Commonsense Guide to AQP and APQP, Quality Resources, New York, 1998.
SELECTED BIBLIOGRAPHY
Bossert, J., Considerations for Global Supplier Quality, Quality Progress, Jan. 1998, pp.
29–34.
Brown, J.O., A Practical Approach to Service: Supplier Certification, Quality Progress, Jan.
1998, pp. 35–40.
Forcinio, H., Supply Chain Visibility: Is It Really Possible? Managing Automation, July 2001, pp. 24–28.
Gurwitz, P.M., Six Questions to Ask Your Supplier About Multivariate Analysis, Quirk's Marketing Review, Feb. 1991, pp. 8–9, 23.
Mehta, P.V. and Scheffler, J.M., Getting Suppliers in on the Quality Act, Quality Progress,
Jan. 1998, pp. 21–28.
Schoenfeldt, T., Building Effective Supplier Relationships, Automotive Excellence, Winter 1999, pp. 17–25.
2 Customer Understanding
In Volume I of this series, we made a point to discuss the difference between
“customer satisfaction” and “loyalty.” We said that they are not the same and that
most organizations are interested in loyalty. We are going to pursue this discussion
in this chapter because, as we have been saying all along, understanding the differ-
ence between customer service and customer satisfaction can provide marketers with
the competitive advantage necessary to retain existing customers and attract new
ones. “Understanding” what satisfaction is and what the customer is looking for can
provide the engineer with a competitive advantage to design a product and/or service
second to none. At first glance, service and satisfaction may appear to mean the
same thing, but they do not; service is what the marketer provides and what the
customer gets, and satisfaction is the customer’s evaluation of the level of service
received based on preconceived assumptions and the customer’s own definition of
“functionalities.” The satisfaction level is determined by comparing expected service
to delivered service. Four outcomes are possible:
that the vast majority were frustrated and unhappy about having to wait more than
15 minutes before getting attended to. The marketing executive interrupted, saying,
“Those customers should consider themselves lucky; if they were in the dealerships
of one of our competitors, they would have to wait 20 to 30 minutes before they
were seen by the service manager.”
This example includes all the information needed to explain the difference between
customer service and customer satisfaction. The customers in this example defined their
personal expectations about the service — their waiting time experience — and clearly,
a conflict existed between their service expectation (a short wait before being seen
by a service manager) and their service experience (waits of more than 15 minutes).
Customers then were dissatisfied with the waiting rooms and the dealerships in
general. The marketing manager’s response to customer dissatisfaction was to note
that the waits could have been worse: He knew that competitors’ dealerships were
worse. He also knew customer waits of more than 30 minutes were not uncommon.
In light of these data, he judged the 15- to 20-minute waits in the waiting room
acceptable.
Herein lies the conflict between service delivery and customer satisfaction. The
important concept for this marketing executive — and for all marketers — is that
customers define their own satisfaction standards. The customers in this example
did not go to the competitors’ dealerships; instead, they came to the marketer’s
dealership with a set of their own expectations in a preconceived environment. When
the marketer used his service delivery criteria to defend the waiting time, he simply
missed the point.
Unfortunately, this illustration is typical of how many marketers think about
customer satisfaction. They tend to relate customer satisfaction directly to their own
service standards and goals rather than to their customers’ expectations, whether or
not those expectations are realistic. To assess satisfaction, marketers must look
beyond their own assessments, tapping into the customers’ evaluations of their
service experience.
Consider, for example, a bank that thought it was doing a good job of measuring
service satisfaction but really was too focused on service delivery. This bank had
developed a policy that time spent in the lobby room should be less than 15 minutes
for all customers. A customer came into the office and waited 12 minutes in the
reception area for a mortgage application. Then she waited another five minutes for
the loan officer to clear all the papers from his desk from the previous customer and
an additional three minutes for him to get the file and all the pertinent information
for the current application. As this customer was leaving, she was asked to fill out
a customer satisfaction questionnaire. Under the category for reception area waiting
time, she checked off that she had waited less than 15 minutes.
Based on this response, the bank’s marketing director assumed the customer
was satisfied, but she was not; the customer had been told that if she came in for
the mortgage loan during her lunch hour, she would be taken care of right away.
Instead, she waited a total of 20 minutes for her application process to begin. She
did not have time to shop for the gift her son needed that night for a birthday party,
and her entire schedule was in disarray. She left dissatisfied.
Understanding the difference between service and satisfaction is the first step
in developing a successful customer satisfaction program, and all marketers must
share the same understanding. Only customers can define what satisfaction means
to them. Here are some practical ways to understand customers’ expectations:
• Ask customers to reflect on their experiences with your services and their
needs, wants, and expectations.
• Talk to customers face to face through focus groups, as well as through
questionnaires. A wealth of information can be collected this way.
• Talk with your staff about what they hear from customers about their
expectations and experience with service delivery.
• Review warranty data.
Remember the three words that can help you learn from your customers: What,
how and why. That is, what service did you experience, how did it make you feel,
and why did you feel that way? Continual probing with these three perspectives will
deliver the answers you need to better manage service to generate customer satis-
faction.
As Harry (1997, p. 2.20) has pointed out:
Y = a + bx
Y = f(x1, x2, …, xn)
df/dθ = r sin θ + (−sin θ/cos²θ)(a/g) + (r/cos θ)[cos θ + (a sin θ)/(g cos²θ)]
In DFSS, the transfer function is used to estimate both mean (sensitivity) and
variance (variability) of Ys and ys. When we do not know what the Y is, it is
acceptable to use surrogate metrics. However, it must be recognized from the begin-
ning that not all variables should be included in a transfer function. Priorities should
be set based on appropriate trade-off analysis. This is because DFSS is meant to
emphasize only what is critical, and that means we must understand the mathematical
concept of measurement.
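When the transfer function is known (or assumed), the mean and variance of Y can be estimated by propagating the variation of the xs through it, for example by simple Monte Carlo sampling. The sketch below uses a hypothetical Y = f(x1, x2) and hypothetical input distributions; it illustrates the idea and is not Harry's worked example.

import random
from statistics import mean, variance

def transfer(x1, x2):
    """Assumed transfer function Y = f(x1, x2) for illustration only."""
    return 3.0 + 2.0 * x1 - 0.5 * x1 * x2

random.seed(1)
# Assumed input variation: x1 ~ N(10, 0.2), x2 ~ N(4, 0.1)
ys = [transfer(random.gauss(10.0, 0.2), random.gauss(4.0, 0.1)) for _ in range(10_000)]

print(f"estimated mean of Y: {mean(ys):.3f}")        # sensitivity
print(f"estimated variance of Y: {variance(ys):.4f}")  # variability
# Only the xs that materially move Y belong in the transfer function; the rest are noise.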
The focus of understanding customer satisfaction has been captured by Rechtin
and Hair (1998), when they wrote that “an insight is worth a thousand market
surveys.” It is that insight that DFSS is looking for before the requirements are
discussed and ultimately set. This will help us in identifying what is really going
on with the customer. Let us look at the function first.
In this expanded definition, you can not only see the concept of function at work but also recognize the essential abstraction of a process.
In a process, some type of input is transformed into an output. As a simple equation, we might say that output = f(input).
In the case of function, the inputs are the unfulfilled wants and needs that a
customer or a prospective customer has. These can be and often are intricate; this
is why the discipline of marketing is still more art than science. (We will have more
to say about this issue in just a moment.) Nevertheless, there exist multiple sets of
unfulfilled wants and needs that are open to the lures and attractions provided by
the marketplace.
In this very broad model, the transformation is provided by the producer. With
one, ten, or hundreds of internal processes (within any discussion of process, there
is always the “Russian doll” image: processes within processes within processes),
business organizations attempt to determine the unmet wants and needs that cus-
tomers have. The producer then must design and develop products and delivery
processes that will provide tangible and/or intangible media of exchange that will
assuage the unmet needs or need sets.
Finally, the external processes that involve exchange of the producer’s goods for
money or other barter provide the customer with varying degrees of satisfaction.
The gratification (or lack of satisfaction) that results can then be viewed as the output
of the general process.
In business, the inputs are not within the control of producers. As a result,
producers need powerful tools to understand, delineate, and plan for ways to meet
these needs. This can be thought of as the domain of the Kano model or Quality
Function Deployment.
The transformational activities, however, are within the control of the producer.
These “controlled” activities include planning efforts to deliver “function” at a
satisfactory price; the nuances and subtleties of this activity can be strongly influ-
enced or even controlled by the discipline of Value Analysis.
In addition, fulfillment of marketplace “need sets” also implies that this fulfill-
ment will occur without unpleasant surprises. Unwanted, incomplete, or otherwise
unacceptable attempts to produce “function” often result in failure. This implies that
producers have a need to systematically analyze and plan for a reduction in the
propensity to deliver unpleasant surprises. This planning activity can be greatly aided
by the application of Failure Modes and Effects Analysis techniques.
To see how these ideas mesh, we need to consider how “function” can be
comprehensively mapped. This will require several steps. To apply what will be
discussed in the rest of this section, we need to emphasize the importance of choosing
the proper scale for any analysis. The probability is that you will choose too broad
a view or too much detail; we will try to provide guidance on this issue during our
discussion of methods.
the governmental agencies and political constituencies that administer these laws
can be seen as the “overarching” customers described previously.
Ultimately, vehicle purchasers themselves are the critical endpoint in this chain of
evaluation. And within this category of customers are the many segments and niches
that car makers discuss, such as entry level, luxury, sport utility, and the many other
differentiation patterns that auto marketers employ. Only when a product passes through
the entire sequence will it have a reasonable chance of successfully and repeatedly
generating revenue for the producer. This provides a critical insight about function.
Function is only meaningful through some transactional event involving one or
more customers. Only customers can judge whether a product delivers desired or
unanticipated-yet-delightful function.
In many cases (in fact, most), firms simply do not consider all of these customers.
As a result, they are often surprised when problems arise. Moreover, they suffer
financial impediments as a result — even though they may simply budget some
degree of failure expectation into overhead calculations.
A rational assessment of this situation means that the first requirement for under-
standing function is a comprehensive listing of customers. Frankly, this is very hard
work, and it requires time, dedication, and effort. Regardless, understanding the cus-
tomers that you wish to serve is an essential prerequisite to comprehension of function.
For example, let us consider a simple mechanical pencil. The mechanism of the
pencil must feed lead at a controlled rate. This also means that there must be a
specific position for the lead. If the lead is fed too far, the lead will break. If the
feed is not far enough, the pencil may not be able to make marks. As a result, one
function that we can consider is “position lead.” The measurement is the length of
exposed lead, and the desired extent of the positioning function may be 5 mm from
the barrel end of the pencil. If there are limits on the extent in the form of tolerance,
this is a good time to think about these limits as well.*
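Once the function is stated this way, the specification becomes something you can check. A minimal sketch, assuming the 5 mm target and a hypothetical tolerance of plus or minus 0.5 mm:

TARGET_MM = 5.0
TOLERANCE_MM = 0.5      # assumed for illustration; the tolerance is the designer's choice

def lead_position_ok(measured_mm: float) -> bool:
    """True when the exposed lead length lies within the specification limits."""
    return abs(measured_mm - TARGET_MM) <= TOLERANCE_MM

for sample in (4.6, 5.0, 5.7):
    print(sample, "in spec" if lead_position_ok(sample) else "out of spec")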
While you are describing function in terms of an active verb and measurable
noun, it is very important to maintain a customer frame of reference. Do not forget
that function is only meaningful in terms of customer perception. No matter how
much you may be enamored of a product feature or service issue, you must decide
if the target customer will perceive your product in the same way.**
* When you do this, you have created a “specification” for this function.
** One of the most common and debilitating errors in market analysis is to assume that others will
respond the same way that you do. This is a simple but profound delusion. Most of us think that we are
normal, typical people. When we awaken in the morning, we look in the mirror and see a normal (although
perhaps disheveled if we look before the second cup of coffee) person. Thus, we think, “I like this widget.
Since I am normal, most other people will like this widget, too. Therefore, my tastes are likely to be a
good guideline to what my customers will want.” In most cases, even if you really are “normal” and
even “typical,” this easy generalization is dangerously false.
pencil that will write notes without a writer attached you will probably become rich.)
The function that is more appropriate for the pencil is “make marks.”
The best way to start is to simply brainstorm as many functions as you can using
active verbs and measurable nouns. There are many ways to brainstorm; in this case,
it is usually easiest to have everyone involved use index cards or sticky notes to record
their ideas. Remember that brainstorming should not be interrupted by criticism; just
let the ideas flow. You will get things that do not apply, and, until you gain experience,
you will not always use the “functive” structure that is ultimately important. Do not
worry about these issues during the idea-generation phase of this process.
Once you have a nice pile of cards or notes, start by sifting and sorting the ideas
into categories. In any pile of ideas, there will be natural “groupings” of the cards.
Determine these categories and then sort the cards. This can be thought of as “affinity
diagramming” of the ideas. You will find some duplicates and some weird things
that probably do not belong in the pile.* Discard the excess baggage and look at
the categories. Are there any important functions you have missed? Do not hesitate
to add new ideas to the categories, either.
Finally, you are ready to bear down on the linguistic issues. Make sure that all
of the ideas are expressed in terms of active verbs and measurable nouns. Change
the idea to a “functive” construction, and then look for the “nerd” verb cards. Convert
all of the “nerd verb” functions into true functives, with fully active verbs and
measurable nouns. When you are done, you will have an interesting and important
preliminary output.
Now, count the cards again. If you have more than 20 to 30 cards, you have
probably tackled too complex a subject or viewpoint. For example, a commercial
airline has thousands — even hundreds of thousands — of functions. If you wanted
to analyze function on the widest scale, you would probably be guilty of too much
detail if you listed more than 30 functives. On the broadest scales of view, you may
only list a handful of functions. Nothing is wrong with a short list, especially for
the broadest view.
If you have trouble, we can suggest some “function questions” that can assist
you in your brainstorming. Try these questions:
* Do not automatically toss out strange ideas — see if the team can reword or express more clearly the
idea that underlies the oddball cards or notes. Some percentage of these cards will have important
information. Many will be eventual discards, but do not jump to conclusions.
• A “part” view
• Each sub-element of the project
• A “component” view
Finally, we can start our next task, which consists of arranging functions into
logical groups that show interrelationships. In addition, this next “arranging” step
will allow us to test for completeness of function identification and improve team
communication.
We start by asking “What is the reason for the existence of the product or
service?” This function represents the fundamental need of the customer. Example:
a vacuum cleaner “sucks air” but the customer really needs “remove debris.” What-
ever this reason for being is, we need to identify this particular function, which we
call the task function. You must identify the task function from all of the functions
you have listed.
If you happen to find more than one task function, it is quite likely that you
have actually taken on two products. For example, a clock-radio has two task
functions: tell time and play music. However, you would be far better served by
breaking your analysis into two components — one for telling time, the other for
playing music. Alternatively, this product could be considered on a broader basis,
as a system — in which case the task function might be “inform user,” with subor-
dinate functions of “tell time” and “play music.”
In any event, once you have identified the task function, you will realize that
there are many functions other than the task function. Divide the remaining functions
by asking:
“Is the function required for the performance of the task function?”
If the answer to this question is yes, then the function can be termed essential.
If the answer is no, then the function can be considered enhancing. All functions
other than the task function must be either essential (necessary to the task function)
or enhancing. So, your next task is to divide all of the remaining functions into these
two general categories.
You can further divide the enhancing functions — the functions that are not
essential to the task function. Enhancing functions influence customer satisfaction
and purchase decisions. Enhancing functions always divide into four categories:
1. Ensure dependability
2. Ensure convenience
3. Please senses
4. Delight the customer*
* “Delight the customer” is actually quite rare — most enhancing functions fit one of the other three
categories. If you do find a “delight the customer” function, try comparing this with an “excitement”
feature in a Kano analysis; you should find that the function fits both descriptions.
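For teams that prefer to keep the growing card deck in a spreadsheet or a small script, the sketch below shows one way to hold the classification. It is purely illustrative and not part of the original method; the sample functions are borrowed from the mechanical pencil example that follows, and their category assignments are assumptions made for the demonstration.

from dataclasses import dataclass

# Enhancing sub-categories named in the text
ENHANCING = {"ensure dependability", "ensure convenience",
             "please senses", "delight the customer"}

@dataclass
class Functive:
    verb: str        # active verb
    noun: str        # measurable noun
    category: str    # "task", "essential", or one of the enhancing categories

    def label(self) -> str:
        return f"{self.verb} {self.noun}"

# Illustrative cards only; the category assignments are assumptions for the demo.
cards = [
    Functive("make", "marks", "task"),
    Functive("feed", "lead", "essential"),
    Functive("erase", "marks", "ensure convenience"),
    Functive("display", "advertising", "please senses"),
]

# Sizing advice from the text: exactly one task function, 20 to 30 cards at most.
assert sum(1 for c in cards if c.category == "task") == 1, "split the analysis"
assert len(cards) <= 30, "the viewpoint is probably too detailed"

for c in cards:
    kind = f"enhancing ({c.category})" if c.category in ENHANCING else c.category
    print(f"{c.label():<22} {kind}")

The two assertions simply automate the sizing advice given above: one task function, and no more than 20 to 30 cards overall.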
[Figure: a mechanical pencil (0.7 mm) with its parts labeled: lead, barrel, clip, and eraser, with "EAT AT JOE'S" printed on the barrel. The callouts identify functions such as feed lead, reposition lead, support lead, locate eraser, position tube, generate force, and hold eraser.]
The functions identified for each component are:
Lead:
• Make marks
• Maintain point
Eraser:
• Erase marks
• Locate eraser
Barrel:
• Support tube
• Support lead
• Position lead
• Protect lead
• Position tube
• Position eraser
• Show name
• Display advertising
• Convey message
• Fit hand
• Enhance feel
• Provide instructions
Clip:
• Generate force
• Position clip
• Retain clip
And, finally, the function diagram (only one possibility among many, many
different results) may look like Figure 2.2.
[FIGURE 2.2 Function diagram for the mechanical pencil. The basic functions (make marks, maintain point, store lead, position lead, feed lead, re-position lead, support lead, support tube, position tube) and the supporting functions are arranged in chains under the headings ensure convenience (erase marks, hold eraser, locate eraser, retain clip, fit pocket, generate force, position clip), ensure dependability (provide instructions), please the senses (enhance feel, fit hand), and delight the customer (display advertising, show name, convey message).]

[Figure: the flow chart symbols used in this chapter: input, decision, document, delay, significant move, and incidental move.]
(If you confuse your viewpoint while developing a flow chart, you will quickly become confused about the process functions.)
Inputs and outputs are the easiest steps to understand. You start with an input
and you end with an output. A document may be a special kind of input or output —
it can appear at the beginning, at the end, or during the overall process. The process
box is the most common box; it describes transformations that occur within the
process. Decisions are represented by a diamond shape, and an inspection step (in
the shape of a stop sign) is just a special kind of decision. If you delay a process,
you use a yield sign. If you store information, you use an inverted yield sign — a pile.
Movement is also important. If a move is incidental, you tie the associated boxes
together with a simple arrow. However, if a movement is complex (say, sending a
courier package to Hong Kong as opposed to handing it to your next-cubicle neigh-
bor), then you may have a special transformation or process step that we call a
“significant” move, i.e., a large horizontal arrow.
Let us look at a simple process for handling complaints. Your office deals with
customer complaints, but you have a local factory (where your office is) and a factory
in Japan. How you handle a complaint might look like Figure 2.4.
This flow chart shows many of the symbols noted above, but it is not the only
way that the process could be flow charted. However, if the team that developed the
chart (once again, a team approach is likely to be the most effective technique) can
reach a high level of consensus, then the communication of these ideas to others
will be powerful and comprehensive.
Now that the basics of 10 × 10 (ten steps or fewer using ten or fewer symbols)
are apparent, it becomes possible to construct a “hierarchy” of flow charts that will
fill in missing details that may have been skirted with the “10 step limit.”
The next step is to create a new 10 × 10 flow chart for each box in the top level
flow chart that requires additional explanation to reach the desired level of detail.
These next flow charts (typically three to five of the boxes require additional detail)
make up the second level flow charts. Wherever necessary, go to another level of
flow charts; continue creating 10 × 10 flow charts until you have a hierarchy of flow
charts that directly addresses all of the details that you feel are important.
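A hierarchy of 10 × 10 charts is also easy to keep honest with a few lines of code. The sketch below is a hypothetical illustration (the step names are invented): each chart is a list of steps, a step may point to a child chart at the next level of detail, and a small check flags any chart that exceeds the ten-step limit.

# Each chart is a list of steps; a step may point to a child chart that expands
# it at the next level of detail. Step names here are invented for illustration.
top_level = [
    {"step": "Receive complaint"},
    {"step": "Route complaint",
     "detail": [{"step": "Identify responsible factory"},
                {"step": "Log complaint into database"}]},
    {"step": "Notify factory"},
    {"step": "Close complaint"},
]

def check_ten_by_ten(chart, path="top level"):
    """Verify that no chart anywhere in the hierarchy has more than ten steps."""
    if len(chart) > 10:
        raise ValueError(f"'{path}' has {len(chart)} steps; break it into sub-charts")
    for box in chart:
        if "detail" in box:
            check_ten_by_ten(box["detail"], f"{path} > {box['step']}")

check_ten_by_ten(top_level)   # silent if every level respects the ten-step limit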
Finally, for each process box on each flow chart, you will have a process purpose.
Why did you do this step? Simple — you had one (or possibly two) purposes in
mind when you designed this step into your process. Process purpose can be easily
described using the language of function. Once again, you must use an active verb
and a measurable noun.
[FIGURE 2.4 Flow chart for handling a customer complaint: a phone notice of the complaint is logged into a database; a decision determines whether the local or the overseas factory is responsible; information is compiled for a complaint notice; the local factory is notified directly, while notice to the overseas factory is sent by courier to Japan; the complaint then goes to a pending file.]
Often, a team can move directly to listing process functions from the flow charts.
However, especially in manufacturing, it is common for the level of detail hidden
in flow charts to be large, especially with intricate or subtle fabrication procedures.
You may need to use an additional tool, called a "characteristic matrix," for teasing the "function" information out of a flow chart.
A characteristic matrix is a reasonably simple analysis tool. The purpose of the
matrix is to show the relationships between product characteristics and manufactur-
ing steps. The importance of product characteristics in this matrix is significant; by
considering the impact of a manufacturing step on product characteristics, we again
focus our attention on customer requirements. Too often, manufacturing emphasis
turns inward; it is critical that the focus be constantly directed at customers. Of
course, there are “internal” customers as well. It is certainly important that interme-
diate characteristics, necessary for facilitating additional fabrication or assembly
activities, be included in the analysis of function.
For example, a simple machining process could have the characteristic matrix
shown in Table 2.1.
In this example, a simple machining step could be shown on a process flow
chart with a process box that describes the machining operation as “CNC Lathe” or
something similar. However, the lathe operation creates several important dimen-
sions, or product characteristics, that are needed to meet customer expectations.
These characteristics are sufficiently varied and complex that an additional level of
detail is necessary. Some of these characteristics are important to the end customer;
some are important to internal or “next step” process stations.
For this example, the three left-hand columns establish important functional
information. The product characteristic is essentially the "measurable noun" of a functive; the target value and tolerance columns quantify it.
TABLE 2.1
Characteristic Matrix for a Machining Process

Product Characteristic     Target Value     Tolerance     Process Operations: Lathe Turn 10, Lathe Turn 20, Face Cut 30, Deburr 40, Cut Radius 50
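In practice the matrix is often maintained as a simple worksheet. The sketch below is a hypothetical fragment: only the operation names and numbers follow Table 2.1, while the characteristics, targets, tolerances, and markings are invented to show the mechanics of relating product characteristics to process operations.

# Process operations from Table 2.1 (operation number -> description)
operations = {10: "Lathe Turn", 20: "Lathe Turn", 30: "Face Cut",
              40: "Deburr", 50: "Cut Radius"}

# Hypothetical rows: characteristic -> (target value, tolerance, operations affecting it)
matrix = {
    "Outer diameter": (25.0, 0.05, {10, 20}),
    "Face flatness":  (0.0,  0.02, {30}),
    "Edge condition": (None, None, {40, 50}),
}

# Forward question: which operations affect each characteristic?
for characteristic, (target, tol, ops) in matrix.items():
    names = ", ".join(f"{operations[o]} {o}" for o in sorted(ops))
    print(f"{characteristic}: {names}")

# Reverse question: which characteristics does operation 10 touch?
print("Operation 10 affects:",
      [c for c, (_, _, ops) in matrix.items() if 10 in ops])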
While we do not intend to explain these techniques fully in this context (however,
they will be explained later), we would like to address the usefulness of function
concepts in these methodologies. In these discussions, we are assuming that you
have a passing or even detailed familiarity with these tools. If not, you may wish
to pass over to the discussion of QFD later in this chapter or to Chapters 6 and 12
for lengthy discussions of FMEA and VA.
For Quality Function Deployment, the most challenging issue is the one that we
have just explored: how can one determine the functions that must be analyzed for
deployment? In other forms, this is the same question facing practitioners in FMEA
and VA. Clearly, the product flow diagram provides several instrumental techniques
for improving these activities.
A major difficulty in QFD is the often overwhelming complexity of the “House
of Quality” approach. Constructing the first house, using conventional QFD tech-
niques, is often the start of the complexity. Many different customer “wants” are
listed. This is occasionally done as a “pre-planning” matrix. Moreover, the linguistic
construction for these “wants” is undisciplined and subjective.
Similarly, in FMEA, the initial list of failure modes is difficult to obtain. In VA,
determining the “baseline” value assessment can also be difficult.*
The techniques for developing a function diagram, especially the informal sug-
gestions about “sizing” a project, can be very helpful in this regard. QFD, like FMEA
and VA, typically fails to deliver the results expected because the project selected
is too complex. A QFD study on a car or truck, for example, could easily contain
hundreds of thousands of pages of information. That is not to say that the information
in this study would not be valuable or that it should not be done; the issue is how
complexity of this type should be dealt with.
If you start with a systemwide view and construct a function diagram of the
limited size previously discussed (20–30 functions maximum, even fewer are better),
then this will provide a first level in a “hierarchy” of function diagrams. Subsequent
analysis of various subsystems, then components and parts, and finally processes
will complete the analysis. While the end result (for a car) would conceivably be of
the same magnitude, the belief that all of the work must be done within the same
team or by the same organization would be quickly abandoned. Moreover, the
knowledge and understanding that is developed is generated at the hierarchical level
(in the supply chain) of greatest importance, utility, and impact.
Moreover, using the “functive” combination of active verbs and measurable
nouns will assist in making QFD a useful tool. The vague, imprecise, or even
confusing descriptions of function that are often used in QFD contribute to the
difficulty in usage.
A vehicle planning team may carry out a QFD study on the overall vehicle,
assessing the major issues regarding the vehicle; these could include size, styling
motifs, performance themes, and target markets. Subsequently, a study of the pow-
ertrain (engine, transmission, and axles) could be completed by another team. The
engine itself could then be divided into major components: block, pistons, electronic
controls, and so forth. Ultimately, suppliers of major and minor components alike
would be asked to carry out QFD studies on each element. The multiplicity of
information is still present, but it is no longer generated in some centralized form.
This means that accessibility, usefulness, and the likelihood of beneficial deployment
of the findings are much greater.**
* In Value Analysis, the Function Analysis System Technique or “FAST,” a close cousin of the function
diagram, is typically used to establish the initial functional baseline for value calculations.
** If the reader sees an “echo” of the hierarchy of flow charts, this is not coincidental.
As an added benefit, starting QFD using this approach provides benefits in the
completion of FMEA and VA studies, since a consistent set of functions will be used
as a basis for each technique. We will next consider each of these in turn. We will
start with FMEA, because the importance of function in this methodology is not
widely understood or appreciated.
In FMEA, determining all of the appropriate failure modes is usually a great
challenge. This obstacle is reflected in the widespread difficulty in understanding
what is a failure mode and what is an effect. For example, the effect “customer is
dissatisfied” is often found in FMEA studies. While this is likely to be true, it is an
effect of little or no worth in developing and improving products and processes.
Similarly, failure modes are often confused with effects. This can be illustrated
with another common product, a disposable razor. How can we determine a com-
prehensive list of failure modes? Simply start with an appropriate function diagram.
For each function, we need to consider how these functions can go astray. There are
a limited number of ways that this can occur, all related to function. If you consider
the completion of a function (at the desired extent) to be the absence of failure, then
pose these questions about each function in the function diagram: Is the function missing altogether? Is it performed only partially, or to a degraded extent? Is it performed intermittently? Is an additional, unwanted function performed?
Each of these conditions establishes a possible failure mode. For the disposable
razor, the task function is generally understood to be “cut hair” (not, of course, to
shave). The failure mode that is most obvious is an additional unwanted function,
namely “cut skin.” Notice that the mode of failure is not “feel pain” or “bleed;”
these are failure effects.
To make use of these ideas in the context of the function diagram, we must next
define “terminus” functions. Terminus functions are simply those functions at the
right hand (or “how”) end of any function chain in the function diagram. In the
mechanical pencil example, two terminus functions would be “position eraser” and
“locate eraser.” Why do you position and locate the eraser? To hold the eraser. Why
do you hold the eraser? To erase marks. Why do you erase marks? To ensure
correctness. Since this chain is one of enhancing functions, we do not directly modify
the task function.
Start your analysis of failure modes by testing each of the possible conditions
listed above against the terminus functions. After you have completed the terminus
functions, move one step in the “why” direction. However, as you move to the left,
you will find that you frequently discover the same modes for the other functions.
Since the function chain shows the interrelated nature of the functions, this should
not be surprising. As a rule, you will get most (if not all) of the relevant failure
modes from the terminus functions.* So, starting with the terminus functions will
speed your work and reduce redundancy.
By working through each function chain in the function diagram, a comprehen-
sive list of failure modes can be developed. This listing of failure modes then alters
the approach to FMEA substantially; modes are clear, and cause-effect relationships
are easier to understand. Moreover, developing FMEA studies using function dia-
grams that were originally constructed as part of the QFD discipline assures that
product development activities continue to reflect the initial assumptions incorpo-
rated in the conceptual planning phase of the development process.**
Once you have identified failure modes in association with functions, the remain-
der of the FMEA study — though still involved — is rather mechanical. For each
failure mode, you must examine the likely effects that will result from this mode.
With a clear mode statement, this is much simpler, and you are much less likely to
confuse mode and effect issues. The effects can then be rated for severity using an
appropriate table. With the effects in hand, causes can next be established and the
occurrence rating estimated. Notice that this sequence of events makes the confusion
of cause and effect much more difficult; in many cases, the logical improbability of
reversal of cause and effect statements is so obvious that you simply cannot reverse
these two issues.
Finally, you can conclude the fundamental analysis with an evaluation of controls
and detection. Once again, starting with a statement of function makes this clearer
and less subject to ambiguity. Understanding the progression from function to mode
to cause to effect sets the stage. What is it that you expect to detect? Is it a mode?
In practice, detecting modes is extremely unlikely. You are more likely to detect
effects. However, are effects what you want to detect? Once an effect is seen the
failure has already occurred, and costs associated with the failure must already be
absorbed.
Let us return to the disposable razor to understand this. If the failure mode is
“cut skin,” we must recognize that detecting “cut skin” is extremely difficult. You
are much more likely to detect an effect — namely, pain or bleeding. Now, we
recognize that we really do not want to detect failures at this point. Instead, we need
to ask what are the possible causes of this failure mode. In this simple example, two
different causes are readily apparent. From a design standpoint, the blades of the
razor could be designed at the wrong angle to the shaver head. Even if the manu-
facturing were 100% accurate, a design that sets the blade angle incorrectly would
have terrible consequences. On the other hand, the design could be correct; the blade
angle could be specified at the optimum angle, but it could be assembled at an
incorrect angle. Detection would best be aimed at testing the design angle*** and
* This is even more true for a system FMEA than for a design FMEA study.
** Of course, any change that is made in concept during development activities requires a continuous
updating of the function diagrams under consideration.
*** In the ISO and QS-9000 systems, we can think of this in terms of design verification.
at controlling the manufacturing process so that the optimum design angle would
be repeatable (within limits) in production.*
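One way to keep the progression from function to mode to effect to cause to control orderly is to record each row of the study as a small structured entry, as in the sketch below. It is purely illustrative: the razor entries paraphrase the example above, all ratings are invented, and the severity times occurrence times detection product is the familiar risk priority number, used here only as one common way to sort entries, not as the scoring scheme this book prescribes.

from dataclasses import dataclass

@dataclass
class FmeaEntry:
    function: str       # the functive the mode is derived from
    failure_mode: str   # how that function goes astray
    effect: str
    severity: int       # rated 1 to 10 from an appropriate table
    cause: str
    occurrence: int     # rated 1 to 10
    control: str        # detection or prevention aimed at the cause
    detection: int      # rated 1 to 10

    def risk_priority(self) -> int:
        # One common prioritization: severity x occurrence x detection
        return self.severity * self.occurrence * self.detection

entries = [
    FmeaEntry("cut hair", "cut skin", "bleeding", 7,
              "blade angle specified incorrectly (design)", 3,
              "design verification of the blade angle", 4),
    FmeaEntry("cut hair", "cut skin", "bleeding", 7,
              "blade assembled at an incorrect angle (process)", 5,
              "process control of the assembly angle", 3),
]

for e in sorted(entries, key=FmeaEntry.risk_priority, reverse=True):
    print(f"{e.failure_mode} / {e.cause}: priority {e.risk_priority()}")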
Finally, the Value Analysis process can also make use of the function diagrams
that serve in the QFD and FMEA processes. In VA, the essence of the technique is
the association of cost with function. Once this is accomplished, the method of
functional realization can be considered in a variety of “what if” conditions. If there
is a comprehensive statement of function, VA teams can be reasonably sure that
ongoing value assessments, based on the ratio of function to cost, have a consistent
and rational foundation. Moreover, the teams have a much higher confidence that
these “what if” questions take customer issues into proper account.
Too often, VA activities are carried out as if function is well understood and
only cost matters. In too many cases, no function analysis is even performed. Despite
the long-standing cautions against this, this alluring shortcut is often taken to save
time, money, or both. The shortcomings of skipping function analysis in VA are not
trivial. Many of the disappointing results obtained with the VA methodology have probably occurred because function was not fully and comprehensively understood.
At a very fundamental level, how can a value ratio analysis be performed without
a full statement of function? This is like calculating a return on investment without
knowing the investment. Moreover, the analysis of value ratio can be misleading if the
function issue is not well defined. It is easy to reduce cost. You simply eliminate features
and functions from a product. Soon, you will not even be able to accomplish the task
function. (In practice, “functionless” VA studies typically eliminate important enhancing
functions that make a critical difference in the marketplace, and customers consequently
pronounce unfavorable judgments on “decontented” products. VA then gets the blame.)
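The value ratio itself is simple arithmetic, which the sketch below illustrates with invented worth and cost figures; they are hypothetical and serve only to show why a "decontented" design can look attractive when function is ignored.

# Hypothetical worth and cost figures, used only to show the mechanics.
baseline = {
    "make marks":          {"worth": 10.0, "cost": 0.30},
    "erase marks":         {"worth": 4.0,  "cost": 0.10},
    "display advertising": {"worth": 3.0,  "cost": 0.05},
}

def value_ratio(design):
    worth = sum(f["worth"] for f in design.values())
    cost = sum(f["cost"] for f in design.values())
    return worth / cost

# "What if" study: delete the enhancing function entirely (a decontenting move).
decontented = {name: f for name, f in baseline.items() if name != "display advertising"}

print(f"baseline:    {value_ratio(baseline):.1f}")      # 17.0 / 0.45 = 37.8
print(f"decontented: {value_ratio(decontented):.1f}")   # 14.0 / 0.40 = 35.0

With the function's worth counted, the ratio actually falls; a cost-only comparison would have reported nothing but savings.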
Since value studies typically occur subsequent to QFD and FMEA in product
development activities, the difficulty of understanding function is eliminated if
function is fully defined and even specified during these earlier activities. By using
function as the basis for product and manufacturing activities, a degree of focus and
understanding of customer wants and needs is preserved not only during VA activities
but throughout the product life cycle.
KANO MODEL
The preferred tool for understanding the "function" is the Kano model. A typical framework of the model is shown in Figure 2.5. The Kano model identifies three aspects of quality, each having a different effect on customer satisfaction:
1. Basic quality: attributes the customer takes for granted
2. Performance quality: the "more is better" principle
3. Excitement quality: the "wow" factor
* This is the issue of “process control” in the ISO and QS-9000 systems — in QS-9000, it goes to the
heart of the control plan itself. Also, this is a simplified example. In more detail, the failure mode of
“cut skin” can even occur when the blade angle is correct both in design and execution. A deeper
examination of these issues quickly leads to the consideration of “robustness” in the design itself.
The more we find out about these three aspects from the customer, the more
successful we are going to be in our DFSS venture. (Caution: It is imperative to
understand that the customer talks in everyday language, and that this language may
or may not be acceptable from a design perspective. It is the engineer’s responsibility
to translate the language data into a form that may prove worthwhile in requirements
as well as verification. A good source for more detailed information is the 1993
book by Shoji.)
BASIC QUALITY
“Basic” quality refers to items that the customer is dissatisfied with when the product
performs poorly but is not more satisfied with when the product performs well.
Fixing these items will not raise satisfaction beyond a minimum point. These items
may be identified in the Kano model as in Figure 2.6.
Some sources for the basic quality characteristics are: things gone right, things
gone wrong, surrogate data, surveys, warranty, and market research.
PERFORMANCE QUALITY
“Performance” quality refers to items that the customer is more satisfied with more
of. In other words, the better the product performs the more satisfied the customer.
The worse the product performs, the less satisfied the customer. Attributes that can
be classified as linear satisfiers fall into this category. A typical depiction is shown
in Figure 2.7.
Some sources for performance quality characteristics are: internal satisfaction anal-
ysis, customer interviews, corporate targets/goals, competition, and benchmarking.
EXCITEMENT QUALITY
“Excitement” quality refers to items that the customer is more satisfied with when
the product is more functional but is not less satisfied with when it is not. This is
the area where the customer can be really surprised and delighted. A typical depiction
of these attributes is shown in Figure 2.8.
Some sources for excitement quality characteristics are: customer insight, technology, and interview comments such as "high percentage" or "better than expected."
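To make the three response shapes concrete, here is a small, purely stylized model. The curve forms are a common schematic reading of the Kano diagram, not a calibrated customer model, and the example attributes are the ones shown in Figures 2.6 through 2.8.

def satisfaction(kind, functionality):
    """Stylized Kano response; both scales run from -1 (poor/absent) to +1 (excellent)."""
    if kind == "basic":
        return min(0.0, functionality)   # absence hurts, presence does not delight
    if kind == "performance":
        return functionality             # linear satisfier
    if kind == "excitement":
        return max(0.0, functionality)   # presence delights, absence is not punished
    raise ValueError(kind)

examples = [("brakes", "basic"),
            ("fuel economy", "performance"),
            ("style", "excitement")]

for attribute, kind in examples:
    poor, good = satisfaction(kind, -1.0), satisfaction(kind, 1.0)
    print(f"{attribute:<14} {kind:<12} poor: {poor:+.1f}   good: {good:+.1f}")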
[FIGURE 2.6 Basic quality: customer satisfaction versus product functionality for taken-for-granted attributes such as brakes, horn, and windshield wipers.]
[FIGURE 2.7 Performance quality: linear satisfiers such as quiet gear shift and fuel economy.]
[FIGURE 2.8 Excitement quality: surprise and delight attributes such as style, ride, and features.]
Items that are identified as surprise/delight candidates are very fickle in the sense
that they may change without warning. Indeed, they become expectations. The
engineer must be very cautious here because items that are identified as excitement
items now may not predict excitement at some future date. In fact, we already know
that over time the surprised/delighted items become performance items, the perfor-
mance items become basic, and the basic items become inherent attributes of the
product. A classic example is the brakes of an automobile. The traditional brakes
were the default item. However, when disc brakes came in as a new technology,
they were indeed the excitement item of the hour. They were replaced, however,
with the ABS brake system, and now even this is about to be displaced by the electronic brake system. This evolution may be seen in the Kano model in Figure 2.9.
FIGURE 2.9 Excitement quality depicted over time in the Kano model.
Developing these “surprised and delighted” items requires activities that gain
insight into the customers’ emotions and motivations. It requires an investment of
time to talk with and observe the customer in the customer’s own setting, and the
use of the potential product. Above all, it requires the ability to read the customer’s
latent needs and unspoken requirements.
Is there a way to sustain the delight of the customer? We believe that there is.
Once the attributes have been identified, a robust design must be initiated with two
objectives in mind.
These two steps will create an outstanding reliability and durability reputation.
[Figure: the House of Quality. Customer WHATs and their importance ratings form the left side; the technical HOWs run across the top beneath the correlation matrix "roof"; the relationship matrix fills the center; competitive assessments appear to the right and below; and rows for technical difficulty, "how much" target values, importance ratings, and important control items complete the bottom of the house.]
propelled the Japanese to find not only a tool but a planning tool that implements business objectives, the right application of which is product development. QFD is defined as a systematic approach for translating customer wants and requirements into company-wide requirements. This translation takes place at each stage, from research and development to engineering and manufacturing to marketing, sales, and distribution. The QFD system concept is based on four key documents.
BENEFITS OF QFD
QFD certainly appears to be a sensible approach to defining and executing the myriad
of details embodied in the product development process, but it also appears to be a
great deal of extra work. What is it really worth? Setting the logical arguments aside,
there are a number of demonstrated benefits resulting from the use of QFD:
• Demonstrated results
• Preservation of knowledge
• Fewer startup problems
• Lower startup cost
• Shorter lead time
• Warranty reduction
• Customer satisfaction
• Marketing advantage
All of the above translate into significant marketing advantages, that is, speedy
introduction of products that satisfy customers without problems.
In addition to all the benefits already mentioned, Table 2.2 shows some of the
benefits from the total development process perspective, which is a synergistic result
starting with QFD.
TABLE 2.2
Benefits of Improved Total Development Process
(Each entry lists a cash drain, followed by the old process behind it and the improved process remedy.)

Technology push, but where's the pull?
  Old process: Concepts with no needs, needs with no concept.
  Improved process: Technology strategy and technology transfer bring the right technology to the product.
Disregard for voice of the customer
  Old process: The voice of the engineer and other corporate specialists is emphasized.
  Improved process: House of Quality and all steps of QFD deploy the voice of the customer throughout the process.
Eureka concept
  Old process: Mad dash with a singular concept, usually vulnerable.
  Improved process: Pugh process converges on consensus and commitment to an invulnerable concept.
Pretend designs
  Old process: Initial design is not production intent and emphasizes newness rather than superior design.
  Improved process: Two-step design and design competitive benchmarking lead to superior design.
Pampered product
  Old process: Make it look good for demonstration.
  Improved process: Taguchi optimization positions the product as far as possible away from potential problems.
Hardware swamps
  Old process: A large number of highly overlapped prototype iterations leaves little time for improvement.
  Improved process: Only four iterations, each planned to make the maximum contribution to optimization.
Here is the product; where is the factory?
  Old process: Product is developed, then the factory reacts to it.
  Improved process: One total development process for product and production capability.
We have always made it this way
  Old process: Old process parameters used repetitiously without improvement.
  Improved process: Taguchi process parameter design improves quality and reduces cycle times.
Inspection
  Old process: Inspection creates scrap, rework, adjustments, and field quality loss.
  Improved process: Taguchi's optimal checking and adjusting minimizes the costs of inspection.
Give me my targets, let me do my thing
  Old process: Lack of teamwork.
  Improved process: Teamwork and competitive benchmarking beat contracts and targets; lead the process, do not manage problems.
1. Change is uncomfortable.
Counterpoint: There is an old saying, “If we do what we have done, we
will get what we have.” To truly improve, we must explore new pat-
terns of logical thinking and let go of outdated ways. We must be will-
ing to change.
2. Success is not realized until the product is released.
Counterpoint: The truest measure of customer satisfaction comes after the
product or service is introduced. It is easy to lose sight of improvements
that do not materialize until years after the improvement effort. We
would be remiss not to seek ways to achieve the end goal of customer
satisfaction in our design and development process.
3. QFD is a long process.
Counterpoint: QFD saves the team’s time and resources with new ap-
proaches and tools. Avoiding multiple redesigns and multiple prototype
levels in response to customer input recovers the time spent on QFD.
The upstream time saves multiples of downstream time.
4. It is not as much fun as “fire fighting.”
Counterpoint: Finding and fixing problems may be personally gratifying.
It is the stuff from which heroes/heroines are made. But emergencies
are not in the company’s best interest and certainly not in the custom-
er’s interest. Management must provide a system that rewards problem
prevention as well as problem solving.
5. The relation to the traditional product development process is not understood.
Counterpoint: QFD replaces some traditional product design and develop-
ment events, i.e., target setting and functional assumptions, and thereby
does not add time.
6. It is difficult to accept customer input when the “voice of the engineer”
contradicts.
Counterpoint: Engineering has delivered about 80% customer satisfac-
tion; getting to 90–95% is a tough challenge requiring enhancements to
current methods for achieving quality.
PROCESS OVERVIEW
The easiest way to think of QFD is to think of it as a process consisting of linked
spreadsheets arranged along a horizontal (Customer) axis and intersecting vertical
(Technical) axis. Important details include the following:
The steps listed above will result in the following QFD deliverables for the Customer
Axis:
Technical Axis
• Rank ordered list of key Technical System Expectations that when cor-
rectly targeted will satisfy Customer Wants at a strategically competitive
level
• Target values for key TSEs derived from technical competitive bench-
marking that correlate with customer’s competitive evaluations. These
target values aid program management in two ways:
a. By driving the product and engineering program toward integrated business and technical propositions that program management can prove
b. By managing the program team's performance at program completion
• Institutionalization of revised tests and standards tied to real world usage (customer dependent, of course): customer requirements, corporate engineering test procedures, and other documents, both generic and program specific, that support the organization's design verification system
• Shared responsibilities
• Interpretations
• Priorities
• Technical knowledge
• Long time experience
• Resource changes
• Communication
• Lots of work
It is precisely this complexity that all too often causes the product development
process to create a product that fails to meet the customer requirements. For example:
Customer requirements → Design requirements → Part characteristics → Manufacturing operations → Production requirements
QFD METHODOLOGY
QFD is accomplished through a series of charts that appear to be very complex.
They do contain a great deal of information, however. That information is both an
asset and a liability.
All the charts are interconnected to what is called the House of Quality because
of the roof-like structure at its top. Since this house is made up of distinct parts or
“rooms,” let us find the function of each part, so that we can comprehend what QFD
is all about — see Figure 2.10.
QFD begins with a list of objectives or the “what” that we want to accomplish —
see Figure 2.11. This is usually the voice of the customer and as such is very general,
vague, and difficult to implement directly. It is given to us in raw form, that is, in
the everyday language of the customer. (Example: “I don’t want a leaky window
when it rains.”)
For each what, we refine the list into the next level of detail by listing one or more
“hows” for each what. The hows are an engineering task. Figure 2.11 shows the rela-
tionship between the what and the how. Figure 2.12 shows that it is possible to have
an iterative process between the what and the how, with a possible refinement of the
“old how” into the “new what” and ultimately to generate a very good “new how.”
Even though this step shows greater detail than the original what list, it is by
itself often not directly actionable and requires further definition. This is accom-
plished by further refinement until every item on the list is actionable. This level is
important because there is no way of ensuring successful realization of a requirement
that no one knows how to accomplish. (Note: Remember that our level of refinement
within the how list may affect more than one how or what and can in fact adversely
affect one another. That is why the arrows in Figure 2.11 are going in multiple
directions.)
To reduce possible confusion we represent the what and how in the following
manner. The enclosed matrix becomes the relationships. The relationships are shown
at the intersections of the what and how. Some common symbols are:
● Very strong relationship
□ Medium relationship
△ Weak relationship
[Figures 2.11 through 2.13: WHAT/HOW matrices. In the Figure 2.13 example, WHATs with importance ratings of 4, 1, and 2 are related to the HOWs, and the importance ratings of the HOWs work out to 42, 21, 33, 28, and 24, where a very strong relationship (●) counts 9, a medium relationship 3, and a weak relationship 1; for example, (4 × 9) + (2 × 3) = 42, and so on.]
Make sure that the ratings differentiate to the point of discrimination between each other. Remember, you are interested in great differentiation rather than a simple priority.
Functional
spec
VOC Requirements
analysis
Design
System
design Methods,
tools,
procedures
Where:
VOC = Voice of the customer Technical
assessment
Resource plan
Implementation
plan
FIGURE 2.15 The flow of information in the process of developing the final “House of
Quality.”
taken down to the work detail level of production requirements. The QFD process
is well suited to simultaneous engineering in which product and process engineers
participate in a team effort.) For more information on the cascading process of the
QFD methodology, see the Appendix.
So far, we have talked about the basic charts in the House of Quality, and as a
result we have gained much information about the problem at hand. However, there
are several useful extensions to the basic QFD charts that enhance their usefulness.
These are used as required based on the content and purpose of each particular chart.
One such extension is the correlation matrix.
The correlation matrix — see Figure 2.10 — is a triangular table often attached
to the “hows.” The purpose of such placement is to establish the correlation between
each “how” item, i.e., to indicate the strength of the relationship and to describe the
direction of the relationship. To do that, symbols are used; most commonly, separate symbols denote positive, strong positive, negative (X), and strong negative (#) correlations.
to our “in house” tests and standards but fail to achieve expected results in the hands
of our customers.
Why are we doing this? Basically, for two reasons:
Remember that the correlation must be related to real world usage from the
customer’s perspective. What and how items that are strongly related should also be
shown to relate to one another in the competitive assessment. If the correlation does
not agree, it may mean that we overlooked something very significant.
A third extension is the importance rating — see Figure 2.10. This is a mech-
anism for prioritizing efforts and making trade-off decisions for each of the whats
and hows. It is important to keep in mind that the values by themselves have no
direct meaning; rather, their meaning surfaces only when they are interpreted by
comparing their magnitudes. The importance rating is useful for prioritizing efforts
and making trade-off decisions. (Some of the trade-offs may require high level
decisions because they cross engineering group, department, divisional, or company
lines. Early resolution of trade-offs is essential to shorten program timing and avoid
non-productive internal iterations while seeking a nonexistent solution.) The rating
itself may take the form of numerical tables or graphs that depict the relative
importance of each what or how to the desired end result. Any rating scale will
work, provided that the scale is a weighted one. A common method is to assign
weights to each relationship matrix symbol and sum the weights, just as we did in
Figure 2.13. Another more technical way is the following:
$$w'_{\mathrm{function}_i} = \sum_j w_{y_j}\, r_{ij}$$

$$w_{\mathrm{function}_i} = \frac{5\, w'_{\mathrm{function}_i}}{\max_i \left( w'_{\mathrm{function}_i} \right)}$$

where $w_{y_j}$ is the importance of the jth what and $r_{ij}$ is the relationship weight linking it to the ith how.
[Figure: the worked example continued. For the hows with unnormalized importance ratings of 42, 21, 33, 28, and 24, the normalized "importance of how" values are 5, 2.5 (or 3), 3.9 (or 4), 3.3 (or 3), and 2.9 (or 3). For example, w′ for the first how is (4 × 9) + (2 × 3) = 42, and w = 5(42)/42 = 5, and so on.]
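The same arithmetic is easy to script. The sketch below uses the weight key above (very strong = 9, medium = 3, weak = 1) and a hypothetical three-column relationship matrix chosen so that the resulting ratings reproduce the 42, 21, and 33 of the worked example; the full matrix behind the original figure is not reproduced here.

# WHAT importances and relationship weights (very strong = 9, medium = 3, weak = 1).
# The matrix is a hypothetical fragment chosen so that the three columns yield the
# ratings 42, 21, and 33 from the worked example.
what_importance = [4, 1, 2]
relationship = [              # rows = whats, columns = hows
    [9, 3, 3],
    [0, 9, 3],
    [3, 0, 9],
]

# w'_i = sum over the whats of (importance of the what) x (relationship weight)
raw = [sum(w * row[j] for w, row in zip(what_importance, relationship))
       for j in range(len(relationship[0]))]

# w_i = 5 * w'_i / max(w')
top = max(raw)
normalized = [5 * r / top for r in raw]

print(raw)                                  # [42, 21, 33]
print([round(n, 1) for n in normalized])    # [5.0, 2.5, 3.9]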
Keep in mind that when you are addressing the "hows," you are in essence dealing with customer functionalities. Therefore, it is recommended that you design for the average, basing each function's importance on its capability to supply each original Y.
CUSTOMER REQUIREMENTS
⇓
PRODUCT
Let us call the process of translating these requirements into a viable product
the “product development process.” This process includes program planning, con-
cepting, optimization, development, prototyping, and testing, as well as the corre-
sponding manufacturing functions. Thus:
CUSTOMER REQUIREMENTS
⇓
PRODUCT DEVELOPMENT PROCESS
⇓
PRODUCT
• Trade-offs
• Shared responsibilities
• Interpretations
• Priorities
• Technical knowledge
• Long time frame
• Resource changes
• Communication
• Lots of work
[Figure: the four linked QFD phases: product planning (design requirements), part deployment (part characteristics), process planning (manufacturing operations), and production planning (production requirements).]
PART CHARACTERISTICS
⇓
MANUFACTURING OPERATIONS
⇓
PRODUCTION REQUIREMENTS
CONJOINT ANALYSIS
WHAT IS CONJOINT ANALYSIS?
We introduced conjoint analysis in Volume III of this series. Recall that conjoint
analysis is a multivariate technique used specifically to understand how respondents
develop preferences for products or services. It is based on the simple premise that
consumers evaluate the value of a product/service/idea (real or hypothetical) by
combining the separate amounts of value provided by each attribute.
It is this characteristic that is of interest in the DFSS methodology. After all, we
want to know the bundle of utility from the customer’s perspective. (The reader is
encouraged to review Volume III, Chapter 11.) So in this section, rather than dwelling
on theoretical statistical explanations, we will apply conjoint analysis in a couple
of hypothetical examples. The examples are based on the work of Hair et al. (1998)
and are used here with the publisher’s permission.
Factor          Levels
Ingredients     Phosphate-free, Phosphate-based
Form            Liquid, Powder
Brand name      HATCO, Generic brand
HATCO customers are then asked either to rank-order the eight stimuli in terms
of preference or to rate each combination on a preference scale (perhaps a 1-to-10
scale). We can see why conjoint analysis is also called “trade-off analysis,” because
in making a judgment on a hypothetical product, respondents must consider both
the “good” and “bad” characteristics of the product in forming a preference. Thus,
respondents must weigh all attributes simultaneously in making their judgments.
By constructing specific combinations (stimuli), the researcher is attempting to
understand a respondent’s preference structure. The preference structure “explains”
not only how important each factor is in the overall decision, but also how the
differing levels within a factor influence the formation of an overall preference
(utility). In our example, conjoint analysis would assess the relative impact of each
brand name (HATCO versus generic), each form (powder versus liquid), and the
different cleaning ingredients (phosphate-free versus phosphate-based) in deter-
mining the utility to a person. This utility, which represents the total “worth” or
overall preference of an object, can be thought of as based on the part-worths for
each level. The general form of a conjoint model can be shown as

(Total worth for product)ij...n = part-worth of level i for factor 1 + part-worth of level j for factor 2 + ... + part-worth of level n for factor m

where the product or service has m attributes, each having n levels. The product consists of level i of factor 1, level j of factor 2, and so forth, up to level n for factor m.
TABLE 2.3
Stimuli Descriptions and Respondent Rankings for Conjoint Analysis of Industrial Cleanser

Stimuli Descriptions (Form, Ingredients, Brand)     Respondent Rankings (Respondent 1, Respondent 2)
In our example, a simple additive model would represent the preference structure for the industrial cleanser as based on the three factors (utility = brand effect + ingredient effect + form effect). The preference for a specific cleanser product, say HATCO phosphate-free powder, can then be calculated directly by summing the part-worth values for HATCO, phosphate-free, and powder.
AN EMPIRICAL EXAMPLE
To illustrate a simple conjoint analysis, assume that the industrial cleanser
experiment was conducted with respondents who purchased industrial supplies. Each
respondent was presented with eight descriptions of cleanser products (stimuli) and
asked to rank them in order of preference for purchase (1 = most preferred; 8 = least
preferred). The eight stimuli are described in Table 2.3, along with the rank orders
given by two respondents.
As we examine the responses for respondent 1, we see that the ranks for the
stimuli with the phosphate-free ingredients are the highest possible (1, 2, 3, and 4),
whereas the phosphate-based product has the four lowest ranks (5, 6, 7, and 8).
Thus, the phosphate-free product is much more preferred than the phosphate-based
cleanser. This can be contrasted to the ranks for the two brands, which show a
mixture of high and low ranks for each brand. Assuming that the basic model (an
additive model) applies, we can calculate the impact of each level as differences (deviations) from the overall mean ranking. (Readers may note that this is analogous to multiple regression with dummy variables or ANOVA.) For example, the average ranks for the two cleanser ingredients (phosphate-free versus phosphate-based) for respondent 1 are 2.5 and 6.5, respectively, against an overall average rank of 4.5 (see Table 2.4).
• Step 1: Square the deviations and find their sum across all levels.
• Step 2: Calculate a standardizing value that is equal to the total number
of levels divided by the sum of squared deviations.
• Step 3: Standardize each squared deviation by multiplying it by the stan-
dardizing value.
• Step 4: Estimate the part-worth by taking the square root of the standard-
ized squared deviation.
Let us examine how we would calculate the part-worth of the first level of ingredients (phosphate-free) for respondent 1. The deviation for this level is –2.0 (its average rank of 2.5 minus the overall average of 4.5); squared, it equals 4.0. The squared deviations across all six levels sum to 10.5. The number of levels is six (three factors with two levels apiece). Thus, the standardizing value is calculated as .571 (6/10.5 = .571). The squared deviation for phosphate-free (4.0; remember that we reverse signs, so the deviation becomes +2.0) is then multiplied by .571 to get 2.284 (4.0 × .571 = 2.284). Finally, to calculate the part-worth for this level, we take the square root of 2.284, for a value of 1.511. This process yields part-worths for each level for respondents 1 and 2, as shown in Table 2.5.
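Because every number in the example can be traced back to the rank orders, the four steps above are easy to reproduce in a few lines. The sketch below is a minimal illustration; the assignment of ranks to stimuli follows Tables 2.4 and 2.6 for respondent 1, and the code recovers the part-worths and factor importances reported in Table 2.5.

import math

# Respondent 1's rank (1 = most preferred) for each stimulus, keyed by
# (form, ingredients, brand); the assignments follow Tables 2.4 and 2.6.
ranks = {
    ("Liquid", "Phosphate-free", "HATCO"): 1,
    ("Liquid", "Phosphate-free", "Generic"): 2,
    ("Powder", "Phosphate-free", "HATCO"): 3,
    ("Powder", "Phosphate-free", "Generic"): 4,
    ("Liquid", "Phosphate-based", "HATCO"): 5,
    ("Liquid", "Phosphate-based", "Generic"): 6,
    ("Powder", "Phosphate-based", "HATCO"): 7,
    ("Powder", "Phosphate-based", "Generic"): 8,
}

factors = {"Form": ["Liquid", "Powder"],
           "Ingredients": ["Phosphate-free", "Phosphate-based"],
           "Brand": ["HATCO", "Generic"]}

overall_avg = sum(ranks.values()) / len(ranks)              # 4.5

# Reversed deviation of each level's average rank from the overall average
# (reversed so that better, i.e. lower, ranks give positive deviations).
deviation = {}
for position, levels in enumerate(factors.values()):
    for level in levels:
        level_ranks = [r for combo, r in ranks.items() if combo[position] == level]
        deviation[level] = -(sum(level_ranks) / len(level_ranks) - overall_avg)

# Steps 1 through 4: standardize the squared deviations and take square roots.
std_value = len(deviation) / sum(d * d for d in deviation.values())     # 6 / 10.5
part_worth = {lvl: math.copysign(math.sqrt(d * d * std_value), d)
              for lvl, d in deviation.items()}

# Factor importance: range of part-worths within a factor over the total range.
ranges = {f: abs(part_worth[a] - part_worth[b]) for f, (a, b) in factors.items()}
total_range = sum(ranges.values())
importance = {f: r / total_range for f, r in ranges.items()}

for level, pw in part_worth.items():
    print(f"{level:<16} part-worth {pw:+.3f}")   # e.g. Phosphate-free +1.512 (Table 2.5: 1.511, rounding)
for factor, share in importance.items():
    print(f"{factor:<12} importance {share:.1%}")   # Form 28.6%, Ingredients 57.1%, Brand 14.3%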
Because the part-worth estimates are on a common scale, we can compute the
relative importance of each factor directly. The importance of a factor is represented
by the range of its levels (i.e., the difference between the highest and lowest values)
divided by the sum of the ranges across all factors. For example, for respondent 1, the ranges are 1.512 [.756 – (–.756)], 3.022 [1.511 – (–1.511)], and .756 [.378 – (–.378)]. The sum total of the ranges is 5.290. The relative importance of form, ingredients, and brand is therefore 28.6 percent, 57.1 percent, and 14.3 percent, respectively (1.512/5.290, 3.022/5.290, and .756/5.290), as shown in Table 2.5.
TABLE 2.4
Average Ranks and Deviations for Respondents 1 and 2
Factor/Level     Ranks Across Stimuli     Average Rank of Level     Deviation from Overall Average Rank

Respondent 1
Form
Liquid 1, 2, 5, 6 3.5 –1.0
Powder 3, 4, 7, 8 5.5 +1.0
Ingredients
Phosphate-free 1, 2, 3, 4 2.5 –2.0
Phosphate-based 5, 6, 7, 8 6.5 +2.0
Brand
HATCO 1, 3, 5, 7 4.0 –.5
Generic 2, 4, 6, 8 5.0 +.5
Respondent 2
Form
Liquid 1, 2, 3, 4 2.5 –2.0
Powder 5, 6, 7, 8 6.5 +2.0
Ingredients
Phosphate-free 1, 2, 5, 7 3.75 –.75
Phosphate-based 3, 4, 6, 8 5.25 +.75
Brand
HATCO 1, 3, 7, 8 4.75 +.25
Generic 2, 4, 5, 6 4.25 –.25
TABLE 2.5
Estimated Part-Worths and Factor Importance for Respondents 1 and 2
Factor/Level     Reversed Deviation (a)     Squared Deviation     Standardized Deviation (b)     Estimated Part-Worth (c)     Range of Part-Worths     Factor Importance (d)
Respondent 1
Form
Liquid +1.0 1.0 +.571 +.756
Powder –1.0 1.0 –.571 –.756 1.512 28.6%
Ingredients
Phosphate-free +2.0 4.0 +2.284 +1.511
Phosphate-based –2.0 4.0 –2.284 –1.511 3.022 57.1%
Brand
HATCO +.5 .25 +.143 +.378
Generic –.5 .25 –.143 –.378 .756 14.3%
Sum of squared 10.5
deviations
Standardizing value .571
Sum of part-worth 5.290
ranges
Respondent 2
Form
Liquid +2.0 4.0 +2.60 +1.612
Powder –2.0 4.0 –2.60 –1.612 3.224 66.7%
Ingredients
Phosphate-free +.75 .5625 +.365 +.604
Phosphate-based –.75 .5625 –.365 –.604 1.208 25.0%
Brand
HATCO –.25 .0625 –.04 –.20
Generic +.25 .0625 +.04 +.20 .400 8.3%
Sum of squared 9.25
deviations
Standardizing value .649
Sum of part-worth 4.832
ranges
(a) Deviations are reversed to indicate higher preference for lower ranks. The sign of the deviation is used to indicate the sign of the estimated part-worth.
(b) Standardized deviation equal to the squared deviation times the standardizing value.
(c) Estimated part-worth equal to the square root of the standardized deviation, carrying the sign of the reversed deviation.
(d) Factor importance equal to the range of a factor divided by the sum of the ranges across all factors, multiplied by 100.
TABLE 2.6
Predicted Part-Worth Totals and Comparison of Actual
and Estimated Preference Rankings
Stimuli Description (Form, Ingredients, Brand)     Part-Worth Estimates (Form, Ingredients, Brand, Total)     Preference Rankings (Estimated, Actual)
Respondent 1
Liquid Phosphate-free HATCO .756 1.511 .378 2.645 1 1
Liquid Phosphate-free Generic .756 1.511 –.378 1.889 2 2
Liquid Phosphate-based HATCO .756 –1.511 .378 –.377 5 5
Liquid Phosphate-based Generic .756 –1.511 –.378 –1.133 6 6
Powder Phosphate-free HATCO –.756 1.511 .378 1.133 3 3
Powder Phosphate-free Generic –.756 1.511 –.378 .377 4 4
Powder Phosphate-based HATCO –.756 –1.511 .378 –1.889 7 7
Powder Phosphate-based Generic –.756 –1.511 –.378 –2.645 8 8
Respondent 2
Liquid Phosphate-free HATCO 1.612 .604 –.20 2.016 2 1
Liquid Phosphate-free Generic 1.612 .604 .20 2.416 1 2
Liquid Phosphate-based HATCO 1.612 –.604 –.20 .808 4 3
Liquid Phosphate-based Generic 1.612 –.604 .20 1.208 3 4
Powder Phosphate-free HATCO –1.612 .604 –.20 –1.208 6 7
Powder Phosphate-free Generic –1.612 .604 .20 –.808 5 5
Powder Phosphate-based HATCO –1.612 –.604 –.20 –2.416 8 8
Powder Phosphate-based Generic –1.612 –.604 .20 –2.016 7 6
The estimated part-worths predict the preference order perfectly for respondent
1. This indicates that the preference structure was successfully represented in the
part-worth estimates and that the respondent made choices consistent with the
preference structure. The need for consistency is seen when the rankings for
respondent 2 are examined. For example, the average rank for the generic brand is
lower than that for the HATCO brand (refer to Table 2.4), meaning that, all things
being equal, the stimuli with the generic brand will be more preferred. Yet, examining
the actual rank orders, this is not always seen. Stimuli 1 and 2 are equal except for
brand name, yet HATCO is preferred. This also occurs for stimuli 3 and 4. However,
the correct ordering (generic preferred over HATCO) is seen for the stimuli pairs
of 5–6 and 7–8. Thus, the preference structure of the part-worths will have a difficult
time predicting this choice pattern. When we compare the actual and predicted rank
orders (see Table 2.6), we see that respondent 2’s choices are many times mispre-
dicted but most often just miss by one position due to the brand effect. Thus, we
would conclude that the preference structure is an adequate representation of the
choice process for the more important factors, but that it does not predict choice
perfectly for respondent 2, as it does for respondent 1.
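The prediction step itself is just the additive model applied to the estimated part-worths. The short sketch below, with part-worths and actual ranks taken from Tables 2.5 and 2.6 for respondent 2, reproduces the estimated ranking and makes the one-position misses visible.

# Part-worths for respondent 2 (Table 2.5) and the actual rank orders (Table 2.6).
pw = {"Liquid": 1.612, "Powder": -1.612,
      "Phosphate-free": 0.604, "Phosphate-based": -0.604,
      "HATCO": -0.20, "Generic": 0.20}

actual = {
    ("Liquid", "Phosphate-free", "HATCO"): 1,   ("Liquid", "Phosphate-free", "Generic"): 2,
    ("Liquid", "Phosphate-based", "HATCO"): 3,  ("Liquid", "Phosphate-based", "Generic"): 4,
    ("Powder", "Phosphate-free", "Generic"): 5, ("Powder", "Phosphate-based", "Generic"): 6,
    ("Powder", "Phosphate-free", "HATCO"): 7,   ("Powder", "Phosphate-based", "HATCO"): 8,
}

# Additive model: the total worth of a stimulus is the sum of its levels' part-worths.
totals = {combo: sum(pw[level] for level in combo) for combo in actual}

# Highest predicted total gets estimated rank 1, and so on down the list.
for est_rank, combo in enumerate(sorted(totals, key=totals.get, reverse=True), start=1):
    print(f"{' / '.join(combo):<42} total {totals[combo]:+.3f}   "
          f"estimated {est_rank}   actual {actual[combo]}")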
The knowledge of the preference structure for each individual allows the
researcher almost unlimited flexibility in examining both individual and aggregate
reactions to a wide range of product- or service-related issues.
REFERENCES
Fowler, T.C., Value Analysis in Design, Van Nostrand Reinhold, New York, 1990.
Hair, J.F., Anderson, R.E., Tatham, R.L., and Black, W.C., Multivariate Data Analysis, 5th ed., Prentice Hall, Upper Saddle River, NJ, 1998.
Harry, M., The Vision of Six Sigma: A Roadmap for Breakthrough, 5th ed., Vol. 1, TriStar Publishing, Phoenix, 1997.
Porter, M., Competitive Advantage, Free Press, New York, 1985.
Rechtin, E. and Maier, M., The Art of Systems Architecting, CRC Press, Boca Raton, FL, 1997.
Shoji, S., A New American TQM, Productivity Press, Portland, OR, 1993.
SELECTED BIBLIOGRAPHY
Afors, C. and Michaels, M.Z., A Quick, Accurate Way to Determine Customer Needs, Quality
Progress, July 2001, pp. 82–88.
Anon., Quality Function Deployment, American Supplier Institute, Inc., Dearborn, MI, 1988.
Bialowas, P. and Tabaszewska E., How to Evaluate the Internal Customer Supplier Relation-
ship, Quality Progress, July 2001, pp. 63–67.
Carlzon, J., Moments of Truth, HarperCollins, New York, 1989.
Fredericks, J. O. and Salter, J.M., What Does Your Customer Really Want? Quality Progress,
Jan. 1998, pp. 63–70.
Gale, B.T., Managing Customer Value: Creating Quality and Service that Customers Can See,
Free Press, New York, 1994.
Gobits, R., The Measurement of Insight, unpublished paper presented at the 2nd International
Symposium on Educational Testing, Montreux, 1975.
Goncalves, K.P. and Goncalves, M.P., Use of the Kano Method Keeps Honeywell Attuned to
the Voice of the Customer, Quirk’s Marketing Research Review, Apr. 2001, pp. 18–25.
Gutman, J. and Miaoulis, G., Past Experience Drives Future CS Behavior, Marketing News,
Oct. 22, 2001, pp. 45–46.
Harry, M., The Vision of Six Sigma: A Roadmap for Breakthrough, 5th ed., Vol. 2, TriStar
Publishing, Phoenix, 1997.
Heskett, J.L., Sasser, W.E., and Schlesinger, L.A., The Service Profit Chain: How Leading Companies Link Profit and Growth to Loyalty, Satisfaction and Value, Free Press, New York, 1997.
Mariampolski, H., Qualitative Market Research, Sage Publications, Newbury Park, CA, 2001.
Morais, R., The End of Focus Groups, Quirk’s Marketing Research Review, pp. 153–154,
May 2001.
Mudge, A.E., Numerical Evaluation of Functional Relationships, Proceedings, Society of
American Value Engineers, 1967.
Murphy, B., Methodological Pitfalls in Linking Customer Satisfaction with Profitability,
Quirk’s Marketing Research Review, Oct. 2001, pp. 22–27.
Murphy, B., Qualitatively Speaking: Of Bullies, Friends and Mice, Quirk’s Marketing
Research Review, Oct. 2001, pp. 16, 61.
Saliba, M.T. and Fisher, C.M., Managing Customer Value, Quality Progress, June 2000, pp.
63–70.
Shillito, M.L., Pareto Voting. Proceedings, Society of American Value Engineers, 1973.
Stamatis, D.H., Total Quality Management: Engineering Handbook, Marcel Dekker, New
York, 1997.
Stamatis, D.H., Total Quality Service, St. Lucie Press, Delray Beach, FL, 1996.
Sullivan, L.P., The Seven Stages in Company Wide Quality Control, Quality Progress, May
1986, pp. 77–83.
Sullivan, L.P., Quality Function Deployment, Quality Progress, June 1986, pp. 39–50.
Jones, T.O. and Sasser, W.E., Why Satisfied Customers Defect, Harvard Business Review, Nov.–Dec. 1995, pp. 88–89.
VanVierah, S. and Olosky, M., Achieving Customer Satisfaction: Registrar Satisfaction Survey
Counterbalances the Myth About Registrars, Automotive Excellence, Winter 1999,
pp. 10–15.
Vriens, M., Wedel, M., and Wilms, T., Metric conjoint segmentation methods: a Monte Carlo comparison, Journal of Marketing Research, 33, 73–85, 1996.
Wittink, D.R. et al., Commercial use of conjoint analysis: an update, Journal of Marketing,
53, 91–96, 1989.
3
Benchmarking
Benchmarking is a tool, a technique or process, a philosophy, and a new name for
old practices. It involves operations research and management science for determin-
ing (a) what to do or “goal setting” and (b) how to do it or “action plan identification.”
Benchmarking can be applied (a) systematically and comprehensively or (b) ad
hoc project by project. In both cases it can require (a) sophisticated statistical
analysis, (b) utilization of a wide variety of analytical tools, and (c) a wide range
of data sources. The basic requirements for success are:
How can this be so? What strategy did the more successful competitors follow?
• Least cost
• Differentiation
Those competitors who do not explicitly follow one strategy or the other tend
to get “stuck in the middle” and do not have the highest return on investment. Hall’s
findings do, however, indicate that some firms can successfully manage both strategy
options. The generic strategies identified by Hall and Porter have been supported by
a number of research studies (see Higgins and Vincze, 1989).
For a successful business strategy to be developed, a company must decide what
course it will follow. It must also be certain that it is, in fact, realistically able to
pursue that alternative. Some questions to be asked include the following:
• Does a company really have the least cost? How do they know? What is
the basis for the claim?
• Is the company really differentiated in the eyes of the customer? How do
they know? What is the basis for the claim?
• How might competitive conditions change in the future?
• What is the relative market share of the company? Does the experience
curve have a significant effect on cost reduction?
• Is the industry one that can be affected by automation possibilities, con-
veyorized assembly, or new production technology? Is the capital available
for investment in efficient scale facilities and product and process engi-
neering innovation?
• Do competitors have a different mix of fixed and variable costs?
• What is the percent capacity utilization by competitive firms?
• Are the competitive firms using activity-based accounting?
• How critical is raw material supply? Does the firm have preemptive
sources of supply?
• Does the firm have a tight system of budgeting and cost control for all
functions?
• Are products designed for low cost production? Are products simplified and product lines reduced in number? Are bills of material standardized?
• What is the level of product/service quality versus competition?
• How labor intensive is the process? How effective are labor/management
relations?
• Are marginal accounts minimized?
Improved quality through benchmarking can lead to lower costs. The cost of
quality — really the cost of non-quality — consists of the costs of prevention,
appraisal (inspection), internal quality failures, and external quality failures. This
cost can amount to as much as 30–40% of the cost of goods sold. Costs include the
following:
Costs of prevention
Training
Equipment
Costs of appraisal (inspection)
Inspectors
Equipment
Cost of internal quality failures
Scrap
Rework
Machine downtime
Missed schedules
Excess inventory
Cost of external quality failures
Warranty expense
Customer dissatisfaction
Studies have shown that the average quality improvement project results in
$100,000 of cost reduction. The associated cost to diagnose and remedy the problem
has averaged $15,000. Consequently, the payout from benchmarking in this area can
be significant. Velcro reported a 50% reduction in waste as a percentage of total
manufacturing cost in the first year and an additional 45% decrease in the second
year of its quality program.
Motorola achieved a quality level in 1991 that was 100 times better than it was
in 1987. By 1992, the company was striving for six sigma quality, that is, about 3.4
defects per million opportunities, or 99.99966 percent perfection. Motorola believes
that superior quality, achieved by doing things right the first time, is the lowest-cost
way of doing things. Its director of manufacturing at the time pointed out that each
piece of equipment has 17,000 parts and 144,000 opportunities for mistakes. A 99
percent quality rate is equivalent to 1,440 mistakes per piece. The cost to hire and
train people to fix those mistakes would put the company out of business.
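The arithmetic behind these figures can be checked directly. The following is a minimal sketch, assuming only the opportunity count quoted above and the conventional 3.4 defects-per-million definition of six sigma; the numbers are illustrative, not Motorola data.

# Rough check of the defect arithmetic quoted above (illustrative only).
opportunities_per_unit = 144_000      # opportunities for mistakes per piece of equipment

# A 99 percent quality rate means 1 percent of opportunities become mistakes.
mistakes_at_99_pct = opportunities_per_unit * 0.01
print(mistakes_at_99_pct)             # 1440.0 mistakes per piece

# At the six sigma level of roughly 3.4 defects per million opportunities:
mistakes_at_six_sigma = opportunities_per_unit * 3.4 / 1_000_000
print(round(mistakes_at_six_sigma, 2))  # about 0.49 defects per piece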
Quality planning
• Identifying target market segments
• Determining specific customers’ needs, wants, and expectations
• Six sigma
• Business strategic planning
• Strategy development (least cost versus differentiation)
• TQM
• Pricing strategy
• Benchmarking
The Malcolm Baldrige National Quality Award encapsulates the essential ele-
ments of Strategic Quality Management. The key attributes considered when making
this award are listed below. Many agree that the criteria provide the blueprint for a
better company. The urgency to win the award can accelerate change within an
organization. Some companies have told their suppliers to compete or else. These
are the criteria:
Achievement of the award requires extensive top management effort and support.
All of the Quality Award winners have been in highly competitive industries and
either had to improve or get out of the business. On a scale of 10 (best) to 1 (poor),
how would you rate your company on each of these attributes? If you find yourself
on the low end, there may be a need for benchmarking.
• Criteria and rationale the company uses for making competitive compar-
isons and benchmarks. These include:
• The relationship to company goals and priorities for the improvement
of product and service quality and/or company operations
• The companies for comparison within or outside the industry
• Current scope of competitive and benchmark involvement and data col-
lection relative to:
• Product and service quality
• Customer satisfaction and other customer needs
• Supplier performance
• Employee data
• Internal operations, business processes, and support services
• Other
• For each, the company is directed to list sources of comparisons and
benchmarks, including companies benchmarked and independent testing
or evaluation, and:
• How each type of data is used
• How the company evaluates and improves the scope, sources, and uses
of competitive and benchmark data
• The company must also indicate how this data is used to support:
• Company planning
• Setting of priorities
• Quality performance review
• Improvement of internal operations
• Determination of product or service features that best predict customer
satisfaction
• Quality improvement projects
• Developing plans
• Goal setting
• Continuous process improvement
• Determining trends and levels of product and service quality, the effec-
tiveness of business practices, and supplier quality
• Determining customer satisfaction levels
A closer review of the criteria indicates several factors that are essential for effective
quality excellence and benchmarking activities within a company, including:
Customer-driven quality
• Quality is judged by the customer. The customer’s expectations of
quality dictate product design and this, in turn, drives manufacturing.
• All product and service attributes that lead to customer satisfaction and
preference must be taken into consideration.
• Customer driven quality is a strategic concept. Why do people buy
your product? How do you know?
• Leadership is crucial. A company’s senior management must create
clear quality values, specific goals, and well-defined systems and meth-
ods for achieving the goals.
• Ongoing personal involvement is essential. The attitude must be
changed from a “management control” focus to a “management com-
mitted to help you” focus.
Continual improvement
• Constant improvement in many directions is required: improved prod-
ucts and services, reduced errors and defects, improved responsiveness,
and improved efficiency and effectiveness in the use of resources. All
of this takes time. If you do not have the time, do not start.
Fast response
• An increasing need exists for shorter new product and service devel-
opment and introduction cycles and a more rapid response to custom-
ers.
Actions based on facts, data and analysis
• A wide range of facts and data is required, e.g., customer satisfaction,
competitive evaluations, supplier data, and data relative to internal
operations.
• Performance indicators to track operational and competitive perfor-
mance are critical. These performance indicators or goals can act as
the cohesive or unifying force within an organization. They can also
provide the basis for recognition and reward.
• Participation by all employees is important. Reward and recognition sys-
tems need to reinforce total participation and the emphasis on quality.
• Factors bearing on the safety, health, and well being of employees need
to be included in the improvement objectives.
• Effective training is required. The emphasis must be on preventing
mistakes, not merely correcting them. Employees must be trained to
inspect their own work on a continuous basis.
• Participation with suppliers is essential. It is important to get suppliers
to improve their quality standards.
To show the strong relationship between National Quality Award winners and bench-
marking, we provide a historical perspective. The first example comes to us from
Cadillac’s approach to excellence. (Cadillac was the 1990 winner of the National
Quality Award.) The brief case study that follows indicates the integration of business
planning, excellent quality management, and benchmarking.
The Business Plan was the Quality Plan. The plan was designed to ensure that
Cadillac is the “Standard of the World” in all measurable areas of quality and
customer service.
The major components of the plan were:
• Mission
• Objectives
• Quality — Emphasis on six major vehicle systems:
• Exterior component and body mechanical
• Chassis/powertrain
• Seats and interior trim
• Electrical/electronics
• Body in white
• Instrument panel
• Competitiveness
• Disciplined planning and execution
• Leadership and people
• Goals
Action plans
• Took appropriate and applicable action to fulfill all the requirements so
that the customer could be satisfied.
Xerox reduced its number of suppliers from 5,000 in 1980 to 300 by 1986 based
on performance data and attitude. Suppliers were classified as: (a) does not think
improvement is necessary, (b) slow to accept or manage change, and (c) willing to
go for it and strong enough to be a survivor. Xerox reallocated its internal efforts
to concentrate on the companies in the third group. Xerox provided extensive training
to these companies, and defect rates in incoming materials dropped 90 percent in
three years.
In addition to performance improvement, the suppliers were asked to participate
in copier design, as early in the concept phase as possible, and to make suggestions
so that overall quality could be improved and costs reduced. When this information
was used, the cost of purchased material dropped by 50 percent.
Results:
Each of the firm's six major groups and sectors has a "benchmarking" program that
analyzes all aspects of a competitor's products to assess their manufacturability,
reliability, manufacturing cost, and performance. Motorola has measured the prod-
ucts of some 125 companies against its own standards, verifying that many Motorola
products rank as "best in their class." (It is imperative for the reader to understand
that the results of a benchmarking study may indeed provide data showing that the
current practices of one's own organization are already "best in class.")
11. Eliminate numerical quotas. These often promote poor quality. Instead
analyze the process to determine the systemic changes required to enable
superior performance.
12. Remove barriers to pride in workmanship by providing the training, com-
munication, and facilities required.
13. Institute a vigorous program of education and retraining. Help people to
improve every day.
14. Take action to accomplish the transformation required.
Study a process to determine what changes might be made to improve it. What type
of performance is achieved by the best of the best? What do they do that we are not
doing? What results do they achieve? What changes would we have to make? What
does the customer expect? What is the customer level of satisfaction? Is the change
economically justified?
Do
Determine the specific plan for improvement and implement it. This involves the
development of creative alternatives by work teams and the conscious choice of a
strategy to be followed. This may require internal or external benchmarking.
Was the root cause of the problem identified and corrected? Will the problem recur?
Are the expected results being achieved?
Act
Study the results and repeat the process. Was the plan a good one? What was learned?
This approach amounts to the application of the scientific method to the solution
of business problems. It is the basis of organizational learning.
How can we define quality? This is a very critical question and may indeed
prove the most important question in pursuing benchmarking. The importance of
this question is that it will focus the research on “best” in a very customized fashion
Aesthetics — Aesthetics are concerned with the look, taste, feel, sound, and
smell of an item. This can be critical for products such as food, paint, works
of art, fashion, and decorative items.
Perceived quality — Perceived quality is determined by factors such as image,
advertising, brand identity, and word of mouth reputation.
Stamatis, on the other hand, has introduced a modified version of the above
points with some additional points — especially for service organizations. They are:
• Be accessible
• Provide prompt personal attention
• Offer expertise
• Provide leading technology
• Depend — quite often — on subjective satisfaction
• Provide for cost effectiveness
What is interesting about these two lists is the fact that both Garvin and Stamatis
recognize that optimum customer satisfaction is a design issue. Design,
indeed, is the integrating factor. The designer has to make the tough trade-offs.
Concurrent engineering and Quality Function Deployment suggest that the product
designer, the manufacturing engineer, and the purchasing specialist work jointly
during the product design phase to build quality in from the start. The focus, of
course, is to design all the above characteristics as a bundle of utility for the customer.
That bundle must address the following in a holistic way:
Image
Transcendent view — This view defines quality as that property that you
will innately recognize as such once you have been exposed to it.
Something about the product or service or the way it has been promot-
ed/communicated to you causes you to recognize it as a quality
offering — perhaps an excellent one.
Performance
Product-based view — This view defines quality in terms of a desirable
attribute or series of attributes that a product contains. A high-quality
fuel product could have a high BTU content and a low percentage of
sulfur.
To come up with reasonable definitions and actions for the above characteristics,
a team must be in place and team dynamics at work. A very good approach for this
portion of benchmarking that we recommend is the nominal group process:
The process features are as follows:
The discussion and direction of the nominal process must not focus on price
alone because that is a very narrow point of view. Some examples of non-price
reasons to buy are:
• Fill rate
• Fun to deal with
• Number/location of stocking warehouses
• Repair facilities and location
• Technical assistance
• Service — before, during, and after sale
• Willingness to hold inventory
• Flexibility
• Access to salespeople
• Access to multiple supply sources
• Reputation
• Life cycle cost
• Financing terms
• Turnkey operations
• Consulting/training
• Warehousing
• Guarantees/warranty
• Services provided by salespeople
• Ease of resale
• Computer placement of orders
• Professional endorsement
• Packaging
• Up front engineering
• Vendor financial stability
• Confidence in salespeople
• Backup facilities
• Courtesy
• Credibility
• Understandability
• Responsiveness
• Accessibility of key players
• Flexibility
• Confidentiality
• Safety
• Delivery
• Ease of installation
• Ability to upgrade
• Customer training
• Provision of ancillary services
• Product briefing seminars
• Repair service and parts availability
• Warranty
• Image
• Brand recognition
• Atmosphere of facilities
• Sponsor of special events
The service and image features define the “augmented product.” They answer
the questions:
• What does your customer want in addition to the product itself? (the
unspoken requirements)
• What does your customer perceive to have value?
• What does your customer view as “quality”?
• What are the non-price reasons to buy your product? How do they compare
with the product and service attributes listed above?
• How do your customers define quality? How does your company define
quality?
• What is more important? Product or service?
• Can specific, measurable attributes be defined?
• How does your competitor define quality?
• How do you compare with your competitor?
• What other companies or industries influence your customer as to what
should be expected relative to each of these characteristics?
• What does this suggest in the way of benchmarking opportunities?
For example, here are some non-price reasons to buy that might apply to a
supermarket:
None of these, in itself, is earth-shaking. But they could make the difference in
an industry that operates with a profit margin of less than 1%.
We cannot pass up the opportunity to address non-price issues for the Wal-Mart
corporation, which reportedly pursues being one percent better in 100 details, including the following items:
• Aggressive hospitality
• People greeters
• Associates not employees
• Tough people to sell to
• Weekly top management meetings
• Low cost, no frills environment
• Good computerized database
• Rapid communications by phone
• Managers in the field Monday through Thursday
• High-efficiency distribution centers
• Emphasis on training of people
• Department managers having cost and margin data
• Profit sharing if store meets goals
• Bonus if shrinkage goal is met
• Open door policy
• Grass-roots meetings
• Constant improvements
• Competitive ads shown in store
• User
• Technical buyer
• Economic buyer
• Corporate general interest buyer
Who is the competitor? Assume, for example, a recreation environment. Here are
some questions you might ask that would help you determine who the competitors are:
Once these questions have been addressed, we are ready to do the competitive
evaluation in the following stages:
Survey design
• Attributes considered
• Relative weight given to each
• Direct competitors
• Performance versus competition
Approaches to making the survey
Internal
• Sales force
• Sales management (Remember, the more accurate data you have,
the better the survey. For example: Colgate Palmolive audits 75,000
customers for all products. “People know what they want and will
not settle for happy mediums.”)
External
• Market research firms/universities
• Attribution/non-attribution
• Use of customer service hot line — GE progressed from receiving
1000 calls per week in 1982 to receiving 65,000 calls per week with
the installation of an 800 number answer center. The 150 phone
reps need a college degree and sales experience. They have been
effective in spotting trends in complaints as well as increasing sales.
The increase in sales has been estimated at more than twice the
operating cost of the center. (Did this trigger off a benchmarking
candidate for you?)
Groups to be surveyed
• Current customers
• Lost customers
• Prospects
Survey frequency
Comparison of company internal view versus the customer view
ROI equals net income before interest and taxes divided by the total of working
capital and fixed capital.
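Expressed as a calculation, this definition can be sketched as follows; the function name and figures are illustrative assumptions, not data from any company discussed here.

def roi(ebit, working_capital, fixed_capital):
    """Return on investment as defined above: EBIT divided by working plus fixed capital."""
    return ebit / (working_capital + fixed_capital)

# Illustrative figures only.
print(roi(ebit=2_500_000, working_capital=4_000_000, fixed_capital=6_000_000))  # 0.25, i.e., 25%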
As a result the Institute can help organizations with:
• Understandability
• Predictability
of their own organization’s behavior and their own products and services.
Of course, the choice of strategy depends upon several factors, including but
not limited to:
• People/culture/compensation
• Process/procedure
• Facilities/systems
• Material
TYPES OF BENCHMARKING
Benchmarking can be performed for any product, service, or process. Different
classification schemes have been suggested. For example, Xerox classifies bench-
marking in the following categories:
• Internal benchmarking
• Direct product competitor benchmarks
• Functional benchmarking — This is a comparison with the best of the
best even if from a different industry.
• Generic benchmarking — This is an extension of functional benchmark-
ing. It requires the ability to creatively imitate any process or activity to
meet a specific need. For example, the technique used for high speed
checking of paper currency (into the categories of good, mutilated, or
counterfeit) by a bank could be adapted for high speed identification and
sorting of packages in a warehouse.
ATT, on the other hand, uses the classification indicated below. Specific examples
of benchmarking studies for each are shown. These are not limited to ATT examples:
Task
• Invoicing
• Order entry
• Invoice design
• Customer satisfaction
• Supplier evaluation
• Flow charting
• Accounts payable
Functional
• Promotion by banks
• Purchasing
• Advertising by media type
• Pricing strategy
• Safety
• Security
Management process
• PIMS par report
• Profit margin/asset turnover
• Strategic planning
• Operational planning
• Capital project approval process
• Technology assessment
• Research and development (R and D) project selection
• Innovation
• Training
• Time-based competition
• Benchmarking
• Self-managed teams
Operations process
• Warehouse operations
• Make versus buy
Benchmarking is often an integral part of the situation analysis. It can also have
a major impact on the mission statement, the goals, the strategy, the tactics, and the
identification and determination of action plans. Benchmarking can provide major
guidance when determining what to do, how to do it, and what can be expected.
Benchmarking for strategic planning might concentrate on the determination of
the critical success factors for an industry (based on customer and competitive inputs)
and on identifying what has to be done to excel at those success factors. This then
leads to the development of a detailed action plan with effort and result goals.
Benchmarking for operational planning might concentrate on the cost and cost
structure for each functional area relative to the outputs produced.
All quality initiatives — including six sigma — have a significant influence on
the mission statement and the objectives and goals of an organization. As such, they
can provide an added impetus to do benchmarking to satisfy the quality goals.
Benchmarking can be centralized (ATT) or decentralized (Xerox). Xerox has several
functional area benchmarking specialists, including specialists for finance, admin-
istration, marketing, and manufacturing. The big advantage of a decentralized
approach is a greater likelihood of organizational buy-in to the final results of the
benchmarking study. The effort required to perform a benchmarking study can vary
significantly. For example, the L.L. Bean study performed by Xerox took one person-
year of effort. Generally, three to six companies are included in the benchmark.
However, some companies use only one or two. Also, some studies are performed
in depth, while others are fairly casual. The “One Idea Club” was a simple approach
with a substantial reward.
It sounds good. But does benchmarking work? Let us see what SAS did, as an
example.
When Jan Carlzon took over as president of Scandinavian Airlines (SAS) in 1980,
the company was losing money. For several previous years, management had dealt with
this problem by cutting costs. After all, this was a commodity business. Carlzon saw
this as the wrong solution. In his view, the company needed to find new ways to compete
and build its revenue. SAS had been pursuing all travelers with no focus on superior
advantage to offer to anyone. In fact, it was seen as one of the least punctual carriers
in Europe. Competition had increased so much that Carlzon had to figure out:
Carlzon decided that the answer was to focus SAS’s services on frequently flying
business people and their needs. He recognized that other airlines were thinking the
same way. They were offering business class and free drinks and other amenities.
SAS had to find a way to do this better if it was to be the preferred airline of the
frequent business traveler.
SL3151Ch03Frame Page 126 Thursday, September 12, 2002 6:12 PM
The starting point was market research to find out what frequent business
travelers wanted and expected in the way of airline service. Carlzon’s goal was to
be one percent better in 100 details rather than 100 percent better in only one detail.
The market research showed that the number one priority was on-time arrival.
Business travelers also wanted to check in fast and be able to retrieve their luggage
fast. Carlzon appointed dozens of task forces to come up with ideas for improving
these and other services. They came back with ideas for hundreds of projects, of
which 150 were selected with an implementation cost of $40 million.
One of the key projects was to train a total customer orientation into all of SAS’s
personnel. Carlzon figured that the average passenger came into contact with five
SAS employees on an average trip. Each interaction created a “moment of truth”
about SAS. At that point of contact, the person was SAS. Given the 5 million
passengers per year flying SAS, this amounted to 25 million moments of truth where
the company either satisfied or dissatisfied its customer.
To create the right attitudes toward customers within the company, SAS sent
10,000 front line staff to service seminars for two days and 25,000 managers to
three-week courses. Carlzon taught many of these courses himself. A major emphasis
was getting people to value their own self-worth so that they could, in turn, treat
the customer with respect and dignity. Every person was there to serve the customer
or to serve someone who was serving the customer.
The results: Within four months, SAS achieved the record as the most punctual
airline system in Europe, and it has maintained this record. Check-in systems are
much faster, and they include a service where travelers who are staying at SAS
hotels can have their luggage sent directly to the airport for loading on the plane.
SAS does a much faster job of unloading luggage after landings as well. Another
innovation is that SAS sells all tickets as business class unless the traveler wants
economy class.
The company’s improved reputation among business flyers led to an increase in
its full fare traffic in Europe of 8 percent and its full fare intercontinental travel of
16 percent, quite an accomplishment considering the price cutting that was taking
place and zero growth in the air travel market. Within a two-year period, the company
became a profitable operation.
Carlzon’s impact on SAS illustrates the customer satisfaction and profits that a
corporate leader can achieve by creating a vision and target for the company that
excites and gets all the personnel to swim in the same direction — namely, toward
satisfying the target customers. As a leader, Carlzon created the conditions necessary
to ensure the success of the strategy by implementing the projects required for the
front line people to do their jobs well.
D×V×F>R
1. Believe that he or she has the skill necessitated by the change. Can I do it?
2. Perceive a reasonable likelihood of personal value fulfillment as a result
of making the change. What will I get out of it?
3. Perceive that the total personal cost of making the change is more than
offset by the expectation of personal gain. Is it worth making the change?
• Beliefs
• Facts
• Values
• Feelings
Benchmarking can help implement change by providing the required facts and
challenging beliefs, especially when supporting data from other organizations are
available. Other models to manage change are:
• Financial pressure
• Quarterly earnings
• Cash flow (Need: to improve operational efficiency)
STRUCTURAL PRESSURE
One effective way to start the benchmarking process is to select one high
visibility area of concern to the influence leaders in a company and produce results
that can showcase the benchmarking process. This might start with a library search
to highlight the results that are possible.
IDENTIFICATION OF BENCHMARKING
ALTERNATIVES
As indicated earlier, benchmarking candidates can be identified in a wide variety of
ways. They can be detected, for example, during the business planning process, as
part of a quality initiative, during a six sigma project, or during a profit improvement
campaign. Both external and internal analysis can lead to potential candidates.
• The ease of market entry — What can you do that will make it hard to
enter the business?
• The barriers to market exit — What can you do to make it easy for a
competitor to get out of the business?
• Governmental regulations — What can you do to influence these?
Based on an analysis of the industry as it exists now and might exist in the
future, what are the factors absolutely critical for success? Five or six critical success
factors can usually be identified for a company. Examples are:
• Customer service
• Distribution
• Technically superior product
• Styling
• Location
• Product mix
• Cost control
• Dealer system
• Product availability
• Supply source
• Production engineering
• Advertising and promotion
• Packaging
• Staff/skill availability
• Quality
• Convenience
• Personal attention
• Innovation
• Capital
Once the critical success factors have been identified, the company can assess its
current position to determine whether benchmarking is required. One technique for
performing this analysis is to make a tabulation showing how the major competitors in
an industry rank for each critical success factor. As a cross-check, there should be a
correlation between the tabulated results, market share, and return on equity.
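A minimal sketch of such a tabulation follows; the competitors, critical success factors, weights, and 1-to-10 ratings are all hypothetical and stand in for whatever a real study would collect.

# Hypothetical critical-success-factor ratings (1 = poor, 10 = best).
factors = ["Customer service", "Distribution", "Cost control", "Quality"]
weights = {"Customer service": 0.3, "Distribution": 0.2, "Cost control": 0.2, "Quality": 0.3}

ratings = {
    "Our company":  {"Customer service": 6, "Distribution": 8, "Cost control": 5, "Quality": 7},
    "Competitor A": {"Customer service": 8, "Distribution": 7, "Cost control": 7, "Quality": 8},
    "Competitor B": {"Customer service": 5, "Distribution": 6, "Cost control": 9, "Quality": 6},
}

# Weighted score per company; large gaps versus the leader flag benchmarking candidates.
for company, r in ratings.items():
    score = sum(weights[f] * r[f] for f in factors)
    print(f"{company:12s} weighted score = {score:.1f}")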
The PIMS par report indicates the financial results that companies in similar cir-
cumstances have been able to achieve. As such, it provides a quantitative benchmark.
The PIMS report also indicates those factors that should enable you to earn greater
than par and those factors that would cause you to earn less than par.
Financial Comparison
If PIMS data are not available, a comparison of the company’s financial performance
versus that of other companies in the same industry can suggest the value of
benchmarking in specific areas. Potential areas that might be identified are:
Competitive Evaluations
Focus Groups
Focus groups are used to determine what a customer segment thinks about a product
or service and why it thinks that way. Participants are invited to join the group
usually with some type of personal compensation. A focus group starts with a series
of open-ended questions relative to a specific subject. Representatives of the spon-
soring company view the entire process either through a one-way mirror or by closed
circuit TV. As a second phase, the company representatives ask specific follow-up
questions (through the facilitator), based on the open-ended probing.
Importance/Performance Analysis
1. Customer-oriented goals
2. Service/quality goals
The value added chain can also provide customer perspective by suggesting the
questions:
1. How does our product or service help customers to minimize their cost?
2. How does our product or service help customers to differentiate their
product?
Pareto Analysis
Pareto analysis is a form of data analysis that requires that each element or possible
cause of a problem be identified along with its frequency of occurrence. Items are
then displayed in order of decreasing frequency of occurrence. This
can help to identify the most significant problem to attack first. A common expression
of the Pareto Law is the 80/20 rule, which states that 20% of the causes account for
80% of the difficulties. A Pareto analysis of setup delay might include factors such
as: necessary material not available, tooling not ready, lack of gages, setup personnel
not available, another setup has priority, material handling equipment not available,
error — incorrect setup. Develop a Pareto analysis for the production of scrap. (There
is a tremendous difference between knowing the facts and guessing).
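The tabulation and ordering behind such an analysis can be sketched as follows; the frequency counts for the setup-delay causes are hypothetical and are only meant to show the cumulative-percentage view.

# Hypothetical frequency counts for causes of setup delay.
delays = {
    "Material not available": 42,
    "Tooling not ready": 31,
    "Lack of gages": 9,
    "Setup personnel not available": 8,
    "Another setup has priority": 5,
    "Material handling equipment not available": 3,
    "Incorrect setup": 2,
}

total = sum(delays.values())
cumulative = 0
# Display causes in order of decreasing frequency with the cumulative percentage.
for cause, count in sorted(delays.items(), key=lambda item: item[1], reverse=True):
    cumulative += count
    print(f"{cause:45s} {count:4d}  {100 * cumulative / total:5.1f}% cumulative")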
Statistical process control is a technique for identifying random (or common) causes
versus identifiable (or special) causes in a process. Both of these are potential sources
for improvement. The amount of random variation affects the capability of a machine
to produce within a desired range of dimensions. Hence, benchmarking could be
performed to determine machine processing capabilities and how to achieve those
levels. The determination and correction of recurring systematic changes is also a
benchmarking possibility.
The reduction of the random variation or the uncertainty of the process and the
identification and correction of special causes are critical aspects of the total quality
management process. Correction often requires a change in the total manufacturing
process, tooling, the equipment being used, and/or training in setup and operations.
The first step in process improvement is to control the environment and the
components of the system so that variations are within natural, predictable limits.
The second step is to reduce the underlying variation of the process. Both of these
are candidates for benchmarking.
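One common convention for putting limits on the random variation is the individuals chart, with the short-term spread estimated from the average moving range. The following is a minimal sketch under that convention; the readings are made up, and a real study would use the company's own measurements and charting rules.

# Hypothetical individual measurements from a process.
readings = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7, 10.1, 12.3, 10.0, 9.9]

center = sum(readings) / len(readings)
# Estimate short-term variation from the average moving range (individuals-chart convention).
moving_ranges = [abs(b - a) for a, b in zip(readings, readings[1:])]
sigma_hat = (sum(moving_ranges) / len(moving_ranges)) / 1.128
upper, lower = center + 3 * sigma_hat, center - 3 * sigma_hat

print(f"center = {center:.2f}, control limits = ({lower:.2f}, {upper:.2f})")
# Points beyond the limits suggest special causes worth investigating.
outside = [x for x in readings if x > upper or x < lower]
print("possible special causes:", outside)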
Trend Charting
Historic data can be used to develop statistical forecasts and confidence intervals
that depict acceptable random variation. When data fall within the confidence inter-
vals, you have no cause to suspect unusual behavior. However, data outside of the
confidence intervals could provide an opportunity for benchmarking. It might also
be informative to pursue benchmarking as a device to reduce the range of variation
or the size of the confidence interval.
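As a rough illustration of this idea, the sketch below fits a straight-line trend to hypothetical monthly scrap percentages and flags points that fall well outside a simple band around the trend. The two-residual-standard-deviation band is used here purely for illustration; the confidence intervals described above would come from a proper forecasting model.

# Hypothetical monthly scrap percentages.
scrap = [4.8, 4.6, 4.7, 4.3, 4.4, 4.1, 4.2, 3.9, 4.0, 4.6, 3.7, 3.6]
months = list(range(1, len(scrap) + 1))

# Ordinary least-squares fit of a straight-line trend.
n = len(scrap)
mx, my = sum(months) / n, sum(scrap) / n
slope = sum((x - mx) * (y - my) for x, y in zip(months, scrap)) / sum((x - mx) ** 2 for x in months)
intercept = my - slope * mx

residuals = [y - (intercept + slope * x) for x, y in zip(months, scrap)]
spread = (sum(r ** 2 for r in residuals) / (n - 2)) ** 0.5

# Points well outside the band around the trend deserve a closer look.
for x, y, r in zip(months, scrap, residuals):
    flag = "  <-- outside band" if abs(r) > 2 * spread else ""
    print(f"month {x:2d}: scrap {y:.1f}%, trend {intercept + slope * x:.2f}%{flag}")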
A simple trend analysis of your own past data can also provide a basis for
improvement. The following data relative to the percent scrap and rework illustrate
the improvement made and could provide the basis for benchmarking:
Products tend to go through a defined life cycle starting with an introductory phase
and proceeding through growth, maturity, and decline. The management style and
business tactics are very different at each stage. Anticipating and managing the
transitions can be important. This could lead to opportunities for benchmarking of
product life cycle management and product portfolio management. Product portfolio
management can lead to the need for new product identification and introduction.
These areas have both been the subjects of benchmarking studies.
In addition to the changes that products go through, companies tend to go through
various stages of development and crises. Again, the management of the transitions
can be an important benchmarking candidate.
Failure Mode and Effect Analysis (FMEA) is a systematic way to study the operating
environment for equipment or products and to determine and characterize the ways
in which a product can fail. Benchmarking can be used to determine component and
system design goals and alternatives (see Chapter 6).
Cost/Time Analysis
To evaluate its new product introduction process, a company may plot cost per unit
produced versus elapsed time for each element of the process, e.g., design and
engineering, production, sub-assembly, and assembly. The area under the curve
represents money tied up (inventory), and smaller is better.
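The area calculation itself is simple. The following sketch assumes a few hypothetical cumulative cost-per-unit figures at points in time and approximates the area with the trapezoid rule; it is only meant to show the mechanics, not any particular company's introduction process.

# Hypothetical cumulative cost per unit (dollars) at successive weeks of the introduction process.
weeks = [0, 4, 10, 16, 20]
cost = [0, 120, 450, 700, 820]

# Area under the cost/time curve approximated with the trapezoid rule;
# it is a rough proxy for money tied up, so smaller is better.
area = sum((t2 - t1) * (c1 + c2) / 2 for t1, t2, c1, c2 in zip(weeks, weeks[1:], cost, cost[1:]))
print(f"area under the curve = {area:.0f} dollar-weeks")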
When solving a problem, it is critical to attack the underlying cause of the problem
and not the symptoms. The underlying cause can be identified by listing all possible
causes and identifying the most probable cause based on data collection and a Pareto
analysis. This sometimes leads to multiple benchmarking opportunities. Failure to
diagnose a problem (ready, fire, aim) can lead to an inefficient use of resources and
frustration.
When identifying underlying causes, it can also be useful to ask five sequential
“whys” to get to the heart of a problem. For example:
TABLE 3.1
A Typical Assessment Instrument
Please indicate how you evaluate the organization using the following key:
(There are many ways to use a key. This is only one example.)
Optional Information
Name:
Date:
Title:
Dept:
PRIORITIZATION OF BENCHMARKING
ALTERNATIVES — PRIORITIZATION PROCESS
A variety of prioritization approaches are available. Use the one most appropriate
to a specific situation.
PRIORITIZATION MATRIX
The following steps are required to complete a prioritization matrix; a short sketch of the rating arithmetic follows the list:
1. List the items indicating “what” you want to accomplish. These are the
evaluation criteria.
2. List “how” you will accomplish what you want to do. These are the
alternatives to be evaluated.
3. Indicate the degree of importance for each of the “whats.” This is a number
ranging from 1 to 10 (10 is most important).
4. Indicate the company and the competitive rating using a scale from 1 to
10 (10 is best). Plot the competitive comparison.
5. Specify the planned or desired future rating.
6. Calculate the improvement ratio by dividing the planned rating by the
company current rating.
7. Select at most four items to indicate as “sales points.” Use a factor of 1.5
for major sales points and a factor of 1.2 for minor sales points.
8. Calculate the importance rate as the degree of importance times the
improvement ratio times the sales points.
9. Calculate the relative weight for each item by dividing its importance rate
by the total of the importance rates for all “whats.”
10. Indicate the relationship value between each “what” and “how.” Use values
of 9, 3, and 1 to indicate a strong, moderate or light interrelationship.
11. Calculate the importance weight for each “how.” This is the total of the
cross products of the relationship value and the relative weight of the
“what.”
12. Indicate the technical difficulty associated with the “how.” Use a scale of
5 to 1 (5 is the most difficult).
13. Indicate the company, competitive values, and benchmark values for the
“how”.
14. Specify the plan for each of the “hows.”
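As referenced above, the rating arithmetic in steps 6 through 11 can be sketched as follows. All numbers are hypothetical, and the assumed total of the importance rates simply stands in for the sum over all of the "whats" in a real matrix.

# One "what" (evaluation criterion), rated per steps 3 through 9 above; numbers are hypothetical.
importance = 8            # step 3: degree of importance (1-10)
company_rating = 5        # step 4: current company rating (1-10)
planned_rating = 8        # step 5: planned or desired rating
sales_point = 1.2         # step 7: minor sales point

improvement_ratio = planned_rating / company_rating              # step 6 -> 1.6
importance_rate = importance * improvement_ratio * sales_point   # step 8 -> 15.36

# Step 9: relative weight = importance rate / total of all importance rates.
total_importance_rates = 64.0                                    # assumed total over all "whats"
relative_weight = importance_rate / total_importance_rates       # 0.24

# Steps 10-11: importance weight of a "how" is the sum of relationship value x relative weight.
relationships = [9, 3, 1]             # this "how" versus three "whats" (strong, moderate, light)
relative_weights = [0.24, 0.40, 0.36]
importance_weight = sum(r * w for r, w in zip(relationships, relative_weights))
print(importance_weight)              # 9*0.24 + 3*0.40 + 1*0.36 = 3.72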
1. Product planning
What — customer requirements
How — product technical requirements
2. Product design
What — product technical requirements
How — part characteristics
3. Process planning
What — part characteristics
How — process characteristics
4. Production planning
What — process characteristics
How — process control methods
IMPORTANCE/FEASIBILITY MATRIX
Importance is a function of urgency and potential impact on corporate goals. It is
expressed in terms of high, medium, and low. Feasibility takes into consideration
technical requirements, resources, and the cultural and political climate. It is also
expressed in terms of high, medium, and low.
Paired Comparisons
Improvement Potential
Prioritization Factors
The first project should be a winner. It should address a chronic problem, there
should be a high likelihood of completion in several weeks, and the results should
be (a) correlated to customer needs and wants, (b) significant to the company, and
(c) measurable. Factors to be used subsequently are:
sources can be internal best performers, competitive best performers, or best in class
worldwide.
Best of Class
There is, in general, no way to know the “best” of the best. Companies generally
pick the “best” based on reputation through publications, speeches, news releases,
etc. A company might start out with four to ten “best” candidates and narrow them
down based on initial discussions.
Xerox looked at IBM and Kodak but also L.L. Bean, the catalog sales company,
known for effective and efficient warehousing and distribution of products. Addi-
tional benchmarking partners used by Xerox were:
Milliken & Company, winner of the 1989 National Quality Award, provided the
following partial list of benchmarks:
Strategy
• Safety: DuPont
• Customer satisfaction: ATT, IBM
• Innovation: 3M, KBC
• Education: IBM, Motorola
• Strategic planning: Frito-Lay, IBM, ATT
• Time-based competition: Lenscrafters
Quality Process
• Benchmarking: Xerox
• Self-managed teams: Goodyear, P&G
• Continuous improvement: Japanese companies
• Heroic goals concept: Motorola
• Role model evaluation: Xerox
• Environmental practice: DuPont, Mobay, Ciba-Geigy
• Statistical methods: Motorola
• Flow charting: Sara Lee
• Quality process: FP&L, Westinghouse, Motorola
Miscellaneous
• Security: DuPont
• Accounts payable: Mobay
• Order handling: L.L. Bean
SELECTION CRITERIA
How do you know who is the best? Here are some ways to get that information:
• Library search
• Reputation
• Consultants
• Networking
• Company size
• Customer non-price reasons to buy
• Industry critical success factors
• Availability of data
• Data collection costs
• Innovation
• Receptivity
One hundred percent accuracy of information is not required. You only need
enough to head you in the right direction.
It is also helpful to make use of trade associations and consultants and to network.
Review studies in which people have identified the characteristics of best per-
formers. Good sources here are Clifford and Cavanagh (1988), Smith and Brown
(1986), and Berle (1991).
Another good source is the Encyclopedia of Business Information Sources,
published frequently by Gale Research, Detroit, Michigan. This source contains
references by subject to the following:
Additional sources may also be found in the John Wiley publication entitled
Where to Find Business Information, as well as the following:
Books and periodicals
• Trade journals
• Functional journals
• F.W. Dodge reports
• Technical abstracts
• Local newspapers, national newspapers
• Nielsen — Market Share
• Yellow Pages
• Textbooks
• Special interest books
• City, region, state business reviews
• Standard and Poor's industry surveys
Directories
• Trade show directory
• Directory of Associations
• Brands and Their Companies
• Who Runs the Corporate 1000
• Corporate Technology Directory
• American Firms in Foreign Countries
• Corporate Affiliations
• Foreign Manufacturers in U.S.
• Directory of Company Histories
Individuals
• Company employees
• Past employees/retirees
• Social events
• Construction contractors
• Landlords, leasing agents
• Salesmen
• Service personnel
• Focus groups
Professional societies
• Professional society members
• Trade shows/conventions
• National associations
• User groups
• Seminars
• Rating services
• Newsletters
Government
• Public bid openings
• Proposals
• National Technical Information Center
• Freedom of Information Act
• Occupational Safety and Health Administration (OSHA) filings
• Environmental Protection Agency (EPA) filings
• Commerce Business Daily
• Government Printing Office Index
• Federal depository libraries
• Court records
• Bank filings
• Chamber of Commerce
• Government Industrial Program reviews
• Uniform Commerce Code filings
• State corporate filings
• County courthouse
• U.S. Department of Commerce
• Federal Reserve banks
• Legislative summaries
• The Federal Database Finder
• Patents
Customers
• New customers
• Consumer groups
Industry members
• Suppliers
• Equipment manufacturers
• Distributors
• Buying groups
• Testing firms
Snooping
• Reverse engineering
• Hire past employees
• Interview current employees
• Dummy purchases
• Shopping
• Request a proposal
• Hire to do one job
• Apply for a job
• Mole
• Site inspections
• Trash
• Chatting in bars
• Surveillance equipment
Schools and universities
• Directories of case studies
• Industry studies
Consultants
• Business schools on a consulting basis
• Jointly sponsored studies
• Information brokers
• Industry studies
• Market research studies
• Seminars
• Mission
• Objective/scope
• Statement of importance
• Information available
• Critical questions
• Ethical and legal issues
• Partner selection
• Roles and responsibilities
• Visit schedule
• Data analysis requirements
• Form of recommendation
You need to understand your own operations very thoroughly before comparing
them with the operations of others. Here are some steps you should take to make
sure that you understand your current methods:
Ask open-ended questions. For example, for “who”:
Ask similar questions for what, where, when, why, and how.
Activity Analysis
• Function
• Process
• Marketing and sales
• Sell products
Activity: These are the major action steps of a process. For example, make a
proposal.
Example: bad weather, poor product quality, automated equipment, workplace layout
• Cost/lot
• Pieces/hour
• Cost/unit
• Square foot per person
• Patents per engineer
• Drawings per engineer
Examples of Modeling
The modeling of raw material cost per unit produced might consider the following
variables:
When working with salaries and wages, it is necessary to take into consideration
factors such as headcount, rate by grade, straight time/overtime ratios, benefits, skill
level, age, education, union vs. non-union, and incentives. Salary and wage ratios
that can be benchmarked are:
• Skilled/unskilled labor
• Direct/indirect labor
• Training cost per employee
• Overtime hours/straight time hours
To determine the sales dollars from a new account, start by flow charting the steps
required to sell a new account. Start with cold calls and work through to a close.
Use of symbols in flow charting:
• Start or stop
• Flow lines
• Activity
• Document
• Decision
• Connector
Picking operations
Orders filled per person per day
Line items per person per day
Pieces per person per day
Number of picks per order
Standard hours earned per day
Line items per order
Receiving operations
Number of trucks unloaded per shift
Number of pallets received per day
Number of cases received per day
Number of errors per day
Direct labor hours unloading trucks
Incoming QC operations
Number of inspections per period
Number of rejects per period
Direct inspection labor hours
Putaway/storage operations
Number of full pallet putaways per period
Number of loose case putaways per period
Direct labor hours putaway or storage
Cube utilization
Truck loading
Number of units loaded per truck per period
Number of trucks per period
Time per trailer
Customer service operations
Fill rate
Elapsed time between order and shipment
Error rate
Customer calls taken per day
Number of problems solved per call
Number requiring multiple calls
Number of credits issued
Number of backorders
At this stage we are ready to identify information required when meeting with
the benchmark partner. The following information is typical and may be used to
focus the meeting with the benchmark partner and to highlight information require-
ments:
Review the assumptions for the study to make sure that the outcomes are
correlated to what you were studying. (At this stage, it is not unusual to find surprises.
That is, you may find items that you overlooked or you thought were unimportant
and so on.)
Ask open-ended questions, just as you did when observing your own operations.
For example, for “who”:
• Who does it per the job description?
• Who is actually doing it?
• Who else could do it?
• Who should be doing it?
Ask similar questions for what, where, when, why, and how.
Follow the procedures described above for analyzing the company activities. You
may encounter some analytical difficulties because of the following factors.
Accounting differences
Account definitions vary in terms of what is included in the account. For
example, does the cost of raw material include the cost of freight in and
insurance? Where is scrap accounted for?
Cost allocations.
Identification of all multi-department costs.
Different economies of scale/learning curve
Specialization
Automation
Time/unit
Factors to consider when trying to determine if you have identified all the factors
required for success include the following:
Benchmarking Examples
1. Functional Analysis
Hours/1000 pcs
Functions Company Best Company Gap
2. Cost Analysis
3. Technology Forecasting
4. Financial Benchmarking
The comparison of company strategy versus industry strategy can lead to the need
for more specific benchmarking studies.
6. Warehouse Operations
The performance of units engaged in essentially the same type of activity can be
compared using statistical regression analysis. This technique can be used to deter-
mine the significant independent variables and their impact on costs. Exceptionally
good and bad performance can be identified and this provides the basis for further
benchmarking studies.
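A minimal sketch of such a regression follows, comparing hypothetical warehouses on cost per line item against volume handled; the data, the single explanatory variable, and the simple least-squares fit are all illustrative assumptions, since a real study would include more drivers.

# Hypothetical data for several warehouses: line items handled (thousands) and cost per line item.
volume = [120, 200, 260, 310, 400, 150]
cost_per_item = [2.4, 1.9, 1.8, 1.6, 1.5, 2.5]

n = len(volume)
mx, my = sum(volume) / n, sum(cost_per_item) / n
slope = sum((x - mx) * (y - my) for x, y in zip(volume, cost_per_item)) / sum((x - mx) ** 2 for x in volume)
intercept = my - slope * mx

# Units whose actual cost differs most from the regression prediction are benchmarking candidates.
for i, (x, y) in enumerate(zip(volume, cost_per_item), start=1):
    predicted = intercept + slope * x
    print(f"warehouse {i}: actual {y:.2f}, predicted {predicted:.2f}, difference {y - predicted:+.2f}")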
7. PIMS Analysis
The Center for Advanced Purchasing Studies (Tempe, Arizona) has benchmarked
the purchasing activity for the petroleum, banking, pharmaceutical, food service,
telecommunication services, computer, semiconductor, chemical, and transportation
industries. For a wide variety of activity measures, the reports provide the average
value, the maximum, the minimum, and the median value.
Motorola Example
Perhaps one of the most famous examples of benchmarking in recent history is the
Motorola example. Motorola, through “Operation Bandit,” was able to cut the prod-
uct development time for a new pager in half to 18 months based on traveling the
world and looking for “islands of excellence.” These companies were in various
industries: cars, watches, cameras, and other technically intensive products. The
solution required a variety of actions:
• Automated factories
• Removing barriers in the workplace
• Training of 100,000 employees
• Technical sharing alliance with Toshiba
GAP ANALYSIS
DEFINITION OF GAP ANALYSIS
There are at least two ways to view “gap.”
1. Result Gap — A result gap is the difference between the company per-
formance and the performance of the best in class as determined by the
benchmarking process. This gap is defined in terms of the activity per-
formance measure. The gap can be positive or negative.
2. Practice or Process Gap — A practice or process gap is the difference
between what the company does in carrying out an activity and what the
best in class does as determined by the benchmarking process. This gap
is measured in terms of factors such as organizational structure, methods
used, technology used, or material used. The gap can be positive or
negative.
The company that ignores likely improvements of the benchmark gets caught
in the Z trap. The Z trap, of course, is the step-wise improvement that is OK for
catching up but never good enough to be the best in class.
To summarize the benchmark findings, it is often helpful to make a tabulation
showing the current practice and metric and the expected future practice and metric
for the company, the competition, and the best in class. In order for the benchmarking
process to be effective, it is critical that management accept the validity of the gap
and provide the resources necessary to close the gap.
GOAL SETTING
GOAL DEFINITION
Two terms that are often used interchangeably are “objective” and “goal.” There is,
of course, no one correct definition. As long as the terms are used consistently within
an organization, it does not really matter. For our purposes, however, objectives are
broad areas where something is to be accomplished, such as sales and marketing or
customer service. Goals, on the other hand, are specific and measurable and have a
time frame. For example, “Answer all inquiries within 2 hours by the 3rd quarter
of 2002.”
GOAL CHARACTERISTICS
For best results, goals should be (a) tough (you need to stretch to attain them) and
(b) attainable (realistic).
When evaluating these two characteristics, always take into consideration the
current capabilities of the company versus the benchmark candidate now and pro-
jected. A good way to monitor progress towards attainment is through trend charting.
GOAL STRUCTURE
Cascading Goal Structure
A consistent goal structure can provide focus and direction to the entire organization.
In order to create this, start with the most important goal, as viewed by the president
or chief executive officer, and decompose each of these by functional area working
from one management level to the next. For example, starting with a return on equity
goal, what does this mean each department has to do? What does this suggest in the
way of specific benchmarking goals?
Interdepartmental Goals
One of the most elusive tasks of management is to get all departments to work
together toward a common set of goals. One way to manage this is to have each
department indicate its goals and what it requires in the way of performance from
other departments to reach those goals. A cross tabulation can then be used to develop
the total goals for a department or function.
Combine?
How about a blend, an alloy, an assortment, an ensemble? Combine units?
Combine purpose? Combine appeals? Combine ideas?
Just because the official benchmarking study has been completed does not mean
that you are done. To the contrary, you must be vigilant in monitoring your com-
petitor’s activities by tracking the competitive performance versus plan. This is
because things change and modifications must be made to recalibrate the results.
Some items of interest are:
FINANCIAL ANALYSIS
OF BENCHMARKING ALTERNATIVES
When comparing benchmarking alternatives, it is often necessary to take into con-
sideration the fact that cash is received and/or disbursed in different time periods
for each of the alternatives. Cash received in the future is not as valuable as cash
received today because cash received today can be reinvested and earn a return. In
order to compare the current value of cash received or disbursed in different periods,
it is necessary to convert a future dollar value to its present value.
For example, the present value of $1100 received a year from now is $1000 if
money can be invested at 10%. The alternative way to view this is to note that the
future value of $1000 invested for one year at 10% is equal to 1000 times 1.10 or
$1100.
The following table can be used to determine the present value of a future cash
flow depending upon the discount rate and the number of years from the present
that the investment is made. To relate to the discussion above, note that the Present
Value Factor for one year at 10% is .9091. Therefore, the present value of $1100
received a year from now is $1000, i.e., $1100 times .9091.
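The conversion can also be carried out directly rather than read from a table. The sketch below uses the 10% rate and one-year example above; the cash flow stream in the second part is hypothetical and is only meant to show how a net present value is assembled from the same factor.

def present_value_factor(rate, years):
    """Present value of one dollar received 'years' from now at the given discount rate."""
    return 1.0 / (1.0 + rate) ** years

print(round(present_value_factor(0.10, 1), 4))       # 0.9091, as in the example above
print(round(1100 * present_value_factor(0.10, 1)))   # 1000

# Net present value of a hypothetical stream of year-end cash flows (year 0 is the investment).
cash_flows = [-5000, 1800, 2200, 2600]
npv = sum(cf * present_value_factor(0.10, year) for year, cf in enumerate(cash_flows))
print(round(npv))   # positive, so this alternative clears a 10% hurdle rate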
A typical capital project or benchmark alternative evaluation is discussed in the
following pages. The projected net income after tax as well as a summary of the
investments made in the project, the after-tax salvage value, and the cash flow
associated with the project are indicated.
The assumptions used to generate the net income are indicated below the pro-
jection. Note the separation of fixed and variable cost and the relationship between
specific assumptions and the level of capacity utilization. In this case, the investment
is assumed to occur at the end of the first year. Also, there is no increase in working
capital associated with the construction of the plant.
The cash flow can be determined in one of two ways: (a) it is equal to the net
income after tax plus depreciation or (b) it is equal to revenue minus operating
expenses minus taxes. The net present value is indicated for several discount rates
(10 to 40%). The net present value at 10% is determined, for example, as in Table 3.2.
If the company cost of capital is 15%, then this project would be acceptable
because the net present value is positive at that discount rate. A similar analysis can
be used to determine a breakeven product price — see Table 3.3.
TABLE 3.2
An Example of Cash Flow and Present Value
End of Year Cash Flow Present Value Factor Present Value
TABLE 3.3
Benchmark Project Evaluations
2001 2002 2003 2004
Assumptions
Plant capacity (units) 70,000
Unit price — start 38.00
Tax rate (%) 40
Depreciation (%) 5
Capacity utilization (%) 30 70 95
Price increase (%) 7 7 12
Operating Expense
Units Fixed Variable
REFERENCES
Berle, G., Business Information Sourcebook, Wiley, New York, 1991.
Buzzell, R.D. and Gale B.T., The PIMS Principles, Free Press, New York, 1987.
Clifford, D.K. and Cavanagh, R.E., The Winning Performance: How America’s High Growth
Midsize Companies Succeed, Bantam Books, New York, 1988.
Garvin, D.A., Managing Quality, Free Press, New York, 1988.
Hall, W.K., Survival Strategies in a Hostile Environment, Harvard Business Review, Sept./Oct.
1980, pp. 34–38.
Higgins, H. and Vincze, A., Strategic Management, Dryden Press, New York, 1989.
Smith, G.N. and Brown, P.B., Sweat Equity, Simon and Schuster, New York, 1986.
Stamatis, D.H., Total Quality Service, St. Lucie Press, Boca Raton, FL, 1996.
Stamatis, D.H., TQM Engineering Handbook, Marcel Dekker, New York, 1997.
SELECTED BIBLIOGRAPHY
Balm, G.J., Benchmarking: A Practitioner’s Guide for Becoming and Staying Best of the Best,
Quality & Productivity Management Association, Schaumburg, IL, 1992.
Barnes, B., Squeeze Play: Satisfaction Programs Are Key for Manufacturers Caught Between
Declining and Increasing Raw Material Costs, Quirk’s Marketing Research Review,
Oct. 2001, pp. 44–47.
Bosomworth, C., The Executive Benchmarking Guidebook, Management Roundtable, Boston,
MA, 1993.
Boxwell, R.J., Jr., Benchmarking for Competitive Advantage, McGraw-Hill, New York, 1994.
Camp, R., Business Process Benchmarking: Finding and Implementing Best Practices, ASQC
Quality Press, Milwaukee, WI, 1995.
Chang, R.Y. and Kelly, P.K., Improving through Benchmarking, Richard Chang Associates,
Publications Division, Irvine, CA, 1994.
Karlof, B. and Ostblom, S., Benchmarking: A Signpost to Excellence in Quality and Produc-
tivity, John Wiley & Sons, New York, 1993.
Lewis, S., Cleaning Up: Ongoing Satisfaction Measurement Adds to Japanese Janitorial Firm’s
Bottom Line, Quirk’s Marketing Research Review, Oct 2001, pp. 20–21, 68–70.
Simulation
4
As companies continue to look for more efficient ways to run their businesses,
improve work flow, and increase profits, they increasingly turn to simulation, which
is used by best-in-class operations to improve their processes, achieve their goals,
and gain a competitive edge. Simulation is used by some of the world’s most
successful companies, including Ford, Toyota, Honda, DaimlerChrysler, Volk-
swagen, Boeing, Delphi Automotive Systems, Dell Corp., Gorton Fish Co., and many
others. Both design and process simulations have become increasingly important
and integral tools as businesses look for ways to strip non-value-adding steps from
their processes and maximize human and equipment effectiveness, all parts of the
six sigma philosophy. The beauty of simulation is that, while it complements and
aids in the six sigma initiative, it can also stand alone to improve business processes.
In this chapter, we do not dwell on the mathematical justification of simulation;
rather, we attempt to explain the process and identify some of the key characteristics
in any simulation. Part of the reason we do not elaborate on the mathematical
formulas is the fact that in the real world, simulations are conducted via computers.
Also, readers who are interested in the theoretical concepts of simulation can refer
to the selected bibliography found both at the end of the chapter and at the end of
this volume.
WHAT IS SIMULATION?
Simulation is a technology that allows the analysis of complex systems through
statistically valid means. Through a software interface, the user creates a comput-
erized version of a design or a process, otherwise known as a “model.” The model
is constructed as a basic flow chart with substantial additional capabilities. It is the interface a company uses to build a model of its business process.
Simulation technology has been around for a generation or more, with early
developments mostly in the area of programming languages. In the last 10 to 15
years, a number of off-the-shelf software packages have become available. More
recently, these tools have been simplified to the point that your average business
manager with no industrial engineering skills can effectively employ this technology
without requiring expert assistance. (Some companies have actually modified the
commercial versions to adapt them to their own environments.)
Simplicity is the key to today’s simulation software. The basic simulation struc-
ture is as follows: after flow charting the process, the user inputs information about
how the process operates by simply filling in blanks. While completing a model,
the user answers three questions at each step of the process: how long does the step
take, how often does it happen, and who is involved? After the model is built and
verified, it can be manipulated to do two critical things: analyze current operations
to identify problem areas and test various ideas for improvement.
The latest improvements in simulation software have made it an excellent tool
for enhancing the design for six sigma (DFSS) process, which strives to eliminate
eight wastes: overproduction, motion, inventory, waiting, transportation, defects,
underutilized people, and extra processing. DFSS targets non-value-added
activities — the same activities that contribute to poor product quality.
In this chapter we are not going to discuss commercial packages; rather we are
going to introduce three methodologies that facilitate simulation — Monte Carlo sampling, finite element analysis (FEA), and Excel's Solver approach.
SIMULATED SAMPLING
The sampling method, known generally as Monte Carlo, is a simulation procedure
of considerable value.
Let us assume that a product is being assembled by a two-station assembly line.
There is one operator at each of the two stations. Operation A is the first of the two
operations. The operator completes approximately the first half of the assembly and
then sets the half-completed assembly on a section of conveyor where it rolls down
to operation B. It takes a constant time of 0.10 minute for the part to roll down the
conveyor section and be available to operator B. Operator B then completes the
assembly. The average time for operation A is 0.52 minute per assembly and the
average time for operation B is 0.48 minute per assembly. We wish to determine
the average inventory of assemblies that we may expect (average length of the
waiting line of assemblies) and the average output of the assembly line. This may
be done by simulated sampling as follows:
less, so the value five is plotted for 0.30 minute. For the performance
time 0.35 minute, there were 10 observations recorded, but there were
15 observations that measured 0.35 minute or less. When the cumula-
tive frequency distribution was completed, a cumulative percent scale
was constructed on the right, by assigning the number 100 to the max-
imum value, 167 in this case, and dividing the resulting scale into equal
parts. This results in a cumulative probability distribution. We can use
this distribution to say, for example, that 100 percent of the time values were 0.85 minute or less, 55.1 percent were 0.50 minute or less, and so on.
3. Sample at random from the cumulative distributions to determine specific
performance times to use in simulating the operation of the assembly line.
We do this by selecting numbers between 0 and 100 at random (represent-
ing probabilities or percents). The random numbers could be selected
by any random process, such as drawing numbered chips from a box,
using a random number table, or using computer-generated random
numbers. For small studies, the easiest way is to use a table of random
numbers.
The random numbers are used to enter the cumulative distributions in or-
der to obtain time values. In our example, we start with the random
number 10. A horizontal line is projected until it intersects the distribu-
tion curve; a vertical line projected to the horizontal axis gives the mid-
point time value associated with the intersected point on the
distribution curve, which happens to be 0.40 minute for the random
number 10. Now we can see the purpose behind the conversion of the
original distribution to a cumulative distribution. Only one time value
can now be associated with a given random number. In the original dis-
tribution, two values would result because of the bell shape of the
curve.
Sampling from the cumulative distribution in this way gives time values in
random order, which will occur in proportion to the original distribu-
tion, just as if assemblies were actually being produced. Table 4.1 gives
a sample of 20 time values determined in this way from the two distri-
butions.
4. Simulate the actual operation of the assembly line.
This is done in Table 4.2, which is very similar to waiting (queuing) line
problems. The time values for operation A (Table 4.1) are first used to
determine when the half-completed assemblies would be available to
operation B. The first assembly is completed by operator A in 0.40
minute. It takes 0.10 minute to roll down to operator B, so this point in
time is selected as zero. The next assembly is available 0.40 minute lat-
er, and so on. For the first assembly, operation B begins at time zero.
From the simulated sample, the first assembly requires 0.60 minute for
B. At this point, there is no idle time for B and no inventory. At time
0.40 the second assembly becomes available, but B is still working on
the first so the assembly must wait 0.20 minute. Operator B begins
TABLE 4.1
Simulated Samples of 20 Performance Time Values for Operations A and B
                 Operation A                                      Operation B
Random Number    Performance Time from Cumulative    Random Number    Performance Time from Cumulative
                 Distribution for Operation A                          Distribution for Operation B
10 0.40 79 0.60
22 0.40 69 0.50
24 0.45 33 0.40
42 0.50 52 0.45
37 0.45 13 0.35
77 0.60 16 0.35
99 0.85 19 0.35
96 0.75 4 0.30
89 0.65 14 0.35
85 0.65 6 0.30
28 0.45 30 0.40
63 0.55 25 0.35
9 0.40 38 0.40
10 0.40 0 0.25
7 0.35 92 0.70
51 0.50 82 0.60
2 0.30 20 0.35
1 0.25 40 0.40
52 0.50 44 0.45
7 0.35 25 0.35
Totals 9.75 8.20
work on it at 0.60. From Table 4.1, the second assembly requires 0.50
minute for B. We continue the simulated operation of the line in this
way.
The sixth assembly becomes available to B at time 2.40, but B was ready
for it at time 2.30. He therefore was forced to remain idle for 0.10
minute because of lack of work. The completed sample of 20 assem-
blies is progressively worked out — see Table 4.2.
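The whole procedure is easy to reproduce in a few lines of code. The sketch below is a minimal Python version of steps 3 and 4: it samples performance times by inverting a cumulative distribution and then walks the assemblies through the two stations. The frequency distributions are illustrative placeholders (only the sample of 20 assemblies comes from the example; the constant 0.10-minute conveyor time merely shifts the time origin, as in the text, and cancels out), so the resulting numbers will differ from Tables 4.1 and 4.2.

    import random
    import bisect

    # Hypothetical discrete performance-time distributions (minutes -> frequency).
    dist_a = {0.25: 2, 0.30: 5, 0.35: 10, 0.40: 25, 0.45: 30, 0.50: 35,
              0.55: 25, 0.60: 15, 0.65: 10, 0.75: 6, 0.85: 4}
    dist_b = {0.25: 4, 0.30: 10, 0.35: 30, 0.40: 40, 0.45: 35, 0.50: 25,
              0.60: 15, 0.70: 8}

    def make_sampler(dist):
        # Build the cumulative percent scale and return a function that maps a
        # random percent (0-100) back to a time value, as in step 3 of the text.
        times = sorted(dist)
        total = sum(dist.values())
        cum, running = [], 0
        for t in times:
            running += dist[t]
            cum.append(100.0 * running / total)
        def sample(rand_pct):
            i = min(bisect.bisect_left(cum, rand_pct), len(times) - 1)
            return times[i]
        return sample

    sample_a, sample_b = make_sampler(dist_a), make_sampler(dist_b)

    N = 20                      # number of assemblies, as in the example
    avail = 0.0                 # time the next part reaches operator B (zero = first part)
    b_free = 0.0                # time operator B finishes the current part
    idle_b = wait_parts = 0.0
    for _ in range(N):
        idle_b += max(0.0, avail - b_free)       # B waiting for work
        wait_parts += max(0.0, b_free - avail)   # part waiting for B
        start = max(avail, b_free)
        b_free = start + sample_b(random.uniform(0, 100))
        avail += sample_a(random.uniform(0, 100))  # when the next part reaches B

    print(f"Idle time of B: {idle_b:.2f} min; waiting time of parts: {wait_parts:.2f} min")
    print(f"Line output: {60 * N / b_free:.1f} assemblies per hour (from the moment the first part reaches B)")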
The summary at the bottom of Table 4.2 shows the result in terms of the idle
time in operation B, the waiting time of the parts, the average inventory between
the two operations, and the resulting production rates. From the average times given
by the original distributions, we would have guessed that A would limit the output
of the line since it was the slower of the two operations. Actually, however, the line
production rate is less than that dictated by A (116.5 pieces per hour compared to
123 pieces per hour for A as an individual operation). The reason is that the interplay
TABLE 4.2
Simulated Operation of the Two-Station Assembly Line
when Operation A Precedes Operation B
Assemblies Available for Operation B at | Operation B Begins at | Operation B Ends at | Idle Time in Operation B | Waiting Time of Assemblies | Number of Parts in Line, Excluding Assembly Being Processed in Operation B
Note: In the above computations, 20 is the total number of completed assemblies; 9.75 is the total work
time of operation A for 20 assemblies from Table 4.1; 8.20 is the total work time, exclusive of idle time,
for operation B for 20 assemblies from Table 4.1.
of performance times for A and B does not always match up very well, and sometimes
B has to wait for work. B’s enforced idle time plus B’s total work time actually
determine the maximum production rate of the line.
A little thought should convince us that, if possible, it would have been better
to redistribute the assembly work so that A is the faster of the two operations. Then
the probability that B will run out of work is reduced. This is demonstrated by
Table 4.3, which assumes a simple reversal of the sequence of A and B. The same
TABLE 4.3
Simulated Operation of the Two-Station Assembly Line
when Operation B Precedes Operation A
Assemblies Available for Operation A at | Operation A Begins at | Operation A Ends at | Idle Time in Operation A | Waiting Time of Assemblies | Number of Parts in Line, Excluding Assembly Being Processed in Operation A
sample times have been used and the simulated operation of the line has been
developed as before. With the faster of the two operations being first in the sequence,
the output rate of the line increases and approaches the rate of the limiting operation,
and the average inventory between the two operations increases. With the higher
average inventory there, the second operation in the sequence is almost never idle
owing to lack of work. Actually, this conclusion is a fairly general one with regard
to the balance of assembly lines; that is, the best labor balance will be achieved
when each succeeding operation in the sequence is slightly slower than the one
before it. This minimizes the idle time created when the operators run out of work
because of the variable performance times of the various operations. In practical
For example, commonly used elements in the automotive industry (body engi-
neering) are:
• Beams
• Rigid links
• Thin plates — triangular and quadrilateral
• Solid elements
• Springs
• Gaps (contact or interface elements)
TYPES OF ANALYSES
There are many combinations of analyses one may perform with FEA as the driving
tool. However, the two predominant types are nonlinear and dynamic. Using these types
one may focus on specific analyses — for example, nonlinearity types such as:
Geometric
• Stress less than yield strength
• Euler (elastic) buckling
• Examples: quarter panel under jacking and towing; hood following
front crash
Material
• Stress greater than yield strength or material is nonlinear elastic
• Plastic flow
• Examples: seat belt pull; door intrusion beam bending
Combination of geometric and material
• Stress is greater than yield strength and buckling takes place
• Crippling
• Examples: rails during crash; roof crush
The reader should also recognize that combinations of these types exist as well,
for example linear/static — the easiest and most economical. Most of the FEA
applications involve this kind of analysis. Examples include joint stiffness and door
sag. Nonlinear/static is less frequently used. Examples include door intrusion beam,
roof crush, and seat belt pull. Linear/dynamic is rarely used. Examples include
windshield wipers or latch mechanism. Nonlinear/dynamic is the most complex and
most expensive. Examples include knee bolster crash, front crash, and rear crash.
Let us look at these combinations a little more closely:
Linear static analysis: This is the simplest form of analytical application and
is used most frequently for a wide range of structures. The desired results
are usually the stress contours, deformed geometry, strain energy distribu-
tion, unknown reaction forces, and design optimization. Typical examples
are door sag simulation, margin/fit problems, joint stiffness evaluation, high
stress location search for all components, spot weld forces, and thermal
stresses.
Euler buckling analysis: This analysis is also relatively simple to perform and
is used to calculate critical buckling loads. Caution should be exercised
when performing this analysis because it produces analytical results that
are not conservative. In other words, the critical buckling load thus calcu-
lated is usually higher than the actual load that would be determined through
testing. A typical application is hood buckling.
Normal modes analysis: This is an extremely useful technique for determining
the natural frequencies (eigenvalues) of components and also the corre-
sponding eigenvectors which represent the modes of deformation. Strictly
speaking, this category does not fall under dynamic analysis since the
problem is not time dependent. Typical examples include instrument panels,
total vehicle or component NVH evaluation, door inner panel flutter, and
steering column shake.
Nonlinear static analysis: In general, all nonlinear analysis requires advanced
methodology and is not recommended for use by inexperienced analysts.
Usually, a graduate degree or several graduate level courses in the theory
of elasticity, plasticity, vibrations, and solid and fluid mechanics are
required to understand nonlinear behavior. Nonlinear FEA tends to be as
much an art as it is a science, and familiarity with the subject structure is
essential. Typical examples are seat back distortion, door beam bending
rigidity studies, underbody components such as front and rear rails and
wheel housings, bumper design, and crush analysis of several components.
Nonlinear dynamic analysis: This FEA category is the most advanced. It
involves very complex ideas and techniques and has become practicable
only due to the availability of super-high-speed computers. This class of
analysis involves all the complexities of nonlinear static analysis as well as
additional problems involved with iterative time step selection and contact
simulation at impact. Typical applications are related to crash evaluation
and energy management.
1. Establish objective.
2. What type of analysis? What program?
Statics
Mechanical Loads
• Forces
• Displacements
• Pressure
• Temperatures
Heat Transfer
• Conduction
• Convection
• 1-D radiation
Dynamics
Mode frequency
Mechanical load
• Transient (direct or reduced) linear
• Sinusoidal
Shock spectra
Heat transfer direct transient
Special features
Nonlinear
• Buckling
• Large displacement
• Elasticity
• Creep
• Friction, gaps
Substructuring
3. What is the minimum portion of the system or structure required?
Known forces or displacements at a point
Allows for separation
Structural symmetry
Isolation through test data
Cyclic symmetry
4. What are loading and boundary conditions?
Loading known
Loading can be calculated from simplistic analysis
Loading to be determined from test data
Support of excluded part of system established on modeled portion
Test data taken to establish stiffness of partial constraints
5. Determine model grid.
Choose element types.
Establish grid size to satisfy cost versus accuracy criterion.
6. Develop bulk data.
Establish coordinate systems.
Number node or order elements to minimize cost.
Develop node coordinates and element connectivity description.
Code the load and boundary condition (B.C.) description.
Check geometry description by plotting.
Geometry: This refers to the locations of grid points and the orientations of
coordinate systems that will be used to record components of displacements
and forces at grid points.
Element connectivities: This refers to identification numbers of the grid points
to which each element is connected.
Element properties: Examples of element properties are the thickness of a
surface element and the cross-sectional area of a line element. Each element
type has a specific list of properties.
Material properties: Examples of material properties are Young’s modulus,
density, and thermal expansion coefficient. There are several material types
available in MSC/NASTRAN. Each has a specific list of properties.
Constraints: Constraints are used to specify boundary conditions, symmetry
conditions, and a variety of other useful relationships. Constraints are
essential because an unconstrained structure is capable of free-body motion,
which will cause the analysis to fail.
Loads and enforced displacements: Loads may be applied at grid points or
within elements.
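A toy example makes these ingredients concrete. The sketch below (a hypothetical, two-element, one-dimensional bar model in Python, not MSC/NASTRAN input) shows how geometry, element connectivity, element and material properties, constraints, and loads come together into a system that can be solved for displacements:

    import numpy as np

    # Geometry: grid point coordinates along x (meters) -- hypothetical values.
    coords = np.array([0.0, 0.5, 1.0])

    # Element connectivities: each axial bar element joins two grid points.
    elements = [(0, 1), (1, 2)]

    # Element and material properties (hypothetical): area and Young's modulus.
    A = 1.0e-4          # m^2
    E = 200.0e9         # Pa

    # Assemble the global stiffness matrix from element stiffness k = AE/L.
    n = len(coords)
    K = np.zeros((n, n))
    for i, j in elements:
        L = coords[j] - coords[i]
        k = A * E / L
        K[np.ix_([i, j], [i, j])] += k * np.array([[1, -1], [-1, 1]])

    # Loads: axial force applied at the free-end grid point.
    F = np.zeros(n)
    F[2] = 1000.0       # N

    # Constraints: fix grid point 0; an unconstrained structure has free-body
    # motion and the solution fails, as noted in the text.
    free = [1, 2]
    u = np.zeros(n)
    u[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])
    print("Displacements (m):", u)

The same bookkeeping, extended to many element types and thousands of grid points, is what a commercial code automates.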
It is the responsibility of the user to verify the accuracy of the finite element
analysis results. Some suggested checks to perform are:
Loads:
• Isolation of single component of assembly
• Hard to put assumed load in controlled lab test (linear loads causing
moments)
Strain gages:
• Gage locations and orientation
• Single leg gages versus rosettes
• Improper gage lead hookup
Non-linearities:
• Plasticity
• Pin joint clearance
• Bolted joints
Therefore, to make sure that the FEA is worth the effort, the following steps are
recommended:
EXCEL’S SOLVER
Yet another simple simulation tool is Solver, found as an add-in under the Tools menu of the Excel software program. Its simplicity is astonishing, and the results may indeed be phenomenal. What is required is the transformation function. Once that is identified, the experimenter defines the constraints and the rest is computed by Solver.
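The same fill-in-the-blanks idea can be mimicked outside Excel. The sketch below, a hypothetical example using Python's scipy.optimize rather than Solver itself, defines a transformation function, adds constraints, and lets the optimizer do the rest; the function and limits are placeholders chosen only for illustration:

    from scipy.optimize import minimize

    # Hypothetical transformation function: cost as a function of two
    # process settings x[0] and x[1].
    def cost(x):
        return (x[0] - 3.0) ** 2 + (x[1] - 2.0) ** 2 + x[0] * x[1]

    # Constraints: both settings non-negative, and their sum at most 4.
    bounds = [(0.0, None), (0.0, None)]
    constraints = [{"type": "ineq", "fun": lambda x: 4.0 - (x[0] + x[1])}]

    result = minimize(cost, x0=[1.0, 1.0], bounds=bounds, constraints=constraints)
    print("Best settings:", result.x, " minimum cost:", result.fun)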
DESIGN OPTIMIZATION
In dealing with DFSS, a frequently used term is “design optimization.” What is
design optimization? Design optimization is a technique that seeks to determine an
optimum design. By “optimum design,” we mean one that meets all specified require-
ments but with a minimum expense of certain factors such as weight, surface area,
volume, stress, cost, and so on. In other words, the optimum design is one that is
as efficient and as effective as possible.
To calculate an optimum design, many methods can be followed. Here, however,
we focus on the ANSYS program, as defined by Moaveni (1999), which performs a
series of analysis-evaluation-modification cycles. That is, an analysis of the initial design
is performed, the results are evaluated against specified design criteria, and the design
is modified as necessary. This process is repeated until all specified criteria are met.
Design optimization can be used to optimize virtually any aspect of the design:
dimensions (such as thickness), shape (such as fillet radii), placement of supports,
cost of fabrication, natural frequency, material property, and so on. Actually, any
ANSYS item that can be expressed in terms of a parameter can be subjected to
design optimization. One example of optimization is the design of an aluminum
pipe with cooling fins where the objective is to find the optimum diameter, shape,
and spacing of the fins for maximum heat flow.
Before describing the procedure for design optimization, we will define some
of the terminology: design variables, state variables, objective function, feasible and
infeasible designs, loops, and design sets. We will start with a typical optimization
problem statement:
Find the minimum-weight design of a beam of rectangular cross section subject
to the following constraints:
Design Variables (DVs) are independent quantities that can be varied in order
to achieve the optimum design. Upper and lower limits are specified on the design
variables to serve as “constraints.” In the above beam example, width and height
are obvious candidates for DVs, since they both cannot be zero or negative, so their
lower limit would be some value greater than zero.
State Variables (SVs) are quantities that constrain the design. They are also
known as “behavioral constraints” and are typically response quantities that are
functions of the design variables. Our beam example has two SVs: σ (the total stress)
and δ (the beam deflection). You may define up to 100 SVs in an ANSYS design
optimization problem.
The Objective Function is the quantity that you are attempting to minimize or
maximize. It should be a function of the DVs, i.e., changing the values of the DVs
should change the value of the objective function. In our beam example, the total
weight of the beam could be the objective function (to be minimized). Only one
objective function may be defined in a design optimization problem.
A design is simply a set of design variable values. A feasible design is one that
satisfies all specified constraints, including constraints on the SVs as well as constraints
on the DVs. If even one of the constraints is not satisfied, the design is considered
infeasible.
An optimization loop (or simply loop) is one pass through the analysis-evalua-
tion-modification cycle. Each loop consists of the following steps:
At the end of each loop, new values of DVs, SVs, and the objective function
are available and are collectively referred to as a design set (or simply set).
Details of these steps are beyond the scope of this volume. However, the reader
may find the information in Moaveni (1999).
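Although the ANSYS loop itself is beyond our scope, the beam problem above can be sketched with a general-purpose optimizer to show how the pieces fit together. In the hypothetical Python example below, weight is the objective function, width and height are the design variables (with nonzero lower limits), and stress and deflection are the state variables; every numerical value is a placeholder, since the original constraints are not reproduced here.

    from scipy.optimize import minimize

    # Hypothetical beam data: simply supported span with a center point load.
    L = 2.0               # span, m
    P = 5000.0            # load, N
    E = 200.0e9           # Young's modulus, Pa
    rho = 7850.0          # density, kg/m^3
    SIGMA_MAX = 150.0e6   # allowable stress, Pa (state variable limit)
    DELTA_MAX = 0.005     # allowable deflection, m (state variable limit)

    def weight(x):                      # objective function
        w, h = x
        return rho * w * h * L

    def stress(x):                      # state variable: sigma = M*c/I
        w, h = x
        M = P * L / 4.0
        I = w * h ** 3 / 12.0
        return M * (h / 2.0) / I

    def deflection(x):                  # state variable: delta = P*L^3/(48*E*I)
        w, h = x
        I = w * h ** 3 / 12.0
        return P * L ** 3 / (48.0 * E * I)

    # Design variables with lower limits greater than zero, as in the text.
    bounds = [(0.01, 0.5), (0.01, 0.5)]
    constraints = [
        {"type": "ineq", "fun": lambda x: SIGMA_MAX - stress(x)},
        {"type": "ineq", "fun": lambda x: DELTA_MAX - deflection(x)},
    ]

    result = minimize(weight, x0=[0.05, 0.1], bounds=bounds, constraints=constraints)
    print("Optimum width, height (m):", result.x, " weight (kg):", result.fun)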
REFERENCES
Buchanan, G.R., Schaum’s Outline of Finite Element Analysis, McGraw-Hill Professional
Publishing, New York, 1994.
Cook, R., Finite Element Modeling for Stress Analysis, Wiley, New York, 1995.
Moaveni, S., Finite Element Analysis: Theory and Applications with ANSYS, Prentice Hall,
Upper Saddle River, NJ, 1999.
Schaeffer, H.G., MSC/NASTRAN Primer: Static and Normal Modes Analysis, MSC, New
York, 1998.
SELECTED BIBLIOGRAPHY
Adams, V. and Askenazi, A., Building Better Products with Finite Element Analysis, OnWord
Press, New York, 1998.
Belytschko, T., Liu, W.K., and Moran, B., Nonlinear Finite Elements for Continua and
Structures, Wiley, New York, 2000.
Hughes, T.J.R., The Finite Element Method: Linear Static and Dynamic Finite Element
Analysis, Dover Publications, New York, 2000.
Malkus, D.S. et al., Concepts and Applications of Finite Element Analysis, 4th ed., Wiley,
New York, 2001.
Rieger, M. and Steele, J., Basic Course in FEA Modeling, Machine Design, June 6, 1981,
pp. 7–8.
Rieger, M. and Steele, J., Basic Course in FEA Modeling, Machine Design, July 9, 1981, pp.
8–10.
Rieger, M. and Steele, J., Advanced Techniques in FEA Modeling, Machine Design, July 23,
1981, pp. 7–12.
Shih, R., Introduction to Finite Element Analysis Using I-DEAS Master Series 7, Schroff
Development Corp. Publications, New York, 1999.
Zienkiewicz, O.C. and Taylor, R.L., Finite Element Method: Volume 1, The Basis, Butterworth-Heinemann, London, 2000.
Zienkiewicz, O.C. and Taylor, R.L., Finite Element Method: Volume 2, Solid Mechanics, Butterworth-Heinemann, London, 2000.
Zienkiewicz, O.C. and Taylor, R.L., Finite Element Method: Volume 3, Fluid Dynamics, Butterworth-Heinemann, London, 2000.
5 Design for Manufacturability/Assembly (DFM/DFA or DFMA)
When we talk about design for manufacturability/assembly (DFM/DFA or DFMA),
we describe a methodology that is concerned with reducing the cost of a product
through simplification of its design. In other words, we try to reduce the number of
individual parts that must be assembled and ultimately, increase the ease with which
these parts can be put together.
By focusing on these two items we are able to:
To maximize
a. Simplicity of design
b. Economy of materials, parts, and components
c. Economy of tooling/fixtures, process, and methods
d. Standardization
e. Assembliability
f. Testability
g. Serviceability
h. Integrity of product features
To minimize
a. Unique processes
b. Critical, precise processes
c. Material waste, or scrap due to process
d. Energy consumption
e. Generation of pollution, liquid or solid
f. Waste
g. Limited available materials, components, and parts
h. Limited available, proprietary, or long lead time equipment
i. Degree of ongoing product and production support
(Figure: trade-offs among producibility, reliability, and performance; (a) old design — goals: reliability, better performance.)
By doing a DFM/DFA, we are able to take into consideration many inputs with
the intent of optimizing the design in terms of the following characteristics:
(Figure: the traditional sequence over time — marketing specification and function confirmation, engineering product design, manufacturing production, quality inspection, product to customer — contrasted with concurrent product design/development, manufacturing process design/development, and equipment design/development with capability assessment.)
Figure 5.4 shows the modern way of addressing these concerns: the voice of the customer drives product alternative(s) and process alternative(s), which feed the business decision (cost and investment) and, from there, manufacturing, production, and quality. The arrows between product and process indicate possible alternatives. For example, if we examine the producibility for a textile component, we could look at the following material considerations:
• Natural
• Synthetic
• Properties
• Processes
• Applications
• Pattern layout
• Cutting
• Sewing assembly
• Types
• Processes
• Characteristics
• Duration
• Responsibility
b. Specific performance test:
• Function
• Appearance
• Durability
c. Use project management techniques.
d. Concentrate on the concept of getting it done right the first time, not
only doing it right the first time.
e. Focus on the high leverage items — get some encouraging news first.
f. Locate and prioritize the resource.
g. Management commitment.
h. Individual commitment.
Manage the DFMA project
• Ensure regular and formal review of the status by charter members.
• Regularly prepare and formalize executive reports; get feedback.
• Ensure total team inputs and contributions, not only involvement.
• Utilize proven tools/methodologies.
• Make adjustment with team consensus.
• Ensure adequate resources with proper priorities.
• Control the progress of the project.
Product Design
1. Opportunity cost
2. Development risk
3. Manufacturing risk
Company: IBM
Product: Personal computer
Environment: Forecasted annual growth rate of 60%. Competitors, i.e., Apple and Tandy, are controlling market developments and are beginning to cut into
IBM’s traditional office market.
Analysis: Opportunity cost is high. Development cost is low ($10 million
compared to IBM’s equity value of $18 billion). The technology of design
and process are stable and internally available.
Decision: Crash program approach — develop, design, manufacture, and mar-
ket the product within 2 years.
Approach details: Deviate from the standard eight-phase design procedure. Give the development team complete freedom in product planning; keep interference to a minimum; and allow the use of a streamlined, relatively informal management system. Use a so-called zero procedure approach,
focusing on development speed rather than risk reduction of product, man-
ufacturing, and so on.
Results: Introduce the product within 2 years. Customer acceptance is good.
Cost overrun by 15%. Cost of goods sold is about 5% unfavorable to the
original estimate. Market share is questionable. Long term effects — ???
(Does this sound familiar? Quite a few organizations take this approach and of
course, they fail.)
Company: Boeing
Product: Boeing 727 replacement aircraft (767)
Environment: Replacement within ten years is inevitable (may be speeded up
to 5 years). Competitor, i.e., Airbus, has started its design. A new mid-range
aircraft may take 727 replacement market away due to the operating/fuel
inefficiency, comfortability, and Environmental Protection Agency (EPA)
restraints.
Analysis: Opportunity cost is high. (There is a need for 200–300 seat market;
727 is becoming obsolete.) Development cost is high (estimated $1.5 billion
compared to entire company equity of $1.4 billion). Development and
manufacturing risk is high. Technology and customer preferences are pre-
dictable but not yet crystallized. (Should it have two engines or three?
Should its cockpit allow for two people or three? Cruise range? Fuel
consumption? Pricing?)
Decision: Perfect product design approach. Complete the development of all
new technologies of design and manufacturing processes in the early stages
of research and development (R and D). Test everything in sight, and move
product to launch only when success is nearly guaranteed. Eight-year design
lead time.
Approach details: Form an R and D team of 400 engineers/managers that
includes designer, manufacturing engineer, quality, purchasing, and mar-
keting. (The team member number goes up to 1000 right before go-ahead.)
Apply concurrent engineering and DFMA process fully in the product R
and D stage.
Results: Introduce the 767 on schedule (which compares to the Airbus A310, eight months behind schedule). Although Boeing had missed the 300–350 seat market and lost some of the 727 replacement market to the Airbus A300, Boeing got to keep the 200–300 seat market with a successful 767. Development costs
were within budget and cost of goods sold was 4% favorable to the original
estimates. No recall record so far. Long term effects — likely good.
Most likely, you are somewhere in between. The other approaches (see Figure 5.5)
include:
Product design has dictated (whether one wants to admit it or not) the future of
the product. About 95% of the material costs and 85% of the design/labor and
overhead costs are controlled by the design itself. Once the design is complete, about
85% of the manufacturing process has been locked in.
Design-related factors affecting the manufacturing process include:
• Product size/weight
• Reliability/quality requirement
• Architectural structure
• Fastener/joint methods
• Parts/components/materials
• Size, shape, and weight of parts/components
• Appearance/cosmetic requirement
• Floor space
• Material flow and process flow
• Power, compressed air, a/c and heating, and facility
• Quality plan
• Manual operation mandatory
• Mechanized operation or automation operation mandatory
• System interfacing requirement
• Manufacturing process concepts/philosophy — cpf vs. in-line vs. batch
vs. cellular approaches
• Management commitment
• Production volume
Volume requirements have the major influence on the choice of the man-
ufacturing process.
• Product life cycle
As with volume requirements, product life has a significant influence on
the manufacturing process.
• Funding
Since most mechanization and automation is heavily capitalized, fund-
ing plays a major role in determining the product plan, which has a sig-
nificant influence on the manufacturing process.
• Cost of goods sold
What is affordable capital/tooling/fixture amortization?
What is the targeted cost of goods sold?
The result of this understanding will facilitate the development of realistic and
agreed upon specification(s). Some of the specific items that will guide realistic
specifications are:
Mechanics:
1. Mitsubishi method
2. U-MASS method
3. MIL-HDBK-727 design guidance for producibility
All of the above methods utilize the principles of Taylor’s motion economy,
which have been proven to be quite helpful, especially in the DFA area. We identify
some of these principles here that may be profitably applied to shop and office work
alike. Although not all are applicable to every operation, they do form a basis or a
code for improving efficiency and reducing fatigue in manual work:
4. The two hands should begin as well as complete their motions at the same
time.
5. The two hands should not be idle at the same time except during rest
periods.
6. Motions of the arms should be made in opposite and symmetrical direc-
tions and should be made simultaneously.
7. Hand and body motions should be confined to the lowest classification
with which it is possible to perform the work satisfactorily.
8. Momentum should be employed to assist the worker wherever possible,
and it should be reduced to a minimum if it must be overcome by muscular
effort.
9. Eye fixations should be as few and as close together as possible.
18. The hands should be relieved of all work that can be done more advan-
tageously by a jig, a fixture, or a foot-operated device.
19. Two or more tools should be combined wherever possible.
20. Tools and materials should be pre-positioned whenever possible.
21. Where each finger performs some specific movement, such as in type-
writing, the load should be distributed in accordance with the inherent
capacities of the fingers.
22. Levers, crossbars, and hand wheels should be located in such positions
that the operator can manipulate them with the least change in body
position and with the greatest mechanical advantage.
MITSUBISHI METHOD
The Mitsubishi method was developed and fine-tuned by Japanese engineers in
Mitsubishi’s Kobe shipyard. The primary principle is the combination of QFD and
Taylor’s motion economy. The Mitsubishi method is very popular in Japan’s heavy
industries, i.e., shipbuilding industry, steel industry, and heavy equipment industry.
There is also evidence of some application of this method in Japan’s automotive,
motorcycle, and office equipment industries. More efforts are needed to promote
and share these techniques, and some effort is needed to fine-tune the Mitsubishi
method and make it practical to fit U.S. manufacturing companies’ cultures and
traditions.
The process is based on the following principles:
Table 5.1 shows an example of customer attributes (CAs) and bundles of CAs for a car door. An example of relative importance weights of customer attributes is shown in Table 5.2.
TABLE 5.1
Customer Attributes for a Car Door
Primary Secondary Tertiary
TABLE 5.2
Relative Importance of Weights
Bundles Customer Attributes Relative Importance
U-MASS METHOD
The U-MASS method is named for the University of Massachusetts, where it was
developed by two professors, Geoffrey Boothroyd and Peter Dewhurst, and their
graduate students. It is the most common DFM/DFA approach used in the U.S. The
TABLE 5.3
Customer’s Evaluations of Competitive Products
Customer Relative Customer
Bundles Attributes Importance Perceptions
primary principle is the conventional motion and time study, while keeping in mind
the component counts and motion economy.
This method is heavily promoted in academic communities or institute-related
manufacturing companies located in the New England area, such as Digital Equipment
Corp. and Westinghouse Electric Company. Other companies are using it as well, such
as Ford Motor Co., DaimlerChrysler, and many others. Its appeal seems to be the
availability of the software that may be purchased from Boothroyd and Dewhurst. (Some
practitioners find the software very time-consuming in design efficiency calculation and
believe that more work is needed to fine tune its efficiency, as well as make it more
user friendly.) The process is based on the following principles:
MIL-HDBK-727
This method was developed by the U.S. Army Material Command and published by the Naval Publications and Forms Center. The first edition was published in 1971,
and the latest revision was published in April 1984. The primary principle is Taylor’s
motion economy and some other design tools, i.e., DOE. This method is not too
popular. Not many people know about it, and it is not used very much outside of
the military. Some updates and revisions are needed to make it more practical to
general manufacturing companies.
Constraints
Personnel Policies
Quality Control/Assurance
Purchasing
• Fork lifts
• Parts/component feeding system:
• Vibratory bowl feeder
• Reciprocating tube hopper feeder
• Centerboard hopper feeder
• Reciprocating fork hopper feeder
• External gate hopper feeder
• Rotary disk feeder
• Centrifugal hopper feeder
• Revolving hook hopper feeder
• Stationary hook hopper feeder
• Bladed wheel hopper feeder
• Tumbling barrel hopper feeder
• Rotary centerboard hopper feeder
• Magnetic disk feeder
• Elevating hopper feeder
• Magnetic elevating hopper feeder
Disadvantages:
• Capital investment — high
• Preventative maintenance and corrective maintenance — absolute
necessity (If one part breaks down, the entire line is down.)
• Engineering, technician, and flow disciplines — absolute necessity
• Flexibility — low
• Production changeover — complicated
Paced production line — one in, one out
Definition: Same cycle time at all work stations, and likely all work pieces
transfer at the same time
Advantages:
• Work-in-process — very low and can be calculated
• Material handling — automatic
• Material flow — good
• Productivity — best
Disadvantages:
• Capital investment — high
• Preventative maintenance and corrective maintenance — absolute
necessity (If one part breaks down, the entire line is down.)
• Engineering, technician, and flow disciplines — absolute necessity
• Flexibility — very low
• Production changeover — difficult
MISTAKE PROOFING
DEFINITION
Mistake proofing by definition is a process improvement system that prevents per-
sonal injury, promotes job safety, prevents faulty products, and prevents machine
damage. It is also known as the Shingo method, Poka Yoke, error proofing, fail safe
design, and by many other names.
THE STRATEGY
Establish a team approach to mistake proof systems that will focus on both internal
and external customer concerns with the intention of maximizing value. This will
include quality indicators such as on-line inspection and probe studies.
The strategy involves:
• Concentrating on the things that can be changed rather than on the things
that are perceived as having to be changed to improve process performance
• Developing the training required to prepare team members
• Involving all the appropriate people in the mistake proof systems process
• Tracking quality improvements using in-plant and external data collection
systems (before/after data)
DEFECTS
Many things can and often do go wrong in our ever-changing and increasingly
complex work environment. Opportunities for mistakes are plentiful and often lead
to defective products. Defects are not only wasteful but result in customer dissatis-
faction if not detected before shipment.
The philosophy behind mistake proof systems suggests that if we are going to
be competitive and remain competitive in a world market we cannot accept any
number of defects as satisfactory.
In essence, not even one defect can be tolerated. Mistake proof systems are a
simple method for making this philosophy become a daily practice. Simple concepts
and methods are used to accomplish this objective.
The concept of error proof systems has been in existence for a long time; we simply have not attempted to turn it into a formalized process. It has often been referred to
as idiot proofing, goof proofing, fool proofing, and so on. These terms often have a
negative connotation that appears to attack the intelligence of the individual involved
and therefore are not used in today’s work environment. For this reason we have
selected the term “mistake proof system.” The idea behind a mistake proof system
is to reduce the opportunity for human error by taking over tasks that are repetitive
or actions that depend solely upon memory or attention. With this approach, we
allow the worker to maintain dignity and self-esteem without the negative connota-
tion that the individual is an idiot, goof, or fool.
Forgetfulness Mistakes
There are times when we forget things, especially when we are not fully concen-
trating or focusing. An example that can result in serious consequences is the failure
to lock out a piece of equipment or machine we are working on. To preclude this,
precautionary measures can be taken: post lock out instructions at every piece of
equipment and/or machine; have an ongoing program to continuously alert operators
of the danger.
Mistakes of Misunderstanding
Jumping to conclusions before we are familiar with the situation often leads to
mistakes. For example, visual aids are often prepared by engineers who are thor-
oughly familiar with the operation or process. Since the aid is completely clear from
their perspective, they may make the assumption (and often do) that the operator
fully understands as well. This may not be true. To preclude this, we may test this
hypothesis before we create an aid; provide training/education; standardize work
methods and procedures.
Identification Mistakes
Situations are often misjudged because we view them too quickly or from too far
away to clearly see them. One example of this type of mistake is misreading the
identification code on a component of a piece of equipment and replacing that
component with the wrong part. To prevent these errors, we might improve legibility
Amateur Errors
Lack of experience often leads to mistakes. Newly hired workers will not know the
sequence of operations to perform their tasks and often, due to inadequate training,
will perform those tasks incorrectly. To prevent amateur errors, provide proper
training; utilize skill building techniques prior to job assignment; use work stan-
dardization.
Willful Mistakes
Willful errors result when we choose to ignore the rules. One example of this type
of error is placing a rack of material outside the lines painted on the floor that clearly
designate the proper location. The results can be damage to the vehicle or the material
or perhaps an unsafe work condition. To prevent this situation, provide basic edu-
cation and/or training; require strict adherence to the rules.
Inadvertent Mistakes
Sometimes we make mistakes without even being aware of them. For example, a
wrong part might be installed because the operator was daydreaming. To minimize
this, we may standardize the work, through discipline if necessary.
Slowness Mistakes
When our actions are slowed by delays in judgment, mistakes are often the result.
For example, an operator unfamiliar with the operation of a fork lift might pull the
wrong lever and drop the load. Methods to prevent this might be: skill building;
work standardization.
Mistakes Due to Lack of Standards
Mistakes will occur when there is a lack of suitable work standards or when workers
do not understand instructions. For example, two inspectors performing the same
inspection may have different views on what constitutes a reject. To prevent this,
develop operation definitions of what the product is expected to be that are clearly
understood by all; provide proper training and education.
Surprise Mistakes
Intentional Mistakes
Mistakes are sometimes made deliberately by some people. These fall in the category
of sabotage. Disciplinary measures and basic education are the only deterrents to
these types of mistakes.
There are many reasons for mistakes to happen. However, almost all of these
can be prevented if we diligently expend the time and effort to identify the basic
conditions that allow them to occur, such as:
and then determine what steps are needed to prevent these mistakes from recurring —
permanently.
The mistake proof system approach and the methods used give you an oppor-
tunity to prevent mistakes and errors from occurring.
1. Errors are inevitable: People will always make mistakes. Accepting this
premise makes one question the rationale of blaming people when mis-
takes are committed. Maintaining this “blame” attitude generally results
in defects. Also, quite often errors are overlooked when they occur in the
production process. To avoid blame, the discovery of defects is postponed
until the final inspection, or worse yet, until the product reaches the
customer.
2. Errors can be eliminated: If we utilize a system that supports (a) proper
training and education and (b) fostering the belief that mistakes can be
prevented, then people will make fewer mistakes. This being true, it is
then possible that mistakes by people can be eliminated.
Sources of mistakes may be any one of the six basic elements of a process:
1. Measurement
2. Material
3. Method
4. Manpower
5. Machinery
6. Environment
TABLE 5.4
Examples of Mistakes and Defects

Mistake                                                          Resulting Defects
Failure to put gasoline in the snow blower                       Snow blower will not start
Failure to close window of unit being tested                     Seats and carpet are wet
Failure to reset clock for daylight savings time                 Late for work
Failure to show operator how to properly assemble components     Defective or warped product
Proper weld schedule not maintained on welding equipment         Bad welds, rejectable and/or scrap material
Low charged battery placed in griptow                            Griptow will not pull racks, resulting in lost production, downtime, etc.
consequence of the interaction of all six elements and the actual work performed in
the process. Furthermore, we must recognize that the role of inspection is to audit
the process and to identify the defects. It is an appraisal system and it does nothing
for prevention. Product quality is changed only by improving the quality of the
process. Therefore, the first step toward elimination of defects is to understand the
difference between defects and mistakes (errors):
Assembly mistakes
Inadequate training
Symmetry (parts mounted backwards)
Too many operations to perform
Multiple parts to select from with poor or no identification
Misread or unfamiliar with parts/products
Tooling broken and/or misaligned
New operator
Processing mistakes
Part of process omitted (inadvertent/deliberate)
Fixture inadequate (resulting in parts being set into incorrectly)
Symmetrical parts (wrong part can be installed)
Figure 5.8 shows major inspection techniques. Source inspection utilizing mistake
proofing system devices is the most logical method of defect prevention.
Mistake proof system “devices” are simple and inexpensive. There are essentially
two types of devices used:
(Figure: parts flow from operation #1 to operation #2 and then ship to the customer, with mistake proof devices at each operation. Second function: detects mistakes as they are occurring, but before they result in defects.)
However, to reach the state of defect free system, in addition to signals and
inspection we must also incorporate appropriate sensors to identify, stop, and/or
correct a problem before it goes to the next operation. Sensors are very important
in mistake proofing, so let us look at them a little closer.
A sensor is an electrical device that detects and responds to changes in a given
characteristic of a part, assembly, or fixture — see Figure 5.9. A sensor can, for
example, verify with a high degree of accuracy the presence and position of a part
on an assembly or fixture and can identify damage or wear. Some examples of types
of sensors and typical uses are:
1. Sensors
2. Sequence restrictors
3. Odd part out method
4. Limit or microswitches, proximity detectors
5. Templates
6. Guide rods or pins
7. Stoppers or gates
8. Counters
9. Standardized methods of operation and/or material usage
10. Detect delivery chute
11. Critical condition indicators
12. Probes
13. Mistake proof your mistake proof system
and so on
REFERENCES
Boothroyd, G. and Dewhurst, P., Product Design for Assembly, Boothroyd Dewhurst, Inc.,
Wakefield, RI, 1991.
MIL-HDBK-727, Design Guidance for Producibility, U.S. Army Material Command, Wash-
ington, DC, 1986.
Mitsubishi, Mitsubishi Design Engineering Handbook, Mitsubishi, Kobe, Japan, 1976.
Munro, A., S. Munro and Associates, Inc., Design for Manufacture, training manual, 1992.
SELECTED BIBLIOGRAPHY
Anon., How To Achieve Error Proof Manufacturing: Poka-Yoke and Beyond: A Technical
Video Tutorial, SAE International, undated (may be ordered online for $895 [$25
preview copy]).
Anon., 21st Century Manufacturing Enterprise Strategy, An Industry Led View, Volumes 1
and 2, Iacocca Institute, Lehigh University, PA, 1991.
Anon., Mistake-Proofing for Operators: The ZQC System, The Productivity Press Develop-
ment Team, Productivity Press, Portland, OR, 1997.
Anon., Manufacturing Management Handbook for Program Manager, ABN Fort Belvoir, VA,
1982.
Anon., Product Engineering Design Manual, Litton Industries, Beverly Hills, CA, 1978.
Azuma, L. and Tada, M., A case history development of a foolproofing interface documen-
tation system, IEEE Transactions on Software Engineering, 19, 765–773, 1993.
Bandyopadhyay, J.K., Poka Yoke systems to ensure zero defect quality manufacturing, Inter-
national Journal of Management, 10(1), 29–33, 1993.
Barkers, R., Motion and Time Study: Design and Measurement of Work, Cot Loge Book
Company, Los Angeles, 1970.
Barkman, W.E., In-Process Quality Control for Manufacturing, Marcel Dekker, New York,
1989. (Preface and Chapter 3 are of particular interest.)
Bayer, P.C., Using Poka Yoke (mistake proofing devices) to ensure quality, IEEE 9th Applied
Power Electronics Conference Proceedings, 1, 201–204, 1994.
Bodine, W.E., The Trend: 100 Percent Quality Verification, Production, June 1993, pp. 54–55.
Bosa, R., Despite fuzzy logic and neural networks, operator control is still a must, CMA, 69,
7, 995.
Boothroyd, G. and Murch, P., Automatic Assembly, Marcel Dekker, New York, 1982.
Brehmer, B., Variable errors set a limit to adaptation, Ergonomics 33, 1231–1239, 1990.
Brall, J., Product Design for Manufacturing, McGraw-Hill, New York, 1986.
Casey, S., Set Phasers on Stun and Other True Tales of Design, Technology, and Human
Error, Aegean, Santa Barbara, CA, 1993.
Chase, R.B., and Stewart, D.M., Make Your Service Fail-safe, Sloan Management Review,
Spring 1994, pp. 35–44.
Chase, R.B. and Stewart, D.M., Designing Errors Out, Productivity Press, Portland, OR,
1995. Note of interest: Productivity Press has discontinued sales of this book (a very
sad outcome). Some copies may be available in local bookstores. It is both more
readable and broader in application than Shingo but does not have a catalog of
examples as Shingo does.
Damian, J., “Agile Manufacturing” Can Revive U.S. Competitiveness, Industry Study Says —
A Modest Proposal, Electronics, Feb. 1992, pp. 34, 42–44.
Dove, R., Agile and Otherwise — Measuring Agility: The Toll of Turmoil, Production, Jan.
1995, pp. 12–15.
Dove, R., Agile and Otherwise — The Challenges of Change, Production, Feb. 1995, pp.
14–16.
Gross, N., This Is What the U.S. Must Do To Stay Competitive, Business Week, Dec. 1991,
pp. 21–24.
Grout, J.R., Mistake-Proofing Production, working paper 75275–0333, Cox School of Busi-
ness, Southern Methodist University, Dallas, 1995.
Grout, J.R. and Downs, B.T., An Economic Analysis of Inspection Costs for Failsafing
Attributes, working paper 95–0901, Cox School of Business, Southern Methodist
University, Dallas, 1995.
Grout, J.R. and Downs, B.T., Fail-safing and Measurement Control Charts, 1995 Proceedings,
Decision Sciences Institute Annual Meetings, Boston, MA, 1995.
Henricks, M., Make No Mistake, Entrepreneur, Oct. 1996, pp. 86–89. (Last quote should
read “average net savings of around $2500 a piece...” not average cost.)
Hinckley, C.M. and Barkan, P., The role of variation, mistakes, and complexity in producing
nonconformities, Journal of Quality Technology 27(3), 242–249, 1995.
Jaikumar, R., Manufacturing a’la Carte Agile Assembly Lines, Faster Development Cycles,
200 Years to CIM, IEEE Spectrum, 76–82, Sept. 1993.
Kaplan, G., Manufacturing a’la Carte Agile Assembly Lines, Faster Development Cycles,
Introduction, IEEE Spectrum, 46–51, Sept. 1993.
Kelly, K., Your Job Managing Error is Out of Control, Addison-Wesley, New York, 1994.
Kletz, T., Plant Design for Safety: A User-Friendly Approach, Hemisphere Publishing Corp.,
New York, 1991.
Lafferty, J.P., Cpk of 2 Not Good Enough for You? Manufacturing Engineering, Oct. 1992,
p. 10.
Ligus, R.G., Enterprise Agility: Jazz in the Factory, Industrial Engineering, Nov. 1994, pp.
19–23.
Lucas Engineering and Systems Ltd., Design for Manufacture Reference Tables, University
of Hull, Hull, England, Lucas Industries, Jan. 1994.
Manji, J.F., Sharpen, C., Your Competitive Edge Today and Into the 21st Century, CALS El
Journal, Date unknown, pp. 56–61.
Marchwinski, C., Ed., Company Cuts the Risk of Defects During Assembly and Maintenance,
MfgNet: The Manufacturer’s Internet Newsletter, Productivity, Inc. Norwalk, CT, 1996.
Marchwinski, C., Ed., Mistake-proofing, Productivity, 17(3), 1–6, 1995.
Marchwinski, C., Ed., SPC vs. ZQC, Productivity, 18(1), 1–4, 1997. (Note: ZQC is another name for mistake proofing. It stands for Zero Quality Control.)
McClelland, S., Poka-Yoke and the Art of Motorcycle Maintenance, Sensor Review, 9(2), 63,
1989.
Monden, Y., Toyota Production System, Industrial Engineering and Management Press, Nor-
cross, GA, 1983, pp. 10, 137–154.
Munro, A., S. Munro and Associates, Inc., Design for Manufacture, training manual, 1994.
Munro, A., S. Munro and Associates, Inc., Trainers for Design for Manufacture, analysis,
undated.
Myers, M., Poka/Yoke-ing Your Way to Success, Network World, Sept. 11, 1995, p. 39.
Nakajo, T. and Kume, H., The principles of foolproofing and their application in manufac-
turing, Reports of Statistical Application Research, Union of Japanese Scientists and
Engineers, 32(2), 10–29, 1985.
Niebel, C. and Baldwin, J., Designing for Production, Irwin, Homewood, IL, 1963.
Nieber, C. and Draper, G., Product Design and Process Engineering, McGraw-Hill, New
York, 1974.
Noaker, P.M., The Search for Agile Manufacturing, Manufacturing Engineering, Nov. 1994,
pp. 57–63.
Norman, D.A., The Design of Everyday Things, Doubleday, New York, 1989.
O’Connor, L., Agile Manufacturing in a Responsive Factory, Mechanical Engineering, July
1994, pp. 43–46.
Otto, K. and Wood, K., Product Design: Techniques in Reverse Engineering and New Product
Development, Prentice Hall, Upper Saddle River, NJ, 2001.
Port, O., Moving Past the Assembly Line — “Agile” Manufacturing Systems May Bring a
U.S. Revival, Business Week/Re-Inventing America, 1992, pp. 17–20.
Reason, J., Human Error, Cambridge University Press, New York, 1990.
Robinson, A.G. and Schroeder, D.M., The limited role of statistical quality control in a zero
defects environment, Production and Inventory Management Journal, 31(3), 60–65,
1990.
Robinson, A.G., Ed., Modern Approaches to Manufacturing Improvement: The Shingo System,
Productivity Press, Portland, OR, 1991.
Shandle, J., Sandia Labs Launches Agile Factory Program, Electronics, Mar. 8, 1993, pp.
48–49.
Sheridan, J.H., A Vision of Agility, Industry Week, Mar. 21, 1994, pp. 22–24.
Shingo, S., Zero Quality Control: Source Inspection and the Poka-Yoke System, Trans. A.P.
Dillion, Productivity Press, Portland, OR, 1986.
Shingo S., A Study of the Toyota Production System from an Industrial Engineering Viewpoint,
Productivity Press, Portland, OR, 1989, online excerpts.
Steven, S. and Bowen., H.K., Decoding the DNA of the Toyota Production System, Harvard
Business Review, Sept./Oct, 1999, pp. 97–106.
Texas Instruments, Design to Cost: An Introduction, Corporate Engineering Council, Texas
Instruments, Inc., Dallas, 1977.
Trucks, H.E., Designing for Economical Production, SME, Dearborn, MI, 1974.
Tsuda, Y., Implications of fool proofing in the manufacturing process, in Quality Through
Engineering Design, Kuo, W., Ed., Elsevier, New York, 1993.
Vasilash, G.S., Re-engineering, Re-energizing, Objects and Other Issues of Interest, Produc-
tion, Jan. 1995, pp. 38–41.
Vasilash, G.S., On training for mistake-proofing, Production, Mar. 1995, pp. 42–44.
Ward, C., What Is Agility? Industrial Engineering, Nov. 1994, pp. 38–44.
Warm, J.S., An introduction to vigilance, in Sustained Attention in Human Performance,
Warm, J.S., Ed., Wiley, New York, 1984.
Weimer, G., Is an American Renaissance at Hand? Industry Week, May 1992, pp. 14–17.
Weimer, G., U.S.A. 2006: Industry Leader or Loser, Industry Week, Jan. 20, 1992, pp. 31–34.
Failure Mode and Effect Analysis (FMEA)
6
In its most rigorous form, an FMEA summarizes the engineer’s thoughts while
developing a process. This systematic approach parallels and formalizes the mental
discipline that an engineer normally uses to develop processing requirements.
DEFINITION OF FMEA
FMEA is an engineering “reliability tool” that identifies potential failure modes, evaluates their causes and effects, and prioritizes them so that corrective action can be taken before the failure reaches the customer.
TYPES OF FMEAS
There are many types of FMEAs (see Figure 6.1). However, the main ones are:
FIGURE 6.1 Types of FMEA. The design FMEA covers the system, subsystem, and component levels; its focus is to minimize failure effects on the system, and its objective is to maximize system quality, reliability, cost, and maintainability. The machinery FMEA focuses on design changes that lower life cycle costs, with the objective of improving the reliability and maintainability of the machinery and equipment. The process FMEA covers machines, methods, material, manpower, measurement, and environment; its focus is to minimize production process failure effects on the system, and its objective is to maximize system quality, reliability, cost, maintainability, and productivity.
IS FMEA NEEDED?
If the answer to any of the following questions is positive, then you need an FMEA:
BENEFITS OF FMEA
When properly conducted, product and process FMEAs should lead to:
1. Confidence that all risks have been identified early and appropriate actions
have been taken
2. Priorities and rationale for product and process improvement actions
3. Reduction of scrap, rework, and manufacturing costs
4. Preservation of product and process knowledge
5. Reduction of field failures and warranty cost
6. Documentation of risks and actions for future designs or processes
For a comparison of FMEA benefits with the quality lever, Figure 6.2 may help.
In essence, one may argue that the most important benefit of an FMEA is that
it helps identify hidden costs, which are quite often greater than visible costs. Some
of these costs may be identified through:
1. Customer dissatisfaction
2. Development inefficiencies
3. Lost repeat business (no brand loyalty)
4. High employee turnover
5. And so on
FMEA HISTORY
This type of thinking has been around for hundreds of years. It was first formalized
in the aerospace industry during the Apollo program in the 1960s. The initial
automotive adoption was in the 1970s in the area of safety issues. FMEA was
required by QS-9000 and the advanced product quality planning process in 1994
for all automotive suppliers. It has now been adopted by many other industries.
SL3151Ch06Frame Page 227 Thursday, September 12, 2002 6:09 PM
FIGURE 6.2 The quality lever: payback versus effort across the APQP phases (planning and definition; product design and development; manufacturing process design and development; product and process validation; production). A product design fix pays back roughly 100:1, a process design fix roughly 10:1, a production fix roughly 1:1, and a customer fix roughly 1:10.
• A typical system FMEA should begin even before the program approval
stage. The design FMEA should start right after program approval and
continue to be updated through prototypes. A process FMEA should begin
just before prototypes and continue through pilot build and sometimes
into product launching. As for the MFMEA, it should also start at the
same time as the design FMEA. It is imperative for a user of an FMEA
to understand that sometimes information is not always available. During
these situations, users must do the best they can with what they have,
recognizing that the document itself is indeed a living document and will
change as more information becomes available.
• History has shown that a majority of product warranty campaigns and
automotive recalls could have been prevented by thorough FMEA studies.
GETTING STARTED
Just as with anything else, before the FMEA begins there are some assumptions and
preparations that must be taken care of. These are:
FIGURE 6.3 Customer needs over time: excitement needs, performance needs, and basic needs, ranging from dissatisfied to satisfied.
Excitement needs: Generally, these are the unspoken “wants” of the customer.
Performance needs: Generally, these are the spoken “needs” of the customer.
They serve as the neutral requirements of the customer.
Basic needs: Generally, these are the unspoken “needs” of the customer. They
serve as the very minimum of requirements.
SYSTEM customers may be viewed as: other systems, whole product, gov-
ernment regulations, design engineers, and end user.
DESIGN customers may be viewed as: higher assembly, whole product, design
engineers, manufacturing engineers, government regulations, and end user.
PROCESS customers may be viewed as: the next operation, operators, design
and manufacturing engineering, government regulations, and end user.
MACHINE customers may be viewed as: higher assembly, whole product,
design engineers, manufacturing engineers, government regulations, and
end user.
Another way to understand the FMEA customers is through the FMEA team,
which must in no uncertain terms determine:
The appropriate and applicable response will help in developing both the function
and effects.
There are many methods to assist in developing concepts. Some of the most common
are:
1. Brainstorming
2. Benchmarking
3. TRIZ (the theory of inventive problem solving)
4. Pugh concept selection (an objective way to analyze and select/synthesize
alternative concepts)
SL3151Ch06Frame Page 231 Thursday, September 12, 2002 6:09 PM
Figure 6.4 shows what a Pugh matrix may look like for the concept of “shaving,” with the “razor” as the datum. Each alternative concept receives a “+,” a “–,” or an “S” (same) against the datum on every evaluation criterion, and the totals of each mark are tallied for comparison.
Legend:
Evaluation criteria: the criteria on which each alternative concept is compared with the razor.
Datum: the baseline razor characteristics against which the other concepts are compared.
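To make the mechanics of the Pugh tally concrete, the following is a minimal sketch in Python; the concept names, criteria count, and ratings are illustrative assumptions, not figures from the text.

# Minimal Pugh concept selection tally (illustrative data).
# Each concept is rated against the datum ("razor") on every evaluation
# criterion: "+" = better than datum, "-" = worse, "S" = same.

ratings = {
    "electric shaver":  ["+", "-", "S", "+", "S"],
    "depilatory cream": ["-", "-", "+", "S", "S"],
    "waxing":           ["-", "S", "+", "-", "-"],
}

def tally(marks):
    """Count pluses, minuses, and sames, and return a simple net score."""
    plus = marks.count("+")
    minus = marks.count("-")
    same = marks.count("S")
    return {"+": plus, "-": minus, "S": same, "net": plus - minus}

for concept, marks in ratings.items():
    totals = tally(marks)
    print(f"{concept:17s} +:{totals['+']}  -:{totals['-']}  "
          f"S:{totals['S']}  net:{totals['net']}")

The datum itself is never scored; concepts with strongly negative totals are dropped or hybridized, and the exercise may be repeated with the best surviving concept as the new datum.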
Core team
The experts of the project and the closest to the project. They facilitate
honest communication and encourage active participation. Support
membership may vary depending on the stage of the project.
Champion/sponsor
• Provides resources and support
• Attends some meetings
• Supports team
• Promotes team efforts and implements recommendations
• Shares authority/power with team
• Kicks off team
• Higher up in management the better
Team leader
A team leader is the “watchdog” of the project. Typically, this function
falls upon the lead engineer. Some of the ingredients of a good team
leader are:
• Possesses good leadership skills
• Is respected by team members
• Leads but does not dominate
• Maintains full team participation
Recorder
Keeps documentation of team’s efforts. The recorder is responsible for co-
ordinating meeting rooms and times as well as distributing meeting
minutes and agendas.
Facilitator
The “watchdog” of the process. The facilitator keeps the team on track and
makes sure that everyone participates. In addition, it is the facilitator’s responsibility to make sure that team dynamics develop in a positive environment. To be effective, the facilitator must have no stake in the project, possess FMEA process expertise, and communicate assertively.
• Continuity of members
• Receptive and open-minded
• Committed to success
SL3151Ch06Frame Page 233 Thursday, September 12, 2002 6:09 PM
• Empowered by sponsor
• Cross-functionality
• Multidiscipline
• Consensus
• Positive synergy
• Realistic agendas
• Good facilitator
• Short meetings
• Right people present
• Reach decisions based on consensus
• Open minded, self initiators, volunteers
• Incentives offered
• Ground rules established
• One individual responsible for coordination and accountability of the
FMEA project (Typically for the design, the design engineer is that person
and for the process, the manufacturing engineer has that responsibility.)
To make sure the effectiveness of the team is sustained throughout the project,
it is imperative that everyone concerned with the project bring useful information
to the process. Useful information may be derived due to education, experience,
training, or a combination of these.
At least two areas that are usually underutilized for useful information are
background information and surrogate data. Background information and supporting
documents that may be helpful to complete system, design, or process FMEAs are:
Surrogate data are data that are generated from similar projects. They may help
in the initial stages of the FMEA. When surrogate data are used, extra caution should
be taken.
Potential FMEA team members include:
• Design engineers
• Manufacturing engineers
SL3151Ch06Frame Page 234 Thursday, September 12, 2002 6:09 PM
• Quality engineers
• Test engineers
• Reliability engineers
• Maintenance personnel
• Operators (from all shifts)
• Equipment suppliers
• Customers
• Suppliers
• Anyone who has a direct or indirect interest
• In any FMEA team effort the individuals must have interaction with
manufacturing and/or process engineering while conducting a design
FMEA. This is important to ensure that the process will manufacture
per design specification.
• On the other hand, interaction with design engineering while conduct-
ing a process or assembly FMEA is important to ensure that the design
is right.
• In either case, group consensus will identify the high-risk areas that
must be addressed to ensure that the design and/or process changes
are implemented for improved quality and reliability of the product
Obviously, these lists are typical menus to choose an appropriate team for your
project. The actual team composition for your organization will depend upon your
individual project and resources.
Once the team is chosen for the given project, spend 15–20 minutes creating a
list of the biggest (however you define “biggest”) concerns for this product or
process. This list will be used later to make sure you have a complete list of functions.
Two excellent tools for such an evaluation are (1) block diagram for system, design,
and machinery and (2) process flow diagram for process. In essence, part of the respon-
sibility to define the project and scope has to do with the question “How broad is our
focus?” Another way to say this is to answer the question “How detailed do we have
to be?” This is much more difficult than it sounds and it needs some heavy discussion
from all the members. Obviously, consensus is imperative. As a general rule, the focus
is dependent upon the project and the experience or education of the team members.
Let us look at an example. It must be recognized that sometimes due to the
complexity of the system, it is necessary to narrow the scope of the FMEA. In other
words, we must break down the system into smaller pieces — see Figure 6.5.
SL3151Ch06Frame Page 235 Thursday, September 12, 2002 6:09 PM
FIGURE 6.5 Narrowing the scope by breaking the brake system into subsystems: pedals and linkages (pedal, rubber cover, cotter pins, etc.); hydraulics (master cylinder with cylinder, fluid bladder, etc., plus rubber hose, metal tubing, proportioning valve, fittings, etc.); and back plate and hardware (back plate, springs, washers, clips, etc.).
The form may be expanded to include or to be used for such matters as:
Safety: Injury is the most serious of all failure effects. As a consequence, safety is handled either with an FMEA, a fault tree analysis (FTA), or a criticality
FIGURE 6.6 Scope for PFMEA — printed circuit board screen printing process. The flow runs from machine setup (load screen, load squeegee, load tool plate, develop program) through dispensing paste, loading the board, applying paste, printing, and inspection (good boards are run, packaged, and shipped; rejected boards are washed), with each step rated for risk. Legend: L: Low risk; M: Medium risk; H: High risk. The boxed portion of the flow marks “our scope.”
FMEA WORKSHEET
Failure Mode and Effect Analysis (FMEA)
System FMEA ____ Design FMEA ____ Process FMEA ____ FMEA Number ____
Subject: ______________ Team Leader: ________________ Page ____ of _____
Part/Proc. ID No. __________ Date Orig. _____________ Date Rev. __________
Key Date: ____________ Team Members: ___________________
Column headings: part name or process step and function; potential failure mode; potential effect of failure mode; severity (S); class; potential cause of failure mode; occurrence (O); current controls; detection (D); RPN; recommended action and responsibility; target finish date; actual finish date; actions taken; revised S, O, D, and RPN; remarks.
analysis (FMECA). In the traditional FTA, the starting point is the list of hazards or undesired events for which the designer must provide some solution. Each hazard becomes a failure mode and thus requires an analysis.
Effect of downtime: The FMEA may incorporate maintenance data to study
the effects of downtime. It is an excellent tool to be used in conjunction
with total preventive maintenance.
Repair planning: An FMEA may provide preventive data to support repair
planning as well as predictive maintenance cycles.
Access: In the world of recycling and environmental conscience, the FMEA
can provide data for tear downs as well as information about how to get at
the failed component. It can be used with mistake proofing for some very
unexpected positive results.
A typical body of an FMEA form may look like Figure 6.8. The details of this
form will be discussed in the following pages. We begin with the first part of the
form; that is the description in the form of:
1. A system view
2. A subsystem view
3. A component view
FIGURE 6.9 Generic function tree. The task function sits at the root. Supporting functions branch from it through primary, secondary, and tertiary levels; enhancing functions (ensure dependability, ensure convenience, please the senses, delight the customer) likewise branch into secondary and tertiary levels. Asking HOW? moves from the task function toward more detailed functions; asking WHY? moves back toward the task function.
• Position
• Support
• Seal in, out
• Retain
• Lubricate
For an example of a function tree for a ball point pen (tip), see Figure 6.10.
The process of brainstorming failure modes may include the following questions:
DFMEA
• Considering the conditions in which the product will be used, how can
it fail to perform its function?
• How have similar products failed in the past?
SL3151Ch06Frame Page 241 Thursday, September 12, 2002 6:09 PM
PFMEA
• Considering the conditions in which the process will be used, what
could possibly go wrong with the process?
• How have similar processes failed in the past?
• What might happen that would cause a part to be rejected?
SL3151Ch06Frame Page 242 Thursday, September 12, 2002 6:09 PM
Failure modes describe the ways in which a function is not fulfilled. They fall into the following major categories. Some of these categories may not apply; use them as “thought provokers” to begin the process and then adjust them as needed:
1. Absence of function
2. Incomplete, partial, or decayed function
3. Related unwanted “surprise” failure mode
4. Function occurs too soon or too late
5. Excess or too much function
6. Interfacing with other components, subsystems or systems. There are four
possibilities of interfacing. They are (a) energy transfer, (b) information
transfer, (c) proximity, and (d) material compatibility.
Failure mode examples using the above categories and applied to the pen case
include:
1. Absence of function:
• DFMEA: Make marks
• PFMEA: Inject plastic
2. Incomplete, partial or decayed function:
• DFMEA: Make marks
• PFMEA: Inject plastic
3. Related unwanted “surprise” failure mode
• DFMEA: Make marks
• PFMEA: Inject plastic
4. Function occurs too soon or too late
• DFMEA: Make marks
• PFMEA: Inject plastic
5. Excess or too much function
• DFMEA: Make marks
• PFMEA: Inject plastic
SL3151Ch06Frame Page 243 Thursday, September 12, 2002 6:09 PM
Process FMEA:
Four categories of process failures:
1. Fabrication failures
2. Assembly failures
3. Testing failures
4. Inspection failures
• Warped
• Too hot
• RPM too slow
• Rough surface
• Loose part
• Misaligned
• Poor inspection
• Hole too large
• Leakage
• Fracture
• Fatigue
• And so on
Note: At this stage, you are ready to transfer the failure modes into the FMEA form — see Figure 6.12.
SFMEA
• System
• Other systems
SL3151Ch06Frame Page 244 Thursday, September 12, 2002 6:09 PM
• Whole product
• Government regulations
• End user
DFMEA
• Part
• Higher assembly
• Whole product
• Government regulations
• End user
PFMEA
• Part
• Next operation
• Equipment
• Government regulations
• Operators
• End user
Effects and severity are closely related items: as the effect worsens, so does the severity. In essence, two fundamental questions have to be raised and answered:
The progression of function, cause, failure mode, effect, and severity can be
illustrated by the following series of questions:
Special Note: Please note that the effect remains the same for both DFMEA and
PFMEA.
Severity is a numerical rating — see Table 6.1 for design and Table 6.2 for process —
of the impact on customers. When multiple effects exist for a given failure mode,
enter the worst-case severity on the worksheet to calculate risk. (This is the accepted method in the automotive industry and in the SAE J1739 standard.) In cases where severity varies depending on timing, use the worst-case scenario.
Note: There is nothing special about these guidelines. They may be changed to
reflect the industry, the organization, the product/design, or the process. For example,
the automotive industry has its own version and one may want to review its guidelines
in the AIAG (2001). To modify these guidelines, keep in mind:
At this point the information should be transferred to the FMEA form — see
Figure 6.13. The column identifying the “class” is the location for the placement of
the special characteristic. The appropriate response is only “Yes” or “No”: a Yes in this column indicates that the characteristic is special; a No indicates that it is not. In some industries, special characteristics are of two
types: (a) critical and (b) significant. “Critical” refers to characteristics associated
with safety and/or government regulations, and “significant” refers to those that
affect the integrity of the product. In design, all special characteristics are potential.
In process they become critical or significant depending on the numerical values of
severity and occurrence combinations.
TABLE 6.1
DFMEA — Severity Rating
Effect Description Rating
None No effect noticed by customer; the failure will not have any 1
perceptible effect on the customer
Very minor Very minor effect, noticed by discriminating customers; the 2
failure will have little perceptible effect on discriminating
customers
Minor Minor effect, noticed by average customers; the failure will have 3
a minor perceptible effect on average customers
Very low Very low effect, noticed by most customers; the failure will have 4
some small perceptible effect on most customers
Low Primary product function operational, however at a reduced level 5
of performance; customer is somewhat dissatisfied
Moderate Primary product function operational, however secondary 6
functions inoperable; customer is moderately dissatisfied
High Failure mode greatly affects product operation; product or 7
portion of product is inoperable; customer is very dissatisfied
Very high Primary product function is non-operational but safe; customer 8
is very dissatisfied.
Hazard with warning Failure mode affects safe product operation and/or involves 9
nonconformance with government regulation with warning
Hazard with no warning Failure mode affects safe product operation and/or involves 10
nonconformance with government regulation without warning
1. What design or process choices did we already make that may be respon-
sible for the occurrence of a failure?
2. How likely is the failure mode to occur because of this?
For each failure mode, the possible mechanism(s) and cause(s) of failure are
listed. This is an important element of the FMEA since it points the way toward
preventive/corrective action. It is, after all, a description of the design or process
deficiency that results in the failure mode. That is why it is important to focus on
the “global” or “root” cause. Root causes should be specific and in the form of a
characteristic that may be controlled or corrected. Caution should be exerted not to
overuse “operator error” or “equipment failure” as a root cause even though they
are both tempting and make it easy to assign “blame.”
You must look for causes, not symptoms of the failure. Most failure modes have
more than one potential cause. An easy way to probe into the causes is to ask:
What design choices, process variables, or circumstances could result in the failure
mode(s)?
TABLE 6.2
PFMEA — Severity Rating
Effect Description Rating
None No effect noticed by customer; the failure will not have any effect 1
on the customer
Very minor Very minor disruption to production line; a very small portion 2
of the product may have to be reworked; defect noticed by
discriminating customers
Minor Minor disruption to production line; a small portion (much <5%) 3
of product may have to be reworked on-line; process up but
minor annoyances
Very low Very low disruption to production line; a moderate portion 4
(<10%) of product may have to be reworked on-line; process
up but minor annoyances
Low Low disruption to production line; a moderate portion (<15%) 5
of product may have to be reworked on-line; process up but
minor annoyances
Moderate Moderate disruption to production line; a moderate portion 6
(>20%) of product may have to be scrapped; process up but
some inconveniences
High Major disruption to production line; a portion (>30%) of product 7
may have to be scrapped; process may be stopped; customer
dissatisfied
Very high Major disruption to production line; close to 100% of product 8
may have to be scrapped; process unreliable; customer very
dissatisfied
Hazard with warning May endanger operator or equipment; severely affects safe 9
process operation and/or involves noncompliance with
government regulation; failure will occur with warning
Hazard with no warning May endanger operator or equipment; severely affects safe 10
process operation and/or involves noncompliance with
government regulation; failure occurs without warning
DFMEA failure causes are typically specific system, design, or material char-
acteristics.
PFMEA failure causes are typically process parameters, equipment characteris-
tics, or environmental or incoming material characteristics.
• Brainstorm
• Whys
• Fishbone diagram, and so on
• Fault tree analysis (FTA): a model that uses a tree to show the cause-and-effect relationship between a failure mode and the various contributing causes. The tree branches logically from the failure at the top to the root causes at the bottom (a minimal sketch follows this list).
• Classic five-step problem-solving process
a. What is the problem?
b. What can I do about it?
c. Put a star on the “best” plan.
d. Do the plan.
e. Did your plan work?
• Kepner-Tregoe (is/is-not analysis)
• Discipline GPS – see Volume II
• Experience
• Knowledge of physics and other sciences
• Knowledge of similar products
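Since the fault tree was named above as a cause-finding aid, the following minimal Python sketch shows how a small tree with AND/OR gates can be represented and how a top-event probability propagates up from the basic causes. The event names echo the pen example used in this chapter, but the structure and probabilities are illustrative assumptions, not values from the text, and independence of events is assumed.

# Minimal fault tree sketch: the top failure event is the root and the
# basic (root-cause) events are the leaves. Probabilities are illustrative.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Event:
    name: str
    gate: str = "LEAF"          # "AND", "OR", or "LEAF"
    prob: float = 0.0           # used only for leaves
    children: List["Event"] = field(default_factory=list)

    def probability(self) -> float:
        if self.gate == "LEAF":
            return self.prob
        child_probs = [c.probability() for c in self.children]
        if self.gate == "AND":           # all contributing causes must occur
            p = 1.0
            for cp in child_probs:
                p *= cp
            return p
        # OR gate: any single contributing cause is enough
        p_none = 1.0
        for cp in child_probs:
            p_none *= (1.0 - cp)
        return 1.0 - p_none

top = Event("no ink flow", "OR", children=[
    Event("ball does not pick up ink", "AND", children=[
        Event("ink viscosity too high", prob=0.02),
        Event("ball surface contaminated", prob=0.05),
    ]),
    Event("housing deformed", prob=0.01),
])

print(f"top event probability: {top.probability():.4f}")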
TABLE 6.3
DFMEA — Occurrence Rating
Occurrence Description Frequency Rating
Occurrence Rating
Design and process controls are the mechanisms, methods, tests, procedures, or
controls that we have in place to prevent the cause of the failure mode or detect the
failure mode or cause should it occur. (The controls currently exist.)
Design controls prevent or detect the failure mode prior to engineering release.
Process controls prevent or detect the failure mode prior to the part or assembly
leaving the area.
TABLE 6.4
PFMEA — Occurrence Rating
Occurrence Description Frequency Cpk Rating
Detection Rating
Detection rating — see Table 6.5 for design and Table 6.6 for process — is a numer-
ical rating of the probability that a given set of controls will discover a specific cause
or failure mode to prevent bad parts from leaving the operation/facility or getting
to the ultimate customer. Assuming that the cause of the failure did occur, assess
the capabilities of the controls to find the design flaw or prevent the bad part from
leaving the operation/facility. In the first case, the DFMEA is at issue. In the second
case, the PFMEA is of concern.
When multiple controls exist for a given failure mode, record the best (lowest)
to calculate risk. In order to evaluate detection, there are appropriate tables for
both design and process. Just as before, however, if there is a need to alter them,
remember that the change and approval must be made by the FMEA team with
consensus.
At this point, the data for current controls and their ratings should be transferred
to the FMEA form — see Figure 6.15. There should be a current control for every
cause. If there is not, that is a good indication that a problem might exist.
FIGURE 6.14 FMEA form excerpt for the pen example, showing causes and occurrence ratings: for the failure mode “partial ink,” the effects are “old pen stops writing, customer scraps pen” (severity 7, class N), “customer has to retrace” (severity 7), and “writing or drawing looks bad”; the causes are “inconsistent ball rolling due to deformed housing” (occurrence 2), “ball does not always pick up ink due to ink viscosity” (occurrence 7), and “housing I.D. variation due to manufacturing” (occurrence 1), and so on.
TABLE 6.5
DFMEA Detection Table
Detection Description Rating
Almost certain Design control will almost certainly detect the potential cause of 1
subsequent failure modes
Very high Very high chance the design control will detect the potential cause of 2
subsequent failure mode
High High chance the design control will detect the potential cause of 3
subsequent failure mode
Moderately high Moderately high chance the design control will detect the potential cause 4
of subsequent failure mode
Moderate Moderate chance the design control will detect the potential cause of 5
subsequent failure mode
Low Low chance the design control will detect the potential cause of 6
subsequent failure mode
Very low Very low chance the design control will detect the potential cause of 7
subsequent failure mode
Remote Remote chance the design control will detect the potential cause of 8
subsequent failure mode
Very remote Very remote chance the design control will detect the potential cause 9
of subsequent failure mode
Very uncertain There is no design control or control will not or cannot detect the 10
potential cause of subsequent failure mode
Risk = RPN = S × O × D
Obviously, the higher the RPN, the greater the concern. A good rule of thumb is the 95% rule: address all failure modes with 95% confidence. The magic number turns out to be 50, since the maximum RPN is 10 × 10 × 10 = 1000 and 1000 − (1000 × 0.95) = 50. This number, of course, is relative to what the total FMEA is all about, and it may change as the risk increases in all categories and in all causes.
Special risk priority patterns require special attention, through specific action
plans that will reduce or eliminate the high risk factor. They are identified through:
1. High RPN
2. Any RPN with a severity of 9 or 10 and occurrence > 2
3. Area chart
The area chart — Figure 6.16 — uses only severity and occurrence and therefore
is a more conservative approach than the priority risk pattern mentioned previously.
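As a minimal sketch of the arithmetic just described, the following Python fragment computes the RPN, applies the worst-case-severity and best-(lowest-)detection rules given earlier for multiple effects and multiple controls, and flags items using the 95%-rule threshold of 50 and the severity 9 or 10 with occurrence greater than 2 rule. It also classifies severity and occurrence into priority regions; Figure 6.16 defines those regions graphically, so the numeric boundaries used here are assumptions, and the sample numbers simply echo the pen example.

# Minimal RPN sketch. Severity is taken as the worst case over all effects,
# and detection as the best (lowest) over all current controls, per the text.

def rpn(severities, occurrence, detections):
    s = max(severities)                          # worst-case severity
    d = min(detections) if detections else 10    # no control: cannot detect
    return s, occurrence, d, s * occurrence * d

def needs_action(s, o, d, risk, threshold=50):
    if risk >= threshold:        # 95% rule: 1000 - (1000 * 0.95) = 50
        return True
    if s >= 9 and o > 2:         # high-severity pattern needing attention
        return True
    return False

def area_priority(s, o):
    # Illustrative severity/occurrence regions (assumed boundaries).
    if s >= 7 and o >= 4:
        return "high"
    if s >= 4 or o >= 4:
        return "medium"
    return "low"

# Pen example: severity 7 effects, occurrence 7, no current control.
s, o, d, risk = rpn(severities=[7, 7], occurrence=7, detections=[])
print(risk, needs_action(s, o, d, risk), area_priority(s, o))   # 490 True high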
At this stage, let us look at our FMEA project and calculate and enter the RPN —
see Figure 6.17. It must be noted here that this is only one approach to evaluating
risk. Another possibility is to evaluate the risk based on the degree of severity first,
TABLE 6.6
PFMEA Detection Table
Detection Description Rating
Almost certain Process control will almost certainly detect or prevent the potential cause 1
of subsequent failure mode
Very high Very high chance process control will detect or prevent the cause of 2
subsequent failure mode
High High chance the process control will detect or prevent the potential cause 3
of subsequent failure mode.
Moderately high Moderately high chance the process control will detect or prevent the 4
potential cause of subsequent failure mode
Moderate Moderate chance the process control will detect or prevent the potential 5
cause of subsequent failure mode
Low Low chance the process control will detect or prevent the potential cause 6
of subsequent failure mode
Very low Very low chance the process control will detect or prevent the potential 7
cause of subsequent failure mode
Remote Remote chance the process control will detect or prevent the potential 8
cause of subsequent failure mode
Very remote Very remote chance the process control will detect or prevent the potential 9
cause of subsequent failure mode
Very uncertain There is no process control or control will not or cannot detect the potential 10
cause of subsequent failure mode
in which case the engineer tries to eliminate the failure; evaluate the risk based on
a combination of severity (values of 5–8) and occurrence (>3) second, in which case
the engineer tries to minimize the occurrence of the failure through a redundant
system; and to evaluate the risk through the detection of the RPN third, in which
case the engineer tries to control the failure before the customer receives it.
Reducing the severity rating (or reducing the severity of the failure mode effect)
• Design or manufacturing process changes are necessary.
• This approach is much more proactive than reducing the detection
rating.
Reducing the occurrence rating (or reducing the frequency of the cause)
• Design or manufacturing process changes are necessary.
• This approach is more proactive than reducing the detection rating.
FIGURE 6.15 Transferring current controls and detection to the FMEA form (pen example): the cause “inconsistent ball rolling due to deformed housing” (occurrence 2) is covered by Test #X with a detection rating of 10; “ball does not always pick up ink due to ink viscosity” (occurrence 7) and “housing I.D. variation due to manufacturing” (occurrence 1) have no current controls and a detection rating of 10; and so on.
FIGURE 6.16 Area chart: occurrence (1–10) plotted against severity (1–10), divided into high-priority, medium-priority, and low-priority regions.
Examples include size, form, location, orientation, or other physical properties such
as color, hardness, strength, etc.
and so on
TABLE 6.7
Special Characteristics for Both Design and Process
FMEA Type Classification Purpose Criteria Control
FIGURE 6.19 Transferring action plans and action results on the FMEA form.
[Figure: linkage of the design FMEA (fed by quality function deployment and the system design specifications; columns for function, failure, effect, severity, class, cause, controls, and recommended action; feeding the sign-off report and the design verification plan and report) to the process FMEA (part characteristic, function, failure, effect, controls, class, cause, controls, reaction), with normal versus special characteristic classifications and the note “remove the classification symbol.”]
• Machinery inputs
• P diagram
• Boundary diagram
• Interface matrix
• Customer functionality in terms of engineering specifications
• Regulatory requirements review
Figure 6.21 shows the learning stages (the direction of the arrows indicates the
increasing level) in a company that is developing maturity in the use of FMEA.
OBJECTIVE
The design FMEA is a disciplined analysis of the part design with the intent to
identify and correct any known or potential failure modes before the manufacturing
stage begins. Once these failure modes are identified and the cause and effects are
determined, each failure mode is then systematically ranked so that the most severe
failure modes receive priority attention. The completion of the design FMEA is the
responsibility of the individual product design engineer. This individual engineer is
the most knowledgeable about the product design and can best anticipate the failure
modes and their corrective actions.
TIMING
The design FMEA is initiated during the early planning stages of the design and is
continually updated as the program develops. The design FMEA must be totally
completed prior to the first production run.
REQUIREMENTS
The requirements for a design FMEA include:
1. Forming a team
2. Completing the design FMEA form
3. FMEA risk ranking guidelines
DISCUSSION
The effectiveness of an FMEA is dependent on certain key steps in the analysis
process, as follows:
• Design engineer(s)
• Test/development engineer
• Reliability engineer
• Materials engineer
• Field service engineer
• Manufacturing/process engineer
• Customer
1. Task functions: These functions describe the single most important reason
for the existence of the system/product. (Vacuum cleaner? Windshield
wiper? Ballpoint pen?)
2. Supporting functions: These are the “sub” functions that are needed in
order for the task function to be performed.
3. Enhancing functions: These are functions that enhance the product and
improve customer satisfaction but are not needed to perform the task
function.
After completing the function tree or a block diagram, transfer the functions to the FMEA worksheet (or some other worksheet for retention). Add the extent of each function (range, target, specification, etc.) to test the measurability of the function.
The team must describe the effect of the failure in terms of customer reaction; in other words, “What does the customer experience as a result of the failure mode of a shorted wire?” Notice the specificity. This is very important, because it establishes the basis for the exploratory analysis of the root cause of the failure. Would the shorted wire cause the fuel gage to be inoperative, or would it cause the dome light to remain on?
The team anticipates the cause of the failure. Would poor wire insulation cause the
short? Would a sharp sheet metal edge cut through the insulation and cause the
short? The team is analyzing what conditions can bring about the failure mode. The
more specific the responses are, the better the outcome of the FMEA.
Brittle material
Weak fastener
Corrosion
Low hardness
Too small of a gap
Wrong bend angle
Stress concentration
Ribs too thin
Wrong material selection
Poor stitching design
High G forces
Part interference
Tolerance stack-up
Vibration
Oxidation
And so on
The team must estimate the probability that the given failure is going to occur. The
team is assessing the likelihood of occurrence, based on its knowledge of the system,
using an evaluation scale of 1 to 10. A 1 would indicate a low probability of
occurrence whereas a 10 would indicate a near certainty of occurrence.
In estimating the severity of the failure, the team is weighing the consequence
of the failure. The team uses the same 1 to 10 evaluation scale. A 1 would indicate
a minor nuisance, while a 10 would indicate a severe consequence such as “loss of
brakes” or “stuck at wide open throttle” or “loss of life.”
Generally, these controls consist of tests and analyses that detect failure modes or
causes during early planning and system design activities. Good system controls
detect faults or weaknesses in system designs. Design controls consist of tests and
analyses that detect failure causes or failure modes during design, verification, and
validation activities. Good design controls detect faults or weaknesses in component
designs.
Special notes:
• Just because there is a current control in place that does not mean that it
is effective. Make sure the team reviews all the current controls, especially
those that deal with inspection or alarms.
• To be effective (proactive), system controls must be applied throughout
the pre-prototype phase of the Advanced Product Quality Planning
(APQP) process.
• To be effective (proactive), design controls must be applied throughout
the pre-launch phase of the APQP process.
• To be effective (proactive), process controls should be applied during the
post-pilot build phase of APQP and continue during the production phase.
If they are applied only after production begins, they serve as reactive
plans and become very inefficient.
Engineering analysis
• Computer simulation
• Mathematical modeling/CAE/FEA
• Design reviews, verification, validation
• Historical data
• Tolerance stack studies
• Engineering reviews, etc.
System/component level physical testing
• Breadboard, analog tests
• Alpha and beta tests
• Prototype, fleet, accelerated tests
• Component testing (thermal, shock, life, etc.)
• Life/durability/lab testing
• Full scale system testing (thermal, shock, etc)
• Taguchi methods
• Design reviews
The team is estimating the probability that a potential failure will be detected before
it reaches the customer. Again, the 1 to 10 evaluation scale is used. A 1 would
indicate a very high probability that a failure would be detected before reaching the
customer. A 10 would indicate a very low probability that the failure would be
detected, and therefore, be experienced by the customer. For instance, an electrical
connection left open preventing engine start might be assigned a detection number
of 1. A loose connection causing intermittent no-start might be assigned a detection
number of 6, and a connection that corrodes after time causing no start after a period
of time might be assigned a detection number of 10.
Detection is a function of the current controls. The better the controls, the more
effective the detection. It is very important to recognize that inspection is not a very
effective control because it is a reactive task.
The product of the estimates of occurrence, severity, and detection forms a risk
priority number (RPN). This RPN then provides a relative priority of the failure
mode. The higher the number, the more serious is the mode of failure considered.
From the risk priority numbers, a critical items summary can be developed to
highlight the top priority areas where actions must be directed.
The basic purpose of an FMEA is to highlight the potential failure modes so that
the responsible engineer can address them after this identification phase. It is imper-
ative that the team provide sound corrective actions or provide impetus for others
to take sound corrective actions. The follow-up aspect is critical to the success of
this analytical tool. Responsible parties and timing for completion should be desig-
nated in all corrective actions.
Strategies for Lowering Risk: (System/Design) — High Severity or Occurrence
To reduce risk, you may change the product design to:
• Eliminate the failure mode cause or decouple the cause and effect
• Eliminate or reduce the severity of the effect
• Make the cause less likely or impossible to occur
• Eliminate function or eliminate part (functional analysis)
• Benchmarking
• Brainstorming
• Process control (automatic corrective devices)
• TRIZ, etc.
OBJECTIVE
The process FMEA is a disciplined analysis of the manufacturing process with the
intent to identify and correct any known or potential failure modes before the first
production run occurs. Once these failure modes are identified and the cause and
effects are determined, each failure mode is then systematically ranked so that the
most severe failure modes receive priority attention. The completion of the process
FMEA is the responsibility of the individual product process engineer. This individ-
ual process engineer is the most knowledgeable about the process structure and can
best anticipate the failure modes and their effects and address the corrective actions.
TIMING
The process FMEA is initiated during the early planning stages of the process before
machines, tooling, facilities, etc., are purchased. The process FMEA is continually
updated as the process becomes more clearly defined. The process FMEA must be
totally completed prior to the first production run.
REQUIREMENTS
The requirements for a process FMEA are as follows:
1. Form team
2. Complete the process FMEA form
3. FMEA risk ranking guidelines
DISCUSSION
The effectiveness of an FMEA on a process is dependent on certain key steps in the
analysis, including the following:
• Design engineer
• Manufacturing or process engineer
• Quality engineer
• Reliability engineer
• Tooling engineer
• Responsible operators from all shifts
• Supplier
• Customer
The team must identify the process or machine and describe its function. The team
members should ask themselves, “What is the purpose of this operation?” State
concisely what should be accomplished as a result of the process being performed.
Typically, there are three areas of concern. They are:
For example, consider the pen assembly process (see Figure 6.22), which involves
the following steps:
Note: At the end of this function analysis you are ready to transfer the informa-
tion to the FMEA form.
Remember that another way to reduce the complexity or scope of the FMEA is
to prioritize the list of functions and then take only the ones that the team collectively
agrees are the biggest concerns.
The team must pose the question to itself, “How could this process fail to complete
its intended function? Could the resulting workpiece be oversize, undersize, rough,
eccentric, misassembled, deformed, cracked, open, shorted, leaking, porous, dam-
aged, omitted, misaligned, out of balance, etc.?” The team members are trying to
anticipate how the workpiece might fail to meet engineering requirements; at this
point in their analysis they should stress how it could fail and not whether it will fail.
FIGURE 6.22 Pen assembly process (steps include ink assembly, tip insertion into the barrel/housing, and move to dock).
Once failure modes are determined under these assumptions, then determine
other failure modes due to:
The team must describe the effect of the failure on the component or assembly. What
will happen as a result of the failure mode described? Will the component or
The engineer anticipates the cause of the failure. The engineer is describing what
conditions can bring about the failure mode. Locators are not flat and parallel. The
handling system causes scratches on a shaft. Inadequate venting and gaging can
cause misruns, porosity, and leaks. Inefficient die cooling causes die hot spots.
Undersize condition can be caused by heat treat shrinkage, etc.
The purpose of a process FMEA (PFMEA) is to analyze or evaluate a process
on its ability to perform its functions (part characteristics). Therefore, the initial
assumptions in determining causes are:
Fatigue
Poor surface preparation
Improper installation
Low torque
Improper maintenance
Inadequate clamping
Misuse
High RPM
Abuse
Inadequate venting
Unclear instructions
Tool wear
Component interactions
Overheating
And so on
The team must estimate the probability that the given failure mode will occur. This
team is assessing the likelihood of occurrence, based on their knowledge of the
process, using an evaluation scale of 1 to 10. A 1 would indicate a low probability
of occurrence, whereas a 10 would indicate a near certainty of occurrence.
In estimating the severity of the failure, the team is weighing the consequence (effect)
of the failure. The team uses the same 1 to 10 evaluation scale. A 1 would indicate
a minor nuisance, while a 10 would indicate a severe consequence such as “motor
inoperative, horn does not blow, engine seizes, no drive, etc.”
Manufacturing process controls consist of tests and analyses that detect causes or
failure modes during process planning or production. Manufacturing process controls
can occur at the specific operation in question or at a subsequent operation. There
are three types of process controls, those that:
• Setup
• Machine
• Operator
• Component or material
• Tooling
• Preventive maintenance
• Fixture/pallet/work holding
• Environment
TABLE 6.8
Manufacturing Process Control Matrix
Dominance Factor Attribute Data Variable Data
The detection is directly related to the controls available in the process. So the better
the controls, the better the detection. The team in essence is estimating the probability
that a potential failure will be detected before it reaches the customer. The team
members use the 1 to 10 evaluation scale. A 1 would indicate a very high probability
that a failure would be detected before reaching the customer. A 10 would indicate
a very low probability that the failure would be detected, and therefore, be experi-
enced by the customer. For instance, a casting with a large hole would be readily
detected and would be assessed as a 1. A casting with a small hole causing leakage
between two channels only after prolonged usage would be assigned a 10. The team
is assessing the chances of finding a defect, given that the defect exists.
The product of the estimates of occurrence, severity, and detection forms a risk
priority number (RPN). This RPN then provides a relative priority of the failure
mode. The higher the number, the more serious is the mode of failure considered.
From the risk priority numbers, a critical items summary can be developed to
highlight the top priority areas where actions must be directed.
The basic purpose of an FMEA is to highlight the potential failure modes so that
the engineer can address them after this identification phase. It is imperative that
the engineer provide sound corrective actions or provide impetus for others to take
sound corrective actions. The follow-up aspect is critical to the success of this
analytical tool. Responsible parties and timing for completion should be designated
in all corrective actions.
Strategies for Lowering Risk: (Manufacturing) — High Severity or Occurrence
Change the product or process design to:
• Benchmarking
• Brainstorming
• Mistake proofing
• TRIZ, etc.
• Benchmarking
• Brainstorming, etc.
At this stage, you are ready to enter on the FMEA form a brief description of the recommended actions, including the department and individual responsible for implementation, as well as both the target and finish dates. If the risk is low and no action is required, write “no action needed.”
For each entry that has a designated characteristic in the class[ification] column,
review the issues that impact cause/occurrence, detection/control, or failure mode.
Generate recommended actions to reduce risk. Special RPN patterns suggest that certain
characteristics/root causes are important risk factors that need special attention.
After FMEA:
FAILURE MODE
A failure is an event when the equipment/machinery is not capable of producing
parts at specific conditions when scheduled or is not capable of producing parts or
POTENTIAL EFFECTS
The consequence of a failure mode on the subsystem is described in terms of safety
and the big seven losses. (The big seven losses may be identified through warranty
or historical data.)
Describe the potential effects in terms of downtime, scrap, and safety issues. If
a functional approach is used, then list the causes first before developing the effects
listing. Associated with the potential effects is the severity, which is a rating corre-
sponding to the seriousness of the effect of a potential machinery failure mode.
Typical descriptions are:
Downtime
• Breakdowns: Losses that are a result of a functional loss or function reduction on a piece of machinery, requiring maintenance intervention.
• Setup and adjustment: Losses that are a result of setup procedures. Adjustments include the amount of time production is stopped to adjust the process or machine to avoid defect and yield losses, requiring operator or job setter intervention.
• Startup losses: Losses that occur during the early stages of production
after extended shutdowns (weekends, holidays, or between shifts),
resulting in decreased yield or increased scrap and defects.
• Idling and minor stoppage: Losses that are a result of minor interrup-
tions in the process flow, such as a process part jammed in a chute or
a limit switch sticking, etc., requiring only operator or job setter inter-
vention. Idling is a result of process flow blockage (downstream of the
focus operation) or starvation (upstream of the focus operation). Idling
can only be resolved by looking at the entire line/system.
• Reduced cycle: Losses that are a result of differences between the ideal
cycle time of a piece of machinery and its actual cycle time.
Scrap
• Defective parts: Losses that are a result of process part quality defects
resulting in rework, repair, or scrap.
• Tooling: Losses that are a result of tooling failures/breakage or dete-
rioration/wear (e.g., cutting tools, fixtures, welding tips, punches, etc.).
Safety
• Safety considerations: Immediate life or limb threatening hazard or
minor hazard.
SEVERITY RATING
Severity is comprised of three components:
A rating should be established for each effect listed. Rate the most serious effect.
Begin the analysis with the function of the subsystem that will affect safety, gov-
ernment regulations, and downtime of the equipment. A very important point here
is the fact that a reduction in severity rating may be accomplished only through a
design change. A typical rating is shown in Table 6.9.
It should be noted that these guidelines may be modified to reflect specific
situations. Also, the basis for the criteria may be changed to reflect the specificity
of the machine and its real world usage.
CLASSIFICATION
The classification column is not typically used in the MFMEA process but should
be addressed if related to safety or noncompliance with government regulations.
Address the failure modes with a severity rating of 9 or 10. Failure modes that affect
worker safety will require a design change. Enter “OS” in the class column. OS
(operator safety) means that this potential effect of failure is critical and needs to
be addressed by the equipment supplier. Other notations can be used but should be
approved by the equipment user.
POTENTIAL CAUSES
The potential causes should be identified as design deficiencies. These could translate as:
Identify first level causes that will cause the failure mode. Data for the devel-
opment of the potential causes of failure can be obtained from:
• Surrogate MFMEA
• Failure logs
• Interface matrix (focusing on physical proximity, energy transfer, material,
information transfer)
• Warranty data
• Concern reports (things gone wrong, things gone right)
• Test reports
• Field service reports
TABLE 6.9
Machinery Guidelines for Severity, Occurrence, and Detection
Columns: effect; criteria for severity; severity rank; probability of failure; criteria for occurrence (R(t)); alternate criteria for occurrence; occurrence rank; detection; criteria for detection; detection rank.
Hazardous without warning: Very high severity; affects operator, plant, or maintenance personnel safety and/or involves noncompliance with government regulation; severity rank 10. Failure occurs every hour; R(t) < 1% or some MTBF; 1 in 1; occurrence rank 10. Very low: present design controls cannot detect the potential cause, or no design control is available; detection rank 10.
Moderate: Downtime of 60–120 min or the production of defective parts for up to 60 min; severity rank 6. Failure occurs every month; R(t) = 60%; 1 in 350; occurrence rank 6. Team’s discretion depending on machine and situation; detection rank 6.
Low: Downtime of 30–60 min with no production of defective parts, or the production of defective parts for up to 30 min; severity rank 5. Failure occurs every 3 months; R(t) = 78%; 1 in 1000; occurrence rank 5. Medium: machinery controls will provide an indicator of imminent failure; detection rank 5.
Very low: Downtime of 15–30 min with no production of defective parts; severity rank 4. Failure occurs every 6 months; R(t) = 85%; 1 in 2500; occurrence rank 4. Team’s discretion depending on machine and situation; detection rank 4.
Minor: Downtime up to 15 min with no production of defective parts; severity rank 3. Failure occurs every year; R(t) = 90%; 1 in 5000; occurrence rank 3. High: machinery controls will prevent an imminent failure and isolate the cause; detection rank 3.
Very minor: Process parameter variability not within specification limits; severity rank 2. Failure occurs every 2 years; R(t) = 95%; 1 in 10,000; occurrence rank 2. Team’s discretion depending on machine and situation; detection rank 2.
OCCURRENCE RATINGS
Occurrence is the rating corresponding to the likelihood of the failure mode occurring within a certain period of time — see Table 6.9. The following should be considered
when developing the occurrence ratings:
• Service data
• MTBF data
• Failure logs
• Maintenance records
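As a minimal sketch of how such data might be turned into an occurrence rank, the Python fragment below maps an estimated R(t) over the evaluation period onto the bands recoverable from Table 6.9. The table as reproduced here is incomplete, so the intermediate cut points (ranks 7 through 9) are assumptions for illustration only.

# Illustrative mapping from estimated R(t) over the evaluation period to an
# MFMEA occurrence rank, using the bands recoverable from Table 6.9.
# Intermediate ranks (7-9) are interpolated assumptions.

def mfmea_occurrence_rank(r_t: float) -> int:
    bands = [          # (minimum R(t), rank)
        (0.95, 2),     # failure about every 2 years
        (0.90, 3),     # about every year
        (0.85, 4),     # about every 6 months
        (0.78, 5),     # about every 3 months
        (0.60, 6),     # about every month
        (0.40, 7),     # assumed intermediate band
        (0.20, 8),     # assumed intermediate band
        (0.01, 9),     # assumed intermediate band
    ]
    for minimum, rank in bands:
        if r_t >= minimum:
            return rank
    return 10          # failure roughly every hour

print(mfmea_occurrence_rank(0.92))   # -> 3
print(mfmea_occurrence_rank(0.50))   # -> 7 (assumed band)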
SURROGATE MFMEAS
Current Controls
Current controls are described as being those items that will be able to detect the
failure mode or the causes of failure. Controls can be either design controls or
process controls.
A design control is based on tests or other mechanisms used during the design
stage to detect failures. Process controls are those used to alert the plant personnel
that a failure has occurred. Current controls are generally described as devices to:
Detection Rating
Detection rating is the method used to rate the effectiveness of the control to detect the potential failure mode or cause. The scale for ranking these methods is 1 to 10 — see Table 6.9.
Step 1: severity
Step 2: criticality
Step 3: detection
This means that regardless of the RPN, the priority is based on the highest
severity first, especially if it is a 9 or a 10, followed by the criticality, which is the
product of severity and occurrence, and then the RPN.
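A minimal sketch of that ordering, with hypothetical line items, is to sort by severity first, then criticality (severity × occurrence), then RPN:

# Prioritize FMEA line items by severity, then criticality (S x O), then RPN.

items = [  # (name, severity, occurrence, detection) - illustrative values
    ("seal leaks",        8, 3, 4),
    ("guard missing",     9, 2, 2),
    ("surface scratched", 5, 6, 7),
]

def priority_key(item):
    name, s, o, d = item
    return (s, s * o, s * o * d)

for name, s, o, d in sorted(items, key=priority_key, reverse=True):
    print(f"{name:18s} S={s} crit={s*o:2d} RPN={s*o*d:3d}")

Note that the severity-9 item ranks first even though its RPN is the lowest of the three, which is exactly the point of putting severity ahead of the RPN.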
RECOMMENDED ACTIONS
• Each RPN value should have a recommended action listed.
• The actions are designed to reduce severity, occurrence, and detection
ratings.
• Actions should address in order the following concerns:
• Failure modes with a severity of 9 or 10
• Failure mode/cause that has a high severity occurrence rating
• Failure mode/cause/design control that has a high RPN rating
• When a failure mode/cause has a severity rating of 9 or 10, the design
action must be considered before the engineering release to eliminate
safety concerns.
REVISED RPN
• Recalculate S, O, and D after the action taken has been completed. Always
remember that only a change in design can change the severity. Occurrence
may be changed by a design change or a redundant system. Detection
may be changed by a design change or better testing or better design
control.
• MFMEA — A team needs to review the new RPN and determine if
additional design actions are necessary.
SUMMARY
In summary, the steps in conducting the FMEA are as follows:
SELECTED BIBLIOGRAPHY
Chrysler Corporation, Ford Motor Company, and General Motors Corporation, Potential
Failure Mode and Effect Analysis (FMEA) Reference Manual, 2nd ed., distributed by
the Automotive Industry Action Group (AIAG), Southfield, MI, 1995.
Chrysler Corporation, Ford Motor Company, and General Motors Corporation, Advanced Product Quality Planning and Control Plan, distributed by the Automotive Industry Action Group (AIAG), Southfield, MI, 1995.
Chrysler Corporation, Ford Motor Company, and General Motors Corporation, Potential Failure Mode and Effect Analysis (FMEA) Reference Manual, 3rd ed., distributed by the Automotive Industry Action Group (AIAG), Southfield, MI, 2001.
The Engineering Society for Advancing Mobility Land Sea Air and Space, Potential Failure
Mode and Effects Analysis in Design FMEA and Potential Failure Mode and Effects
Analysis in Manufacturing and Assembly Processes (Process FMEA) Reference Man-
ual, SAE: J1739, The Engineering Society for Advancing Mobility Land Sea Air and
Space, Warrendale, PA, 1994.
The Engineering Society for Advancing Mobility Land Sea Air and Space, Reliability and
Maintainability Guideline for Manufacturing Machinery and Equipment, SAE Prac-
tice Number M-110, The Engineering Society for Advancing Mobility Land Sea Air
and Space, Warrendale, PA, 1999.
Ford Motor Company, Failure Mode Effects Analysis: Training Reference Guide, Ford Motor
Company — Ford Design Institute. Dearborn, MI, 1998.
Kececioglu, D., Reliability Engineering Handbook, Vol. 1–2, Prentice Hall, Englewood Cliffs,
NJ, 1991.
Stamatis, D.H., Advanced Quality Planning, Quality Resources, New York, 1998.
Stamatis, D.H., Failure Mode and Effect Analysis: FMEA from Theory to Execution, Quality
Press, Milwaukee, 1995.
7
Reliability
Reliability n — may be relied on; trustworthiness, authenticity, consistency; infalli-
bility, suggesting the complete absence of error, breakdown, or poor performance.
In other words, when we speak of a reliable product, we usually expect such
adjectives as dependable and trustworthy to apply. But to measure product reliability,
we must have a more exact definition. The definition of reliability then, is: the
probability that a product will perform its intended function in a satisfactory manner
for a specified period of time when operating under specified conditions.
Thus, the reliability of a system expresses the length of failure-free time that
can be expected from the equipment. Higher levels of reliability mean less failure
of the system and consequently less downtime. To measure reliability it is necessary
to:
From a service point of view, we may be interested in repair frequency and then
we say that 20% of the units will have to be repaired by 8000 hours. Or the repair
per hundred units (R/100) is 20 at 8000 hours. The important point is that the
reliability is a metric expressing the probability of maintaining intended function
over time and is measurable as a percentage.
• Excess vibrations
• Excess noise
• Intermittent operation
• Drift
• Catastrophic failure
• And many other possibilities
• Major
• Minor
The severity of the failure to the customer must be documented and recognized
in a Failure Definition and Scoring Criterion that precisely delineates how each
incident on a system or equipment will be handled in regards to reliability and
maintainability calculations. Such documents should be developed early in a design
and development program so that all concerned are aware of the consequences of
incidents that occur during product testing and in field use.
The design team must be able to use the failure definition and scoring criterion
to address product trade-offs. If the severity of a failure to the customer can be
lowered by design changes, the failure definition and scoring criterion should pro-
mote such trade-offs.
SPECIFIED CONDITIONS
Different environments promote different failure modes and different failure rates
for a product. The environmental factors that the product will encounter must be
clearly defined. The levels (and rate of change) at which we want to address these
environmental factors must also be defined.
• Temperature
• Humidity
• Vibration
• Shock
• Corrosive materials
• Immersion
• Pressures, vacuum
• Salt spray
• Dust
• Cement floors/basements
• Ice/snow
• Lubricants
• Perfumes
• Magnetic fields
• Nuclear radiation
• Weather
• Contamination
• Antifreeze
• Gasoline fumes
• Rust inhibitors/under coatings
• Rain
• Soda pop/hot coffee
• Sunlight
• Electrical discharges
• And so on
RELIABILITY NUMBERS
The reliability number attached to a product changes with:
At any product age t, for a population of N products, the reliability at time t, denoted R(t), is
R(t) = Ns(t) / N
where Ns(t) is the number of products still operating (surviving) at time t.
This is the reliability of this population of products at time t. The real world
estimation of reliability is usually much more difficult due to products being sold
over time with each having a different usage profile. Calendar time is known but
product life on each product is not, while warranty systems monitor and record only
failure.
• MTBF — The mean time between failures, also MTTF, MMBF, MCTF.
MTBF = 120 hours means that on the average a failure will occur with
every 120 hours of operation.
• Failure rate — The rate of failures per unit of operating time. λ =
0.05/hour means that one failure will occur with every 20 hours of oper-
ation, on the average.
• R/100 (or R/1000) — The number of warranty claims per 100 (or 1000)
products sold. R/100 = 7 means that there are seven warranty claims for
every 100 products sold.
• Reliability number — The reliability of the product at some specific time.
R = 90% means that 9 out of 10 products work successfully for the
specified time.
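As a minimal sketch (assuming a constant failure rate, i.e., an exponential life model, which the bullets above do not require), the relationships among these metrics can be written as follows; the numeric values are the illustrative ones quoted above.

import math

# Sketch only: assumes a constant failure rate (exponential model), which the text
# does not require; values are the illustrative ones quoted in the bullets above.
mtbf = 120.0                 # hours
lam = 1.0 / mtbf             # failure rate per hour
t = 120.0                    # mission time of interest, hours

reliability = math.exp(-lam * t)     # probability of surviving t hours
print(f"lambda = {lam:.4f}/hour, R({t:.0f} h) = {reliability:.2%}")

# R/100: warranty claims per 100 products sold
claims, sold = 7, 100
print("R/100 =", 100 * claims / sold)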
Psychological
• Taste
• Beauty, style
• Status
Technological
• Hardness
• Vibration
• Noise
• Materials (bearings, belts, hoses, etc.)
Time-oriented
• Reliability
• Maintainability
Contractual
• Warranty
Ethical
• Honesty of repair shop
• Experience and integrity of sales force
PRODUCT DEFECTS
Quality defects are defined as those that can be located by conventional inspection
techniques. (Note: for legal reasons, it is better to identify these defects as noncon-
formances.) Reliability defects are defined as those that require some stress applied
over time to develop into detectable defects.
What causes product failure over time? Some possibilities are:
• Design
• Manufacturing
• Packaging
• Shipping
• Storage
• Sales
• Installation
• Maintenance
• Customer duty cycle
CUSTOMER SATISFACTION
The ultimate goal of a product is to satisfy a customer from all aspects of cost,
performance, reliability, and maintainability. The customer trades off these param-
eters when making a decision to buy a product. Assuming that we are designing a
product for a certain market segment, cost is determined within limits. The trade-
offs are as follows:
A = MTBF / (MTBF + MTTR)
where MTTR = mean time-to-repair and the MTTR is for the active re-
pair time.
5. Active repair time is that portion of downtime when the technicians are
working on the system to repair the failure situation. It must be understood
that the different availabilities are defined for various time-states of the
system.
6. Serviceability is the ease with which machinery and equipment can be
repaired. Here repair includes diagnosis of the fault, replacement of the
necessary parts, tryout, and bringing the equipment back on line. Service-
ability is somewhat qualitative and addresses the ease by which the equip-
ment, as designed, can be diagnosed and repaired. It involves factors such
as accessibility to test points, ease of removal of the failed components,
and ease of bringing the system back on line.
1. Infant mortality period: During the infant mortality period the population
exhibits a high failure rate, decreasing rapidly as the weaker products fail.
Some manufacturers provide a “burn-in” period for their products to help
eliminate infant mortality failures. Generally, infant mortality is associated
with manufacturing issues. Examples are:
• Poor welds
• Contamination
• Improper installation
• And so on
2. Useful life period: During this period the population of products exhibits
a relatively low and constant failure rate. It is explained using the stress-strength
interference model for reliability, which compares the stress applied to the product with the strength of the product; a failure occurs when the applied stress exceeds the strength.
In conjunction with the bathtub curve there are two more items of concern. The
first one is the hazard rate (or the instantaneous failure rate) and the second, the
ROCOF plot.
The hazard rate is the probability that the product will fail in the next interval
of time (or distance or cycles). It is assumed the product has survived up to that
time. For example, there is a one in twenty chance that it will crack, break, bend,
or fail to function in the next month. Typically, hazard rate is shown as
h(t) = f(t) / [1 − F(t)] = f(t) / R(t)
where h(t) is the hazard rate; f(t) is the probability density function [PDF: f(t) = λe^(−λt)]; F(t) is the cumulative distribution function [CDF: F(t) = 1 − e^(−λt)]; and R(t) is the reliability at time t [R(t) = 1 − F(t) = 1 − (1 − e^(−λt)) = e^(−λt)].
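A short sketch of these relationships for the constant-failure-rate (exponential) case follows; the value of λ is an assumed example, not a figure from the text.

import math

# Sketch of the hazard-rate relationships above for the exponential case
# (f(t) = lambda * exp(-lambda * t)); lambda and t are assumed example values.
lam = 0.05   # failures per hour
t = 10.0     # hours

f_t = lam * math.exp(-lam * t)          # PDF
F_t = 1.0 - math.exp(-lam * t)          # CDF
R_t = 1.0 - F_t                         # reliability
h_t = f_t / R_t                         # hazard rate = f(t)/R(t)

print(f"f(t)={f_t:.4f}  F(t)={F_t:.4f}  R(t)={R_t:.4f}  h(t)={h_t:.4f}")
# For the exponential distribution, h(t) equals lambda for every t.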
The Rate of Change of Failure or Rate of Change of Occurrence of Failure
(ROCOF), on the other hand, is a visual tool that helps the engineer to analyze
situations where a lot of data over time has been accumulated. Essentially, its purpose
is the same as that of the reliability bathtub curve, that is, to understand the life
stages of a product or process and take the appropriate action. A typical ROCOF
plot (for warranty item) will display an early (decreasing rate) and useful life
(constant rate) performance. If wear out is detected, it should be investigated.
Knowing what is happening to a product from one region of the bathtub curve to
the next helps the engineer specify what failed hardware to collect and aids with
calibrating the severity of development tests.
If the number of failures is small, the ROCOF plot approach may be difficult to
interpret. When that happens, it is recommended that a “smoothing” approach be
taken. The typical smoothing methodology is to use log paper for the plotting.
Obviously, many more ways and more advanced techniques exist. It must be noted
here that most statistical software provides this smoothing as an option for the data
under consideration. See Volume III for more details on smoothing.
• Market research
• Forecast need.
• Forecast sales.
• Understand who the customer is and how the product will be used.
• Set broad performance objectives.
• Establish program cost objectives.
• Establish technical feasibility.
• Establish manufacturing capacity.
• Establish reliability and maintainability (R&M) requirements.
• Understand governmental regulations.
• Understand corporate objectives.
• Concept phase
• Formulate project team.
• Formulate design requirements.
• Establish real world customer usage profile.
• Develop and consider alternatives.
• Rank alternatives considering R&M requirements.
• Review quality and reliability history on past products.
• Assess feasibility of R&M requirements.
• Design phase
• Prepare preliminary design.
• Perform design calculations.
• Prepare rough drawings.
• Compare alternatives to pursue.
• Evaluate manufacturing feasibility of design approach (design for
manufacturability and assembly).
RELIABILITY IN DESIGN
The cost of unreliability is:
It has been demonstrated in the marketplace that highly reliable products (failure
free) gain market share. A very classic example of this is the American automotive
market. In the early 1960s, American manufacturers were practically the only game
in town with GM capturing some 60% of the market. Since then, progressively and
on a yearly basis the market has shifted to the point where Flint (2001) reports that
GM now holds a shade over 25% (excluding trucks and Saab), Ford 14.7% (excluding Volvo
and Jaguar), and Chrysler about 5%. The projections for the 2002 model year are
not any better with GM capturing only 25%, Ford 15%, and Chrysler 6%. The sad
part of the automotive scene is that GM, Ford, and DaimlerChrysler have lost market
share, and sales are continually nudging down with no end in sight. That is, as Flint
(2001, p. 21) points out, “they are not going to recover that market share, not in the
short term, not in the next five to ten years.”
The evidence suggests that the mission of a reliability program is to estimate,
track, and report the reliability of hardware before it is produced. The reliability of
the equipment must be reported at every phase of design and development in a
consistent and easy-to-understand format. Warranty cost is an expensive situation
resulting from poor manufacturing quality and inadequate reliability. For example,
the chairman and chief executive of Ford Motor Company, Jacques Nasser, in the
1st quarter of 2001 leadership cascading meeting made the statement that in 1999,
there were 2.1 times as many vehicles recalled as were sold. In 2000, there were
six times as many. By way of comparison:
For each car sold, the manufacturer must collect and retain this expense in a
warranty account.
Therefore, reliability can play an important role in designing products that will
satisfy the customer and will prove durable in the real world usage application. The
focus of reliability is to design, identify, and detect early potential concerns at a
point where it is really cost effective to do so.
Reliability must be valued by the organization and should be a primary consid-
eration in all decision making. Reliability techniques and disciplines are integrated
into system and component planning, design, development, manufacturing, supply,
delivery, and service processes. The reliability process is tailored to fit individual
business unit requirements and is based on common concepts that are focused on
producing reliable products and systems, not just components.
1. Pre-Deployment Process
Develop generic requirements for forward models by providing product lines with
generic information on system robust design, such as case studies, system P-dia-
grams, measurement of ideal functions, etc. In this stage, we also conduct compet-
itive technical information analysis to our potential product lines through test-the-
best and reliability benchmarking. Some of the specific tools we may use are:
• Prioritize concerns
• Identify root causes
• Determine/incorporate corrective action
• Validate improvements
• Champion implementation across product line(s)
3. Quality Support
Identify best reliability practices and lead the process standardization and simplifi-
cation. Develop a toolbox and provide reliability consultation.
In addition to their other uses, the outcomes of reliability testing are used as a
basis for design qualification and acceptance. Reliability testing should be a natural
extension of the analytical reliability models, so that test results will clarify and
verify the predicted results, in the customer’s environment.
A key factor in reliability test planning is choosing the proper sample size. Most
of the activity in determining sample size is involved with either:
1. Achieving the desired confidence that the test results give the correct
information
2. Reducing the risk that the test results will give the wrong information
• Test with regard to production intent. Make sure the sample that is tested
is representative of the system that the customer will receive. This means
that the test unit is representative of the final product in all areas including
materials (metals, fasteners, weight), processes (machining, casting, heat
treat), and procedures (assembly, service, repair). Of course, consider that
these elements may change or that they may not be known. However, use
the same production intent to the extent known at the time of the test plan.
• Determine performance parameters before testing is started. It is often
more important in reliability evaluations to monitor the percentage change
in a parameter rather than the performance to specification.
Sudden-Death Testing
Sudden-death testing allows you to obtain test data quickly and reduces the number
of test fixtures required. It can be used on a sample as large as 40 or more or as
small as 15. Sudden-death testing reduces testing time in cases where the lower
quartile (lower 25%) of a life distribution is considerably lower than the upper
quartile (upper 25%). The philosophy involved in sudden-death testing is to test
small groups of samples to a first failure only and use the data to determine the
Weibull distribution of the component. The method is as follows:
1. Choose a sample size that can be divided into three or more groups with
the same number of items in each group. Divide the sample into three or
more groups of equal size and treat each group as if it were an individual
assembly.
2. Test all items in each group concurrently until there is a first failure in
that group. Testing is then stopped on the remaining units in that group
as soon as the first unit fails, hence the name “sudden death.”
3. Record the time to first failure in each group.
4. Rank the times to failure in ascending order.
5. Assign median ranks to each failure based on the sample size equal to
the number of groups. Median rank charts are used for this purpose.
6. Plot the times to failure vs. median ranks on Weibull paper.
7. Draw the best fit line. (Eye the line or use the regression model.) This
line represents the sudden-death line.
8. Determine the life at which 50% of the first failures are likely to occur
(B50 life) by drawing a horizontal line from the 50% level to the sudden-
death line. Drop a vertical line from this point down.
9. Find the median rank for the first failure when the sample size is equal
to the number of items in each subgroup. Again, refer to the median rank
charts. Draw a horizontal line from this point until it intersects the vertical
line drawn in the previous step.
TABLE 7.1
Failure Rates with Median Ranks
Failure Order Number | Life (Hours) | Median Rank (%)
1 | 65 | 12.95
2 | 120 | 31.38
3 | 155 | 50.00
4 | 200 | 68.62
5 | 300 | 87.06
10. Draw a line parallel to the sudden-death line passing through the inter-
section point from step 9. This line is called the population line and
represents the Weibull distribution of the population.
EXAMPLE
Assume you have a sample of 40 parts from the same production run available for
testing purposes. The parts are divided into five groups of eight parts as shown below:
Group 1: parts 1 through 8
Group 2: parts 1 through 8
Group 3: parts 1 through 8
Group 4: parts 1 through 8
Group 5: parts 1 through 8
All parts in each group are put on test simultaneously. The test proceeds until any
one part in each group fails. At that time, testing stops on all parts in that group.
In the test, we experience the following first failures in each group: 65, 120, 155, 200, and 300 hours.
Failure data are arranged in ascending hours to failure, and their median ranks are
determined based on a sample size of N = 5. (There are five failures, one in each of
five groups.) The chart in Table 7.1 illustrates the data. The median rank percentage
for each failure is derived from the median rank (Table 7.2) for five samples.
If the life hours and median ranks of the five failures are plotted on Weibull paper,
the resulting line is called the sudden-death line. The sudden-death line represents
TABLE 7.2
Median Ranks
Rank Sample size
Order 1 2 3 4 5 6 7 8 9 10
1 50.0 29.3 20.6 15.9 12.9 10.9 9.4 8.3 7.4 6.7
2 70.7 50.0 38.6 31.4 26.4 22.8 20.1 18.0 16.2
3 79.4 61.4 50.0 42.1 36.4 32.1 28.6 25.9
4 84.1 68.6 57.9 50.0 44.0 39.3 35.5
5 87.1 73.9 63.6 56.0 50.0 45.2
6 89.1 77.2 67.9 60.7 54.8
7 90.6 79.9 71.4 64.5
8 91.7 82.0 74.1
9 92.6 83.8
10 93.3
Rank Sample Size (n)
Order 11 12 13 14 15 16 17 18 19 20
1 6.1 5.6 5.2 4.8 4.5 4.2 4.0 3.8 3.6 3.4
2 14.8 13.6 12.6 11.7 10.9 10.3 9.7 9.2 8.7 8.3
3 23.6 21.7 20.0 18.6 17.4 16.4 15.4 14.6 13.8 13.1
4 32.4 29.8 27.5 25.6 23.9 22.5 21.2 20.0 19.0 18.1
5 41.2 37.9 35.0 32.6 30.4 28.6 26.9 25.5 24.2 23.0
6 50.0 46.0 42.5 39.5 37.0 34.7 32.7 30.9 29.3 27.9
7 58.8 54.0 50.0 46.5 43.5 40.8 38.5 36.4 34.5 32.8
8 67.6 62.1 57.5 53.5 50.0 46.9 44.2 41.8 39.7 37.7
9 76.4 70.2 65.0 60.5 56.5 53.1 50.0 47.3 44.8 42.6
10 85.2 78.3 72.5 67.4 63.0 59.2 55.8 52.7 50.0 47.5
11 93.9 86.4 80.0 74.4 69.5 65.3 61.5 58.2 55.2 52.5
12 94.4 87.4 81.4 76.1 71.4 67.3 63.6 60.3 57.4
13 94.8 88.3 82.6 77.5 73.1 69.1 65.5 62.3
14 95.2 89.1 83.6 78.8 74.5 70.7 67.2
15 95.5 89.7 84.6 80.0 75.8 72.1
16 95.8 90.3 85.4 81.0 77.0
17 96.0 90.8 86.2 81.9
18 96.2 91.3 86.9
19 96.4 91.7
20 96.6
the cumulative distribution that would result if five assemblies had been tested to failure, but it
actually represents five observations of the first failure in a group of eight. The median
life point on the sudden-death line (point at which 50% of the failures occur) will
correspond to the median rank for the first failure in a sample of eight, which is
8.30%. The population line is drawn parallel to the sudden-death line through a point
plotted at 8.30% and at the median life to first failure as determined above. This
estimate of the population’s minimum life is just as reliable as the one that would
have been obtained if all 40 parts were tested to failure.
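A minimal sketch of the sudden-death arithmetic follows, assuming Python; the least-squares fit stands in for the hand-drawn best-fit line, and the median-rank values are read from Table 7.2.

import math

# Sketch of the sudden-death computation above. Median-rank values are taken from
# Table 7.2; the regression fit stands in for the hand-drawn "best fit" line.
first_failures = [65, 120, 155, 200, 300]          # hours, one per group of 8
median_ranks = [0.129, 0.314, 0.500, 0.686, 0.871] # n = 5 column of Table 7.2

# Weibull plotting: ln(t) vs ln(-ln(1 - F)) is linear with slope beta.
x = [math.log(t) for t in first_failures]
y = [math.log(-math.log(1.0 - f)) for f in median_ranks]
n = len(x)
sx, sy = sum(x), sum(y)
sxx = sum(v * v for v in x)
sxy = sum(a * b for a, b in zip(x, y))
beta = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - beta * sx) / n
theta_sd = math.exp(-intercept / beta)             # scale of the sudden-death line

# B50 of the sudden-death line (median life of first failures)
b50 = theta_sd * (-math.log(1 - 0.50)) ** (1 / beta)

# Shift to the population line through (B50, 8.30%): first-failure median rank, n = 8
theta_pop = b50 / (-math.log(1 - 0.083)) ** (1 / beta)
print(f"beta = {beta:.2f}, sudden-death theta = {theta_sd:.0f} h, "
      f"B50 = {b50:.0f} h, population theta = {theta_pop:.0f} h")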
Accelerated Testing
Accelerated testing is another approach that may be used to reduce the total test
time required. Accelerated testing requires stressing the product to levels that are
more severe than normal. The results that are obtained at the accelerated stress levels
are compared to those at the design stress or normal operating conditions. We will
look at examples of this comparison during this section.
We use accelerated testing to:
• Accelerated testing can cause failure modes that are not representative.
• If there is little correlation to “real” use, such as aging, thermal cycling,
and corrosion, then it will be difficult to determine how accelerated testing
affects these types of failure modes.
• Constant-stress testing
• Step-stress testing
• Progressive-stress testing
• AST/PASS testing
Before we discuss the methods, keep in mind that any product may be subjected
to multiple stresses and combinations of stresses. The stresses and combinations are
identified very early in the design phase. When accelerated tests are run, ensure that
all the stresses are represented in the test environment and that the product is exposed
to every stress.
CONSTANT-STRESS TESTING
In constant-stress testing, each test unit is run at constant high stress until it fails or
its performance degrades. Several different constant stress conditions are usually
employed, and a number of test units are tested at each condition. Some products
run at constant stress, and this type of test represents actual use for those products.
Constant stress will usually provide greater accuracy in estimating time to failure.
Also, constant-stress testing is most helpful for simple components. In systems and
assemblies, acceleration factors often differ for different types of components.
STEP-STRESS TESTING
In step-stress testing, the item is tested initially at a normal, constant stress for a
specified period of time. Then the stress is increased to a higher level for a specified
period of time. Increases continue in a stepped fashion.
The main advantage of step-stress testing is that it quickly yields failure, because
increasing stress ensures that failures occur. A disadvantage is that failure modes
that occur at high stress may differ from those at normal use conditions. Quick
failures do not guarantee more accurate estimates of life or reliability. A constant-
stress test with a few failures usually yields greater accuracy in estimating the actual
time to failure than a shorter step-stress test; however, we may need to do both to
correlate the results so that the results of the shorter test can be used to predict the
life. (Always remember that failures must be related to the stress conditions to be
valid. Other test discrepancies should be noted and repaired and the testing continued.)
PROGRESSIVE-STRESS TESTING
Progressive-stress testing is step-stress testing carried to the extreme. In this test,
the stress on a test unit is continuously increased, rather than being increased in
steps. Usually, the accelerating variable is increased linearly with time.
Several different rates of increase are used, and a number of test units are tested
at each rate of increase. Under a low rate of increase of stress, specimens tend to
live longer and to fail at lower stress because of the natural aging effects or cumu-
lative effects of the stress on the component. Progressive-stress testing has some of
the same advantages and disadvantages as step-stress testing.
ACCELERATED-TEST MODELS
The data from accelerated tests are interpreted and analyzed using different models.
The model that is used depends upon the:
• Product
• Testing method
• Accelerating variables
The models give the product life or performance as a function of the accelerating
stress. Keep these two points in mind as you analyze accelerated test data:
1. Units run at a constant high stress tend to have shorter life than units run
at a constant low stress.
2. Distribution plots show the cumulative percentage of the samples that fails
as a function of time. In fact, over time the smoothing of the curve in the
shape of an “S” is indeed the estimate of the actual cumulative percentage
failing as a function of time.
The inverse power law model applies to many failure mechanisms as well as to
many systems and components. This model assumes that at any stress, the time to
failure is Weibull distributed. Thus:
• The Weibull shape parameter β has the same value for all the stress levels.
• The Weibull scale parameter θ is an inverse power function of the stress.
The model assumes that the life at rated stress divided by the life at accelerated
stress is equal to the quantity, accelerated stress divided by rated stress, raised to
the power n, where: n = acceleration factor determined from the slope of the S-N
diagram on the log-log scale.
Using the above information, we can write
θu/θa = (Sa/Su)^n
where θu = life at the rated (usage) stress level; θa = life at the accelerated stress
level; and n = acceleration factor determined from the slope of the S-N diagram on
the log-log scale.
EXAMPLE
Let us assume we tested 15 incandescent lamps at 36 volts until all items in the
sample failed. A second sample of 15 lamps was tested at 20 volts. Using these data,
we will determine the characteristic life at each test voltage and use this information
to determine the characteristic life of the device when operated at 5 volts.
Since we know these two factors, we can determine the acceleration factor, n. We
have the following relationship:
11.7 hrs / 2.3 hrs = (36 V / 20 V)^n
Therefore,
n = 2.767
Now we can use the same relationship to determine the characteristic life at 5 volts:
θ5 = θ20 × (20 V / 5 V)^2.767 = 11.7 × (4)^2.767 ≈ 542 hours
The reader must be very careful here because not all electronic parts or assem-
blies will follow the inverse power law model. Therefore, its applicability must
usually be verified experimentally before use.
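Assuming the inverse power law does apply, the example above can be reproduced with a short sketch; the variable names are ours.

import math

# Sketch of the inverse power law computations in the example above.
theta_36v, v_accel = 2.3, 36.0    # characteristic life (hours) at 36 volts
theta_20v, v_rated = 11.7, 20.0   # characteristic life (hours) at 20 volts

# theta_u / theta_a = (V_a / V_u)^n  ->  solve for the acceleration factor n
n = math.log(theta_20v / theta_36v) / math.log(v_accel / v_rated)
print(f"n = {n:.3f}")             # about 2.767, as in the text

# Extrapolate (with the usual caution) to the 5-volt use condition
v_use = 5.0
theta_5v = theta_20v * (v_rated / v_use) ** n
print(f"estimated characteristic life at 5 V ~ {theta_5v:.0f} hours")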
Arrhenius Model
The Arrhenius relationship for reaction rate is often used to account for the effect
of temperature on electrical/electronic components. The Arrhenius relationship is as
follows:
Reaction rate = A exp[−Ea / (kB T)]
Rate(use) = A exp[−Ea / (kB Tuse)]

Rate(accelerated) = A exp[−Ea / (kB Taccelerated)]

Acceleration Factor = AF = Rate(a)/Rate(u) = A exp[−Ea/(kB Ta)] / A exp[−Ea/(kB Tu)]

AF = exp[(−Ea/kB)(1/Ta − 1/Tu)] = exp[(Ea/kB)(1/Tu − 1/Ta)]
EXAMPLE
Assume we have a device that has an activation energy of 0.5 and a characteristic
life of 2750 hours at an accelerated operating temperature of 150°C. We want to find
the characteristic life at an expected use temperature of 85°C. (Remember that the
conversion factor for Celsius to Kelvin is: °K = °C + 273 — You may want to review
Volume II.)
Therefore:
AF = exp[(0.5 / 8.63×10⁻⁵)(1/358 − 1/423)]
AF = exp[2.49] ≈ 12. Therefore, the acceleration factor is 12. To determine the life at
85°C, multiply the acceleration factor by the characteristic life at the accelerated
test level of 150°C: 12 × 2750 hours = 33,000 hours.
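A minimal sketch of this acceleration-factor calculation, using the same constants as the example, follows.

import math

# Sketch of the Arrhenius acceleration-factor example above.
Ea = 0.5            # activation energy, eV
k_B = 8.63e-5       # Boltzmann constant, eV/K (value used in the text)
T_use = 85 + 273    # use temperature, K
T_acc = 150 + 273   # accelerated test temperature, K

AF = math.exp((Ea / k_B) * (1.0 / T_use - 1.0 / T_acc))
life_acc = 2750.0   # characteristic life at 150 C, hours
life_use = AF * life_acc
print(f"AF = {AF:.1f}, estimated life at 85 C = {life_use:,.0f} hours")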
AST/PASS
HALT (Highly Accelerated Life Test) and HASS (Highly Accelerated Stress Screens)
are two types of accelerated test processes used to simulate aging in manufactured
products. The HALT/HASS process was invented by Dr. Gregg Hobbs in the early
1980s. It has since been used with much success in various military and commercial
applications. The HALT/HASS methods and tools are still in the development phase
and will continue to evolve as more companies embrace the concept of accelerated
testing. Many companies use this type of testing, which they call AST (Accelerated
Stress Test) and PASS (Production Accelerated Stress Screen).
The goal of accelerated testing is to simulate aging. If the stress-strength rela-
tionships are plotted, the design strength and field stress are distributed around
means. Let us assume the stress and strength distributions are overlapped (the right
tail of the stress curve is overlapped with the left tail of the strength curve). When
that happens, there is an opportunity for the product to fail in the field. This area of
overlap is called interference.
Many products, including some electronic products, have a tendency to grow
weaker with age. This is reflected in a greater overlap of the curves, thus increasing
the interference area. Accelerated testing attempts to simulate the aging process so
that the limits of design strength are identified quickly and the necessary design
modifications can be implemented.
PURPOSE OF AST
AST is a highly accelerated test designed to fail the target component or module.
The goal of this process is to cause failure, discover the root cause, fix it, and retest
it. This process continues until the “limit of technology” is reached and all the
components of one technology (i.e., capacitors, diodes, resistors) fail. Once a design
reaches its limit of technology, the tails of the stress-strength distribution should
have minimal overlap.
The AST method uses step-stress techniques to discover the operating and
destruct limits of the component or module design. This method should be used in
the pre-prototype and/or pre-bookshelf phase of the product development cycle or
as soon as the first parts are available. Let us look at an example:
We want to discover the operating and destruct limits of a component/module
design for minimum temperature. The unit is placed in a test chamber, stabilized at
–40°C, then powered up to verify the operation. The unit is then unpowered, the
temperature lowered to –45°C and the unit allowed to stabilize at that temperature.
It is then powered on and verified. This process is repeated as the temperature is
lowered by 5° increments.
At –70°C, the unit fails. The unit is warmed to –65°C to see if it recovers.
Normally, it will recover. The temperature of –65°C is said to be its operational
limit. The test continues to determine the destruct limit. The temperature is lowered to
–75°C, stabilized, powered to see if it operates, then returned to –65°C to see if
it recovers. If when this unit is taken down to –95°C and returned to –65°C, it does
not recover, the minimum temperature destruct limit for this module is determined
to be –95°C. The failed module is then analyzed to determine the root cause of the
failure.
The team must then determine if the failure mode is the limit of technology or
if it is a design problem that can be fixed. Experience has shown that 80% of the
failures are design problems accelerated to failure using the AST or similar accel-
erated stress test methods.
The failure modes from the AST and PASS are used by the manufacturing team
to ensure that they do not see any of these problems in their products.
PURPOSE OF PASS
PASS is incorporated into a process after the design has been first subjected to AST.
The purpose of PASS is to take the process flaws created in the component/module
from latent (invisible) to patent (visible). This is accomplished by severely stressing
a component enough to make the flaws “visible” to the monitoring equipment. These
flaws are called outliers, and they result from process variation, process changes,
and different supplier sources. The goal of PASS is to find the outliers, which will
assist in the determination of the root cause and the correction of the problem before
the component reaches the customer. This process offers the opportunity for the
organization to eliminate module conditioning and burn-in.
PASS development is an iterative process that starts when the pre-pilot units
become available in the pre-pilot phase of the product development cycle. The initial
PASS screening test limits are the AST operational limits and will be adjusted
accordingly as the components/modules fail and the root cause determinations indi-
cate whether the failures are limits of technology or process problems. The PASS
also incorporates findings from process failure mode and effect analysis (PFMEA)
regarding possible “significant” process failure modes that must be detected if
present.
When PASS development is complete, a strength-of-PASS test is performed to
verify that the PASS has not removed too much useful life from the product. A
sample of 12 to 24 components is run through 10 to 20 PASS cycles. These samples
are then tested using the design verification life test. If the samples fail this test, the
screen is too strong. The PASS will be adjusted based on the root cause analysis,
and the strength-of-PASS will be rerun.
CHARACTERISTICS OF A RELIABILITY
DEMONSTRATION TEST
Eight characteristics are important in reliability demonstration testing. These are:
3. Consumer’s risk, β: Any demonstration test runs the risk of accepting bad
product or rejecting good product. From the consumer’s point of view,
the risk is greatest if bad product is accepted. Therefore, the consumer
wants to minimize that risk. The consumer’s risk is the risk that a test can
accept a product that actually fails to meet the reliability requirement.
Consumer’s risk can be expressed as: β = 1 – confidence level
4. Probability distribution: This is the distribution that is used for the number
of failures or for time to failure. These are generally expressed as normal,
exponential, or Weibull.
5. Sampling scheme
6. Number of test failures to allow
7. Producer’s risk, α : From the producer’s standpoint, the risk is greatest if
the test rejects good product. Producer’s risk is the risk that the test will
reject a product that actually meets the reliability requirement.
8. Design reliability, Ra: This is the reliability that is required in order to
meet the producer’s risk, α, requirement at the particular sample size
chosen for the test. Small test sample sizes will require a high design
reliability in order to meet the producer’s risk objective. As the sample
size increases, the design reliability requirement will become smaller in
order to meet the producer’s risk objective.
1. Producer’s risk, α
2. Consumer’s risk, β
3. Probability of acceptance at any other population reliability or MTBF or
failure rate
Obviously, a specific OC curve will apply for each test situation and will depend
on the number of pieces tested and the number of failures allowed.
ATTRIBUTES TESTS
If the components being tested are merely being classified as acceptable or unac-
ceptable, the demonstration test is an attributes test. Attributes tests:
VARIABLES TESTS
Variables tests are used when more information is required than whether the unit
passed or failed, for example, “What was the time to failure?” The test is a variables
test if the life of the items under test is:
FIXED-SAMPLE TESTS
When the required reliability and the test confidence/risk are known, statistical theory
will dictate the precise number of items that must be tested if a fixed sample size
is desired.
SEQUENTIAL TESTS
A sequential test may be used when the units are tested one at a time and the
conclusion to accept or reject is reached after an indeterminate number of observa-
tions. In a sequential test:
Now that you are familiar with the four test types, let us look at the test methods.
Note that the four test types are not mutually exclusive. We can have fixed-sample
or sequential-attributes tests as well as fixed-sample or sequential-variables tests.
C(N, n) = N! / [n!(N − n)!]
LARGE POPULATION — FIXED-SAMPLE TEST
USING THE BINOMIAL DISTRIBUTION
When parts from a large population are tested and the accept/reject decision is based
on attributes, the binomial distribution can be used. Note that for a large N (one in
which the sample size will be less than 10% of the population), the binomial
distribution is a good approximation for the hypergeometric distribution. The bino-
mial attribute demonstration test is probably the most versatile for use on product
components for several reasons:
1. Specified reliability, Rs
2. Confidence level or consumer’s risk, β
3. Producer’s risk, α (if desired)
Pr(x ≤ f) = Σ (x = 0 to f) C(n, x)(1 − R)^x R^(n−x)

For large n, the Poisson approximation (with λpoi = n(1 − R)) may be used:

Pr(x ≤ f) = Σ (x = 0 to f) λpoi^x e^(−λpoi) / x!
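As a sketch only (the sample size, allowable failures, and reliability below are assumed example values, not figures from the text), these acceptance probabilities can be computed as follows.

from math import comb, exp, factorial

# Sketch of the acceptance-probability calculations above. R, n, and f are assumed
# example values, not figures from the text.
R, n, f = 0.90, 20, 1    # reliability, sample size, allowable failures

# Binomial: probability of seeing f or fewer failures in n trials
p_binomial = sum(comb(n, x) * (1 - R) ** x * R ** (n - x) for x in range(f + 1))

# Poisson approximation with lambda_poi = n(1 - R)
lam = n * (1 - R)
p_poisson = sum(lam ** x * exp(-lam) / factorial(x) for x in range(f + 1))

print(f"Pr(x <= {f}) binomial = {p_binomial:.3f}, Poisson approx = {p_poisson:.3f}")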
SUCCESS TESTING
Success testing is a special case of binomial attributes testing for large populations
where no failures are allowed. Success testing is the simplest method for demon-
strating a required reliability level at a specified confidence level. In this test case,
n items are subjected to a test for the specified time of interest, and the specified
reliability and confidence levels are demonstrated if no failures occur. The method
uses the following relationship:
R = (1 − C)^(1/n) = (β)^(1/n)

n = ln(1 − C) / ln R
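A short sketch of the success-testing relationship follows; the confidence and reliability values are assumed examples.

import math

# Sketch of the success-testing (zero-failure) relationship above.
C = 0.90      # required confidence level (assumed example value)
R = 0.95      # reliability to be demonstrated (assumed example value)

n = math.log(1 - C) / math.log(R)
print(f"test at least {math.ceil(n)} units with no failures")   # n = ln(1-C)/ln(R)

# Conversely, the reliability demonstrated by 45 failure-free units at 90% confidence:
print(f"R demonstrated by 45 successes: {(1 - C) ** (1 / 45):.3f}")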
The test procedure consists of testing parts one at a time and classifying the
tested parts as good or defective. After each part is tested, calculations are made
based on the test data generated to that point. The decision is made as to whether
the test has been passed or failed or if another observation should be made. A
sequential test will result in a smaller average number of parts tested when the
population tested has a reliability close to either the specified or design reliability.
The method to use is described below:
Determine Rs, Rd, α, and β.
Calculate the accept/reject decision points using
β/(1 − α) and (1 − β)/α
and, after each part is tested, compute
L = [(1 − Rs)/(1 − Rd)]^f (Rs/Rd)^s
where f is the number of failures and s is the number of successes observed so far.
If L > (1 − β)/α, the test is failed.
If L < β/(1 − α), the test is passed.
If β/(1 − α) ≤ L ≤ (1 − β)/α, the test should be continued.
GRAPHICAL SOLUTION
A graphical solution for critical values of f and s is possible by solving the following
equations:
ln[(1 − β)/α] = f ln[(1 − Rs)/(1 − Rd)] + s ln(Rs/Rd)

and

ln[β/(1 − α)] = f ln[(1 − Rs)/(1 − Rd)] + s ln(Rs/Rd)
Rs = e^(−λs t) = e^(−t/θs)
Then, solve the following equation for various sample sizes and allowable
failures:
θ ≥ 2 Σ (i = 1 to n) ti / χ²(β, 2f)
The method to use will be the same as with the failure-truncated test. In this case:
θ ≥ 2 Σ (i = 1 to n) ti / χ²(β, 2(f + 1))
EXAMPLE
How many units must be checked on a 2000-hour test if zero failures are allowed
and θs = 32,258? A 75% confidence level is required.
β = 1 – 0.75 = 0.25
2(f + 1) = 2(0 + 1) = 2
Therefore:
θ ≥ 2 Σ (i = 1 to n) ti / χ²(0.25, 2) = 2 Σ (i = 1 to n) ti / 2.772 = 32,258

Σ (i = 1 to n) ti = (2.772)(32,258) / 2 = 44,709.59 hours
Since no failures are allowed, all units must complete the 2000-hour test, and n(2000 hours) = 44,709.59 hours.
Solving for n gives n = 22.4, so 23 units must be tested for 2000 hours each with no failures.
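A minimal sketch of this zero-failure, time-truncated calculation follows; the closed form χ² = −2 ln β for two degrees of freedom is used in place of a table lookup.

import math

# Sketch of the time-truncated, zero-failure test above, using the chi-square
# relationship theta >= 2*sum(t_i) / chi2(beta, 2(f+1)).
theta_s = 32258.0        # specified MTBF, hours
confidence = 0.75
beta = 1 - confidence    # consumer's risk
test_time = 2000.0       # hours per unit

# chi-square upper-tail value with 2 degrees of freedom: chi2 = -2*ln(beta)
chi2 = -2.0 * math.log(beta)                 # = 2.772 for beta = 0.25
total_hours = chi2 * theta_s / 2.0           # required sum of unit test hours
units = math.ceil(total_hours / test_time)
print(f"chi2 = {chi2:.3f}, total hours = {total_hours:,.0f}, units needed = {units}")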
β/(1 − α) and (1 − β)/α

L = (θd/θs)^n exp[−(1/θs − 1/θd) Σ (i = 1 to n) ti]
where ti = time to failure of the ith unit tested and n = number tested.
If L > (1 − β)/α, the test is failed.
If L < β/(1 − α), the test is passed.
If β/(1 − α) ≤ L ≤ (1 − β)/α, the test should be continued.
nb – h1 and nb + h2
where n = number tested; b = (1/D) ln(θd/θs); D = 1/θs − 1/θd; h1 = (1/D) ln[(1 − β)/α]; and h2 = (1/D) ln[(1 − α)/β].
Let ti equal the time to failure for the ith item. Make conclusions based on the following:
If Σti ≥ nb + h2, the test is passed.
If Σti < nb − h1, the test is failed.
If nb − h1 ≤ Σti < nb + h2, continue the test.
EXAMPLE
Assume you are interested in testing a new product to see whether it meets a specified
MTBF of 500 hours with a consumer’s risk of 0.10. Further, specify a design MTBF
of 1000 hours for a producer’s risk of 0.05. Run tests to determine whether the
product meets the criteria.
D = 1/θs − 1/θd = (1/500) − (1/1000) = 0.001

Then calculate

h1 = (1/D) ln[(1 − β)/α] = (1/0.001) ln[(1 − 0.10)/0.05] ≈ 2890

h2 = (1/D) ln[(1 − α)/β] = (1/0.001) ln[(1 − 0.05)/0.10] ≈ 2251

b = (1/D) ln(θd/θs) = (1/0.001) ln(1000/500) ≈ 693
Using these results, we can determine at which points we can make a decision, by
using the following:
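A minimal sketch of how the decision boundaries nb − h1 and nb + h2 would be applied follows; the accumulated test hours used in the call are our own illustration, not results from the text.

import math

# Sketch of the sequential (exponential) MTBF test set up in the example above.
theta_s, theta_d = 500.0, 1000.0   # specified and design MTBF, hours
alpha, beta = 0.05, 0.10           # producer's and consumer's risks

D = 1.0 / theta_s - 1.0 / theta_d
b = math.log(theta_d / theta_s) / D
h1 = math.log((1 - beta) / alpha) / D
h2 = math.log((1 - alpha) / beta) / D
print(f"b = {b:.0f}, h1 = {h1:.0f}, h2 = {h2:.0f}")   # ~693, ~2890, ~2251

def decide(total_hours, n_failures):
    """Accept/reject/continue decision after n_failures with total_hours accumulated."""
    if total_hours >= n_failures * b + h2:
        return "pass"
    if total_hours < n_failures * b - h1:
        return "fail"
    return "continue testing"

print(decide(total_hours=3100, n_failures=1))   # illustrative numbers only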
[Figure 7.2: series reliability block diagram, components R1, R2, and R3 in series.]
RELIABILITY VISION
Reliability is valued by the organization and is a primary consideration in all decision
making. Reliability techniques and disciplines are integrated into system and com-
ponent planning, design, development, manufacturing, supply, delivery, and service
processes. The reliability process is tailored to fit individual business unit require-
ments and is based on common concepts that are focused on producing reliable
products and systems, not just components.
1. A typical series block diagram is shown in Figure 7.2 with each of the
three components having R1, R2, and R3 reliability respectively.
[Figure: parallel reliability block diagram, components R1, R2, and R3 in parallel.]
EXAMPLE
If the reliability for R1 = .80, R2 = .99, and R3 = .99, the system reliability is: Rtotal =
(.80)(.99)(.99) = .78. Please notice that the total reliability is no more than the weakest
component in the system. In this case, the total reliability is less than R1.
EXAMPLE
If the reliability for R1 = .80, R2 = .90, and R3 = .99, the system reliability is: Rtotal =
1 – [(1 – .80)(1 – .90)(1 – .99)] = .9998. Please notice that the total reliability is more
than that of the strongest component in the system. In this case, the total reliability
is more than the R3.
3. Complex reliability block diagrams show systems that combine both series
and parallel situations. A typical complex system is shown in Figure 7.4.
The system reliability for this system is calculated in two steps:
Step 1. Calculate the parallel reliability.
Step 2. Calculate the series reliability — which becomes the total reli-
ability.
EXAMPLE
If the reliability for R1 = .80, R2 = .90, R3 = .95, R4 = .98, and R5 = .99, what is
the total reliability for the system?
[Figure 7.4: complex reliability block diagram; R1, R2, and R5 in series with a parallel pair R3 and R4.]
Step 1. The parallel reliability for R3 and R4 is 1 − (1 − .95)(1 − .98) = .999.
Step 2. The series reliability for R1, R2, (R3 & R4), and R5 is then (.80)(.90)(.999)(.99) = .712.
Please notice that the parallel combination was first converted into a single reliability value,
and that is why it enters the series calculation as a single value.
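These block-diagram calculations can be sketched in a few lines; the helper functions below are ours.

from functools import reduce

# Sketch of the block-diagram calculations in the examples above.
def series(*r):
    return reduce(lambda a, b: a * b, r)

def parallel(*r):
    return 1.0 - reduce(lambda a, b: a * b, (1.0 - x for x in r))

# Series example: R1 = .80, R2 = .99, R3 = .99
print(f"series   = {series(0.80, 0.99, 0.99):.2f}")     # ~0.78

# Parallel example: R1 = .80, R2 = .90, R3 = .99
print(f"parallel = {parallel(0.80, 0.90, 0.99):.4f}")   # 0.9998

# Complex example: R3 and R4 in parallel, in series with R1, R2, and R5
r34 = parallel(0.95, 0.98)
print(f"complex  = {series(0.80, 0.90, r34, 0.99):.3f}")  # ~0.712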
1. Gather the failure data (it can be in miles, hours, cycles, number of parts
produced on a machine, etc.), then list in ascending order. For example:
We conduct an experiment and the following failures (sample size of 10
failures) are identified (actual hours to failure): 95, 110, 140, 165, 190,
205, 215, 265, 275, and 330.
2. Using the table of median ranks (Table 7.2), find the column correspond-
ing to the number of failures in the sample tested. In our example we
have a sample size of ten, so we use the “sample size 10” column. The
“% Median Ranks” are then read directly from the table.
3. Match the hours (or some other failure characteristic that is measured)
with the median ranks from the sample size selected. For example:
Actual Hours to Failure | % Median Ranks (sample size of 10 failures)
95 | 6.7
110 | 16.2
140 | 25.9
165 | 35.5
190 | 45.2
205 | 54.8
215 | 64.5
265 | 74.1
275 | 83.8
330 | 93.3
4. In constructing the Weibull plot, label the “Life” on the horizontal log
scale on the Weibull graph in the units in which the data were measured.
Try to center the life data close to the center of the horizontal scale
(Figure 7.5).
5. Plot each pair of “actual hours to failure” (on the horizontal logarithmic
scale) and “% median rank” (on the vertical axis, which is a log-log scale)
on the graph. The matching points are shown as dots (“ •s”) on Figure 7.5.
Draw a “line of best fit” (generally a straight line) as close to the data
pairs as possible. Half the data points should be on one side of the line,
and the other half should be on the other side. No two people will generate
the exact same line, but analysts should keep in mind that this is a visual
estimate. (If the line is computer generated, it is actually calculated based
on the “best fit” regression line.)
6. After the line of “best fit” is drawn, the life at a specific point can be
found by going vertically to the “Weibull line” and then horizontally to
the “Cumulative % Failed.” In other words, this is the percent that is
expected to fail at the life that was selected. In the example, 100 was
selected as the life, then going up to the line and then across, we can see
the expected % failed to be 10%. In this case, the life at 100 hours is also
known as the B10 life (or 90% reliability) and is the value at which we
would expect 10% of the parts to fail when tested under similar conditions.
(Please note that there is nothing secret about the B10 life. Any Bx life can
be identified. It just happens that the B10 is the conventional life that most
engineers are accustomed to using.) In addition, we can plot the 5% and
the 95% confidence using Tables 7.3 and 7.4 respectively.
The confidence lines are drawn for our example in Figure 7.5. The reader
will notice that the confidence lines are not straight. That is because as we
move in the fringes of the reliability we are less confident about the results.
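A minimal sketch of the same analysis follows, assuming Python; Bernard's approximation is used for the median ranks and a least-squares fit stands in for the hand-drawn line, so the results will differ slightly from values read off the plot.

import math

# Sketch of the complete-data Weibull analysis above: median ranks for n = 10 and a
# least-squares line in place of the hand-drawn fit.
hours = [95, 110, 140, 165, 190, 205, 215, 265, 275, 330]
n = len(hours)

# Bernard's approximation to the median ranks, (i - 0.3)/(n + 0.4)
ranks = [(i - 0.3) / (n + 0.4) for i in range(1, n + 1)]

x = [math.log(t) for t in hours]
y = [math.log(-math.log(1.0 - r)) for r in ranks]
sx, sy = sum(x), sum(y)
beta = (n * sum(a * b for a, b in zip(x, y)) - sx * sy) / (n * sum(a * a for a in x) - sx * sx)
theta = math.exp(-(sy - beta * sx) / (n * beta))

b10 = theta * (-math.log(0.90)) ** (1.0 / beta)   # life at which 10% have failed
print(f"beta = {beta:.2f}, theta = {theta:.0f} h, B10 = {b10:.0f} h")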
[Figure 7.5: Weibull plot of the example data on Weibull probability paper. The horizontal axis is life (log scale); the vertical axis is cumulative percent failed; an auxiliary scale gives the Weibull slope. The best-fit line and the 5% and 95% confidence curves are shown.]
TABLE 7.3
Five Percent Rank Table
Sample Size (n)
j 1 2 3 4 5 6 7 8 9 10
1 5.000 2.532 1.695 1.274 1.021 0.851 0.730 0.639 0.568 0.512
2 22.361 13.535 9.761 7.644 6.285 5.337 4.639 4.102 3.677
3 36.840 24.860 18.925 15.316 12.876 11.111 9.775 8.726
4 47.237 34.259 27.134 22.532 19.290 16.875 15.003
5 54.928 41.820 34.126 28.924 25.137 22.244
6 60.696 47.820 40.031 34.494 30.354
7 65.184 52.932 45.036 39.338
8 68.766 57.086 49.310
9 71.687 60.584
10 74.113
j 11 12 13 14 15 16 17 18 19 20
1 0.465 0.426 0.394 0.366 0.341 0.320 0.301 0.285 0.270 0.256
2 3.332 3.046 2.805 2.600 2.423 2.268 2.132 2.011 1.903 1.806
3 7.882 7.187 6.605 6.110 5.685 5.315 4.990 4.702 4.446 4.217
4 13.507 12.285 11.267 10.405 9.666 9.025 8.464 7.969 7.529 7.135
5 19.958 18.102 16.566 15.272 14.166 13.211 12.377 11.643 10.991 10.408
6 27.125 24.530 22.395 20.607 19.086 17.777 16.636 15.634 14.747 13.955
7 34.981 31.524 28.705 26.358 24.373 22.669 21.191 19.895 18.750 17.731
8 43.563 39.086 35.480 32.503 29.999 27.860 26.011 24.396 22.972 21.707
9 52.991 47.267 42.738 39.041 35.956 33.337 31.083 29.120 27.395 25.865
10 63.564 56.189 50.535 45.999 42.256 39.101 36.401 34.060 32.009 30.195
11 76.160 66.132 58.990 53.434 48.925 45.165 41.970 39.215 36.811 34.693
12 77.908 68.366 61.461 56.022 51.560 47.808 44.595 41.806 39.358
13 79.418 70.327 63.656 58.343 53.945 50.217 47.003 44.197
14 80.736 72.060 65.617 60.436 56.112 52.420 49.218
15 81.896 73.604 67.381 62.332 58.088 54.442
16 82.925 74.988 68.974 64.057 59.897
17 83.843 76.234 70.420 65.634
18 84.668 77.363 71.738
19 85.413 78.389
20 86.089
TABLE 7.4
Ninety-five Percent Rank Table
Sample Size (n)
j 1 2 3 4 5 6 7 8 9 10
1 95.000 77.639 63.160 52.713 45.072 39.304 34.816 31.234 28.313 25.887
2 97.468 86.465 75.139 65.741 58.180 52.070 47.068 42.914 39.416
3 98.305 90.239 81.075 72.866 65.874 59.969 54.964 50.690
4 98.726 92.356 84.684 77.468 71.076 65.506 60.662
5 98.979 93.715 87.124 80.710 74.863 69.646
6 99.149 94.662 88.889 83.125 77.756
7 99.270 95.361 90.225 84.997
8 99.361 95.898 91.274
9 99.432 96.323
10 99.488
j 11 12 13 14 15 16 17 18 19 20
1 23.840 22.092 20.582 19.264 18.104 17.075 16.157 15.332 14.587 13.911
2 36.436 33.868 31.634 29.673 27.940 26.396 25.012 23.766 22.637 21.611
3 47.009 43.811 41.010 38.539 36.344 34.383 32.619 31.026 29.580 28.262
4 56.437 52.733 49.465 46.566 43.978 41.657 39.564 37.668 35.943 34.366
5 65.019 60.914 57.262 54.000 51.075 48.440 46.055 43.888 41.912 40.103
6 72.875 68.476 64.520 60.928 57.744 54.835 52.192 49.783 47.580 45.558
7 80.042 75.470 71.295 67.497 64.043 60.899 58.029 55.404 52.997 50.782
8 86.492 81.898 77.604 73.641 70.001 66.663 63.599 60.784 58.194 55.803
9 92.118 87.715 83.434 79.393 75.627 72.140 68.917 65.940 63.188 60.641
10 96.668 92.813 88.733 84.728 80.913 77.331 73.989 70.880 67.991 65.307
11 99.535 96.954 93.395 89.595 85.834 82.223 78.809 75.604 72.605 69.805
12 99.573 97.195 93.890 90.334 86.789 83.364 80.105 77.028 74.135
13 99.606 97.400 94.315 90.975 87.623 84.366 81.250 78.293
14 99.634 97.577 94.685 91.535 88.357 85.253 82.269
15 99.659 97.732 95.010 92.030 89.009 86.045
16 99.680 97.868 95.297 92.471 89.592
17 99.699 97.989 95.553 92.865
18 99.715 98.097 95.783
19 99.730 98.193
20 99.744
1. Gather the failure and suspended-item data; then, including the suspended items, list them in ascending order.
Item | Hours | Status(a)
1 | 95 | F1
2 | 110 | F2
3 | 140 | F3
4 | 165 | F4
5 | 185 | S1
6 | 190 | F5
7 | 205 | F6
8 | 210 | S2
9 | 215 | F7
10 | 265 | F8
11 | 275 | F9
12 | 330 | F10
13 | 350 | S3
(Sample size 13: 10 failures, 3 suspensions)
a Code items as failed (F) or suspended (S).
2. Calculate the mean order number of each failed unit. The mean order numbers before the first suspended item are the respective item numbers in the order of occurrence, i.e., 1, 2, 3, and 4. After a suspension, a new increment is calculated as
increment = (N + 1 − previous mean order number) / (1 + number of items remaining beyond the suspended item)
where N is the total sample size. After S1, the increment is (13 + 1 − 4)/(1 + 8) = 1.111, so F5 = 4 + 1.111 = 5.111 and F6 = 5.111 + 1.111 = 6.222. After S2, the increment is (13 + 1 − 6.222)/(1 + 5) = 1.296. Then, the mean order number of F7 (the seventh failed item) is 6.222 + 1.296 = 7.518 (and so on for F8, F9, and F10). The new increment applies to all subsequent mean order numbers:
Item | Hours | Status | Mean Order Number
1 | 95 | F1 | 1
2 | 110 | F2 | 2
3 | 140 | F3 | 3
4 | 165 | F4 | 4
5 | 185 | S1 | -
6 | 190 | F5 | 5.111
7 | 205 | F6 | 6.222
8 | 210 | S2 | -
9 | 215 | F7 | 7.518
10 | 265 | F8 | 8.815
11 | 275 | F9 | 10.111
12 | 330 | F10 | 11.407
13 | 350 | S3 | -
3. A rough check on the calculations can be made by adding the last incre-
ment to the final mean order number. If the value is close to the total
sample size, the numbers are correct. In our example, 11.407 + [11.407 –
10.111] = 11.407 + 1.296 = 12.703, which is a close approximation to
the sample size of 13.
4. Using the table of median ranks for a sample size of 13 we can determine
the median rank for the first four failures, or we can use the approximate
median rank formula.
Using the approximation, median rank ≈ (mean order number − 0.3)/(N + 0.4):
(5.111 − 0.3)/(13 + 0.4) = 0.359
(6.222 − 0.3)/(13 + 0.4) = 0.442
(7.518 − 0.3)/(13 + 0.4) = 0.539
and so on.
Item | Hours | Status | Mean Order Number | Median Rank (%)
1 | 95 | F1 | 1 | 5.2
2 | 110 | F2 | 2 | 12.6
3 | 140 | F3 | 3 | 20.0
4 | 165 | F4 | 4 | 27.5
5 | 185 | S1 | - | -
6 | 190 | F5 | 5.111 | 35.9
7 | 205 | F6 | 6.222 | 44.2
8 | 210 | S2 | - | -
9 | 215 | F7 | 7.518 | 53.9
10 | 265 | F8 | 8.815 | 63.5
11 | 275 | F9 | 10.111 | 73.2
12 | 330 | F10 | 11.407 | 82.9
13 | 350 | S3 | - | -
5. Label the “Life” on the horizontal log scale on the Weibull graph in the
units in which the data were measured. Try to center the life data close
to the center of the horizontal scale.
6. Plot each pair of “actual hours to failure” (on the horizontal scale) and
“% median rank” (on the vertical scale) on the graph. Draw a “line of
best fit” (generally a straight line) as close to the data pair as possible.
Half the data points should be on one side of the line, and the other half
should be on the other side.
7. Once the line is drawn, the life at a specific point can be found by going
vertically to the “Weibull line” then going horizontally to the “Cumulative
% failed.” In other words, this is the percent that is expected to fail at the
life that was selected. In the example, 200 hours was selected as the life,
then going up to the line and then across, we can see the expected %
failed to be 40%.
8. Other reliability parameters that can be read from the Weibull plot are:
MTBF = 240 hours
B10 = 105 hours
β (Weibull slope) = 2.5
Reliability at 100 hours is 1 – 0.09 = 0.91 reading from the graph, or using
the Weibull equation
R = e^(−(t/MTBF)^β) = e^(−(100/240)^2.5) = 0.9038
9. Comparing the two examples shows that the analysis with suspended items
results in slightly higher reliability characteristics, even though it uses the same
failure data plus the three suspended items.
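A minimal sketch of the suspended-items bookkeeping follows; it reproduces the mean order numbers and (approximate) median ranks tabulated above.

# Sketch of the suspended-items bookkeeping above (mean order numbers and
# approximate median ranks for N = 13).
data = [(95, "F"), (110, "F"), (140, "F"), (165, "F"), (185, "S"), (190, "F"),
        (205, "F"), (210, "S"), (215, "F"), (265, "F"), (275, "F"), (330, "F"),
        (350, "S")]
N = len(data)

order, increment = 0.0, 1.0
for position, (hours, status) in enumerate(data, start=1):
    if status == "S":
        # new increment applies to every failure after this suspension
        increment = (N + 1 - order) / (1 + (N - position))
        continue
    order += increment
    median_rank = (order - 0.3) / (N + 0.4)
    print(f"{hours:>4} h  mean order = {order:6.3f}  median rank = {median_rank:5.1%}")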
4. One of the advantages of using the Weibull is that it is very flexible in its
interpretations. A wealth of information can be derived from it. If the
Weibull slope is equal to one, the distribution is the same as the exponen-
tial, or a constant failure rate. If the slope is in the vicinity of 3.5, it is a
“near normal distribution.” If the slope is greater than one, the plot starts
to represent a wear out distribution, or an increasing hazard rate. A slope
less than one generally indicates a decreasing hazard rate, or an infant
mortality distribution.
5. Analysts should be careful about extrapolating beyond the data when
making predictions. Remember that the failure points fall within certain
bounds and that the analyst should have a valid reason when venturing
beyond these bounds. When making projections over and above these
confines, sound engineering judgment, statistical theory, and experience
should all be taken into consideration.
6. The three-parameter Weibull is a distribution with non-zero minimum life.
This means that the population of products goes for an initial period of
time without failure. The reliability function for the three-parameter
Weibull is given by
R(t) = e^(−[(t − δ)/(θ − δ)]^β), t ≥ δ
Solving for the life at a given reliability R:
t = δ + (θ − δ) × [ln(1/R)]^(1/β)
so that, for example,
B10 = δ + (θ − δ) × [ln(1/0.90)]^(1/β)
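A short sketch of these three-parameter Weibull relationships follows; δ, θ, and β below are assumed illustrative values, not figures from the text.

import math

# Sketch of the three-parameter Weibull relationships above; delta, theta, and beta
# are assumed illustrative values.
delta, theta, beta = 50.0, 300.0, 2.0   # minimum life, characteristic life, slope

def reliability(t):
    return math.exp(-(((t - delta) / (theta - delta)) ** beta)) if t >= delta else 1.0

def life_at_reliability(R):
    # t = delta + (theta - delta) * [ln(1/R)]^(1/beta)
    return delta + (theta - delta) * (math.log(1.0 / R)) ** (1.0 / beta)

print(f"R(200 h) = {reliability(200):.3f}")
print(f"B10 life = {life_at_reliability(0.90):.0f} h")   # life at which 10% have failed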
DESIGN OF EXPERIMENTS
IN RELIABILITY APPLICATIONS
Certainly we can use DOE in passive observation of the covariates in the tested
components. We can also use DOE in directed experimentation as part of our
reliability improvement. Covariates are usually called factors in the experimentation
framework. Two main technical problems arise in the reliability area, however, when
standard methods of experimental design are employed.
1. Failure time data are rarely normally distributed, so standard analysis tools
that rely on symmetry, e.g., normal plots, do not work too well.
2. Censoring: many units are removed from test, or are still running when the test ends, so their failure times are only partially known.
Eventually this process will converge, i.e., the predictions for the fail times of
the censorings will stop changing from one iteration to the next. If necessary, the
process can be tried with several model choices for step 1. In fact, the algorithm of
the five steps leads to the same results as maximum likelihood estimation.
RELIABILITY IMPROVEMENT
THROUGH PARAMETER DESIGN
Two special categories of covariates in any parameter design are design parameters
(or control factors) and error variables (or noise factors). The terms in parenthesis
are the equivalent terms within the context of robustness, which we already have
discussed in Volume V of this series.
The achievement of higher reliability can also be viewed as an improvement to
robustness. Robustness is defined as reduced sensitivity to noise factors. In most
industries, noise factors have five main categories:
[Figure: functional measure vs. time for two control-factor settings, C1 and C2, under noise conditions N− and N+.]
The idea of experimental layouts of this type is to look for interactions between
control factors and noise factors, which lead to configurations with minimum dif-
ference between the y values.
TABLE 7.5
Department of Defense Reliability and Maintainability — Standards and
Data Items
Standard Explanation
Reliability Standards
MIL-STD-721C Definitions of Terms for Reliability & Maintainability
MIL-STD-756B Reliability Modeling & Prediction
MIL-STD-781D Reliability Testing for Engineering Development, Qualification, & Production
MIL-STD-785B Reliability Program Systems & Equipment Development & Production
MIL-STD-1543B-(USAF) Reliability Program Requirements for Space & Launch Vehicles
MIL-STD-2155-(AS) Failure Reporting Analysis & Corrective Action System
MIL-STD-2164-(EC) Environmental Stress Screening Process for Electronic Equipment
MIL-Q-9858A Quality Program Requirements
MIL-HDBK-189 Reliability Growth Management
MIL-HDBK-217F Reliability Prediction of Electronic Equipment
MIL-HDBK-781 Reliability Test Methods, Plans & Environments for Engineering
Development, Qualification & Production
DoD-HDBK-344-(USAF) Environmental Stress Screening of Electronic Equipment
Maintainability Standards
MIL-STD-470B Maintainability Program for Systems & Equipment
MIL-STD-471A Maintainability Demonstration
MIL-STD-2084-(AS) General Requirements for Maintainability
MIL-STD-2165 Testability Program for Electronic Systems & Equipment
MIL-HDBK-472 Maintainability Prediction
Computer Formats
NPRD-P Nonelectronic Parts Reliability Data (IBM PC database)
NRPS Nonoperating Reliability Prediction Software (Includes NONOP-1)
VZAP-P VZAP Data (IBM PC database)
MIL-STD-781 Reliability Test Methods, Plans, and Environments for Engineering Development,
Qualification, and Production
DI-RELI-80247 Thermal Survey Report
DI-RELI-80248 Vibration Survey Report
DI-RELI-80249 ESS Report
DI-RELI-80250 B Test Plan
DI-RELI-80251 B Test Procedures
DI-RELI-80252 B Test Report
DI-RELI-80253 Failed Item Analysis Report
DI-RELI-80254 Corrective Action Plan
DI-RELI-80255 Failure Summary and Analysis Report
MIL-STD-785 Reliability Program for Systems and Equipment Development and Production and
MIL-STD-1543 Reliability Program Requirements for Space and Launch Vehicles
DI-R-7079 R Program Plan
DI-R-7084 Elect. Parts/Circuits Tol. Analysis Report
DI-R-7086 FMECA Plan
DI-A-7088 Conference Agenda
DI-A-7089 Conference Minutes
DI-OCIC-80125 ALERT/SAFE ALERT
DI-OCIC-80126 Response to ALERT/SAFE ALERT
DI-RELI-80249 ESS Report
DI-RELI-80250 Test Plan
DI-RELI-80251 Test and Demo. Procedures
DI-RELI-80252 Test Reports
DI-RELI-80253 Failed Item Analysis Report
DI-RELI-80255 Report, Failure Summary and Analysis
DI-RELI-80685 Critical Item List
DI-RELI-80686 Allocat., Assess. & Analysis Report
DI-RELI-80687 Report, FMECA
MIL-STD-1686 ESD Control Program for Protection of Electrical and Electronic Parts, Assemblies
and Equipment
DI-RELI-80669 ESD Control Program Plan
DI-RELI-80670 Reporting Results of ESD Sensitivity Tests of Electrical & Electronic Parts
DI-RELI-80671 Handling Procedure for ESD Sensitive Items
MIL-STD-1546 Parts, Materials, and Processes Control Program for Space and Launch Vehicles
DI-A-7088 Conference Agenda
DI-A-7089 Conference Minutes
DI-MI SC-80526 Parts Control Program Plan
DI-MISC-80072 Program Parts Selection List (PPSL)
DI-MISC-80071 Part Approval Requests
Note: Only data items specified in the Contract Data Requirements List (CDRL) are deliverable.
REFERENCES
Anon., Warranty Cost Issue Hurts Chrysler, USA Today, Oct. 24, 1994, p. 3B.
ANSI/IEEE Standard 100–1988, 4th ed., IEEE Standard Dictionary of Electrical and Elec-
tronic Terms, The Institute of Electrical and Electronic Engineers, Inc., New York,
1988.
Flint, J., It Is Time To Get Realistic, Ward’s AutoWorld, Oct. 2001, p. 21.
Mayne, E. et al., Quality Crunch, Ward’s AutoWorld, July 2001, pp. 14–18.
VonAlven, W.H., Ed., Reliability Engineering, Prentice Hall, Inc., Englewood Cliffs, NJ, 1964.
SELECTED BIBLIOGRAPHY
Aitken, M., A note on the regression analysis of censored data, Technometrics, 23, 161–163,
1981.
Box, G.E.P. and Meyer, R.D., Finding the active factors in fractionated screening experiments,
Journal of Quality Technology, 25, 94–105, 1993.
Cox, D.R. and Oakes, D., Analysis of Survival Data, Chapman Hall, London, 1984.
Grove, D.M. and Davis, T.P., Engineering, Quality, and Experimental Design, Longman,
Harlow, England, 1992.
Hamada, M. and Wu, C.F.J., Analysis of censored data from highly fractionated experiments.
Technometrics, 33, 25–3, 1991.
Hamada, M. and Wu, C.F.J., Analysis of designed experiments with complex aliasing, Journal
of Quality Technology, 23, 130–137, 1992.
Kalbfleisch, J.D. and Prentice, R.L., The Statistical Analysis of Failure Time Data, Wiley,
New York, 1980.
Kapur, K.C. and Lamberson, L.R., Reliability in Engineering Design, Wiley, New York, 1977.
Kececioglu, D., Reliability Engineering Handbook, Vols. 1 and 2, Prentice Hall, Englewood
Cliffs, NJ, 1991.
Lawless, J. F., Statistical Models and Methods for Lifetime Data, Wiley, New York, 1982.
McCormick, N.J., Reliability and Risk Analysis, Academic Press, New York, 1981.
Nelson, W., Theory and applications of hazard plotting for censored failure data, Technomet-
rics, 14, 945–966, 1972.
Schmee, J. and Hahn, G., A simple method of regression analysis with censored data. Tech-
nometrics, 21, 417–432, 1979.
Smith, R.L., Weibull regression models for reliability data, Reliability Engineering and System
Safety, 34, 55–57, 1991.
8 Reliability and Maintainability
As the world moves towards building more competitive products, it is important to
put additional emphasis on reliability and maintainability (R&M), which support
reduction of inventories and “build to schedule” targets.
The Quality Systems Requirements, Tooling & Equipment (TE) Supplement to
QS-9000 was developed by Chrysler, Ford, General Motors, and Riviera Die & Tool
to enhance quality systems while eliminating redundant requirements, facilitating
consistent terminology, and reducing costs. It is important that everyone involved
in the design or purchase of machinery be aware of this supplement and their
responsibilities as outlined in the QS-9000 process. It is also important that everyone
understand that the TE supplement defines machinery as tooling and equipment
combined. Machinery is a generic term for all hardware, including necessary oper-
ational software, which performs a manufacturing process.
The TE goal is to improve the quality, reliability, maintainability, and durability
of products through development and implementation of a fundamental quality
management system. The supplement communicates additional common system
requirements unique to the manufacturers of tooling and equipment as applied to
the QS-9000 requirements. This particular chapter will emphasize the reliability and
maintainability areas. Quality operating systems (QOS) and durability are equally
important subjects but are beyond the scope of this work. The reader is encouraged
to review Volume IV — the material on machine acceptance.
The R&M process consists of five phases that form a continuous loop. The five
phases are: (1) concept; (2) design and development; (3) machinery build and
installation; (4) machinery operation, continuous improvement, performance analy-
sis; and (5) conversion concept of next cycle. As the loop continues, each generation
of machinery improves.
In this chapter we will concentrate on the first three phases of the loop, not
because they are more important, but because they are the major focus of this
planning effort of the design for six sigma (DFSS) campaign. The last two phases
should be well documented in each organization for they are facility dependent.
OBJECTIVES
The emphasis of all R & M is focused on three objectives:
WHO’S RESPONSIBLE?
Full realization of R&M benefits requires consistent application of the process.
Simultaneous engineering (SE) teams, together with the plants and the supply base,
must align their efforts and objectives to provide quality machinery designed for
R&M. Reliability and maintainability engineering is the responsibility of everyone
involved in machinery design, as much as the collection and maintenance of oper-
ational data are the responsibility of those operating and maintaining the equipment
day to day.
The R&M process places responsibility on the groups possessing the skills or
knowledge necessary to efficiently and accurately complete a given set of tasks. It
turns out that much of the expertise is in the supply base, and as such, the suppliers
must take the lead role and responsibility in R&M efforts. The R&M process
encourages the organization and suppliers to lock into budget costs based on Life
Cycle Costing (LCC) analysis of options and cost targets. Warranty issues should
be considered in the LCC analysis so that design helps decrease excessive warranty
costs after installation. The focus places responsibility for correcting design defects
on the machinery designers.
Facility and tooling producers who practice R&M will ultimately reduce the
cost (such as warranty) of their product and will become more competitive over
time. Further, suppliers that practice R&M will qualify as QS-9000 certified, pre-
ferred, global sourcing partners. Engineers and program managers who practice and
encourage R&M will reduce operational costs over time. In doing so, they will meet
manufacturing and cost objectives for their projects or programs.
TOOLS
There are many R&M tools. The ones mentioned here are required in the Design
and Development Planning (4.4.2) section of the TE Supplement. Many others
beyond the few that are addressed here are available and can improve reliability.
Mean Time Between Failure (MTBF) is defined as the average time between
failure occurrences. It is simply the sum of the operating time of a machine divided
by the total number of failures. For example, if a machine runs for 100 hours and
breaks down four times, the MTBF is 100 divided by 4 or 25 hours. As changes are
made to the machine or process, we can measure the success by comparing the new
MTBF with the old MTBF and quantify the action that has been taken.
Mean Time to Repair (MTTR) is defined as the average time to restore machinery
or equipment to its specified conditions. This is accomplished by dividing the total
repair time by the number of failures. It is important to note that the MTTR
calculation is based on repairing one failure and one failure only. The length of time
it takes to repair each failure directly affects up-time, up-time %, and capacity. For
example, if a machine runs 100 hours and has eight failures recorded with a total
repair time of four hours, the MTTR for this machine would be four hours divided
by eight failures or .5 hours. This is the mean time it takes to repair each failure.
Fault Tree Analysis (FTA) is an effect-and-cause diagram. It is a method used
to identify the root causes of a failure mode using symbols developed in the defense
industry. The FTA is a great prescriptive method for determining the root causes
associated with failures and can be used as an alternative to the Ishikawa Fish Bone
Diagram. It complements the Machinery Failure Mode and Effects Analysis
(MFMEA) by representing the relationship of each root cause to other failure-mode
root causes. Some feel the FTA is better suited than the FMEA to providing an
understanding of the layers and relationships of causes. An FTA also aids in establishing
a troubleshooting guide for maintenance procedures. It is a top down approach.
Life Cycle Costs (LCC) are the total costs of ownership of the equipment or
machinery during its operational life. A purchased system must be supported during
its total life cycle. The importance of life cycle costs related to R&M is based on
the fact that up to 95% of the total life cycle costs are determined during the early
stages of the design and development of the equipment. The first three phases of
the equipment’s life cycle are typically identified as non-recurring costs. The remain-
ing two phases are associated with the equipment’s support costs.
TABLE 8.1
Activities in the First Three Phases of the R&M Process
Concept Design/Development Build and Installation
To determine timing for the R&M process, you may use the following procedure:
CONCEPT
BOOKSHELF DATA
Activities associated with the bookshelf data stage include:
At this point it is important to ask and answer this question: Have we collected
all of the relevant historical data from similar operations or designs and documented
them for use during the process selection and design stages?
1. Identify general life cycle costs to drive the manufacturing process selec-
tion.
2. Establish OEE targets including availability, quality, and performance
efficiency numbers that drive the manufacturing process selection.
3. Establish broad R&M target ranges that drive the manufacturing process
selection.
4. Establish manufacturing assumptions based on cycle plan, including vol-
umes and dollar targets.
5. Identify simultaneous engineering (SE) partners for project.
6. Select manufacturing process based on demonstrated performance and
expected ability to meet established targets.
7. Search for other surplus equipment to be considered for reuse.
8. If surplus machinery has not been identified for reuse, identify a supplier,
based on manufacturing process selection (evaluate R&M capability).
9. Generate detailed life cycle costing analysis on selected manufacturing
process.
At this point it is important to ask and answer these questions: Have broad, high
level R&M targets been set to drive detailed process trade-off decisions? Is the life
cycle cost analysis complete for the selected manufacturing process? Do the projec-
tions support the budget per the affordable business structure?
At this point it is important to ask and answer this question: Have specific R&M
targets been set to support the unique operating conditions and PM program objectives?
At this point it is important to ask and answer these questions: Does the R&M
plan address each project target? Is the R&M plan sufficient to meet project targets?
At this point it is important to ask and answer this question: Is the process FMEA
complete, and have causes of potentially common failure modes been addressed and
redesigned?
At this point it is important to ask and answer these questions: Is the machinery
FMEA complete, and have causes of potentially common failure modes been
addressed and redesigned? Is the data collection plan complete?
DESIGN REVIEW
Activities associated with the design review stage include:
1. Conduct machinery design review (field history, machinery FMEA, test
or build problems, R&M simulation and reliability predictions, maintain-
ability, thermal/mechanical/electrical analyses, etc.).
2. Provide R&M requirements to tier two suppliers (levels, root cause anal-
yses, standardized component applications, testing, etc.).
At this point it is important to ask and answer this question: Have the R&M
plan requirements been incorporated in the machinery design?
At this point it is important to ask and answer this question: Has the plant
maintenance department devised a maintenance plan based on expected machine
performance?
OPERATION OF MACHINERY
Activities associated with the operation of machinery stage include:
1. Implement and utilize machinery data feedback plan.
2. Implement and utilize FRACAS.
3. Evaluate PM program.
4. Update FMEA and reliability predictions.
5. Conduct reliability growth curve development and analysis.
At this point it is important to ask and answer this question: Have design practices
been documented for use by the next generation design teams? (Also note that as
the machinery begins to operate, the continuous improvement cycle phases begin to
lead the R&M effort in phases four and five.)
CONVERSION/DECOMMISSION
Conversion is one of the key elements of the investment efficiency loop. The R&M
process for reuse of equipment is very similar to that for the purchase of new equipment,
except that the existing machinery places more limitations on the concept of the new
process. The data are collected and phase one is repeated, often with more specific
direction, since the current equipment may constrain some of the other concepts.
While decommission may be the process of equipment disposal, it is necessary
to verify and record R&M data from this equipment to help identify the best design
practices. It is also important to make note of those design practices that did not
work as well as planned.
As plans for decommission become firm, it is important to generate forecasts
for equipment availability. These forecasts should then be entered into a database
for future forecasted and available machinery and equipment. Maintenance data,
including condition, operation description, and reason for availability should be
included. This will assist engineers evaluating surplus machinery and equipment for
reuse in their programs.
R(t) = e^(–t/MTBF)

where R(t) = reliability point estimate during a constant failure rate period; e = the base of
the natural logarithm, 2.718281828…; t = schedule time or mission time of the equipment
or machinery; and MTBF = mean time between failures.
Special note: This calculation may be performed only when the machine has
reached the bottom of the bathtub curve.
EXAMPLE
A water pump is scheduled (mission time) to operate for 100 hours. The MTBF for
this pump is also rated at 100 hours and the MTTR is 2 hours. The probability that
the pump will not fail during the mission is:
R(t) = e^(–t/MTBF) = e^(–100/100) = .37 or 37%

This means that the pump has a 37% chance of completing the 100-hour mission without
failure and, equivalently, a 63% chance (1 – .37) of failing during the mission.
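The pump calculation can be reproduced in a couple of lines of Python (an illustration only):

```python
import math

def reliability_constant_rate(mission_time, mtbf):
    """Reliability point estimate during the constant-failure-rate period."""
    return math.exp(-mission_time / mtbf)

r = reliability_constant_rate(100.0, 100.0)
print(round(r, 2), round(1.0 - r, 2))   # 0.37 chance of surviving, 0.63 chance of failing
```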
MTBE
Mean time between events can be calculated as:

MTBE = Total Operating Time / N

where Total Operating Time = the total scheduled production time when machinery
or equipment is powered and producing parts and N = the total number of downtime
events, scheduled and unscheduled.
EXAMPLE
The total operating time for a machine is 550 hours. In addition, the machine
experiences 2 failures, 2 tool changes, 2 quality checks, 1 preventive maintenance
meeting, and 5 lunch breaks. What is the MTBE?
MTBF
Mean time between failure is the average time between failure occurrences and is
calculated as:
MTBF = Operating Time/N
where Operating Time = scheduled production time and N = total number of failures
observed during the operating period.
EXAMPLE
If machinery is operating for 400 hours and there are eight failures, what is the
MTBF?
FAILURE RATE
Failure rate estimates the number of failures in a given unit of time, events, cycles, or
number of parts. It is the probability of failure within a unit of time. It is calculated as:

Failure Rate = Number of Failures / Total Operating Time
EXAMPLE
The failure rate of a pump that experiences one failure within an operating time
period of 2000 hours is:
This means that there is a .0005 probability that a failure will occur with every hour
of operation.
MTTR
Mean time to repair is a calculation based on one failure and one failure only. The
longer each failure takes to repair, the more the equipment’s cost of ownership goes
up. Additionally, MTTR directly affects uptime, uptime percent, and capacity. It is
calculated as:
MTTR = Σt / N

where Σt = total repair time and N = the number of failures. For example, with a total
repair time of 5 hours spread over 4 failures:

MTTR = Σt / N = 5/4 = 1.25 hours
AVAILABILITY
Availability is the measure of the degree to which machinery or equipment is in an
operable and committable state at any point in time. Availability is dependent upon
(a) breakdown loss, (b) setup and adjustment loss, and (c) other factors that may
prevent machinery from being available for operation when needed. When calculat-
ing this metric, it is assumed that maintenance starts as soon as the failure is reported.
(Special note: Think of the measurement of R&M in terms of availability. That is,
MTBF is reliability and MTTR is maintainability.) Availability is calculated as:

Availability = MTBF / (MTBF + MTTR)
EXAMPLE
What is the availability for a system that has an MTBF of 50 hours and an MTTR
of 1 hour?
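The worked questions in this section reduce to simple arithmetic. The sketch below (my own illustration) evaluates them with the formulas given above; the availability expression MTBF/(MTBF + MTTR) is the standard inherent-availability form assumed here.

```python
def mtbe(total_operating_time, events):
    return total_operating_time / events            # events = all downtime events

def mtbf(operating_time, failures):
    return operating_time / failures

def failure_rate(failures, operating_time):
    return failures / operating_time

def mttr(total_repair_time, failures):
    return total_repair_time / failures

def availability(mtbf_hours, mttr_hours):
    return mtbf_hours / (mtbf_hours + mttr_hours)   # inherent availability

print(mtbe(550, 2 + 2 + 2 + 1 + 5))   # 12 downtime events -> about 45.8 h between events
print(mtbf(400, 8))                    # 50 h
print(failure_rate(1, 2000))           # 0.0005 failures per hour
print(mttr(5, 4))                      # 1.25 h
print(availability(50, 1))             # about 0.98
```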
TABLE 8.2
Cost Comparison of Two Machines
Costs Machine A Machine B
EXAMPLE
What is the LCC for the two machines shown in Table 8.2 and which one is a better deal?
The reader should notice that before the decision is made all costs should be eval-
uated. In this case, machine A has a higher acquisition cost than machine B, but it
turns out that machine A has a lower LCC than machine B. Therefore, machine A
is the better deal.
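Because the cost figures of Table 8.2 are not reproduced here, the sketch below uses hypothetical numbers purely to illustrate the point of the example: the machine with the higher acquisition cost can still have the lower life cycle cost once operating, maintenance, and other ownership costs are added.

```python
# Hypothetical cost breakdowns (not the values from Table 8.2)
machine_a = {"acquisition": 250_000, "energy": 60_000, "maintenance": 40_000,
             "downtime": 20_000, "decommission": 5_000}
machine_b = {"acquisition": 200_000, "energy": 90_000, "maintenance": 75_000,
             "downtime": 45_000, "decommission": 5_000}

lcc_a = sum(machine_a.values())   # life cycle cost = sum of all ownership costs
lcc_b = sum(machine_b.values())
print(lcc_a, lcc_b, "Machine A" if lcc_a < lcc_b else "Machine B", "has the lower LCC")
```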
THERMAL ANALYSIS
This analysis is conducted to help the designer develop the appropriate and
applicable heat transfer design (Table 8.3). The actual analysis is conducted by following
these six steps:
EXAMPLE
The electrical enclosure is 5 ft. tall by 4 ft. deep. The surface area for this enclosure
is calculated as follows:
TABLE 8.3
Thermal Calculation Values
Thermal Calculation Values
Individual Wattage Total
Component Name Quantity Maximum Wattage
Internal
Relay 4 2.5 10.0
A18 contactor 1 1.7 1.7
A25 contactor 2 2 4.0
PS27 power supply 1 71 71.0
Monochrome monitor 1 85 85.0
Subtotal wattage 171.7
External
Servo transformer 1 450 63.0
Subtotal wattage 63.0
Total enclosure wattage 234.7
Note: The servo transformer is mounted externally and next to the enclosure. There-
fore, only 14% of the total wattage is estimated to radiate into the enclosure
Thermal rise (∆T) = Thermal resistance (θCA) cabinet to ambient × Power (W)
The thermal conductivity value is found in the catalog of the National Electrical
Manufacturing Association (NEMA).
Thus, .25 W/degree F is the thermal conductivity value for a NEMA 12 enclosure.
If the equipment inside the enclosure generates 234.7 watts, then the thermal rise is
If the ambient temperature is 100°F, then the enclosure temperature will reach
113.8°F. If the enclosure temperature is specified as 104°F, then the design exceeds
the specification by approximately 9.8°F. The enclosure must be increased in size,
the load must be reduced, or active cooling techniques need to be applied. (Special
note: Remember that a 10% rise in temperature decreases the reliability by about
50%. Also the method just mentioned in this example is not valid for enclosures that
have other means of heat dissipation such as fans, or for those made of heavier metal
or if the material were changed. This specific calculation assumes that the heat is
being radiated through convection to the outside air.)
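The enclosure check can be written as a short calculation. The sketch below follows the relation stated above (thermal rise = cabinet-to-ambient thermal resistance × power); the resistance value used here is a placeholder chosen only so the numbers match the roughly 13.8°F rise of the example, not a NEMA catalog figure.

```python
def enclosure_check(power_watts, theta_ca, ambient_f, spec_limit_f):
    """Estimate enclosure temperature from convective heat rise and compare to spec."""
    rise = theta_ca * power_watts            # thermal rise in deg F
    enclosure_temp = ambient_f + rise
    margin = spec_limit_f - enclosure_temp   # negative means the spec is exceeded
    return rise, enclosure_temp, margin

# Placeholder resistance (about 0.0588 deg F per watt) reproduces the example's numbers
print(enclosure_check(power_watts=234.7, theta_ca=0.0588, ambient_f=100.0,
                      spec_limit_f=104.0))
```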
% derating = (1 – I_T / I_S) × 100

where I_T = the total current delivered to the load and I_S = the rated supply current.
EXAMPLE
During a design review, the question arose as to whether the 24 V power supply for
a motor was adequately derated. The power supply takes 480 VAC three phase with
a 2 A circuit breaker and has a rated output of 10 A. An examination of the system
reveals that 24 V power is delivered to the load through three circuit breakers (A =
.477 A, B = .73 A, and C = 5.53 A. The total for the three circuits is therefore 6.737
A.) When these circuit breakers are combined, 11 A of current flow to the load. This
situation may not happen, but further investigation is required.
% derating = (1 – I_T / I_S) × 100 = (1 – 6.737/10.0) × 100 = 32.63%
This means that in this case the power supply will not be overloaded and the circuit
breakers are generously oversized. In other words, the circuit breakers should not be
tripped due to false triggers.
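The derating check is a one-line calculation; a sketch with the numbers from the example:

```python
def percent_derating(total_load_amps, supply_rating_amps):
    """Percent derating = (1 - I_T / I_S) x 100."""
    return (1.0 - total_load_amps / supply_rating_amps) * 100.0

branch_currents = [0.477, 0.73, 5.53]                  # measured circuit-breaker loads
print(percent_derating(sum(branch_currents), 10.0))    # about 32.6%
```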
SM = (U_STRENGTH – U_STRESS) / √(Sv² + Lv²)

where SM = safety margin; U_STRENGTH = mean strength; U_STRESS = mean stress (load);
Lv² = load variance; and Sv² = strength variance.
EXAMPLE
A robot’s arm has a mean strength of 80 kg. The maximum allowable stress applied
by the end of arm tooling is 50 kg. The strength variance is 8 kg and the stress
variance is 7 kg. What is the SM?
SM = (U_STRENGTH – U_STRESS) / √(Sv² + Lv²) = (80 – 50) / √(8² + 7²) = 2.822
(A low SM may indicate the need to assign another size robot or redesign the tooling
material.)
INTERFERENCE
Once the SM is calculated, it can be used to calculate the interference and reliability
of the components under investigation. Interference may be thought of as the overlap
between the stress and the strength distributions. In more formal terms, it is the
probability that a random observation from the load distribution exceeds a random
observation from the strength distribution. To calculate interference, we use the SM
equation and substitute the z for the SM distribution:
Z = (U_STRENGTH – U_STRESS) / √(Sv² + Lv²)
EXAMPLE
If we use the answer from the previous example (z = 2.822), we can use the z table
(in this case the tail area beyond z = 2.822 is .0024). This means that there exists a
.0024 or .24% probability of failure.
R = 1 – interference or R = 1 – α
This means that even though the strength and load distributions overlap, the probability
of failure is very low (.24%), so the reliability of the system is very high at 99.76%.
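A short sketch of the safety margin and interference calculation, using SciPy's standard normal tail probability for the z value (illustration only):

```python
import math
from scipy.stats import norm

def safety_margin(mean_strength, mean_stress, sv, lv):
    """SM = (U_STRENGTH - U_STRESS) / sqrt(Sv^2 + Lv^2), with Sv and Lv as given."""
    return (mean_strength - mean_stress) / math.sqrt(sv ** 2 + lv ** 2)

sm = safety_margin(80.0, 50.0, 8.0, 7.0)   # robot-arm example
interference = norm.sf(sm)                  # probability that load exceeds strength
reliability = 1.0 - interference
print(round(sm, 3), round(interference, 4), round(reliability, 4))  # 2.822, ~0.0024, ~0.9976
```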
TABLE 8.4
Guidelines for the Duane Model
β Recommended Actions
0 to .2 No priority is given to reliability improvement; failure data not analyzed; corrective action
taken for important failure modes, but with low priority
.2 to .3 Routine attention to reliability improvement; corrective action taken for important failure
modes
.3 to .4 Priority attention to reliability improvement; normal (typical stresses) environment utilization;
well-managed analysis and corrective action for important failure modes
.4 to .6 Eliminating failures takes top priority; immediate analysis and corrective action for all failures
MTBF = 1 / FR    and    FR = 1 / MTBF
Step 1. Collect data on the machine and calculate the cumulative MTBF value
for the machine.
Step 2. Plot the data on log–log paper. (An increasing slope indicates reliability
growth; a flat slope indicates that the machine has achieved its inherent level of
MTBF and cannot get any better.)
Step 3. Calculate the slope, using regression analysis or best fit line. Once the
slope (the beta value) is calculated, we can apply the Duane model inter-
pretation. The guidelines for the interpretation are given in Table 8.4 above.
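A sketch of steps 1 through 3 with hypothetical failure data: compute the cumulative MTBF after each failure, take logs, and estimate the slope (β) with a least-squares fit.

```python
import numpy as np

# Hypothetical failure history: cumulative operating hours at each failure
failure_times = np.array([40.0, 110.0, 220.0, 400.0, 700.0, 1100.0])
n_failures = np.arange(1, len(failure_times) + 1)
cum_mtbf = failure_times / n_failures            # cumulative MTBF after each failure

# Slope of log(cumulative MTBF) versus log(cumulative time) is the Duane growth slope
beta, intercept = np.polyfit(np.log(failure_times), np.log(cum_mtbf), 1)
print(round(beta, 2))   # compare against the guideline ranges in Table 8.4
```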
MACHINERY FMEA
Machinery FMEA is a systematic approach that applies the tabular method to aid
the thought process used by simultaneous engineering teams to identify the
machine’s potential failure modes, potential effects, and potential causes and to
develop corrective action plans that will remove or reduce the impact of the failure
modes. Perhaps the most important use of the machinery FMEA is to identify and
correct all safety issues. A more detailed discussion will be given in Chapter 6.
Total productive maintenance (TPM) — A maintenance approach in which operators
share responsibility for the routine care of their equipment and processes within
their work areas. TPM implementation vigorously benchmarks, measures, and corrects
all losses resulting from inefficiencies.
Life cycle — The sequence through which machinery and equipment pass
from conception through decommission.
Life cycle costs (LCC) — The sum of all cost factors incurred during the
expected life of machinery.
Machine condition signature analysis (MCSA) — An application that
applies mechanical signature (vibration) analysis techniques to characterize
machinery and equipment on a systems level to significantly improve reli-
ability and maintainability.
Machinery — Tooling and equipment combined. A generic term for all hard-
ware (including necessary operational software) that performs a manufac-
turing process.
Maintainability — A characteristic of design, installation, and operation,
usually expressed as the probability that a machine can be retained in, or
restored to, specified operable condition within a specified interval of time
when maintenance is performed in accordance with prescribed procedures.
Mean time between failures (MTBF) — The average time between failure
occurrences. The sum of the operating time of a machine divided by the
total number of failures. Predominantly used for repairable equipment.
Mean time to failure (MTTF) — The average time to failure for a specific
equipment design. Used predominantly for non-repairable equipment.
Mean time to repair (MTTR) — The average time to restore machinery or
equipment to specified conditions.
Overall equipment effectiveness (OEE) — Percentage of the time the
machinery is available (Availability) × how fast the machinery is running
relative to its design cycle (Performance efficiency) × percentage of the
resulting product within quality specifications (Yield).
Perishable tooling — Tooling which is consumed over time during a manu-
facturing operation.
Plant floor information system (PFIS) — An information gathering system
used on the plant floor to gather data relating to plant operations including
maintenance activities.
Predictive maintenance (PdM) — A portion of scheduled maintenance ded-
icated to inspection for the purpose of detecting incipient failures.
Preventive maintenance (PM) — A portion of scheduled maintenance ded-
icated to taking planned actions for the purpose of reducing the frequency
or severity of future failures, including lubrication, filter changes, and part
replacement dictated by analytical techniques and predictive maintenance
procedures.
Probability ratio sequential testing (PRST) — A reliability qualification test
to demonstrate if the machinery/equipment satisfies a specified MTBF
requirement and is not lower than an acceptable MTBF (MIL-STD-781).
Process — Any operation or sequence of operations that contributes to the
transformation of raw material into a finished part or assembly.
REFERENCES
Otto, K. and Wood, K., Product Design, Prentice Hall, Upper Saddle River, NJ, 2001.
SELECTED BIBLIOGRAPHY
Anon., Reliability and Maintainability Guideline for Manufacturing Machinery and Equip-
ment, M-110.2, 2nd ed., Society of Automotive Engineers, Inc., Warrendale, PA and
National Center for Manufacturing Sciences, Inc., Ann Arbor, MI, 1999.
Anon., ISO/TS16949. International Automotive Task Force. 2nd ed. AIAG. Southfield, MI,
2002.
Automotive Industry Action Group, Potential Failure Mode and Effect Analysis, 3rd ed.,
Chrysler Corp., Ford Motor Co., and General Motors. Distributed by AIAG, South-
field, MI, 2001.
Blanchard, B.S., Logistics Engineering and Management, 3rd ed., Prentice Hall, Englewood
Cliffs, NJ, 1986.
Chrysler, Ford, and GM, Quality System Requirements: QS-9000, distributed by Automotive
Industry Action Group, Southfield, MI, 1995.
Chrysler, Ford, and GM, Quality System Requirements: Tooling and Equipment Supplement,
distributed by Automotive Industry Action Group, Southfield, MI, 1996.
Creveling, C.M., Tolerance Design: A Handbook for Developing Optimal Specifications,
Addison Wesley Longman, Reading, MA, 1997.
Hollins, B. and Pugh, S., Successful Product Design, Butterworth Scientific. London, 1990.
Kapur, K.C. and Lamberson, L.R., Reliability in Engineering Design, Wiley, New York, 1977.
Nelson, W., Graphical analysis of system repair data, Journal of Quality Technology, 20,
24–35, 1988.
Stamatis, D.H., Implementing the TE Supplement to QS-9000, Quality Resources, New York,
1998.
9 Design of Experiments
SETTING THE STAGE FOR DOE
Design of Experiments (DOE) is a way to efficiently plan and structure an investi-
gatory testing program. Although DOE is often perceived to be a problem-solving
tool, its greatest benefit can come as a problem avoidance tool. In fact, it is this
avoidance that we emphasize in design for six sigma (DFSS).
This chapter is organized into nine sections. The user who is looking for a basic
DOE introduction in order to participate with some understanding in a problem-
solving group is urged to study and understand the first two sections or go back and
review Volume V of this series. The remaining sections discuss more complex topics
including problem avoidance in product and process design, more advanced exper-
imental layouts, and understanding the analysis in more detail.
1. DOE helps the responsible group plan, conduct, and analyze test programs
more efficiently.
2. DOE is an effective way to reduce cost.
Usually the term DOE brings to mind only the analysis of experimental data.
The application of DOE necessitates a much broader approach that encompasses the
total process involved in testing. The skills required to conduct an effective test
program fall into three main categories:
1. Planning/organizational
2. Technical
3. Analytical/statistical
The planning of the experiment is a critical phase. If the groundwork laid in the
planning phase is faulty, even the best analytic techniques will not salvage the
disaster. The tendency to run off and conduct tests as soon as a problem is found,
without planning the outcome, should be resisted. The benefits from up-front plan-
ning almost always outweigh the small investment of time and effort. Too often,
time and resources are wasted running down blind alleys that could have been
avoided. Section 2 of this chapter contains a more detailed discussion of planning
and the techniques used to ensure a well-planned experiment.
TABLE 9.1
One Factor at a Time
The group tests configurations containing the following combinations of the factors:
Test Level of Factor (1 and 2 Indicate the Different Levels)
Number A B C D E F G Result 1 Result 2
1 1 1 1 1 1 1 1 271.4 266.3
2 2 1 1 1 1 1 1 215.0 211.2
3 1 2 1 1 1 1 1 275.3 271.1
4 1 2 2 1 1 1 1 235.2 231.5
5 1 2 1 2 1 1 1 296.6 301.6
6 1 2 1 2 2 1 1 305.2 301.1
7 1 2 1 2 1 2 1 278.8 275.3
8 1 2 1 2 1 1 2 251.9 254.3
DOE can be a powerful tool in situations where the effect on a measured output
of several factors, each at two or more levels, must be determined. In the traditional
“one factor at a time” approach, each test result is used in a small number of
comparisons. In DOE, each test is used in every comparison. A simplified example
follows.
EXAMPLE
TABLE 9.2
Test Numbers for Comparison
Test Numbers Used to Determine: Difference
Factor Level 1 Level 2 Level 1 – Level 2
TABLE 9.3
The Group Runs Using DOE Configurations
Level of Factor
Test (1 and 2 Indicate the Different Levels)
Number A B C D E F G Result
1 1 1 1 1 1 1 1 270.7
2 1 1 1 2 2 2 2 223.8
3 1 2 2 1 1 2 2 158.2
4 1 2 2 2 2 1 1 263.1
5 2 1 2 1 2 1 2 129.3
6 2 1 2 2 1 2 1 175.1
7 2 2 1 1 2 2 1 195.4
8 2 2 1 2 1 1 2 194.6
TABLE 9.4
Comparisons Using DOE
Test Numbers Used to Determine Difference
Factor Level 1 Level 2 Level 1 – Level 2
A 1, 2, 3, 4 5, 6, 7, 8 55.4
B 1, 2, 5, 6 3, 4, 7, 8 –3.1
C 1, 2, 7, 8 3, 4, 5, 6 39.7
D 1, 3, 5, 7 2, 4, 6, 8 –25.8
E 1, 3, 6, 8 2, 4, 5, 7 –3.3
F 1, 4, 5, 8 2, 3, 6, 7 26.3
G 1, 4, 6, 7 2, 3, 5, 8 49.6
TABLE 9.5
Comparison of the Two Means
Number of Tests   Estimate at the Best Levels   Confidence Interval at 90% Confidence
For a comparison of the two methods, see Table 9.5. Half as many tests are
required using a DOE approach and the estimate at each level is better (four tests
per factor level versus two). This is almost like getting something for nothing. The
only thing that is required is that the group plan out what is to be learned before
running any of the tests. The savings in time and testing resources can be significant.
Direct benefits include reduced product development time, improved problem cor-
rection response, and more satisfied customers. And that is exactly what DFSS should
be aiming at.
This approach to DOE is also very flexible and can accommodate known or
suspected interactions and factors with more than two levels. A properly structured
experiment will give the maximum amount of information possible. An experiment
that is less well designed will be an inefficient use of scarce resources.
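The level comparisons of Table 9.4 can be recomputed directly from the design matrix and results of Table 9.3; the short Python sketch below (an illustration, not part of the original text) does exactly that.

```python
import numpy as np

# Table 9.3: L8 design (levels 1/2 for factors A-G) and the measured results
design = np.array([
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
])
results = np.array([270.7, 223.8, 158.2, 263.1, 129.3, 175.1, 195.4, 194.6])

for name, column in zip("ABCDEFG", design.T):
    level1 = results[column == 1].mean()
    level2 = results[column == 2].mean()
    print(name, round(level1 - level2, 1))   # "Level 1 - Level 2", as in Table 9.4
```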
TAGUCHI’S APPROACH
Here it is appropriate to summarize Dr. Taguchi’s approach, which is to minimize
the total cost to society. He uses the “Loss Function” (Section 4) to evaluate the
total cost impact of alternative quality improvement actions. In Dr. Taguchi’s view,
we all have an important societal responsibility to minimize the sum of the internal
cost of producing a product and the external cost the customer incurs in using the
product. The customer’s cost includes the cost of dissatisfaction. This responsibility
should be in harmony with every company’s objectives when the long-term view of
survival and customer satisfaction is considered. Profits may be maximized in the
short run by deceiving today’s customers or trading away the future.
Traditionally, the next quarter’s or next year’s “bottom line” has been the driving
force in most corporations. Times have changed, however. Worldwide competition
has grown, and customers have become more concerned with the total product cost.
In this environment, survival becomes a real issue, and customer satisfaction must
be a part of the cost equation that drives the decision process.
Dr. Taguchi uses the signal-to-noise (S/N) ratio as the operational way of incor-
porating the loss function into experimental design. Experiment S/N is analogous
to the S/N measurement developed in the audio/electronics industry. S/N is used to
ensure that designs and processes give desired responses over different conditions
of uncontrollable “noise” factors. S/N is introduced in Section 4 and developed in
examples in later sections.
There are three basic types of product design activity in Dr. Taguchi’s approach:
1. System design
2. Parameter design
3. Tolerance design
MISCELLANEOUS THOUGHTS
A tremendous opportunity exists when the basic relationships between components
are defined in equation form in the system design phase. This occurs in electrical
circuit design, finite element analysis, and other situations. In these cases, once the
equations are known, testing can be simulated on a computer and the “best” com-
ponent values and appropriate tolerances obtained. It might be argued that the true
best values would not be located using this technique; only the local maxima would
be obtained. The equations involved are generally too complex to solve to the true
best values using calculus. Determining the local best values in the region that the
experienced design engineer considers most promising is generally the best available
approach. It definitely has merit over choosing several values and solving for the
remaining ones. The cost involved is computation time, and the benefit is a robust
design using the widest possible tolerances.
Those readers who have some experience in classical statistics may wonder
about the differences between the classical and Taguchi approaches. Although there
are some operational differences, the biggest difference is in philosophical
emphasis — see Volume V of this series. Classical statistics emphasizes the pro-
ducer’s risk. This means a factor’s effect must be shown to be significantly different
from zero at a high confidence level to warrant a choice between levels. Taguchi
uses percent contribution as a way to evaluate test results from a consumer’s risk
standpoint. The reasoning is that if a factor has a high percent contribution, more
often than not it is worth pursuing. In this respect, the Taguchi approach is less
conservative than the classical approach. Dr. Taguchi uses orthogonal arrays exten-
sively in his approach and has formulated them into a “cookbook” approach that is
relatively easy to learn and apply. Classical statistics has several different ways of
designing experiments including orthogonal arrays. In some cases, another approach
may be more efficient than the orthogonal array. However, the application of these
methods may be complex and is usually left to statisticians. Dr. Taguchi also
approaches uncontrollable “noise” differently. He emphasizes developing a design
that is robust over the levels of noise factors. This means that the design will perform
at or near target regardless of what is happening with the uncontrollable factors.
Classical statistics seeks to remove the noise factors from consideration by “block-
ing” the noise factors.
In certain cases, the approaches Taguchi recommends may be more complicated
than other statistical approaches or may be questioned by classical statisticians. In
these cases, alternative approaches are presented as supplemental information at the
end of the appropriate section. Additional analysis techniques are also presented in
section supplements.
The reader is encouraged to thoroughly analyze the data using all appropriate
tools. Incomplete analysis can result in incorrect conclusions.
BRAINSTORMING
The first steps in planning a DOE are to define the situation to be addressed, identify
the participants, and determine the scope and the goal of the investigation. This
information should be written down in terms that are as specific as possible so that
everyone involved can agree on and share a common understanding and purpose.
The experts involved should pool their understanding of the subject. In a brainstorm-
ing session, each participant is encouraged to offer an opinion of which factors cause
the effect. All ideas are recorded without question or discussion at this stage. To aid
in the organization of the proposed factors, a branching (fishbone) format is often
used, where each main branch is a main aspect of the effect under investigation
(e.g., material, methods, machine, people, measurement, environment). The con-
struction of a cause-and-effect (fishbone or Ishikawa) diagram in a brainstorming
session provides a structured, efficient way to ensure that pertinent ideas are collected
and considered and that the discussion stays on track. An example of a partially
completed cause-and-effect diagram is shown in Figure 9.1.
[Figure 9.1 Partially completed cause-and-effect diagram for an engine concern, with main
branches for engine control calibration, engine control hardware, engine hardware, and
manufacturing.]
After the participants have expressed their ideas on possible causes, the factors
are discussed and prioritized for investigation. Usually, a three-level (high, moderate,
and low) rating system is used to indicate the group consensus on the level of
suspected contribution. Quite often, the rating will be determined by a simple vote
of the participants. In situations where several different areas of contributing exper-
tise are represented, participants’ votes outside of their areas of expertise may not
carry the same weight as the expert’s vote. Handling this situation becomes a man-
agement challenge for the group leader and is beyond the scope of this document —
the reader may need to review Volume II of this series.
During the brainstorming and prioritization process, the participants should
consider the following:
1. The situation — What is the present state of affairs and why are we
dissatisfied?
2. The goal — When will we be satisfied (at least in the short term)?
3. The constraints — How much time and resources can we use in the
investigation?
4. The approach — Is DOE appropriate right now or should we do other
research first?
5. The measurement technique and response — What measurement tech-
nique will be used and what response will be measured?
CHOICE OF RESPONSE
The choice of measurement technique and response is an important point that is
sometimes not given much thought. The obvious response is not always the best.
[Figure 9.2 Example of an interaction: the response plotted against the levels of one factor,
with separate lines for Factor 2 = Low and Factor 2 = High.]
As an example, consider the gap between two vehicle body panels. At first thought,
that gap could be used as the response in a DOE aimed at achieving a target gap.
However, the gap can be a symptom of more basic problems with the:
All of these must be right for the gap to be as intended. If the goal of the
experiment is to identify which of these has the biggest impact on the gap, the choice
of the gap as a response is appropriate. If the purpose is to minimize the deviation
from the target gap, the gap may not be the right response. A more basic investigation
of the factors that contribute to the underlying cause is required. Do not confuse the
symptom with the underlying causes. This thought process is very similar to the
thought process used in SPC and failure mode and effect analysis (FMEA) and
draws heavily upon the experience of experts to frame the right question. In DOE,
the choice of an improper response could result in an inconclusive experiment or in
a solution that might not work as things change due to interactions between the
factors. An interaction occurs when the change in the response due to a change in
the level of a factor is different for the different levels of a second factor. An example
is shown in Figure 9.2.
The choice of the proper response characteristic will usually result in few
interactions being significant. Since there is a limitation as to how much information
can be extracted from a given number of experiments, choosing the right response
will allow the investigation of the maximum number of factors in the minimum
number of tests without interactions between factors blurring the factor main effect.
Interactions will be discussed in more detail in Section 3. The proper setup of an
experiment is not only a statistical task. Statistics serve to focus the technical
expertise of the participating experts into the most efficient approach.
In summary, the response should:
The prioritization process continues until the most critical factors that can be
addressed within the resources of the test program are identified. The next step is
to determine:
1. Are the factors controllable or are some of them “noise” beyond our
control?
2. Do the factors interact?
3. What levels of each factor should be considered?
4. How do these levels relate to production limits or specs?
5. Who will supply the parts, machines, and testing facilities, and when will
they be available?
6. Does everyone agree on the statement of the problem, goal, approach,
and allocation of roles?
7. What kind of test procedure will be used?
When all of these questions have been answered, the person who is acting as
the statistical resource for the group can translate the answers into a hypothesis and
experimental setup to test the hypothesis. The following example illustrates how the
process can work:
EXAMPLE
A particular bracket has started to fail in the field with a higher than expected
frequency. Timothy, the design engineer, and Christine, the process engineer, are
alerted to the problem and agree to form a problem-solving team to investigate the
situation. Timothy reviews the design FMEA, while Christine reviews the process
FMEA. The information relating to the previously anticipated potential causes of
this failure and SPC charts for the appropriate critical characteristics are brought to
the first meeting. The team consists of Timothy, Christine, Cary (the machine oper-
ator), Stephen (the metallurgist), and Eric (another manufacturing engineer who has
taken a DOE course and has agreed to help the group set up the DOE).
In the first meeting, the group discussed the applicable areas from the FMEAs,
reviewed the SPC charts, and began a cause-and-effect listing for the observed failure
mode. At the conclusion of the meeting, Timothy was assigned to determine if the
loads on the bracket had changed due to changes in the components attached to it;
Christine was asked to investigate if there had been any change to the incoming
material; Stephen was asked to consider the testing procedure that should be used
to duplicate field failure modes and the response that should be measured, and all
[Figure 9.3 Cause-and-effect diagram for the effect “Bracket Breaks,” with the specific
causes labeled C1, C2, … rather than described.]
TABLE 9.6
The Test Matrix for the Seven Factors
Test Levels for Each Suspected Factor for Each of Eight Tests
Number C1 C2 C7 C11 C13 C15 C16
1 1 1 1 1 1 1 1
2 1 1 1 2 2 2 2
3 1 2 2 1 1 2 2
4 1 2 2 2 2 1 1
5 2 1 2 1 2 1 2
6 2 1 2 2 1 2 1
7 2 2 1 1 2 2 1
8 2 2 1 2 1 1 2
of the group members were asked to consider additions to the cause-and-effect list.
At the second meeting, the participants reported on their assignments and continued
constructing the cause-and-effect (C & E) diagram. Their cause-and-effect diagram
is shown in Figure 9.3 with the specific causes shown as “C1, C2, …” rather than
the actual descriptions that would appear on a real C & E diagram.
The group easily reached the consensus that seven of the potential causes were
suspected of contributing to the field problem. Eric agreed to set up the experiment
assuming two levels for each factor, and the others determined what those levels
should be to relate the experiment to the production reality. Eric returned to the group
and announced that he was able to use an L8 orthogonal array to set up the experiment
and that eight tests were all that were needed at this time. The test matrix for the
seven suspected factors is shown in Table 9.6.
Eric explained that this matrix would allow the group to determine if a difference
in test responses existed for the two levels of each factor and would prioritize the
TABLE 9.7
Test Results
Test Number Result
1 10
2 13
3 15
4 17
5 14
6 16
7 19
8 21
[Figure 9.4 Factor level plots: mean response (roughly 13 to 18) at levels 1 and 2 of each
factor C1, C2, C7, C11, C13, C15, and C16.]
within-factor differences. Since the two levels of each factor represented an actual
situation that existed in production during the time the failed parts were produced,
this information could be used to correct the problem. By now, Stephen had identified
a test procedure and response that seemed to fit the requirements outlined in this
section.
Two weeks were required to gather all the material and parts for the experiment
and to run the experiment. The test results are shown in Table 9.7. While Eric entered
the data into the computer for analysis, Timothy and Christine plotted the data to
see if anything was readily apparent. The factor level plots are shown in Figure 9.4.
When Eric finished with the computer, he reported that of all the variability observed
in the data, 53.65% was due to the change in factor C2; 33.38% was due to the
change in factor C1; and 11.92% was due to the change in factor C11. The remaining
1.04% was due to the other factors and experimental error. The large percentage
variability contribution, coupled with the fact that the differences between the levels
of the three factors are significant from an engineering standpoint, indicate that these
three factors may indeed be the culprits. The computer analysis indicated that the
best estimate for a test run at C1 = 2, C2 = 2, and C11 = 2 is 21.4. One of the eight
tests in the experiment was run at this condition and the result was 21. Two confir-
matory tests were run and the results were 11 and 20. The group then moved into a
second phase of the investigation to identify what the specs limits should be on C1,
C2, and C11. In the second round of testing, eight tests were required to investigate
three levels for each of the three factors. The setup for the second round of testing
involved an advanced procedure (idle column method) that will be presented later
in this chapter, so the example will be concluded for now.
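For readers who want to see where the reported percentages come from, the sketch below computes each factor's sum of squares and percent contribution from Table 9.6 and Table 9.7. The simple, unpooled form used here gives values close to, but not identical with, the 53.65%, 33.38%, and 11.92% quoted in the example, which presumably reflect the software's handling of the error term.

```python
import numpy as np

design = np.array([        # Table 9.6: levels of C1, C2, C7, C11, C13, C15, C16
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
])
results = np.array([10, 13, 15, 17, 14, 16, 19, 21], dtype=float)
factors = ["C1", "C2", "C7", "C11", "C13", "C15", "C16"]

total_ss = ((results - results.mean()) ** 2).sum()
for name, column in zip(factors, design.T):
    diff = results[column == 2].sum() - results[column == 1].sum()
    ss = diff ** 2 / len(results)                 # sum of squares for a two-level factor
    print(name, round(100 * ss / total_ss, 1))    # percent contribution (unpooled)
```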
TABLE 9.8
An Example of Contrasts
        Average at    Average at    Contrast
Factor  Level One     Level Two     (Level 2 Avg. – Level 1 Avg.)
C1      13.75         17.50         3.75
C2      13.25         18.00         4.75
C7      15.75         15.50         –0.25
C11     14.50         16.75         2.25
C13     15.50         15.75         0.25
C15     15.50         15.75         0.25
C16     15.50         15.75         0.25
If there is a prior belief that a factor in part 1, for instance, is significant, then a different approach
should be used. This approach is also dependent upon the structure of the situation.
The above example is presented to illustrate the point that the experimenter
should be alert for ways to test more efficiently and effectively.
MISCELLANEOUS THOUGHTS
An additional useful method of looking at the data is to plot the contrasts on
normal probability paper. For a two-level factor, the contrast is the average of all
the tests run at one level subtracted from the average of the tests run at the other
level. For the example in this section, the contrasts are shown in Table 9.8.
These contrasts are plotted on normal probability paper versus median ranks.
The values for median ranks are available in many statistics and reliability books
and are used in Weibull reliability plotting. For this example, the normal contrast
plot is shown in Figure 9.6.
To plot the contrasts on normal paper, the contrasts are ranked in numerical
order, here from –0.25 (C7) to 4.75 (C2). The contrasts are then plotted against the
median ranks or, in this case, against the rank number shown on the left margin of
the plot. Factors that are significant have contrasts that are relatively far from zero
and do not lie on a line roughly defined by the rest of the factors. These factors can
lie off the line on the right side (level 2 higher) or on the left side (level 1 higher).
In the example, two separate lines seem to be defined by the contrasts. This could
be due to either of these situations:
• C1, C2, and C11 are significant and the others are not.
• There may be one or more bad data points that occur when C1, C2, and
C11 are at one level and the other factors are set at the other level.
In this example, C1, C2, C11, and C16 were at level 2 and the other factors
were set at level 1 for run number eight. Depending upon the situation, it would be
worthwhile to either rerun that test or to investigate the circumstances that accom-
panied that test (e.g., was the test hard to run because of the factor settings or
did something else change that was not in the experiment?).
[Figure 9.6 Normal probability plot of the contrasts: contrast (roughly –1 to 5) on the
horizontal axis versus the numerical rank corresponding to median rank probability on the
vertical axis, with points labeled C7, C15, C13, C16, C11, C1, and C2.]
In the example, this
combination of factors represented the best observed outcome, and the confirmation
runs supported the results of the original test.
Plotting contrasts is a way of better understanding the data. It helps the exper-
imenter visualize what is happening with the data. Sometimes, information that
might be lost in a table of data will be crystal clear on a plot.
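A sketch of the contrast plot described above, using the values from Table 9.8 and Benard's median-rank approximation (i – 0.3)/(n + 0.4); matplotlib and SciPy are assumed to be available.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

# Contrasts (level 2 average - level 1 average) from Table 9.8
contrasts = {"C1": 3.75, "C2": 4.75, "C7": -0.25, "C11": 2.25,
             "C13": 0.25, "C15": 0.25, "C16": 0.25}

names = sorted(contrasts, key=contrasts.get)             # rank the contrasts
values = np.array([contrasts[n] for n in names])
ranks = (np.arange(1, len(values) + 1) - 0.3) / (len(values) + 0.4)

plt.plot(values, norm.ppf(ranks), "o")                   # normal probability scale
for x, y, label in zip(values, norm.ppf(ranks), names):
    plt.annotate(label, (x, y))
plt.xlabel("Contrast (level 2 average - level 1 average)")
plt.ylabel("Normal quantile of median rank")
plt.show()
```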
A factor level is one of the choices of the factor to be evaluated (e.g., if the
screw speed of a machine is the factor to be investigated, two factor levels might
be 1200 and 1400 rpm).
Investigating a larger number of levels for a factor requires more tests than
investigating a smaller number of levels. There is usually a trade-off required con-
cerning the amount of information needed from the experiment to be very confident
of the results and the time and resources available. If testing and material are cheap
and time is available, evaluate many levels for each factor. Usually, this is not the
case, and two or three levels for each factor are recommended. An exception to this
occurs when the factor is non-continuous, and several levels are of interest. Examples
of this type of factor include the evaluation of a number of suppliers, machines, or
types of material. This situation will be discussed later in this section.
The first round of testing is usually designed to screen a large number of factors.
To accomplish this in a small number of tests, two levels per factor are usually tested.
The choice of the levels depends upon the question to be addressed. If the question is
“Have we specified the right spec limits?” or “What happens to the response in the
worst possible situation?” then the choice of levels should be clear.
A more complicated question to address is “How will the distribution in pro-
duction affect the response?” As suppliers become capable of maintaining low
variability about a target value, testing at the spec limits will not give a good answer
to this question. There are at least two approaches that can be used:
The main point of this discussion is that the choice of levels is an integral part
of the experimental definition and should be carefully considered by the group setting
up the experiment.
The second and subsequent rounds of testing are usually designed to investigate
particular factors in more detail. Generally, three levels per factor are recommended.
Using two levels allows the estimation of a linear trend between the points tested.
The testing of three levels gives an indication of non-linearity of the response across
the levels tested. This non-linearity can be used in determining specification limits
to optimize the response. Although this concept will be explored in more detail in
a later section on tolerance design, its application can be illustrated as follows:
[Two sketches of response versus levels of Factor 1: one showing levels A and B, the
other showing levels C, B, and D, illustrating how testing a third level can reveal
non-linearity in the response.]
LINEAR GRAPHS
After the number of levels has been determined for each factor, the next step is to
decide which experimental setup to use. Dr. Taguchi uses a tool called “linear graphs”
to aid the experimenter in this process. Linear graphs are provided in the Appendix
of Volume V for several situations. Typical designs, however, are:
1. All factors at two levels (L4, L8, L12, L16, L32)
2. All factors at three levels (L9, L27)
3. A mix of two- and three-level factors (L18, L36)
DEGREES OF FREEDOM
In the orthogonal array designation, the number following the L indicates how many
testing setups are involved. This number is also one more than the degrees of freedom
available in the setup. Degrees of freedom are the number of pair-wise comparisons
that can be made. In comparing the levels of a two-level factor, one comparison is
made and one degree of freedom is expended. For a three-level factor, two compar-
isons are made as follows: first, compare A and B, then compare whichever is “best”
with C to determine which of the three is “best.” Two degrees of freedom are
expended in this comparison. Once the number of levels for each factor is deter-
mined, the degrees of freedom required for each factor are summed. This sum plus
one becomes the bottom limit to the orthogonal array choice.
The degrees of freedom for an interaction are determined by multiplying the
degrees of freedom for the factors involved in the interaction.
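As a rough sketch of this bookkeeping (an illustration only, not a prescribed procedure),
the following Python fragment sums the degrees of freedom for a hypothetical set of factors
and interactions and reports the smallest number of runs the chosen orthogonal array must have:

# A sketch of the degrees-of-freedom bookkeeping described above.
def minimum_runs(factor_levels, interactions=()):
    """factor_levels: dict mapping factor name -> number of levels.
    interactions: pairs of factor names whose interaction must be estimated."""
    df = sum(levels - 1 for levels in factor_levels.values())
    df += sum((factor_levels[a] - 1) * (factor_levels[b] - 1) for a, b in interactions)
    return df + 1            # degrees of freedom plus one (for the overall mean)

# Hypothetical example: four two-level factors, one three-level factor, one interaction
factors = {"A": 2, "B": 2, "C": 2, "D": 2, "E": 3}
print(minimum_runs(factors, interactions=[("A", "B")]))
# 4(1) + 1(2) + 1(1) + 1 = 8, so the array needs at least eight runs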
TABLE 9.9
L4 Setup
Column
Row 1 2 3
1 1 1 1
2 1 2 2
3 2 1 2
4 2 2 1
[Figure 9.9: the linear graph for the L4 — dots 1 and 2 joined by a line labeled 3.]
Generally, near the orthogonal array are line-and-dot figures that look a little
like “stick” drawings. These are linear graphs. The dots represent the factors that
can be assigned to the orthogonal array, and the lines represent the possible inter-
action of the two dots joined by the line. The numbers next to the dots and lines
correspond to the column numbers in the orthogonal array. For example, the linear
graph for the L4 is shown in Figure 9.9.
The interpretation of this linear graph is that if a factor is assigned to column 1
and a factor is assigned to column 2, column 3 can be used to evaluate their
interaction. If the interaction is not suspected of influencing the response, another
factor can be assigned to column 3. If no other factor remains, column 3 is left
unassigned and becomes an estimator of experimental error or non-repeatability.
This will be explained in more detail later in this chapter. The interrelationships
between the columns are such that there are many ways of writing the linear graphs.
TABLE 9.10
The L8 Interaction Table
            Column
Column    2    3    4    5    6    7
   1      3    2    5    4    7    6
   2           1    6    7    4    5
   3                7    6    5    4
   4                     1    2    3
   5                          3    2
   6                               1
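For two-level arrays, the entries of an interaction table such as Table 9.10 can be
reproduced by a simple calculation: with the levels coded 1 and 2, the interaction of two
columns follows the column whose pattern equals their modulo-2 combination. The sketch
below is an illustration of this, not the book's procedure:

# A sketch applying the modulo-2 rule to the L8 (levels coded 1 and 2).
import numpy as np

L8 = np.array([
    [1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2], [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2], [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1], [2, 2, 1, 2, 1, 1, 2],
])

def interaction_column(array, i, j):
    """Return the 1-based column whose pattern matches the interaction of columns i and j."""
    pattern = ((array[:, i - 1] - 1) ^ (array[:, j - 1] - 1)) + 1
    for k in range(array.shape[1]):
        if np.array_equal(array[:, k], pattern):
            return k + 1
    return None

print(interaction_column(L8, 1, 2))   # -> 3, as in the first row of Table 9.10
print(interaction_column(L8, 1, 4))   # -> 5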
EXAMPLE
Linear graph: dots 1 and 2 joined by a line labeled 3, 4.

            Column
Row      1    2    3    4
 1       1    1    1    1
 2       1    2    2    2
 3       1    3    3    3
 4       2    1    2    3
 5       2    2    3    1
 6       2    3    1    2
 7       3    1    3    2
 8       3    2    1    3
 9       3    3    2    1

Column Interaction Table
            Column
Column    2       3       4
   1      3, 4    2, 4    2, 3
   2              1, 4    1, 3
   3                      1, 2
[Linear graph with factor labels A, D, and B: dots 1, 2, and 4 joined by lines 3, 5,
and 6, with dot 7 unconnected.]
FIGURE 9.10 The orthogonal array (OA), linear graph (LG), and column interaction for L9.
C.
Column    1    2    3      4    5      6    7
Factor    D    B    B×D    C    C×D    A    unassigned
where B × D indicates the interaction between B and D.
D. [Linear graph: dot 1 joined to dots 2, 4, and 6 by lines 3, 5, and 7.]
E.
   Column           Four-Level
   1       2        Factor
   1       1        1
   1       2        2
   2       1        3
   2       2        4
F.
Test Columns
Number 1 2 3 4 5 6 7
1 0 0 1 1 1 1 1
2 0 0 1 2 2 2 2
3 0 0 2 1 1 2 2
4 0 0 2 2 2 1 1
5 0 0 3 1 2 1 2
6 0 0 3 2 1 2 1
7 0 0 4 1 2 2 1
8 0 0 4 2 1 1 2
G. [Linear graph involving columns 2, 3, 4, 5, 6, and 7.]
H.
   Column                   Eight-Level
   1       2       4        Factor
   1       1       1        1
   1       1       2        2
   1       2       1        3
   1       2       2        4
   2       1       1        5
   2       1       2        6
   2       2       1        7
   2       2       2        8
I. [Linear graph with line pairs 3, 4; 6, 7; 9, 10; and 12, 13 joining dots that include
1, 5, 8, and 11.]
TABLE 9.11
An L9 with a Two-Level Column
Test Columns
Number 1 2 3 4
1 1 1 1 1
2 1 2 2 2
3 1 3 3 3
4 2 1 2 3
5 2 2 3 1
6 2 3 1 2
7 1 1 3 2
8 1 2 1 3
9 1 3 2 1
Combination Method
Two two-level factors can be assigned to a single three-level column. This is done by
assigning three of the four combinations of the two two-level factors to the three-level
TABLE 9.12
Combination Method
Three-Level
Factor A Factor B Column
1 1 1
1 2 2
2 1 3
factor and not testing the fourth combination. As an example, two two-level factors
are assigned to a three-level column as in Table 9.12. Note that the combination
A2B2 is not tested. In this approach, information about the AB interaction is not available,
and many ANOVA (analysis of variance) computer programs are not able to break apart
the effect of A and B. A way of doing that manually will be presented later.
OTHER TECHNIQUES
There are other techniques for setting up an experiment that will be mentioned here
but will not be discussed in detail. The user is invited to read the chapter on pseudo-
factor design in Quality Engineering — Product and Process Design Optimization,
[Linear graph: dot 1 (idle); dots 2 and 4 joined by line 6, with lines 3 and 5 also shown.]
TABLE 9.13
Modified L8 Array
Test Columns
Number 1 2 3 4 5 6 7
1 1 0 1 0 1 1 1
2 1 0 1 0 2 2 2
3 1 0 2 0 1 2 2
4 1 0 2 0 2 1 1
5 2 0 3 0 2 1 2
6 2 0 3 0 3 2 1
7 2 0 2 0 2 2 1
8 2 0 2 0 3 1 2
by Yuin Wu and Dr. Willie Hobbs Moore or to consult with a statistician to use these
techniques.
Nesting of Factors
Occasionally, levels of one factor have meaning only at a particular level of another
factor. Consider the comparison of two types of machine. One is electrically operated
and the other is hydraulically operated. The voltage and frequency of the electrical
power source and the temperature and formulation of the hydraulic fluid are factors
that have meaning for one type of machine but not the other. These factors are nested
within the machine level and require a special setup and analysis which is discussed
in the reference given above.
Factors with large numbers of levels can be assigned to an experimental layout using
combinations of the techniques that have been covered in this chapter.
TABLE 9.14
An L8 with an L4 Outer Array
                                                   L4 (on side)
 Test              L8                                  1   2   2   1
 No.     1   2   3   4   5   6   7                     1   2   1   2
                                                       1   1   2   2
  1      1   1   1   1   1   1   1     Test Results    x1  x2  x3  x4
  2      1   1   1   2   2   2   2                     x5  x6  x7  x8
  3      1   2   2   1   1   2   2                     .   .   .   .
  4      1   2   2   2   2   1   1                     .   .   .   .
  5      2   1   2   1   2   1   2                     .   .   .   .
  6      2   1   2   2   1   2   1                     .   .   .   .
  7      2   2   1   1   2   2   1                     .   .   .   .
  8      2   2   1   2   1   1   2                     x29 x30 x31 x32
1. Control factors are the factors that are to be optimized to attain the
experimental goal.
2. Noise factors represent the uncontrollable elements of the system. The
optimum choice of control factor levels should be robust over the noise
factor levels.
3. Signal factors represent different inputs into the system for which system
response should be different. For example, if several micrometers were
to be compared, the standard thickness to be measured would be levels
of a signal factor. The optimum micrometer choice would be the one that
operated best at all the standard thicknesses. Signal factors are discussed
in more detail on pages 430–441.
Control and noise factors are usually handled differently from one another in
setting up an experiment. Control factors are entered into an orthogonal array called
an inner array. The noise factors are entered into a separate array called an outer
array. These arrays are related so that every test setup in the inner array is evaluated
across every noise setup in the outer array. As an example, consider an L8 inner
(control) array with an L4 outer (noise) array, as shown in Table 9.14.
The purpose of this relationship is to equally and completely expose the control
factor choices to the uncontrollable environment. This ensures that the optimum
factor will be robust. A signal-to-noise (S/N) ratio can be calculated for each of the
control factor array test situations. This allows the experimenter to identify the
control factor level choices that meet the target response consistently.
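A minimal structural sketch of this crossing is shown below (an illustration only; the
function run_test is a hypothetical stand-in for performing one physical test):

# A structural sketch of crossing the L8 inner array with the L4 outer array of Table 9.14.
import random

L8_inner = [
    (1, 1, 1, 1, 1, 1, 1), (1, 1, 1, 2, 2, 2, 2),
    (1, 2, 2, 1, 1, 2, 2), (1, 2, 2, 2, 2, 1, 1),
    (2, 1, 2, 1, 2, 1, 2), (2, 1, 2, 2, 1, 2, 1),
    (2, 2, 1, 1, 2, 2, 1), (2, 2, 1, 2, 1, 1, 2),
]
L4_outer = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]   # the four noise setups

def run_test(control_levels, noise_levels):
    """Hypothetical stand-in for a physical test; returns a simulated response."""
    return 20 + random.random()

# Every control-factor setup is evaluated at every noise setup
results = [[run_test(control, noise) for noise in L4_outer] for control in L8_inner]
# Each of the eight rows of `results` holds the four responses from which an
# S/N ratio is later calculated for that control-factor setup.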
MISCELLANEOUS THOUGHTS
Dr. Taguchi stresses evaluating as many main factors as possible and filling up the
available columns. If it turns out that the experimental design will result in unas-
signed columns, some column assignment schemes are better than others in a few
situations. The rationale behind these choices is that they minimize the confounding
of unsuspected two-factor interactions with the main factors. A detailed discussion
is beyond the scope of this chapter. The user is invited to read Chapter 12 of Statistics
for Experimenters, by G. Box, W. Hunter, and J.S. Hunter to learn more about this
concept.
Consider an L8 for which there are to be four two-level factors assigned. This
implies that there will be three columns that will not be assigned to a main factor.
There are 35 ways in which the four factors can be assigned to the seven columns.
The recommended assignment is to use columns 1, 2, 4, and 7 for the main factors.
The interactions to be evaluated, the linear graphs, and the column interaction table
TABLE 9.15
Recommended Factor Assignment by Column
Number
of Factors   L8 Array      L16 Array                      L32 Array
 4           1, 2, 4, 7    1, 2, 4, 8
 5           a             1, 2, 4, 8, 15                 1, 2, 4, 8, 16
 6           a             1, 2, 4, 8, 11, 13             1, 2, 4, 8, 16, 31
 7           a             1, 2, 4, 7, 8, 11, 13          1, 2, 4, 8, 15, 16, 23
 8           —             1, 2, 4, 7, 8, 11, 13, 14      1, 2, 4, 8, 15, 16, 23, 27
 9           —             a                              1, 2, 4, 8, 15, 16, 23, 27, 29
10           —             a                              1, 2, 4, 8, 15, 16, 23, 27, 29, 30
11           —             a                              1, 2, 4, 7, 8, 11, 13, 14, 16, 19, 21
12           —             a                              1, 2, 4, 7, 8, 11, 13, 14, 16, 19, 21, 22
13           —             a                              1, 2, 4, 7, 8, 11, 13, 14, 16, 19, 21, 22, 25
14           —             a                              1, 2, 4, 7, 8, 11, 13, 14, 16, 19, 21, 22, 25, 26
15           —             a                              1, 2, 4, 7, 8, 11, 13, 14, 16, 19, 21, 22, 25, 26, 28
a No recommended assignment scheme.
determine if the recommended column assignments are usable for a particular exper-
iment. The recommended column assignments are given in Table 9.15.
Some of the linear graphs may be found in the Appendix of Volume V. However,
the user will find that the linear graphs in other books and reference materials may
not make these assignments available. There are many equally valid ways that linear
graphs for the larger arrays can be constructed from the column interaction table. It
is not feasible for any one book to list all the possibilities. An excellent source is
Taguchi and Konishi (1987).
In many cases, the brainstorming group may not have a good feel for whether
interactions exist or not. In these cases, two alternatives are usually considered:
The second approach is based on the assumption that few of the interactions will
be significant and that later testing can be used to investigate them in more detail. The
reader is urged to seek statistical assistance in approaching this type of experiment.
Sometimes, the response is not related to the input factors in a linear fashion.
Testing each factor at two levels allows only a linear relationship to be defined and,
in this more complex situation, can give misleading results. A detailed statistical
analysis tool called response surface methodology can be used to investigate the
complex relationship of the input factors to the response in these cases.
All of this seems to indicate that DOEs must be lengthy and complicated when
interactions or nonlinear relationships are suspected. In most situations, time and
resources are not available to run a large experiment. Sometimes, a transformation
of the measured data or of a quantitative input factor can allow a linear model to fit
within the region covered by the input factors. The linear model requires fewer data
points than a curvilinear model and is easier to interpret. Unfortunately, unless
multiple observations are made at each inner array setup, the choice of transformation
is guided mainly by the experience of the experimenter or by trying several trans-
formations and seeing which one fits best.
The choice of the proper transformation to use is related to the choice of the
proper response. As an example, two common measures of fuel usage are “miles
per gallon” and “liters per kilometer.” With the multiplication of a constant, these
two measures are inverses of each other. A model that is linear in mi/g will be
definitely non-linear in l/km. Which measurement is correct? There is no easy
answer. The experimenter should evaluate several different transformations to deter-
mine the best model. Some transformations that are useful are:
y = Y^(1/2)              useful for count data (Poisson distributed), such as the number
                         of flaws in a painted surface
y = log(Y) or ln(Y)      useful for comparing variances
y = Y^(–1/2)
y = 1/Y
When there are several observations at each inner array test setup, either through
replication or through testing with an outer array, another guide to choosing the right
transformation can be used. For the ANOVA to work correctly, the variances at all test
points should be equal. The observed variances should be compared as follows:
1. Calculate the average (x̄) and the standard deviation (s) for each inner
array test setup.
2. Take the log or ln of each x̄ and s.
3. Plot log s (y-axis) versus log x̄ (x-axis) and estimate the slope.
4. Use the estimated slope as a rough guide to determine which transforma-
tion to use:
Slope      Transformation
0.0        no transformation
0.5        y = Y^(1/2)
1.0        y = log(Y) or ln(Y)
1.5        y = Y^(–1/2)
2.0        y = 1/Y
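A small sketch of this slope guide follows (an assumption about how one might implement
it, using the L8-by-L4 response grid of Table 9.21 later in this chapter as the data):

# A sketch of the slope guide described above.
import numpy as np

data = np.array([
    [25, 27, 30, 26], [25, 27, 21, 19],
    [18, 21, 19, 22], [26, 23, 27, 28],
    [15, 11, 12, 14], [18, 15, 17, 18],
    [20, 17, 21, 18], [19, 20, 20, 17],
])
means = data.mean(axis=1)
sds = data.std(axis=1, ddof=1)

slope, _ = np.polyfit(np.log10(means), np.log10(sds), 1)
guide = {0.0: "no transformation", 0.5: "y = Y^(1/2)",
         1.0: "y = log(Y)", 1.5: "y = Y^(-1/2)", 2.0: "y = 1/Y"}
nearest = min(guide, key=lambda k: abs(k - slope))
print(f"estimated slope = {slope:.2f}; suggested transformation: {guide[nearest]}")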
This analysis should be easy and should be pursued as a means to get the most
information out of the data.
[Figure: loss in $ versus the measured characteristic, with rework and scrap costs
incurred outside the spec limits.]
Examples of this approach will be given later in the chapter. The reader is invited
to refer to Statistics for Experimenters by G. Box, W. Hunter, and J.S. Hunter to learn
more about the use of transformations in analyzing data.
LOSS FUNCTION
This section discusses:
1. The Taguchi loss function and its cost-oriented approach to product design
2. A comparison of the loss function and the traditional approach to calcu-
lating loss
3. The use of the loss function in evaluating alternative actions
4. A comparison of the loss function and Cpk and the appropriate use of each
5. The relationship of the loss function and the signal-to-noise (S/N) calcu-
lation that Dr. Taguchi uses in design of experiments
The loss function is based on the premise that there is one best value (target) of the
characteristic that will best satisfy all customer requirements. Parts or systems that
are produced farther away from the target will not satisfy the customer as well. The
level of satisfaction decreases as the distance from the target increases. The loss
function approximates the total cost to society, including customer dissatisfaction,
of producing a part at a particular characteristic value.
Taken for a whole production run, the total cost to society is based on the
variability of the process and the distance of the distribution mean to the target.
Decisions that affect process variability and centering or the range over which the
customer will be satisfied can be evaluated using the common measurement of loss
to society.
The loss function can be used when considering the expenditure of resources.
Customer dissatisfaction is very difficult to quantify and is often ignored in the
traditional approach. Its inclusion in the decision process via the loss function
highlights a gold mine in customer-perceived quality and repeat purchases that would
be hidden otherwise. This gold mine is often available at a relatively minor expense
applied to improving the product or process.
Note: Use of the loss function implies a total system that starts with the deter-
mination of targets that reflect the greatest level of customer satisfaction. Calculation
of losses using nominals that were set using other methods may yield erroneous
results.
L(x) = k(x − m)²
where L(x) is the loss associated with producing a part at “x” value; k is a unique
constant determined for each situation; x is the measured value of the characteristic;
and m is the target of the characteristic.
When the general form is extended to a production of “n” items, the average
loss is:
L = (k/n) Σ (x − m)²
[Figure: the loss function — cost rising from zero at the target m to A0 at m − ∆ and m + ∆.]
L = k[σ² + (µ − m)²]

L(m − ∆) = k(m − ∆ − m)²
A0 = k∆²
k = A0/∆²

L(x) = (A0/∆²)(x − m)²

L = (A0/∆²)(σ² + offset²)
EXAMPLE
If the component deviates from its target of 300 units by 10 or more, the average
customer will complain, and the estimated warranty cost will be $150.00. In this case,
k = $150.00/(10 units)² = $1.50 per unit²
SPC records indicate that the process average is 295 units and the variability (standard
deviation) is eight units. The present total loss is:
L = k[σ² + (µ − 300)²]
  = $1.50[8² + (295 − 300)²]
  = $133.50 per part
Fifty thousand parts are produced per year. The total yearly loss (and opportunity
for improvement) is $6.7 million.
Situation 1
It is estimated that a redesign of the system would make the system more robust,
and the average customer would complain if the component deviated by 15 units or
more from 300. In this case:
k = $150/(15 units)² = $0.67 per unit²
L = $0.67[8² + (295 − 300)²]
  = $59.63 per part
The net yearly improvement due to redesigning the system would be:
Improvement = ($133.50 − $59.63) × 50,000
            = $3,693,500
Situation 2
A new machine is available. It is estimated that the new machine would reduce the
variability to six units and move the process average to 297 units. In this case:
L = $1.50[6² + (297 − 300)²]
  = $67.50 per part
The net yearly improvement due to using the new machine would be:
Improvement = ($133.50 − $67.50) × 50,000
            = $3,300,000
This cost should be balanced against the cost of the new machine.
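The calculations in this example can be reproduced with a few lines of code. The sketch
below is an illustration only; it evaluates L = (A0/∆²)[σ² + (µ − target)²] for the present
process and for the two situations (the text rounds k to $0.67 in Situation 1, so its
per-part figure differs slightly from the exact value printed here):

# A sketch of the loss-function comparison worked through above.
def average_loss(A0, delta, sigma, mean, target):
    """L = (A0 / delta**2) * (sigma**2 + (mean - target)**2)"""
    k = A0 / delta ** 2
    return k * (sigma ** 2 + (mean - target) ** 2)

volume = 50_000   # parts per year

present     = average_loss(A0=150.0, delta=10, sigma=8, mean=295, target=300)  # $133.50
redesign    = average_loss(A0=150.0, delta=15, sigma=8, mean=295, target=300)  # Situation 1
new_machine = average_loss(A0=150.0, delta=10, sigma=6, mean=297, target=300)  # Situation 2

for label, loss in [("present", present), ("redesign", redesign), ("new machine", new_machine)]:
    print(f"{label:12s}: ${loss:7.2f} per part, "
          f"yearly saving vs. present = ${(present - loss) * volume:,.0f}")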
From these situations, it is apparent that the quality of decisions using the loss
function is heavily dependent upon the quality of the data that goes into the loss
function. The loss function emphasizes making a decision based on quantitative total
cost data. In the traditional approach, decisions are difficult because of the unknowns
and differing subjective interpretations. The loss function approach requires inves-
tigation to remove some of the unknowns. Subjective interpretations become numeric
assumptions and analyses, which are easier to discuss and can be shown to be based
on facts.
In the smaller-the-better (STB) situation illustrated in Figure 9.14, the loss
function reduces to:
L = k [(1/n) Σ x²]
[Figure 9.14: the smaller-the-better loss function — cost versus x, rising from zero at
x = 0 to A0 at x0.]
For the larger-the-better (LTB) situation illustrated in Figure 9.15, the loss
function reduces to:
L = k [(1/n) Σ (1/x²)]
Loss function
• Provides more emphasis on the target
• Relates to customer costs
• Can be used to prioritize the effect of different processes
Cpk
• Is easier to understand and use
• Is based only on data from the process and specifications
• Is normalized for all processes
The loss function represents the type of thinking that must go into making
strategic management decisions regarding the product and process for critical char-
acteristics. Cpk is an easily used tool for monitoring actual production processes.
Figure 9.16 shows Cpk and the value of the loss function for five different cases.
In each of these cases, the specification is 20 ± 4 and the value of k in the loss
function is $2 per unit².
[Figure 9.16 — summary of the five cases:]
Case                      1       2       3       4       5
Cpk                       1       1       1       0.47    2
Loss (assume k = 2)       3.56    8.89    16      16      0.89
Both Cpk and the loss function emphasize reducing the part-to-part variability
and centering the process on target. The use of Cpk is recommended in production
areas to monitor process performance because of its ease of understanding and the clear
relationship of Cpk to the other SPC tools. Management decisions regarding the
location of distributions with small variability within a large specification tolerance
should be based on a loss function approach. (See cases 2 and 5 in Figure 9.16.)
The loss function approach should be used to determine the target value and to
evaluate the relative merits of two or more courses of action because of the emphasis
on cost and on including customer satisfaction as a factor in making basic product
and process decisions. These questions also lend themselves to the use of design of
experiments. The relationship of the loss function to the signal-to-noise DOE cal-
culations used by Dr. Taguchi will now be discussed.
SIGNAL-TO-NOISE (S/N)
Signal-to-Noise is a calculated value that Dr. Taguchi recommends to analyze DOE
results. It incorporates both the average response and the variability of the data. S/N
is a measure of the strength of the signal relative to the strength of the noise (variability). The goal
is always to maximize the S/N. S/N ratios are so constructed that if the average
response is far from the target, re-centering the response has a greater effect on the
S/N than reducing the variability. When the average response is close to the target,
reducing the variability has a greater effect. There are three basic formulas used for
calculating S/N, as shown in Table 9.16.
S/N for a particular testing condition is calculated by considering all the data
that were run at that particular condition across all noise factors. Actual analysis
techniques will be covered later.
TABLE 9.16
Formulas for Calculating S/N
Situation               Signal-to-Noise (S/N)                      Loss Function (L)
Smaller-the-better      S/N = –10 log10 [(1/n) Σ x²]               L = k [(1/n) Σ x²]
Nominal-the-best        S/N = 10 log10 [(Sm – Ve)/(n × Ve)]        L = k [σ² + (µ – m)²]
Larger-the-better       S/N = –10 log10 [(1/n) Σ (1/x²)]           L = k [(1/n) Σ (1/x²)]
where Sm = (Σ x)²/n and Ve = (Σ x² – Sm)/(n – 1)
The relationships between S/N and loss function are obvious for STB and LTB.
The expressions contained in brackets are the same. When S/N is maximized, the
loss function will be minimized. For the NTB situation, the total analysis procedure
of looking at both the raw data for location effects and S/N data for dispersion effects
parallels the loss function approach. Examples of these analysis techniques are given
in the next section. S/N is used in DOE rather than the loss function because it is
more understandable from an engineering standpoint and because it is not necessary
to compute the value of k when comparing two alternate courses of action.
S/N calculations are also used in DOE to search for “robust” factor values. These
are values around which production variability has the least effect on the response.
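A sketch of the three S/N formulas of Table 9.16 is given below (an assumption about
implementation, not the book's code); the nominal-the-best value is checked against test A
of Table 9.17, for which the table lists an NTB S/N of 3.89:

# A sketch of the three S/N formulas in Table 9.16.
import math

def sn_smaller_the_better(y):
    n = len(y)
    return -10 * math.log10(sum(v ** 2 for v in y) / n)

def sn_larger_the_better(y):
    n = len(y)
    return -10 * math.log10(sum(1 / v ** 2 for v in y) / n)

def sn_nominal_the_best(y):
    n = len(y)
    Sm = sum(y) ** 2 / n
    Ve = (sum(v ** 2 for v in y) - Sm) / (n - 1)
    return 10 * math.log10((Sm - Ve) / (n * Ve))

print(round(sn_nominal_the_best([1, 2, 4, 5]), 2))   # -> 3.89, matching test A of Table 9.17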
MISCELLANEOUS THOUGHTS
Many statisticians disagree with the use of the previously defined S/N ratios to
analyze DOE data. They do recognize the need to analyze both location effects
and dispersion (variance) effects but use other measures. Dr. George Box’s 1987
report is recommended to the reader who wishes to learn more about this disagree-
ment and some of the other methods that are available.
In brief, Dr. Box disagrees with the STB and LTB S/N calculations and finds
the NTB S/N to be inefficient. The approach that he supports is to calculate the log
(or ln) of the standard deviation of the data, log(s), at each inner array setup in place
of the S/N ratio. The log is used because the standard deviation tends to be log-
normally distributed. The raw data should be analyzed (with appropriate transfor-
mations) to determine which factors control the average of the response, and the
log(s) should be analyzed to determine which factors control the variance of the
response. From these two analyses, the experimenter can choose the combination
of factors that gives the response that best fills the requirements.
The data in Table 9.17 illustrate some of the concerns with the NTB S/N ratio. The
first three tests (A through C) have the same standard deviation but very different S/N,
TABLE 9.17
Concerns with NTB S/N Ratio
Standard NTB
Test Raw Data (4 Reps.) Deviation S/N
A 1 2 4 5 1.83 3.89
B 15 11 12 14 1.83 17.03
C 18 21 19 22 1.83 20.78
D 24 24 28.12 28.12 2.38 20.78
E 42.55 42.8 50 50 4.23 20.78
while the last three tests (C through E) have the same S/N but very different standard
deviations. The NTB S/N ratio places emphasis on getting a higher response value.
This approach might lead to difficulties in tuning the response to a specific target.
It should be noted that Taguchi does discuss other S/N measures in some of his
works that have not been widely available in English. An alternate NTB S/N ratio
is available in the computer program ANOVA-TM, which is distributed by Advanced
Systems and Designs, Inc. (ASD) of Farmington Hills, Michigan and is based on
Taguchi’s approach. This S/N ratio is:
NTBII S/N = –10 log10 (s²) = –20 log10 (s)
ANALYSIS
The purpose of this section is to:
GRAPHICAL ANALYSIS
In the example in Section 2, Timothy and Christine calculated and plotted the average
response at each factor level. Since the experimental design they used (an L8) is
orthogonal, the average at each level of a factor is equally impacted by the effect
of the levels of the other factors. This allows the graphical approach to have direct
usage. This example from section 2 is shown in Table 9.18. The factor level plots
are shown in Figure 9.17.
Factors C1, C2 and C11 clearly have a different response for each of their two
levels. The difference between levels is much smaller for the other factors. If the
TABLE 9.18
L8 with Test Results
Test Levels for Each Suspected Factor for Each of 8 Tests
Number C1 C2 C7 C11 C13 C15 C16 Test Result
1 1 1 1 1 1 1 1 10
2 1 1 1 2 2 2 2 13
3 1 2 2 1 1 2 2 15
4 1 2 2 2 2 1 1 17
5 2 1 2 1 2 1 2 14
6 2 1 2 2 1 2 1 16
7 2 2 1 1 2 2 1 19
8 2 2 1 2 1 1 2 21
[Figure 9.17: plots of the average response (scale of roughly 13 to 18) at levels 1 and 2
of factors C1, C2, C7, C11, C13, C15, and C16.]
goal of the experiment was to identify situations that minimize or maximize the
response, C1, C2 and C11 are important while the others are not.
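A short sketch of this level-average calculation is shown below (an illustration only,
using the design and responses of Table 9.18):

# A sketch of the level-average calculation for Table 9.18.
import numpy as np

design = np.array([
    [1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2], [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2], [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1], [2, 2, 1, 2, 1, 1, 2],
])
response = np.array([10, 13, 15, 17, 14, 16, 19, 21])
names = ["C1", "C2", "C7", "C11", "C13", "C15", "C16"]

for j, name in enumerate(names):
    avg1 = response[design[:, j] == 1].mean()
    avg2 = response[design[:, j] == 2].mean()
    print(f"{name:>3}: level 1 average = {avg1:5.2f}, level 2 average = {avg2:5.2f}")
# Because the array is orthogonal, each level average is equally exposed to the other
# factors' levels; C1, C2, and C11 show the largest level-to-level differences.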
Graphical analysis is a valid, powerful technique that is especially useful in the
following situations:
TABLE 9.19
ANOVA Table
Column Source df SS MS F Ratio S’ %
Once the experiment has been set up correctly, the graphical analysis can be
easily used and can point the way to improvements.
due to error. Factors that do not demonstrate much difference in response over the
levels tested have a variability that is not much different from the error estimate.
The df and SS from these factors are pooled into the error term. Pooling is done by
adding the df and SS into the error df and SS. Pooling the insignificant factors into
the error can provide a better estimate of the error.
Initially, no estimate of error was made in the L8 example because no unassigned
columns or repetitions were present. Because of this, a true estimate of the error
could not be made. However, the purpose of the experiment was to identify the
factors that have a usable difference in response between the levels. In this experi-
ment, the factors with relatively small MS were pooled and called “error.” Pooling
requires that the experimenter judge which differences are significant from an oper-
ational standpoint. This judgment is based on the prior knowledge of the system
being studied. In the example, factors C7, C13, C15, and C16 have much lower MS
than do the other factors and are pooled to construct an error estimate. The * next
to a df indicates that the df and SS for that factor were pooled into the error term.
The F ratio column contains the ratio of the MS for a source to the MS for the
pooled error. This ratio is used to statistically test whether the variance due to that
factor is significantly different from the error variance. As a quick rule of thumb, if
the F ratio is greater than three, the experimenter should suspect that there is a
significant difference. Dr. Taguchi does not emphasize the use of the F ratio statistical
test in his approach to DOE. A detailed description of the use of the F test can be
found in Box, Hunter, and Hunter (1978), and a practical explanation is included in
Volume V of this series.
In the determination of the SS of a factor, the non-repeatability of the experiment
is still present. The number in the “S′” column is an attempt to totally remove the
SS due to error and leave the “pure” SS that is due only to the source factor. The
error MS multiplied by the factor’s df is subtracted from that factor’s SS to leave the pure SS, or S′, value for
a factor. The amount that is subtracted from each non-pooled factor is then added
to the pooled error SS and the total is entered as the error S′. In this way the total
SS remains the same.
The % column contains the S′ value divided by the total SS times 100%. This
gives the percent contribution by that factor to the total variation of the data. This
information can be used directly in prioritizing the factors. In the experiment that
has been discussed, C2 makes the greatest contribution, C1 contributes less, and
C11 contributes still less. It can be argued that the graphical analysis can display
those conclusions quite well. In more complicated experiments with many factors
and factors with a large number of levels, however, the ANOVA table can display
the analysis in a more concise form and quickly lead the experimenter to the most
important factors.
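The following sketch (an assumption about implementation, not the book's code) carries out
the ANOVA bookkeeping described above for the L8 data of Table 9.18 — per-column sums of
squares, pooling of the small factors into error, F ratios, S′, and percent contribution:

# A sketch of the ANOVA bookkeeping for the Table 9.18 data.
import numpy as np

design = np.array([
    [1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2], [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2], [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1], [2, 2, 1, 2, 1, 1, 2],
])
y = np.array([10.0, 13, 15, 17, 14, 16, 19, 21])
names = ["C1", "C2", "C7", "C11", "C13", "C15", "C16"]

# SS for a two-level column: (sum at level 1 - sum at level 2)^2 / (number of data)
ss = {name: (y[design[:, j] == 1].sum() - y[design[:, j] == 2].sum()) ** 2 / len(y)
      for j, name in enumerate(names)}
total_ss = sum(ss.values())

pooled = ["C7", "C13", "C15", "C16"]        # judged insignificant and pooled into error
error_df = len(pooled)                      # one df per pooled two-level factor
error_ss = sum(ss[f] for f in pooled)
error_ms = error_ss / error_df

for name in ["C1", "C2", "C11"]:
    ms = ss[name]                           # each factor carries 1 df
    f_ratio = ms / error_ms
    s_prime = ss[name] - error_ms           # remove the error carried in the factor SS
    pct = 100 * s_prime / total_ss
    print(f"{name:>3}: SS = {ss[name]:6.3f}  F = {f_ratio:6.1f}  S' = {s_prime:6.3f}  % = {pct:4.1f}")
# The percent contributions come out in the order C2 > C1 > C11, as stated in the text.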
Assume that the second level of factor A (A2), the third level of factor B (B3), the first level of factor C (C1),
and the interaction of C1 and D1 are determined to be the best combination of
factors. An estimate of the response at these conditions can be made using the
equation:
µ̂opt = T + (A2 − T) + (B3 − T) + (C1 − T) + [(C1D1 − T) − (C1 − T) − (D1 − T)]
where T = the average response of all the data; A2 = the average of the data run at
A2; B3 = the average of the data run at B3; C1 = the average of the data run at C1;
and D1 = the average of the data run at D1.
Each factor that is a significant contributor appears in a manner similar to A2,
B3 and C1 above. The term in brackets [ ] addresses the optimum level of the CD
interaction and is an example of the way in which interactions are handled.
µ̂opt ± sqrt[ F(1, dfe, .05) × MSe × (1/ne + 1/nr) ]
where F(1, dfe, .05) is a value from an F statistical table. The F values are based on two
different degrees of freedom and the desired confidence. In this case, the first degree
of freedom is always 1 and the second is the degree of freedom of the pooled error
(dfe). The desired risk is .05 since .05 in each direction (±) sums to a 10% risk (90%
confidence). MSe is the mean square of the pooled error term; nr is the number of
confirmatory tests to be run; and ne is the effective number of replications and is
calculated as follows:
Source df
A 1
B 2
C 1
CD 1
Mean 1
Total 6
[Figure: plots of the average response at levels 1, 2, and 3 of several factors, including
a discrete factor (supplier).]
ne = 36/6 = 6.0
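A sketch of the prediction and confidence-interval calculation follows. The level averages,
MSe, dfe, and nr used here are hypothetical values for illustration only; ne = 36/6 follows
the example above:

# A sketch of the prediction and confidence-interval calculation.
from scipy.stats import f

T = 50.0                                             # hypothetical grand average
A2, B3, C1, D1, C1D1 = 53.0, 52.0, 51.5, 50.5, 54.0  # hypothetical level averages

mu_opt = (T + (A2 - T) + (B3 - T) + (C1 - T)
          + ((C1D1 - T) - (C1 - T) - (D1 - T)))      # interaction term handled as in the text

MSe, dfe = 1.2, 10      # hypothetical pooled-error mean square and degrees of freedom
ne = 36 / 6             # effective number of replications (from the example above)
nr = 3                  # hypothetical number of confirmation runs

F_crit = f.ppf(1 - 0.05, 1, dfe)                     # F(1, dfe, .05)
half_width = (F_crit * MSe * (1 / ne + 1 / nr)) ** 0.5
print(f"predicted optimum = {mu_opt:.2f} +/- {half_width:.2f}")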
TABLE 9.20
Higher Order Relationships
Levels of a Factor df Relationships
2 1 Linear
3 2 Linear, quadratic
4 3 Linear, quadratic, cubic
5 4 Linear, quadratic, cubic, quartic
etc.
TABLE 9.21
Inner OA (L8) with Outer OA (L4) and Test Results
L8 L4 Z 1 2 2 1
Test A B C D E F G (on side) Y 1 2 1 2
No. 1 2 3 4 5 6 7 X 1 1 2 2
1 1 1 1 1 1 1 1 Test Results 25 27 30 26
2 1 1 1 2 2 2 2 25 27 21 19
3 1 2 2 1 1 2 2 18 21 19 22
4 1 2 2 2 2 1 1 26 23 27 28
5 2 1 2 1 2 1 2 15 11 12 14
6 2 1 2 2 1 2 1 18 15 17 18
7 2 2 1 1 2 2 1 20 17 21 18
8 2 2 1 2 1 1 2 19 20 20 17
Control factors and noise factors were introduced in Section 3. Control factors appear
in an orthogonal array called an inner array. Noise factors that represent the uncon-
trolled or uncontrollable environment are entered into a separate array called an
outer array. The following example of an L8 inner (control) array with an L4 outer
(noise) array was first presented in Section 3. Actual responses and factor names
are added here — see Table 9.21 — in the development of the example.
This type of experimental setup and analysis evaluates each of the control factor
choices (L8 array factors) over the expected range of the uncontrollable environment
TABLE 9.22
The STB ANOVA Table
Source df SS MS F Ratio S’ %
(L4 array factors). This assures that the optimal factor levels from the L8 array will
be robust. An S/N can be calculated for each test situation. These S/N ratios are
then used in an ANOVA to identify the situation that maximizes the S/N.
Smaller-the-Better (STB)
The following S/N ratios are calculated for the STB situation using the equations
given in Section 4 and assuming that the optimum value is zero and that the responses
shown represent deviations from that target:
Test    STB S/N
1       –28.65
2       –27.32
3       –26.05
4       –28.32
5       –22.34
6       –24.63
7       –25.61
8       –25.59
The S/N ratios for testing situations are then analyzed using an ANOVA table.
The STB ANOVA table for the example is shown in Table 9.22. The ANOVA table
indicates that factors A, G, and C are the most significant contributors. Inspection
of the level averages shows that the highest S/N values (least negative), in order
of contribution, occur at A2, G2, C2, D1, B1. Estimation of the S/N at the optimal
levels can be made from the S/N level averages using the technique discussed
earlier in this section. Likewise, estimation of the raw data average response at
the optimal level can be made from the response level averages at the optimal S/N
factor levels.
TABLE 9.23
The LTB ANOVA Table
Source df SS MS F Ratio S’ %
Larger-the-Better (LTB)
The same data will be used to demonstrate the LTB notation. In this case, the
optimum value is infinity. Examples of this include strength or fuel economy. The
following S/N ratios are calculated using the LTB equation given in Section 4.
1 28.57
2 26.98
3 25.94
4 28.23
5 22.08
6 24.54
7 25.48
8 25.52
The S/N ratios for testing situations are then analyzed using an ANOVA table.
The LTB ANOVA table for the example is shown in Table 9.23. Inspection of the
ANOVA table and the level averages shows that the highest S/N values occur at A1,
G1, C1, D2, B2. Interpretation of the LTB analysis is similar to that of the STB
analysis.
Nominal-the-Best (NTB)
Analysis of the NTB experiment is a two-part process. Again, the same data will be
used to illustrate this approach. The target value will be assumed to be 24 in this case.
TABLE 9.24
The NTB ANOVA Table
Source df SS MS F Ratio S’ %
A 1* 0.193 0.193
B 1 9.618 9.618 54.339 9.441 23.10
C 1* 0.006 0.006
D 1* 0.333 0.333
E 1 17.816 17.816 100.655 17.639 43.16
F 1 2.477 2.477 13.994 2.300 5.63
G 1 10.424 10.424 58.893 10.247 25.07
Error
(pooled error) 3 0.532 0.177 1.240 3.03
Total 7 40.867 5.838
The S/N values are analyzed. The following S/N are calculated:
1 21.93
2 15.96
3 20.78
4 21.60
5 17.03
6 21.59
7 20.33
8 22.56
The S/N ratios for testing situations are then analyzed using an ANOVA table.
The NTB ANOVA table for the example is shown in Table 9.24.
The ANOVA table and the level averages indicate that E1, G1, B2, F1 are the
optimal choices from an S/N standpoint. These are the factor choices that
should result in the minimum variance of the response.
The ANOVA analysis and level averages of the raw data are then investigated
to determine if there are other factors that have significantly different
responses at their different levels but are not significant in the S/N analysis.
These factors can be used to tune the average response to the desired value
but do not appreciably affect the variability of the response. The ANOVA
table of the raw data is shown in Table 9.25. From this ANOVA table, it
can be seen that the significant contributors to the observed variability of
the data averages are the factors A, G, C, D, and F. This can be combined
with the S/N analysis and interpreted as follows:
a. Factors that influence variability only — B, E
b. Factors that influence both variability and average response — G
c. Factors that influence the average only — A, C
d. Factors that have little or no influence on either variability or average
response – D, F
TABLE 9.25
Raw Data ANOVA Table
Source df SS MS F Ratio S’ %
The results from this experiment indicate that factors B, E, and G should be set
to the levels with the highest S/N. Factor G should be set to the level with the highest
S/N rather than using it to tune the average since its relative contribution to S/N
variability is greater than its contribution to the variability of raw data. This decision
might change based on cost implications and the ability to use factors A and C to
tune the average response. Factors A and C should be investigated to determine if
they can be set to levels that will allow the target value of 24 to be attained. This
may be possible with factors that have continuous values. Factors with discrete
choices such as supplier or machine number cannot be interpolated. Factors D and
F should be set to the levels that are least expensive. A series of confirmation runs
should be made when the optimum levels have been determined. The average
response and S/N should be compared to the predicted values.
COMBINATION DESIGN
Combination design was mentioned in Section 3 as a way of assigning two two-
level factors to a single three-level column. This is done by assigning three of the
four combinations of the two two-level factors to the three-level factor and not testing
the fourth combination. As an example, two two-level factors are assigned to a three-
level column as in Table 9.26.
Note that the combination A1B2 is not tested. In this approach, information about
the A.B interaction is not available, and many ANOVA computer programs are not
able to break apart the effect of A and B.
The sum of squares (SS) in the ANOVA table that is due to factor A.B contains
both the SS due to factor A and the SS due to factor B. These two SSs are not
additive since the factors A and B are not orthogonal. This means:
TABLE 9.26
Combination Design
Three Level Column
Factor A Factor B Combined Factor (A.B)
1 1 1
2 1 2
2 2 3
SSA = (TAB1 − TAB2)² / (2 × r)
SSB = (TAB2 − TAB3)² / (2 × r)
where TAB1 = the sum of all responses run at the first level of AB; TAB2 = the sum
of all responses run at the second level of AB; TAB3 = the sum of all responses run
at the third level of AB; and r = the number of data points run at each level of AB.
The MS of A and B then can be separately compared to the error MS to determine
if either or both factors are significant. The df for both A and B is 1. If one of the
factors is significant and the other is not, the ANOVA should be rerun with the
significant factor shown with a dummy treatment and the other factor excluded from
the analysis.
EXAMPLE
A 2
B 2
C 3
D 3
E 3
A and B will be combined into a single three-level column. The test array and results
are shown in Table 9.27.
The sum of the data at each level of AB is: for AB = 1, the sum is 17 + 9 + 8 =
34; for AB = 2, the sum is 40 + 28 + 17 = 85; for AB = 3, the sum is 28 + 22 + 27 = 77.
TABLE 9.27
L9 OA with Test Results
Sum of the
A B A.B C D E Test Results Test Results
1 1 1 1 1 1 7 10 17
1 1 1 2 2 2 3 6 9
1 1 1 3 3 3 5 3 8
2 1 2 1 2 3 22 18 40
2 1 2 2 3 1 13 15 28
2 1 2 3 1 2 9 8 17
2 2 3 1 3 2 12 16 28
2 2 3 2 1 3 12 10 22
2 2 3 3 2 1 15 12 27
TABLE 9.28
ANOVA Table
Source df SS MS F Ratio S’ %
SSA = (34 − 85)² / (2 × 6)
SSA = 216.75
SSB = (85 − 77)² / (2 × 6)
SSB = 5.33
The ANOVA table is for the data shown — see Table 9.28. The decomposed SS for
A and B are shown in parentheses and are not added into the total SS.
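The decomposition of the combined column can be sketched in a few lines (an illustration
only, using the level totals of Table 9.27):

# A sketch of the decomposition of the combined A.B column.
level_totals = {1: 34.0, 2: 85.0, 3: 77.0}   # sums of the data at each level of A.B
r = 6                                        # data points at each level (3 rows x 2 repetitions)

ss_a = (level_totals[1] - level_totals[2]) ** 2 / (2 * r)
ss_b = (level_totals[2] - level_totals[3]) ** 2 / (2 * r)
print(f"SSA = {ss_a:.2f}, SSB = {ss_b:.2f}")   # SSA = 216.75, SSB = 5.33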
TABLE 9.29
Second Run of ANOVA
Source df SS MS F Ratio S’ %
The F ratio for factor B indicates that the effect of the change in factor B on the
response is insignificant. Factor B is excluded from the analysis and factor A is
analyzed with a dummy treatment. The ANOVA table for this analysis is shown in
Table 9.29. The analysis continues using the techniques described in this section.
MISCELLANEOUS THOUGHTS
The purpose of most DOEs is to predict what the response will be at the optimum
condition. Confirmatory tests should be run to assure the experimenter that the
projected results are valid. Sometimes, the results of the confirmatory tests are
significantly different from the projected results. This can be due to one or more of
the following:
The experimenter who is faced with data that does not support the prediction is
forced to ask which of these problems affected the results. It is important that all
of these problems be considered and investigated, if appropriate. If two or more of
these problems coexisted, correcting only one problem may not improve the exper-
imental results.
Even though it may seem that the experiment was a failure, that is not necessarily
true. Experimentation should be considered an organized approach to uncovering a
TABLE 9.30
L8 with Test Results and S/N Values
                 L8                          Z   1   2   2   1
Test    A   B   C   D   E   F   G            Y   1   2   1   2
No.     1   2   3   4   5   6   7            X   1   1   2   2       s    –20 log(s)
working knowledge about a situation. The “failed” experiment does provide new
knowledge about the situation that should be used in setting up the next iteration of
experimental testing.
The prior statement may sound too idealistic for the “real” world where deadlines
are very important. A failed experiment may cause some people to doubt the use-
fulness of the DOE approach and extol the virtues of traditional one-factor-at-a-time
testing. However, all of the problems listed above that could cause a DOE to fail
will also cause a one-factor-at-a-time experiment to fail. In DOE, the problem will
be found fairly early since relatively few tests are run. In one-factor-at-a-time testing,
the problem may not surface until many tests have been run, or the problem may
not even be identified in the testing program. In this case, the problem may not show
up until production or field use.
The importance of meeting real-world deadlines makes the planning stage of
the experiment critical. Proper planning, including consideration of the experience
and knowledge of experts, will enable the experimenter to avoid many of the possible
problems. Deadlines are never a good excuse for not taking the time to adequately
plan an experiment.
AN EXAMPLE
The data used to demonstrate the S/N calculations in this section will be analyzed
here using the approach NTBII S/N = –10 log10(s²) = –20 log10(s). This approach was
discussed earlier in this chapter. The data set is repeated in Table 9.30.
The S/N ratios for the testing situations are then analyzed using an ANOVA table.
The NTBII ANOVA table for the example is shown in Table 9.31. To help interpret
the ANOVA table, the level standard deviation averages and the level S/N averages
are shown for the significant factors in Table 9.32.
To give a visual impact of the spread of the data and what the above table really
means, it would be wise to plot the data for each factor level. The plots of the average
standard deviation by factor level are shown in Figure 9.20.
TABLE 9.31
ANOVA Table for Data from Table 9.30
Source df SS MS F Ratio S’ %
TABLE 9.32
Significant Figures from Table 9.31
Average
Standard
Factor Level Deviation NTBII S/N
A 1 2.36 –7.465
2 1.61 –4.120
B 1 2.12 –6.545
2 1.79 –5.039
C 1 2.12 –6.545
2 1.79 –5.039
E 1 1.67 –4.485
2 2.26 –7.099
The ANOVA table and the level average standard deviations indicate that
A2B2C2E1 are the optimal choices from an NTBII S/N standpoint. The analysis of
the raw data remains the same as shown in the chapter. The average level of the
response should be targeted using the results of the raw data analysis. This is true
regardless of whether the goal is as small as possible, as large as possible, or to meet
a specific value. The variance should be minimized by maximizing the NTBII S/N.
The experimenter must make the trade-off between the choice of factor levels that
adjust the response average and the choice of factor levels that minimize the variance
of the response.
A comparison of the results of the two methods shows clear differences. As an
example, for the situation where a specific value is targeted (NTB), the factor level
choices are: NTB — B2E1G1 to minimize variability, A and C set to achieve target;
NTBII — B2E1 to minimize variability, G set to achieve target. If the target is
attainable using factor G, use A2C2 to minimize variability; otherwise, set C and/or
A to achieve target.
[Figure 9.20: the average standard deviation (roughly 1.5 to 2.5) plotted at levels 1 and 2
of factors A, B, C, and E.]
1. Plot the data including raw and/or transformed values, level averages and
standard deviations, and any other information that seems appropriate.
One picture is worth a thousand words.
2. Analyze the data using the appropriate analysis techniques.
3. Compare the results to the data plots in order to determine which set of
results makes the most sense. Perform this comparison fairly and resist
the temptation to choose the results solely on whether they support con-
venient conclusions.
4. Run confirmation tests.
DOE is a powerful tool that can help the experimenter get the most out of scarce
testing resources. However, as with any powerful tool, care must be taken to under-
stand how to use the tool and how to interpret the results.
TABLE 9.33
Observed Versus Cumulative Frequency
Observed Frequency Cumulative Frequency
Class I 2 2
Class II 1 3
Class III 1 4
CLASSIFIED RESPONSES
Some experimental responses cannot be measured on a continuous scale although
they can be divided into sequential classes. Examples include appearance and per-
formance ratings. In these situations, three to five rating classes are generally the
optimum number because this number allows major differences in the responses to
be identified and yet does not require the rater to identify differences that are too
subtle. Two related techniques are used to analyze classified responses:
1. Classified attribute analysis is used when the total number of items rated
is the same for every test matrix setup.
2. Classified variable analysis is used when the total number of items rated
is not the same for every test matrix setup.
EXAMPLE
Three grades are used to evaluate paint appearance of a product. They are “Bad,”
“OK,” and “Good.” Seven factors (A through G), each at two levels, are evaluated
to determine the combination of factor levels that optimizes paint appearance. Five
products are evaluated at each testing situation in an L8 orthogonal array. Test results
are shown in Table 9.34.
TABLE 9.34
Attribute Test Setup and Results
Frequency in Each Grade
A B C D E F G Bad OK Good
1 1 1 1 1 1 1 2 3 0
1 1 1 2 2 2 2 3 2 0
1 2 2 1 1 2 2 4 1 0
1 2 2 2 2 1 1 0 2 3
2 1 2 1 2 1 2 0 4 1
2 1 2 2 1 2 1 1 3 1
2 2 1 1 2 2 1 0 3 2
2 2 1 2 1 1 2 0 1 4
TABLE 9.35
ANOVA Table (for Cumulative Frequency)
Source df SS MS F Ratio S’ %
The ANOVA analysis for this set of data is shown in Table 9.35. Note that the
degrees of freedom are calculated differently from the non-classified situation. The
df of each source is:
df = (the number of levels of that factor – 1) * (the number of classes – 1)
In this example, the number of levels of each factor is two and the number of classes
is three. For each factor,
df = (2 – 1) * (3 – 1) = 2
The total df = (the total number of rated items – 1) * (the number of classes – 1).
Thus, the total df for this example is:
TABLE 9.36
The Effect of the Significant Factors
Observed % Rate of Cumulative Cumulative
Frequency Occurrence (R.O.) Frequency % R.O.
Bad OK Good Bad OK Good Bad OK Good Bad OK Good
A1 9 8 3 45 40 15 9 17 20 45 85 100
A2 1 11 8 5 55 40 1 12 20 5 60 100
B1 6 12 2 30 60 10 6 18 20 30 90 100
B2 4 7 9 20 35 45 4 11 20 20 55 100
F1 2 10 8 10 50 40 2 12 20 10 60 100
F2 8 9 3 40 45 15 8 17 20 40 85 100
Total 10 19 11 25 73 100
[Figure: factor effects — cumulative rate of occurrence (%) of Bad, OK, and Good ratings
at factor levels A-1, A-2, B-1, B-2, F-1, and F-2.]
df = (40 – 1) * (3 – 1) = 78
[Figure: factor effects — cumulative rate of occurrence (%) of Classes 1, 2, and 3 at
levels 1, 2, and 3 of factors A, B, and C.]
Some difficulties exist in estimating the cumulative rate of occurrence for each class
under the optimum condition.
Percentages near 0% or 100% are not additive. The cumulative rate of occurrence
can be transformed using the omega method to obtain values that are additive. In
the omega method, the cumulative percentage (p) is transformed to a new value (Ω)
as follows:
Ω = –10 log10 (1/p – 1)      [the units of Ω are decibels (db)]
Using this transformation, the estimated cumulative rate of occurrence for each
class at the optimum condition (A2B2F1) is calculated as follows:
db of µ̂ = db of T + (db of A2 – db of T) + (db of B2 – db of T) + (db of F1 – db of T)
The estimated cumulative rate of occurrence for each class for the optimum
condition is:
Class 1
db of µ̂ = db of .25 + (db of .05 – db of .25) + (db of .20 – db of .25) + (db of .10 – db of .25)
        = –4.77 + (–12.79 + 4.77) + (–6.02 + 4.77) + (–9.54 + 4.77)
        = –18.81
µ̂ = 1%
TABLE 9.37
Rate of Occurrence at the Optimum Settings
Cumulative
Class Rate of Occurrence Rate of Occurrence
Bad 1% 1%
OK 27% 26%
Good 100% 73%
Class 2
db of µ̂ = db of .73 + (db of .60 – db of .73) + (db of .55 – db of .73) + (db of .60 – db of .73)
        = –4.25
µ̂ = 27%
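The omega transformation and its inverse can be sketched as follows (an assumption about
implementation, reproducing the Class 1 estimate above; the text rounds the result to 1%):

# A sketch of the omega method: Omega(p) = -10 log10(1/p - 1) db and its inverse.
import math

def omega(p):
    return -10 * math.log10(1 / p - 1)

def inverse_omega(db):
    return 1 / (1 + 10 ** (-db / 10))

T, A2, B2, F1 = 0.25, 0.05, 0.20, 0.10   # cumulative rates of occurrence for class 1 (Bad)
db_estimate = omega(T) + (omega(A2) - omega(T)) + (omega(B2) - omega(T)) + (omega(F1) - omega(T))
print(f"{db_estimate:.2f} db -> {inverse_omega(db_estimate):.1%}")   # about -18.8 db -> about 1.3%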
EXAMPLE
Four factors (A, B, C and D) are suspected of influencing door closing efforts for a
particular car model. An experiment was set up that evaluated each of these factors
at three levels. An L9 orthogonal array was used to evaluate the factor levels. Door
closing effort ratings were made by a group of typical customers. Each customer
was asked to evaluate the doors on a scale of one to three as follows:
1 Unacceptable
2 Barely acceptable
3 Very good feel
The experimental setup and test results are shown in Table 9.38 and Figure 9.22.
The ANOVA analysis for this set of data is shown in Table 9.39.
From the ANOVA table, factors A, B and C are identified as significant. The
effects of these factors are shown in Table 9.40.
TABLE 9.38
Door Closing Effort: Test Setup and Results
Number Ratings Class% Rate Class Cumulative
of by Class of Occurrence Frequency (%)
A B C D Ratings 1 2 3 1 2 3 1 2 3
1 1 1 1 5 1 3 1 20 60 20 20 80 100
1 2 2 2 4 2 1 1 50 25 25 50 75 100
1 3 3 3 5 2 3 0 40 60 0 40 100 100
2 1 2 3 4 0 0 4 0 0 100 0 0 100
2 2 3 1 4 0 1 3 0 25 75 0 25 100
2 3 1 2 4 0 1 3 0 25 75 0 25 100
3 1 3 2 5 3 2 0 60 40 0 60 100 100
3 2 1 3 5 4 1 0 80 20 0 80 100 100
3 3 2 1 4 3 1 0 75 25 0 75 100 100
TABLE 9.39
ANOVA Table for Door Closing Effort
Source df SS MS F Ratio S’ %
The choice of the optimum levels is clear for factors A and B. A2 and B1 are the
best choices. Two different choices are possible for factor C, depending on the overall
goal of the design. If the goal is to minimize the occurrence of unacceptable efforts,
C1 is the best choice. If the goal is to maximize the number of customer ratings of
“very good,” then C2 is the best choice. For this example, C1 will be chosen as the
preferred factor setting. The estimated rate of occurrence for each class for the optimum
setting, A2B1C1, can be calculated using the omega method. The estimated rates are
shown in Table 9.41. The df for the factors are calculated in the same way as with the
Classified Attribute Analysis, i.e., df = (the number of levels of that factor – 1) * (the
number of classes – 1).
In Classified Variable Analysis, the total number of items evaluated at each condition
is not equal. To “normalize” these sample sizes, percentages are analyzed and the
“sample size” for each test setup becomes 100 (for 100%). The total df is (the number
of “sample sizes” – 1) * (the number of classes – 1). For this example, the total df is:
TABLE 9.40
The Effects of the Door Closing Effort
Factor % Rate of Occurrence Cumulative% Rate of Occurrence
& Level Class 1 Class 2 Class 3 Class 1 Class 2 Class 3
TABLE 9.41
Rate of Occurrence at the Optimum Settings
Cumulative
Class Rate of Occurrence Rate of Occurrence
1 (unacceptable) 0% 0%
2 (barely acceptable) 13.4% 13.4%
3 (very good feel) 100% 86.6%
df = (900 – 1) * (3 – 1)
   = 1798
MISCELLANEOUS THOUGHTS
As we just mentioned in the discussion of the degrees of freedom, there is no
consensus among statisticians regarding the best method to use to analyze classified
data. A method that is an alternate to the ones described in this section is to transform
the classified data into variable data and analyze the data as described in Section 5.
A drawback to this approach is that the relative difference in the transformed values
should reflect the relative difference in the classifications, and this is sometimes
difficult to achieve. A simple example from the medical field will illustrate this. Four
different groups of patients suffering from the same disease are each given a different
medicine. The purpose is to determine which medicine is best. The response classes
are shown below:
A Patient improves
B No change in patient
C Patient dies
EXAMPLE
Three grades are used to evaluate paint appearance of a product. They are “Bad,”
“OK,” and “Good.” The classified data are transformed into variable data as follows:
Bad = 1; OK = 3; Good = 4. This puts emphasis on avoiding situations that result
in “bad” responses. Seven factors (A through G), each at two levels, are evaluated
to determine the combination of factor levels that optimizes paint appearance. Five
products are evaluated at each testing situation in an L8 orthogonal array. Test results
are shown in Table 9.42. The ANOVA analysis for the raw data is shown in
Table 9.43. Plotting of the data and inspection of the level averages reveal that the
best factor choices are: A2B2F1. The ANOVA analysis for the NTB S/N ratios is
shown in Table 9.44.
Plotting of the S/N data and inspection of the level averages reveals that the best
factor choices are: A2B2E2F1. The best choices overall are: A2B2E2F1. This compares
with the best choice of A2B2F1 from the accumulation analysis on page 425.
Each of the methods has one further disadvantage. Using the transformation
approach, it is not possible to make a projection of what the distribution of classes would
look like at the optimum settings. The accumulation analysis was not able to identify
the effect on the standard deviation of the ratings due to factor E. Each approach tells
a different part of the story and both should be used to get the full picture.
TABLE 9.42
OA Test Setup and Results
                                   Frequency             Transformed Data
A   B   C   D   E   F   G      Bad   OK   Good           (five products)
1   1   1   1   1   1   1       2     3    0             1 1 3 3 3
1   1   1   2   2   2   2       3     2    0             1 1 1 3 3
1   2   2   1   1   2   2       4     1    0             1 1 1 1 3
1   2   2   2   2   1   1       0     2    3             3 3 4 4 4
2   1   2   1   2   1   2       0     4    1             3 3 3 3 4
2   1   2   2   1   2   1       1     3    1             1 3 3 3 4
2   2   1   1   2   2   1       0     3    2             3 3 3 4 4
2   2   1   2   1   1   2       0     1    4             3 4 4 4 4
TABLE 9.43
ANOVA for the Raw Data
Source df SS MS F Ratio S’ %
DYNAMIC SITUATIONS
This section discusses:
DEFINITION
In many instances, the experimenter knows that the optimum response for a system
changes with levels of an input signal. Using the signal-to-noise techniques described
TABLE 9.44
ANOVA Table for the NTB S/N Ratios
Source df SS MS F Ratio S’ %
in the previous sections would yield incorrect results. These techniques emphasize
repeatability across all levels of the noise factors. In a dynamic situation, the exper-
imenter wants different responses depending upon the level of an input signal factor.
Two examples are:
DISCUSSION
The analysis of dynamic test data can be complicated. The following conditions exist
in the examples that follow. If these conditions are not present in a dynamic experiment,
help from a statistician should be sought before setting up the experiment.
Conditions
TABLE 9.45
Typical ANOVA Table Setup
Source df SS V
3. If there are three levels for a signal factor, the intervals between the
adjacent levels will be equal.
4. The experimental test includes either noise factors in an outer array or
repetitions so that for each inner array control factor setup, two or more
runs are made for each signal factor.
Analysis
1. The test results for each inner array setup (test number) are separately
analyzed using analysis of variance (ANOVA). The ANOVA table for
these analyses will be shown in a typical format as Table 9.45.
2. A nominal-the-best signal-to-noise ratio is calculated for each inner array
setup from these ANOVA tables as follows:
S/N = 10 log10 [(SSs − Ve) / (Ve × r × s × h²)]
where r = the number of data in each level of the signal factor for this
inner array setup; s = 0.5 if the signal factor has two levels or s = 2.0 if
the signal factor has three levels; and h = the interval between the adjacent
levels of the signal factor.
4. The calculated S/N ratio for each inner array setup is then used in a
nominal-the-best (NTB) S/N analysis of variance to determine which
control factor settings should be used to reduce variability and which
should be used to tune the response to the desired output.
The application of these steps will be developed more fully through the following
examples.
EXAMPLE 1
TABLE 9.46
L4 OA with Test Results
Test Test Matrix S1 S2 NTB
Number T O T ×O F1 F2 F1 F2 S/N
Factor Column
Type (T) 1
Orientation (O) 2
T × O Interaction 3
Items with two different surface finishes will be measured by the devices. Surface
finish (F) will be a noise factor. Two standard lengths of 10 and 20 mm will be
evaluated. These will be the two levels of the signal factor (S). The test matrix and
test results for the experiment are shown in Table 9.46.
For test number 1, the S/N ratio is calculated from the ANOVA table for just the
runs in test number 1.
S F Test Result
1 1 9.8
1 2 9.7
2 1 20.4
2 2 20.2
The ANOVA table for these data is shown in Table 9.47. The S/N ratio is
calculated as:
S/N = 10 log10 [(SSs − Ve) / (Ve × r × s × h²)]
S/N = 10 log10 [(111.303 − 0.013) / (0.013 × 2 × 0.5 × 10²)]
S/N = 19.33
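As a quick check of this arithmetic, the sketch below recomputes the ratio from the SSs and Ve values quoted from Table 9.47; the helper function is ours, written directly from the formula above.

```python
import math

def dynamic_ntb_sn(ss_signal, v_error, r, s, h):
    """Dynamic NTB S/N ratio: 10*log10[(SSs - Ve) / (Ve * r * s * h^2)]."""
    return 10 * math.log10((ss_signal - v_error) / (v_error * r * s * h ** 2))

# Test setup 1: SSs = 111.303 and Ve = 0.013 from Table 9.47, r = 2 readings
# per signal level, s = 0.5 (two-level signal), h = 10 mm between standards.
print(round(dynamic_ntb_sn(111.303, 0.013, r=2, s=0.5, h=10), 2))  # 19.33
```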
TABLE 9.47
ANOVA Table — Raw Data
Source df SS V
TABLE 9.48
ANOVA Table (S/N Ratio Used as Raw Data)
Source df SS MS F Ratio S’ %
An S/N ratio for each of the other test setups is calculated in a similar manner.
These S/N ratios are then analyzed using the S/N ratio as a single response for each
test setup — see Table 9.48. The level averages for the data are:
Level Averages
NTB S/N Data
O1 O2 Overall
Inspection of the data shows that the setting of T that gives the highest S/N is level
1. Although there are not enough test setups to allow the statistical identification of
level 1 of factor O as the optimum, the data suggests that orientation 1 might work
the best with device 1 and should be further investigated.
The level averages of the raw data are shown in Table 9.49.
The predicted averages are calculated using the techniques given in Section 5
using the interaction of T1 and O1 (assumed) as the optimum setting.
TABLE 9.49
Level Averages — Raw Data
S1 S2 Overall
for S = 1 (10 mm):
µ̂ = 14.93 + [(15.03 − 14.93) − (15.08 − 14.93) − (14.90 − 14.93)] + (9.98 − 14.93)
µ̂ = 9.87 mm

for S = 2 (20 mm):
µ̂ = 14.93 + [(15.03 − 14.93) − (15.08 − 14.93) − (14.90 − 14.93)] + (9.98 − 14.93)
µ̂ = 19.96 mm
Note that the readings at the optimum do not average out exactly to the standards. The approach assumes that the output reading can be calibrated to reflect the standard being measured; the emphasis is on providing readings with low variability at each standard level.
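The additive prediction used for these estimates can be sketched generically as follows; the level averages passed in are illustrative placeholders rather than the Table 9.49 values.

```python
def predict_response(overall_mean, chosen_level_averages):
    """Additive model: overall mean plus each chosen level's deviation from it."""
    return overall_mean + sum(avg - overall_mean for avg in chosen_level_averages)

# Hypothetical illustration: overall mean 14.93 with two chosen level averages.
print(round(predict_response(14.93, [15.03, 9.98]), 2))  # 10.08
```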
This example was very simple and it may seem that the ANOVA was not really
necessary. In many cases, the inner array will be more complicated than an L4, and the
technique shown in this example will help the experimenter make an informed choice.
EXAMPLE 2
Column
1 A — Content of material “Z” in the brake pads
2 B — Content of material “Y” in the brake rotors
3 A × B interaction
4 C — Hydraulic fluid type
5 Unassigned
6 Unassigned
7 D — Brake pad design
TABLE 9.50
OA Setup and Test Results for Example 2
A S1 S2
X T1 T2 T1 T2 NTB
A B B C D P1 P2 P1 P2 P1 P2 P1 P2 S/N
Column
1 S — Vehicle speed (30 mi/h or 60 mi/h)
2 T — Tire size
3 Unassigned
4 P — Pavement type (asphalt or concrete)
5 Unassigned
6 Unassigned
7 Unassigned
In this example, vehicle speed is a signal factor. The braking distance cannot be the
same when starting from 30 mi/h as when starting from 60 mi/h; therefore,
different responses are expected. The experimenter has determined through market
research that for this type of vehicle, the customer would prefer that the braking
distance be 35 feet from 30 mi/h and 130 feet from 60 mi/h.
The test setup and results are shown in Table 9.50.
The unassigned columns are not shown to conserve space and to make the table
more presentable. The outer array is also shown somewhat differently, with column
1 (factor S) at the top, column 2 (factor T) in the middle, and column 4 (factor P)
at the bottom. Although this arrangement can be used to present the data, the
unassigned columns should be added back to the arrays to aid the experimenter’s
understanding of the analysis and the application of the inner and outer L8 orthogonal
arrays.
For the first test setup, the S/N ratio is calculated from the ANOVA table for the
data in the first row. The ANOVA table for these data is:
TABLE 9.51
ANOVA Table (S/N Ratio Used as Raw Data)
Source df SS MS F Ratio S’ %
A 1* 0.340 0.340
B 1 85.217 85.217 44.453 83.300 47.87
A×B 1 7.431 7.431 3.876 5.514 3.17
C 1 47.288 47.288 24.668 45.371 26.08
Unassigned 1* 1.103 1.103
Unassigned 1* 4.307 4.307
D 1 28.313 28.313 14.769 26.396 15.17
Error —
(pooled error) 3 5.750 1.917 13.418 7.71
Total 7 173.998 24.857
S/N = 10 log10 [(SSs − Ve) / (Ve × r × s × h²)]
S/N = 10 log10 [(20090.101 − 0.298) / (0.298 × 4 × 0.5 × 30²)]
S/N = 15.73
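The same arithmetic can be checked directly, assuming r = 4 observations per speed, s = 0.5, and h = 30 mi/h as stated above:

```python
import math

# Same dynamic NTB S/N formula as in Example 1, now with r = 4 readings per
# speed, s = 0.5, and h = 30 (the interval between the two signal levels).
sn = 10 * math.log10((20090.101 - 0.298) / (0.298 * 4 * 0.5 * 30 ** 2))
print(round(sn, 2))  # 15.73
```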
An S/N ratio for each of the other test setups is calculated in a similar manner. These
S/N ratios are then analyzed using the S/N ratio as a single response for each test
setup — see Table 9.51.
The ANOVA table indicates that factors B, C, D, and the interaction of factors A
and B are significant. The level averages for these factors are:
Level Averages
B1 B2
A1 A2 A1 A2 C1 C2 D1 D2
The ANOVA table and the level averages indicate that B1, C2 and D1 are the optimal
choices from an S/N standpoint. These are the factor choices that should result in
the minimum variance of the response.
An analysis of the raw data would identify the signal factor as the most significant
contributor to the variation of the data. However, this information is not useful. To
increase the ability of the analysis to clearly show the significant control factors, the
target braking distance for each of the signal factor levels is subtracted from all of
the data collected at that signal factor level. This reduces the percent level of
TABLE 9.52
Transformed Data
A S1 S2
X T1 T2 T1 T2
A B B C D P1 P2 P1 P2 P1 P2 P1 P2
TABLE 9.53
ANOVA Table for the Transformed Data
Source df SS MS F Ratio S’ %
contribution of the signal factor and increases the percent level of contribution of
the control factors while maintaining their relative order of contribution. This trans-
formation makes the effects of the control factors more visible but does not affect
their significance. The transformed data are shown in Table 9.52.
The ANOVA table for these data is shown in Table 9.53.
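A hedged sketch of this target-subtraction step, using made-up braking distances rather than the Table 9.50 data, might look like this:

```python
# Hypothetical braking distances (feet); the actual values are in Table 9.50.
targets = {1: 35.0, 2: 130.0}  # customer targets at 30 mi/h and 60 mi/h
raw = [(1, 37.2), (1, 34.1), (2, 133.5), (2, 128.9)]  # (signal level, distance)

# Subtract the target for each signal level so the control-factor effects
# stand out in the ANOVA instead of the already-known signal effect.
transformed = [(s, round(y - targets[s], 1)) for s, y in raw]
print(transformed)  # [(1, 2.2), (1, -0.9), (2, 3.5), (2, -1.1)]
```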
The interactions between all the columns of the inner array and all the columns
of the outer array are available for investigation. For this example, only the A × S
and D × S interactions are investigated to give an indication of whether factors A
and D “behave” consistently at the two levels of the signal factor. The analysis
indicates that control factors A and D are significant contributors to the variation of
the data. The difference in responses between the two levels of these factors is
independent of the signal level. The analysis also identified the signal factor, S, as
an important contributor to the data variation. This, of course, was already known.
The level averages are:
S1 S2 Overall
The predicted averages are calculated using the techniques given in Section 5 using
A2 and D1 as the optimum settings and adding the values that were subtracted prior
to the ANOVA.
for S = 1 (30 mi/h):
µ̂ = 7.53 + [(6.00 − 7.53) − (6.27 − 7.53) − (14.90 − 14.93)] + (5.08 − 7.53) + 35
µ̂ = 37.29 feet

for S = 2 (60 mi/h):
µ̂ = 7.53 + [(6.00 − 7.53) − (6.27 − 7.53) − (14.90 − 14.93)] + (5.08 − 7.53) + 130
µ̂ = 137.19 feet
Factor B should be set to level 1 and factor C should be set to level 2 to maximize
the S/N ratio.
Since the target values were not obtained at the optimum settings, the experi-
menter must either continue to investigate other ways of reducing the stopping
distance or accept the consequences of failing to fully satisfy the customer’s require-
ments.
MISCELLANEOUS THOUGHTS
Let us close this section with a discussion of the two examples dealing with the
NTBII S/N approach. The NTB S/N ratio for a dynamic situation is:
NTB S/N = 10 log10 [(SSs − Ve) / (Ve × r × s × h²)]
This equation was explained earlier. Using the same terminology, the NTBII S/N = −10 log10 (Ve), which equals −20 log10 (error standard deviation).
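A minimal sketch of the NTBII calculation follows; the Ve value used is assumed to be the unrounded error variance behind test setup 1 of Example 1.

```python
import math

def ntb2_sn(v_error):
    """NTBII S/N = -10*log10(Ve) = -20*log10(error standard deviation)."""
    return -10 * math.log10(v_error)

# Assuming Ve for test setup 1 of Example 1 is 0.0125 before rounding,
# the NTBII S/N comes out near the 19.03 listed below.
print(round(ntb2_sn(0.0125), 2))  # 19.03
```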
For Example 1
The calculations for the NTB S/Ns were discussed earlier. The same steps are
followed for the NTB approach until the final S/N calculation. The two sets of S/N
ratios are contrasted below:
Test Setup    NTB S/N    NTBII S/N
1             19.33      19.03
2             14.94      14.88
3             12.05      12.04
4             12.65      13.01
When the NTBII S/N ratios were analyzed, the ANOVA table and the interpre-
tation of the level averages were essentially the same as those for the NTB S/N.
For Example 2
The calculations for the NTB S/N were discussed earlier. The NTBII analysis had
suggested that the standard deviation of the data might be related to the average of
the data. In other words, the spread of the stopping distances might be greater at
standard one (30 mi/h) than at standard two (60 mi/h). Using the procedure given
on pages 395–397, the averages and standard deviations were compared as follows:
1. For each vehicle speed, the average stopping distance and the standard
deviation were calculated (16 averages and 16 standard deviations total).
2. The log (standard deviation) was plotted versus the log (average).
3. The slope was estimated to be in the range of 0.2 to 0.3 with large scatter
in the data. By comparing this value to Item 4 on page 396, it was
determined that there was not a strong need to transform the data.
The NTBII S/N ratios were calculated for the untransformed data. The two sets
of S/N ratios are compared below:
Test Setup    NTB S/N    NTBII S/N
1             15.73       5.26
2             14.62       4.19
3              5.90      –4.51
4             15.25       4.62
5             12.74       2.21
6             20.64      10.18
7              6.58      –3.87
8              9.89      –0.56
When the NTBII S/N ratios were analyzed, the ANOVA table and the interpre-
tation of the level averages were essentially the same as those for the NTB S/N. The
reader is encouraged to run the analysis to confirm this. The analysis of the raw data
did not change. The conclusions also remained the same as before.
For these two examples, the NTB and NTBII methods give equivalent results.
However, this does not prove equivalency of the methods. On other sets of data,
differences in the results obtained have been demonstrated. Of the two methods, the
NTBII approach is easier to understand since maximizing the NTBII S/N is the same
as minimizing the error variance for the chosen combination of factor levels. (The
experimenter should always analyze the data completely, plot the data, compare the
results to the data plots, and run confirmation tests.)
PARAMETER DESIGN
This section provides an example of how the DOE technique is used to determine
design factor target levels. This approach is an upstream attempt to develop a robust
product that will avoid problems later in production. The emphasis at this stage is
on using wide tolerance levels to provide a product that is easy to manufacture and
still meets all requirements.
DISCUSSION
After the basic design of a product is determined, the next step is to determine to
what levels the components of that product should be set to ensure that the target
will be met. The experience of the designer or design team is useful in establishing
the starting values for the investigation. This investigation begins by determining
what the component target values should be using wide component tolerances. This
is called parameter design. If the resultant variability around the product response
target is too great, the next step is to determine which tolerances should be tightened.
This approach, Tolerance Design, will be discussed in Section 9.
Example
A particular product has been designed with five components (A through E). The
target response for the product is 59.0 units. Field experience has indicated that
when the response differs from the target by five units, the average customer would
complain and the repair cost would be $150. From this information, the k value in
the loss function can be calculated.
k = $150 / 5² = $6.00
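A small sketch of this loss-function arithmetic (the function names and the 61.0 reading are ours, for illustration only):

```python
def loss_coefficient(repair_cost, deviation):
    """k = repair cost at the customer's complaint point divided by deviation^2."""
    return repair_cost / deviation ** 2

def quality_loss(k, y, target):
    """Taguchi quadratic loss for one unit: L(y) = k * (y - target)^2."""
    return k * (y - target) ** 2

k = loss_coefficient(150.0, 5.0)      # $150 repair at a 5-unit deviation
print(k)                              # 6.0 dollars per unit^2
print(quality_loss(k, 61.0, 59.0))    # a hypothetical unit at 61.0 loses $24.00
```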
A brainstorming group that consisted of the designer and other experts in this
area determined that the response is linear over the component range of interest and
that the components should be evaluated at the levels shown in Table 9.54.
TABLE 9.54
Components and Their Levels
Levels
(Units are those appropriate
Component to each component)
(Factor) Low High
A 1000 1500
B 400 700
C 50 70
D 1300 2200
E 1200 1600
Note: Factor A is more expensive to manufacture at the higher level than at the
lower level.
These factors will be assigned to an L8 inner array. The two unassigned columns
will be used for an estimate of the experimental error.
An L8 outer array contains the low cost tolerances as follows:
A –50 +50
B –15 +15
C –5 +5
D –200 +200
E –100 +100
The tolerance amounts are added to/subtracted from the control level as indicated
by the outer array. The brainstorming group suspects that two other noise factors
are significant, namely, the temperature (T) and humidity (H) of the assembly
environment. The noise and tolerance factors are combined into an L8 outer array.
The testing setup and test results are shown in Table 9.55.
An understanding of the way the testing matrix is interpreted can be reached by
considering the factor A. When the inner array column associated with factor A has
a value of 1, the value of A is 1000. When there is a 2 in that column, the value of
A is 1500. The actual test values of A are also determined by the tolerance value of
A in the outer array. If the outer array value of A is 1, then 50 is subtracted from
the value of A determined in the inner array. If the outer array value is 2, then 50
is added to it. This can be summarized as follows:
Inner Array Level    Outer Array = 1    Outer Array = 2
1                    950                1050
2                    1450               1550
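The same mapping can be written as a short helper, restating the nominal levels and tolerance for factor A given above:

```python
# Nominal inner array levels and the low-cost outer array tolerance for factor A.
nominal = {1: 1000, 2: 1500}
tolerance = 50

def test_value(inner_level, outer_level):
    """Actual value of A in a run: subtract or add the tolerance."""
    return nominal[inner_level] + (tolerance if outer_level == 2 else -tolerance)

for inner in (1, 2):
    print(inner, test_value(inner, 1), test_value(inner, 2))
# 1 950 1050
# 2 1450 1550
```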
TABLE 9.55
L8 Inner OA with L8 Outer OA and Test Results
Outer Array
T 1 2 2 1 2 1 1 2
H 1 2 2 1 1 2 2 1
E 1 2 1 2 2 1 2 1
D 1 2 1 2 1 2 1 2
C 1 1 2 2 2 2 1 1
L8 Inner Array B 1 1 2 2 1 1 2 2 NTB
A B C D E X Y A 1 1 1 1 2 2 2 2 S/N
Note: X and Y are the unassigned columns that will be used to estimate error.
The ANOVA table and level averages for the most significant factors for the S/N
and raw data are shown in Table 9.56.
From the S/N level averages, D2B1E2 is clearly the best setting for S/N. The
estimated S/N at that setting is:
S/N = 24.14 + (25.48 − 24.14) + (25.46 − 24.14) + (25.30 − 24.14) = 27.96
Since A1 is preferred from a cost standpoint and D2 is preferred from the S/N analysis, the next step is to determine if the value of C can be adjusted to attain the target of 59. The average response at A1D2 is 56.23 + 53.18 − 50.60 = 58.81. To reach a target of 59, the value of C that is included in the average response calculation must have a level average of approximately 50.8, since 56.23 + 53.18 + 50.8 − (2 × 50.60) ≈ 59.0.
TABLE 9.56
ANOVA Table (NTB) and Level Averages
for the Most Significant Factors
Source df SS MS F Ratio S’ %
A 1* 0.860 0.860
B 1 13.965 13.965 21.992 13.330 30.50
C 1 2.533 2.533 3.989 1.898 4.34
D 1 14.455 14.455 22.764 13.820 31.62
E 1 10.845 10.845 17.079 10.210 23.36
X 1* 2.477 2.477
Y 1* 10.424 10.424
Error —
(pooled error) 3 1.906 0.635 4.446 10.17
Total 7 43.704 6.243
S/N Level Averages
Factor Level 1 Level 2
D 22.80 25.48
B 25.46 22.82
E 22.98 25.30
Raw Data Level Averages
Factor Level 1 Level 2
A 56.23 44.96
C 47.99 53.21
D 48.01 53.18
Average of all data = 50.60
The target value for C can be interpolated from the tested levels and the level averages as follows:
Target = 50 + [(50.8 − 47.99) / (53.21 − 47.99)] × (70 − 50)
= 60.77
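This is a plain linear interpolation between the tested levels of C, which can be sketched as follows under the stated linearity assumption:

```python
def interpolate_setting(target_avg, low_level, high_level, low_avg, high_avg):
    """Linearly interpolate the factor setting whose level average equals target_avg."""
    fraction = (target_avg - low_avg) / (high_avg - low_avg)
    return low_level + fraction * (high_level - low_level)

# Factor C was tested at 50 and 70 with level averages 47.99 and 53.21.
print(round(interpolate_setting(50.8, 50, 70, 47.99, 53.21), 2))  # 60.77
```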
Note:
Value of C Response
50 47.99
Target 50.8
70 53.21
The recommended factor target values are:
A 1000
B 400
C 60.77
D 2200
E 1600
59.0 ± √[4.02 × 2.987 × (1/16 + 1/8)]
or
59.0 ± 1.50
A set of verification runs is now made using the recommended factor target values
given previously. The tolerance levels from the outer array are used to define an L8
verification run experiment as shown in Table 9.57.
The average response is 59.5 and the S/N is 27.3. Since the average response
and the S/N are close to the predicted values, the verification runs confirm the
prediction. If the average response and S/N did not confirm the predictions, the data
could be analyzed to determine which factors have response characteristics different
from those predicted.
The information from the verification runs cannot be used directly in the loss
function, since the observed variability may be affected by testing only at the
tolerance limits. The center portions of the factor distributions are not represented
in these tests. For the situation where the change in response is assumed to be a
linear increase or decrease across the tolerance levels, the loss function can be easily
calculated as follows:
TABLE 9.57
Verification Runs Using Recommended Factor Target Values
A-Tol B-Tol C-Tol D-Tol E-Tol H T Test Result
1 1 1 1 1 1 1 60.2
1 1 1 2 2 2 2 57.9
1 2 2 1 1 2 2 59.5
1 2 2 2 2 1 1 64.8
2 1 2 1 2 1 2 59.4
2 1 2 2 1 2 1 58.6
2 2 1 1 2 2 1 55.7
2 2 1 2 1 1 2 59.9
1. If it can be assumed that the Cpk in production will be 1.0 or greater for
all specified tolerances, then the difference between the tolerance limits
will be equal to or greater than six times the production standard deviation
for each component parameter.
2. The difference between the response level averages for the two tolerance
limits will equal six times the production response standard deviation
since the product response is linearly related to the component parameter
level.
3. The response variance due to each tolerance is additive since the response
effect of each component tolerance is additive. (Variance = Std. Dev.²)
4. The effect of noise factors can be treated in a similar manner.
In this example, the levels of humidity were set at the average humidity ± 2
times the humidity standard deviation. The change in response is assumed to be
linear across the change in humidity. The difference in response between the two
levels represents four times the response standard deviation. The response variance
can be calculated as shown in Table 9.58.
The response variance will be 3.9970. The loss function can be calculated from
the equation:
Loss = k (σ² + offset²)
Loss = $6.00 × [3.9970 + (59.5 − 59.0)²]
For a production run of 50,000 pieces, the total loss would be $1,274,100.
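A sketch of the variance-stacking and expected-loss arithmetic follows; the component differences in the first call are invented for illustration, while the second call uses the 3.9970 variance and 0.5 offset quoted in the text.

```python
def stacked_variance(component_diffs, noise_diffs):
    """Component tolerance spans cover 6 sigma (Cpk = 1); noise factors such as
    humidity were set at +/- 2 sigma, so their spans cover 4 sigma."""
    return (sum((d / 6.0) ** 2 for d in component_diffs)
            + sum((d / 4.0) ** 2 for d in noise_diffs))

def expected_loss(k, variance, offset, pieces=1):
    """Expected Taguchi loss, k * (sigma^2 + offset^2), times the production volume."""
    return k * (variance + offset ** 2) * pieces

# Hypothetical response differences, just to show the stacking rule:
print(round(stacked_variance([3.0, 2.4, 1.8], [1.6]), 2))        # 0.66

# With the 3.9970 total variance and 0.5 offset quoted in the text:
print(round(expected_loss(6.0, 3.9970, 0.5, pieces=50_000)))     # 1274100
```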
In the situation where the change in response is non-linear across the tolerance
levels or noise factor levels, a computer simulation can be used to determine the
distribution of product response for each component taken singly and for the total
TABLE 9.58
Calculated Response Variance
Tolerance Response Difference Response Production
for Factor Between Tol Limits Std. Dev. Variance
assembly. This situation occurs when the highest (lowest) response occurs at the
component nominal and the response decreases (increases) as the distance from the
component nominal increases. The purpose of these calculations is to estimate the
response variance for the total assembly population so that the loss function can be
calculated. Once the value of the loss function has been calculated, it can be
compared to the cost of tightening the tolerances so as to determine the optimal
tolerance limits. This technique will be discussed in the following section.
TOLERANCE DESIGN
This section illustrates:
1. How tolerance limits can be set so that the product will meet customer
requirements repeatedly with the widest possible tolerances. The goal is
to choose the most cost efficient tolerance levels.
2. How prior knowledge about the response characteristics of the component
levels of a product can be efficiently used.
DISCUSSION
After the target level for each component has been determined using parameter
design, the loss function value of the product design is compared to design guidelines
and to the cost of improving the production processes to meet tightened tolerances.
If it costs less to tighten the tolerance than the resulting reduction in the loss function,
then for the long run it is better to tighten the tolerance. The evaluation of the
tolerance limits and the selective tightening of the limits is called tolerance design.
As in parameter design, an example will illustrate this approach. The reader may
need to review Volume V of this series.
TABLE 9.59
Cost of Reducing Tolerances
Low Cost High Cost
Tolerance Tolerance % Cost to Change the Tolerance
Component Low High Low High Reduction for 50,000 Pieces (dollars)
Example
Continuing the example from the previous section, the loss function with low cost
tolerance was calculated to be $25.482 per piece or $1,274,100 for the production
run of 50,000 pieces. This calculation was based on the assumptions that:
1. The tolerance spread is equal to six times the production standard devi-
ation (Cpk = 1).
2. The response changes linearly across the tolerance limits.
3. The sum of the variance contributions for the components is the total
assembly response variance.
4. Humidity was set at levels that are the average humidity ±2 times the
standard deviation, and the response changes linearly across these levels.
TABLE 9.60
The Impact of Tightening the Tolerance
Response Difference Response Tightened Cost of Tightened
Between Tightened Production Response Tolerance
Component Tolerance Limits Std. Dev. Variance (dollars)
Variance %
Original Tightened Variance % Reduction per
Component Variance Variance Reduction $1000 Cost
TABLE 9.61
Reduction of 20% in the Tolerance Limits of Component A
Tolerance for Response Difference Response Production Response
Component Between Tol. Limits Std. Dev. Variance
Loss = k (σ² + offset²)
Loss = $6.00 × [3.9484 + (59.5 − 59.0)²]
TABLE 9.62
Reduction of Tolerance Limits for Component D
Tolerance for Response Difference Response Production Response
Component Between Tol. Limits Std. Dev. Variance
For a production run of 50,000 pieces, the total loss would be $1,259,520. This
is a $14,580 decrease in the loss function from the low cost tolerance situation.
Since the decrease in the loss function is more than the $5000 cost of tightening the
tolerance on A, it is advantageous in the long run to tighten that tolerance.
The 0.50 offset is assumed to be a constant to provide a basis to compare
improvement in only the variance part of the equation. In some situations, the actions
taken to reduce the response variance may also result in a better-centered response
distribution.
The next step is to evaluate the loss function with the tolerance limits reduced
for component D — see Table 9.62. The response variance will be 3.9173. If the
same 0.5 offset is assumed, the loss function can be calculated as:
Loss = k (σ² + offset²)
Loss = $6.00 × [3.9173 + (59.5 − 59.0)²]
For a production run of 50,000 pieces, the total loss would be $1,250,190. This
is a $9330 decrease in the loss function from the situation with only the tolerance
of A tightened. Since the decrease in the loss function is more than the $9000 cost
of tightening the tolerance on D, it is advantageous in the long run to tighten that
tolerance.
We can do the same for component C — see Table 9.63.
The response variance will be 3.8687. If the same 0.5 offset is assumed, the loss
function can be calculated as:
TABLE 9.63
Reduction of Tolerance Limits for Component C
Tolerance for Response Difference Response Production Response
Component Between Tol. Limits Std. Dev. Variance
Loss = k (σ² + offset²)
Loss = $6.00 × [3.8687 + (59.5 − 59.0)²]
For a production run of 50,000 pieces, the total loss would be $1,235,610. This
is a $14,580 decrease in the loss function from the situation with only the tolerances
of A and D tightened. Since the cost of tightening the tolerance on component C is
$15,000, it would not be advantageous to tighten that tolerance.
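The tighten-or-leave logic walked through above can be summarized as a simple comparison of the loss-function saving against the tightening cost; the variances and costs below restate the figures used in the text.

```python
def total_loss(k, variance, offset, pieces):
    """Expected loss for the run: k * (sigma^2 + offset^2) * number of pieces."""
    return k * (variance + offset ** 2) * pieces

K, OFFSET, PIECES = 6.0, 0.5, 50_000
current_variance = 3.9970  # low-cost tolerance baseline

# (component, cumulative variance if its tolerance is tightened, tightening cost)
candidates = [("A", 3.9484, 5_000), ("D", 3.9173, 9_000), ("C", 3.8687, 15_000)]

for name, new_variance, cost in candidates:
    saving = (total_loss(K, current_variance, OFFSET, PIECES)
              - total_loss(K, new_variance, OFFSET, PIECES))
    if saving > cost:
        print(f"Tighten {name}: saving ${saving:,.0f} exceeds cost ${cost:,.0f}")
        current_variance = new_variance
    else:
        print(f"Leave {name}: saving ${saving:,.0f} does not cover cost ${cost:,.0f}")
```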
So far, the tolerance design has been entirely a paper exercise based on the tests
run during the parameter design and the assumptions about the relationships between
the component levels and the response. A set of confirmation runs should be made
with the tolerance limits for components A and D tightened. An L8 orthogonal array
is used for the confirmation runs with the levels set, test setup, ANOVA table, and
level averages as shown in Table 9.64.
The test setup and results:
A-Tol B-Tol C-Tol D-Tol E-Tol H T Test Result
1 1 1 1 1 1 1 59.7
1 1 1 2 2 2 2 57.5
1 2 2 1 1 2 2 59.5
1 2 2 2 2 1 1 63.6
2 1 2 1 2 1 2 59.3
2 1 2 2 1 2 1 58.6
2 2 1 1 2 2 1 55.8
2 2 1 2 1 1 2 59.3
TABLE 9.64
L8 OA used for the Confirmation Runs with the Levels
Set, Test Setup, ANOVA Table, and Level Averages
Tolerance Levels
Low Level High Level
Column (1) (2) Nominal
The average response is 59.2 and the S/N is 28.5. The ANOVA table and level
averages for all of the factors from the verification runs are:
ANOVA Table
Source df SS MS F Ratio S’ %
Level Averages
Factor Level 1 Level 2
As mentioned in the last section, this information cannot be used directly in the
loss function since the observed variability may be affected by testing only at the
TABLE 9.65
Response Variance
Tolerance Response Difference Response Production
for Factor Between Tol. Limits Std. Dev. Variance
tolerance limits. The center portions of the distributions are not represented in these
tests.
Since the change in response is assumed to be a linear increase or decrease
across tolerance levels, the loss functions can be easily calculated. The Cpk in
production is assumed to be 1.0 or greater for all specified tolerances. The difference
between the tolerance limits will be equal to six times the production standard
deviation for each component parameter for a Cpk of 1. Since the product response
is linearly related to the component parameter level, the difference between the
response level averages for the two tolerance limits will equal six times the produc-
tion response standard deviation. Since the response effect of each component
tolerance is additive, the response variance due to each tolerance is additive. In a
similar manner, the difference in response between the two humidity levels represents
four times the response standard deviation. For this example, the response variance
can be calculated as shown in Table 9.65.
The response variance will be 1.0318. The loss function can be calculated from
the equation:
Loss = k (σ² + offset²)
Loss = $6.00 × [1.0318 + (59.5 − 59.0)²]
For a production run of 50,000 pieces, the total loss is estimated to be $384,540
compared to $1,274,100 before the tolerance design. This is an $889,560 reduction
from the original estimate of the value of the loss function.
Humidity
Note that humidity was identified as an important contributor throughout this exam-
ple. The experimenter should investigate the possibility of controlling humidity to
further reduce the loss function. If either the effect of humidity on the design can
be minimized or the humidity can be controlled, the loss function could be greatly
reduced.
Testing
Eighty tests were used in the example for the last and present sections. These tests
were used as follows:
DOE CHECKLIST
Action Complete
Describe in measurable terms how the present situation deviates from what is desired.
Identify the proper people to be involved in the investigation and the leader of the
investigation.
Obtain agreement from those involved on:
Scope of the investigation
Other constraints, such as time or resources
Obtain agreement on the goal of the investigation.
Determine if DOE is appropriate or if other research should be done first.
Use brainstorming to determine what factors may be important and which of them could
interact.
Choose a response and measurement technique that:
Relates to the underlying cause and is not a symptom
Is measurable
Is repeatable
Determine the test procedure to be used.
Determine which of the factors are controllable and which are noise.
Determine the levels to be tested for each factor.
Choose the appropriate experimental design for the control and noise factors.
Obtain final agreement from all involved parties on the:
Goal
Test procedure
Approach
Timing of the work plan
Allocation of roles
Arrange to obtain appropriate parts, machines and testing facilities.
Monitor the testing to assure proper procedures are followed.
Use the appropriate techniques to analyze the data.
Run confirmatory experiments.
Prepare a summary report of the experiment with conclusions and recommendations.
10 Miscellaneous Topics — Methodologies
THEORY OF CONSTRAINTS (TOC)
Every organization that wishes to achieve significant improvement with modest
capital investment must address five critical questions:
1. What are the key areas within the organization for competitive improve-
ment?
2. What are the key technologies and techniques that will improve these key
competitive areas at least cost to the organization?
3. How do these improvement and investment opportunities relate (i.e., how
can they be applied in an integrated, supportive and logical manner)?
4. In what sequence should these opportunities be addressed?
5. What are the real financial benefits going to be?
Certainly, many other questions must be dealt with, but these are the five issues
that frequently cause the most difficulty in industrial and business planning today.
So, how can the theory of constraints (TOC) help? Before we answer this, let us
examine the fundamental concepts of TOC.
THE GOAL
The fundamental goal of any for-profit organization is to make money. In fact,
practical experience tells us that the owners/shareholders of such organizations
demand this end result performance. However, is this definition of the goal complete?
The notion of continuous or ongoing improvement has proven to be extremely
powerful in all aspects of life. Therefore, does the application of this notion to our
definition of the goal have any important impact? Most organizations strive to
improve their money-making performance year after year. So, the goal is really to
make more money now and in the future. In this manner, it is impossible to make
short-term decisions that bolster short-term profitability while compromising longer-
term profitability without violating our goal definition.
If the owners are responsible for determining the goal of the organization, what
is the role of management? Clearly, management must develop strategies and tactics
that are appropriate for achieving the goal. Unlike the goal, these strategies and
tactics must be flexible and responsive to changing conditions. The goal of for-profit
organizations has been the same for over 1000 years and shows every indication of continuing in good health.
Now, before an organization can develop its own customized strategies and tactics,
it must first address at least one prerequisite. How would the management team of an
organization know whether a particular strategy or tactic was effective (i.e., contributed
to making more money)? Some set of measures would have to be used to gauge the
degree of success. As a matter of fact, the implementation of a few strategic and/or
tactical candidates may not be measurable. These candidates would likely not be
selected for actual implementation. So, what should be the high-level measures that
lead us to understand the impact of our strategic and tactical efforts on our goal?
STRATEGIC MEASURES
For TOC, three measures have become the pillars of the methodology. They are:
1. Throughput (T) — The net rate at which the organization generates and
contributes new money, primarily through sales.
2. Investment (I) — The money the organization spends on “stuff,” short-
and long-term assets, which can ultimately be converted into T.
3. Operating Expense (OE) — The money the organization spends convert-
ing I into T.
The net profit generated per period (T − OE) in relation to the total investment base employed is an important relative measure. This relative measure is frequently thought of as return on investment and can be summarized as follows:
Return on Investment = (T − OE)/I
Productivity = T/OE
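Since only one ratio is summarized here, a short sketch of the usual TOC relationships among these measures may help; the figures are invented for illustration.

```python
# Hypothetical period figures, in dollars.
throughput = 1_200_000         # T: new money generated through sales
investment = 2_000_000         # I: money tied up in short- and long-term assets
operating_expense = 900_000    # OE: money spent converting I into T

net_profit = throughput - operating_expense      # T - OE = 300,000
return_on_investment = net_profit / investment   # (T - OE) / I = 0.15
productivity = throughput / operating_expense    # T / OE = 1.33...

print(net_profit, return_on_investment, round(productivity, 2))
```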
MEASUREMENT FOCUS
It has often been said that if you focus on everything, you will end up focusing on
nothing. So in this context, which of the three strategic measures (T, I, or OE) should
an organization focus its improvement efforts on? Before we answer this question,
it may be helpful to determine which measure typical organizations traditionally
have focused on. It is our experience that traditional focus is on OE. Why? Here are
some popular reasons:
An analogy may aid our investigation. Assume you can go to your local optician and purchase
a pair of “OE” glasses. When you get back to your company and put these glasses
on, you are able to clearly, easily, and rapidly identify and prioritize opportunities
for OE reduction, and as a result, you amass a list of OE reduction projects (A, B,
C, D, and E). A few days later, you return to that same optician and purchase a pair
of “T” prescription glasses. Upon returning to your company, you put on your new
T glasses and are able to clearly, easily, and rapidly identify and prioritize opportu-
nities for increasing T. Using your instincts, do you think the list of T projects would
be the same as or different from the OE reduction project list? Over the past five years
and thousands of respondents, the answer has unanimously been “different.” In other
words, our instincts tell us that perspective is critical in guiding continual improve-
ment efforts. Therefore, viewing the organization “through the eyes of T” may be
the most effective method of driving continual improvement within the modern,
lean, quick-response enterprise.
There is an old saying that goes, “Tell me how you will measure me, and I will tell you how
I will behave.” Therefore, the manner in which each group within the organization
is measured directly impacts that group’s actions.
To see how this phenomenon occurs, let us examine a hypothetical organization
with sales, finance, manufacturing, product development (PD), and quality depart-
ments within that organization:
Typical measurable characteristics are:
Finance
• Cash flow management
• Return on assets
• Return on investment
Sales
• Volumes
Product development
• Development budgets
• Development schedules
Quality
• Defect rates
• Returns
• Scrap rates
• Warranty claims
Manufacturing
• On-time delivery
What do you notice about these individual department or local measures? First,
many of them exist. Second, they are all different. But there is something else. Let
us say that I manage manufacturing and my on-time delivery is getting worse. Such
local measures as on-time delivery are important to me; they are like my professional
report card. So I conclude that in order for me to improve my local measure, I need
to add production capacity by purchasing and installing a new piece of equipment.
However, when I take my recommendation to the finance people, they veto the
proposal not because they are inherently nasty people, but because the proposal will
make their return on assets local measures worse. So I rethink my strategy and
conclude that I could improve on-time delivery by simply speeding up my production
equipment. However, when I do that, the quality manager reacts negatively, to say
the least, because the defect rate local measure gets worse. It appears that frequently
my efforts to improve those measures for which I am held accountable hurt other
local measures for which others in the organization are held accountable. In other
words, our traditional local measures are frequently in conflict with one another.
In the quick response, lean, modern organization, can such a fundamental conflict
be allowed to continue? Most believe it cannot. Can we overcome this situation?
The simplest solution would be to measure all departments and functions with the
same measures. What measures come to mind first? How about T, I, and OE?
Of these fundamental measures, which should take precedence within the quick-
response, lean, modern enterprise? We concluded earlier that T provided the best
These two elements are related to the design for six sigma (DFSS) methodology
because:
1. Design engineering knows the critical features of the products that are
processed through the specific manufacturing process A.
2. Manufacturing engineering knows the critical manufacturing process steps
performed by the specific manufacturing process A.
3. Integrating the above two insights (the essence of DFSS) frequently allows
the joint design/manufacturing engineering team to offload a few non-
critical process steps now performed by process A to some other non-
constraint manufacturing process such as process B.
First, let us examine how these lists would be constructed from the traditional
management perspective. What type of projects would make it into Tier I for man-
ufacturing process (MP) #1 department? Obviously, process improvement projects
focused primarily, if not exclusively, at MP #1 department. How about MP #3
department? The same. And MP #5, #6, and so on? The same. By looking at the
organization’s various lists of Tier I process improvement projects, can we determine
where the leverage point of the organization is located? Can we determine what the
organization’s key improvement thrust is? No! In fact many, many different issues
are key to the organization. It appears that the organization is focusing on everything.
Now, from the T perspective, what type of projects would make it into Tier I
for MP #1 department? Obviously, process improvement projects focused primarily,
if not exclusively, at MP #4 department. How about MP #3 department? The empha-
sis is also focused primarily on process improvement projects at MP #4 department.
And for MP #5, #6, and so on? The same. By looking at the organization’s various
lists of Tier I process improvement projects, can we determine where the leverage
point of the organization is located? Can we determine what the organization’s key
improvement thrust is? Yes! In fact, it appears that the organization has been able
to synchronize the process improvement efforts of all of its non-constraint resources
from the T perspective.
If any logistical system’s T performance is limited by its constraint resource and
there can only be one constraint resource within a system at a single point in time,
then the rest of the organization’s resources must be non-constraints. Therefore, in
terms of sheer numbers, non-constraints dominate the organization. From this per-
spective coupled with the Tier I process improvement example above, we have
discovered that we may have made a mistake naming this methodology the theory
of constraints. In reality TOC is not primarily about the poor overworked people in
the constraint department. TOC is not primarily about whether the constraint people
can invent a 25th hour in the day or an eighth day in the week. Rather, one of the
most powerful elements of TOC is the synchronization of the organization’s non-
constraints so as to improve the T performance of the entire system.
DESIGN REVIEW
A typical design review is a process, and it must be (a) multi-phased and (b) integrated
into the different design phases. In fact, the reviews should also extend to the operations
and support phases. It is of paramount importance in any given design review to
consider the feedback of customer information because quite often it reveals factors
of concern that may have been forgotten or considered too lightly. If design reviews
are not taken seriously in the sequential design phases, warranty costs can well
exceed any early budgetary considerations. A very typical sequential design review
is given in the SAE R&M Guideline (1999, p. 16) shown in Table 10.1.
Even though Table 10.1 identifies the core objectives of design review, there is
more to it than just a cursory outline of requirements (Stamatis, 2002). For example:
System Requirements Review — This is the first review with the customer
where the customer specifies the level of cost-effectiveness that the manu-
facturer is expected to meet. It is at this meeting(s) that customer and
manufacturer come to some agreement not only on the reliability and
TABLE 10.1
Design Review Objectives
Design Phase Review Objectives
In-house Reviews — In the event problems occur after production starts, additional reviews may be necessary. Most R&M texts do not discuss this type of review because, theoretically, the design is “frozen” and the only problems, if there are any, are of the production type. But in the real world anomalies do occur, and they have to be addressed. The design review concept should continue until the customer is totally convinced that the product is of high quality.
Many questions arise in the course of design review meetings. Some organiza-
tions have a checklist they use to ensure that “nothing falls through the cracks” and
that they cover all the potential problems that may occur. A generic checklist, which
can be used at design review meetings or by the designer to ensure the integrity of
the design, is shown as Table 10.2. For obvious reasons, this list is not an exhaustive
list to cover all situations. Rather, it is a list that may be modified and act as a
catalyst for further discussion.
Here, we must give the reader a very strong caution. Concurrent or simultaneous
engineering is not a design review function, but it is closely related to it. Concurrent
engineering is the process by which all disciplines that design, manufacture, inspect,
sell, use, and maintain the product work together to develop and produce it. Mile-
stones are established where the various disciplines accomplish specific tasks simul-
taneously before proceeding on to the next task. Traditionally, each step in the design
process occurred one at a time and extended over a relatively long duration. In
concurrent engineering, several functions, such as manufacturing engineering, work with
the quality engineer and the designer to set up their responsibilities while the design is
still being developed. A comparison between traditional and concurrent
engineering is shown in Table 10.3.
TABLE 10.2
Design Review Checklist
TABLE 10.3
Comparison Between Traditional and Concurrent Engineering
Function Traditional Engineering Concurrent Engineering
FMEA Compared with FTA
FMEA: An FMEA starts with the origin of a failure mode and then proceeds to find the causes, probabilities, and corrective actions.
FTA: An FTA starts with an accident or undesirable failure event, determines the causes, then the origin of the causes, then what can be accomplished to avoid the failure.
FMEA: An FMEA considers all potential failure modes that can be produced through mental generation.
FTA: An FTA studies only negative outcomes that warrant further analysis.
FMEA: An FMEA uses an inductive approach.
FTA: The FTA uses a deductive approach.
FMEA: Less engineering is needed for an FMEA.
FTA: Skilled personnel are required for an FTA.
FMEA: There is limited safety assessment in an FMEA.
FTA: The FTA manages risk assessment and safety concerns.
FMEA: The failure paths are not delineated in an FMEA.
FTA: The FTA provides a good assessment of the failure paths, and the control points are well enhanced.
FMEA: An FMEA looks at each failure mode separately.
FTA: An FTA demonstrates a more selective method of showing the relationship among events that interact with one another.
To help in the generation of an FTA, the Rome Air Development Center (RADC)
has formulated a seven-step approach. These steps are very generic, and each
organization using them must modify them to fit its purpose. The seven steps are:
1. Define the system, ground rules, and any assumptions to be used in the
analysis.
2. Develop a simple block diagram of the system showing inputs, outputs,
and interfaces.
3. Define the top event (ultimate failure effect or undesirable event) of
interest.
4. Construct the fault tree for the top event using the rules of formal logic.
5. Analyze the completed fault tree.
6. Recommend any corrections for design changes.
7. Document the analysis and its results.
FTA is one of the few tools that can depict the interaction of many factors and
manage to consider the event that would trigger the failure or undesirable event.
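As an illustration of steps 4 and 5, a minimal fault tree can be represented with AND/OR gates and evaluated for the top-event probability, assuming independent basic events; the events and probabilities below are hypothetical.

```python
def and_gate(probabilities):
    """All inputs must fail for the gate output to fail (independent events)."""
    p = 1.0
    for x in probabilities:
        p *= x
    return p

def or_gate(probabilities):
    """Any single input failing fails the gate output (independent events)."""
    survive = 1.0
    for x in probabilities:
        survive *= (1.0 - x)
    return 1.0 - survive

# Hypothetical top event: "unit overheats" = (fan fails AND backup fan fails)
# OR (temperature sensor fails).
p_top = or_gate([and_gate([0.02, 0.05]), 0.001])
print(round(p_top, 6))  # 0.001999
```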
Designers should consider the safety aspects of their designs early in the concept
stage so that necessary changes can be made before potential failures have a chance
of occurring.
REFERENCES
Anon., Reliability and Maintainability Guideline for Manufacturing Machinery and Equipment, M-110.2, 2nd ed., Society of Automotive Engineers, Warrendale, PA, and National Center for Manufacturing Sciences, Ann Arbor, MI, 1999.
Stamatis, D.H., Guidelines for Six Sigma design review, Parts 1 and 2, Quality Digest, pp. 27–31 and pp. 26–30, April and May 2002.
Stamatis, D.H., Failure Mode and Effect Analysis: From Theory to Execution, Quality Press, Milwaukee, WI, 1995.
TRADE-OFF STUDIES
Trade-off studies are designed for balancing both business and technical issues and
optimizing the product for the customer, whether internal or external to the organi-
zation. Trade-off studies:
• Demonstrate that the alternatives satisfy the requirements (as they are
understood at the time of the evaluation)
• Document the evaluation
Of course, the question quite often is, “How do I know if I need to conduct a
trade-off study?” The answer depends on the following questions:
• Does the decision require input and concurrence from several organizations?
• Does the decision require balancing inputs that may conflict and/or are
inversely related?
• Is there a choice between several viable/acceptable alternatives?
• Is a quick, comprehensive, and defensible decision needed?
If the answer is yes to one or more of these questions, a trade-off study may be
the best approach for selecting the optimum solution. To conduct the trade-off study
appropriately, there are some preliminary requirements. They are:
As with any other methodology and tool, with a trade-off study we expect to
have some kind of deliverables at the end of the analysis. Typical deliverables are:
The trade-off study matrix consists of two major components — the alternative list
and the category list:
The alternative list: This list is simply a listing of each alternative being
considered. The alternatives are listed across the top of the trade-off study
matrix, with one alternative per column.
The category list: The category list consists of musts and wants arranged by
assessment items. Each assessment item is broken down into various mea-
surable discriminators.
An example:
The first step in the trade-off study process is creating a preliminary matrix. You
must identify both the alternatives being examined and the list of assessment items and
discriminators. Draw assessment items and discriminators from program requirements,
corporate data, QFD studies, CAE analysis, etc. The preliminary matrix acts as a
discussion catalyst at the first team meeting. After assembling the assessment list, sort
it into those that are imperatives (or musts) and those that are desirables (or wants).
The goal of this step is to ensure that affected parties are adequately represented. It
is better for a group to decline participation than to be overlooked in the team
assembly process. The team is composed of representatives from each group
impacted by the decision being made (it must be cross-functional and multidisci-
plinary). Team size varies depending on the subject and scope of the project (initial
meetings should include no more than nine to twelve people). Team membership is
based on contribution potential not approval needs. Approval takes place during the
presentation of results at the end of the process.
Although all team members will play a role in the trade-off study process, three key
positions must be filled to ensure process success, as follows:
Team champion
Is usually a program manager, project manager, or someone empowered
by those individuals to carry out the selection of an alternative
Is the individual who must design, build, or approve the selected alternative
Supports and participates in the process, and accepts (and backs) the
team’s consensus decision
Provides the resources to accomplish the task at hand
Lead facilitator
Guards against duplicating efforts, provides information to individuals be-
tween meetings, generally coordinates the entire process
Ranking teams are designed to evaluate each alternative within a particular category
or assessment item. Individuals are assigned to these teams based on their particular
specialty. For example, a transmission design engineer would be assigned to the task
of assessing (or ranking) an alternative’s ability to handle a particular level of input
torque; an individual from marketing would determine an alternative’s potential
volumes; a financial expert may be assigned to develop marginal costs, and so on.
After the core team modifies and approves the preliminary matrix, determine
the personnel necessary for conducting the ranking within each category. Assign
team members to ranking teams based on their specialty or to the category that
affects their area/product. Due to the critical aspect of the ranking teams, team
members require specific direction on:
1. Ranking/evaluation methodologies
2. Documentation format and content
3. Reporting their findings, conclusions, and issues
As each ranking team delves into the ranking process, the team members may
expand or contract the discriminator list to better evaluate the alternative’s
performance for a given assessment item.
Each team selects the ranking method it feels is appropriate and is defensible
to the core team.
Whenever possible, the rankings should be based on actual test results, CAE
analysis, or numerical analysis (i.e., facts or directly observable data).
Expert opinion and subjective rankings should be used as a last resort when
time, cost, knowledge, or value limit the reliance on tests, rigorous analysis,
or computer simulation.
An alternative that fails in achieving any assessment item defined as a project
must is eliminated from the evaluation. The ranking team informs the lead
facilitator who stops further evaluation of that alternative by all other
ranking teams (this is done to limit resource expenditure).
When ranking the alternatives, the one (or more) alternatives best satisfying
a particular discriminator get the highest numerical rank for that discrimi-
nator. This is usually done by counting the number of alternatives and using
that value as the highest rank (i.e., with four alternatives, the best would
receive a rank of four, the next a three, and so on). Alternatives do not have
to be force ranked, nor does one have to receive the top score. If all or
some of the alternatives have equal ability to satisfy the discriminator, they
would receive equal rank. If that ability is high, they would all receive a
top rank; if that ability is poor, they all may receive a low rank.
The weighting rule: Assign weightings according to the assessment item’s impor-
tance or impact on satisfying the customer and company needs/requirements and
ensuring the optimum decision (for this point in the program).
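A minimal sketch of how rankings and weightings combine into an overall score for each alternative follows; the assessment items, weights, and ranks are hypothetical. The same structure also supports the "what if" sensitivity questions discussed later, simply by editing a weight or rank and recomputing.

```python
# Hypothetical assessment items, weightings, and alternative ranks.
weights = {"input torque capacity": 5, "marginal cost": 3, "projected volume": 2}
ranks = {
    "Alternative 1": {"input torque capacity": 3, "marginal cost": 1, "projected volume": 2},
    "Alternative 2": {"input torque capacity": 2, "marginal cost": 3, "projected volume": 3},
    "Alternative 3": {"input torque capacity": 1, "marginal cost": 2, "projected volume": 1},
}

def weighted_score(alternative):
    """Weighted sum of the alternative's ranks over all assessment items."""
    return sum(weights[item] * ranks[alternative][item] for item in weights)

for alt in ranks:
    print(alt, weighted_score(alt))  # Alternative 2 scores highest (25)
```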
With key personnel developing the weightings in parallel to the ranking process,
we are able to:
Now that the weightings have been assigned, the next step is for the lead facilitator
to compile the evidence book. There are several steps to this process:
Once each ranking team has reported out, the lead facilitator develops a summary
of the evidence book for distribution during the final presentation. This summary
contains:
When each ranking team has reported its results to the lead facilitator, the process
coordinator reassembles the core team for a presentation of the results. Copies of
the summary document are distributed to each core team member three to five days
prior to the meeting. Each ranking team then presents its findings, methods, and
issues to the entire team.
Sensitivity Analysis: The purpose of the sensitivity analysis is to determine the
robustness of the selected alternative. The process allows the group to ask various
“what if ” questions regarding a particular ranking or weighting and receive an
immediate answer, such as:
“What if ” the ranking of that assessment category were inverted, would the
alternative still be chosen?
“What if ” the weighting of that assessment item were lowered, would it change the selection?
It is recommended that a laptop computer, loaded with the trade-off study matrix,
be brought to the presentation by the lead facilitator. Modifications can be made to
the rankings within an assessment item or weightings on a category to see how that
would affect the overall decision. This will identify how sensitive the decision is to
certain changes and give the group an immediate feel for the selected alternative’s
robustness.
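To make the mechanics of such a sensitivity check concrete, here is a minimal sketch in Python (the alternatives, assessment items, ranks, and weightings below are illustrative assumptions, not values from the text). It computes each alternative's weighted score as the sum of rank times weighting and then re-scores after a "what if" change to one weighting:

# Hypothetical trade-off study matrix: a rank per assessment item for each
# alternative, and a weighting per assessment item (all values assumed).
ranks = {
    "Alt A": {"cost": 4, "durability": 2, "serviceability": 3},
    "Alt B": {"cost": 2, "durability": 4, "serviceability": 4},
    "Alt C": {"cost": 3, "durability": 3, "serviceability": 2},
}
weights = {"cost": 5, "durability": 3, "serviceability": 2}

def weighted_scores(ranks, weights):
    """Weighted score = sum of (rank x weighting) over all assessment items."""
    return {
        alt: sum(item_ranks[item] * weights[item] for item in weights)
        for alt, item_ranks in ranks.items()
    }

print(weighted_scores(ranks, weights))
# {'Alt A': 32, 'Alt B': 30, 'Alt C': 28}  -> Alt A selected

# "What if" the weighting of cost were lowered from 5 to 2?
print(weighted_scores(ranks, dict(weights, cost=2)))
# {'Alt A': 20, 'Alt B': 24, 'Alt C': 19}  -> the selection flips to Alt B,
# so the decision is sensitive to the cost weighting.

Loading the same matrix into a spreadsheet or a short script such as this is what lets the group see the effect of a changed rank or weighting immediately during the presentation.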
Here is a typical trade-off study process checklist:
GLOSSARY OF TERMS
Alternative rank — Defines how well an alternative compares to other alter-
natives in achieving a particular assessment item or discriminator.
Assessment item — A particular attribute of a given category.
SELECTED BIBLIOGRAPHY
Bain, L., Statistical Analysis of Reliability and Life Testing Models: Theory and Methods,
Marcel Dekker, New York, 1991.
Hubka, V., Principles of Engineering Design, Butterworth Scientific, London, 1982.
Kapur, K.C. and Lamberson, L.R., Reliability in Engineering Design, John Wiley & Sons,
New York, 1977.
COST OF QUALITY
The purpose of costs in quality is to establish “the method” for collecting, main-
taining, and using quality cost data so that they become the conscience (the driving
force) of the organization for continual improvement. Once this conscience has been
realized, then a real effort is put in place in the area of quality improvement opportunity (QIO) for the quality audit, product and process procedures, and the overall system.
[Figure: production flow for cost of quality (CQ). Supplier inputs pass through receiving QC and in-process QC to final product output, with reject, scrap, or rework loops before acceptance.]
This is based on the notion that “quality” is defined as satisfying the customer’s
needs. How does the cost of quality/quality improvement opportunities satisfy the
customer’s needs through the manufacturing organization? See Figures 10.1 and 10.2.
Standard Cost
The organization should develop a method that will allow for efficiency in labor
(direct and indirect); identification of material content (parts and components);
appropriate measures of overhead; and documentation/development, review, and
revision of these standards.
Actual Costs
A supplier should maintain an accurate system to record, monitor, and control labor,
material and overhead costs. For example:
[Figure: cost expectations for continual improvement. A cost monitoring system and cost reduction efforts drive standard cost and actual cost toward continual improvement and competitive product development.]
Variance
A supplier should be able to identify and control cost variances. For example:
The organization and the supplier should cooperate fully with each other in an effort
to reduce costs. Continuous efforts to reduce costs and therefore selling price are
essential for the organization and the supplier, if both are to remain competitive in the marketplace. For a supplier to reduce costs, the efforts should be directed in
the following areas:
J. Juran
W.E. Deming
Perhaps one of the most prolific quality gurus of the 20th century, Deming spent a lifetime presenting cost as one of the driving forces in a dynamic organization, repeatedly stressing the need to end the practice of awarding business on the basis of price tag alone, to eliminate numerical goals, and to eliminate work standards.
P. Crosby
Crosby was by far the best salesperson of quality. He was the first to associate the bottom line with the effect of quality costs. He made a point of differentiating the cost of conformance from the cost of nonconformance, quantifying the waste of poor quality as a percent of sales, pushing the concept of zero defects, and stressing prevention rather than appraisal.
G. Taguchi
Taguchi's contribution to the cost of quality lies in tolerance design, in cost accountability for specification setting, and in the loss function.
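For orientation, the loss function referred to here is conventionally written as a quadratic penalty for deviation of the quality characteristic y from its target T, with the constant k set from the loss at the specification limit; this standard form is given for reference and is not quoted from the text:

\[
L(y) = k\,(y - T)^{2}, \qquad E[L] = k\,\bigl[\sigma^{2} + (\mu - T)^{2}\bigr]
\]

where \(\mu\) and \(\sigma^{2}\) are the mean and variance of the characteristic in production.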
• Quality reporting
• Improvement projects
• Other prevention costs (general office expenses)
The outputs are made up of the internal failure costs. These result when quality issues are discovered within the organization, before the product reaches the customer. They include
• Scrap
• Rework
• Retest
• Downtime
• Yield losses
• Disposition
• Failure analysis
• Fault of suppliers
Another component of the outputs is the external failure costs, which are
• Complaint adjustment
• Returned material
• Warranty charges
• Allowances
• Repair
• Errors
• Liability
2. The use of cost of quality can be quantified by giving attention to, prioritizing, justifying, and recognizing quality costs and by driving decision making deeper into the organization. Screening the costs throughout the organization occurs through:
a. Analyzing the ingredients of established accounts
b. Resorting to basic accounting documents
c. Creating records for documentation
d. Estimating costs using statistical tools
The analysis of these four categories can be performed by various means,
i.e., through descriptive statistics, graphical techniques, or advanced
statistical analyses.
Here we must emphasize that cost of quality is not a system that encourages, fortifies, or perpetuates an adversarial position of one department against another or one company against another. Rather, it is a system that allows management to look at a specific situation compared against itself over time. The ultimate in this thinking is planning for growth. The relationship among CQ, planning, growth, and quality is heavily dependent upon management's attitude and the employee involvement improvement opportunity of a given organization. The underlying assumption of this concept is that as one controls quality, one reduces cost
and this increases profit. This assumption of CQ is important because it
becomes the catalyst that causes management to address the issue of
quality. Profit is the universal language of all management. The ques-
tion becomes: “What does CQ provide to management that serves as a
significant indicator to them?” It provides:
This means that an additional dollar invested in prevention will produce exactly one
dollar’s worth of reduced failure costs. Below the optimum it provides more than
one dollar and above the optimum the opposite is true. Therefore:
The optimum (minimum) quality cost could lie at zero defects, q = 100%, if the
incremental cost of approaching zero defects is less than the incremental return from
the resulting improvement. Juran asserts that prevention costs rise asymptotically,
becoming infinite at 100% conformance. This implies that the incremental cost is
also infinite. Since the incremental return is not, it follows from his assertion and
the above mathematics that the optimum lies below 100%. The question is: Does it
really take infinite investment to reach zero defects?
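The argument can be sketched in conventional cost-of-quality notation (the symbols are introduced here for illustration only and are not taken from the text). Writing C_P(q) for prevention (and appraisal) spending and C_F(q) for failure cost at conformance level q, the optimum is the point where one more dollar of prevention buys exactly one dollar of failure-cost reduction:

\[
\frac{d\,C_{P}(q)}{dq} \;=\; -\,\frac{d\,C_{F}(q)}{dq}
\qquad\Longrightarrow\qquad
q^{*} \;=\; \arg\min_{q}\;\bigl[\,C_{P}(q)+C_{F}(q)\,\bigr]
\]

If, as Juran asserts, dC_P/dq grows without bound as q approaches 100%, the minimum must fall short of zero defects; if it stays finite, nothing in the mathematics prevents the optimum from lying at q = 100%.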
For Crosby, on the other hand, the cost of quality bases are:
COMPLAINT INDICES
The user costs associated with failures can be grouped into five categories.
R = repair cost
E = effectiveness loss (idle labor)
C = extra capacity required because of product downtime
D = damage caused by failure
L = lost income (profit)
If these costs are measured each year over the life of the product, then the failure
cost (Cf) is
\[
C_f \;=\; \sum_{j=1}^{L} (1+i)^{-j}\,\bigl(R_j + E_j + C_j + D_j + L_j\bigr)
\]

where the sum runs over the years j of the product's life (L years) and i is the annual interest (discount) rate.
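A minimal numerical sketch of this present-value roll-up in Python (the yearly cost figures and the 8% rate are illustrative assumptions only):

def user_failure_cost(yearly_costs, interest_rate):
    """Cf = sum over years j of (R + E + C + D + L)_j discounted by (1+i)^-j."""
    return sum(
        (repair + effectiveness + capacity + damage + lost_income)
        / (1.0 + interest_rate) ** year
        for year, (repair, effectiveness, capacity, damage, lost_income)
        in enumerate(yearly_costs, start=1)
    )

# Three years of assumed user costs (R, E, C, D, L), in dollars per year:
costs = [(200, 50, 0, 0, 100), (350, 80, 40, 0, 150), (500, 120, 60, 25, 200)]
print(round(user_failure_cost(costs, 0.08), 2))   # about 1574 dollars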
2. Defect matrices
3. Cost analysis
4. Spare parts use growth curves
5. Probability paper
6. Simulation studies
7. Statistical modeling
a. Abnormality control chart. The abnormality chart is a chart that
addresses the following questions/concerns:
(The chart header records: Date, Place, Lot number, Item name, Found by, and Description.)
i. How did it happen?
ii. How was it found?
iii. Emergency measure taken
iv. Investigation of the causes
v. Cause(s)
vi. Measures taken to prevent recurrence
vii. How will these measures affect similar processes?
viii. How to proceed next?
Bases for comparison and the question each answers:
Engineered: studies made by engineers (e.g., material usage, labor hours). Question answered: Are we attaining the results that the engineering studies showed were obtainable?
Historical: statistical computation of past performance. Question answered: Are we getting better or worse?
Market: market studies to discover the performance of competitors. Question answered: Where do we stand compared to our competitors?
Planned: broad program of final results needed and allocation to subprograms (e.g., overall reliability goals). Question answered: Are we going to be able to attain the planned goal?
TABLE 10.4
Typical Monthly Quality Cost Report (Values in Thousands of Dollars)

                                                 October             Year to Date
Category                                      Actual  Variance     Actual  Variance
A. Prevention Cost
  1. Quality engineering                        18.3      3.2       190.1     10.1
  2. Design and development                      4.6      0.6        61.8      7.5
  3. Quality planning by others                  2.6      0.9        20.7      7.3
  4. Quality training                            7.3      2.1        46.8     20.3
  5. Other                                       2.4      3.4        31.2     25.0
  Total prevention cost                         35.2     10.2       350.6     55.2
  % of total quality cost                        7.7%                 9.4%
B. Appraisal Cost
  1. Inspect and test incoming materials         9.6      1.8        87.3      7.1
  2. Inspection and test                        32.5     15.4       323.0    105.0
  3. Product quality audits                     14.1     27.4       140.9    269.7
  4. Materials and services consumed             1.4      1.1        16.5      8.8
  5. Equipment calibration and maintenance       4.1      1.6        23.4      0.0
  Total appraisal cost                          61.7      9.7       591.1    166.4
  % of total quality cost                       13.5%                15.9%
C. Internal Failure Cost
  1. Scrap                                      14.6    124.3        50.0      8.0
  2. Rework                                    197.2      8.1      1305.6    557.6
  3. Failure analysis                           25.2      2.3       185.1      0.4
  4. Reinspection                                6.8      6.6        88.0      3.0
  5. Fault of supplier                          14.1      0.2       152.1     77.2
  6. Downgrading                                 0.8                   8.1      1.9
  Total internal cost                          258.7    129.9      1788.9    621.5
  % of total quality cost                       56.4%                48.1%
D. External Failure Cost
  1. Complaints                                  8.6      1.6        75.3      5.3
  2. Rejected and returned                      41.8      1.2       403.6     26.4
  3. Repair                                     25.6      0.3       256.5      3.5
  4. Warranty charges                           21.9     27.0       226.6    263.4
  5. Errors                                      4.9      4.0        28.5     10.2
  6. Liability                                   0.0      0.0         0.0      0.0
  Total external cost                          102.8     30.3       990.5    291.2
Total operating cost                           458.4     79.7      3721.1    108.7
Measurement Bases
  1. Direct labor ($/man-hour)                   6.5                  5.3
  2. Sales (%)                                   8.8                  9.0
  3. Manufacturing costs (%)                    16.7                 16.3
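As a small illustration of how the percentage rows of such a report are derived, the October category totals shown above can be rolled up as follows (a sketch, not code from the source):

# October category totals from the report, in thousands of dollars.
totals = {
    "prevention": 35.2,
    "appraisal": 61.7,
    "internal failure": 258.7,
    "external failure": 102.8,
}
total_quality_cost = sum(totals.values())            # 458.4
for category, cost in totals.items():
    print(f"{category}: {100 * cost / total_quality_cost:.1f}% of total quality cost")
# prevention 7.7%, appraisal 13.5%, internal failure 56.4%, external failure 22.4%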
DATA SOURCES
Typical sources for cost of quality (CQ) are:
INSPECTION DECISIONS
TABLE 10.5
Prevention Costs
(Cost element, description/definition, and where to obtain or how to calculate the cost.)

2. Test and inspection planning: Costs of planning and procuring/developing test and inspection equipment (excluding actual equipment costs, which are part of appraisal costs); development costs for test and inspection processes. Sources: department budget reports (allocated), purchase orders, estimates.
TABLE 10.6
Appraisal Costs
(Cost element, description/definition, and where to obtain or how to calculate the cost.)

1. Incoming and receiving inspection and test: All costs of inspectors, supervision, lab, and clerical personnel working on incoming material; includes costs to visit or station personnel at supplier locations. Sources: department budgets (allocated), process sheet standards, inspection sheet standards, estimates.
2. In-process inspection and test: Salaries and associated costs of all staff performing in-process inspection and testing, either 100% or sampling; includes materials consumed during tests. Sources: same as #1.
3. Test and inspection equipment: Costs of test, inspection, and lab equipment; equipment maintenance and purchased services are also included. Sources: department budgets (allocated), purchase orders/maintenance contracts, estimates.
4. Product quality reviews: Personnel expenses for performing quality reviews on in-process or finished products. Sources: department budgets (allocated), estimates.
5. Field performance evaluations: Costs incurred in field testing for product acceptance at a customer's site, prior to releasing the product. Sources: field inspection reports, department budgets (allocated), estimates.
6. Other appraisal costs: All other appraisal costs not specifically covered elsewhere. Sources: estimates, adjustments (including negative costs).
TABLE 10.7
Internal Failure Costs
(Cost element, description/definition, and where to obtain or how to calculate the cost.)

1. Rework and repair (internal fault, supplier fault): Costs of reworking defective product; includes costs associated with the review and dispositioning of non-conforming purchased products. Sources: cost accounting reports, defective material reports, department budgets (allocated), estimates.
2. Scrap: All scrap losses incurred from defective purchased materials/products and incorrectly performed manufacturing operations; costs charged to suppliers are not included; scrap value, less handling charges, may be included as an offset. Sources: salvage reports, defective material reports, estimates.
3. Troubleshooting and failure analysis: Costs incurred in analyzing non-conforming product to determine causes. Sources: department budgets (allocated), problem reports.
4. Reinspect and retest: Costs to reinspect or retest products that previously failed. Sources: department budgets (allocated), estimates.
5. Excess inventory: Inventory costs resulting from producing defective products; includes storage of defective product and added inventory of good product to cover production shortfalls. Sources: cost accounting reports, department budgets (allocated), estimates.
6. Design and process changes: Costs to revise a product or process due to production of defective product. Sources: estimates.
7. Other internal failure costs/offsets: All other costs related to the production of defective product not specifically included elsewhere. Sources: estimates, adjustments.
TABLE 10.8
External Failure Costs
(Cost element, description/definition, and where to obtain or how to calculate the cost.)
3. Review each operation for the type of cost incurred; appraisal (checks,
reviews, etc.), internal failure (blueprint errors, incomplete forms, etc.).
4. Talk to individual employees to define further what goes wrong at each
operation; redundant operations, misfiling, improper direction, delays, etc.
5. Identify and quantify failures at each operation and their effect on subse-
quent operations.
6. Use the existing financial system to assign the cost of labor and material
to each operation.
7. Calculate the cost of losses associated with the items identified in steps
4 and 5.
8. Sum these costs to obtain the total cost of quality within the process.
9. State this cost as a fraction of the total cost of the process or as a dollar
amount that represents the opportunity for improvement in the process.
10. Ensure continuous improvement through ongoing process analysis (plan,
do, check, act).
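Steps 6 through 9 of this procedure amount to a simple roll-up of operation-level losses; a minimal sketch in Python (the operation names and dollar figures are assumptions for illustration only):

# Hypothetical quality costs found at each operation of one office process,
# classified by cost type (dollars per month).
operation_costs = [
    ("review incoming forms", "appraisal", 1200),
    ("correct blueprint errors", "internal failure", 3400),
    ("re-key rejected computer input", "internal failure", 800),
    ("handle customer complaints", "external failure", 2100),
]
total_process_cost = 25000   # total monthly cost of running the process

cost_of_quality = sum(cost for _, _, cost in operation_costs)
print(f"Cost of quality: ${cost_of_quality:,}/month")
print(f"Fraction of total process cost: {cost_of_quality / total_process_cost:.1%}")
# $7,500/month, or 30.0% of the process cost: the improvement opportunity to be
# attacked through ongoing plan-do-check-act cycles.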
Procedure
Examples
1. Accounting
• Percent of late reports
• Computer input incorrect
• Errors in specific reports as audited
• Percentage of significant errors in reports; total number of reports
• Percentage of late reports; total number of reports; average reduction
in time spans associated with important reports
• Pinpointing high-cost manufacturing elements for correction
• Pinpointing jobs yielding low or no profit for correction
• Providing various departments with the specific cost tools they need
to manage their operations for lowest cost
2. Administrative
• Success in maximizing discount opportunities through consolidated
ordering
• Success in eliminating security violations
• Success in effecting pricing actions so as to preclude subsequent
upward revisions
• Success in estimating inventory requirements
• Success in responses to customer inquiries so as to maximize customer
satisfaction
3. Clerical
• Accurate typing, spelling, hyphenation
• Decimal points correctly placed
• Correct calculations in bills, purchase orders, journal entries, payrolls,
bills of lading, etc.
• Time spent in locating filed material
• Percentage of correct punches
• Paper used during a given period versus actual output in finished pages
4. Data processing
• Keypunch (KP) cards thrown out for error
• Computer downtime due to error
• Rerun time
• Promptness in output delivery
• Effectiveness of scheduling
• Depth of investigations by programmers
• Program debugging time
• KP (data entering) efficiency
5. Engineering: design
• Adequacy of systems specifications
• Accuracy of system block diagrams
• Thoroughness of system concepts
• Simulation results compared to original design or prediction
• Success in creating engineering designs that do not require change in
order to make them perform as intended
• Success in developing engineering cost estimates versus actual accruals
• Success in meeting self-imposed schedules
• Success in reducing drafting errors
• Success in maximizing capture rates on RFPs for which the company
was a contender
• Success in meeting engineering test objectives
• Number of error-free designs
• Assets control
• Minimizing capital expenditures
• Realistic budgets
• Clear and concise operating policies; timely submission of realistic cost proposals
• Completeness of financial reports
• Effectiveness of disposition of government property
• Effectiveness of cost negotiations
10. Legal
• Amount of paper used versus finished pages produced
• Misdelivered mail
• Misfiled documents
• Delays in execution of documents
• Teletype errors
• Patent claims omitted
• Response time on request for legal opinion
11. Management
• Output of staff elements, overall defects rates, budgets and schedule
controls, and other factors that reflect on managerial effectiveness (In
other words, the accomplishments of a manager are the sum totals of
those working under him or her)
• Success in developing estimates of costs versus actual accruals
• Success in meeting schedules
• Performance record of employees under the manager’s supervision
• Success in developing realistic estimates on a PERT or PERT/cost chart
• Success in minimizing use of overtime operations
• All nonproduction departments can be measured
• Each department should be measured against itself, using time com-
parisons, and preferably by itself.
• The best primary goals are those that measure cost performance, delivery performance, and quality performance of the department. Secondary goals can be derived from these primary goals.
• There should be a base against which quality, cost or delivery perfor-
mance can be measured as a percentage improvement. Examples of
such a base would be direct labor, the sales dollar, the material dollars,
or the budget dollar. A dollar base is more meaningful to management
than a physical quantity of output.
• Success in effecting pricing actions so as to preclude subsequent revi-
sions.
• Pages of data compiled with no defects
• Clarity and conciseness of operating procedures
• Evaluations of capital investment
• Errors in applying standards on process sheets
• Accuracy of estimates; actual costs versus estimated costs
• Effectiveness of work measurement programs
12. Marketing
• Success in reduction of defects through suggestion submittal
• Success in capturing new business versus quotations
• Responsiveness to customer inquiries
• Accuracy of marketing forecasts
• Response from news releases and advertisements
• Effectiveness of cost and price negotiations
• Success in response to customer inquiries (customer identification)
• Customer liaison
• Effectiveness of market intelligence
• Attainment of new order targets
• Operation within budgets
• Effectiveness of proposals
• Exercise of selectivity
• Control of cost of sales
• Meeting proposal submittal dates
• Timely preparation of priced spare parts list
• Aggressiveness
• Utilization of field marketing services
• Dissemination of customer information
• Bookings budget met
• Accuracy of predictions, planning and selections
• Accurate and well-managed contracts
• Exploitation of business potential
• Effectiveness of proposals
• Control of printing costs
• Application of standard proposal material
• Standardization of proposals
• Reduction of reproduction expense
• Contract errors
• Order description error
• Sales order errors
13. Material
• Savings made
• Late deliveries
• Purchase order (PO) errors
• Material received against no PO
• Status of unplaced requisitions
• Orders open to government agency for approval
• Delays in processing material received
• Damaged or lost items received
• Claims for products damaged after shipment from our plant
• Delays in outbound shipments
• Complaints of improper packing in our shipments
• Errors in travel arrangements
b. Special review
c. Tool/equipment control
d. Preventive maintenance
e. ZD program
f. Identify incorrect (zero defect) specifications/drawings
g. Housekeeping
h. Controlled overtime
i. Checking labor
j. Trend charting
k. Customer source inspection
l. First piece inspection
m. Stock audits
n. Certification
Manufacturing: price of nonconformance
a. Rework
b. Scrap
c. Repair and return expenses
d. Obsolescence
e. Equipment/facility damage
f. Repair equipment/material
g. Expense of controllable absence
h. Supervision of manufacturing failure element
i. Discipline costs
j. Lost time accidents
k. Product liability
Quality control: price of conformance
a. Quality training
b. Test planning
c. Inspection planning
d. Audit planning
e. Product design review
f. Supplier qualification
g. Producibility/quality analysis review
h. Process capability studies
i. Machine capability studies
j. Calibration of quality equipment
k. Operator certification
l. Incoming inspection
m. In-process inspection
n. Final product inspection
o. Product test
p. Product audit
q. Test equipment
r. Checking gauges, fixtures, etc.
s. Prototype inspection
For the Poisson model of defect occurrence, the probability of observing exactly r defects when the expected number of defects is np (or, normalized per unit, d/u) is

\[
Y \;=\; \frac{(np)^{r}\,e^{-np}}{r!}
\qquad\text{or}\qquad
Y \;=\; \frac{(d/u)^{r}\,e^{-d/u}}{r!}
\]

Under special conditions, such as normalizing per unit, d/u = np/u. Therefore, if we substitute terms for the normalizing case where u = 1 and the special case where r = 0 (remember that 0! = 1), we are able to reduce the above formulas to

\[
Y \;=\; e^{-d/u}
\]
The reader will notice that this equation really reflects the first time yield (YFT)
for a specific d/u. Of course, if we know the first time yield we can solve for d/u
with the following formula:
d/u = –ln(YFT)
\[
Y \;=\; \frac{m!}{r!\,(m-r)!}\; p^{r} q^{\,m-r}
\]
This of course, becomes Y = (1 – p)m for the special case of r = 0 where p is the
constant probability of an event and q = 1 – p.
In dealing with COQ issues, as Grant and Leavenworth (1980) have pointed out,
the Poisson approximation may be applied when the number of opportunities for
non-conformance (n) is large and the probability (p) of an event (r) is small. In fact,
as n increases and r decreases, the approximation by the Poisson model improves.
Furthermore, we can use COQ issues and information to generate or double-
check the validity of the critical to quality characteristic (CTQ) as well as the critical
to process characteristic (CTP). Above all, we are capable of measuring the classical
perspective of yield. The traditional formula, which is based on process capability, is
\[
Y_{\text{final}} \;=\; \frac{S}{U}
\]
where Yfinal = final yield; S = number of units that pass; and U = number of units
tested.
Another way to view yield is to calculate the rolled throughput, which is
\[
Y_{rt} \;=\; e^{-dpu}
\qquad\text{or}\qquad
Y_{rt} \;=\; Y_{tp,1}\times Y_{tp,2}\times\cdots\times Y_{tp,m},
\]

or the normalized variation

\[
Y_{norm} \;=\; \bigl(Y_{rt}\bigr)^{1/m}
\]
where Ynorm is the normalized yield; Yrt is the rolled throughput yield; m is the
number of categories; tp is the throughput yield of any given category; and dpu is
defects per unit.
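These yield relationships are easy to exercise numerically; the following short Python sketch (the example numbers are assumed, not from the text) computes first-time yield from defects per unit, recovers d/u from a yield, and rolls up throughput and normalized yield over several categories:

import math

def first_time_yield(dpu):
    """Y_ft = e^(-d/u) for a given defects-per-unit level."""
    return math.exp(-dpu)

def dpu_from_yield(y_ft):
    """Invert the relation: d/u = -ln(Y_ft)."""
    return -math.log(y_ft)

def rolled_throughput_yield(category_yields):
    """Y_rt = product of the throughput yields of the m categories."""
    y_rt = 1.0
    for y in category_yields:
        y_rt *= y
    return y_rt

def normalized_yield(category_yields):
    """Y_norm = (Y_rt)^(1/m)."""
    return rolled_throughput_yield(category_yields) ** (1.0 / len(category_yields))

print(round(first_time_yield(0.05), 4))                        # 0.9512
print(round(dpu_from_yield(0.9512), 4))                        # ~0.05
print(round(rolled_throughput_yield([0.95, 0.98, 0.97]), 4))   # 0.9031
print(round(normalized_yield([0.95, 0.98, 0.97]), 4))          # 0.9666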
REFERENCES
Grant, E.L. and Leavenworth, R.S., Statistical Quality Control, McGraw-Hill, New York, 1980.
SELECTED BIBLIOGRAPHY
Aaron, M.B., Measure for Measure: Alternative to Goodness, Motherhood & Morality, paper presented at Automach Australia for SME, 1986.
Besterfield, D.H., Quality Control, Prentice Hall, Englewood Cliffs, NJ, 1979.
Carlsen, R.D., Gerber, J., and McHugh, J.F., Manual of Quality Assurance Procedures and Forms, Prentice Hall, Englewood Cliffs, NJ, 1981.
Ford Motor Co., Team Oriented Problem Solving, Electrical & Electronics Div., Rawsonville, MI, 1987.
Juran, J.M. and Gryna, F.M., Quality Planning and Analysis, 3rd ed., McGraw-Hill, New York, 1993.
Schneiderman, A.M., Optimum Quality Costs and Zero Defects: Are They Contradictory Concepts?, Quality Progress, Nov. 1986, pp. 23–27.
REENGINEERING
Reengineering by definition is a drastic change of the process. However, if the process
changes, then the job/task regarding that process must be changed as well. This section
addresses the approach and method for reengineering only from the process perspective.
For more details see Stamatis (1997) and Selected Bibliography. The discussion will
focus on drastic changes as well as developmental changes for the process. Evolutionary
changes in process are addressed by statistical process control charting and other
monitoring methods and are beyond the scope of this volume (Volume IV of this series
covers this topic). Both approaches to redesign merge the viewpoints of management
and labor, resulting in more job satisfaction and productivity.
Drastic changes are taking place across the corporate world in the areas of com-
munication practices, corporation cultures, and productivity. These changes are the
result of increased employee awareness, an advanced level of technology, competition,
mergers, greater demand for quality, and, in general, increasing business costs.
These changes have forced management to respond in several ways, including
asking employees (union and non-union) for their help. This initiative by management
has resulted in participative programs such as teamwork and as of late redesigning the
actual job or process. As a result of these changes, trust and open communication are
cultivated and encouraged. Information sharing, as well as moving responsibility and
accountability to employees themselves, is a common occurrence.
This employee participation has generated a need for both job and process
redesign so that an organization may be more competitive in the world markets as
well as more efficient in producing its product or service.
PROCESS REDESIGN
A process may change in an evolutionary form or in a very drastic way (Chang, 1994). When a process changes in an evolutionary form, it may be because of statistical process control monitoring or some other kind of monitoring method.
Under this condition, the redesign process takes the form of a problem-solving
approach. A typical approach is the following:
Step 1. Reason for improvement — why is there a need for change?
Select appropriate and applicable measures and targets.
Determine any gaps.
Step 2. Define problem — Whenever possible, stratify the improvement area.
Look for root cause rather than symptoms.
Step 3. Analysis — Verify root cause.
Step 4. Solution(s) — Determine alternatives.
Select best solution.
Step 5. Result(s) — Verify and evaluate the elimination of the root cause by
asking:
Are we better off?
Are we worse off?
Are we the same as before?
Step 6. Implementation — Review the control plan. Change it if appropriate
and applicable.
Standardize the process.
Replicate.
When the change is a redesign rather than an incremental improvement, the last two steps instead become:
Step 5. Build the “new” process — Verify the process performance and effectiveness.
Step 6. Implementation — Develop the “new” control plan.
Standardize the “new” process.
Replicate.
If, on the other hand, a process changes because of reengineering efforts, then
this process is developed in four stages. The four stages are:
Stage 1. Recognition for change. One of the first things that the team has to
recognize is that the status quo is about to change. Once the realization
sets in, then a formal analysis of what has to change must be performed
and the intentions of that change must be communicated throughout the
organization to those who are or will be involved. As part of the commu-
nication effort, the context of the change and the operating principles will
also be communicated.
Stage 2. Change content definition (formulation). In this stage, a process map
is developed, so that the process targets and objectives may be declared.
(In some cases, two process maps are developed as needed. One represents
the old process and the second represents the new process. This is done for
comparison purposes.) This stage begins the process analysis (tasks and
jobs required) and determines the process changes and the new owners.
Perhaps one of the most important aspects of this stage is the formulation
of the baseline production requirements such as capacity, cycle time, pro-
ductivity, efficiency, and quality requirements. In some cases, this is the
stage where a pilot study will be designed.
Stage 3. Change implementation. This is the stage where most of the tedious
work has to occur. Specifically, the controls are designed for the new
process, and a systematic analysis is performed to identify potential mod-
ification points and to eliminate non–value adding steps. A formal value
analysis and FMEA may be performed in this stage to identify areas of
opportunity and possible restructuring.
Stage 4. System maintenance. This is the final stage of the reengineering
process. This is where the old system officially is declared obsolete and
the new system is installed with all the new structures, modifications, waste
reductions, and new targets of production.
This four-stage model of the process redesign identifies the general elements of the
change. To complete the discussion, however, we must also address the specific tasks
that the team leader (project manager) must perform and how the team members will
respond. Table 10.9 summarizes these seven steps to the implementation process.
TABLE 10.9
Seven-Step Process Redesign Model

1. Prime action of team leader: Introduce the process change, with full disclosure about the project to all concerned. Support action: Kick-off by the management team, who will be responsible to the team and the team leader for follow-up.
2. Prime action: Develop the strategy for implementation. Support action: The strategy is incorporated into the business plan, and the team leader acts as both change agent and support member.
3. Prime action: Perform appropriate and applicable training of the team members. Support action: Managers and team leader support the team for the change; they provide encouragement, coaching, and resources as needed.
4. Prime action: Follow up with both managers and employees to develop team-level information, meetings, reports, problem resolution structure, and whatever else is necessary. Support action: Managers and team leader define the system of the "change"; they provide the appropriate support as needed.
5. Prime action: Follow up on the reports and the measurement element of this stage; evaluation of the results is also important. Support action: The team leader conducts meetings and improves quality figures; interpretation of this information flows upward systematically.
6. Prime action: Help managers with problem resolution. Support action: Team leader and managers set the first set of performance targets.
7. Prime action: Full integration of process redesign. Support action: Team leader reports progress; audit(s) may be conducted in order to verify targets and/or modify the process or the targets.
a. Analysis of current state. Here the core process is studied and evaluated
rather than what people do.
b. Creation of the ideal state. Here the group activity is focused on
generating a conducive environment for breakthrough thinking.
3. The social conference. In this stage of the conference the social system
of the organization is evaluated. The elements upon which the evaluation
is based are structure, skills, style, symbols, and human systems. Each
element is influenced by changes in the environment and must be aligned
with the organization's vision, values, and technical system. The goal of
this stage is to generate a design that is the “best.”
Once the minimum requirements are established, then the reengineering team
is ready to implement the method for change. The success of the OOAD will depend
upon the implementation of the following ten steps:
Many executives who so enthusiastically embrace six sigma do not really know
what they are getting into, and that is a guarantee of trouble downstream. For many,
six sigma methodology has become a corporate panacea, a silver bullet of sorts, and
that spells unrealistic expectations and eventual disillusionment. More importantly,
many companies are discovering that there is a large gap between learning the
techniques of six sigma and realizing its benefits. Some are falling prey to six sigma-
itis, whose symptoms are a vast number of uncoordinated projects that do not support
critical corporate goals or customer functionality. If these problems are not addressed
quickly, six sigma will become just another corporate fad, companies will not benefit
from its power, and institutional cynicism will get yet another boost. We have been
down this road before, and we do not need to go down it again.
Executives of these companies need a better understanding of what six sigma
is and what it is not, what it can do and what it cannot. With six sigma’s focus on
problem identification and resolution, can it create breakthrough process designs?
Does it have the power to address so-called “big P” (big picture) processes that cross
functional boundaries? Is its popularity at least in part a reflection of the fact that
six sigma does not shake up an organization, which might make it easier to swallow
but limits its impact? Might six sigma actually reinforce, rather than knock down,
the silos that impede improved performance? We need realistic answers to these
essential questions.
We suggest that process management and reengineering are necessary comple-
ments to six sigma — especially in the DFSS phase. Six sigma is part of the answer,
but it is not the whole answer. Six sigma veterans have learned they need to leverage
their six sigma efforts with other improvement techniques — process management,
reengineering, and lean thinking. Six sigma is a methodology that uses many tools; it is not a religion, and business results are more important than ideological purity.
The real winners at DFSS do not limit their arsenal to just one weapon but employ
all appropriate techniques, combining them in an integrated program of process
redesign and improvement. After all, we already know that over 90% of our problems
are systems (process) problems. It would be ludicrous to look the other way when
we have a chance to fix them in the design stage.
In addition, most of the six sigma projects, discussions, and experts emphasize
the manufacturing end of potential improvements. But let us remember that in
services, the cost of quality is in the 60 to 70% range of sales. That is an incredible
potential of improvement, and reengineering is a perfect tool not only to evaluate
but also to help the six sigma initiatives reformulate the processes in such a way
that improvements will occur as a matter of course and on an ongoing basis. In this
respect, one may even say that the goal of DFSS and reengineering is to create a process in which all of its components are managed, measured, and improved from two distinct yet complementary perspectives:
1. Organizational profitability
2. Satisfaction of customer functionality
REFERENCES
Chang, R.Y., Improve Processes, Reengineer Them, or Both, Training & Development, Mar.
1994, pp. 54–61.
Rupp, R.O. and Russell, J.R., The Golden Rules of Process Redesign, Quality Progress, Dec.
1994, pp. 85–90.
Stamatis, D.H., The Nuts and Bolts of Reengineering, Paton Press, Red Bluff, CA, 1997.
Weisbord, M., Productive Workplaces, Jossey-Bass, San Francisco, 1987.
Wilgus, A.L., The Conference Method of Redesign, Quality Progress, May 1995, pp. 89–95.
SELECTED BIBLIOGRAPHY
Baer, W., Employee-Managed Work Redesign: New Quality Of Work Life Developments,
Supervision, Mar. 1986, pp. 6–9.
Cooley, M., Architect or Bee, South End Press, Boston, 1981.
Crosby, B., Employee Involvement and Why It Fails, What It Takes to Succeed, Personnel
Administration, Feb. 1986, pp. 95–96.
Meyer, L., How the Right Measures Help Teams Excel, Harvard Business Review, May/June
1994, pp. 95–104.
McElroy, J., Back to Basics at NUMI: Quality Through Teamwork, Automotive Industries,
Oct. 1985, pp. 63–64.
Pyzdek, T., Considering Constraints, Quality Digest, June 2000, p. 22
Schumann, A.L., Fed Up with Furniture Failure?, Office System 85, May 1986, pp. 60–67.
Solomon, B.A., A Plan That Proves Team Management Works, Personnel, June 1985, pp. 6–8.
Swineheart, D.P. and Sherr, M.A., A System Model for Labor-Management Cooperation,
Personnel Administration, Apr. 1986, pp. 87–90.
Wall, T., What Is New in Job Design, Personnel Management, 1984, pp. 27–29.
GEOMETRIC DIMENSIONING
AND TOLERANCING (GD&T)
GD&T is an engineering product definition standard that geometrically describes design
intent. It also provides the documentation base for the design of quality and production
systems. Used for communication between product engineers and manufacturing engi-
neers, it promotes a uniform interpretation of a component’s production requirements.
This interpretation and communication are of interest to those who are about to
undertake the DFSS baton. Without the appropriate and applicable interpretation of
the design and without the appropriate communication of that design to manufac-
turing, problems will definitely occur.
Therefore, in this section we will address some of the key aspects of GD&T in
a cursory manner. We will touch on some of the definitions and principles of general
tolerancing as applied to conventional dimensioning practices. The term conventional
dimensioning as used here implies dimensioning without the use of geometric
tolerancing. Conventional tolerancing applies a degree of form and location control
by increasing or decreasing the tolerance.
Conventional dimensioning methods provide the necessary basic background to
begin a study of geometric tolerancing. It is important that you completely under-
stand conventional tolerancing before you begin the study of geometric tolerancing.
When mass production methods began, interchangeability of parts was impor-
tant. However, many times parts had to be “hand selected for fitting.” Today, industry
has faced the reality that in a technological environment, there is no time to do
unnecessary individual fitting of parts. Geometric tolerancing helps ensure inter-
changeability of parts. The function and relationship of a particular feature on a part
dictates the use of geometric tolerancing.
Geometric tolerancing does not take the place of conventional tolerancing.
However, geometric tolerancing specifies requirements more precisely than conven-
tional tolerancing does, leaving no doubts as to the intended definition. This precision
TABLE 10.10
GD&T Characteristics and Symbols
[Symbols not reproduced. The table lists form characteristics (straightness, cylindricity), related-feature characteristics (parallelism, perpendicularity (squareness), angularity, circular runout), location characteristics (position, concentricity), and general symbols such as least material condition (L), diameter, basic dimension (e.g., .50°), and datum target (A).]
may not be the case when conventional tolerancing is used, and notes on the drawing
may become ambiguous.
When dealing with technology, a drafter needs to know how to properly represent
conventional dimensioning and geometric tolerancing. Also, a technician must be able
to accurately read dimensioning and geometric tolerancing. Generally, the drafter con-
verts engineering sketches or instructions into formal drawings using proper standards
Millimeters
• The decimal point and zero are omitted when the metric dimension is
a whole number. For example, the metric dimension “12” has no
decimal point followed by a zero.
• When the metric dimension is greater than a whole number by a
fraction of a millimeter, the last digit to the right of the decimal point
is not followed by a zero. For example, the metric dimension “12.5”
has no zero to the right of the five. This rule is true unless tolerance
values are displayed.
• Both the plus and minus values of a metric tolerance have the same
number of decimal places. Zeros are added to fill in where needed.
• A zero precedes a decimal millimeter that is less than one. For example,
the metric dimension “0.5” has a zero before the decimal point.
• Examples in ASME Y14.5M show no zeros after the specified dimen-
sion to match the tolerance. For example, 24 ± 0.25 or 24.5 ± 0.25 are
correct. However, some companies prefer to add zeros after the spec-
ified dimension to match the tolerance, as in 24.00 ± 0.25 or 24.50 ±
0.25.
Inches
• A zero does not precede a decimal inch that is less than one. For
example, the inch dimension “.5” has no zero before the decimal point.
• A specified inch dimension is expressed to the same number of decimal
places as its tolerance. Zeros are added to the right of the decimal point
The following rules are summarized from ASME Y14.5M. These rules are
intended to give you an understanding of the purpose for standardized dimensioning
practices. Short definitions are given in some cases:
REFERENCES
American Society of Mechanical Engineers, Dimensioning and Tolerancing: ASME Y14.5M-
1994, American Society of Mechanical Engineers, New York, 1994.
SELECTED BIBLIOGRAPHY
Anon., GDT Gets Everyone Working on Design Quality, Quality Assurance Bulletin, number
1315, undated.
Karl, D.P., Morisette, J., and Taam, W., Some applications of a multivariate capability index
in geometric dimensioning and tolerancing, Quality Engineering, 6, 649–665, 1994.
Krulikowski, A., Geometric Dimensioning and Tolerancing: A Self Study Workbook. Quality
Press, Milwaukee, 1994.
METROLOGY
The two pillars of the six sigma breakthrough methodology are measurement and
project selection. In this section we are going to discuss the issue of measurement
from a metrology perspective, and in Chapter 13 we will address project selection
under the umbrella of project management.
We have known for a long time that we are only as good as our measurement
will allow us to be. Therefore, improving measurement systems and understanding
the protocol of measurement will benefit the DFSS team immensely. It is very
frustrating when you know that what you measure with is not sensitive enough, not
accurate enough, or not precise enough. As if that were not enough, we compound
the problem with variability (incompatibility) between metrology systems.
When a project has been selected for a DFSS investigation/improvement, it is
imperative to understand the metrology system in place — the advantages as well
as the disadvantages/limitations of that system — and plan accordingly.
Otherwise, the results do not mean very much. Let us then examine in a cursory
fashion metrology.
Metrology is a Hellenic word made up of metron = measure and logos = reason or logic. In its combined form it means the science of, or systems of, weights
and measures. (It was Eli Whitney who between 1800 and 1811 proved interchange-
ability would be a vital part of manufacturing, thus developing the first use of the
metrology system in the United States.)
1. Make things:
Length, width and height
Eliminate one of a kind
2. Control the way others make things:
Designing
Building
3. Describe scientific …:
Worldwide exchange of ideas
Dollar
In any metrology system there is a control. The control system is one or more
of the following items.
Depending on the control used, there is also a calibration requirement that will
ensure that the measurement is “what is supposed to be.” Considerations for any
calibration system include the following:
Just like any other system, a measuring system may develop inaccurate mea-
surements. Some of the sources of inaccuracy are:
• Poor contact — Gages with wide areas of contact should not be used on
parts with irregular or curved surfaces.
• Distortion — Gages that are spring-loaded could cause distortion of thin-
walled, elastic parts.
• Impression — Gages with heavy stylus could indent the surface of contact.
• Expansion — Gages and parts should obtain thermal equilibrium before
measuring.
• Geometry — Measurements are sometimes made under false assumptions, for example, that a part is flat when it is not, concentric when it is not, or round when it is actually lobed.
These inaccuracies may be the result of either accuracy and or precision problems
due to:
• Equipment error — Each piece of equipment has built into it its own sources of error.
• Material error — In many cases the material cannot be retested, as in destructive testing.
• Test procedure error — Two procedures may exist that are expected to produce the same outcome.
• Laboratory error — Two laboratories perform the same test.
(Here the reader may want to review Gage R&R and the applicability of accuracy,
repeatability, reproducibility, stability, and linearity in Volume V of this series —
especially Chapters 15 and 16.)
1. Linear
2. Angular
3. Force and torque
4. Surface and volume
5. Mass and weight
6. Temperature
7. Pressure and vacuum
8. Mechanical
9. Electrical
10. Optical
11. Chemical
To make these measurements, many types of equipment may be used, some tradi-
tional and common and some very specific and unique for individual situations.
Typical types of equipment include:
1. Scale
2. Radius gages
3. Plug gage
4. Thread gage
5. Spline gage
6. Parallels
7. Sine plate
8. Surface plate
9. Caliper
10. Hardness tester
11. Indicator
12. Micrometer
13. Comparator
14. Profilometer
15. Coordinate measuring machine
16. Pneumatic gaging
17. Optical
PURPOSE OF INSPECTION
Inspection is used primarily in three areas, called inspection points. They are:
1. Incoming material
• Verification of purchase order
• Checking for conformance to specification
• Verification of quantity received
• Acceptance of certification
• Identification
2. In-process:
• First piece setup
• Verification of process change
• Monitoring of process capability
• Verification of process conformance
3. Finished product
• Last piece release
• Verification of product process
• Preparation of certification of process
It is also important to know that there are three kinds of inspection. They are:
1. 100% inspection
• Safety product
• Lot size too small for sampling
• Seventy-nine percent effective manually
2. Sampling
• Large lot size
• Decision making
• Acceptance
• Reliability
• Qualification
• Verification
of the item tested. Perhaps the most important issue in testing for DFSS is verifica-
tion. We must be sure that the test is reflective of the “real world usage” and that it
addresses “customer functionality.”
METHODS OF TESTING
1. Destructive testing
• Can only be conducted once
• Detects flaws in materials and components
• Measures physical properties
• Is not cost effective
• Eddy current
• X-ray
• Gamma ray
• Magnetic particle
• Penetrant dye
• Ultrasonic
• Pulse echo
• Capacitive
• Fiber optical
Levels of reporting        Levels of responsibility
Operator                   Manager
Auditor                    Engineer
Technician                 Technician
Engineer                   Auditor
Manager                    Operator
By wringing gage blocks together, you can obtain accuracy within millionths of
an inch. Caution is usually given not to use a circular action because this might
cause serious wear or even damage from abrasive dust trapped between surfaces.
Gage blocks are calibrated at the international standard measuring temperature
of 68°F (20° C). (This is very important to keep in mind, otherwise see below.)
When measurements are conducted at this temperature between blocks and parts of
dissimilar materials, no correction for different coefficients of expansion is necessary
providing the components have had sufficient time to adjust to the environment. If
blocks and parts are made of the same material and are at the same temperature,
accurate results are possible regardless of whether the temperature is high or low.
To determine the correction requirement when blocks and parts are dissimilar
and at temperatures other than 68°F, use the following formula:
E = L (∆K)(∆T)
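The symbols are not defined in the passage; on the usual reading, E is the measurement error, L the nominal length, ∆K the difference between the coefficients of expansion of the block and the part, and ∆T the deviation from 68°F. A purely illustrative calculation with assumed values:

\[
E = L\,\Delta K\,\Delta T = (4\ \text{in.})\,(6\times10^{-6}\ /^{\circ}\mathrm{F})\,(10\ ^{\circ}\mathrm{F}) \approx 0.00024\ \text{in.}
\]

so a 4 in. stack measured 10°F away from the reference temperature, with a 6 microinch-per-inch-per-degree mismatch in expansion coefficients, is off by roughly a quarter of a thousandth of an inch.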
The gage blocks are typically in a set of 81 pieces and they are arranged in the
following order:
.101 .102 .103 .104 .105 .106 .107 .108 .109 .110
.111 .112 .113 .114 .115 .116 .117 .118 .119 .120
.121 .122 .123 .124 .125 .126 .127 .128 .129 .130
.131 .132 .133 .134 .135 .136 .137 .138 .139 .140
.141 .142 .143 .144 .145 .146 .147 .148 .149
.050 .100 .150 .200 .250 .300 .350 .400 .450 .500
.550 .600 .650 .700 .750 .800 .850 .900 .950
LENGTH COMBINATIONS
Do not trust trial and error methods when assembling gage blocks into a gaging
dimension. The basic rule is to select the fewest blocks that will suit the requirement.
To construct a length of 1.3275″ using a typical 81-piece set, the following procedure
may be used:
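A worked illustration (assuming the set also contains the nine ten-thousandth blocks .1001 through .1009 and the 1.000 through 4.000 in. blocks of a standard 81-piece set, which do not appear in the partial listing above): eliminate digits from the right, so the .1005 block takes care of the ten-thousandths and only four blocks are needed,

\[
1.3275 = 0.1005 + 0.1270 + 0.1000 + 1.0000
\]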
There are times when the same gaging dimension must be assembled more than
once from a single set of blocks. This may unavoidably increase the number of blocks
required for the specific length. Assume that a second length of 1.3275″ is required
from the 81-piece set:
For some general rules and guidelines on shapes and basic calculations the reader
is referred to Volume II, Part II. Also, for an explanation and examples of the SI
system see Volume II, Part II.
Volume V of this series covers the issue of GR&R and its terminology and
therefore here we give only brief definitions of the key terms:
REFERENCES
Genest, D.H., Improving Measurement System Compatibility, Quality Digest, Apr. 2001, pp.
35–40.
11  Innovation Techniques Used in Design for Six Sigma (DFSS)
MODELING DESIGN ITERATION USING SIGNAL
FLOW GRAPHS AS INTRODUCED BY EPPINGER,
NUKALA AND WHITNEY (1997)
The signal flow graph represents a diagram of relationships among a number of
variables. When these relationships are linear, the graph represents a system of
simultaneous linear algebraic equations. The signal flow graph, as shown in
Figure 11.1, is composed of a network of directed branches, which connect at the
nodes. A branch jk, beginning at node j and terminating at node k, indicates its
direction from j to k by an arrowhead on the branch. Each branch jk has associated
with it a quantity known as the branch transmission Pjk.
For our modeling processes, the branches represent the tasks being worked (an
activity-on-arc representation). The branch transmissions include the probability and
time to execute the task represented by the branch:
\[
P_{jk} \;=\; p_{jk}\, z^{\,t_{jk}}
\]
where pjk is the probability associated with the branch; tjk is the time taken to traverse
the branch; and z is the transform variable used to connect the physical system (time
domain) to the quantities used in the analysis (transform domain).
The z transform simplifies the algebra, as it enables us to incorporate the
quantities to be multiplied (probabilities) in the coefficient of the expression, and to
include the quantities to be added (task times) in the exponent. The resulting system
is then analogous to a discrete sampled data system, and the body of literature on
this subject can be applied for the analysis thereof.
The path transmission is defined as the product of all branch transmissions along
a single path. The graph transmission is the sum of the path transmissions of all the
possible paths between two given nodes. The graph transmission is also the resulting
expression on an arc connecting the two given nodes when all of the other nodes
have been absorbed. In particular, we are interested in computing the graph trans-
mission from the start to the finish nodes. Henceforth, graph transmission shall refer
to the graph transmission between the start and the finish nodes and is denoted as Tsf.
[Figure 11.1: a signal flow graph with four nodes (1, 2, 3, 4) connected by directed branches carrying the branch transmissions P12, P13, P23, P24, P32, P34, and P43. A second figure shows the example process, product concept design (task A) followed by tooling design (task B), drawn as a signal flow graph from a start node to a finish node, with branch transmissions z^2 and z^3 for the task durations, iteration branches 0.3z^2 and 0.6z^3, and branch probabilities 0.7 and 0.4.]
The coefficient of each term in the graph transmission is the probability associated with the path(s) it represents, and the exponent of z is the duration associated with the path(s). The graph transmission can be derived using the standard operations
for the signal flow graphs (discussed below). The impulse response of the graph
transmission is then a function representing the probability distribution of the lead
time of the process. It can be shown that the expected value of the lead time of the
process is:
\[
E(L) \;=\; \left.\frac{dT_{sf}}{dz}\right|_{z=1}
\]
[Figure: step-by-step reduction of the example graph by absorbing nodes and eliminating the 0.18z^5 self-loop, yielding the graph transmission

\[
T_{sf} \;=\; \frac{z^{5}\,(0.4 + 0.42\,z^{3})}{1 - 0.18\,z^{5}}
\]

together with a bar chart of the resulting lead-time probability distribution over times of 5, 8, 10, and 13 units.]
NUMERICAL EXAMPLE
The distribution can be found for this simple example by performing synthetic
division on Tsf to obtain the first few terms of the infinite series. The nominal (once
through) time for A and B in series is 5 units of time, which occurs with probability
0.4. It is more likely (probability 0.42) that the lead time L of the process will be 8
units of time.
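A minimal computational sketch of this example in Python (it simply expands Tsf = z^5(0.4 + 0.42z^3)/(1 − 0.18z^5) as a power series; it is an illustration, not code from the source):

def lead_time_distribution(max_time=60):
    """Expand Tsf(z) = z^5 (0.4 + 0.42 z^3) / (1 - 0.18 z^5) as a power series.

    The coefficient of z^t is the probability that the process finishes
    in t time units.
    """
    numerator = {5: 0.4, 8: 0.42}          # z^5 (0.4 + 0.42 z^3)
    loop_prob, loop_time = 0.18, 5         # denominator 1 - 0.18 z^5
    dist = {}
    k, weight = 0, 1.0                     # 1/(1 - 0.18 z^5) = sum_k (0.18 z^5)^k
    while 5 + k * loop_time <= max_time:
        for t, p in numerator.items():
            if t + k * loop_time <= max_time:
                dist[t + k * loop_time] = dist.get(t + k * loop_time, 0.0) + p * weight
        k += 1
        weight *= loop_prob
    return dist

dist = lead_time_distribution()
print(round(dist[5], 3), round(dist[8], 3), round(dist[10], 3), round(dist[13], 3))
# 0.4  0.42  0.072  0.076  -- matching the distribution in the example
print(round(sum(t * p for t, p in dist.items()), 2))
# about 7.63 time units, the expected lead time E(L) = dTsf/dz at z = 1

Summing time times probability over the expanded series reproduces the expected lead time that the derivative formula gives for this example, roughly 7.6 time units.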
[Figure: the standard signal flow graph reduction operations. A branch with transmission t forming a self-loop is removed by noting that

\[
\frac{y}{x} \;=\; 1 + t + t^{2} + t^{3} + \cdots \;=\; \frac{1}{1-t},
\]

and an intermediate node z is absorbed by combining the series transmissions through it with any parallel branch t_xy, dividing by (1 − t_zz) when node z carries a self-loop.]
REFERENCES
Eppinger, S.D., Nukala, M., and Whitney, D.E., Generalized models of design iteration using
signal flow graphs, Research in Engineering Design, 9(2), 112–123, 1997.
Howard R., Dynamic Probabilistic Systems, Wiley, New York, 1971.
Truxal, J.G., Automatic Feedback Control System Synthesis, McGraw-Hill, New York, 1955.
SELECTED BIBLIOGRAPHY
Altus, S.S., Kroo, I.M., and Gage, P.J., A genetic algorithm for scheduling and decomposition of multidisciplinary design problems, Journal of Mechanical Design, 118(4), 486–489, 1996.
Anderson, J., Pohl, J., and Eppinger, S.D., A Design Process Modeling Approach Incorpo-
rating Nonlinear Elements, Proceedings of 1998 ASME Design Engineering Techni-
cal Conferences, Atlanta, Sept. 1998.
Austin, S.A., Baldwin, A.N., and Newton, A., Manipulating the flow of design information to improve the programming of building design, Construction Management & Economics, 12(5), 445–455, 1994.
Austin, S.A., Baldwin, A.N., and Newton, A., A data flow model to plan and manage the
building design process, Journal of Engineering Design, 7(1), 3–25, 1996.
Austin, S.A. et al., Analytical design planning technique: a model of the detailed building,
Design Process Design Studies, 20, 279–296, 1999.
Baldwin, A.N. et al., Modelling information flow during the conceptual and schematic stages
of building design, Construction Management & Economics, 17(2), 155–167, 1999.
Browning, T.R., Exploring Integrative Mechanisms with a View Towards Design for Integra-
tion, Proceedings of the Fourth ISPE International Conference on Concurrent Engi-
neering: Research and Applications, Rochester, MI, Aug. 20–22, 1997, pp. 83–90.
Browning, T.R., Applying the design structure matrix to system decomposition and integration
problems: a review and new directions, IEEE Transactions on Engineering Manage-
ment, 48(3), 292–306, 2001.
Cho, S.-H. and Eppinger, S., Product Development Process Modeling Using Advanced Sim-
ulation, Proceedings of the 13th International Conference on Design Theory and
Methodology (DTM 2001), Pittsburgh, Sept. 9–12, 2001.
Eppinger, S.D., Model-based approaches to managing concurrent engineering, Journal of
Engineering Design, 2, 283–290, 1991.
Eppinger, S.D. et al., A model-based method for organizing tasks in product development,
Research in Engineering Design, 6(1), 1–13, 1994.
Eppinger, S.D. and Salminen, V., Patterns of Product Development Interactions, International
Conference on Engineering Design, Glasgow, Scotland, Aug. 2001.
Gebala, D.A. and Eppinger, S.D., Methods for Analyzing Design Procedures, Proceedings of
the ASME Third International Conference on Design Theory and Methodology, 1991,
pp. 227–233.
Grose, D.L., Reengineering the Aircraft Design Process, Proceedings of the Fifth
AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimi-
zation, Panama City Beach, FL, Sept. 7–9, 1994.
Johnson, E.W. and Brockman, J.B., Measurement and analysis of sequential design processes,
ACM Transaction on Design Automation of Electronic Systems, 3(1), 1–20, 1998.
Kusiak, A. and Park, K., Concurrent engineering: decomposition and scheduling of design
activities, International Journal of Production Research, 28, 10, 1883–1900, 1990.
Kusiak, A. and Szcerbicki, E., Transformation from conceptual design to embodiment design,
IIE Transactions, 25(4), 6–12, 1993.
Kusiak, A. and Wang, J., Decomposition of the design process, Journal of Mechanical Design,
115, 687–695, 1993.
Kusiak, A. and Wang, J., Efficient organizing of design activities, International Journal of
Production Research, 31, 753–769, 1993.
Kusiak, A., Larson, N., and Wang, J., Reengineering of design and manufacturing processes,
Computers and Industrial Engineering, 26(3), 521–536, 1994.
Kusiak, A. and Larson, N., Decomposition and representation methods in mechanical design,
ASME Transactions: Journal of Mechanical Design, 117(3), 17–24, 1995.
Kusiak, A., Engineering Design: Products, Processes and Systems, Academic Press, San
Diego, 1999.
McCulley, C. and Bloebaum, C., A genetic tool for optimal design sequencing in complex
engineering systems, Structural Optimization, 12(2–3), 186–201, 1996.
Osborne, S.M., Product Development Cycle Time Characterization Through Modeling of
Process Iteration, master’s thesis (mgmt./eng.), M.I.T., Cambridge, MA, 1993.
Rogers, J. L., A Knowledge-Based Tool for Multilevel Decomposition of a Complex Design
Problem, NASA, Hampton, VA, Technical Paper TP-2903, 1989.
Smith, R.P. and Eppinger, S.D., Identifying controlling features of engineering design itera-
tion, Management Science, 43, 276–293, 1997.
Smith, R.P. and Eppinger, S.D., A predictive model of sequential iteration in engineering
design, Management Science, 43, 1104–1120, 1997.
Smith, R.P. and Eppinger, S.D., Deciding between sequential and parallel tasks in engineering
design, Concurrent Engineering: Research and Applications, 6, 15–25, 1998.
Smith, R.P. and Morrow, J., Product development process modeling, Design Studies, 20,
237–261, 1999.
Steward, D.V., Systems Analysis and Management: Structure, Strategy and Design, Petrocelli
Books, New York, 1981.
Ulrich, K.T. and Eppinger, S.D., Product Design and Development, 2nd ed., McGraw-Hill,
New York, 2000.
Yassine, A.A., Falkenburg, D.R., and Chelst, K., Engineering design management: an information structure approach, International Journal of Production Research, 37(13), 2957–2975, 1999.
Yassine, A.A. and Falkenburg, D.R., A framework for design process specifications manage-
ment, Journal of Engineering Design, 10(3), Sept. 1999.
Yassine, A.A. et al., DO-IT-RIGHT-FIRST-TIME (DRFT) Approach to Design Structure
Matrix (DSM) Restructuring, Proceedings of the 12th International Conference on
Design Theory and Methodology (DTM 2000), Baltimore, Sept. 10–13, 2000.
Yassine, A.A., Whitney, D., and Zambito, T., Assessment of Rework Probabilities for Design
Structure Matrix (DSM) Simulation in Product Development Management, Proceed-
ings of the 13th International Conference on Design Theory and Methodology (DTM
2001), Pittsburgh, September 9–12, 2001.
AXIOMATIC DESIGNS
Bad design is, well, bad design. Six sigma, tightening tolerances, substituting one
material for another and so on only treat the symptoms, not the problem. Also, they
may create expensive bad designs.
Axiomatic design, a theory and methodology developed at Massachusetts Insti-
tute of Technology (MIT; Cambridge, Mass.) 20 years ago, helps designers focus
on the problems in bad designs. As Suh (1990) points out, “The goal of axiomatic
design is to make human designers more creative, reduce the random search process,
minimize the iterative trial-and-error process, and determine the best design among
those proposed.” This, of course, applies to designing all sorts of things: software,
business processes, manufacturing systems, work flows, etc. The technique can also
be used for diagnosing and improving existing designs.
Functional requirements (FRs) are a minimum set of independent requirements that completely characterize the functional needs of the design solution in the functional domain. Design parameters (DPs) are the elements of the design solution in the physical domain that are chosen to satisfy the specified FRs. Process variables (PVs) are the elements in the process domain that produce the product specified in terms of the DPs. Customer attributes (CAs), in the customer domain, describe the benefits customers seek.
[Figure: the four design domains (customer, functional, physical, and process) and the mapping from CAs to FRs to DPs to process parameters (PPs).]
FIGURE 11.8 Order of design matrix showing functional coupling between FRs and DPs.
Axiomatic design is not quite the Taguchi method, which is a specific application
of robust design. It is not quite quality function deployment (QFD). Nor, like many
other quality methodologies, is it an after-the-fact approach that looks at results and
then traces back to the source of those results.
Robust design (Taguchi) and axiomatic design are the only methods that address
the design itself, ensuring that the designs are good to start with. Unfortunately,
while the Taguchi method focuses on making a part immune to variation (noise), it addresses only one requirement at a time. A problem might arise when a design has to
satisfy two requirements simultaneously, such as designing a car door to seal com-
pletely and close easily. In short, a coupling exists between these two functional
requirements.
The Taguchi method alone may sometimes trap designers into optimizing the wrong
function, optimizing a function they do not have ownership of, or optimizing a design
parameter that is linked to many functions. Worse, by optimizing one function,
designers risk degrading other functions. Axiomatic design, on
the other hand, avoids all that by breaking the coupling between functional require-
ments so that they no longer interact with one another.
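Coupling of this kind can be read directly from the structure of the FR-DP design matrix: a diagonal matrix is uncoupled, a triangular one is decoupled (the FRs can be satisfied in sequence), and anything else is coupled. The following minimal Python sketch (hypothetical function name and data, not from the text) applies that standard classification to the car-door example.

# A nonzero entry A[i][j] means design parameter j affects functional requirement i.

def classify_design_matrix(A):
    n = len(A)
    off_diag = [(i, j) for i in range(n) for j in range(n) if i != j and A[i][j]]
    if not off_diag:
        return "uncoupled"       # diagonal: each FR is set by exactly one DP
    if all(j < i for i, j in off_diag) or all(j > i for i, j in off_diag):
        return "decoupled"       # triangular in this ordering: FRs can be met in sequence
    return "coupled"             # otherwise the FRs interact and must be iterated

# Car door: FR1 = seal completely, FR2 = close easily; both depend on both DPs
# (hypothetical entries), so the design is coupled.
door = [[1, 1],
        [1, 1]]
print(classify_design_matrix(door))   # -> coupled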
QFD is similar to axiomatic design in that customer requirements are listed along
the left side of a matrix and engineering requirements are lined up along the top. From
this matrix, design teams can see conflicts that need to be resolved. However, QFD is largely subjective, and it does not express a mathematical relationship between a functional requirement and a design parameter, as axiomatic design does.
Axiomatic design has also been applied to highly complex automatic transmission designs. At one carmaker, the axi-
omatic methodology helps design teams optimize elements of a conceptual design
before engineering creates the detailed designs. The described benefit is that it helps
avoid unintended consequences in design, with the axiomatic method indicating
where interactions exist between the various elements and what the optimum
sequence is. The point is that axiomatic design is said to be a step beyond Taguchi
and worthwhile in a DFSS endeavor.
As we already mentioned, an axiomatic design helps designers with both new
and existing designs. In both cases, designers are more creative and develop better
designs in less time.
New Designs
The designer uses current design tools more effectively, producing better designs
by:
For diagnosing an existing design, the use of axiomatic design highlights problems
such as coupling and makes clear the relationships between the symptoms of the
problem (one or more FRs not being achieved) and their causes (the specific DPs
affecting those FRs). While improving the solution, the designer also enjoys the
new-design benefits above.
To summarize, for both new and existing designs, the designer is more creative,
turning out better designs quicker.
Axiomatic design helps to identify tasks, set a task sequence from the system
architecture, and assign resources effectively. This process also allows you to check
progress against explicit FRs.
When creating change, axiomatic design uses explicit criteria and allows you to
select the best option, identify effects throughout the system, and document changes.
3. Less risk: Axiomatic design reduces both technical risk and business risk.
Axiom 2, the information axiom, ensures that the chosen design has the minimum information content, which makes it the design most likely to succeed technically. Business risk is also reduced because:
• Products satisfy customers’ needs since FRs are derived from those needs.
• Development schedules are shorter and more predictable.
• Upgrades can be done quickly and effectively.
In sum, axiomatic design provides the designer with the benefit of designing
better products faster, and provides the firm with a competitive advantage, higher
profit, and less risk.
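As a rough numerical illustration of Axiom 2, the axiomatic design literature usually defines information content as the sum of log2(1/pi) over the probabilities pi that each FR will be satisfied; the candidate design with the lowest total is preferred. The sketch below (hypothetical probabilities) compares two candidate designs on that basis.

import math

def information_content(success_probabilities):
    # I = sum of log2(1/p_i), in bits; lower is better under Axiom 2
    return sum(math.log2(1.0 / p) for p in success_probabilities)

design_a = [0.95, 0.90, 0.99]    # assumed probabilities of meeting FR1..FR3
design_b = [0.99, 0.70, 0.99]

for name, probs in (("A", design_a), ("B", design_b)):
    print(name, round(information_content(probs), 3), "bits")
# Design A carries less information content, so by Axiom 2 it is the better choice.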
When you follow the axiomatic design process, you continue to use all of your
current software design tools. You will find that you will be more creative, turning out
better designs faster, since you will minimize iterations and trial and error. You will
have complete documentation of all the design decisions and supporting analysis.
To facilitate the analysis of axiomatic designs, the Acclaro™ software (the only such tool we are aware of) allows you to link all of your tools to a common database for the entire design team. It is available commercially from Axiomatic Design Software, Inc.
There are a number of techniques used today in design such as QFD, TRIZ, and
robust design. The use of these techniques and others is completely consistent with
axiomatic design. In fact, axiomatic design can help the designer apply these tech-
niques better. Figure 11.9 shows how they all fit together. Some examples of what
these techniques can do are:
Users of axiomatic design and supporting software such as Acclaro have found that implementation is easier: designers can document every decision and specify the relationships between FRs and DPs to any level of detail. The software also performs matrix manipulations, checks for design problems such as coupling, and communicates relevant information to members of the design team. Specifically, the Acclaro software runs on standard PCs and workstations and links to your software design tools and to your existing database through SQL. No other software is required
except for the Java environment, which is available at no charge from Sun Micro-
systems or from Axiomatic Design Software, Inc.
REFERENCES
Suh, N.P., The Principles of Design, Oxford University Press, New York, 1990.
SELECTED BIBLIOGRAPHY
Black, J.T. and Shroer, B.J., Decouplers in integrated cellular manufacturing systems. Journal
of Engineering for Industry, Transactions of A.S.M.E., 110, 77–85, 1988.
Creveling, C.M., Tolerance Design: A Handbook for Developing Optimal Specifications,
Addison-Wesley, Reading, MA, 1997.
Hill, P.H., The Science of Engineering Design, Holt, Rinehart and Winston, New York, 1970.
French, M.J., Engineering Design: The Conceptual Stage, Heinemann Educational Books,
London, 1971.
Kar, K.A., Linking Axiomatic Design And Taguchi Methods Via Information Content in
Design, First International Conference on Axiomatic Design, 2000.
Kramer, B.M., An Analytical Approach to Tool Wear Prediction, thesis, MIT, 1979.
Kramer, B.M. and Suh, N.P., Tool wear by solution: a quantitative understanding. Journal of
Engineering for Industry, Transactions of A.S.M.E., 102, 303–339, 1980.
Mohsen, H., Thoughts on the Use of Axiomatic Design Within the Product Development
Process, First International Conference on Axiomatic Design, 2000.
Otto, K. and Wood, K., Product Design: Techniques in Reverse Engineering and New Product
Development, Prentice Hall, Upper Saddle River, NJ, 2001.
Rinderle, J. R. and Suh, N.P., Measures of functional coupling in design, Journal of Engi-
neering for Industry, Transactions of A.S.M.E, 104, 383–388, 1982.
Stoll, H.W., Design for manufacture: an overview, Applied Mechanics Review, 39, 1356–1364, 1986.
Suh, N.P., Development of the science base for the manufacturing field through the axiomatic
approach, Robotics and Computer Integrated Manufacturing, 1, 399–455, 1984.
For Altshuller (1997, pp. 18–19), the benefit of using TRIZ is to help inventors elevate
their innovative solutions to levels 3 and 4. To optimize these levels he suggests the
following tools:
To actually use TRIZ in a design situation, the reader must be aware not only
of the nine steps just mentioned but also the 40 principles that are associated with
the methodology. Here we are going to list them without any further discussion. The
reader is encouraged to see Altshuller (1997), Terninko et al. (1996), and other
sources in the bibliography for more details.
1. Segmentation
2. Extraction
3. Local quality
4. Asymmetry
5. Consolidation
6. Universality
7. Nesting
8. Counterweight
9. Prior counteraction
10. Prior action
11. Cushion in advance
12. Equipotentiality
13. Do it in reverse
14. Spheroidality
15. Dynamicity
16. Partial or excessive action
17. Transition into a new dimension
18. Mechanical vibration
19. Periodic action
20. Continuity of useful action
21. Rushing through
22. Convert harm into benefit
23. Feedback
24. Mediator
25. Self service
26. Copying
27. Dispose
28. Replacement of mechanical system
29. Pneumatic or hydraulic constructions
30. Flexible membranes or thin films
31. Porous material
32. Changing the color
33. Homogeneity
34. Rejecting and regenerating parts
35. Transformation of properties
36. Phase transition
37. Thermal expansion
38. Accelerated oxidation
39. Inert environment
40. Composite materials
REFERENCES
Altshuller, G., 40 Principles: TRIZ Keys to Technical Innovation, Technical Innovation Center,
Worcester, MA, 1997.
Terninko, J., Zusman, A., and Zlotin, B., Step-by-Step TRIZ: Creating Innovative Solution Concepts, 3rd ed., Responsible Management Inc., Nottingham, NH, 1996.
SELECTED BIBLIOGRAPHY
Altshuller, G.S., Creativity as an Exact Science, Gordon and Breach, New York, 1988.
Bar-El, Z., TRIZ methodology, The Entrepreneur Network Newsletter, May 1996.
Braham, J., Inventive Ideas Grow on TRIZ, Machine Design, Oct. 12, 1995, 58.
Kaplan, S., An Introduction to TRIZ: The Russian Theory of Inventive Problem Solving,
Ideation International Inc., Southfield, MI, 1996.
Osborne, A., Applied Imagination, Scribner, New York, 1953.
Pugh, S., Total Design — Integrated Methods for Successful Product Engineering, Addison-
Wesley, Reading, MA, 1991.
Taguchi, G., Introduction to Quality Engineering, Asian Productivity Organization, Tokyo,
1983.
Terninko, J., Systematic Innovation: Theory of Inventive Problem Solving (TRIZ/TIPS),
Responsible Management Inc., Nottingham, NH, 1996.
Terninko, J., Step by Step QFD: Customer-Driven Product Design, Responsible Management
Inc., Nottingham, NH, 1995.
Terninko, J., Introduction to TRIZ: A Work Book, Responsible Management Inc., Nottingham,
NH, 1996.
Terninko, J., Robust Design: Key Points for World Class Quality, Responsible Management
Inc., Nottingham, NH, 1989.
von Oech, R., A Whack On The Side Of The Head, Warner Books, New York, 1983.
Zusman, A. and Terninko, J., TRIZ/Ideation Methodology for Customer-Driven Innovation,
8th Symposium on Quality Function Deployment, The QFD Institute, Novi, MI, June
1996.
12 Value Analysis/Engineering
INTRODUCTION TO VALUE CONTROL —
THE ENVIRONMENT
Among the major problems faced by industry today, two stand out: the cost-profit squeeze
and ineffective communications. Rising wages and materials costs are squeezing
profit against a price ceiling, and our communications systems do not seem to be
able to help effect a solution to the problem. We cannot buy labor or material for
less than the market cost nor can we sell for more than the consumer is willing to
pay. What then is the solution?
It is necessary to apply every known effective technique to learn how to thor-
oughly analyze the elements of a product or service so that we can identify and
isolate the unknown, unnecessary costs. In short, it is necessary to make a direct
attack on the high cost of business.
Value control has been proven to be an effective management tool to seek out
and eliminate this hidden cost wherever it may be. It can aid in solving both profit
and communications problems, and it can have an effect on operations that will be
limited only by your understanding of the techniques and management’s willingness
to apply them.
Many people are highly skilled at cost analysis and problem solving and think
that value control is something we do all of the time. There are many who think
that value control is part of every engineer’s job. Some also think that it is something
we have done for 20 or 30 years, but we did not call it value control. A primary
objective of this chapter is to demonstrate that value control is not only different
but is a more powerful technique than any used in the past.
Value control is not new, in that it has been around for about 25 years, but it
has only been within the past five to ten years that it has been widely accepted. It
is a broad scope management tool that considers all of the factors involved in a
decision. It goes to the heart of the problem, determines the function to be performed,
and applies creative problem solving and business operations such as time and
motion study, work simplification, feasibility reviews, systems analysis, etc. But, it
follows a systematic organized approach that, in addition, applies unique techniques
that identify value control as a special approach to profit improvement.
Why, after all these years of scientific and specialized management techniques,
is it necessary to develop another technique that does what many of the others were
supposed to do? Why is value control necessary?
It is still possible for one person to know all that is required to operate a small
company or design a simple product. However, our increasingly complex society
553
and increasingly complex technology have tended to make most of our managers
and technical people specialists in a limited area of activity. They have tended to
compartmentalize our operations and, to a large degree, our thinking. The more
complex the organization, the more the operation becomes fragmented into auton-
omous units that deal in a small part of the operation and have an effect on only a
small part of the profit.
In 1927, one of the greatest technological milestones in the history of human
development occurred as the result of the knowledge of two men. Charles A. Lind-
bergh sat on Coronado Beach in San Diego with Donald Hall, chief engineer of
Ryan Airlines, and established the basic criteria for the Spirit of St. Louis. Two
people knew all that was needed to develop an advanced product that even today
clearly shows their creative thinking.
Lindbergh established the requirements, Hall provided the technical knowledge,
and the 13 Ryan supervisors and employees provided the understanding, know-how,
and enthusiasm to develop a product that was designed, built, tested, and that won everlasting glory for Mr. Lindbergh, all within 13 weeks.
The product was designed to perform a specific function for a specific cost target.
There was no communication problem, there was no cost problem since $15,000
was all they had for the entire project, and there was no timing problem — any
delay was unthinkable.
Consider the design of an advanced aircraft today. The cost in people and
materials is almost beyond comprehension. Hundreds of thousands of people in
dozens of industries in several states work in vast industrial complexes for years
before the product takes to the air.
A product such as the automobile has created a similar situation to the degree
that it is a basic national industry and affects people in every corner of the country
and in many cases abroad.
Is it any wonder that value control is developing on the management scene? It
is the only technique that is specifically designed to consider all of the factors
involved in decision making — product performance, project schedules, and total
cost.
Value control is a program of involvement. It makes use of experience from
engineering, manufacturing, purchasing, marketing, finance — any area and every
area that contributes to the development of a product. It can be used to keep cost
out of a product and it can be used to get cost out of a product. It can do this because
cost is everywhere and everyone in the organization contributes to it.
Cost is the result of marketing concepts, management philosophies, standards
and specifications, outdated practices or equipment, lack of time, and incomplete,
unobtainable, or inaccurate information, along with dozens of other contributing
factors. Every company has at least some of these problems, and it often requires
completely new ideas to change them.
To prevent and eliminate unnecessary cost we must know how to identify cost.
We must be able to identify a problem and be willing to improve the situation. This
means change — change in habits, change in ideas, change in philosophies. We
know we must change to keep up with the world. Value control enables us to take
a good look at all of the factors that must go into making a successful change that
will be for our benefit.
Value control requires special skills. It is not cost analysis, design reviews, or
something we do as part of our job. It is different because the basic philosophy is
different. It is not concerned with trying to reduce the cost of an item or service; it is
concerned with function and methods to provide the function at the lowest overall cost.
Value control concerns people and their habits and attitudes. It is to a large
degree a state of mind. It accepts changes as a way of life and makes every effort
to determine how change can be made to provide the most benefit.
It is a function-oriented system that makes use of creative problem solving and
team action. The team is designed to provide an experienced, balanced, and broad
scope look at a subject without being constrained by past experience. It requires
trained people who understand the system and its application.
VALUE CONCEPT
The value process is a function-oriented system. It makes use of team action and
creative problem solving to achieve results and is specifically designed to simulta-
neously consider all of the factors involved in decision making — performance,
schedule, and cost.
In order to obtain an experienced, balanced, broad-scope look at all facets of
the project, a carefully selected team is organized to satisfy the specific requirements
needed.
Selection of the team must consider personality as well as technical competence
of the candidates. The team must not only have the technical know-how required,
but the members must be compatible and know how to work together. The value
manager acts as a coach to guide the team members through the system to obtain
maximum benefit from their activities.
DEFINITION
The Society of American Value Engineers defines the term “value engineering” as
follows:
PLANNED APPROACH
Value control achieves results by following a well-organized planned approach. It
identifies unnecessary cost and applies creative problem-solving techniques to
remove it. The three basic steps brought to bear are:
FUNCTION
Function is the very foundation of value control. The concern is not with the part
or act itself but with what it does; what is its function? It may be said that a function
is the objective of the action being performed by the hardware or system. It is the
result to be accomplished, and it can be defined in some unit of measure such as weight, quantity,
time, money, space, etc.
Function is the property that makes something work or sell. We pay for a
function, not hardware. Hardware has no value; only function has value. For example,
a drill is purchased for the hole it can produce, not for the hardware. We pay to
retrieve information, not to file papers.
Defining functions is not always easy. It takes practice and experience to properly
define a function. It must be defined in the broadest possible manner so that the
greatest number of potential alternatives can be developed to satisfy the function. A
function must also be defined in two words, a verb and a noun. If the function has
not been defined in this way, the problem has probably not been clearly defined.
Function definition is a forcing technique that tends to break down barriers to
visualization by concentrating on what must be accomplished rather than the present
way a task is being done. Concentrating on function opens the way to new innovative
approaches through creativity.
Some examples of simple functions are as follows: produce torque, convert
energy, conduct current, create design, evaluate information, determine needs,
restrict flow, enclose space, etc.
There are two types of functions, basic and secondary. The basic function
describes the most important action performed. A secondary function supports the
basic function and almost always adds cost.
VALUE
After the functions have been defined and identified as basic or secondary, we must
evaluate them to determine if they are worth their cost. This step is usually done by
comparison with something that is known to be a good value. This means the term
value must be understood.
Aristotle described seven classes of value: economic, political, social, aesthetic,
ethical, religious, and judicial. In value control, we are primarily concerned with
economic value. Webster defines value in terms of worth as follows:
Value: (1) A fair return or equivalent in goods, services, or money for something
exchanged; (2) the monetary worth of something; marketable price; monetary value.
Worth: The value of something measured by its qualities or by the esteem in which
it is held.
For our purpose, value is defined as follows: Value is the lowest overall cost to reliably provide a
function. By overall we mean all costs that affect the function such as design and
development expenses as well as manufacturing, warranty, service, and other costs.
Value (VE) is broken down into three kinds, each with a specific meaning:
1. Use value (Vu): a measure of the properties that make something satisfy a use or service.
2. Esteem value (Ve): a measure of the properties that make something desirable.
3. Exchange value (Vx): a measure of the properties of an item that make it possible for us to exchange it for something else.
VE = Vu + Ve + Vx
Do not confuse cost with value. Cost is a fact; it is a measure of the time, labor,
and material that go into producing a product. We can increase cost by adding
material or labor to a product, but this will not necessarily increase its value. Value
is an opinion based on the desirability or necessity of the required qualities or
functions at a specific time.
The relationship of cost (C) to value provides an index of performance (P).
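In symbols, and consistent with the ratio defined later in this chapter for the cost-function worksheet, the performance index can be written as P = V/C, where V is the value established for a function and C is its actual cost. A ratio well below 1 flags a function that costs more than it is worth.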
DEVELOP ALTERNATIVES
Function has been defined as the property that makes something work or sell and
value as the lowest cost to reliably provide a function. The performance index (P)
has identified the problem. Now, what else will do the job? We need to develop
alternative ways to perform the function.
In order to develop alternatives, we make maximum use of imagination and
creativity. This is where team action makes a major contribution. The basic tool is
brainstorming. In brainstorming, we follow a rigid procedure in which alternatives
are developed and tabulated with no attempt to evaluate them. Evaluation comes
later. At this stage, the important thing is to develop the revolutionary solution to
the problem.
Free use of imagination means being free from the constraints of past habits
and attitudes. A seemingly wild idea may trigger the best solution to the problem
in someone else. Without a free exchange of ideas, the best solution may never be
developed. A skilled leader can produce outstanding results by brainstorming and
by providing simple thought stimulations at the proper time.
The creative phase does not usually result in concrete ideas that can be directly
developed into outstanding products. The creative phase is an attempt to develop
the maximum number of possible alternatives to satisfy a function. These ideas or
concepts must be screened, evaluated, combined, and developed to finally produce
a practical recommendation. This requires flexibility, tenacity, and visualization and
frequently the application of special methods designed to aid in the selection process.
The process is carried out during the evaluation and planning phases of the job plan
and is covered in detail in those sections.
The recommendation must be accepted as part of a design or plan to be suc-
cessful. In short, it must be sold. It must show the benefits to be gained, how these
benefits will be obtained, and finally proof that the ideas will work. This takes time,
persistence, and enthusiasm. Details of a recommended procedure are covered in
later sections of the text.
What is it?
What does it do?
What does it cost?
What is it worth?
Where is the problem?
What can we do?
What else will do the job?
How much does that cost?
1. Information phase
2. Creative phases
3. Evaluation phase
4. Planning phase
5. Reporting phase
6. Follow-up phase
[Figure 12.1: cost to change plotted against time; the cost to change rises as a project moves from concept toward implementation.]
APPLICATION
Although indoctrination workshops are usually conducted with existing hardware
or systems, the greatest opportunity for savings is in the prevention of unnecessary
cost. The function techniques apply, but modifications are required that depend on
the user’s understanding and ingenuity in applying them to conceptual ideas. Sys-
tems, procedures, manufacturing methods, and tool design are some of the areas
other than hardware where functional techniques have been used successfully.
Figure 12.1 shows how the overall savings varies with time in the application of
function analysis techniques from concept to hardware, system, plan, or any other
type of project implementation.
The figure also indicates that the cost to change increases and the net savings
decreases as a project develops. Once a product or service is in production or use, the cost of the tools, hardware, forms, and time already expended to bring the product to its state at that point cannot be recovered.
In order to be effective, value control needs trained people working as a team.
A team needs a coach who, in this case, is the value manager. The team provides
the technical expertise necessary, and the value manager provides the know-how
to apply this knowledge for effective results. Value control also requires manage-
ment to provide the necessary support and a creative environment. In short, success
is up to everyone in an organization. People must be trained; they must understand
the system; they must understand the application; they must be aware of cost and
how to handle it, and management must support their activity by active participa-
tion.
Success means change: change in methods, change in procedures, change in
attitudes. With this approach, value control will become an effective profit maker.
1. Information phase
What is it?
Collect all data, drawings, blueprints, costs, parts, flow sheets, process
sheets.
Talk with people, ask questions, listen, develop.
Become familiar with the project.
Discuss, probe, analyze.
What does it do?
Define functions.
Determine basic function(s).
Construct FAST diagram.
What does it cost?
Conduct function/cost analysis.
What is it worth?
Establish a value for each function.
Determine overall value for the product or source.
Where is the problem?
Analyze the diagram.
Locate poor value functions.
Pinpoint the areas for creativity.
What can we do?
Set goal for achievement.
2. Creative phase
What else will do the job?
Brainstorm the poor value target functions — use imagination — cre-
ate alternatives, develop unique solutions, combine or eliminate
functions.
Look for revolutionary ideas.
Do not overlook discoveries obtained by serendipity.
3. Evaluation phase
Select best ideas.
Screen all creative ideas.
Evaluate carefully for useful solutions.
Combine best ideas.
Categorize into basic groups.
Screen for best ideas.
How much does it cost?
Generate relative costs.
Analyze potential.
Anticipate roadblocks.
4. Planning phase
Develop best ideas.
Develop practical solutions.
Obtain accurate costs.
Review engineering and manufacturing requirements.
Check quality, reliability.
Talk with people.
Resolve anticipated roadblocks.
Develop alternative solution.
Plan your program to sell.
Show the benefits.
5. Reporting phase
Present ideas to management.
Show before and after costs, advantages and disadvantages, non-recur-
ring costs of development and implementation, scrap, warranty, and
other forecasts and net benefit.
Plan your recommendation to sell.
Make recommendation for action.
6. Implementation
Ensure proper implementation.
Be certain that the change has been made in accordance with the origi-
nal intent.
Audit actual costs.
TECHNIQUES
1. Define functions.
2. Identify/overcome roadblocks.
3. Use specialty products/processes.
4. Bring new information.
5. Construct FAST diagram.
6. Cost/evaluate FAST diagram.
7. Use accurate costs.
8. Establish goals.
9. All info from best source.
10. Use good human relations.
11. Get all the facts.
12. Blast-create-refine.
13. Get $ on key tolerance.
14. Put $ on main idea.
15. Use your own judgment.
16. Spend company $ as if own.
17. Use company’s services.
Items in bold type indicate techniques that apply at every phase of the job plan —
as well as in most other activities.
INFORMATION PHASE
DEFINE THE PROBLEM
The first phase of the job plan is the information phase. It is broken down into three
distinctly separate parts:
1. Information development
2. Function determination
3. Function analysis and evaluation
They are all part of the information phase because in reality, they are part of the
problem resolution. The work done in the information phase is the basis for the phases that follow.
TABLE 12.1
Project Identification Checklist
1. Assembly and part drawings
2. Quantity requirements per assembly and annual usage
3. Sample of assembly (project)
4. Sample of each part in assembly (raw cost of stamping blanks if practical)
5. Cost data (material, labor, overhead)
6. Tooling cost (special instructions)
7. Planning sheets (sequence of manufacturing operations including detailed cost breakdown)
8. Specifications (materials, manufacturing, engineering)
9. Required features (special instructions)
10. Name of project engineer
Information Development
Information Collection
The first part of the information phase is the development of all available information
concerning the project. This includes drawings, process sheets, flow diagrams, pro-
cedures, parts samples, costs, and any other available material. Discuss the project
with people who are in a position to provide reliable information. Check to be certain
that honest wrong impressions are not being collected, that is, information that may
have been fact at one time but is no longer valid.
It is very important that good human relations be used during this data and
information collecting phase. Get the person responsible for the project or develop-
ment in the first place to help by showing that individual how he or she will be able
to profit from successful results of the completed study.
The project identification pre-workshop checklist — Table 12.1 — details all of
the information required for study. If the data or information are not on hand, it will
be necessary to obtain them. A basic information data sheet that should be filled out
as a first step to identify the project is shown in Figure 12.2. A brief description of
the project should be written under “Operation and performance” to be certain all
of the team members are in at least basic agreement as to the product or process
operation. If available, a schematic or a picture should also be drawn in this area.
Cost Visibility
The next step towards a problem solution is to complete the cost visibility section —
Figure 12.3 — of the cost-function worksheet as detailed in the cost visibility
sheet — Figure 12.4.
P.F. costs are estimated as follows:
Manufacturing cost = Material + Labor + Burden
P.F. cost or Total cost = Manufacturing cost + Other
Review these cost data in accordance with the preset goals of your project, and
make a preliminary judgment of the potential profit improvement. Consider the
factors involved and set a goal for achievement that will provide a profitable position.
The target should indicate a 30 to 100% cost reduction to be practical. It may seem
improbable that this can be achieved; however, it is a target to work toward. A check
against this target will be made at the completion of the information phase.
Project Scope
It is now possible to make a preliminary determination of the project scope. Consider
the new project as outlined on the project identification sheet, the present cost and
target for improvement, and the time available for the study. After evaluating these
factors, define the scope. Limiting or expanding the scope of a study depends on
the objective and the time allowed for the study. In project work, the analysis of
function should first be performed upon the total assembly or process. If the objec-
tives of the value control study are not achieved at that level, the next lower level
should be studied and so on down to the lowest level of indenture. The lower the
level of indenture, the more detailed and complex the study will become. This may
require additional time in the present study or future studies to consider segments
identified by function analysis.
Function Determination
The information on hand together with an analysis of costs will tend to define the initial
scope of the project. The product or process has been defined and its cost evaluated by
use of the cost visibility techniques and a target set. It is now possible to start to define
the functions to be performed or that are being performed by the system.
Start with the function or functions of the assembly or total system, then break
the system down into each part and define the functions of each. Remember to strive
to define the functions in two words, and also keep in mind that the definition must
not constrain thinking. It is the function definition that will help to visualize new
ways to satisfy the function. If it is too constraining, it will tend to restrain thinking.
Figure 12.4 should be used for this effort. Most simple products will have at
least 20–25 functions. Detailed information on defining functions is covered further
on in this section.
After the functions have been determined, identify the basic function or functions.
The basic function is the function that cannot be eliminated unless the product can
be eliminated. There may be more than one, but an effort should be made to
determine the one most likely basic function. Determining the basic function is the
first step in the construction of a FAST diagram. A detailed discussion on the
construction of FAST diagrams is to be found further on in this section.
The FAST diagram makes it possible to complete the cost function worksheet.
A typical cost function sheet lists all functions versus all parts of a product or actions
of a system, procedure, or administrative activity. The objective is to convert product
cost to function cost.
The cost of each piece of hardware or action is redistributed to the function
performed. This proportional redistribution of cost to function requires information,
experience, and judgment, and all team members must contribute their expertise.
After the cost of each part or action has been redistributed to the functions
performed, the cost columns are totaled to obtain the function cost. This cost is then
placed on the FAST diagram. The FAST diagram then becomes a very valuable tool.
It tells what is happening, why, how, when, and what it costs to perform the function.
It is now possible to evaluate the functions to determine if they are worth what is
being paid for them. In other words, a value must be set on each function.
Determining the value of each function is a subjective process. However, it is a
key element in the value process. Comparing the function cost to function value
provides an immediate indication of the benefit being obtained for expended funds.
The ratio of function value to function cost is the performance index. The sum of all
values is the value of the system or the lowest cost to reliably provide the basic
function. It should be compared to the preliminary goal set earlier.
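A minimal sketch of this computation (the parts, allocations, and dollar figures below are hypothetical, not from the text) shows how the worksheet works: each part's cost is spread across the functions it performs, the function columns are totaled, and each function's value-to-cost ratio gives its performance index.

part_costs = {"housing": 4.00, "shaft": 2.50, "nameplate": 0.60}

# Fraction of each part's cost attributed to each function (each row sums to 1.0).
allocation = {
    "housing":   {"enclose space": 0.7, "support shaft": 0.3},
    "shaft":     {"transfer torque": 1.0},
    "nameplate": {"identify product": 1.0},
}

# Value judged for each function: the lowest cost to reliably provide it.
function_value = {"enclose space": 1.50, "support shaft": 0.80,
                  "transfer torque": 2.50, "identify product": 0.10}

function_cost = {}
for part, cost in part_costs.items():
    for function, share in allocation[part].items():
        function_cost[function] = function_cost.get(function, 0.0) + cost * share

for function, cost in function_cost.items():
    p = function_value[function] / cost                     # performance index
    print(f"{function:17s} cost={cost:5.2f} value={function_value[function]:5.2f} P={p:4.2f}")

print("system value =", sum(function_value.values()))       # compare to the preliminary goal

The low indices for "enclose space" and "identify product" are the functions to target in the creative phase.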
It may be that the new goal is considerably higher than the original. If this is
the case, an evaluation of the diagram will indicate what must be done to achieve
the original goal. It may indicate that an entirely new concept is required, or it may
be that it will be acceptable to settle for less. It is often the case that the original
goal and the new value are close. An analysis of the function costs will again indicate
necessary action.
This analysis clearly defines the task for product improvement. It breaks the
problem down to functions that must be improved, revised, or eliminated to achieve
the goal. It is now possible to proceed to the second phase of the job plan — the
creative phase.
COST VISIBILITY
Experience has shown, for example, that the automotive industry is a price leadership
industry. Experience has also shown that in spite of the tremendous leverage of the
industry, it cannot control the prices it must pay for the basic materials required for
production. It is then quite clear that we cannot buy materials for less and we cannot
sell our products for more. Consequently, only one avenue remains open to increase
profit, and this is to identify the areas of high and unnecessary costs and to find
ways to reduce or eliminate these costs.
In the past, tremendous effort has been made to keep our products at a compet-
itive level. The intent is to add value control as another tool to aid in achieving the
desired function of a product at the best cost.
Cost visibility techniques are the first to be applied in the value control job plan.
Cost visibility techniques are well ordered and range from very simple to highly
complex. These techniques do not tell us where unnecessary costs are; they tell us
where high costs are. This is important because they identify a starting point.
Definitions
Since the techniques of cost visibility are concerned with all types of costs, each
type will be defined so there is no misunderstanding:
Burden consists of both fixed and variable categories, and separate rates
are often established for each. The method of assigning burden differs from
industry to industry and even from one company to another within an
industry. Any quantifiable product factor may serve as a basis for assign-
ment of burden as long as consistent use of the factor across the entire
product line results in full and equitable burden distribution.
Fixed burden — Includes all continuing costs regardless of the produc-
tion volume for a given item, such as salaries, building rent, real estate
taxes, and insurance.
Variable burden — Includes costs that increase or decrease as the vol-
ume rises or falls. Indirect materials, indirect labor, electricity used to
operate equipment, water, and certain perishable tooling are also in-
cluded in this classification.
Cost — The amount of money, time, labor, etc. required to obtain anything.
In business, the cost of making or producing a product or providing a
service.
Design cost — The sum of material, labor, and variable burden. An under-
standing of the elements of design is essential for an understanding of cost
visibility techniques.
Fixed cost — Cost elements that do not vary with the level of activity (insur-
ance, taxes, plant, and depreciation).
Incremental cost (sometimes called a marginal cost) — Not all variable costs
vary in direct proportion to the change in the level of activity. Some costs
remain the same over a given number of production units, but rise sharply
to new plateaus at certain incremental changes. The costs thus effected are
incremental or marginal costs.
Labor — Manpower expended in producing a product or performing a service.
Labor may be direct or indirect.
Direct labor — Labor that can be traced directly to a specific part. Wages
paid the stamping press operator would be classified direct labor.
Indirect labor — Labor that is necessary in the manufacturing process but
is not directly traceable to a specific part (material handling, inspection,
receiving, shipping, etc.) is generally included in burden.
Manufacturing cost — The sum of material, labor, and variable burden. An
understanding of the elements of manufacturing is essential for an under-
standing of cost visibility techniques.
Material — All hardware, raw (steel, zinc, plastic powders) and purchased
(instrument panel knobs, decals, rivets, screws, etc.) items consumed in
manufacturing a part. Material may be direct or indirect.
Direct material — Raw and purchased material which becomes an inte-
gral part of an end item. (The cost of the metal from which a fender is
formed would always be a direct material.)
Indirect material — Material that is necessary in the manufacturing pro-
cess but is not directly traceable to a specific part (lubricants, wiping
cloths, marking pens, etc.) is generally included in burden.
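As a minimal sketch of the burden-assignment idea above (all numbers hypothetical), a single burden pool can be spread across a product line in proportion to one consistently applied factor (here, direct labor hours) so that the whole pool is absorbed:

burden_pool = 12000.00                        # total fixed plus variable burden for the period
direct_labor_hours = {"part A": 150, "part B": 90, "part C": 60}

total_hours = sum(direct_labor_hours.values())
burden_rate = burden_pool / total_hours       # dollars of burden per direct labor hour

for part, hours in direct_labor_hours.items():
    print(part, round(hours * burden_rate, 2))
# The three allocations (6000, 3600, 2400) sum back to the full pool, which is the
# "full and equitable distribution" the definition above calls for.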
The application of cost visibility techniques begins with an analysis of total cost,
progresses through an analysis of cost elements, and finally analyzes component or
process costs. To perform these steps the best cost information available is required.
This information will be available from sources such as:
These techniques are not necessarily used in chronological order. We must always
use our judgment, not only in utilizing the techniques that indicate high cost, but
also in utilizing all the other tools of value control.
Cost visibility analysis is based on the information shown in Figure 12.3. Based
on the information gathered, the team makes the appropriate recommendations.
Technique 2 — Determine Cost Elements
Compare material content with labor dollar content. Compare these elements of
cost with those for another similar manufactured item. If the elements of cost vary,
it is an indication one may be high in cost, and the reason for the difference must
be found.
This technique can also be used to arrive at a normal distribution of cost.
Accounting can usually determine the normal distribution of cost in material, labor,
and overhead for a specific department or profit center. Every part can then be
compared to the distribution cost to determine if the cost elements are high or low.
Again, comparison is being used to find high cost.
The cost breakdown may show that $10 worth of material and $.10 worth of
labor are being expended on a certain part. If this is the case, it can be asked if we
are in business to spend $.10 on labor for $10 worth of material. Perhaps the material
supplier should be asked to perform the labor operation. This would free that labor to be used more productively elsewhere.
Conversely, it may be found that $.10 worth of raw material requires $10 worth
of labor. If this is the case, the overhead should be broken down into variances,
setups, tooling, direct labor, indirect labor, etc. The manufacturing area should be
questioned about methods and processes, profit centers being used, overhead, capital
equipment, labor grades, etc.
Technique 3 — Determine Component or Process Costs
The third technique goes one step further in breaking down material, labor, and
overhead. To determine component and element costs as they occur in the manu-
facture of a part, break down each component as shown in Figure 12.3.
Figure 12.3 shows the components broken down in elements. From this list, you
examine the reasonable costs versus the unreasonable. The process may sound very
subjective at this stage, but it is important to differentiate an item that does not “fit”
the pattern of other items. When the examination ends, more than likely you have
identified a “most probable” high cost item. Circle this amount, and examine it in
detail. Determine why this cost is so far out of line with other operations. This
technique gives a very precise and accurate cost visualization. It shows where costs
are being created on a component and element basis. Almost every analysis would
include the use of techniques 1, 2, and 3.
Now think of the third technique in more depth. If we study technique 3 in
depth, we will see that it can be used to analyze parts being assembled into a major
sub-assembly, major sub-assemblies being put together into a final assembly, and a
number of final assemblies being put together to make the total product. Good
judgment must be used in the application of this technique, and it will also dictate
the way the techniques should be used.
Technique 4 — Determine Quantitative Costs
This technique analyzes cost on the basis of some measurable unit such as time,
weight, size, area, etc., and then makes a comparison with the cost per unit of a
known good value. It is sometimes surprising how seemingly complex products will
fall into a pattern.
One of the most convenient ways to use this technique is to build a cost curve
for the product under study. A comparison to the curve will indicate whether the
product is high or low. Techniques 1, 2, and 3 can then be used to zero in on the
specific cause of the cost deviations.
Cost per period of time — This is good for high volume production. It can
also be used to describe cost per similar product class. Simply determine the number
produced in a convenient time period, minute, hour, day, etc. This can then be
compared to a similar unit. A simple example would be the cost per unit of a specific
class and size fastener.
Cost per pound — This is a basis for comparison usually applied to castings,
weldments, or forgings, but it can be applied to anything that will plot on a graph.
Determine the cost per pound of each item and plot these on a graph, and the high
cost items will be immediately apparent. Again, this is a basis for comparison —
another way to find meaningful cost basis. Remember, even though weight may not
be an important design criterion, it still costs money to ship every pound of unnec-
essary weight.
Cost per dimension — Some examples of the use of this unit would be as
follows: the cost per unit length for a simple extrusion, the cost per unit volume in
a tank, the cost per unit length of wiring, and the cost per square foot of area covered
by a high-cost epoxy paint. These are convenient cost figures to have available as a
basis for comparison. Cost per unit of length, area, and volume are the key words
of this technique.
Cost per functional property — Determine the actual amount spent per func-
tional property. For example, in a wiring harness, what is the cost per ampere conducted? On a mechanical component, what is the cost per pound of weight supported or per inch-pound of torque transmitted? This gives a basis for a direct comparison. The function
can then be evaluated by comparison. This is a basic value control technique.
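The cost-per-pound comparison can be sketched in a few lines (hypothetical castings; the median is a crude stand-in for a fitted cost curve), showing how the technique flags the item that deserves a closer look with techniques 1 through 3:

castings = {"bracket": (12.0, 3.0),    # (cost in dollars, weight in pounds)
            "housing": (45.0, 11.0),
            "cover":   (9.0, 2.2),
            "adapter": (38.0, 4.0)}

cost_per_lb = {name: cost / weight for name, (cost, weight) in castings.items()}
baseline = sorted(cost_per_lb.values())[len(cost_per_lb) // 2]    # rough baseline

for name, cpl in cost_per_lb.items():
    flag = "HIGH" if cpl > 1.5 * baseline else ""
    print(f"{name:8s} {cpl:5.2f} $/lb {flag}")
# The adapter, at 9.50 $/lb against roughly 4 $/lb for the rest, is the place to dig.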
The use of these cost analysis techniques will literally explode costs in such a
way that a circle can be drawn around the areas that show where work is required.
The functional approach techniques can be used to study the high cost area. It does
not follow automatically that high cost is unnecessary cost. High cost may be
unnecessary cost, but we must use other value tools to find out if it really is.
Technique 5 — Determine Functional Area Costs
One purpose of this technique is to help answer the question, “where should effort
be applied?” If the study item is a part or a simple assembly (two or three parts),
then the scope is already defined. If the project is a complex assembly which could
have its principle of operation changed by a new design concept, questions such as
available time, savings potential, type of improvements, stage of product maturity,
etc., should be considered.
Divide the present cost into functional areas to define the project scope. Division
of cost into functional areas will pinpoint high cost differently than usual cost
visibility analysis, and will help to broaden or narrow the scope of study. This will
direct effort to more profitable areas. An example is shown in Figure 12.5.
FUNCTION DETERMINATION
Function analysis is the foundation of value control. A product or system is not
analyzed from a part or action standpoint, it is analyzed from a function standpoint
to break down the barriers to visualization for improved creativity and the develop-
ment of the maximum number of practical alternatives. The objective is to obtain
the maximum benefit possible; cost reduction is often as high as 30 to 100 percent.
Function analysis makes it possible to set high cost reduction goals and to meet
them. This can be done because basic functions are identified and isolated, and other
methods to perform them are developed through the use of applied creativity. The
function approach requires that certain definitions, ground rules, techniques, and
relationships be understood.
Experience has shown that function analysis combined with the systematic
approach of the job plan will almost invariably produce desired cost reductions.
However, the goal of eliminating all unnecessary costs is dependent upon the skill,
training, dedication, and organizational support attained.
What Is Function?
Function is the property that makes something work or sell. Function states what
the product or system does. It is the objective of the action, the result to be accom-
plished, and can be defined in some unit of measure such as weight, quantity, time,
money, space, or some other practical measure.
Functions are expressed in two words, a verb and a noun. The use of only two
words forces a brief or terse definition of the necessary characteristics. The use of
two words avoids the possibility of combining functions and of attempting to define
more than one simple function at a time. The two word requirement aids in achieving
the broadest level of abstraction. It is a forcing technique that causes a struggle to
clarify understanding and aid visualization for creativity.
Proper identification of function involves a point of view. The function must be
identified in such a way that it is stripped of all restrictions that would inhibit devel-
opment of new and better ways to provide the function.
For example, consider the fastening of a simple nameplate to a part. One might
describe the function that applies as “attach nameplate.” It would be far better to
describe the function as identify product, because a nameplate is only one of many
ways to achieve the desired function. Nameplates might be riveted, welded, or
cemented. However, it is also possible to identify products by etching, stamping,
molding, or printing on the part, thereby eliminating the nameplate altogether. Some
examples of functions are:
Identifying the function in broadest terms provides the greatest potential for
value improvement because it allows greater freedom to creatively develop better
value alternatives. Further, it tends to overcome any preconceived ideas of the manner
by which the function is to be accomplished.
Basic Functions
There are two types of functions: basic and secondary. Basic function is the specific
purpose for which a device is designed and made. Or stated another way: basic
function is the performance feature that must be attained if the total item or system
is to work or sell. Consider a screwdriver. “Transfer torque” is the basic function.
If this function is eliminated, the screwdriver will not work.
Can you determine what parts perform these functions in a typical screwdriver?
Secondary functions are sometimes unwanted or unnecessary. An example would
be “make noise.” We have a complete sound laboratory trying to eliminate or control
noise on our cars. On the other hand, money was added to the turn signal flasher to
“increase noise” and then later to “control noise.”
In the automobile business, styling is a major factor. Styling features may be
basic or secondary. However, whether they are basic or secondary is more subjective
than in a mechanical part. For this reason, good supporting marketing data are
required to guide and advise the stylist of the consumer’s attitude and requirements.
This is a simple technique that asks the question “what must the part or assembly
do?” It applies to all projects and requires a clear determination of all use and esteem
functions. Each function should be expressed in two words — see Figure 12.4.
After all functions have been listed, classify them as basic or secondary — refer
to definitions of basic and secondary in the ground rules. This technique clarifies
the function, prevents combining of functions, and reveals the relationship of basic
and secondary functions.
This technique imposes the strictest discipline and requires the acceptance of a
forcing assumption: “only basic function has value.” The assumption is made as a
mental step in order to force our thinking to search for new and simpler designs that
will provide the basic function in such a way that the least number of secondary
functions is required to make it work and sell.
This technique is best applied to assemblies; however, it can be adapted to
single parts. The blast-create-refine technique as described in detail in the creative
phase is an example of a special case of this technique. The value, as it is developed
here, is the combined result of individual judgment, creativity, and past experience
that expresses what the function should cost based on the work it performs (and the
way it could be done).
There are many variations to this technique. One is to expand the scope of study
and eliminate imposed functions by revising each listed function determined in
technique 2 and asking the question, “Is this function performed this way as a result
of the basic design concept?” Redesign to eliminate imposed functions means
expanding the scope thereby causing adjoining components to dictate new limits.
Some of the largest savings, 50 to 80%, will come from using this technique.
Cost-function analysis can, for example, highlight the cost required to satisfy the function “transmit torque.” This approach
takes value engineering from an art to a science and opens the door for value research.
While the basic concept is still the same, equating cost to function, a considerable
grasp of basic value techniques and mathematics is required.
It should be noted that any item may have more than one input or output, and
that unless inputs are transformed into outputs, the item has no value. Since function
is the key link between input and output, this is equivalent to stating that only function
can have value.
This technique is the primary function analysis technique used in most cases. This
system was developed to assist in performing function analysis on a complete system.
The use of determination logic helps to identify and verify the basic functions and
also helps identify higher and lower level functions and supporting systems. The
technique requires construction of a FAST (Functional Analysis System Technique)
diagram by the use of determination logic questions: How? Why? and When? The
steps necessary to complete a FAST diagram are:
[FAST diagram fragment: HOW reads to the right and WHY to the left; Achieve comfort, Control air, then Direct air and Modulate air.]
Select the function you think is the basic function and apply the logic questions
to the right and left of the basic function. Ask how the function is performed to
determine the function to the right. Ask why this function is performed to determine
the function to the left. It may be necessary to select more than one of the functions
to get the correct basic function.
In the example, the function “control air” is selected as the basic function. How
is air controlled? The reply is “direct air” and “modulate air.” Both answer the “how”
question. Do they both answer the “why” question? Why is the air modulated? Why
is the air directed? The answer is to “control air.” The logic questions are satisfied
and we can add the next FAST diagram block — see Figure 12.6.
Now the question “why do we control air?” must be answered. The reply is
“achieve comfort.” How do we achieve comfort? Control air. So the basic logic
questions have been satisfied for the basic function “control air”.
The basic function has been isolated, and the rest of the primary path functions
can be determined. These primary path functions become the basic framework for
developing a complete FAST diagram. The “how” and “why” logic questions must
now be applied to every function. Each must satisfactorily answer the question
relative to its position in the diagram. For example: If we take the function “modulate
air,” we can further analyze it into vary opening, direct force, apply torque, apply
effort.
[Expanded FAST diagram (see Figure 12.6): scope lines bracket the study; the primary path runs Achieve comfort, Control air, Modulate air, Apply torque, Apply effort, with Direct air, Meet specs, and other concept and assembly supporting functions branching off, "and so on" indicating further expansion.]
The function to the right of any given function tells how it is performed. The function to the left tells why it is performed. The function above or below tells when it is performed, and the part or item listed immediately below the function tells what performs it.
These simple words — how, why, when, and what — stimulate creativity. The
answers also keep your thinking close to the area in which a change is being sought.
In further utilization of your FAST diagram, try incorporating secondary func-
tions into existing parts by modification to the part. You will have the most success
if the functions are next to each other or happening at the same time.
This technique may be applied to existing or proposed designs, concepts, pro-
cedures, processes, documents, or any type of software. The primary purpose is to
identify functional relationships to stimulate creativity.
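For readers who want to experiment with the logic questions, the following minimal sketch (not part of the original method; the dictionary structure and function names are simply assumed for illustration, using the climate-control example above) represents a FAST diagram so that the how and why questions can be answered mechanically.

```python
# Minimal sketch of a FAST diagram as a how/why tree (assumed structure,
# using the "achieve comfort" example from the text).
fast = {
    "achieve comfort": ["control air"],           # how is comfort achieved?
    "control air": ["direct air", "modulate air"],
    "modulate air": ["vary opening", "direct force", "apply torque", "apply effort"],
}

def why(function, diagram):
    """Answer the 'why' question: return the higher-level function(s)."""
    return [parent for parent, hows in diagram.items() if function in hows]

def how(function, diagram):
    """Answer the 'how' question: return the lower-level function(s)."""
    return diagram.get(function, [])

print(how("control air", fast))   # ['direct air', 'modulate air']
print(why("modulate air", fast))  # ['control air']
```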
The FAST diagram clearly identifies functions and their relationship to each other.
The techniques of cost visibility identify high cost areas. These techniques can now
be combined to clearly identify the relationship between cost and function. This will
make it possible to identify the areas of unnecessary cost for the application of
creative problem-solving techniques.
The cost function worksheet (Figure 12.3) is the basic tool. The functions are
listed across the top from the FAST diagram. The parts, processes, or actions are
listed vertically with their actual costs — see format in Figure 12.7.
It is now necessary to determine the actual cost of each function by applying
the cost for the part or action that causes the function to be performed. In many
cases, it may be necessary to break the cost down into several functions. For example,
say in a foldout sample, the thumbwheel costs $.0957. This is distributed over three
functions: provide decor, apply torque, limit rotation. The percentage of cost applied
to each is a matter of qualified judgment unless a detailed breakdown can be obtained.
In our example, let us assume that the function “modulate air” is made up of
the cost of three items totaling $.1100. This is the cost of the function “modulate air.”
In order to find the cost of the system to modulate air, all of the functions in the
critical path plus the supporting functions must be totaled. This cost is, say, $0.2699
or X% of the total assembly cost. In other words, if the modulate air function could
be eliminated, $.2699 could be removed from the assembly.
These function costs can be applied to the FAST diagram for convenience. This
enables a ready determination of what can be accomplished by eliminating or
combining functions to provide a less costly assembly.
When used in conjunction with the FAST diagram, the cost function worksheet
provides an accurate function cost. This can then be evaluated in terms of its value
or worth. By the application of creative techniques, new ways to perform the desired
function can be developed.
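A rough illustration of the worksheet arithmetic is sketched below. Apart from the thumbwheel cost quoted above, the part names, costs, and allocation fractions are hypothetical; the point is only to show how part costs are spread over the functions they serve and then totaled per function.

```python
# Hypothetical cost-function worksheet: each part's cost is split across
# the functions it performs; allocation fractions are judgment calls.
worksheet = {
    # part: (cost, {function: fraction of the part's cost})
    "thumbwheel": (0.0957, {"provide decor": 0.3, "apply torque": 0.5, "limit rotation": 0.2}),
    "cam":        (0.0400, {"modulate air": 1.0}),
    "door":       (0.0700, {"modulate air": 0.6, "direct air": 0.4}),
}

function_cost = {}
for part, (cost, split) in worksheet.items():
    assert abs(sum(split.values()) - 1.0) < 1e-9   # each part fully allocated
    for function, fraction in split.items():
        function_cost[function] = function_cost.get(function, 0.0) + cost * fraction

for function, total in sorted(function_cost.items(), key=lambda kv: -kv[1]):
    print(f"{function:15s} ${total:.4f}")
```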
After a FAST diagram is complete, with part or action costs assigned to the proper
functions, values can be assigned to each of the functions. By assigning cost first,
the task force members become familiar with detail costs and are therefore better
prepared to assign values to the functions by comparison.
Value is defined as the lowest cost to reliably perform a function. In evaluating
a function, the value or worth used must be the intrinsic value, not the result or
effect of that function, and it does not include other functions on the FAST diagram.
(During this phase of the job plan, the team must be optimistic, just as in the creative
phase; if not now, when will the team be optimistic?)
One of the easiest ways to determine value of a function is by comparison to
another method to perform the function at a lower cost. For example, in a given
design, the “support weight” function was performed by a columnar part of its
attachment, for a function cost of 23 cents. The team assigned a value of 5 cents
for the “support weight” function because the team members reasoned that the
specified load could be supported in suspension for that amount. At this time, the
team did not have a solution to the problem, but during the brainstorming session
the team generated proposed changes that were developed to accomplish the overall
target.
In many cases, function values cannot be assigned by comparison, and other
means must be used such as:
1. Apply the test for value — How much of my own money would I pay for
that function?
2. Rate function numerically — Apply ratios to function cost to arrive at
new values.
3. Apply VE techniques for lower cost —Set a goal or target for functions
(percentage reduction).
4. Others — Make use of other information, such as noticeable differences,
value standards, and mathematical comparisons.
The sum of the individual function values establishes a product or total system
value; this becomes the team’s new target. Now the team knows which functions to
attack during their creative sessions — the high cost and low value functions.
Once these values have been established by the team, place this assigned value
in the upper right-hand corner of the function box. The team has isolated the problem
and set its own goal(s) for improvement.
Remember these ten tests for value:
CREATIVE PHASE
The creative phase requires the use of your imagination to develop alternative
solutions to the functions defined in Phase I. The systematic value control approach
makes use of “brainstorming” as a principal technique; however, the “blast-create-refine” technique must frequently be used in conjunction with others.
Brainstorming is defined as the combined effort of two or more people to
determine all possible methods for performing the required functions. There is no
attempt at evaluation; this will come later. The requirement is to develop any and
all ideas that may include the outstanding alternative to satisfy the required functions.
It is necessary to become free from the constraints of past habits and attitudes
and apply thought needlers — see Table 12.2 — to increase the ideas when they
begin to slow down. Refer to specialty processes, products, or materials for ideas.
Apply the use of standards. Seek ideas from plant specialists and supplier represen-
tatives. Use catalog files such as Thomas Register and Sweets. Remember:
• Ideas come from every place and anybody. Do not restrict your thinking!
• Conduct a brainstorming session on each required function. List all ideas.
• Try to eliminate or combine functions. Be as flexible as possible.
TABLE 12.2
Idea Needlers or Thought Stimulators
• How much of this is the result of custom, tradition or options?
• Why does it have this shape?
• How would I design it if I had to build it in my home workshop?
• What if this were turned inside out? Reversed? Upside down?
• What if this were larger? Higher? Wider? Thicker? Lower? Longer?
• What else can it be made to do?
• Suppose this were left out?
• How can it be done piecemeal?
• How can it appeal to the senses?
• How about extra value?
• Can this be multiplied?
• What if this were blown up?
• What if this were carried to extremes?
• How can this be made more compact?
• Would this be better symmetrical or asymmetrical?
• In what form could this be? Liquid, powder, paste, or solid? Rod, tube, triangle, cube, or sphere?
• Can motion be added to it?
• Will it be better standing still?
• What other layout might be better?
• Can cause and effect be reversed? Is one possibility better than the other?
• Should it be put on the other end or in the middle?
• Should it slide instead of rotate?
• Can you demonstrate or describe it by what it is not?
• Has a search been made of the patent literature? Trade journals?
• Could a supplier supply this for quicker assembly?
• What other materials would do this job?
• What is similar to this but costs less? Why?
• What if it were made lighter or faster?
• What motion or power is wasted?
• Could the package be used for something afterward?
• If all specifications could be forgotten, how else could the basic function be accomplished?
• Could these be made to meet specifications?
• How do competitors solve problems similar to this?
For some people, premature criticism, ridicule, or other stifling conditions will “freeze” creative thought. Others will transfer their
creativity to other parts of their lives: home, church, recreation, any place but the job.
If we are to encourage creative productivity, we must eliminate the notion that the instant an idea is proposed it must be bitten, broken, or kicked. One helpful technique for breaking ineffective habits and overcoming stifling environments is to commit ourselves firmly to a goal in front of our associates, superiors, or even the general public. In actual practice, this technique resolves itself into the establishment of firm
deadlines and numerous subdeadlines in the course of a project. You will experience
that process during every value control experience.
Another technique is the inversion technique. It is used to solve the what-causes-
it type problem. This technique concentrates on inverting the problem. For example,
if the problem is how to cut cost, the technique would ask how you increase the
cost effectively.
Yet another technique for breaking through our judgment controls of creative
expression is that of “blast, create, and refine.” This technique is extremely helpful
in reaching value objectives. For years, we have been trying to reduce cost by 5,
10, or 15% through normal cost reduction procedures (material, fabrication methods,
etc.) This has become more and more difficult.
If we try to take out a larger percentage, say 50%, we are immediately forced
to take a new approach to the problem. The blast, create, and refine (BCR) technique
combines the function approach with creativity and evaluation of ideas in order to
find new, more effective ways to accomplish the required function in products,
processes, or procedures. There are several reasons to use the BCR approach;
however, the three major ones are:
Intense study of any product shows that it is, to a greater or lesser degree, the
result of a chain of happenings (evolution). Even the new products that value
engineering may bring forth will, to some extent, also exhibit this type of evolution.
Therefore the search for better value requires that we ask the following vital ques-
tions: How can this chain of influence be stopped? How can we objectively look at
a function? The technique of blasting, creating, and then refining is especially
directed toward accomplishing these objectives. Its application is in three phases,
which are:
PHASE 1. BLAST
This phase consists of specifically identifying that portion of the problem under
study that does, in fact, perform the basic function (or part or most of it). Next, we
blast that portion out of the problem (isolate it) so that we can think about it clearly
and specifically. The basic function is the first block in the FAST diagram.
PHASE 2. CREATE
In this phase we try to answer the question: What do I have to add to that which I
isolated, in the blast phase, to make it capable of performing the required functions
or to have it work and sell? Alternatives are developed and costs are put on each
one. Make no attempt to evaluate alternatives at this time.
PHASE 3. REFINE
We evaluate the ideas developed in the create phase and through an objective process
of refining, develop an approach which will meet all the performance, cost, and
delivery parameters required.
EVALUATION PHASE
Evaluating the ideas developed during the creative phase is a critical step in the job
plan. The ideas generated will include practical suggestions as well as wild ideas.
Each and every idea must be evaluated without prejudice to determine if it can be
used or what characteristics the idea has that may be useful.
Proper evaluation of the ideas is a critical step. Remember, if an idea is discarded
without thorough evaluation, the key to a successful solution may be lost. The time
to create ideas is in the creative phase. If an idea is discarded, there may not be
another opportunity to develop it again.
Evaluation processes can range from the simple to the complex. The methods
selected depend to some degree on the number and quality of the ideas generated.
(It is not uncommon to have several hundred ideas to evaluate.) In the evaluation
process, do not be too critical. Look for the good rather than the bad and do not
present unnecessary roadblocks.
The initial screening will weed out worthless ideas and sometimes generate new
ideas or variations of the present ones. The initial screening will also begin to classify
the ideas into basic groups that, in effect, constitute a second stage in the screening
process. After the initial screening, it may be necessary to resort to systems designed
to aid the process. Two favored, because of their simplicity, are paired comparison
and Pareto voting. When the initial list of ideas has been screened and evaluated
and reduced to a choice between several alternatives, evaluate the good and bad
features of each alternative. Watch out for roadblocks, and try to determine if they
can be eliminated and how they may be eliminated.
Experience has shown that this evaluation process is a difficult task. The impulse
to quickly screen through the list to zero in on the best ideas must be controlled.
The mass of data must be handled systematically to obtain maximum benefit from
the creative phase. Careful screening is essential to isolating the best concept to
carry over into the planning phase where the idea will be developed into a practical
recommendation for action.
Pareto Voting
Pareto voting is based on Pareto’s law of maldistribution. Vilfredo Pareto
(1846–1923), a political economist, observed a common tendency of wealth and
power to be unequally distributed. This observation has been refined to the degree
that it can be said that there is an 80/20 percent relationship between similar elements.
For example, twenty percent of the parts in an assembly contain eighty percent of
the cost. This is most useful information in cost estimating; however, the relationship
holds for many diverse examples such as the following:
In value engineering, it is frequently necessary to select the best ideas, the highest
value functions, the highest potential projects, or any of a number of other require-
ments. It has been found that the application of Pareto voting can help to simplify
the list and will in most cases ensure that the most important items have been
selected. It also produces results quickly and can be incorporated into the value
engineering process to allow continuous operations without undue disruptions.
Pareto voting is conducted by requesting each team member to select what he
or she believes are the items or elements that have the greatest effect on the system.
This list of items is limited to twenty percent of the total number of items. For
example, each team member would be allowed to select six items out of a list of
30. The vote is on an individual basis to obtain as much objectivity as possible.
The resultant lists are then compared and arranged into a new consolidated list
in descending order by the number of votes each item received. Usually, several
items will have been selected by two or more team members. The top 10 to 15 items
are then ranked and weighted in a second step by using paired comparisons.
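As a minimal sketch of the consolidation step (the item names and ballots below are invented), each member's ballot is a list of roughly the top twenty percent of the candidate items, and the ballots are merged into a single list ordered by the number of votes received.

```python
from collections import Counter

# Hypothetical ballots: each member picks about 20% of the candidate items.
ballots = [
    ["idea 3", "idea 7", "idea 12", "idea 18", "idea 21", "idea 27"],
    ["idea 3", "idea 7", "idea 9",  "idea 18", "idea 22", "idea 27"],
    ["idea 3", "idea 5", "idea 12", "idea 18", "idea 21", "idea 30"],
]

votes = Counter(item for ballot in ballots for item in ballot)

# Consolidated list in descending order of votes received.
for item, count in votes.most_common():
    print(f"{item:10s} {count}")
```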
Paired Comparisons
TABLE 12.3
The Worksheet for Setting the List

Key Letter   Alternatives            Weight
A            Majorca
B            Florida
C            Colorado
D            Greek island cruise

TABLE 12.4
Evaluation Summary

      B     C     D
A     A2    A2    A1
B           C2    D2
C                 D1
From the evaluation summary grid, compare A to B, Majorca to Florida, and place the selected location letter in the A-B box. If the difference is major or clearly in favor of A, place a suffix 2 after the letter A; the A-B box should read A2. Now compare A to C. If the selection is A, place an A in the A-C box; if the difference is again great, add the suffix 2. Next compare A to D. If A is again the selection, place the A in the A-D box; if it requires thought to make the decision, the numerical suffix should be 1, minor. Drop the A and now compare B to C and B to D. Lastly, drop the B and compare C to D — see Table 12.4.
To determine the ranking and weighting, add up the As, Bs, Cs, etc. In the
example the result is as shown in Table 12.5.
This analysis shows Majorca to be the most desirable. It is 40 percent more
desirable than a Greek island cruise and 60 percent more desirable than Colorado.
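The tally behind Tables 12.4 and 12.5 can be reproduced with a few lines of code. The sketch below simply records each comparison as the winning key letter plus its suffix (2 for a major difference, 1 for a minor one) and sums the suffixes to obtain the weights.

```python
# Paired-comparison grid from Table 12.4: pair -> (winner, magnitude).
comparisons = {
    ("A", "B"): ("A", 2), ("A", "C"): ("A", 2), ("A", "D"): ("A", 1),
    ("B", "C"): ("C", 2), ("B", "D"): ("D", 2),
    ("C", "D"): ("D", 1),
}

alternatives = {"A": "Majorca", "B": "Florida", "C": "Colorado", "D": "Greek island cruise"}
weights = {key: 0 for key in alternatives}

for pair, (winner, magnitude) in comparisons.items():
    weights[winner] += magnitude

for key, weight in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{key} {alternatives[key]:20s} {weight}")
# A Majorca 5, D Greek island cruise 3, C Colorado 2, B Florida 0
```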
Matrix Analysis
Although Pareto voting and paired comparison satisfy the screening and evaluation
process in most cases, there are times when a more detailed analysis is required.
TABLE 12.5
Ranking and Weighting
Key Letter Alternatives Weight
A Majorca 5
B Florida 0
C Colorado 2
D Greek island cruise 3
TABLE 12.6
Criteria Affecting Car Purchase XXXX — Paired Comparison

      B     C     D     E     F     G        Coding & Results
A     A1    A1    D1    E1    F1    A1       F – Cost         6
B           B1    B1    E1    F1    G1       G – Economy      4
C                 C1    E1    F1    G1       E – Image        4
D                       E1    F1    G1       A – Styling      3
E                             F1    G1       B – Comfort      2
F                                   F1       C – Reliability  1
                                             D – Selection    1
TABLE 12.7
Criteria Weighing
Criteria
A B C D E F G Total Rank
Weight 3 2 1 1 4 6 4
Alternatives
Ford
Chrysler
Chevy
Honda
Audi
Two such cases could be when a decision involves large financial outlays or when
serious consequences could result from a change. In these cases, every effort must
be made to base a decision on the most objective data possible. For many of these
decisions, there is a need to rank and weigh a number of alternatives against a series
of specific criteria. By doing this, we learn which trade-offs must be made for the
various requirements of the project, enabling us to make the best decision. In these
cases, a combinex method is recommended.
Combinex was developed by Fallon (1971) and is based on comparing a number
of alternatives to a series of criteria. Each alternative is compared to the criteria in
turn and given a specific numerical rating. The resultant analysis clearly ranks and
weighs each alternative against each criterion, which allows for trade-offs based on
clearly defined data. This makes it an excellent tool in decision making.
Example
To illustrate the process, a typical problem familiar to most people will be used.
The problem is to select an automobile for purchase. The criteria for selection have
been taken from a list of factors affecting the sale of most products. The criteria
selected will have a different value for each individual and have been chosen to
illustrate several points. The selection criteria are:
A. Styling E. Image
B. Comfort F. Cost
C. Reliability G. Economy (mi/gal)
D. Selection (models available)
In other instances, the criteria used could be the factors affecting the purchase
of manufacturing equipment, location of a plant, construction of various types of
facilities, or any other requirement involving a series of criteria for selection.
The alternatives to be considered for purchase are the XXXX models listed
below along with their fictitious base prices. The analysis was made in April XXXX.
The same analysis made in September XXXX might have resulted in a different
conclusion as time and opinions change.
Alternatives
1. Ford $ 14,000
2. Plymouth $ 13,600
3. Chevrolet $ 14,500
4. Honda $ 15,000
5. Audi $ 28,000
TABLE 12.8
Criteria Comparison
Criteria
A B C D E F G Total Rank
Weight 3 2 1 1 4 6 4
Alternatives
Ford 2/ 4/ 3/ 4/ 3/ 3/ 4/
Chrysler 4/ 4/ 4/ 5/ 3/ 3/ 4/
Chevy 4/ 4/ 3/ 4/ 3/ 3/ 4/
Honda 4/ 3/ 3/ 3/ 3/ 3/ 5/
Audi 3/ 4/ 3/ 3/ 4/ 1/ 3/
TABLE 12.9
Criteria Weight Comparison — Completed Matrix
Criteria
A B C D E F G Total Rank
Weight 3 2 1 1 4 6 4
Alternatives
Ford 2/6 4/8 3/3 4/4 3/12 3/18 4/16 67 4
Chrysler 4/12 4/8 4/4 5/5 3/12 3/18 4/16 75 1
Chevy 4/12 4/8 3/3 4/4 3/12 3/18 4/16 73 3
Honda 4/12 3/6 3/3 3/3 3/12 3/18 5/20 74 2
Audi 3/9 4/8 3/3 3/3 4/16 1/6 3/12 57 5
5 Superior
4 Good
3 Average
2 Fair
1 Poor
After the first alternative, the Ford, had been compared to each criterion in turn, the second alternative, the Chrysler, was compared. In each case,
each team member expressed an opinion individually. In some instances, it was
necessary to develop an average. In other cases, the decision was unanimous. This
was done until each alternative was compared to each criterion.
The third step of the process is to multiply the criteria weight by the comparison
value as shown in Table 12.9. For example, the Ford styling weight of 3 was
multiplied by the value of 2. The resultant product of 6 is inserted in the lower
section of the box. After completion of each individual weighing, the score is
summed under the total column.
The total score is shown in the column at the right, and the choices are ranked
in the far right column. This analysis shows the first choice to be the Chrysler and
the last choice to be the Audi, as illustrated in the complete combinex scoreboard
(Table 12.11).
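The scoring in Table 12.9 is just a weighted sum, as the short sketch below illustrates using the same criteria weights and team ratings; it reproduces the totals of 75, 74, 73, 67, and 57.

```python
# Combinex scoring from Tables 12.7 to 12.9: criterion weight x rating, summed per alternative.
criteria_weights = {"Styling": 3, "Comfort": 2, "Reliability": 1, "Selection": 1,
                    "Image": 4, "Cost": 6, "Economy": 4}

ratings = {  # 5 Superior ... 1 Poor, as rated by the team
    "Ford":     {"Styling": 2, "Comfort": 4, "Reliability": 3, "Selection": 4, "Image": 3, "Cost": 3, "Economy": 4},
    "Chrysler": {"Styling": 4, "Comfort": 4, "Reliability": 4, "Selection": 5, "Image": 3, "Cost": 3, "Economy": 4},
    "Chevy":    {"Styling": 4, "Comfort": 4, "Reliability": 3, "Selection": 4, "Image": 3, "Cost": 3, "Economy": 4},
    "Honda":    {"Styling": 4, "Comfort": 3, "Reliability": 3, "Selection": 3, "Image": 3, "Cost": 3, "Economy": 5},
    "Audi":     {"Styling": 3, "Comfort": 4, "Reliability": 3, "Selection": 3, "Image": 4, "Cost": 1, "Economy": 3},
}

totals = {car: sum(criteria_weights[c] * r for c, r in scores.items())
          for car, scores in ratings.items()}

for rank, (car, total) in enumerate(sorted(totals.items(), key=lambda kv: -kv[1]), start=1):
    print(f"{rank}. {car:8s} {total}")
# 1. Chrysler 75, 2. Honda 74, 3. Chevy 73, 4. Ford 67, 5. Audi 57
```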
Analyze Results
An analysis of the table shows that although the Audi was a poor fifth in the selection
process, the primary reason was cost. If the cost had been average, the additional
12 points would have raised Audi’s total above that of the Ford. The table also shows
that if the Ford styling had been rated as good, 4, this car would have been ranked
second with a score of 73. Although styling was originally ranked fourth in impor-
tance with a 3, other factors may now be considered. An improvement in reliability
would not have a major effect on the overall rating, but a reduction in cost or an
improvement in economy could have. Cost could be negotiated; economy would
require some basic product changes.
IMPLEMENTATION PHASE
The objective of a value engineering study is the successful incorporation of rec-
ommendations into the product or operations. However, a successful project often
starts back at the beginning. Each project must be thoroughly analyzed to determine
its potential for benefit and the probability of implementation. This is as important
as the knowledge and skill required to apply the system to attain successful results.
An excellent idea is worthless unless it can be properly implemented. If it is not
implemented, no one will obtain any benefit. It must also be implemented in the
manner intended. Unfortunately, there have been many cases on record where the
idea could not be implemented because of the high cost to make the change. There
are other cases where the recommendations were not properly understood and
implementation resulted in increased cost. This often results in disillusionment or
the feeling that value engineering does not work on our problems. Actually, in most
cases the real problem was that the problem was not properly diagnosed. It was not
that value engineering does not work; it was a matter of inefficient preliminary
analysis and preparation.
It does not seem reasonable to expend the effort and funds required to make a
value study without first having done the necessary work to ensure that the project
is practical, that it can be implemented, and that the necessary funds and people will
be available.
DEVELOPING A PLAN
There are five steps to incorporating value engineering into operations. They are as
follows:
The first step is a one-day orientation for the managers involved. Its purpose is not to teach them value engineering but to demonstrate the benefits to be achieved and
how they are produced. This establishes the need to apply the process and defines
the necessary commitments for success. Those who should attend would be everyone
who will be expected to support operations with time, manpower, and funds.
It is difficult for a large group of high-level people to attend a one-day seminar.
However, it is essential for successful operations. Attendance also broadcasts the mes-
sage of importance to all levels of the organization. In addition, the managers attending
often derive substantial benefit from the session that can lead to immediate results.
The one-day orientation should be a case study, so participants can try the various
methods and systems. The result will be understanding of the system and how it
may be applied to various projects. It will identify the organizational and operational
pitfalls and in many cases define projects for future workshops.
Completion of the management orientation will create a need for a decision to
determine how operations will proceed from this point. If a consultant has been
brought in to aid in progressing to this point, the consultant will now be able to
assist in getting down to brass tacks. If one has not been brought in, now would be
the time. The consultant’s experience can ensure success from the start and increas-
ingly successful performance as skill develops. At this point there are two ways to
go. However, in the long run, the same objective will be achieved. One approach is
a large multi-team workshop or series of workshops directed towards indoctrinating
a large group of people (30–40) in the system at one time. These people would learn
the process while applying the methods and systems to projects of current interest
to the company. These workshops usually develop substantial monetary benefit for
the company. The second approach is one or two teams working on a specific project.
Both methods can be successful. However, the first is better suited to very large
organizations with large amounts of manpower. The second can be used in both
large and small organizations and produces substantial benefit that can be used for
further development. In many cases, a combination of the two plus a series of
orientations can be used effectively. The specific plan depends entirely upon the
organization and should be tailored to fit.
ORGANIZATION
The first step is to determine the objective, as was discussed earlier. The second
should be to develop a plan to achieve the objective and set up the necessary
organization. The third step is implementation of the plan; the fourth, follow-up and
audit operations.
The essential elements are:
The coordinator will develop and organize a plan for management approval.
Inherent in the plan should be education and application programs for all who will
be involved in operations. The coordinator should be required to select a consultant,
develop an educational plan, aid in organizing and conducting workshops, and
identify people who may be developed into value specialists. The extent of these
programs will depend upon the size and scope of the company.
From what we have noted here, it is obvious that the problem is complex from
the standpoint of options. However, successful operations do not have to be extensive.
Starting small and developing successfully is preferred to a lot of noise and a big
crash because of poor planning.
ATTITUDE
One of the most important factors in value engineering is attitude — attitude on the
part of both management and people on task teams. A positive, cooperative, sup-
portive attitude is required. In many cases, value engineering actually requires a new
management style. It cuts across organizational lines, looks at taboo aspects of a
problem, and recommends drastic changes compared to the past. To accept these
disruptions to the old way of doing business requires faith and understanding — a
positive attitude.
In many cases, whenever a new idea is presented to an American management
team the initial reaction is negative. The first remarks are, “It is interesting but let
me tell you what is wrong with it.” The best approach to this reaction is to listen
carefully. The managers may have some ideas you overlooked. After all negative
reaction has run out, be prepared to ask some specific positive questions of the group
that will develop positive responses. For example, “I understand your difficulty in
producing this part in the plant. What do you think we would have to do to make
this practical? Do you see any changes we might make to satisfy our methods?”
This will usually work to a positive result.
Never argue. In many cases it is beneficial to solicit negative ideas, but be
prepared to develop positive questions. Our attitude is that we must begin to ask
“What’s good about this idea?” “How will it help us to do a better job?”
Changing people’s attitudes is difficult and may never happen, but understanding
the reasons behind the negative reaction should make it possible to persuade most
people that they can benefit from success. Remember, there is a risk of failure in
new ideas. New ideas require change, and they may not work. People want proof
that something will work before they will support it. However, maybe you can show
that the benefits are greater than the risk.
The best way to change people’s attitude is to show that top management is
interested in value engineering and expects participation and results in achieving the
stated goals.
VALUE COUNCIL
The value council is a small group of high-level executives who oversee operations.
In a small company, it might be chaired by the president; in a large company, by a
division manager. The council should be staffed with people who have the authority
to make decisions relative to acceptance and/or rejection of proposals, authorizing
funds, and manpower. They set the attitude, develop the environment, break the
bottlenecks, and by their interest and visibility lend credibility to participation and
provide authority to operations.
It is important that members of the council make every effort to attend council
meetings except in cases of dire emergency. A member who is unable to attend
should authorize a key assistant to act on his or her behalf. If the council attendance
degenerates, the message sent is that we are losing interest.
The council should be made up of five to six people. Their duties are as follows:
• Set objectives
• Guide operations
• Monitor progress
• Eliminate roadblocks
• Recommend/approve projects
AUDIT RESULTS
There are two reasons to audit results. The first is to determine the actual benefit
received. Is it in accordance with expectations? If not, why not? The second is to
determine how to improve operations. A periodic status report on a project tends to
move it along. This is especially true of cost reductions.
PROJECT SELECTION
Candidate projects are compared by a ranking ratio, R = (S × F)/C, which combines the estimated savings (S), the cost to achieve them (C), and a confidence factor (F).

Confidence factor (F)
Poor 1
Questionable 2
Fair 3
Good 4
Very good 5

Example            1           2           3
S =          $60,000     $20,000      $2,000
C =          $10,000     $10,000        $500
F =                1           5           4
R =                6          10          16
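Assuming the ranking ratio reconstructed above (R = S × F / C), the sketch below reproduces the three example values and orders the candidates by R; the variable names are ours, not the author's.

```python
# Project-selection ratio R = (S x F) / C, using the three examples above.
projects = [
    {"name": "Example 1", "S": 60_000, "C": 10_000, "F": 1},
    {"name": "Example 2", "S": 20_000, "C": 10_000, "F": 5},
    {"name": "Example 3", "S": 2_000,  "C": 500,    "F": 4},
]

for p in projects:
    p["R"] = p["S"] * p["F"] / p["C"]

for p in sorted(projects, key=lambda p: -p["R"]):
    print(f'{p["name"]}: R = {p["R"]:.0f}')
# Example 3: R = 16, Example 2: R = 10, Example 1: R = 6
```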
CONCLUDING COMMENTS
This is a very brief outline of some of the factors to be considered to implement
value engineering operations in your organization. The complete subject would
require an entire book but even then there would be many exceptions.
Value engineering is a task force type system. Set up the group, get the job done,
dissolve the group, get on with the next problem. It is people oriented; it is designed
to get maximum performance from the individual and capitalize on that person’s
performance by supplementing it with the group. Of course, there must be some
type of staff, and they must be skillful in application or know how to get the people
who can produce results.
Remember, success is based on the three As: attitude, awareness, application.
There must be a positive attitude in the organization — an awareness of the need to
change and the skills to apply systems for effective results.
If these guidelines are followed, experience has shown that the benefits will be almost
immediate and far greater than the usually expected results. They are often outstanding.
REFERENCES
Fallon, C., Value Analysis to Improve Productivity, Wiley, New York, 1971.
Miles, L., Techniques of Value Analysis and Engineering, McGraw-Hill, New York, 1961.
13 Project Management (PM)
Project management (PM) is the application of knowledge, skills, tools, and tech-
niques in order to meet or exceed stakeholder requirements from a project. Meeting or exceeding stakeholder requirements means balancing competing demands across the following areas:
1. Project management
2. The project context
3. The process of project management
4. Key integrative processes
5. Project scope management
6. Project time management
7. Project cost management
8. Project quality management
9. Project human resource management
10. Project communications management
11. Project risk management
12. Project procurement management
It is beyond the scope of this book to cover the entire discipline of project
management. However, this chapter will address PM as it may be used in six sigma
and design for six sigma (DFSS) initiatives within an organization. Towards that
end, this chapter will discuss some of the basic concepts of project management and
how the methodology of project management may be used.
WHAT IS A PROJECT?
Projects are tasks performed by people, constrained by limited resources, describable
as processes and subprocesses, that are planned, executed, and controlled within
definite time limits. Above all, they have a beginning and an end. Projects differ
from operations primarily in that operations are ongoing and repetitive while projects
are temporary and unique. A project can thus be defined in terms of its distinctive
characteristics — it is a temporary endeavor undertaken to create a unique product
or service. Temporary means that every project has a definite ending point. Unique
means the product or service is different in some distinguishing way from all similar
products or services.
Projects are undertaken at all levels of the organization. They may involve a
single person or many thousands. They may require less than 100 hours to complete
or over 10 million. Projects may involve a single unit of one organization or may
cross organizational boundaries as in joint ventures and partnering. Examples of
projects include:
Temporary means that every project has a definite ending point. The ending
point is when the project’s objectives have been achieved, or when it becomes clear
that the project objectives will not or cannot be met and the project is terminated.
Temporary does not necessarily mean short in duration. It means that the project is
not an ongoing task, therefore is finite. This point is very important, since many
undertakings are temporary in the sense that they will end at some point, but not in
the same sense that projects are temporary.
For example, assembly work at an automotive plant will eventually be discon-
tinued, and the plant itself decommissioned. Projects are fundamentally different
because the project ceases work when its objectives have been attained, while non-
project undertakings adopt a new set of objectives and continue to work. The
temporary nature of the project may apply to other aspects of the endeavor as well:
TABLE 13.1
Key Integrative Processes
Project Plan Development Project Plan Execution Overall Change Control
The processes are linked by the results they produce: the result or outcome of one becomes an input to another. Among the central processes, the
links are iterated — planning provides executing with a documented project plan
early on and then provides documented updates to the plan as the project progresses.
It is imperative that the basic process interactions occur within each phase such that
closing one phase provides an input to initiating the next. For example: closing a
design phase requires customer acceptance of the design document. Simultaneously,
the design document defines the product description for the ensuing implementation
phase. For more information on this concept see Duncan (1994), Kerzner (1995),
and Frame (1994).
Although the processes seem to be discrete from each other, that is not the case
in practice. In fact, they overlap and interact in ways that are beyond the scope of
this book. A typical summary of the key integrative processes is shown in Table 13.1.
Describing a project is not as simple as it might seem. In fact, this step may be the
most difficult and time consuming. To be successful, the project description should
include: simple specifications, goals, projected time frame, and responsible individ-
uals, as well as constraints and assumptions. Capturing the essence of highly complex
projects in a few words is an exercise in focus and delineation. However, we must
be vigilant about avoiding becoming too simple and in the process failing to convey
the scope of the project. On the other hand, a detailed, complex description may
cloud the big picture. The key is clarity without an excess of volume or jargon.
After describing the project, begin to identify the right players. Too many people
on a team can stifle the decision-making process and reduce the number of accom-
plishments. Cross-functional teams are among the most difficult to appoint. Except
in the pure project organization, where the team is solely dedicated to completing
the project, roles and priorities can cause conflict. In cross-functional teams the
project leader must seek support from the functional managers and identify team
goals.
Before a project schedule is created, each task must be evaluated and assigned an
estimate of duration. There are essentially two ways of looking at this process. The
first way is to establish the duration of the task by estimating the time it takes to
complete the task with given resources. The second way is to estimate the type and
amount of resources needed and the effort in terms of resource hours that is necessary
to complete the task.
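The two ways of estimating can be put side by side numerically; the sketch below uses made-up figures purely to illustrate the difference between a direct duration estimate and an effort-driven estimate.

```python
# Two ways to arrive at a task duration (illustrative numbers only).
# 1. Direct estimate: the task takes 10 working days with the resources assigned.
direct_estimate_days = 10

# 2. Effort-driven estimate: 120 resource-hours of work, 2 people, 6 productive hours/day.
effort_hours = 120
people = 2
hours_per_day = 6
effort_driven_days = effort_hours / (people * hours_per_day)

print(direct_estimate_days, effort_driven_days)  # 10 vs. 10.0 days
```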
The kick-off of the project can really make an impact on project team members’
attendance, performance, and evaluation. Kick-off meetings should convey the fol-
lowing ideas:
The essence of this step is to bring the project to closure. That means that the project
must be officially closed, and all deliverables must be handed over to the
stakeholders — customers. In addition, a review of the lessons learned must take
place, and a thank you to the project team is the appropriate etiquette. Key questions
of this step are:
A GENERIC APPLICATION
OF PROJECT MANAGEMENT
IN IMPLEMENTING SIX SIGMA AND DFSS
Project management brings together and optimizes (the focus is always on allocation
of resources) rather than maximizes (concentrating on one thing at the expense of
something else; maximization leads to suboptimization) resources, including skills,
cooperative efforts of teams, facilities, tools, information, money, techniques, sys-
tems, and equipment.
TABLE 13.2
The Characteristics of the DFSS Implementation Model Using Project Management

Phase 1
• Establish a six sigma and DFSS implementation team of one person from each functional area
• Train those selected in the six sigma and DFSS requirements
• Develop policies and procedures
• Reconfirm quality management commitment

Phase 2
• Capture company objectives
• Define: mission, values, goals, strategy
• Focus on continual improvement

Phase 3
• Make the goal of six sigma and DFSS total improvement
• Examine internal structure and compare it to the goals of six sigma and DFSS
• Determine departmental objectives
• Review structure of the organization
• Review job descriptions
• Review current processes
• Review control mechanisms
• Review training requirements
• Review all communication methods
• Review all approval processes
• Review supplier relationship(s)
• Review risk considerations and how they are addressed
• Review all outputs
• Review all action plans

Phase 4
• Provide applicable and appropriate training
• Prepare the organization for both internal and external audits
• Provide and/or develop appropriate and applicable methodology for corrective action
• Continue focus on improvement
TABLE 13.3
The Process of Six Sigma/DFSS Implementation Using Project Management

Phase 1: Management commitment
Phase 2: Implementation structure
Phase 3: Setup of plan
Phase 4: Working with employees and suppliers
Four steps define the planning process from a project management perspective. They are:
To optimize the output of these four steps the following questions may be raised:
Goal Setting
There are three basic steps in goal setting from a project perspective. They are:
3. What type of tool will generate what it is that you need to see?
4. What type of data are required of the selected tool?
5. Where can you get the required type of data?
These questions, upon further probing, will deliver some very impressive results.
However, the concern remains: How would a Black Belt or even a Master Black
Belt go about getting the correct answers to these questions? We believe the answer
lies with strategic planning and adherence to the basic format of PM. That is, in
the language of PM identify:
Ultimately, all projects in the six sigma/DFSS world are managed in the follow-
ing four categories:
Justification and prioritization of projects are based upon the following methods:
• Benefit-cost analysis:
  • Return on investment (ROI)
  • Internal rate of return (IRR)
  • Return on assets (ROA)
  • Payback period
  • Net present value (NPV)
• Decision analysis and portfolio analysis as applied to project decisions
Benefit-Cost Analysis
Project benefit-cost analysis is a comparison to determine if a project will be (or
was) worthwhile. The analysis is performed prior to implementation of project plans
and is based on time-weighted estimates of costs and predicted value of benefits.
The benefit-cost analysis is used as a management tool to determine if approval
should be given for the project go-ahead. The actual data are analyzed from an
accounting perspective after the project is completed to quantify the financial impact
of the project. The sequence for performing a benefit-cost analysis is:
Return on Assets (ROA)

$$\text{ROA} = \frac{\text{Net Income}}{\text{Total Assets}}$$
where net income for a project is the expected earnings and total assets is the value
of the assets applied to the project.
Return on Investment (ROI)
$$\text{ROI} = \frac{\text{Net Income}}{\text{Investment}}$$
where net income for a project is the expected earnings and investment is the value
of the investment in the project.
There are several methods used for evaluating a project based on dollar or cash
amounts and time periods. Three common methods are the net present value (NPV),
the internal rate of return (IRR), and the payback period methods. Project risk or
likelihood of success can be incorporated into the various benefit-cost analyses as well.
Net Present Value (NPV) Method
Weston and Brigham (1974) and Johnson and Melicher (1982) give the following
equations:
$$\text{NPV} = \sum_{t=0}^{n} \frac{CF_t}{(1+r)^t}$$
where n = the number of periods; t = the time period; r = the per period cost of
capital for the organization (also denoted as i if annual interest rate is used); and
CFt is the cash flow in time period t. Note that CF0, the cash flow in period zero, is
also denoted as the initial investment.
The cash flow for a given period, CFt, is calculated as
$$CF_t = CF_{B,t} - CF_{C,t}$$
where CFB,t is the cash flow from project benefits in time period t and CFC,t is the
project costs in the same time period. The standard convention for cash flow is
positive (+) for inflows and negative (–) for outflows.
The conversion from an annual percentage rate (APR) per year, equal to i, to a
rate r for a shorter time period, with m periods per year, is:
$$r = (1 + i)^{1/m} - 1$$
If the project NPV is positive, for a given cost of capital, r, the project is normally
approved.
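A minimal sketch of the NPV calculation follows; the cash-flow figures and the 10 percent cost of capital are hypothetical, and period 0 carries the initial investment as a negative flow, per the sign convention above.

```python
def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the (usually negative) initial investment."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical project: $100,000 out now, $40,000 net benefit per year for 4 years, 10% cost of capital.
print(round(npv(0.10, [-100_000, 40_000, 40_000, 40_000, 40_000]), 2))  # about 26,794.62, so approve
```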
Internal Rate of Return (IRR) Method
The internal rate of return (IRR) is the interest or discount rate, i or r, that results
in a zero net present value, NPV = 0, for the project. This is equivalent to stating
that time weighted inflows equal time weighted outflows. The equation is
$$\text{NPV} = 0 = \sum_{t=0}^{n} \frac{CF_t}{(1+r)^t}$$
The IRR is that value of r that results in NPV being equal to 0 and is calculated
by an iterative process. Once calculated for a project, the IRR is then compared with
that for other projects and investment opportunities for the organization. The projects
with the highest IRR are approved, until the available investment capital is allocated.
Most real projects would have an IRR in the range of 5 to 25% per year. Managers
given the opportunity to accept a project that has calculated values for IRR higher
than the company’s return on investment (ROI) will normally approve, assuming
the capital is available.
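Because the IRR must be found iteratively, a simple bisection search is one way to compute it. The sketch below assumes a conventional cash-flow pattern (one outflow followed by inflows), for which NPV falls steadily as the rate rises; the cash flows are the same hypothetical figures used in the NPV sketch.

```python
def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-7):
    """Internal rate of return by bisection: the rate r at which NPV = 0."""
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid          # NPV still positive: the discount rate can go higher
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# Same hypothetical cash flows as in the NPV example.
print(round(irr([-100_000, 40_000, 40_000, 40_000, 40_000]), 4))  # about 0.22 per period
```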
The above equations for net present value and internal rate of return have ignored
the effects of taxes. Some organizations make investment decisions without consid-
ering taxes, while others look at the after-tax results. The equations for NPV and
IRR can be used with taxes, if the cash flow effect of taxes is known.
Payback Period Method
The payback period is the length of time necessary for the net cash benefits or
inflows to equal the net costs or outflows. The payback method generally ignores
the time value of money, although the calculations can be done taking this into
account. The main advantage of the payback method is the simplicity of calculation.
It is also useful for comparing projects on the basis of quick return on investment.
A disadvantage is that cash benefits and costs beyond the payback period are not
included in the calculations.
Organizations using the payback period method will set a cut-off criterion, such
as 1, 1½, or 2 years maximum for approval of projects. Uncertainty in future status
and effects of projects or rapidly changing markets and technology tend to reduce
the maximum payback period accepted for project approval. If the calculated pay-
back period is less than the organization’s maximum payback period, then the project
will be approved. (Quite often, in the six sigma/DFSS world, the payback is figured
on a preset project savings rather than time. The most common figure floating around
is a $250,000 per-project savings.)
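The payback calculation reduces to accumulating inflows until the initial cost is recovered. The sketch below (hypothetical figures again) ignores the time value of money, as the simple method does, and interpolates within the final period.

```python
def payback_period(initial_cost, periodic_net_inflows):
    """Periods needed for cumulative inflows to recover the initial cost (time value ignored)."""
    cumulative = 0.0
    for period, inflow in enumerate(periodic_net_inflows, start=1):
        cumulative += inflow
        if cumulative >= initial_cost:
            # Interpolate within the period for a fractional answer.
            return period - (cumulative - initial_cost) / inflow
    return None  # not paid back within the horizon given

# Hypothetical: $100,000 project returning $40,000 per year.
print(payback_period(100_000, [40_000] * 4))  # 2.5 years
```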
Project Decision Analysis
In addition to the benefit-cost analysis for a project, the decision to proceed must
also include an evaluation of the risks associated with the project. To manage project
risks, first identify and assess all potential risk areas. Risk areas include:
After the risk areas are identified, each is assigned a probability of occurrence
and the consequence of risk. The project risk factor is then the sum of the products
of the probability of occurrence and the consequence of risk.
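A minimal sketch of the risk-factor arithmetic is shown below; the risk areas, probabilities, and consequence scores are invented for illustration, and the scoring scale itself would be defined by the team.

```python
# Project risk factor: sum over risk areas of probability x consequence.
risks = [
    {"area": "technical feasibility", "probability": 0.20, "consequence": 8},
    {"area": "schedule slip",         "probability": 0.40, "consequence": 5},
    {"area": "cost overrun",          "probability": 0.30, "consequence": 6},
]

risk_factor = sum(r["probability"] * r["consequence"] for r in risks)
print(round(risk_factor, 2))  # 5.4; compare against alternative projects and prefer the lower value
```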
Risk factors for several projects can be compared if alternative projects are being
considered. Projects with lower risk factors are chosen in preference to projects with
higher risk factors. A more extensive description of risk management is found in
Kerzner (1995).
Applied well, project management provides a number of benefits, including:
• Placing accountability on one person for the overall results of the project
• Assurance that decisions are made on the basis of the overall good of the
project, rather than the good of one or another contributing functional
department
• Coordination of all functional contributors to the project
• Proper utilization of integrated planning and control methods and the
information they produce
• Assurance that the activities of each functional area are being planned
and carried out to meet the overall needs of the project
• Assurance that the effects of favoring one project over another are known
• Early identification of problems that may jeopardize successful project
completion, to enable effective corrective action to prevent or resolve the
problem
Even though formal project management may not always be feasible, its good principles have con-
tributed to the success of thousands of small and medium-sized projects. Many managers
of such projects have never heard of project management but have used the principles.
A wider application of these principles will also help achieve success in smaller projects.
Executives play a key role in the successful application of project management.
A commitment from top management to ensure it is done right must be combined
with the decision to use this approach. Top management must realize that establishing
a project creates special problems for the people on the project, for the rest of the
organization, and for top managers themselves. If executives decide to use this
technique, they should expend the time, decision-making responsibility, and execu-
tive skills necessary to ensure that it is planned and executed properly. Before it can
be executed properly, sincere and constructive support must be obtained from all
functional managers. Directives or memos are not enough. It takes personal signals
from top management to members of the team and functional managers to convey
that the project will succeed and that team members will be rewarded by its success.
In addition, necessary and desirable changes in personnel policies and procedures
must be recognized and established at the onset of the project.
The human aspect of project management is both one of its greatest strengths
and one of its most serious drawbacks. In order for project management to succeed,
it requires capable staff. Only good people can make a project successful. In the
long run, this is true for any organization. Good people alone cannot guarantee
project success; a poorly conceived, badly planned, or inadequately resourced project
has little hope for success. Great emphasis is placed on the selection of good people.
The project leader, more than any other single variable, seems to make the difference
between success and failure. Large projects will require one person to be assigned
the full-time role of project manager. If a number of projects exist but not enough
project managers are available for full-time assignment to a project, assign several
projects to one full-time project manager. This approach has the advantage that the
individual is continually acting in the same role, that of a project manager, and is
not distracted or encumbered by functional responsibilities.
To conclude, project management is an effective management tool used by
business, industry, and government, but it must be used skillfully and carefully. In
review, the following major items are necessary for successful results from project
management in the field of quality:
• Emphasis is placed on selecting the best people for staff, especially the
project leader.
• Good principles of project planning and control are applied.
Effective use of project management will reduce costs and improve efficiency.
However, the main reason for the widespread growth of project management is its
ability to complete a job on schedule and in accordance with original plans and
budget.
REFERENCES
American Heritage Dictionary of the English Language, 3rd ed., Houghton Mifflin, Boston,
1992.
Duncan, W.R., A Guide to the Project Management Body of Knowledge, Project Management
Institute, Upper Darby, PA, 1994.
Harry, M., The Vision of Six Sigma: A Roadmap for Breakthrough, 5th ed., Vol. II, Tri Star
Publishing, Phoenix, AZ, 1997.
Johnson, R.B. and Melicher, R.W., Financial Management, 5th ed., Allyn and Bacon, Inc.,
Boston, 1982.
Kerzner, H., Project Management: A Systems Approach to Planning, Scheduling and Controlling, 5th ed., Van Nostrand Reinhold, New York, 1995.
Stamatis, D.H., Total Quality Service, St. Lucie Press, Delray Beach, FL, 1996.
Turner, J. R., The Handbook of Project-Based Management, McGraw-Hill, New York, 1992.
Weston, J.F. and Brigham, E.F., Essentials of Managerial Finance, 3rd ed., Dryden Press, Hinsdale, IL, 1974.
SELECTED BIBLIOGRAPHY
Frame, J.D., The New Project Management, Jossey-Bass, San Francisco, 1994.
Geddes, M., Hastings, C., and Briner, W., Project Leadership, Gower, Brookfield, VT, 1993.
Lock, D., Gower Handbook of Project Management, Gower, Brookfield, VT, 1994.
Michael, N. and Burton, C., Basic Project Management, Singapore Institute of Management,
Singapore, 1993.
Stamatis, D.H., TQM Engineering Handbook, Marcel Dekker, New York, 1997.
Stamatis, D.H., Total Quality Management and Project Management, Project Management Journal, Sept. 1994, pp. 48–54.
14 Limited Mathematical Background for Design for Six Sigma (DFSS)
EXPONENTIAL DISTRIBUTION AND RELIABILITY
EXPONENTIAL DISTRIBUTION
[Figure: the exponential pdf f(t) = δe^(−δt), which has the value 0.37δ at the mean time µ_t = 1/δ, and the cdf F(t) = 1 − e^(−δt), which reaches 0.63 at µ_t = 1/δ.]
$$f(t) = \mathrm{Ex}(t;\delta) = \begin{cases} \delta e^{-\delta t}, & t > 0,\ \delta > 0 \\ 0, & \text{elsewhere} \end{cases}$$

$$F(t) \equiv \int_0^t \delta e^{-\delta t'}\,dt' = 1 - e^{-\delta t}$$

Mean time: $\mu_t = \dfrac{1}{\delta}$ (some use $\eta = 1/\delta$)

Variance: $\sigma_t^2 = \dfrac{1}{\delta^2}$

One parameter: $\delta$

Reliability Problems

$$R(t) = e^{-\delta t}$$
$$F(t) \equiv 1 - R(t) = 1 - e^{-\delta t}$$
$$f(t) = \frac{dF(t)}{dt} = \delta e^{-\delta t}$$
[Figure: a decaying waveform of amplitude A_T over time T, with Pass (Good) and Fail (Bad) regions; the interval from t to t + Δt is marked.]

$$\frac{d\!\left(Ae^{-\delta t}\right)}{dt} = -\delta A e^{-\delta t}$$

If we consider equal time increments Δt, the exponential has a constant amplitude ratio between increments:

$$\frac{Ae^{-\delta (t+\Delta t)}}{Ae^{-\delta t}} = \frac{Ae^{-\delta (t+n\Delta t)}}{Ae^{-\delta (t+(n-1)\Delta t)}} = e^{-\delta\,\Delta t}$$
Example

Data from 100 pumps demonstrated an average life of 5.75 years and that failures followed an exponential distribution, so the failure rate is δ = 1/5.75 ≈ 0.174 failures per year.

Solutions:

1. Probability of failure within the first year:
$$F(t = 1) = 1 - e^{-0.174(1)} = 1 - 0.84 = 0.16$$

2. Probability of failure within the first 3 months:
$$F(t = 0.25) = 1 - e^{-0.174(0.25)} = 1 - 0.957 = 0.043$$

3. Probability of failure prior to the average life, MTTF = 5.75 years:
$$F(t = 5.75) = 1 - e^{-0.174(5.75)} = 1 - e^{-1.0} = 1 - 0.368 = 0.632$$

4. Reliability at 10 years:
$$R(t = 10) = e^{-0.174(10)} = 0.176$$

5. Plot the reliability curve and compare with the pdf curve.

[Figure: the pdf f(t) = δe^(−δt) and the reliability R(t) for δ = 0.174, plotted from 0 to 15 years.]
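The pump example can be verified with a few lines of Python; this is only a sketch of the arithmetic shown above.

```python
import math

delta = 1 / 5.75          # failure rate (per year) from a 5.75-year mean life

def F(t):                  # cumulative probability of failure by time t
    return 1 - math.exp(-delta * t)

def R(t):                  # reliability: probability of surviving beyond t
    return math.exp(-delta * t)

print(f"F(1)    = {F(1.0):.2f}")    # ~0.16: failure within the first year
print(f"F(0.25) = {F(0.25):.3f}")   # ~0.043: failure within the first 3 months
print(f"F(5.75) = {F(5.75):.3f}")   # ~0.632: failure before the average life
print(f"R(10)   = {R(10.0):.3f}")   # ~0.176: survival beyond 10 years
```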
PROBABILITY OF RELIABILITY
The exponential distribution serves as the basis of the reliability function because it describes the probability of a general physical situation: the time until a (bad) occurrence.
CONTROL CHARTS
Continuous Time Waveform
[Figure: a continuous-time waveform of amplitude A_T with Pass (Good) and Fail (Bad) regions over time T, and the same waveform sampled at equal increments Δt, t_k = kΔt for k = 1, 2, …, n.]
SAMPLE SPACE
[Figure: a population N = G + B of Good (G) and Bad (B) items, from which a sample of size n = g + b is drawn.]

Sample space: n = g + b
[Figure: samples 1, 2, …, k, …, n classified as Good or Bad.]

Assign
X = 0 for “Good”
X = 1 for “Bad”

Sample space n = g + b, partitioned into the set of good samples {g} and the set of bad samples {b}:

$$\{b\} = \{X_n = 1\}$$
$$\{g\} = \{X_{n-1} = 0,\ X_{n-2} = 0,\ \ldots,\ X_1 = 0\}$$
$$P(\{b\} \cap \{g\}) = P(\{b\} \mid \{g\})\,P(\{g\}) = P(\{b\})\,P(\{g\})$$

ASSIGNING PROBABILITY TO SETS

Assume only one sample can be measured in any interval Δt.

• The probability that one bad sample occurs in the interval $[\,t \le T \le t + \Delta t\,]$ is assumed to be constant: $p = \delta\,\Delta t$.

$$P(\{b\}) \equiv P(X = 1;\ t \le T \le t + \Delta t) \equiv \delta\,\Delta t = p$$
$$P(X = 0;\ t \le T \le t + \Delta t) \equiv 1 - P(X = 1;\ t \le T \le t + \Delta t) = 1 - \delta\,\Delta t = q$$
$$P(\{g\}) \equiv P(X = 0;\ 0 \le T \le t) \equiv R(t) = \text{Reliability}$$
$$P(\{g\})\,P(\{b\}) \equiv P(X = 0;\ 0 \le T \le t)\,P(X = 1;\ t \le T \le t + \Delta t)$$

Note: There are two types of probabilities or variables, one when X = 0 for set {g} and one when X = 1 for set {b}. To establish an “equation,” we need to deal with only one variable.

If the sample in the increment $[\,t \le T \le t + \Delta t\,]$ were also “good,” then we could write directly:

$$P(X = 0;\ 0 \le T \le t + \Delta t) \equiv P(X = 0;\ 0 \le T \le t)\,P(X = 0;\ t \le T \le t + \Delta t) = P(X = 0;\ 0 \le T \le t)\,[1 - \delta\,\Delta t]$$

so that, letting Δt → dt,

$$P(X = 0;\ 0 \le T \le t + dt) - P(X = 0;\ 0 \le T \le t) = P(X = 0;\ 0 \le T \le t)\,[-\delta\,dt]$$
Dividing by dt,

$$\frac{P(X = 0;\ 0 \le T \le t + dt) - P(X = 0;\ 0 \le T \le t)}{dt} = [-\delta]\,P(X = 0;\ 0 \le T \le t)$$

This is a first order (homogeneous) differential equation,

$$\frac{d}{dt}\,P(X = 0;\ 0 \le T \le t) = -\delta\,P(X = 0;\ 0 \le T \le t)$$

which can be conveniently expressed in terms of reliability:

$$\frac{dR(t)}{dt} = -\delta\,R(t)$$

Solution: $R(t) = C_1 e^{-\delta t}$

Initial condition at t = 0: $R(0) \equiv 1 = C_1$, so $C_1 = 1$.

Hence, the reliability is $R(t) = e^{-\delta t}$.
GAMMA DISTRIBUTION
The probability that the nth event (e.g., failure) will occur exactly at the (end) time
t, when the events are assumed to occur at a constant rate δ.
The idea of a constant event rate δ is the same assumption used for both
exponential and Poisson distributions.
The variable time, t, is said to have a gamma distribution.
$$f(t) = G(t;\,n,\delta) \equiv \begin{cases} \dfrac{\delta^{\,n}\, t^{\,n-1}}{\Gamma(n)}\, e^{-\delta t}, & t > 0 \\ 0, & \text{elsewhere} \end{cases}$$

Two parameters:
Shape parameter: n (changes the shape, not the scale)
Scale (rate) parameter: δ (changes the scale, not the shape)

Gamma function: $\Gamma(n) = \displaystyle\int_0^{\infty} x^{\,n-1} e^{-x}\,dx$

Mean: $\mu = \dfrac{n}{\delta}$    Variance: $\sigma^2 = \dfrac{n}{\delta^2}$

If n > 1: unimodal shape with mode at $m = \dfrac{n-1}{\delta}$
If n ≤ 1: non-modal shape with mode at m = 0
GAMMA FUNCTION

$$\Gamma(n) = \int_0^{\infty} x^{\,n-1} e^{-x}\,dx$$

If n is a positive integer,
$$\Gamma(n) = (n-1)!$$
$$\Gamma(n) = (n-1)\,\Gamma(n-1)$$
$$\Gamma(1) = 0! = 1$$
$$\Gamma\!\left(\tfrac{1}{2}\right) = \sqrt{\pi}$$
$$\Gamma\!\left(\tfrac{n}{2}\right) = \left(\tfrac{n}{2} - 1\right)! \qquad \text{(n even)}$$

If n is an odd integer:
$$\Gamma\!\left(\tfrac{n}{2}\right) = \left(\tfrac{n}{2} - 1\right)\left(\tfrac{n}{2} - 2\right)\cdots\left(\tfrac{5}{2}\right)\left(\tfrac{3}{2}\right)\left(\tfrac{1}{2}\right)\sqrt{\pi}$$
Limiting case: When total system failure occurs at the time of the first partial failure, n = 1 and the gamma distribution reduces to the exponential distribution:

$$f(t) = \begin{cases} \delta e^{-\delta t}, & 0 \le t \\ 0, & t < 0 \end{cases}$$
Example

State the parameters and plot the pdf with its mean and mode for the time to total system failure.

Solution: The two parameters are δ and n.

1. Failure rate: δ = 2 failures/year
Number of partial failures to total system failure: n = 3

Mean time to system failure:
$$\mu_T = \frac{n}{\delta} = \frac{3\ [\text{failures}]}{2\ [\text{failures per year}]} = 1.5\ [\text{years}]$$

Mode:
$$\frac{n-1}{\delta} = \frac{2\ [\text{failures}]}{2\ [\text{failures per year}]} = 1.0\ [\text{years}]$$

Gamma Distribution and Reliability

$$f(t) = G(t;\,n,\delta) = \begin{cases} \dfrac{\delta^{\,n}\, t^{\,n-1}}{\Gamma(n)}\, e^{-\delta t}, & t > 0 \\ 0, & \text{elsewhere} \end{cases}$$

$$f(t) = \frac{\delta^{\,n}\, t^{\,n-1}}{\Gamma(n)}\, e^{-\delta t} = \frac{2^3\, t^{2}}{\Gamma(3)}\, e^{-2t} = \frac{8 t^{2}}{2}\, e^{-2t} = 4 t^{2} e^{-2t}$$

At t = 0: f(0) = 0.0000
[Figure: the gamma pdf f(t) = 4t²e^(−2t) for n = 3, δ = 2, plotted for t = 0 to 4 years.]
2. The case when the failure rate is reduced to 1 per year and the total system failure occurs after only two switches fail.

Failure rate: δ = 1 failure/year
Number of partial failures to total system failure: n = 2

Mean time to system failure:
$$\mu_T = \frac{n}{\delta} = \frac{2\ [\text{failures}]}{1\ [\text{failure per year}]} = 2.0\ [\text{years}]$$

Mode:
$$\frac{n-1}{\delta} = \frac{1\ [\text{failure}]}{1\ [\text{failure per year}]} = 1.0\ [\text{years}]$$

$$f(t) = \frac{\delta^{\,n}\, t^{\,n-1}}{\Gamma(n)}\, e^{-\delta t} = \frac{1^2\, t}{\Gamma(2)}\, e^{-t} = t e^{-t}$$

At t = 0: f(0) = 0.0000

[Figure: the gamma pdf f(t) = te^(−t) for n = 2, δ = 1, plotted for t = 0 to 4 years.]
For the limiting exponential case, n = 1 with δ = 2:

$$f(t) = \mathrm{Ex}(t;\delta) \equiv \begin{cases} \delta e^{-\delta t}, & t \ge 0 \\ 0, & t < 0 \end{cases}$$

$$f(t) = \delta e^{-\delta t} = 2 e^{-2t}$$

[Figure: the exponential pdf f(t) = 2e^(−2t) for n = 1, δ = 2, plotted for t = 0 to 4 years.]
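A short sketch that evaluates the gamma pdf, mean, and mode for the three cases above (standard library only):

```python
import math

def gamma_pdf(t, n, delta):
    """Gamma pdf f(t) = delta^n * t^(n-1) * exp(-delta*t) / Gamma(n) for t > 0."""
    if t <= 0:
        return 0.0
    return delta**n * t**(n - 1) * math.exp(-delta * t) / math.gamma(n)

for n, delta in [(3, 2.0), (2, 1.0), (1, 2.0)]:
    mean = n / delta
    mode = (n - 1) / delta if n > 1 else 0.0
    print(f"n={n}, delta={delta}: mean={mean:.2f} yr, mode={mode:.2f} yr, "
          f"f(mean)={gamma_pdf(mean, n, delta):.3f}")
```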
Reliability Relationships

[Figure: the sampled waveform, t_k = kΔt, with Pass (Good) and Fail (Bad) regions over time T.]

Reliability Function

[Figure: n sampling intervals of width Δt along the time axis from 0 to t.]
Reliability is the probability that a system can operate successfully over the time
interval from 0 to t. Reliability can also be viewed as the probability that the system
will survive beyond a given point in time, t.
$$R(t) = P(T > t) = \frac{g(t)}{n} = \frac{n - b(t)}{n} = 1 - \frac{b(t)}{n}$$

$$Q(t) = P(T \le t) = 1 - R(t) = \frac{b(t)}{n}$$

$$f(t) = \lim_{\Delta t \to 0} \frac{b(t + \Delta t) - b(t)}{n\,\Delta t} = \lim_{\Delta t \to 0} \frac{[\,n - g(t + \Delta t)\,] - [\,n - g(t)\,]}{n\,\Delta t} = \lim_{\Delta t \to 0} \frac{g(t) - g(t + \Delta t)}{n\,\Delta t} = -\frac{1}{n}\,\frac{d g(t)}{dt}$$

$$f(t) = \frac{d Q(t)}{dt} = \lim_{\Delta t \to 0} \frac{Q(t + \Delta t) - Q(t)}{\Delta t} = \lim_{\Delta t \to 0} \frac{P(t < T \le t + \Delta t)}{\Delta t}$$
The hazard rate is

$$h(t) = \lim_{\Delta t \to 0} \frac{b(t + \Delta t) - b(t)}{g(t)\,\Delta t} = \lim_{\Delta t \to 0} \frac{[\,n - g(t + \Delta t)\,] - [\,n - g(t)\,]}{g(t)\,\Delta t} = \lim_{\Delta t \to 0} \frac{g(t) - g(t + \Delta t)}{g(t)\,\Delta t} = -\frac{1}{g(t)}\,\frac{d g(t)}{dt}$$

$$h(t) = \frac{n f(t)}{g(t)} = \frac{f(t)}{g(t)/n} = \frac{f(t)}{R(t)}$$

The hazard rate can also be viewed as the conditional probability of failure in the interval (t, t + Δt] given that the system has survived up to the time t:

$$h(t) = \lim_{\Delta t \to 0} \frac{P(t < T \le t + \Delta t \mid T > t)}{\Delta t} = \lim_{\Delta t \to 0} \frac{P(t < T \le t + \Delta t)}{P(T > t)\,\Delta t} = \lim_{\Delta t \to 0} \frac{P(-\infty < T \le t + \Delta t) - P(-\infty < T \le t)}{R(t)\,\Delta t} = \lim_{\Delta t \to 0} \frac{Q(t + \Delta t) - Q(t)}{R(t)\,\Delta t} = \frac{1}{R(t)}\,\frac{d Q(t)}{dt} = \frac{f(t)}{R(t)}$$

$$h(t) = \frac{f(t)}{R(t)} = \frac{1}{R(t)}\,\frac{d Q(t)}{dt} = \frac{1}{1 - Q(t)}\,\frac{d Q(t)}{dt} = \frac{1}{R(t)}\,\frac{d}{dt}\,[\,1 - R(t)\,] = \frac{-1}{R(t)}\,\frac{d R(t)}{dt} = -\frac{d}{dt}\,[\ln R(t)]$$

This is a separable first order differential equation whose solution is:

$$\ln R(t) = -\int_0^t h(t')\,dt'$$
POISSON PROCESS
Probability of exactly x “failures” occurring in any order within a given time interval (or spatial region):

1. Time interval $[\,0 \le T \le t + \Delta t\,] = [\,0 \le T \le n\,\Delta t\,]$
2. Spatial region $[\,a \le y \le b\,]$ (e.g., typos on a page)

[Figure: sample space n = g + b; the set of bad samples {b} (x of them, here x = 5) and the set of good samples {g = n − x}, distributed over n sampling intervals of width Δt.]
p = δΔt

Poisson Distribution

$$P_O(x;\,\delta t) = \frac{(\delta t)^x\, e^{-\delta t}}{x!}\,;\qquad x = 0, 1, 2, \ldots$$

where δ is the average number of outcomes per unit time (its reciprocal, 1/δ, is the mean time between events or failures) and $\delta t = \delta\, n\,\Delta t = np$ is the mean number of outcomes in time t.

There are two possible (mutually exclusive) situations for having x failures in this interval, depending upon whether or not a failure occurs in the last Δt increment:

1. (x − 1) failures in the interval $[\,0 \le T \le t\,]$ and 1 failure in the increment $[\,t \le T \le t + \Delta t\,]$
2. x failures in the interval $[\,0 \le T \le t\,]$ and 0 failures in the increment $[\,t \le T \le t + \Delta t\,]$
The probability of configuration (1), where one failure occurs in the last increment, that is, (x − 1) failures in the interval $[\,0 \le T \le t\,]$ (samples $1 \le k \le n - 1$) and one failure in the last increment (sample k = n), is:

$$P(X = x;\ 0 \le T \le t + \Delta t) \equiv P(X = x - 1;\ 0 \le T \le t)\,P(X = 1;\ t \le T \le t + \Delta t) = P(X = x - 1;\ 0 \le T \le t)\,[\delta\,\Delta t]$$

where we again assume that the “success” probability of finding one bad sample in the increment $[\,t \le T \le t + \Delta t\,]$ is a constant, $p = \delta\,\Delta t$, with $a = np = n\,\delta\,\Delta t = \delta\,(n\,\Delta t)$.

The probability of configuration (2), where zero failures occur in the last increment, that is, x failures in the interval $[\,0 \le T \le t\,]$ (samples $1 \le k \le n - 1$) and no failure in the last increment (k = n), is:

$$P(X = x;\ 0 \le T \le t + \Delta t) \equiv P(X = x;\ 0 \le T \le t)\,P(X = 0;\ t \le T \le t + \Delta t) = P(X = x;\ 0 \le T \le t)\,[1 - p]$$

where the probability of finding no “bad” sample in the increment $[\,t \le T \le t + \Delta t\,]$ is the constant $q = 1 - p = 1 - \delta\,\Delta t$.
The combined probability of these two mutually exclusive configurations of x failures is given by the sum of the probabilities associated with each configuration:

$$P(X = x;\ 0 \le T \le t + \Delta t) \equiv P(X = x - 1;\ 0 \le T \le t)\,[\delta\,\Delta t] + P(X = x;\ 0 \le T \le t)\,[1 - \delta\,\Delta t]$$

so that, dividing by Δt and letting Δt → dt,

$$\frac{d}{dt}\,P(X = x;\ 0 \le T \le t) = -\delta\,P(X = x;\ 0 \le T \le t) + \delta\,P(X = x - 1;\ 0 \le T \le t)$$

Writing

$$R(x;\,t) \equiv P(X = x;\ 0 \le T \le t)$$

we can write the non-homogeneous differential equation

$$\frac{d R(x;\,t)}{dt} + \delta\,R(x;\,t) = \delta\,R(x - 1;\,t)$$

where the non-homogeneous term depends upon the values of R obtained for all previous values of x. That is, for x = 2, we need to successively find the solutions for R(1;t) and R(0;t).

The “initial condition” at t = 0, satisfying the certain and impossible outcomes respectively, is

$$R(x;\,0) = \begin{cases} 1, & x = 0 \\ 0, & x > 0 \end{cases}$$

and the solution is

$$R(x;\,t) \equiv P(X = x;\ 0 \le T \le t) = \frac{(\delta t)^x\, e^{-\delta t}}{x!}\,;\qquad x = 0, 1, 2, \ldots$$
Example
A small local airport averages four (4) private aircraft landings per hour, δ = 4 landings/hour. What is the probability that six (6) aircraft will land in one hour?

The Poisson distribution is

$$P_O(x;\,a) = \frac{a^x\, e^{-a}}{x!}\,;\qquad x = 0, 1, 2, \ldots$$

One parameter: $a\ (= np = \delta t)$.    Mean: $\mu = a$    Variance: $\sigma^2 = a$

Solution:

$$P_O(6;\,4) = \frac{4^6\, e^{-4}}{6!} = 0.1042$$
There is a 10.4% chance that six aircraft will land in one hour.
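A one-line numerical check of this result (a sketch using the standard library only):

```python
import math

def poisson_pmf(x, a):
    """P_O(x; a) = a^x * exp(-a) / x!"""
    return a**x * math.exp(-a) / math.factorial(x)

# a = delta * t = 4 landings/hour * 1 hour
print(f"P(6 landings in one hour) = {poisson_pmf(6, 4.0):.4f}")  # ~0.1042
```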
WEIBULL DISTRIBUTION
One of the more popular models for time-to-failure (TTF), Weibull distributions take
many shapes and are typically identified as in the following illustration.
[Figure: Weibull pdfs for δ = 1 and shape parameters a = 0.5, 1, 2, and 4, plotted for t = 0 to 4.]
$$f(t) = W(t;\,a,\delta) \equiv \begin{cases} a\,\delta\,(\delta t)^{a-1}\, e^{-(\delta t)^a}, & 0 \le t \\ 0, & t < 0 \end{cases}$$

Cumulative distribution:

$$F(t) \equiv \int_0^t a\,\delta\,(\delta t')^{a-1}\, e^{-(\delta t')^a}\,dt' = 1 - e^{-(\delta t)^a}$$

Two parameters: the shape parameter a and the scale parameter 1/δ.

• Mean: $\mu = \dfrac{1}{\delta}\,\Gamma\!\left(1 + \dfrac{1}{a}\right)$
• Variance: $\sigma^2 = \dfrac{1}{\delta^2}\left[\Gamma\!\left(1 + \dfrac{2}{a}\right) - \Gamma^2\!\left(1 + \dfrac{1}{a}\right)\right]$
• 1/δ is also referred to as the “characteristic life” or “time constant,” the life or time at which 63.2% of the population has failed.
• If a = 1, the Weibull reduces to the exponential distribution.
• If a = 2, the Weibull reduces to the Rayleigh distribution.
• If a ≈ 3.5, the Weibull approximates the normal distribution.
• For a < 1, the reliability function decays less rapidly.
• For a > 1, the reliability function decays more rapidly.
• A useful model for the failure time (or length of life) distributions of products and processes.
• Does not assume that the failure rate, δ, is a constant, as do the exponential and gamma distributions.
• Has the advantage that the distribution parameters can be adjusted to fit many situations; because of this adaptability it is widely used in reliability engineering.
• The cumulative distribution has a closed form expression that can be used to compute areas under the Weibull curve.
Note: the characteristic life t = 1/δ corresponds to the 63.2% point on the cumulative distribution.

[Figure: Weibull probability plot of occurrence (CDF, %) versus mileage (miles, logarithmic scale from 10,000 to 1,000,000), with the data points falling approximately on a straight line.]
$$R(t) \equiv P(T > t) = \int_t^{\infty} f(t')\,dt' = \int_t^{\infty} a\,\delta\,(\delta t')^{a-1}\, e^{-(\delta t')^a}\,dt'\,; \qquad \text{let } u = (\delta t')^a$$

$$= \int_{u(t)}^{\infty} e^{-u}\,du = \left[-e^{-u}\right]_{u(t)}^{\infty} = e^{-(\delta t)^a}$$

$$Q(t) \equiv P(T \le t) = \int_0^t f(t')\,dt' = 1 - R(t) = 1 - e^{-(\delta t)^a}$$

$$h(t) \equiv \frac{f(t)}{R(t)} = \frac{a\,\delta\,(\delta t)^{a-1}\, e^{-(\delta t)^a}}{e^{-(\delta t)^a}} = a\,\delta\,(\delta t)^{a-1}$$
• The shape parameter a can be used to adjust the shape of the Weibull distribution, allowing it to model a great many life (time) related distributions found in engineering.

When a failure-free period is included, the reliability becomes

$$R(t) = e^{-\left(\delta\,(t - t_O)\right)^a}$$

where the time $t_O$ is called the failure free time or minimum life.
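The Weibull reliability and hazard expressions translate directly into code; the sketch below evaluates them for the shape parameters shown in the figure (the numerical values are chosen only for illustration).

```python
import math

def weibull_R(t, a, delta, t0=0.0):
    """Reliability R(t) = exp(-(delta*(t - t0))**a) for t >= t0, else 1."""
    return 1.0 if t <= t0 else math.exp(-(delta * (t - t0))**a)

def weibull_h(t, a, delta):
    """Hazard rate h(t) = a * delta * (delta*t)**(a - 1)."""
    return a * delta * (delta * t)**(a - 1)

delta = 1.0                       # characteristic life 1/delta = 1
for a in (0.5, 1.0, 2.0, 4.0):    # shapes shown in the figure
    print(f"a={a}: R(1)={weibull_R(1.0, a, delta):.3f}, "
          f"h(1)={weibull_h(1.0, a, delta):.3f}")
# At t = 1/delta the reliability is exp(-1) = 0.368, i.e., 63.2% have failed.
```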
The Taylor series expansion of a function f(x) about a point $x_0$ is

$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(x_0)}{n!}\,(x - x_0)^n = f(x_0) + \left.\frac{df(x)}{dx}\right|_{x_0}(x - x_0) + \left.\frac{d^2 f(x)}{dx^2}\right|_{x_0}\frac{(x - x_0)^2}{2!} + \cdots + \left.\frac{d^n f(x)}{dx^n}\right|_{x_0}\frac{(x - x_0)^n}{n!} + \cdots$$

$$f(x) = f(x_0) + \text{slope}(x_0)\cdot(x - x_0) + \text{curvature}(x_0)\cdot\frac{(x - x_0)^2}{2} + \cdots = f(x_0) + f^{(1)}(x_0)(x - x_0) + f^{(2)}(x_0)\frac{(x - x_0)^2}{2} + \cdots$$

[Figure: f(x) built up from the constant term f(x₀), the slope term f⁽¹⁾(x₀)[x − x₀], and the curvature term f⁽²⁾(x₀)[x − x₀]²/2.]

1. Series about $x_0$:

$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(x_0)}{n!}\,(x - x_0)^n = f(x_0) + f^{(1)}(x_0)(x - x_0) + \frac{f^{(2)}(x_0)}{2!}(x - x_0)^2 + \cdots + \frac{f^{(n)}(x_0)}{n!}(x - x_0)^n + \cdots$$

2. Series about 0:

$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!}\,x^n = f(0) + f^{(1)}(0)\,x + \frac{f^{(2)}(0)}{2!}\,x^2 + \cdots + \frac{f^{(n)}(0)}{n!}\,x^n + \cdots = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n + \cdots$$

Observations:

Keeping only the linear term gives the tangent-line approximation

$$f(x) \approx f(x_0) + \left.\frac{df(x)}{dx}\right|_{x_0}(x - x_0) = f(x_0) + m\,(x - x_0)$$

[Figure: f(x) and its tangent line at x₀ over the interval a ≤ x ≤ b.]
[Figure: a linear system, f(x) = mx + c, viewed as mapping an input x to an output f(x); equivalently, an input (x − x₀) maps to the output (1/m)[f(x) − f(x₀)].]

For example, expanding $e^{ax}$ about $x_0$:

$$e^{ax} = e^{ax_0} + a\,e^{ax_0}(x - x_0) + a^2 e^{ax_0}\,\frac{(x - x_0)^2}{2!} + \cdots + a^n e^{ax_0}\,\frac{(x - x_0)^n}{n!} + \cdots$$

$$e^{ax} = e^{ax_0}\left[1 + a(x - x_0) + \frac{a^2}{2}(x - x_0)^2 + \cdots + \frac{a^n}{n!}(x - x_0)^n + \cdots\right]$$
$$e^{\pm ax} = 1 \pm ax + \frac{(ax)^2}{2!} \pm \frac{(ax)^3}{3!} + \cdots + \frac{(\pm ax)^n}{n!} + \cdots = \sum_{n=0}^{\infty} \frac{(\pm ax)^n}{n!}$$

Similarly, using the derivatives evaluated below,

$$e^{bx^2} = 1 + 0 + \frac{2b}{2}\,x^2 + 0 + \frac{3(2b)^2}{24}\,x^4 + 0 + \frac{15(2b)^3}{6!}\,x^6 + \cdots = 1 + bx^2 + \frac{(bx^2)^2}{2} + \frac{(bx^2)^3}{6} + \cdots$$

With $b = -\tfrac{1}{2}$ this gives the series for the standard normal density:

$$\frac{1}{\sqrt{2\pi}}\,e^{-z^2/2} = \frac{1}{\sqrt{2\pi}}\left[1 - \frac{z^2}{2} + \frac{z^4}{8} - \frac{z^6}{48} + \cdots\right]$$

[Figure: the standard normal density N(z; 0, 1) with µ = 0 and σ = 1, compared with its two-, three-, and four-term series approximations; the areas between integer z values are 2.5, 13.5, 34.0, 34.0, 13.5, and 2.5 percent, giving 68.26%, 95.46%, and 99.74% within ±1, ±2, and ±3.]
The derivatives of $e^{bx^2}$ evaluated at x = 0 are:

Zero:
$$\left. e^{bx^2} \right|_{x=0} = 1$$

First:
$$\frac{d\,e^{bx^2}}{dx} = \frac{d e^u}{du}\,\frac{du}{dx} = 2bx\,e^{bx^2}\,; \qquad \left.\frac{d\,e^{bx^2}}{dx}\right|_{x=0} = 0$$

Second:
$$\frac{d^2 e^{bx^2}}{dx^2} = \frac{d}{dx}\left(2bx\,e^{bx^2}\right) = 2b\,e^{bx^2} + (2bx)^2 e^{bx^2}\,; \qquad \left.\frac{d^2 e^{bx^2}}{dx^2}\right|_{x=0} = 2b$$

Third:
$$\frac{d^3 e^{bx^2}}{dx^3} = 2b\,(2bx)\,e^{bx^2} + 2(2bx)(2b)\,e^{bx^2} + (2bx)^3 e^{bx^2}\,; \qquad \left.\frac{d^3 e^{bx^2}}{dx^3}\right|_{x=0} = 0$$

Fourth:
$$\frac{d^4 e^{bx^2}}{dx^4} = \left[\,12b^2 + 48b^3 x^2 + 16b^4 x^4\,\right] e^{bx^2}\,; \qquad \left.\frac{d^4 e^{bx^2}}{dx^4}\right|_{x=0} = 3(2b)^2 = 12b^2$$

Fifth:
$$\left.\frac{d^5 e^{bx^2}}{dx^5}\right|_{x=0} = 0$$

Sixth:
$$\left.\frac{d^6 e^{bx^2}}{dx^6}\right|_{x=0} = 15(2b)^3 = 120b^3$$

The same expansion about a general point applies to other functions; for example,

$$\sin x = \sin x_0 + \cos x_0\,(x - x_0) - \sin x_0\,\frac{(x - x_0)^2}{2!} - \cos x_0\,\frac{(x - x_0)^3}{3!} + \cdots$$

and about $x_0 = 0$,

$$\sin x = \sum_{n=0}^{\infty} \frac{(-1)^n\,x^{2n+1}}{(2n+1)!}$$
Partial Derivatives

For a function of two variables, f(x, y), differentiate with respect to only one independent variable while holding the other variable constant, e.g., y = y₀:

$$\left.\frac{\partial f(x,y)}{\partial x}\right|_{y = y_0} \equiv \frac{\partial f(x, y_0)}{\partial x} = f_x(x, y_0)$$

$$f(x,y) = f(x_0, y_0) + (x - x_0)\left.\frac{\partial f(x,y)}{\partial x}\right|_{x_0,\,y_0} + (y - y_0)\left.\frac{\partial f(x,y)}{\partial y}\right|_{x_0,\,y_0} + \cdots$$

Linear terms:

$$f(x,y) \approx f(x_0, y_0) + (x - x_0)\,f_x(x_0, y_0) + (y - y_0)\,f_y(x_0, y_0)$$
Taylor Series of Random Variable (RV) Functions

Consider $Y(X_1, X_2)$.

Mean: $\mu_Y \equiv E\!\left[Y(X_1, X_2)\right] = Y(\mu_{X_1}, \mu_{X_2})$

Variance and Covariance

Consider only the linear terms of the Taylor series expansion about the mean of each random variable, with $\mu_Y = Y(\mu_{X_1}, \mu_{X_2})$:

$$Y(X_1, X_2) = Y(\mu_{X_1}, \mu_{X_2}) + (X_1 - \mu_{X_1})\left.\frac{\partial Y}{\partial X_1}\right|_{\mu_{X_1},\,\mu_{X_2}} + (X_2 - \mu_{X_2})\left.\frac{\partial Y}{\partial X_2}\right|_{\mu_{X_1},\,\mu_{X_2}}$$

$$Y - \mu_Y = (X_1 - \mu_{X_1})\,\frac{\partial Y(\mu)}{\partial X_1} + (X_2 - \mu_{X_2})\,\frac{\partial Y(\mu)}{\partial X_2}$$

$$\sigma_Y^2 = E\!\left[(Y - \mu_Y)^2\right] = E\!\left[(X_1 - \mu_{X_1})\left.\frac{\partial Y}{\partial X_1}\right|_{\mu} + (X_2 - \mu_{X_2})\left.\frac{\partial Y}{\partial X_2}\right|_{\mu}\right]^2$$

$$\sigma_Y^2 = \sigma_{X_1}^2\left(\frac{\partial Y(\mu)}{\partial X_1}\right)^2 + \sigma_{X_2}^2\left(\frac{\partial Y(\mu)}{\partial X_2}\right)^2 + 2\,\sigma_{X_1 X_2}\,\frac{\partial Y(\mu)}{\partial X_1}\,\frac{\partial Y(\mu)}{\partial X_2}$$
Sum or difference: $Y = a_1 X_1 \pm a_2 X_2$

Mean: $\mu_Y = a_1 \mu_{X_1} \pm a_2 \mu_{X_2}$

Variance and covariance:

$$\sigma_Y^2 = \sigma_{X_1}^2\left(\frac{\partial Y}{\partial X_1}\right)^2 + \sigma_{X_2}^2\left(\frac{\partial Y}{\partial X_2}\right)^2 + 2\,\sigma_{X_1 X_2}\,\frac{\partial Y}{\partial X_1}\,\frac{\partial Y}{\partial X_2}$$

Product: $Y = a_o X_1 X_2$

$$\frac{\partial Y(\mu_X)}{\partial X_1} = \left.(a_o X_2)\right|_{\mu_X} = a_o \mu_{X_2}\,; \qquad \frac{\partial Y(\mu_X)}{\partial X_2} = \left.(a_o X_1)\right|_{\mu_X} = a_o \mu_{X_1}$$

Mean: $\mu_Y = a_o \mu_{X_1} \mu_{X_2}$

Variance and covariance:

$$\sigma_Y^2 = \left(a_o \mu_{X_2}\right)^2 \sigma_{X_1}^2 + \left(a_o \mu_{X_1}\right)^2 \sigma_{X_2}^2 \pm 2\,a_o^2\,\mu_{X_1}\mu_{X_2}\,\sigma_{X_1 X_2}$$

Quotient: $Y = \dfrac{a_o X_1}{X_2}$

$$\frac{\partial Y(\mu_X)}{\partial X_1} = \left.\frac{a_o}{X_2}\right|_{\mu_X} = \frac{a_o}{\mu_{X_2}}\,; \qquad \frac{\partial Y(\mu_X)}{\partial X_2} = \left.\frac{-a_o X_1}{X_2^2}\right|_{\mu_X} = \frac{-a_o \mu_{X_1}}{\mu_{X_2}^2}$$

Mean: $\mu_Y = \dfrac{a_o \mu_{X_1}}{\mu_{X_2}}$

Variance and covariance:

$$\sigma_Y^2 = \sigma_{X_1}^2\left(\frac{a_o}{\mu_{X_2}}\right)^2 + \sigma_{X_2}^2\left(\frac{-a_o \mu_{X_1}}{\mu_{X_2}^2}\right)^2 + 2\,\sigma_{X_1 X_2}\,\frac{a_o}{\mu_{X_2}}\,\frac{-a_o \mu_{X_1}}{\mu_{X_2}^2}$$
Single RV $X_1$: $Y = a_o X_1^{\pm b}$

$$\frac{\partial Y(\mu_X)}{\partial X_1} = \left.\pm a_o\, b\, X_1^{\pm b - 1}\right|_{\mu_X} = \left.\frac{\pm a_o\, b\, X_1^{\pm b}}{X_1}\right|_{\mu_X} = \left.\frac{\pm b\, Y}{X_1}\right|_{\mu_X} = \frac{\pm b\, \mu_Y}{\mu_{X_1}}$$

Mean: $\mu_Y = a_o\, \mu_{X_1}^{\pm b}$

Variance:

$$\sigma_Y^2 = \sigma_{X_1}^2\left(\frac{\partial Y(\mu_{X_1})}{\partial X_1}\right)^2 = \sigma_{X_1}^2\left(\frac{\pm b\, \mu_Y}{\mu_{X_1}}\right)^2$$

or, normalizing by the square of the means,

$$\frac{\sigma_Y^2}{\mu_Y^2} = b^2\,\frac{\sigma_{X_1}^2}{\mu_{X_1}^2}$$

Single RV $X_1$: $Y = \pm a_o\, e^{\pm b X_1}$

where the units of the RV $X_1$ are those of 1/b, and the units of the RV Y are the same as those of $a_o$.

$$\frac{\partial Y(\mu_X)}{\partial X_1} = \left.\pm a_o\, b\, e^{\pm b X_1}\right|_{\mu_X} = \left.\pm b\, Y\right|_{\mu_X} = \pm b\, \mu_Y$$

Mean: $\mu_Y = \pm a_o\, e^{\pm b \mu_{X_1}}$

Variance:

$$\sigma_Y^2 = \sigma_{X_1}^2\left(\frac{\partial Y(\mu_{X_1})}{\partial X_1}\right)^2 = \sigma_{X_1}^2\left(\pm a_o\, b\, e^{\pm b \mu_{X_1}}\right)^2 = \sigma_{X_1}^2\, b^2\, \mu_Y^2$$

or, normalizing by the square of the means,

$$\frac{\sigma_Y^2}{\mu_Y^2} = b^2\, \sigma_{X_1}^2$$

Single RV $X_1$: $Y = c^{\pm b X_1}$

where the units of the RV $X_1$ are those of 1/b and the units of the RV Y are the same as those of c. Then

$$Y = c^{\pm b X_1} = e^{\ln\left(c^{\pm b X_1}\right)} = e^{\pm b\,(\ln c)\, X_1}$$

Mean: $\mu_Y = c^{\pm b \mu_{X_1}} = e^{\pm b\,(\ln c)\, \mu_{X_1}}$

Variance:

$$\frac{\sigma_Y^2}{\mu_Y^2} = \left(b \ln c\right)^2 \sigma_{X_1}^2$$

Single RV $X_1$: $Y = a_o \ln\!\left(b X_1\right)$

$$\frac{\partial Y(\mu_X)}{\partial X_1} = \left.\frac{a_o}{X_1}\right|_{\mu_X} = \frac{a_o}{\mu_{X_1}}$$

Mean: $\mu_Y = a_o \ln\!\left(b\, \mu_{X_1}\right)$

Variance:

$$\sigma_Y^2 = \left(\frac{a_o}{\mu_{X_1}}\right)^2 \sigma_{X_1}^2$$
Deflection of the center of a beam of length L [m] under a load W [N] is deterministically given by:

$$Y = \frac{W L^3}{48\, E I} = a_o\, W L^3$$

where E = elastic modulus of the beam material [N/m²] and I = moment of inertia of the beam cross section about its center of area [m⁴].

Load and length can be considered random variables with mean and ± one standard deviation given as:

$$W = \mu_W \pm 1\sigma_W = 4000\ \text{N} \pm 40\ \text{N}$$
$$L = \mu_L \pm 1\sigma_L = 20\ \text{m} \pm 0.2\ \text{m}$$

$$\sigma_Y^2 = \sigma_W^2\left(\frac{\partial Y(\mu_{W,L})}{\partial W}\right)^2 + \sigma_L^2\left(\frac{\partial Y(\mu_{W,L})}{\partial L}\right)^2 = \sigma_W^2\left(a_o \mu_L^3\right)^2 + \sigma_L^2\left(3 a_o \mu_W \mu_L^2\right)^2$$

$$\left(\frac{\sigma_Y}{\mu_Y}\right)^2 = \left(\frac{\sigma_W}{\mu_W}\right)^2 + 3^2\left(\frac{\sigma_L}{\mu_L}\right)^2$$
For the case given, the fractional standard deviations of the two variables are equal:

$$\frac{\sigma_W}{\mu_W} = \frac{40}{4000} = 0.01 \qquad\qquad \frac{\sigma_L}{\mu_L} = \frac{0.2}{20} = 0.01$$

$$\left(\frac{\sigma_Y}{\mu_Y}\right)^2 = (0.01)^2 + 3^2\,(0.01)^2 = 10\,(0.01)^2 = 0.001$$

$$\frac{\sigma_Y}{\mu_Y} = 0.032$$
Observations:
1. Although W and L have the same fractional standard deviation (0.01), the
length — because it is a third power term in the deflection — is seen to
have more significance on the standard deviation of the deflection.
2. The fractional standard deviation of the deflection Y is considerably larger
than those of either the weight W or length L.
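The arithmetic of this example is easy to verify numerically; a minimal sketch follows (E, I, and hence a_o cancel out of the normalized result):

```python
mu_W, sigma_W = 4000.0, 40.0    # load, N
mu_L, sigma_L = 20.0, 0.2       # length, m

# Normalized (fractional) variance of the deflection Y = a_o * W * L^3:
# (sigma_Y/mu_Y)^2 = (sigma_W/mu_W)^2 + 3^2 * (sigma_L/mu_L)^2
frac_var = (sigma_W / mu_W)**2 + 9 * (sigma_L / mu_L)**2
print(f"(sigma_Y/mu_Y)^2 = {frac_var:.4f}")        # 0.0010
print(f" sigma_Y/mu_Y    = {frac_var**0.5:.3f}")   # ~0.032
```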
Y = X1 − X2
Examples:
1. Clearance
2. Before and after comparison (e.g., treated vs. untreated)
3. Comparison of two suppliers
Mean: $\mu_Y = \mu_{X_1} - \mu_{X_2}$, or $\bar{Y} = \bar{X}_1 - \bar{X}_2$

Variance (assume the variables are independent so the covariance is zero): $\sigma_Y^2 = \sigma_{X_1}^2 + \sigma_{X_2}^2$, so that

$$Z = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{s_1^2/n_1 + s_2^2/n_2}}$$

With the pooled variance

$$s_Y^2 = \frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}$$

then

$$T = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{s_Y^2/n_1 + s_Y^2/n_2}}$$
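A compact sketch of the pooled-variance T statistic above, using hypothetical sample summaries for two suppliers:

```python
import math

# Hypothetical sample summaries (mean, variance, size) for two suppliers.
x1_bar, s1_sq, n1 = 10.4, 0.25, 12
x2_bar, s2_sq, n2 = 9.8, 0.36, 15

# Pooled variance and T statistic, as in the formulas above.
s_pooled_sq = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)
t = (x1_bar - x2_bar) / math.sqrt(s_pooled_sq / n1 + s_pooled_sq / n2)
print(f"pooled variance = {s_pooled_sq:.3f}, T = {t:.2f}")
```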
MISCELLANEOUS
In Chapter 11, we discussed axiomatic design and its four mapping domains (CAs, FRs, DPs, and PVs). Now, let us examine some of the mathematical relations for these domains. If, for example, we are interested in the functional requirements [FR(CTS)], then this can be expressed in the traditional six sigma notation of y = f(x) as FR = f(DP), where DP is the array of mapped-to design parameters of size m. If we let each DP in the array be written as DPi = g(PVi), where PVi, i = 1,…,m, is the array of process variables that map to DPi, soft changes may be implemented using sensitivities in physical (FR and DP) and process (DP and PV) mapping. Using the chain rule, we have

$$\frac{\partial \mathrm{FR}}{\partial \mathrm{PV}_{ij}} = \frac{\partial \mathrm{FR}}{\partial \mathrm{DP}_{j}}\,\frac{\partial \mathrm{DP}_{j}}{\partial \mathrm{PV}_{ij}}$$

where PVij is a process variable in the array PVj that can be adjusted to improve the problematic FR. The first term represents a design change while the second one represents a process change. An efficient DFSS strategy should utilize both terms in all potential improvements. After all, the ideal DFSS outcome is a design that a) exceeds customer wants, needs, and expectations, b) exceeds the competition's market performance as measured by reliability, robustness, and life cycle cost indices, and c) exceeds the competition on the rest of the product features.

This is very important because, as we said earlier, DFSS is not for all designs and processes. We must be selective in how we use it. Table 14.1 may be of help.

TABLE 14.1
Possibilities of Selecting a DFSS Problem

                     Zs does not exist                           Zs exists
Xs does not exist    No problem; this type of design             Need conceptual change; DFSS has potential
                     may not exist                               while six sigma has no potential
Xs exists            Trivial problem; this type may be solved    Both six sigma and DFSS have potential
                     with design of experiments (DOE)
A final point about axiomatic design: the design (or problem) matrix is important from many perspectives. The main one is that it reveals coupling among the CTSs. Knowledge of coupling is important because it gives the designer clues about where to find solutions, where to make adjustments or changes, and how to maintain them over the long term with minimal drift.
So, for the uncoupled (diagonal) matrix we have

$$\begin{bmatrix} y_1 \\ \vdots \\ y_m \end{bmatrix} = \begin{bmatrix} A_{11} & 0 & \cdots & 0 \\ 0 & A_{22} & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & A_{mm} \end{bmatrix} \begin{bmatrix} x_1 \\ \vdots \\ x_m \end{bmatrix}$$

for the coupled (full) matrix,

$$\begin{bmatrix} y_1 \\ \vdots \\ y_m \end{bmatrix} = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1p} \\ A_{21} & A_{22} & & \vdots \\ \vdots & & \ddots & A_{(m-1)p} \\ A_{m1} & \cdots & A_{m(p-1)} & A_{mp} \end{bmatrix} \begin{bmatrix} x_1 \\ \vdots \\ x_p \end{bmatrix}$$

and for the decoupled (triangular) matrix,

$$\begin{bmatrix} y_1 \\ \vdots \\ y_m \end{bmatrix} = \begin{bmatrix} A_{11} & 0 & \cdots & 0 \\ A_{21} & A_{22} & 0 & \vdots \\ \vdots & & \ddots & 0 \\ A_{m1} & A_{m2} & \cdots & A_{mm} \end{bmatrix} \begin{bmatrix} x_1 \\ \vdots \\ x_m \end{bmatrix}$$
These design matrices are obtained in a hierarchy when the zigzagging method
is used (see Chapter 11). At lower levels of hierarchy, sensitivities can be obtained
mathematically as the CTSs take the form of basic physical and engineering quan-
tities. In some cases they are not available, and that means that the experimenter
has to rely on some kind of simulation or modeling.
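To make the coupling idea concrete, here is a small numerical sketch with hypothetical sensitivities, showing that an uncoupled (diagonal) matrix lets each requirement be set independently while a decoupled (triangular) matrix must be solved in sequence:

```python
# Hypothetical 2x2 sensitivity matrices [A] in FR = A * DP (values for illustration only).
uncoupled = [[2.0, 0.0],
             [0.0, 5.0]]
decoupled = [[2.0, 0.0],
             [1.0, 5.0]]
fr_target = [4.0, 10.0]

def solve_lower_triangular(A, y):
    """Forward substitution; works for diagonal and lower-triangular design matrices."""
    x = [0.0] * len(y)
    for i in range(len(y)):
        x[i] = (y[i] - sum(A[i][j] * x[j] for j in range(i))) / A[i][i]
    return x

print("Uncoupled DPs:", solve_lower_triangular(uncoupled, fr_target))  # [2.0, 2.0]
print("Decoupled DPs:", solve_lower_triangular(decoupled, fr_target))  # [2.0, 1.6]
```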
CLOSING REMARKS
This chapter, especially, has focused on some mathematics that will allow the
experimenter to pursue design for six sigma (DFSS). The rationale for this mathematical background (review) was to present a case for the integration of six sigma
methodology with scientifically based design methods — in particular, reliability,
axiomatic designs and the Define, Characterize, Optimize and Verify (DCOV) model
in general.
In Volume VII of this series we are going to use this background to show how
important the mathematical base is and how one may apply this knowledge to
optimize designs over two phases: 1. the conceptual design for capability phase, and
2. the tolerance optimization phase. Needless to say, all that may be done with
understanding and application of “robustness” in our designs, products, processes,
and so on.
SELECTED BIBLIOGRAPHY
Chase, K.W. and Greenwood, W.H., Design issues in mechanical tolerance analysis, Manu-
facturing Review, 1, 50–59, 1988.
El-Haik, B. and Yang, K., An Integer Programming Formulations for the Concept Selection
Problem with an Axiomatic Perspective (Part I): Crisp Formulation, Proceedings of
the First International Conference on Axiomatic Design, MIT, Cambridge, MA, Oct.
21–23, 2000.
El-Haik, B. and Yang, K., An Integer Programming Formulations for the Concept Selection Problem with an Axiomatic Perspective (Part II): Fuzzy Formulation, Proceedings of the First International Conference on Axiomatic Design, MIT, Cambridge, MA, Oct. 21–23, 2000.
Hubka, V., Principles of Engineering Design, Butterworth Scientific, London, 1980.
Hughes-Hallett, D., et al., Calculus, 2nd ed., Wiley, New York, 1998.
Kacker, R.N., Off-line quality control, parameter design, and the Taguchi method. Journal of
Quality Technology, 17, 176–188, 1985.
Kapur, K.C., An approach for the development for specifications for quality improvement,
Quality Engineering, 1(1), 63–77, 1988.
Kapur, K.C., Quality engineering and tolerance design, Concurrent Engineering: Automation,
Tools and Techniques, Kusiak, A., Ed., John Wiley & Sons, NY, 287–306, 1992.
McCormick, N.J., Reliability and Risk Analysis, Academic Press, New York, 1981.
Stewart, J., Multivariable Calculus, 4th ed., Brooks/Cole Publishing Co., New York, 1999.
Strang, G., Linear Algebra and Its Applications, 2nd ed., Academic Press, New York, 1980.
Suh, N., Design and operation of large systems, Journal of Manufacturing Systems, 14(3),
1995.
Suh, N.P., Development of the science base for the manufacturing field through the axiomatic
approach, Robotics & Computer Integrated Manufacturing, Vol. 1 (3/4), pp. 397–415,
1984.
Suh, N.P., The Principles of Design, Oxford University Press, New York, 1990.
15 Fundamentals of Finance and Accounting for Champions, Master Black Belts, and Black Belts
This chapter is unique in the context of the six sigma and design for six sigma
(DFSS) methodology. Our intent is not to present a complete course in financial
management, but to introduce some key financial concepts for the Black Belt and
Master Black Belt in dealing with projects, Champions, and management in general.
As we have repeated many times, the intent of six sigma/DFSS is to satisfy the
customer and make a profit (however defined) for the organization. Well, for Black
Belts as well as Master Black Belts, that may be a goal, but the truth of the matter
is that the majority of them have no clue about accounting or financial issues. In
this chapter, we hope to give all those individuals who are about to fix, improve, or even contemplate a change in the system of operations some understanding of the consequences of their recommendations to the organization as a whole. We
do not pretend to have covered the topic exhaustively, but we believe that this is the
minimum information that Champions, Shoguns (Master Black Belts), and Black
Belts must have to be effective not only in selecting their projects but also in
evaluating their outcome.
We hope that the reader will understand that the discussion here is very broad
and covers small and large organizations. As a consequence, not everyone pursuing
six sigma/DFSS will encounter all the issues presented here. However, regardless
of the organization, regardless of the project, somebody, somewhere, somehow in
the organization will be asking or being asked the questions addressed in this chapter.
Adam Smith told how the pernicious sins of covetousness, gluttony, sloth, and greed
were somehow led by an “invisible hand” to benefit society. His remarkable work
was the cornerstone of those studies now called microeconomics, and a good many
business decisions can still be explained with Smith’s basic doctrine: Knowing their
product’s demand, competition, and cost, business people will act to maximize
profits.
However, despite its simplicity and power, the theory suffers in real world
application for two reasons. First, our information about product demand and competition is usually slight, at best, and even costs — though largely under our own
control — occasionally veer away from expectations. Second, the wide separation
of ownership and management in the modern corporation has brought additional
motives to the mix; the invisible hand now guides by remote control, and the guided
managers have ideas of their own.
New theories have come forth since the 1950s to improve and update the original
model. They attempt to include the impact of the manager’s motives, and because
they concern you and me and what abides in our hearts, they are rather interesting.
One theory has it that companies are concerned more with maximizing sales
than profits. That might explain, for example, the current fascination with mergers
and acquisitions that produce instant sales growth yet from a profit standpoint are
often failures, about one third of them according to experts.
You cannot be in management very long without seeing examples of profits
sacrificed to sales: special discounts, loose credit terms, “prestige” products, low
bids. Why? Because the size of a company, as measured by sales (a la the Fortune
500 list), is what brings managers the greatest satisfaction, salary, distinction, and
seeming success.
Moreover, we all identify with the company we keep and the one that keeps us.
We take unto ourselves a bit of the power, reputation, and recognition associated
with our employer. That may be only a small satisfaction, but it is considerably
larger than the one we get from making profits for unknown shareholders. In fact
profits, if they are too large, may be thought unseemly and become an embarrassment
to us. When we are offered a bonus that is tied to net income, it is in part an attempt
to overcome our natural qualms about “excess” profits.
BUDGETS
Another modern theory suggests it is the size of his or her budget that gives a
manager the most satisfaction. How often have you ascribed this motive to your
governmental and not-for-profit colleagues? But it might apply to most corporate
middle managers, as well. The number of employees you direct has a bearing on
your salary; the amount of money that flows into and out of your control is a measure
of your importance; the size of your department often dictates the size of your
expense account, company car, office, etc. These things create far stronger urgings
than a few extra pennies added to the EPS.
BEHAVIORAL THEORY
Finally, there is the theory that suggests that maybe neither profits nor anything else
is being maximized in the modern corporation. The firm is not one body under a
single direction, but at least four bodies, each contributing a required input, and each
seeking a different reward. The basic four are the shareholders, executives, employ-
ees, and government. They cluster together as does a cloverleaf, four distinct parts
joined at the center. That center is a shifting axis representing the economic profit
of the business. Each group demands its share in the form of taxes, dividends,
security, better working conditions, and the like. Each has the power to close down
or sabotage the business.
If you accept this theory, then the task of management is not to maximize the
shareholders’ immediate profits but by satisfying all groups, to forge a cooperative
effort (optimize resources) that will yield a bigger reward for each. It is rare when
the various factions of a business pull together, but when they do the results are
astonishing.
ACCOUNTING FUNDAMENTALS
ACCOUNTING’S ROLE IN BUSINESS
To understand accounting’s role in business we might first look at the principal task
of management. The manager’s job is to control and direct the business affairs under
his or her command. To do so, the manager must understand the effects of past
business transactions and thereby be able to estimate the effects of proposed future
undertakings. Accounting has the dual role of (1) recording every occurrence that
has a financial impact on the business, and (2) reporting these financial data in a
form useful to management. Let us first look at the reports that accounting prepares
for management, then later at the way transactions are recorded.
FINANCIAL REPORTS
The balance sheet, income statement, and other reports summarize the results of a
company’s activities. When all of the talking is done, it is to them you look to see
how well the company is really doing. This is done through an evaluation of the
assets, liabilities and owner’s equity or a balance sheet.
Accountants are financial historians. Their task is first, to record every event in
the life of a business that has a monetary impact; and second, to report those
proceedings in forms that show management how far the company has come and in
which direction it is heading.
“Balance sheet” is the age-old name of a report that sets forth the assets, liabilities,
and equity of a company. As accountants have become more educated and higher
priced they have tried to substitute fancier names such as “statement of financial
position” or “statement of condition,” but the old name lingers on. There are two
important balancings or equalities in this report. The first is usually referred to as
the “balance sheet equation”:

Assets = Liabilities + Owners’ Equity

The second is that, under double-entry bookkeeping, total debits must equal total credits:

Debits = Credits
Accumulated depreciation shows how much of the cost of existing fixed assets has been expensed. It amortizes the cost over a period roughly akin to the useful life of the assets. Here is the accounting entry for the yearly write-off:

Debit: Depreciation Expense        Credit: Accumulated Depreciation
While the creators of this format probably had good intentions, it is confusing
to read and even irritating because there is no figure for total assets. The working
capital figure is of little use and may even be harmful if it is taken to be something
it is not. You may subtract current liabilities from current assets on paper, but you
cannot do it in real life; current liabilities are reduced only by cash.
Noncurrent Assets
The noncurrent assets are those that take longer than a year to liquidate (e.g., long-
term receivables), and those that the company has no intention of selling, such as
property, plant, equipment, vehicles, and other so-called “fixed assets.” The fixed
assets are listed at what they cost, less depreciation, and on the balance sheet itself
no attempt is made to show their current market or replacement value.
Intangible assets such as patents, organization expense, and goodwill (usually
called something like “cost in excess of book value of acquired assets”) are also
shown in the noncurrent assets section, although they may not be labeled as “intangible.”
Noncurrent Liabilities
Among the noncurrent liabilities are bonds payable and other “long-term debts,”
deferred compensation, and maybe accrued pension liabilities. Any part of these
obligations that falls due within the next 12 months is listed in the current liability
section. Also frequently found here is the deferred income tax account, which is a
liability in theory but seldom in practice; accountants (and everybody else) are so
unsure about how to categorize this account that they usually skip giving a total
liability figure on the balance sheet just to avoid having to classify it.
On about one out of five balance sheets you will run into “minority interest.” It
is usually found in between liabilities and equities because it is neither one nor the
other. Minority interest represents the outside shareholders of not fully owned
subsidiary corporations; the amount is not payable to them unless the subsidiary is
closed down and liquidated.
Shareholders’ Equity
The remainder of the balance sheet is given over to the equities. Some accountants
refer to them as a form of liability. They are … if you strain a little and reason that
the company assets that are not owed to the creditors are owed to the stockholders.
But in modern usage, equity is distinguished from liabilities, which are obligations
to make payments on specified dates. Shareholders may be entitled to the equity
share of the assets, but “cashing out” is a practical impossibility unless a majority
of them act to liquidate the company.
Of course, shareholders may sell their interest if the stock is publicly traded or
they can find a buyer. Stockholders of today think of themselves more like depositors
in an institution than owners of a company. The security, comfort, and convenience
of modern investing has been purchased with the power and influence shareholders
once had.
As we stressed earlier, the balance sheet is constantly changing, and the changes
year to year often give a clue as to where the business is heading. That information
is given in the statement of changes, discussed below, but one item in the equity
section — retained earnings — has a whole separate report to show how much and
why it changed. That report is called the income statement.
The Income Statement
This statement is a report of a company’s sales, less the expense involved in getting
those sales, and the resulting profits. It used to be called the “profit and loss”
statement — and was nicknamed “P&L” — but in the turbulent sixties corporations
became sensitive to the word “profit,” and it has all but disappeared from their public
utterances. (Nonprofit organizations are supersensitive about the word, as you might
imagine, and refer to their profits as the “excess of revenues over expenses,” or some
such dignified euphemism.)
Income statements begin with the grandest number found in the business, revenues...the fount of all profits. In most firms the term “revenues” means sales, but
there may be other forms of revenue, too — interest income, rents, royalties, and so
on. The sales figure is usually “net” of returns and allowances.
Gross Profit
The rest of the income statement is a process of distilling the revenues by boiling
off expenses at various stages until you are left with the essence of net profit. The
figures you get along the way vary in importance. The first step is the deduction of
the cost of sales (or cost of goods sold) — the largest expense in most companies.
From these figures you can derive the gross profit margin (it is rarely given in
the report)
Gross Profit Margin = Gross Profit ÷ Sales
Gross profit (GP) and gross profit margin (GPM) are important because they
reflect the basic climate of the business. In the typical firm, the gross profit margin
will not vary more than two or three percentage points from year to year. If the
figure is trending down, it may mean the company’s product line is getting old or
the pressure from competitors is increasing — both of which are major problems.
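A toy calculation of the gross profit margin just defined (the figures are hypothetical):

```python
# Hypothetical income-statement figures ($).
net_sales = 1_000_000.0
cost_of_sales = 620_000.0

gross_profit = net_sales - cost_of_sales
gross_profit_margin = gross_profit / net_sales
print(f"Gross profit: ${gross_profit:,.0f}")
print(f"Gross profit margin: {gross_profit_margin:.1%}")   # 38.0%
```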
A Gaggle of Profits
Like a gaggle of geese whose symmetrical formation in flight points gracefully
toward their goal, profit calculations often taper gently inward as they descend to a
point on the bottom line. Along the way you might find figures for:
• Operating income
• EBIT (earnings before interest and taxes)
• Income before nonrecurring items
• Income before extraordinary items
• Income from continuing businesses
• Income before taxes
You might well ask whether all these numbers clarify the profit picture or deform
it. Perhaps the biggest benefit of EBIT is that it gives management a bigger number
to talk about. Most companies borrow money and pay interest with regularity; they
would not stop if they could, which they cannot, so there is not much point to
deriving a profit without such a routine expense.
The same could be said for profit before income taxes. It is like saying “look
how much money we could make if we did not have to pay taxes.” So what? It
would be as useful, and perhaps more interesting, to show us “income before
president’s salary,” or “income before expense accounts.”
On the other hand, the income before nonrecurring expense, or rather the nonrecurring expense itself, can sometimes be revealing. Most often these charges are
the bite-the-bullet kind; the company has a losing product or division or subsidiary
that management decides to dump.
There is some psychology at work here. The thought of profits being attrited
year after year by some feeble division is depressing; the cost of getting rid of such
a ball and chain is almost inconsequential, so long as it can be tagged nonrecurring.
Management is saying “sure, there have been some problems or mistakes, but now
they are behind us and we can look to a brighter future.”
If you find such a write-off in some company’s glossy annual report, just turn
to the front pages where the recent acquisitions and new products are described with
unfettered optimism; see if you can guess which of them will be tomorrow’s nonrecurring expense.
Earnings per Share
The income statements of public corporations also give an earnings per share (EPS)
figure. From the net income is deducted dividends, if any, on the preferred stock,
and the remainder is divided by the number of common shares outstanding.
The Statement of Changes
The Statement of Changes in Financial Position is descended from a family of “funds
statements” that include (a) the sources and applications of funds, (b) the sources
and uses of cash, and (c) the where-got, where-gone statement.
The purpose of the report is to describe whence money has come into the
business, and how it has been used. There are, of course, thousands or millions of
little pieces to that puzzle, so the statement of changes does some wholesale netting
to get the report down to a manageable size.
All of the transactions involving sales and expenses are combined in a net income
or loss figure; to this are added back those deducted expenses that did not take any
cash, such as depreciation, amortization, and deferred income taxes. The total of
these items is often called the “sources of funds from operations.” The changes in
the current accounts may be grouped together as a change in working capital (current
assets minus current liabilities).
Sources of Funds or Cash
Besides the profits and noncash expenses, any increase in a company’s liabilities is
considered a source of cash. Think of borrowing from a bank; you sign a note that
increases your debts, and you walk out with a pocket full of cash. On the other hand,
any decrease in assets is also a source of funds — as when you sell one of your
trucks for cash.
Use of Funds
Typically, the principal use of funds is for additions to property, plant, and equipment;
also found here are increases in other slow assets, dividends paid, and net reductions
in debts. A balancing figure — the change in working capital — is either included
here or listed just below this section.
Changes in Working Capital Items
Some statements of changes have a section showing the changes in current assets
and current liabilities. The net changes in the current assets — please pay attention,
this is not easy — the net changes in the current assets minus the net changes in the
current liabilities equals the net change in working capital. This figure will match
the change in working capital calculated in the sections dealing with noncurrent
assets and liabilities and equity.
Say what? If after this simple explanation of the statement of changes you feel
as if your brain is turning to mush, be assured it is not your brain that is the problem;
it is the statement. Anyone can have a dud in his or her bag of tricks, and this is
one the accountants have. The statement is hard to understand and has so many
exchangeable opposites that the words increase and decrease tend to lose their
meaning after a few minutes.
Not many people, I have found, bother to read this report; but of the non-
accounting stalwarts who do, most fail to understand it, or worse, they misinterpret
it. Nevertheless, the changes in the balance sheets, one year to the next, may be
important. If that is the case, you can usually get just as good information — and
sometimes better — by simply subtracting the side-by-side numbers in the two
balance sheets listed, rather than from struggling with this unfortunate report.
The Footnotes
There is a cliche among analysts and accountants that the real lowdown on a firm
will be found in the footnotes. There is usually plenty of information there, all right,
maybe four times again as much as in the financial statements themselves. But the
footnotes in a financial report are, like footnotes anywhere else, related information
of lesser importance. Anything with a serious financial consequence will be
expressed on the statements, and while additional details can often be found in the
footnotes, they may or may not be of interest to you.
A classic example on footnotes is the 1986 annual report of General Motors
Corporation. It had a total stockholders’ equity figure on its balance sheet of $30.7
billion. In the footnotes, however, there was more than a full page of crammed data
that reconciled changes in amounts for five different classes of stock, capital surplus,
and retained earnings. While it may give some people comfort knowing that the
extra information is there, for most readers it is not likely to add anything to the
impression made by the single number. We suggest, therefore, not to bother with
the footnotes unless you have a particular need for more details about an item.
Another reason to go easy on the footnotes is that the language there is largely
technical, and if you are unfamiliar with it you might be led to a wrong conclusion.
Besides, all of us in business these days have more information available to us than
we have the time to look at it. Excessive information is no friend to a good decision,
and it is an enemy to action.
Accountants’ Report
Financial reports that have been audited by “independent” CPA firms will contain
a letter from them stating the scope of their involvement and giving their opinion
about the financials. It is usually written in accounting boilerplate.
If the letter is signed by an accounting firm, and it contains “in our opinion,” if
it does not contain “except for” or “subject to,” and if it has no more than a few
sentences, you are looking at a “clean” opinion and can feel very comfortable about
the figures. For any exceptions to the above you had better wade through the whole
letter — depending on how important it is to you.
Since I have looked at quite a few financial reports over the last 30 years, allow me
to recommend a best way to go about it. The fact is that there is no one best way
for everybody. It is an individual thing, a little like the way you observe a member
of the opposite sex walking toward you. You look first at one thing, then another,
and if you are still interested you may turn around and look at a third. But each
person develops the pattern that suits his or her own individual needs. Same with
financials, so here is my pattern.
Step 1. Look first to see if the statement has the independent clean opinion
described earlier. Anything less means that you need to take a more careful
look at the numbers (i.e., testing them against poor common sense and
experience), and read every single word on the statements.
Step 2: Turn to the income statement and look at:
a. The latest net income figure. A loss is a red flag.
b. The prior year’s net income to see the direction of profits. Two years’
losses back to back means stand by the lifeboats.
c. Total revenues or sales to see the direction they are heading. Sales are
a proxy for demand for the product — the single most important
requirement for success in business.
Step 3: Now the balance sheet. Here is the sequence I follow (a quick numeric sketch of these checks appears after Step 5):
a. Right side, second to the last figure from the bottom — shareholders’
equity. Compare it with last year’s figure; if the company was profitable, equity should have gone up. Now compare it with the bottom
number (total debt plus equity) just below; if the bottom figure is more
than twice the equity, the firm may have too much debt.
b. Run your eye up the page to the total current liabilities amount; remem-
ber the number (rough rounded).
c. Now over to the current assets. Check the total of that section; if it is
only slightly higher than the total current liabilities, that is bad; twice
as much is good.
d. Finally, look at cash (plus marketable securities). If it is less than ¹⁄₁₀
the current liabilities, it is not so hot; a good ratio would be 30% or
more.
Step 4: The next step is to sit back and reflect for a moment. You have made
mental tests of statement reliability, profitability, leverage, and liquidity;
now form a preliminary opinion of the overall condition: excellent, good,
fair, poor, or lousy. If you have a mixture of good news, bad news and/or
you have to explain your judgment to others, go on to Step 5.
Step 5 (Optional): For a second opinion, ask for professional advice.
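The quick checks in Step 3 are easy to script; the sketch below uses hypothetical balance-sheet figures and the rough thresholds mentioned above.

```python
# Hypothetical balance-sheet figures ($ thousands).
shareholders_equity = 400.0
total_debt_plus_equity = 950.0
current_liabilities = 220.0
current_assets = 410.0
cash_and_securities = 70.0

print("Leverage flag (debt + equity more than 2x equity):",
      total_debt_plus_equity > 2 * shareholders_equity)
print("Current ratio:", round(current_assets / current_liabilities, 2),
      "(roughly 2.0 or better is good)")
print("Cash / current liabilities:",
      round(cash_and_securities / current_liabilities, 2),
      "(30% or more is good; under 10% is weak)")
```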
You will recall that we started this section by saying that accountants have a
dual role in business: (1) to record every financial transaction, and (2) to report this
financial data in a form useful to management. We have looked at the reports prepared
by accountants. Now let us examine how transactions are recorded.
The history of humankind, that is, the written record of human activities, goes
back about ten thousand years. The earliest evidence of writing that we have discovered consists of some lumps of clay on which Sumerian farmers recorded their
livestock — what we might fairly term “accounting records.”
Today almost all accounting is done by the double-entry bookkeeping method,
which was developed by the Roman Catholic Church. Thus the term “accounting
clerk” derives from the word “cleric.” The first evidence of this system dates back
to Genoa, Italy in the fourteenth century.
I think we would all agree that the accounting profession has had plenty of time
to settle on all the right procedures. But judging by the changes still going on in
accounting, and the liveliness of the debates about them, you wonder if the devel-
opment of accounting is even half complete.
One reason for the continual changes may be that there is no unifying theory
of accounting similar to, say, the supply-demand concept of economics or the
ego-id theory of personality. Instead, accounting is based on conventions, that is, rules
established by general consent, usage, and custom. These rules are called generally
accepted accounting principles (GAAP), and they change from time to time.
Accounting, then, is very much alive — if not completely well — and the challenges
and opportunities it offers to good management are as fresh as ever.
TABLE 15.1
A Summary of Debits and Credits

                                        Debits                Credits
Abbreviations                           Dr                    Cr
Represent what is                       Taken                 Given
They often designate what is            Owned                 Owed
Or sometimes                            Benefits received     Money spent
They are also the normal balances of    Assets                Liabilities
                                        Expenses              Equity
                                                              Revenues
The terms debit and credit were devised to represent the take and the give; they are
from the Latin language because our modern accounting system traces its origin to
Catholic clerics of the fourteenth century. In general, debits represent what is taken
from a transaction, and credits what is given up. Debits and credits have no more
meaning than that, and we could just as easily have chosen other terms in their place,
black and red, for example, or left and right.
Of course the words debit and credit have other meanings in our language, but
it will only confuse you to try to match them with the narrow usage in accounting.
Table 15.1 summarizes the use of these terms in accounting.
Sources and Uses of Cash
In order not to make this come too easily to the uninitiated, financial people some-
times define debits as the uses of cash, and credits as the sources of cash. These are
what I call “definitions plus”: normal definitions plus one mental broad jump. Debits are
said to be uses of cash because when cash is spent (that is the credit part of the
transaction), something such as an asset is taken and recorded as a debit. They make
a similar convolution to label credits as a source of cash. If you borrow money from
a bank, the cash they give you is a debit — something received, an asset — but the
source of that cash was the bank loan — a liability and a credit.
How Debits and Credits Are Used
Every item — asset, liability, or equity — on the balance sheet has a dollar value
assigned to it; it is the company’s and its CPA’s best estimate of the worth of that
item. But each account also has another quality about it, one that is hidden and not
expressed: Every dollar amount on the balance sheet is also either a debit balance
or a credit balance. If you look back at Table 15.1, you will see that assets are debit
balances, while the liabilities and equity accounts are credits.
You will recall our earlier discussion of the balance sheet equation:

Assets = Liabilities + Equity

The two sides of the equation are equal.
TABLE 15.2
Summary of Normal Debit/Credit Balances

Balance sheet accounts (normal balance)
    Assets (debit): Current, Miscellaneous, Fixed, Intangible
    Liabilities (credit): Current, Long Term and Deferred
    Equity (credit): Common and Preferred Stock, Retained Earnings

Income statement items (normal balance)
    Sales (credit)
    Less: Cost of Sales (debit)
    Gross Profit
    Selling and Administrative Expense (debit)
    Interest Expense (debit)
    Other Income and Expense
    Income Taxes (debit)
    Net Income
    Less: Dividends (debit)
    Added to Retained Earnings
They will stay that way so long as every future transaction is recorded with
equal amounts of debit and credit dollars.
Forget + and – . In the use of debits and credits, they do not stand for plus and
minus. Both may be either; it depends on the account they are applied to. If a debit
is applied to an account that already has a debit balance, the two amounts are added
together, and a larger debit balance results. If a credit is applied to an account with
a debit balance, then the amounts are subtracted from one another. A similar rule
holds for accounts with credit balances. Debits and credits, in other words, are added
to their own kind but subtracted from their opposite number.
CLASSIFICATION OF ACCOUNTS
In accounting there are five basic types of accounts. On the balance sheet: assets,
liabilities, and equity. On the income statement: revenues and expenses. Table 15.2
summarizes their normal debit/credit balances.
Recording Transactions
Remember the basic rule in accounting that in the recording of a transaction debits
must equal credits. We can readily see that every business dealing has both a give
and a take to it. When a company buys merchandise it takes the goods and gives
money in return. The opposite occurs when the goods are re-sold. When it hires a
worker, a company takes the fruits of his or her labor and gives back cash in the
form of wages. In a broad sense, debits represent the take in a business transaction,
and credits the give.
For example, your company buys a new computer, paying $900 in cash. The entry is
Dr Office equipment $900
Cr Cash $900
The give and take aspects of double-entry accounting are easily seen here (they
are not always so readily apparent). The debit is what has been received; the credit
is what has been given in exchange; the debit also represents a use of cash. In this
example, both of the affected accounts are assets. The cost of the computer will be
added to the cost of previously acquired office equipment, which is already shown
on the balance sheet as an asset (debit). As we are combining a debit entry with a
debit balance, the result will be a larger asset account on the next balance sheet.
However, the cash used to buy the computer was also an asset (debit). Now
when we combine the credit to cash with our beginning debit balance of cash, the
result is a smaller debit balance of cash. Since only assets were affected by this
transaction, the total of assets was unchanged. There were no effects at all on
liabilities, equity, revenues, or expenses.
Another example: Your company makes a sale amounting to $2150. As soon as
an invoice is issued, the accountants will record the transaction. If the sale is for
cash the entry will be
Dr Cash $2150
Cr Sales $2150
In this situation, both entries are plus amounts. The debit is added to Cash and
the credit is added to Sales, for Sales is a revenue account that normally has a credit
balance.
Note: In this case there are no effects on liabilities, equity, or expenses. The cost
of the goods that were sold will be recorded in a separate transaction at the time we
derive a new inventory figure.
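For readers who find a few lines of code clearer than prose, here is a small Python sketch of the two transactions just described. The account names and the sign convention are illustrative assumptions; the point is only that every entry carries equal debits and credits, and that amounts add to their own kind and subtract from their opposite.

# Minimal double-entry posting of the two examples above.
# Debit-normal accounts (assets, expenses) grow with debits;
# credit-normal accounts (liabilities, equity, revenues) grow with credits.

NORMAL = {"Cash": "Dr", "Office equipment": "Dr", "Sales": "Cr"}
balances = {name: 0 for name in NORMAL}

def post(entries):
    """entries: list of (account, side, amount); debits must equal credits."""
    drs = sum(amt for _, side, amt in entries if side == "Dr")
    crs = sum(amt for _, side, amt in entries if side == "Cr")
    assert drs == crs, "unbalanced entry"
    for account, side, amount in entries:
        sign = 1 if side == NORMAL[account] else -1   # add to own kind, subtract from opposite
        balances[account] += sign * amount

post([("Office equipment", "Dr", 900), ("Cash", "Cr", 900)])   # buy the computer
post([("Cash", "Dr", 2150), ("Sales", "Cr", 2150)])            # record the sale
print(balances)   # {'Cash': 1250, 'Office equipment': 900, 'Sales': 2150}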
The transactions we just looked at and similar ones are recorded in a book called
the General Journal. It sets out in chronological order all of the firm’s business
dealings. It is like a diary of business transactions. Large firms have special journals,
such as the Cash Receipts Journal, for recording certain classes of transactions,
which are summarized at the end of the month or year in the general journal.
The second book of account is the General Ledger (GL), in which each and
every account has its own page, on which all of the journal entries relating to that
particular account are transcribed. With the GL you can look up an account such as
Cash or Notes Payable or Salaries and see all of the transactions made during the
year and the current balance of the account.
The Trial Balance
The process of closing out the books at the end of the year can be rather elaborate.
There are often many adjusting entries to be made, and various accounts must be
combined and fitted to form the final financial statements.
The first step in that process is the preparation of the Trial Balance (TB). The
TB is a listing of all the accounts in the General Ledger with the current balances
shown in either a debit or a credit column. Since debits and credits are equal in
every transaction, the two columns of the trial balance should also be equal. Finding
the two columns equal is the “trial” part, for if they are not, the accountants must
locate and correct the errors before they can proceed. The accounts listed in the trial
balance are then divided among the balance sheet and the income statement to form
those reports.
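Continuing the little ledger from the sketch above, the "trial" itself is nothing more than the following; the figures are hypothetical.

# A sketch of the "trial" in a trial balance: list each ledger account's
# balance in a Dr or Cr column and confirm the two columns are equal.

ledger = {                      # account: (normal side, balance)
    "Cash": ("Dr", 1250),
    "Office equipment": ("Dr", 900),
    "Sales": ("Cr", 2150),
}

dr_total = sum(bal for side, bal in ledger.values() if side == "Dr")
cr_total = sum(bal for side, bal in ledger.values() if side == "Cr")
print(f"Debits {dr_total}  Credits {cr_total}  In balance: {dr_total == cr_total}")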
The Mirror Image
To the neophyte, the discussion of debits and credits may seem confusing and even
inconsistent. For example, when we say that a “debit to cash” adds to our cash balance, it can create
understandable confusion. And when we put money into our checking account the
teller, if he or she speaks to us at all, may tell us that the bank is crediting our
account. But is this not debiting instead?
The confusion arises from the fact that our accounting entry in recording a
transaction is often a mirror image of the other party’s. The cash that we take in
(our debit) is the same cash that the other person has paid out (his credit). And the
goods we delivered to him are deducted (by a credit) from our balance sheet, and
added (by a debit) to his. When the bank tells you that your deposit is being credited
to your account, they are speaking from their viewpoint, not yours. Their accounting
entry is:
Dr Cash
Cr Demand deposits
Since your demand deposit is a credit on their books, an entry crediting your
account is one that increases the balance.
The Accrual Basis
The word accrual is an awkward one. First, most people, when they use the term
for the accrual basis, give the quick two-syllable pronunciation, “a-krool,” rather
than the proper three-syllable version, “a-kroo-al.” Second, it rouses no sense of rec-
ognition or meaning. It is one of those words that requires you suspend all other
thoughts while you struggle for its gist. And third, even when you remember that
accrue means to accumulate or increase, it is still hard to make the connection with
the accrual basis of accounting.
Now that that is out of my system, accrual is the accounting principle that counts
sales as income even though the cash has not yet been received, and records expenses
in the period they produced sales although they may have been paid in some other
period.
If we look at the income statement in a company’s annual report, the first item
is sales, for the whole year, though it is likely the invoices from the final month are
still uncollected. By the same token some expenses, such as telephone and utilities,
are counted although those incurred in the last month may not be paid until the
following year. Sometimes there is the opposite effect, where the cash moves first,
and the recording of income and expense comes later. For example, a customer sends
in a check along with an order; the check may be deposited right away, but no sale
is recorded until the goods are delivered.
Or, say a company builds an elaborate display in December 2001 for a convention
in January 2002. Assuming the company operates on a calendar year basis, the
money spent in December would not be counted as an expense until 2002, the year
in which the benefits of the expense are derived.
As individuals, most of us use a cash basis for tax purposes. We do not report money
owed to us as income — only the cash we have received. Nor do we take a deduction
for a medical expense before we have paid the bill. Some businesses — usually
small — operate that way, too. There are a few advantages. It is a clean and simple
way of accounting; the cash receipts and payments records serve for the income and
expense statement as well as for tax purposes. The cash basis allows some maneu-
vering through the use of delayed billing or accelerated payments. The big disad-
vantage of the cash basis is that it is a crude measure of a fast-paced activity — so
crude that a lot of damage can be done before a true assessment is made.
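A short sketch may make the timing difference concrete. The two items below are hypothetical December/January transactions patterned on the examples above; the function simply counts each item in the year indicated by the chosen basis.

# Hypothetical December/January items, showing when each basis counts them.
# Under accrual, revenue and expense follow the period they relate to;
# under the cash basis, they follow the movement of cash.

items = [
    # (description, period earned/incurred, period cash moves, amount)
    ("December sale, collected in January", 2001, 2002, 5000),
    ("Convention display built in December, used in January", 2002, 2001, -1200),
]

def income(basis, year):
    return sum(amt for _, earned, cash, amt in items
               if (earned if basis == "accrual" else cash) == year)

for basis in ("accrual", "cash"):
    print(basis, {y: income(basis, y) for y in (2001, 2002)})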
Details, Details
The accrual basis of accounting sometimes appears overly concerned with particu-
lars. When the year ends one day after payday and an accountant spends hours
calculating that one day’s accrued salaries so they can be charged to the old year,
you may well wonder if the accounting profession is not feathering its bed.
But if there is any one thing about business that is essential for management to
know, it is an accurate picture of profit and loss. And given the large numbers we
deal with and the slender profit margins that accompany them, getting accurate
income figures is worth a lot of expense and bother.
Accrual accounting is father to the balance sheet. Look at the assets side of a balance
sheet, and draw a line under accounts receivable. Just about everything beneath that
line is a prepaid expense, an expenditure waiting to take its place in the expense
section of some future income statement.
On a cash basis, these assets would be counted as lost costs — a gross distortion
of the truth. But with accrual basis accounting we assign them a value commensurate
with their potential for producing future revenue. The accrual method is often a pain
both to apply and to understand, but consider the investment and credit decisions
(and if those do not move you, the management bonuses) that are dependent upon
it. The more accurate the measure of our past activities, the better will be our future
decisions.
Businesses are in business to make a profit, but they run on cash. And if you think I
am kidding, try getting on a bus with just your income statement. Because accrual
accounting distinguishes between the profit effect and the cash effect of transactions,
it is necessary for management to have a cash plan as well as a profit plan. Yes, it is
possible to be profitable and still go broke, that is, run out of cash. The problem can
be acute for highly seasonal businesses or those with a fluctuating cash/credit sales mix.
Most annual reports begin with a hymn of praise by management for themselves,
followed by colorful pictures of shiny products and smiling workers. It is the CPA’s
job to express all this in terms of dollars in the income statement and balance sheet.
From time to time, sentimentalists wonder why the human worth of the employees
is not reflected on the financial statements. Most of us know workers who might qualify
as assets, and others more properly described as liabilities, but no one has yet come up
with an acceptable way of putting a number to these characteristics.
The value of an asset is continually changing as a result of wear and tear on the one
hand, inflation on the other. Perhaps the true value of an asset is revealed only at
those moments in time when it is sold. Only at those times are we certain of the
asset’s hard cash value.
For convenience, accountants have fastened on the moment of acquisition to value
the fixed assets. The amount originally paid for an asset is the balance sheet value, and
no attempt is normally made to adjust that value except for scheduled depreciation.
For this convenience we pay a price, particularly in times of high inflation when
assets often appreciate. One of the most often-heard criticisms of CPAs is their
failure to value fixed assets at the current or replacement value. Leaving aside the
controversy there are two things to keep in mind:
1. Most assets are difficult to appraise, and there is often a wide difference
of opinion.
2. Business assets acquire value from the revenue stream they are able to
produce. If the value of assets does indeed rise during inflation, the proof
should be found in higher earnings.
Historical cost is the present basis of balance sheet values. We can all agree that a
thing is worth what someone will pay for it, so for at least one moment in time this
was the unquestioned value.
Economic Value
This is the present value of all future income expected to be derived from the asset,
discounted at a rate commensurate with business risk (typically 10% to 15%).
While most financial experts would agree that this is an asset’s “truest” value,
it is all based on the tricky task of estimating future income. When you apply
mathematical precision to a “guess forecast,” you also get another version of a guess
estimate with angular numbers instead of round ones.
Psychic Value
Often a factor in mergers and acquisitions, psychic value looks to the buyer’s state
of mind rather than any characteristic of the asset. Unfortunately, trying to divine
the hopes and dreams rattling around in the mind of a potential buyer is not any
easier than estimating future income.
Current (replacement) asset values, as a result of double-digit inflation a while back, got a lot of attention from
the Securities and Exchange Commission and the accounting profession, if not
businesspeople themselves. The burden of their studies, however, was not that asset
values are really much higher than stated, but that depreciation allowances based on
historical costs understated the “true” expense, and thus led to overstated profits.
The current value issue, like inflation itself, is about as predictable as the
common cold, and as frustrating to cure. Bad as historical costing is, CPAs just have
not found anything they like better.
Granted that it may sound like a contradiction, assets and expenses are very much
alike. Except for financial assets (discussed below) and land, assets are little more
than prepaid expenses. The reason we just do not call them expenses is that they
still have some juice left in them — some power to generate future sales.
All expenditures — except payments of debt — result in either an expense or
an asset. The distinction rests on how long the item purchased will be of use. If it
will be used up by the end of the year it is an expense; if its usefulness extends
beyond the present accounting year it is an asset. Therefore, money spent for wages,
electricity, or travel results in an expense, while money spent to acquire carpeting,
a lathe, or a jet liner creates an asset.
Some distinctions are not so easy. Money spent to incorporate a business may
be listed as an asset (organization expense) on the theory that it will benefit the
company throughout its life. On the other hand, it might be written off at once as
just another legal expense. Most of the asset/expense decisions will be made by your
CPA using established principles, but there are always some arguable cases. The
key question is, do you want to bear the entire expense now or stretch it out? Since
most managements exist at the sufferance of the bottom line, it is more than an
academic issue. More often than not, if it is a borderline case, the course of action
is to expense off what you can and still keep your job. Remember that the issue will
not affect your cash balances — the money has already been spent.
TYPES OF ASSETS
Assets may be classified according to their tangibility. This is not the usual way we
distinguish them in financial reports, but it can add depth to our understanding of
the nature of modern business.
Financial Assets
These include cash, marketable securities, accounts and notes receivable, and invest-
ments.
Cash is the premier asset — it always gets the first position on the balance sheet.
There is little need to explain why, for while other assets may interest us, cash
generates something more akin to a fascination. I am reminded of something attrib-
uted to the Roman poet, Ovid, who is best known for writing “The Art of Love,”
and its antidote, “The Remedies of Love.” He said: “How little you know this world
if you fancy that honey is sweeter than cash in the hand.” Now if the poet Ovid
sounds a little like an economist, the economist John Kenneth Galbraith sounds a
little like a poet when he discusses money: “It ranks with love as the source of our
greatest pleasure, and with death as the source of our greatest anxiety.”
Accounts receivable are the monies due from customers. Nearly all firms that
sell to other businesses sell on open account credit, so receivables usually represent
one of the larger kinds of assets you need to run a company. Receivables are “claims
on money,” and as such are maybe halfway to being cash. Some people are fond of
reminding us that you still have to collect the account before you have something,
but the average amount that ends up as bad debts is only about a third of a percent.
Other cash claims such as receivables from and investments in affiliated com-
panies may or may not be financial assets, depending on how readily redeemable
they are. Like a loan to your brother-in-law, these may be more in the nature of gifts
than financial assets.
Financial assets make a shiny impression on those you deal with; they give you
tactical flexibility; they invite opportunities to knock on your door; and they give
you a sense of security and well-being. On the other hand, these financial assets are
given to you (management) to do something with besides bathe in their glow, and
by themselves they produce limited income. Later we will discuss the question of
how much cash is too much. Unlike property and equipment, financial assets do not
wear out or become obsolete. They do, however, suffer from inflation and, in the
case of marketable securities, from fluctuation in market price.
Physical Assets
These include the inventory, land, buildings, equipment, and anything else you can
paint. Most physical assets are subject to depreciation — the process of writing off
(accountants say “expensing”) the assets over their useful life.
Exceptions are inventory, which is not kept long enough to depreciate (that is,
it had better not be), and land, which is assumed not to depreciate (but do not try
convincing the folks in the vicinity of Mt. St. Helens).
Physical assets — other than these two — can be viewed as lost costs or prepaid
expenses. Their value is manifest not in their cost, size, sturdiness, or beauty, but in
their ability to create customers.
Operating Leverage
For the past 250 years, business has gradually increased its physical assets in
proportion to the people it employs, through the process we call automation — the
substitution of machines for people. This is referred to as operating leverage. In
general, higher operating leverage (a higher assets to people ratio) results in higher
profits in expansion and higher losses in a sales decline. That factor, so often
neglected in capital budgeting decisions, can have a profound effect on a company’s
long-term prospects.
Determining the Value of Inventory
Businesses use three principal methods to assign a value to their inventories: FIFO,
LIFO, and the weighted average method. We may think of inventory as a reservoir
of goods for sale. At the beginning of the year it stands at a certain level; during
the year we add to it by purchasing or manufacturing more goods; from it we take
the goods that we sell. And at the end of the year, we measure the level at which it
stands. The value of our inventory is what it cost us to make (not what we think we
can sell it for), but during the year costs may have fluctuated because of inflation
or changes in the supply of and demand for the raw materials or goods we purchase.
Moreover, in most companies businesspeople are not sure which goods — the higher
costing or the lower costing — were sold and which remain in inventory at the end of
the year. Therefore, the amount at which we value our ending inventory as well as the
cost of goods sold during the year will depend on the valuation method we choose.
FIFO
The First-in First-out (FIFO) method assumes that the oldest goods on hand are the
first to be sold, and the inventory remaining consists of the latest goods to be
purchased or made. This is a very reasonable assumption since most companies will
sell their products in roughly the same chronological order they acquired them.
LIFO
The Last-in First-out (LIFO) method assumes that the latest goods acquired were
the first ones sold, and the year-end inventory consists of the oldest goods on hand.
In most of the firms that use LIFO, this is clearly a fiction. The reason it is accepted
is to defer the payment of income taxes. How that works can be explained in a three-
step thought process:
1. The history of the world is inflation; we have always had it (except for
brief periods), and there is no sign of it disappearing.
2. In inflation, the goods purchased or made earlier in the year are likely to
cost less than those acquired near the end of the year.
3. By assuming that the latest, higher-cost goods were the ones sold, LIFO makes
the cost of goods sold higher and the reported profit lower, so some of the
income tax is deferred to a later year.
Weighted Average
With this method, the goods remaining in inventory will be valued at the same
average cost as those that have been available for sale during the year.
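A small sketch can show how much the choice matters. The purchase lots and the quantity sold below are hypothetical; one function values the ending inventory under each of the three assumptions.

# Hypothetical purchase lots (quantity, unit cost) acquired in order during the year,
# with 150 units sold. Ending inventory value under each assumption:

lots = [(100, 10.00), (100, 11.00), (100, 12.00)]   # oldest lot first
sold = 150

def ending_inventory(lots, sold, method):
    if method == "AVG":
        qty = sum(q for q, _ in lots)
        avg = sum(q * c for q, c in lots) / qty
        return (qty - sold) * avg
    # FIFO keeps the newest goods on hand; LIFO keeps the oldest.
    remaining = list(lots if method == "LIFO" else reversed(lots))
    left, value = sum(q for q, _ in lots) - sold, 0.0
    for q, c in remaining:
        take = min(q, left)
        value += take * c
        left -= take
    return value

for m in ("FIFO", "LIFO", "AVG"):
    print(m, ending_inventory(lots, sold, m))   # 1750.0, 1550.0, 1650.0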
Depreciation
One of our most useful financial concepts — for setting prices, providing funds to
replace capital assets, and postponing taxes — is depreciation. Depreciation is the
value a fixed asset loses through our use of it and the passage of time. In business,
we recognize that depreciation, along with the cost of labor, materials, and taxes, is
an expense of running the company. Accountants recognize (record) depreciation in
an unimaginative, mechanical way that approximates real life in the long run but
may vary widely from it in the short.
We must recognize the expense of depreciation in order to correctly price our
products and get a true picture of profits or losses. Suppose we bought an ice cream
machine for $2000 and started selling cones for 50 cents, after determining that our
out-of-pocket expense to make them was 30 cents. It is obvious we are not making
a profit of 20 cents on each cone even though we have that much extra in our pocket,
because the 2000-dollar machine is gradually wearing out and losing its value
(especially when making my favorite, pecan-praline, because the little crunchies in
there wear it out faster). It is possible that at a half a buck a cone we will wind up
losing money.
Useful Life Concept
Under present accounting practices, a company that buys equipment or some other
fixed asset must estimate the number of years it will use the item and what salvage
or residual value it will have when the company is finished with it. The depreciable
amount, that is, the cost minus the salvage value, is apportioned to expenses over
the useful life of the asset in either equal or formulated amounts.
Because depreciation is a legitimate expense, because it does not involve a cash
payment to anyone, because it is based upon estimates of future wear and tear, and
because a variety of depreciation methods are acceptable to tax authorities and CPAs,
the depreciation process is almost an irresistible invitation to tax strategies and fiscal
manipulation. The manipulators are themselves manipulated by the government,
which frames depreciation rules so as to encourage businesses to buy new production
equipment. (Not all organizations, however, bother with depreciation. A list of those
that do not would include tiny companies with too little income to deduct depreci-
ation from, as well as giant nonprofit institutions that neither charge for their services
nor pay any taxes.)
You might expect each year’s depreciation simply to be deducted from the asset’s
balance sheet value. Not so. For reasons best known to themselves, accountants like to preserve the
original cost of the asset. And so they create a valuation reserve that accumulates
the depreciation expensed each year; the accumulation is then deducted on the
balance sheet from the original cost of the fixed assets to produce a book value.
Here is the accounting entry:
Dr Depreciation expense
Cr Accumulated depreciation
Because no cash changes hands, depreciation is often described as a non-cash expense,
as existing merely on paper and not requiring any cash. Can you see how, by adjusting
this non-cash expense upward, you can actually save yourself cash by lowering your
income tax bill? This is the magic of depreciation. Increase any other expense and
you have less cash; increase this one and you have more.
Behind the enchantment, however, lie some essential truths — realities that are
frequently overlooked — to the extent, that is, that real life permits such a thing:
Suppose, for example, you purchased an office copier for $5000, estimated its
useful life at 4 years, and thought you could afterward trade it in for $1000. The
depreciable amount is $5000 - $1000 = $4000, so straight line depreciation would be
$4000 / 4 = $1000 per year. An accelerated alternative is the sum of the years’ digits
(SYD) method. First, add the digits of the years of useful life:

1 + 2 + 3 + 4 = 10

The 10 becomes the denominator in a fraction, the numerator of which for the
first year is the last number in the sum: 4. The fraction is applied to the depreciable
amount, thus:

(4/10) × $4000 = $1600

The second year’s calculation is (3/10) × $4000 = $1200, and so on through
each digit until a total of 10/10, or 100%, of the depreciable amount has been expensed.
As you can see, the first year’s depreciation under SYD is significantly greater
than under straight line. More expense means less income and income tax. Since
the total deductible amount is $4000 in either case, however, SYD will have to
compensate later on for the big numbers in the early years.
Getting the sum of the years’ digits can be tedious if it is a big number, such
as 15 years, so here is a formula you can use. Where N = the number of years’
useful life,

Sum of the digits = N(N + 1) / 2

For example, for a 15-year life: 15 × 16 / 2 = 120.
Under the double declining balance (DDB) method, we start from the straight line
rate. In our copier example the straight line rate of depreciation was 25% per year,
because the useful life is 4 years and 4 × 25% = 100%. If the useful life had been
5 years, the rate would be 20%, and so on. Under DDB this rate is doubled and applied
to the beginning book value each year. For the first year in our example, the depreciation is:

2 × 25% × $5000 = $2500

As you can see, we have stopped kidding around; we are really talking deprecia-
tion now. At the end of Year 1 the book value of our copier is:

$5000 - $2500 = $2500
A related approach, the units of production method, ties depreciation to use rather
than to time. If the copier were expected to produce 500,000 copies over its life and
produced 100,000 of them this year, the year’s charge against the $4000 depreciable
amount would be:

(100,000 / 500,000) × $4000 = $800
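Here, in sketch form, are the yearly charges for the $5000 copier under straight line, SYD, and double declining balance. The DDB schedule below simply stops once book value reaches salvage, which is one common convention among several; treat the code as an illustration rather than a tax-approved recipe.

# Yearly depreciation for the $5000 copier (4-year life, $1000 salvage)
# under straight line, sum of the years' digits, and double declining balance.

cost, salvage, life = 5000, 1000, 4
depreciable = cost - salvage

straight_line = [depreciable / life] * life

syd_sum = life * (life + 1) // 2
syd = [depreciable * (life - y) / syd_sum for y in range(life)]

ddb, book = [], cost
rate = 2 / life
for _ in range(life):
    charge = min(book * rate, book - salvage)   # never depreciate below salvage
    ddb.append(charge)
    book -= charge

print("Straight line:", straight_line)   # [1000.0, 1000.0, 1000.0, 1000.0]
print("SYD:          ", syd)             # [1600.0, 1200.0, 800.0, 400.0]
print("DDB:          ", ddb)             # [2500.0, 1250.0, 250.0, 0.0]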
Replacement Cost
Replacement cost is a theoretical method not used for either financial or tax purposes;
the depreciation is based on future replacement rather than original cost. If you
thought that at the end of five years you would have to pay $10,000 to replace your
copier, you might consider the “true” depreciation cost to be:

$10,000 / 5 = $2000 per year
Keep in mind, though, that this method is not authorized for financial reporting
or income tax purposes.
Advantages of Accelerated Depreciation
Depreciation expense is known as a non-cash expense because there is no out-of-
pocket payment associated with it, as there is with almost every other expense. Of
course, a payment is made at the time the asset is acquired. Afterward, however, the
amount or rate of depreciation has no effect on a company’s cash except as it affects
profits and profits affect the amount of income tax that is paid.
By selecting an accelerated method of depreciation, a company can postpone
the payment of some taxes that would be paid using the standard straight line method.
This postponement is very like an interest-free loan from the government.
To be sure, the main difficulty with financial statements is what may be called
the good news/bad news syndrome. Seldom do financial statements look completely
good or completely bad; they nearly always exhibit both qualities. There are two
principal ways of analyzing the financial strength of a company. One is through a
ratio analysis of recent financial statements. The other involves a financial forecast
of the near future. Ratio analysis is the easiest to learn and the fastest to use, and
that is the method we will examine first. Financial forecasting is more difficult to
learn and complex to apply, but it gives superior results.
Forecasts often require us to make difficult estimates of unknowns, but they deal
in specific goals and dates, such as earnings in the coming year or cash flow in the
next 15 months. Ratios, on the other hand, are usually easy to calculate, but the
results are often abstractions that may be hard to apply to real world problems. Does
knowing that the current assets are 200 percent of the current liabilities tell you if
you can pay your bills on time?
As we discuss ratios, keep in mind that they are nothing but little numbers unless
we have some standard by which to measure them. The 2:1 current ratio mentioned
above does not help you much unless you know what number constitutes a good
current ratio and whether it gets better as it gets higher, or vice versa.
RATIO ANALYSIS
The dollar values of items on the income statement and balance sheet have little
significance by themselves. Rather it is the proportion of accounts, or groups of
accounts, one to another that tells us whether a company is financially viable or not.
For example: Suppose a businessperson tells you his company has $85,000 in its
checking accounts. The figure means virtually nothing unless you can relate it to
other aspects of the business. If the man runs a local shoe store, he may be well
fixed, but if he turns out to be the president of the Eastman Kodak Company, he is
talking about the amount of cash that flows in and out of the company approximately
every minute of the business day.
A major problem with this kind of analysis has been the proliferation of different
ratios. Every financial statement lists several accounts; they may be compared to
each other or to the same accounts in previous periods; combinations of accounts
are related to individual items or to other combinations; and ratios themselves are
often divided by other ratios to produce super ratios for determining trends. The
possibilities and the confusion seem to be without limit. As an example let us look
at a balance sheet and income statement for a hypothetical Company X.
ASSETS
Cash & Short Term Investments        1585     613
Accounts Receivable (Net)            1678    2563
Inventories                          1703    2072
Deferred Taxes                        230     348
Prepaid Expense                        50     215
Current Assets                       5246    5811

DEBT + EQUITY
Notes Payable
Current Maturities of LT Debt
Accounts Payable                     1564    3440
Accrued Liabilities
Taxes Payable                         482     209
Dividends Payable                     201     142
Preferred Stock
Common Stock                          674     936
Retained Earnings                    5354    6533
Less: (Treasury Stock)                       -1081

Expenses:
Selling, G&A                         1753    2693
Depreciation
Research and Development
Liquidity Ratios
Liquidity refers to the ease with which an asset can be converted to cash. The liquid
assets in a business are cash itself and those things that are near to being cash, such
as accounts receivable, or that are readily convertible, such as marketable securities.
The Securities and Exchange Commission and countless analysts have defined
liquidity as the ability to pay debts when they come due. A gutsier definition might
be simply “enough cash.” But enough for what? The answer to that is usually found
in the denominator of liquidity ratios. Enough cash to pay the bills coming due;
enough to pay recurring expenses such as payroll; and enough to cover unexpected
needs and opportunities. In addition, that simple question often yields a perplexing
answer. The elements of liquidity are in an active state of flux. Both the amount of
cash a business has on hand and the amount it is obligated to pay change with
virtually every transaction that occurs. And even a modest-sized company may have
1000 employees spending its money — and 10,000 customers sending cash in.
It is difficult if not time-wasting, therefore, to contemplate cash needs moment
to moment. Most firms try to forecast cash flows in and out for a day, a week, or a
month, and then add a cushion to cover normal variances. An even more serious
problem in managing liquidity is that we are obliged to weigh an uncertainty against
a certainty.
There is little in life that is as fixed, certain, and unremitting as a debt owing.
On the other hand, few things are as inconstant, fickle, and capricious as payments
promised, loans pending, and sales forecasted. In using liquidity ratios it helps to
identify the certain and uncertain elements, and how much of the latter it takes to
balance the former. For example, in the popular current ratio (current assets/current
liabilities), we see a blend of uncertainties in the numerator. We can count on the
cash we already have, but the timing of receivable collections is somewhat uncertain,
and the sale of inventory even more so. The amounts and due dates of the current
liabilities, however, are known and fixed. In matching up the two elements, therefore,
we know instinctively there should be more current assets than current liabilities in
order to offset the uncertainty of the former.
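As a quick illustration, the current ratio and its stricter cousin, the quick ratio, can be computed from a handful of balance sheet figures. The numbers below borrow from the Company X statement where figures are shown and assume a total for current liabilities, which that statement does not give.

# Liquidity ratios from partly assumed balance sheet figures (in $000s).

current_assets      = 5246
cash_and_securities = 1585
receivables         = 1678
current_liabilities = 2600    # assumed total, for illustration only

current_ratio = current_assets / current_liabilities
quick_ratio   = (cash_and_securities + receivables) / current_liabilities  # leaves out inventory

print(f"Current ratio {current_ratio:.2f}, quick ratio {quick_ratio:.2f}")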
Can a firm be too liquid? Can it have too much cash for its own good? There
are certain “conventional truths” that circulate among businesspeople that do not
bear close scrutiny so well. One of them says that the company with abnormally
high cash balances may be “missing investment opportunities that could bring growth
and profits.” In my view, cash is the beginning asset in business and the final asset.
And during the game it is like the queen on a chessboard. It travels in all directions,
any number of squares. That is, it is the most powerful and flexible of assets, and
as long as you have plenty of it you are in a superior position for taking advantage
of opportunities that float your way.
Financial Leverage
Financial leverage is the mix of debt and equity in a business. The perfect mix is
one that exactly balances the entrepreneur’s love of leverage and the creditor’s fear
of it.
Leverage is the relationship between the amount of money creditors put in a
business and the amount the owners contribute. Where there is plenty of debt and
not much equity we speak of high leverage. Where there is little debt and lots of
equity we talk of low leverage. Since leverage refers to the relationship of a firm’s
debt and equity, it stands to reason that a ratio of debt to equity will measure it. And
debt to equity is in fact the most popular ratio for gauging leverage.
Other well-known leverage ratios include equity/debt, assets/equity, and
debt/assets. All of these ratios have a direct mathematical link and tell exactly the
same story. Only the scale is different. The problem is not in measuring leverage so
much as it is in knowing when a company reaches a reasonable debt limit. Unfor-
tunately, we cannot tell how much leverage is enough except by noting when there
is too much. When a company goes bankrupt, we can say with a measure of
confidence that the company should have had a little less leverage. At that point,
however, the question itself is usually academic.
Coverage Ratios
Coverage ratios are intended to measure a company’s ability to pay the interest on
its debt from its earnings. Some financial people consider these a form of leverage
ratios, but in reality they are nothing more than earnings ratios, when they are useful
at all. The most popular coverage ratio is the Times Interest Earned ratio. The formula
is

Times Interest Earned = Earnings (before interest and taxes) / Interest expense
It is a formula that applies to the largest oil company, the smallest lemonade
stand, and everything in between.
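A sketch of the two measures side by side, using assumed figures, may help fix the idea; the debt and earnings numbers below are not taken from any statement in this chapter.

# Leverage and coverage from assumed figures (in $000s).

total_debt, equity = 4200, 6100
ebit, interest_expense = 950, 180      # earnings before interest and taxes

debt_to_equity        = total_debt / equity
times_interest_earned = ebit / interest_expense

print(f"Debt/equity {debt_to_equity:.2f}, times interest earned {times_interest_earned:.1f}x")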
The complex nature of earnings becomes apparent when you try to analyze them.
Why are some companies profitable while others are not? And why do some firms,
profitable for decades, suddenly turn stagnant? The key elements of profitability are several.
Of these, demand for the company’s products or services is not only the greatest
influence on earnings, but whatever is in second place (probably luck) is way behind
it. Demand is the condition of being sought after, and it is made manifest in business
by the willingness, coupled with the ability, of customers to buy what you are selling.
This is the reason this section on accounting and finance is included in a discussion
of six sigma/DFSS. Unless we internalize the concept of demand in relation to the
functional requirements that the customer is ever seeking, we are not going to be
profitable.
Demand is a fickle friend; it comes often without warning and disappears the
same way. It is not something we have a great deal of control over. Rather, it is a
condition that arises within our customers and is difficult to predict — even by the
customers themselves — unless we spend some time and investigate their needs,
wants, and expectations. Businesses can stimulate demand a little with advertising
and other marketing efforts, but by and large it is created by the customer in a way
that we do not completely understand.
All of this leads us to the basic business risk — the reason companies are
deserving of making a profit. When you start a business, you have to create a product,
gather the people and materials needed to make it, set up a distribution system, and
advertise the product, all before you have the first evidence that people will buy the
product from you.
Earnings Ratios
There are a number of earnings or profitability ratios in current use. Just about all
of them use numbers from both the balance sheet and the income statement. Some
are better than others, and we will touch on the most popular ones.
Le ROI
The king of the earnings ratios is often referred to as ROI — Return on Investment.
That is the ratio of profit to equity. But in recent years, the interest in these mea-
surements has multiplied so that there is now a whole family of ROI ratios, and ROI
has become a generic term for several different kinds of measures.
Most earnings ratios are called Return on Something, and the method of calcu-
lation is fairly standard. “Return on” indicates that some profit figure is in the
numerator, and the “something” is the denominator of the fraction. The result usually
falls into a range between 0 and .5 and is normally expressed as a percentage figure.
Many of the return ratios come in two colors, profit before tax and profit after
tax (PAT). Both types are commonplace, but the former is roughly twice the size of the
latter, so you have to pay attention to what you are looking at. I will always be
referring to PAT unless I say otherwise. Here is a brief description of the three most
popular Return ratios, all three of which are calculated by most companies.
ROE: Return on Equity

ROE = Profit / Equity
ROE is the last word in profitability ratios. When the smoke and mirrors of this
“special factor” and that “extra adjustment” are put aside, this is the measurement
that tells you whether you really have a business or not.
ROA: Return on Assets

ROA = Profit / Assets
You will remember that Assets = Debt + Equity, so ROA is like ROE except
that the denominator is bigger and the percentage return is therefore smaller than
ROE. This ratio is popular among larger companies for measuring the performance
of subsidiaries and divisions. ROE would be a better measure, but when companies
cannot determine a true equity figure for a division, this works pretty well.
ROS: Return on Sales

ROS = Profit / Sales
ROS tells how many cents out of each sales dollar go into the owners’ pockets.
Nearly every company calculates this ratio, but it is not really very useful because
there is no standard to gauge it by, as there is with ROE. For example, Company A
may cater to the carriage trade and have limited sales but a high ROS. Company B
may be a mass merchandiser with a low ROS, yet B could have a higher ROE than
A because profit is the product of ROS and the volume of sales.
The average ROS for all companies in the US is between 5 and 6%, but the
number varies widely from company to company, even in the same industry.
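Here are the three ratios together, again with assumed figures; note that the ROS in this sketch lands near the 5 to 6% average mentioned above.

# The three most common return ratios, using profit after tax and assumed figures (in $000s).

profit_after_tax, equity, assets, sales = 480, 6100, 10300, 9600

roe = profit_after_tax / equity
roa = profit_after_tax / assets
ros = profit_after_tax / sales

print(f"ROE {roe:.1%}  ROA {roa:.1%}  ROS {ros:.1%}")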
Other Return Ratios
There are dozens of other return ratios in active use, but their definitions are not
well settled. Here are a few of the more common 3+ letter jobs, but even with these,
definitions vary among users.
RONCE: Return on Net Capital Employed — The denominator, net capital
employed, usually refers to total debt plus equity minus non-interest-bearing debt
such as accounts payable. But this is not always what it means, so if it is important
for you to know the precise meaning, ask the user to define the ratio.
1. Total debts against expected future profits, which are the primary source
of interest and principal payments
2. Total debts against total assets, the liquidation of which is a secondary,
albeit dire, source of repayment
Moody’s et al.
Moody’s and Standard and Poor’s are the best known of the bond rating companies.
Neither of these firms reveals exactly how they arrive at their ratings, but the
following criteria figure prominently in their classifications.
1. Financial leverage; the lower the leverage, the better the rating.
2. Profitability, or rather, the avoidance of losses.
Both Moody’s and Standard and Poor’s use letter ratings beginning with triple
A. Here are samples of the definitions they give their classifications.
Moody’s
Aaa Bonds carry the smallest degree of investment risk and are generally referred
to as “gilt edge.” Interest payments are protected by a large or an exceptionally
stable margin and principal is secure.
Caa Bonds are of poor standing. Such issues may be in default or there may be
present elements of danger with respect to principal or interest.
Standard and Poor’s
AAA is the highest rating and indicates an extremely strong capacity to pay principal
and interest. Bonds rated BB, B, CCC, and CC are regarded, on balance, as pre-
dominantly speculative with respect to the issuer’s capacity to pay interest and repay
principal in accordance with the terms of the obligation.
As one can see, neither company gets very specific about the meaning of the
ratings, and no attempt is made to predict the future of the subject firm. Rather, the
ratings convey a “feeling” about it. That feeling represents the risk side of the
investment equation:
Risk = Return
The return, on the other hand, is represented by a specific number — the bond’s
yield. Now, if the bond’s rating could also be expressed as a specific number — the
percentage probability of loss — investment decisions would be greatly simplified.
The rating agencies maintain a rigid independence from the companies they
analyze. Rigidity is necessary because of the millions of dollars of higher or lower
interest costs that often ride on the change of a rating (that is, in subsequent bond
issues, not the ones outstanding). Now and then you will see a company emit an
outraged howl at being downgraded.
In addition to bonds, Standard and Poor’s publishes ratings and financial data on
about 5000 common and preferred stocks. Most of the stock issues
are rated on an eight-level scale running from A+ (highest) to D (in reorganization).
The rating formula is based on a computerized scoring system that traces the trends
of earnings and dividends over the previous ten years. The basic scores are then
adjusted for growth, stability, and cycles; final scores are measured against a matrix
of a large sample of stocks. The Standard and Poor’s (S&P) rating serves well enough
as a measure of a company’s past performance; but as it ignores the condition of
the balance sheet and future earnings estimates, it is only a starter in an investment
analysis. Considering the price of the analyses, however, about ten cents a gross,
there hardly seems to be grounds for complaint.
The Value Line Investment Survey tracks over 1700 companies on a regular basis.
Using unpublished equations, it rates each company for “safety” and investment
“timeliness.” Both factors are ranked on a scale of 1 (highest) to 5 (lowest), the
rankings being relative to all 1700 stocks, not to some absolute standard. The Value
Line safety ranking is based on such factors as leverage, fixed charge coverage (the
number of times over that profits could pay the annual interest expense), liquidity,
and the riskiness of that type of business. The timeliness factor is a comparison of
a stock’s price trend against its expected earnings. A company may have a terrific
near-term profit outlook, but if the market price of its stock is hovering somewhere
in the stratosphere, it may not be a “timely” buy.
Among the published formulas for investing in stocks, one of the most famous and
enduring is based on the “intrinsic value” theory of the late Benjamin Graham.
Professor Graham, a pioneer in “fundamental” analysis, did most of his research in
pre-computer days when computations were made on those clunky mechanical
calculators and laboriously recorded by hand. Graham’s notion was that stocks at
any given time are likely to be undervalued or overvalued; that is, many investors
are buying and selling shares for reasons other than their fundamental value. The
smart investor will appraise that value (relative to the price) and snap up the under-
valued bargains. Among the criteria Graham proposed were the following:
4. Another price criterion: the earnings to price ratio (the reciprocal of the
P/E ratio) should be at least double the average Triple-A bond yield for
industrials. So if the bonds were averaging 10%, the E/P ratio should be
at least 20%, which means a P/E ratio of 5 or less.
Special warning: Before you rush out and sell the farm to try one of these stock
market systems, be aware that no system has ever had a consistent, long-term run
of higher than average profits. Even Ben Graham’s common-sense method suffers
from the perverse nature of the market: No matter how adept you become at finding
“undervalued” stocks, the only way you can make money on them is if the rest of
the investors come to the same realization soon after you have bought some shares.
Sometimes they never do, and you could be left sitting on top of an undiscovered
gold mine until it is, well, too late.
Grade Meaning
1 High
2 Good
3 Fair
4 Limited
Each credit appraisal is done in conjunction with the financial strength rating,
so that the “1” in a rating of EE1 does not reflect the same standards as the “1” in
4A1. While there are smaller credit agencies that also issue ratings, D&B’s system
stands virtually alone. So extensively is it used by vendors granting trade credit that
for thousands of firms a good D&B rating is all that is necessary to establish an
open account.
TABLE 15.3
The Z Score
Ratio Factor
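As an illustration of a Z-type score, here is the widely cited Altman (1968) formulation for publicly traded manufacturers. Its weights are the standard published ones and are not taken from Table 15.3; the input figures are assumed.

# The classic Altman (1968) Z-score for publicly traded manufacturers.
# Weights are the standard published ones; input figures are assumed.

def z_score(working_capital, retained_earnings, ebit,
            market_value_equity, sales, total_assets, total_liabilities):
    return (1.2 * working_capital / total_assets
            + 1.4 * retained_earnings / total_assets
            + 3.3 * ebit / total_assets
            + 0.6 * market_value_equity / total_liabilities
            + 1.0 * sales / total_assets)

z = z_score(working_capital=2600, retained_earnings=5354, ebit=950,
            market_value_equity=8000, sales=9600,
            total_assets=10300, total_liabilities=4200)
print(f"Z = {z:.2f}  (roughly: above 3 looks healthy, below 1.8 looks distressed)")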
Other Systems
The tryout period is one of experimentation, finding the product, the price, the
method of distribution, the niche that will create customers. If and when the growth
stage develops, a heavy investment in promotion and production is needed. During
this period, which may last a decade or two, it is usual for more cash to be spent
than received, even though the operation is highly profitable. With maturity the cash
flow turns positive as sales level off. The last stage is the least predictable, some
companies going out with a bang, some with a whimper, others merging themselves
quietly into the operations of a more viable firm.
CASH FLOW
Cash flow has been defined as:
Profit + Depreciation
Cash flow is intended to represent “discretionary funds” that are over and above
what is needed to continue running the business, and may, therefore, be used to
expand the company, pay off loans, pay extra dividends, and so on.
When it was first conceived, the idea of adding profit and depreciation to get
cash flow found overnight acceptance among business executives. If profit was
vanilla ice cream, cash flow was a chocolate sundae. But it also produced an
unwanted side effect — “the non-cash illusion.”
Cash flow is a popular term with business managers. It is a phrase that is vague
enough to make you sound like you know what you are talking about even when
you do not. It is also useful in cases where you do know what you are talking about
but do not want to talk about it. As when a supplier calls you about an overdue bill.
Which would you rather say?
Included in the expenses are depreciation and maybe amortization, but these are
non-cash expenses; that is, there is no money paid out for this expense because it
was all paid out at the time the asset was bought.
The concept of cash flow is lame in one respect. It fails to recognize the need
to replenish fixed assets. Plants and equipment must be replenished, just as inventory
is. To say that funds from depreciation do not have to be spent on new fixed assets
is as deceptive as saying that cash received from a sale does not have to be spent
to buy new merchandise. (If you need any further convincing, look at some published
annual reports — the statistical section where you often find figures for depreciation
and new capital investment going back five or ten years. Count the number of years
that the value of new equipment exceeded the depreciation charges; chances are it
will be at least nine times out of ten.) In other words, companies are not only using
all of the depreciation money to buy new fixed assets, they need a good deal more
besides. A better formula for calculating cash flow would be:

Cash flow = Profit + Depreciation - Expenditures to replace fixed assets
Even if a company is not growing, chances are that inflation will push replace-
ment costs higher than depreciation rates.
Cash flow is a useful figure for evaluating investments, as we will see later, but for the management of cash and cash planning
it is not. Those activities are best managed with detailed budgets and forecasts.
Moreover, the mystique of cash flow has been known to replace common sense, as
in the airline industry where enormous depreciation charges often mask treacherous
losses; likewise in some tax shelter schemes where non-cash charges are used to
reduce taxes and thus appear to actually generate money.
In the last analysis all firms, all tax schemes, must be profitable to be successful.
Profits are the true test of any investment, and to the extent that cash flow confuses
this ultimate reckoning it does us a disservice.
It is apparent that a company can lose money and still have a positive cash flow.
What is not so clear is that a firm can have a positive cash flow and still go broke —
a common hazard for rapidly growing companies.
The term “working capital,” like the term “cash flow,” is frequently heard in the
daily chatter about business finance. It, too, suffers from liberties taken with its
definition and usage. Most often, and especially when financial people are talking,
working capital means the specific dollar amount derived from the formula

Working capital = Current assets - Current liabilities
1. Working capital is a concept that has no existence in the real world. You
cannot hold working capital in your hand or put it in your pocket. Nor
can you actually offset current liabilities with current assets. Nearly all
current liabilities can only be satisfied with cash.
2. While businesspeople are fond of calculating working capital, no one has
yet come up with a rule stating how much of it a company should have.
It seems reasonable enough that the more sales a business has, the more
working capital there should be also. But we cannot seem to pin it down
to an actual standard. Working capital, therefore, is a measurement without
much meaning.
In business there is only one excuse for an expense: it will help to produce revenue.
Some expenses, however, have nothing to do with producing revenue — the entire
accounting department, for example — but they are necessary nevertheless. Others,
such as income tax and vacation pay, have only a roundabout effect on your sales but
are also unavoidable. Some pay our debts to society, such as the expense of unemploy-
ment insurance, or make us good neighbors, such as a little landscaping, while the sole
purpose of some expenses is to reduce overall expense, that is, increase productivity.
Unlike revenues, which are the result of a customer taking action, expenses
result when you take action. They are largely controllable and therefore a direct
reflection of your management ability. It may be hard to measure the value of what
you do, but the cost of your doing it is right there in the printout for all to see.
Actual — Actual costs are distinguished from standard costs; the latter are
estimates used for convenience, and an adjustment must be made to the
actual costs at least yearly.
Alternative — The costs of optional solutions; used in “what-if” analyses.
Controllable — Costs for which some manager can be held responsible.
Cost of Sales — Also called “cost of goods sold”; the cost of making or buying
the products a business sells. In a manufacturing firm it comprises direct
labor, materials, and manufacturing overhead.
Differential — The difference in the costs of two or more optional activities.
Direct — Costs that can be laid solely to a particular activity. In manufactur-
ing, the wages of the workers who make the products and the cost of the
materials used are direct costs; they are often referred to as “direct labor”
and “direct materials.”
Discretionary — More or less unnecessary but desirable outlays, such as the
office Christmas party or management seminars.
Estimated — Predetermined by an informed guess.
Extraordinary — Expenses due to abnormal events, such as an earthquake.
Costs in this category should be not only unexpected, but rare.
Fixed — Costs that remain the same despite changes in sales or some other
output. Examples are lease payments on property and depreciation on
equipment. Compare to variable costs. As used here the fixedness is a matter
of degree; almost every cost is affected somewhat by the volume of sales.
Historical — The original cost of an asset.
Imputed — The imagined or estimated cost of a sacrifice; not a cash outlay
but the giving up of something you could have had; a cost often recognized
in the decision process but not recorded on the books. When a company
has accounts receivable, for example, there is an imputed cost of the interest
it could be earning on the funds tied up in receivables.
Incremental — A cost that will be added or eliminated if some change is
made. Similar to “differential cost.”
Indirect — A general or overhead cost that is allocated to a product or
department on the theory that the receiver shares in the benefit of the thing,
and, besides, somebody has to pay for it.
Joint — Also called "common cost"; a cost shared by two or more products
or departments, for example the expense of a company lunchroom.
Noncontrollable — Costs that are prerequisites to doing business, such as a
city license or smog control equipment. These costs are often allocated to
a department, but there is no point in holding the manager accountable for
them.
Opportunity — A theoretical cost of not using an asset in one way because
you are using it in another. For example, the opportunity cost of a company-
owned headquarters building is the money the company could get by renting
it to others.
[Chart: a DuPont-style breakdown of return on assets. Return on sales (net income ÷ sales) is multiplied by asset turnover (sales ÷ total assets). Net income is sales less total costs (cost of goods sold, operating expense, depreciation, interest, and taxes, less other income), and total assets comprise fixed assets plus current assets (inventories, cash, accounts receivable, and marketable securities).]
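As a rough sketch of the arithmetic the chart represents (the function name and the dollar figures below are invented for illustration and do not come from the text), return on assets is simply return on sales multiplied by asset turnover:

    # DuPont-style decomposition: return on assets = return on sales x asset turnover
    def return_on_assets(net_income, sales, total_assets):
        return_on_sales = net_income / sales      # profit earned per dollar of sales
        asset_turnover = sales / total_assets     # sales generated per dollar of assets
        return return_on_sales * asset_turnover

    # hypothetical figures, for illustration only
    print(return_on_assets(net_income=50_000, sales=1_000_000, total_assets=500_000))  # 0.1, i.e., 10%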
BREAKEVEN ANALYSIS
The breakeven point for a business is that volume of sales at which the revenues
equal the expenses. Above that point lie glory and profit; below lie infamy and loss.
At least that is the theory. In real life, it is very difficult to calculate a breakeven
point because the expenses of most businesses do not fit comfortably into just a
fixed or variable category. Breakeven analysis can be done visually using a graph
like the one in Figure 15.3, or mathematically.
[Figure 15.3: a break-even chart. Revenue and expense, in thousands of dollars, are plotted against units sold; fixed costs appear as a horizontal line, the revenue and expense lines cross at the break-even point, profit lies above the crossing, and loss lies below it.]

Break-even sales = Fixed costs ÷ Contribution margin

Break-even sales = $10,000 ÷ .50 = $20,000
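A minimal sketch of the same arithmetic, using the fixed costs and contribution margin already given above (the function name is mine):

    def break_even_sales(fixed_costs, contribution_margin):
        # contribution_margin: fraction of each sales dollar left after variable costs
        return fixed_costs / contribution_margin

    print(break_even_sales(10_000, 0.50))  # 20000.0 -> $20,000 of sales to break even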
Actual revenue was 1,100 × $21 = $23,100, and the revenue variance was $3,100. That variance
can be broken down into a price component and a volume component.
ECONOMIC ORDER QUANTITY

EOQ = (2QP ÷ C)^0.5

where Q = the quantity needed for the period; P = the cost of placing one order; and C =
the cost of carrying one unit for one period.
EXAMPLE
Standard Office Furniture sells 1800 “B” desks more or less evenly over 12 months.
The cost of placing and receiving an order from the manufacturer is $45. Standard’s
annual carrying costs are 20% of the inventory value. The "B" wholesales for $75,
so the annual carrying cost per desk is 0.20 × $75 = $15. The economic order quantity
can then be calculated from the model: EOQ = (2 × 1,800 × $45 ÷ $15)^0.5 ≈ 104 desks
per order. We can also calculate Standard's optimal inventory cycle for these desks:
1,800 ÷ 104 ≈ 17 orders a year, or roughly one order every three weeks.
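The same calculation in a short sketch (the function name is mine; the rounded results follow from the figures stated in the example):

    import math

    def eoq(quantity_per_period, cost_per_order, carrying_cost_per_unit):
        # economic order quantity: (2QP/C)^0.5
        return math.sqrt(2 * quantity_per_period * cost_per_order / carrying_cost_per_unit)

    carrying_cost = 0.20 * 75              # 20% of the $75 wholesale value = $15 per desk per year
    desks = eoq(1800, 45, carrying_cost)   # about 104 desks per order
    orders_per_year = 1800 / desks         # about 17.3 orders per year
    print(round(desks), round(365 / orders_per_year))  # roughly 104 desks, one order about every 21 days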
This is an example in which a company must decide between two different manufacturing
machines it wants to purchase. The costs and benefits of each are set out below.
MACHINE A ($50,000 investment)

End of Year →                      1        2        3        4        5
Revenues                      30,000   30,000   30,000   30,000   30,000
Direct cost (mtl, labor, etc.) 5,000    5,000    5,000    5,000    5,000
Operating exp (selling, G&A)   5,000    5,000    5,000    5,000    5,000
Depreciation (straight line)  10,000   10,000   10,000   10,000   10,000
Profit before tax             10,000   10,000   10,000   10,000   10,000
Income tax (50%)               5,000    5,000    5,000    5,000    5,000
Net income                     5,000    5,000    5,000    5,000    5,000
Cash flow                     15,000   15,000   15,000   15,000   15,000
Investment                    40,000   30,000   20,000   10,000        0

MACHINE B ($50,000 investment)

End of Year →                      1        2        3        4
Revenues                      45,000   40,000   32,000   25,000
Direct cost (mtl, labor, etc.) 7,500    7,500    5,000    2,500
Operating exp (selling, G&A)   5,000    5,000    5,000    5,000
Depreciation (straight line)  12,500   12,500   12,500   12,500
Profit before tax             20,000   15,000   10,000    5,000
Income tax (50%)              10,000    7,500    5,000    2,500
Net income                    10,000    7,500    5,000    2,500
Cash flow                     22,500   20,000   17,500   15,000
Investment                    37,500   25,000   12,500        0
Payback Method: Payback answers the question, how long will it take us to recover
our original investment?
Year →                        1        2        3        4        5
MACHINE A
Balance to recover       50,000   35,000   20,000    5,000        0
Cash flow                15,000   15,000   15,000   15,000   15,000
Cumulative years           1.00     2.00     3.00     3.33     3.33
MACHINE B
Balance to recover       50,000   27,500    7,500        0
Cash flow                22,500   20,000   17,500   15,000
Cumulative years           1.00     2.00     2.43     2.43
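A small sketch of the payback arithmetic used in the table (the function is my own construction; it returns the fractional year in which the cumulative cash flow covers the outlay):

    def payback_years(investment, cash_flows):
        # years, with a fractional final year, needed for cumulative cash flow to recover the investment
        balance, years = investment, 0.0
        for cf in cash_flows:
            if cf >= balance:
                return years + balance / cf
            balance -= cf
            years += 1
        return None  # not recovered within the horizon

    print(round(payback_years(50_000, [15_000] * 5), 2))                      # 3.33 years, Machine A
    print(round(payback_years(50_000, [22_500, 20_000, 17_500, 15_000]), 2))  # 2.43 years, Machine B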
Average Rate of Return: Average rate of return is our old friend ROE, Profit/Equity
(or in this case, Profit/Investment), except we call for the average return over the
period covered.
End of Year →                 1        2        3        4        5
MACHINE A
Profit                    5,000    5,000    5,000    5,000    5,000
Average profit: $5,000
MACHINE B
Profit                   10,000    7,500    5,000    2,500
Average profit: $6,250

Beginning investment = $50,000; Ending investment = $0
Average investment = $25,000
Average rate of return (Machine B) = $6,250 / $25,000 = 25%
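The same figures in a brief sketch (taking the average investment as the simple mean of the beginning and ending amounts, as the $25,000 above does):

    def average_rate_of_return(profits, beginning_investment, ending_investment=0):
        average_profit = sum(profits) / len(profits)
        average_investment = (beginning_investment + ending_investment) / 2
        return average_profit / average_investment

    print(average_rate_of_return([10_000, 7_500, 5_000, 2_500], 50_000))  # 0.25, i.e., 25% for Machine B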
NPV equals the cash receipts from an investment minus the cash outlays, all dis-
counted at an acceptable rate, sometimes called the hurdle rate. The formula is
NPV = Σ (t = 0 to n) CFt / (1 + r)^t
where n = the number of periods; t = the time period; r = the per period cost of
capital; and CFt = the cash flow in time period t.
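A minimal sketch of the formula; the 10% hurdle rate below is a hypothetical value, not one given in the text, and the cash flows are Machine A's from the table above:

    def npv(rate, cash_flows):
        # cash_flows[0] is the time-zero outlay (a negative number); later entries are the inflows
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    print(round(npv(0.10, [-50_000] + [15_000] * 5)))  # about 6862 for Machine A at a 10% hurdle rate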
IRR is at present the “truest” rate of return we know how to calculate. Technically,
it is the “hurdle” or discount rate that produces an NPV equal to zero. The formula is
NPV = 0 = Σ (t = 0 to n) CFt / (1 + r)^t
A special caution is needed here. The IRR calculation can turn awkward when
there is more than one sign change in the cash flow stream. You may get more than
one answer for the same series of payments.
One way around the problem is to do a modified IRR in which you calculate
the present value of all the outflows (negatives) using, say, the company’s average
interest rate on loans; then compute the IRR using the single outflow figure (CFCM).
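A rough sketch of finding the IRR by bisection; it assumes exactly one sign change in the cash flow stream, which is exactly the situation the caution above is about, and the bracketing rates are arbitrary:

    def irr(cash_flows, low=-0.99, high=10.0, tol=1e-7):
        # bisection on the NPV; valid only when the cash flows change sign once
        def f(r):
            return sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows))
        for _ in range(200):
            mid = (low + high) / 2
            if f(low) * f(mid) <= 0:
                high = mid
            else:
                low = mid
            if high - low < tol:
                break
        return (low + high) / 2

    print(round(irr([-50_000] + [15_000] * 5), 4))  # about 0.1524 (15.2%) for Machine A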
The Financial Management Rate of Return (FMRR) developed by Findlay and
Messner in 1973 goes one step further. It starts by calculating the present value of
all cash outlays, as does the modified IRR, and then calculates a future value for
the positive cash flows (inflows). The rate for this future value calculation is the
expected rate at which the inflows will be employed.
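A sketch of how those two figures are commonly combined into a single rate; the closing step (the n-th root of the future value over the present value) is my reading of the method rather than something spelled out in the text, and the 8% safe rate and 10% reinvestment rate are invented:

    def fmrr(cash_flows, safe_rate, reinvest_rate):
        # present value of the outlays (negatives), discounted at a "safe" rate
        pv_out = sum(-cf / (1 + safe_rate) ** t for t, cf in enumerate(cash_flows) if cf < 0)
        # future value of the inflows (positives), compounded at the expected reinvestment rate
        n = len(cash_flows) - 1
        fv_in = sum(cf * (1 + reinvest_rate) ** (n - t) for t, cf in enumerate(cash_flows) if cf > 0)
        return (fv_in / pv_out) ** (1 / n) - 1

    print(round(fmrr([-50_000] + [15_000] * 5, 0.08, 0.10), 4))  # about 0.129 (12.9%) for Machine A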
PROFIT PLANNING
The most critical element of all in financial planning is the revenue, or sales, forecast.
It is the basis of the cost and profit forecasts, and the key element in planning a
firm's people, money, and material needs. At the same time, the revenue or sales
forecast is the most difficult of all forecasts to make. Anyone who can project sales
a year ahead and come in less than 10% off the mark is doing pretty well. The forecast
must allow for factors such as:
Inflation — up 2 to 12%
Demand for the product — up or down 0–10%
State of the economy — up or down 0–10%
New products — up 0–10%
The sales goal form also aids in the forecasting of gross profit, operating profit,
net income, and earnings per share. In addition, it permits what-if analyses, showing
the effect on net income of a change in the sales or cost inputs.
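The sales goal form itself is not reproduced here, but a what-if calculation of the kind described can be sketched as follows; every name and figure below is hypothetical:

    def net_income(sales, cost_of_sales_pct, operating_expense, tax_rate):
        gross_profit = sales * (1 - cost_of_sales_pct)
        operating_profit = gross_profit - operating_expense
        return operating_profit * (1 - tax_rate)

    base = net_income(1_000_000, 0.60, 250_000, 0.35)
    what_if = net_income(1_100_000, 0.60, 250_000, 0.35)   # sales up 10%, costs held
    print(round(base), round(what_if))  # 97500 vs. 123500 in this made-up case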
Statistical Analysis
The computer is a helpful tool in the process because such budgets are negotiated
rather than merely extrapolated from revenues, planned rather than merely projected.
There are often several versions prepared before the final plan is hammered out. A
good deal of posturing and gamesmanship goes into the process, and a not untypical
scenario finds the department manager padding his or her budget secure in the
knowledge that top management will cut it, and top management cutting it because
they know it is always padded a little.
How to Budget
The easiest way to budget is to increase last year’s budget by some percentage that
will allow a comfortable salary raise for everyone, plus a few more bodies to ease
the workload, and some extra gadgets and trips to make it more fun. A more
businesslike method is to adjust last year’s budget to the expected level of next year’s
principal activities of the department. Every department has its tasks, and if the
output or the activities can be quantified, this will furnish a standard for setting the
new budget. The best lever for controlling department expense is the number of
people employed. Employees cost not only salaries, fringe benefits, and taxes, but
also desks, computers, food service, and parking spaces.
Zero-Growth Budgeting
This is not to be confused with zero-based budgeting, which deservedly died a quiet
death a few years back. Zero-growth budgeting is a term for a plan that seeks to
hold expenses at the current level while revenues grow. If it works it will obviously
mean more profit. Of the two ways to get rich in life, the more excusable is to
increase your productivity — work harder or faster or smarter so that the value of
your output goes up.
To the department manager, zero-growth budgeting says, “Look, we expect sales
to rise 10% this year but we would like to handle the increased business with the
same budget as last year. What is more, we think the better people should get nice
raises, but out of the same pot as last year. That means you have to handle the work
more efficiently, and perhaps not replace people so fast when they leave.”
The idea is to challenge people to work smarter but not threaten them with the
annihilation that was implicit in the zero-based budgeting concept. Most productivity
gains are made inch by inch, and if too much is asked of people, they tend to give up.
SELECTED BIBLIOGRAPHY
Ainsworth, P. et al., Introduction to Accounting: An Integrating Approach, Irwin, Homewood,
IL, 2001.
Albright, S.C. et al., Managerial Statistics. Duxbury, Pacific Grove, CA, 2000.
Baker, R.E., Lembke, V.C., and King, T.E., Advanced Financial Accounting, 5th ed., Irwin,
Homewood, IL, 2002.
Block, E., Chen, K. and Lin, T., Cost Management, Irwin, Homewood, IL, 2001.
Brealey, R.A., Fundamentals of Corporate Finance, 3rd. ed., Irwin, Homewood, IL, 2001.
Brigham, E.F. and Ehrhardt, M.E., Financial Management: Theory and Practice, 10th ed.,
The Dryden Press, New Rochelle, NY, 2002.
Brigham, E.F. and Houston, J.F., Fundamentals of Financial Management, Concise, 3rd. ed.,
The Dryden Press, New Rochelle, NY, 2002.
Dauten, C.A. and Welshans, M.T., Principles of Finance, 3rd. ed., South-Western Publishing
Co., New Rochelle, NY, 1970.
Edmonds, T.P. et al., Fundamental Financial Accounting Concepts, Irwin, Homewood, IL,
2000.
Moore, F.G., Manufacturing Management, 5th ed., Irwin, Homewood, IL, 1969.
Pyle, W.W. and White, J.A., Fundamental Accounting Principles, 6th ed., Irwin, Homewood,
IL, 1972.
Weston, J.F. and Brigham, E.F., Essentials of Managerial Finance, 3rd ed., The Dryden Press,
Hinsdale, IL, 1974.
Define
• Customer understanding
• Market research
• Kano model
• Organizational knowledge
• Target setting
Characterize
• Concept selection
• Pugh selection
• Value analysis
• System diagram
• Structure matrix
• Functional flow
• Interface
• QFD
• TRIZ
• Conjoint analysis
• Robustness
• Reliability checklist
• Signal process flow diagrams
• Axiomatic designs
• P-diagram
• Validation
• Verification
• Specifications
Optimize
• Parameter and tolerance design
• Simulation
• Taguchi
• Statistical tolerancing
• QLF
• Design and process failure mode and effects analysis (FMEA)
• Robustness
• Reliability checklist
• Process capability
• Gauge R & R
• Control plan
Verify
• Assessment (validation and verification score cards)
• Design verification plan and report
• Robustness reliability
• Process capability
• Gauge R & R
• Control plan
The concept of DFSS may be translated into the model shown in Figure 16.1. This
model not only identifies the DCOV components (define, characterize, optimize,
verify), but also identifies the key characteristics of each stage.
To understand and appreciate how and why this model works, one must understand
the purpose and the deliverables of each stage in the model. So let us summarize
what DFSS is all about.
In the Define (D) stage, it is imperative to make sure that the customer is
understood. The “spoken” and the “unspoken” requirements must be accounted for
and then the definition of the CTS drivers takes place. It is very tempting to jump
right away into the Ys without really knowing what the critical characteristics (or
functionalities) are for the customer. Unless they are understood, establishing
operating window(s) for these Ys will be fruitless.
So the question then is, “How do we get to this point?” And the answer in
general terms (the specific answer depends on the organization and its product or
service) is that the inputs must be developed from a variety of sources including but
not limited to the following — the order does not matter:
• Consumer understanding
• Kano model application
• Regulatory requirements
• Corporate requirements
• Quality/customer satisfaction history
• Functional, serviceability, expectations
• Understanding of integration targets process
• Brand profiler/DNA
• Benchmarking
• Quality Function Deployment (QFD)
• Product Design Specifications (PDS)
• Business strategy
• Competitive environment
• Market segmentation
• Technology assessment
Once these inputs have been identified, developed, and understood by the DFSS
team, these "functionalities" may be translated into the Ys, and the iteration
process begins. How is this done? By making sure all the
active individuals participate and have ownership of the project as well as technical
knowledge. Specifically, in this stage the owners of the DFSS project will be looking
to make correlated connections of what they expect and what they have found in
their research. Thus, the search for a "transformation function" begins, and the
journey to improvement formally gets under way. Some of the steps to identify the
Ys are:
Once the technical team has finished its review and come up with a consensus
for “action,” the following deliverables are expected:
• Kano diagram
• Targets and ranges for CTS Y’s
• Y relationship to customer satisfaction
• Benchmarked CTSs
• CTS scorecard
At this point, one of the most important steps must be completed before the
DFSS team officially moves into the next stage — characterize. This step is the
evaluation process. A thorough question-and-answer session takes place with focus
on what has transpired in this stage. It is important to ask questions such as: Are
we sure that our CTS Ys are really associated with customer satisfaction? Did we
review all attributes and functionalities? And so on. Typical tools for the basis of
the evaluation are:
• Consumer insight
• Market research
• Quality history
• Kano analysis
• QFD
• Regression modeling
• Conjoint analysis
When everyone is satisfied and consensus has been reached, then the team
officially moves into the characterize (C) stage. In this stage, all participants must
make sure that the system is understood. As a result of this understanding, the team
begins to formalize the concepts. The process for this development proceeds as
follows:
• Flow CTS Ys down to lower level y’s, e.g., Y = f(y1, y2,… yn).
• Relate y’s to CTQ parameters (x’s and n’s), y = f(x1,…, xk, n1,…, nj) (x
is the characteristic and n is the noise).
• Characterize robustness opportunities (optimize characteristics in the pres-
ence of noise).
The inputs for this stage are based on, but not limited to, the following:
• Kano diagram
• CTS Ys, with targets and ranges
• Customer satisfaction scorecard
• Functional boundaries and interfaces from system design specification(s)
and/or verification analysis
• Existing hardware FMEA data
Once these inputs have been identified, developed, and understood, then the
formal decomposition of Y to y to y1 as well as the relationship of X to x to x1 and
n’s to the Ys begins. How is this done? By making sure all the active individuals
participate and all have ownership of the project as well as technical knowledge.
Specifically, in this stage the owners of the DFSS project will be looking to make
correlated connections of what they expect and what they have found in their
research. Thus, the formal search for the “transformation function,” preferably the
“ideal” function, gets underway. Some of the steps to identify both the decomposition
of the Ys and their relationship to the x's are (order does not matter, since in most cases
these items will be worked on simultaneously):
• Function diagram(s)
• Mapping of Y → functions → critical functions → y’s
• P-diagram, including critical control factors (x's), technical metrics (y's), and
noise factors (n's)
• Transfer function
• Scorecard with target and range for y’s and x’s
• Plan for optimization and verification (R&R checklist)
At this point, one of the most important steps must be completed before the
DFSS team officially moves into the next stage — optimization. This step is the
evaluation process. A thorough question-and-answer session takes place with focus
on what has transpired in this stage. It is important to ask questions such as: Have
all the y's (the technical metrics) been accounted for? Are all the CTQ x's measurable
and correlated to the Ys of the customer? Are all functionalities accounted for? And
so on. Typical tools for the basis of the evaluation are:
• Function structures
• P-diagram
• Robustness/reliability checklist
• Modeling using design of experiments (DOE)
• TRIZ
When everyone is satisfied, the team officially moves into the optimization
(O) stage. In this stage, we make sure that the system is designed with robustness
in mind, which means the focus is on reducing the design's sensitivity to noise.
In essence, here we design for producibility. The development proceeds through a
series of steps whose inputs are drawn from the work of the earlier stages.
Once these inputs have been identified, developed, and understood, then the
formal optimization begins. Remember, there is a big difference between maximization
and optimization. We are interested in optimization because we want to balance our
inputs in such a way that, when we do the trade-off analysis, we still come out
ahead. That means we want to decrease variability and satisfy the customer without
adding extra cost. How is this done? By making sure all the active individuals
participate and all have ownership of the project as well as technical knowledge.
Specifically, in this stage, the owners of the DFSS project will be looking to make
adjustments in both variability and sensitivity using optimization and modeling
equations and calculations to optimize both product and process. The central formula
is
σy = [ (∂y/∂x1)² σ²x1 + (∂y/∂x2)² σ²x2 + … ]^(1/2)

Whereas the focus of the DMAIC model is to reduce the σ²x terms (variability), the
focus of the DCOV model is to reduce the ∂y/∂x terms (sensitivity). This is very important, and it
is why we use the partial derivatives of the x’s to define the Ys. Of course, if the
transformation function is a linear one, then the only thing we can do is to control
variability. Needless to say, in most cases we deal with polynomials, and that is why
DOE and especially parameter design are very important in any DFSS endeavor.
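A small numerical sketch of the formula above; the transfer function, nominal values, and standard deviations are invented for illustration. It estimates each sensitivity ∂y/∂x by a finite difference, so it shows how either shrinking a σx (the DMAIC lever) or flattening a ∂y/∂x (the DCOV lever) reduces σy:

    def propagated_sigma(transfer_fn, nominals, sigmas, h=1e-6):
        # first-order propagation: sigma_y^2 = sum of (dy/dx_i)^2 * sigma_x_i^2
        var_y = 0.0
        for i, (x0, s) in enumerate(zip(nominals, sigmas)):
            shifted = list(nominals)
            shifted[i] = x0 + h
            dy_dx = (transfer_fn(shifted) - transfer_fn(nominals)) / h   # numerical sensitivity
            var_y += (dy_dx ** 2) * (s ** 2)
        return var_y ** 0.5

    # hypothetical transfer function y = f(x1, x2), purely illustrative
    f = lambda x: 3.0 * x[0] + 0.5 * x[1] ** 2
    print(round(propagated_sigma(f, [10.0, 4.0], [0.2, 0.1]), 3))  # about 0.721 for this made-up case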
Some of the steps to identify this optimizing process are (order does not matter,
since in most cases these items will be worked on simultaneously):
• Transfer function
• Scorecard with estimate of σy
• Target nominal values identified for x’s
• Variability metric for CTS Y or related function, e.g., range, standard
deviation, S/N ratio improvement
• Tolerances specified for important characteristics
• Short-term capability, “z” score
• Long-term capability
• Updated verification plans: robustness and reliability checklist
• Updated control plan
At this point, one of the most important steps must be completed before the
DFSS team officially moves into the next stage — testing and verification. This step
is the evaluation process. A thorough question-and-answer session takes place with
focus on what has transpired in this stage. It is important to ask questions such as:
Have all the z scores for the CTQs been identified? How about their targets and
ranges? Is there a clear definition of the product variability over time metric? And
so on. Typical tools for the basis of the evaluation are:
After the team is satisfied with the progress thus far, it is ready to address the issues
and concerns of the last leg of the model — verification of results (V). In this stage,
the team focuses on assessing the performance, the reliability, and the manufacturability
of what has been designed. The process for developing the verification begins by
emphasizing two items:
The inputs to generate this information are based on but not limited to the
following:
Once these inputs have been identified, developed, and understood, the team
enters perhaps one of the most critical phases in the DFSS process. This
is where the experience and knowledge of the team members, working in synergy, will
indeed shine. This is where the team members will be expected to come up with
physical and analytical performance test(s) as well as key life testing to verify the
correlation of what has been designed and the functionality that the customer is
seeking. In other words, the team is actually testing the “ideal function” and the
model generating the characteristics that will delight the customer. Awesome respon-
sibility indeed, but doable. The approach to generating some of the tests is discussed below.
To say that we have such and such a test that will do this and that, and will conform
to such and such a condition or circumstance, is neither difficult nor, by itself, important.
What is important and essential is to be able to assess the performance of what you
have designed against the customer's functionalities. In other words: Are all your
x’s correlated (and if so, to what degree) to Xs which in turn correlate to y which
in turn correlate to the Y (the real functional definition of the customer)? Have the
phases D, C, and O of the model been appropriately assessed in every stage? How
reliable is the testing? And so on. Some of the approaches and methodologies used
are (order does not matter, since in most cases these items will be worked on
simultaneously):
• Reliability/robustness plan
• Design verification plan with key noises
• Correlation: tests to customer usage
• Reliability/robustness demonstration matrix
Appendix:
The Four Stages of Quality
Function Deployment
STAGE 1: ESTABLISH TARGETS
Note: First QFD meeting held: Vice president of engineering chairs meeting for
the purpose of critiquing the design concept and target setting.
Note: Second QFD management meeting held: Discuss process of targeting and
mass production planning.
Note: Fourth and final quality management meeting held three months after start
of mass production. Manufacturing leads; engineering participates.
The importance of, and focus on, the "voice of the customer" results in:
TANGIBLE BENEFITS
INTANGIBLE BENEFITS
SUMMARY VALUE
1. The project
a. Selection
• Broad appeal
• Simple but not trivial
• Opportunity to improve
• Management support
• Available expertise
• Available market information
b. Scope/targets
• Project limitations, operating constraints, product constraints
• Market segment
• Regulatory requirements
• Cost
• Mass
c. Objectives
• Reason for doing
• Expected results/outcome
d. Timing
• Spans full product cycle
• Months work
• Concentrated effort
• Hours meetings/members
• Significant time commitment
2. Team
a. Selection
• Cross-functional
• Members
• Membership
• Product planning
• Styling
• Marketing
• Product/manufacturing engineering
• Operations
• Key supplier
• Service
• Product assurance/testing
• Expertise (not position)
• Keep ranks about equal
• Open-minded members
b. Operation
• Facilitator/leader
• Recorder
• Regular meetings
• Meeting to organize
• Team consensus
• Agreement
• Not voting
• No one dominates
• No factions
c. Agree to support group decisions
d. Team training
• At least one person knowledgeable of QFD
• QFD overview
• Other training as needed
• Team building
• Creative thinking
• Problem solving
• Meeting skills
• Facilitator skills
• Interview/survey methods
• Employee involvement team skills
1. Timing
• Process spans a major portion of the product development process.
• Identify intermediate measures of progress.
• Major projects will require 50–60 hours of meetings.
Selected Bibliography
Adams, L., Measuring by Comparison, Quality, May 2001, pp. 32–34.
Allen, M.J. and Yen, M., Introduction to Measurement Theory, Wadsworth, Belmont, CA,
1979.
Anon., Statistics Roundtable: Statistical Gymnastics Revisited, Quality Progress, Feb. 1999,
pp. 84–94.
Anon., Statistical thinking and its contribution to total quality, The American Statistician, 44,
116–121, 1990.
Atkinson, H., Hamburg, J., and Ittner, C., Linking Quality to Profits: Quality Based Cost
Management, Quality Press, Milwaukee, 1994.
Aubrey, C., Six Sigma Creates Success in Service Sector, QFD and Design for Six Sigma,
Proceedings 45th EOQ Congress 2001, Istanbul, Sept. 20, 2001.
Balasubramanian, R., Concurrent Engineering — A Powerful Enabler of Supply Chain Man-
agement, Quality Progress, June 2001, pp. 47–54.
Barnes, E.B. and Mohanty, G.P., Tolerance allocation methodology for manufacturing, 1996
SAE Reliability, Maintainability, Supportability and Logistics Conferences and Work-
shop, Proceedings of the 8th Annual SAE RMS Workshop, 1996, pp. 49–54.
Bongiorno, J., Improving FMEAs, Quality Digest, Oct. 2000, pp. 37–40.
Boyce, W.E. and DiPrima, R.C., Elementary Differential Equations and Boundary Value
Problems, 7th ed, Wiley, New York, 2000.
Brauer, J.R., Finite Element Analysis, Marcel Dekker, New York, 1998.
Breyfogle, F.W., Implementing Six Sigma: Smarter Solutions Using Statistical Methods,
Wiley-Interscience, New York, 1999.
Breyfogle, F.W., Statistical Methods for Testing, Development and Manufacturing, Wiley-
Interscience, New York, 1992.
Britz, G. et al., Statistical Thinking, Special Publication, ASQC Statistics Division, Spring
1996, pp. 2–23.
Burke, R.J. and Maier, N.R.F., Attempts to predict success on an insight problem, Psycho-
logical Reports, 17, 303–310, 1965.
Campanella, J., Ed., Principles of Quality Costs: Principles, Implementation, and Use, 3rd
ed., Quality Press, Milwaukee, 1999.
Chen, I.J. et al., Quality Managers and the Successful Management of Quality: An Insight,
Quality Management Journal, 2000, pp. 40–54.
Cone, G., Six Sigma: Black Belt Training for Quality, Automotive Excellence, Fall 1998, pp.
10–14.
Daniels, S.E., Product Safety and Reliability: The Failures, SUV Rollovers Put Quality on
Trial, Quality Progress, Dec. 2000, pp. 30–48.
Daniels, S.E. and Hagen, M.R., Making the Pitch in the Executive Suite, Quality Progress,
Apr. 1999, pp. 30–48.
Davenport, T.H., Process Innovation: Reengineering Work Through Information Technology,
Harvard Business School Press, Boston, 1993.
De Pablos, L.A., Six Sigma Quality Metric vs. Taguchi Loss Function, QFD and Design for
Six Sigma, Proceedings 45th EOQ Congress 2001, Istanbul, Sept. 20, 2001.
Deleryd, M., Deltin, J., and Klefsjo, B., Critical factors for successful implementation of
process capability studies, Quality Management Journal, 6(1), 40–59, 1999.
Denton, K., The Tool Box for the Mind: Finding and Implementing Creative Solutions in the
Workplace, Quality Press, Milwaukee, 1999.
Dimitriades, Z.S., Empowerment in total quality: designing and implementing effective employee
decision-making strategies, Quality Management Journal, 8(2), 19–28, 2001.
Dodson, B., Weibull Analysis, Quality Press, Milwaukee, 1995.
Dovich, R.A., Reliability Statistics, Quality Press, Milwaukee, 1990.
Draman, R.H. and Chakravorty, S.S., An Evaluation of Quality Improvement Project Selection
Alternatives, Quality Management Journal, 2000, pp. 58–73.
Dusharme, D., Gage Use and Abuse: A Guide to Common Gage Misuse, Quality Digest, Feb.
1999, pp. 30–33.
Field, J.M., Beyond design: implementing effective production work teams, Quality Manage-
ment Journal, 8(2), 29–43, 2001.
Fleischer, M. and Liker, J.K., Concurrent Engineering Effectiveness, Hanser Gardner, Cin-
cinnati, 1997.
Finn, G., Building Quality into Design Engineering, Quality Digest, Feb. 2000, pp. 35–38.
Fraenkel, J., Wallen, N., and Sawin, E.I., Visual Statistics. A Conceptual Primer, Allyn &
Bacon, Needham Heights, MA, 1999.
Fuller, W.A., Measurement Error Models, Wiley, New York, 1987.
Gebrael, M.G., Markov Modeling as a Reliability Tool, 1996 SAE Reliability, Maintainability,
Supportability and Logistics Conferences and Workshop, Proceedings of the 8th
Annual SAE RMS Workshop, 1996, pp. 37–44.
Genest, D.H., Improving Measurement System Compatibility, Quality Digest, Apr. 2001, pp.
35–40.
Ghiselin, B., Ed., The Creative Process, University of California Press, Berkeley, 1952.
Goldratt, E., Critical Chain, North River Press, Great Barrington, MA, 1998.
Goldratt, E., Essays on the Theory of Constraints, North River Press, Great Barrington, MA,
1998.
Goldratt, E., Necessary But Not Sufficient, North River Press, Great Barrington, MA, 2000.
Goodden, R., How a Good Quality Management System Can Limit Lawsuits, Quality
Progress, June 2001, pp. 55–59.
Goodenow, W.H., How to Become a Master Black Belt Organization Without Six Sigma,
Quality in Manufacturing, Jan./Feb. 2001, pp. 16–18.
Gorsuch, R.L., Factor Analysis, W.B. Saunders, Philadelphia, 1974.
Griffith, G.K., Statistical Process Control Methods for Long and Short Run, 2nd ed., Quality
Press, Milwaukee, 1996.
Hammer, M., Beyond Reengineering — How the Process Centered Organization is Changing
Our Work and Our Lives, Harper Business, New York, 1996.
Hammer, M. and Champy, J., Reengineering the Corporation. A Manifesto for Business
Revolution, Harper Business, New York, 1993.
Harrington, H.J., Project Management: It’s a More Important Tool than Six Sigma, Quality
Digest, June 2000, p. 20.
Harrington, H.J., Business Process Improvement: The Breakthrough Strategy for Total Quality,
Productivity, and Competitiveness, McGraw-Hill, New York, 1991.
Heil, G., Parker, T., and Stephens, D.C., One Size Fits One. Building Relationships One
Customer and One Employee at a Time, Wiley, New York, 1999.
Heiser, D.R. and Schikora, P., Flowcharting with Excel, Quality Management Journal, 2001,
pp. 26–35.
Hoerl, R.W., Six Sigma and the Future of the Quality Profession, Quality Progress, June
1998, pp. 35–44.
Holmes, D.S. and Mergen, A.E., Building an Acceptance Chart., Quality Digest, June 2000,
pp. 35–36.
Holpp, L., Managing Teams, McGraw-Hill, New York, 1999.
Hoyer, R.W. and Hoyer, B.B.Y., What Is Quality? Quality Progress, July 2001, pp. 52–62.
Hunter, J.S., Metrics for Uncertainty: A Look at Probability, Evidence and a Seldom Used
Additive Metric, Quality Progress, Dec. 2000, pp. 72–73.
Imparato, N. and Harari, O., Jumping the Curve. Innovation And Strategic Choice in an Age
of Transition, Jossey-Bass, San Francisco, 1994.
Ireson, W.G., Coombs, C.F., and Moss, R.Y., Handbook of Reliability Engineering and
Management, 2nd ed., Quality Press, Milwaukee, 1996.
Isaacson, J. and Chambers, W., An Introduction to Optical Measurement, Quality Digest, Oct.
2000, pp. 28–32.
Janov, J., The Inventive Organization. Hope and Daring at Work, Jossey-Bass, San Francisco,
1994.
Kales, P., Reliability: For Technology, Engineering, and Management, Quality Press, Milwau-
kee, 1998.
Kalfut, M., Riding the Benchmark, Technology Century, Dec. 1997/Jan. 1998, pp. 30–31.
Kall, J., Manufacturing Execution Systems: Leveraging Data for Competitive Advantage (Part
I), Quality Digest, Aug. 1999, pp. 31–34.
Kall, J., Manufacturing Execution Systems: Leveraging Data for Competitive Advantage (Part
II), Quality Digest, Sept. 1999, pp. 31–33.
Kanyamibwa, F., Christy, D.P., and Fong, D.K.H., Variable selection in product design, Quality
Management Journal, 8(1), 62–79, 2001.
Kaplan, R.S. and Norton, D.P., The Balanced Scorecard, Harvard Business School Press,
Boston, 1996.
Kapur, K.C. and Lamberson, L.R., Reliability in Engineering Design, Wiley, New York, 1977.
Kay, M., Applying Six Sigma in a Public Service Organization, QFD and Design for Six
Sigma, Proceedings 45th EOQ Congress 2001, Istanbul, Sept. 20, 2001.
Kelada, J.N., Integrating Reengineering with Total Quality, Quality Press, Milwaukee, 1996.
Kelly, C. and Kachatorian, L., Robust Design for Six Sigma Manufacturability, 1996 SAE
Reliability, Maintainability, Supportability and Logistics Conferences and Workshop,
Proceedings of the 8th Annual SAE RMS Workshop, 1996, pp. 25–28.
Kelly, L. and Morath, P., How Do You Know the Change Worked? Quality Progress, July
2001, pp. 68–74.
Kish, L., Some statistical problems in research design, American Sociological Review, 24,
328–338, 1959.
Kish, F.J., Utilizing value engineering as a problem solving management tool, SAE National
Combined Farm Construction and Industrial Machinery, Powerplant, and Transpor-
tation Meetings, Society of Automotive Engineers, Milwaukee, Sept. 9–12, 1968,
paper 680567.
Knouse, S.B. and Strutton, H.D., Getting Employee Buy-In to Quality Management, Quality
Progress, Apr. 1999, pp. 61–64.
Krishnamoorthi, K.S., Reliability Methods for Engineers, Quality Press, Milwaukee, 1992.
Kume, H., Statistical Methods for Quality Improvement, The Association for Overseas Tech-
nical Scholarship, Tokyo, 1985.
Lathin, D. and Mitchell, R., Learning from Mistakes, Quality Progress, June 2001, pp. 39–46.
Lehmann, E.L., Testing Statistical Hypotheses, Wiley, New York, 1986.
Lepi, S.M., Practical Guide to Finite Elements, Marcel Dekker, New York, 1998.
Levinson, W.A., How to Design Attribute Sample Plans on a Computer, Quality Digest, July
1999, pp. 45–47.
Liberatore, R., Teaching the Role of SPC in Industrial Statistics, Quality Progress, July 2001,
pp. 89–94.
Livingston, S., Creating the Right Atmosphere: Setting the Stage for Innovative Thinking in
Ideation Sessions, Quirk’s Marketing Research Review, May 2001, pp. 32–39.
Maier, N.R.F., Problem Solving and Creativity in Individuals and Groups, Brooks/Cole
Publishing Co., Wadsworth Publishing Co., Belmont, CA, 1970.
Marash, S.A., A New Look at Six Sigma, Quality Digest, Mar. 1999, p. 18.
Mazur, G., QFD and Design for Six Sigma, Proceedings 45th EOQ Congress 2001, Istanbul,
Sept. 20, 2001.
McLean, H.W., HALT, HASS and HASA Explained: Accelerated Reliability Techniques, Qual-
ity Press, Milwaukee, 2000.
Meeker, W.Q., Doganaksoy, N., and Hahn, G.J., Using Degradation Data for Product Reliability
Analysis, Quality Progress, June 2001, pp. 60–65.
Mitchell, R.H., Process Capability Indices, ASQ Statistics Division Newsletter, Winter 1999,
pp. 16–20.
Mitchell, E., Web-Based APQP Keeps Everyone Connected, Quality, July 2001, pp. 40–44.
Modarres, M., Kaminskiy, M., and Krivtsov, V., Reliability Engineering and Risk Analysis: A
Practical Guide, Marcel Dekker, New York, 1999.
Myers, R.E. and Torrance, E.P., Invitations to Thinking and Doing, Ginn, Boston, 1964.
O’Connell, V., Advertising, Wall Street Journal, Nov. 27, 2000, p. B21.
O’Conor, P.D.T., Practical Reliability Engineering, 3rd ed., Quality Press, Milwaukee, 1995.
Orme, B., Assessing the Monetary Value of Attribute Levels with Conjoint Analysis: Warnings
and Suggestions, Quirk’s Marketing Research Review, May 2001, pp. 16, 44–47
Osborn, A.F., Applied Imagination, 3rd ed., Scribner, New York, 1963.
Paul, L.G., Outsourcing and Analyzing the Value Proposition, CFO, Aug. 2001, pp. 60–61.
Peterman, M., Lean Manufacturing Techniques Support the Quest for Quality, Quality in
Manufacturing, Jan./Feb. 2001, pp. 24–25.
Peterman, M., Simulation Nation: Process Simulation Is Key in a Lean Manufacturing Com-
pany Hungering for Big Results, Quality Digest, May 2000, pp. 39–42.
Porter, A. and Adams, L., Quality Begins with Good Data, Quality, May 2001, pp. 32–34.
Porter, M.E., Competitive Advantage: Creating and Sustaining Superior Performance, Free
Press, New York, 1985.
Pylipow, P.E., Can It Be This Easy? You Can Alter Drawing Practices to Achieve Six Sigma,
But Only if You Understand All the Implications, Quality Progress, July 2001, pp.
139–140.
Pyzdek, T., Considering Constraints, Quality Digest, June 2000, p. 22.
Pyzdek, T., The 1.5 Sigma Shift, Quality Digest, May 2001, p. 22.
Quesenberry, C., Statistical Gymnastics, Quality Progress, Sept. 1998, pp. 75–78.
Rosen, R. and Digh, P., Developing globally literate leaders, T+D, 55(5), 70–83, 2001.
Salzman, R.H. and Liddy, R.G., Product Life Predictions from Warranty Data, 1996 SAE
Reliability, Maintainability, Supportability and Logistics Conferences and Workshop,
Proceedings of the 8th Annual SAE RMS Workshop, 1996, pp. 45–48.
Schuetz, G., Gaged and Confused, Quality Digest, May 2001, pp. 44–47.
Schwarz, F.C., Managing Progress Through Value Engineering, SAE National Combined
Farm Construction and Industrial Machinery, Powerplant, and Transportation Meet-
ings, Society of Automotive Engineers. Milwaukee, Sept. 9–12, 1968, paper 680566.
Slater, R., Jack Welch and the GE Way: Management Insights and Leadership Secrets of the
Legendary CEO, McGraw-Hill, New York, 1999.
Smith, D., How Good Are Your Data? Quality Digest, June 2000, pp. 50–51.
Stalk, G., Jr. and Hout, T.M., Competing Against Time: How Time-Based Competition is
Reshaping Global Markets, Free Press, New York, 1990.
Stamatis, D.H., The Nuts and Bolts of Reengineering, Paton Press, Red Bluff, CA, 1998.
Stamatis, D.H., TQM Engineering Handbook, Marcel Dekker, New York, 1997.
Stasiowski, F.A. and Burstein, D., Total Quality Management for the Design Firm: How to
Improve Quality, Increase Sales, and Reduce Costs, Wiley, New York, 1993.
Steel, J., Truth, Lies and Advertising, Wiley, New York, 1998.
Steele, J. M., Applied Finite Element Modeling: Practical Problem Solving for Engineers,
Marcel Dekker, New York, 1998.
Stein, P., All You Ever Wanted to Know About Resolution, Quality Progress, July 2001, pp.
141–142.
Stevens, D.P., A stochastic approach for analyzing product tolerances, Quality
Engineering, 6(3), 439–449, 1994.
Sun, H., Comparing quality management practices in the manufacturing and service industries:
learning opportunities, Quality Management Journal, 8(2), 53–71, 2001.
Subramanian, K., The System Approach: A Strategy to Survive and Succeed in the Global
Economy, Hanser Gardner, Cincinnati, 2000.
Taraschi, R., Cutting the Ties that Bind, Training and Development, Nov. 1998, pp. 12–14.
Tichy, N.M. and Sherman, S., Control Your Destiny or Someone Else Will: Lessons in
Mastering Change — From the Principles Jack Welch Is Using to Revolutionize GE,
Harper Business, New York, 1993.
Umble, E.J. and Umble, M.M., Developing Control Charts and Illustrating Type I and Type
II Errors, Quality Management Journal, 2000, pp. 23–31.
Valance, N., Prices Without Borders? CFO, Aug. 2001, pp. 71–73.
Van Mieghem, T., Lessons Learned From Alexander the Great, Quality Progress, Jan. 1998,
pp. 41–48.
Vasilash, G.S., For Robust Products, Automotive Design and Production, Aug. 2001, p. 8.
Ward, S., How Much Data is Needed? Quality, July 2001, pp. 26–29.
Wearing, C. and Karl, D.P., The Importance of Following GD&T Specifications, Quality
Progress, Feb. 1995, pp. 95–98.
Wetmore, D., The Juggling Act, Training and Development, Sept. 2000, pp. 67–68.
White, D.A. and Kall, J., Coherent Laser Radar: True Noncontact Three-dimensional Mea-
surement Has Arrived, Quality Digest, Aug. 1999, pp. 35–38.
Whitfield, K., The Current State of Quality at Honda and Toyota, Automotive Design and
Production, Aug. 2001, pp. 50–52.
Yilmaz, M.R. and Chatterjee, S., Six sigma beyond manufacturing — a concept for robust
management, Quality Management Journal, 7(3), 67–78, 2000.
Index

A
Abstracting and indexing services, 145
Accelerated degradation testing (ADT), 336
Accelerated depreciation, 687
Accelerated life testing (ALT), 336, 362
Accelerated stress test (AST), 310–311
Accelerated testing, 305
  ADT (accelerated degradation testing), 336
  ALT (accelerated life testing), 336, 362
  AST (accelerated stress test), 310–311
  constant-stress testing, 305–306
  definition of, 362
  HALT (highly accelerated life test), 310
  HASS (highly accelerated stress screens), 310
  methods, 305–306
  models, 306–309
  PASS (production accelerated stress screen), 311–312
  progressive-stress testing, 306
  step-stress testing, 306
Acceleration factor (A), 308–309
Acclaro (software), 545–547
Accountants, 663
  clean opinions of, 671
  reports of, 671–672
Accounting
  accrual basis of, 676–677
  books of account in, 675–676
  in business assessments, 138
  cash basis of, 677
  and depreciation, 684
  earliest evidence of, 672
  entries in, 675–676
  financial reports in, 664
  and financial statement analysis, 688
  as measure of quality cost, 492
  recording business transactions in, 672–675
  roles in business, 664
  valuation methods in, 679–681
Accounts
  books of, 675–676
  contra, 684
  types of, 674
Accounts receivables, 681, 691
Accrual accounting, 676–677
Accrued pension liabilities, 667
Accumulated depreciation, 665–666, 684
Achieved availability, 292
Acquisitions, in product design, 196
Action plans, 161–162
  based on facts and data, 107
  creative planning process in, 162
  documenting, 162
  in FMEA (failure modes and effects analysis), 253–258
  monitoring and controlling, 162–163
  prioritizing, 162
Action standards, see standards
Activation energy type constant (Ea), 308
Active repair time, 293
Activities in benchmarking
  after visits to partners, 156
  defining, 150
  drivers of, 151
  flowcharting, 153–154
  modeling, 152–153
  output, 151
  performance measure, 151–152
  resource requirements, 151
  triggering events, 150
  during visits to partners, 155
Activity analysis, 150–152
Activity benchmarking, 123
Activity drivers, 151
Activity performance measure, 151–152
Actual costs, 478–480, 568, 703
Actual operating hours, 525
Actual size, 522
Actual usage, amount of, 525
Administrative process
  cost of, 570
  improving, 490–492
  as measure of quality cost, 493
ADT (accelerated degradation testing), 336
Advanced product quality planning, see APQP
Advanced quality planning, see AQP
Advanced Systems and Designs Inc., 405
Aerospace industry, 226
Aesthetics, 113