A look at the innovations that contributed to the methodology we call Six Sigma and a glimpse into its future.
The Six Sigma methodology is not a revolutionary way of thinking, and it does not provide a radically new set of
quality tools. It is more of an evolutionary development in the science of continuous improvement that combines the
best elements from many earlier quality initiatives. Although some of the tools used in Six Sigma, such as quality
function deployment (QFD), are relatively new, most, such as the fishbone diagram, date back 50 years or more.
The philosophies related to Six Sigma have existed in one form or another even longer than that
(see Figure 1). Customer focus, data driven decision making, business results focus and process
understanding are not new approaches to business success. What is new, and what makes Six
Sigma so powerful, is the combination of these elements with a rigorous, disciplined approach and well-publicized,
proven business successes.
General George Patton was a great student of history. He believed those who did not learn from past mistakes were
doomed to repeat them. This is true in the field of quality and continuous improvement, as well. Understanding
quality's roots and the reasons behind the methods will allow practitioners to be better prepared to launch successful
projects or large scale initiatives.
With the Industrial Revolution in full swing, ever increasing production volumes required different methods of testing
and assurance to provide reasonable certainty the new product would be similar to the original product or design. It
was no longer practical to test each piece against go/no-go gages. Such testing was cost prohibitive,
unacceptably time consuming and, in some cases, impossible, especially if the test adversely affected the
functionality of the output. Therefore, methods to monitor the consistency of the process that produced the parts and
the use of sampling, rather than 100% inspection, were becoming necessities.
On May 16, 1924, Walter A. Shewhart introduced a new data collection, display and analysis form.(3) It contained the first
known example of a process control chart and signaled the beginning of the age of statistical quality control. The
original control chart form allowed an inspector to document the percentage of defective product in both a tabular and
a time ordered graphic format. As data collection progressed, statistically computed limits were drawn to identify the
expected range of defective products. This helped alert the operator to changes in the process.
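Shewhart's chart plotted the fraction defective over time against statistically computed limits. A minimal sketch of that calculation, assuming the now-standard 3-sigma limits on a binomial proportion (the sample data below are invented for illustration):

```python
import math

def p_chart_limits(defective_counts, sample_size):
    """Center line and 3-sigma control limits for a p-chart
    (fraction-defective chart), in the spirit of Shewhart's original form."""
    p_bar = sum(defective_counts) / (len(defective_counts) * sample_size)
    sigma = math.sqrt(p_bar * (1 - p_bar) / sample_size)
    lcl = max(0.0, p_bar - 3 * sigma)  # a proportion cannot fall below zero
    ucl = min(1.0, p_bar + 3 * sigma)
    return lcl, p_bar, ucl

# Invented data: defectives found in ten samples of 200 pieces each
counts = [7, 5, 9, 6, 8, 4, 10, 6, 7, 8]
lcl, center, ucl = p_chart_limits(counts, 200)
# A sample fraction falling outside [lcl, ucl] signals the process has changed
```

A point beyond the limits is exactly the "alert to the operator" described above: evidence of a change in the process, not merely a bad piece.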
The ability to use statistically based control charts changed the role of the quality inspector from one of identifying
and sorting defective product to one of monitoring the stability of the process and identifying when it had changed.
Early detection by the inspector or worker helped identify the causes of the change and target improvements.
Improved product quality resulted through planning and timely, appropriate corrective action.
As production lots grew larger and more complex throughout the remainder of the 1920s and 1930s, the need for
sophisticated quality assurance and control gave birth to large quality control functions. Quality control departments
came to include inspectors, chief inspectors, supervisors, engineers and managers.(4)
The use of statistics grew, and in 1950, the U.S. government required statistically based levels of product quality from
its vendors. The military standard MIL-STD-105A was adopted and used by contracting officers and purchasing
agents to define contractually required sample sizes and maximum tolerable defect rates. This validated the need for
large separate quality functions. In reality, as quality evolved, it helped isolate the responsibility for quality from the
operators who actually produced the product or service. The "it's good enough" or "if it's bad enough, the inspectors
will sort it out" mentality was common.
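Plans like those in MIL-STD-105A rest on a simple binomial model: inspect n pieces from a lot and accept the lot if no more than c are defective. A hedged sketch of that calculation (the plan parameters below are invented for illustration, not taken from the standard's tables):

```python
from math import comb

def prob_accept(n, c, p):
    """Probability of accepting a lot under a single-sampling plan:
    inspect n pieces, accept if at most c are defective (binomial model)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Hypothetical plan: inspect 50 pieces, accept on 2 or fewer defectives
pa_good = prob_accept(50, 2, 0.01)  # lot at 1% defective is usually accepted
pa_bad = prob_accept(50, 2, 0.10)   # lot at 10% defective is usually rejected
```

The curve of acceptance probability against lot quality (the operating characteristic curve) is what the standard's tables encoded for contracting officers and purchasing agents.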
When World War II ended, consumer affluence in the United States provided constantly increasing demand.
Fortunately, consumers had a tolerance for marginal quality. They readily absorbed the additional cost of inspection
and sorting, thereby allowing manufacturing operations to continue to focus on volume and output without a need to
focus on quality improvement or cost reduction.
Homer Sarasohn was summoned from the Massachusetts Institute of Technology to Japan to share U.S.
manufacturing management principles with Japanese business leaders. Japanese leaders considered the use of
statistics the secret weapon that helped the Allies win the war and wanted to learn more about the practical
application of this weapon.
Since Shewhart was too ill to travel in 1950, W. Edwards Deming, another learned statistician and friend of Shewhart's, went to
Japan to teach statistics and U.S. quality methods. He reinforced the value of viewing data against computed
statistics to quantify variation and predict future process performance. This allowed timely identification of the sources
of problems and promoted the opportunity for continuous improvement.
Throughout the years, Deming promoted the use of the plan-do-check-act (PDCA) cycle of continuous improvement
and later changed it to the plan-do-study-act or PDSA cycle.
The level of quality awareness and the use of statistical methods grew rapidly, but the statisticians became isolated
and were seen as a separate layer of experts.(5) Managers weren't able to dedicate the time or effort to fully
understand the statistical theories and applications, and the operators were afraid of the statisticians, in part because
they feared measuring devices and automatic recorders were being used to monitor the workers' performance. The
operators also felt a lack of control over the changes being made.
The quality revolution was in jeopardy. In 1954, another American was invited to Japan to help address the problem.
(6) Juran brought his principles of quality management to Japan and helped integrate quality initiatives throughout all
layers in organizations. His concept of integration was known as "Big Q," or quality through management's active
involvement and ownership. Additional concepts introduced by Juran included the need for project by project quality
improvement and the diagnostic/remedial journeys, all of which were documented in his Quality Control Handbook,
first published in 1951.
During the 1970s, one nonviolent event shook the world and had a long-lasting, far-reaching impact on quality. The oil
embargo of 1973 forced U.S. business leaders to finally recognize the value of quality. Reduced supplies of oil
products resulted in increased costs and long lines at the gas pumps.
The impact was even greater after the flow of oil resumed. The Japanese developed and brought to market more fuel
efficient automobiles to address new customer requirements. These automobiles were less expensive, of higher
quality and more closely satisfied the needs of the customer than those produced domestically. The United States
lost its market share to foreign automobiles; the rippling effect of this loss proved significant.
Many people, including Ford Motor Co.'s entire top management team, went to visit the spotlighted U.S. company
that improved significantly as a result of Deming's consulting work. The Nashua Corp. proved quality improvement
was possible beyond the Japanese culture.
With the renewed interest in his work, Deming began a new career at age 79, helping American managers better
understand the concepts of variation and the value of using statistical methods. He introduced many to an
enlightened vision of quality using graphic displays (the red bead experiment) and easy to understand key points (his
14 point management theory and the seven deadly diseases) during his four-day seminars.
Juran introduced a series of videotapes in 1979 entitled Juran on Quality Improvement and focused attention on the
quality trilogy (planning, improvement and control) and the concept of project by project improvements. There were
other teachers and topics: Armand Feigenbaum, Kaoru Ishikawa, Yoji Akao, Genichi Taguchi, Dorian Shainin and
Shigeo Shingo are just a few of the leaders who came into the spotlight during this time. One other person who had
the right message at the right time was Philip Crosby.
Unfortunately, Crosby's roadmap did not always lead to the promised land. Quality was still too often delegated or
treated as a separate initiative, and zero defects seemed unattainable. As a result, many dismissed the approach as
unrealistic. It did, however, mobilize many people at all levels in organizations and provided widespread information
about quality tools and their use.
These global standards were designed to promote uniformity between countries that had their own definition of
quality specifications. This was most important in Europe where simultaneous purchasing from different countries
could result in unplanned variation.
The other opportunity afforded by ISO 9000 was the ability to force potential suppliers to submit to a third-party audit
to confirm the existence of and adherence to the quality standard. Third-party auditing and certification was designed
to provide reasonable assurances of ongoing consistency to the purchaser and to minimize the number of site visits
required by customers.
The creation of ISO 9000 helped define many of the elements of sound quality practice, but it did not assure the
products goodness or fitness for use; it only addressed consistency in the process. For example, a company that
sells cement lifejackets could be ISO 9000 certified.
It was originally believed that within a few years, the European market would close out any companies that were not
ISO 9000 certified. Although this never came to fruition, ISO 9000 served as a benchmark and provided a marketing
advantage, but it was not automatically exclusionary.
Two key aspects of the Baldrige Award are the promotion of best practice sharing and the establishment of a
benchmark for quality systems that focused on customer satisfaction as a primary driver of business design and
execution. Since then, the award has been replicated by many states.
The first company to win the Baldrige was Motorola. It recognized the need for focused quality improvement, and the
award simply confirmed it had an approach and deployment of metric based, customer focused quality that would
lead to the current Six Sigma methodology.
Building upon these existing Motorola practices, Bill Smith and other Motorola executives married the concept of
process capability and product specifications. Cp and Cpk measurements were used to compare the process
performance to these specifications. The calculation for capability became defects per million opportunities (DPMO).
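These measures can be sketched with their standard textbook definitions: Cp compares the specification width to six process standard deviations, Cpk also penalizes processes running off center, and DPMO scales the defect count to a million opportunities. The numbers below are invented for illustration:

```python
def cp_cpk(usl, lsl, mean, std):
    """Process capability indexes against upper and lower specification limits."""
    cp = (usl - lsl) / (6 * std)                   # spec width vs. process spread
    cpk = min(usl - mean, mean - lsl) / (3 * std)  # penalizes off-center processes
    return cp, cpk

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# Invented example: spec 10 +/- 0.3, process running at mean 10.1, std 0.05
cp, cpk = cp_cpk(usl=10.3, lsl=9.7, mean=10.1, std=0.05)  # cp = 2.0, cpk ~ 1.33
rate = dpmo(defects=12, units=400, opportunities_per_unit=20)  # 1500 DPMO
```

Note how the example process loses capability (cpk below cp) purely because its mean drifted off the spec center, which is precisely the distinction Cpk was designed to expose.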
Both the DPMO and the use of specifications left naysayers with enough reason to dismiss the new approach. They
believed you could improve performance by increasing the number of opportunities you consider or by changing the
specifications. Specifications were not directly based on customer needs; rather, they were often derived from product
performance data and typically reflected the capability of the process.
IBM and other companies that had adopted the concepts of Six Sigma shared the new methodology and philosophy
with their suppliers, engineers and managers. To ensure widespread acceptance, these companies and people
needed to understand that only significant opportunities should be counted in the DPMO formula and that customers'
ever-changing expectations must be considered. It was also important for organizations to understand that embedding
a companywide mindset of continuous improvement was more important than targeting a specific quality level, such as
3.4 DPMO.
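The 3.4 DPMO figure itself follows from Motorola's convention of allowing a 1.5-sigma long-term drift of the process mean: the defect rate of a six sigma process is the one-sided normal tail beyond 6 - 1.5 = 4.5 standard deviations. A quick check of that arithmetic:

```python
from math import erfc, sqrt

def dpmo_at_sigma(sigma_level, shift=1.5):
    """One-sided standard normal tail beyond (sigma_level - shift) standard
    deviations, scaled to defects per million opportunities. The 1.5-sigma
    shift is Motorola's long-term drift convention."""
    z = sigma_level - shift
    return 0.5 * erfc(z / sqrt(2)) * 1_000_000

six_sigma_rate = dpmo_at_sigma(6)  # about 3.4 DPMO
```

Without the shift convention, a true six sigma process would show roughly one defect per billion opportunities, which is why the quoted 3.4 figure only makes sense with the 1.5-sigma allowance stated.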
Eventually, the process was modified by others. The original measure, analyze, improve and control (MAIC) steps
assumed the project had a clear definition. IBM and other early users clearly identified the need to ensure the project
was properly scoped, resourced and defined by adding a D for define to the methodology (DMAIC). Other
companies have since modified it even more. Some add an additional I to emphasize the need to implement the
identified solutions; others include a trailing L to emphasize the need to leverage the results across the organization.
AlliedSignal was one of the first companies to adopt and use the methodology. Larry Bossidy, AlliedSignal's CEO,
demonstrated the business value of Six Sigma by effectively turning around his company. In 1995, he introduced the
concept of Six Sigma to Jack Welch, CEO of General Electric. Welch took the methodology, made it a corporate
requirement and firmly deployed it throughout his organization with great intensity and significant success. His results
are well documented, and the rest is history.
The QS-9000 standard used ISO 9000 as a baseline, then made improvements, such as requiring an increased use of
statistical process control, supplier management, failure mode and effects analysis and business planning. Besides
these requirements, which are mandated by all three automakers, specific expectations for each are also identified.
QS-9000 requires third-party certification and a focus on continuous improvement.
Various other quality initiatives became the flavor of the month in the time period from the 1960s to the 1990s. Most
made significant contributions to the advancement of improved product or service quality, bottom-line impact or
business redesign. But because they were not comprehensive, most failed to endure the next consultant pitchman.
Quality circles, for example, were an early Japanese import that failed miserably in translation. Observers in
the United States saw small, voluntary, empowered teams working on projects they selected, but when the projects
failed to produce improved business results, they were abandoned as not culturally transferable. Unfortunately,
American observers did not understand the real intent of quality circles. The circles were originally designed to
provide foremen with small teams to lead during their quality training. The teams served as just-in-time (JIT)
opportunities to help the foremen practice what they were taught. Reduced cycle times and inventory levels were key
elements of JIT manufacturing.
Likewise, kanban squares and pull systems were successfully developed and deployed in Japan. They were brought
here by Hewlett-Packard and popularized by Richard Schonberger.(12) Many such concepts and practices, including
lean, have been absorbed by other efforts that still exist today.
Another Japanese improvement method involved kaizen events: structured, quick improvement efforts that produced
some improvement, though the results were of secondary importance. The primary goal was to get practitioners to
learn problem solving tools and methodology.
Total quality management (TQM) was also popular for a number of years. It began as total quality control in the mid-
1950s, was introduced in the Harvard Business Review by Armand V. Feigenbaum(13) and was driven in Japan by
Ishikawa. The first real attempt to systematically deploy quality, TQM is still the term the Japanese use to describe
their approach to quality.
TQM in the United States was a little different. It involved banners and pride in quality, but it fell short two ways:
1. It emphasized quality improvement for the sake of quality improvement instead of tying
improvements to the bottom line.
2. It focused on improvements within departments or business functions, not among them.
TQM introduced entire organizations to the best quality tools and gave them the mindset to
strive for continuous quality improvement, but because it didn't produce significant dollar
benefits, the movement did not endure.
Although many of these initiatives have come and gone, several of their elements are apparent in the current Six
Sigma methodology.
There is, of course, an economic limit to the improvement required, so the argument about whether the goal should
be 3.4 DPMO or zero defects is not meaningful for more than a few companies. Both DMAIC and design for Six
Sigma will have their place, but the rigorous nature of the methodology and the time required to study and select
options based on data are not as significant for all improvements.
Although quick wins are allowed within the DMAIC methodology, management impatience requires more flexible
solutions, depending on the situation. Emergency response actions and interim containment plans detailed in Ford's
global 8D program(14) are two examples that serve different needs.
I believe Six Sigma will evolve significantly over the next five to 10 years. Black Belts and Green Belts will not exist as
descriptors of specially trained professionals. For a company to be successful, each person will have to routinely use
the tools and methods of root cause identification and elimination. This lesson, originally learned in the 1980s when
large quality departments were eliminated because they were detrimental to the promotion of quality through
employee commitment, will become a standard of good business management. The future will bring an even more
rigorous approach toward designing processes correctly the first time rather than fixing existing processes.
We must be careful not to relive the problems encountered by the Japanese. Ishikawa stated, "It is true that statistical
methods are effective, but we overemphasized their importance. As a result, people either feared or disliked quality
control as something very difficult. We overeducated people by giving them sophisticated methods where, at that
stage, simple methods would have sufficed."(15) Instead, we need to put the basic tools and the desire to
continuously improve into everyone's hands, providing guidance and support by skilled professionals and leadership
by senior management.
Maintaining a clear focus on customer wants and continuously improving the processes that provide the product or
service will be the only way to remain competitive. As the methodology reaches further into the educational system, it
will blend into the fabric of generally accepted management practice. The future is bright, mostly because the past
was so generous with lessons and success stories.
Though there will be changes (the certifications and belts may go away, and Six Sigma training may be incorporated
into other company training and lose its identity), the basic tenets will remain forever. Elements such as customer
focus, data driven analysis, statistically based decisions, root cause problem identification and elimination, process
oriented thinking, process control to maintain improvements achieved during the initial improvement efforts and
sharing (leveraging) of results will always be necessary elements of business success.
References
Note: This article is not a complete listing of the many innovators, teachers or practitioners who have guided us in the
development of improved quality tools and methodologies, nor does it detail the many contributions of those who are
included.