The Evolution of Six Sigma

Jim Folaron, J.P. Morgan Chase & Co.

A look at the innovations that contributed to the methodology we call Six Sigma and a glimpse into its future.

The Six Sigma methodology is not a revolutionary way of thinking, and it does not provide a radically new set of
quality tools. It is more of an evolutionary development in the science of continuous improvement that combines the
best elements from many earlier quality initiatives. Although some of the tools used in Six Sigma, such as quality
function deployment (QFD), are relatively new, most, such as the fishbone diagram, date back 50 years or more.

The philosophies related to Six Sigma have existed in one form or another even longer than that
(see Figure 1). Customer focus, data driven decision making, business results focus and process
understanding are not new approaches to business success. What is new, and what makes Six
Sigma so powerful, is the combination of these elements with a rigorous, disciplined approach and well-publicized,
proven business successes.

General George Patton was a great student of history. He believed those who did not learn from past mistakes were
doomed to repeat them. This is true in the field of quality and continuous improvement, as well. Understanding
quality's roots and the reasons behind the methods will allow practitioners to be better prepared to launch successful
projects or large scale initiatives.

1798: Eli Whitney, Mass Production and Interchangeable Parts


Best known for his invention of the cotton gin in 1793, Eli Whitney had a greater impact on modern manufacturing
with the introduction of his revolutionary uniformity system.(1) In 1798, Whitney was awarded a government contract
to produce 10,000 muskets. He proved it was possible to produce interchangeable parts that were similar enough in
fit and function to allow for random selection of parts in the assembly of the muskets.
Throughout the next century, quality involved defining ways to objectively verify the new parts would match the
original parts or design. Exact replication was not always necessary, practical, cost effective or measurable.
Objective methods of measuring and assuring dimensional consistency evolved in the mid-1800s
with the introduction and use of go gages that verified the minimum dimension of the new part.
Correct replication of the maximum dimension was assured by using the no go gages that were
introduced about 30 years later. Minimum and maximum tolerance limits, as measured by the use of
these gages, provided some of the first objective measures of similarity among parts. These
measures eventually evolved into specifications. See Figure 2 for a list of some of the era's
contributions to Six Sigma.

1913: Henry Ford and the Moving Assembly Line


With the introduction of Henry Ford's moving automobile assembly line in 1913, the need for
predetermined part consistency became more acute. It was critical that only good parts be available
for use so the production assembly line would not be forced to slow down or stop while a worker sorted through piles
of parts to find one that fit.

With the Industrial Revolution in full swing, ever increasing production volumes required different methods of testing
and assurance to provide reasonable certainty the new product would be similar to the original product or design. It
was no longer practical to test each piece against go and no go gages. Such testing was cost prohibitive,
unacceptably time consuming and, in some cases, impossible, especially if the test adversely affected the
functionality of the output. Therefore, methods to monitor the consistency of the process that produced the parts and
the use of sampling, rather than 100% inspection, were becoming necessities.

1924: Walter Shewhart


The Western Electric manufacturing plant in Hawthorne, IL, is noteworthy because it was the breeding ground for
many quality leaders, including Joseph M. Juran, W. Edwards Deming and Walter A. Shewhart. Juran worked at the
plant for 17 years, Deming worked there as an intern during two summers, and Shewhart worked in the research labs
that supported the plant.

On May 16, 1924, Shewhart introduced a new data collection, display and analysis form.(3) It contained the first
known example of a process control chart and signaled the beginning of the age of statistical quality control. The
original control chart form allowed an inspector to document the percentage of defective product in both a tabular and
a time ordered graphic format. As data collection progressed, statistically computed limits were drawn to identify the
expected range of defective products. This helped alert the operator to changes in the process.
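
To make the idea of statistically computed limits concrete, here is a minimal sketch of the three-sigma limits behind a fraction-defective (p) chart, the kind of chart described above. The subgroup size and defect counts are invented for illustration, and equal subgroup sizes are assumed.

```python
# Minimal sketch of p-chart control limits; all data below are invented.
import math

def p_chart_limits(defective_counts, subgroup_size):
    """Return (center line, lower limit, upper limit) for a fraction-defective chart."""
    samples = len(defective_counts)
    p_bar = sum(defective_counts) / (samples * subgroup_size)   # average fraction defective
    sigma_p = math.sqrt(p_bar * (1.0 - p_bar) / subgroup_size)  # standard error of p
    lower = max(0.0, p_bar - 3.0 * sigma_p)                     # a fraction cannot be negative
    upper = p_bar + 3.0 * sigma_p
    return p_bar, lower, upper

# Twenty daily samples of 200 parts each, with the number found defective in each sample
counts = [7, 5, 9, 6, 8, 4, 7, 10, 6, 5, 8, 7, 6, 9, 5, 7, 8, 6, 7, 6]
center, lcl, ucl = p_chart_limits(counts, 200)
print(f"CL = {center:.4f}, LCL = {lcl:.4f}, UCL = {ucl:.4f}")
# A point plotted outside these limits signals that the process has changed.
```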

The ability to use statistically based control charts changed the role of the quality inspector from one of identifying
and sorting defective product to one of monitoring the stability of the process and identifying when it had changed.
Early detection by the inspector or worker helped identify the causes of the change and target improvements.
Improved product quality resulted through planning and timely, appropriate corrective action.

As production lots grew larger and more complex throughout the remainder of the 1920s and 1930s, the need for
sophisticated quality assurance and control gave birth to large quality control functions. Quality control departments
came to include inspectors, chief inspectors, supervisors, engineers and managers.(4)

The use of statistics grew, and in 1950, the U.S. government required statistically based levels of product quality from
its vendors. The military standard MIL-STD-105A was adopted and used by contracting officers and purchasing
agents to define contractually required sample sizes and maximum tolerable defect rates. This validated the need for
large separate quality functions. In reality, as quality evolved, it helped isolate the responsibility for quality from the
operators who actually produced the product or service. The "it's good enough" or "if it's bad enough, the inspectors will sort it out" mentality was common.

When World War II ended, consumer affluence in the United States provided constantly increasing demand.
Fortunately, consumers had a tolerance for marginal quality. They readily absorbed the additional cost of inspection
and sorting, thereby allowing manufacturing operations to continue to focus on volume and output without a need to
focus on quality improvement or cost reduction.

1945: The Japanese Quality Movement Begins


Japan was crippled by the war. When it ended, General MacArthur and the occupying forces needed to help rebuild
the communications infrastructure to inform the people of Japan that the war had ended and that the United States was no longer an enemy.

Homer Sarasohn was summoned from the Massachusetts Institute of Technology to Japan to share U.S.
manufacturing management principles with Japanese business leaders. Japanese leaders considered the use of
statistics the secret weapon that helped the Allies win the war and wanted to learn more about the practical
application of this weapon.

Since Shewhart was too ill to travel in 1950, Deming, another learned statistician and friend of Shewhart, went to
Japan to teach statistics and U.S. quality methods. He reinforced the value of viewing data against computed
statistics to quantify variation and predict future process performance. This allowed timely identification of the sources
of problems and promoted the opportunity for continuous improvement.

Throughout the years, Deming promoted the use of the plan-do-check-act (PDCA) cycle of continuous improvement
and later changed it to the plan-do-study-act or PDSA cycle.

The level of quality awareness and the use of statistical methods grew rapidly, but the statisticians became isolated
and were seen as a separate layer of experts.(5) Managers weren't able to dedicate the time or effort to fully understand the statistical theories and applications, and the operators were afraid of the statisticians, in part, because they feared measuring devices and automatic recorders were being used to monitor the workers' performance. The
operators also felt a lack of control over the changes being made.

The quality revolution was in jeopardy. In 1954, another American was invited to Japan to help address the problem.
(6) Juran brought his principles of quality management to Japan and helped integrate quality initiatives throughout all
layers in organizations. His concept of integration was known as "Big Q," or quality through management's active
involvement and ownership. Additional concepts introduced by Juran included the need for project by project quality
improvement and the diagnostic/remedial journeys, all of which were documented in his Quality Control Handbook,
first published in 1951.

1973: The Japanese Make Their Move


Over the next 20 years, the efforts of the Japanese to constantly improve quality and manufacturing capability were
more effective than those employed in the United States. Their focus on two aspects of productivity, defect elimination and cycle time reduction, resulted in many significant developments and successes for Toyota and other
Japanese companies. Meanwhile, U.S. efforts focused on increased volume and maintenance of a lucrative market
share.

During the 1970s, one nonviolent event shook the world and had a long-lasting, far-reaching impact on quality. The oil
embargo of 1973 forced U.S. business leaders to finally recognize the value of quality. Reduced supplies of oil
products resulted in increased costs and long lines at the gas pumps.

The impact was even greater after the flow of oil resumed. The Japanese developed and brought to market more fuel
efficient automobiles to address new customer requirements. These automobiles were less expensive, of higher
quality and more closely satisfied the needs of the customer than those produced domestically. The United States
lost its market share to foreign automobiles; the rippling effect of this loss proved significant.

1980: If Japan Can, Why Can't We?


If the oil embargo caused U.S. business leaders to become aware of the presence of foreign economic competition,
the television documentary NBC aired on June 24, 1980, hit them over the head like a ton of bricks.(7) The
documentary "If Japan Can... Why Can't We?" asked that very pointed question.

Many people, including Ford Motor Co.'s entire top management team, went to visit the spotlighted U.S. company that improved significantly as a result of Deming's consulting work. The Nashua Corp. proved quality improvement
was possible beyond the Japanese culture.

With the renewed interest in his work, Deming began a new career at age 79, helping American managers better
understand the concepts of variation and the value of using statistical methods. He introduced many to an
enlightened vision of quality using graphic displays (the red bead experiment) and easy to understand key points (his
14 point management theory and the seven deadly diseases) during his four-day seminars.

Juran introduced a series of videotapes in 1979 entitled Juran on Quality Improvement and focused attention on the
quality trilogy (planning, improvement and control) and the concept of project by project improvements. There were
other teachers and topics: Armand Feigenbaum, Kaoru Ishikawa, Yoji Akao, Genichi Taguchi, Dorian Shainin and
Shigeo Shingo are just a few of the leaders who came into the spotlight during this time. One other person who had
the right message at the right time was Philip Crosby.

1980: Philip Crosby and Quality Is Free


U.S. industry was suffering through a recession and increasing foreign competition when Crosby's book, Quality Is Free,(8) hit the market. He wrote about a 14-step approach to quality improvement and introduced most of us to the
concept of zero defects, which was rooted in the U.S. space program. The book was seen by many managers as a
surefire recipe for success.

Unfortunately, Crosby's roadmap did not always lead to the promised land. Quality was still too often delegated or
treated as a separate initiative, and zero defects seemed unattainable. As a result, many dismissed the approach as
unrealistic. It did, however, mobilize many people at all levels in organizations and provided widespread information
about quality tools and their use.

1987: International Organization for Standardization


The desire for consistency in the definition of quality led to the development of industrial standardization
organizations beginning in Great Britain in 1901. By 1930, most of the world's industrialized nations had similar
organizations. In 1987, the Geneva based International Organization for Standardization, known as ISO, introduced a
series of quality standards that were adopted by most of the industrialized world to serve as a single, global standard.
Based primarily on the BS-5750 quality system standard, ISO 9000 detailed the key elements of sound quality
practices.

These global standards were designed to promote uniformity among countries, each of which had its own definition of quality specifications. This was most important in Europe, where simultaneous purchasing from different countries could result in unplanned variation.

The other opportunity afforded by ISO 9000 was the ability to force potential suppliers to submit to a third-party audit
to confirm the existence of and adherence to the quality standard. Third-party auditing and certification was designed
to provide reasonable assurances of ongoing consistency to the purchaser and to minimize the number of site visits
required by customers.

The creation of ISO 9000 helped define many of the elements of sound quality practice, but it did not assure the
product's goodness or fitness for use; it only addressed consistency in the process. For example, a company that
sells cement lifejackets could be ISO 9000 certified.

It was originally believed that within a few years, the European market would be closed to any companies that were not ISO 9000 certified. Although this never came to fruition, ISO 9000 served as a benchmark and provided a marketing advantage, but it was not automatically exclusionary.

1987: Malcolm Baldrige National Quality Award


Throughout the 1980s, U.S. based manufacturing operations continued to lose ground to constantly improving foreign
competition, partially because Japanese firms believed in helping each other and sharing best practices with their
countrymen. Then, in 1987, the United States government introduced the Malcolm Baldrige National Quality Award.
(9) This award is presented annually by the president and is designed to provide an operational definition of business
excellence.

Two key aspects of the Baldrige Award are the promotion of best practice sharing and the establishment of a
benchmark for quality systems that focuses on customer satisfaction as a primary driver of business design and
execution. Since then, the award has been replicated by many states.

The first company to win the Baldrige was Motorola. It recognized the need for focused quality improvement, and the
award simply confirmed it had an approach and deployment of metric based, customer focused quality that would
lead to the current Six Sigma methodology.

1987: Motorola and Six Sigma


Motorola was greatly impacted by the quality improvements in foreign products. Under the leadership and support of
Bob Galvin, the company's zeal for quality improvement flourished. Stealing the best practices from the best companies (known as the "bandit project"(10)) was Motorola's attempt at turning around the pocket pager business in
the early 1980s.

Building upon these existing Motorola practices, Bill Smith and other Motorola executives married the concepts of process capability and product specification. Cp and Cpk measurements were used to compare process performance to these specifications, and the calculation for capability became defects per million opportunities (DPMO).
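
A hedged sketch of these measures follows; the sample data, specification limits, unit counts and opportunity counts are invented for illustration and are not drawn from Motorola's work.

```python
# Illustrative-only sketch of Cp, Cpk and DPMO; all numbers below are invented.
import statistics

def cp_cpk(values, lsl, usl):
    """Process capability (Cp) and centered capability (Cpk) against spec limits."""
    mean = statistics.mean(values)
    sigma = statistics.stdev(values)                    # sample standard deviation
    cp = (usl - lsl) / (6.0 * sigma)                    # spec width vs. process spread
    cpk = min(usl - mean, mean - lsl) / (3.0 * sigma)   # penalizes an off-center process
    return cp, cpk

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

measurements = [10.02, 9.98, 10.01, 10.03, 9.97, 10.00, 10.02, 9.99]
print(cp_cpk(measurements, lsl=9.90, usl=10.10))

# The same 150 defects look better when more opportunities are counted per unit,
# which is exactly the loophole critics of the metric pointed to.
print(dpmo(150, units=1_000, opportunities_per_unit=5))    # 30,000 DPMO
print(dpmo(150, units=1_000, opportunities_per_unit=20))   # 7,500 DPMO
```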

Both the DPMO and the use of specifications left naysayers with enough reason to dismiss the new approach. They
believed you could improve apparent performance simply by increasing the number of opportunities you counted or by changing the specifications. Specifications were not directly based on customer needs; rather, they were often derived from product
performance data and typically reflected the capability of the process.

IBM and other companies that had adopted the concepts of Six Sigma shared the new methodology and philosophy
with their suppliers, engineers and managers. To ensure widespread acceptance, these companies and people
needed to understand that only significant opportunities should be counted in the DPMO formula and that customers' ever changing expectations must be considered. It was also important for organizations to understand that embedding a
companywide mindset of continuous improvement was more important than targeting a specific quality level, such as
3.4 DPMO.
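
For readers wondering where the 3.4 figure comes from, the sketch below reproduces it under the usual convention of pairing a "six sigma" short-term capability level with a 1.5 sigma long-term shift; that convention is an assumption here, not something the article itself details.

```python
# Worked check of the 3.4 DPMO figure, assuming the conventional 1.5-sigma
# long-term shift usually paired with a "six sigma" short-term capability level.
import math

def dpmo_at_sigma_level(sigma_level, shift=1.5):
    """One-sided normal tail area beyond (sigma_level - shift), scaled to a million."""
    z = sigma_level - shift
    tail = 0.5 * math.erfc(z / math.sqrt(2.0))
    return tail * 1_000_000

print(round(dpmo_at_sigma_level(6.0), 1))   # ~3.4 DPMO
print(round(dpmo_at_sigma_level(4.0)))      # ~6,210 DPMO
print(round(dpmo_at_sigma_level(3.0)))      # ~66,807 DPMO
```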

1988: Beyond Motorola


As a result of winning the Baldrige Award in 1988, Motorola was compelled to share its quality practices with others.
The company's approach to continuous improvement was based on a comparison of process performance and
product specification, and aggressive efforts to drive down defects.

Eventually, the process was modified by others. The original measure, analyze, improve and control (MAIC) steps
assumed the project had a clear definition. IBM and other early users clearly identified the need to ensure the project
was properly scoped, resourced and defined by adding a "D" for define to the methodology (DMAIC). Other companies have since modified it even more. Some add an additional "I" to emphasize the need to implement the identified solutions; others include a trailing "L" to emphasize the need to leverage the results across the organization.

AlliedSignal was one of the first companies to adopt and use the methodology. Larry Bossidy, AlliedSignal's CEO,
demonstrated the business value of Six Sigma by effectively turning around his company. In 1995, he introduced the
concept of Six Sigma to Jack Welch, CEO of General Electric. Welch took the methodology, made it a corporate
requirement and firmly deployed it throughout his organization with great intensity and significant success. His results
are well documented, and the rest is history.

1960-1995: Other Initiatives


The U.S. automobile industry, led by the Big Three of Ford, General Motors and Chrysler, quickly recognized what a
consolidated standard of quality could do, and QS-9000, the automotive supplier requirements standard, was
introduced in 1994.(11) Compliance with and certification against QS-9000 was a prerequisite for vendors who
wished to supply parts or services to one of the U.S. automakers.

The standard used ISO 9000 as a baseline, then made improvements, such as requiring an increased use of
statistical process control, supplier management, failure mode and effects analysis and business planning. Besides
these requirements, which are mandated by all three automakers, specific expectations for each are also identified.
QS-9000 requires third-party certification and a focus on continuous improvement.

Various other quality initiatives became the flavor of the month in the time period from the 1960s to the 1990s. Most
made significant contributions to the advancement of improved product or service quality, bottom-line impact or
business redesign. But because they were not comprehensive, most failed to outlast the next consultant's pitch.

Quality circles, for example, were an early Japanese import that failed miserably in translation. Observers in
the United States saw small, voluntary, empowered teams working on projects they selected, but when the projects
failed to produce improved business results, they were abandoned as not culturally transferable. Unfortunately,
American observers did not understand the real intent of quality circles. The circles were originally designed to
provide foremen with small teams to lead during their quality training. The teams served as just-in-time (JIT)
opportunities to help the foremen practice what they were taught. Reduced cycle times and inventory levels were key
elements of JIT manufacturing.

Likewise, kanban squares and pull systems were successfully developed and deployed in Japan. They were brought to the United States by Hewlett-Packard and popularized by Richard Schonberger.(12) Many such concepts and practices, including
lean, have been absorbed by other efforts that still exist today.

Another Japanese improvement method involved the use of kaizen events, which were structured, quick improvement efforts that produced some gains, though the results were of secondary importance. The primary goal was to get
practitioners to learn problem solving tools and methodology.

Total quality management (TQM) was also popular for a number of years. It began as total quality control in the mid-
1950s, was introduced in the Harvard Business Review by Armand V. Feigenbaum(13) and was driven in Japan by
Ishikawa. The first real attempt to systematically deploy quality, TQM is still the term the Japanese use to describe
their approach to quality.

TQM in the United States was a little different. It involved banners and pride in quality, but it fell short in two ways:

1. It emphasized quality improvement for the sake of quality improvement instead of tying improvements to the bottom line.
2. It focused on improvements within departments or business functions, not among them.

TQM introduced entire organizations to the best quality tools and gave them the mindset to strive for continuous quality improvement, but because it didn't deliver significant dollar benefits, the movement did not endure.

Although many of these initiatives have come and gone, several of their elements are apparent in the current Six
Sigma methodology.

What the Future Holds


Will Six Sigma endure, or will it turn out to be another flavor of the month? The coming years will bring changes and
further improvements. Just as the world's best sports car is not suitable for all persons and all situations, Six Sigma
has an important, but not exclusive, place in the field of continuous improvement. But Six Sigma, with a focus on
continuous and unending improvement, will endure.

There is, of course, an economic limit to the improvement required, so the argument about whether the goal should
be 3.4 DPMO or zero defects is not meaningful for more than a few companies. Both DMAIC and design for Six
Sigma will have their place, but the rigor of the methodology and the time required to study and select options based on data are not warranted for every improvement.

Although quick wins are allowed within the DMAIC methodology, management impatience requires more flexible
solutions, depending on the situation. Emergency response actions and interim containment plans detailed in Ford's
global 8D program(14) are two examples that serve different needs.

I believe Six Sigma will evolve significantly over the next five to 10 years. Black Belts and Green Belts will not exist as
descriptors of specially trained professionals. For a company to be successful, each person will have to routinely use
the tools and methods of root cause identification and elimination. This lesson, originally learned in the 1980s when
large quality departments were eliminated because they were detrimental to the promotion of quality through
employee commitment, will become a standard of good business management. The future will bring an even more
rigorous approach toward designing processes correctly the first time rather than fixing existing processes.

We must be careful not to relive the problems encountered by the Japanese. Ishikawa stated, "It is true that statistical methods are effective, but we overemphasized their importance. As a result, people either feared or disliked quality control as something very difficult. We overeducated people by giving them sophisticated methods where, at that stage, simple methods would have sufficed."(15) Instead, we need to put the basic tools and the desire to
continuously improve into everyone's hands, with guidance and support from skilled professionals and leadership from senior management.

Maintaining a clear focus on customer wants and continuously improving the processes that provide the product or
service will be the only way to remain competitive. As the methodology reaches further into the educational system, it
will blend into the fabric of generally accepted management practice. The future is bright, mostly because the past
was so generous with lessons and success stories.

Though there will be changes (the certifications and belts may go away, and Six Sigma training may be incorporated into other company training and lose its identity), the basic tenets will remain. Elements such as customer
focus, data driven analysis, statistically based decisions, root cause problem identification and elimination, process
oriented thinking, process control to maintain improvements achieved during the initial improvement efforts and
sharing (leveraging) of results will always be necessary elements of business success.

References

1. "Eli Whitney," http://technology.ksc.nasa.gov/ETEAM/whitney.html.
2. Walter A. Shewhart, Statistical Method From the Viewpoint of Quality Control, U.S. Department of Agriculture, 1939.
3. Louis E. Schultz, Profiles in Quality: Learning From the Masters, Quality Resources, 1994.
4. Joseph M. Juran and A. Blanton Godfrey, Juran's Quality Handbook, McGraw-Hill, 1998.
5. Kaoru Ishikawa, What Is Total Quality Control? The Japanese Way, Prentice-Hall, 1985.
6. Ibid.
7. Juran, Juran's Quality Handbook, see reference 4.
8. Philip B. Crosby, Quality Is Free: The Art of Making Quality Certain, McGraw-Hill, 1979.
9. "Malcolm Baldrige National Quality Award," www.nist.gov/public_affairs/factsheet/baldfaqs.htm.
10. Mikel Harry and Richard Schroeder, Six Sigma: The Breakthrough Management Strategy Revolutionizing the World's Top Corporations, Doubleday, 2000.
11. QS-9000 Quality System Requirements, AIAG, 1994.
12. Richard Schonberger, Japanese Manufacturing Techniques: Nine Hidden Lessons in Simplicity, Free Press, 1982.
13. Armand V. Feigenbaum, Total Quality Control, McGraw-Hill, 1953.
14. Abdul Chowdhry, "To Be a Problem Solver, Be a Classicist," Quality Progress, June 1999.
15. Ishikawa, What Is Total Quality Control? The Japanese Way, see reference 5.

Note: This article is not a complete listing of the many innovators, teachers or practitioners who have guided us in the
development of improved quality tools and methodologies, nor does it detail the many contributions of those who are
included.
