UNIT-I
CONCEPTS OF QUALITY
Definition of Quality:
Characteristics of Quality:
DIMENSIONS OF QUALITY
• Performance
• Aesthetics
• Special features: convenience, high tech
• Safety
• Reliability
• Durability
• Perceived Quality
• Service after sale
IMPORTANCE OF QUALITY
• Understandability
• Completeness
• Maintainability
• Conciseness
• Portability
• Consistency
• Testability
• Usability
• Reliability
• Structuredness
• Efficiency
• Security
There are two principal models of this type: one by Boehm (1978) and one by
McCall (1977). A hierarchical model of software quality is based upon a set of
quality criteria, each of which has a set of measures or metrics associated with
it.
In an early attempt to bridge the gap between users and developers, the criteria
were chosen to reflect users' views as well as developers' priorities.
Quality Models
• Hierarchical models
• McCall's Model
• Boehm's Model
Jim McCall produced this model for the US Air Force, with the intention of
bridging the gap between users and developers. He tried to map the users' view
to the developers' priorities.
Product revision
The product revision perspective identifies quality factors that influence the
ability to change the software product. These factors are maintainability,
flexibility and testability.
Product transition
The product transition perspective identifies quality factors that influence the
ability to adapt the software to new environments. These factors are
portability, reusability and interoperability.
Product operations
The product operations perspective identifies quality factors that influence the
extent to which the software fulfils its specification. These factors are
correctness, reliability, efficiency, integrity and usability.
At the highest level of his model, Boehm defined three primary uses (or basic
software requirements): portability, as-is utility and maintainability.
As-is utility is the extent to which the as-is software can be used (i.e. ease
of use, reliability and efficiency).
These three primary uses had quality factors associated with them, representing
the next level of Boehm's hierarchical model.
Portability, the extent to which the software will work under different
computer configurations (i.e. operating systems, databases etc.).
Reliability, the extent to which the software performs as required, i.e. the
absence of defects.
These quality factors are further broken down into primitive constructs that can
be measured; for example, testability is broken down into accessibility,
communicativeness, structure and self-descriptiveness. As with McCall's quality
model, the intention is to be able to measure the lowest level of the model.
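The hierarchical structure described above (a factor scored through its measurable primitives) can be sketched as a small weighted tree. The primitive names follow Boehm's breakdown of testability; the weights and measured scores are invented purely for illustration.

```python
# Sketch of a hierarchical quality model: a factor's score is the
# weighted mean of its measurable primitives (weights/scores invented).
factors = {
    "testability": {
        "accessibility": (0.25, 0.8),        # (weight, measured score 0..1)
        "communicativeness": (0.25, 0.6),
        "structure": (0.25, 0.9),
        "self-descriptiveness": (0.25, 0.7),
    },
}

def factor_score(primitives):
    """Aggregate primitive measurements into a single factor score."""
    total_weight = sum(w for w, _ in primitives.values())
    return sum(w * s for w, s in primitives.values()) / total_weight

for name, primitives in factors.items():
    print(f"{name}: {factor_score(primitives):.2f}")  # testability: 0.75
```

The weighted mean is only one possible aggregation; the models themselves do not prescribe how primitive measures combine into factor scores.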
The uses made of the system are classed as 'general' or 'as-is', with the as-is
utilities forming a subtype of the general utilities that relates to product
operation.
There are two levels of actual quality criteria, the intermediate level being
further split into primitive characteristics which are amenable to measurement.
This model is based upon a much larger set of criteria than McCall's model,
but retains the same emphasis on technical criteria.
Although only a summary of the two example software factor models has been
given, some comparisons and observations can be made that generalize the
overall quest to characterize software.
Both the McCall and Boehm models follow a similar structure, with a similar
purpose: they attempt to break down the software artifact into constructs
that can be measured. Some quality factors are repeated, for example usability,
portability, efficiency and reliability.
The value of these, and other models, is purely a pragmatic one and not in the
semantics or structural differences.
The extent to which one model allows for an accurate measurement (cost and
benefit) of the software will determine its value.
It is unlikely that one model will emerge as best and there is always likely to be
other models proposed, this reflects the intangible nature of software and the
essential difficulties that this brings. The ISO 9126 represents the ISO's recent
attempt to define a set of useful quality characteristics.
• The models focus on the parts that designers can more readily analyze.
• Flexibility and reusability vs. integrity (inverse): the general, flexible
data structures required for flexible and reusable software increase the
security and protection problem.
• Maintainability vs. flexibility (direct): maintainable code arises from code
that is well structured.
Maturity grids and models are often used as part of a framework for
improvement. Maturity models have been proposed for a range of activities
including quality management, software development, supplier relationships,
R&D effectiveness, product development, innovation, product design, product
development collaboration and product reliability.
The principal idea of the maturity grid is that it describes in a few phrases, the
typical behaviour exhibited by a firm at a number of levels of ‘maturity’, for
each of several aspects of the area under study. This provides the opportunity to
codify what might be regarded as good practice (and bad practice), along with
some intermediate or transitional stages.
This page traces the origins of the maturity grid and reviews methods of
construction and application. It draws on insights gained during development of
maturity grids for product design and for the management of product
development collaborations, and discusses alternative definitions of 'maturity'.
Perhaps the best known derivative from this line of work is the Capability
Maturity Model (CMM) for software. The CMM takes a different approach
however, identifying instead a cumulative set of ‘key process areas’ (KPAs)
which all need to be performed as the maturity level increases. As an alternative
to the complexity of CMM-derived maturity models, others have built on the
Crosby grid approach, where performance of a number of key activities are
described at each of 4 levels.
· Software Quality Assurance
· Software Configuration Management
Typology
What is maturity? The literal meaning of the word maturity is 'ripeness',
conveying the notion of development from some initial state to some more
advanced state. Implicit in this is the notion of evolution or ageing, suggesting
that the subject may pass through a number of intermediate states on the way to
maturity.
The various components which may or may not be present in each model are:
The number of levels is to some extent arbitrary. One key aspect of the
maturity grid approach is that it provides descriptive text for the
characteristic traits of performance at each level. This becomes increasingly
difficult as the number of levels is increased, and adds to the complexity of
the tool, such that descriptive grids typically contain no more than five
levels.
Models also differ in whether each process area may mature independently or
whether all areas at a level must be achieved together; in the terminology of
the SEI CMM, these approaches are referred to as 'continuous' and 'staged'
respectively.
A typology is proposed here which divides maturity models into three basic
groups:
Maturity grids
Hybrids and Likert-like questionnaires
CMM-like models
CMM-like models define a number of key process areas at each level. Each key
process area is organised into five sections called 'common features' -
commitment to perform, ability to perform, activities performed, measurement
& analysis and verifying implementation. The common features specify the key
practices that, when collectively addressed, accomplish the goals of the key
process area. These models are typically rather verbose.
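The staged idea, under which an organisation sits at a given level only when all key process areas at that level and below are met, can be sketched as below. The KPA names are examples drawn from the CMM, but the assessment results are invented for illustration.

```python
# Staged maturity assessment sketch: the organisation's level is the
# highest for which every KPA at that level and below is satisfied.
kpas = {
    2: ["requirements management", "software project planning",
        "software quality assurance", "software configuration management"],
    3: ["organization process definition", "training program"],
}

satisfied = {  # illustrative assessment results, not real data
    "requirements management": True,
    "software project planning": True,
    "software quality assurance": True,
    "software configuration management": True,
    "organization process definition": True,
    "training program": False,
}

def maturity_level(kpas, satisfied):
    level = 1  # level 1 ("initial") has no KPAs of its own
    for lvl in sorted(kpas):
        if all(satisfied.get(k, False) for k in kpas[lvl]):
            level = lvl
        else:
            break  # a gap at this level blocks all higher levels
    return level

print(maturity_level(kpas, satisfied))  # 2: a level-3 KPA is unmet
```

A continuous-representation tool would instead report a separate capability level per process area rather than a single staged level.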
Application
We have developed maturity grids for product design and for the management
of product development collaborations, and these have been applied both
standalone and in workshop settings. The grids were found to be a useful way of
capturing good and not-so-good practice, but in the interests of usability, a
balance was necessary between richness of detail (verbosity) and conciseness
(superficiality).
Other studies have confirmed the value of team discussion around an audit tool.
Discussing their innovation audit tool, Chiesa et al. report that ‘the feedback on
team use was very positive’. Group discussion also plays a part in use of
Cooper’s NewProd system, while Ernst and Teichert recommend using a
workshop in an NPD benchmarking exercise to build consensus and overcome
single-informant bias.
"The degree to which processes and activities are executed following 'good
practice' principles and are defined, managed and repeatable."
******************
UNIT II
What is software quality, and why is it so important that it is pervasive in the Software Engineering
Body of Knowledge? Within an information system, software is a tool, and tools have to be selected
for quality and for appropriateness. That is the role of requirements. But software is more than a
tool. It dictates the performance of the system, and it is therefore important to the system quality.
The notion of “quality” is not as simple as it may seem. For any engineered product, there are many
desired qualities relevant to a particular project, to be discussed and determined at the time that the
product requirements are determined. Qualities may be present or absent, or may be matters of
degree, with tradeoffs among them, with practicality and cost as major considerations. The software
engineer has a responsibility to elicit the system’s quality requirements that may not be explicit at
the outset and to discuss their importance and the difficulty of attaining them. All processes
associated with software quality (e.g. building, checking, improving quality) will be designed with
these in mind and carry costs based on the design. Thus, it is important to have in mind some of the
possible attributes of quality.
The important concept is that the software requirements define the required quality characteristics
of the software and influence the measurement methods and acceptance criteria for assessing these
characteristics.
Software engineers are expected to share a commitment to software quality as part of their culture.
Ethics can play a significant role in software quality, the culture, and the attitudes of software
engineers. The IEEE Computer Society and the ACM have developed a code of ethics and
professional practice based on eight principles to help software engineers reinforce attitudes related
to quality and to the independence of their work.
The cost of quality can be differentiated into prevention cost, appraisal cost, internal failure cost,
and external failure cost. A motivation behind a software project is the desire to create software that
has value, and this value may or may not be quantified as a cost. The customer will have some
maximum cost in mind, in return for which it is expected that the basic purpose of the software will
be fulfilled. The customer may also have some expectation as to the quality of the software.
Sometimes customers may not have thought through the quality issues or their related costs. Is the
characteristic merely decorative, or is it essential to the software? If the answer lies somewhere in
between, as is almost always the case, it is a matter of making the customer a part of the decision
process and fully aware of both costs and benefits. Ideally, most of these decisions will be made in
the software requirements process, but these issues may arise throughout the software life cycle.
There is no definite rule as to how these decisions should be made, but the software engineer
should be able to present quality alternatives and their costs.
INTRODUCTION
The QFD framework can be used for translating actual customer statements and needs
("The voice of the customer") into actions and designs to build and deliver a quality product .
Quality function deployment (QFD) is a “method to transform user demands into design
quality, to deploy the functions forming quality, and to deploy methods for achieving the design
quality into subsystems and component parts, and ultimately to specific elements of the
manufacturing process.”
QFD seeks out both "spoken" and "unspoken" customer requirements and
maximizes "positive" quality (such as ease of use, fun, luxury) that creates
value. Traditional quality systems aim at minimizing negative quality (such as
defects, poor service).
Instead of conventional design processes that focus more on engineering capabilities
and less on customer needs, QFD focuses all product development activities on customer
needs.
QFD makes invisible requirements and strategic advantages visible. This allows a
company to prioritize and deliver on them.
Benefits of QFD include:
Reduced time to market.
Reduction in design changes.
Decreased design and manufacturing costs.
Improved quality.
Increased customer satisfaction.
As with other Japanese management techniques, some problems can occur when
we apply QFD within the western business environment and culture.
Customer perceptions are found by market survey. If the survey is performed
poorly, the whole analysis may end up harming the firm.
The needs and wants of customers can change quickly nowadays. Comprehensive
system- and methodical thinking can make adapting to changed market needs more
complex.
The quality function deployment method has been applied in many industries:
aerospace, manufacturing, software, communications, IT, chemicals and
pharmaceuticals, transportation, defense, government, R&D, food, and services.
QFD SUMMARY
Quality function deployment (QFD) is a “method to transform user demands into design quality, to
deploy the functions forming quality, and to deploy methods for achieving the design quality into
subsystems and component parts, and ultimately to specific elements of the manufacturing
process.” QFD is designed to help planners focus on characteristics of a new or existing product or
service from the viewpoints of market segments, company, or technology-development needs. The
technique yields graphs and matrices.
QFD helps transform customer needs (the voice of the customer [VOC]) into engineering
characteristics (and appropriate test methods) for a product or service, prioritizing each product or
service characteristic while simultaneously setting development targets for product or service.
QFD – HISTORY
[Diagram: the QFD/SQFD process draws on the voice of the customer, statistical
process control, and value engineering.]
This diagram represents the QFD House of Quality concepts applied to requirements engineering, or
SQFD. “SQFD is a front-end requirements elicitation technique, adaptable to any software
engineering methodology, that quantifiably solicits and defines critical customer requirements”
[Haag, 1996].
Step 2 – Customer requirements are converted to technical and measurable
statements and recorded.
Step 3 – Customers identify the correlations between technical requirements and
their own verbatim statements of requirement.
Step 5 – Weights and priorities are applied to determine the technical product
specification priorities (this is a calculation).
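The calculation in step 5 is a weighted sum: each technical requirement's priority is the customer importance multiplied by the relationship weight, summed over all customer requirements. The requirements and importance ratings below are invented for illustration; the 9/3/1 weighting follows the usual QFD convention.

```python
# QFD step-5 priority calculation (all data illustrative).
importance = {"easy to learn": 5, "fast response": 3}

# Relationship weights use the conventional 9 (strong), 3 (medium), 1 (weak).
relationships = {
    "easy to learn": {"online help": 9, "response time < 1s": 1},
    "fast response": {"response time < 1s": 9},
}

def technical_priorities(importance, relationships):
    """Priority per technical requirement = sum(importance x weight)."""
    priorities = {}
    for cust, techs in relationships.items():
        for tech, weight in techs.items():
            priorities[tech] = priorities.get(tech, 0) + importance[cust] * weight
    return priorities

print(technical_priorities(importance, relationships))
# {'online help': 45, 'response time < 1s': 32}
```

The resulting absolute weights are what the House of Quality normalizes into relative weights and percentages.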
QFD MATRIX
[House of Quality matrix. Relationship weights between customer requirements
(WHATs) and technical descriptors (HOWs): +9 strong, +3 medium, +1 weak. Roof
correlations (HOWs vs. HOWs): +9 strong positive, +3 positive, -3 negative,
-9 strong negative. The matrix records primary and secondary customer
requirements with customer importance ratings; a customer competitive
assessment (our product vs. competitors A and B); prioritized customer
requirements (target value, scale-up factor, sales point, absolute weight and
percent); a technical competitive assessment; and prioritized technical
descriptors (degree of technical difficulty, target value, absolute weight,
relative weight and percent).]
QFD PROCESS
[Four-phase QFD process diagram; in each phase the WHATs are mapped to HOWs
and HOW MUCH. Phase I, Product Planning: customer requirements drive design
requirements. Phase II, Part Development: design requirements drive part
quality characteristics. Phase III, Process Planning: part quality
characteristics drive key process operations. Phase IV, Production Planning:
key process operations drive production requirements, leading to production
launch.]
The quality characteristics are used as the targets for validation (external quality) and
verification (internal quality) at the various stages of development.
They are refined into sub-characteristics, until measurable attributes or
properties are obtained. In this context, a metric or measure is defined as a
measurement method, and measurement means using a metric or measure to assign
a value.
In order to monitor and control software quality during the development
process, the external quality requirements are translated or transferred into
requirements on the intermediate products obtained from development activities.
The translation and selection of the attributes is a non-trivial activity,
depending on the stakeholder's personal experience, unless the organization
provides an infrastructure to collect and analyze previous experience from
completed projects.
There are many different kinds of requirements. One way of categorizing them is described as the
FURPS+ model [GRA92], using the acronym FURPS to describe the major categories of requirements
with subcategories as shown below.
Functionality
Usability
Reliability
Performance
Supportability
Design constraints
Implementation requirements
Interface requirements
Physical requirements.
Functional requirements specify actions that a system must be able to perform, without taking
physical constraints into consideration. These are often best described in a use-case model and in
use cases. Functional requirements thus specify the input and output behavior of a system.
Requirements that are not functional, such as the ones listed below, are
sometimes called non-functional requirements. Many requirements are
non-functional, and describe only attributes of the system or attributes of the
system environment. Although some of these may be captured in use cases, many
are documented separately in Supplementary Specifications.
A complete definition of the software requirements, use cases, and Supplementary Specifications
may be packaged together to define a Software Requirements Specification (SRS) for a particular
"feature" or other subsystem grouping.
Functionality
feature sets
capabilities
security
Usability
human factors
aesthetics
consistency in the user interface
online and context-sensitive help
wizards and agents
user documentation
training materials
Reliability
Performance
speed
efficiency
availability
accuracy
throughput
response time
recovery time
resource usage
Supportability
testability
extensibility
adaptability
maintainability
compatibility
configurability
serviceability
installability
localizability (internationalization)
FURPS
FURPS is an acronym representing a model for classifying software quality
attributes (functional and non-functional requirements). There are many
definitions of these software quality attributes, but a common one is the
FURPS+ model, which was developed by Robert Grady at Hewlett-Packard.
Functionality
The F in the FURPS+ acronym represents all the system-wide functional requirements
that we would expect to see described.
These usually represent the main product features that are familiar within the business
domain of the solution being developed.
For example, order processing is very natural for someone to describe if you are
developing an order processing system.
The functional requirements can also be very technically oriented.
Functional requirements that you may consider to be also architecturally significant
system-wide functional requirements may include auditing, licensing, localization,
mail, online help, printing, reporting, security, system management, or workflow.
Each of these may represent functionality of the system being developed and they are
each a system-wide functional requirement.
Usability
Usability includes looking at, capturing, and stating requirements based around user
interface issues, things such as accessibility, interface aesthetics, and consistency
within the user interface.
Reliability
Reliability includes aspects such as availability, accuracy, and recoverability, for
example, computations, or recoverability of the system from shut-down failure.
Performance
Performance involves things such as throughput of information through the system,
system response time (which also relates to usability), recovery time, and startup time.
Supportability
Finally, we tend to include a section called Supportability, where we specify a number
of other requirements such as testability, adaptability, maintainability, compatibility,
configurability, installability, scalability, localizability, and so on.
These categories can be used as both product requirements as well as in the assessment of product
quality.
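One simple way to use the categories in an assessment is to tag each requirement with its FURPS category and then review coverage category by category. The requirement texts below are invented examples, and this tagging scheme is an illustrative sketch rather than a prescribed FURPS procedure.

```python
# Tagging requirements with FURPS categories to check coverage
# (requirement texts are invented examples).
FURPS = ("functionality", "usability", "reliability",
         "performance", "supportability")

requirements = [
    ("process customer orders", "functionality"),
    ("95% of responses under 1 second", "performance"),
    ("available 99.9% of the time", "reliability"),
]

def coverage(requirements):
    """Report which FURPS categories have no requirement yet."""
    covered = {cat for _, cat in requirements}
    return [cat for cat in FURPS if cat not in covered]

print(coverage(requirements))  # ['usability', 'supportability']
```

Gaps flagged this way prompt the elicitation questions the section describes: have usability and supportability needs simply not been stated yet?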
3. Risks: when might a requirement not be satisfied? What can be done to reduce this risk?
FURPS classification
FURPS+
Functionality
Usability
Reliability
Performance
Supportability
Design requirement
A design requirement, often called a design constraint, specifies or constrains the design of
a system.
Implementation requirement
required standards
implementation languages
policies for database integrity
resource limits
operation environments
Interface requirement
An interface requirement specifies an external item with which the system must
interact, or constraints on formats, timings, or other factors used in such an
interaction.
Physical requirement
A physical constraint imposed on the hardware used to house the system; for example,
shape, size and weight.
This type of requirement can be used to represent hardware requirements, such as the physical
network configurations required.
Gilb Approach
Break down high-level abstract concepts to more concrete ideas that can be measured.
For Example:
Reliability
o Number of errors in system over a period
o System up-time
This approach is tailored to the product being developed, so is more focussed and relevant to
product needs.
However, because each product is different, and will have different criteria, it's very difficult
to compare the quality of different products.
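The Gilb breakdown lends itself directly to computation: once "reliability" is expressed as error counts and system up-time, the measures are simple arithmetic. All figures below are invented for illustration.

```python
# Concrete measures for the abstract attribute "reliability"
# (all figures invented for illustration).
errors_found = 12          # errors logged over the observation period
period_hours = 720         # a 30-day observation period
downtime_hours = 3.6       # total time the system was unavailable

error_rate = errors_found / period_hours          # errors per hour
uptime_pct = 100 * (period_hours - downtime_hours) / period_hours

print(f"error rate: {error_rate:.4f}/hour, up-time: {uptime_pct:.2f}%")
```

Because each product defines its own measures like these, the numbers are meaningful within one product but, as the text notes, hard to compare across products.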
UNIT III
QUALITY CONTROL
Quality control
Essentially, quality control involves the examination of a product, service, or process for
certain minimum levels of quality. The goal of a quality control team is to identify products
or services that do not meet a company’s specified standards of quality. If a problem is
identified, the job of a quality control team or professional may involve stopping production
temporarily. Depending on the particular service or product, as well as the type of problem
identified, production or implementation may not cease entirely.
In order to implement an effective QC program, an enterprise must first decide which specific
standards the product or service must meet. Then the extent of QC actions must be
determined (for example, the percentage of units to be tested from each lot). Next, real-world
data must be collected (for example, the percentage of units that fail) and the results reported
to management personnel. After this, corrective action must be decided upon and taken (for
example, defective units must be repaired or rejected and poor service repeated at no charge
until the customer is satisfied). If too many unit failures or instances of poor service occur, a
plan must be devised to improve the production or service process and then that plan must be
put into action. Finally, the QC process must be ongoing to ensure that remedial efforts, if
required, have produced satisfactory results and to immediately detect recurrences or new
instances of trouble.
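The control loop above can be sketched as a simple per-lot accept/reject decision on sampled units. The sample fraction and failure threshold below are arbitrary illustrative choices, not standard acceptance-sampling values.

```python
import random

# Sketch of a per-lot QC check: test a fixed fraction of units and flag
# the lot for corrective action if too many fail. Thresholds are
# illustrative assumptions only.
SAMPLE_FRACTION = 0.10    # test 10% of each lot
MAX_FAILURE_RATE = 0.02   # reject the lot above 2% failures in the sample

def inspect_lot(units, is_defective, rng=random):
    sample_size = max(1, int(len(units) * SAMPLE_FRACTION))
    sample = rng.sample(units, sample_size)
    failures = sum(1 for u in sample if is_defective(u))
    failure_rate = failures / sample_size
    return {"tested": sample_size,
            "failure_rate": failure_rate,
            "accepted": failure_rate <= MAX_FAILURE_RATE}

# Usage: a lot of 500 units where ids divisible by 100 are defective.
result = inspect_lot(list(range(500)), lambda u: u % 100 == 0,
                     rng=random.Random(0))
print(result)
```

The reported failure rate is the data that management reviews, and a rejected lot triggers the corrective-action step the paragraph describes.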
Quality control is a process by which entities review the quality of all
factors involved in production. This approach places an emphasis on three
aspects:
1. Elements such as controls, job management, defined and well-managed
processes, performance and integrity criteria, and identification of records
2. Competence, such as knowledge, skills, experience, and qualifications
3. Soft elements, such as personnel integrity, confidence, organizational
culture, motivation, team spirit, and quality relationships.
The quality of the outputs is at risk if any of these three aspects is
deficient in any way.
In project management, quality control requires the project manager and the
project team to inspect the accomplished work to ensure its alignment with the
project scope. In practice, projects typically have a dedicated quality control
team which focuses on this area.
QUALITY ASSURANCE
Quality assurance, or QA for short, is the systematic monitoring and evaluation of the
various aspects of a project, service or facility to maximize the probability that minimum
standards of quality are being attained by the production process. QA cannot absolutely
guarantee the production of quality products.
Two principles included in QA are: "Fit for purpose" - the product should be suitable for the
intended purpose; and "Right first time" - mistakes should be eliminated. QA includes
regulation of the quality of raw materials, assemblies, products and components, services
related to production, and management, production and inspection processes.
Quality is determined by the product users, clients or customers, not by society in general. It
is not the same as 'expensive' or 'high quality'. Low priced products can be considered as
having high quality if the product users determine them as such.
There are many forms of QA processes, of varying scope and depth. The application of a
particular process is often customized to the production process.
Failure testing
A valuable process is failure (stress) testing: operating a product until it
fails, often under increasing stresses such as vibration, temperature, or
humidity, to expose unanticipated weaknesses in the product.
Statistical control
Many organizations use statistical process control to bring the organization to
Six Sigma levels of quality; in other words, so that the likelihood of an
unexpected failure is confined to six standard deviations on the normal
distribution. This probability is less than four one-millionths. Items
controlled often include clerical tasks such as order-entry as well as
conventional manufacturing tasks.
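The "four one-millionths" figure can be checked against the normal distribution: the conventional Six Sigma defect rate of about 3.4 per million corresponds to a 4.5-sigma one-sided tail, i.e. six sigma minus the customary 1.5-sigma allowance for long-term process shift.

```python
from statistics import NormalDist

z = NormalDist()  # standard normal distribution

# One-sided tail beyond 4.5 sigma: the conventional Six Sigma defect
# rate, allowing for the customary 1.5-sigma long-term process shift.
defects_per_million = (1 - z.cdf(6.0 - 1.5)) * 1_000_000
print(f"{defects_per_million:.2f} defects per million")  # about 3.40

# A literal six-sigma tail, with no shift, is far smaller:
print(f"{(1 - z.cdf(6.0)) * 1e9:.2f} per billion")
```

Without the 1.5-sigma shift convention, a six-sigma tail would be roughly one failure per billion, which is why the quoted per-million figure assumes the shifted definition.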
The quality of products is dependent upon that of the participating constituents, some of
which are sustainable and effectively controlled while others are not. The process(es) which
are managed with QA pertain to Total Quality Management.
If the specification does not reflect the true quality requirements, the product's quality cannot
be guaranteed. For instance, the parameters for a pressure vessel should cover not only the
material and dimensions but operating, environmental, safety, reliability and maintainability
requirements.
QA is not limited to the manufacturing, and can be applied to any business or non-business
activity:
Design work
Administrative services
Consulting
Banking
Insurance
Computer software development
Retailing
Transportation
Education
Translation
QA comprises a quality improvement process, which is generic in the sense that
it can be applied to any of these activities, and it establishes a behavior
pattern which supports the achievement of quality. This in turn is supported by
quality management practices, which can include a number of business systems
and which are usually specific to the activities of the business unit
concerned. In manufacturing and construction activities, these business
practices can be equated to the models for quality assurance defined by the
International Standards contained in the ISO 9000 series and the associated
specifications for quality systems. Under the earlier system of company
quality, the work carried out was shop-floor inspection, which did not reveal
the major quality problems; this led to quality assurance and then to total
quality control, which have come into being more recently.
Quality management can be considered to have four main components: quality planning,
quality control, quality assurance and quality improvement. Quality management is focused
not only on product/service quality, but also the means to achieve it. Quality management
55
therefore uses quality assurance and control of processes as well as products to achieve more
consistent quality.
Quality control: achieving results that satisfy the requirements for quality.
Its scope covers all activities that affect the total quality-related business
results of the organization.
Quality assurance: demonstrating that the requirements for quality have been
(and can be) achieved. Its scope of demonstration covers activities that
directly affect quality-related process and product results.
TIME MANAGEMENT
Time management is the act or process of exercising conscious control over the amount of
time spent on specific activities, especially to increase efficiency or productivity. Time
management may be aided by a range of skills, tools, and techniques used to manage time
when accomplishing specific tasks, projects and goals. This set encompasses a wide scope of
activities, and these include planning, allocating, setting goals, delegation, analysis of time
spent, monitoring, organizing, scheduling, and prioritizing. Initially, time management
referred to just business or work activities, but eventually the term broadened to include
personal activities as well. A time management system is a designed combination of
processes, tools, techniques, and methods. Usually time management is a necessity in any
project development as it determines the project completion time and scope.
Time management has been considered a subset of different concepts, such as:
Project management: time management has been identified as one of the core
functions of project management.
Attention management: attention management relates to the management of
cognitive resources, and in particular the time that humans allocate their mind
(and organizations the minds of their employees) to particular activities.
Time management also covers how to eliminate tasks that don't provide the individual or
organization value.
One goal is to help yourself become aware of how you use your time
as one resource in organizing, prioritizing, and succeeding in your
studies in the context of competing activities of friends, work, family, etc.
As we go through each strategy, jot down an idea of what each will look like for you:
example, place blocks of time when you are most productive: are you a morning
person or a night owl?
Jot down one best time block you can study. How long is it? What makes for
a good break for you? Can you control the activity and return to your studies?
Dedicated study spaces
Determine a place free from distraction (no cell phone or text messaging!) where you can
maximize your concentration and be free of the distractions that friends or hobbies can
bring! You should also have a back-up space that you can escape to, like the
library, departmental study center, even a coffee shop where you can be anonymous. A
change of venue may also bring extra resources.
What is the best study space you can think of? What is another?
Weekly reviews
Weekly reviews and updates are also an important strategy. Each week, perhaps
on Sunday night, review your assignments, your notes, and your calendar. Be
mindful that as deadlines and exams approach, your weekly routine must adapt to
them!
What is the best time in a week you can review?
Prioritize your assignments
When studying, get in the habit of beginning with the most difficult subject or task. You’ll
be fresh, and have more energy to take them on when you are at your best. For more difficult
courses of study, try to be flexible: for example, build in “reaction time” when you can get
feedback on assignments before they are due.
What subject has always caused you problems?
Postpone unnecessary activities until the work is done
This can be the most difficult challenge of time management. As learners we often meet
unexpected opportunities that look appealing but then result in poor performance on a test, on a
paper, or in preparation for a task. Distracting activities will be more enjoyable later, without
the pressure of the test or assignment hanging over your head. Think in terms of pride of
accomplishment. Instead of saying “no”, learn to say “later”.
What is one distraction that causes you to stop studying?
Identify resources to help you
Are there tutors? An “expert friend”? Have you tried a keyword search on the Internet to get
better explanations? Are there specialists in the library that can point you to
resources? What about professionals and professional organizations? Using outside
resources can save you time and energy, and solve problems.
Write down three examples for that difficult subject above.
Be as specific as possible.
Effective aids:
Create a simple “To Do” list
This simple program will help you identify a few items, the reason for doing them, and a
timeline for getting them done; then print the simple list and post it as a
reminder.
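The aid described above can be sketched as a short program. The field names and sample entries below are illustrative assumptions, not part of the original notes:

```python
from dataclasses import dataclass

@dataclass
class TodoItem:
    task: str    # what to do
    reason: str  # why it matters
    due: str     # timeline for getting it done

def print_todo_list(items):
    """Print a simple, postable reminder list."""
    for i, item in enumerate(items, start=1):
        print(f"{i}. {item.task} (why: {item.reason}, due: {item.due})")

items = [
    TodoItem("Review lecture notes", "exam preparation", "Friday"),
    TodoItem("Draft project report", "deadline next week", "Monday"),
]
print_todo_list(items)
```

Printed and posted, such a list serves the same reminder purpose as the paper version.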
Daily/weekly planner
Write down appointments, classes, and meetings on a chronological log book or chart.
If you are more visual, sketch out your schedule
First thing in the morning, check what's ahead for the day
always go to sleep knowing you're prepared for tomorrow
Long term planner
Use a monthly chart so that you can plan ahead.
Long term planners will also serve as a reminder to constructively plan time for
yourself
No matter how organized we are, there are always only 24 hours in a day. Time doesn't
change. All we can actually manage is ourselves and what we do with the time that we have.
Many of us are prey to time-wasters that steal time we could be using much more
productively. What are your time-bandits? Do you spend too much time 'Net surfing, reading
email, or making personal calls? Tracking Daily Activities explains how to track your
activities so you can form an accurate picture of what you actually do, the first step to effective
time management.
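Tracking daily activities can be sketched as a simple tally. The activity names and minutes below are invented for illustration:

```python
from collections import defaultdict

def tally_activities(log):
    """Sum minutes spent per activity from (activity, minutes) entries."""
    totals = defaultdict(int)
    for activity, minutes in log:
        totals[activity] += minutes
    return dict(totals)

# One hypothetical day's log of activities and minutes spent.
day_log = [("email", 45), ("net surfing", 30), ("study", 90), ("email", 25)]
totals = tally_activities(day_log)

# Sort descending to expose the biggest time-bandits first.
for activity, minutes in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(activity, minutes)
```

A week of such tallies gives the accurate picture of time use that the text describes.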
Remember, the focus of time management is actually changing your behaviors, not changing
time. A good place to start is by eliminating your personal time-wasters. For one week, for
example, set a goal that you're not going to take personal phone calls while you're working.
(See Set Specific Goals for help with goal setting.) For a fun look at behaviors that can
interfere with successful time management, see my article Time Management Personality
Types. Find out if you're a Fireman, an Aquarian or a Chatty Kathy!
Think of this as an extension of time management tip # 3. The objective is to change your
behaviors over time to achieve whatever general goal you've set for yourself, such as
increasing your productivity or decreasing your stress. So you need to not only set your
specific goals, but track them over time to see whether or not you're accomplishing them.
Whether it's a Day-Timer or a software program, the first step to physically managing your
time is to know where it's going now and planning how you're going to spend your time in
the future. A software program such as Outlook, for instance, lets you schedule events easily
and can be set to remind you of events in advance, making your time management easier.
6) Prioritize ruthlessly.
You should start each day with a time management session prioritizing the tasks for that day
and setting your performance benchmark. If you have 20 tasks for a given day, how many of
them do you truly need to accomplish? For more on daily planning and prioritizing daily
tasks, see Start The Day Right With Daily Planning.
No matter how small your business is, there's no need for you to be a one-person show. For
effective time management, you need to let other people carry some of the load. Determining
Your Personal ROI explains two ways to pinpoint which tasks you'd be better off delegating
or outsourcing, while Decide To Delegate provides tips for actually getting on with the job of
delegating.
While crises will arise, you'll be much more productive if you can follow routines most of the
time.
For instance, reading and answering email can consume your whole day if you let it. Instead,
set a limit of one hour a day for this task and stick to it.
Are you wasting a lot of time looking for files on your computer? Take the time to organize a
file management system. Is your filing system slowing you down? Redo it, so it's organized
to the point that you can quickly lay your hands on what you need. You'll find more
information about setting up filing systems and handling data efficiently in my Data
Management library.
From client meetings to dentist appointments, it's impossible to avoid waiting for someone or
something. But you don't need to just sit there and twiddle your thumbs. Always take
something to do with you, such as a report you need to read, a checkbook that needs to be
balanced, or just a blank pad of paper that you can use to plan your next marketing campaign.
Technology makes it easy to work wherever you are; your PDA and/or cell phone will help
you stay connected.
Everyone is looking for ways to improve time management. Whether it is the management of
an organization looking for business improvement or an individual looking for ways to better
spend their time, time management is important to both.
Better time management can be achieved if goals have been set and then all future work is
prioritized based on how it moves the individual or organization towards meeting the goals.
Many time management priority methods exist. The most popular ones are the A, B, C
method and number ranking according to order in which tasks should be done. Both methods
encourage looking at things that move one closer to meeting important goals as the highest
priority to set. Things not related to goals would be lower priority. Here is a description of the
three priorities and how they relate to general time management practices.
• High priority items (rank A or 1) are those tasks, projects, and appointments that yield the
greatest results in accomplishing individual or organizational goals. For individuals, this
could be related to goals of career advancement or small business growth and ties directly to
promises made to customers or co-workers, or it could be unrelated to the job such as more
family or leisure time goals and promises. For organizations, this would likely be related to
increased profits, new business, key projects, and other strategic business items. High priority
items should be the first work planned for each day and blocked into a time that falls within
the individual's peak performance period.
• Medium priority items (rank B or 2) are those standard daily, weekly, or monthly tasks,
projects, and appointments that are part of the work that must be done in order to maintain the
status quo. For individuals, this would relate to getting their standard work done, and might
mean going to scheduled family or outside group activities as expected. For organizations,
this is every day business items like project meetings, cost reduction, as well as regular
administrative, sales, and manufacturing work. Medium priority work is scheduled after or
between high-priority functions. Because this work does not require high levels of
concentration, it can be done during non-peak periods, as long as it is completed on schedule.
• Low priority items (rank C or 3) are those tasks, projects, and potential appointments that
are nice-to-do, can be put off until another time, and will not directly affect goals or standard
work practices. For individuals, this might mean learning a new skill or starting a new hobby
that may seem like good ideas but are not directly related to most desirable personal goals.
For organizations, this could be purging old files or evaluating existing work processes that
currently run smoothly enough.
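The A, B, C method above amounts to sorting the day's tasks by rank before scheduling them. A minimal sketch (the task names are invented for illustration):

```python
# Rank order for the A, B, C method: A (high) first, then B, then C.
PRIORITY_ORDER = {"A": 0, "B": 1, "C": 2}

def plan_day(tasks):
    """Return (task, rank) pairs sorted so high-priority work comes first."""
    return sorted(tasks, key=lambda t: PRIORITY_ORDER[t[1]])

tasks = [
    ("Purge old files", "C"),
    ("Finish key project deliverable", "A"),
    ("Weekly status meeting", "B"),
]
for task, rank in plan_day(tasks):
    print(rank, task)
```

The A items that come out first would be blocked into the individual's peak performance period, as the text recommends.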
It does not matter whether one uses time management priority methods like A, B, C or numbering,
or simply marks tasks high, medium, and low with a personalized coding or coloring method. It only
matters that the practice uses no more than three priorities in moving closer to meeting important
goals. More than three priority levels can bog the time manager down in the process of prioritizing
rather than doing valuable work.
Whether for the management of an organization or for an individual looking to better utilize
their time, time management matters. Anyone looking for ways to improve time
management will benefit from establishing and following a priority-setting method for
completing work towards accomplishing goals.
Let’s start with a picture, which is always nice and polite. Take a look at the illustration
below:
The relationship between life management, personal productivity, and time management.
Before we start grinding more deeply on time management, let’s take a quick look at the two
layers below it.
Question: How does time management relate to life management? Answer: This way.
Life management
All people manage their life in one way or another. We have split life management into five
areas: personal productivity, physical training, social relationships, financial management,
and eating habits.
The split above reflects our view that we have a head (A), a body (B), and in addition
an environment (C) in which we live and which we influence.
As we all know, we have to take care of our body. At its core, this is done by some kind of
physical exercise, and by striving for balanced eating habits. If there isn’t enough physical
exercise, our body may suffer. If we eat poorly for a prolonged period of time, our body
may suffer. If we don’t focus on these areas at least somewhat, we seldom do them right.
Sooner or later our doctor will also verify this, as we develop our modern western lifestyle
diseases.
Then we have our environment. We interact with our surroundings via social interactions.
This might mean meeting with friends, living everyday life with our partner, using Facebook,
or chatting to our local store clerk while looking for the right credit card. Workplaces are also
an important place to mingle with people.
People who are too much alone often become unhappy. It is important to plan enough time
with your friends. Sports often combine both physical exercise and socializing.
As our society is built today, money is important. This is why we need financial skills too.
There exists a plethora of advisors and methods for this. Some are rip-offs; others could
benefit you. Only those in the know survive in the long run.
Last but not least we have personal productivity, which is key for doing and achieving.
Personal productivity
As we have a separate post on the subject, we leave personal productivity here, and go on to
the beef. Here it comes, time management, give it to me baby!
Time Management
1. Managing goals is important because if we don’t know what we are striving for, we are lost like
elks in thick fog. Yes, we may still be doing all kinds of things with our sleeves rolled up and
beads of sweat on our foreheads, but if we are doing the wrong thing, we are not getting
towards our true goals. In that case, we might as well lie on the couch chilling. The instant
we define a goal, we know our direction. The doing then automatically starts to focus in the
right direction.
2. Managing tasks is also needed, as we always end up with many goals and too many things
to do. That is why we need a system for storing these tasks, and as a place to jot down our
“AHAA! This is what I should do” moments. If we don’t manage our tasks, we forget
important things, and miss deadlines.
5. Managing procrastination. We all know the situations when we just can’t bring ourselves
to start something. This is called procrastination, and it is a complicated, yet very important
subject. The reasons for procrastination are often rooted deep in our souls, and some people
have a stronger tendency toward it than others. That is why you can spot these people at
work: they are the ones who only get going just before important
deadlines.
The good news is that there exist methods for managing procrastination. The tips for
beating procrastination are basic time management skills. Any good time manager should
have these skills in their skill set, and use them when needed (I know I have to, sometimes).
6. Follow-up systems. Finally, we need some kind of follow-up systems for ensuring that
things get finished. Many times, especially in work life, projects are started, and in the
beginning they are also closely monitored. A little later, new projects are started. The old
projects are forgotten. Most often, they are left undone. For good. With proper follow-up
systems this can be avoided. At least the old projects should be closed, or be put on hold
consciously.
There exist many ways to practice time management. The most classic is to take a
piece of paper and write down a list of things to accomplish “today”.
(A table listing time management techniques/systems and their descriptions appeared here.)
The important thing to remember when talking about time management techniques is that
everyone develops their own favorites. There is no right or wrong system for managing time.
One has to use what works for them.
By using proper time management skills for different purposes, you should be able to
maximize your free time. It may sound almost contradictory, but it is perfectly fine to be
motivated to learn time management precisely in order to avoid overly long work days (at
least over longer periods).
Life is a whole experience. We must not focus on only one area of it, which often ends
up being work. By using time management tools and techniques, we can get more time for
doing all the other things we love, too.
Luckily, skills in personal productivity and time management really help. Personally, I could
not live without these skills anymore.
The ISO 9000 family of standards relates to quality management systems and is designed to
help organizations ensure they meet the needs of customers and other stakeholders
(Poksinska et al.). The standards are published by ISO, the International Organization for
Standardization, and are available through national standards bodies.
ISO 9000 deals with the fundamentals of quality management systems (Tsim et al, 2002 [2] ),
including the eight management principles (Beattie and Sohal, 1999;[3] Tsim et al, 2002 [2])
on which the family of standards is based. ISO 9001 deals with the requirements that
organizations wishing to meet the standard have to meet.
Independent confirmation that organizations meet the requirements of ISO 9001 may be
obtained from third party certification bodies. Over a million organizations worldwide [4] are
independently certified making ISO 9001 one of the most widely used management tools in
the world today.
The global adoption of ISO 9001 may be attributable to a number of factors. A number of
major purchasers require their suppliers to hold ISO 9001 certification. In addition to several
stakeholder benefits, a number of studies have identified significant financial benefits for
organizations certified to ISO 9001, among them:
- Quality improvements
Background
ISO 9001 was first published in 1987. It was based on the BS 5750 series of standards from
BSI that were proposed to ISO in 1979. Its history can however be traced back some twenty
years before that when the Department of Defense published its MIL-Q-9858 standard in
1959. MIL-Q-9858 was revised into the NATO AQAP series of standards in 1969, which in
turn were revised into the BS 5179 series of guidance standards published in 1974, and
finally revised into being the BS 5750 series of requirements standards in 1979, before being
submitted to ISO.
BSI has been certifying organizations for their quality management systems since 1978. Its
first certification (FM 00001) is still extant and held by the Tarmac company, a successor to
the original company which held this certificate. Today BSI claims to certify organizations at
nearly 70,000 sites globally. The development of the ISO 9000 series is shown in the diagram
to the right.
Certification
ISO does not itself certify organizations. Many countries have formed accreditation bodies to
authorize certification bodies, which audit organizations applying for ISO 9001 compliance
certification. Although commonly referred to as ISO 9000:2000 certification, the actual
standard to which an organization’s quality management system can be certified is ISO 9001:2000.
The applying organization is assessed based on an extensive sample of its sites, functions,
products, services and processes; a list of problems ("action requests" or "non-compliance")
is made known to the management. If there are no major problems on this list, or after it
receives a satisfactory improvement plan from the management showing how any problems
will be resolved, the certification body will issue an ISO 9001 certificate for each
geographical site it has visited.
An ISO certificate is not a once-and-for-all award, but must be renewed at regular intervals
recommended by the certification body, usually around three years. There are no grades of
competence within ISO 9001: either a company is certified (meaning that it is committed to
the method and model of quality management described in the standard), or it is not. In this
respect, it contrasts with measurement-based quality systems such as the Capability Maturity
Model
Auditing
The aim is a continual process of review and assessment, to verify that the system is working
as it's supposed to, find out where it can improve and to correct or prevent problems
identified. It is considered healthier for internal auditors to audit outside their usual
management line, so as to bring a degree of independence to their judgments.
Under the 1994 standard, the auditing process could be adequately addressed by performing
"compliance auditing".
The 2000 standard uses a different approach. Auditors are expected to go beyond mere
auditing for rote "compliance" by focusing on risk, status and importance. This means they
are expected to make more judgments on what is effective, rather than merely adhering to
what is formally prescribed. This marks the key difference from the previous standard.
Advantages
It is widely acknowledged that proper quality management improves business, often having a
positive effect on investment, market share, sales growth, sales margins, competitive
advantage, and avoidance of litigation. The quality principles in ISO 9000:2000 are also
sound, according to Wade, and also to Barnes, who says that "ISO 9000 guidelines provide a
comprehensive model for quality management systems that can make any company
competitive." Implementing ISO often gives the following advantages:
The Seven Basic Tools of Quality is a designation given to a fixed set of graphical techniques
identified as being most helpful in troubleshooting issues related to quality. They are called
basic because they are suitable for people with little formal training in statistics and because
they can be used to solve the vast majority of quality-related issues.
The Seven Basic Tools stand in contrast with more advanced statistical methods such as
survey sampling, acceptance sampling, statistical hypothesis testing, design of experiments,
multivariate analysis, and various methods developed in the field of operations research.
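One of the Seven Basic Tools is the Pareto chart. As an illustrative sketch of the calculation behind it (the defect categories and counts below are invented), one ranks problem categories by frequency and accumulates percentages to find the "vital few":

```python
def pareto_analysis(defect_counts):
    """Return categories sorted by frequency with cumulative percentages."""
    total = sum(defect_counts.values())
    result = []
    cumulative = 0
    for category, count in sorted(defect_counts.items(), key=lambda kv: -kv[1]):
        cumulative += count
        result.append((category, count, round(100 * cumulative / total, 1)))
    return result

# Hypothetical defect tally from an inspection sheet.
defects = {"scratches": 50, "dents": 30, "misalignment": 15, "other": 5}
for category, count, cum_pct in pareto_analysis(defects):
    print(f"{category:14} {count:3} {cum_pct:5.1f}%")
```

Here the top two categories account for 80% of defects, which is exactly the kind of insight the tool is designed to surface for people with little statistical training.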
UNIT-IV
A manager who fails to understand that software engineering work is an intensely human endeavor will
never have success in project management. A manager who fails to encourage comprehensive customer
communication early in the evolution of a project risks building an elegant solution for the wrong problem.
The manager who pays little attention to the process runs the risk of inserting competent technical
methods and tools into a vacuum. The manager who embarks without a solid project plan jeopardizes the
success of the product.
The People
• In fact, the “people factor” is so important that the Software Engineering Institute has
developed a people management capability maturity model (PM-CMM), “to enhance the readiness
of software organizations to undertake increasingly complex applications by helping to attract,
grow, motivate, deploy, and retain the talent needed to improve their software development
capability”.
• The people management maturity model defines the following key practice areas for software
people: recruiting, selection, performance management, training, Compensation, career
development, organization and work design, and team/culture development.
• Organizations that achieve high levels of maturity in the people management area have a higher
likelihood of implementing effective software engineering practices.
The Product
• Before a project can be planned, product objectives and scope should be established, alternative
solutions should be considered, and technical and management constraints should be identified.
Without this information, it is impossible to define reasonable (and accurate) estimates of the cost,
an effective assessment of risk, a realistic breakdown of project tasks, or a manageable project
schedule that provides a meaningful indication of progress.
• The software developer and customer must meet to define product objectives and scope. In many
cases, this activity begins as part of the system engineering or business process engineering and
continues as the first step in software requirements analysis.
• Objectives identify the overall goals for the product (from the customer’s point of view) without considering how these goals will be achieved.
• Scope identifies the primary data, functions and behaviors that characterize the product, and more
important, attempts to bound these characteristics in a quantitative manner.
Once the product objectives and scope are understood, alternative solutions are considered.
Although very little detail is discussed, the alternatives enable managers and practitioners to
select a ‘best’ approach, given the constraints imposed by delivery deadlines, budgetary
restrictions, personnel availability, technical interfaces, and myriad other factors.
The Process
• A number of different task sets (tasks, milestones, work products, and quality assurance points)
enable the framework activities to be adapted to the characteristics of the software project and the
requirements of the project team.
The Project
• And yet, we still struggle. In 1998, industry data indicated that 26 percent of software projects
failed outright and 46 percent experienced cost and schedule overruns.
THE PEOPLE:
The Players
The software process is populated by players who can be categorized into one of five
constituencies:
1. Senior managers, who define the business issues that often have significant influence on the project.
2. Project (technical) managers, who must plan, motivate, organize, and control the practitioners who do software work.
3. Practitioners, who deliver the technical skills necessary to engineer a product or application.
4. Customers, who specify the requirements for the software to be engineered, and other stakeholders who have a peripheral interest in the outcome.
5. End-users, who interact with the software once it is released for production use.
People who fall within this taxonomy populate every software project. To be effective,
the project team must be organized in a way that maximizes each person’ s skills and abilities.
And that’ s the job of the team leader.
Team Leaders
• Project management is a people-intensive activity, and for this reason, competent practitioners
often make poor team leaders. They simply don’ t have the right mix of people skills.
And yet, as Edgemon states: “Unfortunately and all too frequently it seems, individuals just
fall into a project manager role and become accidental project managers.”
• In an excellent book on technical leadership, Jerry Weinberg suggests an MOI model of leadership:
Motivation
The ability to encourage (by “push or pull” ) technical people to produce to their best ability.
Organization
The ability to mold existing processes (or invent new ones) that will enable the initial concept
to be translated into a final product.
Ideas or innovation
The ability to encourage people to create and feel creative, even when they must work within
bounds established for a particular software product or application.
Weinberg suggests that successful project leaders apply a problem-solving management style:
understanding the problem to be solved, managing the flow of ideas, and, at the same time,
letting everyone on the team know (by words and, far more important, by actions) that quality
counts and that it will not be compromised.
Problem solving
An effective software project manager can diagnose the technical and organizational issues that
are most relevant, systematically structure a solution or properly motivate other practitioners to
develop the solution, apply lessons learned from past projects to new situations, and remain flexible
enough to change direction if initial attempts at problem solution are fruitless.
Managerial identity
A good project manager must take charge of the project. He/She must have the
confidence to assume control when necessary and the assurance to allow good technical people
to follow their instincts.
Achievement
To optimize the productivity of a project team, a manager must reward initiative and
accomplishment and demonstrate through his own actions that controlled risk taking will not
be punished.
An effective project manager must be able to “ read” people; he/she must be able to
understand verbal and nonverbal signals and react to the needs of the people sending these
signals. The manager must remain under control in high- stress situations.
There are almost as many human organizational structures for software development as there
are organizations that develop software. For better or worse, organizational structure cannot
be easily modified. Concern with the practical and political consequences of organizational
change is not within the software project manager’s scope of responsibility. However, the
organization of the people directly involved in a new software project is within the project
manager’s purview.
The following options are available for applying human resources to a project that will require n
people working for k years:
Democratic decentralized (DD). This software engineering team has no permanent leader. Rather, ‘
task coordinators are appointed for short durations and then replaced by others who may coordinate
different tasks.’ Decisions on problems and approach are made by group consensus. Communication
among team members is horizontal.
Controlled decentralized (CD). This software engineering team has a defined leader who coordinates
specific tasks and secondary leaders that have responsibility for subtasks. Problem solving remains a
group activity, but implementation of solutions is partitioned among subgroups by the team leader.
Communication among subgroups and individuals is horizontal. Vertical communication along the
control hierarchy also occurs.
Controlled Centralized (CC). Top-level problem solving and internal team coordination are managed
by a team leader. Communication between the leader and team members is vertical.
Mantel describes seven project factors that should be considered when
planning the structure of software engineering teams:
The difficulty of the problem to be solved.
The size of the resultant program(s) in lines of code or function points.
The time that the team will stay together (team lifetime).
The degree to which the problem can be modularized.
The required quality and reliability of the system to be built.
The rigidity of the delivery date.
The degree of sociability (communication) required for the project.
Because a centralized structure completes tasks faster, it is the most adept at handling simple
problems.
The DD team structure is best applied to problems with relatively low modularity, because of the
higher volume of communication needed.
When high modularity is possible (and people can do their own thing), the CC or CD structure
will work well.
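The guidance above can be condensed into a rough decision sketch. The thresholds and the exact mapping are illustrative assumptions, not a formal rule from Mantel:

```python
def suggest_team_structure(modularity_high, problem_difficult):
    """Rough heuristic echoing the guidance above:
    - CC or CD works well when the problem is highly modular,
    - DD suits difficult, low-modularity problems (more communication),
    - a centralized (CC) structure handles simple problems fastest."""
    if modularity_high:
        return "CC or CD"
    if problem_difficult:
        return "DD"
    return "CC"

print(suggest_team_structure(modularity_high=False, problem_difficult=True))
```

In practice all seven of Mantel's factors (team lifetime, delivery-date rigidity, required sociability, and so on) would weigh in, so this is only a starting point for discussion.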
Constantine suggests four organizational paradigms for software engineering teams:
1. The closed paradigm structures a team along a traditional hierarchy of authority. Such
teams can work well when producing software quite similar to past efforts, but they are less
likely to be innovative.
2. The random paradigm structures a team loosely and depends on the individual initiative of
team members. When innovation or technological breakthrough is required, random-paradigm
teams excel, but they may struggle when orderly performance is required.
3. The open paradigm attempts to structure a team in a manner that achieves some of
the controls associated with the closed paradigm but also much of the innovation that occurs
when using the random paradigm. Work is performed collaboratively, with heavy
communication and consensus-based decision making the trademarks of open paradigm teams.
Open paradigm team structures are well suited to the solution of complex problems but may not
perform as efficiently as other teams.
4. The synchronous paradigm relies on the natural compartmentalization of a
problem and organizes team members to work on pieces of the problem with little active
communication among themselves.
High-performance team:
• Mavericks may have to be excluded from the team, if team cohesiveness is to be maintained.
There are many reasons that software projects get into trouble.
The scale of many development efforts is large, leading to complexity, confusion, and significant
difficulties in coordinating team members.
Uncertainty is common, resulting in a continuing stream of changes that ratchets the project
team.
Interoperability has become a key characteristic of many systems. New software must
communicate with existing software and conform to predefined constraints imposed by the system
or product.
These characteristics of modern software —scale, uncertainty, and interoperability —are facts of
life.
Kraul and Streeter examine a collection of project coordination techniques that are categorized in
the following manner:
Formal, impersonal approaches include software engineering documents and deliverables (including
source code), technical memos, project milestones, schedules, and project control tools, change
requests and related documentation, error tracking reports, and repository data.
Formal, interpersonal procedures focus on quality assurance activities applied to software
engineering work products. These include status review meetings and design and code inspections.
Informal, interpersonal procedures include group meetings for information dissemination and
problem solving and “ collocation of requirements and development staff.”
Electronic communication:
Electronic mail
Electronic bulletin boards
Voice-based conferencing, etc.
The product
A software project manager is confronted with a dilemma at the very beginning of a software
engineering project. Quantitative estimates and an organized plan are required, but solid
information is unavailable. A detailed analysis of software requirements would provide the
necessary information for estimates, but analysis often takes weeks or months to complete.
Worse, requirements may be fluid, changing regularly as the project proceeds. Yet, a plan is
needed “now!” Therefore, we must examine the product and the problem it is intended to solve
at the very beginning of the project.
Software Scope
The first software project management activity is the determination of software scope. Scope is
defined by answering the following questions:
Context. How does the software to be built fit into a larger system, product, or business context, and
what constraints are imposed as a result of the context?
Information objectives. What customer-visible data objects are produced as output from the
software? What data objects are required for input?
Function and performance. What function does the software perform to transform input data into output?
Problem Decomposition
During the scoping activity, no attempt is made to fully decompose the problem.
Rather, decomposition is applied in two major areas: (1) the functionality that
must be delivered and (2) the process that will be used to deliver it.
Human beings tend to apply a divide-and-conquer strategy when they are confronted with
complex problems.
Stated simply, a complex problem is partitioned into smaller problems that are more
manageable.
This is the strategy that applies as project planning begins. Software functions, described in the statement of
scope, are evaluated and refined to provide more detail prior to the beginning of estimation.
The project team learns that the marketing department has talked with potential customers
and found that the following functions should be part of automatic copy editing:
(3) Reference checking for large documents (e.g., is a reference to a bibliography entry
found in the list of entries in the bibliography?),
(4) Section and chapter reference validation for large documents. Each of these features
represents a subfunction to be implemented in software. Each can be further refined if the
decomposition will make planning easier.
THE PROCESS
The generic phases that characterize the software process (definition, development, and
support) are applicable to all software. The problem is to select the process model that is
appropriate for the software to be engineered. The project manager must decide which process
model is most appropriate for
(1) the customers who have requested the product and the people who will
do the work,
(2) the characteristics of the product itself, and
(3) the project environment in which the software team works. When a process
model has been selected, the team then defines a preliminary project plan based on the set of
common process framework activities.
Once the preliminary plan is established, process decomposition begins. That is, a
complete plan, reflecting the work tasks required to populate the framework activities, must
be created.
Project planning begins with the melding of the product and the process. Each
function to be engineered by the software team must pass through the set of framework
activities that have been defined for a software organization. Assume that the organization
has adopted the following set of framework activities
• Planning —tasks required to define resources, timelines, and other project-related
information.
• Risk analysis —tasks required to assess both technical and management risks.
• Engineering —tasks required to build one or more representations of the
application.
• Construction and release —tasks required to construct, test, install, and provide user support.
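The melding of product and process described above can be pictured as a matrix: each function in the statement of scope crosses each framework activity, and every cell becomes a work task to be planned. A minimal sketch of that decomposition (the function names are illustrative assumptions, not from any particular project):

```python
# Process decomposition sketch: every scope function passes through
# every framework activity, yielding one work task per (function, activity) pair.

ACTIVITIES = ["planning", "risk analysis", "engineering", "construction and release"]

def decompose(functions):
    """Return a work-task matrix: one task per (function, activity) pair."""
    return {f: [f"{activity} for {f}" for activity in ACTIVITIES]
            for f in functions}

# Hypothetical scope functions for a document-editing product:
tasks = decompose(["text editing", "reference checking"])
for function, work_tasks in tasks.items():
    print(function, "->", work_tasks)
```

The matrix makes the planning workload explicit: two functions times four activities gives eight work tasks to estimate and schedule.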
THE PROJECT
1. Start on the right foot. This is accomplished by working hard (very hard) to understand the
problem that is to be solved and then setting realistic objectives and expectations for everyone who
will be involved in the project. It is reinforced by building the right team and giving the team the
autonomy, authority, and technology needed to do the job.
2. Maintain momentum. Many projects get off to a good start and then slowly disintegrate. To
maintain momentum, the project manager must provide incentives to keep turnover of personnel to
an absolute minimum, the team should emphasize quality in every task it performs, and senior
management should do everything possible to stay out of the team’s way.
3. Track progress. For a software project, progress is tracked as work products (e.g., specifications,
source code, sets of test cases) are produced and approved (using formal technical reviews) as part
of a quality assurance activity. In addition, software process and project measures (Chapter 4) can be
collected and used to assess progress against averages developed for the software development
organization.
4. Make smart decisions. In essence, the decisions of the project manager and the software
team should be to “keep it simple.” Whenever possible, decide to use commercial off-the-shelf
software or existing software components, decide to avoid custom interfaces when standard
approaches are available, decide to identify and then avoid obvious risks, and decide to allocate
more time than you think is needed to complex or risky tasks (you’ll need every minute).
5. Conduct a postmortem analysis. Establish a consistent mechanism for
extracting lessons learned for each project. Evaluate the planned and actual schedules, collect
and analyze software project metrics, get feedback from team members and customers, and
record findings in written form.
ISO 9001
ISO 9001 organizes its requirements into broad areas that include management requirements,
resource requirements, and realization requirements.
Implementing a Quality Management System will motivate staff by defining their key roles and
responsibilities. Cost savings can be made through improved efficiency and productivity, as product
or service deficiencies will be highlighted. From this, improvements can be developed, resulting in
less waste, inappropriate or rejected work, and fewer complaints. Customers will notice that orders
are met consistently, on time, and to the correct specification. This can open up the marketplace to
increased opportunities.
The Capability Maturity Model (CMM) is a service mark registered with the U.S. Patent
and Trademark Office by Carnegie Mellon University (CMU) and refers to a development
model that was created after study of data collected from organizations that contracted with
the U.S. Department of Defense, who funded the research. This became the foundation from
which CMU created the Software Engineering Institute (SEI). Like any model, it is an
abstraction of an existing system.
The Capability Maturity Model (CMM) is a methodology used to develop and refine an
organization's software development process. The model describes a five-level evolutionary
path of increasingly organized and systematically more mature processes. CMM was
developed and is promoted by the Software Engineering Institute (SEI), a research and
development center sponsored by the U.S. Department of Defense (DoD). SEI was founded
in 1984 to address software engineering issues and, in a broad sense, to advance software
engineering methodologies. More specifically, SEI was established to optimize the process of
developing, acquiring, and maintaining heavily software-reliant systems for the DoD.
Because the processes involved are equally applicable to the software industry as a whole,
SEI advocates industry-wide adoption of the CMM.
Maturity model
A maturity model can be viewed as a set of structured levels that describe how well the behaviors,
practices and processes of an organization can reliably and sustainably produce required
outcomes. A maturity model may provide, for example:
a place to start
the benefit of a community’s prior experiences
a common language and a shared vision
a framework for prioritizing actions.
a way to define what improvement means for your organization.
A maturity model can be used as a benchmark for comparison and as an aid to understanding
- for example, for comparative assessment of different organizations where there is
something in common that can be used as a basis for comparison. In the case of the CMM,
for example, the basis for comparison would be the organizations' software development
processes.
Structure
Maturity Levels: a 5-level process maturity continuum - where the uppermost (5th) level is a
notional ideal state where processes would be systematically managed by a combination of
process optimization and continuous process improvement.
Key Process Areas: a Key Process Area (KPA) identifies a cluster of related activities that,
when performed together, achieve a set of goals considered important.
Goals: the goals of a key process area summarize the states that must exist for that key
process area to have been implemented in an effective and lasting way. The extent to which
the goals have been accomplished is an indicator of how much capability the organization
has established at that maturity level. The goals signify the scope, boundaries, and intent of
each key process area.
Common Features: common features include practices that implement and institutionalize a
key process area. There are five types of common features: commitment to Perform, Ability
to Perform, Activities Performed, Measurement and Analysis, and Verifying Implementation.
Key Practices: The key practices describe the elements of infrastructure and practice that
contribute most effectively to the implementation and institutionalization of the KPAs.
Levels
There are five levels defined along the continuum of the CMM and, according to the SEI:
"Predictability, effectiveness, and control of an organization's software processes are believed
to improve as the organization moves up these five levels. While not rigorous, the empirical
evidence to date supports this belief."
1. Initial (chaotic, ad hoc, individual heroics) - the starting point for use of a new process.
2. Managed - the process is managed in accordance with agreed metrics.
3. Defined - the process is defined/confirmed as a standard business process, and decomposed
to levels 0, 1 and 2 (the latter being Work Instructions).
4. Quantitatively managed - the process is quantitatively managed in accordance with agreed metrics.
5. Optimizing - process management includes deliberate process optimization/improvement.
Within each of these maturity levels are Key Process Areas (KPAs) which characterize that
level, and for each KPA there are five definitions identified:
1. Goals
2. Commitment
3. Ability
4. Measurement
5. Verification
The KPAs are not necessarily unique to CMM, representing — as they do — the stages that
organizations must go through on the way to becoming mature.
The CMM provides a theoretical continuum along which process maturity can be developed
incrementally from one level to the next. Skipping levels is not allowed/feasible.
N.B.: The CMM was originally intended as a tool to evaluate the ability of government
contractors to perform a contracted software project. It has been used for and may be suited
to that purpose, but critics pointed out that process maturity according to the CMM was not
necessarily mandatory for successful software development. There were/are real-life
examples where the CMM was arguably irrelevant to successful software development, and
these examples include many shrinkwrap companies (also called commercial-off-the-shelf or
"COTS" firms or software package firms). Such firms would have included, for example,
Claris, Apple, Symantec, Microsoft, and Lotus. Though these companies may have
successfully developed their software, they would not necessarily have considered or defined
or managed their processes as the CMM described as level 3 or above, and so would have
fitted level 1 or 2 of the model. This did not - on the face of it - frustrate the successful
development of their software.
Level 1 - Initial
It is characteristic of processes at this level that they are (typically) undocumented and in a
state of dynamic change, tending to be driven in an ad hoc, uncontrolled and reactive
manner by users or events. This provides a chaotic or unstable environment for the
processes.
Level 2 - Repeatable
It is characteristic of processes at this level that some processes are repeatable, possibly
with consistent results. Process discipline is unlikely to be rigorous, but where it exists it may
help to ensure that existing processes are maintained during times of stress.
Level 3 - Defined
It is characteristic of processes at this level that there are sets of defined and documented
standard processes established and subject to some degree of improvement over time.
These standard processes are in place (i.e., they are the AS-IS processes) and used to
establish consistency of process performance across the organization.
Level 4 - Managed
It is characteristic of processes at this level that, using process metrics, management can
effectively control the AS-IS process (e.g., for software development ). In particular,
management can identify ways to adjust and adapt the process to particular projects
without measurable losses of quality or deviations from specifications. Process Capability is
established from this level.
Level 5 - Optimizing
At maturity level 5, processes are concerned with addressing statistical common causes of
process variation and changing the process (for example, to shift the mean of the process
performance) to improve process performance. This would be done at the same time as
maintaining the likelihood of achieving the established quantitative process-improvement
objectives.
The software process framework documented here is intended to guide those wishing to assess an
organization's or project's consistency with the CMM. For each maturity level there are five
checklist types:
Type      Description
Policy    Describes the policy contents and KPA goals recommended by the CMM.
Process   Describes the process information content recommended by the CMM. The process
          checklists are further refined into checklists for roles, entry criteria, inputs,
          activities, outputs, exit criteria, reviews and audits, work products managed and
          controlled, measurements, documented procedures, training, and tools.
Six Sigma
Six Sigma stands for six standard deviations (sigma is the Greek letter used to represent standard
deviation in statistics) from the mean. The Six Sigma methodology provides the techniques and tools to
improve capability and reduce defects in any process.
It was started at Motorola, in its manufacturing division, where millions of parts are made using the
same process repeatedly. Eventually Six Sigma evolved and was applied to other non-manufacturing
processes. Today you can apply Six Sigma to many fields, such as services, medical and insurance
procedures, and call centers.
Six Sigma methodology improves any existing business process by constantly reviewing and re-
tuning the process. To achieve this, Six Sigma uses a methodology known as DMAIC (Define
opportunities, Measure performance, Analyze opportunity, Improve performance, Control
performance).
Six Sigma seeks to improve the quality of process outputs by identifying and removing the
causes of defects (errors) and minimizing variability in manufacturing and business processes.
It uses a set of quality management methods, including statistical methods, and creates a
special infrastructure of people within the organization ("Black Belts", "Green Belts", etc.)
who are experts in these methods. Each Six Sigma project carried out within an organization
follows a defined sequence of steps and has quantified financial targets (cost reduction or
profit increase).
The term Six Sigma originated from terminology associated with manufacturing, specifically
terms associated with statistical modeling of manufacturing processes. The maturity of a
manufacturing process can be described by a sigma rating indicating its yield, or the
percentage of defect-free products it creates. A six sigma process is one in which 99.99966%
of the products manufactured are statistically expected to be free of defects (3.4 defects per
million). Motorola set a goal of "six sigma" for all of its manufacturing operations, and this
goal became a byword for the management and engineering practices used to achieve it.
Six Sigma is a systematic process of quality improvement through a disciplined data-analysis
approach: improving the organizational process by eliminating the defects or obstacles that
prevent the organization from reaching perfection.
Six Sigma measures the total number of defects encountered in an organization's performance.
Any deviation from the customer specification is considered a defect. With the statistical
representation used in Six Sigma, it is easy to determine quantitatively how a process is
performing. A defect, according to Six Sigma, is a nonconformity of the product or service of an
organization.
Since the fundamental aim of Six Sigma is to improve a specified process through a
measurement-based strategy, Six Sigma is a registered service mark and trademark, with its own
rules and methodologies. To achieve Six Sigma performance, a process should not produce more
than 3.4 defects per million opportunities. The number of defects can be computed through the
Six Sigma calculation; a sigma calculator helps with this.
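The sigma calculator mentioned above is straightforward to sketch: compute DPMO from observed defects and opportunities, then convert the resulting yield to a sigma level via the inverse normal CDF plus the conventional 1.5-sigma shift (the shift is discussed later in this section). A minimal sketch; the example counts are hypothetical:

```python
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value):
    """Short-term sigma level: inverse normal CDF of the yield,
    plus the conventional 1.5-sigma long-term shift."""
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + 1.5

# Hypothetical process: 34 defects over 1000 units, 10 opportunities each.
d = dpmo(defects=34, units=1000, opportunities_per_unit=10)   # 3400 DPMO
print(round(d), round(sigma_level(d), 2))
```

As a sanity check, feeding 3.4 DPMO into `sigma_level` returns approximately 6.0, matching the definition of a six sigma process.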
In order to attain the fundamental objectives of Six Sigma, there are Six Sigma methodologies to be
implemented. This is done through the application of Six Sigma improvement projects, which is
accomplished through the two Six Sigma sub-methodologies. Under the improvement projects came
the identification, selection and ranking things according to the importance. The major two sub-
divisions of the improvement projects are the Six Sigma DMAIC and the Six Sigma DMADV. These
sub-divisions are considered as the processes and the execution of these processes are done
through three certifications.
HISTORICAL OVERVIEW
Six Sigma originated as a set of practices designed to improve manufacturing processes and
eliminate defects, but its application was subsequently extended to other types of business
processes as well. In Six Sigma, a defect is defined as any process output that does not meet
customer specifications, or that could lead to creating an output that does not meet customer
specifications.
Bill Smith first formulated the particulars of the methodology at Motorola in 1986. Six Sigma
was heavily inspired by six preceding decades of quality improvement methodologies such as
quality control, TQM, and Zero Defects, based on the work of pioneers such as Shewhart,
Deming, Juran, Ishikawa, Taguchi and others.
Continuous efforts to achieve stable and predictable process results (i.e., reduce process
variation) are of vital importance to business success.
Manufacturing and business processes have characteristics that can be measured, analyzed,
improved and controlled.
Achieving sustained quality improvement requires commitment from the entire
organization, particularly from top-level management.
Features that set Six Sigma apart from previous quality improvement initiatives
include:
A clear focus on achieving measurable and quantifiable financial returns from any Six Sigma
project.
An increased emphasis on strong and passionate management leadership and support.
A special infrastructure of "Champions," "Master Black Belts," "Black Belts," "Green Belts",
etc. to lead and implement the Six Sigma approach.
A clear commitment to making decisions on the basis of verifiable data, rather than
assumptions and guesswork.
The term "Six Sigma" comes from a field of statistics known as process capability studies.
Originally, it referred to the ability of manufacturing processes to produce a very high
proportion of output within specification. Processes that operate with "six sigma quality" over
the short term are assumed to produce long-term defect levels below 3.4 defects per million
opportunities (DPMO). Six Sigma's implicit goal is to improve all processes to that level of
quality or better.
Six Sigma is a registered service mark and trademark of Motorola Inc. As of 2006, Motorola
reported over US$17 billion in savings from Six Sigma.
Other early adopters of Six Sigma who achieved well-publicized success include Honeywell
(previously known as AlliedSignal) and General Electric, where Jack Welch introduced the
method. By the late 1990s, about two-thirds of the Fortune 500 organizations had begun Six
Sigma initiatives with the aim of reducing costs and improving quality.
In recent years, some practitioners have combined Six Sigma ideas with lean manufacturing
to yield a methodology named Lean Six Sigma.
METHODS
Six Sigma projects follow two project methodologies inspired by Deming's Plan-Do-Check-Act
Cycle. These methodologies, composed of five phases each, bear the acronyms DMAIC and
DMADV.
DMAIC is used for projects aimed at improving an existing business process. DMAIC is
pronounced as "duh-may-ick".
DMADV is used for projects aimed at creating new product or process designs.[12] DMADV is
pronounced as "duh-mad-vee".
DMAIC
Define the problem, the voice of the customer, and the project goals, specifically.
Measure key aspects of the current process and collect relevant data.
Analyze the data to investigate and verify cause-and-effect relationships. Determine what
the relationships are, and attempt to ensure that all factors have been considered. Seek out
the root cause of the defect under investigation.
Improve or optimize the current process based upon data analysis using techniques such as
design of experiments, poka yoke or mistake proofing, and standard work to create a new,
future state process. Set up pilot runs to establish process capability.
Control the future state process to ensure that any deviations from target are corrected
before they result in defects. Implement control systems such as statistical process control,
production boards, and visual workplaces, and continuously monitor the process.
DMADV or DFSS
The DMADV project methodology, also known as DFSS ("Design For Six Sigma"),[12]
features five phases:
Define design goals that are consistent with customer demands and the enterprise strategy.
Measure and identify CTQs (characteristics that are Critical To Quality), product capabilities,
production process capability, and risks.
Analyze to develop and design alternatives, create a high-level design and evaluate design
capability to select the best design.
Design details, optimize the design, and plan for design verification. This phase may require
simulations.
Verify the design, set up pilot runs, implement the production process and hand it over to
the process owner(s).
Within the individual phases of a DMAIC or DMADV project, Six Sigma utilizes many
established quality-management tools that are also used outside of Six Sigma.
IMPLEMENTATION ROLES
One key innovation of Six Sigma involves the "professionalizing" of quality management
functions. Prior to Six Sigma, quality management in practice was largely relegated to the
production floor and to statisticians in a separate quality department. Formal Six Sigma programs
adopt a ranking terminology (similar to some martial arts systems) to define a hierarchy (and
career path) that cuts across all business functions.
Six Sigma identifies several key roles for its successful implementation.
Executive Leadership includes the CEO and other members of top management. They are
responsible for setting up a vision for Six Sigma implementation. They also empower the
other role holders with the freedom and resources to explore new ideas for breakthrough
improvements.
Champions take responsibility for Six Sigma implementation across the organization in an
integrated manner. The Executive Leadership draws them from upper management.
Champions also act as mentors to Black Belts.
Master Black Belts, identified by champions, act as in-house coaches on Six Sigma. They
devote 100% of their time to Six Sigma. They assist champions and guide Black Belts and
Green Belts. Apart from statistical tasks, they spend their time on ensuring consistent
application of Six Sigma across various functions and departments.
Black Belts operate under Master Black Belts to apply Six Sigma methodology to specific
projects. They devote 100% of their time to Six Sigma. They primarily focus on Six Sigma
project execution, whereas Champions and Master Black Belts focus on identifying
projects/functions for Six Sigma.
Green Belts are the employees who take up Six Sigma implementation along with their other
job responsibilities, operating under the guidance of Black Belts.
Some organizations use additional belt colours, such as Yellow Belts, for employees who have
basic training in Six Sigma tools.
Certification
In the United States, Six Sigma certification for both Green and Black Belts is offered by the
Institute of Industrial Engineers and by the American Society for Quality.[15] Many
organizations also offer certification programs to their employees. Many corporations,
including early Six Sigma pioneers General Electric and Motorola developed certification
programs as part of their Six Sigma implementation. All branches of the US Military also
train and certify their own Black and Green Belts[citation needed].
Graph of the normal distribution, which underlies the statistical assumptions of the Six Sigma model.
The Greek letter σ (sigma) marks the distance on the horizontal axis between the mean, µ, and the
curve's inflection point. The greater this distance, the greater is the spread of values encountered.
For the curve shown above, µ = 0 and σ = 1. The upper and lower specification limits (USL, LSL) are at
a distance of 6σ from the mean. Because of the properties of the normal distribution, values lying
that far away from the mean are extremely unlikely. Even if the mean were to move right or left by
1.5σ at some point in the future (1.5 sigma shift), there is still a good safety cushion. This is why Six
Sigma aims to have processes where the mean is at least 6σ away from the nearest specification
limit.
The term "six sigma process" comes from the notion that if one has six standard deviations
between the process mean and the nearest specification limit, as shown in the graph,
practically no items will fail to meet specifications.[8] This is based on the calculation method
employed in process capability studies.
Capability studies measure the number of standard deviations between the process mean and
the nearest specification limit in sigma units. As process standard deviation goes up, or the
mean of the process moves away from the center of the tolerance, fewer standard deviations
will fit between the mean and the nearest specification limit, decreasing the sigma number
and increasing the likelihood of items outside specification.
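The capability computation described above can be sketched directly: count how many standard deviations fit between the process mean and the nearest specification limit. A minimal sketch; the specification limits and process parameters are illustrative assumptions:

```python
def sigma_capability(mean, stdev, lsl, usl):
    """Number of standard deviations between the process mean and
    the nearest specification limit (the 'sigma' of a capability study)."""
    return min(usl - mean, mean - lsl) / stdev

def cpk(mean, stdev, lsl, usl):
    """Process capability index: the same distance expressed in units of 3 sigma."""
    return sigma_capability(mean, stdev, lsl, usl) / 3

# A centered process with limits 6 sigma away on each side:
print(sigma_capability(mean=0.0, stdev=1.0, lsl=-6.0, usl=6.0))  # 6.0
print(cpk(mean=0.0, stdev=1.0, lsl=-6.0, usl=6.0))               # 2.0
# Moving the mean toward a limit reduces the sigma count:
print(sigma_capability(mean=1.5, stdev=1.0, lsl=-6.0, usl=6.0))  # 4.5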
Experience has shown that processes usually do not perform as well in the long term as they
do in the short term. As a result, the number of sigmas that will fit between the process mean
and the nearest specification limit may well drop over time, compared to an initial short-term
study. To account for this real-life increase in process variation over time, an empirically
based 1.5 sigma shift is introduced into the calculation. According to this idea, a process that
fits 6 sigma between the process mean and the nearest specification limit in a short-term
study will in the long term only fit 4.5 sigma – either because the process mean will move
over time, or because the long-term standard deviation of the process will be greater than that
observed in the short term, or both.
Hence the widely accepted definition of a six sigma process is a process that produces 3.4
defective parts per million opportunities (DPMO). This is based on the fact that a process that
is normally distributed will have 3.4 parts per million beyond a point that is 4.5 standard
deviations above or below the mean (one-sided capability study). So the 3.4 DPMO of a six
sigma process in fact corresponds to 4.5 sigma, namely 6 sigma minus the 1.5-sigma shift
introduced to account for long-term variation. This allows for the fact that special causes may
result in a deterioration in process performance over time, and is designed to prevent
underestimation of the defect levels likely to be encountered in real-life operation.
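The arithmetic behind this definition can be checked with the one-sided normal tail: 3.4 DPMO is the area beyond 4.5 sigma, i.e. 6 sigma minus the 1.5-sigma shift. A quick check using only the standard library:

```python
import math

def one_sided_dpmo(z):
    """Defects per million beyond z standard deviations (one-sided normal tail)."""
    return 0.5 * math.erfc(z / math.sqrt(2)) * 1_000_000

# A "six sigma" process is scored at 6 sigma short-term but assumed to
# drift 1.5 sigma long-term, so its defect rate is the 4.5-sigma tail:
print(round(one_sided_dpmo(6.0 - 1.5), 1))   # 3.4
# The literal 6-sigma tail would be far smaller (about 0.001 DPMO):
print(one_sided_dpmo(6.0))
```

This shows why the stated sigma level and the actual normal-tail probability differ: the conventional 1.5-sigma shift is baked into the scoring.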
Sigma levels
A control chart depicting a process that experienced a 1.5 sigma drift in the process mean toward
the upper specification limit starting at midnight. Control charts are used to maintain 6 sigma quality
by signaling when quality professionals should investigate a process to find and eliminate special-
cause variation.
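The control-chart signalling described in the caption can be sketched as a simple rule: flag any observation more than 3 sigma from the process mean, so that special-cause variation is investigated. A minimal sketch of the basic Shewhart signal; the sample data are illustrative assumptions:

```python
def out_of_control(observations, mean, stdev, limit=3.0):
    """Indices of points beyond the control limits (mean +/- limit*stdev),
    the basic Shewhart signal for special-cause variation."""
    return [i for i, x in enumerate(observations)
            if abs(x - mean) > limit * stdev]

data = [0.2, -0.5, 1.1, 3.6, -0.3, 0.8]            # hypothetical measurements
print(out_of_control(data, mean=0.0, stdev=1.0))   # [3]
```

Real control charts add further run rules (e.g., several consecutive points on one side of the mean) to catch drifts like the 1.5-sigma shift shown in the figure.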
The table below gives long-term DPMO values corresponding to various short-term sigma
levels.
It must be understood that these figures assume that the process mean will shift by 1.5 sigma
toward the side with the critical specification limit. In other words, they assume that after the
initial study determining the short-term sigma level, the long-term Cpk value will turn out to
be 0.5 less than the short-term Cpk value. So, for example, the DPMO figure given for 1
sigma assumes that the long-term process mean will be 0.5 sigma beyond the specification
limit (Cpk = –0.17), rather than 1 sigma within it, as it was in the short-term study (Cpk =
0.33). Note that the defect percentages only indicate defects exceeding the specification limit
to which the process mean is nearest. Defects beyond the far specification limit are not
included in the percentages.
Sigma level   DPMO      Percent defective   Percentage yield   Short-term Cpk   Long-term Cpk
1             691,462   69%                 31%                0.33             –0.17
2             308,538   31%                 69%                0.67             0.17
3             66,807    6.7%                93.3%              1.00             0.5
4             6,210     0.62%               99.38%             1.33             0.83
5             233       0.023%              99.977%            1.67             1.17
6             3.4       0.00034%            99.99966%          2.00             1.5
APPLICATION
Six Sigma mostly finds application in large organizations. An important factor in the spread
of Six Sigma was GE's 1998 announcement of $350 million in savings thanks to Six Sigma, a
figure that later grew to more than $1 billion. According to industry consultants like Thomas
Pyzdek and John Kullmann, companies with fewer than 500 employees are less suited to Six
Sigma implementation, or need to adapt the standard approach to make it work for them. This
is due both to the infrastructure of Black Belts that Six Sigma requires, and to the fact that
large organizations present more opportunities for the kinds of improvements Six Sigma is
suited to bringing about.
CRITICISM
Lack of originality
Noted quality expert Joseph M. Juran has described Six Sigma as "a basic version of quality
improvement", stating that "there is nothing new there. It includes what we used to call
facilitators. They've adopted more flamboyant terms, like belts with different colors. I think
that concept has merit to set apart, to create specialists who can be very helpful. Again, that's
not a new idea. The American Society for Quality long ago established certificates, such as
for reliability engineers."
Role of consultants
The use of "Black Belts" as itinerant change agents has (controversially) fostered an industry
of training and certification. Critics argue there is overselling of Six Sigma by too great a
number of consulting firms, many of which claim expertise in Six Sigma when they only
have a rudimentary understanding of the tools and techniques involved.[3]
A Fortune article stated that "of 58 large companies that have announced Six Sigma
programs, 91 percent have trailed the S&P 500 since". The statement was attributed to "an
analysis by Charles Holland of consulting firm Qualpro (which espouses a competing quality-
improvement process)." The summary of the article is that Six Sigma is effective at what it is
intended to do, but that it is "narrowly designed to fix an existing process" and does not help
in "coming up with new products or disruptive technologies." Advocates of Six Sigma have
argued that many of these claims are in error or ill-informed.
A BusinessWeek article says that James McNerney's introduction of Six Sigma at 3M had the
effect of stifling creativity and reports its removal from the research function. It cites two
Wharton School professors who say that Six Sigma leads to incremental innovation at the
expense of blue-sky research. This phenomenon is further explored in the book Going Lean,
which describes a related approach known as lean dynamics and provides data to show that
Ford's "6 Sigma" program did little to change its fortunes.[25]
While 3.4 defects per million opportunities might work well for certain products/processes, it
might not operate optimally or cost effectively for others. A pacemaker process might need
higher standards, for example, whereas a direct mail advertising campaign might need lower
standards. The basis and justification for choosing 6 (as opposed to 5 or 7, for example) as
the number of standard deviations is not clearly explained. In addition, the Six Sigma model
assumes that the process data always conform to the normal distribution. The calculation of
defect rates for situations where the normal distribution model does not apply is not properly
addressed in the current Six Sigma literature.[3]
The statistician Donald J. Wheeler has dismissed the 1.5 sigma shift as "goofy" because of its arbitrary nature, and its universal applicability is seen as doubtful.
The 1.5 sigma shift has also become contentious because it results in stated "sigma levels" that reflect short-term rather than long-term performance: a process that has long-term defect levels corresponding to 4.5 sigma performance is, by Six Sigma convention, described as a "six sigma process." The accepted Six Sigma scoring system thus cannot be equated to actual normal distribution probabilities for the stated number of standard deviations, and this has been a key bone of contention about how Six Sigma measures are defined. The fact that it is rarely explained that a "6 sigma" process will have long-term defect rates corresponding to 4.5 sigma performance rather than actual 6 sigma performance has led several commentators to regard the convention as misleading.
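The 3.4 defects-per-million figure quoted for a "six sigma" process follows directly from the normal tail probability at 4.5 sigma. A minimal Python sketch (assuming a one-sided specification limit and the conventional 1.5 sigma shift; the function name is illustrative) makes the arithmetic concrete:

```python
import math

def dpmo(sigma_level, shift=1.5):
    """Defects per million opportunities for a one-sided spec limit,
    assuming normally distributed data whose mean has drifted by
    `shift` standard deviations (the conventional 1.5 sigma shift)."""
    z = sigma_level - shift
    tail = 0.5 * math.erfc(z / math.sqrt(2))  # P(Z > z) for a standard normal
    return tail * 1_000_000

print(round(dpmo(6), 1))             # 3.4 -- the familiar "six sigma" figure
print(round(dpmo(4.5, shift=0), 1))  # 3.4 -- i.e. unshifted 4.5 sigma performance
```

This illustrates the point made above: the stated "6 sigma" defect rate is numerically the tail probability of an unshifted 4.5 sigma process.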
The management side focuses on using the management system to line up the right projects and match them with the right individuals. Management also focuses on setting the right goals and process metrics to ensure that projects are successfully completed and that these gains can be sustained over time.
The technical side focuses on enhancing process performance using process data, statistical thinking, and methods. This focused process improvement methodology has five key stages: Define, Measure, Analyze, Improve and Control. Define process improvement goals that are consistent with customer demands and company strategy. Next, measure the key aspects of the current processes that your company is using, and collect relevant data about these processes and current results. Then analyze the data to verify cause-and-effect relationships, being sure to consider all possible factors involved. Then improve or optimize the process based upon the data analysis, using techniques such as Design of Experiments or observational study. The last step is to control the process to ensure that any deviations are corrected before they result in defects. During this step you will also set up pilot runs to establish process capability and will continuously monitor the process.
All tools, statistical or not, are linked and sequenced in a unique way that makes Six Sigma both easy and effective to use. The basic approach focuses on the identification of the key process drivers (variables that have the largest effect on output) and relies on software such as Minitab for statistical calculations.
Zero Defects, pioneered by Philip Crosby, is a business practice which aims to minimise the number of defects and errors in a process and to do things right the first time. The ultimate aim is to reduce the level of defects to zero. However, this may not be possible in practice; what it means is that everything possible will be done to eliminate the likelihood of errors or defects occurring. The overall effect of achieving zero defects is the maximisation of profitability.
More recently the concept of zero defects has led to the creation and development of Six Sigma, pioneered by Motorola and now adopted worldwide by many other organisations.
The concept of zero defects, as explained and initiated by Philip Crosby, is a business system that aims at reducing the defects in a process, and doing the process correctly the first time.
The Concept of Zero Defects
If a product is built to specifications without any drawbacks, then it is an acceptable product. In terms of defects, a product will be acceptable when it is free of defects.
When considering the concept of zero defects, one might want to know what that zero defect level is, and whether acceptable levels can be achieved for a product.
Attaining perfect zero defects may not be possible, and there is always a chance of some errors or defects occurring. Zero defects means reaching a level of infinite sigma, which is nearly impossible. In terms of Six Sigma, zero defects would mean maximization of profitability and improvement in quality.
A process has to be in place that allows for the achievement of zero defects. Unless conditions are perfect, the objective of zero defects is not possible. It is possible to measure non-conformance in terms of waste. Unless the customer requirements are clear, you will not be able to achieve a product that matches these requirements.
The concept of zero defects can be practically utilised in any situation to improve quality and reduce
cost. However it doesn’t just happen, as the right conditions have to be established to allow this to
take place. A process, system or method of working has to be established which allows for the
achievement of zero defects. If this process and the associated conditions are not created then it will
not be possible for anyone involved in the process to achieve the desired objective of zero defects.
In such a process it will be possible to measure the cost of non-conformance in terms of wasted materials and wasted time.
Any process that is to be designed to include this concept must be clear on its customer expectations
and desires. The ideal is to aim for a process and finished article that conforms to customer
requirements and does not fall short of or exceed these requirements. For example, in recent years
many financial organisations have made claims regarding how quickly they can process a home loan
application. But what they may have failed to realise is that in spending a great deal of time and
money reducing processing time they are exceeding customer requirements (even if they believe
that they know them). In these cases they have exceeded the cost of conformance when it was not
necessary to do so.
Employees are aware of the need to reduce defects, and they strive to achieve continual improvement. However, over-emphasis on zero defect levels may be demoralizing, and may even lead to non-productivity, since under a strict zero-defects policy anything short of zero defects is regarded as unacceptable.
When the zero defect rule is applied to suppliers and any minor defects are said to be
unacceptable, then the company's supply chain may be jeopardized - which in itself is not the
best business scenario.
It may be preferable to have a policy of continuous improvement rather than a zero defect one. Companies may be able to achieve a decent reduction in costs and improved customer satisfaction levels, and thereby achieve a bigger market share.
Every product or service has a requirement: a description of what the customer needs. When
a particular product meets that requirement, it has achieved quality, provided that the
requirement accurately describes what the enterprise and the customer actually need. This
technical sense should not be confused with more common usages that indicate weight or
goodness or precious materials or some absolute idealized standard. In common parlance, an
inexpensive disposable pen is a lower-quality item than a gold-plated fountain pen. In the
technical sense of Zero Defects, the inexpensive disposable pen is a quality product if it
meets requirements: it writes, does not skip nor clog under normal use, and lasts the time
specified.
The second principle is based on the observation that it is nearly always less troublesome,
more certain and less expensive to prevent defects than to discover and correct them.
The third is based on the normative nature of requirements: if a requirement expresses what is
genuinely needed, then any unit that does not meet requirements will not satisfy the need and
is no good. If units that do not meet requirements actually do satisfy the need, then the
requirement should be changed to reflect reality.
The fourth principle is key to the methodology. Phil Crosby believes that every defect
represents a cost, which is often hidden. These costs include inspection time, rework, wasted
material and labor, lost revenue and the cost of customer dissatisfaction. When properly
identified and accounted for, the magnitude of these costs can be made apparent, which has
three advantages. First, it provides a cost-justification for steps to improve quality. The title
of the book, "Quality is free," expresses the belief that improvements in quality will return
savings more than equal to the costs. Second, it provides a way to measure progress, which is
essential to maintaining management commitment and to rewarding employees. Third, by
making the goal measurable, actions can be made concrete and decisions can be made on the
basis of relative return.
Advantages: cost reduction, because time is now spent only on producing goods or services that meet the requirements of consumers.
Disadvantages: as noted above, over-emphasis on zero defects can demoralize staff and, when imposed on suppliers, can jeopardize the supply chain.

STATISTICAL PROCESS CONTROL
Statistical process control (SPC) is the application of statistical methods to the monitoring
and control of a process to ensure that it operates at its full potential to produce conforming
product. Under SPC, a process behaves predictably to produce as much conforming product
as possible with the least possible waste. While SPC has been applied most frequently to
controlling manufacturing lines, it applies equally well to any process with a measurable
output. Key tools in SPC are control charts, a focus on continuous improvement and designed
experiments.
Much of the power of SPC lies in the ability to examine a process and the sources of variation
in that process using tools that give weight to objective analysis over subjective opinions and
that allow the strength of each source to be determined numerically. Variations in the process
that may affect the quality of the end product or service can be detected and corrected, thus
reducing waste as well as the likelihood that problems will be passed on to the customer.
With its emphasis on early detection and prevention of problems, SPC has a distinct
advantage over other quality methods, such as inspection, that apply resources to detecting
and correcting problems after they have occurred.
In addition to reducing waste, SPC can lead to a reduction in the time required to produce the
product or service from end to end. This is partially due to a diminished likelihood that the
final product will have to be reworked, but it may also result from using SPC data to identify
bottlenecks, wait times, and other sources of delays within the process. Process cycle time
reductions coupled with improvements in yield have made SPC a valuable tool from both a
cost reduction and a customer satisfaction standpoint.
Statistical process control (SPC) involves using statistical techniques to measure and
analyze the variation in processes. Most often used for manufacturing processes, the intent
of SPC is to monitor product quality and maintain processes to fixed targets. Statistical
quality control refers to using statistical techniques for measuring and improving the quality
of processes and includes SPC in addition to other techniques, such as sampling plans,
experimental design, variation reduction, process capability analysis, and process
improvement plans.
HISTORY
Statistical process control was pioneered by Walter A. Shewhart in the early 1920s. W.
Edwards Deming later applied SPC methods in the United States during World War II,
thereby successfully improving quality in the manufacture of munitions and other
strategically important products. Deming was also instrumental in introducing SPC methods
to Japanese industry after the war had ended.
Shewhart created the basis for the control chart and the concept of a state of statistical control
by carefully designed experiments. While Dr. Shewhart drew from pure mathematical
statistical theories, he understood that data from physical processes seldom produces a
"normal distribution curve" (a Gaussian distribution, also commonly referred to as a "bell
curve"). He discovered that observed variation in manufacturing data did not always behave
the same way as data in nature (for example, Brownian motion of particles). Dr. Shewhart
concluded that while every process displays variation, some processes display controlled
variation that is natural to the process (common causes of variation), while others display
uncontrolled variation that is not present in the process causal system at all times (special
causes of variation).[3]
In 1988, the Software Engineering Institute introduced the notion that SPC can be usefully
applied to non-manufacturing processes, such as software engineering processes, in the
Capability Maturity Model (CMM). This idea exists today within the Level 4 and Level 5
practices of the Capability Maturity Model Integration (CMMI). This notion that SPC is a
useful tool when applied to non-repetitive, knowledge-intensive processes such as
engineering processes has encountered much skepticism, and remains controversial today.
GENERAL
The following description relates to manufacturing rather than to the service industry, although the principles of SPC can be successfully applied to either. For a description and example of how SPC applies to a service environment, refer to Roberts (2005).[6] SPC has also been successfully applied to detecting changes in organizational behavior with Social Network Change Detection, introduced by McCulloh (2007). Selden describes how to use SPC in the fields of sales, marketing, and customer service, using Deming's famous Red Bead Experiment as an easy to follow demonstration.[7]
In mass-manufacturing, the quality of the finished article was traditionally achieved through
post-manufacturing inspection of the product; accepting or rejecting each article (or samples
from a production lot) based on how well it met its design specifications. In contrast,
Statistical Process Control uses statistical tools to observe the performance of the production
process in order to predict significant deviations that may later result in rejected product.
Two kinds of variation occur in all manufacturing processes: both these types of process
variation cause subsequent variation in the final product. The first is known as natural or
common cause variation and consists of the variation inherent in the process as it is designed.
Common cause variation may include variations in temperature, properties of raw materials,
strength of an electrical current etc. The second kind of variation is known as special cause
variation, or assignable-cause variation, and happens less frequently than the first. With
sufficient investigation, a specific cause, such as abnormal raw material or incorrect set-up
parameters, can be found for special cause variations.
For example, a breakfast cereal packaging line may be designed to fill each cereal box with
500 grams of product, but some boxes will have slightly more than 500 grams, and some will
have slightly less, in accordance with a distribution of net weights. If the production process,
its inputs, or its environment changes (for example, the machines doing the manufacture
begin to wear) this distribution can change. For example, as its cams and pulleys wear out,
the cereal filling machine may start putting more cereal into each box than specified. If this
change is allowed to continue unchecked, more and more product will be produced that falls outside the tolerances of the manufacturer or consumer, resulting in waste. While in this case the waste is in the form of "free" product for the consumer, typically waste consists of rework or scrap.
By observing at the right time what happened in the process that led to a change, the quality
engineer or any member of the team responsible for the production line can troubleshoot the
root cause of the variation that has crept in to the process and correct the problem.
SPC indicates when an action should be taken in a process, but it also indicates when NO
action should be taken. An example is a person who would like to maintain a constant body
weight and takes weight measurements weekly. A person who does not understand SPC
concepts might start dieting every time his or her weight increased, or eat more every time his
or her weight decreased. This type of action could be harmful and possibly generate even
more variation in body weight. SPC would account for normal weight variation and better
indicate when the person is in fact gaining or losing weight.
Statistical Process Control may be broadly broken down into three sets of activities:
understanding the process, understanding the causes of variation, and elimination of the
sources of special cause variation.
In understanding a process, the process is typically mapped out and the process is monitored
using control charts. Control charts are used to identify variation that may be due to special
causes, and to free the user from concern over variation due to common causes. This is a
continuous, ongoing activity. When a process is stable and does not trigger any of the
detection rules for a control chart, a process capability analysis may also be performed to
predict the ability of the current process to produce conforming (i.e. within specification)
product in the future.
When excessive variation is identified by the control chart detection rules, or the process
capability is found lacking, additional effort is exerted to determine causes of that variance.
The tools used include Ishikawa diagrams, designed experiments and Pareto charts. Designed
experiments are critical to this phase of SPC, as they are the only means of objectively
quantifying the relative importance of the many potential causes of variation.
Once the causes of variation have been quantified, effort is spent in eliminating those causes
that are both statistically and practically significant (i.e. a cause that has only a small but
statistically significant effect may not be considered cost-effective to fix; however, a cause
that is not statistically significant can never be considered practically significant). Generally,
this includes development of standard work, error-proofing and training. Additional process
changes may be required to reduce variation or align the process with the desired target,
especially if there is a problem with process capability.
In practice, most people (in a manufacturing environment) will think of SPC as a set of rules and a control chart (paper and/or digital). SPC ought to be a PROCESS: when conditions change, such 'rules' should be re-evaluated and possibly updated. This does not, alas, usually take place; as a result the set of rules known as "the Western Electric rules" can be found, with minor variations, in a great many different environments (for which they are very rarely actually suitable).
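As a sketch of how one such rule can be checked mechanically, the fragment below tests a single commonly cited Western Electric-style pattern (n consecutive points on the same side of the centre line). The function name and the choice n=8 are illustrative assumptions, not drawn from the text; real rule sets include several other patterns.

```python
def same_side_run(points, centre, n=8):
    """Signal a special cause if n consecutive points fall on the
    same side of the centre line (one Western Electric-style rule)."""
    run, side = 0, 0
    for p in points:
        s = 1 if p > centre else -1 if p < centre else 0
        # extend the run only if this point is on the same (non-centre) side
        run = run + 1 if s != 0 and s == side else (1 if s != 0 else 0)
        side = s
        if run >= n:
            return True
    return False

print(same_side_run([50.2] * 8, 50.0))        # True: eight points above centre
print(same_side_run([49.9, 50.1] * 4, 50.0))  # False: points alternate sides
```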
For digital SPC charts, so-called SPC rules usually come with some rule-specific logic that determines a 'derived value' to be used as the basis for some setting correction. One example of such a derived value, for the common 'N numbers in a row trending up or down' rule, is: derived value = last value + average difference between the last N values (which, in effect, extends the run with the next expected value).
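That derived-value calculation can be sketched as follows (a minimal illustration of the extrapolation just described; the window length n is an assumed parameter):

```python
def trend_derived_value(values, n=7):
    """Extrapolate the next expected value for the 'N points in a row
    trending up or down' rule: last value plus the average step
    between the last n observations."""
    window = values[-n:]
    steps = [b - a for a, b in zip(window, window[1:])]
    return window[-1] + sum(steps) / len(steps)

print(trend_derived_value([10, 11, 13, 14, 16, 17, 19]))  # 19 + 1.5 = 20.5
```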
The fundamentals of Statistical Process Control (though that was not what it was called at the time) and the associated tool of the Control Chart were developed by Dr Walter A Shewhart in the mid-1920s. His reasoning and approach were practical, sensible and positive; in order to keep them so, he deliberately avoided overdoing mathematical detail. In later years, significant mathematical attributes were assigned to Shewhart's thinking, with the result that this work became better known than the pioneering application that Shewhart had worked up.
The crucial difference between Shewhart's work and the inappropriately-perceived purpose of SPC that emerged (which typically involved mathematical distortion and tampering) is that his developments were in the context, and with the purpose, of process improvement, as opposed to mere process monitoring: that is, they could be described as helping to get the process into that "satisfactory state" which one might then be content to monitor. Note, however, that a true adherent to Deming's principles would probably never reach that situation, following instead the philosophy and aim of continuous improvement.
Suppose that we are recording, regularly over time, some measurements from a process. The measurements might be lengths of steel rods after a cutting operation, the lengths of time to service some machine, your weight as measured on the bathroom scales each morning, the percentage of defective (or non-conforming) items in batches from a supplier, measurements of Intelligence Quotient, or times between sending out invoices and receiving the payment, and so on.
A series of line graphs or histograms can be drawn to represent the data as a statistical distribution. It is a picture of the behaviour of the variation in the measurement that is being recorded. If a process is deemed "stable", it is said to be in statistical control.
The point is that, if an outside influence impacts upon the process, (e.g., a machine setting is
altered or you go on a diet etc.) then, in effect, the data are of course no longer all coming
from the same source. It therefore follows that no single distribution could possibly serve to
represent them. If the distribution changes unpredictably over time, then the process is said to
be out of control. As a scientist, Shewhart knew that there is always variation in anything that
can be measured. The variation may be large, or it may be imperceptibly small, or it may be
between these two extremes; but it is always there.
What inspired Shewhart’s development of the statistical control of processes was his
observation that the variability which he saw in manufacturing processes often differed in
behaviour from that which he saw in so-called “natural” processes – by which he seems to
have meant such phenomena as molecular motions.
Wheeler and Chambers combine and summarise these two important aspects as follows:
"While every process displays variation, some processes display controlled variation, while
others display uncontrolled variation."
In particular, Shewhart often found controlled (stable) variation in natural processes and uncontrolled (unstable) variation in manufacturing processes. The difference is clear. In the former case, we know what to expect in terms of variability; in the latter we do not. We may
predict the future, with some chance of success, in the former case; we cannot do so in the
latter.
Shewhart gave us a technical tool to help identify the two types of variation: the control chart. What is important is the understanding of why correct identification of the two types of variation is so vital. There are at least three prime reasons.
First, when there are irregular large deviations in output because of unexplained special
causes, it is impossible to evaluate the effects of changes in design, training, purchasing
policy etc. which might be made to the system by management. The capability of a process is
unknown, whilst the process is out of statistical control.
Second, when special causes have been eliminated, so that only common causes remain, improvement then has to depend upon management action. For such variation is due to the way that the processes and systems have been designed and built, and only management has authority and responsibility to work on systems and processes, a point often emphasised by Myron Tribus, Director of the American Quality and Productivity Institute.
Finally, something of great importance, but necessarily unknown to managers who lack this understanding of variation: by (in effect) misinterpreting either type of cause as the other, and acting accordingly, they not only fail to improve matters, they literally make things worse.
These implications, and consequently the whole concept of the statistical control of
processes, had a profound and lasting impact on Dr Deming. Many aspects of his
management philosophy emanate from considerations based on just these notions.
So why SPC?
The plain fact is that when a process is within statistical control, its output is indiscernible
from random variation: the kind of variation which one gets from tossing coins, throwing
dice, or shuffling cards. Whether or not the process is in control, the numbers will go up, the
numbers will go down; indeed, occasionally we shall get a number that is the highest or the
lowest for some time. Of course we shall: how could it be otherwise? The question is - do
these individual occurrences mean anything important? When the process is out of control,
the answer will sometimes be yes. When the process is in control, the answer is no.
So the main response to the question Why SPC? is therefore this: It guides us to the type of
action that is appropriate for trying to improve the functioning of a process. Should we react
to individual results from the process (which is only sensible, if such a result is signalled by a
control chart as being due to a special cause) or should we instead be going for change to the
process itself, guided by cumulated evidence from its output (which is only sensible if the
process is in control)?
Improvement of a process under SPC typically proceeds in three phases:
Phase 1: Stabilisation of the process by the identification and elimination of special causes;
Phase 2: Active improvement efforts on the process itself, i.e. tackling common causes;
Phase 3: Monitoring the process to ensure the improvements are maintained, and
incorporating additional improvements as the opportunity arises.
Control charts have an important part to play in each of these three Phases. Points beyond
control limits (plus other agreed signals) indicate when special causes should be searched for.
The control chart is therefore the prime diagnostic tool in Phase 1. All sorts of statistical tools
can aid Phase 2, including Pareto Analysis, Ishikawa Diagrams, flow-charts of various kinds,
etc., and recalculated control limits will indicate what kind of success (particularly in terms of
reduced variation) has been achieved. The control chart will also, as always, show when any
further special causes should be attended to. Advocates of the British/European approach will
consider themselves familiar with the use of the control chart in Phase 3. However, it is
strongly recommended that they consider the use of a Japanese Control Chart (q.v.) in order
to see how much more can be done even in this Phase than is normal practice in this part of
the world.
SPC is used to monitor the consistency
of processes used to manufacture a product as designed. It aims to get and keep processes under
control. No matter how good or bad the design, SPC can ensure that the product is being
manufactured as designed and intended. Thus, SPC will not improve a poorly designed product's
reliability, but can be used to maintain the consistency of how the product is made and, therefore, of
the manufactured product itself and its as-designed reliability.
A primary tool used for SPC is the control chart, a graphical representation of certain descriptive
statistics for specific quantitative measurements of the manufacturing process. These descriptive
statistics are displayed in the control chart in comparison to their "in-control" sampling
distributions. The comparison detects any unusual variation in the manufacturing process, which
could indicate a problem with the process. Several different descriptive statistics can be used in
control charts and there are several different types of control charts that can test for different
causes, such as how quickly major vs. minor shifts in process means are detected. Control charts are also used with product measurements to analyze process capability and for continuous process improvement efforts.
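As a minimal sketch of the construction, the fragment below computes a centre line and 3-sigma limits for a chart of subgroup averages directly from the sample standard deviation of those averages. Production X-bar charts usually estimate sigma from subgroup ranges via tabulated constants; this simplified version (with hypothetical data) is illustrative only.

```python
def xbar_chart_limits(subgroups):
    """Centre line and 3-sigma control limits for subgroup averages.
    Simplified: uses the sample standard deviation of the averages
    rather than the range-based estimate used on production charts."""
    means = [sum(s) / len(s) for s in subgroups]
    grand = sum(means) / len(means)                                # centre line
    var = sum((m - grand) ** 2 for m in means) / (len(means) - 1)  # sample variance
    sigma = var ** 0.5
    return grand - 3 * sigma, grand, grand + 3 * sigma

# Subgroup weights from a filling line (hypothetical data)
lcl, centre, ucl = xbar_chart_limits([[498, 502], [500, 501], [499, 500], [501, 502]])
print(round(lcl, 2), centre, round(ucl, 2))
```

Points plotted outside [lcl, ucl] would then be investigated as potential special causes.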
Run tests
If the process is stable, then the distribution of subgroup averages will be approximately
normal. With this in mind, we can also analyze the patterns on the control charts to see if
they might be attributed to a special cause of variation. To do this, we divide a normal
distribution into zones, with each zone one standard deviation wide. Figure IV.25 shows the
approximate percentage we expect to find in each zone from a stable process.
Zone C is the area from the mean to the mean plus or minus one sigma, zone B is from plus
or minus one to plus or minus two sigma, and zone A is from plus or minus two to plus or
minus three sigma. Of course, any point beyond three sigma (i.e., outside of the control limit)
is an indication of an out-of-control process.
Since the control limits are at plus and minus three standard deviations, finding the one and
two sigma lines on a control chart is as simple as dividing the distance between the grand
average and either control limit into thirds, which can be done using a ruler. This divides each
half of the control chart into three zones. The three zones are labeled A, B, and C as shown
on Figure
Based on the expected percentages in each zone, sensitive run tests can be developed for
analyzing the patterns of variation in the various zones. Remember, the existence of a non-
random pattern means that a special cause of variation was (or is) probably present.
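The zone scheme above can be made concrete with a small classifier (a sketch assuming the zones as described: C within 1 sigma of the centre, B from 1 to 2 sigma, A from 2 to 3 sigma, and anything further out beyond the control limits):

```python
def zone_of(x, mean, sigma):
    """Return the control-chart zone for a point: 'C' (within 1 sigma),
    'B' (1-2 sigma), 'A' (2-3 sigma), or 'beyond' the control limits."""
    d = abs(x - mean) / sigma  # distance from centre line in sigma units
    if d <= 1:
        return "C"
    if d <= 2:
        return "B"
    if d <= 3:
        return "A"
    return "beyond"

print([zone_of(x, 500, 2) for x in (501, 496.5, 505, 507.1)])
# ['C', 'B', 'A', 'beyond']
```

Run tests then count how many recent points fall in each zone and compare against the expected percentages for a stable process.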
Run tests
If the process is stable, then the distribution of subgroup averages will be approximately
normal. With this in mind, we can also analyze the patterns on the control charts to see if
they might be attributed to a special cause of variation. To do this, we divide a normal
distribution into zones, with each zone one standard deviation wide. Figure IV.25 shows the
approximate percentage we expect to find in each zone from a stable process.
134
Zone C is the area from the mean to the mean plus or minus one sigma, zone B is from plus
or minus one to plus or minus two sigma, and zone A is from plus or minus two to plus or
minus three sigma. Of course, any point beyond three sigma (i.e., outside of the control limit)
is an indication of an out-of-control process.
Since the control limits are at plus and minus three standard deviations, finding the one and
two sigma lines on a control chart is as simple as dividing the distance between the grand
average and either control limit into thirds, which can be done using a ruler. This divides each
half of the control chart into three zones. The three zones are labeled A, B, and C as shown
on Figure
135
Based on the expected percentages in each zone, sensitive run tests can be developed for
analyzing the patterns of variation in the various zones. Remember, the existence of a non-
random pattern means that a special cause of variation was (or is) probably present.
ISO 9001 is the internationally recognised standard for the quality management of
businesses. ISO 9001:
•applies to the processes that create and control the products and services an organisation supplies
•prescribes systematic control of activities to ensure that the needs and expectations of customers are met
•is designed and intended to apply to virtually any product or service, made by any process, anywhere in the world
BENEFITS
Implementing a Quality Management System will motivate staff by defining their key roles
and responsibilities. Cost savings can be made through improved efficiency and
productivity, as product or service
deficiencies will be highlighted. From this, improvements can be developed, resulting in less
waste, inappropriate or rejected work and fewer complaints. Customers will notice that orders
are met consistently, on time and to the correct specification. This can open up the market
place to increased opportunities.
•Identify the requirements of ISO 9001 and how they apply to the business involved.
•Establish quality objectives and how they fit into the operation of the business.
•Produce a documented quality policy indicating how these requirements are satisfied.
•Communicate it throughout the organisation.
•Evaluate the quality policy, its stated objectives, and then prioritise requirements to ensure they are met.
•Identify the boundaries of the management system and produce documented procedures as required.
•Ensure these procedures are suitable and adhered to.
•Once developed, carry out internal audits to ensure the system carries on working.
ASSESSMENT TO ISO 9001
Once all the requirements of ISO 9001 have been met, it is time for an external audit. This
should be carried out by a third party, accredited certification body. In the UK, the body
should be accredited by UKAS (look for the ‘crown and tick’ logo). The chosen certification
body will review the quality manuals and procedures. This process involves looking at the
company’s evaluation of quality and ascertains if targets set for the management programme
are measurable and achievable. This is followed at a later date by a full on-site audit to ensure
that working practices observe the procedures and stated objectives and that appropriate
records are kept.
After a successful audit, a certificate of registration to ISO 9001 will be issued. There will
then be surveillance visits (usually once or twice a year) to ensure that the system continues
to work. This is covered in more detail in ISOQAR’s Audit Procedure information sheet.
ISO 9000 – Fundamentals and Vocabulary: this introduces the user to the concepts behind
the management systems and specifies the terminology used.
ISO 9001 – Requirements: this sets out the criteria you will need to meet if you wish to
operate in accordance with the standard and gain certification.
UNIT V
Software quality may be defined as conformance to explicitly stated functional and performance
requirements, explicitly documented development standards and implicit characteristics that are
expected of all professionally developed software.
2. Specified standards define a set of development criteria that guide the management in
software engineering.
3. A set of implicit requirements often goes unmentioned, for example ease of use,
maintainability etc.
If software conforms to its explicit requirements but fails to meet implicit requirements,
software quality is suspect.
Dr. Robert Burnett describes the need to focus our future thinking on defect rates to better
manage the analytical quality of laboratory tests. In the midst of our preoccupation with the
profound changes that are taking place in health care delivery in general, and laboratory
medicine in particular, it might be of some comfort to realize that there are some fundamental
things that have remained the same. Two management objectives that have not changed in
organizations, including clinical laboratories, are the need for high quality and the need for high
productivity
Perhaps the emphasis has shifted. Fifteen or twenty years ago we could afford to focus
mainly on the quality of our product. It was fine to be efficient, but with a lot of inpatient
days, a high volume of ordered tests, and a predominately fee-for-service payment system, it
didn't have to be a top priority. Now, changes in reimbursement have made laboratories cost
centers. The majority of patients' hospital bills are paid either on the basis of a fixed per diem,
a fixed amount determined by the diagnosis, or some other variety of flat rate that has nothing
to do with the actual procedures and tests performed for an individual patient. In hospital
laboratories, the sudden shift from profit center to cost center has prompted downsizing and
reorganizing. At the same time much more effort is being spent to control test utilization and
to reduce the cost of performing those tests that remain. The prevailing feeling in most
laboratories is that quality is high enough, but test costs need to be reduced more, i.e.,
productivity needs to be higher. What I will review here are the factors that determine
analytical quality from the customer's perspective, the interdependence that exists between
quality and productivity, and the trade-offs that are often made.
What evidence do we have that analytical quality is generally high? In a recent publication
Westgard et al. found that only one of eighteen common laboratory tests was routinely
performed with precision high enough that current QC practices could detect medically
important errors. This raises questions: Why do we laboratory directors not perceive that
there is an enormous problem here? And why are we not being bombarded with complaints
from the medical staff? To answer these questions, think about how analytical quality is
perceived by people outside the laboratory. As laboratory professionals, we are aware of
several different components and indicators of analytical quality, but our customers are
sensitive only to the "bottom line" - which we call the defect rate. This quantity has been
defined in the literature on the basis of a fraction of analytical runs with an unacceptable
number of erroneous results, but here I want to define defect rate in terms of test results -
specifically, the fraction of test results reported with an error greater than TEa, the total error
deemed allowable on the basis of medical usefulness.
The defect rate for a test represents the best single indicator of analytical quality, as perceived
by our customers, that we can derive. Unfortunately, measuring defect rate is not as simple as
one might think. But to get a rough idea of what a typical defect rate might be, let's say we
are running a test on an automated chemistry analyzer, performing QC once a day. If we run
every day, and sometimes have an extra run thrown in, we might have 400 runs in a year
with, say, an average of 100 samples per run, for a total of 40,000 patient results per year.
In quality control system design, it is important to know the frequency of critical error (f)
associated with the method. This is defined as the frequency of runs in which the distribution
of results has shifted such that 5% or greater have errors larger than TEa. Let's say our
automated analyzer has f equal to 2% - not an unreasonable figure for a well-designed
instrument. This means that in a year, eight of the four hundred runs will have at least five
results with errors larger than TEa. But we have a quality control system in place, the purpose
of which is to detect such problems. Unfortunately, practical considerations in QC system
design often dictate that we can expect to catch only a fraction of the errors we would like to
detect. However, even if the probability of error detection, Ped, is a modest 50% (at the
critical error level) then four of the eight runs would be expected to fail QC and erroneous
results would presumably not be reported. This leaves four runs and a total of 20 erroneous
results that would be reported, which is 0.05% of the total number of results, or 5 defects per
10,000 test results, a defect rate of 1 in 2,000.
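The arithmetic of this example can be checked directly; every figure below is one of the illustrative assumptions above, not measured data:

```python
# Rough defect-rate estimate from the text's assumed figures.
runs_per_year = 400
samples_per_run = 100
f = 0.02                # frequency of runs with a critical error
ped = 0.50              # probability of error detection at the critical level
errors_per_bad_run = 5  # at least 5% of 100 results exceed TEa

total_results = runs_per_year * samples_per_run  # patient results per year
bad_runs = runs_per_year * f                     # runs with a critical error
undetected = bad_runs * (1 - ped)                # bad runs that pass QC anyway
defects = undetected * errors_per_bad_run        # erroneous results reported
defect_rate = defects / total_results

print(f"{defects:.0f} defects in {total_results} results = "
      f"{defect_rate:.2%}, i.e. 1 in {round(1 / defect_rate)}")
```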
I digress here to acknowledge that the defect rate seen by the customer must also include
"blunders", or what statisticians call outliers. These include results reported on the wrong
specimen, or on specimens that were mishandled or improperly collected. Also included are
transcription errors. One might expect that we have fewer such blunders in the laboratory
than we had ten or twenty years ago because of automation, bar-coded identification labels
and instrument interfaces. On the other hand, downsizing has resulted in more pressure on the
remaining technologists, and this might be expected to increase the blunder rate, especially in
non-automated sections of the laboratory.
How realistic is the above estimate of defect rate? Well, it's only as good as the estimate of
the critical error frequency of the method or instrument, so the question becomes, how can
we know the actual value of f? This is a difficult problem, and probably represents the most
serious obstacle to knowing how cost-effective our QC systems really are. It might be
expected that critical error frequency is related to the frequency of runs rejected as being out
of control. In fact these two quantities would be equal if we were using a QC system with an
ideal power curve. Such a function would have a probability of false rejection (Pfr) equal to
zero and a probability of error detection also equal to zero until error size reached the critical
point. At this point Ped would go to 100%. Such a power curve is depicted below.
In the real world however, power curves look like the one below
Inspection of these "real" power curves reveals a problem that I don't believe is widely
appreciated. Most quality control systems will reject many runs where the error size is less
than critical. Pfr gives the probability of rejecting a run with zero error, but what I call the
true false rejection rate is much higher, because all runs rejected with error sizes less than
critical should be considered false rejections from a medical usefulness viewpoint. Note that
although Ped is lower for these smaller errors, this is offset by the fact that small errors occur
more frequently than large ones.
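The two kinds of power curve can be contrasted in code. The step function matches the ideal curve described above; the logistic shape of the "real" curve is an illustrative assumption standing in for the missing figures:

```python
import math

CRITICAL = 3.0  # hypothetical critical error size

def ped_ideal(error_size):
    """Ideal power curve: Pfr = 0, and Ped jumps to 100% at the critical error."""
    return 1.0 if error_size >= CRITICAL else 0.0

def ped_real(error_size, steepness=2.0):
    """A smoother, more realistic curve: it rises gradually, so runs with
    sub-critical errors are sometimes rejected -- the 'true false
    rejection' problem noted in the text."""
    return 1.0 / (1.0 + math.exp(-steepness * (error_size - CRITICAL)))

for e in (0.0, 2.0, 3.0, 4.0):
    print(e, ped_ideal(e), round(ped_real(e), 3))
```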
Vision
Defect Tracking for Improving Product Quality and Productivity is useful for applications
developed in an organization. The Defect Tracking for Improving Product Quality and
Productivity system is a web-based application that can be accessed throughout the
organization. This system can be used for logging defects against an application/module,
assigning defects to individuals, and tracking the defects to resolution. There are features
like email notifications, user maintenance, user access control and report generators in
this system.
Project Specification
This system can be used as an application by any product-based company to reduce
defects and improve product quality and productivity. A logged-in user should be able to
upload user information.
Functional Components
•Defect priority
•Date created
•Defect description
•Defect diagnosis
•Name of originator
•Name of Assignee
•Status
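As a sketch only, the components listed above might map onto a defect record such as the following; the field names, types and sample values are assumptions, not part of the specification:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Defect:
    """One logged defect, holding the functional components listed above."""
    priority: str        # e.g. "High", "Medium", "Low" (assumed scale)
    description: str
    diagnosis: str
    originator: str      # name of originator
    assignee: str        # name of assignee
    status: str = "Open"                               # assumed default
    created: date = field(default_factory=date.today)  # date created

d = Defect("High", "Login fails on Safari", "Session cookie not set",
           originator="alice", assignee="bob")
print(d.status, d.assignee)
```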
•Component admin having privileges (b),(d),(e),(f) for the components they own.
•Users having privileges for (b),(d),(e),(f) for components they have access to.
•Modify the defects by changing/putting values in fields.
•Assign defects to other users having access to the component.
•Generate reports of defects for components on which the user has access.
•Add a User to the component for creating and modifying defects against that component.
4. The Application Admin should be able to do the following tasks in addition to 1 & 3:
•Remove a user.
Database Requirements
Centralized
Integration Requirements
Web/Pervasive enabled
Preferred Technologies
Other Details
The application should be highly secure, with different levels and categories of access
control.
The Six Sigma metrics used in the manufacturing industries are equally useful for the service sector.
The metrics will change as per the service processes. The appropriate selection of the process,
qualitative as well as quantitative, in the use of Six Sigma is necessary.
For example, while developing a website, certain factors like site design, color schemes, user
interaction and easy navigation need to be kept in mind. When Six Sigma concepts are applied to the
site development process, all these variables will be examined for their effect on the customer – and
the ones that need improvement will be determined. It is a bit like carrying out a simple
improvement in the manufacturing process.
Defects in the service sector can be defined as the problem in the process that leads to low customer
satisfaction. These defects can be characterized as qualitative and quantitative. When a defect is
measured quantitatively, it should also be converted into equivalent qualitative measures and vice
versa.
For example, if the customer satisfaction for a service is being measured qualitatively, then it should
also be converted to a quantitative "satisfaction level" on a scale of 10. Below a certain level, the
process needs improvement.
Another example is defining defects in quantitative measures, such as delivery services. For example,
newspaper delivery has to happen before a certain time to be effective.
Level of measurement: Using the appropriate level of measurement is very important for it to be
useful and meaningful. For example, 20 percent of the processes may be taking 80
percent of the total time of the project. When analyzing the qualitative measures, 20 percent of the
customers may account for 80 percent of customer dissatisfaction (i.e. defects).
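The 80/20 observation can be illustrated with a simple cumulative (Pareto) computation; the process times below are made-up sample data:

```python
# Find the smallest set of processes accounting for at least 80% of total time.
times = {"P1": 45, "P2": 36, "P3": 5, "P4": 4, "P5": 3,
         "P6": 2, "P7": 2, "P8": 1, "P9": 1, "P10": 1}  # hours (made up)

total = sum(times.values())
cum, vital_few = 0, []
for name, t in sorted(times.items(), key=lambda kv: -kv[1]):
    vital_few.append(name)
    cum += t
    if cum / total >= 0.80:
        break

print(vital_few, f"account for {cum / total:.0%} of total time")
```

Here 2 of the 10 processes (20 percent) account for 81 percent of the time, mirroring the pattern described above.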
The measurement of key areas, in contrast to detailed study, is necessary to get the larger picture of
the process defects.
Accounting for variations: In service processes there are large numbers of variations that may arise,
depending upon the complexity of the given task. The measurement of the typical task has to be
done, as well as for special cases or situations that arise.
Emphasize quantitative as well as qualitative measures: A proper mix of the qualitative and the
quantitative measures is very important to get useful results. A retailer’s process, which has more
personal customer contact, needs to measure the qualitative steps of the process.
A company that provides services where speed is relevant needs to concentrate more on the study
of quantitative measures.
Management should communicate the relevance and effect of Six Sigma with the people involved to
achieve the support for it.
As Six Sigma in service processes is ultimately linked to customer satisfaction, leading to
increased sales, the need to measure and improve these processes is important.
Think about it, all performance improvement methodologies (PDCA, Six Sigma, TQM,
reengineering, etc.) have four elements in common:
1. Customer Requirements
2. Process Maps and Measures
3. Data/Root Cause Analysis
4. Improvement Strategies
Six Sigma performance is a worthy business goal. However, the investment required to train
Green Belts and Black Belts is significant, as is the cultural shift that may be needed to
embrace advanced statistical methods. Fortunately, it is not an all-or-nothing proposition for
your organization. You can begin the journey simply by enhancing current process
improvement techniques.
Ishikawa Diagram
Definition: A graphic tool used to explore and display opinion about sources of variation in a
process. (Also called a Cause and Effect Chart or Fishbone Diagram.) Ishikawa diagrams (also called
fishbone diagrams, cause-and-effect diagrams or Fishikawa) are diagrams that show the causes of a
certain event -- created by Kaoru Ishikawa (1990).[1] Common uses of the Ishikawa diagram are
product design and quality defect prevention, to identify potential factors causing an overall effect.
Each cause or reason for imperfection is a source of variation. Causes are usually grouped into major
categories to identify these sources of variation.
The figure below shows an Ishikawa diagram. Note that this tool is referred to by
several different names: Ishikawa diagram, Cause and Effect diagram, Fishbone and Root Cause
Analysis. These names all refer to the same tool. The first name is after the inventor of the tool - K.
Ishikawa (1969) who first used the technique in the 1960s. Cause and Effect also aptly describes the
tool, since the tool is used to capture the causes of a particular effect and the relationships between
cause and effect. The term fishbone is used to describe the look of the diagram on paper. The basic
use of the tool is to find root causes of problems; hence, this last name.
How to Construct:
Tip:
Consider this figure. The basic concept in the fishbone diagram is that the name of a basic problem is
entered at the right of the diagram at the end of the main 'bone.' This is the problem of interest. At
an angle to this main bone are located typically three to six sub-bones which are the contributing
general causes to the problem under consideration. Associated with each of the sub-bones are the
causes which are responsible for the problem designated. This subdivision into ever increasing
specificity continues as long as the problem areas can be further subdivided. The practical maximum
depth of this tree is usually about four or five levels. When the fishbone is complete, one has a
rather complete picture of all the possibilities about what could be the root cause for the designated
problem.
The fishbone diagram can be used by individuals or teams; probably most effectively by a group. A
typical utilization is the drawing of a fishbone diagram on a blackboard by a team leader who first
asserts the main problem and asks for assistance from the group to determine the main causes
which are subsequently drawn on the board as the main bones of the diagram. The team assists by
making suggestions and, eventually, the entire cause and effect diagram is filled out. Once the entire
fishbone is complete, team discussion takes place to decide what are the most likely root causes of
the problem. These causes are circled to indicate items that should be acted upon, and the use of
the fishbone tool is complete.
The Ishikawa diagram, like most quality tools, is a visualization and knowledge organization tool.
Simply collecting the ideas of a group in a systematic way facilitates the understanding and ultimate
diagnosis of the problem. Several computer tools have been created for assisting in creating
Ishikawa diagrams. A tool created by the Japanese Union of Scientists and Engineers (JUSE) provides
a rather rigid tool with a limited number of bones. Other similar tools can be created using various
commercial tools. One example of creating a fishbone diagram is shown in an upcoming chapter.
Only one tool has been created that adds computer analysis to the fishbone. Bourne et al. (1991)
reported using Dempster-Shafer theory (Shafer and Logan, 1987) to systematically organize the
beliefs about the various causes that contribute to the main problem. Based on the idea that the
main problem has a total belief of one, each remaining bone has a belief assigned to it based on
several factors; these include the history of problems of a given bone, events and their causal
relationship to the bone, and the belief of the user of the tool about the likelihood that any
particular bone is the cause of the problem.
Purpose: To clearly illustrate the various sources of variation affecting a given KQC by sorting and
relating the sources to the effect.
Methods: How the process is performed and the specific requirements for doing it, such as
policies, procedures, rules, regulations and laws
Machines: Any equipment, computers, tools etc. required to accomplish the job
Materials: Raw materials, parts, pens, paper, etc. used to produce the final product
Measurements: Data generated from the process that are used to evaluate its quality
Environment: The conditions, such as location, time, temperature, and culture in which the
process operates
Causes
Causes in the diagram are often categorized, such as into the 8 Ms described below. Cause-
and-effect diagrams can reveal key relationships among various variables, and the possible
causes provide additional insight into process behavior.
Causes can be derived from brainstorming sessions. These groups can then be labeled as
categories of the fishbone. They will typically be one of the traditional categories mentioned
above but may be something unique to the application in a specific case. Causes can be traced
back to root causes with the 5 Whys technique.
Machine (technology)
Method (process)
Material (Includes Raw Material, Consumables and Information.)
Man Power (physical work)/Mind Power (brain work): Kaizens, Suggestions
Measurement (Inspection)
Milieu/Mother Nature (Environment)
Management/Money Power
Maintenance
Product=Service
Price
Place
Promotion/Entertainment
People(key person)
Process
Physical Evidence
Productivity & Quality
Surroundings
Suppliers
Systems
Skills
People
– Was the document properly interpreted? – Was the information properly disseminated? –
Did the recipient understand the information? – Was the proper training to perform the task
administered to the person? – Was too much judgment required to perform the task? – Were
guidelines for judgment available? – Did the environment influence the actions of the
individual? – Are there distractions in the workplace? – Is fatigue a mitigating factor? – How
much experience does the individual have in performing this task?
Machines
– Was the correct tool used? – Are files saved with the correct extension to the correct
location? – Is the equipment affected by the environment? – Is the equipment being properly
maintained (i.e., daily/weekly/monthly preventative maintenance schedule) – Does the
software or hardware need to be updated? – Does the equipment or software have the features
to support our needs/usage? – Was the machine properly programmed? – Is the
tooling/fixturing adequate for the job? – Does the machine have an adequate guard? – Was
the equipment used within its capabilities and limitations? – Are all controls including
emergency stop button clearly labeled and/or color coded or size differentiated? – Is the
equipment the right application for the given job?
Measurement
– Does the gauge have a valid calibration date? – Was the proper gauge used to measure the
part, process, chemical, compound, etc.? – Was a gauge capability study ever performed? –
Do measurements vary significantly from operator to operator? - Do operators have a tough
time using the prescribed gauge? - Is the gauge fixturing adequate? – Does the gauge have
proper measurement resolution? – Did the environment influence the measurements taken?
Materials
– Is all needed information available and accurate? – Can information be verified or cross-checked? – Has any information changed recently / do we have a way of keeping the
information up to date? – What happens if we don't have all of the information we need? – Is
a Material Safety Data Sheet (MSDS) readily available? – Was the material properly tested?
– Was the material substituted? – Is the supplier’s process defined and controlled? – Were
quality requirements adequate for part function? – Was the material contaminated? – Was the
material handled properly (stored, dispensed, used & disposed)?
Environment
– Is the process affected by temperature changes over the course of a day? – Is the process
affected by humidity, vibration, noise, lighting, etc.? – Does the process run in a controlled
environment? – Are associates distracted by noise, uncomfortable temperatures, fluorescent
lighting, etc.?
Method
– Was the canister, barrel, etc. labeled properly? – Were the workers trained properly in the
procedure? – Was the testing performed statistically significant? – Was data tested for true
root cause? – How many “if necessary” and “approximately” phrases are found in this
process? – Was this a process generated by an Integrated Product Development (IPD) Team?
– Did the IPD Team employ Design for Environmental (DFE) principles? – Has a capability
study ever been performed for this process? – Is the process under Statistical Process Control?
Advantage:
•Different opinions by teamwork
•Easy to apply
•Little effort to practise
•Better understanding of causes and effects
Disadvantage:
Other uses for the Cause and Effect tool include the organization diagramming, parts
hierarchies, project planning, tree diagrams, and the 5 Why's.
Steps and Activities

Identify the problem: Write the problem/issue to be studied in the "head of the fish". From this box originates the main branch (the 'fish spine') of the diagram.

Identify the major categories: Brainstorm the major categories of causes of the problem. Label each bone of the fish, writing the categories of causes as branches from the main arrow. Commonly used categories are Machine, Method, Materials, Measurement, Man and Mother Nature.

Identify possible causes: Ask: "Why does this happen?" As each idea is given, the facilitator writes it as a branch from the appropriate category. Again ask "Why does this happen?" about each cause, writing sub-causes branching off the causes. Layers of branches indicate causal relationships. When the group runs out of ideas, focus attention on the places on the chart where ideas are few.

Interpret your diagram: Analyze the results of the fishbone after team members agree that an adequate amount of detail has been provided under each major category. Do this by looking for those items that appear in more than one category; these become the most likely causes. For the items identified as the most likely causes, the team should reach consensus on listing them in priority order, with the first item being the most probable cause.
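The layered structure these steps produce (problem head, category bones, causes, sub-causes) can be represented as a simple nested mapping. This is an illustrative sketch with hypothetical example data, not part of the original text:

```python
# Illustrative sketch (hypothetical data): a cause-and-effect (fishbone)
# diagram as a nested mapping of category -> cause -> list of sub-causes.
fishbone = {
    "problem": "High defect rate in release builds",  # the "head of the fish"
    "categories": {                                   # the major "bones"
        "Method": {
            "Inadequate code review": ["No checklist", "Reviews rushed"],
        },
        "Man": {
            "Insufficient training": ["New hires", "No mentoring"],
        },
        "Machine": {
            "Unstable build server": ["Old hardware"],
        },
    },
}

def print_fishbone(diagram):
    """Render the diagram as an indented text outline."""
    print(diagram["problem"])
    for category, causes in diagram["categories"].items():
        print(f"  [{category}]")
        for cause, sub_causes in causes.items():
            print(f"    - {cause}")
            for sub in sub_causes:
                print(f"        * {sub}")

print_fishbone(fishbone)
```

Each level of nesting corresponds to one round of asking "Why does this happen?" in the steps above.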
Small Organization
Small organizations, by definition the opposite of large organizations, have fewer than 200 employees, although it should be noted that many of the companies studied have 30 or fewer employees. This is noted because businesses with more than some arbitrary number of employees, say 100, may be considered large by organizations with fewer employees, and sufficient in manpower to provide the resources typically associated with the needs of an industry-accepted SPMM (software process maturity model).
PROCESS MODELS
Small organizations may at first balk at the seemingly prohibitive cost of adopting an SPMM. One study suggests that the cost to implement the Capability Maturity Model is between $490 and $2,004 per person, with a median of $1,475, and that achieving the next level of the Capability Maturity Model can cost upward of $1,000,000. In addition, software product assessment can take between $45,000 and $100,000. Meeting the goals of some key processes can be financially taxing, and it may be necessary to tailor the SPMM's key process areas.
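As a back-of-the-envelope illustration of the per-person figures cited above (the 30-person headcount is a hypothetical example typical of the small companies studied, not a number from the study itself):

```python
# Rough CMM adoption-cost estimate using the per-person figures cited above.
# The 30-person headcount is a hypothetical example, not from the study.
COST_PER_PERSON = {"low": 490, "median": 1475, "high": 2004}  # USD per person

def implementation_cost(headcount, estimate="median"):
    """Estimated implementation cost for an organization of a given size."""
    return headcount * COST_PER_PERSON[estimate]

print(implementation_cost(30))           # -> 44250
print(implementation_cost(30, "high"))   # -> 60120
```

Even at the median figure, a 30-person shop faces roughly $44,000 in implementation cost before assessment, which makes the tailoring of key process areas mentioned above attractive.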
Despite the fundamental size differences between large and small organizations, it is possible for small organizations to overcome the hurdles that an SPMM presents. It has been found that by structuring a project's intentions up front, software developers can generate the necessary components efficiently while lessening the burden inherent in guessing. The organization is also able to keep valuable resources from having to determine the scope and intent of the project after the development phase has begun.
Greatly enhanced levels of predictability can be achieved by SPMM adoption. One of the most significant benefits of adopting an SPMM is the ability to gather data about past and current projects and to apply a set of metrics to that data, allowing the organization to measure weaknesses and more readily predict where failures may occur. Predictability alone is an incredible asset from which a small organization can tremendously benefit.
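As a hypothetical illustration of applying a metric to historical project data (the metric choice and the module names and numbers below are assumptions, not from this text), defect density over past modules can point at where failures are most likely to recur:

```python
# Illustrative sketch (hypothetical data): using defect density
# (defects per KLOC) from past projects to flag likely trouble spots.
past_modules = {
    # module name: (defects found, thousands of lines of code)
    "parser":    (42, 12.0),
    "ui":        (10, 20.0),
    "scheduler": (30,  5.0),
}

def defect_density(defects, kloc):
    """Defects per thousand lines of code."""
    return defects / kloc

def riskiest(modules):
    """Rank modules by defect density, highest first."""
    return sorted(modules, key=lambda m: defect_density(*modules[m]), reverse=True)

print(riskiest(past_modules))  # -> ['scheduler', 'parser', 'ui']
```

Here the smallest module turns out to be the riskiest, which is exactly the kind of non-obvious weakness that collected project data makes visible.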
Increased Workflow
The use of an SPMM increases productivity and workflow by providing a standard by which the organization can abide. SPMM are the assembly line of the software industry: they provide the team of developers a structure by which they may efficiently and effectively deliver products. A structured environment is extremely beneficial in helping to prevent the unnecessary deviation found in organizations that adhere to no SPMM.
Marketability also follows from software process improvement. Small organizations that adopt SPMM are more marketable and, from that, more competitive. Growing organizations require the operational efficiency that an SPMM provides, and that efficiency is seen by the market as maturity and capability. SPMM adherence is expected by other or larger entities: organizations that place bids on contracts are given preference if they show adherence to an SPMM. Adopting and adhering to a model is a way of telling potential customers that the resulting product is one worthy of emerging from the crowded marketplace.
Small organizations have adopted CMMI with great success despite initial negative perceptions. They see organizational success after applying the model and can expect to grow their business because of SPMM adoption. CMMI improves the quality of the product and its perception both within and outside the organization, helping to give these organizations the essential skills and structure necessary to produce better software.
Adopting the Model
Small organizations often believe they cannot afford expensive models, because of the perceived implementation cost, while still being able to bid low on competitive contracts. They also feel that they simply do not have the manpower and resources necessary to adopt an SPMM, but it is of the utmost importance for the small organization to establish a continuous improvement activity. Re-organization, education, and new hires are all potential necessities when trying to comply with CMMI, while establishing configuration and requirement procedures is a priority. Initially it seems easy to identify technical-area improvements and hard to analyze organizational improvements; the organization may need to perform organizational improvements before being able to apply CMMI to software operations. Small-organization SPMM implementation strategy has higher employee involvement and flexibility than that of large organizations; the main difference between small and large organizations is that smaller organizations adapt to change or instability through exploration. It has been shown that small organizations can improve business by implementing SPMM elements and that size does not affect SPMM benefits. Small organization size also allows all employees to be trained, leading to better cohesion and understanding of why the process was adopted and how it applies to software development, versus larger organizations, where time and money may inhibit training and groups are less cohesive.
Organizational Improvement
CMMI is a framework that other projects can adapt to, not just a one-off model, and it is enterprise-wide. Adaptation of CMMI for small organizations can begin with a few written policies: moving to adopt some other models, then fully adopting CMMI while achieving the target capability level within one year of beginning the model, is a strategy employed by some fledgling businesses first entering SPMM maturity. Small-organization adoption objectives can include fitting in with the current system, having little overhead, not interfering with projects, having measurable pay-offs and benefits, requiring less software rework, and providing national recognition.
**************