Quality Assurance
Amity University
PREFACE
Software quality is a complex and multifaceted concept that can be described from different
perspectives depending on the context peculiarities and stakeholders. Though measuring quality is not a new theme, asking a developer to measure the quality of a product may still sound like an unfamiliar or even new aspect of software activities. In this book an attempt has been
made to describe various pertinent aspects of software quality from different points of view.
Quality is a dynamic attribute which keeps on changing over the life cycle time of the product,
product line and product family. Quality attributes must be sustained, preserved and improved.
Therefore, it appears high time to introduce software quality aspects to software engineers of
today rather than wait for them to learn through experience at a high cost.
Software quality assurance is now such a huge area that it is impossible to cover the whole
subject in one book. In addition, I emphasize the importance of the software quality assurance life cycle; discuss software quality assurance planning, monitoring, and testing; and show how to understand and establish standards and procedures. I investigate the need for software quality metrics and models and cover basic software quality assurance activities. The book also describes the benefits of software quality assurance for projects, software quality assurance planning, established standards and the evolution of standards. It also focuses on software measurement and metrics, together with the need for, importance, and significance of software metrics. Good testing
involves much more than just running the program a few times to see whether it works.
Thorough analysis of a program helps us to test more systematically and more effectively. My
focus, therefore, is on key topics that are fundamental to all software development processes and
topics concerned with the software development process, Software requirements and
specifications, Software design techniques, Techniques for developing large software systems,
CASE tools and software development environments, Software testing, documentation and
maintenance. I need to combine the best of these approaches to build better software systems.
The book is primarily intended as a student text for senior undergraduate and graduate students
studying computer science, software engineering or systems engineering.
In this course, chapters 1 and 2 may be used to provide an overview of software quality and
quality models.
A more extensive course, lasting a semester, might develop this material with either a
process or a techniques focus. If the orientation of the course is towards processes, then chapters
3, 4, 5 and 6 which cover software quality assurance, software quality control, metrics and
measurement of quality and quality standards might be covered in addition to the introductory
material.
Nevertheless, I hope that all software engineers and software engineering students can find something of value here. The syllabus is provided below for your reference:
SYLLABUS
Why Quality?, Cost of Quality, TQM concept, Quality Pioneers, Approaches to Quality.
Software Development Process, S/w quality Attributes (Product Specific and Organization Specific), Hierarchical Models of Quality, Concept of Quality Assurance and Quality Control
Implementing an IT Quality function, Content of SQA Plan, Quality Tools, Quality baselines,
Model and assessment fundamentals, Internal Auditing and Quality assurance.
Testing Concepts - ad hoc, white box, black box and integration, Cost Effectiveness of Software Testing - credibility & ROI, right methods, Developing Testing Methodologies - acquire and study the test strategy, building the system test plan and unit plan, Verification and Validation methods, Software Change Control - SCM, change control procedure, Defect Management - causes, detection, removal and tracking.
Measuring Quality, measurement concepts - standard unit of measure, software metrics, Metrics Bucket, Problems with Metrics, Objective and subjective measurement, measures of central tendency, attributes of good measurement, Installing a measurement program, Risk Management - defining and characterizing risk, managing risk, software risk management.
Introduction to various Quality standards: ISO-9000 Series, Six Sigma, SEI CMMi Model.
Table of Contents
PREFACE ...................................................................................................................................... 2
SYLLABUS ................................................................................................................................... 4
CHAPTER 1 : QUALITY CONCEPTS AND PRACTICES ................................................. 13
1.1 INTRODUCTION ............................................................................................................... 13
1.1.1 Definition of Quality .................................................................................................... 14
1.2 COST OF QUALITY .......................................................................................................... 15
1.3 TOTAL QUALITY MANAGEMENT ............................................................................... 17
1.3.1 TQM Definition ............................................................................................................ 17
1.3.2 Principles of TQM ........................................................................................................ 19
1.3.3 The Concept of Continuous Improvement by TQM .................................................... 20
1.3.4 Implementation Principles and Processes of TQM ...................................................... 22
1.3.5 The building blocks of TQM ........................................................................................ 23
1.4 APPROACHES TO QUALITY .......................................................................................... 25
1.4.1 TQM Approach............................................................................................................. 25
1.4.2 Six Sigma ..................................................................................................................... 26
1.5 SUMMARY ........................................................................................................................ 27
Assignment-Module 1 ............................................................................................................... 28
Key - Module 1 ......................................................................................................................... 31
CHAPTER 2 : SOFTWARE QUALITY .................................................................................. 32
2.1 SOFTWARE DEVELOPMENT PROCESS ...................................................................... 32
2.1.1 System/Information Engineering and Modeling .......................................................... 32
2.1.2 Software Development Life Cycle ............................................................................... 33
2.1.3 Processes ....................................................................................................................... 33
2.1.4 Software development activities ................................................................................... 33
2.1.5 Process Activities/Steps ................................................................................................ 34
2.2 SOFTWARE DEVELOPMENT MODELS OR PROCESS MODEL ............................... 37
2.2.1 Waterfall Model ............................................................................................................ 37
2.2.2 Prototyping Model ........................................................................................................ 38
2.2.3 Spiral model .................................................................................................................. 38
2.2.4 Strength and Weakness of Waterfall, Prototype and Spiral Model .............................. 40
2.2.5 Iterative processes......................................................................................................... 41
2.2.6 Rapid Application Development (RAD) Model ........................................................... 43
2.2.7 Component Assembly Model ....................................................................................... 44
2.2.8 Process improvement models ....................................................................................... 45
2.3 SOFTWARE QUALITY ATTRIBUTES ........................................................................... 46
2.3.1 Introduction .................................................................................................................. 46
2.3.2 Common Quality Attributes ......................................................................................... 47
2.4 HIERARCHICAL MODELS OF QUALITY ..................................................................... 61
2.4.1 What is hierarchical model? ......................................................................................... 61
2.4.2 THE McCALL AND BOEHM MODELS ................................................................... 65
2.5 PRACTICAL EVALUATION ............................................................................................ 70
2.5.1 Quality Assurance......................................................................................................... 73
2.5.2 Quality Assurance Plan ................................................................................................ 73
2.5.3 Quality control .............................................................................................................. 75
2.5.4 Quality Assurance (QA) ............................................................................................... 76
2.5.5 Quality Control (QC): ................................................................................................... 77
2.5.6 The Following Statements help differentiate Quality Control from Quality Assurance 77
2.6 SUMMARY ........................................................................................................................ 78
Assignment-Module 2 ............................................................................................................... 80
Key - Module 2 ...................................................................................................................... 82
CHAPTER 3 : SOFTWARE QUALITY ASSURANCE ......................................................... 83
3.1 IMPLEMENTING IT QUALITY FUNCTION ................................................................. 83
3.1.1 Past experience ............................................................................................................. 83
3.1.2 Create a clear mission ................................................................................................... 84
3.1.3 Set specific objectives .................................................................................................. 85
3.1.4 Develop simple strategies ............................................................................................. 85
3.1.5 Design a small, focused quality function...................................................................... 85
3.2 QUALITY FUNCTION DEPLOYMENT .......................................................................... 87
3.2.1 The QFD Team ............................................................................................................. 90
3.2.2 Benefits of QFD............................................................................................................ 91
3.3 ORGANIZATION OF INFORMATION ........................................................................... 96
3.3.1 Affinity Diagram .......................................................................................................... 97
3.4 HOUSE OF QUALITY ....................................................................................................... 98
3.5 SQA PLANNING ............................................................................................................... 99
3.5.1 SQA Plan Content ...................................................................................................... 100
3.6 QUALITY TOOLS ........................................................................................................... 101
3.7 QUALITY BASELINES................................................................................................... 119
3.7.1 Quality Baseline Concepts.......................................................................................... 119
3.7.2 Methods Used for Establishing Baselines .................................................................. 119
3.7.3 Model and Assessment Fundamentals ........................................................................ 119
3.7.4 Industry Quality Models ............................................................................................. 120
3.8 INTERNAL AUDITING AND QUALITY ASSURANCE ............................................. 120
3.8.1 Internal Audit Quality Assurance Reviews ................................................................ 121
3.8.2 Quality assurance services include: ............................................................................ 121
3.8.3 Scope of QAR:............................................................................................................ 121
3.8.4 Benefits of QAR: ........................................................................................................ 122
3.9 SUMMARY ...................................................................................................................... 122
Assignment-Module 3 ............................................................................................................. 124
Key - Module 3 .................................................................................................................... 126
CHAPTER 4 : SOFTWARE QUALITY CONTROL ........................................................... 127
4.1 SOFTWARE TESTING .................................................................................................... 127
4.1.1 Cost Effectiveness of Testing ..................................................................................... 128
4.2 SOME FUNDAMENTAL CONCEPTS ........................................................................... 129
4.2.1 Defects and failures .................................................................................................... 129
4.2.2 Input combinations and preconditions ........................................................................ 129
4.2.3 Economics .................................................................................................................. 130
4.2.4 Roles ........................................................................................................................... 130
4.3. KEY ISSUES ................................................................................................................. 130
4.3.1 Test selection criteria/Test adequacy criteria ............................................................. 130
4.3.2 Testing effectiveness/Objectives for testing ............................................................... 130
4.3.3 Testing for defect identification ................................................................................. 131
4.3.4 The oracle problem ..................................................................................................... 131
4.3.5 Theoretical and practical limitations of testing .......................................................... 131
4.3.6 The problem of infeasible paths ................................................................................. 131
4.3.7 Testability ................................................................................................................... 132
4.4 TESTING METHODS ...................................................................................................... 132
4.4.1 Static vs. dynamic testing ........................................................................................... 132
4.4.2 The box approach ....................................................................................................... 132
4.4.3 White-Box testing ....................................................................................................... 133
4.4.4 Black-box testing ........................................................................................................ 134
4.4.5 Grey-box testing ......................................................................................................... 135
4.4.6 Visual testing .............................................................................................................. 136
4.5 TESTING LEVELS .......................................................................................................... 137
4.5.1 Unit testing ................................................................................................................. 137
4.5.2 Integration testing ....................................................................................................... 137
4.5.3 System testing ............................................................................................................. 138
4.5.4 System integration testing .......................................................................................... 138
4.5.5 Top-down and bottom-up ........................................................................................... 138
4.6. OBJECTIVES OF TESTING .......................................................................................... 138
4.6.1 Installation testing....................................................................................................... 138
4.6.2 Compatibility testing .................................................................................................. 139
4.6.3 Smoke and sanity testing ............................................................................................ 139
4.6.4 Regression testing ....................................................................................................... 139
4.6.5 Acceptance testing ...................................................................................................... 140
4.6.6 Alpha testing ............................................................................................................... 140
4.6.7 Beta testing ................................................................................................................. 140
4.6.8 Functional vs non-functional testing .......................................................................... 140
4.6.9 Destructive testing ...................................................................................................... 141
4.6.10 Software performance testing ................................................................................... 141
4.6.11 Usability testing ........................................................................................................ 142
4.6.12 Accessibility ............................................................................................................. 142
4.6.13 Security testing ......................................................................................................... 142
4.6.14 Internationalization and localization ........................................................................ 142
4.7 THE TESTING PROCESS ............................................................................................... 144
4.7.1 Practical considerations .............................................................................................. 144
4.7. 2 Test Activities ............................................................................................................ 146
4.8 SOFTWARE TESTING LIFE CYCLE ............................................................................ 148
4.8.1 Measurement in software testing ................................................................................ 150
4.8.2 Testing artifacts .......................................................................................................... 150
4.8.3 Test Case Development .............................................................................................. 152
4.8.4 General Guidelines ..................................................................................................... 152
4.8.5 Test Case – Sample Structure ..................................................................................... 153
4.8.6 Most common software errors .................................................................................... 153
4.8.7 Guidelines for good tester? ......................................................................................... 155
4.9 SOFTWARE VERIFICATION AND VALIDATION ..................................................... 156
4.9.1 Software Verification and Validation Methods .......................................................... 158
4.10 SOFTWARE CHANGE CONTROL .............................................................................. 166
4.10.1 Software Change Requirements ............................................................................... 166
4.11 SOFTWARE CHANGE MANAGEMENT .................................................................... 169
4.11.1 Change Management and Configuration Management ............................................ 169
4.11.2 Where Changes Originate ......................................................................................... 170
4.11.5 Change Management Tools ...................................................................................... 174
4.11.6 SCM Tools................................................................................................................ 175
4.11.7 Problem-Report and Change-Request Tracking ....................................................... 176
4.11.8 Key to Change Management .................................................................................... 176
4.12 SOFTWARE CHANGE CONTROL PROCEDURES ................................................... 177
4.12.1 Initiating the Change ................................................................................................ 177
4.12.2 Working on the Change Request .............................................................................. 177
4.12.3 Testing the Change Request ..................................................................................... 178
4.13 DEFECT MANAGEMENT ............................................................................................ 178
4.13.1 What is a defect?....................................................................................................... 178
4.13.2 What are the defect categories? ................................................................................ 178
4.13.3 Defect Management Process .................................................................................... 180
4.13.4 Steps in Defect Management Process ....................................................... 180
4.15 SUMMARY .................................................................................................................... 183
Assignment-Module 4 ............................................................................................................. 184
Key - Module 4 .................................................................................................................... 187
CHAPTER 5 : METRICS AND MEASUREMENT OF SOFTWARE QUALITY .............. 188
5.1 MEASURING SOFTWARE QUALITY .......................................................................... 188
5.1.1 Measuring quality automatically ................................................................................ 188
5.2 SOFTWARE METRICS ................................................................................................... 189
5.3 TYPE OF SOFTWARE METRICS: ................................................................................. 190
5.4 ADVANTAGE OF SOFTWARE METRICS: .................................................................. 191
5.5 LIMITATION OF SOFTWARE METRICS: ................................................................... 191
5.6 SIZE METRICS ................................................................................................................ 192
5.7 SCIENCE METRICS ........................................................................................................ 193
5.8 FLOW METRICS ............................................................................................................. 195
5.9 INFORMATION FLOW METRICS ................................................................................ 196
5.10 PROBLEM WITH METRICS ........................................................................................ 198
5.10.1 Common mistakes include: ...................................................................................... 199
5.10.2 The main points with metrics are: ............................................................................ 199
5.10.3 Characteristics of Good Metrics ............................................................................... 200
5.11 OBJECTIVE AND SUBJECTIVE MEASUREMENT .................................................. 201
5.11.1 Objective Quality Assessment .................................................................................. 202
5.11.2 Subjective Quality Assessment ................................................................................ 203
5.12 MEASURES OF CENTRAL TENDENCY ................................................................... 203
5.12.1 Definition of Measures of Central Tendency ........................................................... 203
5.12.2 More about Measures of Central Tendency ............................................................. 203
5.12.3 Examples of Measures of Central Tendency ............................................................ 204
5.12.4 Example on Measures of Central Tendency ............................................................. 204
5.12.5 Properties of a good measure of central tendency are:-............................................ 204
5.12.6 Characteristics of Good Measurement ..................................................................... 205
5.13 INSTALLING THE MEASUREMENT PROGRAM .................................................... 205
5.13.1 Build the Measurement base..................................................................................... 206
5.13.2 Manage towards results. ........................................................................................... 206
5.13.3 Manage by process. .................................................................................................. 208
5.13.4 Management by fact. ................................................................................................ 209
5.14 RISK MANAGEMENT .................................................................................................. 209
5.14.1 Types of Risk ............................................................................................................ 210
5.14.2 Categories of risks: ................................................................................................... 210
5.14.3 Goals of Risk Management ...................................................................................... 212
5.14.4 Process for Identifying and Managing Risk ............................................................. 213
5.14.5 Strategies for Managing Risk ................................................................................... 213
5.15 RISK MANAGEMENT PROCESS ............................................................................... 214
5.16 RISK IDENTIFICATION ............................................................................................... 215
5.17 RISK ANALYSIS ........................................................................................................... 216
5.18 RISK MANAGEMENT PLANNING ............................................................................ 217
5.19 SOFTWARE RISK MANAGEMENT PROCESS ......................................................... 218
5.19.1 Risk Assessment ....................................................................................................... 219
5.19.2 Review based Risk Assessment Process .................................................................. 220
5.19.3 Data Model of Risk Management ............................................................................. 221
5.19.4 Risk Mitigation ......................................................................................................... 222
5.20 SUMMARY .................................................................................................................... 222
Assignment-Module 5 ............................................................................................................. 223
Key - Module 5 .................................................................................................................... 226
CHAPTER 6 : QUALITY STANDARDS............................................................................... 227
6.1 ISO 9000 series ................................................................................................................. 227
6.1.1 Benefits of ISO 9000 .................................................................................................. 227
6.1.2 Advantages And Disadvantages Of ISO? ................................................................... 228
6.1.3 ISO 9000 Series .......................................................................................................... 229
6.2 SIX SIGMA....................................................................................................................... 230
6.2.1 Methods .......................................................................................................................... 232
6.2.1.2 DMADV or DFSS Method ...................................................................................... 233
6.2.2 Quality management tools and methods used in Six Sigma ....................................... 233
6.2.3 Implementation roles .................................................................................................. 234
6.2.4 Certification .................................................................................................................... 235
6.2.5 Origin and meaning of the term "six sigma process" ..................................................... 236
6.2.6 Role of the 1.5 sigma shift .......................................................................................... 237
6.2.7 Sigma levels ................................................................................................................ 237
6.2.8 Software used for Six Sigma ...................................................................................... 239
6.2.9 Application ................................................................................................................. 240
6.2.10 Criticism ................................................................................................................... 241
6.3 CAPABILITY MATURITY MODEL INTEGRATION (CMMI) ................................... 244
6.3.1 CMMI representation ................................................................................................. 246
6.3.2 Appraisal ..................................................................................................................... 248
6.4 SUMMARY ...................................................................................................................... 249
Assignment-Module 6 ............................................................................................. 250
Key - Module 6 .................................................................................................................... 252
REFERENCES .......................................................................................................................... 253
CHAPTER 1 : QUALITY CONCEPTS AND PRACTICES
1.1 INTRODUCTION
The concept of software quality is more complex than most people tend to believe. However, the term is widely used by both laypeople and IT professionals. If we look at the
definition of quality in a dictionary, it is usual to find something like the following: set of
characteristics that allows us to rank things as better or worse than other similar ones. In many
cases, dictionaries mention the idea of excellence together with this type of definitions.
Certainly, this idea of quality does not help engineers to improve results in the different fields of
activity. In the world of industrial quality in general, a transition from a rigid concept to an adaptive one took place many years ago. The current view tends to be closer to the traditional idea of beauty: “it is in the eyes of the observer”. So, we reject absolute concepts and tend to use customer satisfaction as the main inspiration. Consider, for example, the characteristics customers use as indicators of “quality” (i.e. excellence):
Product nature
Reputation of raw materials
Manufacturing location
Manufacturing method
Point-of-sale standing (e.g., higher at a sophisticated restaurant than at the usual pub)
Price
Results
To understand the landscape of software quality it is central to answer the so often asked
question: what is quality? Once the concept of quality is understood it is easier to understand the
different structures of quality available on the market. As many prominent authors and
researchers have provided an answer to that question, we do not have the ambition of introducing
yet another answer but we will rather answer the question by studying the answers that some of
the more prominent gurus of the quality management community have provided. By learning from those who have gone down this path before us, we can identify that there are two major camps when
discussing the meaning and definition of (software) quality:
i) Conformance to specification: Quality that is defined in terms of measurable characteristics satisfying a specification defined in advance.
ii) Meeting customer needs: Quality that is identified independent of any measurable characteristics. That is, quality is defined as the product's or service's capability to meet customer expectations, explicit or not.
Quality software saves a good amount of time and money. Because software will have fewer
defects, this saves time during testing and maintenance phases. Greater reliability contributes to
an immeasurable increase in customer satisfaction as well as lower maintenance costs. Because
maintenance represents a large portion of all software costs, the overall cost of the project will most likely be lower than that of similar projects.
“Quality comprises all characteristics and significant features of a product or an activity which
relate to the satisfying of given requirements”. (German Industry Standard DIN 55350 Part 11)
“Quality is the totality of features and characteristics of a product or a service that bears on its ability to satisfy the given needs” (ANSI Standard ANSI/ASQC A3/1978).
High quality software usually conforms to the user requirements. A customer’s idea of quality
may cover a breadth of features - conformance to specifications, good performance on
platform(s)/configurations, complete fulfilment of operational requirements (even if not specified!), compatibility with all the end-user equipment, no negative impact on the existing end-user base at introduction time, etc.
1.2 COST OF QUALITY
In recent years organizations have been focusing much attention on quality management. There are many different aspects of quality management, but this section focuses on the cost of quality.
The costs associated with quality are divided into two categories: costs due to poor quality and
costs associated with improving quality. Prevention costs and appraisal costs are costs associated
with improving quality, while failure costs result from poor quality. Management must
understand these costs to create a quality improvement strategy. An organization’s main goal is to survive and maintain high-quality goods or services; with a comprehensive understanding of the costs related to quality, this goal can be achieved.
Quality costs are defined as the summation of these costs over the life of a product. Customers prefer products
or services with high quality and a reasonable price. To ensure that customers will receive a product or service that is worth the money they will spend, firms should spend on prevention and
appraisal costs. Prevention costs are associated with preventing defects and imperfections from
occurring. Consider the Johnson and Johnson (J&J) safety seals that appear on all of their
products with the message, “if this safety seal is open do not use.” This is a preventive measure because, in the overall analysis, it is less costly to purchase the safety seals in production than to undergo a possible cyanide scare. The focus of a prevention cost is to assure quality and
minimize or avoid the likelihood of an event with an adverse impact on the company goods,
services or daily operations. This also includes the cost of establishing a quality system. A
quality system should include the following three elements: training, process engineering, and
quality planning. Quality planning is establishing a production process in conformance with
design specification procedures, and designing of the proper test procedures and equipment.
Consider establishing training programs for employees to keep them current with emerging technologies, such as updated computer languages and programs.
Appraisal costs are direct costs of measuring quality. In this case, quality is defined as the
conformance to customer expectations. This includes: lab testing, inspection, test equipment and
materials, costs associated with assessment for ISO 9000 or other quality award assessments. A
common example of appraisal costs is the expense of inspections. An organization should establish inspection of its products and of incoming goods from a supplier before they reach the customer. This is also known as acceptance sampling, a technique used to verify that
products meet quality standards.
Failure costs are separated into two different categories: internal and external. Internal failure costs are expenses incurred from failures detected before the product reaches the customer. These include the cost of troubleshooting and the loss of
production resulting from idle time either from manpower or during the production process.
External failure costs are associated with product failure after the completion of the production
process. An excellent example of external failure costs is the J&J cyanide scare. The company
incurred expenses in response to the customer fears of tampering with a purchased J&J product.
However, J&J managed to survive the incident, in part because of their method of corrective
action.
Philip Crosby states that quality is free. As discussed, the costs related to achieving quality are
traded off between the prevention and appraisal costs and the failure costs. Therefore, the
prevention and appraisal costs resulting from improved quality, allow an organization to
minimize or be free of the failure costs resulting from poor quality. In summation, understanding
cost of quality helps companies to develop quality conformance as a useful strategic business
tool that improves their products, services and image. This leverage is vital in achieving the
goals and mission of a successful organization.
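To make this trade-off concrete, the following sketch (a minimal Python illustration; the function name and all figures are hypothetical, not taken from any organization) totals the four cost-of-quality categories and compares a low-prevention scenario with a high-prevention one.

# Illustrative cost-of-quality comparison; all figures are hypothetical.

def cost_of_quality(prevention, appraisal, internal_failure, external_failure):
    """Total cost of quality = conformance costs + non-conformance costs."""
    conformance = prevention + appraisal
    non_conformance = internal_failure + external_failure
    return conformance + non_conformance

# Scenario A: little is spent on prevention/appraisal, so failure costs dominate.
low_investment = cost_of_quality(prevention=10_000, appraisal=15_000,
                                 internal_failure=60_000, external_failure=90_000)

# Scenario B: more is spent up front and, as Crosby argues, failure costs shrink.
high_investment = cost_of_quality(prevention=30_000, appraisal=25_000,
                                  internal_failure=20_000, external_failure=15_000)

print(f"Low prevention/appraisal spend : {low_investment:,}")   # 175,000
print(f"High prevention/appraisal spend: {high_investment:,}")  # 90,000

In this invented comparison, the larger up-front spend more than pays for itself through lower failure costs, which is exactly Crosby's point that "quality is free".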
1.3 TOTAL QUALITY MANAGEMENT
Total Quality Management is a management approach that originated in the 1950s and has steadily become more popular since the early 1980s. Total Quality is a description of the culture,
attitude and organization of a company that strives to provide customers with products and
services that satisfy their needs. The culture requires quality in all aspects of the company's
operations, with processes being done right the first time and defects and waste eradicated from
operations.
Total Quality Management, TQM, is a method by which management and employees can
become involved in the continuous improvement of the production of goods and services. It is a
combination of quality and management tools aimed at increasing business and reducing losses
due to wasteful practices.
Some of the companies who have implemented TQM include Ford Motor Company, Phillips
Semiconductor, SGL Carbon, Motorola and Toyota Motor Company.
This shows that TQM must be practiced in all activities, by all personnel, in Manufacturing,
Marketing, Engineering, R&D, Sales, Purchasing, HR, etc.
The key elements of TQM include:
Management Commitment
• Plan (drive, direct)
• Check (review)
Employee Empowerment
• Training
• Suggestion scheme
• Excellence teams
Fact-based decision making
• DOE, FMEA
Continuous Improvement
• Systematic measurement and focus on CONQ
• Excellence teams
Customer Focus
• Supplier partnership
Continuous improvement must deal not only with improving results, but more importantly with
improving capabilities to produce better results in the future. The five major areas of focus for
capability improvement are demand generation, supply generation, technology, operations and
people capability.
A central principle of TQM is that mistakes may be made by people, but most of them are
caused, or at least permitted, by faulty systems and processes. This means that the root cause of
such mistakes can be identified and eliminated, and repetition can be prevented by changing the
process.
There are three major mechanisms of prevention:
i. Preventing mistakes (defects) from occurring (mistake-proofing or poka-yoke);
ii. Where mistakes cannot be absolutely prevented, detecting them early so they are not passed further down the value-added chain (inspection at source or by the next operation);
iii. Where mistakes recur, stopping production until the process can be corrected, to prevent the production of more defects.
The basis for TQM implementation is the establishment of a quality management system which
involves the organizational structure, responsibilities, procedures and processes. The most
frequently used guidelines for quality management systems are the ISO 9000 international
standards, which emphasize the establishment of a well-documented, standardized quality
system. The role of the ISO 9000 standards within the TQM circle of continuous improvement is
presented in the following figure.
Continuous improvement is a circular process that links the diagnostic, planning, implementation
and evaluation phases. Within this circular process, the ISO 9000 standards are commonly
applied in the implementation phase. An ISO 9000 quality system also requires the establishment
of procedures that standardize the way an organization handles the diagnostic and evaluation
phases. However, the ISO 9000 standards do not prescribe particular quality management
techniques or quality-control methods. Because it is a generic organizational standard, ISO 9000
does not define quality or provide any specifications of products or processes. ISO 9000
certification only assures that the organization has in place a well-operated quality system that
conforms to the ISO 9000 standards. Consequently, an organization may be certified but still
manufacture poor-quality products.
If an organization has a track record of effective responsiveness to the environment, and if it has
been able to successfully change the way it operates when needed, TQM will be easier to
implement. If an organization has been historically reactive and has no skill at improving its
operating systems, there will be both employee skepticism and a lack of skilled change agents. If
this condition prevails, a comprehensive program of management and leadership development
may be instituted. A management audit is a good assessment tool to identify current levels of
organizational functioning and areas in need of change. An organization should be basically
healthy before beginning TQM. If it has significant problems such as a very unstable funding
base, weak administrative systems, lack of managerial skill, or poor employee morale, TQM
would not be appropriate.
However, a certain level of stress is probably desirable to initiate TQM. People need to feel a
need for a change. Kanter (1983) addresses this phenomenon by describing building blocks which are present in effective organizational change. These forces include departures from
tradition, a crisis or galvanizing event, strategic decisions, individual "prime movers," and action
vehicles. Departures from tradition are activities, usually at lower levels of the organization,
which occur when entrepreneurs move outside the normal ways of operating to solve a problem.
A crisis, if it is not too disabling, can also help create a sense of urgency which can mobilize
people to act. In the case of TQM, this may be a funding cut or threat, or demands from
consumers or other stakeholders for improved quality of service. After a crisis, a leader may
intervene strategically by articulating a new vision of the future to help the organization deal
with it.
A plan to implement TQM may be such a strategic decision. Such a leader may then become a
prime mover, who takes charge in championing the new idea and showing others how it will help
them get where they want to go. Finally, action vehicles are needed: mechanisms or structures that enable the change to occur and become institutionalized.
The only point at which true responsibility for performance and quality can lie is with the people who actually do the job or carry out the process, each of whom has one or several suppliers and customers.
An efficient and effective way to tackle process or quality improvement is through teamwork.
However, people will not engage in improvement activities without commitment and recognition
from the organization’s leaders, a climate for improvement and a strategy that is implemented
thoughtfully and effectively. The section on People expands on these issues, covering roles
within teams, team selection and development and models for successful teamwork.
An appropriate documented Quality Management System will help an organization not only
achieve the objectives set out in its policy and strategy, but also, and equally importantly, sustain
and build upon them. It is imperative that the leaders take responsibility for the adoption and
documentation of an appropriate management system in their organization if they are serious
about the quality journey. The Systems section discusses the benefits of having such a system,
how to set one up and successfully implement it.
Once the strategic direction for the organization’s quality journey has been set, it needs
Performance Measures to monitor and control the journey, and to ensure the desired level of
performance is being achieved and sustained. They can, and should be, established at all levels in
the organization, ideally being cascaded down and most effectively undertaken as team activities
and this is discussed in the section on Performance.
TQM focuses on achieving quality through ingraining the philosophy within an organization,
although it does not form a system or a set of tools through which to achieve this. Companies
adopting a TQM philosophy should see their competitiveness increase, establish a culture of
growth, offer a productive and successful working environment, cut stress and waste and build
teams and partnerships.
The principles of TQM have been laid out in the ISO 9000 family of standards from the
International Organization for Standardization. Adopted by over one million companies in 176
countries worldwide, the standards lay down the requirements of a quality management system,
but not how these should be met.
Alternatively, the DMADV (define, measure, analyse, design, verify) system is used for the
creation of new processes which fit with the six sigma principles. Motorola believes that even
combining the methodology and the metric is "still not enough to drive desired breakthrough
improvements and results that are sustainable over time", and therefore advocates the use of the
six sigma management systems, which aligns management strategy with improvement efforts.
Companies which have successfully implemented six sigma, such as GE, have reported savings
running into millions of dollars and six sigma is now being combined with lean manufacturing
processes to great effect.
But it is highly unlikely that any of these interpretations represent the end goal of quality management, which, as the methodologies teach, must always strive for continuous improvement.
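The "metric" referred to above is conventionally expressed as defects per million opportunities (DPMO); a process operating at the six sigma level is usually taken to allow about 3.4 DPMO. The short sketch below is a generic illustration of the arithmetic; the counts are invented, not drawn from this book.

# DPMO = defects / (units inspected * defect opportunities per unit) * 1,000,000.
# The counts below are invented purely for illustration.

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# Example: 27 defects found across 1,500 units, each with 6 opportunities for a defect.
print(round(dpmo(defects=27, units=1_500, opportunities_per_unit=6)))  # 3000
# By convention, a six sigma process would allow only about 3.4 DPMO.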
1.5 SUMMARY
Quality plays a very important role in every aspect of software development. It plays a key role in the successful implementation of software. As an attribute of an item, quality refers to
measurable characteristics - things we are able to compare to known standards such as length,
color, electrical properties, and malleability. However, software, largely an intellectual entity, is
more challenging to characterize than physical objects. Nevertheless, measures of a program’s
characteristics do exist. These properties include cyclomatic complexity, cohesion, number of
function points, lines of code, and many others. When we examine an item based on its
measurable characteristics, two kinds of quality may be encountered: quality of design and
quality of conformance. TQM encourages participation amongst shop floor workers and
managers. TQM is an approach to improving the competitiveness, effectiveness and flexibility of
an organization for the benefit of all stakeholders. It is a way of planning, organizing and
understanding each activity, and of removing all the wasted effort and energy that is routinely
spent in organizations. It ensures the leaders adopt a strategic overview of quality and focus on
prevention not detection of problems. All senior managers must demonstrate their seriousness
and commitment to quality, and middle managers must, as well as demonstrating their
commitment, ensure they communicate the principles, strategies and benefits to the people for
whom they have responsibility. Only then will the right attitudes spread throughout the
organization.
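As an illustration of one such measurable characteristic, McCabe's cyclomatic complexity can be computed from a program's control-flow graph as V(G) = E - N + 2P (edges minus nodes plus twice the number of connected components). The sketch below uses an invented graph purely to show the arithmetic.

# Cyclomatic complexity from control-flow graph counts: V(G) = E - N + 2P.
# The example counts are invented purely to illustrate the formula.

def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's cyclomatic complexity for a control-flow graph."""
    return edges - nodes + 2 * components

# A control-flow graph with 9 edges, 7 nodes and a single connected component:
print(cyclomatic_complexity(edges=9, nodes=7))  # 4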
Assignment-Module 1
1. Quality is __________
a. Conformance to specification
b. Meeting customer needs
c. Both of them
d. None of them
a. Waterfall
b. Spiral
c. Ludvall-Juran
d. None of the above
a. ISO/IEC 9126
b. ISO 9001
c. IEEE
d. ISO 9000
6. Mistakes may be made by people, but most of them are caused, or at least permitted, by
faulty systems and processes is the principle of __________ .
a. Quality
b. TQM
c. Six Sigma
d. ISO 9000
7. The principles of TQM have been laid out in __________ principles made up of __________ standards.
a. Ten
b. Six
c. Three
d. Fourteen
10. Six Sigma philosophy is the ___________ model for process improvement.
a. DMAIC
b. ISO 9126
c. Mc call
d. ISO 9000
Key - Module 1
1. c
2. c
3. a
4. a
5. d
6. b
7. d
8. a
9. d
10. a
CHAPTER 2 : SOFTWARE QUALITY
2.1.3 Processes
More and more software development organizations implement process methodologies. The
Capability Maturity Model (CMM) is one of the leading models. Independent assessments can be
used to grade organizations on how well they create software according to how they define and
execute their processes. There are dozens of others, with other popular ones being ISO 9000, ISO
15504, and Six Sigma. There are several models for such processes, each describing approaches
to a variety of tasks or activities that take place during the process.
As software is always part of a larger system (or business), work begins by establishing the
requirements for all system elements and then allocating some subset of these requirements to
software. This system view is essential when the software must interface with other elements
such as hardware, people and other resources. The system is the basic and very critical requirement for the existence of software in any entity, so if the system is not in place, it should be engineered and put in place. In some cases, to extract the maximum output, the system should be
re-engineered and spruced up. Once the ideal system is engineered or tuned, the development
team studies the software requirement for the system.
Extracting the requirements of a desired software product is the first task in creating it. While
customers probably believe they know what the software is to do, it may require skill and
experience in software engineering to recognize incomplete, ambiguous or contradictory
requirements. Customers typically have an abstract idea of what they want as an end result, but
not what software should do. Skilled and experienced software engineers recognize incomplete,
ambiguous, or even contradictory requirements at this point. Frequently demonstrating live code
may help reduce the risk that the requirements are incorrect.
Once the general requirements are gathered from the client, the scope of the development should be analyzed, determined and clearly stated. This is often called a scope document.
Certain functionality may be out of scope of the project as a function of cost or as a result of
unclear requirements at the start of development. If the development is done externally, this
document can be considered a legal document so that if there are ever disputes, any ambiguity of
what was promised to the client can be clarified.
2.1.5.3 Specification
2.1.5.5 Implementation
Reducing a design to code may be the most obvious part of the software engineering job, but it is
not necessarily the largest portion.
2.1.5.6 Testing
Testing of parts of software, especially where code written by two different engineers must work together, falls to the software engineer. Different testing methodologies are available to unravel
the bugs that were committed during the previous phases. Different testing tools and
methodologies are already available. Some companies build their own testing tools that are tailor
made for their own development operations.
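As a minimal illustration of the kind of automated check these methodologies rely on, the sketch below tests a small, hypothetical function with Python's standard unittest module; the function apply_discount and its expected values are invented for demonstration only.

import unittest

def apply_discount(price, percent):
    """Hypothetical function under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        # 10% off 200.0 should give 180.0.
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_rejected(self):
        # Percentages outside 0-100 should be rejected rather than silently accepted.
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main()

Such small, automated tests can be run after every change, which is what makes them useful when code from different engineers has to work together.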
2.1.5.7 Documentation
An important task is documenting the internal design of software for the purpose of future
maintenance and enhancement. This may also include the writing of an API, be it external or
internal. The software engineering process chosen by the developing team will determine how
much internal documentation (if any) is necessary. Plan-driven models (e.g., Waterfall) generally
produce more documentation than agile models.
A large percentage of software projects fail because the developers fail to realize that it doesn't
matter how much time and planning a development team puts into creating software if nobody in
an organization ends up using it. People are occasionally resistant to change and avoid venturing into an unfamiliar area, so as a part of the deployment phase it is very important to have training classes for the most enthusiastic software users (to build excitement and confidence), then shift the training towards the neutral users intermixed with the avid supporters, and finally incorporate the rest of the organization into adopting the new software. Users will have lots of questions and software problems, which leads to the next phase of software.
2.1.5.9 Maintenance
Maintaining and enhancing software to cope with newly discovered problems or new
requirements can take far more time than the initial development of the software. The software
will definitely undergo change once it is delivered to the customer. There can be many reasons
for this change to occur. Change could happen because of some unexpected input values into the
system. In addition, the changes in the system could directly affect the software operations. The
software should be developed to accommodate changes that could happen during the post
implementation period.
Not only may it be necessary to add code that does not fit the original design, but just determining how software works at some point after it is completed may require significant effort by a
software engineer. About 60% of all software engineering work is maintenance, but this statistic
can be misleading. A small part of that is fixing bugs. Most maintenance is extending systems to
do new things, which in many ways can be considered new work.
i. Formulate plans: identify software targets, select options to implement the program, and clarify the project's development constraints;
ii. Risk analysis: an analytical assessment of the selected options, to consider how to identify and eliminate risk;
iii. Implementation of the project: the implementation of software development and verification;
The spiral model is risk-driven and emphasizes the analysis of alternatives and constraints, which supports software reuse and helps integrate software quality into product development as an explicit goal. However, the spiral model has some restrictive conditions, as follows:
i. The spiral model emphasizes risk analysis, and thus requires customers to accept this
analysis and act on it. This requires both trust in the developer as well as the willingness
to spend more to fix the issues, which is the reason why this model is often used for
large-scale internal software development.
ii. If the implementation of risk analysis will greatly affect the profits of the project, the
spiral model should not be used.
iii. Software developers have to actively look for possible risks and analyze them accurately for the spiral model to work.
The first stage is to formulate a plan to achieve the objectives within these constraints, and then strive to find and remove all potential risks through careful analysis and, if necessary, by constructing a prototype. If some risks cannot be ruled out, the customer has to decide whether to terminate the project or to ignore the risks and continue anyway. Finally, the results are evaluated and the design of the next phase begins.
2.2.4 Strength and Weakness of Waterfall, Prototype and Spiral Model
Waterfall model
Strengths:
•Emphasizes completion of one phase before moving on
•Emphasizes testing as an integral part of the life cycle
•Provides quality gates at each life cycle phase
Weaknesses:
•Depends on capturing and freezing requirements early in the life cycle
Prototype model
Strengths:
•Requirements can be set earlier and more reliably
•Requirements can be communicated more clearly and completely between developers and clients
•Requirements and design options can be investigated quickly and at low cost
Weaknesses:
•Requires a prototyping tool and expertise in using it – a cost for the development organization
•The prototype may become the production system
Spiral model
Strengths:
•It promotes reuse of existing software in early stages of development
•It does not involve separate approaches for software development and software maintenance
Weaknesses:
•The process is usually associated with Rapid Application Development, which is difficult to apply in practice
•The process is more difficult to manage and needs a very different approach from the waterfall model (the waterfall model can be managed with techniques such as Gantt charts)
Agile processes seem to be more efficient than older methodologies, using less programmer time
to produce more functional, higher quality software, but have the drawback from a business
perspective that they do not provide long-term planning capability. In essence, they say that they
will provide the most bang for the buck, but won't say exactly when that bang will be.
Extreme Programming, XP, is the best-known agile process. In XP, the phases are carried out in
extremely small (or "continuous") steps compared to the older, "batch" processes. The
(intentionally incomplete) first pass through the steps might take a day or a week, rather than the
months or years of each complete step in the Waterfall model. First, one writes automated tests,
to provide concrete goals for development. Next is coding (by a pair of programmers), which is
complete when all the tests pass, and the programmers can't think of any more tests that are
needed. Design and architecture emerge out of refactoring, and come after coding. Design is
done by the same people who do the coding. The incomplete but functional system is deployed
or demonstrated for the users (at least one of which is on the development team). At this point,
the practitioners start again on writing tests for the next most important part of the system.
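To make the test-first rhythm concrete, here is a minimal sketch in Python using the standard unittest module (the shopping-cart names are purely illustrative and not taken from any particular project): the test is written first to state the goal, and just enough code is then written to make it pass before the design is cleaned up by refactoring.

    import unittest

    # Written second: the simplest code that makes the test below pass.
    class Cart:
        def __init__(self):
            self._items = []                     # list of (name, unit_price, quantity)

        def add(self, name, unit_price, quantity=1):
            self._items.append((name, unit_price, quantity))

        def total(self):
            return sum(price * qty for _, price, qty in self._items)

    # Written first: an automated test that gives development a concrete goal.
    class CartTest(unittest.TestCase):
        def test_total_sums_item_prices(self):
            cart = Cart()
            cart.add("book", 12.50, quantity=2)
            cart.add("pen", 1.20)
            self.assertAlmostEqual(cart.total(), 26.20)

    if __name__ == "__main__":
        unittest.main()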
While Iterative development approaches have their advantages, software architects are still faced
with the challenge of creating a reliable foundation upon which to develop. Such a foundation
often requires a fair amount of upfront analysis and prototyping to build a development model.
The development model often relies upon specific design patterns and entity relationship
diagrams (ERD). Without this upfront foundation, Iterative development can create long term
challenges that are significant in terms of cost and quality.
Critics of iterative development approaches point out that these processes place what may be an
unreasonable expectation upon the recipient of the software: that they must possess the skills and
experience of a seasoned software developer. The approach can also be very expensive, akin to...
"If you don't know what kind of house you want, let me build you one and see if you like it. If
you don't, we'll tear it all down and start over." A large pile of building-materials, which are now
scrap, can be the final result of such a lack of up-front discipline. The problem with this criticism
is that the whole point of iterative programming is that you don't have to build the whole house
before you get feedback from the recipient. Indeed, in a sense conventional programming places
more of this burden on the recipient, as the requirements and planning phases take place entirely
before the development begins, and testing only occurs after development is officially over.
The information flow among business functions is modeled in a way that answers questions such as what information drives the business process, what information is generated, who generates it, and where the information goes.
The information flow defined as part of the business modeling phase is refined into a set of data objects that are needed to support the business. The characteristics (called attributes) of each object are identified and the relationships between these objects are defined.
The data objects defined in the data modeling phase are transformed to achieve the information flow necessary to implement a business function. Processing descriptions are created for
adding, modifying, deleting, or retrieving a data object.
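As a rough illustration of such processing descriptions, the sketch below (Python; the customer data object and the in-memory store are hypothetical stand-ins for whatever a RAD tool would generate) defines the four basic operations on one data object.

    # Minimal sketch: processing descriptions for a single "customer" data object.
    # A simple dictionary stands in for the underlying data store.
    _customers = {}

    def add_customer(customer_id, name, email):
        _customers[customer_id] = {"name": name, "email": email}

    def modify_customer(customer_id, **changes):
        _customers[customer_id].update(changes)

    def retrieve_customer(customer_id):
        return _customers.get(customer_id)

    def delete_customer(customer_id):
        _customers.pop(customer_id, None)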
The RAD model assumes the use of RAD tools such as VB, VC++, and Delphi rather than creating software using conventional third-generation programming languages. The RAD model
works to reuse existing program components (when possible) or create reusable components
(when necessary). In all cases, automated tools are used to facilitate construction of the software.
Since the RAD process emphasizes reuse, many of the program components have already been
tested. This minimizes the testing and development time.
The Capability Maturity Model Integration (CMMI) is one of the leading models and based on
best practice. Independent assessments grade organizations on how well they follow their
defined processes, not on the quality of those processes or the software produced. CMMI has
replaced CMM.
ISO 9000 describes standards for a formally organized process to manufacture a product and the
methods of managing and monitoring progress. Although the standard was originally created for
the manufacturing sector, ISO 9000 standards have been applied to software development as
well. Like CMMI, certification with ISO 9000 does not guarantee the quality of the end result,
only that formalized business processes have been followed.
ISO/IEC 15504 Information technology — Process assessment also known as Software Process
Improvement Capability Determination (SPICE), is a "framework for the assessment of software
processes". This standard is aimed at setting out a clear model for process comparison. SPICE is
used much like CMMI. It models processes to manage, control, guide and monitor software
development. This model is then used to measure what a development organization or project
team actually does during software development. This information is analyzed to identify
weaknesses and drive improvement.
2.2.9 Formal methods
Formal methods are mathematical approaches to solving software (and hardware) problems at
the requirements, specification, and design levels. Formal methods are most likely to be applied
to safety-critical or security-critical software and systems, such as avionics software. Software
safety assurance standards, such as DO-178B, DO-178C, and Common Criteria demand formal
methods at the highest levels of categorization.
For sequential software, examples of formal methods include the B-Method, the specification
languages used in Automated theorem proving, RAISE, VDM, and the Z notation.
Another emerging trend in software development is to write a specification in some form of logic
(usually a variation of FOL), and then to directly execute the logic as though it were a program.
The OWL language, based on Description Logic, is an example. There is also work on mapping
some version of English (or another natural language) automatically to and from logic, and
executing the logic directly. Examples are Attempto Controlled English, and Internet Business
Logic, which do not seek to control the vocabulary or syntax. A feature of systems that support
bidirectional English-logic mapping and direct execution of the logic is that they can be made to
explain their results, in English, at the business or scientific level.
2.3.1 Introduction
Quality attributes are the overall factors that affect run-time behavior, system design, and user
experience. They represent areas of concern that have the potential for application wide impact
across layers and tiers. Some of these attributes are related to the overall system design, while
others are specific to run time, design time, or user centric issues. The extent to which the
application possesses a desired combination of quality attributes such as usability, performance,
reliability, and security indicates the success of the design and the overall quality of the software
application.
When designing applications to meet any of the quality attributes requirements, it is necessary to
consider the potential impact on other requirements. You must analyze the tradeoffs between
multiple quality attributes. The importance or priority of each quality attribute differs from
system to system; for example, interoperability will often be less important in a single use
packaged retail application than in a line of business (LOB) system.
This chapter lists and describes the quality attributes that you should consider when designing
your application. To get the most out of this chapter, use the table below to gain an
understanding of how quality attributes map to system and application quality factors, and read
the description of each of the quality attributes. Then use the sections containing key guidelines
for each of the quality attributes to understand how that attribute has an impact on your design,
and to determine the decisions you must make to address these issues. Keep in mind that the
list of quality attributes in this chapter is not exhaustive, but provides a good starting point for
asking appropriate questions about your architecture.
Category: System Qualities
Quality attribute: Testability
Description: Testability is a measure of how easy it is to create test criteria for the system and its components, and to execute these tests in order to determine if the criteria are met. Good testability makes it more likely that faults in a system can be isolated in a timely and effective manner.
The following sections describe each of the quality attributes in more detail, and provide
guidance on the key issues and the decisions you must make for each one:
Availability
Conceptual Integrity
Interoperability
Maintainability
Manageability
Performance
Reliability
Reusability
Scalability
Security
Supportability
Testability
Availability
Availability defines the proportion of time that the system is functional and working. It can be
measured as a percentage of the total system downtime over a predefined period. Availability
will be affected by system errors, infrastructure problems, malicious attacks, and system load.
The key issues for availability are:
A physical tier such as the database server or application server can fail or become
unresponsive, causing the entire system to fail. Consider how to design failover support for
the tiers in the system. For example, use Network Load Balancing for Web servers to
distribute the load and prevent requests being directed to a server that is down. Also, consider
using a RAID mechanism to mitigate system failure in the event of a disk failure. Consider if
there is a need for a geographically separate redundant site to failover to in case of natural
disasters such as earthquakes or tornados.
Denial of Service (DoS) attacks, which prevent authorized users from accessing the system,
can interrupt operations if the system cannot handle massive loads in a timely manner, often
due to the processing time required, or network configuration and congestion. To minimize
interruption from DoS attacks, reduce the attack surface area, identify malicious behavior,
use application instrumentation to expose unintended behavior, and implement
comprehensive data validation. Consider using the Circuit Breaker or Bulkhead patterns to increase system resiliency (a sketch of a circuit breaker follows this list).
Inappropriate use of resources can reduce availability. For example, resources acquired too
early and held for too long cause resource starvation and an inability to handle additional
concurrent user requests.
Bugs or faults in the application can cause a system wide failure. Design for proper exception
handling in order to reduce application failures from which it is difficult to recover.
Frequent updates, such as security patches and user application upgrades, can reduce the
availability of the system. Identify how you will design for run-time upgrades.
A network fault can cause the application to be unavailable. Consider how you will handle
unreliable network connections; for example, by designing clients with occasionally-
connected capabilities.
Consider the trust boundaries within your application and ensure that subsystems employ
some form of access control or firewall, as well as extensive data validation, to increase
resiliency and availability.
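As a rough illustration of the Circuit Breaker pattern mentioned above, the following sketch (Python; the class name, thresholds and timeout are illustrative) stops calling a failing dependency for a cool-down period so that the rest of the system remains available rather than waiting on a dead resource.

    import time

    class CircuitBreaker:
        """Open the circuit after max_failures consecutive errors; retry after reset_timeout seconds."""
        def __init__(self, call, max_failures=3, reset_timeout=30.0):
            self.call = call
            self.max_failures = max_failures
            self.reset_timeout = reset_timeout
            self.failures = 0
            self.opened_at = None

        def __call__(self, *args, **kwargs):
            if self.opened_at is not None:
                if time.time() - self.opened_at < self.reset_timeout:
                    raise RuntimeError("circuit open: dependency unavailable")
                self.opened_at = None            # half-open: allow one trial call
            try:
                result = self.call(*args, **kwargs)
            except Exception:
                self.failures += 1
                if self.failures >= self.max_failures:
                    self.opened_at = time.time() # trip the breaker
                raise
            self.failures = 0                    # success closes the circuit again
            return result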
Conceptual Integrity
Conceptual integrity defines the consistency and coherence of the overall design. This includes
the way that components or modules are designed, as well as factors such as coding style and
variable naming. A coherent system is easier to maintain because you will know what is
consistent with the overall design. Conversely, a system without conceptual integrity will
constantly be affected by changing interfaces, frequently deprecating modules, and lack of
consistency in how tasks are performed. The key issues for conceptual integrity are:
Mixing different areas of concern within your design. Consider identifying areas of
concern and grouping them into logical presentation, business, data, and service layers as
appropriate.
Inconsistent or poorly managed development processes. Consider performing an
Application Lifecycle Management (ALM) assessment, and make use of tried and tested
development tools and methodologies.
Lack of collaboration and communication between different groups involved in the
application lifecycle. Consider establishing a development process integrated with tools
to facilitate process workflow, communication, and collaboration.
Lack of design and coding standards. Consider establishing published guidelines for
design and coding standards, and incorporating code reviews into your development
process to ensure guidelines are followed.
Existing (legacy) system demands can prevent both refactoring and progression toward a
new platform or paradigm. Consider how you can create a migration path away from
legacy technologies, and how to isolate applications from external dependencies. For
example, implement the Gateway design pattern for integration with legacy systems.
Interoperability
Interoperability is the ability of a system or its components to communicate and exchange data with external systems or components. The key issues for interoperability are:
Interaction with external or legacy systems that use different data formats. Consider how you
can enable systems to interoperate, while evolving separately or even being replaced. For
example, use orchestration with adaptors to connect with external or legacy systems and
translate data between systems; or use a canonical data model to handle interaction with a
large number of different data formats.
Boundary blurring, which allows artifacts from one system to defuse into another. Consider
how you can isolate systems by using service interfaces and/or mapping layers. For example,
expose services using interfaces based on XML or standard types in order to support
interoperability with other systems. Design components to be cohesive and have low
coupling in order to maximize flexibility and facilitate replacement and reusability.
Lack of adherence to standards. Be aware of the formal and de facto standards for the domain
you are working within, and consider using one of them rather than creating something new
and proprietary.
Maintainability
Maintainability is the ability of the system to undergo changes with a degree of ease. These
changes could impact components, services, features, and interfaces when adding or changing
the application’s functionality in order to fix errors, or to meet new business requirements.
Maintainability can also affect the time it takes to restore the system to its operational status
following a failure or removal from operation for an upgrade. Improving system maintainability
can increase availability and reduce the effects of run-time defects. An application's maintainability is often a function of its overall quality attributes, but there are a number of key issues that can directly affect maintainability:
Manageability
Manageability defines how easy it is for system administrators to manage the application, usually
through sufficient and useful instrumentation exposed for use in monitoring systems and for
debugging and performance tuning. Design your application to be easy to manage, by exposing
sufficient and useful instrumentation for use in monitoring systems and for debugging and
performance tuning. The key issues for manageability are:
Lack of health monitoring, tracing, and diagnostic information. Consider creating a health
model that defines the significant state changes that can affect application performance, and
use this model to specify management instrumentation requirements. Implement
instrumentation, such as events and performance counters, that detects state changes, and
expose these changes through standard systems such as Event Logs, Trace files, or Windows
Management Instrumentation (WMI). Capture and report sufficient information about errors
and state changes in order to enable accurate monitoring, debugging, and management. Also,
consider creating management packs that administrators can use in their monitoring
environments to manage the application.
Lack of runtime configurability. Consider how you can enable the system behavior to change
based on operational environment requirements, such as infrastructure or deployment
changes.
Lack of troubleshooting tools. Consider including code to create a snapshot of the system’s
state to use for troubleshooting, and including custom instrumentation that can be enabled to
provide detailed operational and functional reports. Consider logging and auditing
information that may be useful for maintenance and debugging, such as request details or
module outputs and calls to other systems and services.
Performance
Performance is an indication of the responsiveness of a system to execute specific actions within a given time interval. The key issues for performance are:
Increased client response time, reduced throughput, and server resource over utilization.
Ensure that you structure the application in an appropriate way and deploy it onto a system or
systems that provide sufficient resources. When communication must cross process or tier
boundaries, consider using coarse-grained interfaces that require the minimum number of
calls (preferably just one) to execute a specific task, and consider using asynchronous
communication.
Increased memory consumption, resulting in reduced performance, excessive cache misses
(the inability to find the required data in the cache), and increased data store access. Ensure
that you design an efficient and appropriate caching strategy (a caching sketch follows this list).
Increased database server processing, resulting in reduced throughput. Ensure that you
choose effective types of transactions, locks, threading, and queuing approaches. Use
efficient queries to minimize performance impact, and avoid fetching all of the data when
only a portion is displayed. Failure to design for efficient database processing may incur
unnecessary load on the database server, failure to meet performance objectives, and costs in
excess of budget allocations.
Increased network bandwidth consumption, resulting in delayed response times and
increased load for client and server systems. Design high performance communication
between tiers using the appropriate remote communication mechanism. Try to reduce the
number of transitions across boundaries, and minimize the amount of data sent over the
network. Batch work to reduce calls over the network.
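A caching strategy can be as simple as memoizing expensive lookups for a short period. The sketch below (Python; the time-to-live and the get_product_details call in the usage comment are illustrative) returns a cached value when it is still fresh, so repeated requests avoid another round trip to the data store.

    import time

    _cache = {}            # key -> (value, time stored)
    TTL_SECONDS = 60       # illustrative time-to-live

    def cached(key, compute):
        """Return a fresh cached value if one exists; otherwise recompute and store it."""
        entry = _cache.get(key)
        if entry is not None and time.time() - entry[1] < TTL_SECONDS:
            return entry[0]
        value = compute()
        _cache[key] = (value, time.time())
        return value

    # Usage (get_product_details is a hypothetical expensive lookup):
    # details = cached(("product", 42), lambda: get_product_details(42))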
Reliability
Reliability is the ability of a system to continue operating in the expected way over time.
Reliability is measured as the probability that a system will not fail and that it will perform its
intended function for a specified time interval. The key issues for reliability are:
The system crashes or becomes unresponsive. Identify ways to detect failures and
automatically initiate a failover, or redirect load to a spare or backup system. Also, consider
implementing code that uses alternative systems when it detects a specific number of failed
requests to an existing system.
Output is inconsistent. Implement instrumentation, such as events and performance counters,
that detects poor performance or failures of requests sent to external systems, and expose
information through standard systems such as Event Logs, Trace files, or WMI. Log
performance and auditing information about calls made to other systems and services.
The system fails due to unavailability of other externalities such as systems, networks, and
databases. Identify ways to handle unreliable external systems, failed communications, and
failed transactions. Consider how you can take the system offline but still queue pending
requests. Implement store and forward or cached message-based communication systems that
allow requests to be stored when the target system is unavailable, and replayed when it is
online. Consider using Windows Message Queuing or BizTalk Server to provide a reliable
once-only delivery mechanism for asynchronous requests.
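A minimal store-and-forward sketch follows (Python; the in-memory queue is purely for illustration, where a production system would use durable message queuing middleware): requests that cannot be delivered are stored and replayed once the target system is back online.

    from collections import deque

    pending = deque()      # stored requests awaiting delivery

    def send_or_store(request, deliver):
        """Try to deliver a request; if the target system is down, store it for later replay."""
        try:
            deliver(request)
        except ConnectionError:
            pending.append(request)

    def replay(deliver):
        """Replay stored requests once the target system is available again."""
        while pending:
            deliver(pending[0])    # raises again if the target is still unavailable
            pending.popleft()      # remove only after successful delivery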
Reusability
Reusability is the probability that a component will be used in other components or scenarios to
add new functionality with little or no change. Reusability minimizes the duplication of
components and the implementation time. Identifying the common attributes between various
components is the first step in building small reusable components for use in a larger system.
The key issues for reusability are:
The use of different code or components to achieve the same result in different places; for
example, duplication of similar logic in multiple components, and duplication of similar logic
in multiple layers or subsystems. Examine the application design to identify common
functionality, and implement this functionality in separate components that you can reuse.
Examine the application design to identify crosscutting concerns such as validation, logging,
and authentication, and implement these functions as separate components.
The use of multiple similar methods to implement tasks that have only slight variation.
Instead, use parameters to vary the behavior of a single method (see the sketch after this list).
Using several systems to implement the same feature or function instead of sharing or
reusing functionality in another system, across multiple systems, or across different
subsystems within an application. Consider exposing functionality from components, layers,
and subsystems through service interfaces that other layers and systems can use. Use
platform agnostic data types and structures that can be accessed and understood on different
platforms.
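To illustrate replacing several near-identical methods with one parameterized method, consider this small sketch (Python; the report-export example is illustrative):

    # Instead of export_daily_csv(), export_weekly_csv() and export_monthly_csv(),
    # one method whose behavior varies by parameter rather than by copy and paste.
    def export_csv(records, period, delimiter=","):
        header = "report_period:" + period
        rows = [delimiter.join(str(field) for field in record) for record in records]
        return "\n".join([header] + rows)

    # Usage: export_csv([("2024-01-01", 120), ("2024-01-02", 95)], period="daily")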
Scalability
Scalability is the ability of a system to either handle increases in load without impact on the
performance of the system, or the ability to be readily enlarged. There are two methods for
improving scalability: scaling vertically (scale up), and scaling horizontally (scale out). To scale
vertically, you add more resources such as CPU, memory, and disk to a single system. To scale
horizontally, you add more machines to a farm that runs the application and shares the load. The
key issues for scalability are:
Applications cannot handle increasing load. Consider how you can design layers and tiers for
scalability, and how this affects the capability to scale up or scale out the application and the
database when required. You may decide to locate logical layers on the same physical tier to
reduce the number of servers required while maximizing load sharing and failover
capabilities. Consider partitioning data across more than one database server to maximize
scale-up opportunities and allow flexible location of data subsets. Avoid stateful components
and subsystems where possible to reduce server affinity.
Users incur delays in response and longer completion times. Consider how you will handle
spikes in traffic and load. Consider implementing code that uses additional or alternative
systems when it detects a predefined service load or a number of pending requests to an
existing system.
The system cannot queue excess work and process it during periods of reduced load.
Implement store-and-forward or cached message-based communication systems that allow
requests to be stored when the target system is unavailable, and replayed when it is online.
Security
Security is the capability of a system to reduce the chance of malicious or accidental actions
outside of the designed usage affecting the system, and prevent disclosure or loss of information.
Improving security can also increase the reliability of the system by reducing the chances of an
attack succeeding and impairing system operation. Securing a system should protect assets and
prevent unauthorized access to or modification of information. The factors affecting system
security are confidentiality, integrity, and availability. The features used to secure systems are
authentication, encryption, auditing, and logging. The key issues for security are:
Spoofing of user identity. Use authentication and authorization to prevent spoofing of user
identity. Identify trust boundaries, and authenticate and authorize users crossing a trust
boundary.
Damage caused by malicious input such as SQL injection and cross-site scripting. Protect
against such damage by ensuring that you validate all input for length, range, format, and
type using the constrain, reject, and sanitize principles. Encode all output you display to users (a sketch of input validation with a parameterized query follows this list).
Data tampering. Partition the site into anonymous, identified, and authenticated users and use
application instrumentation to log and expose behavior that can be monitored. Also use
secured transport channels, and encrypt and sign sensitive data sent across the network.
Repudiation of user actions. Use instrumentation to audit and log all user interaction for
application critical operations.
Information disclosure and loss of sensitive data. Design all aspects of the application to
prevent access to or exposure of sensitive system and application information.
Interruption of service due to Denial of service (DoS) attacks. Consider reducing session
timeouts and implementing code or hardware to detect and mitigate such attacks.
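As a rough illustration of constraining input and preventing SQL injection, the sketch below (Python with the standard sqlite3 module; the users table and column names are illustrative) validates the input against an expected format and then binds it as a query parameter instead of concatenating it into the SQL text.

    import re
    import sqlite3

    def find_user(conn, username):
        # Constrain and reject: accept only the expected length and character set.
        if not re.fullmatch(r"[A-Za-z0-9_]{1,30}", username):
            raise ValueError("invalid username")
        # Parameterized query: the value is bound by the driver, never spliced into the SQL.
        cursor = conn.execute("SELECT id, username FROM users WHERE username = ?", (username,))
        return cursor.fetchone()

    # Usage: row = find_user(sqlite3.connect("app.db"), "alice")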
Supportability
Supportability is the ability of the system to provide information helpful for identifying and
resolving issues when it fails to work correctly. The key issues for supportability are:
Lack of diagnostic information. Identify how you will monitor system activity and
performance. Consider a system monitoring application, such as Microsoft System Center.
Lack of troubleshooting tools. Consider including code to create a snapshot of the system’s
state to use for troubleshooting, and including custom instrumentation that can be enabled to
provide detailed operational and functional reports.
Lack of tracing ability. Use common components to provide tracing support in code, perhaps through Aspect Oriented Programming (AOP) techniques or dependency injection. Enable
tracing in Web applications in order to troubleshoot errors.
Lack of health monitoring. Consider creating a health model that defines the significant state
changes that can affect application performance, and use this model to specify management
instrumentation requirements. Implement instrumentation, such as events and performance
counters, that detects state changes, and expose these changes through standard systems such
as Event Logs, Trace files, or Windows Management Instrumentation (WMI). Capture and
report sufficient information about errors and state changes in order to enable accurate
monitoring, debugging, and management.
Testability
Testability is a measure of how well a system or its components allow you to create test criteria and
execute tests to determine if the criteria are met. Testability allows faults in a system to be
isolated in a timely and effective manner. The key issues for testability are:
Complex applications with many processing permutations are not tested consistently, perhaps
because automated or granular testing cannot be performed if the application has a
monolithic design. Design systems to be modular to support testing. Provide instrumentation
or implement probes for testing, mechanisms to debug output, and ways to specify inputs
easily. Design components that have high cohesion and low coupling to allow testability of
components in isolation from the rest of the system.
Lack of test planning. Start testing early during the development life cycle. Use mock objects during testing, and construct simple, structured test solutions (see the mock object sketch after this list).
Poor test coverage, for both manual and automated tests. Consider how you can automate
user interaction tests, and how you can maximize test and code coverage.
Input and output inconsistencies; for the same input, the output is not the same and the output
does not fully cover the output domain even when all known variations of input are provided.
Consider how to make it easy to specify and understand system inputs and outputs to
facilitate the construction of test cases.
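As a small illustration of using a mock object to test a component in isolation, the sketch below (Python, using the standard unittest.mock module; the Checkout and payment gateway names are illustrative) replaces an external dependency so the test exercises only the component under test.

    import unittest
    from unittest.mock import Mock

    class Checkout:
        """Component under test; the payment gateway is injected, which keeps coupling low."""
        def __init__(self, gateway):
            self.gateway = gateway

        def pay(self, amount):
            return self.gateway.charge(amount)

    class CheckoutTest(unittest.TestCase):
        def test_pay_delegates_to_gateway(self):
            gateway = Mock()                       # stands in for the real payment service
            gateway.charge.return_value = "OK"
            self.assertEqual(Checkout(gateway).pay(100), "OK")
            gateway.charge.assert_called_once_with(100)

    if __name__ == "__main__":
        unittest.main()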
User Experience / Usability
The application interfaces must be designed with the user and consumer in mind so that they are
intuitive to use, can be localized and globalized, provide access for disabled users, and provide a
good overall user experience. The key issues for user experience and usability are:
Too much interaction (an excessive number of clicks) required for a task. Ensure you design
the screen and input flows and user interaction patterns to maximize ease of use.
Incorrect flow of steps in multi-step interfaces. Consider incorporating workflows where
appropriate to simplify multi-step operations.
Data elements and controls are poorly grouped. Choose appropriate control types (such as
option groups and check boxes) and lay out controls and content using the accepted UI
design patterns.
Feedback to the user is poor, especially for errors and exceptions, and the application is
unresponsive. Consider implementing technologies and techniques that provide maximum
user interactivity, such as Asynchronous JavaScript and XML (AJAX) in Web pages and
client-side input validation. Use asynchronous techniques for background tasks, and tasks
such as populating controls or performing long-running tasks.
A qualitative assessment is generally made, along with a more quantified assessment. These measures may be derived from a formal test or examination, continuous assessment of coursework or a quantified teacher assessment. In practice, the resulting scores are derived from
a whole spectrum of techniques. They range from those which may be regarded as objective and
transferable to those which are simply a more convenient representation of qualitative
judgements. In the past, these have been gathered together to form a traditional school report.
(Table 2.1)
The traditional school report often had an overall mark and grade, a single figure, generally
derived from the mean of the component figures, intended to provide a single measure of
success. In recent years, the assessment of pupils has become considerably more sophisticated
and the model on which the assessment is based has become more complicated. Subjects are
now broken down into skills, each of which is measured and the collective results used to give a
more detailed overall picture. For example, in English, pupils’ oral skills are considered
alongside their ability to read; written English is further subdivided into an assessment of style,
content and presentation. The hierarchical model requires another level of sophistication in order
to accommodate the changes (Figure 2.1). Much effort is currently being devoted to producing a
broader-based assessment, and in ensuring that qualitative judgements are as accurate and
consistent as possible. The aim is for every pupil to emerge with a broad-based ‘Record of
Achievement’ alongside their more traditional examination results.
Table 2.1 A traditional school report, with rows for English, Maths, Science, Humanities, Languages and Technology, and an overall mark and grade.
A hierarchical model of software quality is based upon a set of quality criteria, each of which has
a set of measures or metrics associated with it. This type of model is illustrated schematically in
Figure 2.2.
Examples of quality criteria typically employed include reliability, security and adaptability.
The issues relating to the criteria of quality are:
This model was first proposed by McCall in 1977. It was later adapted and revised as the
MQ model (Watts, 1987). Jim McCall produced this model (Figure 2.3) for the US Air Force
and the intention was to bridge the gap between users and developers. He tried to map the user
view with the developer's priority. The model is aimed at system developers, to be used during
the development process. However, in an early attempt to bridge the gap between users and
developers, the criteria were chosen in an attempt to reflect users’ view as well as developers’
priorities.
With the perspective of hindsight, the criteria appear to be technically oriented, but they are
described by a series of questions which define them in terms acceptable to non-specialist
managers. The three perspectives of the model are described as:
Product revision
The product revision perspective identifies quality factors that influence the ability to change the software product; these factors are:-
Product transition
The product transition perspective identifies quality factors that influence the ability to adapt the
software to new environments:-
Portability, the ability to transfer the software from one environment to another.
Reusability, the ease of using existing software components in a different context.
Interoperability, the extent, or ease, to which software components work together.
Product operations
The product operations perspective identifies quality factors that influence the extent to which
the software fulfils its specification:-
The McCall model, illustrated in Figure 2.4, identifies three areas of software work: product
operation, product revision and product transition. These are summarized in Table 2.2
Table 2.2 The three areas as addressed by McCall’s model (1977)
This study was carried out by the National Computer Centre (NCC). The characteristics and sub-characteristics of the McCall model are shown in the following figure.
The idea behind McCall's Quality Model is that the quality factors synthesized should provide a complete software quality picture. The actual quality metric is obtained by answering yes-and-no questions that are then put in relation to each other. That is, if you answer an equal number of "yes" and "no" to the questions measuring a quality criterion, you achieve 50% on that criterion. The metrics can then be synthesized per quality criterion, per quality factor, or, if relevant,
per product or service
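A minimal sketch of this scoring scheme (Python; the answer lists are illustrative) scores each quality criterion as the fraction of "yes" answers and then averages the criterion scores belonging to a quality factor.

    def criterion_score(answers):
        """Score one quality criterion as the fraction of 'yes' answers (0.0 to 1.0)."""
        return sum(1 for a in answers if a) / len(answers)

    def factor_score(criteria):
        """Average the scores of the criteria that make up one quality factor."""
        scores = [criterion_score(answers) for answers in criteria.values()]
        return sum(scores) / len(scores)

    # Two 'yes' and two 'no' answers give 50% on that criterion, as described above.
    print(criterion_score([True, True, False, False]))     # 0.5
    print(factor_score({"simplicity": [True, False], "modularity": [True, True, True]}))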
2.4.2.2 The Boehm Model
Barry W. Boehm (1978) also defined a hierarchical model of software quality characteristics, in
trying to qualitatively define software quality as a set of attributes and metrics (measurements).
Boehm’s model was defined to provide a set of ‘well-defined, well-differentiated characteristics
of software quality’. The model is hierarchical in nature but the hierarchy is extended, so that
quality criteria are subdivided. The first division is made according to the uses made of the
system. These are classed as ‘general’ or ‘as is’ utility, where the ‘as is’ utilities are a subtype of
the general utilities, roughly equating to the product operation criteria of McCall’s model. There
are two levels of actual quality criteria, the intermediate level being further split into primitive
characteristics, which are amenable to measurement. The model is summarized in Figure 2.5
At the highest level of his model, Boehm defined three primary uses (or basic software
requirements), these three primary uses are:-
As-is utility, the extent to which the as-is software can be used (i.e. ease of use, reliability and
efficiency).
Correctness was seen as an umbrella property encompassing other attributes. Two types of
correctness were consistently identified. Developers talked in terms of technical correctness,
which included factors such as reliability, maintainability and the traditional software virtues.
Computer users, however, talked of business correctness, of meeting business needs and criteria
such as timeliness, value for money and ease of transition.
This reinforced the existence of different views of quality. It suggests that these developers
emphasized conformance to specification, while users sought fitness for purpose. There was
remarkable agreement between the different organizations as to some of the basic findings.
In particular:
Table 2.4 Software quality criteria elicited from a large manufacture in company
Criteria Definition
User correctness The extent to which a system fulfills a set of objectives agreed
with the user.
Integrity The extent to which data and software are consistent and
accurate across systems.
Accuracy The accuracy of the actual output produced, i.e., is it the right
answer?
Timeliness The extent to which delivery fits with the deadlines and practices
of users.
User flexibility The extent to which the system can be adapted both to changes
in user requirements and individual taste.
Cost/benefit The extent to which the system fulfils its cost/benefit
specification both with regard to development costs and business
benefits.
User friendliness The time to learn how to use the system and ease of use once
learned.
Two principles included in QA are: “Fit for purpose”, the product should be suitable for the
intended purpose; and “Right first time”, mistakes should be eliminated. QA includes
management of the quality of raw materials, assemblies, products and components, services
related to production, and management, production and inspection processes.
Suitable quality is determined by product users, clients or customers, not by society in general. It
is not related to cost, and adjectives or descriptors such as "high" and "poor" are not applicable. For
example, a low priced product may be viewed as having high quality because it is disposable
where another may be viewed as having poor quality because it is not disposable.
Step 1: To define the quality goals for the processes. These goals will be accepted unconditionally by both the developer and the customer. These objectives are to be clearly described in the plan, so that both parties can easily understand the scope of the processes.
The developers might also set a standard to define the goals. If possible, the plan can also
describe the quality goals in terms of measurement. This will ultimately help to measure the
performance of the processes in terms of gradation.
Step 2: To define the organization and the roles and responsibilities of the participant activities.
It should include the reporting system for the outcome of the quality reviews. The quality team
should know where to submit the reports, directly to the developers or somebody else. In many
cases, the reports are submitted to the project review team, who in turn delivers the report to the
subsequent departments and keeps it in storage for records. Whatever the reporting process is, it should be well defined in the plan to avoid disputes or complications in the submission process for reviews and audits.
Step 3: The subsidiary quality assurance plan: It includes the list of other related plans
describing project standards, which have references in any of the process. These subsidiary plans
are related to the quality standards of several business components and how they are related to
each other in achieving the collective qualitative objective. This information also helps to
determine the different types of reviews to be done and how often they will be performed.
Normally, the included referenced plans are identified below.
a. Documentation Plan
b. Measurement Plan
c. Risk Measurement Plan
d. Problem Resolution Plan
e. Configuration Management Plan
f. Product Development Plan
g. Test Plan
h. Subcontractor Management Plan etc.
Step 4: To identify the task and activities of the quality control team. Generally, this will include
following reviews:
a. Reviewing project plans to ensure that the project abides by the defined process.
b. Reviewing project to ensure the performance according to the plans.
c. Endorsement of variation from the standard process.
d. Assessing the improvement of the processes.
It is the responsibility of the quality manager, to fix the schedule for the reviews and audits to
conduct quality control. This schedule is also documented within the plan, so that task control
can be done at an individual level. Thus, the entire process of quality control is documented
within the plan. This helps as a guideline for the reviewers and developers, simultaneously.
a. Elements such as controls, job management, defined and well managed processes,
performance and integrity criteria, and identification of records
b. Competence, such as knowledge, skills, experience, and qualifications
c. Soft elements, such as personnel integrity, confidence, organizational culture, motivation,
team spirit, and quality relationships.
Controls include product inspection, where every product is examined visually, and often using a
stereo microscope for fine detail before the product is sold into the external market. Inspectors
will be provided with lists and descriptions of unacceptable product defects such as cracks or
surface blemishes for example. Quality control emphasizes testing of products to uncover defects
and reporting to management who make the decision to allow or deny product release, whereas
quality assurance attempts to improve and stabilize production (and associated processes) to
avoid, or at least minimize, issues which led to the defect(s) in the first place.
It is possible to have quality control without quality assurance. A testing team may be in place to conduct system testing at the end of development.
2.6 SUMMARY
All the different software development models have their own advantages and disadvantages.
Nevertheless, in the contemporary commercial software development world, the fusion of all
these methodologies is incorporated. Timing is very crucial in software development. If a delay
happens in the development phase, the market could be taken over by the competitor. Also if a
‘bug’ filled product is launched in a short period of time (quicker than the competitors), it may
affect the reputation of the company. So, there should be a tradeoff between the development
time and the quality of the product. Customers don’t expect a bug free product but they expect a
user-friendly product that they can give a thumbs-up to.
A better understanding of quality can be achieved by studying quality models. The initial quality models were hierarchical in nature. These hierarchies provide a better perspective on quality characteristics. The models proposed by McCall and Boehm fall into the above category. The perspectives in the McCall model are Product revision (ability to change), Product transition (adaptability to new environments) and Product operations (basic operational characteristics). In total, McCall identified 11 quality factors broken down by the 3 perspectives, as listed above.
For each quality factor McCall defined one or more quality criteria (a way of measurement), in
this way an overall quality assessment could be made of a given software product by evaluating
the criteria for each factor. Boehm’s model was defined to provide a set of ‘well-defined, well-
differentiated characteristics of software quality’. The model is hierarchical in nature but the
hierarchy is extended, so that quality criteria are subdivided. There are two levels of quality
criteria, the intermediate level being further split into primitive characteristics, which are
amenable to measurement in this model.
Assignment-Module 2
Key - Module 2
1. a
2. a
3. b
4. a
5. b
6. d
7. c
8. b
9. d
10. d
CHAPTER 3 : SOFTWARE QUALITY ASSURANCE
Despite the fact that as an organizational rallying point, total quality management has been
eclipsed by other quality processes, those organizations that embraced the concept surely
benefited from it. Most have made good use of TQM's basic concepts, resulting in greater
customer satisfaction and improved product quality. The challenge for IT is to mine valuable lessons from these experiences. Some sound TQM concepts include:
Set quality measures and standards on customer or user wants and needs.
Place ultimate responsibility for quality with line organizations, and mobilize quality
networks or communities within these organizations.
Make quality a shared responsibility.
Create clear standards and measurements, e.g., "dashboard measurements," which
provide quality status information clearly and quickly.
Make use of existing process measures and checkpoints wherever possible rather than
introduce new measures.
Incorporate and align quality measures and business objectives.
Do not limit interventions to identifying failures to meet standards; require corrective
action plans based on root cause analysis.
Focus on correcting the process that contributed to failure rather than installing short-
term fixes to problems.
The main challenge lies in leveraging and incorporating these concepts into the critical
components of an IT quality function. The following approach helps define an IT quality
function.
The ultimate mission of the IT quality function must be to add value to the organization as a
whole and, in particular, to improve IT quality in every aspect, including applications, the
infrastructure, even the help desk. However, the IT quality function cannot serve as the sole
owner of quality; it must not try to resolve all quality issues alone. Further, it shouldn't operate in
an after-the-fact quality assurance mode. Instead, it should identify issues that impede quality
and facilitate their rapid resolution.
Taking a broad cross-functional perspective of IT quality issues, the mission of the quality
function must:
Quality objectives need to focus, ultimately, on user satisfaction and the key areas that are problematic for IT. They should answer the question, "What does the IT quality function want to
accomplish?" Sample objectives include: improve user satisfaction, control IT costs, reduce
defects, improve IT infrastructure and application stability, and improve user perception of IT
quality.
Quality strategies should answer the question, "How will we achieve our objectives?" A simple
strategy would be to address only broad, high-priority quality issues that affect the objectives.
Or, the quality function could focus on customer issues rather than internal issues. Another
strategy would be to use a small quality team and an extended quality community rather than
build a large quality organization within the information systems department. To be effective, the
quality function must avoid the tendency to grow a new bureaucracy.
The IT quality function must be created with certain design points, which need to include key
aspects, such as:
The quality function should be composed of a small, focused team within the IT community.
The key is to avoid creating a large, bureaucratic entity, but rather employ a small team that
represents an extended community in the business functions.
The IT quality function should be led by an influential executive reporting directly to the CIO or
the chief financial officer. This will ensure that the new function has the required influence and
can manage across the organization effectively. The small team of quality advocates will report
directly to the quality executive.
The IT quality function should focus on broad, cross-functional quality issues that are high
priority and critical in nature to resolve. From an IT perspective, the scope should include such
areas as application development, networking, databases, data centers and end-user support (help
desk). From a business perspective, the function's responsibilities should include virtually the
entire organization because most business areas will likely have some sort of IT infrastructure or
application.
The IT quality leader will work with business executives and the CIO, while the quality
advocates will work with the extended quality community. The leader's key responsibilities are:
The IT quality function calls for a high-powered, extremely talented team of "A" players.
Therefore, the quality leader must be able to build and sustain an excellent executive network.
The leader should consistently demonstrate a high sense of urgency and motivate people to
address issues that concern the entire organization. For their part, quality advocates should be
adept at communicating with superiors and peers, analyzing issues and working in cross-
functional teams. The business executives, the CIO and the IT quality leader must agree to a set
of measurements that will track the progress of IT quality initiatives and issues. While
consistency between groups is desirable, it is more important to relate the measures logically to
the activities involved. The quality measures should reflect the items that remain important to
users and those that drive user satisfaction. Each measure should include a target and time frame.
An example of a user-focused measure: User's perception of IT performance (measure), increase
to 75 percent (target), by second quarter 1999 (time frame). User-focused measures should be
based on the user's view of IT quality. However, the IT quality function must also measure the
internal drivers affecting user measures. For example: Number of defects per user (measure),
reduce by 10 percent (target), by fourth quarter 1999 (time frame).
Quality Function Deployment was developed by Yoji Akao in Japan in 1966. By 1972 the power
of the approach had been well demonstrated at the Mitsubishi Heavy Industries Kobe Shipyard
and in 1978 the first book on the subject was published in Japanese and then later translated into
English in 1994. In Akao’s words, QFD "is a method for developing a design quality aimed at
satisfying the consumer and then translating the consumer's demand into design targets and
major quality assurance points to be used throughout the production phase. [QFD] is a way to
assure the design quality while the product is still in the design stage." As a very important side
benefit he points out that, when appropriately applied, QFD has demonstrated the reduction of
development time by one-half to one-third.
Quality function deployment is a team-based management tool in which the customer expectations
are used to drive the product development process. Conflicting characteristics or requirements are
identified early in the QFD process and can be resolved before production. Ultimately the goal of
QFD is to translate often subjective quality criteria into objective ones that can be quantified and
measured and which can then be used to design and manufacture the product. It is a complementary method for determining how and where priorities are to be assigned in product
development. The intent is to employ objective procedures in increasing detail throughout the
development of the product.
Organizations today use market research to decide on what to produce to satisfy customer
requirements. Some customer requirements adversely affect others, and customers often cannot
explain their expectations. Confusion and misinterpretation are also a problem while a product
moves from marketing to design to engineering to manufacturing. This activity is where the voice
of the customer becomes lost and the voice of the organization adversely enters the product design.
Instead of working on what the customer expects, work is concentrated on fixing what the
customer does not want. In other words, it is not productive to improve something the customer did
not want initially. By implementing QFD, an organization is guaranteed to implement the voice of
the customer in the final product.
Quality function deployment helps identify new quality technology and job functions to carry out
operations. This tool provides a historic reference to enhance future technology and prevent design
errors. QFD is primarily a set of graphically oriented planning matrices that are used as the basis
for decisions affecting any phase of the product development cycle. Results of QFD are measured
based on the number of design and engineering changes, time to market, cost, and quality. It is
considered by many experts to be a perfect blueprint for concurrent engineering. Quality function
deployment enables the design phase to concentrate on the customer requirements, thereby
spending less time on redesign and modifications. The saved time has been estimated at one-
third to one-half of the time taken for redesign and modification using traditional means. This
saving means reduced development cost and also additional income because the product enters
the market sooner.
There are two types of teams: those developing a new product and those improving an existing product. Teams are composed of members from marketing, design, quality, finance, and production. The existing
product team usually has fewer members, because the QFD process will only need to be
modified. Time and inter-team communication are two very important things that each team
must utilize to their fullest potential. Using time effectively is the essential resource in getting the
project done on schedule. Using inter-team communication to its fullest extent will alleviate
unforeseen problems and make the project run smoothly.
Team meetings are very important in the QFD process. The team leader needs to ensure that the
meetings are run in the most efficient manner and that the members are kept informed. The
format needs to have some way of measuring how well the QFD process is working at each
meeting and should be flexible, depending on certain situations. The duration of the meeting will
rely on where the team’s members are coming from and what needs to be accomplished. These
workshops may have to last for days if people are coming from around the world or for only
hours if everyone is local. There are advantages to shorter meetings, and sometimes a lot more
can be accomplished in a shorter meeting. Shorter meetings allow information to be collected
between times that will ensure that the right information is being entered into the QFD matrix.
Also, they help keep the team focused on a quality improvement goal.
QFD uses some principles from Concurrent Engineering in that cross-functional teams are
involved in all phases of product development. Each of the four phases in a QFD process uses a
matrix to translate customer requirements from initial planning stages through production
control. Each phase, or matrix, represents a more specific aspect of the product's requirements.
Relationships between elements are evaluated for each phase. Only the most important aspects
from each phase are deployed into the next matrix.
Phase 1: Product Planning: Building the House of Quality. Led by the marketing department,
Phase 1, or product planning, is also called The House of Quality. Many organizations only get
through this phase of a QFD process. Phase 1 documents customer requirements, warranty data,
competitive opportunities, product measurements, competing product measures, and the
technical ability of the organization to meet each customer requirement. Getting good data from
the customer in Phase 1 is critical to the success of the entire QFD process.
Phase 2: Product Design: Phase 2 is led by the engineering department. Product design
requires creativity and innovative team ideas. Product concepts are created during this phase and
part specifications are documented. Parts that are determined to be most important to meeting
customer needs are then deployed into process planning, or Phase 3.
Phase 3: Process Planning: Process planning comes next and is led by manufacturing
engineering. During process planning, manufacturing processes are flowcharted and process
parameters (or target values) are documented.
Phase 4: Process Control: And finally, in production planning, performance indicators are
created to monitor the production process, maintenance schedules, and skills training for
operators. Also, in this phase decisions are made as to which process poses the most risk and
controls are put in place to prevent failures. The quality assurance department in concert with
manufacturing leads Phase 4.
QFD promotes teamwork: decisions are based on consensus, communication is created at interfaces, actions at interfaces are identified, and a global view is created out of the details.
The driving force behind QFD is that the customer dictates the attributes of a product. Customer
satisfaction, like quality, is defined as meeting or exceeding customer expectations. Words used by
the customers to describe their expectations are often referred to as the voice of the customer.
Sources for determining customer expectations are focus groups, surveys, complaints, consultants,
standards, and federal regulations. Frequently, customer expectations are vague and general in
nature. It is the job of the QFD team to break down these customer expectations into more specific
customer requirements. Customer requirements must be taken literally and not incorrectly
translated into what organization officials desire.
Quality function deployment begins with marketing to determine what exactly the customer
desires from a product. During the collection of information, the QFD team must continually ask and answer numerous questions about what the customer really wants and expects. Information can come from many sources: focus groups, trade visits, customer visits, complaint reports, consultants, organization standards, government regulations, lawsuits, the sales force, training programs, hot lines, conventions, surveys, trade journals, customer tests, trade shows, trade trials, vendors, preferred customers, suppliers, academia, product purchase surveys, employees, and customer audits. This information may be solicited or unsolicited, quantitative or qualitative, structured or random, and lagging or leading.
Customer information, sources, and ways an organization can collect data can be briefly stated as
follows:
Solicited, measurable, and routine data are typically found by customer surveys, market
surveys, and trade trials, working with preferred customers, analyzing products from other
manufacturers, and buying back products from the field. This information tells an
organization how it is performing in the current market.
Unsolicited, measurable, and routine data tend to take the form of customer complaints or
lawsuits. This information is generally disliked; however, it provides valuable learning
information.
Solicited, subjective, and routine data are usually gathered from focus groups. The object of
these focus groups is to find out the likes, dislikes, trends, and opinions about current and
future products.
Solicited, subjective, and haphazard data are usually gathered from trade visits, customer
visits, and independent consultants. These types of data can be very useful; however, they
can also be misleading, depending on the quantity and frequency of information.
Unsolicited, subjective, and haphazard data are typically obtained from conventions, vendors,
suppliers, and employees. This information is very valuable and often relates the true voice
of the customer.
The goal of QFD is not only to meet as many customer expectations and needs as possible, but
also to exceed customer expectations. Each QFD team must make its product either more
appealing than the existing product or more appealing than the product of a competitor. This
situation implies that the team has to introduce an expectation or need in its product that the
customer is not expecting but would appreciate. For example, cup holders were put into
automobiles as an extra bonus, but customers liked them so well that they are now expected in all
new automobiles.
The affinity diagram is a tool that gathers a large amount of data and subsequently organizes the
data into groupings based on their natural interrelationships. An affinity diagram should be implemented when new solutions are needed to circumvent the more traditional ways of problem solving.
This method should not be used when the problem is simple or a quick solution is needed. The
team needed to accomplish this goal effectively should be a multidisciplinary one that has the
needed knowledge to delve into the various areas of the problem. A team of six to eight members
should be adequate to assimilate all of the thoughts. Constructing an affinity diagram requires
four simple steps:
The first step is to phrase the objective in a short and concise statement. It is imperative that the
statement be as generalized and vague as possible.
The second step is to organize a brainstorming session, in which responses to this statement are
individually recorded on cards and listed on a pad. It is sometimes helpful to write down a
summary of the discussion on the back of cards so that, in the future when the cards are
reviewed, the session can be briefly explained.
Next, all the cards should be sorted by placing the cards that seem to be related into groups.
Then, a card or word is chosen that best describes each related group, which becomes the
heading for each group of responses. Finally, lines are placed around each group of responses
and related clusters are placed near each other with a connecting line.
Figure: The House of Quality. Its regions are the customer requirements (voice of the customer), the prioritized customer requirements, the technical descriptors (voice of the organization), the relationship matrix between requirements and descriptors, the interrelationships between technical descriptors, and the prioritized technical descriptors.
The exterior walls of the house are the customer requirements. On the left side is a listing of
the voice of the customer, or what the customer expects in the product. On the right side are
the prioritized customer requirements, or planning matrix. Listed are items such as customer
benchmarking, customer importance rating, target value, scale-up factor, and sales point.
The ceiling, or second floor, of the house contains the technical descriptors. Consistency of
the product is provided through engineering characteristics, design constraints, and
parameters.
The interior walls of the house are the relationships between customer requirements and
technical descriptors. Customer expectations (customer requirements) are translated into
engineering characteristics (technical descriptors).
The roof of the house is the interrelationship between technical descriptors. Tradeoffs
between similar and/or conflicting technical descriptors are identified.
The foundation of the house is the prioritized technical descriptors. Items such as the
technical benchmarking, degree of technical difficulty, and target value are listed.
This is the basic structure for the house of quality; once this format is understood, any other
QFD matrices are fairly straightforward.
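The prioritization step at the foundation of the house can be illustrated with a small computation. The following sketch is only illustrative: it assumes the common QFD convention of weighting strong, medium, and weak relationships as 9, 3, and 1 and multiplying by the customer importance ratings; the requirement names, descriptor names, and numbers are hypothetical.

# Illustrative sketch: computing prioritized technical descriptors from a
# small relationship matrix, assuming the common 9/3/1 weighting convention.
# All names and numbers below are hypothetical.
customer_importance = {"easy to use": 5, "fast response": 4, "reliable": 3}

# Relationship strengths between customer requirements (rows) and
# technical descriptors (columns): 9 = strong, 3 = medium, 1 = weak, 0 = none.
relationships = {
    "easy to use":   {"screens redesigned": 9, "response time (s)": 1, "defect density": 0},
    "fast response": {"screens redesigned": 0, "response time (s)": 9, "defect density": 3},
    "reliable":      {"screens redesigned": 0, "response time (s)": 3, "defect density": 9},
}

# Prioritized technical descriptor = sum over requirements of
# (importance rating x relationship strength).
priorities = {}
for requirement, weights in relationships.items():
    for descriptor, strength in weights.items():
        priorities[descriptor] = priorities.get(descriptor, 0) + (
            customer_importance[requirement] * strength
        )

for descriptor, score in sorted(priorities.items(), key=lambda kv: -kv[1]):
    print(f"{descriptor}: {score}")

The descriptor with the highest weighted score would receive the most attention in the next phase.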
SQA planning tackles almost every aspect of SQA's operation. Through planning, the role of each member, and even of non-members, of the SQA team is clearly defined. The reason for this is simple: when everyone knows their role and its boundaries, responsibilities do not overlap and everyone can concentrate on their own role.
But the SQA plan is not only a document that tells who performs which task; the stages of the SQA process are also detailed in it. The whole SQA team will be very busy once the actual testing starts, but with an SQA plan everyone's work is clearly laid out. Through planning, the actual state of the application testing is always known.
In smaller businesses, planning may be limited to the application testing phase, but in larger corporations the scenario changes, and it is only through planning that everyone knows where they are and where they are going in terms of SQA.
SQA planning is not just a simple document in which objectives are written and stages are clearly stated. Because of the need to standardize software development and limit errors, a systematic approach is recommended in developing an SQA plan, and standards such as IEEE Std 730 and IEEE Std 983 can be used to guide it.
In the first phase, the SQA team should write in detail the activities related for software
requirements. In this stage, the team will be creating steps and stages on how they will analyze
the software requirements. They could refer to additional documents to ensure the plan works
out.
In the second stage, the SQA plan for architectural design (AD), the team should describe in detail how it will analyze the development team's preparation for the detailed build-up. This stage produces a rough representation of the program, but it still has to go through rigorous scrutiny before it reaches the next stage.
The third phase, which tackles the quality assurance plan for detailed design and the actual product, is probably the longest of the phases. The SQA team should write in detail the tools and approach
they will be using to ensure that the produced application is written according to plan. The team
should also start planning on the transfer phase as well.
The last stage is the QA plan for transfer of technology to the operations. The SQA team should
write their plan on how they will monitor the transfer of technology such as training and support.
Start your quality journey by mastering these basic quality tools, and you'll have a name for them too: "indispensable."
i Fishbone Diagram
Variations: cause enumeration diagram, process fishbone, time–delay fishbone, CEDAC (cause–
and–effect diagram with the addition of cards), desired–result fishbone, reverse fishbone diagram
The fishbone diagram identifies many possible causes for an effect or problem. It can be used to
structure a brainstorming session. It immediately sorts ideas into useful categories.
Agree on a problem statement (effect). Write it at the center right of the flipchart or whiteboard.
Draw a box around it and draw a horizontal arrow running to it.
Brainstorm the major categories of causes of the problem. If this is difficult, use generic headings such as:
Methods
Machines (equipment)
People (manpower)
Materials
Measurement
Environment
Again ask “why does this happen?” about each cause. Write sub–causes branching off the
causes. Continue to ask “Why?” and generate deeper levels of causes. Layers of branches
indicate causal relationships. When the group runs out of ideas, focus attention on places on the
chart where ideas are few.
This fishbone diagram was drawn by a manufacturing team to try to understand the source of
periodic iron contamination. The team used the six generic headings to prompt ideas. Layers of
branches show thorough thinking about the causes of the problem.
Note that some ideas appear in two different places. “Calibration” shows up under “Methods” as
a factor in the analytical procedure, and also under “Measurement” as a cause of lab error. “Iron
tools” can be considered a “Methods” problem when taking samples or a “Manpower” problem
with maintenance personnel.
ii Check Sheet
A check sheet is a structured, prepared form for collecting and analyzing data. This is a generic
tool that can be adapted for a wide variety of purposes.
When data can be observed and collected repeatedly by the same person or at the same location.
When collecting data on the frequency or patterns of events, problems, defects, defect location,
defect causes, etc.
Design the form. Set it up so that data can be recorded simply by making check marks or Xs or
similar symbols and so that data do not have to be recopied for analysis.
Label all spaces on the form.
Test the check sheet for a short trial period to be sure it collects the appropriate data and is easy
to use.
Each time the targeted event or problem occurs, record data on the check sheet.
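A check sheet is usually kept on paper, but the same tally can be sketched in code. The example below is only illustrative; the interruption categories and observations are hypothetical.

from collections import Counter

# Illustrative check sheet: tally the reason recorded for each telephone
# interruption over a data-collection period (hypothetical data).
observations = [
    "wrong number", "caller on hold", "wrong number", "transferred call",
    "caller on hold", "wrong number",
]

check_sheet = Counter(observations)
for reason, count in check_sheet.most_common():
    print(f"{reason:<16} {'|' * count}  ({count})")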
The figure below shows a check sheet used to collect data on telephone interruptions. The tick
marks were added as data was collected over several weeks.
iii Control Chart
The control chart is a graph used to study how a process changes over time. Data are plotted in
time order. A control chart always has a central line for the average, an upper line for the upper
control limit and a lower line for the lower control limit. These lines are determined from
historical data. By comparing current data to these lines, you can draw conclusions about
whether the process variation is consistent (in control) or is unpredictable (out of control,
affected by special causes of variation).
Control charts for variable data are used in pairs. The top chart monitors the average, or the
centering of the distribution of data from the process. The bottom chart monitors the range, or the
width of the distribution. If your data were shots in target practice, the average is where the shots
are clustering, and the range is how tightly they are clustered. Control charts for attribute data are
used singly.
When controlling ongoing processes by finding and correcting problems as they occur.
When analyzing patterns of process variation from special causes (non-routine events) or
common causes (built into the process).
When determining whether your quality improvement project should aim to prevent specific
problems or to make fundamental changes to the process.
Determine the appropriate time period for collecting and plotting data.
Look for “out-of-control signals” on the control chart. When one is identified, mark it on the
chart and investigate the cause. Document how you investigated, what you learned, the cause and
how it was corrected.
Out-of-control signals
A single point outside the control limits. In Figure 3-4, point sixteen is above the UCL (upper
control limit).
Two out of three successive points are on the same side of the centerline and farther than 2 σ
from it. In Figure 3-4, point 4 sends that signal.
Four out of five successive points are on the same side of the centerline and farther than 1 σ from
it. In Figure 3-4, point 11 sends that signal.
A run of eight points in a row on the same side of the centerline, or 10 out of 11, 12 out of 14, or 16 out of 20. In Figure 3-4, point 21 is the eighth in a row above the centerline.
Obvious consistent or persistent patterns that suggest something unusual about your data and
your process.
Continue to plot data as they are generated. As each new data point is plotted, check for new out-
of-control signals.
When you start a new control chart, the process may be out of control. If so, the control limits
calculated from the first 20 points are conditional limits. When you have at least 20 sequential
points from a period when the process is operating in control, recalculate control limits.
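The calculation of the centerline and control limits, and the check for the first out-of-control signal, can be sketched as follows. This is a minimal sketch for an individuals chart, assuming the common practice of estimating the limits from the average moving range with the constant 2.66; the data values are hypothetical.

# Minimal sketch of an individuals control chart, assuming limits are set
# from the average moving range (UCL/LCL = mean +/- 2.66 * average moving
# range, a common convention). The data values are hypothetical.
data = [4.9, 5.1, 5.0, 5.2, 4.8, 5.3, 5.1, 4.7, 5.0, 5.6, 5.2, 4.9]

center_line = sum(data) / len(data)
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

ucl = center_line + 2.66 * mr_bar   # upper control limit
lcl = center_line - 2.66 * mr_bar   # lower control limit
print(f"CL={center_line:.2f}  UCL={ucl:.2f}  LCL={lcl:.2f}")

# First out-of-control signal: a single point outside the control limits.
for i, x in enumerate(data, start=1):
    if x > ucl or x < lcl:
        print(f"point {i} ({x}) is outside the control limits - investigate")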
iv Histogram
A frequency distribution shows how often each different value in a set of data occurs. A
histogram is the most commonly used graph to show frequency distributions. It looks very much
like a bar chart, but there are important differences between them.
When you want to see the shape of the data’s distribution, especially when determining whether
the output of a process is distributed approximately normally.
When analyzing what the output from a supplier’s process looks like.
When seeing whether a process change has occurred from one time period to another.
When determining whether the outputs of two or more processes are different.
When you wish to communicate the distribution of data quickly and easily to others.
Histogram Construction
Use the histogram worksheet to set up the histogram. It will help you determine the number of
bars, the range of numbers that go into each bar and the labels for the bar edges. After calculating
W in step 2 of the worksheet, use your judgment to adjust it to a convenient number. For
example, you might decide to round 0.9 to an even 1.0. The value for W must not have more
decimal places than the numbers you will be graphing.
Draw x- and y-axes on graph paper. Mark and label the y-axis for counting data values. Mark
and label the x-axis with the L values from the worksheet. The spaces between these numbers
will be the bars of the histogram. Do not allow for spaces between bars.
For each data point, mark off one count above the appropriate bar with an X or by shading that
portion of the bar.
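The arithmetic of the histogram worksheet can be sketched as below. Since the worksheet itself is not reproduced here, the sketch assumes one common convention: the number of bars is roughly the square root of the number of data points, and the bar width W is the data range divided by the number of bars, rounded to a convenient value. The measurements are hypothetical.

import math
from collections import Counter

# Hypothetical measurements to be graphed as a histogram.
data = [9.9, 9.3, 10.2, 10.0, 9.7, 10.4, 10.1, 9.8, 10.0, 9.6, 10.3, 9.9]

n_bars = round(math.sqrt(len(data)))     # a common rule of thumb for the number of bars
low, high = min(data), max(data)
width = round((high - low) / n_bars, 1)  # adjust W to a convenient number

# Count how many values fall into each bar (bin) and print a text histogram.
counts = Counter(min(int((x - low) / width), n_bars - 1) for x in data)
for b in range(n_bars):
    left = low + b * width
    print(f"{left:5.1f} - {left + width:5.1f}: {'X' * counts[b]}")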
Histogram Analysis
Before drawing any conclusions from your histogram, satisfy yourself that the process was
operating normally during the time period being studied. If any unusual events affected the
process during the time period of the histogram, your analysis of the histogram shape probably
cannot be generalized to all time periods.
v Pareto Chart
A Pareto chart is a bar graph. The lengths of the bars represent frequency or cost (time or
money), and are arranged with longest bars on the left and the shortest to the right. In this way
the chart visually depicts which situations are more significant.
Decide what measurement is appropriate. Common measurements are frequency, quantity, cost
and time.
Decide what period of time the Pareto chart will cover: One work cycle? One full day? A week?
Collect the data, recording the category each time. (Or assemble data that already exist.)
Determine the appropriate scale for the measurements you have collected. The maximum value
will be the largest subtotal from step 5. (If you will do optional steps 8 and 9 below, the
maximum value will be the sum of all subtotals from step 5.) Mark the scale on the left side of
the chart.
Construct and label bars for each category. Place the tallest at the far left, then the next tallest to
its right and so on. If there are many categories with small measurements, they can be grouped as
“other.”
Steps 8 and 9 are optional but are useful for analysis and communication.
Calculate the percentage for each category: the subtotal for that category divided by the total for
all categories. Draw a right vertical axis and label it with percentages. Be sure the two scales
match: For example, the left measurement that corresponds to one-half should be exactly
opposite 50% on the right scale.
Calculate and draw cumulative sums: Add the subtotals for the first and second categories, and
place a dot above the second bar indicating that sum. To that sum add the subtotal for the third
category, and place a dot above the third bar for that new sum. Continue the process for all the
bars. Connect the dots, starting at the top of the first bar. The last dot should reach 100 percent
on the right scale.
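The sorting and cumulative-percentage calculation of steps 8 and 9 can be sketched as follows, using hypothetical complaint counts.

# Sketch of the Pareto calculation: sort categories by count, then compute
# each category's percentage and the cumulative percentage (steps 8 and 9).
# The complaint categories and counts are hypothetical.
complaints = {"documents": 52, "delivery": 21, "packaging": 12, "billing": 9, "other": 6}

total = sum(complaints.values())
cumulative = 0
for category, count in sorted(complaints.items(), key=lambda kv: -kv[1]):
    cumulative += count
    print(f"{category:<10} {count:>3}  {100 * count / total:5.1f}%  "
          f"cumulative {100 * cumulative / total:5.1f}%")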
Example #1 shows how many customer complaints were received in each of five categories.
Example #2 takes the largest category, “documents,” from Example #1, breaks it down into six
categories of document-related complaints, and shows cumulative values.
If all complaints cause equal distress to the customer, working on eliminating document-related
complaints would have the most impact, and of those, working on quality certificates should be
most fruitful.
vi Scatter Diagram
The scatter diagram graphs pairs of numerical data, with one variable on each axis, to look for a
relationship between them. If the variables are correlated, the points will fall along a line or
curve. The better the correlation, the tighter the points will hug the line.
When your dependent variable may have multiple values for each value of your independent
variable.
When trying to determine whether the two variables are related, such as…
After brainstorming causes and effects using a fishbone diagram, to determine objectively
whether a particular cause and effect are related.
When determining whether two effects that appear to be related both occur with the same cause.
Draw a graph with the independent variable on the horizontal axis and the dependent variable on
the vertical axis. For each pair of data, put a dot or a symbol where the x-axis value intersects the
y-axis value. (If two dots fall together, put them side by side, touching, so that you can see both.)
Look at the pattern of points to see if a relationship is obvious. If the data clearly form a line or a
curve, you may stop. The variables are correlated. You may wish to use regression or correlation
analysis now. Otherwise, complete steps 4 through 7.
Divide points on the graph into four quadrants. If there are X points on the graph,
Count X/2 points from top to bottom and draw a horizontal line.
Count X/2 points from left to right and draw a vertical line.
If number of points is odd, draw the line through the middle point.
Add the diagonally opposite quadrants. Find the smaller sum and the total of points in all
quadrants.
A = points in upper left + points in lower right
B = points in upper right + points in lower left
Q = the smaller of A and B
N=A+B
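The quadrant counts A, B, Q, and N described above can be computed as in the sketch below. The sketch assumes hypothetical paired (x, y) data and uses the median of each variable to place the dividing lines; the trend-test limit still has to be looked up in a table, as in the example that follows.

from statistics import median

# Hypothetical paired data (x = ppm iron, y = percent purity).
points = [(12, 99.1), (15, 98.9), (9, 99.4), (20, 98.5), (11, 99.2),
          (18, 98.7), (14, 99.0), (16, 98.8), (10, 99.3), (19, 98.6)]

x_med = median(x for x, _ in points)
y_med = median(y for _, y in points)

# A = upper-left + lower-right quadrants, B = upper-right + lower-left.
a = sum(1 for x, y in points if (x < x_med and y > y_med) or (x > x_med and y < y_med))
b = sum(1 for x, y in points if (x > x_med and y > y_med) or (x < x_med and y < y_med))

q, n = min(a, b), a + b
print(f"A={a}  B={b}  Q={q}  N={n}")
# Compare Q with the limit from the trend test table for this N; if Q is
# smaller than the limit, a relationship is demonstrated with reasonable certainty.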
The ZZ-400 manufacturing team suspects a relationship between product purity (percent purity)
and the amount of iron (measured in parts per million or ppm). Purity and iron are plotted against
each other as a scatter diagram, as shown in the figure below.
There are 24 data points. Median lines are drawn so that 12 points fall on each side for both
percent purity and ppm iron.
Then they look up the limit for N on the trend test table. For N = 24, the limit is 6.
Q is equal to the limit. Therefore, the pattern could have occurred from random chance, and no
relationship is demonstrated.
Figure 3.10 Scatter Diagram Example
Here are some examples of situations in which might you use a scatter diagram:
Variable A is the temperature of a reaction after 15 minutes. Variable B measures the color of the
product. You suspect higher temperature makes the product darker. Plot temperature and color
on a scatter diagram.
Variable A is the number of employees trained on new software, and variable B is the number of
calls to the computer help line. You suspect that more training reduces the number of calls. Plot
number of people trained versus number of calls.
To test for autocorrelation of a measurement being monitored on a control chart, plot this pair of
variables: Variable A is the measurement at a given time. Variable B is the same measurement,
but at the previous time. If the scatter diagram shows correlation, do another diagram where
variable B is the measurement two times previously. Keep increasing the separation between the
two times until the scatter diagram shows no correlation.
Even if the scatter diagram shows a relationship, do not assume that one variable caused the
other. Both may be influenced by a third variable.
When the data are plotted, the more the diagram resembles a straight line, the stronger the
relationship.
If a line is not clear, statistics (N and Q) determine whether there is reasonable certainty that a
relationship exists. If the statistics say that no relationship exists, the pattern could have occurred
by random chance.
If the scatter diagram shows no relationship between the variables, consider whether the data
might be stratified.
If the diagram shows no relationship, consider whether the independent (x-axis) variable has
been varied widely. Sometimes a relationship is not apparent because the data don’t cover a wide
enough range.
Think creatively about how to use scatter diagrams to discover a root cause.
Drawing a scatter diagram is the first step in looking for a relationship between variables.
vii. Stratification
Stratification is a technique used in combination with other data analysis tools. When data from a
variety of sources or categories have been lumped together, the meaning of the data can be
impossible to see. This technique separates the data so that patterns can be seen.
When data come from several sources or conditions, such as shifts, days of the week, suppliers
or population groups.
Before collecting data, consider which information about the sources of the data might have an
effect on the results. Set up the data collection so that you collect that information as well.
When plotting or graphing the collected data on a scatter diagram, control chart, histogram or
other analysis tool, use different marks or colors to distinguish data from various sources. Data
that are distinguished in this way are said to be “stratified.”
Analyze the subsets of stratified data separately. For example, on a scatter diagram where data
are stratified into data from source 1 and data from source 2, draw quadrants, count points and
determine the critical value only for the data from source 1, then only for the data from source 2.
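Separating the data by source before analysis can be sketched as follows; the sources and values are hypothetical, and each subset would then be analyzed (or plotted with its own symbol) on its own.

from collections import defaultdict

# Hypothetical stratified data: (source, x, y) triples, e.g. (reactor, ppm iron, purity).
records = [("reactor 1", 12, 99.1), ("reactor 2", 18, 98.6), ("reactor 1", 14, 99.2),
           ("reactor 3", 20, 98.4), ("reactor 2", 16, 98.8), ("reactor 3", 22, 98.3)]

# Group the (x, y) pairs by their source so each subset can be analyzed
# separately (its own symbol on the scatter diagram, its own quadrant counts, etc.).
by_source = defaultdict(list)
for source, x, y in records:
    by_source[source].append((x, y))

for source, pairs in sorted(by_source.items()):
    print(source, pairs)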
Stratification Example
The ZZ–400 manufacturing team drew a scatter diagram to test whether product purity and iron
contamination were related, but the plot did not demonstrate a relationship. Then a team member
realized that the data came from three different reactors. The team member redrew the diagram,
using a different symbol for each reactor’s data:
Figure 3.11: Stratification Example
Now patterns can be seen. The data from reactor 2 and reactor 3 are circled. Even without doing
any calculations, it is clear that for those two reactors, purity decreases as iron increases.
However, the data from reactor 1, the solid dots that are not circled, do not show that
relationship. Something is different about reactor 1.
Stratification Considerations
Here are examples of different sources that might require data to be stratified:
Equipment
Shifts
Departments
Materials
Suppliers
Time of day
Products
Always consider before collecting data whether stratification might be needed during analysis.
Plan to collect stratification information. After the data are collected it might be too late.
On your graph or chart, include a legend that identifies the marks or colors used.
3.7 QUALITY BASELINES
Quality Baselines (Assessments and Models) Organizations need to establish baselines of
performance for quality, productivity and customer satisfaction. These baselines are used to
document current performance and document improvements by showing changes from a
baseline. In order to establish a baseline, a model and/or goal must be established against which performance can be measured.
The scope of internal auditing within an organization is broad and may involve topics such as the
efficacy of operations, the reliability of financial reporting, deterring and investigating fraud,
safeguarding assets, and compliance with laws and regulations.
Internal auditing frequently involves measuring compliance with the entity's policies and
procedures. However, internal auditors are not responsible for the execution of company
activities; they advise management and the Board of Directors (or similar oversight body)
regarding how to better execute their responsibilities. As a result of their broad scope of
involvement, internal auditors may have a variety of higher educational and professional
backgrounds.
Publicly-traded corporations typically have an internal auditing department, led by a Chief Audit
Executive ("CAE") who generally reports to the Audit Committee of the Board of Directors,
with administrative reporting to the Chief Executive Officer.
Review of the audit universe and the method followed for annual risk assessment leading to the
audit plan.
Evaluate organizational structure, staffing, and internal audit approach of the department.
Determine how internal auditing is perceived through interviews and surveys with customers,
including governance personnel.
Examine techniques and methodology for testing controls. Identify ways to enhance the
department's policies and practices.
Evaluate whether the department conforms to The IIA's International Standards for the
Professional Practice of Internal Auditing (ISPPIA).
Assess compliance with the ISPPIA as promulgated by the Institute of Internal Auditors.
3.9 SUMMARY
The scope of Software Quality Assurance (SQA) extends from the planning of the application until it is distributed for actual operations. To successfully monitor the application build-up process, the SQA team also has its own written plan. In a regular SQA plan, the team will have enumerated all the functions, tools, and metrics that are expected from the application. SQA planning is the basis of everything once the actual SQA starts. Without SQA planning, the team will never know the scope of its function. Through planning, the client's expectations are detailed, and from that point the SQA team knows how to build metrics and the development team can start working on the application.
Most of the organizations use quality tools for various purposes related to controlling and
assuring quality. Although there are a good number of quality tools specific to certain domains,
fields, and practices, some of the quality tools can be used across such domains. These quality
tools are quite generic and can be applied to any condition. There are various basic quality tools
used in organizations. These tools can provide much information about problems in the organization, assisting in deriving solutions for them. Brief training, mostly self-training, is sufficient for someone to start using the tools.
Auditing is an independent, objective assurance and consulting activity designed to add value
and improve an organization's operations. It helps an organization accomplish its objectives by
bringing a systematic, disciplined approach to evaluate and improve the effectiveness of risk
management, control, and governance processes. Internal auditing is a catalyst for improving an
organization’s effectiveness and efficiency by providing insight and recommendations based on
analyses and assessments of data and business processes.
Assignment-Module 3
2. QFD focuses on
a. Product Transition
b. Product operation
c. Product and Process Planning
d. Confusion and misinterpretation
3. Benefits of QFD
a. Customer satisfaction
b. Conformance to specification
c. Creates communication at interface
d. None of them
7. A ___________ always has a central line for the average, an upper line for the upper
control limit and a lower line for the lower control limit.
a. Histogram
b. Pareto chart
c. Bar chart
d. Control chart
Key - Module 3
1. b
2. c
3. c
4. c
5. a
6. b
7. d
8. c
9. a
10. a
CHAPTER 4 : SOFTWARE QUALITY CONTROL
Software testing can be stated as the process of validating and verifying that a software program, application, or product meets the requirements that guided its design and development and works as expected.
The view of software testing has evolved towards a more constructive one. Testing is no longer
seen as an activity which starts only after the coding phase is complete, with the limited purpose
of detecting failures. Software testing is now seen as an activity which should encompass the
whole development and maintenance process and is itself an important part of the actual product
construction. Indeed, planning for testing should start with the early stages of the requirement
process, and test plans and procedures must be systematically and continuously developed, and
possibly refined, as development proceeds. These test planning and designing activities
themselves constitute useful input for designers in highlighting potential weaknesses (like design
oversights or contradictions, and omissions or ambiguities in the documentation). Software
testing, depending on the testing method employed, can be implemented at any time in the
development process.
Different software development models will focus the test effort at different points in the
development process. Newer development models, such as Agile, often employ test-driven
development and place an increased portion of the testing in the hands of the developer, before it
reaches a formal team of testers. In a more traditional model, most of the test execution occurs
after the requirements have been defined and the coding process has been completed.
A primary purpose of testing is to detect software failures so that defects may be discovered and
corrected. Testing cannot establish that a product functions properly under all conditions but can
only establish that it does not function properly under specific conditions. The scope of software
testing often includes examination of code as well as execution of that code in various
environments and conditions as well as examining the aspects of code: does it do what it is
supposed to do and do what it needs to do. In the current culture of software development, a
testing organization may be separate from the development team. There are various roles for
testing team members. Information derived from software testing may be used to correct the
process by which software is developed.
4.2.3 Economics
A study conducted by NIST in 2002 reports that software bugs cost the U.S. economy $59.5
billion annually. More than a third of this cost could be avoided if better software testing was
performed. It is commonly believed that the earlier a defect is found the cheaper it is to fix it.
4.2.4 Roles
Software testing can be done by software testers. Until the 1980s the term "software tester" was
used generally, but later it was also seen as a separate profession. Regarding the periods and the
different goals in software testing, different roles have been established: manager, test lead, test
designer, tester, automation developer, and test administrator.
While white-box testing can be applied at the unit, integration and system levels of the software
testing process, it is usually done at the unit level. It can test paths within a unit, paths between
units during integration, and between subsystems during a system–level test. Though this method
of test design can uncover many errors or problems, it might not detect unimplemented parts of
the specification or missing requirements.
API testing (application programming interface) - testing of the application using public
and private APIs
Code coverage - creating tests to satisfy some criteria of code coverage (e.g., the test
designer can create tests to cause all statements in the program to be executed at least
once)
Fault injection methods - intentionally introducing faults to gauge the efficacy of testing
strategies
Mutation testing methods
Static testing methods
Code coverage tools can evaluate the completeness of a test suite that was created with any
method, including black-box testing. This allows the software team to examine parts of a system
that are rarely tested and ensures that the most important function points have been tested. Code
coverage as a software metric can be reported as a percentage for function coverage, statement coverage, and branch (decision) coverage.
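The difference between these coverage measures can be seen in a toy example. The sketch below does not use any particular coverage tool; it simply shows how one test case can execute every statement on one path while leaving a branch untested. The function and its rule are hypothetical.

def classify_discount(amount):
    """Return the discount rate for an order amount (hypothetical rule)."""
    if amount >= 100:
        return 0.10      # branch taken for large orders
    return 0.0           # branch taken for small orders

# A single test case calls the function (100% function coverage) but only
# exercises the first branch, so branch/decision coverage is 50%.
assert classify_discount(150) == 0.10

# Adding a second case exercises the other branch, raising branch coverage
# to 100%; a coverage tool run over the test suite would report these
# percentages automatically.
assert classify_discount(40) == 0.0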
Specification-based testing aims to test the functionality of software according to the applicable
requirements. This level of testing usually requires thorough test cases to be provided to the
tester, who then can simply verify that for a given input, the output value (or behavior), either
"is" or "is not" the same as the expected value specified in the test case. Test cases are built
around specifications and requirements, i.e., what the application is supposed to do. It uses
external descriptions of the software, including specifications, requirements, and designs to
derive test cases. These tests can be functional or non-functional, though usually functional.
This method of test can be applied to all levels of software testing: unit, integration, system and
acceptance. It typically comprises most if not all testing at higher levels, but can also dominate unit testing.
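A specification-based test simply checks that, for a given input, the observed output matches the value the specification calls for. The sketch below assumes a hypothetical specification for a leap-year function; the tester needs no knowledge of how the function is implemented.

def is_leap_year(year):
    """Implementation under test (its internals are irrelevant to the tester)."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Test cases derived purely from the specification: (input, expected output).
specification_cases = [
    (2000, True),   # divisible by 400 -> leap year
    (1900, False),  # divisible by 100 but not 400 -> not a leap year
    (2024, True),   # divisible by 4 -> leap year
    (2023, False),  # not divisible by 4 -> not a leap year
]

for year, expected in specification_cases:
    actual = is_leap_year(year)
    assert actual == expected, f"{year}: expected {expected}, got {actual}"
print("all specification-based checks passed")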
By knowing the underlying concepts of how the software works, the tester makes better-
informed testing choices while testing the software from outside. Typically, a grey-box tester
will be permitted to set up his testing environment; for instance, seeding a database; and the
tester can observe the state of the product being tested after performing certain actions. Grey-box
testing implements intelligent test scenarios, based on limited information. This will particularly
apply to data type handling, exception handling, and so on.
4.4.6 Visual testing
The aim of visual testing is to provide developers with the ability to examine what was
happening at the point of software failure by presenting the data in such a way that the developer
can easily find the information he requires, and the information is expressed clearly.
At the core of visual testing is the idea that showing someone a problem (or a test failure), rather
than just describing it, greatly increases clarity and understanding. Visual testing therefore
requires the recording of the entire test process – capturing everything that occurs on the test
system in video format. Output videos are supplemented by real-time tester input via picture-in-
a-picture webcam and audio commentary from microphones.
Visual testing is particularly well-suited for environments that deploy agile methods in their
development of software, since agile methods require greater communication between testers and
developers and collaboration within small teams.
Ad hoc testing and exploratory testing are important methodologies for checking software
integrity, because they require less preparation time to implement, whilst important bugs can be
found quickly. In ad hoc testing, where testing takes place in an improvised, impromptu way, the
ability of a test tool to visually record everything that occurs on a system becomes very
important.
Visual testing is gathering recognition in customer acceptance and usability testing, because the
test can be used by many individuals involved in the development process.
For the customer, it becomes easy to provide detailed bug reports and feedback, and for program
users, visual testing can record user actions on screen, as well as their voice and image, to
provide a complete picture at the time of software failure for the developer.
4.5 TESTING LEVELS
Tests are frequently grouped by where they are added in the software development process, or by
the level of specificity of the test. The main levels during the development process as defined by
the SWEBOK guide are unit-, integration-, and system testing that are distinguished by the test
target without implying a specific process model.
Bottom Up Testing is an approach to integrated testing in which all the bottom or low-level modules, procedures or functions are integrated and then tested.
After the integration testing of lower level integrated modules, the next level of modules will be
formed and can be used for integration testing. This approach is helpful only when all or most of
the modules of the same development level are ready. This method also helps to determine the
levels of software developed and makes it easier to report testing progress in the form of a
percentage.
Top Down Testing is an approach to integrated testing where the top integrated modules are
tested and the branch of the module is tested step by step until the end of the related module.
Smoke testing is used to determine whether there are serious problems with a piece of software,
for example as a build verification test.
1. A smoke test is used as an acceptance test prior to introducing a new build to the main
testing process, i.e. before integration or regression.
2. Acceptance testing performed by the customer, often in their lab environment on their
own hardware, is known as user acceptance testing (UAT). Acceptance testing may be
performed as part of the hand-off process between any two phases of development.
Non-functional testing refers to aspects of the software that may not be related to a specific
function or user action, such as scalability or other performance, behavior under certain
constraints, or security. Testing will determine the breaking point, the point at which extremes of scalability or performance lead to unstable execution. Non-functional requirements tend to be
those that reflect the quality of the product, particularly in the context of the suitability
perspective of its users.
Software fault injection, in the form of fuzzing, is an example of failure testing. Various commercial non-functional testing tools exist for software fault injection, and there are also numerous open-source and free software tools available that perform destructive testing.
Load testing is primarily concerned with testing that the system can continue to operate under a
specific load, whether that be large quantities of data or a large number of users. This is
generally referred to as software scalability. The related load testing activity, when performed as a non-functional activity, is often referred to as endurance testing. Volume testing is a way to
test software functions even when certain components (for example a file or database) increase
radically in size. Stress testing is a way to test reliability under unexpected or rare workloads.
Stability testing (often referred to as load or endurance testing) checks to see if the software can
continuously function well in or above an acceptable period.
There is little agreement on what the specific goals of performance testing are. The terms load
testing, performance testing, reliability testing, and volume testing, are often used
interchangeably.
4.6.12 Accessibility
Accessibility testing might include compliance with accessibility standards such as the Americans with Disabilities Act of 1990, Section 508 of the Rehabilitation Act of 1973, and the W3C Web Accessibility Initiative (WAI) guidelines.
A related concern is internationalization and localization testing, which checks for problems such as the following:
Software is often localized by translating a list of strings out of context, and the translator
may choose the wrong translation for an ambiguous source string.
Technical terminology may become inconsistent if the project is translated by several
people without proper coordination or if the translator is imprudent.
Literal word-for-word translations may sound inappropriate, artificial or too technical in
the target language.
Untranslated messages in the original language may be left hard coded in the source
code.
Some messages may be created automatically at run time and the resulting string may be
ungrammatical, functionally incorrect, misleading or confusing.
Software may use a keyboard shortcut which has no function on the source language's
keyboard layout, but is used for typing characters in the layout of the target language.
Software may lack support for the character encoding of the target language.
Fonts and font sizes which are appropriate in the source language may be inappropriate in
the target language; for example, CJK characters may become unreadable if the font is
too small.
A string in the target language may be longer than the software can handle. This may
make the string partly invisible to the user or cause the software to crash or malfunction.
Software may lack proper support for reading or writing bi-directional text.
Software may display images with text that was not localized.
Localized operating systems may have differently-named system configuration files and
environment variables and different formats for date and currency.
To avoid these and other localization problems, a tester who knows the target language must run
the program with all the possible use cases for translation to see if the messages are readable,
translated correctly in context and do not cause failures.
4.7 THE TESTING PROCESS
Testing concepts, strategies, techniques, and measures need to be integrated into a defined and
controlled process which is run by people. The test process supports testing activities and
provides guidance to testing teams, from test planning to test output evaluation, in such a way as
to provide justified assurance that the test objectives will be met cost-effectively.
A very important component of successful testing is a collaborative attitude towards testing and
quality assurance activities. Managers have a key role in fostering a generally favorable
reception towards failure discovery during development and maintenance; for instance, by
preventing a mindset of code ownership among programmers, so that they will not feel
responsible for failures revealed by their code.
Test guides
The testing phases could be guided by various aims, for example: in risk-based testing, which
uses the product risks to prioritize and focus the test strategy; or in scenario-based testing, in
which test cases are defined based on specified software scenarios.
Test activities conducted at different levels must be organized, together with people, tools,
policies, and measurements, into a well-defined process which is an integral part of the life cycle.
In IEEE/EIA Standard 12207.0, testing is not described as a stand-alone process, but principles
for testing activities are included along with both the five primary life cycle processes and the
supporting process. In IEEE Std 1074, testing is grouped with other evaluation activities as
integral to the entire life cycle.
Test documentation and work products
Documentation is an integral part of the formalization of the test process. The IEEE Standard for
Software Test Documentation (IEEE829-98) provides a good description of test documents and
of their relationship with one another and with the testing process. Test documents may include,
among others, Test Plan, Test Design Specification, Test Procedure Specification, Test Case
Specification, Test Log, and Test Incident or Problem Report. The software under test is
documented as the Test Item. Test documentation should be produced and continually updated,
to the same level of quality as other types of documentation in software engineering.
Formalization of the test process may involve formalizing the test team organization as well. The
test team can be composed of internal members (that is, on the project team, involved or not in
software construction), of external members, in the hope of bringing in an unbiased, independent
perspective, or, finally, of both internal and external members. Considerations of costs, schedule,
maturity levels of the involved organizations, and criticality of the application may determine the
decision.
Several measures related to the resources spent on testing, as well as to the relative fault-finding
effectiveness of the various test phases, are used by managers to control and improve the test
process. These test measures may cover such aspects as number of test cases specified, number
of test cases executed, number of test cases passed, and number of test cases failed, among
others.
Evaluation of test phase reports can be combined with root-cause analysis to evaluate test
process effectiveness in finding faults as early as possible. Such an evaluation could be
associated with the analysis of risks. Moreover, the resources that are worth spending on testing
should be commensurate with the use/criticality of the application: different techniques have
different costs and yield different levels of confidence in product reliability.
Termination
A decision must be made as to how much testing is enough and when a test stage can be
terminated. Thoroughness measures, such as achieved code coverage or functional completeness,
as well as estimates of fault density or of operational reliability, provide useful support, but are
not sufficient in themselves.
To carry out testing or maintenance in an organized and cost-effective way, the means used to
test each part of the software should be reused systematically. This repository of test materials
must be under the control of software configuration management, so that changes to software
requirements or design can be reflected in changes to the scope of the tests conducted. The test
solutions adopted for testing some application types under certain circumstances, with the
motivations behind the decisions taken, form a test pattern which can itself be documented for
later reuse in similar projects.
Planning
Like any other aspect of project management, testing activities must be planned. Key aspects of
test planning include coordination of personnel, management of available test facilities and
equipment (which may include magnetic media, test plans and procedures), and planning for
possible undesirable outcomes. If more than one baseline of the software is being maintained,
then a major planning consideration is the time and effort needed to ensure that the test
environment is set to the proper configuration.
Test-case generation
Generation of test cases is based on the level of testing to be performed and the particular testing
techniques. Test cases should be under the control of software configuration management and
include the expected results for each test.
The environment used for testing should be compatible with the software engineering tools. It
should facilitate development and control of test cases, as well as logging and recovery of
expected results, scripts, and other testing materials.
Execution
The results of testing must be evaluated to determine whether or not the test has been successful.
In most cases, “successful” means that the software performed as expected and did not have any
major unexpected outcomes. Not all unexpected outcomes are necessarily faults, however; some could be judged to be simply noise. Before a failure can be removed, an analysis and debugging
effort is needed to isolate, identify, and describe it.
Problem reporting/Test log
Testing activities can be entered into a test log to identify when a test was conducted, who
performed the test, what software configuration was the basis for testing, and other relevant
identification information. Unexpected or incorrect test results can be recorded in a problem-
reporting system, the data of which form the basis for later debugging and for fixing the
problems that were observed as failures during testing. Also, anomalies not classified as faults
could be documented in case they later turn out to be more serious than first thought.
Defect tracking
Failures observed during testing are most often due to faults or defects in the software. Such
defects can be analyzed to determine when they were introduced into the software, what kind of
error caused them to be created (poorly defined requirements, incorrect variable declaration,
memory leak, programming syntax error, for example), and when they could have been first
observed in the software. Defect-tracking information is used to determine what aspects of
software engineering need improvement and how effective previous analyses and testing have
been.
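A defect-tracking record typically captures the information listed above. The sketch below shows one hypothetical way to represent such a record; the field names are illustrative assumptions, not a prescribed schema.

from dataclasses import dataclass

@dataclass
class DefectRecord:
    """Hypothetical defect-tracking record (field names are illustrative)."""
    defect_id: str
    description: str
    error_kind: str          # e.g. "incorrect variable declaration", "memory leak"
    phase_introduced: str    # life-cycle phase in which the defect was introduced
    phase_detected: str      # phase in which it could first have been observed
    status: str = "open"

defect = DefectRecord(
    defect_id="DEF-042",
    description="Report total overflows for very large inputs",
    error_kind="incorrect variable declaration",
    phase_introduced="design",
    phase_detected="unit testing",
)
print(defect)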
Planning. Planning High Level Test plan, QA plan (quality goals), identify – reporting
procedures, problem classification, acceptance criteria, databases for testing, measurement
criteria (defect quantities/severity level and defect origin), project metrics and finally begin the
schedule for project testing. Also, plan to maintain all test cases (manual or automated) in a
database.
Analysis. Involves activities that – develop functional validation based on Business
Requirements (writing test cases based on these details), develop test case format (time
estimates and priority assignments), develop test cycles (matrices and timelines), identify test
cases to be automated (if applicable), define area of stress and performance testing, plan the test
cycles required for the project and regression testing, define procedures for data maintenance
(backup, restore, validation), review documentation.
Design. Activities in the design phase – Revise test plan based on changes, revise test cycle matrices and timelines, verify that the test plan and cases are in a database or repository, continue to
write test cases and add new ones based on changes, develop Risk Assessment Criteria,
formalize details for Stress and Performance testing, finalize test cycles (number of test case per
cycle based on time estimates per test case and priority), finalize the Test Plan, (estimate
resources to support development in unit testing).
Construction (Unit Testing Phase). Complete all plans, complete Test Cycle matrices and
timelines, complete all test cases (manual), begin Stress and Performance testing, test the
automated testing system and fix bugs, (support development in unit testing), run QA acceptance
test suite to certify software is ready to turn over to QA.
Test Cycle(s) / Bug Fixes (Re-Testing/System Testing Phase). Run the test cases (front and back
end), bug reporting, verification, revise/add test cases as required.
Final Testing and Implementation (Code Freeze Phase). Execution of all front end test cases –
manual and automated, execution of all back end test cases – manual and automated, execute all
Stress and Performance tests, provide on-going defect tracking metrics, provide on-going
complexity and design metrics, update estimates for test cases and test plans, document test
cycles, regression testing, and update accordingly.
Post Implementation. Post implementation evaluation meeting can be conducted to review
entire project. Activities in this phase – Prepare final Defect Report and associated metrics,
identify strategies to prevent similar problems in future project, automation team – 1) Review
test cases to evaluate other cases to be automated for regression testing, 2) Clean up automated
test cases and variables, and 3) Review process of integrating results from automated testing in
with results from manual testing.
There are a number of frequently-used software metrics, or measures, which are used to assist in
determining the state of the software or the adequacy of the testing.
Test plan
A test specification is called a test plan. The developers are well aware of what test plans will be executed, and this information is made available to management and the developers. The idea is
to make them more cautious when developing their code or making additional changes. Some
companies have a higher-level document called a test strategy.
Traceability matrix
A traceability matrix is a table that correlates requirements or design documents to test documents. It is used to verify that requirements are covered by test cases and to select test cases for execution when related source documents change.
Test case
A test case normally consists of a unique identifier, requirement references from a design
specification, preconditions, events, a series of steps (also known as actions) to follow, input,
output, expected result, and actual result. Clinically defined a test case is an input and an
expected result. This can be as pragmatic as 'for condition x your derived result is y', whereas
other test cases described in more detail the input scenario and what results might be expected. It
can occasionally be a series of steps (but often steps are contained in a separate test procedure
that can be exercised against multiple test cases, as a matter of economy) but with one expected
result or expected outcome. The optional fields are a test case ID, test step, or order of execution
number, related requirement(s), depth, test category, author, and check boxes for whether the test
is automatable and has been automated. Larger test cases may also contain prerequisite states or
steps, and descriptions. A test case should also contain a place for the actual result. These steps
can be stored in a word processor document, spreadsheet, database, or other common repository.
Test script
A test script is a procedure, or programming code that replicates user actions. Initially the term
was derived from the product of work created by automated regression test tools. Test Case will
be a baseline to create test scripts using a tool or a program.
Test suite
The most common term for a collection of test cases is a test suite. The test suite often also
contains more detailed instructions or goals for each collection of test cases. It definitely contains
a section where the tester identifies the system configuration used during testing. A group of test
cases may also contain prerequisite states or steps, and descriptions of the following tests.
Test fixture or test data
In most cases, multiple sets of values or data are used to test the same functionality of a
particular feature. All the test values and changeable environmental components are collected in
separate files and stored as test data. It is also useful to provide this data to the client along with the product or project.
Test harness
The software, tools, samples of data input and output, and configurations are all referred to
collectively as a test harness.
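The artifacts above can be illustrated with Python's standard unittest framework: the class acts as a small test suite, setUp provides the test fixture/data, and each method is a test case with an identifier and an expected result. The function under test and its values are hypothetical.

import unittest

def apply_discount(price, rate):
    """Hypothetical function under test."""
    return round(price * (1 - rate), 2)

class DiscountTestSuite(unittest.TestCase):
    """A small test suite; each method is one test case."""

    def setUp(self):
        # Test fixture: data shared by the test cases.
        self.price = 200.0

    def test_tc001_standard_discount(self):
        self.assertEqual(apply_discount(self.price, 0.10), 180.0)

    def test_tc002_no_discount(self):
        self.assertEqual(apply_discount(self.price, 0.0), 200.0)

if __name__ == "__main__":
    unittest.main()   # runs the suite and reports pass/fail per test case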
a. The purpose of each test case is to run the test in the simplest way possible. [Suitable
techniques - Specification derived tests, Equivalence partitioning]
b. Concentrate initially on positive testing i.e. the test case should show that the software does
what it is intended to do. [Suitable techniques - Specification derived tests, Equivalence
partitioning, State-transition testing]
c. Existing test cases should be enhanced and further test cases should be designed to show that
the software does not do anything that it is not specified to do i.e. Negative Testing [Suitable
techniques - Error guessing, Boundary value analysis, Internal boundary value testing, State-
transition testing]
d. Where appropriate, test cases should be designed to address issues such as performance,
safety requirements and security requirements [Suitable techniques - Specification derived
tests]
e. Further test cases can then be added to the unit test specification to achieve specific test
coverage objectives. Once coverage tests have been designed, the test procedure can be
developed and the tests executed [Suitable techniques - Branch testing, Condition testing,
Data definition-use testing, State-transition testing]
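To make the equivalence partitioning and boundary value analysis techniques above concrete,
here is a hedged sketch using a hypothetical age-validation function that must accept ages 18 to
65 inclusive; the partitions and boundary values are chosen for illustration only.

def accept_age(age):
    """Hypothetical unit under test: valid ages are 18 to 65 inclusive."""
    return 18 <= age <= 65

# Equivalence partitions: below range, in range, above range.
# One representative test case per partition (positive and negative tests).
partition_cases = [(10, False), (40, True), (70, False)]

# Boundary value analysis: values at and just outside each boundary.
boundary_cases = [(17, False), (18, True), (65, True), (66, False)]

for value, expected in partition_cases + boundary_cases:
    actual = accept_age(value)
    assert actual == expected, f"accept_age({value}) returned {actual}"
print("All partition and boundary test cases passed")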
Test Case ID
Test Dependency/Setup
Expected Results
Pass/Fail
Error Handling: Inadequate protection against corrupted data, inadequate tests of user input,
inadequate version control; ignored overflow and data-comparison errors; error-recovery
problems such as aborting errors and recovery from hardware problems.
Boundary related errors: Boundaries in loop, space, time, memory, mishandling of cases
outside boundary.
Calculation errors: Bad Logic, Bad Arithmetic, Outdated constants, Calculation errors,
Incorrect conversion from one data representation to another, Wrong formula, Incorrect
approximation.
Initial and Later states: Failure to set a data item to zero, to initialize a loop-control variable, to
re-initialize a pointer, or to clear a string or flag; incorrect initialization.
Control flow errors: Wrong returning state assumed, Exception handling based exits, Stack
underflow/overflow, Failure to block or un-block interrupts, Comparison sometimes yields
wrong result, Missing/wrong default, Data Type errors.
Errors in Handling or Interpreting Data: Un-terminated null strings, Overwriting a file after
an error exit or user abort.
Race Conditions: Assumption that one event or task finished before another begins, Resource
races, Tasks starts before its prerequisites are met, Messages cross or don’t arrive in the order
sent.
Load Conditions: Required resources are not available, No available large memory area, Low
priority tasks not put off, Doesn’t erase old files from mass storage, Doesn’t return unused
memory.
Testing Errors: Failure to notice/report a problem, Failure to use the most promising test case,
Corrupted data files, Misinterpreted specifications or documentation, Failure to make it clear
how to reproduce the problem, Failure to check for unresolved problems just before release,
Failure to verify fixes, Failure to provide summary report.
Know the technology. Knowledge of the technology in which the application is developed is an
added advantage to any tester. It helps in designing better and more powerful test cases based on
the weaknesses or flaws of the technology. Good testers know what the technology supports and
what it doesn’t, and concentrating along these lines helps them break the application quickly.
Perfectionist and a realist. Being a perfectionist helps testers spot the problem, and being a
realist helps them know, at the end of the day, which problems are really important. You will
know which ones require a fix and which ones don’t.
Tactful, diplomatic and persuasive. Good software testers are tactful and know how to break
the news to the developers. They are diplomatic while convincing the developers of the bugs and
persuade them, when necessary, to have the bugs fixed. It is important to be critical of the issue,
not of the person who developed the application, and to present findings without catching the
developer off guard.
An explorer. A bit of creativity and a willingness to take risks help testers venture into
unknown situations and find bugs that would otherwise be overlooked.
A troubleshooter. Troubleshooting and figuring out why something doesn’t work helps testers
be confident and clear when communicating defects to the developers.
Possesses people skills and tenacity. Testers can face a lot of resistance from programmers.
Being socially smart and diplomatic doesn’t mean being indecisive. The best testers are both
socially adept and tenacious where it matters.
Organized. The best testers realize that they too can make mistakes and don’t take chances.
They are well organized, keep checklists, and use files, facts and figures as evidence to support
their findings, which they double-check.
Objective and accurate. They are very objective in what they report, and so convey impartial
and meaningful information that keeps politics and emotions out of the message. Reporting
inaccurate information costs credibility. Good testers make sure their findings are accurate and
reproducible.
Defects are valuable. Good testers learn from them. Each defect is an opportunity to learn and
improve. A defect found early costs substantially less than one found at a later stage. Defects can
cause serious problems if not managed properly. Learning from defects helps prevent future
problems, track improvements, and improve prediction and estimation.
Verification: Have we built the software right? (i.e., does it implement the requirements).
Validation: Have we built the right software? (i.e., do the requirements satisfy the
customer).
The terms verification and validation are commonly used interchangeably in the industry; it is
also common to see these two terms incorrectly defined. The IEEE Standard Glossary of
Software Engineering Terminology defines them in the same spirit, and within the modeling and
simulation community the definitions are similar:
Verification is ensuring that the product has been built according to the requirements and
design specifications while validation ensures that the product actually meets the user's
needs, and that the specifications were correct in the first place.
Verification ensures that "you built it right". Validation confirms that the product, as
provided, will fulfill its intended use. Validation ensures that "you built the right thing".
Verification is static while validation is dynamic. In verification the software is inspected
by examining the code line by line or function by function; in validation the code is
executed and the software is run to find defects. Because code is reviewed during
verification, the location of a defect can be pinpointed, which is not possible in validation.
Verification asks whether we are building the product right, i.e. whether the
implementation follows the specified process and design. Validation asks whether we are
building the right product, i.e. whether the developed software adheres to the
requirements of the client.
i. Software inspection
ii. Formal methods
iii. Program verification techniques
iv. Cleanroom method
v. Structured testing
vi. Structured integration testing
4.9.1.1 Software Inspections
Software inspections can be used for the detection of defects in detailed designs before coding,
and in code before testing. They may also be used to verify test designs, test cases and test
procedures. More generally, inspections can be used for verifying the products of any
development process that is defined in terms of:
Software inspections are efficient: projects can detect over 50% of the total number of defects
introduced in development by doing them. Software inspections are economical because they
result in significant reductions in both the number of defects and the cost of their removal.
Detection of a defect as close as possible to the time of its introduction results in:
an increase in the developers' awareness of the reason for the defect's occurrence, so that the
likelihood that a similar defect will recur is reduced;
reduced effort in locating the defect, since no effort is required to diagnose which
component, out of many possible components, contains the defect.
Software inspections are formal processes. They differ from walkthroughs by:
repeating the process until an acceptable defect rate (e.g. number of errors per thousand lines
of code) has been achieved;
analysing the results of the process and feeding them back to improve the production process,
and forward to give early measurements of software quality;
avoiding discussion of solutions;
including rework and follow-up activities.
The following subsections summarize the software inspection process.
(a) Objectives
(b) Organisation
moderator;
secretary;
reader;
inspector;
author.
The moderator leads the inspection and chairs the inspection meeting. The person should have
implementation skills, but not necessarily be knowledgeable about the item under inspection. He
or she must be impartial and objective. For this reason moderators are often drawn from staff
outside the project. Ideally they should receive some training in inspection procedures.
The secretary is responsible for recording the minutes of inspection meetings, particularly the
details about each defect found.
The reader guides the inspection team through the review items during the inspection meetings.
Inspectors identify and describe defects in the review items under inspection. They should be
selected to represent a variety of viewpoints (e.g. designer, coder and tester).
The author is the person who has produced the items under inspection. The author is present to
answer questions about the items under inspection, and is responsible for all rework.
A person may have one or more of the roles above. In the interests of objectivity, no person may
share the author role with another role.
(c) Input
review items;
specifications of the review items;
inspection checklist;
standards and guidelines that apply to the review items;
inspection reporting forms;
defect list from a previous inspection.
(d) Activities
i. overview;
ii. preparation;
iii. review meeting;
iv. rework;
v. follow-up.
(i) Overview
The purpose of the overview is to introduce the review items to the inspection team. The
moderator describes the area being addressed and then the specific area that has been designed in
detail. For a re-inspection, the moderator should flag areas that have been subject to rework since
the previous inspection. The moderator then distributes the inputs to participants.
(ii) Preparation
Moderators, readers and inspectors then familiarize themselves with the inputs. They might
prepare for a code inspection by reading:
Any defects in the review items should be noted on RID forms and declared at the appropriate
point in the examination. Preparation should be done individually and not in a meeting.
(iii) Review meeting
The moderator checks that all the members have performed the preparatory activities. The
amount of time spent by each member should be reported and noted. The reader then leads the
meeting through the review items. For documents, the reader may summarize the contents of
some sections and cover others line-by-line, as appropriate. For code, the reader covers every
piece of logic, traversing every branch at least once. Data declarations should be summarized.
Inspectors use the checklist to find common errors. Defects discovered during the reading should
be immediately noted by the secretary. The defect list should cover the:
Any solutions identified should be noted. The inspection team should avoid searching for
solutions and concentrate on finding defects. At the end of the meeting, the inspection team takes
one of the following decisions:
(iv) Rework
After examination, software authors correct the defects described in the defect list.
(v) Follow-up
After rework, follow-up activities verify that all the defects have been properly corrected and
that no secondary defects have been introduced. The moderator is responsible for follow-up.
(e) Output
defect list;
defect statistics;
inspection report.
Formal Methods, such as LOTOS, Z and VDM, possess an agreed notation, with well-defined
semantics, and a calculus, which allow proofs to be constructed. The first property is shared with
other methods for software specification, but the second sets them apart. Formal Methods may be
used in the software requirements definition phase for the construction of specifications.
Program verification techniques may be used in the detailed design and production phase to
show that a program is consistent with its specification. These techniques require that certain
conditions hold. If these conditions are not met, formal program verification cannot be attempted.
The clean-room method replaces unit testing and integration testing with software inspections
and program verification techniques. System testing is carried out by an independent testing
team. The clean-room method is not fully compliant with ESA PSS-05-0 because:
Structured Testing is a method for verifying software based upon the mathematical properties of
control graphs. The method:
Software with high complexity is hard to test. The Structured Testing method uses the
cyclomatic complexity metric for measuring complexity, and recommends that module designs
be simplified until they are within the complexity limits. Structured Testing provides a
technique, called the 'baseline method', for defining test cases. The objective is to cover every
branch of the program logic during unit testing. The minimum number of test cases is the
cyclomatic complexity value measured in the first step of the method.
Structured Integration Testing is a method based upon the Structured Testing Method that:
The method can be applied at all levels of design above the module level. Therefore it may also
be applied in unit testing when units assembled from modules are tested.
Codeline - Source code required to produce software. It could be a specific product or even a
basic set of code that many of your internet applications commonly use. A main codeline
should exist in your organization for each type of application that your organization creates.
Codelines can be used to help manage software version control and change control. Software
codelines should have specific purposes. One codeline of code may be a main codeline which
other projects use to provide base functions. Another may be a specific project to be
delivered to a customer. Other codelines may be used to enhance or add features to the main
codeline.
Codeline policy - Each codeline should have its own policy. One codeline may require more
stringent testing than another. A codeline under development will require a policy that
does not require stringent testing when code is checked in. A production codeline should have a
policy requiring stringent testing.
Environment - When discussing code use, the environment is either test (development),
Quality Assurance (QA) test, or production. The test or development environment is used for
developers to test their code. The QA environment is used by customers to verify business
functionality. The production environment is where the software runs for the purpose of
customer use. Changes to the production environment must be the most stringent.
Branching - The creation of a new codeline based upon a current codeline. Branching should
only be done when absolutely necessary.
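As a rough illustration of codelines and per-codeline policies (the codeline names and required
tests below are assumptions, not a prescribed scheme), a configuration manager could record
each codeline together with the testing its policy requires before check-in:

from dataclasses import dataclass

@dataclass
class CodelinePolicy:
    name: str
    purpose: str
    required_tests: list   # tests that must pass before code is checked in

# Illustrative codelines: a development line has a lax policy,
# the production line a stringent one, as described above.
POLICIES = {
    "main":        CodelinePolicy("main", "base functions shared by projects",
                                  ["unit tests", "integration tests"]),
    "dev-feature": CodelinePolicy("dev-feature", "feature under development",
                                  ["unit tests"]),
    "production":  CodelinePolicy("production", "code delivered to customers",
                                  ["unit tests", "integration tests",
                                   "regression tests", "QA sign-off"]),
}

def may_check_in(codeline: str, tests_passed: list) -> bool:
    """Check-in is allowed only if the codeline's policy is satisfied."""
    policy = POLICIES[codeline]
    return all(t in tests_passed for t in policy.required_tests)

print(may_check_in("dev-feature", ["unit tests"]))   # True
print(may_check_in("production", ["unit tests"]))    # False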
There should be many changes of a similar type, which allows templates to be used during the
approval process. If a change control board improves on the above objectives and does not
significantly reduce efficiency, it should be used. The board, if structured correctly, can also be
used to help users get ready for, or be aware of, the change.
Software change management is the process of selecting which changes to encourage, which to
allow, and which to prevent, according to project criteria such as schedule and cost. The process
identifies the changes’ origin, defines critical project decision points, and establishes project
roles and responsibilities. You need to define a change management process and policy within
your company’s business structure and your team’s development process. Change management
is not an isolated process. The project team must be clear on what, when, how, and why to carry
it out.
The relationship between change tracking and SCM is at the heart of change management. SCM
standards commonly define change control as a subordinated task after configuration
identification. This has led some developers to see SCM as a way to prevent changes rather than
facilitate them. By emphasizing the change tracking and SCM relationship, change management
focuses on selecting and making the correct changes as efficiently as possible. In this context,
SCM addresses versions, workspaces, builds, and releases.
A change data repository supports any change management process. When tracking changes,
developers, testers, and possibly users enter data on new change items and maintain their status.
SCM draws on the change data to document the versions and releases, also stored in a repository,
and updates the data store to link changes to their implementation. Software change management
is an integral part of project management. The only way for developers to accomplish their
project goals is to change their software.
Ideally, all software change would result from your required and planned development effort,
driven by requirements and specifications, and documented in your design. However, adding
new code is a change you must manage. Adding functions that were not requested (no matter
how useful and clever) consumes project resources and increases the risk of errors downstream.
Even requested features may range in priority from “mandatory” to “nice to have.” Monitoring
the cost to implement each request identifies features that adversely affect the project’s cost-to-
benefit ratio.
4.11.2.2 Unexpected Problems
You will undoubtedly discover problems during any development effort and spend resources to
resolve them. The effort expended and the effort’s timing need to be proportional to the problem
- small bugs should not consume your project budget.
The team must determine whether the code fails to implement the design properly or whether the
design or requirements are flawed. In the latter case, you should be sure to correct design or
requirements errors. Integrated change management toolsets, which I’ll discuss later in the
article, can make the process seamless: change to a code file can prompt the developer to update
the corresponding documentation files. The investment in documentation updates will be
recovered many times over when the software is maintained later.
4.11.2.3 Enhancements
All software projects are a research and development effort to some extent, so you will receive
enhancement ideas. Here is where project management is most significant: the idea could be a
brilliant shortcut to the project goal, or a wrong turn that threatens project success. As with
requirements or design errors, you need to document these types of changes. Adhere to your
development standards when implementing an enhancement to assure future maintainability.
You should address changes while they are still only potential changes, before they’ve consumed
project resources. Like any project task, changes follow a life cycle, or change process, that you
must track. In fact, three critical decision points drive any change process. These decision points
form the framework of change management.
4.11.3.1 Approve the Concept
Change requests come from testers or users identifying problems, and from customers adding or
changing requirements. You want to approve all changes before investing significant resources.
This is the first key decision point in any change management process. If you accept an idea,
assign a priority to ensure appropriate resources and urgency are applied.
Once you’ve accepted a change request, evaluate it against your project’s current requirements,
specifications, and designs, as well as how it will affect the project’s schedule and budget. This
analysis may convince you to revise your priorities. Sometimes, the team will discover that a
complex problem has an elegant solution or that several bugs have a common resolution. The
analysis will also clarify the cost-to-benefit ratio, making the idea more or less desirable. Once
you clarify the facts, make sure the change is properly managed with a second formal review.
A change request is completed when the change is folded into the planned development effort.
During requirements analysis and design phases, this may occur immediately after you approve
the request. During coding, however, you often must conduct separate implementation and
testing to verify the resolution for any unplanned changes, including both testing of the original
issue and a logically planned regression test to determine if the change created new problems.
After testing, you must still review the change to ensure it won’t negatively affect other parts of
the application. If the testing indicates a risk of further problems, you might want to reject the
change request even at this point.
4.11.3.4 Rejected or Postponed Requests
At any of the decision points, you can decide whether to reject or postpone the change request. In
this case, retain the change request and all associated documentation. This is important because if
the idea comes up again, you need to know why you decided against it before. And, if
circumstances change, you may want to move ahead with the change with as little rework as
possible.
If a problem has shut down testing—or worse, a production system—you may not have time for
a full analysis and formal decision. Focus this process on an immediate resolution, whether a
code “hack” or a work-around, that eliminates the shutdown. You can update the change request
to document the quick fix and change it to a lower priority. By leaving the change request open,
you won’t omit the full analysis and resolution, but you can properly schedule and manage these
activities. Alternately, you can close the emergency change request when the fix is in place, and
create a new change request to drive a complete resolution.
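A minimal sketch of the decision points described above, with illustrative state names rather
than any standard vocabulary: a change request is approved in concept, approved for
implementation after analysis, verified after testing, and may be rejected at any decision point,
with the record retained rather than discarded.

class ChangeRequest:
    """Illustrative change-request life cycle with the three decision points."""
    def __init__(self, req_id, description):
        self.req_id = req_id
        self.description = description
        self.state = "submitted"
        self.history = ["submitted"]   # retain the full record, even if rejected

    def _move(self, new_state):
        self.state = new_state
        self.history.append(new_state)

    def approve_concept(self, priority):   # decision point 1
        self.priority = priority
        self._move("approved")

    def approve_change(self):              # decision point 2, after analysis
        self._move("in progress")

    def verify(self, tests_ok):            # decision point 3, after testing
        self._move("completed" if tests_ok else "reopened")

    def reject(self, reason):              # possible at any decision point
        self.reason = reason
        self._move("rejected")

cr = ChangeRequest("CR-42", "Save button does not persist edits")
cr.approve_concept(priority="high")
cr.approve_change()
cr.verify(tests_ok=True)
print(cr.state, cr.history)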
The change management process requires several decision-makers at the various decision points.
Your change management process should address the following questions:
Who will make the decision? Ultimately, the project manager is responsible for these
decisions, but you can delegate some of them to other project leaders.
Who must give input for the decision? Who can give input?
Who will perform the analysis, implementation, and testing? This can be specified
generally, although each issue may require particular contributors.
Who must be notified once the decision is made? When, how, and in how much detail
will the notice be given?
Who will administer and enforce the procedures? Often this becomes a task for SCM or
the release manager, since it directly impacts their efforts.
You don’t need to handle all issues at all project stages the same way. Think of the project as
consisting of concentric worlds starting with the development team, expanding to the test team,
the quality team, and finally the customer or user. As your team makes requirements, design, and
software available to wider circles, you need to include these circles in change decisions. For
example, accepting a change to a code module will require retesting the module. You must notify
the test team, who should at least have a say in the scheduling. The standard SCM baselines
represent an agreement between the customer and the project team about the product: initially the
requirements, then the design, and finally the product itself. The customer must approve any
change to the agreed-upon items. The change management process helps you maintain good faith
with the customer and good communication between project members.
A successful system coordinates people, process, and technology. Once you define the process
and tools, ensure that your team is trained and motivated to use them. The best tool is worthless
if it is not used properly, whether from lack of skill or resentment over being forced to use it.
Process and tool training should make the tool’s benefits clear to your team.
Change management’s most important components are an SCM tool and a problem-report and
change-request tracking tool. Increasingly, change management toolsets integrate with one
another and with development tools such as requirements or test case tracing. For example, you
can link a new version directly to the change request it implements and to tests completed against
it.
At the simple and inexpensive end of the tool scale are SCCS (part of most UNIX systems) and
RCS, which define the basics of version control. Various systems build on these, including CVS
and Sun’s TeamWare, adding functions such as workspace management, graphical user
interface, and (nearly) automatic merging. In the midrange are products such as Microsoft’s
SourceSafe, Merant’s PVCS, MKS Source Integrity, and Continuus/CM, which generally
provide features to organize artifacts into sets and projects. Complete SCM environments are
represented by Platinum’s CCC/Harvest and Rational’s ClearCase, giving full triggering and
integration capabilities.
You should go into demos with a sketch of how your development process works, especially if
you’re considering a significant tool expenditure. This lets you ask specifically how the tool
could handle your needs. The tool budget will need to include the effort to define and document
procedures, write scripts and integration artifacts, and train the team. If the tool is new to your
organization, verify that the vendor can support your implementation or recommend a consultant
who can.
As with other tools, estimate the volume of data the tool needs to handle and verify that it will
perform at that level. Consider how many individuals need to use the tool at one time and
whether you need strict controls over who can change various parts of the data. If you conduct
your reviews in meetings, report generation will be a significant part of tool use. For an
electronic approval cycle, the e-mail interface is vital. Increasingly, tools are providing a web
interface to simplify distributed use.
Security Defects: Application security defects generally involve improper handling of data sent
from the user to the application. These defects are the most severe and are given the highest
priority for a fix.
Examples:
- Authentication: Accepting an invalid username/password
Data Quality/Database Defects: Deals with improper handling of data in the database.
Examples:
- Values not deleted/inserted into the database properly
Critical Functionality Defects: The occurrence of these bugs hampers the crucial functionality
of the application.
Examples:- Exceptions
- Buttons like Save, Delete, Cancel not performing their intended functions
- A missing functionality (or) a feature not functioning the way it is intended to
- Continuous execution of loops
User Interface Defects: As the name suggests, these bugs deal with problems related to the UI
and are usually considered less severe.
Examples:
- Improper error/warning/UI messages
- Spelling mistakes
- Alignment problems
The primary goal is to prevent defects. Where this is not possible or practical, the goals
are to both find the defect as quickly as possible and minimize the impact of the defect.
The defect management process should be risk driven -- i.e., strategies, priorities, and
resources should be based on the extent to which risk can be reduced.
Defect measurement should be integrated into the software development process and be
used by the project team to improve the process. In other words, the project staff, by
doing their job, should capture information on defects at the source. It should not be
done after the fact by people unrelated to the project or system.
As much as possible, the capture and analysis of the information should be automated.
Defect information should be used to improve the process. This, in fact, is the primary
reason for gathering defect information.
Most defects are caused by imperfect or flawed processes. Thus to prevent defects, the
process must be altered.
4.13.3.3 Defect Discovery - Identification and reporting of defects for development team
acknowledgment. A defect is only termed discovered when it has been documented and
acknowledged as a valid defect by the development team member(s) responsible for the
component(s) in error.
4.13.3.4 Defect Resolution - Work by the development team to prioritize, schedule and fix a
defect, and document the resolution. This also includes notification back to the tester to ensure
that the resolution is verified.
4.13.3.5 Process Improvement - Identification and analysis of the process in which a defect
originated to identify ways to improve the process to prevent future occurrences of similar
defects. Also the validation process that should have identified the defect earlier is analyzed to
determine ways to strengthen that process.
4.14.1 Defect Discovery – Identification and reporting of potential defects. The defect tracking
software must be simple enough so that people will use it, but ensure that the minimum
necessary information is captured. The information captured here should be enough to reproduce
the defect and allow development to determine root cause and impact.
4.14.2 Defect Analysis & Prioritization – The development team determines if the defect report
corresponds to an actual defect, if the defect has already been reported, and what the impact and
priority of the defect is. Prioritization and scheduling of the defect resolution is often part of the
overall change management process for the software development organization.
4.14.3 Defect Resolution – Here the development team determines the root cause, implements
the changes needed to fix the defect, and documents the details of the resolution in the defect
management software, including suggestions on how to verify the defect is fixed. In
organizations using software product lines approaches, or other shared component approaches,
defect resolution may need to be coordinated across multiple branches of development.
4.14.4 Defect Verification – The build containing the resolution to the defect is identified, and
testing of the build is performed to ensure the defect truly has been resolved, and that the
resolution has not introduced side effects or regressions. Once all affected branches of
development have been verified as resolved, the defect can be closed.
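As a rough sketch of how defect information captured at the source might be summarized
automatically (the field names and values below are assumptions), each defect record carries its
severity, state and originating phase, and simple counts feed process improvement and reporting:

from collections import Counter

# Illustrative defect records captured at the source by the project staff.
defects = [
    {"id": "D-1", "severity": "critical", "state": "resolved",   "origin": "design"},
    {"id": "D-2", "severity": "UI",       "state": "verified",   "origin": "coding"},
    {"id": "D-3", "severity": "critical", "state": "discovered", "origin": "requirements"},
    {"id": "D-4", "severity": "database", "state": "resolved",   "origin": "coding"},
]

# Defect statistics used by the project team to improve the process.
by_origin = Counter(d["origin"] for d in defects)
by_state = Counter(d["state"] for d in defects)
open_critical = [d["id"] for d in defects
                 if d["severity"] == "critical" and d["state"] != "verified"]

print("Defects by originating phase:", dict(by_origin))
print("Defects by state:", dict(by_state))
print("Critical defects not yet verified:", open_critical)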
4.15 SUMMARY
Software testing is the process of testing a software product. Effective software testing
contributes to the delivery of higher quality software products, more satisfied users, lower
maintenance costs, and more accurate and reliable results. Ineffective testing leads to the
opposite: low quality products, unhappy users, increased maintenance costs, and unreliable and
inaccurate results. Hence, software testing is a necessary and important activity of the software
development process. Good testing involves much more than just running the program a few
times to see whether it works. Thorough analysis of a program helps us to test more
systematically and more effectively. Change is inevitable in all stages of a software project.
Change management will help you direct and coordinate those changes so they can enhance—
not hinder—your software. There is a real need to control software change, and software
change management provides guidance in this area. Software verification and validation
should show that the product conforms to all the requirements. Users will have more confidence
in a product that has been through a rigorous verification programme than one subjected to
minimal examination and testing before release.
Assignment-Module 4
4. The tester is only aware of what the software is supposed to do, not how it does it.
a. White box testing
b. Black box testing
c. Static testing
d. Dynamic testing
5. "Like a walk in a dark labyrinth without a flashlight."
a. White box testing
b. Black box testing
c. Static testing
d. Dynamic testing
13. Verification is
a. Checking product with respect to customer’s expectation.
b. Checking product with respect to specification.
c. Checking product with respect to constraints of the project.
d. All of the above
14. Validation is
a. Checking product with respect to customer’s expectation.
b. Checking product with respect to specification.
c. Checking product with respect to constraints of the project.
d. All of the above
Key - Module 4
1. c
2. a
3. a
4. b
5. b
6. d
7. c
8. d
9. c
10. a
11. c
12. b
13. b
14. a
15. c
CHAPTER 5 METRICS AND MEASUREMENT OF
SOFTWARE QUALITY
Coverity framework
(2) “Function ‘foo’ has a memory leak on line 73 that is the result of an allocation on line 34 and
the following path decisions on lines 38, 54, and 65 ..”
Our belief is that a metric based on the latter is much more valuable in measuring source code
quality. Today, many open source packages rely on our static source code analysis as a key
indicator of reliability and security. For example, MySQL, PostgreSQL, and Berkeley DB have
certified versions of their software that contain zero Coverity defects.
Product Metrics: Product metrics are also known as quality metrics and are used to measure the
properties of the software. Product metrics include reliability metrics, functionality metrics,
performance metrics, usability metrics, cost metrics, size metrics, complexity metrics and style
metrics. Product metrics help in improving the quality of different system components and in
comparisons between existing systems.
5.4 ADVANTAGE OF SOFTWARE METRICS:
In Comparative study of various design methodology of software systems.
For analysis, comparison and critical study of various programming language with
respect to their characteristics.
In comparing and evaluating capabilities and productivity of people involved in software
development.
In the preparation of software quality specifications.
In the verification of compliance of software systems requirements and specifications.
In making inference about the effort to be put in the design and development of the
software systems.
In getting an idea about the complexity of the code.
In deciding whether further division of a complex module is to be done or not.
In providing guidance to resource manager for their proper utilization.
In comparison and making design tradeoffs between software development and
maintenance cost.
In providing feedback to software managers about the progress and quality during various
phases of software development life cycle.
In allocation of testing resources for testing the code.
• Any line of program text excluding comment or blank line, regardless of the number of
statements or parts of statements on the line, is considered a Line of Code.
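A small sketch of that counting rule, assuming Python source is the text being measured: any
line that is neither blank nor purely a comment counts as one line of code, regardless of how
many statements it contains.

def count_loc(source_text):
    """Count lines of code: skip blank lines and lines that are only comments."""
    loc = 0
    for line in source_text.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            loc += 1
    return loc

sample = """
# compute a total
total = 0
for x in [1, 2, 3]:
    total += x   # one statement per line

print(total)
"""
print(count_loc(sample))   # 4 lines of code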
In terms of the total tokens used, the size of the program can be expressed as N = N1 + N2,
where N1 is the total number of occurrences of operators and N2 is the total number of
occurrences of operands.
Function Count:
The size of a large software product can be estimated in a better way through a larger unit
called a module. A module can be defined as a segment of code which may be compiled
independently.
For example, let a software product require n modules. It is generally agreed that the size of a
module should be about 50-60 lines of code. Therefore the size estimate of this software product
is about n x 60 lines of code.
One of the hypotheses of this theory is that the length of a well-structured program is a function
of n1 and n2 only, where n1 is the number of unique operators (basic operators, keywords/reserved
words and functions/procedures) and n2 is the number of unique operands. This relationship is
known as the length prediction equation and is defined as
Nh = n1 log2 n1 + n2 log2 n2
The following length estimators have been suggested by some other researchers. One such
estimator is described as
Nz = n [0.5772 + ln (n)]
It was applied and validated by Jensen and Vairavan for real-time application programs written
in Pascal and found to give even more accurate results than Halstead’s estimator.
The programming vocabulary n = n1 + n2 leads to another size measure, which may be defined
as:
V = N log2 n
Halstead also defines a potential volume in terms of n1*, the minimum number of operators, and
n2*, the minimum number of operands, needed for the most compact implementation of the
algorithm.
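A brief sketch of the Halstead size measures defined above, using assumed token counts rather
than a real parser; the figures are illustrative only.

import math

# Assumed token counts for a small program (a real tool would parse the source).
n1, n2 = 10, 7        # unique operators, unique operands
N1, N2 = 40, 25       # total operator occurrences, total operand occurrences

n = n1 + n2           # programming vocabulary
N = N1 + N2           # observed program length
N_hat = n1 * math.log2(n1) + n2 * math.log2(n2)   # length prediction equation
V = N * math.log2(n)  # volume

print(f"vocabulary n = {n}, length N = {N}")
print(f"predicted length = {N_hat:.1f}, volume V = {V:.1f}")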
The notion of program graph has been used for this measure and it is used to measure and control
the number of paths through a program. The complexity of a computer program can be correlated
with the topological complexity of a graph.
McCabe proposed the cyclomatic number, V (G) of graph theory as an indicator of software
complexity. The cyclomatic number is equal to the number of linearly independent paths through
a program in its graph representation. For a program control graph G, the cyclomatic number
V(G) is given as:
V(G) = E – N + 2P
where E is the number of edges, N is the number of nodes, and P is the number of connected
components of the control graph (P = 1 for a single program).
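A minimal sketch of the calculation, assuming the control graph is supplied as an adjacency list
(a real tool would derive it from the code): V(G) follows from the edge and node counts, and
under the Structured Testing method it also gives the minimum number of test cases needed for
branch coverage.

# Control graph of a small program with one if/else, as an adjacency list.
control_graph = {
    "entry":    ["decision"],
    "decision": ["then", "else"],
    "then":     ["exit"],
    "else":     ["exit"],
    "exit":     [],
}

nodes = len(control_graph)
edges = sum(len(successors) for successors in control_graph.values())
P = 1   # a single connected control graph

V_G = edges - nodes + 2 * P    # cyclomatic complexity of the program graph
print(f"E = {edges}, N = {nodes}, V(G) = {V_G}")   # E = 5, N = 5, V(G) = 2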
Here, the notion of program graph has been extended to the notion of flow graph. A flow graph of
a program P can be defined as a set of nodes and a set of edges. A node represents a declaration or
a statement, while an edge represents one of the following:
2 Control flow from a statement node dj to a statement node si which is declared in dj.
3 Flow from a declaration node dj to statement node si through a read access of a variable or a
constant in si which is declared in dj.
2. The complexity due to the procedure’s connections to its environment. The effect of the first
factor has been included through the LOC (Line of Code) measure. For the quantification of the
second factor, Henry and Kafura have defined two terms, namely FAN-IN and FAN-OUT.
FAN-IN of a procedure is the number of local flows into that procedure plus the number of data
structures from which this procedure retrieves information.
FAN-OUT is the number of local flows from that procedure plus the number of data structures
which that procedure updates.
The procedure complexity is then taken as Length x (FAN-IN x FAN-OUT)^2, where the length
is taken as LOC and the term FAN-IN x FAN-OUT represents the total number of input-output
combinations for the procedure.
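A brief sketch of the information flow measure with illustrative values for a single procedure
(real values would come from static analysis of its calls and data structures):

# Illustrative values for one procedure.
length  = 120   # procedure length in lines of code
fan_in  = 3     # local flows in + data structures read
fan_out = 2     # local flows out + data structures updated

# Information flow complexity of the procedure: Length x (FAN-IN x FAN-OUT)^2
complexity = length * (fan_in * fan_out) ** 2
print(complexity)   # 120 * 36 = 4320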
Metrics, for both process and software, tell us to what extent a desired characteristic is present in
our processes or our software systems. Maintainability is a desired characteristic of a software
component and is referenced in all the main software quality models (including the ISO 9126).
One good measure of maintainability would be time required to fix a fault. This gives us a handle
on maintainability but another measure that would relate more to the cause of poor
maintainability would be code complexity. A method for measuring code complexity was
developed by Thomas McCabe and with this method a quantitative assessment of any piece of
code can be made. Code complexity can be specified and can be known by measurement,
whereas time to repair can only be measured after the software is in support. Both time to repair
and code complexity are software metrics and can both be applied to software process
improvement.
5.10 PROBLEM WITH METRICS
It is not enough to simply create a metric. The measure should accurately reflect the process. We
use metrics to base decisions on and to focus our actions. It is not only important to measure the
right indicators, it is important to measure them well. To be effective and reliable, the metrics
we choose to use need to have ten key characteristics. The following table suggests the qualities
to look for in indicators.
Is defined and mutually understood - The measure has been defined by and/or agreed to by all
key process participants (internally and externally).
Encompasses both outputs and inputs - The measure integrates factors from all aspects of the
process measured.
Measures only what is important - The measure focuses on a key performance indicator that is
of real value to managing the process.
Although there may never be a single perfect measure, it is certainly possible to create a measure
or even multiple measures which reflect the performance of your system. If the metrics are
chosen carefully, then, in the process of achieving their metrics, managers and employees will
make the right decisions and take the right actions that enable the organization to maximize its
performance. These guidelines will make sure you pick the right indicators and measure them
well.
ii. Align metrics with strategy: no one really wants Twitter followers. You want something
else – influence, or interaction, or something that one way or another actually does you
some good. The interim steps are important, but don’t only measure these. You also need
to figure out a way to measure the outcomes of your strategy.
iii. Use multiple measures of success: this follows from the first two points. Most of the
things that we really care about are hard to actually measure. If we are going to try, we
need to use multiple measures so that we can triangulate on our desired objectives.
i. Valid: clearly related to the feature being measured e.g. monotonically increases as the
feature increases
ii. Objective: independent of personal opinion
iii. Reproducible: measurements can be consistently repeated
iv. Precise: sensitive to changes in the feature measured
v. Robust: not easily manipulated or sensitive to extraneous factors
vi. Comparable: highly correlated with other metrics measuring the same feature
vii. Universal: can be translated into sub-metrics for lower parts of the product or process
i. Economical: does not consume significant resources for collection; preferably a by-
product of other activities
ii. Standardised: the metric uses a mathematically appropriate scale
iii. Sustainable: likely to be valid in the future so that trend forecasts based on the metric will
be effective
iv. Cost-Effective: benefits from the data obtained justify the cost of gathering that data
v. Useful: supports the goals of the organisation
5.11 OBJECTIVE AND SUBJECTIVE MEASUREMENT
A question that often arises during the planning of an experiment or a test is whether to obtain
objective performance data or subjective data, e.g. data related to preference setting. Objective
performance data are usually preferred for experiments. In addition, they are required for design
evaluations whenever the evaluation criteria are objective. Unfortunately, however, objective
measurements are frequently more difficult – even impossible - to carry out, and the process of
collecting objective data is usually more time-consuming and costly. In contrast, subjective data
may be obtained easily, quickly, and inexpensively. The subjective measurement technique also
provides the only direct means for the assessment of user opinion and preferences. The sources
of objective data that are frequently used in user trials can be divided into three categories:
Many kinds of objective data can be measured when, for instance, all the components of a
balanced system are considered. This system is applicable to both working and living contexts in
the field. The same fact is often relevant in simulations.
The typical methods used in subjective measurement are:
ranking methods,
rating methods,
questionnaire methods
interviews
checklists.
However, subjective data and preference data must be interpreted with caution. Following points
should be considered when evaluating subjective data:
If the subjects in experiments and tests do not fit the user profile compiled during the planning
phase, their opinions and preferences may not accurately reflect those of the intended users of the
product. Conclusions based on data obtained from inappropriate subjects may not be valid.
Data metrics: In order to measure the amount of distortion introduced by the capture,
compression and transmission processes, these metrics take into account only the signal
reliability without considering the content of the video under analysis.
Picture metrics: This distortion measurement is focused on the content of the video under
analysis, i.e., this approach allows quantifying the effect of distortions and content on
perceived quality. In this case, these metrics are closer to the human perceived quality than
the Data metrics method.
i. Define the objectives for the measurement program - how it is to be used. Consider how
to implement the four uses of measurement, given the maturity level of the organization.
The use of measurement should be tied to the organization’s mission, goals and
objectives.
ii. Create an environment receptive to measurement. Begin with the prerequisites listed
earlier in this section. Establish service level agreements between IT and the users to
define quality and productivity that must be defined before they can be measured. People
involved with the measurement should help develop the measure. Establish a quality
management environment and ensure the work processes being used have been
implemented.
iii. Define the measurement hierarchy, which has three levels of quantitative data: measures,
metrics, and a strategic results dashboard (also called key indicators). This measurement
hierarchy maps to a three-level IT organizational tier: staff, line management and senior
management. IT staff collects basic measures, such as product size, cycle time, or defect
count. IT line management uses fundamental metrics, such as variance between actual
and budgeted cost, user satisfaction or defect rates per LOC to manage a project or part of
the IT function. Senior management uses a strategic results dashboard, where the metrics
represent the quantitative data needed to manage the IT function and track to the mission,
vision, or goals. For example, a mission with a customer focus should have a customer
satisfaction metric. A metric of the number of projects completed on time gives insight
into the function's ability to meet short and long-term business goals.
iv. Define the standard units of measurement (discussed in Measurement Concepts).
i. Identify desired business results, beginning with a mission or vision statement. Turn
operative phrases in the mission or vision (such as “deliver on time” or “satisfy
customer”) into specific objectives (such as "all software will be delivered to the
customer by the date agreed upon with the customer"), and then rank these objectives in
order of importance. When objectives are written with a subject, action, target value, and
time frame it is much easier to identify the actual metric that will serve as the results
metric or key indicator.
ii. Identify current baselines by determining the current operational status for each of the
desired business results/objectives.
iii. Select a measure or metric for each desired business result or objective, and determine
whether it has been standardized by the IT industry (such as cycle time, which is
measured as elapsed calendar days from the project start date to the project end date). If
not, explore the attributes of the result or objective and define a measure or metric that is
quantitative, valid, reliable, attainable, easy to understand and collect, and a true
representation of the intent. Ideally there should be three to five metrics, with no more
than seven. Convert the business results metrics into a strategic dashboard of key
indicators. Examples of indicators include productivity, customer satisfaction,
motivation, skill sets, and defect rates.
iv. Consider trade-offs between the number one ranked business result and the other desired
results. For example, the #1 result to complete on time will affect other desired results,
such as minimize program size and develop easy-to-read documentation.
v. Based on the baseline and desired business result or objective, determine a goal for each
result metric. Goals typically specify a subject (such as financial, customer, process or
product, or employee) and define an action that is change or control related (such as
improve or reduce, increase or decrease or control or track). If a baseline for on time
projects is 60%, the goal might be to increase to 80% by next year. Benchmarking can
also be useful prior to setting goals, as it allows an understanding of what is possible
given a certain set of circumstances.
5.13.3 Manage by process.
Managing by process means to use processes to achieve management's desired results. When
results are not achieved, a quality management philosophy tells the organization to look at how
the system (i.e., its processes) can be improved rather than reacting, making emotional decisions,
and blaming people. Quantitative feedback, which provides indicators of process performance, is
needed in order to operate this way. Various processes usually contribute jointly to meeting
desired business results, and, therefore, it is important to understand and identify what things
contribute to, or influence, desired results. This phase consists of four steps to implement
measurement in a process, and to identify the attributes of the contributors, which if met will
achieve the desired process results. These steps provide the information to manage a process and
to measure its status.
i. Develop a matrix of process results and contributors to show which contributors drive
which results. The results should come from the process policy statement. The
contributors can be positive or negative, and involve process, product, or resource
attributes. Process attributes include characteristics such as time, schedule, and
completion. Product attributes include characteristics such as size, correctness, reliability,
usability, and maintainability. Resource attributes include characteristics such as amount,
skill, and attitude. A cause-and-effect diagram is often used to graphically illustrate the
relationship between results and contributors.
ii. Assure process results are aligned to business results. Processes should help people
accomplish their organization’s mission. Alignment is subjective in many organizations,
but the more objective it is, the greater the chance that processes will drive the mission.
iii. Rank the process results and the contributors from a management perspective. This will
help workers make trade-offs and identify where to focus management attention.
iv. Select metrics for both the process results and contributors, and create two tactical
process dashboards: one for process results and one for contributors. These dashboards
are used to manage the projects and to control and report project status. Normally results
are measured subjectively and contributors are measured objectively. For example, for a
result of customer satisfaction, contributors might include competent resources, an
available process, and a flexible and correct product. Sometimes, as with customer
satisfaction, factors that contribute to achieving the result can actually be used to develop
the results metric. In other words, first determine what contributes to customer
satisfaction or dissatisfaction and then it can be measured.
Typically the focus of decisions is common cause problems and special cause problems.
Certain aspects of many of the risk management standards have come under criticism for
producing no measurable improvement in risk, even though confidence in estimates and
decisions seems to increase.
Risk management is a process for identifying, assessing, and prioritizing risks of different kinds.
Once the risks are identified, the risk manager will create a plan to minimize or eliminate the
impact of negative events. A variety of strategies is available, depending on the type of risk and
the type of business. There are a number of risk management standards, including those
developed by the Project Management Institute, the International Organization for
Standardization (ISO), and the National Institute of Standards and Technology (NIST).
Project schedules slip when project tasks and schedule release risks are not addressed
properly.
Schedule risks mainly affect the project and, ultimately, the company’s economy, and may lead
to project failure.
Schedules often slip due to the following reasons:
Budget Risk
Operational Risks
Risks of loss due to improper process implementation, failed systems or some external
events.
Programmatic Risks
These are the external risks beyond the operational limits. These are all uncertain risks.
Within risk management the “emphasis is shifted from crisis management to anticipatory
management”.
Boehm defines four major reasons for implementing software risk management:
i. Avoiding software project disasters, including runaway budgets and schedules, defect-
ridden software products, and operational failures.
ii. Avoiding rework caused by erroneous, missing, or ambiguous requirements, design or
code, which typically consumes 40-50% of the total cost of software development.
iii. Avoiding overkill with detection and prevention techniques in areas of minimal or no
risk.
iv. Stimulating a win-win software solution where the customer receives the product they
need and the vendor makes the profits they expect.
The risk management process is an on-going part of managing the software development
process. It is designed to be a continuous feedback loop where additional information and risk
status are utilized to refine the project's risk list and risk management plans. Let's use the
crossing the street analogy to examine the risk management process. First we identify the risk:
we want to cross the street and know there is a possibility of traffic. We analyze the risk. What is
the probability of being hit by the car? How much is it going to hurt if we are hit? How important
is it that we cross this street at this time? We look both ways, we see the on-coming car, and we
judge its rate of speed. We form a plan to reduce the risk and decide to wait until the car has
passed. We implement the plan and wait. We track the situation by watching the car and we see
it pull into a driveway. We change our plan and proceed across the street. We step onto the curb
across the street and stop thinking about crossing the street (i.e., we close the risk).
5.16 RISK IDENTIFICATION
During the first step in the software risk management process, risks are identified and added to
the list of known risks. The output of this step is a list of project-specific risks that have the
potential of compromising the project's success. There are many techniques for identifying risks,
including interviewing, reporting, decomposition, assumption analysis, critical path analysis, and
utilization of risk taxonomies.
Voluntary Reporting: Another risk identification technique is voluntary reporting, where any
individual who identifies a risk is encouraged and rewarded for bringing that risk to
management’s attention. This requires the complete elimination of the “shoot the messenger”
syndrome. It avoids the temptation to assign risk reduction actions to the person who identified
the risk. Risks can also be identified through required reporting mechanisms such as status
reports or project reviews.
Decomposition: As the product is being decomposed during the requirements and design phases,
another opportunity exists for risk identifications. Every TBD ("To Be Done/Determined") is a
potential risk. As Ould states, “The most important thing about planning is writing down what
you don’t know, because what you don’t know is what you must find out”. Decomposition in the
form of work breakdown structures during project planning can also help identify areas of
uncertainty that may need to be recorded as risks.
Assumption Analysis: Process and product assumptions must be analyzed. For example, we
might assume the hardware would be available by the system test date or three additional
experienced C++ programmers will be hired by the time coding starts. If these assumptions
prove to be false, we could have major problems.
Critical Path Analysis: As we perform critical path analysis for our project plan, we must
remain on the alert to identify risks. Any possibility of schedule slippage on the critical path
must be considered a risk because it directly impacts our ability to meet schedule.
Risk Taxonomies: Risk taxonomies are lists of problems that have occurred on other projects
and can be used as checklists to help ensure all potential risks have been considered. An example
of a risk taxonomy can be found in the Software Engineering Institute’s Taxonomy -Based Risk
Identification report that covers 13 major risk areas with about 200 questions.
Additionally, the interrelationships between risks are assessed to determine if compounding risk
conditions magnify losses.
The following is an example of risk analysis. During our analysis, we determine that there is a
30% probability the Test Bed will be available one week later than scheduled and a 10%
probability it will be a month late. If the Test Bed is one week late, the testers can use their time
productively by using the simulators to test other aspects of the software (loss = $0). The
simulator can be utilized for up to two weeks. However, if the Test Bed delivery is one month
late, there are not enough productive activities to balance the loss. Losses include unproductive
testers for two weeks, overtime later, morale problems, and delays in finding defects for a total
estimated loss of $100,000. In addition to the dollar loss, the testing is on the critical path and not
all of the lost testing time can be made up in overtime (loss estimated at two week schedule
slippage).
Boehm defines the Risk Exposure equation to help quantitatively establish risk priorities. Risk
Exposure measures the impact of a risk in terms of the expected value of the loss. Risk Exposure
(RE) is defined as the probability of an undesired outcome times the expected loss if that
outcome occurs: RE = P(UO) x L(UO).
Given the example above, the Risk Exposure is 10% x $100,000 = $10,000 and 10% x 2 calendar
weeks = 0.2 calendar weeks. Comparing the Risk Exposure measurement for various risks can help
identify those risks with the greatest probable negative impact to the project or product and thus
help establish which risks are candidates for further action. The list of risks is then prioritized
based on the results of our risk analysis. Since resource limitations rarely allow the consideration
of all risks, the prioritized list of risks is used to identify risks requiring additional planning and
action. Other risks are documented and tracked for possible future consideration. Based on
changing conditions, additional information, the identification of new risks, or the closure of
existing risks, the list of risks requiring additional planning and action may require periodic
updates.
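As an illustration of how Risk Exposure values can be computed and used to rank risks, here is a minimal Python sketch. It reuses the Test Bed figures from the example above; the additional subcontractor entry and all class and function names are assumptions made for illustration only.

# Minimal sketch: computing Boehm's Risk Exposure (RE = probability x loss)
# and prioritizing risks by RE. The Test Bed entries mirror the example above;
# the subcontractor entry is an invented illustration.

from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    probability: float        # probability of the undesired outcome (0.0 - 1.0)
    loss_dollars: float       # expected dollar loss if the outcome occurs
    loss_weeks: float = 0.0   # expected schedule loss (calendar weeks)

    @property
    def exposure_dollars(self) -> float:
        return self.probability * self.loss_dollars

    @property
    def exposure_weeks(self) -> float:
        return self.probability * self.loss_weeks

risks = [
    Risk("Test Bed one week late", 0.30, 0.0, 0.0),                # simulators absorb the delay
    Risk("Test Bed one month late", 0.10, 100_000.0, 2.0),
    Risk("Subcontractor reliability shortfall", 0.20, 50_000.0),   # illustrative value
]

# Prioritize by dollar exposure; schedule exposure could be weighted in as well.
for r in sorted(risks, key=lambda r: r.exposure_dollars, reverse=True):
    print(f"{r.name}: RE = ${r.exposure_dollars:,.0f}, {r.exposure_weeks:.1f} weeks")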
The following are examples of identified risk statements:
· The subcontractor may not deliver the software at the required reliability level and, as a result,
the reliability of the total system may not meet performance specifications.
· The interface with the new control device is not defined and, as a result, its driver may take more
time to implement than scheduled.
Is it too big a risk? If the risk is too big for us to be willing to accept, we can avoid the risk by
changing our project strategies and tactics to choose a less risky alternative, or we may decide not
to do the project at all. For example, if our project has tight schedule constraints and includes
state-of-the-art technology, we may decide to wait until a future project to implement our newly
purchased CASE tools.
· Avoiding a risk in one part of the project may create risks in other parts of the project.
Identify: Before risks can be managed, they must be identified before they adversely affect the
project. Establishing an environment that encourages people to raise concerns and issues and
conducting quality reviews throughout all phases of a project are common techniques for
identifying risks.
Analyze: Analysis is the conversion of risk data into risk decision-making information. It
includes reviewing, prioritizing, and selecting the most critical risks to address. The Software
Risk Evaluation (SRE) Team analyzes each identified risk in terms of its consequence on cost,
schedule, performance, and product quality.
Plan: Planning turns risk information into decisions and actions for both the present and future.
Planning involves developing actions to address individual risks, prioritizing risk actions and
creating a Risk Management Plan. The key to risk action planning is to consider the future
consequences of a decision made today.
Track: Tracking consists of monitoring the status of risks and the actions taken against risks to
mitigate them.
Control: Risk control relies on project management processes to control risk action plans,
correct for variations from plans, respond to triggering events, and improve risk management
processes. Risk control activities are documented in the Risk Management Plan.
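The Identify-Analyze-Plan-Track-Control cycle described above can be made concrete with a small sketch. The following Python fragment is illustrative only; the class, field and function names are assumptions, not part of any standard risk management tool.

# Minimal sketch of the risk management cycle described above (Identify,
# Analyze, Plan, Track, Control). Names and fields are illustrative assumptions.

from dataclasses import dataclass
from enum import Enum

class Phase(Enum):
    IDENTIFY = 1
    ANALYZE = 2
    PLAN = 3
    TRACK = 4
    CONTROL = 5

@dataclass
class ManagedRisk:
    description: str
    phase: Phase = Phase.IDENTIFY
    consequence: str = ""      # impact on cost, schedule, performance, quality
    action_plan: str = ""      # mitigation actions decided in the Plan phase
    status: str = "open"

    def advance(self):
        """Move the risk to the next phase; Control loops back to Track until the risk is closed."""
        if self.phase is Phase.CONTROL:
            self.phase = Phase.TRACK
        else:
            self.phase = Phase(self.phase.value + 1)

risk = ManagedRisk("Test Bed may be delivered a month late")
risk.consequence = "Two weeks of unproductive testing, estimated $100,000 loss"
risk.advance()   # IDENTIFY -> ANALYZE
risk.advance()   # ANALYZE -> PLAN
print(risk.phase, risk.status)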
Active review: Its starting and ending times, as well as its scope and participants (the
stakeholders involved in the review), are set by the risk manager. The review has a defined set of inputs
(reports, checklists, questionnaires, etc.) and associated risk identification techniques. As a rule,
the snapshot from the last continuous review is included as an input of the active review. The
active review ends with the risk analysis session that aims at assessing and prioritizing the
identified risks and produces a relevant report.
Continuous review: It starts with the end of the previous review and ends with the start of the
next review (be it active or continuous). It keeps the communication channel open so that
communicated risk information is recorded. The set of its input documents is not controlled by
the risk manager: any project stakeholder can pass on risk-related information, regardless of how
it was generated. Typically, a snapshot is taken at the end of the continuous
review to provide an input to the subsequent active review. A snapshot is also taken at the end of
an active review to summarize the effects of risk identification activities. The risk assessment
report is generated at the end of an active review. We assume that the process has the active and
continuous reviews interleaved, their extent (in time) and scope (in terms of inputs and
participants) being controlled by the risk manager. This way we achieve the following benefits:
The risk identification process operates on the following objects:
Project: General project description (process, methodology, organization, size, initiation date).
Mitigation area: Area of a project that is exposed to a common type of risk (e.g. requirements
specification, personnel management, etc.).
Review: This is the root object of the identification phase. Opening a new review starts risk
identification activities, whereas closing the review ends the risk information acquisition.
Checklist: Checklists are used to collect information that helps to identify risks. A checklist
includes its name, description and the author's identification.
Predefined risk: Risk that is stored in the risk knowledge base. It may be selected by one or
more answers to the questions.
Predefined risk factor: Risk factor providing the context for a risk stored in the risk knowledge
base.
Identified risk: Detailed risk description (from the risk knowledge base) in the context of a
particular project.
Identified factor: Context of the identified risk extracted from the risk knowledge base.
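To make the object model above more tangible, the following is a minimal Python sketch of how these entities might be represented. The class and field names are assumptions derived from the definitions in the text, not the schema of any particular risk management tool.

# Sketch of the risk identification objects described above (Project, Mitigation
# area, Review, Checklist, Predefined/Identified risk). Field names are assumptions.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Project:
    name: str
    methodology: str
    size: str
    initiation_date: str

@dataclass
class MitigationArea:
    name: str                      # e.g. "requirements specification"

@dataclass
class Checklist:
    name: str
    description: str
    author: str
    questions: List[str] = field(default_factory=list)

@dataclass
class PredefinedRisk:              # stored in the risk knowledge base
    description: str
    factors: List[str] = field(default_factory=list)

@dataclass
class IdentifiedRisk:              # a predefined risk instantiated for a project
    source: PredefinedRisk
    project: Project
    area: MitigationArea
    notes: str = ""

@dataclass
class Review:                      # root object of the identification phase
    kind: str                      # "active" or "continuous"
    checklists: List[Checklist] = field(default_factory=list)
    identified_risks: List[IdentifiedRisk] = field(default_factory=list)
    open: bool = True

    def close(self):
        """Closing the review ends risk information acquisition."""
        self.open = False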
5.20 SUMMARY
Metrics should always be seen as indicators, not as absolute truth. It is possible to score well on
all metrics, but still have an unsatisfactory design. The application of simple product metrics to
entire programs can only indicate certain problems but does not relate measurement results back
to design principles. It can be very difficult for developer to decide on the right action to take
upon receipt of a particular metrics value. Design metrics may be used to relate knowledge about
good design to characteristic structural system properties. Software developers should be able to
infer more about the software they are developing during the design process.
Assignment-Module 5
6. Fan-In of a procedure is
a. Number of local flows into that procedure plus the number of data structures.
b. Number of components dependent on it
c. Number of components related to it
d. None of them
7. Fan-Out of a procedure is
a. Number of local flows from that procedure plus the number of data structures
b. Number of components dependent on it
c. Number of components related to it
d. None of them
Key - Module 5
1. d
2. a
3. a
4. a
5. d
6. a
7. a
8. d
9. d
10. d
11. a
12. a
13. a
14. a
15. a
CHAPTER 6 : QUALITY STANDARDS
A quality management system (QMS) defines and establishes an organization's quality policy
and objectives. It also allows an organization to document and implement the procedures needed
to attain these goals. A properly implemented QMS ensures that procedures are carried out
consistently, that problems can be identified and resolved, and that the organization can
continuously review and improve its procedures, products and services. It is a mechanism for
maintaining and improving the quality of products or services so that they consistently meet or
exceed the customer's implied or stated needs and fulfil their quality objectives.
6.1 ISO 9000
The standards are voluntary and as a result have no legal requirements attached. The best known
quality standards are known as the 9000 Series or ISO 9000.
The ISO 9000 quality management system can enable your company to increase profitability and
customer satisfaction through reduced waste and rework, shortened cycle times, improved
problem tracking and resolution and better supplier relations.
Other benefits of ISO certification:
6.1.2.2 Disadvantages
costly,
time consuming to document and maintain,
requires employee buy-in
To achieve maximum benefit from ISO 9000 the focus must be on documenting, understanding
and improving your systems and processes.
ISO 9000 - Explains fundamental quality concepts and provides guidelines for the
selection and application of each standard.
ISO 9001 - Model for quality assurance in design, development, production, installation
and servicing.
ISO 9002 - Model for quality assurance in the production and installation of
manufacturing systems.
ISO 9003 - Quality assurance in final inspection and testing.
ISO 9004 - Guidelines for the applications of standards in quality management and
quality systems.
ISO 9000 and ISO 9004 are guidance standards. They describe what is necessary to
accomplish the requirements outlined in standards 9001, 9002 or 9003.
Organizations choose the standards to which they want to become registered, based on their
structure, their products, services and their specific function. Selecting the appropriate standards
is an important decision.
6.2 SIX SIGMA
Six Sigma is a business management strategy, originally developed by Motorola in 1986. Six
Sigma became well known after Jack Welch made it a central focus of his business strategy at
General Electric in 1995, and today it is widely used in many sectors of industry.
Six Sigma seeks to improve the quality of process outputs by identifying and removing the
causes of defects (errors) and minimizing variability in manufacturing and business processes. It
uses a set of quality management methods, including statistical methods, and creates a special
infrastructure of people within the organization ("Black Belts", "Green Belts", etc.) who are
experts in these methods. Each Six Sigma project carried out within an organization follows a
defined sequence of steps and has quantified financial targets (cost reduction and/or profit
increase).
The term Six Sigma originated from terminology associated with manufacturing, specifically
terms associated with statistical modelling of manufacturing processes. The maturity of a
manufacturing process can be described by a sigma rating indicating its yield or the percentage
of defect-free products it creates. A six sigma process is one in which 99.99966% of the products
manufactured are statistically expected to be free of defects (3.4 defects per million).
Six Sigma originated as a set of practices designed to improve manufacturing processes and
eliminate defects, but its application was subsequently extended to other types of business
processes as well. In Six Sigma, a defect is defined as any process output that does not meet
customer specifications, or that could lead to creating an output that does not meet customer
specifications.
The core of Six Sigma was “born” at Motorola in the 1970s out of senior executive Art Sundry's
criticism of Motorola’s bad quality. As a result of this criticism, the company discovered a
connection between increases in quality and decreases in costs of production. At that time, the
prevailing view was that quality costs extra money. In fact, it reduced total costs by driving down
the costs for repair or control. Bill Smith subsequently formulated the particulars of the
methodology at Motorola in 1986. Six Sigma was heavily inspired by the quality improvement
methodologies of the six preceding decades, such as quality control, Total Quality Management
(TQM), and Zero Defects, based on the work of pioneers such as Shewhart, Deming, Juran,
Crosby, Ishikawa, Taguchi, and others.
The Six Sigma doctrine asserts that:
Continuous efforts to achieve stable and predictable process results (i.e., reduce process
variation) are of vital importance to business success.
Manufacturing and business processes have characteristics that can be measured,
analyzed, improved and controlled.
Achieving sustained quality improvement requires commitment from the entire
organization, particularly from top-level management.
Features that set Six Sigma apart from previous quality improvement initiatives include:
A clear focus on achieving measurable and quantifiable financial returns from any Six
Sigma project.
An increased emphasis on strong and passionate management leadership and support.
A special infrastructure of "Champions", "Master Black Belts", "Black Belts", "Green
Belts", "Red Belts" etc. to lead and implement the Six Sigma approach.
A clear commitment to making decisions on the basis of verifiable data, rather than
assumptions and guesswork.
The term "Six Sigma" comes from a field of statistics known as process capability studies.
Originally, it referred to the ability of manufacturing processes to produce a very high proportion
of output within specification. Processes that operate with "six sigma quality" over the short term
are assumed to produce long-term defect levels below 3.4 defects per million opportunities
(DPMO). Six Sigma's implicit goal is to improve all processes to that level of quality or better.
Six Sigma is a registered service mark and trademark of Motorola Inc. As of 2006 Motorola
reported over US$17 billion in savings from Six Sigma. Other early adopters of Six Sigma who
achieved well-publicized success include Honeywell (previously known as AlliedSignal) and
General Electric, where Jack Welch introduced the method. By the late 1990s, about two-thirds
of the Fortune 500 organizations had begun Six Sigma initiatives with the aim of reducing costs
and improving quality.
In recent years, some practitioners have combined Six Sigma ideas with lean manufacturing to
create a methodology named Lean Six Sigma. The Lean Six Sigma methodology views lean
manufacturing, which addresses process flow and waste issues, and Six Sigma, with its focus on
variation and design, as complementary disciplines aimed at promoting "business and
operational excellence". Companies such as IBM use Lean Six Sigma to focus transformation
efforts not just on efficiency but also on growth. It serves as a foundation for innovation
throughout the organization, from manufacturing and software development to sales and service
delivery functions.
6.2.1 Methods
Six Sigma projects follow two project methodologies inspired by Deming's Plan-Do-Check-Act
Cycle. These methodologies, composed of five phases each, bear the acronyms DMAIC and
DMADV.
DMAIC is used for projects aimed at improving an existing business process. DMAIC is
pronounced as "duh-may-ick".
DMADV is used for projects aimed at creating new product or process designs. DMADV
is pronounced as "duh-mad-vee".
The five phases of DMAIC are:
Define the problem, the voice of the customer, and the project goals, specifically.
Measure key aspects of the current process and collect relevant data.
Analyze the data to investigate and verify cause-and-effect relationships. Determine what
the relationships are, and attempt to ensure that all factors have been considered. Seek out
root cause of the defect under investigation.
Improve or optimize the current process based upon data analysis using techniques such
as design of experiments, poka yoke or mistake proofing, and standard work to create a
new, future state process. Set up pilot runs to establish process capability.
Control the future state process to ensure that any deviations from target are corrected
before they result in defects. Implement control systems such as statistical process
control, production boards, visual workplaces, and continuously monitor the process.
Some organizations add a Recognize step at the beginning, which is to recognize the right
problem to work on, thus yielding an RDMAIC methodology.
The five phases of DMADV are:
Define design goals that are consistent with customer demands and the enterprise
strategy.
Measure and identify CTQs (characteristics that are Critical To Quality), product
capabilities, production process capability, and risks.
Analyze to develop and design alternatives, create a high-level design and evaluate
design capability to select the best design.
Design details, optimize the design, and plan for design verification. This phase may
require simulations.
Verify the design, set up pilot runs, implement the production process and hand it over to
the process owner(s).
Six Sigma identifies several key roles for its successful implementation.
Executive Leadership includes the CEO and other members of top management. They are
responsible for setting up a vision for Six Sigma implementation. They also empower the
other role holders with the freedom and resources to explore new ideas for breakthrough
improvements.
Champions take responsibility for Six Sigma implementation across the organization in
an integrated manner. The Executive Leadership draws them from upper management.
Champions also act as mentors to Black Belts.
Master Black Belts, identified by champions, act as in-house coaches on Six Sigma. They
devote 100% of their time to Six Sigma. They assist champions and guide Black Belts
and Green Belts. Apart from statistical tasks, they spend their time on ensuring consistent
application of Six Sigma across various functions and departments.
Black Belts operate under Master Black Belts to apply Six Sigma methodology to
specific projects. They devote 100% of their time to Six Sigma. They primarily focus on
Six Sigma project execution, whereas Champions and Master Black Belts focus on
identifying projects/functions for Six Sigma.
Green Belts are the employees who take up Six Sigma implementation along with their
other job responsibilities, operating under the guidance of Black Belts.
Some organizations use additional belt colours, such as Yellow Belts, for employees who
have basic training in Six Sigma tools and generally participate in projects, and White
Belts for those locally trained in the concepts who do not participate in the project team.
6.2.4 Certification
Corporations such as early Six Sigma pioneers General Electric and Motorola developed
certification programs as part of their Six Sigma implementation, verifying individuals'
command of the Six Sigma methods at the relevant skill level (Green Belt, Black Belt etc.).
Following this approach, many organizations in the 1990s started offering Six Sigma
certifications to their employees. Criteria for Green Belt and Black Belt certification vary; some
companies simply require participation in a course and a Six Sigma project. There is no standard
certification body, and different certification services are offered by various quality associations
and other providers against a fee. The American Society for Quality for example requires Black
Belt applicants to pass a written exam and to provide a signed affidavit stating that they have
completed two projects, or one project combined with three years' practical experience in the
body of knowledge. The International Quality Federation offers an online certification exam that
organizations can use for their internal certification programs; it is statistically more demanding
than the ASQ certification. Other providers offering certification services include the Juran
Institute, Six Sigma Qualtec, Air Academy Associates and many others.
Capability studies measure the number of standard deviations between the process mean and the
nearest specification limit in sigma units. As process standard deviation goes up, or the mean of
the process moves away from the center of the tolerance, fewer standard deviations will fit
between the mean and the nearest specification limit, decreasing the sigma number and
increasing the likelihood of items outside specification.
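The capability calculation described above can be expressed in a few lines of code. The following is a minimal sketch assuming a normally distributed process with known mean, standard deviation and two-sided specification limits; the function names are illustrative.

# Minimal sketch of a process capability calculation: how many standard
# deviations fit between the process mean and the nearest specification limit.

def sigma_level(mean: float, std_dev: float, lsl: float, usl: float) -> float:
    """Distance from the mean to the nearest specification limit, in sigma units."""
    return min(usl - mean, mean - lsl) / std_dev

def cpk(mean: float, std_dev: float, lsl: float, usl: float) -> float:
    """Process capability index: sigma level divided by 3."""
    return sigma_level(mean, std_dev, lsl, usl) / 3.0

# Example: as the mean drifts away from the center of the tolerance band,
# fewer standard deviations fit and the sigma level drops.
print(sigma_level(mean=10.0, std_dev=0.5, lsl=7.0, usl=13.0))  # 6.0 sigma
print(sigma_level(mean=11.0, std_dev=0.5, lsl=7.0, usl=13.0))  # 4.0 sigma
print(cpk(mean=11.0, std_dev=0.5, lsl=7.0, usl=13.0))          # 1.33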
Figure: Graph of the normal distribution, which underlies the statistical assumptions of the Six
Sigma model. The Greek letter σ (sigma) marks the distance on the horizontal axis between the mean,
µ, and the curve's inflection point. The greater this distance, the greater is the spread of values
encountered. For the green curve shown above, µ = 0 and σ = 1. The upper and lower
specification limits (USL and LSL, respectively) are at a distance of 6σ from the mean. Because
of the properties of the normal distribution, values lying that far away from the mean are
extremely unlikely. Even if the mean were to move right or left by 1.5σ at some point in the
future (1.5 sigma shift, coloured red and blue), there is still a good safety cushion. This is why
Six Sigma aims to have processes where the mean is at least 6σ away from the nearest
specification limit.
Hence the widely accepted definition of a six sigma process is a process that produces 3.4
defective parts per million opportunities (DPMO). This is based on the fact that a process that is
normally distributed will have 3.4 parts per million beyond a point that is 4.5 standard deviations
above or below the mean (one-sided capability study). So the 3.4 DPMO of a six sigma process
in fact corresponds to 4.5 sigma, namely 6 sigma minus the 1.5-sigma shift introduced to account
for long-term variation. This allows for the fact that special causes may result in a deterioration
in process performance over time, and is designed to prevent underestimation of the defect levels
likely to be encountered in real-life operation.
The table below gives long-term DPMO values corresponding to various short-term sigma levels.
It must be understood that these figures assume that the process mean will shift by 1.5 sigma
toward the side with the critical specification limit. In other words, they assume that after the
initial study determining the short-term sigma level, the long-term Cpk value will turn out to be
0.5 less than the short-term Cpk value. So, for example, the DPMO figure given for 1 sigma
assumes that the long-term process mean will be 0.5 sigma beyond the specification limit (Cpk =
–0.17), rather than 1 sigma within it, as it was in the short-term study (Cpk = 0.33). Note that the
defect percentages indicate only defects exceeding the specification limit to which the process
mean is nearest. Defects beyond the far specification limit are not included in the percentages.
Sigma level  DPMO      Percent defective  Percentage yield  Short-term Cpk  Long-term Cpk
1            691,462   69%                31%               0.33            -0.17
2            308,538   31%                69%               0.67            0.17
3            66,807    6.7%               93.3%             1.00            0.5
4            6,210     0.62%              99.38%            1.33            0.83
5            233       0.023%             99.977%           1.67            1.17
6            3.4       0.00034%           99.99966%         2.00            1.5
7            0.019     0.0000019%         99.9999981%       2.33            1.83
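The long-term DPMO figures in the table can be reproduced from the stated assumptions (a normally distributed process, a 1.5 sigma shift of the mean toward the nearest specification limit, and defects counted on one side only). The following minimal Python sketch, using only the standard library, performs that calculation.

# Reproduce the long-term DPMO values for short-term sigma levels 1..6,
# assuming a one-sided count of defects and a 1.5 sigma shift of the mean
# toward the nearest specification limit.

from statistics import NormalDist

def long_term_dpmo(short_term_sigma: float, shift: float = 1.5) -> float:
    """Defects per million opportunities after the mean shifts toward the limit."""
    effective_sigma = short_term_sigma - shift
    defect_fraction = 1.0 - NormalDist().cdf(effective_sigma)
    return defect_fraction * 1_000_000

for sigma in range(1, 7):
    dpmo = long_term_dpmo(sigma)
    short_cpk = sigma / 3.0
    long_cpk = (sigma - 1.5) / 3.0
    print(f"{sigma} sigma: {dpmo:12.1f} DPMO, "
          f"Cpk short-term {short_cpk:.2f}, long-term {long_cpk:.2f}")
# 6 sigma yields about 3.4 DPMO, i.e. a 99.99966% defect-free yield.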
Analysis tools
Arena
ARIS Six Sigma
Bonita Open Solution (BPMN2 standard and KPIs for statistical monitoring)
JMP
Microsoft Visio
Minitab
R language (The R Project for Statistical Computing). Open source software: statistical and
graphic functions from the base installation can be used for Six Sigma projects. Furthermore,
some contributed packages at CRAN contain specific tools for Six Sigma: SixSigma,
qualityTools, qcc and IQCC.
SDI Tools
SigmaXL
Software AG webMethods BPM Suite
SPC XL
Statgraphics
STATISTICA
6.2.9 Application
Six Sigma mostly finds application in large organizations. An important factor in the spread of
Six Sigma was GE's 1998 announcement of $350 million in savings thanks to Six Sigma, a
figure that later grew to more than $1 billion. According to industry consultants, companies with
fewer than 500 employees are less suited to Six Sigma implementation, or need to adapt the
standard approach to make it work for them. This is due both to the infrastructure of Black Belts
that Six Sigma requires, and to the fact that large organizations present more opportunities for
the kinds of improvements Six Sigma is suited to bringing about.
In healthcare
Six Sigma strategies were initially applied to the healthcare industry in March 1998. The
Commonwealth Health Corporation (CHC) was the first health care organization to successfully
implement the efficient strategies of Six Sigma. Substantial financial benefits were claimed: for
example, throughput in its radiology department improved by 33% and costs per radiology
procedure decreased by 21.5%. Six Sigma has subsequently been adopted in other hospitals
around the world.
Critics of Six Sigma believe that while Six Sigma methods may translate well to a
manufacturing setting, they would not have the same results in service-oriented businesses, such
as the health industry.
6.2.10 Criticism
6.2.10.1 Lack of originality
Noted quality expert Joseph M. Juran has described Six Sigma as "a basic version of quality
improvement", stating that "there is nothing new there. It includes what we used to call
facilitators. They've adopted more flamboyant terms, like belts with different colors. I think that
concept has merit to set apart, to create specialists who can be very helpful. Again, that's not a
new idea. The American Society for Quality long ago established certificates, such as for
reliability engineers."
The use of "Black Belts" as itinerant change agents has (controversially) fostered an industry of
training and certification. Critics argue there is overselling of Six Sigma by too great a number of
consulting firms, many of which claim expertise in Six Sigma when they have only a
rudimentary understanding of the tools and techniques involved.
A Fortune article stated that "of 58 large companies that have announced Six Sigma programs,
91 percent have trailed the S&P 500 since". The statement was attributed to "an analysis by
Charles Holland of consulting firm Qualpro (which espouses a competing quality-improvement
process)." The summary of the article is that Six Sigma is effective at what it is intended to do,
but that it is "narrowly designed to fix an existing process" and does not help in "coming up with
new products or disruptive technologies." Advocates of Six Sigma have argued that many of
these claims are in error or ill-informed.
A more direct criticism is the "rigid" nature of Six Sigma with its over-reliance on methods and
tools. In most cases, more attention is paid to reducing variation and less attention is paid to
developing robustness (which can altogether eliminate the need for reducing variation).
Articles featuring critics of Six Sigma have appeared in the November-December 2006 issue of Army
Logistician: "The dangers of a single paradigmatic orientation (in this case,
that of technical rationality) can blind us to values associated with double-loop learning and the
learning organization, organization adaptability, workforce creativity and development,
humanizing the workplace, cultural awareness, and strategy making."
A Business Week article says that James McNerney's introduction of Six Sigma at 3M had the
effect of stifling creativity and reports its removal from the research function. It cites two
Wharton School professors who say that Six Sigma leads to incremental innovation at the
expense of blue skies research. This phenomenon is further explored in the book Going Lean,
which describes a related approach known as lean dynamics and provides data to show that
Ford's "6 Sigma" program did little to change its fortunes.
In articles and especially on Internet sites and in text books, claims are made about the huge
successes and millions of dollars that Six Sigma has saved. Six Sigma seems to be a "silver
bullet" method. However, there does not seem to be trustworthy evidence for this:
Probably a bigger issue with the Six Sigma literature than its concepts relates to the evidence for Six
Sigma's success. So far, documented case studies using the Six Sigma methods are presented as the
strongest evidence for its success. However, looking at these documented cases, and apart from a
few that are detailed from the experience of leading organizations like GE and Motorola, most
cases are not documented in a systemic or academic manner. In fact, the majority are case studies
illustrated on websites, and are, at best, sketchy. They provide no mention of any specific Six
Sigma methods that were used to resolve the problems. It has been argued that by relying on the
Six Sigma criteria, management is lulled into the idea that something is being done about quality,
whereas any resulting improvement is accidental (Latzko 1995). Thus, when looking at the
evidence put forward for Six Sigma success, mostly by consultants and people with vested
interests, the question that begs to be asked is: are we making a true improvement with Six
Sigma methods or just getting skilled at telling stories? Everyone seems to believe that we are
making true improvements, but there is some way to go to document these empirically and
clarify the causal relations.
While 3.4 defects per million opportunities might work well for certain products/processes, it
might not operate optimally or cost effectively for others. A pacemaker process might need
higher standards, for example, whereas a direct mail advertising campaign might need lower
standards. The basis and justification for choosing six (as opposed to five or seven, for example)
as the number of standard deviations, together with the 1.5 sigma shift is not clearly explained.
In addition, the Six Sigma model assumes that the process data always conform to the normal
distribution. The calculation of defect rates for situations where the normal distribution model
does not apply is not properly addressed in the current Six Sigma literature. This particularly
applies to reliability-related defects and other problems that are not time-invariant. The IEC,
ARP, EN-ISO, DIN and other (inter)national standardization organizations have not created
standards for the Six Sigma process. This might be the reason that it became a dominant domain
of consultants (see critics above).
The statistician Donald J. Wheeler has dismissed the 1.5 sigma shift as "goofy" because of its
arbitrary nature. Its universal applicability is seen as doubtful.
The 1.5 sigma shift has also become contentious because it results in stated "sigma levels" that
reflect short-term rather than long-term performance: a process that has long-term defect levels
corresponding to 4.5 sigma performance is, by Six Sigma convention, described as a "six sigma
process". The accepted Six Sigma scoring system thus cannot be equated to actual normal
distribution probabilities for the stated number of standard deviations, and this has been a key
bone of contention over how Six Sigma measures are defined. The fact that it is rarely explained
that a "6 sigma" process will have long-term defect rates corresponding to 4.5 sigma
performance rather than actual 6 sigma performance has led several commentators to express the
opinion that Six Sigma is a confidence trick.
6.3 CAPABILITY MATURITY MODEL INTEGRATION (CMMI)
According to the Software Engineering Institute (SEI, 2008), CMMI helps "integrate
traditionally separate organizational functions, set process improvement goals and priorities,
provide guidance for quality processes, and provide a point of reference for appraising current
processes.”
Figure 6.3: Characteristics of maturity levels
CMMI was developed by a group of experts from industry, government, and the Software
Engineering Institute (SEI) at Carnegie Mellon University. CMMI models provide guidance for
developing or improving processes that meet the business goals of an organization. A CMMI
model may also be used as a framework for appraising the process maturity of the organization.
CMMI originated in software engineering but has been highly generalized over the years to
embrace other areas of interest, such as the development of hardware products, the delivery of all
kinds of services, and the acquisition of products and services. The word "software" does not
appear in definitions of CMMI. This generalization of improvement concepts makes CMMI
extremely abstract. It is not as specific to software engineering as its predecessor, the Software
CMM.
CMMI was developed by the CMMI project, which aimed to improve the usability of maturity
models by integrating many different models into one framework. The project consisted of
members of industry, government and the Carnegie Mellon Software Engineering Institute (SEI).
The main sponsors included the Office of the Secretary of Defense (OSD) and the National
Defense Industrial Association.
CMMI is the successor of the capability maturity model (CMM) or Software CMM. The CMM
was developed from 1987 until 1997. In 2002, CMMI Version 1.1 was released, Version 1.2
followed in August 2006, and CMMI Version 1.3 in November 2010. Some of the major changes
in CMMI V1.3 are the support of Agile Software Development, improvements to high maturity
practices and alignment of the representation (staged and continuous).
The SEI has reported that 60 organizations measured increases in performance in the categories of
cost, schedule, productivity, quality and customer satisfaction. The median increase in
performance varied between 14% (customer satisfaction) and 62% (productivity). However, the
CMMI model mostly deals with what processes should be implemented, and not so much with
how they can be implemented. These results do not guarantee that applying CMMI will increase
performance in every organization. A small company with few resources may be less likely to
benefit from CMMI; this view is supported by the process maturity profile (page 10). Of the
small organizations (<25 employees), 70.5% are assessed at level 2: Managed, while 52.8% of
the organizations with 1001–2000 employees are rated at the highest level (5: Optimizing).
Interestingly, Turner & Jain (2002) argue that although it is obvious there are large differences
between CMMI and agile methods, both approaches have much in common. They believe
neither way is the 'right' way to develop software, but that there are phases in a project where one
of the two is better suited. They suggest one should combine the different fragments of the
methods into a new hybrid method. Sutherland et al. (2007) assert that a combination of Scrum
and CMMI brings more adaptability and predictability than either one alone. David J. Anderson
(2005) gives hints on how to interpret CMMI in an agile manner. Other viewpoints about using
CMMI and Agile development are available on the SEI website.
CMMI Roadmaps, which are a goal-driven approach to selecting and deploying relevant process
areas from the CMMI-DEV model, can provide guidance and focus for effective CMMI
adoption. There are several CMMI roadmaps for the continuous representation, each with a
specific set of improvement goals. Examples are the CMMI Project Roadmap, CMMI Product
and Product Integration Roadmaps and the CMMI Process and Measurements Roadmaps. These
roadmaps combine the strengths of both the staged and the continuous representations.
The combination of the project management technique earned value management (EVM) with
CMMI has been described (Solomon, 2002). Similarly, Extreme
Programming (XP), a software engineering method, has been evaluated against CMM/CMMI
(Nawrocki et al., 2002). For example, the XP requirements management approach, which relies
on oral communication, was evaluated as not compliant with CMMI.
CMMI can be appraised using two different approaches: staged and continuous. The staged
approach yields appraisal results as one of five maturity levels. The continuous approach yields
one of six capability levels. The differences in these approaches are felt only in the appraisal; the
best practices are equivalent and result in equivalent process improvement results.
6.3.2 Appraisal
An organization cannot be certified in CMMI; instead, an organization is appraised. Depending
on the type of appraisal, the organization can be awarded a maturity level rating (1-5) or a
capability level achievement profile. An appraisal is typically conducted for one or more of the
following reasons:
To determine how well the organization’s processes compare to CMMI best practices,
and to identify areas where improvement can be made
To inform external customers and suppliers of how well the organization’s processes
compare to CMMI best practices
To meet the contractual requirements of one or more customers
Appraisals of organizations using a CMMI model must conform to the requirements defined in
the Appraisal Requirements for CMMI (ARC) document. There are three classes of appraisals,
A, B and C, which focus on identifying improvement opportunities and comparing the
organization’s processes to CMMI best practices. Of these, class A appraisal is the most formal
and is the only one that can result in a level rating. Appraisal teams use a CMMI model and
ARC-conformant appraisal method to guide their evaluation of the organization and their
reporting of conclusions. The appraisal results can then be used (e.g., by a process group) to plan
improvements for the organization.
The Standard CMMI Appraisal Method for Process Improvement (SCAMPI) is an appraisal
method that meets all of the ARC requirements. Results of a SCAMPI appraisal may be
published (if the appraised organization approves) on the CMMI Web site of the SEI: Published
SCAMPI Appraisal Results. SCAMPI also supports the conduct of ISO/IEC 15504 (also known
as SPICE, Software Process Improvement and Capability Determination) assessments.
6.4 SUMMARY
Software quality has been a principal concern. In the early days of trading, acceptable quality
was generally decided by agreement between developers and end users. With the wider spread of
technology, standardized means of establishing acceptable quality became important. Standards
form a basis of understanding, provide a degree of definiteness and precision for new methods,
permit accurate comparison of results, reduce the cost of design and maintenance through the use
of developed, proven products and techniques, and assist in the coordination of development by
establishing methods and direction. ISO 9000 is necessary but not sufficient to guarantee software
quality.
Assignment-Module 6
1. ___________defines and establishes an organization's quality policy and objectives.
a. QMC
b. QMS
c. QA
d. QC
Key - Module 6
1. a
2. c
3. b
4. d
5. a
6. a
7. b
8. b
9. a
10. d
REFERENCES
3. CSTE Common Body Of Knowledge, V6.1 and CSQA Common Body Of Knowledge,
V6.2.
4. Managing Quality an Integrative Approach, Foster, S. Thomas, Upper Saddle River:
Prentice Hall, 2001.
5. Quality Function Deployment - A Practitioner's Approach, James L. Brossert, Milwaukee,
Wisc.: ASQC Quality Press, 1991.
6. Software Quality: Concepts and Evidences, Luis Fernández Sanz, Departamento de
Sistemas Informáticos Universidad Europea de Madrid.
7. Software Testing, Antonia Bertolino, Istituto di Elaborazione della Informazione,
Consiglio Nazionale delle Ricerche, Research Area of S. Cataldo, 56100 Pisa, Italy.
8. The Quality Toolbox, Nancy R. Tague, ASQ Quality Process, Second Edition, 2004.
9. www.acw.mit.edu
10. www.aleanjourney.com
11. www.asq.org
12. www.cnx.org
13. www.cs.colostate.edu
14. www.csbdu.in
15. www.defectmanagement.com
16. www.drdobbs.com
17. www.ehow.com
18. www.exforsys.com
19. www.herkules.oulu.fi
20. www.icoachmath.com
21. www.ipcc-nggip.iges.org.jp
22. www.johnsonandjohnson.com.
23. www.mhhe.com
24. www.mks.com
25. www.msdn.microsoft.com
26. www.msqaa.org
27. www.peart.ucs.indiana.edu
28. www.physicaltherapyjournal.com
29. www.processexcellencenetwork.com
31. www.qualitydigest.com
32. www.selectbs.com
33. www.softwaretestingdiary.com
34. www.sqa.net
35. www.stylusinc.com
36. www.timkastelle.org
37. www.undergraduate.cse.uwa.edu.au
38. www.westfallteam.com
39. www.wikipedia.com
40. www.zarate-consult.de
41. http://dissertations.ub.rug.nl
42. http://msdn.microsoft.com