Project Manager’s
Guide to Systems
Engineering
Measurement for
Project Success
A Basic Introduction to Systems Engineering Measures
for Use by Project Managers
Authors
This document was prepared by the Measurement Working Group of the International Council on Systems
Engineering. The authors who made a significant contribution to the generation of this Guide are:
Ronald S. Carson, PhD, ESEP (Lead) – The Boeing Company (retired)
Paul J. Frenz, CSEP (MWG Chair) – General Dynamics
Elizabeth O’Donnell – The Boeing Company
REVISION HISTORY
Version Revision Date Comments/Description
New 21 March 2015 Approved by INCOSE Technical Board as an INCOSE Technical Paper
Copyright © 2015 by the International Council on Systems Engineering. All rights reserved.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical,
photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without
the prior written permission of the Publisher. Requests to the Publisher for permission should be addressed to the INCOSE Administrative
Office, International Council on Systems Engineering, 7670 Opportunity Rd #220, San Diego, CA 92111-2222 USA, +1 858.541.1725, publications@incose.org.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this publication, they
make no representations or warranties with respect to the accuracy or completeness of the contents of this publication and specifically
disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales
representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should
consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial
damages, including but not limited to special, incidental, consequential, or other damages.
INCOSE publishes its products in a variety of formats. For more information about INCOSE products, visit our web site at www.incose.org.
ISBN: 978-1-937073-06-01
General Citation Guidelines: References to this publication should be formatted as follows, with appropriate adjustments for formally
recognized styles:
INCOSE (2015). Project Manager’s Guide to Systems Engineering Measurement for Project Success: A Basic Introduction
to Systems Engineering Measures for Use by Project Managers (Version 1.0). R. S. Carson, P. J. Frenz, and E. O’Donnell.
San Diego, CA: International Council on Systems Engineering.
INCOSE Notices
Author Use: Authors have full rights to use their contributions unfettered, with credit to the INCOSE technical source, except as noted in the
following text.
INCOSE Use: In accordance with Section 107 of the 1976 United States Copyright Act, limited use is granted to reproduce for purposes
such as criticism, comment, news reporting, teaching, scholarship, or research. Reproduction for these purposes is limited to one 1,000-word
extraction from legally acquired copies of this publication. For instructional purposes, only one copy of the 1,000-word extraction is allowed
per student. All limited use shall provide attribution to INCOSE and the original author(s) where practical, provided this copyright notice is
included with all reproductions. No other use, such as preparing derivative works or redistributing, is granted.
Table of Contents
1 INTRODUCTION
7 ACRONYMS
Preface
Acknowledgements
The completion of this document required the review and comment of members from the Measurement
Working Group and other groups in INCOSE. Their efforts are greatly appreciated.
INCOSE technical documents are developed within the working groups of INCOSE. Members of the
working groups serve voluntarily and without compensation. The documents developed within INCOSE
represent a consensus of the broad expertise on the subject within INCOSE.
Comments
Comments for revision of INCOSE technical reports are welcome from any interested party, regardless
of membership affiliation with INCOSE. Suggestions for change in documents should be in the form of a
proposed change of text, together with appropriate supporting rationale. Please use the feedback form that
is provided at the end of this document. Comments on technical reports and requests for interpretations
should be addressed to:
http://www.incose.org/practice/techactivities/wg/measure/
1 INTRODUCTION
In This Chapter
• Project Management and Measurement
Typical project measures, such as cost and schedule measures, provide the project manager (and systems
engineer) visibility into how well the project is tracking against its planned budget and schedule targets.
For the project manager, meeting these target measures – while crucial to assessing the performance
of the system that produces the product or service – is not all that should be considered to ensure the
project achieves its technical objectives and is on the path to success.
With the increasing complexity of systems and the important role of systems engineering (SE) on today’s
projects, there is other information that can be measured, evaluated, and acted upon, for project success,
by the managers who control the SE processes on these projects.
Analysis, status reporting, and assessment of the project’s systems engineering measures can
complement cost and schedule control, and can help meet programmatic targets. By tracking these SE
measures the project manager gains visibility into whether the delivered system will meet its requirements
and satisfy the customer’s needs and expectations. The project manager or systems engineer needs to
decide which measures are worth addressing or tracking, what tailoring is needed, and how to act upon
the results of the measurements.
This guide provides explanations and examples of some systems engineering data, and how that data
can be collected, measured, tailored, and controlled to help ensure project success. This is not a
comprehensive treatise on creating and implementing a measurement program. Rather, it is an informative
guide to assist you in determining which measures can best enable the success of your project. If you’re
looking for straightforward information and measurement techniques, this is the guide for you.
1 2002 PMI briefing “You Can’t Manage What You Don’t Measure!!!,” Theresa Ramirez, PMP, 10/18/2002, https://
www.hashdoc.com/documents/2520/you-can-t-manage-what-you-don-t-measure (accessed 17 February 2015).
The six chapters in this guide are geared toward helping you understand the problems that systems
engineering measurement aims to identify and control, and to identify the steps involved in the development,
analysis, and utilization of these measures.
This symbol appears next to suggestions or ideas that are important to remember.
This symbol identifies a suggestion that should be heeded, to avoid potential pitfalls.
This symbol indicates that more information and detail are provided in later sections.
In This Chapter
• What is Systems Engineering?
• Value of Measurement
[Figure: System development – solution/system definition, architecture development, and realization, with integration, verification, and validation (I, V, & V) planning alongside.]
Value of Measurement
Measurement is central to achieving the objectives of a project and an effective systems engineering
process. It is important both to becoming aware of our situation and to increasing our knowledge. We
measure in order to assess where we are in terms of current performance, to set goals for improvement,
and to foresee what could potentially occur based on the current situation.
With good attention to measurement comes knowledge, essential for project success – knowledge about
project progress, performance, and problems. A defined system for measurement provides the project
manager and others with an effective way to communicate this knowledge, to track project goals and
objectives, and to provide rationale for decisions.
Systems engineering measures allow you to gauge project performance with respect to these
objectives. With attention to good SE measures, you can more readily see how your investment in systems
engineering results in continuous improvement and the achievement of project goals.
[Figure 2-2: Systems engineering process, with resource measures, process measures, and product measures, and a feedback control loop.]
3 http://www.incose.org/ProductsPubs/pdf/INCOSE_SysEngMeasurementPrimer_2010-1205.pdf (accessed
17 February 2015).
4 INCOSE Systems Engineering Measurement Primer, December 2010 (accessed 17 February 2015).
Processes are executed through the Systems Engineering functional activity to produce products, as
shown in Figure 2-2. The feedback control loop to control SE processes is indicated by the reversed arrows.
Key data collected and analyzed as the processes take place provide insight into the activities and the
progress of developing the systems engineering products. This measurement data and its analysis for
product measures enable you to make necessary adjustments, track project progress, and maintain control.
1. Detect and analyze issues and trends – Use your measurement data and analysis to help you
identify and understand the areas that are going well or need improvement.
2. Identify and correct problems early – Provide the information early on to enable the implementation
of appropriate corrective action and to avoid or minimize cost overruns and schedule slippages. By
getting feedback on the system processes, you’ll be able to recognize error-prone products that can
then be corrected earlier, generally at lower cost.
3. Assess the quality – Use your data to help you evaluate the quality of the engineering or program
technical products.
4. Make key tradeoffs – Measurement throughout the project helps in the performance of tradeoff
analysis, and the selection of the optimal approach or determination of feasible alternative solutions.
5. Enable a focus on risk areas – Measures help identify problems and complexities (requirements
development, design, technical performance progress, etc.), to help identify root causes of risks and
problems and allow you to manage risks before they become issues.
6. Track specific project objectives – By providing status on progress with respect to achieving technical
and management objectives, you can perform better technical planning, make adjustments to
resources based on discrepancies between planned and actual progress, and make other decisions
to revise project plans.
7. Communicate effectively – Measures can help you provide the project team with quantified infor-
mation related to status, potential problems, progress, and completion. Regular communication
on measures can increase awareness of progress, reduce uncertainty and ambiguity, and improve
organizational focus and comprehension.
8. Know which data to ignore – Learning to read your measures will help you discern which data you
can ignore. By sorting those out, you will focus on the measures that are useful to you.
9. Spend your time where it matters – Learning which measures are most important for your project
means getting more time to focus on other areas.
10. Describe rationale for decisions – A good measurement system presents the information for
decision-making and provides accountability of the decisions made.
In This Chapter
• Quick-Start Measurement Questions
Using Systems Engineering measures, you can evaluate results or progress related to the specific
aspects of your project. Some measures can be more applicable than others, depending on project
considerations such as project phase, development strategy, applicable tools and databases, the product
domain, and particular category of measurement. An understanding of the factors that influence these
types of project considerations lets you adjust your approach and select appropriate measures, as well as
make improvements that can help you achieve your goals. The following sections can help to answer key
questions and get you started towards selecting what to measure.
You should measure what is critical to your program to be successful. See Chapter 5 for
guidance on measurement selection.
See the next section and Chapter 5 – Use the Measurement Guide Table to identify appropriate
measures.
This is highly project specific, but if forced to make a recommendation: from a scheduling
perspective, measure problem report aging and technical inch-stone late starts and late
finishes. From a technical perspective, measure the reduction in uncertainty, over time, of any technical or
management parameter necessary for making decisions.
Microsoft Excel® provides enough capability for most measurement efforts. Examples in
Section 5 are done with Microsoft Excel.
You want to select the “critical few” measures that provide the insight into areas of highest risk
to your specific project.
Technical Debt: The promise to complete a technical shortcoming in the future while declaring
it complete enough today. See Chapter 4, A Look at Technical Debt.
Where do I find more on technical measures – measures of performance (MOP), measures of effectiveness
(MOE), and technical performance measures (TPM)?
Knowledgeable staff can effectively compensate for not having detailed processes. New
product domains or customers may present unexpected challenges to novice or even to
experienced staff, so additional technical maturity measurements may be useful.
Let’s go to the expert, Dr. W. Edwards Deming: “In God we trust; all others bring data.” In other
words, trust the data first. Then ask, “Why are the data in question?”
What do I measure with regard to a specific product domain, e.g., software-intensive, cyber, regulatory
(FAA, FDA), medical, aerospace, automotive, etc.?
See the “Product Domain” rows in the next section and in Chapter 5.
Project Considerations and Applicable Factors
Measurement Category: Technical Quality | Size | Complexity | Stability | Schedule
Phase: Conceive and Define | Architect and Design | Implement and Integrate | Verify | Validate | Operate and Support
Development Strategy: Waterfall | Agile/Spiral | Increments | Acquirer-Funded | Supplier-Funded
The categories of Project Considerations are indicated in the first column: Measurement Category, Phase,
Development Strategy, Tools and Databases, and Product Domain. Each of these is elaborated
in turn so that a project manager can select those measures most applicable. The specific
measures listed in Section 5 are associated with applicable factors in each row.
Measurement Category. Measures are associated with specific matters that may need to be addressed:
Technical Quality (maturity, correctness, completeness)
Size (number of requirements, systems, interfaces, etc.)
Complexity (number of stakeholders with different needs, number of external system interfaces,
degree of coupling of subsystems)
Stability (variation of any measure over time)
Schedule (timely accomplishment of technical tasks)
Phase. Measures are associated with a specific phase of the project in the life cycle:
Conceive and Define (identification and development of needs, requirements, and concepts)
Architect and Design (definition of the system requirements, subsystems, interfaces, and design)
Implement and Integrate (realization of the system elements in hardware, software, training,
support, and the integration of these various elements)
Verify (proving by various means that the implemented design satisfies the requirements)
Validate (establishing that the as-built system satisfies the stakeholder needs)
Operate and Support (using and maintaining the capabilities of the system, including training)
Tools and Databases. Measures associated with different levels of infrastructure for developing and
managing development information:
Manual or Spreadsheet (development artifacts are primarily paper-based; systems engineering is
labor-intensive)
Requirements Management (systems engineering automation is limited to managing requirements
for traceability and configuration control)
Static Model-Based SE (SE uses automated tools for defining and managing system requirements
and architecture)
Simulation-Based SE (SE requirements and architecture are derived from system simulations so
that validation is inherent in the development process)
Product Domain. Measures that depend on the type or development environment of the product:
Software-Intensive (project is primarily software development using existing or off-the-shelf
hardware)
Hardware-Intensive (project requires significant hardware development and integration)
Complex (project uses numerous highly interacting elements of hardware and software)
Regulatory Environment (the product must be certified by one or more regulatory authorities prior
to operation)
Commercial (the product will be offered for commercial sale)
Government (the product will be offered primarily for government sale)
In This Chapter
• What is Technical Debt?
• Why is Technical Debt incurred?
• How to Avoid and Measure Technical Debt!
Imagine you have been managing a project for eight months. You have met all of your milestones,
successfully completed all reviews, and your program measures are all positive, with both CPI and SPI
greater than 1.0, risks appropriately treated, and you have hardly touched your management reserve or
contingency schedule.
As you lean back, reflecting on your program management skills and your future promotion, in walks your
technical lead, who mentions that there are still a few issues to clean up before you can go to production.
Snapped back to reality and sitting up straight, you ask, “What do you mean? The schedule says we’re done
and ready to build!”
What hit this program manager is Technical Debt. Technical Debt is the promise to complete a technical
shortcoming in the future while declaring it “complete enough” today. And he/she is not the first program
manager to receive this surprise when all program measures look good.
In this chapter, you will learn what comprises technical debt, how it is incurred, how you can identify and
measure it, and techniques for debt management.
The challenge for the program manager is that the program management measures don’t typically track
technical debt, because the schedule says it is “100% – Done.”
Technical Debt: The promise to complete a technical shortcoming in the future while
declaring it complete today.
The schedule has as a task “deliver a CDRL,” typically information in some form. The CDRL item is
delivered, so the Control Account Manager (CAM) takes 100% credit and moves to the next task. The
schedule was not developed to have a separate task for CDRL approval with duration and budget.
Depending on your customer, that CDRL may require rework for any of a number of reasons, such as
missing or incorrect content. That unplanned rework is technical debt.
The drawing package is to be completed and submitted four weeks prior to a program review. In order to
meet the schedule, the drawing package is declared complete and is submitted prior to the formal review.
While the customer is reviewing the drawing package, the work continues, trying to clean up the loose
ends prior to the formal review. The drawing package was declared fully complete, within schedule, yet
there is no formal task or budget monitoring the remaining efforts.
Just like personal debt, these unaccounted for tasks begin to snowball, and the program
manager uncovers a significant issue one day that can no longer be ignored.
People want to believe that things are not so bad and that they can recover without impact. This
is rarely the case, for many reasons. Your job is to use the proper techniques to prevent, avoid,
and detect Technical Debt in a timely manner and to take corrective action to minimize impact.
These are hard to detect and to plan for in advance. Since the product status is declared as complete, it
would have been baselined and would require a Problem Report to update. You are able to measure
incomplete work once the Problem Report is written.
Work your way through each of the classes of unscheduled tasks. You will notice that this
may add duration to your schedule. Sorry, but this is reality. Some of the rework will not be
required, but other tasks may take longer than scheduled. However, having the work properly
scheduled gives you the opportunity to staff the right resources and to have adequate budget to
complete the required tasks.
Another advantage of explicitly scheduling these tasks is that you will be more likely to allocate an
appropriate budget in a separate work package. This allows you to take the EVMS credit for the initial
task completion and to appropriately schedule follow-on tasks.6
Each project and situation is different, but here is an example of a measure for more formal situations,
with a formal configuration management system: Measuring age and state of problem reports (PR). You
can typically extract this data electronically from whatever tool is used to store the problem reports.
Most tools move the problem reports through a series of states. Typical states are: Open, Analysis,
CCB Review, Assigned, Verification, and Closed. It is important to close the problem reports as soon
as possible to reduce cost to the program. It is also important to make sure that all problem reports
are analyzed as soon as possible. Any problem report in an Open or Analysis state represents an
uncharacterized risk to the program. An example of a problem report aging graph is shown in Figure
4-1. All problem reports Open over one month and all PRs in Analysis over one month should be
reviewed; all non-Closed problem reports should be reviewed after three months.
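The review rules above can be sketched in a few lines of Python. The states and thresholds follow the text; the problem-report records, dates, and data layout are hypothetical stand-ins for what a real project would extract from its problem-report tool:

```python
from datetime import date

# Hypothetical problem-report records: (id, state, date_opened).
# States follow the typical flow named above: Open, Analysis,
# CCB Review, Assigned, Verification, Closed.
problem_reports = [
    ("PR-101", "Open", date(2015, 1, 5)),
    ("PR-102", "Analysis", date(2015, 2, 20)),
    ("PR-103", "Verification", date(2014, 11, 1)),
    ("PR-104", "Closed", date(2015, 3, 1)),
]

def months_open(opened, today):
    """Approximate age of a report in whole months."""
    return (today.year - opened.year) * 12 + (today.month - opened.month)

def flag_for_review(reports, today):
    """Apply the review thresholds from the text: Open or Analysis
    over one month, and any non-Closed report over three months."""
    flagged = []
    for pr_id, state, opened in reports:
        age = months_open(opened, today)
        if state in ("Open", "Analysis") and age > 1:
            flagged.append((pr_id, state, age))
        elif state != "Closed" and age > 3:
            flagged.append((pr_id, state, age))
    return flagged

today = date(2015, 3, 21)
for pr_id, state, age in flag_for_review(problem_reports, today):
    print(f"{pr_id}: {state}, {age} months open -> review")
```

With the sample data, PR-101 is flagged as Open beyond one month and PR-103 as non-Closed beyond three months.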
7 Friedman, George and Sage, Andrew P., “Case Studies of Systems Engineering and Management in Systems
Acquisition,” Systems Engineering, Vol. 7, No. 1, 2004, p. 90.
[Figure 4-1: Problem report aging – number of problem reports in each state (Open, Assign, Analyze, Verify, CCB Review, Closed) by months open (1 through 6+).]
Industry data associated with the relative cost of delaying the correction of an error or
making a change is shown in Table 4-1 and also in Figure 4-2. The cost of making the
change/correction escalates the longer you delay. If it would take one unit to make the
change/correction in the Design phase, then it would take 40 units to make the same change/
correction after the System Test phase.
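The escalation can be expressed as a simple lookup. Only the 1-unit Design and 40-unit after-System-Test factors come from the text; the intermediate multipliers below are illustrative assumptions, not the values from Table 4-1:

```python
# Relative cost to correct an error, indexed by the phase in which the
# correction is made. Design (1x) and After System Test (40x) come from
# the text; the intermediate multipliers are assumed for illustration.
RELATIVE_COST = {
    "Design": 1,
    "Implementation": 5,       # assumed for illustration
    "Integration": 10,         # assumed for illustration
    "After System Test": 40,
}

def cost_of_correction(phase, design_cost_units=1.0):
    """Escalated cost, in Design-phase units, of deferring a fix."""
    return design_cost_units * RELATIVE_COST[phase]

# Deferring a one-unit Design-phase fix until after System Test:
print(cost_of_correction("After System Test"))  # 40.0
```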
[Figure 4-2: Cumulative percentage life cycle cost against time.]
8 Mike Phillips, “V&V Principles”, Verification and Validation Summit 2010. http://www.faa.gov/about/office_org/
headquarters_offices/ang/offices/tc/library/v&vsummit2010/Briefings/V%20and%20V%20Principles%20-%20
M%20Phillips.pptx (accessed 5 August 2014).
The program/organization needs to define what “earning rules” will be used to take credit for tasks.
There are several commonly used methods, such as the 50/50 method, which allows 50% credit when
you start a task and 50% when it is completed. Others include 20/80, 25/75, and 0/100. It is important
to select an earning method and document appropriate guidance in a program CAM Earned Value Guidance
document that is available to all CAMs; this also facilitates knowledge transfer if a change in
personnel occurs. This CAM Earned Value Guidance document should emphasize the importance of
taking earned value credit correctly and ethically to avoid technical debt.
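A minimal sketch of how these earning rules translate into credited value; the method names (50/50, 20/80, 25/75, 0/100) are those listed above, while the task budget and status are hypothetical:

```python
# Earned-value "earning rules": the fraction of budgeted value credited
# at task start vs. task completion.
EARNING_RULES = {
    "50/50": (0.50, 0.50),
    "20/80": (0.20, 0.80),
    "25/75": (0.25, 0.75),
    "0/100": (0.00, 1.00),
}

def earned_value(budget, started, completed, method="50/50"):
    """Credit earned for one task under the chosen earning rule."""
    start_share, finish_share = EARNING_RULES[method]
    credit = 0.0
    if started:
        credit += start_share * budget
    if completed:
        credit += finish_share * budget
    return credit

# A started-but-unfinished task earns only the start share; taking
# full credit anyway is how technical debt creeps into the baseline.
print(earned_value(1000, started=True, completed=False, method="50/50"))  # 500.0
print(earned_value(1000, started=True, completed=False, method="0/100"))  # 0.0
```

Note that 0/100 is the most conservative rule, since no credit is taken until the task is actually complete.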
As part of the training and the program CAM Earned Value Guidance document, it is important to ensure
the quality of the work product as well. The CAM is responsible for ensuring the technical quality
and completeness of the work product; otherwise the result is Technical Debt, which will need to be paid
with interest in the future.
Now it is up to you!
In This Chapter
• Identifying measures that help manage the risks in different life cycle phases and for different
types of programs
Beyond these typical, important project management measures lurks the risk of technical debt.
In the last chapter we examined what can happen if technical debt accumulates from late
starts, incomplete work, deferred decisions, and issues that arise. In this chapter we identify
specific measures for managing technical risk to help avoid the accumulation of technical debt. These
are organized according to the “quick-start” program descriptors in Section 3 for easier reference. Each
measure in the Measurement Guide Table that follows is listed, along with a “how to use” set of steps and
references for further information. “Shading” indicates that the measure may be useful for a program with
the shaded characteristics.
Schedule Measures

[Project Considerations and Applicable Factors table: shading indicates the factors for which this measure applies.]
Measures: Late Starts (# or % of events); Late Completions (# or %).

Delayed starts are leading indicators for delayed finishes. However, be wary of
starting tasks when necessary data is not available, is incomplete, or is likely to
change, because rework of dependent input is likely; “don’t be a slave to schedule.”

For effective feedback control, the measurement delay should be
no greater than the measurement frequency. In this case of weekly schedule
measurement, the data should be available before the next week begins.

[Figure: Weekly Inchstones. Upper left: cumulative tracking of planned and actual task completions provides an overall view. Lower left: the weekly difference of planned vs. actual starts and stops provides immediate visibility of schedule compliance.]
For all schedule-related measures it is important to find the root cause of what is
late so that the program critical path is not jeopardized and rework is not incurred by
immature or incomplete work.
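As an illustration, counting late starts and late finishes from a task list might look like the following Python sketch; the schedule data and field layout are hypothetical:

```python
from datetime import date

# Hypothetical inch-stone schedule: task -> (planned_start, planned_finish,
# actual_start, actual_finish); None means the event has not happened yet.
tasks = {
    "T1": (date(2015, 5, 1), date(2015, 5, 8), date(2015, 5, 1), date(2015, 5, 12)),
    "T2": (date(2015, 5, 4), date(2015, 5, 11), None, None),
    "T3": (date(2015, 5, 6), date(2015, 5, 13), date(2015, 5, 6), date(2015, 5, 13)),
}

def late_events(tasks, as_of):
    """Count late starts and late finishes as of the status date.
    A start is late if it happened after its planned date, or has not
    happened and the planned date has passed; likewise for finishes."""
    late_starts, late_finishes = 0, 0
    for planned_start, planned_finish, actual_start, actual_finish in tasks.values():
        if (actual_start is None and as_of > planned_start) or \
           (actual_start is not None and actual_start > planned_start):
            late_starts += 1
        if (actual_finish is None and as_of > planned_finish) or \
           (actual_finish is not None and actual_finish > planned_finish):
            late_finishes += 1
    return late_starts, late_finishes

starts, finishes = late_events(tasks, as_of=date(2015, 5, 15))
print(f"Late starts: {starts}, late finishes: {finishes}")
```

Run weekly, the two counts feed the planned-vs-actual comparison described above.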
[Project Considerations and Applicable Factors table: shading indicates the factors for which this measure applies.]

[Figure: Number of peer reviews by aging (>10, >30, >60, >90 days) and by program phase (Phases 1–4).]
[Project Considerations and Applicable Factors table: shading indicates the factors for which this measure applies.]
Measures: Measure the reduction in uncertainty (% or number) of each key technical parameter over time, and compare with the uncertainty value needed for making decisions; may be combined with technical performance measures.

Unresolved uncertainty carries technical debt into the decision-making process. The
goal is not to eliminate the uncertainty, but to reduce it to a level at which a decision
can be made with acceptable risk. This applies to individual technical parameters as
well as to the results of technical reviews.

Trend lines similar to technical performance measures 10 make the uncertainty visible
compared to the needed value. In the example, the uncertainty of Parameter 1 must
be reduced below the decision threshold prior to making the decision.

[Figure: Parameter 1 uncertainty and decision threshold (%) vs. time after project start (months), with the trend crossing the threshold at the decision point.]
For example, the trend line for reduction in Parameter 1 (e.g., weight or power)
uncertainty crosses the decision threshold of 3% at month 10; decisions based on
Parameter 1 prior to the point are at higher risk for being revisited once Parameter 1 is
known with more accuracy. Reduction in uncertainty can arise from developmental
testing or through more accurate analysis of the system.
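One simple way to produce such a trend projection is to fit a straight line through the uncertainty measurements and solve for the threshold crossing. The monthly data below are hypothetical; the 3% threshold matches the Parameter 1 example in the text:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

def months_to_threshold(months, uncertainty_pct, threshold_pct):
    """Project the month at which the fitted trend reaches the
    decision threshold. Returns None if uncertainty is not shrinking."""
    slope, intercept = linear_fit(months, uncertainty_pct)
    if slope >= 0:
        return None
    return (threshold_pct - intercept) / slope

months = [0, 2, 4, 6, 8]
uncertainty = [6.0, 5.4, 4.8, 4.2, 3.6]   # hypothetical measurements
print(months_to_threshold(months, uncertainty, threshold_pct=3.0))  # about month 10
```

With this sample data the fitted trend reaches the 3% threshold at about month 10; decisions made earlier carry the higher risk described above.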
[Project Considerations and Applicable Factors table: shading indicates the factors for which this measure applies.]

Measures: Scope Change (number or volatility of requirements, # or %); identify requirements (#) at each architectural level or for each subsystem; determine the number of requirements changed each month; identify unexpected changes; evaluate risk based on program phase.

It is not uncommon to have some requirements changes during a project. Project
managers need to be aware of additions or modifications to requirements that
(a) affect contractual agreements or (b) change the required effort or resources
necessary to meet project obligations (cost, schedule, people, laboratories).

Trend analyses are useful for tracking scope changes. Action thresholds for change
may decrease over time as the design matures and the impact of requirements
changes becomes greater. Prior to a system requirements review (SRR) the volatility is
expected to be high, but must settle down ahead of the SRR. Failing to move the SRR
will incur technical debt and likely rework. Once the critical design review (CDR) takes
place, most subsequent changes will increase project costs and lengthen schedules. 11
[Figure: Requirements volatility trend – deleted, revised, and total requirements changes (%) by month.]
11 Figure from INCOSE, “Systems Engineering Leading Indicators Guide,” v2.0, January 29,
2010, section 3.1, http://www.incose.org/ProductsPubs/pdf/SELI-Guide-Rev2-01292010-
Industry.pdf (accessed June 2014).
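The monthly volatility computation can be sketched as follows, assuming a hypothetical change log exported from a requirements management tool:

```python
from collections import Counter

# Hypothetical baseline size and change log: (month, change_type),
# where change_type is "added", "revised", or "deleted".
baseline_count = 400
change_log = [
    ("2015-01", "revised"), ("2015-01", "added"), ("2015-01", "deleted"),
    ("2015-02", "revised"), ("2015-02", "revised"),
    ("2015-03", "added"),
]

def monthly_volatility(change_log, baseline_count):
    """Percent of baseline requirements changed in each month."""
    per_month = Counter(month for month, _ in change_log)
    return {month: 100.0 * count / baseline_count
            for month, count in sorted(per_month.items())}

for month, pct in monthly_volatility(change_log, baseline_count).items():
    print(f"{month}: {pct:.2f}% of baseline changed")
```

Comparing each month's percentage against a phase-dependent action threshold gives the trend analysis described above.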
[Project Considerations and Applicable Factors table: shading indicates the factors for which this measure applies.]

Measure: Technology Readiness Level (TRL) 12 or technical maturity of solution elements.
• Evaluate individual solution elements
• Identify program risks based on technical maturity or TRLs

Technical maturity (or technology readiness) level identifies the technical debt inherent in the elements of the solution based on their development status (e.g., in-production, prototype, variation on a product family). Most projects require at least TRL 6 (prototype) before incorporating an item in a development project.

A quick way to evaluate the state of the program is to create a histogram showing how many items are in a given maturity category so that appropriate management oversight can be provided to manage the technical risk. In the example, management attention should be focused on the elements with TRL < 7 and on developing contingency plans in case any element does not achieve full maturity according to a development plan.
• Monitor for change

[Figure: histogram of the number of subsystems or elements at each technical maturity level (TRL 1-9), with a minimum-maturity marker. Note: TRL 9 is fully mature (demonstrated in actual operation).]
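The TRL histogram can be produced directly from element assessments. The sketch below is illustrative; the element names and TRL values are hypothetical, not from the Guide.

```python
# Illustrative sketch: bin solution elements by TRL and flag those below the
# minimum maturity (TRL 6) that need management attention and contingency plans.
from collections import Counter

# Hypothetical element-to-TRL assessments.
elements = {"antenna": 9, "receiver": 6, "power_supply": 8, "new_asic": 4}
histogram = Counter(elements.values())          # TRL -> count of elements
immature = sorted(e for e, trl in elements.items() if trl < 6)

print(dict(histogram))
print("Below TRL 6:", immature)
```

Elements in the `immature` list are the ones warranting contingency plans.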
Measure: Solution satisfies requirements (% compliant) 13
• At each technical review, evaluate the ability of the conceptual/prototype solution to satisfy each requirement. Once verification begins, assess the solution as "verified" based on the verification results.
• Calculate % of requirements satisfied by the solution.

The key technical progress measure for development programs is an evaluation of the degree to which the design satisfies the requirements. Any non-compliance is an issue that must be corrected and indicates a need for rework. Unknown compliance is the risk of a future discovery of non-compliance and is therefore a form of technical debt based on uncertainty.

This measure can be represented as a time-dependent bar chart showing progress of technical compliance until all requirements are verified.
[Figure: stacked bar chart of % requirements satisfied (verified, assessed as compliant, unknown compliance, not compliant by design) against total requirements, across SRR, SFR, PDR, CDR, TRR, FCA, and PCA.]

SRR: system requirements review; SFR: system functional review; PDR: preliminary design review; CDR: critical design review; TRR: test readiness review; FCA: functional configuration audit; PCA: physical configuration audit.

13 Carson, Ronald S., and Bojan Zlicaric, "Using Performance-Based Earned Value to Measure Systems Engineering Effectiveness," Proceedings of INCOSE 2008 (Utrecht, Netherlands).
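A compliance summary like the stacked bars can be tallied from per-requirement status records. This is a minimal sketch; the four status labels are assumptions standing in for the chart's categories.

```python
# Illustrative sketch: percentage of requirements in each compliance category
# (verified / assessed compliant / unknown / non-compliant by design).
def compliance_summary(statuses):
    """Return the percentage of requirements in each compliance category."""
    total = len(statuses)
    cats = ("verified", "compliant", "unknown", "noncompliant")
    return {c: round(100.0 * statuses.count(c) / total, 1) for c in cats}

# Hypothetical status list for 100 requirements.
statuses = (["verified"] * 50 + ["compliant"] * 30
            + ["unknown"] * 15 + ["noncompliant"] * 5)
print(compliance_summary(statuses))
```

The "unknown" and "noncompliant" fractions together represent the open technical debt.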
• Track the progress of selected technical parameters compared with required values to ensure adequate progress is being achieved 14

Technical performance measures can be applied for selected technical parameters to ensure adequate progress is being achieved. Time-based plots of estimated or demonstrated performance are compared with required values (minimum or maximum) to help manage the risk. This is a quantitative form of a risk mitigation plan. A plan line with decision bounds should be established early in the program, with required progress toward the threshold value (e.g., "not to exceed"). Failure to achieve the required progress converts the risk to an issue and may require a design change to ensure technical compliance.

[Figure: TPM tracking chart showing the planned value profile, demonstrated values, achievement to date, current estimate, demonstrated and predicted variances, and an action team engaged to bring the parameter back into spec relative to the specified "not to exceed" value.]

14 INCOSE Systems Engineering Handbook, v3.2.2, INCOSE-TP-2003-002-03.2.2, October 2011.
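The status logic behind a "not to exceed" TPM can be sketched as below. This is illustrative only; the classification strings and numeric values are assumptions.

```python
# Illustrative sketch: compare a TPM's current estimate against its planned
# profile and "not to exceed" limit. Exceeding the limit converts the risk
# to an issue; exceeding the plan line warrants corrective action.
def tpm_status(current_estimate, planned_value, not_to_exceed):
    """Classify a 'smaller is better' TPM (e.g., mass, power, size)."""
    if current_estimate > not_to_exceed:
        return "issue: limit exceeded"
    if current_estimate > planned_value:
        return "warning: behind plan"
    return "on plan"

# Hypothetical values: estimate 4.2 vs. plan 4.0 and limit 5.0.
print(tpm_status(4.2, 4.0, 5.0))
```

Evaluated at each reporting period, this gives the same red/yellow/green reading a TPM chart provides at a glance.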
Measure: Counts and stability of elements of the system 15, for example:
• Number of external systems and stakeholder or program interfaces (#)
• Number of internal interfaces (#)
• Number of solution elements (#)
• Count the item of interest (#)
• Monitor for changes (%)

Database tools enable managers to more easily count elements of the solution, whether requirements, interfaces, or solution elements (subsystems, boxes, wires, etc.). While the absolute numbers may not be critical, sudden growth can indicate scope change or increased complexity and development risk.

Visibility of these changes is provided by simple charts of counts vs. time. Project managers should monitor these measures for unexpected changes once the design should be stable. For example, "External Systems" should be stable at the System Requirements Review, and "Elements" and "Interfaces" should be stable at the Preliminary Design Review. In the graph none of the three conditions is satisfied, so the project manager should investigate root causes and take corrective action to avoid additional technical debt from the changing design. Increasing complexity, based on increasing element and interface counts, may also lead to more risk during the integration phase after the critical design review.
[Figure: counts of elements, interfaces, and external systems vs. time (months), with SRR and PDR marked; all three counts continue to grow after the reviews.]
15 Carson, Ronald, and Paul Kohl, "New Opportunities for Architecture Measurement," Proceedings of INCOSE 2013 (Philadelphia, PA).
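Period-over-period growth in these counts can be flagged automatically. The sketch below is illustrative; the counts and the 5% alert threshold are assumptions, not values from the Guide.

```python
# Illustrative sketch: flag sudden growth in element/interface counts between
# reporting periods once the design should be stable (e.g., after PDR).
def growth_pct(prev, curr):
    """Percent change in a count between two reporting periods."""
    return 100.0 * (curr - prev) / prev

# Hypothetical (previous, current) counts for one reporting period.
counts = {"elements": (30, 36), "interfaces": (40, 41), "external_systems": (8, 8)}
for name, (prev, curr) in counts.items():
    g = growth_pct(prev, curr)
    flag = "INVESTIGATE" if abs(g) > 5 else "stable"   # 5% threshold is assumed
    print(f"{name}: {g:+.1f}% ({flag})")
```

Here the 20% jump in elements after the design baseline would prompt the root-cause investigation the text describes.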
• Compare measured reliability and availability vs. predicted values (%)
• Compare measured mean repair time vs. predicted (minutes)

Once development is nearly complete, the project can begin to accumulate data on operational performance for reliability and system availability. The Verification phase provides a "first look" at these system performance measures, which have significant consequences during the operations and support phase.

A time-dependent line chart can be used to compare current performance vs. the operational need or requirement. The need for design or other changes can become apparent if deficiencies are other than initial "growing pains." In the example below, the implemented design is failing to meet its reliability requirement even as the system moves into operation, and root cause investigation may be required to identify and correct the deficiency.
[Figure: reliability (probability) vs. time (months), showing actual reliability remaining below the reliability requirement through Verification and into Operation.]
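A simple check of the demonstrated-reliability trend against the requirement can be sketched as below. The requirement value, monthly actuals, and the three-month persistence criterion are all illustrative assumptions.

```python
# Illustrative sketch: compare monthly demonstrated reliability against the
# requirement; a persistent shortfall (not just early "growing pains")
# triggers root-cause investigation.
REQUIREMENT = 0.90
actuals = [0.78, 0.82, 0.85, 0.86, 0.86]   # hypothetical monthly values

shortfall_months = [i + 1 for i, r in enumerate(actuals) if r < REQUIREMENT]
if len(shortfall_months) >= 3:   # persistence criterion is an assumption
    print("Persistent reliability shortfall in months:", shortfall_months)
```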
Measure: Defect containment 16
• Errors introduced in one phase and identified and corrected in a later phase (#)

Technical debt in the form of rework accumulates when errors in technical data are not identified and corrected before the data is used by other groups (e.g., Requirements for Design and Verification, Design for Build and Verification, Trade-off Analyses for Design). The longer the delay in discovering the error, the larger the cost of the rework. Histograms of defect containment are a valid way to display this information (defects introduced by phase vs. the phase in which they are discovered and corrected).

[Figure: histogram of the number of defects by phase (Pre-Design, Pre-Build, Pre-Verify, Post-Verify), broken out by the phase in which each defect was introduced (e.g., Build, Verify).]
This measure can be used within a project for additional spirals, increments, or agile
scrums so that more rigor is applied in finding defects prior to propagation. The
measure is also useful for organizational and system process improvement so that
error propagation can be reduced on successive projects.
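The introduced-vs.-detected data behind such a histogram can be kept as a containment matrix. This sketch is illustrative; the phase names and defect records are hypothetical.

```python
# Illustrative sketch: a defect-containment matrix counting defects by the
# phase introduced vs. the phase detected; off-diagonal cells are escapes
# (defects that propagated to a later phase and so cost more to rework).
from collections import defaultdict

PHASES = ["Requirements", "Design", "Build", "Verify"]
# Hypothetical defect records: (phase_introduced, phase_detected).
defects = [("Requirements", "Requirements"), ("Requirements", "Verify"),
           ("Design", "Build"), ("Design", "Design"), ("Build", "Verify")]

matrix = defaultdict(int)
for introduced, detected in defects:
    matrix[(introduced, detected)] += 1

escapes = sum(n for (i, d), n in matrix.items()
              if PHASES.index(d) > PHASES.index(i))
print(f"Escaped defects (rework drivers): {escapes} of {len(defects)}")
```

Reducing the escape count on successive increments is exactly the process-improvement use the text describes.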
In This Chapter
• Project Case Study with example use of technical measures
After carefully reviewing the SOW, the team came together to lay out the Integrated Master Schedule (IMS). As usual, the initial schedule came out at nine months instead of the needed six months. George and Sara (SE lead) worked with the other engineers to fit the IMS into a required timeframe acceptable to all.

George, who had previously been burned by technical debt, asked the team to step back from the IMS to see if all needed tasks were accounted for. Sara noted that the deliverable documentation frequently needed to be updated after initial, formal delivery before final approval was received. George added rework and resubmission tasks to the IMS, but was careful not to link the added tasks to non-dependent tasks that would push out the schedule. Judy, from HW, noted that the required peer reviews of drawings didn't allow for rework and verification either. While acknowledging that it is a little more difficult to add to the IMS without pushing the date out, the team justified their changes by asserting the expectation that an improved widget would be built right the first time. Now comfortable with the IMS, the team baselined the schedule.
Due to the tight schedule, Sara began working the requirements from the SOW almost immediately. The
requirements were not extensive, but the team elected to make use of their requirements management tool
to manage and track requirements. After a check of the requirements’ goodness (Correct, Complete, Clear,
Consistent, Verifiable, Traceable, Feasible, Modular and Design Independent 18), the requirements were
added to the tool and baselined.
While the requirements were being worked, George completed his plan to manage the program. He
created the appropriate cost accounts and budgets for each of the expected tasks. In addition, George
created a Risk plan and spent time with the team identifying risks and developing mitigation plans. During
this activity George noted that they would need to make a final technical decision on size based on some power dissipation analyses that had only just begun. He identified a date by which the power dissipation had to be known, to within ±5%, and added that into the IMS along with a tracking chart.

18 INCOSE Guide for Writing Requirements, INCOSE-TP-2010-006-01, 17 April 2012 (accessed 17 February 2015).
George was expected to report program status to management monthly. George had the freedom to tailor the standard program measures and elected to use the Schedule Performance Index (SPI) and Cost Performance Index (CPI) measures, along with the Risk matrix. These would provide insight into overall program health, but George wanted visibility into the technical health as well.
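The two earned-value indices George selected are standard ratios and can be sketched as below. The dollar figures are hypothetical.

```python
# Illustrative sketch: the standard earned-value indices George reports.
def spi(earned_value, planned_value):
    """Schedule Performance Index: < 1.0 means behind schedule."""
    return earned_value / planned_value

def cpi(earned_value, actual_cost):
    """Cost Performance Index: < 1.0 means over budget."""
    return earned_value / actual_cost

# Hypothetical cumulative values ($K): behind schedule but under budget.
ev, pv, ac = 180.0, 200.0, 170.0
print(f"SPI={spi(ev, pv):.2f}  CPI={cpi(ev, ac):.2f}")
```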
Working with Sara and Andy (HW Lead), the team discussed what critical few areas needed to be measured to be successful. Schedule came up first; it was one of the driving requirements. George pointed out that SPI would take care of that. Sara countered that the schedule was too short to depend on a monthly SPI measure: there wouldn't be enough time to take corrective action by the time an issue was noticed. Sara proposed an "Inchstones" IMS measure to track late starts and stops on a weekly basis. It could be generated automatically from the IMS and dropped into a spreadsheet with minimal effort, allowing for weekly insight into schedule performance. Sara explained that it would be easier to recover a day than to recover a week. The measure was new to George, but meeting the commitment date was critical to the project's success, so he accepted the measure.
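The inchstones measure Sara describes amounts to comparing planned vs. actual task dates from an IMS export. This is a minimal sketch under stated assumptions: the task names, dates, and field layout are hypothetical, not a real IMS schema.

```python
# Illustrative sketch of the "inchstones" measure: identify late task starts
# from an IMS export. A task is late if it started after its planned date,
# or has not started (actual is None) and its planned date has passed.
from datetime import date

# Hypothetical IMS rows: (task, planned_start, actual_start).
tasks = [
    ("Req analysis",   date(2015, 3, 27), date(2015, 3, 27)),
    ("HW trade study", date(2015, 3, 30), None),
    ("Power model",    date(2015, 4, 1),  date(2015, 4, 6)),
]
today = date(2015, 4, 3)

late_starts = [t for t, plan, actual in tasks
               if (actual is None and plan < today) or (actual and actual > plan)]
print("Late starts:", late_starts)
```

The same test applied to planned vs. actual finish dates yields the late stops plotted in the weekly charts.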
Because the main objective was a 50% reduction in size, size was added to the technical performance measures (TPM) for monthly reporting.
Andy pointed out that, due to the tight schedule and long lead times for HW components, requirements volatility was also critical to manage. Without further discussion, the team agreed to include this measure on a monthly basis.
Sara, aware that schedule pressure tends to shorten the time allotted for verification of the design, nominated one more measure to pick up later in the life cycle: Requirements Verification percentage. George and the team accepted the addition of a Requirements Verification measure on a weekly basis, and decided that these would be their tracked technical measures – the critical few that would provide the technical insight needed to be successful.
With the project planning complete, and with programmatic and project measures selected, the project
team began to execute their plan.
The first step for the measurement effort was creating baseline charts for Program Risks, Power Dissipation
Uncertainty, Requirements Volatility, and Size TPM.
[Figure: 5x5 risk matrix (probability vs. impact) showing Risk 1 at row D, Risk 3 at row C, and Risk 2 at row B; low-, medium-, and high-risk zones shaded.]
Risk #1: Long lead times for components will delay integration and test. Due to the tight schedule, long-lead parts could drive schedule and cost. Mitigate by allowing component selection if commercially available.

Risk #2: New software drivers will be required for new components. Schedule and budget only allow for components with compatible drivers. Mitigate through software review and approval before any new components can be selected.

Risk #3: The reduced-size widget will not meet the power dissipation threshold. Reducing the size of the widget package will create power dissipation capability risk. Mitigate by monitoring the parameter uncertainty measure.
Figure 6-1. Program Risk Summary. Initial program risks were identified.
[Figure: Power Dissipation Uncertainty (%) vs. time after project start (months), with the decision threshold and decision point marked.]
Figure 6-2. Decision Uncertainty Measurement. Uncertainty needs to be reduced below 3% in 3 months.
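George's decision-uncertainty tracking reduces to comparing the estimate's uncertainty against the threshold each month. The monthly values below are hypothetical; only the 3% threshold comes from the case study.

```python
# Illustrative sketch: track parameter uncertainty against the decision
# threshold (the power-dissipation estimate must be known within 3%
# before the size decision can be made).
THRESHOLD_PCT = 3.0
monthly_uncertainty = [5.0, 4.0, 2.8]   # hypothetical % uncertainty by month

statuses = []
for month, u in enumerate(monthly_uncertainty, start=1):
    ok = u <= THRESHOLD_PCT
    statuses.append(ok)
    print(f"Month {month}: ±{u}% -> {'decision possible' if ok else 'keep reducing'}")
```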
[Figure: Requirements Volatility baseline - % requirements change per month, with PDR and CDR marked.]
[Figure: Size TPM - actual size (cm) vs. plan line, threshold, and upper threshold over seven months.]
Figure 6-4. Size TPM. The plan is to achieve the threshold requirement in 3 months.
Week 2
After the first week, Sara prepared the weekly technical measures charts – Inchstones and Late Starts and
Stops.
[Figure: Weekly Inchstones (cumulative) - planned vs. actual starts and stops (cumulative number of tasks) by week, 3/27/2015 through 9/25/2015.]
Figure 6-5. Week 2 stop/start inchstone status indicates the baseline plan.
The first thing George noticed was that the project was already behind on scheduled starts. Checking further, George learned that Andy was only available 25% of the time and the electrical engineer was still not available. This prompted George to visit engineering and get a commitment to provide the needed resources immediately.
[Figure: Week 2 cumulative start/stop variances - four starts behind plan.]
Week 3
The measures showed no further erosion of late starts now that the HW resources had been provided, but
the late stops increased by one due to the prior late starts.
Figure 6-7. Week 3 stop/start inchstone status indicates deficient starts.
Figure 6-8. Week 3 stop/start cumulative variances indicate four late starts and two late completions.
Week 4
The measures showed no further erosion of late starts but the project was still behind schedule. This
prompted George to request the HW team to either put in additional effort or to use an alternative resource
to get back on schedule. Andy indicated that his tasks were going well and he should be able to catch up in
the next week.
Figure 6-9. Week 4 stop/start inchstone status indicates deficient starts and completions.
Figure 6-10. Week 4 stop/start cumulative variances indicate four late starts and two late completions.
Week 5
Andy was able to pull in all but one of the tasks so the late starts showed improvements. The late stops
continued to increase due to the previous late starts.
Figure 6-11. Week 5 stop/start inchstone status indicates deficient starts and completions.
Figure 6-13. Cumulative SPI & CPI indicates more resources are needed
because the project is behind schedule and under budget.
Risk #1 decreased in probability due to less uncertainty. [Figure: updated risk matrix (probability vs. impact).]
Hardware engineering was making progress on meeting the power dissipation need, but the chart data prompted a discussion with engineering about the minimal progress to date within the short schedule.
[Figure: Power Dissipation Uncertainty (%) vs. time, tracking toward the decision threshold.]
The requirements volatility was within plan as the initial requirements efforts were completed.

[Figure: Requirements Volatility - % requirements change by month, with PDR and CDR marked.]
The projected widget size was making progress toward the goal but seemed to be lagging due to the late hardware resource assignments.
[Figure: Widget Size TPM - actual size 3.9 cm vs. plan line, threshold, and upper threshold.]
Figure 6-18. Requirements compliance status indicates 20% of requirements have not yet
been assessed, which increases risk of future noncompliance (a form of technical debt).
With George’s weekly insight and management of the project from a technical perspective, the project is
green across the board for program management measures. George took a deep breath and knew he had
successfully dodged a bullet. Maybe there was something to these technical SE measures?
Week 8
Technical measures have been looking good week to week with the entire team reviewing measures and
agreeing to corrective actions. Schedule starts have even gotten ahead of plan this week.
Figure 6-19. Week 8 stop/start inchstone status indicates recovery of starts and completions.
Figure 6-20. Cumulative SPI & CPI indicates some recovery of schedule
while still under budget.
The Power Dissipation risk was reduced in probability as lower-power components were identified. This is also reflected in the progress shown in the Power Dissipation chart.
Figure 6-22. Decision Uncertainty Measurement is on plan to achieve the decision point below
3% in three months.
[Figure: Requirements Volatility remains within plan, with PDR and CDR marked.]
The technical concern this month is that Size TPM is behind the projected glide path.
[Figure: Size TPM - actual size 3.6 cm, behind the plan line.]
After discussions with Andy, it becomes clear that they have a problem part for which an acceptable reduced-size part hasn't been located.

[Figure: Requirements Compliance status chart.]
After George again completes the monthly program review, with green measures across the board, George's manager, Lee, follows up with finance to confirm status. Lee knows that this is normally the part of the life cycle where problems start to show up on programs (technical debt). Finance independently confirms that the program is on track.
Week 10
Figure 6-26. Week Ten stop/start inchstone status indicates starts and completions are on or ahead of plan.
Week 12
All weekly measures continue to be collected and reviewed. The Size TPM measure has also moved to weekly review to track the progress of the tiger team.
[Figure: Widget Size TPM, weekly tracking.]
Week 13
The focused effort of the tiger team is beginning to show significant progress toward meeting the size requirement this week.
Figure 6-28. Size TPM indicates recovery toward the threshold value.
Figure 6-29. Cumulative SPI & CPI indicates recovery of schedule while still under budget.
Figure 6-30. Program Risk Status indicates no status change.
Power Dissipation Uncertainty has been reduced to a level required for making decisions.
The uncertainty has been reduced to a level at which the technical decision can be made; the technical debt has been retired. This technical measure no longer needs to be tracked.

[Figure: Power Dissipation Uncertainty below the decision threshold.]
Figure 6-32. Requirements Volatility Status indicates volatility below allowable levels (good).
[Figure: Requirements Compliance status chart.]

Week 14

Due to the efforts of the tiger team, the Widget Size TPM has met the widget size requirement.

Figure 6-34. Week 14 stop/start inchstone status indicates starts and completions are ahead of plan.

Andy suggests that it is time to retire the TPM measure. The team identified an alternative method to perform the needed functionality, allowing smaller parts to be used and getting the Size TPM within the 50% window. Given that the uncertainty in the TPM estimate is less than the difference to the Upper Threshold value, the team agrees to retire the TPM measure to avoid the busy work and to provide focus on the measures potentially needing attention.
Figure 6-35. Size TPM indicates satisfaction of the size requirement.
Week 18
George receives a call requesting that additional features be added to the project. George hesitates and
remembers that requirements volatility is a key technical driver for his program. George responds with:
“Are you willing to add duration to the project to get these features?” Of course the answer is no, so
George sticks to his guns and suggests that these changes be added to a future project. The customer,
understanding the impact, agrees. Weekly inchstone status continues to show good progress.
Figure 6-36. Week 18 stop/start inchstone status indicates starts and completions are ahead of plan.
Week 20

The Requirements Verification Measure will ensure that the design fully meets requirements. (There was no value in starting to collect this measure prior to the verification phase of the project.) Unsuccessful tests indicate an issue that will need to be resolved (could be redesign, or revising the tests or requirements).

[Figure: Cumulative Late Starts/Stops, weekly.]
Figure 6-37. Week 20 stop/start inchstone status indicates starts and
completions are ahead of plan.
Week 21

The Requirements Verification measure is beginning to fall behind plan, possibly indicating some failures during verification. The team tries to assure George that there is nothing to worry about. However, George has been doing some reading lately to learn more about measurement, and he knows to trust the data first! George decides to take corrective action now to bring this project in on schedule.

Figure 6-39. Week 21 stop/start inchstone status indicates starts and completions are ahead of plan.

George decides to have daily standup meetings to review progress with verification. After the first week, it is clear to the team that there was lots of wishful thinking going on and help was needed. An additional resource was added.

[Figure: Requirements Verification progress chart.]
Week 22
Schedule Measures continue to be reviewed weekly, but only Requirements Verification is shown here for analysis.

[Figure: Requirements Verification, Requirements vs. Week (20-27); series: Unverified Reqts, Verified Reqts, Verification Plan]
Figure 6-41. Requirements Verification progress measures indicate that verification is recovering.
[Figure: Cumulative SPI and CPI index, Mar through Sep]
Figure 6-42. Cumulative SPI & CPI indicate good schedule and cost status.
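The plotted indices follow the standard earned-value definitions: SPI = EV/PV (earned value over planned value) and CPI = EV/AC (earned value over actual cost). A minimal sketch with illustrative values:

```python
def evm_indices(ev, pv, ac):
    """Cumulative Schedule and Cost Performance Indices.

    ev: earned value (budgeted cost of work performed)
    pv: planned value (budgeted cost of work scheduled)
    ac: actual cost of work performed
    """
    spi = ev / pv  # > 1.0 means ahead of schedule
    cpi = ev / ac  # > 1.0 means under budget
    return spi, cpi

spi, cpi = evm_indices(ev=105.0, pv=100.0, ac=102.0)
print(round(spi, 3), round(cpi, 3))  # -> 1.05 1.029
```

Values hovering just above 1.0, as in the chart, mean the project is slightly ahead of schedule and slightly under budget.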
[Figure: Requirements Volatility, % Requirements Change by month, with PDR and CDR milestones]
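Requirements volatility is typically computed as the requirements added, modified, and deleted in a period, expressed as a percentage of the baselined requirement count. A sketch under that assumption (the counts are illustrative):

```python
def requirements_volatility(added, modified, deleted, baseline_count):
    """Percent requirements change in a period relative to the baseline."""
    return 100.0 * (added + modified + deleted) / baseline_count

# Illustrative month: 3 added, 7 modified, 2 deleted, against 150 baselined.
print(requirements_volatility(added=3, modified=7, deleted=2, baseline_count=150))  # -> 8.0
```

High volatility after CDR would be a warning sign; the chart here shows volatility settling down as the baseline stabilizes.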
[Figure: Requirements Compliance, % Requirements Satisfied by Design; categories: Not Compliant, Unknown Compliance, Assessed as Compliant; by month, Mar through Sep]
Figure 6-45. Requirements compliance status indicates fewer requirements that have not yet been assessed and no non-compliances.
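The compliance stack-up in Figure 6-45 categorizes each requirement into one of three states and plots the resulting percentages. A sketch, assuming a simple per-requirement state list (the counts are illustrative):

```python
from collections import Counter

def compliance_percentages(assessments):
    """Roll up per-requirement compliance states into percentages.

    Each assessment is one of: 'compliant', 'unknown', 'not_compliant'.
    """
    counts = Counter(assessments)
    total = len(assessments)
    return {state: 100.0 * counts[state] / total
            for state in ("compliant", "unknown", "not_compliant")}

# Illustrative: 150 requirements, mostly assessed as compliant.
sample = ["compliant"] * 135 + ["unknown"] * 15
print(compliance_percentages(sample)["compliant"])  # -> 90.0
```

As assessments complete, the "unknown" share shrinks toward zero, which is the trend the figure shows.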
Week 23
Schedule Measures continue to be reviewed weekly but only Requirements Verification is shown here for analysis.

[Figure: Requirements Verification, Requirements vs. Week (20-27); series: Unverified Reqts, Verified Reqts, Verification Plan]
Week 24
The Requirements Verification Measure continues to make progress. Schedule continues to be ahead of plan.
[Figure: Weekly Inchstones (Cum), Number of Tasks; series: Planned Starts (Cum), Actual Starts (Cum), Planned Stops (Cum), Actual Stops (Cum)]
[Figure: Cumulative Late Starts/Stops, Number of Tasks by week]
[Figure: Requirements Verification, Requirements vs. Week (20-27); series: Unverified Reqts, Verified Reqts, Verification Plan]
Figure 6-48. Requirements Verification progress measures indicate that verification is on plan.
Week 25
The Requirements Verification Measure progress continues to be monitored to ensure that the project is
successfully completed.
[Figure: Requirements Verification, Requirements vs. Week (20-27); series: Unverified Reqts, Verified Reqts, Verification Plan]
Figure 6-49. Requirements Verification progress measures indicate that verification is ahead of plan.
Week 27
The Requirements Verification Measure reaches 100%. All other technical measures are on plan as well.
Discussions turn to de-staffing the project now that effort is nearly complete.
[Figure: Requirements Verification, Requirements vs. Week (20-27); series: Unverified Reqts, Verified Reqts, Verification Plan]
Figure 6-50. Requirements Verification progress measures indicate completion (all requirements have been successfully verified).
[Figure: Cumulative SPI and CPI index, Mar through Sep]
Figure 6-51. Cumulative SPI & CPI indicate schedule completion, under budget.
[Figure: Risk matrix, Probability (A-D) vs. Impact (1-5), with Low/Medium/High Risk regions]

Risk #2: New software drivers will be required for new components
• Retired — All software drivers implemented and verified

Risk #3: Reduced-size widget will not meet power dissipation threshold
• Retired — Power dissipation threshold met

Figure 6-52. Program Risk Status indicates that all risks have been retired.
[Figure: Requirements Volatility, % Requirements Change from Baseline through Aug-15, with PDR and CDR milestones]
[Figure: Requirements Compliance, % Requirements Satisfied by Design; categories: Not Compliant, Unknown Compliance, Assessed as Compliant; by month, Mar through Sep]
Project Closeout
As part of project closeout, the team took some time to document lessons learned.
1. Tailoring the measures you collect to the critical few for your project avoids wasted effort while providing the needed insight. Select measures that are easy to collect, at a high level if necessary, and drill in only when there is a need.
2. Weekly schedule late starts and stops allowed early corrective action to maintain schedule.
3. Planning for all tasks helped prevent technical debt from getting out of control. “Hope is not a plan.”
7 ACRONYMS
(INTENTIONALLY BLANK)