

Assessing Organizational Capabilities: Reviewing and Guiding the Development of Maturity Grids

Anja M. Maier, James Moultrie, and P. John Clarkson

Abstract—Managing and improving organizational capabilities is a significant and complex issue for many companies. To support management and enable improvement, performance assessments are commonly used. One way of assessing organizational capabilities is by means of maturity grids. While maturity grids may share a common structure, their content differs and very often they are developed anew. This paper presents both a reference point and guidance for developing maturity grids. This is achieved by reviewing 24 existing maturity grids and by suggesting a roadmap for their development. The review places particular emphasis on embedded assumptions about organizational change in the formulation of the maturity ratings. The suggested roadmap encompasses four phases: planning, development, evaluation, and maintenance. Each phase discusses a number of decision points for development, such as the selection of process areas, maturity levels, and the delivery mechanism. An example demonstrating the roadmap's utility in industrial practice is provided. The roadmap can also be used to evaluate existing approaches. In concluding the paper, implications for management practice and research are presented.

Index Terms—Communication, design science, interface management, maturity grid, maturity matrix, maturity model, new product development, organizational capabilities, organizational change, performance assessment, process improvement, project management, quality management.

I. INTRODUCTION

QUALITY management and process improvement initiatives and their influence on performance excellence have led to an explosion of published standards, regulations, bodies of knowledge, statutes, models, and grids. These "improvement frameworks" often seek to enable the assessment of organizational capabilities. Customers may explicitly require compliance with some frameworks, and the market may implicitly require compliance with others. Some might be imposed by regulation; others might simply be recognized as being useful in building or maintaining a competitive advantage [1] or in overcoming the paradoxical organizational struggle to maintain, yet replace or renew, capabilities [2].

Irrespective of the driver for adopting an improvement framework, dealing with the hundreds of requirements that the diverse standard-setting bodies impose leaves many companies confused and frustrated. This confusion has triggered calls for an overview [3] or a taxonomy of improvement frameworks [4], [5]. One taxon suggested by Paulk [5] pertains to management philosophies, such as total quality management (TQM) and the maturity assessment grids associated with it. Maturity grids can be used both as assessment tools and as improvement tools. Crosby's quality management maturity grid (QMMG) is the pioneering example, advocating a progression through five stages: uncertainty, awakening, enlightenment, wisdom, and certainty (see Fig. 1).

In the case of a voluntary evaluation of performance levels, companies often look for assessments that do not take too long and do not cost too much, which makes maturity grid assessments especially attractive. However, while managing organizational capabilities is a significant and complex issue for many companies and features prominently in the organization literature, we nevertheless observe that the contribution of assessments using maturity grids has so far been overlooked in academic literature. It seems as if maturity grids have been in the shadow of the more widely known capability maturity model (CMM) [6], [7] and its derivatives, including the capability maturity model integration (CMMI) [8] and the people-CMM [9], all developed at Carnegie Mellon's Software Engineering Institute (SEI).

A. Connection Between Maturity Grids and Models

Differentiating between capability maturity grids and CMMs is difficult. While they are complementary improvement frameworks with a number of similarities, key distinctions can be made with respect to the work orientation, the mode of assessment, and the intent.

1) Work Orientation: Maturity grids differ from other process maturity frameworks, such as the SEI's CMMI, which applies to specific processes like software development and acquisition. The CMMI model identifies the best practices for specific processes and evaluates the maturity of an organization in terms of how many of these practices it has implemented. By contrast, most maturity grids apply to companies in any industry and do not specify what a particular process should look like. They identify the characteristics that any process and every enterprise should have in order to design and deploy high-performance processes [10].

2) Mode of Assessment: Typically, an assessment using the CMMs consists, among other instruments, of Likert or binary yes/no questionnaires and checklists to enable assessment of performance. In contrast, an assessment via a maturity grid is typically structured around a matrix or a grid. Levels of maturity are allocated against key aspects of performance or key activities, thereby creating a series of cells. An important feature of a maturity grid approach is that in each cell it provides descriptive text for the characteristic traits of performance at that level, also known as a "behaviorally anchored scale" [11].
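To make the structure concrete, such a grid can be represented as a small lookup table of process areas against maturity levels, with a behaviorally anchored text in each cell. The following is a minimal sketch assuming nothing beyond the structure just described; the process area name, the number of levels, and the cell wordings are invented for illustration and are not taken from any published grid.

```python
# Minimal sketch of a maturity grid: each cell holds descriptive text
# ("behaviorally anchored") for one process area at one maturity level.
# The area name, level count, and wordings are hypothetical.
maturity_grid = {
    "interface communication": {
        1: "Information is exchanged ad hoc; no shared vocabulary exists.",
        2: "Key contacts are known; information is exchanged on request.",
        3: "Exchanges are planned and reviewed at project milestones.",
        4: "Both sides anticipate information needs and adapt routinely.",
    },
}

def cell_text(area: str, level: int) -> str:
    """Return the anchored description an assessor compares behavior against."""
    return maturity_grid[area][level]

# An assessor selects the level whose description best matches observation:
print(cell_text("interface communication", 2))
```

In an assessment, participants read down a column of such descriptions and select the one that best matches observed practice, rather than answering yes/no items.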

Fig. 1. Excerpt reproduced from the QMMG [88].

3) Intent: Many CMMs follow a standard format and are internationally recognized. As a result, they can be used for certification of performance. Maturity grids tend to be somewhat less complex, serving as diagnostic and improvement tools without aspiring to provide certification. Accordingly, the intention of use by companies differs. Companies often follow a number of approaches in parallel, and a maturity grid assessment may be used as a stand-alone assessment or as a subset of a broader improvement initiative.

B. Lack of Cross-Domain Reviews

There is a lack of concerted cross-domain academic effort in understanding the limitations and benefits of maturity grids. In specific knowledge domains, there have been some studies that compare a variety of maturity assessments, mostly focusing on maturity models: Becker et al. [12] compared six maturity models for IT management, Rohloff [106] reviewed three maturity models for business process management, Kohlegger et al. [13] conducted a qualitative content analysis of a set of 16 maturity models, DeBruin and Rosemann [14] presented a cross-domain review of two maturity models, for knowledge management and for business process management, respectively, Siponen [15] explored three evaluations based on maturity principles in the area of information management, and Pee and Kankanhalli [16] compared maturity assessments for knowledge management. These analyses concentrate mostly on a small sample of maturity models and describe strengths and/or weaknesses of existing approaches within the respective domains, i.e., IT management, knowledge management, business process management, and information management.

Studies with a larger sample size that use maturity assessment methods to gauge the current level of a specific organizational capability in different industry sectors have also been conducted, e.g., by Ibbs and Kwak [17], Pennypacker and Grant [11], [18], and Mullaly [19] in the domain of project management. These studies report on a substantial interest in the development of viable methods to assess and improve project management maturity and recognize the need for synthesis among maturity assessment methods.

Each of the research efforts cited above succeeded in comparing maturity models in specific knowledge domains or used specific maturity assessment methods to arrive at an overall level of, for example, project management within a number of industry sectors. As such, these efforts represent valuable initial steps to give researchers and industry the needed overview in their quest for awareness and perhaps synthesis of assessment methods.

Given what has been said earlier, there is a need for a cross-domain review of maturity grids and a gap in current understanding relating to the underpinning theoretical concepts of maturity-grid-based assessments. The consequences are that new methods are developed from unsuitable foundations or that unnecessary effort is expended in developing new assessment tools that duplicate those that already exist. Therefore, this paper aims to review existing maturity grids to provide a common reference point.

C. Lack of Guidance for Authors of Maturity Grids

Since the early 1990s, and in parallel to the absence of an inter-domain academic debate, we see rapid growth in the development and use of maturity grid assessments in management practice across a number of industry sectors. Such maturity grids are developed by researchers in academia, practitioners, and consultants, and as a result, they are often proprietary or difficult to access. When examples do reach the academic literature, understandably, they are published in specialized journals relating to the domain addressed. With few exceptions, work is presented without reference to that which precedes it, and new language is developed for concepts that have already been described elsewhere. Many such efforts have been criticized as ad hoc in their development [20]. This is perhaps not surprising: in the absence of guidance, authors do what they think is best, and that is often not good enough, especially considering the potential impacts of assessment results on a company's operations and employees' morale. Consequently, this paper aims to suggest a more rigorous approach to the development of maturity grids.

Recent studies intend to aid the development of maturity models. Becker et al. [12] compare six maturity models for information technology management and suggest a procedural model for their development, Mettler and Rohner [21] suggest decision parameters for the development of maturity models in information systems, Kohlegger et al. [13] compare the content of 16 maturity models, and van Steenbergen et al. [105] suggest a set of questions for the subsequent development of maturity models and propose parameters for the development of focus area maturity models. This paper complements these studies by reviewing existing maturity grids, by focusing on the underpinning theoretical concepts of maturity-grid-based assessments, and by guiding their development and evaluation.

D. Objectives of the Paper

The objectives of this paper are to review existing maturity grids to provide a common reference point and to suggest the parameters for a more rigorous approach to the development of maturity grids. This paper is intended for practitioners in industry and academic researchers concerned with process improvement, intervention, and change management in organizations. Both readerships might benefit from an overview of published maturity grids for assessing organizational capabilities, and potentially be introduced to grids that they had not previously seen. Furthermore, they might benefit from guidance for the systematic development of their own maturity grid.

E. Structure of the Paper

The remainder of the paper is organized as follows. Section II describes the methods used to elicit the content of this paper. Section III introduces the notion of maturity and traces its history and evolution. In Section IV, existing maturity grids are reviewed. Section V suggests a roadmap as a process for creating maturity grids. Section VI presents an illustrative example of its application. Section VII concludes the paper with the main implications for management practice and research.

II. METHODS

This section explains the rationale for selecting the review sample of this paper and furthermore describes how the suggested roadmap for the development of new and the evaluation of existing maturity grids was built up and evaluated.

A. Selection of the Review Sample

Our sampling strategy for the review consisted of the following activities: First, we keyword-searched academic databases and World Wide Web search engines. Second, we followed up references from publications found in the preceding activity. Finally, to check comprehensiveness and gather suggestions from experts in the field of organizational studies, we sent the list of 61 grids and models to five academic scholars active in this field. Out of the list of potential models, we selected maturity grids for further analysis that fulfilled the following criteria.

1) Grid-based approach: The grids need to be akin to the original QMMG by Crosby, as opposed to following a staged CMM style. The representation in grid or matrix format, using text descriptions as qualifiers, is used as the underlying model and/or as the assessment instrument. Here, no differentiation is made between a grid, a matrix, and a table [22].

2) Self-assessment: A number of assessment methods use an external facilitator to assign and/or decide on scores based on participants' comments and/or require a certified auditor. These approaches do not meet this criterion, as this paper focuses on approaches aimed at self-assessment.

3) Publicly available: Many maturity grids are proprietary tools generated by consulting organizations. Approaches included in this review are all in the public domain and available without cost.

Table I summarizes the 24 maturity grid assessments analyzed in this study, presented in chronological order of publication. Our sample consists of maturity grids developed by academic researchers, industry, and consulting firms, provided they meet the aforementioned criteria. As a result, many models that make significant contributions in their own field were not included in this review. To mention a few: the Achieving Excellence Design Evaluation Toolkit [23] uses a Likert scale and is, as defined in this paper, not a grid-based approach. Kochikar's approach in knowledge management [24] is particularly influenced by the CMM in that it is a staged approach, requiring baseline performance in certain key research areas before advancing to the next level of maturity. Further, Ehms and Langen's comprehensive model [25] in knowledge management relies on an auditor to infer the scores from participants, and the learning organization maturity model by Chinowsky et al. [26], for example, violates the first criterion as it is not a grid-based tool.

Also outside the scope of the paper is work on related terms: first, technology readiness [27], [28] and the uptake of its principles, e.g., system readiness [29] or organizational readiness [107]; second, life-cycle theories, for example, those describing the adoption of a new product, technology, or service in the market, often visualized using S-shaped curves [30], [31], the development of electronic data processing toward maturity (using six stages of growth, e.g., [32]), and the phases of growth organizations, in general, pass through toward maturity in the market [33].

B. Elicitation of Guidance

The overall research approach taken for the review and the suggestion of subsequent guidance is that of conceptual analysis [15]. Individual maturity grids are compared according to critical issues: the assessment aim, the selection and rationale of process areas, the conceptualization of leverage points for maturity, and the administration of the assessment. In order to show how we came to a particular conclusion, relevant citations from the original material are included wherever possible. The roadmap takes the form of a description of a sequence of four process steps [34] with a set of decision points for each phase. Development of the roadmap was pursued like a design exercise in that it aimed to create an innovative artifact—a method—intended to solve an identified problem. The underlying theoretical perspective is thus design science [35], [36]. For the presentation of this paper, we have furthermore been inspired by Eisenhardt's [37] roadmap to develop theory from case studies, which alerts the reader to the steps and associated decision points in the development journey.

TABLE I
COMPARISON OF EXISTING MATURITY GRIDS

Guidance, in the form of a roadmap with four phases and a number of associated decision points, was developed in three steps. First, more than 20 extant maturity grids for assessing organizational capabilities were reviewed. The sample contains contributions from the fields of management science, new product development, engineering design, and healthcare. Second, guidance was elicited on the basis of the experience of the authors of this paper. Third, two experts, who have independently developed and applied a maturity grid for assessing organizational capabilities, were interviewed. Both experts have undertaken consulting work and are now pursuing academic careers in engineering and construction. Interviews were conducted to obtain further insights and to validate our results from comparing extant approaches in the literature and findings from our own experience with developing and applying maturity grid assessments in small- and medium-sized companies as well as large multinational corporations from a number of industry sectors. In summary, insights from the reviewed literature, the authors' own experience, and the experts' feedback were used as the basis of this guide.

C. Evaluation of Guidance

The roadmap suggested in this paper was evaluated in two ways. First, application of the roadmap in industry demonstrates its workings. The outcome of multiple applications of the maturity grid to assess communication at departmental interfaces, given as the case example in this paper, is taken as an indicator for both the roadmap's and the grid's reliability. In addition, feedback from the participants in the assessments is also taken as direct member validation [38] of the communication grid method (CGM) [71] and as indirect evidence of the roadmap's utility. Second, in connecting to the design science perspective, a further evaluation of the development method presented here is provided by applying the guidelines for devising an artifact formulated by Hevner et al. [36] and reformulated and used specifically in the context of requirements for maturity models by Becker et al. [12] (see Section VI and Table III).

III. MATURITY

When discussing the concept of maturity, it is important to provide definitions, as the language can be inconsistent and confusing. Even within one field of expertise, the concept of maturity espoused might not be one and the same. This section introduces a dictionary definition of maturity and continues by specifying what in the literature has come to be termed "process maturity," "organizational maturity," "process capability," "project maturity," and "maturity of organizational capabilities."

A. Notion of Maturity: A Dictionary Definition

Broadly speaking, there is reference to two different aspects of the concept of maturity. Firstly, there is reference to something or someone as having reached the state of completeness in natural development or growth, in other words, the "state of being complete, perfect, or ready". Secondly, there is reference to the process of bringing something to maturity, "to bring to maturity or full growth; to ripen" [39].

It is the latter definition, which stresses the process toward maturity, that interests us in this paper. How do authors of individual maturity grid assessments conceptualize a progression to maturity? Searching for answers to this question, one realizes that the concept of maturity is best discussed in connection with the context within which it has been applied.

B. Evolution of the Notion of Maturity in Literature(s)

The concept of maturity has seen widespread attention in a number of academic fields. While maturity grids have been known for some time, their popularization as a means of assessment has been more recent [19].

1) Process Maturity: The concept of process maturity stems from TQM, where the application of statistical process control techniques showed that improving the maturity of any technical and business process ideally leads to a reduction of the variability inherent in the process, and thus, an improvement in the mean performance of the process. While Crosby inspired the notion of progression through stages towards maturity, his maturity concept as a way of measuring organizational capabilities was not formalized [5].

2) Organizational Maturity: Through the widely adopted CMM for improvement of the software development process (CMM-SW), this concept of process maturity migrated to a measure of organizational maturity. Integral to the CMM-SW is the concept that organizations advance through a series of five stages or levels of maturity: from an initial level to a repeatable, defined, managed, and an optimizing level. These levels describe an evolutionary path from ad hoc, chaotic processes to mature, disciplined software processes, and define the degree to which a process is institutionalized and effective [6], [40].

3) Process Capability: Rather than measuring organizational capability with a single value, ISO/IEC 15504 measures process capability directly and organizational capability with a process capability profile. CMMI integrated both organizational maturity and process capability. Although ISO/IEC 15504 and CMMI both use capability levels to characterize process capability, their operational definitions are somewhat different. The key taxonomic distinction is between a multilevel organizational versus a process measure.

Paulk [5] suggests the term organizational capability to characterize a hybrid between organizational maturity and process capability. This is different from the notion of organizational capabilities applied in this paper. In addition, irrespective of the way of arriving at an overall score, either on the level of processes (process capability) or at the aggregate level of an organization (organizational maturity), the notion of higher levels of maturity representing increased transparency is retained [5].
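The taxonomic distinction can be made concrete with a small sketch: organizational maturity condenses performance into a single ordinal value, whereas a process capability profile retains one level per process. The process names and level values below are invented for illustration.

```python
# Hypothetical illustration of the distinction discussed above.
# Organizational maturity: a single ordinal value for the organization.
organizational_maturity = 3

# Process capability profile (ISO/IEC 15504 style): one capability level
# per process, so strong and weak processes remain individually visible.
capability_profile = {
    "requirements analysis": 4,
    "configuration management": 2,
    "project planning": 3,
}

# Collapsing the profile to one number (here, the weakest process) yields
# an organization-wide figure but discards the per-process information.
print(min(capability_profile.values()))  # -> 2
```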

4) Project Maturity: Perhaps since software is developed through projects, it is natural that the concept of organizational maturity would migrate from software development processes to project management, and this has been reflected in an interest in applying the concept of "maturity" to (software) project management [17], [41]–[46]. Although the focus on assessing project management using the notion of maturity started comparatively recently, a number of alternative models have been released [18], [19], [44], [47].

While inspired by CMM- or ISO/IEC 15504-like notions of maturity, which focus on process transparency and increased (statistical) control, research into project management maturity shows variations in how maturity is conceptualized. One way to determine a mature project is by looking at what organizations and people are doing operationally [17], [48]. Skulmoski [49] goes further to include an organization's receptivity to project management. Andersen and Jessen [47] continue in this vein and determine maturity as a combination of behavior, attitude, and competences. Maturity is described as a composite term, and characteristics and indicators are used to denote or measure maturity. In response to the many competing models, the Project Management Institute (PMI) launched the organizational project management maturity model (OPM3) program [18] as a best practice standard for assessing and developing capabilities in portfolio management, program management, and project management [50].

5) Maturity of Organizational Capabilities: The concept of capability has been used in the strategy literature, specifically in the resource-based view, to explain differential firm performance [51]–[54]. The capability perspective hinges on the inductively reasoned relationship between process capability and business performance. This paper uses the term organizational capabilities to mean the collective skills, abilities, and expertise of an organization [55], [56]. In this vein, organizational capabilities refer to, for example, design, innovation, project management, knowledge management, collaboration, and leadership [55]. Thus, organizations can be viewed as comprised of a set of capabilities [57], which are the skill sets that provide an organization with its competitive advantage.

While it may seem misaligned to have first described process maturity, followed by organizational maturity, and then one example of an organizational capability, project management, before moving to the focus of this paper, the maturity of organizational capabilities in general, two reasons justify this sequence. First, the historic timeline of influence is maintained. Second, this body of literature engages with and shows variations in conceptualizations of maturity that would be fruitful across disciplines. The variations show that there is more than one leverage point to achieve maturity—the subject of the cross-domain review of existing maturity grids in the ensuing section.

IV. REVIEW OF UNDERLYING NOTIONS OF MATURITY IN EXISTING MATURITY GRIDS

This section focuses on the notion of maturity across the selected sample of maturity grids. Furthermore, Table I gives an overview of extant maturity grids and allows comparison with respect to the aim, the scope, the administration mechanism, the selected process areas, and most importantly, the selected maturity levels.

A. Visualization of Maturity Levels

A common principle is to represent maturity as a number of cumulative stages, where higher stages build on the requirements of lower stages. The practice of the highest number representing high maturity and the lowest number representing low maturity appears to have wide practical acceptance.

This evolution toward maturity has been visualized in a number of ways, e.g., using a ladder representation [47] or a spider web representation [46]. Either way, the different steps on the ladder or the different intersections on the spider web have to be characterized, and more often than not opinions about the step differences diverge. Even when focusing on one application area, the assumption cannot be made without close examination that maturity levels in different maturity grids describe exactly the same concepts.

B. Review of Underlying Notions of Maturity

What makes organizational capabilities mature? There are many possible answers to this question, each based on a certain rationale. Such a rationale is usually, whether explicitly stated or implicitly embraced, a statement about the leverage points envisaged to be used in organizational change initiatives.

To compare directly the structural differences and theoretical assumptions about (organizational) maturity and conceptualizations of organizational change, excerpts of five grids concerning "coordination" and "collaboration" are selected. We discern four leverage points that have been used: 1) existence and adherence to a structured process; 2) alteration of organizational structure; 3) emphasis on people; and 4) emphasis on learning.

1) Existence and Adherence to a Structured Process (Infrastructure, Transparency, and Formality): A number of extant grids base the selection of maturity levels on the systematic process improvement approach underlying the five-level ranking system introduced by the SEI, e.g., [11], [17], [20], [48]. Maturity is defined as "[. . .] the extent to which a specific process is explicitly defined, managed, measured, controlled, and effective" [6]; that is, maturity is the degree to which a process is institutionalized and effective [40].

Organizations are encouraged to use existing and well-known methods and practices to progress along the maturity scale. The assumption is that the more structured a process, and the more transparent in terms of measurability of performance, the better. The levels range from level 1 "initial," through level 2 "repeatable," level 3 "defined," and level 4 "managed," to level 5 "optimized," where the lowest maturity level 1 corresponds to an initial or learner state, and the highest maturity level 5 corresponds to the desired performance of a best practice process. In software, for example, this translates into continuous improvement through focused and sustained effort toward building a process infrastructure. It is often assumed that at the higher levels of maturity, the process is no longer needed, as the "right" performance is embedded (see also [43] and [58]).
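Read as cumulative stages, a level on such a scale counts as reached only when its requirements and those of every level below it are satisfied. The sketch below illustrates that staged logic only; the satisfied-practice data are invented, and real models define level requirements far more elaborately.

```python
# Staged reading of the five-level scale quoted above: a level is
# reached only if it and all levels below it are satisfied.
LEVELS = ["initial", "repeatable", "defined", "managed", "optimized"]

def staged_level(satisfied: set) -> int:
    """Return the highest level reached without a gap below it."""
    level = 1  # level 1, "initial," has no entry requirements
    for number, name in enumerate(LEVELS[1:], start=2):
        if name not in satisfied:
            break  # stages are cumulative: a gap halts progression
        level = number
    return level

print(staged_level({"repeatable", "defined"}))  # -> 3
print(staged_level({"repeatable", "managed"}))  # -> 2 ("defined" is missing)
```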

Fig. 2. Redrawn “teamwork” by Hammer [10].

Fig. 3. Redrawn activity 7—Coordinating R&D and marketing from Szakonyi [59].

2) Alteration of Organizational Structure (e.g., Job Roles and Policy): One structural element of how people are envisaged to work together in an organization is through the use of teams. Depending on the culture of the organization, teamwork might involve people from one discipline, it might be cross-disciplinary, it might involve customers and/or suppliers, it might differ from project to project, or it might happen in a standardized way. Hammer's process audit [10], for example, sees the maturity of teamwork as evolving from "project focused" to "cross-functional," to "teamwork as norm," through to "involving customers and suppliers" (see Fig. 2).

For Szakonyi [59] (see Fig. 3), maturity seems to be an increase in knowledge about skills, methods, and responsibilities. The most mature form of collaboration between, in this case, R&D and marketing seems to be a technical person in charge of marketing. This imposes a specific form of organizational model as an instantiation of best practice. The description is "static" and does not indicate what aspects lead to improvement. In fact, the job role is more related to responsibilities. It could be inferred that organizational change with regard to the coordination of teams is best initiated via structural changes in job roles and training in skills and methods—which makes it also a candidate for a focus on people (see Section IV-B3).

3) Emphasis on People (Skills, Training, and Building Relationships): A number of maturity grids conceive of people as the best leverage point for an evolution toward maturity of collaboration and cooperation (e.g., Fraser et al. [73] and Constructing Excellence [60]). Fraser et al. use a number of bullet points for each text description, addressing different aspects (see Fig. 4). For collaboration strategy, within level 2 (occasional ad hoc partnering), one bullet point reads "no agreed policy." Under level 3 (established partners), we find a bullet point that reads "internal skill shortages clearly recognized." Reading through the cell descriptions from the lowest to the highest level of maturity, the underlying assumption is that successful collaboration is fostered through long-term relationships and through structured and strategic relationship planning, most importantly incorporating external partners into the core team (see Fig. 4).

Text descriptions in Constructing Excellence [60] (see Fig. 5) embrace the underlying notion that "the more interchange and participation is practiced among teams the better," regardless of the specific task. Further, "lack of trust" and "power struggles" are placed at the lowest level of maturity. The underlying assumption seems to be that organizational change can succeed by focusing on interventions in the social relations among employees, in contrast to the structural changes we have seen earlier.

4) Emphasis on Learning (Awareness, Mindset, and Attitude): Strutt et al. [61] and Maier et al. [62], operating in very different application areas, have chosen to adapt ideas from the concept of single- and double-loop learning [63], [64] in order to discriminate between the maturity levels. These authors share a rationale, but their operationalization differs. Strutt et al. [61] chose to define the text descriptions in a prescriptive way, Maier et al. [62] in a descriptive manner.

Maier et al. (see Fig. 6) chose an underlying maturity concept progressing toward raising awareness of adequate actions and attitudes. The underlying notion of change seems to be that proactive collaboration (Level C) is favored over reactive collaboration (Level B). Ultimately, a mapping of the current and desired situations is aimed for, irrespective of the specific levels.

Fig. 4. Redrawn “collaboration strategy” by Fraser et al. [73] (General discussion questions omitted).

Fig. 5. Redrawn “collaboration” by Constructing Excellence [60] (direction of rows changed).

Fig. 6. Redrawn “collaboration” by Maier et al. [89].

In summary, the aforementioned analysis shows how the same subject is conceptualized in different ways. Cell descriptions "reveal" the researchers' views of an organization, its processes, people, and products. These conceptualizations impact organizational change initiatives as they specify leverage points. While some maturity grids can be clearly placed, others, e.g., Szakonyi [59], use a mixture of organizational structure and emphasis on people as leverage points for capability improvement.

V. ROADMAP: PHASES AND DECISION POINTS

In order to develop sound maturity grids, we suggest a roadmap consisting of four phases: planning, development, evaluation, and maintenance, as depicted in Fig. 7. For each phase, a number of decision points and decision options are discussed. While these phases are generic, their order is important. Progression through some phases may be iterative, and earlier work might have to be revisited. Although the courses of action with respect to decision points within the phases of this roadmap may vary, the phases are consistent. Consequently, they lend themselves to being applied across multiple domains. The following sections describe each of these phases, decision points, and options in more detail. Although predominantly aimed as guidance for the future development of maturity grids, the roadmap may also be used for evaluative purposes on existing assessments.

Fig. 7. Phases and decision points of roadmap to develop new and evaluate existing maturity grids.

Given differing contexts of development and application, differing performance goals, and differing perspectives on organizational development and change, the subject matter does not lend itself to all-encompassing prescriptions and specific instructions for navigating among the different options. Instead, the roadmap alerts researchers and practitioners to a set of decisions that need to be taken and made explicit when developing a maturity grid.

A. Phase I—Planning

Phase I, the planning phase, sees the author of a maturity grid decide on the intended audience (user community and improvement entity), the purpose of the assessment, the scope, and the success criteria. Each decision point will be described in turn.

1) Specify Audience: The first important decision when developing maturity grids is to specify the work orientation, i.e., who the expected users are. The term audience refers to all stakeholders who will participate in various aspects of the assessment, be it in the data acquisition process, as implementers of results, or as the subject of the assessment, also known as the improvement entity. Multiple audiences may be addressed for varying reasons. A quality manager or product development engineer, for example, might be the target audience for providing information on the capability to be assessed. The improvement entity, however, could be the whole research and development department. Further, the whole assessment exercise might be aimed at providing recommendations for the Chief Executive Officer's corporate planning.

For reasons of clarity and accuracy of interpretation, it is necessary to differentiate between audiences, e.g., the users and the improvement entity. Decisions will have both logistical and conceptual implications. Logistical implications concern predominantly time and resource constraints relating to the participants and the facilitator of the assessment. Conceptual implications relate mainly to the validity, reliability, and generalizability of the assessment and address questions such as: can one person in the company judge or decide alone for the company in question? Can results acquired from one group of employees be transferred to hold true for a different group?

2) Define Aim: Categorizations of software improvement initiatives seem to distinguish between two "improvement paradigms": an analytic and a benchmarking one [65], [66]. The analytic one aims for evidence to determine what improvements are needed and whether an improvement initiative has been successful. It includes general management philosophies, such as Crosby's TQM, which are general principles applicable in almost any context. The benchmarking one depends on identifying best practices and/or an "excellent" organization in a field to which practices are compared. It specifies best practices that have been demonstrated to add value in a particular context and have been explicitly stated in models and standards, of which improvement based on ISO 9001 or CMMI are examples. Analytic and benchmarking strategies can be complementary [5]. Both may lead to a measurement-based strategy, which means that processes (and systems) are measured and compared to objectives set by management or industry standards in order to identify which processes need to be improved.

Even though, accordingly, maturity grids are analytic strategies, it is evident that in existing examples two overarching aims can be identified: first, improvement through "raising awareness," and second, improvement through "benchmarking" across companies or industry sectors. Benchmarking includes comparison against an identified best practice example and making statements about the performance of a whole industry sector in terms of a certain process or capability. Benchmarking seems to incorporate "raising awareness," but not vice versa. In order for a grid to be used to benchmark processes and capabilities across an industry sector, it must be applied to a high number of companies with similar parameters to attain sufficient data for valid comparison.

3) Clarify Scope: An author needs to determine the scope of the grid. Is it designed to be generic or domain-specific? For example, is a grid developed to assess and improve energy management in general or in a particular discipline, e.g., construction? If it is supposed to be domain-specific, it is especially important to gather information about the context, idiosyncrasies, and terminology of the specific domain in order for it to be understood by and of relevance to the audience.

4) Define Success Criteria: How do authors of maturity assessments know whether development and application were successful?

One way would be to check whether success criteria have been fulfilled. Success criteria need to be determined at the outset and manifest in the form of high-level and specific requirements. Assessment methods are intervention methods. High-level requirements for an intervention method developed in managerially focused action research [67] are, for example, usability and usefulness. Usability mainly addresses the degree to which users understand the language and concepts used [68]. Usefulness could be seen in terms of companies' perceptions of whether they found the assessment helpful in stimulating learning effects or in leading to effective plans for improving a certain situation. Specific requirements pertain to the individual context and are also influenced by the underlying theoretical stance used by the author of a maturity grid (see Phase II). The requirement list needs to incorporate both the developer's and the user's objectives.

B. Phase II—Development

Phase II, the development phase, defines the architecture of the maturity grid. The architecture has a significant impact on its use. An author makes decisions about the process areas to be assessed, the maturity levels (rating scale) to be assigned, the cell descriptions to be formulated, and the administration mechanism to be used.

1) Select Process Areas (Content): Selecting process areas is arguably one of the most difficult aspects of devising a maturity grid. The goal is to establish key process areas that are mutually exclusive and collectively exhaustive. An effective assessment should be based on an underpinning conceptual framework, generated from (traceable) principles of good practice [69]. Inevitably, the selection of process areas yields insights into the authors' conceptualizations of the field. The conceptual framework underlying the assessment method determines the scope of the assessment.

In a number of existing grids, the justification of process areas is based on the originator's experience in the field [10], [59], [70] and on reference to established knowledge in the field; e.g., in the domain of project management, the PMI's knowledge areas are referred to (for example, as in Grant and Pennypacker [11]). In the absence of significant prior experience, and in a relatively new field, it may not be possible to gather sufficient evidence through existing literature in order to derive a comprehensive list of process areas. In this instance, a literature review is sufficient only in providing a theoretical starting point, and other means of identification are necessary. Typically, a number of options are available. Selection of the most appropriate technique(s) depends on the stakeholders involved in the grid development and the resources available to the developer or development team. Process areas may be solicited by interviewing a panel of experts, by synthesizing the most critical and most frequently mentioned concepts in the literature, and/or by a combination of the two in either order [71]. Alternatively, understanding and recognizing organizational process goals may be a point of departure for defining the key processes [61]. This could proceed as follows: first, defining the associated goals that are considered necessary to achieve the organization's overall objective. Then, from the goals, key process areas can be derived. For example, one could break safety management down into safety demonstration, safety implementation, strategies relating to sustaining companies' capabilities in the long term, etc., and find processes associated with these categories that show strategic and operational significance. The number of selected process areas is dependent on the subject chosen, and thus also exhibits great variation. For reasons of feasibility and logistics, an appropriate number of items for such an assessment method is about 20 [72] (see also the examples in the review sample in Table I).

2) Select Maturity Levels (Rating Scale): The next step in the development of a maturity grid is to define a set of maturity levels. Different assessment frameworks use different rating scales. For example, ISO 9001 measures compliance with all requirements on a binary pass/fail scale (with scope for minor nonconformance). The Baldrige Award offers points for each implemented requirement. The software CMM, having been inspired by Crosby's maturity grid and motivated by the lack of formalization in measurement, introduces the concept of organizational maturity as an ordinal scale for measuring capability. An explicit statement of the underlying rationale and its consistent implementation in the maturity grid are required to provide theoretical rigor. Levels need to be distinct, well defined, and need to show a logical progression, as clear definition eases the interpretation of results.

Deciding on what rationale informs the rating scale essentially means deciding on a leverage point for organizational change. Referring to the review of existing maturity grids in Section IV, we discern different underlying notions, namely:

1) existence and adherence to a structured process (e.g., infrastructure, transparency, and formality);
2) alteration of organizational structure (e.g., job roles and policy);
3) emphasis on people (e.g., skills, training, and building relationships);
4) emphasis on learning (e.g., awareness, mindset, and attitude).

In some examples, we find a mixture (see the review in Section IV).

3) Formulate Cell Text (Intersection of Process Areas and Maturity Levels): The identification and formulation of behavioral characteristics for capabilities or processes is one of the most important steps in developing a maturity grid assessment. Process characteristics need to be described at each level of maturity. To discriminate between levels, descriptions should be precise, concise, and clear. This requires: 1) a decision on whether the cell text is "prescriptive" or "descriptive"; 2) a justification of the information source; and 3) a decision on the mechanism of formulating the text descriptions.

a) Prescriptive or descriptive: In a prescriptive approach, specific and detailed courses of action are suggested for each maturity level of a process area. In a descriptive approach, the focus is on a detailed account of the individual case, and concerns for the direct comparability of results between application cases are less paramount. The choice also has an impact on maintenance, since detailed activities, if not sufficiently generic, need to be maintained for relevance and accuracy. In summary, there are a number of aspects to consider, e.g., the underlying rationale, characteristics, and knowledge of the subject area.
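The difference between the two styles can be seen in a pair of sample cell texts for the same cell; the process area, level, and wordings below are invented for illustration and do not come from any grid in the review sample.

```python
# Invented example of the two styles of cell text for one cell
# (hypothetical process area "design reviews," level 3 of 5).
prescriptive_cell = (
    "Hold a documented design review at every stage gate; "
    "record actions and verify their closure before sign-off."
)
descriptive_cell = (
    "Design reviews take place regularly; findings are usually "
    "recorded, and most actions are followed up."
)
print(prescriptive_cell)
print(descriptive_cell)
```

The prescriptive variant names concrete actions to take, while the descriptive variant characterizes observable behavior against which participants rate themselves.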

The subject area might be technical or social. Prescribing detailed activities of what should be done at what stage is easier for technical issues. For example, when dealing with energy management, specific regulations can be used, whereas for a social issue, such as teamwork, a generally acceptable and widely applicable detailed prescription might be more difficult.

In addition to the consideration of whether a subject area is more technical or more social in nature, DeBruin and Rosemann [14] point to another consideration, which asks whether a field is well established or new. Given the answer, different actions can be taken to define individual cell descriptions and even maturity levels. Actions include, for example, a top-down or a bottom-up approach [14]. In a top-down approach, definitions are written first, and then measures or a set of practices are developed to fit the definitions. In a bottom-up approach, the requirements and measures are determined first, and then definitions are written to reflect these. A top-down approach works well if the field is relatively new and there is little evidence of what is thought to represent maturity. The emphasis, in this instance, is first on what represents maturity, and then on how this can be measured.

b) Information source: A number of options are available to formulate the text descriptions in each cell: 1) by synthesizing viewpoints from a sample representing the future recipients of the assessment; or 2) by reviewing and comparing the practices of a number of organizations, for example, through empirical studies, written case studies in the literature, and best practice guides from excellence initiatives.

c) Formulation mechanism: There are two mechanisms for formulating the actual cell text as a rating scale. One is to identify the extreme ends of the scale, i.e., best and worst practices, and then to determine the characteristics of all the stages in between. In this case, the key tasks and procedures considered to represent best practice should be based on discussion with relevant stakeholders and experts in the field. This strategy assumes that the rationale for individual cell descriptions is inductively generated from the descriptions of practices. Alternatively, the individual text descriptions for the cells in each selected process area are deduced from the underlying rationale and formulated accordingly. However, this depends on the decision as to whether a definition is prescriptive or descriptive in nature.

4) Define Administration Mechanism: The administration mechanism of a maturity grid is integral to the success of the assessment. In choosing a mechanism, consideration needs to be given to the aim of the assessment and to the resources and support infrastructure available for conducting the assessment. This decision point is important because different delivery mechanisms emphasize different aspects. In reviewing extant approaches, the choice of delivery method appears to be connected to the general goals and objectives of the assessment. Approaches aiming at raising awareness, and at improving performance that way, appear to select paper-based distribution mechanisms, be it through interviews and/or group workshops [62], [67], [73]. Approaches aiming at benchmarking seem to prefer electronic distribution systems for questionnaires to reach a wide variety and large number of participants [11], [20]. A combination of the two is possible.

a) Focus on process (raising awareness): Individual scores are taken as prompts for a discussion and the identification of steps for improvement. A facilitated workshop can enable participants to discuss what influenced their judgment in scoring a process area and why it might deviate from their colleagues' perceptions. Overall, the emphasis lies on the (discursive) process of arriving at the result.

Completion of the grid in a group-administered workshop [67], [71], [74] has a number of advantages. The response rate is high and single-respondent bias can be avoided. Further, if respondents are unclear about the meaning of a term, they can ask for clarification. This ensures participants have a common reference point, which facilitates the interpretation of the resulting scores. As each process area on the grid is addressed in a group, the workshop also functions as a team-building exercise [69]. Finally, face-to-face interaction in groups elicits a higher level of iterative engagement with the subject area than a questionnaire that is completed individually.

b) Focus on end results (benchmarking): Scores are collated to give an overall assessment of the capability and an overall maturity level for the project, business unit, company, or any other chosen improvement entity. An overall assessment assumes that all processes are of equal importance. However, as the individual scores for each process are averaged out, the aggregation of results can mask potentially outstanding performance in one area or potentially weak performance in another. It also obscures differences in individual scores, which are often interesting intervention points. Overall, when benchmarking, the emphasis lies on the end result.
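This masking effect is simple arithmetic, as the hypothetical scores below show: two quite different profiles can yield the same overall average.

```python
# Hypothetical scores (levels 1-5) for two improvement entities across
# four process areas. Both average to 3.0, yet profile B conceals one
# outstanding and one weak area that profile A does not have.
profile_a = {"planning": 3, "teamwork": 3, "learning": 3, "review": 3}
profile_b = {"planning": 5, "teamwork": 1, "learning": 3, "review": 3}

for name, scores in (("A", profile_a), ("B", profile_b)):
    mean = sum(scores.values()) / len(scores)
    spread = max(scores.values()) - min(scores.values())
    print(f"profile {name}: mean={mean}, spread={spread}")
# profile A: mean=3.0, spread=0
# profile B: mean=3.0, spread=4
```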
In either case, the main value of a maturity grid-based assessment is to capture a company's own evaluation of the situation. Participants' scores might be used as a motivational driver for management to change the maturity level of their team, project, or organization and its organizational capabilities.

C. Phase III—Evaluation

The transition from Phase II to Phase III is fluid. Grids are likely to evolve over time, where, through continued use, difficulties or limitations may be revealed [61], [67]. As the assessment is used and feedback is gained from the experience of companies, the grid should be iteratively refined. Evaluations may be continued until a saturation point is reached, i.e., until no more significant changes are being suggested by participants and/or until the evaluation results are satisfactory. The first applications of the assessment should ideally be treated as a final stage of evaluation.

Thus, Phase III, the evaluation phase, is an important stage in the development of a maturity grid and serves a number of functions. For example, tests are used to validate the grid, to obtain feedback on whether the grid fulfilled the requirements when applied in practice, and to identify items for refinement.

evaluations are conducted within companies or institutions that E. Summary of All Phases and Decision Points
are independent of the development. During this phase, it is im-
Decision points during all phases are summarized in Table II.
portant to test input into the grid (choices made during Phases If desired, each decision point allows for participative involve-
I and II) for validity and the results acquired by applying the
ment of the user. Having described the roadmap’s four phases
grid in practice for correctness—in case of benchmarking also
and their respective decision points and options, Section VI
for generalizability. introduces an illustrative example of its implementation in
1) Validation: Once a grid is populated, it must be tested
practice.
VI. EXPERIENCE IMPLEMENTING THE ROADMAP

Spanning four years of development and application, we devised a maturity grid to assess communication management in engineering design [62], [71] and named it the communication grid method (CGM). Development of the maturity grid progressed by alternating between cross-domain literature reviews and empirical field studies in eight companies. Three exploratory studies were conducted during Phase I (planning) and Phase II (development), and five applications were undertaken during Phases II and III (evaluation). In what follows, the development process is described in detail, following the phases and decision points of the roadmap suggested in the previous section of this paper.

A. Phase I—Planning

Exploratory studies on communication in new product development were conducted in three companies in the aerospace, IT, and engineering tools sectors. This comprised a total of eight weeks of observation and 63 semistructured face-to-face interviews with junior engineers and senior personnel.

1) Specify Audience: Interview results showed that misalignment of perceptions of the factors influencing communication occurred predominantly at team interfaces. Therefore, the assessment tool needed to aim mainly at communication across one interface. The improvement entity was thus communication across the specified team or department interface, and the results were to be aimed at the same people who provided the scores.

2) Define Aim: Human communication in complex product development projects sees manifold configurations of people, products, and processes, which makes each communication situation different. Best-practice benchmarks or generally applicable indices for communication, against which a particular situation can be compared, are perhaps impossible to find. Therefore, the assessment purpose chosen was to raise awareness amongst participants and to diagnose improvement opportunities for their current practices.

3) Clarify Scope: The primary goal was to be domain-specific, i.e., the assessment was predominantly aimed at communication within engineering firms. The focus on factors influencing communication at a team or departmental interface, rather than on a particular product development phase, also enabled cross-domain assessment.

4) Define Success Criteria: A number of success criteria in the form of high-level and specific requirements were defined, of which a few are mentioned in what follows. In general, the maturity grid was to be usable by and useful for industry.
TABLE II
DECISION POINTS AND DECISION OPTIONS FOR THE DEVELOPMENT PHASES OF A MATURITY GRID

Success criteria for usability pertained to the clarity of the language used and to the ease of the scoring and analysis functions provided. Usefulness was determined by the tool's ability to trigger reflection and learning episodes among the participants of the maturity grid assessment.

As an example of a high-level requirement that emerged through the interviews, we would like to mention the different conceptualizations of communication. Design engineers and engineering managers were asked to describe their daily design tasks in general and their interactions with colleagues in particular. Descriptions and explanations of their work yielded a number of communication problems. The descriptions suggest that interviewees focused on different aspects in their personal conceptualizations of communication problems. Some focused on the information to be selected, some focused on the act of information transmission (the interaction or utterance), some highlighted understanding of the situation as a key aspect, and some mentioned a combination of the above. These aspects are complementary. Thus, the maturity grid method needed to incorporate many aspects of communication and raise awareness of precisely those. This is important, as the foci (subconsciously) chosen determine the solution space.

As an example of a specific requirement: during the interviews, engineers mentioned a myriad of factors, ranging from individual capabilities and information-handling skills to differences in the format of information representation, mutual trust, roles and responsibilities, decision making, handling of technical conflicts, hierarchies, overview of the sequence of tasks, and terminology. Many engineers contended that it would be difficult to intervene, as they were uncertain which factor to start looking at first. Consequently, the maturity grid assessment needed to provide multiple entry points for organizational change.
B. Phase II—Development

The roadmap reminds the developers of maturity grids to theoretically ground their selection of process areas and maturity levels, which influences the formulation of the cell text and ultimately forms the basis for assessment. The rationale underlying the selection of process areas for the CGM is Luhmann's system-evolutionary communication theory, and the selection of the maturity levels is grounded in Argyris and Schön's learning theory. Details are described in what follows.

1) Select Process Areas: The predominant objective was to assess design communication within engineering firms. Although communication is one of the oldest fields of human inquiry—starting out with Aristotle's rhetoric [77]—there is no canon of theory to which all scientists refer. The literature on communication theory is rich yet disparate, with little cross-referencing. In addition, if one looks at review articles or textbooks, the ordering criteria differ. For example, Littlejohn [78] grouped contributions to communication theory according to their disciplinary origins, Craig [79] identified streams of communication research, such as "cybernetic" and "critical" to name but two, and Miller [80] differentiated between communication theories that focus on perspectives, processes, and contexts. Anderson [81] analyzed the contents of seven communication theory textbooks and identified 249 distinct "theories," of which only 18 (7%) were included in more than three of the books.

This said, and referring to the elicited high-level requirement of addressing and assessing the comprehensiveness of communication (see the previous section), informational, interactional, and situational aspects cannot be viewed in isolation. A perspective that integrates all of these aspects is the system-evolutionary perspective, which advocates the generation, evolution, and reproduction of social systems through communication. Engineering design is considered a social system [82], [83], and communication is seen as a constitutive part of a social system [79], [84]. The view of the structure-generating function of communication has been proliferated through the works of Krippendorff [85]. The perspective of system generation through communication finds a vocal proponent in the late sociologist Luhmann [84]. Communication, encompassing the three aspects of information, utterance, and understanding, constitutes an emergent property of the interaction between many (at least two) individuals and is characterized in this view as the coordination of behavior.

A number of factors influence this coordination of behavior. Inductive "coding" of the interview data (see Phase I) yielded more than 40 factors influencing communication. Factors ranged, for example, from individual capabilities and information-handling skills to differences in the format of information representation, mutual trust, roles and responsibilities, decision making, handling of technical conflicts, hierarchies, overview of the sequence of tasks, and terminology. They were grouped under the headings "product," "information," "individual," "team," and "organization" [75], and subsequently mapped against Luhmann's three aspects of communication. The resulting framework presents a comprehensive, though arguably not exhaustive, selection of factors that forms the input to the audit method. Factors are seen as levers for change and thus form the process areas to be assessed.
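To make the shape of this framework concrete, the sketch below encodes it as a simple mapping. The five headings and the example factors are taken from the text above; the allocation of individual factors to headings, and their tagging with Luhmann's three aspects, are our illustrative assumptions and do not reproduce the validated CGM content.

```python
# Illustrative encoding of the process-area framework: factors grouped
# under five headings, each taggable with the communication aspects it
# bears on. Headings and factor names follow the text; the grouping and
# the aspect tags shown here are assumptions made for illustration.
ASPECTS = ("information", "utterance", "understanding")

FRAMEWORK = {
    "individual":   ["individual capabilities", "information handling skills"],
    "information":  ["format of information representation"],
    "team":         ["mutual trust", "roles and responsibilities",
                     "decision making"],
    "product":      ["handling of technical conflicts"],
    "organization": ["hierarchies", "overview of sequence of tasks",
                     "terminology"],
}

# Example aspect tagging for a single factor (assumed, for illustration).
aspect_tags = {"mutual trust": {"utterance", "understanding"}}
```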
The roadmap suggests using roughly 20 process areas in order for the assessment to remain feasible. Therefore, it was decided to use a modular system with a base of about 40 process areas, from which a selection would be made in negotiation with the users.

2) Select Maturity Levels: From a system-theoretic perspective, it is particularly important to choose an approach that triggers internal processes within the system under assessment. Change can be initiated, yet not fully controlled. Change initiatives, however, are only processable if they are phrased in the "language of the system" [84]. Social systems develop their own perceptions of reality, which they use to orientate and position themselves in relation to their environment. This perception of reality creates the system's knowledge base, which, at the same time, functions as an indicator of and benchmark for its own behavior. Learning is the act of developing this knowledge base further. It occurs on both the individual and the organizational level. In the most successful cases, learning leads to a coherent reality construction of a system, and thus the system's development keeps up with changes in the environment [86].

Therefore, the maturity levels were based on an adaptation of the three learning types described by Argyris and Schön [63], [64]. In addition to these three learning types, an extra, preliminary stage was added.

1) Level A: This level describes situations where participants do not engage with the questions asked in the grid, i.e., they do not reflect on the connection between the factor in question and communication.

2) Level B: This level corresponds with the first type of learning described by Argyris and Schön: some action is changed in order to correct a mistake, but otherwise tasks are carried out as usual.

3) Level C: This level matches Argyris and Schön's second type of learning: people at this stage modify their actions, as well as thinking critically about the existing norms, procedures, policies, and objectives that govern their actions. This means that a mistake is not just corrected once, but that the general situation that led to the mistake is taken into consideration.

4) Level D: This level corresponds to the third type of learning and signifies a stage where employees are aware of the influence of a given factor on communication and continuously check whether the way things are handled or set up is still appropriate for the given situation. At this stage, learning does not only happen when a mistake needs to be corrected: participants have the general mindset of continuously adjusting and improving the situation.

Given the complexity of communication and the unique nature of each project and each company, it is difficult to specify in detail what behavior every team should adopt. Therefore, it should be noted that stage "D" is not necessarily the best. Participants decide what is most adequate for the given project and the resources available.

In summary, the concept of learning permeates the assessment approach put forward here in two ways: first, individual participants in the assessment will learn about their colleagues' perceptions by going through the assessment exercise. Second, the maturity levels in the assessment follow the underlying rationale of specific learning types.
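The four-level scale can be summarized compactly, as in the sketch below. The labels single-loop, double-loop, and deutero-learning are the names commonly used for Argyris and Schön's three learning types; the one-line summaries paraphrase the level descriptions above and are not the CGM's actual cell text.

```python
from enum import Enum

class MaturityLevel(Enum):
    """CGM maturity scale; the summaries paraphrase the level descriptions."""
    A = "no engagement: the factor is not related to communication at all"
    B = "single-loop: an action is corrected; otherwise work proceeds as usual"
    C = "double-loop: governing norms, procedures, and objectives are questioned"
    D = "deutero: continuous checking and adjustment, not only after mistakes"

# Level D is not prescribed as 'best': participants judge what is adequate
# for the given project and the resources available.
```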
3) Formulate Cell Text: Once the factors that influence the communication process, and the levels of maturity assigned to each factor, had been identified, matrices of cells (scorecards or grids) were devised. In this case, a set of about 10 scorecards resulted. An example for one factor is shown in Fig. 6. Aiming to use the right language, the authors first suggested the cell descriptions and subsequently elicited feedback from users until scoring no longer caused hesitation. This was a laborious process of iterative refinement.

4) Choose Administration Mechanism: Our main emphasis was to enable learning through undertaking the assessment. Given the tradeoff between time and cost constraints and the wish to situate numerical scores within the context of their occurrence, we experimented with a number of administration mechanisms. In the end, individual 30-min interviews were combined with a 2-h group workshop in which the results of the completed scorecards, the causes of and solutions for identified problem areas, and the gaps between the current and desired states were discussed.
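Preparing such a workshop can be supported numerically by contrasting each participant's current and desired level per factor and tabling the largest gaps first. The sketch below illustrates this idea with invented function names and data; it is not part of the published method.

```python
# Minimal sketch: aggregating scorecard responses for the group workshop.
# Levels A-D are given an ordinal coding; all names here are invented.
LEVEL_ORDER = {"A": 0, "B": 1, "C": 2, "D": 3}

def workshop_agenda(responses, top_n=5):
    """responses maps a factor to (current, desired) level pairs, one pair
    per participant. Returns factors ranked by mean desired-current gap."""
    mean_gap = {}
    for factor, pairs in responses.items():
        diffs = [LEVEL_ORDER[d] - LEVEL_ORDER[c] for c, d in pairs]
        mean_gap[factor] = sum(diffs) / len(diffs)
    return sorted(mean_gap, key=mean_gap.get, reverse=True)[:top_n]

responses = {"mutual trust": [("B", "D"), ("A", "C")],
             "terminology": [("C", "C"), ("C", "D")]}
print(workshop_agenda(responses))  # largest mean gaps first
```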
C. Phase III—Evaluation

The CGM was applied to the following interfaces:
1) software and service design developing middleware (Application Case 1: information technology);
2) the management team of a small engineering company (Application Case 2: engineering tools manufacturer);
3) design and service of an aero engine (Application Case 3: aerospace manufacturer, defense);
4) embodiment design and simulation of the design of an automotive vehicle series (Application Case 4: automotive manufacturer);
5) preliminary design and detail design of an aero engine turbine blade (Application Case 5: aerospace manufacturer, civil).

Development and evaluation of the CGM evolved through rounds of iteration between the application cases. We used member validation [38] to obtain feedback on the outcome of the assessment (the reliability of the results) as well as, for example, on the functionality and usability of the method. After each application, the results were fed back to the participants of the studies through workshops, a final presentation, and, in some cases, a written report. Repeatedly, participants felt that the results accurately represented the actual status of communication between the team interfaces chosen. This ensured reliability. The method itself was evaluated according to set criteria, such as functionality and usability, usefulness and learning effect, the triggering of reflection, and the correctness of the results obtained [68], [69]. Findings from verbal feedback were cross-referenced with a questionnaire using the same criteria and completed by the same participants. For a comprehensive description, refer to Maier [75]. In addition to feedback from the experts in industry participating in the studies, feedback was also sought from experts at two engineering consultancies and from engineering design researchers at a variety of universities. The different respondent groups were chosen because a single group of respondents could not properly replicate broader use in industry or judge scientific purpose. As the research progressed, and predominantly during the three early application cases after which the design was frozen, subtle changes were made to the list of process areas, the number of interfaces assessed at any one time, and the administration mechanism. The description of the development process reflects the final design of the maturity grid.

D. Phase IV—Maintenance

The CGM aims to raise awareness. It may compare teams and departments within companies with each other, rather than positing a best-practice benchmark. The cell text is descriptive, and therefore the maturity grid will remain up-to-date. The development process was documented and communicated to scientific audiences through peer-reviewed publications. Documentation of the results from the application cases was made available to the participating industry partners.

Following the roadmap in the development of the communication grid gave us a clear structure for development. It also meant that both the authors and the users of the assessment method knew what to expect and could therefore define the rules of engagement and estimate time and effort.

In keeping with the design science perspective underlying this paper (see Section II), a further evaluation of the CGM and of the suggested roadmap for the development of such maturity assessment methods was undertaken (see Table III). We followed the guidelines for devising an artifact formulated by Hevner et al. [36], reformulated and used specifically in the context of requirements for maturity models by Becker et al. [12].
TABLE III
EVALUATION OF THE ROADMAP TO DEVELOP MATURITY GRIDS AND THE CGM

VII. SUMMARY AND CONCLUSION

This paper has discussed various notions of maturity, i.e., "process maturity," "organizational maturity," "process capability," "project maturity," and "maturity of organizational capabilities" (see Section III). It presented a comprehensive overview of 24 extant maturity grids that build on the ideas of Crosby's QMMG from the late 1970s (see Section IV). In reviewing, particular emphasis was placed on analyzing the embedded assumptions about organizational change in the maturity scales of the examples reviewed. As a direct comparison, leverage points for maturing collaboration as an organizational capability were shown to be one or a combination of: existence of and adherence to a structured process (e.g., infrastructure, transparency, and formality), alteration of the organizational structure (e.g., job roles and policy), emphasis on people (e.g., skills, training, and building relationships), and/or emphasis on learning (e.g., awareness, mindset, and attitude). This shows that combining different perspectives and measures of "goodness" for one and the same process or capability is difficult. Assessing maturity will, therefore, perhaps always be more subjective than objective [47].

While the number of maturity grids and models is growing, there is little support available on how to develop these approaches to organizational capability assessment and development. To address this issue, the paper also provided a four-phase roadmap for developing maturity grids (see Section V) and showed the roadmap's implementation in industry with an illustrative example (see Section VI). This roadmap is a first attempt at identifying and synthesizing the phases and decision points that may be useful both to authors of assessment grids and to implementers, who need to handle the multimodel environment. It provides the parameters within which professional development of a maturity grid might occur. Further, it provides the parameters within which professional judgments for the evaluation of existing grids might be made. However, it cannot, and does not aim to, provide the answer to every dilemma an author of a grid may face. Decision points and options provide instances for reflection to ensure appropriate courses of action when developing new or evaluating existing maturity grids. Such instances occur, for example, when selecting process areas and facing the issue of academic rigor versus logistical feasibility and, thereby, practical utility. For academic purposes, the list of process areas chosen needs to be comprehensive, complete, correct, consistent and, above all, theoretically justified. For industrial applicability, however, a certain flexibility for adaptation and tailoring needs to be designed into the method. It is necessary to strike a balance between developing an exhaustive method and a usable one.

A. Implications for Industrial Practice

Maturity grids are built upon conceptual models that in their own right provide insights into the author's perspective on the factors important in an organization. Thus, maturity grid-based assessment methods collectively offer a contemporary representation of the different conceptualizations of organizational practices and capabilities that are viewed as important for success.
In addition, this review presents an overview of the different maturity grid approaches available to assess organizational capabilities and initiate change processes at a given point in time. This provides organizations with a better understanding of existing capabilities and enables benchmarking against a range of competitors. While maturity grids may share a common structure, their content differs, and very often they are developed anew. This paper provides a common reference point and guidance for the evaluation of existing grids and the development of new grids. It alerts both the novice and the expert user to the potential decisions to be addressed at the start of the development of a maturity grid assessment tool.

B. Implications for Research

The maturity grids selected all embrace the notion that successful organizational change can be triggered and/or achieved by an assessment of practices. Most maturity grids are based around the underpinning assumption that key organizational capabilities need to be rooted in codified business processes. Thus, there is an acceptance that business processes are beneficial and necessary. However, the underlying idea of "cause and effect" might be fallacious, because the processes under assessment are in many cases social processes that do not follow simple cause-and-effect patterns [87]. Careful analysis is necessary, however, to discern whether maturity grid methods fall into a naturalistic–mechanistic perspective that holds processes to be fully quantifiable and controllable. Given the variety of maturity grids available, the skeptical observer would wisely enquire as to the basis upon which each maturity grid is founded. A number of perspectives on organizational change—certain ideas of goodness based on certain paradigms of goodness—are embedded in the rating scales of maturity grids. The overview, analysis, and suggestions provided here give a background for research into theory building on the development of maturity grids and for research into management tools as interventions in organizations.

ACKNOWLEDGMENT

The authors would like to thank the three anonymous reviewers, the Associate Editor, Prof. J. Pinto, and the Editors-in-Chief, Profs. G. Farris and R. Sabherwal, for their constructive comments. They would also like to thank the following people, who were contacted during the writing of this paper: Dr. B. Bater, InfoPlex Associates; Prof. I. Cooper, Eclipse Consultants; Dr. N. Crilly, University of Cambridge; Prof. K. Grant, University of Texas at San Antonio; Dr. M. Langen, Siemens AG; and Dr. S. Macmillan, University of Cambridge. They also thank the members of the Engineering Design Centre, University of Cambridge, the members of the Design Management Group, University of Cambridge, and the members of the section Work, Technology, and Organization, Technical University of Denmark, who provided helpful advice on earlier versions of the manuscript.

REFERENCES

[1] M. C. Paulk, "Surviving the quagmire of process models, integrated models, and standards," presented at the ASQ Annu. Qual. Congr., Toronto, Canada, 2004.
[2] D. Leonard-Barton, "Core capabilities and core rigidities: A paradox in managing new product development," Strateg. Manage. J., vol. 13, no. 8, pp. 111–125, 1992.
[3] J. W. Moore, "An integrated collection of software engineering standards," IEEE Softw., vol. 16, no. 6, pp. 51–57, Nov./Dec. 1999.
[4] C. P. Halvorsen and R. Conradi, "A taxonomy to compare SPI frameworks," presented at the 8th Eur. Workshop Softw. Process Technol., Witten, Germany, 2001.
[5] M. C. Paulk, "A taxonomy for improvement frameworks," presented at the World Congr. Softw. Qual., Bethesda, MD, 2008.
[6] M. C. Paulk, B. Curtis, M. B. Chrissis, and C. V. Weber, "Capability maturity model for software, version 1.1," Carnegie Mellon Univ., Pittsburgh, PA, Tech. Rep. CMU/SEI-93-TR-024, ESC-TR-93-177, Feb. 1993.
[7] M. C. Paulk, C. V. Weber, B. Curtis, and M. B. Chrissis, The Capability Maturity Model: Guidelines for Improving the Software Process. Boston, MA: Addison-Wesley, 1995.
[8] M. B. Chrissis, M. Konrad, and S. Shrum, CMMI: Guidelines for Process Integration and Product Improvement. Boston, MA: Addison-Wesley, 2003.
[9] B. Curtis, W. E. Hefley, and S. Miller, The People Capability Maturity Model: Guidelines for Improving the Workforce. Reading, MA: Addison-Wesley Longman, 2002.
[10] M. Hammer, "The process audit," Harv. Bus. Rev., vol. 85, no. 4, pp. 111–121, 2007.
[11] K. P. Grant and J. S. Pennypacker, "Project management maturity: An assessment of project management capabilities among and between selected industries," IEEE Trans. Eng. Manage., vol. 53, no. 1, pp. 59–68, Feb. 2006.
[12] J. Becker, R. Knackstedt, and J. Pöppelbuß, "Developing maturity models for IT management: A procedure model and its application," Bus. Inf. Syst. Eng., vol. 1, no. 3, pp. 213–222, 2009.
[13] M. Kohlegger, R. Maier, and S. Thalmann, "Understanding maturity models: Results of a structured content analysis," presented at I-KNOW/I-SEMANTICS, Graz, Austria, 2009.
[14] T. DeBruin and M. Rosemann, "Understanding the main phases of developing a maturity model," presented at the 16th Australas. Conf. Inf. Syst., Sydney, NSW, Australia, 2005.
[15] M. Siponen, "Towards maturity of information security maturity criteria: Six lessons learned from software maturity criteria," Inf. Manage. Comput. Secur., vol. 10, no. 5, pp. 210–224, 2002.
[16] L. G. Pee and A. Kankanhalli, "A model of organisational knowledge management maturity based on people, process, and technology," J. Inf. Knowl. Manage., vol. 8, no. 2, pp. 79–99, 2009.
[17] C. Ibbs and Y. Kwak, "Assessing project management maturity," Proj. Manage. J., vol. 31, no. 1, pp. 32–43, 2000.
[18] J. S. Pennypacker and K. P. Grant, "Project management maturity: An industry benchmark," Proj. Manage. J., vol. 34, no. 1, pp. 4–11, 2003.
[19] M. Mullaly, "Longitudinal analysis of project management maturity," Proj. Manage. J., vol. 36, no. 3, pp. 62–73, 2006.
[20] U. Kulkarni and R. St. Louis, "Organisational self assessment of knowledge management maturity," presented at the 9th Americas Conf. Inf. Syst., Pittsburgh, PA, 2003.
[21] T. Mettler and P. Rohner, "A design science research perspective on maturity models in information systems," University of St. Gallen, St. Gallen, May 2009.
[22] R. Phaal, C. J. P. Farrukh, and D. R. Probert, "Technology management tools: Concept, development and application," Technovation, vol. 26, no. 3, pp. 336–344, 2005.
[23] Department of Health. (2010, Sep. 6). Achieving Excellence Design Evaluation Toolkit (AEDET Evolution) [Online]. Available: http://www.dh.gov.uk/en/Publicationsandstatistics/Publications/PublicationsPolicyAndGuidance/DH_082089
[24] V. P. Kochikar, "The knowledge management maturity model—A staged framework for leveraging knowledge," presented at KMWorld 2000, San Jose, CA, 2000.
[25] K. Ehms and M. Langen, "Reifemodelle im KM-Consulting—Erfahrungen aus 4 Jahren Beratungspraxis," presented at KnowTech 2004, München, Germany, 2004.
[26] P. Chinowsky, K. Molenaar, and A. Realph, "Learning organizations in construction," J. Manage. Eng., vol. 23, no. 1, pp. 27–33, 2007.
[27] J. C. Mankins, "Technology readiness levels: A white paper," NASA, Washington, DC, 1995.
[28] J. C. Mankins, "Technology readiness assessments: A retrospective," Acta Astronautica, vol. 65, no. 9–10, pp. 1216–1223, 2009.
[29] J. E. Ramirez-Marquez and B. Sauser, "System development planning via system maturity optimization," IEEE Trans. Eng. Manage., vol. 56, no. 3, pp. 533–548, Aug. 2009.
[30] R. Foster, Innovation: The Attacker's Advantage. New York: Summit Books, 1986.
[31] C. M. Christensen, "Exploring the limits of the technology S-curve. Part I: Component technologies," Prod. Oper. Manage., vol. 1, no. 4, pp. 334–357, 1992.
[32] R. Nolan, "Managing the computer resource: A stage hypothesis," Commun. ACM, vol. 16, no. 7, pp. 399–405, 1973.
[33] L. E. Greiner, "Evolution and revolution as organizations grow," Harv. Bus. Rev., vol. 50, no. 4, pp. 37–46, 1972.
[34] N. Cross, Engineering Design Methods: Strategies for Product Design. London, U.K.: Wiley, 2008.
[35] N. Cross, "Designerly ways of knowing: Design discipline versus design science," Des. Issu., vol. 17, no. 3, pp. 49–55, 2001.
[36] A. R. Hevner, S. R. March, J. Park, and S. Ram, "Design science in information systems research," MIS Q., vol. 28, no. 1, pp. 75–105, 2004.
[37] K. M. Eisenhardt, "Building theories from case study research," Acad. Manage. Rev., vol. 14, no. 4, pp. 532–550, 1989.
[38] M. Bloor, "Techniques of validation in qualitative research: A critical commentary," in Context and Method in Qualitative Research, G. Miller and R. Dingwall, Eds. London, U.K.: Sage, 1997, pp. 37–50.
[39] Oxford English Dictionary (OED), "Mature" and "Maturity," 2009.
[40] K. Dooley, A. Subra, and J. Anderson, "Maturity and its impact on new product development project performance," Res. Eng. Des., vol. 13, no. 1, pp. 23–39, 2001.
[41] J. K. Crawford, Project Management Maturity Model: Providing a Proven Path to Project Management Excellence. New York: Marcel Dekker, 2002.
[42] J. K. Crawford, Project Management Maturity Model, 2nd ed. Boca Raton, FL: Auerbach Publications, Taylor and Francis Group, 2007.
[43] Project Management Institute, A Guide to the Project Management Body of Knowledge. Newtown Square, PA: Project Management Institute, 2000.
[44] T. J. Cooke-Davies and A. Arzymanow, "The maturity of project management in different industries: An investigation into variations between project management models," Int. J. Proj. Manage., vol. 21, no. 6, pp. 471–478, 2003.
[45] H. Kerzner, Using the Project Management Maturity Model: Strategic Planning for Project Management, 2nd ed. Hoboken, NJ: Wiley, 2005.
[46] R. Gareis and M. Huemann, "Maturity models for the project-oriented company," in Gower Handbook of Project Management, 4th ed., J. R. Turner, Ed. Aldershot, U.K.: Gower, 2008, pp. 183–208.
[47] E. S. Andersen and S. A. Jessen, "Project maturity in organisations," Int. J. Proj. Manage., vol. 21, no. 6, pp. 457–461, 2003.
[48] Y. H. Kwak and C. W. Ibbs, "Calculating project management's return on investment," Proj. Manage. J., vol. 31, no. 2, pp. 38–47, 2000.
[49] G. Skulmoski, "Project maturity and competence interface," Cost Eng., vol. 43, no. 6, pp. 11–18, 2001.
[50] Project Management Institute, Organizational Project Management Maturity Model (OPM3). Newtown Square, PA: Project Management Institute, 2005.
[51] E. G. Penrose, The Theory of the Growth of the Firm. New York: Wiley, 1959.
[52] B. Wernerfelt, "A resource-based view of the firm," Strateg. Manage. J., vol. 5, no. 2, pp. 171–180, 1984.
[53] J. B. Barney, "Firm resources and sustained competitive advantage," J. Manage., vol. 17, no. 1, pp. 99–120, 1991.
[54] K. M. Eisenhardt and C. B. Schoonhoven, "Resource-based view of strategic alliance formation: Strategic and social effects in entrepreneurial firms," Org. Sci., vol. 7, no. 2, pp. 136–150, 1996.
[55] D. Ulrich and N. Smallwood, Why the Bottom Line Isn't!: How to Build Value Through People and Organizational Change. New York: Wiley, 2003.
[56] D. Ulrich and N. Smallwood, "Capitalizing on capabilities," Harv. Bus. Rev., vol. 82, no. 6, pp. 119–127, 2004.
[57] D. Teece, G. Pisano, and A. Shuen, "Dynamic capabilities and strategic management," Strateg. Manage. J., vol. 18, no. 7, pp. 509–533, 1997.
[58] A. Fincher, "Project management maturity model," presented at the 28th Annu. Semin. Symp., Chicago, IL, 1997.
[59] R. Szakonyi, "Measuring R&D effectiveness—II," Res.—Technol. Manage., vol. 37, pp. 44–55, May/Jun. 1994.
[60] Constructing Excellence, Effective Teamwork: A Best Practice Guide for the Construction Industry. Watford, Hertfordshire, U.K., 2004.
[61] J. E. Strutt, J. V. Sharp, E. Terry, and R. Miles, "Capability maturity models for offshore organisational management," Environ. Int., vol. 32, no. 8, pp. 1094–1105, 2006.
[62] A. M. Maier, C. M. Eckert, and P. J. Clarkson, "Identifying requirements for communication support: A maturity grid-inspired approach," Expert Syst. Appl., vol. 31, no. 4, pp. 663–672, 2006.
[63] C. Argyris and D. Schön, Organizational Learning: A Theory of Action Perspective. Reading, MA: Addison-Wesley, 1978.
[64] C. Argyris and D. Schön, Organizational Learning II: Theory, Method and Practice. Reading, MA: Addison-Wesley, 1996.
[65] K. E. Emam and D. R. Goldenson, "An empirical review of software process assessments," Nat. Res. Counc. Canada, Inst. Inf. Technol., Ottawa, ON, Canada, Nov. 1999.
[66] M. C. Paulk, S. Guha, W. E. Hefley, E. B. Hyder, and M. Iqbal, "Comparing the eSCM-SP and CMMI: A comparison between the e-sourcing capability model for service providers v2 and the capability maturity model integration v1.1," Carnegie Mellon Univ., Pittsburgh, PA, IT Services Qualification Center, Dec. 2005.
[67] J. Moultrie, "Development of a design audit tool for SMEs," J. Prod. Innov. Manage., vol. 24, no. 4, pp. 335–368, 2007.
[68] J. Wilson, "Communication artifacts: The design of objects and the object of design," in Design and the Social Sciences: Making Connections, J. Frascara, Ed. London, U.K.: Taylor and Francis, 2002, pp. 24–32.
[69] V. Chiesa, P. Coughlan, and C. Voss, "Development of a technical innovation audit," J. Prod. Innov. Manage., vol. 13, no. 2, pp. 105–136, 1996.
[70] R. Szakonyi, "Measuring R&D effectiveness—I," Res.—Technol. Manage., vol. 37, pp. 27–32, Mar./Apr. 1994.
[71] A. M. Maier, M. Kreimeyer, U. Lindemann, and P. J. Clarkson, "Reflecting communication: A key factor for successful collaboration between embodiment design and simulation," J. Eng. Des., vol. 20, no. 3, pp. 265–287, 2009.
[72] J. Moultrie, "Development of a design audit tool to assess product design capability," Ph.D. dissertation, Inst. for Manufacturing, Univ. of Cambridge, Cambridge, U.K., 2004.
[73] P. Fraser, C. Farrukh, and M. Gregory, "Managing product development collaborations—A process maturity approach," Proc. Inst. Mech. Eng., Part B: J. Eng. Manufacture, vol. 217, no. B11, pp. 1499–1519, 2003.
[74] P. Fraser, J. Moultrie, and M. Gregory, "The use of maturity models/grids as a tool in assessing product development capability," presented at the IEEE Int. Eng. Manage. Conf., Cambridge, U.K., 2002.
[75] A. M. Maier, "A grid-based assessment method of communication in engineering design," Ph.D. dissertation, Dept. of Engineering, Univ. of Cambridge, Cambridge, U.K., 2007.
[76] B. Kitchenham and S. L. Pfleeger, "Case studies for method and tool evaluation," IEEE Softw., vol. 12, no. 4, pp. 52–62, Jul. 1995.
[77] W. R. Roberts, "Rhetorica," in The Works of Aristotle, vol. XI, W. D. Ross, Ed. New York: Oxford Univ. Press, 1936.
[78] S. W. Littlejohn, Theories of Human Communication. Belmont, CA: Wadsworth, 1999.
[79] R. T. Craig, "Communication theory as a field," Commun. Theory, vol. 9, no. 2, pp. 119–161, 1999.
[80] K. Miller, Communication Theories: Perspectives, Processes, and Contexts, 2nd ed. New York: McGraw-Hill, 2005.
[81] J. A. Anderson, Communication Theory: Epistemological Foundations. New York: The Guilford Press, 1996.
[82] L. L. Bucciarelli, "Reflective practice in engineering design," Des. Stud., vol. 5, no. 3, pp. 185–190, 1984.
[83] L. L. Bucciarelli, Designing Engineers. Cambridge, MA: MIT Press, 1994.
[84] N. Luhmann, Social Systems. Stanford, CA: Stanford Univ. Press, 1995.
[85] K. Krippendorff, "Communication and the genesis of structure," Gen. Syst., vol. 16, pp. 171–185, 1971.
[86] E. v. Glasersfeld, "Konstruktion der Wirklichkeit und der Begriff der Objektivität," in Einführung in den Konstruktivismus, H. Gumin and A. Mohler, Eds. München, Germany, 1985, pp. 1–26.
[87] S. H. Pfleeger, N. Fenton, and S. Page, "Evaluating software engineering standards," IEEE Comput., vol. 27, no. 9, pp. 71–79, Sep. 1994.
[88] P. B. Crosby, Quality is Free: The Art of Making Quality Certain. New York: Penguin, 1979.
[89] A. M. Maier, M. Kreimeyer, C. Hepperle, C. M. Eckert, U. Lindemann, and P. J. Clarkson, "Exploration of correlations between factors influencing communication in complex product development," Concurrent Eng. Res. Appl., vol. 16, no. 1, pp. 37–59, 2008.
[90] R. Hughes, "The safety management maturity grid," Prof. Saf., pp. 15–18, Jun. 1985.
[91] R. A. Radice, J. T. Harding, P. E. Munnis, and R. W. Philips, "A programming process study," IBM Syst. J., vol. 24, no. 2, pp. 91–101, 1985.
[92] Building Research Energy Conservation Support Unit (BRECSU), Reviewing Energy Management. Watford, U.K.: Building Research Establishment, Jan. 1993.
[93] J. Hackos, "The information process maturity model: A 2004 update," Best Pract., vol. 6, no. 4, pp. 1–8, 2004.
[94] M. E. McGrath and C. L. Akiyama, "PACE: An integrated process for product and cycle-time excellence," in Setting the PACE in Product Development: A Guide to Product and Cycle-Time Excellence, rev. ed., M. E. McGrath, Ed. Boston, MA: Butterworth-Heinemann, 1996, pp. 17–30.
[95] T. R. Stacey, "The information security program maturity grid," Inf. Syst. Secur., vol. 5, pp. 22–33, 1996.
[96] D. A. Hillson, "Towards a risk maturity model," Int. J. Proj. Bus. Risk Manage., vol. 1, no. 1, pp. 35–45, 1997.
[97] S. Austin, A. Baldwin, J. Hammond, M. Murray, D. Root, D. Thomson, and A. Thorpe, Design Chains—A Handbook for Integrated Collaborative Design. London, U.K.: Thomas Telford, 2001.
[98] M. Bruce and J. Bessant, Design in Business: Strategic Innovation Through Design. London, U.K.: Financial Times Press, 2002.
[99] D. M. Fisher, "The business process maturity model: A practical approach to identifying opportunities for optimization," BPTrends, pp. 1–7, Sep. 2004.
[100] R. Woodall, I. Cooper, D. Crowhurst, M. Hadi, and S. Platt, "MaSC: Managing sustainable companies," Eng. Sustainability, vol. 157, no. 1, pp. 15–21, 2004.
[101] K. B. Kahn, G. Barczak, and R. Moss, "Perspective: Establishing an NPD best practices framework," J. Prod. Innov. Manage., vol. 23, no. 2, pp. 106–116, 2006.
[102] D. M. Ashcroft, C. Morecroft, D. Parker, and P. R. Noyce, "Safety culture assessment in community pharmacy: Development, face validity, and feasibility of the Manchester Patient Safety Assessment Framework," Qual. Saf. Health Care, vol. 14, no. 6, pp. 417–421, 2005.
[103] D. Parker, M. Lawrie, and P. Hudson, "A framework for understanding the development of organisational safety culture," Saf. Sci., vol. 44, no. 6, pp. 551–562, 2006.
[104] D. Parker, M. Lawrie, J. Carthey, and M. Coultous, "The Manchester patient safety framework: Sharing and learning," Clin. Risk, vol. 14, no. 4, pp. 140–142, 2008.
[105] M. van Steenbergen, R. Bos, S. Brinkkemper, I. van de Weerd, and W. Bekkers, "The design of focus area maturity models," in Global Perspectives on Design Science Research (Lecture Notes in Computer Science), R. Winter, J. L. Zhao, and S. Aier, Eds. Berlin, Heidelberg: Springer-Verlag, 2010, pp. 317–332.
[106] M. Rohloff, "Case study and maturity model for business process management implementation," in Business Process Management (Lecture Notes in Computer Science, vol. 5701), U. Dayal, J. Eder, J. Koehler, and H. Reijers, Eds. Berlin, Heidelberg: Springer, 2009, pp. 128–142.
[107] Z. Radnor, "Lean in public services: Is it a panacea or paradox?" Public Exec. J., pp. 24–25, Jan./Feb. 2009.

Anja M. Maier received the M.A. degree in political science, communication science, and philosophy from the University of Münster, Münster, Germany, and the Ph.D. degree in engineering design from the University of Cambridge, Cambridge, U.K.
She is currently an Associate Professor in engineering management with the Technical University of Denmark, Lyngby, Denmark. She was previously a consultant in the manufacturing and software industries. Her research interests include design process improvement, interface management, change management, capability assessment, collaborative designing, communication, and innovation in design.
Prof. Maier is a member of the Academy of Management and the Design Society, and a Fellow of the Cambridge Philosophical Society.

James Moultrie received the B.Tech. (Hons.) degree in mechanical engineering from Loughborough University, Loughborough, U.K., the M.A. degree in industrial design from De Montfort University, Leicester, U.K., the M.B.A. degree from Loughborough University, U.K., and the Ph.D. degree in engineering from the University of Cambridge, Cambridge, U.K.
He is currently a Senior Lecturer in innovation and design management with the University of Cambridge, Cambridge, U.K. He is a Chartered Mechanical Engineer (IMechE) and has also been a Product Marketing Manager, Project Manager, and Senior Engineer in the precision instruments sector. His research interests include innovation and design management.
Dr. Moultrie was the recipient of a Scientific and Technical Academy Award and an Emmy for work on a range of lenses for professional cinematography in 2000.

P. John Clarkson received the B.Eng., M.A. (Cantab.), and Ph.D. degrees in engineering from the University of Cambridge, Cambridge, U.K.
He is currently a Professor of Engineering Design and the Director of the Engineering Design Centre at the University of Cambridge, Cambridge, U.K. He was previously a consultant with PA Consulting, where he was engaged in product development practice, with a focus on medical equipment and projects for the U.K. Ministry of Defence. He has authored or coauthored more than 400 papers in the past ten years, as well as a number of practitioner workbooks on medical equipment design and inclusive design. His research interests include the general area of engineering design, in particular the development of design methodologies to address specific design issues.
Prof. Clarkson is a Chartered Engineer, a Fellow of the Institution of Engineering and Technology, and a member of the American Society of Mechanical Engineers and the Design Society.
