
Measuring Technical Debt

Patrick Morrison
North Carolina State University
pjmorris@ncsu.edu

Abstract


The objective of this paper is to survey potential quantitative measures for the technical debt metaphor. It has long been observed that there is a tradeoff between software quality and ship date: favoring ship date over quality yields the positive impact of early availability at the cost of increased maintenance expense. The metaphor of 'technical debt' was introduced nearly two decades ago to describe a disciplined approach to measuring and managing this tradeoff. Over time the use of the term has spread, particularly where technical choices within the software development process have economic impacts. Consideration has been given recently to the idea of exploring and formalizing notions of technical debt. This paper examines how the metaphor has been defined and used, and it surveys past and present measures of software product, process and people with an eye toward their use in a quantitative definition of technical debt.

1. Introduction

"Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite. [..] The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt.", Ward Cunningham [1]

Almost 20 years ago [1], Ward Cunningham used the 'technical debt' metaphor to describe the long-term cost of making sub-optimal technical design and implementation choices in software in exchange for releasing that software at a given time. Technical debt is a project management tradeoff between internal software quality on one hand and project scope and resources on the other. The notion has proven to be both lasting and popular in discussions among software professionals. Technical debt, however, has not yet been quantified in a general way. The overview of a recent FSE workshop on technical debt [2] notes "Little is known about technical debt, beyond feelings and opinions" and provides a list of questions about the meaning and use of the metaphor that depend on the ability to quantify and measure technical debt. Finding a quantitative measure for technical debt could aid software developers and management in deciding upon courses of action during software development that produce improved project outcomes. Determining that the concept is too vague to admit general quantitative measures would also be useful. The objective of this paper is to identify quantitative measurements that could support the definition and use of technical debt. In order to do so, this paper examines existing measures for the properties associated with technical debt that are defined by Brown et al. [2], namely visibility, value, present value, debt accretion, environment, origin, and impact. Visibility denotes the level of awareness of internal software quality and its impact on maintenance decisions; Value is the benefit of the chosen tradeoff; Present value incorporates the costs of the debt's impact and uncertainty; Debt accretion measures maintenance cost increase over time; Environment reflects whole-system dependencies; Origin reflects the intent leading to the incurred debt; and Impact denotes the scope of the physical changes necessary to remove the debt. These properties will be set in the context of existing measures of software and software projects, and in the context of existing uses of the term technical debt.

The remainder of the paper is organized as follows. Section 2 describes the observed properties of technical debt as described by the literature and by practitioners, and also surveys desirable properties of a metric for technical debt from the perspective of both software metrics and project management. Section 3 discusses existing measures of technical debt. Section 4 discusses current applications of technical debt. Section 5 documents limitations of the present paper's data, method and conclusions. Section 6 presents a discussion and summary of the paper, with an eye toward further work.

2. Properties of Technical Debt

This section analyzes two definitions of technical debt: its original definition [1], and that of a recent FSE technical debt workshop that included input from the originator of the term [2]. The words software and system are used interchangeably to describe the software under development. The original definition of technical debt, quoted in the introduction, incorporates notions of debt, interest, and repayment [1]. An often-used unit of measure for these quantities is programmer time. Debt is incurred at the point in time when a believed-to-be suboptimal set of technical choices is implemented as part of delivering software to a customer. The loan is one of time, borrowed against the cost of conforming the software to the mentioned ideal of software quality. The loan collateral is the value of the software's use from an earlier point in time than would be possible if the debt were not incurred. The loan is repaid by taking the time to update the software to conform to the ideal of quality. Interest on the debt is paid during the interval between when the debt is incurred and when it is repaid, corresponding to increased effort to understand, change, and manage sub-optimal code. From this description, it is clear that measuring aspects of the code, while necessary, is not sufficient for evaluating technical debt. Notably, the programmer's time, the ideal of quality, and the value of early delivery must all be considered. The cost of programmer time depends on many dimensions, including organization, geography, skill level, experience level, environment, and rarity of skills, but for a given project and organization there is typically a small, known range of values. Programmer time can readily be translated to monetary cost in a given organization. It appears that a key property of technical debt is the above-mentioned ideal to which the software must (eventually) conform.
In a practical sense, this refers to the knowledge embedded in documents such as coding standards, organizational policies, design and architecture handbooks, and in the knowledge of the developers, managers, architects, and designers working on the project. In an abstract sense, this could be viewed as a microeconomic production function applied at a more granular level than the usual firm level. Finally, the value of the program's delivery at a point in time must be considered. This implies the need to estimate the earned value at a point in time for both the development organization and the user organization (which may be the same organization).
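To make the metaphor concrete, the quantities above can be put into a small worked example in units of programmer time. All figures below are hypothetical; real values would come from a specific project and organization.

```python
# Illustrative only: all figures are hypothetical.
# A shortcut saves 40 hours now (the "loan"), costs 2 extra hours of
# maintenance friction per week (the "interest"), and repaying the
# debt later (the rewrite) is estimated at 60 hours (the "principal").

hours_saved_now = 40      # time borrowed by shipping the shortcut
interest_per_week = 2     # extra effort to work around the debt
repayment_cost = 60       # effort to conform the code to the ideal

def total_cost(weeks_until_repaid):
    """Total effort spent if the debt is carried for the given weeks."""
    return interest_per_week * weeks_until_repaid + repayment_cost

# Carrying the debt is a net loss once interest alone exceeds the
# time originally saved:
breakeven_weeks = hours_saved_now / interest_per_week
print(breakeven_weeks)    # 20.0 weeks
print(total_cost(10))     # 80 hours if repaid after 10 weeks
```

Since programmer time can be translated to monetary cost within an organization, the same arithmetic carries over directly to currency units.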

A more recent consideration of the properties of technical debt identifies the properties of visibility, value, present value, debt accretion, environment, origin and impact [2]. Visibility denotes the level of awareness of internal software quality. This is the comparison between the ideal software and the software as it stands, made explicit. Measuring this requires not only the definition of the ideal or production function, but of distance from that ideal along dimensions that can be influenced by the expenditure of programmer time. Once this distance has been calculated, it must then be presented in a manner understandable to developers and management. Value is a measure of the economic difference between the system and the ideal system [2]. It appears that this would be measured in currency or other economic units. An accurate assessment of it will be available only after system use, since it is measured in terms of actual economic results. In the meantime, estimates of Value must be made. Present value incorporates the costs of the debt's impact and uncertainty. This is an estimate based on the same units as Value, but it additionally requires terms expressing the Impact of a given choice and the uncertainty with which the estimate is made. Debt accretion measures debt increase over time. This conforms to the earlier notion of interest. It seems practical to express this in the units of Value. Environment reflects whole-system dependencies. This includes the language, technical tools and knowledge available to developers, as well as the entire set of procedures, standards, and organizational characteristics that influence the development, modification, release, and support of the affected software system. These could be viewed as non-functional requirements of the system, though some may be due to the organization.
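One plausible reading of Present value, borrowing the financial analogy, is a discounted sum of the projected interest and repayment costs, weighted by the certainty of the estimate. The sketch below is illustrative only; the discount rate and the certainty weighting are assumptions on my part, not part of the definitions in [2].

```python
# A hedged sketch of a Present Value estimate for one debt item.
# All parameters are hypothetical; real values would come from
# organization-specific historical data.

def present_value(principal, interest_per_period, periods,
                  discount_rate, certainty):
    """Discount the projected cost of carrying and then repaying a debt
    back to today, weighted by how certain the estimate is (0..1)."""
    cost = 0.0
    for t in range(1, periods + 1):
        cost += interest_per_period / (1 + discount_rate) ** t
    cost += principal / (1 + discount_rate) ** periods
    return cost * certainty

# 5 hours/month of interest for 12 months, a 60-hour repayment,
# a 1% monthly discount rate, and 80% confidence in the estimate:
pv = present_value(60, 5, 12, 0.01, 0.8)
```

The discounted, certainty-weighted figure is necessarily lower than the undiscounted total of interest plus principal, which matches the intuition that distant, uncertain costs weigh less in today's decisions.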
When a significant monetary loan is extended, there is a process of due diligence considering the nature and terms of the loan, the collateral, and the borrower's financial condition. One element of this is a borrower's credit score or credit rating. Something analogous to this rating could be used to establish how Environment impacts technical debt Value. Origin reflects the intent leading to the incurred debt. The origin of debt may be Intentional, reflecting a deliberate decision to acquire the debt, or Unintentional, reflecting debt acquired without consideration of the tradeoffs involved. Impact denotes the scope of the physical changes necessary to remove the debt. It can be inferred that this would include both the specific changes made to system artifacts such as source code files, test suites, documentation, and configuration files, and the changes necessary for installation of the software in its production environment. In this, it appears to be interrelated with Environment. The aggregate set of changes to remove the debt reflects the Impact of that debt. Each change has a Value component, though these cannot necessarily be measured in programmer time, as other organizational roles may be involved. Visibility allows presentation of the properties and measures described to this point. There are at least two aspects to this: the definition of measures and their values for each of the properties and their relationships, and the ability to measure how often those measures are used. As a place to start, Visibility could be achieved by presenting a detailed list of all elements of Impact together with the Environment credit rating. The next section investigates how these properties may be measured.

3. Measures of Technical Debt

"The software field cannot hope to have its Kepler or its Newton until it has had its army of Tycho Brahes, carefully preparing the well-defined observational data from which a deeper set of scientific insights may be derived.", Barry Boehm [3]

Measurement provides the foundation of data upon which analysis, theories, and predictions can be built. In examining the properties of technical debt, several units of measurement have been observed. This section surveys potential units of measure for each of these properties. The definitions here lie in the vague middle ground between concrete examples and general axioms, but the goal is to identify the kinds of measures suitable for each property. Measures of Value are discussed first, as a foundation for the discussion of other properties and measures. If Value is defined as the economic difference between the system as it is and the ideal system, it is necessary to account for both the expense saved and the benefit gained by not conforming the system to its ideal. Expenses and benefits can be expressed in currency units, but they have different sources. The expense can be measured in terms of human effort on the part of the organization. Benefits can also be measured in these terms, but this does not account for the benefit obtained by the use of the software, since this use often goes beyond the bounds of the development organization. This paper restricts itself to measures of debt internal to the development organization. As mentioned earlier, the primary driver of internal costs is programmer time, which can be translated to currency units. There are many other expense components, including hardware, licensing, and the support structures required for developers, managers, executives and other employees. For measures of value, we will examine measures of source code, project and people.

Metrics for Value

Software metrics is the study of properties of the software itself. The products of a software development process may include various types of documentation, operating procedures, configuration files, binaries, and source code. The primary output typically measured is source code. We will present some of the most commonly used source code metrics, but there is an extensive body of literature on metrics [4, 5].

To ground our presentation of metrics, we present the list of metrics used to evaluate the Linux kernel in a recent study of its evolution [6]: number of modules (directories, files, functions); lines of code (LOC, reported both with and without comments); McCabe's cyclomatic complexity; Halstead volume, difficulty, and effort; Oman's maintainability index; files and directories handled (added, deleted, or modified); and the rate of release of new versions. The study also identifies coupling as a worthwhile measure that was excluded because of difficulties in its calculation. Modules here are taken to be distinct sections of the system, in this case distinguished by file system and language declaration boundaries. Modularity is a primary concern in software metrics, and a range of metrics has been defined to measure its properties. Cohesion measures how closely related the elements within a module are. Coupling measures how many connections a module has with other modules, sometimes distinguished as fan-in or afferent coupling, the number of modules that depend on a given module, and fan-out or efferent coupling, the number of modules a given module depends on. The most common unit of measure for software is the line of code (LOC). It is also one of the most contentious. Should comment lines be counted? Blank lines? Isn't a line of APL more capable than a line of assembler? It is common to exclude blank lines and comments, and to indicate the language when reporting LOC, but assumptions must be carefully identified when reporting or reading LOC figures. The controversy only begins there. For example, do simple lines count in the same way as complicated lines? Some organizations look at LOC as a measure of programmer productivity, but this penalizes, for example, programmers who remove unneeded code. It is also necessary to recognize that each additional line of code increases both the necessary effort and the likelihood of defects [7].
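The counting assumptions just described can be made explicit in a few lines of code. This is a minimal sketch for C-style line comments only; block comments, continuation lines, and language-specific conventions are deliberately ignored, which is exactly the kind of assumption that must be reported alongside any LOC figure.

```python
# A minimal physical-LOC counter: exclude blank lines and full-line
# comments, count mixed code-and-comment lines as code.

def count_loc(lines):
    loc = 0
    for line in lines:
        stripped = line.strip()
        if not stripped:
            continue                  # blank line: not counted
        if stripped.startswith("//"):
            continue                  # full-line comment: not counted
        loc += 1
    return loc

sample = [
    "int add(int a, int b) {",
    "    // add two numbers",
    "",
    "    return a + b;  // inline comment still counts as code",
    "}",
]
print(count_loc(sample))   # 3
```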
Cyclomatic complexity [8] counts branches (ifs, loops, switch) and distinct conditions to measure the number of distinct paths through a piece of code, indicating the complexity of its logic. Halstead measures, also measuring complexity, are built upon a count of total operators (operations) and operands (data) and total unique operators and operands within the module being developed [9]. Oman's maintainability index is a regression model built upon a set of software projects [10] that attempts to predict maintenance effort. Files and directories handled (added, deleted, or modified), and the rate of release of new versions, are not defined in the literature; they were measured and reported to give context to the data presented in the study. While the metrics presented above were selected from a single study, they are representative of the metrics used in empirical validation studies, and so they seem to represent good candidates for application to measures of technical debt. One more type of metric should be mentioned for its application to technical debt. Clone detection is a developing research area [11] that correlates closely to a primary practitioner concern: removal of duplication. This is sometimes referred to as the DRY principle, "Don't Repeat Yourself", stated more formally as "Every piece of information must have a single, unambiguous, authoritative representation within a system" [12]. Intuitively, the existence of duplicate code (clones) suggests that simplification is possible and that maintenance effort may be increased, as a change in one clone may be required in the other(s). Recent related work focuses on identifying changes at a semantic level rather than at a textual level [13], aiding comprehension and review of code changes. There have been hundreds of metrics defined for measuring source code [4, 5], and this has generated a great deal of discussion about when, where and how these metrics are valid and useful. This is a significant enough issue that a number of frameworks for summarizing metrics have been developed and discussed.
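To illustrate what the simplest text-level form of clone detection looks like, the sketch below hashes every window of k consecutive normalized lines and reports windows that occur in more than one place. Production clone detectors [11] work at the token or syntax-tree level and also find near-miss clones; this toy version catches exact copies only.

```python
# Toy text-level clone detector: group identical k-line windows.
from collections import defaultdict

def find_clones(lines, k=3):
    windows = defaultdict(list)
    normalized = [line.strip() for line in lines]
    for i in range(len(normalized) - k + 1):
        key = tuple(normalized[i:i + k])   # the window's content is its key
        windows[key].append(i)
    # keep only windows seen at more than one location
    return {key: locs for key, locs in windows.items() if len(locs) > 1}

code = [
    "open file",
    "read header",
    "validate checksum",
    "process body",
    "open file",
    "read header",
    "validate checksum",
    "write summary",
]
clones = find_clones(code, k=3)   # one 3-line clone, at lines 0 and 4
```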
One of the most significant of these frameworks establishes a set of concepts defined in measurement-theoretic terms, namely size, length, complexity, coupling and cohesion [14]. These concepts are generically defined, so that they can be used with artifacts beyond source code. Size and length are roughly analogous to linear measures such as volume, weight and depth, and to LOC, files handled and release rate. Complexity, cohesion and coupling are generalizations that include the analogous metrics mentioned earlier. There is no direct analogue for duplication/cloning, but clones are clearly related to size: given two pieces of code having the same functionality, the smaller one is commonly seen as superior. It is possible to map the evolution study metrics onto this concept framework, permitting normalization and comparison of values for these metrics against other studies that can be defined and normalized in this way.
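One way this mapping and normalization might look in practice is sketched below. The assignment of each metric to a concept is an illustrative reading of [6] and [14], not a definitive one, and the sample values are hypothetical.

```python
# Illustrative mapping of evolution-study metrics onto the generic
# concepts of [14], plus min-max normalization to a common 0..1 scale
# so values from different studies can be compared.

concept_of = {
    "loc": "size",
    "files_handled": "size",
    "release_rate": "length",
    "cyclomatic_complexity": "complexity",
    "halstead_effort": "complexity",
    "fan_in": "coupling",
    "fan_out": "coupling",
}

def min_max_normalize(values):
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# Cyclomatic complexity of one module measured across three releases:
normalized = min_max_normalize([12, 18, 24])   # [0.0, 0.5, 1.0]
```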

3.1 Project Metrics

If you visit enough auto mechanics' shops, you will eventually see a sign hanging on the wall that says something like "Good, Fast, Cheap: Pick any two." This has played enough of a role in various kinds of projects over time that it has become part of the Project Management Body of Knowledge (PMBOK) [15], though the terms used there are Scope, Schedule, and Budget. This combination is sometimes referred to as the Iron Triangle, reflecting the strength of the relationships between the variables. Most projects, most of the time, are evaluated on what will be delivered (scope), when it will be delivered (schedule), and how much it will cost (budget). Each of these may be hierarchical, depending on the complexity of what is being provided and the number of people involved. For software projects, the scope can be viewed as the aggregate of the functional and non-functional requirements defined for the project. To correlate this with technical debt properties, Environment must be considered as well. The budget can be measured in terms of the number and roles of people required, as well as any additional resources not already available to the project organization, for example software, hardware or consumables used by the project. Schedule must consider the end date, day-by-day staffing requirements and task completion dependencies. This again requires reference to the Impact and Environment properties of technical debt. When collecting project management data for software engineering purposes, it is important to identify the time spent and how the time is spent from actual programming environments in as timely and unobtrusive a manner as is possible, while reviewing data collection with the people from whom it is collected in order to find and remove sources of confusion [16].
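As a sketch of how the three Iron Triangle quantities might be recorded for a software project, the structure below expresses scope as requirement counts, schedule as an end date plus staffing, and budget as role-based staffing costs plus other resources. All names and figures are hypothetical.

```python
# Hypothetical project record for the Scope/Schedule/Budget triple.
from datetime import date

project = {
    "scope": {"functional_reqs": 42, "nonfunctional_reqs": 7},
    "schedule": {"end_date": date(2011, 6, 30), "staff_days": 400},
    # budget: people as (role, headcount, daily rate), plus resources
    # not already available to the organization
    "budget": {
        "staff": [("developer", 4, 600), ("tester", 2, 450)],
        "other_resources": 15000,   # licenses, hardware, consumables
    },
}

def staffing_cost(staff, days_each):
    """Convert role-based staffing into a currency figure."""
    return sum(count * rate * days_each for _, count, rate in staff)

cost = staffing_cost(project["budget"]["staff"], days_each=50)
# 4*600*50 + 2*450*50 = 165000
```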

3.2 People Metrics

Underlying all software considerations is the one constant in all software development: human effort. Of course people, even programmers, are all unique, and machine-generated code exists, but ultimately the size, complexity and rates of change possible for software are bounded, for now, by human ability. Several models of program comprehension have been built, though validation in complex professional development environments remains an open problem [17]. One research program has defined a set of common programming constructs and assessed the difficulty with which they are comprehended and applied [18]. Further research could support the notion that a fundamental measure of software, particularly of its technical debt, is our own ability to comprehend it. One of the most common approaches to improving quality and performance of organizational tasks is to measure the productivity of the people involved in performing those tasks. It turns out that this is also one of the worst possible approaches for software development tasks. For significant intellectual tasks, the need to consider performance measures diminishes the ability to perform those tasks, if only because people will take the metrics into account when they act and report on their actions [19]. Balancing the project's and organization's need for certainty and quality with the individual's need for focus and intrinsic motivation is one of the most challenging aspects of performance measurement.

3.3 Measuring non-Value properties

Debt Accretion accounts for the buildup of debt over time. It has been observed in small experiments that maintenance, particularly corrective maintenance, is more expensive than new development [20]. Making decisions that increase the amount of maintenance required in the future is one means of achieving debt accretion. It appears that the units chosen for Value can be applied in measuring Debt Accretion.

Present Value, borrowing from the financial term, reflects the value of a debt at a given moment, by including consideration of the debt accretion should it not be paid off and the uncertainty involved in keeping or paying off that debt. This concept provides the means for bringing future projections into the present for comparison and decision-making. It appears that the units chosen for Value can be applied in measuring Present Value.

Origin is the simplest property to measure, as its intentionality can be found in the existence of records indicating the intent to make a change in a particular way. Where records exist indicating that a deliberate decision was made to incur technical debt, the origin is intentional. In all other cases the origin is unintentional. Simple does not mean easy; many such decisions are made without record. Greater Visibility of technical debt and the tradeoffs involved could encourage better record keeping on this point.

Impact, the scope of physical changes necessary to effect the desired debt reduction, denotes the collection of tasks required to remove a piece of debt from the system, multiplied by the effort required to accomplish each task (and possibly converted into costs through multiplication by currency/effort rates). Measuring Impact requires a list of tuples representing the tasks, components, personnel and Value that make up the identified unit(s) of debt. To illustrate, for a typical piece of software in a typical organization, a change may require programmer time to comprehend, compile and unit test the change in the development environment; quality assurance time to move the change to a quality assurance environment, regression test it, and move it into production on passing regression tests; and management time to document, facilitate and report the change. This idea is analogous to the term Effort often used in the literature. The nature of Impact ties it closely to Environment. Depending on the organization, the Environment may include a single person and a machine or two, or it could involve three or more people and dozens or even hundreds of machines. The specifics matter a great deal and they vary from organization to organization, so it seems likely that an organization-specific model of the Environment is necessary for describing and quantifying the Impact of technical debt. At the same time, there are patterns of typical elements and how they are used that have been documented in the literature [21]. Defining an organization's specifics in terms of a published model supports tool development, data collection and analysis, as well as reducing a given organization's effort in building such a definition. It seems plausible that the more complicated the Environment, the greater the Impact will be when attempting to reduce debt, leading to greater likelihood of Debt Accretion. A study of CMM level 5 organizations found that the primary factor affecting effort, software quality and cycle time, once development was normalized to CMM level 5 standards, was software size (measured in SLOC) [7]. A separate study, holding the requirement specification constant but varying the development company, suggests that each organization has its own mean level for effort, software quality, and cycle time [22]. These studies seem to lend support to the notions of Environment and Impact. It appears that the expense portion of a given technical debt can be viewed as the Impact of making a desired change in a given Environment, where the desired change conforms the software to the ideal embodied in that Environment.
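The tuple-based view of Impact can be sketched directly. The tasks, components, roles and hourly rates below are hypothetical; the point is only that Impact aggregates effort across organizational roles and converts it to cost.

```python
# Hypothetical Impact record for one debt item: each element is a
# (task, component, role, hours) tuple, and cost conversion multiplies
# effort by role-specific rates.

hourly_rate = {"programmer": 75, "qa": 60, "manager": 90}

impact = [
    ("comprehend and change code", "billing.c", "programmer", 6),
    ("unit test in dev environment", "billing.c", "programmer", 2),
    ("regression test in QA environment", "billing.c", "qa", 4),
    ("move change to production", "release", "qa", 1),
    ("document and report the change", "release", "manager", 1),
]

total_hours = sum(hours for _, _, _, hours in impact)
total_cost = sum(hours * hourly_rate[role]
                 for _, _, role, hours in impact)
# total_hours = 14; total_cost = 6*75 + 2*75 + 4*60 + 1*60 + 1*90 = 990
```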

4. Uses of Technical Debt

Technical debt-like measures have been applied to refactoring decisions [1], scheduling of feature development [30], project and product quality assessment [1,25], development speed [7], effort estimation for development [2,23] and maintenance [20], and resource selection [31]. All of these uses are part of the domain of software maintenance. Software maintenance examines the factors involved in making changes to software systems over time. If metrics measure the value of technical debt, software maintenance speaks to its impact and repayment. The ISO standard for software maintenance characterizes maintenance as corrective, preventative, adaptive, or perfective [24]. Corrective maintenance is the modification of software to correct a discovered problem after its release. Preventative maintenance is the modification of software after its release to prevent a problem from occurring. Adaptive maintenance is the modification of software to allow it to conform to changes in its hardware or software environment. Perfective maintenance is the modification of software to correct latent faults, whether they affect program behavior, documentation or maintainability. Repayment of technical debt is a form of perfective maintenance, while the interest that accrues may be from any form of maintenance. It has been observed that changing existing code is more expensive and difficult than adding or removing code [20]. In parallel, it has also been observed that conforming to specific definitions of good design can make code easier to understand and maintain, in both student and professional development [25, 26]. This suggests both the value of designing to accommodate change and the cost of knowingly releasing software that will later require changes. Caution must be taken here: other studies show that simple measures of size explain most of the effort involved in initial system development [23]. There are some high-level trends observed in patterns of software maintenance that can be used to inform discussions of technical debt.
David Parnas discussed the problem of Software Aging in terms of how keeping systems well modularized and well documented should be useful in lengthening their lifespan [27]. This parallels the software metrics community's concern with coupling and cohesion, and recognizes the necessity of clear documentation in making the continued use of aging software possible. Meir Lehman proposed and refined laws of Software Evolution, beginning in the 1970s, after having observed patterns of behavior and change in the course of large systems development, primarily during the development of OS/360 [28]. Space does not permit a full examination, but one relevant law is presented, the second law of software evolution: "The entropy of a system (its [sic] unstructuredness) increases with time, unless specific work is executed to maintain or reduce it." It might be said that technical debt is a measure of this kind of entropy. A recent study used the evolution of Linux to evaluate the laws of software evolution [6]. Broadly, there was confirmation of certain theses (e.g. continuous growth and change), and a lack of confirmation of others (e.g. self-regulation and feedback). In terms of entropy, while Linux increases in overall size and complexity, the average complexity of each function is actually declining. There is no direct statistical support here for this being due to specific work being executed to maintain or reduce entropy, but Linux's continued adoption appears to correlate with Lehman's second law, at least anecdotally. The chief reason for measures of technical debt is to give development teams a means for optimizing their efforts in achieving their goals. In this sense technical debt is both an application and an extension of Barry Boehm's Software Engineering Economics [3], where he defines economics as "the study of how people make decisions in resource-limited situations" and summarizes the field of software cost estimation and its analytical frameworks. Software cost estimation has become an important and extensively studied technique [29], but it is typically set in the context of executive decisions outside of the system development process. Development team members make decisions about technical debt during the development and maintenance of systems, something Boehm refers to as the internal dynamics of a software project [3]. It has been observed that there has been little study of the application of cost estimation models in industry, even Boehm's COCOMO and COCOMO II, and that there is a significant dependence upon expert judgment for making day-to-day technical decisions [29]. Three recent frameworks extend work done in software cost estimation in meaningful ways.
The Incremental Funding Model (IFM) considers sequencing of software feature development in light of the benefit derived from release of a feature at a point in time, compared to alternative release schedules for a given set of features [30]. It introduces the notion of a Minimum Marketable Feature (MMF), the smallest unit of software that a customer would find valuable, and goes into some detail on how this level of modularity supports greater management choice and economic value for software projects. Currently being evaluated on the MODIST project, a framework for making resource decisions in software projects builds a causal model using a set of Bayesian nets incorporating statistical knowledge of software engineering factors in order to assist managerial decision making [31]. This is distinguished, in part, from typical regression models for software cost estimation by the ability to update data at any point in the model to reflect the situation being modeled. This appears to be a promising conceptual framework for managing the many dimensions of technical debt and for building organization-specific models. Finally, the literature survey discovered one framework targeted explicitly at measuring technical debt [32]. This framework calls for the creation of a technical debt item record for each discovered piece of technical debt. Each item is assigned a description, a date recorded, a person responsible, a component location, and a type, which reflects the project phase the debt is incurred in. Each item has attributes of principal, interest amount and interest probability, each assigned an ordinal value of low, medium, or high to reflect a coarse-grained notion of the item's debt impact. These estimated values are then refined through the use of historical data from the organization and the project as it proceeds. The goal of the framework is to support project-level decision-making, to provide reference data for future projects, and to validate the proposed framework. Each of the described frameworks is management-oriented, which supports interaction between development teams and higher-level management as they translate software concerns into dollars, dates, and decisions, the key factors in managing technical debt.
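The item record described for the framework of [32] can be sketched as a simple data structure. The field names and example values below are a reading of the description above, not the framework's published schema.

```python
# Sketch of a technical debt item record in the spirit of [32]:
# description, date recorded, person responsible, component, type
# (project phase), and coarse ordinal estimates of principal,
# interest amount, and interest probability.

from dataclasses import dataclass
from datetime import date

ORDINAL = ("low", "medium", "high")

@dataclass
class TechnicalDebtItem:
    description: str
    date_recorded: date
    responsible: str
    component: str
    debt_type: str            # project phase the debt was incurred in
    principal: str            # ordinal: low / medium / high
    interest_amount: str
    interest_probability: str

    def __post_init__(self):
        for value in (self.principal, self.interest_amount,
                      self.interest_probability):
            assert value in ORDINAL, f"expected one of {ORDINAL}"

item = TechnicalDebtItem(
    description="hard-coded currency conversion rates",
    date_recorded=date(2011, 3, 1),
    responsible="pjm",
    component="billing",
    debt_type="implementation",
    principal="medium",
    interest_amount="low",
    interest_probability="high",
)
```

The ordinal estimates would then be refined against organizational and project history as the framework proposes.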

5. Limitations

The papers examined are a tiny proportion of the papers published on the topics of software quality, maintenance, cost estimation, and software metrics. Even within the papers surveyed, the range of dimensions, metrics, and values across so wide a span of concerns is too great to hope for precision. While this paper aims to serve as an introduction to the literature around technical debt, it cannot hope to be a thorough survey of the field, given the wide range of topics addressed.

Discussion and Summary

There are some broad observations that can be made about technical debt. Technical debt is an internal software quality measurement. For this reason, the literature on measures of internal software quality in software metrics and software maintenance can be consulted. Technical debt is a prediction about the future, in the sense that its presence is a measure of expected future maintenance effort. This touches on aspects of both project management and software project management, as well as the notion of recording past experience for analysis and prediction. Finally, technical debt is accessible to the project management level of discussion in a project, rather than being limited to the programmer's view. This touches on the measures of scope, time, and budget used by project management to make decisions.

Some observations can be made at this point. There is an ocean of data available for making technical decisions in software projects. There are also a number of studies supporting the notion that various design quality attributes reduce maintenance effort. And yet, for the most part, most teams in most places do not make use of this kind of information. It may be a lack of awareness, it may be the presence of internal politics, or it may be that there is insufficient value for most teams in the effort involved in collecting and monitoring this kind of data. Nevertheless, it appears economically valuable to bring the technical attributes of software to the level of management attention for use in making project decisions.

Over the last decade or so, the typical development environment has grown beyond the editor, compiler, and linker to include common use of systems for version control and bug tracking, as well as increased use of unit testing and acceptance testing frameworks. The availability of data from these support systems permits richer metric and trend analysis than was previously possible. I would conjecture that, much as the editor, compiler, and linker have merged into an Integrated Development Environment (IDE), this integration will continue with these other systems, including links between each subsystem that permit a holistic view of the technical debt characteristics of software components under development. Internal tracking of these quantities is likely to take the form of a tree or graph, based on a conceptual model (or models) of the factors involved.

Visibility of this data has both technical and social impacts, but a key to this unification would be to make the link between this software component development data and project management systems. Clearly this has to be carefully managed, in order to avoid team and organizational dysfunctions, but the resulting traceability would support higher-quality software with lower effort, as well as better visibility into the software development process. Deriving accurate, useful measures of technical debt that reflect the inherent complexity of software development, in terms simple enough to be incorporated into project management and executive discussions, is a significant challenge, but attempts will continue to be made, as effective measures would be economically valuable.

References

[1] W. Cunningham. The WyCash portfolio management system. Addendum to the Proc. on Object-Oriented Programming Systems, Languages, and Applications, pp. 29-30. 1992.
[2] N. Brown et al. Managing technical debt in software-reliant systems. FSE/SDP Workshop on Future of Software Engineering Research, pp. 47-52. 2010.
[3] B. W. Boehm. Software engineering economics. IEEE Trans. Software Eng. SE-10(1), pp. 4-21. 1984.
[4] B. Kitchenham. What's up with software metrics? A preliminary mapping study. Journal of Systems and Software 83(1). 2010.
[5] A. Meneely, B. Smith, and L. Williams. Software metrics validation criteria: A systematic literature review. ACM Trans. on Software Engineering and Methodology, to appear.
[6] A. Israeli and D. G. Feitelson. The Linux kernel as a case study in software evolution. Journal of Systems and Software 83(3), pp. 485-501. 2010.
[7] M. Agrawal and K. Chari. Software effort, quality, and cycle time: A study of CMM level 5 projects. IEEE Trans. Software Eng. 33(3), pp. 145-156. 2007.
[8] T. McCabe. A complexity measure. IEEE Trans. Software Eng. 2(4), pp. 308-320. 1976.
[9] M. Halstead. Elements of Software Science. Elsevier Science Inc. 1977.
[10] J. Hagemeister and P. Oman. Construction and testing of polynomials predicting software maintainability. Journal of Systems and Software 24(3), pp. 251-266. 1994.
[11] M. Kim, V. Sazawal, D. Notkin, and G. Murphy. An empirical study of code clone genealogies. ACM SIGSOFT Software Engineering Notes 30(5), pp. 187-196. 2005.
[12] A. Hunt and D. Thomas. The Pragmatic Programmer. Addison-Wesley. 1999.
[13] M. Kim and D. Notkin. Discovering and representing systematic code changes. Proc. 31st Intl. Conf. on Software Engineering, pp. 309-319. 2009.
[14] L. C. Briand, S. Morasca, and V. R. Basili. Property-based software engineering measurement. IEEE Trans. Software Eng. 22(1), pp. 68-86. 1996.
[15] A Guide to the Project Management Body of Knowledge, 4th Ed. Project Management Institute. 2008.
[16] V. R. Basili and D. M. Weiss. A methodology for collecting valid software engineering data. IEEE Trans. Software Eng. 10(6), pp. 728-738. 1984.
[17] A. von Mayrhauser and A. M. Vans. Program comprehension during software maintenance and evolution. Computer 28(8), pp. 44-55. 1995.
[18] Y. Wang. On the cognitive complexity of software and its quantification and formal measurement. International Journal of Software Science and Computational Intelligence 1(2), pp. 31-53. 2009.
[19] R. Austin. Measuring and Monitoring Performance in Organizations. Dorset House. 1996.
[20] V. Nguyen, B. Boehm, and P. Danphitsanuphan. Assessing and estimating corrective, enhancive, and reductive maintenance tasks: A controlled experiment. Proc. 16th Asia-Pacific Software Engineering Conference (APSEC 2009), pp. 381-388. 2009.
[21] N. Chapin, J. E. Hale, K. Md. Khan, J. F. Ramil, and W. Tan. Types of software evolution and software maintenance. Journal of Software Maintenance and Evolution: Research and Practice 13(1), pp. 3-30. 2001.
[22] B. C. D. Anda, D. I. K. Sjoberg, and A. Mockus. Variability and reproducibility in software engineering: A study of four companies that developed the same system. IEEE Trans. Software Eng. 35(3), pp. 407-429. 2009.
[23] L. C. Briand and J. Wust. Modeling development effort in object-oriented systems using design properties. IEEE Trans. Software Eng. 27(11), pp. 963-986. 2001.
[24] ISO/IEEE 14764-2006, Software Engineering - Software Life Cycle Processes - Maintenance, 2nd ed. IEEE. 2006.
[25] L. C. Briand, C. Bunse, and J. W. Daly. A controlled experiment for evaluating quality guidelines on the maintainability of object-oriented designs. IEEE Trans. Software Eng. 27(6), pp. 513-530. 2001.
[26] R. D. Banker, S. M. Datar, C. F. Kemerer, and D. Zweig. Software complexity and maintenance costs. Communications of the ACM 36(11), pp. 81-94. 1993.
[27] D. L. Parnas. Software aging. Proc. 16th Intl. Conf. on Software Engineering, pp. 279-287. 1994.
[28] L. A. Belady and M. M. Lehman. A model of large program development. IBM Systems Journal 15(3), pp. 225-252. 1976.
[29] M. Jorgensen and M. Shepperd. A systematic review of software development cost estimation studies. IEEE Trans. Software Eng. 33(1), pp. 33-53. 2007.
[30] M. Denne and J. Cleland-Huang. The incremental funding method: Data-driven software development. IEEE Software 21(3), pp. 39-47. 2004.
[31] N. Fenton, W. Marsh, M. Neil, P. Cates, S. Forey, and M. Tailor. Making resource decisions for software projects. Proc. 26th IEEE Intl. Conf. on Software Engineering, pp. 397-406. 2004.
[32] C. Seaman and Y. Guo. Measuring and monitoring technical debt. Advances in Computers 82, pp. 25-46. 2011.
