
Kenya Methodist University Faculty of Sciences Department of Mathematics and Computer Science

Course Code: CISY 112

Course Title: SOFTWARE ENGINEERING PRINCIPLES


List of Contents
Course Outline
Module 1: Introduction to Software Engineering
Module 2: History
Module 3: The Software Process
Module 4: Software Requirements
Module 5: Requirements Engineering
Module 7: Object-Oriented Design
Module 8: User Interface Design
Module 9: Software Development
Module 10: Software Evolution
Module 11: Software Cost Estimation
Module 12: Verification and Validation
Module 13: Software Testing
References


Course Outline
Course Objectives: At the end of the course, students will be able to:
- Demonstrate knowledge of basic issues, principles and practices in software engineering.
- Produce software development artifacts (requirements specifications, etc.).
- Assess and critique software engineering projects and determine which principles and practices are most appropriate to the given situation.
- Justify the selection of specific software engineering practices.

Course Content: Definitions, software quality, and approaches in software engineering: the systems approach and the engineering approach. Software development life cycle and software development problems. Software projects: types, project planning, project schedules, project personnel. Cost estimation, tools and techniques.

Teaching Methods: Class lectures consisting of proper explanation of the various elements. Regular assignments and CATs, and discussion of the questions asked in the assignments and CATs. Group discussions and active participation in class.

Reference Textbooks:
Sommerville, I.: Software Engineering; 7th edition, 2004.
Schach, S. R.: Object-Oriented and Classical Software Engineering; McGraw-Hill, 2001.
Pfleeger, S. L.: Software Engineering: The Production of Quality Software; Macmillan Publishing Company, 1987.
Fairley, R.: Software Engineering Concepts; Tata McGraw Hill, 1997.


Module 1: Introduction to Software Engineering


Introduction
The economies of ALL developed nations are dependent on software. More and more systems are software controlled. Software engineering is concerned with theories, methods and tools for professional software development. Software engineering expenditure represents a significant fraction of GNP in all developed countries.

Software Costs
Software costs often dominate system costs. The costs of the software on a PC are often greater than the hardware cost. Software costs more to maintain than it does to develop; for systems with a long life, maintenance costs may be several times development costs. Software engineering is concerned with cost-effective software development.

FAQs of Software Engineering
What is software? Computer programs and associated documentation. Software products may be developed for a particular customer or for a general market. Software products may be:
- Generic - developed to be sold to a range of different customers.
- Bespoke (custom) - developed for a single customer according to their specification.

What is software engineering? Software engineering is an engineering discipline concerned with all aspects of software production. Software engineers should adopt a systematic and organized approach to their work and use appropriate tools and techniques, depending on the problem to be solved, the development constraints and the resources available.


What is the difference between software engineering and computer science? Computer science is concerned with theory and fundamentals; software engineering is concerned with the practicalities of developing and delivering useful software. Computer science theories are currently insufficient to act as a complete underpinning for software engineering.

What is the difference between software engineering and system engineering? System engineering is concerned with all aspects of computer-based systems development including hardware, software and process engineering. Software engineering is part of this process. System engineers are involved in system specification, architectural design, integration and deployment.

What is a software process? A set of activities whose goal is the development or evolution of software. Generic activities in all software processes are:
- Specification - what the system should do and its development constraints.
- Development - production of the software system.
- Validation - checking that the software is what the customer wants.
- Evolution - changing the software in response to changing demands.

What is a software process model? A simplified representation of a software process, presented from a specific perspective. Examples of process perspectives are:
- Workflow perspective - sequence of activities.
- Data-flow perspective - information flow.
- Role/action perspective - who does what.

Generic process models:
- Waterfall
- Evolutionary development
- Formal transformation
- Integration from reusable components

What are the costs of software engineering? Roughly 60% of costs are development costs and 40% are testing costs. For custom software, evolution costs often exceed development costs. Costs vary depending on the type of system being developed and the requirements of system attributes such as performance and reliability. The distribution of costs depends on the development model that is used.

What are software engineering methods? Structured approaches to software development which include system models, notations, rules, design advice and process guidance:
- Model descriptions - descriptions of the graphical models which should be produced.
- Rules - constraints applied to system models.
- Recommendations - advice on good design practice.
- Process guidance - what activities to follow.

What is CASE (Computer-Aided Software Engineering)? Software systems intended to provide automated support for software process activities. CASE systems are often used for method support.
- Upper-CASE - tools to support the early process activities of requirements and design.
- Lower-CASE - tools to support later activities such as programming, debugging and testing.

What are the attributes of good software? The software should deliver the required functionality and performance to the user and should be maintainable, dependable and usable.
- Maintainability - software must evolve to meet changing needs.
- Dependability - software must be trustworthy.
- Efficiency - software should not make wasteful use of system resources.
- Usability - software must be usable by the users for which it was designed.


What are the key challenges facing software engineering? Coping with legacy systems, coping with increasing diversity and coping with demands for reduced delivery times.
- Legacy systems - old, valuable systems must be maintained and updated.
- Heterogeneity - systems are distributed and include a mix of hardware and software.
- Delivery - there is increasing pressure for faster delivery of software.

Professional and Ethical Responsibility
Software engineering involves wider responsibilities than simply the application of technical skills. Software engineers must behave in an honest and ethically responsible way if they are to be respected as professionals. Ethical behavior is more than simply upholding the law.

Issues of Professional Responsibility
- Confidentiality - engineers should normally respect the confidentiality of their employers or clients, irrespective of whether or not a formal confidentiality agreement has been signed.
- Competence - engineers should not misrepresent their level of competence. They should not knowingly accept work which is outwith their competence.
- Intellectual property rights - engineers should be aware of local laws governing the use of intellectual property such as patents, copyright, etc. They should be careful to ensure that the intellectual property of employers and clients is protected.
- Computer misuse - software engineers should not use their technical skills to misuse other people's computers. Computer misuse ranges from the relatively trivial (game playing on an employer's machine, say) to the extremely serious (dissemination of viruses).


Module 2: History
Introduction
The need for systematic approaches to the development and maintenance of computer software products became apparent in the 1960s. In the 1950s, hardware vendors provided methodologies (and associated forms, manuals, etc.) to their customers. Among the earliest such methods were the "Study Organization Plan (SOP)" from IBM and "Accurately Defined Systems (ADS)" from NCR. By the mid-sixties, numerous commentators had provided their suggestions for "what to do and how to create quality computer applications." Within this mix of ideas and suggestions, most organizations lived with in-house development methodologies, which resulted in poor-quality, non-standard systems and applications.

By 1968, the "software development problem" had become so large that the NATO Science Committee sponsored a conference in Garmisch, Germany in October 1968 to examine the issue. A second conference followed in Rome, Italy in October 1969. These reports captured some of the prevailing ideas on how to make the process of the analysis and development of computer applications and their component computer programs more engineering-like. The use of standardized parts, which could be re-used in other applications and assembled as modules in structured ways, was considered.

Although numerous authors have focused their attention on software engineering as a process, little has been done to examine the history of software engineering as a movement. In 1996, the International Conference and Research Center for Computer Science in Dagstuhl, Germany, sponsored a week-long meeting of software engineering practitioners, academic computer scientists, and computer historians. During the week, numerous papers, positions, and experiences were presented. At the end of the week, the various groups prepared a number of short reports.

According to the "Historian's Report", the problem of identifying a history stems from the broad, undocumented practice of software engineering, as well as other identified problems. The Report suggested that a history might be engineered by the practitioners under the guidance of the historians; such a project, however, while eliciting great excitement, seemed unwieldy given the normal tools of the historian.

Definitions
Software Engineering is the technological and managerial discipline concerned with the systematic production and maintenance of software products that are developed and modified on time and within cost estimates.


The primary goals of software engineering are to improve software quality and to increase the productivity and job satisfaction of software engineers. It's a new technological discipline, distinct from but based on the foundations of:
- Computer science - provides scientific foundations.
- Management science - provides foundations for project management.
- Economics - provides the foundations for resource estimation and cost control.
- Communication skills.
- The engineering approach to problem solving.

Programmer is used to denote an individual who is concerned with the details of implementing, packaging and modifying algorithms and data structures written in a particular programming language.

Software engineers are additionally concerned with issues of analysis, design, verification and testing, documentation, software maintenance, and project management.

Computer software or software product is all the source code and associated documents and documentation that constitute a software product.

Customer is used to denote an individual or organization that initiates procurement or modification of a software product.

Project Size Categories
Project size is a major factor that determines the level of management control and the types of tools and techniques required on a software project. Categories include:
- Trivial projects
- Small projects
- Medium-size projects
- Large projects
- Very large projects
- Extremely large projects


Factors that influence quality and productivity
- Individual ability
- Team communication
- Product complexity
- Appropriate notations
- Systematic approaches
- Change control
- Level of technology
- Required reliability
- Available time
- Problem understanding
- Stability of requirements
- Required skills
- Facilities and resources
- Adequacy of training
- Management skills
- Appropriate goals
- Rising expectations


Module 3: The Software Process


A software process is a coherent set of activities for specifying, designing, implementing and testing software systems - a structured set of activities required to develop a software system:
- Specification
- Design
- Validation
- Evolution

A software process model is an abstract representation of a process. It presents a description of a process from some particular perspective.

Generic software process models
a) The waterfall model - separate and distinct phases of specification and development.
b) Evolutionary development - specification and development are interleaved.
c) Formal systems development - a mathematical system model is formally transformed to an implementation.
d) Reuse-based development - the system is assembled from existing components.

Waterfall Model

The drawbacks of the waterfall model are:
- The difficulty of accommodating change after the process is underway.
- Inflexible partitioning of the project into distinct stages, which makes it difficult to respond to changing customer requirements.

Therefore, this model is only appropriate when the requirements are well-understood.

Evolutionary Development
- Exploratory development - the objective is to work with customers and to evolve a final system from an initial outline specification. It should start with well-understood requirements.
- Throw-away prototyping - the objective is to understand the system requirements. It should start with poorly understood requirements.

Problems of evolutionary development include:
- Lack of process visibility.
- Systems are often poorly structured.
- Special skills (e.g. in languages for rapid prototyping) may be required.

Evolutionary development is usually employed:
- For small or medium-size interactive systems
- For parts of large systems (e.g. the user interface)
- For short-lifetime systems

Formal Systems Development
This is based on the transformation of a mathematical specification through different representations to an executable program. Transformations are correctness-preserving, so it is straightforward to show that the program conforms to its specification. This approach is embodied in the Cleanroom approach to software development.


Problems of formal systems development:
- Need for specialized skills and training to apply the technique.
- Difficult to formally specify some aspects of the system, such as the user interface.

Formal systems development is applicable mainly to critical systems, especially those where a safety or security case must be made before the system is put into operation.

Reuse-based Development
This is based on systematic reuse, where systems are integrated from existing components or COTS (Commercial-off-the-shelf) systems. Process stages include:
- Component analysis
- Requirements modification
- System design with reuse
- Development and integration

This approach is becoming more important, but there is still limited experience with it.

Process Iteration
System requirements ALWAYS evolve in the course of a project, so process iteration, where earlier stages are reworked, is always part of the process for large systems. Iteration can be applied to any of the generic process models. There are two (related) approaches:
- Incremental development
- Spiral development

Incremental Development


Rather than deliver the system as a single delivery, the development and delivery are broken down into increments, with each increment delivering part of the required functionality. User requirements are prioritised, and the highest-priority requirements are included in early increments. Once the development of an increment is started, its requirements are frozen, though requirements for later increments can continue to evolve.

Advantages of incremental development include:
- Customer value can be delivered with each increment, so system functionality is available earlier.
- Early increments act as a prototype to help elicit requirements for later increments.
- Lower risk of overall project failure.
- The highest-priority system services tend to receive the most testing.
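As a concrete sketch of the prioritisation step described above, the following Python fragment groups requirements into increments by priority. The requirement names, priority scheme and capacity figure are invented for illustration; they are not part of the module text.

```python
# Minimal sketch: partition prioritised requirements into increments.
# Lower priority numbers are more important and go into earlier increments.

def plan_increments(requirements, capacity):
    """Group requirements into increments of at most `capacity` items,
    highest priority (lowest number) first."""
    ordered = sorted(requirements, key=lambda r: r["priority"])
    return [ordered[i:i + capacity] for i in range(0, len(ordered), capacity)]

reqs = [
    {"name": "search catalogue", "priority": 1},
    {"name": "place order", "priority": 1},
    {"name": "export reports", "priority": 3},
    {"name": "user preferences", "priority": 2},
]

increments = plan_increments(reqs, capacity=2)
# Increment 1 holds the two priority-1 requirements; later increments hold
# lower-priority items whose requirements may still evolve before work starts.
```

Once increment 1 is under development, its two requirements would be frozen, while "user preferences" and "export reports" remain open to change.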

Spiral Development
The process is represented as a spiral rather than as a sequence of activities with backtracking. Each loop in the spiral represents a phase in the process. There are no fixed phases such as specification or design - loops in the spiral are chosen depending on what is required. Risks are explicitly assessed and resolved throughout the process.


The spiral model sectors include:
- Objective setting - specific objectives for the phase are identified.
- Risk assessment and reduction - risks are assessed and activities put in place to reduce the key risks.
- Development and validation - a development model for the system is chosen, which can be any of the generic models.
- Planning - the project is reviewed and the next phase of the spiral is planned.

CASE
Computer-aided software engineering (CASE) is software to support software development and evolution processes. Activity automation includes:
- Graphical editors for system model development
- Data dictionaries to manage design entities
- Graphical UI builders for user interface construction
- Debuggers to support program fault finding
- Automated translators to generate new versions of a program

CASE Technology
CASE technology has led to significant improvements in the software process, though not the order-of-magnitude improvements that were once predicted. Reasons include:
- Software engineering requires creative thought, which is not readily automatable.
- Software engineering is a team activity and, for large projects, much time is spent in team interactions. CASE technology does not really support these.

CASE Classification
Classification helps us understand the different types of CASE tools and their support for process activities.
- Functional perspective - tools are classified according to their specific function.
- Process perspective - tools are classified according to the process activities that are supported.
- Integration perspective - tools are classified according to their organization into integrated units.

Functional Tool Classification

Tool type                       Examples
Planning tools                  PERT tools, estimation tools, spreadsheets
Editing tools                   Text editors, diagram editors, word processors
Change management tools         Requirements traceability tools, change control systems
Configuration management tools  Version management systems, system building tools
Prototyping tools               Very high-level languages, user interface generators
Method-support tools            Design editors, data dictionaries, code generators
Language-processing tools       Compilers, interpreters
Program analysis tools          Cross-reference generators, static analyzers, dynamic analyzers
Testing tools                   Test data generators, file comparators
Debugging tools                 Interactive debugging systems
Documentation tools             Page layout programs, image editors
Re-engineering tools            Cross-reference systems, program restructuring systems

CASE Integration
- Tools - support individual process tasks such as design consistency checking, text editing, etc.
- Workbenches - support a process phase such as specification or design. They normally include a number of integrated tools.
- Environments - support all or a substantial part of an entire software process. They normally include several integrated workbenches.


Module 4: Software Requirements


These are the descriptions and specifications of a system.

Requirements Engineering
This is the process of establishing the services that the customer requires from a system and the constraints under which it operates and is developed. The requirements themselves are the descriptions of the system services and constraints that are generated during the requirements engineering process.

What is a requirement? It may range from a high-level abstract statement of a service or of a system constraint to a detailed mathematical functional specification. This is inevitable, as requirements may serve a dual function:
- They may be the basis for a bid for a contract - therefore they must be open to interpretation.
- They may be the basis for the contract itself - therefore they must be defined in detail.
Both these statements may be called requirements.

Types of requirement
1. User requirements - statements in natural language plus diagrams of the services the system provides and its operational constraints. Written for customers.
2. System requirements - a structured document setting out detailed descriptions of the system services. Written as a contract between client and contractor.
3. Software specification - a detailed software description which can serve as a basis for a design or implementation. Written for developers.

Requirements definition
The software must provide a means of representing and accessing external files created by other tools.

Requirements specification
The user should be provided with facilities to define the type of external files. Each external file type may have an associated tool which may be applied to the file.

Each external file type may be represented as a specific icon on the user's display. Facilities should be provided for the icon representing an external file type to be defined by the user. When a user selects an icon representing an external file, the effect of that selection is to apply the tool associated with the type of the external file to the file represented by the selected icon.

Requirement Readers

Document                       Readers
User requirements              Client managers, system end-users, client engineers, contractor managers, system architects
System requirements            System end-users, client engineers, system architects, software developers
Software design specification  Client engineers (perhaps), system architects, software developers

Functional and non-functional requirements
- Functional requirements - statements of services the system should provide, how the system should react to particular inputs and how the system should behave in particular situations.
- Non-functional requirements - constraints on the services or functions offered by the system, such as timing constraints, constraints on the development process, standards, etc.
- Domain requirements - requirements that come from the application domain of the system and that reflect characteristics of that domain.


Functional requirements
- Describe functionality or system services.
- Depend on the type of software, expected users and the type of system where the software is used.
- Functional user requirements may be high-level statements of what the system should do, but functional system requirements should describe the system services in detail.

Examples of functional requirements
- The user shall be able to search either all of the initial set of databases or select a subset from it.
- The system shall provide appropriate viewers for the user to read documents in the document store.
- Every order shall be allocated a unique identifier (ORDER_ID), which the user shall be able to copy to the account's permanent storage area.
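The third example (unique ORDER_ID allocation) is the kind of functional requirement that can be checked mechanically. The sketch below is one hypothetical way to satisfy it; the counter-based scheme and the "ORD-" format are illustrative assumptions, not something the requirement itself mandates.

```python
import itertools

# Illustrative implementation of "every order shall be allocated a unique
# identifier (ORDER_ID)". The numbering scheme is an assumption.

_order_counter = itertools.count(1)

def allocate_order_id():
    """Return a new, never-repeated ORDER_ID."""
    return f"ORD-{next(_order_counter):06d}"

ids = [allocate_order_id() for _ in range(3)]
assert len(ids) == len(set(ids))  # the uniqueness requirement holds
```

A requirement written at this level of precision lends itself directly to such an acceptance check, which is one argument for stating system requirements in detail.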

Requirements imprecision
Problems arise when requirements are not precisely stated: ambiguous requirements may be interpreted in different ways by developers and users. Consider the term "appropriate viewers":
- User intention - a special-purpose viewer for each different document type.
- Developer interpretation - provide a text viewer that shows the contents of the document.

Requirements completeness and consistency
In principle, requirements should be both complete and consistent:
- Complete - they should include descriptions of all facilities required.
- Consistent - there should be no conflicts or contradictions in the descriptions of the system facilities.
In practice, it is impossible to produce a complete and consistent requirements document.


Non-functional requirements
1. Define system properties and constraints, e.g. reliability, response time and storage requirements. Constraints include I/O device capability, system representations, etc.
2. Process requirements may also be specified, mandating a particular CASE system, programming language or development method.
3. Non-functional requirements may be more critical than functional requirements. If these are not met, the system is useless.

Non-functional classifications
1. Product requirements - requirements that specify that the delivered product must behave in a particular way, e.g. execution speed, reliability, etc.
2. Organisational requirements - requirements which are a consequence of organisational policies and procedures, e.g. process standards used, implementation requirements, etc.
3. External requirements - requirements which arise from factors external to the system and its development process, e.g. interoperability requirements, legislative requirements, etc.

Goals and requirements
Non-functional requirements may be very difficult to state precisely, and imprecise requirements may be difficult to verify.
- Goal - a general intention of the user, such as ease of use.
- Verifiable non-functional requirement - a statement using some measure that can be objectively tested.
Goals are helpful to developers as they convey the intentions of the system users.
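To illustrate the distinction, the sketch below turns a vague goal ("the system should be fast") into a verifiable requirement with an objective measure. The operation, the sample size of 100 and the 0.1-second threshold are illustrative assumptions.

```python
import time

# Goal: "the system should be fast" - not testable as stated.
# Verifiable requirement: "95% of operations shall complete within 0.1 s."

def operation():
    # Stand-in for a real system operation under measurement.
    return sum(range(1000))

durations = []
for _ in range(100):
    start = time.perf_counter()
    operation()
    durations.append(time.perf_counter() - start)

within_limit = sum(d <= 0.1 for d in durations)
# Unlike the goal, this statement can pass or fail objectively:
assert within_limit >= 95
```

The same pattern applies to the other measurable properties listed in the module: pick a measure, fix a threshold, and the requirement becomes a test.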

Requirements Measures

Property     Measure
Speed        Processed transactions/second; user/event response time; screen refresh time
Size         KBytes; number of RAM chips
Ease of use  Training time; number of help frames
Reliability  Mean time to failure; probability of unavailability; rate of failure occurrence; availability
Robustness   Time to restart after failure; percentage of events causing failure; probability of data corruption on failure
Portability  Percentage of target-dependent statements; number of target systems
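Two of the reliability measures, mean time to failure (MTTF) and availability, can be computed directly from failure records. The figures below are invented for illustration.

```python
# Illustrative computation of reliability measures from invented failure data.

uptimes_hours = [120.0, 80.0, 100.0]   # operating time before each failure
repairs_hours = [2.0, 1.0, 3.0]        # time to restore after each failure

mttf = sum(uptimes_hours) / len(uptimes_hours)   # mean time to failure
mttr = sum(repairs_hours) / len(repairs_hours)   # mean time to repair
availability = mttf / (mttf + mttr)

# mttf = 100.0 hours, mttr = 2.0 hours, availability = 100/102, about 0.98
```

Expressing a reliability requirement against such a measure ("availability shall exceed 0.98 over any 30-day period", say) makes it objectively testable.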

Domain requirements
1. Derived from the application domain and describe system characteristics and features that reflect the domain.
2. May be new functional requirements, constraints on existing requirements, or definitions of specific computations.
3. If domain requirements are not satisfied, the system may be unworkable.

Domain requirements problems
1. Understandability - requirements are expressed in the language of the application domain, which is often not understood by the software engineers developing the system.
2. Implicitness - domain specialists understand the area so well that they do not think of making the domain requirements explicit.

User requirements
These should describe functional and non-functional requirements so that they are understandable by system users who don't have detailed technical knowledge. User requirements are defined using natural language, tables and diagrams.


Problems with natural language
- Lack of clarity - precision is difficult without making the document difficult to read.
- Requirements confusion - functional and non-functional requirements tend to be mixed up.
- Requirements amalgamation - several different requirements may be expressed together.

Guidelines for writing requirements
- Invent a standard format and use it for all requirements.
- Use language in a consistent way: use "shall" for mandatory requirements and "should" for desirable requirements.
- Use text highlighting to identify key parts of the requirement.
- Avoid the use of computer jargon.
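The "standard format" guideline can be sketched as a small record type that also applies the shall/should convention. The field names and the keyword check below are hypothetical, not a prescribed format.

```python
from dataclasses import dataclass

# Hypothetical standard format for a single requirement, with a check for
# the "shall = mandatory, should = desirable" wording convention.

@dataclass
class Requirement:
    ident: str       # e.g. "FR-1" (identifier scheme is an assumption)
    statement: str   # the requirement text itself
    rationale: str   # why the requirement exists

    @property
    def mandatory(self):
        """True if the statement uses 'shall' (mandatory by convention)."""
        return " shall " in f" {self.statement} "

r = Requirement(
    ident="FR-1",
    statement="The system shall log every failed login attempt.",
    rationale="Needed for the security audit trail.",
)
assert r.mandatory
```

Using one format for every requirement makes documents easier to review and makes simple consistency checks like this one possible.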

System Requirements
- More detailed specifications of user requirements.
- Serve as a basis for designing the system.
- May be used as part of the system contract.
- System requirements may be expressed using system models.

Problems with NL specification
- Ambiguity - the readers and writers of the requirement must interpret the same words in the same way. NL is naturally ambiguous, so this is very difficult.
- Over-flexibility - the same thing may be said in a number of different ways in the specification.
- Lack of modularization - NL structures are inadequate to structure system requirements.

Alternatives to NL specification

Notation - Description
- Structured natural language - This approach depends on defining standard forms or templates to express the requirements specification.
- Design description languages - This approach uses a language like a programming language, but with more abstract features, to specify the requirements by defining an operational model of the system.
- Graphical notations - A graphical language, supplemented by text annotations, is used to define the functional requirements for the system. An early example of such a graphical language was SADT (Ross, 1977; Schoman and Ross, 1977). More recently, use-case descriptions (Jacobson, Christerson et al., 1993) have been used.
- Mathematical specifications - These are notations based on mathematical concepts such as finite-state machines or sets. These unambiguous specifications reduce the arguments between customer and contractor about system functionality. However, most customers don't understand formal specifications and are reluctant to accept them as a system contract.

Structured Language Specifications
- A limited form of natural language may be used to express requirements.
- This removes some of the problems resulting from ambiguity and flexibility and imposes a degree of uniformity on a specification.
- Often best supported using a forms-based approach.

The Requirements Document
The requirements document is the official statement of what is required of the system developers. It should include both a definition and a specification of requirements. It is NOT a design document: as far as possible, it should set out WHAT the system should do rather than HOW it should do it.

Requirements Document Requirements
1. Specify external system behavior.
2. Specify implementation constraints.
3. Be easy to change.
4. Serve as a reference tool for maintenance.

5. Record forethought about the life cycle of the system, i.e. predict changes.
6. Characterize responses to unexpected events.

Requirements Document Structure
1. Introduction
2. Glossary
3. User requirements definition
4. System architecture
5. System requirements specification
6. System models
7. System evolution
8. Appendices
9. Index

Users of a Requirements Document
- System customers - specify the requirements and read them to check that they meet their needs. They specify changes to the requirements.
- Managers - use the requirements document to plan a bid for the system and to plan the system development process.
- System engineers - use the requirements to understand what system is to be developed.
- System test engineers - use the requirements to develop validation tests for the system.
- System maintenance engineers - use the requirements to help understand the system and the relationships between its parts.


Module 5: Requirements Engineering


Requirements engineering (RE) comprises the processes used to discover, analyze and validate system requirements. The processes used for RE vary widely depending on the application domain, the people involved and the organisation developing the requirements. However, there are a number of generic activities common to all processes: requirements elicitation, requirements analysis, requirements validation and requirements management.

A feasibility study decides whether or not the proposed system is worthwhile. It's a short, focused study that checks:
- If the system contributes to organizational objectives
- If the system can be engineered using current technology and within budget
- If the system can be integrated with other systems that are in use

Feasibility study implementation is based on information assessment (what is required), information collection and report writing.

Elicitation and analysis is sometimes called requirements elicitation or requirements discovery. It involves technical staff working with customers to find out about the application domain, the services that the system should provide and the system's operational constraints. It may involve end-users, managers, engineers involved in maintenance, domain experts, trade unions, etc. These are called stakeholders. The problems of requirements analysis include:
- Stakeholders don't know what they really want.
- Stakeholders express requirements in their own terms.
- Different stakeholders may have conflicting requirements.
- Organizational and political factors may influence the system requirements.
- The requirements change during the analysis process; new stakeholders may emerge and the business environment may change.

Requirements validation is concerned with demonstrating that the requirements define the system that the customer really wants. Requirements error costs are high, so validation is very important: fixing a requirements error after delivery may cost up to 100 times the cost of fixing an implementation error. Requirements checking involves:
- Validity: does the system provide the functions which best support the customer's needs?
- Consistency: are there any requirements conflicts?
- Completeness: are all functions required by the customer included?
- Realism: can the requirements be implemented given the available budget and technology?
- Verifiability: can the requirements be checked?

Requirements validation techniques 1. Requirements reviews - a systematic manual analysis of the requirements. Regular reviews should be held while the requirements definition is being formulated. Both client and contractor staff should be involved in reviews. Reviews may be formal (with completed documents) or informal. Good communication between developers, customers and users can resolve problems at an early stage. A review checks for:
- Verifiability: is the requirement realistically testable?
- Comprehensibility: is the requirement properly understood?
- Traceability: is the origin of the requirement clearly stated?
- Adaptability: can the requirement be changed without a large impact on other requirements?

2. Prototyping - using an executable model of the system to check requirements. 3. Test-case generation - developing tests for requirements to check testability. 4. Automated consistency analysis - checking the consistency of a structured requirements description.
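Automated consistency analysis is easiest to picture on a structured requirements description. The following is a toy sketch, assuming an invented format where each requirement is an (id, text) pair; a real tool would check far richer conditions, but the principle is the same.

```python
def check_consistency(requirements):
    """Report duplicate identifiers and empty requirement statements."""
    problems = []
    seen = set()
    for req_id, text in requirements:
        if req_id in seen:
            problems.append(f"duplicate id: {req_id}")
        seen.add(req_id)
        if not text.strip():
            problems.append(f"empty requirement: {req_id}")
    return problems

reqs = [("R1", "The system shall log every transaction."),
        ("R2", ""),
        ("R1", "The system shall purge logs weekly.")]
print(check_consistency(reqs))  # ['empty requirement: R2', 'duplicate id: R1']
```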

Requirements management is the process of managing changing requirements during the requirements engineering process and system development. Requirements are inevitably incomplete and inconsistent, because new requirements emerge during the process as business needs change and a better understanding of the system is developed, and because different viewpoints have different requirements that are often contradictory. Requirements could change because:
- The priority of requirements from different viewpoints changes during the development process.
- System customers may specify requirements from a business perspective that conflict with end-user requirements.
- The business and technical environment of the system changes during its development.

Requirements Management Planning During the requirements engineering process, you have to plan:
- Requirements identification: how requirements are individually identified.
- A change management process: the process followed when analyzing a requirements change.
- Traceability policies: the amount of information about requirements relationships that is maintained.
- CASE tool support: the tool support required to help manage requirements change.

Requirements Change Management
- Problem analysis: discuss the requirements problem and propose a change.
- Change analysis and costing: assess the effects of the change on other requirements.
- Change implementation: modify the requirements document and other documents to reflect the change.
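One way to make this three-stage process explicit is to model a change request as an object that moves through named stages. This is only an illustrative sketch — the stage names and the `ChangeRequest` class are invented here, not part of any standard tool.

```python
from dataclasses import dataclass, field

# Stages mirror the process above: analysis, costing, then implementation.
STAGES = ["proposed", "analysed", "costed", "implemented"]

@dataclass
class ChangeRequest:
    description: str
    stage: str = "proposed"
    history: list = field(default_factory=list)

    def advance(self):
        """Move to the next stage, recording where we came from."""
        i = STAGES.index(self.stage)
        if i + 1 < len(STAGES):
            self.history.append(self.stage)
            self.stage = STAGES[i + 1]
        return self.stage

cr = ChangeRequest("Support two-factor login")
cr.advance(); cr.advance(); cr.advance()
print(cr.stage)  # implemented
```

Recording the history gives the traceability that the planning activity above asks for.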


Module 6: Software Design


Software design is divided into three steps, using the concept of abstraction levels: a) Architectural design: a very high-level description of the top-level components of the system. b) Conceptual design: provides solutions to the requirements and specifications. c) Technical design: provides a fully operational model of the software before it is implemented.

Architectural Design The design process for identifying the sub-systems making up a system and the framework for sub-system control and communication is architectural design. The output of this design process is a description of the software architecture. Architectural design is an early stage of the system design process: it represents the link between the specification and design processes and is often carried out in parallel with some specification activities. It involves identifying major system components and their communications.

Advantages of Explicit Architecture Stakeholder communication - architecture may be used as a focus of discussion by system stakeholders. System analysis - means that analysis of whether the system can meet its nonfunctional requirements is possible. Large-scale reuse - the architecture may be reusable across a range of systems.

Architectural Design Process The architectural design process consists of: System structuring - the system is decomposed into several principal sub-systems and communications between these sub- systems are identified. Control modeling - a model of the control relationships between the different parts of the system is established. Modular decomposition - the identified sub-systems are decomposed into modules.

A sub-system is a system in its own right whose operation is independent of the services provided by other sub-systems.

A module is a system component that provides services to other components but would not normally be considered as a separate system.

Architectural Models Different architectural models may be produced during the design process; each presents a different perspective on the architecture. The architectural models include:
- A static structural model that shows the major system components.
- A dynamic process model that shows the process structure of the system.
- An interface model that defines sub-system interfaces.
- A relationships model, such as a data-flow model.

The architectural model of a system may conform to a generic architectural model or style. An awareness of these styles can simplify the problem of defining system architectures. However, most large systems are heterogeneous and do not follow a single architectural style.

Architecture Attributes
- Performance: localize operations to minimize sub-system communication.
- Security: use a layered architecture with critical assets in inner layers.
- Safety: isolate safety-critical components.
- Availability: include redundant components in the architecture.
- Maintainability: use fine-grain, self-contained components.

System Structuring This is concerned with decomposing the system into interacting sub-systems. The architectural design is normally expressed as a block diagram presenting an overview of the system structure. More specific models, showing how sub-systems share data, are distributed and interface with each other, may also be developed. In the repository model, sub-systems must exchange data. This may be done in two ways:
- Shared data is held in a central database or repository and may be accessed by all sub-systems.
- Each sub-system maintains its own database and passes data explicitly to other sub-systems.
When large amounts of data are to be shared, the repository model of sharing is most commonly used.

Repository Model Characteristics
Advantages:
- Efficient way to share large amounts of data.
- Sub-systems need not be concerned with how data is produced.
- Centralised management, e.g. backup, security, etc.
- The sharing model is published as the repository schema.
Disadvantages:
- Sub-systems must agree on a repository data model, which is inevitably a compromise.
- Data evolution is difficult and expensive.
- No scope for sub-system-specific management policies.
- Difficult to distribute efficiently.
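The key point of the repository model — producers and consumers never talk to each other, only to the shared store — can be sketched in a few lines. The `parser`/`analyser` sub-system names below are invented for illustration.

```python
class Repository:
    """A central shared data store accessed by all sub-systems."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

repo = Repository()

def parser(repo, source):
    # Sub-system 1 writes its result into the repository.
    repo.put("tokens", source.split())

def analyser(repo):
    # Sub-system 2 reads the data without knowing which sub-system produced it.
    return len(repo.get("tokens"))

parser(repo, "shared central data store")
print(analyser(repo))  # 4
```

Note that both sub-systems must agree on the key name and data shape — the "repository schema" compromise listed under the disadvantages.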

Client-Server Architecture A distributed system model which shows how data and processing are distributed across a range of components:
- A set of stand-alone servers which provide specific services such as printing, data management, etc.
- A set of clients which call on these services.
- A network which allows clients to access the servers.

Advantages:
- Distribution of data is straightforward.
- Makes effective use of networked systems; may require cheaper hardware.
- Easy to add new servers or upgrade existing servers.
Disadvantages:
- No shared data model, so sub-systems may use different data organisations; data interchange may be inefficient.
- Redundant management in each server.
- No central register of names and services; it may be hard to find out what servers and services are available.

Control Models These are concerned with the control flow between sub-systems. They are distinct from the system decomposition model. Centralized control - one sub-system has overall responsibility for control and starts and stops other sub-systems. Event-based control - each sub-system can respond to externally generated events from other sub-systems or the system's environment.

Centralized Control A control sub-system takes responsibility for managing the execution of other sub-systems. The call-return model is a top-down subroutine model where control starts at the top of a subroutine hierarchy and moves downwards; it is applicable to sequential systems. The manager model is applicable to concurrent systems: one system component controls the stopping, starting and coordination of other system processes. It can be implemented in sequential systems as a case statement.
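The call-return model is simply ordinary subroutine calling with one routine in charge. A minimal sketch (the routine names are illustrative, not from the module):

```python
trace = []  # records the order in which control visits each sub-system

def read_input():
    trace.append("read")
    return "raw"

def process(data):
    trace.append("process")
    return data.upper()

def write_output(data):
    trace.append("write")

def controller():
    # The central sub-system: control starts here, moves down into each
    # subroutine in turn, and returns back up after each call.
    data = read_input()
    result = process(data)
    write_output(result)

controller()
print(trace)  # ['read', 'process', 'write']
```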

Event-Driven Systems These are driven by externally generated events, where the timing of the event is outside the control of the sub-systems which process the event. There are two principal event-driven models:
- Broadcast models: an event is broadcast to all sub-systems; any sub-system which can handle the event may do so.
- Interrupt-driven models: used in real-time systems where interrupts are detected by an interrupt handler and passed to some other component for processing.
Other event-driven models include spreadsheets and production systems.

Broadcast Model Effective in integrating sub-systems on different computers in a network. Sub-systems register an interest in specific events; when these occur, control is transferred to the sub-system which can handle the event. The control policy is not embedded in the event and message handler: sub-systems decide on the events of interest to them. However, sub-systems don't know if or when an event will be handled.
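A register-and-broadcast event handler can be sketched in a few lines; the event name and handler roles below are invented for illustration.

```python
from collections import defaultdict

handlers = defaultdict(list)  # event name -> interested sub-systems

def register(event_name, handler):
    """A sub-system registers its interest in a specific event."""
    handlers[event_name].append(handler)

def broadcast(event_name, payload):
    """The sender does not know who, if anyone, will handle the event."""
    for handler in handlers[event_name]:
        handler(payload)

log = []
register("sensor_update", lambda p: log.append(("display", p)))
register("sensor_update", lambda p: log.append(("archive", p)))

broadcast("sensor_update", 21.5)
broadcast("unheard", None)  # no registered handler: silently dropped
print(log)  # [('display', 21.5), ('archive', 21.5)]
```

The silently dropped event demonstrates the drawback noted above: the broadcaster cannot tell whether anything handled it.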

Interrupt-Driven Systems Used in real-time systems where a fast response to an event is essential. There are known interrupt types, with a handler defined for each type. Each type is associated with a memory location, and a hardware switch causes transfer to its handler. This allows a fast response but is complex to program and difficult to validate.

Modular Decomposition Another structural level where sub-systems are decomposed into modules. Two modular decomposition models are covered:
- An object model, where the system is decomposed into interacting objects.
- A data-flow model, where the system is decomposed into functional modules which transform inputs to outputs; also known as the pipeline model.
If possible, decisions about concurrency should be delayed until modules are implemented.

Object Models Structure the system into a set of loosely coupled objects with well-defined interfaces. Object-oriented decomposition is concerned with identifying object classes, their attributes and operations. When implemented, objects are created from these classes and some control model is used to coordinate object operations.

Data-Flow Models Functional transformations process their inputs to produce outputs. This may be referred to as a pipe-and-filter model (as in the UNIX shell). Variants of this approach are very common. When transformations are sequential, this is a batch sequential model, which is extensively used in data-processing systems. It is not really suitable for interactive systems.
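Python generators give a compact way to sketch a pipe-and-filter pipeline: each filter consumes the stream produced by the previous one, just as in a UNIX shell pipeline. The filters below are invented examples.

```python
def source(lines):
    # First stage: emit the raw input stream.
    for line in lines:
        yield line

def strip_blank(stream):
    # Filter: drop blank lines.
    return (line for line in stream if line.strip())

def upper(stream):
    # Filter: transform each remaining line.
    return (line.upper() for line in stream)

pipeline = upper(strip_blank(source(["alpha", "", "beta"])))
print(list(pipeline))  # ['ALPHA', 'BETA']
```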

Conceptual Design The conceptual design answers questions such as these:
- Where will the data come from?
- What will happen to the data in the system?
- What will the system look like to users?
- What choices will be offered to users?
- What is the timing of events?
- What will the reports and screens look like?

A good conceptual design should have the following characteristics:
- It is written in the client's language.
- It contains no technical jargon.
- It describes the functions of the system.
- It is independent of implementation.
- It is linked with the requirements documents.

The problem is that there doesn't seem to be much difference between the conceptual design and the requirements. The difference is this: the requirements describe what is required, while the conceptual design describes what will be provided. For example:
Requirement: the user must be able to open a file.
Conceptual design: the user opens a file by performing the following actions: select File from the main menu; the system displays a list of files; select a file; select OK or CANCEL.

Technical Design This is the description of the hardware and software components and their functions. It also describes the data structures and data flow. It usually shows how the conceptual design can be implemented by a collection of components. Each component has an interface. Components interact through their interfaces.


Fundamental Design Issues Principles provide the underlying basis for development and evaluation of techniques. The different fundamental design concepts include: a) Abstraction b) Information hiding c) Structure d) Modularity e) Concurrency f) Verification g) Aesthetics

Design Qualities A good design usually consists of: a) Modularity b) High cohesion c) Low coupling d) Information hiding e) Fault tolerance f) Simplicity

Tips & Techniques to Achieve Design Quality a) Prototyping b) Dividing responsibilities c) Design rationale d) Efficiency and design

Design Reviews Reviewing is an important part of software engineering (and engineering in general). Any product of development can be reviewed: requirements, specification, design, implementation, test results, etc.


Reviewing is generally performed by a review team, i.e. the people responsible for the product plus others who can study the product with fresh eyes; including the authors might not be desirable. It is important that reviewing is an egoless activity. A reviewing team does not usually correct the errors that its members find; instead, the views of the team are recorded accurately. Three design reviews that are carried out are: a) The preliminary design review: a meeting of clients and suppliers where the conceptual design is reviewed and approved by the clients.

b) The critical design review: a meeting of designers; the requirements team (sometimes called analysts) may also be present. Its purpose is to ensure that the conceptual and technical designs are free of defects and meet the requirements.

c) The program design review: a meeting of designers and developers. Its purpose is to ensure that the detailed design is feasible and that the implementation team will be able to understand it.


Module 7: Object-Oriented Design


This is the designing of systems using self-contained objects and object classes.

Characteristics of OOD
- Objects are abstractions of real-world or system entities and manage themselves.
- Objects are independent and encapsulate state and representation information.
- System functionality is expressed in terms of object services.
- Shared data areas are eliminated; objects communicate by message passing.
- Objects may be distributed and may execute sequentially or in parallel.

Advantages of OOD
- Easier maintenance: objects may be understood as stand-alone entities.
- Objects are appropriate reusable components.
- For some systems, there may be an obvious mapping from real-world entities to system objects.

Object-Oriented Development Object-oriented analysis, design and programming are related but distinct. OOA is concerned with developing an object model of the application domain. OOD is concerned with developing an object-oriented system model to implement requirements. OOP is concerned with realizing an OOD using an OO programming language such as Java or C++.

Objects and Object Classes Objects are entities in software systems which represent instances of real-world and system entities. Object classes are templates for objects. They may be used to create objects. Object classes may inherit attributes and services from other object classes.

Objects An object is an entity which has a state and a defined set of operations which operate on that state. The state is represented as a set of object attributes. The operations associated with the object provide services to other objects (clients), which request these services when some computation is required. Objects are created according to an object class definition. An object class definition serves as a template for objects: it includes declarations of all the attributes and services which should be associated with an object of that class.
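In code, the class-as-template idea looks like this (a small illustrative sketch — the `BankAccount` class is invented for this example, not from the module):

```python
class BankAccount:
    """Object class: a template declaring attributes and operations."""
    def __init__(self, owner, balance=0):
        self.owner = owner      # attribute: part of the object's state
        self.balance = balance  # attribute: part of the object's state

    def deposit(self, amount):
        # Operation: a service offered to client objects.
        self.balance += amount

    def query_balance(self):
        # Operation: lets clients read the state without touching it directly.
        return self.balance

acct = BankAccount("Wanjiku", 100)  # an object created from the class template
acct.deposit(50)
print(acct.query_balance())  # 150
```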

Object Communication Conceptually, objects communicate by message passing. A message consists of the name of the service requested by the calling object, copies of the information required to execute the service, and the name of a holder for the result of the service. In practice, messages are often implemented by procedure calls, where name = procedure name and information = parameter list.
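The "message = service name + information" view can be made literal with a small dispatch helper; the `Printer` class and `send` function below are invented to illustrate the mapping onto an ordinary method call.

```python
class Printer:
    def print_doc(self, text, copies):
        return f"{copies} x {text}"

def send(receiver, service, **parameters):
    """Dispatch a message: name = procedure name, information = parameter list."""
    return getattr(receiver, service)(**parameters)

result = send(Printer(), "print_doc", text="report", copies=2)
print(result)  # 2 x report
```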

Generalization and Inheritance Objects are members of classes which define attribute types and operations. Classes may be arranged in a class hierarchy where one class (a super-class) is a generalization of one or more other classes (sub-classes). A sub-class inherits the attributes and operations from its super-class and may add new methods or attributes of its own. Generalization in the UML is implemented as inheritance in OO programming languages.
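A minimal inheritance sketch (the `Employee`/`Manager` classes are invented examples):

```python
class Employee:                      # super-class: the generalization
    def __init__(self, name):
        self.name = name
    def describe(self):
        return f"{self.name}, employee"

class Manager(Employee):             # sub-class: inherits and extends
    def __init__(self, name, dept):
        super().__init__(name)       # reuse the inherited attribute setup
        self.dept = dept             # add an attribute of its own
    def describe(self):              # override the inherited operation
        return f"{self.name}, manager of {self.dept}"

print(Employee("Ali").describe())        # Ali, employee
print(Manager("Njeri", "QA").describe()) # Njeri, manager of QA
```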

Advantages of Inheritance
- It is an abstraction mechanism which may be used to classify entities.
- It is a reuse mechanism at both the design and the programming level.
- The inheritance graph is a source of organizational knowledge about domains and systems.

Problems with Inheritance Object classes are not self-contained: they cannot be understood without reference to their super-classes. Designers have a tendency to reuse the inheritance graph created during analysis, which can lead to significant inefficiency. The inheritance graphs of analysis, design and implementation have different functions and should be separately maintained.


Unified Modeling Language (UML) Software development organizations faced a number of inefficiencies in translating designs from one notation to another, retraining designers to use different notations, and finding appropriate tool support for these notations. The UML is the latest in a long list of object-oriented analysis and design notations that have appeared in the last decade. Its initial intent was to unify the approaches of the three main contributors to the software design field: Grady Booch, Jim Rumbaugh, and Ivar Jacobson. Three factors have contributed to giving UML the potential to become the single, core notation for the next generation of analysis and design methods:
- Political perspective: UML is being developed by a consortium led by three leaders in object-oriented analysis and design methods.
- Marketing perspective: wide acceptance of this work is being sought by submitting the UML specification to the standardization process within the Object Management Group (OMG). This is a fast, efficient standardization process in comparison with more traditional routes such as the IEEE and ISO. Furthermore, OMG has over 900 member organizations.
- Technical perspective: an insightful decision was made early on to concentrate attention on a standard modeling language, not a standard modeling method. A single method for analysis and design is unrealistic given the wide variety of systems being produced. However, a single common notation allows users of different methods to communicate through the commonly described artifacts they produce, and allows tools supporting those methods to interchange artifacts using the common language.
Most people use UML to specify, construct, visualize or document a software-intensive system. While UML maintains a consistent model of a system, users may wish to focus on a variety of specific views of the underlying model, expressed as graphical diagrams. UML supports the following views:
a. Use case diagrams - these capture the system functionality from an end-user perspective. They are typically developed early in the development life-cycle to specify the context of a system and to derive a set of user-focused requirements.
b. Class diagrams - these capture the essential vocabulary of a system. A class diagram is built and refined throughout the development life-cycle, recording the names, properties, and relationships of the primary model elements in the system.
c. Behavior diagrams
- State diagram: these model the states of each piece of a system and the transitions that occur between states over time in response to different stimuli. Using these diagrams, the typical life-cycle of the system can be examined.


- Activity diagram: these capture internally-generated sequences of events within a system. They are a restricted form of state diagram where external events and asynchronous behavior are ignored, to concentrate on the internal processing that takes place within a system.
- Sequence diagram: these explicitly model the message structure of a system and the time-ordered sequence of messages sent and received. As a result, they are useful for helping a designer to optimize the message traffic throughout a system.
- Collaboration diagram: these allow the structure of interactions among pieces of a system to be easily visualized, independently of the sequence of events. Here, a more abstract view of the flow of data through the system is provided.
- Package diagram: these group related model elements into packages, helping to organize large models.

d. Implementation diagrams
- Component diagram: this captures the physical structure of the implementation of a system. It is built as part of the architecture of a system to organize and manage the physical elements of the solution.
- Deployment diagram: this captures the physical topology of the system's hardware. It is built as part of the architecture description of a system to identify features of the deployed system that impact aspects such as system integrity and performance.


Module 8: User Interface Design


System users often judge a system by its interface rather than its functionality. A poorly designed interface can cause a user to make catastrophic errors. Poor user interface design is the reason why so many software systems are never used.

Graphical user interfaces Most users of business systems interact with these systems through graphical interfaces although, in some cases, legacy text-based interfaces are still used.

GUI characteristics
- Windows: multiple windows allow different information to be displayed simultaneously on the user's screen.
- Icons: icons represent different types of information; on some systems icons represent files, on others, processes.
- Menus: commands are selected from a menu rather than typed in a command language.
- Pointing: a pointing device such as a mouse is used for selecting choices from a menu or indicating items of interest in a window.
- Graphics: graphical elements can be mixed with text on the same display.

GUI advantages
- They are easy to learn and use; users without experience can learn to use the system quickly.
- The user may switch quickly from one task to another and can interact with several different applications; information remains visible in its own window when attention is switched.
- Fast, full-screen interaction is possible, with immediate access to anywhere on the screen.


Module 9: Software Development


Rapid Software Development Because of rapidly changing business environments, businesses have to respond to new opportunities and competition. This requires software, and rapid development and delivery is now often the most critical requirement for software systems. Businesses may be willing to accept lower-quality software if rapid delivery of essential functionality is possible. Because of the changing environment, it is often impossible to arrive at a stable, consistent set of system requirements. Therefore a waterfall model of development is impractical, and an approach to development based on iterative specification and delivery is the only way to deliver software quickly.

Characteristics of RAD Processes
- The processes of specification, design and implementation are concurrent.
- There is no detailed specification, and design documentation is minimised.
- The system is developed in a series of increments; end users evaluate each increment and make proposals for later increments.
- System user interfaces are usually developed using an interactive development system.

Advantages of incremental development:
- Accelerated delivery of customer services: each increment delivers the highest-priority functionality to the customer.
- User engagement with the system: users have to be involved in the development, which means the system is more likely to meet their requirements and the users are more committed to the system.

Problems with incremental development:
- Management problems: progress can be hard to judge and problems hard to find because there is no documentation to demonstrate what has been done.
- Contractual problems: the normal contract may include a specification; without a specification, different forms of contract have to be used.
- Validation problems: without a specification, what is the system being tested against?
- Maintenance problems: continual change tends to corrupt software structure, making it more expensive to change and evolve to meet new requirements.

RAD Environment Tools
- Database programming language
- Interface generator
- Links to office applications
- Report generators

Prototyping For some large systems, incremental iterative development and delivery may be impractical; this is especially true when multiple teams are working on different sites. A prototype is an initial version of a system used to demonstrate concepts and try out design options. It can be used:
- In the requirements engineering process, to help with requirements elicitation and validation;
- In design processes, to explore options and develop a UI design;
- In the testing process, to run back-to-back tests.

Such a throw-away prototype is discarded once the system specification has been agreed.


Benefits of prototyping:
- Improved system usability.
- A closer match to users' real needs.
- Improved design quality.
- Improved maintainability.
- Reduced development effort.

Prototypes should be discarded after development as they are not a good basis for a production system:
- It may be impossible to tune the system to meet non-functional requirements.
- Prototypes are normally undocumented.
- The prototype structure is usually degraded through rapid change.
- The prototype probably will not meet normal organisational quality standards.

Agile Methods Dissatisfaction with the overheads involved in design methods led to the creation of agile methods. These methods:
- Focus on the code rather than the design;
- Are based on an iterative approach to software development;
- Are intended to deliver working software quickly and evolve it quickly to meet changing requirements.
Agile methods are probably best suited to small and medium-sized business systems or PC products.


Module 10: Software Evolution


Software change is inevitable, since:
- New requirements emerge when the software is used.
- The business environment changes.
- Errors must be repaired.
- New computers and equipment are added to the system.
- The performance or reliability of the system may have to be improved.

A key problem for organisations is implementing and managing change to their existing software systems. Organisations have huge investments in their software systems - they are critical business assets. To maintain the value of these assets to the business, they must be changed and updated. The majority of the software budget in large companies is devoted to evolving existing software rather than developing new software.

Program evolution dynamics Program evolution dynamics is the study of the processes of system change. After major empirical studies, Lehman and Belady proposed that there were a number of laws which applied to all systems as they evolved. These are sensible observations rather than laws. They are applicable to large systems developed by large organisations, and perhaps less applicable in other cases.

Lehman's Laws


Software Maintenance Modifying a program after it has been put into use. Maintenance does not normally involve major changes to the system's architecture; changes are implemented by modifying existing components and adding new components to the system. The system requirements are likely to change while the system is being developed because the environment is changing, so a delivered system won't fully meet its requirements. Systems are tightly coupled with their environment: when a system is installed in an environment, it changes that environment and therefore changes the system requirements. Systems MUST therefore be maintained if they are to remain useful in an environment. Maintenance is inevitable.

Types of Maintenance
Maintenance to repair software faults - changing a system to correct deficiencies in the way it meets its requirements. Maintenance to adapt software to a different operating environment - changing a system so that it operates in a different environment (computer, OS, etc.) from its initial implementation.

CISY 112

Software Engineering Principles

Maintenance to add to or modify the system's functionality - modifying the system to satisfy new requirements.

Maintenance Costs
Maintenance costs are usually greater than development costs (from 2 to 100 times, depending on the application). They are affected by both technical and non-technical factors and increase as the software is maintained: maintenance corrupts the software structure, making further maintenance more difficult. Ageing software can also have high support costs (e.g. old languages, compilers, etc.).

Maintenance Cost Factors
Team stability - maintenance costs are reduced if the same staff are involved with the system for some time. Contractual responsibility - the developers of a system may have no contractual responsibility for maintenance, so there is no incentive to design for future change. Staff skills - maintenance staff are often inexperienced and have limited domain knowledge. Program age and structure - as programs age, their structure degrades and they become harder to understand and change.


Module 11: Software Cost Estimation


Introduction
Surveys suggest that nearly one-third of projects overrun their budget and are delivered late, and that two-thirds of all major projects substantially overrun their original estimates. Accurate prediction of software development costs is critical to making good management decisions and to determining how much effort and time a project requires, for project managers as well as for system analysts and developers. Without a reasonably accurate cost estimation capability, project managers cannot determine how much time and manpower the project should take, which means the software portion of the project is out of control from its beginning; system analysts cannot make realistic hardware-software trade-off analyses during the system design phase; and software project personnel cannot tell managers and customers that their proposed budget and schedule are unrealistic. This may lead to optimistic over-promising on software development, with inevitable overruns and performance compromises as a consequence. In practice, huge overruns resulting from inaccurate estimates are believed to occur frequently. The overall process of developing a cost estimate for software is not different from the process for estimating any other element of cost. There are, however, aspects of the process that are peculiar to software estimating. Some of these unique aspects are driven by the nature of software as a product; other problems are created by the nature of the estimating methodologies. Software cost estimation is a continuing activity which starts at the proposal stage and continues through the lifetime of a project; continual cost estimation ensures that spending stays in line with the budget. Cost estimation is one of the most challenging tasks in project management: accurately estimating the resources and schedules needed for software development projects.
The software estimation process includes estimating the size of the software product to be produced, estimating the effort required, developing preliminary project schedules, and finally estimating the overall cost of the project. It is very difficult to estimate the cost of software development; many of the problems that plague the development effort itself are responsible for the difficulty encountered in estimating that effort. One of the first steps in any estimate is to understand and define the system to be estimated. Software, however, is intangible, invisible, and intractable, and it is inherently more difficult to understand and estimate a product or process that cannot be seen and touched. Software grows and changes as it is written. When hardware design has been inadequate, or when hardware fails to perform as expected, the "solution" is often attempted through changes to the software. This change may occur late in the development process and sometimes results in unanticipated software growth. After 20 years of research, many software cost estimation methods are available, including algorithmic methods, estimating by analogy, the expert judgment method, the price-to-win method, the top-down method, and the bottom-up method. No one method is necessarily better or worse than another; in fact, their strengths and weaknesses are often complementary. Understanding their strengths and weaknesses is very important when you want to estimate your projects.

Expert Judgment Method
Expert judgment techniques involve consulting a software cost estimation expert, or a group of experts, to use their experience and understanding of the proposed project to arrive at an estimate of its cost. Generally speaking, a group consensus technique, the Delphi technique, is the best approach. Its strengths and weaknesses are complementary to those of the algorithmic method. To provide a sufficiently broad communication bandwidth for the experts to exchange the volume of information necessary to calibrate their estimates against those of the other experts, a wideband Delphi technique was introduced over the standard Delphi technique. The estimating steps using this method are: 1. The coordinator presents each expert with a specification and an estimation form. 2. The coordinator calls a group meeting in which the experts discuss estimation issues with the coordinator and each other. 3. Experts fill out forms anonymously. 4. The coordinator prepares and distributes a summary of the estimates on an iteration form. 5. The coordinator calls a group meeting, focusing on points where the estimates varied widely. 6. Experts fill out forms, again anonymously, and steps 4 to 6 are iterated for as many rounds as appropriate. The wideband Delphi technique has subsequently been used in a number of studies and cost estimation activities. It has been highly successful in combining the free-discussion advantages of the group meeting technique with the anonymous-estimation advantage of the standard Delphi technique. The advantages of this method are:

The experts can factor in differences between past project experience and requirements of the proposed project.


The experts can factor in project impacts caused by new technologies, architectures, applications and languages involved in the future project and can also factor in exceptional personnel characteristics and interactions, etc.

The disadvantages include:


This method cannot be quantified. It is hard to document the factors used by the experts or the expert group. Experts may be biased, optimistic, or pessimistic, even though these effects are reduced by the group consensus. The expert judgment method always complements other cost estimating methods such as the algorithmic method.
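The iteration-form summary circulated by the coordinator (step 4 above) can be sketched in a few lines; the expert estimates here are invented for illustration.

```python
# Sketch of the summary a coordinator might circulate between wideband
# Delphi rounds. The estimates (person-months) are hypothetical.
from statistics import median

def round_summary(estimates):
    """Figures typically shown on the anonymous iteration form."""
    return {"low": min(estimates),
            "median": median(estimates),
            "high": max(estimates)}

round_1 = [12, 18, 30, 15, 22]  # five experts, first anonymous round
round_2 = [16, 18, 20, 17, 19]  # estimates converging after discussion
print(round_summary(round_1))   # {'low': 12, 'median': 18, 'high': 30}
print(round_summary(round_2))
```

In practice the coordinator also records the rationale behind widely varying estimates, which is what the step-5 meeting focuses on.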

Estimating by Analogy
Estimating by analogy means comparing the proposed project to previously completed similar projects where the project development information is known. Actual data from the completed projects are extrapolated to estimate the proposed project. This method can be used either at the system level or at the component level. Estimating by analogy is relatively straightforward; in some respects it is a systematic form of expert judgment, since experts often search for analogous situations to inform their opinion. The steps in estimating by analogy are: 1. Characterizing the proposed project. 2. Selecting the most similar completed projects whose characteristics have been stored in the historical database. 3. Deriving the estimate for the proposed project from the most similar completed projects by analogy. The main advantages of this method are: the estimates are based on actual project data; the estimator's past experience and knowledge, which are not easy to quantify, can be used; and the differences between the completed and the proposed projects can be identified and their impacts estimated.

However, there are also some problems with this method. We have to determine how best to describe projects. The choice of variables must be restricted to information that is available at the point the prediction is required; possibilities include the type of application domain, the number of inputs, the number of distinct entities referenced, the number of screens, and so forth. Even once we have characterized the project, we have to determine the similarity and how much confidence we can place in the analogies. Too few analogies might lead to maverick projects being used; too many might dilute the effect of the closest analogies. Martin Shepperd et al. introduced a method of finding analogies by measuring Euclidean distance in an n-dimensional space where each dimension corresponds to a variable. Values are standardized so that each dimension contributes equal weight to the process of finding analogies. Generally speaking, two analogies are the most effective. Finally, we have to derive an estimate for the new project using known effort values from the analogous projects; possibilities include means and weighted means, which give more influence to the closer analogies.
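The distance-based selection of analogies can be sketched as follows; the historical projects, their characterizing variables, and the effort figures are all invented for illustration.

```python
# Sketch of analogy selection by standardized Euclidean distance: each
# variable is scaled so every dimension carries equal weight, then the
# two closest completed projects drive the estimate. Data are made up.
import math

history = [  # (characterizing variables, actual effort in person-months)
    ({"inputs": 40, "entities": 12, "screens": 20}, 120),
    ({"inputs": 15, "entities": 5,  "screens": 8},  35),
    ({"inputs": 60, "entities": 20, "screens": 35}, 210),
]
target = {"inputs": 45, "entities": 14, "screens": 22}

keys = list(target)
all_points = [f for f, _ in history] + [target]
lo = {k: min(p[k] for p in all_points) for k in keys}
hi = {k: max(p[k] for p in all_points) for k in keys}

def scaled(p):
    """Rescale each variable to [0, 1] so dimensions weigh equally."""
    return [(p[k] - lo[k]) / ((hi[k] - lo[k]) or 1) for k in keys]

# Rank completed projects by distance to the target, take the two
# closest analogies, and use their mean actual effort as the estimate.
ranked = sorted(history, key=lambda h: math.dist(scaled(h[0]), scaled(target)))
estimate = sum(effort for _, effort in ranked[:2]) / 2
print(estimate)
```

A weighted mean (weights inversely proportional to distance) would give more influence to the closer analogy, as the text suggests.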

Some studies suggest that estimating by analogy is a superior technique to estimation via algorithmic models in at least some circumstances. It is a more intuitive method, so it is easier to understand the reasoning behind a particular prediction.

Top-Down and Bottom-Up Methods
Top-Down Estimating Method
The top-down estimating method is also called the macro model. Using this method, an overall cost estimate for the project is derived from the global properties of the software project, and the project is then partitioned into various low-level components. The leading method using this approach is the Putnam model. Top-down estimation is most applicable to early cost estimation, when only global properties are known; in the early phase of software development it is very useful because no detailed information is available. The advantages of this method are:

It focuses on system-level activities such as integration, documentation, and configuration management, many of which may be ignored in other estimating methods, so it will not miss the cost of system-level functions. It requires minimal project detail, and it is usually faster and easier to apply.

The disadvantages are:

It often does not identify difficult low-level problems that are likely to escalate costs, and it sometimes tends to overlook low-level components. It provides no detailed basis for justifying decisions or estimates.

Because it provides a global view of the software project, it usually embodies some effective features, such as the cost-time trade-off capability that exists in the Putnam model.

Bottom-Up Estimating Method
Using the bottom-up estimating method, the cost of each software component is estimated and the results are then combined to arrive at an estimated cost for the overall project. It aims at constructing the estimate of a system from the knowledge accumulated about the small software components and their interactions. The leading method using this approach is COCOMO's detailed model. The advantages:

It permits the software group to handle an estimate in an almost traditional fashion and to handle estimate components for which the group has a feel. It is more stable because the estimation errors in the various components have a chance to balance out.

The disadvantages:

It may overlook many of the system-level costs (integration, configuration management, quality assurance, etc.) associated with software development. It may be inaccurate because the necessary information may not be available in the early phases. It tends to be more time-consuming. It may not be feasible when either time or personnel is limited.

Algorithmic Method
The algorithmic method provides mathematical equations to perform software estimation. These equations are based on research and historical data, and use inputs such as source lines of code (SLOC), the number of functions to perform, and other cost drivers such as language, design methodology, skill levels, and risk assessments. Algorithmic methods have been studied extensively, and many models have been developed, such as the COCOMO models, the Putnam model, and function-point-based models. General advantages: it is able to generate repeatable estimates; it is easy to modify input data and to refine and customize formulas; it is efficient and able to support a family of estimates or a sensitivity analysis; and it is objectively calibrated to previous experience.

General disadvantages:


It is unable to deal with exceptional conditions, such as exceptional personnel, exceptional teamwork, or an exceptional match between skill levels and tasks. Poor sizing inputs and inaccurate cost driver ratings will result in inaccurate estimates. Some experience and factors cannot be easily quantified.

COCOMO Models
One very widely used algorithmic software cost model is the Constructive Cost Model (COCOMO). The basic COCOMO model has a very simple form:

MAN-MONTHS = K1 x (Thousands of Delivered Source Instructions)^K2

where K1 and K2 are two parameters dependent on the application and development environment. Estimates from the basic COCOMO model can be made more accurate by taking into account other factors concerning the required characteristics of the software to be developed, the qualifications and experience of the development team, and the software development environment. Some of these factors are: 1. Complexity of the software 2. Required reliability 3. Size of the database 4. Required efficiency (memory and execution time) 5. Analyst and programmer capability 6. Experience of the team in the application area 7. Experience of the team with the programming language and computer 8. Use of tools and software engineering practices. Many of these factors affect the person-months required by an order of magnitude or more. COCOMO assumes that the system and software requirements have already been defined and that these requirements are stable; this is often not the case. The COCOMO model is a regression model, based on the analysis of 63 selected projects, with KDSI as the primary input. The problems are: 1. In the early phases of the system life cycle, the size estimate has great uncertainty, so an accurate cost estimate cannot be arrived at.


2. The cost estimation equation is derived from the analysis of 63 selected projects, so it usually has problems outside of that particular environment; for this reason, recalibration is necessary. According to Kemerer's research, the average error for all versions of the model is 601%, and the detailed and intermediate models seem little better than the basic model. The first version of the COCOMO model was originally developed in 1981. It has since experienced increasing difficulty in estimating the cost of software developed with new life-cycle processes and capabilities, including rapid-development process models, reuse-driven approaches, object-oriented approaches, and software process maturity initiatives. For these reasons, the newest version, COCOMO 2.0, was developed. The major new modeling capabilities of COCOMO 2.0 are: a tailorable family of software size models involving object points, function points, and source lines of code; nonlinear models for software reuse and reengineering; an exponent-driver approach for modeling relative software diseconomies of scale; and several additions, deletions, and updates to previous COCOMO effort-multiplier cost drivers. The new model also serves as a framework for an extensive data collection and analysis effort to further refine and calibrate its estimation capabilities.
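The basic COCOMO equation can be sketched directly. The (a, b) coefficient pairs below are the commonly published values for Boehm's basic model; the example project size is hypothetical.

```python
# Basic COCOMO effort sketch: effort = a * (KDSI ** b), with the mode
# selecting the published coefficient pair. The 32-KDSI input is a
# made-up example, not data from the text.
COEFFICIENTS = {
    "organic":       (2.4, 1.05),  # small teams, familiar environment
    "semi-detached": (3.0, 1.12),  # intermediate
    "embedded":      (3.6, 1.20),  # tight hardware/operational constraints
}

def basic_cocomo(kdsi, mode="organic"):
    """Estimated effort in person-months for a size in thousands of DSI."""
    a, b = COEFFICIENTS[mode]
    return a * kdsi ** b

effort = basic_cocomo(32, mode="semi-detached")
print(f"{effort:.1f} person-months")  # roughly 146 person-months
```

Note how the exponent b > 1 encodes a diseconomy of scale: doubling the size more than doubles the estimated effort.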

Putnam Model
Another popular software cost model is the Putnam model. The form of this model is:

Size = C x B^(1/3) x T^(4/3)

which, rearranged for total effort, gives:

B = (1/T^4) x (Size/C)^3

where T is the required development time in years, Size is estimated in LOC, B is the total effort, and C is a technology constant dependent on the development environment, determined from historical data of past projects. Typical ratings: C = 2,000 (poor), C = 8,000 (good), C = 12,000 (excellent). The Putnam model is very sensitive to the development time: decreasing the development time can greatly increase the person-months needed for development. One significant problem with the Putnam model is that it is based on knowing, or being able to estimate accurately, the size (in lines of code) of the software to be developed. There is often great uncertainty in the software size, which may result in inaccurate cost estimates. According to Kemerer's research, the error percentage of SLIM, a Putnam-model-based method, is 772.87%.
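The effort form of the Putnam equation makes the model's schedule sensitivity easy to demonstrate; the size, technology constant, and schedule values below are illustrative only.

```python
# Sketch of the Putnam effort equation B = (Size/C)**3 / T**4.
# C and the inputs are illustrative, not calibrated values.
def putnam_effort(size_loc, c, t_years):
    """Total effort B implied by size, technology constant C, and time T."""
    return (size_loc / c) ** 3 / t_years ** 4

e_two_years = putnam_effort(100_000, c=8_000, t_years=2.0)
e_one_year  = putnam_effort(100_000, c=8_000, t_years=1.0)
print(e_one_year / e_two_years)  # 16.0: halving the schedule costs 16x effort
```

The T**4 term in the denominator is exactly why the text calls the model "very sensitive to the development time".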


Function Point Analysis Based Methods
The two algorithmic models above require the estimator to estimate the number of SLOC in order to obtain person-month and duration estimates. Function point analysis is another method of quantifying the size and complexity of a software system, in terms of the functions that the system delivers to the user. A number of proprietary models for cost estimation have adopted a function point type of approach, such as ESTIMACS and SPQR/20. The function point measurement method was developed by Allan Albrecht at IBM and published in 1979. He argued that function points offer several significant advantages over SLOC counts as a size measurement. There are two steps in counting function points: 1. Counting the user functions. The raw function counts are arrived at by considering a linear combination of five basic software components - external inputs, external outputs, external inquiries, logical internal files, and external interfaces - each at one of three complexity levels: simple, average, or complex. The sum of these numbers, weighted according to the complexity level, is the function count (FC). 2. Adjusting for environmental processing complexity. The final function point count is arrived at by multiplying FC by an adjustment factor that is determined by considering 14 aspects of processing complexity. This adjustment factor allows the FC to be modified by at most +35% or -35%.

The collection of function point data has two primary motivations. One is the desire by managers to monitor levels of productivity; another is its use in estimating software development cost. Some cost estimation methods are based on a function point type of measurement, such as ESTIMACS and SPQR/20. SPQR/20 is based on a modified function point method: whereas traditional function point analysis is based on evaluating 14 factors, SPQR/20 separates complexity into three categories - complexity of algorithms, complexity of code, and complexity of data structures. ESTIMACS is a proprietary system designed to give a development cost estimate at the conception stage of a project; it contains a module which estimates function points as a primary input for estimating cost. The advantages of function point analysis based models are: 1. Function points can be estimated from requirements specifications or design specifications, making it possible to estimate development cost in the early phases of development. 2. Function points are independent of the language, tools, or methodologies used for implementation. 3. Non-technical users have a better understanding of what function points are measuring, since function points are based on the system user's external view of the system.
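The two counting steps can be sketched as follows. The complexity weights are the commonly published Albrecht values; the component counts and the 14 complexity ratings are a made-up example project.

```python
# Sketch of Albrecht-style function point counting. Weights per
# component at (simple, average, complex) are the usual published
# values; the example counts below are hypothetical.
WEIGHTS = {
    "external_inputs":     (3, 4, 6),
    "external_outputs":    (4, 5, 7),
    "external_inquiries":  (3, 4, 6),
    "internal_files":      (7, 10, 15),
    "external_interfaces": (5, 7, 10),
}
LEVEL = {"simple": 0, "average": 1, "complex": 2}

def function_points(counts, complexity_ratings):
    """counts: {component: {level: n}}; complexity_ratings: 14 ints, 0-5."""
    # Step 1: weighted raw function count (FC).
    fc = sum(WEIGHTS[comp][LEVEL[lvl]] * n
             for comp, levels in counts.items()
             for lvl, n in levels.items())
    # Step 2: value adjustment factor; moves FC by at most +/-35%.
    vaf = 0.65 + 0.01 * sum(complexity_ratings)
    return fc * vaf

counts = {"external_inputs": {"average": 10},
          "external_outputs": {"simple": 5},
          "internal_files": {"complex": 2}}
print(function_points(counts, [3] * 14))  # FC = 90, VAF = 1.07 -> about 96.3
```

With all 14 ratings at 0 the factor bottoms out at 0.65, and with all at 5 it peaks at 1.35, matching the +/-35% range stated above.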

From Kemerer's research, the mean error percentage of ESTIMACS is only 85.48%. So, considering the 601% error with COCOMO and 771% with SLIM, function-point-based cost estimation methods are arguably the better approach, especially in the early phases of development.

The Selection and Use of Estimation Methods
The Selection of Estimation Methods
From the above comparison, we know that no one method is necessarily better or worse than another; in fact, their strengths and weaknesses are often complementary. Experience suggests that a combination of models with analogy or expert judgment estimation methods is useful for obtaining reliable, accurate cost estimates for software development. For known projects and project parts, we should use the expert judgment method or the analogy method if similar projects can be found, since these are fast and, under such circumstances, reliable. For large, lesser-known projects, it is better to use an algorithmic model. In this case, many researchers recommend estimation models that do not require SLOC as an input; COCOMO 2.0 is a strong candidate because it can use not only source lines of code (SLOC) but also object points and unadjusted function points as metrics for sizing a project. If we approach cost estimation by parts, we may use expert judgment for some known parts. In this way we can take advantage of both the rigor of models and the speed of expert judgment or analogy. Because the advantages and disadvantages of each technique are complementary, a combination will reduce the negative effect of any one technique, augment their individual strengths, and help to cross-check one method against another.

Use of Estimation Methods
It is common to apply several cost estimation methods to estimate the cost of software development, but it is very important to continually re-estimate cost and to compare targets against actual expenditure at each major milestone. This keeps the status of the project visible and helps to identify necessary corrections to budget and schedule as soon as they occur. At every estimation and re-estimation point, iteration is an important tool for improving estimation quality: the estimator can use several estimation techniques and check whether their estimates converge. The other advantages are as follows:

Different estimation methods may use different data. This results in better coverage of the knowledge base for the estimation process. It can help to identify cost components that cannot be dealt with, or were overlooked, in one of the methods.

Different viewpoints and biases can be taken into account and reconciled. A competitive contract bid, a high business priority to keep costs down, or a small market window with resulting tight deadlines tends to produce optimistic estimates. A production schedule established by the developers is usually more on the pessimistic side, to avoid committing to a schedule and budget one cannot meet.

It is also very important to compare actual cost and time to the estimates, even if only one or two techniques are used; this provides the necessary feedback to improve estimation quality in the future. In general, a historical database for cost estimation should be set up for future use. Identifying the goals of the estimation process is very important because it will influence the effort spent in estimating, its accuracy, and the models used. Tight schedules with high risks require more accurate estimates than loosely defined projects with a relatively open-ended schedule. Estimators should look at the quality of the data upon which estimates are based, and at the various objectives.

Model Calibration
The act of calibration standardizes a model. Many models are developed for specific situations and are, by definition, calibrated to those situations; such models are usually not useful outside of their particular environment. Calibration is needed to increase the accuracy of a general model by making it temporarily a specific model for whatever product it has been calibrated for; it is, in a sense, customizing a generic model. Items which can be calibrated in a model include product types, operating environments, labor rates and factors, various relationships between functional cost items, and even the method of accounting used by a contractor. All general models should be standardized (i.e. calibrated), unless used by an experienced modeler with the appropriate education, skills, tools, and experience in the technology being modeled. Calibration is the process of determining the deviation from a standard in order to compute correction factors; for cost estimating models, the standard is historical actual costs. The calibration procedure is theoretically very simple: run the model with normal inputs (known parameters such as software lines of code) against items for which the actual costs are known, compare the estimates with the actual costs, and take the average deviation as a correction factor for the model. In essence, the calibration factor obtained is really only good for the type of inputs that were used in the calibration runs. For a general total-model calibration, a wide range of components with actual costs needs to be used. Better yet, numerous calibrations should be performed with different types of components in order to obtain a set of calibration factors for the various possible estimating situations.
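The procedure described above - running the model against items with known actual costs and deriving a correction factor from the average deviation - can be sketched as follows. The uncalibrated model and the completed-project data are hypothetical.

```python
# Sketch of model calibration: the correction factor is the mean ratio
# of actual cost to the model's estimate over completed projects with
# known actuals. Model and data are illustrative only.
def correction_factor(model, calibration_set):
    """Mean actual/estimated ratio over (inputs, actual_cost) pairs."""
    ratios = [actual / model(inputs) for inputs, actual in calibration_set]
    return sum(ratios) / len(ratios)

def model(kloc):  # an uncalibrated, generic effort model
    return 2.4 * kloc ** 1.05

completed = [(10, 40.0), (25, 95.0), (50, 190.0)]  # (KLOC, actual PM)
k = correction_factor(model, completed)

def calibrated(kloc):
    """The generic model adjusted to this organisation's history."""
    return k * model(kloc)
```

As the text warns, this factor is only trustworthy for inputs resembling those in the calibration set; components of a different type need their own calibration runs.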


Conclusions
Accurate prediction of software development costs is a critical issue for making good management decisions and for determining how much effort and time a project requires, for project managers as well as system analysts and developers. Many software cost estimation methods are available, including algorithmic methods, estimating by analogy, the expert judgment method, the top-down method, and the bottom-up method. No one method is necessarily better or worse than another; in fact, their strengths and weaknesses are often complementary, and understanding them is very important when estimating projects. For a specific project, which estimation methods should be used depends on the environment of the project; according to the weaknesses and strengths of the methods, you can choose which to apply. A combination of the expert judgment or analogy method with COCOMO 2.0 is arguably the best approach. For known projects and project parts, we should use the expert judgment method or the analogy method if similar projects can be found, since these are fast and, under such circumstances, reliable. For large, lesser-known projects, it is better to use an algorithmic model like COCOMO 2.0, which will be available in early 1997. If COCOMO 2.0 is not available, ESTIMACS or other function-point-based methods are highly recommended, especially in the early phase of the software life cycle, because SLOC-based methods then carry great uncertainty in size. If there are great uncertainties in size, reuse, cost drivers, etc., the analogy method or the wideband Delphi technique should be considered as the first candidate.
In addition, COCOMO 2.0 has capabilities to deal with current software processes and serves as a framework for an extensive data collection and analysis effort to further refine and calibrate the model's estimation capabilities; in general, COCOMO 2.0 is likely to become very popular. Dr. Barry Boehm and his students are now developing COCOMO 2.0 and expect to have it calibrated and usable in early 1997. Some recommendations: 1. Do not depend on a single cost or schedule estimate. 2. Use several estimating techniques or cost models, compare the results, and determine the reasons for any large variations. 3. Document the assumptions made when making the estimates. 4. Monitor the project to detect when assumptions that turn out to be wrong jeopardize the accuracy of the estimate. 5. Improve the software process: an effective software process can increase the accuracy of cost estimation in a number of ways. 6. Maintain a historical database.


Module 12: Verification and Validation


Verification: Are we building the product right? The software should conform to its specification.
Validation: Are we building the right product? The software should do what the user really needs / wants.
Types of V & V include:
Static V&V - software inspections / reviews, where one analyzes static system representations such as requirements, design, source code, etc.
Dynamic V&V - software testing, where one executes an implementation of the software to examine outputs and operational behaviour.

Program Testing
There are two types of program testing: a) Defect testing: tests designed to discover system defects; sometimes referred to as coverage testing. b) Statistical testing: tests designed to assess system reliability and performance under operational conditions; makes use of an operational profile.

V & V Goals
Verification and validation should establish confidence that the software is fit for purpose. This does NOT usually mean that the software must be completely free of defects. The level of confidence required depends on at least three factors:


1. Software function / purpose: Safety-critical systems require a much higher level of confidence than demonstration-of-concept prototypes. 2. User expectations: Users may tolerate shortcomings when the benefits of use are high. 3. Marketing environment: Getting a product to market early may be more important than finding additional defects.

V&V versus Debugging
V&V and debugging are distinct processes. V&V is concerned with establishing the existence of defects in a program; debugging is concerned with locating and repairing those defects. Defect locating is analogous to detective work or medical diagnosis.

Software Inspections / Reviews
These involve people examining a system representation (requirements, design, source code, etc.) with the aim of discovering anomalies and defects. They do not require execution, so they may be used before system implementation, and they can be more effective than testing after system implementation.

Why code inspections can be so effective
Many different defects may be discovered in a single inspection. (In testing, one defect may mask others, so several executions may be required.) They reuse domain and programming-language knowledge. (Reviewers are likely to have seen the types of error that commonly occur.)

Inspections and testing are complementary in that inspections can be used early with nonexecutable entities and with source code at the module and component levels while testing can validate dynamic behaviour and is the only effective technique at the sub-system and system code levels. Inspections cannot directly check nonfunctional requirements such as performance, usability, etc.

Program Inspections / Reviews This is a formalized approach to document walkthroughs or desk-checking. It is intended exclusively for defect DETECTION, not correction. Defects may be logical errors, anomalies in the code that might indicate an erroneous condition, or non-compliance with standards.


Inspection pre-conditions (entry criteria) A precise specification must be available. Team members must be familiar with the organization standards. Syntactically correct code must be available for code inspections.

Inspection pre-conditions (continued) An error checklist should be prepared. Management must accept that inspections will increase costs early in the software process. Management must not use inspection results for staff appraisals.

Inspection Process (diagram omitted: planning, overview, individual preparation, inspection meeting, rework, follow-up)

Inspection Procedure System overview presented to inspection team. Code and associated documents are distributed to inspection team in advance. Inspection takes place and discovered errors are noted. Modifications are made to repair discovered errors (by owner). Re-inspection may or may not be required.

Inspection Teams An inspection team is typically made up of 4-7 members: the Author (owner) of the element being inspected.


Inspectors who find errors, omissions and inconsistencies. A Reader who steps through the element being reviewed with the team. A Moderator who chairs the meeting and notes discovered errors. Other roles are the Scribe and the Chief Moderator.

Code inspection checklists A checklist of common errors should be used to drive individual preparation. The error checklist is programming-language dependent: the weaker the type checking (by the compiler), the larger the checklist. Examples: initialisation, constant naming, loop termination, array bounds, etc.

Cleanroom Software Development The name is derived from the 'Cleanroom' process in semiconductor fabrication. The philosophy is defect avoidance rather than defect removal. It is a software development process based on: Incremental development (if appropriate) Formal specification Static verification using correctness arguments Statistical testing to certify program reliability NO defect testing!

Cleanroom Process Teams Specification team: responsible for developing and maintaining the system specification


Development team: responsible for developing and verifying the software. The software is NOT executed or even compiled during this process. Certification team: responsible for developing a set of statistical tests to measure reliability after development.

Cleanroom Process Evaluation Results at IBM and elsewhere have been very impressive, with very few faults discovered in delivered systems. Independent assessment shows that the process is no more expensive than other approaches. It is, however, not clear how this approach can be transferred to an environment with less skilled engineers.


Module 13: Software Testing


Introduction Because of the fallibility of its human designers and its own abstract, complex nature, software development must be accompanied by quality assurance activities. It is not unusual for developers to spend 40% of the total project time on testing. For life-critical software (e.g. flight control, reactor monitoring), testing can cost 3 to 5 times as much as all other activities combined. The destructive nature of testing requires that the developer discard preconceived notions of the correctness of his/her developed software.

Software Testing Fundamentals Testing objectives include: 1. Testing is a process of executing a program with the intent of finding an error. 2. A good test case is one that has a high probability of finding an as-yet-undiscovered error. 3. A successful test is one that uncovers an as-yet-undiscovered error. Testing should systematically uncover different classes of errors in a minimum amount of time and with a minimum amount of effort. A secondary benefit of testing is that it demonstrates that the software appears to be working as stated in the specifications. The data collected through testing can also provide an indication of the software's reliability and quality. But testing cannot show the absence of defects -- it can only show that software defects are present.

White Box Testing White box testing is a test case design method that uses the control structure of the procedural design to derive test cases. Test cases can be derived that 1. guarantee that all independent paths within a module have been exercised at least once, 2. exercise all logical decisions on their true and false sides, 3. execute all loops at their boundaries and within their operational bounds, and 4. exercise internal data structures to ensure their validity.


The Nature of Software Defects The likelihood of logic errors and incorrect assumptions is inversely proportional to the probability that a program path will be executed. General processing tends to be well understood, while special-case processing tends to be prone to errors. We often believe that a logical path is unlikely to be executed when in fact it may be executed on a regular basis. Our unconscious assumptions about control flow and data lead to design errors that can only be detected by path testing. Typographical errors are random.

Basis Path Testing This method enables the designer to derive a logical complexity measure of a procedural design and use it as a guide for defining a basis set of execution paths. Test cases that exercise the basis set are guaranteed to execute every statement in the program at least once during testing.

Flow Graphs Flow graphs can be used to represent control flow in a program and can help in the derivation of the basis set. Each flow graph node represents one or more procedural statements. The edges between nodes represent flow of control. An edge must terminate at a node, even if the node does not represent any useful procedural statements. A region in a flow graph is an area bounded by edges and nodes. Each node that contains a condition is called a predicate node. Cyclomatic complexity is a metric that provides a quantitative measure of the logical complexity of a program. It defines the number of independent paths in the basis set and thus provides an upper bound for the number of tests that must be performed.

The Basis Set An independent path is any path through a program that introduces at least one new set of processing statements (must move along at least one new edge in the path). The basis set is not unique. Any number of different basis sets can be derived for a given procedural design. Cyclomatic complexity, V(G), for a flow graph G is equal to 1. The number of regions in the flow graph. 2. V(G) = E - N + 2 where E is the number of edges and N is the number of nodes. 3. V(G) = P + 1 where P is the number of predicate nodes.
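As a sketch of these formulas, the three computations can be checked against one another on a small flow graph. The graph below is a made-up example (an if/else inside a straight-line routine), not one taken from the text:

```python
# Hypothetical flow graph: nodes are numbered statements, edges are
# control-flow transitions. Node 2 is an if/else, so it has two
# outgoing edges (a predicate node).
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 6)]
nodes = {n for edge in edges for n in edge}

E = len(edges)   # number of edges
N = len(nodes)   # number of nodes

# Predicate nodes are those with more than one outgoing edge.
out_degree = {}
for src, _ in edges:
    out_degree[src] = out_degree.get(src, 0) + 1
P = sum(1 for d in out_degree.values() if d > 1)

v_edges_nodes = E - N + 2   # V(G) = E - N + 2
v_predicates = P + 1        # V(G) = P + 1
print(v_edges_nodes, v_predicates)  # both formulas agree: 2
```

With V(G) = 2, the basis set for this graph contains two independent paths (one through each branch of the if/else), and two test cases suffice to execute every statement at least once.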

Deriving Test Cases
1. From the design or source code, derive a flow graph.
2. Determine the cyclomatic complexity of this flow graph. Even without a flow graph, V(G) can be determined by counting the number of conditional statements in the code.
3. Determine a basis set of linearly independent paths. Predicate nodes are useful for determining the necessary paths.
4. Prepare test cases that will force execution of each path in the basis set. Each test case is executed and compared to the expected results.

Automating Basis Set Derivation The derivation of the flow graph and the set of basis paths is amenable to automation. A software tool to do this can be developed using a data structure called a graph matrix. A graph matrix is a square matrix whose size is equivalent to the number of nodes in the flow graph. Each row and column correspond to a particular node and the matrix corresponds to the connections (edges) between nodes. By adding a link weight to each matrix entry, more information about the control flow can be captured. In its simplest form, the link weight is 1 if an edge exists and 0 if it does not. But other types of link weights can be represented:

the probability that an edge will be executed, the processing time expended during link traversal, the memory required during link traversal, or the resources required during link traversal.

Graph theory algorithms can be applied to these graph matrices to help in the analysis necessary to produce the basis set.
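A minimal sketch of a graph matrix, using an assumed four-node flow graph. In the simplest form described above, the link weights are 1 where an edge exists and 0 where it does not:

```python
# Graph matrix for an assumed 4-node flow graph: entry M[i][j] is a
# link weight -- 1 if an edge runs from node i+1 to node j+1, else 0.
M = [
    [0, 1, 1, 0],  # node 1 branches to nodes 2 and 3 (a predicate node)
    [0, 0, 0, 1],  # node 2 -> node 4
    [0, 0, 0, 1],  # node 3 -> node 4
    [0, 0, 0, 0],  # node 4 is the exit node
]

# A row summing to more than 1 marks a node with multiple outgoing
# edges, i.e. a predicate node; then V(G) = P + 1.
P = sum(1 for row in M if sum(row) > 1)
print(P + 1)  # 2
```

Replacing the 0/1 entries with execution probabilities, processing times, or resource figures gives the weighted variants mentioned above without changing the matrix structure.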

Loop Testing This white box technique focuses exclusively on the validity of loop constructs. Four different classes of loops can be defined: 1. simple loops, 2. nested loops, 3. concatenated loops, and 4. unstructured loops.

Simple Loops The following tests should be applied to simple loops where n is the maximum number of allowable passes through the loop: 1. skip the loop entirely, 2. only pass once through the loop, 3. m passes through the loop where m < n, 4. n - 1, n, n + 1 passes through the loop.
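The four rules above yield six pass counts, which can be generated mechanically. The function names and the toy loop below are illustrative, not from the text:

```python
def loop_test_passes(n):
    """Pass counts to exercise for a simple loop with at most n passes."""
    m = n // 2  # an arbitrary interior value with 1 < m < n
    # skip entirely; once through; m < n; and n-1, n, n+1 passes.
    return [0, 1, m, n - 1, n, n + 1]

def sum_first(values, passes):
    """Toy loop under test: sums the first `passes` elements of a list."""
    total = 0
    for i in range(min(passes, len(values))):
        total += values[i]
    return total

data = list(range(10))  # permits at most n = 10 useful passes
for p in loop_test_passes(10):
    print(p, sum_first(data, p))
```

The n + 1 case probes how the loop behaves when asked to exceed its allowable passes; here the `min(...)` guard clamps it, which is exactly the kind of boundary behaviour these tests are meant to expose.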

Nested Loops The testing of nested loops cannot simply extend the technique of simple loops since this would result in a geometrically increasing number of test cases. One approach for nested loops: 1. Start at the innermost loop. Set all other loops to minimum values. 2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimums. Add tests for out-of-range or excluded values. 3. Work outward, conducting tests for the next loop while keeping all other outer loops at minimums and other nested loops to typical values. 4. Continue until all loops have been tested.
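Steps 1 and 2 above can be sketched as follows. The loop bounds and helper name are assumed examples, and only the innermost loop is being varied here:

```python
def loop_test_passes(n):
    """Simple-loop pass counts for a loop with at most n passes."""
    return [0, 1, n // 2, n - 1, n, n + 1]

# Hold the outer loop at its minimum while running the simple-loop
# test values on the innermost loop; later iterations of the strategy
# would move outward one loop at a time.
outer_min, inner_max = 1, 5
configs = [(outer_min, inner) for inner in loop_test_passes(inner_max)]
print(configs)
```

This keeps the test-case count linear in the number of loops instead of letting it grow geometrically with every nesting level.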

Concatenated Loops Concatenated loops can be tested as simple loops if each loop is independent of the others. If they are not independent (e.g. the loop counter for one is the loop counter for the other), then the nested approach can be used.

Unstructured Loops This type of loop should be redesigned, not tested!

Other White Box Techniques Other white box testing techniques include:
1. Condition testing: exercises the logical conditions in a program.
2. Data flow testing: selects test paths according to the locations of definitions and uses of variables in the program.
Black Box Testing Introduction Black box testing attempts to derive sets of inputs that will fully exercise all the functional requirements of a system. It is not an alternative to white box testing. This type of testing attempts to find errors in the following categories: 1. incorrect or missing functions, 2. interface errors, 3. errors in data structures or external database access, 4. performance errors, and 5. initialization and termination errors. Tests are designed to answer the following questions: 1. How is the function's validity tested? 2. What classes of input will make good test cases? 3. Is the system particularly sensitive to certain input values? 4. How are the boundaries of a data class isolated? 5. What data rates and data volume can the system tolerate? 6. What effect will specific combinations of data have on system operation? White box testing should be performed early in the testing process, while black box testing tends to be applied during later stages. Test cases should be derived which 1. reduce the number of additional test cases that must be designed to achieve reasonable testing, and 2. tell us something about the presence or absence of classes of errors, rather than an error associated only with the specific test at hand.

Equivalence Partitioning This method divides the input domain of a program into classes of data from which test cases can be derived. Equivalence partitioning strives to define a test case that uncovers classes of errors and thereby reduces the number of test cases needed. It is based on an evaluation of equivalence classes for an input condition. An equivalence class represents a set of valid or invalid states for input conditions. Equivalence classes may be defined according to the following guidelines: 1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined. 2. If an input condition requires a specific value, then one valid and two invalid equivalence classes are defined. 3. If an input condition specifies a member of a set, then one valid and one invalid equivalence class are defined. 4. If an input condition is boolean, then one valid and one invalid equivalence class are defined.
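Guideline 1 can be illustrated with a hypothetical input condition (an age field accepting 18..65; the field, values and function name are invented for the example): a range gives one valid class and two invalid classes, each needing one representative test value.

```python
# Hypothetical input condition: a field accepts an age in 18..65.
# Guideline 1: one valid class (inside the range) and two invalid
# classes (below and above it).
def classify(age):
    if age < 18:
        return "invalid-low"
    if age > 65:
        return "invalid-high"
    return "valid"

# One representative test value per equivalence class suffices.
representatives = {"invalid-low": 10, "valid": 40, "invalid-high": 70}
for expected, value in representatives.items():
    assert classify(value) == expected
print("one test per equivalence class passed")
```

Three test cases cover what exhaustive testing of every age would need thousands for, which is the whole point of partitioning.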

Boundary Value Analysis This method leads to a selection of test cases that exercise boundary values. It complements equivalence partitioning since it selects test cases at the edges of a class. Rather than focusing on input conditions solely, BVA derives test cases from the output domain also. BVA guidelines include: 1. For input ranges bounded by a and b, test cases should include values a and b and just above and just below a and b respectively. 2. If an input condition specifies a number of values, test cases should be developed to exercise the minimum and maximum numbers and values just above and below these limits. 3. Apply guidelines 1 and 2 to the output. 4. If internal data structures have prescribed boundaries, a test case should be designed to exercise the data structure at its boundary.
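Guideline 1 above can be sketched for an assumed range bounded by a = 1 and b = 100 (the bounds and function name are example choices, not from the text):

```python
# Hypothetical range input bounded by a = 1 and b = 100.
a, b = 1, 100

def in_range(x):
    """The validity check under test."""
    return a <= x <= b

# BVA guideline 1: test a and b themselves, plus the values just
# outside each bound, where off-by-one defects tend to live.
bva_values = [a - 1, a, a + 1, b - 1, b, b + 1]
print([(x, in_range(x)) for x in bva_values])
# a-1 and b+1 should be rejected; the other four accepted
```

A defect such as writing `a < x` instead of `a <= x` is invisible to an interior value like 50 but is caught immediately by the test at x = a.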

Cause-Effect Graphing Techniques Cause-effect graphing is a technique that provides a concise representation of logical conditions and corresponding actions. There are four steps: 1. Causes (input conditions) and effects (actions) are listed for a module and an identifier is assigned to each. 2. A cause-effect graph is developed. 3. The graph is converted to a decision table. 4. Decision table rules are converted to test cases.
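Steps 3 and 4 above (decision table to test cases) can be sketched with an invented module: the causes, effects, and business rule here are illustrative assumptions, not from the text.

```python
# Assumed causes: C1 = "customer is registered", C2 = "order > 100".
# Assumed effects: E1 = "apply discount", E2 = "request registration".
# Each rule (a column of the decision table) becomes one test case.
rules = [
    # (C1,    C2,    expected E1, expected E2)
    (True,  True,  True,  False),
    (True,  False, False, False),
    (False, True,  False, True),
    (False, False, False, True),
]

def module_under_test(registered, large_order):
    """Toy module whose behaviour the decision table specifies."""
    apply_discount = registered and large_order
    request_registration = not registered
    return apply_discount, request_registration

for c1, c2, e1, e2 in rules:
    assert module_under_test(c1, c2) == (e1, e2)
print("all decision-table rules converted to passing test cases")
```

The value of the technique is that combinations of conditions are enumerated systematically, so interactions between causes (here C1 and C2 together triggering E1) are not missed.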

References Sommerville, I.: Software Engineering; 7th Edition, 2004. Fairley, R.: Software Engineering Concepts; Tata McGraw Hill, 1997. http://sern.ucalgary.ca/courses/seng/621/W98/johnsonk/cost.htm

