
© Oxford University Press 2011. All rights reserved.

 If we were to test all the possible combinations, the
project's execution time and costs would rise
exponentially.
 We have learned that we cannot test
everything (i.e. all combinations of inputs
and pre-conditions).
 Instead, we must prioritise our testing effort
using a risk-based approach.
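The combinatorial explosion can be made concrete with a back-of-the-envelope calculation. The input model below is hypothetical (not from the text): even a trivial function taking four independent 32-bit integers has far more input combinations than could ever be executed.

```python
# Hypothetical example: a function with four independent 32-bit integer inputs.
values_per_field = 2 ** 32
fields = 4
total = values_per_field ** fields          # every possible input combination
print(f"{total:.2e} combinations")          # ~3.40e+38

# Even at a (very generous) billion tests per second:
years = total / 1e9 / (3600 * 24 * 365)
print(f"~{years:.1e} years to run them all")
```

Exhaustive testing is therefore infeasible for all but toy programs, which is what motivates risk-based prioritisation.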
 Testing should start as early as possible in the
software development life cycle.
Testing decreases cost.

[Figure: Cost of finding and correcting a fault by product lifecycle phase —
rising roughly from 0.1x at Requirements through Design, Coding and Testing
to about 100x at Maintenance.]
 Defects are not evenly distributed in a system;
they are 'clustered'.
 In other words, most defects found during testing are usually
confined to a small number of modules: about 80% of the
errors uncovered are concentrated in 20% of the application's
modules (the 'Pareto principle').
 Similarly, most operational failures of a system are usually
confined to a small number of modules.
 This is an important consideration in test prioritisation!
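The clustering effect can be sketched with hypothetical per-module defect counts (illustrative numbers, not measurements from the text):

```python
# Hypothetical defect counts per module for a 10-module application.
defects = {"auth": 45, "billing": 35, "ui": 6, "reports": 5, "search": 4,
           "export": 2, "email": 1, "admin": 1, "logging": 1, "help": 0}

total = sum(defects.values())
ranked = sorted(defects.items(), key=lambda kv: kv[1], reverse=True)
top = ranked[: len(ranked) // 5]            # the top 20% of modules
share = sum(count for _, count in top) / total
print(f"Top 20% of modules hold {share:.0%} of the defects")  # 80%
```

Ranking modules by known defect counts like this is one simple input to test prioritisation.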
5. The Pesticide Paradox
 Testing identifies bugs, and programmers respond by
fixing them.
 As bugs are eliminated by the programmers, the
software improves.
 As the software improves, the effectiveness of previous
tests erodes.
 Therefore we must learn, create and use new tests
based on new techniques to catch new bugs (i.e.
it is not a matter of repetition; it is a matter of
learning and improving).
 Testing is done differently in different contexts.
 For example, safety-critical software is tested differently
from an e-commerce site.
 Testing can account for 50% of development costs; in NASA's
Apollo program it was 80%.
 3 to 10 failures per thousand lines of code (KLOC) are typical for
commercial software.
 1 to 3 failures per KLOC are typical for industrial software.
 0.01 failures per KLOC for the NASA Shuttle code!
 Different industries also impose different testing standards.
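Defect density figures like those above are simple ratios; a sketch of the calculation, with made-up numbers:

```python
def failures_per_kloc(failures: int, lines_of_code: int) -> float:
    """Defect density: failures per thousand lines of code (KLOC)."""
    return failures / (lines_of_code / 1000)

# Hypothetical 50,000-line commercial product with 250 recorded failures:
density = failures_per_kloc(250, 50_000)
print(density)   # 5.0 -- inside the 3-10 failures/KLOC commercial range
```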
• If we build a system and, in doing so, find
and fix defects, that alone does not make it a good system.
• Even after defects have been resolved, the system
may still be unusable and/or fail to meet
the users' needs and expectations.
Common Myths about Software Testing
 Testing is a single phase in the SDLC, performed after
coding.
 Testing is easy.
 Software development is worth more than testing.
 Complete testing is possible.
 Testing starts only after program development.
 The only purpose of testing is to check the functionality
of the software.
 Anyone can be a tester.

Objectives

 Difference between error, fault and failure
 Life cycle of a bug
 How does a bug affect the economics of software
testing?
 How is a bug classified?
 Testing principles
 Software Testing Life Cycle (STLC) and its models
 Difference between verification and validation
 Development of a software testing methodology

Software Testing Terminology

 Failure
The inability of a system or component to perform a required
function according to its specification.

 Fault / Defect / Bug
A fault is a condition that actually causes a system to
produce a failure. It can be said that failures are the
manifestation of bugs.


 Error
Whenever a member of the development team makes a
mistake in any phase of the SDLC, an error is produced. It might
be a typographical error, a misreading of a specification, a
misunderstanding of what a subroutine does, and so on. Thus,
'error' is a very general term for human mistakes.
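The error → fault → failure chain can be illustrated with a hypothetical off-by-one mistake:

```python
def average(values):
    # Error: the programmer meant len(values) but typed len(values) - 1
    # (a human mistake made while coding).
    # Fault: that wrong expression is now a defect living in the code.
    return sum(values) / (len(values) - 1)

# Failure: the fault manifests at run time as incorrect behaviour.
print(average([2, 4, 6]))   # prints 6.0; the correct average is 4.0
# average([5]) would even raise ZeroDivisionError -- a different failure
# caused by the same single fault.
```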

 Testware
The documents created during the testing activities are
known as testware.

 Incident
The symptom(s) associated with a failure that alert the user
to its occurrence.

 Test Oracle
A mechanism used to judge the success or failure of a test,
i.e. to decide whether the software behaved correctly on a
given input.
How Bugs Affect the Economics of Software Testing

 Critical Bugs
A critical bug has the worst effect on the functioning of the
software: it stops or hangs its normal functioning.

 Major Bugs
This type of bug does not stop the functioning of the
software, but it causes a functionality to fail to meet its
requirements.

 Medium Bugs
Medium bugs are less critical in nature than critical and
major bugs.

 Minor Bugs
Minor bugs have the least impact; the software remains usable.

Requirements and Specifications Bugs
Design Bugs
Control Flow Bugs
Logic Bugs
Processing Bugs
Data Flow Bugs
Error Handling Bugs
Race Condition Bugs
Boundary Related Bugs
User Interface Bugs
Coding Bugs
Interface and Integration Bugs
System Bugs
Testing Bugs



Testing Principles

 Effective testing, not exhaustive testing
 Testing is not a single phase performed in the SDLC
 Destructive approach for constructive testing
 Early testing is the best policy
 The probability of the existence of an error in a section of a
program is proportional to the number of errors already
found in that section
 The testing strategy should start at the smallest module level
and expand toward the whole program

 Testing should also be performed by an independent team.
 Everything must be recorded during software testing.
 Invalid inputs and unexpected behaviour have a high
probability of finding an error.
 Testers must participate in specification and design reviews.

Software Testing Life Cycle (STLC)



 Defining the test strategy
 Estimating the number of test cases, their duration and cost
 Planning resources: the manpower to test, the tools required,
and the documents required
 Identifying areas of risk
 Defining the test completion criteria
 Identifying methodologies, techniques and tools for the
various test cases
 Identifying reporting procedures, bug classification,
databases for testing, bug severity levels, and project metrics

 Determining the test objectives and their prioritization
 Preparing the list of items to be tested
 Mapping items to test cases
 Selecting test case design techniques
 Creating test cases and test data
 Setting up the test environment and supporting tools
 Creating the test procedure specification

 Understanding the Bug
 Reproducing the bug
 Analyzing the nature and cause of the bug
 Reliability analysis
 Coverage analysis
 Overall defect analysis

 Select and Rank Test Factors
 Identify the System Development Phases
 Identify the Risks associated with System under
Development
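One common way to rank test factors is by risk exposure, likelihood × impact. A sketch with invented factors and 1–5 scores (the names and numbers are assumptions for illustration):

```python
# Hypothetical risk items: (factor, likelihood of failure, impact), each 1-5.
factors = [
    ("report layout",      2, 1),
    ("payment processing", 4, 5),
    ("user login",         3, 4),
]

# Rank by risk exposure = likelihood * impact; test the riskiest areas first.
ranked = sorted(factors, key=lambda f: f[1] * f[2], reverse=True)
for name, likelihood, impact in ranked:
    print(f"{name}: exposure = {likelihood * impact}")
# payment processing (20) > user login (12) > report layout (2)
```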

Verification: “Are we building the product right?”

Validation: “Are we building the right product?”

 Unit Testing
 Integration Testing
 Function Testing
 System Testing
 Acceptance Testing
