INTRODUCTION TO
SOFTWARE TESTING
Introduction
The Testing Process
What is Software Testing?
Why Should We Test? What is the Purpose?
Who Should Do Testing?
What Should We Test?
Selection of Good Test Cases
Measurement of the Progress of Testing
Incremental Testing Approach
Basic Terminology Related to Software Testing
Testing Life Cycle
When to Stop Testing?
Principles of Testing
Limitations of Testing
Available Testing Tools, Techniques and Metrics
INTRODUCTION
Testing is the process of executing a program
with the intent of finding faults. Who should do
this testing and when it should start are very
important questions that are answered in this
text. Software testing is the fourth phase of the
Software Development Life Cycle (SDLC), and
about 70% of development time is spent on
testing. We explore this and many other
interesting concepts in this chapter.
Testing may be:
Positive Testing: Operating the application as it should be operated.
Negative Testing: Testing for abnormal operations.
Positive View of Negative Testing: The job of testing is to discover errors before the user does.
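As a small sketch of the two views, positive and negative testing of a hypothetical `parse_year` function (name and behaviour assumed for illustration: it returns the year as an integer, or -1 for invalid input) might look like:

```python
# Hypothetical parse_year: returns the year as an int, or -1 for invalid input.
def parse_year(text):
    try:
        year = int(text)
    except ValueError:
        return -1
    return year if year >= 1 else -1

# Positive testing: operate the application as it should be operated.
assert parse_year("1996") == 1996

# Negative testing: test for abnormal operations.
assert parse_year("abc") == -1    # non-numeric input is rejected
assert parse_year("-400") == -1   # a year before 1 is rejected
```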
SOFTWARE VERIFICATION
SOFTWARE VALIDATION
[Table: sample test cases for the year/date program — columns: test case number, Year (year to test), Expected result, Observed result, Match. Recoverable rows: invalid years -1 and -400 both return -1 (expected = observed, Match: Yes); test cases 10-15 cover years 2000, 2400, 1204, 1996 and 2004; years 100, 400, 1000, 1600, 1800, 1900 and 2010 also matched.]
MEASUREMENT OF TESTING
Stage 1: Exploration.
Purpose: To gain familiarity with the application.
Stage 2: Baseline test.
Purpose: To devise and execute a simple test case.
Stage 3: Trends analysis.
Purpose: To evaluate whether the application performs as expected when
actual output can be predetermined.
Stage 4: Inventory.
Purpose: To identify the different categories of data and create a test for
each category item.
Stage 5: Inventory combinations.
Purpose: To combine different input data.
Stage 6: Push the boundaries.
Purpose: To evaluate application behavior at data boundaries.
Stage 7: Devious data.
Purpose: To evaluate system response when specifying bad data.
Stage 8: Stress the environment.
Purpose: To attempt to break the system.
Error (or Mistake or Bug): When people make mistakes while coding, we call
these mistakes bugs.
Fault (or Defect): A missing or incorrect statement(s) in a program resulting
from an error is a fault.
Failure: A failure occurs when a fault executes.
Incident: An incident is a symptom associated with a failure that alerts the user
to the occurrence of a failure.
Test: A test is the act of exercising software with test cases.
Test Case: The essence of software testing is to determine a set of test cases
for the item to be tested.
Test Suite: A collection of test scripts or test cases that is used for validating
bug fixes within a logical or physical area of a product.
Test Script: The step-by-step instructions that describe how a test case is to be
executed.
Testware: It includes all testing documentation created during the testing process.
Test Oracle: Any means used to predict the outcome of a test.
Test Log: A chronological record of all relevant details about the execution of a
test.
Test Report: A document describing the conduct and results of testing carried
out for a system.
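A minimal sketch ties several of these terms together in code — a set of test cases, an oracle that judges each outcome, and a chronological test log (all names here are hypothetical, for illustration only):

```python
# A test case bundles inputs with the expected output.
test_cases = [
    {"input": (2, 3), "expected": 5},
    {"input": (-1, 1), "expected": 0},
]

def add(a, b):
    # The item under test.
    return a + b

def oracle(case, observed):
    # Test oracle: any means used to predict/judge the outcome of a test.
    return observed == case["expected"]

# Test log: a chronological record of each execution.
test_log = []
for case in test_cases:
    observed = add(*case["input"])
    test_log.append((case["input"], case["expected"], observed, oracle(case, observed)))

print(test_log)
```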
TESTING LIFE
CYCLE
[Figure: the testing life cycle — errors introduced during Requirements Specification, Design and Coding become faults in the program; Testing surfaces incidents, which lead to Fault Classification, Fault Isolation, Fault Resolution and finally a Fix.]
PRINCIPLES OF TESTING
LIMITATIONS OF TESTING
Mothora: An automated mutation testing toolset developed at Purdue University. The tester can create and execute test cases and measure test case adequacy.
NuMega's BoundsChecker, Rational's Purify: These are run-time checking and debugging aids.
Ballista COTS Software Robustness Testing Harness (Ballista): A full-scale automated robustness testing tool. The goal is to automatically test and harden commercial off-the-shelf (COTS) software against robustness failures.
SUMMARY
CHAPTER-2
SOFTWARE VERIFICATION AND
VALIDATION
Introduction
V&V Limitations
Requirements Tracing
INTRODUCTION
DIFFERENCES BETWEEN
VERIFICATION AND
VALIDATION
Verification: It is a static process of verifying the documents, design and code of the project.
Validation: It is a dynamic process of validating/testing the actual product.
DIFFERENCES BETWEEN
QA & QC?
Quality Assurance (QA):
It is process related.
It focuses on the process used to develop a product.
It involves the quality of the processes.
It is a preventive control.
Its allegiance is to development.
V&V Limitations
Theoretical foundations
Impracticality of testing all data
Impracticality of testing all paths
No absolute proof of correctness.
Traceability analysis
Interface analysis
Criticality analysis
Step 1: Construct a block diagram or control flow diagram (CFD) of the system and its elements. Each block will represent one software function (or module) only.
Step 2: Trace each critical function or quality requirement through the CFD.
Step 3: Classify all traced software functions as critical to the proper operation of the system.
V&V Tasks (with their key issues):
Traceability analysis.
Software requirements evaluation: evaluates the correctness, completeness, accuracy, consistency, testability and readability of software requirements.
Interface analysis: evaluates the software interfaces.
Criticality analysis: identifies the criticality of each software function.
System V&V test plan generation: initiates the V&V test planning for the V&V system test.
Acceptance V&V test plan generation.
Configuration management assessment: ensures completeness and adequacy of the SCM process.
Hazard analysis: identifies potential hazards, based on the product data during the specified development activity.
Risk analysis: identifies potential risks, based on the product data during the specified development activity.
V&V Activity: Design V&V
V&V Tasks: Traceability analysis; Software design evaluation; Interface analysis; Criticality analysis; Component V&V test plan generation and verification; Integration V&V test plan generation and verification; Hazard analysis; Risk analysis.
Key Issues: Evaluates software design modules for correctness, completeness, accuracy, consistency, testability and readability. Initiates the V&V test planning for the V&V component test. Initiates the V&V test planning for the V&V integration test.

V&V Activity: Implementation V&V
V&V Tasks: Traceability analysis; Source code and source code documentation evaluation; Interface analysis; Criticality analysis; V&V test case generation and verification; V&V test procedure generation and verification.
Key Issues: Verifies the correctness, completeness, consistency, accuracy, testability and readability of source code.
V&V Activity: Test V&V
V&V Tasks: Traceability analysis; Acceptance V&V test procedure generation; Integration V&V test execution and verification; System V&V test execution and verification.

V&V Activity: Maintenance V&V
V&V Tasks: SVVP (software verification and validation plan) revision; Proposed change assessment; Anomaly evaluation; Criticality analysis; Migration assessment; Retirement assessment; Hazard analysis; Risk analysis.
Requirements Tracing
(SQA)
Software Technical
Reviews
Types of STRs
Informal Reviews
Formal Reviews
It is a planned meeting
Review Methodologies
Walk Through
Inspection (or work product reviews),
and
Audits
TEST PLAN
Introduction
Test items
Features to be tested
Approach
Test deliverables
Testing tasks
Environmental needs
Responsibilities
Schedule
Approvals
Inputs
Expected results
Actual results
Date and time
Test procedure step
Environment
Repeatability
Testers
Other observers
Additional information
Impact
CHAPTER-3
BLACK BOX
TESTING TECHNIQUES
INTRODUCTION
What is BVA?
Limitations of BVA
Robustness Testing
[Table: boundary value test cases for the triangle problem — columns: test case number (1-15), sides a, b, c, Expected output. Each side ranges over [1, 200] with nominal value 100; two sides are held at 100 while the third takes boundary values. Recoverable expected outputs: (100, 100, 100) → Equilateral; (100, 100, 199) → Isosceles; (100, 100, 200) → Not a triangle; the remaining recoverable rows are Isosceles or Not a triangle.]
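The table above can be sketched in code: a triangle classifier plus boundary value analysis that holds two sides at the nominal value (100) and varies the third over min, min+, nominal, max-, max for the range [1, 200]:

```python
def triangle_type(a, b, c):
    # Sketch of the classic triangle program.
    if a + b <= c or b + c <= a or a + c <= b:
        return "Not a triangle"
    if a == b == c:
        return "Equilateral"
    if a == b or b == c or a == c:
        return "Isosceles"
    return "Scalene"

MIN, NOM, MAX = 1, 100, 200
boundary = [MIN, MIN + 1, NOM, MAX - 1, MAX]

# Hold two variables at nominal, vary the third over its boundary values.
cases = set()
for v in boundary:
    cases.add((v, NOM, NOM))
    cases.add((NOM, v, NOM))
    cases.add((NOM, NOM, v))

for a, b, c in sorted(cases):
    print(a, b, c, "->", triangle_type(a, b, c))
```

This yields 13 distinct cases (the all-nominal triple is shared), matching the usual BVA count of 4n + 1 for n = 3 variables.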
EQUIVALENCE CLASS
TESTING
Advantages of Decision
Tables
Disadvantages of Decision
Tables
Applications of Decision
Tables
CAUSE-EFFECT GRAPHING
TECHNIQUE
Testing Effort
Testing Efficiency
Testing Effectiveness
KIVIAT CHARTS
SUMMARY
Decision tables.
Equivalence partitioning.
CHAPTER-4
WHITE-BOX (OR
STRUCTURAL)
TESTING TECHNIQUES
McCabe Metrics
OO Metrics
Grammar Metrics
Code refactoring
Test Coverage
Number of parents
Combined OO Metrics.
Line count.
Nesting levels.
Counts of decision types.
Maximum number of predicates.
Comment lines.
Step 1: Construction of the flow graph from the source code or flow charts.
Step 2: Identification of independent paths.

Properties of cyclomatic complexity:
1. V(G) >= 1.
2. V(G) is the maximum number of independent paths in graph G.
3. Inserting and deleting functional statements in G does not affect V(G).
4. G has only one path if and only if V(G) = 1.
5. Inserting a new edge in G increases V(G) by unity.
Complexity — what it means:
1-10: a simple program, without much risk.
10-20: more complex, moderate risk.
20-40: complex, high risk.
>40: untestable, very high risk.
[Table: define/use data for the example program — columns: Variable, Defined at node (n), dcu(v, n), dpu(v, n). Recoverable entries: dcu/dpu pairs (5,6) with {(2,3), (2,5)}; (4,6) with {(3,4), (3,6)}; (4,6,7) with {}; and the variable Count defined at node 6 with dcu (6) and dpu {(6,2), (6,7)}.]
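The dcu/dpu notation in the table records, for a variable v defined at node n, its computation-use nodes dcu(v, n) and predicate-use edges dpu(v, n). As a sketch, one might tabulate these for a tiny program (the program and its node numbering are assumptions for illustration, not the book's example):

```python
# A tiny program annotated with flow-graph node numbers:
#   1: count = 0              (definition of count)
#   2: while count < 3:       (predicate use of count -> edges (2,3) and (2,4))
#   3:     count = count + 1  (computation use, then redefinition)
#   4: print(count)           (computation use)

def_use = {
    # (variable, defining node): computation-use nodes and predicate-use edges
    ("count", 1): {"dcu": {3}, "dpu": {(2, 3), (2, 4)}},
    ("count", 3): {"dcu": {3, 4}, "dpu": {(2, 3), (2, 4)}},
}

for (var, node), uses in def_use.items():
    print(var, "defined at", node, "-> dcu:", uses["dcu"], "dpu:", uses["dpu"])
```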
CHAPTER-5
GRAY BOX TESTING
To do such type of testing, one not only needs to understand the code and its
logic but also needs to write effective test cases that can cover good portions
of the code. Understanding of code and logic means white box or structural
testing, whereas writing effective test cases means black box testing. So, we
need a combination of white box and black box techniques for test
effectiveness. This type of testing is known as gray box testing. We must thus
understand that:
White + Black = Gray
Black Box Testing vs. Gray Box Testing vs. White Box Testing:

1. Granularity: Black box — low granularity. Gray box — medium granularity. White box — high granularity.
2. Internals: Black box — internals NOT known. Gray box — internals partially known. White box — internals relevant to testing are known.
4. Also known as: Black box — opaque box testing, closed box testing, input-output testing, data-driven testing, behavioral testing, functional testing. Gray box — translucent box testing. White box — glass box testing, clear box testing, design-based testing, logic-based testing, structural testing, code-based testing.
Gray box testing is normally done by testers and developers.
6. Testing method: Black box — the system is viewed as a black box; the internal behaviour of the program is ignored; testing is based upon external specifications. Gray box — internals partly known (gray), but not fully known (white); test design is based on knowledge of the algorithm, internal states, architecture or other high-level descriptions of program behaviour. White box — internals fully known.
7. Exhaustiveness: Black box — likely to be the least exhaustive of the three. Gray box — somewhere in between. White box — potentially the most exhaustive of the three.
8. Test basis: Black box — requirements based; test cases are based on the functional specifications, as internals are not known. Gray box — better variety/depth in test cases on account of high-level knowledge of internals. White box — ability to exercise code with a relevant variety of data.
9. Suitability: Black box — being specification based, it would not suffer from the deficiency described for white box testing; it is concerned with validating outputs for given inputs, the application being treated as a black box. Gray box — suited for functional/business domain testing in depth; it can determine, and therefore test better, data domains, internal boundaries and overflow. White box — appropriate for algorithm testing; it facilitates structural testing and enables logic coverage: coverage of statements, decisions, conditions, paths and control flow within the code.
CHAPTER-6
REDUCING THE NUMBER
OF TEST CASES
Prioritization Guidelines
Priority category scheme
Risk Analysis
Regression Testing overview
Prioritization of test cases for regression
Testing
Regression Testing Technique- A case
Study
Slice Based Testing
Normal Testing:
It is basically software verification and validation.
It is cheaper.

Regression Testing:
It is done during the maintenance phase, on modified software.
It is a costlier testing.
CHAPTER-7
LEVELS OF TESTING
Pairwise Integration
Statement Fragment.
Source Node.
Sink Node.
Module Execution Path (MEP)
Message
Module
Module-to-Module path (MM-Path)
Module to Module Path Graph
Functional Testing: the configuration remains the same for a test suite.
Non-functional Testing: the test configuration is different for each test.
System Testing
Types of test — parameters — sample entry and exit criteria:

Scalability test: Maximum limits. Entry: the product should scale up to one million records or 1000 users. Exit: the product should scale up to 10 million records or 5000 users.
Performance test: Response time, throughput, latency. Exit: a query for 10,000 records should have a response time of less than 3 seconds.
Stress test: System behaviour when stressed beyond the limits. Entry: 25 client logins should be possible in a configuration that can take only fewer. Exit: the product should be able to withstand 100 client logins.
Thus, we can say that we start with unit or module testing. Then we
go in for integration testing, which is then followed by system testing.
Then we go in for acceptance testing and regression testing.
Acceptance testing may involve alpha and beta testing, while
regression testing is done during maintenance.
System testing can comprise n different tests. That is, it could
mean:
End-to-end integration testing.
User interface testing.
Stress testing.
Testing of availability.
Performance testing is a type of testing that is easy to understand but
difficult to perform due to the amount of information and effort needed.
CHAPTER-8
OBJECT ORIENTED TESTING
GUI testing
Concatenation involves the formation of a subclass that has no locally defined features other than the minimum requirement of a class definition.
Class hierarchies that do not meet these conditions are very likely to be buggy.
[Table: state-based test cases for a collection class — columns: Operation (all operations, delete/remove), Element position (First, Last, Single element, Single item), Collection state (Empty, Not empty), Expected result (Added, Deleted, Reject, DC).]
Integration Testing of the Currency Conversion Program: Whether integration
testing is suitable for this problem depends on how it is implemented. Three
main choices are available for the compute button.
The first choice concentrates all logic in one place and simply uses the status
of the option buttons in IF tests. Such IF tests would be thoroughly done at
the unit level.
The second and more object-oriented choice would have methods in each
option button object that send the exchange rate value for the corresponding
country in response to a click event.
The third choice is in the Visual Basic style. The code for VB includes a global
variable for the exchange rate. There are event procedures for each option
button.
Object orientation issues in unit testing of classes:
Abstract classes
Encapsulation and inheritance
Polymorphism
Dynamic binding
Inter-object communication via message sequencing (sequence diagrams)
CHAPTER-9
AUTOMATED TESTING
Automated testing
Consideration during automated testing
Types of testing tool-static V/s dynamic
Problems with manual testing
Benefits of automated testing
Disadvantages of automated testing
Skills needed for using automated tools.
Test automation : No silver bullet
Debugging
Criteria for selection of test tools
Steps for tools selection
Characteristics of modern tools
Case study on automated tools, namely Rational Robot, WinRunner,
SilkTest and LoadRunner
Static testing: It does not require the actual execution of software. It is more cost effective.
Dynamic testing: It involves testing the software by actually executing it.
Step 1: Identify your test suite requirements among the generic requirements
discussed. Add other requirements, if any.
Step 2: Make sure the experiences discussed in the previous section are taken care of.
Step 3: Collect the experiences of other organizations which used similar test
tools.
Step 4: Keep a checklist of questions to be asked of the vendors on
cost/effort/support.
Step 5: Identify the list of tools that meet the above requirements and give
priority to a tool that is available with its source code.
Step 6: Evaluate and shortlist one or a set of tools and train all test developers on
the tool.
Step 7: Deploy the tool across test teams after training all potential users of
the tool.
CHAPTER-10
TEST POINT ANALYSIS (TPA)
Introduction
Methodology
Case study
TPA for case study
Phase wise breakup over testing life cycle
Path analysis
Path analysis process
TPA Philosophy
The effort estimation technique TPA is based on three fundamental elements:
Size: denotes the size of the information system to be tested.
Test strategy: implies the quality characteristics (such as reliability) that are to be tested on each subsystem.
Productivity.
Method of calculation of static test points:
ST = FP * Qi
where ST = static test points, FP = total number of function points of the system, and Qi = weighing factor for statically measurable quality characteristics.

Method of calculation of dynamic test points:
DT = FPf * Df * QD
where DT = dynamic test points, FPf = number of function points assigned to the function, Df = weighing factor for function-dependent factors, and QD = weighing factor for dynamic quality characteristics.
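A worked sketch of the two formulas above; the input values here are hypothetical placeholders, not figures from the case study:

```python
def static_test_points(fp, qi):
    # ST = FP * Qi
    return fp * qi

def dynamic_test_points(fpf, df, qd):
    # DT = FPf * Df * QD
    return fpf * df * qd

# Hypothetical inputs for illustration:
print(round(static_test_points(fp=500, qi=0.02), 2))        # 500 function points
print(round(dynamic_test_points(fpf=120, df=1.1, qd=1.2), 1))
```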
[Table: weight categories for a TPA factor — columns: Weight, Category, Description; rows Low, Nominal (weight 12) and High.]
Step 1
System study by the KR V&V company: KR V&V requests a 2-day system and
requirements study to understand the scope of the testing work and assess
the testing requirements to arrive at a TPA estimate.
Step 2
Phase-wise breakup of effort over the testing life cycle:
Plan (10%): 319
Development (40%): 1276
Execute (45%): 1435
Management (5%): 159
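The breakup above is internally consistent with a total of 3189 test hours; a quick arithmetic check:

```python
# Phase-wise hours from the breakup above.
total = 319 + 1276 + 1435 + 159
print(total)  # 3189

shares = {"Plan": 0.10, "Development": 0.40, "Execute": 0.45, "Management": 0.05}
for phase, pct in shares.items():
    # Each phase's hours equal its percentage of the total (rounded).
    print(phase, round(total * pct))
```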
Alternate Course
Exception Course
Beginning from the start each time, list all possible ways
to reach the end, keeping the direction of flow in mind.
The basic flow is a critical functionality:
If the basic flow fails, there may not be much of a
point in testing other paths.
The basic flow should be included in sanity test suites
and also in acceptance testing.
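The path-listing step can be sketched as a depth-first enumeration over a flow graph; the graph below (a basic flow plus one alternate and one exception course) is an assumption for illustration:

```python
def all_paths(graph, start, end, path=None):
    # List all possible ways to reach the end, keeping the direction of flow in mind.
    path = (path or []) + [start]
    if start == end:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        if nxt not in path:  # avoid re-entering a node, keeping the path list finite
            paths.extend(all_paths(graph, nxt, end, path))
    return paths

# Basic flow start -> A -> B -> end, with one alternate and one exception course.
flow = {
    "start": ["A"],
    "A": ["B", "alt"],
    "alt": ["B"],
    "B": ["end", "exc"],
    "exc": ["end"],
}
for p in all_paths(flow, "start", "end"):
    print(" -> ".join(p))
```

This prints four paths: the basic flow and the three combinations that take the alternate and/or exception course.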
CHAPTER-11
TESTING YOUR WEBSITES: FUNCTIONAL AND
NON-FUNCTIONAL TESTING
Abstract
Introduction
Methodology
Planning:
How long will it take our available resources to build this product?
How will we test the product?
Planning:
We need to get this product out now.
Purely driven by the available time window and resources.
Implementation:
Let us decide on the sequence of building blocks that will optimize
our integration of a series of builds.
Implementation:
Let us put in the framework and hang some of the key features.
We can then show it as a demo.
Non-functional testing
(or white box testing)
This type of test covers the objects and code that execute
within the browser but do not exercise the server-based
components — for example, JavaScript and VBScript code
within HTML that does rollovers and other special effects.
The test cases for web browser testing can be derived as
follows:
If all mandatory fields on the form are not filled in, then a
message will be displayed on pressing the Submit button.
Hidden password
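The mandatory-field rule above can be sketched as a server-side-style validation function with matching positive and negative test cases (the field names are hypothetical):

```python
MANDATORY = ["name", "email", "password"]

def validate_form(form):
    # Return an error message if any mandatory field is missing or empty, else None.
    missing = [f for f in MANDATORY if not form.get(f, "").strip()]
    if missing:
        return "Please fill all mandatory fields: " + ", ".join(missing)
    return None

# Negative test: submitting with an empty mandatory field shows a message.
assert validate_form({"name": "Ana", "email": ""}) is not None
# Positive test: a fully filled form passes.
assert validate_form({"name": "Ana", "email": "a@b.com", "password": "x"}) is None
```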
CHAPTER-12
REGRESSION TESTING OF A
RELATIONAL DATABASE
Introduction
Why test an RDBMS?
What should we test?
When should we test?
How to test?
Who should test?
Database integrity
Meaning
Examples
LOAD TESTING
TOOLS
Mercury Interactive, Rational Suite TestStudio.
TEST DATA
GENERATORS
They help to generate large amounts of data for stress and load testing.
www.stickyminds.com
www.testingcenter.com
www.owasp.org
www.ieee.org
www.iiste.org
www.ijcsi.org