
White Paper Contest On Testing

Concepts

BY

Lokesh Kumar Pachipala


Nithya Natarajan
Software Testing

• What is Software Testing


It is the process of evaluating a system by manual or automated means and verifying that it satisfies specified requirements, or of identifying differences between expected and actual results.
Software Testing

• Why Software Testing

Testing is important because it affects operational performance and reliability if not done properly. Effective software testing helps deliver quality software products that satisfy users' requirements, needs, and expectations. Done poorly, it leads to high maintenance costs and user dissatisfaction.
Software Testing

• Principles of Software Testing

Testing is the process of executing a program with the intent of finding errors, and it should be planned long before testing begins. Test cases must be written for invalid and unexpected, as well as valid and expected, input conditions. A good test case is one that has a high probability of detecting an error.
Software Testing

• Purpose and Benefits of Testing

The main objective of testing is to help clearly describe system behaviour and to find defects in requirements, design, documentation, and code as early as possible. The test process should reduce the number of defects in the software product delivered to the customer. All tests should be traceable to customer requirements.
Software Testing

• Objective of a Software Tester

The goal of a tester is to find defects. The goal of a tester is to find defects and find them as early as possible. The goal of a tester is to find defects, find them as early as possible, and make sure they get fixed.
Quality Principles

• What is Quality

Meeting the customer's requirements the first time and every time. Quality is much more than the absence of defects; it is what allows us to meet customer expectations.
Quality Principles

• Quality – Customer's View

• Doing the right things.
• Doing them the right way.
• Doing them right the first time.
• Doing them on time.
Quality Principles

• Quality – Supplier's View

• Delivering the right product.
• Satisfying the customer's needs.
• Meeting the customer's expectations.
• Treating every customer with integrity, courtesy, and respect.
Quality Principles

• Why Quality

Quality is the most important factor affecting an organisation's long-term performance. Quality is the way to achieve improved productivity and competitiveness in any organisation. Quality saves; it does not cost.
Quality Principles

• Quality Assurance

It is a planned and systematic set of activities necessary to provide adequate confidence that products and services will conform to specified requirements and meet user needs.
Quality Principles

• Quality Control

It is the process by which product quality is compared with applicable standards, and the action taken when non-conformance is detected.
Software Process

A software process deals with the technical and management issues of software development; it specifies a method of developing software.
Software Process

• Software Project

A software project is a project in which a software process is used.

• Software Product

A software product is the outcome of a software project.
Software Development Process

PDCA: Plan → Do (Execute) → Check → Act
SDLC

Requirement Analysis

The main objective of requirements analysis is to produce a document that properly specifies all requirements of the customer. This document is the primary output of this phase.

Many of the defects found in system and acceptance testing originate in the requirements. Removing an error injected during requirements can cost as much as 100 times more during acceptance than if it is removed during the requirements phase itself.
SDLC

Development Process

The development process is the one in which user requirements are elicited and software satisfying these requirements is designed, built, tested, and delivered to the customer.

It is used when a new application is being developed or a major enhancement is planned for an existing application. Several process models for software development exist; the most common ones include the waterfall model, which organises the phases in a linear sequence.
Design

High Level Design (System Design)

It is the phase of the life cycle when a logical view of the computer implementation of the solution to the customer requirements is developed. It contains two major components: the functional architecture of the application and the database design. Preparation of the Test Plan is done here.

Low Level Design (Detailed Design)

Here the view of the application developed during high-level design is broken down into modules and programs. Logic design is done for every program and then documented as program specifications. Unit test cases are prepared based on these documents.
Coding

During this phase, the detailed design is used to produce the required programs in a programming language. This stage produces the source code, executables, and databases, following the appropriate coding standards.

Unit testing is started for the programs that are ready.
Software Testing Fundamentals

The primary role of software testing is to:

• determine whether the system meets requirement specifications;
• determine whether the system meets business and user needs.
Software Testing Fundamentals

• What is a Defect

A defect is a variance from a desired product attribute. Two categories of defects exist:

• Variance from product specifications.
• Variance from customer expectations.
Software Testing Fundamentals

• Defect Categories

• Wrong
• Extra
• Missing
Testing Policy

It is a management definition of testing for a department.
Quality Policy

It is likewise a management definition of providing customer satisfaction the first time and every time.
Testing Levels

The levels are Unit Testing, Integration Testing, System Testing, and UAT.

Unit Testing: Testing in which the individual units of the software are tested in isolation from other parts of the program.

Integration Testing: Testing in which the software units of an application are combined and tested for communication across the interfaces between them.
Integration Testing

The approaches are Big Bang, Top Down, and Bottom Up.

Top Down

In this approach, the modules are added or combined from the higher level of the hierarchy to the lower level: the highest module is tested in isolation first, then the next set of lower-level modules is tested together with the previously tested higher modules.

[Module hierarchy diagram: A at the top; B, C, and D below it; E modules at the lowest level.]

Stub: Special code segments that, when invoked by a code segment under test, simulate the behaviour of designed and specified modules not yet constructed.
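
A minimal sketch of how a stub is used during top-down integration (the function names and the fixed return value are illustrative assumptions, not from the source), in Python:

    # Lower-level module B is not yet constructed, so a stub simulates
    # the behaviour its specification promises for a known test input.
    def get_discount_stub(customer_id):
        return 10.0  # fixed, predictable response defined by the spec

    # Higher-level module A under test; the dependency is passed in so the
    # stub can stand in for the real module during top-down integration.
    def calculate_price(base_price, customer_id, get_discount):
        return base_price - get_discount(customer_id)

    # Module A is exercised with the stub in place of the unbuilt module B.
    assert calculate_price(100.0, "C001", get_discount_stub) == 90.0

When the real lower-level module is ready, it replaces the stub and the same test is re-run against the actual interface.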
Bottom Up

In bottom-up integration testing, the modules are added or combined from the lower level of the hierarchy to the higher level: the lowest modules are tested in isolation first, then the next set of higher-level modules is tested together with the previously tested lower modules.

[Module hierarchy diagram: A at the top; B, C, and D below it; E modules at the lowest level.]
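
A minimal sketch of the bottom-up order in Python (the salary functions and the driver are illustrative assumptions; a "driver" is the usual counterpart of a stub, a temporary caller that exercises a lower-level module before its real caller exists):

    # The low-level module is real and is tested first, in isolation.
    def net_salary(gross, deductions):
        return gross - deductions

    # A throwaway driver plays the role the higher-level module will play later.
    def driver():
        assert net_salary(1000.0, 150.0) == 850.0

    driver()

    # Next, the real higher-level module is combined with the previously
    # tested lower module and the pair is tested together.
    def payslip(gross, deductions):
        return {"gross": gross, "net": net_salary(gross, deductions)}

    assert payslip(1000.0, 150.0)["net"] == 850.0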
Big Bang

[Diagram: Module 1, Module 2, Module 3, and Module 4 combined at once into the System.]

A type of integration testing in which the software components of an application are combined all at once into an overall system. According to this approach, every module is first unit tested in isolation from every other module; after that, all the modules are combined at once and tested together.
System Testing

Testing conducted on a complete, integrated system to evaluate its compliance with its specified requirements.
UAT

Testing conducted by the client to evaluate the system's compliance with the business requirements.
Testing Techniques

Testing techniques are classified as:

• Black Box
• White Box
Black Box Testing

This testing method focuses on the functional requirements of the software. It attempts to find incorrect or missing functions, errors in data structures or external database access, interface errors, performance errors, and initialisation and termination errors.

Black box testing focuses on testing the function of the program or application against its specifications. When creating black box test cases, the input data used is critical.
Black Box Testing Types

There are three techniques for managing the amount of input data required:

• Equivalence Partitioning
• Boundary Analysis
• Error Guessing
Equivalence Partitioning

This technique partitions the data into equivalent sets; it optimises the testing required and helps avoid redundancy.

Example: where a deposit rate is input, it may have a valid range of 0% to 15%.

There is one positive test, represented by a 'valid' equivalence set:

0 <= percentage <= 15

There are two negative tests, represented by the two 'invalid' equivalence sets:

percentage < 0
percentage > 15
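
These three sets translate directly into test cases; a minimal sketch in Python (the validation function is an assumption made for illustration):

    def accepts_percentage(value):
        # Assumed field validation: the valid equivalence set is 0 <= value <= 15.
        return 0 <= value <= 15

    # One representative value per equivalence set is enough.
    assert accepts_percentage(12) is True    # valid set: 0 <= percentage <= 15
    assert accepts_percentage(-1) is False   # invalid set: percentage < 0
    assert accepts_percentage(16) is False   # invalid set: percentage > 15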
Equivalence Partitioning

Condition No. | Test Case Id | Test Case Description                                          | Test Data | Expected Result
X2-1          | X2-1-1       | Insert a negative value into the percentage field              | -1        | Field should not accept the value
X2-1          | X2-1-2       | Insert a valid value into the percentage field                 | 12        | Field should accept the value
X2-1          | X2-1-3       | Insert a value greater than the permitted range into the field | 16        | Field should not accept the value
Boundary Analysis

This technique ensures that the minimum, borderline, and maximum data values for a particular variable or equivalence class are taken into account.
Boundary Analysis

Condition No. | Test Case Id | Test Case Description                              | Test Data | Expected Result
X2-1          | X2-1-1       | Insert a negative value into the percentage field  | -0.1      | Field should not accept a value < 0
X2-1          | X2-1-2       | Insert zero (the minimum permitted) into the field | 0         | Field should accept the value
X2-1          | X2-1-3       | Insert a valid value into the percentage field     | 12        | Field should accept the value
X2-1          | X2-1-4       | Insert the maximum permitted value                 | 15        | Field should accept the value
X2-1          | X2-1-5       | Insert a value greater than the permitted range    | 15.1      | Field should not accept the value
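
The same boundary values drop straight into a table-driven test; a minimal sketch in Python (the validation function is the same illustrative assumption as above):

    def accepts_percentage(value):
        # Assumed validation: valid range is 0 to 15 inclusive.
        return 0 <= value <= 15

    # Boundary values from the table: just below the minimum, the minimum,
    # a valid value, the maximum, and just above the maximum.
    cases = [(-0.1, False), (0, True), (12, True), (15, True), (15.1, False)]
    for data, expected in cases:
        assert accepts_percentage(data) is expected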
Error Guessing

This technique is based on the theory that test cases can be developed from the intuition and experience of the test engineer.
White Box Testing

White box testing examines the basic program structure and derives the test data from the program logic, ensuring that all statements and conditions have been executed at least once. White box tests verify that the software design is valid and also that it was built according to the specified design.

Statement Coverage: Execute all statements at least once.

Decision (Branch) Coverage: Each statement is executed at least once; each decision takes on all possible outcomes at least once.

Condition Coverage: Each statement is executed at least once; each condition in a decision takes on all possible outcomes at least once.

Decision/Condition Coverage: Each statement is executed at least once; each decision takes on all possible outcomes at least once; each condition in a decision takes on all possible outcomes at least once.

Multiple Condition Coverage: Each statement is executed at least once; all possible combinations of condition outcomes in each decision occur at least once.
White Box Testing

Example:

Procedure liability (age, sex, married, premium)
Begin
    premium := 500;
    if ((age < 25) and (sex = male) and (not married)) then
        premium := premium + 1500;
    else begin
        if (married or (sex = female)) then
            premium := premium - 200;
        if ((age > 45) and (age < 65)) then
            premium := premium - 100;
    end
End;
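
A hedged Python translation of the procedure, with the three decision-coverage test cases from the tables that follow (the expected premiums are computed from the logic above):

    def liability(age, sex, married):
        # Direct translation of the pseudocode; the premium is returned
        # rather than passed as an out-parameter.
        premium = 500
        if age < 25 and sex == "male" and not married:
            premium += 1500
        else:
            if married or sex == "female":
                premium -= 200
            if 45 < age < 65:
                premium -= 100
        return premium

    assert liability(23, "male", False) == 2000    # (1): IF-1 true
    assert liability(23, "female", False) == 300   # (2): IF-1 false, IF-2 true
    assert liability(50, "male", False) == 400     # (3): IF-2 false, IF-3 true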
White Box Testing

The three input parameters are age (integer), sex (male or female), and married (true or false). Logic coverage tables for the liability (insurance) procedure follow. The notation below is used in each table.

The first column of each row denotes the specific "IF" statement from the example program. For example, "IF-2" means the second IF statement in the sample program.

The last column indicates a test case number in parentheses. For example, "(3)" indicates test case number 3. Any information following the test case number is the test data itself in abbreviated form. For example, "23 F T" means age = 23, sex = Female, and married = True.

An asterisk (*) in any box means "any valid input".

White Box Testing

Decision Coverage | Age        | Sex    | Married | Test Case
IF-1              | < 25       | Male   | False   | (1) 23 M F
IF-1              | < 25       | Female | False   | (2) 23 F F
IF-2              | *          | Female | *       | (2)
IF-2              | >= 25      | Male   | False   | (3) 50 M F
IF-3              | <= 45      | Female | *       | (2)
IF-3              | > 45, < 65 | *      | *       | (3)

White Box Testing

Condition Coverage | Age   | Sex    | Married | Test Case
IF-1               | < 25  | Female | False   | (1) 23 F F
IF-1               | >= 25 | Male   | True    | (2) 30 M T
IF-2               | *     | Male   | True    | (2)
IF-2               | *     | Female | False   | (1)
IF-3               | <= 45 | *      | *       | (1)
IF-3               | > 45  | *      | *       | (3) 70 F F
IF-3               | < 65  | *      | *       | (2)
IF-3               | >= 65 | *      | *       | (3)
Multiple Condition Coverage | Age          | Sex    | Married | Test Case
IF-1                        | < 25         | Male   | True    | (1) 23 M T
IF-1                        | < 25         | Male   | False   | (2) 23 M F
IF-1                        | < 25         | Female | True    | (3) 23 F T
IF-1                        | < 25         | Female | False   | (4) 23 F F
IF-1                        | >= 25        | Male   | True    | (5) 30 M T
IF-1                        | >= 25        | Male   | False   | (6) 70 M F
IF-1                        | >= 25        | Female | True    | (7) 50 F T
IF-1                        | >= 25        | Female | False   | (8) 30 F F
IF-2                        | *            | Male   | True    | (5)
IF-2                        | *            | Male   | False   | (6)
IF-2                        | *            | Female | True    | (7)
IF-2                        | *            | Female | False   | (8)
IF-3                        | <= 45, >= 65 | *      | *       | Impossible
IF-3                        | <= 45, < 65  | *      | *       | (8)
IF-3                        | > 45, >= 65  | *      | *       | (6)
IF-3                        | > 45, < 65   | *      | *       | (7)
Life Cycle Testing

Each development artifact has a corresponding verification activity:

Requirements             → Acceptance Test
Functional Specification → Integration Test
Design                   → Unit Test
Code                     → Code Review

Test Level Criteria

Test Level: Unit Test

Objective: To test the internal logic and design of a program module.

Test Types: 1. Conversion; 2. Error-handling; 3. Function; 4. Regression.

Entry Criteria:
1. Program spec reviewed and available.
2. File/DB design reviewed and available.
3. Code ready for unit test.
4. Unit test cases reviewed and ready.
5. Test data defined and ready.

Expected Results:
1. Unit test cases 100% executed.
2. Test results documented.
3. No severity Fatal or High problems outstanding.
4. Outstanding severity Medium and Low problems documented.
Test Level Criteria

Test Level: Integration Test

Objective: To test the interfaces between program modules.

Test Types: 1. Conversion; 2. Error-handling; 3. Function; 4. Regression.

Entry Criteria:
1. Program spec reviewed and available.
2. File/DB design reviewed and available.
3. Unit tests executed for the related program modules.
4. Integration test cases reviewed and ready.
5. Test data defined and ready.

Expected Results:
1. Integration test conditions 100% executed.
2. Test results documented.
3. No severity Fatal or High problems outstanding.
4. Outstanding severity Medium and Low problems documented.
Test Level Criteria

Test Level: System Test

Objective: To test the functional behaviour at the application level, the interfaces to other applications, and the technical aspects of the system.

Test Types: 1. Conversion; 2. Error-handling; 3. Function; 4. Interface; 5. Transaction flow.

Entry Criteria:
1. Technical spec reviewed and available.
2. Functional spec reviewed and available.
3. Integration exit criteria met.
4. System test cases reviewed and ready.
5. Test environment ready.

Expected Results:
1. System test cases 100% executed.
2. No severity Fatal or High problems outstanding.
3. Test results documented.
4. Outstanding severity Medium and Low problems documented and a plan is in place for fixing them.
Testing Process

The activities form a cycle:

Test Plan → Define Test Cases → Identify Areas for Automation → Create Test Data → Execute Tests → Analyse Results → Track Defects
Test Plan

It consists of steps that define the overall process for conducting the tests. The table of contents of a test plan might contain the following:

• Test Scope
• Test Objective
• Assumptions
• Risk Analysis
• Test Design
• Roles & Responsibilities
• Test Schedule
• Test Environment
• Communication Approach
• Test Tools
Test Scope

It covers two areas, basically with respect to functionality:

1. What is covered in the test?
2. What is not covered in the test?
Test Objective

It is nothing but setting a goal: a statement of what the tester is expected to accomplish or validate during a specific test activity.

Example:
Testing for this system should concentrate on validating the requirements covered in the test case document.
Test Assumptions

These assumptions document test prerequisites which, if not met, could have a negative impact on the test.

Example:
The support team will be available throughout the testing period to solve technical and functional issues. The test team will give one day's advance notice if the support of the development team is required during weekends.
Test Design

The test design details what types of tests must be conducted and what stages of testing are required (e.g. Unit, Integration, System, Performance, and UAT).
Risk Analysis

This section deals with the risks and their possible impact on the test effort.

Example:
1. Non-availability of testing resources.
2. Delay in environment readiness.
3. Any major change request raised during testing, which calls for re-testing.
4. Hardware issues.
5. Poor system performance.
Roles & Responsibilities

This section defines who is responsible for each stage or type of testing and what his or her role is.
Test Schedule

This section includes the major test activities and the start and end dates for each.
Test Environment

Environment requirements for each stage and type of testing should be outlined in this section of the plan.

Example:
Unit testing may be conducted in the development environment, while separate environments may be needed for integration and system testing.
Communication Approach

This section describes the various communication mechanisms to be used, such as formal and informal meetings, the defect tracking mechanism, etc.
Test Tools

Any tools that will be needed to support the testing process should be listed here.
Defect Tracking

From the producer's viewpoint, a defect is a deviation from specifications, whether missing, wrong, or extra. From the customer's viewpoint, a defect is anything that causes customer dissatisfaction.

Severity

The severity of a defect should be assigned objectively by the team, based on pre-defined severity descriptions.

Example:
A severity-one defect may be defined as one that causes data corruption, a system crash, security violations, etc.

In large projects, it may also be necessary to assign a priority to the defect, which determines the order in which defects should be fixed.
Bug Life Cycle

Open → Fixed → Closed
Open → Waived

Fixed: The problem has been fixed in the development environment and the fix is ready to be migrated to the testing environment.

Closed: The problem has been confirmed fixed; no further action is required. A problem may also be closed when it has been analyzed and it is determined that no fix is required.

Waived: Any further action on the problem is pending due to a justifiable reason. The deferral has to be approved by the project manager.
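
These statuses can be sketched as a small transition table in Python (the status names come from the life cycle above; treating "no fix required" as a direct Open → Closed transition is an assumption about the diagram):

    # Allowed status transitions, read from the life cycle above.
    TRANSITIONS = {
        "Open":   {"Fixed", "Waived", "Closed"},  # Closed directly when no fix is required
        "Fixed":  {"Closed"},
        "Waived": set(),
        "Closed": set(),
    }

    def move(status, new_status):
        # Reject any transition the life cycle does not allow.
        if new_status not in TRANSITIONS[status]:
            raise ValueError(f"illegal transition: {status} -> {new_status}")
        return new_status

    status = move("Open", "Fixed")   # fix migrated to the testing environment
    status = move(status, "Closed")  # confirmed fixed on retest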
Unit Testing Writing Techniques

Unit test cases are written field by field, with one column per check:

S.No | Field Name | Type of check (FLC/FLV/FC) | NULL | Unique | Length | Numeric | Date | Negative | Default

FLC – Field Level Checks
FLV – Field Level Validations
FC – Functional Checks
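
A minimal sketch of how one row of this sheet becomes unit test cases (the field name and its rules are hypothetical, chosen only to exercise the NULL, Length, and Numeric columns):

    # Hypothetical 'Employee Name' field: not NULL, length <= 30, non-numeric.
    def validate_name(value):
        if value is None or value == "":
            return False           # NULL check
        if len(value) > 30:
            return False           # Length check
        if value.isdigit():
            return False           # Numeric check: a name must not be a number
        return True

    assert validate_name("Lokesh") is True
    assert validate_name("") is False         # NULL
    assert validate_name("x" * 31) is False   # Length
    assert validate_name("12345") is False    # Numeric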
Integration Testing Writing Techniques

S.No | Module Name | Type of check (DDC/DTC) | Data Dependency Check (depends on module / depends for) | Data Transfer Check (transfers to module / transfer data)
1    | Employee    | DTC                     |                                                          | Salary / Eno, Name
2    | Pay slip    | DDC                     | Leave Entry / Net salary computation                     |

DDC – Data Dependency Check
DTC – Data Transfer Check

Note: Only valid entries are entered to check the DDC and DTC.
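
A minimal sketch of the DTC row above in Python (the module interfaces are assumptions): the Employee module transfers Eno and Name to the Salary module, and the check verifies the data arrives unchanged.

    # Employee module (sender) - assumed interface.
    def employee_record():
        return {"Eno": 101, "Name": "Nithya"}

    # Salary module (receiver) - assumed interface.
    def salary_record(emp):
        return {"Eno": emp["Eno"], "Name": emp["Name"], "Salary": 0}

    # Data Transfer Check: only valid entries are used, and the transferred
    # fields must match on both sides of the interface.
    emp = employee_record()
    sal = salary_record(emp)
    assert (sal["Eno"], sal["Name"]) == (emp["Eno"], emp["Name"])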
Scenario Testing Writing Techniques

1. Identify the flow of the application.
2. Understand the flow.
3. Identify the scenarios.
4. Break the scenarios into sub-scenarios (see the sketch below).
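
A small sketch of step 4 in Python (the scenario and its sub-scenarios are hypothetical, for an order-entry flow):

    # One scenario broken into sub-scenarios, each of which becomes
    # its own set of test conditions.
    scenarios = {
        "Place an order": [
            "Log in and open the order form",
            "Add items to the order",
            "Apply a discount",
            "Confirm payment and generate the invoice",
        ],
    }
    for scenario, subs in scenarios.items():
        print(scenario, "->", len(subs), "sub-scenarios")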
Performance Testing

Performance testing includes stress testing and load testing.

Stress

The best way to capture the nature of web site load is to identify and track (e.g. using a log analyzer) a set of key user-session variables that are applicable and relevant to your web site traffic. Some of the variables that could be tracked include:

• The length of the session (measured in pages).
• The duration of the session (measured in minutes and seconds).
• The types of pages that were visited during the session (e.g. home page, product information page, credit card information page).
• The typical/most popular 'flow' or path through the web site.
• The % of 'browse' vs. 'purchase' sessions.
• The % type of users (new user vs. returning registered user).
• How many people visit the site per day, week, or month. Break these current traffic patterns down into one-hour time slices and identify the peak hours (e.g. if the site gets lots of traffic during lunch time) and the number of users during those peak hours. This information can then be used to estimate the number of concurrent users on the site, as in the sketch below.
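
A worked sketch of that last estimate in Python (all figures are invented for illustration):

    # Hypothetical figures taken from a log analyzer.
    peak_hour_sessions = 1200   # sessions observed in the busiest one-hour slice
    avg_session_minutes = 5     # average session duration

    # Sessions overlap, so on average (duration / 60) of an hour's sessions
    # are active at any moment during the peak hour.
    concurrent_users = peak_hour_sessions * avg_session_minutes / 60
    print(concurrent_users)     # -> 100.0 concurrent users to simulate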
Load

Performance tests determine the end-to-end timing (benchmarking) of various time-critical business processes and transactions while the system is under low load but with a production-sized database. This sets the 'best possible' performance expectation under a given configuration of infrastructure. It also highlights, very early in the testing process, whether changes need to be made before load testing is undertaken.

How to Implement Performance Testing

A key indicator of the quality of a performance test is repeatability. Re-executing a performance test multiple times should give the same set of results each time. If the results are not the same each time, then the differences in results from one run to the next cannot be attributed to changes in the application, configuration, or environment.
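
A minimal sketch of a repeatability check in Python (the timed transaction is a stand-in for a real business process):

    import time

    def timed(transaction, runs=3):
        # Execute the same transaction several times and record the
        # end-to-end duration of each run.
        results = []
        for _ in range(runs):
            start = time.perf_counter()
            transaction()
            results.append(time.perf_counter() - start)
        return results

    def sample_transaction():
        sum(range(100000))  # stand-in for a time-critical business process

    durations = timed(sample_transaction)
    # A large spread means differences between runs cannot be attributed to
    # changes in the application, configuration, or environment.
    print(durations, max(durations) - min(durations))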
Testing Types

Alpha Testing:
A customer conducts alpha testing at the developer's site. The software is used in a natural setting, with the developer recording errors and usage problems. Alpha tests are conducted in a controlled environment by the developer.

Beta Testing:
Beta testing is conducted at one or more customer sites by the end user(s) of the software. The developer will not be present at the customer's site, so the beta test is a "live" application of the software in an environment that cannot be controlled by the developer. The customer records all the problems (real or apparent) encountered during beta testing and reports them to the developer at regular intervals. As a result of the problems reported during the beta test, the software developer makes modifications and then prepares to release the software product to the entire customer base.
Mapping:

At this point the tester will have both the database and the front-end screen shots. The database should be carefully mapped by understanding the entries made in the front end (input) and the values displayed in the front end (output). The purpose of each field, screen, and piece of functionality should also be understood. The tester should arrive at clarity on the input and output of the application.

The tester should use his or her discretion to decide the validations required at field, module, and application level, depending on the application's purpose.

Once these are done, the test team can start building test conditions for the application and from then on proceed with the normal test preparation style.
Test Execution Process:
The preparation to test the application is now over. The test team should next plan the execution of the tests on the application.

Tests on the application are done in stages. Test execution takes place in three, or sometimes four, passes over the state of the application. They are:

Pass 0:
This is done to check the health of the system before the start of the test process. This stage may not be applicable to most test processes. Free-form testing is adopted in this stage.

Pass 1 or Comprehensive:
All the test scripts developed for testing are executed. In some cases the application may not have certain module(s) ready for test; these will be covered comprehensively in the next pass. The testing here should cover not only all test cases but also the business cycles defined in the application.
Discrepancy or Pass 2:

All test scripts that produced a defect during the comprehensive pass should be executed again; in other words, all defects that have been fixed should be retested. Function points that may be affected by a defect should also be taken up for testing. Automated test scripts captured during pass one are used here. This type of testing is called regression testing. Test scripts for defects that are not yet fixed are executed only after the defects are fixed.

Sanity or Pass 3:

This is the final round in the test process. It is done either at the client's site or at Take, depending on the strategy adopted. It is done in order to check whether the system is sane enough for the next stage, i.e. UAT or production as the case may be, under an isolated environment. Ideally, the defects fixed in the previous pass are checked, and free-form testing is conducted to ensure integrity.
Testing Process

Test Planning: Baseline Documents (A), Test Strategy/Test Plan (B), Environment and Control, Reviews, Enrichment Process.
Test Execution: Pass #0, Pass #1, Pass #2, Pass #3, Performance.
Test Results: Reports (C), Metrics, Summary.

A. Baseline Documents
1. Business Requirement
2. RFD(s) and BD(s)
3. Design Specification
4. E-mails
5. Minutes of meetings

B. Test Strategy/Test Plan Activities
1. Prepare Test Plan
2. High Level Conditions
3. Prepare Test Cases
4. Prepare Test Data
5. Setup Test Environment
6. Setup Test Bed
7. Receive Executable
8. Execute Pass 1 – Comprehensive Testing
9. Prepare T.P.R.
10. Release Executable for fixing
11. Execute Pass 2 – Discrepancy Pass

C. Reports

Daily Reports:
1. Test Problem Report
2. Test Summary Report
3. Downtime Log

Final Reports:
1. Test Problem Report
2. Traceability Matrix
3. Functionalities not tested
Severity Vs Cause

Severity levels: Fatal, High, Medium, Low.

Causes: Dependency on customer/end user, inadequate tools, lack of standards, lack of skills, lack of training, oversight.
White Paper Contest On Testing
Concepts

BY

Lokesh Kumar Pachipala


Nithya Natarajan
