
Software Testing – Why and What It Is?


Testing… Testing… Testing…

This word is familiar to all of us, and we practise testing in our daily life without realizing it. Let's look at it
in more detail.
Assume you want to purchase a TV and you are in a TV showroom. Before purchasing it, what will you
inspect? Typically:
TV display
Color resolution
Remote (If Available) function
Power supply, etc.
This is nothing but a small test done by you before buying the product. In the same way, before releasing
software, a software company tests its software product to ensure that the customer receives the best, most
efficient, and bug-free (defect-free) product possible.

Note: We will address all that you want to know about software testing, the testing process, and the different
tools used in the testing process.

Software Testing

Software testing is the process of uncovering evidence of defects in software systems or products.
OR
Software testing is a systematic process of executing a program or application with the intent of finding defects.

A defect can be introduced during any phase of development or maintenance and results from one or more
"bugs"—mistakes, misunderstandings, omissions, or even misguided intent on the part of the developers.

Aims/Objectives of Testing
To uncover the maximum number of bugs or errors.
To increase the quality of the software product.
To ensure user-friendliness.

Test Case Techniques

In software engineering, the most common definition of a test case is a set of conditions or variables under
which a tester will determine whether a requirement or use case of an application is partially or fully satisfied.
In order to fully test that all the requirements of an application are met, there must be at least one test case
for each requirement unless a requirement has sub-requirements. In that situation, each sub-requirement
must have at least one test case. This is frequently done using a traceability matrix.
A test case consists of a set of test inputs, execution conditions, and expected results developed for a
particular objective. A test case is sometimes misunderstood as merely a document to expose bugs; it is,
rather, a document that describes the discrete steps needed to achieve a goal. An effective test case is one
which has the ability to:

1. find bugs which the development group considers to be valid.


2. verify the bugs once they are fixed.

This document covers the following test case techniques:

• Boundary Value Analysis


• Equivalence Partitioning
• Comparison Testing
• Orthogonal Array Testing
• Graph Based Testing
• Error Guessing

Professional criteria for test cases:

1. Boundary Value Analysis

Boundary Value Analysis (BVA) is a test data selection technique (a functional testing technique) in which
the extreme values are chosen. Boundary values include maximum, minimum, just inside/outside
boundaries, typical values, and error values. The hope is that if a system works correctly for these special
values, then it will work correctly for all values in between.
• Extends equivalence partitioning
• Test both sides of each boundary
• Look at output boundaries for test cases too
• Test min, min-1, max, max+1, and typical values
• BVA focuses on the boundary of the input space to identify test cases
• The rationale is that errors tend to occur near the extreme values of an input variable

There are two ways to generalize the BVA technique:

1. By the number of variables
For n variables, BVA yields 4n + 1 test cases.
2. By the kinds of ranges
Generalizing ranges depends on the nature or type of the variables.
NextDate has a variable Month, and its range could be defined as {Jan, Feb, ..., Dec}, so
Min = Jan, Min + 1 = Feb, etc. The Triangle problem had a declared range of {1, 20,000}.
Boolean variables have the extreme values True and False, but there is no clear choice for the remaining three
values.
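To illustrate the idea, here is a small sketch (hypothetical code, not from the original text): the is_valid_age function and its 18-60 range are assumptions made only for the example. For this single variable the classic BVA set has 4n + 1 = 5 values, and the robustness values lie just outside the limits.

#include <stdio.h>

/* Hypothetical function under test: accepts an age between 18 and 60. */
int is_valid_age(int age) {
    return age >= 18 && age <= 60;
}

int main(void) {
    /* Classic BVA values for one variable: min, min+1, nominal, max-1, max
       (4n + 1 = 5 cases for n = 1). Robustness testing adds min-1 and max+1. */
    int bva[] = {18, 19, 39, 59, 60};
    int robust[] = {17, 61};

    for (int i = 0; i < 5; i++)
        printf("age=%d -> %s\n", bva[i],
               is_valid_age(bva[i]) ? "accepted" : "rejected");
    for (int i = 0; i < 2; i++)
        printf("age=%d -> %s (expected: rejected)\n", robust[i],
               is_valid_age(robust[i]) ? "accepted" : "rejected");
    return 0;
}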

Advantages of Boundary Value Analysis

Robustness testing is Boundary Value Analysis plus values that go beyond the limits:
Min - 1, Min, Min + 1, Nom, Max - 1, Max, Max + 1
It forces attention to exception handling.
For strongly typed languages, robust testing results in run-time errors that abort normal execution.
Limitations of Boundary Value Analysis

BVA works best when the program is a function of several independent variables that represent bounded
physical quantities.

1. Independent Variables

NextDate test cases derived from BVA would be inadequate: focusing on the boundaries places no
emphasis on February or on leap years. Dependencies exist among NextDate's Day, Month, and Year
variables, yet the test cases are derived without consideration of the function itself.
2. Physical Quantities
As an example of physical variables being tested, consider telephone numbers: what faults might be revealed
by the numbers 000-0000, 000-0001, 555-5555, 999-9998, and 999-9999?

Equivalence Partitioning

Equivalence partitioning is a black box testing method that divides the input domain of a program into
classes of data from which test cases can be derived.
EP can be defined according to the following guidelines:
• If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
• If an input condition requires a specific value, one valid and two invalid equivalence classes are
defined.
• If an input condition specifies a member of a set, one valid and one invalid equivalence class is
defined.
• If an input condition is Boolean, one valid and one invalid class is defined.
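For example, an input field that accepts quantities from 1 to 100 yields one valid class and two invalid classes; one representative value per class is enough. The sketch below is hypothetical (the accept_quantity function and its range are assumptions made for the example).

#include <assert.h>

/* Hypothetical function under test: accepts quantities from 1 to 100. */
int accept_quantity(int qty) {
    return qty >= 1 && qty <= 100;
}

int main(void) {
    /* One representative per equivalence class. */
    assert(accept_quantity(50)  == 1);   /* valid class: 1..100        */
    assert(accept_quantity(0)   == 0);   /* invalid class: below range */
    assert(accept_quantity(101) == 0);   /* invalid class: above range */
    return 0;
}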
Comparison Testing

There are situations where independent versions of software may be developed for critical applications, even
when only a single version will be used in the delivered computer-based system. It is these independent
versions which form the basis of a black box testing technique called comparison testing or back-to-back
testing.

Orthogonal Array Testing

The Orthogonal Array Testing Strategy (OATS) is a systematic, statistical way of testing pair-wise
interactions by deriving a suitably small set of test cases (from a large number of possibilities).

Graph Based Testing Methods

Graph-based testing begins by creating a graph of important objects and their relationships, and then devising a
series of tests that will cover the graph, so that each object and relationship is exercised and errors are
uncovered.

Error Guessing

Error Guessing comes with experience with the technology and the project. Error Guessing is the art of
guessing where errors can be hidden. There are no specific tools and techniques for this, but you can write
test cases depending on the situation: Either when reading the functional documents or when you are
testing and find an error that you have not documented.

Testing Methodologies:

1. Manual:

• Manual Testing.

• Software Quality Assurance (SQA)

2. Automation:

Manual testing is time-consuming and tedious, requiring a heavy investment in human resources. Worst of
all, time constraints often make it impossible to manually test every feature thoroughly before the software
is released. This leaves you wondering whether serious bugs have gone undetected. Automated testing
addresses these problems by dramatically speeding up the testing process. You can create test scripts that
check all aspects of your application, and then run these tests on each new build.

Types of Testing
Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies its
acceptance criteria, which enables a customer to determine whether to accept the system or not.

Assertion Testing: A dynamic analysis technique which inserts assertions about the relationship between
program variables into the program code. The truth of the assertions is determined as the program executes.

Alpha testing: Simulated or actual operational testing by potential users/customers or an independent test
team at the developer's site, but outside the development organization. Alpha testing is often employed for
off-the-shelf software as a form of internal acceptance testing.

Background testing: is the execution of normal functional testing while the SUT is exercised by a realistic
work load. This work load is being processed "in the background" as far as the functional testing is
concerned.

Beta Testing: Testing conducted at one or more customer sites by the end user of a delivered software
product system.

Big-bang testing: Integration testing where no incremental testing takes place prior to all the system's
components being combined to form the system

Breadth test: A test suite that exercises the full scope of a system from a top-down perspective, but does
not test any aspect in detail.

Compatibility Testing: The process of determining the ability of two or more systems to exchange
information. In a situation where the developed software replaces an already working program, an
investigation should be conducted to assess possible compatibility problems between the new software and
other programs or systems.

Composability testing: Testing the ability of the interface to let users do more complex tasks by
combining different sequences of simpler, easy-to-learn tasks.

CRUD Testing: Build CRUD matrix and test all object creation, reads, updates, and deletion.

Data-Driven testing: An automation approach in which the navigation and functionality of the test script is
directed through external data; this approach separates test and control data from the test script.

Data flow testing: Testing in which test cases are designed based on variable usage within the code.

Database testing: Check the integrity of database field values.

Dirty testing: Negative testing.

End-to-End testing: Similar to system testing; the 'macro' end of the test scale; involves testing of a
complete application environment in a situation that mimics real-world use, such as interacting with a
database, using network communications, or interacting with other hardware, applications, or systems if
appropriate.

Exception Testing: Identify error messages and exception handling processes and the conditions that trigger
them.

Exhaustive Testing (NBS): Executing the program with all possible combinations of values for program
variables. Feasible only for small, simple programs.

Exploratory Testing: An interactive process of concurrent product exploration, test design, and test
execution. The heart of exploratory testing can be stated simply: the outcome of this test influences the
design of the next test.

Follow-up testing: We vary a test that yielded a less-than-spectacular failure. We vary the operation, data,
or environment, asking whether the underlying fault in the code can yield a more serious failure or a failure
under a broader range of circumstances.

Formal Testing: (IEEE) Testing conducted in accordance with test plans and procedures that have been
reviewed and approved by a customer, user, or designated level of management. Antonym: informal
testing.

Free Form Testing: Ad hoc or brainstorming using intuition to define test cases.

Gray box testing: Tests designed based on knowledge of algorithms, internal states, architectures, or
other high-level descriptions of the program behavior. Gray box testing also examines the activity of back-end
components during test case execution.
Two types of problems that can be encountered during gray-box testing are:

• A component encounters a failure of some kind, causing the operation to be aborted. The user
interface will typically indicate that an error has occurred.
• The test executes in full, but the content of the results is incorrect. Somewhere in the system, a
component processed data incorrectly, causing the error in the results.

High-level tests: These tests involve testing whole, complete products

Integration Testing: Testing which takes place as sub elements are combined (i.e., integrated) to form
higher-level elements

Interface Tests: Programs that provide test facilities for external interfaces and function calls. Simulation
is often used to test external interfaces that currently may not be available for testing or are difficult to
control. For example, hardware resources such as hard disks and memory may be difficult to control.
Therefore, simulation can provide the characteristics or behaviors for specific functions.

Internationalization testing: Testing related to handling foreign text and data within the program. This
would include sorting, importing and exporting text and data, correct handling of currency and date and
time formats, string parsing, upper- and lower-case handling, and so forth.

Interoperability Testing: Measures the ability of your software to communicate across the network
on multiple machines from multiple vendors, each of whom may have interpreted a design specification
critical to your success differently.

True interoperability testing concerns testing for unforeseen interactions with other packages with which
your software has no direct connection. In some quarters, interoperability testing labor equals all other
testing combined. This is the kind of testing that I say shouldn't be done because it can't be done.

Lateral testing: A test design technique based on lateral thinking principles, used to identify faults.

Load testing: Testing an application under heavy loads, such as testing of a web site under a range of
loads to determine at what point the system's response time degrades or fails.

Monkey Testing (smart monkey testing): Inputs are generated from probability distributions that reflect
actual expected usage statistics -- e.g., from user profiles. There are different levels of IQ in smart monkey
testing. In the simplest, each input is considered independent of the other inputs. Suppose, for example, that
a given test requires an input vector with five components.

In low-IQ testing, these would be generated independently. In high-IQ monkey testing, the correlation (e.g.,
the covariance) between these input distributions is taken into account. In all branches of smart monkey
testing, the input is considered as a single event.

Mutation testing: A testing strategy where small variations to a program are inserted (a mutant), followed
by execution of an existing test suite. If the test suite detects the mutant, the mutant is "retired"; if it goes
undetected, the test suite must be revised.
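A minimal illustration, using hypothetical code rather than anything from this document: a single-operator mutant of a small function, a weak test that lets the mutant survive, and a stronger test that kills it.

#include <assert.h>

/* Original code under test. */
int add(int a, int b) {
    return a + b;
}

/* Mutant: the '+' operator has been replaced by '-'. */
int add_mutant(int a, int b) {
    return a - b;
}

int main(void) {
    /* A weak test: with b == 0 the original and the mutant agree, so the mutant survives. */
    assert(add(5, 0) == 5 && add_mutant(5, 0) == 5);

    /* A stronger test kills the mutant: add(2,3) == 5 but add_mutant(2,3) == -1,
       so a suite containing this case detects the change and the mutant is retired. */
    assert(add(2, 3) == 5);
    assert(add_mutant(2, 3) != 5);
    return 0;
}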

Negative test: A test whose primary purpose is falsification; that is, tests designed to break the software.

Orthogonal array testing: A technique that can be used to reduce the number of combinations and provide
maximum coverage with a minimum number of test cases. Note that it is an old and proven
technique: orthogonal arrays were introduced for the first time by Plackett and Burman in 1946 and were
implemented for testing by G. Taguchi in 1987.

Orthogonal array testing: A mathematical technique to determine which variations of parameters need to be
tested.

Parallel Testing: Testing a new or an alternate data processing system with the same source data that is
used in another system. The other system is considered as the standard of comparison. Syn: parallel run

Penetration testing: The process of attacking a host from outside to ascertain remote security
vulnerabilities.

Performance Testing: Testing conducted to evaluate the compliance of a system or component with
specified performance requirements; or, to evaluate the time taken or response time of the system to
perform its required functions, in comparison with the specified requirements.

Preventive Testing: Building test cases based upon the requirements specification prior to the creation of
the code, with the express purpose of validating the requirements.

Prior Defect History Testing: Test cases are created or rerun for every defect found in prior tests of the
system.

Qualification Testing (IEEE): Formal testing, usually conducted by the developer for the consumer, to
demonstrate that the software meets its specified requirements. See: acceptance testing.

Range Testing: For each input, identifies the range over which the system behavior should be the same.

Recovery testing: Testing how well a system recovers from crashes, hardware failures, or other
catastrophic problems.

Reference testing: A way of deriving expected outcomes by manually validating a set of actual outcomes.
A less rigorous alternative to predicting expected outcomes in advance of test execution

Reliability testing: Verify the probability of failure free operation of a computer program in a specified
environment for a specified time. Reliability of an object is defined as the probability that it will not fail
under specified conditions, over a period of time. The specified conditions are usually taken to be fixed,
while the time is taken as an independent variable. Thus reliability is often written R(t) as a function of
time t, the probability that the object will not fail within time t. Any computer user would probably agree
that most software is flawed, and the evidence for this is that it does fail. All software flaws are designed in
-- the software does not break, rather it was always broken. But unless conditions are right to excite the
flaw, it will go unnoticed -- the software will appear to work properly.
Regression Testing: Selective re-testing of a system to verify that modifications (bug fixes) have not caused
unintended effects and that the system still complies with its specified requirements.

Sanity Testing: Typically an initial testing effort to determine if a new software version is performing well
enough to accept it for a major testing effort. For example, if the new software is often crashing systems,
bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough
condition to warrant further testing in its current state.

Scalability testing : Scalability testing is a subtype of performance test where performance requirements
for response time, throughput, and/or utilization are tested as load on the SUT is increased over time.

Security Testing: Security testing attempts to verify that protection mechanisms built into a system will, in
fact protect it from improper penetration. Following types of test must be conducted during security test:
• Authentication
• Authorization
• Encryption

Severity: The degree of impact that a defect has on the development or operation of a component or
system.

Skim Testing: A testing technique used to determine the fitness of a new build or release.

Soak Testing: Running the system at significant load over an extended period of time to uncover problems, such as memory leaks, that appear only after prolonged use.

Spike testing: Testing performance or recovery behavior when the system under test (SUT) is stressed with
a sudden and sharp increase in load; it should be considered a type of load test.

Smoke test: A subset of all defined/planned test cases that cover the main functionality of a component or
system, to ascertain that the most crucial functions of a program work, without bothering with finer
details. A daily build and smoke test is among industry best practices.

State-based testing: Testing with test cases developed by modeling the system under test as a state
machine

State Transition Testing: Technique in which the states of a system are first identified and then test cases
are written to test the triggers that cause a transition from one state to another.

Static testing: Source code analysis. Analysis of source code to expose potential defects.

Statistical testing: A test case design technique in which a model of the statistical distribution of
the input is used to construct representative test cases.

Storage test: Study how memory and space is used by the program, either in resident memory or on disk.
If there are limits of these amounts, storage tests attempt to prove that the program will exceed them.

Stress / Load / Volume test: Tests that provide a high degree of activity, for example by using boundary conditions
as inputs or by executing multiple copies of a program in parallel; or, to evaluate a system
beyond the limits of the specified requirements or system resources (such as disk space, memory, or processor
utilization) to ensure the system does not break unexpectedly.

Structural Testing:
1. (IEEE) Testing that takes into account the internal mechanism [structure] of a system or component.
Types include branch testing, path testing, statement testing.
2. Testing to ensure each program statement is made to execute during testing and that each program
statement performs its intended function. Contrast with functional testing. Syn: white-box testing, glass-box
testing, logic-driven testing.

System Testing: Testing the software for the required specifications on the intended hardware

Table testing: Test access, security, and data integrity of table entries

Thread Testing:

A testing technique used to test the business functionality or business logic of the AUT in an end-to-end
manner, in much the same way a User or an operator might interact with the system during its normal use.

Usability testing:

Testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or
customer.

Unit Testing:

The testing done on a unit, the smallest piece of software, to verify whether it satisfies its functional
specification or its intended design structure.

Volume testing: Testing where the system is subjected to large volumes of data.

Testing Techniques:

Test Design refers to understanding the sources of test cases, test coverage, how to develop and document
test cases, and how to build and maintain test data. There are two primary techniques by which tests can be
designed and they are:

1. BLACK BOX

2. WHITE BOX (structural or glass-box testing)

The importance of software testing and its impact on software cannot be overstated. Software testing
is a fundamental component of software quality assurance and represents a review of specification, design
and coding. The greater visibility of software systems and the cost associated with software failure are
motivating factors for planning thorough testing. It is not uncommon for a software organization to spend
40% of its effort on testing.

A large number of test case design methods have been developed to expose the maximum number of bugs with
minimum effort. Software can be tested in two ways:
1. Knowing the specified functions that the software has been designed to perform.

2. Knowing the internal workings of a product, so tests can be performed to see whether its parts work well
together. The first approach is black box testing and the second is white box testing.

Black Box Testing

Black-box test design treats the system as a literal "black-box", so it doesn't explicitly use knowledge of the
internal structure. It is usually described as focusing on testing functional requirements. For example, when
black box testing is applied to software engineering, the tester would only know the "legal" inputs and what
the expected outputs should be, but not how the program actually arrives at those outputs. It is because of
this that black box testing can be considered testing with respect to the specifications, no other knowledge
of the program is necessary. For this reason, the tester and the programmer can be independent of one
another, avoiding programmer bias toward his own work. For this testing, test groups are often used.

Synonyms for black-box include:

• behavioral
• functional
• opaque-box
• Closed-box.

Though centered around the knowledge of user requirements, black box tests do not necessarily involve the
participation of users. Among the most important black box tests that do not involve users are functionality
testing, volume tests, stress tests, recovery testing, and benchmarks . Additionally, there are two types of
black box test that involve users, i.e. field and laboratory tests.
Black box testing Methods
Graph-based Testing Methods

A black-box method in which, based on the nature of the relationships (links) among the program objects (nodes),
test cases are designed to traverse the entire graph.

1. Transaction flow testing (nodes represent steps in some transaction and links represent logical
connections between steps that need to be validated)
2. Finite state modeling (nodes represent user observable states of the software and links represent
transitions between states)
3. Data flow modeling (nodes are data objects and links are transformations from one data object to
another)
4. Timing modeling (nodes are program objects and links are sequential connections between these objects,
link weights are required execution times)
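As a small sketch of finite state modeling (the connection states and events below are assumptions invented for the example, not taken from the text), each node is a user-observable state, each link is a transition, and one test case exercises each link:

#include <assert.h>

/* Hypothetical model: a connection that can be CLOSED, OPEN, or ERROR. */
typedef enum { CLOSED, OPEN, ERROR } State;
typedef enum { EV_OPEN, EV_CLOSE, EV_FAIL } Event;

State next_state(State s, Event e) {
    switch (s) {
        case CLOSED: return (e == EV_OPEN) ? OPEN : CLOSED;
        case OPEN:   return (e == EV_CLOSE) ? CLOSED :
                            (e == EV_FAIL)  ? ERROR  : OPEN;
        default:     return ERROR;               /* ERROR is absorbing */
    }
}

int main(void) {
    /* One test per link, so every node and transition is exercised. */
    assert(next_state(CLOSED, EV_OPEN)  == OPEN);
    assert(next_state(OPEN,   EV_CLOSE) == CLOSED);
    assert(next_state(OPEN,   EV_FAIL)  == ERROR);
    assert(next_state(ERROR,  EV_OPEN)  == ERROR);
    return 0;
}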

Equivalence Partitioning

1. Black-box technique that divides the input domain into classes of data from which test cases can be
derived
2. An ideal test case uncovers a class of errors that might require many arbitrary test cases to be executed
before a general error is observed
3. Equivalence class guidelines:
• If input condition specifies a range, one valid and two invalid equivalence classes are defined
• If an input condition requires a specific value, one valid and two invalid equivalence classes are
defined
• If an input condition specifies a member of a set, one valid and one invalid equivalence class is
defined
• If an input condition is Boolean, one valid and one invalid equivalence class is defined

Boundary Value Analysis


Black-box technique that focuses on the boundaries of the input domain rather than its center. BVA
guidelines:
• If an input condition specifies a range bounded by values a and b, test cases should include a and b,
and values just above and just below a and b
• If an input condition specifies a number of values, test cases should exercise the minimum
and maximum numbers, as well as values just above and just below the minimum and maximum
values
• Apply guidelines 1 and 2 to output conditions; test cases should be designed to produce the
minimum and maximum output reports
• If internal program data structures have boundaries (e.g. size limitations), be certain to test the
boundaries

Comparison Testing

Black-box testing for safety-critical systems in which independently developed implementations of
redundant systems are tested for conformance to specifications.
Often, equivalence class partitioning is used to develop a common set of test cases for each implementation.

Orthogonal Array Testing

1. Black-box technique that enables the design of a reasonably small set of test cases that provide
maximum test coverage
2. Focus is on categories of faulty logic likely to be present in the software component (without
examining the code)
3. Priorities for assessing tests using an orthogonal array

• Detect and isolate all single mode faults


• Detect all double mode faults
• Multimode faults

Specialized Testing

1. Graphical user interfaces


2. Client/server architectures
3. Documentation and help facilities

4. Real-time systems
• Task testing (test each time-dependent task independently)
• Behavioral testing (simulate system response to external events)
• Intertask testing (check communications errors among tasks)
• System testing (check interaction of integrated system software and hardware)

Advantages of Black Box Testing

1. More effective on larger units of code than glass box testing


2. Tester needs no knowledge of implementation, including specific programming
languages
3. Tester and programmer are independent of each other
4. Tests are done from a user's point of view
5. Will help to expose any ambiguities or inconsistencies in the specifications
6. Test cases can be designed as soon as the specifications are complete

Disadvantages of Black Box Testing

1. Only a small number of possible inputs can actually be tested, to test every possible input stream
would take nearly forever
2. Without clear and concise specifications, test cases are hard to design
3. There may be unnecessary repetition of test inputs if the tester is not informed of test cases the
programmer has already tried
4. May leave many program paths untested
5. Cannot be directed toward specific segments of code which may be very complex (and therefore
more error prone)
6. Most testing related research has been directed toward glass box testing

WHITE BOX TESTING

Software testing approaches that examine the program structure and derive test data from the program
logic. Structural testing is sometimes referred to as clear-box testing since white boxes are considered
opaque and do not really permit visibility into the code.

Synonyms for white box testing

1. Glass Box testing


2. Structural testing
3. Clear Box testing
4. Open Box Testing

Types of White Box testing

A typical rollout of product is shown in figure 1 below.

The purpose of white box testing

• Initiate a strategic initiative to build quality throughout the life cycle of a software product or
service.
• Provide a complementary function to black box testing.
• Perform complete coverage at the component level.
• Improve quality by optimizing performance.

Practices:
This section outlines some of the general practices comprising the white-box testing process. In general,
white-box testing practices have the following considerations:

1. The allocation of resources to perform class and method analysis and to document and review the
same.
2. Developing a test harness made up of stubs, drivers and test object libraries.
3. Development and use of standard procedures, naming conventions and libraries.
4. Establishment and maintenance of regression test suites and procedures.
5. Allocation of resources to design, document and manage a test history library.
6. The means to develop or acquire tool support for automation of capture/replay/compare, test suite
execution, results verification and documentation capabilities.

Code Coverage Analysis

Basis Path Testing: A testing mechanism proposed by McCabe whose aim is to derive a logical complexity
measure of a procedural design and use this as a guide for defining a basis set of execution paths. Test
cases that exercise the basis set will execute every statement at least once.

Flow Graph Notation:

A notation for representing control flow similar to flow charts and UML activity diagrams.

Testing Steps:

1. Convert the program statements into a flow chart.

2. Translate the flow chart into a flow graph.
3. Apply McCabe's formula:

Number of paths = E - N + 2, where E = number of edges and N = number of nodes


Analyze all possible paths.
Analyze all program statements.
Analyze all possible branches.
Analyze all conditional statements.
Example:

Consider the following simple program logic

if (A > B)
    printf("A is greater than B");
else
    printf("B is greater than A");

Now, the white box testing steps using the flow graph notation method:

1. Draw the flow chart.
2. Convert the flow chart into a flow graph.
3. Apply McCabe's formula:

Number of paths = E - N + 2 = 6 - 6 + 2 = 2

4. Now analyze the two paths and all the statements on each path.

Cyclomatic Complexity

The cyclomatic complexity gives a quantitative measure of the logical complexity. This value gives the
number of independent paths in the basis set, and an upper bound for the number of tests to ensure that each
statement is executed at least once. An independent path is any path through a program that introduces at
least one new set of processing statements or a new condition (i.e., a new edge). Cyclomatic complexity
provides upper bound for number of tests required to guarantee coverage of all program statements.
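For instance, the hypothetical function below (invented for illustration) contains two decisions. Counting its flow graph gives E = 6 edges and N = 5 nodes, so the cyclomatic complexity is E - N + 2 = 3, and three basis paths suffice to cover every statement.

/* Hypothetical example with two decisions; cyclomatic complexity = 6 - 5 + 2 = 3. */
int classify(int x) {
    int result = 0;
    if (x < 0)          /* decision 1 */
        result = -1;
    if (x > 100)        /* decision 2 */
        result = 1;
    return result;
}

/* A basis set of three independent paths:
   x = 50  -> both decisions false
   x = -5  -> decision 1 true, decision 2 false
   x = 200 -> decision 1 false, decision 2 true */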

Control Structure testing

1. Condition Testing: Condition testing aims to exercise all logical conditions in a program
module. Conditions may be defined as:

• Relational expression: (E1 op E2), where E1 and E2 are arithmetic expressions.


• Simple condition: A Boolean variable or relational expression, possibly preceded by a
NOT operator.

• Compound condition: composed of two or more simple conditions, Boolean operators


and parentheses.

• Boolean expression: A condition without relational expressions.

2. Data Flow Testing

Selects test paths according to the location of definitions and use of variables.

3. Loop Testing: Loops are fundamental to many algorithms. Loops can be defined as simple, concatenated,
nested, or unstructured.

Note that unstructured loops are not to be tested; rather, they are redesigned.

Design by Contract (DbC):

DbC is a formal way of using comments to incorporate specification information into the code itself.
Basically, the code specification is expressed unambiguously using a formal language that describes the
code's implicit contracts. These contracts specify such requirements as:

• Conditions that the client must meet before a method is invoked.


• Conditions that a method must meet after it executes.
• Assertions that a method must satisfy at specific points of its execution

Tools that check DbC contracts at runtime, such as JContract, are used to perform this function.
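The tools named above are Java-based; purely as a language-neutral sketch (the withdraw example and its contract are assumptions for illustration, not the notation of JContract or any other tool), the same idea can be approximated with assertions standing in for pre- and postconditions:

#include <assert.h>

/* Contract for a hypothetical withdraw() operation:
   precondition  - the amount is positive and does not exceed the balance;
   postcondition - the new balance equals the old balance minus the amount. */
double withdraw(double balance, double amount) {
    assert(amount > 0);           /* preconditions: the caller's obligations */
    assert(amount <= balance);

    double new_balance = balance - amount;

    assert(new_balance == balance - amount);  /* postcondition: the method's obligation */
    assert(new_balance >= 0);                 /* invariant: the balance never goes negative */
    return new_balance;
}

int main(void) {
    assert(withdraw(100.0, 40.0) == 60.0);
    return 0;
}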

Profiling:

Profiling provides a framework for analyzing Java code performance for speed and heap memory use. It
identifies routines that are consuming the majority of the CPU time so that problems may be tracked down
to improve performance. These include the use of Microsoft Java Profiler API and Sun’s profiling tools that
are bundled with the JDK. Third party tools such as JaViz
[http://www.research.ibm.com/journal/sj/391/kazi.html] may also be used to perform this function.

Error Handling

Exception and error handling is checked thoroughly by simulating partial and complete fail-over,
operating on error-causing test vectors. Proper error recovery, notification and logging are checked against
references to validate the program design.

Transactions
Systems that employ transactions, local or distributed, may be validated to ensure that the ACID properties
(Atomicity, Consistency, Isolation, Durability) hold. Each of the individual properties is tested individually
against a reference data set.
Transactions are checked thoroughly for partial/complete commits and rollbacks encompassing databases
and other XA compliant transaction processors.

Advantages of White Box Testing

• Forces test developer to reason carefully about implementation


• Approximate the partitioning done by execution equivalence
• Reveals errors in "hidden" code
• Beneficent side-effects

Disadvantages of White Box Testing

• Expensive
• Cases omitted in the code could be missed out.

Grey / Hybrid Testing Methods (Sandwich approach)

It is a mixture of black box and white box testing.

Black Box (Vs) White Box

Black box testing begins with a metaphor. Imagine you're testing an electronics system. It's housed in a
black box with lights, switches, and dials on the outside. You must test it without opening it up, and you
can't see beyond its surface. You have to see if it works just by flipping switches (inputs) and seeing what
happens to the lights and dials (outputs). This is black box testing. Black box software testing is doing the
same thing, but with software. The actual meaning of the metaphor, however, depends on how you define
the boundary of the box and what kind of access the blackness is blocking.

An opposite test approach would be to open up the electronics system, see how the circuits are wired, apply
probes internally and maybe even disassemble parts of it. By analogy, this is called white box testing.

Software Testing Strategy

A test strategy is a statement of the overall approach to testing, identifying what levels of testing are to be
applied, and the methods, techniques and tools.

Testing begins at the component level and works outward toward the integration of the entire computer-
based system. Different testing techniques are appropriate at different points in time. The developer of the
software conducts testing and may be assisted by independent test groups for large projects. The role of the
independent tester is to remove the conflict of interest inherent when the builder is testing his or her own
product.

There are various techniques which are applied simultaneously and in parallel to ensure total software
quality. These are described below:

1. Unit Testing
2. Integration Testing
3. GUI Testing
4. System Testing
5. Regression Testing
6. Acceptance Testing
Brief about Unit Testing

Unit testing. Isn't that some annoying requirement that we're going to ignore? Many developers get very
nervous when you mention unit tests. Usually this is a vision of a grand table with every single method
listed, along with the expected results and pass/fail date. It's important, but not relevant in most
programming projects.

The unit test will motivate the code that you write. In a sense, it is a little design document that says, "What
will this bit of code do?" Or, in the language of object oriented programming, "What will these clusters of
objects do?"

The crucial issue in constructing a unit test is scope. If the scope is too narrow, then the tests will be trivial
and the objects might pass the tests, but there will be no design of their interactions. Certainly, interactions
of objects are the crux of any object oriented design.

Likewise, if the scope is too broad, then there is a high chance that not every component of the new code
will get tested. The programmer is then reduced to testing-by-poking-around, which is not an effective test
strategy.

Need for Unit Test

How do you know that a method doesn't need a unit test? First, can it be tested by inspection? If the code is
simple enough that the developer can just look at it and verify its correctness then it is simple enough to not
require a unit test. The developer should know when this is the case.

Unit tests will most likely be defined at the method level, so the art is to define the unit test on the methods
that cannot be checked by inspection. Usually this is the case when the method involves a cluster of
objects. Unit tests that isolate clusters of objects for testing are doubly useful, because they test for failures,
and they also identify those segments of code that are related. People who revisit the code will use the unit
tests to discover which objects are related, or which objects form a cluster. Hence: Unit tests isolate clusters
of objects for future developers.

Another good litmus test is to look at the code and see if it throws an error or catches an error. If error
handling is performed in a method, then that method can break. Generally, any method that can break is a
good candidate for having a unit test, because it may break at some time, and then the unit test will be there
to help you fix it.

The danger of not implementing a unit test on every method is that the coverage may be incomplete. Just
because we don't test every method explicitly doesn't mean that methods can get away with not being
tested. The programmer should know that their unit testing is complete when the unit tests cover at the very
least the functional requirements of all the code. The careful programmer will know that their unit testing is
complete when they have verified that their unit tests cover every cluster of objects that form their
application.
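As a concrete, hypothetical illustration of a method-level unit test (the chomp function below is invented for the example), the assertions cover a normal case, a boundary case, and a "can it break?" case; in a real project the same cases would normally live in a unit-testing framework.

#include <assert.h>
#include <string.h>

/* Code under test: removes trailing newline characters in place. */
void chomp(char *s) {
    size_t len = strlen(s);
    while (len > 0 && (s[len - 1] == '\n' || s[len - 1] == '\r'))
        s[--len] = '\0';
}

int main(void) {
    char a[] = "hello\r\n";
    char b[] = "\n";
    char c[] = "";

    chomp(a); assert(strcmp(a, "hello") == 0);  /* normal input          */
    chomp(b); assert(strcmp(b, "") == 0);       /* nothing but a newline */
    chomp(c); assert(strcmp(c, "") == 0);       /* empty-string edge     */
    return 0;
}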

Life Cycle Approach to Testing

Testing will occur throughout the project lifecycle i.e., from Requirements till User Acceptance Testing.
The main objectives of unit testing are as follows:

• To execute a program with the intent of finding an error;


• To uncover an as-yet undiscovered error; and
• To prepare test cases with a high probability of finding an as-yet undiscovered error.
Levels of Unit Testing

1. UNIT
2. 100% code coverage
3. INTEGRATION
4. SYSTEM
5. ACCEPTANCE
6. MAINTENANCE AND REGRESSION

• The most 'micro' scale of testing;


• To test particular functions or code modules.
• Typically done by the programmer and not by testers.
• As it requires detailed knowledge of the internal program design and code.
• Not always easily done unless the application has a well-designed architecture
with tight code.

Unit Testing Flow

Types of Errors detected

The following are the types of errors that may be caught:

1. Error in Data Structures


2. Performance Errors
3. Logic Errors
4. Validity of alternate and exception flows
5. Identified at analysis/design stages

Unit Testing Black Box Approach

• Field Level Check


• Field Level Validation
• User Interface Check
• Functional Level Check

Unit Testing White Box Approach

• STATEMENT COVERAGE
• DECISION COVERAGE
• CONDITION COVERAGE
• MULTIPLE CONDITION COVERAGE (nested conditions)
• CONDITION/DECISION COVERAGE
• PATH COVERAGE

Unit Testing FIELD LEVEL CHECKS

1. Null / Not Null Checks


2. Uniqueness Checks
3. Length Checks
4. Date Field Checks
5. Numeric Checks
6. Negative Checks

Unit Testing Field Level Validations

1. Test all Validations for an Input field


2. Date Range Checks (From Date/To Dates)
3. Date Check Validation with System date

Unit Testing User Interface Checks

1. Readability of the Controls


2. Tool Tips Validation
3. Ease of Use of Interface Across
4. Tab related Checks
5. User Interface Dialog
6. GUI compliance checks

Unit Testing - Functionality Checks

1. Screen Functionalities
2. Field Dependencies
3. Auto Generation
4. Algorithms and Computations
5. Normal and Abnormal terminations
6. Specific Business Rules if any...

Unit Testing - OTHER MEASURES

1. FUNCTION COVERAGE
2. LOOP COVERAGE
3. RACE COVERAGE
Execution of Unit Tests

1. Design a test case for every statement to be executed.


2. Select the unique set of test cases.
3. This measure reports whether each executable statement is encountered.
4. Also known as: line coverage, segment coverage and basic block coverage.
5. Basic block coverage is the same as statement coverage except the unit of code measured is each
sequence of non-branching statements.

Advantage of Unit Testing

1. Can be applied directly to object code and does not require processing source code.
2. Performance profilers commonly implement this measure.

Disadvantage of Unit Testing

Statement coverage alone does not report whether loops reach their termination condition, and it is insensitive to the logical operators within decisions.

Method for Decision Coverage:

Design a test case for the pass/failure of every decision point, and select a unique set of test cases.
1. This measure reports whether Boolean expressions tested in control structures (such as the if-
statement and while-statement) evaluated to both true and false.
2. The entire Boolean expression is considered one true-or-false predicate regardless of whether it
contains logical-and or logical-or operators.
3. Additionally, this measure includes coverage of switch-statement cases, exception handlers, and
interrupt handlers
4. Also known as: branch coverage, all-edges coverage, basis path coverage, decision-decision-path
testing
5. "Basis path" testing selects paths that achieve decision coverage.

ADVANTAGE:

1. Simplicity without the problems of statement coverage.

DISADVANTAGE:
1. This measure ignores branches within Boolean expressions which occur due to short-circuit
operators.

Method for Condition Coverage:

1. Test whether every condition (sub-expression) in a decision evaluates to both true and false, and select a unique set of test cases.
2. Reports the true or false outcome of each Boolean sub-expression, separated by logical-and and
logical-or if they occur.
3. Condition coverage measures the sub-expressions independently of each other.
4. Reports whether every possible combination of Boolean sub-expressions occurs. As with
condition coverage, the sub-expressions are separated by logical-and and logical-or, when present.
5. The test cases required for full multiple condition coverage of a condition are given by the logical
operator truth table for the condition.

DISADVANTAGE:

1. Tedious to determine the minimum set of test cases required, especially for very complex Boolean
expressions
2. Number of test cases required could vary substantially among conditions that have similar
complexity
3. Condition/Decision Coverage is a hybrid measure composed by the union of condition coverage
and decision coverage.
4. It has the advantage of simplicity but without the shortcomings of its component measures
5. This measure reports whether each of the possible paths in each function have been followed.
6. A path is a unique sequence of branches from the function entry to the exit.
7. Also known as predicate coverage. Predicate coverage views paths as possible combinations of
logical conditions
8. Path coverage has the advantage of requiring very thorough testing
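To make the differences between these coverage measures concrete, consider the hypothetical predicate below (invented for illustration); the comments list the test inputs each measure demands.

/* Hypothetical predicate with two sub-expressions. */
int grant_discount(int age, int member) {
    if (age > 65 || member) {     /* compound decision */
        return 1;
    }
    return 0;
}

/* Statement coverage : one test where the decision is true, e.g. (70, 0).
   Decision coverage  : the whole decision both true and false, e.g. (70, 0) and (30, 0).
   Condition coverage : each sub-expression both true and false:
                        age > 65 -> (70, ...) and (30, ...)
                        member   -> (..., 1) and (..., 0)
   Multiple condition coverage: all four combinations of the two sub-expressions. */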

Function Coverage:

1. This measure reports whether you invoked each function or procedure.


2. It is useful during preliminary testing to assure at least some coverage in all areas of the software.
3. Broad, shallow testing finds gross deficiencies in a test suite quickly.

Loop Coverage:
This measure reports whether you executed each loop body zero times, exactly once, twice, and more than
twice (consecutively). For do-while loops, loop coverage reports whether you executed the body exactly
once and more than once.

The valuable aspect of this measure is determining whether while-loops and for-loops execute more than
once, information not reported by other measures.
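A small, hypothetical illustration: loop coverage asks for runs in which the loop body below executes zero times, exactly once, and more than once (the sum function is assumed for the example).

#include <assert.h>

/* Code under test: sums the first n elements of an array. */
int sum(const int *a, int n) {
    int total = 0;
    for (int i = 0; i < n; i++)   /* loop under measurement */
        total += a[i];
    return total;
}

int main(void) {
    int data[] = {2, 3, 4};
    assert(sum(data, 0) == 0);    /* body executes zero times     */
    assert(sum(data, 1) == 2);    /* body executes exactly once   */
    assert(sum(data, 3) == 9);    /* body executes more than once */
    return 0;
}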

Race Coverage:

1. This measure reports whether multiple threads execute the same code at the same time.
2. Helps detect failure to synchronize access to resources.
3. Useful for testing multi-threaded programs such as in an operating system

Conclusion

Testing, irrespective of the phase, should address the following:

1. The cost of failure associated with defective products being shipped to and used by customers is
enormous
2. To find out whether the integrated product works as per the customer requirements
3. To evaluate the product from an independent perspective
4. To identify as many defects as possible before the customer finds them
5. To reduce the risk of releasing the product

Brief about Integration Testing

One of the most significant aspects of a software development project is the integration strategy.
Integration may be performed all at once, top-down, bottom-up, critical piece first, or by first integrating
functional subsystems and then integrating the subsystems in separate phases using any of the basic
strategies. In general, the larger the project, the more important the integration strategy.

Very small systems are often assembled and tested in one phase. For most real systems, this is impractical
for two major reasons. First, the system would fail in so many places at once that the debugging and
retesting effort would be impractical.

Second, satisfying any white box testing criterion would be very difficult, because of the vast amount of
detail separating the input data from the individual code modules. In fact, most integration testing has been
traditionally limited to "black box" techniques.

Large systems may require many integration phases, beginning with assembling modules into low-level
subsystems, then assembling subsystems into larger subsystems, and finally assembling the highest level
subsystems into the complete system.

To be most effective, an integration testing technique should fit well with the overall integration strategy.
In a multi-phase integration, testing at each phase helps detect errors early and keep the system under
control. Performing only cursory testing at early integration phases and then applying a more rigorous
criterion for the final stage is really just a variant of the high-risk "big bang" approach. However,
performing rigorous testing of the entire software involved in each integration phase involves a lot of
wasteful duplication of effort across phases. The key is to leverage the overall integration structure to allow
rigorous testing at each phase while minimizing duplication of effort.

It is important to understand the relationship between module testing and integration testing. In one view,
modules are rigorously tested in isolation using stubs and drivers before any integration is attempted. Then,
integration testing concentrates entirely on module interactions, assuming that the details within each
module are accurate. At the other extreme, module and integration testing can be combined, verifying the
details of each module's implementation in an integration context. Many projects compromise, combining
module testing with the lowest level of subsystem integration testing, and then performing pure integration
testing at higher levels. Each of these views of integration testing may be appropriate for any given project,
so an integration testing method should be flexible enough to accommodate them all, for example by
combining module testing with bottom-up integration.

Generalization of module testing criteria

Module testing criteria can often be generalized in several possible ways to support integration testing. As
discussed in the previous subsection, the most obvious generalization is to satisfy the module testing
criterion in an integration context, in effect using the entire program as a test driver environment for each
module. However, this trivial kind of generalization does not take advantage of the differences between
module and integration testing. Applying it to each phase of a multi-phase integration strategy, for
example, leads to an excessive amount of redundant testing.

More useful generalizations adapt the module testing criterion to focus on interactions between modules
rather than attempting to test all of the details of each module's implementation in an integration context.
The statement coverage module testing criterion, in which each statement is required to be exercised during
module testing, can be generalized to require each module call statement to be exercised during integration
testing. Although the specifics of the generalization of structured testing are more detailed, the approach is
the same. Since structured testing at the module level requires that all the decision logic in a module's
control flow graph be tested independently, the appropriate generalization to the integration level requires
that just the decision logic involved with calls to other modules be tested independently.

Module design complexity

Rather than testing all decision outcomes within a module independently, structured testing at the
integration level focuses on the decision outcomes that are involved with module calls. The design
reduction technique helps identify those decision outcomes, so that it is possible to exercise them
independently during integration testing. The idea behind design reduction is to start with a module control
flow graph, remove all control structures that are not involved with module calls, and then use the resultant
"reduced" flow graph to drive integration testing. Below figure shows a systematic set of rules for
performing design reduction. Although not strictly a reduction rule, the call rule states that function call
("black dot") nodes cannot be reduced. The remaining rules work together to eliminate the parts of the flow
graph that are not involved with module calls. The sequential rule eliminates sequences of non-call ("white
dot") nodes. Since application of this rule removes one node and one edge from the flow graph, it leaves
the cyclomatic complexity unchanged. However, it does simplify the graph so that the other rules can be
applied

The repetitive rule eliminates top-test loops that are not involved with module calls. The conditional rule
eliminates conditional statements that do not contain calls in their bodies. The looping rule eliminates
bottom-test loops that are not involved with module calls. It is important to preserve the module's
connectivity when using the looping rule, since for poorly-structured code it may be hard to distinguish the
"top" of the loop from the "bottom."

For the rule to apply there must be a path from the module entry to the top of the loop and a path from the
bottom of the loop to the module exit. Since the repetitive, conditional, and looping rules each remove one
edge from the flow graph, they each reduce cyclomatic complexity by one.

Rules 1 through 4 are intended to be applied iteratively until none of them can be applied, at which point
the design reduction is complete. By this process, even very complex logic can be eliminated as long as it
does not involve any module calls.

Brief about GUI Testing

What is GUI Testing?

GUI is the abbreviation for Graphical User Interface. It is absolutely essential that any application be
user-friendly. The end user should be comfortable while using all the components on screen and the
components should also perform their functionality with utmost clarity. Hence it becomes very essential to
test the GUI components of any application. GUI Testing can refer to just ensuring that the look-and-feel of
the application is acceptable to the user, or it can refer to testing the functionality of each and every
component involved.

The following is a set of guidelines to ensure effective GUI Testing and can be used even as a checklist
while testing a product/application.
Windows Compliance Testing

Start the application by double-clicking on its icon. The loading message should show the
application name, version number, and a bigger pictorial representation of the icon. No login is necessary.
The main window of the application should have the same caption as the caption of the icon in Program
Manager. Closing the application should result in an "Are you sure?" message box. Try to start the
application twice as it is loading. On each window, if the application is busy, then the hourglass should be
displayed; if there is no hourglass, then some "enquiry in progress" message should be displayed. All screens
should have a Help button (i.e., the F1 key should work the same way).

If Window has a Minimize Button, click it. Window should return to an icon on the bottom of the screen.
This icon should correspond to the Original Icon under Program Manager. Double Click the Icon to return
the Window to its original size. The window caption for every application should have the name of the
application and the window name - especially the error messages. These should be checked for spelling,
English and clarity, especially at the top of the screen. Check whether the title of the window makes sense. If
the screen has a Control menu, then use all un-grayed options.

Check all text on window for Spelling/Tense and Grammar. Use TAB to move focus around the Window.
Use SHIFT+TAB to move focus backwards. Tab order should be left to right, and Up to Down within a
group box on the screen. All controls should get focus - indicated by dotted box, or cursor. Tabbing to an
entry field with text in it should highlight the entire text in the field. The text in the Micro Help line should
change - Check for spelling, clarity and non-updateable etc. If a field is disabled (grayed) then it should not
get focus. It should not be possible to select them with either the mouse or by using TAB. Try this for every
grayed control.

Never updateable fields should be displayed with black text on a gray background with a black label. All
text should be left justified, followed by a colon tight to it. In a field that may or may not be updateable, the
label text and contents changes from black to gray depending on the current status. List boxes are always
white background with black text whether they are disabled or not. All others are gray.

In general, double-clicking is not essential. In general, everything can be done using both the mouse and
the keyboard. All tab buttons should have a distinct letter.

Text Boxes

Move the mouse cursor over all enterable text boxes. The cursor should change from an arrow to an insert bar. If it
doesn't, then the text in the box should be gray or non-updateable (refer to the previous page). Enter text into
the box. Try to overflow the text by typing too many characters - this should be stopped. Check the field width with
capital Ws. Enter invalid characters - letters in amount fields; try strange characters like +, -, *, etc. in all
fields. SHIFT and the arrow keys should select characters. Selection should also be possible with the mouse. Double-
clicking should select all text in the box.

Option (Radio Buttons)

Left and Right arrows should move 'ON' Selection. So should Up and Down. Select with mouse by
clicking.

Check Boxes

Clicking with the mouse on the box, or on the text should SET/UNSET the box. SPACE should do the
same.

Command Buttons

If a command button leads to another screen, and if the user can enter or change details on the other screen,
then the text on the button should be followed by three dots. All buttons except for OK and Cancel should
have a letter access key, indicated by an underlined letter in the button text; pressing
ALT+Letter should activate the button. Make sure there is no duplication. Click each button once with the
mouse - this should activate it. Tab to each button and press SPACE - this should activate it. Tab to each button and
press RETURN - this should activate it. The above are VERY IMPORTANT, and should be done for
EVERY command button. Tab to another type of control (not a command button). One button on the
screen should be the default (indicated by a thick black border), and pressing Return in any non-command-button
control should activate the default button. If there is a Cancel button on the screen, then pressing <Esc> should activate it. If
pressing the command button results in uncorrectable data, e.g. closing an action step, there should be a
message phrased positively with Yes/No answers, where Yes results in the completion of the action.

Drop Down List Boxes

Pressing the arrow should give the list of options. This list may be scrollable. You should not be able to type
text in the box. Pressing a letter should bring you to the first item in the list that starts with that letter.
Pressing Ctrl-F4 should open/drop down the list box. Spacing should be compatible with the existing
Windows spacing (Word etc.). Items should be in alphabetical order, with the exception of blank/none,
which is at the top or the bottom of the list box. Dropping down the list with an item selected should display
the list with the selected item at the top. Make sure at most one blank entry appears; there should not be a
blank line at the bottom.

Combo Boxes

Should allow text to be entered. Clicking the arrow should allow the user to choose from the list.

List Boxes

Should allow a single selection to be chosen, by clicking with the mouse or using the Up and Down arrow
keys. Pressing a letter should take you to the first item in the list starting with that letter. If there is a 'View'
or 'Open' button beside the list box, then double-clicking on a line in the list box should act in the same
way as selecting an item in the list box and then clicking the command button. Force the scroll bar to appear
and make sure all the data can be seen in the box.

Brief about SYSTEM TESTING

For most organizations, software and system testing represents a significant element of a project's cost in
terms of money and management time. Making this function more effective can deliver a range of benefits,
including reductions in risk and development costs and improved 'time to market' for new systems.

Systems with software components and software-intensive systems are becoming more and more complex every
day. Industry sectors such as telecoms, automotive, railway, and aeronautics and space are good examples. It is
widely agreed that testing is essential to manufacture reliable products. However, the validation process often
does not receive the required attention. Moreover, the validation process is closely related to other activities such
as conformance, acceptance and qualification testing.

The difference between function testing and system testing is that the focus is now on the whole
application and its environment. Therefore the complete program has to be available. This does not mean
that single functions of the whole program are tested again, because this would be too redundant. The main
goal is rather to demonstrate the discrepancies of the product from its requirements and its documentation.
In other words, this again raises the question, "Did we build the right product?" and not just, "Did we
build the product right?"

However, system testing does not only deal with this more economical problem; it also covers some aspects
that are oriented towards the word "system". This means that these tests should be done in the environment
for which the program was designed, such as a multi-user network. Even security guidelines have to be
included. Once again, it is beyond doubt that this test cannot be done completely; nevertheless, while this is
one of the most incomplete test methods, it is one of the most important.

A number of time-domain software reliability models attempt to predict the growth of a system's reliability
during the system test phase of the development life cycle. For example, studies have applied several types of
Poisson-process models to the development of a large system for which system test was performed in two
parallel tracks, using different strategies for test data selection.

We will test that the functionality of your systems meets your specifications, integrating with whichever
type of development methodology you are applying. We test for errors that users are likely to make as
they interact with the application, as well as your application's ability to trap errors gracefully. These
techniques can be applied flexibly, whether testing a financial system, e-commerce, an online casino or
games.

System Testing is more than just functional testing, however, and can, when appropriate, also encompass
many other types of testing, such as:

1. security
2. load/stress
3. performance
4. browser compatibility
5. localization

Need for System Testing

Effective software testing, as a part of software engineering, has been proven over the last three decades to
deliver real business benefits, including:

• Reduction of costs - reduced rework and support overheads.

• Increased productivity - more effort spent on developing new functionality and less on "bug fixing" as
quality increases.

• Reduced commercial risks - if it goes wrong, what is the potential impact on your commercial goals?
Knowledge is power, so why take a leap of faith while your competition steps forward with confidence?

These benefits are achieved as a result of some fundamental principles of testing; for example, increased
independence naturally increases objectivity.

Your test strategy must take into consideration the risks to your organization, both commercial and technical.
You will have a personal interest in the project's success, in which case it is only human for your objectivity
to be compromised.

System Testing Techniques

1. The goal is to evaluate the system as a whole, not its parts.
2. Techniques can be structural or functional.
3. Techniques can be used in any stage that tests the system as a whole (acceptance, installation, etc.).
4. Techniques are not mutually exclusive.

Structural techniques:

1. Stress testing - test larger-than-normal capacity in terms of transactions, data, users, speed, etc.
2. Execution testing - test performance in terms of speed, precision, etc.
3. Recovery testing - test how the system recovers from a disaster, how it handles corrupted data, etc.
4. Operations testing - test how the system fits in with existing operations and procedures in the user
organization.
5. Compliance testing - test adherence to standards.
6. Security testing - test security requirements.

Functional techniques:

1. Requirements testing - the fundamental form of testing; makes sure the system does what it is required
to do.
2. Regression testing - make sure unchanged functionality remains unchanged.
3. Error-handling testing - test required error-handling functions (usually user error).
4. Manual-support testing - test that the system can be used properly; includes user documentation.
5. Intersystem handling testing - test that the system is compatible with other systems in the environment.
6. Control testing - test required control mechanisms.
7. Parallel testing - feed the same input into two versions of the system to make sure they produce the
same output.

Functional techniques

1. Input domain testing - pick test cases representative of the range of allowable input, including
high, low, and average values
2. Equivalence partitioning - partition the range of allowable input so that the program is expected to
behave similarly for all inputs in a given partition, then pick a test case from each partition
3. Boundary value - choose test cases with input values at the boundary (both inside and outside) of
the allowable range
4. Syntax checking - choose test cases that violate the format rules for input
5. Special values - design test cases that use input values that represent special situations
6. Output domain testing - pick test cases that will produce output at the extremes of the output
domain
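As an illustration of equivalence partitioning and boundary value selection (techniques 2 and 3 above), the following Python sketch derives test inputs for a hypothetical field that accepts whole numbers from 1 to 100; the valid range is an assumption made only for the example.

# Sketch: deriving test inputs for a field that accepts integers 1..100.
# The valid range is a hypothetical assumption for illustration.

LOW, HIGH = 1, 100

def equivalence_partitions():
    """One representative value per partition: below range, in range, above range."""
    return {"below": LOW - 5, "valid": (LOW + HIGH) // 2, "above": HIGH + 5}

def boundary_values():
    """Values just outside and just inside each boundary (min-1, min, max, max+1)."""
    return [LOW - 1, LOW, HIGH, HIGH + 1]

def is_valid(value):
    return LOW <= value <= HIGH

if __name__ == "__main__":
    for name, value in equivalence_partitions().items():
        print(name, value, "expected valid:", is_valid(value))
    for value in boundary_values():
        print("boundary", value, "expected valid:", is_valid(value))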
Structural techniques:

1. Statement testing - ensure the set of test cases exercises every statement at least once.
2. Branch testing - each branch of an if/then statement is exercised.
3. Conditional testing - each truth statement is exercised both true and false.
4. Expression testing - every part of every expression is exercised.
5. Path testing - every path is exercised (impossible in practice).
Error-based techniques:

The basic idea is that if you know something about the nature of the defects in the code, you can estimate
whether or not you have found all of them.

1. Fault seeding - put a certain number of known faults into the code, then test until they are all found.
2. Mutation testing - create mutants of the program by making single changes, then run test cases until all
mutants have been killed.
3. Historical test data - an organization keeps records of the average number of defects in the products it
produces, then tests a new product until the number of defects found approaches the expected number.
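Fault seeding is often used to estimate how many real defects remain, from the proportion of seeded faults that testing has discovered. The Python sketch below shows this simple ratio estimate; all the figures are invented for illustration.

# Sketch: estimating total real defects from fault-seeding results.
# Estimate: total_real ~= real_found * seeded_total / seeded_found
# All figures below are illustrative.

def estimate_total_defects(seeded_total, seeded_found, real_found):
    if seeded_found == 0:
        raise ValueError("no seeded faults found yet - cannot estimate")
    return real_found * seeded_total / seeded_found

# Example: 20 faults were seeded; testing found 15 of them plus 45 real defects.
estimate = estimate_total_defects(seeded_total=20, seeded_found=15, real_found=45)
print("estimated real defects in total:", estimate)          # 60.0
print("estimated real defects remaining:", estimate - 45)    # 15.0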

Conclusion:

Hence the system Test phase should begin once modules are integrated enough to perform tests in a whole
system environment. System testing can occur in parallel with integration test, especially with the top-
down method.

Brief about Regression Testing


What is regression testing?

1. Regression testing is the process of testing changes to computer programs to make sure that the
older programming still works with the new changes.
2. Regression testing is a normal part of the program development process. Test department coders
develop code test scenarios and exercises that will test new units of code after they have been
written.
3. Before a new version of a software product is released, the old test cases are run against the new
version to make sure that all the old capabilities still work. The reason they might not work is that
changing or adding new code to a program can easily introduce errors into code that was not
intended to be changed.
4. Regression testing is the selective retesting of a software system that has been modified, to ensure
that any bugs have been fixed, that no other previously working functions have failed as a result of
the repairs, and that newly added features have not created problems with previous versions of the
software. It is also referred to as verification testing.
5. Regression testing is initiated after a programmer has attempted to fix a recognized problem or has
added source code to a program that may have inadvertently introduced errors.
6. It is a quality control measure to ensure that the newly modified code still complies with its
specified requirements and that unmodified code has not been affected by the maintenance
activity.
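A concrete way to apply points 3 and 4 above is to keep the old test cases, with their expected outputs, and re-run them against each new version. The sketch below is a minimal illustration in Python; the discount() function and its expected values are hypothetical, standing in for any previously working behaviour.

# Minimal regression-test sketch: old expected results re-checked on a new build.
# discount() stands in for any function that already worked in the old version.

def discount(order_total):
    """Hypothetical business rule: 10% discount on orders of 1000 or more."""
    return order_total * 0.9 if order_total >= 1000 else order_total

# Baseline captured when the old version was accepted.
REGRESSION_CASES = [
    (500,   500),      # below threshold: unchanged
    (1000,  900.0),    # at threshold: discounted
    (2000, 1800.0),    # above threshold: discounted
]

def run_regression():
    failures = [(inp, exp, discount(inp))
                for inp, exp in REGRESSION_CASES if discount(inp) != exp]
    for inp, expected, actual in failures:
        print("REGRESSION: input %r expected %r got %r" % (inp, expected, actual))
    return not failures

if __name__ == "__main__":
    print("all old capabilities still work:", run_regression())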
Test Execution

Test Execution is the heart of the testing process. Each time your application changes, you will want to
execute the relevant parts of your test plan in order to locate defects and assess quality.

Create Test Cycles

During this stage you decide the subset of tests from your test database you want to execute.

Usually you do not run all the tests at once. At different stages of the quality assurance process, you need to
execute different tests in order to address specific goals. A related group of tests is called a test cycle, and
can include both manual and automated tests.

Example: You can create a cycle containing basic tests that run on each build of the application throughout
development. You can run the cycle each time a new build is ready, to determine the application's stability
before beginning more rigorous testing.

Example: You can create another set of tests for a particular module in your application. This test cycle
includes tests that check that module in depth.

To decide which test cycles to build, refer to the testing goals you defined at the beginning of the process.
Also consider issues such as the current state of the application and whether new functions have been added
or modified.

Following are examples of some general categories of test cycles to consider:

• sanity cycle checks the entire system at a basic level (breadth, rather than depth) to see that it is
functional and stable. This cycle should include basic-level tests containing mostly positive
checks.
• normal cycle tests the system a little more in depth than the sanity cycle. This cycle can group
medium-level tests, containing both positive and negative checks.
• advanced cycle tests both breadth and depth. This cycle can be run when more time is available
for testing. The tests in the cycle cover the entire application (breadth), and also test advanced
options in the application (depth).
• regression cycle tests maintenance builds. The goal of this type of cycle is to verify that a change
to one part of the software did not break the rest of the application. A regression cycle includes
sanity-level tests for testing the entire software, as well as in-depth tests for the specific area of the
application that was modified.
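One lightweight way to organize such cycles is to tag each test with the cycles it belongs to and run only the matching subset on a given build. The Python sketch below is illustrative only; real projects would normally use their test tool's own grouping mechanism, and the test names shown are invented.

# Sketch: grouping tests into cycles (sanity, normal, advanced, regression)
# and running only the subset relevant for the current build.

TESTS = [
    {"name": "login works",            "cycles": {"sanity", "normal", "regression"}},
    {"name": "create order",           "cycles": {"normal", "advanced"}},
    {"name": "month-end batch report", "cycles": {"advanced"}},
]

def run_cycle(cycle):
    selected = [t for t in TESTS if cycle in t["cycles"]]
    print("cycle %r: running %d of %d tests" % (cycle, len(selected), len(TESTS)))
    for test in selected:
        print("  executing:", test["name"])   # placeholder for the real execution

run_cycle("sanity")      # quick breadth-only check on every build
run_cycle("advanced")    # depth testing when more time is available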

Run Test Cycles (Automated & Manual Tests)

Once you have created cycles that cover your testing objectives, you begin executing the tests in each cycle.
You perform manual tests using the test steps; the testing tools execute automated tests for you. A test cycle
is complete only when all tests - automated and manual - have been run.

1. With Manual Test Execution you follow the instructions in the test steps of each test. You use the
application, enter input, compare the application output with the expected output, and log the
results. For each test step you assign either pass or fail status.
2. During Automated Test Execution you create a batch of tests and launch the entire batch at once.
Testing Tools runs the tests one at a time. It then imports results, providing outcome summaries
for each test.

Analyze Test Results

After every test run, analyze and validate the test results. Identify all the failed steps in the tests and
determine whether a bug has been detected or whether the expected result needs to be updated.

Change Request

Initiating a Change Request: A user or developer wants to suggest a modification that would improve an
existing application, notices a problem with an application, or wants to recommend an enhancement. Any
major or minor request is considered a problem with an application and will be entered as a change request.

Type of Change Request

1. Bug - the application works incorrectly or provides incorrect information (for example, a letter is
allowed to be entered in a number field).
2. Change - a modification of the existing application (for example, sorting the files alphabetically by
the second field rather than numerically by the first field makes them easier to find).
3. Enhancement - new functionality or an item added to the application (for example, a new report, a
new field, or a new button).

Priority for the request

1. Low - the application works, but this would make the function easier or more user-friendly.
2. High - the application works, but this is necessary to perform a job.
3. Critical - the application does not work, job functions are impaired and there is no workaround.
This also applies to any Section 508 infraction.
Bug Tracking

1. Locating and repairing software bugs is an essential part of software development.


2. Bugs can be detected and reported by engineers, testers, and end-users in all phases of the testing
process.
3. Information about bugs must be detailed and organized in order to schedule bug fixes and
determine software release dates.

Bug Tracking involves two main stages: reporting and tracking.

Report Bugs
Once you execute the manual and automated tests in a cycle, you report the bugs (or defects) that you
detected. The bugs are stored in a database so that you can manage them and analyze the status of your
application.

When you report a bug, you record all the information necessary to reproduce and fix it. You also make
sure that the QA and development personnel involved in fixing the bug are notified.

Track and Analyze Bugs

The lifecycle of a bug begins when it is reported and ends when it is fixed, verified, and closed.

1. First you report New bugs to the database, and provide all necessary information to reproduce, fix,
and follow up the bug.
2. The Quality Assurance manager or Project manager periodically reviews all New bugs and
decides which should be fixed. These bugs are given the status Open and are assigned to a
member of the development team.
3. Software developers fix the Open bugs and assign them the status Fixed.
4. QA personnel test a new build of the application. If a bug does not reoccur, it is Closed. If a bug is
detected again, it is reopened.
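That lifecycle amounts to a small state machine. The sketch below encodes the transitions described above (New to Open to Fixed to Closed, with a reopen path) so that an illegal status change is rejected; it is an illustration, not the workflow of any particular tool.

# Sketch: the bug lifecycle described above as a simple state machine.
# Allowed transitions are taken from the four steps listed in the text.

ALLOWED = {
    "New":      {"Open"},                # QA / project manager decides it should be fixed
    "Open":     {"Fixed"},               # developer fixes it
    "Fixed":    {"Closed", "Reopened"},  # retest passes, or the bug reoccurs
    "Reopened": {"Fixed"},
    "Closed":   set(),
}

class Bug:
    def __init__(self, summary):
        self.summary = summary
        self.status = "New"

    def move_to(self, new_status):
        if new_status not in ALLOWED[self.status]:
            raise ValueError("illegal transition %s -> %s" % (self.status, new_status))
        self.status = new_status

bug = Bug("letter accepted in a number field")
bug.move_to("Open")
bug.move_to("Fixed")
bug.move_to("Closed")
print(bug.summary, "->", bug.status)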

Communication is an essential part of bug tracking; all members of the development and quality assurance
teams must be kept well informed in order to ensure that bug information is up to date and that the most
important problems are addressed.

The number of open or fixed bugs is a good indicator of the quality status of your application. You can use
data analysis tools such as reports and graphs to interpret bug data.

About Acceptance Testing

In software engineering, acceptance testing is formal testing conducted to determine whether a system
satisfies its acceptance criteria and thus whether the customer should accept the system. The main types of
software testing are:

1. Component.
2. Interface.
3. System.
4. Acceptance.
5. Release.

Acceptance Testing checks the system against the "Requirements". It is similar to systems testing in that
the whole system is checked but the important difference is the change in focus: Systems testing checks
that the system that was specified has been delivered. Acceptance Testing checks that the system delivers
what was requested.

The customer and not the developer should always do acceptance testing.

The customer knows what is required from the system to achieve value in the business and is the only
person qualified to make that judgment. The forms of the tests may follow those in system testing, but at all
times they are informed by the business needs.

Acceptance testing comprises the test procedures that lead to formal 'acceptance' of new or changed systems.
User Acceptance Testing (UAT) is a critical phase of any 'systems' project and requires significant participation
by the 'End Users'. To be of real use, an Acceptance Test Plan should be developed in order to plan precisely,
and in detail, the means by which 'Acceptance' will be achieved. The final part of the UAT can also include a
parallel run to prove the system against the current system.

Factors influencing Acceptance Testing

The User Acceptance Test Plan will vary from system to system but, in general, the testing should be
planned in order to provide a realistic and adequate exposure of the system to all reasonably expected
events. The testing can be based upon the User Requirements Specification to which the system should
conform.

As in any system though, problems will arise and it is important to have determined what will be the
expected and required responses from the various parties concerned; including Users; Project Team;
Vendors and possibly Consultants / Contractors.

In order to agree what such responses should be, the End Users and the Project Team need to develop and
agree a range of 'Severity Levels'. These levels will range from (say) 1 to 6 and will represent the relative
severity, in terms of business / commercial impact, of a problem with the system, found during testing.
Here is an example which has been used successfully: '1' is the most severe, and '6' has the least impact:

1. 'Show Stopper' - it is impossible to continue with the testing because of the severity of this error / bug.
2. Critical Problem - testing can continue, but we cannot go into production (live) with this problem.
3. Major Problem - testing can continue, but this feature will cause severe disruption to business processes
in live operation.
4. Medium Problem - testing can continue, and the system is likely to go live with only minimal departure
from agreed business processes.
5. Minor Problem - both testing and live operations may progress. This problem should be corrected, but
little or no change to business processes is envisaged.
6. 'Cosmetic' Problem - e.g. colors, fonts, pitch size. However, if such features are key to the business
requirements they will warrant a higher severity level.

The users of the system, in consultation with the executive sponsor of the project, must then agree upon the
responsibilities and required actions for each category of problem. For example, you may demand that any
problem at severity level 1 receives a priority response and that all testing ceases until such level 1 problems
are resolved.
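The agreed responses can be recorded in a simple lookup so that everyone applies the same rule during UAT. The mapping below is only an example of how such an agreement might be captured in Python; the actual responses for each level must come out of the negotiation described above.

# Sketch: recording the agreed response for each UAT severity level.
# The responses shown are examples only, not a prescribed policy.

SEVERITY_POLICY = {
    1: {"label": "Show Stopper", "testing_continues": False, "can_go_live": False},
    2: {"label": "Critical",     "testing_continues": True,  "can_go_live": False},
    3: {"label": "Major",        "testing_continues": True,  "can_go_live": False},
    4: {"label": "Medium",       "testing_continues": True,  "can_go_live": True},
    5: {"label": "Minor",        "testing_continues": True,  "can_go_live": True},
    6: {"label": "Cosmetic",     "testing_continues": True,  "can_go_live": True},
}

def response_for(level):
    policy = SEVERITY_POLICY[level]
    return "%s: testing %s, go-live %s" % (
        policy["label"],
        "continues" if policy["testing_continues"] else "stops",
        "allowed" if policy["can_go_live"] else "blocked",
    )

print(response_for(1))   # Show Stopper: testing stops, go-live blocked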

Even where the severity levels and the responses to each have been agreed by all parties, the allocation of a
problem to its appropriate severity level can be subjective and open to question. To avoid the risk of
lengthy and protracted exchanges over the categorization of problems, we strongly advise that a range of
examples is agreed in advance, to ensure that there are no fundamental areas of disagreement; or, if there
are, that these are known in advance and your organization is forewarned.

Finally, it is crucial to agree the criteria for acceptance. Because no system is entirely fault free, the End User
and the vendor must agree on the maximum number of acceptable outstanding problems in any particular
category. Again, prior consideration of this is advisable.

In some cases, users may agree to accept ('sign off') the system subject to a range of conditions. These
conditions need to be analyzed as they may, perhaps unintentionally, seek additional functionality which
could be classified as scope creep. In any event, any and all fixes from the software developers must be
subjected to rigorous System Testing and, where appropriate, Regression Testing.

Conclusion
Hence the goal of acceptance testing is to verify the overall quality, correct operation, scalability,
completeness, usability, portability, and robustness of the functional components supplied by the software
system.

Manual Testing

Before going through manual testing and the steps involved in it, let us look at some basic principles of
testing.

Principles of Testing

1. All tests should be traceable to requirements


2. Tests should be planned ahead of execution
3. Pareto principle - isolate suspect components for vigorous testing (80% of defects can be traced
back to 20% of the requirements)
4. Testing should be done in an outward manner (Unit -> System)

Tasks involved in manual testing

• Strategy
• Test Plan
• Traceability
• Test Bed
• Test Design(Test case preparation)
• Test Script
• Test Execution Process
• Defect Management
• Softbase Transfer
• Deliverables

Test Strategy

A test strategy is a statement of the overall approach to testing, identifying what levels of testing are to be
applied, and the methods, techniques and tools.

It is the role of test management to ensure that new or modified service products meet the business
requirements for which they have been developed or enhanced.

The Testing strategy should define the objectives of all test stages and the techniques that apply. The
testing strategy also forms the basis for the creation of a standardized documentation set, and facilitates
communication of the test process and its implications outside of the test discipline. Any test support tools
introduced should be aligned with, and in support of, the test strategy. 'Test Approach' and 'Test Architecture'
are other terms used for the Test Strategy.

Test management is also concerned with both test resource and test environment management

1. Test Strategy Flow


2. Test Strategy Selection
3. Test Strategy Execution
4. General Testing Strategies
5. Developing a Test Strategy
6. Conclusion
Test Strategy Flow:

Test Cases and Test Procedures should manifest the Test Strategy.

Test Strategy Selection

Selection of the Test Strategy is based on the following factors

1. Product: the Test Strategy is based on the application under test - in this example, an application that
helps people and teams of people make decisions.
2. Based on the Key Potential Risks:

• Suggestion of Wrong Ideas.


• People will use the Product Incorrectly
• Incorrect comparison of scenarios.
• Scenarios may be corrupted.
• Unable to handle Complex Decisions.

3. Determination of Actual Risk:

• Understand the underlying Algorithm.


• Simulate the Algorithm in parallel.
• Capability tests each major function.
• Generate large number of decision scenarios.
• Create complex scenarios and compare them.
• Review Documentation and Help.
• Test for sensitivity to user Error.

Test Strategy Execution:

Understand the decision algorithm and build a parallel decision analyzer, using Perl or Excel, that will
function as a reference for high-volume testing of the application.

1. Create a means to generate and apply large numbers of decision scenarios to the product. This will
be done using the GUI test automation system or through the direct generation of Decide Right
scenario files that would be loaded into the product during test.
2. Review the documentation, and the design of the user interface and functionality, for its sensitivity
to user error.
3. Test with decision scenarios that are near the limit of complexity allowed by the product, and
compare complex scenarios.
4. Test the product for the risk of silent failures or corruptions in decision analysis.

Issues in executing the test strategy:

• The difficulty of understanding and simulating the decision algorithm.
• The risk of coincident failure of both the simulation and the product.
• The difficulty of automating decision tests.
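The strategy above hinges on a parallel implementation of the decision algorithm acting as a reference oracle for high-volume testing; the document suggests Perl or Excel for this. The sketch below shows the same idea in Python, with a deliberately trivial weighted-score rule standing in for the real algorithm and a placeholder where GUI automation would drive the product.

# Sketch: high-volume comparison of the product against a parallel reference
# implementation of the decision algorithm. The weighted-score rule below is
# a made-up stand-in for the real algorithm.

import random

def reference_decision(weights, scores):
    """Independent re-implementation used as the oracle."""
    totals = {opt: sum(w * s for w, s in zip(weights, option_scores))
              for opt, option_scores in scores.items()}
    return max(totals, key=totals.get)

def product_decision(weights, scores):
    """Placeholder for driving the product under test (e.g. via GUI automation)."""
    return reference_decision(weights, scores)   # assume agreement for the sketch

def run_bulk_comparison(n_scenarios=1000):
    mismatches = 0
    for _ in range(n_scenarios):
        weights = [random.randint(1, 10) for _ in range(3)]
        scores = {opt: [random.randint(1, 5) for _ in range(3)] for opt in "ABC"}
        if product_decision(weights, scores) != reference_decision(weights, scores):
            mismatches += 1
    print("scenarios:", n_scenarios, "mismatches:", mismatches)

run_bulk_comparison()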

General Testing Strategies

1. Top-down
2. Bottom-up
3. Thread testing
4. Stress testing
5. Back-to-back testing

Developing a Test Strategy

The test Strategy will need to be customized for any specific software system.
The applicable test factors would be listed as the phases in which the testing must occur.
Four test steps must be followed to develop a customized test strategy.

1. Select and rank Test Factors


2. Identify the System Developmental Phases
3. Identify the Business risks associated with the System under Development.
4. Place risks in the Matrix

Conclusion:

Test Strategy should be developed in accordance with the business risks associated with the software when
the test team develops the test tactics. Thus the Test team needs to acquire and study the test strategy that
should question the following:

1. What is the relationship of importance among the test factors?


2. Which of the high level risks are the most significant?
3. What damage can be done to the business if the software fails to perform correctly? What damage
can be done to the business if the software is not completed on time?
4. Who are the individuals most knowledgeable in understanding the impact of the identified
business risks?

Hence the Test Strategy must address the risks and present a process that can reduce those risks. The
strategy accordingly focuses on risks and thereby establishes the objectives for the test process.

Test Plan

The next task would be the preparation of a Test Plan. A test plan states what the items to be tested are, at
what level they will be tested, in what sequence they are to be tested, how the test strategy will be applied to
the testing of each item, and describes the test environment.

A Test Plan can be defined as a document that describes the scope, approach, resources and schedule of
intended test activities. It identifies test items, the features to be tested, the testing tasks, who will do each
task, and any risks requiring contingency planning.

The main purpose of preparing a Test Plan is to ensure that everyone concerned with the project is in sync
with regard to the scope, responsibilities, deadlines and deliverables for the project. It is in this respect that
reviews and a sign-off are very important, since they mean that everyone is in agreement on the contents of the
test plan; this also helps in case of any dispute during the course of the project (especially between the
developers and the testers).

Type of Test Plan

Different test plans are prepared based on the level of testing. The objective of each test plan is to provide a
plan for verification, by testing the software, that the software produced fulfills the requirements or design
statements of the appropriate software specification.

Unit Test Plan:

A Unit Test Plan describes the plan for testing individual units / components / modules of the software.

Integration Test Plan:

An Integration Test Plan describes the plan for testing integrated software components.

System Test Plan:

A System Test Plan describes the plan for testing the system as a whole.

Acceptance Test Plan:

An Acceptance Test Plan describes the plan for acceptance testing of the software. Normally Acceptance
test plan is prepared by the Customer taking the help of Cognizant.

Purpose of preparing a Test Plan

1. A Test Plan is a useful way to think through the efforts needed to validate the acceptability of a
software product.
2. The completed document will help people outside the test group understand the 'why' and 'how' of
product validation.
3. It should be thorough enough to be useful but not so thorough that no one outside the test group
will read it.

Contents of a Test Plan

1. Purpose - This section should contain the purpose of preparing the test plan.
2. Scope - This section should talk about the areas of the application which are to be tested by the QA
team and specify those areas which are definitely out of scope (screens, database, mainframe
processes etc.).
3. Test Approach - This would contain details on how the testing is to be performed and whether any
specific strategy is to be followed (including configuration management).
4. Entry Criteria - This section explains the various steps to be performed before the start of a test (i.e.
pre-requisites). For example: timely environment set-up, starting the web server / app server,
successful implementation of the latest build etc.
5. Resources - This section should list out the people who would be involved in the project, their
designation etc.
6. Tasks / Responsibilities - This section talks about the tasks to be performed and the responsibilities
assigned to the various members in the project.
7. Exit Criteria - Contains tasks like bringing down the system / server, restoring the system to the
pre-test environment, database refresh etc.
8. Schedules / Milestones - This section deals with the final delivery date and the various milestone
dates to be met in the course of the project.
9. Hardware / Software Requirements - This section would contain the details of PCs / servers
required (with the configuration) to install the application or perform the testing; specific software
that needs to be installed on the systems to get the application running or to connect to the
database; connectivity related issues etc.
10. Risks & Mitigation Plans - This section should list out all the possible risks that can arise during the
testing and the mitigation plans that the QA team plans to implement in case the risk actually turns
into a reality.
11. Tools to be used - This would list out the testing tools or utilities (if any) that are to be used in the
project, e.g. WinRunner, TestDirector, PCOM, WinSQL.
12. Deliverables - This section contains the various deliverables that are due to the client at various
points of time, i.e. daily, weekly, start of the project, end of the project etc. These could include
Test Plans, Test Procedures, Test Matrices, Status Reports, Test Scripts etc. Templates for all of
these could also be attached.
13. References
• Procedures
• Templates (client specific or otherwise)
• Standards / Guidelines (e.g. QView)
• Project related documents (RSD, ADD, FSD etc.)
14. Annexure - This could contain embedded documents or links to documents which have been / will
be used in the course of testing, e.g. templates used for reports, test cases etc. Referenced
documents can also be attached here.
15. Sign-Off - This should contain the mutual agreement between the client and the QA team, with both
leads / managers signing off their agreement on the Test Plan.

Traceability

Requirements tracing is the process of documenting the links between the user requirements for the
system you are building and the work products developed to implement and verify those requirements.
These work products include software requirements, design specifications, software code, test plans
and other artifacts of the systems development process. Requirements tracing helps the project team to
understand which parts of the design and code implement the user's requirements, and which tests are
necessary to verify that the user's requirements have been implemented correctly.
• Business Requirement and Functional Specification
• Functional Specification and Test conditions
• Gap Analysis
• Tools Used

BR (Business Requirement) and FS(Functional Specification)

The requirements specified by the users in the business requirement document may not be exactly
translated into the functional specification. Therefore, a trace between the functional specification and the
business requirements is done on a one-to-one basis. This helps in finding the gaps between the documents.
These gaps are then closed by the author of the FS, or deferred after discussion.

Testers should understand these gaps and use them as an addendum to the FS, after getting this signed off
by the author of the FS. The final form of the FS may vary from the original, as deferring or accepting a gap
may have a ripple effect on the application. Sometimes these ripple effects may not be reflected in the FS.
Addenda may sometimes affect the entire system and the test case development.

FS and Test Conditions

Test conditions built by the tester are traced to the FS to ensure full coverage of the baseline document.
If gaps are found, the tester must then build conditions for them. In this process, testers must keep in mind
the rules specified for test condition writing.

Gap Analysis

This is the terminology used for finding the difference between what should be and what is. As explained,
it is done from the business requirements to the FS, and from the FS to the test conditions. Mathematically,
it becomes evident that the business requirements - the users' needs - are tested, as the business requirements
and the test conditions are matched.

Simplifying the above,

A = Business Requirement
B = Functional Specification
C = Test Conditions

A = B, B = C, Therefore A = C
Another way of looking at this process is that it eliminates as many mismatches as possible at every stage,
thereby giving the customer an application which will satisfy their needs.

In the case of UAT, there is a direct translation of specifications from the business requirements to the test
conditions, leaving a smaller loss of understandability.
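The traces described above (business requirements to FS, FS to test conditions) can be held in a simple matrix and checked mechanically for gaps. The Python sketch below is illustrative; the requirement and condition identifiers are invented.

# Sketch: a minimal traceability check. Identifiers are invented for illustration.

business_reqs    = {"BR1", "BR2", "BR3"}
fs_to_br         = {"FS1": "BR1", "FS2": "BR2"}                 # FS item -> BR it covers
conditions_to_fs = {"TC1": "FS1", "TC2": "FS1", "TC3": "FS2"}   # test condition -> FS item

def gap_analysis():
    covered_brs = set(fs_to_br.values())
    covered_fs  = set(conditions_to_fs.values())
    return {
        "BRs with no FS coverage":         business_reqs - covered_brs,
        "FS items with no test condition": set(fs_to_br) - covered_fs,
    }

for gap, items in gap_analysis().items():
    print(gap, "->", sorted(items))
# Here BR3 has no functional specification; every FS item has at least one condition.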

Tools Used

The entire process of traceability is time consuming. In order to simplify it, Rational Software has developed
a tool which maintains the specifications of the documents; these are then mapped correspondingly. The
specifications have to be loaded into the system by the user. Even though this is a time consuming process,
it helps in finding the ripple effect of altering a specification. The impact on test conditions can immediately
be identified using the trace matrix.

Test Bed

1. High Level Planning


2. Feeds analysis
3. Feeds format and final set-up

High Level Planning

In order to test the conditions and values that are to be tested, the application should be populated with data.
There are two ways of populating the data into tables of the application.

Intelligent: Data is tailor-made for every condition and value, with reference to its condition. This will
aid in triggering certain actions in the application. By constructing such intelligent data, a few data records
will suffice for the testing process.

Example:

Business rule: if the interest to be paid is more than 8% and the tenor of the deposit exceeds one month,
then the system should give a warning. To exercise this, we can populate the 'Interest to be Paid' field of a
particular deposit with 9.5478 and make the tenor two months. This will trigger the warning in the
application.
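For a rule like this, intelligent data can be generated directly from the rule itself, so a handful of records is enough to trigger the warning and its negative cases. The sketch below is illustrative; the field names and record layout are assumptions.

# Sketch: tailor-made ("intelligent") test data for the warning rule:
# interest > 8% AND tenor > 1 month should raise a warning.
# Field names are assumptions made for the example.

def should_warn(deposit):
    return deposit["interest_pct"] > 8.0 and deposit["tenor_months"] > 1

intelligent_data = [
    {"id": "D1", "interest_pct": 9.5478, "tenor_months": 2},  # triggers the warning
    {"id": "D2", "interest_pct": 7.5,    "tenor_months": 2},  # interest too low
    {"id": "D3", "interest_pct": 9.0,    "tenor_months": 1},  # tenor too short
]

for deposit in intelligent_data:
    print(deposit["id"], "warning expected:", should_warn(deposit))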

Unintelligent

Data is populated in bulk, corresponding to the table structures. Its values are chosen at random, and not
with reference to the conditions derived. This type of population can be used for testing the performance of
the application and its behavior with random data. It will be difficult for the tester to identify the records
relevant to his requirements from the mass of data.

Example:

Using the example described under 'Intelligent', it is difficult to find a suitable record with interest exceeding
8% and a tenor of more than one month among randomly generated data.

Feeds analysis

Most applications are fed with inputs at periodic intervals, such as end of day or every hour. Some
applications may be stand-alone, i.e. all processing happens within their own database and no external inputs
of processed data are required.
In the case of applications having feeds received from other machines, the feeds are sent in a pre-designed
format. At the application end, these feeds will be processed by local programs and populated into the
respective tables.

It is therefore, essential for testers to understand the data mapping between the feeds and the database
tables of the application. Usually, a document is published in this regard.

The high-level data designed previously should be converted into the feed formats in order to populate the
application's database.

Feeds format

The feed files are then uploaded into the application using a series of programs. In the case of unintelligent
data, a tool would be used to generate bulk data specific to the application, by specifying the application's
requirements to the tool. This data is then uploaded into the application.

Once the data is uploaded, it might have to be interconnected with the application's business logic. This may
be necessary for both types of application: stand-alone and feed-fed.

Test Design (Test case preparation)

1. Introduction
2. Test Case Formation
3. Explicit writing
4. Expected Results
5. Pre-Requirements
6. Data definition
7. Test Case Format

Introduction

Once the test plan, traceability and test bed for a level of testing have been completed, the next stage is to specify
a set of test cases or test paths for each item to be tested at that level. A number of test cases will be
identified for each item to be tested at each level of testing. Each test case will specify how the
implementation of a particular requirement or design is to be tested and the criteria for success of each test.

• A Unit Test Specification will detail the test cases for testing individual units of the software
• An Integration Test Specification will detail the test cases for each stage of integration of tested
software components
• A System Test Specification will detail the test cases of system testing of the software
• An Acceptance Test Specification will detail the test cases of acceptance testing of the software
It is important to design test cases for both positive and negative testing. Positive testing checks whether
the software is doing what it is supposed to do and Negative testing checks whether the software is doing
what it is not supposed to do.

Test Case Formation

At this stage, the tester has clarity on how the application is to be tested. It now becomes necessary to support
the actual test activity with test cases. Test cases are written based on the test conditions; they are the phrased
form of the test conditions, readable and understandable by all.

Explicit writing

There are three headings under which a test case is written, namely:

Description: Here the details of the test on a specification or condition are written.

Data and Pre-requirements: Here either the data for the test or specification is mentioned. Pre-
requirements for the test to be executed should also be clearly mentioned.

Expected Results: The expected result on execution of the instruction in the description is mentioned. In
general, it should reflect, in detail the result of the test execution.

While writing a case, to make the test case explicit, the tester should include the following

• Reference to the rules and specifications under test, in words, with minimal technical jargon
• Checks on data shown by the application should refer to the table names if possible
• The location of the fields, or whether a new window is displayed, must be specified clearly
• The names of the fields and screens should also be explicit.

Expected Results

The outcome of executing an instruction may have a single or multiple impact on the application. The
resultant behavior of the application after test execution is the expected result.

Single Expected Result:

The executed instruction has a single impact on the application.

Example:

Description: Enter USER NAME, PASSWORD and click on OK button in login window

Expected : Application Home page should be displayed

Multiple Expected Result: The executed instruction has multiple impacts on the application.

Example:

Description: Enter USER NAME, PASSWORD and click on OK button in login window
Expected : Application Home page should be displayed and Login Username should be displayed in the
Top Right of the Home page

The language used in the expected results should not leave room for uncertainty; the results expressed should
be clear and have only one possible interpretation. It is advisable to use the term 'should' in the expected results.

Pre-Requirements

Test cases cannot generally be executed with the application in its normal state. Below is a list of possible
pre-requirements that could be attached to a test case:

1. Enable or disable external interfaces.


2. Time at which the test cases is to be executed
3. Dates that are to be maintained (pre-date or post-date) in the database before testing, as it is
sometimes not possible to predict the dates of testing and populate certain date fields when they are to
trigger certain actions in the application
4. Deletion of certain records to trigger an action by the application
5. Change values if required to trigger an action by the application

Data definition
Data for executing the test cases should be clearly defined in the test cases. They should indicate the values
that will be entered into the fields and also indicate the default values of the field.
Where calculations are involved, the test cases should indicate the calculated value in the expected results of
the test case.

TEST CASE FORMAT

By using the above parameters, a test case can be represented in many ways. Below is a common test case
format adopted by software companies:
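The original illustration of the format is not reproduced here. As a representative layout only (the column names are illustrative, not a specific company template), a test case is commonly tabulated as:

Test Case ID | Description | Data / Pre-requirements | Expected Result | Actual Result | Status (Pass/Fail)
TC_Login_01  | Enter USER NAME, PASSWORD and click OK in the login window | A valid user exists in the system | Application home page should be displayed | (recorded during execution) | (recorded during execution)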

Test Script

1. Brief on Test Scripts


2. Interaction with development team
3. Review of Test Scripts
4. Activity Report
5. Backend Testing

Brief on Test Scripts

This sequences the flow of an end-to-end test path, or the sequence in which individual test conditions are
executed.
The test case specifies the test to be performed on each segment. Although the sequence of a path is analyzed,
the navigation to the test conditions is not given in the test cases.

Test scripts should ideally start from the login screen of the application. Doing this helps in two ways

• Start conditions are always the same, and uniformity can be achieved
• Automation of test scripts requires start and end conditions i.e. the automation tool will look for
the screen to be the same, as specified in its code. Then the tool will automatically run the series of
cases without intervention by the user. So, the test scripts must start and end in the same
originating screen.
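The start-and-end-at-login convention can be seen in the skeleton below. It is a hedged sketch only: open_login_screen, perform_steps and log_result are hypothetical placeholders, not calls from any real automation tool.

# Sketch of an automated test script that starts and ends at the login screen,
# so that every script has the same start and end conditions.
# open_login_screen / perform_steps / log_result are hypothetical placeholders.

def open_login_screen():
    print("navigate to login screen")

def perform_steps(steps):
    for step in steps:
        print("execute:", step)      # a real tool would drive the GUI here
    return "pass"

def log_result(name, outcome):
    print("script %s finished: %s" % (name, outcome))

def run_script(name, steps):
    open_login_screen()               # same start condition for every script
    outcome = perform_steps(steps)
    open_login_screen()               # return to the originating screen
    log_result(name, outcome)

run_script("create_deposit", ["log in", "open Deposits module", "create a deposit", "log out"])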

The test scripts must explain the navigation paths very clearly and explicitly. The objective of this is to allow
flexibility over who executes the cases.

Test script sequences must also take into account the impact of previous cases; i.e. if a certain record has been
deleted, the flow should not then search for details of that record.

In short, the test cases in series will form the test script in case of an End-to-End test approach. In
individual test conditions, the navigation and the test instruction will be a test case and this will constitute a
test script.

In practice, for the end-to-end test approach, test scripts are written straight away incorporating the test cases;
they were categorized into two steps here only for explanation.

Interaction with development team

Interaction between the testing team and the development team should begin while writing the test scripts; any
interaction prior to test case writing would tend to bias both teams. Screen shots of the various screens should
be obtained from the development team, and the teams should interact on the design of the application.

The tester should not make any changes to the test script at this point based on what the development team
has presented. Contradictions between the test scripts and the actual application are left to the Project
Manager's decision only; recommendations from the test team on such contradictions would be a value
addition.

Review of Test Scripts

Test cases are given to the project leader and managers for in-house review. The test scripts are then sent to
the client for review. Based on the review, changes have to be made to the entire block that was built, i.e.
test conditions, test data and test scripts. The client then marks their comments in the comments column of
the test preparation script. Testers should understand that if a change is made in a test script, then it requires
changes in the test conditions and data.

Activity Report

The test lead should report to his/her test manager on the day-to-day activity of the test team. The report should
basically contain the plan for the next day, pending activities, activities completed during the day, etc.

Backend Testing

The process of testing also involves the management of data that is required to flow into the application as
input - referred to as feeds - either from external data stores / applications or generated within the scope of
the same application. The required data is to be extracted and processed to produce information as per the
requirements specified in the business requirements document.
The process of absorbing data into an application could be varied, depending on factors like:

• Nature of data required


• Format of data received
• Limitations of the system supplying data, in presenting data in a required pattern.

Test Execution Process

The preparation to test the application is now over. The test team should next plan the execution of the test
case (or test script) on the application. In this section, we will see how test execution is performed.

1. Stages of Testing
2. Pre- Requirements for Testing
3. Test Plan

Stages of Testing

Tests on the application are done in stages. Test execution takes place in three passes, or sometimes four,
depending on the state of the application. They are:

Pre-IST (Integrated System Testing) or Pass 0: This is done to check the health of the system before the start
of the test process. This stage may not be applicable to most test processes. Free-form testing is adopted
in this stage.

Comprehensive or Pass 1:

All the test scripts developed for testing are executed. In some cases the application may not have certain
module(s) ready for test; these will be covered comprehensively in the next pass. The testing here
should cover not only all test cases but also the business cycles defined in the application.

Discrepancy or Pass 2:

All test scripts that resulted in a defect during the comprehensive pass should be executed again. In other
words, all defects that have been fixed should be retested. Function points that may be affected by the
defect should also be taken up for testing. Automated test scripts captured during pass one are used
here. This type of testing is called regression testing. Defects that are not yet fixed will be retested only
after they are fixed.

Sanity or Pass 3:

This is the final round in the test process. It is done either at the client's site or at the test lab, depending on
the strategy adopted. It is done in order to check whether the system is sane enough for the next stage, i.e. UAT
or production as the case may be, in an isolated environment. Ideally, the defects fixed in the previous pass are
checked, and free-form testing is conducted to ensure integrity.

These categories apply for both User Acceptance Test (UAT) and Integrated System Testing (IST).

Pre- Requirements for Testing

1. Version Identification Values


The application will contain several program files in order to function. The version of these files and a
unique identification number for each file are a must for change management.

These numbers will be generated for every program file on transfer from the development machine to the
test environment. The number attributed to each program file is unique and if any change is made to the
program file between the time it is transferred to the test environment and the time when it is transferred
back to the development for correction, it can be detected by using these numbers. These identification
methods vary from one client to another.

These values have to be obtained from the development team by the test team. This helps in identifying
unauthorized transfers or usage of application files by both parties involved.

The responsibility of acquiring, comparing and tracking before and after softbase transfer lies with the test
team.
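One simple way to generate such identification values is a cryptographic checksum of each program file, recorded at transfer time and compared when the file comes back from development. The sketch below uses Python's standard hashlib; the workflow comments and file paths are illustrative.

# Sketch: fingerprinting program files at softbase transfer time so that any
# change made between transfer and return can be detected. Paths are illustrative.

import hashlib

def file_fingerprint(path):
    """Return the SHA-256 digest of a file's contents."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def detect_changes(recorded, current_paths):
    """Compare previously recorded fingerprints with the files as they are now."""
    return [path for path in current_paths
            if recorded.get(path) != file_fingerprint(path)]

# At transfer time:   recorded = {path: file_fingerprint(path) for path in paths}
# Before retesting:   changed  = detect_changes(recorded, paths)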

2. Interfaces for the application

In some applications, external interfaces may have to be connected or disconnected. In either case the
development team should certify that the application will function in an integrated fashion. Actual
navigation to and from an interface may not be covered in black box testing.

3. Unit and Module test plan sign off

To begin an integrated test on the application, the development team should have completed tests on the
software at the unit and module levels.

Unit and Module Testing: Unit testing focuses verification effort on the smallest unit of software design.
Using the Design specification as a guide, important control paths and field validations are tested. This is
normally a white box testing.

Clients and the development team must sign off this stage, and hand over the test plan and defect report for
the test to the testing team.

In the case of UAT, the IST sign-off report must be handed over to the UAT team before commencement of UAT.

Test Plan

This document is a deliverable to the client. It contains the actual plan for test execution, with details down to the minute.

Test Execution Sequence:

Test scripts can be executed either in a random order or in a sequential fashion. Some applications have
concepts that require sequencing of the test cases before actual execution. The details of the execution are
documented in the test plan.
Sequencing can also be done on the modules of the application, as one module would populate or formulate
information required for another.

Allocation of test cases among the team:

The Test team should decide on the resources that would execute the test scripts. Ideally, the tester who
designed the test script for the module executes the test. In some cases, due to shortage of time or resource
at that point of time, additional test scripts might have to be executed by some members of the team. Clear
documentation of responsibilities is done in the test plan.

Allocation of test cases on different passes:


It may not be possible to execute all test scripts in the first pass. Some of the reasons for this could be:
• Functionality may some-times be introduced at a later stage and application may not support it, or
the test team may not be ready with the preparation
• External interfaces to the application may not be ready
• The client might choose to deliver some part of the application for testing and rest may be
delivered during other passes

Targets for completion of Phases:

Time frames for the passes have to be decided and committed to the clients well in advance of the start of
testing. Some of the factors considered in doing so are:

Number of cases/scripts: Depending on the number of test scripts and the resource available, completion
dates are prepared

Complexity of Testing: In some cases the number of test cases may be less but the complexity of the test
may be a factor. The testing may involve time consuming calculations or responses from external interfaces
etc

Number of Errors: This is done very exceptionally; Pre-IST testing is done to check the health of the
application soon after the preparations are done. The number of errors that were reported should be taken as
a benchmark.


Defect Management

1. What is a Defect?
2. Type of Defects
3. Defect reporting by tester
4. Defect Tracking by Test Lead
5. Tools Used
6. Defects Meetings
7. Defects Publishing
8. Defect Report Format

What is a defect?

A Defect is a product anomaly or flaw. Defects include such things as omissions and imperfections found
during testing phases. Symptoms (flaws) of faults contained in software that is sufficiently mature for
production will be considered as defects. Deviations from expectations that are to be tracked and resolved are
also termed defects.

An evaluation of defects discovered during testing provides the best indication of software quality. Quality
is the indication of how well the system meets the requirements. So in this context defects are identified as
any failure to meet the system requirements.

Defect evaluation is based on methods that range from simple number count to rigorous statistical
modelling.

Rigorous evaluation uses assumptions about the arrival or discovery rates of defects during the testing
process. The actual data about defect rates are then fit to the model. Such an evaluation estimates the
current system reliability and predicts how the reliability will grow if testing and defect removal continue.
This evaluation is described as system reliability growth modeling.
The life cycle of a defect runs from when it is raised, through authorization and fixing, to retesting and closure (see the status values listed under 'Defect reporting by tester' below).

Types of Defects

Defects that are detected by the tester are classified into categories by the nature of the defect. The
following are the classification

Showstopper (X): The impact of the defect is severe and the system cannot go into the production
environment without resolving the defect since an interim solution may not be available.

Critical (C): The impact of the defect is severe, however an interim solution is available. The defect should
not hinder the test process in any way.

Non-critical (N): All defects that are not in the X or C category are deemed to be in the N category. These
are also the defects that could potentially be resolved via documentation and user training. They can be
Graphical User Interface (GUI) defects or minor field-level observations.

Defect reporting by tester

Defects or bugs detected in the application by the tester must be duly reported through an automated
tool. The particulars that have to be filled in by the tester are:

1. Defect Id: Number associated with a particular defect, and henceforth referred by its ID
2. Date of execution: The date on which the test case which resulted in a defect was executed
3. Defect Category: These are explained in the next section, ideally decided by the test leader
4. Severity: As explained, it can be Critical, Non-Critical and Showstopper
5. Module ID: Module in which the defect occurred
6. Status: Raised, Authorised, Deferred, Fixed, Re-raised, Closed and Duplicate.
7. Defect description: Description as to how the defect was found, the exact steps that should be
taken to simulate the defect, other notes and attachments if any.
8. Test Case Reference No: The number of the test case and script in combination which resulted in
the defect
9. Owner: The name of the tester who executed the test case
10. Test case description: The instructions in the test cases for the step in which the error occurred
11. Expected Result: The expected result after the execution of the instructions in the test case
descriptions
12. History of the defect: Normally taken care of by the automated tool used for defect tracking and
reporting.
13. Attachments: The screen shot showing the defect should be captured and attached
14. Responsibility: Identified team member of the development team for fixing the defect.
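The particulars listed above map naturally onto a structured defect record. The sketch below captures a subset of those fields as a Python dataclass; it is illustrative and not the schema of any specific defect-tracking tool.

# Sketch: a defect record carrying the particulars listed above.
# Only a subset of fields is shown; this is not the schema of any particular tool.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class Defect:
    defect_id: int
    date_of_execution: date
    severity: str              # "Showstopper", "Critical" or "Non-critical"
    module_id: str
    status: str                # Raised, Authorised, Deferred, Fixed, Re-raised, Closed, Duplicate
    description: str
    test_case_ref: str
    owner: str
    expected_result: str
    attachments: list = field(default_factory=list)

bug = Defect(101, date.today(), "Critical", "Deposits", "Raised",
             "Letter accepted in amount field", "TC_Dep_07", "tester1",
             "Only digits should be accepted")
print(bug.defect_id, bug.severity, bug.status)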

Defect Tracking by Test Lead

The test lead categorizes the defects, after meetings with the clients, as:

1. Modify Cases: Test cases to be modified. This may arise when the tester's understanding is
incorrect.
2. Discussion Items: Arises when there is a difference of opinion between the test and the
development team. This is marked to the Domain consultant for final verdict.
3. Change Technology: Arises when the development team has to fix the bug.
4. Data Related: Arises when the defect is due to data and not coding.
5. User Training: Arises when the defect is not severe or is technically not feasible to fix, and it is
decided to train the user on the defect instead. Such defects should ideally not be critical.
6. New Requirement: Inclusion of functionality after discussion
7. User Maintenance: Masters and parameters maintained by the user caused the defect.
8. Observation: Any other observation not classified in the above categories, such as a GUI defect
from the user's perspective.

Reporting is done for defect evaluation and also to ensure that the development team is aware of the defects
found and is in the process of resolving them. A detailed report of the defects is generated every day
and given to the development team for their feedback on defect resolution.

A report is generated for every pass of testing to evaluate the rate at which new defects are found and the
rate at which defects are tracked to closure.

Defect counts are reported as a function of time, creating a Defect Trend diagram or report, and as a
function of one or more defect parameters such as category or status, creating a Defect Density report. These
types of analysis provide a perspective on the trend and the distribution of defects, respectively, which
reveal the system's reliability.

It is expected that defect discovery rates will eventually diminish as testing and fixing progress. A
threshold can be established below which the system can be deployed. Defect counts can also be reported
based on their origin in the implementation model, allowing detection of weak modules and hot spots: parts of
the system that are fixed again and again, indicating a fundamental design flaw.
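
To make the two report types concrete, here is a minimal Python sketch, with invented sample data, that derives a Defect Trend (new defects per day) and a Defect Density (defects per category) and applies an illustrative deployment threshold; the threshold value itself is an assumption.

# Minimal sketch (hypothetical data) of the two report types described above.
from collections import Counter
from datetime import date

# (date found, category, status) -- illustrative sample data only
defects = [
    (date(2024, 1, 10), "Change Technology", "Fixed"),
    (date(2024, 1, 10), "Data Related", "Closed"),
    (date(2024, 1, 11), "Change Technology", "Raised"),
    (date(2024, 1, 12), "User Training", "Deferred"),
    (date(2024, 1, 12), "Change Technology", "Raised"),
]

# Defect Trend: new defects found per day
trend = Counter(d for d, _, _ in defects)
for day in sorted(trend):
    print(day, trend[day])

# Defect Density: defects per category (could equally be per status or module)
density = Counter(cat for _, cat, _ in defects)
print(density.most_common())

# Deployment threshold: deploy only if the latest day's discovery rate has
# fallen to at most 2 new defects (the value 2 is purely illustrative)
latest_day = max(trend)
print("OK to deploy?", trend[latest_day] <= 2)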

Defects included in an analysis of this kind are confirmed defects. Not all reported defects describe an actual
flaw; some may be enhancement requests, out of the scope of the system, or duplicates of an already reported
defect. However, there is value in analysing why many of the reported defects are either duplicates or not
confirmed defects.

Tools Used

Tools that are used to track and report defects are,

ClearQuest (CQ): It belongs to the Rational Test Suite and is an effective tool for defect management.
CQ functions on a native Access database and maintains a common database of defects. With CQ the
entire defect process can be customized. For example, a process can be designed in such a manner that a defect,
once raised, must be authorized and then fixed before it can attain the status of retesting. Such a
systematic defect flow process can be established and the history for it can be maintained. Graphs
and reports can be customized, and metrics can be derived from the maintained defect repository.
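
The customizable defect flow described above can be illustrated in general terms. The sketch below is not ClearQuest's API; it is a hypothetical state-transition table in Python, using the status values listed earlier, showing how an enforced flow from Raised through Authorised and Fixed might be modelled. The specific transitions chosen are assumptions for illustration.

# Generic sketch of a customisable defect flow; transitions are illustrative.
ALLOWED_TRANSITIONS = {
    "Raised":     {"Authorised", "Duplicate", "Deferred"},
    "Authorised": {"Fixed", "Deferred"},
    "Fixed":      {"Closed", "Re-raised"},   # closed once retesting passes
    "Re-raised":  {"Fixed"},
    "Deferred":   {"Authorised"},
    "Duplicate":  set(),
    "Closed":     set(),
}

def move(current: str, new: str) -> str:
    """Return the new status if the transition is allowed, else raise."""
    if new not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal transition: {current} -> {new}")
    return new

status = "Raised"
status = move(status, "Authorised")
status = move(status, "Fixed")
print(status)                       # Fixed
# move(status, "Raised")            # would raise ValueError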

Test Director (TD): TestDirector is an automated test management tool developed by Mercury
Interactive that helps organize and manage all phases of the software testing process,
including planning, creating tests, executing tests, and tracking defects.

TestDirector enables us to manage user access to a project by creating a list of authorized users and
assigning each user a password and a user group, so that precise control can be exercised over the kinds of
additions and modifications a user can make to the project.

Apart from manual test execution, the WinRunner automated test scripts of the project can also be
executed directly from TestDirector. TestDirector activates WinRunner, runs the tests, and displays the
results. In addition, TestDirector is used:
• To report defects detected in the software.
• As a sophisticated system for tracking software defects.
• To monitor defects closely from initial detection until resolution.
• To analyze our Testing Process by means of various graphs and reports.

Defects Meetings
Meetings are conducted at the end of every day between the test team and the development team to discuss test
execution and defects. Here, defect categorization is done.

Before meetings with the development team, the test team should hold internal discussions with the test lead
on the defects reported. This process ensures that all defects are accurate and authentic to
the best knowledge of the test team.

Defects Publishing

Defects that are authorized are published in a mutually accepted medium such as the Internet, an intranet, or
email. Typically they are published on the intranet; depending on the client's requirements, defects are
published on either the client's intranet or the Internet.

Reports that are published are

• Daily defect report
• Summarized defect report for the individual passes
• Final defect report

Defect Report Format

The Defect Report should contain all the necessary information about the defect. The following is the list of
columns that the Defect Report should contain; a sketch of exporting such a report follows the list.

1. Defect ID
2. Date of Testing
3. Module
4. Test Script ID
5. Test Case Description
6. Defect Description
7. Expected Results
8. Severity
9. Status
10. Defect Category
11. Tester
12. Comments
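
As a concrete illustration, the sketch below exports a report with exactly these columns to a CSV file using Python's standard csv module; the file name and the row values are placeholders, not data from a real project.

# Minimal sketch of exporting a defect report with the columns listed above.
import csv

COLUMNS = [
    "Defect ID", "Date of Testing", "Module", "Test Script ID",
    "Test Case Description", "Defect Description", "Expected Results",
    "Severity", "Status", "Defect Category", "Tester", "Comments",
]

rows = [{
    "Defect ID": "D-001",
    "Date of Testing": "2024-01-15",
    "Module": "LOGIN",
    "Test Script ID": "TS-03",
    "Test Case Description": "Enter a valid user ID and a password with a space.",
    "Defect Description": "Login fails when the password contains a space.",
    "Expected Results": "User is logged in and the home page is displayed.",
    "Severity": "Critical",
    "Status": "Raised",
    "Defect Category": "Change Technology",
    "Tester": "tester1",
    "Comments": "",
}]

# Write the header row followed by one line per defect record
with open("defect_report.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)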

Softbase Transfer

• Between Passes
• FTP
• FATS
• Revision of Test Cases

Between Passes

Softbase is the term used for describing the application software in the test and construction process.
Control of softbase should be with either the development team or the test team depending on the process
time frames i.e. whether construction or testing is in progress. There should also be control on the version
that is released for either construction or testing.

Softbase is transferred to the test environment after which the first pass of testing starts. During the first
pass, the defects discovered are fixed in the development environment on the same version of the softbase,
which is being tested. At the end of the first pass, after the test team has completed the execution of all the
test cases, the fixed version of the softbase is transferred into the test environment, to commence the second
pass of testing.

The same process is continued until the completion of all passes.

FTP

The acronym FTP stands for File Transfer Protocol. FTP is used to transfer one or more files from
one system to another. Files can be transferred between systems irrespective of the operating
systems they are running on. FTP supports two different modes of transfer: binary and ASCII.
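
For illustration, both transfer modes can be exercised with Python's standard ftplib; the host, credentials and file names below are placeholders, not values from any real project.

# Small sketch of the two FTP transfer modes using Python's standard ftplib.
from ftplib import FTP

with FTP("ftp.example.com") as ftp:
    ftp.login(user="tester", passwd="secret")

    # Binary mode: transfer the file byte-for-byte (archives, executables, images)
    with open("softbase_build.zip", "rb") as f:
        ftp.storbinary("STOR softbase_build.zip", f)

    # ASCII mode: line-oriented transfer with end-of-line conversion (plain text)
    with open("release_notes.txt", "rb") as f:
        ftp.storlines("STOR release_notes.txt", f)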

FATS

The acronym FATS stands for Fully Automated Transfer System. FATS is an automated version control
mechanism adopted by Citibank India Technologies for source code transfer from the development server to
the test control machine. In the case of UAT, source code is transferred from the development server to the
UAT machine. FATS uses SCCS (Source Code Control System) of Unix for version control. Source code is
first transferred from the development server to the FATS server.

The FATS server generates a unique identification for each program file that is to be transferred. For
completion of the transfer there are three security levels, namely:

• Project Manager password
• User password
• Quality Assurance (QA) password

Ideally, the Project Manager from the client side first checks the file out of the FATS transfer to verify its
integrity. Then the user acknowledges the file against the user requirements. Finally, QA clears the file
against quality standards.

Revision of Test Cases

At times, the transfer of the softbase, from its release for correction until its transfer back into the test
environment, can take time. During this period testers should:

• Include test scripts for new functionality incorporated during execution
• Enrich data and test cases based on happenings during the previous pass of testing
• Complete documentation for the previous passes
• Modify test cases classified as ‘MC’ (Modify Cases)
• Perform free-form testing on the softbase available

Deliverables

1. Software Quality Assurance (SQA) Deliverable
2. Deliverables to Clients

Deliverables to Clients

The following are the deliverables to Clients:

• Test Strategy
• Effort Estimation
• Test Plan (includes Test scripts)
• Test Results
• Traceability matrix
• Pass defect report
• Final Defect report
• Final Summary Report
• Defect Analysis

Software Quality Assurance (SQA) Deliverable

The following are SQA deliverables

• The Test plan is a document, which needs SQA approval and sign off
• Test results, though they do not require sign-off by the SQA authority, need to be delivered for SQA
perusal.
• The Traceability document, Test Scripts with review comments (without the result status), Defect
Report format, Defect analysis and the tool evaluation document (for selection of tools for the
project) should be part of the Test plan.
• Test Scripts bearing module name
• Mails requesting release, Risk Assessment Memo, Effort Estimation, Project & Defect Metrics
• Test results, including the Final Defect Report & Executive Summary, and the Test Results with
Pass/Fail status

Software Quality Assurance (SQA)

To deliver a quality software product, quality has to be assured in each and every phase of the development
process. This task is done by the SQA team. The SQA team consists of:

1. Quality Assurance Team (QA)
2. Quality Control Team (QC)

QA and QC activities

1. Peer review (Informal) - Cross-checking the test cases among testers.
2. Walkthrough (Semiformal) - Walking through the test cases, test plan and test scripts.
3. Routine Checkup (Semiformal) - Routine checks during construction of the product.
4. Inspection (Formal) - Carried out by the HOD.
5. Audit (Formal) - (a) Evaluating the documents. (b) The auditors may be from the same
company, a different company, or a third party.
6. Verification & Validation (V&V)

Verification:

It is a set of activities that ensure that each and every process adheres to the standards and guidelines set for
that process (or step). It comes under QA.

Validation:

Evaluating a process or product, after completion, against a standard reference. It comes under QC.

Quality Standards

1. SEI (Software Engineering Institute)


Software Engineering Institute at Carnegie-Mellon University; initiated by the U.S. Defense Department to
help improve software development processes.

2. CMM (Capability Maturity Model)

'Capability Maturity Model', now called the CMMI ('Capability Maturity Model Integration'), developed by
the SEI. It's a model of 5 levels of process 'maturity' that determine effectiveness in delivering quality
software. Organizations can receive CMMI ratings by undergoing assessments by qualified auditors.

Level 1
Characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete
projects. Few if any processes in place; successes may not be repeatable.

Level 2
Software project tracking, requirements management, realistic planning, and configuration management
processes are in place; successful practices can be repeated.

Level 3
Standard software development and maintenance processes are integrated throughout an organization; a
Software Engineering Process Group is in place to oversee software processes, and training programs are
used to ensure understanding and compliance.

Level 4
Metrics are used to track productivity, processes, and products. Project performance is predictable, and
quality is consistently high.

Level 5
The focus is on continuous process improvement. The impact of new processes and technologies can be
predicted and effectively implemented when required.

Perspective on CMM ratings: During 1997-2001, 1018 organizations were assessed. Of those, 27% were
rated at Level 1, 39% at 2, 23% at 3, 6% at 4, and 5% at 5. (For ratings during the period 1992-96, 62%
were at Level 1, 23% at 2, 13% at 3, 2% at 4, and 0.4% at 5.)
The median size of organizations was 100 software engineering/maintenance personnel; 32% of
organizations were U.S. federal contractors or agencies. For those rated at Level 1, the most problematical
key process area was in Software Quality Assurance.

ISO

'International Organization for Standardization' - The ISO 9001:2000 standard (which replaces the previous
standard of 1994) concerns quality systems that are assessed by outside auditors, and it applies to many
kinds of production and manufacturing organizations, not just software.

It covers documentation, design, development, production, testing, installation, servicing, and other
processes. The full set of standards consists of:

(a) Q9001-2000 - Quality Management Systems: Requirements;
(b) Q9000-2000 - Quality Management Systems: Fundamentals and Vocabulary;
(c) Q9004-2000 - Quality Management Systems: Guidelines for Performance Improvements.

To be ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically good
for about 3 years, after which a complete reassessment is required. Note that ISO certification does not
necessarily indicate quality products - it indicates only that documented processes are followed.

IEEE
'Institute of Electrical and Electronics Engineers' - among other things, creates standards such as the 'IEEE
Standard for Software Test Documentation' (IEEE/ANSI Standard 829), the 'IEEE Standard for Software Unit
Testing' (IEEE/ANSI Standard 1008), the 'IEEE Standard for Software Quality Assurance Plans' (IEEE/ANSI
Standard 730), and others.

ANSI

'American National Standards Institute', the primary industrial standards body in the U.S.; publishes some
software-related standards in conjunction with the IEEE and ASQ (American Society for Quality). Other
software development/IT management process assessment methods besides CMMI and ISO 9000 include
SPICE, Trillium, TickIT, Bootstrap, ITIL, MOF, and CobiT.
