TESTER’S GUIDE
TABLE OF CONTENTS

POLICY................................................................................................................................4
Terms to understand.............................................................................................................4
What is software 'quality'? ...............................................................................................4
What is verification? validation? .....................................................................................4
What's an 'inspection'? .....................................................................................................4
QA & testing - Differences..............................................4
Life Cycle of Testing Process...............................................................................................5
Levels of Testing..................................................................................................................5
Unit Testing .....................................................................................................................5
Integration testing ............................................................................................................6
Acceptance testing............................................................................................................6
Types of Testing...................................................................................................................6
Incremental integration testing ........................................................................................6
Sanity testing....................................................................................................................6
Compatibility testing .......................................................................................7
Exploratory testing ...........................................................................................................7
Ad-hoc testing ..................................................................................................................7
Comparison testing ..........................................................................................................7
Load testing .....................................................................................................................7
System testing ..................................................................................................................7
Functional testing .............................................................................................................7
Volume testing..................................................................................................................7
Stress testing ....................................................................................................................8
Sociability Testing ...........................................................................................................8
Usability testing ...............................................................................................................8
Recovery testing ..............................................................................................................8
Security testing ................................................................................................................8
Performance Testing ........................................................................................................8
End-to-end testing ............................................................................................................9
Regression testing ............................................................................................................9
Parallel testing..................................................................................................................9
Install/uninstall testing .....................................................................................................9
Mutation testing ...............................................................................................................9
Alpha testing ....................................................................................................................9
Beta testing ......................................................................................................................9
Testing Techniques.............................................................................................................10
Black Box testing............................................................................................................10
Boundary testing.........................................................................................................11
Error Guessing............................................................................................................11
White Box testing...........................................................................................................12
Path Testing................................................................................................................12
Condition testing.........................................................................................................13
Loop Testing...............................................................................................................13
Data Flow Testing.......................................................................................................13
Stubs for Testing.............................................................................................................14
Drivers for Testing..........................................................................................................14
Web Testing Specifics........................................................................................................14
Internet Software - Quality Characteristics....................................................................14
WWW Project Peculiarities............................................................................................14
Basic HTML Testing......................................................................................................14
Suggestions for fast loading............................................................................................15
Link Testing....................................................................................................................15
Compatibility Testing.....................................................................................................16
Usability Testing.............................................................................................................17
Usability Tips..............................................................................................................17
Portability Testing..........................................................................................................17
Cookies Testing..............................................................................................................17
Testing - When is a program correct?.................................................................................18
Test Plan.............................................................................................................................18
Test cases............................................................................................................................20
What's a 'test case'? ........................................................................................................20
Testing Coverage................................................................................................................20
What if there isn't enough time for thorough testing? .......................................................27
Defect reporting..................................................................................................................27
Types of Automated Tools.................................................................................................28

POLICY

We are committed to Continuous Improvement of Quality of Products and Customer Services by adhering to International Standards.

Terms to understand

What is software 'quality'?


Quality software is reasonably bug-free, delivered on time and within budget, meets
requirements and/or expectations, and is maintainable.
What is verification? validation?
Verification typically involves reviews and meetings to evaluate documents,
plans, code, requirements, and specifications. This can be done with checklists,
issues lists, walkthroughs, and inspection meetings. Validation typically involves
actual testing and takes place after verifications are completed.
What's an 'inspection'?
An inspection is more formalized than a 'walkthrough', typically with 3-8 people
including a moderator, a reader, the author of whatever is being reviewed, and a recorder
to take notes. The subject of the inspection is typically a document such as a requirements
spec or a test plan, and the purpose is to find problems and see what's missing, not to fix
anything.
QA & testing - Differences
Software QA involves the entire software development PROCESS - monitoring and
improving the process, making sure that any agreed-upon standards and procedures are
followed, and ensuring that problems are found and dealt with. It is oriented to
'prevention'.
Testing involves operation of a system or application under controlled conditions and
evaluating the results (e.g., 'if the user is in interface A of the application while using
hardware B, and does C, then D should happen'). The controlled conditions should
include both normal and abnormal conditions. Testing should intentionally attempt to
make things go wrong to determine if things happen when they shouldn't or things don't
happen when they should. It is oriented to 'detection'.
Life Cycle of Testing Process

The following are some of the steps to consider:

• Obtain requirements, functional design, and internal design specifications and
other necessary documents
• Obtain schedule requirements
• Determine project-related personnel and their responsibilities, reporting
requirements, required standards and processes (such as release processes, change
processes, etc.)
• Identify application's higher-risk aspects, set priorities, and determine scope and
limitations of tests
• Determine test approaches and methods - unit, integration, functional, system,
load, usability tests, etc.
• Determine test environment requirements (hardware, software, communications,
etc.)
• Determine testware requirements (record/playback tools, coverage analyzers, test
tracking, problem/bug tracking, etc.)
• Determine test input data requirements
• Identify tasks, those responsible for tasks
• Set schedule estimates, timelines, milestones
• Determine input equivalence classes, boundary value analyses, error classes
• Prepare test plan document and have needed reviews/approvals
• Write test cases
• Have needed reviews/inspections/approvals of test cases
• Prepare test environment and testware, obtain needed user manuals/reference
documents/configuration guides/installation guides, set up test tracking processes,
set up logging and archiving processes, set up or obtain test input data
• Obtain and install software releases
• Perform tests
• Evaluate and report results
• Track problems/bugs and fixes
• Retest as needed
• Maintain and update test plans, test cases, test environment, and testware through
life cycle

Levels of Testing

Unit Testing

The most 'micro' scale of testing; to test particular functions or code modules. Typically
done by the programmer and not by testers, as it requires detailed knowledge of the
internal program design and code. Not always easily done unless the application has a
well-designed architecture with tight code; may require developing test driver modules or
test harnesses.
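As an illustration, the following is a minimal unit-test sketch in Python's built-in unittest framework. The discount() function is an invented module under test; the test class plays the role of the test driver/harness described above.

import unittest

def discount(amount, customer_years):
    """Hypothetical module under test: 5% discount after 2 years, 10% after 5."""
    if amount < 0:
        raise ValueError("amount must be non-negative")
    if customer_years >= 5:
        return amount * 0.90
    if customer_years >= 2:
        return amount * 0.95
    return amount

class DiscountTest(unittest.TestCase):
    def test_no_discount_for_new_customer(self):
        self.assertEqual(discount(100.0, 1), 100.0)

    def test_five_percent_after_two_years(self):
        self.assertAlmostEqual(discount(100.0, 2), 95.0)

    def test_ten_percent_after_five_years(self):
        self.assertAlmostEqual(discount(100.0, 5), 90.0)

    def test_negative_amount_rejected(self):
        with self.assertRaises(ValueError):
            discount(-1.0, 3)

if __name__ == "__main__":
    unittest.main()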
Integration testing
Testing of combined parts of an application to determine if they function together
correctly. The 'parts' can be code modules, individual applications, client and server
applications on a network, etc. This type of testing is especially relevant to client/server
and distributed systems.
Integration can be top-down or bottom-up:
• Top-down testing starts with main and successively replaces stubs with the real
modules.
• Bottom-up testing builds larger module assemblies from primitive modules.
• Sandwich testing is mainly top-down, with bottom-up integration and testing applied to
certain widely used components.
Acceptance testing
Final testing based on specifications of the end-user or customer, or based on use by end-
users/customers over some limited period of time.
Types of Testing

Incremental integration testing


Continuous testing of an application as new functionality is added; requires that various
aspects of an application's functionality be independent enough to work separately before
all parts of the program are completed, or that test drivers be developed as needed; done
by programmers or by testers.
Sanity testing
Typically an initial testing effort to determine if a new software version is performing
well enough to accept it for a major testing effort. For example, if the new software is
crashing systems every 5 minutes, bogging down systems to a crawl, or destroying
databases, the software may not be in a 'sane' enough condition to warrant further testing
in its current state.

Compatibility testing
Testing how well software performs in a particular hardware/software/operating
system/network/etc. environment.
Exploratory testing
Often taken to mean a creative, informal software test that is not based on formal test
plans or test cases; testers may be learning the software as they test it.
Ad-hoc testing
Similar to exploratory testing, but often taken to mean that the testers have significant
understanding of the software before testing it.
Comparison testing
Comparing software weaknesses and strengths to competing products.
Load testing
Testing an application under heavy loads, such as testing of a web site under a range of
loads to determine at what point the system's response time degrades or fails.
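A minimal load-testing sketch (illustrative only, not a production tool): it fires increasing numbers of concurrent requests at a placeholder URL and reports the worst and average response times, which is enough to see roughly where response time starts to degrade.

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/"  # assumption: a local test server

def timed_request(url):
    # Time one complete request/response cycle.
    start = time.time()
    with urllib.request.urlopen(url, timeout=30) as resp:
        resp.read()
    return time.time() - start

def run_load(url, concurrent_users):
    # Issue the same request from many workers at once.
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        times = list(pool.map(timed_request, [url] * concurrent_users))
    return max(times), sum(times) / len(times)

if __name__ == "__main__":
    for users in (1, 10, 50, 100):
        worst, average = run_load(URL, users)
        print(f"{users:4d} users: worst {worst:.2f}s, average {average:.2f}s")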
System testing
Black-box type testing that is based on overall requirements specifications; covers all
combined parts of a system.
Functional testing
Black-box type testing geared to functional requirements of an application; this type of
testing should be done by testers. This doesn't mean that the programmers shouldn't
check that their code works before releasing it (which of course applies to any stage of
testing.)
Volume testing

Volume testing involves testing a software or Web application using corner cases of "task
size" or input data size. The exact volume tests performed depend on the application's
functionality, its input and output mechanisms and the technologies used to build the
application. Sample volume testing considerations include, but are not limited to (a small
input-generation sketch follows this list):

• If the application reads text files as inputs, try feeding it both an empty text file and a
huge (hundreds of megabytes) text file.
• If the application stores data in a database, exercise the application's functions when the
database is empty and when the database contains an extreme amount of data.
• If the application is designed to handle 100 concurrent requests, send 100 requests
simultaneously and then send the 101st request.
• If a Web application has a form with dozens of text fields that allow a user to enter text
strings of unlimited length, try populating all of the fields with a large amount of text
and submit the form.
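Assuming a file-reading application as in the first consideration above, the sketch below simply generates the two corner-case input files - an empty file and a very large one - which would then be fed to the application under test.

def make_empty_file(path):
    # Corner case 1: a completely empty input file.
    open(path, "w").close()

def make_huge_file(path, size_mb=200):
    # Corner case 2: a very large input file (roughly size_mb megabytes).
    line = "x" * 79 + "\n"                      # 80 bytes per line
    lines_per_mb = (1024 * 1024) // len(line)
    with open(path, "w") as f:
        for _ in range(size_mb * lines_per_mb):
            f.write(line)

if __name__ == "__main__":
    make_empty_file("empty_input.txt")
    make_huge_file("huge_input.txt", size_mb=200)
    # Feed both files to the application under test and observe its behaviour.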

Stress testing
Term often used interchangeably with 'load' and 'performance' testing. Also used to
describe such tests as system functional testing while under unusually heavy loads, heavy
repetition of certain actions or inputs, input of large numerical values, large complex
queries to a database system, etc.
Sociability Testing

This means that you test an application in its normal environment, along with other
standard applications, to make sure they all get along together; that is, that they don't
corrupt each other's files, they don't crash, they don't hog system resources, they don't
lock up the system, they can share the printer peacefully, etc.
Usability testing
Testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted
end-user or customer. User interviews, surveys, video recording of user sessions, and
other techniques can be used. Programmers and testers are usually not appropriate as
usability testers.
Recovery testing
Testing how well a system recovers from crashes, hardware failures, or other catastrophic
problems.
Security testing
Testing how well the system protects against unauthorized internal or external access,
willful damage, etc; may require sophisticated testing techniques.
Performance Testing
Term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance'
testing (and any other 'type' of testing) is defined in requirements documentation or QA
or Test Plans.

End-to-end testing
Similar to system testing; the 'macro' end of the test scale; involves testing of a complete
application environment in a situation that mimics real-world use, such as interacting
with a database, using network communications, or interacting with other hardware,
applications, or systems if appropriate.
Regression testing
Re-testing after fixes or modifications of the software or its environment. It can be
difficult to determine how much re-testing is needed, especially near the end of the
development cycle. Automated testing tools can be especially useful for this type of
testing.
Parallel testing
With parallel testing, users can choose to run batch tests or asynchronous tests
depending on the needs of their test systems. Testing multiple units in parallel increases
test throughput and lowers a manufacturer's cost of test.
Install/uninstall testing
Testing of full, partial, or upgrade install/uninstall processes.
Mutation testing
A method for determining if a set of test data or test cases is useful, by deliberately
introducing various code changes ('bugs') and retesting with the original test data/cases to
determine if the 'bugs' are detected. Proper implementation requires large computational
resources.
Alpha testing
Testing of an application when development is nearing completion; minor design changes
may still be made as a result of such testing. Typically done by end-users or others, not
by programmers or testers.
Beta testing
Testing when development and testing are essentially completed and final bugs and
problems need to be found before final release. Typically done by end-users or others, not
by programmers or testers.

Testing Techniques

Black Box testing


Black box testing (data driven or input/output driven) is not based on any knowledge of
internal design or code. Tests are based on requirements and functionality. Black box
testing attempts to derive sets of inputs that will fully exercise all the functional
requirements of a system. It is not an alternative to white box testing. This type of testing
attempts to find errors in the following categories:

1. incorrect or missing functions,


2. interface errors,
3. errors in data structures or external database access,
4. performance errors, and
5. initialization and termination errors.

Range Testing: Equivalence Partitioning


This method divides the input domain of a program into classes of data from which test
cases can be derived. Equivalence partitioning strives to define a test case that uncovers
classes of errors and thereby reduces the number of test cases needed. It is based on an
evaluation of equivalence classes for an input condition. An equivalence class represents
a set of valid or invalid states for input conditions.

Equivalence classes may be defined according to the following guidelines:


1. If an input condition specifies a range, one valid and two invalid equivalence
classes are defined.
2. If an input condition requires a specific value, then one valid and two invalid
equivalence classes are defined.
3. If an input condition specifies a member of a set, then one valid and one
invalid equivalence class are defined.
4. If an input condition is boolean, then one valid and one invalid equivalence
class are defined.

Testcase Design for Equivalence partitioning

1. A good test case reduces by more than one the number of other test cases
which must be developed
2. A good test case covers a large set of other possible cases
3. Define classes of valid inputs
4. Define classes of invalid inputs
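For illustration, a small equivalence-partitioning sketch for an invented input rule ("age must be between 18 and 65"): one representative value is chosen from the valid class and from each invalid class and run against a hypothetical validation routine.

# Invented rule: age must be an integer in the range 18..65.
# One valid class and two invalid classes, per guideline 1 above.
equivalence_classes = {
    "valid: 18 <= age <= 65":  40,   # one representative value per class
    "invalid: age < 18":        10,
    "invalid: age > 65":        80,
}

def accepts_age(age):
    """Hypothetical validation routine under test."""
    return 18 <= age <= 65

for description, representative in equivalence_classes.items():
    expected = description.startswith("valid")
    actual = accepts_age(representative)
    print(f"{description:24s} value={representative:3d} "
          f"expected={expected} actual={actual} {'OK' if actual == expected else 'FAIL'}")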
Boundary testing
This method leads to a selection of test cases that exercise boundary values. It
complements equivalence partitioning since it selects test cases at the edges of a class.
Rather than focusing on input conditions solely, BVA derives test cases from the output
domain also. BVA guidelines include:

1. For input ranges bounded by a and b, test cases should include values a and b
and just above and just below a and b respectively.
2. If an input condition specifies a number of values, test cases should be
developed to exercise the minimum and maximum numbers and values just
above and below these limits.
3. Apply guidelines 1 and 2 to the output.
4. If internal data structures have prescribed boundaries, a test case should be
designed to exercise the data structure at its boundary.

Test case design for Boundary Value Analysis:

Test cases that exercise situations on, just above, or just below the edges of input, output,
and condition classes have a high probability of revealing errors.
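Continuing the same invented 18-65 age rule, a boundary-value sketch tests exactly at each boundary and just above and below it (guideline 1 above).

def accepts_age(age):
    """Hypothetical validation routine under test (invented 18..65 rule)."""
    return 18 <= age <= 65

boundary_cases = [
    (17, False),  # just below a
    (18, True),   # a
    (19, True),   # just above a
    (64, True),   # just below b
    (65, True),   # b
    (66, False),  # just above b
]

for value, expected in boundary_cases:
    result = accepts_age(value)
    print(f"age={value}: expected {expected}, got {result}, "
          f"{'OK' if result == expected else 'FAIL'}")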
Error Guessing
Error Guessing is the process of using intuition and past experience to fill in gaps in the
test data set. There are no rules to follow. The tester must review the test records with an
eye towards recognizing missing conditions. Two familiar examples of error prone
situations are division by zero and calculating the square root of a negative number.
Either of these will result in system errors and garbled output.
Other cases where experience has demonstrated error proneness are the processing of
variable length tables, calculation of median values for odd and even numbered
populations, cyclic master file/data base updates (improper handling of duplicate keys,
unmatched keys, etc.), overlapping storage areas, overwriting of buffers, forgetting to
initialize buffer areas, and so forth. I am sure you can think of plenty of circumstances
unique to your hardware/software environments and use of specific programming
languages.
Error Guessing is as important as Equivalence partitioning and Boundary Analysis
because it is intended to compensate for their inherent incompleteness. As Equivalence
Partitioning and Boundary Analysis complement one another, Error Guessing
complements both of these techniques.
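A small error-guessing sketch: it deliberately probes the classic error-prone cases mentioned above (division by zero, square root of a negative number, an empty input) against an invented routine and simply reports what happens.

import math

def average_rate(total, count):
    """Hypothetical routine under test."""
    return total / count

guessed_cases = [
    ("zero divisor", lambda: average_rate(100, 0)),
    ("sqrt of negative", lambda: math.sqrt(-1)),
    ("empty input list", lambda: average_rate(sum([]), len([]))),
]

for name, case in guessed_cases:
    try:
        value = case()
        print(f"{name}: returned {value!r} (check whether this is acceptable)")
    except Exception as exc:
        print(f"{name}: raised {type(exc).__name__}: {exc}")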

White Box testing


White box testing (logic driven) is based on knowledge of the internal logic of an
application's code. Tests are based on coverage of code statements, branches, paths,
conditions. White box testing is a test case design method that uses the control structure
of the procedural design to derive test cases. Test cases can be derived that
1. guarantee that all independent paths within a module have been exercised at
least once,
2. exercise all logical decisions on their true and false sides,
3. execute all loops at their boundaries and within their operational bounds, and
4. exercise internal data structures to ensure their validity.

Path Testing
A path-coverage test allows us to exercise every transition between the program
statements (and so every statement and branch as well).

• First we construct a program graph.


• Then we enumerate all paths.
• Finally we devise the test cases.

Possible criteria:

1. exercise every path from entry to exit;


2. exercise each statement at least once;
3. exercise each case in every branch/case.
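A path-testing sketch for an invented routine with two decisions: the four entry-to-exit paths are enumerated and one test input is chosen to drive each of them. The fee rules are purely illustrative.

def shipping_fee(weight_kg, express):
    fee = 5.0
    if weight_kg > 10:        # decision 1
        fee += 2.0
    if express:               # decision 2
        fee *= 2
    return fee

# One test case per enumerated path (T = branch taken, F = not taken).
path_cases = [
    ((5,  False), 5.0,  "path F,F"),
    ((15, False), 7.0,  "path T,F"),
    ((5,  True),  10.0, "path F,T"),
    ((15, True),  14.0, "path T,T"),
]

for (weight, express), expected, label in path_cases:
    actual = shipping_fee(weight, express)
    status = "OK" if abs(actual - expected) < 1e-9 else "FAIL"
    print(f"{label}: shipping_fee({weight}, {express}) = {actual} ({status})")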

Condition testing
A condition test can use a combination of Comparison operators and Logical operators.
The Comparison operators compare the values of variables and this comparison produces
a boolean result. The Logical operators combine booleans to produce a single boolean
result that is the result of the condition test.
e.g. (a == b) Result is true if the value of a is the same as the value of b.
Myers: take each branch out of a condition at least once.
White and Cohen: for each relational operator e1 < e2 test all combinations of e1, e2
orderings. For a Boolean condition, test all possible inputs (!).
Branch and relational operator testing---enumerate categories of operator values.
B1 || B2: test {B1=t,B2=t}, {t,f}, {f,t}
B1 || (e2 = e3): test {t,=}, {f,=}, {t,<}, {t,>}.
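A condition-testing sketch for a compound condition of the form B1 || (e2 == e3), choosing inputs that exercise the operand combinations listed above; the eligibility rule itself is invented.

def eligible(is_member, order_total, threshold):
    # Compound condition: B1 or (e2 == e3).
    return is_member or (order_total == threshold)

condition_cases = [
    # (is_member, order_total, threshold)  ->  {B1, relation of e2 to e3}
    (True,  50, 50),   # {B1=t, e2 == e3}
    (False, 50, 50),   # {B1=f, e2 == e3}
    (True,  40, 50),   # {B1=t, e2 <  e3}
    (True,  60, 50),   # {B1=t, e2 >  e3}
]

for is_member, total, threshold in condition_cases:
    print(f"eligible({is_member}, {total}, {threshold}) = "
          f"{eligible(is_member, total, threshold)}")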

Loop Testing

For a single loop with zero minimum, N maximum, and no excluded values:

1. Try bypassing the loop entirely.
2. Try a negative value for the loop iteration variable.
3. One iteration through the loop.
4. Two iterations through the loop - some initialization problems can be uncovered
only by two iterations.
5. A typical number of iterations.
6. One less than the maximum.
7. The maximum.
8. One greater than the maximum.
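A loop-testing sketch that drives an invented summing loop with the iteration counts listed above; MAX_ITEMS stands in for whatever maximum the real application allows.

MAX_ITEMS = 100   # assumed application limit

def total_of_first(values, n):
    """Hypothetical loop under test: sums the first n values."""
    total = 0
    for i in range(n):
        total += values[i]
    return total

data = list(range(MAX_ITEMS))   # exactly MAX_ITEMS values, so max+1 iterations should fail
for n in (0, 1, 2, 10, MAX_ITEMS - 1, MAX_ITEMS, MAX_ITEMS + 1):
    try:
        print(f"n={n:4d}: total={total_of_first(data, n)}")
    except IndexError as exc:
        print(f"n={n:4d}: IndexError - {exc}")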

Data Flow Testing


Def-use chains:

1. def = definition of variable


2. use = use of that variable;
3. def-use chains go across control boundaries.
4. Testing---test every def-use chain at least once.
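A def-use sketch: the variable 'rate' is defined on two different branches and used after the branches rejoin, giving two def-use chains, each covered by one test. The pricing rule is invented.

def final_price(price, loyal_customer):
    if loyal_customer:
        rate = 0.9          # def 1 of 'rate'
    else:
        rate = 1.0          # def 2 of 'rate'
    return price * rate     # use of 'rate' - two def-use chains to cover

# One test case per def-use chain:
print(final_price(100.0, True))    # covers def 1 -> use
print(final_price(100.0, False))   # covers def 2 -> use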

Stubs for Testing


A stub is a dummy procedure, module or unit that stands in for an unfinished portion of a
system.
Stubs for Top-Down Testing

• 4 basic types:
o Display a trace message
o Display parameter value(s)
o Return a value from a table
o Return table value selected by parameter
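A stub sketch for top-down testing: the real tax-rate lookup is assumed not to exist yet, so a stub traces its call and returns a table value selected by its parameter (combining the stub types listed above). The names and rates are purely illustrative.

TAX_TABLE = {"CA": 0.0725, "NY": 0.04, "TX": 0.0625}

def get_tax_rate_stub(state_code):
    """Stands in for the unfinished tax-lookup module."""
    print(f"STUB get_tax_rate called with {state_code!r}")   # trace message
    return TAX_TABLE.get(state_code, 0.05)                   # table value selected by parameter

def invoice_total(amount, state_code, rate_lookup=get_tax_rate_stub):
    """Higher-level module under test; calls the lower-level lookup."""
    return amount * (1 + rate_lookup(state_code))

print(invoice_total(100.0, "CA"))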

Drivers for Testing


A test harness or test driver is supporting code and data used to provide an environment
for testing part of a system in isolation.
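A matching driver sketch for bottom-up testing: a throwaway harness that feeds inputs to a low-level routine and checks the results, standing in for calling code that has not been written yet. The routine being exercised is illustrative.

def parse_quantity(text):
    """Low-level unit under test."""
    value = int(text.strip())
    if value < 0:
        raise ValueError("quantity cannot be negative")
    return value

def driver():
    # The driver supplies inputs and expected results in place of the real caller.
    cases = [("5", 5), (" 12 ", 12), ("0", 0)]
    for raw, expected in cases:
        actual = parse_quantity(raw)
        print(f"parse_quantity({raw!r}) -> {actual} "
              f"({'OK' if actual == expected else 'FAIL'})")

if __name__ == "__main__":
    driver()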

Web Testing Specifics

Internet Software - Quality Characteristics


 Functionality - Verified content
 Reliability - Security and availability
 Efficiency - Response Times
 Usability - High user satisfaction
 Portability - Platform Independence

WWW Project Peculiarities


 Software consists of a large number of components
 The user interface is more complex than in many GUI-based client-server applications
 Users may be unknown (no training / user manuals)
 Security threats can come from anywhere
 User load is unpredictable

Basic HTML Testing


 Check for illegal elements present
 Check for illegal attributes present
 Check that all tags are closed
 Check for the <HEAD>, <TITLE> and <BODY> tags and the <!DOCTYPE> declaration
 Check that all IMG tags have an ALT attribute [the ALT text must be suggestive]
 Check for consistency of fonts, colors and font size.
 Check for spelling errors in text and images.

 Check for "Non Sense" mark up

Example for Non Sense mark up


<B>Hello</B> may be written as <B>H</B><B>ell</B><B>o</B>

Suggestions for fast loading


 Web page weight should be kept as small as possible
 Don’t hit the database on every request; use an alternative (see the sketch after this list).
Example: If your web application has reports, generate the content of each report as a
static HTML file at periodic intervals. When the user views the report, show the
static HTML content. There is no need to go to the database and retrieve the data every
time the user hits the report link.
Cached query - if the data fetched by a query changes only periodically, then the query
result can be cached for that period. This avoids unnecessary database access.
 Every IMG tag must have WIDTH and HEIGHT attributes.
IMG - Bad Example

<B> Hello </B>


<IMG SRC = "FAT.GIF">
<B> World </B>

IMG - Good Example

<B> Hello </B>


<IMG SRC = "FAT.GIF" WIDTH ="120" HEIGHT="150" >
<B> World </B>

 All photographic images must be in "jpg" format
 Computer-created images must be in "gif" format
 Background images should be less than 3.5 KB [the background image should be the same
for all pages (except for functional reasons)]
 Avoid nested tables.
 Keep table text size to a minimum (e.g. less than 5000 characters)
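A rough sketch of the static-report / cached-query suggestion above: the report file is regenerated only when the cached copy is older than a chosen refresh period, so most requests never touch the database. The file name, refresh period and report content are assumptions.

import os
import time

REPORT_FILE = "daily_report.html"
REFRESH_SECONDS = 3600          # regenerate at most once per hour

def build_report_from_database():
    """Placeholder for the expensive database query and HTML generation."""
    return "<html><body><h1>Daily Report</h1></body></html>"

def get_report():
    # Serve the cached static file if it is still fresh; rebuild it otherwise.
    fresh = (os.path.exists(REPORT_FILE) and
             time.time() - os.path.getmtime(REPORT_FILE) < REFRESH_SECONDS)
    if not fresh:
        with open(REPORT_FILE, "w") as f:
            f.write(build_report_from_database())
    with open(REPORT_FILE) as f:
        return f.read()         # served from the static file, not the database

print(get_report()[:40])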

Link Testing
 You must ensure that all the hyperlinks are valid
 This applies to both internal and external links
 Internal links shall be relative, to minimize the overhead and faults when the web site
is moved to the production environment
 External links shall be referenced as absolute URLs
 External links can change without your control - so automate link regression testing
(see the sketch at the end of this section)
 Remember that external non-home-page links are more likely to break
 Be careful with links in "What’s New" sections. They are likely to become obsolete
 Check that content can be accessed by means of: Search engine, Site Map

 Check the accuracy of Search Engine results


 Check that the web site's Error 404 ("Not Found") is handled by means of a user-friendly
page
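A rough sketch of an automated link check that could be run as part of regression testing: it collects the href targets from one page and reports the HTTP status of each. The start URL is a placeholder, and a real tool would also need to crawl beyond a single page.

import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

START_URL = "http://www.example.com/"   # assumption: page to check

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def check_links(start_url):
    with urllib.request.urlopen(start_url, timeout=30) as resp:
        page = resp.read().decode("utf-8", errors="replace")
    collector = LinkCollector()
    collector.feed(page)
    for href in collector.links:
        url = urljoin(start_url, href)      # resolves relative internal links
        try:
            with urllib.request.urlopen(url, timeout=30) as r:
                status = r.status
        except Exception as exc:
            status = f"ERROR ({exc})"
        print(f"{status}  {url}")

if __name__ == "__main__":
    check_links(START_URL)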

Compatibility Testing

 Check the site behaviour across the industry-standard browsers. The main issues
involve how differently the browsers handle tables, images, caching and scripting
languages
 In cross-browser testing, check for:
 Behaviour of buttons
 Support of Java scripts
 Support of tables
 Acrobat, Real, Flash behaviour
 ActiveX control support
 Java compatibility
 Text size

Browser                              | Version       | ActiveX controls | VB Script | Java applets | Dynamic HTML | JavaScript | Frames   | CSS 1.0  | CSS 2.0
Internet Explorer                    | 4.0 and later | Enabled          | Enabled   | Enabled      | Enabled      | Enabled    | Enabled  | Enabled  | Enabled
Internet Explorer                    | 3.0 and later | Enabled          | Enabled   | Enabled      | Disabled     | Enabled    | Enabled  | Enabled  | Disabled
Netscape Navigator                   | 4.0 and later | Disabled         | Disabled  | Enabled      | Enabled      | Enabled    | Enabled  | Enabled  | Enabled
Netscape Navigator                   | 3.0 and later | Disabled         | Disabled  | Enabled      | Disabled     | Enabled    | Enabled  | Disabled | Disabled
Both Internet Explorer and Navigator | 4.0 and later | Disabled         | Disabled  | Enabled      | Enabled      | Enabled    | Enabled  | Enabled  | Enabled
Both Internet Explorer and Navigator | 3.0 and later | Disabled         | Disabled  | Enabled      | Disabled     | Enabled    | Enabled  | Disabled | Disabled
Microsoft WebTV                      | Unavailable   | Disabled         | Disabled  | Disabled     | Disabled     | Disabled   | Disabled | Disabled | Disabled

Usability Testing
Aspects to be tested with care:
 Coherence of look and feel
 Navigational aids
 User Interactions
 Printing
With respect to
 Normal behaviour
 Destructive behaviour
 Inexperienced users

Usability Tips

1. Define categories in terms of user goals


2. Name sections carefully
3. Think internationally
4. Identify the homepage link on every page
5. Make sure search is always available
6. Test all the browsers your audience will use
7. Differentiate visited links from unvisited links
8. Never use graphics where HTML text will do
9. Make GUI design predictable and consistent
10. Check that printed pages fit appropriately onto paper pages [consider that many people
just surf and print. Check especially the pages for which format is important, e.g. an
application form can either be filled in on-line or printed/filled/faxed]

Portability Testing

 Check that links to URLs outside the web site are in canonical form
(http://www.srasys.co.in)
 Check that links to URLs within the web site are in relative form (e.g.
…./aaouser/images/images.gif)

Cookies Testing

What are cookies?

A "Cookie" is a small piece of information sent by the web server to store on a web
browser. So it can later be read back from the browser. This is useful for having the
browser remember some specific information.

Why must you test cookies?

 Cookies can expire


 Users can disable them in Browser

How to perform Cookies testing?

 Check the behaviour after cookies expiration


 Work with cookies disabled
 Disable cookies mid-way
 Delete web cookies mid-way
 Clear memory and disk cache mid-way

Testing - When is a program correct?

There are levels of correctness. We must determine the appropriate level of correctness
for each system because it costs more and more to reach higher levels.

1. No syntactic errors
2. Compiles with no error messages
3. Runs with no error messages
4. There exists data which gives correct output
5. Gives correct output for required input
6. Correct for typical test data
7. Correct for difficult test data
8. Proven correct using mathematical logic
9. Obeys specifications for all valid data
10. Obeys specifications for likely erroneous input
11. Obeys specifications for all possible input

Test Plan

A software project test plan is a document that describes the objectives, scope, approach,
and focus of a software testing effort. The process of preparing a test plan is a useful way
to think through the efforts needed to validate the acceptability of a software product. The
completed document will help people outside the test group understand the 'why' and
'how' of product validation. It should be thorough enough to be useful but not so thorough
that no one outside the test group will read it. The following are some of the items that
might be included in a test plan, depending on the particular project:

• Title of the Project

• Identification of document including version numbers


• Revision history of document including authors, dates, approvals
• Table of Contents
• Purpose of document, intended audience
• Objective of testing effort
• Software product overview
• Relevant related document list, such as requirements, design documents, other test
plans, etc.
• Relevant standards or legal requirements
• Traceability requirements
• Relevant naming conventions and identifier conventions
• Test organization and personnel/contact-info/responsibilities
• Assumptions and dependencies
• Project risk analysis
• Testing priorities and focus
• Scope and limitations of testing
• Test outline - a decomposition of the test approach by test type, feature,
functionality, process, system, module, etc. as applicable
• Outline of data input equivalence classes, boundary value analysis, error classes
• Test environment - hardware, operating systems, other required software, data
configurations, interfaces to other systems
• Test environment setup and configuration issues
• Test data setup requirements
• Database setup requirements
• Outline of system-logging/error-logging/other capabilities, and tools such as
screen capture software, that will be used to help describe and report bugs
• Discussion of any specialized software or hardware tools that will be used by
testers to help track the cause or source of bugs
• Test automation - justification and overview
• Test tools to be used, including versions, patches, etc.
• Test script/test code maintenance processes and version control
• Problem tracking and resolution - tools and processes
• Project test metrics to be used
• Reporting requirements and testing deliverables
• Software entrance and exit criteria
• Initial sanity testing period and criteria
• Test suspension and restart criteria
• Personnel pre-training needs
• Test site/location
• Relevant proprietary, classified, security, and licensing issues.
• Appendix - glossary, acronyms, etc.

Test cases

What's a 'test case'?


1. A test case is a document that describes an input, action, or event and an
expected response, to determine if a feature of an application is working
correctly. A test case should contain particulars such as test case identifier,
test case name, objective, test conditions/setup, input data requirements,
steps, and expected results.
2. Note that the process of developing test cases can help find problems in the
requirements or design of an application, since it requires completely
thinking through the operation of the application. For this reason, it's useful
to prepare test cases early in the development cycle if possible.

Testing Coverage

1. Line coverage. Test every line of code (Or Statement coverage: test every
statement).
2. Branch coverage. Test every line, and every branch on multi-branch lines.
3. N-length sub-path coverage. Test every sub-path through the program of length
N. For example, in a 10,000 line program, test every possible 10-line sequence of
execution.
4. Path coverage. Test every path through the program, from entry to exit. The
number of paths is impossibly large to test.
5. Multicondition or predicate coverage. Force every logical operand to take every
possible value. Two different conditions within the same test may result in the
same branch, and so branch coverage would only require the testing of one of
them.
6. Trigger every assertion check in the program. Use impossible data if necessary.
7. Loop coverage. "Detect bugs that exhibit themselves only when a loop is
executed more than once."
8. Every module, object, component, tool, subsystem, etc. This seems obvious until
you realize that many programs rely on off-the-shelf components. The
programming staff doesn't have the source code to these components, so
measuring line coverage is impossible. At a minimum (which is what is measured
here), you need a list of all these components and test cases that exercise each one
at least once.
9. Fuzzy decision coverage. If the program makes heuristically-based or similarity-
based decisions, and uses comparison rules or data sets that evolve over time,
check every rule several times over the course of training.

10. Relational coverage. "Checks whether the subsystem has been exercised in a way
that tends to detect off-by-one errors" such as errors caused by using < instead of
<=. This coverage includes:
o Every boundary on every input variable.
o Every boundary on every output variable.
o Every boundary on every variable used in intermediate calculations.
12. Data coverage. At least one test case for each data item / variable / field in the
program.
13. Constraints among variables: Let X and Y be two variables in the program. X and
Y constrain each other if the value of one restricts the values the other can take.
For example, if X is a transaction date and Y is the transaction's confirmation date,
Y can't occur before X.
14. Each appearance of a variable. Suppose that you can enter a value for X on three
different data entry screens, the value of X is displayed on another two screens,
and it is printed in five reports. Change X at each data entry screen and check the
effect everywhere else X appears.
15. Every type of data sent to every object. A key characteristic of object-oriented
programming is that each object can handle any type of data (integer, real, string,
etc.) that you pass to it. So, pass every conceivable type of data to every object.
16. Handling of every potential data conflict. For example, in an appointment
calendaring program, what happens if the user tries to schedule two appointments
at the same date and time?
17. Handling of every error state. Put the program into the error state, check for
effects on the stack, available memory, handling of keyboard input. Failure to
handle user errors well is an important problem, partially because about 90% of
industrial accidents are blamed on human error or risk-taking. Under the legal
doctrine of foreseeable misuse, the manufacturer is liable in negligence if it fails
to protect the customer from the consequences of a reasonably foreseeable misuse
of the product.
18. Every complexity / maintainability metric against every module, object,
subsystem, etc. There are many such measures. Jones lists 20 of them. People
sometimes ask whether any of these statistics are grounded in a theory of
measurement or have practical value. But it is clear that, in practice, some
organizations find them an effective tool for highlighting code that needs further
investigation and might need redesign.
19. Conformity of every module, subsystem, etc. against every corporate coding
standard. Several companies believe that it is useful to measure characteristics of
the code, such as total lines per module, ratio of lines of comments to lines of
code, frequency of occurrence of certain types of statements, etc. A module that
doesn't fall within the "normal" range might be summarily rejected (bad idea) or
re-examined to see if there's a better way to design this part of the program.
20. Table-driven code. The table is a list of addresses or pointers or names of
modules. In a traditional CASE statement, the program branches to one of several
places depending on the value of an expression. In the table-driven equivalent, the
program would branch to the place specified in, say, location 23 of the table. The
table is probably in a separate data file that can vary from day to day or from
installation to installation. By modifying the table, you can radically change the
control flow of the program without recompiling or relinking the code. Some
programs drive a great deal of their control flow this way, using several tables.
Coverage measures? Some examples:
o check that every expression selects the correct table element
o check that the program correctly jumps or calls through every table
element
o check that every address or pointer that is available to be loaded into these
tables is valid (no jumps to impossible places in memory, or to a routine
whose starting address has changed)
o check the validity of every table that is loaded at any customer site.
22. Every interrupt. An interrupt is a special signal that causes the computer to stop
the program in progress and branch to an interrupt handling routine. Later, the
program restarts from where it was interrupted. Interrupts might be triggered by
hardware events (I/O or signals from the clock that a specified interval has
elapsed) or software (such as error traps). Generate every type of interrupt in
every way possible to trigger that interrupt.
23. Every interrupt at every task, module, object, or even every line. The interrupt
handling routine might change state variables, load data, use or shut down a
peripheral device, or affect memory in ways that could be visible to the rest of the
program. The interrupt can happen at any time-between any two lines, or when
any module is being executed. The program may fail if the interrupt is handled at
a specific time. (Example: what if the program branches to handle an interrupt
while it's in the middle of writing to the disk drive?)
24. The number of test cases here is huge, but that doesn't mean you don't have to
think about this type of testing. This is path testing through the eyes of the
processor (which asks, "What instruction do I execute next?" and doesn't care
whether the instruction comes from the mainline code or from an interrupt
handler) rather than path testing through the eyes of the reader of the mainline
code. Especially in programs that have global state variables, interrupts at
unexpected times can lead to very odd results.
25. Every anticipated or potential race. Imagine two events, A and B. Both will occur,
but the program is designed under the assumption that A will always precede B.
This sets up a race between A and B -if B ever precedes A, the program will
probably fail. To achieve race coverage, you must identify every potential race
condition and then find ways, using random data or systematic test case selection,
to attempt to drive B to precede A in each case.
26. Races can be subtle. Suppose that you can enter a value for a data item on two
different data entry screens. User 1 begins to edit a record, through the first
screen. In the process, the program locks the record in Table 1. User 2 opens the
second screen, which calls up a record in a different table, Table 2. The program
is written to automatically update the corresponding record in the Table 1 when
User 2 finishes data entry. Now, suppose that User 2 finishes before User 1. Table
2 has been updated, but the attempt to synchronize Table 1 and Table 2 fails.
What happens at the time of failure, or later if the corresponding records in Table
1 and 2 stay out of synch?
27. Every time-slice setting. In some systems, you can control the grain of switching
between tasks or processes. The size of the time quantum that you choose can
make race bugs, time-outs, interrupt-related problems, and other time-related
problems more or less likely. Of course, coverage is a difficult problem here
because you aren't just varying time-slice settings through every possible value.
You also have to decide which tests to run under each setting. Given a planned set
of test cases per setting, the coverage measure looks at the number of settings
you've covered.
28. Varied levels of background activity. In a multiprocessing system, tie up the
processor with competing, irrelevant background tasks. Look for effects on races
and interrupt handling. Similar to time-slices, your coverage analysis must specify
o categories of levels of background activity (figure out something that
makes sense) and
o all timing-sensitive testing opportunities (races, interrupts, etc.).
30. Each processor type and speed. Which processor chips do you test under? What
tests do you run under each processor? You are looking for:
o speed effects, like the ones you look for with background activity testing,
and
o consequences of processors' different memory management rules, and
o floating point operations, and
o any processor-version-dependent problems that you can learn about.
32. Every opportunity for file / record / field locking.
33. Every dependency on the locked (or unlocked) state of a file, record or field.
34. Every opportunity for contention for devices or resources.
35. Performance of every module / task / object. Test the performance of a module
then retest it during the next cycle of testing. If the performance has changed
significantly, you are either looking at the effect of a performance-significant
redesign or at a symptom of a new bug.
36. Free memory / available resources / available stack space at every line or on
entry into and exit out of every module or object.
37. Execute every line (branch, etc.) under the debug version of the operating
system. This shows illegal or problematic calls to the operating system.
38. Vary the location of every file. What happens if you install or move one of the
program's component, control, initialization or data files to a different directory or
drive or to another computer on the network?
39. Check the release disks for the presence of every file. It's amazing how often a
file vanishes. If you ship the product on different media, check for all files on all
media.
40. Every embedded string in the program. Use a utility to locate embedded strings.
Then find a way to make the program display each string.

41. Operation of every function / feature / data handling operation under:


42. Every program preference setting.
43. Every character set, code page setting, or country code setting.
44. The presence of every memory resident utility (inits, TSRs).
45. Each operating system version.
46. Each distinct level of multi-user operation.
47. Each network type and version.
48. Each level of available RAM.
49. Each type / setting of virtual memory management.
50. Compatibility with every previous version of the program.
51. Ability to read every type of data available in every readable input file format. If
a file format is subject to subtle variations (e.g. CGM) or has several sub-types
(e.g. TIFF) or versions (e.g. dBASE), test each one.
52. Write every type of data to every available output file format. Again, beware of
subtle variations in file formats-if you're writing a CGM file, full coverage would
require you to test your program's output's readability by every one of the main
programs that read CGM files.
53. Every typeface supplied with the product. Check all characters in all sizes and
styles. If your program adds typefaces to a collection of fonts that are available to
several other programs, check compatibility with the other programs (nonstandard
typefaces will crash some programs).
54. Every type of typeface compatible with the program. For example, you might test
the program with (many different) TrueType and Postscript typefaces, and fixed-
sized bitmap fonts.
55. Every piece of clip art in the product. Test each with this program. Test each
with other programs that should be able to read this type of art.
56. Every sound / animation provided with the product. Play them all under different
device (e.g. sound) drivers / devices. Check compatibility with other programs
that should be able to play this clip-content.
57. Every supplied (or constructible) script to drive other machines / software (e.g.
macros) / BBS's and information services (communications scripts).
58. All commands available in a supplied communications protocol.
59. Recognized characteristics. For example, every speaker's voice characteristics
(for voice recognition software) or writer's handwriting characteristics
(handwriting recognition software) or every typeface (OCR software).
60. Every type of keyboard and keyboard driver.
61. Every type of pointing device and driver at every resolution level and ballistic
setting.
62. Every output feature with every sound card and associated drivers.
63. Every output feature with every type of printer and associated drivers at every
resolution level.
64. Every output feature with every type of video card and associated drivers at
every resolution level.
65. Every output feature with every type of terminal and associated protocols.
66. Every output feature with every type of video monitor and monitor-specific
drivers at every resolution level.

67. Every color shade displayed or printed to every color output device (video card /
monitor / printer / etc.) and associated drivers at every resolution level. And
check the conversion to grey scale or black and white.
68. Every color shade readable or scannable from each type of color input device at
every resolution level.
69. Every possible feature interaction between video card type and resolution,
pointing device type and resolution, printer type and resolution, and memory
level. This may seem excessively complex, but I've seen crash bugs that occur
only under the pairing of specific printer and video drivers at a high resolution
setting. Other crashes required pairing of a specific mouse and printer driver,
pairing of mouse and video driver, and a combination of mouse driver plus video
driver plus ballistic setting.
70. Every type of CD-ROM drive, connected to every type of port (serial / parallel /
SCSI) and associated drivers.
71. Every type of writable disk drive / port / associated driver. Don't forget the fun
you can have with removable drives or disks.
72. Compatibility with every type of disk compression software. Check error
handling for every type of disk error, such as full disk.
73. Every voltage level from analog input devices.
74. Every voltage level to analog output devices.
75. Every type of modem and associated drivers.
76. Every FAX command (send and receive operations) for every type of FAX card
under every protocol and driver.
77. Every type of connection of the computer to the telephone line (direct, via PBX,
etc.; digital vs. analog connection and signaling); test every phone control
command under every telephone control driver.
78. Tolerance of every type of telephone line noise and regional variation
(including variations that are out of spec) in telephone signaling (intensity,
frequency, timing, other characteristics of ring / busy / etc. tones).
79. Every variation in telephone dialing plans.
80. Every possible keyboard combination. Sometimes you'll find trap doors that the
programmer used as hotkeys to call up debugging tools; these hotkeys may crash
a debuggerless program. Other times, you'll discover an Easter Egg (an
undocumented, probably unauthorized, and possibly embarrassing feature). The
broader coverage measure is every possible keyboard combination at every
error message and every data entry point. You'll often find different bugs when
checking different keys in response to different error messages.
81. Recovery from every potential type of equipment failure. Full coverage includes
each type of equipment, each driver, and each error state. For example, test the
program's ability to recover from full disk errors on writable disks. Include
floppies, hard drives, cartridge drives, optical drives, etc. Include the various
connections to the drive, such as IDE, SCSI, MFM, parallel port, and serial
connections, because these will probably involve different drivers.
82. Function equivalence. For each mathematical function, check the output against
a known good implementation of the function in a different program. Complete
coverage involves equivalence testing of all testable functions across all possible
input values.
83. Zero handling. For each mathematical function, test when every input value,
intermediate variable, or output variable is zero or near-zero. Look for severe
rounding errors or divide-by-zero errors.
84. Accuracy of every graph, across the full range of graphable values. Include
values that force shifts in the scale.
85. Accuracy of every report. Look at the correctness of every value, the formatting
of every page, and the correctness of the selection of records used in each report.
86. Accuracy of every message.
87. Accuracy of every screen.
88. Accuracy of every word and illustration in the manual.
89. Accuracy of every fact or statement in every data file provided with the product.
90. Accuracy of every word and illustration in the on-line help.
91. Every jump, search term, or other means of navigation through the on-line
help.
92. Check for every type of virus / worm that could ship with the program.
93. Every possible kind of security violation of the program, or of the system while
using the program.
94. Check for copyright permissions for every statement, picture, sound clip, or
other creation provided with the program.
95. Verification of the program against every program requirement and published
specification.
96. Verification of the program against user scenarios. Use the program to do real
tasks that are challenging and well-specified. For example, create key reports,
pictures, page layouts, or other documents to match ones that have been
featured by competitive programs as interesting output or applications.
97. Verification against every regulation (IRS, SEC, FDA, etc.) that applies to the
data or procedures of the program.
98. Usability tests of:
99. Every feature / function of the program.
100. Every part of the manual.
101. Every error message.
102. Every on-line help topic.
103. Every graph or report provided by the program.
104. Localizability / localization tests:
105. Every string. Check program's ability to display and use this string if it is
modified by changing the length, using high or low ASCII characters, different
capitalization rules, etc.
106. Compatibility with text handling algorithms under other languages
(sorting, spell checking, hyphenating, etc.)
107. Every date, number and measure in the program.
108. Hardware and drivers, operating system versions, and memory-resident
programs that are popular in other countries.

109. Every input format, import format, output format, or export format that
would be commonly used in programs that are popular in other countries.
110. Cross-cultural appraisal of the meaning and propriety of every string
and graphic shipped with the program.

What if there isn't enough time for thorough testing?

Use risk analysis to determine where testing should be focused.


Since it's rarely possible to test every possible aspect of an application, every possible
combination of events, every dependency, or everything that could go wrong, risk
analysis is appropriate to most software development projects. This requires judgement
skills, common sense, and experience. (If warranted, formal methods are also available.)
Considerations can include:

• Which functionality is most important to the project's intended purpose?


• Which functionality is most visible to the user?
• Which functionality has the largest safety impact?
• Which functionality has the largest financial impact on users?
• Which aspects of the application are most important to the customer?
• Which aspects of the application can be tested early in the development cycle?
• Which parts of the code are most complex, and thus most subject to errors?
• Which parts of the application were developed in rush or panic mode?
• Which aspects of similar/related previous projects caused problems?
• Which aspects of similar/related previous projects had large maintenance
expenses?
• Which parts of the requirements and design are unclear or poorly thought out?
• What do the developers think are the highest-risk aspects of the application?
• What kinds of problems would cause the worst publicity?
• What kinds of problems would cause the most customer service complaints?
• What kinds of tests could easily cover multiple functionalities?
• Which tests will have the best high-risk-coverage to time-required ratio?

Defect reporting

The bug needs to be communicated and assigned to developers that can fix it. After the
problem is resolved, fixes should be re-tested, and determinations made regarding
requirements for regression testing to check that fixes didn't create problems elsewhere.
The following are items to consider in the tracking process:

• Complete information such that developers can understand the bug, get an idea of
its severity, and reproduce it if necessary.
• Bug identifier (number, ID, etc.)

• Current bug status (e.g., 'Open', 'Closed', etc.)


• The application name and version
• The function, module, feature, object, screen, etc. where the bug occurred
• Environment specifics, system, platform, relevant hardware specifics
• Test case name/number/identifier
• File excerpts/error messages/log file excerpts/screen shots/test tool logs that
would be helpful in finding the cause of the problem
• Severity Level
• Tester name
• Bug reporting date
• Name of developer/group/organization the problem is assigned to
• Description of fix
• Date of fix
• Application version that contains the fix
• Verification Details

A reporting or tracking process should enable notification of appropriate personnel at
various stages.

Types of Automated Tools

1. code analyzers - monitor code complexity, adherence to standards, etc.


2. coverage analyzers - these tools check which parts of the code have been
exercised by a test, and may be oriented to code statement coverage,
condition coverage, path coverage, etc.
3. memory analyzers - such as bounds-checkers and leak detectors.
4. load/performance test tools - for testing client/server and web applications
under various load levels.
5. web test tools - to check that links are valid, HTML code usage is correct,
client-side and server-side programs work, and a web site's interactions are secure.
6. other tools - for test case management, documentation management, bug
reporting, and configuration management.

Sample test cases:

Condition        | Case description                                                  | Expected result
Focus on Fax no. | Enter valid fax numbers                                           | It should accept the number entered
                 | Enter alphabets, check for the message                            | It should pop up an error message
                 | Enter special characters other than braces, '-', '+'              | It should display an error message
Email-id         | Enter a valid Email-id (abc@abccd.com)                            | It should accept the entered value
                 | Enter special characters other than underscore, '.', hyphen, '@'  | It should pop up an error message
                 | Enter numbers as the Email-id                                     | It should not display any error message
                 | Enter more than one '@' symbol                                    | It should pop up an error message