
Test case writing best practices:

Understand the requirements document.

Break the requirements into smaller sections to achieve better coverage.

For each requirement, decide what technique you should use to derive the test
cases.

Understand the architecture, data flow diagrams, and design of the application/module carefully.

Consider black box techniques, which include:

Equivalence class partitioning

Boundary value analysis

State transition

Use cases

Decision table
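Equivalence class partitioning and boundary value analysis can be sketched with a small, invented example. Assuming a hypothetical validator that accepts ages 18 to 65 inclusive, the partitions are below range, in range, and above range, and the interesting boundary values sit on either side of 18 and 65:

```python
def is_valid_age(age: int) -> bool:
    """Hypothetical function under test: accepts ages in [18, 65]."""
    return 18 <= age <= 65

# Equivalence class partitioning: one representative value per class.
equivalence_cases = [(10, False), (40, True), (80, False)]

# Boundary value analysis: the values on and just outside each boundary.
boundary_cases = [(17, False), (18, True), (65, True), (66, False)]

for age, expected in equivalence_cases + boundary_cases:
    assert is_valid_age(age) == expected, f"age={age}"
```

Three partition representatives plus four boundary values cover the input space far more economically than testing many arbitrary ages.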

Consider white box techniques, which include:

Statement coverage

Decision coverage

Condition coverage

Multi condition coverage
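The difference between these coverage levels shows up even in a tiny, invented function. Statement coverage only requires every line to execute once; decision coverage requires each `if` to evaluate both true and false; multiple-condition coverage requires every combination of the individual conditions:

```python
def classify_discount(is_member: bool, total: float) -> str:
    """Hypothetical function with two compound decisions."""
    if is_member and total >= 100:   # decision A
        return "gold"
    if is_member or total >= 500:    # decision B
        return "silver"
    return "none"

# Decision coverage: each decision must be both True and False at least once.
cases = [
    (True, 150.0, "gold"),     # A True
    (True, 50.0, "silver"),    # A False; B True via is_member
    (False, 600.0, "silver"),  # A False; B True via total
    (False, 10.0, "none"),     # A False; B False
]
for member, total, expected in cases:
    assert classify_discount(member, total) == expected
```

Multiple-condition coverage would additionally demand every combination of `is_member` and the two `total` thresholds, so the test count grows quickly as conditions are added.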

Consider experience-based techniques, which include:

Error guessing

Exploratory testing

For each scenario, consider the functional as well as non-functional conditions.

Do not assume the functionality and features of the application while writing test
cases.

For each scenario, cover all positive as well as negative aspects.

Have a traceability matrix, which provides coverage of testing. Keep filling the
traceability matrix when you complete test cases for each requirement.
Test case template/format varies from company to company; there is no hard and fast template. A generic template for test case writing is shown below:
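The traceability matrix mentioned above can be kept as a simple mapping from requirements to the test cases that cover them. A minimal sketch in Python, with invented requirement and test case ids for illustration:

```python
# Traceability matrix: requirement -> test cases covering it (ids invented).
traceability = {
    "REQ-001 user registration": ["Register_01", "Register_02"],
    "REQ-002 login":             ["Login_01"],
    "REQ-003 password reset":    [],  # not yet covered
}

def uncovered(matrix: dict) -> list:
    """Requirements with no test cases yet -- the gaps in coverage."""
    return [req for req, tests in matrix.items() if not tests]

def coverage(matrix: dict) -> float:
    """Fraction of requirements with at least one test case."""
    covered = sum(1 for tests in matrix.values() if tests)
    return covered / len(matrix)
```

Filling this in as test cases are completed for each requirement makes coverage gaps visible at a glance, which is exactly the information a release decision needs.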

INTRODUCTION:
Glen Myers, in his classic book on software testing, defines software testing as a
process of executing a program with the intent of finding an error. This definition
serves very well to provide a goal for all testing activities: finding defects.
The by-product of this defect-finding activity provides a very good measure of the quality of the application. For example, the number of defects found per release can give an idea of the stability of the product, the number of defects found for a particular feature can give information about that feature's stability, and the number of defects found at test complete (hypothetical) can give crucial information for making release decisions. The tests themselves give an idea of test coverage through the mapping of tests to documented specs/perceived user scenarios.
The process of executing a program can be very complicated, though. Today's complex applications, varying in technology, domain, user base, and interface, provide quite a challenge to exercise the program fully. Of course, it is a well-known fact that we can never cover all possibilities in program execution, so testing can never be complete. But we can still aim at effective testing and efficient program execution with the intent of capturing defects and information (metrics).
Having said this, a test case, which can be considered the smallest unit of a test, should also be written to satisfy this intent.
Quality can be measured for functionality, usability, performance, security, compatibility, and stress, among other attributes. In the context of this writing, though, we will talk specifically about black box functional tests, narrowing further to web-based applications.

WHAT IS A TEST CASE?:


The IEEE definition of a test case is "documentation specifying inputs, predicted results, and a set of execution conditions for a test item." The aim is to divide the software function into small units of function, each testable with input and producing a result that is measurable.
So, basically a test case is a feature/function description that should be executed
with a range of input, given certain preconditions, and the outcome measured
against expected result.
By the way, there is a common misconception relating to test cases and test scripts,
or even test suite. Many people use them interchangeably, and that is a mistake. In
short, a test script (or test suite) is a compilation of multiple test cases.
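The IEEE definition and the case/suite distinction can be made concrete as data structures. A minimal sketch, with field names and the example test case invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """Mirrors the IEEE definition: inputs, predicted results,
    and execution conditions (preconditions) for a test item."""
    name: str
    preconditions: list
    inputs: dict
    expected_result: str

@dataclass
class TestSuite:
    """A test script/suite is a compilation of multiple test cases."""
    name: str
    cases: list = field(default_factory=list)

suite = TestSuite("Registration")
suite.cases.append(TestCase(
    name="Register_01a",
    preconditions=["user is on the registration page"],
    inputs={"email": "not-an-email"},
    expected_result="validation error shown for the email field",
))
```

Keeping the suite as a container of cases, rather than one monolithic document, is what lets individual cases be selected, reordered, and mapped to requirements later.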

WHAT INFORMATION WOULD THE TEST MANAGER WANT OUT OF TEST CASE DOCUMENT/S?
The test cases provide important information to the client regarding the quality of
their product. The approach to test case writing should be such as to facilitate the
collection of this information.
1. Which features have been tested/ will be tested eventually?
2. How many user scenarios/ use cases have been executed?
3. How many features are stable?
4. Which features need more work?
5. Are sufficient input combinations exercised?
6. Does the app give out correct error messages if the user does not use it the way it was
intended to be used?
7. Does the app respond to the various browser specific functions as it should?
8. Does the UI conform to the specifications?
9. Are the features traceable to the requirement spec? Have all of them been covered?
10. Are the user scenarios traceable to the use case document? Have all of them been covered?
11. Can these tests be used as an input to automation?
12. Are the tests good enough? Are they finding defects?
13. Is the software ready to ship? Is the testing enough?
14. What is the quality of the application?


APPROACH TO TEST CASE WRITING


The approach to organizing test cases will determine the extent to which they are
effective in finding defects and providing the information required from them. Various
approaches have been listed by Cem Kaner in his paper
at http://www.kaner.com/pdfs/GoodTest.pdf

Function: Test each function/feature in isolation.
Domain: Test by partitioning different sets of values.
Specification based: Test against published specifications.
Risk based: Imagine a way in which a program could fail, then design tests to check whether the program will actually fail.
User: Tests done by users.
Scenario/use case based: Based on actors/users and a set of actions they are likely to perform in real life.
Exploratory: The tester actively controls the design of tests as those tests are performed and uses information gained while testing to design new and better tests.

Since the goal should be to maximize the extent to which the application is
exercised, a combination of two or more of these works well. Exploratory testing in
combination with any of these approaches will give the focus needed to find defects
creatively.
Pure exploratory testing provides a rather creative option to traditional test case
writing, but is a topic of separate discussion.

TEST CASE WRITING PROCEDURE

Study the application

Get as much information about the application as possible through available documentation (requirement specs, use cases, user guides), tutorials, or by exercising the software itself (when available).
Determine a list of features and different user roles.
If it's a special domain, try to obtain as much information as possible about how users might interact with the application.

Standardize an approach for test case writing

Write test cases for different features into different documents (usually excel sheets)
and name according to the feature. In case a particular application has well defined
user roles, differentiate the test cases based on a combination of user role and
feature. Write tests involving interaction between different user roles and modules
separately for complex applications.
Further, as a check, make sure that the entire application flow has been covered. For example, for an ecommerce application, the flow will begin at registration and end at the point where the user gets an order confirmation or successfully cancels an order. Trace this flow to the set of test cases.

Identify sets of test cases


Identify logical sets of test cases based on individual features/ user roles or integration
between them.

Create separate test cases for special functional tests e.g. browser specific tests (using
browser specific functions like back button, closing the browser session etc), UI tests,
usability tests, security tests, cookie/ state verification etc. to ensure that all tests under these
categories are covered.
Effective test cases verify functions in isolation. If a particular feature has a lot of input combinations, separate the test into subtests. For example, to verify how the registration feature works with invalid input, write sub-tests for different values.

Main test case: Register_01- Verify user cannot register with invalid inputs in
registration form
Sub test cases:
Register_01a- Verify with invalid email id.
Register_01b- Verify with invalid phone number
Register_01c- Verify with large number of characters in password field.
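The Register_01 sub-tests above can be sketched as assertions against a hypothetical `validate_registration()` helper that returns the list of invalid fields; the validation rules and field limits here are invented for illustration:

```python
import re

def validate_registration(email: str, phone: str, password: str) -> list:
    """Hypothetical validator: returns the names of invalid fields."""
    errors = []
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        errors.append("email")
    if not re.fullmatch(r"\+?\d{7,15}", phone):
        errors.append("phone")
    if len(password) > 64:  # assumed maximum length for the password field
        errors.append("password")
    return errors

# Register_01a: invalid email id
assert validate_registration("bad-email", "5551234567", "pw") == ["email"]
# Register_01b: invalid phone number
assert validate_registration("a@b.com", "12ab", "pw") == ["phone"]
# Register_01c: large number of characters in the password field
assert validate_registration("a@b.com", "5551234567", "x" * 200) == ["password"]
```

Each sub-test varies exactly one field while keeping the others valid, so a failure points unambiguously at the input that caused it.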

Decide on a structure

The test case format given below serves well for functional test case writing. Some of the information may be redundant if written for each test case (e.g. references), in which case it can be mentioned just once at the beginning of the document. In some cases, when the use cases or requirement specs are well written, there could be a perfect mapping of each test case to a particular section of the document.

Test case name- Decide on a test case naming convention based on the approach used. The idea is to use a convention such that one look at the set of test cases will indicate the feature being tested or the user role/scenario being tested. For example, using Seller_Register_xx for all test cases on seller registration will immediately indicate the number of test cases written for that user role and feature.

The test case name should be unique, so that when the test case document is used as an input to automation scripts, any combination of test cases can be included in a single suite of tests.
For example, consider an auction site like eBay, where buyer and seller are distinct user roles. The most effective approach would be to write test cases separately for buyer and seller, addressing the different features each role exercises. Buyer_Register_01 "Verify that inserting valid values for all fields on the registration page registers the buyer successfully" could be one of the test cases where a buyer registers with eBay. Similarly, Buyer_bids_01 "Verify that a buyer can bid for items where the bid period has not yet expired" could be one of the test cases for the bid feature. Another set of test cases will address features for a seller, and yet another should address the interaction between buyer and seller and will be scenario based.

Description- Explain the function under test. Clearly state exactly what attribute is under
test and under what condition.
Prerequisites- Every test needs to follow a sequence of actions, which lead to the
function under test. It could be a certain page that a user needs to be on, or certain data

that should be in the system (like registration data in order to login to the system), or
certain action. State this precondition clearly in the test case. This helps to define
specific steps for manual testing, and more so for automated testing, where the system
needs to be in a particular base state for the function to be tested.
Steps- Sequence of steps to execute the specific function.
Input- Specify the data used for a particular test or if it is a lot of data, point to a file
where this data is stored.
Expected result- Clearly state the expected outcome in terms of the page/screen that should appear after the test, changes that should happen to other pages, and if possible, changes that should happen to the database.


Actual result- State the actual result of the function execution. Especially when the test case fails, the information under actual result will be very useful to the developer to analyse the cause of the defect.
Status- Write status separately for tests done using different environments, e.g. various OS/browser combinations. Test case status could be:
o Passed- The expected and actual results match.
o Failed- The actual result does not match the expected result.
o Not tested- The test case has not been executed for the test run; maybe it is a lower priority test case.
o Not applicable- The test case does not apply to the feature any more since the requirement changed.
o Cannot be tested- Maybe the prerequisite/precondition is not met; there could be a defect in one of the steps leading up to the function under test.
Comments- Write additional information under this column, e.g. that the actual result occurs only under a particular condition or that a defect is reproducible only sometimes. This gives the developer/client additional information about the feature behaviour, which can be very useful in determining the root cause of a problem. It is especially useful for failed cases, but also serves as feedback when an additional observation is mentioned in passed cases.
References- Refer / map a test case to the corresponding requirement spec or use case or
any other reference material that you used. This information helps gauge the test coverage
against the documented requirements.
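The five status values listed above form a fixed vocabulary, which a test-run report can pin down as an enumeration so results never drift into free-form strings. A minimal sketch, with the run data invented for illustration:

```python
from enum import Enum

class TestStatus(Enum):
    """The five result states from the template above."""
    PASSED = "passed"
    FAILED = "failed"
    NOT_TESTED = "not tested"
    NOT_APPLICABLE = "not applicable"
    CANNOT_BE_TESTED = "cannot be tested"

def summarize(results: dict) -> dict:
    """Count test cases per status for a test-run report."""
    counts = {status: 0 for status in TestStatus}
    for status in results.values():
        counts[status] += 1
    return counts

# Example run: test case name -> status (data invented).
run = {
    "Register_01a": TestStatus.PASSED,
    "Register_01b": TestStatus.FAILED,
    "Register_01c": TestStatus.NOT_TESTED,
}
```

Aggregating statuses this way yields exactly the per-feature stability numbers the test manager questions earlier in this article ask for.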

CONCLUSION:
It's a huge task to write effective test cases with all the appropriate details. Once the test case documents are ready to be executed against, it's typically only the beginning of the test effort. As you become more familiar with the application (exercising creativity while doing so) and better in tune with the end user's perspective (not relying only on the documented features/use cases), you will likely add more relevant test cases. The same will be true with each progression towards the final release, as features get added, deleted, or modified while building towards the final product. Thus, the usefulness of the test cases will ultimately depend on how current and relevant they are.
