
“Exploratory Testing” is a style of testing in which we learn about the behavior of a system by designing a small test, executing it immediately, and using the information gleaned from the last test to inform the next.

Exploratory testing is an approach to software testing that is concisely described as simultaneous learning, test design, and test execution (Wikipedia).

Exploratory Testing in general means “testing an application without requirement specifications”.

“Discovering the unexpected is more important than confirming the known.“— George
E. P. Box

Cem Kaner, who coined the term in 1984,[1] defines exploratory testing as "a style of software
testing that emphasizes the personal freedom and responsibility of the individual tester to
continually optimize the quality of his/her work by treating test-related learning, test design,
test execution, and test result interpretation as mutually supportive activities that run in
parallel throughout the project."[2]

Exploratory testing is one of the strongest ways to understand an application and then test it with logic. It enhances quality to a great extent and can be combined with other testing practices simultaneously.

There are cases when testers don’t get a requirements document but still have to finish all testing activities within a limited time frame. Within this strict timeline, they have to explore and learn the application, do test planning, test the application, and present a detailed test analysis. So, how can all this be achieved within such a short span?

Exploratory testing follows an approach of minimum planning and maximum test execution, which involves both logical and critical thinking, as there are neither test cases to simply read and execute nor automated test scripts to run as designed.

Exploratory software testing is a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the value of her work, by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project.

Time sequence in exploration

• In contrast with scripting, we:
– Design the test as needed
– Execute the test at time of design, or reuse it later
– Vary the test as appropriate, whenever appropriate.

• Not scripting doesn’t mean not preparing:
– We often design support materials in advance and use them many times throughout testing, such as data sets, failure mode lists, and combination charts.
Cognitive sequence in exploration

This is the fundamental difference between exploratory and scripted testing.

• The exploratory tester is always responsible for

managing the value of her own time.

– At any point in time, this might include:

° Reusing old tests

° Creating and running new tests

° Creating test-support artifacts, such as failure mode lists

° Conducting background research that can then guide test design

Exploratory Testing in general means “testing an application without requirement specifications”. But it is more than that:

Skilled testers perform exploratory testing once scripted test execution is done, brainstorming to find more scenarios.

If the time available to test an application is short, exploratory testing is a good way to achieve test coverage. Sprint cycles have in some cases shrunk to one week, and in such cases it becomes difficult for the tester to write test cases and execute them.

Exploratory means NOT testing with scripted test cases, but you should be aware of the full requirements and think beyond them.

Any tester testing without referring to scripted test cases is performing exploratory testing.

Corner areas of the application can be reached through efficient exploratory testing.

Chartered Exploratory Testing


There are many ways to perform exploratory testing, but one of the most commonly used methods is known as chartered exploratory testing. In this method, the tester prepares a charter to guide what to test within a defined timeline. So we first need to prepare the charter, and then a time box should be defined so that the task is finished within the defined timeline.
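As an illustration, the charter and time box described above might be captured in a lightweight structure like the following Python sketch. The field names, defaults, and example values are illustrative assumptions, not part of any standard charter format:

```python
from dataclasses import dataclass, field
from datetime import timedelta

@dataclass
class Charter:
    """One charter per session: mission, scope, risks, and a time box."""
    mission: str                                  # what we want to achieve
    areas: list = field(default_factory=list)     # features/workflows in scope
    risks: list = field(default_factory=list)     # risks and defect kinds to probe
    timebox: timedelta = timedelta(hours=2)       # session ends when this expires

# Hypothetical example charter for a checkout flow:
checkout = Charter(
    mission="Probe the checkout flow for data-validation gaps",
    areas=["cart", "payment form"],
    risks=["invalid card numbers accepted", "double submission"],
)
print(checkout.mission, "-", checkout.timebox)
```

Writing the charter down this compactly keeps the session focused without turning it into a scripted test plan.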

Mission

Defines the objective we need to achieve from exploratory tests.

What features we need to test and how to test them

Testers may focus on specific complex functionalities for a particular application in one session.

Testers may choose one of the system workflows and test it in a particular session.

Charter

What we want to achieve

What tactics to use

What risks are involved

What kinds of defects to look for

Any high-level test plan

Who will be involved; it is good either to involve the test lead or to perform pair testing.

Session

Generally varies from 2-3 hours.

Breaks while performing exploratory testing are not recommended.

Don’t interrupt the current scenario midway if another scenario suggests itself; note it down instead.

Session Report

Testers can prepare test notes while exploring the application, and keep notes for their own learning.

If possible, take notes on all the scenarios covered.

Note where clarification from the client is required for some scenarios.

Bugs

List of bugs identified.

Raise them in the defect management tool with proper classification.

Discussion

Schedule a meeting with the manager to provide information about the different activities.

Prepare a report and share it with the team.

Keep the documents prepared in the session report for future reference.
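The one-page session report described above could even be generated from the session notes with a short script. This is a sketch under an assumed layout, not a prescribed reporting format:

```python
from datetime import date

def session_report(charter, notes, bugs, questions):
    """Render a one-page session report as plain text (layout is illustrative)."""
    lines = [f"Session report - {date.today():%Y-%m-%d}",
             f"Charter: {charter}",
             "",
             "Notes:"] + [f"- {n}" for n in notes]
    lines += ["", "Bugs:"] + [f"- {b}" for b in bugs]
    lines += ["", "Questions for the client:"] + [f"- {q}" for q in questions]
    return "\n".join(lines)

# Hypothetical session contents:
print(session_report(
    "Probe the checkout flow",
    ["Coupon field accepts expired codes"],
    ["BUG-101: double charge on rapid double-click"],
    ["Should guest checkout allow saved cards?"],
))
```

Even a template this small makes the session's notes, bugs, and open questions easy to hand to the manager.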

Even though documentation is not mandatory for exploratory testing, it is advisable to create a one-page document at the end of exploratory testing. This documentation can be submitted to the manager to showcase the status of testing and the current state of the application.

Where to use exploratory testing

Exploratory testing may be useful in situations where testing needs to be completed without any proper
requirement document.

It is also recommended when time available for testing is slim.

When the project has budget-related constraints.

If the modules are simple and don’t require elaborate test documentation in the form of test cases.

When critical thinking is needed to find more defects.

Misconceptions Regarding Exploratory Testing

There are many misconceptions regarding exploratory testing. Some of them are as follows:

Unstructured: Many people think exploratory testing is an unstructured form of testing, which is not correct. It is structured and is done by testers who have ample product knowledge.

No need to plan or document tests: Many people think there is no need to plan or document tests, but that is a myth. The plan is captured in the charter; though very short, it may fit in a single paragraph.

It is new and fast: Many people believe that Exploratory testing is new and can help in test completion
at a fast pace. But the fact is, testers have been using exploratory testing for a very long time. Even
though it is a swift alternative it is a time-bound affair and can take up quite a lot of time depending on
the complexity of the flow or module under test.

It is more effective at finding bugs: Again, this is not true. Traditional or scripted testing is also quite effective, and may even be more effective at finding bugs than exploratory testing.

No need to do scripted testing: This is also not true. Scripted testing helps to sequentially cover all elements of the software, to keep track of progress, and to build confidence in the quality of the software. Scripted or traditional testing needs to be applied along with exploratory testing to get the best effect.

How to Be a Good Exploratory Tester

Be aware of the domain in which you are testing.

Go through all the scripted test cases so that you can think of scenarios beyond them.

Logical thinking is the key. Don’t confuse exploratory testing with monkey testing.

Talk to the client and the developers to understand the application’s complete functionality.

Don’t act on assumptions while doing exploratory testing; note them down for clarification.

Plan your tests, i.e. the objectives you want to achieve with the current exploratory session.

Don’t perform regression testing here; exploratory tests aim to find issues where critical thinking is required.

Don’t depend on exploratory testing tools; this type of testing is best done by an individual, at least until AI testing tools mature.

Conclusion

Most experienced testers leverage their product knowledge while performing exploratory testing to test the application intuitively. That is why exploratory testing is usually done by experienced testers.

For this reason, fresh testers should follow traditional scripted testing techniques and go through the test documentation while testing the application.

Overall, we can say that exploratory testing is an intelligent way of testing, especially when the need of the hour is to deliver a quality application within a limited time frame. It also helps in improving the design of the application in the case of integration issues that are rarely encountered by end users.
Even though documentation is not mandatory, good exploratory tests are planned, engaging and
creative in nature.

Exploratory testing

• Learning: Anything that can guide us in what to test, how to test, or how to recognize a problem.

• Design: “to create, fashion, execute, or construct according to plan; to conceive and plan out in the mind” (Webster’s)

– Designing is not scripting. The representation of a plan is not the plan.

– Explorers’ designs can be reusable.

• Execution: Doing the test and collecting the results. Execution can be automated or manual.

• Interpretation: What do we learn from the program as it performs under our test

– about the product and

– about how we are testing the product?

How…
3. Observing / Learning.
We’re observing what the system does and does not do. We discover
how it operates. We find its quirks and peculiarities while
characterizing its capabilities and limitations. Further, we’re learning
the system well enough that we can reflect our observations back to
the rest of the team. The information that our explorations uncover
will help to move the project forward, but only if we can report it out
in a way that the rest of the team can digest.
By LEARNING

Learning: Anything that can guide us in what to test, how to test, or how to recognize a problem, such as:

– the project context (e.g., development objectives, resources and constraints, stakeholders with influence), market forces that drive the product (competitors, desired and customary benefits, users), hardware and software platforms, and development history of prior versions and related products.

– risks, failure history, support record of this and related products, and how this product currently behaves and fails.

Examples of learning activities

• Study competitive products (how they work, what they do, what expectations they create)

• Research the history of this / related products (design / failures / support)

• Inspect the product under test (and its data) (create function lists, data relationship charts, file structures, user tasks, product benefits, FMEA)

• Question: Identify missing info, imagine potential sources and potentially revealing questions (interview users, developers, and other stakeholders; use reference materials to supplement answers)

• Review written sources: specifications, other authoritative documents, culturally authoritative sources, persuasive sources

• Try out potentially useful tools

• Hardware / software platform: Design and run experiments to establish lab procedures or polish lab techniques. Research the compatibility space of the hardware/software (see, e.g., Kaner, Falk, and Nguyen’s Testing Computer Software chapter on printer testing).

• Team research: brainstorming or other group activities to combine and extend knowledge

• Paired testing: mutual mentoring, foster diversity in models and approaches.

Design
1. Designing.
Good testers know a lot about test design. Test design involves
identifying interesting things to vary, and interesting ways in which
to vary them. We can design tests around data: boundaries, special
characters, long inputs, nulls, number fields with 0, etc. We can also
design tests around sequences and logic using state models,
sequence diagrams, flow charts, decision tables, and other design
artifacts. All the traditional test design techniques apply. The
difference is that while we might make notes about the tests we're
designing, we aren't writing down formal test cases.
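The data-focused design ideas above (boundaries, special characters, long inputs, nulls, zero) can be jotted down as a quick checklist generator rather than as formal test cases. The probe values in this sketch are illustrative, not exhaustive:

```python
def boundary_values(lo, hi):
    """Boundary-value candidates for a numeric field accepting lo..hi,
    plus zero, which is a classic troublemaker for number fields."""
    return sorted({lo - 1, lo, lo + 1, 0, hi - 1, hi, hi + 1})

# Data-focused probes for a text field (values are illustrative):
text_probes = [
    "",                  # empty input
    "   ",               # whitespace only
    "a" * 10_000,        # very long input
    None,                # null
    "Ω≈ç√∫",             # non-ASCII characters
    "'; DROP TABLE--",   # special characters / injection-shaped input
]

print(boundary_values(1, 100))  # [0, 1, 2, 99, 100, 101]
```

Each value is a test idea to try immediately, not a script to file away; the list itself is the reusable design artifact.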


Examples of design activities

• Map test ideas to FMEA or other lists of variables, functions, risks, benefits, tasks, etc.

• Map test techniques to test ideas

• Map tools to test techniques

• Map staff skills to tools / techniques; develop training as necessary

• Develop supporting test data

• Develop supporting oracles

• Data capture: notes? Screen/input capture tool? Log files? Ongoing automated assessment of test results?

• Charter: Decide what you will work on and how you will work

EXECUTION
2. Executing.
As soon as we think of a test, we execute it. This is what
distinguishes exploratory testing from other styles of testing. We’re
not squirreling away a large set of designed tests for some future time
when we (or someone else) will execute them. We execute
them immediately.

Execution: Doing the test and collecting the results. Execution can be automated or manual.

• Interpretation: What do we learn from the program as it performs under our test – about the product and – about how we are testing the product?

Examples of execution activities

• Configure the product under test

• Branch / backtrack: Let yourself be productively distracted from one course of action in order to produce an unanticipated new idea

• Alternate among different activities or perspectives to create or relieve productive tension

• Pair testing: work and think with another person on the same problem

• Vary activities and foci of attention

• Create and debug an automated series of tests

• Run and monitor the execution of an automated series of tests
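For example, an "automated series of tests" often starts as nothing more than a table of input/expected pairs that grew out of a session. In this sketch, `normalize_username` is a hypothetical stand-in for the real function under test:

```python
# `normalize_username` is a hypothetical stand-in; in a real session this
# would be a call into the product under test.
def normalize_username(raw):
    return raw.strip().lower()

# A series of checks that grew out of exploration, now replayable:
cases = [
    ("  Alice ", "alice"),
    ("BOB", "bob"),
    ("carol", "carol"),
]
for raw, expected in cases:
    actual = normalize_username(raw)
    assert actual == expected, f"{raw!r}: expected {expected!r}, got {actual!r}"
print("all checks passed")
```

Once the table exists, adding a newly discovered case is a one-line change, which keeps the automated series growing alongside the exploration.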

Interpretation: What do we learn from the program as it performs under our test – about the product and – about how we are testing the product?

Interpretation activities

• Part of interpreting the behavior exposed by a test is determining whether the program passed or failed the test.

• A mechanism for determining whether a program passed or failed a test is called an oracle. We discuss oracles in detail, on video and in slides, at http://www.testingeducation.org/BBST/BBSTIntro1.html

• Oracles are heuristic: they are incomplete and they are fallible. One of the key interpretation activities is determining which oracle is useful for a given test or test result.

Consistent within Product: Behavior consistent with behavior of comparable functions or functional patterns within the product.

Consistent with Comparable Products: Behavior consistent with behavior of similar functions in comparable products.

Consistent with a Model’s Predictions: Behavior consistent with expectations derived from a model.

Consistent with History: Present behavior consistent with past behavior.

Consistent with our Image: Behavior consistent with an image that the organization wants to project.

Consistent with Claims: Behavior consistent with documentation or ads.

Consistent with Specifications or Regulations: Behavior consistent with claims that must be met.

Consistent with User’s Expectations: Behavior consistent with what we think users want.

Consistent with Purpose: Behavior consistent with apparent purpose.
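The "Consistent with History" heuristic above can be sketched as a small, deliberately fallible oracle: it compares current behavior with a recorded past run, and withholds judgment when there is no history to compare against. The data shapes here are assumptions for illustration:

```python
def consistent_with_history(past_runs, feature, current_output):
    """'Consistent with History' oracle: compare current behavior with a
    recorded past run. Returns True/False, or None when there is no history,
    because a heuristic oracle is allowed to withhold judgment."""
    if feature not in past_runs:
        return None                # nothing to compare against: no verdict
    return current_output == past_runs[feature]

# Hypothetical recorded behavior from an earlier session:
history = {"login": {"status": 200, "redirect": "/home"}}
print(consistent_with_history(history, "login", {"status": 200, "redirect": "/home"}))  # True
print(consistent_with_history(history, "signup", {"status": 500}))                      # None
```

Note that a `False` result is a signal to investigate, not proof of a bug: the history itself may have recorded incorrect behavior, which is exactly what "oracles are fallible" means.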
1. Exploratory testing is everywhere
Don't fixate on working applications. Explore a wireframe. Explore a mockup. Explore databases,
systems diagrams, APIs, acceptance criteria, ideas, processes, feature files, assumptions, the UI,
specifications. Go and explore anything you can gather useful information from!

Exploratory testing is all you have at the beginning...

ET fits at the beginning of the test project because test procedures don't yet exist for the new technology being developed.
Even if they do exist, you have to learn the product (that requires exploring and questioning it), and the procedures would
have to be reviewed and upgraded. The process of writing test procedures is exploratory. Watch anyone, or yourself, writing
a test script, and you'll see those thought processes at work.

Be exploratory in the sense that a tourist on a tour bus is exploratory. Let your allotted tests take you to visit different parts
of the product, then improvise on the theme of those tests, briefly. Spend a few minutes working through variations of the
tests, then get back on the tour bus and do the next scripted test.

Take notes, whatever you do.

Advantages of Exploratory Testing


Wikipedia
The main advantage of exploratory testing is that less preparation is needed, important bugs are found quickly, and at
execution time, the approach tends to be more intellectually stimulating than execution of scripted tests.

Another major benefit is that testers can use deductive reasoning based on the results of previous tests to guide their future testing on the fly. They do not have to complete a current series of scripted tests before focusing in on, or moving on to, exploring a more target-rich environment. This also accelerates bug detection when used intelligently.

Another benefit is that, after initial testing, most bugs are discovered by some sort of exploratory testing. This can be
demonstrated logically by stating, "Programs that pass certain tests tend to continue to pass the same tests and are more
likely to fail other tests or scenarios that are yet to be explored."

Disadvantages are that tests invented and performed on the fly can't be reviewed in advance (and by that prevent errors in
code and test cases), and that it can be difficult to show exactly which tests have been run.

Freestyle exploratory test ideas, when revisited, are unlikely to be performed in exactly the same manner, which can be an
advantage if it is important to find new errors; or a disadvantage if it is more important to repeat specific details of the earlier
tests. This can be controlled with specific instruction to the tester, or by preparing automated tests where feasible,
appropriate, and necessary, and ideally as close to the unit level as possible.

Some of the striking advantages of exploratory testing are:

Testers get familiar with the application and, at the same time, check whether the application under test is behaving as expected. In fact, the tester is doing the same thing as traditional testers do, but without documentation.

The tester can find bugs more quickly than with traditional testing. Results have shown that the rate at which new bugs are found is much higher for exploratory testing than for scripted testing.

Many times, it is not mandatory to follow the traditional approach and prepare test cases for scenarios that do not have much business impact. For such scenarios, exploratory testing is recommended. It is also recommended when software versions are at an early stage and the tester needs to become familiar with the application.

Since exploratory testing demands much logical thinking, the outcome is of the best quality, whereas in scripted testing the objective is to pass or fail the test without brainstorming on test scenarios.

Exploratory testing can be done at any test level and for any test type: performance, security, integration, etc. In general, it can be included in all testing practices.

Uncommon integration issues can be uncovered by rigorous exploratory testing, which could
result in detecting more possible ways to break the application.

However, exploratory testing does not guarantee the quality of the application; it is advantageous especially during a time crunch or on a shoestring budget.

Business Value of Exploratory testing:


 Identifies critical issues/bugs earlier on in the cycle
 Saves time and effort, and increases collaboration
 Empowers testers to test organically to enhance functionality
 Less formality and rigidity of structure
 Fosters experimentation, discovery and creativity
 Better utilization of testing resources adding more value to the product
 Almost instant feedback, closing the gap between testers and
programmers
 User oriented feedback for developers and business analysts

Exploratory testing, in itself, is quite powerful. But when combined with automated testing or other testing practices, it is a potent way to accelerate bug detection, enhance product understanding, build better-quality software faster, and streamline toward more functional tests. By tracking the actions performed during exploratory UI testing, sophisticated testing tools can convert that information into modular code that can be used for automated regression tests.
Disadvantages of Exploratory Testing
It is hard to showcase the effort invested in exploratory testing, since most of the activity is not documented.

It is not possible to showcase the quality of exploratory testing if no bugs were found.

Exploratory testing usually ends with some unanswered queries that need to be clarified by the client. Making assumptions about most of them degrades the objectivity and effectiveness of exploratory testing.

Exploratory testing is only effective when done in sessions of a few hours; it is never a day-long activity.

Mostly, projects with small, tight schedules perform exploratory testing, in which case the expected results of the exploratory tests remain unreviewed.

If not enough critical thinking is applied during exploratory testing, the outcome can be more disappointing than scripted testing.

Finalizing the test coverage is challenging, as there is no formal mapping between the tests performed and the requirements. Until the end of User Acceptance Testing, the results of exploratory testing are not given much weight as evidence of quality.

Since this is an unplanned activity, it is very difficult to convince the customer about the expected effort and time.

Sometimes this testing is taken lightly and areas are left untested. There is also a chance of missing defects, as no documentation is involved in the testing.

It does not help with root-cause analysis when defects escape to production.

Challenges faced in Exploratory Testing


As in any other testing type, exploratory testing has its challenges:

The entire application and the purpose of each and every feature have to be learned in a short span of time.

The term “unplanned activity” raises many questions about the effort, time, and efficiency the tester has put in. It is difficult to convince the customer and management of the actual accounting.

There is no proof of the effort put in unless defects are logged; in that case, it is challenging to defend the actual tests performed.

Since test cases and test scripts are not involved in exploratory testing, the tester has to continuously keep thinking about the possible tests to execute, which may become stressful.

It is difficult to reproduce defects in corner scenarios.

Test coverage is difficult to prove if exploratory testing is clubbed with system/regression testing, as most of the coverage is shown by their results, and exploratory testing has little to show for coverage.

It is difficult to record each and every test performed.

Reporting exploratory testing results is tedious.

Exploratory Testing in an Agile Context, by Elisabeth Hendrickson, Quality Tree Software, Inc.
