
SOFTWARE TESTING

SUNDAY, DECEMBER 31, 2006

Exploratory or Ad-hoc Testing


(Excerpts from an article written by James Bach)

Exploratory software testing is a powerful approach, yet widely misunderstood. In some
situations, it can be orders of magnitude more productive than scripted testing. All
testers practice some form of exploratory testing, unless they simply don't create
tests at all. Yet few of us study this approach, and it doesn't get much respect in our
field. This attitude is beginning to change as companies seek ever more agile and
cost-effective methods of developing software.

Among the hardest things to explain is something that everyone already knows. We all
know how to listen, how to read, how to think, and how to tell anecdotes about the
events in our lives. As adults, we do these things every day. Yet the level of any of
these skills possessed by the average person may not be adequate for certain special
situations. Psychotherapists must be expert listeners and lawyers expert readers;
research scientists must scour their thinking for errors, and journalists report
stories that transcend parlor anecdote. So it is with exploratory testing (ET):
simultaneous learning, test design and test execution. This is a simple concept. But
the fact that it can be described in a sentence can make it seem like something not
worth describing. Its highly situational structure can make it seem, to the casual
observer, that it has no structure at all. That's why textbooks on software testing,
with few exceptions, either don't discuss exploratory testing or discuss it only to
dismiss it as an unworthy practice. Exploratory testing is also known as ad hoc
testing.
POSTED BY SOFTWARE TESTING AT 8:50 AM 2 COMMENTS

WEDNESDAY, OCTOBER 11, 2006
Test Process
1. Identify the purpose of the product. (business requirement)
2. Identify functions.
3. Prioritize the functions (based on criticality/ potential instability)
4. Test the functions
5. Record the test results
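Below is a minimal sketch of how these five steps could be captured in code. The
product, its function names, and the priorities are hypothetical, and run_test stands
in for whatever test you would actually perform.

# Hypothetical illustration of the five-step process above.

def run_test(name):
    """Placeholder for actually testing one function; always 'passes' here."""
    return True

# Steps 1-2: purpose of the (hypothetical) product and its functions.
functions = [("login", 1), ("checkout", 1), ("search", 2), ("profile_edit", 3)]

# Step 3: prioritize by criticality / potential instability (1 = highest).
functions.sort(key=lambda item: item[1])

# Steps 4-5: test each function and record the result.
results = [{"function": name, "priority": prio, "passed": run_test(name)}
           for name, prio in functions]
print(results)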
POSTED BY SOFTWARE TESTING AT 12:51 AM 0 COMMENTS

Function tests
Function tests are simple tests of individual functions. Don't confuse this term
with functional testing, which is a broad term that encompasses black box
testing:

In function testing, we identify all of the individual features or functions, then
test them one at a time.

In functional testing, we treat the program, or any component of it, as a function
(whose inner workings we may not be able to see) and test it by giving it inputs
and comparing its outputs to expected results.

The two key tasks of function testing are:

1. Identify all the individual features and functions. (To keep them straight, you'll
probably organize them into a detailed outline, called a function list.)
2. Now that you've found them, test them.

Function testing is a valuable first step in testing the program. You try each
function quickly, to see if anything is obviously broken. Similarly, many smoke
tests are function tests. (Smoke testing consists of running a series of simple
tests of key aspects of the program, to determine whether a given build is stable
enough to test further.)
courtesy: A course by Cem Kaner & James Bach
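The distinction can be shown with a small sketch (not from the course itself): each
individual function of a hypothetical calculator is identified in a function list and
then exercised quickly, one at a time, the way a smoke test would.

# Hypothetical function-testing sketch: try each individual function quickly,
# one at a time, to see whether anything is obviously broken.

def add(a, b):
    return a + b

def subtract(a, b):
    return a - b

# The "function list": every individual function, each with one quick check.
function_list = {
    "add": lambda: add(2, 3) == 5,
    "subtract": lambda: subtract(5, 3) == 2,
}

for name, quick_check in function_list.items():
    print(name, "OK" if quick_check() else "BROKEN")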
POSTED BY SOFTWARE TESTING AT 12:50 AM 0 COMMENTS

TUESDAY, OCTOBER 10, 2006

Black-box testing
Black-box testing is a test design method that treats the system as a "black-box",
so it doesn't explicitly use knowledge of the internal structure. Black-box testing
is also known as functional testing.
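A minimal sketch of the idea, assuming a trivial component: the test only supplies
inputs and compares outputs against expected results, with no knowledge of the
internals.

# Black-box sketch: the component under test is treated as opaque; we only
# feed it inputs and compare its outputs to expected results.

def absolute_value(x):  # stands in for any component whose internals we ignore
    return x if x >= 0 else -x

cases = [(-5, 5), (0, 0), (7, 7)]  # (input, expected output)
for given, expected in cases:
    assert absolute_value(given) == expected, (given, expected)
print("all black-box cases passed")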
POSTED BY SOFTWARE TESTING AT 3:05 PM 0 COMMENTS

MONDAY, OCTOBER 09, 2006

Quotes
A clever person solves a problem. A wise person avoids it.
-- Albert Einstein

*

The significant problems we face cannot be solved by the same level of thinking
that created them.
-- Albert Einstein

*

The difference between a programmer and a designer: "If you make a general
statement, a programmer says, 'Yes, but...' while a designer says, 'Yes, and...'"
-- André Bensoussan

*

To go faster, slow down. Everybody who knows about orbital mechanics
understands that.
-- Scott Cherf

*

One test is worth a thousand opinions.
-- anonymous
POSTED BY SOFTWARE TESTING AT 11:47 PM 0 COMMENTS

Top Ten Tips for Bug Tracking


1. A good tester will always try to reduce the repro steps to the minimal steps to
reproduce; this is extremely helpful for the programmer who has to find the
bug.

2. Remember that the only person who can close a bug is the person who opened
it in the first place. Anyone can resolve it, but only the person who saw the bug
can really be sure that what they saw is fixed.

3. There are many ways to resolve a bug. FogBUGZ allows you to resolve a bug
as fixed, won't fix, postponed, not repro, duplicate, or by design.

4. Not Repro means that nobody could ever reproduce the bug. Programmers
often use this when the bug report is missing the repro steps.

5. You'll want to keep careful track of versions. Every build of the software that
you give to testers should have a build ID number so that the poor tester doesn't
have to retest the bug on a version of the software where it wasn't even
supposed to be fixed.

6. If you're a programmer, and you're having trouble getting testers to use the
bug database, just don't accept bug reports by any other method. If your testers
are used to sending you email with bug reports, just bounce the emails back to
them with a brief message: "please put this in the bug database. I can't keep
track of emails."
7. If you're a tester, and you're having trouble getting programmers to use the
bug database, just don't tell them about bugs - put them in the database and let
the database email them.

8. If you're a programmer, and only some of your colleagues use the bug
database, just start assigning them bugs in the database. Eventually they'll get
the hint.

9. If you're a manager, and nobody seems to be using the bug database that you
installed at great expense, start assigning new features to people using bugs. A
bug database is also a great "unimplemented feature" database.

10. Avoid the temptation to add new fields to the bug database. Every month or
so, somebody will come up with a great idea for a new field to put in the
database. You get all kinds of clever ideas, for example, keeping track of the file
where the bug was found; keeping track of what % of the time the bug is
reproducible; keeping track of how many times the bug occurred; keeping track
of which exact versions of which DLLs were installed on the machine where the
bug happened. It's very important not to give in to these ideas. If you do, your
new bug entry screen will end up with a thousand fields that you need to supply,
and nobody will want to input bug reports any more. For the bug database to
work, everybody needs to use it, and if entering bugs "formally" is too much
work, people will go around the bug database.

If you are developing code, even on a team of one, without an organized
database listing all known bugs in the code, you are simply going to ship low
quality code. On good software teams, not only is the bug database used
universally, but people get into the habit of using the bug database to make their
own "to-do" lists, they set their default page in their web browser to the list of
bugs assigned to them, and they start wishing that they could assign bugs to the
office manager to stock more Mountain Dew.

courtesy: joelonsoftware.com
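As a rough sketch of tips 2, 3 and 5, a bug record might carry the resolution values
listed above, the build it was reported against, and a rule that only the original
reporter can close it. The field names below are illustrative, not FogBUGZ's actual
schema.

# Hypothetical bug record illustrating tips 2, 3 and 5; not FogBUGZ's schema.
from dataclasses import dataclass
from typing import Optional

RESOLUTIONS = {"fixed", "won't fix", "postponed", "not repro", "duplicate", "by design"}

@dataclass
class Bug:
    title: str
    repro_steps: str
    opened_by: str
    reported_in_build: str          # tip 5: every build has an ID
    resolution: Optional[str] = None
    fixed_in_build: Optional[str] = None

    def resolve(self, resolution: str, build: Optional[str] = None) -> None:
        # Tip 3: anyone can resolve, but only with one of the known resolutions.
        if resolution not in RESOLUTIONS:
            raise ValueError(f"unknown resolution: {resolution}")
        self.resolution, self.fixed_in_build = resolution, build

    def close(self, closer: str) -> None:
        # Tip 2: only the person who opened the bug can close it.
        if closer != self.opened_by:
            raise PermissionError("only the original reporter can close a bug")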
POSTED BY SOFTWARE TESTING AT 11:13 PM 0 COMMENTS
Defect Report
A good bug report should contain these three essential parts:

1. Steps to reproduce
2. Expected results
3. Actual results

It would also be good to log:

1. Version number of the application
2. Test data (data the application is using)
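A minimal sketch of such a report, with purely illustrative values:

# Illustrative defect report: the three essential parts plus the two
# optional-but-useful fields mentioned above. All values are made up.
defect_report = {
    "steps_to_reproduce": [
        "1. Log in as a standard user",
        "2. Open the Export dialog",
        "3. Click Save without choosing a file name",
    ],
    "expected_result": "A validation message asks for a file name",
    "actual_result": "The application crashes with an unhandled exception",
    "application_version": "2.3.1 (build 4507)",
    "test_data": {"user": "standard_user", "export_format": "CSV"},
}

for field, value in defect_report.items():
    print(f"{field}: {value}")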
POSTED BY SOFTWARE TESTING AT 11:08 PM 0 COMMENTS

Teaming Testers with Developers

It has been recognized for a long time that the close coupling of testers with
developers improves both the test cases and the code that is developed. An extreme
case of this practice is Microsoft, where every developer is shadowed by a tester.
Needless to say, one does not have to resort to such an extreme to gain the benefits
of this teaming. A description of this practice should, therefore, identify the kinds
of teaming that are beneficial and the environments in which they may be employed.
The value of a best practice such as teaming should therefore be more than just a
concept; it should include guidance on forming the right team while reporting the
pitfalls and successes experienced.

courtesy: IBM Research technical paper

POSTED BY SOFTWARE TESTING AT 11:02 PM 0 COMMENTS

Statistical Testing
The concept of statistical testing was invented by the late Harlan Mills (the IBM
Fellow who invented Cleanroom software engineering). The central idea is to use
software testing as a means to assess the reliability of software, as opposed to using
it as a debugging mechanism. This is quite contrary to the popular use of software
testing as a debugging method, so one needs to recognize that the goals and
motivations of statistical testing are fundamentally different. There are many
arguments as to why this might indeed be a very valid approach. The theory behind it
is buried in the concepts of Cleanroom software engineering and is worthy of a
separate discussion. Statistical testing needs to exercise the software along an
operational profile and then measure the interfailure times, which are then used to
estimate its reliability. A good development process should yield an increasing mean
time between failures every time a bug is fixed. This then becomes the release
criterion and the condition for stopping software testing.

courtesy: IBM Research - Technical Report
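As a rough sketch of the idea (with invented numbers), one would record interfailure
times while exercising the software along its operational profile and check that the
mean time between failures grows as defects are fixed:

# Invented interfailure times (hours) per release candidate, used to check
# that the mean time between failures (MTBF) is increasing build over build.
interfailure_hours = {
    "build_07": [2.0, 3.5, 4.1, 5.0],
    "build_08": [6.2, 8.0, 9.5],
    "build_09": [15.0, 22.0],
}

previous_mtbf = 0.0
for build, times in interfailure_hours.items():
    mtbf = sum(times) / len(times)
    trend = "improving" if mtbf > previous_mtbf else "NOT improving"
    print(f"{build}: MTBF = {mtbf:.1f} h ({trend})")
    previous_mtbf = mtbf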
POSTED BY SOFTWARE TESTING AT 11:00 PM 0 COMMENTS

Maintenance Testing
Excerpts from a paper by Monishita Bathija, CSTE

The objective of the Maintenance Testing stage is to ensure that the system is
modified to take care of the business change, and the system meets the business
requirements to a level acceptable to the users.

It is quite well known that fixing bugs during maintenance quite often leads to
new problems or bugs. This leads to frustration and disappointment for
customers and also leads to increased effort and cost for re-work for the
maintaining organization. This kind of “bug creep” usually occurs due to several
reasons. The most common ones are:

· Integration: The maintenance engineer fixes a bug in one component and makes the
necessary changes. While doing so, the impact of this change on other components and
areas is often ignored. The engineer then tests this component (which works fine) and
proceeds to offer a patch to the customer. When the customer installs it, the problem
reported is solved but new problems crop up.

· Deployment: The maintenance engineer has no clue about the deployment scenarios to
be supported, whether in terms of firewalls, proxy servers, client browsers, security
policies, networking issues, etc. that exist at the customer site. These are often
ignored during the maintenance phase, thus leading to a merely theoretical fix of the
problem.

· Ideal setup: While fixing bugs, the maintenance engineer quite often assumes an
“ideal setup”. All registry entries have been made. Database connections have
been setup. Authorizations and permissions exist. Relationships have been set.
But when a bug is fixed and deployed, the engineer has to realize that all these
and many other system variables need to be checked and set.

These and several such problems can be addressed by including Maintenance Testing in
the processes for releasing maintenance patches.

Broadly, the steps that make up Maintenance Testing are:

Ø Prepare for Testing
Ø Conduct Unit Tests
Ø Test System
Ø Prepare for Acceptance Tests
Ø Conduct Acceptance Tests
POSTED BY SOFTWARE TESTING AT 3:20 PM 0 COMMENTS
Risk:

Software risk can be described as the probability of a defect producing a negative
impact on the business. As per Felix Redmill [ICSTEST conference 2004, London], risk
is a function of two components:
- Probability of occurrence of an undesired event
- Consequences of that event

Risk-based testing is gaining significance because this approach minimizes risk with
the least effort.
By applying Pareto’s 80-20 rule (which states that for many phenomena, 80% of
the consequences stem from 20% of the causes), 80% of the risk could be
eliminated by testing 20% of the requirements.
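A minimal sketch of how the two components might be combined to rank requirements for
testing; the requirement names and scores are hypothetical.

# Hypothetical risk scoring: risk = probability of the undesired event x consequence.
requirements = [
    # (requirement, probability of an undesired event 0-1, consequence 1-10)
    ("payment processing", 0.4, 10),
    ("order confirmation", 0.3, 8),
    ("report export", 0.6, 3),
    ("profile photo upload", 0.2, 2),
]

ranked = sorted(requirements, key=lambda r: r[1] * r[2], reverse=True)
for name, probability, consequence in ranked:
    print(f"{name}: risk score = {probability * consequence:.1f}")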
POSTED BY SOFTWARE TESTING AT 3:07 PM 0 COMMENTS

RISK-BASED TESTING
by Paul Gerrard, Systeme Evolutif

The risk-based test method is an attempt to use early risk analysis to connect
the concerns and objectives of senior management to the test process and the
activities of project testers. The consequences of so doing are obvious and most
of these consequences are beneficial. If the test process is developed specifically
to address the risks of concern to stakeholders and management:

* Management has better visibility of the testing and so has more control over it.
* The budget for testing is likely to be more realistic.
* The testers have a clearer view of what they should be doing.
* The budget allocated to testing is determined by consensus, so everyone buys
into the quantity of testing that is planned.
* The information provision role of testers is promoted, but fault detection is as
important as ever.
* The information provided by testers increases their influence on management
decision-making.
* Release decision-making is better informed.

The risk-based test method is universal. Risks pervade all software projects and
risk-taking is inevitable. The method helps projects to identify the product risks of
most relevance and focus the attention of testers on the risks of most concern.
Testing aims to mitigate risk by finding faults, but it also aims to reduce the
uncertainty about risk by providing better risk information to management. In doing
so, management will be better able to steer their projects away from hazards and make
better decisions.
POSTED BY SOFTWARE TESTING AT 7:10 AM 0 COMMENTS

50 Specific Ways to Improve Your Testing
From the book Effective Software Testing by Elfriede Dustin
1. Involve testers from the beginning
2. Verify the requirements
3. Design test procedures as soon as requirements are available
4. Ensure that requirement changes are communicated
5. Beware of developing and testing based on an existing system
6. Understand the task at hand and the related testing goal
7. Consider the risks
8. Base testing efforts on a prioritized feature schedule
9. Keep software issues in mind
10. Acquire effective test data
11. Plan for the test environment
12. Estimate test preparation and execution
13. Define the roles and responsibilities
14. Require a mixture of testing skills, subject matter expertise, and experience
15. Evaluate the testers’ effectiveness
16. Understand the architecture and underlying components
17. Verify that the system supports testability
18. Use logging to increase system testability
19. Verify that the system supports debug vs. release execution modes
20. Divide and conquer
21. Mandate the use of a test procedure template, and other test design
standards
22. Derive effective test cases from requirements
23. Treat test procedures as “living” documents
24. Utilize system design and prototypes
25. Use proven testing techniques when designing test case scenarios
26. Avoid constraints and detailed data elements in test procedures
27. Apply exploratory testing
28. Structure the development approach to support effective unit testing
29. Develop unit tests in parallel or before the implementation
30. Make unit test execution part of the build process
31. Know the different types of testing support tools
32. Consider building a tool instead of buying one
33. Be aware of the impact of automated tools on the testing effort
34. Focus on the needs of your organization
35. Test the tools on an application prototype
36. Do not rely solely on capture/playback
37. Develop a test harness when necessary
38. Use proven test script development techniques
39. Automate regression tests whenever possible
40. Implement automated builds and smoke-tests
41. Do not make nonfunctional testing an afterthought
42. Conduct performance testing with production sized databases
43. Tailor usability tests to the intended audience
44. Consider all aspects of security, for specific requirements and system-wide
45. Investigate the system’s implementation to plan for concurrency tests
46. Setup an efficient environment for compatibility testing
47. Clearly define the beginning and the end of the test execution cycle
48. Isolate the test environment from the development environment
49. Implement a defect-tracking life cycle
50. Track the execution of the test program
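As a hedged sketch of items 30 and 40, a build script might discover and run the unit
tests and fail the build on any failure. The test directory and the use of Python's
standard unittest runner are assumptions for illustration, not the book's own tooling.

# Assumed build step: discover unit tests under ./tests and fail the build
# (non-zero exit code) if any of them fail.
import sys
import unittest

def run_unit_tests(test_dir: str = "tests") -> bool:
    suite = unittest.defaultTestLoader.discover(test_dir)
    result = unittest.TextTestRunner(verbosity=1).run(suite)
    return result.wasSuccessful()

if __name__ == "__main__":
    sys.exit(0 if run_unit_tests() else 1)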
POSTED BY SOFTWARE TESTING AT 7:08 AM 1 COMMENT
Iterative Lifecycle Models
An iterative lifecycle model does not attempt to start with a full specification of
requirements. Instead, development begins by specifying and implementing just part of
the software, which can then be reviewed in order to identify further requirements.
This process is then repeated, producing a new version of the software for each cycle
of the model. Each cycle typically consists of:

• A Requirements phase, in which the requirements for the software are gathered and
analyzed. Iteration should eventually result in a requirements phase which produces a
complete and final specification of requirements.
• A Design phase, in which a software solution to meet the requirements is designed.
This may be a new design, or an extension of an earlier design.
• An Implementation and Test phase, when the software is coded, integrated and tested.
• A Review phase, in which the software is evaluated, the current requirements are
reviewed, and changes and additions to requirements are proposed.

For each cycle of the model, a decision has to be made as to whether the software
produced by the cycle will be discarded, or kept as a starting point for the next
cycle (sometimes referred to as incremental prototyping). Eventually a point will be
reached where the requirements are complete and the software can be delivered, or it
becomes impossible to enhance the software as required, and a fresh start has to be
made.

The iterative lifecycle model can be likened to producing software by successive
approximation. Drawing an analogy with mathematical methods which use successive
approximation to arrive at a final solution, the benefit of such methods depends on
how rapidly they converge on a solution. Continuing the analogy, successive
approximation may never find a solution: the iterations may oscillate around a
feasible solution or even diverge, or the number of iterations required may become so
large as to be unrealistic. We have all seen software developments which have made
this mistake!

The key to successful use of an iterative software development lifecycle is rigorous
validation of requirements, and verification (including testing) of each version of
the software against those requirements within each cycle of the model. The first
three phases of the example iterative model are in fact an abbreviated form of a
sequential V or waterfall lifecycle model. Each cycle of the model produces software
which requires testing at the unit level, for software integration, for system
integration and for acceptance. As the software evolves through successive cycles,
tests have to be repeated and extended to verify each version of the software.
Courtesy: IPL Information Processing Ltd
POSTED BY SOFTWARE TESTING AT 7:08 AM 0 COMMENTS

Progressive Development Lifecycle Models


The sequential V and waterfall lifecycle models represent an idealised model of
software development. Other lifecycle models may be used for a number of reasons, such
as volatility of requirements, or a need for an interim system with reduced
functionality when long timescales are involved. As examples of other lifecycle
models, let us look at progressive development and iterative lifecycle models.

A common problem with software development is that software is needed quickly, but it
will take a long time to fully develop. The solution is to form a compromise between
timescales and functionality, providing "interim" deliveries of software with reduced
functionality, but serving as stepping stones towards the fully functional software.
It is also possible to use such a stepping-stone approach as a means of reducing risk.
The usual names given to this approach to software development are progressive
development or phased implementation. The corresponding lifecycle model is referred to
as a progressive development lifecycle. Within a progressive development lifecycle,
each individual phase of development will follow its own software development
lifecycle, typically using a V or waterfall model. The actual number of phases will
depend upon the development.

Each delivery of software will have to pass acceptance testing to verify the software
fulfils the relevant parts of the overall requirements. The testing and integration of
each phase will require time and effort, so there is a point at which an increase in
the number of development phases will actually become counterproductive, giving an
increased cost and timescale, which will have to be weighed carefully against the need
for an early solution.

The software produced by an early phase of the model may never actually be used; it
may just serve as a prototype. A prototype will take short cuts in order to provide a
quick means of validating key requirements and verifying critical areas of design.
These short cuts may be in areas such as reduced documentation and testing. When such
short cuts are taken, it is essential to plan to discard the prototype and implement
the next phase from scratch, because the reduced quality of the prototype will not
provide a good foundation for continued development.
Courtesy: IPL Information Processing Ltd
POSTED BY SOFTWARE TESTING AT 7:07 AM 0 COMMENTS

Sequential Lifecycle Models


The software development lifecycle begins with the identification of a requirement for
software and ends with the formal verification of the developed software against that
requirement. Traditionally, the models used for the software development lifecycle
have been sequential, with the development progressing through a number of well
defined phases. The sequential phases are usually represented by a V or waterfall
diagram. These models are respectively called a V lifecycle model and a waterfall
lifecycle model.

There are in fact many variations of V and waterfall lifecycle models, introducing
different phases to the lifecycle and creating different boundaries between phases.
The following set of lifecycle phases fits in with the practices of most professional
software developers:

• The Requirements phase, in which the requirements for the software are gathered and
analyzed, to produce a complete and unambiguous specification of what the software is
required to do.
• The Architectural Design phase, where a software architecture for implementation of
the requirements is designed and specified, identifying components within the software
and the relationships between the components.
• The Detailed Design phase, where the detailed implementation of each component is
specified.
• The Code and Unit Test phase, in which each component of the software is coded and
tested to verify that it faithfully implements the detailed design.
• The Software Integration phase, in which progressively larger groups of tested
software components are integrated and tested until the software works as a whole.
• The System Integration phase, in which the software is integrated to the overall
product and tested.
• The Acceptance Testing phase, where tests are applied and witnessed to validate that
the software faithfully implements the specified requirements.

Software specifications will be products of the first three phases of this lifecycle
model. The remaining four phases all involve testing the software at various levels,
requiring test specifications against which the testing will be conducted as an input
to each of these phases.
Courtesy: IPL Information Processing Ltd
POSTED BY SOFTWARE TESTING AT 7:07 AM 0 COMMENTS

IEEE 829
IEEE 829-1998, also known as the 829 Standard for Software Test Documentation, is an
IEEE standard that specifies the form of a set of documents for use in eight defined
stages of software testing, each stage potentially producing its own separate type of
document. The standard specifies the format of these documents but does not stipulate
whether they all must be produced, nor does it include any criteria regarding adequate
content for these documents. These are a matter of judgment outside the purview of the
standard. The documents are:

Test Plan: a management planning document that shows:
• How the testing will be done
• Who will do it
• What will be tested
• How long it will take
• What the test coverage will be, i.e. what quality level is required

Test Design Specification: detailing test conditions and the expected results, as well
as test pass criteria.

Test Case Specification: specifying the test data for use in running the test
conditions identified in the Test Design Specification.

Test Procedure Specification: detailing how to run each test, including any set-up
preconditions and the steps that need to be followed.

Test Item Transmittal Report: reporting on when tested software components have
progressed from one stage of testing to the next.

Test Log: recording which test cases were run, who ran them, in what order, and
whether each test passed or failed.

Test Incident Report: detailing, for any test that failed, the actual versus expected
result, and other information intended to throw light on why a test has failed.

Test Summary Report: a management report providing any important information uncovered
by the tests accomplished, and including assessments of the quality of the testing
effort, the quality of the software system under test, and statistics derived from
Incident Reports. The report also records what testing was done and how long it took,
in order to improve any future test planning. This final document is used to indicate
whether the software system under test is fit for purpose according to whether or not
it has met acceptance criteria defined by project stakeholders.
Source: Wikipedia
POSTED BY SOFTWARE TESTING AT 7:06 AM 0 COMMENTS
V&V
Testing comprises two components:
• Verification
• Validation

The IEEE definitions of these terms are given below:

Verification: The process of evaluating a system or component to determine whether the
products of the given development phase satisfy the conditions imposed at the start of
that phase. [IEEE]

Validation: Determination of the correctness of the products of software development
with respect to the user needs and requirements.

These definitions can be simplified as below:
Verification: Are we doing the job right?
Validation: Are we doing the right job?

In a real development life cycle, checking whether the software has been developed as
per the design document is a verification activity, while checking whether the
software meets the business requirements document is a validation activity.

Both these components of testing are important because it would be ridiculous to
supply a perfect chair when the customer's requirement is a table.
POSTED BY SOFTWARE TESTING AT 7:05 AM 0 COMMENTS

system testing
System testing is testing conducted on a complete, integrated system to evaluate the
system's compliance with its specified requirements. System testing falls within the
scope of black box testing, and as such, should require no knowledge of the inner
design of the code or logic. Alpha testing and Beta testing are sub-categories of
System testing.

As a rule, System testing takes, as its input, all of the "integrated" software
components that have successfully passed Integration testing, and also the software
system itself integrated with any applicable hardware system(s). The purpose of
Integration testing is to detect any inconsistencies between the software units that
are integrated together (called assemblages) or between any of the assemblages and the
hardware. System testing is a more limited type of testing; it seeks to detect defects
both within the "inter-assemblages" and also within the system as a whole.

Testing the whole system
System testing is actually done on the entire system against the Functional
Requirement Specification(s) (FRS) and/or the System Requirement Specification (SRS).
Moreover, System testing is an investigatory testing phase, where the focus is to have
almost a destructive attitude and test not only the design, but also the behavior and
even the believed expectations of the customer. It is also intended to test up to and
beyond the bounds defined in the software/hardware requirements specification(s).
source: wikipedia
POSTED BY SOFTWARE TESTING AT 7:04 AM 0 COMMENTS

DEFINING WEBSITE QUALITY & RELIABILITY


(Excerpts from an article by Dr. E. Miller)

Like any complex piece of software, there is no single, all-inclusive quality measure
that fully characterizes a WebSite (by which we mean any web-browser-enabled
application).
Dimensions of Quality. There are many dimensions of quality; each measure
will pertain to a particular WebSite in varying degrees. Here are some common
measures:
Timeliness: WebSites change often and rapidly. How much has a WebSite
changed since the last upgrade? How do you highlight the parts that have
changed?
Structural Quality: How well do all of the parts of the WebSite hold together?
Are all links inside and outside the WebSite working? Do all of the images work?
Are there parts of the WebSite that are not connected?
Content: Does the content of critical pages match what is supposed to be there?
Do key phrases exist continually in highly-changeable pages? Do critical pages
maintain quality content from version to version? What about dynamically
generated HTML (DHTML) pages?
Accuracy and Consistency: Are today's copies of the pages downloaded the same
as yesterday's? Close enough? Is the data presented to the user accurate
enough? How do you know?
Response Time and Latency: Does the WebSite server respond to a browser
request within certain performance parameters? In an e-commerce context,
how is the end-to-end response time after a SUBMIT? Are there parts of a site
that are so slow the user discontinues working?
Performance: Is the Browser --> Web --> WebSite --> Web --> Browser
connection quick enough? How does the performance vary by time of day, by
load and usage? Is performance adequate for e-commerce applications? Taking
10 minutes -- or maybe even only 1 minute -- to respond to an e-commerce
purchase may be unacceptable!
Impact of Quality. Quality is in the mind of the WebSite user. A poor
quality WebSite, one with many broken pages and faulty images, with Cgi-Bin
error messages, etc., may cost a lot in poor customer relations, lost corporate
image, and even in lost sales revenue. Very complex, disorganized WebSites can
sometimes overload the user.
The combination of WebSite complexity and low quality is potentially lethal to
Company goals. Unhappy users will quickly depart for a different site; and, they
probably won't leave with a good impression.
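One of the structural-quality questions above ("Are all links working?") can be
answered mechanically. Below is a rough sketch, not Dr. Miller's own tooling; the
start URL is a placeholder and only Python standard-library calls are used.

# Rough structural-quality check: fetch a page, collect its <a href> links,
# and report any link that cannot be fetched successfully.
from html.parser import HTMLParser
from urllib.error import HTTPError, URLError
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def check_links(page_url):
    html = urlopen(page_url, timeout=10).read().decode("utf-8", errors="replace")
    collector = LinkCollector()
    collector.feed(html)
    for link in collector.links:
        target = urljoin(page_url, link)
        try:
            urlopen(target, timeout=10)      # 4xx/5xx responses raise HTTPError
            print("OK ", target)
        except (HTTPError, URLError) as exc:
            print("BAD", target, exc)

# check_links("https://www.example.com/")   # placeholder start page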
POSTED BY SOFTWARE TESTING AT 5:54 AM 0 COMMENTS
Extent of Testing

Is testing = zero defects?

Not at all. One cannot ignore the possibility of new defects being introduced while
removing known defects. It is practically impossible to find all the defects in the
software, owing to its complexity and the human error involved in producing it.

Exhaustive testing is impractical because it consumes enormous effort and resources.
For example, one can write an infinite number of test cases to test a simple
calculator that performs only additions.
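A back-of-the-envelope illustration of why: even if that calculator only added two
32-bit integers, exhaustively trying every input pair would be hopeless.

# Rough arithmetic: two 32-bit integer inputs give 2**64 possible pairs.
pairs = 2 ** 64
tests_per_second = 1_000_000            # an optimistic assumption
seconds_per_year = 60 * 60 * 24 * 365

years = pairs / tests_per_second / seconds_per_year
print(f"{pairs:.2e} input pairs, about {years:,.0f} years at "
      f"{tests_per_second:,} tests per second")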

Hence it is imperative to decide the extent of testing while planning the tests.
Extent of testing can be decided based on the following factors:

· Risk
· Budget for testing
· Schedule
· Resource availability
· Contractual agreements
· Legal requirements
POSTED BY SOFTWARE TESTING AT 5:23 AM 0 COMMENTS

Testers and Requirements

A question seems to haunt testers quite often: 'What do I do with requirements?' This
paper intends to explore it.

Why should a tester review the requirements document? The success of software
development greatly depends on the requirements; it is reported that 76% of projects
fail because of poor requirements.

The objective of introducing testing activities right from the start of the project is
to uncover anomalies or defects in the requirements. Uncovering defects upstream has
many advantages, as everyone would agree: the cost of fixing a defect increases
roughly tenfold as it moves from one stage of the development life cycle to the next.

In Test Driven Development (TDD), testing starts even before coding. The question may
arise: 'How can testing start without a product or application to test?' It is
important to remove this block from the minds of testers, because testing does not
simply mean running tests on the application or product.

The objective of testing is shifting from finding faults to preventing faults, and
hence the test process starts even before coding does. In practice, the test process
starts when the requirements for user acceptance are set. Testers review the user
acceptance criteria at this stage in order to prepare user acceptance test scripts,
and they review the requirements document in order to understand the requirements
clearly. Test cases should not be written based on any assumptions.

Guidelines for reviewing the requirements:

- Clarity: the tester should understand exactly what is required by the client. There
should not be any ambiguity or assumptions.
- While preparing test objectives for positive testing of each requirement, gaps in
the requirements are identified.
- While preparing test objectives for negative testing of each requirement, gaps,
anomalies and inconsistencies in the requirements are brought out.
- Difficulties with testability can bring out practical difficulties in
implementation.
Incomplete requirements are potential grounds for defects because assumptions
will arise out of them. It is important to spot gaps in requirements.
POSTED BY SOFTWARE TESTING AT 5:22 AM 0 COMMENTS

Quality Assurance and testing


The following explanations should be helpful in understanding the difference
between Quality Assurance and Testing

Quality Assurance: a set of activities designed to ensure that the software
development process is adequate to produce a high-quality product. Quality assurance
activities also involve monitoring and evaluating the process, thus ensuring
continuous improvement.

It would be appropriate to say that Quality Assurance is a preventive mechanism.

Testing:
Software testing is a process comprising a set of activities performed with the
intent of evaluating the software against its requirements under specified
conditions.

Sometimes software testing is performed as an ad-hoc activity without any systems in
place. This kind of practice is not recommended because the results are not traceable
and sometimes not repeatable. For this reason, software testing is considered here as
a process, not as a single activity.
POSTED BY SOFTWARE TESTING AT 1:43 AM 0 COMMENTS

SUNDAY, OCTOBER 08, 2006
Standards relevant to software testing

· IEEE 1008, a standard for unit testing
· IEEE 1028, a standard for software inspections
· IEEE 1044, a standard for the classification of software anomalies
· IEEE 1044-1, a guide to the classification of software anomalies
· IEEE 730, a standard for software quality assurance plans
· IEEE 1061, a standard for software quality metrics and methodology
· BS 7925-1, a vocabulary of terms used in software testing
· BS 7925-2, a standard for software component testing
· IEEE 729, a glossary of software engineering terminology

POSTED BY SOFTWARE TESTING AT 11:23 PM 0 COMMENTS

FRIDAY, OCTOBER 06, 2006

Software testing - an Intro


In the early days of software development, testing was considered an insignificant
secondary activity performed almost at the end of the software development life cycle.
There were no test engineers or supporting techniques for testing. Over the years
software testing has evolved into a parallel process spanning the entire development
life cycle and has found a strong presence in many organizations.

Though software testing has come a long way, with dedicated resources, magazines,
conferences and maturity models, the discipline is still evolving. Numerous research
activities are being carried out and new models and techniques are being established.
Even today testing is considered a monotonous and repetitive activity and fails to
attract top talent in the industry. However, there is scope for this situation to
change, thanks to the growing recognition of quality assurance and testing in the
software industry.

In a Test-Driven Development (TDD) environment, testing starts in the requirements
stage, i.e. even before the software is built. Hence it is necessary for testers to
take an active part in arriving at a clear requirement specification. For this reason,
there is a new breed of testers who also play the role of business analysts and hold
key positions within the development environment. In the near future, this development
will remove the 'monotonous' tag from a testing career and attract better talent.

In recent times, software quality assurance has been identified as a potential
business area, and many dedicated quality assurance and testing companies have been
launched and have progressed successfully. This is a good time to build your career in
software testing.
POSTED BY SOFTWARE TESTING AT 12:40 AM 0 COMMENTS
