Sunday, December 31, 2006
Among the hardest things to explain is something that everyone already knows. We all know how to listen, how to read, how to think, and how to tell anecdotes about the events in our lives. As adults, we do these things every day. Yet the level of any of these skills possessed by the average person may not be adequate for certain special situations. Psychotherapists must be expert listeners and lawyers expert readers; research scientists must scour their thinking for errors, and journalists report stories that transcend parlor anecdote. So it is with exploratory testing (ET): simultaneous learning, test design, and test execution. This is a simple concept. But the fact that it can be described in a sentence can make it seem like something not worth describing. Its highly situational structure can make it seem, to the casual observer, that it has no structure at all. That's why textbooks on software testing, with few exceptions, either don't discuss exploratory testing, or discuss it only to dismiss it as an unworthy practice. Exploratory testing is also known as ad hoc testing.
Wednesday, October 11, 2006
Test Process
1. Identify the purpose of the product (business requirements).
2. Identify the functions.
3. Prioritize the functions (based on criticality / potential instability).
4. Test the functions.
5. Record the test results.
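Read loosely, steps 2 through 5 map onto a tiny test harness. Here is a minimal sketch in Python; the product functions, names, and priorities are hypothetical stand-ins, not part of the original process:

```python
# A minimal sketch of the five-step flow above. The functions and
# their priorities are hypothetical stand-ins for a real product.

def login():
    return True  # stand-in for a real check of the login function

def checkout():
    return True  # stand-in for a real check of the checkout function

# Steps 2-3: identify the functions and prioritize them (1 = most critical).
functions = [
    ("login", login, 1),
    ("checkout", checkout, 2),
]

# Steps 4-5: test in priority order and record the results.
results = []
for name, func, priority in sorted(functions, key=lambda f: f[2]):
    try:
        passed = bool(func())
    except Exception:
        passed = False
    results.append((name, "PASS" if passed else "FAIL"))

for name, outcome in results:
    print(f"{name}: {outcome}")
```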
Function tests
Function tests are simple tests of individual functions. Don't confuse this term with functional testing, which is a broad term that encompasses black-box testing. Function testing has two steps:
1. Identify all the individual features and functions. (To keep them straight, you'll probably organize them into a detailed outline, called a function list.)
2. Now that you've found them, test them.
Function testing is a valuable first step in testing the program. You try each function quickly, to see if anything is obviously broken. Similarly, many smoke tests are function tests. (Smoke testing consists of running a series of simple tests of key aspects of the program, to determine whether a given build is stable enough to test further.)
courtesy: A course by Cem Kaner & James Bach
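To make the smoke-test idea concrete, here is a minimal sketch in Python. The functions under test are hypothetical stand-ins; a real smoke suite would call into the actual application:

```python
# A minimal smoke-test sketch: one quick, obvious check per key
# function, to decide whether a build is worth deeper testing.
# The functions under test are hypothetical stand-ins.

def add(a, b):
    return a + b  # stand-in for a real application function

def divide(a, b):
    return a / b  # stand-in for a real application function

def test_add_smoke():
    assert add(2, 3) == 5  # is anything obviously broken?

def test_divide_smoke():
    assert divide(10, 2) == 5

if __name__ == "__main__":
    test_add_smoke()
    test_divide_smoke()
    print("Smoke tests passed: the build is stable enough to test further.")
```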
Tuesday, October 10, 2006
Black-box testing
Black-box testing is a test design method that treats the system as a "black box": it does not explicitly use knowledge of the internal structure. Black-box testing is also known as functional testing.
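For example, a black-box test exercises only the inputs and observable outputs. In the sketch below, the cases are derived from a specification and never peek at the implementation; the is_leap_year function is a hypothetical stand-in:

```python
# A black-box test sketch: the tester knows the specification
# (inputs and expected outputs) but not the internal structure.
# is_leap_year is a hypothetical function under test.

def is_leap_year(year):
    # The implementation is opaque to the tester; it is shown here
    # only so that the example runs.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Cases derived purely from the specification: (input, expected output).
cases = [(2000, True), (1900, False), (1996, True), (1999, False)]

for year, expected in cases:
    actual = is_leap_year(year)
    assert actual == expected, f"{year}: expected {expected}, got {actual}"
print("All black-box cases passed.")
```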
Monday, October 09, 2006
Quotes
A clever person solves a problem. A wise person avoids it.
-- Albert Einstein
*
The significant problems we face cannot be solved by the same level of thinking
that created them. -- Albert Einstein
2. Remember that the only person who can close a bug is the person who opened
it in the first place. Anyone can resolve it, but only the person who saw the bug
can really be sure that what they saw is fixed.
3. There are many ways to resolve a bug. FogBUGZ allows you to resolve a bug
as fixed, won't fix, postponed, not repro, duplicate, or by design.
4. Not Repro means that nobody could ever reproduce the bug. Programmers
often use this when the bug report is missing the repro steps.
5. You'll want to keep careful track of versions. Every build of the software that
you give to testers should have a build ID number so that the poor tester doesn't
have to retest the bug on a version of the software where it wasn't even
supposed to be fixed.
6. If you're a programmer, and you're having trouble getting testers to use the
bug database, just don't accept bug reports by any other method. If your testers
are used to sending you email with bug reports, just bounce the emails back to
them with a brief message: "please put this in the bug database. I can't keep
track of emails."
7. If you're a tester, and you're having trouble getting programmers to use the
bug database, just don't tell them about bugs - put them in the database and let
the database email them.
8. If you're a programmer, and only some of your colleagues use the bug
database, just start assigning them bugs in the database. Eventually they'll get
the hint.
9. If you're a manager, and nobody seems to be using the bug database that you installed at great expense, start assigning new features to people using bugs. A bug database is also a great "unimplemented feature" database.
10. Avoid the temptation to add new fields to the bug database. Every month or
so, somebody will come up with a great idea for a new field to put in the
database. You get all kinds of clever ideas, for example, keeping track of the file
where the bug was found; keeping track of what % of the time the bug is
reproducible; keeping track of how many times the bug occurred; keeping track
of which exact versions of which DLLs were installed on the machine where the
bug happened. It's very important not to give in to these ideas. If you do, your
new bug entry screen will end up with a thousand fields that you need to supply,
and nobody will want to input bug reports any more. For the bug database to
work, everybody needs to use it, and if entering bugs "formally" is too much
work, people will go around the bug database.
courtesy: joelonsoftware.com
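As a side note, the resolution states named in tip 3 map naturally onto a small enumeration. The sketch below is hypothetical and mirrors the list in the text, not FogBUGZ's actual schema:

```python
# A hypothetical sketch of the resolution states named in tip 3.
# It mirrors the list in the text, not FogBUGZ's real schema.
from enum import Enum

class Resolution(Enum):
    FIXED = "fixed"
    WONT_FIX = "won't fix"
    POSTPONED = "postponed"
    NOT_REPRO = "not repro"  # nobody could reproduce the bug (tip 4)
    DUPLICATE = "duplicate"
    BY_DESIGN = "by design"

# Per tip 2: anyone may resolve a bug, but only its opener may close it.
def can_close(opener, closer):
    return opener == closer

print(can_close("tester_a", "tester_a"))  # True
print(can_close("tester_a", "dev_b"))     # False
```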
Defect Report
A good bug report should contain these three essential parts:
1. Steps to reproduce
2. Expected results
3. Actual results
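As a sketch, those three parts translate directly into a minimal record; the field names and example values below are hypothetical:

```python
# A minimal, hypothetical record for the three essential parts of a
# bug report; field names and values are illustrative.
from dataclasses import dataclass

@dataclass
class DefectReport:
    steps_to_reproduce: list  # numbered steps a reader can follow exactly
    expected_result: str      # what should have happened
    actual_result: str        # what was actually observed

report = DefectReport(
    steps_to_reproduce=["Open the login page", "Submit an empty form"],
    expected_result="A validation message is shown",
    actual_result="The application raises an unhandled error",
)
print(report)
```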
It has long been recognized that the close coupling of testers with developers improves both the test cases and the code that is developed. An extreme case of this practice is Microsoft, where every developer is shadowed by a tester. Needless to say, one does not have to resort to such an extreme to gain the benefits of this teaming. A description of this practice should, therefore, identify the kinds of teaming that are beneficial and the environments in which they may be employed. The value of a best practice such as teaming should be more than just a concept; it should include guidance on forming the right team, along with the pitfalls and successes experienced.
Statistical Testing
The concept of statistical testing was invented by the late Harlan Mills (the IBM Fellow who invented Clean Room software engineering). The central idea is to use software testing as a means to assess the reliability of software, as opposed to using it as a debugging mechanism. This is quite contrary to the popular use of software testing as a debugging method, so one needs to recognize that the goals and motivations of statistical testing are fundamentally different. There are many arguments as to why this might indeed be a very valid approach. The theory is buried in the concepts of Clean Room software engineering and is worthy of a separate discussion. Statistical testing exercises the software along an operational profile and then measures inter-failure times, which are used to estimate its reliability. A good development process should yield an increasing mean time between failures every time a bug is fixed. This then becomes the release criterion and the condition to stop software testing.
courtesy: IBM Research - Technical Report
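To illustrate the release criterion described above, here is a minimal sketch that estimates mean time between failures (MTBF) from observed inter-failure times and checks that it improves from build to build; all numbers are invented:

```python
# A minimal sketch of statistical testing's release criterion:
# estimate MTBF from inter-failure times observed while exercising
# the software along an operational profile, and require the MTBF
# to increase from build to build. All numbers are invented.

def mtbf(interfailure_times):
    """Mean time between failures, in the units of the inputs (hours)."""
    return sum(interfailure_times) / len(interfailure_times)

# Hypothetical inter-failure times (hours) observed for each build.
builds = {
    "build_1": [2.0, 3.5, 1.5],
    "build_2": [5.0, 7.5, 6.0],
    "build_3": [12.0, 15.0, 18.5],
}

previous = 0.0
for name, times in builds.items():
    current = mtbf(times)
    trend = "improving" if current > previous else "regressing"
    print(f"{name}: MTBF = {current:.1f} h ({trend})")
    previous = current

# Stop-testing condition (sketch): release once MTBF exceeds a target.
TARGET_MTBF_HOURS = 10.0
print("Release?", mtbf(builds["build_3"]) >= TARGET_MTBF_HOURS)
```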
Maintenance Testing
Excerpts from a paper by Monishita Bathija, CSTE
The objective of the maintenance testing stage is to ensure that the system is modified to take care of the business change, and that it still meets the business requirements to a level acceptable to the users.
It is well known that fixing bugs during maintenance quite often leads to new problems or bugs. This causes frustration and disappointment for customers, and increased effort and cost of re-work for the maintaining organization. This kind of "bug creep" usually occurs for several reasons, the most common being:
· Ideal setup: while fixing bugs, the maintenance engineer quite often has an "ideal setup": all registry entries have been made, database connections have been set up, authorizations and permissions exist, and relationships have been set. But when a fix is deployed, the engineer has to realize that all these and many other system variables need to be checked and set in the target environment.
Risk-based testing is gaining significance because this approach minimizes risk with the least effort. By applying Pareto's 80-20 rule (which states that for many phenomena, 80% of the consequences stem from 20% of the causes), 80% of the risk could be eliminated by testing 20% of the requirements.
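As a rough illustration of the 80-20 claim, the sketch below ranks hypothetical requirements by risk score and reports how much of the total risk the top 20% of requirements carry; the scores are invented and deliberately skewed:

```python
# A rough sketch of the Pareto (80-20) idea applied to requirements.
# The risk scores are invented and deliberately skewed; real scores
# would come from a risk analysis of each requirement.

risk_by_requirement = {
    "REQ-01": 50, "REQ-02": 30, "REQ-03": 5, "REQ-04": 4, "REQ-05": 3,
    "REQ-06": 2,  "REQ-07": 2,  "REQ-08": 2, "REQ-09": 1, "REQ-10": 1,
}

total = sum(risk_by_requirement.values())
ranked = sorted(risk_by_requirement.items(), key=lambda kv: kv[1], reverse=True)

# Test only the top 20% of requirements by risk.
top_n = max(1, len(ranked) // 5)
covered = sum(score for _, score in ranked[:top_n])

print(f"Testing {top_n} of {len(ranked)} requirements covers "
      f"{100 * covered / total:.0f}% of the total risk.")
```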
RISK-BASED TESTING
by Paul Gerrard, Systeme Evolutif
The risk-based test method is an attempt to use early risk analysis to connect the concerns and objectives of senior management to the test process and the activities of project testers. The consequences of doing so are obvious, and most of them are beneficial. If the test process is developed specifically to address the risks of concern to stakeholders and management:
* Management has better visibility of the testing and so has more control over it.
* The budget for testing is likely to be more realistic.
* The testers have a clearer view of what they should be doing.
* The budget allocated to testing is determined by consensus, so everyone buys
into the quantity of testing that is planned.
* The information provision role of testers is promoted, but fault detection is as
important as ever.
* The information provided by testers increases their influence on management
decision-making.
* Release decision-making is better informed.
The risk-based test method is universal. Risks pervade all software projects and risk-taking is inevitable. The method helps projects to identify the product risks of relevance and focus the attention of testers on the risks of most concern. Testing aims to mitigate risk by finding faults, but it also aims to reduce uncertainty about risk by providing better risk information to management. In doing so, management will be better able to steer their projects away from hazards and make better decisions.
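One common way to operationalize this is a small risk register: each product risk gets a likelihood and an impact score, and test effort follows their product. The risks and scores below are hypothetical:

```python
# A hypothetical risk register: each product risk is scored for
# likelihood and impact (1-5), and test effort is prioritized by
# their product (risk exposure). Risks and scores are invented.

risks = [
    # (risk description,             likelihood, impact)
    ("Payment calculation is wrong", 3,          5),
    ("Report layout breaks",         4,          2),
    ("Login fails under load",       2,          4),
    ("Help text has typos",          5,          1),
]

register = sorted(
    ((description, likelihood * impact)
     for description, likelihood, impact in risks),
    key=lambda item: item[1],
    reverse=True,
)

print("Test priority (highest risk exposure first):")
for description, exposure in register:
    print(f"  {exposure:2d}  {description}")
```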
IEEE 829
IEEE 829-1998, also known as the 829 Standard for Software Test Documentation, is an IEEE standard that specifies the form of a set of documents for use in eight defined stages of software testing, each stage potentially producing its own separate type of document. The standard specifies the format of these documents but does not stipulate whether they all must be produced, nor does it include any criteria regarding adequate content for these documents. These are a matter of judgment outside the purview of the standard. The documents are:
Test Plan: a management planning document that shows:
• How the testing will be done
• Who will do it
• What will be tested
• How long it will take
• What the test coverage will be, i.e. what quality level is required
Test Design Specification: detailing test conditions and the expected results, as well as test pass criteria.
Test Case Specification: specifying the test data for use in running the test conditions identified in the Test Design Specification.
Test Procedure Specification: detailing how to run each test, including any set-up preconditions and the steps that need to be followed.
Test Log: recording which test cases were run, who ran them, in what order, and whether each test passed or failed.
Test Incident Report: detailing, for any test that failed, the actual versus expected result, and other information intended to throw light on why a test has failed.
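As a sketch, the Test Log's content maps onto one record per test execution. The fields below follow the description above; the exact shape and values are hypothetical:

```python
# A hypothetical sketch of Test Log entries: which test cases were
# run, who ran them, in what order, and whether each passed or failed.
import csv
import sys

log_entries = [
    # (order, test case, run by,   result)
    (1, "TC-001", "tester_a", "PASS"),
    (2, "TC-002", "tester_a", "FAIL"),
    (3, "TC-003", "tester_b", "PASS"),
]

writer = csv.writer(sys.stdout)
writer.writerow(["order", "test_case", "run_by", "result"])
writer.writerows(log_entries)
```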
System testing
System testing is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black-box testing and, as such, should require no knowledge of the inner design of the code or logic.
Alpha testing and beta testing are sub-categories of system testing.
As a rule, system testing takes as its input all of the "integrated" software components that have successfully passed integration testing, as well as the software system itself integrated with any applicable hardware system(s). The purpose of integration testing is to detect any inconsistencies between the software units that are integrated together (called assemblages) or between any of the assemblages and the hardware. System testing is a more limited type of testing; it seeks to detect defects both within the "inter-assemblages" and within the system as a whole.
Testing the whole system
System testing is actually done on the entire system against the Functional Requirement Specification(s) (FRS) and/or the System Requirement Specification (SRS). Moreover, system testing is an investigatory testing phase, where the focus is to have almost a destructive attitude and test not only the design, but also the behavior and even the believed expectations of the customer. It is also intended to test up to and beyond the bounds defined in the software/hardware requirements specification(s).
source: wikipedia
As with any complex piece of software, there is no single, all-inclusive quality measure that fully characterizes a WebSite (by which we mean any web-browser-enabled application).
Dimensions of Quality. There are many dimensions of quality; each measure
will pertain to a particular WebSite in varying degrees. Here are some common
measures:
Timeliness: WebSites change often and rapidly. How much has a WebSite
changed since the last upgrade? How do you highlight the parts that have
changed?
Structural Quality: How well do all of the parts of the WebSite hold together?
Are all links inside and outside the WebSite working? Do all of the images work?
Are there parts of the WebSite that are not connected?
Content: Does the content of critical pages match what is supposed to be there?
Do key phrases exist continually in highly-changeable pages? Do critical pages
maintain quality content from version to version? What about dynamically
generated HTML (DHTML) pages?
Accuracy and Consistency: Are today's copies of the pages downloaded the same
as yesterday's? Close enough? Is the data presented to the user accurate
enough? How do you know?
Response Time and Latency: Does the WebSite server respond to a browser
request within certain performance parameters? In an e-commerce context,
how is the end-to-end response time after a SUBMIT? Are there parts of a site
that are so slow the user discontinues working?
Performance: Is the Browser --> Web --> WebSite --> Web --> Browser connection quick enough? How does the performance vary by time of day, by load and usage? Is performance adequate for e-commerce applications? Taking 10 minutes -- or maybe even only 1 minute -- to respond to an e-commerce purchase may be unacceptable! (A minimal link-and-latency checker is sketched below.)
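Several of the structural-quality and response-time questions above lend themselves to simple automation. Below is a minimal checker using only the Python standard library; the URLs and the one-second threshold are hypothetical:

```python
# A minimal sketch of a structural-quality / response-time check:
# fetch each page, flag broken links, and flag slow responses.
# The URLs and the latency threshold are hypothetical.
import time
import urllib.error
import urllib.request

PAGES = [
    "https://example.com/",
    "https://example.com/missing-page",
]
SLOW_SECONDS = 1.0

for url in PAGES:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            status = response.status
    except urllib.error.HTTPError as err:
        status = err.code  # e.g. 404 for a broken internal link
    except urllib.error.URLError:
        status = None      # network failure or bad host
    elapsed = time.monotonic() - start

    broken = status is None or status >= 400
    slow = elapsed > SLOW_SECONDS
    print(f"{url}: status={status}, time={elapsed:.2f}s"
          f"{' BROKEN' if broken else ''}{' SLOW' if slow else ''}")
```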
Impact of Quality. Quality is in the mind of the WebSite user. A poor-quality WebSite, one with many broken pages and faulty images, with Cgi-Bin error messages, etc., may cost a lot in poor customer relations, lost corporate image, and even lost sales revenue. Very complex, disorganized WebSites can sometimes overload the user.
The combination of WebSite complexity and low quality is potentially lethal to company goals. Unhappy users will quickly depart for a different site, and they probably won't leave with a good impression.
Extent of Testing
Is testing = zero defects?
Not at all. One cannot ignore the possibility of new defects being introduced while removing known defects. It is highly improbable, if not impossible, to find all the defects in a piece of software, owing to its complexity and the human error involved in producing it.
Hence it is imperative to decide the extent of testing while planning the tests.
Extent of testing can be decided based on the following factors:
· Risk
· Budget for testing
· Schedule
· Resource availability
· Contractual agreements
· Legal requirements
The objective of introducing testing activities right from the start of the project is to uncover anomalies or defects in the requirements. Uncovering defects upstream has many advantages, as everyone would agree. The cost of fixing a defect increases roughly tenfold each time it moves from one stage of development to the next: a defect that costs one unit to fix during requirements costs about ten during design, a hundred during coding, and so on.
In Test-Driven Development (TDD), testing starts even before coding. The question naturally arises: how can testing start without a product or application to test? It is important for testers to get rid of this mental block. Testing does not simply mean running tests on the application or product.
The objective of testing is changing from finding faults to preventing faults, and hence the test process starts even before coding starts. In practice, the test process starts when the requirements for user acceptance are set. Testers review the user acceptance criteria at this stage in order to prepare User Acceptance Test scripts, and they study the requirements document in order to understand the requirements clearly. Test cases should not be written based on any assumptions. The review focuses on:
- Clarity: the tester should understand exactly what the client requires; there should not be any ambiguity or assumptions.
- While preparing test objectives for positive testing of each requirement, gaps in the requirements are identified.
- While preparing test objectives for negative testing of each requirement, gaps, anomalies, and inconsistencies in the requirements are brought out.
- Difficulties with testability can bring out practical difficulties in implementation.
Incomplete requirements are potential grounds for defects because assumptions
will arise out of them. It is important to spot gaps in requirements.
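To illustrate positive and negative test objectives, the sketch below derives both kinds of cases from a single hypothetical requirement ("the age field accepts integers from 18 to 65"); the requirement and the function under test are invented:

```python
# A sketch of positive and negative test objectives derived from one
# hypothetical requirement: "the age field accepts integers 18-65".

def accept_age(value):
    # Hypothetical implementation of the requirement under review.
    return isinstance(value, int) and 18 <= value <= 65

# Positive objectives: values the requirement says must be accepted.
positive_cases = [18, 40, 65]

# Negative objectives: values the requirement implies must be rejected.
# Writing these often exposes gaps: what about 17.5, "", or None?
negative_cases = [17, 66, -1, "forty", None]

for value in positive_cases:
    assert accept_age(value), f"positive case rejected: {value!r}"
for value in negative_cases:
    assert not accept_age(value), f"negative case accepted: {value!r}"
print("All positive and negative objectives hold.")
```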
Testing:
Software testing is a process comprising a set of activities performed with the
intent of evaluating the software against its requirements under specified
conditions.
Sunday, October 08, 2006
Standards relevant to software testing
Friday, October 06, 2006
Though software testing has come a long way, with dedicated resources, magazines, conferences, and maturity models, the discipline is still evolving. Numerous research activities are being carried out, and new models and techniques are being established.
Even today, testing is considered a monotonous and repetitive activity and fails to attract top talent in the industry. However, there is scope for this situation to change, thanks to the growing recognition of quality assurance and testing in the software industry.