
TESTING CONCEPTS

Contents
- Introduction & Key Definitions
- SDLC (Software Development Life Cycle) Models
- STLC (Software Test Life Cycle)
- Testing Techniques
- Verification and Validation
- Types of Testing

Contents
- Integration Strategies
- Quality Assurance and Quality Control
- Test Plan Creation
- Test Case Preparation
- Defect Life Cycle
- Testing Terminology
- Test Tools

Introduction
- Testing is a process aimed at evaluating an attribute or capability of a program or a system and determining that it meets its stated requirements --- Positive testing
- The process of executing a program or system with the intent of finding defects --- Negative testing
- The process by which we explore and understand the status of the benefits and the risks associated with the release of a software system --- Risk-based testing

Manual Testing
Procedure:
- Test Plan document
- Test case document
- Execute test cases
- Report defects
Execution of the test cases is done manually, and defects are reported through (a) a defect report or (b) a bug-tracking tool.

Automation Testing
Procedure:
- Test Plan document
- Test case document
- Execute test cases
- Report defects
Here the test cases are automated with the help of automation tools and are then executed.

Static Testing and Dynamic Testing
- Static testing: testing conducted without executing the software, e.g. inspection, review, template, checklist, etc.
- Dynamic testing: testing conducted by executing the software, e.g. smoke test, functional test, regression test, etc.

Why do we test?
- To provide confidence in the system
- To identify areas of weakness
- To establish the degree of quality
- To establish the extent to which the requirements have been met, i.e. what the users asked for is what they got, not what someone else thought they wanted
- To provide an understanding of the overall system
- To prove it is both usable and operable

Attributes of a good test engineer
Good testers are:
1. Explorers: not afraid to venture into unknown situations
2. Troubleshooters: good at figuring out why something does not work
3. Relentless: keep trying and look for ways to recreate the elusive bug
4. Creative: come up with off-the-wall approaches to find bugs
5. Persuasive

Error, Fault, Failure
- Error
An error is a mistake, misconception or misunderstanding on the part of the developer; e.g. a developer may type a variable name incorrectly, introducing an error.
- Fault
A fault is a defect introduced into the system as a result of an error; it causes the system to behave incorrectly and not according to specification. A fault is sometimes called a bug.

Error, Fault, Failure
- Failure
A failure is the inability of a software system or component to perform its required functions within specified performance requirements.

Key Definitions
- Test case: A test case is a document which gives the test input data, execution conditions, expected output, actual output, remarks, etc.
- Test bed: A test bed is an environment that contains all the hardware and software needed to test a software component or a software system.

Key Definitions
- Test oracle: A mechanism which helps the testers to know whether a test case has passed or failed.
- Test harness: The auxiliary code developed to support testing of units and components. The harness consists of drivers (a main program which accepts input, passes the data to the component to be tested and prints the relevant results) and stubs, which stand in for modules that are subordinate to (called by) the component under test.
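As a simple illustration of a test oracle, the JUnit 4 assertion below compares the actual output of a unit against the expected output of the test case. This is only a sketch: JUnit and the add() method are assumed here for illustration and are not part of the material above.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class AdditionOracleTest {

    // Hypothetical unit under test.
    static int add(int a, int b) {
        return a + b;
    }

    @Test
    public void actualOutputMatchesExpectedOutput() {
        int expected = 5;               // expected output taken from the test case document
        int actual = add(2, 3);         // actual output produced by the unit
        assertEquals(expected, actual); // the assertion plays the role of the test oracle
    }
}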

SDLC
SDLC Models:
- Waterfall model
- Prototype model
- Spiral model
- V model

Waterfall Model
Requirement Analysis -> Design Phase -> Coding Phase -> Testing Phase -> Maintenance

Waterfall Model
- Requirement analysis
An examination of a requirements document shows that there are two major types of requirements:
1. Functional requirements: Users describe what functions the software should perform. We test for compliance with these requirements at the system level with function-based system tests.
2. Quality requirements: These are non-functional in nature but describe the quality levels expected of the software. One example of a quality requirement is performance level: the users may have objectives for the software system in terms of memory use, response time, throughput and delays.

Waterfall Model
- Design phase
In the design phase the software architecture is designed, and the algorithms and control flow charts are documented and reviewed.
- Coding
In the coding phase the design is translated into machine-readable form and the software is developed in stages.

Waterfall Model
- Testing
The software product is developed in stages and tested against the user requirements.
- Maintenance
After the software is delivered to the customer, changes will occur due to bugs or due to functional or performance enhancements; hence the maintenance phase.

Waterfall Model
- Merits
It provides a template into which methods for analysis, design, coding, testing and maintenance can be placed.
- Demerits
Real projects rarely follow the sequential flow. It is difficult for the customer to state all the requirements up front. The customer must have patience, since a working version is available only late in the project time span. A major blunder, if undetected, can prove disastrous.

Prototype model
Requirements Analysis -> Design -> Building Prototype -> Customer Evaluation -> Refine Prototype -> Engineer Product

Prototype model
It is an iterative process that helps the developer to create a model of the software. The model can take any of the three forms below:
1. A paper prototype.
2. A working prototype that implements some of the functions required.
3. Existing software that performs part or all of the desired functions, with other features to be improved in the new development.
When the full requirements are not known, we can opt for prototyping.

Prototype model
- Demerits
The quality of the software may not be maintained, because the developer often makes implementation compromises to get the prototype working quickly.

Spiral model
Planning -> Risk Analysis -> Engineer Product -> Customer Evaluation (repeated on each loop of the spiral)

Spiral model
1. Planning
The objectives, alternatives and constraints of the software are determined.
2. Risk analysis
Risks are identified and resolved, and the alternatives are analyzed.
3. Engineering
The next level of the product is developed.

Spiral model
- Customer evaluation
The project is reviewed with the customer and the project team, and a decision is made whether to continue with a further loop of the spiral. If it is decided to continue, plans are drawn up for the next phase of the project.

Merits
A realistic approach for real-time projects. It maintains the systematic stepwise approach suggested by the classic life cycle and also incorporates an iterative framework.

V-model
Left arm (verification): User Requirements -> Software Requirements -> HLD -> LLD -> Coding
Right arm (validation): Unit Testing -> Integration Testing -> System Testing -> Acceptance Testing
Each development phase on the left maps to a corresponding test level on the right.

V-model
- HLD (High-level design): the overall system architecture is defined
- LLD (Low-level design): algorithms and control flow charts at the component level
- The left-hand arm of the V-model represents the SDLC, which incorporates verification techniques
- The right-hand arm of the V-model represents the STLC, which incorporates validation techniques

STLC
The Software Test Life Cycle (STLC) covers the entire testing process and helps the testing team to focus on the areas that require maximum attention.
- Analyze requirements
In this phase we thoroughly analyze the software requirements from a testing angle and raise questions such as whether the testing can be conducted adequately or not. Here we look at both the functional and the non-functional requirements of the system.

STLC
- Test strategy and planning
During this phase the test strategy is finalized and the test planning is done. The effort estimation is also done, based on the test strategy and the test plan.
- Design test cases
This phase is entered when the coding is done.

STLC
- Execute test cases
During this phase the testing team concentrates on executing the test cases. The test results are documented.
- Report test status
Analyze the results and take appropriate actions.

Phases/Levels of Testing
Unit Testing -> Integration Testing -> System Testing -> Certification Testing -> Acceptance Testing

Levels of testing
- Unit testing: A level of testing performed on the smallest functional part of the product. It is usually done by the development team.
- Integration testing: The phase where we identify interface issues between independently developed and unit-tested modules. This is usually done by developers and testers.
- System testing: The phase where the system is tested as a whole. The goal is to ensure that the system performs according to the requirements.

Levels of testing
- Certification testing: Focuses on the standards-compliance aspects of the product, if any. It is usually done in a lab to check whether the developed software meets the standards specified by various organizations.
- Acceptance testing: Done prior to the installation of the software at the customer's site, based on the acceptance test plan agreed with the customer. Once acceptance testing is completed, the software can be installed at the customer's premises.

Defect types to be uncovered at each level
- System: system functionality, constraints
- Integration: interface issues, resource contention issues, performance
- Unit: data validation, error handling, basic functionality, resource issues, basic performance

Testing Techniques
- White box testing
  - Statement testing
  - Branch testing
  - Path testing
  - Loop testing
  - Data flow testing
- Black box testing
  - Equivalence partitioning
  - Boundary value analysis
  - Decision table
  - Error guessing
  - Input / Output domain

Testing Techniques
- White box testing
Also called structural or glass box testing.
Objectives of white box testing:
- All independent paths in a program are executed at least once
- All logical decisions are exercised on both their true and false paths
- All loops are executed and stay within their operational bounds

Testing Techniques
- Cyclomatic complexity
A number that gives the count of independent paths in a program.
Cyclomatic complexity V(G) = E - N + 2, where E is the number of flow graph edges and N is the number of flow graph nodes.
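As an illustration of the formula, consider the small hypothetical method below together with its flow graph counts; it is only a sketch to show how E, N and the resulting number of independent paths are obtained.

// Hypothetical example: cyclomatic complexity of a small method.
public class ComplexityExample {

    static int maxOfThree(int a, int b, int c) {
        int max = a;          // node 1 (entry)
        if (b > max) {        // node 2 (decision)
            max = b;          // node 3
        }
        if (c > max) {        // node 4 (decision)
            max = c;          // node 5
        }
        return max;           // node 6 (exit)
    }

    public static void main(String[] args) {
        // Flow graph: E = 7 edges, N = 6 nodes, so V(G) = E - N + 2 = 7 - 6 + 2 = 3.
        // Three independent paths means at least three test cases, for example:
        System.out.println(maxOfThree(5, 1, 2)); // neither decision taken
        System.out.println(maxOfThree(1, 5, 2)); // only the first decision taken
        System.out.println(maxOfThree(1, 2, 5)); // both decisions taken
    }
}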

Testing Techniques
- Black box testing
Also called functional testing.
(A) Equivalence partitioning
Divides the input domain into classes of data from which test cases are derived. The input domain is partitioned into invalid data, valid data and invalid data classes, and one representative value is tested from each class.
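A minimal sketch of how the partitions translate into test inputs, assuming a hypothetical field that accepts an age between 18 and 60; one representative value is picked from each equivalence class.

// Hypothetical example: equivalence partitioning for an age field valid from 18 to 60.
public class AgePartitionExample {

    // Assumed unit under test: accepts only ages in the valid class.
    static boolean isValidAge(int age) {
        return age >= 18 && age <= 60;
    }

    public static void main(String[] args) {
        // One representative test value per equivalence class:
        System.out.println(isValidAge(10)); // invalid class: below 18 -> false
        System.out.println(isValidAge(35)); // valid class: 18..60     -> true
        System.out.println(isValidAge(75)); // invalid class: above 60 -> false
    }
}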

Testing Techniques
- Black box testing
(B) Boundary value analysis
Tests the given input around its boundary values. For a login screen where the username must be 5-15 alphanumeric characters, the boundary values are 5-1, 5, 5+1, 15-1, 15 and 15+1, giving four valid and two invalid test cases.
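Following the username example above, here is a sketch of the six boundary-value checks in JUnit 4; the isValidUsername() validator is defined here only for illustration and is not taken from the material.

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class UsernameBoundaryTest {

    // Hypothetical validator: username must be 5 to 15 alphanumeric characters.
    static boolean isValidUsername(String name) {
        return name != null && name.matches("[A-Za-z0-9]{5,15}");
    }

    @Test
    public void boundaryValues() {
        assertFalse(isValidUsername("abcd"));             // length 4  = 5 - 1  (invalid)
        assertTrue(isValidUsername("abcde"));             // length 5            (valid boundary)
        assertTrue(isValidUsername("abcdef"));            // length 6  = 5 + 1  (valid)
        assertTrue(isValidUsername("abcdefghijklmn"));    // length 14 = 15 - 1 (valid)
        assertTrue(isValidUsername("abcdefghijklmno"));   // length 15           (valid boundary)
        assertFalse(isValidUsername("abcdefghijklmnop")); // length 16 = 15 + 1 (invalid)
    }
}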

Testing Techniques
- Black box testing
(C) Cause-and-effect graph / Decision table
Cause-and-effect graphing is a technique that can be used to combine conditions and derive an effective set of test cases that may disclose inconsistencies in a specification.

Testing Techniques
- Black box testing
(D) Error guessing
Designing test cases with this approach is based on the tester's or developer's past experience with the code under test, and their intuition as to where defects may lurk in the code. Error guessing is an ad hoc approach to test design in most cases.
(E) Input/Output domain
Looking from the input side, generate inputs that map to outputs; similarly, look from the output side to ensure that all possible inputs have been generated.

Verification and Validation
- Verification
  - Inspection
  - Review
  - Checklist
  - Template
  - Walkthrough
  - Desk check
- Validation
  - Levels of testing


Verification
- Verification is a static testing procedure.
- It involves verifying the requirements document, detailed design documents and test plans, and conducting walkthroughs and inspections of the various documents produced during the development and testing process.
- It ensures that the output of each software development phase meets the requirements or goals set for that phase.

Verification
Inspection
- A formal way of identifying errors in a work item. The work item may be a requirements artifact, a design artifact, a test plan artifact, etc.
- This is a very effective verification technique.
- An inspection is a team review of a work item by peers of the author of the work product.

Verification
Inspection
Several steps are involved in the inspection process:
- Inspection policies and plans: The inspection leader plans for the inspection, sets the date, schedules the meeting, runs the inspection meeting, appoints a recorder to record the results and monitors the follow-up period after the review.
- Entry criteria: The inspection process begins when the inspection pre-conditions are met as specified in the inspection policies, procedures and plans. Attention is paid to quality, adherence to standards, testability, traceability and satisfaction of the user's requirements.

Verification
- Checklist: The checklist varies with the software artifact being inspected. It contains items that the inspection participants should focus their attention on, check and evaluate.
- Invitation: The inspection leader invites each member participating in the meeting and distributes the documents that are essential for the conduct of the meeting. The inspection team consists of an inspection leader, the author, inspectors and a recorder.

Verification
- The recorder/scribe records and documents problems, defects, findings and recommendations.
- The author is the owner of the document; he or she presents the review item and performs any needed rework on the reviewed item.
- The inspectors attend review-training sessions, prepare for reviews, and participate in meetings.

Verification
Roles of people in the inspection process
- Inspection leader: checks the entry criteria, plans and coordinates the kick-off and logging meetings, handles follow-up, and checks the exit criteria
- Author: initiates the inspection, participates in planning, edits the work item, raises change requests to other documents if any, and triggers process changes if any

Verification
- Inspectors: check the work item, collect issues and attend the logging meeting
- Scribe: notes down the issues raised during the logging meeting
Verification
Review
- A formal or informal way of identifying errors in a work item.
- A review involves a meeting of a group of people whose intention is to evaluate a software-related item.
- Reviews are a type of static testing technique that can be used to evaluate the quality of software artifacts such as a requirements document, a test plan, a design document or a code component.

Verification
Walkthrough
- In detailed design and code walkthroughs, test inputs may be selected and the review participants walk through the design or code with that set of inputs in a line-by-line manner. The reader can compare this process to manual execution of the code. If the presenter gives a skilled presentation of the material, the walkthrough participants are able to build a comprehensive mental model of the detailed design or code and are able to evaluate its quality and detect defects.

Validation
- Validation is a dynamic testing procedure.
- Validation involves actual testing of the product as per the test plan (unit testing, integration testing, system testing, etc.).
- Validation is building the right product.
- Validation occurs only once there is code and an executable application.
- Validation is done only on executable forms of a work product.

Types of Testing
- Unit testing
A unit is a piece of code that can execute independently. A unit can be a program, a group of programs, a module, etc.
The objective of unit testing is to detect functional and structural defects in the unit. Unit testing is done to ensure that each individual software unit functions according to its specification. If a defect is revealed by the tests, it is easier to locate and repair, since only one unit is under consideration.

Types of Testing
To follow best practices it is important to plan for, and allocate resources to, testing each unit. If defects escape detection in unit testing because of poor unit testing practices, they are likely to show up during integration, system or acceptance testing, where they are much more costly to locate and repair.

Types of Testing
Performing unit tests
Driver (supplies test data, collects results) -> Module under test -> Stubs

Types of Testing
Performing unit tests
- Since the component making up a unit is not a stand-alone program, driver and/or stub software must be developed for each unit test.
- In most applications the driver is nothing but a main program that accepts test case data, passes that data to the component to be tested and prints the relevant results.
- Stubs serve to replace modules that are subordinate to (called by) the component under test.

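A minimal sketch of a driver and a stub, using assumed names (PriceCalculator, TaxService) rather than anything from the material above: the subordinate module is replaced by a stub that returns a fixed value, and the driver feeds test data to the unit and prints the result.

// Stub: stands in for the real, subordinate tax lookup module called by the unit under test.
interface TaxService {
    double taxRateFor(String region);
}

class TaxServiceStub implements TaxService {
    // Returns a fixed, predictable value instead of real behaviour.
    public double taxRateFor(String region) { return 0.10; }
}

// The module (unit) under test.
class PriceCalculator {
    private final TaxService taxService;
    PriceCalculator(TaxService taxService) { this.taxService = taxService; }

    double finalPrice(double basePrice, String region) {
        return basePrice * (1 + taxService.taxRateFor(region));
    }
}

// Driver: a simple main program that feeds test data to the unit and prints the result.
public class PriceCalculatorDriver {
    public static void main(String[] args) {
        PriceCalculator unit = new PriceCalculator(new TaxServiceStub());
        double actual = unit.finalPrice(100.0, "EU");
        System.out.println("expected=110.0 actual=" + actual
                + " -> " + (Math.abs(actual - 110.0) < 1e-9 ? "PASS" : "FAIL"));
    }
}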
Types of Testing
Faults detected by unit testing
- Data validation
The module interface is tested to ensure that information properly flows into and out of the program unit under test.
- Error handling
  - Exception handling
  - Memory-related errors
  - Display format errors

Types of Testing
Faults detected by unit testing
- Basic functionality
- Resource issues: RAM, processor, disk space, etc.
- Basic performance: response time, throughput, CPU and memory utilization

Types of Testing
Tools for unit testing
The following tools are available for unit testing:
1. Code coverage analyzers: JCover, PureCoverage, etc.
2. Static analyzers: carry out analysis of the program without executing it. Static analysis is done on attributes such as cyclomatic complexity, nesting levels, etc. e.g. JStyle, Quality Analyzer, JTest.

Types of Testing
Tools for unit testing
3. Memory leak detectors: e.g. BoundsChecker, Purify and Insure++

Types of Testing
- Smoke testing
Basic testing conducted before the product goes for further in-depth testing. If the build passes the smoke test, it is considered a stable, good build.

Types of Testing
- Regression testing
Conducted to find out whether code changes have any side effects on the existing functionality of the system. Test cases related to critical functionality and those that influence external interfaces are selected for regression testing.

Types of Testing
- Functionality testing
Exhaustive testing which exercises the product in all directions; it is conducted at the module level and repeated until the module is stable. Functional tests are black box in nature, and the focus is on the inputs and the correct outputs for each function.

Types of Testing
- Integration testing
Conducted to verify whether the system works as expected when the integrated modules or external products are combined.
- System testing
Also called end-to-end testing. Conducted to validate the overall functionality of the product. System testing is conducted before the product goes for acceptance testing.

Types of Testing
- User acceptance testing
Testing carried out by the user in the test environment before the release of the software, to check that all user requirements are met. These test cases play a vital role in the success or failure of the software product.

Alpha testing
Conducted by the user in the test environment, usually after about 80% of the project is complete. It gives confidence to both the customer and the project team.

Types of testing
- Beta testing
Conducted at the client's place before the final acceptance. Here we leave the software with the end users. It is usually conducted after about 98% of the project is complete.
- Ad hoc testing
Conducted without following any test cases. This type of testing is done to find additional defects that would otherwise not be found through the regular test case pattern.

Types of testing
- GUI testing
Conducted to find out whether the application follows the specified GUI standards.
- Usability testing
Conducted to find out how user-friendly the product is. Testers may design the test cases, but the testing is conducted by end users (called subjects). It involves the executable code, training manual, installation notes and online help.

Types of testing
- Security testing
Evaluates system characteristics that relate to the availability, integrity and confidentiality of system data and services.
- Installation testing
Checks whether the product is installed properly: that folders are created and named properly, that file permissions are correct, and that default, custom and re-installation as well as upgrades work; all of these are covered by installation testing.

Types of testing
- Localization testing
Conducted to test the product for different languages.
- Performance testing
Conducted to verify whether the system's responses are within acceptable limits. It involves measuring the time taken to perform specified operations; the operations can be interactive or non-interactive.

Types of testing
- Load testing
Load consists of operations that use up system resources; the resources of interest are CPU utilization and memory utilization. Operations may be user-initiated or system-initiated and may depend on the data being processed, e.g. processing 100 records consumes more resources than processing 10. Load depends on the volume of data, computational activity and data processing activity.

Types of testing
- Stress testing
The goal of a stress test is to try to break the system: to find the circumstances under which it will crash. Stress can be created via excessive load or resource starvation. Stress testing often uncovers race conditions, deadlocks, depletion of resources in unplanned patterns and upsets in the normal operation of the software system.

Types of testing
- Volume testing
Conducted with large volumes of data.
- Soak testing
This type of testing finds two types of errors: memory leaks and buffer overflows.

Types of testing
- Scalability testing
Conducted to check for:
-- Constant response time despite increases in load
-- The ability to process more operations by adding more hardware (extra CPUs, extra memory) or external software, or by altering the hardware and software configuration
-- The fact that systems do not scale infinitely; there is always a knee in the curve

Types of testing
- Reliability testing
Reliability testing can be categorized as:
-- Low-resource testing: to see whether the application works correctly under reduced system resources, such as low memory
-- Endurance testing: to check whether the application works without failures when used continuously, e.g. memory leaks, disk space for logs, etc.
-- Volume testing: conducted to test with large volumes of data

Types of testing
 Portability testing
Is conducted to test the software system on different environments

Integration Strategy
The integration strategy is the procedure for combining different modules or units into an integrated subsystem or system. It can be broadly classified as:
- Incremental approach
  - Top-down approach
  - Bottom-up approach
  - Sandwich approach
- Non-incremental approach
  - Big bang approach

Integration Strategy
- Non-incremental approach
Also called the big bang approach. All the modules are put together, combined into an integrated system at once, and then tested as a whole. Because all modules are combined in advance and the entire program is tested at once, the result is usually chaos: a large set of errors is encountered, and correction is difficult because isolating the causes is complicated. To solve this problem we have another strategy, called the incremental approach.

Integration Strategy
Incremental approach
- Top-down approach
Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (the main program) and gradually adding modules one by one.

Integration Strategy
Incremental approach - Top-down approach
(Figure: a module hierarchy with M1 at the top and M2-M7 at lower levels)

Integration Strategy
Incremental approach - Top-down approach
Here the top modules are implemented first. The top module, Module1, is ready for testing, but Module2 and Module3 are not ready, although Module1 calls functions in them. So we replace Module2 (and Module3) with a dummy program or subroutine called a STUB and test Module1. In this way all modules are tested and integrated.

Integration Strategy
Incremental approach
- Bottom-up approach
As the name implies, construction and testing begin with atomic modules, i.e. with components at the lowest levels of the program structure. Since components are integrated from the bottom up, the processing required by the components subordinate to a given level is always available, and the need for stubs is eliminated.

Integration Strategy
Incremental approach - Bottom-up approach
(Figure: the same module hierarchy, M1-M7, integrated from the lowest-level modules upward)

Integration Strategy
Incremental approach - Bottom-up approach
Here the bottom modules are implemented first, while the top module, which has functions that invoke the bottom modules, is not ready. So we replace the top module with a dummy program or subroutine called a DRIVER and test the bottom modules. In this way all the modules are tested and integrated.

Integration Strategy
Incremental approach
- Sandwich integration
A strategy in which both top-down and bottom-up integration are used to integrate and test a program. This is useful when the program structure is very complex and frequent use of drivers and stubs becomes unavoidable. It uses the top-down approach for modules in the upper levels of the hierarchy and the bottom-up approach for the lower levels.

Integration Strategy
Incremental approach - Sandwich integration
(Figure: a module hierarchy M1-M8 integrated top-down from the upper levels and bottom-up from the lower levels)

Quality Assurance vs Quality Control
- Quality Assurance (QA)
- Quality Control (QC)
- Difference between QA and QC

Quality Assurance
Quality assurance activities define a framework for achieving software quality. The QA process involves defining or selecting standards that should be applied to the software development process or the software product. These standards may be embedded in procedures and processes which are applied during development. Processes may be supported by tools that embed knowledge of the quality standards.

Quality Assurance
Quality assurance involves the entire software development process: monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with.
Examples: inspections, reviews, checklists, templates, etc.

Quality Control
Quality control involves overseeing the software development process to ensure that quality assurance procedures and standards are being followed. The deliverables from the software process are checked against the defined project standards in the quality control process.

Difference between QA and QC
- Quality assurance is prevention-based, process-oriented, operates at the organizational level, and runs in parallel with the software development phases.
- Quality control is detection-based, project-oriented, the producer's responsibility, and an end-of-phase activity in software development.

Test plan creation
A plan is a document that provides a framework or approach for achieving a set of goals. In order to meet a set of goals, a plan describes what specific tasks must be accomplished, who is responsible for each task, what tools, procedures and techniques must be used, how much time and effort is needed and what resources are essential. A plan also contains milestones.

Test plan creation
- Test plan identifier
- Introduction
- Features to be tested
- Features not to be tested
- Test strategy
- Suspension criteria
- Resumption criteria
- Test stop criteria
- Budget and schedule
- Test environment
- Test deliverables
- Risks and contingencies
- Reference section
- Approvals

Test plan creation
(1) Test plan identifier
Each test plan should have a unique identifier so that it can be associated with a specific project and become part of the project history. The project history and all project-related items should be stored in a project database or come under the control of a configuration management system.

Test plan creation
(2) Introduction
In this section the test planner gives an overall description of the project and of the software system being developed or maintained. It includes a high-level description of the testing goals and the testing approaches to be used. If the test plans are developed as multilevel documents, that is, separate documents for unit, integration, system and acceptance testing, then each plan must reference the next higher-level plan for consistency and compatibility reasons.

Test plan creation
(3) Features to be tested
Features may be described as distinguishing characteristics of a software component or system in terms of its functional and quality requirements. Features relate to the performance, reliability, portability and functional requirements of the software being tested.

Test plan creation
(4) Test strategy
Gives the approach to testing. The testing activities are described, and the tools and techniques necessary for the tests are included.
- Constraints on testing, such as time and budget limitations
- The platforms to test on, whether the test cases are automated, the types of testing, the percentage of white box and black box testing, etc. are documented

Test plan creation
(5) Suspension criteria
When do we suspend testing? For example, on continuous changes in the user requirements or failure of the smoke test cases.
(6) Resumption criteria
Once the suspension criteria are no longer met, we resume testing.
(7) Test stop criteria
When do we stop testing? We stop testing if the following conditions are met: the product is stable and all user requirements are covered in depth.

Test plan creation
(8) Budget and schedule
This section gives the budget, resource allocation, training programs, etc. The schedule section contains the effort involved for the different testing activities, such as documentation, implementation and execution.
(9) Test environment
This section lists all the hardware, software and network resources required for the testing.

Test Plan Creation
(10) Test deliverables
The outputs expected from testing: test cases, test scripts, test execution logs, defect reports.
(11) Risks and contingencies
This section contains the risks in the project and the plan to address them, e.g. (a) disaster recovery: to mitigate this we take CD or tape backups on a weekly or daily basis.

Test Plan Creation
(12) Reference section
This section gives references to the various documents followed in implementing the test plan.
(13) Approvals
This section tells who reviewed and approved the test plan.

Defect Tracking Process
Defect life cycle:
- Detection or creation
- Assignment
- Evaluation
- Fixing
- Verify
Detection / creation: anyone can detect the defect, be it a test engineer, manager, developer or customer.

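As an illustration only, the enum below sketches how a defect-tracking tool might represent the life-cycle stages listed above; the state names are hypothetical and not taken from any particular tool.

// Illustrative sketch of the defect life cycle as states in a tracking tool.
enum DefectState {
    NEW,        // detection / creation: anyone (tester, manager, developer, customer) raises the defect
    ASSIGNED,   // assignment: routed to the responsible developer
    EVALUATED,  // evaluation: the developer finds the root cause
    FIXED,      // fixing: the code changes for the fix are made
    VERIFIED,   // verify: the test engineer confirms the fix
    CLOSED,     // no further action needed
    REOPENED    // verification failed; the cycle repeats from assignment
}

public class DefectLifeCycleDemo {
    public static void main(String[] args) {
        // Typical happy path through the life cycle.
        DefectState[] happyPath = {
            DefectState.NEW, DefectState.ASSIGNED, DefectState.EVALUATED,
            DefectState.FIXED, DefectState.VERIFIED, DefectState.CLOSED
        };
        for (DefectState s : happyPath) {
            System.out.println(s);
        }
    }
}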
Defect Tracking Process
- Assignment: The defect is assigned to the concerned developer to be fixed.
- Evaluation: The developer finds the root cause of the bug.
- Fixing: The developer makes the necessary code changes to fix the bug.

Defect Tracking Process
- Verify: The test engineer verifies that the bug is fixed properly.
A few types of errors:
- Cosmetic errors
- Functional errors
- Security errors
- Data integrity or fatal errors
Template of a bug-tracking tool (see below).

Template of Defect Tracking Tool
Typical fields:
- Bug ID, Date of creation, Project, Module, Manager
- Created by, Developer, Verifier
- Priority, Severity
- Synopsis, Description, Evaluation, Workaround
- Build no., Hardware, Software, Mailing list
- Status: Open, Assigned, Fixed, Verified, Closed, Rejected, Duplicate of, RFE, Deferred, Reopened

Defect Tracking Tool
- Synopsis: a one-line description of the bug
- Description: a detailed description of the bug and the steps to reproduce it
- Workaround: a temporary solution to the bug

Defect Tracking Process
- Priority: the importance of fixing the bug (low, medium, high)
- Severity: the impact of the bug on the application

Testing Terminology
- Test script: A program which emulates real user actions.
- Test suite: A group of test scripts representing a particular type of testing, for example a regression test suite or a smoke test suite.
- Test bench: A group of test systems used for performance benchmarking.

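As a concrete illustration of a test script that emulates real user actions, the sketch below uses Selenium WebDriver, which is not among the tools named in this material; the page URL and element IDs are hypothetical.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// A minimal test script: drives a (hypothetical) login page the way a real user would.
public class LoginScript {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/login");                     // open the application
            driver.findElement(By.id("username")).sendKeys("testuser");  // type the user name
            driver.findElement(By.id("password")).sendKeys("secret");    // type the password
            driver.findElement(By.id("loginButton")).click();            // submit the form
            // Simple check: did the login land on the expected page?
            System.out.println(driver.getTitle().contains("Dashboard") ? "PASS" : "FAIL");
        } finally {
            driver.quit();
        }
    }
}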
Testing Terminology
- Test framework: A set of rules defined for implementing and executing test scripts. It standardizes the test script implementation process.
- Test environment: The hardware, software and network required for testing.

Testing Terminology
Software metrics / testing metrics
- KLOC (kilo lines of code): depends on the project size.
- Productivity: lines of code developed per person-day of effort.
- Test effort: person-days of effort spent on testing divided by the total effort in the project. The test effort is between 15% and 50% of the total effort, usually around 30%.

Testing Terminology
- Test effectiveness: defects identified during testing divided by the total defects (defects identified during testing plus defects identified at the customer's place).
- Defect finding rate: defects identified per person-day of effort.
- Schedule slippage: the difference between the actual schedule and the estimated schedule.
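A worked example with hypothetical numbers may help: if 90 defects are found during testing and 10 more are reported from the customer's site, test effectiveness = 90 / (90 + 10) = 90%. If the team spent 45 person-days on that testing, the defect finding rate = 90 / 45 = 2 defects per person-day.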

Testing Terminology
- Effort slippage: actual effort minus planned effort, divided by the planned effort.
- Root cause analysis: the process of identifying the root cause of any failure, such as schedule slippage or effort slippage.
- TOI (Transfer of Information): transfer of project knowledge before leaving the project.

Testing Terminology
- Defect density: defects per kilo lines of code (KLOC).
- Baselining: the process of approving a document so that further modification happens through the configuration management process.
- Hot fix: a critical bug found at the customer site which requires immediate attention; it is a high-priority bug.

Testing Terminology
- Show stopper: a serious bug that appears before the software release, because of which the release has to be stopped.
- Release notes: the document attached to the software released to the customer. It contains the new features included, known issues, hardware and software requirements, default installation procedures, etc.

Testing Terminology
- Code coverage: gives the percentage of code covered during testing. Examples of code coverage tools: GCOV, JCover, Rational coverage tools, etc.
- Change management: used to handle change requests suggested by an internal team or an external customer. Change management starts after the design phase.

Automation Test Tools
- Functionality testing tools
  - WinRunner, QuickTest Professional (QTP) from Mercury Interactive
  - SilkTest from Segue
  - Rational Robot from IBM
  - Open-source test tools
- Test management tools
  - Quality Center (QC) from HP
  - Rational Test Manager from IBM, etc.

Automation Test Tools
- Configuration management tools
  - Visual SourceSafe (VSS), etc.
- Defect tracking tools
  - Rational ClearQuest
  - Bugzilla, Bugs Online, etc.
- Performance testing tools
  - LoadRunner from Mercury Interactive
  - Silk Performer from Segue
  - Rational Performance Tester from IBM
  - OpenSTA, etc.

Test Management Tools
A test management tool is a centralized place for storing test scripts and executing them from any system that is connected to the tool. The tool supports:
- Test planning
- Test design
- Test implementation
- Test execution
- Analysis of test results
