Cognizant 500 Glen Pointe Center West Teaneck, NJ 07666 Ph: 201-801-0233 www.cognizant.com
TABLE OF CONTENTS
Introduction .................................................................. 5
About this Module ............................................................. 5
Target Audience ............................................................... 5
Module Objectives ............................................................. 5
Pre-requisite ................................................................. 5
Chapter 1: Introduction to Testing ............................................ 6
Learning Objectives ........................................................... 6
What is Software Testing ...................................................... 6
Testing Life Cycle ............................................................ 6
Broad Categories of Testing ................................................... 7
The Testing Techniques ........................................................ 7
Types of Testing .............................................................. 8
SUMMARY ....................................................................... 8
Test your Understanding ....................................................... 9
Chapter 2: Black Box Vs. White Box Testing ................................... 10
Learning Objective ........................................................... 10
Introduction to Black Box and White Box Testing .............................. 10
Black Box Testing ............................................................ 10
Black Box Testing - without User Involvement ................................. 11
Black Box Testing - with User Involvement .................................... 11
White Box Testing ............................................................ 14
Black Box (Vs) White Box ..................................................... 18
SUMMARY ...................................................................... 20
Test your Understanding ...................................................... 20
Chapter 3: Other Testing Types ............................................... 21
Learning Objective ........................................................... 21
What is GUI Testing? ......................................................... 21
Regression Testing ........................................................... 31
Integration Testing .......................................................... 38
Acceptance Testing ........................................................... 43
Configuration Testing & Installation Testing ................................. 45
Alpha Testing and Beta Testing ............................................... 48
Test your Understanding ...................................................... 52
Page 2 Copyright 2007, Cognizant Technology Solutions, All Rights Reserved C3: Protected
Introduction
About this Module
This module provides you with a brief description of the module, audience, suggested prerequisites and module objectives.
Target Audience
Entry Level Trainees
Module Objectives
After completing this module, the student will be able to:
- Explain Software Testing
- List the types of testing
- Explain Test Strategy
- Describe Test Plan
- Describe Test Design
- Describe Test Cases
- Describe Test Data
- Explain Test Execution
- Perform defect reporting and analysis
- List Test Automation advantages and disadvantages
- Work with WinRunner
- Describe Performance Testing
- Work with the LoadRunner tool
- Work with TestDirector
Pre-requisite
This module does not require any prerequisites.
According to the respective project, the scope of testing can be tailored, but the process mentioned above is common to any testing activity. Software testing has been accepted as a separate discipline, to the extent that there is a separate life cycle for the testing activity. Involving software testing in all phases of the software development life cycle helps to detect defects early, when they are least expensive to fix.
SUMMARY
- Testing is the process of executing a program with the intent of finding errors
- Evolution of Software Testing
- The Testing process and lifecycle
- Broad categories of testing
- Widely employed Types of Testing
- The Testing Techniques
Black box testing methods

Graph-based Testing Methods
- Black-box methods based on the nature of the relationships (links) among the program objects (nodes); test cases are designed to traverse the entire graph.
- Transaction flow testing: nodes represent steps in some transaction and links represent the logical connections between steps that need to be validated.
Boundary Value Analysis
A black-box technique that focuses on the boundaries of the input domain rather than its center. BVA guidelines:
- If an input condition specifies a range bounded by values a and b, test cases should include a and b, plus values just above and just below a and b.
- If an input condition specifies a number of values, test cases should exercise the minimum and maximum numbers, as well as values just above and just below the minimum and maximum.
- Apply guidelines 1 and 2 to output conditions: test cases should be designed to produce the minimum and maximum output reports.
- If internal program data structures have prescribed boundaries (e.g. size limitations), be certain to test those boundaries.

Comparison Testing
Black-box testing for safety-critical systems in which independently developed implementations of redundant systems are tested for conformance to specifications. Often equivalence class partitioning is used to develop a common set of test cases for each implementation.

Orthogonal Array Testing
A black-box technique that enables the design of a reasonably small set of test cases that provide maximum test coverage. The focus is on the categories of faulty logic likely to be present in the software component (without examining the code).
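The first BVA guideline can be sketched as a small helper that, for a range bounded by a and b, yields the bounds themselves plus the values just below and just above each. The function name and the 18-65 age range are invented for the example:

```python
def boundary_values(a, b):
    """Boundary-value test inputs for a range [a, b]: the bounds
    themselves plus values just below and just above each bound."""
    return [a - 1, a, a + 1, b - 1, b, b + 1]

# Example: a hypothetical field that accepts ages 18 through 65.
# The values 17 and 66 should be rejected; 18 and 65 accepted.
tests = boundary_values(18, 65)
```

Each generated value then becomes one test case against the field under test, with the two just-outside values expected to be rejected.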
Advantages of Black Box Testing
- More effective on larger units of code than glass box testing
- Tester needs no knowledge of implementation, including specific programming languages
- Tester and programmer are independent of each other
- Tests are done from a user's point of view
- Will help to expose any ambiguities or inconsistencies in the specifications
- Test cases can be designed as soon as the specifications are complete

Disadvantages of Black Box Testing
- Only a small number of possible inputs can actually be tested; to test every possible input stream would take nearly forever
- Without clear and concise specifications, test cases are hard to design
- There may be unnecessary repetition of test inputs if the tester is not informed of test cases the programmer has already tried
- May leave many program paths untested
- Cannot be directed toward specific segments of code which may be very complex (and therefore more error-prone)
- Most testing-related research has been directed toward glass box testing
The purpose of white box testing
- Initiate a strategic initiative to build quality throughout the life cycle of a software product or service.
- Provide a complementary function to black box testing.
- Perform complete coverage at the component level.
- Improve quality by optimizing performance.

Practices
This section outlines some of the general practices comprising the white-box testing process. In general, white-box testing practices have the following considerations:
- The allocation of resources to perform class and method analysis and to document and review the same.
- Developing a test harness made up of stubs, drivers and test object libraries.
- Development and use of standard procedures, naming conventions and libraries.
- Establishment and maintenance of regression test suites and procedures.
- Allocation of resources to design, document and manage a test history library.
- The means to develop or acquire tool support for automation of capture/replay/compare, test suite execution, results verification and documentation capabilities.

1 Code Coverage Analysis

1.1 Basis Path Testing
A testing mechanism proposed by McCabe whose aim is to derive a logical complexity measure of a procedural design and use it as a guide for defining a basis set of execution paths. Test cases derived from the basis set are guaranteed to execute every statement at least once.

1.1.1 Flow Graph Notation
A notation for representing control flow, similar to flow charts and UML activity diagrams.
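As a minimal illustration of basis path testing (the `classify` function is invented for the example): a function with two decision points has cyclomatic complexity V(G) = 2 + 1 = 3, so a basis set of three test cases suffices to execute every statement at least once:

```python
def classify(x):
    # Two decision points -> cyclomatic complexity V(G) = 3,
    # so the basis set contains three independent paths.
    if x < 0:
        return "negative"
    if x == 0:
        return "zero"
    return "positive"

# Basis path test set: one input per independent path.
basis_cases = [(-5, "negative"), (0, "zero"), (7, "positive")]
for inp, expected in basis_cases:
    assert classify(inp) == expected
```

Running the three cases exercises each branch outcome once, which is exactly what the basis set guarantees.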
Note that unstructured loops are not tested; rather, they are redesigned.
4 Error Handling
Exception and error handling is checked thoroughly by simulating partial and complete fail-over, operating on error-causing test vectors. Proper error recovery, notification and logging are checked against references to validate the program design.

5 Transactions
Systems that employ transactions, local or distributed, may be validated to ensure that the ACID properties (Atomicity, Consistency, Isolation, Durability) hold. Each of the individual properties is tested against a reference data set. Transactions are checked thoroughly for partial/complete commits and rollbacks encompassing databases and other XA-compliant transaction processors.

Advantages of White Box Testing
- Forces the test developer to reason carefully about the implementation
- Approximates the partitioning done by execution equivalence
- Reveals errors in "hidden" code
- Beneficent side effects

Disadvantages of White Box Testing
- Expensive
- Cases omitted in the code could be missed out
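A minimal sketch of an atomicity check, using Python's built-in sqlite3 module (the accounts table and the simulated mid-transaction failure are invented for the example): a transfer that fails halfway must leave no partial commit behind.

```python
import sqlite3

# In-memory database with two hypothetical accounts.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('A', 100), ('B', 50)")
conn.commit()

try:
    with conn:  # the connection context manager rolls back on exception
        conn.execute("UPDATE accounts SET balance = balance - 30 WHERE name = 'A'")
        raise RuntimeError("simulated failure before crediting B")
except RuntimeError:
    pass

# Atomicity holds if A was not debited by the aborted transaction.
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
```

The same pattern generalizes to checking rollbacks against a reference data set: run the failing transaction, then compare the full table contents with the pre-transaction reference.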
SUMMARY
Black box testing can sometimes describe user-based testing (people); system or requirements-based testing (coverage); usability testing (risk); or behavioral testing or capture-replay automation (activities). White box testing, on the other hand, can sometimes describe developer-based testing (people); unit or code-coverage testing (coverage); boundary or security testing (risks); structural testing, inspection or code-coverage automation (activities); or testing based on probes, assertions, and logs (evaluation).

Black-box test design treats the system as a literal "black box", so it does not explicitly use knowledge of the internal structure. It is usually described as focusing on testing functional requirements. Synonyms for black-box include: behavioral, functional, opaque-box, and closed-box.

White-box test design allows one to peek inside the "box", and it focuses specifically on using internal knowledge of the software to guide the selection of test data. It is used to detect errors by means of execution-oriented test cases. Synonyms for white-box include: structural, glass-box and clear-box.
Specific Field Tests

Date Field Checks
- Assure that leap years are validated correctly and do not cause errors/miscalculations.
- Assure that month codes 00 and 13 are validated correctly and do not cause errors/miscalculations.
- Assure that month codes 00 and 13 are reported as errors.
- Assure that day values 00 and 32 are validated correctly and do not cause errors/miscalculations.
- Assure that Feb. 28, 29 and 30 are validated correctly and do not cause errors/miscalculations.
- Assure that Feb. 30 is reported as an error.
- Assure that century change is validated correctly and does not cause errors/miscalculations.
- Assure that out-of-cycle dates are validated correctly and do not cause errors/miscalculations.

Numeric Fields
- Assure that the lowest and highest values are handled correctly.
- Assure that invalid values are logged and reported.
- Assure that valid values are handled by the correct procedure.
- Assure that numeric fields with a blank in position 1 are processed or reported as an error.
- Assure that fields with a blank in the last position are processed or reported as an error.
- Assure that both + and - values are correctly processed.
- Assure that division by zero does not occur.
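Several of the date field checks above can be automated with a small validator. This sketch leans on Python's datetime module for the calendar rules; the `valid_date` helper is invented for the example:

```python
import datetime

def valid_date(year, month, day):
    """Reject out-of-range date components such as month 00/13 or Feb 30."""
    try:
        datetime.date(year, month, day)
        return True
    except ValueError:
        return False

# Month codes 00 and 13 must be errors; Feb. 29 is valid only in leap
# years; Feb. 30 never exists.
checks = [
    valid_date(2024, 0, 15),   # month 00 -> error
    valid_date(2024, 13, 15),  # month 13 -> error
    valid_date(2024, 2, 29),   # leap year -> valid
    valid_date(2023, 2, 29),   # non-leap year -> error
    valid_date(2024, 2, 30),   # Feb. 30 -> error
]
```

Each entry in the checklist becomes one assertion against the validator, so the whole date suite can run unattended on every build.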
* These shortcuts are suggested for text formatting applications, in the context for which they make sense. Applications may use other modifiers for these operations.
Regression Testing
What is Regression Testing?
Regression testing is the process of testing changes to computer programs to make sure that the older programming still works with the new changes. Regression testing is a normal part of the program development process. Test department coders develop code test scenarios and exercises that will test new units of code after they have been written. Before a new version of a software product is released, the old test cases are run against the new version to make sure that all the old capabilities still work. The reason they might not work is that changing or adding new code can easily introduce errors into code that was not intended to change.

Regression testing is the selective retesting of a software system that has been modified, to ensure that any bugs have been fixed, that no other previously working functions have failed as a result of the repairs, and that newly added features have not created problems with previous versions of the software. It is also referred to as verification testing. Regression testing is initiated after a programmer has attempted to fix a recognized problem or has added source code to a program that may have inadvertently introduced errors. It is a quality control measure to ensure that the newly modified code still complies with its specified requirements and that unmodified code has not been affected by the maintenance activity.

Test Execution
Test execution is the heart of the testing process. Each time your application changes, you will want to execute the relevant parts of your test plan in order to locate defects and assess quality.

Create Test Cycles
During this stage you decide the subset of tests from your test database that you want to execute. Usually you do not run all the tests at once. At different stages of the quality assurance process, you need to execute different tests in order to address specific goals.
A related group of tests is called a test cycle, and can include both manual and automated tests.

Example: You can create a cycle containing basic tests that run on each build of the application throughout development. You can run the cycle each time a new build is ready, to determine the application's stability before beginning more rigorous testing.

Example: You can create another set of tests for a particular module in your application. This test cycle includes tests that check that module in depth.

To decide which test cycles to build, refer to the testing goals you defined at the beginning of the process. Also consider issues such as the current state of the application and whether new functions have been added or modified.
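A test cycle can be modeled as nothing more than a named subset of the test database. The sketch below (the registry and test names are invented for the example) selects the tests belonging to a given cycle, such as a basic "smoke" cycle run on every build:

```python
# Hypothetical registry mapping each test to the cycles it belongs to.
TEST_CYCLES = {
    "test_login":         {"smoke", "full"},
    "test_checkout_flow": {"smoke", "full"},
    "test_report_export": {"full"},
    "test_admin_module":  {"module_admin", "full"},
}

def select_cycle(cycle):
    """Return the subset of tests belonging to a named test cycle."""
    return sorted(name for name, cycles in TEST_CYCLES.items()
                  if cycle in cycles)

smoke_tests = select_cycle("smoke")  # run on each new build
```

The same registry supports an in-depth per-module cycle (`select_cycle("module_admin")`) alongside the basic build-stability cycle.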
Traceability ensures completeness, that all lower level requirements derive from higher level requirements, and that all higher level requirements are allocated to lower level requirements. Traceability is also used in managing change and provides the basis for test planning. SAMPLE TRACEABILITY MATRIX A traceability matrix is a report from the requirements database or repository. The examples below show traceability between user and system requirements. User requirement identifiers begin with "U" and system requirements with "S."
Tracing S12 to its source makes it clear this requirement is erroneous: it must be eliminated, rewritten, or the traceability corrected.
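The check that tracing enables can be automated. The sketch below (requirement identifiers and links are invented for the example) flags any system requirement that, like S12, does not derive from any user requirement:

```python
# Hypothetical traceability data: each system requirement lists the
# user requirements it derives from. "S12" traces to nothing.
TRACE = {
    "S5":  ["U2"],
    "S6":  ["U2", "U3"],
    "S12": [],
}

def orphan_requirements(trace):
    """System requirements with no parent user requirement are suspect:
    they must be eliminated, rewritten, or have their traceability fixed."""
    return [req for req, parents in trace.items() if not parents]

orphans = orphan_requirements(TRACE)
```

Running this report against the requirements repository on every change keeps the completeness property of the matrix continuously verified.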
In addition to traceability matrices, other reports are necessary to manage requirements. What goes into each report depends on the information needs of those receiving it. Determine those needs and document the information that will be associated with the requirements when you set up your requirements database or repository.

Phases of Testing
The primary objective of the testing effort is to determine conformance to the requirements specified in the contracted documents. The integration of this code with the internal code is an important objective. The goal is to evaluate the system as a whole, not its parts. Techniques can be structural or functional, and can be used in any stage that tests the system as a whole (system testing, acceptance testing, installation testing, etc.).

Types and Phases of Testing
Integration Testing
One of the most significant aspects of a software development project is the integration strategy. Integration may be performed all at once, top-down, bottom-up, critical piece first, or by first integrating functional subsystems and then integrating the subsystems in separate phases using any of the basic strategies. In general, the larger the project, the more important the integration strategy.

Very small systems are often assembled and tested in one phase. For most real systems, this is impractical for two major reasons. First, the system would fail in so many places at once that the debugging and retesting effort would be impractical. Second, satisfying any white box testing criterion would be very difficult, because of the vast amount of detail separating the input data from the individual code modules. In fact, most integration testing has traditionally been limited to "black box" techniques.

Large systems may require many integration phases, beginning with assembling modules into low-level subsystems, then assembling subsystems into larger subsystems, and finally assembling the highest-level subsystems into the complete system. To be most effective, an integration testing technique should fit well with the overall integration strategy. In a multi-phase integration, testing at each phase helps detect errors early and keep the system under control. Performing only cursory testing at early integration phases and then applying a more rigorous criterion for the final stage is really just a variant of the high-risk "big bang" approach. However, performing rigorous testing of the entire software involved in each integration phase involves a lot of wasteful duplication of effort across phases. The key is to test, at each phase, the new interactions that the phase introduces, while avoiding redundant retesting of components verified in earlier phases.
Generalization of module testing criteria Module testing criteria can often be generalized in several possible ways to support integration testing. As discussed in the previous subsection, the most obvious generalization is to satisfy the module testing criterion in an integration context, in effect using the entire program as a test driver environment for each module. However, this trivial kind of generalization does not take advantage of the differences between module and integration testing. Applying it to each phase of a multiphase integration strategy, for example, leads to an excessive amount of redundant testing.
Incremental integration
Hierarchical system design limits each stage of development to a manageable effort, and it is important to limit the corresponding stages of testing as well. Hierarchical design is most effective when the coupling among sibling components decreases as the component size increases, which simplifies the derivation of data sets that test interactions among components. The remainder of this section extends the integration testing techniques of structured testing to handle the general case of incremental integration, including support for hierarchical design. The key principle is to test just the interaction among components at each integration stage, avoiding redundant testing of previously integrated sub-components.

To extend statement coverage to support incremental integration, it is required that all module call statements from one component into a different component be exercised at each integration stage. To form a completely flexible "statement testing" criterion, it is required that each statement be executed during the first phase (which may be anything from single modules to the entire program), and that at each subsequent integration phase all new module call statements between the components being integrated be exercised.
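The cross-component call criterion described above can be sketched as a simple coverage check: record each cross-component call exercised during an integration stage, then verify that every known call site from one component into another was executed at least once. The component and call names are invented for the example:

```python
# Hypothetical inventory of call sites that cross component boundaries
# at this integration stage, as (caller component, callee) pairs.
CROSS_COMPONENT_CALLS = {
    ("billing", "ledger.post"),
    ("billing", "ledger.query"),
    ("ui", "billing.invoice"),
}

exercised = set()

def record_call(caller, callee):
    """Instrumentation hook: note that a cross-component call ran."""
    exercised.add((caller, callee))

# Simulated integration-stage test run exercising each call site.
record_call("billing", "ledger.post")
record_call("billing", "ledger.query")
record_call("ui", "billing.invoice")

# The stage's coverage requirement is met when nothing is left over.
uncovered = CROSS_COMPONENT_CALLS - exercised
```

Because only boundary-crossing calls are tracked, previously integrated sub-components are not redundantly retested, which is the point of the incremental criterion.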
Acceptance Testing
In software engineering, acceptance testing is formal testing conducted to determine whether a system satisfies its acceptance criteria and thus whether the customer should accept the system. The main types of software testing are:
- Component
- Interface
- System
- Acceptance
- Release

Acceptance testing checks the system against the "requirements". It is similar to system testing in that the whole system is checked, but the important difference is the change in focus:
- System testing checks that the system that was specified has been delivered.
- Acceptance testing checks that the system delivers what was requested.

The customer, not the developer, should always do acceptance testing. The customer knows what is required from the system to achieve value in the business and is the only person qualified to make that judgment. The forms of the tests may follow those in system testing, but at all times they are informed by the business needs.

Acceptance testing comprises the test procedures that lead to formal 'acceptance' of new or changed systems. User Acceptance Testing (UAT) is a critical phase of any systems project and requires significant participation by the end users. To be of real use, an Acceptance Test Plan should be developed in order to plan precisely, and in detail, the means by which 'acceptance' will be achieved. The final part of the UAT can also include a parallel run to prove the system against the current system.

Factors influencing Acceptance Testing
The User Acceptance Test Plan will vary from system to system but, in general, the testing should be planned in order to provide a realistic and adequate exposure of the system to all reasonably expected events. The testing can be based upon the User Requirements Specification to which the system should conform. As in any system, though, problems will arise, and it is important to have determined what will be the expected and required responses from the various parties concerned, including users;
Report these failures to the development teams so that the associated defects can be fixed. Determine the effect of adding or modifying hardware resources such as:
- Memory
- Disk and tape resources
- Processors
- Load balancers
Determine an optimal system configuration.

Examples
Typical examples include configuration testing of an application that must:
- Have multiple functional variants.
- Support internationalization.
- Support personalization.

Preconditions
Configuration testing can typically begin when the following preconditions hold:
- The configurability requirements to be tested have been specified.
- Multiple variants of the application exist.
- The relevant software components have passed unit testing.
- Software integration testing has started. However, configuration testing can begin prior to the distribution of the software components onto the hardware components.
- The relevant system components have passed system integration testing.
- The independent test team is adequately staffed and trained in configuration testing.
- The test environment is ready.
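Enumerating the configuration variants to be tested can be sketched with itertools.product. The dimensions shown (locale, variant, database) are invented for the example; in practice a pairwise or orthogonal-array subset is usually chosen instead of the full matrix to keep the test count manageable:

```python
import itertools

# Hypothetical configuration dimensions for a configurable application.
DIMENSIONS = {
    "locale":   ["en_US", "fr_FR", "ja_JP"],
    "variant":  ["standard", "professional"],
    "database": ["postgres", "oracle"],
}

# Exhaustive configuration matrix: 3 x 2 x 2 = 12 configurations,
# each a dict naming one value per dimension.
configs = [dict(zip(DIMENSIONS, combo))
           for combo in itertools.product(*DIMENSIONS.values())]
```

Reused functional test cases are then executed once per selected configuration, which is the guideline given below.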
Work Products
Configuration testing typically results in the production of all or part of the following work products from the test work product set:

Documents:
- Project Test Plan
- Master Test List
- Test Procedures
- Test Report
- Test Summary Report

Software and Data:
- Test Harness
- Test Scripts
- Test Suites
- Test Cases
- Test Data

Phases
Configuration testing typically consists of the following tasks being performed during the following phases:
Guidelines
The iterative and incremental development cycle implies that configuration testing is regularly performed in an iterative and incremental manner. Configuration testing must be automated if adequate regression testing is to occur. To the extent practical, reuse functional test cases as configuration test cases.

Installation Testing
Installation testing is testing performed to identify the ways in which the installation procedures lead to incorrect results. Quality System Regulations require installation and inspection procedures (including testing where appropriate) and documentation of inspection and testing to demonstrate proper installation. Likewise, manufacturing equipment must meet specified requirements, and automated systems must be validated for their intended use.

Terminology in this testing area can be confusing. Terms such as beta test, site validation, user acceptance test, and installation verification have all been used to describe installation testing. To avoid confusion, and for the purposes of this document, installation testing is defined as any testing that takes place outside of the developer's controlled environment: any testing that takes place at a user's site with the actual hardware and software that will be part of the installed system configuration. The testing is accomplished through either actual or simulated use of the software being tested within the environment in which it is intended to function.

Guidance contained here is general in nature and is applicable to any installation testing. However, in some areas there are specific site validation requirements that need to be considered in the planning of installation testing. Test planners should check with Soft Solutions International to determine whether there are any additional regulatory requirements for installation testing.
Tasks
Alpha testing typically involves the following teams performing the following testing tasks:

Independent Test Team:
- Test Planning: Determine alpha testing completion criteria. Update the alpha testing subsection of the Project Test Plan (PTP).
- Test Design: Select an adequate subset of the system test suites of test cases (both functional and quality) to be repeated on the production environment during alpha testing.
- Test Implementation: Fix any defects in the test suites found during evaluation.
- Test Execution: Execute the alpha test suites on the production environment.
- Test Reporting: Report failures that occurred during testing to the development teams so that the associated defects can be fixed.

Environments
Alpha testing is typically performed on the following environments with the following tools:
- Production Environments
- Tools: None
Guidelines
To the extent practical, reuse the tests from system testing when performing alpha testing rather than producing new tests.

Definition
Beta testing is the launch testing of the application in the production environment by a few select users, prior to acceptance testing and the release of the application to its entire user community.

Objectives
The typical objectives of beta testing are to:
- Cause failures that only tend to occur during actual usage by the user community rather than during formal testing.
- Report these failures to the development teams so that the associated defects can be fixed.
- Obtain additional user community feedback beyond that received during usability testing.
- Help determine the extent to which the system is ready for:
  - Acceptance testing.
  - Launch.
- Provide input to the defect trend analysis effort.

Preconditions
Beta test execution can typically begin when the following preconditions hold:
- The application has passed all system tests.
- The delivery phase has begun.
- The application has passed alpha testing.
- The production environment is ready.
Customer Organization:

Environments
Beta testing is typically performed on the following environments (limited to a select group of users) using the following tools:

Production Environments:
- Client Environment
- Contact Center Environment
- Content Management Environment
- Data Center Environment

Tools:
- Defect reporting tool.
User Organizations:
Guidelines
o Limit the user test group to users who are willing to use a lower-quality version of the application in exchange for obtaining it early and having input into its evolution.
o Beta testing is critical if formal usability testing was not performed during system testing.
o Beta testing often uses actual live data rather than data created for testing purposes.
2. The testing that ensures that no unwanted changes were introduced is
a) Unit Testing
b) System Testing
c) Acceptance Testing
d) Regression Testing
Answers: 1) a 2) d
Unit Testing
Unit testing: isn't that some annoying requirement that we're going to ignore?

Many developers get nervous when you mention unit tests. Usually the phrase conjures a vision of a grand table with every single method listed, along with its expected results and pass/fail date. Such a table may look impressive, but it is not what matters in most programming projects. A unit test motivates the code that you write. In a sense, it is a little design document that says, "What will this bit of code do?" Or, in the language of object-oriented programming, "What will these clusters of objects do?"

The crucial issue in constructing a unit test is scope. If the scope is too narrow, the tests will be trivial: the objects might pass the tests, but their interactions will go undesigned. And the interactions of objects are the crux of any object-oriented design. Likewise, if the scope is too broad, there is a high chance that not every component of the new code will get tested. The programmer is then reduced to testing-by-poking-around, which is not an effective test strategy.

Need for Unit Test
How do you know that a method does not need a unit test? First, can it be tested by inspection? If the code is simple enough that the developer can just look at it and verify its correctness, it is simple enough not to require a unit test. The developer should know when this is the case.

Unit tests will most likely be defined at the method level, so the art is to define unit tests on the methods that cannot be checked by inspection. Usually this is the case when a method involves a cluster of objects. Unit tests that isolate clusters of objects for testing are doubly useful: they test for failures, and they also identify those segments of code that are related. People who revisit the code will use the unit tests to discover which objects are related, or which objects form a cluster. Hence: unit tests isolate clusters of objects for future developers.
Another good litmus test is to look at the code and see whether it throws an error or catches an error. If error handling is performed in a method, then that method can break. Generally, any method that can break is a good candidate for a unit test, because it may break at some time, and the unit test will then be there to help you fix it.

The danger of not implementing a unit test on every method is that coverage may be incomplete. Just because we don't test every method explicitly doesn't mean that methods can get away without being tested. A programmer's unit testing is complete when the unit tests cover, at the very least, the functional requirements of all the code. The careful programmer will know that unit testing is complete when the tests cover every cluster of objects that forms the application.
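As a sketch of the litmus test above: the withdraw method below performs error handling, so it can break, and therefore earns a unit test. The Account class and its overdraft rule are hypothetical, invented only for illustration.

```java
// Hypothetical example: a method that performs error handling is a good
// candidate for a unit test, because it has at least one way to break.
public class Account {
    private int balance;

    public Account(int balance) {
        this.balance = balance;
    }

    // Throws on overdraft -- the error path is exactly what a unit test
    // should exercise, in addition to the normal path.
    public void withdraw(int amount) {
        if (amount > balance) {
            throw new IllegalArgumentException("insufficient funds");
        }
        balance -= amount;
    }

    public int getBalance() {
        return balance;
    }
}
```

A unit test for this method would assert both the successful withdrawal and that the exception is actually thrown on overdraft.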
Types of Errors Detected
The following types of errors may be caught:
o Errors in data structures
o Performance errors
o Logic errors
o Validity of alternate and exception flows
o Errors identified at the analysis/design stages
Advantages of Statement Coverage
o Can be applied directly to object code and does not require processing source code.
o Performance profilers commonly implement this measure.
Disadvantages of Statement Coverage
o Insensitive to some control structures (e.g., the number of loop iterations).
o Does not report whether loops reach their termination condition.
o Statement coverage is completely insensitive to the logical operators (|| and &&).
Method for Statement Coverage
o Design a test case for the pass/failure of every decision point.
o Select a unique set of test cases.
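To illustrate the insensitivity to logical operators noted above, consider this hypothetical sketch: the two inputs (true, 30) and (false, 30) execute every statement of canAccess, so statement coverage reports 100%, yet the age condition is never evaluated false on its own, so a defect in it (say, age >= 0 instead of age >= 18) could go unnoticed.

```java
// Hypothetical sketch: why statement coverage is blind to the && operator.
public class CoverageDemo {
    // Intended rule: grant access only to active users aged 18 or over.
    public static boolean canAccess(boolean active, int age) {
        if (active && age >= 18) {  // short-circuit: age ignored when !active
            return true;
        }
        return false;
    }
}
```

A third case such as (true, 10) is needed to actually exercise the second operand, which is what branch or condition coverage would demand.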
Integration Testing
Integration testing is the testing of a partially integrated application to identify defects involving the interaction of collaborating components.
Objectives
The typical objectives of integration testing are to:
o Determine whether components will work properly together.
o Identify defects that are not easily identified during unit testing.
Kinds of Integration Testing
Integration testing includes the following kinds of testing:
Commercial Component Integration
Commercial component integration testing is the integration testing of multiple commercial-off-the-shelf (COTS) software components to determine whether they are interoperable (i.e., whether they contain any interface defects).
Software Integration
Software integration testing is the incremental integration testing of two or more integrated software components on a single platform to produce failures caused by interface defects.
System Integration
System integration testing is the integration testing of two or more system components. Specifically, it is the testing of software components that have been distributed across multiple platforms (e.g., client, web server, application server, and database server).
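A minimal sketch of software integration testing in the sense above: two hypothetical components, each trivially correct in isolation, are exercised together so that the test targets their interface, here the (assumed) agreement that amounts are passed as integer cents.

```java
// Hypothetical sketch of incremental software integration testing: the
// test targets the interface between two components, not their internals.
public class IntegrationDemo {
    // Component A: computes a total, expressed in cents.
    static class PriceCalculator {
        int totalCents(int unitCents, int qty) {
            return unitCents * qty;
        }
    }

    // Component B: formats an amount given in cents.
    static class ReceiptFormatter {
        String format(int cents) {
            return String.format("%d.%02d", cents / 100, cents % 100);
        }
    }

    // The integration point under test: does B interpret A's output
    // correctly (same units, same type)?
    public static String priceLine(int unitCents, int qty) {
        PriceCalculator calc = new PriceCalculator();
        ReceiptFormatter fmt = new ReceiptFormatter();
        return fmt.format(calc.totalCents(unitCents, qty));
    }
}
```

An interface defect -- say, one component working in cents and the other in whole currency units -- would slip past both unit tests but fail this integration check.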
System Testing
For most organizations, software and system testing represents a significant element of a project's cost in terms of money and management time. Making this function more effective can deliver a range of benefits, including reduced risk, lower development costs, and improved time to market for new systems.

Systems with software components, and software-intensive systems, grow more complex every day; industry sectors such as telecom, automotive, railway, aeronautics, and space are good examples. It is generally agreed that testing is essential to manufacturing reliable products. However, the validation process does not often receive the attention it requires, and it sits close to other activities such as conformance, acceptance, and qualification testing.

The difference between function testing and system testing is that the focus is now on the whole application and its environment. Therefore the complete program must be available. This does not mean that the single functions of the whole program are tested again, because that would be too redundant. The main goal is rather to demonstrate the discrepancies of the product from its requirements and its documentation. In other words, this again asks the question, "Did we build the right product?" and not just, "Did we build the product right?"

However, system testing does not only deal with this more economical problem; it also contains aspects oriented on the word "system". Those tests should be done in the environment for which the program was designed, such as a multiuser network. Even security guidelines have to be included. Once again, this test cannot be done completely; nevertheless, while it is one of the most incomplete test methods, it is one of the most important.
A number of time-domain software reliability models attempt to predict the growth of a system's reliability during the system test phase of the development life cycle. One study, for example, examined the results of applying several types of Poisson-process models to the development of a large system for which system test was performed in two parallel tracks, using different strategies for test data selection.

System testing verifies that the functionality of a system meets its specifications, integrating with whichever type of development methodology is being applied. It tests for the errors users are likely to make as they interact with the application, as well as the application's ability to trap errors gracefully. These techniques can be applied flexibly, whether testing a financial system, e-commerce, an online casino, or games.
These benefits are achieved as a result of some fundamental principles of testing; for example, increased independence naturally increases objectivity. Your test strategy must take into consideration the risks to your organisation, both commercial and technical. You will have a personal interest in the system's success, in which case it is only human for your objectivity to be compromised.

System Testing Techniques
o The goal is to evaluate the system as a whole, not its parts.
o Techniques can be structural or functional.
o Techniques can be used in any stage that tests the system as a whole (acceptance, installation, etc.).
o Techniques are not mutually exclusive.

Structural techniques
o Stress testing - test larger-than-normal capacity in terms of transactions, data, users, speed, etc.
o Execution testing - test performance in terms of speed, precision, etc.
o Recovery testing - test how the system recovers from a disaster, how it handles corrupted data, etc.
o Operations testing - test how the system fits in with existing operations and procedures in the user organization.
o Compliance testing - test adherence to standards.
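The stress-testing idea above can be sketched in code: drive a component with a far larger-than-normal volume of transactions and verify it still behaves correctly. The WorkQueue component and the load figure are illustrative, not taken from this module.

```java
// Hypothetical stress-test sketch: push a large transaction volume
// through a simple component and confirm nothing is lost or reordered.
import java.util.ArrayDeque;
import java.util.Deque;

public class StressDemo {
    // Component under stress: a simple FIFO work queue.
    static class WorkQueue {
        private final Deque<Integer> items = new ArrayDeque<>();
        void submit(int job) { items.addLast(job); }
        Integer take() { return items.pollFirst(); }
        int size() { return items.size(); }
    }

    // Submit 'load' jobs, then drain them, checking order and completeness.
    public static boolean survives(int load) {
        WorkQueue q = new WorkQueue();
        for (int i = 0; i < load; i++) {
            q.submit(i);
        }
        for (int i = 0; i < load; i++) {
            Integer job = q.take();
            if (job == null || job != i) {
                return false;  // a job was lost or reordered under load
            }
        }
        return q.size() == 0;
    }
}
```

A real stress test would also measure timing and resource usage at these volumes, which is what execution testing addresses.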
SUMMARY
Testing, irrespective of phase, should encompass the following:
o The cost of failure associated with defective products being shipped and used by the customer is enormous.
o Find out whether the integrated product works as per the customer requirements.
o Evaluate the product from an independent perspective.
o Identify as many defects as possible before the customer finds them.
o Reduce the risk of releasing the product.
Hence the system test phase should begin once modules are integrated enough to perform tests in a whole-system environment. System testing can occur in parallel with integration testing, especially with the top-down method.
Answers: 1) a 2) b
public void testSimpleAdd() {
    Money m12CHF= new Money(12, "CHF");
    Money m14CHF= new Money(14, "CHF");
    Money expected= new Money(26, "CHF");
    Money result= m12CHF.add(m14CHF);
    assertTrue(expected.equals(result));
}
Fixture
What if you have two or more tests that operate on the same or similar sets of objects?

Tests need to run against the background of a known set of objects. This set of objects is called a test fixture. When you are writing tests you will often find that you spend more time writing the code to set up the fixture than you do actually testing values. To some extent, you can make writing the fixture code easier by paying careful attention to the constructors you write. However, a much bigger savings comes from sharing fixture code. Often, you will be able to use the same fixture for several different tests. Each case will send slightly different messages or parameters to the fixture and will check for different results.

When you have a common fixture, here is what you do:
o Create a subclass of TestCase.
o Create a constructor which accepts a String as a parameter and passes it to the superclass.
o Add an instance variable for each part of the fixture.
o Override setUp() to initialize the variables.
o Override tearDown() to release any permanent resources you allocated in setUp().

For example, to write several test cases that want to work with different combinations of 12 Swiss Francs, 14 Swiss Francs, and 28 US Dollars, first create a fixture:

public class MoneyTest extends TestCase {
    private Money f12CHF;
    private Money f14CHF;
    private Money f28USD;

    protected void setUp() {
        f12CHF= new Money(12, "CHF");
        f14CHF= new Money(14, "CHF");
        f28USD= new Money(28, "USD");
    }
}

Once you have the fixture in place, you can write as many test cases as you'd like.
Test Case
How do you write and invoke an individual test case when you have a fixture?

Writing a test case without a fixture is simple: override runTest in an anonymous subclass of TestCase. You write test cases for a fixture the same way, by making a subclass of TestCase for your setup code and then making anonymous subclasses for the individual test cases. However, after a few such tests you would notice that a large percentage of your lines of code are sacrificed to syntax. JUnit provides a more concise way to write a test against a fixture. Here is what you do:
o Write the test case method in the fixture class. Be sure to make it public, or it can't be invoked through reflection.
o Create an instance of the TestCase class and pass the name of the test case method to the constructor.

For example, to test the addition of a Money and a MoneyBag, write:

public void testMoneyMoneyBag() {
    // [12 CHF] + [14 CHF] + [28 USD] == {[26 CHF][28 USD]}
    Money bag[]= { f26CHF, f28USD };
    MoneyBag expected= new MoneyBag(bag);
    assertEquals(expected, f12CHF.add(f28USD.add(f14CHF)));
}

Create an instance of MoneyTest that will run this test case like this:

new MoneyTest("testMoneyMoneyBag")

When the test is run, the name of the test is used to look up the method to run. Once you have several tests, organize them into a suite.
Suite
How do you run several tests at once?

As soon as you have two tests, you'll want to run them together. You could run the tests one at a time yourself, but you would quickly grow tired of that. Instead, JUnit provides an object, TestSuite, which runs any number of test cases together. For example, to run a single test case, you execute:

TestResult result= (new MoneyTest("testMoneyMoneyBag")).run();

To create a suite of two test cases and run them together, execute:

TestSuite suite= new TestSuite();
suite.addTest(new MoneyTest("testMoneyEquals"));
suite.addTest(new MoneyTest("testSimpleAdd"));
TestResult result= suite.run();

Another way is to let JUnit extract a suite from a TestCase. To do so, you pass the class of your TestCase to the TestSuite constructor.
TestRunner
How do you run your tests and collect their results?

Once you have a test suite, you'll want to run it. JUnit provides tools to define the suite to be run and to display its results. You make your suite accessible to a TestRunner tool with a static method suite() that returns a test suite. For example, to make a MoneyTest suite available to a TestRunner, add the following code to MoneyTest:
public static Test suite() {
    TestSuite suite= new TestSuite();
    suite.addTest(new MoneyTest("testMoneyEquals"));
    suite.addTest(new MoneyTest("testSimpleAdd"));
    return suite;
}

If a TestCase class doesn't define a suite method, a TestRunner will extract a suite and fill it with all the methods starting with "test". JUnit provides both a graphical and a textual version of the TestRunner tool. Start the graphical version by typing java junit.awtui.TestRunner or java junit.swingui.TestRunner. The graphical user interface presents a window with:
o A field to type in the name of a class with a suite method.
o A run button to start the test.
o A progress indicator that turns from green to red in the case of a failed test.
o A list of failed tests.

In the case of an unsuccessful test, JUnit reports the failed tests in a list at the bottom. JUnit distinguishes between failures and errors. A failure is anticipated and checked for with assertions. Errors are unanticipated problems, like an ArrayIndexOutOfBoundsException.
In a dynamic programming environment like VisualAge for Java, which supports hot code update, you can leave the JUnit window up all the time. In other environments you have to restart the graphical version for each run, which is tedious and time consuming. As an alternative, JUnit's AWT and Swing UIs use junit.runner.LoadingTestCollector, which reloads all your classes for each test run. This feature can be disabled by unchecking the 'Reload classes every run' checkbox.

There is a batch interface to JUnit, also. To use it, type java junit.textui.TestRunner followed by the name of the class with a suite method at an operating system prompt. The batch interface shows the result as text output. An alternative way to invoke the batch interface is to define a main method in your TestCase class. For example, to start the batch TestRunner for MoneyTest, write:

public static void main(String args[]) {
    junit.textui.TestRunner.run(suite());
}

With this definition of main you can run your tests by simply typing java MoneyTest at an operating system prompt. For either the graphical or the textual version, make sure that the junit.jar file is on your CLASSPATH.
Test Strategy Selection
Selection of the test strategy is based on the following factors:
Product
o The test strategy is based on the application; for example, an application that helps people, and teams of people, make decisions.
Risks
o Suggestion of wrong ideas.
o People will use the product incorrectly.
o Incorrect comparison of scenarios.
o Scenarios may be corrupted.
o Unable to handle complex decisions.
Strategies
o Understand the underlying algorithm.
o Simulate the algorithm in parallel.
o Capability-test each major function.
o Generate a large number of decision scenarios.
o Create complex scenarios and compare them.
o Review documentation and help.
o Test for sensitivity to user error.
Test Plan
A Test Plan can be defined as a document that describes the scope, approach, resources, and schedule of intended test activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.

The main purpose of preparing a Test Plan is that everyone concerned with the project is in sync with regard to the scope, responsibilities, deadlines, and deliverables for the project. It is in this respect that reviews and a sign-off are very important, since they mean that everyone is in agreement on the contents of the test plan; this also helps in case of any dispute during the course of the project (especially between the developers and the testers).

Purpose of preparing a Test Plan
A Test Plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will read it.

Contents of a Test Plan
o Purpose
o Scope
o Test Approach
o Entry Criteria
o Resources
o Tasks / Responsibilities
o Exit Criteria
o Schedules / Milestones
Consideration should also be given to the development of a comprehensive corporate awareness program for communicating the procedures for the business recovery process.

Training Needs Assessment
The plan must specify which person or group of persons requires which type of training. All new or revised processes must be explained carefully to the staff. For example, it may be necessary to carry out some process manually if the IT system is down for any length of time; these manual procedures must be fully understood by the persons who are required to carry them out. For larger organizations it may be practical to carry out the training in a classroom environment; for smaller organizations the training may be better handled in a workshop style. This section of the BCP will identify, for each business process, what type of training is required and which persons or groups of persons need to be trained.

Training Materials Development Schedule
Once the training needs have been identified, it is necessary to specify and develop suitable training materials. This can be a time-consuming task, and unless priority is given to critical training programmes, it could delay the organization in reaching an adequate level of preparedness. This section of the BCP contains information on each of the training programmes, with details of the training materials to be developed, an estimate of resources, and an estimate of the completion date.

Prepare Training Schedule
Once it has been agreed who requires training and the training materials have been prepared, a detailed training schedule should be drawn up.
Scratch area
o Used for investigative update tests and those which have unusual requirements.
o Existing data cannot be trusted.
o Used at the tester's own risk!

Testing rarely has the luxury of completely separate environments for each test and each tester. Controlling data, and the access to data, in a system can be fraught. Many different stakeholders have different requirements of the data, but a common requirement is that of exclusive use. While the impact of this requirement should not be underestimated, a number of stakeholders may be able to work with the same environmental data and, to a lesser extent, setup data, and their work may not need to change the environmental or setup data. The test strategy can take advantage of this by disciplined use of text/value fields, allowing the use of 'soft' partitions.

'Soft' partitions allow the data to be split up conceptually, rather than physically. Although testers are able to interfere with each other's tests, the team can be educated to avoid each other's work. If, for instance, tester 1's tests may only use customers with Russian nationality and tester 2's tests only French, the two sets of work can operate independently in the same dataset. A safe area could consist of London addresses, the change area Manchester addresses, and the scratch area Bristol addresses. Typically, values in free-text fields are used for soft partitioning.
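The soft-partitioning discipline described above can be sketched in code: testers share one physical dataset but each works only through a view filtered by an agreed discriminator value. The Customer type and the nationality values are illustrative only, echoing the example in the text.

```java
// Hypothetical sketch of a 'soft partition': one shared dataset, split
// conceptually by a discriminator field rather than physically.
import java.util.ArrayList;
import java.util.List;

public class SoftPartitionDemo {
    static class Customer {
        final String name;
        final String nationality;  // the agreed partitioning field
        Customer(String name, String nationality) {
            this.name = name;
            this.nationality = nationality;
        }
    }

    // Each tester's code accesses the shared data only through a view
    // filtered to their own partition value.
    public static List<Customer> partition(List<Customer> all, String nationality) {
        List<Customer> mine = new ArrayList<>();
        for (Customer c : all) {
            if (c.nationality.equals(nationality)) {
                mine.add(c);
            }
        }
        return mine;
    }
}
```

The discipline is social as much as technical: nothing stops tester 1 from touching tester 2's records, so the team must agree never to create or modify data outside its own partition.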
Not loaded at all
o Some tests simply take whatever is in the system and try to test with it. This can be appropriate where a dataset is known and consistent, or has been set up by a prior round of testing. It can also be appropriate in environments where data cannot be reloaded, such as the live system. However, it can be symptomatic of an uncontrolled approach to data, and is not often desirable.

Environmental data tends to be manually loaded, either at installation or by manipulating environmental or configuration scripts. Large volumes of setup data can often be generated from existing datasets and loaded using a data load tool, while small volumes of setup data often have an associated system maintenance function and can be input using the system. Fixed input data may be generated or migrated and is loaded using any and all of the methods above, while consumable input data is typically listed in test scripts or generated as an input to automation tools. When data is loaded, it can append itself to existing data, overwrite existing data, or delete existing data first. Each is appropriate in different circumstances, and due consideration should be given to the consequences.

Testing the Data
A theme brought out at the start of this paper was 'A System is Programmed by its Data'. In order to test the system, one must also test the data it is configured with: the environmental and setup data. Environmental data is necessarily different between the test and live environments. Although testing can verify that the environmental variables are being read and used correctly, there is little point in testing their values on a system other than the target system. Environmental data is often checked
Collecting Status Data
Four categories of data will be collected during testing. These are explained in the following paragraphs.

Test Results Data
This data includes:
o Test factors - The factors incorporated in the plan, the validation of which becomes the test objective.
o Business objectives - The validation that specific business objectives have been met.
o Interface objectives - Validation that data/objects can be correctly passed among software components.
o Functions/sub-functions - Identifiable software components, normally associated with the requirements of the software.
o Units - The smallest identifiable software components.
o Platform - The hardware and software environment in which the software system will operate.

Test Transactions, Test Suites, and Test Events
These are the test products produced by the test team to perform testing.
o Test transactions/events: The types of tests that will be conducted during the execution of tests, based on the software requirements.
o Inspections: A verification of process deliverables against deliverable specifications.
o Reviews: Verification that the process deliverables/phases are meeting the users' true needs.

Defect
This category includes a description of the individual defects uncovered during the testing process. This description includes, but is not limited to:
o Date the defect was uncovered
o Name of the defect
o Location of the defect
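The defect description above is essentially a small data structure. As an illustrative sketch only (the class and field names are hypothetical, not part of this module's standard), it might be captured like this:

```java
// Hypothetical sketch: a minimal record for the defect data listed above
// (date uncovered, name, location).
public class DefectRecord {
    final String dateUncovered;  // date the defect was uncovered
    final String name;           // name of the defect
    final String location;       // location of the defect (e.g., module/method)

    public DefectRecord(String dateUncovered, String name, String location) {
        this.dateUncovered = dateUncovered;
        this.name = name;
        this.location = location;
    }

    // A one-line summary suitable for inclusion in a status report.
    public String summary() {
        return dateUncovered + " | " + name + " @ " + location;
    }
}
```

A real defect-tracking tool would add many more fields (severity, status, the test that uncovered it), but the principle of recording each uncovered defect in a structured, reportable form is the same.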
Methods of Test Reporting
Reporting tools - Use of word processing, database, defect tracking, and graphic tools to prepare test reports. For example, DataVision is a database reporting tool similar to Crystal Reports. Reports can be viewed and printed from the application, or output as HTML, LaTeX2e, XML, DocBook, or tab- or comma-separated text files. From the LaTeX2e and DocBook output files you can in turn produce PDF, text, HTML, PostScript, and more.

Some query tools available for Linux-based databases include:
o MySQL
o dbMetrix
o PgAccess
o Cognos Powerhouse - Not yet available for Linux; Cognos is gauging interest in the product to decide its strategy with respect to the Linux "market".
o GRG (GNU Report Generator) - Reads record and field information from a dBase3+ file, a delimited ASCII text file, or a SQL query to an RDBMS, and produces a report listing. The program was loosely designed to produce TeX/LaTeX formatted output, but plain ASCII text, troff, PostScript, HTML, or any other kind of ASCII-based output format can be produced just as easily.

Word processing: One way of increasing the utility of computers and word processors for the teaching of writing may be to use software that guides the processes of generating, organizing, composing, and revising text. This allows each person to use the normal functions of the computer keyboard that are common to all word processors, email editors, order entry systems, and database management products. From the Report Manager, however, you can quickly scan through any number of these reports and see how each person's history compares. A one-page summary report may be printed with either the Report Manager program or from the individual keyboard or keypad software at any time. Individual reports include all of the following information:
o Status Report
o Word Processing Tests or Keypad Tests
o Basic Skills Tests or Data Entry Tests
o Progress Graph
o Game Scores
o Test Report for each test
System Test Reports
A system test plan standard identifies the objectives of testing: what is to be tested, how it is to be tested, and when the tests should occur. The system test report should present the results of executing that test plan. If these details are maintained electronically, they need only be referenced, not included in the report.

Acceptance Test Report
There are two primary objectives of the acceptance testing report. The first is to ensure that the system as implemented meets the real operating needs of the user/customer; if the defined requirements are those true needs, testing should have accomplished this objective. The second is to ensure that the software system can operate in the real-world user environment, which includes people skills and attitudes, time pressures, changing business conditions, and so forth. The acceptance test report should address each of these criteria for user acceptance.

Conclusion
The test logs obtained from executing the tests, and finally the test reports, should be designed to accomplish the following objectives:
o Provide information to the customer on whether the system should be placed into production and, if so, the potential consequences and the appropriate actions to minimize them.
o Serve long-term objectives, one for the project and the other for the information technology function. The project can use the test report to trace problems in the event the application malfunctions in production; knowing which functions have been correctly tested and which ones still contain defects can assist in taking corrective action.
o Provide data that can be used to analyze the development process and make changes to prevent defects from occurring in the future. Identified defect-prone components point to tasks/steps that, if improved, could eliminate or minimize the occurrence of high-frequency defects in the future.
Test Case
A test case is a testing work product that automatically performs a single test on an executable work product.
Goals
The goals of a test case are to automate or document the following: Perform a single test (e.g., a single test of a use case path or class method). Cause failures that uncover underlying defects so that they can be identified and removed. Help improve the quality of the item under test. Help developers understand the behavior of the item under test. Help developers improve the quality of the specifications (e.g., use case path and class responsibilities) of the item under test.
Objectives
To support these goals, the objectives of a single test case include: Document the purpose of the test case (i.e., the part of the item under test being tested and the type of failures to be elicited). Document the producer of the test case.
Evaluator: Test Inspection Team
Approvers: None
Maintainers:
o The Requirements Team for requirements model test cases.
o The Architecture Team for architecture model test cases.
o The Software Development Team for design model and unit test cases.
Phases
Initiation: Completed
Construction: Completed
Delivery: Completed
Usage: Maintained
Retirement: Archived
Preconditions
A test case typically can be started if the following preconditions hold: The relevant sections of the Project Test Plan are completed. The relevant team is staffed. The relevant requirements, architecture, or design are completed. The relevant item under test is started. The relevant test suite is started.
Inputs
Work products: Project Test Plan, System Requirements Specification, System Architecture Document, Software Architecture Document, Javadoc including responsibilities, software components (e.g., method signatures, assertions, branching and looping logic)
Stakeholders: None
SUMMARY
A test plan contains a description of the testing objectives and goals, the test strategy/approach based on customer priorities, the test environment (hardware, software, network, communication, etc.), the features to test with their priority/criticality, and the test deliverables. A test case has a set of test inputs, execution conditions, and expected results; it reflects the tests that need to be performed.
What is a Defect?
A mismatch between the application and its specification is a defect. A software error is present when the program does not do what its end user expects it to do. A defect is a product anomaly or flaw; defects include such things as omissions and imperfections found during the testing phases. Symptoms (flaws) of faults contained in software that is sufficiently mature for production are considered defects, as is any deviation from expectation that is to be tracked and resolved. An evaluation of the defects discovered during testing provides the best indication of software quality. Quality is an indication of how well the system meets its requirements, so in this context defects are identified as any failure to meet the system requirements. Defect evaluation is based on methods that range from a simple number count to rigorous statistical modeling. Rigorous evaluation makes assumptions about the arrival or discovery rates of defects during the testing process; the actual data about defect rates are then fit to the model. Such an evaluation estimates the current system reliability and predicts how the reliability will grow if testing and defect removal continue. This evaluation is described as system reliability growth modeling.
Defect Classification
The severity of defects will be classified as follows:
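The reliability growth modeling mentioned above can be sketched numerically. The module does not name a specific model, so the Goel-Okumoto model is assumed here as one common choice, and the parameter values in the comments are purely illustrative.

```python
import math

def expected_defects(a, b, t):
    """Goel-Okumoto NHPP model: cumulative defects expected to be
    found by testing time t. `a` is the total expected number of
    defects, `b` the per-defect detection rate (both fitted from
    observed defect-discovery data in practice)."""
    return a * (1.0 - math.exp(-b * t))

def remaining_defects(a, b, t):
    """Estimated defects still latent after testing time t."""
    return a - expected_defects(a, b, t)
```

Fitting `a` and `b` to the actual arrival data, then evaluating `remaining_defects`, is what turns a raw defect count into a reliability estimate.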
Once the development team has started working on the defect, the status is set to WIP (Work in Progress); if the development team is waiting for a go-ahead or some technical feedback, it is set to DEV WAITING. After the development team has fixed the defect, the status is set to FIXED, which means the defect is ready to re-test. If, on re-testing, the defect still exists, the status is set to REOPENED, and it follows the same cycle as an open defect. If the fixed defect satisfies the requirements and passes the test case, it is set to CLOSED.
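One reading of this lifecycle can be sketched as a small state machine; the exact set of allowed transitions is an interpretation of the prose above, not taken from a tool.

```python
# Allowed defect-status transitions, as described in the text above.
TRANSITIONS = {
    "OPEN":        {"WIP", "DEV WAITING"},
    "DEV WAITING": {"WIP"},
    "WIP":         {"FIXED", "DEV WAITING"},
    "FIXED":       {"REOPENED", "CLOSED"},  # outcome of the re-test
    "REOPENED":    {"WIP", "DEV WAITING"},  # same cycle as an open defect
    "CLOSED":      set(),
}

def move(status, new_status):
    """Return the new status if the transition is allowed, else raise."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"illegal transition {status} -> {new_status}")
    return new_status
```

A defect tracker built this way rejects, for example, closing a defect that was never fixed.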
SUMMARY
A bug report is a case against a product. To be useful, it must supply all the information necessary not only to identify the problem but to fix it as well. It is not enough to say that something is wrong; the report must also say what the system should be doing. The report should be written in clear, concise steps, so that someone who has never seen the system can follow them and reproduce the problem. It should include information about the product, including the version number and what data was used.
Chapter 8: Automation
Learning Objective
After completing this chapter, you will be able to: Explain automated testing
What is Automation?
Automated testing is automating the manual testing process currently in use.
Automation Benefits
Today, rigorous application testing is a critical part of virtually all software development projects. As more organizations develop mission-critical systems to support their business activities, the need for testing methods that support business objectives greatly increases. It is necessary to ensure that these systems are reliable, built according to specification, and able to support business processes. Many internal and external factors are forcing organizations to ensure a high level of software quality and reliability. In the past, most software tests were performed using manual methods, which required a large staff of test personnel to perform expensive and time-consuming manual test procedures. Owing to the size and complexity of today's advanced software applications, manual testing is no longer a viable option for most testing situations. Every organization has unique reasons for automating software quality activities, but several reasons are common across industries.
Using Testing Effectively
By definition, testing is a repetitive activity. The very nature of application software development dictates that no matter which methods are employed to carry out testing (manual or automated), testing remains repetitious throughout the development lifecycle. Automating the testing process allows machines to complete the tedious, repetitive work while human personnel perform other tasks. Automation reduces or eliminates the "think time" or "read time" necessary for the manual interpretation of when or where to click the mouse or press the Enter key. An automated test executes the next operation in the test hierarchy at machine speed, allowing tests to be completed many times faster than the fastest individual could manage. Furthermore, some types of testing, such as load/stress testing, are virtually impossible to perform manually.
Reducing Testing Costs
The cost of performing manual testing is prohibitive when compared to automated methods, because computers can execute instructions many times faster, and with fewer errors, than individuals. Many automated testing tools can replicate the activity of a large number of users (and their associated transactions) using a single computer. Therefore, load/stress testing using automated methods requires only a fraction of the computer hardware that would be necessary to complete a manual test. Imagine performing a load test on a typical distributed client/server application on which 50 concurrent users were planned.
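The idea of simulating many concurrent users from a single machine can be sketched with threads; the "transaction" here is a stand-in sleep, purely illustrative of where a real request would go.

```python
import random
import threading
import time

def virtual_user(user_id, results):
    """One simulated user: issue a 'transaction' and record its
    response time. The sleep stands in for a real client request."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.01, 0.05))
    results[user_id] = time.perf_counter() - start

def run_load_test(n_users=50):
    """Run n_users virtual users concurrently; return response times."""
    results = {}
    threads = [threading.Thread(target=virtual_user, args=(i, results))
               for i in range(n_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Commercial load tools do essentially this at protocol level, which is why one machine can replace a room of manual testers for load testing.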
Identifying Tests Requiring Automation - Most, but not all, types of tests can be automated. Certain types, such as user comprehension tests, tests that run only once, and tests that require constant human intervention, are usually not worth the investment to automate. The following are examples of criteria that can be used to identify tests that are prime candidates for automation.
High Path Frequency - Automated testing can be used to verify the performance of application paths that are used with a high degree of frequency when the software is running in full production. Examples include creating customer records, invoicing, and other high-volume activities where software failures would occur frequently.
Critical Business Processes - In many situations, software applications can literally define or control the core of a company's business. If the application fails, the company can face extreme disruptions in critical operations. Mission-critical processes are prime candidates for automated testing. Examples include financial month-end closings, production planning, sales order entry, and other core activities. Any application with a high degree of risk associated with a failure is a good candidate for test automation.
Repetitive Testing - If a testing procedure can be reused many times, it is also a prime candidate for automation. For example, common outline files can be created to establish a testing session, close a testing session, and apply testing values. These automated modules can be used again and again without having to rebuild the test scripts. This modular approach saves time and money when compared to creating a new end-to-end script for each and every test.
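These criteria can be expressed as a simple triage function. This is only an illustrative sketch; the field names are invented, and real triage would weigh cost and risk rather than use booleans.

```python
def automation_candidate(test):
    """Flag a test as an automation candidate using the criteria above.
    `test` is a dict of boolean fields; the field names are illustrative."""
    # Usually not worth the automation investment:
    if test.get("runs_once") or test.get("needs_human_judgment"):
        return False
    # Prime candidates: frequent paths, critical processes, reuse.
    return bool(test.get("high_path_frequency")
                or test.get("business_critical")
                or test.get("highly_repetitive"))
```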
Automation Methods
Capture/Playback Approach
Capture/playback tools capture, in a test script, the sequence of manual operations entered by the test engineer. These sequences are played back during test execution. The benefit of this approach is that the captured session can be re-run at some later point to ensure that the system performs the required behavior. The shortcoming of capture/playback is that in many cases, if the system functionality changes, the session must be completely re-recorded to capture the new sequence of user interactions. Tools like WinRunner provide a scripting language, and it is possible for engineers to edit and maintain such scripts. This sometimes reduces the effort over the completely manual approach; however, the overall savings is usually minimal.
Data-Driven Approach
A data-driven test plays back the same user actions but with varying input values. This allows one script to test multiple sets of data, and is applicable when large volumes of different data sets need to be fed to the application and tested for correctness. The benefits of this approach are that it consumes less time and is more accurate than manual testing, and that testing can be done with both positive and negative data simultaneously.
Test Script Execution
In this phase we execute the scripts that have already been created. Scripts need to be reviewed, validated for results, and accepted as functioning as expected before they are used live. Steps to be followed before execution of scripts: The test tool must be installed on the machine. The test environment/application to be tested must be installed on the machine. Prerequisites for running the scripts, such as tool settings, playback options, and any necessary data table or data pool updates, must be taken care of. Select the script that needs to be executed and run it. Wait until execution is done. Analyze the results via Test Manager or in the logs.
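The data-driven approach above can be sketched tool-independently: one loop over a data table exercises both positive and negative rows. The discount function and the data rows are invented for illustration.

```python
def apply_discount(price, percent):
    """Toy function under test (stands in for the real application)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (100 - percent) / 100, 2)

# One script, many data rows: positive and negative cases side by side.
CASES = [
    # (price, percent, expected, should_raise)
    (100.0, 10, 90.0, False),
    (100.0, 0, 100.0, False),
    (100.0, 101, None, True),   # negative test: invalid input
]

def run_data_driven():
    """Play back the same actions over every data row; return failures."""
    failures = []
    for price, pct, expected, should_raise in CASES:
        try:
            got = apply_discount(price, pct)
            if should_raise or got != expected:
                failures.append((price, pct))
        except ValueError:
            if not should_raise:
                failures.append((price, pct))
    return failures
```

Adding coverage then means adding rows to `CASES`, not writing new scripts, which is the cost saving the text describes.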
SUMMARY
Automated testing is the automation of the manual testing process currently in use. Owing to the size and complexity of today's advanced software applications, manual testing is no longer a viable option for most testing situations.
Rational Administrator
What is a Rational Project? A Rational project is a logical collection of databases and data stores that associates the data you use when working with Rational Suite. A Rational project is associated with one Rational Test data store, one RequisitePro database, one ClearQuest database, and multiple Rose models and RequisitePro projects, and optionally places them under configuration management. Rational Administrator is used to create and manage Rational repositories, users, and groups, and to manage security privileges. How to create a new project?
Once the Create button in the Configure Project window is chosen, the Create Test Data Store window shown below will be displayed. Accept the default path and click the OK button.
Once the window below is displayed, it confirms that the test data store was created successfully; click OK to close the window.
Click OK in the Configure Project window, and your first Rational project is ready to play with. Rational Administrator will display your TestProject details as below:
Rational Robot
Rational Robot is used to develop three kinds of scripts: GUI scripts for functional testing, and VU and VB scripts for performance testing. Robot can be used to: Perform full functional testing: record and play back scripts that navigate through your application and test the state of objects through verification points. Perform full performance testing: use Robot and TestManager together to record and play back scripts that help you determine whether a multi-client system is performing within user-defined standards under varying loads. Create and edit scripts using the SQABasic, VB, and VU scripting environments; the Robot editor provides color-coded commands with keyword Help for powerful integrated programming during script development. Test applications developed with IDEs and technologies such as Visual Basic, Oracle Forms, PowerBuilder, HTML, and Java. Test objects even if they are not visible in the application's interface. Collect diagnostic information about an application during script playback; Robot is integrated with Rational Purify, Quantify, and PureCoverage, so you can play back scripts under a diagnostic tool and see the results in the log. The Object-Oriented Recording technology in Robot lets you generate scripts quickly by simply running and using the application under test. Robot uses Object-Oriented Recording to identify objects by their internal object names, not by screen coordinates; if objects change locations or their text changes, Robot still finds them on playback.
Once logged in, you will see the Robot window. Go to File -> New -> Script. In the screen displayed, enter the name of the script (say, First Script), by which the script will be referred to from now on, and optionally a description. The type of the script is GUI for functional testing and VU for performance testing.
The GUI Script window (top pane) displays GUI scripts that you are currently recording, editing, or debugging. It has two panes: the Asset pane (left) lists the names of all verification points and low-level scripts for this script; the Script pane (right) displays the script. The Output window (bottom pane) has two tabs: Build displays compilation results for all scripts compiled in the last operation, with line numbers enclosed in parentheses to indicate lines with warnings and errors; Console displays messages that you send with the SQAConsoleWrite command, as well as certain system messages from Robot. To display the Output window, click View -> Output.
How to record and play back a script? To record a script, go to Record -> Insert at Cursor, then perform the navigation in the application to be tested; once recording is done, stop the recording with Record -> Stop.
In this window we can set general options. General tab: identification of lists and menus, and recording of think time. Web Browser tab: mention the browser type, IE or Netscape. Robot Window tab: how Robot should be displayed during recording, and hotkey details. Object Recognition Order tab: the order in which objects are recognized during recording. For example, select a preference in the Object order preference list; if you will be testing C++ applications, change the object order preference to C++ Recognition Order.
Go to Tools -> Playback Options to set the options needed while running the script. This will help you handle unexpected windows during playback, manage error recovery, set the time-out period, and manage the log and log data.
Verification Points
A verification point is a point in a script that you create to confirm the state of an object across builds of the application under test. During recording, the verification point captures object information (based on the type of verification point) and stores it in a baseline data file; the information in this file becomes the baseline of the expected state of the object during subsequent builds. When you play back the script against a new build, Robot retrieves the information in the baseline file for each verification point and compares it to the state of the object in the new build. If the captured object does not match the baseline, Robot creates an actual data file, whose contents show the actual state of the object in that build. After playback, the results of each verification point appear in the log in TestManager. If a verification point fails (the baseline and actual data do not match), you can select the verification point in the log and click View Verification Point to open the appropriate Comparator, which displays the baseline and actual files so that you can compare them.
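The baseline-capture-and-compare mechanism can be sketched tool-independently. This is not Robot's actual implementation, only an illustration of the same idea: first run captures a baseline file, later runs compare against it and record an "actual" file on mismatch.

```python
import json
import pathlib

def verify(name, actual, baseline_dir):
    """Compare `actual` against the stored baseline for verification
    point `name`. On the first run the state is captured as the
    baseline; on a mismatch an 'actual' file is written so the two
    can be compared later, as Robot's Comparator does."""
    base = pathlib.Path(baseline_dir) / f"{name}.baseline.json"
    if not base.exists():
        base.write_text(json.dumps(actual, sort_keys=True))
        return True  # first run: baseline captured, point passes
    expected = json.loads(base.read_text())
    if actual == expected:
        return True
    # Mismatch: record the actual state for later comparison.
    actual_file = pathlib.Path(baseline_dir) / f"{name}.actual.json"
    actual_file.write_text(json.dumps(actual, sort_keys=True))
    return False
```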
During compilation, the Build tab in the Output window displays compilation results and error messages, with line numbers, for all compiled scripts and library source files.
After the script is created, compiled, and any errors fixed, it can be executed. The results are then analyzed in TestManager.
When script execution starts, the following window will be displayed. The folder in which the log is to be stored, and the log name, need to be given in this window.
In the Results tab of TestManager, you can see the stored results. From TestManager you can find details such as the start time of the script.
Supported environments
Operating systems: WinNT 4.0 with Service Pack 5, Win2000, WinXP (Rational 2002), Win98, Win95 with Service Pack 1
Protocols: Oracle, SQL Server, HTTP, Sybase, Tuxedo, SAP, PeopleSoft
Web browsers: IE 4.0 or later, Netscape Navigator (limited support)
SUMMARY
Rational offers the most complete lifecycle toolset (including testing) of these vendors for the Windows platform. When it comes to object-oriented development, they are acknowledged leaders, with many of the leading OO experts working for them. Some of their products are worldwide leaders, e.g., Rational Robot, Rational Rose, ClearCase, and RequisitePro.
Phase 2 Test Plan
The following configuration information will be identified as part of identifying the performance testing environment requirements: hardware platform, server machines, processors, memory, disk storage, load machine configuration, and network configuration.
Phase 3 Test Design
Based on the test strategy, detailed test scenarios will be prepared. During the test design period the following activities will be carried out: scenario design; detailed test execution plan; dedicated test environment setup; script recording/programming; script customization (delays, checkpoints, synchronization points); data generation; and parameterization/data pooling.
Phase 4 Scripting
Phase 5 Test Execution
Test execution will follow the various types of test identified in the test plan, and all the identified scenarios will be executed. Virtual user loads are simulated based on the usage pattern, and load levels are applied as stated in the performance test strategy. The following artifacts will be produced during the test execution period: test logs and test results.
Phase 7 Preparation of Reports
The test logs and results generated are analyzed for performance under various loads: transactions per second, database throughput, network throughput, think time, network delay, resource usage, transaction distribution, and data handling. Manual and automated analysis methods can be used for performance results. The following performance test reports/graphs can be generated as part of performance testing: transaction response time, transactions per second, transaction summary graph, transaction performance summary graph, transaction response under load graph, and virtual user summary graph.
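Two of the headline numbers in these reports reduce to simple arithmetic; the sketch below computes throughput and a nearest-rank percentile response time from raw measurements (sample values invented).

```python
import math

def tps(n_transactions, elapsed_seconds):
    """Transactions per second over a measurement interval."""
    return n_transactions / elapsed_seconds

def percentile(response_times, p):
    """p-th percentile response time, nearest-rank method."""
    ordered = sorted(response_times)
    # rank = ceil(p/100 * N), converted to a 0-based index
    k = max(0, math.ceil(p / 100 * len(ordered)) - 1)
    return ordered[k]
```

Reporting a 90th-percentile response time alongside raw TPS is the usual way to state "acceptable response times under the required transaction volume".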
Common Mistakes in Performance Testing
No goals: there is no general-purpose model; goals determine the techniques, metrics, and workload, and defining them is not trivial.
Biased goals: for example, setting out to show that OUR system is better than THEIRS; the analyst should act as a jury, not an advocate.
Unsystematic approach.
Analysis without understanding the problem.
Incorrect performance metrics.
Unrepresentative workload.
Wrong evaluation technique.
Overlooking important parameters.
Ignoring significant factors.
Inappropriate experimental design.
Inappropriate level of detail.
No analysis.
Erroneous analysis.
Architecture Benchmarking
Hardware Benchmarking - Hardware benchmarking is performed to size the application on the planned hardware platform. It differs significantly from a capacity planning exercise in that it is done after development and before deployment.
Software Benchmarking - Defining the right placement and composition of software instances can help the system scale vertically without the addition of hardware resources. This is achieved through a software benchmark test.
Test Implementation:
o Develop test scripts
o Simulate extreme workloads
Test Execution:
o Regression testing
o Profiling
Test Reporting
Environments
Load testing is typically performed on the following environments using the following tools:
Test Environment:
o Test harness
o Use case modeling tool
o Performance analyzer
o Profiler
(*) Optional stress testing of COTS software components during the technology analysis and technology vendor selection tasks. (**) Optional stress testing of the executable architecture as well as the COTS components during the vendor and tool evaluation and vendor and tool selection tasks.
SUMMARY
Performance testing measures the performance characteristics of an application. Its main objective is to demonstrate that the system functions to specification, meeting its requirements for transaction throughput and response times simultaneously, while processing the required transaction volumes against a realistic production database. The main deliverables from such a test, prior to execution, are automated test scripts and an infrastructure to be used to execute automated tests for extended periods.
The test cases for a particular requirement are classified into Simple, Average, and Complex based on the following four factors: the test case complexity for that requirement, interfaces with other test cases, the number of verification points, and the baseline test data. Refer to the test case classification table given below.
A sample guideline for classification of test cases is given below. Any verification point containing a calculation is considered Complex. Any verification point which interfaces or interacts with another application is classified as Complex. Any verification point consisting of report verification is considered Complex. A verification point comprising search functionality may be classified as Complex or Average depending on its complexity. Depending on the respective project, complexity needs to be identified in a similar manner. Based on the test case type, an adjustment factor is assigned for simple, average, and complex test cases. This adjustment factor was calculated after a thorough study and analysis of many testing projects. The adjustment factor in the table mentioned below is pre-determined and must not be changed per project.
From the breakdown of requirement complexity done in the first step, we get the number of simple, average, and complex test case types. Multiplying the number of requirements of each type by its corresponding adjustment factor gives the simple, average, and complex test case points. Summing the three results, we arrive at the Total Test Case Points count.
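As a worked example of this arithmetic, assuming illustrative adjustment factors of 1, 2, and 3 (the module's actual pre-determined table should be used in practice):

```python
# Illustrative adjustment factors only; the module's pre-determined
# table supplies the real values and must not be changed per project.
ADJUSTMENT = {"simple": 1, "average": 2, "complex": 3}

def total_test_case_points(counts):
    """Total Test Case Points from a mapping of
    complexity class -> number of test cases."""
    return sum(ADJUSTMENT[cls] * n for cls, n in counts.items())
```

For example, 10 simple, 5 average, and 2 complex test cases yield 10*1 + 5*2 + 2*3 = 26 Test Case Points under these assumed factors.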
Coverage analysis is a structural testing technique that helps eliminate gaps in a test suite. It helps most in the absence of a detailed, up-to-date requirements specification. Each project must choose a minimum percent coverage for its release criteria, based on available testing resources and the importance of preventing post-release failures; clearly, safety-critical software should have a high goal. A higher coverage goal should be set for unit testing than for system testing, since a failure in lower-level code may affect multiple high-level callers.
SUMMARY
TCP (Test Case Points) is a measure of the complexity of an application. It is also used as an estimation technique to calculate the size of, and effort required for, a testing project.
REFERENCES
WEBSITES
http://members.tripod.com/~bazman/
http://www.aptest.com/resources.html
http://www.mtsu.edu/~storm/
http://www.softwareqatest.com/
http://www.softwaretesting.nildram.co.uk/
http://www.softwaretestinginstitute.com/
http://www.sqatester.com/
http://www.testing.com/
Cognizant eResources:
http://elibrary/
\\ctsintcosaca\library
BOOKS
The Art of Software Testing, by Glenford J. Myers
The Complete Guide to Software Testing, by Bill Hetzel
Software Testing Techniques, by Boris Beizer
Automated Testing Handbook, by Linda G. Hayes
Automating Specification-Based Software Testing, by Robert M. Poston (IEEE)
Black-Box Testing, by Boris Beizer
Client Server Software Testing on the Desktop and the Web, by Daniel J. Mosley
Fundamental Concepts for the Software Quality Engineer, by Taz Daughtrey
Testing Applications on the Web, by Hung Q. Nguyen
Software Test Automation, by Mark Fewster & Dorothy Graham
Testing Applications on the Web: Test Planning for Mobile and Internet-Based Systems, by Hung Q. Nguyen, Bob Johnson, Michael Hackett & Robert Johnson
Software Testing: A Craftsman's Approach, by Paul C. Jorgensen
Automated Software Testing: Introduction, Management, and Performance, by Elfriede Dustin, Jeff Rashka & John Paul
Practical Tools and Techniques for Managing Hardware and Software Testing, by Rex Black
50 Ways to Improve Your Testing, by Elfriede Dustin
Effective Use of Test Automation Tools, by Mark Fewster & Dorothy Graham
The Craft of Software Testing, by Brian Marick
Software Test Automation, by Bret Pettichord
STUDENT NOTES: