
Testing Tools: TestDirector, WinRunner, LoadRunner

WinRunner (7.4) is an automation testing tool: it tests the application and shows the results. It is a functional and regression testing tool with record-and-playback capability. TestDirector (7.0) is a test management tool used to organize the testing process; we can also create test cases in it, and test scripts can be executed from within the TestDirector environment.
1. What is load testing? - Load testing checks whether the application works correctly under the loads that result from large numbers of simultaneous users and transactions, and determines whether it can handle peak usage periods.

What is performance testing? - Timing for both read and update transactions should be gathered to determine whether system functions are being performed in an acceptable timeframe. This should be done standalone and then in a multi-user environment to determine the effect of multiple transactions on the timing of a single transaction.

Did you use LoadRunner? What version? - Yes, version 7.2.

Explain the load testing process:

Step 1: Planning the test. Here, we develop a clearly defined test plan to ensure the test scenarios we develop will accomplish the load-testing objectives.

Step 2: Creating Vusers. Here, we create Vuser scripts that contain tasks performed by each Vuser, tasks performed by Vusers as a whole, and tasks measured as transactions.

Step 3: Creating the scenario. A scenario describes the events that occur during a testing session. It includes a list of machines, scripts, and Vusers that run during the scenario. We create scenarios using the LoadRunner Controller. We can create manual scenarios as well as goal-oriented scenarios. In manual scenarios, we define the number of Vusers, the load generator machines, and the percentage of Vusers to be assigned to each script. For web tests, we may create a goal-oriented scenario where we define the goal that our test has to achieve; LoadRunner automatically builds the scenario for us.

Step 4: Running the scenario. We emulate load on the server by instructing multiple Vusers to perform tasks simultaneously. Before the testing, we set the scenario configuration and scheduling. We can run the entire scenario, Vuser groups, or individual Vusers.

Step 5: Monitoring the scenario. We monitor scenario execution using the LoadRunner online runtime, transaction, system resource, Web resource, Web server resource, Web application server resource, database server resource, network delay, streaming media resource, firewall server resource, ERP server resource, and Java performance monitors.

Step 6: Analyzing test results. During scenario execution, LoadRunner records the performance of the application under different loads. We use LoadRunner's graphs and reports to analyze the application's performance.
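The steps above are specific to LoadRunner, but the core idea of emulating many simultaneous users and timing their transactions can be sketched in plain Python. This is a hypothetical illustration, not LoadRunner itself; the simulated transaction and the user count are assumptions:

```python
import threading
import time

def user_transaction(results, user_id):
    """One simulated 'Vuser' task: time a unit of work and record it."""
    start = time.perf_counter()
    # Stand-in for a real request to the system under test.
    total = sum(i * i for i in range(10_000))
    results[user_id] = time.perf_counter() - start

def run_scenario(num_users=20):
    """Run all virtual users concurrently, then summarize the timings."""
    results = [0.0] * num_users
    threads = [threading.Thread(target=user_transaction, args=(results, i))
               for i in range(num_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return min(results), max(results), sum(results) / num_users

if __name__ == "__main__":
    fastest, slowest, average = run_scenario()
    print(f"fastest={fastest:.4f}s slowest={slowest:.4f}s avg={average:.4f}s")
```

Comparing the single-user timing against the timings under concurrent load is the essence of the performance question above: the same transaction, measured standalone and then in a multi-user environment.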


Manual Testing

ValDot offers effective manual testing; we use the following techniques. Black Box Testing - testing a function without knowing the internal structure of the program. White Box Testing - testing a function with knowledge of the internal structure of the program. Regression Testing - ensuring that code changes have not had an adverse effect on other modules or on existing functions. Functional Testing:

1. Study the SRS and identify the unit functions.
2. For each unit function, take each input.
3. Identify equivalence classes.
4. Form test cases.
5. Form test cases for boundary values.
6. Form test cases for error guessing.
7. Form a unit-function vs. test-cases cross-reference matrix.
8. Find the coverage.

Unit Testing: The most "micro" scale of testing, used to test particular functions or code modules. Typically done by the programmer and not by testers. A unit is the smallest testable piece of software.

A unit can be compiled/assembled/linked/loaded and put under a test harness. Unit testing is done to show whether the unit fails to satisfy the functional specification and/or whether its implemented structure does not match the intended design structure.
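As a minimal sketch of a unit under a test harness (the `add` function and its specification are invented for illustration), each test checks one behavior of the unit against its functional specification:

```python
import unittest

def add(a, b):
    """Unit under test: a hypothetical function whose spec says it sums two numbers."""
    return a + b

class TestAdd(unittest.TestCase):
    """Test harness: each method exercises one case from the specification."""

    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

    def test_zero_identity(self):
        self.assertEqual(add(7, 0), 7)
```

Run with any unittest-compatible runner; a failing assertion pinpoints the exact specification clause the unit violates.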

Integration Testing: Integration is a systematic approach to building the complete software structure specified in the design from unit-tested modules. There are two ways integration is performed, called Pre-test and Pro-test. Pre-test: the testing performed in the module development area is called Pre-test. The Pre-test is required only if the development is done in the module development area.

Alpha Testing: Testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end users or others, not by programmers or testers.

Beta Testing: Testing when development and testing are essentially complete and final bugs and problems need to be found before the final release. Typically done by end users or others, not by programmers.

System Testing:

A system is the big component. System testing is aimed at revealing bugs that cannot be attributed to a single component as such: inconsistencies between components or in the planned interactions between components. The concern is issues and behaviors that can only be exposed by testing the entire integrated system (e.g., performance, security, recovery).

Volume Testing:

The purpose of volume testing is to find weaknesses in the system with respect to its handling of large amounts of data during short time periods. For example, this kind of testing ensures that the system will process data across physical and logical boundaries, such as across servers and across disk partitions on one server.

Stress Testing:

This refers to testing system functionality while the system is under unusually heavy or peak load; it is similar to the validation testing mentioned previously but is carried out in a "high-stress" environment. This requires that you make some predictions about the expected load levels of your web site.

Usability Testing:

Usability means that systems are easy and fast to learn, efficient to use, easy to remember, cause no operating errors, and offer a high degree of satisfaction for the user. Usability means bringing the usage perspective into focus: the side toward the user.

Security Testing:

If your site requires firewalls, encryption, user authentication, financial transactions, or access to databases with sensitive data, you may need to test these, and also test your site's overall protection against unauthorized internal or external access.

Involved in documenting test cases and scripts and resolving User Acceptance Testing (UAT) black box testing for the entire software life cycle and across legacy, client-server and web platforms. Black box testing is also known as functional testing: a software testing technique whereby the internal workings of the item being tested are not known by the tester. For example, in a black box test on a software design, the tester only knows the inputs and what the expected outcomes should be, not how the program arrives at those outputs. The tester never examines the programming code and does not need any further knowledge of the program other than its specifications.

Typical black-box test design techniques include:

Decision table testing: Decision tables are a precise yet compact way to model complicated logic. Decision tables, like flowcharts and if-then-else and switch-case statements, associate conditions with actions to perform, but in many cases do so in a more elegant way. The four quadrants are: Conditions, Condition alternatives, Actions, and Action entries.
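A decision table can be represented directly as a lookup table in code, with each row becoming one test case. A minimal sketch, in which the login-handling conditions and actions are invented for illustration:

```python
# Decision table for a hypothetical login flow.
# Conditions:            (valid_credentials, account_locked)
# Condition alternatives: each True/False combination below.
# Actions / action entries: the value each combination maps to.
DECISION_TABLE = {
    (True,  False): "grant_access",
    (True,  True):  "show_locked_message",
    (False, False): "show_invalid_credentials",
    (False, True):  "show_locked_message",
}

def decide(valid_credentials, account_locked):
    """Look up the action entry for a combination of condition alternatives."""
    return DECISION_TABLE[(valid_credentials, account_locked)]
```

Because the table enumerates every combination of condition alternatives, checking `len(DECISION_TABLE)` against 2^(number of conditions) verifies no rule was forgotten.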

All-pairs testing: All-pairs testing (or pairwise testing) is a combinatorial software testing method that, for each pair of input parameters to a system (typically, a software algorithm), tests all possible discrete combinations of those parameters. Using carefully chosen test vectors, this can be done much faster than an exhaustive search of all combinations of all parameters, by "parallelizing" the tests of parameter pairs. The number of tests is typically O(nm), where n and m are the number of possibilities for each of the two parameters with the most choices. The reasoning behind all-pairs testing is this: the simplest bugs in a program are generally triggered by a single input parameter. The next simplest category of bugs consists of those dependent on interactions between pairs of parameters, which can be caught with all-pairs testing. Bugs involving interactions between three or more parameters are progressively less common, while at the same time being progressively more expensive to find by exhaustive testing, which has as its limit the exhaustive testing of all possible inputs. Many testing methods regard all-pairs testing of a system or subsystem as a reasonable cost-benefit compromise between often computationally infeasible higher-order combinatorial testing methods and less exhaustive methods which fail to exercise all possible pairs of parameters.

State transition tables: In automata theory and sequential logic, a state transition table is a table showing what state (or states, in the case of a nondeterministic finite automaton) a finite semiautomaton or finite state machine will move to, based on the current state and other inputs. A state table is essentially a truth table in which some of the inputs are the current state, and the outputs include the next state, along with other outputs. A state table is one of many ways to specify a state machine, other ways being a state diagram and a characteristic equation. For example:

A B   Current State   Next State   Output
0 0   S1              S2
0 0   S2              S1
0 1   S1              S2
0 1   S2              S2
1 0   S1              S1
1 0   S2              S1
1 1   S1              S1
1 1   S2              S2
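A state transition table translates directly into code as a dictionary keyed by (current state, inputs), which also makes it easy to drive transition-coverage tests. A sketch using the table data above (the Output column is empty in the source, so it is omitted):

```python
# The state transition table above, encoded as a lookup:
# key = (current_state, (A, B)), value = next_state.
TRANSITIONS = {
    ("S1", (0, 0)): "S2",
    ("S2", (0, 0)): "S1",
    ("S1", (0, 1)): "S2",
    ("S2", (0, 1)): "S2",
    ("S1", (1, 0)): "S1",
    ("S2", (1, 0)): "S1",
    ("S1", (1, 1)): "S1",
    ("S2", (1, 1)): "S2",
}

def step(state, a, b):
    """Advance the machine one step for inputs A and B."""
    return TRANSITIONS[(state, (a, b))]

def run(start, inputs):
    """Feed a sequence of (A, B) input pairs through the machine."""
    state = start
    for a, b in inputs:
        state = step(state, a, b)
    return state
```

State-transition testing then amounts to exercising every row of `TRANSITIONS` at least once and checking the resulting state after representative input sequences.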

Equivalence partitioning: Equivalence partitioning (also called Equivalence Class Partitioning or ECP) is a software testing technique that divides the input data of a software unit into partitions of data from which test cases can be derived. In principle, test cases are designed to cover each partition at least once. This technique tries to define test cases that uncover classes of errors, thereby reducing the total number of test cases that must be developed.

Boundary value analysis: Boundary value analysis is a software testing technique in which tests are designed to include representatives of boundary values: values on the edge of an equivalence partition, or at the smallest value on either side of an edge. The values could be either input or output ranges of a software component. Since these boundaries are common locations for errors that result in software faults, they are frequently exercised in test cases.
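Both techniques can be shown on one small example. As a sketch (the `grade` function, its 0-100 range, and the pass mark of 50 are invented for illustration), equivalence classes supply one representative each, and boundary value analysis adds the edges of every partition:

```python
def grade(score):
    """Unit under test: a hypothetical pass/fail rule with valid scores 0-100."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 50 else "fail"

# Equivalence partitioning: one representative per valid class.
assert grade(25) == "fail"    # class: 0-49
assert grade(75) == "pass"    # class: 50-100

# Boundary value analysis: values on each edge of every partition.
assert grade(0) == "fail"     # lower edge of valid input
assert grade(49) == "fail"    # just below the pass boundary
assert grade(50) == "pass"    # the pass boundary itself
assert grade(100) == "pass"   # upper edge of valid input

# Invalid partitions: the smallest values just outside the edges.
for bad in (-1, 101):
    try:
        grade(bad)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for %r" % bad)
```

Six boundary/representative values plus two invalid edges cover the error-prone locations without enumerating all 101 valid inputs.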

Creating Test Cases and scripts: A test case in software engineering is a set of conditions or variables under which a tester will determine whether an application or software system is working correctly or not. The mechanism for determining whether a software program or system has passed or failed such a test is known as a test oracle. In some settings, an oracle could be a requirement or use case, while in others it could be a heuristic. It may take many test cases to determine that a software program or system is functioning correctly. Test cases are often referred to as test scripts, particularly when written. Written test cases are usually collected into test suites.

Formal test cases


In order to fully test that all the requirements of an application are met, there must be at least two test cases for each requirement: one positive test and one negative test, unless a requirement has sub-requirements. In that situation, each sub-requirement must have at least two test cases. Keeping track of the link between the requirement and the test is frequently done using a traceability matrix. Written test cases should include a description of the functionality to be tested, and the preparation required to ensure that the test can be conducted. A formal written test case is characterized by a known input and by an expected output, which is worked out before the test is executed. The known input should test a precondition and the expected output should test a postcondition.
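That structure can be captured as data: a known input paired with an expected output, worked out before execution, with one positive and one negative case per requirement. A minimal sketch (the withdrawal requirement, field names, and the unit under test are all invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class FormalTestCase:
    """A formal test case: known input and expected output fixed before the run."""
    case_id: str
    requirement: str
    description: str
    known_input: dict      # should exercise a precondition
    expected_output: dict  # should check a postcondition

# One positive and one negative test for the same requirement.
cases = [
    FormalTestCase(
        case_id="TC-001",
        requirement="REQ-7: withdrawals may not exceed the balance",
        description="positive: withdrawal within balance succeeds",
        known_input={"balance": 100, "withdraw": 40},
        expected_output={"balance": 60, "accepted": True},
    ),
    FormalTestCase(
        case_id="TC-002",
        requirement="REQ-7: withdrawals may not exceed the balance",
        description="negative: withdrawal over balance is rejected",
        known_input={"balance": 100, "withdraw": 140},
        expected_output={"balance": 100, "accepted": False},
    ),
]

def execute(case):
    """Run the (hypothetical) unit under test and compare with the expectation."""
    bal, amt = case.known_input["balance"], case.known_input["withdraw"]
    accepted = amt <= bal
    actual = {"balance": bal - amt if accepted else bal, "accepted": accepted}
    return actual == case.expected_output
```

Grouping such cases by their `requirement` field is one simple way to build the traceability matrix mentioned above.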

Informal test cases


For applications or systems without formal requirements, test cases can be written based on the accepted normal operation of programs of a similar class. In some schools of testing, test cases are not written at all; the activities and results are reported after the tests have been run. In scenario testing, hypothetical stories are used to help the tester think through a complex problem or system. These scenarios are usually not written down in any detail. They can be as simple as a diagram for a testing environment, or they could be a description written in prose. The ideal scenario test is a story that is motivating, credible, complex, and easy to evaluate. Scenarios are usually different from test cases in that test cases are single steps while scenarios cover a number of steps.

Typical written test case format

A test case is usually a single step, or occasionally a sequence of steps, to test the correct behaviour, functionality, and features of an application. An expected result or expected outcome is usually given. Additional information that may be included: test case ID, test case description, test step or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Additional fields that may be included and completed when the tests are executed: pass/fail and remarks. Larger test cases may also contain prerequisite states or steps, and descriptions.

"hat is #ser Acceptance Testing$ &ser .cceptance Testing 1&.T2 - also calle -eta testing, application testing, an *or en user testing - is a phase of software evelopment in which the software is teste in the 3real worl 3 -$ the inten e au ience or a -usiness representative. Whilst the technical testing of %T s$stems is a highl$ professional an exhaustive process, testing of -usiness functionalit$ is an entirel$ ifferent proposition. The %oal o& #AT Testing %f $ou ask some-o $ the question =What is the goal of a software test>,, $ou might get an answer like" =The goal is to prove that a s$stem oes what it is suppose to o,,. This answer is not exactl$ correct an emonstrates the necessit$ to efine some fun amentals a-out software testing. .nother response woul -e, =The goal is to fin faults or efects,,. The goal of User cceptance Testing is to assess if the system can support day!to!day business and user scenarios and ensure the system is sufficient and correct for business usage. "here does Testing 'it (n$ When a software eveloper writes co e, it is common for mistakes to occur, such that requirements are not a equatel$ implemente or the$ are forgotten. %t is uring this process that errors are intro uce into the s$stem. %t is also possi-le that the -usiness ha not communicate their requirements correctl$, or the$ coul have insufficient etails, which coul result in a s$stem working as esigne , -ut not as expecte . &.T tests are create to verif$ the s$stem?s -ehavior is consistent with the requirements. These tests will reveal efects within the s$stem. The work associate with &.T -egins after requirements arewritten an continues through the final stage of testing -efore the client*user accepts the new s$stem. The )*+ Model &or Testing The =V@ mo el is a metho olog$ where evelopment an testing takes place at the same time with the same kin of information availa-le to -oth teams. 
%t is goo practice to write the &.T test plan imme iatel$ after the requirements have -een finali4e . The "#" model shows development phases on the left hand side and testing phases on the right hand side.

"hy is Testing (,portant$ +ost of us have ha an experience with software that i not work as expecte . #oftware that oesn?t work can have a large impact on an organi4ation, an it can lea to man$ pro-lems inclu ing" A <oss of mone$ B this can inclu e losing customers right through to financial penalties for non-compliance to legal requirements A <oss of time B this can -e cause -$ transactions taking a long time to process -ut can inclu e staff not -eing a-le to work ue to a fault or failure A Damage to -usiness reputation B if an organi4ation is una-le to provi e service to their customers ue to software pro-lems then the customers will lose confi ence or faith in this organi4ation 1an pro-a-l$ take their -usiness elsewhere2 %t is important to test the s$stem to ensure it is error free an that it supports the -usiness that epen s on it. The later pro-lems are iscovere the more costl$ the$ are to fix. The later problems are discovered the more costly they are to fix. -o,e Co,,on Testing roble,s %f $ou make a list of some of the most common test pro-lems, $ou will reali4e that in man$ cases the maCorit$ of pro-lems are nontechnical. +ore often than not, the$ are consequences of the test process itself, inclu ing the overall composition of the test team an whether the compan$ follows well-integrate processes for formal requirements han ling an change management. The results in icate the huge iscrepanc$ in the level of importance that ifferent organi4ations give to testing. #ome of these pro-lems are more common to $ounger organi4ations/ others are pitfalls that an$one can encounter. The Cost Multiplier .n informal surve$ of the relative cost of fin ing efects throughout the software evelopment lifec$cle was con ucte several $ears ago. %t was foun that a pro-lem that goes un etecte an unfixe until an application is actuall$ in operation can -e DE B 6EE times more expensive to resolve than resolving the pro-lem earl$ in the evelopment c$cle. 
%n tra itional testing metho ologies, most efects are iscovere late in the software evelopment process. !in ing an fixing these efects cost much more than if the$ ha -een foun earlier. .nother surve$ of the relative cost of testing software compare with the overall cost of eveloping software gives a range of estimates, from 6EF in smaller organi4ations to GEF in some larger an mature organi4ations. .essons .earned/!est ractices %n or er to avoi mistakes that ma$ impact the process, cost, an qualit$ of the software testing phase, it is a goo i ea for testers to get involve with the proCect as earl$ as possi-le an consi er the following" A !ocus testing on requirements 0oorl$ written requirements account for the maCorit$ of proCect failures, so it is important to have the testers involve with the review an test plan creation at the -eginning of the proCect A Design s$stems for testa-ilit$ #$stems shoul -e esigne an co e with testing in min . Testers shoul emphasi4e the importance of error logs, interfaces an smaller, stan alone components which can greatl$ improve the testa-ilit$ of a s$stem once testers -ecome involve A (onsi er usa-ilit$ testing

One of the most overlooked requirements is "usability". Testers should promote the importance of ease of use and schedule usability tests as early as possible.

Traits of a Good UAT Tester

A UAT tester fills one of the most important testing roles, since they validate that the system meets the business needs. UAT is usually the final activity before the system goes "live", and the role requires multi-faceted skills. Four core qualities allow the person playing this role to perform it well; without them, one may not be able to conduct a proper UAT test:

- Background: experience of user operations, not involved in the overall IT project, experienced in the use of IT facilities, and respected as an independent thinker.
- Skill: a good communicator, avoids politics, expects the system to fail.
- Independence: not involved in writing the user specifications, has an independent reporting structure, and is a self-starter.
- Attitude: a lateral thinker, tenacious, analytical.

The Role, Activities, and Deliverables of the Business Analyst during UAT

Business Analysts make good UAT testers because they are independent from the developers and are therefore arguably more objective. In most cases they also understand the business requirements and can prepare realistic test scenarios and test data. This allows them to better define the context in which the system will be used and better assess its fitness for purpose. Finally, Business Analysts have a vested interest in ensuring the system is of high quality, so they are motivated to perform rigorous testing.

Tasks of User Acceptance Testing

When performing UAT, there are seven (7) basic steps to ensure the system is tested thoroughly and meets the business needs:

1. Analyze Business Requirements
2. Identify UAT Scenarios
3. Define the UAT Test Plan
4. Create UAT Test Cases
5. Run the Tests
6. Record the Results
7. Confirm Business Objectives are met

Documents Used by the Business Analyst

One of the most important activities performed by the Business Analyst is to identify and develop UAT test scenarios. These scenarios are derived by analyzing the documents developed during the early phases of the project, including:

- Business Use Case
- Business Process Flows
- Project Charter
- Context Diagram
- Business Requirements Document (BRD)
- System Requirements Specification (SRS)
- Testing Guidelines and Techniques
- Other Vendors' Deliverables

Documents Created by the Business Analyst

Once UAT test scenarios are identified, the Business Process Unit will create three deliverables: the UAT Test Plan, the UAT Test Cases, and, after the tests are run, a Defect Log that captures problems.

"hat is a #AT Test lan$ The &.T Test 0lan ocuments the strateg$ that will -e use to verif$ an ensure an application meets its requirements to the -usiness. The &.T Test 0lan is a ocument which outlines the plan for user acceptance testing of the proCect elivera-les. This ocument is a high level gui e, an will refer to test cases that will -e evelope an use to recor the results of user testing. "hat are #AT Test Cases$

The User Acceptance Test Cases help the test execution team test the application thoroughly. They also help ensure that the UA testing provides sufficient coverage of all the UAT scenarios. The use cases created during the requirements definition phase may be used as inputs for creating test cases. A User Acceptance Test Case describes in simple language the precise steps to be taken to test something.

What is a UAT Defect Log?

The UAT Defect Log is a document for capturing and reporting defects identified during UAT so that they can be evaluated and resolved. Information included in the Defect Log is: Severity (e.g., High, Med, Low), Status (e.g., Open, Closed, Deferred), Date Reported/Fixed, and Problem Description.
