
Kinds of Testing

WHAT KINDS OF TESTING SHOULD BE CONSIDERED?

1. Black box testing: not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
2. White box testing: based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, and conditions.
3. Unit testing: the most micro scale of testing; tests particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.
4. Incremental integration testing: continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
5. Integration testing: testing of combined parts of an application to determine if they function together correctly. The parts can be code modules, individual applications, or client and server applications on a network. This type of testing is especially relevant to client/server and distributed systems.
6. Functional testing: black-box testing geared to the functional requirements of an application; testers should do this type of testing. This does not mean that programmers should not check that their code works before releasing it (which of course applies to any stage of testing).
7. System testing: black-box testing based on overall requirements specifications; covers all combined parts of a system.
8. End-to-end testing: similar to system testing; the macro end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
9. Sanity testing: typically an initial testing effort to determine whether a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, it may not be in a sane enough condition to warrant further testing in its current state.
10. Regression testing: re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
11. Acceptance testing: final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.
12. Load testing: testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
13. Stress testing: term often used interchangeably with load and performance testing. Also used to describe tests such as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
14. Performance testing: term often used interchangeably with stress and load testing. Ideally performance testing (and any other type of testing) is defined in requirements documentation or QA or test plans.
15. Usability testing: testing for user-friendliness. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
16. Install/uninstall testing: testing of full, partial, or upgrade install/uninstall processes.
17. Recovery testing: testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
18. Security testing: testing how well a system protects against unauthorized internal or external access, damage, etc.; may require sophisticated testing techniques.
19. Compatibility testing: testing how well software performs in a particular hardware/software/operating system/network environment.
20. Exploratory testing: often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
21. Ad-hoc testing: similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
22. User acceptance testing: determining if software is satisfactory to an end-user or customer.
23. Comparison testing: comparing software weaknesses and strengths to competing products.
24. Alpha testing: testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
25. Beta testing: testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
26. Mutation testing: a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes (bugs) and retesting with the original test data/cases to determine if the bugs are detected. Proper implementation requires large computational resources.
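Unit testing, described in the list above, can be sketched concretely. Below is a minimal Python example using the standard unittest module; the apply_discount function is a hypothetical stand-in for a code module under test, not something from the text:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: reduce price by percent."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    """Unit tests written by the programmer against a single function."""

    def test_typical_value(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_is_identity(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the suite programmatically (the "test driver" the list mentions).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Note how the tests exercise a typical value, a boundary (zero discount), and an error path; this is the "micro scale" the definition refers to.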

16. What is the bug life cycle?
A:
New: when the tester reports a defect.
Open: when the developer accepts that it is a bug; if the developer rejects the defect, the status is changed to Rejected.
Fixed: when the developer makes changes to the code to rectify the bug.
Closed/Reopen: when the tester tests it again. If the expected result shows up, the status is changed to Closed; if the problem persists, it is Reopened.
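The status flow described above can be sketched as a small state machine. A minimal sketch, with the states and transitions taken from the answer (the class and method names are my own):

```python
# Allowed transitions in the bug life cycle described above.
TRANSITIONS = {
    "New": {"Open", "Rejected"},    # developer accepts or rejects
    "Open": {"Fixed"},              # developer rectifies the code
    "Fixed": {"Closed", "Reopen"},  # tester verifies the fix
    "Reopen": {"Fixed"},            # developer fixes again
    "Rejected": set(),
    "Closed": set(),
}

class Bug:
    def __init__(self):
        self.status = "New"  # a tester has just reported the defect

    def move_to(self, new_status: str) -> None:
        if new_status not in TRANSITIONS[self.status]:
            raise ValueError(f"cannot go from {self.status} to {new_status}")
        self.status = new_status

bug = Bug()
bug.move_to("Open")    # developer accepts the defect
bug.move_to("Fixed")   # developer rectifies the bug
bug.move_to("Reopen")  # tester still sees the problem
bug.move_to("Fixed")
bug.move_to("Closed")  # expected result shows up on retest
```

Encoding the transitions as data makes illegal jumps (e.g. New straight to Fixed) fail loudly, which mirrors how a defect tracker enforces the life cycle.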

Manual Testing Process: A process is a roadmap to develop the project; it consists of a number of sequential steps.

Software Testing Life Cycle:
Test Plan
Test Development
Test Execution
Analyse Results
Defect Tracking
Summarise Report

Test Plan: a document which describes the testing environment, purpose, scope, objectives, test strategy, schedules, milestones, testing tools, roles and responsibilities, risks, training, staffing, who is going to test the application, what types of tests should be performed, and how defects will be tracked.
Test Development: preparing test cases, test data, test procedures, and test scenarios, and writing test scripts.
Test Execution: in this phase we execute the documents prepared in the test development phase.
Analyse Results: once executed, the documents yield results, either pass or fail; we analyse those results during this phase.
Defect Tracking: whenever we find a defect in the application we prepare a bug report and forward it to the Test Team Lead and the Dev Team. The Dev Team fixes the bug, and we test the application again. This cycle repeats until we get the software without defects.
Summarise Reports: Test Reports, Bug Reports, Test Documentation.
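The Test Execution and Analyse Results phases above can be sketched as a loop over prepared test cases. A minimal sketch, assuming each test case carries an expected result; the login function and the case data are invented for illustration:

```python
# Hypothetical function under test.
def login_allowed(username: str, password: str) -> bool:
    return username == "admin" and password == "secret"

# Test Development output: each case pairs an input with an expected result.
test_cases = [
    {"id": "TC01", "input": ("admin", "secret"), "expected": True},
    {"id": "TC02", "input": ("admin", "wrong"), "expected": False},
    {"id": "TC03", "input": ("guest", "secret"), "expected": False},
]

def execute(cases):
    """Test Execution: run every case; Analyse Results: mark pass/fail."""
    results = {}
    for case in cases:
        actual = login_allowed(*case["input"])
        results[case["id"]] = "Pass" if actual == case["expected"] else "Fail"
    return results

results = execute(test_cases)
failed = [tc for tc, verdict in results.items() if verdict == "Fail"]
# Defect Tracking: every failed case would become a bug report.
```

Each "Fail" entry is exactly the trigger for the defect-tracking cycle the text describes.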

Difference between client server testing and web server testing.
Web systems are one type of client/server. The client is the browser; the server is whatever is on the back end (database, proxy, mirror, etc.). This differs from so-called traditional client/server in a few ways, but both systems are a type of client/server: there is a certain client that connects via some protocol to a server (or set of servers).
Strictly as the question is worded, testing a web server specifically is simply testing the functionality and performance of the web server itself. (For example, I might test whether HTTP Keep-Alives are enabled and working, whether the logging feature is working, certain filters such as ISAPI, or some general characteristics such as the load the server can take.) In the case of client server testing, as worded, you might be doing the same general things to some other type of server, such as a database server.
Also note that in some cases you can test the server directly, and at other times you test it via the interaction of a client. You can also test connectivity in both. (Any time you have a client and a server there has to be connectivity between them, or the system would be less than useful.) On the web you are looking at HTTP, and perhaps FTP depending upon your site and whether your server is configured for FTP connections, as well as general TCP/IP concerns. In a traditional client/server system you may be looking at sockets, Telnet, NNTP, etc.
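The passage above mentions testing connectivity and, for traditional client/server, sockets. A minimal connectivity check can be scripted with Python's standard socket and threading modules against a throwaway local echo server; the port is ephemeral and the message arbitrary, both purely illustrative:

```python
import socket
import threading

def echo_server(server_sock: socket.socket) -> None:
    """Accept one connection and echo whatever the client sends."""
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

# Bind to an ephemeral port on localhost so the sketch is self-contained.
server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_sock.bind(("127.0.0.1", 0))
server_sock.listen(1)
port = server_sock.getsockname()[1]
threading.Thread(target=echo_server, args=(server_sock,), daemon=True).start()

# Client side: the basic connectivity check a tester might script.
with socket.create_connection(("127.0.0.1", port), timeout=5) as client:
    client.sendall(b"ping")
    reply = client.recv(1024)

server_sock.close()
assert reply == b"ping"  # connectivity and round-trip both work
```

A real client/server test would point the client at the actual server and protocol (HTTP, a database wire protocol, etc.), but the shape of the check, connect, send, verify the response, is the same.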

61. HOW TO TEST A WEBSITE BY MANUAL TESTING?
A: Web Testing. During testing of websites the following scenarios should be considered:
Functionality
Performance
Usability
Server side interface
Client side compatibility
Security

Functionality: In testing the functionality of web sites the following should be tested.
Links: internal links, external links, mail links, broken links.
Forms: field validation, functional chart, error messages for wrong input, optional and mandatory fields.
Database: testing will be done on the database integrity.
Cookies: testing will be done on the client system side, on the temporary internet files.

Performance: Performance testing can be applied to understand the web site's scalability, or to benchmark the performance in the environment of third-party products such as servers and middleware for potential purchase.
Connection speed: tested over various networks like dial-up, ISDN, etc.
Load: What is the number of users per unit time? Check for peak loads and how the system behaves. Large amounts of data accessed by the user.
Stress: continuous load; performance of memory, CPU, file handling, etc.

Usability: Usability testing is the process by which the human-computer interaction characteristics of a system are measured, and weaknesses are identified for correction. Usability can be defined as the degree to which a given piece of software assists the person sitting at the keyboard to accomplish a task, as opposed to becoming an additional impediment to such accomplishment. The broad goal of usable systems is often assessed using several

Criteria:
Ease of learning
Navigation
Subjective user satisfaction
General appearance

Server side interface: In web testing the server side interface should be tested. This is done by verifying that communication is done properly. Compatibility of the server with software, hardware, network, and database should be tested. The client side compatibility is also tested on various platforms, using various browsers, etc.

Security: The primary reason for testing the security of a web site is to identify potential vulnerabilities and subsequently repair them. The following types of testing are described in this section:
Network scanning
Vulnerability scanning
Password cracking
Log review
Integrity checkers
Virus detection

Performance Testing: Performance testing is a rigorous evaluation of a working system under realistic conditions, comparing measures such as response time and throughput with requirements. The goal of performance testing is not to find bugs, but to eliminate bottlenecks and establish a baseline for future regression testing. To conduct performance testing is to engage in a carefully controlled process of measurement and analysis. Ideally, the software under test is already stable enough that this process can proceed smoothly. A clearly defined set of expectations is essential for meaningful performance testing. For example, for a Web application, you need to know at least two things:
expected load in terms of concurrent users or HTTP connections
acceptable response time

Load testing: Load testing is usually defined as the process of exercising the system under test by feeding it the largest tasks it can operate with. Load testing is sometimes called volume testing, or longevity/endurance testing.
Examples of volume testing:
testing a word processor by editing a very large document
testing a printer by sending it a very large job
testing a mail server with thousands of user mailboxes

Examples of longevity/endurance testing:
testing a client-server application by running the client in a loop against the server over an extended period of time

Goals of load testing:
Expose bugs that do not surface in cursory testing, such as memory management bugs, memory leaks, buffer overflows, etc.
Ensure that the application meets the performance baseline established during performance testing. This is done by running regression tests against the application at a specified maximum load.

Although performance testing and load testing can seem similar, their goals are different. Performance testing uses load testing techniques and tools for measurement and benchmarking purposes, at various load levels, whereas load testing operates at a predefined load level: the highest load that the system can accept while still functioning properly.

Stress testing: Stress testing is a form of testing used to determine the stability of a given system or entity. It is designed to test the software with abnormal situations. Stress testing attempts to find the limits at which the system will fail, through an abnormal quantity or frequency of inputs. It tries to break the system under test by overwhelming its resources or by taking resources away from it (in which case it is sometimes called negative testing). The main purpose is to make sure that the system fails and recovers gracefully; this quality is known as recoverability. The aim of stress testing is not breakage for its own sake, but observing how the system reacts to failure. Stress testing observes the following:
Does it save its state or does it crash suddenly?
Does it just hang and freeze or does it fail gracefully?
Is it able to recover from the last good state on restart?

Compatibility Testing: Testing to ensure compatibility of an application or web site with different browsers, operating systems, and hardware platforms. Different versions, configurations, display resolutions, and Internet connection speeds can all impact the behavior of the product and introduce costly and embarrassing bugs. We test for compatibility using real test environments; that is, testing how the system performs in a particular software, hardware, or network environment. Compatibility testing can be performed manually or can be driven by an automated functional or regression test suite. The purpose of compatibility testing is to reveal issues related to the product's interaction with other software as well as hardware. The product's compatibility is evaluated by first identifying the hardware/software/browser components that the product is designed to support. Then a hardware/software/browser matrix is designed that indicates the configurations on which the product will be tested. Then, with input from the client, a testing script is designed that will be sufficient to evaluate compatibility between the product and the matrix.

Finally, the script is executed against the matrix, and any anomalies are investigated to determine exactly where the incompatibility lies. Some typical compatibility tests include testing your application:
On various client hardware configurations
Using different memory sizes and hard drive space
On various operating systems
In different network environments
With different printers and peripherals (e.g. zip drives, USB devices, etc.)

62. Which comes first, test strategy or test plan?
A: Test strategy comes first, and it is a high-level document. The approach for the testing starts from the test strategy, and based on this the test lead then prepares the test plan.

63. What is the difference between a web based application and a client server application from a tester's point of view?
A: From a tester's point of view:
1) A web based application (WBA) is a 3-tier application: browser, server, and back end. A client server application (CSA) is a 2-tier application: front end and back end.
2) In a WBA the tester tests for script errors, such as JavaScript or VBScript errors, shown on the page. In a CSA the tester does not test for any script errors.
3) In a WBA, a change is reflected on every machine at once, so the tester has less work to test; whereas in a CSA the application needs to be installed on every machine, so it is possible that some machine has a problem, and hardware testing as well as software testing is needed.

63. What is the significance of doing regression testing?
A: To check the bug fixes, and that a fix does not disturb other functionality. To ensure that newly added functionality, modified existing functionality, or a developer's bug fix does not introduce any new bug or other side effect. This is called regression testing, and it ensures that already PASSED test cases do not start raising new bugs.

64. What are the different ways to check a date field in a website?
A: There are different ways, such as:
1) You can check the field width for minimum and maximum.
2) If the field only takes numeric values, check that it accepts only numeric input and no other type.
3) If it takes a date or time, check for other input.
4) In the same way as numeric, you can check it for character, alphanumeric, and so on.
5) Most importantly, if you click and hit the enter key, sometimes the page may give a JavaScript error; that is a big fault on the page.
6) Check the field for the null value, etc.

The date field can be checked in different ways:
Positive testing: first we enter the date in the given format.
Negative testing: we enter the date in an invalid format; for example, if we enter a date like 30/02/2006, it should display an error message. We also check numeric versus text input.
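The positive/negative date checks above can be scripted with Python's standard datetime module. A minimal sketch, assuming the DD/MM/YYYY format implied by the 30/02/2006 example:

```python
from datetime import datetime

def is_valid_date(text: str) -> bool:
    """Accept only dates in DD/MM/YYYY format that exist on the calendar."""
    try:
        datetime.strptime(text, "%d/%m/%Y")
        return True
    except ValueError:
        return False

# Positive testing: a real date in the given format is accepted.
assert is_valid_date("28/02/2006")

# Negative testing: 30/02/2006 does not exist, so it must be rejected,
# as must non-numeric text in the field.
assert not is_valid_date("30/02/2006")
assert not is_valid_date("not a date")
```

strptime does the calendar arithmetic (days per month, leap years) for us, so the test data only has to name the interesting boundary cases.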

46. High severity, low priority bug?
A: A page is rarely accessed, or some activity is performed rarely, but that thing outputs some important data incorrectly, or corrupts the data; this will be a bug of high severity and low priority.

47. If the project wants to release in 3 months, what type of risk analysis do you do in the test plan?
A: Use risk analysis to determine where testing should be focused. Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgment skills, common sense, and experience. (If warranted, formal methods are also available.) Considerations can include:
Which functionality is most important to the project's intended purpose?
Which functionality is most visible to the user?
Which functionality has the largest safety impact?
Which functionality has the largest financial impact on users?
Which aspects of the application are most important to the customer?
Which aspects of the application can be tested early in the development cycle?
Which parts of the code are most complex, and thus most subject to errors?
Which parts of the application were developed in rush or panic mode?
Which aspects of similar/related previous projects caused problems?
Which aspects of similar/related previous projects had large maintenance expenses?
Which parts of the requirements and design are unclear or poorly thought out?
What do the developers think are the highest-risk aspects of the application?
What kinds of problems would cause the worst publicity?
What kinds of problems would cause the most customer service complaints?
What kinds of tests could easily cover multiple functionalities?
Which tests will have the best high-risk-coverage to time-required ratio?

48. Test cases for IE 6.0?

A: Test cases for IE 6.0, i.e. Internet Explorer 6.0:
1) First I go for the installation side: does it work with all versions of Windows, Netscape, or other software; in other words, IE must be checked against all hardware and software parts.
2) Second, go for the text part: all the text appears in a consistent and smooth manner.
3) Third, go for the images part: all the images appear in a consistent and smooth manner.
4) URLs must run in a proper way.
5) Suppose some other language is used on a page; then the URL should accept the other characters, beyond normal characters.
6) Does it work with cookies properly or not?
7) Does it work with different scripts, like JScript and VBScript?
8) Does HTML code work on it or not?
9) Does troubleshooting work or not?
10) Do all the toolbars work with it or not?
11) If a page has some links, what are the maximum and minimum limits for them?
12) Test installing Internet Explorer 6 with Norton Protected Recycle Bin enabled.
13) Does it work with the uninstallation process?
14) Last but not least, test the security system of IE 6.0.

49. Where are you involved in the testing life cycle, and what types of tests do you perform?
A: Generally test engineers are involved in the entire test life cycle, i.e. test plan, test case preparation, execution, and reporting. Generally system testing, regression testing, ad-hoc testing, etc.

50. What is the testing environment in your company, i.e. how does the testing process start?
A: The testing process goes as follows: quality assurance unit, quality assurance manager, test lead, test engineer.

51. Who prepares the use cases?
A: In any company except a small one, the business analyst prepares the use cases. In a small company the business analyst prepares them along with the team lead.

52. What methodologies have you used to develop test cases?
A: Generally test engineers use 4 types of methodologies:
1. Boundary value analysis
2. Equivalence partitioning
3. Error guessing
4. Cause-effect graphing

53. Why do we call it a regression test and not a retest?
A: Testing whether a defect is closed or not is retesting. But here we are also checking the impact on the rest of the application; regression means repeated times.

54. Is automated testing better than manual testing? If so, why?
A: Automated testing and manual testing each have advantages as well as disadvantages.
Automation advantages: it increases the efficiency of the testing process; speed; reliability; flexibility.
Automation disadvantages: the tools must be compatible with our development or deployment tools; it needs a lot of time initially; if the requirements are changing continuously, automation is not suitable.
Manual: if the requirements are changing continuously, manual testing is suitable. Only once the build is stable with manual testing do we go for automation.
Manual disadvantages: it needs a lot of time, and some types of testing cannot be done manually, e.g. performance.

55. What is the exact difference between a product and a project? Give an example.
A: Project: developed for a particular client; requirements are defined by the client. Product: developed for the market; requirements are defined by the company itself by conducting a market survey.
Example: the shirt we have stitched by a tailor as per our specifications is a project. A ready-made shirt, where the company decides on particular measurements and makes it, is a product. Mainframes is a product. A product has many more versions, but a project has fewer versions, i.e. it depends upon change requests and enhancements.

56. Define brainstorming and cause-effect graphing, with examples.
A: Brainstorming: a learning technique involving open group discussion intended to expand the range of available ideas; or, a meeting to generate creative ideas. Example: at PEPSI Advertising, daily, weekly and bi-monthly brainstorming sessions are held by various work groups within the firm, and the monthly I-Power brainstorming meeting is attended by the entire agency staff. Brainstorming is a highly structured process to help generate ideas, based on the principle that you cannot generate and evaluate ideas at the same time. To use brainstorming, you must first gain agreement from the group to try brainstorming for a fixed interval (e.g. six minutes).
Cause-effect graphing: a testing technique that aids in selecting, in a systematic way, a high-yield set of test cases by logically relating causes to effects to produce test cases. It has a beneficial side effect in pointing out incompleteness and ambiguities in specifications.

57. By using severity you should know which bug you need to solve, so what is the need of priority?
A: Severity reflects the seriousness of the bug, whereas priority refers to which bug should be rectified first. Of course, if the severity is high, the priority is normally also high. Severity is decided by the tester, whereas priority is decided by the developers. Which bug needs to be solved first is known through priority, not severity; how serious the bug is, is known through severity. Severity is the impact of the bug on the application; priority is the importance of resolving the bug. Sometimes a high severity bug does not have high priority, and at the same time a high priority bug may not have high severity, so we need both severity and priority.

58. What do you do if a bug that you found is not accepted by the developer, and he says it is not reproducible? Note: the developer is at the on-site location.
A: Once again we check that condition and gather all the reasons, then we attach screenshots with strong reasons, then we explain to the project manager, and also explain to the client when they contact us. Sometimes a bug is not reproducible because of a different environment: suppose the development team is using one environment and you are using a different one; in this situation there is a chance of the bug not reproducing. In this situation, check the environment in the baseline documents, that is, the functional documents; if the environment we are using is correct, we raise it as a defect. We also take screenshots and send them along with the test procedure.

59. What is the difference between a three-tier and a two-tier application?

A: A client server system is a 2-tier application. In this, the front end or client is connected to the database server through a Data Source Name; the front end is the monitoring level. A web based architecture is a 3-tier application. In this, the browser is connected to the web server through TCP/IP, and the web server is connected to the database server; the browser is the monitoring level. In general, black box testers concentrate on the monitoring level of any type of application.
All client server applications are 2-tier architectures. In these architectures, all the business logic is stored in the clients and the data is stored in the servers. So if the user requests anything, the business logic is performed at the client, and the data is retrieved from the server (DB server). The problem is that if any business logic changes, we need to change the logic at each and every client. The best example is a supermarket with branches across the city. At each branch there are clients, so the business logic is stored in the clients, but the actual data is stored in the servers. If I want to give a discount on some items, I need to change the business logic; for this I have to go to each branch and change the business logic at each client. This is the disadvantage of client/server architecture.
So 3-tier architecture came into the picture: here the business logic is stored in one server, and all the clients are dumb terminals. If the user requests anything, the request is first sent to the server; the server brings the data from the DB server and sends it to the clients. This is the flow of a 3-tier architecture. In the example above, if I want to give a discount, all my business logic is in the server, so I need to change it in one place, not at each client. This is the main advantage of 3-tier architecture.
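The supermarket example above can be sketched in code: in the 3-tier style, the discount rule lives in one server object, so changing it in one place is immediately visible at every client. All class names, items, and prices here are illustrative inventions:

```python
class BusinessServer:
    """Middle tier: holds the business logic in exactly one place."""
    def __init__(self, discount_percent: float = 0.0):
        self.discount_percent = discount_percent
        # Stands in for the data tier (DB server) of the example.
        self.prices = {"milk": 40.0, "bread": 25.0}

    def quote(self, item: str) -> float:
        price = self.prices[item]
        return round(price * (1 - self.discount_percent / 100), 2)

class DumbClient:
    """A branch terminal: no business logic, it only forwards requests."""
    def __init__(self, server: BusinessServer):
        self.server = server

    def price_of(self, item: str) -> float:
        return self.server.quote(item)

server = BusinessServer()
branch_a, branch_b = DumbClient(server), DumbClient(server)

server.discount_percent = 10  # one change, on the server only...
# ...and every branch immediately quotes the discounted price.
assert branch_a.price_of("milk") == 36.0
assert branch_b.price_of("bread") == 22.5
```

In the 2-tier version, the discount arithmetic would sit inside each client class, and the change would have to be repeated per branch, which is exactly the maintenance problem the answer describes.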

31. If we have no SRS or BRS, but we do have test cases, do we execute the test cases blindly, or do we follow any other process?
A: A test case would have detailed steps describing what the application is supposed to do. So:
1) The functionality of the application is known.
2) In addition, you can refer to the back end, i.e. look into the database, to gain more knowledge of the application.

32. How do you execute a test case?
A: There are two ways:
1. A manual runner tool for manual execution and updating of test status.
2. Automated test case execution by specifying the host name and other automation-pertaining details.

33. Difference between retesting and regression testing?
A: Retesting: re-execution of test cases on the same application build with different input values.
Regression testing: re-execution of test cases on a modified form of the build.

34. What is the difference between a bug log and defect tracking?
A: A bug log is a document which maintains the information about the bug, whereas bug tracking is the process.

35. Who will change the bug status to Deferred?
A: A bug is in Open status while the developer is working on it, and Fixed after the developer completes his work; if it is not fixed properly the tester puts it in Reopen; after the bug is fixed properly it is in Closed state. The developer changes the status to Deferred.

36. What is smoke testing and user interface testing?
A: Smoke testing: non-exhaustive software testing, ascertaining that the most crucial functions of a program work, but not bothering with finer details. The term comes to software testing from a similarly basic type of hardware testing. Smoke testing checks whether the basic functionality of the build is stable or not; i.e. if it possesses, say, 70% of the functionality, we say the build is stable.
User interface testing: some say it is nothing but usability testing: testing to determine the ease with which a user can learn to operate, input, and interpret outputs of a system or component. We check all the fields, whether they exist as per the format; we check spelling, graphics, font sizes, and that everything expected in the window is present.
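The smoke testing idea above (check only the most crucial functions before committing to deeper testing) can be sketched as a short script that gates the full test effort. The application functions here are hypothetical stand-ins:

```python
# Hypothetical stand-ins for an application's most crucial functions.
def app_starts() -> bool:
    return True

def user_can_log_in() -> bool:
    return True

def main_page_loads() -> bool:
    return True

SMOKE_CHECKS = [app_starts, user_can_log_in, main_page_loads]

def smoke_test() -> bool:
    """Non-exhaustive: if any crucial function fails, reject the build
    instead of starting the major testing effort."""
    return all(check() for check in SMOKE_CHECKS)

build_is_stable = smoke_test()
# Only when the smoke test passes does the test team go ahead with the
# full (system, regression, ...) test suites on this build.
```

The point is the gating structure, not the individual checks: a single failing crucial function sends the build back without spending the team's time on finer details.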

17. What is Deferred status in the defect life cycle?
A: Deferred status means the developer accepted the bug, but it is scheduled to be rectified in the next build.

18. What is a smoke test?
A: Testing whether the application performs its basic functionality properly or not, so that the test team can go ahead with the application.

19. Do you use any automation tool for smoke testing?
A: Definitely, one can.

20. What is verification and validation?
A: Verification is static: no code is executed; say, analysis of requirements, etc. Validation is dynamic: code is executed with the scenarios present in test cases.

21. What is a test plan, and what are its contents?
A: A test plan is a document which contains the scope for testing the application: what is to be tested, when it is to be tested, and who is to test it.

22. Advantages of automation over manual testing?
A: Time, resources, and money.

23. What is ad-hoc testing?
A: Ad-hoc means doing something which is not planned.

24. What is meant by release notes?
A: A document released along with the product which explains the product. It also describes the bugs that are in Deferred status.

25. Scalability testing comes under which type of testing?
A: Scalability testing comes under performance testing. Load testing and scalability testing are the same.

26. What is the difference between a bug and a defect?
A: Bug: deviation from the expected result. Defect: a problem in an algorithm leading to failure. A mistake in code is called an error. Due to an error in coding, the mismatches test engineers find in the application are called defects. A defect accepted by the development team to be solved is called a bug.

27. What is a hot fix?
A: A hot fix is a single, cumulative package that includes one or more files used to address a problem in a software product. Typically, hot fixes are made to address a specific customer situation and may not be distributed outside the customer organization. It is a bug found at the customer's site which has high priority.

28. What is the difference between functional test cases and compatibility test cases? A: There are no separate test cases for compatibility testing; in compatibility testing we test an application on different hardware and software configurations. 29. What is ACID testing? A: ACID testing is related to testing a transaction: A-Atomicity, C-Consistency, I-Isolation, D-Durability. It is mostly done in database testing. 30. What is the main use of preparing a traceability matrix? A: To cross-verify the prepared test cases and test scripts against the user requirements, and to monitor the changes and enhancements that occur during the development of the project. A traceability matrix is prepared in order to cross-check the test cases designed against each requirement, hence giving an opportunity to verify that all the requirements are covered in testing the application.
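The cross-check described in question 30 can be sketched as a mapping from requirement IDs to the test cases that cover them (the IDs below are invented for illustration):

```python
# Traceability matrix sketch: requirement ID -> covering test cases.
traceability = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],          # no test case yet -> a coverage gap
}

def uncovered_requirements(matrix):
    """Return the requirements not covered by any test case."""
    return [req for req, tests in matrix.items() if not tests]
```

Running the check surfaces exactly the requirements that testing would miss, which is the main use of the matrix.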

37. What is a bug, defect, issue, error? A: Bug: a bug is identified by the tester. Defect: a defect often comes with the project from the analysis phase, when some requirement is missed or misunderstood. Issue: most of the time, an error found at the client site. Error: when anything goes wrong in the project on the development side it is called an error; most of the time this is known by the developer. More formally: Bug: a fault or defect in a system or machine. Defect: an imperfection in a device or machine. Issue: a major problem that will impede the progress of the project and cannot be resolved by the project manager and project team without outside help. Error: the deviation of a measurement, observation, or calculation from the truth.

38. What is the difference between functional testing and integration testing? A: Functional testing is testing the whole functionality of the system or the application, checking whether it meets the functional specifications. Integration testing means testing the functionality of integrated modules when two individual modules are integrated; for this we use the top-down approach and the bottom-up approach. 39. What types of testing do you perform in your organization while doing system testing? A: Functional testing, user interface testing, usability testing, compatibility testing, model-based testing, error exit testing, user help testing, security testing, capacity testing, performance testing, sanity testing, regression testing, reliability testing, recovery testing, installation testing, maintenance testing, and accessibility testing, including compliance with: the Americans with Disabilities Act of 1990; the Section 508 Amendment to the Rehabilitation Act of 1973; the Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C).

1. What is the difference between CMMI and CMM levels? A: CMM is applicable only to the software industry and has 18 KPAs (Key Process Areas). CMMI is applicable to software, outsourcing, and all other industries, and has 25 KPAs.

3. What is the status of a defect when you are performing regression testing? A: Fixed status. 4. What is the first test in the software testing process? A) Monkey testing

B) Unit testing C) Static analysis D) None of the above A: Unit testing is the first test in the testing process; though it is done by developers after the completion of coding, it is the correct answer. 4. When does testing start? a) Once the requirements are complete b) In the requirements phase? A: Once the requirements are complete. This is static testing: here you are supposed to read the documents (requirements), and it is quite a common issue in the software industry that many requirements contradict other requirements. These can also be reported as bugs; however, they will be reviewed before reporting them as bugs (defects). 5. What are the parts of QA and QC in the V model? A: The V model is a kind of SDLC. The QC (Quality Control) team tests the developed product for quality. It deals only with the product, in both static and dynamic testing. The QA (Quality Assurance) team works on the process and manages for better quality in the process. It deals with (reviews) everything, right from collecting requirements to delivery. 6. What are the bugs we cannot find in black box testing? A: Bugs in the security settings of pages, or any other internal mistakes made in coding, cannot be found in black box testing. 7. What are Microsoft's 6 rules? A: As far as I know, these rules are used in user interface testing. They are also called the Microsoft Windows standards: GUI objects are aligned in windows; all defined text is visible on a GUI object; labels on GUI objects are capitalized; each label includes an underlined letter (mnemonics); each window includes an OK button, a Cancel button, and a System menu. 8. What are the steps to test any software through automation tools? A: First, segregate the test cases that can be automated. Then, prepare test data as per the requirements of those test cases. Write reusable functions which are used frequently in those test cases.
Now, prepare the test scripts using those reusable functions, applying loops and conditions wherever necessary. However, the automation framework that is followed in the organization should be strictly adhered to throughout the process.

9. What is defect removal efficiency? A: The DRE is the percentage of defects that have been removed during an activity, computed with the equation below. The DRE can also be computed for each software development activity and plotted on a bar graph to show the relative defect removal efficiencies for each activity. Or, the DRE may be computed for a specific task or technique (e.g. design inspection, code walkthrough, unit test, 6-month operation, etc.): DRE = (Number of Defects Removed / Number of Defects at Start of Process) * 100. In terms of who finds the defects, DRE = A / (A + B), where A = defects found by the testing team and B = defects found by the customer. If DRE >= 0.8, it is a good product; otherwise not. 10. Example of a bug not being reproducible? A: A difference in environment. 11. During alpha testing, why are customer people invited? A: Because alpha testing is related to acceptance testing, and acceptance testing is done in front of the client or customer for their acceptance. 12. Difference between ad hoc testing and error guessing? A: Ad hoc testing: performing testing without test data or any documents. Error guessing: a test data selection technique; the selection criterion is to pick values that seem likely to cause errors. 13. Difference between test plan and test strategy? A: Test plan: after completion of SRS learning and business requirement gathering, test management concentrates on test planning; this is done by the test lead or project lead. Test strategy: depending on the corresponding testing policy, the quality analyst finalizes the test responsibility matrix; this is done by QA. Both are documents. 14. What is the V&V model? Why is it called V and not U? Also, at what stage is it best to start testing? A: It is called V because the diagram looks like a V. The detailed V model is shown below.
SRS ------------------------------- Acceptance testing
   \                               /
    HLD (High Level Design) ------ System testing
       \                         /
        LLD (Low Level Design) -- Integration testing
           \                    /
            Coding ------------- Unit testing

There is no stage for which you wait to start testing. Testing starts as soon as the SRS document is ready: you can raise defects that are present in the document. This is called verification.
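As a quick numeric check of the DRE formula from question 9 (the figures below are invented for illustration):

```python
def dre(defects_by_testing_team, defects_by_customer):
    """Defect Removal Efficiency: A / (A + B), where A is the count of
    defects found in-house and B the count found by the customer."""
    a, b = defects_by_testing_team, defects_by_customer
    return a / (a + b)

# e.g. 80 defects caught by the test team, 20 that escaped to the customer:
# dre(80, 20) gives 0.8, the threshold the answer above calls a good product.
```

A higher DRE means more defects were caught before release, which is why the comparison is "good if DRE >= 0.8".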

White Box Testing White box testing is a test case design method that uses the control structure of the procedural design to derive test cases. Test cases can be derived that 1. guarantee that all independent paths within a module have been exercised at least once, 2. exercise all logical decisions on their true and false sides, 3. execute all loops at their boundaries and within their operational bounds, and 4. exercise internal data structures to ensure their validity.
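Point 2 above (exercise all logical decisions on their true and false sides) can be illustrated with a small sketch; the function and inputs are invented for illustration:

```python
# White-box sketch: derive test cases so that every decision in the
# function is exercised on both its true and false side.

def classify(n):
    if n < 0:          # decision 1
        return "negative"
    if n % 2 == 0:     # decision 2
        return "even"
    return "odd"

# Decision 1 true:                    classify(-1) -> "negative"
# Decision 1 false, decision 2 true:  classify(2)  -> "even"
# Decision 1 false, decision 2 false: classify(3)  -> "odd"
```

Three inputs are enough here because each one is chosen from the control structure of the code, not from the requirements.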

Loop Testing This white box technique focuses exclusively on the validity of loop constructs. Four different classes of loops can be defined: 1. simple loops, 2. nested loops, 3. concatenated loops, and 4. unstructured loops. Simple Loops The following tests should be applied to simple loops where n is the maximum number of allowable passes through the loop: 1. skip the loop entirely, 2. only pass once through the loop, 3. m passes through the loop where m < n, 4. n - 1, n, n + 1 passes through the loop. Nested Loops

The testing of nested loops cannot simply extend the technique of simple loops, since this would result in a geometrically increasing number of test cases. One approach for nested loops: 1. Start at the innermost loop. Set all other loops to minimum values. 2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimums. Add tests for out-of-range or excluded values. 3. Work outward, conducting tests for the next loop while keeping all other outer loops at minimums and other nested loops at typical values. 4. Continue until all loops have been tested. Concatenated Loops Concatenated loops can be tested as simple loops if each loop is independent of the others. If they are not independent (e.g. the loop counter for one is the loop counter for the other), then the nested approach can be used. Unstructured Loops This type of loop should be redesigned, not tested! Other White Box Techniques Other white box testing techniques include: 1. Condition testing, which exercises the logical conditions in a program. 2. Data flow testing, which selects test paths according to the locations of definitions and uses of variables in the program. Black Box Testing Introduction Black box testing attempts to derive sets of inputs that will fully exercise all the functional requirements of a system. It is not an alternative to white box testing. This type of testing attempts to find errors in the following categories: 1. incorrect or missing functions, 2. interface errors, 3. errors in data structures or external database access, 4. performance errors, and 5. initialization and termination errors. Tests are designed to answer the following questions: 1. How is the function's validity tested? 2. What classes of input will make good test cases? 3. Is the system particularly sensitive to certain input values? 4. How are the boundaries of a data class isolated? 5. What data rates and data volume can the system tolerate? 6.
What effect will specific combinations of data have on system operation?
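Returning to the loop-testing section above, the simple-loop schedule (skip the loop, one pass, m < n passes, then n-1, n, and n+1 passes) can be sketched as follows; the loop under test is invented for illustration:

```python
# Simple-loop test schedule: for a loop with at most n passes,
# request 0, 1, m < n, n-1, n, and n+1 passes and observe the results.

def sum_first(values, passes):
    """Loop under test: sum at most `passes` leading elements."""
    total = 0
    for i, v in enumerate(values):
        if i >= passes:
            break
        total += v
    return total

n = 5                                    # maximum allowable passes
data = [1, 2, 3, 4, 5]
loop_tests = [0, 1, 3, n - 1, n, n + 1]  # skip, once, m<n, n-1, n, n+1
results = [sum_first(data, p) for p in loop_tests]
# The n+1 case checks that the loop cannot be driven past its bound.
```

The interesting cases sit at the boundaries of the loop, which is where off-by-one defects live.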

White box testing should be performed early in the testing process, while black box testing tends to be applied during later stages. Test cases should be derived which 1. reduce the number of additional test cases that must be designed to achieve reasonable testing, and 2. tell us something about the presence or absence of classes of errors, rather than an error associated only with the specific test at hand.

40. What is the main use of preparing a traceability matrix, and what is its real-time usage? A: A traceability matrix is created by associating requirements with the work products that satisfy them. Tests are associated with the requirements on which they are based and with the product tested to meet the requirement. A traceability matrix is a report from the requirements database or repository. 41. How can you do the following: 1) usability testing, 2) scalability testing? A: UT: testing the ease with which users can learn and use a product. ST: a web testing term; it assesses a web site's capacity and how it can be improved. (PT, portability testing, by contrast, is testing to determine whether the system/software meets the specified portability requirements.) 42. What do you mean by positive and negative testing, and what is the difference between them? Can anyone explain with an example? A: Positive testing: testing the application functionality with valid inputs and verifying that the output is correct. Negative testing: testing the application functionality with invalid inputs and verifying the output. The difference is nothing but how the application behaves when we enter invalid inputs: if it accepts an invalid input, the application's functionality is wrong.

Positive test: testing aimed at showing that the software works, i.e. with valid inputs; this is also called test-to-pass. Negative testing: testing aimed at showing that the software does not work, which is also known as test-to-fail. BVA (boundary value analysis) is the best example of negative testing. 43. What is a change request, and how do you use it? A: A change request is an attribute or part of the defect life cycle. When you, as a tester, find a defect and report it to your development lead, he in turn informs the development team. The development team may say it is not a defect but an extra implementation, or not part of the requirement; since it is new, the customer has to pay for it. In that case the status in your defect report would be Change Request. Change requests are controlled by the change control board (CCB): if any changes are required by the client after we start the project, they have to come through the CCB, which has to approve them. The CCB has full rights to accept or reject, based on the project schedule and cost. 44. What is risk analysis, and what type of risk analysis did you do in your project? A: Risk analysis: a systematic use of available information to determine how often specified and unspecified events may occur and the magnitude of their likely consequences; or, a procedure to identify threats and vulnerabilities, analyze them to ascertain the exposures, and highlight how the impact can be eliminated or reduced. Types: 1. quantitative risk analysis, 2. qualitative risk analysis. 45. What is an API? A: Application program interface.
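Boundary value analysis, mentioned above as the classic negative-testing technique, can be sketched like this; the field limits are invented for illustration:

```python
# BVA sketch for a field that must accept values 1..100 inclusive.
LOW, HIGH = 1, 100

def accepts(value):
    """Validation under test."""
    return LOW <= value <= HIGH

# Positive (test-to-pass) values sit on and just inside the boundaries;
# negative (test-to-fail) values sit just outside them.
positive_cases = [LOW, LOW + 1, HIGH - 1, HIGH]
negative_cases = [LOW - 1, HIGH + 1]
```

If the application accepts any value from `negative_cases`, its validation is wrong, which is exactly what negative testing is meant to expose.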

What's a test case? - A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. - A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results. - Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it is useful to prepare test cases early in the development cycle if possible.
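The particulars listed above can be captured as a structured record; the field names and values below are invented for illustration:

```python
# A test case as a structured record, mirroring the particulars above.
test_case = {
    "id": "TC-LOGIN-01",
    "name": "Valid login",
    "objective": "Verify a registered user can log in",
    "setup": "User 'alice' exists and is active",
    "input_data": {"username": "alice", "password": "correct-password"},
    "steps": [
        "Open the login page",
        "Enter username and password",
        "Click 'Log in'",
    ],
    "expected_result": "User lands on the dashboard",
}

# A simple completeness check: every particular must be present.
required_fields = {"id", "name", "objective", "setup",
                   "input_data", "steps", "expected_result"}
is_complete = required_fields.issubset(test_case)
```

Keeping test cases in a uniform structure like this is what makes traceability matrices and automated reporting straightforward later.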

What do I do if I find a bug/error? In normal terms, if a bug or error is detected in a system, it needs to be communicated to the developer in order to get it fixed. Right from the first time a bug is detected until the point when it is fixed and closed, it is assigned various statuses: New, Open, Postponed, Pending Retest, Retest, Pending Reject, Rejected, Deferred, and Closed. (Please note that there are various ways to communicate the bug to the developer and track the bug status.) Statuses associated with a bug: New: When a bug is found/revealed for the first time, the software tester communicates it to his/her team leader (test leader) in order to confirm that it is a valid bug. After getting confirmation from the test lead, the software tester logs the bug and the status New is assigned to it. Assigned: After the bug is reported as New, it comes to the development team, which verifies whether the bug is valid. If the bug is valid, the development leader assigns it to a developer to fix, and the status Assigned is given to it. Open: Once the developer starts working on the bug, he/she changes the status of the bug to Open to indicate that he/she is working on a solution. Fixed: Once the developer makes the necessary changes in the code and verifies the code, he/she marks the bug as Fixed and passes it over to the development lead in order to pass it to the testing team. Pending Retest: After the bug is fixed, it is passed back to the testing team to get retested, and the status Pending Retest is assigned to it.

Retest: The testing team leader changes the status of the bug, previously marked Pending Retest, to Retest and assigns it to a tester for retesting. Closed: After the bug is assigned a status of Retest, it is tested again. If the problem is solved, the tester closes it and marks it with Closed status. Reopen: If, after retesting the software for the opened bug, the system behaves in the same way or the same bug arises once again, then the tester reopens the bug and sends it back to the developer, marking its status as Reopen. Pending Reject: If the developers think that a particular behavior of the system, which the tester reports as a bug, is in fact the intended behavior and the bug is invalid, the bug is rejected and marked as Pending Reject. Rejected: If the testing leader finds that the system is working according to the specifications, or that the bug is invalid as per the explanation from development, he/she rejects the bug and marks its status as Rejected. Postponed: Sometimes testing of a particular bug has to be postponed for an indefinite period. This situation may occur for many reasons, such as unavailability of test data, unavailability of a particular functionality, etc. In that case, the bug is marked with Postponed status. Deferred: In some cases a particular bug is of little importance and can be avoided for the time being; it is then marked with Deferred status. What is a test case? A test case is a documented set of steps/activities that are carried out or executed on the software in order to confirm its functionality/behavior under a certain set of inputs. How do I find a bug/error? Basically, test cases/scripts are run in order to find any unexpected behavior of the software product under test. If any such unexpected behavior or exception occurs, it is called a bug. What is a bug/error? A bug or error in a software product is any exception that can hinder the functionality of either the whole software or part of it.
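The status flow described above can be modeled as a small state machine. This is a simplified sketch (real defect trackers vary in their exact states and transitions):

```python
# Allowed bug-status transitions, following the life cycle described above.
TRANSITIONS = {
    "New": {"Assigned"},
    "Assigned": {"Open"},
    "Open": {"Fixed", "Pending Reject", "Postponed", "Deferred"},
    "Fixed": {"Pending Retest"},
    "Pending Retest": {"Retest"},
    "Retest": {"Closed", "Reopen"},
    "Reopen": {"Assigned"},
    "Pending Reject": {"Rejected", "Reopen"},
}

def advance(status, new_status):
    """Return the new status if the transition is legal, else raise."""
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f"illegal transition {status} -> {new_status}")
    return new_status
```

Encoding the life cycle this way makes illegal moves (e.g. reopening a Closed bug directly as Open) fail loudly instead of silently corrupting the log.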
What is Software Requirements Specification?

A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software. What is Soak Testing? Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed. What is Ramp Testing? Continuously raising an input signal until the system breaks down. What is Scalability Testing? Performance testing focused on ensuring the application under test gracefully handles increases in workload. What is Quality Assurance? All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer. What is Gorilla Testing? Testing one particular module or functionality heavily. What is Exhaustive Testing? Testing which covers all combinations of input values and preconditions for an element of the software under test. What is Endurance Testing? Checks for memory leaks or other problems that may occur with prolonged execution. What is Debugging? The process of finding and removing the causes of software failures. What is Data Driven Testing? Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in automated testing.
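Data-driven testing, defined just above, can be sketched as follows: the test logic stays fixed while the cases come from externally maintained rows (inlined here in place of a real CSV file; the function under test is invented):

```python
import csv
import io

def is_valid_age(age):
    """Function under test (hypothetical validation rule: 0..130)."""
    return 0 <= age <= 130

# Stand-in for an external spreadsheet/CSV of test data.
ROWS = io.StringIO("""age,expected
25,True
-1,False
131,False
0,True
""")

def run_data_driven():
    """Run the fixed check against every externally supplied row."""
    outcomes = []
    for row in csv.DictReader(ROWS):
        actual = is_valid_age(int(row["age"]))
        outcomes.append(actual == (row["expected"] == "True"))
    return outcomes
```

Adding a new test case is then a matter of adding a row of data, with no change to the test code.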

What is Code Complete? Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented. What is Branch Testing? Testing in which all branches in the program source code are tested at least once. What is Automated Testing? Testing employing software tools which execute tests without manual intervention. Can be applied in GUI, performance, API, etc. testing. The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.

What is Agile Testing? Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm. What is Ad Hoc Testing? A testing phase where the tester tries to break the system by randomly trying the system's functionality. Can include negative testing as well. Difference between Smoke Testing and Sanity Testing? Smoke testing is non-exhaustive software testing, ascertaining that the most crucial functions of a program work, but not bothering with finer details. Sanity testing is cursory testing; it is performed whenever cursory testing is sufficient to prove the application is functioning according to specifications. This level of testing is a subset of regression testing. What is the difference between QC and QA? Quality assurance is the process where the documents for the product to be tested are verified against the actual requirements of the customers. It includes inspection, auditing, code review, meetings, etc. Quality control is the process where the product is actually executed and the expected behavior is verified by comparing it with the actual behavior.

What is a scenario? A scenario defines the events that occur during each testing session. For example, a scenario defines and controls the number of users to emulate, the actions to be performed, and the machines on which the virtual users run their emulations. What is the difference between Manual and Automation Testing? This answer is quite simple: manual testing is when the user needs to do many things based on the specified test case, say click some tab and check if the tab is working fine, or click on a particular URL and check if the specified web site opens. The same steps can also be performed automatically. What is L10N Testing? L10N testing is localization testing; it verifies whether your products are ready for local markets or not. What is I18N Testing? I18N testing is internationalization testing: determine whether your developed products' support for international character encoding methods is sufficient, and whether your product development methodologies take into account international coding standards. What makes a good software test engineer? A good test engineer has a test-to-break attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, as is an ability to communicate with both technical (developers) and non-technical (customers, management) people. What is the role of documentation in QA? Documentation plays a critical role in QA. QA practices should be documented so that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports, and user manuals should all be documented. Ideally, there should be a system for easily finding and obtaining documents and determining what documentation will have a particular piece of information. How do you perform integration testing? First, unit testing has to be completed. Upon completion of unit testing, integration testing begins.
Integration testing is black box testing. The purpose of integration testing is to ensure distinct components of the

application still work in accordance with customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. What is clear box testing? Clear box testing is the same as white box testing: a testing approach that examines the application's program structure and derives test cases from the application's program logic. What is closed box testing? Closed box testing is the same as black box testing: a type of testing that considers only externally visible behavior. Black box testing considers neither the code itself nor the inner workings of the software. What is open box testing? Open box testing is the same as white box testing: it examines the application's program structure and derives test cases from the application's program logic. What is Software Quality Assurance? Software QA involves the entire software development PROCESS: monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to prevention. What is Security Testing? Application vulnerabilities leave your system open to attacks, downtime, data theft, data corruption, and application defacement. Security within an application or web service is crucial to avoid such vulnerabilities and new threats. While automated tools can help to eliminate many generic security issues, the detection of application vulnerabilities requires independent evaluation of your specific application. What is Stress Testing? Stress testing is a form of testing that is used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. For example, a web server may be stress tested using scripts, bots, and various denial of service attacks. What is Acceptance Testing?

User acceptance testing (UAT) is one of the final stages of a software project and will often occur before the customer accepts a new system. Users of the system will perform these tests, which, ideally, developers have derived from the User Requirements Specification, to which the system should conform. Test designers will draw up a formal test plan. What is Compatibility Testing? One of the challenges of software development is ensuring that the application works properly on the different platforms and operating systems on the market, and also with the applications and devices in its environment. A compatibility testing service aims at locating application problems by running the application in real environments, thus ensuring that it behaves as expected. What is Walkthrough? A review of requirements, designs, or code characterized by the author of the object under review guiding the progression of the review. What is Inspection? A group review quality improvement process for written material. It consists of two aspects: product (document) improvement and process improvement (of both document production and inspection). What is a Capture/Playback Tool? A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. What is Big Bang Testing? Integration testing where no incremental testing takes place prior to all the system's components being combined to form the system. What is Effort Variance? Effort variance = (Actual effort - Planned effort) / Planned effort * 100. How do you create a test strategy? The test strategy is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy, and reviews the plan with the project team. The test plan may include test cases, conditions, and the test environment. How do you create a test plan/design?
Test scenarios and/or cases are prepared by reviewing functional requirements of the release and preparing logical groups of functions that can be further broken into test

procedures. Test procedures define test conditions, data to be used for testing, and expected results, including database updates, file outputs, and report results. Generally speaking: - Test cases and scenarios are designed to represent both typical and unusual situations that may occur in the application. - Test engineers define unit test requirements and unit test cases. Test engineers also execute unit test cases. - It is the test team who, with the assistance of developers and clients, develops test cases and scenarios for integration and system testing. - Test scenarios are executed through the use of test procedures or scripts. - Test procedures or scripts define a series of steps necessary to perform one or more test scenarios. - Test procedures or scripts include the specific data that will be used for testing the process or transaction. - Test procedures or scripts may cover multiple test scenarios. - Test scripts are mapped back to the requirements, and traceability matrices are used to ensure each test is within scope. - Test data is captured and baselined prior to testing. This data serves as the foundation for unit and system testing and is used to exercise system functionality in a controlled environment. - Some output data is also baselined for future comparison. Baselined data is used to support future application maintenance via regression testing. - A pre-test meeting is held to assess the readiness of the application and the environment and data to be tested. A test readiness document is created to indicate the status of the entrance criteria of the release. Inputs for this process: - Approved Test Strategy Document. - Test tools, or automated test tools, if applicable. - Previously developed scripts, if applicable. - Test documentation problems uncovered as a result of testing. - A good understanding of software complexity and module path coverage, derived from general and detailed design documents, e.g.
software design document, source code, and software complexity data. Outputs for this process: - Approved documents of test scenarios, test cases, test conditions, and test data. - Reports of software design issues, given to software developers for correction. Explain the WinRunner testing process? The WinRunner testing process involves six main stages: 1. Create the GUI map file so that WinRunner can recognize the GUI objects in the application being tested. 2. Create test scripts by recording, programming, or a combination of both. While recording tests, insert checkpoints where you want to check the response of the application being tested. 3. Debug tests: run tests in Debug mode to make sure they run smoothly. 4. Run tests: run tests in Verify mode to test your application. 5. View results: determine the success or failure of the tests. 6. Report defects: if a test run fails due to a defect in the application being tested, you can report information about the defect

directly from the Test Results window. What is a test schedule? The test schedule is a schedule that identifies all tasks required for a successful testing effort: a schedule of all test activities and resource requirements. What is a Test Configuration Manager? Test configuration managers maintain test environments, scripts, software, and test data. Depending on the project, one person may wear more than one hat; for instance, test engineers may also wear the hat of a test configuration manager. What is a Test Engineer? A test engineer is an engineer who specializes in testing. Test engineers create test cases, procedures, and scripts and generate data. They execute test procedures and scripts, analyze standards of measurement, and evaluate results of system/integration/regression testing. They also speed up the work of your development staff and reduce your risk. What is a Test/QA Team Lead? The Test/QA team lead coordinates the testing activity, communicates testing status to management, and manages the test team. What testing roles are standard on most testing projects? Depending on the organization, the following roles are more or less standard on most testing projects: testers, test engineers, test/QA team lead, test/QA manager, system administrator, database administrator, technical analyst, test build manager, and test configuration manager. Depending on the project, one person may wear more than one hat; for instance, test engineers may also take on additional roles. What is alpha testing? Alpha testing is testing of an application when development is nearing completion. Minor design changes can still be made as a result of alpha testing. Alpha testing is typically performed by end-users or others, not programmers, software engineers, or test engineers. What is acceptance testing? Acceptance testing is black box testing that gives the client/customer/project manager the opportunity to verify the system functionality and usability prior to the system being released to production.
The acceptance test is the responsibility of the client/customer or project manager, however, it is

conducted with the full support of the project team. The test team [] What is comparison testing? Comparison testing is testing that compares software weaknesses and strengths to those of competitors products. What is beta testing? Beta testing is testing an application when development and testing are essentially completed and final bugs and problems need to be found before the final release. Beta testing is typically performed by end-users or others, not programmers, software engineers, or test engineers. What is compatibility testing? Compatibility testing is testing how well software performs in a particular hardware, software, operating system, or network environment. What is recovery/error testing? Recovery/error testing is testing how well a system recovers from crashes, hardware failures, or other catastrophic problems. What is security/penetration testing? Security/penetration testing is testing how well the system is protected against unauthorized internal or external access, or willful damage. This type of testing usually requires sophisticated testing techniques. What is load testing? Load testing is testing an application under heavy loads, such as the testing of a web site under a range of loads to determine at what point the system response time will degrade or fail. What is performance testing? Performance testing verifies loads, volumes and response times, as defined by requirements. Although performance testing is a part of system testing, it can be regarded as a distinct level of testing. What is sanity testing? Sanity testing is a cursory testing; it is performed whenever a cursory testing is sufficient to prove the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, [] What is end-to-end testing? 
End-to-end testing is similar to system testing: the macro end of the test scale. It involves testing a complete application in a situation that mimics real-life use, such as interacting with a database, using network communication, or interacting with other hardware, applications, or systems.

What is regression testing?
The objective of regression testing is to ensure the software remains intact. A baseline set of data and scripts is maintained and executed to verify that changes introduced during the release have not undone any previous code. Expected results from the baseline are compared to results of the software under test. All discrepancies are highlighted []

What is system testing?
System testing is black box testing performed by the test team; at the start of system testing the complete system is configured in a controlled environment. The purpose of system testing is to validate an application's accuracy and completeness in performing the functions as designed. System testing simulates real-life scenarios that occur []

What is integration testing?
Upon completion of unit testing, integration testing begins. Integration testing is black box testing. The purpose of integration testing is to ensure that distinct components of the application still work in accordance with customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by []

What is usability testing?
Usability testing is testing for user-friendliness. Clearly this is subjective and depends on the targeted end user or customer. User interviews, surveys, video recording of user sessions and other techniques can be used. Test engineers are needed, because programmers and developers are usually not appropriate as usability testers.

What is functional testing?
Functional testing is black-box testing geared to the functional requirements of an application. Test engineers should perform functional testing.

What is parallel/audit testing?
Parallel/audit testing is testing where the user reconciles the output of the new system to the output of the current system to verify that the new system performs the operations correctly.

What is unit testing?
Unit testing is the first level of dynamic testing and is first the responsibility of developers and then that of the test engineers. Unit testing is complete when the expected test results are met or differences are explainable/acceptable.

What is white box testing?
White box testing is based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths and conditions.

What is black box testing?
Black box testing is functional testing, not based on any knowledge of internal software design or code. Black box testing is based on requirements and functionality.

What is good code?
Good code is code that works, is free of bugs, and is readable and maintainable. Organizations usually have coding standards that all developers should adhere to, but every programmer and software engineer has different ideas about what is best and what are too many or too few rules. We need to keep in mind that []

What is validation?
Validation ensures that functionality, as defined in requirements, is the intended behavior of the product. Validation typically involves actual testing and takes place after verifications are completed.

What is verification?
Verification ensures the product is designed to deliver all functionality to the customer. It typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications; this can be done with checklists, issues lists, walkthroughs and inspection meetings.

How can it be known when to stop testing?
This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:
- Deadlines (release deadlines, testing deadlines, etc.)
- Test cases completed with a certain percentage passed
- Test budget depleted
- Coverage of code/functionality/requirements reaches a specified point
- Bug rate falls below a certain level
- Beta or alpha testing period ends
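These stopping factors can be combined into a simple exit-criteria check. The sketch below is purely illustrative; the function name, threshold names and default values are hypothetical examples, not from the text or any standard:

```python
# Illustrative sketch of a "when to stop testing" decision.
# All names and thresholds here are invented examples.

def should_stop_testing(pass_rate, coverage, bugs_per_day, budget_left,
                        min_pass_rate=0.95, min_coverage=0.80, max_bug_rate=2):
    """Return True when the common exit criteria are satisfied."""
    if budget_left <= 0:                 # test budget depleted
        return True
    return (
        pass_rate >= min_pass_rate and   # e.g. 95% of test cases passed
        coverage >= min_coverage and     # coverage reached a specified point
        bugs_per_day <= max_bug_rate     # bug find rate has fallen off
    )

# A run with a high pass rate, good coverage and a low bug rate can stop:
print(should_stop_testing(pass_rate=0.97, coverage=0.85,
                          bugs_per_day=1, budget_left=10))  # True
```

In practice these criteria are weighed together with deadlines and risk, not applied mechanically.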

Why test your software?
Even the most carefully planned and designed software cannot possibly be free of defects. Your goal as a quality engineer is to find these defects. This requires creating and executing many tests. For software testing to be successful, you should start the testing process as soon as possible. Each new version must be tested to ensure that "improvements" do not generate new defects. If you begin testing only shortly before an application is scheduled for release, you will not have time to detect and repair many serious defects. By testing ahead of time, you can prevent problems for your users and avoid costly delays.

Let us derive the main definition of software testing.

I. Identify the defects.
The main purpose of software testing is to identify defects. What does "defect" mean?
Defect: a flaw in a component or system that can cause the component or system to fail to perform its required function. See also fault, failure, error and bug.
Fault: similar to a defect.
Failure: deviation of the component or system from its expected delivery, service or result.
Error: a human action that produces an incorrect result.
Bug: similar to a defect.

II. Isolate the defects.
Isolating means separating or dividing the defects. The isolated defects are collected in the defect profile.

What is a Defect Profile document?
a. The defect profile is a document with many columns.
b. It is a template provided by the company.

III. Subject the defects for rectification.
The defect profile is subjected for rectification; that is, it is sent to the developer.

IV. Defects are rectified.
After getting the build back from the developer, make sure all the defects are rectified, before defining it as a quality product.

What is quality in testing?
Quality is defined as justification of user requirements, or satisfaction of user requirements.

SOFTWARE TESTING, MAIN DEFINITION: Software testing is the process in which defects are identified, isolated and subjected for rectification, and in which it is finally made sure that all the defects are rectified, in order to ensure that the product is a quality product.

Objectives of software testing:
* Understand the difference between verification and validation testing activities.
* Understand what benefits the V model offers over other models.
* Be aware of other models in order to compare and contrast.
* Understand that the cost of fixing faults increases as you move the product towards live use.
* Understand what constitutes a master test plan.
* Understand the meaning of each testing stage.

LoadRunner Questions:

1. What is load testing?
Load testing is testing whether the application works fine with the loads that result from a large number of simultaneous users and transactions, and determining whether it can handle peak usage periods.

2. What is performance testing?
Timing for both read and update transactions should be gathered to determine whether system functions are being performed in an acceptable timeframe. This should be done standalone and then in a multi-user environment, to determine the effect of multiple transactions on the timing of a single transaction.

3. Did you use LoadRunner? What version?
Yes. Version 7.2.

4. Explain the load testing process?
Step 1: Planning the test. Here, we develop a clearly defined test plan to ensure the test scenarios we develop will accomplish the load-testing objectives.
Step 2: Creating Vusers. Here, we create Vuser scripts that contain tasks performed by each Vuser, tasks performed by Vusers as a whole, and tasks measured as transactions.

Step 3: Creating the scenario. A scenario describes the events that occur during a testing session. It includes a list of the machines, scripts, and Vusers that run during the scenario. We create scenarios using the LoadRunner Controller. We can create manual scenarios as well as goal-oriented scenarios. In manual scenarios, we define the number of Vusers, the load generator machines, and the percentage of Vusers to be assigned to each script. For web tests, we may create a goal-oriented scenario, where we define the goal that our test has to achieve; LoadRunner automatically builds the scenario for us.
Step 4: Running the scenario. We emulate load on the server by instructing multiple Vusers to perform tasks simultaneously. Before the test, we set the scenario configuration and scheduling. We can run the entire scenario, Vuser groups, or individual Vusers.
Step 5: Monitoring the scenario. We monitor scenario execution using the LoadRunner online runtime, transaction, system resource, web resource, web server resource, web application server resource, database server resource, network delay, streaming media resource, firewall server resource, ERP server resource, and Java performance monitors.
Step 6: Analyzing test results. During scenario execution, LoadRunner records the performance of the application under different loads. We use LoadRunner's graphs and reports to analyze the application's performance.

5. When do you do load and performance testing?
We perform load testing once we are done with interface (GUI) testing. Modern system architectures are large and complex. Whereas single-user testing focuses primarily on the functionality and user interface of a system component, application testing focuses on the performance and reliability of an entire system. For example, a typical application-testing scenario might depict 1000 users logging in simultaneously to a system. This gives rise to issues such as: what is the response time of the system, does it crash, will it work with different software applications and platforms, can it hold so many hundreds and thousands of users, etc. This is when we do load and performance testing.

6. What are the components of LoadRunner?
The components of LoadRunner are the Virtual User Generator, the Controller, the Agent process, LoadRunner Analysis and Monitoring, and LoadRunner Books Online.

7. What component of LoadRunner would you use to record a script?
The Virtual User Generator (VuGen) component is used to record a script. It enables you to develop Vuser scripts for a variety of application types and communication protocols.

8. What component of LoadRunner would you use to play back the script in multi-user mode?

The Controller component is used to play back the script in multi-user mode. This is done during a scenario run, where a Vuser script is executed by a number of Vusers in a group.

9. What is a rendezvous point?
You insert rendezvous points into Vuser scripts to emulate heavy user load on the server. Rendezvous points instruct Vusers to wait during test execution for multiple Vusers to arrive at a certain point, so that they may simultaneously perform a task. For example, to emulate peak load on the bank server, you can insert a rendezvous point instructing 100 Vusers to deposit cash into their accounts at the same time.

10. What is a scenario?
A scenario defines the events that occur during each testing session. For example, a scenario defines and controls the number of users to emulate, the actions to be performed, and the machines on which the virtual users run their emulations.

11. Explain the recording mode for a web Vuser script?
We use VuGen to develop a Vuser script by recording a user performing typical business processes on a client application. VuGen creates the script by recording the activity between the client and the server. For example, in web-based applications, VuGen monitors the client end of the database and traces all the requests sent to, and received from, the database server. We use VuGen to: monitor the communication between the application and the server; generate the required function calls; and insert the generated function calls into a Vuser script.

12. Why do you create parameters?
Parameters are like script variables. They are used to vary input to the server and to emulate real users. Different sets of data are sent to the server each time the script is run. This better simulates the usage model for more accurate testing from the Controller; one script can emulate many different users on the system.

13. What is correlation? Explain the difference between automatic correlation and manual correlation?
Correlation is used to obtain data which are unique for each run of the script and which are generated by nested queries. Correlation provides the value to avoid errors arising out of duplicate values and also optimizes the code (avoiding nested queries). Automatic correlation is where we set some rules for correlation; it can be application-server specific. Here values are replaced by data created by these rules. In manual correlation, the value we want to correlate is scanned, and "create correlation" is used to correlate it.

14. How do you find out where correlation is required? Give a few examples from your projects?

Two ways. First, we can scan for correlations and see the list of values which can be correlated; from this we can pick a value to be correlated. Secondly, we can record two scripts and compare them; we can look at the difference file to see the values which need to be correlated. In my project, there was a unique id generated for each customer. It was the insurance number; it was generated automatically, it was sequential, and this value was unique. I had to correlate this value in order to avoid errors while running my script. I did this using scan for correlation.

15. Where do you set automatic correlation options?
Automatic correlation for the web can be set in the recording options, correlation tab. Here we can enable correlation for the entire script and choose either issue online messages or offline actions, where we can define rules for that correlation. Automatic correlation for a database can be done using show output window, scan for correlation, picking the correlate query tab, and choosing which query value we want to correlate. If we know the specific value to be correlated, we just do create correlation for the value and specify how the value is to be created.

16. What function captures dynamic values in a web Vuser script?
The web_reg_save_param function saves dynamic data information to a parameter.

17. When do you disable logging in the Virtual User Generator? When do you choose standard and extended logs?
Once we debug our script and verify that it is functional, we can enable logging for errors only. When we add a script to a scenario, logging is automatically disabled.
Standard log option: when you select standard log, it creates a standard log of the functions and messages sent during script execution, to use for debugging. Disable this option for large load testing scenarios. When you copy a script to a scenario, logging is automatically disabled.
Extended log option: select extended log to create an extended log, including warnings and other messages. Disable this option for large load testing scenarios. When you copy a script to a scenario, logging is automatically disabled. We can specify which additional information should be added to the extended log using the extended log options.

18. How do you debug a LoadRunner script?
VuGen contains two options to help debug Vuser scripts: the Run Step by Step command and breakpoints. The Debug settings in the Options dialog box allow us to determine the extent of the trace to be performed during scenario execution. The debug information is written to the Output window. We can manually set the message class within the script using the lr_set_debug_message function. This is useful if we want to receive debug information about a small section of the script only.

19. How do you write user-defined functions in LoadRunner? Give me a few functions you wrote in your previous project?

Before we create user-defined functions, we need to create an external library (DLL) containing the function. We add this library to the VuGen bin directory. Once the library is added, we assign the user-defined function as a parameter. The function should have the following format: __declspec(dllexport) char* <function name>(char*, char*). GetVersion, GetCurrentTime and GetPlatform are some of the user-defined functions used in my earlier project.

20. What changes can you make in run-time settings?
The run-time settings that we can change are:
a) Pacing: contains the iteration count.
b) Log: under this we have disable logging, standard log, and extended log.
c) Think time: here we have two options, ignore think time and replay think time.
d) General: under the general tab we can set the Vusers to run as a process or as multithreading, and whether each step is a transaction.

21. Where do you set iterations for Vuser testing?
We set iterations in the run-time settings of VuGen. The navigation for this is: run-time settings, Pacing tab, set number of iterations.

22. How do you perform functional testing under load?
Functionality under load can be tested by running several Vusers concurrently. By increasing the number of Vusers, we can determine how much load the server can sustain.

23. What is ramp up? How do you set this?
This option is used to gradually increase the amount of Vusers/load on the server. An initial value is set, and a value to wait between intervals can be specified. To set ramp up, go to Scenario Scheduling Options.

24. What is the advantage of running a Vuser as a thread?
VuGen provides the facility to use multithreading, which enables more Vusers to be run per generator. If the Vuser is run as a process, the same driver program is loaded into memory for each Vuser, taking up a large amount of memory; this limits the number of Vusers that can be run on a single generator. If the Vuser is run as a thread, only one instance of the driver program is loaded into memory for the given number of Vusers (say 100). Each thread shares the memory of the parent driver program, thus enabling more Vusers to be run per generator.

25. If you want to stop the execution of your script on error, how do you do that?
The lr_abort function aborts the execution of a Vuser script. It instructs the Vuser to stop executing the Actions section, execute the vuser_end section and end the execution. This function is useful when you need to manually abort a script execution as a result of a

specific error condition. When you end a script using this function, the Vuser is assigned the status "Stopped". For this to take effect, we first have to uncheck the "Continue on error" option in the run-time settings.

26. What is the relation between response time and throughput?
The throughput graph shows the amount of data, in bytes, that the Vusers received from the server in a second. When we compare this with the transaction response time, we will notice that as throughput decreased, the response time also decreased. Similarly, the peak throughput and highest response time would occur at approximately the same time.

27. Explain the configuration of your systems?
The configuration of our systems refers to that of the client machines on which we run the Vusers. The configuration of any client machine includes its hardware settings, memory, operating system, software applications, development tools, etc. This system component configuration should match the overall system configuration, which would include the network infrastructure, the web server, the database server, and any other components that go with this larger system, so as to achieve the load testing objectives.

28. How do you identify performance bottlenecks?
Performance bottlenecks can be detected by using monitors. These monitors might be application server monitors, web server monitors, database server monitors and network monitors. They help in finding the troubled area in our scenario which causes increased response time. The measurements made are usually response time, throughput, hits/sec, network delay graphs, etc.

29. If the web server, database and network are all fine, where could the problem be?
The problem could be in the system itself, in the application server, or in the code written for the application.

30. How did you find web server related issues?
Using web resource monitors, we can find the performance of web servers. Using these monitors we can analyze the throughput on the web server, the number of hits per second that occurred during the scenario, the number of HTTP responses per second, and the number of downloaded pages per second.

31. How did you find database related issues?
By running the "Database" monitor, and with the help of the "Data Resource Graph", we can find database related issues. For example, you can specify the resource you want to measure before running the Controller, and then you can see the database related issues.
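Monitor data of this kind is what the Analysis graphs are built from. As a rough sketch of how such samples can be post-processed to locate the load level where response time degrades (the sample numbers and the doubling threshold are invented for illustration; LoadRunner Analysis does this graphically):

```python
# Sketch: find the load level at which average response time degrades.
# The samples below are invented; real data would come from monitor graphs.

samples = [  # (concurrent Vusers, average response time in seconds)
    (10, 0.8), (20, 0.9), (30, 1.0), (40, 1.1), (50, 1.3), (56, 4.7),
]

def breaking_point(samples, factor=2.0):
    """Return the Vuser count where response time first jumps by `factor`."""
    for (_, prev), (vusers, curr) in zip(samples, samples[1:]):
        if curr > prev * factor:
            return vusers
    return None  # no sharp degradation observed

print(breaking_point(samples))  # 56 with this invented data
```

This mirrors the bottleneck analysis described later (question 42), where a sharp rise in average response time at 56 Vusers marked the point where the test "broke" the server.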

32. Explain all the web recording options?

33. What is the difference between an overlay graph and a correlate graph?
Overlay graph: overlays the content of two graphs that share a common x-axis. The left y-axis of the merged graph shows the current graph's values, and the right y-axis shows the values of the graph that was merged.
Correlate graph: plots the y-axes of two graphs against each other. The active graph's y-axis becomes the x-axis of the merged graph; the y-axis of the graph that was merged becomes the merged graph's y-axis.

34. How did you plan the load? What were the criteria?
A load test is planned to decide the number of users, what kinds of machines we are going to use, and from where they are run. It is based on two important documents: the task distribution diagram and the transaction profile. The task distribution diagram gives us the information on the number of users for a particular transaction and the time of the load; the peak usage and off-usage are decided from this diagram. The transaction profile gives us information about the transaction names and their priority levels with regard to the scenario we are deciding.

35. What does the vuser_init action contain?
The vuser_init action contains procedures to log in to a server.

36. What does the vuser_end action contain?
The vuser_end section contains log-off procedures.

37. What is think time? How do you change the threshold?
Think time is the time that a real user waits between actions. For example, when a user receives data from a server, the user may wait several seconds to review the data before responding. This delay is known as think time.
Changing the threshold: the threshold level is the level below which recorded think time will be ignored. The default value is five (5) seconds. We can change the think time threshold in the recording options of VuGen.

38. What is the difference between the standard log and the extended log?
The standard log sends a subset of the functions and messages sent during script execution to a log; the subset depends on the Vuser type. The extended log sends detailed script execution messages to the output log. This is mainly used during debugging, when we want information about: parameter substitution; data returned by the server; advanced trace.

39. Explain the following functions:
lr_debug_message - sends a debug message to the output log when the specified message class is set.

lr_output_message - sends notifications to the Controller Output window and the Vuser log file.
lr_error_message - sends an error message to the LoadRunner Output window.
lrd_stmt - associates a character string (usually a SQL statement) with a cursor; this function sets a SQL statement to be processed.
lrd_fetch - fetches the next row from the result set.

40. Throughput
If the throughput scales upward as time progresses and the number of Vusers increases, this indicates that the bandwidth is sufficient. If the graph were to remain relatively flat as the number of Vusers increased, it would be reasonable to conclude that the bandwidth is constraining the volume of data delivered.

41. Types of goals in a goal-oriented scenario
LoadRunner provides you with five different types of goals in a goal-oriented scenario:
* the number of concurrent Vusers
* the number of hits per second
* the number of transactions per second
* the number of pages per minute
* the transaction response time that you want your scenario to reach

42. Analysis scenario (bottlenecks)
In the Running Vuser graph correlated with the response time graph, you can see that as the number of Vusers increases, the average response time of the "check itinerary" transaction very gradually increases; in other words, the average response time steadily increases as the load increases. At 56 Vusers there is a sudden, sharp increase in the average response time. We say that the test "broke" the server; that is the mean time before failure (MTBF). The response time clearly began to degrade when there were more than 56 Vusers running simultaneously.

43. What is correlation? Explain the difference between automatic correlation and manual correlation?
Correlation is used to obtain data which are unique for each run of the script and which are generated by nested queries. Correlation provides the value to avoid errors arising out of duplicate values and also optimizes the code (avoiding nested queries). Automatic correlation is where we set some rules for correlation; it can be application-server specific. Here values are replaced by data created by these rules. In manual correlation, the value we want to correlate is scanned, and "create correlation" is used to correlate it.

44. Where do you set automatic correlation options?
Automatic correlation for the web can be set in the recording options, correlation tab. Here we can enable correlation for the entire script and choose either issue online messages or offline actions, where we can define rules for that correlation. Automatic correlation for a database can be done using show output window, scan for correlation, picking the correlate query tab, and choosing which query value we want to correlate. If we know the specific value to be correlated, we just do create correlation for the value and specify how the value is to be created.

45. What function captures dynamic values in a web Vuser script?

The web_reg_save_param function saves dynamic data information to a parameter.
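Conceptually, web_reg_save_param captures the text found between a left and a right boundary in the server's response and stores it in a parameter for later use (this is the mechanism behind correlation). A rough sketch of that idea in Python; the HTML snippet and boundary strings are made up, and this is an analogy, not the LoadRunner API:

```python
# Sketch of boundary-based capture, the idea behind web_reg_save_param.
# The HTML snippet and boundary strings are invented for illustration.

def save_param(response, left_boundary, right_boundary):
    """Return the text found between the two boundaries, or None."""
    start = response.find(left_boundary)
    if start == -1:
        return None
    start += len(left_boundary)
    end = response.find(right_boundary, start)
    if end == -1:
        return None
    return response[start:end]

# A hidden session id, the classic value that must be correlated:
html = '<input type="hidden" name="session_id" value="A1B2C3">'
session_id = save_param(html, 'value="', '">')
print(session_id)  # A1B2C3
```

In a real script the captured value would then be substituted into subsequent requests, so that each run uses its own server-generated value instead of the one hard-coded at record time.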

WinRunner Functions
Database Functions

db_check - captures and compares data from a database. Note that the checklist file (arg1) can be created only during record.
arg1 - checklist file.

db_connect - creates a new connection session with a database.
arg1 - the session name (string); arg2 - a connection string, for example "DSN=SQLServer_Source;UID=SA;PWD=abc123".

db_disconnect - disconnects from the database and deletes the session.
arg1 - the session name (string).

db_dj_convert - executes a Data Junction conversion export file (djs).
arg1 - the export file name (*.djs); arg2 - an optional parameter to override the output file name; arg3 - an optional boolean parameter for whether to include the headers (the default is TRUE); arg4 - an optional parameter to limit the number of records (-1, the default, is no limit).

db_execute_query - executes an SQL statement. Note that db_connect for (arg1) should be called before this function.
arg1 - the session name (string); arg2 - an SQL statement; arg3 - an out parameter returning the number of records.

db_get_field_value

This function returns the value of a single item of an executed query. Note that db_execute_query for (arg1) should be called before this function.
arg1 - the session name (string); arg2 - the row index number (zero-based); arg3 - the column index number (zero-based) or the column name.

db_get_headers - returns the field headers and the number of fields of an executed query. Note that db_execute_query for (arg1) should be called before this function.
arg1 - the session name (string); arg2 - an out parameter returning the number of fields; arg3 - an out parameter returning the concatenation of the field headers, delimited by TAB.

db_get_last_error - returns the last error message of the last ODBC operation.
arg1 - the session name (string); arg2 - an out parameter returning the last error.

db_get_row - returns a whole row of an executed query. Note that db_execute_query for (arg1) should be called before this function.
arg1 - the session name (string); arg2 - the row number (zero-based); arg3 - an out parameter returning the concatenation of the field values, delimited by TAB.

db_record_check - checks that the specified record exists in the database. Note that the checklist file (arg1) can be created only using the Database Record Verification Wizard.
arg1 - checklist file; arg2 - success criteria; arg3 - number of records found.

db_write_records - writes the records of an executed query into a file. Note that db_execute_query for (arg1) should be called before this function.
arg1 - the session name (string); arg2 - the output file name; arg3 - an optional boolean parameter for whether to

include the headers (the default is TRUE); arg4 - an optional parameter to limit the number of records (-1, the default, is no limit).

ddt_update_from_db - updates the table with data from a database.
arg1 - table name; arg2 - query or conversion file (*.sql, *.djs); arg3 (out) - number of rows actually retrieved; arg4 (optional) - maximum number of rows to retrieve (default - no limit).

GUI Functions

GUI_add - adds an object to a buffer.
arg1 is the buffer in which the object will be entered; arg2 is the name of the window containing the object; arg3 is the name of the object; arg4 is the description of the object.

GUI_buf_get_desc - returns the description of an object.
arg1 is the buffer in which the object exists; arg2 is the name of the window containing the object; arg3 is the name of the object; arg4 is the returned description.

GUI_buf_get_desc_attr - returns the value of an object property.
arg1 is the buffer in which the object exists; arg2 is the name of the window; arg3 is the name of the object; arg4 is the property; arg5 is the returned value.

GUI_buf_get_logical_name - returns the logical name of an object.
arg1 is the buffer in which the object exists; arg2 is the description of the object; arg3 is the name of the window containing the object; arg4 is the returned name.

GUI_buf_new
This function creates a new GUI buffer.
arg1 - the buffer name

GUI_buf_set_desc_attr
This function sets the value of an object property.
arg1 - the buffer in which the object exists
arg2 - the name of the window
arg3 - the name of the object
arg4 - the property
arg5 - the value

GUI_close
This function closes a GUI buffer.
arg1 - the file name

GUI_close_all
This function closes all the open GUI buffers.

GUI_delete
This function deletes an object from a buffer.
arg1 - the buffer in which the object exists
arg2 - the name of the window containing the object
arg3 - the name of the object (if empty, the window will be deleted)

GUI_desc_compare
This function compares two physical descriptions (returns 0 if they are the same).
arg1 - the first description
arg2 - the second description
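The db_* functions above all follow one execute-then-fetch pattern: run a query once per session, then read individual cells by zero-based row/column index, or whole rows and headers as tab-delimited strings. For readers without WinRunner, here is a minimal sketch of the same pattern in Python, using the stdlib sqlite3 module as a stand-in database; the helper names and the flights table are illustrative, not part of WinRunner:

```python
import sqlite3

def execute_query(conn, sql):
    """Run a query once and cache all rows, as db_execute_query does per session."""
    cur = conn.execute(sql)
    headers = [col[0] for col in cur.description]
    return headers, cur.fetchall()

def get_field_value(rows, row_index, col_index):
    """Zero-based cell access, analogous to db_get_field_value(session, row, col)."""
    return rows[row_index][col_index]

def get_row(rows, row_index):
    """A whole row as a tab-delimited string, analogous to db_get_row."""
    return "\t".join(str(v) for v in rows[row_index])

# Illustrative data only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flights (id INTEGER, dest TEXT)")
conn.executemany("INSERT INTO flights VALUES (?, ?)", [(1, "LHR"), (2, "JFK")])

headers, rows = execute_query(conn, "SELECT id, dest FROM flights ORDER BY id")
print("\t".join(headers))           # tab-delimited headers, like db_get_headers
print(get_field_value(rows, 1, 1))  # -> JFK
print(get_row(rows, 0))             # -> 1<TAB>LHR
```

The tab-delimited return convention mirrors how db_get_headers and db_get_row hand multiple values back through a single out parameter.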

QTP QUESTIONS:

1. This Quick Test feature allows you to select the appropriate add-ins to load with your test.
Ans: Add-in Manager

2. Name the six columns of the Keyword view.
Ans: Item, Operation, Value, Documentation, Assignment, Comment

3. List the steps to change the logical name of an object named Three Panel into Status Bar in the Object Repository.
Ans: Select Tools > Object Repository. In the Action1 object repository list of objects, select the object, right-click, and select Rename from the pop-up menu.

4. Name the two run modes available in Quick Test Professional.
Ans: Normal and Fast

5. True or False: When Quick Test Professional is connected to Quality Center, all automated assets (e.g. tests, values) are stored in Quality Center.
Ans: True

6. What information do you need when you select the Record and Run Setting "Record and run on these applications (opened when a session begins)"?
Ans: Application details (name, location, any program arguments)

7. Name and discuss the three elements that make up a recorded step.
Ans: Item - the object recorded; Operation - the action performed on the object; Value - the value selected, typed, or set for the recorded object

8. There are two locations to store run results. What are these two locations? Discuss each.
Ans: New run results folder - permanently stores a copy of the run results in a separate location under the automated test folder. Temporary run results folder - run results can be overwritten every time the test is played back.

9. True or False: The object class Quick Test uses to identify an object is a property of the object.
Ans: False

10. True or False: You can modify the list of pre-selected properties that Quick Test uses to identify an object.
Ans: True

11. True or False: A synchronization step instructs Quick Test to wait for a state of a property of an object before proceeding to the next recorded step. Synchronization steps are activated only during recording.
Ans: True

12. Manually verifying that an order number was generated by an application and displayed on the GUI is automated in Quick Test using what feature?
Ans: A checkpoint

13. True or False: Quick Test can automate verifications which are not visible on the interface of the application under test.
Ans: True

14. What is a checkpoint timeout value?
Ans: A checkpoint timeout value specifies the time interval (in seconds) during which Quick Test attempts to perform the checkpoint successfully. Quick Test continues to perform the checkpoint until it passes or until the timeout occurs. If the checkpoint does not pass before the timeout occurs, the checkpoint fails.

15. True or False: You can modify the name of a checkpoint for better readability.
Ans: False

16. Define a regular expression for each of the following ranges of values:
a. Verify that the values begin with Afx3 followed by 3 random digits (e.g. Afx3459, Afx3712, Afx3165)
b. Verify that a five-digit value is included in the string "Status code 78923 approved"
Ans: a. Afx3\d{3}  b. Status Code \d{5} approved

17. Write the letter of the type of parameter that best corresponds to each requirement:
a. An order number has to be retrieved from the window and saved into a file for each test run.
b. A value between 12 and 22 is entered in no particular order for every test run.
c. Every iteration of a test should select a new city from a list item.

A. Environment Parameter
B. Input Parameter
C. Component Parameter
D. Output Parameter
E. Random Parameter
Ans: a-D, b-E, c-B

18. This is the Data Table that contains values retrieved from the application under test. You can view the captured values after the test run, from the Test Results. What do you call this data table?
Ans: Run-Time Data Table

19. Name and describe each of the four types of trigger events.
Ans: A pop-up window appears in an opened application during the test run. A property of an object changes its state or value. A step in the test does not run successfully. An open application fails during the test run.

20. Explain initial and end conditions.
Ans: Initial and end conditions refer to the starting and ending points of a test that allow the test to iterate from the same location, and with the same setup, every time (e.g. all fields are blank, the test starts at the main menu page).

21. What record and run setting did you select so that the test iterates starting at the home page?
Ans: Record and run test on any open Web browser.

22. What Quick Test feature did you use to automate step 4 in the test case? What properties did you select?
Ans: A standard checkpoint on the list item Number of Tickets. Properties: items count, inner text, all items

23. Select Tools > Object Repository. In the Action1 object repository list of objects, select an object, right-click, and select Rename from the pop-up menu.
Ans: Input Parameter

24. What planning considerations did you have to perform in order to meet the above listed requirements?
Ans: Register at least three new users prior to creating the automated test in order to have seed data in the database.

25. What Quick Test feature did you use to meet the requirement "The test should iterate at least three times using different user names and passwords"?
Ans: Random parameter, range 1 to 4

26. Discuss how you automated the requirement "Each name used during sign-in should be the first name used when booking the ticket at the Book a Flight page."
Ans: The username is already an input parameter. Parameterize the step passFirst0 under BookAFlight and use the parameter for username.

27. Challenge: What Quick Test feature did you use to meet the requirement "All passwords should be encrypted"?
Ans: From the Data Table, select the cell and perform a right mouse click. A pop-up window appears. Select Data > Encrypt.
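The regular expressions from question 16 are not QTP-specific and can be checked in any language. A small sketch in Python using the stdlib re module, with the sample values taken from the question itself (note that regular expressions are case-sensitive by default, so the pattern must match the string's capitalization exactly):

```python
import re

# Pattern a: values begin with "Afx3" followed by exactly 3 digits.
pattern_a = re.compile(r"^Afx3\d{3}$")
for value in ["Afx3459", "Afx3712", "Afx3165"]:
    assert pattern_a.match(value)
assert not pattern_a.match("Afx312")  # only 2 trailing digits, so it fails

# Pattern b: a five-digit value embedded in a known string.
pattern_b = re.compile(r"Status code \d{5} approved")
assert pattern_b.search("Status code 78923 approved")
assert not pattern_b.search("Status code 789 approved")
```

In QTP, the same patterns would be entered as the expected value of a checkpoint with the "regular expression" option enabled; the sketch above only demonstrates what the patterns match.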
