
Testing Questions and Answers

1. What is the difference between Functional Requirement and Non-Functional Requirement?
A Functional Requirement specifies what the system or application SHOULD DO, whereas a Non-Functional Requirement specifies how the system or application SHOULD BE.
Some Functional Requirements are:

Authentication
Business rules
Historical Data
Legal and Regulatory Requirements
External Interfaces

Some Non-Functional Requirements are

Performance
Reliability
Security
Recovery
Data Integrity
Usability

2. How are Severity and Priority related to each other?

Severity tells the seriousness/depth of the bug, whereas Priority tells which bug should be rectified first.
Severity: Application point of view
Priority: User point of view

3. Explain the different types of Severity?


1. User Interface Defects - Low
2. Boundary Related Defects - Medium
3. Error Handling Defects - Medium
4. Calculation Defects - High
5. Interpreting Data Defects - High
6. Hardware Failures & Problems - High
7. Compatibility and Intersystem Defects - High
8. Control Flow Defects - High
9. Load Conditions (memory leakages under load testing) - High

4. What is the difference between Priority and Severity?


The terms Priority and Severity are used in Bug Tracking to communicate the importance of a bug among the team and to decide the order in which bugs should be fixed.


Severity: found from the Application point of view
Priority: found from the User point of view
Severity (tells the seriousness/depth of the bug):
1. The Severity status is used to explain how badly the deviation affects the build.
2. The severity type is defined by the tester based on the written test cases and functionality.

Example
If an application or a web page crashes when a remote link is clicked: clicking the remote link is something a user does rarely, but the impact of the application crashing is severe, so the severity is high and the priority is low.
Priority (tells which bug should be rectified first):
1. The Priority status is set by the tester for the developer, mentioning the time frame to fix the defect. If High priority is mentioned, the developer has to fix it at the earliest.
2. The priority status is set based on the customer requirements.

Example
If the company name is misspelled on the home page of a website, then the priority is high and the severity is low.
Severity: Describes the bug in terms of functionality.
Priority: Describes the bug in terms of customer.
Few examples:
High Severity and Low Priority -> The application doesn't allow a customer-expected configuration.
High Severity and High Priority -> The application doesn't allow multiple users.
Low Severity and High Priority -> No error message to prevent a wrong operation.
Low Severity and Low Priority -> An error message has unclear wording.
Or
Few examples:
High Severity - Low Priority
Suppose you try the wildest or weirdest of operations in a software product (say, one to be released the next day) which a normal user would not do, and suppose this renders a run-time error in the application: the severity would be high. The priority would be low, because the operations or steps which rendered this error will most likely never be performed by a user.

Low Severity - High Priority
An example would be: you find a spelling mistake in the name of the website which you are testing. Say the name is supposed to be Google and it is spelled there as 'Gaogle'. Though it doesn't affect the basic functionality of the software, it needs to be corrected before the release. Hence, the priority is high.
High Severity - High Priority
A bug which is a show stopper, i.e. a bug due to which we are unable to proceed with our testing. An example would be a run-time error during the normal operation of the software, which would cause the application to quit abruptly.
Low Severity - Low Priority
Cosmetic bugs
What is Defect Severity?
A defect is a product anomaly or flaw, a variance from the desired product specification. The classification of a defect based on its impact on the operation of the product is called Defect Severity.
5. What is Bucket Testing?
Bucket testing (also known as A/B Testing) is mostly used to study the impact of different product designs on website metrics: two simultaneous versions are run on a single web page or a set of web pages to measure the difference in click rates, interface behavior and traffic.
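For instance, here is a minimal Python sketch of how users might be split deterministically between two page versions (the hashing scheme, bucket names and user IDs are illustrative assumptions, not any specific tool's API):

    import hashlib

    def assign_bucket(user_id, buckets=("A", "B")):
        # Hash the user ID so the same visitor always lands in the same
        # bucket, keeping the two page versions cleanly separated.
        digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
        return buckets[int(digest, 16) % len(buckets)]

    for uid in ("alice", "bob", "carol"):
        print(uid, "->", assign_bucket(uid))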
6. What is Entry and Exit Criteria in Software Testing?
Entry Criteria are the conditions and artifacts that must be in place before testing begins, like,

SRS (Software Requirement Specification)


FRS (Functional Requirement Specification)
Usecase
Test Case
Test plan

Exit Criteria ensures whether testing is completed and the application is ready for release,
like,

Test Summary Report


Metrics
Defect Analysis report

7. What is Concurrency Testing?


Concurrency Testing (also commonly known as Multi-User Testing) is used to learn the effects of accessing the application, code module or database by different users at the same time. It helps in identifying and measuring problems in response time and in levels of locking and deadlocking in the application.
Example
LoadRunner is widely used for this type of testing; VuGen (Virtual User Generator) is used to add the number of concurrent users and to specify how the users are added, e.g. Gradual Ramp-up or Spike Stepped.
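As a rough illustration of the locking and response-time concerns, here is a toy Python sketch using threads to stand in for concurrent users (this is only a conceptual model, not LoadRunner/VuGen usage):

    import threading
    import time

    lock = threading.Lock()
    balance = 100  # a shared record, standing in for a database row

    def withdraw(amount):
        # One simulated user; the lock models row-level locking.
        global balance
        start = time.perf_counter()
        with lock:
            current = balance
            time.sleep(0.01)  # simulate processing delay
            balance = current - amount
        print("response time: %.3fs" % (time.perf_counter() - start))

    # Ten concurrent users withdraw at the same time.
    threads = [threading.Thread(target=withdraw, args=(5,)) for _ in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # 50 with the lock; without it, lost updates make the result unpredictable.
    print("final balance:", balance)

Note how contention makes later users' response times grow: each thread waits longer to acquire the lock, which is exactly the effect concurrency testing tries to measure.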
8. Explain Statement coverage/Code coverage/Line Coverage?
Statement Coverage or Code Coverage or Line Coverage is a metric used in White Box Testing with which we can identify the statements executed and the code that is not executed because of blockage. In this process each and every line of the code needs to be checked and executed.
Some advantages of Statement Coverage / Code Coverage / Line Coverage are:

It verifies what the written code is expected to do and not to do.
It measures the quality of the code written.
It checks the flow of different paths in the program and ensures whether those paths are tested or not.

To calculate Statement Coverage:
Statement Coverage = Statements Tested / Total No. of Statements
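A minimal Python illustration of the formula (the function and the counts are made up for the example; in practice a coverage tool such as coverage.py counts executed lines automatically):

    def discount(price, is_member):
        total = price              # statement 1
        if is_member:              # statement 2
            total = price * 0.9    # statement 3
        return total               # statement 4

    # A single test with is_member=False executes statements 1, 2 and 4,
    # but never statement 3.
    executed, total_statements = 3, 4
    print("Statement coverage = %d/%d = %.0f%%"
          % (executed, total_statements, 100 * executed / total_statements))
    # Adding a second test with is_member=True raises coverage to 100%.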
9. Explain Branch Coverage/Decision Coverage?
The Branch Coverage or Decision Coverage metric is used to check the volume of testing done across all components. This process ensures that all of the code is exercised by verifying that every branch or decision outcome (e.g. of if and while statements) is executed at least once, so that no branch leads to a failure of the application.
To Calculate Branch Coverage,
Branch Coverage = Tested Decision Outcomes / Total Decision Outcomes.
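A minimal Python sketch of the same idea for branches (the classify function is invented for the example):

    def classify(n):
        if n < 0:                  # one decision, two outcomes: True / False
            return "negative"
        return "non-negative"

    # With only n = -1, just the True outcome is exercised: 1/2 = 50%.
    # Adding n = 3 exercises the False outcome as well: 2/2 = 100%.
    outcomes = {classify(n) for n in (-1, 3)}  # each result maps to one branch here
    print("Branch coverage = %d/2 = %.0f%%"
          % (len(outcomes), 100 * len(outcomes) / 2))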
10. What is the difference between High level and Low Level test case?
High-level test cases are those which cover the major functionality of the application (e.g. retrieve, update, display, cancel (functionality-related test cases), database test cases).
Low-level test cases are those related to the User Interface (UI) of the application.
11. Explain Localization testing with example?
Localization is the process of changing or modifying an application to a particular culture
or locale. This includes change in user interface, graphical designs or even the initial
settings according to their culture and requirements.
Localization Testing verifies how correctly the application has been changed or modified for the target culture and language.

If translation of the application into the local language is required, testing should be done on each field to check for correct translation. Other formats, like date conversion, and hardware and software usage, like the operating system, should also be considered in localization testing.
Examples for Localization Testing are
In Islamic Banking all the transactions and product features are based on Shariah Law,
some important points to be noted in Islamic Banking are
1. In Islamic Banking, the bank shares the profit and loss with the customer.
2. In Islamic Banking, the bank cannot charge interest on the customer; instead it charges a nominal fee which is termed "Profit".
3. In Islamic Banking, the bank will not deal or invest in businesses like Gambling, Alcohol, Pork, etc.

In this case, we need to test whether these Islamic banking conditions were modified and
applied in the application or product.
In Islamic Lending, both the Gregorian calendar and the Hijri Calendar are followed for calculating the loan repayment schedule. The Hijri Calendar, commonly called the Islamic Calendar, is followed in Muslim countries and is based on the lunar cycle. The Hijri Calendar has 12 months and 354 days, which is 11 days shorter than the Gregorian calendar. In this case, we need to test the repayment schedule by comparing the Gregorian calendar and the Hijri Calendar; see the sketch below.
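A simplified Python sketch of why the schedules diverge, assuming idealized year lengths of 365 and 354 days (real calendars add leap days/months, so this is only a first approximation for test planning):

    # Simplified year lengths; real calendars add leap adjustments.
    GREGORIAN_YEAR_DAYS = 365
    HIJRI_YEAR_DAYS = 354

    loan_years = 3
    gregorian_span = loan_years * GREGORIAN_YEAR_DAYS
    hijri_span = loan_years * HIJRI_YEAR_DAYS
    print("Gregorian span:", gregorian_span, "days")
    print("Hijri span:", hijri_span, "days")
    print("drift:", gregorian_span - hijri_span, "days")
    # A 3-year schedule drifts by about 33 days, so every repayment date
    # must be verified against the calendar the product actually uses.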
12. Explain Risk Analysis in Software Testing?
In Software Testing, Risk Analysis is the process of identifying risks in applications and
prioritizing them to test.
In software testing some unavoidable risks might take place, like

Change in requirements or Incomplete requirements


Time allocation for testing.
Developers delaying to deliver the build for testing.
Urgency from client for delivery.
Defect Leakage due to application size or complexity.

To overcome these risks, the following activities can be done

Conducting a Risk Assessment review meeting with the development team.
Creating a profile for risk coverage that notes the importance of each area.
Using maximum resources to work on High Risk areas, e.g. allocating more testers to High Risk areas and fewer resources to Medium and Low Risk areas.
Creating a Risk assessment database for future maintenance and management review.

13. What is the difference between Two Tier Architecture and Three Tier
Architecture?

In Two Tier Architecture or Client/Server Architecture, two layers, Client and Server, are involved. The Client sends a request to the Server and the Server responds to the request by fetching the data. The problem with Two Tier Architecture is that the server cannot respond to multiple requests at the same time, which causes data integrity issues.
Client/Server Testing involves testing the Two Tier Architecture, with the user interface in the front end and the database as the back end, with dependencies on the Client, Hardware and Servers.
In Three Tier Architecture or Multi Tier Architecture, three layers, Client, Server and Database, are involved. The Client sends a request to the Server, the Server sends the request to the Database, the Database sends the requested data back to the Server, and the Server forwards the data to the Client.
Web Application Testing involves testing the Three Tier Architecture, including User Interface, Functionality, Performance, Compatibility, Security and Database testing.
14. What is the difference between Static testing and dynamic testing?
Static Testing (done in Verification stage)
Static Testing is a White Box testing technique where developers verify or test their code with the help of a checklist to find errors in it; this type of testing is done without running the actually developed application or program. Code Reviews, Inspections and Walkthroughs are mostly done in this stage of testing.
Dynamic Testing (done in Validation stage)
Dynamic Testing is done by executing the actual application with valid inputs to check the
expected output. Examples of Dynamic Testing methodologies are Unit Testing, Integration
Testing, System Testing and Acceptance Testing.
Some differences between Static Testing and Dynamic Testing are,

Static Testing is more cost effective than Dynamic Testing because Static Testing is done at the initial stage.
In terms of Statement Coverage, Static Testing covers more areas than Dynamic Testing in a shorter time.
Static Testing is done before code deployment, whereas Dynamic Testing is done after code deployment.
Static Testing is done in the Verification stage, whereas Dynamic Testing is done in the Validation stage.

15. Explain Use case diagram. What are the attributes of use cases?
A Use Case Diagram is a graphical overview representation of the functionality of a system. It is used in the analysis phase of a project to specify the system to be developed.
In Use Case Diagrams the whole system is defined in terms of ACTORS, USE CASES and ASSOCIATIONS. The ACTORS are the external parts of the system, like users and computer software & hardware; USE CASES are the behavior or functionality of the system when these ACTORS perform an action; the ASSOCIATIONS are the lines drawn to show the connection between ACTORS and USE CASES. One ACTOR can link to many USE CASES, and one USE CASE can link to many ACTORS.


16. What is Web Application testing? Explain the different phases in Web
Application testing?
Web Application testing is done on a website to check its load, performance, security, functionality, interface, compatibility and other usability-related issues. In Web application testing, three phases of testing are done:
Web Tier Testing
In Web tier testing, the browser compatibility of the application is tested for IE, Firefox and other web browsers.
Middle Tier Testing
In Middle tier testing, the functionality and security issues are tested.
Database Tier Testing
In Database tier testing, the database integrity and the contents of the database are tested and verified.
17. Explain Unit testing, Interface Testing and Integration testing. Also explain the
types of integration testing in brief?
Unit testing
Unit Testing is done to check whether the individual modules of the source code are working properly, i.e. testing each and every unit of the application separately, by the developer, in the developer's environment.
Interface Testing
Interface Testing is done to check whether the individual modules communicate properly with one another as per the specifications.
Interface testing is mostly used in testing the user interface of a GUI application.
Integration testing
Integration Testing is done to check the connectivity by combining all the individual modules together and testing the functionality.
The types of Integration Testing are:
1. Big Bang Integration Testing
In Big Bang Integration Testing, the individual modules are not integrated until all the modules are ready. Then they are run together to check whether the integrated build performs well.
In this type of testing, some disadvantages can occur:
Defects are found only at a later stage, and it is difficult to find out whether a defect arose in an interface or in a module.
2. Top Down Integration Testing
In Top Down Integration Testing, the high-level modules are integrated and tested first, i.e. testing from the main module down to the sub-modules. In this type of testing, Stubs are used as temporary modules when a module is not ready for integration testing.
3. Bottom Up Integration Testing
In Bottom Up Integration Testing, the low-level modules are integrated and tested first, i.e. testing from the sub-modules up to the main module. Analogous to Stubs, Drivers are used here as temporary modules for integration testing. A sketch of a stub follows.
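A minimal Python sketch of a stub in top-down integration (the order and payment modules are hypothetical examples, not from the original text):

    # Top-down integration: the high-level order module is ready, but the
    # lower-level payment module is not, so a stub stands in for it.

    def payment_stub(amount):
        # Temporary stand-in for the unfinished payment module; it returns
        # a canned response so the order flow can be integration-tested.
        return {"status": "approved", "amount": amount}

    def place_order(amount, pay=payment_stub):
        # High-level module under test; `pay` is the lower-level dependency.
        result = pay(amount)
        return "order confirmed" if result["status"] == "approved" else "order failed"

    assert place_order(49.99) == "order confirmed"
    print("top-down integration test passed using the payment stub")

A driver in bottom-up integration is the mirror image: a temporary caller that exercises a finished low-level module before the real higher-level module exists.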
18. Explain Alpha, Beta, Gamma Testing?
Alpha Testing:
Alpha Testing is mostly like performing usability testing; it is done by the in-house developers who developed the software, or by testers. Sometimes Alpha Testing is done by the client or an outsider in the presence of the developer and tester. The version released after alpha testing is called the Alpha Release.
Beta Testing:
Beta Testing is done by a limited number of end users before delivery; change requests are raised and fixed based on user feedback and reported defects. The version released after beta testing is called the Beta Release.
Gamma Testing:
Gamma Testing is done when the software is ready for release with the specified requirements; this testing is done directly, skipping all the in-house testing activities.
19. Explain the methods and techniques used for Security Testing?
Security testing can be performed in many ways, like:
1. Black Box Testing
2. White Box Testing
3. Database Testing

1. Black Box Testing
a. Session Hijacking
Session Hijacking, commonly called "IP Spoofing", is where a user session is attacked on a protected network.
b. Session Prediction
Session prediction is a method of obtaining data or the session ID of an authorized user to get access to the application. In a web application the session ID can be retrieved from cookies or the URL.
A session prediction attack can be suspected when a website is not responding normally or stops responding for an unknown reason.
c. Email Spoofing
Email Spoofing is forging the email header (the "From" address) to make the message look as if it originated from the actual source; if the email is replied to, the reply lands in the spammer's inbox. By inserting commands in the header, the message information can be altered, so it is possible to send a spoofed email with information you didn't write.
d. Content Spoofing
Content spoofing is a technique of developing a fake website and making the user believe that the website and its information are genuine. When the user enters his Credit Card Number, Password, SSN and other important details, the hacker can capture the data and use it for fraud purposes.
e. Phishing
Phishing is similar to Email Spoofing: the hacker sends a genuine-looking mail attempting to get the personal and financial information of the user. The emails appear to have come from well-known websites.
f. Password Cracking
Password Cracking is used to identify an unknown password or to recover a forgotten password.
Password cracking can be done in two ways:
1. Brute Force - the hacker tries combinations of characters within a given length until one is accepted.
2. Password Dictionary - the hacker uses a password dictionary, collections of which are available on various topics.
A toy sketch of the brute-force approach follows.
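A minimal Python sketch of the brute-force idea against a known test string (illustration of the enumeration only; real password-cracking tools are far more sophisticated):

    import itertools
    import string

    def brute_force(target, max_len=4):
        # Try every lowercase combination up to max_len until one matches;
        # purely to show why short, simple passwords fall quickly.
        for length in range(1, max_len + 1):
            for combo in itertools.product(string.ascii_lowercase, repeat=length):
                guess = "".join(combo)
                if guess == target:
                    return guess
        return None

    # Found after at most 26**4 = 456,976 guesses.
    print(brute_force("test"))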

2. White Box Level

a. Malicious Code Injection
SQL Injection is the most popular code injection attack: the hacker attaches malicious code to legitimate code through an input field in the application. The motive behind the injection is to steal secured information which was intended to be used only by a set of users.

Apart from SQL Injection, the other types of malicious code injection are XPath Injection,
LDAP Injection, and Command Execution Injection. Similar to SQL Injection the XPath
Injection deals with XML document.
b. Penetration Testing:
Penetration Testing is used to check the security of a computer or a network. The test
process explores all the security aspects of the system and tries to penetrate the system.
c. Input Validation:
Input validation is used to defend applications from hackers. If input is not validated, mostly in web applications, it can lead to system crashes, database manipulation and corruption. A whitelist-style sketch follows.
d. Variable Manipulation
Variable manipulation is used as a method for specifying or editing the variables in a
program. It is mostly used to alter the data sent to web server.
3. Database Level
a. SQL Injection
SQL Injection is used to hack websites by changing the backend SQL statements; using this technique the hacker can steal data from the database and also delete or modify it.
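A minimal sketch contrasting a vulnerable concatenated query with a safe parameterized one, using Python's built-in sqlite3 module (the table and payload are illustrative):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    user_input = "' OR '1'='1"  # classic injection payload

    # VULNERABLE: string concatenation lets the payload rewrite the query.
    vulnerable = "SELECT * FROM users WHERE name = '%s'" % user_input
    print("concatenated query returns:", conn.execute(vulnerable).fetchall())

    # SAFE: a parameterized query treats the payload as a plain value.
    safe = "SELECT * FROM users WHERE name = ?"
    print("parameterized query returns:",
          conn.execute(safe, (user_input,)).fetchall())

The concatenated query leaks the row; the parameterized query returns nothing, because the payload is compared literally instead of being executed.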
20. Explain IEEE 829 standards and other Software Testing standards?
The IEEE 829 standard is used for Software Test Documentation; it specifies the format for the set of documents to be used in the different stages of software testing. The documents are:
Test Plan- Test Plan is a planning document which has information about the scope,
resources, duration, test coverage and other details.
Test Design- Test Design document has information of test pass criteria with test
conditions and expected results.
Test Case- Test case document has information about the test data to be used.
Test Procedure- Test Procedure has information about the test steps to be followed and
how to execute it.
Test Log- The Test Log has details about the test cases run, the test plans, pass/fail status, the order of execution, and information about the resource who tested them.
Test Incident Report- Test Incident Report has information about the failed test
comparing the actual result with expected result.
Test Summary Report- Test Summary Report has information about the testing done and
quality of the software, it also analyses whether the software has met the requirements
given by customer.
The other standards related to software testing are:
IEEE 1008 is for Unit Testing
IEEE 1012 is for Software Verification and Validation
IEEE 1028 is for Software Inspections
IEEE 1061 is for Software Metrics and Methodology
IEEE 1233 is for guiding the SRS development
IEEE 12207 is for the SLC process

21. What is Test Harness?


A Test Harness is a configured set of tools and test data used to test an application under various conditions; it involves comparing the monitored output with the expected output for correctness.
The benefits of Test Harness are,

Productivity increase due to process automation.


Quality in the application.

22. What is the difference between bug log and defect tracking?
Bug Log: A Bug Log is a document showing the number of defects (open, closed, reopened or deferred) for a particular module.
Defect Tracking: The process of tracking a defect: its symptoms, whether it is reproducible or not, its priority, severity and status.
23. What are Integration Testing and Regression Testing?
Integration Testing:

Combining the modules together and constructing the software architecture.
To test the communication and data flow.
White box and black box testing techniques are used.
It is done by developers and testers.

Regression Testing

It is re-execution of testing after a bug is fixed, to ensure that the build is free from bugs.
Done after the bug is fixed.
It is done by the tester.

24. Explain Peer Review in Software Testing?


It is an alternative form of testing, where some colleagues are invited to examine your work products for defects and improvement opportunities.
Some Peer review approaches are,
Inspection
It is a more systematic and rigorous type of peer review. Inspections are more effective at
finding defects than are informal reviews.

Ex: In Motorola's Iridium project nearly 80% of the defects were detected through inspections, whereas only 60% of the defects were detected through formal reviews.
Team Reviews: It is a planned and structured approach but less formal and less rigorous
comparing to Inspections.
Walkthrough: It is an informal review because the work product's author describes it to
some colleagues and asks for suggestions. Walkthroughs are informal because they typically
do not follow a defined procedure, do not specify exit criteria, require no management
reporting, and generate no metrics.
Or
A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or
no preparation is usually required.
Pair Programming: In Pair Programming, two developers work together on the same program at a single workstation and continuously review each other's work.
Peer Desk check
In Peer Desk check only one person besides the author examines the work product. It is an
informal review, where the reviewer can use defect checklists and some analysis methods to
increase the effectiveness.
Passaround: It is a multiple, concurrent peer desk check where several people are invited
to provide comments on the product.
25. Explain Compatibility testing with an example?
Compatibility testing evaluates the application's compatibility with the computing environment: Operating System, Database, browser compatibility, backwards compatibility, computing capacity of the hardware platform, and compatibility of peripherals.
Example
If compatibility testing is done on a game application, then before installing the game on a computer, its requirements are checked against the computer's specification to determine whether the game is compatible with a machine of that specification.
26. What is Traceability Matrix?
A Traceability Matrix is a document used for tracking requirements, test cases and defects. This document is prepared to satisfy clients that the coverage is complete end to end. It contains the Requirement/Baseline document reference number, Test Case/Condition, and Defect/Bug ID. Using this document a person can track a requirement starting from a defect ID. A sketch follows.
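A traceability matrix reduced to a minimal Python data-structure sketch (all IDs are invented for the example):

    # Each requirement points to its test cases and any defects raised
    # against them.
    matrix = {
        "REQ-001": {"test_cases": ["TC-101", "TC-102"], "defects": ["BUG-7"]},
        "REQ-002": {"test_cases": ["TC-103"], "defects": []},
        "REQ-003": {"test_cases": [], "defects": []},
    }

    # Coverage check: every requirement must trace to at least one test case.
    uncovered = [req for req, row in matrix.items() if not row["test_cases"]]
    print("requirements without test coverage:", uncovered)  # ['REQ-003']

    # Track back from a defect ID to the requirement it affects.
    defect = "BUG-7"
    print("defect", defect, "traces to:",
          [req for req, row in matrix.items() if defect in row["defects"]])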
27. Explain Boundary value testing and Equivalence testing with some examples?
Boundary value testing is a technique to find whether the application accepts the expected range of values and rejects values which fall outside that range.
Example
A user ID text box has to accept alphabetic characters (a-z) with a length of 4 to 10 characters.
BVA is done like this: max value 10: pass; max-1 = 9: pass; max+1 = 11: fail; min = 4: pass; min+1 = 5: pass; min-1 = 3: fail.
Likewise we check the corner values and come to a conclusion on whether the application accepts the correct range of values.
Equivalence testing is normally used to check the type of the object.
Example
A user ID text box has to accept alphabetic characters (a-z) with a length of 4 to 10 characters.
In the positive condition we test the object by giving alphabetic input, i.e. a-z characters only, and check whether the object accepts the value: it should pass.
In the negative condition we test by giving input other than lowercase alphabets, i.e. A-Z, 0-9, blank etc.: it should fail.
Both techniques are combined in the sketch below.
28. What is Security testing?
Security testing is the process that determines that confidential data stays confidential
Or
Testing how well the system protects against unauthorized internal or external access, willful damage, etc.
This process involves functional testing, penetration testing and verification.
29. What is Installation testing?
Installation testing is done to verify whether the hardware and software are installed and configured properly. It ensures that all the system components are exercised during the testing process. Installation testing also covers testing with high volumes of data, error messages, as well as security testing.
30. What is AUT?
AUT is nothing but "Application Under Test". After the design and coding phases in the software development life cycle, when the application comes in for testing, it is referred to as the Application Under Test.
31. What is Defect Leakage?
Defect leakage occurs at the customer or end-user side after application delivery. If, after the release of the application to the client, the end user finds defects while using the application, this is called Defect Leakage. Defect Leakage is also called Bug Leakage.
32. What are the contents in an effective Bug report?
1. Project
2. Subject
3. Description
4. Summary
5. Detected By (Name of the Tester)
6. Assigned To (Name of the Developer who is supposed to fix the Bug)
7. Test Lead (Name)
8. Detected in Version
9. Closed in Version
10. Date Detected
11. Expected Date of Closure
12. Actual Date of Closure
13. Priority (Medium, Low, High, Urgent)
14. Severity (Ranges from 1 to 5)
15. Status
16. Bug ID
17. Attachment
18. Test Case Failed (Test case that failed for the Bug)

33. What is Error guessing and Error seeding?


Error Guessing is a test case design technique where the tester has to guess what faults
might occur and to design the tests to represent them.
Error Seeding is the process of intentionally adding known faults to a program in order to monitor the rate of their detection and removal, and to estimate the number of faults remaining in the program. A sketch of that estimate follows.
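A Python sketch of the kind of estimate error seeding enables, assuming real faults are detected at the same rate as seeded ones (a simplifying assumption of seeding models):

    def estimate_remaining_faults(seeded_total, seeded_found, real_found):
        # If testers recovered seeded_found of seeded_total planted faults,
        # assume real faults were found at the same rate and extrapolate.
        detection_rate = seeded_found / seeded_total
        estimated_total_real = real_found / detection_rate
        return estimated_total_real - real_found

    # 20 faults seeded, 16 recovered (80% detection); 40 real faults found,
    # so roughly 10 real faults are estimated to remain.
    print(estimate_remaining_faults(20, 16, 40))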
34. What is Ad-hoc testing?
Ad hoc testing is concerned with testing the application without following any rules or test cases.
For ad hoc testing one should have strong knowledge of the application.
35. What are the basic solutions for the software development problems?
1. Basic requirements: a clear, detailed, complete, achievable, testable requirement has to be developed. Use prototypes to help pin down requirements. In agile-type environments, continuous and close coordination with customers/end-users is needed.
2. Realistic schedules: allow enough time to plan, design, test, fix bugs, re-test, change, and document within the given schedule.
3. Adequate testing: testing should be started early, the application should be re-tested after bugs are fixed or changes are made, and enough time should be spent on testing and bug-fixing.
4. Proper study of initial requirements: be ready to accommodate more changes after development has begun, and be ready to explain the changes made to others. Work closely with customers and end-users to manage expectations; this avoids excessive changes in the later stages.
5. Communication: conduct frequent inspections and walkthroughs at appropriate intervals; ensure that information and documentation are available and up to date, preferably in electronic form. Emphasize teamwork and cooperation inside the team; use prototypes and proper communication with end-users to clarify their doubts and expectations.

36. What are the common problems in the software development process?
Inadequate requirements from the client: if the requirements given by the client are not clear, are unfinished, or are not testable, problems will come.
Unrealistic schedules: sometimes too much work is given to the developer with a demand to complete it in a short duration; then problems are unavoidable.
Insufficient testing: problems arise when the developed software is not tested properly.
Additional work under the existing process: a request from higher management to work on another project or task brings problems when the project is being tested as a team.
Miscommunication: in some cases the developer is not informed about the client's requirements and expectations, so deviations can occur.
37. What is the difference between Software Testing and Quality Assurance (QA)?
Software Testing involves operation of a system or application under controlled conditions
and evaluating the result. It is oriented to 'detection'.
Quality Assurance (QA) involves the entire software development PROCESS- monitoring and
improving the process, making sure that any agreed-upon standards and procedures are
followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.
38. How to Test the water bottle?
Note: Before generating test ideas on how to test a water bottle, I would like to ask a few questions:
1. Is the bottle made of glass, plastic, rubber, some metal, some kind of disposable material or anything else?
2. Is it meant only to hold water, or can we use it with other fluids like tea, coffee, soft drinks, hot chocolate, soups, wine, cooking oil, vinegar, gasoline, acids, molten lava (!) etc.?
3. Who is going to use this bottle? A school-going kid, a housewife, some beverage manufacturing company, an office-goer, a sportsman, a mob protesting in a rally (using bottles as missiles), an Eskimo living in an igloo or an astronaut in a spaceship?

These kinds of questions allow a tester to know the product (that he is going to test) in a better way. In our case, I am assuming that the water bottle is a pet bottle actually made of either plastic or glass (there are two versions of the product) and is intended to be used mainly with water. As for the targeted user, even the manufacturing company is not sure about them! (Sounds familiar? Like when a software company develops a product without a clear idea about the users who are going to use the software!)
Test Ideas
1. Check the dimensions of the bottle. See if it actually looks like a water bottle or like a cylinder, a bowl, a cup, a flower vase, a pen stand or a dustbin! [Build Verification Testing!]
2. See if the cap fits well with the bottle. [Installability Testing!]
3. Test if the mouth of the bottle is not too small to pour water. [Usability Testing!]
4. Fill the bottle with water and keep it on a smooth dry surface. See if it leaks. [Usability Testing!]
5. Fill the bottle with water, seal it with the cap and see if water leaks when the bottle is tilted, inverted, or squeezed (in case of a plastic bottle)! [Usability Testing!]
6. Take water in the bottle and keep it in the refrigerator for cooling. See what happens. [Usability Testing!]
7. Keep a water-filled bottle in the refrigerator for a very long time (say a week). See what happens to the water and/or the bottle. [Stress Testing!]
8. Keep a water-filled bottle under freezing conditions. See if the bottle expands (if plastic) or breaks (if glass). [Stress Testing!]
9. Try to heat (boil!) water by keeping the bottle in a microwave oven! [Stress Testing!]
10. Pour some hot (boiling!) water into the bottle and see the effect. [Stress Testing!]
11. Keep a dry bottle for a very long time. See what happens. See if any physical or chemical deformation occurs to the bottle.
12. Test the water after keeping it in the bottle and see if there is any chemical change. See if it is safe to consume as drinking water.
13. Keep water in the bottle for some time and see if the smell of the water changes.
14. Try using the bottle with different types of water (like hard and soft water). [Compatibility Testing!]
15. Try to drink water directly from the bottle and see if it is comfortable to use, or whether water gets spilled while doing so. [Usability Testing!]
16. Test if the bottle is ergonomically designed and comfortable to hold. Also see if the center of gravity of the bottle stays low (both when empty and when filled with water) so that it does not topple over easily.
17. Drop the bottle from a reasonable height (maybe the height of a dining table) and see if it breaks (for both the plastic and glass models). If it is a glass bottle then in most cases it may break; see if it breaks into tiny little pieces (which are often difficult to clean) or into nice large pieces (which can be cleaned without much difficulty). [Stress Testing!] [Usability Testing!]
18. Repeat the above test idea with empty bottles and with bottles filled with water. [Stress Testing!]
19. Test if the bottle is made of recyclable material. In case of a plastic bottle, test if it is easily crushable.
20. Test if the bottle can also be used to hold other common household things like honey, fruit juice, fuel, paint, turpentine, liquid wax etc. [Capability Testing!]

39. What is Portlet Testing?

Following are the features that should be concentrated on while testing a portlet:
i. Test alignment/size display with multiple style sheets and portal configurations. When you configure a portlet object in the portal, you must choose from the following alignments:
a. Narrow portlets are displayed in a narrow side column on the portal page. Narrow portlets must fit in a column that is less than 255 pixels wide.
b. Wide portlets are displayed in the middle or widest side column on the portal page. Wide portlets must fit in a column less than 500 pixels wide.
ii. Test all links and buttons within the portlet display. (if there are errors, check that all
forms and functions are uniquely named, and that the preference and gateway settings are
configured correctly in the portlet web service editor.)
iii. Test setting and changing preferences. (if there are errors, check that the preferences
are uniquely named and that the preference and gateway settings are configured correctly
in the portlet web service editor.)

iv. Test communication with the backend application. Confirm that actions executed through
the portlet are completed correctly. (if there are errors, check the gateway configuration in
the portlet web service editor.)
v. Test localized portlets in all supported languages. (if there are errors, make sure that the
language files are installed correctly and are accessible to the portlet.)
vi. If the portlet displays secure information or uses a password, use a tunnel tool to
confirm that any secure information is not sent or stored in clear text.
vii. If backwards compatibility is supported, test portlets in multiple versions of the portal.
40. What is Equivalence Partitioning?
Concepts: Equivalence partitioning is a method for deriving test cases. In this method, classes of input conditions called equivalence classes are identified such that each member of a class causes the same kind of processing and output to occur. The tester identifies various equivalence classes for partitioning. A class is a set of input conditions that is likely to be handled the same way by the system. If the system were to handle one case in the class erroneously, it would handle all cases erroneously.
41. Why Learn Equivalence Partitioning?
Equivalence partitioning drastically cuts down the number of test cases required to test a
system reasonably. It is an attempt to get a good 'hit rate', to find the most errors with the
smallest number of test cases.
DESIGNING TEST CASES USING EQUIVALENCE PARTITIONING
To use equivalence partitioning, you will need to perform two steps:
1. Identify the equivalence classes
2. Design test cases

STEP 1: IDENTIFY EQUIVALENCE CLASSES
Take each input condition described in the specification and derive at least two equivalence classes for it. One class represents the set of cases which satisfy the condition (the valid class) and one represents cases which do not (the invalid class).
Following are some general guidelines for identifying equivalence classes:
a) If the requirements state that a numeric value is input to the system and must be within a range of values, identify one valid class (inputs which are within the valid range) and two invalid equivalence classes (inputs which are too low and inputs which are too high). For example, if an item in inventory can have a quantity of -9999 to +9999, identify the following classes (a sketch follows this list):
1. One valid class: QTY is greater than or equal to -9999 and less than or equal to 9999, written as (-9999 <= QTY <= 9999)
2. One invalid class: QTY is less than -9999, written as (QTY < -9999)
3. One invalid class: QTY is greater than 9999, written as (QTY > 9999)
b) If the requirements state that the number of items input by the system at some point must lie within a certain range, specify one valid class where the number of inputs is within the valid range, one invalid class where there are too few inputs, and one invalid class where there are too many inputs.
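A minimal Python sketch mapping the QTY example to its three classes:

    def qty_class(qty):
        # Map an inventory quantity to its equivalence class
        # (-9999..9999 is the valid range from the example above).
        if qty < -9999:
            return "invalid: too low"
        if qty > 9999:
            return "invalid: too high"
        return "valid"

    # One representative per class is enough: members of the same class are
    # assumed to be processed the same way by the system.
    for probe in (-10000, 0, 10000):
        print(probe, "->", qty_class(probe))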

42. What are the two types of Metrics?

1. Process metrics: Primary metrics are also called Process metrics. This is the metric the Six Sigma practitioners care about and can influence. Primary metrics are almost always a direct output characteristic of a process; they measure a process, not a high-level business objective. Primary process metrics are usually Process Defects, Process Cycle Time and Process Consumption.
2. Product metrics: Product metrics quantitatively characterize some aspect of the structure of a software product, such as a requirements specification, a design, or source code.

43. What is the Outcome of Testing?


A stable application, performing its task as expected.
44. Why do you go for White box testing, when Black box testing is available?
Black box testing provides a benchmark certifying the commercial (business) and functional (technical) aspects of the application. Loops, structures, arrays, conditions, files, etc. are at a very micro level, but they are the foundation of any application; White box testing examines these things at that level and tests them.
45. What is Baseline document, Can you say any two?
A baseline document is one that builds the tester's understanding of the application before actual testing starts. Two examples are the Functional Specification and the Business Requirement Document.
46. Tell names of some testing type which you learnt or experienced?
Any 5 or 6 types related to the company's profile are good to mention in the interview:
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.

Ad - Hoc testing
Cookie Testing
CET (Customer Experience Test)
Depth Test
Event-Driven
Performance Testing
Recovery testing
Sanity Test
Security Testing
Smoke testing
Web Testing

47. What exactly is Heuristic checklist approach for unit testing?


It is a method in which the most appropriate of several solutions, found by alternative methods, is selected at successive stages of testing. The checklist prepared to proceed this way is called a Heuristic checklist.
48. What is a Data Guideline?
Data Guidelines are used to specify the data required to populate the test bed and to prepare test scripts. They include all data parameters that are required to test the conditions derived from the requirement/specification. The documents which support preparing the test data are called Data Guidelines.
49. Why do you go for Test Bed?
When a test condition is executed, its result should be compared to the expected test result; since test data is needed for this, the test bed is where the test data is made ready.
50. Why do we prepare test condition, test cases, test script (Before Starting
Testing)?
These are the test design documents used to execute the actual testing, without which execution of testing is impossible. Ultimately this execution is going to find the bugs to be fixed, so we have to prepare these documents.
51. Is it not waste of time in preparing the test condition, test case & Test Script?
No document prepared in any process is a waste of time. Test design documents, which play a vital role in test execution, can never be called a waste of time, as without them proper testing cannot be done.
52. How do you go about testing of Web Application?
To approach web application testing, the first attack on the application should be on its performance behavior, as that is very important for a web application, and then on the transfer of data between the web server and front end server, security server and back end server.
53. What kind of Document you need for going for a Functional testing?
The Functional Specification is the ultimate document; it expresses all the functionalities of the application. Other documents, like the user manual and the BRS, are also needed for functional testing. A gap analysis document will add value in understanding the expected and existing system.
54. Can the System testing be done at any stage?
No. The system as a whole can be tested only if all modules are integrated and all modules work correctly. System testing should be done before UAT (User Acceptance Testing) and after Unit and Integration Testing.
55. What is Mutation testing & when can it be done?
Mutation testing is a powerful fault-based testing technique for unit level testing. Since it is
a fault-based testing technique, it is aimed at testing and uncovering some specific kinds of
faults, namely simple syntactic changes to a program. Mutation testing is based on two
assumptions: the competent programmer hypothesis and the coupling effect. The

competent programmer hypothesis assumes that competent programmers tend to write nearly "correct" programs. The coupling effect states that a set of test data that can uncover all simple faults in a program is also capable of detecting more complex faults.
Mutation testing injects faults into code to determine optimal test inputs.
56. Why it is impossible to test a program completely?
With any software other than the smallest and simplest program, there are too many
inputs, too many outputs, and too many path combinations to fully test. Also, software
specifications can be subjective and be interpreted in different ways.
57. How will you review the test case and how many types are there?
There are 2 types of review:
Informal Review: review by the technical lead.
Peer Review: review by a peer at the same organization (walkthrough, technical review, inspection).
Or
Reviews:
1. Management Review
2. Technical Review
3. Code Review
4. Formal Review (Inspections and Audits)
5. Informal Review (Peer Review and Code Review)

Objectives of Reviews:
1. To find defects in requirements.
2. To find defects in design.
3. To identify deviations in any process and also provide valued suggestions to improve the process.

58. What do you mean by Pilot Testing?

Pilot testing involves having a group of end users try the system prior to its full deployment in order to give feedback on IIS 5.0 features and functions.
Or
Pilot Testing is a testing activity which resembles the production environment.
It is done exactly between UAT and the production drop.
A few users simulate the production environment to continue the business activity with the system.
They check the major functionality of the system before it goes into production.
This is basically done to avoid high-level disasters.
The priority of Pilot Testing is high, and issues raised in Pilot Testing have to be fixed as soon as possible.

59. What is SRS and BRS in manual testing?


BRS is the Business Requirement Specification, which means the client who wants the application built gives the specification to the software development organization, and the organization then converts it into the SRS (Software Requirement Specification) as per the needs of the software.
60. What is Smoke Test and Sanity Testing? When will use the Above Tests?
Smoke Testing: It is done to make sure the build we got is testable, i.e. to check for the testability of the build; also called the "day 0" check. Done at the build level.
Sanity Testing: It is done during the release phase to check the main functionalities without going deeper. Sometimes called a subset of regression testing: when no rigorous regression testing is done on the build, sanity testing does that part by checking the major functionalities. Done at the release level.
61. What is debugging?
Debugging is finding and removing "bugs" which cause the program to respond in a way
that is not intended.
62. What is determination?
Determination has different meanings in different situations. Determination means a strong
intention or a fixed intention to achieve a specific purpose. Determination, as a core value,
means to have strong will power in order to achieve a task in life. Determination means a
strong sense of self-devotion and self-commitment in order to achieve or perform a given
task. The people who are determined to achieve various objectives in life are known to
succeed highly in various walks of life.
Another way, it could also mean calculating, ascertaining or even realizing a specific
amount, limit, character, etc. It also refers to a certain result of such ascertaining or even
defining a certain concept.
It can also mean to reach at a particular decision and firmly achieve its purpose.
63. What is exact difference between Debugging & Testing?
Testing is nothing but finding an error/problem, and it is done by testers, whereas debugging is finding the root cause of the error/problem, and that is taken care of by developers.
Or
Debugging- is removing the bug and is done by developer.
Testing - is identifying the bug and is done by tester.
64. What is the fish model? Can you explain it?
The fish model explains the mapping between the different stages of development and testing.
Phase 1: Information gathering
Information gathering takes place and the BRS document is prepared.
Phase 2: Analysis
During this phase, the development people prepare the SRS document, which is a combination of the functional requirement specification and the system requirement specification. During this phase, the testing people go for reviews.
Phase 3: Design
Here the HLD and LLD (high-level design and low-level design documents) are prepared by the development team. The testing people go for prototype reviews.
Phase 4: Coding
Developers start coding, and white box testing is conducted by the testing team.
Phase 5: Testing
Black box testing is conducted by the black box test engineers.
Phase 6: Release and maintenance.
65. What is Conformance Testing?
The process of testing that an implementation conforms to the specification on which it is
based. Usually applied to testing conformance to a formal standard.
66. What is Context Driven Testing?
The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.
67. What is End-to-End testing?
Similar to system testing, the 'macro' end of the test scale involves testing of a complete
application environment in a situation that mimics real-world use, such as interacting with a
database, using network communications, or interacting with other hardware, applications,
or systems if appropriate.

68. When should testing be ended?

Testing is a never-ending process, but it may be terminated because of certain factors: most of the tests have been executed, the project deadline is reached, the test budget is depleted, or the bug rate falls below the set criterion.
69. What is Parallel/Audit Testing?
Testing where the user reconciles the output of the new system to the output of the current
system to verify the new system performs the operations correctly.
70. What are the roles of glass-box and black-box testing tools?
Black-box testing
It is not based on knowledge of the internal design or code. Tests are based on requirements and functionality. Black box testing is used to find errors in the following:
1. Interface errors
2. Performance errors
3. Initialization errors
4. Incorrect or missing functionality
5. Errors while accessing external databases

Glass-box testing
It is based on the internal design of the application's code. Tests are based on path coverage, branch coverage, and statement coverage. It is also known as White Box testing.
White box test cases can check that:
1. All independent paths within a module are executed at least once
2. All loops are executed
3. All logical decisions are exercised
4. Internal data structures are exercised to ensure their validity

71. What is your experience with change control? Our development team has only
10 members. Do you think managing change is such a big deal for us?
Whenever modifications happen to the actual project, all the corresponding documents are updated with the new information, so as to keep the documents always in sync with the product at any point in time. So yes, managing change matters even for a ten-member team.
72. What is GAP ANALYSIS?
The gap analysis can be done by traceability matrix that means tracking down each
individual requirement in SRS to various work products.
73. How do you know when your code has met specifications?
With the help of the traceability matrix: all the requirements are traced to test cases, and when all the test cases have been executed and passed, it is an indication that the code has met the requirements.

74. At what stage of the life cycle does testing begin in your opinion?
Testing is a continuous process, and it starts as soon as the requirements for the project/product begin to be framed.
Requirements phase: testing is done to check whether the project/product details reflect the client's ideas, i.e. whether they give an idea of the complete project from the client's perspective (as he wished it to be) or not.
75. What are the properties of a good requirement?
Requirement specifications are important, and following them is one of the most reliable methods of preventing problems in a complex software project. Requirements are the details describing an application's externally perceived functionality and properties. Requirements should be clear, complete, reasonably detailed, cohesive, attainable and testable.
76. How do you scope, organize, and execute a test project?
The scope can be defined from the BRS, SRS, FRS or from functional points; it may be anything that is provided by the client. Regarding organizing, we need to analyze the functionality to be covered, who will test which modules, and the pros and cons of the application. Identifying the number of test cases, resource allocation, and the risks we need to mitigate all come into the picture.
Once this is done it is very easy to execute based on the plan we have chalked out.
77. How would you ensure 100% coverage of testing?
We cannot perform 100% testing on any application, but the criteria to ensure test completion on a project are:
1. All the test cases are executed with a certain percentage passing.
2. The bug rate falls below a certain level.
3. The test budget is depleted.
4. Deadlines are reached (project or test).
5. All the functionalities are covered by test cases.
6. All critical and high bugs have a status of CLOSED.

78. How do you go about testing a web application?

Ideally, to test a web application, the components and functionality on both the client and server side should be tested, but that is practically impossible.
The best approach is to examine the project's requirements, set priorities based on risk analysis, and then determine where to focus testing efforts within budget and schedule constraints.
To test a web application we need to perform testing for both the GUI and the client-server architecture.
Based on factors like project requirements, risk analysis, budget and schedule, we can determine what kinds of testing are appropriate for the project. We can perform unit and integration testing, functionality testing, GUI testing, usability testing, compatibility testing, security testing, performance testing, recovery testing and regression testing.

79. What are your strengths?


I'm well motivated, well organized, a good team player and dedicated to my work; I have a strong desire to succeed, and I'm always ready and willing to learn new information and skills.
80. When should you begin testing?
For any project, testing activity is present from the start. After requirements gathering, the design documents (high and low level) are prepared; these are tested to check whether they conform to the requirements. Then, during coding, white box testing is done. After the build or system is ready, integration testing followed by functional testing is done, until the product or project is stable. Once the product or project is stable, testing is stopped.
81. When should you begin test planning?
Test planning is done by the test lead. Test planning begins when the TRM is finalized by the project manager and handed over to the test lead. The test lead then has some responsibilities:
1. Testing team formation
2. Identifying tactical risks
3. Preparing the test plan
4. Reviews of the test plan

82. Would you like to work in a team or alone, why?


I would like to work in a team, because the process of software development is like a relay race where many runners have to contribute in their respective laps. Teamwork is important because the complexity of the work and the degree of effort required are beyond the level of an individual.
83. When should testing Start in a project? Why?
Testing is a continuous activity carried out at every stage of the project. You first test everything that you get from the client. As a tester (technical tester), my work starts as soon as the project starts.
84. Have you ever created a test plan?
This is just a sample answer: "I have never created a test plan. I developed and executed test cases. But I was actively involved with my Team Leader while creating Test Plans."
85. Define quality for me as you understand it
It is software that is reasonably bug-free and delivered on time and within the budget,
meets the requirements and expectations and is maintainable.
86. What is the role of QA in a development project?
The Quality Assurance group assures quality by monitoring the whole development process; its main concentration is on the prevention of bugs.
It sets standards, introduces review procedures, and educates people in better ways to design and develop products.
87. How involved where you with your Team Lead in writing the Test Plan?
To my knowledge, team members are usually out of scope while the Test Plan is being prepared; the Test Plan is a higher-level document for the testing team. A Test Plan includes purpose, scope, customer/client scope, schedule, hardware, deliverables, test cases etc., and is derived from the PMP (Project Management Plan). Team members go through the TEST PLAN to learn their responsibilities and the deliverables of the modules. The Test Plan is an input document for the entire testing team as well as the Test Lead.
88. What processes/methodologies are you familiar with?
Methodologies:
1. Spiral methodology
2. Waterfall methodology (these two are older methods)
3. Rational Unified Process (from IBM)
4. Rapid Application Development (from Microsoft)

89. What is globalization testing?


The goal of globalization testing is to detect potential problems in application design that
could inhibit globalization. It makes sure that the code can handle all international support
without breaking functionality that would cause either data loss or display problems.
90. What is base lining?
Baselining: the process by which the quality and cost effectiveness of a service is assessed, usually in advance of a change to the service. Baselining usually includes comparison of the service before and after the change, or analysis of trend information. The term benchmarking is normally used if the comparison is made against other enterprises.
For example: if the company has different projects, there will be a separate test plan for each project. These test plans should be accepted by peers in the organization after modifications. The modified test plans are the baseline for the testers to use in the different projects. If any further modifications are made to a test plan, the latest modified version becomes the baseline, because the test plan is the basis for running the testing project.
91. Define each of the following and explain how each relates to the other: Unit,
System and Integration testing.
Unit testing: testing of each unit (program) in isolation.
Integration testing: an integration of some units is called a module; the test on these modules is called integration testing (module testing).
System testing: this is the bottleneck stage of the project. This testing is done after integration of all modules, to check whether the build meets all the requirements of the customer. Unit and integration testing are white-box testing, which can be done by programmers; system testing is black-box testing, which can be done by people who do not know programming. The hierarchy of this testing is: unit testing, then integration testing, then system testing.
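As a rough illustration, a unit test exercises one unit in isolation. The sketch below uses Python's unittest; add() is a hypothetical unit invented for the example, not something from this document.

import unittest

def add(a, b):
    """The unit under test (hypothetical example)."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()

An integration test would exercise add() together with the modules that call it, and a system test would exercise the whole build against the customer requirements.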
92. Who should you hire in a testing group and why?
Testing is an interesting part of the software cycle, and it is responsible for providing a quality product to the customer. It involves finding bugs, which is difficult and challenging, so I would hire people who are drawn to a testing group for exactly this reason.
93. What do you think the role of test-group manager should be? Relative to
senior management? Relative to other technical groups in the company? Relative
to your staff?
The roles of a test-group manager include tracking:
1. Defect find and close rates by week, normalized against level of effort (are we finding defects, and can developers keep up with the number found and the ones necessary to fix?)
2. Number of tests planned, run, and passed by week (do we know what we have to test, and are we able to do so?)
3. Defects found per activity vs. total defects found (which activities find the most defects?)
4. Schedule estimates vs. actuals (will we make the dates, and how well do we estimate?)
5. People on the project, planned vs. actual by week or month (do we have the people we need when we need them?)
6. Major and minor requirements changes (do we know what we have to do, and does it change?)
94. What criteria do you use when determining when to automate a test or leave it
manual?
Time and budget are both key factors in determining whether a test stays manual or is automated. Apart from that, automation is suited to areas such as functional, regression, load, and user-interface testing, where it gives more accurate results.
95. How do you analyze your test results? What metrics do you try to provide?
Test results are analyzed to identify the major causes of defects and the phase that introduced most of the defects. This can be achieved through cause/effect analysis or Pareto analysis. Analysis of test results can provide several test metrics, where metrics are measures to quantify the software, the software development resources, and the software development process. A few metrics which we can provide are:
Defect density = total no. of defects reported during testing / size of the project
Test effectiveness = t / (t + UAT), where t is the total no. of defects recorded during testing and UAT is the total no. of defects recorded during user acceptance testing
Defect removal efficiency (DRE) = (total no. of defects removed / total no. of defects injected) * 100
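To make the formulas concrete, here is a small worked example in Python; the counts are illustrative assumptions, not data from any real project.

# Worked example of the metrics above, with made-up numbers.
defects_testing = 45      # t: defects recorded during testing
defects_uat = 5           # UAT: defects recorded during user acceptance testing
project_size_kloc = 30.0  # project size, here in thousand lines of code
defects_removed = 45      # defects removed before release
defects_injected = 50     # total defects injected during development

defect_density = defects_testing / project_size_kloc                    # 1.5 defects/KLOC
test_effectiveness = defects_testing / (defects_testing + defects_uat)  # 0.9
dre = (defects_removed / defects_injected) * 100                        # 90.0 %

print(defect_density, test_effectiveness, dre)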
96. How do you perform regression testing?
Regression testing is carried out both manually and with automation. Automated tools are mainly used for regression testing, because it is focused on repeatedly testing the same application after the changes it has gone through: new functionality, fixes for previous bugs, any new changes in the design, etc. Regression testing involves executing the test cases which we ran earlier for finding defects. Whenever any change takes place in the application we should make sure the previous functionality is still available without any break. For this reason one should do regression testing on the application by re-executing the previously written test cases.
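A minimal sketch of automating this re-run with Python's unittest, assuming the previously written test cases live as test_*.py files under a hypothetical tests/ directory:

import unittest

def run_regression_suite():
    # Re-discover and re-run all previously written test cases
    # after every change to the application.
    suite = unittest.TestLoader().discover("tests", pattern="test_*.py")
    result = unittest.TextTestRunner(verbosity=1).run(suite)
    return result.wasSuccessful()

if __name__ == "__main__":
    ok = run_regression_suite()
    print("regression", "passed" if ok else "FAILED")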
97. Describe to me when you would consider employing a failure mode and effect
analysis
FMEA (Failure Mode and Effects Analysis) is a proactive tool, technique and quality method
that enables the identification and prevention of process or product errors before they
occur. Failure modes and effects analysis (FMEA) is a disciplined approach used to identify
possible failures of a product or service and then determine the frequency and impact of the
failure.
98. What is UML and how to use it for testing?
The Unified Modeling Language is a third-generation method for specifying, visualizing, and documenting the artifacts of an object-oriented system under development. From the inside, the Unified Modeling Language consists of three things:
1. A formal metamodel
2. A graphical notation
3. A set of idioms of usage
99. What you will do during the first day of job?
In my present company, HR introduced me to my colleagues, and I learned the following things:
1. What is the organization structure?
2. What is the current project being developed, and on what domain?
3. To whom do I have to report, and what are my other responsibilities?
100. What is IEEE? Why is it important?
An organization of engineers, scientists and students involved in electrical, electronics, and related fields. It is important because it functions as a publishing house and standards-making body.
101. Define Verification and Validation. Explain the differences between the two.
Verification: evaluation done at the end of a phase to determine that the requirements established during the previous phase have been met. Generally, verification refers to the overall software evaluation activity, including reviewing, inspecting, checking and auditing.
Validation: the process of evaluating software at the end of the development process to ensure compliance with requirements. Validation typically involves actual testing and takes place after verification is complete.
Or
Verification: Whether we are building the product right?
Validation: Whether we are building the right product/System?
102. Describe a past experience with implementing a test harness in the
development Of software
Harness: an arrangement of straps for attaching a horse to a cart.
Test Harness: this class of tool supports the processing of tests by making it almost painless to:
1. Install a candidate program in a test environment
2. Feed it input data
3. Simulate by stubs the behavior of subsidiary modules
A minimal sketch follows this list.
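The sketch below is a minimal Python test harness: it installs a candidate function into a test environment, feeds it input data, and simulates a subsidiary module with a stub. All names (price_with_tax, tax_service) are hypothetical.

import unittest
from unittest import mock

def price_with_tax(amount, tax_service):
    """Candidate unit: depends on a subsidiary tax_service module."""
    return amount + tax_service.tax_for(amount)

class PriceHarness(unittest.TestCase):
    def test_price_with_stubbed_tax(self):
        # Simulate the subsidiary module's behavior with a stub.
        tax_stub = mock.Mock()
        tax_stub.tax_for.return_value = 2.0
        # Feed the candidate its input data and check the outcome.
        self.assertEqual(price_with_tax(10.0, tax_stub), 12.0)

if __name__ == "__main__":
    unittest.main()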
103. What criteria do you use when determining when to automate a test or leave
it manual?
Time and budget are both key factors in determining whether a test stays manual or is automated. Apart from that, automation is suited to areas such as functional, regression, load, and user-interface testing, where it gives more accurate results.
104. What would you like to do five years from now?
I would like to be in a managerial role, ideally working closely with external clients. I have worked in client-facing roles for more than two years and I enjoy the challenge of keeping the customer satisfied; I think it's something I'm good at. I would also like to take on additional responsibility within this area, and possibly in other areas. Finally, I'd like to be on the right career path towards eventually becoming a Senior Manager within the company. I'm very aware that these are ambitious goals, but I feel that through hard work and dedication they are quite attainable.
105. Define each of the following and explain how each relates to the other: Unit,
System, and Integration testing
1. Unit testing comes first; performed by a developer.
2. Integration testing comes next; performed by a tester.
3. System testing comes last; performed by a tester.
106. What is IEEE? Why is it important?
"Institute of Electrical & Electronics Engineers": an organization of engineers, scientists and students involved in electrical, electronics, and related fields. It also functions as a publishing house and standards-making body.
107. What is the role of QA in a company that produces software?
The role of QA in the company is to ensure a quality software product: to verify that it meets all the requirements of its customers before the product is delivered.
108. How would you build a test team?
Building a test team requires a number of factors to be judged. First, you have to consider the complexity of the application or project that is going to be tested. Next come the testing time allotted and the levels of testing to be performed. With all these parameters in mind you need to decide the skills and experience level of your testers, and how many testers you need.
109. In an application currently in production, one module of code is being
modified. Is it necessary to re-test the whole application or is it enough to just
test functionality associated with that module?
It depends on the functionality related with that module. We need to check whether that
module is inter-related with other modules. If it is related with other modules, we need to
test related modules too. Otherwise, if it is an independent module, no need to test other
modules.
110. What are ISO standards? Why are they important?
ISO 9000 specifies requirements for a Quality Management System overseeing the
production of a product or service. It is not a standard for ensuring a product or service is of
quality; rather, it attests to the process of production, and how it will be managed and
reviewed.
For ex a few:
ISO 9000:2000
Quality management systems. Fundamentals and vocabulary
ISO 9000-1:1994
Quality management and quality assurance standards. Guidelines for selection and use
ISO 9000-2:1997
Quality management and quality assurance standards. Generic guidelines for the application
of ISO 9001, ISO 9002 and ISO 9003
ISO 9000-3:1997
Quality management and quality assurance standards. Guidelines for the application of ISO
9001:1994 to the development, supply, installation and maintenance of computer software
ISO 9001:1994
Quality systems. Model for quality assurance in design, development, production,
installation and servicing
ISO 9001:2000
Quality management systems. Requirements
111. What is the Waterfall Development Method and do you agree with all the
steps?
The waterfall approach is a traditional approach to software development. It will work if the project is a small one (not complex); real-time projects need the spiral methodology as their SDLC. Some product-based development can follow waterfall if it is not complex. Production cost is less if we follow the waterfall method.
112. What is migration testing?
Conducting testing after an application or its version has been changed is migration testing. It is the testing of programs or procedures used to convert data from existing systems for use in replacement systems.
113. Explain basic testing terminology (error, fault, failure, reliability). Why is testing necessary?
Testing Terminologies
Error: a human action that produces an incorrect result.
Fault: a manifestation of an error in software.
Failure: a deviation of the software from its expected delivery or service.
Reliability: the probability that the software will not cause the failure of the system for a specified time under specified conditions.
Why Testing is Necessary
Testing is necessary because software is likely to have faults in it, and it is better (cheaper, quicker and more expedient) to find and remove these faults before the software is put into live operation. Failures that occur during live operation are much more expensive to deal with than failures that occur during testing prior to the release of the software. Of course, other consequences of a system failing during live operation include the possibility of the software supplier being sued by its customers!
Testing is also necessary so we can learn about the reliability of the software (that is, how
likely it is to fail within a specified time under specified conditions).
114. What is UAT testing? When it is to be done?
UAT stands for User Acceptance Testing. This testing is carried out from the user's perspective, usually before a release. It is done by the end users along with testers to validate the functionality of the application. It is also called Pre-Production testing.
115. How do you find out whether tools will work well with your existing system?
I think we need to do market research on the various tools, depending on the type of application we are testing. Say we are testing an application made in VB with an Oracle database; then WinRunner is going to give good results. But in some cases it may not, say if your application uses a lot of 3rd-party grids and modules which have been integrated into the application. So it depends on the type of application you are testing.
We also need to know what sort of testing will be performed. If you need to test performance, you cannot use a record-and-playback tool; you need a performance testing tool such as LoadRunner.
116. What is the difference between a test strategy and a test plan?
Test plan: it is the plan for testing. It defines the scope, approach, and environment.
Test strategy: a test strategy is not a document. It is a framework for making decisions about value.
117. What is Scenarios in term of testing?
A test scenario is a set of test cases that ensure the business process flows are tested from end to end. A scenario may consist of independent tests or a series of tests that follow each other, each dependent on the output of the previous one. The terms test scenario and test case are often used synonymously.
118. Explain the differences between White-box, Gray-box, and Black-box testing?
Black box testing: tests are based on requirements and functionality, not on any knowledge of internal design or code.
White box testing: tests are based on coverage of code statements, branches, paths, and conditions, using knowledge of the internal logic of an application's code.
Gray box testing: a combination of black-box and white-box methodologies, testing a piece of software against its specification but using some knowledge of its internal workings.
119. What is structural and behavioral Testing?
Structural Testing
It is basically the testing of the code, a white-box testing technique. Since the testing is based on the internal structure of the program/code, it is called structural testing.
Behavioral Testing
It is also called functional testing, where the functionality of the software is tested. It is a black-box testing technique: since the testing is based on the external behavior/functionality of the system/application, it is called behavioral testing.
120. How does unit testing play a role in the development / Software lifecycle?
We can catch simple bugs, like GUI and small functional bugs, during unit testing. This reduces testing time, and overall it saves project time. If the developer doesn't catch this type of bug, it will reach the integration testing phase, and if it is caught there by a tester, it has to go through a full bug life cycle and consumes a lot of time.
121. What made you pick testing over another career?
Testing is one aspect which is very important in the Software Development Life Cycle (SDLC). I like to be part of the team which is responsible for the quality of the application being delivered. Also, QA has broad opportunities and a large scope for learning various technologies, and of course it has a lot more opportunities than development.
Sample Test Case:
Test Case ID | Test Case Description | Input Data | Expected Result | Actual Result | Pass/Fail | Remarks
Sample Bug Case:
S. No | Links | Bug ID | Description | Initial Bug Status | Retesting Bug Status | Conf. Bug Status
Manual Software Testing Interview Questions with Answers
As a software tester, a person should have certain qualities which are imperative: the person should be observant, creative, innovative, speculative, patient, etc. It is important to note that when you opt for manual testing, it is an accepted fact that the job is going to be tedious and laborious. Whether you are a fresher or experienced, there are certain questions whose answers you should know.
1) What is the difference between bug, error and defect?
Bug and defect essentially mean the same thing: a flaw in a component or system which can cause the component or system to fail to perform its required function. If a bug or defect is encountered during the execution phase of the software development, it can cause the component or the system to fail. On the other hand, an error is a human action which gives rise to an incorrect result. You may want to read about how to log a bug (defect), the contents of a bug, the bug life cycle, and the statuses used during a bug life cycle, which will help you understand the terms bug and defect better.
2) Explain white box testing.
One of the testing types used in software testing is white box testing. Read in detail on white box
testing.
3) Tell me about V model in manual testing.
The V model is a framework which describes the software development life cycle activities, right from requirements specification up to the software maintenance phase. Testing is integrated into each of the phases of the model. The phases start with user requirements and are followed by system requirements, global design, detailed design and implementation, and end with system testing of the entire system. Each phase of the model has its respective testing activity integrated into it, carried out in parallel to the development activities. The four test levels used by this model are component testing, integration testing, system testing and acceptance testing.
4) What are stubs and drivers in manual testing?
Both stubs and drivers are part of incremental testing, in which two approaches are used: the bottom-up and the top-down approach. Drivers are used in bottom-up testing: they are modules which exercise the components to be tested, and they look similar to the future real modules that will call those components. A stub, by contrast, is a skeletal or special-purpose implementation of a software component, used to develop or test another component that calls or is otherwise dependent on it; it is the replacement for the called component. A sketch of both appears below.
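The following Python sketch contrasts the two helpers; every name in it is hypothetical and invented for illustration.

# Top-down: a STUB replaces a lower-level component that is not written yet.
def discount_service_stub(order_total):
    """Stands in for the real discount component; returns a canned value."""
    return 5.0

def checkout(order_total, discount_service):
    """Higher-level module under test; calls the (stubbed) component."""
    return order_total - discount_service(order_total)

assert checkout(100.0, discount_service_stub) == 95.0

# Bottom-up: a DRIVER exercises a finished low-level component directly.
def tax(amount):
    """Low-level component that is already implemented."""
    return round(amount * 0.2, 2)

def driver():
    """Throwaway driver that tests tax() before its real callers exist."""
    for amount, expected in [(10.0, 2.0), (0.0, 0.0)]:
        assert tax(amount) == expected

driver()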
5) Explain black box testing.
Find the answer to the question in the article on black box testing.
6) Explain compatibility testing.
The answer to this question is in the article on compatibility testing.
7) What are the check lists, which a software tester should follow?
Read the link on check lists for software tester to find the answer to the question.
8) What are the different types of software testing?
There are a number of types of software testing, which you can read about in the linked article.
9) What are the phases of STLC?
Just as there are different phases of the software development life cycle, there are different phases of the software testing life cycle as well. Read through the software testing life cycle article for more explanation.
10) What is a Review?
A review is an evaluation of a product or project status to ascertain any discrepancies from the planned results and to recommend improvements to the product. Common examples of reviews are the informal review (peer review), technical review, inspection, walkthrough and management review. This is one of the common manual testing interview questions.
11) Explain beta testing.
For answer to this question, refer to the article on beta testing.
12) Explain equivalence class partition.
It is a specification-based, or black-box, technique. Gather information on equivalence partitioning from the article on equivalence partitioning.
13) What is a test case?
Find the answer to this question in the article titled test cases.
14) What is a test suite?
A test suite is a set of several test cases designed for a component of a software or system under test,
where the post condition of one test case is normally used as the precondition for the next test.
15) What is acceptance testing?
Refer to the article on acceptance testing for the answer.
16) What is boundary value analysis?
A boundary value is an input or an output value, which resides on the edge of an equivalence partition.
It can also be the smallest incremental distance on either side of an edge, like the minimum or a
maximum value of an edge. Boundary value analysis is a black box testing technique, where the tests are
based on the boundary values.
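A sketch of boundary-value selection in Python for a field that accepts 1..100 (an assumed range, purely for illustration):

VALID_MIN, VALID_MAX = 1, 100

boundary_inputs = [
    VALID_MIN - 1,  # 0   just below the lower edge (expect reject)
    VALID_MIN,      # 1   on the lower edge         (expect accept)
    VALID_MIN + 1,  # 2   just above the lower edge (expect accept)
    VALID_MAX - 1,  # 99  just below the upper edge (expect accept)
    VALID_MAX,      # 100 on the upper edge         (expect accept)
    VALID_MAX + 1,  # 101 just above the upper edge (expect reject)
]

def accepts(value):
    """Hypothetical validator for the 1..100 field."""
    return VALID_MIN <= value <= VALID_MAX

for v in boundary_inputs:
    print(v, "accepted" if accepts(v) else "rejected")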
17) What is compatibility testing?

Compatibility testing is a part of non-functional tests carried out on the software component or the
entire software to evaluate the compatibility of the application with the computing environment. It can
be with the servers, other software, computer operating system, different web browsers or the
hardware as well.
18) What is exact difference between debugging & testing?
When a test is run and a defect has been identified, it is the duty of the developer to first locate the defect in the code and then fix it. This process is known as debugging. In other words, debugging is the process of finding, analyzing and removing the causes of failures in the software. Testing, on the other hand, consists of both static and dynamic life cycle activities; it helps to determine that the software satisfies the specified requirements and is fit for purpose.
19) Explain in short, sanity testing, ad-hoc testing and smoke testing.
Sanity testing is a basic test which checks whether all the components of the software compile with each other without any problem. It makes sure that there are no conflicting or duplicate function or global variable definitions made by different developers; it can also be carried out by the developers themselves. Smoke testing, on the other hand, is a testing methodology used to cover all the major functionality of the application without getting into the finer nuances of the application; it is said to be the main functionality-oriented test. Ad hoc testing is different from smoke and sanity testing: this term is used for software testing which is performed without any sort of planning and/or documentation. These tests are intended to run only once, although they can be run again if a defect is found. Ad hoc testing is also said to be a part of exploratory testing.
20) Explain performance testing.
It is one of the non-functional types of software testing. Performance of software is the degree to which a system or a component of a system accomplishes its designated functions within given constraints regarding processing time and throughput rate. Therefore, performance testing is the process of testing to determine the performance of the software.
21) What is exploratory testing?
Read the page on exploratory testing to find the answer.
22) What is integration testing?
One of the software testing types, where tests are conducted to test interfaces between components,
interactions of the different parts of the system with operating system, file system, hardware and
between different software. It may be carried out by the integrator of the system, but should ideally be
carried out by a specific integration tester or a test team.
23) What is meant by functional defects and usability defects in general? Give appropriate example.
We will take the example of a login window to understand functionality and usability defects. A functionality defect: the user gives a valid user name but an invalid password and clicks the login button, and the application accepts the credentials and displays the main window, where an error should have been displayed instead. A usability defect: the user gives a valid user name but an invalid password and clicks the login button, and the application throws up an error message saying "Please enter valid user name" when the message should have been "Please enter valid password".
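A sketch of how both checks could be automated, using a toy login() invented for the example (one hard-coded account, one message per failure kind):

def login(username, password):
    """Toy implementation; a real one would check a user store."""
    if username != "alice":
        return "Please enter valid user name"
    if password != "secret":
        return "Please enter valid password"
    return "main window"

# Functionality check: an invalid password must NOT open the main window.
assert login("alice", "wrong") != "main window"

# Usability check: the failure message must point at the password,
# not the user name (the usability defect described above).
assert login("alice", "wrong") == "Please enter valid password"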
24) What is pilot testing?
It is a test of a component of a software system, or of the entire system, under real operating conditions. The real environment helps to find defects in the system and prevents costly bugs from being detected later on. Normally a group of users uses the system before its complete deployment and gives feedback about the system.
25) Explain statement coverage.
It is a structure-based, or white-box, technique. Test coverage measures in a specific way the amount of testing performed by a set of tests. One test coverage type is statement coverage: the percentage of executable statements which have been exercised by a particular test suite. The formula used for statement coverage is: Statement Coverage = (Number of statements exercised / Total number of statements) * 100%
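Applying the formula with illustrative numbers (the counts are assumptions for the example):

statements_exercised = 180   # statements run at least once by the suite
statements_total = 200       # executable statements in the program

coverage = (statements_exercised / statements_total) * 100
print(f"Statement coverage: {coverage:.1f}%")  # prints 90.0%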
26) Explain stress testing.
Find the answer to this question in this article on stress testing.
27) What is regression testing?
Regression testing is the testing of a particular component of the software or the entire software after
modifications have been made to it. The aim of regression testing is to ensure new defects have not
been introduced in the component or software, especially in the areas where no changes have been
made. In short, regression testing is the testing to ensure nothing has changed, which should not have
changed due to changes made.
28) What is security testing?
Read on security testing for an appropriate answer.
29) What is system testing?
System testing is testing carried out of an integrated system to verify, that the system meets the
specified requirements. It is concerned with the behavior of the whole system, according to the scope
defined. More often than not system testing is the final test carried out by the development team, in
order to verify that the system developed does meet the specifications and also identify defects which
may be present.
30) What is the difference between retest and regression testing?
Retesting, also known as confirmation testing, re-runs the test cases that failed the last time they were run, in order to verify the success of corrective actions taken on the defect found. Regression testing, on the other hand, is the testing of a previously tested program after modification, to make sure that no new defects have been introduced; in other words, it helps to uncover defects in the unchanged areas of the software.
31) Explain priority, severity in software testing.
Priority is the level of business importance, which is assigned to a defect found. On the other hand,
severity is the degree of impact, the defect can have on the development or operation of the
component or the system.
32) Explain the bug life cycle in detail.
This is one of the most commonly asked interview questions, hence this question is always a part of
software testing interview questions and answers for experienced as well as freshers. The bug life cycle
is the stages the bug or defect goes through before it is fixed, deferred or rejected. Read in detail on bug
life cycle.
33) What is the difference between volume testing and load testing?
Volume testing checks whether the system can actually cope with a large amount of data: for example, a large number of fields in a particular record, or numerous records in a file. Load testing, on the other hand, measures the behavior of a component or a system under increased load, where the increase can be in the number of parallel users and/or parallel transactions. This helps to determine the amount of load which can be handled by the component or the software system.
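The contrast in one runnable Python sketch; process_record is a hypothetical stand-in for real transaction processing:

from concurrent.futures import ThreadPoolExecutor

def process_record(record):
    """Stand-in for real processing of one record/transaction."""
    return record.upper()

# Volume: one user, a very large data set.
big_file = ["row-%d" % i for i in range(100_000)]
for row in big_file:
    process_record(row)

# Load: many simulated users issuing transactions in parallel.
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(process_record, ["txn"] * 1_000))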
34) What is usability testing?
Refer to the article titled usability testing for an answer to this question.
35) Explain the test case life cycle.
On an average a test case goes through the following phases. The first phase of the test case life cycle is
identifying the test scenarios either from the specifications or from the use cases designed to develop
the system. Once the scenarios have been identified, the test cases apt for the scenarios have to be
developed. Then the test cases are reviewed and the approval for those test cases has to be taken from
the concerned authority. After the test cases have been approved, they are executed. When the execution of the test cases starts, the results of the tests have to be recorded. The test cases which pass are marked accordingly. If a test case fails, a defect has to be raised, and when the defect is fixed the failed test case has to be executed again.
36) What is verification and validation?
Read on the two techniques used in software testing, namely verification and validation, in the article on verification and validation.
37) Which are the different methodologies used in software testing?
Refer to software testing methodologies for detailed information on the different methodologies used
in software testing.
38) Explain the waterfall model in testing.
Waterfall model is a part of software development life cycle, as well as software testing. It is one of the
first models to be used for software testing.
39) What is Validation?
The process of evaluating software at the end of the software development process to ensure compliance with the software requirements. The techniques for validation are testing, inspection and reviewing.
40) What is Verification?
The process of determining whether or not the products of a given phase of the software development
cycle meet the implementation steps and can be traced to the incoming objectives established during
the previous phase. The techniques for verification are testing, inspection and reviewing.
41) What is Acceptance Testing? Testing conducted to enable a user/customer to determine whether to
accept a software product. Normally performed to validate the software meets a set of agreed
acceptance criteria. 42) What is Accessibility Testing? Verifying a product is accessible to the people
having disabilities (deaf, blind, mentally disabled etc.). 43) What is Ad Hoc Testing? A testing phase
where the tester tries to 'break' the system by randomly trying the system's functionality. Can include
negative testing as well. See also Monkey Testing. 44) What is Agile Testing? Testing practice for
projects using agile methodologies, treating development as the
customer of testing and emphasizing a test-first design paradigm. See also Test Driven Development. 45)
What is Application Binary Interface (ABI)? A specification defining requirements for portability of
applications in binary forms across different system platforms and environments. 46) What is
Application Programming Interface (API)? A formalized set of software calls and routines that can be
referenced by an application program in order to access supporting system or network services. 47)
What is Automated Software Quality (ASQ)? The use of software tools, such as automated testing
tools, to improve software quality. 48) What is Automated Testing? Testing employing software tools
which execute tests without manual intervention. Can be applied in GUI, performance, API, etc. testing.
The use of software to control the execution of tests, the comparison of actual outcomes to predicted
outcomes, the setting up of test preconditions, and other test control and test reporting functions. 49)
What is Backus-Naur Form? A metalanguage used to formally describe the syntax of a language. 50) What is Basic Block? A sequence of one or more consecutive, executable statements containing no
branches. 51) What is Basis Path Testing? A white box test case design technique that uses the
algorithmic flow of the program to design tests. 52). What is Basis Set? The set of tests derived using
basis path testing. 53) What is Baseline? The point at which some deliverable produced during the
software engineering process is put under formal change control. 55). What is Beta Testing? Testing of a
release of a software product conducted by customers. 56). What is Binary Portability Testing? Testing
an executable application for portability across system platforms and
environments, usually for conformance to an ABI specification. 17). What is Black Box Testing? Testing
based on an analysis of the specification of a piece of software without reference to its internal
workings. The goal is to test how well the component conforms to the published requirements for the
component. 18). What is Bottom Up Testing? An approach to integration testing where the lowest level
components are tested first, then used to facilitate the testing of higher level components. The process
is repeated until the component at the top of the hierarchy is tested. 19). What is Boundary Testing? Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)
20). What is Bug? What is Defect? A fault in a program which causes the program to perform in an unintended or unanticipated manner. If software misses some feature or function from what is in the requirement, it is called a defect. 21. What is Boundary Value Analysis? BVA is similar to Equivalence Partitioning but focuses on "corner cases", values on or just outside the range defined by the specification. This means that if a function expects all values in the range of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001. 22. What is Branch Testing? Testing in which
all branches in the program source code are tested at least once. 23. What is Breadth Testing? A test
suite that exercises the full functionality of a product but does not test features in detail. 24. What is
CAST? Computer Aided Software Testing. 25. What is Capture/Replay Tool? A test tool that records test
input as it is sent to the software under test. The input cases stored can then be used to reproduce the
test at a later time. Most commonly applied to GUI test tools. 26. What is CMM? The Capability
Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software
processes of an organization and for identifying the key practices that are required to increase the
maturity of these processes.
27. What is Cause Effect Graph? A graphical representation of inputs and their associated output effects which can be used to design test cases. 28. What is Code Complete? Phase of development where
functionality is implemented in entirety; bug fixes are all that are left. All functions found in the
Functional Specifications have been implemented. 29. What is Code Coverage? An analysis method that
determines which parts of the software have been executed (covered) by the test case suite and which
parts have not been executed and therefore may require additional attention. 30. What is Code
Inspection? A formal testing technique where the programmer reviews source code with a group who
ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically
common programming errors, and analyzing its compliance with coding standards. 31. What is Code
Walkthrough? A formal testing technique where source code is traced by a group with a small set of test
cases, while the state of program variables is manually monitored, to analyze the programmer's logic
and assumptions. 32. What is Coding? The generation of source code.
33. What is Compatibility Testing? Testing whether software is compatible with other elements of a
system with which it should operate, e.g. browsers, Operating Systems, or hardware. 34. What is
Component? A minimal software item for which a separate specification is available. 35. What is
Component Testing? Testing of individual software components (Unit Testing). 36. What is Concurrency
Testing? Multi-user testing geared towards determining the effects of accessing the same application
code, module or database records. Identifies and measures the level of locking, deadlocking and use of
single-threaded code and locking semaphores. 37. What is Conformance Testing? The process of testing
that an implementation conforms to the specification on which it is based. Usually applied to testing
conformance to a formal standard.
38. What is Context Driven Testing? The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now. 39.
What is Conversion Testing? Testing of programs or procedures used to convert data from existing
systems for use in replacement systems. 40. What is Cyclomatic Complexity? A measure of the logical
complexity of an algorithm, used in white-box testing. 41. What is Data Dictionary? A database that
contains definitions of all data items defined during analysis. 42. What is Data Flow Diagram? A
modeling notation that represents a functional decomposition of a system. 43. What is Data Driven
Testing? Testing in which the action of a test case is parameterized by externally defined data values,
maintained as a file or spreadsheet. A common technique in Automated Testing. 44. What is
Debugging? The process of finding and removing the causes of software failures.
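As a sketch of the data-driven testing idea defined above, the rows below stand in for an externally maintained file or spreadsheet; in practice they might be read with, say, Python's csv module:

import unittest

TEST_DATA = [
    # (input_a, input_b, expected_sum) -- would normally live in a CSV file
    (2, 3, 5),
    (-1, 1, 0),
    (0, 0, 0),
]

class DataDrivenTest(unittest.TestCase):
    def test_addition_rows(self):
        # The test action is parameterized by the externally defined rows.
        for a, b, expected in TEST_DATA:
            with self.subTest(a=a, b=b):
                self.assertEqual(a + b, expected)

if __name__ == "__main__":
    unittest.main()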
45. What is Defect? Nonconformance to requirements or functional / program specification 46. What is
Dependency Testing? Examines an application's requirements for pre-existing software, initial states
and configuration in order to maintain proper functionality. 47. What is Depth Testing? A test that
exercises a feature of a product in full detail. 48. What is Dynamic Testing? Testing software through
executing it. See also Static Testing. 49. What is Emulator? A device, computer program, or system that
accepts the same inputs and produces the same outputs as a given system. 50. What is Endurance
Testing? Checks for memory leaks or other problems that may occur with prolonged execution.
51. What is End-to-End testing? Testing a complete application environment in a situation that mimics
real-world use, such as interacting with a database, using network communications, or interacting with
other hardware, applications, or systems if appropriate. 52. What is Equivalence Class? A portion of a
component's input or output domains for which the component's behaviour is assumed to be the same
from the component's specification.
53. What is Equivalence Partitioning? A test case design technique for a component in which test cases
are designed to execute representatives from equivalence classes. 54. What is Exhaustive Testing?
Testing which covers all combinations of input values and preconditions for an element of the software
under test. 55. What is Functional Decomposition? A technique used during planning, analysis and
design; creates a functional hierarchy for the software. 54. What is Functional Specification? A
document that describes in detail the characteristics of the product with regard to its intended features.
55. What is Functional Testing? Testing the features and operational behavior of a product to ensure
they correspond to its specifications. Testing that ignores the internal mechanism of a system or
component and focuses solely on the outputs generated in response to selected inputs and execution
conditions. or Black Box Testing. 56. What is Glass Box Testing? A synonym for White Box Testing. 57.
What is Gorilla Testing? Testing one particular module or functionality heavily. 58. What is Gray Box Testing? A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings. 59. What is High Order
Tests? Black-box tests conducted once the software has been integrated.
60. What is Independent Test Group (ITG)? A group of people whose primary responsibility is software testing.
61. What is Inspection? A group review quality improvement process for written material. It consists of
two aspects; product (document itself) improvement and process improvement (of both document
production and inspection). 62. What is Integration Testing? Testing of combined parts of an application
to determine if they function together correctly. Usually performed after unit and functional testing.
This type of testing is especially relevant to client/server and distributed systems. 63. What is Installation Testing? Confirms that the application under test installs, configures, uninstalls and runs correctly across the supported platforms, environments and configurations. 64. What is Load Testing? See Performance Testing. 65.
What is Localization Testing? This term refers to making software specifically designed for a specific
locality. 66. What is Loop Testing? A white box testing technique that exercises program loops. 67.
What is Metric? A standard of measurement. Software metrics are the statistics describing the structure
or content of a program. A metric should be a real objective measurement of something such as number
of bugs per lines of code. 68. What is Monkey Testing? Testing a system or an application on the fly, i.e. just a few tests here and there to ensure the system or application does not crash out. 69. What is
Negative Testing? Testing aimed at showing software does not work. Also known as "test to fail". See
also Positive Testing. 70. What is Path Testing? Testing in which all paths in the program source code
are tested at least once.
71. What is Performance Testing? Testing conducted to evaluate the compliance of a system or
component with specified performance requirements. Often this is performed using an automated test
tool to simulate a large number of users. Also known as "Load Testing". 72. What is Positive Testing?
Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing.
73. What is Quality Assurance? All those planned or systematic actions necessary to provide adequate
confidence that a product or service is of the type and quality needed and expected by the customer.
74. What is Quality Audit? A systematic and independent examination to determine whether quality
activities and related results comply with planned arrangements and whether these arrangements are
implemented effectively and are suitable to achieve objectives. 75. What is Quality Circle? A group of
individuals with related interests that meet at regular intervals to consider problems or other matters
related to the quality of outputs of a process and to the correction of problems or to the improvement
of quality. 76. What is Quality Control? The operational techniques and the activities used to fulfill and
verify requirements of quality. 77. What is Quality Management? That aspect of the overall
management function that determines and implements the quality policy. 78. What is Quality Policy?
The overall intentions and direction of an organization as regards quality as formally expressed by top
management.
79. What is Quality System? The organizational structure, responsibilities, procedures, processes, and
resources for implementing quality management. 80. What is Race Condition? A cause of concurrency
problems. Multiple accesses to a shared resource, at least one of which is a write, with no mechanism
used by either to moderate simultaneous access. 81. What is Ramp Testing? Continuously raising an
input signal until the system breaks down. 82. What is Recovery Testing? Confirms that the program
recovers from expected or unexpected events without loss of data or functionality. Events can include
shortage of disk space, unexpected loss of communication, or power out conditions. 83. What is
Regression Testing? Retesting a previously tested program following modification to ensure that faults
have not been introduced or uncovered as a result of the changes made.
84. What is Release Candidate? A pre-release version, which contains the desired functionality of the
final version, but which needs to be tested for bugs (which ideally should be removed before the final
version is released).
85. What is Sanity Testing? Brief test of major functional elements of a piece of software to determine if it's basically operational. See also Smoke Testing. 86. What is Scalability Testing? Performance testing
focused on ensuring the application under test gracefully handles increases in work load. 87. What is
Security Testing? Testing which confirms that the program can restrict access to authorized personnel
and that the authorized personnel can access the functions available to their security level. 88. What is
Smoke Testing? A quick-and-dirty test that the major functions of a piece of software work. Originated
in the hardware testing practice of turning on a new piece of hardware for the first time and considering
it a success if it does not catch on fire. 89. What is Soak Testing? Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed. 90. What is Software Requirements Specification? A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software. 91. What is Software Testing? A set of activities conducted with
the intent of finding errors in software.
92. What is Static Analysis? Analysis of a program carried out without executing the program. 93. What
is Static Analyzer? A tool that carries out static analysis. 94. What is Static Testing? Analysis of a
program carried out without executing the program.
95. What is Storage Testing? Testing that verifies the program under test stores data files in the correct
directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of
space. This is external storage as opposed to internal storage. 96. What is Stress Testing? Testing
conducted to evaluate a system or component at or beyond the limits of its specified requirements to
determine the load under which it fails and how. Often this is performance testing using a very high
level of simulated load. 97. What is Structural Testing? Testing based on an analysis of internal workings
and structure of a piece of software. See also White Box Testing.
98. What is System Testing? Testing that attempts to discover defects that are properties of the entire
system rather than of its individual components. 99. What is Testability? The degree to which a system
or component facilitates the establishment of test criteria and the performance of tests to determine
whether those criteria have been met. 100. What is Testing? The process of exercising software to verify
that it satisfies specified requirements and to detect errors. The process of analyzing a software item to
detect the differences between existing and required conditions (that is, bugs), and to evaluate the
features of the software item (Ref. IEEE Std 829). The process of operating a system or component
under specified conditions, observing or recording the results, and making an evaluation of some aspect
of the system or component. What is Test Automation? It is the same as Automated Testing. 101. What
is Test Bed? An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.
102. What is Test Case? Test Case is a commonly used term for a specific test. This is usually the
smallest unit of testing. A Test Case will consist of information such as requirements testing, test steps,
verification steps, prerequisites, outputs, test environment, etc. A set of inputs, execution preconditions,
and expected outcomes developed for a particular objective, such as to exercise a particular program
path or to verify compliance with a specific requirement. What is Test Driven Development? Testing
methodology associated with Agile Programming in which every chunk of code is covered by unit tests,
which must all pass all the time, in an effort to eliminate
unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, i.e. an equal
number of lines of test code to the size of the production code. 103. What is Test Driver? A program or test tool used to execute tests. Also known as a Test Harness. 104. What is Test Environment? The
hardware and software environment in which tests will be run, and any other software with which the
software under test interacts when under test, including stubs and test drivers. 105. What is Test First Design? Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test. 106. What is Test Harness? A program or test tool used to execute tests. Also known as a Test Driver.
107. What is Test Plan? A document describing the scope, approach, resources, and schedule of
intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will
do each task, and any risks requiring contingency planning. 108. What is Test Procedure? A document
providing detailed instructions for the execution of one or more test cases. 109. What is Test Script?
Commonly used to refer to the instructions for a particular test that will be carried out by an automated
test tool. 110. What is Test Specification? A document specifying the test approach for a software feature or combination of features, and the inputs, predicted results and execution conditions for the associated tests. 111. What is Test Suite? A collection of tests used to validate the behavior of a
product. The scope of a Test Suite varies from organization to organization. There may be several Test
Suites for a particular product for example. In most cases however a Test Suite is a high level concept,
grouping together hundreds or thousands of tests related by what they are intended to test. 112. What
is Test Tools? Computer programs used in the testing of a system, a component of the system, or its
documentation.
113. What is Thread Testing? A variation of top-down testing where the progressive integration of
components follows the implementation of subsets of the requirements, as opposed to the integration
of components by successively lower levels. 114. What is Top Down Testing? An approach to
integration testing where the component at the top of the component hierarchy is tested first, with
lower level components being simulated by stubs. Tested components are then used to test lower level
components. The process is repeated until the lowest level components have been tested. 115. What is
Total Quality Management? A company commitment to develop a process that achieves high quality
product and customer satisfaction. 116. What is Traceability Matrix? A document showing the
relationship between Test Requirements and Test Cases. 117. What is Usability Testing? Testing the
ease with which users can learn and use a product. 118. What is Use Case? The specification of tests
that are conducted from the end-user perspective. Use cases tend to focus on operating software as an
end-user would conduct their day-to-day activities.
119. What is Unit Testing? Testing of individual software components. 120. What is Validation? The process of evaluating software at the end of the software development process to ensure compliance with software requirements. The techniques for validation are testing, inspection and reviewing. 121. What is Verification? The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing. 122. What is White Box Testing? Testing based on an analysis of internal workings and
structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also
known as Structural Testing and Glass Box Testing. Contrast with Black Box Testing. White box testing is used to test the internal logic of the code, for example checking whether each path has been executed at least once and whether the branches have been executed at least once. It is used to check the structure of the code.
123. What is Workflow Testing? Scripted end-to-end testing which duplicates specific workflows which
are expected to be utilized by the end-user. 124. What's the difference between load and stress testing? One of the most common but unfortunate misuses of terminology is treating load testing and stress
testing as synonymous. The consequence of this ignorant semantic abuse is usually that the system is
neither properly load tested nor subjected to a meaningful stress test. Stress testing is subjecting a
system to an unreasonable load while denying it the resources (e.g., RAM, disc, mips, interrupts, etc.)
needed to process that load. The idea is to stress a system to the breaking point in order to find bugs
that will make that break potentially harmful. The system is not expected to process the overload
without adequate resources, but to behave (e.g., fail) in a decent manner (e.g., not corrupting or losing
data). Bugs and failure modes discovered under stress testing may or may not be repaired depending on
the application, the failure mode, consequences, etc. The load (incoming transaction stream) in stress
testing is often deliberately distorted so as to force the system into resource depletion. Load testing is
subjecting a system to a statistically representative (usually) load. The two main reasons for using such
loads is in support of software reliability testing and in performance testing. The term 'load testing' by
itself is too vague and imprecise to warrant use: for example, do you mean 'representative load,' 'overload,' 'high load,' etc.? In performance testing, load is varied from a minimum (zero) to the maximum level the system can sustain without running out of resources or having transactions suffer (application-specific) excessive delay. A third use of the term is as a test whose objective is to determine the maximum sustainable load the system can handle. In this usage, 'load testing' is merely testing at the highest transaction arrival rate in performance testing.
125. What's the difference between QA and testing? QA is more of a preventive activity, ensuring quality in the company and therefore in the product, rather than just testing the product for software bugs. TESTING means 'quality control'; QUALITY CONTROL measures the quality of a product; QUALITY ASSURANCE measures the quality of the processes used to create a quality product. 126. What is the best tester to
developer ratio? Reported tester:developer ratios range from 10:1 to 1:10. There's no simple answer; it depends on so many things: amount of reused code, number and type of interfaces, platform, quality goals, etc. It can also depend on the development model: the more specs, the fewer testers. The roles can play a big part also. Does QA own beta? Do you include process auditors or planning activities? These figures can all vary very widely depending on how you define 'tester' and 'developer'. In some
organizations, a 'tester' is anyone who happens to be testing software at the time -- such as their own.
In other organizations, a 'tester' is only a member of an independent test group. It is better to ask about
the test labor content than it is to ask about the tester/developer ratio. The test labor content, across
most applications is generally accepted as 50%, when people do honest accounting. For life-critical
software, this can go up to 80%.
127. How can new Software QA processes be introduced in an existing organization? - A lot depends
on the size of the organization and the risks involved. For large organizations with high-risk (in terms of
lives or property) projects, serious management buy-in is required and a formalized QA process is
necessary. - Where the risk is lower, management and organizational buy-in and QA implementation
may be a slower, step-at-a-time process. QA processes should be balanced with productivity so as to
keep bureaucracy from getting out of hand. - For small groups or projects, a more ad-hoc process may
be appropriate, depending on the type of customers and projects. A lot will depend on team leads or
managers, feedback to developers, and ensuring adequate communications among customers,
managers, developers, and testers. - In all cases the most value for effort will be in requirements
management processes, with a goal of clear, complete, testable requirement specifications or
expectations. 128. What are 5 common problems in the software development process? 1. poor
requirements - if requirements are unclear, incomplete, too general, or not testable, there will be
problems. 2. unrealistic schedule - if too much work is crammed in too little time, problems are
inevitable. 3. inadequate testing - no one will know whether or not the program is any good until the
customer complains or systems crash. 4. features - requests to pile on new features after development
is underway; extremely common. 5. miscommunication - if developers don't know what's needed or customers have erroneous expectations, problems are guaranteed. 129. What are 5 common solutions
to software development problems? 1. solid requirements - clear, complete, detailed, cohesive,
attainable, testable requirements that are agreed to by all players. Use prototypes to help nail down
requirements. 2. realistic schedules - allow adequate time for planning, design, testing, bug fixing, retesting, changes, and documentation; personnel should be able to complete the project without burning
out. 3. adequate testing - start testing early on, re-test after fixes or changes, plan for adequate time for
testing and bug-fixing. 4. stick to initial requirements as much as possible - be prepared to defend
against changes and additions once development has begun, and be prepared to explain consequences.
If changes are necessary, they should be adequately reflected in related schedule changes. If possible,
use rapid prototyping during the design phase so that customers can see what to expect. This will
provide them a higher comfort level with their requirements decisions and minimize changes later on. 5.
communication - require walkthroughs and inspections when appropriate; make extensive use of group
communication tools - e-mail, groupware, networked bug-tracking tools and change management tools,
intranet capabilities, etc.; ensure that documentation is available and up-to-date - preferably electronic,
not paper; promote teamwork and cooperation; use prototypes early on so that customers'
expectations are clarified. 130. What is 'good code'? 'Good code' is code that works, is bug free, and is
readable and maintainable. Some organizations have coding 'standards' that all developers are
supposed to
adhere to, but everyone has different ideas about what's best, or what is too many or too few rules.
There are also various theories and metrics, such as McCabe Complexity metrics. It should be kept in
mind that excessive use of standards and rules can stifle productivity and creativity. 'Peer reviews',
'buddy checks' code analysis tools, etc. can be used to check for problems and enforce standards. For C
and C++ coding, here are some typical ideas to consider in setting rules/standards; these may or may not
apply to a particular situation: - minimize or eliminate use of global variables. - use descriptive function
and method names - use both upper and lower case, avoid abbreviations, use as many characters as
necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent
in naming conventions. - use descriptive variable names - use both upper and lower case, avoid
abbreviations, use as many characters as necessary to be adequately descriptive (use of more than 20
characters is not out of line); be consistent in naming conventions. - function and method sizes should
be minimized; less than 100 lines of code is good, less than 50 lines is preferable. - function descriptions
should be clearly spelled out in comments preceding a function's code. - organize code for readability - use whitespace generously, vertically and horizontally - each line of code should contain 70 characters max. - one code statement per line. - coding style should be consistent throughout a program (e.g., use of
brackets, indentations, naming conventions, etc.) - in adding comments, err on the side of too many
rather than too few comments; a common rule of thumb is that there should be at least as many lines of
comments (including header blocks) as lines of code. - no matter how small, an application should
include documentation of the overall program function and flow (even a few paragraphs is better than
nothing); or if possible a separate flow chart and detailed program documentation. - make extensive use
of error handling procedures and status and error logging. - for C++, to minimize complexity and
increase maintainability, avoid too many levels of inheritance in class hierarchies (relative to the size and
complexity of the application). Minimize use of multiple inheritance, and minimize use of operator overloading (note that the Java programming language eliminates multiple inheritance and operator overloading). - for C++, keep class methods small; less than 50 lines of code per method is preferable. - for C++, make liberal use of exception handlers.
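The flavor of these rules is easier to see in code than in prose. A short illustrative sketch (written in Python rather than C/C++ purely for brevity; the function and its rule are invented for illustration) applying several of them - a descriptive name, a small function body, a clear comment, and explicit error handling:

def calculate_average_order_value(order_totals):
    """Return the average value of the given order totals.

    Descriptive name, small body, and explicit error handling:
    an empty input is rejected up front rather than surfacing as
    an obscure ZeroDivisionError deeper in the call stack.
    """
    if not order_totals:
        raise ValueError("order_totals must contain at least one order")
    return sum(order_totals) / len(order_totals)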
131. What is 'good design'? 'Design' could refer to many things, but often refers to 'functional design' or 'internal design'. Good internal design is indicated by
software code whose overall structure is clear, understandable, easily modifiable, and maintainable; is
robust with sufficient error-handling and status logging capability; and works correctly when
implemented. Good functional design is indicated by an application whose functionality can be traced
back to customer and end-user requirements. For programs that have a user interface, it's often a good
idea to assume that the end user will have little computer knowledge and may not read a user manual
or even the on-line help; some common rules-of-thumb include: - the program should act in a way that
least surprises the user - it should always be evident to the user what can be done next and how to exit - the program shouldn't let the users do something stupid without warning them.
132. What makes a good test engineer? A good test engineer has a 'test to break' attitude, an ability to
take the point of view of the customer, a strong desire for quality, and an attention to detail. Tact and
diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to
communicate with both technical (developers) and non-technical (customers, management) people is
useful. Previous software development experience can be helpful as it provides a deeper understanding
of the software development process, gives the tester an appreciation for the developers' point of view,
and reduces the learning curve in automated test tool programming. Judgment skills are needed to
assess high-risk areas of an application on which to focus testing efforts when time is limited.
133. What makes a good Software QA engineer? The same qualities a good tester has are useful for a
QA engineer. Additionally, they must be able to understand the entire software development process
and how it can fit into the business approach and goals of the organization. Communication skills and
the ability to understand various sides of issues are important. In organizations in the early stages of
implementing QA processes, patience and diplomacy are especially needed. An ability to find problems
as well as to see 'what's missing' is important for inspections and reviews. 134. What makes a good QA
or Test manager? A good QA, test, or QA/Test(combined) manager should: - be familiar with the
software development process - be able to maintain enthusiasm of their team and promote a positive
atmosphere, despite what is a somewhat 'negative' process (e.g., looking for or preventing problems) - be able to promote teamwork to increase productivity - be able to promote cooperation between
software, test, and QA engineers - have the diplomatic skills needed to promote improvements in QA
processes - have the ability to withstand pressures and say 'no' to other managers when quality is insufficient or QA processes are not being adhered to - have people judgement skills for hiring and keeping skilled personnel - be able to communicate with technical and non-technical people, engineers,
managers, and customers. - be able to run meetings and keep them focused 135. What's the role of
documentation in QA? Critical. (Note that documentation can be electronic, not necessarily paper.) QA
practices should be documented such that they are repeatable. Specifications, designs, business rules,
inspection reports, configurations, code changes, test plans, test cases, bug reports, user manuals, etc.
should all be documented. There should ideally be a system for easily finding and obtaining documents
and determining what documentation will have a particular piece of information. Change management for documentation should be used if possible. 136. What's the big deal about 'requirements'? One of
the most reliable methods of ensuring problems, or failure, in a complex software project is to have
poorly documented requirements specifications. Requirements are the details describing an
application's externally-perceived functionality and properties. Requirements should be clear, complete,
reasonably
detailed, cohesive, attainable, and testable. A non-testable requirement would be, for example, 'user-friendly' (too subjective). A testable requirement would be something like 'the user must enter their
previously-assigned password to access the application'. Determining and organizing requirements
details in a useful and efficient way can be a difficult effort; different methods are available depending
on the particular project. Many books are available that describe various approaches to this task. Care
should be taken to involve ALL of a project's significant 'customers' in the requirements process.
'Customers' could be in-house personnel or out, and could include end-users, customer acceptance
testers, customer contract officers, customer management, future software maintenance engineers,
salespeople, etc. Anyone who could later derail the project if their expectations aren't met should be
included if possible. Organizations vary considerably in their handling of requirements specifications.
Ideally, the requirements are spelled out in a document with statements such as 'The product shall.....'.
'Design' specifications should not be confused with 'requirements'; design specifications should be
traceable back to the requirements. In some organizations requirements may end up in high level
project plans, functional specification documents, in design documents, or in other documents at
various levels of detail. No matter what they are called, some type of documentation with detailed
requirements will be needed by testers in order to properly plan and execute tests. Without such
documentation, there will be no clear-cut way to determine if a software application is performing
correctly. 137. What steps are needed to develop and run software tests? The following are some of
the steps to consider: - Obtain requirements, functional design, and internal design specifications and
other necessary documents - Obtain budget and schedule requirements - Determine project-related
personnel and their responsibilities, reporting requirements, required standards and processes (such as
release processes, change processes, etc.) - Identify application's higher-risk aspects, set priorities, and
determine scope and limitations of tests - Determine test approaches and methods - unit, integration,
functional, system, load, usability tests, etc. - Determine test environment requirements (hardware,
software, communications, etc.) -Determine testware requirements (record/playback tools, coverage
analyzers, test tracking, problem/bug tracking, etc.) - Determine test input data requirements - Identify
tasks, those responsible for tasks, and labor requirements - Set schedule estimates, timelines,
milestones - Determine input equivalence classes, boundary value analyses, error classes (see the sketch after this list) - Prepare test
plan document and have needed reviews/approvals - Write test cases - Have needed
reviews/inspections/approvals of test cases - Prepare test environment and testware, obtain needed
user manuals/reference documents/configuration guides/installation guides, set up test tracking
processes, set up logging and archiving processes, set up or obtain test input data - Obtain and install
software releases - Perform tests - Evaluate and report results - Track problems/bugs and fixes - Retest
as needed - Maintain and update test plans, test cases, test environment, and testware through the life cycle.
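To make the equivalence class and boundary value step referenced in the list above concrete, here is a minimal sketch in Python, assuming a hypothetical requirement that a password must be 8-20 characters long; the interesting test inputs sit at, and one step either side of, each boundary:

MIN_LENGTH, MAX_LENGTH = 8, 20  # hypothetical requirement: 8-20 characters

def is_valid_password_length(password):
    return MIN_LENGTH <= len(password) <= MAX_LENGTH

# Boundary value analysis: test at each boundary and one step either side,
# plus a representative member of each equivalence class.
cases = {
    7: False,    # just below the minimum (invalid class)
    8: True,     # the minimum boundary itself
    9: True,     # just above the minimum
    14: True,    # representative of the valid class
    19: True,    # just below the maximum
    20: True,    # the maximum boundary itself
    21: False,   # just above the maximum (invalid class)
}
for length, expected in cases.items():
    assert is_valid_password_length("x" * length) == expected, length
print("all boundary cases passed")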
138. What is 'configuration management'? Configuration management covers the processes used to control, coordinate, and track: code, requirements, documentation, problems, change requests, designs, tools/compilers/libraries/patches, changes made to them, and who makes the changes. 139. What if the
software is so buggy it can't really be tested at all? The best bet in this situation is for the testers to go
through the process of reporting whatever bugs or blocking-type problems initially show up, with the
focus being on critical bugs. Since this type of problem can severely affect schedules, and indicates
deeper problems in the software development process (such as insufficient unit testing or insufficient
integration testing, poor design, improper build or release procedures, etc.) managers should be
notified, and provided with some documentation as evidence of the problem. 140. How can it be known
when to stop testing? This can be difficult to determine. Many modern software applications are so
complex, and run in such an interdependent environment, that complete testing can never be done.
Common factors in deciding when to stop are: - Deadlines (release deadlines, testing deadlines, etc.) - Test cases completed with a certain percentage passed - Test budget depleted - Coverage of
code/functionality/requirements reaches a specified point - Bug rate falls below a certain level - Beta or
alpha testing period ends 141. What if there isn't enough time for thorough testing? Use risk analysis to
determine where testing should be focused. Since it's rarely possible to test every possible aspect of an
application, every possible combination of events, every dependency, or everything that could go
wrong, risk analysis is appropriate to most software development projects. This requires judgement
skills, common sense, and experience. (If warranted, formal methods are also available.) Considerations
can include: - Which functionality is most important to the project's intended purpose? - Which
functionality is most visible to the user? - Which functionality has the largest safety impact? - Which
functionality has the largest financial impact on users? - Which aspects of the application are most
important to the customer? - Which aspects of the application can be tested early in the development
cycle? - Which parts of the code are most complex, and thus most subject to errors? - Which parts of the
application were developed in rush or panic mode? - Which aspects of similar/related previous projects
caused problems? - Which aspects of similar/related previous projects had large maintenance expenses?
- Which parts of the requirements and design are unclear or poorly thought out? - What do the
developers think are the highest-risk aspects of the application? - What kinds of problems would cause
the worst publicity? - What kinds of problems would cause the most customer service complaints? - What kinds of tests could easily cover multiple functionalities? - Which tests will have the best high-risk-coverage to time-required ratio? 142. What can be done if requirements are changing continuously? A
common problem and a major headache. - Work with the project's stakeholders early on to understand
how requirements might change so that alternate test plans and strategies can be worked out in
advance, if possible. - It's helpful if the
application's initial design allows for some adaptability so that later changes do not require redoing the
application from scratch. - If the code is well-commented and well-documented this makes changes
easier for the developers. - Use rapid prototyping whenever possible to help customers feel sure of their requirements and minimize changes. - The project's initial schedule should allow for some extra time commensurate with the possibility of changes. - Try to move new requirements to a 'Phase 2' version of
an application, while using the original requirements for the 'Phase 1' version. - Negotiate to allow only
easily-implemented new requirements into the project, while moving more difficult new requirements
into future versions of the application. - Be sure that customers and management understand the

scheduling impacts, inherent risks, and costs of significant requirements changes. Then let management
or the customers (not the developers or testers) decide if the changes are warranted - after all, that's
their job. - Balance the effort put into setting up automated testing with the expected effort required to
re-do them to deal with changes. - Try to design some flexibility into automated test scripts. - Focus
initial automated testing on application aspects that are most likely to remain unchanged. - Devote
appropriate effort to risk analysis of changes to minimize regression testing needs. - Design some
flexibility into test cases (this is not easily done; the best bet might be to minimize the detail in the test
cases, or set up only higher-level generic-type test plans) - Focus less on detailed test plans and test
cases and more on ad hoc testing (with an understanding of the added risk that this entails). 143. What
if the project isn't big enough to justify extensive testing? Consider the impact of project errors, not the
size of the project. However, if extensive testing is still not justified, risk analysis is again needed and the
same considerations as described previously in 'What if there isn't enough time for thorough testing?'
apply. The tester might then do ad hoc testing, or write up a limited test plan based on the risk analysis.
144. What if the application has functionality that wasn't in the requirements? It may take serious
effort to determine if an application has significant unexpected or hidden functionality, and it would
indicate deeper problems in the software development process. If the functionality isn't necessary to
the purpose of the application, it should be removed, as it may have unknown impacts or dependencies
that were not taken into account by the designer or the customer. If not removed, design information
will be needed to determine added testing needs or regression testing needs. Management should be
made aware of any significant added risks as a result of the unexpected functionality. If the functionality
only affects areas such as minor improvements in the user interface, for example, it may not be a
significant risk. 145. How can Software QA processes be implemented without stifling productivity? By
implementing QA processes slowly over time, using consensus to reach agreement on processes, and
adjusting and experimenting as an organization grows and matures, productivity will be improved
instead of stifled. Problem
prevention will lessen the need for problem detection, panics and burn-out will decrease, and there will
be improved focus and less wasted effort. At the same time, attempts should be made to keep
processes simple and efficient, minimize paperwork, promote computer-based processes and
automated tracking and reporting, minimize time required in meetings, and promote training as part of
the QA process. However, no one - especially talented technical types - likes rules or bureaucracy, and in
the short run things may slow down a bit. A typical scenario would be that more days of planning and
development will be needed, but less time will be required for late-night bug-fixing and calming of irate
customers. 146. What if an organization is growing so fast that fixed QA processes are impossible? This
is a common problem in the software industry, especially in new technology areas. There is no easy
solution in this situation, other than: - Hire good people - Management should 'ruthlessly prioritize'
quality issues and maintain focus on the customer - Everyone in the organization should be clear on
what 'quality' means to the customer 147. How does a client/server environment affect testing?
Client/server applications can be quite complex due to the multiple dependencies among clients, data
communications, hardware, and servers. Thus testing requirements can be extensive. When time is
limited (as it usually is) the focus should be on integration and system testing. Additionally,
load/stress/performance testing may be useful in determining client/server application limitations and capabilities. There are commercial tools to assist with such testing. 148. How can World Wide Web sites
be tested? Web sites are essentially client/server applications - with web servers and 'browser' clients.
Consideration should be given to the interactions between html pages, TCP/IP communications, Internet
connections, firewalls, applications that run in web pages (such as applets, javascript, plug-in
applications), and applications that run on the server side (such as cgi scripts, database interfaces,
logging applications, dynamic page generators, asp, etc.). Additionally, there are a wide variety of
servers and browsers, various versions of each, small but sometimes significant differences between
them, variations in connection speeds, rapidly changing technologies, and multiple standards and
protocols. The end result is that testing for web sites can become a major ongoing effort. Other
considerations might include: - What are the expected loads on the server (e.g., number of hits per unit
time?), and what kind of performance is required under such loads (such as web server response time,
database query response times). What kinds of tools will be needed for performance testing (such as
web load testing tools, other tools already in house that can be adapted, web robot downloading tools,
etc.)? - Who is the target audience? What kind of browsers will they be using? What kind of connection
speeds will they be using? Are they intra-organization (thus with likely high connection speeds and
similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)? - What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast
should animations, applets, etc. load and run)? - Will downtime for server and content maintenance/upgrades be allowed? How much? - What kinds of security
(firewalls, encryptions, passwords, etc.) will be required and what is it expected to do? How can it be
tested? - How reliable are the site's Internet connections required to be? And how does that affect
backup system or redundant connection requirements and testing? - What processes will be required to
manage updates to the web site's content, and what are the requirements for maintaining, tracking, and
controlling page content, graphics, links, etc.? - Which HTML specification will be adhered to? How
strictly? What variations will be allowed for targeted browsers? - Will there be any standards or
requirements for page appearance and/or graphics throughout a site or parts of a site? - How will internal and external links be validated and updated? How often? (see the link-checking sketch after this list) - Can testing be done on the
production system, or will a separate test system be required? How are browser caching, variations in
browser option settings, dial-up connection variabilities, and real-world internet 'traffic congestion'
problems to be accounted for in testing?- How extensive or customized are the server logging and
reporting requirements; are they considered an integral part of the system and do they require testing? - How are CGI programs, applets, JavaScripts, ActiveX components, etc. to be maintained, tracked,
controlled, and tested? - Pages should be 3-5 screens max unless content is tightly focused on a single
topic. If larger, provide internal links within the page. - The page layouts and design elements should be
consistent throughout a site, so that it's clear to the user that they're still within a site. - Pages should be
as browser-independent as possible, or pages should be provided or generated based on the browser type. - All pages should have links external to the page; there should be no dead-end pages. - The page
owner, revision date, and a link to a contact person or organization should be included on each page.
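For the link-validation item flagged in the list above, even a small script can help. A minimal sketch using only the Python standard library (the starting URL is hypothetical) that extracts the anchors from one page and reports which targets respond:

from html.parser import HTMLParser
from urllib.error import HTTPError, URLError
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    # Collect the href targets of all anchor tags on a page.
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def check_links(page_url):
    parser = LinkExtractor()
    parser.feed(urlopen(page_url).read().decode("utf-8", errors="replace"))
    for href in parser.links:
        target = urljoin(page_url, href)  # resolve relative links
        try:
            status = urlopen(target, timeout=10).getcode()
            print(f"OK   {status}  {target}")
        except (HTTPError, URLError) as error:
            print(f"DEAD      {target}  ({error})")

check_links("http://example.com/")  # hypothetical starting page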
149. How is testing affected by object-oriented designs? Well-engineered object-oriented design can
make it easier to trace from code to internal design to functional design to requirements. While there
will be little effect on black box testing (where an understanding of the internal design of the application is unnecessary), white-box testing can be oriented to the application's objects. If the application was
well-designed this can simplify test design. 150. What is Extreme Programming and what's it got to do
with testing? Extreme Programming (XP) is a software development approach for small teams on risk-prone projects with unstable requirements. It was created by Kent Beck, who described the approach in
his book 'Extreme Programming Explained'. Testing ('extreme testing') is a core aspect of Extreme
Programming. Programmers are expected to write unit and functional test code first - before the
application is developed. Test code is under source control along with the rest of the code. Customers
are expected to be an integral part of the project team and to help develop scenarios for
acceptance/black box testing. Acceptance tests are preferably automated, and are modified and rerun
for each of the frequent development iterations. QA and test personnel are also required to be an
integral part of the project team. Detailed requirements documentation is not used, and frequent rescheduling, re-estimating, and re-prioritizing is expected.
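A minimal sketch of the test-first style in Python's built-in unittest (standing in for whichever xUnit framework a team actually uses; the cart_total example is invented): the test is written first and fails because no implementation exists yet, then the simplest code is written to make it pass.

import unittest

# Step 2 - written after the test below: the simplest code that makes it pass.
def cart_total(prices, discount=0.0):
    return sum(prices) * (1.0 - discount)

# Step 1 - the unit test, written before any implementation existed.
class CartTotalTest(unittest.TestCase):
    def test_total_without_discount(self):
        self.assertEqual(cart_total([10.0, 5.0]), 15.0)

    def test_total_with_discount(self):
        self.assertAlmostEqual(cart_total([10.0, 10.0], discount=0.1), 18.0)

if __name__ == "__main__":
    unittest.main()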
151. Will automated testing tools make testing easier? - Possibly. For small projects, the time needed
to learn and implement them may not be worth it. For larger projects, or on-going long-term projects
they can be valuable. - A common type of automated tool is the 'record/playback' type. For example, a
tester could click through all combinations of menu choices, dialog box choices, buttons, etc. in an
application GUI and have them 'recorded' and the results logged by a tool. The 'recording' is typically in
the form of text based on a scripting language that is interpretable by the testing tool. If new buttons
are added, or some underlying code in the application is changed, etc. the application can then be
retested by just 'playing back' the 'recorded' actions, and comparing the logging results to check effects
of the changes. The problem with such tools is that if there are continual changes to the system being
tested, the 'recordings' may have to be changed so much that it becomes very time-consuming to
continuously update the scripts. Additionally, interpretation of results (screens, data, logs, etc.) can be a
difficult task. Note that there are record/playback tools for text-based interfaces also, and for all types
of platforms. - Other automated tools can include: code analyzers - monitor code complexity, adherence
to standards, etc. coverage analyzers - these tools check which parts of the code have been exercised by
a test, and may be oriented to code statement coverage, condition coverage, path coverage, etc.
memory analyzers - such as bounds-checkers and leak detectors. load/performance test tools - for
testing client/server and web applications under various load levels. web test tools - to check that links
are valid, HTML code usage is correct, client-side and server-side programs work, a web site's
interactions are secure. other tools - for test case management, documentation management, bug
reporting, and configuration management. 152. What's the difference between black box and white
box testing? Black-box and white-box are test design methods. Black-box test design treats the system
as a black-box, so it doesn't explicitly use knowledge of the internal structure. Black-box test design is
usually described as focusing on testing functional requirements. Synonyms for black-box include:
behavioral, functional, opaque-box, and closed-box. White-box test design allows one to peek inside the
box, and it focuses specifically on using internal knowledge of the software to guide the selection of
test data. Synonyms for white-box include: structural, glass-box and clear-box. While black-box and
white-box are terms that are still in popular use, many people prefer the terms 'behavioral' and
'structural'. Behavioral test design is slightly different from black-box test design because the use of
internal knowledge isn't strictly forbidden, but it's still discouraged. In practice, it hasn't proven useful to use a single test design method. One has to use a mixture of different methods so that they aren't
hindered by the limitations of a particular one. Some call this 'gray-box' or 'translucent-box' test design,
but others wish we'd stop talking about boxes altogether. It is important to understand that these
methods are used during the test design phase, and their influence is hard to see in the tests once
they're implemented. Note that any level of testing (unit testing, system testing, etc.) can use any test
design methods. Unit testing is usually associated with structural test design, but this is because testers
usually don't have well-defined requirements at the unit level to validate.
153. What kinds of testing should be considered? Black box testing - not based on any knowledge of
internal design or code. Tests are based on requirements and functionality. White box testing - based on
knowledge of the internal logic of an application's code. Tests are based on coverage of code
statements, branches, paths, conditions. unit testing - the most 'micro' scale of testing; to test particular
functions or code modules. Typically done by the programmer and not by testers, as it requires detailed
knowledge of the internal program design and code. Not always easily done unless the application has a
well-designed architecture with tight code; may require developing test driver modules or test
harnesses. incremental integration testing - continuous testing of an application as new functionality is
added; requires that various aspects of an application's functionality be independent enough to work
separately before all parts of the program are completed, or that test drivers be developed as needed;
done by programmers or by testers. integration testing - testing of combined parts of an application to
determine if they function together correctly. The 'parts' can be code modules, individual applications,
client and server applications on a network, etc. This type of testing is especially relevant to client/server
and distributed systems. functional testing - black-box type testing geared to functional requirements of
an application; this type of testing should be done by testers. This doesn't mean that the programmers
shouldn't check that their code works before releasing it (which of course applies to any stage of
testing.) system testing - black-box type testing that is based on overall requirements specifications;
covers all combined parts of a system. end-to-end testing - similar to system testing; the 'macro' end of
the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other
hardware, applications, or systems if appropriate. sanity testing or smoke testing - typically an initial
testing effort to determine if a new software version is performing well enough to accept it for a major
testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down
systems to a crawl, or corrupting databases, the software may not be in a 'sane' enough condition to
warrant further testing in its current state. regression testing - re-testing after fixes or modifications of
the software or its environment. It can be difficult to determine how much re-testing is needed,
especially near the end of the development cycle. Automated testing tools can be especially useful for
this type of testing. acceptance testing - final testing based on specifications of the end-user or
customer, or based on use by end-users/customers over some limited period of time. load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails. stress testing - term often used
interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system
functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input
of large numerical values, large complex queries to a database system, etc. performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans. usability testing - testing
for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer.
User interviews, surveys, video
recording of user sessions, and other techniques can be used. Programmers and testers are usually not
appropriate as usability testers. install/uninstall testing - testing of full, partial, or upgrade
install/uninstall processes. recovery testing - testing how well a system recovers from crashes, hardware
failures, or other catastrophic problems. failover testing - typically used interchangeably with 'recovery testing'. security testing - testing how well the system protects against unauthorized internal or external
access, willful damage, etc; may require sophisticated testing techniques. compatibility testing - testing
how well software performs in a particular hardware/software/operating system/network/etc.
environment. exploratory testing - often taken to mean a creative, informal software test that is not
based on formal test plans or test cases; testers may be learning the software as they test it. ad-hoc
testing - similar to exploratory testing, but often taken to mean that the testers have significant
understanding of the software before testing it. context-driven testing - testing driven by an
understanding of the environment, culture, and intended use of software. For example, the testing
approach for life-critical medical equipment software would be completely different than that for a low-cost computer game. user acceptance testing - determining if software is satisfactory to an end-user or
customer. comparison testing - comparing software weaknesses and strengths to competing products.
alpha testing - testing of an application when development is nearing completion; minor design changes
may still be made as a result of such testing. Typically done by end-users or others, not by programmers
or testers. beta testing - testing when development and testing are essentially completed and final bugs
and problems need to be found before final release. Typically done by end-users or others, not by
programmers or testers. mutation testing - a method for determining if a set of test data or test cases is
useful, by deliberately introducing various code changes ('bugs') and retesting with the original test
data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational
resources.
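A tiny hand-rolled illustration of the mutation-testing idea (real mutation tools generate mutants automatically; this example is invented): if the test suite still passes when a deliberate 'bug' is injected, the test data is too weak.

def is_adult(age):
    return age >= 18               # original code

def is_adult_mutant(age):
    return age > 18                # mutant: '>=' deliberately changed to '>'

def suite_passes(fn):
    # A weak test suite: it never tests the boundary age of exactly 18.
    return fn(10) is False and fn(30) is True

print("suite passes on original:", suite_passes(is_adult))         # True
print("suite passes on mutant:  ", suite_passes(is_adult_mutant))  # True - the mutant survives,
# so the test data is too weak; adding a case for age 18 would 'kill' this mutant.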
154. Why is it often hard for management to get serious about quality assurance? Solving problems is a high-visibility process; preventing problems is low-visibility. This is illustrated by an old parable: In ancient China there was a family of healers, one of whom was known throughout the land
and employed as a physician to a great lord. The physician was asked which of his family was the most
skillful healer. He replied, "I tend to the sick and dying with drastic and dramatic treatments, and on
occasion someone is cured and my name gets out among the lords." "My elder brother cures sickness
when it just begins to take root, and his skills are known among the local peasants and neighbors." "My
eldest brother is able to sense the spirit of sickness and eradicate it before it takes form. His name is
unknown outside our home." 155. Why does software have bugs? 1. Miscommunication or no
communication - as to specifics of what an application should or shouldn't do (the application's
requirements). 2. Software complexity - the complexity of current software applications can be difficult
to comprehend for anyone without experience in modern-day software development. Multi-tiered
applications, client-server and distributed applications, data communications, enormous relational
databases, and sheer size of applications have all contributed to the exponential growth in software/system complexity. 3. Programming errors - programmers, like anyone else, can make mistakes. 4. Changing requirements (whether documented or undocumented) - the end-user may not understand
the effects of changes, or may understand and request them anyway - redesign, rescheduling of
engineers, effects on other projects, work already completed that may have to be redone or thrown out,
hardware requirements that may be affected, etc. If there are many minor changes or any major
changes, known and unknown dependencies among parts of the project are likely to interact and cause
problems, and the complexity of coordinating changes may result in errors. Enthusiasm of engineering
staff may be affected. In some fast-changing business environments, continuously modified
requirements may be a fact of life. In this case, management must understand the resulting risks, and
QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs
from running out of control. 5. Poorly documented code - it's tough to maintain and modify code that is
badly written or poorly documented; the result is bugs. In many organizations management provides no
incentive for programmers to document their code or write clear, understandable, maintainable code. In
fact, it's usually the opposite: they get points mostly for quickly turning out code, and there's job
security if nobody else can understand it ('if it was hard to write, it should be hard to read'). 6. Software development tools - visual tools, class libraries, compilers, scripting tools, etc. often introduce their own
bugs or are poorly documented, resulting in added bugs. 156. How can new Software QA processes be
introduced in an existing organization? A lot depends on the size of the organization and the risks
involved. For large organizations with high-risk (in terms of lives or property) projects, serious
management buy-in is required and a formalized QA process is necessary. Where the risk is lower,
management and organizational buy-in and QA implementation may be a slower, step-at-a-time
process. QA processes should be balanced with productivity so as to keep bureaucracy from getting out
of hand. For small groups or projects, a more ad-hoc process may be appropriate, depending on the
type of customers and projects. A lot will depend on team leads or managers, feedback to developers,
and ensuring adequate communications among customers, managers, developers, and testers. The most
value for effort will often be in (a) requirements management processes, with a goal of clear, complete,
testable requirement specifications embodied in requirements or design documentation, or in 'agile'-type environments extensive continuous coordination with end-users, (b) design inspections and code
inspections, and (c) post-mortems/retrospectives. 157. How do companies expect defect reporting to be communicated by the tester to the development team? Can an Excel sheet template be used for defect reporting? If so, what are the common fields to be included? Who assigns
the priority and severity of the defect? To report bugs in Excel, use columns such as: S.No., Module, Screen/Section, Issue detail, Severity, Priority, and Issue status; filters can then be set on the column attributes. But most companies use a shared defect management system for reporting bugs. In this approach, when the project comes in for testing, a module-wise detail of the project is inserted into the defect management system being used. It contains the following fields: 1. Date 2. Issue brief 3. Issue description (used by the developer to reproduce the issue) 4. Issue status (active, resolved, on hold, suspended, not able to reproduce) 5. Assigned to (names of members allocated to the project) 6. Priority (high, medium, low) 7. Severity (major, medium, low)
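A minimal sketch of producing such a report sheet programmatically - Python writing a CSV file that Excel can open, with the column names following the list above (the sample defects are invented):

import csv

FIELDS = ["S.No.", "Module", "Screen/Section", "Issue detail",
          "Severity", "Priority", "Issue status"]

defects = [
    [1, "Login", "Sign-in form", "Error message overlaps the OK button",
     "Low", "Medium", "Active"],
    [2, "Checkout", "Payment page", "Total is not recalculated on retry",
     "High", "High", "Active"],
]

with open("defect_report.csv", "w", newline="") as report:
    writer = csv.writer(report)
    writer.writerow(FIELDS)      # header row; filters can be set on these columns
    writer.writerows(defects)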
158. What are the tables in testplans and testcases? Test plan is a document that contains the scope,
approach, test design, and test strategies. It includes the following: 1. Test plan identifier 2. Scope 3. Features to be tested 4. Features not to be tested 5. Test strategy 6. Test approach 7. Test deliverables 8. Responsibilities 9. Staffing and training 10. Risks and contingencies 11. Approval. A test
case is a noted/documented set of steps/activities that are carried out or executed on the software in
order to confirm its functionality/behavior to certain set of inputs. 159. What are the table contents in
testplans and test cases? Test Plan is a document which is prepared with the details of the testing
priority. A test plan generally includes: 1. Objective of testing 2. Scope of testing 3. Reason for testing 4. Timeframe 5. Environment 6. Entrance and exit criteria 7. Risk factors involved 8. Deliverables 160. What
automating testing tools are you familiar with? WinRunner, LoadRunner, QTP, Silk Performer, TestDirector, Rational Robot, QARun. 161. How did you use automating testing tools in your job? 1. For regression testing 2. As criteria to decide the condition of a particular build. Describe some problem that you had with an automating testing tool: the problem of WinRunner identifying third-party controls like Infragistics controls. 162. How do you plan test automation? 1. Prepare the automation test plan 2. Identify the scenario 3. Record the scenario 4. Enhance the scripts by inserting checkpoints and conditional loops 5. Incorporate an error handler 6. Debug the script 7. Fix the issue 8. Rerun the script and report the result. 163. Can test automation improve test effectiveness? Yes. Automating a test makes
the test process: 1. Fast 2. Reliable 3. Repeatable 4. Programmable 5. Reusable 6. Comprehensive. What is data-driven automation? Testing the functionality with more test cases becomes laborious as the functionality grows. With multiple sets of data (test cases), you can execute the test once and figure out for which data it has failed and for which data the test has passed. This feature is available in WinRunner as a data-driven test, where the data can be taken from an Excel sheet or Notepad.
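The same data-driven idea expressed outside WinRunner: a minimal sketch using pytest's parametrize (the function under test and the data rows are invented), where one test body runs once per data row and the report shows exactly which rows passed and which failed:

import pytest

def apply_discount(price, percent):
    # Invented function under test.
    return price * (100 - percent) / 100

# Each tuple is one row of test data; in practice the rows could be
# loaded from an Excel sheet or a text file instead of being inlined.
@pytest.mark.parametrize("price,percent,expected", [
    (100.0, 0, 100.0),
    (100.0, 10, 90.0),
    (200.0, 25, 150.0),
    (0.0, 25, 0.0),
])
def test_apply_discount(price, percent, expected):
    assert apply_discount(price, percent) == expected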
164. What are the main attributes of test automation? Software test automation attributes: Maintainability - the effort needed to update the test automation suites for each new release. Reliability - the accuracy and repeatability of the test automation. Flexibility - the ease of working with all the different kinds of automation testware. Efficiency - the total cost related to the effort needed for the automation. Portability - the ability of the automated test to run on different environments. Robustness - the effectiveness of automation on an unstable or rapidly changing system. Usability - the extent to which automation can be used by different types of users. 165. Does automation replace manual testing?
There can be some functionality which cannot be tested with an automated tool, so we may have to do it manually; therefore manual testing can never be fully replaced. (We can write scripts for negative testing also, but it is a hectic task.) When we talk about the real environment, we do negative testing manually. 166.
How will you choose a tool for test automation? The choice of a tool depends on many things: 1. Application to be tested 2. Test environment 3. Scope and limitations of the tool 4. Features of the tool 5. Cost of the tool 6. Whether the tool is compatible with your application, meaning the tool should be able to interact with your application 7. Ease of use 167. How will you evaluate the tool for test
automation? We need to concentrate on the features of the tools and how this could be beneficial for
our project. The additional new features and the enhancements of the features will also help. 168. What
are the main benefits of test automation? Fast, reliable, comprehensive, and reusable. 169. What could go wrong with test automation? 1. A poor choice of automation tool for certain technologies 2. The wrong set of tests automated 170. How will you describe testing activities? Testing activities start from the
elaboration phase. The various testing activities are: preparing the test plan, preparing test cases, executing the test cases, logging bugs, validating bugs and taking appropriate action for them, and automating the test cases. 171. What testing activities may you want to automate? Automate all the high-priority test
cases which need to be executed as part of regression testing for each build cycle. 172. Describe common problems of test automation. The common problems are: 1. Maintenance of the old scripts when there is a feature change or enhancement 2. A change in the technology of the application will affect the old scripts.
173. What types of scripting techniques for test automation do you know? Five types of scripting techniques: linear, structured, shared, data-driven, and keyword-driven. 174. What are principles of good testing
scripts for automation? 1. Proper coding standards 2. A standard format for defining functions, exception handlers, etc. 3. Comments for functions 4. Proper error-handling mechanisms 5. Appropriate synchronization techniques. What tools are available to support testing during the software development life cycle? For regression and load/stress testing, tools such as QTP, LoadRunner, Rational Robot, WinRunner, Silk, TestComplete, and Astra are available in the market. For defect tracking, Bugzilla and TestRunner are available. 175. Can the activities of test case design be
automated? As I know it, test case design is about formulating the steps to be carried out to verify something about the application under test, and this cannot be automated. However, the process of putting the test results into an Excel sheet can be automated. 176. What are the limitations of automating
software testing? Hard-to-create environments like out-of-memory conditions, invalid input/reply, and corrupt registry entries make applications behave poorly, and existing automated tools can't force these conditions - they simply test your application in a normal environment. 177. What skills are needed to
be a good test automator? 1. Good logic for programming 2. Analytical skills 3. A pessimistic nature. 178. How to find whether tools work well with your existing system? 1. Discuss with the support officials 2. Download the trial version of the tool and evaluate it 3. Get suggestions from people who are working on the tool 179. Describe some problems that you had with an automating testing tool. 1. The inability of WinRunner to identify third-party controls like Infragistics controls 2. A change in the location of the table object will cause an 'object not found' error 3. The inability of WinRunner to execute the script against multiple languages 180. What are the main attributes of test automation? Maintainability,
Reliability, Flexibility, Efficiency, Portability, Robustness, and Usability - these are the main attributes in
test automation. 181. What testing activities may you want to automate in a project? Testing tools can be used for: * sanity tests (which are repeated on every build) * stress/load tests (you simulate a large number of users, which is manually impossible) * regression tests (which are done after every code change) 182.
How to find whether tools work well with your existing system? To find this, select the suite of tests which are most important for your application. First run them with the automated tool. Next, subject the same tests to careful manual testing. If the results coincide, you can say your testing tool has been performing well. 183. How will you test a field that generates auto-numbers in the AUT when we click the button 'NEW' in the application? One solution is to create a text file in a certain location, update it with the auto-generated value each time we run the test, and compare the currently generated value with the previous one.
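A minimal sketch of that persisted-value comparison in Python (get_new_auto_number() is a hypothetical stand-in for clicking 'NEW' in the AUT and reading back the generated field):

import time
from pathlib import Path

STATE_FILE = Path("last_auto_number.txt")

def get_new_auto_number():
    # Hypothetical stand-in for clicking 'NEW' in the AUT and reading the
    # auto-generated field; simulated here with a timestamp-based number.
    return int(time.time() * 1000)

def check_auto_number():
    current = get_new_auto_number()
    if STATE_FILE.exists():
        previous = int(STATE_FILE.read_text())
        # The freshly generated value must advance past the last saved one.
        assert current > previous, f"{current} is not greater than {previous}"
    STATE_FILE.write_text(str(current))  # persist for the next test run

check_auto_number()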
184. How will you evaluate the fields in the application under test using an automation tool? We can use verification points (Rational Robot) to validate the fields. For example, using ObjectData and ObjectData Properties verification points we can validate fields. 185. Can we perform the test of a single application at the same time using different
tools on the same machine? No. The testing tools would be unable to determine unambiguously which browser is opened by which tool. 186. Difference between Web application Testing and Client Server Testing. State the different types of Web application testing and Client/Server testing. Which WinRunner 7.2 version is compatible with Internet Explorer and Firefox? 187. What is 'configuration management'?
Configuration management is a process to control and document any changes made during the life of a
project. Revision control, Change Control, and Release Control are important aspects of Configuration
Management. 188. How to test Web applications? The basic difference in web testing is that here we have to test for URL coverage and link coverage. Using WinRunner we can conduct web testing, but we have to make sure that the WebTest option is selected in the "Add-in Manager". Using WinRunner we cannot test XML objects. 189. What are the problems encountered while testing application compatibility on different browsers and on different operating systems? Font issues and alignment issues. 190. How does testing proceed when the SRS or any other document is not given? If the SRS is not available, we can perform exploratory testing. In exploratory testing, the basic module is executed and, depending on its results, the next plan is executed.
191. How do we test for severe memory leakages? By using endurance testing. Endurance testing means checking for memory leaks or other problems that may occur with prolonged execution.
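One way to look for such leaks from within a Python test is to compare memory snapshots across prolonged repetition using the standard library's tracemalloc; a minimal sketch, with operation_under_test() as an invented, deliberately leaky stand-in:

import tracemalloc

leaky_cache = []

def operation_under_test():
    # Invented operation; the stray append makes it leak on purpose.
    leaky_cache.append("x" * 1024)

tracemalloc.start()
before = tracemalloc.take_snapshot()
for _ in range(10_000):              # prolonged repetition, as in endurance testing
    operation_under_test()
after = tracemalloc.take_snapshot()

# Report the code locations responsible for the largest memory growth.
for stat in after.compare_to(before, "lineno")[:3]:
    print(stat)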
192. What is the difference between quality assurance and testing? Quality assurance involves the entire
software development process and testing involves operation of a system or application to evaluate the
results under certain conditions. QA is oriented to prevention and Testing is oriented to detection. 193.
Why does software have bugs? 1. Miscommunication 2. Programming errors 3. Time pressures 4. Changing requirements.
194. What are memory leaks and buffer overflows? A memory leak means incomplete deallocation - these are bugs that happen very often. A buffer overflow means data sent as input to the server overflows the boundaries of the input area, thus causing the server to misbehave. Buffer overflows can be exploited by attackers. 195.
What are the major differences between stress testing, load testing, and volume testing? Stress testing means increasing the load beyond the expected level and checking the performance at each level. Load testing means applying the expected load at one time and checking the performance at that level. Volume testing means subjecting the system to a large volume of data and checking its behavior. 196. What is Exhaustive Testing? Testing which covers all combinations of
input values and preconditions for an element of the software under test.
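A quick back-of-the-envelope sketch of why exhaustive testing is rarely practical: even a few small input domains multiply into a large test matrix. Here itertools.product enumerates every combination (the domains are invented):

from itertools import product

# Invented input domains: each field has only a handful of values.
browsers = ["Chrome", "Firefox", "IE", "Safari"]
roles = ["guest", "member", "admin"]
locales = ["en", "fr", "de", "ro", "ja"]
payment_methods = ["card", "paypal", "invoice"]

combinations = list(product(browsers, roles, locales, payment_methods))
print(len(combinations), "combinations for just four small fields")  # 180

for combo in combinations[:3]:   # every combination would need its own test
    print(combo)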
197. What is Functional Decomposition? A technique used during planning, analysis, and design; it creates a functional hierarchy for the software.
198. What is Functional Specification? A document that describes in detail the characteristics of the
product with regard to its intended features.
