
Master QA Document

By Mahesh Wagh mahehwagh29@gmail.com

“A mistake in coding is called an error; an error found by a tester is called a defect; a defect accepted by the development team is called a bug; a build that does not meet the requirements is a failure.”

WEB TESTING

Functionality:
When testing the functionality of a web site, the following should be covered:
• Links
i. Internal links
ii. External links
iii. Mail links
iv. Broken links (see the link-checker sketch after this list)
• Forms
i. Field validation
ii. Error messages for wrong input
iii. Optional and mandatory fields
• Database
* Testing will be done on database integrity.
• Cookies
* Testing will be done on the client system side, on the temporary Internet files.
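
To make the link checks above concrete, here is a minimal broken-link-checker sketch in Python. It assumes the third-party requests package is installed; the start URL is a placeholder for the site under test, and a HEAD request is one reasonable (not universal) way to detect broken links.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

import requests  # third-party; pip install requests


class LinkExtractor(HTMLParser):
    """Collects href values from <a> tags."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def check_links(page_url):
    page = requests.get(page_url, timeout=10)
    parser = LinkExtractor()
    parser.feed(page.text)
    for href in parser.links:
        if href.startswith("mailto:"):
            continue  # mail links need an SMTP-level check instead
        url = urljoin(page_url, href)  # resolves internal (relative) links too
        try:
            status = requests.head(url, allow_redirects=True, timeout=10).status_code
        except requests.RequestException:
            status = None
        if status is None or status >= 400:
            print(f"BROKEN: {url} -> {status}")


check_links("https://example.com/")  # placeholder URL
```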
Performance:
Performance testing can be applied to understand the web site’s scalability, or to benchmark the
performance in the environment of third party products such as servers and middleware for
potential purchase.
• Connection Speed:
Tested over various connection types such as dial-up, ISDN, etc.
• Load:
i. What is the number of users per unit time?
ii. Check for peak loads and how the system behaves.
iii. Large amounts of data accessed by the user.
• Stress:
i. Continuous load
ii. Performance of memory, CPU, file handling, etc.
Usability:
Usability testing is the process by which the human-computer interaction characteristics of a
system are measured, and weaknesses are identified for correction.
• Ease of learning
• Navigation
• Subjective user satisfaction
• General appearance
Server Side Interface:
In web testing the server-side interface should be tested. This is done by verifying that communication happens properly. Compatibility of the server with software, hardware, network and database should be tested.
Client Side Compatibility:
Client-side compatibility is also tested on various platforms, using various browsers, etc.



Security:
The primary reason for testing the security of a web site is to identify potential vulnerabilities and subsequently repair them.
• Network Scanning
• Vulnerability Scanning
• Password Cracking
• Log Review
• Integrity Checkers
• Virus Detection

What is the difference between client-server testing and web based testing and what
are things that we need to test in such applications?

Ans:
Projects are broadly divided into two types:
 2-tier applications
 3-tier applications
 Desktop application:
1. Application runs in single memory (Front end and Back end in one place).
2. Single user only.
 Client/Server application:
1. Application runs in two or more machines.
2. Application is menu-driven.
3. Connected mode (connection exists always until logout).
4. Limited number of users.
5. Less number of network issues when compared to web app.
 Web application:
1. Application runs in two or more machines.
2. URL-driven.
3. Disconnected mode (state less).
4. Unlimited number of users.
5. Many issues like hardware compatibility, browser compatibility, version compatibility,
security issues, performance issues etc.
 Desktop applications run on personal computers and workstations, so when you test a desktop application you are focusing on a specific environment. You will test the complete application broadly in categories like GUI, functionality, load, and backend (i.e. DB).
 In a client/server application you have two different components to test. The application is loaded on the server machine, while the application exe is installed on every client machine.
You will test broadly in categories like GUI on both sides, functionality, load, client-server interaction, and backend.
This environment is mostly used in intranet networks.
You are aware of the number of clients and servers and their locations in the test scenario.
 A web application is a bit different and more complex to test, as the tester doesn't have that much control over the application.
The application is loaded on a server whose location may or may not be known, and no exe is installed on the client machine; you have to test it on different web browsers.
Web applications are supposed to be tested on different browsers and OS platforms, so broadly a web application is tested mainly for browser compatibility and operating system compatibility, error handling, static pages, backend testing and load testing.



What is Cookie?
A cookie is a small piece of information stored in a text file on the user's hard drive by the web server. This information is later used by the web browser to retrieve information from that machine. Generally a cookie contains personalized user data or information that is used to communicate between different web pages.

1) Session cookies: this cookie is active only while the browser that created it is open. When we close the browser, the session cookie gets deleted. Sometimes a session timeout of, say, 20 minutes can be set to expire the cookie.
2) Persistent cookies: cookies that are written permanently to the user's machine and last for months or years.

 Test cases:
 1) As a cookie privacy policy, make sure from your design documents that no personal or sensitive data is stored in a cookie.
 2) If you have no option other than saving sensitive data in a cookie, make sure the data stored in the cookie is encrypted.
 3) Make sure there is no overuse of cookies on the site under test. Overuse will annoy users if the browser prompts for cookies too often, and this could result in loss of site traffic and eventually loss of business.
 4) Disable cookies from your browser settings: if your site uses cookies, major functionality will not work when cookies are disabled. Then try to access the web site under test and navigate through the site. See if appropriate messages are displayed to the user, like “For smooth functioning of this site, make sure that cookies are enabled in your browser”. There should not be any page crash due to disabled cookies. (Make sure you close all browsers and delete all previously written cookies before performing this test.)
 5) Delete cookies: allow the site to write its cookies, then close all browsers and manually delete all cookies for the web site under test. Access the web pages and check the behaviour of the pages.
 6) Cookie testing on multiple browsers: check that your web application writes cookies properly on different browsers as intended, and that the site works properly using these cookies. Test your web application on widely used browsers like Internet Explorer (various versions), Mozilla Firefox, Netscape, Opera, etc.
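
As a rough illustration of cases 1 and 2 and the session/persistent distinction above, the following Python sketch (third-party requests package; the URL is a placeholder) lists the cookies a site sets and applies a crude sensitive-data check:

```python
import requests  # third-party; pip install requests

session = requests.Session()
session.get("https://example.com/", timeout=10)  # placeholder URL

for cookie in session.cookies:
    # A session cookie carries no expiry; a persistent cookie does.
    kind = "persistent" if cookie.expires else "session"
    print(f"{cookie.name}: {kind}, secure={cookie.secure}")
    # Crude check that obviously sensitive data is not stored in clear text.
    assert "password" not in cookie.value.lower(), "sensitive data in cookie?"
```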

What is State transition testing technique?


 State transition is a dynamic testing technique, used when the system is defined in terms of a finite number of states and the transitions between the states are governed by the rules of the system.
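
A minimal sketch of the technique, using an invented ATM-style transition table (the states and events are illustrative, not from the text): every defined transition is exercised once, plus at least one invalid transition.

```python
# Hypothetical transition table: (current state, event) -> next state.
TRANSITIONS = {
    ("idle", "insert_card"): "card_inserted",
    ("card_inserted", "enter_valid_pin"): "authenticated",
    ("card_inserted", "enter_invalid_pin"): "card_inserted",
    ("authenticated", "eject_card"): "idle",
}


def next_state(state, event):
    """Undefined transitions leave the state unchanged (one possible rule)."""
    return TRANSITIONS.get((state, event), state)


# Valid-transition tests: each defined rule is exercised once.
for (state, event), expected in TRANSITIONS.items():
    assert next_state(state, event) == expected

# Invalid-transition test: an undefined event must not change the state.
assert next_state("idle", "eject_card") == "idle"
```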
Error Guessing:
 A technique where the experience of the tester is used to find errors, or the parts of the application with the highest likelihood of errors. This is a skill-based technique without formal rules.
Use case testing:
In this technique, use cases/scenarios are used to write the test cases. Interaction of users
and system is described in a use case.

Importance of Using Checklist for Testing:

- Maintaining a standard repository of reusable test cases for your application ensures that the most common bugs are caught more quickly.
- A checklist helps to quickly complete writing test cases for new versions of the application.
- Reusing test cases helps save money on resources for writing repetitive tests.
- Important test cases will always be covered, making them almost impossible to forget.
- Testing checklist can be referred by developers to ensure most common issues are fixed in
development phase itself.

Few notes to remember:


1) Execute these scenarios with different user roles e.g. admin user, guest user etc.
2) For web applications these scenarios should be tested on multiple browsers like IE, FF, Chrome,
and Safari with versions approved by client.
3) Test with different screen resolutions like 1024 x 768, 1280 x 1024, etc.
4) The application should be tested on a variety of displays like LCD, CRT, notebooks, tablets, and mobile phones.
5) Test the application on different platforms like Windows, Mac, and Linux operating systems.

Comprehensive Testing Checklist for Testing Web and Desktop Applications:

Assumptions: assuming that your application supports the following functionality


- Forms with various fields
- Child windows
- Application interacts with database
- Various search filter criteria and display results
- Image upload
- Send email functionality
- Data export functionality
General Test Scenarios
1. All mandatory fields should be validated and indicated by an asterisk (*) symbol (see the Selenium sketch after this list)
2. Validation error messages should be displayed properly at correct position
3. All error messages should be displayed in the same CSS style (e.g. using red colour)
4. General confirmation messages should be displayed using a CSS style different from the error message style (e.g. using green colour)
5. Tooltip text should be meaningful
6. Dropdown fields should have first entry as blank or text like ‘Select’
7. Delete functionality for any record on page should ask for confirmation
8. Select/deselect all records options should be provided if page supports record
add/delete/update functionality
9. Amount values should be displayed with correct currency symbols.
10. Default page sorting should be provided
11. Reset button functionality should set default values for all fields
12. All numeric values should be formatted properly
13. Input fields should be checked for max field value. Input values greater than specified max
limit should not be accepted or stored in database
14. Check all input fields for special characters
15. Field labels should be standard e.g. field accepting user’s first name should be labelled
properly as ‘First Name’
16. Check page sorting functionality after add/edit/delete operations on any record
17. Check for timeout functionality. Timeout values should be configurable. Check application
behaviour after operation timeout
18. Check cookies used in an application
19. Check if downloadable files are pointing to correct file paths
20. All resource keys should be configurable in config files or database instead of hard coding
21. Standard conventions should be followed throughout for naming resource keys



22. Validate markup for all web pages (validate HTML and CSS for syntax errors) to make sure it is
compliant with the standards
23. Application crash or unavailable pages should be redirected to error page
24. Check text on all pages for spelling and grammatical errors
25. Check numeric input fields with character input values. Proper validation message should
appear
26. Check for negative numbers if allowed for numeric fields
27. Check amount fields with decimal number values
28. Check functionality of buttons available on all pages
29. User should not be able to submit page twice by pressing submit button in quick succession.
30. Divide by zero errors should be handled for any calculations
31. Input data with leading or trailing blanks should be handled correctly
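
As an example of how scenarios 1 and 2 above might be automated, here is a hedged Selenium (Python) sketch. The URL, element IDs, CSS selector and expected message are all placeholders for the application under test, and a matching browser driver is assumed to be available.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # assumes a chromedriver is available
try:
    driver.get("https://example.com/register")  # placeholder URL

    # Scenario 1: mandatory fields should be marked with an asterisk.
    label = driver.find_element(By.ID, "first-name-label").text  # placeholder ID
    assert "*" in label, "mandatory field not marked with *"

    # Scenario 2: submit with the field empty and expect a validation message.
    driver.find_element(By.ID, "submit").click()  # placeholder ID
    error = driver.find_element(By.CSS_SELECTOR, ".field-error").text
    assert "required" in error.lower(), "validation message missing or wrong"
finally:
    driver.quit()
```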
GUI and Usability Test Scenarios
1. All fields on page (e.g. text box, radio options, dropdown lists) should be aligned properly
2. Numeric values should be right justified unless specified otherwise
3. Enough space should be provided between field labels, columns, rows, error messages etc.
4. Scroll bar should be enabled only when necessary
5. Font size, style and color for headline, description text, labels, infield data, and grid info should
be standard as specified in SRS
6. Description text box should be multi-line
7. Disabled fields should be grayed out and user should not be able to set focus on these fields
8. Upon click of any input text field, mouse arrow pointer should get changed to cursor
9. User should not be able to type in drop down select lists
10. Information filled by users should remain intact when there is error message on page submit.
User should be able to submit the form again by correcting the errors
11. Check if proper field labels are used in error messages
12. Dropdown field values should be displayed in defined sort order
13. Tab and Shift+Tab order should work properly
14. Default radio options should be pre-selected on page load
15. Field specific and page level help messages should be available
16. Check if correct fields are highlighted in case of errors
17. Check if dropdown list options are readable and not truncated due to field size limit
18. All buttons on page should be accessible by keyboard shortcuts and user should be able to
perform all operations using keyboard
19. Check all pages for broken images
20. Check all pages for broken links
21. All pages should have title
22. Confirmation messages should be displayed before performing any update or delete operation
23. Hour glass should be displayed when application is busy
24. Page text should be left justified
25. User should be able to select only one radio option and any combination for check boxes.
Test Scenarios for Filter Criteria
1. User should be able to filter results using all parameters on the page
2. Refine search functionality should load search page with all user selected search parameters
3. When at least one filter criterion is required to perform the search operation, make sure a proper error message is displayed when the user submits the page without selecting any filter criteria.
4. When filter criteria selection is not compulsory, the user should be able to submit the page and the default search criteria should be used to query results
5. Proper validation messages should be displayed for invalid values for filter criteria



Test Scenarios for Result Grid
1. A page loading symbol should be displayed when the result page takes more than the default time to load
2. Check if all search parameters are used to fetch data shown on result grid
3. Total number of results should be displayed on result grid
4. Search criteria used for searching should be displayed on result grid
5. Result grid values should be sorted by default column.
6. Sorted columns should be displayed with sorting icon
7. Result grids should include all specified columns with correct values
8. Ascending and descending sorting functionality should work for columns supported with data
sorting
9. Result grids should be displayed with proper column and row spacing
10. Pagination should be enabled when there are more results than the default result count per
page
11. Check for Next, Previous, First and Last page pagination functionality
12. Duplicate records should not be displayed in result grid
13. Check if all columns are visible and horizontal scroll bar is enabled if necessary
14. Check data for dynamic columns (columns whose values are calculated dynamically based on
the other column values)
15. For result grids showing reports check ‘Totals’ row and verify total for every column
16. For result grids showing reports check ‘Totals’ row data when pagination is enabled and user
navigates to next page
17. Check if proper symbols are used for displaying column values e.g. % symbol should be
displayed for percentage calculation
18. Check result grid data if date range is enabled
Test Scenarios for a Window
1. Check if default window size is correct
2. Check if child window size is correct
3. Check if there is any field on page with default focus (in general, the focus should be set on first
input field of the screen)
4. Check if child windows are getting closed on closing parent/opener window
5. If child window is opened, user should not be able to use or update any field on background or
parent window
6. Check window minimize, maximize and close functionality
7. Check if window is re-sizable
8. Check scroll bar functionality for parent and child windows
9. Check cancel button functionality for child windows
Database Testing Test Scenarios
1. Check if correct data is getting saved in database upon successful page submits
2. Check values for columns which are not accepting null values
3. Check for data integrity. Data should be stored in single or multiple tables based on design
4. Index names should be given as per the standards e.g. IND_<Tablename>_<ColumnName>
5. Tables should have primary key column
6. Table columns should have description information available (except for audit columns like
created date, created by etc.)
7. For every database add/update operation, a log entry should be added
8. Required table indexes should be created
9. Check if data is committed to database only when the operation is successfully completed
10. Data should be rolled back in case of failed transactions



11. Database name should be given as per the application type i.e. test, UAT, sandbox, live
(though this is not a standard it is helpful for database maintenance)
12. Database logical names should be given according to database name (again this is not
standard but helpful for DB maintenance)
13. Stored procedures should not be named with prefix “sp_”
14. Check if values for table audit columns (like createddate, createdby, updatedate, updatedby, isdeleted, deleteddate, deletedby etc.) are populated properly
15. Check if input data is not truncated while saving. Field length shown to user on page and in
database schema should be same
16. Check numeric fields with minimum, maximum, and float values
17. Check numeric fields with negative values (for both acceptance and non-acceptance)
18. Check if radio button and dropdown list options are saved correctly in database
19. Check if database fields are designed with correct data type and data length
20. Check if all table constraints like Primary key, foreign key etc. are implemented correctly
21. Test stored procedures and triggers with sample input data
22. Input field leading and trailing spaces should be truncated before committing data to database
23. Null values should not be allowed for Primary key column
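
Several of these checks (9, 10 and 23 in particular) can be demonstrated with the standard-library sqlite3 module. The schema below is a made-up example, not the application's real one.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id TEXT PRIMARY KEY NOT NULL, name TEXT NOT NULL)")

# Checks 9-10: data is committed only on success; a failed transaction
# is rolled back ("with conn" commits on success, rolls back on error).
try:
    with conn:
        conn.execute("INSERT INTO users VALUES ('u1', 'alice')")
        conn.execute("INSERT INTO users VALUES ('u1', 'bob')")  # PK violation
except sqlite3.IntegrityError:
    pass
assert conn.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 0

# Check 23: NULL must not be allowed for the primary key column.
try:
    conn.execute("INSERT INTO users (name) VALUES ('carol')")  # id is NULL
except sqlite3.IntegrityError:
    print("NULL primary key rejected, as required")
```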
Test Scenarios for Image Upload Functionality
(Also applicable for other file upload functionality)
1. Check for uploaded image path
2. Check image upload and change functionality
3. Check image upload functionality with image files of different extensions (e.g. JPEG, PNG, BMP
etc.)
4. Check image upload functionality with images having space or any other allowed special
character in file name
5. Check duplicate name image upload
6. Check image upload with image size greater than the max allowed size. Proper error message
should be displayed.
7. Check image upload functionality with file types other than images (e.g. txt, doc, pdf, exe etc.).
Proper error message should be displayed
8. Check if images of specified height and width (if defined) are accepted otherwise rejected
9. Image upload progress bar should appear for large size images
10. Check if cancel button functionality is working in between upload process
11. Check if file selection dialog shows only supported files listed
12. Check multiple images upload functionality
13. Check image quality after upload. Image quality should not be changed after upload
14. Check if user is able to use/view the uploaded images
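
A sketch of the server-side validation behind checks 3, 6 and 7. The allowed extensions, the 2 MB limit and the validate_upload helper are all assumed example values, not the application's actual rules.

```python
import os

ALLOWED_EXTENSIONS = {".jpeg", ".jpg", ".png", ".bmp"}  # assumed list
MAX_SIZE_BYTES = 2 * 1024 * 1024  # assumed 2 MB limit


def validate_upload(filename, size_bytes):
    """Returns an error message for a rejected file, or None if accepted."""
    ext = os.path.splitext(filename)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        return "Only JPEG, PNG and BMP images are allowed"
    if size_bytes > MAX_SIZE_BYTES:
        return "Image exceeds the maximum allowed size"
    return None


# Test data mirroring the scenarios: wrong type, too large, and valid.
assert validate_upload("report.exe", 100) is not None                # check 7
assert validate_upload("photo.png", MAX_SIZE_BYTES + 1) is not None  # check 6
assert validate_upload("my photo.png", 1024) is None     # check 4: space OK
```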
Test Scenarios for Sending Emails
(Test cases for composing or validating emails are not included)
(Make sure to use dummy email addresses before executing email related tests)
1. Email template should use standard CSS for all emails
2. Email addresses should be validated before sending emails
3. Special characters in email body template should be handled properly
4. Language specific characters (e.g. Russian, Chinese or German language characters) should be
handled properly in email body template
5. Email subject should not be blank
6. Placeholder fields used in the email template should be replaced with actual values e.g. {Firstname} {Lastname} should be replaced with the individual's first and last name properly for all recipients
7. If reports with dynamic values are included in email body, report data should be calculated
correctly
8. Email sender name should not be blank
9. Emails should be checked in different email clients like Outlook, Gmail, Hotmail, Yahoo! mail
etc.
10. Check send email functionality using TO, CC and BCC fields
11. Check plain text emails
12. Check HTML format emails
13. Check email header and footer for company logo, privacy policy and other links
14. Check emails with attachments
15. Check send email functionality to single, multiple or distribution list recipients
16. Check if reply to email address is correct
17. Check sending high volume of emails
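
Check 6 (placeholder replacement) is straightforward to automate. A minimal sketch with an illustrative template and recipient data:

```python
import re

template = "Dear {Firstname} {Lastname}, your report is attached."  # example
recipient = {"Firstname": "Jane", "Lastname": "Doe"}

body = template.format(**recipient)
# No unreplaced {Placeholder} token may survive in the rendered body.
assert not re.search(r"\{\w+\}", body), "unreplaced placeholder in email body"
assert "Jane Doe" in body
```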
Test Scenarios for Excel Export Functionality
1. File should get exported in proper file extension
2. File name for the exported Excel file should be as per the standards e.g. if file name is using
timestamp, it should get replaced properly with actual timestamp at the time of exporting the file
3. Check for date format if exported Excel file contains date columns
4. Check number formatting for numeric or currency values. Formatting should be same as shown
on page
5. Exported file should have columns with proper column names
6. Default page sorting should be carried in exported file as well
7. Excel file data should be formatted properly with header and footer text, date, page numbers
etc. values for all pages
8. Check if data displayed on page and exported Excel file is same
9. Check export functionality when pagination is enabled
10. Check if export button is showing proper icon according to exported file type e.g. Excel file icon
for xls files
11. Check export functionality for files with very large size
12. Check export functionality for pages containing special characters. Check if these special
characters are exported properly in Excel file
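
Checks 5 and 8 can be scripted with the third-party openpyxl package. The file name, expected headers and row data below are placeholders for the actual export.

```python
from openpyxl import load_workbook  # third-party; pip install openpyxl

EXPECTED_HEADERS = ["Order ID", "Customer", "Amount"]   # assumed columns
expected_rows = [(1, "Alice", 10.5), (2, "Bob", 20.0)]  # data shown on page

wb = load_workbook("export.xlsx")  # placeholder exported file
ws = wb.active

headers = [cell.value for cell in ws[1]]  # first row holds the column names
assert headers == EXPECTED_HEADERS, "wrong or missing column names"

rows = [tuple(r) for r in ws.iter_rows(min_row=2, values_only=True)]
assert rows == expected_rows, "exported data differs from data shown on page"
```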
Performance Testing Test Scenarios
1. Check if page load time is within acceptable range
2. Check page load on slow connections
3. Check response time for any action under light, normal, moderate and heavy load conditions
4. Check performance of database stored procedures and triggers
5. Check database query execution time
6. Check for load testing of application
7. Check for stress testing of application
8. Check CPU and memory usage under peak load condition
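
A rough sketch of a response-time check under concurrent load (checks 1 and 3). Real load tests are usually driven by a dedicated tool such as JMeter or Locust; the URL, user count and 3-second threshold here are assumptions.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party; pip install requests

URL = "https://example.com/"  # placeholder
USERS = 20  # simulated concurrent users


def timed_get(_):
    start = time.perf_counter()
    requests.get(URL, timeout=30)
    return time.perf_counter() - start


with ThreadPoolExecutor(max_workers=USERS) as pool:
    times = list(pool.map(timed_get, range(USERS)))

print(f"avg={sum(times) / len(times):.2f}s  max={max(times):.2f}s")
assert max(times) < 3.0, "response time above acceptable range (assumed 3s)"
```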
Security Testing Test Scenarios
1. Check for SQL injection attacks (see the probe sketch after this list)
2. Secure pages should use HTTPS protocol
3. Page crash should not reveal application or server info. Error page should be displayed for this
4. Escape special characters in input
5. Error messages should not reveal any sensitive information
6. All credentials should be transferred over an encrypted channel
7. Test password security and password policy enforcement
8. Check application logout functionality
9. Check for Brute Force Attacks
10. Cookie information should be stored in encrypted format only



11. Check session cookie duration and session termination after timeout or logout
12. Session tokens should be transmitted over a secured channel
13. Password should not be stored in cookies
14. Test for Denial of Service attacks
15. Test for memory leakage
16. Test unauthorized application access by manipulating variable values in browser address bar
17. Test file extension handling so that exe files are not uploaded and executed on the server
18. Sensitive fields like passwords and credit card information should not have auto complete
enabled
19. File upload functionality should use file type restrictions and also anti-virus for scanning
uploaded files
20. Check if directory listing is prohibited
21. Password and other sensitive fields should be masked while typing
22. Check if forgot password functionality is secured with features like temporary password expiry
after specified hours and security question is asked before changing or requesting new password
23. Verify CAPTCHA functionality
24. Check if important events are logged in log files
25. Check if access privileges are implemented correctly
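
A hedged sketch of check 1: send a classic injection payload to a login form and assert that it is rejected. The URL, field names and marker strings are placeholders, and such probes should only ever be run against systems you are authorised to test.

```python
import requests  # third-party; pip install requests

# Classic tautology payload; the field names are placeholders.
payload = {"username": "' OR '1'='1", "password": "x"}
resp = requests.post("https://example.com/login", data=payload, timeout=10)

# The payload must not log us in or leak database error details.
assert "Welcome" not in resp.text, "possible SQL injection: login bypassed"
assert "SQL syntax" not in resp.text, "database error details leaked"
```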
What is risk-based testing?

Risk-based testing is the term used for an approach to creating a test strategy that is based on
prioritizing tests by risk. The basis of the approach is a detailed risk analysis and prioritizing of
risks by risk level. Tests to address each risk are then specified, starting with the highest risk first.

When is decision table testing used?

Decision table testing is used for testing systems for which the specification takes the form of rules
or cause-effect combinations. In a decision table the inputs are listed in a column, with the outputs
in the same column but below the inputs. The remainder of the table explores combinations of
inputs to define the outputs produced.
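
A minimal sketch of the technique, using an invented discount rule: each column of the decision table becomes one test.

```python
def discount(is_member, order_total):
    """Hypothetical rule under test: combines two input conditions."""
    if is_member and order_total >= 100:
        return 15
    if is_member:
        return 5
    return 0


# Each tuple is one column of the decision table: inputs -> expected output.
decision_table = [
    (True, 150, 15),   # member, large order
    (True, 50, 5),     # member, small order
    (False, 150, 0),   # non-member, large order
    (False, 50, 0),    # non-member, small order
]
for is_member, total, expected in decision_table:
    assert discount(is_member, total) == expected
```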

What is Rapid Application Development (RAD)?

Rapid Application Development (RAD) is formally a parallel development of functions and subsequent integration. Components/functions are developed in parallel as if they were mini projects; the developments are time-boxed, delivered, and then assembled into a working
projects, the developments are time-boxed, delivered, and then assembled into a working
prototype. This can very quickly give the customer something to see and use and to provide
feedback regarding the delivery and their requirements. Rapid change and development of the
product is possible using this methodology. However the product specification will need to be
developed for the product at some point, and the project will need to be placed under more formal
controls prior to going into production.

What is the difference between Testing Techniques and Testing Tools?

Testing technique: a process for ensuring that some aspect of the application system or unit functions properly. There may be few techniques but many tools.



Testing tool: a vehicle for performing a test process. The tool is a resource to the tester, but by itself is insufficient to conduct testing.

What is component testing?

Component testing, also known as unit, module and program testing, searches for defects in, and
verifies the functioning of software (e.g. modules, programs, objects, classes, etc.) that are
separately testable. Component testing may be done in isolation from the rest of the system
depending on the context of the development life cycle and the system. Most often stubs and
drivers are used to replace the missing software and simulate the interface between the software
components in a simple manner. A stub is called from the software component to be tested; a
driver calls a component to be tested.


What is functional system testing?

Testing the end to end functionality of the system as a whole is defined as a functional system
testing.

What are the benefits of Independent Testing?

Independent testers are unbiased and identify different defects at the same time.

What are the different Methodologies in Agile Development Model?

There are currently seven different agile methodologies that I am aware of:
1. Extreme Programming (XP)
2. Scrum
3. Lean Software Development
4. Feature-Driven Development
5. Agile Unified Process
6. Crystal
7. Dynamic Systems Development Method (DSDM)

What is random/monkey testing? When it is used?

Random testing is often known as monkey testing. In this type of testing, data is generated randomly, often using a tool or automated mechanism. The system is tested with this randomly generated input and the results are analysed accordingly. This kind of testing is less reliable; hence it is normally used by beginners, and to see whether the system will hold up under adverse effects.
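
A toy sketch of the idea: feed randomly generated strings to a stand-in function and treat anything other than a clean rejection as a bug.

```python
import random
import string


def parse_quantity(text):
    """Hypothetical function under test."""
    return max(0, min(int(text), 1000))


random.seed(42)  # reproducible run
for _ in range(1000):
    data = "".join(random.choices(string.printable, k=random.randint(0, 10)))
    try:
        parse_quantity(data)
    except ValueError:
        pass  # rejecting bad input is fine; any other exception is a bug
```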

What are the phases of a formal review?



In contrast to informal reviews, formal reviews follow a formal process. A typical formal review
process consists of six main steps:

1. Planning
2. Kick-off
3. Preparation
4. Review meeting
5. Rework
6. Follow-up

What is the role of the moderator in the review process?

The moderator (or review leader) leads the review process. He or she determines, in co-operation with the author, the type of review, the approach and the composition of the review team. The moderator performs the entry check and the follow-up on the rework, in order to control the quality of the input and output of the review process. The moderator also schedules the meeting, disseminates documents before the meeting, coaches other team members, paces the meeting, leads possible discussions and stores the data that is collected.

What is negative and positive testing?

A negative test is when you put in an invalid input and receive errors, while positive testing is when you put in a valid input and expect some action to be completed in accordance with the specification.

What is the purpose of a test completion criterion?

The purpose of the test completion criterion is to determine when to stop testing.

What is the difference between re-testing and regression testing?

Re-testing ensures the original fault has been removed; regression testing looks for unexpected side effects.

What are the Experience-based testing techniques?

In experience-based techniques, people's knowledge, skills and background are a prime
contributor to the test conditions and test cases. The experience of both technical and business
people is important, as they bring different perspectives to the test analysis and design process.
Due to previous experience with similar systems, they may have insights into what could go
wrong, which is very useful for testing.

“How much testing is enough?”



The answer depends on the risk for your industry, contract and special requirements.

When should testing be stopped?

It depends on the risks for the system being tested. There are some criteria based on which you can stop testing:
1. Deadlines (testing, release)
2. Test budget has been depleted
3. Bug rate falls below a certain level
4. Test cases completed with a certain percentage passed
5. Alpha or beta testing period ends
6. Coverage of code, functionality or requirements is met to a specified point

What is black box testing? What are the different black box testing techniques?

Black box testing is a software testing method in which the software is tested without knowing the internal structure of the code or program. This testing is usually done to check the functionality of an application. The different black box testing techniques are:
1. Equivalence Partitioning
2. Boundary Value Analysis
3. Cause-Effect Graphing
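
Boundary value analysis, for example, pairs naturally with pytest parametrization (pytest is a third-party package; the 18-60 age range is an invented specification):

```python
import pytest  # third-party; pip install pytest


def is_valid_age(age):
    """Hypothetical rule under test: valid ages are 18-60 inclusive."""
    return 18 <= age <= 60


@pytest.mark.parametrize("age,expected", [
    (17, False), (18, True), (19, True),   # lower boundary and neighbours
    (59, True), (60, True), (61, False),   # upper boundary and neighbours
])
def test_age_boundaries(age, expected):
    assert is_valid_age(age) is expected
```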

What is test coverage?

Test coverage measures in some specific way the amount of testing performed by a set of tests
(derived in some other way, e.g. using specification-based techniques). Wherever we can count
things and can tell whether or not each of those things has been tested by some test, then we
can measure coverage.

What is a failure?

Failure is a departure from specified behaviour.

What is the purpose of test design technique?

Identifying test conditions and identifying test cases.

What is “use case testing”?

In order to identify and execute the functional requirements of an application from start to finish, a "use case" is used, and the technique used to do this is known as "use case testing".

What is the difference between STLC (Software Testing Life Cycle) and SDLC (Software
Development Life Cycle)?



The complete verification and validation of software is done in SDLC, while STLC only does validation of the system. STLC is a part of SDLC.

What is white box testing and list the types of white box testing?

White box testing involves selection of test cases based on an analysis of the internal structure (code coverage, branch coverage, path coverage, condition coverage etc.) of a component or system. It is also known as code-based testing or structural testing. Different types of white box testing are:
1. Statement Coverage
2. Decision Coverage

In white box testing what do you verify?

In white box testing the following are verified:

1. Security holes in the code
2. Incomplete or broken paths in the code
3. The flow of structure according to the document specification
4. The expected outputs
5. All conditional loops in the code, to check the complete functionality of the application
6. Line-by-line coding, covering 100% of the code

What is the difference between static and dynamic testing?

 Static testing: in static testing the code is not executed; testing is performed using the software documentation.

 Dynamic testing: To perform this testing the code is required to be in an executable form.

What are the contents of a test plan?

Test design, scope, test strategy and approach are various details that the test plan document consists of:

 Test case identifier
 Scope
 Features to be tested
 Features not to be tested
 Test strategy & test approach
 Test deliverables
 Responsibilities
 Staffing and training
 Risk and contingencies



What is the difference between UAT (User Acceptance Testing) and System testing?

System Testing: system testing is finding defects when the system undergoes testing as a whole; it is also known as end-to-end testing. In this type of testing, the application is exercised from beginning to end.
UAT: User Acceptance Testing (UAT) involves running a product through a series of specific tests to determine whether it meets the end users' needs.

Q. What is Risk Based Testing?


Ans. Identifying the critical functionality in the system, then deciding the order in which these functionalities are to be tested, and applying testing accordingly.

Q. What is Early Testing?


Ans. Conducting testing as soon as possible in the development life cycle, to find defects at early stages of the SDLC.
Early testing helps reduce the cost of fixing defects found at later stages of the STLC.

Q. What is Exhaustive Testing?

Ans. Testing functionality with all valid, invalid inputs and preconditions is called exhaustive
testing.
Q. What is Defect Clustering?

Ans. A small module or functionality may contain a large number of defects; concentrate more testing on such functionality.
Q. What is Positive Testing?

Ans. Testing conducted on the application to determine whether the system works. Basically known as the “test to pass” approach.
Q. What is Negative Testing?

Ans. Testing software with a negative approach, to check that the system is neither “showing an error when not supposed to” nor “not showing an error when supposed to”.

Q. What is End-to-End Testing?


Ans. Testing the overall functionality of the system, including the data integration among all the modules, is called end-to-end testing.

Q. What is Exploratory Testing?


Ans. Exploring the application, understanding the functionality, adding (or) modifying existing test
cases for better testing is called exploratory testing.

Q. What is Non-functional Testing?


Ans. Validating various non-functional aspects of the system, such as user interfaces, user friendliness, security, compatibility, load, stress and performance, is called non-functional testing.

Q. What is Performance Testing?


Ans. The process of measuring various efficiency characteristics of a system, such as response time, throughput, load, stress, transactions per minute and transaction mix.



Q. What is Load Testing?
Ans. Analyzing the functional and performance behaviour of the application under various load conditions is called load testing.

Q. What is Stress Testing?


Ans. Checking the application behaviour under stress conditions.
(or)
Reducing the system resources, keeping the load constant, and checking how the application behaves is called stress testing.

Q. What is Process?
Ans. A process is a set of practices performed to achieve a given purpose; it may include tools, methods, materials and/or people.

Q. What is Software Configuration Management?


Ans. The process of identifying, organizing and controlling changes to software during development and maintenance.
(or)
A methodology to control and manage a software development project.

Q. What is Test Plan?


Ans. A document describing the scope, approach, resources, and schedule of testing activities. It identifies test items, features to be tested, testing tasks, who will do each task, and any risks requiring contingency planning.

Q. What is Test Scenario?


Ans. Identifying all the possible areas to be tested (or) what is to be tested.

Q. What is a Defect?
Ans. Any flaw or imperfection in a software work product.
(or)
The expected result does not match the application's actual result.

Q. What is Severity?
Ans. It defines the importance of a defect from a functional point of view, i.e. how critical the defect is with respect to the application.

Q. What is Priority?
Ans. It indicates the importance or urgency of fixing a defect.

Q. What is Test Case?


Ans. A test case is a set of preconditions and steps to be followed, with input data and expected behaviour, to validate a functionality of a system.

Q. What is Business Validation Test Case?


Ans. A test case prepared to check a business condition or business requirement is called a business validation test case.

Q. What is a Good Test Case?


Ans. Test cases that have a high probability of catching defects are called good test cases.



Q. What is Use Case Testing?
Ans. Validating software to confirm whether or not it is developed as per the use cases is called use case testing.

Q. What is Defect Age?


Ans. The time gap between date of detection & date of closure of a defect.

Q. What is Showstopper Defect?


Ans. A defect which does not permit further testing to continue is called a showstopper defect.

Q. What is Test Closure?


Ans. It is the last phase of the STLC, where management prepares various test summary reports that explain the complete statistics of the project based on the testing carried out.
Q. What is Bucket Testing?
Ans. Bucket testing is also known as A/B testing. It is mostly used to study the impact of various product designs on website metrics. Two simultaneous versions are run on a single web page or set of web pages to measure the difference in click rates, interface and traffic.

Q. What is Scalability Testing?


Ans. It is used to check whether the functionality and performance of a system can meet volume and size changes as per the requirements.
Scalability testing is done using load tests while changing various software and hardware configurations and the testing environment.

Q. What is Fuzz Testing?


Ans. Fuzz testing is a black box testing technique which uses random bad data to attack a program, to check if anything breaks in the application.
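
A minimal sketch of the idea, using the standard library's JSON parser as a stand-in for the program under attack:

```python
import json
import os

for _ in range(500):
    data = os.urandom(20)  # random bad data
    try:
        json.loads(data.decode("utf-8", errors="replace"))
    except json.JSONDecodeError:
        pass  # a clean rejection is the expected outcome for garbage input
    # Any other exception escaping this loop would indicate a robustness bug.
```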

Q. What is Difference between QA, QC and testing?


Ans. QA: it is process oriented; the aim is to prevent defects in an application.
QC: a set of activities used to evaluate a developed work product; it is product oriented.
Testing: executing and verifying an application with the intention of finding defects.


What is incremental testing?


Incremental testing is partial testing of an incomplete product. The goal of incremental testing is to provide early feedback to software developers.

What are the advantages of waterfall model?

The various advantages of the waterfall model include: it is a linear model; it is a segmental model; it is systematic and sequential; it is simple; and it has proper documentation.



What is RAD?

The RAD (Rapid Application Development Model) model is proposed when requirements and
solutions can be modularized as independent system or software components, each of which can
be developed by different teams. After these smaller system components are developed, they are
integrated to produce the large software system solution.

What is risk analysis?


Answer: Risk analysis is, in simpler words, the analysis of things that may go wrong and coming
up with preventive measures. For example, an important team member falling sick just before the
delivery would pose a bigger risk to the delivery. One possible way to mitigate this risk would be
to prepare team members to handle each other's responsibilities so that the missing team
member's work can be shared.

What is a "Good Tester"?


Answer: There are several qualities that define a good tester:
1. Natural ability to find bugs in a given system
2. Strong logical and analytical skills
3. Good team player
4. Flair for learning new skills, tools and domains and implementing them
5. Following time schedules

What is the difference between Two Tier Architecture and Three Tier Architecture?

In Two Tier (Client/Server) Architecture, two layers are involved: Client and Server. The client sends a request to the server and the server responds by fetching the data. The problem with the two-tier architecture is that the server cannot respond to multiple requests at the same time, which causes data integrity issues.
Client/server testing covers this two-tier architecture, with the user interface at the front end and the database as the back end, with dependencies on client, hardware and servers.

In Three Tier (Multi-Tier) Architecture, three layers are involved: Client, Server and Database. The client sends a request to the server, the server sends the request to the database, and based on that request the database sends the data back to the server, from where it is forwarded to the client.

Web application testing covers this three-tier architecture, including user interface, functionality, performance, compatibility, security and database testing.

What is Defect Leakage?

Defect leakage occurs at the customer or end-user side after the application delivery. If, after the release of the application to the client, the end user finds any defects while using that application, it is called defect leakage. Defect leakage is also called bug leakage.

What are two types of Metrics?

1. Process metrics: primary metrics are also called process metrics. This is the metric that Six Sigma practitioners care about and can influence. Primary metrics are almost always a direct output characteristic of a process; they are a measure of a process and not a measure of a high-level business objective. Primary process metrics are usually process defects, process cycle time and process consumption.
2. Product metrics: Product metrics quantitatively characterize some aspect of the structure of a
software product, such as a requirements specification, a design, or source code.



How would you ensure 100% coverage of testing?

We cannot perform 100% testing on any application. But the criteria to ensure test completion on a project
are:

1. All the test cases are executed with a certain percentage passed
2. Bug rate falls below a certain level
3. Test budget is depleted
4. Deadlines are reached (project or test)
5. All the functionalities are covered by test cases
6. All critical & high bugs have a status of CLOSED

What is UML and how to use it for testing?

The Unified Modelling Language is a third-generation method for specifying, visualizing, and documenting the artefacts of an object-oriented system under development. From the inside, the Unified Modelling Language consists of three things:

1. A formal metamodel
2. A graphical notation
3. A set of idioms of usage

What are the components of an SRS?


An SRS contains the following basic components:
Introduction
Overall Description
External Interface Requirements
System Requirements
System Features

What is Endurance Testing?


Checks for memory leaks or other problems that may occur with prolonged execution.

What is Exhaustive Testing?


Testing which covers all combinations of input values and preconditions for an element of the
software under test.

What is Release Candidate?


A pre-release version, which contains the desired functionality of the final version, but which needs
to be tested for bugs (which ideally should be removed before the final version is released)

What is a proxy server?


A proxy server is a server, which behaves like an intermediary between the client and the main
server. Therefore, the requests onto the main server are first sent to the proxy server from the
client system, which are then forwarded to the main server. The response from the main server is
sent to the client through the proxy server itself. The request and/or the response may be
modified by the proxy server depending on the filtering rules of the server.
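
For testers, routing traffic through a proxy is often a one-line configuration. A sketch with the third-party requests package (the proxy address is a placeholder for your intermediary server):

```python
import requests  # third-party; pip install requests

proxies = {
    "http": "http://proxy.example.com:8080",   # placeholder proxy address
    "https": "http://proxy.example.com:8080",
}
resp = requests.get("https://example.com/", proxies=proxies, timeout=10)
print(resp.status_code)  # the response is relayed back through the proxy
```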

Bug:
An error found in the development environment before the product is shipped to the customer. Simply put, a bug is an error found BEFORE the application goes into production.

Defect:
A defect is the difference between the expected and actual result in the context of testing; it is a deviation from the customer requirement. Simply, a defect can be defined as a variance between expected and actual. A defect is an error found AFTER the application goes into production.

Error:



This is caused by human actions, e.g. the code does not follow the standard, there is a mistake in syntax, a mistake in the invocation of a variable, or the database connectivity code is faulty.

Failure:
Failures are caused by the environment, or sometimes by mishandling of the product. Suppose we are using a compass just beside a current-carrying wire; it will not show the correct direction, so we are not getting the right information from the product.

Fault:
An incorrect step, process, or data definition in a computer program which causes the program to
perform in an unintended or unanticipated manner. See: bug, defect, error, exception.

Difference between Volume, Load and stress testing in software

Volume Testing = large amounts of data
Load Testing = large numbers of users
Stress Testing = too many users, too much data, too little time and too little room
