
TESTING

Testing: Testing is the process of critically evaluating software to find flaws, fix them, and determine its current state of readiness for release.

Q:Why Testing?
A: - To unearth and correct defects.
- To detect defects early and to reduce cost of defect fixing.
- To ensure that the product works as the user expected it to.
- To avoid the user detecting problems.

Q. What is a test strategy document? What does it contain, and is it an organisation-level document or a project-level document?
A. A test strategy document provides a high-level description of how the product is to be tested, i.e. it defines the approach to be followed to complete testing and the testing types to be used.
A test strategy is an organisation-level document, and it forms part of the test plan.
Various phases are:
• System design phase
• System testing
• Regression testing
Test Case :A test case is a document that describes an input, action, or event and an
expected response, to determine if a feature of an application is working correctly. A test
case should contain particulars such as test case identifier, test case name, objective,
test conditions/setup, input data requirements, steps, and expected results.

Characteristics of a Test Case: A good test case should have the following characteristics:
- A TC should start with "what you are testing".
- A TC should be independent.
- A TC should not contain "if" statements.
- A TC should be uniform.
- A TC should be maintainable, repeatable, traceable, efficient, and executable by other testers.

Test Scenario :A set of test cases that ensure that the business process flows are
tested from end to end. They may be independent tests or a series of tests that follow
each other, each dependent on the output of the previous one. The terms "test scenario"
and "test case" are often used synonymously.

Test Plan :A software project test plan is a document that describes the objectives,
scope, approach, and focus of a software testing effort

Contents of Test Plan


1.Introduction
2.Assumptions and Constraints
3.Test Items
4.Features to be Tested
5.Features Not to be Tested
6.Approach
7.Entry/Exit Criteria
8.Suspension Criteria and Resumption Requirements
9.Test Deliverables
10.Testing Tasks
11.Test Configuration Items
12.Team Structure
13.Responsibilities
14.Training Needs
15.Schedule
16.Risks and Contingencies
17.Defect Reporting and Management

Test Script: A test script is the combination of a test case, test procedure, and test data. Originally the term referred to the work product created by automated regression test tools; today, test scripts can be manual, automated, or a combination of both.
Test suite :The most common term for a collection of test cases is a test suite.

SDLC :The Systems Development Life Cycle (SDLC), or Software Development Life
Cycle in systems engineering and software engineering, is the process of creating or
altering systems, and the models and methodologies that people use to develop these
systems. The concept generally refers to computer or information systems.

Traceability Matrix: A traceability matrix is a document that maps requirements to test cases. By preparing a traceability matrix, we can ensure that all functionalities are covered by our test cases.
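As a minimal sketch, a traceability matrix can be kept as a mapping from requirements to test cases and scanned for coverage gaps (the REQ/TC identifiers below are invented for illustration):

```python
# Hypothetical requirement-to-test-case mapping; the IDs are made up.
traceability = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],  # no test case maps here yet
}

# Requirements with no mapped test cases are coverage gaps.
uncovered = [req for req, cases in traceability.items() if not cases]
print(uncovered)  # ['REQ-003']
```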

Q. What are a Test Bed, Test Data and a Test Harness?
A. A test bed is the environment where the testing is supposed to be done; setting up the hardware and software requirements before starting testing is known as setting up the test bed.
Test data is the set of values (often multiple sets) used to test the same functionality of a particular feature. All the test values and changeable environmental components are collected in separate files and stored as test data.
A test harness, or automated test framework, is a collection of software and test data configured to test a program unit by running it.
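A test harness can be sketched as a small loop that feeds stored test data into a unit under test and compares actual against expected results (the `add` unit and the data rows are toy examples, not from the original text):

```python
def add(a, b):
    """Toy unit under test."""
    return a + b

# Test data kept separate from the harness, as the text describes.
test_data = [
    ((2, 3), 5),
    ((-1, 1), 0),
    ((0, 0), 0),
]

def run_harness(unit, rows):
    """Run the unit against every data row and record pass/fail."""
    return [(args, expected, unit(*args) == expected) for args, expected in rows]

results = run_harness(add, test_data)
print(all(passed for *_, passed in results))  # True
```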

Error: A deviation that occurs in coding is known as an error. It is at the developer's level.
Defect: A deviation that occurs in the outcome of a system. It is at the customer/end-user level.
Bug: A deviation between the expected and actual result. It is at the tester's level.

STLC: Although variations exist between organizations, there is a typical cycle for
testing:

• Requirements analysis: Testing should begin in the requirements phase of the


software development life cycle. During the design phase, testers work with
developers in determining what aspects of a design are testable and with what
parameters those tests work.
• Test planning: Test strategy, test plan, testbed creation. Since many activities
will be carried out during testing, a plan is needed.
• Test development: Test procedures, test scenarios, test cases, test datasets,
test scripts to use in testing software.
• Test execution: Testers execute the software based on the plans and tests and
report any errors found to the development team.
• Test reporting: Once testing is completed, testers generate metrics and make
final reports on their test effort and whether or not the software tested is ready
for release.
• Test result analysis: Or Defect Analysis, is done by the development team
usually along with the client, in order to decide what defects should be treated,
fixed, rejected (i.e. found software working properly) or deferred to be dealt with
later.
• Defect Retesting: Once a defect has been dealt with by the development team,
it is retested by the testing team.
• Regression testing: It is common to have a small test program built of a subset
of tests, for each integration of new, modified, or fixed software, in order to
ensure that the latest delivery has not ruined anything, and that the software
product as a whole is still working correctly.
• Test Closure: Once the test meets the exit criteria, the activities such as
capturing the key outputs, lessons learned, results, logs, documents related to the
project are archived and used as a reference for future projects.
Resumption Criteria - If testing is suspended, resumption will occur only when the problems that caused the suspension have been resolved. When a critical defect is the cause of the suspension, the fix must be verified by the test department before testing is resumed.
Assumptions - Certain things in a project will change very frequently, so you have to assume many things. They should all be mentioned in this section.

Functional vs non-functional testing:-


Functional testing :- In this type of testing, the software is tested for the functional
requirements. Functional testing refers to tests that verify a specific action or function of
the code.

Non-functional testing :- Non-functional testing refers to aspects of the software, such as scalability or security, that may not be related to a specific function or user action. Non-functional testing tends to answer questions such as "how many people can log in at once?" or "how easy is it to hack this software?"

Grey Box Testing :- Grey box testing is a software testing technique that uses a combination of black box testing and white box testing.
It is mostly carried out by the programmer who created the software or application under test, but it can also be used by testers who know the internal workings or algorithm of the application under test and can write tests specifically for the anticipated results.
Q: How many software testing techniques are there?
A: In total there are 3 types of testing techniques:
1.Review Techniques
2.Black box Techniques
3.White box Techniques

Note: 1. The combination of white box and black box techniques is called grey box, validation, or dynamic testing techniques.
2. Review techniques are also called verification techniques or static testing techniques.

Review Techniques: The techniques used to verify the documents that will be useful in
testing are called Review techniques or verification techniques or Static testing
techniques. There are 3 types in this. They are
1.Walk Through
2.Inspections
3.Peer Reviews

Black box Techniques: There are 5 types in this as well


1.Boundary Value Analysis (BVA)
2.Equivalence Class Partitions (ECP)
3.Decision Tables (DT)
4.State Transition Diagrams (STD)
5.Error Guessing

Equivalence Class Partitioning: Equivalence partitioning is a black box technique for writing test cases.
Equivalence partitioning is the process of reducing the huge (effectively infinite) set of possible inputs into a much smaller set that is still equally effective.
In equivalence partitioning:
1) Range :- one valid, two invalid
2) Specific set :- one valid, one invalid
3) Number :- one valid, two invalid
Ex: If we want to check a textbox object that accepts the characters a to z:
In the +ve condition we test the object by giving alphabets, i.e. a-z characters only; the object will accept the value, so it passes.
In the -ve condition we check the object by giving anything other than the alphabets a-z, i.e. A-Z, 0-9, blank, etc.
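The a-z textbox example can be sketched with one representative value per partition instead of every possible input (the `accepts` validator below is a hypothetical stand-in for the textbox rule):

```python
import string

def accepts(value):
    """Hypothetical textbox rule: non-blank, lowercase a-z characters only."""
    return bool(value) and all(c in string.ascii_lowercase for c in value)

# One representative per equivalence class instead of the infinite input set.
valid_class = "abc"                   # a-z only -> expected to pass
invalid_classes = ["ABC", "123", ""]  # uppercase, digits, blank -> expected to fail

print(accepts(valid_class))                   # True
print([accepts(v) for v in invalid_classes])  # [False, False, False]
```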
Boundary Value Analysis: Generates test cases for the boundary values.
Ex: If we want to check a textbox object that accepts 4 to 10 alphabetic characters, we check the object by giving the boundary values min, min-1, min+1, max, max-1, max+1, i.e. 4, 3, 5, 10, 9, 11:
Min = 4: Pass
Min-1 = 3: Fail
Min+1 = 5: Pass
Max = 10: Pass
Max-1 = 9: Pass
Max+1 = 11: Fail
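The 4-to-10-character example can be checked over exactly the six boundary lengths listed above (the `accepts` rule is again a hypothetical stand-in for the textbox):

```python
def accepts(value, min_len=4, max_len=10):
    """Hypothetical textbox rule: 4 to 10 alphabetic characters."""
    return value.isalpha() and min_len <= len(value) <= max_len

# min-1, min, min+1, max-1, max, max+1 -> lengths 3, 4, 5, 9, 10, 11
outcomes = {n: accepts("a" * n) for n in (3, 4, 5, 9, 10, 11)}
print(outcomes)  # {3: False, 4: True, 5: True, 9: True, 10: True, 11: False}
```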
Decision Tables: If a test relates to an operation with alternative expectations, testers use the decision table (DT) technique.
For example: if the login user name and password are correct, the next window will appear; if either the user name or the password is wrong, an error window should open. So here we have 2 alternatives: the next window and the error window.
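The login example maps directly onto a table of condition combinations and expected outcomes (a sketch; the window names are illustrative):

```python
def login_outcome(user_ok, password_ok):
    """Rule from the text: both correct -> next window, otherwise error window."""
    return "next window" if user_ok and password_ok else "error window"

# Each row: (user name correct?, password correct?, expected result)
decision_table = [
    (True,  True,  "next window"),
    (True,  False, "error window"),
    (False, True,  "error window"),
    (False, False, "error window"),
]

print(all(login_outcome(u, p) == expected
          for u, p, expected in decision_table))  # True
```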
State Transition Diagrams (STD): If a test relates to an operation with no alternative expectations, testers use state transition diagrams.
Error Guessing: This is purely based on the previous experience and judgment of the tester. Error guessing is the art of guessing where errors may be hidden. There are no specific tools for this technique; the tester writes the test cases that cover all the application paths.

White box Techniques: There are 4 types in this


1.Basic path Coverage
2.Control Structure Coverage
3.Programs Technique Coverage
4.Mutation Coverage

1.Basic Path Coverage: It is a program testing technique.
Using this technique, the programmer validates the execution of the program without any runtime errors. In this validation, the programmer runs the program more than once to cover the whole area of the program.
The number of times the programmer must run the program to cover the whole program is called the cyclomatic complexity.
For example: if you execute an if-else program, you need to execute it two times to cover each and every line in the program, so the cyclomatic complexity of that program is 2, i.e. cyclomatic complexity = 2.
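The if-else example can be sketched as follows: one decision point gives two independent paths (cyclomatic complexity 2), so two executions cover every line (the `classify` function is invented for illustration):

```python
def classify(n):
    """One if-else -> two independent paths -> cyclomatic complexity of 2."""
    if n >= 0:
        return "non-negative"
    else:
        return "negative"

# Two runs, one per path, cover each and every line of the program.
print(classify(5))   # non-negative
print(classify(-5))  # negative
```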
2.Control Structure Coverage: After completion of basic path coverage, the programmer validates the program's correctness in terms of inputs, process and outputs.
3.Program Technique Coverage: After completion of control structure coverage, the programmer measures the execution speed of the program. If the execution speed is not acceptable, the programmer makes changes to the program structure without disturbing its functionality.
For example: take the swapping of 2 numbers. This can be done using 3 variables or using only 2 variables.
4.Mutation Coverage: Mutation means change. Programmers make changes to a tested program to estimate the correctness and completeness of the testing of that program.
Note: White box testing techniques are also known as clear box or open box testing techniques.

Q:Software Integration and Integration Testing Techniques?


A: After completing the writing and unit testing of the related programs, the programmers interconnect those programs to form the software.
There are 4 approaches to integrating programs:

1.Top-Down Approach: In this approach the programmers interconnect the main program and some sub-programs. In place of the remaining under-construction sub-programs, the programmers use temporary programs called stubs.
2.Bottom-Up Approach: In this approach the programmers interconnect sub-programs without the main program. In place of the under-construction main program, the programmers use a temporary program called a driver.
3.Hybrid Approach: A combination of the top-down and bottom-up approaches, also known as the sandwich approach. In place of the under-construction sub-programs and main program, the programmers use temporary programs called stubs and drivers.
4.System Approach: In this approach the programmers integrate the programs only after completion of 100% of the coding. This approach is also called the Big Bang approach.
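Stubs and drivers can be sketched with a toy pricing module (none of these names come from the original text): a stub stands in for an unfinished sub-program in top-down integration, while a driver is a throwaway caller that exercises a finished sub-program in bottom-up integration:

```python
def tax_stub(amount):
    """Stub: temporary stand-in for the unfinished tax sub-program."""
    return 0.0  # canned response

def total_price(amount, tax_fn=tax_stub):
    """Main program logic, integrated top-down against the stub."""
    return amount + tax_fn(amount)

def driver():
    """Driver: temporary 'main' used bottom-up to exercise total_price."""
    return total_price(100.0)

print(driver())  # 100.0
```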

Q: Advantages and Disadvantages of Black Box Testing?


A.Advantages of Black Box Testing
- Tester can be non-technical.
- Used to verify contradictions in actual system and the specifications.
- Test cases can be designed as soon as the functional specifications are complete

Disadvantages of Black Box Testing


- The test inputs need to be drawn from a large sample space.
- It is difficult to identify all possible inputs in limited testing time, so writing test cases is slow and difficult.
- There is a chance of leaving paths unidentified during this testing.

Q:Difference Between Project (Application) and Product?


A: Software Project: Software developed to meet a specific customer's requirements is called an application or project.
Examples: any application that will be used only within one organization, i.e. software developed for use within the organization only; they won't share it with the outside world.
Software Product: Software developed to meet the overall requirements of the market is called a software product.
Examples: SQL Server, Visual Studio, Adobe Photoshop, Google search engine, Gmail, etc., which are accessible to the outside world.

Q.High severity and low priority Bug ?


A. High severity and low priority bug: the application has a critical problem, but it only has to be solved after a month.
Examples: 1. An application that crashes only after heavy repeated use of some functionality (e.g. the Save button is used 200 times and then the application crashes).
2. A crash in some module that is to be delivered later.

Q:Low severity and high priority Bug ?


A: The application has a trivial (low-impact) problem, but it has to be solved within a day.
Examples: 1. The home page of a web application has the client's name misspelled, found at the time of delivering the product.
2. On a web site, say Yahoo, the site logo "Yahoo" is spelled "Yho". Because it affects the name of the site, it is important to fix quickly, hence high priority; but it is not going to crash anything because of a spelling change, so severity is low.

Q.Explain Risk,Risk Mitigation and Risk Contingency Plan?


A. Risk is defined in ISO 31000 as the effect of uncertainty on objectives (whether positive or negative).
Risk mitigation, or risk reduction, involves the employment of methods that reduce the probability of a risk occurring, or reduce the severity of the impact of a risk on the outcome of the project. [mitigate = make less severe]
A contingency plan is nothing more than a plan to solve a problem that may occur, but has not occurred yet. [contingency = a possible event]
Q: What are the important components of risk?
A: The important components of risk are a) the probability the risk will occur, b) the impact of the risk, and c) the frequency of occurrence.

Q:What are the contents of Risk management Plan? Have you ever prepared a
Risk Management Plan ?
A: In general it consists of the types of risks associated with the project and the mitigation for each; it also records severity. In general the RMP is created by the PM and updated by module leads when needed.

Q.What is Quality Assurance & Quality Control?


A:Quality assurance :A set of activities designed to ensure that the development
and/or maintenance process is adequate to ensure a system will meet its objectives.
QA is oriented towards Prevention of defects through verification.
QA is process oriented.
Ex: Code peer reviews, walkthroughs and code inspections are examples
Quality control : A set of activities designed to evaluate a developed working Product.
QC is oriented towards Detection of defects through validation.
QC is product oriented.
Ex: Testing is QC Activity

Q. What is Verification & Validation?


A: Verification :- Also called in-process testing and Quality Assurance.
It involves reviews and meetings to evaluate documents, plans, code, requirements and specifications. It checks whether we are building the product right.
Validation :- Typically involves actual testing and takes place after verifications are complete. It checks whether we are building the right product. Also called end-process testing and Quality Control.

Q.What is an Audit?
A.An audit is a careful examination with the intent to verify the accuracy of something.
When testers review test cases they are auditing one another's work.

Q:What are SQA Activities?


A: SQA activities - suggesting and reviewing the process documents.
Example - reviewing the project management plan, etc.

Q. What is the difference between client-server, desktop and web testing?


A: A desktop application runs on personal computers and workstations, so when you test a desktop application you are focusing on a specific environment. You test the complete application broadly in categories like GUI, functionality, load, and backend (i.e. DB).
In a client-server application you have two different components to test. The application is loaded on the server machine, while the application exe is installed on every client machine. You test broadly in categories like GUI on both sides, functionality, load, client-server interaction, and backend. This environment is mostly used in intranet networks, where you are aware of the number of clients and servers and their locations in the test scenario.
A web application is a bit different and more complex to test, as the tester doesn't have that much control over the application. The application is loaded on a server whose location may or may not be known, and no exe is installed on the client machine, so you have to test it on different web browsers. Web applications are supposed to be tested on different browsers and OS platforms, so broadly a web application is tested mainly for browser compatibility and operating system compatibility, error handling, static pages, backend testing and load testing.

Q.After completing testing, what would you deliver to the client?


A. If sending the build to the client for the first time:
- Deployment guide (build)
- Release notes
- Test scripts (if the client gave any)
If sending the build from the 2nd time onwards, also:
- Bug status (if the client logged any bugs)

Q: What would you send in a Test Report?
A: 1.Bug report (number of bugs logged, reopened, verified)
2.Test case results (number of test cases executed, passed and failed)
3.CR implementation
4.Non-functional results (performance testing, load testing, etc.)
Q:Difference between Authorization and Authentication
A: Authentication is the process of identifying an individual, usually based on a username and password. Authentication is commonly done through the use of logon passwords, smart cards, retina scans, voice recognition, or fingerprints.
Ex: Authentication is equivalent to showing your driver's license at the ticket counter at the airport.
Authorization is the process of giving individuals access to system objects based on their identity. In multi-user computer systems, a system administrator defines which users are allowed access to the system and with what privileges of use.
Ex: Authorization is equivalent to checking the guest list at an exclusive party, or checking for your ticket when you go to the opera.

Q:What is an Estimate ?
A:An estimate is a forecast or prediction.

Q:What are the Estimation Techniques?


A:Estimation Techniques:
1.Work Breakdown Structure (WBS)
2.Test Point Analysis
3.Function Point Analysis
4. Use Case Based Estimation

Q: How do you calculate the estimation for your test scenarios?


A: Testing estimation depends upon the lines of code.
Suppose 10 LOC (lines of code) = 1 FP (function point); then 1000 LOC = 100 FP.
FP × 3 techniques (BVA, EP, error guessing) = test cases, so 100 FP × 3 = 300 TC.
We can estimate 30 test cases per day, which means 300 TC / 30 = 10 days for writing the test cases.
Test plan = 1/2 of the test case preparation time = 5 days.
Test case execution = 1.5 × the test case preparation time = 15 days.
Buffer time = 20 days.
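The arithmetic above can be checked with a short calculation (the ratios are the rules of thumb stated in the answer, not universal constants):

```python
loc = 1000
fp = loc // 10                       # 10 LOC = 1 FP         -> 100 FP
test_cases = fp * 3                  # 3 techniques per FP   -> 300 TC
writing_days = test_cases // 30      # 30 TC written per day -> 10 days
plan_days = writing_days / 2         # plan = 1/2 of writing -> 5.0 days
execution_days = writing_days * 1.5  # execution = 1.5x      -> 15.0 days

print(fp, test_cases, writing_days, plan_days, execution_days)
# 100 300 10 5.0 15.0
```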

Q: What metrics are used to measure the size of the software?


A: There are 4 ways in which you can measure program size:
1) Lines of code, i.e. LOC
2) Function points
3) McCabe's complexity metric, which is the number of decisions (+1) in a program
4) Halstead's metrics, which are used to calculate program length

Q:What do we do when there is a known bug at the time of software release?


A: If there is any open bug at the time of release, we mention it in the release notes which we send with the release, in the list of open issues. If the client sees that we have mentioned the issue in the release notes, he will not be worried about the type of testing done on the application; but if he finds that bug himself, it will hurt our company's image.

Q:What is the difference in writing the test cases for Integration testing and
system testing?
A: Mostly, integration testing is not done by the test engineer.
Integration testing test cases include a partial amount of the conditions used for system testing.
System testing = functionality testing + integration + unit testing (a complete round of testing)

Q: What is defect leakage?


A: Any defect that we could not find in the system test environment, and that therefore escapes to later stages or to the customer, is called defect leakage.

Q.Explain Defect Density,Defect Age and Build Interval Period?


A: Defect density = total number of defects / size of the project. The size of the project can be measured in function points, feature points, use cases, KLOC, etc.

Defect Age: the period between the date a defect is found and the date it is closed is the defect age.

Build Interval Period: if we find a bug in version 1.0 and send it to the development team, they solve it and release a new version 2.0; the time gap between the version releases is called the build interval.
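Defect density and defect age can be sketched with invented numbers (the 45 defects, 15 KLOC, and the dates are purely illustrative):

```python
from datetime import date

# Defect density = total defects / project size (size measured in KLOC here).
defects = 45
size_kloc = 15
defect_density = defects / size_kloc      # 3.0 defects per KLOC

# Defect age = closing date minus finding date.
found, closed = date(2024, 3, 1), date(2024, 3, 11)
defect_age_days = (closed - found).days   # 10

print(defect_density, defect_age_days)  # 3.0 10
```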

Q: What is defect removal efficiency?
A: The DRE is the percentage of defects that have been removed during an activity, computed with the equation below. The DRE can also be computed for each software development activity, or for a specific task or technique (e.g. design inspection, code walkthrough, unit test, 6-month operation, etc.).

DRE = (Number of Defects Removed / Number of Defects at Start of Process) × 100

DRE = A / (A + B), e.g. 0.8 (where A and B are defects logged by the testing team and the customer respectively)
If DRE >= 0.8 the product is considered good; otherwise not.
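With illustrative counts (80 defects caught by the testing team, 20 escaping to the customer), the A / (A + B) form gives exactly the 0.8 threshold from the text:

```python
A = 80   # defects logged by the testing team (before release)
B = 20   # defects logged by the customer (after release)

dre = A / (A + B)
print(dre)          # 0.8
print(dre >= 0.8)   # True -> meets the rule-of-thumb threshold
```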

Q.What are the Common Factors in Deciding When to Stop Testing?


A.Common factors in deciding when to stop Testing:
1.Show stoppers encountered
2.Too many minor bugs pending to be fixed
3.Deadlines (release deadlines, testing deadlines, etc.)
4.Test cases completed with certain percentage passed
5.Test budget depleted
6.Coverage of code/functionality/requirements reaches a specified point
7.Bug rate falls below a certain level
8.Beta or alpha testing period ends

Q:Static testing Vs Dynamic testing :


A:Reviews, walkthroughs, or inspections are considered as static testing, whereas
actually executing programmed code with a given set of test cases is referred to as
dynamic testing.
Dynamic testing may begin before the program is 100% complete in order to test
particular sections of code (modules or discrete functions). Typical techniques for this are
either using
stubs/drivers or execution from a debugger environment.
Q: What is ACID Testing?
A: ACID testing relates to testing a transaction.
ACID means: A-Atomicity, C-Consistency, I-Isolation and D-Durability.
Mostly this is done in database testing.
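Atomicity, at least, is easy to demonstrate with Python's bundled sqlite3 module: when an exception interrupts a transaction, the connection context manager rolls the partial work back (the accounts table and the simulated crash are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('a', 100), ('b', 0)")
conn.commit()

try:
    with conn:  # one transaction: both updates happen, or neither does
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'a'")
        raise RuntimeError("simulated crash mid-transfer")
except RuntimeError:
    pass  # the context manager has rolled the partial debit back

balance_a = conn.execute(
    "SELECT balance FROM accounts WHERE name = 'a'").fetchone()[0]
print(balance_a)  # 100 -> atomicity held
```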
Q: What are the Microsoft 6 rules?
A: As far as I know, these rules are used in user interface testing. They are also called the Microsoft Windows standards. They are:
• GUI objects are aligned in windows
• All defined text is visible on a GUI object
• Labels on GUI objects are capitalized
• Each label includes an underlined letter (mnemonics)
• Each window includes an OK button, a Cancel button, and a System menu

Q.What is Severity and Priority and who will decide what?


A. Severity: the seriousness of the defect with respect to functionality; it is decided by the tester.
Priority: the importance of the defect with respect to the customer requirement; it is decided by the developer.

Q. General Description?
A. PathLIMS deals with the AP, CP, ISH and IHC laboratories.

In PathLIMS a pathologist creates an internal study number (which can be Non-Insitu/Non-TMA, Insitu, or TMA) and an investigator opens a corresponding path request for that internal study number. There can be any number of path requests for a single internal study number (protocol).

Inputs for the path request will be lists of animals, tissues, test procedures (Necropsy, IHC, Routine Histology, etc.), cassettes (TMA cassettes, HTB specimens, histology cassettes), antibodies, cell lines and timepoints. Based on the inputs for the path request, the request will move to the corresponding labs. For example, if the input is a necropsy test procedure, the request will move to Necropsy under the AP lab. If the input is a timepoint, the request will move to the CP lab. Inputs will differ depending on the type of internal study; if the internal study is Insitu, the path request inputs will be different.

Once the path request is opened and submitted by the investigator, the corresponding lab supervisors and pathologists will review and approve the request. The approved path request will be sent to the corresponding labs, and the corresponding lab technician and supervisor will work on it according to the request and enclose the results. Once these results are reviewed by the pathologist and supervisor, the path request can be closed or completed.

The PathLIMS Requests section is for designing the experiment; based on the design, the request will move to the corresponding lab. The HTB section is for designing tissue and tracking all tissues.

TESTING TYPES
What kinds of testing should be considered?
1.Unit testing - Testing of individual software components or modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. It may require developing test driver modules or test harnesses.

2.Integration testing - Testing of integrated modules to verify combined functionality


after integration. Modules are typically code modules, individual applications, client and
server applications on a network, etc. This type of testing is especially relevant to
client/server and distributed systems.

3.Incremental integration testing - Continuous testing of an application as new functionality is added. This is done by programmers or by testers.
4.Functional testing - Functional testing is based on the functional requirements of the application.
Functional testing is a subset of system testing and comes under black box testing.

5.Black box testing - Black box testing treats the software as a "black box"—without
any knowledge of internal design or code. Tests are based on requirements and
functionality.
Black box testing methods include: equivalence partitioning, boundary value analysis,
all-pairs testing, fuzz testing, model-based testing, traceability matrix, exploratory
testing and specification-based testing.

6.White box testing - Based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths and conditions. Also known as glass box testing.
Various white box testing techniques are mutation testing, static testing, API testing, code coverage and fault injection.

7.System testing - System testing tests a completely integrated system to verify that it meets its requirements.
System testing is end-to-end testing; it covers all the functionality, performance, usability, database and stress testing. It comes under black box testing.

8.End-to-end testing - End-to-end testing is the process of testing transactions or


business level products as they pass right through the computer systems. Thus this
generally ensures that all aspects of the business are supported by the systems under
test.
i.e. it is a type of testing in which one performs testing on the end-to-end scenarios of the application.
Ex: login
balance enquiry
withdraw
balance enquiry
logout

9.Acceptance testing - Normally this type of testing is done to verify whether the system meets the customer-specified requirements. The user or customer does this testing to determine whether to accept the application.

10.Load Testing :-Testing the application with maximum number of users and input.

11.Stress Testing:-Testing the application with more than the maximum number of
users and input.

12.Performance testing: - It is the process of testing the application performance to


check if it meets the performance goals with respect to response time, throughput and
scalability

13.Usability testing - User-friendliness check. The application flow is tested: can a new user understand the application easily, and is proper help documented wherever the user may get stuck?

14.Install/Uninstall testing - Tests full, partial, or upgrade install/uninstall processes on different operating systems under different hardware and software environments.

15.Recovery testing(or Failover testing) - Testing how well a system recovers from
crashes, hardware failures, or other catastrophic problems.

16.Security testing : - Testing how well the system protects against unauthorized
internal or external access. Checked if system, database is safe from external attacks.

17.Compatibility testing :- Testing how well software performs in a particular


hardware/software/operating system/network environment and different combinations of
above.

18.Comparison testing :- Comparison of product strengths and weaknesses with


previous versions or other similar products.

19.Alpha testing :- Alpha testing is simulated or actual operational testing by potential


users/customers or an independent test team at the developers' site. Alpha testing is
often employed for off-the-shelf software as a form of internal acceptance testing, before
the software goes to beta testing.

20.Beta testing :- Beta testing comes after alpha testing. Versions of the software,
known as beta versions, are released to a limited audience outside of the programming
team. The software is released to groups of people so that further testing can ensure the
product has few faults or bugs. Sometimes, beta versions are made available to the open
public to increase the feedback field to a maximal number of future users.

21.Exploratory Testing :- Exploratory testing is a type of testing in which domain experts perform testing on the application to explore its functionality, since they do not have proper requirement document support.

22.Adhoc Testing :- Ad hoc testing is a type of testing in which the test engineer performs testing on the application in his own style after understanding the requirements.
Ad hoc testing is a commonly used term for software testing performed without planning and documentation.

23.Monkey Testing :- Testing an application without having knowledge of the application; testing a system randomly is referred to as monkey testing.

24.Agile testing :- involves testing from the customer perspective as early as possible,
testing early and often as code becomes available and stable enough from module/unit
level testing.

Since working increments of the software are released very often in agile software development, there is also a need to test often. This is often done using automated acceptance testing to minimize the amount of manual labor.

25.Context-driven testing :- Testing driven by an understanding of the environment,
culture, and intended use of software. For example, the testing approach for life-critical
medical equipment software would be completely different from that for a low-cost
computer game.

26.Smoke testing :- Physical verification of the application (ex: checking labels,
buttons, URLs etc.).
Smoke testing is generally done immediately after the development phase is over. Its
main aim is to find out whether the software provides its basic functionality and does
not crash completely, i.e. whether the application is ready to test or not.
It is also called build verification testing. Smoke testing is wide and shallow.
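A wide-and-shallow smoke check can be sketched as follows; the `App` class is a hypothetical stand-in for the build being verified, and the checks only touch each major screen without exercising it deeply.

```python
class App:
    """Hypothetical stand-in for the application build under verification."""
    def __init__(self):
        self.started = True
    def home_page(self):
        return {"status": 200, "title": "Home"}
    def login_page(self):
        return {"status": 200, "title": "Login"}

def smoke_test(app):
    """Wide and shallow: touch every major entry point, assert nothing crashes."""
    checks = [app.home_page, app.login_page]
    return app.started and all(check()["status"] == 200 for check in checks)
```

If this fails, the build is rejected outright and no deeper testing is attempted.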

27.Sanity testing :- Functional verification of the application (ex: checking button
functionalities).
Sanity testing is done on an updated version of the software to verify whether the
requirements are met or not.
A sanity test is used to determine whether a small section of the application still works
after a minor change.
Ex: suppose there is a new service pack for a Windows operating system; it is tested
first to check whether the operating system works as required even after applying the
new service pack, or whether it fails in any aspect.

It is also called build acceptance testing. Sanity testing is deep and narrow.
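By contrast with a smoke test, a deep-and-narrow sanity check exercises only the changed area, but thoroughly. In this sketch, `apply_discount` is a hypothetical function assumed to have just been patched in the new build.

```python
def apply_discount(price, percent):
    """Hypothetical function just patched in this build: percent off price."""
    if not (0 <= percent <= 100):
        raise ValueError("percent out of range")
    return round(price * (100 - percent) / 100, 2)

def sanity_test():
    """Deep and narrow: exercise only the patched function, but thoroughly."""
    assert apply_discount(100.0, 0) == 100.0    # no discount
    assert apply_discount(100.0, 25) == 75.0    # typical case
    assert apply_discount(80.0, 100) == 0.0     # boundary: full discount
    try:
        apply_discount(50.0, 120)               # invalid input must be rejected
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for percent > 100")
    return True
```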

28.Regression testing :- Regression testing is the process of testing the fix of a bug in
the next upgraded version, i.e. checking whether the fix of an earlier bug has broken or
affected some other area of the application.
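A minimal sketch of the idea: the regression suite keeps the pre-existing tests and adds one for the fixed bug, so the fix is verified and the older behaviour is shown to be unbroken. The `average` function and its empty-list bug are hypothetical.

```python
def average(values):
    """Hypothetical fixed function: used to raise ZeroDivisionError on []."""
    if not values:
        return 0.0
    return sum(values) / len(values)

def regression_suite():
    """Re-run the old tests alongside the new bug-fix test."""
    assert average([2, 4, 6]) == 4.0   # pre-existing behaviour still holds
    assert average([5]) == 5.0         # pre-existing behaviour still holds
    assert average([]) == 0.0          # test added for the fixed bug
    return True
```

In practice such suites are automated and re-run on every build, since the same checks are repeated release after release.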

29.Retesting :- Re-testing is the process of testing the fix of a bug in the same version,
i.e. checking whether the bug is actually fixed in that version.

30. GUI software testing :-In computer science, GUI software testing is the process of
testing a product that uses a graphical user interface, to ensure it meets its written
specifications. This is normally done through the use of a variety of test cases.

31.Volume testing :- Volume testing means testing the software with a large volume of
data in the database.
Volume testing is a subset of stress testing: in volume testing we increase the size
of the database to check the performance of the software.
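A rough sketch using Python's built-in sqlite3 module: load a large table, then check that a query still completes correctly within a time budget. The row count and budget here are arbitrary illustration values, not recommendations.

```python
import sqlite3
import time

def volume_test(rows=100_000, budget_seconds=5.0):
    """Load many rows, then check a query still works within the time budget."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
    db.executemany(
        "INSERT INTO orders (amount) VALUES (?)",
        ((float(i % 100),) for i in range(rows)),
    )
    start = time.perf_counter()
    (count,) = db.execute("SELECT COUNT(*) FROM orders").fetchone()
    elapsed = time.perf_counter() - start
    db.close()
    return count == rows and elapsed < budget_seconds
```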

32.Maintenance testing :- Maintenance testing is testing performed either to identify
equipment problems, to diagnose equipment problems, or to confirm that repair
measures have been effective. It can be performed at the system level, the equipment
level or the component level.

33.Mutation testing :- Mutation testing (or mutation analysis, or program mutation) is a
method of software testing which involves modifying a program's source code or byte
code in small ways. In short, any tests which still pass after the code has been mutated
are considered inadequate, because they failed to detect the change.
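The mechanism can be sketched by hand: mutate one operator and see whether the test suite notices. Real tools (such as mutmut for Python or PIT for Java) generate and run mutants automatically; the code below is only a toy illustration.

```python
def is_adult(age):
    return age >= 18   # original code

def is_adult_mutant(age):
    return age > 18    # mutant: >= changed to >

def suite_passes(fn):
    """Run the same test suite against either the original or the mutant."""
    try:
        assert fn(30) is True
        assert fn(10) is False
        assert fn(18) is True  # boundary test: this is what kills the mutant
    except AssertionError:
        return False
    return True
```

The suite passes on the original but fails on the mutant, so the mutant is "killed"; without the boundary test at 18, the mutant would survive, revealing a gap in the tests.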

34.Comprehensive testing :- In comprehensive testing, the SUT (System Under Test) is
tested with all of the following methods:
* Unit testing (integrated)
* Functional testing / regression testing
* Non-functional testing (load, stress, usability)
* Compatibility testing (browser, OS)
* Protocol testing (network/IP testing)

When an application/system is tested with all these kinds of methods, it is called
comprehensive testing.

35.System integration testing :- System integration testing verifies that a system is
integrated with any external or third-party systems defined in the system requirements.

36.Penetration testing :- Penetration testing (also called pen testing) is the practice of
testing a computer system, network or Web application to find vulnerabilities that an
attacker could exploit.
The main objective of penetration testing is to determine security weaknesses.

37.Globalization testing :- The goal of globalization testing is to detect potential
problems in application design that could inhibit globalization. It makes sure that the
code can handle all international support without breaking functionality, which would
cause either data loss or display problems.
Globalization testing checks proper functionality of the product with any
culture/locale settings, using every type of international input possible. Proper
functionality of the product assumes both a stable component that works according to
the design specification, regardless of international environment settings or
cultures/locales, and the correct representation of data.
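A tiny sketch of the idea: run the same function over input from several scripts and assert that no data is lost or corrupted. The `greet` function is a hypothetical piece of the product under test.

```python
def greet(name):
    """Hypothetical function under test: build a greeting string."""
    return f"Hello, {name}!"

def globalization_test():
    """Feed input from several scripts; data must survive intact."""
    samples = ["María", "Ünal", "山田", "Дмитрий", "نور"]
    for name in samples:
        message = greet(name)
        assert name in message                                  # no data loss
        assert message.encode("utf-8").decode("utf-8") == message  # round-trips
    return True
```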

38.Localization testing :- Localization is the process of customizing a software
application that was originally designed for a domestic market so that it can be released
in foreign markets. This process involves translating all native-language strings into the
target language and customizing the GUI so that it is appropriate for the target market.
Depending on the size and complexity of the software, localization can range from a
simple process involving a small team of translators, linguists, desktop publishers and
engineers, to a complex process requiring a localization project manager directing a
team of a hundred specialists. Localization is usually done using some combination of
in-house resources, independent contractors and the full-scope services of a
localization company.

39.Buddy testing :- Buddy testing is a process where a tester is paired with a developer
for testing special scenarios.
In simple terms, it is "pairing up developers with testers" for testing.

40.Scalability testing :- Testing of a software application to measure its non-functional
capability to scale up in terms of user load supported, the number of transactions, the
data volume, etc.
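One way to sketch a scale-up check with Python's standard library: ramp the number of concurrent "users" and verify that every request still completes correctly at each load level. `handle_request` is a hypothetical stand-in for the real operation under load.

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(n):
    """Hypothetical stand-in for the operation under load."""
    return sum(range(n)) % 97

def scalability_test(loads=(1, 10, 50), requests=200):
    """Ramp concurrency; every request must still return the correct result."""
    expected = handle_request(1000)
    for workers in loads:
        with ThreadPoolExecutor(max_workers=workers) as pool:
            results = list(pool.map(handle_request, [1000] * requests))
        if results != [expected] * requests:
            return False
    return True
```

A real scalability test would also record throughput and response time at each load level, not just correctness.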
