SOFTWARE TESTING: UNIT 1

1. Explain in detail about the software testing and its development process.

Ans :
Software testing: Software testing is the process of executing a program or system with the intent of
finding errors. It is an investigation conducted to provide stakeholders with information about the
quality of the product or service under test. Software testing also provides an objective,
independent view of the software to allow the business to appreciate and understand the risks of
software implementation. Test techniques include, but are not limited to, the process of executing a
program or application with the intent of finding software bugs.

Software testing can also be stated as the process of validating and verifying that a software
program/application/product:

1. meets the business and technical requirements that guided its design and development;
2. works as expected; and
3. can be implemented with the same characteristics.

Scope of testing: Testing cannot establish that a product functions properly under all conditions;
it can only establish that the product does not function properly under specific conditions. The
scope of software testing often includes examination of the code itself as well as execution of that
code in various environments and conditions.

Functional vs. non-functional testing: Functional testing refers to activities that verify a
specific action or function of the code. Non-functional testing refers to aspects of the software
that may not be related to a specific function or user action, such as scalability or other
performance characteristics, behavior under certain constraints, or security.

Software verification and validation:

 Verification: Have we built the software right? (i.e., does it match the specification).
 Validation: Have we built the right software? (i.e., is this what the customer wants).

- Verification is the process of evaluating a system or component to determine whether the
products of a given development phase satisfy the conditions imposed at the start of that
phase.
- Validation is the process of evaluating a system or component during or at the end of the
development process to determine whether it satisfies specified requirements.
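A small sketch may make the distinction concrete. The `discount` function and its one-line specification are hypothetical: verification checks the code against the written spec and can be automated; validation asks whether the spec itself matches what the customer wants, which code alone cannot answer.

```python
# Hypothetical specification: discount(total) applies 10% off orders of $100 or more.
def discount(total):
    return total * 0.9 if total >= 100 else total

# Verification: have we built the software right, i.e., does it match the spec?
def verify_against_spec():
    return discount(100) == 90.0 and discount(99) == 99

# Validation: have we built the right software? That question is about the spec
# itself and needs the customer; here we can only record it for a requirements
# review.
validation_questions = [
    "Should the 10% discount apply before or after shipping is added?",
]
```

Verification here is a mechanical check; validation surfaces as an open question for a requirements review, as the text's definitions imply.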
Software development process: Software testing is an integral part of the software development
process, which comprises the following four components (see Figure 1-1):

1. Plan (P): Devise a plan. Define your objective and determine the strategy and supporting methods
to achieve it. You should base the plan on an assessment of your current situation, and the
strategy should clearly focus on the strategic initiatives/key units that will drive your
improvement plan.

2. Do (D): Execute the plan. Create the conditions and perform the necessary training to execute the
plan. Make sure everyone thoroughly understands the objectives and the plan. Teach workers the
procedures and skills they need to fulfill the plan and thoroughly understand the job. Then
perform the work according to these procedures.

3. Check (C): Check the results. Check to determine whether work is progressing according to the
plan and whether the expected results are being obtained. Check for performance of the set
procedures, changes in conditions, or abnormalities that may appear. As often as possible,
compare the results of the work with the objectives.

4. Act (A): Take the necessary action. If your checkup reveals that the work is not being performed
according to the plan or that results are not what you anticipated, devise measures to take
appropriate actions.

Testing involves only the check component of the plan-do-check-act (PDCA) cycle. The software
development team is responsible for the three remaining components. The development team plans
the project and builds the software (the “do” component); the testers check to determine that the
software meets the needs of the customers and users. If it does not, the testers report defects to the
development team. It is the development team that makes the determination as to whether the
uncovered defects are to be corrected.
The role of testing is to fulfill the check responsibilities assigned to the testers; it is not to determine
whether software can be placed into production. That is the responsibility of the customers, users,
and development team.
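As a rough sketch (not from the text), the PDCA cycle can be modeled as a loop in which the development team supplies the plan, do, and act steps and the testers supply check; all step functions below are hypothetical placeholders.

```python
# A rough sketch of the PDCA cycle. Testing supplies only the check step;
# plan, do, and act belong to the development team.
def pdca(plan, do, check, act, max_cycles=3):
    objective = plan()                          # Plan: define the objective
    for _ in range(max_cycles):
        product = do(objective)                 # Do: execute the plan
        defects = check(product, objective)     # Check: the testers' role
        if not defects:
            return product                      # results match the plan
        objective = act(objective, defects)     # Act: devise corrective measures
    return None

# Toy example: the objective is a list of exactly three items.
result = pdca(
    plan=lambda: 3,
    do=lambda n: list(range(n)),
    check=lambda product, n: [] if len(product) == n else ["wrong length"],
    act=lambda n, defects: n,                   # no correction needed in this run
)
```

The structure also reflects the text's point that testers report defects but the development team decides whether to act on them: `check` only returns findings, while `act` decides what to change.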

2. Describe the eight considerations in developing testing methodologies.

The following are eight considerations you need to address when customizing the software-testing
process:
1. Determine the test strategy objectives.
2. Determine the type of development project.
3. Determine the type of software system.
4. Determine the project scope.
5. Identify the software risks.
6. Determine when testing should occur.
7. Define the system test plan standard.
8. Define the unit test plan standard.

Determining the Test Strategy Objectives : Test strategy is normally developed by a team very
familiar with the business risks associated with the software; tactics are developed by the test team.
Thus, the test team needs to acquire and study the test strategy. In this study, the test team should
ask the following questions:
- What is the ranking of the test factors?
- Which of the high-level risks are the most significant?
- What damage can be done to the business if the software fails to perform correctly?
- What damage can be done to the business if the software is not completed on time?
- Which individuals are most capable of understanding the impact of the identified business risks?

Determining the Type of Development Project:


The type of development project refers to the environment/methodology in which the software
will be developed. As the environment changes, so does the testing risk. For example, the risks
associated with the traditional development effort differ from the risks associated with off-the-shelf
purchased software. Different testing approaches must be used for different types of
projects, just as different development approaches are used.

Determining the Type of Software System:


The type of software system refers to the processing that will be performed by that system.
Sixteen different software system types are distinguished in this step; a single software system
may incorporate more than one of them. Identifying the specific software types will help build an
effective test plan.
1. Batch (general). Can be run as a normal batch job and makes no unusual hardware or
input-output actions.
2. Event control. Performs real-time data processing as a result of external events.
3. Process control. Receives data from an external source and issues commands to that source to
control its actions based on the received data.
4. Procedure control. Controls other software.
5. Computer system software. Provides services to operational computer programs.
6. Software development tools. Provides services to aid in the development of software.
(The remaining ten types are listed on page 76 of the textbook by William Perry.)
Determining the Project Scope: The project scope refers to the totality of activities to be
incorporated into the software system being tested: the range of system requirements/specifications
to be understood. The scope of new system development is different from the scope of changes to an
existing system.
Identifying the Software Risks: Strategic risks are the high-level business risks faced by the
software system; software system risks are subsets. The purpose of decomposing the strategic
risks into tactical risks is to assist in creating the test scenarios that will address those risks. It is
difficult to create test scenarios for high-level risks. Tactical risks can be categorized as follows:
- Structural risks
- Technical risks
- Size risks
Determining When Testing Should Occur: Testing can and should occur throughout the phases
of a project.
A. Requirements phase activities
B. Design phase activities
C. Program phase activities
D. Test phase activities
E. Operations phase activities
F. Maintenance phase activities
Defining the System Test Plan Standard: A tactical test plan must be developed to describe
when and how testing will occur. This test plan will provide background information on the
software being tested, on the test objectives and risks, as well as on the business functions to be
tested and the specific tests to be performed.

Defining the Unit Test Plan Standard: During internal design, the system is divided into the
components or units that perform the detailed processing. Each of these units should have its
own test plan. It is a bad idea economically to submit units that contain defects to higher levels
of testing. Thus, extra effort spent in developing unit test plans, testing units, and ensuring that
units are defect free prior to integration testing can have a significant payback in reducing overall
test costs.
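As a minimal illustration of that payback, the unit and its test plan below are hypothetical: catching a defect here, at the unit level, is far cheaper than catching it later during integration testing.

```python
# Hypothetical unit from the system's internal design: parses "HH:MM" into
# minutes since midnight.
def minutes_since_midnight(hhmm):
    hours, minutes = hhmm.split(":")
    return int(hours) * 60 + int(minutes)

# A tiny unit test plan for this unit: normal case plus both boundaries.
def run_unit_tests():
    failures = []
    if minutes_since_midnight("01:30") != 90:
        failures.append("normal case")
    if minutes_since_midnight("00:00") != 0:
        failures.append("lower boundary")
    if minutes_since_midnight("23:59") != 1439:
        failures.append("upper boundary")
    return failures  # an empty list means the unit may proceed to integration
```

Only units whose test plan reports no failures are released to integration testing, which is the economic point the paragraph makes.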
3. With a neat sketch, explain the workbench concept.
To understand testing methodology, you must understand the workbench concept. In IT
organizations, workbenches are more frequently referred to as phases, steps, or tasks. The
workbench is a way of illustrating and documenting how a specific activity is to be performed.
Defining workbenches is normally the responsibility of a process management committee, which
in the past has been more frequently referred to as a standards committee. There are four
components to each workbench:
1. Input. The entrance criteria or deliverables needed to complete a task.
2. Procedures to do. The work tasks or processes that will transform the input into the output.
3. Procedures to check. The processes that determine whether the output meets the standards.
4. Output. The exit criteria or deliverables produced from the workbench.
NOTE: Testing tools are not considered part of the workbench because they are incorporated
into either the procedures to do or the procedures to check. The workbench is illustrated in
Figure 3-3, and the software development life cycle, which is comprised of many workbenches, is
illustrated in Figure 3-4.
The workbench concept can be used to illustrate one of the steps involved in building systems.
The programmer’s workbench consists of the following steps:
1. Input products (program specs) are given to the producer (programmer).
2. Work is performed (e.g., coding/debugging); a procedure is followed; a product or interim
deliverable (e.g., a program/module/unit) is produced.
3. Work is checked to ensure the product meets specs and standards, and that the do procedure
was performed correctly.
4. If the check process finds problems, the product is sent back for rework.
5. If the check process finds no problems, the product is released to the next workbench.
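These five steps can be sketched as a small driver loop; the function names and the trivial "standards check" below are illustrative assumptions, not part of the text's figures.

```python
# A minimal sketch of a workbench:
# input -> do-procedure -> check-procedure -> output, with rework on failure.
def workbench(input_product, do, check, max_rework=3):
    for _ in range(max_rework):
        product = do(input_product)       # perform the work tasks
        if check(product):                # does the output meet specs/standards?
            return product                # release to the next workbench
        input_product = product           # send back for rework
    raise RuntimeError("product still defective after rework")

# Programmer's workbench example: a spec string in, a cleaned code line out.
spec = "  print('hello')  "
released = workbench(
    spec,
    do=lambda s: s.strip(),               # the coding/cleanup step
    check=lambda p: p == p.strip() and p, # trivial stand-in for a standards check
)
```

In a real life cycle each workbench's output becomes the input of the next, which is how Figure 3-4 chains them into the full development process.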


4. Discuss the roles in software testing.

There are two customer dissatisfaction gaps:
1. Risks Associated with Implementing Specifications.
2. Risks Associated with Not Meeting Customer Needs.
Management needs to evaluate these risks and determine their level of risk appetite. For
example, is management willing to accept the risk of un-maintainable software? If not,
management should take action to minimize that risk. An obvious action is to develop
maintenance standards. Another obvious action is to test the software to ensure its
maintainability.
The role of all software testing groups is to validate whether the documented specifications have
been implemented as specified. Additional roles that might be assigned to software testers
include the following:
■■ Testing for all or part of the test factors. When establishing the software testing role,
management will want to accept some test factors for incorporation into the software tester's
role, such as testing for ease of use, and exclude others, such as operational performance of the
software. In other words, management may decide they can live with inefficient software but
cannot live with difficult-to-use processes.
■■ Ensuring that the documented specifications meet the true needs of the customer. Testers can
attempt to verify that the documented specifications are in fact the true needs of the customer.
For example, they might initiate a requirements review as a means of verifying the
completeness of the defined specifications.
■■ Improving the software testing process. Testers can use the analysis of their testing to
identify ways to improve testing.
■■ Improving the developmental test process. Testers can use their experience in testing to make
recommendations on how the software development process could be improved.
■■ Participating in acceptance testing. Testers can use their software testing expertise to help the
users of the software systems develop and implement acceptance testing plans that will
determine whether the completed software meets the operational needs of the users.
■■ Recommending changes to the software system. In developing and conducting software tests,
testers may identify better ways of implementing the documented specifications.
■■ Evaluating the adequacy of the system of controls within the software system. There are two
components of a software system: the component that does the specified work and the
component that checks that the specified work was performed correctly. The latter component is
referred to as the "system of internal control within the software system". Testers can evaluate
whether those controls adequately minimize the risks they were designed to address.
Management needs to clearly establish the role for software testers in their IT organization.
Some IT managers want a limited role for software testers, whereas others want an expanded
role. Also included in a decision of the role of software testers is whether they will be
independent of the developmental project or part of the developmental project.

5.Briefly explain the computer system strategic risks and economics of testing.
Strategic risks are the high-level business risks faced by the software system; software system
risks are subsets. The purpose of decomposing the strategic risks into tactical risks is to assist in
creating the test scenarios that will address those risks.
There are two types of risks associated with software:
1. Risks Associated with Implementing Specifications.
2. Risks Associated with Not Meeting Customer Needs.
Risks Associated with Implementing Specifications:
There are many risks that, if not properly controlled, will result in missing, incomplete, or
inaccurate specifications. The risk factors that can cause specifications not to be implemented as
specified include the following:
■■ Inadequate schedule and budget. If the testers do not have adequate time or resources, they
will not be able to test all the implemented specifications.
■■ Inadequate test processes. If the test processes are defective, the testers will create defects as
they conduct testing. Thus, even though they have performed the process as specified, they will
not be able to accomplish the tasks those test processes were designed to achieve.
■■ Inadequate competency. Testers who do not know the basics of testing, who do not know
how to use the test processes provided them, and who are inadequately trained in the use of
testing tools and techniques will not be able to accomplish test objectives.

Risks Associated with Not Meeting Customer Needs:

Meeting customers' needs must be differentiated from implementing the documented software
specifications. One of the major problems in meeting needs is that the process for documenting
needs by the IT project leader is defective. The test risks become the factors that need to be
considered in the development of the test strategy. The following list briefly describes the test
factors:
correctness, file integrity, authorization, audit trail, continuity of processing, service levels,
access control, reliability, ease of use, portability, coupling, performance, and ease of operation.

Economics of Testing:
One information services manager described testing in the following manner: “Too little testing
is a crime, but too much testing is a sin”. The risk of under-testing is directly translated into
system defects present in the production environment. The risk of over-testing is the
unnecessary use of valuable resources in testing computer systems that have no flaws, or so few
flaws that the cost of testing far exceeds the value of detecting the system defects.
Effective testing is conducted throughout the system development life cycle (SDLC). The SDLC
represents the activities that must occur to build software, and the sequence in which those
activities must occur. Most of the problems associated with testing occur from one of the
following causes:
- Failing to define testing objectives
- Testing at the wrong phase in the life cycle
- Using ineffective testing techniques

The cost-effectiveness of testing is illustrated in Figure 2-4 as a testing cost curve. As the cost of
testing increases, the number of undetected defects decreases. The left side of the illustration
represents an under-test situation, in which the cost of testing is less than the resultant loss from
undetected defects. At some point the two lines cross, and an over-test condition begins; in this
situation, the cost of testing to uncover defects exceeds the losses from those defects. A
cost-effective perspective means testing until the optimum point is reached: the point at which the
cost of testing equals the value received from the defects uncovered, beyond which further testing
costs more than it saves.
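The curve can be imitated with illustrative numbers (not taken from Figure 2-4): testing cost grows linearly with effort while losses from undetected defects decay as testing increases, and the optimum is the effort level that minimizes their sum.

```python
# Illustrative model of the testing cost curve; all constants are assumptions.
def total_cost(effort, test_cost_per_unit=10, initial_loss=1000, decay=0.5):
    testing = test_cost_per_unit * effort          # rises with testing effort
    defect_loss = initial_loss * (decay ** effort) # shrinks as testing grows
    return testing + defect_loss

# The optimum point: the effort level with the lowest combined cost.
def optimum_effort(max_effort=20):
    return min(range(max_effort + 1), key=total_cost)
```

Below the optimum we are under-testing (defect losses dominate); above it we are over-testing (testing cost dominates), matching the two sides of the figure.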

Few organizations have established a basis to measure the effectiveness of testing. This makes it
difficult for the individual systems analyst/programmer to determine the cost-effectiveness of
testing. Without testing standards, the effectiveness of the process cannot be evaluated in
sufficient detail to enable the process to be measured and improved.
