
Software Engineering | Regression Testing

Regression Testing is the process of testing the modified parts of the code, and the
parts that might be affected by the modifications, to ensure that no new errors have
been introduced into the software after the changes were made. Regression means
the return of something; in the software field, it refers to the return of a bug.
When to do regression testing?
 When new functionality is added to the system and the code has been
modified to absorb and integrate that functionality with the existing code.
 When a defect has been identified in the software and the code is debugged
to fix it.
 When the code is modified to optimize its performance.
Process of Regression testing:
Firstly, whenever we make changes to the source code for any reason, such as
adding new functionality or optimization, the program may fail test cases in the
previously designed test suite. After a failure, the source code is debugged in order
to identify the bugs in the program. After the bugs in the source code have been
identified, appropriate modifications are made. Then appropriate test cases are
selected from the already existing test suite, covering all the modified and affected
parts of the source code. New test cases can be added if required. In the end,
regression testing is performed using the selected test cases.
Techniques for the selection of Test cases for Regression Testing:
 Select all test cases: In this technique, all the test cases from the already
existing test suite are selected. It is the simplest and safest technique, but not
very efficient.
 Select test cases randomly: In this technique, test cases are selected
randomly from the existing test suite, but this is useful only if all the test cases
are equally good in their fault-detection capability, which is very rare. Hence, it
is rarely used.
 Select modification traversing test cases: In this technique, only those test
cases are selected which cover and test the modified portions of the source
code and the parts which are affected by these modifications.
 Select higher priority test cases: In this technique, a priority code is
assigned to each test case of the test suite based upon its bug-detection
capability, customer requirements, etc. After assigning the priority codes, the
test cases with the highest priorities are selected for the process of regression
testing. The test case with the highest priority has the highest rank. For
example, a test case with priority code 2 is less important than a test case with
priority code 1. A small sketch of this selection strategy appears below.
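As an illustration, here is a minimal Python sketch of priority-based test case
selection; the test-case names, the priority codes, and the budget parameter are
hypothetical, not taken from any particular tool.

from typing import List, Tuple

def select_by_priority(test_suite: List[Tuple[str, int]], budget: int) -> List[str]:
    # Each entry is (test_case_name, priority_code); priority 1 is most important.
    ranked = sorted(test_suite, key=lambda tc: tc[1])
    # Keep only as many high-priority cases as the regression budget allows.
    return [name for name, _ in ranked[:budget]]

suite = [("test_login", 1), ("test_report_export", 3),
         ("test_payment", 1), ("test_ui_theme", 4)]
print(select_by_priority(suite, budget=2))  # ['test_login', 'test_payment']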

Tools for regression testing: In regression testing, we generally select the test
cases from the existing test suite itself, and hence we do not need to compute their
expected output; for this reason, regression testing can be easily automated.
Automating the process of regression testing is very effective and time saving.
Most commonly used tools for regression testing are:
 Selenium
 WATIR (Web Application Testing In Ruby)
 QTP (Quick Test Professional)
 RFT (Rational Functional Tester)
 WinRunner
 SilkTest
Advantages of Regression Testing:
 It ensures that no new bugs have been introduced after adding new
functionalities to the system.
 Most of the test cases used in regression testing are selected from the
existing test suite, so their expected outputs are already known. Hence, it can
be easily automated with automated tools.
 It helps to maintain the quality of the source code.
Disadvantages of Regression Testing:
 It can be time and resource consuming if automated tools are not used.
 It is required even after very small changes in the code.

Software Engineering | Black box testing


Black box testing is a type of software testing in which the internal structure of the
software is not known to the tester. The testing is done without internal knowledge
of the product, based only on its requirements and functionality.
Black box testing can be done in the following ways:
1. Syntax Driven Testing – This type of testing is applied to systems that can be
syntactically represented by some language, for example compilers, or languages
that can be represented by a context-free grammar. In this, the test cases are
generated so that each grammar rule is used at least once.
2. Equivalence partitioning – It is often seen that many types of inputs work
similarly, so instead of giving all of them separately we can group them together and
test only one input of each group. The idea is to partition the input domain of the
system into a number of equivalence classes such that each member of a class
works in a similar way, i.e., if a test case in one class results in some error, other
members of the class would result in the same error.

The technique involves two steps:

1. Identification of equivalence classes – Partition any input domain into a
minimum of two sets: valid values and invalid values. For example, if the valid
range is 0 to 100, then select one valid input like 49 and one invalid input like 104.
2. Generating test cases –
(i) Assign a unique identification number to each valid and invalid class of input.
(ii) Write test cases covering all valid and invalid classes, ensuring that no two
invalid inputs mask each other.
To calculate the square root of a number, the equivalence classes will be:
(a) Valid inputs:
 Whole number which is a perfect square – the output will be an integer.
 Whole number which is not a perfect square – the output will be a decimal
number.
 Positive decimals.
(b) Invalid inputs:
 Negative numbers (integer or decimal).
 Characters other than numbers, like “a”, “!”, “;”, etc.
These classes are sketched as test cases below.
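A minimal Python sketch of these square-root equivalence classes follows; the
function under test and the choice of one representative input per class are
assumptions for illustration.

import math

def safe_sqrt(x):
    # Hypothetical function under test: rejects invalid inputs.
    if not isinstance(x, (int, float)) or x < 0:
        raise ValueError("invalid input")
    return math.sqrt(x)

# One representative test case per equivalence class.
assert safe_sqrt(49) == 7                    # valid: perfect square -> integer result
assert safe_sqrt(50) != int(safe_sqrt(50))   # valid: non-perfect square -> decimal
assert safe_sqrt(2.25) == 1.5                # valid: positive decimal
for bad in (-4, -1.5, "a"):                  # invalid classes: negatives, non-numbers
    try:
        safe_sqrt(bad)
        raise AssertionError("expected rejection")
    except ValueError:
        pass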
3. Boundary value analysis – Boundaries are very good places for errors to occur.
Hence, if test cases are designed for the boundary values of the input domain, the
efficiency of testing improves and the probability of finding errors increases. For
example, if the valid range is 10 to 100, then in addition to typical valid and invalid
inputs, also test the boundary values 10 and 100 and the values just outside them,
such as 9 and 101. A sketch of generating such values appears below.
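Sketched below is one common way to generate boundary values in Python; the
exact set of values (boundary, just inside, just outside) is a conventional choice,
not mandated by the text.

def boundary_values(low, high):
    # For a valid integer range [low, high], test the boundaries,
    # the values just inside them, and the values just outside them.
    return sorted({low - 1, low, low + 1, high - 1, high, high + 1})

print(boundary_values(10, 100))  # [9, 10, 11, 99, 100, 101]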
4. Cause effect graphing – This technique establishes relationships between
logical inputs, called causes, and the corresponding actions, called effects. The
causes and effects are represented using Boolean graphs. The following steps are
followed:
1. Identify inputs (causes) and outputs (effects).
2. Develop the cause-effect graph.
3. Transform the graph into a decision table.
4. Convert the decision table rules to test cases.
For example, a cause-effect graph (figure omitted) can be converted into a decision
table (table omitted). Each column of the table corresponds to a rule, which
becomes a test case for testing. So if the table has 4 rules, there will be 4 test
cases. A small sketch of step 4 appears below.
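As a hedged illustration of step 4 (the graph and table themselves are not
reproduced here), this Python sketch turns a small, made-up decision table into
test cases; the causes and effects are hypothetical.

# Hypothetical decision table: each rule maps cause values to an expected effect.
# Causes: c1 = "valid username", c2 = "valid password"; effect: login outcome.
rules = [
    {"c1": True,  "c2": True,  "effect": "login_ok"},
    {"c1": True,  "c2": False, "effect": "password_error"},
    {"c1": False, "c2": True,  "effect": "username_error"},
    {"c1": False, "c2": False, "effect": "username_error"},
]

# Each rule becomes one test case, so 4 rules yield 4 test cases.
for i, rule in enumerate(rules, start=1):
    print(f"Test case {i}: inputs={{'c1': {rule['c1']}, 'c2': {rule['c2']}}} "
          f"expected={rule['effect']}")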
5. Requirement-based testing – It includes validating the requirements given in the
SRS of the software system.
6. Compatibility testing – The test case results depend not only on the product but
also on the infrastructure delivering its functionality. When the infrastructure
parameters are changed, the software is still expected to work properly. Some
parameters that generally affect the compatibility of software are:
1. Processor (Pentium 3, Pentium 4) and number of processors.
2. Architecture and characteristics of the machine (32-bit or 64-bit).
3. Back-end components such as database servers.
4. Operating system (Windows, Linux, etc.).

Types of Software Testing


Introduction:-

Testing is the process of executing a program with the aim of finding errors. To
perform well, software should be error free. If testing is done successfully, it will
remove many of the errors from the software.

Principles of Testing:-

(i) All tests should meet the customer requirements.
(ii) To be effective, software testing should be performed by a third party.
(iii) Exhaustive testing is not possible; we need an optimal amount of testing based
on the risk assessment of the application.
(iv) All tests to be conducted should be planned before implementing them.
(v) Testing follows the Pareto rule (80/20 rule), which states that 80% of errors come
from 20% of program components.
(vi) Start testing with small parts and extend it to larger parts.
Types of Testing:-

1. Unit Testing

It focuses on the smallest unit of software design. In this, we test an individual unit
or a group of interrelated units. It is often done by the programmer, using sample
inputs and observing the corresponding outputs.
Example: typical defects caught at this level include
a) a loop, method or function that does not work correctly,
b) misunderstood or incorrect arithmetic precedence, and
c) incorrect initialization.
A minimal unit-test sketch follows.
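Here is a small Python unit test using the standard unittest module; the function
under test and its cases are hypothetical.

import unittest

def discounted_price(price, percent):
    # Unit under test: apply a percentage discount.
    return round(price * (1 - percent / 100), 2)

class TestDiscountedPrice(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(discounted_price(200.0, 10), 180.0)

    def test_zero_discount(self):
        self.assertEqual(discounted_price(99.99, 0), 99.99)

if __name__ == "__main__":
    unittest.main()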

2. Integration Testing

The objective is to take unit-tested components and build a program structure that
has been dictated by the design. Integration testing is testing in which a group of
components is combined to produce output.

Integration testing is of two types: (i) top-down and (ii) bottom-up.
Example
(a) Black box testing: It is used for validation. In this, we ignore the internal
working mechanism and focus on what the output is.

(b) White box testing: It is used for verification. In this, we focus on the internal
mechanism, i.e., how the output is achieved.

3. Regression Testing

Every time a new module is added, it leads to changes in the program. This type of
testing makes sure that the whole component works properly even after adding
components to the complete program.
Example
In a school record system, suppose we have the modules staff, students and
finance. Combining these modules and checking whether, on integration, they still
work fine is regression testing.
4. Smoke Testing

This test is done to make sure that the software under testing is ready or stable for
further testing.
It is called a smoke test because the initial testing pass checks that the software
did not catch fire or smoke when first switched on.
Example:
If a project has two modules, then before going to module 2, make sure that
module 1 works properly.

5. Alpha Testing

This is a type of validation testing. It is a type of acceptance testing which is done
before the product is released to customers. It is typically done by QA people.
Example:
When software testing is performed internally within the organization.

6. Beta Testing

The beta test is conducted at one or more customer sites by the end users of the
software. This version is released to a limited number of users for testing in a
real-time environment.
Example:
When software testing is performed by a limited number of people outside the
organization.

7. System Testing

In this, the software is tested such that it works fine on different operating systems.
It is covered under the black box testing technique, in which we just focus on the
required inputs and outputs without focusing on the internal working.
It includes security testing, recovery testing, stress testing and performance
testing.
Example:
This includes functional as well as non-functional testing.

8. Stress Testing

In this, we give unfavorable conditions to the system and check how it performs
under those conditions.
Example:
(a) Test cases that require maximum memory or other resources are executed.
(b) Test cases that may cause thrashing in a virtual operating system.
(c) Test cases that may cause excessive disk requirements.

9. Performance Testing

It is designed to test the run-time performance of software within the context of an
integrated system. It is used to test the speed and effectiveness of a program.
Example:
Checking the time or the number of processor cycles an operation consumes, as in
the timing sketch below.
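As a small illustration, the following Python sketch measures run time with the
standard timeit module; the workload being timed is hypothetical.

import timeit

def build_report(n=10_000):
    # Hypothetical workload whose speed we want to measure.
    return sum(i * i for i in range(n))

# Average wall-clock time per call over 100 runs.
elapsed = timeit.timeit(build_report, number=100) / 100
print(f"average time per call: {elapsed:.6f} s")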

Software Engineering | Debugging


Introduction:
In the context of software engineering, debugging is the process of fixing a bug in the
software. In other words, it refers to identifying, analyzing and removing errors. This
activity begins after the software fails to execute properly and concludes by solving
the problem and successfully testing the software. It is considered to be an
extremely complex and tedious task because errors need to be resolved at all stages
of debugging.

Debugging Process: Steps involved in debugging are:


 Problem identification and report preparation.
 Assigning the report to a software engineer to verify that the defect is
genuine.
 Defect analysis using modeling, documentation, finding and testing candidate
flaws, etc.
 Defect resolution by making the required changes to the system.
 Validation of the corrections.

Debugging Strategies:
1. Study the system for a longer duration in order to understand it. This helps
the debugger construct different representations of the system being
debugged, depending on the need. A study of the system is also done actively
to find recent changes made to the software.
2. Backward analysis of the problem, which involves tracing the program
backward from the location of the failure message in order to identify the
region of faulty code. A detailed study of the region is conducted to find the
cause of the defect.
3. Forward analysis of the program, which involves tracing the program forward
using breakpoints or print statements at different points in the program and
studying the results. The region where the wrong outputs are obtained is the
region that needs to be focused on to find the defect. A small sketch appears
after this list.
4. Using past experience of debugging software with similar problems. The
success of this approach depends on the expertise of the debugger.
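For example, forward analysis with print statements might look like the following
Python sketch; the buggy function is invented for illustration.

def average(values):
    total = 0
    for v in values:
        total += v
        print(f"after adding {v}: total={total}")  # trace point for forward analysis
    # Bug: dividing by a hard-coded length instead of len(values).
    result = total / 10
    print(f"result={result}")  # wrong output observed here -> defect is nearby
    return result

average([2, 4, 6])  # traces show total=12 is correct, so the division is at fault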

Debugging Tools:
A debugging tool is a computer program that is used to test and debug other
programs. Many freely available tools like gdb and dbx can be used for debugging.
They offer console-based command-line interfaces. Examples of automated
debugging tools include code-based tracers, profilers, interpreters, etc.
Some of the widely used debuggers are:
 Radare2
 WinDbg
 Valgrind

Difference Between Debugging and Testing:

Debugging is different from testing. Testing focuses on finding bugs, errors, etc.,
whereas debugging starts after a bug has been identified in the software. Testing is
used to ensure that the program does what it is supposed to do, with a certain
minimum success rate. Testing can be manual or automated. There are several
different types of testing, like unit testing, integration testing, alpha and beta
testing, etc.
Debugging requires a lot of knowledge, skill, and expertise. It can be supported by
some available automated tools, but it is largely a manual process, as every bug is
different and requires a different technique, unlike a pre-defined testing
mechanism.

Software Engineering | COCOMO Model


COCOMO (Constructive Cost Model) is a regression model based on LOC, i.e., the
number of Lines of Code. It is a procedural cost-estimation model for software
projects and is often used as a process for reliably predicting the various
parameters associated with a project, such as size, effort, cost, time and quality. It
was proposed by Barry Boehm in 1981 and is based on the study of 63 projects,
which makes it one of the best-documented models.
The key parameters which define the quality of any software product, and which
are also an outcome of COCOMO, are primarily Effort and Schedule:
 Effort: the amount of labor that will be required to complete a task. It is
measured in person-months.
 Schedule: simply means the amount of time required for the completion of
the job, which is, of course, proportional to the effort put in. It is measured in
units of time such as weeks or months.
Different models of COCOMO have been proposed to predict the cost estimation at
different levels, based on the amount of accuracy and correctness required. All of
these models can be applied to a variety of projects, whose characteristics
determine the values of the constants to be used in subsequent calculations.
These characteristics of different system types are mentioned below.

Boehm’s definition of organic, semidetached, and embedded systems:


1. Organic – A software project is said to be of organic type if the required team
size is adequately small, the problem is well understood and has been solved
in the past, and the team members have nominal experience regarding the
problem.
2. Semi-detached – A software project is said to be of semi-detached type if
vital characteristics such as team size, experience, and knowledge of the
various programming environments lie in between those of organic and
embedded. Projects classified as semi-detached are comparatively less
familiar and more difficult to develop than organic ones and require more
experience, better guidance and creativity. E.g., compilers or different
embedded systems can be considered of semi-detached type.
3. Embedded – A software project requiring the highest level of complexity,
creativity, and experience falls under this category. Such software requires a
larger team size than the other two models, and the developers need to be
sufficiently experienced and creative to develop such complex models.
All the above system types utilize different values of the constants used in Effort
Calculations.

Types of Models: COCOMO consists of a hierarchy of three increasingly detailed
and accurate forms. Any of the three forms can be adopted according to our
requirements. These are the types of COCOMO model:
1. Basic COCOMO Model
2. Intermediate COCOMO Model
3. Detailed COCOMO Model
The first level, Basic COCOMO, can be used for quick and slightly rough
calculations of software costs. Its accuracy is somewhat restricted due to the
absence of sufficient factor considerations.
Intermediate COCOMO takes these cost drivers into account, and Detailed
COCOMO additionally accounts for the influence of individual project phases, i.e.,
the Detailed model accounts for both these cost drivers and also performs the
calculations phase-wise, thereby producing a more accurate result. These models
are further discussed below.
Estimation of Effort: Calculations –
1. Basic Model –

Effort (E) = a × (KLOC)^b person-months

The above formula is used for the cost estimation of the basic COCOMO model,
and is also used in the subsequent models. The constant values a and b for the
Basic Model for the different categories of system are:

SOFTWARE PROJECTS    a      b
Organic              2.4    1.05
Semi-Detached        3.0    1.12
Embedded             3.6    1.20

The effort is measured in person-months and, as evident from the formula, depends
on kilo-lines of code (KLOC). This formula is used as such in the Basic Model
calculations; since not much consideration of different factors such as reliability
and expertise is taken into account, the estimate is rough.
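A minimal Python sketch of the Basic COCOMO effort calculation follows, using the
constants from the table above; the 32 KLOC example size is invented.

# Basic COCOMO constants (a, b) per project category, from the table above.
BASIC = {"organic": (2.4, 1.05), "semi-detached": (3.0, 1.12), "embedded": (3.6, 1.20)}

def basic_effort(kloc, category):
    a, b = BASIC[category]
    return a * kloc ** b  # effort in person-months

print(f"{basic_effort(32, 'organic'):.1f} person-months")  # ~91.3 for 32 KLOC, organic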
2. Intermediate Model –
The Basic COCOMO model assumes that the effort is only a function of the
number of lines of code and some constants evaluated according to the
different software systems. However, in reality, no system's effort and schedule
can be calculated solely on the basis of lines of code. For that, various other
factors such as reliability, experience, and capability must be considered.
These factors are known as cost drivers, and the Intermediate Model utilizes
15 such drivers for cost estimation.
Classification of Cost Drivers and their attributes:
(i) Product attributes –
 Required software reliability extent
 Size of the application database
 The complexity of the product
(ii) Hardware attributes –
 Run-time performance constraints
 Memory constraints
 The volatility of the virtual machine environment
 Required turnabout time
(iii) Personnel attributes –
 Analyst capability
 Software engineering capability
 Applications experience
 Virtual machine experience
 Programming language experience
(iv) Project attributes –
 Use of software tools
 Application of software engineering methods
 Required development schedule
The cost driver ratings and the corresponding effort multipliers are:

COST DRIVERS                                   VERY LOW   LOW   NOMINAL   HIGH   VERY HIGH
Product Attributes
Required Software Reliability                    0.75     0.88    1.00    1.15     1.40
Size of Application Database                              0.94    1.00    1.08     1.16
Complexity of the Product                        0.70     0.85    1.00    1.15     1.30
Hardware Attributes
Runtime Performance Constraints                                   1.00    1.11     1.30
Memory Constraints                                                1.00    1.06     1.21
Volatility of the Virtual Machine Environment             0.87    1.00    1.15     1.30
Required Turnabout Time                                   0.94    1.00    1.07     1.15
Personnel Attributes
Analyst Capability                               1.46     1.19    1.00    0.86     0.71
Applications Experience                          1.29     1.13    1.00    0.91     0.82
Software Engineer Capability                     1.42     1.17    1.00    0.86     0.70
Virtual Machine Experience                       1.21     1.10    1.00    0.90
Programming Language Experience                  1.14     1.07    1.00    0.95
Project Attributes
Application of Software Engineering Methods      1.24     1.10    1.00    0.91     0.82
Use of Software Tools                            1.24     1.10    1.00    0.91     0.83
Required Development Schedule                    1.23     1.08    1.00    1.04     1.10

The project manager rates each of these 15 parameters for a particular project on
a scale ranging from very low to very high. Then, depending on these ratings, the
appropriate cost driver values are taken from the above table. These 15 values are
multiplied together to give the EAF (Effort Adjustment Factor). The Intermediate
COCOMO formula now takes the form:

Effort (E) = a × (KLOC)^b × EAF person-months

The values of a and b in the case of the intermediate model are as follows:

SOFTWARE PROJECTS    a      b
Organic              3.2    1.05
Semi-Detached        3.0    1.12
Embedded             2.8    1.20
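Below is a short Python sketch of the Intermediate COCOMO calculation, assuming
a hypothetical project whose only non-nominal drivers are high product complexity
(1.15) and high analyst capability (0.86).

from math import prod

INTERMEDIATE = {"organic": (3.2, 1.05), "semi-detached": (3.0, 1.12),
                "embedded": (2.8, 1.20)}

def intermediate_effort(kloc, category, multipliers):
    a, b = INTERMEDIATE[category]
    eaf = prod(multipliers)  # Effort Adjustment Factor: product of the driver values
    return a * kloc ** b * eaf

# Nominal drivers contribute 1.00, so only non-nominal ratings need to be listed.
print(f"{intermediate_effort(32, 'organic', [1.15, 0.86]):.1f} person-months")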


3. Detailed Model –
Detailed COCOMO incorporates all characteristics of the intermediate version,
with an assessment of the cost drivers' impact on each step of the software
engineering process. The detailed model uses different effort multipliers for
each cost driver attribute. In Detailed COCOMO, the whole software is divided
into different modules; COCOMO is then applied to the different modules to
estimate effort, and the efforts are summed.
The six phases of Detailed COCOMO are:
 Planning and requirements
 System design
 Detailed design
 Module code and test
 Integration and test
 Cost constructive model
The effort is calculated as a function of program size, and a set of cost drivers
is given according to each phase of the software lifecycle.
Software Measurement in Software Engineering

To assess the quality of the engineered product or system and to better understand
the models that are created, some measures are used. These measures are collected
throughout the software development life cycle with an intention to improve the
software process on a continuous basis. Measurement helps in estimation, quality
control, productivity assessment and project control throughout a software project.
Also, measurement is used by software engineers to gain insight into the design and
development of the work products. In addition, measurement assists in strategic
decision-making as a project proceeds.
Software measurements fall into two categories: direct measures and indirect
measures. Direct measures include process measures like cost and effort applied,
and product measures like lines of code produced, execution speed, and the
number of defects reported. Indirect measures include product attributes like
functionality, quality, complexity, reliability, maintainability, and many more.
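As a simple illustration of a direct measure, the Python sketch below counts lines
of code in a file; treating every non-blank, non-comment line as "code" is a
simplifying assumption, and the file name is hypothetical.

def count_loc(path):
    # Direct measure: lines of code, ignoring blank lines and '#' comments.
    with open(path, encoding="utf-8") as f:
        return sum(1 for line in f
                   if line.strip() and not line.strip().startswith("#"))

print(count_loc("example.py"))  # hypothetical source file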
Generally, software measurement is considered a management tool which, if used
effectively, helps the project manager and the entire software team take decisions
that lead to the successful completion of the project.
The measurement process is characterized by a set of five activities, which are
listed below.

 Formulation: the derivation of software measures and metrics appropriate for
the software under consideration.
 Collection: the mechanism for collecting the data needed to derive the
formulated metrics.
 Analysis: the computation of metrics, using mathematical tools where
needed.
 Interpretation: the evaluation of the metrics to gain insight into the quality of
the representation.
 Feedback: communicating the recommendations derived from the product
metrics to the software team.

Note that collection and analysis activities drive the measurement process. In order
to perform these activities effectively, it is recommended to automate data collection
and analysis, establish guidelines and recommendations for each metric, and use
statistical techniques to interrelate external quality features and internal product
attributes.
Zipf's Law

Zipf's Law is a statistical distribution found in certain data sets, such as words in a
linguistic corpus, in which the frequencies of certain words are inversely
proportional to their ranks. Named for the linguist George Kingsley Zipf, who
around 1935 was the first to draw attention to this phenomenon, the law examines
the frequency of words in natural language: the most common word occurs about
twice as often as the second most frequent word, three times as often as the third
most frequent word, and so on down to the least frequent word. The word in
position n appears 1/n times as often as the most frequent one.

When words are ranked according to their frequencies in a large enough collection
of texts and the frequency is then plotted against the rank, the result is a
logarithmic curve. (Or, if you graph it on a log scale, the result is a straight line.)
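A small Python sketch of this rank-frequency computation; the sample text is
invented and far too small to exhibit the law, so it only illustrates the procedure.

from collections import Counter

text = "the cat sat on the mat and the dog sat on the log"  # toy corpus
counts = Counter(text.split())

# Rank words by frequency and compare each to the Zipf prediction f(1)/rank.
ranked = counts.most_common()
top_freq = ranked[0][1]
for rank, (word, freq) in enumerate(ranked, start=1):
    print(f"rank {rank}: {word!r} freq={freq} zipf_prediction={top_freq / rank:.1f}")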

The most common word in English is “the,” which appears about one-tenth of the
time in a typical text; the next most common word (rank 2) is “of,” which appears
about one-twentieth of the time. In this type of distribution, frequency declines
sharply as the rank number increases, so a small number of items appear very
often, and a large number occur rarely.

A Zipfian distribution of words is universal in natural language: it can be found in
the speech of children less than 32 months old as well as in the specialized
vocabulary of university textbooks. Studies show that this phenomenon also
applies in nearly every language.

Individually, neither syntax nor semantics is sufficient to induce a Zipfian
distribution on its own. However, syntax and semantics work together to produce
one.
Only recently has Zipf’s Law been tested rigorously on databases large enough to
ensure statistical validity. Researchers at the Centre de Recerca Matematica, part
of the Government of Catalonia's CERCA network and attached to the Universitat
Autonoma de Barcelona Department of Mathematics, analyzed the full collection of
English-language texts in Project Gutenberg, a free database with more than
30,000 works. When the rarest words were left out, Zipf’s Law applied to more than
half of the words.

The law can be applied to fields other than literature. Zipfian distributions
have been found in the population ranks of cities in various countries,
corporation sizes, income rankings and ranks of the number of people
watching the same TV channel.

Estimation Techniques - Wideband Delphi

The Delphi Method is a structured communication technique, originally developed
as a systematic, interactive forecasting method which relies on a panel of experts.
The experts answer questionnaires in two or more rounds. After each round, a
facilitator provides an anonymous summary of the experts’ forecasts from the
previous round, with the reasons for their judgments. Experts are then encouraged
to revise their earlier answers in light of the replies of other members of the panel.

It is believed that during this process the range of answers will decrease
and the group will converge towards the "correct" answer. Finally, the
process is stopped after a predefined stop criterion (e.g. number of
rounds, achievement of consensus, and stability of results) and the mean
or median scores of the final rounds determine the results.

The Delphi Method was developed in the 1950s-1960s at the RAND Corporation.
Wideband Delphi Technique
In the 1970s, Barry Boehm and John A. Farquhar originated the Wideband variant
of the Delphi Method. The term "wideband" is used because, compared to the
original Delphi Method, the Wideband Delphi Technique involves greater interaction
and more communication between the participants.

In the Wideband Delphi Technique, the estimation team comprises the project
manager, a moderator, experts, and representatives from the development team,
constituting a 3-7 member team. There are two meetings −

 Kickoff Meeting

 Estimation Meeting

Wideband Delphi Technique – Steps


Step 1 − Choose the Estimation team and a moderator.

Step 2 − The moderator conducts the kickoff meeting, in which the team is
presented with the problem specification, a high-level task list, and any
assumptions or project constraints. The team discusses the problem and
estimation issues, if any. They also decide on the units of estimation. The
moderator guides the entire discussion, monitors the time and, after the kickoff
meeting, prepares a structured document containing the problem specification, the
high-level task list, the assumptions, and the units of estimation that were decided.
He then forwards copies of this document for the next step.

Step 3 − Each Estimation team member then individually generates a detailed
WBS, estimates each task in the WBS, and documents the assumptions made.
Step 4 − The moderator calls the Estimation team for the Estimation
meeting. If any of the Estimation team members respond saying that the
estimates are not ready, the moderator gives more time and resends the
Meeting Invite.

Step 5 − The entire Estimation team assembles for the estimation
meeting.

Step 5.1 − At the beginning of the Estimation meeting, the moderator collects the
initial estimates from each of the team members.

Step 5.2 − He then plots a chart on the whiteboard. He plots each member’s total
project estimate as an X on the Round 1 line, without disclosing the corresponding
names. The Estimation team gets an idea of the range of estimates, which initially
may be large.
Step 5.3 − Each team member reads aloud the detailed task list that
he/she made, identifying any assumptions made and raising any
questions or issues. The task estimates are not disclosed.

The individual detailed task lists contribute to a more complete task list
when combined.

Step 5.4 − The team then discusses any doubt/problem they have about
the tasks they have arrived at, assumptions made, and estimation
issues.

Step 5.5 − Each team member then revisits his/her task list and
assumptions, and makes changes if necessary. The task estimates also
may require adjustments based on the discussion, which are noted as +N
Hrs. for more effort and –N Hrs. for less effort.

The team members then combine the changes in the task estimates to
arrive at the total project estimate.

Step 5.6 − The moderator collects the changed estimates from all the
team members and plots them on the Round 2 line.

In this round, the range will be narrower compared to the earlier one, as
it is more consensus based.
Step 5.7 − The team then discusses the task modifications they have
made and the assumptions.

Step 5.8 − Each team member then revisits his/her task list and
assumptions, and makes changes if necessary. The task estimates may
also require adjustments based on the discussion.

The team members then once again combine the changes in the task
estimate to arrive at the total project estimate.

Step 5.9 − The moderator collects the changed estimates from all the
members again and plots them on the Round 3 line.

Again, in this round, the range will be narrower compared to the earlier
one.

Step 5.10 − Steps 5.7, 5.8, 5.9 are repeated till one of the following
criteria is met −

 Results are converged to an acceptably narrow range.

 All team members are unwilling to change their latest estimates.

 The allotted Estimation meeting time is over.


Step 6 − The Project Manager then assembles the results from the
Estimation meeting.

Step 6.1 − He compiles the individual task lists and the corresponding
estimates into a single master task list.

Step 6.2 − He also combines the individual lists of assumptions.

Step 6.3 − He then reviews the final task list with the Estimation team.
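To make the round structure concrete, here is a hedged Python sketch of the
estimate-collection loop; the convergence threshold, the number of rounds, and
the estimates themselves are invented for illustration.

def delphi_rounds(rounds_of_estimates, acceptable_range=5):
    # Each inner list holds the members' total estimates (person-days) for one round.
    for round_no, estimates in enumerate(rounds_of_estimates, start=1):
        spread = max(estimates) - min(estimates)
        print(f"Round {round_no}: estimates={estimates} spread={spread}")
        if spread <= acceptable_range:
            return sum(estimates) / len(estimates)  # converged: report the mean
    return None  # allotted rounds exhausted without convergence

result = delphi_rounds([[30, 55, 42], [35, 48, 40], [38, 42, 40]])
print(f"agreed estimate: {result} person-days")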

Advantages and Disadvantages of Wideband Delphi Technique
Advantages

 Wideband Delphi is a consensus-based technique for estimating effort.
 It is useful when estimating the time needed to do a task.
 The participation of experienced people who each estimate individually leads
to reliable results.
 The people who will do the work make the estimates, which makes the
estimates more valid.
 The anonymity maintained throughout makes it possible for everyone to
express their estimates confidently.
 It is a very simple technique.
 Assumptions are documented, discussed and agreed upon.

Disadvantages

 Management support is required.
 The estimation results may not be what the management wants to hear.

Software Engineering | Software Maintenance


Software Maintenance is the process of modifying a software product after it has
been delivered to the customer. The main purpose of software maintenance is to
modify and update a software application after delivery to correct faults and to
improve performance.
Need for Maintenance –
Software maintenance must be performed in order to:
 Correct faults.
 Improve the design.
 Implement enhancements.
 Interface with other systems.
 Accommodate programs so that different hardware, software, system
features, and telecommunications facilities can be used.
 Migrate legacy software.
 Retire software.

Categories of Software Maintenance –


Maintenance can be divided into the following:
1. Corrective maintenance:
Corrective maintenance of a software product may be essential either to rectify
bugs observed while the system is in use, or to enhance the performance of
the system.
2. Adaptive maintenance:
This includes modifications and updates when customers need the product to
run on new platforms or new operating systems, or when they need the
product to interface with new hardware or software.
3. Perfective maintenance:
A software product needs maintenance to support the new features that users
want, or to change different types of functionality of the system according to
customer demands.
4. Preventive maintenance:
This type of maintenance includes modifications and updates to prevent future
problems in the software. It aims to address problems which are not significant
at the moment but may cause serious issues in the future.
Reverse Engineering –
Reverse engineering is the process of extracting knowledge or design information
from anything man-made and reproducing it based on the extracted information. It
is also called back engineering.
Software Reverse Engineering –
Software reverse engineering is the process of recovering the design and the
requirements specification of a product from an analysis of its code. Reverse
engineering is becoming important, since several existing software products lack
proper documentation, are highly unstructured, or have structures that have
degraded through a series of maintenance efforts.
Why Reverse Engineering?
 Providing proper system documentation.
 Recovery of lost information.
 Assisting with maintenance.
 Facilitating software reuse.
 Discovering unexpected flaws or faults.
Uses of Software Reverse Engineering –
 Software reverse engineering is used in software design; it enables the
developer or programmer to add new features to existing software with or
without knowing the source code.
 Reverse engineering is also useful in software testing; it helps testers study
virus and other malware code.

Software Configuration Management Tutorial

What is Software Configuration Management?
Configuration Management helps organizations to systematically manage,
organize, and control changes to the documents, code, and other entities during
the Software Development Life Cycle. It is abbreviated as the SCM process. It aims
to control the cost and work effort involved in making changes to the software
system. The primary goal is to increase productivity with minimal mistakes.

Why do we need Configuration management?

The primary reasons for implementing a Software Configuration Management
system are:

 Multiple people work on software that is continually being updated.
 Multiple versions, branches, and authors may be involved in a software
project, and the team may be geographically distributed and working
concurrently.
 Changes in user requirements, policy, budget, and schedule need to be
accommodated.
 Software should be able to run on various machines and operating systems.
 It helps to develop coordination among stakeholders.
 The SCM process is also beneficial for controlling the costs involved in
making changes to a system.

Any change in the software configuration items will affect the final product.
Therefore, changes to configuration items need to be controlled and managed.

Tasks in SCM process


Configuration Identification

Baselines

Change Control

Configuration Status Accounting

Configuration Audits and Reviews

Configuration Identification:
Configuration identification is a method of determining the scope of the software
system; this step identifies the items that must be managed and controlled. Each
item gets a description that contains the CSCI type (Computer Software
Configuration Item), a project identifier and version information.
Activities during this process:

 Identification of configuration items like source code modules, test cases,
and requirements specifications.
 Identification of each CSCI in the SCM repository, using an object-oriented
approach.
 The process starts with basic objects, which are grouped into aggregate
objects, with details of what, why, when and by whom changes are made.
 Every object has its own features that identify a name that is explicit to all
other objects.
 A list of the resources required, such as documents, files, tools, etc.

Example:

Instead of naming a file login.php, it should be named login_v1.2.php, where v1.2
stands for the version number of the file.

Instead of naming a folder "Code", it should be named "Code_D", where D
represents that the code should be backed up daily.

Baseline:
A baseline is a formally accepted version of a software configuration item. It is
designated and fixed at a specific time while conducting the SCM process. It can
only be changed through formal change control procedures.

Activities during this process:

 Facilitating construction of various versions of an application.
 Defining and determining mechanisms for managing various versions of
these work products.
 The functional baseline corresponds to the reviewed system requirements.
 Widely used baselines include functional, developmental, and product
baselines.

In simple words, a baseline means ready for release.

Change Control:
Change control is a procedural method which ensures quality and consistency
when changes are made to a configuration object. In this step, a change request is
submitted to the software configuration manager.
Activities during this process:

 Controlling ad-hoc changes to build a stable software development
environment. Changes are committed to the repository.
 The request is checked based on its technical merit, possible side effects
and overall impact on other configuration objects.
 It manages changes and makes configuration items available during the
software lifecycle.

Configuration Status Accounting:

Configuration status accounting tracks each release during the SCM process. This
stage involves tracking what each version contains and the changes that led to
that version.

Activities during this process:

 Keeps a record of all the changes made to the previous baseline to reach a
new baseline.
 Identifies all items that define the software configuration.
 Monitors the status of change requests.
 Maintains a complete listing of all changes since the last baseline.
 Allows tracking of progress to the next baseline.
 Allows previous releases/versions to be extracted for testing.

Configuration Audits and Reviews:

Software configuration audits verify that the software product satisfies the baseline
needs. They ensure that what is built is what is delivered.

Activities during this process:

 Configuration auditing is conducted by auditors who check that the defined
processes are being followed and ensure that the SCM goals are satisfied.
 Verifying compliance with configuration control standards, and auditing and
reporting the changes made.
 SCM audits also ensure that traceability is maintained during the process.
 Ensuring that changes made to a baseline comply with the configuration
status reports.
 Validation of completeness and consistency.
Participants in the SCM process:
Following are the key participants in SCM:

1. Configuration Manager

 The Configuration Manager is the head who is responsible for identifying
configuration items.
 The CM ensures the team follows the SCM process.
 He/She needs to approve or reject change requests.

2. Developer

 The developer needs to change the code as per standard development
activities or change requests, and is responsible for maintaining the
configuration of the code.
 The developer should check the changes and resolve conflicts.

3. Auditor

 The auditor is responsible for SCM audits and reviews.
 The auditor needs to ensure the consistency and completeness of each
release.

4. Project Manager

 Ensures that the product is developed within a certain time frame.
 Monitors the progress of development and recognizes issues in the SCM
process.
 Generates reports about the status of the software system.
 Makes sure that processes and policies are followed for creating, changing,
and testing.

5. User

The end user should understand the key SCM terms to ensure he has the latest
version of the software.

Software Configuration Management Plan

The SCMP (Software Configuration Management Plan) process begins at the early
phases of a project. The outcome of the planning phase is the SCM plan, which
might be stretched or revised during the project.

 The SCMP can follow a public standard like IEEE 828 or an organization-
specific standard.
 It defines the types of documents to be managed and a document naming
convention, e.g., Test_v1.
 The SCMP defines the person who will be responsible for the entire SCM
process and the creation of baselines.
 It fixes policies for version management and change control.
 It defines the tools which can be used during the SCM process.
 It defines the configuration management database for recording configuration
information.

Software Configuration Management Tools

Any change management software should have the following three key features:

Concurrency Management:

When two or more tasks happen at the same time, they are known as concurrent
operations. Concurrency in the context of SCM means that the same file is being
edited by multiple people at the same time.

If concurrency is not managed correctly with SCM tools, it may create many
pressing issues.
Version Control:
SCM uses an archiving method that saves every change made to a file. With the
help of this archiving feature, it is possible to roll back to a previous version in case
of issues.

Synchronization:

Users can check out more than one file or an entire copy of the repository. The
user then works on the needed files and checks the changes back into the
repository. They can synchronize their local copy to stay updated with the changes
made by other team members.

Following are some popular tools:

1. Git: Git is a free and open-source tool which provides version control. It is
designed to handle all types of projects with speed and efficiency.

2. Team Foundation Server: Team Foundation Server is a group of tools and
technologies that enable a team to collaborate and coordinate on building a
product.

3. Ansible: Ansible is an open-source software configuration management tool.
Apart from configuration management, it also offers application deployment and
task automation.

Conclusion:
 Configuration Management helps organizations to systematically manage,
organize, and control changes to the documents, code, and other entities
during the Software Development Life Cycle.
 The primary goal of the SCM process is to increase productivity with minimal
mistakes.
 The main reason for the configuration management process is that multiple
people work on software that is continually being updated. SCM helps
establish concurrency, synchronization, and version control.
 A baseline is a formally accepted version of a software configuration item.
 Change control is a procedural method which ensures quality and
consistency when changes are made to a configuration object.
 Configuration status accounting tracks each release during the SCM
process.
 Software configuration audits verify that the software product satisfies the
baseline needs.
 Project manager, configuration manager, developer, auditor, and user are
participants in the SCM process.
 SCM process planning begins at the early phases of a project.
 Git, Team Foundation Server and Ansible are a few popular SCM tools.

DELPHI METHOD (COST ESTIMATION MODEL)

1. Wideband Delphi Method
2. Introduction • Predicting the resources required for a software development
process. • Software cost and effort estimation will never be an exact science:
too many variables (human, technical, environmental, political, etc.) can affect
the ultimate cost of software and the effort applied to develop it. • If
expectations are not realistic from the beginning of the project, stakeholders
will not trust the team or the project manager. • However, software project
estimation can be transformed from a black art into a series of systematic
steps that provide estimates with acceptable risk.
3. Quote • Watts Humphrey, in his book Managing the Software Process, has
said, “If you don’t know where you are, a map won’t help.” This saying is very
relevant to software project estimation. In a software project, unless you are
sure that your estimation is accurate, you cannot make much progress.
4. Fundamental estimation questions • How much effort is required to complete
an activity? • How much calendar time is needed to complete an activity? •
What is the total cost of an activity? • Project estimation and scheduling are
interleaved management activities.
5. Cost Components • Hardware and software costs. • Travel and training costs.
• Effort costs (the dominant factor in most projects): the salaries of the
engineers involved in the project, plus social and insurance costs. • Effort
costs must take overheads into account: costs of building, heating and
lighting; costs of networking and communications; costs of shared facilities
(e.g., library, staff restaurant, etc.).
6. Costing and pricing • Estimates are made to discover the cost, to the
developer, of producing a software system. • There is not a simple relationship
between the development cost and the price charged to the customer. •
Broader organizational, economic, political and business considerations
influence the price charged.
7. Estimation techniques • There is no simple way to make an accurate estimate
of the effort required to develop a software system. • Initial estimates are
based on inadequate information in a user requirements definition. • The
software may run on unfamiliar computers or use new technology. • The
people on the project may be unknown. • Project cost estimates may be
self-fulfilling: the estimate defines the budget, and the product is adjusted to
meet the budget.
8. Estimate uncertainty (figure omitted)
9. Estimation Tools • Work Breakdown Structure (WBS): dividing the work into
logical units/tasks. • Creating a WBS is a prerequisite for any estimation
activity. It enables you to decompose an abstract entity, such as a project, into
distinct, independent units. • It gives an idea of the size and complexity of the
project. • It helps in planning, scheduling, and monitoring a project realistically.
10. Wideband Delphi Method • Delphi was developed in the 1950s at the RAND
Corporation as a forecasting tool and has since been adapted across many
industries for estimation. • It is a consensus-based estimation technique for
estimating effort. • It has proven to be a very effective estimation tool, and it
lends itself well to software projects. • Barry Boehm and John Farquhar
originated the Wideband variant of Delphi in the 1970s; it is called "wideband"
because, compared to the existing Delphi method, it involves greater
interaction and more communication between participants.
11. Wideband Delphi Process • Input work products: the vision and scope
document, or other documentation that defines the scope of the work product
being estimated. • Output work products: a work breakdown structure (WBS),
a list of assumptions, effort estimates for each of the tasks in the WBS and,
overall, a better understanding of the project. • Entry criteria: the vision and
scope document has been agreed by stakeholders, users, managers, and the
engineering team; the kickoff meeting and estimation session have been
scheduled (each at least two hours); the project manager and the moderator
agree on the goal of identifying the scope of the work to be estimated.
12. Wideband Delphi Steps • Team Selection: the project manager selects a
moderator and an estimation team with 3 to 7 members. • Kickoff Meeting: the
first meeting, during which the estimation team creates a WBS and discusses
assumptions. • Individual Preparation: after the meeting, each team member
creates an effort estimate for each task. • Estimation Session: the second
meeting, in which the team revises the estimates as a group and achieves
consensus. • Assemble Tasks: after the estimation session, the project
manager summarizes the results and reviews them with the team. • Review
Results: review the results that have come out of the estimation session.
13. Team Selection • Picking a qualified team is an important part of generating
accurate estimates. • Team members must be willing to estimate each task
honestly, and should be comfortable working with the rest of the team. • They
should be knowledgeable about the organization's needs and past engineering
projects, to make educated estimates. • The team should include
representatives from each area of the development team: managers,
developers, designers, architects, QA, analysts, technical writers, etc. • The
moderator should be familiar with the Delphi process, but should not have a
stake in the outcome of the session. • The project manager should avoid the
moderator role and should instead be part of the estimation team. • One or
more observers (selected stakeholders, users, and managers) should be
encouraged to attend the meeting.
14. Kickoff Meeting • Team members are given the vision, scope and other
documents. • A goal statement for the estimation session should be agreed
upon by the project manager and the moderator and distributed to the team
before the session. • It should be no more than a few sentences describing the
scope of the work that is to be estimated. • Example: "Generate estimates for
programming and testing the first phase of the RedRock Project." • The
moderator leads the meeting.
15. Kickoff Meeting (Cont'd) • The meeting consists of these activities: the
moderator explains the Wideband Delphi method to any new estimators; if any
team member has not yet read the vision and scope and supporting
documents, the moderator reviews them with the team; the moderator reviews
the goal of the estimation session with the team and checks that the team
members are knowledgeable enough to contribute; the team discusses the
product being developed and brainstorms assumptions; the team generates a
task list consisting of 10-20 major tasks, which represent the top level of the
work breakdown structure; the team agrees on the units of estimation (days,
weeks, pages). • Disagreement among team members can result from missing
requirements about which programs or tasks are to be included, or from
differing assumptions.
16. Individual Preparation • After the kickoff meeting, the moderator writes down
the assumptions and tasks generated by the team and distributes them. •
Each team member independently generates a set of preparation results,
containing an estimate for each of the tasks, any additional tasks that should
be included in the WBS but were missed during the kickoff meeting, and any
assumptions the member made in order to create the estimates. • Any effort
related to project overhead (status meetings, reports, vacation, etc.) should
NOT be taken into account; it should be added to the "Project overhead tasks"
section. • Potential delays (e.g., certain tasks cannot start until after specific
dates) should NOT be taken into account; they should be added to the
"Calendar waiting time" section. • Each estimate should be made in terms of
effort, not calendar time.
17. Estimation Form (figure omitted)
18. Estimation Session • The meeting consists of these activities: the moderator
collects all estimate forms, and the estimates are tabulated on a whiteboard by
plotting the totals on a line; the estimators read out clarifications and changes
to the task list written on their estimation forms, and new or changed tasks,
discovered assumptions, or questions are raised (specific estimate times are
NOT discussed); the team resolves issues or disagreements. Since individual
estimate times are not discussed, these disagreements are usually about the
tasks themselves and are often resolved by adding assumptions. • The
estimators then revise their individual estimates by filling in the "Delta" column
on their forms.
19. Assemble Tasks • The project manager works with the moderator to gather all
the results from the individual preparation and the estimation session. • The
project manager removes redundancies and resolves the remaining estimate
differences to generate a final task list with effort estimates. • The
assumptions are summarized and added to the list, and the vision and scope
document and other documents are updated with them. • The project manager
should create a spreadsheet that lists the final estimates each person came
up with. The spreadsheet should indicate the best-case and worst-case
scenarios. • Any task with an especially wide discrepancy should be marked
for further discussion. • The final task list should be in the same format as the
individual preparation results.
20. Summarized Estimation Results (figure omitted)
21. Review Results • Once the results are ready, the project manager calls a final
meeting to review the estimation results with the team. • The goal of the
meeting is to determine whether the results of the session are sufficient for
further planning. The team should determine whether the estimates make
sense and the range is acceptable. They should also examine the final task list
to verify that it is complete. • There may be an area that needs to be refined:
for example, a task might need to be broken down into subtasks. In this case,
the team may agree to hold another estimation session to break those tasks
down into their own subtasks and estimate each of them. • This is also a good
way to handle any tasks that have a wide discrepancy between the best-case
and worst-case scenarios.
22. Pros & Cons. Pros: • Very simple process. • Consensus-based estimates are
often more accurate than individual estimates. • The people who will do the
work make the estimates. • Assumptions are documented, discussed and
agreed. Cons: • Difficult to repeat again and again with different groups of
experts. • It is possible to reach consensus on an incorrect estimate. • A false
sense of confidence may develop. • The experts may all be biased in the same
direction.
23. Conclusion • During the last ten years, the Delphi method has been used more
often, especially for national science and technology foresight, and some
modifications and methodological improvements have been made in the
meantime. Nevertheless, one has to be aware of the strengths and
weaknesses of the method, as it cannot be applied in every case. • The Delphi
method is best used as a complement to other research methods.
