
INTRODUCTION

Let us first understand what software engineering stands for. The term is made of two words,
software and engineering.
Software is more than just program code. A program is an executable code, which serves some
computational purpose. Software is considered to be a collection of executable programming code,
associated libraries and documentation. Software made for a specific requirement is called a
software product.
Engineering, on the other hand, is all about developing products using well-defined, scientific
principles and methods.
Software engineering is an engineering branch associated with the development of software products
using well-defined scientific principles, methods and procedures. The outcome of software
engineering is an efficient and reliable software product.
Definitions

IEEE defines software engineering as:


(1) The application of a systematic, disciplined, quantifiable approach to the
development, operation and maintenance of software; that is, the application of engineering to
software.
(2) The study of approaches as in the above statement.
Fritz Bauer, a German computer scientist, defines software engineering as:
Software engineering is the establishment and use of sound engineering principles in order to
obtain economically software that is reliable and works efficiently on real machines.

Need of Software Engineering


Let me present the importance of software engineering by comparison with its absence:
- Requirements collection can be incomplete or not deep enough.
- The lack of methodical, quantifiable methods leads to experience that cannot be compared or measured.
- The lack of systematic methods leads to weak or overly complex architectures.
- An inconsistent user interface becomes evident too late to be changed.
- Code becomes difficult to maintain.
- The absence of systematic testing strategies leads to omnipresent and never-ending bugs.
All these calamities can certainly be avoided by being very careful and experienced. In
most cases this implies strong discipline and personal rules; change the developer, however,
and that discipline is lost.

Characteristics of good software


The goal of software engineering is, of course, to design and develop better
software. However, what exactly does "better software" mean? In order to answer
this question, this lesson introduces some common software quality characteristics.
Six of the most important quality characteristics are maintainability, correctness,
reusability, reliability, portability, and efficiency.
Maintainability is "the ease with which changes can be made to satisfy new
requirements or to correct deficiencies" [Balci 1997]. Well designed software should
be flexible enough to accommodate future changes that will be needed as new
requirements come to light. Quite often the programmer responsible for writing a
section of code is not the one who must maintain it. For this reason, the quality of
the software documentation significantly affects the maintainability of the software
product.
Correctness is "the degree with which software adheres to its specified
requirements" [Balci 1997]. At the start of the software life cycle, the requirements
for the software are determined and formalized in the requirements specification
document. Well designed software should meet all the stated requirements.
Reusability is "the ease with which software can be reused in developing other
software" [Balci 1997]. By reusing existing software, developers can create more
complex software in a shorter amount of time. Reuse is already a common
technique employed in other engineering disciplines. In much the same way,
software can be designed to accommodate reuse in many situations. A simple
example of software reuse could be the development of an efficient sorting routine
that can be incorporated in many future applications.
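As a minimal, hypothetical sketch of such a reusable routine (the function name and parameters below are illustrative, not taken from any particular library), a sorting helper can be written once and then incorporated into any application that needs ordering:

```python
from typing import Any, Callable, Iterable, List, TypeVar

T = TypeVar("T")

def sort_records(items: Iterable[T],
                 key: Callable[[T], Any] = lambda x: x,
                 descending: bool = False) -> List[T]:
    """Reusable sorting routine: returns a new, sorted list.

    Because it accepts any iterable and an arbitrary key function,
    the same routine can be incorporated into many applications."""
    return sorted(items, key=key, reverse=descending)

# Reuse in two unrelated contexts:
invoices = [{"id": 3, "total": 120.0}, {"id": 1, "total": 75.5}]
print(sort_records(invoices, key=lambda r: r["total"]))   # order by amount
print(sort_records(["pear", "apple", "fig"]))             # order plain strings
```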
Reliability is "the frequency and criticality of software failure, where failure is an
unacceptable effect or behavior occurring under permissible operating conditions".
The frequency of software failure is measured by the average time between failures.
The criticality of software failure is measured by the average time required for
repair. Ideally, software engineers want their products to fail as little as possible
(i.e., demonstrate high correctness) and be as easy as possible to fix (i.e.,
demonstrate good maintainability). For some real-time systems such as air traffic
control or heart monitors, reliability becomes the most important software quality
characteristic.
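As a hedged illustration of these two measures (the data and variable names below are hypothetical), mean time between failures and mean time to repair can be computed directly from observed uptimes and repair durations:

```python
from statistics import mean

# Hypothetical observations, in hours.
uptimes_between_failures = [120.0, 200.0, 95.0, 310.0]   # time the system ran before each failure
repair_durations = [2.0, 1.5, 4.0, 0.5]                  # time needed to fix each failure

mtbf = mean(uptimes_between_failures)   # higher MTBF -> fewer failures (reliability)
mttr = mean(repair_durations)           # lower MTTR -> easier to fix (maintainability)

print(f"MTBF = {mtbf:.1f} h, MTTR = {mttr:.1f} h")
# Availability is often estimated as MTBF / (MTBF + MTTR):
print(f"Estimated availability: {mtbf / (mtbf + mttr):.3f}")
```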
Portability is "the ease with which software can be used on computer
configurations other than its current one". Porting software to other computer
configurations is important for several reasons. First, "good software products can
have a life of 15 years or more, whereas hardware is frequently changed at least
every 4 or 5 years. Thus good software can be implemented, over its lifetime, on
three or more different hardware configurations" [Schach 1999]. Second, porting
software to a new computer configuration may be less expensive than developing
analogous software from scratch.
Efficiency is "the degree with which software fulfills its purpose without waste of
resources". One measure of efficiency is the speed of a program's execution.

Another measure is the amount of storage space the program requires for
execution. Often these two measures are inversely related, that is, increasing the
execution efficiency causes a decrease in the space efficiency. This relationship is
known as the space-time tradeoff. When it is not possible to design a software
product with efficiency in every aspect, the most important resources of the
software are given priority.
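A minimal sketch of the space-time tradeoff (illustrative only): caching previously computed results spends extra storage space to gain execution speed.

```python
from functools import lru_cache
import time

def fib_slow(n: int) -> int:
    """Recomputes everything: no extra storage, slow for larger n."""
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)
def fib_cached(n: int) -> int:
    """Stores every intermediate result: more memory, much faster."""
    return n if n < 2 else fib_cached(n - 1) + fib_cached(n - 2)

for f in (fib_slow, fib_cached):
    start = time.perf_counter()
    f(30)
    print(f"{f.__name__}: {time.perf_counter() - start:.4f} s")
```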

What are the categories of software?


System software
Application software
Embedded software
Web Applications
Artificial Intelligence software
Scientific software.

Define testing?
Testing is the process of executing a program with the intent of finding an
error.
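A minimal, hypothetical example of this intent: the test below executes a (deliberately buggy) discount function and is written specifically to expose the error.

```python
def apply_discount(price: float, percent: float) -> float:
    # Hypothetical function under test; the defect is that it adds
    # the discount instead of subtracting it.
    return price + price * percent / 100

def test_apply_discount() -> None:
    # Executing the program with the intent of finding an error:
    assert apply_discount(100.0, 10.0) == 90.0, "10% off 100 should be 90"

try:
    test_apply_discount()
    print("no error found")
except AssertionError as err:
    print(f"error found: {err}")   # the test has fulfilled its purpose
```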

What is Black box testing?


Black Box Testing, also known as Behavioral Testing, is a software testing method in which the
internal structure/ design/ implementation of the item being tested is not known to the tester.
These tests can be functional or non-functional, though usually functional.

This method is named so because the software program, in the eyes of the tester, is like a black
box; inside which one cannot see.
EXAMPLE
A tester, without knowledge of the internal structure of a website, tests the web
pages by using a browser; providing inputs (clicks, keystrokes) and verifying the
outputs against the expected outcomes.
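As a hedged sketch of black-box testing at the code level (the function, its specification and the values are hypothetical), the tester exercises only the public interface and checks outputs against expected outcomes, without reading the implementation:

```python
import re
import unittest

def is_valid_username(name: str) -> bool:
    """Stand-in implementation; in real black-box testing the tester
    never looks at this body, only at the specification."""
    return bool(re.fullmatch(r"[a-z0-9_]{3,20}", name))

class BlackBoxUsernameTests(unittest.TestCase):
    """Test cases derived only from the assumed specification:
    3-20 characters, lowercase letters, digits and underscores."""

    def test_accepts_typical_username(self):
        self.assertTrue(is_valid_username("alice_01"))

    def test_rejects_empty_input(self):
        self.assertFalse(is_valid_username(""))

    def test_rejects_too_long_input(self):
        self.assertFalse(is_valid_username("x" * 300))

if __name__ == "__main__":
    unittest.main()
```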

What is white box testing?


White Box Testing (also known as Clear Box Testing, Open Box Testing, Glass Box Testing,
Transparent Box Testing, Code-Based Testing or Structural Testing) is a software testing method
in which the internal structure/ design/ implementation of the item being tested is known to the
tester. The tester chooses inputs to exercise paths through the code and determines the
appropriate outputs. Programming know-how and implementation knowledge are essential.
White box testing is testing beyond the user interface and into the nitty-gritty of a system.
This method is named so because the software program, in the eyes of the tester, is like a white/
transparent box; inside which one clearly sees.
EXAMPLE
A tester, usually a developer as well, studies the implementation code of a certain field on a
webpage, determines all legal (valid and invalid) AND illegal inputs and verifies the outputs
against the expected outcomes, which are also determined by studying the implementation code.
White Box Testing is like the work of a mechanic who examines the engine to see why the car is
not moving.
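A minimal white-box sketch (the function and values are hypothetical): the tester reads the implementation, identifies its branches, and chooses inputs so that every path through the code is exercised:

```python
def shipping_cost(weight_kg: float) -> float:
    """Implementation the white-box tester studies: three branches."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 1.0:
        return 5.0
    return 5.0 + (weight_kg - 1.0) * 2.0

# Inputs chosen from the code, one per branch, plus the boundary value 1.0:
assert shipping_cost(0.5) == 5.0          # light-parcel branch
assert shipping_cost(1.0) == 5.0          # boundary between branches
assert shipping_cost(3.0) == 9.0          # heavy-parcel branch
try:
    shipping_cost(-1.0)                   # error branch (illegal input)
except ValueError:
    pass
print("all paths exercised")
```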

What is gray box testing?


Gray Box Testing is a software testing method which is a combination of Black Box Testing
method and White Box Testing method. In Black Box Testing, the internal structure of the item
being tested is unknown to the tester and in White Box Testing the internal structure is known. In
Gray Box Testing, the internal structure is partially known. This involves having access to
internal data structures and algorithms for purposes of designing the test cases, but testing at the
user, or black-box level.
Gray Box Testing is named so because the software program, in the eyes of the tester is like a
gray/ semi-transparent box; inside which one can partially see.
EXAMPLE
An example of Gray Box Testing would be when the codes for two units/ modules are studied
(White Box Testing method) for designing test cases and actual tests are conducted using the
exposed interfaces (Black Box Testing method).
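As a hedged sketch of the gray-box idea (the class and values are hypothetical): the tests call only the public interface, but knowledge of the internal data structure, here a cache known from the code to evict its oldest entry once it holds three items, is used to design the test cases:

```python
from collections import OrderedDict
from typing import Optional

class SmallCache:
    """Unit under test. Its internals (an OrderedDict with a capacity of 3)
    are known to the gray-box tester from reading the code or design docs."""

    def __init__(self) -> None:
        self._data: "OrderedDict[str, str]" = OrderedDict()
        self._capacity = 3

    def put(self, key: str, value: str) -> None:
        self._data[key] = value
        if len(self._data) > self._capacity:
            self._data.popitem(last=False)   # evict the oldest entry

    def get(self, key: str) -> Optional[str]:
        return self._data.get(key)

# Test cases are designed around the internal capacity, but executed
# strictly through the public put/get interface (black-box style):
cache = SmallCache()
for i in range(4):                # insert one item more than the capacity
    cache.put(f"k{i}", f"v{i}")
assert cache.get("k0") is None    # the oldest entry must have been evicted
assert cache.get("k3") == "v3"    # the newest entry must still be present
print("gray-box checks passed")
```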

What is verification and validation?


What is Verification?

Verification is the process of evaluating the intermediary work products of a software development
lifecycle to check whether we are on the right track to creating the final product.
Now the question here is: what are the intermediary products? These can include the documents
produced during the development phases, such as the requirements specification, design documents,
database table design, ER diagrams, test cases, the traceability matrix, etc. We sometimes tend to
neglect the importance of reviewing these documents, but we should understand that reviewing itself
can find many hidden anomalies which, if found or fixed only in a later phase of the development
cycle, can be very costly.
In other words, we can also state that verification is a process to evaluate the intermediary
products of software to check whether they satisfy the conditions imposed at the beginning of
the phase.
What is Validation?

Validation is the process of evaluating the final product to check whether the software meets the
business needs. In simple words, the test execution which we do in our day-to-day life is actually
the validation activity, which includes smoke testing, functional testing, regression testing,
system testing, etc.
Difference between software Verification and Validation:

- Verification asks: Are we building the system right? Validation asks: Are we building the right system?
- Verification is the process of evaluating the products of a development phase to find out whether they meet the specified requirements. Validation is the process of evaluating the software at the end of the development process to determine whether it meets the customer's expectations and requirements.
- The objective of Verification is to make sure that the product being developed conforms to the requirements and design specifications. The objective of Validation is to make sure that the product actually meets the users' requirements, and to check whether the specifications were correct in the first place.
- Activities involved in Verification: reviews, meetings and inspections. Activities involved in Validation: testing, such as black box testing, white box testing, gray box testing, etc.
- Verification is carried out by the QA team to check whether the implemented software conforms to the specification documents. Validation is carried out by the testing team.
- Execution of code does not come under Verification. Execution of code comes under Validation.
- The Verification process checks whether the outputs are according to the inputs. The Validation process checks whether the software is accepted by the user.
- Verification is carried out before Validation. Validation is carried out just after Verification.
- Items evaluated during Verification: plans, requirement specifications, design specifications, code, test cases, etc. Item evaluated during Validation: the actual product or software under test.
- The cost of errors caught in Verification is lower than that of errors found in Validation. The cost of errors caught in Validation is higher than that of errors found in Verification.
- Verification is basically a manual check of documents and files such as requirement specifications. Validation is basically checking the developed program against the requirement specification documents and files.

In software project management, software testing, and software engineering,
verification and validation (V&V) is the process of checking that a software
system meets specifications and that it fulfills its intended purpose. It may also be
referred to as software quality control. It is normally the responsibility of software
testers as part of the software development lifecycle. In simple terms, software
verification is: "Assuming we should build X, does our software achieve its goals
without any bugs or gaps?" On the other hand, software validation is: "Was X what
we should have built? Does X meet the high level requirements?"

SOFTWARE DEVELOPMENT LIFE CYCLE


A life cycle describes a way, most commonly a sequence of phases or major events
and activities, that has been found to lead to success in some endeavor. Life cycles
typically go from cradle to grave. There are life cycles for all kinds of things,
including life: you're born, grow up, go to school, earn a living, raise a family, retire,
and die. It sounds kind of boring when stated this way, so most literature tends to
glamorize it (thank goodness).

Each phase produces deliverables required by the next phase in the life cycle. Requirements are
translated into design. Code is produced according to the design; this is called the development
phase. After coding and development, testing verifies the deliverables of the implementation
phase against the requirements.
The following six phases appear in every software development life cycle model:

1. Requirement gathering and analysis
2. Design
3. Implementation or coding
4. Testing
5. Deployment
6. Maintenance

Requirement gathering and analysis: Business requirements are gathered in this phase. This
phase is the main focus of the project managers and stakeholders. Meetings with managers,
stakeholders and users are held in order to determine the requirements, such as: Who is going to use
the system? How will they use the system? What data should be input into the system? What
data should be output by the system? These are general questions that get answered during a
requirements gathering phase. After requirement gathering, these requirements are analyzed for
their validity, and the possibility of incorporating them in the system to be developed is also
studied.
Finally, a Requirement Specification document is created, which serves as a guideline
for the next phase of the model.
Design: In this phase the system and software design is prepared from the requirement
specifications which were studied in the first phase. System Design helps in specifying hardware
and system requirements and also helps in defining overall system architecture. The system
design specifications serve as input for the next phase of the model.
In this phase the testers come up with the test strategy, where they specify what to test and how
to test it.
Implementation / Coding: On receiving the system design documents, the work is
divided into modules/units and actual coding starts. Since the code is produced in this phase,
it is the main focus for the developer. This is the longest phase of the
software development life cycle.
Testing: After the code is developed, it is tested against the requirements to verify that the
deliverables of the implementation phase actually satisfy the requirements gathered in the first
phase.

Deployment: After successful testing the product is delivered / deployed to the customer for
their use.
As soon as the product is given to the customers, they will first do beta testing. If any changes
are required or if any bugs are caught, they will report them to the engineering team. Once
those changes are made or the bugs are fixed, the final deployment will happen.

Maintenance: Once the customers start using the developed system, the actual
problems come up and need to be solved from time to time. This process, in which
care is taken of the developed product, is known as maintenance.
Waterfall Model

The waterfall model is the simplest model of the software development paradigm. It states that all
the phases of the SDLC will function one after another in a linear manner. That is, only when the
first phase is finished will the second phase start, and so on.
This model assumes that everything in the previous stage was carried out and took place perfectly
as planned, and that there is no need to think about past issues that may arise in the next phase.
This model does not work smoothly if there are issues left over from the previous step. The
sequential nature of the model does not allow us to go back and undo or redo our actions.
This model is best suited when developers have already designed and developed similar software
in the past and are aware of all its domains.
The Waterfall Model was the first process model to be introduced. It is also referred to as a
linear-sequential life cycle model. It is very simple to understand and use. In a waterfall model,
each phase must be completed fully before the next phase can begin. This type of model is basically
used for projects which are small and have no uncertain requirements. At the end of
each phase, a review takes place to determine whether the project is on the right path and whether
or not to continue or discard the project. In this model testing starts only after the development
is complete. In the waterfall model phases do not overlap.
Diagram of Waterfall-model:

Advantages of waterfall model:

This model is simple and easy to understand and use.

It is easy to manage due to the rigidity of the model: each phase has specific
deliverables and a review process.

In this model phases are processed and completed one at a time. Phases do not overlap.

Waterfall model works well for smaller projects where requirements are very well
understood.

Disadvantages of waterfall model:

Once an application is in the testing stage, it is very difficult to go back and change
something that was not well-thought out in the concept stage.

No working software is produced until late during the life cycle.

High amounts of risk and uncertainty.

Not a good model for complex and object-oriented projects.

Poor model for long and ongoing projects.

Not suitable for the projects where requirements are at a moderate to high risk of
changing.

When to use the waterfall model:

This model is used only when the requirements are very well known, clear and fixed.

Product definition is stable.

Technology is understood.

There are no ambiguous requirements

Ample resources with required expertise are available freely

The project is short.

Very little customer interaction is involved during the development of the product. Only once the
product is ready can it be demoed to the end users.
Iterative Model

This model drives the software development process in iterations. It carries out the process of
development in a cyclic manner, repeating every step after every cycle of the SDLC process.

The software is first developed on a very small scale, following all the steps that are taken
into consideration. Then, in every subsequent iteration, more features and modules are designed,
coded, tested and added to the software. Every cycle produces a piece of software that is complete
in itself and has more features and capabilities than the previous one.
After each iteration, the management team can work on risk management and prepare for the
next iteration. Because a cycle includes a small portion of the whole software process, it is
easier to manage the development process, but it consumes more resources.
Advantages of Iterative model:

In the iterative model we only create a high-level design of the application before we
actually begin to build the product, rather than defining the complete design solution for the
entire product up front. Later on we design and build a skeleton version of it, and then evolve
the design based on what has been built.

In the iterative model we are building and improving the product step by step. Hence we can
track the defects at early stages. This avoids the downward flow of the defects.

In the iterative model we can get reliable user feedback: instead of presenting only sketches and
blueprints of the product to users for their feedback, effectively asking them to imagine how the
product will work, we can show them working iterations.

In the iterative model less time is spent on documenting and more time is given to
designing.

Disadvantages of Iterative model:

Each phase of an iteration is rigid with no overlaps

Costly system architecture or design issues may arise because not all requirements are
gathered up front for the entire lifecycle

When to use iterative model:

Requirements of the complete system are clearly defined and understood.

When the project is big.

Major requirements must be defined; however, some details can evolve with time.

Incremental Method

The Incremental Model is a combination of one or more Waterfall models. In the Incremental Model,

project requirements are divided into multiple modules and each module is developed separately.
Finally, the developed modules are integrated with the other modules. During the development of
each module, the waterfall model is followed separately. Each module developed in the Incremental
Model is a standalone feature and can be delivered to the end users to use. On an incremental
basis, the other modules are integrated as additional features one after another and finally
delivered to the client. In the Incremental Model there is no need to wait for all the modules to
be developed and integrated. As each module is a standalone application with no dependencies on
other modules, we can deliver the project with the initially developed feature, and other features
can be added on an incremental basis with new releases. The incremental process continues until all
the requirements are fulfilled and the whole system is developed.
Example-1:

Consider, in the picture above, a square that has to be developed with features F1, F2, F3
and F4. In the Incremental Model all four features are divided into four small squares called
modules (M1, M2, M3 and M4). Once the first module (M1) is developed, it is delivered to the
client; later, after its development, the second module M2 is integrated with module M1.
Gradually we develop the other modules M3 and M4 and keep integrating them until the complete
square is ready.
Advantages of Incremental model:

Generates working software quickly and early during the software life cycle.

This model is more flexible; it is less costly to change scope and requirements.

It is easier to test and debug during a smaller iteration.

In this model the customer can respond to each build.

Lowers initial delivery cost.

Easier to manage risk because risky pieces are identified and handled during its
iteration.

Disadvantages of Incremental model:

Needs good planning and design.

Needs a clear and complete definition of the whole system before it can be broken down
and built incrementally.

Total cost is higher than waterfall.

When to use the Incremental model:

This model can be used when the requirements of the complete system are clearly defined
and understood.

Major requirements must be defined; however, some details can evolve with time.

There is a need to get a product to the market early.

Spiral Model

One of the most flexible SDLC methodologies, the Spiral model takes a cue from the Iterative
model and its repetition; the project passes through four phases over and over in a spiral until
completed, allowing for multiple rounds of refinement. This model allows for the building of a
highly customized product, and user feedback can be incorporated from early on in the project.
But the risk you run is creating a never-ending spiral for a project that goes on and on.
The spiral model is similar to the incremental model, with more emphasis placed on
risk analysis. The spiral model has four phases: Planning, Risk Analysis, Engineering
and Evaluation. A software project repeatedly passes through these phases in
iterations (called Spirals in this model). In the baseline spiral, starting in the planning
phase, requirements are gathered and risk is assessed. Each subsequent spiral
builds on the baseline spiral.

Planning Phase: Requirements are gathered during the planning phase. These include the
BRS, that is, the Business Requirement Specification, and the SRS, that is, the System
Requirement Specification.
Risk Analysis: In the risk analysis phase, a process is undertaken to identify risk and alternate
solutions. A prototype is produced at the end of the risk analysis phase. If any risk is found
during the risk analysis then alternate solutions are suggested and implemented.
Engineering Phase: In this phase software is developed, along with testing at the end of the
phase. Hence in this phase the development and testing is done.
Evaluation phase: This phase allows the customer to evaluate the output of the project to date
before the project continues to the next spiral.

Diagram of Spiral model:

Advantages of Spiral model:

Good for large and mission-critical projects.


Strong approval and documentation control.
Additional Functionality can be added at a later date.
Software is produced early in the software life cycle.

Disadvantages of Spiral model:

Can be a costly model to use.

Risk analysis requires highly specific expertise.

Project success is highly dependent on the risk analysis phase.

Doesn't work well for smaller projects.

When to use Spiral model:

When cost and risk evaluation is important

For medium to high-risk projects

V model

The major drawback of the waterfall model is that we move to the next stage only when the previous
one is finished, and there is no chance to go back if something is found to be wrong in later
stages. The V-Model provides a means of testing the software at each stage, in a reverse manner.
At every stage, test plans and test cases are created to verify and validate the
product according to the requirements of that stage. This makes both verification
and validation go in parallel. This model is also known as the verification and validation
model.

The V-Model is an extension of the waterfall model and is based on the association of a
testing phase with each corresponding development stage. This means that for every
single phase in the development cycle there is a directly associated testing phase.
This is a highly disciplined model and the next phase starts only after completion of the
previous phase.
