
SOFTWARE ENGINEERING

A component is a tested, special-purpose software unit that is reusable, adaptable, portable,
and interoperable. In software terms, components are also called componentware (CM).
A framework is a combination of components that can be plugged into an application.

A software interface is the program that makes it possible for components to interact and
interoperate with each other, e.g., JavaBeans.
( DCOM stands for Distributed Component Object Model. )
Software Entities are the processes, requirements, products and resources of a software
engineering landscape.

An engineering approach to software engineering is to produce systems on time and within
budget.
Software Development problems are :
1) Conceptual problem.
2) Representation problem.

A vanilla framework helps to bridge the gap between a high-level solution to a problem and its
implementation in software.
Software Requirement Specification (SRS) : This is a blueprint for the complete design of a
software product.
Quality Factors : correctness, reliability, maintainability, testability, efficiency, integrity,
usability, portability, interoperability, reusability.

Reusability Criterion :
Self-descriptive: the component is described in natural language.
Modularity: a change in one component has minimal impact on the others.
Portability: the software can be transferred from one system to another.
Platform independence: the software can execute on any type of platform.

The incremental approach to software development was formulated by Watts Humphrey.


Clean Room Engineering is used to control the quality of incrementally developed software
product and to certify the fitness of software products for usage at time of delivery.

CAPABILITY MATURITY MODEL (CMM) :


CMM describes software process management maturity relative to five levels,
i.e., Initial, Repeatable, Defined, Managed, and Optimizing.
At the Initial level there is a lack of planning and of a clear-cut guide that
software development teams can follow. Few details of a software process have been
defined at this level. Good results are considered miraculous.
KPA: Key Process Areas
The second level, the CMM Repeatable process, is characterized by a commitment
to discipline in carrying out a software development project. It is achieved through these KPAs:
requirements management, software project planning, software project tracking and oversight,
software subcontract management, software quality assurance, and software configuration
management.
The third level, the CMM Defined process, guides the structuring and
evaluation of a software project. It is achieved through: organizational process focus and
definition, training program, software product engineering, intergroup coordination, and peer
reviews.
The fourth level, the CMM Managed process, is concerned with data gathering and analysis
and with managing software quality. It is achieved through: quantitative process management and
software quality management.
The fifth level, the CMM Optimizing process, is associated with defect
prevention, automation of the software process wherever possible, and methods for improving
software quality and team productivity while shortening development time.

Validation occurs whenever a system component is evaluated to ensure that it satisfies system
requirements.
Verification consists of checking whether the product of a particular phase satisfies the
conditions imposed at that phase.

Software Evolution : This is characterized by genotypes and phenotypes.


A genotype provides information about a member of a population.
A phenotype characterizes the behavior of a population member.
Evolution pattern = where || why || what || when || how || by-whom.

Software Life-Cycle : This is the period of time beginning with a concept for a software product
and ending whenever the software is no longer available for use.
The Software life-cycle typically includes the following: Requirements, Analysis,
Design, construction, testing (Validation), installation, operation, maintenance, and retirement.
Several models (spiral, waterfall etc.) have been proposed to describe this process.

Software Life-Cycle Model represents the activities, their inputs and outputs and their
interactions during the life-cycle.
Software Life-Cycle Models :

What is Software Engineering?


• The application of a systematic, disciplined, quantifiable approach to the development,
operation, and maintenance of software; that is, the application of engineering to software.
• The specification, development, management, and evolution of software systems.
• A discipline whose aim is the production of quality software, delivered on time, within
budget, and satisfying users' needs.
• Designing and developing high-quality software; the application of computer science
techniques to a variety of problems.
What is a CASE tool?
CASE stands for Computer Aided Software Engineering; it can be used to mean any
computer-based tool for software planning, development, and evolution.
What is a Function Point ?
Function points and feature points are methods of estimating the "amount of functionality"
required for a program, and are thus used to estimate project completion time. The basic idea
involves counting inputs, outputs, and other features of a description of functionality.
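As a rough illustration, an unadjusted function point count can be sketched as a weighted sum of the counted features. The weights below are the commonly quoted "average complexity" values; a real count weighs each item by its own assessed complexity and then applies adjustment factors, so treat this as a simplified sketch.

```python
# Simplified unadjusted function point count, assuming the "average"
# complexity weights often quoted for IFPUG-style counting.
WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_files": 10,
    "external_interfaces": 7,
}

def unadjusted_function_points(counts):
    """Sum each counted feature multiplied by its weight."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

counts = {
    "external_inputs": 3,
    "external_outputs": 2,
    "external_inquiries": 1,
    "internal_files": 1,
    "external_interfaces": 0,
}
print(unadjusted_function_points(counts))  # 3*4 + 2*5 + 1*4 + 1*10 = 36
```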

What is a Spiral Model?


Basically, the idea is evolutionary development, using the waterfall model for each step; it's
intended to help manage risks. Don't define in detail the entire system at first.
The developers should only define the highest priority features. Define and implement those,
then get feedback from users/customers (such feedback distinguishes "evolutionary" from
"incremental" development).
With this knowledge, they should then go back to define and implement more features in
smaller chunks.

What is a Spec Mark?


SPECmark refers to the aggregate result of the original SPEC benchmark suite (SPEC89),
historically used as a single-number measure of processor performance.

What is Hungarian Notation?


A naming convention, used mainly in C and C++ code, in which an identifier's prefix encodes
its type or intended use (e.g., sz for a zero-terminated string).

What is SEI Maturity Model?


First step in improving the existing situation is to get management buy-in and
management action to clean up the software management processes.
Second step (Integration) is to get everyone working together as a team.
Third step (Measurements) is to establish objective ways of understanding status and
predict where things are going in your process.
Continuous improvement: Understand that this is building a foundation for continually
getting better.

What is a BUG?
An informal term covering a mistake (a human error), a fault (its manifestation in the code),
or a failure (the resulting observable incorrect behavior).

What is a Clean Room?


'Cleanroom' is a software process based on mathematical verification of components and
statistical system-level testing.

What is Personal Software Process?


A discipline for monitoring, testing, and improving your own software engineering
work.

What are the two major types of testing?


The following are the two major groups of testing:
i) Black box testing ii) Glass box (white box) testing.

Black box testing: Functional or black box testing is an approach to testing where the tests are
derived from the program or component specification. The system is a black box whose behavior
can only be determined by studying its inputs and the related outputs. Another name for this is
functional testing, because the tester is concerned only with the functionality and not the
implementation of the software.
Black box testing focuses on the functional requirements of the software; i.e., black box
testing enables the software engineer to derive sets of input conditions that will fully exercise all
functional requirements for a program.
The challenge in black box testing is to cause failures in the module by designing test
cases that, with appropriate inputs and controlled external conditions, can produce an output
that will clearly indicate a module failure.
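A minimal sketch of this idea, using a hypothetical specification ("orders of 100 units or more get a 10% discount") as the only source of the tests; the function and its rule are invented for illustration:

```python
# Black-box test sketch: the tests below are derived only from the
# (hypothetical) specification, never from reading the implementation.
def discounted_total(units, unit_price):
    total = units * unit_price
    return total * 0.9 if units >= 100 else total

# Tests exercise the specified behavior at and around the rule boundary.
assert discounted_total(99, 1.0) == 99.0    # below threshold: no discount
assert discounted_total(100, 1.0) == 90.0   # at threshold: discount applies
assert discounted_total(150, 2.0) == 270.0  # above threshold
```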

Which Test Cases to Automate?


Tests that need to be run for every build of the application.
Tests that use multiple data values for the same actions (data driven tests).
Tests that require detailed information from application internals.

If Testing is Performed Manually:


Time consuming, low reliability, heavy demand on human resources, inconsistent results.

If Testing is Automated:


Speed, repeatability, reusability, reliability, programming capabilities.

What are CheckPoints?


• Checkpoints enable you to compare the current behavior of your application to its
expected behavior.
• GUI checkpoints check information about GUI objects. For example, you can check that
a button is enabled or see which item is selected in a list.
• Database checkpoints check the data content in a database.
• Text checkpoints read text in GUI objects and in bitmaps, and enable you to check their
contents.
• Bitmap checkpoints compare a "snapshot" of a window or an area in your application to
an image captured in an earlier version.

Software Production Process:-


The process of building, delivering and evolving the software system from the inception
of an idea all the way to the delivery and final retirement of the system is called a software
production process.
The software production process may follow different models of software development:
• Waterfall Model
• Prototyping Process
• RAD Model
• Incremental Model
• Spiral Model

Conception:-
SDLC starts with the conception phase. This phase is triggered by a competitor, a
problem, or an opportunity. It establishes:
• The problem perceived
• The goal to be achieved
• The benefits from the solution
• The scope of the project

Initiation:-
The software engineers work with users to carry out a macro-level study of the users'
requirements. The software engineers define the various alternatives possible and the cost-benefit
justification of these alternatives.

Analysis:-
The software engineers carry out a detailed study of the users' requirements. They then
arrive at the proposed system to be built. A model of this system is used to freeze all
requirements before the next phase begins.

Design:-
In this phase the functional specifications are used to translate the model into a design
of the desired system, using tools like data flow diagrams, decision tables, databases, etc.

Construction:-
This phase produces the actual code that will be delivered to the customer as the running
system. Individual modules developed in this phase are tested before being delivered to the next
phase.

Testing:-
All the modules that have been developed before are integrated or put together in this
phase, and tested as a complete system. A system is tested for online response, volume of
transactions, stress, recovery from failure, and usability.

Implementation:-
Implementation means converting a new system design into operation. This involves
creating computer-compatible files, training the operating staff, installing hardware, and meeting
any other requirements.

Prototyping Process:-
The basic idea of the prototyping model is that, instead of freezing requirements before design
and coding begin, a prototype is built to understand the requirements. The prototype is built with
the known requirements, so the user can learn how the system will work.

Rad Model:-
Rapid Application Development is a high-speed adaptation of the linear sequential model in
which rapid development is achieved by using component-based construction, business modeling,
data modeling, etc.

Incremental Model: -
The incremental model delivers software in small but usable pieces called increments. In
general, each increment builds on those that have already been delivered. Analysis, design,
coding, and testing are done for every increment.

How can you measure the quality of your project?


Some SDLC models
Explain the Waterfall model and state what the output is in each phase.
Types of testing

Levels of testing

Unit testing:-
Generally, the generated code is compiled first. The unit test is white box oriented,
and the steps can be conducted in parallel for multiple components.
1. The module interface is tested to ensure that information properly flows into and out of the
program unit under test.
2. The local data structure is examined to ensure that data stored temporarily maintains its
integrity during all steps in an algorithm's execution.
3. Boundary conditions are tested to ensure that the module operates properly at the boundaries
established to limit and restrict processing.
4. All statements are executed at least once, and all error-handling paths are tested.
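Step 3 above (boundary conditions) can be sketched with a unit test; the `clamp` function here is a hypothetical unit under test, not part of any particular system:

```python
import unittest

def clamp(value, low, high):
    """Hypothetical unit under test: restrict value to [low, high]."""
    return max(low, min(value, high))

class ClampBoundaryTest(unittest.TestCase):
    # Exercise the module exactly at, and just beyond, its boundaries.
    def test_at_lower_boundary(self):
        self.assertEqual(clamp(0, 0, 10), 0)

    def test_just_below_lower_boundary(self):
        self.assertEqual(clamp(-1, 0, 10), 0)

    def test_at_upper_boundary(self):
        self.assertEqual(clamp(10, 0, 10), 10)

    def test_just_above_upper_boundary(self):
        self.assertEqual(clamp(11, 0, 10), 10)

if __name__ == "__main__":
    unittest.main(argv=["clamp_boundary_test"], exit=False)
```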

Integration testing:-
Integration testing is a systematic technique for constructing the program structure. In
integration testing the common strategies are:
Top down:-
Top-down integration testing begins with the main routine and one or two immediately
subordinate routines in the system structure. Modules are integrated as they are developed,
and top-level interfaces are tested first.
Bottom up:-
Bottom-up integration testing is the traditional strategy used to integrate the components
of a software system into a functioning whole.
Regression testing:-
Regression testing is retesting the already tested modules after adding new modules. It is an
important strategy for reducing side effects.
System Level Testing
Performance testing:- Performance testing is designed to test the run-time performance of
software or hardware.
Recovery testing:- Recovery testing is a system test that forces the software to fail in a variety of
ways and verifies that recovery is properly performed. If recovery is automatic, re-initialization,
checkpointing, data recovery, and restart are evaluated for correctness.
Security testing:- Security testing attempts to verify that protection mechanisms built into a
system will, in fact, protect it from improper penetration.
Acceptance testing:-
When custom software is built for one customer, a series of acceptance tests are
conducted to enable the customer to validate all requirements. Conducted by the end user rather
than software engineers, an acceptance test can range from an informal test drive to a planned
and systematically executed series of tests.
If software is developed as a product to be used by many customers, it is impractical to
perform acceptance tests with each one. Most software product builders use a process called
alpha and beta testing to uncover errors that only the end user seems able to find.

Test Case Design


A product can be tested in one of two ways:
Knowing the specified function that a product has been designed to perform, tests can be
conducted that demonstrate each function is fully operational. This is known as black box testing.
Knowing the internal workings of a product, tests can be conducted to ensure that the
internal operation of the product performs according to specification and that all internal
components have been exercised. This is known as white box testing.
White box testing:-
Using white box testing methods, the software engineer can derive test cases that do the
following:-
Guarantee that all independent paths within a module have been executed at least once.
Exercise all logical decisions on their true and false sides.
Exercise all loops and verify that data flow and conditional logic work as intended.

Black box testing:-


Black box testing methods focus on the functional requirements of the software. They enable the
software engineer to derive sets of input conditions that will fully exercise all functional
requirements for a program. Black box testing is not an alternative to white box testing
techniques. Rather, it is a complementary approach that is likely to uncover a different class of
errors than white box methods.

Software Testing Fundamentals

Testing objectives include

1. Testing is a process of executing a program with the intent of finding an error.


2. A good test case is one that has a high probability of finding an as yet
undiscovered error.
3. A successful test is one that uncovers an as yet undiscovered error.

Testing should systematically uncover different classes of errors in a minimum amount of time
and with a minimum amount of effort. A secondary benefit of testing is that it demonstrates that
the software appears to be working as stated in the specifications. The data collected through
testing can also provide an indication of the software's reliability and quality. But testing cannot
show the absence of defects -- it can only show that software defects are present.

White Box Testing

White box testing is a test case design method that uses the control structure of the procedural
design to derive test cases. Test cases can be derived that

1. guarantee that all independent paths within a module have been exercised at least once,
2. exercise all logical decisions on their true and false sides,
3. execute all loops at their boundaries and within their operational bounds, and
4. Exercise internal data structures to ensure their validity.
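A minimal sketch of points 1 and 2: the tests below are chosen from the code's control structure so that the single decision is exercised on both its true and false sides (the `absolute` function is a made-up example):

```python
# White-box sketch: the tests are chosen from the code's control
# structure so that the single decision is exercised on both sides.
def absolute(x):
    if x < 0:       # predicate node: one true edge, one false edge
        return -x
    return x

assert absolute(-5) == 5   # true branch
assert absolute(3) == 3    # false branch
```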

The Nature of Software Defects

Logic errors and incorrect assumptions are inversely proportional to the probability that a
program path will be executed. General processing tends to be well understood while special
case processing tends to be prone to errors.

We often believe that a logical path is not likely to be executed when it may be executed on a
regular basis. Our unconscious assumptions about control flow and data lead to design errors that
can only be detected by path testing. Typographical errors are random.

Basis Path Testing

This method enables the designer to derive a logical complexity measure of a procedural design
and use it as a guide for defining a basis set of execution paths. Test cases that exercise the basis
set are guaranteed to execute every statement in the program at least once during testing.

Flow Graphs

Flow graphs can be used to represent control flow in a program and can help in the derivation of
the basis set. Each flow graph node represents one or more procedural statements. The edges
between nodes represent flow of control. An edge must terminate at a node, even if the node
does not represent any useful procedural statements. A region in a flow graph is an area bounded
by edges and nodes. Each node that contains a condition is called a predicate node. Cyclomatic
complexity is a metric that provides a quantitative measure of the logical complexity of a
program. It defines the number of independent paths in the basis set and thus provides an upper
bound for the number of tests that must be performed.

The Basis Set

An independent path is any path through a program that introduces at least one new set of
processing statements (must move along at least one new edge in the path). The basis set is not
unique. Any number of different basis sets can be derived for a given procedural design.
Cyclomatic complexity, V(G), for a flow graph G is equal to

1. The number of regions in the flow graph.


2. V(G) = E - N + 2 where E is the number of edges and N is the number of nodes.
3. V(G) = P + 1 where P is the number of predicate nodes.
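These formulas can be checked against each other on a small, hypothetical flow graph (five nodes, one predicate node):

```python
# Cyclomatic complexity of a small hypothetical flow graph, computed
# two ways (E - N + 2 and P + 1); both must agree.
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5)]  # node 2 is a predicate
nodes = {n for e in edges for n in e}

out_degree = {}
for src, _ in edges:
    out_degree[src] = out_degree.get(src, 0) + 1
predicates = [n for n, d in out_degree.items() if d > 1]

v_edges_nodes = len(edges) - len(nodes) + 2   # V(G) = E - N + 2
v_predicates = len(predicates) + 1            # V(G) = P + 1
assert v_edges_nodes == v_predicates == 2
print(v_edges_nodes)  # 2: two linearly independent paths in the basis set
```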

Deriving Test Cases

1. From the design or source code, derive a flow graph.


2. Determine the cyclomatic complexity of this flow graph.
o Even without a flow graph, V(G) can be determined by counting the number of
conditional statements in the code.
3. Determine a basis set of linearly independent paths.
o Predicate nodes are useful for determining the necessary paths.
4. Prepare test cases that will force execution of each path in the basis set.
o Each test case is executed and compared to the expected results.

Automating Basis Set Derivation

The derivation of the flow graph and the set of basis paths is amenable to automation. A software
tool to do this can be developed using a data structure called a graph matrix. A graph matrix is a
square matrix whose size is equivalent to the number of nodes in the flow graph. Each row and
column correspond to a particular node and the matrix corresponds to the connections (edges)
between nodes. By adding a link weight to each matrix entry, more information about the control
flow can be captured. In its simplest form, the link weight is 1 if an edge exists and 0 if it does
not. But other types of link weights can be represented:

• the probability that an edge will be executed,


• the processing time expended during link traversal,
• the memory required during link traversal, or
• the resources required during link traversal.

Graph theory algorithms can be applied to these graph matrices to help in the analysis necessary
to produce the basis set.
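A minimal sketch of such a graph matrix, using the simplest link weight (1 if an edge exists, 0 otherwise) for a small hypothetical flow graph:

```python
# Graph matrix sketch: a square matrix, one row and column per node,
# with link weight 1 wherever an edge connects two nodes.
nodes = [1, 2, 3, 4, 5]
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5)]
index = {n: i for i, n in enumerate(nodes)}

matrix = [[0] * len(nodes) for _ in nodes]
for src, dst in edges:
    matrix[index[src]][index[dst]] = 1  # simplest link weight: edge exists

# A node whose row has more than one outgoing edge is a predicate node.
predicate_count = sum(1 for row in matrix if sum(row) > 1)
print(predicate_count + 1)  # V(G) = P + 1 = 2
```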
Loop Testing

This white box technique focuses exclusively on the validity of loop constructs. Four
different classes of loops can be defined:

1. simple loops,
2. nested loops,
3. concatenated loops, and
4. Unstructured loops.

Simple Loops

The following tests should be applied to simple loops where n is the maximum number of
allowable passes through the loop:

1. skip the loop entirely,


2. only pass once through the loop,
3. m passes through the loop where m < n,
4. n - 1, n, n + 1 passes through the loop.
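These pass counts can be sketched as a parameterized test; the `sum_first` function is a hypothetical loop-bearing unit under test, with n = 5 as its assumed maximum pass count:

```python
# Simple-loop test sketch: exercise 0, 1, m (< n), n-1, n, and n+1
# passes through the loop of a hypothetical unit under test.
def sum_first(values, limit):
    """Hypothetical unit: sum at most `limit` leading values."""
    total = 0
    for i, v in enumerate(values):
        if i >= limit:
            break
        total += v
    return total

n = 5
data = [1, 2, 3, 4, 5, 6]
for passes in (0, 1, 3, n - 1, n, n + 1):   # the recommended pass counts
    expected = sum(data[:min(passes, len(data))])
    assert sum_first(data, passes) == expected
```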

Nested Loops

The testing of nested loops cannot simply extend the technique of simple loops since this would
result in a geometrically increasing number of test cases. One approach for nested loops:

1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their
minimums. Add tests for out-of-range or excluded values.
3. Work outward, conducting tests for the next loop while keeping all other outer loops at
minimums and other nested loops to typical values.
4. Continue until all loops have been tested.

Concatenated Loops

Concatenated loops can be tested as simple loops if each loop is independent of the others. If
they are not independent (e.g. the loop counter for one is the loop counter for the other), then the
nested approach can be used.

Unstructured Loops

This type of loop should be redesigned not tested!!!

Other White Box Techniques

Other white box testing techniques include:


1. Condition testing
o exercises the logical conditions in a program.
2. Data flow testing
o selects test paths according to the locations of definitions and uses of variables in
the program.

Black Box Testing


Introduction

Black box testing attempts to derive sets of inputs that will fully exercise all the functional
requirements of a system. It is not an alternative to white box testing. This type of testing
attempts to find errors in the following categories:

1. incorrect or missing functions,


2. interface errors,
3. errors in data structures or external database access,
4. performance errors, and
5. initialization and termination errors.

Tests are designed to answer the following questions:

1. How is the function's validity tested?


2. What classes of input will make good test cases?
3. Is the system particularly sensitive to certain input values?
4. How are the boundaries of a data class isolated?
5. What data rates and data volume can the system tolerate?
6. What effect will specific combinations of data have on system operation?

White box testing should be performed early in the testing process, while black box testing tends
to be applied during later stages. Test cases should be derived which

1. reduce the number of additional test cases that must be designed to achieve reasonable
testing, and
2. tell us something about the presence or absence of classes of errors, rather than an error
associated only with the specific test at hand.

Equivalence Partitioning

This method divides the input domain of a program into classes of data from which test cases can
be derived. Equivalence partitioning strives to define a test case that uncovers classes of errors
and thereby reduces the number of test cases needed. It is based on an evaluation of equivalence
classes for an input condition. An equivalence class represents a set of valid or invalid states for
input conditions.

Equivalence classes may be defined according to the following guidelines:


1. If an input condition specifies a range, one valid and two invalid equivalence classes are
defined.
2. If an input condition requires a specific value, then one valid and two invalid equivalence
classes are defined.
3. If an input condition specifies a member of a set, then one valid and one invalid
equivalence class are defined.
4. If an input condition is Boolean, then one valid and one invalid equivalence class are
defined.
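Guideline 1 can be sketched for a hypothetical input condition specifying the range 1..100; the validator and the chosen class representatives are illustrative assumptions:

```python
# Equivalence partitioning sketch for an input range of 1..100:
# one valid class (inside the range) and two invalid classes
# (below the range and above the range).
valid_class = [1, 50, 100]       # representatives drawn from the range
invalid_below = [-10, 0]         # invalid class: below the range
invalid_above = [101, 5000]      # invalid class: above the range

def accepts(quantity):
    """Hypothetical validator for the 1..100 input condition."""
    return 1 <= quantity <= 100

assert all(accepts(q) for q in valid_class)
assert not any(accepts(q) for q in invalid_below + invalid_above)
```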

Boundary Value Analysis

This method leads to a selection of test cases that exercise boundary values. It complements
equivalence partitioning since it selects test cases at the edges of a class. Rather than focusing on
input conditions solely, BVA derives test cases from the output domain also. BVA guidelines
include:

1. For input ranges bounded by a and b, test cases should include values a and b and just
above and just below a and b respectively.
2. If an input condition specifies a number of values, test cases should be developed to
exercise the minimum and maximum numbers and values just above and below these
limits.
3. Apply guidelines 1 and 2 to the output.
4. If internal data structures have prescribed boundaries, a test case should be designed to
exercise the data structure at its boundary.
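Guideline 1 can be sketched as a small helper that generates the boundary test values for a range bounded by a and b; the helper and its fixed step size are illustrative assumptions:

```python
# BVA sketch: for a range bounded by a and b, test a, b, and the
# values just below and just above each bound.
def boundary_values(a, b, step=1):
    return [a - step, a, a + step, b - step, b, b + step]

print(boundary_values(1, 100))  # [0, 1, 2, 99, 100, 101]
```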

Cause-Effect Graphing Techniques

Cause-effect graphing is a technique that provides a concise representation of logical conditions
and corresponding actions. There are four steps:

1. Causes (input conditions) and effects (actions) are listed for a module and an identifier is
assigned to each.
2. A cause-effect graph is developed.
3. The graph is converted to a decision table.
4. Decision table rules are converted to test cases.
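Step 4 can be sketched with a hypothetical decision table whose causes are c1 ("large order") and c2 ("preferred customer"); the table, the module, and its effects are all invented for illustration:

```python
# Decision-table-to-test-cases sketch: each rule (a combination of
# cause values and its expected effect) becomes one test case.
decision_table = [
    # (c1,    c2),    expected effect
    ((True,  True),  "discount"),
    ((True,  False), "review"),
    ((False, True),  "standard"),
    ((False, False), "standard"),
]

def process(large_order, preferred):
    """Hypothetical module implementing the table's rules."""
    if large_order:
        return "discount" if preferred else "review"
    return "standard"

for (c1, c2), effect in decision_table:
    assert process(c1, c2) == effect  # one test case per rule
```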

1. Normalization
2. All normal forms
3. How to normalize a many-to-many relation
4. What is a procedure? Give one example.

S-ar putea să vă placă și