
Software Architecture

10IS81


UNIT-1

Q1) With the help of a neat block diagram of the ABC (Architecture Business Cycle), explain in detail the different activities involved in creating a software architecture. (10 Marks) (June/July 2014)
Software architecture is a result of technical, business, and social influences. Its existence in turn affects the technical, business, and social environments that subsequently influence future architectures. We call this cycle of influences, from the environment to the architecture and back to the environment, the Architecture Business Cycle (ABC).
The various activities involved in creating software architecture are:

Creating the business case for the system
This is an important step in creating and constraining any future requirements.
How much should the product cost?
What is its targeted market?
What is its targeted time to market?
Will it need to interface with other systems?
Are there system limitations that it must work within?
These are all questions that must involve the system's architects. They cannot be decided solely by an architect, but if an architect is not consulted in the creation of the business case, it may be impossible to achieve the business goals.
Understanding the requirements
There are a variety of techniques for eliciting requirements from the stakeholders. For example, object-oriented analysis uses scenarios, or use cases, to embody requirements. Safety-critical systems use more rigorous approaches, such as finite-state-machine models or formal specification languages. Another technique that helps us understand requirements is the creation of prototypes. Regardless of the technique used to elicit the requirements, the desired qualities of the system to be constructed determine the shape of its structure.
Creating or selecting the architecture
In the landmark book The Mythical Man-Month, Fred Brooks argues forcefully and
eloquently that conceptual integrity is the key to sound system design and that conceptual
integrity can only be had by a small number of minds coming together to design the system's
architecture.
Documenting and communicating the architecture
For the architecture to be effective as the backbone of the project's design, it must be communicated clearly and unambiguously to all of the stakeholders.

Here is how the cycle works:
The architecture affects the structure of the developing organization. An architecture prescribes a structure for a system; in particular, it prescribes the units of software that must be implemented (or otherwise obtained) and integrated to form the system. These units are the basis for the development project's structure. Teams are formed for individual software units, and the development, test, and integration activities all revolve around the units. Likewise, schedules and budgets allocate resources in chunks corresponding to the units. If a company becomes adept at building families of similar systems, it will tend to invest in each team by nurturing each area of expertise. Teams become embedded in the organization's structure. This is feedback from the architecture to the developing organization.


Developers must understand the work assignments it requires of them, testers must
understand the task structure it imposes on them, management must understand the
scheduling implications it suggests, and so forth.

Analyzing or evaluating the architecture
Choosing among multiple competing designs in a rational way is one of the architect's greatest challenges. Evaluating an architecture for the qualities that it supports is essential to ensuring that the system constructed from that architecture satisfies its stakeholders' needs.


Use scenario-based techniques such as the Architecture Tradeoff Analysis Method (ATAM) or the Cost Benefit Analysis Method (CBAM).

Implementing the system based on the architecture
This activity is concerned with keeping the developers faithful to the structures and interaction protocols constrained by the architecture. Having an explicit and well-communicated architecture is the first step toward ensuring architectural conformance.

Ensuring that the implementation conforms to the architecture
Finally, when an architecture is created and used, it goes into a maintenance phase. Constant vigilance is required to ensure that the actual architecture and its representation remain consistent with each other during this phase.

Q2) Explain the Architecture Business Cycle? (June 2012)(Dec 2012)(Dec/Jan 2013)(10 Marks)
Sol :
Software architecture is a result of technical, business, and social influences. Its existence in turn
affects the technical, business, and social environments that subsequently influence future
architectures. We call this cycle of influences, from the environment to the architecture and
back to the environment, the Architecture Business Cycle (ABC).


The architecture affects the structure of the developing organization. An architecture prescribes a structure for a system; in particular, it prescribes the units of software that must be implemented (or otherwise obtained) and integrated to form the system. These units are the basis for the development project's structure. Teams are formed for individual software units, and the development, test, and integration activities all revolve around the units. Likewise, schedules and budgets allocate resources in chunks corresponding to the units. If a company becomes adept at building families of similar systems, it will tend to invest in each team by nurturing each area of expertise. Teams become embedded in the organization's structure. This is feedback from the architecture to the developing organization.
In the software product line, separate groups were given responsibility for building and
maintaining individual portions of the organization's architecture for a family of products. In any
design undertaken by the organization at large, these groups have a strong voice in the system's
decomposition, pressuring for the continued existence of the portions they control.
The architecture can affect the goals of the developing organization. A successful system built from
it can enable a company to establish a foothold in a particular market area. The architecture
can provide opportunities for the efficient production and deployment of similar systems, and the
organization may adjust its goals to take advantage of its newfound expertise to plumb the market.
This is feedback from the system to the developing organization and the systems it builds.
The architecture can affect customer requirements for the next system by giving the customer the
opportunity to receive a system (based on the same architecture) in a more reliable, timely, and
economical manner than if the subsequent system were to be built from scratch. The customer
may be willing to relax some requirements to gain these economies. Shrink-wrapped software has
clearly affected people's requirements by providing solutions that are not tailored to their precise
needs but are instead inexpensive and (in the best of all possible worlds) of high quality. Product
lines have the same effect on customers who cannot be so flexible with their requirements.
The process of system building will affect the architect's experience with subsequent systems by
adding to the corporate experience base. A system that was successfully built around a tool bus or
.NET or encapsulated finite-state machines will engender similar systems built the same way in
the future. On the other hand, architectures that fail are less likely to be chosen for future projects.
A few systems will influence and actually change the software engineering culture, that is, the
technical environment in which system builders operate and learn. The first relational
databases, compiler generators, and table-driven operating systems had this effect in the 1960s
and early 1970s; the first spreadsheets and windowing systems, in the 1980s. When such
pathfinder systems are constructed, subsequent systems are affected by their legacy.



Q3) Define Software Architecture. Explain the common software architecture structures. (Dec 12/Jan 13)(June/July 13)(Dec/Jan 2014)(10 Marks)
Soln:
Architecture is the structure of the components of a program or system, their interrelationships, and the principles and guidelines governing their design and evolution over time. Any system has an architecture that can be discovered and analyzed independently of any knowledge of the process by which the architecture was designed or evolved. Architecture is components and connectors. Connectors imply a runtime mechanism for transferring control and data around a system. When we speak of "relationships" among elements, we intend to capture both runtime and non-runtime relationships.
Architectural structures can by and large be divided into three groups, depending on the broad nature of the elements they show.

Module structures.
Here the elements are modules, which are units of implementation. Modules represent a code-based way
of considering the system. They are assigned areas of functional responsibility. There is less emphasis on
how the resulting software manifests itself at runtime. Module structures allow us to answer questions
such as What is the primary functional responsibility assigned to each module? What other software
elements is a module allowed to use? What other software does it actually use? What modules are related
to other modules by generalization or specialization (i.e., inheritance) relationships?
Component-and-connector structures.
Here the elements are runtime components (which are the principal units of computation) and connectors
(which are the communication vehicles among components). Component-and-connector structures help
answer questions such as What are the major executing components and how do they interact? What are
the major shared data stores? Which parts of the system are replicated? How does data progress through
the system? What parts of the system can run in parallel? How can the system's structure change as it
executes?

Allocation structures.
Allocation structures show the relationship between the software elements and the elements in one or more external environments in which the software is created and executed. They answer questions such as What processor does each software element execute on? In what files is each element stored during development, testing, and system building? What is the assignment of software elements to development teams?

Q4) State and explain the different architectural activities (June/July 13) (10 Marks)
Soln:
Architecture is high-level design. Other tasks associated with design are not architectural, such as deciding on important data structures that will be encapsulated.
Architecture is the overall structure of the system. The different structures provide the critical engineering leverage points to imbue a system with the quality attributes that will render it a success or failure. The multiplicity of structures in an architecture lies at the heart of the concept.
Q5) Define Software Architecture. Discuss in detail the implications of the definition. (Dec 12)(10 Marks)
Soln:
There are fundamentally three reasons for software architecture's importance from a technical perspective.
Communication among stakeholders: software architecture represents a common abstraction of a system that most if not all of the system's stakeholders can use as a basis for mutual understanding, negotiation, consensus, and communication.
Early design decisions: software architecture manifests the earliest design decisions about a system with respect to the system's remaining development, its deployment, and its maintenance life. It is the earliest point at which design decisions governing the system to be built can be analyzed.
Transferable abstraction of a system: a software architecture model is transferable across systems. It can be applied to other systems exhibiting similar quality-attribute and functional requirements and can promote large-scale re-use.
We will address each of these points in turn:


ARCHITECTURE IS THE VEHICLE FOR STAKEHOLDER COMMUNICATION
Each stakeholder of a software system (customer, user, project manager, coder, tester, and so on) is concerned with different system characteristics that are affected by the architecture. For example, the user is concerned that the system is reliable and available when needed; the customer is concerned that the architecture can be implemented on schedule and to budget; the manager is worried that the architecture will allow teams to work largely independently, interacting in disciplined and controlled ways. Architecture provides a common language in which different concerns can be expressed, negotiated, and resolved at a level that is intellectually manageable even for large, complex systems.
ARCHITECTURE MANIFESTS THE EARLIEST SET OF DESIGN DECISIONS
Software architecture represents a system's earliest set of design decisions. These early decisions are the most difficult to get correct and the hardest to change later in the development process, and they have the most far-reaching effects.
i) The architecture defines constraints on implementation


This means that the implementation must be divided into the prescribed elements, the elements must interact with each other in the prescribed fashion, and each element must fulfill its responsibility to the others as dictated by the architecture.
ii) The architecture dictates organizational structure
The normal method for dividing up the labor in a large system is to assign different groups different portions of the system to construct. This is called the work breakdown structure of a system.
iii) The architecture inhibits or enables a system's quality attributes
Whether a system will be able to exhibit its desired (or required) quality attributes is substantially determined by its architecture. However, the architecture alone cannot guarantee functionality or quality. Decisions at all stages of the life cycle, from high-level design to coding and implementation, affect system quality. Quality is not completely a function of architectural design. To ensure quality, a good architecture is necessary, but not sufficient.
iv) Predicting system qualities by studying the architecture
Architecture evaluation techniques such as the architecture tradeoff analysis method support top-down insight into the attributes of software product quality that is made possible (and constrained) by software architectures.
v) The architecture makes it easier to reason about and manage change
Software systems change over their lifetimes. Every architecture partitions possible changes into three categories: local, nonlocal, and architectural. A local change can be accomplished by modifying a single element. A nonlocal change requires multiple element modifications but leaves the underlying architectural approach intact.


vi) The architecture helps in evolutionary prototyping
The system is executable early in the product's life cycle. Its fidelity increases as prototype parts are replaced by complete versions of the software. A special case of having the system executable early is that potential performance problems can be identified early in the product's life cycle.
vii) The architecture enables more accurate cost and schedule estimates
Cost and schedule estimates are an important management tool that enables the manager to acquire the necessary resources and to understand whether a project is in trouble.
ARCHITECTURE AS A TRANSFERABLE, RE-USABLE MODEL
The earlier in the life cycle re-use is applied, the greater the benefit that can be achieved. While code re-use is beneficial, re-use at the architectural level provides tremendous leverage for systems with similar requirements.
i) Software product lines share a common architecture
A software product line or family is a set of software-intensive systems sharing a common, managed set
of features that satisfy the specific needs of a particular market segment or mission and that are developed
from a common set of core assets in a prescribed way.
ii) Systems can be built using large, externally developed elements
Whereas earlier software paradigms focused on programming as the prime activity, with progress measured in lines of code, architecture-based development often focuses on composing or assembling elements that are likely to have been developed separately, even independently, from each other.
iii) Less is more: it pays to restrict the vocabulary of design alternatives
We wish to minimize the design complexity of the system we are building. Advantages of this approach include enhanced re-use, more regular and simpler designs that are more easily understood and communicated, more capable analysis, shorter selection time, and greater interoperability.
iv) An architecture permits template-based development
An architecture embodies design decisions about how elements interact that, while reflected in each element's implementation, can be localized and written just once. Templates can be used to capture in one place the inter-element interaction mechanisms.
v) An architecture can be the basis for training
The architecture, including a description of how elements interact to carry out the required behavior, can serve as the introduction to the system for new project members.

Q6) Define architectural patterns, reference models, and reference architectures, and bring out the relationship between them. (Dec 12)(June 2012)(6 Marks)

Soln:

An architectural pattern is a description of element and relation types together with a set of constraints
on how they may be used. For ex: client-server is a common architectural pattern. Client and server are
two element types, and their coordination is described in terms of the protocol that the server uses to
communicate with each of its clients.
A reference model is a division of functionality together with data flow between the pieces. A reference
model is a standard decomposition of a known problem into parts that cooperatively solve the problem.
A reference architecture is a reference model mapped onto software elements (that cooperatively
implement the functionality defined in the reference model) and the data flows between them. Whereas a
reference model divides the functionality, a reference architecture is the mapping of that functionality onto a system decomposition.
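To make the client-server pattern concrete, here is a minimal, illustrative Python sketch (the echo protocol and all names are invented for the example, not taken from the syllabus text): the server element defines the protocol, and the client element coordinates with it over a socket connector.

```python
# The server element owns the protocol (here: prefix the request with
# b"echo: "); the client element connects and exercises that protocol.
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
srv.listen()
port = srv.getsockname()[1]

def serve_one_client():
    conn, _ = srv.accept()          # wait for a single client
    with conn:
        request = conn.recv(1024)
        conn.sendall(b"echo: " + request)

t = threading.Thread(target=serve_one_client)
t.start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello")
print(client.recv(1024))            # b'echo: hello'
client.close()
t.join()
srv.close()
```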
Q7) Explain Module-Based Structures? (Dec 12)(4 Marks)
Soln:
Module-based structures include the following structures.
Decomposition: The units are modules related to each other by the "is a submodule of "
relation, showing how larger modules are decomposed into smaller ones recursively until they are
small enough to be easily understood.
Uses: The units are related by the uses relation. One unit uses another if the correctness of the
first requires the presence of a correct version (as opposed to a stub) of the second.

Layered: Layers are often designed as abstractions (virtual machines) that hide
implementation specifics below from the layers above, engendering portability.
Class or generalization: The class structure allows us to reason about re-use and the
incremental addition of functionality.
Q8) Explain the various process recommendations used by an architect while developing a software architecture. (June 2012)(4 Marks)
Soln:
Process recommendations are as follows:
The architecture should be the product of a single architect or a small group of architects with an identified leader.
The architect (or architecture team) should have the functional requirements for the system and an articulated, prioritized list of quality attributes that the architecture is expected to satisfy.
The architecture should be well documented, with at least one static view and one dynamic view, using an agreed-on notation that all stakeholders can understand with a minimum of effort.
The architecture should be circulated to the system's stakeholders, who should be actively involved in its review.
The architecture should be analyzed for applicable quantitative measures (such as maximum throughput) and formally evaluated for quality attributes before it is too late to make changes to it.
The architecture should lend itself to incremental implementation via the creation of a skeletal system in which the communication paths are exercised but which at first has minimal functionality. This skeletal system can then be used to grow the system incrementally, easing the integration and testing efforts.
The architecture should result in a specific (and small) set of resource contention areas, the resolution of which is clearly specified, circulated, and maintained.
Q9) Briefly explain, what does a software architecture constitute? (Dec 12/Jan 13)(5 Marks)
Soln:

Figure 2.1 : Typical, but uninformative, presentation of a software architecture

Figure 2.1, taken from a system description for an underwater acoustic simulation, purports to describe
that system's "top-level architecture" and is precisely the kind of diagram most often displayed to help
explain an architecture. Exactly what can we tell from it?
The system consists of four elements.
Three of the elements (Prop Loss Model (MODP), Reverb Model (MODR), and Noise Model (MODN)) might have more in common with each other than with the fourth (Control Process (CP)) because they are positioned next to each other. All of the elements apparently have some sort of relationship with each other, since the diagram is fully connected.

Software Architecture

10IS81

UNIT 2
Q1) Discuss the invariants, advantages, and disadvantages of the Pipes and Filters architectural style. (Dec 12)(June/July 2014)(10 Marks)
Soln:
Conditions (invariants) of this style are:
Filters must be independent entities. They should not share state with other filters. Filters do not know the identity of their upstream and downstream filters. Specifications might restrict what appears on the input pipes and the results that appear on the output pipes. The correctness of the output of a pipe-and-filter network should not depend on the order in which the filters perform their processing.

Common specializations of this style include:
Pipelines: restrict the topologies to linear sequences of filters.
Bounded pipes: restrict the amount of data that can reside on a pipe.
Typed pipes: require that the data passed between two filters have a well-defined type.
Advantages:
They allow the designer to understand the overall input/output behavior of a system as a simple composition of the behavior of the individual filters. They support reuse: any two filters can be hooked together if they agree on the data. Systems are easy to maintain and enhance: new filters can be added to existing systems. They permit certain kinds of specialized analysis (e.g., deadlock, throughput). They support concurrent execution.
Disadvantages:
They lead to a batch organization of processing: filters are independent, even though they process data incrementally. They are not good at handling interactive applications, where incremental display updates are required. They may be hampered by having to maintain correspondences between two separate but related streams. Data transmission is forced to a lowest common denominator, which can lead both to a loss of performance and to increased complexity in writing the filters.
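As an illustrative sketch (the filter names are invented for the example), Python generators model this style naturally: each filter consumes an upstream stream and yields to the downstream one, supporting incremental processing without shared state.

```python
# Each filter consumes an upstream iterator and yields transformed items;
# filters share no state and do not know each other's identity.
def source(lines):
    for line in lines:
        yield line

def to_upper(stream):                # filter 1: transform each item
    for item in stream:
        yield item.upper()

def strip_blanks(stream):            # filter 2: drop empty items
    for item in stream:
        if item.strip():
            yield item

def sink(stream):                    # final consumer drains the pipe
    return list(stream)

result = sink(strip_blanks(to_upper(source(["hello", "", "world"]))))
print(result)                        # ['HELLO', 'WORLD']
```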



Q2) What are the basic requirements for a mobile robotics architecture? How does the implicit invocation model handle them? (June 12)(Dec 13/Jan 14)(8 Marks)
Soln:
DESIGN CONSIDERATIONS:


REQ1: Supports deliberate and reactive behavior. The robot must coordinate the actions needed to accomplish its mission with its reactions to unexpected situations.
REQ2: Allows for the uncertainty and unpredictability of the environment. The situations are not fully defined and/or predictable, so the design should handle incomplete and unreliable information.
REQ3: The system must consider operations that are dangerous to the robot and its environment.
REQ4: The system must give the designer flexibility (missions change and requirements change).
SOLUTION: IMPLICIT INVOCATION
The third solution is based on a form of implicit invocation, as embodied in the Task Control Architecture (TCA). The TCA design is based on hierarchies of tasks, or task trees. Parent tasks initiate child tasks, and temporal dependencies between pairs of tasks can be defined (e.g., A must complete before B starts), permitting selective concurrency. TCA allows dynamic reconfiguration of the task tree at run time in response to sudden changes (in the robot or the environment). It uses implicit invocation to coordinate tasks: tasks communicate by multicasting messages (through a message server) to the tasks that are registered for those events. TCA's implicit invocation mechanisms support three functions:
Exceptions: certain conditions cause the execution of associated exception-handling routines; i.e., an exception overrides the currently executing tasks in the sub-tree (e.g., abort or retry).
Wiretapping: messages can be intercepted by tasks superimposed on an existing task tree. E.g., a safety-check component utilizes this to validate outgoing motion commands.
Monitors: monitors read information and execute some action if the data satisfy a certain condition. E.g., a battery check.
How implicit invocation handles the requirements:
Req1: permits a clear-cut separation of action and reaction.
Req2: a tentative task tree can be built to handle uncertainty.
Req3: performance, safety, and fault tolerance are served.
Req4: makes incremental development and replacement of components straightforward.

Q3) Explain Process Control Paradigms with the various process control definitions? (June 2012)(6 Marks)

Soln:
PROCESS CONTROL PARADIGMS
Process variables: properties of the process that can be measured.
Controlled variable: a process variable whose value the system is intended to control.
Input variable: a process variable that measures an input to the process.
Manipulated variable: a process variable whose value can be changed by the controller.
Set point: the desired value for a controlled variable.
Open-loop system: a system in which information about process variables is not used to adjust the system.
Closed-loop system: a system in which information about process variables is used to manipulate a process variable to compensate for variations in process variables and operating conditions.
Feedback control system: the controlled variable is measured and the result is used to manipulate one or more of the process variables.
Feedforward control system: some of the process variables are measured, and anticipated disturbances are compensated for without waiting for changes in the controlled variable to become visible.
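A minimal sketch of a closed-loop (feedback) system, using an invented toy process and a simple proportional correction; the set point, controlled variable, and manipulated variable from the definitions above each appear explicitly:

```python
# Closed-loop (feedback) control: the controlled variable is measured and a
# manipulated variable is adjusted to drive it toward the set point.
class ToyHeater:
    def __init__(self, temp):
        self.temp = temp              # controlled variable
    def measure(self):
        return self.temp              # sensor reading
    def actuate(self, delta):
        self.temp += delta            # manipulated variable alters the process

def feedback_loop(process, set_point, gain=0.5, steps=20):
    for _ in range(steps):
        error = set_point - process.measure()   # deviation from set point
        process.actuate(gain * error)           # proportional correction

heater = ToyHeater(temp=15.0)
feedback_loop(heater, set_point=20.0)
print(round(heater.temp, 3))          # approaches the 20.0 set point
```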
Q4) Write a note on heterogeneous architectures. (Dec 11/Jan 12)(3 Marks)
Soln:
HETEROGENEOUS ARCHITECTURES
Architectural styles can be combined in several ways:
One way is through hierarchy. Example: the UNIX pipeline.
A second way is to permit a single component to use a mixture of architectural connectors. Example: an active database.
A third way is to completely elaborate one level of an architectural description in a completely different architectural style. Example: case studies.
Q5) Enlist the different architectural styles and discuss in brief Event-based, Implicit Invocation. (Dec 12/Jan 13)(6 Marks)

Soln:
The architectural styles are: dataflow systems, call-and-return systems, independent components, virtual machines, and data-centered systems.
EVENT-BASED, IMPLICIT INVOCATION
Instead of invoking a procedure directly, a component can announce one or more events. Other components in the system can register an interest in an event by associating a procedure with it. When the event is announced, the system itself invokes all of the procedures that have been registered for the event. Thus an event announcement implicitly causes the invocation of procedures in other modules. Architecturally speaking, the components in an implicit invocation style are modules whose interface provides both a collection of procedures and a set of events.
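A minimal sketch of such an event system (the class and event names are invented for illustration): components register procedures for named events, and announcing an event implicitly invokes every registered procedure.

```python
# Components register procedures for named events; announcing an event
# implicitly invokes every registered procedure.
class EventBus:
    def __init__(self):
        self._handlers = {}

    def register(self, event, handler):
        self._handlers.setdefault(event, []).append(handler)

    def announce(self, event, *args):
        for handler in self._handlers.get(event, []):
            handler(*args)        # the announcer never names the callees

bus = EventBus()
bus.register("file_saved", lambda path: print("indexer saw", path))
bus.register("file_saved", lambda path: print("backup saw", path))
bus.announce("file_saved", "/tmp/notes.txt")
```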
Advantages:
It provides strong support for reuse



Any component can be introduced into the system simply by registering it for the events of that system.
Implicit invocation eases system evolution.
Components may be replaced by other components without affecting the interfaces of other components.
Disadvantages:
Components relinquish control over the computation performed by the system.
There are concerns regarding the exchange of data.
Global performance and resource management can become artificial issues.
Q6) Explain the software paradigm for process control. (Dec 12/Jan 13)(Dec 13/Jan 14)(4 Marks)
Soln:
An architectural style for software that controls continuous processes can be based on the process-control model, incorporating the essential parts of a process-control loop:
Computational elements: separate the process of interest from the control policy. These include the process definition, with mechanisms for manipulating some process variables, and the control algorithm, for deciding how to manipulate those variables.
Data elements: the continuously updated process variables and the sensors that collect them. These include the process variables (designated input, controlled, and manipulated variables, and knowledge of which can be sensed), the set point (the reference value for the controlled variable), and sensors to obtain values of process variables pertinent to control.
The control loop paradigm: establishes the relation that the control algorithm exercises.
Q7) State the problem of KWIC. Propose the abstract data types and implicit invocation styles to implement a solution for the same. (Dec 12/Jan 13)(June/July 2013)(June/July 2014)(10 Marks)


Soln:
Parnas proposed the following problem: the KWIC index system accepts an ordered set of lines; each line is an ordered set of words, and each word is an ordered set of characters. Any line may be circularly shifted by repeatedly removing the first word and appending it at the end of the line. The KWIC index system outputs a listing of all circular shifts of all lines in alphabetical order.
Parnas used the problem to contrast different criteria for decomposing a system into modules. He describes two solutions:
a) Based on functional decomposition with shared access to data representation.
b) Based on a decomposition that hides design decisions.
SOLUTION 1: ABSTRACT DATA TYPES
Decomposes the system into a similar set of five modules. Data is no longer directly shared by the computational components. Each module provides an interface that permits other components to access data only by invoking procedures in that interface.

SOLUTION 2: IMPLICIT INVOCATION
Uses a form of component integration based on shared data. It differs from the first solution in two factors: the interface to the data is abstract, and computations are invoked implicitly as data is modified. Interaction is based on an active data model.
Advantages:
Supports functional enhancement to the system. Supports reuse.
Disadvantages:
Difficult to control the processing order. Because invocations are data driven, an implementation of this kind of decomposition uses more space.
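The following is a small illustrative sketch of the abstract-data-type flavor of the solution (the module boundaries are simplified to a single class for brevity): line storage and the circular-shift logic are hidden behind an interface, so callers never touch a shared representation.

```python
# Lines are hidden behind a small ADT; other components access the data
# only through its interface, never through a shared representation.
class KWICIndex:
    def __init__(self, lines):
        self._lines = [line.split() for line in lines]   # hidden storage

    def circular_shifts(self):
        shifts = []
        for words in self._lines:
            for i in range(len(words)):                  # rotate word list
                shifts.append(" ".join(words[i:] + words[:i]))
        return shifts

    def alphabetized(self):
        return sorted(self.circular_shifts(), key=str.lower)

index = KWICIndex(["Pipes and Filters", "Key Word in Context"])
for shift in index.alphabetized():
    print(shift)
```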


Q8) Explain the block diagram for cruise control. (June 12)(Dec 13/Jan 14)(4 Marks)
Soln:
A cruise control (CC) system exists to maintain a constant vehicle speed even over varying terrain.
Inputs:
System On/Off: if on, maintain speed.
Engine On/Off: if on, the engine is on; CC is active only in this state.
Wheel Pulses: one pulse for every wheel revolution.
Accelerator: indication of how far the accelerator is depressed.
Brake: if on, temporarily revert cruise control to manual mode.
Inc/Dec Speed: if on, increase/decrease the maintained speed.
Resume Speed: if on, resume the last maintained speed.
Clock: timing pulses every millisecond.
Outputs:
Throttle: digital value for the engine throttle setting.


UNIT-III
Q1) What is availability? Explain the general scenario for availability. (June 2012)(June/July 2014)(10 Marks)
Soln:
AVAILABILITY SCENARIO
Availability is concerned with system failure and its associated consequences. Failures are usually a result of system errors that are derived from faults in the system. It is typically defined as: availability = mean time to failure / (mean time to failure + mean time to repair).
Source of stimulus. We differentiate between internal and external indications of faults or failure since the desired system response may be different. In our example, the unexpected message arrives from outside the system.
Stimulus. A fault of one of the following classes occurs:
- omission. A component fails to respond to an input.
- crash. The component repeatedly suffers omission faults.
- timing. A component responds but the response is early or late.
- response. A component responds with an incorrect value.

Artifact. This specifies the resource that is required to be highly available, such as a processor, communication channel, process, or storage.
Environment. The state of the system when the fault or failure occurs may also affect the desired system response. For example, if the system has already seen some faults and is operating in other than normal mode, it may be desirable to shut it down totally. However, if this is the first fault observed, some degradation of response time or function may be preferred. In our example, the system is operating normally.
Response. There are a number of possible reactions to a system failure. These include logging the failure, notifying selected users or other systems, switching to a degraded mode with either less capacity or less function, shutting down external systems, or becoming unavailable during repair. In our example, the system should notify the operator of the unexpected message and continue to operate normally.
Response measure. The response measure can specify an availability percentage, or it can specify a time to repair, times during which the system must be available, or the duration for which the system must be available.

Q2) What are the qualities of a system? Explain the modifiability general scenario. (June/July 13)(Dec 13/Jan 14)(10 Marks)

Soln:
It is the ability of the system to do the work for which it was intended.
MODIFIABILITY SCENARIO
Modifiability is about the cost of change. It brings up two concerns.
What can change (the artifact)?
When is the change made and who makes it (the environment)?
Source of stimulus. This portion specifies who makes the changes: the developer, a system administrator, or an end user. Clearly, there must be machinery in place to allow the system administrator or end user to modify a system, but this is a common occurrence. In Figure 4.4, the modification is to be made by the developer.
Stimulus. This portion specifies the changes to be made. A change can be the addition of a function, the modification of an existing function, or the deletion of a function. It can also be made to the qualities of the system: making it more responsive, increasing its availability, and so forth. The capacity of the system may also change. Increasing the number of simultaneous users is a frequent requirement. In our example, the stimulus is a request to make a modification, which can be to the function, quality, or capacity.
Artifact. This portion specifies what is to be changed: the functionality of a system, its platform, its user interface, its environment, or another system with which it interoperates. In Figure 4.4, the modification is to the user interface.
Environment. This portion specifies when the change can be made: design time, compile time, build time, initiation time, or runtime. In our example, the modification is to occur at design time.
Response. Whoever makes the change must understand how to make it, and then make it, test it, and deploy it. In our example, the modification is made with no side effects.
Response measure. All of the possible responses take time and cost money, and so time and cost are the most desirable measures. Time is not always possible to predict, however, and so less ideal measures are frequently used, such as the extent of the change (number of modules affected). In our example, the time

to perform the modification should be less than three hours.

Q3) What do you mean by tactics? Explain availability tactics with a neat diagram. (June/July 13)(10 Marks)
Soln:
A tactic is a design decision that influences the control of a quality attribute response.
AVAILABILITY TACTICS



The above figure depicts the goal of availability tactics. All approaches to maintaining availability involve some type of redundancy, some type of health monitoring to detect a failure, and some type of recovery when a failure is detected. In some cases the monitoring or recovery is automatic; in others it is manual.
FAULT DETECTION
1. Ping/echo. One component issues a ping and expects to receive back an echo, within a predefined time, from the component under scrutiny. This can be used within a group of components mutually responsible for one task.
2. Heartbeat (dead man timer). In this case one component emits a heartbeat message periodically and another component listens for it. If the heartbeat fails, the originating component is assumed to have failed and a fault correction component is notified. The heartbeat can also carry data.
3. Exceptions. The exception handler typically executes in the same process that introduced the exception.
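A minimal sketch of the heartbeat tactic (names invented for the example; a real system would run the monitor and the monitored component in separate processes): a fault is declared when no heartbeat arrives within the expected period.

```python
# One component emits heartbeats; a monitor declares a fault if no
# heartbeat arrives within the expected period.
import time

class HeartbeatMonitor:
    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.last_beat = time.monotonic()

    def beat(self):                   # called by the monitored component
        self.last_beat = time.monotonic()

    def is_alive(self):               # polled by the fault detector
        return (time.monotonic() - self.last_beat) < self.timeout_s

monitor = HeartbeatMonitor(timeout_s=0.2)
monitor.beat()
print(monitor.is_alive())    # True: a beat arrived recently
time.sleep(0.3)
print(monitor.is_alive())    # False: originator presumed failed
```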


FAULT RECOVERY
1. Voting. Processes running on redundant processors each take equivalent input and compute a simple output value that is sent to a voter. If the voter detects deviant behavior from a single processor, it fails it.
2. Active redundancy (hot restart). All redundant components respond to events in parallel. The response from only one component is used (usually the first to respond), and the rest are discarded. Active redundancy is often used in a client/server configuration, such as database management systems, where quick responses are necessary even when a fault occurs.
3. Passive redundancy (warm restart/dual redundancy/triple redundancy). One component (the primary) responds to events and informs the other components (the standbys) of state updates they must make. When a fault occurs, the system must first ensure that the backup state is sufficiently fresh before resuming services.
Spare. A standby spare computing platform is configured to replace many different failed components. It must be rebooted to the appropriate software configuration and have its state initialized when a failure occurs.
Shadow operation. A previously failed component may be run in "shadow mode" for a short time to make sure that it mimics the behavior of the working components before restoring it to service.
State resynchronization. The passive and active redundancy tactics require the component being restored to have its state upgraded before its return to service.
Checkpoint/rollback. A checkpoint is a recording of a consistent state created either periodically or in response to specific events. Sometimes a system fails in an unusual manner, with a detectably inconsistent state. In this case, the system should be restored using a previous checkpoint of a consistent state and a log of the transactions that occurred since the snapshot was taken.

Q4) Explain the quality attribute general scenario. List the parts of such a scenario. Distinguish between the availability and modifiability scenarios. (Dec 12)(Dec 13/Jan 14)(10 Marks)
Soln:
QUALITY ATTRIBUTE SCENARIOS
A quality attribute scenario is a quality-attribute-specific requirement. It consists of six parts.
1) Source of stimulus. This is some entity (a human, a computer system, or any other actuator) that generated the stimulus.
2) Stimulus. The stimulus is a condition that needs to be considered when it arrives at a system.
3) Environment. The stimulus occurs within certain conditions. The system may be in an overload condition or may be running when the stimulus occurs, or some other condition may be true.
4) Artifact. Some artifact is stimulated. This may be the whole system or some pieces of it.
5) Response. The response is the activity undertaken after the arrival of the stimulus.
6) Response measure. When the response occurs, it should be measurable in some fashion so that the requirement can be tested.
Figure 4.1 shows the parts of a quality attribute scenario.
AVAILABILITY SCENARIO
Availability is concerned with system failure and its associated consequences. Failures are usually a result of system errors that are derived from faults in the system.


It is typically defined as: availability = mean time to failure / (mean time to failure + mean time to repair).

Source of stimulus. We differentiate between internal and external indications of faults or failure
since the desired system response may be different. In our example, the unexpected message arrives
from outside the system.
Stimulus. A fault of one of the following classes occurs.
- omission. A component fails to respond to an input.
- crash. The component repeatedly suffers omission faults.
- timing. A component responds but the response is early or late.
- response. A component responds with an incorrect value.

Artifact. This specifies the resource that is required to be highly available, such as a processor,
communication channel, process, or storage.
Environment. The state of the system when the fault or failure occurs may also affect the desired
system response. For example, if the system has already seen some faults and is operating in other than
normal mode, it may be desirable to shut it down totally. However, if this is the first fault observed,
some degradation of response time or function may be preferred. In our example, the system is operating
normally.
Response. There are a number of possible reactions to a system failure. These include logging the
failure, notifying selected users or other systems, switching to a degraded mode with either less
capacity or less function, shutting down external systems, or becoming unavailable during repair. In
our example, the system should notify the operator of the unexpected message and continue to operate
normally.
Response measure. The response measure can specify an availability percentage, or it can specify a
time to repair, times during which the system must be available, or the duration for which the system
must be available.
MODIFIABILITY SCENARIO
Modifiability is about the cost of change. It brings up two concerns.
What can change (the artifact)?
When is the change made and who makes it (the environment)?

Software Architecture

10IS81

Source of stimulus. This portion specifies who makes the changes: the developer, a system administrator, or an end user. Clearly, there must be machinery in place to allow the system administrator or end user to modify a system, but this is a common occurrence. In Figure 4.4, the modification is to be made by the developer.
Stimulus. This portion specifies the changes to be made. A change can be the addition of a function, the modification of an existing function, or the deletion of a function. It can also be made to the qualities of the system: making it more responsive, increasing its availability, and so forth. The capacity of the system may also change. Increasing the number of simultaneous users is a frequent requirement. In our example, the stimulus is a request to make a modification, which can be to the function, quality, or capacity.
Artifact. This portion specifies what is to be changed: the functionality of a system, its platform, its user interface, its environment, or another system with which it interoperates. In Figure 4.4, the modification is to the user interface.
Environment. This portion specifies when the change can be made: design time, compile time, build time, initiation time, or runtime. In our example, the modification is to occur at design time.
Response. Whoever makes the change must understand how to make it, and then make it, test it and
deploy it. In our example, the modification is made with no side effects.
Response measure. All of the possible responses take time and cost money, and so time and cost are the
most desirable measures. Time is not always possible to predict, however, and so less ideal measures are
frequently used, such as the extent of the change (number of modules affected). In our example, the time
to perform the modification should be less than three hours.
Q5) What are the qualities that the architecture itself should possess? (Dec 12)(6 Marks)
Soln:
Achieving quality attributes must be considered throughout design, implementation, and deployment. No quality attribute is entirely dependent on design, nor is it entirely dependent on implementation or deployment. For example: usability involves both architectural and non-architectural aspects; modifiability is determined by how functionality is divided (architectural) and by coding techniques within a module (non-architectural); performance involves both architectural and non-architectural dependencies. The message of this section is twofold: architecture is critical to the realization of many qualities of interest in a system, and these qualities should be designed in and can be evaluated at the architectural level; architecture, by itself, is unable to achieve qualities, but it provides the foundation for achieving quality.
Q6) List the parts of a quality attribute scenario. (Dec 12)(June 12)(Dec 12/Jan 13)(4 Marks)
Soln:
A quality attribute scenario is a quality-attribute-specific requirement. It consists of six parts.
1) Source of stimulus. This is some entity (a human, a computer system, or any other actuator) that
generated the stimulus.
2) Stimulus. The stimulus is a condition that needs to be considered when it arrives at a system.
3) Environment. The stimulus occurs within certain conditions. The system may be in an overload
condition or may be running when the stimulus occurs, or some other condition may be true.
4) Artifact. Some artifact is stimulated. This may be the whole system or some pieces of it.
5) Response. The response is the activity undertaken after the arrival of the stimulus.


6) Response measure. When the response occurs, it should be measurable in some fashion so that the
requirement can be tested.
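These six parts can be captured as a simple record. A minimal Python sketch (field names and values paraphrased from the availability example used in these notes):

```python
# The six parts of a quality attribute scenario captured as a record,
# instantiated with an availability-style example.
from dataclasses import dataclass

@dataclass
class QualityScenario:
    source: str
    stimulus: str
    environment: str
    artifact: str
    response: str
    response_measure: str

availability = QualityScenario(
    source="external to the system",
    stimulus="unanticipated message arrives",
    environment="normal operation",
    artifact="process",
    response="inform operator; continue to operate",
    response_measure="no downtime",
)
print(availability.response_measure)
```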
Q7) What is the goal of tactics for testability? Discuss the two categories of tactics for testing. (Dec 12)(Dec 13/Jan 14)(10 Marks)
Soln:
The goal of tactics for testability is to allow for easier testing when an increment of software development
is completed.
INPUT/OUTPUT
Record/playback. Record/playback refers to both capturing information crossing an interface and using
it as input into the test harness. The information crossing an interface during normal operation is saved in
some repository. Recording this information allows test input for one of the components to be generated
and test output for later comparison to be saved.
Separate interface from implementation. Separating the interface from the implementation allows
substitution of implementations for various testing purposes. Stubbing implementations allows the
remainder of the system to be tested in the absence of the component being stubbed.
Specialize access routes/interfaces. Having specialized testing interfaces allows the capturing or
specification of variable values for a component through a test harness as well as independently from its
normal execution. Specialized access routes and interfaces should be kept separate from the access routes
and interfaces for required functionality.
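A minimal sketch of the separate-interface-from-implementation tactic (the interface and class names are invented): the test harness substitutes a stub so the rest of the system can be exercised without the real component.

```python
# The client depends only on the interface; a stub implementation lets the
# rest of the system be tested without the real component.
from abc import ABC, abstractmethod

class RateProvider(ABC):                 # the interface
    @abstractmethod
    def rate(self, currency: str) -> float: ...

class LiveRateProvider(RateProvider):    # real implementation
    def rate(self, currency):
        raise NotImplementedError("would call an external service")

class StubRateProvider(RateProvider):    # stub for the test harness
    def rate(self, currency):
        return {"EUR": 1.1, "INR": 0.012}[currency]

def convert(amount, currency, provider: RateProvider):
    return amount * provider.rate(currency)

print(convert(100, "EUR", StubRateProvider()))   # about 110.0
```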
INTERNAL MONITORING
Built-in monitors. The component can maintain state, performance load, capacity, security, or other
information accessible through an interface. This interface can be a permanent interface of the component
or it can be introduced temporarily. A common technique is to record events when monitoring states have
been activated. Monitoring states can actually increase the testing effort since tests may have to be
repeated with the monitoring turned off. Increased visibility into the activities of the component usually
more than outweigh the cost of the additional testing.

Q8) Classify Security Tactics? What are the different tactics for resisting attacks? (June 2012)(8 Marks)
Soln:
Tactics for achieving security can be divided into those concerned with resisting attacks, those concerned with detecting attacks, and those concerned with recovering from attacks.
RESISTING ATTACKS
Authenticate users. Authentication is ensuring that a user or remote computer is actually who it purports to be. Passwords, one-time passwords, digital certificates, and biometric identification provide authentication.
Authorize users. Authorization is ensuring that an authenticated user has the rights to access and modify either data or services. Access control can be by user or by user class.
Maintain data confidentiality. Data should be protected from unauthorized access. Confidentiality is usually achieved by applying some form of encryption to data and to communication links. Encryption provides extra protection to persistently maintained data beyond that available from authorization.

Maintain integrity. Data should be delivered as intended. It can have redundant information encoded in it,
such as checksums or hash results, which can be encrypted either along with or independently from the
original data.
Limit exposure. Attacks typically depend on exploiting a single weakness to attack all data and services
on a host. The architect can design the allocation of services to hosts so that limited services are available
on each host.
Limit access. Firewalls restrict access based on message source or destination port. Messages from
unknown sources may be a form of an attack. It is not always possible to limit access to known sources.
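As a small sketch of the maintain-integrity tactic above (SHA-256 is one possible choice of redundant information; the message contents are invented): a digest travels with the data so the receiver can detect alteration.

```python
# Redundant information (a hash) is sent with the data; the receiver
# recomputes it to detect tampering or corruption in transit.
import hashlib

def send(data: bytes):
    digest = hashlib.sha256(data).hexdigest()
    return data, digest                      # message plus checksum

def receive(data: bytes, digest: str) -> bool:
    return hashlib.sha256(data).hexdigest() == digest

payload, checksum = send(b"transfer 100 to account 42")
print(receive(payload, checksum))                         # True: intact
print(receive(b"transfer 900 to account 42", checksum))   # False: altered
```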

Q9) Explain the following with respect to tactics: i) Fault Prevention ii) Defer Binding Time iii) Resource Arbitration iv) Internal Monitoring v) Run Time Tactics. (Dec 12/Jan 13)(10 Marks)


Soln:
FAULT PREVENTION
Removal from service. This tactic removes a component of the system from operation to undergo some
activities to prevent anticipated failures.
Transactions. A transaction is the bundling of several sequential steps such that the entire bundle can be
undone at once. Transactions are used to prevent any data from being affected if one step in a process
fails and also to prevent collisions among several simultaneous threads accessing the same data.
Process monitor. Once a fault in a process has been detected, a monitoring process can delete the
nonperforming process and create a new instance of it, initialized to some appropriate state as in the spare
tactic.
DEFER BINDING TIME
Many tactics are intended to have impact at loadtime or runtime, such as the following.
Runtime registration supports plug-and-play operation at the cost of additional overhead to manage the
registration.
Configuration files are intended to set parameters at startup.
Polymorphism allows late binding of method calls.
Component replacement allows load time binding.
Adherence to defined protocols allows runtime binding of independent processes.
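A minimal sketch of the configuration-file flavor of this tactic (the parameter names are invented): values are bound when the program starts rather than when it is compiled.

```python
# Parameters are read at startup instead of being compiled in, deferring
# the binding of these values to load time.
import json, io

config_file = io.StringIO('{"max_connections": 8, "retry_limit": 3}')
config = json.load(config_file)          # in practice: open("app.json")

class Server:
    def __init__(self, cfg):
        self.max_connections = cfg["max_connections"]  # bound at startup
        self.retry_limit = cfg["retry_limit"]

print(vars(Server(config)))   # {'max_connections': 8, 'retry_limit': 3}
```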
RESOURCE ARBITRATION
First-in/First-out. FIFO queues treat all requests for resources as equals and satisfy them in turn.
Fixed-priority scheduling. Fixed-priority scheduling assigns each source of resource requests a particular priority and assigns the resources in that priority order. Three common prioritization strategies are: semantic importance, where each stream is assigned a priority statically according to some domain



characteristic of the task that generates it; deadline monotonic, a static priority assignment that assigns higher priority to streams with shorter deadlines; and rate monotonic, a static priority assignment for periodic streams that assigns higher priority to streams with shorter periods.
Dynamic priority scheduling:
1. Round robin. Round robin is a scheduling strategy that orders the requests and then, at every assignment possibility, assigns the resource to the next request in that order.
2. Earliest deadline first. Earliest deadline first assigns priorities based on the pending requests with the earliest deadline.
Static scheduling. A cyclic executive schedule is a scheduling strategy where the pre-emption points and the sequence of assignment to the resource are determined offline.
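A minimal sketch of earliest-deadline-first arbitration (the task names and deadlines are invented): pending requests sit in a priority queue ordered by deadline, and the resource always goes to the earliest one.

```python
# Pending requests are ordered by deadline; the resource is always granted
# to the request whose deadline is earliest.
import heapq

requests = [("backup", 50), ("render frame", 16), ("log flush", 30)]
queue = [(deadline, name) for name, deadline in requests]
heapq.heapify(queue)                     # min-heap keyed on deadline

while queue:
    deadline, name = heapq.heappop(queue)
    print(f"serving '{name}' (deadline {deadline} ms)")
# serves 'render frame', then 'log flush', then 'backup'
```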
INTERNAL MONITORING
Built-in monitors. The component can maintain state, performance load, capacity, security, or other
information accessible through an interface. This interface can be a permanent interface of the component
or it can be introduced temporarily. A common technique is to record events when monitoring states have
been activated. Monitoring states can actually increase the testing effort since tests may have to be
repeated with the monitoring turned off. Increased visibility into the activities of the component usually
more than outweighs the cost of the additional testing.
RUNTIME TACTICS
Maintain a model of the task. In this case, the model maintained is that of the task. The task model is
used to determine context so the system can have some idea of what the user is attempting and provide
various kinds of assistance. For example, knowing that sentences usually start with capital letters would
allow an application to correct a lower-case letter in that position.
Maintain a model of the user. In this case, the model maintained is of the user. It determines the user's
knowledge of the system, the user's behavior in terms of expected response time, and other aspects
specific to a user or a class of users. For example, maintaining a user model allows the system to pace
scrolling so that pages do not fly past faster than they can be read.
Maintain a model of the system. In this case, the model maintained is that of the system. It determines
the expected system behavior so that appropriate feedback can be given to the user. The system model
predicts items such as the time needed to complete current activity.

Q10) Explain Business Qualities? (Dec 13/Jan 14)(4 Marks)


Soln:
1.Time to market.
If there is competitive pressure or a short window of opportunity for a system or product, development
time becomes important. This in turn leads to pressure to buy or otherwise re-use existing elements.
2.Cost and benefit.
The development effort will naturally have a budget that must not be exceeded. Different architectures
will yield different development costs. For instance, an architecture that relies on technology (or expertise
with a technology) not resident in the developing organization will be more expensive to realize than one
that takes advantage of assets already inhouse. An architecture that is highly flexible will typically be
more costly to build than one that is rigid (although it will be less costly to maintain and modify).
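The "generalize the module" tactic can be illustrated with a short sketch; the summarize function, the record fields and the data are invented for this example:

    # Generalized module: the varying parts (which field, which aggregate)
    # are inputs, so new behavior needs no source change in this module.
    def summarize(records, key, reducer=sum):
        return reducer(r[key] for r in records)

    orders = [{"qty": 2, "price": 10.0}, {"qty": 1, "price": 99.0}]
    print(summarize(orders, "qty"))          # 3
    print(summarize(orders, "price", max))   # 99.0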
Unit-IV

Q1) What do you mean by architectural pattern? How is it categorized? Explain the structure part of the solution for the ISO layered architecture. (June/July 2014)
Sol:
Architectural patterns express fundamental structural organization schemas for software systems. They provide a set of predefined subsystems, specify their responsibilities, and include rules and guidelines for organizing the relationships between them.
The Layers architectural pattern helps to structure applications that can be decomposed into groups of subtasks in which each group of subtasks is at a particular level of abstraction.
Example: Networking protocols are the best-known example of layered architectures. Such a protocol consists of a set of rules and conventions that describe how computers communicate across machine boundaries. The format, contents and meaning of all messages are defined. The protocol specifies agreements at a variety of abstraction levels, ranging from the details of bit transmission to high-level application logic. Therefore the designers use several subprotocols and arrange them in layers. Each layer deals with a specific aspect of communication and uses the services of the next lower layer. (see diagram & explain more)
Context: a large system that requires decomposition.
Problem: The system we are building involves a mix of low-level and high-level issues, where high-level operations rely on the lower-level ones. For example, the high-level parts will be interactive with the user, while the low-level parts will be concerned with the hardware implementation.

In such a case, we need to balance the following forces:
Late source code changes should not ripple through the system; they should be confined to one component and not affect others.
Interfaces should be stable, and may even be prescribed by a standards body.
Parts of the system should be exchangeable (i.e., a particular layer can be changed).
It may be necessary to build other systems at a later date with the same low-level issues as the system you are currently designing.
Similar responsibilities should be grouped to help understandability and maintainability.
There is no standard component granularity.
Complex components need further decomposition.
Crossing component boundaries may impede performance, for example when a substantial amount of data must be transferred over several boundaries.
The system will be built by a team of programmers, and work has to be subdivided along clear boundaries.

Solution:
Structure your system into an appropriate number of layers and place them on top of each other. The lowest layer is called Layer 1 (the base of our system) and the highest is called Layer N, i.e., Layer 1, ..., Layer J-1, Layer J, ..., Layer N. Most of the services that Layer J provides are composed of services provided by Layer J-1. In other words, the services of each layer implement a strategy for combining the services of the layer below in a meaningful way. In addition, Layer J's services may depend on other services in Layer J.
Structure:
An individual layer can be described by the following CRC card:
The main structural characteristic of the Layers pattern is that the services of Layer J are only used by Layer J+1; there are no further direct dependencies between layers. This structure can be compared with a stack, or even an onion. Each individual layer shields all lower layers from direct access by higher layers.



Dynamics:
Scenario I is probably the best-known one. A client issues a request to Layer N. Since Layer N cannot carry out the request on its own, it calls the next layer, N-1, for supporting subtasks. Layer N-1 provides these, in the process sending further requests to Layer N-2, and so on, until Layer 1 is reached. Here, the lowest-level services are finally performed. If necessary, replies to the different requests are passed back up from Layer 1 to Layer 2, from Layer 2 to Layer 3, and so on, until the final reply arrives at Layer N.
Scenario II illustrates bottom-up communication: a chain of actions starts at Layer 1, for example when a device driver detects input. The driver translates the input into an internal format and reports it to Layer 2, which starts interpreting it, and so on. In this way data moves up through the layers until it arrives at the highest layer. While top-down information and control flow are often described as 'requests', bottom-up calls can be termed 'notifications'.
Scenario III describes the situation where requests only travel through a subset of the layers. A top-level request may only go to the next lower level, N-1, if this level can satisfy the request. An example of this is where level N-1 acts as a cache, and a request from level N can be satisfied without being sent all the way down to Layer 1 and from there to a remote server.
Scenario IV: an event is detected in Layer 1, but stops at Layer 3 instead of travelling all the way up to Layer N. In a communication protocol, for example, a re-send request may arrive from an impatient client who requested data some time ago. In the meantime the server has already sent the answer, and the answer and the re-send request cross. In this case, Layer 3 of the server side may notice this and intercept the re-send request without further action.
Scenario V involves two stacks of N layers communicating with each other. This scenario
is well-known from communication protocols where the stacks are known as
'protocol stacks'. In the following diagram, Layer N of the left stack issues a request. The request moves down through the layers until it reaches Layer 1, is sent to Layer 1 of the right stack, and there moves up through the layers of the right stack. The response to the request follows the reverse path until it arrives at Layer N of the left stack.

Implementation:
The following steps describe a step-wise refinement approach to the definition of a layered architecture.
Define the abstraction criterion for grouping tasks into layers.
o This criterion is often the conceptual distance from the platform (sometimes we encounter other abstraction paradigms as well).
o In the real world of software development we often use a mix of abstraction criteria. For example, the distance from the hardware can shape the lower levels, and conceptual complexity governs the higher ones.
o An example layering obtained using the mixed-model layering principle is shown below:
User-visible elements
Specific application modules
Common services level
Operating system interface level
Operating system
Hardware
Determine the number of abstraction levels according to your abstraction criterion.
Each abstraction level corresponds to one layer of the pattern.
Having too many layers may impose unnecessary overhead, while too few layers
can result in a poor structure.
Name the layers and assign the tasks to each of them.
The task of the highest layer is the overall system task, as perceived by the client.
The tasks of all other layers are to be helpers to higher layers.
Specify the services
It is often better to locate more services in higher layers than in lower layers.
The base layers should be kept slim while higher layers can expand to cover a
spectrum of applicability.
This phenomenon is also called the inverted pyramid of reuse.
Refine the layering
Iterate over steps 1 to 4.
It is not possible to define an abstraction criterion precisely before thinking about
the implied layers and their services.

Dept. of CSE, SJBIT

Page 28


Conversely, it is wrong to define components and services first and later impose a layered structure on them.

The solution is to perform the first four steps several times until a natural and
stable layering evolves.
Specify an interface for each layer.
If layer J should be a black box for layer J+1, design a flat interface that offers all of layer J's services.
A white box approach is one in which layer J+1 sees the internals of layer J.
A gray box approach is a compromise between the black and white box approaches. Here layer J+1 is aware of the fact that layer J consists of three components, and addresses them separately, but does not see the internal workings of individual components.
Structure individual layers
When an individual layer is complex, it should be broken into separate components.
This subdivision can be helped by using finer-grained patterns.
Specify the communication between adjacent layers.
Push model (most often used): when layer J+1 invokes a service of layer J, any required information is passed as part of the service call.
Pull model: the reverse of the push model. It occurs when the lower layer fetches available information from the higher layer at its own discretion.
Decouple adjacent layers.
For top-down communication, where requests travel top-down, we can use one-way coupling (i.e., the upper layer is aware of the next lower layer, but the lower layer is unaware of the identity of its users); here return values are sufficient to transport the results in the reverse direction.
For bottom-up communication, you can use callbacks and still preserve a top-down one-way coupling. Here the upper layer registers callback functions with the lower layer.
We can also decouple the upper layer from the lower layer to a certain degree using
object oriented techniques.
Design an error-handling strategy.
An error can either be handled in the layer where it occurred or be passed to the next higher layer. As a rule of thumb, try to handle errors at the lowest layer possible.
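A minimal sketch of two layers wired this way, assuming invented class and method names: the upper layer pushes data down with the service call, and registers a callback so the lower layer can notify upward without knowing its user:

    class Transport:                      # Layer 1 (lower)
        def __init__(self):
            self.on_receive = None        # callback registered by upper layer

        def send(self, payload):          # push model: data passed with call
            print("wire <-", payload)

        def deliver(self, raw):           # e.g. driven by the network
            if self.on_receive:
                self.on_receive(raw)      # bottom-up 'notification'

    class Session:                        # Layer 2 (upper)
        def __init__(self, transport):
            self.transport = transport
            transport.on_receive = self.handle   # one-way coupling preserved

        def request(self, msg):
            self.transport.send("HDR|" + msg)

        def handle(self, raw):
            print("session got", raw.split("|", 1)[1])

    s = Session(Transport())
    s.request("hello")                    # wire <- HDR|hello
    s.transport.deliver("HDR|world")      # session got world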

Q2) List the Components of Pipes and Filters Architectural Pattern? With sketch explain CRC card for the same? (June 12)(Dec 12/Jan 13)(Dec 13/Jan 14)(8 Marks)
Soln:
The components of Pipes and Filters are:
1. Filter Component
2. Pipe Component
3. Data Source
4. Data Sink
1. Filter component:
Filter components are the processing units of the pipeline. A filter enriches, refines or transforms its input data. It enriches data by computing and adding information, refines data by concentrating or extracting information, and transforms data by delivering the data in some other representation. It is responsible for the following activities:
The subsequent pipeline element pulls output data from the filter (passive filter).
2. Pipe component:
Pipes denote the connections between filters, between the data source and the first filter, and between the last filter and the data sink. If two active components are joined, the pipe synchronizes them. This synchronization is done with a first-in-first-out buffer.
3. Data source component:
The data source represents the input to the system, and provides a sequence of data values of the same structure or type. It is responsible for delivering input to the processing pipeline.
4. Data sink component:
The data sink collects the results from the end of the pipeline (i.e., it consumes output). There are two variants of data sink:
An active data sink pulls results out of the preceding processing stage.
A passive data sink allows the preceding filter to push or write the results into it.
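A minimal pipeline sketch using Python generators, assuming pull-style passive filters and an active data sink; the filters and data are invented for illustration:

    def source():                         # data source
        yield from ["alpha", "beta", "gamma"]

    def to_upper(pipe):                   # filter: transforms data
        for item in pipe:
            yield item.upper()

    def tag(pipe):                        # filter: enriches data
        for n, item in enumerate(pipe, 1):
            yield f"{n}:{item}"

    def sink(pipe):                       # active sink pulls the results
        for item in pipe:
            print(item)

    sink(tag(to_upper(source())))         # 1:ALPHA  2:BETA  3:GAMMA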

Q3) Explain the forces that influence the solution to problems based on black board pattern?(Dec
12)(June 12)(Dec 12/Jan 13)(7 Marks)
Soln:
The following forces influence solutions to problems of this kind:
A complete search of the solution space is not feasible in a reasonable time.
Since the domain is immature, we may need to experiment with different algorithms for the same subtask. Hence, individual modules should be easily exchangeable.
There are different algorithms that solve partial problems.
Input, as well as intermediate and final results, have different representations, and the algorithms are implemented according to different paradigms.
An algorithm usually works on the results of other algorithms.
Uncertain data and approximate solutions are involved.
Employing disjoint algorithms induces potential parallelism.
Q4) Write a note on the HEARSAY-II System? (June 12)(5 Marks)
Soln:
HEARSAY-II: The first blackboard system was the HEARSAY-II speech recognition system from the early 1970's. It was developed as a natural language interface to a literature database. Its task was to answer queries about documents and to retrieve documents from a collection of abstracts of Artificial Intelligence publications. The inputs to the system were acoustic signals that were semantically interpreted and then transformed to a database query. The control component of HEARSAY-II consists of the following:
The focus of control database, which contains a table of primitive change types of blackboard changes, and those condition-parts that can be executed for each change type.
The scheduling queue, which contains pointers to condition- or action-parts of knowledge sources.
The monitor, which keeps track of each change made to the blackboard.
The scheduler, which uses experimentally-derived heuristics to calculate priorities for the condition- and action-parts waiting in the scheduling queue.

Q4) Discuss the steps involved in implementation of pipes and filters architecture?(Dec 12)(12 Marks)
Soln:
Implementation:
Divide the systems tasks into a sequence of processing stages.
Each stage must depend only on the output of its direct predecessor.
All stages are conceptually connected by the data flow.
Define the data format to be passed along each pipe.
Defining a uniform format results in the highest flexibility because it makes the recombination of filters easy.
You must also define how the end of input is marked.
Decide how to implement each pipe connection.
This decision directly determines whether you implement the filters as active or passive components.
Adjacent pipes further define whether a passive filter is triggered by push or pull of data.
Design and implement the filters.
Design of a filter component is based both on the task it must perform and on the adjacent pipes.
You can implement passive filters as a function, for pull activation, or as a procedure for push activation.
Active filters can be implemented either as processes or as threads in the pipeline program.
Design the error handling.
Because the pipeline components do not share any global state, error handling is hard to address and is
often neglected. As a minimum, error detection should be possible. UNIX defines a specific output channel for error messages, standard error. If a filter detects errors in its input data, it can ignore input until some clearly marked separation occurs. It is hard to give a general strategy for error handling
with a system based on the pipes and filter pattern.
Set up the processing pipeline.
If your system handles a single task you can use a standardized main program that sets up the pipeline and
starts processing. You can increase the flexibility by providing a shell or other end-user facility to set
up various pipelines from your set of filter components.

Q5) Explain layers arachitectures Pattern,with sketches and CRC cards? (Dec 12/Jan13)(6 Marks)
Soln:
Structure your system into an appropriate number of layers and place them on top of each other. The lowest layer is called Layer 1 (the base of our system) and the highest is called Layer N, i.e., Layer 1, ..., Layer J-1, Layer J, ..., Layer N. Most of the services that Layer J provides are composed of services provided by Layer J-1. In other words, the services of each layer implement a strategy for combining the services of the layer below in a meaningful way. In addition, Layer J's services may depend on other services in Layer J.
Structure:
An individual layer can be described by the following CRC card:


The main structural characteristic of the Layers pattern is that the services of Layer J are only used by Layer J+1; there are no further direct dependencies between layers. This structure can be compared with a stack, or even an onion. Each individual layer shields all lower layers from direct access by higher layers.

Overall Architecture Looks Something Like this:

Q6) Explain implementation steps of Layered Pattern?(Dec 13/Jan 14)(10 Marks)


Soln:
The following steps describe a step-wise refinement approach to the definition of a layered architecture.
Define the abstraction criterion for grouping tasks into layers.
This criterion is often the conceptual distance from the platform (sometimes we encounter other abstraction paradigms as well). In the real world of software development we often use a mix of abstraction criteria. For example, the distance from the hardware can shape the lower levels, and conceptual complexity governs the higher ones.
An example layering obtained using the mixed-model layering principle is shown below:
User-visible elements
Specific application modules
Common services level
Operating system interface level
Operating system
Hardware
Determine the number of abstraction levels according to your abstraction criterion.
Each abstraction level corresponds to one layer of the pattern. Having too many layers may impose
unnecessary overhead, while too few layers can result in a poor structure.
Name the layers and assign the tasks to each of them.
The task of the highest layer is the overall system task, as perceived by the client. The tasks of all other
layers are to be helpers to higher layers.
Specify the services
It is often better to locate more services in higher layers than in lower layers. The base layers should be
kept slim while higher layers can expand to cover a spectrum of applicability. This phenomenon is
also called the inverted pyramid of reuse.
Refine the layering
Iterate over steps 1 to 4.
It is not possible to define an abstraction criterion precisely before thinking about the implied layers and
their services.
Conversely, it is wrong to define components and services first and later impose a layered structure on them.
The solution is to perform the first four steps several times until a natural and stable layering evolves.
Specify an interface for each layer.
If layer J should be a black box for layer J+1, design a flat interface that offers all of layer J's services.
A white box approach is one in which layer J+1 sees the internals of layer J.
A gray box approach is a compromise between the black and white box approaches. Here layer J+1 is aware of the fact that layer J consists of three components, and addresses them separately, but does not see the internal workings of individual components.
Structure individual layers
When an individual layer is complex, it should be broken into separate components.
This subdivision can be helped by using finer-grained patterns.
Specify the communication between adjacent layers.
Push model (most often used): when layer J+1 invokes a service of layer J, any required information is passed as part of the service call.
Pull model: it is the reverse of the push model. It occurs when the lower layer fetches available
information from the higher layer at its own discretion.
Decouple adjacent layers.
For top-down communication, where requests travel top-down, we can use one-way coupling (i.e., the upper layer is aware of the next lower layer, but the lower layer is unaware of the identity of its users); here return values are sufficient to transport the results in the reverse direction.
Q7) Explain Benefits and Liabilities of Pipes and filter pattern?(Dec13/Jan14)(6 Marks)
Soln:
The Pipes and Filters architectural pattern has the following benefits
No intermediate files necessary, but possible.
Computing results using separate programs is possible without pipes, by storing intermediate results in files.
Flexibility by filter exchange.

Filters have a simple interface that allows their easy exchange within a processing pipeline.
Flexibility by recombination
This benefit combined with reusability of filter component allows you to create new processing pipelines
by rearranging filters or by adding new ones.
Reuse of filter components.
Support for recombination leads to easy reuse of filter components.
Rapid prototyping of pipelines.
Easy to prototype a data processing system from existing filters.
Efficiency by parallel processing.
It is possible to start active filter components in parallel in a multiprocessor system or a network.
The Pipes and Filters architectural pattern has the following Liabilities
Sharing state information is expensive or inflexible.
This pattern is inefficient if your processing stages need to share a large amount of global data.
Efficiency gain by parallel processing is often an illusion, because:
The cost of transferring data between filters is relatively high compared to the cost of the computation carried out by a single filter.
Some filters consume all their input before producing any output.
Context switching between threads or processes is expensive on a single-processor machine.
Synchronization of filters via pipes may start and stop filters often.
Data transformation overhead
Using a single data type for all filter input and output to achieve highest flexibility results in data
conversion overheads.
Error handling is difficult. A concrete error-recovery or error-handling strategy depends on the task you need to solve.

Q8) What are the known uses of Black Board Pattern?(Dec 13/Jan 14)(4 marks)
Sol:
Known uses:
HEARSAY-II: The first blackboard system was the HEARSAY-II speech recognition system from the early 1970's. It was developed as a natural language interface to a literature database. Its task was to answer queries about documents and to retrieve documents from a collection of abstracts of Artificial Intelligence publications. The inputs to the system were acoustic signals that were semantically interpreted and then transformed to a database query. The control component of HEARSAY-II consists of the following:
The focus of control database, which contains a table of primitive change types of blackboard changes, and those condition-parts that can be executed for each change type.
The scheduling queue, which contains pointers to condition- or action-parts of knowledge sources.
The monitor, which keeps track of each change made to the blackboard.
The scheduler, which uses experimentally-derived heuristics to calculate priorities for the condition- and action-parts waiting in the scheduling queue.
HASP/SIAP
CRYSALIS
TRICERO
SUS

Q9) Illustrate the behavior of the black board architecture based on Speech recognition and list the
steps to implement black board pattern?(June/July 13)(10 marks)
Soln:
Implementation:
Define the problem
Specify the domain of the problem
Scrutinize the input to the system
Define the o/p of the system
Detail how the user interacts with the system.
Define the solution space for the problem
Specify exactly what constitutes a top level solution
List the different abstraction levels of solutions
Organize solutions into one or more abstraction hierarchy.
Find subdivisions of complete solutions that can be worked on independently.
Divide the solution process into steps.
Define how solutions are transformed into higher level solutions.
Describe how to predict hypotheses at the same abstraction levels.
Detail how to verify predicted hypotheses by finding support for them in other levels.
Specify the kind of knowledge that can be used to exclude parts of the solution space.
Divide the knowledge into specialized knowledge sources.
These subtasks often correspond to areas of specialization.
Define the vocabulary of the blackboard
Elaborate your first definition of the solution space and the abstraction levels of your solution.
Find a representation for solutions that allows all knowledge sources to read from and contribute to the
blackboard.
The vocabulary of the blackboard cannot be defined independently of the knowledge sources and the control component.
Specify the control of the system.
The control component implements an opportunistic problem-solving strategy that determines which knowledge sources are allowed to make changes to the blackboard. The aim of this strategy is to construct a hypothesis that is acceptable as a result. The following mechanisms optimize the evaluation of knowledge sources, and so increase the effectiveness and performance of the control strategy:
Classifying changes to the blackboard into two types: one type specifies all blackboard changes that may imply a new set of applicable knowledge sources, and the other specifies all blackboard changes that do not.
Associating categories of blackboard changes with sets of possibly applicable knowledge sources.
Focusing of control: the focus contains either partial results on the blackboard or knowledge sources that should be preferred over others.
Creating a queue in which knowledge sources classified as applicable wait for their execution.
Implement the knowledge sources.
Split the knowledge sources into condition-parts and action-parts according to the needs of the control component. We can implement different knowledge sources in the same system using different technologies.
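A highly simplified blackboard sketch, assuming two invented knowledge sources with condition- and action-parts and a trivial control loop:

    blackboard = {"signal": "h-e-l-l-o", "letters": None, "word": None}

    def segment_ok(bb):  return bb["letters"] is None            # condition-part
    def segment(bb):     bb["letters"] = bb["signal"].split("-") # action-part

    def assemble_ok(bb): return bb["letters"] and bb["word"] is None
    def assemble(bb):    bb["word"] = "".join(bb["letters"])

    knowledge_sources = [(segment_ok, segment), (assemble_ok, assemble)]

    while blackboard["word"] is None:            # control component
        for cond, action in knowledge_sources:
            if cond(blackboard):
                action(blackboard)
    print(blackboard["word"])                    # hello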


Unit-V
Q1) What do you mean by broker architecture?What are the steps involved in implementing
distributed broker architecture patterns?(Dec 12/Jan 13)(June 2012)(June/July 13)(10 Marks)
Soln:
The broker architectural pattern can be used to structure distributed software systems with decoupled
components that interact by remote service invocations. A broker component is responsible for
coordinating communication, such as requests, as well as for transmitting results and exceptions.
Implementation:
1) Define an object model, or use an existing model.
Each object model must specify entities such as object names, requests, objects, values, exceptions,
supported types, interfaces and operations.
2) Decide which kind of component-interoperability the system should offer.
You can design for interoperability either by specifying a binary standard or by introducing a high-level
IDL.
IDL file contains a textual description of the interfaces a server offers to its clients.
The binary approach needs support from your programming language.
3) Specify the APIs the broker component provides for collaborating with clients and servers.
Decide whether clients should only be able to invoke server operations statically, allowing clients to bind the invocations at compile time, or whether you want to allow dynamic invocation of servers as well. This has a direct impact on the size and number of APIs.
4) Use proxy objects to hide implementation details from clients and servers.
Client-side proxies package procedure calls into messages and forward these messages to the local broker component.
Server-side proxies receive requests from the local broker and call the methods in the interface implementation of the corresponding server.
5) Design the broker component in parallel with steps 3 and 4
During design and implementations, iterate systematically through the following steps
5.1 Specify a detailed on-the-wire protocol for interacting with client side and server side proxies.
5.2 A local broker must be available for every participating machine in the network.
5.3 When a client invokes a method of a server the broker system is responsible for returning all results
and exceptions back to the original client.
5.4 If the system does not provide mechanisms for marshalling and unmarshalling parameters and results, you must include this functionality in the broker component.
5.5 If your system supports asynchronous communication b/w clients and servers, you need to provide
message buffers within the broker or within the proxies for temporary storage of messages.
5.6 Include a directory service for associating local server identifiers with the physical location of the
corresponding servers in the broker.
5.7 When your architecture requires system-unique identifiers to be generated dynamically during server registration, the broker must offer a name service for generating such names.


5.8 If your system supports dynamic method invocation the broker needs some means for maintaining
type information about existing servers.
5.9 Plan the broker's actions for when the communication with clients, other brokers, or servers fails.
6) Develop IDL compilers.
An IDL compiler translates the server interface definitions to programming language code. When many programming languages are in use, it is best to develop the compiler as a framework that allows the developer to add his own code generators.
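A minimal broker sketch, assuming an in-process registry as the directory service; all class and service names are invented:

    class Broker:
        def __init__(self):
            self.registry = {}                     # directory service

        def register(self, name, server):
            self.registry[name] = server

        def forward(self, name, method, *args):
            server = self.registry[name]           # locate the server
            return getattr(server, method)(*args)  # dispatch, return result

    class WeatherServer:
        def report(self, city):
            return f"sunny in {city}"

    class ClientProxy:                             # hides broker details
        def __init__(self, broker, name):
            self.broker, self.name = broker, name
        def report(self, city):                    # packages the call
            return self.broker.forward(self.name, "report", city)

    b = Broker()
    b.register("weather", WeatherServer())
    print(ClientProxy(b, "weather").report("Mysore"))   # sunny in Mysore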
Q2)Explain with neat diagram, dynamic scenario of MVC?(June 2012)(Dec 12/Jan 13) (10 Marks)
Soln:
Dynamics: The following scenarios depict the dynamic behavior of MVC. For simplicity only one view-controller pair is shown in the diagrams.
Scenario I shows how user input that results in changes to the model triggers the change-propagation
mechanism:
The controller accepts user input in its event-handling procedure, interprets the event, and activates a
service procedure of the model.
The model performs the requested service. This results in a change to its internal data.
The model notifies all views and controllers registered with the change-propagation mechanism of the
change by calling their update procedures.
Each view requests the changed data from the model and redisplays itself on the screen.
Each registered controller retrieves data from the model to enable or disable certain user functions.
The original controller regains control and returns from its event-handling procedure.

Scenario II shows how the MVC triad is initialized. The following steps occur:
The model instance is created, which then initializes its internal data structures.
A view object is created. This takes a reference to the model as a parameter for its initialization.
The view subscribes to the change-propagation mechanism of the model by calling the attach procedure.
The view continues initialization by creating its controller. It passes references both to the model and to
itself to the controller's initialization procedure.
The controller also subscribes to the change-propagation mechanism by calling the attach procedure.
After initialization, the application begins to process events.

Q3) explain MVC pattern? (June/July 13)(Dec 13/Jan 14)(10marks)

Soln:
The MVC architectural pattern divides an interactive application into three components.
The model contains the core functionality and data.
Views display information to the user.
Controllers handle user input.
Views and controllers together comprise the user interface. A change-propagation mechanism ensures consistency between the user interface and the model.
Example: Consider a simple information system for political elections with proportional representation. This offers a spreadsheet for entering data and several kinds of tables and charts for presenting the current results. Users can interact with the system via a graphical interface. All information displays must reflect changes to the voting data immediately.
Context: Interactive applications with a flexible human-computer interface.
Problem: Different users place conflicting requirements on the user interface. A typist enters information into forms via the keyboard. A manager wants to use the same system mainly by clicking icons and buttons. Consequently, support for several user interface paradigms should be easily incorporated. How do you modularize the user interface functionality of a web application so that you can easily modify the individual parts? The following forces influence the solution:
The same information is presented differently in different windows, for example in a bar or pie chart.
The display and behavior of the application must reflect data manipulations immediately.
Changes to the user interface should be easy, and even possible at run-time.
Supporting different look-and-feel standards or porting the user interface should not affect code in the core of the application.
Solution:
MVC divides an interactive application into the three areas: processing, output and input.
The model component encapsulates core data and functionality, and is independent of output and input.
View components display information to the user; a view obtains the data from the model. There can be multiple views of the model.
Each view has an associated controller component. Controllers receive input (usually as mouse or keyboard events); events are translated to service requests for the model or the view. The user interacts with the system solely through controllers.
The separation of the model from the view and controller components allows multiple views of the same model.
Structure:
Model component:
Contains the functional core of the application.
Registers dependent views and controllers.
Notifies dependent components about data changes (change-propagation mechanism).
View component:
Presents information to the user.
Retrieves data from the model.
Creates and initializes its associated controller.
Implements the update procedure.
Controller component:
Accepts user input as events (mouse event, keyboard event etc).
Translates events to service requests for the model or display requests for the view.
An object-oriented implementation of MVC would define a separate class for each component. In a C++ implementation, view and controller classes share a common parent that defines the update interface. This is shown in the following diagram.

Dynamics: The following scenarios depict the dynamic behavior of MVC. For simplicity only one view-controller pair is shown in the diagrams.
Scenario I shows how user input that results in changes to the model triggers the change-propagation mechanism:
The controller accepts user input in its event-handling procedure, interprets the event, and activates a service procedure of the model.
The model performs the requested service. This results in a change to its internal data.
The model notifies all views and controllers registered with the change-propagation mechanism of the change by calling their update procedures.
Each view requests the changed data from the model and redisplays itself on the screen.
Each registered controller retrieves data from the model to enable or disable certain user functions.
The original controller regains control and returns from its event-handling procedure.
Scenario II shows how the MVC triad is initialized. The following steps occur:
The model instance is created, which then initializes its internal data structures.
A view object is created. This takes a reference to the model as a parameter for its initialization.
The view subscribes to the change-propagation mechanism of the model by calling the attach procedure.
The view continues initialization by creating its controller. It passes references both to the model and to itself to the controller's initialization procedure.
The controller also subscribes to the change-propagation mechanism by calling the attach procedure.
After initialization, the application begins to process events.


Implementation:
1) Separate human-computer interaction from core functionality.
Analyze the application domain and separate the core functionality from the desired input and output behavior.
2) Implement the change-propagation mechanism.
Follow the Publisher-Subscriber design pattern for this, and assign the role of the publisher to the model.
3) Design and implement the views.
Design the appearance of each view and implement all the procedures associated with views.
4) Design and implement the controllers.
For each view of the application, specify the behavior of the system in response to user actions. We assume that the underlying platform delivers every action of a user as an event. A controller receives and interprets these events using a dedicated procedure.
5) Design and implement the view controller relationship.
A view typically creates its associated controller during its initialization.
6) Implement the setup of MVC.
The setup code first initializes the model, then creates and initializes the views.
After initialization, event processing is started.
Because the model should remain independent of specific views and controllers, this set up code should
be placed externally.
7) Dynamic view creation
If the application allows dynamic opening and closing of views, it is a good idea to provide a component
for managing open views.
8) Pluggable controllers.
The separation of control aspects from views supports the combination of different controllers with a view. This flexibility can be used to implement different modes of operation.
9) Infrastructure for hierarchical views and controllers
Apply the composite pattern to create hierarchically composed views.
If multiple views are active simultaneously, several controllers may be interested in events at the same
time.
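A compact sketch of the triad along the lines of the election example; the model, its vote-counting service and all names are invented:

    class Model:
        def __init__(self):
            self.votes, self.observers = {}, []
        def attach(self, obs):                       # register dependents
            self.observers.append(obs)
        def cast_vote(self, party):                  # service procedure
            self.votes[party] = self.votes.get(party, 0) + 1
            for obs in self.observers:               # change propagation
                obs.update()

    class TableView:
        def __init__(self, model):
            self.model = model
            model.attach(self)
        def update(self):                            # redisplay
            print("table:", self.model.votes)

    class Controller:
        def __init__(self, model):
            self.model = model
        def handle(self, event):                     # translate user input
            self.model.cast_vote(event)

    m = Model()
    TableView(m)
    Controller(m).handle("Party A")    # table: {'Party A': 1}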
Q4) Discuss the most common scenario illustrating dynamic behavior of Broker system?(Dec 12)(10 Marks)
Soln:
Scenario I illustrates the behaviour when a server registers itself with the local broker component:


Scenario II illustrates the behaviour when a client sends a request to a local server. In this scenario we
describe a synchronous invocation, in which the client blocks until it gets a response from the server. The
broker may also support asynchronous invocations, allowing clients to execute further tasks without
having to wait for a response.

Q5) Discuss the Consequences of Presentation abstraction control architectural pattern?(Dec 12) (Dec
12/Jan13)( (10 Marks)
Soln:
Consequences:
Benefits:
Separation of concerns
o Different semantic concepts in the application domain are represented by separate agents.
Support for change and extension
o Changes within the presentation or abstraction components of a PAC agent do not affect other agents in the system.
Support for multi-tasking
o PAC agents can be distributed easily to different threads, processes or machines.
o Multi-tasking also facilitates multi-user applications.
Liabilities:
Increased system complexity
o Implementation of every semantic concept within an application as its own PAC agent may result in a complex system structure.
Complex control components
o The individual roles of control components should be strongly separated from each other. The implementations of these roles should not depend on specific details of other agents.
o The interface of control components should be independent of internal details. It is the responsibility of the control component to perform any necessary interface and data adaptation.
Efficiency
o Overhead in the communication between PAC agents may impact system efficiency. For example, all intermediate-level agents are involved in the data exchange if a bottom-level agent retrieves data from a top-level agent.
Applicability
o The smaller the atomic semantic concepts of an application are, and the greater the similarity of their user interfaces, the less applicable this pattern is.
Q6) What are known uses of PAC?(Dec 13/Jan 14)
Soln:
Known uses:
Network traffic management
Gathering traffic data from switching units.
Threshold checking and generation of overflow exceptions.
Logging and routing of network exceptions.
Visualization of traffic flow and network exceptions.
Displaying various user-configurable views of the whole network.
Statistical evaluations of traffic data.
Access to historic traffic data.

System administration and configuration.


Mobile robot
Provide the robot with a description of the environment it will work in, places in this environment, and
routes between places.
Subsequently modify the environment.
Specify missions for the robot.
Control the execution of missions.
Observe the progress of missions.

UNIT-VI

Q1) State and explain the properties of reflection pattern (June/July 13) (Dec13/Jan 14)(10marks)
Soln:
The reflection architectural pattern provides a mechanism for changing structure and behavior of software
systems dynamically. It supports the modification of fundamental aspects, such as the type structures and
function call mechanisms. In this pattern, an application is split into two parts:
A Meta level provides information about selected system properties and makes the s/w self aware.
A base level includes the application logic. Changes to information kept in the Meta level affect subsequent base-level behavior.
Context: building systems that support their own modification a priori.
Problem:
Designing a system that meets a wide range of different requirements a priori can be an overwhelming task. A better solution is to specify an architecture that is open to modification and extension, i.e., we have to design for change and evolution.
Several forces are associated with the problem:
Changing software is tedious, error-prone and often expensive.
Adaptable software systems usually have a complex inner structure. Aspects that are subject to change are encapsulated within separate components.
The more techniques that are necessary for keeping a system changeable, the more awkward and complex its modification becomes.
Changes can be of any scale, from providing shortcuts for commonly-used commands to adapting an application framework for a specific customer.
Even fundamental aspects of software systems can change, for example the communication mechanisms between components.

Solution:
Make the software self-aware, and make selected aspects of its structure and behavior accessible for adaptation and change. This leads to an architecture that is split into two major parts:
A Meta level
A base level
The Meta level provides a self-representation of the s/w to give it knowledge of its own structure and behavior, and consists of so-called Meta objects (they encapsulate and represent information about the software), e.g. type structures, algorithms or function call mechanisms.
The base level defines the application logic. Its implementation uses the Meta objects to remain independent of those aspects that are likely to change.
An interface is specified for manipulating the Meta objects. It is called the Meta object protocol (MOP) and allows clients to specify particular changes.
Structure:
1. Meta level
2. Base level
3. Meta object protocol (MOP)
1. Meta level:
The Meta level consists of a set of Meta objects. Each Meta object encapsulates selected information about a single aspect of the structure, behavior, or state of the base level. There are three sources for such information:
It can be provided by the run-time environment of the system, such as C++ type identification objects.
It can be user-defined, such as a function call mechanism.
It can be retrieved from the base level at run time.
All Meta objects together provide a self-representation of an application. What you represent with Meta objects depends on what should be adaptable: only system details that are likely to change, or which vary from customer to customer, should be encapsulated by Meta objects. The interface of a Meta object allows the base level to access the information it maintains or the services it offers.
2. Base level:
It models and implements the application logic of the software. Its components represent the various services the system offers, as well as their underlying data model. It uses the information and services provided by the Meta objects, such as location information about components and function call mechanisms; this allows the base level to remain flexible. Base level components are either directly connected to the Meta objects on which they depend, or submit requests to them through special retrieval functions.

3.Meta object protocol (MOP)


Serves as an external interface to the Meta level, and makes the implementation of a reflective system accessible in a defined way. Clients of the MOP can specify modifications to Meta objects or their relationships using the base level. The MOP itself is responsible for performing these changes; this provides a reflective application with explicit control over its own modification. The Meta object protocol is usually designed as a separate component. This supports the implementation of functions that operate on several Meta objects. To perform changes, the MOP needs access to the internals of Meta objects, and sometimes also to base level components.

One way of providing this access is to allow the MOP to operate directly on their internal states. Another way (safer, but less efficient) is to provide a special interface for their manipulation, only accessible by the MOP.
Implementation: iterate through any subsequence of the following steps if necessary.
1. Define a model of the application.
Analyze the problem domain and decompose it into an appropriate s/w structure.
2. Identify varying behavior.
Analyze the developed model and determine which of the application services may vary and which remain stable. The following are examples of system aspects that often vary:
Real-time constraints
Transaction protocols
Inter-process communication mechanisms
Behavior in case of exceptions
Algorithms for application services
3. Identify structural aspects of the system which, when changed, should not affect the implementation of the base level.
4. Identify system services that support both the variation of application services identified in step 2 and the independence of structural details identified in step 3. Examples of system services are:
Resource allocation
Garbage collection
Page swapping
Object creation


5. Define the meta objects


For every aspect identified in the three previous steps, define appropriate Meta objects. Encapsulating behavior is supported by several domain-independent design patterns, such as Objectifier, Strategy, Bridge, Visitor and Abstract Factory.
6. Define the MOP
There are two options for implementing the MOP.
Integrate it with Meta objects. Every Meta object provides those functions of the MOP that operate on it.
Implement the MOP as a separate component.
Robustness is a major concern when implementing the MOP. Errors in change specifications should be
detected wherever possible.
7. Define the base level
Implement the functional core and user interface of the system according to the analysis model developed
in step 1.
Use Meta objects to keep the base level extensible and adaptable. Connect every base level component
with Meta objects that provide system information on which they depend, such as type information etc.
Provide base level components with functions for maintaining the relationships with their associated Meta
objects. The MOP must be able to modify every relationship b/w base level and Meta level.

The general structure of a reflective architecture is very much like a Layered system
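A small sketch of the idea, assuming a single invented meta object (a call mechanism) and one MOP function as the only way to change it:

    meta = {"call_mechanism": lambda f, a: f(a)}     # meta object

    def mop_set(aspect, value):                      # meta object protocol
        if aspect not in meta:
            raise KeyError("unknown aspect")         # robustness check
        meta[aspect] = value

    def base_level_service(x):                       # application logic
        return meta["call_mechanism"](lambda v: v * 2, x)

    print(base_level_service(21))                    # 42
    mop_set("call_mechanism", lambda f, a: f(a) + 1) # change behavior
    print(base_level_service(21))                    # 43, no source edit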

Q2) What are the steps involved in implementing a microkernel system? (June 12)(June/July 13)(Dec 13/Jan 14)(June/July 2014)(12 Marks)


Soln:
Implementation:
1. Analyze the application domain.
Perform a domain analysis and identify the core functionality necessary for implementing external servers.
2. Analyze external servers.
That is, analyze the policies the external servers are going to provide.
3. Categorize the services
Group all functionality into semantically independent categories.
4. Partition the categories
Separate the categories into services that should be part of the microkernel, and those that should be
available as internal servers.
5. Find a consistent and complete set of operations and abstractions for every category you identified in step 1.
6. Determine the strategies for request transmission and retrieval.
Specify the facilities the microkernel should provide for communication b/w components.
Communication strategies you integrate depend on the requirements of the application domain.
7. Structure the microkernel component.
Design the microkernel using the Layers pattern to separate system-specific parts from system-independent parts of the kernel.
8. Specify the programming interfaces of the microkernel.
To do so, you need to decide how these interfaces should be accessible externally. You must take into account whether the microkernel is implemented as a separate process or as a module that is physically shared by other components; in the latter case, you can use conventional method calls to invoke the methods of the microkernel.
9. The microkernel is responsible for managing all system resources, such as memory blocks, devices, or device contexts (a handle to an output area in a GUI implementation).
10. Design and implement the internal servers as separate processes or shared libraries.
Perform this in parallel with steps 7-9, because some of the microkernel services need to access internal servers. It is helpful to distinguish between active and passive servers: active servers are implemented as processes, passive servers as shared libraries. Passive servers are always invoked by directly calling their interface methods, whereas an active server process waits in an event loop for incoming requests.
11. Implement the external servers.
Each external server is implemented as a separate process that provides its own service interface. The internal architecture of an external server depends on the policies it comprises. Specify how external servers dispatch requests to their internal procedures.
12. Implement the adapters.
The primary task of an adapter is to provide operations to its clients that are forwarded to an external server. You can design the adapter either as a conventional library that is statically linked to a client during compilation, or as a shared library dynamically linked to the client on demand.
13. Develop client applications, or use existing ones, for the ready-to-run microkernel system.
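A toy sketch of the participants, assuming message passing as the kernel's atomic mechanism; all names are invented:

    class Microkernel:
        def __init__(self):
            self.servers = {}
        def register(self, name, server):       # resource handling
            self.servers[name] = server
        def send(self, name, request):          # atomic mechanism
            return self.servers[name].handle(request)

    class PosixServer:                          # external server: a policy
        def handle(self, request):
            return f"posix handled {request!r}"

    class Adapter:                              # in the client's address space
        def __init__(self, kernel, personality):
            self.kernel, self.personality = kernel, personality
        def open(self, path):                   # portable client interface
            return self.kernel.send(self.personality, ("open", path))

    k = Microkernel()
    k.register("posix", PosixServer())
    print(Adapter(k, "posix").open("/tmp/x"))   # posix handled ('open', '/tmp/x')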
Q3) Explain the known uses of reflection pattern (Dec 12)(June 12)(Dec 12/Jan 13)(10 Marks)
Soln:
Known uses:
CLOS.
This is the classic example of a reflective programming language [Kee89]. In CLOS, operations defined
for objects are called generic functions, and their processing is referred to as generic function invocation.
Generic function invocation is divided into three phases:
The system first determines the methods that are applicable to a given invocation.
It then sorts the applicable methods in decreasing order of precedence.
The system finally sequences the execution of the list of applicable methods.
MIP
It is a run-time type information system for C++. The functionality of MIP is separated into four layers:
The first layer includes information and functionality that allows software to identify and compare types.
The second layer provides more detailed information about the type system of an application.
The third layer provides information about relative addresses of data members, and offers functions for
creating 'empty' objects of user-defined types.
The fourth layer provides full type information, such as that about friends of a class, protection of data
members, or argument and return types of function members.
PGen
It allows an application to store and read arbitrary C++ object structures.
NEDIS
NEDIS includes a meta level called run-time data dictionary. It provides the following services and
system information:
Properties for certain attributes of classes, such as their allowed value ranges.
Functions for checking attribute values against their required properties.
Default values for attributes of classes, used to initialize new objects.
Functions specifying the behavior of the system in the event of errors
Country-specific functionality, for example for tax calculation.
Information about the 'look and feel' of the software, such as the layout of input masks or the language to
be used in the user interface.
OLE 2.0
It provides functionality for exposing and accessing type information about OLE objects and their
interfaces.
Q4) Explain the components of microkernel pattern (Dec 12) (Dec 12/Jan 13)(10 Marks)
Soln:
Microkernel pattern defines 5 kinds of participating components.
Internal servers
External servers
Adapters
Clients
Microkernel
Microkernel
The microkernel represents the main component of the pattern.
It implements central services such as communication facilities or resource handling.
The microkernel is also responsible for maintaining system resources such as processes or files.
It controls and coordinates the access to these resources.
A microkernel implements atomic services, which we refer to as mechanisms.
These mechanisms serve as a fundamental base on which more complex functionality called policies are
constructed.

An internal server (subsystem):
Extends the functionality provided by the microkernel. It represents a separate component that offers additional functionality. The microkernel invokes the functionality of internal servers via service requests. Internal servers can therefore encapsulate some dependencies on the underlying hardware or software system.
An external server (personality):
Uses the microkernel for implementing its own view of the underlying application domain. Each external server runs in a separate process. It receives service requests from client applications using the communication facilities provided by the microkernel, interprets these requests, executes the appropriate services, and returns its results to clients. Different external servers implement different policies for specific application domains.
Client:
It is an application that is associated with exactly one external server. It only accesses the programming interfaces provided by the external server. A problem arises if a client accesses the interfaces of its external server directly (a direct dependency): such a system does not support changeability, and if external servers emulate existing application platforms, clients will not run without modifications.
Adapter (emulator):
Represents the interfaces between clients and their external servers, and allows clients to access the services of their external server in a portable way. Adapters are part of the clients' address space.
The following OMT diagram shows the static structure of a microkernel system.

Q5) Explain the advantages and disadvantages of reflexive architectural pattern?(June 12)(6
Marks)
Soln:
The reflection architecture provides the following benefits:
No explicit modification of source code:
You just specify a change by calling a function of the MOP.
Changing a software system is easy:
The MOP provides a safe and uniform mechanism for changing s/w. It hides all specific techniques, such as the use of visitors or factories, from the user.
Support for many kinds of change:
Because Meta objects can encapsulate every aspect of system behavior, state and structure.
The reflection architecture also has liabilities:
Modifications at the Meta level may cause damage:
Incorrect modifications from users can cause serious damage to the s/w or its environment, e.g. changing a database schema without suspending the execution of objects in the applications that use it, or passing code to the MOP that includes semantic errors. Robustness of the MOP is therefore of great importance.
Dept. of CSE, SJBIT

Page 52

Software Architecture

10IS81

Software Architecture

10IS81
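As a rough illustration of the base-level/meta-level split, the sketch below uses Java's built-in reflection API (java.lang.reflect); the Service class and greet method are hypothetical, and a full MOP would additionally allow safe modification, not just inspection and invocation.

```java
// Hypothetical sketch of the reflection idea in Java: the meta level
// (java.lang.reflect) lets a program inspect and invoke base-level code
// without explicit modification of its source code.
import java.lang.reflect.Method;

public class ReflectionDemo {
    public static class Service {
        public String greet(String name) { return "hello " + name; }
    }

    public static void main(String[] args) throws Exception {
        Service s = new Service();
        // Meta objects: Class and Method describe structure and behavior.
        for (Method m : s.getClass().getDeclaredMethods())
            System.out.println("meta info: " + m.getName());
        // Invoke through the meta level instead of a hard-coded call.
        Method greet = s.getClass().getMethod("greet", String.class);
        System.out.println(greet.invoke(s, "world"));
    }
}
```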

Q2) Give the structure of Whole-Part design with CRC? (June/July 13)(Dec 13/Jan 14)(5 Marks)
Soln:
The Whole-Part pattern introduces two types of participant:
Whole
A Whole object represents an aggregation of smaller objects, which we call parts.
It forms a semantic grouping of its parts in that it coordinates and organizes their collaboration.
Some methods of the whole may be just placeholders for specific part services; when such a method is
invoked, the whole only calls the relevant part services and returns the result to the client.
Part
Each part object is embedded in exactly one whole. Two or more wholes cannot share the same part.
Each part is created and destroyed within the life span of the whole.


The static relationships between a whole and its parts are illustrated in the OMT diagram below.

Unit VII
Q1) What are the application areas of master slave pattern (Dec 13/Jan 14) (10 Marks)
Soln:
There are 3 application areas for master slave pattern.
Master-slave for fault tolerance
In this variant the master just delegates the execution of a service to a fixed number of replicated
implementations, each represented by a slave.
Master-slave for parallel computation
In this variant the master divides a complex task into a number of identical sub-tasks, each of which is
executed in parallel by a separate slave.
Master-slave for computational accuracy
In this variant the execution of a service is delegated to at least three different implementations, each of
which is a separate slave.
Other variants
Slaves as processes
To handle slaves located in separate processes, you can extend the original Master-Slave structure with
two additional components
Slaves as threads
In this variant the master creates the threads, launches the slaves, and waits for all threads to complete
before continuing with its own computation.
Master-slave with slave coordination
In this case the computation of all slaves must be regularly suspended for each slave to coordinate itself
with the slaves on which it depends, after which the slaves resume their individual computation.


Q3) What are the variants of Proxy Pattern?(june 2012)(5 Marks)


Soln:
Variants:
Remote proxy:
Clients of remote components should be shielded from network addresses and inter-process communication protocols.
Protection proxy:
Components must be protected from unauthorized access.
Cache proxy:
Multiple local clients can share results from remote components.
Synchronization proxy:
Multiple simultaneous accesses to a component must be synchronized.
Counting proxy:
Accidental deletion of components must be prevented or usage statistics collected

Virtual proxy:
Processing or loading a component is costly while partial information about the component may be
sufficient.
Firewall proxy:
Local clients should be protected from the outside world.
Q4) Discuss the 5 steps implementation of master slave pattern? (Dec 2012)(June/July 13)(10
Marks)
Soln:
Implementation:
1. Divide work:
Specify how the computation of the task can be split into a set of equal subtasks. Identify the sub-services
that are necessary to process a subtask.
2. Combine sub-task results:
Specify how the final result of the whole service can be computed with the help of the results obtained
from processing individual sub-tasks.
3. Specify the cooperation between master and slaves:
Define an interface for the sub-service identified in step 1. It will be implemented by the slave and used by
the master to delegate the processing of individual subtasks. One option for passing subtasks from the
master to the slaves is to include them as a parameter when invoking the sub-service.
Another option is to define a repository where the master puts subtasks and the slaves fetch them.
4. Implement the slave components according to the specifications developed in previous step.
5. Implement the master according to the specifications developed in step 1 to 3
There are two options for dividing a task into subtasks.
The first is to split work into a fixed number of subtasks.
The second option is to define as many subtasks as necessary or possible.
Use strategy pattern to support dynamic exchange and variations of algorithms for subdividing a task.
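A minimal sketch of these steps, assuming Java's ExecutorService as the thread machinery and summation as the divisible task (both are illustrative choices, not part of the pattern itself):

```java
// Hypothetical master-slave sketch: the master splits a summation task into
// equal subtasks, delegates each to a slave, and combines the partial results.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class MasterSlaveDemo {
    // Slave: processes one subtask via the sub-service interface (Callable).
    static Callable<Long> slave(int[] data, int from, int to) {
        return () -> {
            long sum = 0;
            for (int i = from; i < to; i++) sum += data[i];
            return sum;
        };
    }

    // Master: divides work, delegates to slaves, combines subtask results.
    static long master(int[] data, int slaves) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(slaves);
        int chunk = (data.length + slaves - 1) / slaves;
        List<Future<Long>> partials = new ArrayList<>();
        for (int s = 0; s < slaves; s++) {
            int from = s * chunk, to = Math.min(data.length, from + chunk);
            partials.add(pool.submit(slave(data, from, to)));   // step 3: delegate subtask
        }
        long total = 0;
        for (Future<Long> f : partials) total += f.get();       // step 2: combine results
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        int[] data = new int[1000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        System.out.println(master(data, 4));                    // prints 499500
    }
}
```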
Q5) Define Proxy Design.Discuss benefits and liabilities of same?(June/July 13)(10 Marks)
Soln:
The Proxy design pattern makes the clients of a component communicate with a representative rather than
with the component itself. Introducing such a placeholder can serve many purposes, including enhanced
efficiency, easier access and protection from unauthorized access.

The Proxy pattern provides the following Benefits:


Enhanced efficiency and lower cost
The Virtual Proxy variant helps to implement a 'load-on-demand' strategy. This allows you to avoid
unnecessary loads from disk and usually speeds up your application.
Decoupling clients from the location of server components
By putting all location information and addressing functionality into a Remote Proxy variant, clients are
not affected by migration of servers or changes in the networking infrastructure. This allows client code
to become more stable and reusable.
Separation of housekeeping code from functionality.
A proxy relieves the client of burdens that do not inherently belong to the task the client is to perform.
The Proxy pattern has the following Liabilities:
Less efficiency due to indirection:
All proxies introduce an additional layer of indirection.
Overkill via sophisticated strategies:
Be careful with intricate strategies for caching or loading on demand; they do not always pay off.
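The following is a small illustrative sketch of the Virtual Proxy variant in Java; the Image/RealImage/VirtualProxy names are hypothetical, and disk loading is simulated with a print statement.

```java
// Hypothetical virtual-proxy sketch: the proxy defers the costly creation of
// the original component until a client actually needs it (load on demand).
interface Image {                       // abstract base shared by proxy and original
    void display();
}

class RealImage implements Image {      // the original: expensive to construct
    private final String file;
    RealImage(String file) {
        this.file = file;
        System.out.println("loading " + file + " from disk");   // the costly step
    }
    public void display() { System.out.println("displaying " + file); }
}

class VirtualProxy implements Image {   // placeholder holding a handle to the original
    private final String file;
    private RealImage original;         // created only on first use
    VirtualProxy(String file) { this.file = file; }
    public void display() {
        if (original == null) original = new RealImage(file);  // load on demand
        original.display();
    }
}

public class ProxyDemo {
    public static void main(String[] args) {
        Image img = new VirtualProxy("photo.png");  // nothing loaded yet
        img.display();                              // loads, then displays
        img.display();                              // reuses the loaded original
    }
}
```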
Q6) How the whole-part can be implemented? Also mention the advantages and disadvantages of
whole-part(Dec 2012) (June/July 2014) (10 Marks)
Soln:
Implementation:
1. Design the public interface of the whole
Analyze the functionality the whole must offer to its clients.
Only consider the clients view point in this step.
Think of the whole as an atomic component that is not structured into parts.
2. Separate the whole into parts, or synthesize it from existing ones.
There are two approaches to assembling the parts: either assemble a whole bottom-up from existing
parts, or decompose it top-down into smaller parts.
A mixture of both approaches is often applied.
3. If you follow a bottom up approach, use existing parts from component libraries or class libraries and
specify their collaboration.
4. If you follow a top-down approach, partition the Whole's services into smaller collaborating
services and map these collaborating services to separate parts.
5. Specify the services of the whole in terms of services of the parts.
Decide whether all part services are called only by their whole, or if parts may also call each other. There
are two possible ways to call a Part service:
If a client request is forwarded to a Part service, the Part does not use any knowledge about the execution
context of the Whole, relying on its own environment instead.
A delegation approach requires the Whole to pass its own context information to the Part.
6. Implement the parts
If parts are whole-part structures themselves, design them recursively starting with step 1. If not, reuse
existing parts from a library.
7. Implement the whole
Implement services that depend on part objects by invoking their services from the whole.
The whole-part pattern offers several Benefits:

Changeability of parts:
Part implementations may even be completely exchanged without any need to modify other parts or
clients.
Separation of concerns:
Each concern is implemented by a separate part.
Reusability in two aspects:
Parts of a whole can be reused in other aggregate objects.
Encapsulation of parts within a whole prevents clients from scattering the use of part objects all over its
source code.
The whole-part pattern suffers from the following Liabilities:
Lower efficiency through indirection:
Since the Whole builds a wrapper around its Parts, it introduces an additional level of indirection between
a client request and the Part that fulfils it.
Complexity of decomposition into parts:
An appropriate composition of a Whole from different Parts is often hard to find, especially when a
bottom-up approach is applied.
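A minimal sketch of a Whole encapsulating its Parts, with hypothetical Car/Engine/Wheel classes chosen only to show the forwarding of client requests to part services:

```java
// Hypothetical whole-part sketch: the Whole (Car) encapsulates its Parts
// (Engine, Wheel) and exposes only a semantic grouping of their services.
class Engine {                               // a part: created/destroyed with its whole
    void start() { System.out.println("engine started"); }
}

class Wheel {
    private final int position;
    Wheel(int position) { this.position = position; }
    void rotate() { System.out.println("wheel " + position + " rotating"); }
}

class Car {                                  // the whole: coordinates part collaboration
    private final Engine engine = new Engine();     // parts are not shared with other wholes
    private final Wheel[] wheels = {
        new Wheel(1), new Wheel(2), new Wheel(3), new Wheel(4)
    };

    void drive() {                           // placeholder method forwarding to part services
        engine.start();
        for (Wheel w : wheels) w.rotate();
    }
}

public class WholePartDemo {
    public static void main(String[] args) {
        new Car().drive();                   // clients never touch Engine or Wheel directly
    }
}
```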
Q7) Briefly explain benefits of Master Slave Pattern? (June 2012) (June/July 2014) (6 Marks)
Soln:
The Master-Slave design pattern provides several Benefits:
Exchangeability and extensibility
By providing an abstract slave class, it is possible to exchange existing slave implementations or add new
ones without major changes to the master.
Separation of concerns
The introduction of the master separates slave and client code from the code for partitioning work,
delegating work to slaves, collecting the results from the slaves, computing the final result and handling
slave failure or inaccurate slave results.
Efficiency
The Master-Slave pattern for parallel computation enables you to speed up the performance of computing
a particular service when implemented carefully.
Q8) Briefly comment on the different steps carried out to realize the implementation of Proxy
pattern?(June/July 2011)(Dec 13/Jan 14)(June/July 2013)(6 Marks)

Soln:
1. Identify all responsibilities for dealing with access control to a component.
Attach these responsibilities to a separate component, the proxy.
2. If possible, introduce an abstract base class that specifies the common parts of the interfaces of
both the proxy and the original.
Derive the proxy and the original from this abstract base.
3. Implement the proxy's functions.
To this end, check the roles specified in the first step.
4. Free the original and its clients from responsibilities that have migrated into the proxy.
5. Associate the proxy and the original by giving the proxy a handle to the original. This handle may be
a pointer, a reference, an address, an identifier, a socket, a port, and so on.
6. Remove all direct relationships between the original and its clients.
Replace them by analogous relationships to the proxy.
Q9) Explain the variants of whole-part design pattern in brief?(Dec 12)(10 Marks)
Soln:
Variants:
Shared parts:
This variant relaxes the restriction that each Part must be associated with exactly one Whole by allowing
several Wholes to share the same Part.
Assembly parts
In this variant the Whole may be an object that represents an assembly of smaller objects.
Container contents
In this variant a container is responsible for maintaining differing contents
Collection members
This variant is a specialization of Container-Contents, in that the Part objects all have the same type.
Composite pattern
It is applicable to Whole-Part hierarchies in which the Wholes and their Parts can be treated uniformly, that is, in which both implement the same abstract interface.
Q10)Explain Dynamic part of master-slave design?(Dec 12)(Dec 13/Jan 14)(8 Marks)
Soln:
The scenario comprises six phases:
A client requests a service from the master.
The master partitions the task into several equal sub-tasks.
The master delegates the execution of these sub-tasks to several slave instances, starts their execution and
waits for the results they return.
The slaves perform the processing of the sub-tasks and return the results of their computation back to the
master.
The master computes a final result for the whole task from the partial results received from the slaves.
The master returns this result to the client.


Unit-VIII

Q11) Give any 2 benefits of proxy design pattern?(Dec 12)(2 Marks)


Soln:
The Proxy pattern provides the following Benefits:
Enhanced efficiency and lower cost
The Virtual Proxy variant helps to implement a 'load-on-demand' strategy. This allows you to avoid
unnecessary loads from disk and usually speeds up your application.
Decoupling clients from the location of server components
By putting all location information and addressing functionality into a Remote Proxy variant, clients are
not affected by migration of servers or changes in the networking infrastructure. This allows client code
to become more stable and reusable.

Q12)List the known uses and Liabilities of Proxy Pattern?(Dec 2012)(6 Marks)
Soln:
The Proxy pattern has the following Liabilities:

Less efficiency due to indirection


All proxies introduce an additional layer of indirection.

Over kill via sophisticated strategies


Be careful with intricate strategies for caching or loading on demand they do not always pay.
Known uses:

NeXTSTEP
The Proxy pattern is used in the NeXTSTEP operating system to provide local stubs for remote objects.
Proxies are created by a special server on the first access to the remote object.

OMG-CORBA
It uses the Proxy pattern for two purposes. So called 'client-stubs', or IDL-stubs, guard clients against the
concrete implementation of their servers and the Object Request Broker.

OLE
In Microsoft OLE servers may be implemented as libraries dynamically linked to the address space of the
client, or as separate processes. Proxies are used to hide whether a particular server is local or
remote from a client.

WWW proxy
It gives people inside the firewall concurrent access to the outside world. Efficiency is increased by
caching recently transferred files.

Orbix
It is a concrete OMG-CORBA implementation that uses remote proxies. A client can bind to an original by
specifying its unique identifier.


Q1) Write a note on View Catalog?(Dec 12/Jan 13)(4 Marks)


Soln:
View Catalog
A view catalog is the reader's introduction to the views that the architect has chosen to include in the suite
of documentation. There is one entry in the view catalog for each view given in the documentation suite. Each entry should give the following:
The name of the view and what style it instantiates
A description of the view's element types, relation types, and properties
A description of what the view is for
Management information about the view document, such as the latest version, the location of the
view document, and the owner of the view document

Q2) Explain with neat diagram,evolutionary delivery life cycle model?(june 2012)(10
Marks)
Soln:
Any organization that embraces architecture as a foundation for its software development processes needs
to understand its place in the life cycle. Several life-cycle models exist in the literature, but one that puts
architecture squarely in the middle of things is the evolutionary delivery life cycle model shown in figure
7.1.


The intent of this model is to get user and customer feedback and iterate through several releases before
the final release. The model also allows the adding of functionality with each iteration and the delivery of
a limited version once a sufficient set of features has been developed.
Q3) What are the suggested standard organization points for interface documentation? (June 2012)(June/July 13)(12 Marks)
Soln:
An interface is a boundary across which two independent entities meet and interact or communicate with
each other.
1. Interface identity
When an element has multiple interfaces, identify the individual interfaces to distinguish them. This
usually means naming them. You may also need to provide a version number.
2. Resources provided:
The heart of an interface document is the resources that the element provides.
Resource syntax: this is the resource's signature
Resource Semantics:
Assignment of values of data
Changes in state
Events signaled or message sent
how other resources will behave differently in future
humanly observable results
Resource Usage Restrictions
initialization requirements
limit on number of actors using resource
3. Data type definitions:
If any interface resources employ a data type other than one provided by the underlying
programming language, the architect needs to communicate the definition of that type. If it is defined by
another element, then a reference to the definition in that element's documentation is sufficient.
4. Exception definitions:
These describe exceptions that can be raised by the resources on the interface. Since the same exception
might be raised by more than one resource, it is often convenient to simply list each resource's exceptions
but define them in a dictionary collected separately.
5. Variability provided by the interface.
Does the interface allow the element to be configured in some way? These configuration parameters and
how they affect the semantics of the interface must be documented.
6. Quality attribute characteristics:
The architect needs to document what quality attribute characteristics (such as performance or reliability)
the interface makes known to the element's users
7. Element requirements:
What the element requires may be specific, named resources provided by other elements. The
documentation obligation is the same as for resources provided: syntax, semantics, and any usage
restrictions.
8. Rationale and design issues:

Why these choices? The architect should record the reasons for an element's interface design. The rationale
should explain the motivation behind the design, constraints and compromises, and what alternative designs
were considered.
9. Usage guide:
Item 2 and item 7 document an element's semantic information on a per resource basis. This sometimes
falls short of what is needed. In some cases semantics need to be reasoned about in terms of how a broad
number of individual interactions interrelate.
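As a rough illustration of several of these points applied to a single resource, the sketch below documents a hypothetical AccountStore interface in Java comments; the names, semantics and restrictions are invented for illustration only.

```java
// Hypothetical sketch: documenting an interface resource along the lines
// suggested above (identity, syntax, semantics, restrictions, exceptions).
/**
 * Interface identity: AccountStore, version 1.0.
 */
interface AccountStore {
    /**
     * Resource syntax: the signature below.
     * Resource semantics: debits 'amount' from the account balance and
     * returns the new balance; the observable result is a changed balance.
     * Usage restrictions: the account must exist before calling; at most
     * one writer may use the resource at a time.
     * Exception definition: InsufficientFunds is raised when the balance
     * would become negative.
     */
    long debit(String accountId, long amount) throws InsufficientFunds;
}

// Data type / exception definition referenced by the interface.
class InsufficientFunds extends Exception {
    InsufficientFunds(String msg) { super(msg); }
}
```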

Q4) List the steps of ADD and explain?(Dec 12)(June 12)(Dec 12/Jan 13)(Dec 13/Jan 14)
(June/July 2014) (10 Marks)
Soln:
ADD Steps:
Steps involved in attribute driven design (ADD)
1. Choose the module to decompose
Start with entire system
Inputs for this module need to be available
Constraints, functional and quality requirements
2. Refine the module
a) Choose architectural drivers relevant to this decomposition
b) Choose architectural pattern that satisfies these drivers
c) Instantiate modules and allocate functionality from use cases representing using multiple views
d) Define interfaces of child modules
e) Verify and refine use cases and quality scenarios
3. Repeat for every module that needs further decomposition
1. Choose The Module To Decompose
the following are the modules: system->subsystem->submodule
Decomposition typically starts with system, which then decompose into subsystem and then into
sub-modules.
In our Example, the garage door opener is a system
Opener must interoperate with the home information system

2. Refine the module


Choose Architectural Drivers:
choose the architectural drivers from the quality scenarios and functional requirements
The drivers will be among the top priority requirements for the module.
In the garage system, the 4 scenarios were architectural drivers,
By examining them, we see
Real-time performance requirement
Modifiability requirement to support product line
Requirements are not treated as equals
Less important requirements are satisfied within constraints obtained by satisfying more
important requirements
This is a difference of ADD from other architecture design methods
2. Choose Architectural Pattern
For each quality requirement there are identifiable tactics and then identifiable patterns that implement
these tactics.
The goal of this step is to establish an overall architectural pattern for the module.
The pattern needs to satisfy the tactics selected to satisfy the drivers.
Two factors involved in selecting tactics:
Architectural drivers themselves
Side effects of the pattern implementing the tactic on other requirements
This yields the following tactics:
Semantic coherence and information hiding: separate responsibilities dealing with the user interface,
communication, and sensors into their own modules.
Increase computational efficiency: the performance-critical computations should be made as efficient as possible.
Schedule wisely: the performance-critical computations should be scheduled to ensure the achievement of the timing deadline.
3. Instantiate Modules And Allocate Functionality Using Multiple Views
Instantiate modules
The non-performance-critical module of Figure 7.2 becomes instantiated as diagnosis and
raising/lowering door modules in Figure 7.3. We also identify several responsibilities of the virtual
machine: communication and sensor reading and actuator control. This yields two instances of the virtual
machine that are also shown in Figure 7.3.
Allocate functionality
Assigning responsibilities to the children in a decomposition also leads to the discovery of necessary
information exchange. At this point in the design, it is not important to define how the information is
exchanged. Is the information pushed or pulled? Is it passed as a message or a call parameter? These are
all questions that need to be answered later in the design process. At this point only the information itself
and the producer and consumer roles are of interest
Represent the architecture with multiple views
Module decomposition view
Concurrency view

Two users doing similar things at the same time


One user performing multiple activities simultaneously
Starting up the system
Shutting down the system
Deployment view
4. Define Interfaces Of Child Modules
It documents what this module provides to others.
Analyzing the decomposition into the 3 views provides interaction information for the interface
Module view:
Producer/consumer relations, patterns of communication
Concurrency view:
Interactions among threads
Synchronization information
Deployment view
Hardware requirement
Timing requirements
Communication requirements
5. Verify And Refine Use Cases And Quality Scenarios As Constraints For The Child Modules
Functional requirements
Using functional requirements to verify and refine
Decomposing functional requirements assigns responsibilities to child modules
We can use these responsibilities to generate use cases for the child module
User interface:
Handle user requests
Translate for raising/lowering module
Display responses
Raising/lowering door module
Control actuators to raise/lower door
Stop when completed opening or closing
Obstacle detection:
Recognize when object is detected
Stop or reverse the closing of the door
Communication virtual machine
Manage communication with house information system(HIS)
Scheduler
Guarantee that deadlines are met when obstacle is detected
Sensor/actuator virtual machine
Manage interactions with sensors/actuators
Diagnosis:
Manage diagnosis interaction with HIS
Constraints:
The decomposition satisfies the constraint
OS constraint-> satisfied if child module is OS
The constraint is satisfied by a single module

Constraint is inherited by the child module


The constraint is satisfied by a collection of child modules
E.g., using client and server modules to satisfy a communication constraint
Quality scenarios:
Quality scenarios also need to be verified and assigned to child modules
A quality scenario may be satisfied by the decomposition itself, i.e, no additional impact on child modules
A quality scenario may be satisfied by the decomposition but generating constraints for the children
The decomposition may be neutral with respect to a quality scenario
A quality scenario may not be satisfied with the current decomposition
Q5) What are the uses of architectural documentation? Bring out the concept of view as
applied to architectural documentation?(June/July 2013)(10 Marks)
Soln:
The Uses of Architectural Documentation are:
Architecture documentation is both prescriptive and descriptive. That is, for some audiences it prescribes
what should be true by placing constraints on decisions to be made. For other audiences it describes what
is true by recounting decisions already made about a system's design.
All of this tells us that different stakeholders for the documentation have different needs: different kinds
of information, different levels of information, and different treatments of information.
One of the most fundamental rules for technical documentation in general, and software architecture
documentation in particular, is to write from the point of view of the reader. Documentation that was easy
to write but is not easy to read will not be used, and "easy to read" is in the eye of the beholder, or in this
case, the stakeholder.
Documentation facilitates that communication. Some examples of architectural stakeholders and the
information they might expect to find in the documentation are given in Table 9.1.
In addition, each stakeholder comes in two varieties: seasoned and new. A new stakeholder will want
information similar in content to what his seasoned counterpart wants, but in smaller and more
introductory doses. Architecture documentation is a key means for educating people who need an
overview: new developers, funding sponsors, visitors to the project, and so forth.
The concept of a view, which you can think of as capturing a structure, provides us with the basic
principle of documenting software architecture: documenting an architecture is a matter of documenting
the relevant views and then adding documentation that applies to more than one view. This principle is
useful because it breaks the problem of architecture documentation into more tractable parts, which
provide the structure for the remainder of this chapter:
Choosing the relevant views
Documenting view
Documenting information that applies to more than one view
Q6) Architecture serves as a communication vehicle across stakeholders; documentation facilitates that communication. Explain. (Dec 12)(10 Marks)
Soln:

Every suite of architectural documentation needs an introductory piece to explain its organization to a
novice stakeholder and to help that stakeholder access the information he or she is most interested in.
There are two kinds of "how" information:
View Catalog
A view catalog is the reader's introduction to the views that the architect has chosen to include in the suite
of documentation. There is one entry in the view catalog for each view given in the documentation suite.
Each entry should give the following:
The name of the view and what style it instantiates
A description of the view's element types, relation types, and properties
A description of what the view is for
Management information about the view document, such as the latest version, the location of the view
document, and the owner of the view document
View Template
A view template is the standard organization for a view. It helps a reader navigate quickly to a section of
interest, and it helps a writer organize the information and establish criteria for knowing how much work
is left to do.
WHAT THE ARCHITECTURE IS
This section provides information about the system whose architecture is being documented, the relation
of the views to each other, and an index of architectural elements.
System Overview
This is a short prose description of what the system's function is, who its users are, and any important
background or constraints. The intent is to provide readers with a consistent mental model of the system
and its purpose. Sometimes the project at large will have a system overview, in which case this section of
the architectural documentation simply points to that.
Mapping between Views
Since all of the views of an architecture describe the same system, it stands to reason that any two views
will have much in common. Helping a reader of the documentation understand the relationships among
views will give him a powerful insight into how the architecture works as a unified conceptual whole.
Being clear about the relationship by providing mappings between views is the key to increased
understanding and decreased confusion.
Element List
The element list is simply an index of all of the elements that appear in any of the views, along with a
pointer to where each one is defined. This will help stakeholders look up items of interest quickly.
Project Glossary
The glossary lists and defines terms unique to the system that have special meaning. A list of acronyms,
and the meaning of each, will also be appreciated by stakeholders. If an appropriate glossary already
exists, a pointer to it will suffice here.
Q7) What are the suggested standard organization points for view documentation? (June 12)(Dec 12/Jan 13)(June/July 2013)(8 Marks)
Soln:

Primary presentation - elements and their relationships; contains the main information about the system,
usually graphical or tabular.
Element catalog - details of those elements and relations in the picture.
Context diagram - how the system relates to its environment.
Variability guide - how to exercise any variation points. A variability guide should include documentation
about each point of variation in the architecture, including
The options among which a choice is to be made
The binding time of the option. Some choices are made at design time, some at build time, and others at
runtime.
Architecture background - why the design reflected in the view came to be. An architecture background
includes
o rationale, explaining why the decisions reflected in the view were made and why alternatives were
rejected
o analysis results, which justify the design or explain what would have to change in the face of a
modification
o assumptions reflected in the design
Glossary of terms used in the views, with a brief description of each.
Other information includes management information such as authorship, configuration control data, and
change histories. Or the architect might record references to specific sections of a requirements document
to establish traceability

System Modeling and Simulation

10CS82

VTU QUESTION PAPER SOLUTION


UNIT 1
INTRODUCTION TO SIMULATION

1. Define system. Explain the components of a system with an example. 10 Marks (June 2014)
Soln: A system is a set of interacting or interdependent components forming an integrated whole.
Every system is delineated by its spatial and temporal boundaries, surrounded and influenced by its
environment, described by its structure and purpose and expressed in its functioning.
Following are considered as the elements of a system in terms of Information systems:
1. Input
2. Output
3. Processor
4. Control
5. Feedback
6. Boundary and interface
7. Environment
1. INPUT: Input involves capturing and assembling elements that enter the system to be processed. The
inputs are said to be fed to the system in order to get the output. For example, the input of a 'computer
system' is the input unit, consisting of various input devices like keyboard, mouse, joystick etc.
2. OUTPUT: Those elements that exist in the system due to the processing of the inputs are known as
output. A major objective of a system is to produce output that has value to its user. The output of the
system may be in the form of cash, information, knowledge, reports, documents etc. The system is
defined such that output is required from it. It is the anticipatory recognition of output that helps in defining the
input of the system.


3. PROCESSOR(S): The processor is the element of a system that involves the actual transformation of
input into output. It is the operational component of a system. For example, the processor of a 'computer
system' is the central processing unit, which further consists of the arithmetic and logic unit (ALU),
control unit and memory unit.
4. CONTROL: The control element guides the system. It is the decision-making sub-system that
controls the pattern of activities governing input, processing and output. It also keeps the system within
the boundary set. For example, control in a 'computer system' is maintained by the control unit that
controls and coordinates various units by means of passing different signals through wires.
5. FEEDBACK: Control in a dynamic system is achieved by feedback. Feedback measures output
against a standard input in some form of cybernetic procedure that includes communication and control.
The feedback may generally be of three types, viz. positive, negative and informational. The positive
feedback motivates the system. The negative indicates the need of an action. The feedback is a reactive form
of control: outputs from the process of the system are fed back to the control mechanism, and the control
mechanism then adjusts the control signals to the process on the basis of the data it receives.
Feedforward is a protective form of control. For example, in a 'computer system', when logical decisions
are taken, the logic unit concludes by comparing the calculated results and the required results.
6. BOUNDARY AND INTERFACE: A system should be defined by its boundaries: the limits that
identify its components, processes and interrelationships when it interfaces with another system. For
example, in a 'computer system' there is a boundary for the number of bits, the memory size etc., which is
responsible for different levels of accuracy on different machines (like 16-bit, 32-bit etc.). The interface
in a 'computer system' may be CUI (Character User Interface) or GUI (Graphical User Interface).
7. ENVIRONMENT: The environment is the 'supersystem' within which an organisation operates. It
excludes input, processes and outputs. It is the source of external elements that impinge on the system.
For example, if the results calculated/the output generated by the 'computer system' are to be used for
decision-making purposes in a factory, a business concern, an organisation, a school, a college
or a government office, then the system is the same but its environment is different.

2. With a neat flow diagram, explain the steps in simulation study. 10 Marks (June 2014)
Soln:
1. Problem formulation
Every study begins with a statement of the problem, provided by policy makers. The analyst
ensures it is clearly understood. If it is developed by the analyst, policy makers should understand and
agree with it.
2. Setting of objectives and overall project plan
The objectives indicate the questions to be answered by simulation. At this point a
determination should be made concerning whether simulation is the appropriate methodology.
Assuming it is appropriate, the overall project plan should include:
A statement of the alternative systems
A method for evaluating the effectiveness of these alternatives
Plans for the study in terms of the number of people involved
Cost of the study
The number of days required to accomplish each phase of the work with the anticipated results
3. Model conceptualization
The construction of a model of a system is probably as much art as science. The art of
modeling is enhanced by an ability:
To abstract the essential features of a problem
To select and modify basic assumptions that characterize the system
To enrich and elaborate the model until a useful approximation results
Thus, it is best to start with a simple model and build toward greater complexity. Model
conceptualization enhances the quality of the resulting model and increases the confidence of the
model user in the application of the model.
4. Data collection
There is a constant interplay between the construction of the model and the collection of
needed input data, done in the early stages. Objective kinds of data are to be collected.
5. Model translation
Real-world systems result in models that require a great deal of information storage and
computation. The model can be programmed by using simulation languages or special-purpose simulation
software. Simulation languages are powerful and flexible; with simulation software, model
development time can be reduced.
6. Verified
Verification pertains to the computer program and checking its performance. If the input parameters
and logical structure are correctly represented, verification is completed.
7. Validated
Validation is the determination that a model is an accurate representation of the real system.
It is achieved through calibration of the model, an iterative process of comparing the model to actual
system behavior and using the discrepancies between the two to improve the model.
8. Experimental Design

The alternatives that are to be simulated must be determined. Which alternatives to
simulate may be a function of runs that have been completed. For each system design, decisions need to be made
concerning:
Length of the initialization period
Length of simulation runs
Number of replications to be made of each run

9. Production runs and analysis
They are used to estimate measures of performance for the system designs that are being
simulated.
10. More runs
Based on the analysis of runs that have been completed, the analyst determines if
additional runs are needed and what design those additional experiments should follow.
11. Documentation and reporting
There are two types of documentation:
Program documentation
Progress documentation
Program documentation
Can be used again by the same or different analysts to understand how the program
operates. Further modification will be easier. Model users can change the input parameters for
better performance.
Progress documentation
Gives the history of a simulation project. The results of all analyses should be reported
clearly and concisely in a final report. This enables a review of the final formulation and
alternatives, the results of the experiments and the recommended solution to the problem. The final
report provides a vehicle of certification.
12. Implementation
Success depends on the previous steps. If the model user has been thoroughly involved
and understands the nature of the model and its outputs, the likelihood of a vigorous implementation is
enhanced.

3. Explain the following components of simulation system with an example of bank system: i) system ii) entity iii) attribute iv) activity v) event. 10 Marks (June 2013)
Soln:
i) System: A collection of entities (e.g., people and machines) that interact together over time to
accomplish one or more goals.
ii) Entity: An entity is an object of interest in a system.
Ex: In the factory system, departments, orders, parts and products are the entities.
iii) Attribute: An attribute denotes the property of an entity.
Ex: Quantities for each order, type of part, or number of machines in a department are
attributes of the factory system.

iv) activity: Any process causing changes in a system is called as an activity.


Ex: Manufacturing process of the department.
v) event: An event is defined as an instantaneous occurrence that may change the state of the
system.

4. List any three situations when simulation is an appropriate tool and when it is not an appropriate tool.
10 Marks (June 2013)
Soln: When Simulation is the Appropriate Tool
Simulation enables the study of, and experimentation with, the internal interactions of
a complex system, or of a subsystem within a complex system.
Informational, organizational and environmental changes can be simulated and the effect
of those alterations on the model's behavior can be observed.
The knowledge gained in designing a simulation model can be of great value
toward suggesting improvement in the system under investigation.
When Simulation is Not Appropriate
Simulation should not be used when the problem can be solved using common sense.
Simulation should not be used if the problem can be solved analytically.
Simulation should not be used if it is easier to perform direct experiments.
Simulation should not be used if the costs exceed the savings.

5. Define the following terms used in simulation. 8 Marks (Dec/Jan 2012-13)
Sol: i) Discrete system
Systems in which the changes are predominantly discontinuous are called discrete systems. Ex:
In a bank, the number of customers changes only when a customer arrives or when the service
provided to a customer is completed.
ii) Continuous system
Systems in which the changes are predominantly smooth are called continuous systems. Ex:
Head of water behind a dam.
iii) Stochastic system
Has one or more random variables as inputs. Random inputs lead to random outputs. Ex:
Simulation of a bank involves random interarrival and service times.
iv) Deterministic system
Contains no random variables. They have a known set of inputs which will result in a unique
set of outputs. Ex: Arrival of patients to the dentist at the scheduled appointment time.
v) Entity
An entity is an object of interest in a system.
Ex: In the factory system, departments, orders, parts and products are the entities.
vi) Attribute
An attribute denotes the property of an entity.
Ex: Quantities for each order, type of part, or number of machines in a department are
attributes of the factory system.

6. Draw the flowchart of steps involved in simulation study. 12 Marks (Dec/Jan 2012-13)
Sol:
I Phase
Consists of steps 1 and 2
It is a period of discovery/orientation
The analyst may have to restart the process if it is not fine-tuned
Recalibrations and clarifications may occur in this phase or another phase.
II Phase
Consists of steps 3, 4, 5, 6 and 7
A continuing interplay is required among the steps
Exclusion of the model user results in implications during implementation
III Phase
Consists of steps 8, 9 and 10
Conceives a thorough plan for experimenting
A discrete-event stochastic simulation is a statistical experiment
The output variables are estimates that contain random error and therefore proper
statistical analysis is required.
IV Phase
Consists of steps 11 and 12
Successful implementation depends on the involvement of the user and the successful
completion of every step.

7. What is simulation? Explain with flow chart, the steps involved in simulation study
10Marks (June 2012)
Sol:
Simulation
A Simulation is the imitation of the operation of a real-world process or system over time.
Brief Explanation

The behavior of a system as it evolves over time is studied by developing a simulation
model. This model takes the form of a set of assumptions concerning the operation of the system.
The simulation model building can be broken into 4 phases.

8. Differentiate between continuous and discrete systems. 5 Marks (June 2012)
Soln: i) Discrete system
Systems in which the changes are predominantly discontinuous are called discrete systems. Ex:
In a bank, the number of customers changes only when a customer arrives or when the service
provided to a customer is completed.
ii) Continuous system
Systems in which the changes are predominantly smooth are called continuous systems. Ex:
Head of water behind a dam.

9. What is system and system environment? List the components of a system, with example. 5 Marks (June 2012)
Sol: Systems
A system is defined as an aggregation or assemblage of objects joined in some regular
interaction or interdependence toward the accomplishment of some purpose.
Example: Production System
In the above system there are certain distinct objects, each of which possesses properties of
interest. There are also certain interactions occurring in the system that cause changes in the
system.
Components of a System
Entity: An entity is an object of interest in a system.
Ex: In the factory system, departments, orders, parts and products are the entities.
Attribute: An attribute denotes the property of an entity.
Ex: Quantities for each order, type of part, or number of machines in a department are
attributes of the factory system.
Activity: Any process causing changes in a system is called as an activity.
Ex: Manufacturing process of the department.
Event: An event is defined as an instantaneous occurrence that may change the state of the system.
State of the System
The state of a system is defined as the collection of variables necessary to describe a
system at any time, relative to the objective of study. In other words, the state of the system means a
description of all the entities, attributes and activities as they exist at one point in time.
System Environment
The external components which interact with the system and produce necessary changes
are said to constitute the system environment. In modeling systems, it is necessary to
decide on the boundary between the system and its environment. This decision may depend
on the purpose of the study.
Ex: In a factory system, the factors controlling the arrival of orders may be considered to be
outside the factory but yet a part of the system environment. When we consider the demand
and supply of goods, there is certainly a relationship between the factory output and the arrival
of orders. This relationship is considered as an activity of the system.

10. List any five circumstances when simulation is an appropriate tool and when it is not. 10 Marks (Dec/Jan 2011-12)
Ans: When Simulation is the Appropriate Tool
Simulation enables the study of, and experimentation with, the internal interactions of a
complex system, or of a subsystem within a complex system.
Informational, organizational and environmental changes can be simulated and the effect of
those alterations on the model's behavior can be observed.
The knowledge gained in designing a simulation model can be of great value toward
suggesting improvement in the system under investigation.
By changing simulation inputs and observing the resulting outputs, valuable insight may be
obtained into which variables are most important and how variables interact.
Simulation can be used as a pedagogical device to reinforce analytic solution methodologies.
Simulation can be used to experiment with new designs or policies prior to
implementation, so as to prepare for what may happen.
Simulation can be used to verify analytic solutions.
By simulating different capabilities for a machine, requirements can be determined.
When Simulation is Not Appropriate
Simulation should not be used when the problem can be solved using common sense.
Simulation should not be used if the problem can be solved analytically.
Simulation should not be used if it is easier to perform direct experiments.
Simulation should not be used if the costs exceed the savings.
Simulation should not be performed if the resources or time are not available.

11. Explain the steps in simulation study, with flow chart. 10 Marks (Dec/Jan 2011-12)
Sol: Steps in a Simulation Study
1. Problem formulation
Every study begins with a statement of the problem, provided by policy makers. The analyst
ensures it is clearly understood. If it is developed by the analyst, policy makers should
understand and agree with it.
2. Setting of objectives and overall project plan
The objectives indicate the questions to be answered by simulation. At this point a
determination should be made concerning whether simulation is the appropriate
methodology. Assuming it is appropriate, the overall project plan should include a statement of the
alternative systems, a method for evaluating the effectiveness of these alternatives, plans for the
study in terms of the number of people involved, the cost of the study, and the number of days
required to accomplish each phase of the work with the anticipated results.
3. Model conceptualization
The construction of a model of a system is probably as much art as science. The art of
modeling is enhanced by an ability:
To abstract the essential features of a problem
To select and modify basic assumptions that characterize the system
To enrich and elaborate the model until a useful approximation results
Thus, it is best to start with a simple model and build toward greater complexity. Model
conceptualization enhances the quality of the resulting model and increases the
confidence of the model user in the application of the model.
4. Data collection
There is a constant interplay between the construction of the model and the collection of needed
input data, done in the early stages. Objective kinds of data are to be collected.
5. Model translation
Real-world systems result in models that require a great deal of information storage and
computation. The model can be programmed by using simulation languages or special-purpose
simulation software. Simulation languages are powerful and flexible; with simulation software,
model development time can be reduced.
6. Verified
Verification pertains to the computer program and checking its performance. If the input parameters
and logical structure are correctly represented, verification is completed.
7. Validated
Validation is the determination that a model is an accurate representation of the real system.
It is achieved through calibration of the model, an iterative process of comparing the model to actual
system behavior and using the discrepancies between the two to improve the model.
8. Experimental design
The alternatives that are to be simulated must be determined. Which alternatives to simulate
may be a function of runs that have been completed. For each system design, decisions need to be made concerning:
Length of the initialization period
Length of simulation runs
Number of replications to be made of each run
9. Production runs and analysis
They are used to estimate measures of performance for the system designs that are being
simulated.
10. More runs
Based on the analysis of runs that have been completed, the analyst determines if
additional runs are needed and what design those additional experiments should follow.
11. Documentation and reporting
There are two types of documentation: program documentation and progress documentation.
Program documentation
Can be used again by the same or different analysts to understand how the program
operates. Further modification will be easier. Model users can change the input
parameters for better performance.
Progress documentation
Gives the history of a simulation project. The results of all analyses should be reported clearly
and concisely in a final report. This enables a review of the final formulation and alternatives,
the results of the experiments and the recommended solution to the problem. The final report
provides a vehicle of certification.
12. Implementation
Success depends on the previous steps. If the model user has been thoroughly involved and
understands the nature of the model and its outputs, the likelihood of a vigorous
implementation is enhanced.

Unit 2
GENERAL PRINCIPLES, SIMULATION SOFTWARE

1. Describe queuing system with respect to arrival and service mechanisms, system capacity,
queue discipline, flow diagrams of arrival and departure events. 10 Marks (June2014)
Soln:

Execution of the arrival event

Execution of the departure event.


2. A small shop has one check out counter. Customers arrive at this checkout counter at random,
from 1 to 10 minutes apart. Each possible value of inter-arrival time has the same probability of
occurrence, equal to 0.10. Service times vary from 1 to 6 minutes with probability shown below:
Develop the simulation table for 10 customers. Find: i) Average waiting time; ii) Average
service time; iii) Average time customer spends in system. Take the random digits for arrivals as
91, 72, 15, 94, 30, 92, 75, 23, 30 and for service times 84, 10, 74, 53, 17, 79, 91, 67, 89, 38,
used sequentially. 10 Marks (June 2014)

3. Explain the terms used in discrete event simulation with an example:
i) Event ii) Event notice iii) FEL iv) Delay v) Clock vi) System state
10 Marks (June 2013)
sol:
i) Event
An instantaneous occurrence that changes the state of a system (such as an arrival of a new customer).
ii) Event notice
A record of an event to occur at the current or some future time, along with any data needed to execute the event.
iii) FEL (future event list)
A list of event notices for future events, always ranked by the event time.
iv) Delay
The duration of time that an entity spends in a wait of unspecified length, which is not known until it ends (for example, a customer's time in the queue).
v) Clock
A variable representing simulated time.
vi) System state
A collection of variables that contain all the information necessary to describe the system at any time.

4. Consider the grocery store with one check out counter. Prepare the simulation table for
eight customers and find out the average waiting time of a customer in queue, the idle time of the
server and the average service time. The inter-arrival time (IAT) and service time (ST) are
given in minutes.
IAT: 3, 2, 6, 4, 4, 5, 8
ST (min): 3, 5, 5, 8, 4, 6, 2, 3
Assume the first customer arrives at time t = 0. 10 Marks (June 2013)
Sol:
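A minimal sketch that computes such a simulation table programmatically for the Q4 data, under the stated assumption that the first customer arrives at time 0; the variable names are illustrative:

```java
// Hypothetical sketch: building the Q4 simulation table programmatically for a
// single-server (one checkout counter) queue with the given IAT and ST values.
public class GrocerySim {
    public static void main(String[] args) {
        int[] iat = {0, 3, 2, 6, 4, 4, 5, 8};   // inter-arrival times (first arrival at t = 0)
        int[] st  = {3, 5, 5, 8, 4, 6, 2, 3};   // service times
        int arrival = 0, serverFreeAt = 0;
        int totalWait = 0, totalService = 0, idle = 0;
        for (int i = 0; i < st.length; i++) {
            arrival += iat[i];                              // arrival time of customer i
            int start = Math.max(arrival, serverFreeAt);    // service begins when server is free
            idle += start - serverFreeAt;                   // server idle before this customer
            totalWait += start - arrival;                   // time spent in queue
            totalService += st[i];
            serverFreeAt = start + st[i];                   // departure time
        }
        System.out.println("avg waiting time = " + (double) totalWait / st.length);
        System.out.println("avg service time = " + (double) totalService / st.length);
        System.out.println("server idle time = " + idle);
    }
}
```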

5. Suppose the maximum inventory level, M, is 11 units and the review period is 5 days.
Estimate by simulation the average ending units in inventory and the number of days
when a shortage condition occurs. Initial simulation is started with a level of 3 units
and an order of 8 units scheduled to arrive in two days time. Simulate for three cycles
(15 days). The probability for daily demand and lead time is given below. 20 Marks (Dec/Jan 2012-13)

Demand: 0, 1, 2, 3, 4
Probability: 0.10, 0.25, 0.35, 0.20, 0.10

Lead time probability: 0.5, 0.3, 0.2

RD for demand: 24, 35, 65, 25, 8, 85, 77, 68, 28, 5, 92, 55, 49, 69, 70
RD for lead time: 5, 0, 3

6. Define the terms used in discrete event simulation:
i) System state ii) List iii) Event iv) FEL v) Delay vi) System
6 Marks (June 2012)
Soln:
i) System state
A collection of variables that contain all the information necessary to describe the system at any time.
ii) List
A collection of (permanently or temporarily) associated entities ordered in some logical fashion
(such as all customers currently in a waiting line, ordered by first come, first served, or by
priority).
iii) Event
An instantaneous occurrence that changes the state of a system (such as an arrival of a new customer).
iv) FEL (future event list)
A list of event notices for future events, always ranked by the event time.
v) Delay
The duration of time that an entity spends in a wait of unspecified length, which is not known until it ends.

System Modeling and Simulation

10CS82

System Modeling and Simulation

10CS82

A collection of entities (e.g., people and machines) that ii together over time to accomplish

8. Suppose the maximum inventory level is M 11 units and the review period is 5 days

one or more goals.

estimate by simulation the average ending units in inventory and number of days

7. Six dump trucks are used to haul coal from the entrance of a small mine to railload.

when a shortage condition occurs .Initial simulation is started with level of 3 units and

Each truck is loaded by one of two loaders. After loading truck moves to scale, to be

an order of 8 units scheduled to arrive in two days time. Simulate for three cycles (15

weighed. After weighing a truck begins to travel time and then returns to loader

days) The probability for daily demand and lead time is given below. 20Marks (Dec

queue. It has been assumed that five of trucks Are at loader and one at scale at time

/Jan 2011-12)

0.By using event scheduling algorithm find out busy time of loader and scale and
stopping time E is 64 mins. 14 marks (June2012)

Activity times for question 7:
Loading time: 10 10 15 10 10
Weighing time: 12 12 12 16 12 16 --
Travel time: 60 100 40 40 80 -- --

Distributions and random digits for question 8 (the same as in question 5):
Demand: 0 1 2 3 4
Probability: 0.1 0.25 0.35 0.2 0.1
Lead time (days): 1 2 3
Probability: 0.5 0.3 0.2
RD for demand: 24, 35, 65, 25, 8, 85, 77, 68, 28, 5, 92, 55, 49, 69, 70
RD for lead time: 5, 0, 3

Sol (question 7):
Average loader utilization = total loader busy time / (2 x TE) = 44 / (2 x 24) = 0.92
Average scale utilization = 24 / 24 = 1.00
(These utilizations correspond to stopping the run at the completion of two weighings, TE = 24 min.)

Unit-3
STATISTICAL MODELS IN SIMULATION

A random variable X is discrete if the number of possible values of X is finite, or countably infinite; for example, the range space of X may be {0, 1, 2, ...}. The values p(xi), i = 1, 2, ..., must satisfy:
1. p(xi) >= 0, for all i
2. Sum over i of p(xi) = 1
The collection of pairs [xi, p(xi)], i = 1, 2, ..., is called the probability distribution of X, and p(xi) is called the probability mass function (pmf) of X.

1. Explain the event scheduling/time advance algorithm with an example. 08 Marks (June 2014)
Soln:
The event-scheduling/time-advance algorithm keeps a future event list (FEL), a list of event notices for future events that is always ranked by event time. At each step, the clock is advanced to the time of the imminent (first) event on the FEL; that event is removed and executed, updating the system state and possibly scheduling new events on the FEL; the process repeats until a stopping event occurs. Because the clock jumps from event time to event time, the time advance is variable. For example, in a single-server queue, executing an arrival event at the current clock time schedules the next arrival on the FEL and, if the server is idle, also schedules that customer's service-completion event.
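As an illustration (my own sketch, not the textbook's code), the FEL can be kept ranked by event time with a heap; the 5-minute inter-arrival time below is an assumed example value:

```python
# Event scheduling / time advance: FEL as a heap ordered by event time.
import heapq

fel = []                                   # future event list
def schedule(time, kind):
    heapq.heappush(fel, (time, kind))      # FEL stays ranked by event time

schedule(0.0, "arrival")
schedule(64.0, "end")                      # stopping event E
clock = 0.0
while fel:
    clock, kind = heapq.heappop(fel)       # advance clock to imminent event
    if kind == "end":
        break
    if kind == "arrival":                  # execute event: update state and
        schedule(clock + 5.0, "arrival")   # schedule follow-on events
print("stopped at t =", clock)             # 64.0
```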

2. A company uses 6 trucks to haul manganese ore from Kolar to the industry. There are two loaders to load each truck. After loading, a truck moves to the weighing scale to be weighed. The queue discipline is FIFO. When it is weighed, a truck travels to the industry and returns to the loader queue. The distributions of loading time, weighing time and travel time are as follows:

3. Six dump trucks are used to haul coal from the entrance of a small mine to the railroad. Each truck is loaded by one of two loaders. After loading, the truck moves to the scale, to be weighed as soon as possible. Both the loader and the scale have a first-come, first-served waiting line for trucks. Travel time from a loader to the scale is considered negligible. After weighing, a truck begins its travel time and then returns to the loader queue. The activities of loading, weighing and travel time are given in the following table. Calculate the total busy time of both loaders, the scale, and the average loader and scale utilization. Assume 5 trucks are at the loaders and one is at the scale at time 0. Stopping event time TE = 64 min. 12 Marks (June 2014)


Loading time: 10 10 15 10 10
Weighing time: 12 12 12 16 12 16 --
Travel time: 60 100 40 40 80 -- --

End of simulation is the completion of two weighings from the scale. Depict the simulation table and estimate the loader and scale utilizations. Assume that five of the trucks are at the loaders and one is at the scale at time 0. 15 Marks (June 2013)

sol:
Average loader utilization = 44 / (2 x 24) = 0.92
Average scale utilization = 24 / 24 = 1.00

5. A large milling machine has three different bearings that fail in service. The cumulative distribution function of the life of each bearing is identical, given in Table 1. When a bearing fails, the mill stops, a repair person is called, and a new bearing is installed. The delay time of the repair person's arriving at the milling machine is also a random variable, with the distribution given in Table 2. Downtime for the mill is estimated at $5/minute. The direct on-site cost of the repair person is $15/hour. It takes 20 minutes to change 1 bearing, 30 minutes to change 2 bearings, and 40 minutes to change 3 bearings. The bearing cost is $16 each. A proposal has been made to replace all 3 bearings whenever a bearing fails. Management needs an evaluation of this proposal. Simulate the system for 10,000 hours of operation under the proposed method and determine the total cost of the proposed system. 20 Marks (Dec 12-13)

Table 1: Bearing life distribution
Bearing life (Hrs): 1000 1100 1200 1300 1400 1500 1600 1700 1800 1900
Probability: 0.10 0.13 0.25 0.13 0.09 0.12 0.02 0.06 0.05 0.05

Table 2: Delay time distribution
Delay (minutes): 5 10 15
Probability: 0.6 0.3 0.1

Consider the following sequence of random digits for bearing life times:
Bearing 1: 67 49 84 44 30 10 15
Bearing 2: 70 43 86 93 81 44 19 51
Bearing 3: 76 65 61 96 65 56 11 86

4. Explain the following. 5 Marks (June 2013)
Sol:
System: a collection of entities that interact together over time to accomplish one or more goals.
Entity: an object in the system that requires explicit representation in the model, e.g., people, machines, nodes, packets, server, customer.
Event: an instantaneous occurrence that changes the state of a system.
Event list: a list of event notices for future events, ordered by time of occurrence; known as the future event list (FEL).

Consider the following sequence of random digits for delay time:

6. What do you mean by World View? Discuss the different types of World View? 10 Marks (June 2012)

Soln: World Views
Process-interaction: the modeler thinks in terms of processes. The simulation model is defined in terms of entities or objects and their life cycle as they flow through the system, demanding resources and queueing to wait for resources. A process is the life cycle of one entity, which consists of various events and activities; some activities might require the use of one or more resources whose capacities are limited. Processes interact, e.g., one process has to wait in a queue because the resource it needs is busy with another process. A process is a time-sequenced list of events, activities and delays, including demands for resources, that define the life cycle of one entity as it moves through a system. The time advance is variable.
Activity-scanning: the modeler concentrates on the activities of a model and those conditions that allow an activity to begin. At each clock advance, the conditions for each activity are checked, and, if the conditions are true, then the corresponding activity begins. The time advance is fixed. Disadvantage: the repeated scanning to discover whether an activity can begin results in slow runtime. Improvement: the three-phase approach, a combination of event scheduling with activity scanning. Events are activities of duration zero time units. There are two types of activities: B activities (activities bound to occur: all primary events and unconditional activities) and C activities (activities or events that are conditional upon certain conditions being true). The B-type activities can be scheduled ahead of time, just as in the event-scheduling approach (variable time advance; the FEL contains only B-type events). Scanning to learn whether any C-type activities can begin or C-type events can occur happens only at the end of each time advance, after all B-type events have completed.

7. The maximum inventory level, M, is 11 units and the review period, N, is 5 days. The problem is to estimate, by simulation, the average ending units in inventory and the number of days when a shortage condition occurs. The distribution of the number of units demanded per day is shown in Table 9; in this example, lead time is a random variable, as shown in Table 10. Assume that orders are placed at the close of business and are received for inventory at the beginning of business, as determined by the lead time. 10 Marks (June 2012)
Sol: Simulation of an (M, N) Inventory System
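A compact sketch of this (M, N) simulation follows. The digit-to-value mappings and the day-counting conventions are assumptions on my part (demand digits 01-10 -> 0, 11-35 -> 1, 36-70 -> 2, 71-90 -> 3, 91-00 -> 4; lead-time digits 1-5 -> 1 day, 6-8 -> 2, 9-0 -> 3):

```python
# Sketch of the (M, N) = (11, 5) inventory simulation for three cycles.
M, N, CYCLES = 11, 5, 3
rd_demand = [24, 35, 65, 25, 8, 85, 77, 68, 28, 5, 92, 55, 49, 69, 70]
rd_lead = [5, 0, 3]

def demand(rd):                       # two-digit random number, 01..00
    for limit, d in [(10, 0), (35, 1), (70, 2), (90, 3), (100, 4)]:
        if (rd if rd != 0 else 100) <= limit:
            return d

def lead_time(rd):                    # one-digit random number, P = 0.5/0.3/0.2
    rd = 10 if rd == 0 else rd
    return 1 if rd <= 5 else (2 if rd <= 8 else 3)

inv, on_order, days_left = 3, 8, 2    # start: 3 units, 8 due in two days
shortage_days, ending_sum = 0, 0
for day in range(CYCLES * N):
    if on_order and days_left == 0:   # order arrives at start of business
        inv, on_order = inv + on_order, 0
    inv -= demand(rd_demand[day])     # negative inv records backorders
    shortage_days += inv < 0
    ending_sum += max(inv, 0)
    days_left = max(days_left - 1, 0)
    if (day + 1) % N == 0:            # review day: order up to M
        on_order = M - max(inv, 0)
        days_left = lead_time(rd_lead.pop(0)) + 1

print("average ending inventory:", ending_sum / (CYCLES * N))
print("days with shortage:", shortage_days)
```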

8. What do you mean by World View? Discuss the different types of World View? 10 Marks (Dec 11-12)
Sol: World Views
Process-interaction: the modeler thinks in terms of processes. The simulation model is defined in terms of entities or objects and their life cycle as they flow through the system, demanding resources and queueing to wait for resources. A process is the life cycle of one entity, which consists of various events and activities; some activities might require the use of one or more resources whose capacities are limited. Processes interact, e.g., one process has to wait in a queue because the resource it needs is busy with another process. A process is a time-sequenced list of events, activities and delays, including demands for resources, that define the life cycle of one entity as it moves through a system. The time advance is variable.
Activity-scanning: the modeler concentrates on the activities of a model and those conditions that allow an activity to begin. At each clock advance, the conditions for each activity are checked, and, if the conditions are true, then the corresponding activity begins. The time advance is fixed. Disadvantage: the repeated scanning to discover whether an activity can begin results in slow runtime. Improvement: the three-phase approach, a combination of event scheduling with activity scanning. Events are activities of duration zero time units. There are two types of activities: B activities (bound to occur: all primary events and unconditional activities) and C activities (conditional upon certain conditions being true). The B-type activities can be scheduled ahead of time, just as in the event-scheduling approach (variable time advance; the FEL contains only B-type events). Scanning to learn whether any C-type activities can begin or C-type events can occur happens only at the end of each time advance, after all B-type events have completed.

9. Using the multiplicative congruential method, generate random numbers to complete the cycle. Explain maximum density and maximum period. a = 11, m = 16, X0 = 7. 10 Marks (Dec 11-12)
Sol: With Xi+1 = 11 Xi mod 16 and Ri = Xi/16, the sequence of Xi and subsequent Ri values is computed as follows:
X0 = 7
X1 = 11(7) mod 16 = 77 mod 16 = 13, R1 = 13/16 = 0.8125
X2 = 11(13) mod 16 = 143 mod 16 = 15, R2 = 15/16 = 0.9375
X3 = 11(15) mod 16 = 165 mod 16 = 5, R3 = 5/16 = 0.3125
X4 = 11(5) mod 16 = 55 mod 16 = 7, R4 = 7/16 = 0.4375
Since X4 = X0 = 7, the generator has completed one full cycle: the period is P = 4 = m/4, the maximum possible here, because the seed X0 = 7 is odd and a = 11 = 3 + 8k with k = 1.

First, notice that the numbers generated can assume values only from the set I = {0, 1/m, 2/m, ..., (m-1)/m}, since each Xi is an integer in the set {0, 1, 2, ..., m-1}. Thus each Ri is discrete on I, instead of continuous on the interval [0, 1]. This approximation appears to be of little consequence provided that the modulus m is a very large integer. (Values such as m = 2^31 - 1 and m = 2^48 are in common use in generators appearing in many simulation languages.) By maximum density is meant that the values assumed by Ri, i = 1, 2, ..., leave no large gaps on [0, 1].

Second, to help achieve maximum density, and to avoid cycling (i.e., recurrence of the same sequence of generated numbers) in practical applications, the generator should have the largest possible period. Maximal period can be achieved by the proper choice of a, c, m, and X0:
For m a power of 2, say m = 2^b, and c != 0, the longest possible period is P = m = 2^b, which is achieved provided that c is relatively prime to m (that is, the greatest common factor of c and m is 1) and a = 1 + 4k, where k is an integer.
For m a power of 2, say m = 2^b, and c = 0, the longest possible period is P = m/4 = 2^(b-2), which is achieved provided that the seed X0 is odd and the multiplier a is given by a = 3 + 8k, for some k = 0, 1, ...
For m a prime number and c = 0, the longest possible period is P = m - 1, which is achieved provided that the multiplier a has the property that the smallest integer k such that a^k - 1 is divisible by m is k = m - 1.

Unit-4
Queuing Models

1. Explain the following continuous distributions:
i) Uniform distribution
ii) Exponential distribution. 10 Marks (June 2014)
Sol:
i) Uniform distribution
In probability theory and statistics, the continuous uniform distribution or rectangular distribution is a family of symmetric probability distributions such that for each member of the family, all intervals of the same length on the distribution's support are equally probable. The support is defined by the two parameters, a and b, which are its minimum and maximum values; the pdf is f(x) = 1/(b - a) for a <= x <= b, and 0 otherwise.
ii) Exponential distribution
In probability theory and statistics, the exponential distribution (a.k.a. negative exponential distribution) is the probability distribution that describes the time between events in a Poisson process, i.e., a process in which events occur continuously and independently at a constant average rate. It is the continuous analogue of the geometric distribution, and it has the key property of being memoryless; the pdf is f(x) = lambda * e^(-lambda*x) for x >= 0, and 0 otherwise.


2. Explain the characteristics of a queuing system. List the different queuing notations. 10 Marks (June 2014)
Sol:
The key elements of queuing systems are the customers and servers. The term customer can refer to people, parts, trucks, e-mails, etc., and the term server to clerks, mechanics, repairmen, CPUs, etc. Although the terminology employed will be that of a customer arriving to a server, sometimes the server moves to the customer; for example, a repairman moving to a broken machine. The characteristics of a queuing system are:
- population size (calling population)
- arrival process
- service time distribution
- number of servers
- system capacity
- service (queue) discipline

3. Explain Kendall's notation for a parallel server queuing system A/B/c/N/K and also interpret the meaning of M/M/2/inf/inf. 10 Marks (June 2013)
Sol:
A notation system for parallel server queues: A/B/c/N/K, where
A represents the interarrival-time distribution,
B represents the service-time distribution,
c represents the number of parallel servers,
N represents the system capacity,
K represents the size of the calling population.
N and K are usually dropped if they are infinity.
Common symbols for A and B:
M - Markov, exponential distribution
D - constant, deterministic
Ek - Erlang distribution of order k
H - hyperexponential distribution
G - general, arbitrary
Each of these is described mathematically; the descriptions determine the tractability of an (efficient) analytic solution, and only a small set of possibilities are solvable using standard queueing theory.
Examples:
M/M/1/inf/inf (same as M/M/1): single server with unlimited capacity and calling population; interarrival and service times are exponentially distributed.
G/G/1/5/5: single server with capacity 5 and calling population 5.
M/M/5/20/1500/FIFO: five parallel servers with capacity 20, calling population 1500, and service discipline FIFO.
M/M/2/inf/inf: two parallel servers with exponentially distributed interarrival and service times, unlimited system capacity, and an unlimited calling population.

4. Explain steady state parameters of M/G/1 queue. 10 Marks (June 2013)
Sol:
Steady-State Behavior of Finite-Population Models:
In practical problems the calling population is finite. When the calling population is small, the presence of one or more customers in the system has a strong effect on the distribution of future arrivals. Consider a finite-calling-population model with K customers (M/M/c/K/K). The time between the end of one service visit and the next call for service is exponentially distributed with mean 1/lambda; service times are also exponentially distributed with mean 1/mu; there are c parallel servers, and the system capacity is K.

5. Explain Kendall's notation for a parallel server queuing system A/B/c/N/K and also interpret the meaning of M/M/2/inf/inf. 10 Marks (Dec/Jan 12-13)
Sol:
A notation system for parallel server queues: A/B/c/N/K, where A represents the interarrival-time distribution, B the service-time distribution, c the number of parallel servers, N the system capacity, and K the size of the calling population; N and K are usually dropped if they are infinity. Common symbols for A and B: M (Markov, exponential), D (constant, deterministic), Ek (Erlang of order k), H (hyperexponential), G (general, arbitrary).
Examples:
M/M/1/inf/inf (same as M/M/1): single server with unlimited capacity and calling population; interarrival and service times are exponentially distributed.
G/G/1/5/5: single server with capacity 5 and calling population 5.
M/M/5/20/1500/FIFO: five parallel servers with capacity 20, calling population 1500, and service discipline FIFO.
M/M/2/inf/inf: two parallel servers, exponential interarrival and service times, unlimited capacity and calling population.

6. Explain the terms used in queuing notations of the form A/B/c/N/K. 6 Marks (Dec/Jan 12-13)
Sol: A notation system for parallel server queues: A/B/c/N/K, where
A represents the interarrival-time distribution,
B represents the service-time distribution,
c represents the number of parallel servers,
N represents the system capacity,
K represents the size of the calling population.

7. Define a discrete random variable. 4 Marks (Dec/Jan 12-13)
Sol: Discrete Random Variables
X is a discrete random variable if the number of possible values of X is finite, or countably infinite; for example, the range space of X may be {0, 1, 2, ...}. The values p(xi), i = 1, 2, ..., must satisfy:
1. p(xi) >= 0, for all i
2. Sum over i of p(xi) = 1
The collection of pairs [xi, p(xi)], i = 1, 2, ..., is called the probability distribution of X, and p(xi) is called the probability mass function (pmf) of X.

8. The number of hurricanes hitting the coast of India follows a Poisson distribution with mean alpha = 0.8 per year. Determine:
i) the probability of more than one hurricane in a year;
ii) the probability of more than two hurricanes in a year. 6 Marks (June 2012)
sol:
Poisson Distribution. Example: a computer repair person is beeped each time there is a call for service. The number of beeps per hour ~ Poisson(alpha = 2 per hour).
The probability of three beeps in the next hour:
p(3) = e^-2 * 2^3/3! = 0.18
also, p(3) = F(3) - F(2) = 0.857 - 0.677 = 0.18
The probability of two or more beeps in a 1-hour period:
p(2 or more) = 1 - p(0) - p(1) = 1 - F(1) = 0.594

9. Explain the terms used in queuing notations of the form A/B/c/N/K. 6 Marks (June 2012)
Sol: A notation system for parallel server queues: A/B/c/N/K, where A represents the interarrival-time distribution, B the service-time distribution, c the number of parallel servers, N the system capacity, and K the size of the calling population.

10. List the steady-state parameters of M/G/1. 8 Marks (June 2012)
Sol:
The M/G/1 queue is a single-server queue with Poisson arrivals and unlimited capacity. Suppose service times have mean 1/mu and variance sigma^2, and that rho = lambda/mu < 1. The steady-state parameters of the M/G/1 queue are:
rho = lambda/mu, P0 = 1 - rho
L = rho + rho^2 (1 + sigma^2 mu^2) / (2(1 - rho))
LQ = rho^2 (1 + sigma^2 mu^2) / (2(1 - rho))
w = 1/mu + lambda (1/mu^2 + sigma^2) / (2(1 - rho))
wQ = lambda (1/mu^2 + sigma^2) / (2(1 - rho))
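As a quick numerical check (my own sketch, not part of the printed solution), the Poisson probabilities for the beeps example (alpha = 2) and for the hurricane question of Q8 (alpha = 0.8) can be computed directly:

```python
# Poisson pmf/cdf check for the examples above.
from math import exp, factorial

def pmf(k, alpha):
    return exp(-alpha) * alpha**k / factorial(k)

def cdf(k, alpha):
    return sum(pmf(i, alpha) for i in range(k + 1))

print(round(pmf(3, 2), 3))            # p(3) = 0.18
print(round(1 - cdf(1, 2), 3))        # p(2 or more) = 0.594
print(round(1 - cdf(1, 0.8), 4))      # P(more than 1 hurricane) = 0.1912
print(round(1 - cdf(2, 0.8), 4))      # P(more than 2 hurricanes) = 0.0474
```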


11. Explain Kendall's notation for a parallel server queuing system A/B/C/N/K and also interpret the meaning of M/M/2/inf/inf. 10 Marks. (Dec/Jan 2011-12)


Sol:
A notation system for parallel server queues: A/B/c/N/K, where A represents the interarrival-time distribution, B the service-time distribution, c the number of parallel servers, N the system capacity, and K the size of the calling population; N and K are usually dropped if they are infinity. Common symbols for A and B: M (Markov, exponential distribution), D (constant, deterministic), Ek (Erlang distribution of order k), H (hyperexponential distribution), G (general, arbitrary).
Examples:
M/M/1/inf/inf (same as M/M/1): single server with unlimited capacity and calling population; interarrival and service times are exponentially distributed.
G/G/1/5/5: single server with capacity 5 and calling population 5.
M/M/5/20/1500/FIFO: five parallel servers with capacity 20, calling population 1500, and service discipline FIFO.
M/M/2/inf/inf: two parallel servers with exponential interarrival and service times, unlimited system capacity, and an unlimited calling population.

12. Explain steady-state parameters of the M/G/1 queue. 10 Marks. (Dec/Jan 2011-12)
Sol:
Steady-State Behavior of Finite-Population Models:
In practical problems the calling population is finite. When the calling population is small, the presence of one or more customers in the system has a strong effect on the distribution of future arrivals. Consider a finite-calling-population model with K customers (M/M/c/K/K). The time between the end of one service visit and the next call for service is exponentially distributed with mean 1/lambda; service times are also exponentially distributed with mean 1/mu; there are c parallel servers, and the system capacity is K.
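For the M/G/1 parameters listed under Q10 above, here is a small sketch (standard Pollaczek-Khinchine results; the numeric inputs are illustrative, not from the question paper):

```python
# Evaluate steady-state M/G/1 measures for lambda, mu and service variance.
def mg1_metrics(lam, mu, sigma2):
    rho = lam / mu
    assert rho < 1, "steady state requires rho = lambda/mu < 1"
    lq = rho**2 * (1 + sigma2 * mu**2) / (2 * (1 - rho))   # mean queue length LQ
    wq = lam * (1 / mu**2 + sigma2) / (2 * (1 - rho))      # mean wait in queue wQ
    return {"rho": rho, "P0": 1 - rho,
            "LQ": lq, "L": rho + lq,                       # L = rho + LQ
            "wQ": wq, "w": 1 / mu + wq}                    # w = 1/mu + wQ

# With exponential service (sigma2 = 1/mu^2) this reduces to the M/M/1 results:
print(mg1_metrics(lam=1.0, mu=2.0, sigma2=0.25))
```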

Unit-5
Random-Number Generation, Random-Variate Generation

1. Explain the linear congruential method. Write three ways of achieving maximal period. 05 Marks (June 2014)
Sol:
A linear congruential generator (LCG) is an algorithm that yields a sequence of pseudo-randomized numbers calculated with a discontinuous piecewise linear equation. The method represents one of the oldest and best-known pseudorandom number generator algorithms. The theory behind them is relatively easy to understand, and they are easily implemented and fast, especially on computer hardware which can provide modulo arithmetic by storage-bit truncation.
The generator is defined by the recurrence relation
Xi+1 = (a Xi + c) mod m
where X is the sequence of pseudorandom values, m is the "modulus", a the "multiplier", c the "increment", and X0 the "seed" or "start value"; all are integer constants that specify the generator. If c = 0, the generator is often called a multiplicative congruential generator (MCG), or Lehmer RNG; if c != 0, the method is called a mixed congruential generator. The period of a general LCG is at most m, and for some choices of factor a much less than that. Provided that the increment c is nonzero, the LCG will have a full period for all seed values if and only if:
1. c and m are relatively prime,
2. a - 1 is divisible by all prime factors of m,
3. a - 1 is a multiple of 4 if m is a multiple of 4.
These three requirements are referred to as the Hull-Dobell theorem. While LCGs are capable of producing pseudorandom numbers which can pass formal tests for randomness, this is extremely sensitive to the choice of the parameters c, m, and a. Historically, poor choices have led to ineffective implementations of LCGs; a particularly illustrative example is RANDU, which was widely used in the early 1970s and led to many results which are currently being questioned because of the use of this poor LCG.
Three ways of achieving maximal period:
1. For m a power of 2 (m = 2^b) and c != 0, the longest possible period P = m = 2^b is achieved when c is relatively prime to m and a = 1 + 4k for some integer k.
2. For m a power of 2 (m = 2^b) and c = 0, the longest possible period P = m/4 = 2^(b-2) is achieved when the seed X0 is odd and a = 3 + 8k for some k = 0, 1, ...
3. For m a prime number and c = 0, the longest possible period P = m - 1 is achieved when the multiplier a has the property that the smallest integer k such that a^k - 1 is divisible by m is k = m - 1.
Example (multiplicative generator with a = 7^5, m = 2^31 - 1, seed X0 = 123,457). The first few numbers generated are:
X1 = 7^5 (123,457) mod (2^31 - 1) = 2,074,941,799, R1 = X1/2^31 = 0.9662
X2 = 7^5 (2,074,941,799) mod (2^31 - 1) = 559,872,160, R2 = X2/2^31 = 0.2607
X3 = 7^5 (559,872,160) mod (2^31 - 1) = 1,645,535,613, R3 = X3/2^31 = 0.7662
Notice that this routine divides by m + 1 instead of m; however, for such a large value of m, the effect is negligible.

2. The sequence of random numbers 0.54, 0.73, 0.98, 0.11 and 0.68 has been generated. Use the Kolmogorov-Smirnov test with alpha = 0.05 to determine if the hypothesis that the numbers are uniformly distributed on the interval [0, 1] can be rejected. Take D(alpha) = 0.565. 05 Marks (June 2014)
Sol: First, rank the numbers from smallest to largest: R(1) = 0.11, R(2) = 0.54, R(3) = 0.68, R(4) = 0.73, R(5) = 0.98.
D+ = max{i/N - R(i)} = max{0.20 - 0.11, 0.40 - 0.54, 0.60 - 0.68, 0.80 - 0.73, 1.00 - 0.98} = 0.09
D- = max{R(i) - (i-1)/N} = max{0.11 - 0, 0.54 - 0.20, 0.68 - 0.40, 0.73 - 0.60, 0.98 - 0.80} = 0.34
D = max{D+, D-} = 0.34. Since the computed value 0.34 is less than the critical value D(alpha) = 0.565 (alpha = 0.05, N = 5), the hypothesis that the numbers are uniformly distributed on [0, 1] is not rejected.

3. What is the acceptance-rejection technique? Generate three Poisson variates with mean alpha = 0.2. The random numbers are 0.4357, 0.4146, 0.8353, 0.9952, 0.8004, 0.7945, 0.1530. 10 Marks (June 2014)
Sol: In the acceptance-rejection technique, candidate values are generated from a distribution that is easy to sample, and a candidate is accepted only if it satisfies a prescribed condition; otherwise it is rejected and the procedure repeats. To generate a Poisson variate N with mean alpha:
Step 1. Set n = 0, P = 1.
Step 2. Generate a random number R(n+1) and replace P by P * R(n+1).
Step 3. If P < e^(-alpha), accept N = n; otherwise set n = n + 1 and return to Step 2.
With alpha = 0.2, e^(-alpha) = 0.8187:
R1 = 0.4357: P = 0.4357 < 0.8187, so N1 = 0.
R2 = 0.4146: P = 0.4146 < 0.8187, so N2 = 0.
R3 = 0.8353: P = 0.8353 >= 0.8187; R4 = 0.9952: P = 0.8353 x 0.9952 = 0.8313 >= 0.8187; R5 = 0.8004: P = 0.8313 x 0.8004 = 0.6654 < 0.8187, so N3 = 2.
The three Poisson variates are 0, 0, and 2.
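A minimal runnable sketch of the method (the parameters are the illustrative Lehmer choice from the example above, not a prescription; this version divides by m rather than m + 1, which is negligible here):

```python
# Linear congruential generator: Xi+1 = (a*Xi + c) mod m.
def lcg(seed, a=7**5, c=0, m=2**31 - 1):
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m                        # Ri on (0, 1)

gen = lcg(123457)
print([round(next(gen), 4) for _ in range(3)])   # ~[0.9662, 0.2607, 0.7663]
```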

4. Generate five random numbers using the multiplicative congruential method with X0 = 5, a = 10, m = 64. 6 Marks (June 2013)
Soln: Xi+1 = a Xi mod m = 10 Xi mod 64:
X1 = 10(5) mod 64 = 50, R1 = 50/64 = 0.78125
X2 = 10(50) mod 64 = 500 mod 64 = 52, R2 = 52/64 = 0.8125
X3 = 10(52) mod 64 = 520 mod 64 = 8, R3 = 8/64 = 0.125
X4 = 10(8) mod 64 = 80 mod 64 = 16, R4 = 16/64 = 0.25
X5 = 10(16) mod 64 = 160 mod 64 = 32, R5 = 32/64 = 0.5
When m is a power of 10, say m = 10^b, the modulo operation is accomplished by saving the b rightmost (decimal) digits.

5. The six numbers 0.44, 0.66, 0.82, 0.16, 0.05, 0.92 are generated. Using the Kolmogorov-Smirnov test with alpha = 0.05, check whether the hypothesis that the numbers are uniformly distributed on the interval [0, 1] can be rejected. 8 Marks (June 2013)
Sol:
First, the numbers must be ranked from smallest to largest. The calculations can be facilitated by use of Table 7.2. The top row lists the numbers from smallest (R(1)) to largest (R(n)). The computations for D+, namely i/N - R(i), and for D-, namely R(i) - (i-1)/N, are easily accomplished using Table 7.2. The statistics are computed as D+ = 0.26 and D- = 0.21. Therefore, D = max{0.26, 0.21} = 0.26. The critical value of D, obtained from Table A.8 for alpha = 0.05 and N = 5, is 0.565. Since the computed value, 0.26, is less than the tabulated critical value, 0.565, the hypothesis of no difference between the distribution of the generated numbers and the uniform distribution is not rejected.
The calculations in Table 7.2 are illustrated in Figure 7.2, where the empirical cdf, SN(x), is compared to the uniform cdf, F(x). It can be seen that D+ is the largest deviation of SN(x) above F(x), and that D- is the largest deviation of SN(x) below F(x). For example, at R(3) the value of D+ is given by 3/5 - R(3) = 0.60 - 0.44 = 0.16 and that of D- is given by R(3) - 2/5 = 0.44 - 0.40 = 0.04. Although the test statistic D is defined as the maximum deviation over all x, it can be seen from Figure 7.2 that the maximum deviation will always occur at one of the jump points R(1), R(2), ..., and thus the deviation at other values of x need not be considered.

6. Using the multiplicative congruential method, generate random numbers to complete the cycle. Explain maximum density and maximum period. a = 11, m = 16, X0 = 7. 6 Marks (June 2013)
Sol: With Xi+1 = 11 Xi mod 16 and Ri = Xi/16:
X1 = 11(7) mod 16 = 77 mod 16 = 13, R1 = 0.8125
X2 = 11(13) mod 16 = 143 mod 16 = 15, R2 = 0.9375
X3 = 11(15) mod 16 = 165 mod 16 = 5, R3 = 0.3125
X4 = 11(5) mod 16 = 55 mod 16 = 7 = X0, R4 = 0.4375
The generator has completed its cycle with period P = 4 = m/4, the maximum possible for c = 0 and m a power of 2 (the seed is odd and a = 11 = 3 + 8k with k = 1). Maximum density means that the values assumed by Ri leave no large gaps on [0, 1]; maximum period means that the generator attains the longest possible cycle before the sequence repeats.
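A short sketch of the D+ and D- computation used in the Kolmogorov-Smirnov solutions above (samples taken from Q2 and from the 0.44, 0.81, 0.14, 0.05, 0.93 sequence of Q11 below):

```python
# Kolmogorov-Smirnov statistic for a uniformity test.
def ks_statistic(sample):
    r = sorted(sample)
    n = len(r)
    d_plus = max((i + 1) / n - x for i, x in enumerate(r))
    d_minus = max(x - i / n for i, x in enumerate(r))
    return max(d_plus, d_minus)

print(round(ks_statistic([0.54, 0.73, 0.98, 0.11, 0.68]), 2))  # 0.34 < 0.565
print(round(ks_statistic([0.44, 0.81, 0.14, 0.05, 0.93]), 2))  # 0.26 < 0.565
```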


7. Using a suitable frequency test, find out whether the hypothesis that the random numbers generated are uniformly distributed on the interval [0, 1] can be rejected. Assume alpha = 0.05 and D(alpha) = 0.565. The random numbers are 0.54, 0.73, 0.98, 0.11, 0.68. 4 Marks (Dec/Jan 2012-13)
Sol: First, the numbers must be ranked from smallest to largest: 0.11, 0.54, 0.68, 0.73, 0.98. The computations for D+, namely max{i/N - R(i)}, and for D-, namely max{R(i) - (i-1)/N}, give D+ = 0.09 and D- = 0.34. Therefore, D = max{0.09, 0.34} = 0.34. Since the computed value, 0.34, is less than the tabulated critical value, D(alpha) = 0.565 (alpha = 0.05, N = 5), the hypothesis of uniformity is not rejected. The maximum deviation always occurs at one of the jump points R(1), R(2), ..., so deviations at other values of x need not be considered.

8. Using a suitable frequency test, find out whether the hypothesis that the random numbers generated are uniformly distributed on the interval [0, 1] can be rejected. Assume alpha = 0.05 and D(alpha) = 0.565. The random numbers are 0.54, 0.73, 0.98, 0.11, 0.68. 10 Marks (Dec/Jan 2012-13)
Sol: Ranked from smallest to largest, the numbers are 0.11, 0.54, 0.68, 0.73, 0.98. The computations i/N - R(i) and R(i) - (i-1)/N give D+ = 0.09 and D- = 0.34, so D = max{0.09, 0.34} = 0.34. Since the computed value, 0.34, is less than the tabulated critical value, 0.565, the hypothesis of no difference between the distribution of the generated numbers and the uniform distribution is not rejected.

9. Suggest a step-by-step procedure to generate random variates using the inverse transform technique for the exponential distribution. 6 Marks (Dec/Jan 2012-13)
Sol: Inverse Transform Technique:
It is the most straightforward, but not always the most efficient, technique computationally. The inverse transform technique can be used to sample from the exponential, the uniform, the Weibull and the triangular distributions, and from empirical distributions. Additionally, it is the underlying principle for sampling from a wide variety of discrete distributions. The technique can be utilized, at least in principle, for any distribution, but it is most useful when the cdf, F(x), is of such simple form that its inverse, F^-1, can be easily computed. The technique will be explained in detail for the exponential distribution and then applied to other distributions.
EXPONENTIAL DISTRIBUTION:
The exponential distribution has probability density function (pdf)
f(x) = lambda e^(-lambda x), x >= 0; f(x) = 0, x < 0
and cumulative distribution function (cdf)
F(x) = integral from -inf to x of f(t) dt = 1 - e^(-lambda x), x >= 0; F(x) = 0, x < 0.
The parameter lambda can be interpreted as the mean number of occurrences per time unit. For example, if interarrival times X1, X2, X3, ... have an exponential distribution with rate lambda, then lambda can be interpreted as the mean number of arrivals per time unit, or the arrival rate. Notice that E(Xi) = 1/lambda, so that 1/lambda is the mean interarrival time. The goal here is to develop a procedure for generating values X1, X2, X3, ... which have an exponential distribution. A step-by-step procedure for the inverse transform technique, illustrated by the exponential distribution, is as follows:
Step 1. Compute the cdf of the desired random variable X. For the exponential distribution, the cdf is F(x) = 1 - e^(-lambda x), x >= 0.
Step 2. Set F(X) = R on the range of X. For the exponential distribution, this becomes 1 - e^(-lambda X) = R on the range x >= 0. Since X is a random variable (with the exponential distribution in this case), it follows that 1 - e^(-lambda X) is also a random variable, here called R. As will be shown later, R has a uniform distribution over the interval (0, 1).
Step 3. Solve the equation F(X) = R for X in terms of R. For the exponential distribution, the solution proceeds as follows:
1 - e^(-lambda X) = R
e^(-lambda X) = 1 - R
-lambda X = ln(1 - R)
X = -(1/lambda) ln(1 - R)    (5.1)
Equation (5.1) is called a random-variate generator for the exponential distribution. In general, Equation (5.1) is written as X = F^-1(R). Generating a sequence of values is accomplished through Step 4.
Step 4. Generate (as needed) uniform random numbers R1, R2, R3, ... and compute the desired random variates by Xi = F^-1(Ri). For the exponential case, F^-1(R) = -(1/lambda) ln(1 - R) by Equation (5.1), so that
Xi = -(1/lambda) ln(1 - Ri)    (5.2)
for i = 1, 2, 3, .... One simplification usually employed in Equation (5.2) is to replace 1 - Ri by Ri, to yield
Xi = -(1/lambda) ln Ri    (5.3)
which is justified since both Ri and 1 - Ri are uniformly distributed on (0, 1).
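A tiny sketch of Step 4 / Equation (5.3); the rate and the three uniform numbers below are illustrative values of my choosing:

```python
# Exponential variates via the inverse transform Xi = -(1/lambda) * ln(Ri).
from math import log

def exp_variate(r, lam=1.0):
    return -log(r) / lam        # uses Ri in place of 1 - Ri (both U(0,1))

print([round(exp_variate(r), 3) for r in (0.1306, 0.0422, 0.6597)])
```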

10. Explain the two different techniques for generating random numbers with examples. 10 Marks (June 2012)
Sol:
1. Linear Congruential Method:
The linear congruential method, initially proposed by Lehmer [1951], produces a sequence of integers X1, X2, ... between zero and m - 1 according to the following recursive relationship:
Xi+1 = (a Xi + c) mod m, i = 0, 1, 2, ...    (7.1)
The initial value X0 is called the seed. If c != 0 in Equation (7.1), the form is called the mixed congruential method; when c = 0, the form is known as the multiplicative congruential method. The selection of the values for a, c, m and X0 drastically affects the statistical properties and the cycle length. The random integers generated lie in [0, m - 1]; to convert them to random numbers on (0, 1), set Ri = Xi/m.
2. Combined Linear Congruential Generators:
As computing power has increased, the complexity of the systems that we are able to simulate has also increased. One fruitful approach is to combine two or more multiplicative congruential generators in such a way that the combined generator has good statistical properties and a longer period. The following result from L'Ecuyer [1988] suggests how this can be done: if Wi,1, Wi,2, ..., Wi,k are any independent, discrete-valued random variables (not necessarily identically distributed), and one of them, say Wi,1, is uniformly distributed on the integers 0 to m1 - 2, then their sum Wi = (Wi,1 + Wi,2 + ... + Wi,k) mod (m1 - 1) is uniformly distributed on the integers 0 to m1 - 2.

11. The sequence of numbers 0.44, 0.81, 0.14, 0.05, 0.93 was generated. Use the Kolmogorov-Smirnov test with a level of significance alpha of 0.05; compare F(x) and SN(x). 10 Marks (June 2012)
Sol:

The numbers are ranked from smallest to largest: 0.05, 0.14, 0.44, 0.81, 0.93. The statistics are computed as D+ = 0.26 and D- = 0.21, so D = max{0.26, 0.21} = 0.26 < 0.565, and the hypothesis of uniformity is not rejected. The calculations in Table 7.2 are illustrated in Figure 7.2, where the empirical cdf, SN(x), is compared to the uniform cdf, F(x). It can be seen that D+ is the largest deviation of SN(x) above F(x), and that D- is the largest deviation of SN(x) below F(x). For example, at R(3) the value of D+ is given by 3/5 - R(3) = 0.60 - 0.44 = 0.16 and that of D- is given by R(3) - 2/5 = 0.44 - 0.40 = 0.04. Although the test statistic D is defined by Equation (7.3) as the maximum deviation over all x, the maximum deviation will always occur at one of the jump points R(1), R(2), ..., and thus the deviation at other values of x need not be considered.

12. Explain the two different techniques for generating random numbers with examples. 10 Marks (Dec/Jan 2011-12)
Sol:
1. Linear Congruential Method:
The linear congruential method, initially proposed by Lehmer [1951], produces a sequence of integers X1, X2, ... between zero and m - 1 according to the recursive relationship Xi+1 = (a Xi + c) mod m, i = 0, 1, 2, ... The initial value X0 is called the seed. If c != 0, the form is called the mixed congruential method; when c = 0, the form is known as the multiplicative congruential method. The selection of the values for a, c, m and X0 drastically affects the statistical properties and the cycle length; the integers generated in [0, m - 1] are converted to random numbers by Ri = Xi/m.
2. Combined Linear Congruential Generators:
As computing power has increased, the complexity of the systems that we are able to simulate has also increased. One fruitful approach is to combine two or more multiplicative congruential generators in such a way that the combined generator has good statistical properties and a longer period. The result from L'Ecuyer [1988]: if Wi,1, Wi,2, ..., Wi,k are independent, discrete-valued random variables (not necessarily identically distributed), and one of them, say Wi,1, is uniformly distributed on the integers 0 to m1 - 2, then Wi = (Wi,1 + ... + Wi,k) mod (m1 - 1) is uniformly distributed on the integers 0 to m1 - 2.

13. The six numbers 0.44, 0.66, 0.82, 0.16, 0.05, 0.92 are generated. Using the Kolmogorov-Smirnov test with alpha = 0.05, check whether the hypothesis that the numbers are uniformly distributed on the interval [0, 1] can be rejected. 10 Marks. (Dec/Jan 2011-12)
Sol:
First, the numbers must be ranked from smallest to largest; the calculations can be facilitated by use of Table 7.2, whose top row lists the numbers from smallest (R(1)) to largest (R(n)). The computations for D+, namely i/N - R(i), and for D-, namely R(i) - (i-1)/N, give D+ = 0.26 and D- = 0.21. Therefore, D = max{0.26, 0.21} = 0.26. The critical value of D, obtained from Table A.8 for alpha = 0.05 and N = 5, is 0.565. Since the computed value, 0.26, is less than the tabulated critical value, 0.565, the hypothesis of no difference between the distribution of the generated numbers and the uniform distribution is not rejected. The calculations are illustrated in Figure 7.2, where the empirical cdf, SN(x), is compared to the uniform cdf, F(x).

Unit-6

Input Modeling

1. Explain the different steps in the development of a useful model of input data. 10 Marks (June 2014)
Sol:
Input data provide the driving force for a simulation model. In the simulation of a queuing system, typical input data are the distributions of time between arrivals and service times. For the simulation of a reliability system, the distribution of time-to-failure of a component is an example of input data.
There are four steps in the development of a useful model of input data:
1. Collect data from the real system of interest. This often requires a substantial time and resource commitment. Unfortunately, in some situations it is not possible to collect data.
2. Identify a probability distribution to represent the input process. When data are available, this step typically begins by developing a frequency distribution, or histogram, of the data.
3. Choose parameters that determine a specific instance of the distribution family. When data are available, these parameters may be estimated from the data.
4. Evaluate the chosen distribution and the associated parameters for goodness-of-fit. Goodness-of-fit may be evaluated informally via graphical methods, or formally via statistical tests. The chi-square and the Kolmogorov-Smirnov tests are standard goodness-of-fit tests. If not satisfied that the chosen distribution is a good approximation of the data, the analyst returns to the second step, chooses a different family of distributions, and repeats the procedure. If several iterations of this procedure fail to yield a fit between an assumed distributional form and the collected data, the empirical form of the distribution may be used.

2. Explain the chi-square goodness-of-fit test. Apply it to the Poisson assumption with alpha = 3.64. Data size = 100. Observed frequencies Oi: 12, 10, 19, 17, 10, 8, 7, 5, 5, 3, 3, 1. Take level of significance alpha = 0.05. 10 Marks (June 2014)
Sol: Chi-Square Test
One procedure for testing the hypothesis that a random sample of size n of the random variable X follows a specific distributional form is the chi-square goodness-of-fit test. This test formalizes the intuitive idea of comparing the histogram of the data to the shape of the candidate density or mass function. The test is valid for large sample sizes, for both discrete and continuous distributional assumptions, when parameters are estimated by maximum likelihood. The test statistic is
chi0^2 = sum over i of (Oi - Ei)^2 / Ei
where Oi is the observed frequency in the ith class interval and Ei is the expected frequency in that class interval. The expected frequency for each class interval is computed as Ei = n pi, where pi is the theoretical, hypothesized probability associated with the ith class interval.
It can be shown that chi0^2 approximately follows the chi-square distribution with k - s - 1 degrees of freedom, where s represents the number of parameters of the hypothesized distribution estimated by sample statistics. The hypotheses are:
H0: the random variable X conforms to the distributional assumption with the parameter(s) given by the parameter estimate(s)
H1: the random variable X does not conform
If the distribution being tested is discrete, each value of the random variable should be a class interval, unless it is necessary to combine adjacent class intervals to meet the minimum expected cell-frequency requirement. For the discrete case, if combining adjacent cells is not required, pi = p(xi) = P(X = xi); otherwise, pi is determined by summing the probabilities of appropriate adjacent cells.
If the distribution being tested is continuous, the class intervals are given by [a(i-1), a(i)), where a(i-1) and a(i) are the endpoints of the ith class interval. For the continuous case with assumed pdf f(x) or assumed cdf F(x), pi can be computed by
pi = integral from a(i-1) to a(i) of f(x) dx = F(a(i)) - F(a(i-1))

3. Suggest a step-by-step procedure to generate random variates using the inverse transform technique for the exponential distribution. 10 Marks (June 2013)

Sol:
Exponential Distribution:
The exponential cdf is F(x) = 1 - e^(-lambda x) for x >= 0. Set R = F(X), i.e., R = 1 - e^(-lambda X), and solve for X: X = F^-1(R) = -(1/lambda) ln(1 - R). To generate X1, X2, X3, ..., compute Xi = F^-1(Ri) = -(1/lambda) ln(1 - Ri) = -(1/lambda) ln(Ri), since both 1 - Ri and Ri are uniformly distributed between 0 and 1.
Figure: Inverse-transform technique for exp(lambda = 1).

4. Explain four methods of selecting input models without data. 10 Marks (June 2013)
Sol:
When no field data are available, four useful sources of information for selecting an input model are:
1. Engineering data: a product or process often has performance ratings or specifications provided by the manufacturer (e.g., rated speeds or capacities) that bound or suggest input values.
2. Expert opinion: people experienced with the process can often provide optimistic, pessimistic, and most-likely estimates of its behavior.
3. Physical or conventional limitations: most real processes have physical limits on performance, or conventions, that restrict the range of possible values.
4. The nature of the process: the underlying physical or statistical properties of the process can suggest an appropriate family of distributions.

5. The following is a set of single-digit numbers from a random number generator. Using an appropriate test, check whether the numbers are uniformly distributed. N = 50, alpha = 0.05 and chi^2(0.05,9) = 16.9. 20 Marks (Dec/Jan 2012-13)
6,7,0,6,9,9,0,6,4,6,4,0,8,2,6,6,1,2,6,8,5,6,0,4,7
1,3,5,0,7,1,4,9,8,6,0,8,6,6,7,1,0,4,7,9,2,0,1,4,8
Sol: Use the chi-square test with alpha = 0.05 to test whether the data shown above are uniformly distributed. The test uses k = 10 classes (the digits 0 through 9), each with expected frequency Ei = 50/10 = 5. The observed frequencies are O0 = 8, O1 = 5, O2 = 3, O3 = 1, O4 = 6, O5 = 2, O6 = 11, O7 = 5, O8 = 5, O9 = 4, so
chi0^2 = sum of (Oi - Ei)^2 / Ei = (9 + 0 + 4 + 16 + 1 + 9 + 36 + 0 + 0 + 1)/5 = 76/5 = 15.2
This is compared with the critical value chi^2(0.05,9) = 16.9. Since chi0^2 = 15.2 is smaller than the tabulated value
, the null hypothesis of a uniform distribution is not rejected.
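A compact check (my own sketch) of the chi-square computation for these 50 digits:

```python
# Chi-square uniformity computation for the Q5 digits.
from collections import Counter

digits = [6,7,0,6,9,9,0,6,4,6,4,0,8,2,6,6,1,2,6,8,5,6,0,4,7,
          1,3,5,0,7,1,4,9,8,6,0,8,6,6,7,1,0,4,7,9,2,0,1,4,8]
e = len(digits) / 10                           # Ei = 5 per digit class
o = Counter(digits)
chi2 = sum((o[d] - e) ** 2 / e for d in range(10))
print(round(chi2, 1))                          # 15.2 < 16.9: not rejected
```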

7. Explain with an example the importance of data distribution using a histogram. 14 Marks (June 2012)
Sol:
A frequency distribution or histogram is useful in identifying the shape of a distribution. A histogram is constructed as follows:
1. Divide the range of the data into intervals (intervals are usually of equal width; however, unequal widths may be used if the heights of the frequencies are adjusted).
2. Label the horizontal axis to conform to the intervals selected.
3. Determine the frequency of occurrences within each interval.
4. Label the vertical axis so that the total occurrences can be plotted for each interval.
5. Plot the frequencies on the vertical axis.
If the intervals are too wide, the histogram will be coarse, or blocky, and its shape and other details will not show well. If the intervals are too narrow, the histogram will be ragged and will not smooth the data. The histogram for continuous data corresponds to the probability density function of a theoretical distribution.
Both the Kolmogorov-Smirnov and the chi-square tests are acceptable for testing the uniformity of a sample of data, provided that the sample size is large. However, the Kolmogorov-Smirnov test is the more powerful of the two and is recommended; furthermore, it can be applied to small sample sizes, whereas the chi-square test is valid only for large samples, say N >= 50. Imagine a set of 100 numbers being tested for independence, where the first 10 values are in the range 0.01-0.10, the second 10 values are in the range 0.11-0.20, and so on. This set of numbers would pass the frequency tests with ease, but the ordering of the numbers produced by the generator would not be random; tests of independence address this.

6. Develop a random variate generator for X with pdf given below. 6 Marks (June 2012)
f(x) = x, 0 < x < 1
f(x) = 2 - x, 1 <= x < 2
f(x) = 0, otherwise
Sol:
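The printed solution is blank; the following is a standard inverse-transform derivation (my own working, worth verifying). Integrating the pdf gives the cdf F(x) = x^2/2 for 0 <= x <= 1 and F(x) = 2x - x^2/2 - 1 for 1 <= x <= 2. Setting F(X) = R and solving for X gives
X = sqrt(2R) for 0 <= R <= 1/2
X = 2 - sqrt(2(1 - R)) for 1/2 < R <= 1

```python
# Sketch of the resulting generator for this triangular-shaped pdf.
from math import sqrt

def variate(r):                  # r is a U(0,1) random number
    return sqrt(2 * r) if r <= 0.5 else 2 - sqrt(2 * (1 - r))

print(variate(0.18), variate(0.5), variate(0.92))   # 0.6, 1.0, 1.6
```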


8. Suggest a step-by-step procedure to generate random variates using the inverse transform technique for the exponential distribution. 10 Marks (Dec/Jan 2011-12)
Sol:
Exponential Distribution:
The exponential cdf is F(x) = 1 - e^(-lambda x) for x >= 0. Set R = F(X) and solve for X: X = F^-1(R) = -(1/lambda) ln(1 - R). To generate X1, X2, X3, ..., compute Xi = F^-1(Ri) = -(1/lambda) ln(1 - Ri) = -(1/lambda) ln(Ri), since both 1 - Ri and Ri are uniformly distributed between 0 and 1.

9. Develop a random variate generator for X with pdf given below. 10 Marks (Dec/Jan 2011-12)
f(x) = x, 0 < x < 1
f(x) = 2 - x, 1 <= x < 2
f(x) = 0, otherwise
sol: As derived for question 6 above, X = sqrt(2R) for 0 <= R <= 1/2 and X = 2 - sqrt(2(1 - R)) for 1/2 < R <= 1.


Unit-7
Estimation of Absolute Performance

1. Explain the types of simulation with respect to output analysis. Give examples. 07 Marks (June 2014)
Sol:
Types of simulations:
Terminating simulations: a terminating simulation runs over a simulated time interval [0, TE], where E is a specified event that stops the simulation; the run starts at time 0 under well-specified initial conditions. For example, a bank opens (at time 0) with no customers present and 8 of the 11 tellers working (the initial conditions) and closes at 4:30 pm (time TE = 480 minutes); the object of interest is one day's operation. Initial conditions, e.g., inventory on hand and number of backorders at time 0, would most likely influence the performance of week 1.
Non-terminating (steady-state) simulations: the quantities of interest are defined over the long run, so the initial conditions and the run length must be chosen so that their influence on the estimates is negligible.

2. Briefly explain the confidence-interval estimation method. 07 Marks (June 2014)
Sol:
To understand confidence intervals fully, it is important to distinguish between measures of error and measures of risk, e.g., a confidence interval versus a prediction interval. Suppose the model is the normal distribution with mean theta and variance sigma^2 (both unknown), and theta is the system performance; the precision of the estimator can then be measured by a confidence interval. Let Ybar(i) be the average cycle time for parts produced on the ith replication of the simulation (its mathematical expectation is theta). Average cycle time will vary from day to day, but over the long run the average of the averages will be close to theta. The sample variance across R replications is
S^2 = (1/(R - 1)) * sum over i = 1..R of (Ybar(i.) - Ybar(..))^2
Confidence interval (CI): a measure of error, where the Ybar(i.) are assumed normally distributed and independent:
Ybar(..) +/- t(alpha/2, R-1) * S / sqrt(R)

3. Explain output analysis for terminating simulation. 06 Marks (June 2014)
Sol:
A terminating simulation runs over a simulated time interval [0, TE]. A common goal is to estimate
theta = E[(1/n) * sum over i = 1..n of Yi]  for discrete output,
phi = E[(1/TE) * integral from 0 to TE of Y(t) dt]  for continuous output Y(t), 0 <= t <= TE.
In general, independent replications are used, each run using a different random number stream and independently chosen initial conditions.

4. Briefly explain measures of performance of a simulation system. 10 Marks (June 2013)
Sol:
Statistical background: it is important to distinguish within-replication data from across-replication data. Consider the estimation of a performance parameter, theta (or phi), of a simulated system:
Discrete-time data: [Y1, Y2, ..., Yn], with ordinary mean theta.
Continuous-time data: {Y(t), 0 <= t <= TE}, with time-weighted mean phi.
Point estimation for discrete-time data: the point estimator is theta-hat = (1/n) * sum over i = 1..n of Yi. It is unbiased if its expected value is theta, that is, if E(theta-hat) = theta, and biased if E(theta-hat) != theta.
Point estimation for continuous-time data: the point estimator is phi-hat = (1/TE) * integral from 0 to TE of Y(t) dt. It is biased in general, i.e., E(phi-hat) != phi; an unbiased or low-bias estimator is desired.
Usually, system performance measures can be put into the common framework of theta or phi: e.g., for the proportion of days on which sales are lost through an out-of-stock situation, let Y(t) = 1 if out of stock on day t, and 0 otherwise.
A performance measure that does not fit this framework is a quantile or percentile: Pr{Y <= theta} = p. Estimating quantiles is the inverse of the problem of estimating a proportion or probability: consider a histogram of the observed values Y, and find theta-hat such that 100p% of the histogram is to the left of (smaller than) theta-hat.
Confidence-interval estimation: distinguish between measures of error and measures of risk, e.g., confidence interval versus prediction interval. Suppose the model is the normal distribution with mean theta and variance sigma^2 (both unknown). Let Ybar(i.) be the average cycle time for parts produced on the ith replication; over the long run the average of the averages will be close to theta. The sample variance across R replications is S^2 = (1/(R - 1)) * sum of (Ybar(i.) - Ybar(..))^2, and the confidence interval Ybar(..) +/- t(alpha/2, R-1) * S / sqrt(R) is a measure of error, where the Ybar(i.) are normally distributed.

5. Explain the distinction between terminating (transient) simulation and steady-state simulation. Give an example. 10 Marks (June 2013)

Sol:
A terminating simulation runs over a simulated time interval [0, TE], where E is a specified event that stops the simulation, and starts under well-specified initial conditions; for example, a bank that opens with no customers present and closes at 4:30 pm after TE = 480 minutes of operation, the object of interest being one day's operation. A steady-state simulation, in contrast, studies the long-run behavior of the system: the performance measures are defined as limits as the run length grows, so the initial conditions and the run length must be chosen so that their influence on the estimates is negligible.

6. Explain the chi-square test with alpha = 0.05 to test whether the data shown below are uniformly distributed. 10 Marks.
Sol: Table 7.3 contains the essential computations, with columns Interval, Oi, Ei, Oi - Ei, (Oi - Ei)^2 and (Oi - Ei)^2/Ei; the Oi and Ei columns each total 100. The test uses n = 10 intervals of equal length, namely (0, 0.1], (0.1, 0.2], ..., (0.9, 1.0]. The value of chi0^2 is 3.4. This is compared with the critical value chi^2(0.05,9) = 16.9. Since chi0^2 is much smaller than the tabulated value of chi^2(0.05,9), the null hypothesis of a uniform distribution is not rejected.

7. Differentiate between terminating and steady-state simulation with respect to output analysis, with an example. 10 Marks (Dec/Jan 2012-13)
Sol:
Output analysis for terminating simulations: a terminating simulation runs over a simulated time interval [0, TE]. In general, independent replications are used, each run using a different random number stream and independently chosen initial conditions.
Confidence-interval estimation: a confidence interval (CI) is a measure of error; a prediction interval (PI) is a measure of risk. A good guess for the average cycle time on a particular day is our estimator, but it is unlikely to be exactly right. The PI is designed to be wide enough to contain the actual average cycle time on any particular day with high probability. The normal-theory prediction interval is
Ybar(..) +/- t(alpha/2, R-1) * S * sqrt(1 + 1/R)
The length of the PI will not go to 0 as R increases, because we can never simulate away risk; the PI's limit is theta +/- z(alpha/2) * sigma.

8. Records pertaining to the monthly number of job-related injuries at an underground coal mine were being studied by a federal agency. The values for the past 100 months are as follows:
Injuries/month: 0 1 2 3 4 5 6
Frequency of occurrence: 35 40 13 6 4 1 1
Apply the chi-square test to these data to test the hypothesis that the underlying distribution is Poisson with mean 1.0. Use alpha = 0.05, with critical value chi^2(0.05,3) = 7.81. 10 Marks (Jun 2012)

Sol:
The first few numbers generated are as follows:
X1= 75(123,457) mod (231 - 1) = 2,074,941,799 mod (231 - 1)

10. Explain chi-square of goodness of- fit test for exponential distribution?

X1 = 2,074,941,799
1

sol:

31

R = X 2
5

10 marks (Dec/Jan 2011-12)

Compare histogram
31

X2 = 7 (2,074,941,799) mod(2 - 1) = 559,872,160

Valid for large sample sizes

R2 = X2 231= 0.2607

Arrange the n observations

X3 = 7 (559,872,160) mod(231 - 1) = 1,645,535,613

Which approximately follows the chi-square distribution with k-s-1 degrees of freedom, where s =

R3 = X3 231= 0.7662

# of parameters of the hypothesized distribution estimated by the sample statistics.

Null

hypothesis observations come from a specified distribution cannot be rejected at a significance of


9. Differentiate between terminating and steady state simulation with respect tooutput
analysis with an example.

if: Comments: m Errors in cells with small Eisaffect the test statistics more than cells with large
Eis. m Minimum size of Eidebated: recommends a value of 5 or more; if not combine adjacent

10 Marks (Jun 2012)

Sol:

cells. m Test designed for discrete distributions and large sample sizes only. For continuous

Confidence-Interval Estimation

distributions, Chi-Square test is only an approximation (i.e., level of significance holds only for n-

Dept of CSE, SJBIT

Page 70

Dept of CSE, SJBIT

Page 71

System Modeling and Simulation

10CS82

System Modeling and Simulation

10CS82
=

>).

Is biased if:

observations categorized into cells at intervals of 0.1, between 0 and 1. At level of

Is unbiased if its expected value is , that is if:

Example 1: 500 random numbers generated using a random number generator;

E() =

significance of 0.1, are these numbers IID U(0,1)?


Interval

Oi

50

54

63

45

52

42

49

48

50

47

Ei
50
50
50
50
50
50
50
50
50

5.84

500

50

10

[(Oi-Ei)^2]/Ei

9. Differentiate between terminating and steady-state simulation with respect to output analysis, with an example.    10 Marks (Jun 2012)
Sol:
Confidence-Interval Estimation
To understand confidence intervals fully, it is important to distinguish between measures of error and measures of risk, e.g., a confidence interval versus a prediction interval. Suppose the model is the normal distribution with mean θ and variance σ² (both unknown). Let Yi be the average cycle time for parts produced on the ith replication of the simulation (its mathematical expectation is θ). The average cycle time will vary from day to day, but over the long run the average of the averages will be close to θ.
Sample variance across R replications:

S² = (1/(R − 1)) · Σ(i=1..R) (Yi − Ȳ)²
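A small sketch of the companion confidence interval (an assumption here: the standard t-based interval Ȳ ± t(α/2, R−1) · S/√R built from the sample variance above; the data are hypothetical):

    import math
    from scipy import stats

    def confidence_interval(y, alpha=0.05):
        """t-based CI for the mean across R replications: Ybar +/- t * S / sqrt(R)."""
        R = len(y)
        ybar = sum(y) / R
        s2 = sum((yi - ybar) ** 2 for yi in y) / (R - 1)  # S^2 across replications
        half = stats.t.ppf(1 - alpha / 2, R - 1) * math.sqrt(s2 / R)
        return ybar - half, ybar + half

    print(confidence_interval([6.1, 5.8, 6.4, 6.0, 5.9]))

Unlike the prediction interval, this half-width shrinks toward zero as R increases: it measures error in estimating θ, not day-to-day risk.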

11. Briefly explain measures of performance of a simulation system.    10 Marks (Dec/Jan 2011-12)
Sol:
Consider the estimation of a performance parameter, θ (or φ), of a simulated system.
Discrete-time data: [Y1, Y2, …, Yn], with ordinary mean θ.
Continuous-time data: {Y(t), 0 ≤ t ≤ TE}, with time-weighted mean φ.

Point estimation for discrete-time data. The point estimator:

θ̂ = (1/n) · Σ(i=1..n) Yi

It is unbiased if its expected value is θ, that is, if E(θ̂) = θ; it is biased if E(θ̂) ≠ θ.

Point estimation for continuous-time data. The point estimator:

φ̂ = (1/TE) · ∫[0,TE] Y(t) dt

It is biased in general: E(φ̂) ≠ φ. An unbiased or low-bias estimator is desired.
Usually, system performance measures can be put into the common framework of θ or φ: e.g., for the proportion of days on which sales are lost through an out-of-stock situation, let Y(t) = 1 if out of stock on day i, and 0 otherwise.
A performance measure that does not fit this framework is the quantile or percentile: Pr{Y ≤ θ} = p. Estimating quantiles is the inverse of the problem of estimating a proportion or probability: consider a histogram of the observed values Y, and find θ̂ such that 100p% of the histogram is to the left of (smaller than) θ̂.
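A brief sketch of the time-weighted estimator for a piecewise-constant output (hypothetical data; Y(t) is assumed to hold each value for the given duration):

    def time_average(values, durations):
        """phi_hat = (1/TE) * integral_0^TE Y(t) dt for piecewise-constant Y(t)."""
        TE = sum(durations)
        area = sum(v * d for v, d in zip(values, durations))
        return area / TE

    # Hypothetical out-of-stock indicator: Y(t) = 1 while out of stock, else 0.
    print(time_average([0, 1, 0, 1], [3.0, 1.5, 4.0, 1.5]))  # 0.3 of time out of stock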


Unit 8
Verification, Calibration, and Validation; Optimization

1. Explain the three-step approach for the validation process as formulated by Naylor and Finger.    10 Marks (June 2014)

2. With a neat diagram, explain the model building, verification and validation process.    10 Marks (June 2013) (June 2014)

Sol:
Model assumptions fall into two general classes: structural assumptions and data assumptions.
Structural assumptions involve questions of how the system operates and usually involve simplifications and abstractions of reality. For example, consider the customer queuing and service facility in a bank. Customers may form one line, or there may be an individual line for each teller. If there are many lines, customers may be served strictly on a first-come, first-served basis, or some customers may change lines if one is moving faster. The number of tellers may be fixed or variable. These structural assumptions should be verified by actual observation during appropriate time periods, together with discussions with managers and tellers regarding bank policies and the actual implementation of these policies.
Data assumptions should be based on the collection of reliable data and correct statistical analysis of the data.

Validating Input-Output Transformation
In this phase of the validation process, the model is viewed as an input-output transformation. That is, the model accepts the values of input parameters and transforms these inputs into output measures of performance. It is this correspondence that is being validated.
A necessary condition for input-output validation is that some version of the system under study exists, so that system data under at least one set of input conditions can be collected and compared to model predictions. If the system is in the planning stage and no system operating data can be collected, complete input-output validation is not possible.
Instead of validating the model's input-output transformation by predicting the future, the modeler may use past historical data that has been reserved for validation purposes; that is, if one data set has been used to develop and calibrate the model, it is recommended that a separate data set be used as the final validation test. Thus, accurate prediction of the past may replace prediction of the future for the purpose of validation.

Verification of Simulation Models
The purpose of model verification is to assure that the conceptual model is reflected accurately in the computerized representation. The conceptual model quite often involves some degree of abstraction about system operations, or some amount of simplification of actual operations.
Many suggestions can be given for use in the verification process:

1: Have the computerized representation checked by someone other than its developer.
2: Make a flow diagram which includes each logically possible action a system can take when an event occurs, and follow the model logic for each action for each event type.
3: Closely examine the model output for reasonableness under a variety of settings of the input parameters.
4: Have the computerized representation print the input parameters at the end of the simulation, to be sure that these parameter values have not been changed inadvertently.
5: Make the computerized representation as self-documenting as possible.
6: If the computerized representation is animated, verify that what is seen in the animation imitates the actual system.
7: The interactive run controller (IRC) or debugger is an essential component of successful simulation model building. Even the best of simulation analysts makes mistakes or commits logical errors when building a model. The IRC assists in finding and correcting those errors in the following ways:
(a) The simulation can be monitored as it progresses.
(b) Attention can be focused on a particular line of logic or multiple lines of logic that constitute a procedure or a particular entity.
(c) Values of selected model components can be observed. When the simulation has paused, the current value or status of variables, attributes, queues, resources, counters, etc., can be observed.
(d) The simulation can be temporarily suspended, or paused, not only to view information but also to reassign values or redirect entities.
8: Graphical interfaces are recommended for accomplishing verification and validation.

Calibration and Validation of Models
Verification and validation, although conceptually distinct, are usually conducted simultaneously by the modeler.
Validation is the overall process of comparing the model and its behavior to the real system and its behavior.
Calibration is the iterative process of comparing the model to the real system, making adjustments to the model, comparing again, and so on.
The following figure 7.2 shows the relationship of model calibration to the overall validation process.
The comparison of the model to reality is carried out by a variety of tests. Tests are subjective and objective. Subjective tests usually involve people who are knowledgeable about one or more aspects of the system making judgments about the model and its output. Objective tests always require data on the system's behavior plus the corresponding data produced by the model.
As an aid in the validation process, Naylor and Finger [1967] formulated a three-step approach which has been widely followed:
1. Build a model that has high face validity.
2. Validate model assumptions.
3. Compare the model input-output transformations to corresponding input-output transformations for the real system.

3. Describe with a neat diagram the iterative process of calibrating a model.    10 Marks (June 2013)
Sol: Refer to "Calibration and Validation of Models" in the answer to Q.2 above.

4. Explain with a neat diagram verification of simulation model.    10 Marks (Dec 2012-13)
Sol: Refer to "Verification of Simulation Models" in the answer to Q.2 above.

5. Describe with a neat diagram the iterative process of calibrating a model.    10 Marks (Dec 2012-13)
Sol: Refer to "Calibration and Validation of Models" in the answer to Q.2 above.

6. Explain with a neat diagram verification of simulation model.    10 Marks (June 2012)
Sol: Refer to "Verification of Simulation Models" in the answer to Q.2 above.

7. Describe with a neat diagram the iterative process of calibrating a model. Which are the three steps that aid in the validation process?    10 Marks (June 2012)
Sol: Refer to "Calibration and Validation of Models" and the Naylor and Finger three-step approach in the answer to Q.2 above.

8. Describe with a neat diagram the iterative process of calibrating a model.    10 Marks (Dec/Jan 2011-12)
Sol: Refer to "Calibration and Validation of Models" in the answer to Q.2 above.

9. With a neat diagram, explain the model building, verification and validation process.    10 Marks (Dec/Jan 2011-12)
Sol: Refer to the answer to Q.2 above.

10. Explain any two output analysis techniques for steady-state simulation.    10 Marks (June 2011)
Sol:
Initialization Bias [Steady-State Simulations]
There is no widely accepted, objective, and proven technique to guide how much data to delete to reduce initialization bias to a negligible level.

A plot of the ensemble averages reveals a smoother and more precise trend as the number of replications, R, increases, and indicates how the ensemble averages approach steady state.

Error Estimation [Steady-State Simulations]
If the output series is autocorrelated, the usual variance estimator is biased: E[S²/n] = B · V(Ȳ), where B is a bias factor that depends on the autocorrelation of the series (positive autocorrelation leads to B < 1, so the variability of the point estimator is underestimated). Representative cases are:
(a) A stationary time series Yi exhibiting positive autocorrelation.
(b) A stationary time series Yi exhibiting negative autocorrelation.
(c) A nonstationary time series with an upward trend.

11. Write short notes on:    15 Marks (June 2011)
Sol:
a) Characteristics of queueing systems

Characteristics of Queueing Systems
Key elements of queueing systems:
Customer: refers to anything that arrives at a facility and requires service, e.g., people, machines, trucks, emails.
Server: refers to any resource that provides the requested service, e.g., repairpersons, retrieval machines, runways at an airport.

Calling Population [Characteristics of Queueing Systems]
Calling population: the population of potential customers; may be assumed to be finite or infinite.
Finite population model: the arrival rate depends on the number of customers being served and waiting; e.g., in a model of one corporate jet, if it is being repaired, the repair arrival rate becomes zero.
Infinite population model: the arrival rate is not affected by the number of customers being served and waiting; e.g., systems with a large population of potential customers.

System Capacity [Characteristics of Queueing Systems]
System capacity: a limit on the number of customers that may be in the waiting line or system.
Limited capacity, e.g., an automatic car wash only has room for 10 cars to wait in line to enter the mechanism.
Unlimited capacity, e.g., concert ticket sales with no limit on the number of people allowed to wait to purchase tickets.

Arrival Process [Characteristics of Queueing Systems]
For infinite-population models, the arrival process is characterized in terms of the interarrival times of successive customers.
Random arrivals: interarrival times are usually characterized by a probability distribution. The most important model is the Poisson arrival process (with rate λ), where An, the interarrival time between customer n−1 and customer n, is exponentially distributed (with mean 1/λ).
Scheduled arrivals: interarrival times can be constant, or constant plus or minus a small random amount to represent early or late arrivals; e.g., patients to a physician or scheduled airline flight arrivals to an airport.
At least one customer is assumed to always be present, so the server is never idle; e.g., sufficient raw material for a machine.
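A short sketch (not from the notes) of sampling such a Poisson arrival process by drawing exponential interarrival times with rate λ:

    import random

    def interarrival_times(lam, n, seed=1):
        """Sample n interarrival times A_n ~ Exponential with mean 1/lam."""
        rng = random.Random(seed)
        return [rng.expovariate(lam) for _ in range(n)]

    gaps = interarrival_times(lam=2.0, n=5)  # rate: 2 customers per unit time
    arrivals = [sum(gaps[:i + 1]) for i in range(len(gaps))]  # arrival epochs
    print(gaps)
    print(arrivals)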

b) Errors while generating pseudorandom numbers
"Pseudo" means false, so false random numbers are being generated. The goal of any generation scheme is to produce a sequence of numbers between zero and one which simulates, or imitates, the ideal properties of uniform distribution and independence as closely as possible.
When generating pseudo-random numbers, certain problems or errors can occur. These errors, or departures from ideal randomness, are all related to the properties stated previously. Some examples include the following:
1. The generated numbers may not be uniformly distributed.
2. The generated numbers may be discrete-valued instead of continuous-valued.
3. The mean of the generated numbers may be too high or too low.
4. The variance of the generated numbers may be too high or too low.
5. There may be dependence. The following are examples:
(a) Autocorrelation between numbers.
(b) Numbers successively higher or lower than adjacent numbers.

c) Networks of queues
Many systems are naturally modeled as networks of single queues: customers departing from one queue may be routed to another. The following results assume a stable system with an infinite calling population and no limit on system capacity:
The rate out of a queue is the same as the arrival rate into the queue (over the long run).
If customers arrive to queue i at rate λi, and a fraction 0 ≤ pij ≤ 1 of them are routed to queue j upon departure, then the arrival rate from queue i to queue j is λi·pij (over the long run).
The overall arrival rate into queue j is

λj = aj + Σ(all i) λi · pij

where aj is the arrival rate from outside the network and the sum is taken over the arrival rates from the other queues in the network.
If queue j has cj < ∞ parallel servers, each working at rate μj, then the long-run utilization of each server is ρj = λj/(cj·μj), where ρj < 1 is required for a stable queue.
If arrivals from outside the network form a Poisson process with rate aj for each queue j, and if there are cj identical servers delivering exponentially distributed service times with mean 1/μj, then, in steady state, queue j behaves like an M/M/cj queue with arrival rate λj.
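The traffic equations λj = aj + Σi λi·pij can be solved numerically; the following sketch (a hypothetical two-queue network, with fixed-point iteration as one simple solution method) illustrates the computation of arrival rates and utilizations:

    # Hypothetical network: external arrivals a = [1.0, 0.5] per unit time;
    # 30% of queue 0's departures are routed to queue 1.
    a = [1.0, 0.5]
    p = [[0.0, 0.3],   # p[i][j] = fraction routed from queue i to queue j
         [0.0, 0.0]]

    lam = a[:]
    for _ in range(100):  # iterate lambda_j = a_j + sum_i lambda_i * p[i][j]
        lam = [a[j] + sum(lam[i] * p[i][j] for i in range(len(a)))
               for j in range(len(a))]
    print(lam)  # [1.0, 0.8]

    # Long-run utilization rho_j = lambda_j / (c_j * mu_j); must be < 1 for stability.
    c, mu = [1, 2], [2.0, 1.0]
    print([lam[j] / (c[j] * mu[j]) for j in range(len(a))])  # [0.5, 0.4]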

Information and Network Security (10CS835)

QUESTION BANK

UNIT 1: Introduction to Information Security

1. What is security and explain multiple layers of security in an organization? (10 Marks) (Dec 2012) (June 2013) (Dec 2013) (June 2014)

Management Controls
Program Management
System Security Plan
Life Cycle Maintenance
Risk Management
Review of Security Controls
Legal Compliance

Operational Controls
Contingency Planning
Security Education, Training, and Awareness
Personnel Security
Physical Security
Production Inputs and Outputs
Hardware & Software Systems Maintenance
Data Integrity

Technical Controls
Logical Access Controls
Identification, Authentication, Authorization, and Accountability
Audit Trails
Asset Classification and Control
Cryptography

Defense in Depth
One of the basic tenets of security architectures is the implementation of security in layers. This layered approach is called defense in depth. Defense in depth requires that the organization establish sufficient security controls and safeguards so that an intruder faces multiple layers of control.

Security Perimeter
A perimeter is the boundary of an area. A security perimeter defines the edge between the outer limit of an organization's security and the beginning of the outside world. A security perimeter is the first level of security that protects all internal systems from outside threats, as pictured in Fig 6.19. Unfortunately, the perimeter does not protect against internal attacks from employee threats or on-site physical threats.

FIREWALLS
A firewall is a device that selectively discriminates against information flowing into or out of the organization. A firewall is usually a computing device, or a specially configured computer, that allows or prevents information from entering or exiting the defined area based on a set of predefined rules. Firewalls are usually placed on the security perimeter, just behind or as part of a gateway router. While the gateway router is primarily designed to connect the organization's systems to the outside world, it too can be used as the front-line defense against attacks, as it can be configured to allow only a few types of protocols to enter.

2. List critical characteristics of information and explain in brief any five of them. (10 marks) (Dec 2012) (June 2013) (8 marks) (Dec 2013) (Dec 2014)

Critical Characteristics of Information: the value of information comes from the characteristics it possesses.
Availability: Enables users who need to access information to do so without interference or obstruction, and in the required format. The information is said to be available to an authorized user when and where needed and in the correct format.
Accuracy: Free from mistake or error and having the value that the end user expects. If information contains a value different from the user's expectations due to the intentional or unintentional modification of its content, it is no longer accurate.
Authenticity: The quality or state of being genuine or original, rather than a reproduction or fabrication. Information is authentic when it is the information that was originally created, placed, stored, or transferred.
Confidentiality: The quality or state of preventing disclosure or exposure to unauthorized individuals or systems.

Integrity: The quality or state of being whole, complete, and uncorrupted. The integrity of information is threatened when the information is exposed to corruption, damage, destruction, or other disruption of its authentic state.
Utility: The quality or state of having value for some purpose or end. Information has value when it serves a particular purpose. This means that if information is available, but not in a format meaningful to the end user, it is not useful.

3. What are the policies present in the NSTISSC security model? (8 marks) (Dec 2012) (June 2013) (10 marks) (Dec 2014)
The National Security Telecommunications and Information Systems Security Committee (NSTISSC) was established by President Bush under National Security Directive 42 (NSD 42), entitled "National Policy for the Security of National Security Telecommunications and Information Systems," dated 5 July 1990. It reaffirms the Secretary of Defense as the Executive Agent and the Director, National Security Agency as the National Manager for National Security Telecommunications and Information Systems Security. In addition, the Directive establishes the NSTISSC.
The NSTISSC provides a forum for the discussion of policy issues, sets national policy, and promulgates direction, operational procedures, and guidance for the security of national security systems through the NSTISSC Issuance System. National security systems contain classified information or:
a. involve intelligence activities;
b. involve cryptographic activities related to national security;
c. involve command and control of military forces;
d. involve equipment that is an integral part of a weapon or weapons system(s); or
e. are critical to the direct fulfillment of military or intelligence missions (not including routine administrative and business applications).


4. What are approaches to information security implementation? Explain (top-down and bottom-up approaches). (10 marks) (June 2013) (Dec 2013) (5 marks) (Dec 2014)
The implementation of information security in an organization must begin somewhere, and cannot happen overnight. Securing information assets is in fact an incremental process that requires coordination, time, and patience. Information security can begin as a grassroots effort in which systems administrators attempt to improve the security of their systems. This is often referred to as a bottom-up approach. The key advantage of the bottom-up approach is the technical expertise of the individual administrators. Working with information systems on a day-to-day basis, these administrators possess in-depth knowledge that can greatly enhance the development of an information security system. They know and understand the threats to their systems and the mechanisms needed to protect them successfully. Unfortunately, this approach seldom works, as it lacks a number of critical features, such as participant support and organizational staying power.
The top-down approach, in which the project is initiated by upper-level managers who issue policy, procedures and processes, dictate the goals and expected outcomes, and determine accountability for each required action, has a higher probability of success. This approach has strong upper-management support, a dedicated champion, usually dedicated funding, a clear planning and implementation process, and the means of influencing organizational culture. The most successful kind of top-down approach also involves a formal development strategy referred to as a systems development life cycle. For any organization-wide effort to succeed, however, management must buy into and fully support it. The role played in this effort by the champion cannot be overstated. Typically, this champion is an executive, such as a chief information officer (CIO) or the vice president of information technology (VP-IT), who moves the project forward, ensures that it is properly managed, and pushes for acceptance throughout the organization. Without this high-level support, many of the mid-level administrators fail to make time for the project or dismiss it as a low priority.
Also critical to the success of this type of project is the involvement and support of the end users. These individuals are most directly affected by the process and outcome of the project and must be included in the information security process. Key end users should be assigned to a developmental team, known as the joint application development team (JAD).

Plans for events of this type are referred to in a number of ways:
Business Continuity Plans (BCPs)
Disaster Recovery Plans (DRPs)
Incident Response Plans (IRPs)
Contingency Plans
Contingency Planning (CP) comprises:
Incident Response Planning (IRP)
Disaster Recovery Planning (DRP)
Business Continuity Planning (BCP)
The primary functions of these three planning types:
IRP focuses on immediate response, but if the attack escalates or is disastrous, the process changes to disaster recovery and BCP.
DRP typically focuses on restoring systems after disasters occur, and as such is closely associated with BCP.
BCP occurs concurrently with DRP when the damage is major or long term, requiring more than simple restoration of information and information resources.

5. Explain the Security System Development Life Cycle. (8 marks) (June 2014)
Knowledge about the SDLC is very important for anyone who wants to understand the S-SDLC. The following are some of the major steps which are common throughout the SDLC process, regardless of the organization.
Requirements Gathering: A Software Requirement Specification, or SRS, is a document which records the expected behavior of the system or software which needs to be developed.
Design: Software design is the blueprint of the system, which once completed can be provided to developers for code development. Based on the components in the design, they are translated into software modules/functions/libraries, etc., and these pieces together form a software system.
Coding: During this phase, the blueprint of the software is turned into reality by developing the source code of the entire application. The time taken to complete the development depends on the size of the application and the number of programmers involved.
Testing: Once the application development is completed, it is tested for various issues like functionality, performance, and so on. This is to ensure that the application is performing as expected. If there are any issues, these issues are fixed before/after going to production, depending on the nature of the issue and the urgency to go live for the application.
Deployment: Once the application is ready to go live, it is deployed on a production server in this phase. If it is developed for a client, the deployment happens on the client's premises or in a datacenter where the client wants to get the application installed.

6. List and briefly explain Information Security Terminologies. (8 marks) (June 2013) (Dec 2013) (5 marks) (Dec 2014)
IDS Terminology
Alert or Alarm: An indication that a system has just been attacked and/or continues to be under attack. IDSs create alerts or alarms to notify administrators that an attack is or was occurring and may have been successful.
False Attack Stimulus: An event that triggers alarms and causes a false positive when no actual attacks are in progress.
False Negative: The failure of an IDS to react to an actual attack event. Of all failures, this is the most grievous, for the very purpose of an IDS is to detect attacks.
False Positive: An alarm or alert that indicates that an attack is in progress, or that an attack has successfully occurred, when in fact there was no such attack.
Noise: The ongoing activity from alarm events that are accurate and noteworthy but not necessarily significant as potentially successful attacks.
Site Policy: The rules and configuration guidelines governing the implementation and operation of IDSs within the organization.
Site Policy Awareness: An IDS's ability to dynamically modify its site policies in reaction or response to environmental activity.
True Attack Stimulus: An event that triggers alarms and causes an IDS to react as if a real attack is in progress. The event may be an actual attack, in which an attacker is at work on a system compromise attempt, or it may be a drill, in which security personnel are using hacker tools to conduct tests of a network segment.
Confidence Value: A value associated with an IDS's ability to detect and identify an attack correctly. The confidence value an organization places in the IDS is based on experience and past performance measurements.
Alarm Filtering: The process of classifying the attack alerts that an IDS produces in order to distinguish/sort false positives from actual attacks more efficiently.
Alarm Clustering: A consolidation of almost identical alarms into a single higher-level alarm.
Alarm Compaction: Alarm clustering that is based on frequency, similarity in attack signature, similarity in attack target, or other similarities.

7. Explain the security system development life cycle. (7 marks) (Dec 2014)
Sol: Refer to the answer to Q.5 above.

8. Enlist the salient features and drawbacks of the ISO 17799/BS 7799 security model. (6 marks) (June 2013) (Dec 2013) (Dec 2014)

Security Responsibilities and Accountability Should Be Made Explicit:
Policy documents should clearly identify the security responsibilities of users, administrators, and managers. To be legally binding, this information must be documented, disseminated, read, understood, and agreed to. Ignorance of the law is no excuse, but ignorance of policy is. Regarding the law, the organization should also detail the relevance of laws to issue-specific security policies. These details should be distributed to users, administrators, and managers to assist them in complying with their responsibilities.

Security Supports the Mission of the Organization:
Failure to develop an information security system based on the organization's mission, vision and culture guarantees the failure of the information security program.
Security is an Integral Element of Sound Management:
Effective management includes planning, organizing, leading, and controlling. Security enhances these areas by supporting the planning function when information security policies provide input into organization initiatives. Information security specifically supports the controlling function, as security controls support sound management by means of the enforcement of both managerial and security policies.
Security should be Cost-effective:
The costs of information security should be considered part of the cost of doing business, much like the cost of computers, networks, and voice communications systems. These are not profit-generating areas of the organization and may not lead to competitive advantages. Information security should justify its own costs. Security measures that do not justify cost-benefit levels must have a strong business case (such as a legal requirement) to warrant their use.
Systems owners have security responsibilities outside their own organizations:
Whenever systems store and use information from customers, patients, clients, partners, and others, the security of this information becomes a serious responsibility for the owner of the systems.

UNIT 2: Planning for Security

1. Why is shaping an information security policy difficult? (3 marks) (Dec 2014)
(1) Failure To Explicitly Define Long-Term Implications: While keeping presentations about proposed policies both short and to the point is certainly desirable, sometimes information security specialists go overboard with this objective. They may then leave out the important long-term implications of adopting a specific policy. Management may later be dismayed to discover that these implications contradict, or are otherwise in conflict with, other organizational objectives.
(2) Full Cost Analysis Not Employed: With the severe economic pressure that so many IT shops are facing these days, it can be tempting to only request the first step in a long implementation process. For example, next year's budget proposal, in support of a newly adopted policy, may request only the purchase of new security software and training for operations staff to use that same software.
(3) User Training & Acceptance Efforts Not Undertaken: Failure to convince those who will be affected by a policy that the new policy is indeed in their best interests is most likely going to be a serious problem. As much as those responsible for meeting a deadline might like to autocratically dictate that such-and-such a policy will be adopted by everyone, period, this approach sets up a resistance dynamic that will interfere with the consistent implementation of a new policy.
(4) Discovery Of Unforeseen Implementation Conflicts: Failure to research the cultural, ethical, economic, and operational implications of the policy implementation process is often a problem. This is particularly serious in multi-national organizations where the local way of doing things may be at great variance from a newly adopted policy.
(5) Communications Gap Between Technologists & Management: In many instances, the information systems technical infrastructure is modified regularly to respond to new security threats, to adjust new software so that it operates better, to accommodate a new business partner, to improve user response time, etc.

2. What is policy? How can and should it be used? Explain. (5 marks) (Dec 2012) (June 2013) (10 marks) (June 2014)
Information Security Policy, Standards, and Practices

Management from all communities of interest must consider policies as the basis for all information security efforts, like planning, design and deployment. Policies direct how issues should be addressed and technologies used. Policies do not specify the proper operation of equipment or software; this information should be placed in the standards, procedures and practices of user manuals and systems documentation. Security policies are the least expensive control to execute, but the most difficult to implement properly. Shaping policy is difficult because a policy must: never conflict with laws; stand up in court, if challenged; and be properly administered through dissemination and documented acceptance.

3. With a block diagram, explain how policies, standards, practices, procedures and guidelines are related. (7 marks) (Dec 2012) (June 2013) (10 marks) (June 2014)

Three approaches to creating and managing ISSPs:
Independent ISSP documents, each tailored to a specific issue.
A single comprehensive ISSP document covering all issues.
A modular ISSP document that unifies policy creation and administration, while maintaining each specific issue's requirements.
The independent document approach typically has a scattershot effect. Each department responsible for a particular application of technology creates a policy governing its use, management, and control. This approach to creating ISSPs may fail to cover all of the necessary issues and can lead to poor policy distribution, management, and enforcement.
The single comprehensive policy approach is centrally managed and controlled. With formal procedures for the management of ISSPs in place, the comprehensive policy approach establishes guidelines for overall coverage of necessary issues and clearly identifies processes for the dissemination, enforcement, and review of these guidelines. Usually, these policies are developed by those responsible for managing the information technology resources.
The optimal balance between the independent and comprehensive ISSP approaches is the modular approach. It is also centrally managed and controlled but tailored to the individual technology issues. The modular approach provides a balance between issue orientation and policy management. The policies created with this approach comprise individual modules, each created and updated by individuals responsible for the issues addressed. These individuals report to a central policy administration group that incorporates specific issues into an overall comprehensive policy.

4. Define security policy. Briefly discuss three types of security policies. (8 marks) (June 2013) (Dec 2013) (5 marks) (Dec 2014)

Enterprise Information Security Policy (EISP)
A security program policy (SPP) or EISP is also known as a general security policy, IT security policy, or information security policy.

Issue-Specific Security Policy (ISSP)
As various technologies and processes are implemented, certain guidelines are needed to use them properly. The ISSP addresses specific areas of technology, like:
Electronic mail
Use of the Internet
Specific minimum configurations of computers to defend against worms and viruses
Prohibitions against hacking or testing organization security controls
Use of telecommunications technologies (fax and phone)
Use of personal equipment on company networks
Home use of company-owned computer equipment
Use of photocopy equipment

Systems-Specific Policy (SysSP)
While issue-specific policies are formalized as written documents, distributed to users, and agreed to in writing, SysSPs are frequently codified as standards and procedures to be used when configuring or maintaining systems. Systems-specific policies fall into two groups: managerial guidance and technical specifications.
Access control lists (ACLs) consist of the access control lists, matrices, and capability tables governing the rights and privileges of a particular user to a particular system. An ACL is a list of access rights used by file storage systems, object brokers, or other network communications devices to determine which individuals or groups may access an object that it controls. (Object brokers are system components that handle message requests between the software components of a system.)

5. Explain the information security blueprint and its major components. (7 marks) (Dec 2014)
Designing a plan for security begins by creating or validating a security blueprint, then using the blueprint to plan the tasks to be accomplished and the order in which to proceed. Setting priorities can follow the recommendations of published sources, or published standards provided by government agencies, or private consultants.

Information Security Blueprints
One approach is to adapt or adopt a published model or framework for information security. A framework is the basic skeletal structure within which additional detailed planning of the blueprint can be placed as it is developed or refined. Experience teaches us that what works well for one organization may not precisely fit another.
This security blueprint is the basis for the design, selection, and implementation of all security policies, education and training programs, and technological controls. The security blueprint is a more detailed version of the security framework, which is an outline of the overall information security strategy for the organization and the roadmap for planned changes to the information security environment of the organization.

6. Briefly describe management, operational and technical controls, and explain when each would be applied as part of a security framework. (10 marks) (June 2013) (Dec 2013) (5 marks) (Dec 2014)

Management Controls
System Security Plan
Authorization of Processing (Certification and Accreditation)
Life Cycle Maintenance
Risk Management
Review of Security Controls

Operational Controls
Contingency Planning
Production, Input / Output Controls
Physical Security
Personnel Security
Hardware and Systems Software
Data Integrity
Documentation
Security Awareness, Training, and Education
Incident Response Capability

Technical Controls
Identification and Authentication
Logical Access Controls
Audit Trails

7. Explain information security architecture. (7 marks) (Dec 2014) (10 marks) (June 2014) (Dec 2013)
Information security architecture:
Goals


Provide structure, coherence and cohesiveness.
Must enable business-to-security alignment.
Defined top-down beginning with business strategy.

Ensure that all models and implementations can be traced back to the business
strategy, specific business requirements and key principles.
Provide abstraction so that complicating factors, such as geography and technology
religion, can be removed and reinstated at different levels of detail only when
required.
Establish a common "language" for information security within the organization
Methodology
The practice of enterprise information security architecture involves developing an
architecture security framework to describe a series of "current", "intermediate" and "target"
reference architectures and applying them to align programs of change. These frameworks
detail the organizations, roles, entities and relationships that exist or should exist to perform a
set of business processes. This framework will provide a rigorous taxonomy and ontology
that clearly identifies what processes a business performs and detailed information about how
those processes are executed and secured. The end product is a set of artifacts that describe in
varying degrees of detail exactly what and how a business operates and what security controls
are required. These artifacts are often graphical.
Given these descriptions, whose levels of detail will vary according to affordability and other
practical considerations, decision makers are provided the means to make informed decisions
about where to invest resources, where to realign organizational goals and processes, and
what policies and procedures will support core missions or business functions.
A strong enterprise information security architecture process helps to answer basic questions
like:
What is the information security risk posture of the organization?
Is the current architecture supporting and adding value to the security of the
organization?
How might a security architecture be modified so that it adds more value to the
organization?
Based on what we know about what the organization wants to accomplish in the
future, will the current security architecture support or hinder that?

8. Describe the major steps in the Plan-Do-Check-Act method of an information security management system. (10 marks) (Dec 2012) (June 2013) (10 marks) (June 2014)
PDCA (plan-do-check-act or plan-do-check-adjust) is an iterative four-step management method used in business for the control and continuous improvement of processes and products. It is also known as the Deming circle/cycle/wheel, Shewhart cycle, control circle/cycle, or plan-do-study-act (PDSA). Another version of this PDCA cycle is OPDCA. The added "O" stands for observation, or as some versions say, "Grasp the current condition." This emphasis on observation and current condition has currency with Lean manufacturing/Toyota Production System literature.
PLAN
Establish the objectives and processes necessary to deliver results in accordance with the expected output (the target or goals). By establishing output expectations, the completeness and accuracy of the spec is also a part of the targeted improvement. When possible, start on a small scale to test possible effects.
DO
Implement the plan, execute the process, make the product. Collect data for charting and analysis in the following "CHECK" and "ACT" steps.
CHECK
Study the actual results (measured and collected in "DO" above) and compare against the expected results (targets or goals from the "PLAN") to ascertain any differences. Look for deviation in implementation from the plan, and also look for the appropriateness and completeness of the plan to enable the execution, i.e., "Do". Charting data can make it much easier to see trends over several PDCA cycles and to convert the collected data into information. Information is what you need for the next step, "ACT".
ACT
Request corrective actions on significant differences between actual and planned results. Analyze the differences to determine their root causes. Determine where to apply changes that will include improvement of the process or product. When a pass through these four steps does not result in the need to improve, the scope to which PDCA is applied may be refined to plan and improve with more detail in the next iteration of the cycle, or attention needs to be placed in a different stage of the process.

9. Illustrate with a diagram how information is under attack from a variety of sources, with reference to the spheres of security. (10 marks) (Dec 2012) (June 2013) (8 marks) (Dec 2013) (Dec 2014)

The Sphere of Security
The sphere of security (Fig 6.16) is the foundation of the security framework. The spheres of security illustrate how information is under attack from a variety of sources. The sphere of use illustrates the ways in which people access information; for example, people read hard copies of documents and can also access information through systems. Information, as the most important asset in the model, is at the center of the sphere. Information is always at risk from attacks through the people and computer systems that have access to the information.

Networks and the Internet represent indirect threats, as exemplified by the fact that a person attempting to access information from the Internet must first go through the local networks and then access systems that contain the information.

Sphere of Use
Generally speaking, the concept of the sphere is to represent the 360 degrees of security necessary to protect information at all times. The first component is the sphere of use. Information, at the core of the sphere, is available for access by members of the organization and other computer-based systems. To gain access to the computer systems, one must either directly access the computer systems or go through a network connection. To gain access to the network, one must either directly access the network or go through an Internet connection.

The Sphere of Protection
The figure illustrates that between each layer of the sphere of use there must exist a layer of protection to prevent access to the inner layer from the outer layer. Each shaded band is a layer of protection and control. For example, the items labeled "Policy & law" and "Education & training" are located between people and the information. Controls are also implemented between systems and the information, between networks and the computer systems, and between the Internet and internal networks.
The sphere of protection overlays each of the levels of the sphere of use with a layer of security, protecting that layer from direct or indirect use through the next layer. The people must become a layer of security, a human firewall that protects the information from unauthorized access and use. The members of the organization must become a safeguard, which is effectively trained, implemented and maintained, or else they too will represent a threat to the information. However, because people can directly access each ring, as well as the information at the core of the model, the side of the sphere of protection that attempts to control access by relying on people requires a different approach to security than the side that uses technology.

UNIT 3: Security Technology

1. Define and identify the various types of firewalls. (10 marks) (Dec 2012) (June 2013) (8 marks) (Dec 2013) (Dec 2014)
A firewall is a device that selectively discriminates against information flowing into or out of the organization, allowing or preventing information from entering or exiting the defined area based on a set of predefined rules (see Unit 1, Q.1). Firewalls are usually placed on the security perimeter, just behind or as part of a gateway router, and can be used to create security perimeters like the one shown in Fig 6.19, creating a buffer between the outside and inside networks. A firewall can be a single device or a firewall subnet, which consists of multiple firewalls. There are a number of types of firewalls, which are usually classified by the level of information they can filter: firewalls can be packet filtering, stateful packet filtering, proxy, or application level.

DMZs
A buffer against outside attacks is frequently referred to as a demilitarized zone (DMZ). The DMZ is a no-man's land between the inside and outside networks; it is also where some organizations place web servers. These servers provide access to organizational web pages, without allowing web requests to enter the interior networks.

Proxy Servers
An alternative approach to the strategies of using a firewall subnet or a DMZ is to use a proxy server, or proxy firewall. A proxy server performs actions on behalf of another system. When deployed, a proxy server is configured to look like a web server and is assigned the domain name that users would be expecting to find for the system and its services. When an outside client requests a particular web page, the proxy server receives the request as if it were the subject of the request, then asks for the same information from the true web server (acting as a proxy for the requestor), and then responds to the request as a proxy for the true web server. This gives requestors the response they need without allowing them to gain direct access to the internal and more sensitive server. The proxy server may be hardened and become a bastion host placed in the public area of the network, or it might be placed within the firewall subnet or the DMZ for added protection. For more frequently accessed web pages, proxy servers can cache or temporarily store the page, and thus are sometimes called cache servers. Fig 6.20 shows a representative example of a configuration using a proxy.

10CS835
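The fetch-on-behalf-of and caching behavior just described can be sketched in a few lines of Python; the upstream server address and the GET-only, cache-everything policy are assumptions for illustration, not a hardened proxy:

    # Minimal caching forward-proxy sketch (stdlib only); the upstream
    # host name is invented for illustration.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen

    UPSTREAM = "http://internal-web-server.example.com"  # assumed true web server
    cache = {}  # previously fetched pages, keyed by path

    class ProxyHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path not in cache:
                # Ask the true web server on the requestor's behalf.
                with urlopen(UPSTREAM + self.path) as resp:
                    cache[self.path] = resp.read()
            body = cache[self.path]                      # serve the cached copy
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("", 8080), ProxyHandler).serve_forever()

Clients never see the internal server's address; they only ever talk to the proxy, which is the design point of the bastion-host placement described above.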

2. Describe how the various types of firewalls interact with the network traffic at various levels of the OSI model. (7 marks) (Dec 2012) (June 2013) (10 marks) (June 2014)
Firewalls fall into four broad categories: packet filters, circuit level gateways, application level gateways, and stateful multilayer inspection firewalls.
Packet filtering firewalls work at the network level of the OSI model, or the IP layer of TCP/IP. They are usually part of a router. A router is a device that receives packets from one network and forwards them to another network. In a packet filtering firewall, each packet is compared to a set of criteria before it is forwarded. Depending on the packet and the criteria, the firewall can drop the packet, forward it, or send a message to the originator. Rules can include source and destination IP address, source and destination port number, and protocol used. The advantage of packet filtering firewalls is their low cost and low impact on network performance. Most routers support packet filtering. Even if other firewalls are used, implementing packet filtering at the router level affords an initial degree of security at a low network layer. This type of firewall only works at the network layer, however, and does not support sophisticated rule-based models (see Figure 5). Network Address Translation (NAT) routers offer the advantages of packet filtering firewalls but can also hide the IP addresses of computers behind the firewall, and offer a level of circuit-based filtering.
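As an illustration of the rule-based packet filtering described above, the following sketch checks a packet's source address, destination port, and protocol against a predefined rule list; the rule values are invented:

    # Illustrative packet-filtering check: first matching rule wins,
    # anything unmatched is dropped. Rule values are assumptions.
    import ipaddress

    RULES = [
        {"src": "192.168.1.0/24", "port": 80, "proto": "tcp", "action": "allow"},
        {"src": "0.0.0.0/0",      "port": 23, "proto": "tcp", "action": "drop"},
    ]
    DEFAULT_ACTION = "drop"  # deny by default

    def filter_packet(src_ip, dst_port, proto):
        """Return the action a packet filter would take for this packet."""
        for rule in RULES:
            if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"])
                    and dst_port == rule["port"] and proto == rule["proto"]):
                return rule["action"]
        return DEFAULT_ACTION

    print(filter_packet("192.168.1.7", 80, "tcp"))   # allow
    print(filter_packet("203.0.113.9", 23, "tcp"))   # drop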
3. Identify and describe the two categories of intrusion detection systems. (10 marks) (June 2013) (Dec 2013) (5 marks) (Dec 2014)
Intrusion Detection Systems (IDSs)
In an effort to detect unauthorized activity within the inner network or on individual machines, an organization may wish to implement Intrusion Detection Systems (IDSs). IDSs come in two versions, with hybrids possible.
Host Based IDS
Host based IDSs are usually installed on the machines they protect to monitor the status of various files stored on those machines. The IDS learns the configuration of the system, assigns priorities to various files depending on their value, and can then alert the administrator of suspicious activity.
Network Based IDS
Network based IDSs look at patterns of network traffic and attempt to detect unusual activity based on previous baselines. This could include packets coming into the organization's networks with addresses from machines already within the organization (IP spoofing). It could also include high volumes of traffic going to outside addresses (as in a denial-of-service attack).
Both host and network based IDSs require a database of previous activity. In the case of host based IDSs, the system can create a database of file attributes, as well as maintain a catalog of common attack signatures; a sketch of such signature matching follows the next answer. Network-based IDSs can use a similar catalog of common attack signatures and develop databases of normal activity for comparison with future activity. The two kinds of IDS can be used together for the maximum level of security for a particular network and set of systems.

4. According to the NIST's documentation on industry best practices, what are the six reasons to acquire and use IDS? Explain. (7 marks) (Dec 2012) (June 2013) (10 marks) (June 2014)
Why Use an IDS?
According to the NIST's documentation on industry best practices, there are several compelling reasons to acquire and use an IDS:
1. To prevent problem behaviors by increasing the perceived risk of discovery and punishment for those who would attack or otherwise abuse the system.
2. To detect attacks and other security violations that are not prevented by other security measures.
3. To detect and deal with the preambles to attacks (commonly experienced as network probes and other 'doorknob rattling' activities).
4. To document the existing threat to an organization.
5. To act as quality control for security design and administration, especially of large and complex enterprises.
6. To provide useful information about intrusions that do take place, allowing improved diagnosis, recovery, and correction of causative factors.
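As a toy illustration of the signature catalogs mentioned in the answer to question 3 above, the following sketch flags payloads that contain known attack patterns; the signature byte strings are invented:

    # Toy signature matching against a catalog of known attack patterns,
    # as used by both host- and network-based IDSs. Signatures are invented.
    SIGNATURES = {
        "port-scan":  b"\x00\x00\x00\x00SYN",  # hypothetical byte pattern
        "cmd-inject": b"; rm -rf",
    }

    def match_signatures(payload: bytes):
        """Return the names of all catalog signatures found in a payload."""
        return [name for name, pattern in SIGNATURES.items() if pattern in payload]

    print(match_signatures(b"GET /index.html; rm -rf /"))  # ['cmd-inject']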

5. Explain the features of NIDS. List merits and demerits of the same. (3 marks)(Dec 2014) (7 marks) (Dec 2012) (June 2013)
Network-Based IDS
A network-based IDS (NIDS) resides on a computer or appliance connected to a segment of an organization's network and monitors network traffic on that network segment, looking for indications of ongoing or successful attacks. When a situation occurs that the NIDS is programmed to recognize as an attack, it responds by sending notifications to administrators. When examining the packets transmitted through an organization's networks, a NIDS looks for attack patterns within network traffic, such as large collections of related items that are of a certain type, which could indicate that a denial-of-service attack is underway, or the exchange of a series of related packets in a certain pattern, which could indicate that a port scan is in progress. A NIDS can detect many more types of attacks than a host-based IDS, but to do so, it requires a much more complex configuration and maintenance program. A NIDS is installed at a specific place in the network (such as on the inside of an edge router) from where it is possible to watch the traffic going into and out of a particular network segment. The NIDS can be deployed to watch a specific grouping of host computers on a specific network segment, or it may be installed to monitor all traffic between the systems that make up an entire network. When placed next to a hub, switch, or other key networking device, the NIDS may use that device's monitoring port. The monitoring port, also known as a switched port analysis (SPAN) port or mirror port, is a specially configured connection on a network device that is capable of viewing all of the traffic that moves through the entire device. In the early '90s, before switches became the popular choice for connecting networks in a shared-collision domain, hubs were used. Hubs received traffic from one node and retransmitted it to all other nodes. This configuration allowed any device connected to the hub to monitor all traffic passing through the hub. Unfortunately, it also represented a security risk, since anyone connected to the hub could monitor all the traffic that moved through that network segment. More recently, switches have been deployed on most networks, and they, unlike hubs, create dedicated point-to-point links between their ports. These links create a higher level of transmission security and privacy, and effectively prevent anyone from being able to capture, and thus eavesdrop on, the traffic passing through the switch. Unfortunately, however, this ability to capture the traffic is necessary for the use of an IDS. Thus, monitoring ports are required. These connections enable network administrators to collect traffic from across the network for analysis by the IDS, as well as for occasional use in diagnosing network faults and measuring network performance. Figure 7-2 shows a sample screen from Demarc PureSecure (see www.demarc.com) displaying events generated by the Snort Network IDS Engine (see www.snort.org).

6. Explain the features of HIDS. List merits and demerits of the same. (3 marks)(Dec 2014) (7 marks) (Dec 2012) (June 2013)
Host-Based IDS
A host-based IDS (HIDS) works differently from a network-based version of IDS. While a network-based IDS resides on a network segment and monitors activities across that segment, a host-based IDS resides on a particular computer or server, known as the host, and monitors activity only on that system. HIDSs are also known as system integrity verifiers, as they benchmark and monitor the status of key system files and detect when an intruder creates, modifies, or deletes monitored files. A HIDS is also capable of monitoring system configuration databases, such as Windows registries, in addition to stored configuration files like .ini, .cfg, and .dat files. Most HIDSs work on the principle of configuration or change management, which means they record the sizes, locations, and other attributes of system files. The HIDS then triggers an alert when one of the following changes occurs: file attributes change, new files are created, or existing files are deleted. A HIDS can also monitor systems logs for predefined events. The HIDS examines these files and logs to determine if an attack is underway or has occurred, and if the attack is succeeding or was successful. The HIDS will maintain its own log file so that even when hackers successfully modify files on the target system to cover their tracks, the HIDS can provide an independent audit trail of the attack. Once properly configured, a HIDS is very reliable. The only time a HIDS produces a false positive alert is when an authorized change occurs for a monitored file. This action can be quickly reviewed by an administrator and dismissed as acceptable. The administrator may choose then to disregard subsequent changes to the same set of files. If properly configured, a HIDS can also detect when an individual user attempts to modify or exceed his or her access authorization and give him or herself higher privileges. A HIDS has an advantage over a NIDS in that it can usually be installed in such a way that it can access information that is encrypted when traveling over the network. For this reason, a HIDS is able to use the content of otherwise encrypted communications to make decisions about possible or successful attacks. Since the HIDS has a mission to detect intrusion activity on one computer system, all the traffic it needs to make that decision is coming to the system where the HIDS is running; the nature of the network packet delivery, whether switched or in a shared-collision domain, is not a factor.
A HIDS relies on the classification of files into various categories and then applies various notification actions, depending on the rules in the HIDS configuration. Most HIDSs provide only a few general levels of alert notification. For example, an administrator can configure a HIDS to treat the following types of changes as reportable security events: changes in a system folder (e.g., in C:\Windows or C:\WINNT); and changes within a security-related application (such as C:\TripWire). In other words, administrators can configure the system to alert on any changes within a critical data folder. The configuration rules may classify changes to a specific application folder (e.g., C:\Program Files\Office) as being normal, and hence unreportable. Administrators can configure the system to log all activity but to page them or e-mail them only if a reportable security event occurs. Although this change-based system seems simplistic, it seems to suit most administrators, who, in general, become concerned only if unauthorized changes occur in specific and sensitive areas of the host file system. Applications frequently modify their internal files, such as dictionaries and configuration templates, and users are constantly updating their data files. Unless a HIDS is very specifically configured, these actions can generate a large volume of false alarms. Managed HIDSs can monitor multiple computers simultaneously. They do this by creating a configuration file on each monitored host and by making each HIDS report back to a master console system, which is usually located on the system administrator's computer. This master console monitors the information provided from the managed hosts and notifies the administrator when it senses recognizable attack conditions.
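The change-management principle described above (record file attributes as a baseline, alert on changes) can be sketched as follows; the monitored path and the alert policy are assumptions:

    # Minimal sketch of HIDS-style change management: snapshot size,
    # modification time, and a SHA-256 digest per file, then report
    # any file whose recorded attributes changed.
    import hashlib, os

    def snapshot(paths):
        """Record (size, mtime, digest) for each monitored file."""
        state = {}
        for p in paths:
            with open(p, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            st = os.stat(p)
            state[p] = (st.st_size, st.st_mtime, digest)
        return state

    def detect_changes(baseline, paths):
        """Compare a fresh snapshot against the baseline, like an alert pass."""
        current = snapshot(paths)
        return [p for p in paths if current.get(p) != baseline.get(p)]

    files = ["/etc/hosts"]                  # example monitored file
    baseline = snapshot(files)
    print(detect_changes(baseline, files))  # [] while nothing has changed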
7. Discuss scanning, analysis tools, and content filters. (10 marks) (Dec 2012) (June 2013) (8 marks) (Dec 2013) (Dec 2014)
Scanning and Analysis Tools
In order to secure a network, it is imperative that someone in the organization knows exactly where the network needs securing. This may sound like a simple and intuitive statement; however, many companies skip this step. They install a simple perimeter firewall and then, lulled into a sense of security by this single layer of defense, they rest on their laurels. To truly assess the risk within a computing environment, one must deploy technical controls using a strategy of defense in depth. A strategy based on the concept of defense in depth is likely to include intrusion detection systems (IDS), active vulnerability scanners, passive vulnerability scanners, automated log analyzers, and protocol analyzers (commonly referred to as sniffers). As you've learned, the first item in this list, the IDS, helps to secure networks by detecting intrusions; the remaining items in the list also help secure networks, but they do this by helping administrators identify where the network needs securing. More specifically, scanner and analysis tools can find vulnerabilities in systems, holes in security components, and unsecured aspects of the network. Although some information security experts may not perceive them as defensive tools, scanners, sniffers, and other such vulnerability analysis tools can be invaluable to security administrators because they enable administrators to see what the attacker sees. Some of these tools are extremely complex and others are rather simple. The tools can also range from expensive commercial products to those that are freely available at no cost. Many of the best scanning and analysis tools are those that the attacker community has developed, and they are available free on the Web. Good administrators should have several hacking Web sites bookmarked and should try to keep up with chat-room discussions on new vulnerabilities, recent conquests, and favorite assault techniques. There is nothing wrong with a security administrator using the tools that potential attackers use in order to examine his or her defenses and find areas that require additional attention. In the military, there is a long and distinguished history of generals inspecting the troops under their command before battle, walking down the line checking out the equipment and mental preparedness of each soldier. In a similar way, the security administrator can use vulnerability analysis tools to inspect the units (host computers and network devices) under his or her command. A word of caution, though, should be heeded: many of these scanning and analysis tools have distinct signatures, and some Internet service providers (ISPs) scan for these signatures. If the ISP discovers someone using hacker tools, it can pull that person's access privileges. As such, it is probably best for administrators first to establish a working relationship with their ISPs and notify the ISP of their plans.
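One of the simplest scanning tools is a TCP connect scanner; a minimal sketch, assuming you only probe hosts you are authorized to test:

    # Illustrative TCP connect scan of the kind a vulnerability scanner
    # automates; host and port range are invented examples.
    import socket

    def scan(host, ports):
        """Return the ports on the host that accept a TCP connection."""
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(0.5)                    # keep each probe quick
                if s.connect_ex((host, port)) == 0:  # 0 means connect succeeded
                    open_ports.append(port)
        return open_ports

    print(scan("127.0.0.1", range(20, 30)))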
8. Discuss the process of encryption and define key terms. (10 marks) (Dec 2014)
Basic Encryption Definitions
To understand the fundamentals of cryptography, you must become familiar with the following definitions:
-Algorithm: The programmatic steps used to convert an unencrypted message into an encrypted sequence of bits that represent the message; sometimes used as a reference to the programs that enable the cryptographic processes
-Cipher or cryptosystem: An encryption method or process encompassing the algorithm, key(s) or cryptovariable(s), and procedures used to perform encryption and decryption
-Ciphertext or cryptogram: The unintelligible encrypted or encoded message resulting from an encryption
-Code: The process of converting components (words or phrases) of an unencrypted message into encrypted components
-Decipher: To decrypt or convert ciphertext into the equivalent plaintext
-Encipher: To encrypt or convert plaintext into the equivalent ciphertext
-Key or cryptovariable: The information used in conjunction with an algorithm to create the ciphertext from the plaintext or derive the plaintext from the ciphertext; the key can be a series of bits used by a computer program, or it can be a passphrase used by humans that is then converted into a series of bits for use in the computer program
-Keyspace: The entire range of values that can possibly be used to construct an individual key
-Link encryption: A series of encryptions and decryptions between a number of systems, wherein each system in a network decrypts the message sent to it and then reencrypts it using different keys and sends it to the next neighbor, and this process continues until the message reaches the final destination
-Plaintext or cleartext: The original unencrypted message that is encrypted; also the name given to the results of a message that has been successfully decrypted
-Steganography: The process of hiding messages; for example, messages can be hidden within the digital encoding of a picture or graphic
-Work factor: The amount of effort (usually in hours) required to perform cryptanalysis on an encoded message so that it may be decrypted when the key or algorithm (or both) are unknown

Cipher Methods
A plaintext can be encrypted through one of two methods, the bit stream method or the block cipher method. With the bit stream method, each bit in the plaintext is transformed into a cipher bit one bit at a time. In the case of the block cipher method, the message is divided into blocks, for example, sets of 8-, 16-, 32-, or 64-bit blocks, and then each block of plaintext bits is transformed into an encrypted block of cipher bits using an algorithm and a key. Bit stream methods most commonly use algorithm functions like the exclusive OR operation (XOR), whereas block methods can use substitution, transposition, XOR, or some combination of these operations, as described in the following sections. As you read on, you should note that most encryption methods using computer systems will operate on data at the level of its binary digits (bits), but some operations may operate at the byte or character level.
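A minimal sketch of the bit stream method using XOR, as mentioned above; the two key bytes are an invented example, not a secure cipher:

    # XOR keystream sketch: XORing each data byte with the repeating key
    # both encrypts and decrypts, since (b ^ k) ^ k == b.
    def xor_stream(data: bytes, key: bytes) -> bytes:
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    ciphertext = xor_stream(b"SECRET MESSAGE", b"\x5a\xc3")
    print(xor_stream(ciphertext, b"\x5a\xc3"))  # b'SECRET MESSAGE'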

UNIT 4: Cryptography

1. What are the fundamental differences between symmetric and asymmetric encryption? (6 marks) (June 2013)(Dec 2013)
Symmetric Encryption
Symmetric encryption is the oldest and best-known technique. A secret key, which can be a number, a word, or just a string of random letters, is applied to the text of a message to change the content in a particular way. This might be as simple as shifting each letter by a number of places in the alphabet. As long as both sender and recipient know the secret key, they can encrypt and decrypt all messages that use this key.
Asymmetric Encryption
The problem with secret keys is exchanging them over the Internet or a large network while preventing them from falling into the wrong hands. Anyone who knows the secret key can decrypt the message. One answer is asymmetric encryption, in which there are two related keys--a key pair. A public key is made freely available to anyone who might want to send you a message. A second, private key is kept secret, so that only you know it. Any message (text, binary files, or documents) that is encrypted by using the public key can only be decrypted by applying the same algorithm, but by using the matching private key. Any message that is encrypted by using the private key can only be decrypted by using the matching public key. This means that you do not have to worry about passing public keys over the Internet (the keys are supposed to be public). A problem with asymmetric encryption, however, is that it is slower than symmetric encryption. It requires far more processing power to both encrypt and decrypt the content of the message.
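A worked example of the letter-shifting idea mentioned above (a Caesar shift), where the shift amount plays the role of the shared secret key:

    # Caesar shift: move each letter `key` places, wrapping around the
    # alphabet; decryption is the same operation with the negated key.
    def shift(text: str, key: int) -> str:
        out = []
        for ch in text:
            if ch.isalpha():
                base = ord("A") if ch.isupper() else ord("a")
                out.append(chr((ord(ch) - base + key) % 26 + base))
            else:
                out.append(ch)
        return "".join(out)

    ct = shift("Attack at dawn", 3)  # 'Dwwdfn dw gdzq'
    print(shift(ct, -3))             # decrypt with the same shared key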

2. Explain the different categories of attackers on the cryptosystem. (8 marks) (June 2013)(Dec 2013) (5 marks) (Dec 2014)
Attacks on Cryptosystems
Historically, attempts to gain unauthorized access to secure communications have used brute force attacks, in which the ciphertext is repeatedly searched for clues that can lead to the algorithm's structure. These attacks are known as ciphertext attacks, and involve a hacker searching for a common text structure, wording, or syntax in the encrypted message that can enable him or her to calculate the number of each type of letter used in the message. This process, known as frequency analysis, can be used along with published frequency-of-occurrence patterns of various languages and can allow an experienced attacker to crack almost any code quickly if the individual has a large enough sample of the encoded text. To protect against this, modern algorithms attempt to remove the repetitive and predictable sequences of characters from the ciphertext.
Occasionally, an attacker may obtain duplicate texts, one in ciphertext and one in plaintext, which enable the individual to reverse-engineer the encryption algorithm in a known-plaintext attack scheme. Alternatively, attackers may conduct a selected-plaintext attack by sending potential victims a specific text that they are sure the victims will forward on to others. When the victim does encrypt and forward the message, it can be used in the attack if the attacker can acquire the outgoing encrypted version. At the very least, reverse engineering can usually lead the attacker to discover the cryptosystem that is being employed. Most publicly available encryption methods are generally released to the information and computer security communities for testing of the encryption algorithm's resistance to cracking. In addition, attackers are kept informed of which methods of attack have failed. Although the purpose of sharing this information is to develop a more secure algorithm, it has the danger of keeping attackers from wasting their time--that is, freeing them up to find new weaknesses in the cryptosystem or new, more challenging means of obtaining encryption keys.
In general, attacks on cryptosystems fall into four general categories: man-in-the-middle, correlation, dictionary, and timing. Although many of these attacks were discussed in Chapter 2, they are reiterated here in the context of cryptosystems and their impact on these systems.

3. Define the following terms: i) algorithm ii) key iii) plaintext iv) steganography v) work factor vi) keyspace. (10 marks) (June 2013) (Dec 2013) (5 marks) (Dec 2014)
-Algorithm: The programmatic steps used to convert an unencrypted message into an encrypted sequence of bits that represent the message; sometimes used as a reference to the programs that enable the cryptographic processes
-Key or cryptovariable: The information used in conjunction with an algorithm to create the ciphertext from the plaintext or derive the plaintext from the ciphertext; the key can be a series of bits used by a computer program, or it can be a passphrase used by humans that is then converted into a series of bits for use in the computer program
-Plaintext or cleartext: The original unencrypted message that is encrypted; also the name given to the results of a message that has been successfully decrypted
-Steganography: The process of hiding messages; for example, messages can be hidden within the digital encoding of a picture or graphic
-Work factor: The amount of effort (usually in hours) required to perform cryptanalysis on an encoded message so that it may be decrypted when the key or algorithm (or both) are unknown
-Keyspace: The entire range of values that can possibly be used to construct an individual key

4. Describe the terms: authentication, integrity, privacy, authorization and non-repudiation. (5 marks) (Dec 2012) (June 2013) (10 marks) (June 2014)
1. AUTHENTICATION: The assurance that the communicating entity is the one that it claims to be.
_ Peer Entity Authentication: Used in association with a logical connection to provide confidence in the identity of the entities connected.
_ Data Origin Authentication: In a connectionless transfer, provides assurance that the source of received data is as claimed.
2. ACCESS CONTROL: The prevention of unauthorized use of a resource (i.e., this service controls who can have access to a resource, under what conditions access can occur, and what those accessing the resource are allowed to do).
3. DATA CONFIDENTIALITY: The protection of data from unauthorized disclosure.
_ Connection Confidentiality: The protection of all user data on a connection.
_ Connectionless Confidentiality: The protection of all user data in a single data block.
_ Selective-Field Confidentiality: The confidentiality of selected fields within the user data on a connection or in a single data block.
_ Traffic Flow Confidentiality: The protection of the information that might be derived from observation of traffic flows.
4. DATA INTEGRITY: The assurance that data received are exactly as sent by an authorized entity (i.e., contain no modification, insertion, deletion, or replay).

5. Discuss the man-in-the-middle attack. (7 marks) (June 2013)(Dec 2013)(10 marks)(Dec 2014)
Man-in-the-Middle Attack
A man-in-the-middle attack, as discussed in Chapter 2, is designed to intercept the transmission of a public key or even to insert a known key structure in place of the requested public key. Thus, attackers attempt to place themselves between the sender and receiver, and once they've intercepted the request for key exchanges, they send each participant a valid public key, which is known only to them. From the perspective of the victims of such attacks, their encrypted communication appears to be occurring normally, but in fact the attacker is receiving each encrypted message and decoding it (with the key given to the sending party), and then encrypting and sending it to the originally intended recipient. Establishment of public keys with digital signatures can prevent the traditional man-in-the-middle attack, as the attacker cannot duplicate the signatures.


UNIT 5 : Authentication Applications


1. Discuss active security attacks. (10 marks)(Dec 2012)(7 marks)(Dec 2013)
Security Attacks:
Security attacks, used both in X.800 and RFC 2828, are classified as passive attacks and
active attacks.
A passive attack attempts to learn or make use of information from the system but does not
affect system resources.
An active attack attempts to alter system resources or affect their operation. Passive attacks
are in the nature of eavesdropping on, or monitoring of, transmissions. The goal of the
opponent is to obtain information that is being transmitted. Two types of passive attacks are
release of message contents and traffic analysis.

The release of message contents is easily understood. A telephone conversation, an electronic mail message, and a transferred file may contain sensitive or confidential information. We would like to prevent an opponent from learning the contents of these transmissions.
A second type of passive attack, traffic analysis, is subtler (Figure 1.3b). Suppose that we had
a way of masking the contents of messages or other information traffic so that opponents,
even if they captured the message, could not extract the information from the message. The
common technique for masking contents is encryption. If we had encryption protection in
place, an opponent might still be able to observe the pattern of these messages. The opponent
could determine the location and identity of communicating hosts and could observe the
frequency and length of messages being exchanged. This information might be useful in
guessing the nature of the communication that was taking place. Passive attacks are very
difficult to detect because they do not involve any alteration of the data. Typically, the
message traffic is sent and received in an apparently normal fashion and neither the sender
nor receiver is aware that a third party has read the messages or observed the traffic pattern.
However, it is feasible to prevent the success of these attacks, usually by means of
encryption. Thus, the emphasis in dealing with passive attacks is on prevention rather than
detection.
Active Attacks:
Active attacks involve some modification of the data stream or the creation of a false stream
and can be subdivided into four categories: Masquerade, Replay, Modification of messages,
and Denial of service.
2. Describe briefly the security attacks and specific security mechanisms covered by X.800. (5 marks)(Jun 2013)(7 marks) (Dec 2013)
Security attacks, used both in X.800 and RFC 2828, are classified as passive attacks and
active attacks.
A passive attack attempts to learn or make use of information from the system but does not
affect system resources.
An active attack attempts to alter system resources or affect their operation. Passive attacks
are in the nature of eavesdropping on, or monitoring of, transmissions. The goal of the
opponent is to obtain information that is being transmitted. Two types of passive attacks are
release of message contents and traffic analysis.


The release of message contents is easily understood. A telephone conversation, an electronic mail message, and a transferred file may contain sensitive or confidential information. We would like to prevent an opponent from learning the contents of these transmissions.
A second type of passive attack, traffic analysis, is subtler (Figure 1.3b). Suppose that we had
a way of masking the contents of messages or other information traffic so that opponents,
even if they captured the message, could not extract the information from the message. The
common technique for masking contents is encryption. If we had encryption protection in
place, an opponent might still be able to observe the pattern of these messages. The opponent
could determine the location and identity of communicating hosts and could observe the
frequency and length of messages being exchanged. This information might be useful in
guessing the nature of the communication that was taking place. Passive attacks are very
difficult to detect because they do not involve any alteration of the data. Typically, the
message traffic is sent and received in an apparently normal fashion and neither the sender
nor receiver is aware that a third party has read the messages or observed the traffic pattern.
However, it is feasible to prevent the success of these attacks, usually by means of
encryption. Thus, the emphasis in dealing with passive attacks is on prevention rather than
detection.
3. Describe the authentication procedures covered by X.800. (10 marks)(Jun 2014)
X.800 divides these services into five categories and fourteen specific services, as shown in the table below.
Table: Security Services (X.800)
1. AUTHENTICATION: The assurance that the communicating entity is the one that it claims to be.
_ Peer Entity Authentication: Used in association with a logical connection to provide
confidence in the identity of the entities connected.
_ Data Origin Authentication: In a connectionless transfer, provides assurance that the source
of received data is as claimed.
2. ACCESS CONTROL: The prevention of unauthorized use of a resource (i.e., this service
controls who can have access to a resource, under what conditions access can occur, and what
those accessing the resource are allowed to do).
3. DATA CONFIDENTIALITY: The protection of data from unauthorized disclosure.
_ Connection Confidentiality: The protection of all user data on a connection.
_ Connectionless Confidentiality: The protection of all user data in a single data block.
_ Selective-Field Confidentiality: The confidentiality of selected fields within the user data on a connection or in a single data block.
_ Traffic Flow Confidentiality: The protection of the information that might be derived from observation of traffic flows.
4. DATA INTEGRITY: The assurance that data received are exactly as sent by an authorized
entity (i.e., contain no modification, insertion, deletion, or replay).
_ Connection Integrity with Recovery: Provides for the integrity of all user data on a
connection and detects any modification, insertion, deletion, or replay of any data
within an entire data sequence, with recovery attempted.
4. Explain the general X.509 public key certificate. (6 marks)(Dec 2013)(8 marks)(Dec 2014)
X.509
The Standardization Process:

The decision of which RFCs become Internet standards is made by the IESG, on the
recommendation of the IETF. To become a standard, a specification must meet the following
criteria:
_ Be stable and well understood
_ Be technically competent
_ Have multiple, independent, and interoperable implementations with substantial operational
experience
_ Enjoy significant public support
_ Be recognizably useful in some or all parts of the Internet
The key difference between these criteria and those used for international standards from ITU
is the emphasis here on operational experience. The left-hand side of Figure 1.1 shows the series of steps, called the standards track, that a specification goes through to become a
standard; this process is defined in RFC 2026. The steps involve increasing amounts of
scrutiny and testing. At each step, the IETF must make a recommendation for advancement
of the protocol, and the IESG must ratify it. The process begins when the IESG approves the
publication of an Internet Draft document as an RFC
with the status of Proposed Standard.
Figure 1.1: Internet RFC publication process
The white boxes in the diagram represent temporary states, which should be occupied for the
minimum practical time. However, a document must remain a Proposed Standard for at least
six months and a Draft Standard for at least four months to allow time for review and
comment. The gray boxes represent long-term states that may be occupied for years.
For a specification to be advanced to Draft Standard status, there must be at least two
independent and interoperable implementations from which adequate operational experience
has been obtained. After significant implementation and operational experience has been
obtained, a specification may be elevated to Internet Standard. At this point, the Specification
is assigned an STD number as well as an RFC number. Finally, when a protocol becomes
obsolete, it is assigned to the Historic state.
5. Compare active and passive attacks. (5 marks)(Jun 2013)(10 marks)(Jun 2013)(6 marks)(Dec 2014)
Security Attacks:
Security attacks, used both in X.800 and RFC 2828, are classified as passive attacks and active attacks.
A passive attack attempts to learn or make use of information from the system but does not affect system resources.
An active attack attempts to alter system resources or affect their operation.
Passive attacks are in the nature of eavesdropping on, or monitoring of, transmissions. The goal of the opponent is to obtain information that is being transmitted. Two types of passive attacks are release of message contents and traffic analysis. The release of message contents is easily understood. A telephone conversation, an electronic mail message, and a transferred file may contain sensitive or confidential information. We would like to prevent an opponent from learning the contents of these transmissions.
A second type of passive attack, traffic analysis, is subtler (Figure 1.3b). Suppose that we had a way of masking the contents of messages or other information traffic so that opponents, even if they captured the message, could not extract the information from the message. The common technique for masking contents is encryption. If we had encryption protection in place, an opponent might still be able to observe the pattern of these messages. The opponent could determine the location and identity of communicating hosts and could observe the frequency and length of messages being exchanged. This information might be useful in guessing the nature of the communication that was taking place. Passive attacks are very difficult to detect because they do not involve any alteration of the data. Typically, the message traffic is sent and received in an apparently normal fashion and neither the sender nor receiver is aware that a third party has read the messages or observed the traffic pattern. However, it is feasible to prevent the success of these attacks, usually by means of encryption. Thus, the emphasis in dealing with passive attacks is on prevention rather than detection.
Active Attacks:
Active attacks involve some modification of the data stream or the creation of a false stream and can be subdivided into four categories: masquerade, replay, modification of messages, and denial of service.
A masquerade takes place when one entity pretends to be a different entity (Figure 1.4a). A masquerade attack usually includes one of the other forms of active attack. For example, authentication sequences can be captured and replayed after a valid authentication sequence has taken place, thus enabling an authorized entity with few privileges to obtain extra privileges by impersonating an entity that has those privileges.
Replay involves the passive capture of a data unit and its subsequent retransmission to produce an unauthorized effect.
Modification of messages simply means that some portion of a legitimate message is altered, or that messages are delayed or reordered, to produce an unauthorized effect (Figure 1.4c). For example, a message meaning "Allow John Smith to read confidential file accounts" is modified to mean "Allow Fred Brown to read confidential file accounts."
The denial of service prevents or inhibits the normal use or management of communications facilities (Figure 1.4d). This attack may have a specific target; for example, an entity may suppress all messages directed to a particular destination (e.g., the security audit service). Another form of service denial is the disruption of an entire network, either by disabling the network or by overloading it with messages so as to degrade performance.

6. Explain Kerberos version 4 message exchanges. (10 marks) (Dec 2012)(6 marks Dec 2014)

Kerberos:
Kerberos is an authentication service developed by MIT. The problem that Kerberos addresses is this: Assume an open distributed environment in which users at workstations wish to access services on servers distributed throughout the network. We would like for servers to be able to restrict access to authorized users and to be able to authenticate requests for service. In this environment, a workstation cannot be trusted to identify its users correctly to network services. In particular, the following three threats exist:
_ A user may gain access to a particular workstation and pretend to be another user operating from that workstation.
_ A user may alter the network address of a workstation so that the requests sent from the altered workstation appear to come from the impersonated workstation.
_ A user may eavesdrop on exchanges and use a replay attack to gain entrance to a server or to disrupt operations.
In any of these cases, an unauthorized user may be able to gain access to services and data that he or she is not authorized to access. Rather than building in elaborate authentication protocols at each server, Kerberos provides a centralized authentication server whose function is to authenticate users to servers and servers to users. Unlike most other authentication schemes, Kerberos relies exclusively on symmetric encryption, making no use of public-key encryption. Two versions of Kerberos are in common use. Version 4 implementations still exist. Version 5 corrects some of the security deficiencies of version 4 and has been issued as a proposed Internet Standard (RFC 1510).
Today the more commonly used architecture is a distributed architecture consisting of dedicated user workstations (clients) and distributed or centralized servers. In this environment, three approaches to security can be envisioned:
_ Rely on each individual client workstation to assure the identity of its user or users and rely on each server to enforce a security policy based on user identification (ID).
_ Require that client systems authenticate themselves to servers, but trust the client system concerning the identity of its user.
_ Require the user to prove his or her identity for each service invoked. Also require that servers prove their identity to clients.
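As a toy sketch of the simplest version 4 style dialogue (client asks the authentication server for a ticket to a service, then presents it), the following uses XOR as a stand-in for DES; every name and key is invented:

    # Toy Kerberos-style exchange: the AS issues a ticket sealed under the
    # server's key. XOR stands in for DES; this is illustration, not Kerberos.
    def seal(data: bytes, key: bytes) -> bytes:  # stand-in for E(K, m)
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    K_V = b"server-secret"  # key shared by the AS and server V

    # (1) C -> AS: ID_C || P_C || ID_V   (client requests a ticket for V)
    # (2) AS -> C: Ticket                (AS returns the sealed ticket)
    ticket = seal(b"ID_C|ADDR_C|ID_V", K_V)

    # (3) C -> V: ID_C || Ticket        (V unseals the ticket with its own key)
    print(seal(ticket, K_V))            # b'ID_C|ADDR_C|ID_V'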
7. List out differences between Kerberos version 4 and version 5. (10 marks)(Jun 2013)
Kerberos Version 4:
Version 4 of Kerberos makes use of DES to provide the authentication service. Viewing the protocol as a whole, it is difficult to see the need for the many elements contained therein. Therefore, we adopt a strategy used by Bill Bryant of Project Athena and build up to the full protocol by looking first at several hypothetical dialogues. Each successive dialogue adds additional complexity to counter security vulnerabilities revealed in the preceding dialogue.
The Version 4 Authentication Dialogue:
The first problem is the lifetime associated with the ticket-granting ticket. If this lifetime is very short (e.g., minutes), then the user will be repeatedly asked for a password. If the lifetime is long (e.g., hours), then an opponent has a greater opportunity for replay. The second problem is that there may be a requirement for servers to authenticate themselves to users. Without such authentication, an opponent could sabotage the configuration so that messages to a server were directed to another location. The false server would then be in a position to act as a real server and capture any information from the user and deny the true service to the user.
Kerberos Version 5 is specified in RFC 1510 and provides a number of improvements over version 4.
Differences between Versions 4 and 5:
Version 5 is intended to address the limitations of version 4 in two areas: environmental shortcomings and technical deficiencies. Let us briefly summarize the improvements in each area. Kerberos Version 4 was developed for use within the Project Athena environment and, accordingly, did not fully address the need to be of general purpose. This led to the following environmental shortcomings:
Encryption system dependence: Version 4 requires the use of DES; export restrictions on DES, as well as doubts about the strength of DES, were of concern. In version 5, ciphertext is tagged with an encryption type identifier, so that any encryption technique may be used.

UNIT 6 : Electronic Mail Security


1. Explain the PGP message generation and reception processes. .(5 marks)(Jun 2013)(10
marks)(Jun 2013) (7 marks)( Dec2013)
PGP provides the following services:
1. Digital signature (DSS/SHA or RSA/SHA): A hash code of a message is created using SHA-1. This message digest is encrypted using DSS or RSA with the sender's private key and included with the message.
2. Message encryption (CAST or IDEA or three-key triple DES, with Diffie-Hellman or RSA): A message is encrypted using CAST-128 or IDEA or 3DES with a one-time session key generated by the sender. The session key is encrypted using Diffie-Hellman or RSA with the recipient's public key and included with the message.
3. Compression (ZIP): A message may be compressed, for storage or transmission, using ZIP.
4. Email compatibility (radix-64 conversion): To provide transparency for email applications, an encrypted message may be converted to an ASCII string using radix-64 conversion.
5. Segmentation: To accommodate maximum message size limitations, PGP performs segmentation and reassembly.
Authentication:
Figure 1.1a illustrates the digital signature service provided by PGP. Digital signatures provide the ability to verify the author, date, and time of a signature, to authenticate message contents, and to be verified by third parties to resolve disputes; hence PGP includes the authentication function with additional capabilities. The sequence is as follows:
1. The sender creates a message.
2. SHA-1 is used to generate a 160-bit hash code of the message.
3. The hash code is encrypted with RSA using the sender's private key, and the result is prepended to the message.
4. The receiver uses RSA with the sender's public key to decrypt and recover the hash code.

5. The receiver generates a new hash code for the message and compares it with the decrypted hash code. If the two match, the message is accepted as authentic. (Figure 1.1 shows the PGP cryptographic functions.)
The combination of SHA-1 and RSA provides an effective digital signature scheme. Because of the strength of RSA, the recipient is assured that only the possessor of the matching private key can generate the signature. Because of the strength of SHA-1, the recipient is assured that no one else could generate a new message that matches the hash code and, hence, the signature of the original message. Although signatures normally are found attached to the message or file, this is not always the case: detached signatures are also supported. A detached signature may be stored and transmitted separately from the message it signs. Detached signatures are useful in several contexts:
_ A user may wish to maintain a separate signature log of all messages sent or received.
_ A detached signature of an executable program can detect subsequent virus infection.
_ A detached signature can be used when more than one party must sign a document, such as a legal contract. Each person's signature is independent and therefore is applied only to the document. Otherwise, signatures would have to be nested, with the second signer signing both the document and the first signature, and so on.

2. Describe the steps involved in providing authentication and confidentiality by PGP. (10 marks)(Dec 2012) (6 marks)(Dec 2014)
Confidentiality:
Confidentiality is provided by encrypting messages to be transmitted or to be stored locally as files. In both cases, the symmetric encryption algorithm CAST-128 (Carlisle Adams and Stafford Tavares) may be used. Alternatively, IDEA (International Data Encryption Algorithm) or 3DES (Data Encryption Standard) may be used. The 64-bit cipher feedback (CFB) mode is used. As always, one must address the problem of key distribution. In PGP, each symmetric key is used only once. That is, a new key is generated as a random 128-bit number for each message. Thus, although this is referred to in the documentation as a session key, it is in reality a one-time key. Because it is to be used only once, the session key is bound to the message and transmitted with it. To protect the key, it is encrypted with the receiver's public key. Figure 1.1b illustrates the sequence, which can be described as follows:
1. The sender creates a message and a random 128-bit number to be used as a session key for this message only.
2. The message is encrypted using CAST-128 (or IDEA or 3DES) with the session key.
3. The session key is encrypted with RSA using the recipient's public key and is prepended to the message.
4. The receiver uses RSA with its private key to decrypt and recover the session key.
5. The session key is used to decrypt the message.
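A numeric toy of the sign-and-verify flow in the authentication steps above, using the small textbook RSA key (n = 3233, e = 17, d = 2753); real PGP uses full-size keys, so this is illustration only:

    # Toy RSA signature over a SHA-1 digest, mirroring the PGP
    # authentication steps; the tiny key is a classic textbook example.
    import hashlib

    n, e, d = 3233, 17, 2753  # toy RSA modulus and exponents

    msg = b"my message"
    digest = int.from_bytes(hashlib.sha1(msg).digest(), "big") % n  # fit under n
    signature = pow(digest, d, n)  # "encrypt" the hash with the private key

    # Receiver: recover the hash with the public key and compare.
    recovered = pow(signature, e, n)
    assert recovered == int.from_bytes(hashlib.sha1(msg).digest(), "big") % n
    print("signature verifies")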


3. Discuss limitations of SMTP/RFC 822 and how MIME overcomes these limitations. (6 marks) (Dec 2014)
Multipurpose Internet Mail Extensions (MIME):
MIME is an extension to the RFC 822 framework that is intended to address some of the problems and limitations of the use of SMTP (Simple Mail Transfer Protocol), or some other mail transfer protocol, and RFC 822 for electronic mail. The following are the limitations of the SMTP/822 scheme:
1. SMTP cannot transmit executable files or other binary objects. A number of schemes are in use for converting binary files into a text form that can be used by SMTP mail systems, including the popular UNIX UUencode/UUdecode scheme. However, none of these is a standard or even a de facto standard.
2. SMTP cannot transmit text data that includes national language characters, because these are represented by 8-bit codes with values of 128 decimal or higher, and SMTP is limited to 7-bit ASCII.
3. SMTP servers may reject mail messages over a certain size.
4. SMTP gateways that translate between ASCII and the character code EBCDIC do not use a consistent set of mappings, resulting in translation problems.
5. SMTP gateways to X.400 electronic mail networks cannot handle nontextual data included in X.400 messages.
6. Some SMTP implementations do not adhere completely to the SMTP standards defined in RFC 821. Common problems include:
-Deletion, addition, or reordering of carriage return and linefeed
-Truncating or wrapping lines longer than 76 characters
-Removal of trailing white space (tab and space characters)
-Padding of lines in a message to the same length
-Conversion of tab characters into multiple space characters

4. Explain different MIME content types. (5 marks)(Jun 2013)(7 marks) (Dec 2013) (10 marks) (Dec 2012)
MIME-Version: Must have the parameter value 1.0. This field indicates that the message conforms to RFCs 2045 and 2046.
Content-Type: Describes the data contained in the body with sufficient detail that the receiving user agent can pick an appropriate agent or mechanism to represent the data to the user or otherwise deal with the data in an appropriate manner.
Content-Transfer-Encoding: Indicates the type of transformation that has been used to represent the body of the message in a way that is acceptable for mail transport.
Content-ID: Used to identify MIME entities uniquely in multiple contexts.
Content-Description: A text description of the object with the body; this is useful when the object is not readable (e.g., audio data).
MIME Content Types:
The bulk of the MIME specification is concerned with the definition of a variety of content types. This reflects the need to provide standardized ways of dealing with a wide variety of information representations in a multimedia environment. Table 1.3 lists the content types specified in RFC 2046. There are seven different major types of content and a total of 15 subtypes. In general, a content type declares the general type of data, and the subtype specifies a particular format for that type of data.
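To make the header fields above concrete, the following sketch builds a minimal MIME message with Python's standard email package; the addresses are invented:

    # Construct the MIME headers described above with the stdlib email
    # package; MIMEText fills in MIME-Version, Content-Type, and
    # Content-Transfer-Encoding automatically.
    from email.mime.text import MIMEText

    msg = MIMEText("This is the message body.", "plain", "us-ascii")
    msg["From"] = "alice@example.com"   # example address
    msg["To"] = "bob@example.com"
    msg["Subject"] = "Report"
    print(msg.as_string())
    # Prints headers such as MIME-Version: 1.0 and
    # Content-Type: text/plain; charset="us-ascii", followed by the body.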

5. Explain S/MIME certificate processing method. (10 marks)(Jun 2013)

S/MIME Functionality:
In terms of general functionality, S/MIME is very similar to PGP. Both offer the ability to
sign and/or encrypt messages. In this subsection, we briefly summarize S/MIME capability.
We then look in more detail at this capability by examining message formats and message
preparation.
Functions
S/MIME provides the following functions:
Enveloped data: This consists of encrypted content of any type and encrypted-content
encryption keys for one or more recipients.
Signed data: A digital signature is formed by taking the message digest of the content to be
signed and then encrypting that with the private key of the signer. The content plus signature
are then encoded using base64 encoding. A signed data message can only be viewed by a
recipient with S/MIME capability.
Clear-signed data: As with signed data, a digital signature of the content is formed.
However, in this case, only the digital signature is encoded using base64. As a result, recipients without S/MIME capability can view the message content, although they cannot verify the signature.
Signed and enveloped data: Signed-only and encrypted-only entities may be nested, so that
encrypted data may be signed and signed data or clear-signed data may be encrypted.
Cryptographic Algorithms:
Table 1.6 summarizes the cryptographic algorithms used in S/MIME. S/MIME uses the
following terminology, taken from RFC 2119 to specify the requirement level:
Must: The definition is an absolute requirement of the specification. An implementation
must include this feature or function to be in conformance with the specification.
Should: There may exist valid reasons in particular circumstances to ignore this feature or
function, but it is recommended that an implementation include the feature or function.


UNIT 7: IP Security
1. Mention the application of IPsec. (10 marks) (June 2013) (Dec 2013) ( 5 marks) (Dec
2014)
Applications of IPSec:
IPSec provides the capability to secure communications across a LAN, across private and
public WANs, and across the Internet. Examples of its use include the following:
-Secure branch office connectivity over the Internet: A company can build a secure virtual
private network over the Internet or over a public WAN. This enables a business to rely
heavily on the Internet and reduce its need for private networks, saving costs and network
management overhead.
-Secure remote access over the Internet: An end user whose system is equipped with IP
security protocols can make a local call to an Internet service provider (ISP) and gain secure
access to a company network. This reduces the cost of toll charges for traveling employees
and telecommuters.
-Establishing extranet and intranet connectivity with partners: IPSec can be used to secure
communication with other organizations, ensuring authentication and confidentiality and
providing a key exchange mechanism.
-Enhancing electronic commerce security: Even though some Web and electronic commerce
applications have built-in security protocols, the use of IPSec enhances that security.
The principal feature of IPSec that enables it to support these varied applications is that it
can encrypt and/or authenticate all traffic at the IP level. Thus, all distributed applications, including remote logon, client/server, e-mail, file transfer, Web access, and so on, can be secured. Figure 1.1 is a typical scenario of IPSec usage. An organization maintains LANs at dispersed locations. Nonsecure IP traffic is conducted on each LAN. For traffic offsite, through some sort of private or public WAN, IPSec protocols are used. These protocols
operate in networking devices, such as a router or firewall, that connect each LAN to the
outside world. The IPSec networking device will typically encrypt and compress all traffic
going into the WAN, and decrypt and decompress traffic coming from the WAN; these
operations are transparent to workstations and servers on the LAN. Secure transmission is
also possible with individual users who dial into the WAN. Such user workstations must
implement the IPSec protocols to provide security.
2. Explain the security association selections that determine a security policy database
entry.( 6 marks)( Dec 2013)(8 marks)(Dec 2014)
Security Associations:
A key concept that appears in both the authentication and confidentiality mechanisms for IP is
the security association (SA). An association is a one-way relationship between a sender and a
receiver that affords security services to the traffic carried on it. If a peer relationship is
needed, for two-way secure exchange, then two security associations are required. Security
services are afforded to an SA for the use of AH or ESP, but not both. A security association
is uniquely identified by three parameters:
Security Parameters Index (SPI): A bit string assigned to this SA and having local
significance only. The SPI is carried in AH and ESP headers to enable the receiving system
to select the SA under which a received packet will be processed.
IP Destination Address: Currently, only unicast addresses are allowed; this is the address of
the destination endpoint of the SA, which may be an end user system or a network system
such as a firewall or router.
Security Protocol Identifier: This indicates whether the association is an AH or ESP security
association.
Hence, in any IP packet, the security association is uniquely identified by the Destination
Address in the IPv4 or IPv6 header and the SPI in the enclosed extension header (AH or
ESP).
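As a rough illustration, a nominal Security Association Database can be modeled as a table keyed by exactly these three values (a hedged sketch, not a real IPSec stack; all names and parameter values are made up):

AH, ESP = "AH", "ESP"

sad = {
    # (SPI, destination IP, security protocol) -> SA parameters
    (0x1001, "192.0.2.10", AH):  {"auth_alg": "HMAC-MD5-96", "key": b"..."},
    (0x2002, "192.0.2.10", ESP): {"enc_alg": "3DES-CBC", "key": b"..."},
}

def lookup_sa(spi, dst_ip, proto):
    # Select the SA under which a received packet will be processed.
    return sad.get((spi, dst_ip, proto))

print(lookup_sa(0x1001, "192.0.2.10", AH))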
SA Parameters:
In each IPSec implementation, there is a nominal Security Association Database that defines
the parameters associated with each SA. A security association is normally defined by
the following parameters:
-Sequence Number Counter: A 32-bit value used to generate the Sequence Number field in
AH or ESP headers.
-Sequence Counter Overflow: A flag indicating whether overflow of the Sequence Number
Counter should generate an auditable event and prevent further transmission of packets on
this SA (required for all implementations).
-Anti-Replay Window: Used to determine whether an inbound AH or ESP packet is a replay.
-AH Information: Authentication algorithm, keys, key lifetimes, and related parameters being
used with AH (required for AH implementations).
3. Describe SA parameters and SA selectors in detail.(5 marks)( June 2013)(10 marks)
(Dec 2013)(June 2014)
SA Selectors:
IPSec provides the user with considerable flexibility in the way in which IPSec services are
applied to IP traffic. SAs can be combined in a number of ways to yield the desired user
configuration. Furthermore, IPSec provides a high degree of granularity in discriminating
between traffic that is afforded IPSec protection and traffic that is allowed to bypass IPSec, in
the former case relating IP traffic to specific SAs. The means by which IP traffic is related to
specific SAs (or no SA in the case of traffic allowed to bypass IPSec) is the nominal Security
Policy Database (SPD). In its simplest form, an SPD contains entries, each of which defines a
subset of IP traffic and points to an SA for that traffic. In more complex environments, there
may be multiple entries that potentially relate to a single SA or multiple SAs associated with
a single SPD entry. The reader is referred to the relevant IPSec documents for a full
discussion.
Each SPD entry is defined by a set of IP and upper-layer protocol field values, called
selectors. In effect, these selectors are used to filter outgoing traffic in order to map it into a
particular SA. Outbound processing obeys the following general sequence for each IP packet:
-Compare the values of the appropriate fields in the packet (the selector fields) against the
SPD to find a matching SPD entry, which will point to zero or more SAs.
-Determine the SA, if any, for this packet and its associated SPI.
-Do the required IPSec processing (i.e., AH or ESP processing).
The following selectors determine an SPD entry:
-Destination IP Address: This may be a single IP address, an enumerated list or range of
addresses, or a wildcard (mask) address. The latter two are required to support more than one
destination system sharing the same SA (e.g., behind a firewall).
-Source IP Address: This may be a single IP address, an enumerated list or range of
addresses, or a wildcard (mask) address. The latter two are required to support more than one
source system sharing the same SA (e.g., behind a firewall).
-User ID: A user identifier from the operating system. This is not a field in the IP or
upper-layer headers but is available if IPSec is running on the same operating system as the user.
-Data Sensitivity Level: Used for systems providing information flow security (e.g., Secret or
Unclassified).
-Transport Layer Protocol: Obtained from the IPv4 Protocol or IPv6 Next Header field. This
may be an individual protocol number, a list of protocol numbers, or a range of protocol
numbers.
-Source and Destination Ports: These may be individual TCP or UDP port values, an
enumerated list of ports, or a wildcard port.
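A minimal sketch of this outbound lookup, with hypothetical field names and a two-entry SPD (illustrative only, not a real implementation):

import ipaddress

spd = [
    {"dst": "10.0.0.0/8", "proto": "tcp", "dport": 25,   "action": "ESP-SA-1"},
    {"dst": "0.0.0.0/0",  "proto": "any", "dport": None, "action": "BYPASS"},
]

def match_entry(pkt, entry):
    # A packet matches when every selector in the entry covers it.
    if entry["proto"] not in ("any", pkt["proto"]):
        return False
    if entry["dport"] not in (None, pkt["dport"]):
        return False
    return ipaddress.ip_address(pkt["dst"]) in ipaddress.ip_network(entry["dst"])

def outbound_lookup(pkt):
    # First matching entry wins, mirroring the general sequence above.
    for entry in spd:
        if match_entry(pkt, entry):
            return entry["action"]
    return "DISCARD"

print(outbound_lookup({"dst": "10.1.2.3", "proto": "tcp", "dport": 25}))  # ESP-SA-1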
4. Explain IPsec and ESP format. (5 marks)(Jun 2013)(10 marks)(Dec 2013)
The Authentication Header consists of the following fields:
-Next Header (8 bits): Identifies the type of header immediately following this header.
-Payload Length (8 bits): Length of Authentication Header in 32-bit words, minus 2.
For example, the default length of the authentication data field is 96 bits, or three 32-bit
words. With a three-word fixed header, there are a total of six words in the header, and the
Payload Length field has a value of 4.
-Reserved (16 bits): For future use.
-Security Parameters Index (32 bits): Identifies a security association.
-Sequence Number (32 bits): A monotonically increasing counter value, discussed later.
-Authentication Data (variable): A variable-length field (must be an integral number
of 32-bit words) that contains the Integrity Check Value (ICV), or MAC, for this packet.
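The arithmetic in the Payload Length example can be checked with a few lines of Python (a sketch with made-up field values, using the layout just described):

import struct

next_header = 4            # e.g., IP-in-IP; value chosen for illustration
icv = bytes(12)            # 96-bit Integrity Check Value: three 32-bit words
payload_len = (3 + len(icv) // 4) - 2   # header length in 32-bit words, minus 2
spi = 0x1001
seq = 1

# Next Header, Payload Length, Reserved, SPI, Sequence Number, then the ICV.
ah = struct.pack("!BBHII", next_header, payload_len, 0, spi, seq) + icv
print(len(ah), payload_len)   # 24 bytes total; Payload Length field = 4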
5. Describe Transport and tunnel modes used for IPsec AH authentication, bringing out their scope relevant to IPv4. (3 marks)(Dec 2014)(19 marks)(Jun 2012)(June 2013)
The IPSec specification is spread across several RFCs:
-RFC 2401: An overview of a security architecture
-RFC 2402: Description of a packet authentication extension to IPv4 and IPv6
-RFC 2406: Description of a packet encryption extension to IPv4 and IPv6
-RFC 2408: Specification of key management capabilities
Support for these features is mandatory for IPv6 and optional for IPv4. In both cases, the security features are implemented as extension headers that follow the main IP header. The extension header for authentication is known as the Authentication Header; that for encryption is known as the Encapsulating Security Payload (ESP) header. In addition to these four RFCs, a number of additional drafts have been published by the IP Security Protocol Working Group set up by the IETF. The documents are divided into seven groups, as depicted in Figure 1.2 (RFC 2401).
-Architecture: Covers the general concepts, security requirements, definitions, and mechanisms defining IPSec technology.
-Encapsulating Security Payload (ESP): Covers the packet format and general issues related to the use of the ESP for packet encryption and, optionally, authentication.
-Authentication Header (AH): Covers the packet format and general issues related to the use of AH for packet authentication.
-Encryption Algorithm: A set of documents that describe how various encryption algorithms are used for ESP.

6. Mention important features of Oakley algorithm. (10 marks) (June 2013) (Dec 2013)
-Oakley Key Determination Protocol: Oakley is a key exchange protocol based on the Diffie-Hellman algorithm but providing added security. Oakley is generic in that it does not dictate specific formats.
-Internet Security Association and Key Management Protocol (ISAKMP): ISAKMP provides a framework for Internet key management and provides the specific protocol support, including formats, for negotiation of security attributes. ISAKMP by itself does not dictate a specific key exchange algorithm; rather, ISAKMP consists of a set of message types that enable the use of a variety of key exchange algorithms. Oakley is the specific key exchange algorithm mandated for use with the initial version of ISAKMP.
Oakley Key Determination Protocol:
Oakley is a refinement of the Diffie-Hellman key exchange algorithm. Recall that Diffie-Hellman involves the following interaction between users A and B. There is prior agreement on two global parameters: q, a large prime number, and a, a primitive root of q. A selects a random integer XA as its private key, and transmits to B its public key YA = a^XA mod q. Similarly, B selects a random integer XB as its private key and transmits to A its public key YB = a^XB mod q. Each side can now compute the secret session key K = (YB)^XA mod q = (YA)^XB mod q = a^(XA*XB) mod q.
The Diffie-Hellman algorithm has two attractive features:
-Secret keys are created only when needed. There is no need to store secret keys for a long period of time, exposing them to increased vulnerability.
-The exchange requires no preexisting infrastructure other than an agreement on the global parameters. However, there are a number of weaknesses to Diffie-Hellman.
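A toy Diffie-Hellman run in Python makes the computation above concrete (textbook-sized numbers only; real Oakley groups use very large primes):

q = 353                 # global prime
a = 3                   # primitive root of q

xa, xb = 97, 233        # private keys chosen by A and B
ya = pow(a, xa, q)      # A's public key YA = a^XA mod q, sent to B
yb = pow(a, xb, q)      # B's public key YB = a^XB mod q, sent to A

k_a = pow(yb, xa, q)    # A computes (YB)^XA mod q
k_b = pow(ya, xb, q)    # B computes (YA)^XB mod q
assert k_a == k_b       # both sides derive the same session key
print(k_a)              # 160 for these values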

UNIT 8: Web Security


1. Explain the parameters that define session state and connection state in SSL. (7 marks)(
Dec2013)
A session state is defined by the following parameters (definitions taken from the SSL
specification):
-Session identifier: An arbitrary byte sequence chosen by the server to identify an active or
resumable session state.
-Peer certificate: An X509.v3 certificate of the peer. This element of the state may be null.
-Compression method: The algorithm used to compress data prior to encryption.
-Cipher spec: Specifies the bulk data encryption algorithm (such as null, AES, etc.) and a
hash algorithm (such as MD5 or SHA-1) used for MAC calculation. It also defines
cryptographic attributes such as the hash_size.
-Master secret: 48-byte secret shared between the client and server.
-Is resumable: A flag indicating whether the session can be used to initiate new connections.

A connection state is defined by the following parameters:
-Server and client random: Byte sequences that are chosen by the server and client for each connection.
-Server write MAC secret: The secret key used in MAC operations on data sent by the server.
-Client write MAC secret: The secret key used in MAC operations on data sent by the client.
-Server write key: The conventional encryption key for data encrypted by the server and decrypted by the client.
-Client write key: The conventional encryption key for data encrypted by the client and decrypted by the server.
-Initialization vectors: When a block cipher in CBC mode is used, an initialization vector (IV) is maintained for each key. This field is first initialized by the SSL Handshake Protocol. Thereafter the final ciphertext block from each record is preserved for use as the IV with the following record.
Once a session is established, there is a current operating state for both read and write (i.e., receive and send). In addition, during the Handshake Protocol, pending read and write states are created. Upon successful conclusion of the Handshake Protocol, the pending states become the current states.

2. Discuss SSL protocol stack. (10 marks)(Dec 2012)
SSL Protocol Stack
The SSL Record Protocol provides basic security services to various higher-layer protocols. In particular, the Hypertext Transfer Protocol (HTTP), which provides the transfer service for Web client/server interaction, can operate on top of SSL. Three higher-layer protocols are defined as part of SSL: the Handshake Protocol, the Change Cipher Spec Protocol, and the Alert Protocol. These SSL-specific protocols are used in the management of SSL exchanges and are examined later in this section.
Two important SSL concepts are the SSL session and the SSL connection, which are defined in the specification as follows:
-Connection: A connection is a transport (in the OSI layering model definition) that provides a suitable type of service. For SSL, such connections are peer-to-peer relationships. The connections are transient. Every connection is associated with one session.
-Session: An SSL session is an association between a client and a server. Sessions are created by the Handshake Protocol. Sessions define a set of cryptographic security parameters, which can be shared among multiple connections. Sessions are used to avoid the expensive negotiation of new security parameters for each connection.
Between any pair of parties (applications such as HTTP on client and server), there may be multiple secure connections. In theory, there may also be multiple simultaneous sessions between parties, but this feature is not used in practice. There are actually a number of states associated with each session.

3. What are the services provided by SSL record protocol? (10 marks)(Dec 2012)(6 marks)(Dec 2014)
SSL Record Format
Figure 1.5. SSL Record Protocol Payload
Change Cipher Spec Protocol:
The Change Cipher Spec Protocol is one of the three SSL-specific protocols that use the SSL Record Protocol, and it is the simplest. This protocol consists of a single message (Figure 1.5a), which consists of a single byte with the value 1. The sole purpose of this message is to cause the pending state to be copied into the current state, which updates the cipher suite to be used on this connection.
Alert Protocol:
The Alert Protocol is used to convey SSL-related alerts to the peer entity. As with other applications that use SSL, alert messages are compressed and encrypted, as specified by the current state. Each message in this protocol consists of two bytes (Figure 1.5b). The first byte takes the value warning(1) or fatal(2) to convey the severity of the message. If the level is fatal, SSL immediately terminates the connection. Other connections on the same session may continue, but no new connections on this session may be established. The second byte contains a code that indicates the specific alert. First, we list those alerts that are always fatal (definitions from the SSL specification):
-unexpected_message: An inappropriate message was received.
-bad_record_mac: An incorrect MAC was received.
-decompression_failure: The decompression function received improper input (e.g., unable to decompress or decompress to greater than maximum allowable length).
-handshake_failure: Sender was unable to negotiate an acceptable set of security parameters given the options available.
-illegal_parameter: A field in a handshake message was out of range or inconsistent with other fields.
The remainder of the alerts are the following:
-close_notify: Notifies the recipient that the sender will not send any more messages on this connection. Each party is required to send a close_notify alert before closing the write side of a connection.
-no_certificate: May be sent in response to a certificate request if no appropriate certificate is available.
-bad_certificate: A received certificate was corrupt (e.g., contained a signature that did not verify).
-unsupported_certificate: The type of the received certificate is not supported.
-certificate_revoked: A certificate has been revoked by its signer.
-certificate_expired: A certificate has expired.
-certificate_unknown: Some other unspecified issue arose in processing the certificate, rendering it unacceptable.
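The two-byte alert layout is easy to see in code (a hedged sketch; the code numbers follow the SSLv3 specification):

WARNING, FATAL = 1, 2
ALERT_CODES = {0: "close_notify", 20: "bad_record_mac", 40: "handshake_failure"}

def parse_alert(msg: bytes):
    # First byte: severity level; second byte: the specific alert code.
    level, code = msg[0], msg[1]
    severity = "fatal" if level == FATAL else "warning"
    return severity, ALERT_CODES.get(code, "unknown")

print(parse_alert(bytes([FATAL, 40])))   # ('fatal', 'handshake_failure')
print(parse_alert(bytes([WARNING, 0])))  # ('warning', 'close_notify')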

4. Describe the SET participants. (05 Marks)
Secure Electronic Transaction:
SET is an open encryption and security specification designed to protect credit card transactions on the Internet. The current version, SETv1, emerged from a call for security standards by MasterCard and Visa in February 1996. A wide range of companies were involved in developing the initial specification, including IBM, Microsoft, Netscape, RSA, Terisa, and Verisign, beginning in 1996. SET is not itself a payment system. Rather, it is a set of security protocols and formats that enables users to employ the existing credit card payment infrastructure on an open network, such as the Internet, in a secure fashion. In essence, SET provides three services:
-Provides a secure communications channel among all parties involved in a transaction
-Provides trust by the use of X.509v3 digital certificates
-Ensures privacy because the information is only available to parties in a transaction when and where necessary.

5. Explain SSL handshake protocol with a neat diagram. (5 marks)(Jun 2013)(7 marks)(Dec 2013)
Three higher-layer protocols are defined as part of SSL: the Handshake Protocol, the Change Cipher Spec Protocol, and the Alert Protocol. These SSL-specific protocols are used in the management of SSL exchanges and are examined later in this section.
Two important SSL concepts are the SSL session and the SSL connection, which are defined in the specification as follows:
-Connection: A connection is a transport (in the OSI layering model definition) that provides a suitable type of service. For SSL, such connections are peer-to-peer relationships. The connections are transient. Every connection is associated with one session.
-Session: An SSL session is an association between a client and a server. Sessions are created by the Handshake Protocol. Sessions define a set of cryptographic security parameters, which can be shared among multiple connections. Sessions are used to avoid the expensive negotiation of new security parameters for each connection.
Between any pair of parties (applications such as HTTP on client and server), there may be multiple secure connections. In theory, there may also be multiple simultaneous sessions between parties, but this feature is not used in practice. There are actually a number of states associated with each session. Once a session is established, there is a current operating state for both read and write (i.e., receive and send). In addition, during the Handshake Protocol, pending read and write states are created. Upon successful conclusion of the Handshake Protocol, the pending states become the current states.

6. Explain the construction of dual signature in SET. Also show its verification by the merchant and the bank. (10 marks)(Jun 2013)
SET Overview:
A good way to begin our discussion of SET is to look at the business requirements for SET, its key features, and the participants in SET transactions.
Requirements:
The SET specification lists the following business requirements for secure payment processing with credit cards over the Internet and other networks:
-Provide confidentiality of payment and ordering information: It is necessary to assure cardholders that this information is safe and accessible only to the intended recipient. Confidentiality also reduces the risk of fraud by either party to the transaction or by malicious third parties. SET uses encryption to provide confidentiality.
-Ensure the integrity of all transmitted data: That is, ensure that no changes in content occur during transmission of SET messages. Digital signatures are used to provide integrity.
-Provide authentication that a cardholder is a legitimate user of a credit card account: A mechanism that links a cardholder to a specific account number reduces the incidence of fraud and the overall cost of payment processing. Digital signatures and certificates are used to verify that a cardholder is a legitimate user of a valid account.
-Provide authentication that a merchant can accept credit card transactions through its relationship with a financial institution: This is the complement to the preceding requirement. Cardholders need to be able to identify merchants with whom they can conduct secure transactions. Again, digital signatures and certificates are used.
-Ensure the use of the best security practices and system design techniques to protect all legitimate parties in an electronic commerce transaction: SET is a well-tested specification based on highly secure cryptographic algorithms and protocols.
-Create a protocol that neither depends on transport security mechanisms nor prevents their use: SET can securely operate over a "raw" TCP/IP stack. However, SET does not interfere with the use of other security mechanisms, such as IPSec and SSL/TLS.
-Facilitate and encourage interoperability among software and network providers: The SET protocols and formats are independent of hardware platform, operating system, and Web software.
Key Features of SET:
To meet the requirements just outlined, SET incorporates the following features:
-Confidentiality of information: Cardholder account and payment information is secured as it travels across the network. An interesting and important feature of SET is that it prevents the merchant from learning the cardholder's credit card number; this is only provided to the issuing bank. Conventional encryption by DES is used to provide confidentiality.
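Since the question asks for the dual signature itself, here is a hedged sketch of its construction and verification (a plain SHA-1 digest stands in for the RSA signing step, so this shows only the hashing structure):

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()

pi = b"payment instructions"   # PI, meant for the bank
oi = b"order information"      # OI, meant for the merchant

# Dual signature: the customer signs H(H(PI) || H(OI)) with its private key.
pimd, oimd = h(pi), h(oi)
dual_signature = h(pimd + oimd)   # E(PRc, POMD) in the full scheme

# The merchant holds OI plus the PI digest; the bank holds PI plus the OI
# digest. Each recomputes the payment-order digest and compares, so neither
# party ever needs to see the other's document.
merchant_check = h(pimd + h(oi)) == dual_signature
bank_check = h(h(pi) + oimd) == dual_signature
print(merchant_check, bank_check)   # True True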

7. List out the key features of secure electronic transaction and explain in detail. .(5
marks)(Jun 2013)(10 marks)(Jun 2013)(6 marks)(Dec 2014)
Secure Electronic Transaction:
SET is an open encryption and security specification designed to protect credit card
transactions on the Internet. The current version, SETv1, emerged from a call for security
standards by MasterCard and Visa in February 1996. A wide range of companies were
involved in developing the initial specification, including IBM, Microsoft, Netscape, RSA,
Terisa, and Verisign, beginning in 1996. SET is not itself a payment system. Rather, it is a set
of security protocols and formats that enables users to employ the existing credit card
payment infrastructure on an open network, such as the Internet, in a secure fashion. In
essence, SET provides three services:
-Provides a secure communications channel among all parties involved in a transaction
-Provides trust by the use of X.509v3 digital certificates
-Ensures privacy because the information is only available to parties in a transaction
when and where necessary.
SET Overview:
A good way to begin our discussion of SET is to look at the business requirements for SET,
its key features, and the participants in SET transactions.
UNIT 1 - Planning for Security

TOPICS
1. Planning for Security
2. Introduction
3. Information Security Policy
4. Standards, and Practices
5. The Information Security Blue Print
6. Contingency plan
7. Model for contingency plan

Introduction
This blueprint for the organization's information security efforts can be realized only if it operates in conjunction with the organization's information security policy. Without policy, blueprints, and planning, the organization will be unable to meet the information security needs of the various communities of interest. The organizations should undertake at least some planning: strategic planning to manage the allocation of resources, and contingency planning to prepare for the uncertainties of the business environment. Management from all communities of interest must consider policies as the basis for all information security efforts like planning, design and deployment.

Information Security Policy, Standards, and Practices
Policies direct how issues should be addressed and technologies used. Policies do not specify the proper operation of equipment or software; this information should be placed in the standards, procedures and practices of users' manuals and systems documentation. Security policies are the least expensive control to execute, but the most difficult to implement properly. Shaping policy is difficult because it must:
_ Never conflict with laws
_ Stand up in court, if challenged
_ Be properly administered through dissemination and documented acceptance.

Enterprise Information Security Policy (EISP)


A security program policy (SPP) or EISP is also known as
A general security policy
IT security policy
Information security policy

EISP
The EISP is based on and directly supports the mission, vision, and direction of the organization, and sets the strategic direction, scope, and tone for all security efforts within the organization.
The EISP is an executive-level document, usually drafted by, or with, the Chief Information Officer (CIO) of the organization, and is usually 2 to 10 pages long.


The EISP does not usually require continuous modification, unless there is a change in the strategic direction of the organization. The EISP guides the development, implementation, and management of the security program. It contains the requirements to be met by the information security blueprint or framework. It defines the purpose, scope, constraints, and applicability of the security program in the organization. It also assigns responsibilities for the various areas of security, including systems administration, maintenance of the information security policies, and the practices and responsibilities of the users. Finally, it addresses legal compliance. According to NIST, the EISP typically addresses compliance in two areas:
General compliance to ensure meeting the requirements to establish a program and the responsibilities assigned therein to various organizational components
The use of specified penalties and disciplinary action.

Issue-Specific Security Policy (ISSP)
As various technologies and processes are implemented, certain guidelines are needed to use them properly. The ISSP addresses specific areas of technology like:
Electronic mail
Use of the Internet
Specific minimum configurations of computers to defend against worms and viruses
Prohibitions against hacking or testing organization security controls
Home use of company-owned computer equipment
Use of personal equipment on company networks
Use of telecommunications technologies (FAX and Phone)
Use of photocopy equipment.
Three approaches:
_ Independent ISSP documents, each tailored to a specific issue.
_ A single comprehensive ISSP document covering all issues.
_ A modular ISSP document that unifies policy creation and administration, while maintaining each specific issue's requirements.
The independent document approach to creating and managing ISSPs typically has a scattershot effect. Each department responsible for a particular application of technology creates a policy governing its use, management, and control. This approach to creating ISSPs may fail to cover all of the necessary issues, and can lead to poor policy distribution, management, and enforcement.
The single comprehensive policy approach is centrally managed and controlled. With formal procedures for the management of ISSPs in place, the comprehensive policy approach establishes guidelines for overall coverage of necessary issues and clearly identifies processes for the dissemination, enforcement, and review of these guidelines. Usually, these policies are developed by those responsible for managing the information technology resources.
The optimal balance between the independent and comprehensive ISSP approaches is the modular approach. It provides a balance between issue orientation and policy management, and is also centrally managed and controlled but tailored to the individual technology issues. The policies created with this approach comprise individual modules, each created and updated by individuals responsible for the issues addressed. These individuals report to a central policy administration group that incorporates specific issues into an overall comprehensive policy.

Systems-Specific Policy (SysSP)
While issue-specific policies are formalized as written documents, distributed to users, and agreed to in writing, SysSPs are frequently codified as standards and procedures to be used when configuring or maintaining systems.
Systems-specific policies fall into two groups:
Access control lists (ACLs): consist of the access control lists, matrices, and capability tables governing the rights and privileges of a particular user to a particular system.
_ An ACL is a list of access rights used by file storage systems, object brokers, or other network communications devices to determine which individuals or groups may access an object that it controls. (Object Brokers are system components that handle message requests between the software components of a system.)
_ A similar list, which is also associated with users and groups, is called a Capability Table. This specifies which subjects and objects a user or group can access. Capability tables are frequently complex matrices, rather than simple lists or tables.
Configuration rules: comprise the specific configuration codes entered into security systems to guide the execution of the system when information is passing through it.
ACL Policies

ACLs allow configuration to restrict access from anyone and anywhere. Restrictions can be set for a particular user, computer, time, duration, even a particular file.
ACLs regulate:
Who can use the system
What authorized users can access
When authorized users can access the system
Where authorized users can access the system from
How authorized users can access the system
The WHO of ACL access may be determined by an individual person's identity or that person's membership in a group of people with the same access privileges.
Determining WHAT users are permitted to access can include restrictions on the various attributes of the system resources, such as the type of resources (printers, files, communication devices, or applications), name of the resource, or the location of the resource. Access is controlled by adjusting the resource privileges for the person or group to one of Read, Write, Create, Modify, Delete, Compare, or Copy for the specific resource.
To control WHEN access is allowed, some organizations choose to implement time-of-day and/or day-of-week restrictions for some network or system resources.
For the control of WHERE resources can be accessed from, many network-connected assets have restrictions placed on them to block remote usage and also have some levels of access that are restricted to locally connected users.
When these various ACL options are applied cumulatively, the organization has the ability to describe fully how its resources can be used. In some systems, these lists of ACL rules are known as Capability tables, user profiles, or user policies. They specify what the user can and cannot do on the resources within that system.
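A hedged sketch of how these cumulative WHO/WHAT/WHEN/WHERE options combine in code (all rule fields and names are hypothetical, not from any product):

from datetime import datetime

acl = [
    {"who": "payroll-group", "what": "salary.xls", "rights": {"Read", "Write"},
     "hours": range(8, 18), "where": "local"},
]

def allowed(user_groups, resource, right, now, location):
    # Every dimension of a rule must match for access to be granted.
    for rule in acl:
        if (rule["who"] in user_groups and rule["what"] == resource
                and right in rule["rights"]
                and now.hour in rule["hours"]
                and location == rule["where"]):
            return True
    return False

print(allowed({"payroll-group"}, "salary.xls", "Read",
              datetime(2024, 1, 8, 10, 30), "local"))   # True
print(allowed({"payroll-group"}, "salary.xls", "Delete",
              datetime(2024, 1, 8, 10, 30), "local"))   # False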

Rule Policies
Rule policies are more specific to the operation of a system than ACLs. Many security systems require specific configuration scripts telling the systems what actions to perform on each set of information they process. Examples of these systems include firewalls, intrusion detection systems, and proxy servers. Figure 6.5 shows how network security policy has been implemented by Check Point in a firewall rule set.
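As an illustration of such a rule set (a minimal sketch, not Check Point syntax; rules are evaluated top-down and the first match decides):

rules = [
    {"src": "any", "dst": "mail-server", "service": "smtp", "action": "allow"},
    {"src": "any", "dst": "any",         "service": "any",  "action": "deny"},
]

def filter_packet(src, dst, service):
    for rule in rules:
        if (rule["src"] in ("any", src) and rule["dst"] in ("any", dst)
                and rule["service"] in ("any", service)):
            return rule["action"]
    return "deny"   # implicit default deny

print(filter_packet("203.0.113.5", "mail-server", "smtp"))  # allow
print(filter_packet("203.0.113.5", "db-server", "telnet"))  # deny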

Policy Management
Policies are living documents that must be managed and nurtured, and are constantly changing and growing. Documents must be properly disseminated (distributed, read, understood, and agreed to) and managed. Special considerations should be made for organizations undergoing mergers, takeovers, and partnerships.
In order to remain viable, policies must have:
an individual responsible for reviews
a schedule of reviews
a method for making recommendations for reviews
a specific effective and revision date
Responsible Individual
The policy champion and manager is called the policy administrator. The policy administrator is a mid-level staff member and is responsible for the creation, revision, distribution, and storage of the policy. It is good practice to actively solicit input both from the technically adept information security experts and from the business-focused managers in each community of interest when making revisions to security policies. This individual should also notify all affected members of the organization when the policy is modified. The policy administrator must be clearly identified on the policy document as the primary point of contact for additional information or for revision suggestions to the policy.
Schedule of Reviews
Policies are effective only if they are periodically reviewed for currency and accuracy and modified to reflect these changes. Policies that are not kept current can become liabilities for the organization, as outdated rules are enforced or not, and new requirements are ignored. The organization must demonstrate, with due diligence, that it is actively trying to meet the requirements of the market in which it operates. A properly organized schedule of reviews should be defined (at least annually) and published as part of the document.
Review Procedures and Practices
To facilitate policy reviews, the policy manager should implement a mechanism by which individuals can comfortably make recommendations for revisions. Recommendation methods can involve e-mail, office mail, and an anonymous drop box. Once the policy has come up for review, all comments should be examined, and management-approved improvements should be implemented.

Because each information security environment is unique, the security team may need to modify or adapt pieces from several frameworks. Experience teaches you that what works well for one organization may not precisely fit another.

Information Classification
The classification of information is an important aspect of policy. The same protection scheme created to prevent production data from accidental release to the wrong party should be applied to policies in order to keep them freely available, but only within the organization. In today's open office environments, it may be beneficial to implement a clean desk policy. A clean desk policy stipulates that at the end of the business day, all classified information must be properly stored and secured.

Systems Design
At this point in the Security SDLC, the analysis phase is complete and the design phase begins; many work products have been created. Designing a plan for security begins by creating or validating a security blueprint. Then use the blueprint to plan the tasks to be accomplished and the order in which to proceed. Setting priorities can follow the recommendations of published sources, or from published standards provided by government agencies, or private consultants.

Information Security Blueprints
One approach is to adapt or adopt a published model or framework for information security. A framework is the basic skeletal structure within which additional detailed planning of the blueprint can be placed as it is developed or refined. Experience teaches us that what works well for one organization may not precisely fit another. Therefore, each implementation may need modification or even redesign before it suits the needs of a particular asset-threat problem.
The security blueprint is the basis for the design, selection, and implementation of all security policies, education and training programs, and technological controls. The security blueprint is a more detailed version of the security framework, which is an outline of the overall information security strategy for the organization and the roadmap for planned changes to the information security environment of the organization. The blueprint should specify the tasks to be accomplished and the order in which they are to be realized, and serve as a scalable, upgradeable, and comprehensive plan for the information security needs for coming years. One approach to selecting a methodology by which to develop an information security blueprint is to adapt or adopt a published model or framework for information security. This framework can be an outline of steps involved in designing and later implementing information security in the organization. There is a number of published information security frameworks, including those from government sources presented later in this chapter.

NIST SP 800-14: Generally Accepted Principles and Practices for Securing Information Technology Systems
They have been broadly reviewed by government and industry professionals, and are among the references cited by the federal government when it decided not to select the ISO/IEC 17799 standards.
Table of Contents
1. Introduction
1.1 Principles
1.2 Practices
1.3 Relationship of Principles and Practices
1.4 Background
1.5 Audience
1.6 Structure of this Document
1.7 Terminology
2. Generally Accepted System Security Principles
2.1 Computer Security Supports the Mission of the Organization
2.2 Computer Security is an Integral Element of Sound Management
2.3 Computer Security Should Be Cost-Effective
2.4 Systems Owners Have Security Responsibilities outside Their Own Organizations
2.5 Computer Security Responsibilities and Accountability Should Be Made Explicit
2.6 Computer Security Requires a Comprehensive and Integrated Approach
2.7 Computer Security Should Be Periodically Reassessed
2.8 Computer Security is constrained by Societal Factors
3. Common IT Security Practices
3.1 Policy
3.1.1 Program Policy
3.1.2 Issue-Specific Policy
3.1.3 System-Specific Policy
3.1.4 All Policies
3.2 Program Management
3.2.1 Central Security Program
3.2.2 System-Level Program
3.3 Risk Management

3.3.1 Risk Assessment
3.3.2 Risk Mitigation
3.3.3 Uncertainty Analysis
3.4 Life Cycle Planning
3.4.1 Security Plan
3.4.2 Initiation Phase
3.4.3 Development/Acquisition Phase
3.4.4 Implementation Phase
3.4.5 Operation/Maintenance Phase
3.4.6 Disposal Phase
3.5 Personnel/User Issues
3.5.1 Staffing
3.5.2 User Administration
3.6 Preparing for Contingencies and Disasters
3.6.1 Business Plan
3.6.2 Identify Resources
3.6.3 Develop Scenarios
3.6.4 Develop Strategies
3.6.5 Test and Revise Plan
3.7 Computer Security Incident Handling
3.7.1 Uses of a Capability
3.7.2 Characteristics
3.8 Awareness and Training
3.9 Security Considerations in Computer Support and Operations
3.10 Physical and Environmental Security
3.11 Identification and Authentication
3.11.1 Identification
3.11.2 Authentication
3.11.3 Passwords
3.11.4 Advanced Authentication
3.12 Logical Access Control
3.12.1 Access Criteria
3.12.2 Access Control Mechanisms
3.13 Audit Trails
3.13.1 Contents of Audit Trail Records
3.13.2 Audit Trail Security
3.13.3 Audit Trail Reviews
3.13.4 Keystroke Monitoring
3.14 Cryptography

NIST SP 800-14 (33 Principles enumerated) includes the following key principles:
Security Supports the Mission of the Organization
Security is an Integral Element of Sound Management
Security Should Be Cost-Effective
Systems Owners Have Security Responsibilities Outside Their Own Organizations
Security Responsibilities and Accountability Should Be Made Explicit
Security Requires a Comprehensive and Integrated Approach
Security Should Be Periodically Reassessed
Security is Constrained by Societal Factors

Security Supports the Mission of the Organization:
Failure to develop an information security system based on the organization's mission, vision and culture guarantees the failure of the information security program.
Security is an Integral Element of Sound Management:
Effective management includes planning, organizing, leading, and controlling. Information security specifically supports the controlling function, as security controls support sound management by means of the enforcement of both managerial and security policies. Security enhances these areas by supporting the planning function when information security policies provide input into the organization initiatives.
Security Should Be Cost-Effective:
The costs of information security should be considered part of the cost of doing business, much like the cost of computers, networks, and voice communications systems. These are not profit-generating areas of the organization and may not lead to competitive advantages. Information security should justify its own costs. Security measures that do not justify cost benefit levels must have a strong business case (such as a legal requirement) to warrant their use.
Systems Owners Have Security Responsibilities Outside Their Own Organizations:
Whenever systems store and use information from customers, patients, clients, partners, and others, the security of this information becomes a serious responsibility for the owner of the systems.
Security Responsibilities and Accountability Should Be Made Explicit:

Policy documents should clearly identify the security responsibilities of users, administrators, and managers. To be legally binding, this information must be documented, disseminated, read, understood, and agreed to. Ignorance of law is no excuse, but ignorance of policy is. Regarding the law, the organization should also detail the relevance of laws to issue-specific security policies. These details should be distributed to users, administrators, and managers to assist them in complying with their responsibilities.
Security Requires a Comprehensive and Integrated Approach:
Security personnel alone cannot effectively implement security. Security is everyone's responsibility. The THREE communities of interest (information technology management and professionals, information security management and professionals, as well as the users, managers, administrators, and other stakeholders of the broader organization) should participate in the process of developing a comprehensive information security program.
Security Should Be Periodically Reassessed:
Information security that is implemented and then ignored is considered negligent, the organization having not demonstrated due diligence. Security is an ongoing process. It cannot be implemented and then expected to function independently without constant maintenance and change. To be effective against a constantly shifting set of threats and constantly changing user base, the security process must be periodically repeated. Continuous analysis of threats, assets, and controls must be conducted and a new blueprint developed. Only through preparation, design, implementation, eternal vigilance, and ongoing maintenance can the organization secure its information assets.
Security is Constrained by Societal Factors:
There are a number of factors that influence the implementation and maintenance of security. Legal demands, shareholder requirements, even business practices affect the implementation of security controls and safeguards. For example, security professionals generally prefer to isolate information assets from the Internet, which is the leading avenue of threats to the assets, but the business requirements of the organization may preclude this control measure.

RFC 2196: Site Security Handbook
Table of Contents
1. Introduction
1.1 Purpose of this Work
1.2 Audience
1.3 Definitions
1.4 Related Work
1.5 Basic Approach
2. Security Policies
2.1 What is a Security Policy and Why Have One?
2.2 What Makes a Good Security Policy?
2.3 Keeping the Policy Flexible
3. Architecture
3.1 Objectives
3.2 Network and Service Configuration
3.3 Firewalls
4. Security Services and Procedures
4.1 Authentication
4.2 Confidentiality
4.3 Integrity
4.4 Authorization
4.5 Access
4.6 Auditing
4.7 Securing Backups
5. Security Incident Handling
5.1 Preparing and Planning for Incident Handling
5.2 Notification and Points of Contact
5.3 Identifying an Incident
5.4 Handling an Incident
5.5 Aftermath of an Incident
5.6 Responsibilities
6. Ongoing Activities
7. Tools and Locations

8. Mailing Lists and Other Resources
9. References

References
Internet Security Task Force (www.ca.com/ISTF)
Computer Emergency Response Team (CERT) at Carnegie Mellon University (www.cert.org)
The Technology Managers Forum (www.techforum.com)
The Information Security Forum (www.isfsecuritystandard.com)
The Information Systems Audit and Control Association (www.isaca.com)
The International Association of Professional Security Consultants (www.iapsc.org)
Global Grid Forum (www.gridforum.org)
SearchSecurity.com and NIST's Computer Resources Center

VISA International Security Model
VISA International promotes strong security measures and has security guidelines. It developed two important documents that improve and regulate its information systems:
Security Assessment Process
Agreed Upon Procedures
Both documents provide specific instructions on the use of the VISA Cardholder Information Security Program. The Security Assessment Process document is a series of recommendations for the detailed examination of an organization's systems with the eventual goal of integration into the VISA system. The Agreed Upon Procedures document outlines the policies and technologies required for security systems that carry the sensitive cardholder information to and from VISA systems. Using the two documents, a security team can develop a sound strategy for the design of good security architecture. The only downside to this approach is the very specific focus on systems that can or do integrate with VISA's systems.

Planning for Security - Hybrid Framework
This section presents a Hybrid framework, or a general outline of a methodology, that organizations can use to create a security system blueprint as they fill in the implementation details to address the components of a solid information security plan.

Hybrid Framework for a Blue Print of an Information Security System
The NIST SP 800-26 framework of security includes philosophical components of the Human Firewall Project, which maintains that people, not technology, are the primary defenders of information assets in an information security program, and are uniquely responsible for their protection.

Baselining and Best Practices
Baselining and best practices are solid methods for collecting security practices, but they can have the drawback of providing less detail than would a complete methodology. It is possible to gain information by baselining and using best practices and thus work backwards to an effective design.
The Federal Agency Security Practices Site (fasp.csrc.nist.gov) is designed to provide best practices for public agencies, but these policies can be adapted easily to private institutions. The documents found in this site include specific examples of key policies and planning documents, implementation strategies for key technologies and position descriptions for key security personnel. Of particular value is the section on program management, which includes the following:
A summary guide: public law, executive orders, and policy documents
Position description for computer system security officer
Position description for information security officer
Position description for computer specialist
Sample of an information technology (IT) security staffing plan for a large service application (LSA)
Sample of information technology (IT) security program policy
Security handbook and standard operating procedures
Telecommuting and mobile computer security policy

NIST SP 800-26 Security Self Assessment Guide for Information Technology Systems
Management Controls
Risk Management
Review of Security Controls
Life Cycle Maintenance
Authorization of Processing (Certification and Accreditation)
System Security Plan
Operational Controls
Personnel Security
Physical Security
Production, Input / Output Controls
Contingency Planning
Hardware and Systems Software
Data Integrity
Documentation

Security Awareness, Training, and Education
Incident Response Capability
Technical Controls
Identification and Authentication
Logical Access Controls
Audit Trails

The Sphere of Security
The sphere of security (Figure 6.16) is the foundation of the security framework. The spheres of security illustrate how information is under attack from a variety of sources. Information, as the most important asset in the model, is at the center of the sphere. Information is always at risk from attacks through the people and computer systems that have access to the information. The sphere of use illustrates the ways in which people access information; for example, people read hard copies of documents and can also access information through systems. The sphere of protection overlays each of the levels of the sphere of use with a layer of security, protecting that layer from direct or indirect use through the next layer.
As illustrated in the sphere of protection, a variety of controls can be used to protect the information. The items of control shown in the figure are not intended to be comprehensive but rather illustrate individual safeguards that can protect the various systems that are located closer to the center of the sphere. This reinforces the concept of DEFENCE IN DEPTH. However, because people can directly access each ring as well as the information at the core of the model, the side of the sphere of protection that attempts to control access by relying on people requires a different approach to security than the side that uses technology. The people must become a layer of security, a human firewall that protects the information from unauthorized access and use. The members of the organization must become a safeguard, which is effectively trained, implemented and maintained, or else they too will represent a threat to the information.

Sphere of Use
Generally speaking, the concept of the sphere is to represent the 360 degrees of security necessary to protect information at all times. The first component is the sphere of use. Information, at the core of the sphere, is available for access by members of the organization and other computer-based systems:
To gain access to the computer systems, one must either directly access the computer systems or go through a network connection.
To gain access to the network, one must either directly access the network or go through an Internet connection.
Networks and the Internet represent indirect threats, as exemplified by the fact that a person attempting to access information from the Internet must first go through the local networks and then access systems that contain the information.

The Sphere of Protection
The figure illustrates that between each layer of the sphere of use there must exist a layer of protection to prevent access to the inner layer from the outer layer. Each shaded band is a layer of protection and control. For example, the items labeled "Policy & law" and "Education & Training" are located between people and the information. Controls are also implemented between systems and the information, between networks and the computer systems, and between the Internet and internal networks.
Information security is therefore designed and implemented in three layers:
policies
people (education, training, and awareness programs)
technology
While the design and implementation of the people layer and the technology layer overlap, both must follow the sound management policies. Each of the layers constitutes controls and safeguards that are put into place to protect the information and information system assets that the organization values. The order of the controls within the layers follows a prioritization scheme. But before any controls and safeguards are put into place, the policies defining the management philosophies that guide the security process must already be in place.

Three Levels of Control
Safeguards provide THREE levels of control:
Managerial Controls
Operational Controls
Technical Controls
Managerial Controls
Managerial controls cover security processes that are designed by the strategic planners and performed by security administration of the organization. Management Controls address the design and implementation of the security planning process and security program management.


Management controls further describe the necessity and scope of legal compliance and the maintenance of the entire security life cycle. They also address risk management and security control reviews.
Operational Controls
Operational controls deal with the operational functionality of security in the organization. They include management functions and lower-level planning, such as disaster recovery and incident response planning. Operational controls also address personnel security, physical security, and the protection of production inputs and outputs. In addition, operational controls guide the development of education, training, and awareness programs for users, administrators and management. Finally, they address hardware and software systems maintenance and the integrity of data.
Technical Controls
Technical controls address those tactical and technical issues related to designing and implementing security in the organization, as well as issues related to examining and selecting the technologies appropriate to protecting information. While operational controls address the specifics of technology selection and acquisition of certain technical components, technical controls also address the development and implementation of audit trails for accountability. They also include logical access controls, such as identification, authentication, authorization, and accountability. In addition, these controls cover cryptography to protect information in storage and transit. Finally, they include the classification of assets and users, to facilitate the authorization levels needed.
Summary
Using the three sets of controls just described, the organization should be able to specify controls to cover the entire spectrum of safeguards, from strategic to tactical, and from managerial to technical.
The Framework
Management Controls
Program Management
System Security Plan
Life Cycle Maintenance
Risk Management
Review of Security Controls
Legal Compliance
Operational Controls
Contingency Planning
Security ETA
Personnel Security
Physical Security
Production Inputs and Outputs
Hardware & Software Systems Maintenance
Data Integrity
Technical Controls
Logical Access Controls
Identification, Authentication, Authorization, and Accountability
Audit Trails
Asset Classification and Control
Cryptography
Design of Security Architecture
To inform the discussion of information security program architecture and to illustrate industry best practices, the following sections outline a few key security architectural components. Many of these components are examined in an overview. An overview is provided because being able to assess whether a framework and/or blueprint are on target to meet an organization's needs requires a working knowledge of these security architecture components.
Defense in Depth
One of the basic tenets of security architectures is the implementation of security in layers. This layered approach is called Defense in Depth. Defense in depth requires that the organization establishes sufficient security controls and safeguards so that an intruder faces multiple layers of control. These layers of control can be organized into policy, training and education, and technology, as per the NSTISSC model. While policy itself may not prevent attacks, it certainly prepares the organization to handle them. Coupled with other layers, policy can deter attacks. Training and education are similar. Technology is also implemented in layers, with detection equipment working in tandem with reaction technology, all operating behind access control mechanisms. Implementing multiple types of technology and thereby preventing the failure of one system from compromising the security of the information is referred to as redundancy.
Page 20
www.allsyllabus.com

Page 21
www.allsyllabus.com

Security Perimeter
A perimeter is the boundary of an area. A security perimeter defines the edge between the outer limit of an organization's security and the beginning of the outside world. A security perimeter is the first level of security that protects all internal systems from outside threats, as pictured in Figure 6.19. Unfortunately, the perimeter does not protect against internal attacks from employee threats or on-site physical threats. There can be both an electronic security perimeter, usually at the organization's exterior network or Internet connection, and a physical security perimeter, usually at the gate to the organization's offices; both require perimeter security. Security perimeters can effectively be implemented as multiple technologies that safeguard the protected information from those who would attack it. Thus, firewalls can be used to create security perimeters like the one shown in Figure 6.19. Within security perimeters the organization can establish security domains, or areas of trust within which users can freely communicate; the assumption is that individuals with access to one system in a domain have access to all systems within that particular domain.

Key Technology Components

FIREWALLS
A firewall is a device that selectively discriminates against information flowing into or out of the organization. A firewall is usually a computing device, or a specially configured computer, that allows or prevents information from entering or exiting the defined area based on a set of predefined rules. Firewalls are usually placed on the security perimeter, just behind or as part of a gateway router. While the gateway router is primarily designed to connect the organization's systems to the outside world, it too can be used as the front-line defense against attacks, since it can be configured to allow only a few types of protocols to enter. There are a number of types of firewalls, which are usually classified by the level of information they can filter: firewalls can be packet filtering, stateful packet filtering, proxy, or application level. A firewall can be a single device or a firewall subnet, which consists of multiple firewalls creating a buffer between the outside and inside networks.
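To make the rule-based filtering idea concrete, here is a minimal sketch of packet-filter logic in Python. The rule set, field names, and addresses are illustrative assumptions, not any product's configuration syntax; a real packet-filtering firewall matches on many more header fields.

# Minimal sketch of packet-filtering firewall logic (illustrative only).
RULES = [
    # (source-address prefix, destination port, action); None = wildcard
    ("10.0.",  None, "allow"),   # trust hosts on the internal 10.0.x.x net
    (None,     80,   "allow"),   # permit inbound HTTP to the web server
    (None,     443,  "allow"),   # permit inbound HTTPS
    (None,     None, "deny"),    # default rule: deny everything else
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    """Return the action of the first rule matching the packet header."""
    for prefix, port, action in RULES:
        if prefix is not None and not src_ip.startswith(prefix):
            continue
        if port is not None and dst_port != port:
            continue
        return action
    return "deny"

print(filter_packet("10.0.3.7", 5432))    # allow (internal host)
print(filter_packet("203.0.113.9", 80))   # allow (HTTP)
print(filter_packet("203.0.113.9", 23))   # deny  (telnet, default rule)

A stateful or application-level firewall differs mainly in what it inspects: connection state or message content rather than single packet headers.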

DMZs
A buffer against outside attacks is frequently referred to as a Demilitarized Zone (DMZ). The DMZ is a no-man's land between the inside and outside networks; it is also where some organizations place their web servers. These servers provide access to organizational web pages without allowing web requests to enter the interior networks.

Proxy Servers
An alternative approach to the strategies of using a firewall subnet or a DMZ is to use a proxy server, or proxy firewall. A proxy server performs actions on behalf of another system. When deployed, a proxy server is configured to look like a web server and is assigned the domain name that users would be expecting to find for the system and its services. When an outside client requests a particular web page, the proxy server receives the request as if it were the subject of the request, then asks for the same information from the true web server (acting as a proxy for the requestor), and then responds to the request as a proxy for the true web server. This gives requestors the response they need without allowing them to gain direct access to the internal and more sensitive server. The proxy server may be hardened and become a bastion host placed in the public area of the network, or it might be placed within the firewall subnet or the DMZ for added protection. For more frequently accessed web pages, proxy servers can cache or temporarily store the page, and thus are sometimes called cache servers. Figure 6.20 shows a representative example of a configuration using a proxy.
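The request-relay behavior described above fits in a few lines of Python. This is a bare sketch of a reverse proxy, assuming a hypothetical internal server address; a production proxy would also cache pages, forward headers, and handle errors.

# Minimal sketch of a reverse proxy: receive the client's request, fetch
# the page from the hidden internal web server, and relay the response,
# so the client never talks to the internal host directly.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

INTERNAL_SERVER = "http://10.0.0.5:8080"   # assumed true web server

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Ask the real server for the same resource (proxy for the client).
        with urlopen(INTERNAL_SERVER + self.path) as upstream:
            body = upstream.read()
        # Relay the answer (proxy for the real server).
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 80), ProxyHandler).serve_forever()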

Intrusion Detection Systems (IDSs)
In an effort to detect unauthorized activity within the inner network or on individual machines, an organization may wish to implement Intrusion Detection Systems (IDSs). IDSs come in TWO versions, with hybrids possible.

Host-Based IDS
Host-based IDSs are usually installed on the machines they protect to monitor the status of various files stored on those machines. The IDS learns the configuration of the system, assigns priorities to various files depending on their value, and can then alert the administrator of suspicious activity.
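The core of the host-based approach is a baseline database of file attributes against which later scans are compared. A minimal sketch of that idea follows (paths and the alert mechanism are illustrative; real products also track permissions, owners, and attack signatures):

# Build a baseline of file hashes, then re-scan and flag changes.
import hashlib
import os

def snapshot(root: str) -> dict:
    """Map each file under root to the SHA-256 hash of its contents."""
    baseline = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                baseline[path] = hashlib.sha256(f.read()).hexdigest()
    return baseline

baseline = snapshot("/etc")        # taken while the system is known-good
# ... later, during monitoring ...
for path, digest in snapshot("/etc").items():
    if baseline.get(path) != digest:
        print("ALERT: file added or modified:", path)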

Network-Based IDS
Network-based IDSs look at patterns of network traffic and attempt to detect unusual activity based on previous baselines. This could include packets coming into the organization's networks with addresses from machines already within the organization (IP spoofing). It could also include high volumes of traffic going to outside addresses (as in a Denial-of-Service attack).

Both host- and network-based IDSs require a database of previous activity. In the case of host-based IDSs, the system can create a database of file attributes, as well as maintain a catalog of common attack signatures. Network-based IDSs can use a similar catalog of common attack signatures and develop databases of normal activity for comparison with future activity. The two kinds of IDS can be used together for the maximum level of security for a particular network and set of systems. Figure 6.21 shows an example of an Intrusion Detection System.

Summary
This cursory overview of technology components is meant to provide sufficient understanding to allow a decision maker to determine what should be implemented and when to bring in additional expertise to better craft the security design. After your organization has selected a model, created a framework, and fleshed it out into a blueprint for implementation, you should make sure your planning includes the steps needed to create a training and awareness program that increases information security knowledge and visibility and enables people across the organization to work in secure ways that enhance the safety of the organization's information assets.

Contingency Planning

Learning Objectives
Upon completion of this part you should be able to:
Understand the steps involved in incident reaction and incident recovery.
Define the disaster recovery plan and its parts.
Define the business continuity plan and its parts.
Grasp the reasons for and against involving law enforcement officials in incident responses and when it is required.

Continuity Strategy
A key role for all managers is planning. Unfortunately for managers, however, the probability that some form of attack will occur, whether from inside or outside, intentional or accidental, human or nonhuman, annoying or catastrophic, is very high. Thus, managers from each community of interest within the organization must be ready to act when a successful attack occurs. Managers must provide strategic planning to assure continuous information systems availability, ready to use when an attack occurs. Plans for events of this type are referred to in a number of ways:
Business Continuity Plans (BCPs)
Disaster Recovery Plans (DRPs)
Incident Response Plans (IRPs)
Contingency Plans

An incident is any clearly identified attack on the organization's information assets that would threaten the assets' confidentiality, integrity, or availability. An Incident Response Plan (IRP) deals with the identification, classification, response, and recovery from an incident. A Disaster Recovery Plan (DRP) deals with the preparation for and recovery from a disaster, whether natural or man-made. A Business Continuity Plan (BCP) ensures that critical business functions continue if a catastrophic incident or disaster occurs.

The primary functions of these three planning types:
IRP focuses on immediate response, but if the attack escalates or is disastrous the process changes to disaster recovery and BCP.
DRP typically focuses on restoring systems after disasters occur, and as such is closely associated with BCP.
BCP occurs concurrently with DRP when the damage is major or long term, requiring more than simple restoration of information and information resources.

Components of CP
Contingency Planning (CP) comprises:
Incident Response Planning (IRP)
Disaster Recovery Planning (DRP)
Business Continuity Planning (BCP)

Contingency Planning Team
i. Champion: The CP project must have a high-level manager to support, promote, and endorse the findings of the project.
ii. Project Manager: The champion provides the strategic vision and the linkage to the power structure of the organization, but a project manager is needed to lead the project and ensure a sound planning process.
iii. Team members: The team members for this project should be the managers, or their representatives, from the various communities of interest: business, information technology, and information security.

Business Impact Analysis
The first phase in the development of the CP process is the Business Impact Analysis (BIA). A BIA is an investigation and assessment of the impact that various attacks can have on the organization. It begins with the prioritized list of threats and vulnerabilities identified in risk management, and asks: if the attack succeeds, what do we do then? Obviously the organization's security team does everything in its power to stop these attacks, but some attacks, such as natural disasters, deviations from service providers, acts of human failure or error, and deliberate acts of sabotage and vandalism, may be unstoppable. The BIA therefore adds insight into what the organization must do to respond to attack, minimize the damage from the attack, recover from the effects, and return to normal operations.

The CP team conducts the BIA in the following stages:
_ Threat attack identification
_ Business unit analysis
_ Attack success scenarios
_ Potential damage assessment
_ Subordinate plan classification

Threat Attack Identification and Prioritization
The attack profile is the detailed description of the activities that occur during an attack. It must be developed for every serious threat the organization faces, natural or man-made, deliberate or accidental.

Business Unit Analysis
The second major task within the BIA is the analysis and prioritization of business functions within the organization (departments, sections, divisions, groups, or other such units) to determine which are most vital to the continued operations of the organization. This series of tasks serves to identify and prioritize the functions within the organization's units.

Attack Success Scenario Development
Next, create a series of scenarios depicting the impact a successful attack from each threat could have on each prioritized functional area, with details on the method of attack, the indicators of attack, and the broad consequences.

Potential Damage Assessment
From the attack success scenarios developed, the BIA planning team must estimate the cost of the best, worst, and most likely cases. Costs include the actions of the response team. The final result is referred to as an attack scenario end case.

Subordinate Plan Classification
Once potential damage has been assessed, a subordinate plan must be developed or identified. Subordinate plans take into account the identification of, reaction to, and recovery from each attack scenario. An attack scenario end case is categorized either as disastrous or not.

Incident Response Planning
Incident response planning covers the identification of, classification of, and response to an incident.

What is an incident? What is incident response?
An incident is an attack against an information asset that poses a clear threat to the confidentiality, integrity, or availability of information resources. Attacks are only classified as incidents if they have the following characteristics:
1) They are directed against information assets.
2) They have a realistic chance of success.
3) They could threaten the confidentiality, integrity, or availability of information resources.
If an action that threatens information occurs and is completed, the action is classified as an incident.

Incident Response (IR)
IR is therefore the set of activities taken to plan for, detect, and correct the impact of an incident on information assets. IR is more reactive than proactive. IR consists of the following FOUR phases:
1. Planning
2. Detection
3. Reaction
4. Recovery

Incident Planning
Planning for incidents is the first step in the overall process of incident response planning. Planning for an incident requires a detailed understanding of the scenarios developed for the BIA. With this information in hand, the planning team can develop a series of predefined responses that guide the organization's incident response (IR) team and information security staff. This assumes TWO things:
_ The organization has an IR team, and
_ The organization can detect the incident.
The IR team consists of those individuals who must be present to handle the systems and functional areas that can minimize the impact of an incident as it takes place. The IR team verifies the threat, determines the appropriate response, and co-ordinates the actions necessary to deal with the situation.

# Format and Content
The IR plan must be organized in such a way as to support, rather than impede, quick and easy access to the required information, for example by creating a directory of incidents with tabbed sections for each incident. To respond to an incident, the responder simply opens the binder, flips to the appropriate section, and follows the clearly outlined procedures for an assigned role.

# Storage
Where is the IR plan stored? The document could be stored adjacent to the administrator's workstation, or in a bookcase in the server room. Note that the information in the IR plan should be protected as sensitive information: if attackers gain knowledge of how a company responds to a particular incident, they can improve their chances of success in the attacks.

# Testing
A plan untested is not a useful plan: "Train as you fight, and fight as you train." Even if an organization has what appears on paper to be an effective IR plan, the procedures that come from the plan are useful only if they have been practiced and tested. Testing can be done by the following FIVE strategies:
1. Checklist: copies of the IR plan are distributed to each individual with a role to play during an actual incident. These individuals each review the plan and create a checklist of correct and incorrect components.
2. Structured walkthrough: in a walkthrough, each involved individual practices the steps he/she will take during an actual event. This can consist of an on-the-ground walkthrough, in which everyone discusses his/her actions at each particular location and juncture, or it can be more of a talk-through, in which all involved individuals sit around a conference table and discuss in turn how they would act as the incident unfolded.
3. Simulation: each involved individual works individually rather than in conference, simulating the performance of each task required to react to and recover from a simulated incident.
4. Parallel: this test is larger in scope and intensity. In the parallel test, individuals act as if an actual incident occurred, performing the required tasks and executing the necessary procedures.
5. Full interruption: the final, most comprehensive, and most realistic test is to react to an incident as if it were real. In a full interruption, the individuals follow each and every procedure, including the interruption of service, restoration of data from backups, and notification of appropriate individuals.

Best sayings:
Training and preparation hurt.
Lead from the front, not the rear.
You don't have to like it, just do it.
Keep it simple.
Never assume.
You are paid for your results, not your methods.
The more you sweat in training, the less you bleed in combat.

Incident Detection
Individuals sometimes notify the system administrator, security administrator, or their managers of an unusual occurrence. This is most often a complaint to the help desk from one or more users about a technology service. These complaints are often collected by the help desk and can include reports such as "the system is acting unusual", "programs are slow", "my computer is acting weird", and "data is not available".

# Incident Indicators

Incident Reaction
Incident reaction consists of actions that guide the organization to stop the incident, mitigate the impact of the incident, and provide information for the recovery from the incident. In reacting to the incident there are a number of actions that must occur quickly, including:
_ notification of key personnel
_ assignment of tasks
_ documentation of the incident

Notification of Key Personnel
Most organizations maintain alert rosters for emergencies. An alert roster contains contact information for the individuals to be notified in an incident. There are two ways to activate an alert roster:
_ A sequential roster is activated as a contact person calls each and every person on the roster.
_ A hierarchical roster is activated as the first person calls a few other people on the roster, who in turn call a few other people, and so on.
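The two activation patterns differ only in who places the calls. A small sketch of the difference (the names and call tree are made up; a real roster stores phone numbers and backup contacts):

ROSTER = {"CISO": ["IT manager", "Security admin"],
          "IT manager": ["Help desk lead", "Network admin"],
          "Security admin": ["IR team lead"]}

def sequential(first_caller):
    # One contact person calls each and every person on the roster.
    everyone = set(ROSTER) | {p for callees in ROSTER.values() for p in callees}
    for person in sorted(everyone - {first_caller}):
        print(first_caller, "calls", person)

def hierarchical(first_caller):
    # Each person calls a few others, who in turn call a few more.
    for callee in ROSTER.get(first_caller, []):
        print(first_caller, "calls", callee)
        hierarchical(callee)

hierarchical("CISO")   # the calling fans out down the tree

Hierarchical activation spreads the calling load but risks a broken link in the chain; sequential activation is slower but depends on a single, accountable caller.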

Documenting an Incident
Documenting the event is important: it ensures that the event is recorded for the organization's records, so that the organization knows what happened, how it happened, and what actions were taken. The documentation should record the who, what, when, where, why, and how of the event.

Incident Recovery
The organization repairs vulnerabilities, addresses any shortcomings in safeguards, and restores the data and services of the systems. The first task is to identify the human resources needed and launch them into action.

Damage Assessment
The full extent of the damage must be assessed. There are several sources of information, including system logs, intrusion detection logs, configuration logs and documents, documentation from the incident response, and the results of a detailed assessment of systems and data storage. Computer evidence must be carefully collected, documented, and maintained to be acceptable in formal proceedings.

In the recovery process:
_ Identify the vulnerabilities that allowed the incident to occur and spread, and resolve them.
_ Address the safeguards that failed to stop or limit the incident, or were missing from the system in the first place. Install, replace, or upgrade them.
_ Evaluate monitoring capabilities. Improve their detection and reporting methods, or simply install new monitoring capabilities.
_ Restore the data from backups.
_ Restore the services and processes in use.
_ Continuously monitor the system.
_ Restore the confidence of the members of the organization's communities of interest.
_ Conduct an after-action review.

Disaster Recovery Planning
Disaster recovery planning (DRP) is planning the preparation for and recovery from a disaster. The contingency planning team must decide which actions constitute disasters and which constitute incidents. When situations are classified as disasters, plans change as to how to respond: take action to secure the most valuable assets to preserve value for the longer term, even at the risk of more disruption. DRP strives to reestablish operations at the primary site.

DRP Steps
_ There must be a clear establishment of priorities.
_ There must be a clear delegation of roles and responsibilities.
_ Someone must initiate the alert roster and notify key personnel.
_ Someone must be tasked with the documentation of the disaster.

Crisis Management
Crisis management comprises the actions taken during and after a disaster, focusing on the people involved and addressing the viability of the business. The crisis management team is responsible for managing the event from an enterprise perspective and covers:
_ Supporting personnel and their families during the crisis.
_ Determining the impact on normal business operations and, if necessary, making a disaster declaration.
_ Keeping the public informed.
_ Communicating with major customers, suppliers, partners, regulatory agencies, industry organizations, the media, and other interested parties.

Business Continuity Planning
Business continuity planning outlines the reestablishment of critical business operations during a disaster that impacts operations. If a disaster has rendered the business unusable for continued operations, there must be a plan to allow the business to continue to function.

Continuity Strategies
There are a number of strategies for planning for business continuity. In general there are three exclusive options:
_ hot sites
_ warm sites
_ cold sites

And three shared functions:
_ timeshare
_ service bureaus
_ mutual agreements

Off-Site Disaster Data Storage
To get these types of sites up and running quickly, the organization must have the ability to port data into the new site's systems, using mechanisms such as:
_ Electronic vaulting: the bulk batch-transfer of data to an off-site facility.
_ Remote journaling: the transfer of live transactions to an off-site facility; only transactions are transferred, not archived data, and the transfer is real-time.
_ Database shadowing: not only processing duplicate real-time data storage, but also duplicating the databases at the remote site to multiple servers.

The Planning Document
_ Establish responsibility for managing the document, typically the security administrator
_ Appoint a secretary to document the activities and results of the planning session(s)
_ Form independent incident response and disaster recovery teams, with a common planning committee
_ Outline the roles and responsibilities for each team member
_ Develop the alert roster and lists of critical agencies
_ Identify and prioritize threats to the organization's information and information systems

The Planning Process
There are six steps in the contingency planning process:
_ Identifying the mission- or business-critical functions
_ Identifying the resources that support the critical functions
_ Anticipating potential contingencies or disasters
_ Selecting contingency planning strategies
_ Implementing the contingency strategies
_ Testing and revising the strategy

Using the Plan
_ Before the incident
_ During the incident
_ After the incident

Contingency Plan Format

Law Enforcement Involvement
When the incident at hand constitutes a violation of law, the organization may determine that involving law enforcement is necessary. There are several questions, which must then be answered:
_ When should the organization get law enforcement involved?
_ What level of law enforcement agency should be involved: local, state, or federal?
_ What will happen when the law enforcement agency is involved?

UNIT 8
WEB SECURITY

TOPICS:
1. Web security requirements
2. Secure Socket Layer (SSL)
3. Transport Layer Security (TLS)
4. Secure Electronic Transaction (SET)

Virtually all businesses, most government agencies, and many individuals now have Web sites. The number of individuals and companies with Internet access is expanding rapidly, and all of these have graphical Web browsers. As a result, businesses are enthusiastic about setting up facilities on the Web for electronic commerce. But the reality is that the Internet and the Web are extremely vulnerable to compromises of various sorts. As businesses wake up to this reality, the demand for secure Web services grows. The topic of Web security is a very broad one. In this chapter, we begin with a discussion of the general requirements for Web security and then focus on two standardized schemes that are becoming increasingly important as part of Web commerce: SSL/TLS and SET.

Web Security Considerations:
The World Wide Web is fundamentally a client/server application running over the Internet and TCP/IP intranets. As such, the security tools and approaches discussed so far in this book are relevant to the issue of Web security. But the Web presents new challenges not generally appreciated in the context of computer and network security:
-The Internet is two-way. Unlike traditional publishing environments, even electronic publishing systems involving teletext, voice response, or fax-back, the Web is vulnerable to attacks on the Web servers over the Internet.
-The Web is increasingly serving as a highly visible outlet for corporate and product information and as the platform for business transactions. Reputations can be damaged and money can be lost if the Web servers are subverted.
-Although Web browsers are very easy to use, Web servers are relatively easy to configure and manage, and Web content is increasingly easy to develop, the underlying software is extraordinarily complex. This complex software may hide many potential security flaws. The short history of the Web is filled with examples of new and upgraded systems, properly installed, that are vulnerable to a variety of security attacks.
-A Web server can be exploited as a launching pad into the corporation's or agency's entire computer complex. Once the Web server is subverted, an attacker may be able to gain access to data and systems not part of the Web itself but connected to the server at the local site.
-Casual and untrained (in security matters) users are common clients for Web-based services. Such users are not necessarily aware of the security risks that exist and do not have the tools or knowledge to take effective countermeasures.

Web Security Threats:
Table 1.1 provides a summary of the types of security threats faced in using the Web. One way to group these threats is in terms of passive and active attacks. Passive attacks include eavesdropping on network traffic between browser and server and gaining access to information on a Web site that is supposed to be restricted. Active attacks include impersonating another user, altering messages in transit between client and server, and altering information on a Web site.

Figure 1.1: Relative Location of Security Facilities in the TCP/IP Protocol Stack

Figure 1.1 illustrates this difference. One way to provide Web security is to use IP security (Figure 1.1a). The advantage of using IPSec is that it is transparent to end users and applications and provides a general-purpose solution. Further, IPSec includes a filtering capability so that only selected traffic need incur the overhead of IPSec processing. Another relatively general-purpose solution is to implement security just above TCP (Figure 1.1b). The foremost example of this approach is the Secure Sockets Layer (SSL) and the follow-on Internet standard known as Transport Layer Security (TLS). At this level, there are two implementation choices. For full generality, SSL (or TLS) could be provided as part of the underlying protocol suite and therefore be transparent to applications. Alternatively, SSL can be embedded in specific packages; for example, Netscape and Microsoft Explorer browsers come equipped with SSL, and most Web servers have implemented the protocol. Application-specific security services are embedded within the particular application. Figure 1.1c shows examples of this architecture. The advantage of this approach is that the service can be tailored to the specific needs of a given application. In the context of Web security, an important example of this approach is Secure Electronic Transaction (SET). The remainder of this chapter is devoted to a discussion of SSL/TLS and SET.

Secure Socket Layer and Transport Layer Security:
Netscape originated SSL. Version 3 of the protocol was designed with public review and input from industry and was published as an Internet draft document. Subsequently, when a consensus was reached to submit the protocol for Internet standardization, the TLS working group was formed within IETF to develop a common standard. The first published version of TLS can be viewed as essentially an SSLv3.1; it is very close to, and backward compatible with, SSLv3.

SSL Architecture
SSL is designed to make use of TCP to provide a reliable end-to-end secure service. SSL is not a single protocol but rather two layers of protocols, as illustrated in Figure 1.2.

Figure 1.2: SSL Protocol Stack

The SSL Record Protocol provides basic security services to various higher-layer protocols. In particular, the Hypertext Transfer Protocol (HTTP), which provides the transfer service for Web client/server interaction, can operate on top of SSL. Three higher-layer protocols are defined as part of SSL: the Handshake Protocol, the Change Cipher Spec Protocol, and the Alert Protocol. These SSL-specific protocols are used in the management of SSL exchanges and are examined later in this section.

Two important SSL concepts are the SSL session and the SSL connection, which are defined in the specification as follows:
-Connection: A connection is a transport (in the OSI layering model definition) that provides a suitable type of service. For SSL, such connections are peer-to-peer relationships. The connections are transient. Every connection is associated with one session.
-Session: An SSL session is an association between a client and a server. Sessions are created by the Handshake Protocol. Sessions define a set of cryptographic security parameters, which can be shared among multiple connections. Sessions are used to avoid the expensive negotiation of new security parameters for each connection.

Between any pair of parties (applications such as HTTP on client and server), there may be multiple secure connections. In theory, there may also be multiple simultaneous sessions between parties, but this feature is not used in practice. There are actually a number of states associated with each session. Once a session is established, there is a current operating state for both read and write (i.e., receive and send). In addition, during the Handshake Protocol, pending read and write states are created. Upon successful conclusion of the Handshake Protocol, the pending states become the current states.

A session state is defined by the following parameters (definitions taken from the SSL specification):
-Session identifier: An arbitrary byte sequence chosen by the server to identify an active or resumable session state.
-Peer certificate: An X509.v3 certificate of the peer. This element of the state may be null.
-Compression method: The algorithm used to compress data prior to encryption.
-Cipher spec: Specifies the bulk data encryption algorithm (such as null, AES, etc.) and a hash algorithm (such as MD5 or SHA-1) used for MAC calculation. It also defines cryptographic attributes such as the hash_size.
-Master secret: 48-byte secret shared between the client and server.
-Is resumable: A flag indicating whether the session can be used to initiate new connections.

A connection state is defined by the following parameters:
-Server and client random: Byte sequences that are chosen by the server and client for each connection.
-Server write MAC secret: The secret key used in MAC operations on data sent by the server.
-Client write MAC secret: The secret key used in MAC operations on data sent by the client.
-Server write key: The conventional encryption key for data encrypted by the server and decrypted by the client.
-Client write key: The conventional encryption key for data encrypted by the client and decrypted by the server.
-Initialization vectors: When a block cipher in CBC mode is used, an initialization vector (IV) is maintained for each key. This field is first initialized by the SSL Handshake Protocol. Thereafter the final ciphertext block from each record is preserved for use as the IV with the following record.
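To make the two parameter lists concrete, here is a minimal sketch that records them as Python data classes. The field names paraphrase the specification; this is a study aid, not an implementation of SSL.

from dataclasses import dataclass
from typing import Optional

@dataclass
class SessionState:
    session_id: bytes                   # chosen by the server
    peer_certificate: Optional[bytes]   # X.509v3 certificate, may be None
    compression_method: str
    cipher_spec: str                    # bulk cipher + MAC hash, e.g. "AES/SHA-1"
    master_secret: bytes                # 48-byte shared secret
    is_resumable: bool

@dataclass
class ConnectionState:
    server_random: bytes
    client_random: bytes
    server_write_mac_secret: bytes
    client_write_mac_secret: bytes
    server_write_key: bytes
    client_write_key: bytes
    iv: bytes                           # CBC initialization vector, per key

The split mirrors the prose above: a SessionState can be shared by many transient connections, while each ConnectionState holds the per-connection keys and nonces.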

Figure 1.4: SSL Record Format

Change Cipher Spec Protocol:
The Change Cipher Spec Protocol is one of the three SSL-specific protocols that use the SSL Record Protocol, and it is the simplest. This protocol consists of a single message (Figure 1.5a), which consists of a single byte with the value 1. The sole purpose of this message is to cause the pending state to be copied into the current state, which updates the cipher suite to be used on this connection.

Alert Protocol:
The Alert Protocol is used to convey SSL-related alerts to the peer entity. As with other applications that use SSL, alert messages are compressed and encrypted, as specified by the current state. Each message in this protocol consists of two bytes (Figure 1.5b). The first byte takes the value warning(1) or fatal(2) to convey the severity of the message. If the level is fatal, SSL immediately terminates the connection. Other connections on the same session may continue, but no new connections on this session may be established. The second byte contains a code that indicates the specific alert. First, we list those alerts that are always fatal (definitions from the SSL specification):
-unexpected_message: An inappropriate message was received.
-bad_record_mac: An incorrect MAC was received.
-decompression_failure: The decompression function received improper input (e.g., unable to decompress or decompress to greater than maximum allowable length).
-handshake_failure: Sender was unable to negotiate an acceptable set of security parameters given the options available.
-illegal_parameter: A field in a handshake message was out of range or inconsistent with other fields.
The remainder of the alerts are the following:
-close_notify: Notifies the recipient that the sender will not send any more messages on this connection. Each party is required to send a close_notify alert before closing the write side of a connection.
-no_certificate: May be sent in response to a certificate request if no appropriate certificate is available.
-bad_certificate: A received certificate was corrupt (e.g., contained a signature that did not verify).
-unsupported_certificate: The type of the received certificate is not supported.
-certificate_revoked: A certificate has been revoked by its signer.
-certificate_expired: A certificate has expired.
-certificate_unknown: Some other unspecified issue arose in processing the certificate, rendering it unacceptable.

Figure 1.5: SSL Record Protocol Payload

Handshake Protocol:
The most complex part of SSL is the Handshake Protocol. This protocol allows the server and client to authenticate each other and to negotiate an encryption and MAC algorithm and cryptographic keys to be used to protect data sent in an SSL record. The Handshake Protocol is used before any application data is transmitted. It consists of a series of messages exchanged by client and server, all of which have the format shown in Figure 1.5c. Each message has three fields:
-Type (1 byte): Indicates one of 10 messages.
-Length (3 bytes): The length of the message in bytes.
-Content (≥ 0 bytes): The parameters associated with this message.

Figure 1.6 shows the initial exchange needed to establish a logical connection between client and server. The exchange can be viewed as having four phases.

Phase 1. Establish Security Capabilities:
This phase is used to initiate a logical connection and to establish the security capabilities that will be associated with it. The exchange is initiated by the client, which sends a client_hello message with the following parameters:
-Version: The highest SSL version understood by the client.
-Random: A client-generated random structure, consisting of a 32-bit timestamp and 28 bytes generated by a secure random number generator. These values serve as nonces and are used during key exchange to prevent replay attacks.
-Session ID: A variable-length session identifier. A nonzero value indicates that the client wishes to update the parameters of an existing connection or create a new connection on this session. A zero value indicates that the client wishes to establish a new connection on a new session.
-CipherSuite: A list that contains the combinations of cryptographic algorithms supported by the client, in decreasing order of preference. Each element of the list (each cipher suite) defines both a key exchange algorithm and a CipherSpec; these are discussed subsequently.
-Compression Method: A list of the compression methods the client supports.

After sending the client_hello message, the client waits for the server_hello message, which contains the same parameters as the client_hello message. For the server_hello message, the following conventions apply. The Version field contains the lower of the version suggested by the client and the highest supported by the server. The Random field is generated by the server and is independent of the client's Random field. If the SessionID field of the client was nonzero, the same value is used by the server; otherwise the server's SessionID field contains the value for a new session. The CipherSuite field contains the single cipher suite selected by the server from those proposed by the client. The Compression field contains the compression method selected by the server from those proposed by the client.
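As a practical aside, the negotiation just described is easy to observe with Python's standard ssl module, which implements TLS, the successor protocol discussed below. A minimal sketch (the host name is only an example):

import socket
import ssl

context = ssl.create_default_context()   # sensible defaults, verifies certificates
with socket.create_connection(("www.example.com", 443)) as sock:
    # wrap_socket performs the handshake: hello messages, certificate,
    # key exchange, change cipher spec, and finished messages.
    with context.wrap_socket(sock, server_hostname="www.example.com") as tls:
        print(tls.version())   # negotiated protocol, e.g. 'TLSv1.3'
        print(tls.cipher())    # the single cipher suite the server selected

The two printed values correspond directly to the Version and CipherSuite fields negotiated in Phase 1.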

The first element of the CipherSuite parameter is the key exchange method (i.e., the means by which the cryptographic keys for conventional encryption and MAC are exchanged). The following key exchange methods are supported:
-RSA: The secret key is encrypted with the receiver's RSA public key. A public-key certificate for the receiver's key must be made available.
-Fixed Diffie-Hellman: This is a Diffie-Hellman key exchange in which the server's certificate contains the Diffie-Hellman public parameters signed by the certificate authority (CA). That is, the public-key certificate contains the Diffie-Hellman public-key parameters. The client provides its Diffie-Hellman public-key parameters either in a certificate, if client authentication is required, or in a key exchange message. This method results in a fixed secret key between two peers, based on the Diffie-Hellman calculation using the fixed public keys.
-Ephemeral Diffie-Hellman: This technique is used to create ephemeral (temporary, one-time) secret keys. In this case, the Diffie-Hellman public keys are exchanged, signed using the sender's private RSA or DSS key. The receiver can use the corresponding public key to verify the signature. Certificates are used to authenticate the public keys. This would appear to be the most secure of the three Diffie-Hellman options, because it results in a temporary, authenticated key.
-Anonymous Diffie-Hellman: The base Diffie-Hellman algorithm is used, with no authentication. That is, each side sends its public Diffie-Hellman parameters to the other, with no authentication. This approach is vulnerable to man-in-the-middle attacks, in which the attacker conducts anonymous Diffie-Hellman with both parties.
-Fortezza: The technique defined for the Fortezza scheme.

Following the definition of a key exchange method is the CipherSpec, which includes the following fields:
-CipherAlgorithm: Any of the algorithms mentioned earlier: RC4, RC2, DES, 3DES, DES40, IDEA, or Fortezza.

Transport Layer Security:
TLS is an IETF standardization initiative whose goal is to produce an Internet standard version of SSL. TLS is defined as a Proposed Internet Standard in RFC 2246. RFC 2246 is very similar to SSLv3. In this section, we highlight the differences.

Message Authentication Code
There are two differences between the SSLv3 and TLS MAC schemes: the actual algorithm and the scope of the MAC calculation. TLS makes use of the HMAC algorithm defined in RFC 2104. HMAC is defined as follows:

HMAC_K(M) = H[(K+ XOR opad) || H[(K+ XOR ipad) || M]]

where
H = embedded hash function (for TLS, either MD5 or SHA-1)
M = message input to HMAC
K+ = secret key padded with zeros so that the result is equal to the block length of the hash code (for MD5 and SHA-1, block length = 512 bits)
ipad = 00110110 (36 in hexadecimal) repeated 64 times (512 bits)
opad = 01011100 (5C in hexadecimal) repeated 64 times (512 bits)

SSLv3 uses the same algorithm, except that the padding bytes are concatenated with the secret key rather than being XORed with the secret key padded to the block length. The level of security should be about the same in both cases.
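The HMAC construction can be checked in a few lines. Below is a from-scratch sketch of the formula above using SHA-1, verified against Python's standard hmac module; note that RFC 2104 forms K+ by appending zeros to the key up to the block length.

import hashlib
import hmac

BLOCK = 64  # block length of MD5/SHA-1 in bytes (512 bits)

def tls_style_hmac(key: bytes, msg: bytes) -> bytes:
    if len(key) > BLOCK:
        key = hashlib.sha1(key).digest()        # long keys are hashed first
    k_plus = key.ljust(BLOCK, b"\x00")          # K+: key padded with zeros
    ipad = bytes(b ^ 0x36 for b in k_plus)      # K+ XOR ipad
    opad = bytes(b ^ 0x5C for b in k_plus)      # K+ XOR opad
    inner = hashlib.sha1(ipad + msg).digest()   # H[(K+ XOR ipad) || M]
    return hashlib.sha1(opad + inner).digest()  # H[(K+ XOR opad) || inner]

key, msg = b"secret-key", b"hello record"
assert tls_style_hmac(key, msg) == hmac.new(key, msg, hashlib.sha1).digest()
print(tls_style_hmac(key, msg).hex())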

Pseudorandom Function:
TLS makes use of a pseudorandom function referred to as PRF to expand secrets into blocks of data for purposes of key generation or validation. The objective is to make use of a relatively small shared secret value but to generate longer blocks of data in a way that is secure from the kinds of attacks made on hash functions and MACs. The PRF is based on the data expansion function shown in Figure 1.7.

Figure 1.7: TLS Function P_hash(secret, seed)

The data expansion function is iterated until enough output has been generated. The result of this algorithmic structure is a pseudorandom function. We can view the master_secret as the pseudorandom seed value to the function. The client and server random numbers can be viewed as salt values to complicate cryptanalysis.
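Here is a minimal sketch of the data expansion function as specified in RFC 2246, built from HMAC (shown with SHA-1; the full TLS PRF XORs a P_MD5 stream with a P_SHA-1 stream over split halves of the secret):

import hashlib
import hmac

def p_hash(secret: bytes, seed: bytes, length: int) -> bytes:
    """RFC 2246 P_hash: expand secret/seed into `length` pseudorandom bytes."""
    out = b""
    a = seed                                              # A(0) = seed
    while len(out) < length:
        a = hmac.new(secret, a, hashlib.sha1).digest()    # A(i) = HMAC(secret, A(i-1))
        out += hmac.new(secret, a + seed, hashlib.sha1).digest()
    return out[:length]

# e.g. expand a small shared secret into 104 bytes of key material
print(p_hash(b"master-secret", b"key expansion" + b"salts", 104).hex())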

Alert Codes:
TLS supports all of the alert codes defined in SSLv3 with the exception of no_certificate. A number of additional codes are defined in TLS; of these, the following are always fatal:
-decryption_failed: A ciphertext was decrypted in an invalid way; either it was not an even multiple of the block length or its padding values, when checked, were incorrect.
-record_overflow: A TLS record was received with a payload (ciphertext) whose length exceeds 2^14 + 2048 bytes, or the ciphertext decrypted to a length greater than 2^14 + 1024 bytes.
-unknown_ca: A valid certificate chain or partial chain was received, but the certificate was not accepted because the CA certificate could not be located or could not be matched with a known, trusted CA.
-access_denied: A valid certificate was received, but when access control was applied, the sender decided not to proceed with the negotiation.
-decode_error: A message could not be decoded because a field was out of its specified range or the length of the message was incorrect.
-decrypt_error: A handshake cryptographic operation failed, including being unable to verify a signature, decrypt a key exchange, or validate a finished message.
-export_restriction: A negotiation not in compliance with export restrictions on key length was detected.
-protocol_version: The protocol version the client attempted to negotiate is recognized but not supported.
-insufficient_security: Returned instead of handshake_failure when a negotiation has failed specifically because the server requires ciphers more secure than those supported by the client.
-internal_error: An internal error unrelated to the peer or the correctness of the protocol makes it impossible to continue.

The remainder of the new alerts include the following:
-user_canceled: This handshake is being canceled for some reason unrelated to a protocol failure.
-no_renegotiation: Sent by a client in response to a hello request, or by the server in response to a client hello after initial handshaking. Either of these messages would normally result in renegotiation, but this alert indicates that the sender is not able to renegotiate. This message is always a warning.

Cipher Suites
There are several small differences between the cipher suites available under SSLv3 and under TLS:
-Key Exchange: TLS supports all of the key exchange techniques of SSLv3 with the exception of Fortezza.
-Symmetric Encryption Algorithms: TLS includes all of the symmetric encryption algorithms found in SSLv3, with the exception of Fortezza.

Client Certificate Types:
TLS defines the following certificate types to be requested in a certificate_request message: rsa_sign, dss_sign, rsa_fixed_dh, and dss_fixed_dh. These are all defined in SSLv3. In addition, SSLv3 includes rsa_ephemeral_dh, dss_ephemeral_dh, and fortezza_kea. Ephemeral Diffie-Hellman involves signing the Diffie-Hellman parameters with either RSA or DSS; for TLS, the rsa_sign and dss_sign types are used for that function, so a separate signing type is not needed to sign Diffie-Hellman parameters. TLS does not include the Fortezza scheme.

Certificate_Verify and Finished Messages:
In the TLS certificate_verify message, the MD5 and SHA-1 hashes are calculated only over handshake_messages. Recall that for SSLv3, the hash calculation also included the master secret and pads. These extra fields were felt to add no additional security. As with the finished message in SSLv3, the finished message in TLS is a hash based on the shared master_secret, the previous handshake messages, and a label that identifies client or server; the calculation is somewhat different. For TLS, we have

PRF(master_secret, finished_label, MD5(handshake_messages) || SHA-1(handshake_messages))

where finished_label is the string "client finished" for the client and "server finished" for the server.

Cryptographic Computations:
The pre_master_secret for TLS is calculated in the same way as in SSLv3. As in SSLv3, the master_secret in TLS is calculated as a hash function of the pre_master_secret and the two hello random numbers.

Padding
In SSL, the padding added prior to encryption of user data is the minimum amount required so that the total size of the data to be encrypted is a multiple of the cipher's block length. In TLS, the padding can be any amount that results in a total that is a multiple of the cipher's block length, up to a maximum of 255 bytes. For example, if the plaintext (or compressed text if compression is used) plus MAC plus padding_length byte is 79 bytes long, then the padding length, in bytes, can be 1, 9, 17, and so on, up to 249. A variable padding length may be used to frustrate attacks based on an analysis of the lengths of exchanged messages.
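The arithmetic in that example can be checked directly. With an 8-byte block cipher (assumed here), 79 bytes of plaintext + MAC + padding-length byte need 1 more byte to reach a multiple of the block length, and every further valid amount is 8 bytes more:

BLOCK = 8                    # block length in bytes (e.g., DES/3DES)
payload = 79                 # plaintext + MAC + padding_length byte
valid = [n for n in range(256) if (payload + n) % BLOCK == 0]
print(valid)                 # [1, 9, 17, ..., 249]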

Secure Electronic Transaction:
SET is an open encryption and security specification designed to protect credit card transactions on the Internet. The current version, SETv1, emerged from a call for security standards by MasterCard and Visa in February 1996. A wide range of companies were involved in developing the initial specification, including IBM, Microsoft, Netscape, RSA, Terisa, and Verisign. Beginning in 1996, there have been numerous tests of the concept.

SET is not itself a payment system. Rather it is a set of security protocols and formats that enables users to employ the existing credit card payment infrastructure on an open network, such as the Internet, in a secure fashion. In essence, SET provides three services:
-Provides a secure communications channel among all parties involved in a transaction
-Provides trust by the use of X.509v3 digital certificates
-Ensures privacy because the information is only available to parties in a transaction when and where necessary

SET Overview:
A good way to begin our discussion of SET is to look at the business requirements for SET, its key features, and the participants in SET transactions.

Requirements:
The SET specification lists the following business requirements for secure payment processing with credit cards over the Internet and other networks:
-Provide confidentiality of payment and ordering information: It is necessary to assure cardholders that this information is safe and accessible only to the intended recipient. Confidentiality also reduces the risk of fraud by either party to the transaction or by malicious third parties. SET uses encryption to provide confidentiality.
-Ensure the integrity of all transmitted data: That is, ensure that no changes in content occur during transmission of SET messages. Digital signatures are used to provide integrity.
-Provide authentication that a cardholder is a legitimate user of a credit card account: A mechanism that links a cardholder to a specific account number reduces the incidence of fraud and the overall cost of payment processing. Digital signatures and certificates are used to verify that a cardholder is a legitimate user of a valid account.
-Provide authentication that a merchant can accept credit card transactions through its relationship with a financial institution: This is the complement to the preceding requirement. Cardholders need to be able to identify merchants with whom they can conduct secure transactions. Again, digital signatures and certificates are used.
-Ensure the use of the best security practices and system design techniques to protect all legitimate parties in an electronic commerce transaction: SET is a well-tested specification based on highly secure cryptographic algorithms and protocols.
-Create a protocol that neither depends on transport security mechanisms nor prevents their use: SET can securely operate over a "raw" TCP/IP stack. However, SET does not interfere with the use of other security mechanisms, such as IPSec and SSL/TLS.
-Facilitate and encourage interoperability among software and network providers: The SET protocols and formats are independent of hardware platform, operating system, and Web software.

Key Features of SET
To meet the requirements just outlined, SET incorporates the following features:
-Confidentiality of information: Cardholder account and payment information is secured as it travels across the network. An interesting and important feature of SET is that it prevents the merchant from learning the cardholder's credit card number; this is only provided to the issuing bank. Conventional encryption by DES is used to provide confidentiality.
-Integrity of data: Payment information sent from cardholders to merchants includes order information, personal data, and payment instructions. SET guarantees that these message contents are not altered in transit. RSA digital signatures, using SHA-1 hash codes, provide message integrity. Certain messages are also protected by HMAC using SHA-1.
-Cardholder account authentication: SET enables merchants to verify that a cardholder is a legitimate user of a valid card account number. SET uses X.509v3 digital certificates with RSA signatures for this purpose.
-Merchant authentication: SET enables cardholders to verify that a merchant has a relationship with a financial institution allowing it to accept payment cards. SET uses X.509v3 digital certificates with RSA signatures for this purpose.

Note that unlike IPSec and SSL/TLS, SET provides only one choice for each cryptographic algorithm. This makes sense, because SET is a single application with a single set of requirements, whereas IPSec and SSL/TLS are intended to support a range of applications.

SET Participants:
Figure 1.8 indicates the participants in the SET system, which include the following:

Figure 1.8: Secure Electronic Commerce Components

-Cardholder: In the electronic environment, consumers and corporate purchasers interact with merchants from personal computers over the Internet. A cardholder is an authorized holder of a payment card (e.g., MasterCard, Visa) that has been issued by an issuer.
-Merchant: A merchant is a person or organization that has goods or services to sell to the cardholder. Typically, these goods and services are offered via a Web site or by electronic mail. A merchant that accepts payment cards must have a relationship with an acquirer.
-Issuer: This is a financial institution, such as a bank, that provides the cardholder with the payment card. Typically, accounts are applied for and opened by mail or in person. Ultimately, it is the issuer that is responsible for the payment of the debt of the cardholder.
-Acquirer: This is a financial institution that establishes an account with a merchant and processes payment card authorizations and payments. Merchants will usually accept more than one credit card brand but do not want to deal with multiple bankcard associations or with multiple individual issuers. The acquirer provides authorization to the merchant that a given card account is active and that the proposed purchase does not exceed the credit limit. The acquirer also provides electronic transfer of payments to the merchant's account. Subsequently, the acquirer is reimbursed by the issuer over some sort of payment network for electronic funds transfer.

We now briefly describe the sequence of events that are required for a transaction. We will then look at some of the cryptographic details.
1. The customer opens an account. The customer obtains a credit card account, such as MasterCard or Visa, with a bank that supports electronic payment and SET.
2. The customer receives a certificate. After suitable verification of identity, the customer receives an X.509v3 digital certificate, which is signed by the bank. The certificate verifies the customer's RSA public key and its expiration date. It also establishes a relationship, guaranteed by the bank, between the customer's key pair and his or her credit card.
3. Merchants have their own certificates. A merchant who accepts a certain brand of card must be in possession of two certificates for two public keys owned by the merchant: one for signing messages, and one for key exchange. The merchant also needs a copy of the payment gateway's public-key certificate.

4. The customer places an order. This is a process that may involve the customer first browsing through the merchant's Web site to select items and determine the price. The customer then sends a list of the items to be purchased to the merchant, who returns an order form containing the list of items, their price, a total price, and an order number.

5. The merchant is verified. In addition to the order form, the merchant sends a copy of its certificate, so that the customer can verify that he or she is dealing with a valid store.

6. The order and payment are sent. The customer sends both order and payment information to the merchant, along with the customer's certificate. The order confirms the purchase of the items in the order form. The payment contains credit card details. The payment information is encrypted in such a way that it cannot be read by the merchant. The customer's certificate enables the merchant to verify the customer.

7. The merchant requests payment authorization. The merchant sends the payment information to the payment gateway, requesting authorization that the customer's available credit is sufficient for this purchase.

8. The merchant confirms the order. The merchant sends confirmation of the order to the customer.

9. The merchant provides the goods or service. The merchant ships the goods or provides the service to the customer.

10. The merchant requests payment. This request is sent to the payment gateway, which handles all of the payment processing.

Payment Processing:

Table 1.3 lists the transaction types supported by SET. In what follows we look in some detail at the following transactions:

- Purchase request
- Payment authorization
- Payment capture

Table 1.3 SET Transaction Types

Cardholder registration: Cardholders must register with a CA before they can send SET messages to merchants.

Merchant registration: Merchants must register with a CA before they can exchange SET messages with customers and payment gateways.

Purchase request: Message from customer to merchant containing OI for merchant and PI for bank.

Payment authorization: Exchange between merchant and payment gateway to authorize a given amount for a purchase on a given credit card account.

Payment capture: Allows the merchant to request payment from the payment gateway.

Certificate inquiry and status: If the CA is unable to complete the processing of a certificate request quickly, it will send a reply to the cardholder or merchant indicating that the requester should check back later. The cardholder or merchant sends the Certificate Inquiry message to determine the status of the certificate request and to receive the certificate if the request has been approved.

Purchase inquiry: Allows the cardholder to check the status of the processing of an order after the purchase response has been received. Note that this message does not include information such as the status of back ordered goods, but does indicate the status of authorization, capture and credit processing.

Authorization reversal: Allows a merchant to correct previous authorization requests. If the order will not be completed, the merchant reverses the entire authorization. If part of the order will not be completed (such as when goods are back ordered), the merchant reverses part of the amount of the authorization.

Capture reversal: Allows a merchant to correct errors in capture requests such as transaction amounts that were entered incorrectly by a clerk.

Credit: Allows a merchant to issue a credit to a cardholder's account such as when goods are returned or were damaged during shipping. Note that the SET Credit message is always initiated by the merchant, not the cardholder. All communications between the cardholder and merchant that result in a credit being processed happen outside of SET.

Credit reversal: Allows a merchant to correct a previously requested credit.

Payment gateway certificate request: Allows a merchant to query the payment gateway and receive a copy of the gateway's current key-exchange and signature certificates.

Batch administration: Allows a merchant to communicate information to the payment gateway regarding merchant batches.

Error message: Indicates that a responder rejects a message because it fails format or content verification tests.

Dual Signature: Before looking at the details of the SET protocol, let us discuss an important innovation introduced in SET: the dual signature. The purpose of the dual signature is to link two messages that are intended for two different recipients. In this case, the customer wants to send the order information (OI) to the merchant and the payment information (PI) to the bank. The merchant does not need to know the customer's credit card number, and the bank does not need to know the details of the customer's order. The customer is afforded extra protection in terms of privacy by keeping these two items separate. However, the two items must be linked in a way that can be used to resolve disputes if necessary. The link is needed so that the customer can prove that this payment is intended for this order and not for some other goods or service.

To see the need for the link, suppose that the customers send the merchant two messages: a signed OI and a signed PI, and the merchant passes the PI on to the bank. If the merchant can capture another OI from this customer, the merchant could claim that this OI goes with the PI rather than the original OI. The linkage prevents this. Figure 1.9 shows the use of a dual signature to meet the requirement of the preceding paragraph. The customer takes the hash (using SHA-1) of the PI and the hash of the OI. These two hashes are then concatenated and the hash of the result is taken. Finally, the customer encrypts the final hash with his or her private signature key, creating the dual signature. The operation can be summarized as

DS = E(PRc, [H(H(PI)||H(OI))])

where PRc is the customer's private signature key.

Figure 1.9 Construction of Dual Signature

Now suppose that the merchant is in possession of the dual signature (DS), the OI, and the message digest for the PI (PIMD). The merchant also has the public key of the customer, taken from the customer's certificate. Then the merchant can compute the two quantities

H(PIMD||H[OI]) and D(PUc, DS)

where PUc is the customer's public signature key. If these two quantities are equal, then the merchant has verified the signature. Similarly, if the bank is in possession of DS, PI, the message digest for OI (OIMD), and the customer's public key, then the bank can compute

H(H[PI]||OIMD) and D(PUc, DS)

Again, if these two quantities are equal, then the bank has verified the signature. In summary,

1. The merchant has received OI and verified the signature.
2. The bank has received PI and verified the signature.
3. The customer has linked the OI and PI and can prove the linkage.

For example, suppose the merchant wishes to substitute another OI in this transaction, to its advantage. It would then have to find another OI whose hash matches the existing OIMD. With SHA-1, this is deemed not to be feasible. Thus, the merchant cannot link another OI with this PI.
Purchase Request:

Before the Purchase Request exchange begins, the cardholder has completed browsing, selecting, and ordering. The end of this preliminary phase occurs when the merchant sends a completed order form to the customer. All of the preceding occurs without the use of SET. The purchase request exchange consists of four messages: Initiate Request, Initiate Response, Purchase Request, and Purchase Response.

In order to send SET messages to the merchant, the cardholder must have a copy of the certificates of the merchant and the payment gateway. The customer requests the certificates in the Initiate Request message, sent to the merchant. This message includes the brand of the credit card that the customer is using. The message also includes an ID assigned to this request/response pair by the customer and a nonce used to ensure timeliness.

The merchant generates a response and signs it with its private signature key. The response includes the nonce from the customer, another nonce for the customer to return in the next message, and a transaction ID for this purchase transaction. In addition to the signed response, the Initiate Response message includes the merchant's signature certificate and the payment gateway's key exchange certificate.

The cardholder verifies the merchant and gateway certificates by means of their respective CA signatures and then creates the OI and PI. The transaction ID assigned by the merchant is placed in both the OI and PI. The OI does not contain explicit order data such as the number and price of items. Rather, it contains an order reference generated in the exchange between merchant and customer during the shopping phase before the first SET message. Next, the cardholder prepares the Purchase Request message (Figure 1.10). For this purpose, the cardholder generates a one-time symmetric encryption key, Ks. The message includes the following:

1. Purchase-related information.
2. Order-related information.
3. Cardholder certificate.

When the merchant receives the Purchase Request message, it performs the following actions:

1. Verifies the cardholder certificates by means of its CA signatures.
2. Verifies the dual signature using the customer's public signature key. This ensures that the order has not been tampered with in transit and that it was signed using the cardholder's private signature key.
3. Processes the order and forwards the payment information to the payment gateway for authorization (described later).
4. Sends a purchase response to the cardholder.

Payment Authorization:

During the processing of an order from a cardholder, the merchant authorizes the transaction with the payment gateway. The payment authorization ensures that the transaction was approved by the issuer. This authorization guarantees that the merchant will receive payment; the merchant can therefore provide the services or goods to the customer. The payment authorization exchange consists of two messages: Authorization Request and Authorization Response.

Payment Capture:

To obtain payment, the merchant engages the payment gateway in a payment capture transaction, consisting of a capture request and a capture response message. For the Capture Request message, the merchant generates, signs, and encrypts a capture request block, which includes the payment amount and the transaction ID. The message also includes the encrypted capture token received earlier (in the Authorization Response) for this transaction, as well as the merchant's signature key and key-exchange key certificates.

When the payment gateway receives the capture request message, it decrypts and verifies the capture request block and decrypts and verifies the capture token block. It then checks for consistency between the capture request and capture token. It then creates a clearing request that is sent to the issuer over the private payment network. This request causes funds to be transferred to the merchant's account.

The gateway then notifies the merchant of payment in a Capture Response message. The message includes a capture response block that the gateway signs and encrypts. The message also includes the gateway's signature key certificate. The merchant software stores the capture response to be used for reconciliation with payment received from the acquirer.
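To make the gateway's consistency check concrete, here is a hedged Python sketch; the class and field names are hypothetical simplifications of the real SET message formats, and the signing and encryption of the blocks are omitted.

from dataclasses import dataclass

@dataclass
class CaptureToken:       # received earlier, in the Authorization Response
    transaction_id: str
    authorized_amount: float

@dataclass
class CaptureRequest:     # generated, signed, and encrypted by the merchant
    transaction_id: str
    amount: float

def gateway_consistency_check(req: CaptureRequest, tok: CaptureToken) -> bool:
    # What the gateway checks after decrypting and verifying both blocks
    return (req.transaction_id == tok.transaction_id
            and req.amount <= tok.authorized_amount)

tok = CaptureToken("txn-0017", 49.95)
print(gateway_consistency_check(CaptureRequest("txn-0017", 49.95), tok))  # True
print(gateway_consistency_check(CaptureRequest("txn-9999", 49.95), tok))  # False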


Software Testing

10CS842

UNIT 4

LEVELS OF TESTING, INTEGRATION TESTING

4.1 Traditional View of Testing Levels


The traditional model of software development is the Waterfall model, which is drawn as a V in
Figure 4.1 to emphasize the basic levels of testing. In this view, information produced in one of the
development phases constitutes the basis for test case identification at that level. Nothing
controversial here: we certainly would hope that system test cases are somehow correlated with the
requirements specification, and that unit test cases are derived from the detailed design of the unit.
Two observations: there is a clear presumption of functional testing here, and there is an implied
bottom-up testing order.

Figure 4.1 The Waterfall Life Cycle


Of the three traditional levels of testing (unit, integration, and system), unit testing is best
understood. The testing theory and techniques we worked through in Parts I and II are directly
applicable to unit testing. System testing is understood better than integration testing, but both need
clarification. The bottom-up approach sheds some insight: test the individual components, and then
integrate these into subsystems until the entire system is tested. System testing should be something
that the customer (or user) understands, and it often borders on customer acceptance testing.
Generally, system testing is functional rather than structural; this is mostly due to the absence of a
structural basis for system test cases. In the traditional view, integration testing is what's left over: it's not unit testing, and it's not system testing. Most of the usual discussions on integration testing center on the order in which units are integrated: top-down, bottom-up, or the big bang (everything at once). Of the three levels, integration is the least well understood; we'll do something about that in this chapter and the next. The waterfall model is closely associated with top-down development and design by functional decomposition. The end result of preliminary design is a functional decomposition of the entire system into a treelike structure of functional components. Figure 4.2 contains a partial functional decomposition of our ATM system. With this decomposition, top-down integration would begin with the main program, checking the calls to the
three next level procedures (Terminal I/O, ManageSessions, and ConductTransactions). Following
the tree, the ManageSessions procedure would be tested, and then the CardEntry, PIN Entry, and
SelectTransaction procedures. In each case, the actual code for lower level units is replaced by a
stub, which is a throw-away piece of code that takes the place of the actual code. Bottom-up
integration would be the opposite sequence, starting with the CardEntry, PIN Entry, and
SelectTransaction procedures, and working up toward the main program. In bottom-up integration,
units at higher levels are replaced by drivers (another form of throw-away code) that emulate the
procedure calls. The big bang approach simply puts all the units together at once, with no stubs or
drivers. Whichever approach is taken, the goal of traditional integration testing is to integrate previously tested units with respect to the functional decomposition tree. While this describes integration testing as a process, discussions of this type offer little information about the goals or techniques. Before addressing these (real) issues, we need to understand the consequences of the alternative life cycle models.

Figure 4.2 Partial Functional Decomposition of the ATM System

4.2 Alternative Life Cycle Models


Since the early 1980s, practitioners have devised alternatives in response to shortcomings of the
traditional waterfall model of software development [Agresti 86]. Common to all of these
alternatives is the shift away from the functional decomposition to an emphasis on composition.
Decomposition is a perfect fit both to the top-down progression of the waterfall model and to the
bottom-up testing order. One of the major weaknesses of waterfall development cited by [Agresti
86] is the over-reliance on this whole paradigm. Functional decomposition can only be well done
when the system is completely understood, and it promotes analysis to the near exclusion of
synthesis. The result is a very long separation between requirements specification and a completed
system, and during this interval, there is no opportunity for feedback from the customer.
Composition, on the other hand, is closer to the way people work: start with something known and understood, then add to it gradually, and maybe remove undesired portions. There is a very nice analogy with positive and negative sculpture. In negative sculpture, work proceeds by removing unwanted material, as in the mathematician's view of sculpting Michelangelo's David: start with a piece of marble, and simply chip away all non-David. Positive sculpture is often done with a
medium like wax. The central shape is approximated, and then wax is either added or removed until
the desired shape is attained. Think about the consequences of a mistake: with negative sculpture,
the whole work must be thrown away, and restarted. (There is a museum in Florence, Italy that
contains half a dozen such false starts to The David.) With positive sculpture, the erroneous part is
simply removed and replaced. The centrality of composition in the alternative models has a major
implication for integration testing.
4.2.1 Waterfall Spin-offs
There are three mainline derivatives of the waterfall model: incremental development, evolutionary
development, and the Spiral model [Boehm 88]. Each of these involves a series of increments or
builds, as shown in Figure 4.3. Within a build, the normal waterfall phases from detailed design
through testing occur, with one important difference: system testing is split into two steps,
regression and progression testing.

Figure 4.3 Life Cycle with a Build Sequence



It is important to keep preliminary design as an integral phase, rather than to try to amortize such
high level design across a series of builds. (To do so usually results in unfortunate consequences of
design choices made during the early builds that are regrettable in later builds.) Since preliminary
design remains a separate step, we are tempted to conclude that integration testing is unaffected in
the spin-off models. To some extent this is true: the main impact of the series of builds is that
regression testing becomes necessary. The goal of regression testing is to assure that things that
worked correctly in the previous build still work with the newly added code. Progression testing
assumes that regression testing was successful, and that the new functionality can be tested. (We
like to think that the addition of new code represents progress, not a regression.) Regression testing
is an absolute necessity in a series of builds because of the well-known ripple effect of changes to
an existing system. (The industrial average is that one change in five introduces a new fault.)

The differences among the three spin-off models are due to how the builds are identified. In
incremental development, the motivation for separate builds is usually to level off the staff profile.
With pure waterfall development, there can be a huge bulge of personnel for the phases from
detailed design through unit testing. Most organizations cannot support such rapid staff fluctuations,
so the system is divided into builds that can be supported by existing personnel. In evolutionary
development, there is still the presumption of a build sequence, but only the first build is defined.
Based on it, later builds are identified, usually in response to priorities set by the customer/user, so
the system evolves to meet the changing needs of the user. The spiral model is a combination of
rapid prototyping and evolutionary development, in which a build is defined first in terms of rapid
prototyping, and then is subjected to a go/no go decision based on technology-related risk factors.
From this we see that keeping preliminary design as an integral step is difficult for the evolutionary
and spiral models. To the extent that this cannot be maintained as an integral activity, integration
testing is negatively affected.
Because a build is a set of deliverable end-user functionality, one advantage of these spin-off
models is that all three yield earlier synthesis. This also results in earlier customer feedback, so two
of the deficiencies of waterfall development are mitigated.
4.2.2 Specification Based Models
Two other variations are responses to the complete understanding problem. (Recall that
functional decomposition is successful only when the system is completely understood.) When
systems are not fully understood (by either the customer or the developer), functional
decomposition is perilous at best. The rapid prototyping life cycle (Figure 4.4) deals with this by
drastically reducing the specification-to-customer feedback loop to produce very early synthesis.
Rather than build a final system, a quick and dirty prototype is built and then used to elicit
customer feedback. Depending on the feedback, more prototyping cycles may occur. Once the
developer and the customer agree that a prototype represents the desired system, the developer goes
ahead and builds to a correct specification. At this point, any of the waterfall spin-offs might also be
used.


Figure 4.4 Rapid Prototyping Life Cycle


Rapid prototyping has interesting implications for system testing. Where are the requirements? Is
the last prototype the specification? How are system test cases traced back to the prototype? One
good answer to questions such as these is to use the prototyping cycle(s) as information gathering
activities, and then produce a requirements specification in a more traditional manner. Another
possibility is to capture what the customer does with the prototype(s), define these as scenarios that
are important to the customer, and then use these as system test cases. The main contribution of
rapid prototyping is that it brings the operational (or behavioral) viewpoint to the requirements
specification phase. Usually, requirements specification techniques emphasize the structure of a
system, not its behavior. This is unfortunate, because most customers don't care about the structure, and they do care about the behavior.
Executable specifications (Figure 4.5) are an extension of the rapid prototyping concept. With this
approach, the requirements are specified in an executable format (such as finite state machines or
Petri nets). The customer then executes the specification to observe the intended system behavior,
and provides feedback as in the rapid prototyping model.

Figure 4.5 Executable Specification


One big difference is that the requirements specification document is explicit, as opposed to a
prototype. More important, it is often a mechanical process to derive system test cases from an
executable specification. Although more work is required to develop an executable specification,
this is partially offset by the reduced effort to generate system test cases. Another important
distinction: when system testing is based on an executable specification, we have a form of
structural testing at the system level.
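As a hedged illustration of how mechanical the derivation can be, the Python sketch below enumerates bounded event sequences through a toy finite state machine given as a transition table; the states and events are invented for the example, not taken from any particular specification.

# A toy executable specification: an FSM as a transition table.
FSM = {
    ("Idle",       "card inserted"): "AwaitPIN",
    ("AwaitPIN",   "correct PIN"):   "AwaitTrans",
    ("AwaitPIN",   "wrong PIN"):     "Idle",
    ("AwaitTrans", "cancel"):        "Idle",
}

def paths(state, depth, prefix=()):
    # Each complete event sequence is a candidate system test case.
    if depth == 0:
        yield prefix
        return
    extended = False
    for (s, event), nxt in FSM.items():
        if s == state:
            extended = True
            yield from paths(nxt, depth - 1, prefix + (event,))
    if not extended:
        yield prefix

for case in paths("Idle", 3):
    print(" -> ".join(case))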

4.2.3 An Object-Oriented Life Cycle Model


When software is developed with an object orientation, none of our life cycle models fit very well.
The main reasons: the object orientation is highly compositional in nature, and there is dense
interaction among the construction phases of object-oriented analysis, object-oriented design, and
object-oriented programming. We could show this with pronounced feedback loops among
waterfall phases, but the fountain model [Henderson-Sellers 90] is a much more appropriate

metaphor. In the fountain model (see Figure 4.6), the foundation is the requirements analysis of real world systems.



Figure 4.6 Fountain Model of Object-Oriented Software Development
As the object-oriented paradigm proceeds, details bubble up through specification, design, and
coding phases, but at each stage, some of the flow drops back to the previous phase(s). This
model captures the reality of the way people actually work (even with the traditional approaches).


4.3 Formulations of the SATM System


In this and the next three chapters, we will relate our discussion to a higher level example, the
Simple Automatic Teller Machine (SATM) system. The version developed here is a revision of that
found in [Topper 93]; it is built around the fifteen screens shown in Figure 4.7. This is a greatly reduced system; commercial ATM systems have hundreds of screens and numerous time-outs.

Figure 4.7 Screens for the SATM System

The SATM terminal is sketched in Figure 4.8; in addition to the display screen, there are function
buttons B1, B2, and B3, a digit keypad with a cancel key, slots for printer receipts and ATM cards,
and doors for deposits and cash withdrawals. The SATM system is described here in two ways:
with a structured analysis approach, and with an object-oriented approach. These descriptions are
not complete, but they contain detail sufficient to illustrate the testing techniques under discussion.
4.3.1 SATM with Structured Analysis
The structured analysis approach to requirements specification is the most widely used method in
the world. It enjoys extensive CASE tool support as well as commercial training, and is described in
numerous texts. The technique is based on three complementary models: function, data, and control.
Here we use data flow diagrams for the functional models, entity/relationship models for data, and
finite state machine models for the control aspect of the SATM system. The functional and data
models were drawn with the Deft CASE tool from Sybase Inc. That tool identifies external devices
(such as the terminal doors) with lower case letters, and elements of the functional decomposition
with numbers (such as 1.5 for the Validate Card function). The open and filled arrowheads on flow
arrows signify whether the flow item is simple or compound. The portions of the SATM system
shown here pertain generally to the personal identification number (PIN) verification portion of the
system.


The Deft CASE tool distinguishes between simple and compound flows, where compound flows
may be decomposed into other flows, which may themselves be compound. The graphic appearance
of this choice is that simple flows have filled arrowheads, while compound flows have open
arrowheads. As an example, the compound flow screen has the following decomposition:


screen1    Welcome
screen2    Enter PIN
screen3    Wrong PIN
screen4    PIN failed, card retained
screen5    Select transaction type
screen6    Select account type
screen7    Enter amount
screen8    Insufficient funds
screen9    Cannot dispense that amount
screen10   Cannot process withdrawals
screen11   Take your cash
screen12   Cannot process deposits
screen13   Put deposit envelope in slot
screen14   Another transaction?
screen15   Thanks; take card and receipt

Figure 4.8 The SATM Terminal

The dataflow diagrams and the entity/relationship model contain information that is primarily
structural. This is problematic for testers, because test cases are concerned with behavior, not with
structure. As a supplement, the functional and data information are linked by a control model; here
we use a finite state machine. Control models represent the point at which structure and behavior
intersect; as such, they are of special utility to testers.

Figure 4.9 Context Diagram of the SATM System

Figure 4.10 Level 1 Dataflow Diagram of the SATM System


Figure 4.11 Entity/Relationship Model of the SATM System

Figure 4.11 is an (incomplete) Entity/Relationship diagram of the major data structures in the
SATM system: Customers, Accounts, Terminals, and Transactions. Good data modeling practice
dictates postulating an entity for each portion of the system that is described by data that is retained
(and used by functional components). Among the data the system would need for each customer are
the customer's identification and personal account number (PAN); these are encoded into the magnetic strip on the customer's ATM card. We would also want to know information about a customer's account(s), including the account numbers, the balances, the type of account (savings or checking), and the Personal Identification Number (PIN) of the account. At this point, we might ask why the PIN is not associated with the customer, and the PAN with an account. Some design has crept into the specification at this point: if the data were as questioned, a person's ATM card could
be used by anyone; as it is, the present separation predisposes a security checking procedure. Part of
the E/R model describes relationships among the entities: a customer HAS account(s), a customer
conducts transaction(s) in a SESSION, and, independent of customer information, transaction(s)
OCCUR at an ATM terminal. The single and double arrowheads signify the singularity or plurality
of these relationships: one customer may have several accounts, and may conduct none or several
transactions. Many transactions may occur at a terminal, but one transaction never occurs at a
multiplicity of terminals.

Figure 4.12 Upper Level SATM Finite State Machine

The upper level finite state machine in Figure 4.12 divides the system into states that correspond to
stages of customer usage. Other choices are possible, for instance, we might choose states to be
screens being displayed (this turns out to be a poor choice). Finite state machines can be
hierarchically decomposed in much the same way as dataflow diagrams. The decomposition of the
Await PIN state is shown in Figure 4.13. In both of these figures, state transitions are caused either
by events at the ATM terminal (such as a keystroke) or by data conditions (such as the recognition
that a PIN is correct). When a transition occurs, a corresponding action may also occur. We choose
to use screen displays as such actions; this choice will prove to be very handy when we develop
system level test cases.
The function, data, and control models are the basis for design activities in the waterfall model (and
its spin-offs). During design, some of the original decisions may be revised based on additional
insights and more detailed requirements (for example, performance or reliability goals). The end
result is a functional decomposition such as the partial one shown in the structure chart in Figure
4.14. Notice that the original first level decomposition into four subsystems is continued: the functionality has been decomposed to lower levels of detail. Choices such as these are the essence of
design, and design is beyond the scope of this book. In practice, testers often have to live with the
results of poor design choices.

Figure 4.13 PIN Entry Finite State Machine


If we only use a structure chart to guide integration testing, we miss the fact that some (typically
lower level) functions are used in more than one place. Here, for example, the ScreenDriver
function is used by several other modules, but it only appears once in the functional decomposition.
In the next chapter, we will see that a call graph is a much better basis for integration test case
identification. We can develop the beginnings of such a call graph from a more detailed view of


portions of the system. To support this, we need a numbered decomposition, and a more detailed
view of two of the components.
Here is the functional decomposition carried further in outline form; the numbering scheme preserves the levels of the components in Figure 4.14.

Figure 4.14 A Decomposition Tree for the SATM System

1 SATM System
1.1 Device Sense & Control
1.1.1 Door Sense & Control
1.1.1.1 Get Door Status
1.1.1.2 Control Door
1.1.1.3 Dispense Cash
1.1.2 Slot Sense & Control
1.1.2.1 WatchCardSlot
1.1.2.2 Get Deposit Slot Status
1.1.2.3 Control Card Roller
1.1.2.4 Control Envelope Roller
1.1.2.5 Read Card Strip
1.2 Central Bank Comm.
1.2.1 Get PIN for PAN
1.2.2 Get Account Status
1.2.3 Post Daily Transactions
1.3 Terminal Sense & Control
1.3.1 Screen Driver
1.3.2 Key Sensor
1.4 Manage Session
1.4.1 Validate Card
1.4.2 Validate PIN
1.4.2.1 GetPIN
1.4.3 Close Session
1.4.3.1 New Transaction Request
1.4.3.2 Print Receipt
1.4.3.3 Post Transaction Local
1.4.4 Manage Transaction
1.4.4.1 Get Transaction Type
1.4.4.2 Get Account Type
1.4.4.3 Report Balance
1.4.4.4 Process Deposit
1.4.4.5 Process Withdrawal

As part of the specification and design process, each functional component is normally expanded to show its inputs, outputs, and mechanism. We do this here with pseudo-code (or PDL, for program design language) for three modules. This particular PDL is loosely based on Pascal; the point of any PDL is to communicate, not to develop something that can be compiled. The main program description follows the finite state machine description given in Figure 4.12. States in that diagram are implemented with a Case statement.

Main Program
State = AwaitCard
CASE State OF
AwaitCard:    ScreenDriver(1, null)
              WatchCardSlot(CardSlotStatus)
              WHILE CardSlotStatus is Idle DO
                 WatchCardSlot(CardSlotStatus)
              ControlCardRoller(accept)
              ValidateCard(CardOK, PAN)
              IF CardOK THEN State = AwaitPIN
                        ELSE ControlCardRoller(eject)
                             State = AwaitCard
AwaitPIN:     ValidatePIN(PINok, PAN)
              IF PINok THEN ScreenDriver(2, null)
                            State = AwaitTrans
                       ELSE ScreenDriver(4, null)
                            State = AwaitCard
AwaitTrans:   ManageTransaction
              State = CloseSession
CloseSession: IF NewTransactionRequest
              THEN State = AwaitTrans
              ELSE PrintReceipt
                   PostTransactionLocal
                   CloseSession
                   ControlCardRoller(eject)
                   State = AwaitCard
End, (CASE State)
END. (Main program SATM)
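For readers who prefer running code to PDL, here is a rough Python transcription of the main program's state machine; it is a sketch, not the SATM code: the device procedures are reduced to trivial stand-ins, and a terminating state is added so the loop halts.

def screen_driver(n, text=None):
    print(f"[screen {n}] {text or ''}")

def satm_main(card_ok: bool, pin_ok: bool, more_transactions: bool):
    state = "AwaitCard"
    while state != "Done":
        if state == "AwaitCard":
            screen_driver(1)                            # welcome screen
            state = "AwaitPIN" if card_ok else "Done"   # else eject card
        elif state == "AwaitPIN":
            state = "AwaitTrans" if pin_ok else "Done"  # screen 2 vs. 4
        elif state == "AwaitTrans":
            state = "CloseSession"                      # transaction elided
        elif state == "CloseSession":
            if more_transactions:
                more_transactions = False
                state = "AwaitTrans"
            else:
                state = "Done"                          # receipt, eject card

satm_main(card_ok=True, pin_ok=True, more_transactions=True)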

The ValidatePIN procedure is based on another finite state machine shown in Figure 4.13, in which
states refer to the number of PIN entry attempts.
Procedure ValidatePIN(PINok, PAN)
GetPINforPAN(PAN, ExpectedPIN)
Try = First
CASE Try OF
First:
ScreenDriver(2, null)
GetPIN(EnteredPIN)
IF EnteredPIN = ExpectedPIN
THEN
PINok = TRUE
RETURN
ELSE
ScreenDriver(3, null)
Try = Second
Second:
ScreenDriver(2, null)
GetPIN(EnteredPIN)
IF EnteredPIN = ExpectedPIN
THEN
PINok = TRUE
RETURN
ELSE
ScreenDriver(3, null)
Try = Third
Third:
ScreenDriver(2, null)
GetPIN(EnteredPIN)
IF EnteredPIN = ExpectedPIN
THEN
PINok = TRUE
RETURN
ELSE
ScreenDriver(4, null)
PINok = FALSE
END, (CASE Try)
END. (Procedure ValidatePIN)

The GetPIN procedure is based on another finite state machine in which states refer to the number of digits received; in any state, either another digit key can be touched, or the cancel key can be touched. Rather than another CASE statement implementation, the states are collapsed into iterations of a WHILE loop.
Procedure GetPIN(EnteredPIN, CancelHit)
Local Data: DigitKeys = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
BEGIN
CancelHit = FALSE
EnteredPIN = null string
DigitsRcvd=0
WHILE NOT(DigitsRcvd= 4 OR CancelHit) DO
BEGIN
KeySensor(KeyHit)
IF KeyHit IN DigitKeys
THEN BEGIN
EnteredPIN = EnteredPIN + KeyHit
INCREMENT(DigitsRcvd)
IF DigitsRcvd=1 THEN ScreenDriver(2, 'X-')
IF DigitsRcvd=2 THEN ScreenDriver(2, 'XX-')
IF DigitsRcvd=3 THEN ScreenDriver(2, 'XXX-')
IF DigitsRcvd=4 THEN ScreenDriver(2, 'XXXX')
END
END {WHILE}
END. (Procedure GetPIN)
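The same loop is easy to exercise in Python if the key sensor is injected as an iterator; this is a sketch under that assumption, not the SATM code, and the echo format is simplified.

DIGITS = set("0123456789")

def get_pin(key_source):
    # Collect up to four digits; stop early if 'cancel' arrives.
    entered, cancelled = "", False
    while len(entered) < 4 and not cancelled:
        key = next(key_source)        # stands in for KeySensor(KeyHit)
        if key in DIGITS:
            entered += key
            print("X" * len(entered) + "-" * (4 - len(entered)))  # echo
        elif key == "cancel":
            cancelled = True
    return entered, cancelled

print(get_pin(iter(["8", "8", "7", "cancel"])))   # ('887', True)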

If we follow the pseudocode in these three modules, we can identify the uses relationship among
the modules in the functional decomposition.

Module        Uses Modules
SATM Main     WatchCardSlot, Control Card Roller, Screen Driver, Validate Card, Validate PIN, Manage Transaction, New Transaction Request
ValidatePIN   GetPINforPAN, GetPIN, Screen Driver
GetPIN        KeySensor, Screen Driver

Notice that the uses information is not readily apparent in the functional decomposition. This
information is developed (and extensively revised) during the more detailed phases of the design
process. We will revisit this in Chapter 13.

4.4 Separating Integration and System Testing


We are almost in a position to make a clear distinction between integration and system testing. We
need this distinction to avoid gaps and redundancies across levels of testing, to clarify appropriate
goals for these levels, and to understand how to identify test cases at different levels. This whole
discussion is facilitated by a concept essential to all levels of testing: the notion of a thread. A
thread is a construct that refers to execution time behavior; when we test a system, we use test cases
to select (and execute) threads. We can speak of levels of threads: system threads describe system
level behavior, integration threads correspond to integration level behavior, and unit threads
correspond to unit level behavior. Many authors use the term, but few define it, and of those that do,
the offered definitions aren't very helpful. For now, we take thread to be a primitive term, much
like function and data. In the next two chapters, we shall see that threads are most often recognized
in terms of the way systems are described and developed. For example, we might think of a thread
as a path through a finite state machine description of a system, or we might think of a thread as
something that is determined by a data context and a sequence of port level input events, such as
those in the context diagram of the SATM system. We could also think of a thread as a sequence of
source statements, or as a sequence of machine instructions. The point is, threads are a generic
concept, and they exist independently of how a system is described and developed.
We have already observed the structural versus behavioral dichotomy; here we shall find that both
of these views help us separate integration and system testing. The structural view reflects both the
process by which a system is built and the techniques used to build it. We certainly expect that test
cases at various levels can be traced back to developmental information. While this is necessary, it
fails to be sufficient: we will finally make our desired separation in terms of behavioral constructs.

4.4.1 Structural Insights

Everyone agrees that there must be some distinction, and that integration testing is at a more
detailed level than system testing. There is also general agreement that integration testing can safely
assume that the units have been separately tested, and that, taken by themselves, the units function
correctly. One common view, therefore, is that integration testing is concerned with the interfaces
among the units. One possibility is to fall back on the symmetries in the waterfall life cycle model,
and say that integration testing is concerned with preliminary design information, while system
testing is at the level of the requirements specification. This is a popular academic view, but it begs
an important question: how do we discriminate between specification and preliminary design? The
pat academic answer to this is the what vs. how dichotomy: the requirements specification defines
what, and the preliminary design describes how. While this sounds good at first, it doesn't stand up well in practice. Some scholars argue that just the choice of a requirements specification technique is a design choice.

The life cycle approach is echoed by designers who often take a "Don't Tread On Me" view of a requirements specification: a requirements specification should neither predispose nor preclude a design option. With this view, when information in a specification is so detailed that it steps on the designer's toes, the specification is too detailed. This sounds good, but it still doesn't yield an operational way to separate integration and system testing.
The models used in the development process provide some clues. If we follow the definition of the
SATM system, we could first postulate that system testing should make sure that all fifteen display
screens have been generated. (An output domain based, functional view of system testing.) The
entity/relationship model also helps: the one-to-one and one-to-many relationships help us
understand how much testing must be done. The control model (in this case, a hierarchy of finite
state machines) is the most helpful. We can postulate system test cases in terms of paths through the
finite state machine(s); doing this yields a system level analog of structural testing. The functional
models (dataflow diagrams and structure charts) move in the direction of levels because both
express a functional decomposition. Even with this, we cannot look at a structure chart and identify
where system testing ends and integration testing starts. The best we can do with structural
information is identify the extremes. For instance, the following threads are all clearly at the system
level:
1. Insertion of an invalid card. (this is probably the shortest system thread)
2. Insertion of a valid card, followed by three failed PIN entry attempts.
3. Insertion of a valid card, a correct PIN entry attempt, followed by a balance inquiry.
4. Insertion of a valid card, a correct PIN entry attempt, followed by a deposit.
5. Insertion of a valid card, a correct PIN entry attempt, followed by a withdrawal.
6. Insertion of a valid card, a correct PIN entry attempt, followed by an attempt to withdraw more
cash than the account balance.


We can also identify some integration level threads. Go back to the PDL descriptions of
ValidatePIN and GetPIN. ValidatePIN calls GetPIN, and GetPIN waits for KeySensor to report
when a key is touched. If a digit is touched, GetPIN echoes an X to the display screen, but if the
cancel key is touched, GetPIN terminates, and ValidatePIN considers another PIN entry attempt.
We could push still lower, and consider keystroke sequences such as two or three digits followed by
cancel keystroke.
4.4.2 Behavioral Insights

Here is a pragmatic, explicit distinction that has worked well in industrial applications. Think about
a system in terms of its port boundary, which is the location of system level inputs and outputs.
Every system has a port boundary; the port boundary of the SATM system includes the digit
keypad, the function buttons, the screen, the deposit and withdrawal doors, the card and receipt
slots, and so on. Each of these devices can be thought of as a port, and events occur at system
ports. The port input and output events are visible to the customer, and the customer very often
understands system behavior in terms of sequences of port events. Given this, we mandate that
system port events are the primitives of a system test case, that is, a system test case (or
equivalently, a system thread) is expressed as an interleaved sequence of port input and port output
events. This fits our understanding of a test case, in which we specify pre-conditions, inputs,
outputs, and post-conditions. With this mandate we can always recognize a level violation: if a test
case (thread) ever requires an input (or an output) that is not visible at the port boundary, the test
case cannot be a system level test case (thread). Notice that this is clear, recognizable, and
enforceable. We will refine this in Chapter 14 when we discuss threads of system behavior.
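One way to make this mandate concrete is to represent a system thread literally as an interleaved sequence of port input and port output events. In the hedged Python sketch below, the port names follow the SATM port boundary, but the event encoding itself is invented; the final check is the level-violation test described above.

PORTS = {"card_slot", "keypad", "function_buttons", "screen",
         "cash_door", "deposit_door", "receipt_slot"}

# Thread 2 from the earlier list: valid card, three failed PIN attempts.
thread = [
    ("in",  "card_slot", "valid card"),
    ("out", "screen",    "screen 2: enter PIN"),
    ("in",  "keypad",    "wrong PIN"),
    ("out", "screen",    "screen 3: wrong PIN"),
    ("in",  "keypad",    "wrong PIN"),
    ("out", "screen",    "screen 3: wrong PIN"),
    ("in",  "keypad",    "wrong PIN"),
    ("out", "screen",    "screen 4: PIN failed, card retained"),
]

# A sequence qualifies as a system level test case only if every
# event occurs at the port boundary.
assert all(port in PORTS for _, port, _ in thread)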

4.5 A Closer Look at the SATM System

Craftspersons are recognized by two essential characteristics: they have a deep knowledge of the tools of their trade, and they have a similar knowledge of the medium in which they work, so that they understand their tools in terms of how they work with the medium. In Parts II and III, we focused on the tools (techniques) available to the testing craftsperson. Our goal there was to understand testing techniques in terms of their advantages and limitations with respect to particular types of faults. Here we shift our emphasis to the medium, with the goal that a better understanding of the medium will improve the testing craftsperson's judgment.

Earlier we described the SATM system in terms of its output screens (Figure 4.7), the terminal itself (Figure 4.8), its context and partial dataflow (Figures 4.9 and 4.10), an entity/relationship model of its data (Figure 4.11), finite state machines describing some of its behavior (Figures 4.12 and 4.13), and a partial functional decomposition (Figure 4.14). We also developed a PDL description of the main program and two units, ValidatePIN and GetPIN.

We begin here by expanding the functional decomposition that was started in Figure 4.14; the numbering scheme preserves the levels of the components in that figure. For easier reference, each component that appears in our analysis is given a new (shorter) number; these numbers are given in Table 1. (The only reason for this is to make the figures and spreadsheet more readable.) If you look closely at the units that are designated by letters, you see that they are packaging levels in the decomposition; they are never called as procedures.

Table 1 SATM Units and Abbreviated Names

Unit Number   Level Number   Unit Name
1             1              SATM System
A             1.1            Device Sense & Control
B             1.1.1          Door Sense & Control
2             1.1.1.1        Get Door Status
3             1.1.1.2        Control Door
4             1.1.1.3        Dispense Cash
C             1.1.2          Slot Sense & Control
5             1.1.2.1        WatchCardSlot
6             1.1.2.2        Get Deposit Slot Status
7             1.1.2.3        Control Card Roller
8             1.1.2.4        Control Envelope Roller
9             1.1.2.5        Read Card Strip
D             1.2            Central Bank Comm.
11            1.2.1          Get PIN for PAN
12            1.2.2          Get Account Status
13            1.2.3          Post Daily Transactions
E             1.3            Terminal Sense & Control
14            1.3.1          Screen Driver
15            1.3.2          Key Sensor
F             1.4            Manage Session
16            1.4.1          Validate Card
17            1.4.2          Validate PIN
18            1.4.2.1        GetPIN
G             1.4.3          Close Session
19            1.4.3.1        New Transaction Request
20            1.4.3.2        Print Receipt
21            1.4.3.3        Post Transaction Local
22            1.4.4          Manage Transaction
23            1.4.4.1        Get Transaction Type
24            1.4.4.2        Get Account Type
25            1.4.4.3        Report Balance
26            1.4.4.4        Process Deposit
27            1.4.4.5        Process Withdrawal

The decomposition in Table 1 is pictured as a decomposition tree in Figure 4.5.1. This decomposition is the basis for the usual view of integration testing. It is important to remember that such a decomposition is primarily a packaging partition of the system. As software design moves into more detail, the added information lets us refine the functional decomposition tree into a unit calling graph. The unit calling graph is the directed graph in which nodes are program units and edges correspond to program calls; that is, if unit A calls unit B, there is a directed edge from node A to node B. We began the development of the call graph for the SATM system when we examined the calls made by the main program and the ValidatePIN and GetPIN modules. That information is captured in the adjacency matrix given below in Table 2. This matrix was created with a spreadsheet; this turns out to be a handy tool for testers.

Figure 4.5.1 SATM Functional Decomposition Tree

Table 2 Adjacency Matrix for the SATM Call Graph

Figure 4.5.2 SATM Call Graph

The SATM call graph is shown in Figure 4.5.2. Some of the hierarchy is obscured to reduce the confusion in the drawing. One thing should be quite obvious: drawings of call graphs do not scale up well. Both the drawings and the adjacency matrix provide insights to the tester. Nodes with high degree will be important to integration testing, and paths from the main program (node 1) to the sink nodes can be used to identify contents of builds for an incremental development.
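A spreadsheet is one tool; a few lines of Python over the same call information work just as well. The sketch below keys an adjacency structure by the unit numbers of Table 1, but lists only three callers, so the degrees it reports are illustrative rather than the full SATM matrix.

calls = {
    1:  {5, 7, 16, 17, 19, 22},   # SATM Main (abridged callee set)
    17: {11, 14, 18},             # ValidatePIN
    18: {14, 15},                 # GetPIN
}

out_degree = {u: len(vs) for u, vs in calls.items()}
in_degree = {}
for vs in calls.values():
    for v in vs:
        in_degree[v] = in_degree.get(v, 0) + 1

# Nodes with high degree deserve special attention during integration.
print(sorted(in_degree.items(), key=lambda kv: -kv[1])[:3])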


4.5.1 Decomposition Based Integration

Most textbook discussions of integration testing only consider integration testing based on the functional decomposition of the system being tested. These approaches are all based on the functional decomposition, expressed either as a tree (Figure 4.5.1) or in textual form. These discussions inevitably center on the order in which modules are to be integrated. There are four choices: from the top of the tree downward (top down), from the bottom of the tree upward (bottom up), some combination of these (sandwich), or most graphically, none of these (the big bang). All of these integration orders presume that the units have been separately tested; thus the goal of decomposition based integration is to test the interfaces among separately tested units.

We can dispense with the big bang approach most easily: in this view of integration, all the units are compiled together and tested at once. The drawback to this is that when (not if!) a failure is observed, there are few clues to help isolate the location(s) of the fault. (Recall the distinction we made in Chapter 1 between faults and failures.)

4.5.2 Top-Down Integration

Top-down integration begins with the main program (the root of the tree). Any lower level unit that is called by the main program appears as a stub, where stubs are pieces of throw-away code that emulate a called unit. If we performed top-down integration testing for the SATM system, the first step would be to develop stubs for all the units called by the main program: WatchCardSlot, Control Card Roller, Screen Driver, Validate Card, Validate PIN, Manage Transaction, and New


Transaction Request. Generally, testers have to develop the stubs, and some imagination is required. Here are two examples of stubs.

Procedure GetPINforPAN (PAN, ExpectedPIN)    STUB
IF PAN = '1123' THEN ExpectedPIN := '8876';
IF PAN = '1234' THEN ExpectedPIN := '8765';
IF PAN = '8746' THEN ExpectedPIN := '1253';
End,

Procedure KeySensor (KeyHit)    STUB
data: KeyStrokes STACK OF '8', '8', '7', 'cancel'
KeyHit = POP (KeyStrokes)
End,

In the stub for GetPINforPAN, the tester replicates a table look-up with just a few values that will appear in test cases. In the stub for KeySensor, the tester must devise a sequence of port events that can occur once each time the KeySensor procedure is called. (Here, we provided the keystrokes to partially enter the PIN 8876, but the user hit the cancel button before the fourth digit.) In practice, the effort to develop stubs is usually quite significant. There is good reason to consider stub code as part of the software development, and maintain it under configuration management.

Once all the stubs for SATM main have been provided, we test the main program as if it were a stand-alone unit. We could apply any of the appropriate functional and structural techniques, and look for faults. When we are convinced that the main program logic is correct, we gradually replace stubs with the actual code. Even this can be problematic. Would we replace all the stubs at once? If we did, we would have a small bang for units with a high outdegree. If we replace one stub at a time, we retest the main program once for each replaced stub. This means that, for the SATM main program example here, we would repeat its integration test eight times (once for each replaced stub, and once with all the stubs).
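The same stubbing idea carries over directly to a modern unit-testing setting. In the Python sketch below, the canned lookup table plays the role of the GetPINforPAN stub; the validate_pin function and its injected collaborator are invented for illustration.

def get_pin_for_pan_stub(pan: str) -> str:
    # Canned table replicating the PDL stub above
    return {"1123": "8876", "1234": "8765", "8746": "1253"}[pan]

def validate_pin(pan: str, entered_pin: str, lookup=get_pin_for_pan_stub):
    # Unit under test; the collaborator is injected so the stub can
    # stand in for the real central-bank call.
    return lookup(pan) == entered_pin

assert validate_pin("1123", "8876") is True
assert validate_pin("1234", "0000") is False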

4.5.3 Bottom-up Integration

Bottom-up integration is a mirror image to the top-down order, with the difference that stubs are replaced by driver modules that emulate units at the next level up in the tree. In bottom-up integration, we start with the leaves of the decomposition tree (units like ControlDoor and DispenseCash), and test them with specially coded drivers. There is probably less throw-away code in drivers than there is in stubs. Recall we had one stub for each child node in the decomposition tree. Most systems have a fairly high fan-out near the leaves, so in the bottom-up integration order, we won't have as many drivers. This is partially offset by the fact that the driver modules will be more complicated.

4.5.4 Sandwich Integration

Sandwich integration is a combination of top-down and bottom-up integration. If we think about it in terms of the decomposition tree, we are really just doing big bang integration on a sub-tree. There will be less stub and driver development effort, but this will be offset to some extent by the added difficulty of fault isolation that is a consequence of big bang integration. (We could probably discuss the size of a sandwich, from dainty finger sandwiches to Dagwood-style sandwiches, but not now.)

4.6 Call Graph Based Integration

One of the drawbacks of decomposition based integration is that the basis is the functional decomposition tree. If we use the call graph instead, we mitigate this deficiency; we also move in the direction of behavioral testing. We are in a position to enjoy the investment we made in the discussion of graph theory. Since the call graph is a directed graph, why not use it the way we used program graphs? This leads us to two new approaches to integration testing: we'll refer to them as pair-wise integration and neighborhood integration.

4.6.1 Pair-wise Integration


The idea behind pair-wise integration is to eliminate the stub/driver development effort. Rather than
develop stubs and/or drivers, why not use the actual code? At first, this sounds like big bang
integration, but we restrict a session to just a pair of units in the call graph. The end result is that we
have one integration test session for each edge in the call graph (40 for the SATM call graph in
Figure 4.2). This is not much of a reduction in sessions from either top-down or bottom-up (42
sessions), but it is a drastic reduction in stub/driver development.
4.6.2 Neighborhood Integration
We can let the mathematics carry us still further by borrowing the notion of a neighborhood from
topology. (This isnt too much of a stretch graph theory is a branch of topology.) We
(informally) define the neighborhood of a node in a graph to be the set of nodes that are one edge
away from the given node. In a directed graph, this means all the immediate predecessor nodes and
all the immediate successor nodes (notice that these correspond to the set of stubs and drivers of the
node). The eleven neighborhoods for the SATM example (based on the call graph in Figure 4.2) are
given in Table 3.
Table 3 SATM Neighborhoods

Node    Predecessors    Successors
16      1               9, 10, 12
17      1               11, 14, 18
18      17              14, 15
19      1               14, 15
23      22              14, 15
24      22              14, 15
26      22              14, 15, 6, 8, 2, 3
27      22              14, 15, 2, 3, 4, 13
25      22              15
22      1               23, 24, 26, 27, 25
1       n/a             5, 7, 2, 21, 16, 17, 19, 22

We can always compute the number of neighborhoods for a given call graph. There will be one
neighborhood for each interior node, plus one extra in case there are leaf nodes connected directly
to the root node. (An interior node has a non-zero indegree and a non-zero outdegree.) We have

Interior nodes = nodes - (source nodes + sink nodes)
Neighborhoods = interior nodes + source nodes

which combine to

Neighborhoods = nodes - sink nodes

Neighborhood integration yields a drastic reduction in the number of integration test sessions (down
to 11 from 40), and it avoids stub and driver development. The end result is that neighborhoods are
essentially the sandwiches that we slipped past in the previous section. (There is a slight difference,
because the base information for neighborhoods is the call graph, not the decomposition tree.) What
they share with sandwich integration is more significant: neighborhood integration testing has the
fault isolation difficulties of "medium bang" integration.
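Both session counts can be checked mechanically from Table 3, since the table is just an adjacency view of the call graph: pair-wise sessions equal the number of edges, and neighborhoods equal nodes minus sink nodes. A small Python sketch (the adjacency list is read directly off Table 3, so it is only as complete as the table):

# Sketch: verifying the session counts on the SATM call graph of Table 3.
# succ maps each non-leaf node to its successors.
succ = {
    1:  [5, 7, 2, 21, 16, 17, 19, 22],
    16: [9, 10, 12],
    17: [11, 14, 18],
    18: [14, 15],
    19: [14, 15],
    22: [23, 24, 26, 27, 25],
    23: [14, 15],
    24: [14, 15],
    26: [14, 15, 6, 8, 2, 3],
    27: [14, 15, 2, 3, 4, 13],
    25: [15],
}

nodes = set(succ) | {n for targets in succ.values() for n in targets}
edges = sum(len(targets) for targets in succ.values())
sinks = {n for n in nodes if n not in succ}   # outdegree zero

print(edges)                    # 40 pair-wise integration sessions
print(len(nodes) - len(sinks))  # 11 neighborhoods = nodes - sink nodes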
4.7 Path Based Integration

Much of the progress in the development of mathematics comes from an elegant pattern: have a
clear idea of where you want to go, and then define the concepts that take you there. We do this
here for path based integration testing, but first we need to motivate the definitions.

We already know that the combination of structural and functional testing is highly desirable at the
unit level; it would be nice to have a similar capability for integration (and system) testing. We also
know that we want to express system testing in terms of behavioral threads. Lastly, we revise our
goal for integration testing: rather than test interfaces among separately developed and tested units,
we focus on interactions among these units. ("Co-functioning" might be a good term.) Interfaces are
structural; interaction is behavioral.

When a unit executes, some path of source statements is traversed. Suppose that there is a call to
another unit along such a path: at that point, control is passed from the calling unit to the called unit,
where some other path of source statements is traversed. We cleverly ignored this situation in Part
III, because this is a better place to address the question. There are two possibilities: abandon the
single-entry, single-exit precept and treat such calls as an exit followed by an entry, or suppress
the call statement because control eventually returns to the calling unit anyway. The suppression
choice works well for unit testing, but it is antithetical to integration testing.

We can finally make the definitions for path based integration testing. Our goal is to have an
integration testing analog of DD-Paths.

Definition

An MM-Path is an interleaved sequence of module execution paths and messages.

The basic idea of an MM-Path is that we can now describe sequences of module execution paths
that include transfers of control among separate units. Since these transfers are by messages, MM-Paths
always represent feasible execution paths, and these paths cross unit boundaries. We can find
MM-Paths in an extended program graph in which nodes are module execution paths and edges are
messages. The hypothetical example in Figure 4.7.3 shows an MM-Path (the dark line) in which
module A calls module B, which in turn calls module C.

Figure 4.7.3 MM-Path Across Three Units

In module A, nodes 1 and 5 are source nodes, and nodes 4 and 6 are sink nodes. Similarly, in
module B, nodes 1 and 3 are source nodes, and nodes 2 and 4 are sink nodes. Module C has a single
source node, 1, and a single sink node, 4. There are seven module execution paths in Figure 4.7.3:

MEP(A,1) = <1, 2, 3, 5>
MEP(A,2) = <1, 2, 4>
MEP(A,3) = <5, 6>
MEP(B,1) = <1, 2>
MEP(B,2) = <3, 4>
MEP(C,1) = <1, 2, 4, 5>
MEP(C,2) = <1, 3, 4, 5>

We can now define an integration testing analog of the DD-Path graph that serves unit testing so
effectively.

Definition

Given a set of units, their MM-Path graph is the directed graph in which nodes are module
execution paths and edges correspond to messages and returns from one unit to another.

Notice that MM-Path graphs are defined with respect to a set of units. This directly supports
composition of units and composition based integration testing. We can even compose down to the
level of individual module execution paths, but that is probably more detailed than necessary.

Figure 4.7.4 MM-Path Graph Derived from Figure 4.7.3

Figure 4.7.4 shows the MM-Path graph for the example in Figure 4.7.3. The solid arrows indicate
messages; the corresponding returns are indicated by dotted arrows. We should consider the
relationships among module execution paths, program paths, DD-Paths, and MM-Paths. A program
path is a sequence of DD-Paths, and an MM-Path is a sequence of module execution paths.
Unfortunately, there is no simple relationship between DD-Paths and module execution paths.
Either might be contained in the other, but more likely, they partially overlap. Since MM-Paths
implement a function that transcends unit boundaries, we do have one relationship: consider the
"intersection" of an MM-Path with a unit. The module execution paths in such an intersection are
an analog of a slice with respect to the (MM-Path) function. Stated another way, the module
execution paths in such an intersection are the restriction of the function to the unit in which they
occur.

The MM-Path definition needs some practical guidelines. How long is an MM-Path? Nothing in the
definition prohibits an MM-Path from covering an entire ATM session. (This extreme loses the forest
because of the trees.) There are three observable behavioral criteria that put endpoints on MM-Paths.
The first is event quiescence, which occurs when a system is (nearly) idle, waiting for a
port input event to trigger further processing. The SATM system exhibits event quiescence in
several places: one is the tight loop at the beginning of SATM Main where the system has displayed
the welcome screen and is waiting for a card to be entered into the card slot. Event quiescence is a
system level property; there is an analog at the integration level: message quiescence. Message
quiescence occurs when a unit that sends no messages is reached (like module C in Figure 4.7.3).
There is a still subtler form: data quiescence. This occurs when a sequence of processing culminates
in the creation of stored data that is not immediately used. In the ValidateCard unit, the account
balance is obtained, but it is not used until after a successful PIN entry. Figure 4.7.5 shows how data
quiescence appears in a traditional dataflow diagram.

Figure 4.7.5 Data Quiescence

The first guideline for MM-Paths: points of quiescence are natural endpoints for an MM-Path.
Our second guideline also serves to distinguish integration from system testing.

Definition

An atomic system function (ASF) is an action that is observable at the system level in terms of port
input and output events.

An atomic system function begins with a port input event, traverses one or more MM-Paths, and
terminates with a port output event. When viewed from the system level, there is no compelling
reason to decompose an ASF into lower levels of detail (hence the atomicity). In the SATM system,
digit entry is a good example of an ASF; so are card entry, cash dispensing, and session closing.
PIN entry is probably too big; it might be called a "molecular system function."

Our second guideline: atomic system functions are an upper limit for MM-Paths; we don't want
MM-Paths to cross ASF boundaries. This means that ASFs represent the seam between integration
and system testing. They are the largest item to be tested by integration testing, and the smallest
item for system testing. We can test an ASF at both levels. Again, the digit entry ASF is a good
example. During system testing, the port input event is a physical key press that is detected by
KeySensor and sent to GetPIN as a string variable. (Notice that KeySensor performs the physical-to-logical
transition.) GetPIN determines whether a digit key or the cancel key was pressed, and
responds accordingly. (Notice that button presses are ignored.) The ASF terminates with either
screen 2 or screen 4 being displayed. Rather than require system keystrokes and visible screen displays, we
could use a driver to provide these, and test the digit entry ASF via integration testing. We can see
this using our continuing example.
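Before turning to the SATM specifics, a small sketch may help fix the MM-Path machinery. It encodes the seven module execution paths listed above and one interleaving for the A-calls-B-calls-C scenario; since Figure 4.7.3 is not reproduced here, the particular interleaving chosen is an assumption for illustration.

# Hypothetical sketch: an MM-Path as an interleaved sequence of module
# execution paths (MEPs) and messages, per the definitions above.

# The seven MEPs listed above, keyed by (unit, index).
MEP = {
    ('A', 1): (1, 2, 3, 5), ('A', 2): (1, 2, 4), ('A', 3): (5, 6),
    ('B', 1): (1, 2),       ('B', 2): (3, 4),
    ('C', 1): (1, 2, 4, 5), ('C', 2): (1, 3, 4, 5),
}

# One plausible MM-Path: A runs and messages B; B runs and messages C;
# C runs to message quiescence; control then returns through B to A.
mm_path = [
    MEP[('A', 2)], 'msg A->B',
    MEP[('B', 1)], 'msg B->C',
    MEP[('C', 1)], 'return C->B',
    MEP[('B', 2)], 'return B->A',
    MEP[('A', 3)],
]

# In the MM-Path graph, nodes are MEPs and edges are the messages/returns.
edges = [(mm_path[i], mm_path[i + 2]) for i in range(0, len(mm_path) - 2, 2)]
print(len(edges), "edges along this MM-Path")   # -> 4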
4.7.2 MM-Paths and ASFs in the SATM System
The PDL descriptions developed in Chapter 12 are repeated for convenient reference; statement
fragments are numbered as we did to construct program graphs.
1.  Main Program
2.  State = AwaitCard
3.  CASE State OF
4.  AwaitCard: ScreenDriver (1, null)
5.      WatchCardSlot (CardSlotStatus)
6.      WHILE CardSlotStatus is Idle DO
7.          WatchCardSlot (CardSlotStatus)
8.      ControlCardRoller (accept)
9.      ValidateCard (CardOK, PAN)
10.     IF CardOK THEN State = AwaitPIN
11.     ELSE ControlCardRoller (eject)
12.         State = AwaitCard
13. AwaitPIN: ValidatePIN (PINok, PAN)
14.     IF PINok THEN ScreenDriver (5, null)
15.         State = AwaitTrans
16.     ELSE ScreenDriver (4, null)
17.         State = AwaitCard
18. AwaitTrans: ManageTransaction
19.     State = CloseSession
20. CloseSession: IF NewTransactionRequest
21.     THEN State = AwaitTrans
22.     ELSE PrintReceipt
23.         PostTransactionLocal
24.         CloseSession
25.         ControlCardRoller (eject)
26.         State = AwaitCard
27. End, (CASE State)
28. END. (Main program SATM)

29. Procedure ValidatePIN (PINok, PAN)
30.     GetPINforPAN (PAN, ExpectedPIN)
31.     Try = First
32.     CASE Try OF
33.     First: ScreenDriver (2, null)
34.         GetPIN (EnteredPIN)
35.         IF EnteredPIN = ExpectedPIN
36.         THEN PINok = TRUE
37.             RETURN
38.         ELSE ScreenDriver (3, null)
39.             Try = Second
40.     Second: ScreenDriver (2, null)
41.         GetPIN (EnteredPIN)
42.         IF EnteredPIN = ExpectedPIN
43.         THEN PINok = TRUE
44.             RETURN
45.         ELSE ScreenDriver (3, null)
46.             Try = Third
47.     Third: ScreenDriver (2, null)
48.         GetPIN (EnteredPIN)
49.         IF EnteredPIN = ExpectedPIN
50.         THEN PINok = TRUE
51.             RETURN
52.         ELSE ScreenDriver (4, null)
53.             PINok = FALSE
54.     END, (CASE Try)
55. END. (Procedure ValidatePIN)

56. Procedure GetPIN (EnteredPIN, CancelHit)
57.     Local Data: DigitKeys = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 }
58.     BEGIN
59.     CancelHit = FALSE
60.     EnteredPIN = null string
61.     DigitsRcvd = 0
62.     WHILE NOT (DigitsRcvd = 4 OR CancelHit) DO
63.         BEGIN
64.         KeySensor (KeyHit)
65.         IF KeyHit IN DigitKeys
66.         THEN BEGIN
67.             EnteredPIN = EnteredPIN + KeyHit
68.             INCREMENT (DigitsRcvd)
69.             IF DigitsRcvd = 1 THEN ScreenDriver (2, 'X-')
70.             IF DigitsRcvd = 2 THEN ScreenDriver (2, 'XX-')
71.             IF DigitsRcvd = 3 THEN ScreenDriver (2, 'XXX-')
72.             IF DigitsRcvd = 4 THEN ScreenDriver (2, 'XXXX')
73.             END
74.         END {WHILE}
75. END. (Procedure GetPIN)

There are 20 source nodes in SATM Main: 1, 5, 6, 8, 9, 10, 12, 14, 15, 17, 18, 19, 20, 21, 22, 23,
24, 25, 26, 27. ValidatePIN has 11 source nodes: 29, 31, 34, 35, 39, 41, 42, 46, 48, 49, 53; and in
GetPIN there are 6 source nodes: 56, 65, 70, 71, 72, 73.
SATM Main contains 16 sink nodes: 4, 5, 7, 8, 9, 11, 13, 14, 16, 18, 20, 22, 23, 24, 25, 28. There
are 14 sink nodes in ValidatePIN: 30, 33, 34, 37, 38, 40, 41, 44, 45, 47, 48, 51, 52, 55; and 5 sink
nodes in GetPIN: 64, 69, 70, 71, 72.
Most of the module execution paths in SATM Main are very short; this pattern is due to the high
density of messages to other units. Here are the first few module execution paths in SATM Main:
<1, 2, 3, 4>, <5>, <6, 7>, and <8>. The module execution paths in ValidatePIN are slightly longer:
<29, 30>, <31, 32, 33>, <34>, <35, 36, 37>, and so on. The beginning portion of GetPIN is a good
example of a module execution path: the sequence <58, 59, 60, 61, 62, 63, 64> begins with a
source node (58) and ends with a sink node (64), which is a call to the KeySensor procedure. This is
also a point of event quiescence, where nothing will happen until the customer touches a key.

There are four MM-Paths in statements 64 through 72: each begins with KeySensor observing a
port input event (a keystroke) and ends with a closely knit family of port output events (the calls to
ScreenDriver with different PIN echoes). We could name these four MM-Paths GetDigit1,
GetDigit2, GetDigit3, and GetDigit4. They are slightly different because the later ones include the
earlier IF statements. (If the tester were also the designer, this module might be reworked so that the
WHILE loop repeated a single MM-Path.) Technically, each of these is also an atomic system
function, since each begins and ends with port events.
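Here is a sketch of how the digit entry ASF could be exercised at the integration level, per the driver idea above: a scripted KeySensor plays the role of the port input events, and a recording ScreenDriver captures the port output events. The Python rendering of GetPIN is a loose, assumed translation of statements 56-75, not the actual SATM code.

# Hypothetical sketch: driving the four GetDigit MM-Paths of GetPIN with a
# scripted KeySensor and a recording ScreenDriver.

def get_pin(key_sensor, screen_driver):
    """Simplified GetPIN: gather up to 4 digits, echoing after each one."""
    digit_keys = set('0123456789')
    entered, echoes = '', ['X-', 'XX-', 'XXX-', 'XXXX']
    while len(entered) < 4:
        key = key_sensor()                  # port input event (statement 64)
        if key == 'cancel':
            return entered, True
        if key in digit_keys:
            entered += key
            screen_driver(2, echoes[len(entered) - 1])   # port output event
    return entered, False

script = iter(['8', '8', '7', '6'])
displayed = []
pin, cancelled = get_pin(lambda: next(script),
                         lambda screen, echo: displayed.append(echo))

assert (pin, cancelled) == ('8876', False)
assert displayed == ['X-', 'XX-', 'XXX-', 'XXXX']   # the four GetDigit MM-Paths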

There are interesting ASFs in ValidatePIN. This unit controls all screen displays relevant to the PIN
entry process. It begins with the display of screen 2 (which asks the customer to enter his/her PIN).
Next, GetPIN is called, and the system is event quiescent until a keystroke occurs. These keystrokes
initiate the GetDigit ASFs we just discussed. Here we find a curious integration fault: notice that
screen 2 is displayed in two places, by the THEN clauses in the WHILE loop in GetPIN and by the
first statement in each CASE clause in ValidatePIN. We could fix this by removing the screen
displays from GetPIN and simply returning the echo string (e.g., 'X-') to be displayed.
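A sketch of the suggested repair, under the same illustrative rendering as above: the digit-gathering code computes the echo string and returns it, so that ValidatePIN (the caller) performs every screen display itself.

# Hypothetical sketch of the fix: no screen I/O inside the digit-gathering
# code; the echo string is returned for the caller to display.

def get_digit(key_sensor, digits_rcvd):
    """Read one key; return (key, echo_string) with no ScreenDriver call."""
    key = key_sensor()
    if key == 'cancel':
        return key, None
    return key, 'X' * (digits_rcvd + 1) + ('-' if digits_rcvd < 3 else '')

key, echo = get_digit(lambda: '8', 0)
assert (key, echo) == ('8', 'X-')   # ValidatePIN now calls ScreenDriver(2, echo)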
