10IS81
Software Architecture
UNIT-1
Q1) With the help of a neat block diagram of the ABC (Architecture Business Cycle), explain in detail the different activities involved in creating software architecture. (10 Marks)(June/July 2014)
Soln:
Software architecture is a result of technical, business, and social influences. Its existence in turn
affects the technical, business, and social environments that subsequently influence future
architectures. We call this cycle of influences, from the environment to the architecture and
back to the environment, the Architecture Business Cycle (ABC).
The various activities involved in creating software architecture are:
1. Creating the business case for the system.
2. Understanding the requirements.
3. Creating or selecting the architecture.
4. Documenting and communicating the architecture. The architecture must be communicated clearly and unambiguously to all of its stakeholders.
5. Analyzing or evaluating the architecture.
6. Implementing the system based on the architecture.
7. Ensuring that the implementation conforms to the architecture.
Developers must understand the work assignments it requires of them, testers must
understand the task structure it imposes on them, management must understand the
scheduling implications it suggests, and so forth.
Constant vigilance is required to ensure that the actual architecture and its
representation remain faithful to each other during this phase.
Q2) Explain the Architecture Business Cycle? (June 2012)(Dec 2012)(Dec/Jan 2013)(10 Marks)
Soln:
Software architecture is a result of technical, business, and social influences. Its existence in turn
affects the technical, business, and social environments that subsequently influence future
architectures. We call this cycle of influences, from the environment to the architecture and
back to the environment, the Architecture Business Cycle (ABC).
Because the architecture prescribes a structure for a system, it particularly prescribes the units of software that must be
implemented (or otherwise obtained) and integrated to form the system. These units are the basis
for the development project's structure. Teams are formed for individual software units and the
development, test, and integration activities all revolve around the units. Likewise, schedules and
budgets allocate resources in chunks corresponding to the units. If a company becomes adept at
building families of similar systems, it will tend to invest in each team by nurturing each area of
expertise. Teams become embedded in the organization's structure. This is feedback from the
architecture to the developing organization.
In the software product line, separate groups were given responsibility for building and
maintaining individual portions of the organization's architecture for a family of products. In any
design undertaken by the organization at large, these groups have a strong voice in the system's
decomposition, pressuring for the continued existence of the portions they control.
The architecture can affect the goals of the developing organization. A successful system built from
it can enable a company to establish a foothold in a particular market area. The architecture
can provide opportunities for the efficient production and deployment of similar systems, and the
organization may adjust its goals to take advantage of its newfound expertise to plumb the market.
This is feedback from the system to the developing organization and the systems it builds.
The architecture can affect customer requirements for the next system by giving the customer the
opportunity to receive a system (based on the same architecture) in a more reliable, timely, and
economical manner than if the subsequent system were to be built from scratch. The customer
may be willing to relax some requirements to gain these economies. Shrink-wrapped software has
clearly affected people's requirements by providing solutions that are not tailored to their precise
needs but are instead inexpensive and (in the best of all possible worlds) of high quality. Product
lines have the same effect on customers who cannot be so flexible with their requirements.
The process of system building will affect the architect's experience with subsequent systems by
adding to the corporate experience base. A system that was successfully built around a tool bus or
.NET or encapsulated finite-state machines will engender similar systems built the same way in
the future. On the other hand, architectures that fail are less likely to be chosen for future projects.
A few systems will influence and actually change the software engineering culture, that is, the
technical environment in which system builders operate and learn. The first relational
databases, compiler generators, and table-driven operating systems had this effect in the 1960s
and early 1970s; the first spreadsheets and windowing systems, in the 1980s. When such
pathfinder systems are constructed, subsequent systems are affected by their legacy.
Q3) Define Software Architecture. Explain the common software architecture structures? (Dec 12/Jan 13)(June/July 13)(Dec/Jan 2014)(10 Marks)
Soln:
Architectural structures can by and large be divided into three groups, depending on the broad nature of
the elements they show.
Dept. of CSE, SJBIT
Module structures.
Here the elements are modules, which are units of implementation. Modules represent a code-based way
of considering the system. They are assigned areas of functional responsibility. There is less emphasis on
how the resulting software manifests itself at runtime. Module structures allow us to answer questions
such as What is the primary functional responsibility assigned to each module? What other software
elements is a module allowed to use? What other software does it actually use? What modules are related
to other modules by generalization or specialization (i.e., inheritance) relationships?
Component-and-connector structures.
Here the elements are runtime components (which are the principal units of computation) and connectors
(which are the communication vehicles among components). Component-and-connector structures help
answer questions such as What are the major executing components and how do they interact? What are
the major shared data stores? Which parts of the system are replicated? How does data progress through
the system? What parts of the system can run in parallel? How can the system's structure change as it
executes?
Allocation structures.
Allocation structures show the relationship between the software elements and the elements in one or
more external environments in which the software is created and executed. They answer questions
such as: What processor does each software element execute on? In what files is each element stored
during development, testing, and system building? What is the assignment of software elements to
development teams?
Architecture is the structure of the components of a program or system, their interrelationships, and
the principles and guidelines governing their design and evolution over time. Any system has an
architecture that can be discovered and analyzed independently of any knowledge of the process by
which the architecture was designed or evolved.
Architecture is components and connectors. Connectors imply a runtime mechanism for
transferring control and data around a system. When we speak of "relationships" among elements, we
intend to capture both runtime and non-runtime relationships.
Q5) Define the Software Architecture? Discuss in detail the implications of the definition? (Dec 12)(10 Marks)
Soln:
There are fundamentally three reasons for software architecture's importance from a technical
perspective.
Communication among stakeholders: software architecture represents a common abstraction of
a system that most, if not all, of the system's stakeholders can use as a basis for mutual
understanding, negotiation, consensus, and communication.
Early design decisions: software architecture manifests the earliest design decisions about a
system with respect to the system's remaining development, its deployment, and its maintenance
life. It is the earliest point at which design decisions governing the system to be built can be
analyzed.
Transferable abstraction of a system: a software architecture model is transferable across
systems. It can be applied to other systems exhibiting similar quality attribute and functional
requirements and can promote large-scale reuse.
We will address each of these points in turn:
Q4) State and explain the different architectural activities (June/July 13) (10 Marks)
Soln:
Architecture is high-level design. Other tasks associated with design are not architectural, such
as deciding on important data structures that will be encapsulated.
Architecture is the overall structure of the system. The different structures provide the
critical engineering leverage points to imbue a system with the quality attributes that will render it a
success or failure. The multiplicity of structures in an architecture lies at the heart of the concept.
Each stakeholder of a software system - customer, user, project manager, coder, tester, and so on - is
concerned with different system characteristics that are affected by the architecture. For example, the user is
concerned that the system is reliable and available when needed; the customer is concerned that the
architecture can be implemented on schedule and to budget; the manager is worried that the
architecture will allow teams to work largely independently, interacting in disciplined and
controlled ways. Architecture provides a common language in which different concerns can be
expressed, negotiated, and resolved at a level that is intellectually manageable even for large, complex
systems.
ARCHITECTURE MANIFESTS THE EARLIEST SET OF DESIGN DECISIONS
Software architecture represents a system's earliest set of design decisions. These early decisions are the
most difficult to get correct and the hardest to change later in the development process, and they have the
most far-reaching effects.
i) The architecture defines constraints on implementation
This means that the implementation must be divided into the prescribed elements, the elements must
interact with each other in the prescribed fashion, and each element must fulfill its responsibility to the
others as dictated by the architecture.
ii) The architecture dictates organizational structure
The normal method for dividing up the labor in a large system is to assign different groups different
portions of the system to construct. This is called the work breakdown structure of a system.
iii) The architecture inhibits or enables a system's quality attributes
Whether a system will be able to exhibit its desired (or required) quality attributes is substantially
determined by its architecture. However, the architecture alone cannot guarantee functionality or quality.
Decisions at all stages of the life cycle, from high-level design to coding and implementation, affect
system quality. Quality is not completely a function of architectural design. To ensure quality, a good
architecture is necessary, but not sufficient.
iv) Predicting system qualities by studying the architecture
Architecture evaluation techniques such as the architecture tradeoff analysis method support top-down
insight into the attributes of software product quality that is made possible (and constrained) by software
architectures.
v) The architecture makes it easier to reason about and manage change
Software systems change over their lifetimes. Every architecture partitions possible changes into three
categories: local, nonlocal, and architectural. A local change can be accomplished by modifying a single
element. A nonlocal change requires multiple element modifications but leaves the underlying
architectural approach intact.
As a transferable abstraction, the architecture also brings further benefits:
vi) An architecture restricts the vocabulary of design alternatives
We wish to minimize the design complexity of the system we are building. Advantages to this approach
include enhanced reuse, more regular and simpler designs that are more easily understood and
communicated, more capable analysis, shorter selection time, and greater interoperability.
vii) An architecture permits template-based development
An architecture embodies design decisions about how elements interact that, while reflected in each
element's implementation, can be localized and written just once. Templates can be used to capture in one
place the inter-element interaction mechanisms.
viii) An architecture can be the basis for training
The architecture, including a description of how elements interact to carry out the required behavior, can
serve as the introduction to the system for new project members.
Q6) Define architectural patterns, reference models, and reference architectures, and bring out the
relationship between them? (Dec 12)(June 2012)(6 Marks)
Soln:
An architectural pattern is a description of element and relation types together with a set of constraints
on how they may be used. For example, client-server is a common architectural pattern: client and server are
two element types, and their coordination is described in terms of the protocol that the server uses to
communicate with each of its clients.
A reference model is a division of functionality together with data flow between the pieces. A reference
model is a standard decomposition of a known problem into parts that cooperatively solve the problem.
A reference architecture is a reference model mapped onto software elements (that cooperatively
implement the functionality defined in the reference model) and the data flows between them. Whereas a
reference model divides the functionality, a reference architecture is the mapping of that functionality
onto a system decomposition.
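The client-server pattern described above can be sketched in a few lines of code. This is a minimal, in-process illustration only; the names (KeyValueServer, Client) and the tuple-based protocol are invented for the example, not taken from the text.

```python
# Client-server pattern sketch: two element types whose coordination
# is fixed by a small request/reply protocol defined by the server.

class KeyValueServer:
    """Server element: owns the data and defines the protocol."""
    def __init__(self):
        self._store = {}

    def handle(self, request):
        # Protocol: ("PUT", key, value) -> "OK"; ("GET", key) -> value
        op = request[0]
        if op == "PUT":
            _, key, value = request
            self._store[key] = value
            return "OK"
        if op == "GET":
            _, key = request
            return self._store.get(key, "NOT_FOUND")
        return "BAD_REQUEST"

class Client:
    """Client element: knows only the server's protocol, not its internals."""
    def __init__(self, server):
        self._server = server

    def put(self, key, value):
        return self._server.handle(("PUT", key, value))

    def get(self, key):
        return self._server.handle(("GET", key))

server = KeyValueServer()
a, b = Client(server), Client(server)   # many clients, one server
a.put("mode", "cruise")
print(b.get("mode"))                    # prints "cruise"
```

Note that the clients never touch the server's store directly; all coordination goes through the protocol, which is exactly the constraint the pattern imposes.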
Q7) Explain Module-Based Structures? (Dec 12)(4 Marks)
Soln:
Module-based structures include the following structures.
Decomposition: The units are modules related to each other by the "is a submodule of"
relation, showing how larger modules are decomposed into smaller ones recursively until they are
small enough to be easily understood.
Uses: The units are related by the uses relation. One unit uses another if the correctness of the
first requires the presence of a correct version (as opposed to a stub) of the second.
Layered: Layers are often designed as abstractions (virtual machines) that hide
implementation specifics below from the layers above, engendering portability.
Class or generalization: The class structure allows us to reason about re-use and the
incremental addition of functionality.
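The class/generalization structure above can be illustrated with a small sketch. The names (Sensor, WheelPulseSensor) are invented for the example; the point is only that a child module adds functionality incrementally by inheriting the parent's behaviour rather than rewriting it.

```python
# Class (generalization) structure sketch: reuse through inheritance.

class Sensor:
    """Parent module: common behaviour all sensors reuse."""
    def __init__(self, name):
        self.name = name

    def read(self):
        raise NotImplementedError

    def report(self):
        # Reused as-is by every subclass.
        return f"{self.name}={self.read()}"

class WheelPulseSensor(Sensor):
    """Child module: incremental addition of functionality."""
    def __init__(self):
        super().__init__("wheel_pulses")
        self._count = 0

    def pulse(self):
        self._count += 1

    def read(self):
        return self._count

s = WheelPulseSensor()
s.pulse()
s.pulse()
print(s.report())   # prints "wheel_pulses=2"
```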
Q8) Explain the various process recommendations used by the architect while developing SA?
(June 2012)(4 Marks)
Soln:
Process recommendations are as follows:
The architecture should be the product of a single architect or a small group of architects with an
identified leader.
The architect (or architecture team) should have the functional requirements for the
system and an articulated, prioritized list of quality attributes that the architecture is expected to satisfy.
The architecture should be well documented, with at least one static view and one dynamic view, using
an agreed-on notation that all stakeholders can understand with a minimum of effort.
The architecture should be circulated to the system's stakeholders, who should be actively involved in its review.
The architecture should be analyzed for applicable quantitative measures (such as maximum
throughput) and formally evaluated for quality attributes before it is too late to make changes to it.
The architecture should lend itself to incremental implementation via the creation of a skeletal system in
which the communication paths are exercised but which at first has minimal functionality. This
skeletal system can then be used to grow the system incrementally, easing the integration and testing
efforts.
The architecture should result in a specific (and small) set of resource contention areas, the
resolution of which is clearly specified, circulated, and maintained.
Q9) Briefly explain, what does software architecture constitute? (Dec 12/Jan 13)(5 Marks)
Soln:
Figure 2.1, taken from a system description for an underwater acoustic simulation, purports to describe
that system's "top-level architecture" and is precisely the kind of diagram most often displayed to help
explain an architecture. Exactly what can we tell from it?
The system consists of four elements.
Three of the elements - Prop Loss Model (MODP), Reverb Model (MODR), and Noise Model
(MODN) - might have more in common with each other than with the fourth, Control Process
(CP), because they are positioned next to each other. All of the elements apparently have some sort of
relationship with each other, since the diagram is fully connected.
UNIT 2
Q1) Discuss the Invariants, Advantages and Disadvantages of the Pipes and Filters Architectural
Style? (Dec 12)(June/July 2014)(10 Marks)
Soln:
Conditions (invariants) of this style are:
Filters must be independent entities. They should not share state with other filters. Filters do not know the
identity of their upstream and downstream filters. Specifications might restrict what appears on the input pipes
and the results that appear on the output pipes. The correctness of the output of a pipe-and-filter network
should not depend on the order in which the filters perform their processing.
Advantages:
They allow the designer to understand the overall input/output behavior of a system as a simple
composition of the behaviors of the individual filters. They support reuse: any two filters can be hooked
together, provided they agree on the data format transmitted between them. Systems are easily maintained
and enhanced: new filters can be added and old filters replaced. They permit certain kinds of specialized
analysis, such as throughput and deadlock analysis. They naturally support concurrent execution.
Disadvantages:
They often lead to a batch organization of processing and are not good at handling interactive
applications. They may force a lowest common denominator on data transmission, so each filter spends
effort parsing and unparsing its input and output streams.
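The invariants above can be demonstrated with a minimal pipe-and-filter sketch using Python generators. The filter names are illustrative only; each filter reads from its input pipe and writes to its output pipe without sharing state or knowing the identity of its neighbours.

```python
# Pipe-and-filter sketch: three independent filters composed into a pipeline.

def uppercase(lines):          # filter 1: transform each item
    for line in lines:
        yield line.upper()

def strip_blank(lines):        # filter 2: drop empty items
    for line in lines:
        if line.strip():
            yield line

def numbered(lines):           # filter 3: enrich items with a counter
    for i, line in enumerate(lines, 1):
        yield f"{i}: {line}"

source = ["alpha", "", "beta"]
pipeline = numbered(strip_blank(uppercase(source)))
print(list(pipeline))          # ['1: ALPHA', '2: BETA']
```

Because each filter is a pure transformation of its input stream, the network's output depends only on the composition, not on any shared state between filters.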
Q3) Explain Process Control Paradigms with various process control definitions? (June 2012)(6 Marks)
Q2) What are the Basic Requirements for a Mobile Robotics Architecture? How does the Implicit Invocation
Model handle them? (June 12)(Dec 13/Jan 14)(8 Marks)
Soln:
DESIGN CONSIDERATIONS:
REQ1: Supports deliberate and reactive behavior. Robot must coordinate the actions to accomplish its
mission and reactions to unexpected situations
REQ2: Allows for the uncertainty and unpredictability of the environment. The situations are not fully defined
and/or predictable. The design should handle incomplete and unreliable information
REQ3: System must consider possible dangerous operations by Robot and environment
REQ4: The system must give the designer flexibility (missions change/requirement changes)
SOLUTION : IMPLICIT INVOCATION
The third solution is based on the form of implicit invocation, as embodied in the Task-ControlArchitecture (TCA). The TCA design is based on hierarchies of tasks or task trees Parent tasks initiate
child task Temporal dependencies between pairs of tasks can be defined A must complete A must
complete before B starts (selective concurrency) Allows dynamic reconfiguration of task tree at run time
in response to sudden change(robot and environment) Uses implicit invocation to coordinate tasks and
Tasks communicate using multicasting message (message server) to tasks that are registered for these
events TCAs implicit invocation mechanisms support three functions:
Exceptions: Certain conditions cause the execution of an associated exception handling routines i.e.,
exception override the currently executing task in the sub-tree (e.g., abort or retry) tasks Wiretapping:
Message can be intercepted by tasks superimposed on an existing task tree E.g., a safety-check
component utilizes this to validate outgoing motion commands Monitors: Monitors read information and
execute some action if the data satisfy certain condition.
E.g. battery check
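A toy sketch of a TCA-style task tree is given below, under invented names (Task, navigate, and so on): a parent task initiates its children in order, so one child completes before the next starts, and an exception overrides (aborts) the task in whose sub-tree it occurs. This is only an illustration of the task-tree idea, not the real TCA.

```python
# Task-tree sketch: parent initiates children; temporal ordering is
# "child A completes before child B starts"; exceptions abort a task.

class Task:
    def __init__(self, name, action, children=None):
        self.name, self.action = name, action
        self.children = children or []

    def run(self, log):
        try:
            self.action(log)
        except RuntimeError:
            log.append(f"{self.name}:aborted")   # exception overrides the task
            return
        for child in self.children:              # children run after the parent,
            child.run(log)                       # in order (A completes before B)

log = []
plan = Task("navigate", lambda l: l.append("navigate:start"), [
    Task("avoid_obstacle", lambda l: l.append("avoid_obstacle:done")),
    Task("move", lambda l: l.append("move:done")),
])
plan.run(log)
print(log)   # navigate starts, then avoid_obstacle completes before move
```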
Soln (Q3):
PROCESS CONTROL PARADIGMS
Process variables: properties of the process that can be measured.
Controlled variable: process variable whose value the system is intended to control.
Input variable: process variable that measures an input to the process.
Manipulated variable: process variable whose value can be changed by the controller.
Set point: the desired value for a controlled variable.
Open-loop system: system in which information about process variables is not used to adjust the system.
Closed-loop system: system in which information about process variables is used to manipulate a
process variable to compensate for variations in process variables and operating conditions.
Feedback control system: the controlled variable is measured and the result is used to manipulate
one or more of the process variables.
Feedforward control system: some of the process variables are measured, and anticipated
disturbances are compensated for without waiting for changes in the controlled variable to be visible.
Q4) Write a Note on Heterogeneous Architecture? (Dec 11/Jan 12)(3 Marks)
Soln:
HETEROGENEOUS ARCHITECTURES
Architectural styles can be combined in several ways:
One way is through hierarchy. Example: the UNIX pipeline.
A second way is to permit a single component to use a mixture of architectural
connectors. Example: an active database.
A third way is to completely elaborate one level of architectural description in a
completely different architectural style. Example: case studies.
Q5) Enlist different architectural styles and discuss in brief Event-based, Implicit
Invocation? (Dec 12/Jan 13)(6 Marks)
Soln:
The architectural styles are: dataflow systems, call-and-return systems, independent components, virtual
machines, and data-centered systems.
EVENT-BASED, IMPLICIT INVOCATION
Instead of invoking the procedure directly a component can announce one or more events.
Other components in the system can register an interest in an event by associating a procedure to it.
When the event is announced, the system itself invokes all of the procedure that have been registered for
the event. Thus an event announcement implicitly causes the invocation of procedures in other
modules.Architecturally speaking, the components in an implicit invocation style are modules whose
interface provides both a collection of procedures and a set of events.
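The announce/register mechanism just described can be sketched in a few lines. The names (EventSystem, "file_saved") are invented for the example; the essential point is that the announcer never calls the reacting procedures directly — the system does.

```python
# Event-based, implicit invocation sketch: components register procedures
# for events; announcing an event causes the system, not the announcer,
# to invoke every registered procedure.

class EventSystem:
    def __init__(self):
        self._registry = {}

    def register(self, event, procedure):
        self._registry.setdefault(event, []).append(procedure)

    def announce(self, event, data):
        # The announcer does not know who (if anyone) will react.
        for procedure in self._registry.get(event, []):
            procedure(data)

bus = EventSystem()
audit_log = []
bus.register("file_saved", lambda d: audit_log.append(f"logged {d}"))
bus.register("file_saved", lambda d: audit_log.append(f"indexed {d}"))
bus.announce("file_saved", "notes.txt")
print(audit_log)   # both registered procedures were invoked implicitly
```

A new component is introduced simply by one more `register` call, which is exactly the reuse/evolution advantage claimed below.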
Advantages:
It provides strong support for reuse.
Any component can be introduced into the system simply by registering it for the events of that system.
Implicit invocation eases system evolution.
Components may be replaced by other components without affecting the interfaces of other components.
Disadvantages:
Components relinquish control over the computation performed by the system.
There are concerns over the exchange of data.
Global performance and resource management can become artificial issues.
Q6) Explain the Software Paradigm for Process Control? (Dec 12/Jan 13)(Dec 13/Jan 14)(4 Marks)
Soln:
An architectural style for software that controls continuous processes can be based on the process-control
model, incorporating the essential parts of a process-control loop:
Computational elements: separate the process of interest from the control policy. These include the
process definition, with mechanisms for manipulating some process variables, and the control
algorithm, for deciding how to manipulate variables.
Data elements: continuously updated process variables and the sensors that collect them. These include
the process variables (designated input, controlled, and manipulated variables, and knowledge of
which can be sensed), the set point (the reference value for the controlled variable), and sensors to
obtain values of process variables pertinent to control.
The control loop paradigm: establishes the relation that the control algorithm exercises.
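The control loop paradigm can be sketched with a toy closed-loop controller. All numbers here (gain, drag, the speeds) are invented for illustration: each cycle the controlled variable (speed) is measured, compared with the set point, and the manipulated variable (throttle) is adjusted in proportion to the error.

```python
# Closed-loop (feedback) control sketch: a proportional controller.

def control_loop(speed, set_point, cycles, gain=0.5, drag=2.0):
    for _ in range(cycles):
        error = set_point - speed         # measure controlled variable vs. set point
        throttle = gain * error           # control algorithm sets the manipulated variable
        speed = speed + throttle - drag   # process reacts: throttle raises speed, drag lowers it
    return round(speed, 1)

# Settles at 56.0, not 60.0: proportional-only control leaves a
# steady-state offset of drag/gain (= 4.0) against a constant disturbance.
print(control_loop(speed=40.0, set_point=60.0, cycles=50))
```

In an open-loop system, by contrast, the throttle would be set from the set point alone, without measuring the actual speed, so the drag would never be compensated.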
Q7) State the problem of KWIC? Propose abstract data type and implicit invocation styles to
implement a solution for the same? (Dec 12/Jan 13)(June/July 2013)(June/July 2014)(10 Marks)
Soln:
Parnas proposed the following problem: a KWIC index system accepts an ordered set of lines. Each line is
an ordered set of words, and each word is an ordered set of characters. Any line may be circularly shifted
by repeatedly removing the first word and appending it at the end of the line. The KWIC index system outputs a
listing of all circular shifts of all lines in alphabetical order.
Parnas used the problem to contrast different criteria for decomposing a system into modules. He
describes two solutions:
a) Based on functional decomposition with shared access to the data representation.
b) Based on a decomposition that hides design decisions.
SOLUTION 1: ABSTRACT DATA TYPES
This solution decomposes the system into a similar set of five modules.
Data is no longer directly shared by the computational components.
Each module provides an interface that permits other components to access data only by invoking
procedures in that interface.
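The KWIC behaviour itself (circular shifts, then alphabetical listing) can be sketched compactly. The function names are ours, not Parnas's, and this shows only the shift-and-sort behaviour, not the five-module decomposition.

```python
# KWIC sketch: generate all circular shifts of each line, then
# list all shifts in alphabetical order.

def circular_shifts(line):
    # Each shift removes the first i words and appends them at the end.
    words = line.split()
    return [" ".join(words[i:] + words[:i]) for i in range(len(words))]

def kwic(lines):
    shifts = [s for line in lines for s in circular_shifts(line)]
    return sorted(shifts, key=str.lower)   # alphabetical listing

index = kwic(["Software Architecture"])
print(index)   # ['Architecture Software', 'Software Architecture']
```

In the abstract-data-type solution these two steps would sit behind module interfaces (e.g., a line-storage module and a shifts module), so no component touches the underlying representation directly.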
Q8) Explain the Block Diagram for Cruise Control? (June 12)(Dec 13/Jan 14)(4 Marks)
Soln:
A cruise control (CC) system exists to maintain a constant vehicle speed, even over varying terrain.
Inputs:
System On/Off: if on, maintain speed.
Engine On/Off: if on, the engine is on. CC is active only in this state.
Wheel Pulses: one pulse for every wheel revolution.
Accelerator: indication of how far the accelerator is depressed.
Brake: if on, temporarily revert cruise control to manual mode.
Inc/Dec Speed: if on, increase/decrease the maintained speed.
Resume Speed: if on, resume the last maintained speed.
Clock: timing pulses every millisecond.
Outputs:
Throttle: digital value for the engine throttle setting.
UNIT-III
Q1) What is availability? Explain the general scenario for availability? (June 2012)(June/July 2014)(10 Marks)
Soln:
AVAILABILITY SCENARIO
Availability is concerned with system failure and its associated consequences. Failures are usually a result
of system errors that are derived from faults in the system. It is typically defined as the probability that
the system will be operational when it is needed.
Source of stimulus. We differentiate between internal and external indications of faults or failure, since
the desired system response may be different. In our example, the unexpected message arrives from
outside the system.
Stimulus. A fault of one of the following classes occurs.
- omission. A component fails to respond to an input.
- crash. The component repeatedly suffers omission faults.
- timing. A component responds but the response is early or late.
- response. A component responds with an incorrect value.
Artifact. This specifies the resource that is required to be highly available, such as a processor,
communication channel, process, or storage.
Environment. The state of the system when the fault or failure occurs may also affect the desired system
response. For example, if the system has already seen some faults and is operating in other than normal
mode, it may be desirable to shut it down totally. However, if this is the first fault observed, some
degradation of response time or function may be preferred. In our example, the system is operating
normally.
Response. There are a number of possible reactions to a system failure. These include logging the failure,
notifying selected users or other systems, switching to a degraded mode with either less capacity or less
function, shutting down external systems, or becoming unavailable during repair. In our example, the
system should notify the operator of the unexpected message and continue to operate normally.
Response measure. The response measure can specify an availability percentage, or it can specify a time to repair,
times during which the system must be available, or the duration for which the system must be available.
Q2) What are the qualities of a system? Explain the modifiability general scenario? (June/July 13)(Dec 13/Jan 14)(10 Marks)
Soln:
Functionality is the ability of the system to do the work for which it was intended.
MODIFIABILITY SCENARIO
Modifiability is about the cost of change. It brings up two concerns.
What can change (the artifact)?
When is the change made and who makes it (the environment)?
Source of stimulus. This portion specifies who makes the changes: the developer, a system
administrator, or an end user. Clearly, there must be machinery in place to allow the system administrator
or end user to modify a system, but this is a common occurrence. In Figure 4.4, the modification is to be
made by the developer.
Stimulus. This portion specifies the changes to be made. A change can be the addition of a function, the
modification of an existing function, or the deletion of a function. It can also be made to the qualities of
the system: making it more responsive, increasing its availability, and so forth. The capacity of the
system may also change. Increasing the number of simultaneous users is a frequent requirement. In our
example, the stimulus is a request to make a modification, which can be to the function, quality, or
capacity.
Artifact. This portion specifies what is to be changed: the functionality of a system, its platform, its user
interface, its environment, or another system with which it interoperates. In Figure 4.4, the modification is
to the user interface.
Environment. This portion specifies when the change can be made: design time, compile time, build
time, initiation time, or runtime. In our example, the modification is to occur at design time.
Response. Whoever makes the change must understand how to make it, and then make it, test it, and deploy it. In our
example, the modification is made with no side effects.
Response measure. All of the possible responses take time and cost money, and so time and cost are the
most desirable measures. Time is not always possible to predict, however, and so less ideal measures are
frequently used, such as the extent of the change (number of modules affected). In our example, the time
Q3) What do you mean by tactics? Explain availability tactics with a neat diagram? (June/July 13)(10 Marks)
Soln:
A tactic is a design decision that influences the control of a quality attribute response.
AVAILABILITY TACTICS
Software Architecture
10IS81
redundancy is often used in a client/server configuration, such as database management systems, where
quick responses are necessary even when a fault occurs
3.Passive redundancy (warm restart/dual redundancy/triple redundancy). One component (the
primary) responds to events and informs the other components (the standbys) of state updates they must
make. When a fault occurs, the system must first ensure that the backup state is sufficiently fresh before
resuming services.
Spare. A standby spare computing platform is configured to replace many different failed components. It
must be rebooted to the appropriate software configuration and have its state initialized when a failure
occurs.
Shadow operation. A previously failed component may be run in "shadow mode" for a short time to
make sure that it mimics the behavior of the working components before restoring it to service.
State resynchronization. The passive and active redundancy tactics require the component being restored
to have its state upgraded before its return to service.
Checkpoint/rollback. A checkpoint is a recording of a consistent state created either periodically or in
response to specific events. Sometimes a system fails in an unusual manner, with a detectably inconsistent
state. In this case, the system should be restored using a previous checkpoint of a consistent state and a
log of the transactions that occurred since the snapshot was taken.
Q4) Explain the quality attribute general scenario? List the parts of such scenario? Distinguish
between Availability and Modifiability Scenario?(Dec 12)(Dec 13/Jan 14) (10 Marks)
The above figure depicts goal of availability tactics. All approaches to maintaining availability involve
some type of redundancy, some type of health monitoring to detect a failure, and some type of recovery
when a failure is detected. In some cases, the monitoring or recovery is automatic and in others it is
manual.
FAULT DETECTION
1.Ping/echo. One component issues a ping and expects to receive back an echo, within a predefined time,
from the component under scrutiny. This can be used within a group of components mutually responsible
for one task
2.Heartbeat (dead man timer). In this case one component emits a heartbeat message periodically and
nother component listens for it. If the heartbeat fails, the originating component is assumed to have failed
and a fault correction component is notified. The heartbeat can also carry data.
3.Exceptions. The exception handler typically executes in the same process that introduced the exception.
Soln:
QUALITY ATTRIBUTE SCENARIOS
A quality attribute scenario is a quality-attribute-specific requirement. It consists of six parts.
1) Source of stimulus. This is some entity (a human, a computer system, or any other
actuator) that generated the stimulus.
2) Stimulus. The stimulus is a condition that needs to be considered when it arrives at a system.
3) Environment. The stimulus occurs within certain conditions. The system may be in an
overload condition or may be running when the stimulus occurs, or some other condition may be true.
4) Artifact. Some artifact is stimulated. This may be the whole system or some pieces of it.
5) Response. The response is the activity undertaken after the arrival of the stimulus.
6) Response measure. When the response occurs, it should be measurable in some fashion so
that the requirement can be tested.
Figure 4.1 shows the parts of a quality attribute scenario.
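The six parts above can be captured in a simple record. The structure below is illustrative; the field values paraphrase the availability example discussed later in this section.

```python
# Illustrative representation of a quality attribute scenario as a record.
from dataclasses import dataclass

@dataclass
class QualityAttributeScenario:
    source: str            # who or what generated the stimulus
    stimulus: str          # the condition arriving at the system
    environment: str       # conditions under which the stimulus occurs
    artifact: str          # the part of the system that is stimulated
    response: str          # activity undertaken after the stimulus arrives
    response_measure: str  # how the response is measured, so it can be tested

availability_example = QualityAttributeScenario(
    source="external to the system",
    stimulus="unanticipated message arrives",
    environment="normal operation",
    artifact="process",
    response="inform operator; continue to operate normally",
    response_measure="no downtime",
)
```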
FAULT RECOVERY
1. Voting. Processes running on redundant processors each take equivalent input and compute a simple output value that is sent to a voter. If the voter detects deviant behavior from a single processor, it fails it.
2. Active redundancy (hot restart). All redundant components respond to events in parallel. The response from only one component is used (usually the first to respond), and the rest are discarded.
AVAILABILITY SCENARIO
Availability is concerned with system failure and its associated consequences. Failures are usually a result of system errors that are derived from faults in the system.
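The voting tactic above can be sketched as a simple majority voter over redundant processor outputs; this is an illustrative sketch, not code from the text.

```python
# Illustrative majority voter: compares outputs from redundant processors
# and reports any processor whose output deviates from the majority.
from collections import Counter

def vote(outputs):
    """outputs: dict mapping processor name -> computed value.
    Returns (majority_value, list_of_failed_processors)."""
    majority, _ = Counter(outputs.values()).most_common(1)[0]
    failed = [name for name, value in outputs.items() if value != majority]
    return majority, failed
```

A monitor would take the majority value as the system output and remove the deviant processors from service.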
Availability is typically defined as the proportion of time the system is operational: the mean time to failure divided by the sum of the mean time to failure and the mean time to repair.
Source of stimulus. We differentiate between internal and external indications of faults or failure
since the desired system response may be different. In our example, the unexpected message arrives
from outside the system.
Stimulus. A fault of one of the following classes occurs.
- omission. A component fails to respond to an input.
- crash. The component repeatedly suffers omission faults.
- timing. A component responds but the response is early or late.
- response. A component responds with an incorrect value.
Artifact. This specifies the resource that is required to be highly available, such as a processor,
communication channel, process, or storage.
Environment. The state of the system when the fault or failure occurs may also affect the desired
system response. For example, if the system has already seen some faults and is operating in other than
normal mode, it may be desirable to shut it down totally. However, if this is the first fault observed,
some degradation of response time or function may be preferred. In our example, the system is operating
normally.
Response. There are a number of possible reactions to a system failure. These include logging the
failure, notifying selected users or other systems, switching to a degraded mode with either less
capacity or less function, shutting down external systems, or becoming unavailable during repair. In
our example, the system should notify the operator of the unexpected message and continue to operate
normally.
Response measure. The response measure can specify an availability percentage, or it can specify a
time to repair, times during which the system must be available, or the duration for which the system
must be available.
MODIFIABILITY SCENARIO
Modifiability is about the cost of change. It brings up two concerns.
What can change (the artifact)?
When is the change made and who makes it (the environment)?
Source of stimulus. This portion specifies who makes the change: the developer, a system administrator, or an end user. Clearly, there must be machinery in place to allow the system administrator or end user to modify a system, but this is a common occurrence. In Figure 4.4, the modification is to be made by the developer.
Stimulus. This portion specifies the changes to be made. A change can be the addition of a function, the modification of an existing function, or the deletion of a function. It can also be made to the qualities of the system: making it more responsive, increasing its availability, and so forth. The capacity of the system may also change. Increasing the number of simultaneous users is a frequent requirement. In our example, the stimulus is a request to make a modification, which can be to the function, quality, or capacity.
Artifact. This portion specifies what is to be changed: the functionality of a system, its platform, its user interface, its environment, or another system with which it interoperates. In Figure 4.4, the modification is to the user interface.
Environment. This portion specifies when the change can be made: design time, compile time, build time, initiation time, or runtime. In our example, the modification is to occur at design time.
Response. Whoever makes the change must understand how to make it, and then make it, test it and
deploy it. In our example, the modification is made with no side effects.
Response measure. All of the possible responses take time and cost money, and so time and cost are the
most desirable measures. Time is not always possible to predict, however, and so less ideal measures are
frequently used, such as the extent of the change (number of modules affected). In our example, the time
to perform the modification should be less than three hours.
Q5) What are the qualities that the architecture itself should possess? (Dec 12)(6 Marks)
Soln:
Achieving quality attributes must be considered throughout design, implementation, and deployment. No quality attribute is entirely dependent on design, nor is it entirely dependent on implementation or deployment. For example: usability involves both architectural and non-architectural aspects; modifiability is determined by how functionality is divided (architectural) and by coding techniques within a module (non-architectural); performance involves both architectural and non-architectural dependencies. The message of this section is twofold: architecture is critical to the realization of many qualities of interest in a system, and these qualities should be designed in and can be evaluated at the architectural level. Architecture, by itself, is unable to achieve qualities; it provides the foundation for achieving quality.
Q6) List the Parts of Quality Attribute Scenario?(Dec12)(June 12)(Dec 12/Jan13)(4 Marks)
Soln:
A quality attribute scenario is a quality-attribute-specific requirement. It consists of six parts.
1) Source of stimulus. This is some entity (a human, a computer system, or any other actuator) that
generated the stimulus.
2) Stimulus. The stimulus is a condition that needs to be considered when it arrives at a system.
3) Environment. The stimulus occurs within certain conditions. The system may be in an overload
condition or may be running when the stimulus occurs, or some other condition may be true.
4) Artifact. Some artifact is stimulated. This may be the whole system or some pieces of it.
5) Response. The response is the activity undertaken after the arrival of the stimulus.
6) Response measure. When the response occurs, it should be measurable in some fashion so that the
requirement can be tested.
Q7) What is the goal of Tactics of Testability? Discuss 2 Categories of Tactics for Testing?(Dec
12)(Dec 13/Jan 14)(10 Marks)
Soln:
The goal of tactics for testability is to allow for easier testing when an increment of software development
is completed.
INPUT/OUTPUT
Record/playback. Record/playback refers to both capturing information crossing an interface and using
it as input into the test harness. The information crossing an interface during normal operation is saved in
some repository. Recording this information allows test input for one of the components to be generated
and test output for later comparison to be saved.
Separate interface from implementation. Separating the interface from the implementation allows
substitution of implementations for various testing purposes. Stubbing implementations allows the
remainder of the system to be tested in the absence of the component being stubbed.
Specialize access routes/interfaces. Having specialized testing interfaces allows the capturing or
specification of variable values for a component through a test harness as well as independently from its
normal execution. Specialized access routes and interfaces should be kept separate from the access routes
and interfaces for required functionality.
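The record/playback tactic above can be sketched as a wrapper that logs the information crossing a component's interface, plus a replay helper that feeds the recorded inputs back in and compares the outputs. The names are illustrative.

```python
# Illustrative record/playback: capture (input, output) pairs crossing an
# interface during normal operation, then use them as a regression test.
def make_recorder(func, log):
    def wrapper(*args):
        result = func(*args)
        log.append((args, result))   # save information crossing the interface
        return result
    return wrapper

def replay(func, log):
    # Feed recorded inputs back in and compare against recorded outputs.
    return all(func(*args) == expected for args, expected in log)
```

The recorded log serves both as test input for the component and as saved output for later comparison.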
INTERNAL MONITORING
Built-in monitors. The component can maintain state, performance load, capacity, security, or other information accessible through an interface. This interface can be a permanent interface of the component or it can be introduced temporarily. A common technique is to record events when monitoring states have been activated. Monitoring states can actually increase the testing effort, since tests may have to be repeated with the monitoring turned off; however, the increased visibility into the activities of the component usually more than outweighs the cost of the additional testing.
Maintain integrity. Data should be delivered as intended. It can have redundant information encoded in it,
such as checksums or hash results, which can be encrypted either along with or independently from the
original data.
Limit exposure. Attacks typically depend on exploiting a single weakness to attack all data and services
on a host. The architect can design the allocation of services to hosts so that limited services are available
on each host.
Limit access. Firewalls restrict access based on message source or destination port. Messages from
unknown sources may be a form of an attack. It is not always possible to limit access to known sources.
i) Fault Prevention ii) Defer Binding Time iii) Resource Arbitration iv) Internal Monitoring v) Runtime Tactics
Tactics for achieving security can be divided into those concerned with resisting attacks, those concerned
with detecting attacks, and those concerned with recovering from attacks.
RESISTING ATTACKS
Authenticate users. Authentication is ensuring that a user or remote computer is actually who it purports
to be. Passwords, one-time passwords, digital certificates, and biometric identifications provide
authentication.
Authorize users. Authorization is ensuring that an authenticated user has the rights to access and modify
either data or services. Access control can be by user or by user class.
Maintain data confidentiality. Data should be protected from unauthorized access. Confidentiality is
usually achieved by applying some form of encryption to data and to communication links. Encryption
provides extra protection to persistently maintained data beyond that available from authorization.
Soln:
FAULT PREVENTION
Removal from service. This tactic removes a component of the system from operation to undergo some
activities to prevent anticipated failures.
Transactions. A transaction is the bundling of several sequential steps such that the entire bundle can be
undone at once. Transactions are used to prevent any data from being affected if one step in a process
fails and also to prevent collisions among several simultaneous threads accessing the same data.
Process monitor. Once a fault in a process has been detected, a monitoring process can delete the
nonperforming process and create a new instance of it, initialized to some appropriate state as in the spare
tactic.
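The transaction tactic above can be sketched as a bundle of sequential steps, each registered with an undo action so the whole bundle can be rolled back at once. This is an illustrative sketch under invented names.

```python
# Illustrative transaction: each step is executed together with a matching
# undo action; if a later step fails, rollback() reverses the completed
# steps in the opposite order, leaving the data unaffected.
class Transaction:
    def __init__(self):
        self._undo = []

    def do(self, action, undo):
        action()                  # perform the step
        self._undo.append(undo)   # remember how to reverse it

    def rollback(self):
        while self._undo:
            self._undo.pop()()    # undo steps in reverse order
```

For example, if the second step of a transfer raises an error, rolling back restores the balance changed by the first step.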
DEFER BINDING TIME
Many tactics are intended to have an impact at load time or runtime, such as the following.
Runtime registration supports plug-and-play operation at the cost of additional overhead to manage the
registration.
Configuration files are intended to set parameters at startup.
Polymorphism allows late binding of method calls.
Component replacement allows load time binding.
Adherence to defined protocols allows runtime binding of independent processes.
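Runtime registration, the first tactic above, can be sketched as a registry that components join when they are loaded, so they can be bound by name at runtime. The registry, decorator, and exporter class are invented for the example.

```python
# Illustrative runtime registration: components register themselves in a
# registry, enabling plug-and-play binding by name at the cost of the
# overhead of managing the registry.
registry = {}

def register(name):
    def decorator(cls):
        registry[name] = cls     # binding deferred: resolved by name later
        return cls
    return decorator

@register("csv")
class CsvExporter:
    def export(self, rows):
        return "\n".join(",".join(map(str, r)) for r in rows)

def make_exporter(name):
    return registry[name]()      # component chosen at runtime
```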
RESOURCE ARBITRATION
First-in/First-out. FIFO queues treat all requests for resources as equals and satisfy them in turn.
Fixed-priority scheduling. Fixed-priority scheduling assigns each source of resource requests a
particular priority and assigns the resources in that priority order. Three common prioritization strategies
are: semantic importance, in which each stream is assigned a priority statically according to some domain characteristic of the task that generates it; deadline monotonic, a static priority assignment that assigns higher priority to streams with shorter deadlines; and rate monotonic, a static priority assignment for periodic streams that assigns higher priority to streams with shorter periods.
Q8) Classify Security Tactics? What are the different tactics for resisting attacks? (June 2012)(8 Marks)
Soln: Security tactics can be divided into those concerned with resisting attacks, those concerned with detecting attacks, and those concerned with recovering from attacks. The tactics for resisting attacks (authenticate users, authorize users, maintain data confidentiality, maintain integrity, limit exposure, limit access) are described above.
Dynamic priority scheduling:
1. Round robin. Round robin is a scheduling strategy that orders the requests and then, at every assignment possibility, assigns the resource to the next request in that order.
2. Earliest deadline first. Earliest deadline first assigns priorities based on the pending requests with the earliest deadline.
Static scheduling. A cyclic executive schedule is a scheduling strategy where the pre-emption points and
the sequence of assignment to the resource are determined offline.
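The earliest-deadline-first strategy above can be sketched with a priority queue keyed by deadline; this is an illustrative sketch, not code from the text.

```python
# Illustrative earliest-deadline-first (EDF) arbitration: among pending
# requests, the resource is granted to the one with the earliest deadline.
import heapq

class EdfScheduler:
    def __init__(self):
        self._pending = []       # min-heap ordered by deadline

    def request(self, deadline, task):
        heapq.heappush(self._pending, (deadline, task))

    def grant(self):
        deadline, task = heapq.heappop(self._pending)
        return task
```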
INTERNAL MONITORING
Built-in monitors. The component can maintain state, performance load, capacity, security, or other information accessible through an interface. This interface can be a permanent interface of the component or it can be introduced temporarily. A common technique is to record events when monitoring states have been activated. Monitoring states can actually increase the testing effort, since tests may have to be repeated with the monitoring turned off; however, the increased visibility into the activities of the component usually more than outweighs the cost of the additional testing.
RUNTIME TACTICS
Maintain a model of the task. In this case, the model maintained is that of the task. The task model is
used to determine context so the system can have some idea of what the user is attempting and provide
various kinds of assistance. For example, knowing that sentences usually start with capital letters would
allow an application to correct a lower-case letter in that position.
Maintain a model of the user. In this case, the model maintained is of the user. It determines the user's
knowledge of the system, the user's behavior in terms of expected response time, and other aspects
specific to a user or a class of users. For example, maintaining a user model allows the system to pace
scrolling so that pages do not fly past faster than they can be read.
Maintain a model of the system. In this case, the model maintained is that of the system. It determines
the expected system behavior so that appropriate feedback can be given to the user. The system model
predicts items such as the time needed to complete current activity.
Soln:
LOCALIZE MODIFICATIONS
Maintain semantic coherence. Semantic coherence refers to the relationships among responsibilities in a
module. The goal is to ensure that all of these responsibilities work together without excessive reliance on
other modules.
Anticipate expected changes. Considering the set of envisioned changes provides a way to evaluate a
particular assignment of responsibilities. In reality this tactic is difficult to use by itself since it is not
possible to anticipate all changes.
Generalize the module. Making a module more general allows it to compute a broader range of functions based on input.
Limit possible options. Modifications, especially within a product line, may be far-ranging and hence affect many modules. Restricting the possible options will reduce the effect of these modifications.
Soln:
1.Time to market.
If there is competitive pressure or a short window of opportunity for a system or product, development
time becomes important. This in turn leads to pressure to buy or otherwise re-use existing elements.
2.Cost and benefit.
The development effort will naturally have a budget that must not be exceeded. Different architectures will yield different development costs. For instance, an architecture that relies on technology (or expertise with a technology) not resident in the developing organization will be more expensive to realize than one that takes advantage of assets already in-house. An architecture that is highly flexible will typically be more costly to build than one that is rigid (although it will be less costly to maintain and modify).
Dept. of CSE, SJBIT
Unit-IV
Q1) What do you mean by an architectural pattern? How is it categorized? Explain the structure part of the solution for the ISO layered architecture. (June/July 2014)
Sol:
Architectural patterns express fundamental structural organization schemas for software systems.
They provide a set of predefined subsystems, specify their responsibilities, and include
rules and guidelines for organizing the relationships between them
The layers architectural pattern helps to structure applications that can be decomposed
into groups of subtasks in which each group of subtasks is at a particular level of abstraction.
Example: Networking protocols are the best example of layered architectures. Such a protocol consists of a set of rules and conventions that describe how computer programs communicate across machine boundaries. The format, contents and meaning of all messages are defined. The protocol specifies agreements at a variety of abstraction levels, ranging from the details of bit transmission to high-level application logic. Therefore the designers use several sub-protocols and arrange them in layers. Each layer deals with a specific aspect of communication and uses the services of the next lower layer. (See diagram and explain further.)
Context: a large system that requires decomposition.
Problem: The system we are building is characterized by a mix of low-level and high-level issues, where high-level operations rely on the lower-level ones. For example, the high level is interactive and visible to the user, while the low level is concerned with the hardware implementation. Further forces:
Parts of the system should be exchangeable (i.e., a particular layer can be changed).
It may be necessary to build other systems at a later date with the same low-level issues as the system you are currently designing.
Solution:
Structure your system into an appropriate number of layers and place them on top of each
other.
The lowest layer is called Layer 1 (the base of our system), the highest is called Layer N; i.e. Layer 1, ..., Layer J-1, Layer J, ..., Layer N.
Most of the services that Layer J provides are composed of services provided by Layer J-1. In other words, the services of each layer implement a strategy for combining the services of the layer below in a meaningful way. In addition, Layer J's services may depend on other services in Layer J.
Structure:
An individual layer can be described by the following CRC card:
The main structural characteristic of the Layers pattern is that the services of Layer J are used only by Layer J+1: there are no further direct dependencies between layers. This structure can be compared with a stack, or even an onion. Each individual layer shields all lower layers from direct access by higher layers.
The request moves down through the layers until it reaches Layer 1, is sent to Layer 1 of the right stack, and there moves up through the layers of the right stack. The response to the request follows the reverse path until it arrives at Layer N of the left stack.
Dynamics:
Scenario I is probably the best-known one. A client issues a request to Layer N. Since Layer N cannot carry out the request on its own, it calls the next layer, N - 1, for supporting subtasks. Layer N - 1 provides these, in the process sending further requests to Layer N - 2, and so on until Layer 1 is reached. Here, the lowest-level services are finally performed. If necessary, replies to the different requests are passed back up from Layer 1 to Layer 2, from Layer 2 to Layer 3, and so on until the final reply arrives at Layer N.
Scenario II illustrates bottom-up communication-a chain of actions starts at Layer 1,
for example when a device driver detects input. The driver translates the input into an
internal format and reports it to Layer 2 which starts interpreting it, and so on. In this way
data moves up through the layers until it arrives at the highest layer. While top-down information and control flow are often described as 'requests', bottom-up calls can be termed 'notifications'.
Scenario III describes the situation where requests only travel through a subset of the
layers. A top level request may only go to the next lower level N- 1 if this level can satisfy
the request. An example of this is where level N- 1 acts as a cache and a request from level
N can be satisfied without being sent all the way down to Layer 1 and from here to a
remote server.
Scenario IV describes a situation where an event is detected in Layer 1, but stops at Layer 3 instead of travelling all
the way up to Layer N. In a communication protocol, for example, a resend request
may arrive from an impatient client who requested data some time ago. In the meantime
the server has already sent the answer, and the answer and the re-send request cross. In this
case, Layer 3 of the server side may notice this and intercept the re-send request without
further action.
Scenario V involves two stacks of N layers communicating with each other. This scenario
is well-known from communication protocols where the stacks are known as
'protocol stacks'. In the following diagram, Layer N of the left stack issues a request.
Implementation:
The following steps describe a step-wise refinement approach to the definition of a layered architecture.
Define the abstraction criterion for grouping tasks into layers.
o This criterion is often the conceptual distance from the platform (sometimes we encounter other abstraction paradigms as well).
o In the real world of software development we often use a mix of abstraction criteria. For example, the distance from the hardware can shape the lower levels, and conceptual complexity governs the higher ones.
o An example layering obtained using a mixed-model layering principle is shown below:
User-visible elements
Specific application modules
Common services level
Operating system interface level
Operating system
Hardware
Determine the number of abstraction levels according to your abstraction criterion.
Each abstraction level corresponds to one layer of the pattern.
Having too many layers may impose unnecessary overhead, while too few layers
can result in a poor structure.
Name the layers and assign the tasks to each of them.
The task of the highest layer is the overall system task, as perceived by the client.
The tasks of all other layers are to be helpers to higher layers.
Specify the services
It is often better to locate more services in higher layers than in lower layers.
The base layers should be kept slim while higher layers can expand to cover a
spectrum of applicability.
This phenomenon is also called the inverted pyramid of reuse.
Refine the layering
Iterate over steps 1 to 4.
It is not possible to define an abstraction criterion precisely before thinking about
the implied layers and their services.
Similarly, it is wrong to define components and services first and later impose a layered structure.
The solution is to perform the first four steps several times until a natural and
stable layering evolves.
Specify an interface for each layer.
If Layer J should be a black box for Layer J+1, design a flat interface that offers all of Layer J's services.
A white-box approach is one in which Layer J+1 sees the internals of Layer J.
A gray-box approach is a compromise between the black- and white-box approaches. Here Layer J+1 is aware of the fact that Layer J consists of three components, and addresses them separately, but does not see the internal workings of the individual components.
Structure individual layers
When an individual layer is complex, it should be broken into separate components.
This subdivision can be helped by using finer-grained patterns.
Specify the communication between adjacent layers.
Push model (most often used): when Layer J invokes a service of Layer J-1, any required information is passed as part of the service call.
Pull model: the reverse of the push model; it occurs when the lower layer fetches available information from the higher layer at its own discretion.
Decouple adjacent layers.
For top-down communication, where requests travel top-down, we can use one-way coupling (i.e., the upper layer is aware of the next lower layer, but the lower layer is unaware of the identity of its users); here return values are sufficient to transport the results in the reverse direction.
For bottom-up communication, you can use callbacks and still preserve a top-down one-way coupling. Here the upper layer registers callback functions with the lower layer.
We can also decouple the upper layer from the lower layer to a certain degree using object-oriented techniques.
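The callback approach above can be sketched as follows: the upper layer registers a callback with the lower layer, preserving one-way (top-down) coupling while still allowing bottom-up notifications. The class names are invented for the example.

```python
# Illustrative one-way coupling between layers: the upper layer knows the
# lower layer directly; the lower layer only notifies upward through
# callbacks registered with it, never by naming the upper layer.
class LowerLayer:
    def __init__(self):
        self._callbacks = []

    def register(self, callback):
        self._callbacks.append(callback)   # the upper layer registers here

    def device_event(self, data):
        for cb in self._callbacks:         # bottom-up "notification"
            cb(data)

class UpperLayer:
    def __init__(self, lower):
        self.received = []
        lower.register(self.on_event)      # top-down knowledge of the lower layer

    def on_event(self, data):
        self.received.append(data)
```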
Design an error handling strategy
An error can either be handled in the layer where it occurred or be passed to the next
higher layer.
2.Pipe Component
3.Data Source
4.Data Sink
1.Filter component:
Filter components are the processing units of the pipeline. A filter enriches, refines or transforms its input
data. It enriches data by computing and adding information, refines data by concentrating or extracting
information, and transforms data by delivering the data in some other representation. It is responsible for
the following activities:
The subsequent pipeline element pulls output data from the filter. (passive filter)
2. Pipe component:
Pipes denote the connection between filters, between the data source and the first filter, and between
the last filter and the data sink.
If two active components are joined, the pipe synchronizes them.
This synchronization is done with a first-in-first-out buffer.
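The synchronizing pipe above can be sketched as a bounded FIFO queue joining two active filters running as threads; this is an illustrative sketch under invented names.

```python
# Illustrative pipe between two active filters: a bounded FIFO buffer
# (queue.Queue) synchronizes the producer and consumer threads.
import queue
import threading

def run_pipeline(source_items):
    pipe = queue.Queue(maxsize=4)          # the pipe: a bounded FIFO buffer
    results = []

    def upper_filter():                    # active filter pushing into the pipe
        for item in source_items:
            pipe.put(item)
        pipe.put(None)                     # end-of-input marker

    def lower_filter():                    # active filter pulling from the pipe
        while (item := pipe.get()) is not None:
            results.append(item * 2)       # the filter transforms its input

    threads = [threading.Thread(target=upper_filter),
               threading.Thread(target=lower_filter)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

The bounded queue blocks a fast producer until the consumer catches up, which is exactly the synchronization role the pipe plays between active filters.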
As a rule of thumb, try to handle the errors at the lowest layer possible.
Q2) List the components of the Pipes and Filters architectural pattern? With a sketch, explain the CRC card for the same? (June 12)(Dec 12/Jan 13)(Dec 13/Jan 14)(8 Marks)
4.Data sink component:
The data sink collects the result from the end of the pipeline (i.e, it consumes output).
Two variants of data sink:
Soln:
The components of Pipes and Filters are:
1.Filter Component
Q4) Discuss the steps involved in the implementation of the Pipes and Filters architecture? (Dec 12)(12 Marks)
An active data sink pulls results out of the preceding processing stage.
A passive data sink allows the preceding filter to push or write the results into it.
Q3) Explain the forces that influence the solution to problems based on the Blackboard pattern? (Dec 12)(June 12)(Dec 12/Jan 13)(7 Marks)
Soln:
The following forces influence solutions to problems of this kind:
A complete search of the solution space is not feasible in a reasonable time. Since the domain is immature, we may need to experiment with different algorithms for the same subtask; hence, individual modules should be easily exchangeable.
There are different algorithms that solve partial problems.
Input, as well as intermediate and final results, have different representations, and the algorithms are implemented according to different paradigms. An algorithm usually works on the results of other algorithms. Uncertain data and approximate solutions are involved. Employing disjoint algorithms induces potential parallelism.
Q4) Write a note on the HEARSAY-II System? (June 12)(5 Marks)
Soln:
HEARSAY-II. The first blackboard system was the HEARSAY-II speech recognition system from the early 1970s. It was developed as a natural language interface to a literature database. Its task was to answer queries about documents and to retrieve documents from a collection of abstracts of Artificial Intelligence publications. The inputs to the system were acoustic signals that were semantically interpreted and then transformed into a database query. The control component of HEARSAY-II consists of the following:
The focus-of-control database, which contains a table of primitive change types of blackboard changes, and those condition-parts that can be executed for each change type.
The scheduling queue, which contains pointers to condition- or action-parts of knowledge sources.
The monitor, which keeps track of each change made to the blackboard.
The scheduler, which uses experimentally derived heuristics to calculate priorities for the condition- and action-parts waiting in the scheduling queue.
Soln:
Implementation:
Divide the system's tasks into a sequence of processing stages.
Each stage must depend only on the output of its direct predecessor.
All stages are conceptually connected by the data flow.
Define the data format to be passed along each pipe.
Defining a uniform format results in the highest flexibility because it makes recombination of filters easy.
You must also define how the end of input is marked.
Decide how to implement each pipe connection.
This decision directly determines whether you implement the filters as active or passive components.
Adjacent pipes further define whether a passive filter is triggered by push or pull of data.
Design and implement the filters.
Design of a filter component is based both on the task it must perform and on the adjacent pipes.
You can implement passive filters as a function, for pull activation, or as a procedure for push activation.
Active filters can be implemented either as processes or as threads in the pipeline program.
Design the error handling.
Because the pipeline components do not share any global state, error handling is hard to address and is often neglected. As a minimum, error detection should be possible. UNIX defines a specific output channel for error messages, standard error. If a filter detects an error in its input data, it can ignore input until some clearly marked separation occurs. It is hard to give a general strategy for error handling in a system based on the Pipes and Filters pattern.
Set up the processing pipeline.
If your system handles a single task you can use a standardized main program that sets up the pipeline and
starts processing. You can increase the flexibility by providing a shell or other end-user facility to set
up various pipelines from your set of filter components.
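The steps above can be sketched as a small pull-model pipeline in which passive filters are generator functions and the data sink drives processing. The filters and the line-based data format are invented for the example.

```python
# Illustrative pull-model pipeline: each passive filter is a generator
# that pulls from its predecessor; the data sink drives the pipeline.
def source(lines):
    yield from lines                      # data source: emits the input

def strip_filter(pipe):
    for line in pipe:
        yield line.strip()                # refine: remove surrounding blanks

def upper_filter(pipe):
    for line in pipe:
        yield line.upper()                # transform: change representation

def sink(pipe):
    return list(pipe)                     # data sink: collects the results

pipeline = sink(upper_filter(strip_filter(source(["  ab ", "cd"]))))
```

Because each stage depends only on its direct predecessor, filters can be recombined freely, which is the flexibility the uniform data format buys.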
Q5) Explain the Layers architectural pattern, with sketches and CRC cards? (Dec 12/Jan 13)(6 Marks)
Soln:
Structure your system into an appropriate number of layers and place them on top of each other. The lowest layer is called Layer 1 (the base of our system), the highest is called Layer N; i.e. Layer 1, ..., Layer J-1, Layer J, ..., Layer N. Most of the services that Layer J provides are composed of services provided by Layer J-1. In other words, the services of each layer implement a strategy for combining the services of the layer below in a meaningful way. In addition, Layer J's services may depend on other services in Layer J.
Structure:
An individual layer can be described by the following CRC card:
The main structural characteristic of the Layers pattern is that the services of Layer J are used only by Layer J+1: there are no further direct dependencies between layers. This structure can be compared with a stack, or even an onion. Each individual layer shields all lower layers from direct access by higher layers.
Hardware
Determine the number of abstraction levels according to your abstraction criterion.
Each abstraction level corresponds to one layer of the pattern. Having too many layers may impose
unnecessary overhead, while too few layers can result in a poor structure.
Name the layers and assign the tasks to each of them.
The task of the highest layer is the overall system task, as perceived by the client. The tasks of all other
layers are to be helpers to higher layers.
Specify the services
It is often better to locate more services in higher layers than in lower layers. The base layers should be
kept slim while higher layers can expand to cover a spectrum of applicability. This phenomenon is
also called the inverted pyramid of reuse.
Refine the layering
Iterate over steps 1 to 4.
It is not possible to define an abstraction criterion precisely before thinking about the implied layers and
their services.
Conversely, it is wrong to define components and services first and only later impose a layered structure.
The solution is to perform the first four steps several times until a natural and stable layering evolves.
Specify an interface for each layer.
If layer J should be a black box for layer J+1, design a flat interface that offers all of layer J's services.
The white-box approach is one in which layer J+1 sees the internals of layer J.
The gray-box approach is a compromise between the black-box and white-box approaches. Here layer J+1 is aware
of the fact that layer J consists of, say, three components, and addresses them separately, but does not see the
internal workings of the individual components.
Structure individual layers
When an individual layer is complex, it should be broken into separate components.
This subdivision can be helped by using finer-grained patterns.
Specify the communication between adjacent layers.
Push model (most often used): when layer J+1 invokes a service of layer J, any required information is
passed as part of the service call.
Pull model: the reverse of the push model. It occurs when the lower layer fetches available
information from the higher layer at its own discretion.
Decouple adjacent layers.
For top-down communication, where requests travel top-down, we can use one-way coupling (i.e., the upper
layer is aware of the next lower layer, but the lower layer is unaware of the identity of its users); here
return values are sufficient to transport the results in the reverse direction.
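The flat black-box interface, one-way coupling, and push model described in the steps above can be sketched as follows. This is a hypothetical two-layer stack; the layer names and message formats are illustrative assumptions:

```python
# Minimal Layers sketch: each layer only calls the layer directly
# below it (one-way coupling), and results flow back as return values.

class TransportLayer:
    # Layer 1: the lowest layer; it is unaware of the identity of
    # its users (one-way coupling).
    def send(self, payload):
        return f"<frame>{payload}</frame>"

class SessionLayer:
    # Layer 2: implements its service by combining layer-1 services.
    def __init__(self, lower):
        self._lower = lower  # reference to the next lower layer only

    def send_message(self, text):
        # Push model: required information travels down with the call;
        # the result is transported back via the return value.
        return self._lower.send(f"session:{text}")

stack = SessionLayer(TransportLayer())
print(stack.send_message("hello"))  # <frame>session:hello</frame>
```

Note that `TransportLayer` holds no reference to `SessionLayer`, so the lower layer could be exchanged without the upper layer's knowledge, which is exactly the changeability the pattern aims for.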
Q7) Explain the benefits and liabilities of the Pipes and Filters pattern?(Dec13/Jan14)(6 Marks)
Soln:
The Pipes and Filters architectural pattern has the following benefits
No intermediate files necessary, but possible.
Computing results using separate programs is possible without pipes, by storing intermediate results in
files.
Flexibility by filter exchange
Dept. of CSE, SJBIT
Filters have a simple interface that allows their easy exchange within a processing pipeline.
Flexibility by recombination
This benefit, combined with the reusability of filter components, allows you to create new processing pipelines
by rearranging filters or by adding new ones.
Reuse of filter components.
Support for recombination leads to easy reuse of filter components.
Rapid prototyping of pipelines.
Easy to prototype a data processing system from existing filters.
Efficiency by parallel processing.
It is possible to start active filter components in parallel in a multiprocessor system or a network.
The Pipes and Filters architectural pattern has the following Liabilities
Sharing state information is expensive or inflexible
This pattern is inefficient if your processing stages need to share a large amount of global data.
Efficiency gain by parallel processing is often an illusion, because:
The cost of transferring data between filters is relatively high compared to the cost of the computation
carried out by a single filter.
Some filters consume all their input before producing any output.
Context-switching between threads or processes is expensive on a single-processor machine.
Synchronization of filters via pipes may start and stop filters often.
Data transformation overhead
Using a single data type for all filter input and output to achieve highest flexibility results in data
conversion overheads.
Error handling is difficult. A concrete error-recovery or error-handling strategy depends on the task you need to solve.
Q8) What are the known uses of the Blackboard pattern?(Dec 13/Jan 14)(4 marks)
Sol:
Known uses:
HEARSAY-II: The first Blackboard system was the HEARSAY-II speech recognition system from the
early 1970s. It was developed as a natural language interface to a literature database. Its task was to
answer queries about documents and to retrieve documents from a collection of abstracts of Artificial
Intelligence publications. The inputs to the system were acoustic signals that were semantically
interpreted and then transformed into a database query. The control component of HEARSAY-II consists
of the following:
The focus of control database, which contains a table of primitive change types of blackboard changes,
and those condition-parts that can be executed for each change type.
The scheduling queue, which contains pointers to condition- or action-parts of knowledge sources.
The monitor, which keeps track of each change made to the blackboard.
The scheduler, which uses experimentally derived heuristics to calculate priorities for the condition- and action-parts waiting in the scheduling queue.
HASP/SIAP
CRYSALIS
TRICERO
SUS
Q9) Illustrate the behavior of the Blackboard architecture based on speech recognition and list the
steps to implement the Blackboard pattern?(June/July 13)(10 marks)
Soln:
Implementation:
Define the problem
Specify the domain of the problem
Scrutinize the input to the system
Define the o/p of the system
Detail how the user interacts with the system.
Define the solution space for the problem
Specify exactly what constitutes a top level solution
List the different abstraction levels of solutions
Organize solutions into one or more abstraction hierarchy.
Find subdivisions of complete solutions that can be worked on independently.
Divide the solution process into steps.
Define how solutions are transformed into higher level solutions.
Describe how to predict hypotheses at the same abstraction level.
Detail how to verify predicted hypotheses by finding support for them in other levels.
Specify the kind of knowledge that can be used to exclude parts of the solution space.
Divide the knowledge into specialized knowledge sources with their own subtasks.
These subtasks often correspond to areas of specialization.
Define the vocabulary of the blackboard
Elaborate your first definition of the solution space and the abstraction levels of your solution.
Find a representation for solutions that allows all knowledge sources to read from and contribute to the
blackboard.
The vocabulary of the blackboard cannot be defined independently of the knowledge sources and the control component.
Specify the control of the system.
The control component implements an opportunistic problem-solving strategy that determines which
knowledge sources are allowed to make changes to the blackboard. The aim of this strategy is to construct
a hypothesis that is acceptable as a result. The following mechanisms optimize the evaluation of
knowledge sources, and so increase the effectiveness and performance of the control strategy:
Classifying changes to the blackboard into two types: one type specifies all blackboard changes that may imply a new set
of applicable knowledge sources, and the other specifies all blackboard changes that do not.
Associating categories of blackboard changes with sets of possibly applicable knowledge sources.
Focusing of control: the focus contains either partial results on the blackboard or knowledge sources that should be
preferred over others.
Creating a queue in which knowledge sources classified as applicable wait for their execution.
Implement the knowledge sources
Split the knowledge sources into condition-parts and action-parts according to the needs of the control
component. Different knowledge sources can be implemented in the same system using different
technologies.
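The condition-/action-part split and the opportunistic control loop described above can be sketched in code. This is a toy Python illustration with made-up knowledge sources, not a real speech-recognition system:

```python
# Blackboard sketch: knowledge sources with condition- and action-parts,
# and a control loop that executes applicable sources opportunistically.

class Blackboard:
    def __init__(self):
        self.hypotheses = []  # shared solution data at several levels

class KnowledgeSource:
    def __init__(self, condition, action):
        self.condition = condition  # condition-part: can it contribute?
        self.action = action        # action-part: change the blackboard

def control(blackboard, sources):
    # Keep executing applicable knowledge sources until none applies.
    progress = True
    while progress:
        progress = False
        for ks in sources:
            if ks.condition(blackboard):
                ks.action(blackboard)
                progress = True

# Toy scenario: raise a raw "signal" hypothesis to a "word" in two steps.
bb = Blackboard()
bb.hypotheses.append(("signal", "h-i"))

segment = KnowledgeSource(
    condition=lambda b: any(k == "signal" for k, _ in b.hypotheses)
              and not any(k == "phones" for k, _ in b.hypotheses),
    action=lambda b: b.hypotheses.append(("phones", ["h", "i"])),
)
assemble = KnowledgeSource(
    condition=lambda b: any(k == "phones" for k, _ in b.hypotheses)
              and not any(k == "word" for k, _ in b.hypotheses),
    action=lambda b: b.hypotheses.append(("word", "hi")),
)

control(bb, [segment, assemble])
# bb.hypotheses now contains ("word", "hi") at the highest level
```

The loop terminates because each condition-part checks that its result is not already on the blackboard, which stands in for the focus-of-control mechanisms described above.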
Unit-V
Q1) What do you mean by broker architecture? What are the steps involved in implementing the
distributed Broker architecture pattern?(Dec 12/Jan 13)(June 2012)(June/July 13)(10 Marks)
Soln:
The broker architectural pattern can be used to structure distributed software systems with decoupled
components that interact by remote service invocations. A broker component is responsible for
coordinating communication, such as requests, as well as for transmitting results and exceptions.
Implementation:
1) Define an object model, or use an existing model.
Each object model must specify entities such as object names, requests, objects, values, exceptions,
supported types, interfaces and operations.
2) Decide which kind of component-interoperability the system should offer.
You can design for interoperability either by specifying a binary standard or by introducing a high-level
IDL.
IDL file contains a textual description of the interfaces a server offers to its clients.
The binary approach needs support from your programming language.
3) Specify the APIs the broker component provides for collaborating with clients and servers.
Decide whether clients should only be able to invoke server operations statically, allowing clients to bind
the invocations at compile time, or whether you want to allow dynamic invocation of servers as well.
This has a direct impact on the size and number of APIs.
4) Use proxy objects to hide implementation details from clients and servers.
Client-side proxies package procedure calls into messages and forward these messages to the local broker
component.
Server-side proxies receive requests from the local broker and call the methods in the interface
implementation of the corresponding server.
5) Design the broker component in parallel with steps 3 and 4
During design and implementations, iterate systematically through the following steps
5.1 Specify a detailed on-the-wire protocol for interacting with client side and server side proxies.
5.2 A local broker must be available for every participating machine in the network.
5.3 When a client invokes a method of a server the broker system is responsible for returning all results
and exceptions back to the original client.
5.4 If the proxies do not provide mechanisms for marshalling and unmarshalling parameters and results, you
must include that functionality in the broker component.
5.5 If your system supports asynchronous communication b/w clients and servers, you need to provide
message buffers within the broker or within the proxies for temporary storage of messages.
5.6 Include a directory service for associating local server identifiers with the physical location of the
corresponding servers in the broker.
5.7 When your architecture requires system-unique identifiers to be generated dynamically during server
registration, the broker must offer a name service for generating such names.
5.8 If your system supports dynamic method invocation the broker needs some means for maintaining
type information about existing servers.
5.9 Plan the broker's actions when the communication with clients, other brokers, or servers fails.
6) Develop IDL compilers
An IDL compiler translates the server interface definitions to programming language code. When many
programming languages are in use, it is best to develop the compiler as a framework that allows the
developer to add his own code generators.
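Steps 3 to 5 above (broker APIs, proxies, and the directory service) can be sketched in miniature. All names here are hypothetical, and a real broker would add marshalling, networking, and error handling:

```python
# Broker sketch: a registry-based broker with a client-side proxy.
# The proxy packages calls into requests; the broker locates the
# server (directory service) and forwards the request.

class Broker:
    def __init__(self):
        self._servers = {}  # directory service: name -> server object

    def register(self, name, server):
        self._servers[name] = server

    def forward(self, name, method, *args):
        # Locate the server, deliver the request, return the result.
        server = self._servers[name]
        return getattr(server, method)(*args)

class ClientProxy:
    # Client-side proxy: hides the broker mechanics from the client.
    def __init__(self, broker, server_name):
        self._broker = broker
        self._name = server_name

    def call(self, method, *args):
        return self._broker.forward(self._name, method, *args)

class EchoServer:
    # A trivial server offering one operation.
    def echo(self, text):
        return f"echo: {text}"

broker = Broker()
broker.register("echo-service", EchoServer())
proxy = ClientProxy(broker, "echo-service")
print(proxy.call("echo", "hi"))  # echo: hi
```

The client only ever talks to `proxy`, so the server's location and identity can change behind the broker without affecting client code, which is the decoupling the pattern promises.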
Q2)Explain with neat diagram, dynamic scenario of MVC?(June 2012)(Dec 12/Jan 13) (10 Marks)
Soln:
Dynamics: The following scenarios depict the dynamic behavior of MVC. For simplicity only one view-controller pair is shown in the diagrams.
Scenario I shows how user input that results in changes to the model triggers the change-propagation
mechanism:
The controller accepts user input in its event-handling procedure, interprets the event, and activates a
service procedure of the model.
The model performs the requested service. This results in a change to its internal data.
The model notifies all views and controllers registered with the change-propagation mechanism of the
change by calling their update procedures.
Each view requests the changed data from the model and redisplays itself on the screen.
Each registered controller retrieves data from the model to enable or disable certain user functions.
The original controller regains control and returns from its event handling procedure.
Scenario II shows how the MVC triad is initialized. The following steps occur:
The model instance is created, which then initializes its internal data structures.
A view object is created. This takes a reference to the model as a parameter for its initialization.
The view subscribes to the change-propagation mechanism of the model by calling the attach procedure.
The view continues initialization by creating its controller. It passes references both to the model and to
itself to the controller's initialization procedure.
The controller also subscribes to the change-propagation mechanism by calling the attach procedure.
After initialization, the application begins to process events.
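Scenario I's change-propagation mechanism can be sketched as follows. This is a minimal counter model; all class and method names (`Model`, `View`, `Controller`, `attach`, `update`, `handle_click`) are illustrative assumptions:

```python
# MVC sketch with change propagation: views attach to the model and
# are notified via update() whenever the model's data changes.

class Model:
    def __init__(self):
        self.value = 0
        self._observers = []

    def attach(self, observer):
        # Registration step from Scenario II.
        self._observers.append(observer)

    def increment(self):
        # Service procedure: change internal data, then propagate.
        self.value += 1
        for obs in self._observers:  # change-propagation mechanism
            obs.update()

class View:
    def __init__(self, model):
        self.model = model
        self.displayed = None
        model.attach(self)  # subscribe to change propagation

    def update(self):
        # Retrieve the changed data from the model and "redisplay".
        self.displayed = f"value = {self.model.value}"

class Controller:
    def __init__(self, model):
        self.model = model

    def handle_click(self):
        # Interpret the event and activate a model service.
        self.model.increment()

model = Model()
view = View(model)
Controller(model).handle_click()
print(view.displayed)  # value = 1
```

Note that the model never references `View` or `Controller` types directly, only the generic `update` interface, so further views could be attached without changing the model.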
Soln:
The MVC architectural pattern divides an interactive application into three components.
The model contains the core functionality and data.
Views display information to the user.
Controllers handle user input.
Views and controllers together comprise the user interface.
A change propagation mechanism ensures consistency between the user interface and the model.
Example: Consider a simple information system for political elections with proportional representation. This offers
a spreadsheet for entering data and several kinds of tables and charts for presenting the current results.
Users can interact with the system via a graphical interface. All information displays must reflect changes
to the voting data immediately.
Context: Interactive applications with a flexible human-computer interface.
Problem: Different users place conflicting requirements on the user interface. A typist enters information
into forms via the keyboard. A manager wants to use the same system mainly by clicking icons and
buttons. Consequently, support for several user interface paradigms should be easily incorporated. How
do you modularize the user interface functionality of a web application so that you can easily modify the
individual parts? The following forces influence the solution:
The same information is presented differently in different windows, for example in a bar or pie chart.
The display and behavior of the application must reflect data manipulations immediately.
Changes to the user interface should be easy, and even possible at run-time.
Supporting different look-and-feel standards or porting the user interface should not affect code in the core of the application.
Solution:
MVC divides an interactive application into three areas: processing, output and input.
The model component encapsulates core data and functionality and is independent of output and input.
View components display information to the user; a view obtains the data from the model. There can be
multiple views of the model.
Each view has an associated controller component. Controllers receive input, usually as mouse or keyboard
events, and translate these events to service requests for the model or the view. The user interacts with the system
solely through controllers.
The separation of the model from the view and controller components allows multiple views of the same
model.
Structure:
Model component:
Contains the functional core of the application.
Registers dependent views and controllers.
Notifies dependent components about data changes (change propagation mechanism).
View component:
Presents information to the user.
Retrieves data from the model.
Creates and initializes its associated controller.
Implements the update procedure.
Controller component:
Accepts user input as events (mouse events, keyboard events etc.).
Translates events to service requests for the model or display requests for the view.
An object-oriented implementation of MVC would define a separate class for each component. In a C++
implementation, view and controller classes share a common parent that defines the update interface. This
is shown in the following diagram.
Scenario II illustrates the behaviour when a client sends a request to a local server. In this scenario we
describe a synchronous invocation, in which the client blocks until it gets a response from the server. The
broker may also support asynchronous invocations, allowing clients to execute further tasks without
having to wait for a response.
Q5) Discuss the consequences of the Presentation-Abstraction-Control architectural pattern?(Dec 12)(Dec
12/Jan13)(10 Marks)
Soln:
Consequences:
Benefits: Separation of concerns
o Different semantic concepts in the application domain are represented by separate agents.
Support for change and extension
o Changes within the presentation or abstraction components of a PAC agent do not affect other agents in
the system.
Support for multi tasking
o PAC agents can be distributed easily to different threads, processes or machines.
o Multi tasking also facilitates multi user applications.
Liabilities:
Increased system complexity
Implementation of every semantic concept within an application as its own PAC agent may result in a
complex system structure.
Complex control component
Individual roles of control components should be strongly separated from each other. The
implementations of these roles should not depend on specific details of other agents.
The interface of control components should be independent of internal details.
It is the responsibility of the control component to perform any necessary interface and data adaptation.
Efficiency:
Overhead in communication between PAC agents may impact system efficiency.
Example: all intermediate-level agents are involved in the data exchange if a bottom-level agent retrieves
data from a top-level agent.
The smaller the atomic semantic concepts of an application are, and the greater the similarity of their user
interfaces, the less applicable this pattern is.
Q6) What are known uses of PAC?(Dec 13/Jan 14)
Soln:
Known uses:
Network traffic management
Gathering traffic data from switching units.
Threshold checking and generation of overflow exceptions.
Logging and routing of network exceptions.
Visualization of traffic flow and network exceptions.
Displaying various user-configurable views of the whole network.
Statistical evaluations of traffic data.
Access to historic traffic data.
UNIT-VI
Q1) State and explain the properties of reflection pattern (June/July 13) (Dec13/Jan 14)(10marks)
Soln:
The Reflection architectural pattern provides a mechanism for changing the structure and behavior of software
systems dynamically. It supports the modification of fundamental aspects, such as type structures and
function call mechanisms. In this pattern, an application is split into two parts:
A meta level provides information about selected system properties and makes the s/w self-aware.
A base level includes the application logic. Changes to information kept in the meta level affect subsequent
base-level behavior.
Context: Building systems that support their own modification a priori.
Problem:
Designing a system that meets a wide range of different requirements a priori can be an overwhelming
task.
A better solution is to specify an architecture that is open to modification and extension i.e., we have to
design for change and evolution.
Several forces are associated with the problem:
Changing software is tedious, error prone and often expensive.
Adaptable software systems usually have a complex inner structure. Aspects that are subject to change are
encapsulated within separate components.
The more techniques that are necessary for keeping a system changeable, the more awkward and complex
its modification becomes.
Changes can be of any scale, from providing shortcuts for commonly used commands to adapting an
application framework for a specific customer.
Even fundamental aspects of software systems can change for ex. communication mechanisms b/w
components.
Solution:
Make the software self-aware, and make selected aspects of its structure and behavior accessible for
adaptation and change. This leads to an architecture that is split into two major parts: a meta level and a base level.
The meta level provides a self-representation of the s/w to give it knowledge of its own structure and behavior,
and consists of so-called meta objects (they encapsulate and represent information about the software).
Ex: type structures, algorithms or function call mechanisms.
The base level defines the application logic. Its implementation uses the meta objects to remain independent
of those aspects that are likely to change.
An interface is specified for manipulating the meta objects. It is called the meta object protocol (MOP)
and allows clients to specify particular changes.
Structure:
1. Meta level
2. Meta object protocol (MOP)
3. Base level
1. Meta level
The meta level consists of a set of meta objects. Each meta object encapsulates selected information about a
single aspect of the structure, behavior, or state of the base level.
There are three sources for such information:
It can be provided by the run-time environment of the system, such as C++ type identification objects.
It can be user-defined, such as a function call mechanism.
It can be retrieved from the base level at run time.
All meta objects together provide a self-representation of an application.
What you represent with meta objects depends on what should be adaptable: only system details that are
likely to change, or which vary from customer to customer, should be encapsulated by meta objects.
The interface of a meta object allows the base level to access the information it maintains or the services
it offers.
2. Meta object protocol (MOP)
The MOP is the interface through which clients specify changes to meta objects. One way of providing this
access is to allow the MOP to operate directly on their internal state. Another way (safer, but less efficient) is to
provide a special interface for their manipulation, only accessible by the MOP.
3. Base level
It models and implements the application logic of the software. Its components represent the various
services the system offers as well as their underlying data model.
It uses the information and services provided by the meta objects, such as location information about components
and function call mechanisms. This allows the base level to remain flexible.
Base level components are either directly connected to the meta objects on which they depend, or submit
requests to them through special retrieval functions.
The general structure of a reflective architecture is very much like a Layered system
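The meta level/base level split and the role of the MOP can be sketched as follows. This is a toy illustration in which the changeable aspect is a serialization function; all names are assumptions, not from the text:

```python
import json

# Reflection sketch: a meta object stores a changeable aspect of the
# system, and the MOP is the only sanctioned way to modify it.

class MetaObject:
    # Meta level: encapsulates one aspect likely to change.
    def __init__(self, serializer):
        self.serializer = serializer

class MOP:
    # Meta object protocol: clients specify changes through it,
    # never by editing base-level source code.
    def __init__(self, meta):
        self._meta = meta

    def change_serializer(self, serializer):
        self._meta.serializer = serializer

class Record:
    # Base level: application logic that uses the meta object and so
    # stays independent of the concrete output format.
    def __init__(self, meta, data):
        self._meta = meta
        self.data = data

    def serialize(self):
        return self._meta.serializer(self.data)

meta = MetaObject(serializer=str)
record = Record(meta, {"id": 1})
before = record.serialize()   # Python repr-style output

MOP(meta).change_serializer(json.dumps)
after = record.serialize()    # JSON output, with no change to Record
```

Changing the system's behavior required only a MOP call; `Record` was never touched, which is the "no explicit modification of source code" benefit listed below.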
Q2) What are the steps involved in implementing a microkernel system?(June 12)(June/July
13)(Dec 13/Jan 14)(June/July 2014)(12 Marks)
Soln:
Implementation:
1. Analyze the application domain:
Perform a domain analysis and identify the core functionality necessary for implementing external servers.
2. Analyze external servers
That is, the policies external servers are going to provide.
3. Categorize the services
Group all functionality into semantically independent categories.
4. Partition the categories
Separate the categories into services that should be part of the microkernel, and those that should be
available as internal servers.
The first layer includes information and functionality that allows software to identify and compare types.
The second layer provides more detailed information about the type system of an application.
The third layer provides information about relative addresses of data members, and offers functions for
creating 'empty' objects of user-defined types.
The fourth layer provides full type information, such as that about friends of a class, protection of data
members, or argument and return types of function members.
PGen
It allows an application to store and read arbitrary C++ object structures.
NEDIS
NEDIS includes a meta level called run-time data dictionary. It provides the following services and
system information:
Properties for certain attributes of classes, such as their allowed value ranges.
Functions for checking attribute values against their required properties.
Default values for attributes of classes, used to initialize new objects.
Functions specifying the behavior of the system in the event of errors
Country-specific functionality, for example for tax calculation.
Information about the 'look and feel' of the software, such as the layout of input masks or the language to
be used in the user interface.
OLE 2.0
It provides functionality for exposing and accessing type information about OLE objects and their
interfaces.
Q4) Explain the components of microkernel pattern (Dec 12) (Dec 12/Jan 13)(10 Marks)
Soln:
The Microkernel pattern defines five kinds of participating components:
Internal servers
External servers
Adapters
Clients
Microkernel
Microkernel
The microkernel represents the main component of the pattern.
It implements central services such as communication facilities or resource handling.
The microkernel is also responsible for maintaining system resources such as processes or files.
It controls and coordinates the access to these resources.
A microkernel implements atomic services, which we refer to as mechanisms.
These mechanisms serve as a fundamental base on which more complex functionality, called policies, is
constructed.
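The mechanism/policy distinction can be sketched as follows. The kernel services here are hypothetical; real microkernels deal with processes, memory, and inter-process communication:

```python
# Microkernel sketch: the kernel offers atomic services (mechanisms);
# an external server composes them into higher-level policies.

class Microkernel:
    # Mechanisms: atomic, policy-free services.
    def __init__(self):
        self._resources = {}

    def allocate(self, name):
        # Resource handling: record and hand out a resource.
        self._resources[name] = "allocated"
        return name

    def send(self, resource, message):
        # Communication facility: deliver a message to a resource.
        return f"{resource} <- {message}"

class ExternalServer:
    # A "policy": combines kernel mechanisms into a complete service.
    def __init__(self, kernel):
        self._kernel = kernel

    def start_task(self, task_name):
        res = self._kernel.allocate(task_name)
        return self._kernel.send(res, "start")

server = ExternalServer(Microkernel())
print(server.start_task("task1"))  # task1 <- start
```

Because `ExternalServer` only uses the kernel's small mechanism interface, different servers can implement very different policies on the same kernel, which is the portability argument behind the pattern.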
Internal server (subsystem):
Extends the functionality provided by the microkernel.
It represents a separate component that offers additional functionality.
The microkernel invokes the functionality of internal servers via service requests.
Internal servers can therefore encapsulate some dependencies on the underlying hardware or software
system.
External server (personality):
Uses the microkernel to implement its own view of the underlying application domain, which it exports
through the interfaces it offers to its clients.
Client:
It is an application that is associated with exactly one external server. It only accesses the programming
interfaces provided by the external server.
A problem arises if a client accesses the interfaces of its external server directly (a direct dependency):
such a system does not support changeability, and if external servers emulate existing application
platforms, clients will not run without modifications.
Adapter (emulator):
Represents the interfaces between clients and their external servers and allows clients to access the services of
their external server in a portable way.
Adapters are part of the client's address space.
The following OMT diagram shows the static structure of a microkernel system.
Q5) Explain the advantages and disadvantages of the reflective architectural pattern?(June 12)(6
Marks)
Soln:
The reflection architecture provides the following benefits:
No explicit modification of source code:
You just specify a change by calling a function of the MOP.
Changing a software system is easy:
The MOP provides a safe and uniform mechanism for changing s/w. It hides all specific techniques, such as the
use of visitors and factories, from the user.
Support for many kinds of change:
Meta objects can encapsulate every aspect of system behavior, state and structure.
The reflection architecture also has liabilities:
Modifications at the meta level may cause damage:
Incorrect modifications from users can cause serious damage to the s/w or its environment. Ex: changing a
database schema without suspending the execution of objects in the applications that use it, or passing
code to the MOP that includes semantic errors.
Robustness of the MOP is therefore of great importance.
Q2) Give the structure of the Whole-Part design with CRC? (June/July 13)(Dec 13/Jan 14)(5 Marks)
Soln:
The Whole-Part pattern introduces two types of participant:
Whole
A whole object represents an aggregation of smaller objects, which we call parts.
It forms a semantic grouping of its parts in that it coordinates and organizes their collaboration.
Some methods of the whole may be just placeholders for specific part services; when such a method is
invoked, the whole only calls the relevant part service and returns the result to the client.
Part
Each part object is embedded in exactly one whole. Two or more wholes cannot share the same part.
Each part is created and destroyed within the life span of its whole.
The static relationships between a whole and its parts are illustrated in the OMT diagram below.
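The Whole-Part collaboration can be sketched as follows. The `Address` whole and its part classes are hypothetical, chosen only to illustrate the roles:

```python
# Whole-Part sketch: the whole creates and owns its parts, and its
# methods merely forward to part services and combine the results.

class Street:
    # A part: only ever used through its whole.
    def __init__(self, name):
        self._name = name

    def text(self):
        return self._name

class City:
    # Another part with its own small service.
    def __init__(self, name):
        self._name = name

    def text(self):
        return self._name

class Address:
    # The whole: creates its parts itself, so each part is embedded in
    # exactly one whole and lives within the whole's life span.
    def __init__(self, street, city):
        self._street = Street(street)
        self._city = City(city)

    def full(self):
        # Placeholder method: calls the relevant part services and
        # returns the combined result to the client.
        return f"{self._street.text()}, {self._city.text()}"

print(Address("Main St", "Springfield").full())  # Main St, Springfield
```

Clients never obtain references to `Street` or `City` objects, so part implementations could be exchanged without modifying clients, matching the changeability benefit listed later.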
Unit VII
Q1) What are the application areas of master slave pattern (Dec 13/Jan 14) (10 Marks)
Soln:
There are 3 application areas for master slave pattern.
Master-slave for fault tolerance
In this variant the master just delegates the execution of a service to a fixed number of replicated
implementations, each represented by a slave.
Master-slave for parallel computation
In this variant the master divides a complex task into a number of identical sub-tasks, each of which is
executed in parallel by a separate slave.
Master-slave for computational accuracy.
In this variant the execution of a service is delegated to at least three different implementations, each of
which is a separate slave.
Other variants
Slaves as processes
To handle slaves located in separate processes, you can extend the original Master-Slave structure with
two additional components
Slaves as threads
In this variant the master creates the threads, launches the slaves, and waits for all threads to complete
before continuing with its own computation.
Master-slave with slave coordination
In this case the computation of all slaves must be regularly suspended for each slave to coordinate itself
with the slaves on which it depends, after which the slaves resume their individual computation.
Virtual proxy:
Processing or loading a component is costly while partial information about the component may be
sufficient.
Firewall proxy:
Local clients should be protected from the outside world.
Q4) Discuss the 5-step implementation of the Master-Slave pattern? (Dec 2012)(June/July 13)(10 Marks)
Soln:
Implementation:
1. Divide work:
Specify how the computation of the task can be split into a set of equal sub-tasks. Identify the sub-services that are necessary to process a sub-task.
2. Combine sub-task results
Specify how the final result of the whole service can be computed with the help of the results obtained
from processing individual sub-tasks.
3. Specify cooperation between master and slaves
Define an interface for the sub-service identified in step 1. It will be implemented by the slaves and used by the master to delegate the processing of individual sub-tasks. One option for passing sub-tasks from the master to the slaves is to include them as a parameter when invoking the sub-service.
Another option is to define a repository where the master puts sub-tasks and the slaves fetch them.
4. Implement the slave components according to the specifications developed in previous step.
5. Implement the master according to the specifications developed in steps 1 to 3.
There are two options for dividing a task into subtasks.
The first is to split work into a fixed number of subtasks.
The second option is to define as many subtasks as necessary or possible.
Use strategy pattern to support dynamic exchange and variations of algorithms for subdividing a task.
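The five steps can be illustrated with a minimal Python sketch (all names here, such as `SumMaster`, are invented for illustration): the master splits a summation task into equal chunks (step 1), delegates each chunk to the slave sub-service (step 3), and combines the partial results (step 2). The split strategy is passed in as a parameter so it can be exchanged, as the Strategy-pattern note above suggests.

```python
# Step 3: interface for the sub-service, implemented by the slaves.
def slave_sum(subtask):
    """Slave (step 4): process one sub-task; here, sum a chunk of numbers."""
    return sum(subtask)

# One strategy for dividing work into a fixed number of sub-tasks.
def fixed_split(data, n):
    """Split `data` into n roughly equal chunks."""
    k = max(1, len(data) // n)
    return [data[i:i + k] for i in range(0, len(data), k)]

class SumMaster:
    """Master (step 5): divides work, delegates to slaves, combines results."""
    def __init__(self, split_strategy=fixed_split, n_slaves=4):
        self.split = split_strategy        # exchangeable split algorithm
        self.n_slaves = n_slaves

    def service(self, data):
        subtasks = self.split(data, self.n_slaves)      # step 1: divide work
        partials = [slave_sum(t) for t in subtasks]     # delegate to slaves
        return sum(partials)                            # step 2: combine results

print(SumMaster().service(list(range(101))))  # 5050
```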
Q5) Define Proxy design. Discuss the benefits and liabilities of the same?(June/July 13)(10 Marks)
Soln:
The Proxy design pattern makes the clients of a component communicate with a representative rather than with the component itself. Introducing such a placeholder can serve many purposes, including enhanced efficiency, easier access and protection from unauthorized access.
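A minimal sketch of the idea in Python (hypothetical class names; a protection-proxy flavor): the client holds the proxy, and the proxy checks access before forwarding the call to the real component.

```python
class Original:
    """The real component; potentially costly or access-restricted."""
    def request(self):
        return "data from original"

class Proxy:
    """Placeholder with the same interface as the original.
    It guards access and forwards valid calls to the original."""
    def __init__(self, original, authorized):
        self._original = original
        self._authorized = authorized

    def request(self):
        if not self._authorized:
            raise PermissionError("unauthorized access")
        return self._original.request()  # delegate to the real component

proxy = Proxy(Original(), authorized=True)
print(proxy.request())  # forwarded to the original
```

Because proxy and original share the same interface, a client cannot tell which one it holds; that indirection is also the source of the efficiency liability discussed for this pattern.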
Dept. of CSE, SJBIT
8. Changeability of parts:
Part implementations may even be completely exchanged without any need to modify other parts or clients.
9. Separation of concerns:
Each concern is implemented by a separate part.
10. Reusability in two aspects:
Parts of a whole can be reused in other aggregate objects.
Encapsulation of parts within a whole prevents clients from scattering the use of part objects all over their source code.
The Whole-Part pattern suffers from the following liabilities:
1. Lower efficiency through indirection:
Since the Whole builds a wrapper around its Parts, it introduces an additional level of indirection between a client request and the Part that fulfils it.
2. Complexity of decomposition into parts:
An appropriate composition of a Whole from different Parts is often hard to find, especially when a bottom-up approach is applied.
Q7) Briefly explain benefits of Master Slave Pattern? (June 2012) (June/July 2014) (6 Marks)
Soln:
The Master-Slave design pattern provides several Benefits:
Exchangeability and extensibility
By providing an abstract slave class, it is possible to exchange existing slave implementations or add new
ones without major changes to the master.
Separation of concerns
The introduction of the master separates slave and client code from the code for partitioning work,
delegating work to slaves, collecting the results from the slaves, computing the final result and handling
slave failure or inaccurate slave results.
Efficiency
The Master-Slave pattern for parallel computation enables you to speed up the performance of computing
a particular service when implemented carefully.
Q8) Briefly comment on the different steps carried out to realize the implementation of Proxy pattern?(June/July 2011)(Dec 13/Jan 14)(June/July 2013)(6 Marks)
Soln:
1. Identify all responsibilities for dealing with access control to a component.
Attach these responsibilities to a separate component: the proxy.
2. If possible, introduce an abstract base class that specifies the common parts of the interfaces of both the proxy and the original.
Derive the proxy and the original from this abstract base.
3. Implement the proxy's functions.
To this end, check the roles specified in the first step.
4. Free the original and its clients from responsibilities that have migrated into the proxy.
5. Associate the proxy and the original by giving the proxy a handle to the original. This handle may be a pointer, a reference, an address, an identifier, a socket, a port, and so on.
6. Remove all direct relationships between the original and its clients.
Replace them by analogous relationships to the proxy.
Q9) Explain the variants of whole-part design pattern in brief?(Dec 12)(10 Marks)
Soln:
Variants:
Shared parts:
This variant relaxes the restriction that each Part must be associated with exactly one Whole by allowing several Wholes to share the same Part.
Assembly parts:
In this variant the Whole may be an object that represents an assembly of smaller objects.
Container contents:
In this variant a container is responsible for maintaining differing contents.
Collection members:
This variant is a specialization of Container-Contents, in that the Part objects all have the same type.
Composite pattern:
It is applicable to Whole-Part hierarchies in which the Wholes and their Parts can be treated uniformly, that is, in which both implement the same abstract interface.
Q10) Explain the dynamic part of master-slave design?(Dec 12)(Dec 13/Jan 14)(8 Marks)
Soln:
The scenario comprises six phases:
1. A client requests a service from the master.
2. The master partitions the task into several equal sub-tasks.
3. The master delegates the execution of these sub-tasks to several slave instances, starts their execution, and waits for the results they return.
4. The slaves perform the processing of the sub-tasks and return the results of their computation to the master.
5. The master computes a final result for the whole task from the partial results received from the slaves.
6. The master returns this result to the client.
Unit-VIII
Q12)List the known uses and Liabilities of Proxy Pattern?(Dec 2012)(6 Marks)
Soln:
The Proxy pattern has the following known uses:
NeXTSTEP
The Proxy pattern is used in the NeXTSTEP operating system to provide local stubs for remote objects. Proxies are created by a special server on the first access to the remote object.
OMG-CORBA
It uses the Proxy pattern for two purposes. So-called 'client stubs', or IDL stubs, guard clients against the concrete implementation of their servers and the Object Request Broker.
OLE
In Microsoft OLE servers may be implemented as libraries dynamically linked to the address space of the
client, or as separate processes. Proxies are used to hide whether a particular server is local or
remote from a client.
WWW proxy
It gives people inside the firewall concurrent access to the outside world. Efficiency is increased by
caching recently transferred files.
Orbix
It is a concrete OMG-CORBA implementation, uses remote proxies. A client can bind to an original by
specifying its unique identifier.
Q2) Explain with neat diagram,evolutionary delivery life cycle model?(june 2012)(10
Marks)
Soln:
Any organization that embraces architecture as a foundation for its software development processes needs
to understand its place in the life cycle. Several life-cycle models exist in the literature, but one that puts
architecture squarely in the middle of things is the evolutionary delivery life cycle model shown in figure
7.1.
The intent of this model is to get user and customer feedback and iterate through several releases before
the final release. The model also allows the adding of functionality with each iteration and the delivery of
a limited version once a sufficient set of features has been developed.
Q3) What are the suggested standard organization points for interface documentation?(june 2012)(June/July 13)(12 Marks)
Soln:
An interface is a boundary across which two independent entities meet and interact or communicate with
each other.
1. Interface identity:
When an element has multiple interfaces, identify the individual interfaces to distinguish them. This usually means naming them. You may also need to provide a version number.
2. Resources provided:
The heart of an interface document is the resources that the element provides.
Resource syntax: this is the resource's signature.
Resource semantics, for example:
Assignment of values to data
Changes in state
Events signaled or messages sent
How other resources will behave differently in the future
Humanly observable results
Resource usage restrictions, for example:
Initialization requirements
Limits on the number of actors using the resource
3. Data type definitions:
If any interface resources employ a data type other than one provided by the underlying programming language, the architect needs to communicate the definition of that type. If it is defined by another element, then a reference to the definition in that element's documentation is sufficient.
4. Exception definitions:
These describe exceptions that can be raised by the resources on the interface. Since the same exception might be raised by more than one resource, it is convenient to simply list each resource's exceptions but define them in a dictionary collected separately.
5. Variability provided by the interface.
Does the interface allow the element to be configured in some way? These configuration parameters and
how they affect the semantics of the interface must be documented.
6. Quality attribute characteristics:
The architect needs to document what quality attribute characteristics (such as performance or reliability)
the interface makes known to the element's users
7. Element requirements:
What the element requires may be specific, named resources provided by other elements. The
documentation obligation is the same as for resources provided: syntax, semantics, and any usage
restrictions.
8. Rationale and design issues:
Why these choices? The architect should record the reasons for an element's interface design. The rationale should explain the motivation behind the design, its constraints and compromises, and what alternative designs were considered.
9. Usage guide:
Items 2 and 7 document an element's semantic information on a per-resource basis. This sometimes falls short of what is needed. In some cases semantics need to be reasoned about in terms of how a broad number of individual interactions interrelate.
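As a hypothetical illustration of items 1 to 4 (the `Stack` element and every name here are invented for this sketch), interface documentation can sit next to the code: the signature gives the resource syntax, the docstrings record semantics and usage restrictions, and exceptions are defined separately.

```python
class StackEmptyError(Exception):
    """Exception definition (item 4): raised when pop() is called on an empty stack."""

class Stack:
    """Interface identity (item 1): Stack, version 1.0.

    Resources provided (item 2): push, pop.
    Data type definitions (item 3): elements may be any Python object.
    """
    def __init__(self):
        self._items = []

    def push(self, item):
        """Syntax: push(item) -> None.
        Semantics: changes state; item becomes the new top of the stack."""
        self._items.append(item)

    def pop(self):
        """Syntax: pop() -> object.
        Semantics: removes and returns the top element.
        Usage restriction: must not be called on an empty stack."""
        if not self._items:
            raise StackEmptyError("pop() on empty stack")
        return self._items.pop()

s = Stack()
s.push(1)
s.push(2)
print(s.pop())  # 2
```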
Q4) List the steps of ADD and explain?(Dec 12)(June 12)(Dec 12/Jan 13)(Dec 13/Jan 14)
(June/July 2014) (10 Marks)
Soln:
ADD Steps:
Steps involved in attribute driven design (ADD)
1. Choose the module to decompose
Start with entire system
Inputs for this module need to be available
Constraints, functional and quality requirements
2. Refine the module
a) Choose architectural drivers relevant to this decomposition
b) Choose architectural pattern that satisfies these drivers
c) Instantiate modules and allocate functionality from use cases, representing the results using multiple views
d) Define interfaces of child modules
e) Verify and refine use cases and quality scenarios
3. Repeat for every module that needs further decomposition
1. Choose The Module To Decompose
The modules are: system -> subsystem -> sub-module. Decomposition typically starts with the system, which is then decomposed into subsystems and then into sub-modules.
In our example, the garage door opener is a system. The opener must interoperate with the home information system.
Every suite of architectural documentation needs an introductory piece to explain its organization to a
novice stakeholder and to help that stakeholder access the information he or she is most interested in.
There are two kinds of "how" information:
View Catalog
A view catalog is the reader's introduction to the views that the architect has chosen to include in the suite
of documentation. There is one entry in the view catalog for each view given in the documentation suite.
Each entry should give the following:
The name of the view and what style it instantiates
A description of the view's element types, relation types, and properties
A description of what the view is for
Management information about the view document, such as the latest version, the location of the view document, and the owner of the view document
View Template
A view template is the standard organization for a view. It helps a reader navigate quickly to a section of
interest, and it helps a writer organize the information and establish criteria for knowing how much work
is left to do.
WHAT THE ARCHITECTURE IS
This section provides information about the system whose architecture is being documented, the relation
of the views to each other, and an index of architectural elements.
System Overview
This is a short prose description of what the system's function is, who its users are, and any important
background or constraints. The intent is to provide readers with a consistent mental model of the system
and its purpose. Sometimes the project at large will have a system overview, in which case this section of
the architectural documentation simply points to that.
Mapping between Views
Since all of the views of an architecture describe the same system, it stands to reason that any two views
will have much in common. Helping a reader of the documentation understand the relationships among
views will give him a powerful insight into how the architecture works as a unified conceptual whole.
Being clear about the relationship by providing mappings between views is the key to increased
understanding and decreased confusion.
Element List
The element list is simply an index of all of the elements that appear in any of the views, along with a
pointer to where each one is defined. This will help stakeholders look up items of interest quickly.
Project Glossary
The glossary lists and defines terms unique to the system that have special meaning. A list of acronyms,
and the meaning of each, will also be appreciated by stakeholders. If an appropriate glossary already
exists, a pointer to it will suffice here.
Q7) What are the suggested standard organization points for view documentation?(June 12)(Dec 12/Jan 13)(June/July 2013)(8 Marks)
Soln:
Primary presentation: shows the elements and their relationships; contains the main information about the system, usually in graphical or tabular form.
Element catalog: details of those elements and relations shown in the picture.
Context diagram: shows how the system relates to its environment.
Variability guide: shows how to exercise any variation points. A variability guide should include documentation about each point of variation in the architecture, including:
The options among which a choice is to be made
The binding time of the option. Some choices are made at design time, some at build time, and others at runtime.
Architecture background: explains why the design reflected in the view came to be. An architecture background includes:
o rationale, explaining why the decisions reflected in the view were made and why alternatives were
rejected
o analysis results, which justify the design or explain what would have to change in the face of a
modification
o assumptions reflected in the design
Glossary of terms used in the views, with a brief description of each.
Other information includes management information such as authorship, configuration control data, and change histories. Or the architect might record references to specific sections of a requirements document to establish traceability.
10CS82
1. Define system. Explain the components of a system with an example. 10 Marks (June 2014)
Soln: A system is a set of interacting or interdependent components forming an integrated whole
Every system is delineated by its spatial and temporal boundaries, surrounded and influenced by its
environment, described by its structure and purpose and expressed in its functioning.
Following are considered as the elements of a system in terms of Information systems:
1. Input
2. Output
3. Processor
4. Control
5. Feedback
6. Boundary and interface
7. Environment
1. INPUT: Input involves capturing and assembling elements that enter the system to be processed. The inputs are said to be fed to the system in order to get the output. For example, the input of a 'computer system' is the input unit, consisting of various input devices like keyboard, mouse, joystick, etc.
2. OUTPUT: Those elements that exist in the system due to the processing of the inputs are known as output. A major objective of a system is to produce output that has value to its user. The output of the system may be in the form of cash, information, knowledge, reports, documents, etc. A system is defined by the output required from it; it is the anticipatory recognition of output that helps in defining the input of the system.
3. PROCESSOR(S): The processor is the element of a system that involves the actual transformation of input into output. It is the operational component of a system. For example, the processor of a 'computer system' is the central processing unit, which further consists of the arithmetic and logic unit (ALU), control unit and memory unit.
4. CONTROL: The control element guides the system. It is the decision-making sub-system that controls the pattern of activities governing input, processing and output. It also keeps the system within the boundary set. For example, control in a 'computer system' is maintained by the control unit, which controls and coordinates various units by means of passing different signals through wires.
5. FEEDBACK: Control in a dynamic system is achieved by feedback. Feedback measures output against a standard input in some form of cybernetic procedure that includes communication and control. The feedback may generally be of three types, viz. positive, negative and informational. The positive feedback motivates the system; the negative indicates the need for an action. Feedback is a reactive form of control: outputs from the process of the system are fed back to the control mechanism, and the control mechanism then adjusts the control signals to the process on the basis of the data it receives. Feedforward is a protective form of control. For example, in a 'computer system', when logical decisions are taken, the logic unit concludes by comparing the calculated results and the required results.
6. BOUNDARY AND INTERFACE: A system should be defined by its boundaries, the limits that identify its components, processes and interrelationships when it interfaces with another system. For example, in a 'computer system' there is a boundary on the number of bits, the memory size, etc., which is responsible for different levels of accuracy on different machines (like 16-bit, 32-bit, etc.). The interface in a 'computer system' may be a CUI (Character User Interface) or a GUI (Graphical User Interface).
7. ENVIRONMENT: The environment is the 'supersystem' within which an organisation operates. It excludes inputs, processes and outputs. It is the source of external elements that impinge on the system. For example, if the results calculated/the output generated by the 'computer system' are to be used in a government office, then the system is the same but its environment is different.
2. With a neat flow diagram, explain the steps in simulation study. 10 Marks (June 2014)
Soln:
1. Problem formulation
Every study begins with a statement of the problem, provided by the policy makers. The analyst ensures it is clearly understood. If it is developed by the analyst, the policy makers should understand and agree with it.
2. Setting of objectives and overall project plan
The objectives indicate the questions to be answered by simulation. At this point a determination should be made concerning whether simulation is the appropriate methodology. Assuming it is appropriate, the overall project plan should include:
A statement of the alternative systems
A method for evaluating the effectiveness of these alternatives
Plans for the study in terms of the number of people involved
Cost of the study
The number of days required to accomplish each phase of the work with the anticipated results.
3. Model conceptualization
The construction of a model of a system is probably as much art as science. The art of modeling is enhanced by an ability:
To abstract the essential features of a problem
To select and modify basic assumptions that characterize the system
To enrich and elaborate the model until a useful approximation results
Thus, it is best to start with a simple model and build toward greater complexity. Model conceptualization enhances the quality of the resulting model and increases the confidence of the model user in the application of the model.
4. Data collection
There is a constant interplay between the construction of the model and the collection of needed input data. This is done in the early stages. The objective kinds of data are to be collected.
5. Model translation
Real-world systems result in models that require a great deal of information storage and computation. The model can be programmed using simulation languages or special-purpose simulation software. Simulation languages are powerful and flexible; with simulation software, model development time can be reduced.
6. Verified
Verification pertains to the computer program and checking its performance. If the input parameters and logical structure are correctly represented, verification is completed.
7. Validated
Validation is achieved through calibration of the model, an iterative process of comparing the model to actual system behaviour.
11. Documentation and reporting
Progress documentation gives the history of a simulation project. The results of all analysis should be reported clearly and concisely in a final report. This enables review of the final formulation and alternatives, the results of the experiments, and the recommended solution to the problem. The final report provides a vehicle of certification.
12. Implementation
Success depends on the previous steps. If the model user has been thoroughly involved and understands the nature of the model and its outputs, the likelihood of a vigorous implementation is enhanced.
3. Explain the following components of simulation system with an example of bank system: i) system ii) entity iii) attribute v) event
Soln:
i) System: A collection of entities (e.g., people and machines) that interact together over time to accomplish one or more goals.
ii) Entity: An entity is an object of interest in a system. Ex: In the factory system, departments, orders, parts and products are the entities.
iii) Attribute: An attribute denotes the property of an entity. Ex: Quantities for each order, type of part, or number of machines in a department are attributes of the factory system.
4. List any three situations when simulation tool is appropriate and not appropriate. 10 Marks (June 2013)
Soln: When Simulation is the Appropriate Tool
Simulation enables the study of, and experimentation with, the internal interactions of a complex system.
Informational, organizational and environmental changes can be simulated, and the effect of these alterations on the model's behaviour can be observed.
5. Define the following with an example: i) discrete system ii) continuous system iii) deterministic system iv) stochastic system v) entity vi) attribute. (Dec/Jan 2012-13)
Sol:
i) Discrete system
Systems in which the changes are predominantly discontinuous are called discrete systems. Ex: in a bank, the number of customers changes only when a customer arrives or when the service provided to a customer is completed.
ii) Continuous system
Systems in which the changes are predominantly smooth are called continuous systems.
iii) Deterministic system
Contains no random variables; it has a known set of inputs which will result in a unique set of outputs. Ex: arrival of patients to the dentist at the scheduled appointment time.
iv) Stochastic system
Has one or more random variables as inputs. Random inputs lead to random outputs.
v) Entity
An entity is an object of interest in a system. Ex: In the factory system, departments, orders, parts and products are the entities.
vi) Attribute
An attribute denotes the property of an entity. Ex: Quantities for each order, type of part, or number of machines in a department are attributes of the factory system.
6. Draw the flowchart of steps involved in simulation study. 12 Marks (Dec/Jan 2012-13)
Sol:
I Phase
Consists of steps 1 and 2
It is the period of discovery/orientation
II Phase
Consists of steps 3, 4, 5, 6 and 7
A continuing interplay is required among the steps
Exclusion of the model user results in implications during implementation
III Phase
Consists of steps 8, 9 and 10
Conceives a thorough plan for experimenting
Discrete-event stochastic simulation is a statistical experiment
The output variables are estimates that contain random error, and therefore proper statistical analysis is required
IV Phase
Consists of steps 11 and 12, leading to successful completion of the study
7. What is simulation? Explain with flow chart, the steps involved in simulation study
10Marks (June 2012)
Sol:
Simulation
A Simulation is the imitation of the operation of a real-world process or system over time.
Brief Explanation
The model takes the form of a set of assumptions concerning the operation of the system. The simulation model building can be broken into 4 phases.
9. What is system and system environment? List the components of a system, with example. 5 Marks (June 2012)
Sol: Systems
A system is defined as an aggregation or assemblage of objects joined in some regular interaction or interdependence toward the accomplishment of some purpose.
System environment
The external components which interact with the system and produce necessary changes are said to constitute the system environment. In modeling systems, it is necessary to decide on the boundary between the system and its environment. This decision may depend on the purpose of the study.
Ex: In a factory system, the factors controlling the arrival of orders may be considered to be outside the factory but yet a part of the system environment. When we consider the demand and supply of goods, there is certainly a relationship between the factory output and the arrival of orders. This relationship is considered as an activity of the system.
Related definitions:
Discrete system
Systems in which the changes are predominantly discontinuous are called discrete systems. Ex: in a bank, the number of customers changes only when a customer arrives or when the service provided to a customer is completed.
Continuous system
Systems in which the changes are predominantly smooth are called continuous systems.
Event
An event is defined as an instantaneous occurrence that may change the state of the system.
State of the system
The state of the system at any time, relative to the objective of study, means a description of all the entities, attributes and activities as they exist at one point in time.
Activity
An activity represents a time period of specified length.
Attribute
An attribute denotes the property of an entity. Ex: Quantities for each order, type of part, or number of machines in a department.
10. List any five circumstances when the simulation is the appropriate tool and when it is not?
Sol:
Simulation is the appropriate tool when:
Simulation can be used to experiment with new designs or policies prior to implementation.
By changing simulation inputs and observing the resulting outputs, valuable insight may be obtained into which variables are most important and how variables interact.
Simulation is not the appropriate tool when:
The problem can be solved using common sense.
The resources or time are not available.
11. Explain the steps in a simulation study. (Dec/Jan 2011-12)
Sol: Steps in a Simulation study
1. Problem formulation
Every study begins with a statement of the problem, provided by the policy makers. The analyst ensures it is clearly understood. If it is developed by the analyst, the policy makers should understand and agree with it.
2. Setting of objectives and overall project plan
A determination should be made concerning whether simulation is the appropriate methodology. Assuming it is appropriate, the overall project plan should include:
A statement of the alternative systems
A method for evaluating the effectiveness of these alternatives
Plans for the study in terms of the number of people involved
Cost of the study
The number of days required to accomplish each phase of the work with the anticipated results.
3. Model conceptualization
The construction of a model of a system is probably as much art as science. Thus, it is best to start with a simple model and build toward greater complexity. Model conceptualization enhances the quality of the resulting model and increases the confidence of the model user in the application of the model.
4. Data collection
There is a constant interplay between the construction of the model and the collection of needed input data. This is done in the early stages. The objective kinds of data are to be collected.
5. Model translation
Real-world systems result in models that require a great deal of information storage and computation. The model can be programmed using simulation languages or special-purpose simulation software. Simulation languages are powerful and flexible; with simulation software, model development time can be reduced.
6. Verified
Verification pertains to the computer program and checking its performance. If the input parameters and logical structure are correctly represented, verification is completed.
7. Validated
Validation is achieved through calibration of the model, an iterative process of comparing the model to actual system behaviour.
8. Experimental design
The alternatives that are to be simulated must be determined.
9. Production runs and analysis
They are used to estimate measures of performance for the system designs that are being simulated.
10. More runs
Based on the analysis of runs that have been completed, the analyst determines if additional runs are needed and what design those additional experiments should follow.
11. Documentation and reporting
There are two kinds of documentation: program documentation and process documentation.
Program documentation can be used again by the same or different analysts to understand how the program operates.
Process documentation gives the history of a simulation project. The results of all analysis should be reported clearly and concisely in a final report. This enables review of the final formulation and alternatives, the results of the experiments, and the recommended solution to the problem. The final report provides a vehicle of certification.
12. Implementation
Success depends on the previous steps. If the model user has been thoroughly involved and understands the nature of the model and its outputs, the likelihood of a vigorous implementation is enhanced.
Unit 2
GENERAL PRINCIPLES, SIMULATION SOFTWARE
1. Describe queuing system with respect to arrival and service mechanisms, system capacity,
queue discipline, flow diagrams of arrival and departure events. 10 Marks (June2014)
Soln:
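Since the flow diagrams are not reproduced here, the arrival- and departure-event logic they describe can be sketched in Python (a minimal single-server illustration; all names are invented): an arrival either begins service immediately or joins the queue, and a departure starts service for the next waiting customer, if any.

```python
class SingleServerQueue:
    """State variables of a single-server queueing system."""
    def __init__(self):
        self.server_busy = False
        self.queue_length = 0   # customers waiting, not counting the one in service

    def arrival_event(self):
        # Arrival-event flow: if the server is idle, service begins at once;
        # otherwise the arriving customer joins the queue.
        if self.server_busy:
            self.queue_length += 1
        else:
            self.server_busy = True

    def departure_event(self):
        # Departure-event flow: if anyone is waiting, the next customer
        # enters service; otherwise the server becomes idle.
        if self.queue_length > 0:
            self.queue_length -= 1   # next customer moves into service
        else:
            self.server_busy = False

q = SingleServerQueue()
q.arrival_event()
q.arrival_event()    # second arrival must wait
q.departure_event()  # waiting customer enters service
print(q.server_busy, q.queue_length)  # True 0
```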
2. A small shop has one check out counter. Customers arrive at this checkout counter at random from 1 to 10 minutes apart. Each possible value of inter-arrival time has the same probability of occurrence, equal to 0.10. Service times vary from 1 to 6 minutes with the probabilities shown below. Develop the simulation table for 10 customers. Find: i) Average waiting time; ii) Average service time; iii) Average time a customer spends in the system. Take the random digits for arrivals as 91, 72, 15, 94, 30, 92, 75, 23, 30 and for service times as 84, 10, 74, 53, 17, 79, 91, 67, 89, 38.
3. Define the following terms: i) system ii) system state iii) event iv) delay.
Sol:
i) System
A collection of entities (e.g., people and machines) that interact together over time to accomplish one or more goals.
ii) System state
A collection of variables that contain all the information necessary to describe the system at any time.
iii) Event
An instantaneous occurrence that changes the state of a system (such as an arrival of a new customer).
iv) Delay
A duration of time of unspecified indefinite length, which is not known until it ends.
4. Consider the grocery store with one check out counter. Prepare the simulation table for eight customers and find out the average waiting time of a customer in the queue, the idle time of the server and the average service time. The inter-arrival time (IAT) and service time (ST) are given in minutes.
IAT : 3,2,6,4,4,5,8
ST(min) : 3,5,5,8,4,6,2,3
Assume the first customer arrives at time t = 0.
Sol:
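The simulation-table arithmetic for this problem can be sketched in Python (an illustrative check of the hand computation, assuming the first customer arrives at t = 0):

```python
def simulate(iat, st):
    """Build single-server simulation-table columns from inter-arrival
    and service times; the first customer arrives at t = 0."""
    arrivals, t = [0], 0
    for gap in iat:                       # cumulative arrival times
        t += gap
        arrivals.append(t)
    end_prev, waits, idle = 0, [], 0
    for arr, s in zip(arrivals, st):
        start = max(arr, end_prev)        # service starts when both free
        idle += start - end_prev          # server idle before this service
        waits.append(start - arr)         # time spent waiting in queue
        end_prev = start + s
    return sum(waits) / len(waits), idle, sum(st) / len(st)

avg_wait, idle_time, avg_st = simulate([3, 2, 6, 4, 4, 5, 8],
                                       [3, 5, 5, 8, 4, 6, 2, 3])
print(avg_wait, idle_time, avg_st)
```

With these inputs the sketch gives an average waiting time of 3.125 min, zero server idle time, and an average service time of 4.5 min, matching the hand-built table.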
5. Define the following terms: system, event, list, delay. 6 Marks (June 2012)
Sol:
i) System: A collection of entities (e.g., people and machines) that interact together over time to accomplish one or more goals. The system state is a collection of variables that contain all the information necessary to describe the system at any time.
ii) Event: An instantaneous occurrence that changes the state of a system (such as the arrival of a new customer).
iii) List: A collection of associated entities, ordered in some logical fashion (such as all customers currently in a waiting line, ordered by first come, first served, or by priority).
iv) Delay: The duration of time that an entity spends in a queue, waiting until an activity can begin.

6. The maximum inventory level M is 11 units and the review period N is 5 days. Estimate by simulation the average ending units in inventory and the number of days when a shortage condition occurs. The initial simulation is started with a level of 3 units and an order of 8 units scheduled to arrive in two days' time. Simulate for three cycles (15 days). The probabilities for daily demand and lead time are given below. 20 Marks (Dec/Jan 2012-13)

Demand      : 0    1    2    3    4
Probability : 0.1  0.25 0.35 0.2  0.1

Lead time (days) : 1    2    3
Probability      : 0.5  0.3  0.2
7. Six dump trucks are used to haul coal from the entrance of a small mine to the railroad. Each truck is loaded by one of two loaders. After loading, the truck moves to the scale to be weighed. After weighing, a truck begins its travel time and then returns to the loader queue. It is assumed that five of the trucks are at the loaders and one is at the scale at time 0. Using the event-scheduling algorithm, find the busy time of the loaders and the scale; the stopping time TE is 64 minutes. 14 Marks (June 2012)

Loading time  : 10  10  15  10  10
Weighing time : 12  12  12  16  12  16
Travel time   : 60  100 40  40  80

8. Suppose the maximum inventory level M is 11 units and the review period is 5 days. Estimate by simulation the average ending units in inventory and the number of days when a shortage condition occurs. The initial simulation is started with a level of 3 units and an order of 8 units scheduled to arrive in two days' time. Simulate for three cycles (15 days). The probabilities for daily demand and lead time are given below. 20 Marks (Dec/Jan 2011-12)

Demand      : 0    1    2    3    4
Probability : 0.1  0.25 0.35 0.2  0.1

Lead time (days) : 1    2    3
Probability      : 0.5  0.3  0.2
Sol:
Average scale utilization = 24/24 = 1.00
Unit-3
1. Explain the event-scheduling/time-advance algorithm with an example. 08 Marks (June 2014)
Soln:
The sample space of X is {0, 1, 2, ...}. The values p(xi), i = 1, 2, ..., must satisfy:
1. p(xi) >= 0, for all i
2. sum of p(xi) over all i = 1
The collection of pairs [xi, p(xi)], i = 1, 2, ..., is called the probability distribution of X, and p(xi) is called the probability mass function (pmf) of X.
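The event-scheduling/time-advance algorithm keeps a future event list (FEL) ordered by event time, repeatedly removes the imminent event, advances the clock to its time, and executes the corresponding event routine. A minimal sketch for a single-server queue is below; the inter-arrival and service times are illustrative values, not taken from any question above:

```python
import heapq

# Future event list (FEL): a heap of (event_time, event_type) event notices.
fel = []
heapq.heappush(fel, (0, "arrival"))

arrivals = iter([4, 1, 3])     # illustrative inter-arrival times after t = 0
service = iter([2, 5, 2, 1])   # illustrative service times
clock, queue, busy = 0, 0, False
log = []

while fel:
    clock, event = heapq.heappop(fel)   # remove the imminent event, advance the clock
    log.append((clock, event))
    if event == "arrival":
        if busy:
            queue += 1                  # server busy: customer joins the queue
        else:
            busy = True                 # server idle: begin service, schedule departure
            heapq.heappush(fel, (clock + next(service), "departure"))
        nxt = next(arrivals, None)      # schedule the next arrival, if any remain
        if nxt is not None:
            heapq.heappush(fel, (clock + nxt, "arrival"))
    else:  # departure
        if queue:
            queue -= 1                  # start serving the next queued customer
            heapq.heappush(fel, (clock + next(service), "departure"))
        else:
            busy = False

print(log[-1])   # (12, 'departure'): last event and final clock value
```

Each iteration performs the three canonical steps: advance the clock to the imminent event time, execute the event routine, and schedule any new events it generates on the FEL.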
2. A company uses six trucks to haul manganese ore from Kolar to an industry. There are two loaders to load each truck. After loading, a truck moves to the weighing scale to be weighed. The queue discipline is FIFO. When it has been weighed, a truck travels to the industry and returns to the loader queue. The distributions of loading time, weighing time and travel time are as follows:
3. Six dump trucks are used to haul coal from the entrance of a small mine to the railroad. Each truck is loaded by one of two loaders. After loading, the truck moves to the scale, to be weighed as soon as possible. Both the loaders and the scale have first-come-first-served waiting lines for trucks. Travel time from a loader to the scale is considered negligible. After weighing, a truck begins its travel time and then returns to the loader queue. The activities of loading, weighing and travel time are given in the following table. Assume 5 trucks are at the loaders and one is at the scale at time 0; the stopping event time is TE = 64 minutes. Calculate the total busy time of both loaders, the total busy time of the scale, and the average loader and scale utilization.
Entity: An object in the system that requires explicit representation in the model, e.g., people, machines, queues.
Event: An instantaneous occurrence that changes the state of a system.

Loading time  : 10  10  15  10  10
Weighing time : 12  12  12  16  12  16
Travel time   : 60  100 40  40  80

End of the simulation is the completion of two weighings from the scale. Depict the simulation table and estimate the loader and scale utilizations. Assume that five of the trucks are at the loaders and one is at the scale at time 0.
5. A large milling machine has three different bearings that fail in service. The cumulative distribution function of the life of each bearing is identical, as given in Table 1. When a bearing fails, the mill stops, a repairperson is called, and a new bearing is installed. The delay time of the repairperson's arriving at the milling machine is also a random variable, with the distribution given in Table 2. Downtime for the mill is estimated at $5/minute. The direct on-site cost of the repairperson is $15/hour. It takes 20 minutes to change 1 bearing, 30 minutes to change 2 bearings, and 40 minutes to change 3 bearings. The bearing cost is $16 each. A proposal has been made to replace all 3 bearings whenever a bearing fails. Management needs an evaluation of this proposal. Simulate the system for 10,000 hours of operation under the proposed method and determine the total cost of the proposed system. 20 Marks (Dec 12-13)
sol:

Table 1: Bearing-life distribution
Life (hours) : 1000  1100  1200  1300  1400  1500  1600  1700  1800  1900
Probability  : 0.10  0.13  0.25  0.13  0.09  0.12  0.02  0.06  0.05  0.05

Table 2: Delay-time distribution
Delay (minutes) : 5    10   15
Probability     : 0.6  0.3  0.1
Consider the following sequence of random digits for the bearing lifetimes:

Bearing 1 : 67  49  84  44  30  10  15
Bearing 2 : 70  43  86  93  81  44  19  51
Bearing 3 : 76  65  61  96  65  56  11  86

4. Explain the following: entity, event, event list.
Event list: A list of the event notices for future events, ordered by time of occurrence; it is also known as the future event list (FEL).
6. What do you mean by World View? Discuss the different types of World View? 10
Marks (June 2012)
Sol: World Views

Process-interaction: The simulation model is defined in terms of entities or objects and their life cycles as they flow through the system, demanding resources and queueing to wait for resources. A process is the life cycle of one entity, which consists of various events, activities and delays; it is a time-sequenced list of events, activities and delays, including demands for resources, that defines the life cycle of one entity as it moves through the system. Some activities might require the use of one or more resources whose capacities are limited. Processes interact, e.g., one process has to wait in a queue because the resource it needs is busy with another process. Variable time advance is used.

Activity-scanning: At each clock advance, the conditions for each activity are checked and, if the conditions are true, the corresponding activity begins. Disadvantage: the repeated scanning to discover whether an activity can begin results in slow runtime.

Improvement: the three-phase approach, a combination of event scheduling with activity scanning. Events are activities of duration zero time units. There are two types of activities:
- B activities: activities bound to occur; all primary events and unconditional activities.
- C activities: activities or events that are conditional upon certain conditions being true.
The B-type activities can be scheduled ahead of time, just as in the event-scheduling approach (variable time advance; the FEL contains only B-type events). Scanning to learn whether any C-type activities can begin, or C-type events can occur, happens only at the end of each time advance, after all B-type events have completed.
7. The maximum inventory level, M, is 11 units and the review period, N, is 5 days. The problem is to estimate, by simulation, the average ending units in inventory and the number of days when a shortage condition occurs. The distribution of the number of units demanded per day is shown in Table 9. In this example, lead time is a random variable, as shown in Table 10. Assume that orders are placed at the close of business and are received in inventory at the beginning of business, as determined by the lead time.
Sol: Simulation of an (M, N) Inventory System
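The (M, N) bookkeeping can be sketched as follows. The daily demands and the fixed lead time below are illustrative stand-ins rather than values drawn from the random-digit tables, and lost demand during a shortage is simply dropped (an illustrative assumption), so the printed numbers only demonstrate the mechanics:

```python
# (M, N) inventory simulation: maximum level M = 11, review period N = 5 days.
# Start with 3 units on hand and an order of 8 units due at the start of day 3.
M, N = 11, 5
inventory = 3
pending = {3: 8}                           # day -> quantity arriving at start of that day
demands = [2, 1, 3, 2, 1, 2, 4, 1, 0, 2]   # illustrative daily demands (two cycles)
lead_time = 2                              # illustrative fixed lead time, in days

ending, shortage_days = [], 0
for day, d in enumerate(demands, start=1):
    inventory += pending.pop(day, 0)       # receive any order due today
    inventory -= d                         # satisfy today's demand
    if inventory < 0:
        shortage_days += 1
        inventory = 0                      # lost-sales assumption (illustrative)
    ending.append(inventory)
    if day % N == 0:                       # review point: order up to M
        pending[day + lead_time] = M - inventory

print("average ending inventory:", sum(ending) / len(ending))   # 2.6
print("days with shortage:", shortage_days)                     # 0
```

In the graded answer the daily demand and the lead time would each be generated from the random-digit tables given in the question; only those two lines of input change.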
8. What do you mean by World View? Discuss the different types of World View?
10 Marks (Dec 11-12)
Sol: World Views
- Process-interaction: the simulation model is defined in terms of entities or objects and their life cycles as they flow through the system, demanding resources and queueing to wait for resources; a process is a time-sequenced list of events, activities and delays, including demands for resources, that defines the life cycle of one entity as it moves through the system. Processes interact, e.g., one process has to wait in a queue because the resource it needs is busy with another process.
- Activity-scanning: at each clock advance the conditions for each activity are checked and, if they hold, the activity begins. Disadvantage: the repeated scanning to discover whether an activity can begin results in slow runtime.
- Three-phase approach: a combination of event scheduling with activity scanning, in which B-type (bound-to-occur) activities are scheduled ahead of time on the FEL, and C-type (conditional) activities are scanned only at the end of each time advance.

Dept of CSE, SJBIT
Activity-scanning: at each clock advance, the conditions for each activity are checked and, if the conditions are true, the activity is allowed to begin.

9. Using the multiplicative congruential method, generate random numbers.
Sol:
R2 = 77/100 = 0.77
R3 = 52/100 = 0.52
First, notice that the numbers generated can only assume values from the set I = {0, 1/m, 2/m, ..., (m-1)/m}, since each Xi is an integer in the set {0, 1, 2, ..., m-1}. Thus each Ri is discrete on I, instead of continuous on the interval [0, 1]. This approximation appears to be of little consequence, provided that the modulus m is a very large integer. (Values such as m = 2^31 - 1 and m = 2^48 are in common use in generators appearing in many simulation languages.) By maximum density is meant that the values assumed by Ri, i = 1, 2, ..., leave no large gaps on [0, 1].

Second, to help achieve maximum density, and to avoid cycling (i.e., recurrence of the same sequence of generated numbers) in practical applications, the generator should have the largest possible period. Maximal period can be achieved by the proper choice of a, c, m and X0:
- For m a power of 2, say m = 2^b, and c != 0, the longest possible period is P = m = 2^b, which is achieved provided that c is relatively prime to m (that is, the greatest common factor of c and m is 1) and a = 1 + 4k, where k is an integer.
- For m a power of 2, say m = 2^b, and c = 0, the longest possible period is P = m/4 = 2^(b-2), which is achieved provided that the seed X0 is odd and the multiplier a is given by a = 3 + 8k or a = 5 + 8k, for some k = 0, 1, ...
- For m a prime number and c = 0, the longest possible period is P = m - 1, which is achieved provided that the multiplier a has the property that the smallest integer k such that a^k - 1 is divisible by m is k = m - 1.

Unit-4
Queuing Models

1. Explain: i) Uniform distribution; ii) Exponential distribution. 10 Marks (June 2014)
Sol:
i) Uniform distribution: In probability theory and statistics, the continuous uniform distribution (or rectangular distribution) is a family of symmetric probability distributions such that, for each member of the family, all intervals of the same length on the distribution's support are equally probable. The support is defined by the two parameters, which are its minimum and maximum values.
ii) Exponential distribution: In probability theory and statistics, the exponential distribution (a.k.a. negative exponential distribution) is the probability distribution that describes the time between events in a Poisson process, i.e. a process in which events occur continuously and independently at a constant average rate. It is the continuous analogue of the geometric distribution, and it has the key property of being memoryless.
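The maximal-period conditions listed earlier can be verified empirically for small parameters by brute force. The parameter triples below are illustrative choices satisfying each rule, not values from any question:

```python
def lcg_period(a, c, m, x0):
    """Count steps until the LCG state X_{i+1} = (a*X_i + c) mod m returns to the seed."""
    x, steps = x0, 0
    while True:
        x = (a * x + c) % m
        steps += 1
        if x == x0:
            return steps

# Rule 1: m = 2^b, c odd and coprime to m, a = 1 + 4k  ->  full period m
assert lcg_period(a=5, c=3, m=16, x0=7) == 16
# Rule 2: m = 2^b, c = 0, odd seed, a = 3 + 8k  ->  period m/4
assert lcg_period(a=11, c=0, m=32, x0=1) == 8
# Rule 3: m prime, c = 0, a of maximal order mod m  ->  period m - 1
assert lcg_period(a=3, c=0, m=7, x0=1) == 6
print("all three period rules verified")
```

Each assertion checks one of the three rules on a toy modulus; in practice the moduli are the large values (2^31 - 1, 2^48) named above.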
2. Explain the characteristics of a queuing system. List the different queuing notations. 10 Marks (June 2014)
Sol:
The key elements of queuing systems are the customers and servers. The term customer can refer to people, parts, trucks, e-mails, etc., and the term server to clerks, mechanics, repairmen, etc. (e.g., the repairman serving a broken machine).

3. Explain Kendall's notation for a parallel server queuing system, A/B/c/N/K, and also interpret the meaning of M/M/2/∞/∞.
Sol:
Notations: in A/B/c/N/K, A denotes the arrival process (interarrival-time distribution), B the service-time distribution, c the number of servers, N the system capacity, and K the population size; a service discipline (e.g., FIFO) may be appended. Common distribution symbols:
M - Markov (exponential) distribution
D - constant, deterministic
G - general, arbitrary
H - hyperexponential distribution
Only a small set of possibilities is solvable using standard queueing theory.
Examples:
M/M/1/∞/∞ (same as M/M/1): Single server with unlimited capacity and calling population; interarrival and service times are exponentially distributed.

5. Explain Kendall's notation for a parallel server queuing system, A/B/c/N/K, and also interpret the meaning of M/M/2/∞/∞.
Sol:
M/M/2/∞/∞: Two parallel servers with exponentially distributed interarrival and service times (symbols: M - Markov/exponential; D - constant, deterministic; G - general, arbitrary; H - hyperexponential distribution), unlimited system capacity, and an infinite calling population. When the calling population is small, the presence of one or more customers in the system affects the distribution of future arrivals, and the time between the end of one service visit and the next call for service must be modeled explicitly (finite-population models).
Poisson Distribution
Example: a computer repairperson is beeped each time there is a call for service; the number of beeps per hour is Poisson distributed with mean α = 2 per hour.
p(3) = e^(-2) · 2^3 / 3! = 0.18
also, p(3) = F(3) - F(2) = 0.857 - 0.677 = 0.18
p(2 or more) = 1 - p(0) - p(1) = 1 - F(1) = 0.594

8. The number of hurricanes hitting the coast of India follows a Poisson distribution with mean λ = 0.8 per year. Determine:
i)
ii)
Sol:
The sample space of X is {0, 1, 2, ...}; the pmf p(xi) must satisfy p(xi) >= 0 for all i and the p(xi) must sum to 1.

Steady-state parameters of the M/G/1 queue: suppose service times have mean 1/μ and variance σ², and ρ = λ/μ < 1. Then:
ρ = λ/μ,  P0 = 1 - ρ
L  = ρ + ρ²(1 + σ²μ²) / (2(1 - ρ))
LQ = ρ²(1 + σ²μ²) / (2(1 - ρ))
w  = 1/μ + λ(1/μ² + σ²) / (2(1 - ρ))
wQ = λ(1/μ² + σ²) / (2(1 - ρ))
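The steady-state formulas above can be wrapped in a small function. As a sanity check, setting σ² = 1/μ² (exponential service, i.e. the M/M/1 special case) should give L = ρ/(1 - ρ):

```python
def mg1_steady_state(lam, mu, sigma2):
    """Steady-state parameters of the M/G/1 queue (requires rho = lam/mu < 1)."""
    rho = lam / mu
    assert rho < 1, "queue is unstable when rho >= 1"
    lq = rho ** 2 * (1 + sigma2 * mu ** 2) / (2 * (1 - rho))   # LQ
    wq = lam * (1 / mu ** 2 + sigma2) / (2 * (1 - rho))        # wQ
    return {
        "rho": rho,
        "P0": 1 - rho,
        "L": rho + lq,       # L = rho + LQ
        "LQ": lq,
        "w": 1 / mu + wq,    # w = 1/mu + wQ
        "wQ": wq,
    }

# M/M/1 sanity check: lam = 1, mu = 2, sigma2 = 1/mu^2 = 0.25 -> L = rho/(1-rho) = 1.0
out = mg1_steady_state(1.0, 2.0, 0.25)
print(out["L"], out["P0"])   # 1.0 0.5
```

Note that Little's law holds in the output (L = λ·w and LQ = λ·wQ), which is a quick way to check any hand computation against these formulas.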
Examples:
M/M/1/∞/∞ (same as M/M/1): Single server with unlimited capacity and calling population; interarrival and service times are exponentially distributed.
G/G/1/5/5: Single server with capacity 5 and calling population 5.
M/M/5/20/1500/FIFO: Five parallel servers with capacity 20, calling population 1500, and service discipline FIFO.

12. Explain the steady-state parameters of the M/G/1 queue. 10 Marks (Dec/Jan 2011-12)
Sol:
Steady-State Behavior of Finite-Population Models
In practical problems the calling population is finite. When the calling population is small, the presence of one or more customers in the system has a strong effect on the distribution of future arrivals; the time between the end of one service visit and the next call for service is modeled explicitly. Consider a finite-calling-population model with K customers (M/M/c/K/K).
Unit-5
Random-Number Generation, Random-Variate Generation

1. Explain the linear congruential method. Write three ways of achieving maximal period. 05 Marks (June 2014)
Sol:
A linear congruential generator (LCG) is an algorithm that yields a sequence of pseudo-randomized numbers calculated with a discontinuous piecewise linear equation. The method represents one of the oldest and best-known pseudorandom number generator algorithms. The theory behind them is relatively easy to understand, and they are easily implemented and fast, especially on computer hardware which can provide modulo arithmetic by storage-bit truncation.

2. The sequence of random numbers 0.54, 0.73, 0.98, 0.11 and 0.68 has been generated. Use the Kolmogorov-Smirnov test, with critical value 0.565, to check uniformity. 05 Marks (June 2014)
In the recurrence Xi+1 = (a·Xi + c) mod m:
m is the "modulus",
a is the "multiplier",
c is the "increment", and
X0 is the "seed" or "start value"; these are integer constants that specify the generator.

The period of a general LCG is at most m, and for some choices of factor a much less than that. Provided that the increment c is nonzero, the LCG will have a full period for all seed values if and only if:
1. m and c are relatively prime,
2. a - 1 is divisible by all prime factors of m, and
3. a - 1 is a multiple of 4 if m is a multiple of 4.
These three requirements are referred to as the Hull-Dobell Theorem. While LCGs are capable of producing pseudorandom numbers which can pass formal tests for randomness, the quality is extremely sensitive to the choice of parameters. Historically, poor choices have led to ineffective implementations of LCGs. A particularly illustrative example of this is RANDU, which was widely used in the early 1970s and led to many results which are currently being questioned because of the use of this poor LCG.

First, notice that the numbers generated can only assume values from the set I = {0, 1/m, 2/m, ..., (m-1)/m}, since each Xi is an integer in the set {0, 1, 2, ..., m-1}. Thus, each Ri is discrete on I, instead of continuous on the interval [0, 1]. This approximation appears to be of little consequence, provided that the modulus m is a very large integer. (Values such as m = 2^31 - 1 and m = 2^48 are in common use in generators appearing in many simulation languages.)

Example: X1 = 2,074,941,799 and R1 = X1 / 2^31.

3. What is the acceptance-rejection technique? Generate three Poisson variates with mean α = 0.2. The random numbers are 0.4357, 0.4146, 0.8353, 0.9952, 0.8004, 0.7945, 0.1530. 10 Marks (June 2014)
R2 = X2 / 2^31 = 0.2607
X3 = 7^5 · (559,872,160) mod (2^31 - 1) = 1,645,535,613
R3 = X3 / 2^31 = 0.7662
Notice that this routine divides by m + 1 instead of m; however, for such a large value of m, the effect is negligible.
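These values can be reproduced with the multiplicative (Lehmer) generator Xi+1 = 7^5 · Xi mod (2^31 - 1) used in the example, starting from the X1 given above:

```python
A = 7 ** 5        # 16807, the multiplier from the example
M = 2 ** 31 - 1   # modulus 2,147,483,647

def lehmer_next(x):
    """One step of the multiplicative congruential (Lehmer) generator."""
    return (A * x) % M

x1 = 2_074_941_799
x2 = lehmer_next(x1)    # 559,872,160
x3 = lehmer_next(x2)    # 1,645,535,613
r3 = x3 / 2 ** 31       # the routine divides by m + 1 = 2^31
print(x2, x3, r3)       # r3 = 0.76626..., reported as 0.7662 in the text
```

Dividing by m + 1 = 2^31 instead of m is exactly the shortcut noted above; the discrepancy is below 10^-9 for this modulus.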
4. Generate five random numbers using the multiplicative congruential method with X0 = 5, a = 10, m = 64.
Soln:
When m is a power of 10, say m = 10^b, the modulo operation is accomplished by saving the b rightmost (decimal) digits.

The calculations in Table 7.2 are illustrated in Figure 7.2, where the empirical cdf, SN(x), is compared to the uniform cdf, F(x). It can be seen that D+ is the largest deviation of SN(x) above F(x), and that D- is the largest deviation of SN(x) below F(x). For example, at R(3) the value of D+ is given by 3/5 - R(3) = 0.60 - 0.44 = 0.16 and that of D- by R(3) - 2/5 = 0.44 - 0.40 = 0.04. Although the test statistic D is defined by Equation (7.3) as the maximum deviation over all x, it can be seen from Figure 7.2 that the maximum deviation will always occur at one of the jump points R(1), R(2), ..., and thus the deviation at other values of x need not be considered.
6. Using the multiplicative congruential method, generate random numbers; then, using the Kolmogorov-Smirnov test with α = 0.05, check the hypothesis that the numbers are uniformly distributed.
Sol:
X0 = 27
R1 = 2/100 = 0.02
R2 = 77/100 = 0.77
R3 = 52/100 = 0.52
First, the numbers must be ranked from smallest to largest. The calculations can be facilitated by use of Table 7.2. The top row lists the numbers from smallest (R(1)) to largest (R(n)). The computations for D+, namely i/N - R(i), and for D-, namely R(i) - (i-1)/N, are easily accomplished using Table 7.2. The statistics are computed as D+ = 0.26 and D- = 0.21. Therefore, D = max{0.26, 0.21} = 0.26. The critical value of D, obtained from Table A.8 for α = 0.05 and N = 5, is 0.565. Since the computed value, 0.26, is less than the tabulated critical value, 0.565, the hypothesis of no difference between the distribution of the generated numbers and the uniform distribution is not rejected.
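The D+ and D- computations can be checked with a short script; the five numbers below (0.44, 0.81, 0.14, 0.05, 0.93, the sample used elsewhere in this chapter) reproduce the D+ = 0.26 and D- = 0.21 quoted in the solution:

```python
# Kolmogorov-Smirnov uniformity test for a small sample.
nums = sorted([0.44, 0.81, 0.14, 0.05, 0.93])
N = len(nums)

d_plus = max((i + 1) / N - r for i, r in enumerate(nums))   # max of i/N - R(i)
d_minus = max(r - i / N for i, r in enumerate(nums))        # max of R(i) - (i-1)/N
D = max(d_plus, d_minus)

print(round(d_plus, 2), round(d_minus, 2), round(D, 2))  # 0.26 0.21 0.26
crit = 0.565   # Table A.8 critical value for alpha = 0.05, N = 5
assert D < crit   # uniformity hypothesis is not rejected
```

Note the index shift: because Python enumerates from 0, `(i + 1) / N` plays the role of i/N and `i / N` the role of (i-1)/N in the textbook formulas.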
(4 Marks, Dec/Jan 2012-13)
sol: First, the numbers must be ranked from smallest to largest. The calculations can be facilitated by use of Table 7.2. The top row lists the numbers from smallest (R(1)) to largest (R(n)). The computations for D+, namely i/N - R(i), and for D-, namely R(i) - (i-1)/N, are easily accomplished using Table 7.2. The statistics are computed as D+ = 0.26 and D- = 0.21.
The calculations in Table 7.2 are illustrated in Figure 7.2, where the empirical cdf, SN(x), is compared to the uniform cdf, F(x). It can be seen that D+ is the largest deviation of SN(x) above F(x), and that D- is the largest deviation of SN(x) below F(x). For example, at R(3) the value of D+ is given by 3/5 - R(3) = 0.60 - 0.44 = 0.16 and that of D- by R(3) - 2/5 = 0.44 - 0.40 = 0.04. The maximum deviation will always occur at one of the jump points R(1), R(2), ..., and thus the deviation at other values of x need not be considered.
Therefore, D = max{0.26, 0.21} = 0.26. The critical value of D, obtained from Table A.8 for α = 0.05 and N = 5, is 0.565. Since the computed value, 0.26, is less than the tabulated critical value, 0.565, the hypothesis of no difference between the distribution of the generated numbers and the uniform distribution is not rejected.
8. Using a suitable frequency test, find out whether the random numbers generated are uniformly distributed on the interval [0,1] or whether that hypothesis can be rejected. Assume α = 0.05. (Dec/Jan 2012-13)
Sol: First, the numbers must be ranked from smallest to largest. The calculations can be facilitated by use of Table 7.2. The top row lists the numbers from smallest (R(1)) to largest (R(n)). The computations for D+, namely i/N - R(i), and for D-, namely R(i) - (i-1)/N, are easily accomplished using Table 7.2. The statistics are computed as D+ = 0.26 and D- = 0.21. Therefore, D = max{0.26, 0.21} = 0.26. The critical value of D, obtained from Table A.8 for α = 0.05 and N = 5, is 0.565. Since the computed value, 0.26, is less than the tabulated critical value, 0.565, the hypothesis of no difference between the distribution of the generated numbers and the uniform distribution is not rejected.

The inverse transform technique can be used to sample from the exponential, the uniform, the Weibull and the triangular distributions, and from empirical distributions. Additionally, it is the underlying principle for sampling from a wide variety of discrete distributions. It is the most straightforward, but not always the most efficient, technique computationally. The technique will be explained in detail for the exponential distribution and then applied to other distributions.

EXPONENTIAL DISTRIBUTION:
f(x) = λe^(-λx),  x >= 0
     = 0,          x < 0
F(x) = 1 - e^(-λx),  x >= 0
     = 0,             x < 0
10CS82
The parameter can be interpreted as the mean number of occurrences per time unit.
10CS82
For example, if interarrival times Xi , X2, X3, . . . had an exponential distribution with rate ,
desired random variates by Xi = F-1 (Ri) For the exponential case, F (R) = (-1/)ln(1- R) by
then could be interpreted as the mean number of arrivals per time unit, or the arrival rate|
Notice that for any j E(Xi)= 1/ so that is the mean interarrival time. The goal here is to
distribution.
The inverse transform technique can be utilized, at least in principle, for any distribution. But
10. Explain the two different techniques for generating random numbers with examples?
it is most useful when the cdf. F(x), is of such simple form that its inverse, F , can be
easily computed. A step-by-step procedure for the inverse transform technique illustrated
The linear congruential method, initially proposed by Lehmer [1951], produces a sequence of
integers, X\, X2,... between zero and m 1 according to the following recursive relationship:
Step 1. Compute the cdf of the desired random variable X. For the exponential distribution,
i = 0,1,2,...
Step 2. Set F(X) = R on the range of X. For the exponential distribution, it becomes 1 e
-X
the form is known as the multiplicative congruential method. The selection of the values for
R on the range x >=0. Since X is a random variable (with the exponential distribution in this
a, c, m and X0 drastically affects the statistical properties and the cycle length. . The random
case), it follows that 1 - is also a random variable, here called R. As will be shown later,
integers are being generated [0,m-1], and to convert the integers to random numbers:
Step 3. Solve the equation F(X) = R for X in terms of R. For the exponential distribution,
e x=R
such a way that the combined generator has good statistical properties and a longer
e-x= 1 R
period. The following result from L'Ecuyer [1988] suggests how this can be done: If Wi, 1 ,
-X= ln(1 - R)
x= -1/ ln(1 R)
identically distributed), but one of them, say Wi, 1, is uniformly distributed on the
integers 0 to mi 2, then is uniformly distributed on the integers 0 to mi 2.
Page 50
Page 51
11. The sequence of numbers 0.44, 0.81, 0.14, 0.05, 0.93 was generated; use the Kolmogorov-Smirnov test with α = 0.05 to check the hypothesis that the numbers are uniformly distributed on the interval [0,1].
Sol:
12. Explain the two different techniques for generating random numbers, with examples. 10 Marks (Dec/Jan 2011-12)
Sol:
The linear congruential method, initially proposed by Lehmer [1951], produces a sequence of integers X1, X2, ... between zero and m - 1 according to the recursive relationship Xi+1 = (a·Xi + c) mod m, i = 0, 1, 2, ... When c = 0, the form is known as the multiplicative congruential method. The selection of the values for a, c, m and X0 drastically affects the statistical properties and the cycle length. The random integers are generated on [0, m-1]; to convert the integers to random numbers, divide by m.

2. Combined Linear Congruential Generators
As computing power has increased, the complexity of the systems that we are able to simulate has also increased. One fruitful approach is to combine two or more multiplicative congruential generators in such a way that the combined generator has good statistical properties and a longer period. The following result from L'Ecuyer [1988] suggests how this can be done: if Wi,1, ..., Wi,k are independent, discrete-valued random variables (not necessarily identically distributed), but one of them, say Wi,1, is uniformly distributed on the integers 0 to m1 - 2, then their sum modulo m1 - 1 is uniformly distributed on the integers 0 to m1 - 2.

Dept of CSE, SJBIT
The calculations in Table 7.2 are illustrated in Figure 7.2, where the empirical cdf, SN(x), is compared to the uniform cdf, F(x). It can be seen that D+ is the largest deviation of SN(x) above F(x), and that D- is the largest deviation of SN(x) below F(x). For example, at R(3) the value of D+ is given by 3/5 - R(3) = 0.60 - 0.44 = 0.16 and that of D- by R(3) - 2/5 = 0.44 - 0.40 = 0.04. Although the test statistic D is defined by Equation (7.3) as the maximum deviation over all x, it can be seen from Figure 7.2 that the maximum deviation will always occur at one of the jump points R(1), R(2), ..., and thus the deviation at other values of x need not be considered.
13. The six numbers 0.44, 0.66, 0.82, 0.16, 0.05, 0.92 are generated. Using the Kolmogorov-Smirnov test with α = 0.05, check whether the hypothesis that the numbers are uniformly distributed on the interval [0,1] can be rejected. 10 Marks (Dec/Jan 2011-12)
Sol:
First, the numbers must be ranked from smallest to largest. The calculations can be facilitated by use of Table 7.2. The top row lists the numbers from smallest (R(1)) to largest (R(n)). The computations for D+, namely i/N - R(i), and for D-, namely R(i) - (i-1)/N, are easily accomplished using Table 7.2. The statistics are computed as D+ = 0.26 and D- = 0.21. Therefore, D = max{0.26, 0.21} = 0.26. The critical value of D, obtained from Table A.8 for α = 0.05 and N = 5, is 0.565. Since the computed value, 0.26, is less than the tabulated critical value, 0.565, the hypothesis of no difference between the distribution of the generated numbers and the uniform distribution is not rejected.
Unit-6
Input Modeling
Input data provide the driving force for a simulation model. In the simulation of a queuing system, typical input data are the distributions of time between arrivals and service times. There are four steps in the development of a useful model of input data:
1. Collect data from the real system of interest. This often requires a substantial time and resource commitment.
2. Identify a probability distribution to represent the input process.
3. Choose parameters that determine a specific instance of the distribution family.
4. Evaluate the chosen distribution and the associated parameters for goodness-of-fit. Goodness-of-fit may be evaluated informally via graphical methods, or formally via statistical tests; the chi-square and the Kolmogorov-Smirnov tests are standard goodness-of-fit tests. If not satisfied that the chosen distribution is a good approximation of the data, the analyst returns to the second step, chooses a different family of distributions, and repeats the procedure. If several iterations of this procedure fail to yield a fit between an assumed distributional form and the collected data, other approaches to input modeling should be considered.

2. Explain the chi-square goodness-of-fit test. Apply it to the Poisson assumption with alpha = 3.64. (June 2014)
sol:
The test statistic is X0² = Σ (Oi - Ei)² / Ei, where Oi is the observed frequency in the ith class interval and Ei is the expected frequency in that class interval. The expected frequency for each class interval is computed as Ei = n·pi, where pi is the theoretical, hypothesized probability associated with the ith class interval. The null hypothesis is H0: the random variable, X, conforms to the distributional assumption with the parameter(s) given by the parameter estimate(s). It can be shown that X0² approximately follows the chi-square distribution with k - s - 1 degrees of freedom, where s is the number of parameters estimated from the data. The test is valid for large sample sizes, for both discrete and continuous candidate density or mass functions. If the distribution being tested is discrete, each value of the random variable should be a class interval, unless it is necessary to combine adjacent class intervals to meet the minimum expected cell-frequency requirement; for the discrete case, if combining adjacent cells is not required, pi is the hypothesized probability of each value. If the distribution being tested is continuous, the class intervals are given by [a(i-1), a(i)), where a(i-1) and a(i) are the endpoints of the ith class interval; for the continuous case with assumed pdf f(x), pi is obtained by integrating f(x) over that interval.

3. Suggest a step-by-step procedure to generate random variates using the inverse transform technique for the exponential distribution.
Sol:
Exponential cdf: F(x) = 1 - e^(-λx) for x >= 0.
Set R = F(X) and invert: X = F^-1(R).
To generate X1, X2, X3, ...:
Xi = F^-1(Ri) = -(1/λ) ln(1 - Ri) = -(1/λ) ln(Ri),
since both Ri and 1 - Ri are uniformly distributed on [0, 1]. When data are available, the parameter λ may be estimated from the data.
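The procedure translates directly into code; λ and the Ri value below are illustrative:

```python
import math

def exp_variates(lam, rs):
    """Inverse-transform exponential variates: X = -(1/lam) * ln(1 - R)."""
    return [-(1 / lam) * math.log(1 - r) for r in rs]

# With lam = 1 and R = 1 - e^-1, the variate is (up to rounding) exactly 1.0:
xs = exp_variates(1.0, [1 - math.exp(-1)])
print(xs[0])
```

Using `ln(Ri)` instead of `ln(1 - Ri)` is the common shortcut mentioned above; it saves a subtraction per variate and is valid because R and 1 - R have the same uniform distribution.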
5. The following is a set of single-digit numbers from a random number generator. Using an appropriate test, check whether the numbers are uniformly distributed. N = 50, α = 0.05 and X²(0.05,9) = 16.9.
6,7,0,6,9,9,0,6,4,6,4,0,8,2,6,6,1,2,6,8,5,6,0,4,7
1,3,5,0,7,1,4,9,8,6,0,8,6,6,7,1,0,4,7,9,2,0,1,4,8
Sol: Use the chi-square test with α = 0.05 to test whether the data shown above are uniformly distributed. Table 7.3 contains the essential computations. The test uses n = 10 class intervals of equal length, one per digit. The computed value of X0² is compared with the critical value X²(0.05,9) = 16.9; since the computed value is less than 16.9, the hypothesis of uniformity is not rejected.
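The tallies can be recomputed directly from the digits as listed. The sketch below derives the statistic from these exact 50 digits (the value depends on the actual tallies, so it need not match a figure quoted from the textbook's worked example); either way the conclusion, comparison against X²(0.05,9) = 16.9, is the same:

```python
from collections import Counter

digits = [6,7,0,6,9,9,0,6,4,6,4,0,8,2,6,6,1,2,6,8,5,6,0,4,7,
          1,3,5,0,7,1,4,9,8,6,0,8,6,6,7,1,0,4,7,9,2,0,1,4,8]
n, k = len(digits), 10
e = n / k                                   # expected frequency per digit class: 5
obs = Counter(digits)

chi2 = sum((obs.get(d, 0) - e) ** 2 / e for d in range(k))
print(round(chi2, 2))                       # statistic for these 50 digits
assert chi2 < 16.9                          # X^2_{0.05,9}: uniformity not rejected
```

With nine degrees of freedom (k - 1 = 9, no parameters estimated), the null hypothesis of uniformity is retained whenever the statistic stays below 16.9.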
Sol: (June 2012)
Both the Kolmogorov-Smirnov and the chi-square test are acceptable for testing the uniformity of a sample of data, provided that the sample size is large. However, the Kolmogorov-Smirnov test can be applied to small sample sizes, whereas the chi-square test is valid only for large samples.

Imagine a set of 100 numbers being tested for independence, where the first 10 values are in the range 0.01-0.10, the second 10 values are in the range 0.11-0.20, and so on. This set of numbers would pass the frequency tests with ease, but the ordering of the numbers produced by the generator would not be random; the tests in the remainder of this chapter address this.

Steps for building a histogram:
1. Divide the range of the data into intervals (intervals are usually of equal width; however, unequal widths may be used if the heights of the frequencies are adjusted).
2. Label the horizontal axis to conform to the intervals selected.
3. Find the frequency of occurrences within each interval.
4. Label the vertical axis so that the total occurrences can be plotted for each interval.
5. Plot the frequencies on the vertical axis.
If the intervals are too wide, the histogram will be coarse, or blocky, and its shape and other details will not show well. If the intervals are too narrow, the histogram will be ragged and will not smooth the data.

F(x) = x,      0 < x < 1
     = 2 - x,  1 < x < 2
     = 0,      otherwise
8. Suggest a step-by-step procedure to generate random variates using the inverse transform technique for the exponential distribution.
Sol:
Exponential distribution:
Exponential cdf: F(x) = 1 - e^(-λx) for x >= 0.
Set R = F(X); then X = F^-1(R).
To generate X1, X2, X3, ...:
Xi = F^-1(Ri) = -(1/λ) ln(1 - Ri) = -(1/λ) ln(Ri).
Unit-7
Estimation of Absolute Performance
Confidence-Interval Estimation
Purpose
Sol:
Sol:
2014)
If θ is the system performance, the precision of the estimator can be measured by a confidence interval for θ.
Suppose the model is the normal distribution with mean θ and variance σ² (both unknown).
Let Y_i. be the average cycle time for parts produced on the ith replication of the simulation (its mathematical expectation is θ).
Average cycle time will vary from day to day, but over the long run the average of the averages will be close to θ.
Sample variance across R replications (the replications are assumed independent):

    S² = (1/(R-1)) Σ_{i=1..R} (Y_i. - Y..)²

where Y.. is the mean of the R replication averages.
The confidence interval is a measure of error in this estimator.
Types of Simulations:
- Terminating simulations
- Steady-state simulations
3. Explain output analysis for terminating simulation. 06 Marks (June 2014)
Sol:
A terminating simulation runs over a simulated time interval [0, T_E]. For discrete output data Y_1, ..., Y_n, the performance measure is

    θ = E[(1/n) Σ_{i=1..n} Y_i]

and for continuous-time output {Y(t), 0 ≤ t ≤ T_E},

    φ = E[(1/T_E) ∫_0^{T_E} Y(t) dt]

where Y(t) is often an indicator process (Y(t) = 1 when the condition of interest holds at time t, and 0 otherwise).
In general, independent replications are used, each run using a different random number
stream and independently chosen initial conditions.
Statistical Background
Confidence-Interval Estimation
Let Y_i be the average cycle time for parts produced on the ith replication of the simulation. The point estimator is

    Y.. = (1/n) Σ_{i=1..n} Y_i

Average cycle time will vary from day to day, but over the long run the average of the averages will be close to θ.
The estimator is unbiased if E(Y..) = θ and biased if E(Y..) ≠ θ. Sample variance across R replications:

    S² = (1/(R-1)) Σ_{i=1..R} (Y_i. - Y..)²
Confidence-Interval Estimation
Confidence Interval (CI): a measure of error in the point estimator.
Point Estimator
Point estimation for continuous-time data uses

    (1/T_E) ∫_0^{T_E} Y(t) dt

which is biased in general. The confidence interval is

    Y.. ± t_{α/2, R-1} · S / √R
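The across-replication confidence interval can be sketched as follows (the replication averages are hypothetical data, and the t-quantile is taken from tables rather than computed):

```python
import math
import statistics

def replication_ci(y_bars, t_crit):
    """CI for theta from R independent replication averages:
    Y.. +/- t_{alpha/2, R-1} * S / sqrt(R)."""
    R = len(y_bars)
    grand_mean = statistics.mean(y_bars)   # Y..
    s = statistics.stdev(y_bars)           # sample std. dev. (R-1 divisor)
    hw = t_crit * s / math.sqrt(R)
    return grand_mean - hw, grand_mean + hw

# Hypothetical average cycle times from R = 10 replications;
# t_{0.025, 9} = 2.262 gives a 95% confidence interval.
lo, hi = replication_ci([6.1, 5.8, 6.4, 6.0, 5.9, 6.2, 6.3, 5.7, 6.0, 6.1],
                        t_crit=2.262)
```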
5. Explain the distinction between terminating or transient simulation and steady state
simulation. Give an example.
Sol: A terminating simulation runs over a time interval [0, T_E] defined by the nature of the system itself (e.g., a bank that opens at 9 AM and closes at 5 PM), whereas a steady-state simulation studies long-run behavior and its run length is chosen by the analyst. An example of a terminating-simulation measure is the proportion of days on which sales are lost through an out-of-stock situation.
Dept of CSE, SJBIT
6. Explain the chi-square test with α = 0.05 to test whether the data shown below are uniformly distributed. 10 Marks.
Sol:
Table 7.3 contains the essential computations (columns: Interval, Oi, Ei). The test uses n = 10 intervals of equal length, namely (0, 0.1], (0.1, 0.2], ..., (0.9, 1.0], over 100 observations. The computed value of χ0² is 3.4. This is compared with the tabulated critical value χ²_{0.05,9} = 16.9. Since χ0² = 3.4 is much smaller than χ²_{0.05,9} = 16.9, the null hypothesis of a uniform distribution is not rejected.
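The frequency-test computation can be sketched in Python (a minimal illustration; the function name and the 10-interval default are assumptions, not from the text):

```python
def chi_square_uniformity(data, k=10):
    """Chi-square frequency test for U(0,1) data: compare the observed
    count in each of k equal-width intervals of (0, 1] against the
    expected count N/k, and return the statistic chi0^2."""
    n = len(data)
    expected = n / k
    observed = [0] * k
    for u in data:
        idx = min(int(u * k), k - 1)  # u == 1.0 falls in the last interval
        observed[idx] += 1
    return sum((o - expected) ** 2 / expected for o in observed)
```

Uniformity is rejected at level α = 0.05 when the returned statistic exceeds χ²_{0.05, k-1} (16.9 for k = 10).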
7. Differentiate between terminating and steady state simulation with respect to output
analysis with an example.
(Dec/Jan 2012-13)
Confidence-Interval Estimation
A good guess for the average cycle time on a particular day is our estimator but it
is unlikely to be exactly right.
A prediction interval (PI) is designed to be wide enough to contain the actual average cycle time on any particular day with high probability.
Normal-theory prediction interval:

    Y.. ± t_{α/2, R-1} · S · √(1 + 1/R)

The length of the PI will not go to 0 as R increases, because we can never simulate away risk. The limiting form of the PI is θ ± z_{α/2} σ.
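A corresponding sketch of the normal-theory prediction interval (illustrative data and t-value, as before):

```python
import math
import statistics

def prediction_interval(y_bars, t_crit):
    """Normal-theory PI: Y.. +/- t_{alpha/2, R-1} * S * sqrt(1 + 1/R).
    Unlike the CI, its width does not shrink toward 0 as R grows."""
    R = len(y_bars)
    m = statistics.mean(y_bars)
    s = statistics.stdev(y_bars)
    hw = t_crit * s * math.sqrt(1 + 1 / R)
    return m - hw, m + hw
```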
Output Analysis for Terminating Simulations
9. Monthly injuries in an underground coal mine were being studied by a federal agency; the values of past monthly injuries were recorded as a frequency table (injuries/month against frequency of occurrence, with observed frequencies including 35, 40, 13, and 1). The tabulated chi-square critical value for the test is 7.81. 10 Marks (Jun 2012)
Sol:
The first few numbers generated are as follows:
X1 = 7^5 (123,457) mod (2^31 - 1) = 2,074,941,799 mod (2^31 - 1) = 2,074,941,799
R1 = X1 / 2^31 = 0.9662
R2 = X2 / 2^31 = 0.2607
R3 = X3 / 2^31 = 0.7662
10. Explain chi-square goodness-of-fit test for exponential distribution?
Sol:
Arrange the n observations into k class intervals, compare the histogram of the data with the shape of the candidate density, and compute

    χ0² = Σ_{i=1..k} (Oi - Ei)² / Ei

which approximately follows the chi-square distribution with k - s - 1 degrees of freedom, where s is the number of parameters estimated from the data. The null hypothesis is rejected if χ0² > χ²_{α, k-s-1}.
Comments:
- Errors in cells with small Ei affect the test statistic more than cells with large Ei; the minimum size of Ei is debated, with a value of 5 or more recommended (if not, combine adjacent cells).
- The test is designed for discrete distributions and large sample sizes only. For continuous distributions, the chi-square test is only an approximation (i.e., the level of significance holds only as n → ∞).
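The generator in the worked example can be reproduced in a few lines (a sketch; the constants a = 7^5 and m = 2^31 - 1 and the seed 123,457 come from the example above):

```python
def lcg(seed, a=7**5, m=2**31 - 1):
    """Multiplicative congruential generator X_{i+1} = a * X_i mod m,
    yielding (X_i, R_i) pairs with R_i = X_i / 2**31."""
    x = seed
    while True:
        x = (a * x) % m
        yield x, x / 2**31

gen = lcg(123_457)
x1, r1 = next(gen)   # X1 = 2,074,941,799; R1 = 0.9662 to four places
```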
The point estimator is biased if E(Y..) ≠ θ.
The essential computations are:

Interval   Oi    Ei    (Oi - Ei)²/Ei
1          50    50    0.00
2          54    50    0.32
3          63    50    3.38
4          45    50    0.50
5          52    50    0.08
6          42    50    1.28
7          49    50    0.02
8          48    50    0.08
9          50    50    0.00
10         47    50    0.18
Total      500   500   5.84

The computed statistic is χ0² = 5.84, and the tabulated critical value is 14.68; since 5.84 < 14.68, the hypothesis that the numbers are uniformly distributed is not rejected.
Point Estimator
Point estimation for continuous-time data. Usually, system performance measures can be put into the common framework of θ or φ: e.g., for the proportion of days on which sales are lost through an out-of-stock situation, let
Y(t) = 1 if the system is out of stock at time t, and 0 otherwise.
A performance measure that does not fit this framework is a quantile or percentile: Pr{Y ≤ θ} = p. Estimating quantiles is the inverse of the problem of estimating a proportion or probability.
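The tabulated chi-square computation can be checked in a couple of lines of Python:

```python
# Observed counts from the table; 500 observations over 10 equal intervals.
observed = [50, 54, 63, 45, 52, 42, 49, 48, 50, 47]
expected = 500 / 10
chi0 = sum((o - expected) ** 2 / expected for o in observed)
# chi0 = 5.84, matching the table total
```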
10 Marks. (Dec/Jan 2011-12)
Sol:
Confidence-Interval Estimation
To understand confidence intervals fully, it is important to distinguish between measures of error and measures of risk, e.g., confidence interval versus prediction interval.
Suppose the model is the normal distribution with mean θ and variance σ² (both unknown).
Consider the estimation of a performance parameter, θ (or φ), of a simulated system. Let Y_i be the average cycle time for parts produced on the ith replication of the simulation. Average cycle time will vary from day to day, but over the long run the average of the averages will be close to θ. The point estimator and the sample variance across R replications are

    Y.. = (1/n) Σ_{i=1..n} Y_i
    S² = (1/(R-1)) Σ_{i=1..R} (Y_i - Y..)²
Unit-8
A necessary condition for input-output transformation is that some version of the system
under study exists so that the system data under at least one set of input condition can be
collected to compare to model prediction.
1. Explain the three-step approach for the validation process as formulated by Naylor and Finger. 10 Marks
Sol:
Naylor and Finger formulated a three-step approach:
1. Build a model that has high face validity.
2. Validate model assumptions. Structural assumptions involve questions of how the system operates and usually involve simplifications and abstractions of reality; data assumptions should be validated statistically.
3. Compare the model input-output transformations to corresponding input-output transformations for the real system.
If the system is in the planning stage and no system operating data can be collected, the modeler may use past historical data which has been reserved for validation purposes; that is, if one data set has been used to develop and calibrate the model, it is recommended that a separate data set be used as the final validation test. Thus, accurate prediction of the past may replace prediction of the future for the purposes of validation.
The purpose of model verification is to assure that the conceptual model is reflected accurately in the computerized representation. The conceptual model quite often involves some degree of abstraction about system operations, or some amount of simplification of actual operations.
1. Have the computerized representation checked by someone other than its developer.
2. Make a flow diagram which includes each logically possible action a system can take when an event occurs, and follow the model logic for each action for each event type.
3. Closely examine the model output for reasonableness under a variety of settings of the input parameters.
4. Have the computerized representation print the input parameters at the end of the simulation, to be sure that these parameter values have not been changed inadvertently.
5. Make the computerized representation as self-documenting as possible.
6. If the computerized representation is animated, verify that what is seen in the animation imitates the actual system.
Verification and validation, although conceptually distinct, usually are conducted simultaneously by the modeler.
Validation is the overall process of comparing the model and its behavior to the real system and its behavior.
Calibration is the iterative process of comparing the model to the real system, making adjustments (or even major changes) to the model, comparing the revised model to reality, and repeating this process until model accuracy is judged acceptable.
The following figure 7.2 shows the relationship of model calibration to the overall validation process.
3. Describe with a neat diagram iterative process of calibrating a model. 10Marks
(June 2013)
Sol:
Subjective tests usually involve people who are knowledgeable about one or more aspects of the system, making judgments about the model and its output.
Objective tests always require data on the system's behavior, plus the corresponding data produced by the model.
(Dec 2012-13)
Sol:
The purpose of model verification is to assure that the conceptual model is reflected accurately in the computerized representation. Many suggestions can be given for use in the verification process, as listed above (checking by someone other than the developer, a flow diagram of each logically possible action, examining output for reasonableness, printing input parameters at the end of the simulation, and making the representation self-documenting). As an aid in the validation process, Naylor and Finger [1967] formulated a three-step approach.
7. Describe with a neat diagram the iterative process of calibrating a model.
Sol:
Verification and validation, although conceptually distinct, usually are conducted simultaneously by the modeler.
Validation is the overall process of comparing the model and its behavior to the real system and its behavior.
Calibration is the iterative process of comparing the model to the real system, making adjustments (or even major changes) to the model, comparing the revised model to reality, and repeating until model accuracy is judged acceptable.
The following figure 7.2 shows the relationship of model calibration to the overall validation process.
Subjective tests usually involve people who are knowledgeable about one or more aspects of the system, making judgments about the model and its output.
Objective tests always require data on the system's behavior, plus the corresponding data produced by the model.
As an aid in the validation process, Naylor and Finger [1967] formulated a three-step approach.
Sol:
Even the best of modelers makes mistakes or commits logical errors when building a model. The IRC (interactive run controller) assists in finding and correcting those errors in the following ways:
(a) The simulation can be monitored as it progresses.
(b) Attention can be focused on a particular line of logic or multiple lines of logic that constitute a procedure or a particular entity.
(c) Values of selected model components can be observed. When the simulation has paused, the current value or status of variables, attributes, queues, resources, counters, etc., can be observed.
(d) The simulation can be temporarily suspended, or paused, not only to view information but also to reassign values or redirect logic.
(June 2011)
Verification of Simulation Models
Sol:
The purpose of model verification is to assure that the conceptual model is reflected accurately in the computerized representation.
Server: refers to any resource that provides the requested service, e.g.,
repairpersons, retrieval machines, runways at airport.
Error Estimation [Steady-State Simulations]
If the output data are autocorrelated, S²/n is a biased estimator of V(Ȳ): positive autocorrelation biases it low, while data exhibiting negative autocorrelation bias it high. In general, E[S²/n] = B · V(Ȳ), where the constant B depends on the autocorrelation structure of the data.
The calling population may be finite or infinite.
Finite population model: the arrival rate depends on the number of customers being served and waiting; e.g., in a model of one corporate jet, if the jet is being repaired, the repair arrival rate becomes zero.
Infinite population model: the arrival rate is not affected by the number of customers being served and waiting, e.g., systems with a large population of potential customers.
System Capacity
System Capacity: a limit on the number of customers that may be in the waiting line or system.
Limited capacity, e.g., an automatic car wash only has room for 10 cars to wait in line to enter the mechanism.
Unlimited capacity, e.g., concert ticket sales with no limit on the number of people allowed to wait to purchase tickets.
b) Arrival Process
For random arrivals, the interarrival time between customer n-1 and customer n is exponentially distributed (with mean 1/λ).
Scheduled arrivals: interarrival times can be constant, or constant plus or minus a small random amount to represent early or late arrivals.
Networks of Queues
If customers departing queue i are routed to queue j with probability pij, then the arrival rate from queue i to queue j is λi·pij (over the long run). The overall arrival rate into queue j is

    λj = aj + Σ_{all i} λi · pij

where aj is the rate of arrivals into queue j from outside the network. If arrivals from outside the network form a Poisson process with rate aj for each queue j, and if there are cj identical servers delivering exponentially distributed service times with mean 1/μj, then, in steady state, queue j behaves like an M/M/cj queue with arrival rate λj.
Pseudo means false, so false random numbers are being generated. The goal of any generation scheme is to produce a sequence of numbers between zero and 1 which simulates, or imitates, the ideal properties of uniform distribution and independence as closely as possible.
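The traffic equation λj = aj + Σi λi·pij can be solved by simple fixed-point iteration, sketched below with hypothetical rates (for a stable network, i.e., one where customers eventually leave, the iteration converges):

```python
def network_arrival_rates(a, p, iters=200):
    """Solve lambda_j = a_j + sum_i lambda_i * p[i][j] by fixed-point
    iteration. a[j]: external arrival rate into queue j; p[i][j]:
    probability a departure from queue i is routed to queue j."""
    n = len(a)
    lam = list(a)
    for _ in range(iters):
        lam = [a[j] + sum(lam[i] * p[i][j] for i in range(n))
               for j in range(n)]
    return lam

# Two queues in tandem: external arrivals (10/hour) enter queue 0 only,
# and 80% of queue-0 departures are routed on to queue 1.
rates = network_arrival_rates([10.0, 0.0], [[0.0, 0.8], [0.0, 0.0]])
# rates = [10.0, 8.0]
```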
When generating pseudo-random numbers, certain problems or errors can occur. These errors, or departures from ideal randomness, are all related to the properties stated previously. Some examples include the following:
1. The generated numbers may not be uniformly distributed.
2. The generated numbers may be discrete-valued instead of continuous-valued.
3. The mean of the generated numbers may be too high or too low.
4. The variance of the generated numbers may be too high or too low.
5. There may be dependence. The following are examples:
(a) Autocorrelation between numbers.
(b) Numbers successively higher or lower than adjacent numbers.
c) Network of Queues
Many systems are naturally modeled as networks of single queues: customers departing from one queue may be routed to another. The following results assume a stable system with an infinite calling population and no limit on system capacity: provided that no customers are created or destroyed in the queue, the departure rate out of a queue is the same as the arrival rate into the queue (over the long run).
10CS835
QUESTION BANK
A security perimeter is the first level of security that protects all internal systems from outside threats. Unfortunately, the perimeter does not protect against internal attacks from employee threats or on-site physical threats.
FIREWALLS
A firewall prevents information from entering or exiting the defined area based on a set of predefined rules. Firewalls are usually placed on the security perimeter, just behind or as part of a gateway router. While the gateway router is primarily designed to connect the organization's systems to the outside world, it too can be used as the front-line defense against attacks, as it can be configured to allow only selected types of traffic to enter.
The framework of security controls includes:
Program Management
Risk Management
Legal Compliance
Operational Controls
Contingency Planning
Security Education, Training, and Awareness
Personnel Security
Physical Security
Production Inputs and Outputs
Hardware & Software Systems Maintenance
Data Integrity
Technical Controls
Logical Access Controls
Identification, Authentication, Authorization, and Accountability
Audit Trails
Asset Classification and Control
Cryptography
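The predefined-rule behavior of a firewall described above can be sketched as a first-match rule table (the names, addresses, and rule fields are entirely illustrative; real firewalls match on far richer packet attributes):

```python
# Each rule: (source-address prefix, destination port, action).
# An empty prefix matches any source. The first matching rule wins;
# anything unmatched is denied (default-deny posture).
RULES = [
    ("10.0.", 443, "allow"),   # internal hosts may use HTTPS
    ("10.0.", 23, "deny"),     # block telnet from internal hosts
    ("", 80, "allow"),         # anyone may reach the web server
]

def filter_packet(src_ip, dest_port):
    """Return the action of the first rule matching the packet."""
    for prefix, port, action in RULES:
        if src_ip.startswith(prefix) and dest_port == port:
            return action
    return "deny"
```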
Defense in Depth
One of the basic tenets of security architectures is the implementation of security in layers.
This layered approach is called Defense in Depth
Defense in depth requires that the organization establishes sufficient security controls and
safeguards so that an intruder faces multiple layers of control.
Security Perimeter
A perimeter is the boundary of an area. A security perimeter defines the edge between the outer limit of an organization's security and the beginning of the outside world.
2. List critical characteristics of information and explain in brief any five of them. (10
marks) (Dec 2012) (June 2013) (8 marks) (Dec 2013) (Dec 2014)
Critical Characteristics of Information: The value of information comes from the characteristics it possesses.
Availability
Enables users who need to access information to do so without interference or obstruction
and in the required format. The information is said to be available to an authorized user when
and where needed and in the correct format.
Accuracy
Free from mistake or error and having the value that the end user expects. If information contains a value different from the user's expectations, due to the intentional or unintentional modification of its content, it is no longer accurate.
Authenticity
The quality or state of being genuine or original, rather than a reproduction or fabrication.
Information is authentic when it is the information that was originally created, placed, stored,
or transferred.
Confidentiality
The quality or state of preventing disclosure or exposure to unauthorized individuals or
systems.
Integrity
The quality or state of being whole, complete, and uncorrupted. The integrity of information
is threatened when the information is exposed to corruption, damage, destruction, or other
disruption of its authentic state.
Utility
The quality or state of having value for some purpose or end. Information has value when it
serves a particular purpose. This means that if information is available, but not in a format
meaningful to the end-user, it is not useful.
3. What are the policies present in NSTISSC security model. (8 marks) (Dec 2012) (June
2013) (10 marks ) (Dec 2014)
The National Security Telecommunications and Information Systems Security Committee
(NSTISSC) was established by President Bush under National Security Directive 42 (NSD
42) entitled, "National Policy for the Security of National Security Telecommunications and
Information Systems," dated 5 July 1990. It reaffirms the Secretary of Defense as the
Executive Agent and the Director, National Security Agency as the National Manager for
National Security Telecommunications and Information Systems Security. In addition, the
Directive establishes the NSTISSC.
The NSTISSC provides a forum for the discussion of policy issues, sets national policy, and
promulgates direction, operational procedures, and guidance for the security of national
security systems through the NSTISSC Issuance System. National security systems contain
classified information or:
a. involves intelligence activities;
b. involves cryptographic activities related to national security;
c. involves command and control of military forces;
d. involves equipment that is an integral part of a weapon or weapons system(s); or
e. is critical to the direct fulfillment of military or intelligence missions (not including routine
administrative and business applications).
Typically, this champion is an executive, such as a chief information officer (CIO), or the
vice president of information technology (VP-IT), who moves the project forward, ensures
that it is properly managed, and pushes for acceptance throughout the organization. Without
this high-level support, many of the mid-level administrators fail to make time for the project
or dismiss it as a low priority.
Also critical to the success of this type of project is the involvement and support of the end
users. These individuals are most directly affected by the process and outcome of the project
and must be included in the information security process. Key end users should be assigned
to a developmental team, known as the joint application development team (JAD).
DRP typically focuses on restoring systems after disasters occur, and as such is closely associated with BCP. BCP occurs concurrently with DRP when the damage is major or long term, requiring more than simple restoration of information and information resources.
False Attack Stimulus: An event that triggers alarms and causes a false positive when no
actual attacks are in progress.
False Negative: The failure of an IDS system to react to an actual attack event. Of all failures,
this is the most grievous, for the very purpose of an IDS is to detect attacks.
False Positive: An alarm or alert that indicates that an attack is in progress or that an attack
has successfully occurred when in fact there was no such attack.
Noise: The ongoing activity from alarm events that are accurate and noteworthy but not significant.
5. Explain the Security System Development Life Cycle. (8 marks) (June 2014)
Knowledge about SDLC is very important for anyone who wants to understand S-SDLC.
The following are some of the major steps which are common throughout the SDLC process, regardless of the organization:
Requirements Gathering
A Software Requirement Specification or SRS is a document which records expected
behavior of the system or software which needs to be developed.
Design
Software design is the blueprint of the system, which once completed can be provided to
developers for code development. Based on the components in design, they are translated into
software modules/functions/libraries, etc and these pieces together form a software system.
Coding
During this phase, the blueprint of the software is turned to reality by developing the source
code of the entire application. Time taken to complete the development depends on the size
of the application and number of programmers involved.
Testing
Once the application development is completed, it is tested for various issues like
functionality, performance, and so on. This is to ensure that the application is performing as
expected. If there are any issues, these issues are fixed before/after going to production
depending on the nature of issue and the urgency to go live for the application.
Deployment
Once the application is ready to go live, it is deployed on a production server in this phase. If
it is developed for a client, the deployment happens on the client's premises or in a datacenter where the client wants to get the application installed.
Site Policy: The rules and configuration guidelines governing the implementation and operation of IDSs within the organization.
6. List and briefly explain Information Security Terminologies.(8 marks)( June 2013)(Dec
2013) (5 marks) (Dec 2014)
IDS Terminology
Alert or Alarm: An indication that a system has just been attacked and/or continues to be
under attack. IDSs create alerts or alarms to notify administrators that an attack is occurring or has occurred, and may have been successful.
(1) Security supports the mission of the organization: failure to develop an information security system based on the organization's mission, vision and culture guarantees the failure of the information security program.
(2) Security is an Integral Element of Sound Management: effective management includes planning, organizing, leading, and controlling. Security enhances these areas by supporting the planning function when information security policies provide input into organizational initiatives.
(3) Information security should justify its own costs. Security measures that do not justify cost-benefit levels must have a strong business case (such as a legal requirement) to warrant their use.
(5) When information is shared with others, the security of this information becomes a serious responsibility for the owner of the systems.
Management from all communities of interest must consider policies as the basis for all information security planning, design, and deployment. Policies direct how issues should be addressed and technologies used. Policies do not specify the proper operation of equipment or software; that information belongs in the standards, procedures, and practices of users' manuals and systems documentation.
Security policies are the least expensive control to execute, but the most difficult to implement properly. Shaping policy is difficult because a policy must never conflict with laws and must stand up in court if challenged.
Usually, these policies are developed by those responsible for managing the information technology resources. The optimal balance between the independent and comprehensive ISSP approaches is the modular approach. It is also centrally managed and controlled, but tailored to the individual technology issues. The modular approach provides a balance between issue orientation and policy management. The policies created with this approach comprise individual modules, each created and updated by individuals responsible for the issues addressed. These individuals report to a central policy administration group that incorporates specific issues into an overall comprehensive policy.
While issue-specific policies are formalized as written documents, distributed to users, and agreed to in writing, SysSPs are frequently codified as standards and procedures to be used when configuring or maintaining systems.
IT security policy
3. With a block diagram, explain how policies, standards, practices, procedures and
guidelines are related.(7 marks) (Dec 2012) (June 2013) (10 marks ) (June 2014)
Three approaches:
Access control list (ACL) policies consist of the access control lists, matrices, and capability tables governing the rights and privileges of a particular user to a particular system.
An ACL is a list of access rights used by file storage systems, object brokers, or other network communications devices to determine which individuals or groups may access an object that it controls. (Object brokers are system components that handle message requests between the software components of a system.)
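An ACL lookup can be sketched as follows (the objects, users, and rights are hypothetical, chosen only to illustrate the data structure):

```python
# ACL: for each controlled object, the users and the rights granted to them.
acl = {
    "payroll.db": {"alice": {"read", "write"}, "bob": {"read"}},
    "audit.log": {"auditor": {"read"}},
}

def may_access(user, obj, right):
    """True only if the ACL explicitly grants `user` the given right
    on `obj`; unknown users or objects are denied by default."""
    return right in acl.get(obj, {}).get(user, set())
```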
The independent document approach to creating and managing ISSPs typically has a scattershot effect: each department responsible for a particular application of technology creates a policy governing its use, management, and control. This approach to creating ISSPs may fail to cover all of the necessary issues, and can lead to poor policy distribution, management, and enforcement.
The single comprehensive policy approach is centrally managed and controlled. With formal procedures for the management of ISSPs in place, the comprehensive policy approach establishes guidelines for overall coverage of necessary issues and clearly identifies processes for their dissemination, enforcement, and review.
The ISSP addresses specific areas of technology, such as electronic mail; as various technologies and processes are implemented, certain guidelines are needed to regulate their use.
4. Define security policy. Briefly discuss three types of security policies. (8 marks) (June)
Then use the blueprint to plan the tasks to be accomplished and the order in which to proceed. Setting priorities can follow the recommendations of published sources, or published standards provided by government agencies or private consultants.
IT security policy
Operational Controls:
Contingency Planning
Physical Security
Personnel Security
Data Integrity
Documentation
Security Awareness, Training, and Education
Incident Response Capability
Technical Controls:
Identification and Authentication
Logical Access Controls
Audit Trails
One approach is to adapt or adopt a published model or framework for information security
A framework is the basic skeletal structure within which additional detailed planning of the
blueprint can be placed as it is developed of refined
However, because people can directly access ring as well as the information at the core of
the model, the side of the sphere of protection that attempts to control access by relying on
people requires a different approach to security than the side that uses technology.
Experience teaches us that what works well for one organization may not precisely fit another
This security blueprint is the basis for the design, selection, and implementation of all
security policies, education and training programs, and technological controls.
The sphere of protection overlays each of the levels of the sphere of use with a layer of
security, protecting that layer from direct or indirect use through the next layer
The people must become a layer of security, a human firewall that protects the information from unauthorized access and use.
The security blueprint is a more detailed version of the security framework, which is an
outline of the overall information security strategy for the organization and the roadmap for
planned changes to the information security environment of the organization.
6. Briefly describe management, operational and technical controls and explain when each
would be applied as part of a security framework? (10 marks) (June 2013) (Dec 2013) (
5 marks) (Dec 2014)
Management Controls
Risk Management
Ensure that all models and implementations can be traced back to the business
strategy, specific business requirements and key principles.
Provide abstraction so that complicating factors, such as geography and technology
religion, can be removed and reinstated at different levels of detail only when
required.
Establish a common "language" for information security within the organization
Methodology
The practice of enterprise information security architecture involves developing an
architecture security framework to describe a series of "current", "intermediate" and "target"
reference architectures and applying them to align programs of change. These frameworks
detail the organizations, roles, entities and relationships that exist or should exist to perform a
set of business processes. This framework will provide a rigorous taxonomy and ontology
that clearly identifies what processes a business performs and detailed information about how
those processes are executed and secured. The end product is a set of artifacts that describe in
varying degrees of detail exactly what and how a business operates and what security controls
are required. These artifacts are often graphical.
Given these descriptions, whose levels of detail will vary according to affordability and other
practical considerations, decision makers are provided the means to make informed decisions
about where to invest resources, where to realign organizational goals and processes, and
what policies and procedures will support core missions or business functions.
A strong enterprise information security architecture process helps to answer basic questions
like:
What is the information security risk posture of the organization?
Is the current architecture supporting and adding value to the security of the
organization?
How might a security architecture be modified so that it adds more value to the
organization?
Based on what we know about what the organization wants to accomplish in the
future, will the current security architecture support or hinder that?
8. Explain the PDCA cycle with reference to an information security management
system. (10 marks) (Dec 2012) (June 2013) (10 marks) (June 2014)
PDCA (plan-do-check-act or plan-do-check-adjust) is an iterative four-step management
method used in business for the control and continuous improvement of processes and
products. It is also known as the Deming circle/cycle/wheel, Shewhart cycle, control
circle/cycle, or plan-do-study-act (PDSA). Another version of this PDCA cycle is OPDCA.
The added "O" stands for observation or, as some versions say, "Grasp the current condition."
This emphasis on observation and current condition has currency with Lean
manufacturing/Toyota Production System literature.
PLAN
Establish the objectives and processes necessary to deliver results in accordance with the
expected output (the target or goals). By establishing output expectations, the completeness
and accuracy of the spec is also a part of the targeted improvement. When possible, start on a
small scale to test possible effects.
DO
Implement the plan, execute the process, make the product. Collect data for charting and
analysis in the following "CHECK" and "ACT" steps.
CHECK
Study the actual results (measured and collected in "DO" above) and compare against the
expected results (targets or goals from the "PLAN") to ascertain any differences. Look for
deviation in implementation from the plan, and also look for the appropriateness and
completeness of the plan to enable the execution, i.e., "DO". Charting the data can make it
much easier to see trends over several PDCA cycles and to convert the collected data into
information. Information is what you need for the next step, "ACT".
ACT
Request corrective actions on significant differences between actual and planned results.
Analyze the differences to determine their root causes. Determine where to apply changes
that will include improvement of the process or product. When a pass through these four
steps does not result in the need to improve, the scope to which PDCA is applied may be
refined to plan and improve with more detail in the next iteration of the cycle, or attention
needs to be placed in a different stage of the process.

9. Illustrate with diagram how information is under attack from a variety of sources with
reference to the spheres of security. (10 marks) (Dec 2012) (June 2013) (8 marks) (Dec
2013) (Dec 2014)
Information is always at risk from attacks through the people and computer systems that
have access to it. Generally speaking, the concept of the sphere is to represent the 360
degrees of security necessary to protect information at all times.
Sphere of Use
Information, at the core of the sphere, is available for access by members of the
organization and other computer systems. To gain access to the computer systems, one must
either directly access the computer systems or go through a network connection. To gain
access to the network, one must either directly access the network or go through an
Internet connection. Networks and the Internet represent indirect threats, as exemplified by
the fact that a person attempting to access information from the Internet must first go through
the local networks.
Sphere of Protection
The figure illustrates that between each layer of the sphere of use there must exist a layer of
protection to prevent access to the inner layer from the outer layer. For example, the items
labeled Policy & law and Education & training are located between people and the
information. Controls are also implemented between systems and the information, between
networks and the computer systems, and between the Internet and internal networks.

1. Define and identify the various types of firewalls. (10 marks) (Dec 2012) (June 2013)
(8 marks) (Dec 2013) (Dec 2014)
A firewall prevents information from entering or exiting the defined area based on a set of
predefined rules. There are a number of types of firewalls, which are usually classified by the
level at which they operate: firewalls can be packet filtering, stateful packet filtering, proxy,
or application level. A firewall can be a single device or a firewall subnet, which consists of
multiple firewalls. Firewalls are usually placed on the security perimeter, just behind or as
part of a gateway router. While the gateway router is primarily designed to connect the
organization's systems to the outside world, it too can be used as the front-line defense
against attacks. Thus, firewalls can be used to create security perimeters like the one shown
in Figure 6.19.
DMZs
The DMZ is a no-man's land between the inside and outside networks; it is also where some
organizations place web servers.
Proxy Servers
These servers provide access to organizational web pages without allowing web requests to
reach the internal network. When deployed, a proxy server is configured to look like a web
server and is assigned the domain name that users would be expecting to find for the system
and its services. When an outside client requests a particular web page, the proxy server
receives the request as if it were the subject of the request, then asks for the same information
from the true web server (acting as a proxy for the requestor), and then responds to the
request as a proxy. This gives requestors the response they need without allowing them to
gain direct access to the internal web server. The proxy server may be hardened and become
a bastion host placed in the public area of the network, or it might be placed within the
firewall subnet or the DMZ for added protection. For more frequently accessed web pages,
proxy servers can cache or temporarily store the page, and thus are sometimes called cache
servers.
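The caching behaviour described above can be sketched in a few lines. This is a toy model, not a real HTTP proxy: `fetch_from_origin` is a made-up stand-in for contacting the true web server behind the firewall.

```python
def fetch_from_origin(url):
    # Stand-in for the true web server; a real proxy would make an HTTP request.
    return f"<html>content of {url}</html>"

class CachingProxy:
    """Answers requests as if it were the web server: a miss is forwarded
    to the true server and the page is cached; repeat requests are served
    from the cache, so requestors never touch the origin directly."""
    def __init__(self):
        self.cache = {}       # url -> stored page
        self.origin_hits = 0  # how often the true server was contacted

    def request(self, url):
        if url not in self.cache:          # cache miss: act as proxy
            self.origin_hits += 1
            self.cache[url] = fetch_from_origin(url)
        return self.cache[url]

proxy = CachingProxy()
page1 = proxy.request("/index.html")
page2 = proxy.request("/index.html")   # second request served from cache
print(proxy.origin_hits)               # -> 1
```

The origin server is contacted only once for the repeated page, which is why such proxies are sometimes called cache servers.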
Dept. of CSE, SJBIT
2. Describe how the various types of firewalls interact with the network traffic at various
levels of the OSI model. (7 marks) (Dec 2012) (June 2013) (10 marks) (June 2014)
Firewalls fall into four broad categories: packet filters, circuit level gateways, application
level gateways, and stateful multilayer inspection firewalls.
Packet filtering firewalls work at the network level of the OSI model, or the IP layer of
TCP/IP. They are usually part of a router. A router is a device that receives packets from one
network and forwards them to another network. In a packet filtering firewall each packet is
compared to a set of criteria before it is forwarded. Depending on the packet and the criteria,
the firewall can drop the packet, forward it, or send a message to the originator. Rules can
include source and destination IP address, source and destination port number, and protocol
used. The advantage of packet filtering firewalls is their low cost and low impact on network
performance. Most routers support packet filtering. Even if other firewalls are used,
implementing packet filtering at the router level affords an initial degree of security at a low
network layer. This type of firewall only works at the network layer, however, and does not
support sophisticated rule-based models (see Figure 5). Network Address Translation (NAT)
routers offer the advantages of packet filtering firewalls but can also hide the IP addresses of
computers behind the firewall, and offer a level of circuit-based filtering.

3. Identify and describe the two categories of intrusion detection systems. (10 marks)
(June 2013) (Dec 2013) (5 marks) (Dec 2014)
To detect attacks against its machines, an organization may wish to implement Intrusion
Detection Systems (IDSs). IDSs fall into two categories: host based and network based.
Host based IDSs are usually installed on the machines they protect to monitor the status of
files stored on those machines. The IDS learns the configuration of the system, assigns
priorities to various files depending on their value, and can then alert the administrator of
suspicious activity.
Network based IDSs look at patterns of network traffic and attempt to detect unusual
activity. This could include packets coming into the organization's networks with addresses
from machines already within the organization (IP spoofing). It could also include high
volumes of traffic going to outside addresses (as in a denial of service attack).
Both host and network based IDSs require a database of previous activity. In the case of
host based IDSs, the system can create a database of file attributes, as well as checksums of
those files. Network-based IDSs can use a similar catalog of common attack signatures and
develop baselines of normal traffic. Both kinds of IDSs can be used together for the
maximum level of security for a particular network and set of systems.

4. According to the NIST's documentation on industry best practices, what are the six
reasons to acquire and use an IDS? Explain. (7 marks) (Dec 2012) (June 2013)
(10 marks) (June 2014)
Why Use an IDS?
According to the NIST's documentation on industry best practices, there are several
compelling reasons to acquire and use an IDS:
1. To prevent problem behaviors by increasing the perceived risk of discovery and
punishment for those who would attack or otherwise abuse the system
2. To detect attacks and other security violations that are not prevented by other security
measures
3. To detect and deal with the preambles to attacks (commonly experienced as network
probes and other "doorknob rattling" activities)
4. To document the existing threat to an organization
5. To act as quality control for security design and administration, especially of large and
complex enterprises
6. To provide useful information about intrusions that do take place, allowing improved
diagnosis, recovery, and correction of causative factors
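The packet filtering described under question 2 can be sketched as a small rule matcher. This is a toy, not a real firewall: the rule format, the crude prefix match for addresses, and all addresses and ports are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    dst_port: int
    protocol: str

def matches(rule, packet):
    """A rule is a dict of criteria (source/destination address, port,
    protocol) that must all hold for the packet."""
    for field, value in rule.items():
        pv = getattr(packet, field)
        if field.endswith("_ip"):
            if not pv.startswith(value):   # crude prefix match, sketch only
                return False
        elif pv != value:
            return False
    return True

def filter_packet(packet, rules):
    # First matching rule wins; anything not explicitly allowed is dropped.
    for rule, action in rules:
        if matches(rule, packet):
            return action
    return "drop"

RULES = [
    # Inbound packet claiming an internal 10.x source address: IP spoofing.
    ({"src_ip": "10."}, "drop"),
    # Permit inbound web traffic.
    ({"protocol": "tcp", "dst_port": 80}, "forward"),
]

print(filter_packet(Packet("203.0.113.5", "10.0.0.7", 80, "tcp"), RULES))  # forward
print(filter_packet(Packet("10.0.0.9", "10.0.0.7", 80, "tcp"), RULES))     # drop (spoofed)
print(filter_packet(Packet("203.0.113.5", "10.0.0.7", 23, "tcp"), RULES))  # drop (default)
```

The default-drop fall-through mirrors the deny-all stance a perimeter router filter would normally take.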
5. Explain the features of NIDS. List merits and demerits of the same. (3 marks) (Dec
2014) (7 marks) (Dec 2012) (June 2013)
Network-Based IDS
A network-based IDS (NIDS) resides on a computer or appliance connected to a segment of
an organization's network and monitors network traffic on that network segment, looking for
indications of ongoing or successful attacks. When a situation occurs that the NIDS is
programmed to recognize as an attack, it responds by sending notifications to administrators.
When examining the packets transmitted through an organization's networks, a NIDS looks
for attack patterns within network traffic such as large collections of related items that are of
a certain type, which could indicate that a denial-of-service attack is underway, or the
exchange of a series of related packets in a certain pattern, which could indicate that a port
scan is in progress. A NIDS can detect many more types of attacks than a host-based IDS, but
to do so, it requires a much more complex configuration and maintenance program. A NIDS
is installed at a specific place in the network (such as on the inside of an edge router) from
where it is possible to watch the traffic going into and out of a particular network segment.
The NIDS can be deployed to watch a specific grouping of host computers on a specific
network segment, or it may be installed to monitor all traffic between the systems that make
up an entire network. When placed next to a hub, switch, or other key networking device, the
NIDS may use that device's monitoring port. The monitoring port, also known as a switched
port analysis (SPAN) port or mirror port, is a specially configured connection on a network
device that is capable of viewing all of the traffic that moves through the entire device. In the
early '90s, before switches became the popular choice for connecting networks in a
shared-collision domain, hubs were used. Hubs received traffic from one node and
retransmitted it to all other nodes. This configuration allowed any device connected to the
hub to monitor all traffic passing through the hub. Unfortunately, it also represented a
security risk, since anyone connected to the hub could monitor all the traffic that moved
through that network segment. More recently, switches have been deployed on most
networks, and they, unlike hubs, create dedicated point-to-point links between their ports.
These links create a higher level of transmission security and privacy, and effectively prevent
anyone from being able to capture, and thus eavesdrop on, the traffic passing through the
switch. Unfortunately, however, this ability to capture the traffic is necessary for the use of
an IDS. Thus, monitoring ports are required. These connections enable network
administrators to collect traffic from across the network for analysis by the IDS as well as for
occasional use in diagnosing network faults and measuring network performance. Figure 7-2
shows a sample screen from Demarc Pure Secure (see www.demarc.com) displaying events
generated by the Snort Network IDS.

6. Explain the features of HIDS. List merits and demerits of the same. (3 marks) (Dec
2014) (7 marks) (Dec 2012) (June 2013)
Host-Based IDS
A host-based IDS (HIDS) works differently from a network-based version of IDS. While a
network-based IDS resides on a network segment and monitors activities across that segment,
a host-based IDS resides on a particular computer or server, known as the host, and monitors
activity only on that system. HIDSs are also known as system integrity verifiers as they
benchmark and monitor the status of key system files and detect when an intruder creates,
modifies, or deletes monitored files, like .ini, .cfg, and .dat files. Most HIDSs work on the
principle of configuration or change management, which means they record the sizes,
locations, and other attributes of system files. The HIDS then triggers an alert when one of
the following changes occurs: file attributes change, new files are created, or existing files are
deleted. A HIDS can also monitor systems logs for predefined events. The HIDS examines
these files and logs to determine if an attack is underway or has occurred, and if the attack is
succeeding or was successful. The HIDS will maintain its own log file so that even when
hackers successfully modify files on the target system to cover their tracks, the HIDS can
provide an independent audit trail of the attack. Once properly configured, a HIDS is very
reliable. The only time a HIDS produces a false positive alert is when an authorized change
occurs for a monitored file. This action can be quickly reviewed by an administrator, who
may choose then to disregard subsequent changes to the same set of files. If properly
configured, a HIDS can also detect when an individual user attempts to modify or exceed his
or her access authorization and give him or herself higher privileges. A HIDS has an
advantage over NIDS in that it can usually be installed in such a way that it can access
information that is encrypted when traveling over the network. For this reason, a HIDS is
able to use the content of otherwise encrypted traffic. Since the HIDS has a mission to detect
intrusion activity on one computer system, all the traffic it needs to make that decision is
coming to the system where the HIDS is running.
A HIDS relies on the classification of files into various categories and then applies various
notification actions, depending on the rules in the HIDS configuration. Most HIDSs provide
only a few general levels of alert notification. For example, an administrator can configure a
HIDS to treat the following types of changes as reportable security events: changes in a
system folder and changes within security-related applications.
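The change-management principle a HIDS uses (record attributes of monitored files, then alert on created, modified, or deleted files) can be sketched as follows. The filenames and contents are illustrative; a real HIDS would scan the filesystem and record many more attributes.

```python
import hashlib

def snapshot(files):
    """files: dict of name -> bytes content.
    Records each file's size and SHA-256 digest, standing in for the
    'sizes, locations, and other attributes' a HIDS benchmarks."""
    return {name: (len(data), hashlib.sha256(data).hexdigest())
            for name, data in files.items()}

def compare(baseline, current):
    """Report the three reportable events named in the text:
    new files, deleted files, and changed attributes."""
    alerts = []
    for name in current.keys() - baseline.keys():
        alerts.append(("created", name))
    for name in baseline.keys() - current.keys():
        alerts.append(("deleted", name))
    for name in baseline.keys() & current.keys():
        if baseline[name] != current[name]:
            alerts.append(("attributes changed", name))
    return sorted(alerts)

baseline = snapshot({"system.ini": b"[boot]", "app.cfg": b"mode=safe"})
# An intruder edits one monitored file, drops a new one, and removes another:
current = snapshot({"system.ini": b"[boot]\nbackdoor=1", "evil.dat": b"..."})
print(compare(baseline, current))
```

Because the baseline is kept independently of the monitored host's own logs, the alerts survive even if the attacker scrubs those logs, which is the independent audit trail described above.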
7.
In order to secure a network, it is imperative that someone in the organization knows exactly
where the network needs securing. This may sound like a simple and intuitive statement;
however, many companies skip this step. They install a simple perimeter firewall, and then,
lulled into a sense of security by this single layer of defense, they rest on their laurels. To
truly assess the risk within a computing environment, one must deploy technical controls
using a strategy of defense in depth. A strategy based on the concept of defense in depth is
likely to include intrusion detection systems (IDS), active vulnerability scanners, passive
vulnerability scanners, automated log analyzers, and protocol analyzers (commonly referred
to as sniffers). As you've learned, the first item in this list, the IDS, helps to secure networks
by detecting intrusions; the remaining items in the list also help secure networks, but they do
this by helping administrators identify where the network needs securing. More specifically,
scanner and analysis tools can find vulnerabilities in systems, holes in security components,
and unsecured aspects of the network. Although some information security experts may not
perceive them as defensive tools, scanners, sniffers, and other such vulnerability analysis
tools can be invaluable to security administrators because they enable administrators to see
what the attacker sees. Some of these tools are extremely complex and others are rather
simple. The tools can also range from being expensive commercial products to those that are
freely available at no cost. Many of the best scanning and analysis tools are those that the
attacker community has developed, and are available free on the Web. Good administrators
should have several hacking Web sites bookmarked and should try to keep up with chat room
discussions on new vulnerabilities, recent conquests, and favorite assault techniques. There is
nothing wrong with a security administrator using the tools that potential attackers use in
order to examine his or her defenses and find areas that require additional attention. In the
military, there is a long and distinguished history of generals inspecting the troops under their
command before battle, walking down the line checking out the equipment and mental
preparedness of each soldier. In a similar way, the security administrator can use
vulnerability analysis tools to inspect the units (host computers and network devices) under
his or her command. A word of caution, though, should be heeded: many of these scanning
and analysis tools have distinct signatures, and some Internet service providers (ISPs) scan
for these signatures. If the ISP discovers someone using hacker tools, it can pull that person's
access privileges. As such, it is probably best for administrators first to establish a working
relationship with their ISPs and notify the ISP of their plans.

8. Discuss the process of encryption and define key terms. (10 marks) (Dec 2014)
Basic Encryption Definitions
To understand the fundamentals of cryptography, you must become familiar with the
following definitions:
-Algorithm: The programmatic steps used to convert an unencrypted message into an
encrypted sequence of bits that represent the message; sometimes used as a reference to the
programs that enable the cryptographic processes
-Work factor: The amount of effort (usually in hours) required to perform cryptanalysis on an
encoded message so that it may be decrypted when the key or algorithm (or both) are
unknown
-Steganography: The process of hiding messages; for example, messages can be hidden
within the digital encoding of a picture or graphic

UNIT 4: Cryptography
1. What are the fundamental differences between symmetric and asymmetric encryption?
Symmetric encryption uses the same secret key to encipher and decipher a message, while
asymmetric encryption uses two different but related keys, a public key and a private key.
Any message (text, binary files, or documents) that is encrypted by using the public key can
only be decrypted by applying the same algorithm, but by using the matching private key.
Likewise, any message that is encrypted by using the private key can only be decrypted by
using the matching public key. This means that you do not have to worry about passing
public keys over the Internet (the keys are supposed to be public). A problem with
asymmetric encryption, however, is that it is slower than symmetric encryption. It requires
far more processing power to both encrypt and decrypt the content of a message.
Cipher Methods
A plaintext can be encrypted through one of two methods, the bit stream method or the block
cipher method. With the bit stream method, each bit in the plaintext is transformed into a
cipher bit one bit at a time. In the case of the block cipher method, the message is divided
into blocks, for example, sets of 8-, 16-, 32-, or 64-bit blocks, and then each block of
plaintext bits is transformed into an encrypted block of cipher bits using an algorithm and a
key. Bit stream methods most commonly use algorithm functions like the exclusive OR
operation (XOR), whereas block methods can use substitution, transposition, XOR, or some
combination of these operations, as described in the following sections. As you read on, note
that most encryption operations are performed on the plaintext at the level of its binary digits
(bits), but some operations may operate at the byte or character level.
Attacks on Cryptosystems
Historically, attempts to gain unauthorized access to secure communications have used brute
force attacks in which the ciphertext is repeatedly searched for clues that can lead to the
algorithm's structure. These attacks are known as ciphertext attacks, and involve a hacker
searching for a common text structure, wording, or syntax in the encrypted message that can
enable him or her to calculate the number of each type of letter used in the message. This
process, known as frequency analysis, can be used along with published frequency of
occurrence patterns of various languages and can allow an experienced attacker to crack
almost any code quickly if the individual has a large enough sample of the encoded text. To
protect against this, modern algorithms attempt to remove the repetitive and predictable
sequences of characters from the ciphertext.
Occasionally, an attacker may obtain duplicate texts, one in ciphertext and one in plaintext,
which enable the individual to reverse-engineer the encryption algorithm in a
known-plaintext attack scheme. Alternatively, attackers may conduct a selected-plaintext
attack by sending potential victims a specific text that they are sure the victims will forward
on to others. When the victim does encrypt and forward the message, it can be used in the
attack if the attacker can acquire the outgoing encrypted version. At the very least, reverse
engineering can usually lead the attacker to discover the cryptosystem that is being
employed. Most publicly available encryption methods are generally released to the
information and computer security communities for testing of the encryption algorithm's
resistance to cracking. In addition, attackers are kept informed of which methods of attack
have failed. Although the purpose of sharing this information is to develop a more secure
algorithm, it has the danger of keeping attackers from wasting their time, that is, freeing them
up to find new weaknesses in the cryptosystem or new, more challenging means of obtaining
encryption keys.
Other attacks include man-in-the-middle, correlation, dictionary, and timing attacks.
Although many of these attacks were discussed in Chapter 2, they are reiterated here in the
context of cryptosystems and their impact on these systems.
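The bit stream XOR method and the letter counting behind frequency analysis can both be sketched briefly. The single-byte repeating key below is a deliberately weak toy, chosen precisely because it leaves the repetition patterns that frequency analysis exploits.

```python
from collections import Counter
from itertools import cycle

def xor_stream(data: bytes, keystream) -> bytes:
    # Each plaintext byte is combined with a keystream byte via XOR;
    # applying the same keystream again recovers the plaintext.
    return bytes(b ^ k for b, k in zip(data, keystream))

plaintext = b"ATTACK AT DAWN"
ciphertext = xor_stream(plaintext, cycle(b"K"))       # toy one-byte key
assert xor_stream(ciphertext, cycle(b"K")) == plaintext  # XOR is its own inverse

# A fixed substitution preserves how often each symbol occurs, so the
# ciphertext's frequency profile matches the plaintext's:
print(Counter(plaintext).most_common(2))
print(Counter(ciphertext).most_common(2))  # same counts, different symbols
```

Comparing those counts against published letter-frequency tables for a language is exactly the frequency analysis described above; modern ciphers defeat it by removing such repetitive, predictable structure from the ciphertext.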
3. Define the following terms: i) algorithm ii) key iii) plaintext iv) steganography v) work
factor vi) keyspace. (10 marks) (June 2013) (Dec 2013) (5 marks) (Dec 2014)
4. Describe the terms: authentication, integrity, privacy, authorization and
non-repudiation. (5 marks) (Dec 2012) (June 2013) (10 marks) (June 2014)
Introduction to Network Security, Authentication Applications
1. AUTHENTICATION: The assurance that the communicating entity is the one that it
claims to be.
_ Peer Entity Authentication: Used in association with a logical connection to provide
confidence in the identity of the entities connected.
_ Data Origin Authentication: In a connectionless transfer, provides assurance that the source
of received data is as claimed.
2. ACCESS CONTROL: The prevention of unauthorized use of a resource (i.e., this service
controls who can have access to a resource, under what conditions access can occur, and what
those accessing the resource are allowed to do).
3. DATA CONFIDENTIALITY: The protection of data from unauthorized disclosure.
_ Connection Confidentiality: The protection of all user data on a connection.
_ Connectionless Confidentiality: The protection of all user data in a single data block.
_ Selective-Field Confidentiality: The confidentiality of selected fields within the user data
on a connection or in a single data block.
_ Traffic Flow Confidentiality: The protection of the information that might be derived from
observation of traffic flows.
4. DATA INTEGRITY: The assurance that data received are exactly as sent by an authorized
entity (i.e., contain no modification, insertion, deletion, or replay).
5. NON-REPUDIATION: Provides protection against denial by one of the entities involved
in a communication of having participated in all or part of the communication.
-Key or cryptovariable: The information used in conjunction with an algorithm to create the
ciphertext from the plaintext or derive the plaintext from the ciphertext; the key can be a
series of bits used by a computer program, or it can be a passphrase used by humans that is
then converted into a series of bits for use in the computer program
-Keyspace: The entire range of values that can possibly be used to construct an individual
key
-Link encryption: A series of encryptions and decryptions between a number of systems,
wherein each system in a network decrypts the message sent to it and then reencrypts it using
different keys and sends it to the next neighbor, and this process continues until the message
reaches the final destination
-Plaintext or cleartext: The original unencrypted message that is encrypted; also the name
given to the results of a message that has been successfully decrypted
-Steganography: The process of hiding messages; for example, messages can be hidden
within the digital encoding of a picture or graphic

Explain the man-in-the-middle attack. (Dec 2014)
Man-in-the-Middle Attack
A man-in-the-middle attack attempts to intercept the transmission of a public key or even to
insert a known key structure in place of the requested public key. Thus, attackers attempt to
place themselves between the sender and receiver, and once they've intercepted the request
for key exchanges, they send each participant a valid public key, which is known only to
them. From the perspective of the victims of such attacks, their encrypted communication
appears to be occurring normally, but in fact the attacker is receiving each encrypted message
and decoding it (with the key given to the sending party), and then encrypting and sending it
to the originally intended recipient. Establishment of public keys with digital signatures can
prevent the traditional man-in-the-middle attack, as the attacker cannot duplicate the
signatures.
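The key-substitution idea can be sketched with a toy Diffie-Hellman-style exchange, in which the attacker (Mallory) intercepts both public values and hands each victim his own. The numbers are tiny and purely illustrative; real systems use very large primes.

```python
P, G = 23, 5                 # toy public parameters (prime modulus, generator)

def pub(secret):             # public value: g^secret mod p
    return pow(G, secret, P)

def shared(their_pub, secret):   # shared key from the other side's public value
    return pow(their_pub, secret, P)

a, b, m = 6, 15, 13          # Alice's, Bob's, and Mallory's secrets

# Mallory intercepts each request for a key exchange and substitutes
# his own public value, so each victim unknowingly keys with Mallory:
alice_key = shared(pub(m), a)
bob_key   = shared(pub(m), b)
mallory_with_alice = shared(pub(a), m)
mallory_with_bob   = shared(pub(b), m)

assert alice_key == mallory_with_alice  # Mallory can decode Alice's messages,
assert bob_key == mallory_with_bob      # re-encrypt them, and forward to Bob
assert alice_key != bob_key             # yet to both victims all looks normal
```

Because each victim holds a key shared only with Mallory, he can decode, read, and re-encrypt every message in transit, which is exactly the relay described above; authenticating the public values (e.g., with digital signatures) is what breaks the attack.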
The decision of which RFCs become Internet standards is made by the IESG, on the
recommendation of the IETF. To become a standard, a specification must meet the following
criteria:
_ Be stable and well understood
_ Be technically competent
_ Have multiple, independent, and interoperable implementations with substantial operational
experience
_ Enjoy significant public support
_ Be recognizably useful in some or all parts of the Internet
The key difference between these criteria and those used for international standards from ITU
is the emphasis here on operational experience. The left-hand side of Figure 1.1 shows the
series of steps, called the standards track, that a specification goes through to become a
standard; this process is defined in RFC 2026. The steps involve increasing amounts of
scrutiny and testing. At each step, the IETF must make a recommendation for advancement
of the protocol, and the IESG must ratify it. The process begins when the IESG approves the
publication of an Internet Draft document as an RFC
with the status of Proposed Standard.
Figure 1.1 Internet RFC Publication Process
The white boxes in the diagram represent temporary states, which should be occupied for the
minimum practical time. However, a document must remain a Proposed Standard for at least
six months and a Draft Standard for at least four months to allow time for review and
comment. The gray boxes represent long-term states that may be occupied for years.
For a specification to be advanced to Draft Standard status, there must be at least two
independent and interoperable implementations from which adequate operational experience
has been obtained. After significant implementation and operational experience has been
obtained, a specification may be elevated to Internet Standard. At this point, the Specification
is assigned an STD number as well as an RFC number. Finally, when a protocol becomes
obsolete, it is assigned to the Historic state.
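The timing and implementation gates described above can be sketched as a small state machine. This is a simplification of the RFC 2026 process: months are modeled as plain integers and only the two gates named in the text are checked.

```python
# Minimum time a document must remain in each state (from the text):
MIN_MONTHS = {"Proposed Standard": 6, "Draft Standard": 4}
NEXT = {"Proposed Standard": "Draft Standard",
        "Draft Standard": "Internet Standard"}

def advance(state, months_in_state, interoperable_impls=2):
    """Return the next standards-track state if the gates are met,
    otherwise stay in the current state."""
    if months_in_state < MIN_MONTHS.get(state, 0):
        return state  # still inside the review-and-comment window
    if state == "Proposed Standard" and interoperable_impls < 2:
        return state  # needs two independent, interoperable implementations
    return NEXT.get(state, state)

print(advance("Proposed Standard", 3))   # too early -> stays Proposed Standard
print(advance("Proposed Standard", 7))   # -> Draft Standard
print(advance("Draft Standard", 5))      # -> Internet Standard
```

The "Internet Standard" state is one of the long-term states that a document may occupy for years, until it is moved to Historic.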
5. Compare active and passive attacks.(5 marks)(Jun 2013)(10 marks)(Jun 2013)(6
marks)(Dec 2014)
Security Attacks:
Security attacks, used both in X.800 and RFC 2828, are classified as passive attacks and
active attacks.
A passive attack attempts to learn or make use of information from the system but does not
affect system resources.
An active attack attempts to alter system resources or affect their operation. Passive attacks
are in the nature of eavesdropping on, or monitoring of, transmissions. The goal of the
opponent is to obtain information that is being transmitted. Two types of passive attacks are
release of message contents and traffic analysis.
The opponent could determine the location and identity of communicating hosts and could
observe the
frequency and length of messages being exchanged. This information might be useful in
guessing the nature of the communication that was taking place. Passive attacks are very
difficult to detect because they do not involve any alteration of the data. Typically, the
message traffic is sent and received in an apparently normal fashion and neither the sender
nor receiver is aware that a third party has read the messages or observed the traffic pattern.
However, it is feasible to prevent the success of these attacks, usually by means of
encryption. Thus, the emphasis in dealing with passive attacks is on prevention rather than
detection.
Active Attacks:
Active attacks involve some modification of the data stream or the creation of a false stream
and can be subdivided into four categories: Masquerade, Replay, Modification of messages,
and Denial of service.
A masquerade takes place when one entity pretends to be a different entity (Figure 1.4a).
A masquerade attack usually includes one of the other forms of active attack. For example,
authentication sequences can be captured and replayed after a valid authentication sequence
has taken place, thus enabling an authorized entity with few privileges to obtain extra
privileges by impersonating an entity that has those privileges.
Replay involves the passive capture of a data unit and its subsequent retransmission to
produce an unauthorized effect.
Modification of messages simply means that some portion of a legitimate message is altered,
or that messages are delayed or reordered, to produce an unauthorized effect (Figure 1.4c).
For example, a message meaning "Allow John Smith to read confidential file accounts" is
modified to mean "Allow Fred Brown to read confidential file accounts."
The denial of service prevents or inhibits the normal use or management of communications
facilities (Figure 1.4d). This attack may have a specific target; for example, an entity may
suppress all messages directed to a particular destination (e.g., the security audit service).
Another form of service denial is the disruption of an entire network, either by disabling the
network or by overloading it with messages so as to degrade performance.
6. Explain Kerberos version 4 message exchanges.(10 marks) (Dec 2012)(6 marks Dec
2014)
Kerberos:
Kerberos is an authentication service developed by MIT. The problem that Kerberos
addresses is this: Assume an open distributed environment in which users at workstations
wish to access services on servers distributed throughout the network. We would like for
servers to be able to restrict access to authorized users and to be able to authenticate requests
for service. In this environment, a workstation cannot be trusted to identify its users correctly
to network services. In particular, the following three threats exist:
_ A user may gain access to a particular workstation and pretend to be another user
operating from that workstation.
_ A user may alter the network address of a workstation so that the requests sent from
the altered workstation appear to come from the impersonated workstation.
_ A user may eavesdrop on exchanges and use a replay attack to gain entrance to a
server or to disrupt operations.
Encryption system dependence: Version 4 requires the use of DES. Export restrictions on
DES, as well as doubts about the strength of DES, were thus of concern. In version 5,
ciphertext is tagged with an encryption-type identifier so that any encryption technique
may be used.
In any of these cases, an unauthorized user may be able to gain access to services and data
that he or she is not authorized to access.
Rather than building in elaborate authentication protocols at each server, Kerberos provides a
centralized authentication server whose function is to authenticate users to servers and
servers to users. Unlike most other authentication schemes, Kerberos relies exclusively on
symmetric encryption, making no use of public-key encryption.
Two versions of Kerberos are in common use. Version 4 implementations still exist. Version
5 corrects some of the security deficiencies of version 4 and has been issued as a proposed
Internet Standard (RFC 1510).
Today the more commonly used architecture is a distributed architecture consisting of
dedicated user workstations (clients) and distributed or centralized servers. In this
environment, three approaches to security can be envisioned:
_ Rely on each individual client workstation to assure the identity of its user or users
and rely on each server to enforce a security policy based on user identification (ID).
_ Require that client systems authenticate themselves to servers, but trust the client
system concerning the identity of its user.
_ Require the user to prove his or her identity for each service invoked. Also require
that servers prove their identity to clients.
7. List out differences between Kerberos version 4 and version 5.(10 marks)(Jun 2013)
Kerberos Version 4:
Version 4 of Kerberos makes use of DES to provide the authentication service. Viewing the
protocol as a whole, it is difficult to see the need for the many elements contained
therein. Therefore, we adopt a strategy used by Bill Bryant of Project Athena and build up to
the full protocol by looking first at several hypothetical dialogues. Each successive dialogue
adds additional complexity to counter security vulnerabilities revealed in the preceding
dialogue.
The Version 4 Authentication Dialogue:
The first problem is the lifetime associated with the ticket-granting ticket. If this lifetime is
very short (e.g., minutes), then the user will be repeatedly asked for a password. If the
lifetime is long (e.g., hours), then an opponent has a greater opportunity for replay. The
second problem is that there may be a requirement for servers to authenticate themselves to
users. Without such authentication, an opponent could sabotage the configuration so that
messages to a server were directed to another location. The false server would then be in a
position to act as a real server and capture any information from the user and deny the true
service to the user.
Kerberos Version 5 is specified in RFC 1510 and provides a number of improvements over
version 4.
Differences between Versions 4 and 5:
Version 5 is intended to address the limitations of version 4 in two areas: environmental
shortcomings and technical deficiencies. Let us briefly summarize the improvements in each
area.
Kerberos Version 4 was developed for use within the Project Athena environment and,
accordingly, did not fully address the need to be of general purpose. This led to the following
environmental shortcomings:
2. Describe the steps involved in providing authentication and confidentiality by PGP. (10
marks)(Dec 2012) (6 marks)(Dec 2014)
Dept. of CSE, SJBIT
The receiver generates a new hash code for the message and compares it with the decrypted
hash code. If the two match, the message is accepted as authentic (Figure 1.1: PGP
cryptographic functions). The combination of SHA-1 and RSA provides an effective digital
signature scheme. Because of the strength of RSA, the recipient is assured that only the
possessor of the matching private key can generate the signature. Because of the strength of
SHA-1, the recipient is assured that no one else could generate a new message that matches
the hash code and, hence, the signature of the original message. Although signatures normally
are found attached to the message or file, this is not always the case: detached signatures are
also supported. A detached signature may be stored and transmitted separately from the
message it signs. Detached signatures are useful in several contexts.
_ A user may wish to maintain a separate signature log of all messages sent or received.
_ A detached signature of an executable program can detect subsequent virus infection.
_ A detached signature can be used when more than one party must sign a document, such as
a legal contract. Each person's signature is independent and therefore is applied only to the
document. Otherwise, signatures would have to be nested, with the second signer signing
both the document and the first signature, and so on.
Confidentiality:
Confidentiality is provided by encrypting messages to be transmitted or to be stored locally as
files. In both cases, the symmetric encryption algorithm CAST-128 (Carlisle Adams and
Stafford Tavares) may be used. Alternatively, IDEA (International Data Encryption
Algorithm) or triple DES (3DES) may be used. The 64-bit cipher feedback
(CFB) mode is used. As always, one must address the problem of key distribution. In PGP,
each symmetric key is used only once. That is, a new key is generated as a random 128-bit
number for each message. Thus, although this is referred to in the documentation as a session
key, it is in reality a one-time key. Because it is to be used only once, the session key is
bound to the message and transmitted with it. To protect the key, it is encrypted with the
receiver's public key. Figure 1.1b illustrates the sequence, which can be described as follows:
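The hybrid flow just described (fresh one-time session key encrypts the message; the session key travels under the receiver's public key) can be sketched as follows. This is a toy illustration only: textbook RSA with tiny primes stands in for real RSA, and a hash-based keystream stands in for CAST-128; none of it is cryptographically secure.

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy symmetric cipher: XOR the data with a SHA-256 counter keystream.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

# Receiver's toy RSA key pair: p=61, q=53 -> n=3233, e=17, d=2753.
n, e, d = 3233, 17, 2753

session_key = os.urandom(16)                 # new 128-bit one-time key per message
ciphertext = keystream_xor(session_key, b"my secret message")

# Bind the session key to the message: encrypt it (here byte-by-byte) with
# the receiver's public key and send it alongside the ciphertext.
wrapped = [pow(b, e, n) for b in session_key]

# Receiver unwraps the session key with the private key, then decrypts.
recovered_key = bytes(pow(c, d, n) for c in wrapped)
plaintext = keystream_xor(recovered_key, ciphertext)
print(plaintext)  # b'my secret message'
```

The one-time nature of the session key is the point: even if one message's key leaks, other messages stay protected.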
4. Explain different MIME content types. (5 marks)(Jun 2013)(7 marks) (Dec 2013) (10
marks) (Dec 2012)
MIME-Version: Must have the parameter value 1.0. This field indicates that the message
conforms to RFCs 2045 and 2046.
Content-Type: Describes the data contained in the body with sufficient detail that the
receiving user agent can pick an appropriate agent or mechanism to represent the data to the
user or otherwise deal with the data in an appropriate manner.
Content-Transfer-Encoding: Indicates the type of transformation that has been used to
represent the body of the message in a way that is acceptable for mail transport.
Content-ID: Used to identify MIME entities uniquely in multiple contexts.
Content-Description: A text description of the object with the body; this is useful when the
object is not readable (e.g., audio data).
MIME Content Types:
The bulk of the MIME specification is concerned with the definition of a variety of content
types. This reflects the need to provide standardized ways of dealing with a wide variety of
information representations in a multimedia environment. Table 1.3 lists the content types
specified in RFC 2046. There are seven different major types of content and a total of 15
subtypes. In general, a content type declares the general type of data, and the subtype
specifies a particular format for that type of data.
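The type/subtype split described above can be seen directly with Python's standard `email.mime` classes. A small sketch building a `multipart/mixed` message with two parts:

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Build a multipart message: the container's Content-Type is multipart/mixed
# (MIME-Version: 1.0 is added automatically), and each attached part carries
# its own content type and subtype.
msg = MIMEMultipart("mixed")
msg["Subject"] = "MIME demo"
msg.attach(MIMEText("Hello in plain text.", "plain"))
msg.attach(MIMEText("<p>Hello in <b>HTML</b>.</p>", "html"))

print(msg["MIME-Version"])        # 1.0
print(msg.get_content_type())     # multipart/mixed
print([p.get_content_type() for p in msg.get_payload()])
# ['text/plain', 'text/html']
```

Each part also gets its own Content-Transfer-Encoding header chosen by the library, matching the header list above.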
3. Discuss limitations of SMTP/RFC 822 and how MIME overcomes these limitations. (6
5. Explain S/MIME certificate processing method. (10 marks)(Jun 2013)
S/MIME Functionality:
In terms of general functionality, S/MIME is very similar to PGP. Both offer the ability to
sign and/or encrypt messages. In this subsection, we briefly summarize S/MIME capability.
We then look in more detail at this capability by examining message formats and message
preparation.
Functions
S/MIME provides the following functions:
Enveloped data: This consists of encrypted content of any type and encrypted-content
encryption keys for one or more recipients.
Signed data: A digital signature is formed by taking the message digest of the content to be
signed and then encrypting that with the private key of the signer. The content plus signature
are then encoded using base64 encoding. A signed data message can only be viewed by a
recipient with S/MIME capability.
Clear-signed data: As with signed data, a digital signature of the content is formed.
However, in this case, only the digital signature is encoded using base64. As a
result, recipients without S/MIME capability can view the message content, although they
cannot verify the signature.
Signed and enveloped data: Signed-only and encrypted-only entities may be nested, so that
encrypted data may be signed and signed data or clear-signed data may be encrypted.
Cryptographic Algorithms:
Table 1.6 summarizes the cryptographic algorithms used in S/MIME. S/MIME uses the
following terminology, taken from RFC 2119 to specify the requirement level:
Must: The definition is an absolute requirement of the specification. An implementation
must include this feature or function to be in conformance with the specification.
Should: There may exist valid reasons in particular circumstances to ignore this feature or
function, but it is recommended that an implementation include the feature or function.
UNIT 7: IP Security
1. Mention the application of IPsec. (10 marks) (June 2013) (Dec 2013) ( 5 marks) (Dec
2014)
Applications of IPSec:
IPSec provides the capability to secure communications across a LAN, across private and
public WANs, and across the Internet. Examples of its use include the following:
-Secure branch office connectivity over the Internet: A company can build a secure virtual
private network over the Internet or over a public WAN. This enables a business to rely
heavily on the Internet and reduce its need for private networks, saving costs and network
management overhead.
-Secure remote access over the Internet: An end user whose system is equipped with IP
security protocols can make a local call to an Internet service provider (ISP) and gain secure
access to a company network. This reduces the cost of toll charges for traveling employees
and telecommuters.
-Establishing extranet and intranet connectivity with partners: IPSec can be used to secure
communication with other organizations, ensuring authentication and confidentiality and
providing a key exchange mechanism.
-Enhancing electronic commerce security: Even though some Web and electronic commerce
applications have built-in security protocols, the use of IPSec enhances that security.
The principal feature of IPSec that enables it to support these varied applications is that it
can encrypt and/or authenticate all traffic at the IP level. Thus, all distributed applications,
including remote logon, client/server, e-mail, file transfer, Web access, and so on, can be
secured. Figure 1.1 shows a typical scenario of IPSec usage. An organization maintains LANs at
dispersed locations. Nonsecure IP traffic is conducted on each LAN. For traffic offsite,
through some sort of private or public WAN, IPSec protocols are used. These protocols
operate in networking devices, such as a router or firewall, that connect each LAN to the
outside world. The IPSec networking device will typically encrypt and compress all traffic
going into the WAN, and decrypt and decompress traffic coming from the WAN; these
operations are transparent to workstations and servers on the LAN. Secure transmission is
also possible with individual users who dial into the WAN. Such user workstations must
implement the IPSec protocols to provide security.
2. Explain the security association selections that determine a security policy database
entry.( 6 marks)( Dec 2013)(8 marks)(Dec 2014)
Security Associations:
A key concept that appears in both the authentication and confidentiality mechanisms for IP is
the security association (SA). An association is a one-way relationship between a sender and a
receiver that affords security services to the traffic carried on it. If a peer relationship is
needed, for two-way secure exchange, then two security associations are required. Security
services are afforded to an SA for the use of AH or ESP, but not both. A security association
is uniquely identified by three parameters:
Security Parameters Index (SPI): A bit string assigned to this SA and having local
significance only. The SPI is carried in AH and ESP headers to enable the receiving system
to select the SA under which a received packet will be processed.
IP Destination Address: Currently, only unicast addresses are allowed; this is the address of
the destination endpoint of the SA, which may be an end user system or a network system
such as a firewall or router.
Security Protocol Identifier: This indicates whether the association is an AH or ESP security
association.
Hence, in any IP packet, the security association is uniquely identified by the Destination
Address in the IPv4 or IPv6 header and the SPI in the enclosed extension header (AH or
ESP).
SA Parameters:
In each IPSec implementation, there is a nominal Security Association Database that
defines the parameters associated with each SA. A security association is normally defined by
the following parameters:
-Sequence Number Counter: A 32-bit value used to generate the Sequence Number field in
AH or ESP headers.
-Sequence Counter Overflow: A flag indicating whether overflow of the Sequence Number
Counter should generate an auditable event and prevent further transmission of packets on
this SA (required for all implementations).
-Anti-Replay Window: Used to determine whether an inbound AH or ESP packet is a replay.
-AH Information: Authentication algorithm, keys, key lifetimes, and related parameters being
used with AH (required for AH implementations).
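The anti-replay window named above works by remembering which sequence numbers in a sliding window have already been accepted. A minimal sketch (the real mechanism in AH/ESP uses a 64-bit window tracked as a bitmap; a set is used here for clarity):

```python
W = 64  # window size; 64 is the common default in AH/ESP implementations

class AntiReplay:
    def __init__(self):
        self.highest = 0      # highest sequence number accepted so far
        self.seen = set()     # accepted numbers inside the window

    def accept(self, seq: int) -> bool:
        if seq > self.highest:              # to the right: advance the window
            self.highest = seq
            self.seen.add(seq)
            return True
        if seq <= self.highest - W:         # left of the window: discard
            return False
        if seq in self.seen:                # duplicate inside window: replay
            return False
        self.seen.add(seq)                  # inside window, first arrival
        return True

w = AntiReplay()
print([w.accept(s) for s in (1, 5, 3, 5, 200, 150, 100)])
# [True, True, True, False, True, True, False]
```

The second `5` is rejected as a replay, and `100` is rejected because it falls to the left of the window after `200` advanced it.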
3. Describe SA parameters and SA selectors in detail.(5 marks)( June 2013)(10 marks)
(Dec 2013)(June 2014)
SA Selectors:
IPSec provides the user with considerable flexibility in the way in which IPSec services are
applied to IP traffic. SAs can be combined in a number of ways to yield the desired user
configuration. Furthermore, IPSec provides a high degree of granularity in discriminating
between traffic that is afforded IPSec protection and traffic that is allowed to bypass IPSec, in
the former case relating IP traffic to specific SAs. The means by which IP traffic is related to
specific SAs (or no SA in the case of traffic allowed to bypass IPSec) is the nominal Security
Policy Database (SPD). In its simplest form, an SPD contains entries, each of which defines a
subset of IP traffic and points to an SA for that traffic. In more complex environments, there
may be multiple entries that potentially relate to a single SA or multiple SAs associated with
a single SPD entry. The reader is referred to the relevant IPSec documents for a full
discussion.
Each SPD entry is defined by a set of IP and upper-layer protocol field values, called
selectors. In effect, these selectors are used to filter outgoing traffic in order to map it into a
particular SA. Outbound processing obeys the following general sequence for each IP packet:
-Compare the values of the appropriate fields in the packet (the selector fields) against the
SPD to find a matching SPD entry, which will point to zero or more SAs.
-Determine the SA (if any) for this packet and its associated SPI.
-Do the required IPSec processing (i.e., AH or ESP processing).
The following selectors determine an SPD entry:
-Destination IP Address: This may be a single IP address, an enumerated list or range of
addresses, or a wildcard (mask) address. The latter two are required to support more than one
destination system sharing the same SA (e.g., behind a firewall).
-Source IP Address: This may be a single IP address, an enumerated list or range of
addresses, or a wildcard (mask) address. The latter two are required to support more than one
source system sharing the same SA (e.g., behind a firewall).
-User ID: A user identifier from the operating system. This is not a field in the IP or upper-layer headers but is available if IPSec is running on the same operating system as the user.
-Data Sensitivity Level: Used for systems providing information flow security (e.g., Secret or
Unclassified).
-Transport Layer Protocol: Obtained from the IPv4 Protocol or IPv6 Next Header field. This
may be an individual protocol number, a list of protocol numbers, or a range of protocol
numbers.
-Source and Destination Ports: These may be individual TCP or UDP port values, an
enumerated list of ports, or a wildcard port.
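The outbound lookup described above (match selector fields, then pick the SA or bypass) can be sketched with a hypothetical SPD; entry fields, SPI values, and the first-match policy here are illustrative simplifications:

```python
import ipaddress

# Hypothetical SPD: each entry's selectors filter outgoing packets onto an SA
# (identified here only by its SPI) or let them bypass IPSec (spi=None).
# None in a selector field acts as a wildcard.
SPD = [
    {"dst": "10.0.0.0/24", "proto": 6,    "dport": 443,  "spi": 0x1001},
    {"dst": "10.0.0.0/24", "proto": 17,   "dport": None, "spi": 0x1002},
    {"dst": "0.0.0.0/0",   "proto": None, "dport": None, "spi": None},   # bypass
]

def lookup(dst: str, proto: int, dport: int):
    # First matching entry wins; compare each selector field of the packet.
    for entry in SPD:
        if ipaddress.ip_address(dst) not in ipaddress.ip_network(entry["dst"]):
            continue
        if entry["proto"] is not None and entry["proto"] != proto:
            continue
        if entry["dport"] is not None and entry["dport"] != dport:
            continue
        return entry["spi"]

print(hex(lookup("10.0.0.7", 6, 443)))   # 0x1001 -> AH/ESP processing under that SA
print(lookup("8.8.8.8", 6, 80))          # None   -> bypass IPSec
```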
4. Explain IPsec and ESP format. (5 marks)(Jun 2013)(10 marks) (Dec 2013)
-Next Header (8 bits): Identifies the type of header immediately following this header.
-Payload Length (8 bits): Length of Authentication Header in 32-bit words, minus 2.
For example, the default length of the authentication data field is 96 bits, or three 32- bit
words. With a three-word fixed header, there are a total of six words in the header, and the
Payload Length field has a value of 4.
-Reserved (16 bits): For future use.
-Security Parameters Index (32 bits): Identifies a security association.
-Sequence Number (32 bits): A monotonically increasing counter value, discussed later.
-Authentication Data (variable): A variable-length field (must be an integral number
of 32-bit words) that contains the Integrity Check Value (ICV), or MAC, for this packet,
discussed later.
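The field sizes above can be checked by packing the header. A sketch with a hypothetical SPI and a zeroed 96-bit ICV placeholder; note the Payload Length worked example (six 32-bit words total, so the field carries 4):

```python
import struct

# Fixed part of the AH header, matching the field layout above.
next_header = 4           # illustrative value (IP-in-IP)
payload_len = 4           # six 32-bit words in the header, minus 2
reserved = 0
spi = 0xDEADBEEF          # hypothetical SA identifier
seq = 1                   # monotonically increasing per-SA counter
icv = bytes(12)           # 96-bit Integrity Check Value (zeroed placeholder)

# !BBHII = network byte order: two bytes, one 16-bit, two 32-bit fields.
ah = struct.pack("!BBHII", next_header, payload_len, reserved, spi, seq) + icv
print(len(ah))            # 24 bytes = six 32-bit words
```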
5. Describe transport and tunnel modes used for IPsec AH authentication, bringing out their
scope relevant to IPv4. (3 marks)(Dec 2014)(19 marks)(Jun 2012)(June 2013)
-RFC 2401: An overview of a security architecture
-RFC 2402: Description of a packet authentication extension to IPv4 and IPv6
-RFC 2406: Description of a packet encryption extension to IPv4 and IPv6
-RFC 2408: Specification of key management capabilities
Support for these features is mandatory for IPv6 and optional for IPv4. In both cases, the
security features are implemented as extension headers that follow the main IP header. The
extension header for authentication is known as the Authentication header; that for encryption
is known as the Encapsulating Security Payload (ESP) header. In addition to these four RFCs,
a number of additional drafts have been published by the IP Security Protocol Working
Group set up by the IETF. The documents are divided into seven groups, as depicted in
Figure 1.2 (RFC 2401).
-Architecture: Covers the general concepts, security requirements, definitions, and
mechanisms defining IPSec technology.
-Encapsulating Security Payload (ESP): Covers the packet format and general issues related
to the use of the ESP for packet encryption and, optionally, authentication.
-Authentication Header (AH): Covers the packet format and general issues related to the use
of AH for packet authentication.
-Encryption Algorithm: A set of documents that describe how various encryption algorithms
are used for ESP.
6. Mention important features of Oakley algorithm. (10 marks)(June 2013)(Dec 2013)
-Oakley Key Determination Protocol: Oakley is a key exchange protocol based on the
Diffie-Hellman algorithm but providing added security. Oakley is generic in that it does not
dictate specific formats.
-Internet Security Association and Key Management Protocol (ISAKMP): ISAKMP provides
a framework for Internet key management and provides the specific protocol support,
including formats, for negotiation of security attributes. ISAKMP by itself does not dictate a
specific key exchange algorithm; rather, ISAKMP consists of a set of message types that
enable the use of a variety of key exchange algorithms. Oakley is the specific key exchange
algorithm mandated for use with the initial version of ISAKMP.
Oakley Key Determination Protocol:
Oakley is a refinement of the Diffie-Hellman key exchange algorithm. Recall that
Diffie-Hellman involves the following interaction between users A and B. There is prior
agreement on two global parameters: q, a large prime number; and α, a primitive root of q. A
selects a random integer XA as its private key, and transmits to B its public key
YA = α^XA mod q. Similarly, B selects a random integer XB as its private key and transmits
to A its public key YB = α^XB mod q. Each side can now compute the secret session key:
K = (YB)^XA mod q = (YA)^XB mod q = α^(XA·XB) mod q
The Diffie-Hellman algorithm has two attractive features:
-Secret keys are created only when needed. There is no need to store secret keys for a long
period of time, exposing them to increased vulnerability.
-The exchange requires no preexisting infrastructure other than an agreement on the global
parameters. However, there are a number of weaknesses to Diffie-Hellman.
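The Diffie-Hellman exchange underlying Oakley can be run end to end with Python's built-in modular exponentiation. The parameters below are tiny textbook values for illustration; real deployments use primes of 1024 bits or more:

```python
import random

q = 353   # small prime, illustration only
a = 3     # a primitive root of q

x_a = random.randrange(1, q - 1)   # A's private key XA
x_b = random.randrange(1, q - 1)   # B's private key XB

y_a = pow(a, x_a, q)   # A's public key YA = a^XA mod q, sent to B
y_b = pow(a, x_b, q)   # B's public key YB = a^XB mod q, sent to A

k_a = pow(y_b, x_a, q)   # A computes K = (YB)^XA mod q
k_b = pow(y_a, x_b, q)   # B computes K = (YA)^XB mod q
print(k_a == k_b)        # True: both sides derive a^(XA*XB) mod q
```

The weaknesses alluded to above (no authentication of the public values) are what Oakley's cookies, nonces, and authenticated exchanges address.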
-Server and client random: Byte sequences that are chosen by the server and client for each
connection.
-Server write MAC secret: The secret key used in MAC operations on data sent by the server.
-Client write MAC secret: The secret key used in MAC operations on data sent by the client.
-Server write key: The conventional encryption key for data encrypted by the server and
decrypted by the client.
-Client write key: The conventional encryption key for data encrypted by the client and
decrypted by the server.
-Initialization vectors: When a block cipher in CBC mode is used, an initialization vector
(IV) is maintained for each key. This field is first initialized by the SSL Handshake Protocol.
Thereafter the final ciphertext block from each record is preserved for use as the IV with the
following record.
2. Discuss SSL protocol stack. (10 marks)(Dec 2012)
SSL Protocol Stack
The SSL Record Protocol provides basic security services to various higher-layer protocols.
In particular, the Hypertext Transfer Protocol (HTTP), which provides the transfer service for
Web client/server interaction, can operate on top of SSL. Three higher-layer protocols are
defined as part of SSL: the Handshake Protocol, the Change Cipher Spec Protocol, and the
Alert Protocol. These SSL-specific protocols are used in the management of SSL exchanges.
Two important SSL concepts are the SSL session and the SSL connection, which are defined
as follows:
-Connection: A connection is a transport (in the OSI layering model definition) that provides
a suitable type of service. For SSL, such connections are peer-to-peer relationships.
-Session: An SSL session is an association between a client and a server. Sessions are created
by the Handshake Protocol. Sessions define a set of cryptographic security parameters, which
can be shared among multiple connections. Sessions are used to avoid the expensive
negotiation of new security parameters for each connection.
Between any pair of parties (applications such as HTTP on client and server), there may be
multiple secure connections. In theory, there may also be multiple simultaneous sessions
between parties, but this feature is not used in practice. There are actually a number of states
associated with each session. Once a session is established, there is a current operating state
for both read and write (i.e., receive and send). In addition, during the Handshake Protocol,
pending read and write states are created. Upon successful conclusion of the Handshake
Protocol, the pending states become the current states.
3. What are the services provided by SSL record protocol? (10 marks)(Dec 2012)(6 marks)
(Dec 2014)
The Change Cipher Spec Protocol is one of the three SSL-specific protocols that use the SSL
Record Protocol, and it is the simplest. This protocol consists of a single message (Figure
1.5a), which consists of a single byte with the value 1. The sole purpose of this message is to
cause the pending state to be copied into the current state, which updates the cipher suite to
be used on this connection.
Alert Protocol:
The Alert Protocol is used to convey SSL-related alerts to the peer entity. As with other
applications that use SSL, alert messages are compressed and encrypted, as specified by the
current state. Each message in this protocol consists of two bytes (Figure 1.5b). The first
byte takes the value warning(1) or fatal(2) to convey the severity of the message. If the level
is fatal, SSL immediately terminates the connection. Other connections on the same session
may continue, but no new connections on this session may be established. The second byte
contains a code that indicates the specific alert. First, we list those alerts that are always fatal.
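The two-byte alert message structure (severity byte, then code byte) can be sketched directly. The code values shown match the standard SSL/TLS alert numbers; the small lookup tables are an illustrative subset, not the full registry:

```python
# Parse a two-byte SSL alert: first byte is severity, second is the alert code.
SEVERITY = {1: "warning", 2: "fatal"}
ALERTS = {0: "close_notify", 42: "bad_certificate", 46: "certificate_unknown"}

def parse_alert(msg: bytes):
    level, code = msg[0], msg[1]
    return SEVERITY[level], ALERTS.get(code, "unknown_alert")

print(parse_alert(bytes([1, 0])))    # ('warning', 'close_notify')
print(parse_alert(bytes([2, 42])))   # ('fatal', 'bad_certificate')
```

A receiver seeing severity 2 would tear down the connection immediately, as described above.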
-close_notify: Notifies the recipient that the sender will not send any more messages on this
connection. Each party is required to send a close_notify alert before closing the write side of
a connection.
-bad_certificate: A received certificate was corrupt (e.g., contained a signature that did not
verify).
-no_certificate: May be sent in response to a certificate request if no appropriate certificate is
available.
-certificate_unknown: Some other unspecified issue arose in processing the certificate,
rendering it unacceptable.
5. Explain SSL handshake protocol with a neat diagram. (5 marks)(Jun 2013)(7 marks)
(Dec 2013)
Three higher-layer protocols are defined as part of SSL: the Handshake Protocol, the Change
Cipher Spec Protocol, and the Alert Protocol. These SSL-specific protocols are used in the
management of SSL exchanges. Two important SSL concepts are the SSL session and the
SSL connection.
-Session: An SSL session is an association between a client and a server. Sessions are created
by the Handshake Protocol. Sessions define a set of cryptographic security parameters, which
can be shared among multiple connections. Sessions are used to avoid the expensive
negotiation of new security parameters for each connection.
Between any pair of parties (applications such as HTTP on client and server), there may be
multiple secure connections. In theory, there may also be multiple simultaneous sessions
between parties, but this feature is not used in practice. There are actually a number of states
associated with each session. Once a session is established, there is a current operating state
for both read and write (i.e., receive and send). In addition, during the Handshake Protocol,
pending read and write states are created. Upon successful conclusion of the Handshake
Protocol, the pending states become the current states.
6. Explain the construction of dual signature in SET. Also show its verification by the
SET Overview:
A good way to begin our discussion of SET is to look at the business requirements for SET,
its key features, and the participants in SET transactions.
Requirements:
The SET specification lists the following business requirements for secure payment
processing with credit cards over the Internet and other networks:
-Provide confidentiality of payment and ordering information: It is necessary to assure
cardholders that this information is safe and accessible only to the intended recipient.
Confidentiality also reduces the risk of fraud by either party to the transaction or by malicious
third parties.
-Ensure the integrity of all transmitted data: That is, ensure that no changes in content occur
during transmission of SET messages. Digital signatures are used to provide integrity.
-Provide authentication that a cardholder is a legitimate user of a credit card account: A
mechanism that links a cardholder to a specific account number reduces the incidence of
fraud and the overall cost of payment processing. Digital signatures and certificates are used
for this purpose.
-Provide authentication that a merchant can accept credit card transactions through its
relationship with a financial institution: This is the complement to the preceding
requirement. Cardholders need to be able to identify merchants with whom they can conduct
secure transactions.
-Ensure the use of the best security practices and system design techniques to protect all
legitimate parties in a transaction.
-Create a protocol that neither depends on transport security mechanisms nor prevents their
use: SET can securely operate over a "raw" TCP/IP stack. However, SET does not interfere
with the use of other security mechanisms, such as IPSec and SSL/TLS.
-Facilitate and encourage interoperability among software and network providers: The SET
protocols and formats are independent of hardware platform, operating system, and Web
software.
To meet the requirements just outlined, SET incorporates the following features:
-Confidentiality of information: Cardholder account and payment information is secured as it
travels across the network. An interesting and important feature of SET is that it prevents the
merchant from learning the cardholder's credit card number; this is only provided to the
issuing bank. Conventional encryption by DES is used to provide confidentiality.
7. List out the key features of secure electronic transaction and explain in detail. .(5
marks)(Jun 2013)(10 marks)(Jun 2013)(6 marks)(Dec 2014)
Secure Electronic Transaction:
SET is an open encryption and security specification designed to protect credit card
transactions on the Internet. The current version, SETv1, emerged from a call for security
standards by MasterCard and Visa in February 1996. A wide range of companies were
involved in developing the initial specification, including IBM, Microsoft, Netscape, RSA,
Terisa, and Verisign, beginning in 1996. SET is not itself a payment system. Rather it is a set
of security protocols and formats that enables users to employ the existing credit card
payment infrastructure on an open network, such as the Internet, in a secure fashion. In
essence, SET provides three services:
-Provides a secure communications channel among all parties involved in a transaction
-Provides trust by the use of X.509v3 digital certificates
-Ensures privacy because the information is only available to parties in a transaction
when and where necessary.
SET Overview:
A good way to begin our discussion of SET is to look at the business requirements for SET,
its key features, and the participants in SET transactions.
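The dual signature asked about above links the order information (OI) and payment information (PI): the cardholder signs the hash of the two concatenated hashes, so each recipient can verify the link while seeing only its own half. The sketch below shows just the hash chaining with hypothetical OI/PI strings; the final RSA signing step over the digest is omitted:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()   # SET specifies SHA-1

oi = b"order: 1 widget, USD 10"          # hypothetical order information
pi = b"card 4111111111111111, USD 10"    # hypothetical payment information

# Payment Order Message Digest: H(H(PI) || H(OI)). The dual signature is
# this digest encrypted with the customer's private key.
pomd = h(h(pi) + h(oi))

# The merchant, given OI and H(PI) (but never PI itself), recomputes the
# digest and checks it against the one recovered from the dual signature.
merchant_view = h(h(pi) + h(oi))
print(merchant_view == pomd)   # True
```

The bank does the symmetric check with PI and H(OI), so neither party needs the other's data to verify the same signature.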
UNIT 1
Introduction
TOPICS
1. Planning for Security
2. Introduction
6. Contingency plan
Without policy, blueprints, and planning, the organization will be unable to meet the information
security needs of the various communities of interest.
Organizations should undertake at least some planning: strategic planning to manage the
allocation of resources, and contingency planning to prepare for the uncertainties of the business
environment.
This blueprint for the organization's information security efforts can be realized only if it operates
properly.
Management from all communities of interest must consider policies as the basis for all information
security efforts.
Policies direct how issues should be addressed and technologies used. Policies do not specify the
proper operation of equipment or software; this information should be placed in the standards,
procedures, and practices.
Security policies are the least expensive control to execute, but the most difficult to implement.
Shaping policy is difficult because a policy must:
_ Never conflict with laws
_ Stand up in court, if challenged
EISP
The EISP is based on and directly supports the mission, vision, and direction of the organization,
and sets the strategic direction, scope, and tone for all security efforts within the organization.
The EISP is an executive-level document, usually drafted by, or in cooperation with, the Chief
Information Officer (CIO) of the organization, and is usually 2 to 10 pages long.
The EISP guides the development, implementation, and management of the security program. It
defines the purpose, scope, constraints, and applicability of the security program in the organization.
It also assigns responsibilities for the various areas of security, including systems administration,
maintenance of the information security policies, and the practices and responsibilities of the users.
It addresses general compliance, to ensure meeting the requirements to establish a program and the
responsibilities assigned to the various organizational components. The EISP does not usually require
continuous modification, unless there is a change in the strategic direction of the organization.

The independent document approach to creating and managing ISSPs typically has a scattershot
effect: each department responsible for a particular application of technology creates a policy
governing its use, management, and control. This approach may fail to cover all of the necessary
issues and can lead to poor policy distribution, management, and enforcement.

The single comprehensive policy approach is centrally managed and controlled. With formal
procedures for the management of ISSPs in place, the comprehensive policy approach establishes
guidelines for overall coverage of necessary issues and clearly identifies processes for their
dissemination and enforcement.
As various technologies and processes are implemented, certain guidelines are needed to use them
properly. The ISSP addresses specific areas of technology, such as electronic mail. Usually, these
policies are developed by those responsible for managing the information technology resources.

Three approaches are used to create and manage ISSPs:
- Independent ISSP documents, each tailored to a specific issue.
- A single comprehensive ISSP document covering all issues.
- A modular ISSP document that unifies policy creation and administration, while maintaining each
specific issue's requirements.

The optimal balance between the independent and comprehensive ISSP approaches is the modular
approach. It is also centrally managed and controlled, but tailored to the individual technology
issues. The modular approach provides a balance between issue orientation and policy management:
the policies created with this approach comprise individual modules, each created and updated by
individuals responsible for the issues addressed. These individuals report to a central policy
administration group that incorporates specific issues into an overall comprehensive policy.

While issue-specific policies are formalized as written documents, distributed to users, and agreed to
in writing, SysSPs are frequently codified as standards and procedures to be used when configuring
or maintaining systems.

Access control lists (ACLs) consist of the access control lists, matrices, and capability tables that
govern the rights and privileges of users. An ACL is a list of access rights used by file storage
systems, object brokers, or other network communications devices to determine which individuals or
groups may access an object that it controls. (Object brokers are system components that handle
message requests between the software components of a system.) A similar list, which is also
associated with users and groups, is called a capability table. This specifies which subjects and
objects a user or group can access. Capability tables are frequently complex matrices, rather than
simple lists or tables.

Configuration rules comprise the specific configuration codes entered into security systems to
guide the execution of the system when information is passing through it.

ACL Policies
ACLs allow configuration to restrict access from anyone and anywhere. Restrictions can be set for a
particular user, computer, time, or duration. In some systems, these lists of ACL rules are known as
capability tables, user profiles, or user policies. They specify what the user can and cannot do on the
resources within that system.

ACLs regulate:
- Who can use the system
- What authorized users can access
- When authorized users can access the system
- Where authorized users can access the system from

The WHO of ACL access may be determined by an individual person's identity or by that person's
membership in a group.

Determining WHAT users are permitted to access can include restrictions on the various attributes
of the system resources, such as the type of resources (printers, files, communication devices, or
applications). Access is controlled by adjusting the resource privileges for the person or group to one
of Read, Write, Create, Modify, Delete, Compare, or Copy for the specific resource.

To control WHEN access is allowed, some organizations choose to implement time-of-day and/or
day-of-week restrictions for some resources.

For the control of WHERE resources can be accessed from, many network-connected assets have
restrictions placed on them to block remote usage, and also have some levels of access that are
restricted to locally connected users.

When these various ACL options are applied cumulatively, the organization has the ability to govern
how, when, where, and by whom its resources can be used.

Rule Policies
Rule policies are more specific to the operation of a system than ACLs. Many security systems
require specific configuration scripts telling the systems what actions to perform on each set of
information they process. Examples of these systems include firewalls, intrusion detection systems,
and proxy servers. Fig 6.5 shows how network security policy has been implemented by Check Point
in a firewall rule set.

Policy Management
Policies are living documents that must be managed and nurtured, and are constantly changing and
growing. Documents must be properly disseminated (distributed, read, understood, and agreed to)
and managed. Special considerations should be made for organizations undergoing mergers,
takeovers, and partnerships. To remain viable, policies need a responsible individual, a schedule of
reviews, and a method for making recommendations for reviews.

Responsible Individual
The policy champion and manager is called the policy administrator. The policy administrator is a
mid-level staff member and is responsible for the creation, revision, distribution, and storage of the
policy. It is good practice to actively solicit input both from the technically adept information
security experts and from the business-focused managers in each community of interest when making
revisions to security policies. The policy administrator must be clearly identified on the policy
document as the primary point of contact for additional information and suggested revisions. This
individual should also notify all affected members of the organization when the policy is modified.

Schedule of Reviews
Policies are effective only if they are periodically reviewed for currency and accuracy and modified
accordingly. Policies that are not kept current can become liabilities for the organization, as outdated
rules are enforced (or not) and new requirements are ignored. The organization must demonstrate,
with due diligence, that it is actively trying to meet the requirements of the market in which it
operates. A properly organized schedule of reviews should be defined (at least annually) and
published as part of the document. To facilitate policy reviews, the policy manager should implement
a mechanism by which individuals can comfortably make recommendations for revisions. Once the
policy has come up for review, all comments should be examined and management-approved changes
implemented.
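As a toy illustration of how the cumulative WHO, WHAT, WHEN, and WHERE restrictions described above might combine, here is a minimal sketch (the groups, privileges, time windows, and locations are invented for illustration; no specific product's ACL format is implied):

```python
from datetime import time

# Hypothetical ACL entries: WHO may do WHAT to a resource, WHEN, and from WHERE.
ACL = [
    # (who, privileges, (start, end) allowed hours, allowed locations)
    ("payroll_clerks", {"Read", "Write"},    (time(8, 0), time(18, 0)),  {"local"}),
    ("auditors",       {"Read", "Compare"},  (time(0, 0), time(23, 59)), {"local", "vpn"}),
]

def is_allowed(group, privilege, now, location):
    """Apply the WHO, WHAT, WHEN, and WHERE checks cumulatively."""
    for who, privs, (start, end), places in ACL:
        if (group == who                 # WHO: identity or group membership
                and privilege in privs   # WHAT: Read/Write/Create/Modify/...
                and start <= now <= end  # WHEN: time-of-day window
                and location in places): # WHERE: local vs. remote access
            return True
    return False

print(is_allowed("payroll_clerks", "Write", time(9, 30), "local"))  # True
print(is_allowed("payroll_clerks", "Write", time(22, 0), "local"))  # False: outside hours
print(is_allowed("auditors", "Write", time(9, 30), "vpn"))          # False: no Write right
```

A request is granted only when every dimension matches, which is exactly the cumulative effect the text describes.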
Information Classification
The same protection scheme created to prevent production data from accidental release to the wrong
party should be applied to policies, in order to keep them freely available, but only within the
organization. In today's open office environments, it may be beneficial to implement a clean desk
policy. A clean desk policy stipulates that at the end of the business day, all classified information
must be properly stored and secured.

Systems Design
At this point in the Security SDLC, the analysis phase is complete and the design phase begins.

The security blueprint is a more detailed version of the security framework, which is an outline of
the overall information security strategy for the organization and the roadmap for planned changes to
the information security environment of the organization. The blueprint should specify the tasks to be
accomplished and the order in which they are to be realized, and serve as a scalable, upgradeable, and
comprehensive plan for the organization's information security needs. This security blueprint is the
basis for the design, selection, and implementation of all elements of the security program. Setting
priorities can follow the recommendations of published sources or published standards; the blueprint
is then used to plan the tasks to be accomplished and the order in which to proceed.

One approach is to adapt or adopt a published model or framework for information security. A
framework is the basic skeletal structure within which additional detailed planning of the blueprint
can be placed as it is developed or refined. This framework can be an outline of the steps involved in
designing and later implementing information security in the organization. There are a number of
published information security frameworks, including those from government sources. Experience
teaches us that what works well for one organization may not precisely fit another; therefore, each
implementation may need modification or even redesign before it suits the needs of a particular
organization. Because each information security environment is unique, the security team may need
to modify or adapt pieces from several frameworks. The NIST documents have been broadly
reviewed by government and industry professionals, and are among the references cited by the
federal government when it decided not to select the ISO/IEC 17799 standards. Their coverage
includes, for example:

Table of Contents
1. Introduction
1.1 Principles
1.2 Practices
1.4 Background
1.5 Audience
NIST SP 800-14
NIST SP 800-14, Generally Accepted Principles and Practices for Securing Information Technology
Systems, provides best practices and security principles that can direct the development of a security
blueprint; 33 principles are enumerated. Excerpts from its table of contents include:
3.5.1 Staffing
3.7.2 Characteristics
3.11.1 Identification
3.11.2 Authentication
3.11.3 Passwords
3.14 Cryptography

Key principles include the following:

Security supports the mission of the organization. Security enhances these areas by supporting the
planning function when information security policies provide input into organizational initiatives.

Security should be cost-effective. The costs of information security should be considered part of the
cost of doing business, much like the costs of computers, networks, and voice communications
systems. These are not profit-generating areas of the organization and may not lead to competitive
advantages. Information security should justify its own costs. Security measures that do not justify
cost-benefit levels must have a strong business case (such as a legal requirement).

Systems owners have security responsibilities outside their own organizations. Whenever systems
store and use information from customers, patients, clients, partners, and others, the security of this
information becomes a serious responsibility for the owner of the systems.

Security responsibilities and accountability should be made explicit. Policy documents should clearly
identify the security responsibilities of users, administrators, and managers. To be legally binding,
this information must be documented, disseminated, read, understood, and agreed to.
A sample security policy document outline (the numbering matches the IETF's RFC 2196, Site
Security Handbook):
1. Introduction
1.2 Audience
1.3 Definitions
2. Security Policies
3. Architecture
3.1 Objectives
3.3 Firewalls
4. Security Services and Procedures
4.1 Authentication
4.2 Confidentiality
4.3 Integrity
4.4 Authorization
4.5 Access
4.6 Auditing
5.6 Responsibilities
6. Ongoing Activities
7. Tools and Locations
9. References

Regarding the law, the organization should also detail the relevance of laws to issue-specific security
policies. These details should be distributed to users, administrators, and managers to assist them in
complying with the policies.

All stakeholders (information security management and professionals, as well as the users, managers,
administrators, and other stakeholders of the broader organization) should participate in the process
of developing a comprehensive information security program.

Security is an ongoing process. It cannot be implemented and then expected to function
independently without constant maintenance and change. Information security that is implemented
and then ignored is considered negligent, the organization having not demonstrated due diligence. To
be effective against a constantly shifting set of threats and a constantly changing user base,
continuous analysis of threats, assets, and controls must be conducted and a new blueprint developed.
Only through preparation, design, implementation, eternal vigilance, and ongoing maintenance can
information security be made effective.

There are a number of factors that influence the implementation and maintenance of security: legal
demands, shareholder requirements, and even business practices affect the implementation of
security controls. For example, security professionals generally prefer to isolate information assets
from the Internet, which is the leading avenue of threats to the assets, but the business requirements
of the organization may preclude this control measure.
VISA International promotes strong security measures and has security guidelines. It has developed
two important documents that improve and regulate its information systems: the Security Assessment
Process and the Agreed Upon Procedures. Both documents provide specific instructions on the use of
the VISA Cardholder Information Security Program. The Security Assessment Process document is a
series of recommendations for the detailed examination of an organization's systems, with the
eventual goal of integration into the VISA system. The Agreed Upon Procedures document outlines
the policies and technologies required for security systems that carry the sensitive cardholder
information to and from VISA systems. Using the two documents, a security team can develop a
sound strategy for the design of good security architecture. The only downside to this approach is its
very specific focus on systems that can or do integrate with VISA's systems.

Baselining and best practices are solid methods for collecting security practices, but they can have
the drawback of providing less detail than would a complete methodology. It is possible, however, to
gain information by baselining and using best practices and thus work backwards to an effective
design. The Federal Agency Security Practices site (fasp.csrc.nist.gov) is designed to provide best
practices for public agencies, but these policies can be adapted easily to private institutions. The
documents found on this site include specific examples of key policies and planning documents,
implementation strategies for key technologies, and position descriptions for key security personnel.
Of particular value is the section on program management. Best-practice guidance is also available
from CERT (www.cert.org).

NIST SP 800-26, Security Self-Assessment Guide for Information Technology Systems
NIST SP 800-26 structures its self-assessment around control groups such as:
Management Controls
- Risk Management
Operational Controls
- Personnel Security
- Physical Security
- Contingency Planning
- Data Integrity
- Documentation
Technical Controls
- Audit Trails

The NIST SP 800-26 framework of security includes philosophical components of the Human
Firewall Project, which maintains that people, not technology, are the primary defenders of
information assets in an information security program, and are uniquely responsible for their
protection.

Planning for Security: Hybrid Framework
This section presents a hybrid framework, or a general outline of a methodology, that organizations
can use to create a security system blueprint as they fill in the implementation details to address their
specific needs.
Sphere of Use
The sphere of security (Fig 6.16) is the foundation of the security framework. The spheres of
security illustrate how information is under attack from a variety of sources. Information, as the most
important asset in the model, is at the center of the sphere. Information is always at risk from attacks
through the people and computer systems that have access to it.

The sphere of use illustrates the ways in which people access information; for example, people read
hard copies of documents and can also access information through systems. Information, at the core
of the sphere, is available for access by members of the organization and by other computer-based
systems. To gain access to the computer systems, one must either directly access the computer
systems or go through a network connection. To gain access to the network, one must either directly
access the network or go through an Internet connection. Networks and the Internet represent indirect
threats, as exemplified by the fact that a person attempting to access information from the Internet
must first go through the local networks and then access the systems that contain the information.
Generally speaking, the concept of the sphere is to represent the 360 degrees of security necessary to
protect information at all times.

The sphere of protection overlays each of the levels of the sphere of use with a layer of security,
protecting that layer from direct or indirect use through the next layer. The figure illustrates that
between each layer of the sphere of use there must exist a layer of protection; each shaded band is a
layer of protection and control. For example, the items labeled Policy & law and Education &
training are located between people and the information. Controls are also implemented between
systems and the information, between networks and the computer systems, and between the Internet
and the internal networks. Each of the layers constitutes controls and safeguards that are put into
place to protect the information and information assets. The order of the controls within the layers
follows a prioritization scheme. But before any controls and safeguards are put into place, the
policies defining the management philosophies that guide the security process must already be in
place.

However, because people can directly access each ring, as well as the information at the core of the
model, the side of the sphere of protection that attempts to control access by relying on people
requires a different approach to security than the side that relies on technology. The people must
become a layer of security, a human firewall that protects the information from unauthorized access
and use. The members of the organization must become a safeguard that is effectively trained,
implemented, and maintained, or else they too will represent a threat to the information. While the
design and implementation of the people layer and the technology layer overlap, each requires its
own controls.

As illustrated in the sphere of protection, a variety of controls can be used to protect the information.
The items of control shown in the figure are not intended to be comprehensive; rather, they illustrate
individual safeguards that can protect the various systems that are located closer to the center of the
sphere. These controls fall into three categories:
Managerial Controls
Operational Controls
Technical Controls

Managerial Controls
Managerial controls cover security processes that are designed by the strategic planners and
performed by the security administration of the organization. Management controls address the
design and implementation of the security planning process and security program management.
Management controls further describe the necessity and scope of legal compliance and the
maintenance of the entire security life cycle.

Operational Controls
Operational controls deal with the operational functionality of security in the organization. They
include management functions and lower-level planning, such as disaster recovery and incident
response planning. Operational controls also address personnel security, physical security, and the
protection of production inputs and outputs. In addition, operational controls guide the development
of education, training, and awareness programs for users, administrators, and management. Finally,
they address hardware and software systems maintenance and the integrity of data.

Technical Controls
Technical controls address those tactical and technical issues related to designing and implementing
security in the organization, as well as issues related to examining and selecting the technologies
appropriate to protecting information. While operational controls address the specifics of technology
selection and the acquisition of certain technical components, technical controls include logical
access controls, such as identification, authentication, authorization, and accountability. Technical
controls also address the development and implementation of audit trails for accountability. In
addition, these controls cover cryptography to protect information in storage and transit. Finally,
they include the classification of assets and users, to facilitate the authorization levels needed.

Summary
Using the three sets of controls just described, the organization should be able to specify controls to
cover the entire spectrum of safeguards, from strategic to tactical, and from managerial to technical.

The Framework
Management Controls
- Program Management
- Risk Management
- Legal Compliance
Operational Controls
- Contingency Planning
- Security ETA (education, training, and awareness)
- Personnel Security
- Physical Security
- Data Integrity
Technical Controls
- Audit Trails

Design of Security Architecture
To inform the discussion of information security program architecture and to illustrate industry best
practices, the following sections outline a few key security architectural components. Many of these
components are examined in an overview, because being able to assess whether a framework and/or
blueprint are on target to meet an organization's needs requires a working knowledge of these
security architecture components.

Defense in Depth
One of the basic tenets of security architectures is the implementation of security in layers. Defense
in depth requires that the organization establish sufficient security controls and safeguards so that an
intruder faces multiple layers of controls. These layers of control can be organized into policy,
training and education, and technology. While policy itself may not prevent attacks, it certainly
prepares the organization to handle them. Technology is also implemented in layers, with detection
equipment working in tandem with reaction technology. Implementing multiple types of technology,
and thereby preventing the failure of one system from compromising the security of information, is
referred to as redundancy.
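The defense-in-depth idea can be sketched as a chain of independent checks, where a request must pass every layer, so that the failure of a single control does not by itself expose the protected information. The layer functions below are hypothetical stand-ins for perimeter, detection, and host controls:

```python
# Illustrative only: each function stands in for one layer of control.
def perimeter_firewall(req):
    return req["src"] != "blocked_net"          # perimeter filtering

def network_ids(req):
    return not req.get("matches_signature", False)  # detection layer

def host_access_control(req):
    return req["user"] in {"alice", "bob"}      # host-level ACL

LAYERS = [perimeter_firewall, network_ids, host_access_control]

def admit(request):
    """Grant access only if every layer in the stack admits the request."""
    return all(layer(request) for layer in LAYERS)

ok = {"src": "branch_office", "user": "alice"}
bad = {"src": "branch_office", "user": "mallory", "matches_signature": True}
print(admit(ok))   # True
print(admit(bad))  # False: stopped by the IDS and the host ACL independently
```

Note how the second request would be rejected even if one of the two failing layers were misconfigured, which is the redundancy the text describes.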
Fig 6.18 illustrates the concept of building controls in multiple, sometimes redundant, layers. The
figure shows the use of firewalls and intrusion detection systems (IDSs) that use both packet-level
rules (shown as the header in the diagram) and data content analysis (shown as 0100101000 in the
diagram). Redundancy can be implemented at a number of points throughout the security
architecture, such as in firewalls, proxy servers, and access controls.

Security Perimeter
A perimeter is the boundary of an area. A security perimeter defines the edge between the outer limit
of an organization's security and the beginning of the outside world. A security perimeter is the first
level of security that protects all internal systems from outside threats. Unfortunately, the perimeter
does not protect against internal attacks from employee threats or on-site physical threats. There can
be both an electronic security perimeter, usually at the organization's exterior network or Internet
connection, and a physical security perimeter, usually at the gate to the organization's offices.
Security perimeters can effectively be implemented as multiple technologies that safeguard the
protected information. Within security perimeters the organization can establish security domains,
or areas of trust within which users can freely communicate. The assumption is that if individuals
have access to one system within a security domain, they have access to all systems within that
particular domain. Thus, firewalls can be used to create security perimeters like the one shown in
Fig 6.19.

DMZs
The DMZ is a no-man's land between the inside and outside networks; it is also where some
organizations place web servers. These servers provide access to organizational web pages without
allowing web requests to enter the interior networks.

Proxy Servers
An alternative approach to the strategies of using a firewall subnet or a DMZ is to use a proxy
server, or proxy firewall. When deployed, a proxy server is configured to look like a web server and
is assigned the domain name that users would expect to find for the system and its services. When an
outside client requests a particular web page, the proxy server receives the request as if it were the
subject of the request, then asks for the same information from the true web server (acting as a proxy
for the requestor), and then responds to the request as a proxy for the true web server. This gives
requestors the response they need without allowing them to gain direct access to the internal and
more sensitive server. The proxy server may be hardened to become a bastion host placed in the
public area of the network, or it might be placed within the firewall subnet or the DMZ for added
protection. For more frequently accessed web pages, proxy servers can cache, or temporarily store,
the page, and thus are sometimes called cache servers.
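The fetch-then-cache behavior of such a proxy can be sketched as follows (the "true web server" is stubbed out with a dictionary; all names and paths are invented for illustration):

```python
# Toy caching proxy: answer from the cache when possible, otherwise ask
# the true web server (stubbed here) on the requestor's behalf.
ORIGIN = {"/index.html": "<html>welcome</html>"}  # stand-in for the real server

cache = {}

def fetch_from_origin(path):
    """Stand-in for the proxy's own request to the true web server."""
    return ORIGIN.get(path, "404 not found")

def proxy_request(path):
    if path not in cache:              # miss: act as proxy for the requestor
        cache[path] = fetch_from_origin(path)
    return cache[path]                 # hit: serve the temporarily stored copy

print(proxy_request("/index.html"))  # first request goes to the origin
print(proxy_request("/index.html"))  # repeat request is served from the cache
```

The requestor never touches the origin directly; the proxy answers on its behalf, and frequently accessed pages are served from the stored copy.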
FIREWALLS
A firewall is a device that selectively discriminates against information flowing into or out of the
organization. It prevents information from entering or exiting the defined area based on a set of
predefined rules. Firewalls are usually placed on the security perimeter, just behind or as part of a
gateway router. While the gateway router is primarily designed to connect the organization's systems
to the outside world, it too can be used as the front-line defense against attacks, as it can be
configured to allow only a few types of protocols to enter. There are a number of types of firewalls,
which are usually classified by the level of information they can filter: packet filtering, stateful
packet filtering, proxy, or application level. A firewall can be a single device or a firewall subnet,
which consists of multiple firewalls creating a buffer between the outside and inside networks.

In an effort to detect unauthorized activity within the inner network or on individual machines, an
organization may wish to implement intrusion detection systems (IDSs). IDSs come in two versions:
host-based and network-based. Host-based IDSs are usually installed on the machines they protect to
monitor the status of various files stored on those machines. The IDS learns the configuration of the
system, assigns priorities to various files depending on their value, and alerts the administrator when
suspicious changes occur.
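The file-monitoring behavior of a host-based IDS can be caricatured with content digests: record a baseline of file hashes in a known-good state, then flag any monitored file whose digest changes (the paths and file contents below are invented for illustration):

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Baseline recorded while the system is in a known-good state.
baseline = {
    "/etc/passwd": digest(b"root:x:0:0"),
    "/bin/login":  digest(b"original binary"),
}

def changed_files(current: dict) -> list:
    """Compare current file contents against the recorded baseline digests."""
    return [path for path, data in current.items()
            if digest(data) != baseline.get(path)]

now = {"/etc/passwd": b"root:x:0:0", "/bin/login": b"trojaned binary"}
print(changed_files(now))  # ['/bin/login']: a monitored file was modified
```

In a real product the baseline would also carry per-file priorities, so that a change to a high-value file raises a more urgent alert than a change to a routine one.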
Network-based IDSs look at patterns of network traffic and attempt to detect unusual activity based
on previous baselines. This could include packets coming into the organization's networks with
addresses from machines already within the organization (IP spoofing). It could also include high
volumes of traffic going to outside addresses (as in a denial-of-service attack). Both host-based and
network-based IDSs require a database of previous activity. In the case of host-based IDSs, the
system can create a database of file attributes, as well as maintain a catalog of common attack
signatures. Network-based IDSs can use a similar catalog of common attack signatures and develop
databases of normal activity for comparison with future activity. Host-based and network-based IDSs
can be used together for the maximum level of security for a particular network and set of systems.
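The baseline comparison a network-based IDS performs can be caricatured as a simple statistical test against previously observed traffic; the packets-per-minute counts and the threshold below are invented for illustration:

```python
from statistics import mean, stdev

# Hypothetical packets-per-minute counts observed during normal operation.
baseline = [110, 95, 102, 130, 99, 105, 120, 98]
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(packets_per_minute, k=3.0):
    """Flag traffic deviating more than k standard deviations from baseline."""
    return abs(packets_per_minute - mu) > k * sigma

print(is_anomalous(115))   # within normal variation
print(is_anomalous(5000))  # flagged: volume consistent with a flood attack
```

Real systems combine such anomaly thresholds with a catalog of known attack signatures, as the text notes.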
Summary
After your organization has selected a model, created a framework, and fleshed it out into a blueprint
for implementation, you should make sure your planning includes the steps needed to create a
training and awareness program that increases information security knowledge and visibility, and
enables people across the organization to work in secure ways that enhance the safety of the
organization's information assets.

Contingency Planning
Learning Objectives
- Grasp the reasons for and against involving law enforcement officials in incident responses, and
when such involvement is required.

Continuity Strategy
Managers must provide strategic planning to assure continuous information systems availability.
Thus, managers from each community of interest within the organization must be ready to act when
a successful attack occurs. An incident is any clearly identified attack on the organization's
information assets that would threaten the assets' confidentiality, integrity, or availability.

Components of CP
Contingency planning comprises several subordinate plans:
- An Incident Response Plan (IRP) deals with the identification, classification, response to, and
recovery from an incident. IRP focuses on immediate response, but if the attack escalates or is
disastrous, the process changes to disaster recovery and BCP.
- A Disaster Recovery Plan (DRP) deals with the preparation for and recovery from a disaster,
whether natural or man-made. DRP typically focuses on restoring systems after disasters occur, and
as such is closely associated with BCP.
- A Business Continuity Plan (BCP) ensures that critical business functions continue if a catastrophic
incident or disaster occurs. BCP occurs concurrently with DRP when the damage is major or long
term, requiring more than simple restoration of information and information resources.

The CP team includes:
i. Champion: The CP project must have a high-level manager to support, promote, and endorse the
findings of the project.
ii. Project Manager: A champion provides the strategic vision and the linkage to the power structure
of the organization, but does not manage the project; a project manager is needed to organize and
lead the project to completion.
iii. Team members: The team members for this project should be the managers, or their
representatives, from the various communities of interest: business, information technology, and
information security.
Business Impact Analysis (BIA)
The first phase in the development of the CP process is the Business Impact Analysis. A BIA is an
investigation and assessment of the impact that various attacks can have on the organization. Begin
with the BIA: if the attack succeeds, what do we do then? Obviously the organization's security team
does everything in its power to stop attacks, but some attacks, such as natural disasters, deviations
from service providers, and acts of human failure or error, whether deliberate or accidental, may be
unstoppable. The BIA therefore adds insight into what the organization must do to respond to an
attack, minimize the damage from the attack, recover from the effects, and return to normal
operations. It begins with the prioritized list of threats and vulnerabilities identified in the risk
management process.

The attack profile is the detailed description of activities that occur during an attack. An attack
profile must be developed for every serious threat the organization faces, natural or man-made,
deliberate or accidental.

Business Unit Analysis
The second major task within the BIA is the analysis and prioritization of business functions within
the organization. This series of tasks serves to identify and prioritize the functions within the
organization's units (departments, sections, divisions, groups, or other such units) to determine
which are most vital to the continued operation of the organization.

Next, create a series of scenarios depicting the impact a successful attack from each threat could
have on each prioritized functional area.

Potential Damage Assessment
From the attack success scenarios developed, the BIA planning team must estimate the cost of the
best, worst, and most likely outcomes of each attack scenario. Once potential damage has been
assessed, a subordinate plan must be developed or identified. Subordinate plans will take into
account the identification of, reaction to, and recovery from each attack scenario.

Incident Response (IR)
Incident response planning covers the identification of, classification of, and response to an
incident. An incident is an attack against an information asset that poses a clear threat to the
confidentiality, integrity, or availability of information resources. Attacks are classified as incidents
only if they have the following characteristics:
1) They are directed against information assets.
2) They have a realistic chance of success.
3) They could threaten the confidentiality, integrity, or availability of information resources.
If an action that threatens information occurs and is completed, the action is classified as an
incident. IR is therefore the set of activities taken to plan for, detect, and correct the impact of an
incident on the organization's information assets. IR is more reactive than proactive, and consists of
four phases:
1. Planning
2. Detection
3. Reaction
4. Recovery
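The business unit analysis and potential damage assessment steps above amount to ranking functions by estimated impact. A toy ranking, with all criticality scores and outage costs invented for illustration, might look like:

```python
# Invented BIA inputs: (business function, criticality 1-5, outage cost per day)
functions = [
    ("order processing", 5, 120_000),
    ("marketing site",   2,  10_000),
    ("payroll",          4,  40_000),
]

def bia_rank(items):
    """Prioritize business functions by criticality, then by outage cost."""
    return sorted(items, key=lambda f: (f[1], f[2]), reverse=True)

for name, crit, cost in bia_rank(functions):
    print(f"{name}: criticality={crit}, outage cost/day=${cost:,}")
# Most vital functions appear first, guiding which subordinate plans to build.
```

The ordering tells the planning team which units most urgently need identification, reaction, and recovery plans.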
Incident Planning
Planning for incidents is the first step in the overall process of incident response planning. Planning
for an incident requires a detailed understanding of the scenarios developed for the BIA. With this
information in hand, the planning team can develop a series of predefined responses that guide the
organization's incident response (IR) team and information security staff. The IR team consists of
those individuals who must be present to handle the systems and functional areas that can minimize
the impact of an incident as it takes place. The IR team verifies the threat, determines the appropriate
response, and co-ordinates the actions necessary to deal with the situation. Two rules of thumb apply
to incident planning: keep it simple, and never assume.

# Storage
The IR plan must be organized in such a way as to support, rather than impede, quick and easy
access to required information, for example by creating a directory of incidents with tabbed sections
for each incident. To respond to an incident, the responder simply opens the binder, flips to the
appropriate section, and follows the clearly outlined procedures. The document could be stored
adjacent to the administrator's workstation, or in a bookcase in the server room. Note that the
information in the IR plan should be protected as sensitive information: if attackers gain knowledge
of how a company responds to a particular incident, they can improve their chances of success.

# Testing
The more you sweat in training, the less you bleed in combat. Strategies for testing contingency
plans include:
1. Checklist: Copies of the plan are distributed to the individuals with roles during an actual
incident, and each reviews and validates the plan's components.
2. Structured walk-through: All involved individuals walk through the steps they would take during
an actual event. This can consist of an on-the-ground walkthrough, in which everyone discusses his
or her actions at each particular location and juncture, or it can be more of a talk-through, in which
all involved individuals sit around a conference table and discuss in turn how they would act as the
incident unfolded.
3. Simulation: Each involved individual works individually, rather than in conference, simulating
the performance of each task required to react to and recover from a simulated incident.
4. Parallel: This test is larger in scope and intensity. In the parallel test, individuals act as if an
actual incident occurred, performing the required tasks and executing the necessary procedures.
5. Full interruption: The final, most comprehensive, and most realistic test is to react to an incident
as if it were real. In a full interruption, the individuals follow each and every procedure, including
the interruption of service, restoration of data from backups, and notification of appropriate
individuals.

Incident Detection
Incident detection begins with noticing an unusual occurrence. This is most often a complaint to the
help desk from one or more users about a technology service.
These complaints are often collected by the help desk and can include reports such as the system is
acting unusual, programs are slow, my computer is acting weird, data is not available.
# Incident Indicators
-Even if an organization has what appears on paper to be an effective IR plan, the procedures that
Incident Reaction
Incident reaction consists of actions that guide the organization to stop the incident,
1. Check list: copies of the IR plan are distributed to each individual with a role to play during an
mitigate the impact of the incident, and provide information for the recovery from the
actual incident. These individuals each review the plan and create a checklist of correct and incorrect
incident
components.
In reacting to the incident there are a number of actions that must occur quickly including:
notification of key personnel
2. Structured walkthrough: in a walkthrough, each involved individual practices the steps he/she
assignment of tasks
Notification of Key Personnel
Someone must initiate the alert roster and notify key personnel. Most organizations maintain alert rosters for emergencies. An alert roster contains the contact information of the individuals to be notified in the event of an incident.
-A sequential roster is activated as a contact person calls each and every person on the roster.
-A hierarchical roster is activated as the first person calls a few other people on the roster, who in turn call other people on the roster.

Documenting an Incident
It is important to ensure that the event is recorded for the organization's records: to know what happened, how it happened, and what actions were taken. The documentation should record this information.

Incident Recovery
The first task is to identify the human resources needed and launch them into action. The organization repairs vulnerabilities, addresses any shortcomings in safeguards, and restores the data and services of the systems.

Disaster Recovery Planning
Disaster recovery planning (DRP) is planning the preparation for and recovery from a disaster. The contingency planning team must decide which actions constitute disasters and which constitute incidents. When situations are classified as disasters, plans change as to how to respond: take action to secure the most valuable assets to preserve value for the longer term, even at the risk of more disruption.

Crisis Management
Crisis management comprises the actions taken during and after a disaster, focusing on the people involved and addressing the viability of the business. The crisis management team is responsible for managing the event from an enterprise perspective.
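The two roster activation styles described above can be sketched as follows; the roster contents and role names are hypothetical placeholders, not from the text:

```python
# Sketch of sequential vs. hierarchical alert roster activation.
# The roster maps each person to the people he or she calls; a real
# roster would also hold phone numbers and backup contacts.
ROSTER = {
    "CISO": ["IR lead", "Sysadmin"],
    "IR lead": ["Help desk", "Network admin"],
    "Sysadmin": [],
    "Help desk": [],
    "Network admin": [],
}

def sequential(roster):
    """Sequential roster: one contact person calls every person in turn."""
    return list(roster)

def hierarchical(roster, start):
    """Hierarchical roster: the first person calls a few others,
    who in turn call the people assigned to them."""
    notified = [start]
    for callee in roster.get(start, []):
        notified.extend(hierarchical(roster, callee))
    return notified
```

Both styles reach everyone; the hierarchical roster spreads the calling work across people, at the cost of a break in the chain if one person is unreachable.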
Damage Assessment
To recover from the incident, the organization must:
-Identify the vulnerabilities that allowed the incident to occur and spread, and resolve them.
-Address the safeguards that failed to stop or limit the incident, or were missing from the system in the first place.
-Evaluate monitoring capabilities. Improve their detection and reporting methods, or simply install new monitoring capabilities.
-Restore the data from backups.

Law Enforcement Involvement
When the incident at hand constitutes a violation of law, the organization may determine that law enforcement should be involved.

Continuity Strategies
The continuity strategies an organization can choose from include:
-hot sites
-warm sites
-cold sites
-timeshare
-service bureaus
-mutual agreements
Database shadowing: This involves not only processing duplicate real-time data storage, but also duplicating the databases at the remote site.

DRP Steps
_ Establish responsibility for managing the document, typically the security administrator
_ Appoint a secretary to document the activities and results of the planning session(s)
_ Independent incident response and disaster recovery teams are formed, with a common planning committee
_ Identify and prioritize threats to the organization's information and information systems
UNIT 8
TOPICS:
WEB SECURITY
The number of individuals and companies with Internet access is expanding rapidly, and all of these have graphical Web browsers. As a result, businesses are enthusiastic about setting up facilities on the Web for electronic commerce. But the reality is that the Internet and the Web are extremely vulnerable to compromises of various sorts. As businesses wake up to this reality, the demand for secure Web services grows. The topic of Web security is a very broad one. In this chapter, we begin with a discussion of the general requirements for Web security and then focus on two standardized schemes that are becoming increasingly important as part of Web commerce: SSL/TLS and SET.
Web Security Considerations
The World Wide Web is fundamentally a client/server application running over the Internet and TCP/IP intranets. As such, the security tools and approaches discussed so far in this book are relevant to the issue of Web security. But the Web presents new challenges not generally appreciated in the context of computer and network security:
-The Internet is two-way. Unlike traditional publishing environments, even electronic publishing systems involving teletext, voice response, or fax-back, the Web is vulnerable to attacks on the Web servers over the Internet.
-The Web is increasingly serving as a highly visible outlet for corporate and product information and as the platform for business transactions. Reputations can be damaged and money can be lost if the Web servers are subverted.
-Although Web browsers are very easy to use, Web servers are relatively easy to configure and manage, and Web content is increasingly easy to develop, the underlying software is extraordinarily complex. This complex software may hide many potential security flaws. The short history of the Web is filled with examples of new and upgraded systems, properly installed, that are vulnerable to a variety of security attacks.
-A Web server can be exploited as a launching pad into the corporation's or agency's entire computer complex. Once the Web server is subverted, an attacker may be able to gain access to data and systems not part of the Web itself but connected to the server at the local site.
-Casual and untrained (in security matters) users are common clients for Web-based services. Such users are not necessarily aware of the security risks that exist and do not have the tools or knowledge to take effective countermeasures.
Table 1.1 provides a summary of the types of security threats faced in using the Web. One way to group these threats is in terms of passive and active attacks. Passive attacks include eavesdropping on network traffic between browser and server and gaining access to information on a Web site that is supposed to be restricted. Active attacks include impersonating another user, altering messages in transit between client and server, and altering information on a Web site.

Figure 1.1: Relative Location of Security Facilities in the TCP/IP Protocol Stack
Figure 1.1 illustrates this difference. One way to provide Web security is to use IP Security (Figure 1.1a). The advantage of using IPSec is that it is transparent to end users and applications and provides a general-purpose solution. Further, IPSec includes a filtering capability so that only selected traffic need incur the overhead of IPSec processing. Another relatively general-purpose solution is to implement security just above TCP (Figure 1.1b). The foremost example of this approach is the Secure Sockets Layer (SSL) and the follow-on Internet standard known as Transport Layer Security (TLS). At this level, there are two implementation choices. For full generality, SSL (or TLS) could be provided as part of the underlying protocol suite and therefore be transparent to applications. Alternatively, SSL can be embedded in specific packages. For example, Netscape and Microsoft Explorer browsers come equipped with SSL, and most Web servers have implemented the protocol. Application-specific security services are embedded within the particular application. Figure 1.1c shows examples of this architecture. The advantage of this approach is that the service can be tailored to the specific needs of a given application. In the context of Web security, an important example of this approach is Secure Electronic Transaction (SET). The remainder of this chapter is devoted to a discussion of SSL/TLS and SET.

Netscape originated SSL. Version 3 of the protocol was designed with public review and input from industry and was published as an Internet draft document. Subsequently, when a consensus was reached to submit the protocol for Internet standardization, the TLS working group was formed within IETF to develop a common standard. The first published version of TLS can be viewed as essentially an SSLv3.1, and it is very close to and backward compatible with SSLv3.

SSL Architecture
SSL is designed to make use of TCP to provide a reliable end-to-end secure service. SSL is not a single protocol but rather two layers of protocols, as illustrated in Figure 1.2. The SSL Record Protocol provides basic security services to various higher-layer protocols. In particular, the Hypertext Transfer Protocol (HTTP), which provides the transfer service for Web client/server interaction, can operate on top of SSL. Three higher-layer protocols are defined as part of SSL: the Handshake Protocol, the Change Cipher Spec Protocol, and the Alert Protocol. These SSL-specific protocols are used in the management of SSL exchanges and are examined later in this section.

Two important SSL concepts are the SSL session and the SSL connection, which are defined in the specification as follows:
-Connection: A connection is a transport (in the OSI layering model definition) that provides a suitable type of service. For SSL, such connections are peer-to-peer relationships. The connections are transient. Every connection is associated with one session.
-Session: An SSL session is an association between a client and a server. Sessions are created by the Handshake Protocol. Sessions define a set of cryptographic security parameters, which can be shared among multiple connections. Sessions are used to avoid the expensive negotiation of new security parameters for each connection.
Between any pair of parties (applications such as HTTP on client and server), there may be multiple secure connections. In theory, there may also be multiple simultaneous sessions between parties, but this feature is not used in practice. There are actually a number of states associated with each session. Once a session is established, there is a current operating state for both read and write (i.e., receive and send). In addition, during the Handshake Protocol, pending read and write states are created. Upon successful conclusion of the Handshake Protocol, the pending states become the current states.

A session state is defined by the following parameters (definitions taken from the SSL specification):
-Session identifier: An arbitrary byte sequence chosen by the server to identify an active or resumable session state.
-Peer certificate: An X509.v3 certificate of the peer. This element of the state may be null.
-Cipher spec: Specifies the bulk data encryption algorithm (such as null, AES, etc.) and a hash algorithm (such as MD5 or SHA-1) used for MAC calculation. It also defines cryptographic attributes such as the hash_size.
-Master secret: 48-byte secret shared between the client and server.
-Is resumable: A flag indicating whether the session can be used to initiate new connections.
A connection state is defined by the following parameters:
-Server and client random: Byte sequences that are chosen by the server and client for each connection.
-Server write MAC secret: The secret key used in MAC operations on data sent by the server.
-Client write MAC secret: The secret key used in MAC operations on data sent by the client.
-Server write key: The conventional encryption key for data encrypted by the server and decrypted by the client.
-Client write key: The conventional encryption key for data encrypted by the client and decrypted by the server.
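The session establishment just described can be observed with Python's standard ssl module; the host name below is an arbitrary example, and the function name is ours:

```python
# Open a TLS-protected TCP connection and inspect the negotiated
# session parameters (protocol version and cipher suite).
import socket
import ssl

def tls_session_info(host, port=443):
    """Connect with TLS and return (protocol version, cipher suite)."""
    context = ssl.create_default_context()  # system CA bundle + hostname checking
    with socket.create_connection((host, port)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls:
            # cipher() returns (cipher name, protocol, number of secret bits)
            return tls.version(), tls.cipher()

# Example (requires network access):
# print(tls_session_info("www.example.com"))
```

The handshake, session caching, and record protection all happen inside wrap_socket; the application only sees an ordinary socket interface, which illustrates the "transparent to applications" deployment choice above.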
-Initialization vectors: When a block cipher in CBC mode is used, an initialization vector (IV) is maintained for each key. This field is first initialized by the SSL Handshake Protocol. Thereafter, the final ciphertext block from each record is preserved for use as the IV with the following record.
-certificate_unknown: Some other unspecified issue arose in processing the certificate, rendering it unacceptable. (This is one of the certificate alerts conveyed by the Alert Protocol, described below.)

Change Cipher Spec Protocol:
The Change Cipher Spec Protocol is one of the three SSL-specific protocols that use the SSL Record Protocol, and it is the simplest. This protocol consists of a single message (Figure 1.5a), which consists of a single byte with the value 1. The sole purpose of this message is to cause the pending state to be copied into the current state, which updates the cipher suite to be used on this connection.

Handshake Protocol:
The most complex part of SSL is the Handshake Protocol. This protocol allows the server and client to authenticate each other and to negotiate an encryption and MAC algorithm and cryptographic keys to be used to protect data sent in an SSL record. The Handshake Protocol is used before any application data is transmitted.
The Handshake Protocol consists of a series of messages exchanged by client and server. All of these messages have the format shown in Figure 1.5c. Each message has three fields:
-Type (1 byte): Indicates one of 10 messages.
-Length (3 bytes): The length of the message in bytes.
-Content (>= 0 bytes): The parameters associated with this message.

Alert Protocol:
The Alert Protocol is used to convey SSL-related alerts to the peer entity. As with other applications that use SSL, alert messages are compressed and encrypted, as specified by the current state. Each message in this protocol consists of two bytes (Figure 1.5b). The first byte takes the value warning(1) or fatal(2) to convey the severity of the message. If the level is fatal, SSL immediately terminates the connection. Other connections on the same session may continue, but no new connections on this session may be established. The second byte contains a code that indicates the specific alert. First, we list those alerts that are always fatal (definitions from the SSL specification):
-handshake_failure: Sender was unable to negotiate an acceptable set of security parameters given the options available.
-illegal_parameter: A field in a handshake message was out of range or inconsistent with other fields.
The remaining alerts include the following:
-close_notify: Notifies the recipient that the sender will not send any more messages on this connection. Each party is required to send a close_notify alert before closing the write side of a connection.
-bad_certificate: A received certificate was corrupt (e.g., contained a signature that did not verify).

Phase 1. Establish Security Capabilities:
Figure 1.6 shows the initial exchange needed to establish a logical connection between client and server. This phase is used to initiate a logical connection and to establish the security capabilities that will be associated with it. The exchange is initiated by the client, which sends a client_hello message with the following parameters:
-Version: The highest SSL version understood by the client.
-Random: A client-generated random structure, consisting of a 32-bit timestamp and 28 bytes generated by a secure random number generator. These values serve as nonces and are used during key exchange to prevent replay attacks.
-Session ID: A variable-length session identifier. A nonzero value indicates that the client wishes to update the parameters of an existing connection or create a new connection on this session. A zero value indicates that the client wishes to establish a new connection on a new session.
-CipherSuite: This is a list that contains the combinations of cryptographic algorithms supported by the client, in decreasing order of preference. Each element of the list (each cipher suite) defines both a key exchange algorithm and a CipherSpec.
-Compression Method: This is a list of the compression methods the client supports.
After sending the client_hello message, the client waits for the server_hello message, which contains the same parameters as the client_hello message. For the server_hello message, the following conventions apply. The Version field contains the lower of the version suggested by the client and the highest supported by the server. The Random field is generated by the server and is independent of the client's Random field.
If the SessionID field of the client was nonzero, the same value is used by the server; otherwise the server's SessionID field contains the value for a new session. The CipherSuite field contains the single cipher suite selected by the server from those proposed by the client. The Compression field contains the compression method selected by the server from those proposed by the client.

The first element of the CipherSuite parameter is the key exchange method (i.e., the means by which the cryptographic keys for conventional encryption and MAC are exchanged). The following key exchange methods are supported:
-RSA: The secret key is encrypted with the receiver's RSA public key. A public-key certificate for the receiver's key must be made available.
-Fixed Diffie-Hellman: This is a Diffie-Hellman key exchange in which the server's certificate contains the Diffie-Hellman public parameters signed by the certificate authority (CA). That is, the public-key certificate contains the Diffie-Hellman public-key parameters. The client provides its Diffie-Hellman public-key parameters either in a certificate or in a key exchange message. This method results in a fixed secret key between two peers, based on the Diffie-Hellman calculation using the fixed public keys.
-Ephemeral Diffie-Hellman: This technique is used to create ephemeral (temporary, one-time) secret keys. In this case, the Diffie-Hellman public keys are exchanged, signed using the sender's private RSA or DSS key. The receiver can use the corresponding public key to verify the signature. Certificates are used to authenticate the public keys. This would appear to be the most secure of the three Diffie-Hellman options, because it results in a temporary, authenticated key.
-Anonymous Diffie-Hellman: Each side sends its public Diffie-Hellman parameters to the other, with no authentication.
Following the definition of a key exchange method is the CipherSpec, which includes the following fields:
-CipherAlgorithm: Any of the algorithms mentioned earlier: RC4, RC2, DES, 3DES, DES40, IDEA, Fortezza.

Transport Layer Security (TLS)
TLS is an IETF standardization initiative whose goal is to produce an Internet standard version of SSL. TLS is defined as a Proposed Internet Standard in RFC 2246. RFC 2246 is very similar to SSLv3.

MAC: There are two differences between the SSLv3 and TLS MAC schemes: the actual algorithm and the scope of the MAC calculation. TLS makes use of the HMAC algorithm defined in RFC 2104. HMAC is defined as follows:
HMAC_K(M) = H[(K+ XOR opad) || H[(K+ XOR ipad) || M]]
where H = the embedded hash function (for TLS, either MD5 or SHA-1); M = the message input to HMAC; K+ = the secret key padded with zeros on the left so that the result is equal to the block length of the hash code; ipad = 00110110 (36 in hexadecimal) repeated to fill the block; opad = 01011100 (5C in hexadecimal) repeated to fill the block.
SSLv3 uses the same algorithm, except that the padding bytes are concatenated with the secret key rather than being XORed with the secret key padded to the block length. The level of security should be about the same in both cases.

Pseudorandom Function: TLS makes use of a pseudorandom function referred to as PRF to expand secrets into blocks of data for purposes of key generation or validation. The objective is to make use of a relatively small shared secret value but to generate longer blocks of data in a way that is secure from the kinds of attacks made on hash functions and MACs. The PRF is based on the following data expansion function (Figure 1.7):
P_hash(secret, seed) = HMAC_hash(secret, A(1) || seed) || HMAC_hash(secret, A(2) || seed) || ...
where A(0) = seed and A(i) = HMAC_hash(secret, A(i-1)). The data expansion function is iterated until enough output has been generated. The result of this algorithmic structure is a pseudorandom function. We can view the master_secret as the pseudorandom seed value to the function. The client and server random numbers can be viewed as salt values to complicate cryptanalysis.

Alert Codes: TLS supports all of the alert codes defined in SSLv3 with the exception of no_certificate. A number of additional codes are defined in TLS; of these, the following are always fatal:
-decryption_failed: A ciphertext decrypted in an invalid way; either it was not an even multiple of the block length or its padding values, when checked, were not correct.
-record_overflow: A TLS record was received with a payload (ciphertext) whose length exceeds 2^14 + 2048 bytes, or the ciphertext decrypted to a length of greater than 2^14 + 1024 bytes.
-unknown_ca: A valid certificate chain or partial chain was received, but the certificate was not accepted because the CA certificate could not be located or could not be matched with a known, trusted CA.
-access_denied: A valid certificate was received, but when access control was applied, the sender decided not to proceed with the negotiation.
-decode_error: A message could not be decoded because a field was out of its specified range or the length of the message was incorrect.
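The data expansion function behind the PRF described above can be sketched with Python's standard hmac module; this follows the P_hash construction in RFC 2246, with our own function name:

```python
# Sketch of the TLS data expansion function P_hash (RFC 2246):
#   A(0) = seed, A(i) = HMAC(secret, A(i-1))
#   P_hash(secret, seed) = HMAC(secret, A(1) || seed) ||
#                          HMAC(secret, A(2) || seed) || ...
# iterated until enough output bytes have been generated.
import hmac

def p_hash(secret: bytes, seed: bytes, length: int, hash_name: str = "sha1") -> bytes:
    out = b""
    a = seed  # A(0)
    while len(out) < length:
        a = hmac.new(secret, a, hash_name).digest()            # A(i)
        out += hmac.new(secret, a + seed, hash_name).digest()  # next block
    return out[:length]
```

A small shared secret plus public seed material (for example, the client and server random values) thus expands into as many pseudorandom bytes as key generation requires.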
-export_restriction: A negotiation not in compliance with export restrictions on key length was detected.
-protocol_version: The protocol version the client attempted to negotiate is recognized but not supported.
-insufficient_security: Returned instead of handshake_failure when a negotiation has failed specifically because the server requires ciphers more secure than those supported by the client.
-internal_error: An internal error unrelated to the peer or the correctness of the protocol makes it impossible to continue.
The remainder of the new alerts include the following:
-user_canceled: This handshake is being canceled for some reason unrelated to a protocol failure.
-no_renegotiation: Sent by a client in response to a hello request or by the server in response to a client hello after initial handshaking. Either of these messages would normally result in renegotiation, but this alert indicates that the sender is not able to renegotiate. This message is always a warning.

Cipher Suites: There are several small differences between the cipher suites available under SSLv3 and under TLS:
-Key Exchange: TLS supports all of the key exchange techniques of SSLv3 with the exception of Fortezza.
-Symmetric Encryption Algorithms: TLS includes all of the symmetric encryption algorithms found in SSLv3, with the exception of Fortezza.

Client Certificate Types: TLS defines the following certificate types to be requested in a certificate_request message: rsa_sign, dss_sign, rsa_fixed_dh, and dss_fixed_dh. These are all defined in SSLv3. In addition, SSLv3 includes certificate types that involve signing the Diffie-Hellman parameters with either RSA or DSS; for TLS, the rsa_sign and dss_sign types are used for that function; a separate signing type is not needed to sign Diffie-Hellman parameters.

Certificate_Verify and Finished Messages: In the TLS certificate_verify message, the MD5 and SHA-1 hashes are calculated only over handshake_messages. Recall that for SSLv3, the hash calculation also included the master secret and pads. These extra fields were felt to add no additional security. As with the finished message in SSLv3, the finished message in TLS is a hash based on the shared master_secret, the previous handshake messages, and a label that identifies client or server. The calculation is somewhat different:
PRF(master_secret, finished_label, MD5(handshake_messages) || SHA-1(handshake_messages))
where finished_label is the string "client finished" for the client and "server finished" for the server.

Cryptographic Computations: The pre_master_secret for TLS is calculated in the same way as in SSLv3. As in SSLv3, the master_secret in TLS is calculated as a hash function of the pre_master_secret and the two hello random numbers.

Padding: In SSL, the padding added prior to encryption of user data is the minimum amount required so that the total size of the data to be encrypted is a multiple of the cipher's block length. In TLS, the padding can be any amount that results in a total that is a multiple of the cipher's block length, up to a maximum of 255 bytes. For example, if the plaintext (or compressed text if compression is used) plus MAC plus padding.length byte is 79 bytes long, then the padding length, in bytes, can be 1, 9, 17, and so on, up to 249. A variable padding length may be used to frustrate attacks based on an analysis of the lengths of exchanged messages.

SECURE ELECTRONIC TRANSACTION (SET)
SET is an open encryption and security specification designed to protect credit card transactions on the Internet. The current version, SETv1, emerged from a call for security standards by MasterCard and Visa in February 1996. A wide range of companies were involved in developing the initial specification, including IBM, Microsoft, Netscape, RSA, Terisa, and Verisign.
SET is not itself a payment system. Rather, it is a set of security protocols and formats that enables users to employ the existing credit card payment infrastructure on an open network, such as the Internet, in a secure fashion. In essence, SET provides three services:
-It provides a secure communications channel among all parties involved in a transaction.
-It provides trust by the use of X.509v3 digital certificates.
-It ensures privacy because the information is only available to the parties in a transaction when and where necessary.

SET Overview:
A good way to begin our discussion of SET is to look at the business requirements for SET, its key features, and the participants in SET transactions.
Requirements: The SET specification lists the following business requirements for secure payment processing with credit cards over the Internet and other networks:
-Ensure the integrity of all transmitted data: That is, ensure that no changes in content occur during transmission of SET messages. Digital signatures are used to provide integrity.
-Provide authentication that a cardholder is a legitimate user of a credit card account: A mechanism that links a cardholder to a specific account number reduces the incidence of fraud and the overall cost of payment processing. Digital signatures and certificates are used to verify that the cardholder is a legitimate user of a valid account.
-Provide authentication that a merchant can accept credit card transactions through its relationship with a financial institution: This is the complement to the preceding requirement. Cardholders need to be able to identify merchants with whom they can conduct secure transactions.
-Ensure the use of the best security practices and system design techniques to protect all legitimate parties in an electronic commerce transaction.
-Create a protocol that neither depends on transport security mechanisms nor prevents their use: SET can securely operate over a "raw" TCP/IP stack. However, SET does not interfere with the use of other security mechanisms, such as IPSec and SSL/TLS.
-Facilitate and encourage interoperability among software and network providers: The SET protocols and formats are independent of hardware platform, operating system, and Web software.

Note that unlike IPSec and SSL/TLS, SET provides only one choice for each cryptographic algorithm. This makes sense, because SET is a single application with a single set of requirements, whereas IPSec and SSL/TLS are intended to support a range of applications.

Key Features of SET: To meet the requirements just outlined, SET incorporates the following features:
-Confidentiality of information: Cardholder account and payment information is secured as it travels across the network, assuring cardholders that this information is safe and accessible only to the intended recipient. Confidentiality also reduces the risk of fraud by either party to the transaction or by malicious third parties. An interesting and important feature of SET is that it prevents the merchant from learning the cardholder's credit card number; this is only provided to the issuing bank.
-Integrity of data: Payment information sent from cardholders to merchants includes order information, personal data, and payment instructions. SET guarantees that these message contents are not altered in transit. RSA digital signatures, using SHA-1 hash codes, provide message integrity.
-Cardholder account authentication: SET enables merchants to verify that a cardholder is a legitimate user of a valid card account number. SET uses X.509v3 digital certificates with RSA signatures for this purpose.
-Merchant authentication: SET enables cardholders to verify that a merchant has a relationship with a financial institution allowing it to accept payment cards. SET uses X.509v3 digital certificates with RSA signatures for this purpose.

SET Participants:
Figure 1.8 indicates the participants in the SET system, which include the following:
Cardholder: In the electronic environment, consumers and corporate purchasers interact with merchants from personal computers over the Internet. A cardholder is an authorized holder of a payment card (e.g., MasterCard, Visa) that has been issued by an issuer.
Merchant: A merchant is a person or organization that has goods or services to sell to the cardholder. Typically, these goods and services are offered via a Web site or by electronic mail. A merchant that accepts payment cards must have a relationship with an acquirer.
Issuer: This is a financial institution, such as a bank, that provides the cardholder with the payment card. Typically, accounts are applied for and opened by mail or in person. Ultimately, it is the issuer that is responsible for the payment of the debt of the cardholder.
Acquirer: This is a financial institution that establishes an account with a merchant and processes payment card authorizations and payments. Merchants will usually accept more than one credit card brand but do not want to deal with multiple bankcard associations or with multiple individual issuers. The acquirer provides authorization to the merchant that a given card account is active and that the proposed purchase does not exceed the credit limit. The acquirer also provides electronic transfer of payments to the merchant's account. Subsequently, the acquirer is reimbursed by the issuer over some sort of payment network for electronic funds transfer.

We now briefly describe the sequence of events that are required for a transaction. We will then look at some of the details.
1. The customer opens an account. The customer obtains a credit card account, such as MasterCard or Visa, with a bank that supports electronic payment and SET.
2. The customer receives a certificate. After suitable verification of identity, the customer receives an X.509v3 digital certificate, which is signed by the bank. The certificate verifies the customer's RSA public key and its expiration date. It also establishes a relationship, guaranteed by the bank, between the customer's key pair and his or her credit card.
3. Merchants have their own certificates. A merchant who accepts a certain brand of card must be in possession of two certificates for two public keys owned by the merchant: one for signing messages, and one for key exchange. The merchant also needs a copy of the payment gateway's public-key certificate.
4. The customer places an order. This is a process that may involve the customer first browsing through the merchant's Web site to select items and determine the price. The customer then sends a list of the items to be purchased to the merchant, who returns an order form containing the list of items, their prices, a total price, and an order number.
5. The merchant is verified. In addition to the order form, the merchant sends a copy of its certificate, so that the customer can verify that he or she is dealing with a valid store.
6. The order and payment are sent. The customer sends both order and payment information to the merchant, along with the customer's certificate. The order confirms the purchase of the items in the order form. The payment contains credit card details. The payment information is encrypted in such a way that it cannot be read by the merchant. The customer's certificate enables the merchant to verify the customer.
7. The merchant requests payment authorization. The merchant sends the payment information to the payment gateway, requesting authorization that the customer's available credit is sufficient for this purchase.
8. The merchant confirms the order. The merchant sends confirmation of the order to the customer.
9. The merchant provides the goods or service. The merchant ships the goods or provides the service to the customer.
10. The merchant requests payment. This request is sent to the payment gateway, which handles all of the payment processing.

Dual Signature: Before looking at the details of the SET protocol, let us discuss an important innovation introduced in SET: the dual signature. The purpose of the dual signature is to link two messages that are intended for two different recipients. In this case, the customer wants to send the order information (OI) to the merchant and the payment information (PI) to the bank. The merchant does not need to know the customer's credit card number, and the bank does not need to know the details of the customer's order.

Suppose the merchant is in possession of the dual signature (DS), the OI, and the message digest for the PI (PIMD). The merchant also has the public key of the customer, taken from the customer's certificate. Then the merchant can compute H(PIMD || H[OI]) and D(PUc, DS), where PUc is the customer's public signature key. If these two quantities are equal, then the merchant has verified the signature. Similarly, if the bank is in possession of DS, PI, the message digest for OI (OIMD), and the customer's public key, then the bank can compute H(H[PI] || OIMD) and D(PUc, DS). Again, if these two quantities are equal, then the bank has verified the signature. In summary,
1. The merchant has received OI and verified the signature.
2. The bank has received PI and verified the signature.
3. The customer has linked the OI and PI and can prove the linkage.
For example, suppose the merchant wishes to substitute another OI in this transaction, to its advantage. It would then have to find another OI whose hash matches the existing OIMD. With SHA-1, this is deemed not to be feasible. Thus, the merchant cannot link another OI with this PI.

Payment Processing:
Table 1.3 lists the transaction types supported by SET. In what follows we look in some detail at the following transactions:
-Purchase request
-Payment authorization
-Payment capture
-Purchase request
messages that are intended for two different recipients. In this case, the customer wants to send the
details of the customer's order. The customer is afforded extra protection in terms of privacy by
.a
.a
y
lls
ll
a
b
ll
a
b
u
s
.
Merchant registration Merchants must register with a CA before they can exchange SET messages
intended for this order and not for some other goods or service.
merchants.
resolve disputes if necessary. The link is needed so that the customer can prove that this payment is
Cardholder registration Cardholders must register with a CA before they can send SET messages to
keeping these two items separate. However, the two items must be linked in a way that can be used to
back later. The cardholder or merchant sends the Certificate Inquiry message to determine the status
quickly, it will send a reply to the cardholder or merchant indicating that the requester should check
encrypts the final hash with his or her private signature key, creating the dual signature. The operation
Certificate inquiry and status If the CA is unable to complete the processing of a certificate request
These two hashes are then concatenated and the hash of the result is taken. Finally, the customer
Payment capture Allows the merchant to request payment from the payment gateway.
preceding paragraph. The customer takes the hash (using SHA-1) of the PI and the hash of the OI.
The linkage prevents this. Diaure 1.9 shows the use of a dual signature to meet the requirement of the
Payment authorization Exchange between merchant and payment gateway to authorize a given
from this customer, the merchant could claim that this OI goes with the PI rather than the original OI.
Purchase request Message from customer to merchant containing OI for merchant and PI for bank.
and a signed PI, and the merchant passes the PI on to the bank. If the merchant can capture another OI
To see the need for the link, suppose that the customers send the merchant two messages: a signed OI
where PRc is the customer's private signature key. Now suppose that the merchant is in
of the certificate request and to receive the certificate if the request has been approved. Purchase
Page 175
Page 176
www.allsyllabus.com
www.allsyllabus.com
vtu.allsyllabus.com
www.allsyllabus.com
vtu.allsyllabus.com
www.allsyllabus.com
inquiry Allows the cardholder to check the status of the processing of an order after the purchase
Purchase Request message (Diaure 1.10). For this purpose, the cardholder generates a one-time
response has been received. Note that this message does not include information such as the status of
back ordered goods, but does indicate the status of authorization, capture and credit processing.
1. Purchase-related information.
Authorization reversal Allows a merchant to correct previous authorization requests. If the order will
2. Order-related information.
not be completed, the merchant reverses the entire authorization. If part of the order will not be
3. Cardholder certificate.
completed (such as when goods are back ordered), the merchant reverses part of the amount of the
authorization. Capture reversal Allows a merchant to correct errors in capture requests such
credit to a cardholder's account such as when goods are returned or were damaged during shipping.
Note that the SET Credit message is always initiated by the merchant, not the cardholder. All
2. Verifies the dual signature using the customer's public signature key. This ensures that the order has
communications between the cardholder and merchant that result in a credit being processed happen
not been tampered with in transit and that it was signed using the cardholder's private signature key.
outside of SET. Credit reversal Allows a merchant to correct a previously request credit. Payment
3. Processes the order and forwards the payment information to the payment gateway for
gateway certificate request Allows a merchant to query the payment gateway and receive a copy of
the gateway's current key-exchange and signature certificates. Batch administration Allows a
s.
co
m
When the merchant receives the Purchase Request message, it performs the following actions
s.
co
m
astransaction amounts that were entered incorrectly by a clerk. Credit Allows a merchant to issue a
merchant to communicate information to the payment gateway regarding merchant batches. Error
payment gateway. The payment authorization ensures that the transaction was approved by the issuer.
lls
Before the Purchase Request exchange begins, the cardholder has completed browsing, selecting, and
yl
This authorization guarantees that the merchant will receive payment; the merchant can therefore
yl
Purchase Request:
During the processing of an order from a cardholder, the merchant authorizes the transaction with the
provide the services or goods to the customer. The payment authorization exchange consists of two
lls
tests.
Payment Authorization:
la
bu
la
bu
message Indicates that a responder rejects a message because it fails format or content verification
Payment Capture
consists of four messages: Initiate Request, Initiate Response, Purchase Request, and Purchase
To obtain payment, the merchant engages the payment gateway in a payment capture transaction,
Response. In order to send SET messages to the merchant,the cardholder must have a copy of the
consisting of a capture request and a capture response message. For the Capture Request message,
the merchant generates, signs, and encrypts a capture request block, which includes the payment
certificates of the merchant and the payment gateway. The customer requests the certificates in the
.a
to the customer. All of the preceding occurs without the use of SET. The purchase request exchange
.a
ordering. The end of this preliminary phase occurs when the merchant sends a completed order form
Initiate Request message, sent to the merchant. This message includes the brand of the credit card
amount and the transaction ID. The message also includes the encrypted capture token received earlier
that the customer is using. The message also includes an ID assigned to this request/response pair by
(in the Authorization Response) for this transaction, as well as the merchant's signature key and key-
The merchant generates a response and signs it with its private signature key. The response includes
When the payment gateway receives the capture request message, it decrypts and verifies the capture
the nonce from the customer, another nonce for the customer to return in the next message, and a
request block and decrypts and verifies the capture token block. It then checks for consistency
transaction ID for this purchase transaction. In addition to the signed response, the Initiate Response
between the capture request and capture token. It then creates a clearing request that is sent to the
message includes the merchant's signature certificate and the payment gateway's key exchange
issuer over the private payment network. This request causes funds to be transferred to the merchant's
certificate. The cardholder verifies the merchant and gateway certificates by means of their respective
account.
CA signatures and then creates the OI and PI. The transaction ID assigned by the merchant is placed
The gateway then notifies the merchant of payment in a Capture Response message. The message
in both the OI and PI. The OI does not contain explicit order data such as the number and price of
includes a capture response block that the gateway signs and encrypts. The message also includes the
items. Rather, it contains an order reference generated in the exchange between merchant and
gateway's signature key certificate. The merchant software stores the capture response to be used for
customer during the shopping phase before the first SET message. Next, the cardholder prepares the
Page 177
www.allsyllabus.com
Page 178
www.allsyllabus.com
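The dual-signature construction and its two verification checks described in this section can be sketched in a few lines of Python. This is an illustrative sketch only: it uses SHA-1 from hashlib as SET specifies, but the signature uses a toy textbook-RSA key pair with tiny primes (an assumption made for readability, not a secure implementation), and the OI/PI byte strings are hypothetical.

```python
import hashlib

def h(data: bytes) -> bytes:
    """SET uses SHA-1 for message digests."""
    return hashlib.sha1(data).digest()

# Toy textbook-RSA signature key pair (tiny illustrative primes; NOT secure).
p, q = 61, 53
n = p * q                          # modulus
e = 17                             # public exponent (PUc)
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (PRc)

def sign(digest: bytes) -> list:
    """'Encrypt' the digest with the private key, byte by byte (toy RSA)."""
    return [pow(b, d, n) for b in digest]

def verify(sig: list, digest: bytes) -> bool:
    """'Decrypt' the signature with the public key and compare digests."""
    return bytes(pow(s, e, n) for s in sig) == digest

# Customer side: DS = E(PRc, [H(H(PI) || H(OI))])
OI = b"order-ref=37;merchant=XYZ"   # hypothetical order information
PI = b"pan=1123;amount=49.95"       # hypothetical payment information
PIMD, OIMD = h(PI), h(OI)
DS = sign(h(PIMD + OIMD))

# Merchant side: has DS, OI, and PIMD, so it recomputes H(PIMD || H(OI)).
assert verify(DS, h(PIMD + h(OI)))

# Bank side: has DS, PI, and OIMD, so it recomputes H(H(PI) || OIMD).
assert verify(DS, h(h(PI) + OIMD))

# A substituted OI fails the check, so the merchant cannot relink the PI.
assert not verify(DS, h(PIMD + h(b"order-ref=99;merchant=XYZ")))
```

Note that the merchant never sees the PI and the bank never sees the OI; each party verifies the same dual signature from its own message plus only the digest of the other message.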
Software Testing
10CS842
UNIT 4
previously tested units with respect to the functional decomposition tree. While this describes
integration testing as a process, discussions of this type offer little information about the goals or
techniques. Before addressing these (real) issues, we need to understand the consequences of the
alternative life cycle models.
It is important to keep preliminary design as an integral phase, rather than to try to amortize such
high level design across a series of builds. (To do so usually results in unfortunate consequences of
design choices made during the early builds that are regrettable in later builds.) Since preliminary
design remains a separate step, we are tempted to conclude that integration testing is unaffected in
the spin-off models. To some extent this is true: the main impact of the series of builds is that
regression testing becomes necessary. The goal of regression testing is to assure that things that
worked correctly in the previous build still work with the newly added code. Progression testing
assumes that regression testing was successful, and that the new functionality can be tested. (We
like to think that the addition of new code represents progress, not a regression.) Regression testing
is an absolute necessity in a series of builds because of the well-known ripple effect of changes to
an existing system. (The industrial average is that one change in five introduces a new fault.)
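The regression/progression distinction can be made concrete with a tiny sketch. Assume, hypothetically, that build 1 delivered a `withdraw` function and build 2 adds `deposit`: the build-1 tests are re-run as the regression suite on every later build, and only then are the new progression tests meaningful.

```python
# Hypothetical build-1 functionality, carried forward unchanged into build 2.
def withdraw(balance: int, amount: int) -> int:
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# Hypothetical functionality added in build 2.
def deposit(balance: int, amount: int) -> int:
    return balance + amount

def regression_suite() -> None:
    # Build-1 tests, re-run on every later build: old behavior must survive.
    assert withdraw(100, 40) == 60
    try:
        withdraw(10, 40)
    except ValueError:
        pass
    else:
        raise AssertionError("overdraft should be rejected")

def progression_suite() -> None:
    # Tests only for the functionality that is new in this build.
    assert deposit(100, 40) == 140

regression_suite()   # guards against the ripple effect of the new code
progression_suite()  # trusted only once the regression suite passes
```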
The differences among the three spin-off models are due to how the builds are identified. In
incremental development, the motivation for separate builds is usually to level off the staff profile.
With pure waterfall development, there can be a huge bulge of personnel for the phases from
detailed design through unit testing. Most organizations cannot support such rapid staff fluctuations,
so the system is divided into builds that can be supported by existing personnel. In evolutionary
development, there is still the presumption of a build sequence, but only the first build is defined.
Based on it, later builds are identified, usually in response to priorities set by the customer/user, so
the system evolves to meet the changing needs of the user. The spiral model is a combination of
rapid prototyping and evolutionary development, in which a build is defined first in terms of rapid
prototyping, and then is subjected to a go/no go decision based on technology-related risk factors.
From this we see that keeping preliminary design as an integral step is difficult for the evolutionary
and spiral models. To the extent that this cannot be maintained as an integral activity, integration
testing is negatively affected.
Because a build is a set of deliverable end-user functionality, one advantage of these spin-off
models is that all three yield earlier synthesis. This also results in earlier customer feedback, so two
of the deficiencies of waterfall development are mitigated.
4.2.2 Specification Based Models
Two other variations are responses to the complete understanding problem. (Recall that
functional decomposition is successful only when the system is completely understood.) When
systems are not fully understood (by either the customer or the developer), functional
decomposition is perilous at best. The rapid prototyping life cycle (Figure 12.4) deals with this by
drastically reducing the specification-to-customer feedback loop to produce very early synthesis.
Rather than build a final system, a quick and dirty prototype is built and then used to elicit
customer feedback. Depending on the feedback, more prototyping cycles may occur. Once the
developer and the customer agree that a prototype represents the desired system, the developer goes
ahead and builds to a correct specification. At this point, any of the waterfall spin-offs might also be
used.
metaphor. In the fountain model (see Figure 12.6), the foundation is the requirements analysis of
real-world systems.
The Deft CASE tool distinguishes between simple and compound flows, where compound flows
may be decomposed into other flows, which may themselves be compound. The graphic appearance
of this choice is that simple flows have filled arrowheads, while compound flows have open
arrowheads. As an example, the compound flow screen has the following decomposition:
screen1    Welcome
screen2    Enter PIN
screen3    Wrong PIN
screen4    PIN failed, card retained
screen5    Select trans type
screen6    Select account type
screen7    Enter amount
screen8    Insufficient funds
screen9    Cannot dispense that amount
screen10   Cannot process withdrawals
screen11   Take your cash
screen12   Cannot process deposits
screen13   Put dep envelop in slot
screen14   Another transaction?
screen15   Thanks; take card and receipt
The dataflow diagrams and the entity/relationship model contain information that is primarily
structural. This is problematic for testers, because test cases are concerned with behavior, not with
structure. As a supplement, the functional and data information are linked by a control model; here
we use a finite state machine. Control models represent the point at which structure and behavior
intersect; as such, they are of special utility to testers.
The upper level finite state machine in Figure 4.12 divides the system into states that correspond to
stages of customer usage. Other choices are possible, for instance, we might choose states to be
screens being displayed (this turns out to be a poor choice). Finite state machines can be
hierarchically decomposed in much the same way as dataflow diagrams. The decomposition of the
Await PIN state is shown in Figure 4.13. In both of these figures, state transitions are caused either
by events at the ATM terminal (such as a keystroke) or by data conditions (such as the recognition
that a PIN is correct). When a transition occurs, a corresponding action may also occur. We choose
to use screen displays as such actions; this choice will prove to be very handy when we develop
system level test cases.
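The idea that transitions are caused by port events or data conditions, and that each transition produces a screen display as its action, can be sketched as a small table-driven finite state machine. The state names, event labels, and screen numbers below are illustrative stand-ins patterned on the SATM discussion, not the exact machine of Figures 4.12 and 4.13.

```python
# (state, event) -> (next_state, screen action): a sketch of an upper-level
# SATM machine in which screen displays are the transition actions.
TRANSITIONS = {
    ("Idle", "card inserted"):     ("AwaitPIN", 2),    # show "enter PIN"
    ("AwaitPIN", "valid PIN"):     ("AwaitTrans", 5),  # "select trans type"
    ("AwaitPIN", "invalid PIN"):   ("AwaitPIN", 3),    # "wrong PIN"
    ("AwaitPIN", "third failure"): ("Idle", 4),        # card retained
    ("AwaitTrans", "trans done"):  ("CloseSession", 14),
    ("CloseSession", "no more"):   ("Idle", 15),
}

def run(events, state="Idle"):
    """Drive the machine; return the final state and the screens displayed."""
    screens = []
    for ev in events:
        state, screen = TRANSITIONS[(state, ev)]
        screens.append(screen)
    return state, screens

# A path through the machine doubles as a system-level test-case skeleton.
final, screens = run(["card inserted", "invalid PIN", "valid PIN",
                      "trans done", "no more"])
# final == "Idle", screens == [2, 3, 5, 14, 15]
```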
The function, data, and control models are the basis for design activities in the waterfall model (and
its spin-offs). During design, some of the original decisions may be revised based on additional
insights and more detailed requirements (for example, performance or reliability goals). The end
result is a functional decomposition such as the partial one shown in the structure chart in Figure
4.14. Notice that the original first level decomposition into four subsystems is continued: the
functionality has been decomposed to lower levels of detail. Choices such as these are the essence of
design, and design is beyond the scope of this book. In practice, testers often have to live with the
results of poor design choices.
portions of the system. To support this, we need a numbered decomposition, and a more detailed
view of two of the components.
Here is the functional decomposition carried further in outline form: the numbering scheme
preserves the levels of the components in Figure 12.14.
1 SATM System
1.1.2.1 WatchCardSlot
As part of the specification and design process, each functional component is normally expanded to
show its inputs, outputs, and mechanism. We do this here with pseudo-code (or PDL, for program
design language) for three modules. This particular PDL is loosely based on Pascal; the point of any
PDL is to communicate, not to develop something that can be compiled. The main program
description follows the finite state machine description given in Figure 4.12. States in that diagram
are implemented with a Case statement.
Main Program
State = AwaitCard
CASE State OF
AwaitCard:
ScreenDriver(1, null)
WatchCardSlot(CardSlotStatus)
WHILE CardSlotStatus is Idle DO
WatchCardSlot(CardSlotStatus)
ControlCardRoller(accept)
ValidateCard(CardOK, PAN)
IF CardOK
THEN
State = AwaitPIN
ELSE
ControlCardRoller(eject)
State = AwaitCard
AwaitPIN:
ValidatePIN(PINok, PAN)
IF PINok
THEN
ScreenDriver(2, null)
State = AwaitTrans
ELSE
ScreenDriver(4, null)
State = AwaitCard
AwaitTrans:
ManageTransaction
State = CloseSession
CloseSession:
IF NewTransactionRequest
THEN
State = AwaitTrans
ELSE
PrintReceipt
PostTransactionLocal
CloseSession
ControlCardRoller(eject)
State = AwaitCard
END, (CASE State)
END. (Main Program)
The ValidatePIN procedure is based on another finite state machine shown in Figure 4.13, in which
states refer to the number of PIN entry attempts.
Procedure ValidatePIN(PINok, PAN)
GetPINforPAN(PAN, ExpectedPIN)
Try = First
CASE Try OF
First:
ScreenDriver(2, null)
GetPIN(EnteredPIN)
IF EnteredPIN = ExpectedPIN
THEN
PINok = TRUE
RETURN
ELSE
ScreenDriver(3, null)
Try = Second
Second:
ScreenDriver(2, null)
GetPIN(EnteredPIN)
IF EnteredPIN = ExpectedPIN
THEN
PINok = TRUE
RETURN
ELSE
ScreenDriver(3, null)
Try = Third
Third:
ScreenDriver(2, null)
GetPIN(EnteredPIN)
IF EnteredPIN = ExpectedPIN
THEN
PINok = TRUE
RETURN
ELSE
ScreenDriver(4, null)
PINok = FALSE
END, (CASE Try)
END. (Procedure ValidatePIN)
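The three nearly identical CASE arms above can also be collapsed into a loop. The following Python sketch keeps the same behavior (up to three attempts, screen 3 after a failed try, screen 4 after the third failure); `get_pin` and `screen_driver` are hypothetical stand-ins supplied by the caller.

```python
def validate_pin(expected_pin, get_pin, screen_driver, max_tries=3):
    """Return True if a matching PIN is entered within max_tries attempts."""
    for attempt in range(1, max_tries + 1):
        screen_driver(2)                 # screen 2: "enter your PIN"
        if get_pin() == expected_pin:
            return True
        # screen 3: "wrong PIN, try again"; screen 4: "PIN failed"
        screen_driver(3 if attempt < max_tries else 4)
    return False

# Example: the correct PIN is entered on the second of three attempts.
entries = iter(["1111", "8876"])
shown = []
ok = validate_pin("8876", lambda: next(entries), shown.append)
# ok is True; shown == [2, 3, 2]
```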
The GetPIN procedure is based on another finite state machine in which states refer to the number
of digits received; in any state, either another digit key can be touched, or the cancel key can be
touched. Rather than another CASE statement implementation, the states are collapsed into
iterations of a WHILE loop.
Procedure GetPIN(EnteredPIN, CancelHit)
Local Data: DigitKeys = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
BEGIN
CancelHit = FALSE
EnteredPIN = null string
DigitsRcvd=0
WHILE NOT(DigitsRcvd= 4 OR CancelHit) DO
BEGIN
KeySensor(KeyHit)
IF KeyHit IN DigitKeys
THEN BEGIN
EnteredPIN = EnteredPIN + KeyHit
INCREMENT(DigitsRcvd)
IF DigitsRcvd=1 THEN ScreenDriver(2, 'X-')
IF DigitsRcvd=2 THEN ScreenDriver(2, 'XX-')
IF DigitsRcvd=3 THEN ScreenDriver(2, 'XXX-')
IF DigitsRcvd=4 THEN ScreenDriver(2, 'XXXX')
END
ELSE CancelHit = TRUE
END {WHILE}
END. (Procedure GetPIN)
If we follow the pseudocode in these three modules, we can identify the uses relationship among
the modules in the functional decomposition.
Module        Uses Modules
SATM Main     WatchCardSlot, ControlCardRoller, ScreenDriver, ValidateCard,
              ValidatePIN, ManageTransaction, NewTransactionRequest
ValidatePIN   GetPINforPAN, GetPIN, ScreenDriver
GetPIN        KeySensor, ScreenDriver
Notice that the uses information is not readily apparent in the functional decomposition. This
information is developed (and extensively revised) during the more detailed phases of the design
process. We will revisit this in Chapter 13.
4.4.1 Structural Insights
Everyone agrees that there must be some distinction, and that integration testing is at a more
detailed level than system testing. There is also general agreement that integration testing can safely
assume that the units have been separately tested, and that, taken by themselves, the units function
correctly. One common view, therefore, is that integration testing is concerned with the interfaces
among the units. One possibility is to fall back on the symmetries in the waterfall life cycle model,
and say that integration testing is concerned with preliminary design information, while system
testing is at the level of the requirements specification. This is a popular academic view, but it begs
an important question: how do we discriminate between specification and preliminary design? The
pat academic answer to this is the what vs. how dichotomy: the requirements specification defines
what, and the preliminary design describes how. While this sounds good at first, it doesn't stand up
well in practice. Some scholars argue that just the choice of a requirements specification technique
is a design choice.
The life cycle approach is echoed by designers who often take a "Don't Tread On Me" view of a
requirements specification: a requirements specification should neither predispose nor preclude a
design option. With this view, when information in a specification is so detailed that it steps on the
designer's toes, the specification is too detailed. This sounds good, but it still doesn't yield an
operational way to separate integration and system testing.
The models used in the development process provide some clues. If we follow the definition of the
SATM system, we could first postulate that system testing should make sure that all fifteen display
screens have been generated. (An output domain based, functional view of system testing.) The
entity/relationship model also helps: the one-to-one and one-to-many relationships help us
understand how much testing must be done. The control model (in this case, a hierarchy of finite
state machines) is the most helpful. We can postulate system test cases in terms of paths through the
finite state machine(s); doing this yields a system level analog of structural testing. The functional
models (dataflow diagrams and structure charts) move in the direction of levels because both
express a functional decomposition. Even with this, we cannot look at a structure chart and identify
where system testing ends and integration testing starts. The best we can do with structural
information is identify the extremes. For instance, the following threads are all clearly at the system
level:
1. Insertion of an invalid card. (this is probably the shortest system thread)
2. Insertion of a valid card, followed by three failed PIN entry attempts.
3. Insertion of a valid card, a correct PIN entry attempt, followed by a balance inquiry.
4. Insertion of a valid card, a correct PIN entry attempt, followed by a deposit.
5. Insertion of a valid card, a correct PIN entry attempt, followed by a withdrawal.
6. Insertion of a valid card, a correct PIN entry attempt, followed by an attempt to withdraw more
cash than the account balance.
We can also identify some integration level threads. Go back to the PDL descriptions of
ValidatePIN and GetPIN. ValidatePIN calls GetPIN, and GetPIN waits for KeySensor to report
when a key is touched. If a digit is touched, GetPIN echoes an X to the display screen, but if the
cancel key is touched, GetPIN terminates, and ValidatePIN considers another PIN entry attempt.
We could push still lower, and consider keystroke sequences such as two or three digits followed by
a cancel keystroke.
4.4.2 Behavioral Insights
Here is a pragmatic, explicit distinction that has worked well in industrial applications. Think about
a system in terms of its port boundary, which is the location of system level inputs and outputs.
Every system has a port boundary; the port boundary of the SATM system includes the digit
keypad, the function buttons, the screen, the deposit and withdrawal doors, the card and receipt
slots, and so on. Each of these devices can be thought of as a port, and events occur at system
ports. The port input and output events are visible to the customer, and the customer very often
understands system behavior in terms of sequences of port events. Given this, we mandate that
system port events are the primitives of a system test case, that is, a system test case (or
equivalently, a system thread) is expressed as an interleaved sequence of port input and port output
events. This fits our understanding of a test case, in which we specify pre-conditions, inputs,
outputs, and post-conditions. With this mandate we can always recognize a level violation: if a test
case (thread) ever requires an input (or an output) that is not visible at the port boundary, the test
case cannot be a system level test case (thread). Notice that this is clear, recognizable, and
enforceable. We will refine this in Chapter 14 when we discuss threads of system behavior.
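This mandate can be expressed directly: define the port boundary as a set of port names, represent a thread as an interleaved sequence of (direction, port, detail) events, and reject any thread that touches anything not at the boundary. The port names below follow the SATM description; the event detail strings are illustrative.

```python
# SATM port boundary: the devices where system-level inputs/outputs occur.
PORT_BOUNDARY = {"digit keypad", "function buttons", "screen",
                 "card slot", "receipt slot", "deposit door",
                 "withdrawal door"}

def is_system_thread(thread):
    """A system test case may only reference events at the port boundary."""
    return all(port in PORT_BOUNDARY for _direction, port, _detail in thread)

# Thread 1 from the list above: insertion of an invalid card.
invalid_card = [
    ("in",  "card slot", "invalid card inserted"),
    ("out", "card slot", "card ejected"),
    ("out", "screen",    "screen 1"),
]

# A level violation: a hypothetical internal 'PIN store' is not a port,
# so any thread referencing it cannot be a system-level test case.
violation = [("in", "PIN store", "lookup expected PIN")]
```

Here `is_system_thread(invalid_card)` accepts the thread while `is_system_thread(violation)` flags the level violation, which is exactly the clear, enforceable check the text calls for.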
Integration Testing
Craftspersons are recognized by two essential characteristics: they have a deep knowledge of the
tools of their trade, and they have a similar knowledge of the medium in which they work, so that
they understand their tools in terms of how they work with the medium. In Parts II and III, we
focused on the tools (techniques) available to the testing craftsperson. Our goal there was to
understand testing techniques in terms of their advantages and limitations with respect to particular
types of faults. Here we shift our emphasis to the medium, with the goal that a better understanding
of the medium will improve the testing craftsperson's judgment.
In Chapter 12, we described the SATM system in terms of its output screens (Figure 4.7), the
terminal itself (Figure 4.8), its context and partial dataflow (Figures 4.9 and 4.10), an
entity/relationship model of its data (Figure 4.11), finite state machines describing some of its
behavior (Figures 4.12 and 4.13), and a partial functional decomposition (Figure 4.14). We also
developed a PDL description of the main program and two units, ValidatePIN and GetPIN.
We begin here by expanding the functional decomposition that was started in Figure 4.12; the
numbering scheme preserves the levels of the components in that figure. For easier reference, each
component that appears in our analysis is given a new (shorter) number; these numbers are given in
Table 1. (The only reason for this is to make the figures and spreadsheet more readable.) If you look
closely at the units that are designated by letters, you see that they are packaging levels in the
decomposition; they are never called as procedures. The decomposition in Table 1 is pictured as a
decomposition tree in Figure 13.1. This decomposition is the basis for the usual view of integration
testing. It is important to remember that such a decomposition is primarily a packaging partition of
the system. As software design moves into more detail, the added information lets us refine the
functional decomposition tree into a unit calling graph. The unit calling graph is the directed graph
in which nodes are program units and edges correspond to program calls; that is, if unit A calls unit
B, there is a directed edge from node A to node B. We began the development of the call graph for
the SATM system in Chapter 12 when we examined the calls made by the main program and the
ValidatePIN and GetPIN modules. That information is captured in the adjacency matrix given
below in Table 2. This matrix was created with a spreadsheet; this turns out to be a handy tool for
testers.
Table 1 SATM Units and Abbreviated Names
Unit Number   Level Number   Unit Name
1             1              SATM System
              1.1            Device Sense & Control
              1.1.1          Door Sense & Control
              1.1.1.1        GetDoorStatus
              1.1.1.2        ControlDoor
              1.1.1.3        DispenseCash
              1.1.2          Slot Sense & Control
              1.1.2.1        WatchCardSlot
              1.1.2.2        GetDepositSlotStatus
              1.1.2.3        ControlCardRoller
              1.1.2.4        ControlEnvelopeRoller
              1.1.2.5        ReadCardStrip
              1.2            Central Bank Comm.
11            1.2.1          GetPINforPAN
12            1.2.2          GetAccountStatus
13            1.2.3          PostDailyTransactions
              1.3            Terminal Sense & Control
14            1.3.1          ScreenDriver
15            1.3.2          KeySensor
              1.4            Manage Session
16            1.4.1          ValidateCard
17            1.4.2          ValidatePIN
18            1.4.2.1        GetPIN
              1.4.3          Close Session
19            1.4.3.1        NewTransactionRequest
20            1.4.3.2        PrintReceipt
21            1.4.3.3        PostTransactionLocal
22            1.4.4          ManageTransaction
23            1.4.4.1        GetTransactionType
24            1.4.4.2        GetAccountType
25            1.4.4.3        ReportBalance
26            1.4.4.4        ProcessDeposit
27            1.4.4.5        ProcessWithdrawal
Most textbook discussions of integration testing only consider integration testing based on the
functional decomposition of the system being tested. These approaches are all based on the
functional decomposition, expressed either as a tree (Figure 4.5.1) or in textual form. These
discussions inevitably center on the order in which modules are to be integrated. There are four
choices: from the top of the tree downward (top down), from the bottom of the tree upward (bottom
up), some combination of these (sandwich), or most graphically, none of these (the big bang). All of
these integration orders presume that the units have been separately tested, thus the goal of
decomposition based integration is to test the interfaces among separately tested units.
The SATM call graph is shown in Figure 4.5.2. Some of the hierarchy is obscured to reduce the
confusion in the drawing. One thing should be quite obvious: drawings of call graphs do not scale
up well. Both the drawings and the adjacency matrix provide insights to the tester. Nodes with high
degree will be important to integration testing, and paths from the main program (node 1) to the
sink nodes can be used to identify contents of builds for an incremental development.
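The adjacency-matrix view can be reproduced in a few lines: record the calls identified so far (the main program plus the ValidatePIN and GetPIN modules, as in the PDL), then read off node degrees and main-to-sink paths, the two pieces of information the text says matter most. This sketch covers only the units whose PDL appears in this chapter.

```python
# Directed call-graph edges taken from the PDL: caller -> called units.
CALLS = {
    "SATM Main":   ["WatchCardSlot", "ControlCardRoller", "ScreenDriver",
                    "ValidateCard", "ValidatePIN", "ManageTransaction",
                    "NewTransactionRequest"],
    "ValidatePIN": ["GetPINforPAN", "GetPIN", "ScreenDriver"],
    "GetPIN":      ["KeySensor", "ScreenDriver"],
}

def in_degree(unit):
    """High in-degree units need the most interface testing attention."""
    return sum(callees.count(unit) for callees in CALLS.values())

def paths_to_sinks(unit="SATM Main", path=None):
    """Enumerate main-to-sink paths: candidates for incremental builds."""
    path = (path or []) + [unit]
    if unit not in CALLS:          # a sink node: it calls nothing further
        yield path
        return
    for callee in CALLS[unit]:
        yield from paths_to_sinks(callee, path)

# in_degree("ScreenDriver") == 3: it is called from three different levels,
# and one main-to-sink path is SATM Main -> ValidatePIN -> GetPIN -> KeySensor.
```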
Top-down integration begins with the main program (the root of the tree). Any lower level unit that
is called by the main program appears as a stub, where stubs are pieces of throw-away code that
emulate a called unit. If we performed top-down integration testing for the SATM system, the first
step would be to develop stubs for all the units called by the main program: WatchCardSlot, Control
Card Roller, Screen Driver, Validate Card, Validate PIN, Manage Transaction, and New
Transaction Request. Generally, testers have to develop the stubs, and some imagination is required.
Here are two examples of stubs.
Procedure GetPINforPAN (PAN, ExpectedPIN)    STUB
IF PAN = '1123' THEN PIN := '8876';
IF PAN = '1234' THEN PIN := '8765';
IF PAN = '8746' THEN PIN := '1253';
End,
Sandwich integration is a combination of top-down and bottom-up integration; it reduces stub and
driver development, but it retains the difficulty of fault isolation that is a consequence of big bang
integration. (We could probably discuss the size of a sandwich, from dainty finger sandwiches to
Dagwood-style sandwiches, but not now.)
In the stub for GetPINforPAN, the tester replicates a table look-up with just a few values that will
appear in test cases. In the stub for KeySensor, the tester must devise a sequence of port events that
can occur once each time the KeySensor procedure is called. (Here, we provided the keystrokes to
partially enter the PIN 8876, but the user hit the cancel button before the fourth digit.) In practice,
the effort to develop stubs is usually quite significant. There is good reason to consider stub code as
part of the software development, and maintain it under configuration management.
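The same lookup-table idea can be sketched in Python (the PAN/PIN pairs are just the ones from the stub in the text; a real implementation would query the bank's account records):

```python
# Stub for GetPINforPAN: instead of a real account look-up, return
# canned PINs for the few PANs that will appear in test cases.
EXPECTED_PIN = {'1123': '8876', '1234': '8765', '8746': '1253'}

def get_pin_for_pan_stub(pan):
    # The stub only knows the PANs its test cases will present;
    # an unknown PAN is a test-design error, so fail loudly.
    return EXPECTED_PIN[pan]
```

Keeping such stubs under configuration management, as suggested above, lets them evolve alongside the test cases they serve.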
Once all the stubs for SATM main have been provided, we test the main program as if it were a
stand-alone unit. We could apply any of the appropriate functional and structural techniques, and
look for faults. When we are convinced that the main program logic is correct, we gradually replace
stubs with the actual code. Even this can be problematic. Would we replace all the stubs at once? If
we did, we would have a small bang for units with a high outdegree. If we replace one stub at a
time, we retest the main program once for each replaced stub. This means that, for the SATM main
program example here, we would repeat its integration test eight times (once for each replaced stub,
and once with all the stubs).
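The retest arithmetic for one-at-a-time stub replacement is simple enough to state as a formula; the sketch below just formalizes the count for a single parent unit, using the stub names from the SATM example:

```python
# Top-down sessions for one parent unit: one test with all stubs in place,
# then one retest each time a stub is replaced by the real unit.
stubs = ['WatchCardSlot', 'ControlCardRoller', 'ScreenDriver',
         'ValidateCard', 'ValidatePIN', 'ManageTransaction',
         'NewTransactionRequest']

def topdown_sessions(stub_names):
    return 1 + len(stub_names)  # all-stubs run + one run per replacement

print(topdown_sessions(stubs))  # prints 8, matching the SATM main example
```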
4.5.2 Bottom-up Integration
Bottom-up integration is a mirror image to the top-down order, with the difference that stubs are
replaced by driver modules that emulate units at the next level up in the tree. In bottom-up
integration, we start with the leaves of the decomposition tree (units like ControlDoor and
DispenseCash), and test them with specially coded drivers. There is probably less throw-away code
in drivers than there is in stubs. Recall we had one stub for each child node in the decomposition
tree. Most systems have a fairly high fan-out near the leaves, so in the bottom-up integration
order, we won't have as many drivers. This is partially offset by the fact that the driver modules will
be more complicated.
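A driver is the mirror of a stub: throwaway code that calls the unit under test with prepared cases. A hypothetical Python driver for a DispenseCash-like leaf unit (both the unit's logic and the expected results are invented for illustration):

```python
# Hypothetical leaf unit: translate a withdrawal amount into counts of bills.
def dispense_cash(amount):
    twenties, rest = divmod(amount, 20)
    tens = rest // 10
    return {'20s': twenties, '10s': tens}

# Driver: feeds the leaf its test cases and checks results. The driver plays
# the role of the calling unit, which has not yet been integrated.
def dispense_cash_driver():
    cases = {40: {'20s': 2, '10s': 0}, 50: {'20s': 2, '10s': 1}}
    for amount, expected in cases.items():
        assert dispense_cash(amount) == expected
    return 'all cases passed'
```

The extra complexity of drivers shows even here: unlike a stub, the driver carries the test cases and the expected results.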
(Adjacency table for the SATM call graph, listing the predecessors and successors of each node;
only fragments of the table survive, e.g. nodes 23 through 27 each have node 22 as a predecessor,
with successors drawn from nodes 14 and 15.)
When a unit executes, some path of source statements is traversed. Suppose that there is a call to
another unit along such a path: at that point, control is passed from the calling unit to the called unit,
where some other path of source statements is traversed. We cleverly ignored this situation in Part
III, because this is a better place to address the question. There are two possibilities: abandon the
single-entry, single-exit precept and treat such calls as an exit followed by an entry, or suppress
the call statement because control eventually returns to the calling unit anyway. The suppression
choice works well for unit testing, but it is antithetical to integration testing.
We can finally make the definitions for path based integration testing. Our goal is to have an
integration testing analog of DD-Paths.
Definition
An MM-Path is an interleaved sequence of module execution paths and messages.
The basic idea of an MM-Path is that we can now describe sequences of module execution paths
that include transfers of control among separate units. Since these transfers are by messages, MM-Paths
always represent feasible execution paths, and these paths cross unit boundaries. We can find
MM-Paths in an extended program graph in which nodes are module execution paths and edges are
messages. The hypothetical example in Figure 4.7.3 shows an MM-Path (the dark line) in which
module A calls module B, which in turn calls module C.
We can always compute the number of neighborhoods for a given call graph. There will be one
neighborhood for each interior node, plus one extra in case there are leaf nodes connected directly
to the root node. (An interior node has a non-zero indegree and a non-zero outdegree.) We have
Interior nodes = nodes - (source nodes + sink nodes)
Neighborhoods = interior nodes + source nodes
which combine to
Neighborhoods = nodes - sink nodes
Neighborhood integration yields a drastic reduction in the number of integration test sessions (down
to 11 from 40), and it avoids stub and driver development. The end result is that neighborhoods are
essentially the sandwiches that we slipped past in the previous section. (There is a slight difference,
because the base information for neighborhoods is the call graph, not the decomposition tree.) What
they share with sandwich integration is more significant: neighborhood integration testing has the
fault isolation difficulties of medium bang integration.
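The neighborhood counting formulas can be checked mechanically. A short Python sketch on an invented call graph (not the SATM graph) verifies that the two formulations agree:

```python
# Neighborhood counting on a call graph: one neighborhood per interior node
# plus one per source node, which collapses to nodes - sink nodes.
edges = [(1, 2), (1, 3), (2, 4), (3, 4), (3, 5)]  # invented call graph
nodes = {n for e in edges for n in e}
indeg = {n: sum(1 for _, t in edges if t == n) for n in nodes}
outdeg = {n: sum(1 for s, _ in edges if s == n) for n in nodes}

sources = [n for n in nodes if indeg[n] == 0]
sinks = [n for n in nodes if outdeg[n] == 0]
interior = [n for n in nodes if indeg[n] > 0 and outdeg[n] > 0]

# Interior nodes = nodes - (source nodes + sink nodes)
assert len(interior) == len(nodes) - (len(sources) + len(sinks))
# Neighborhoods = interior nodes + source nodes = nodes - sink nodes
print(len(interior) + len(sources), len(nodes) - len(sinks))  # prints 3 3
```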
Much of the progress in the development of mathematics comes from an elegant pattern: have a
clear idea of where you want to go, and then define the concepts that take you there. We do this
here for path based integration testing, but first we need to motivate the definitions.
We already know that the combination of structural and functional testing is highly desirable at the
unit level; it would be nice to have a similar capability for integration (and system) testing. We also
know that we want to express system testing in terms of behavioral threads. Lastly, we revise our
goal for integration testing: rather than test interfaces among separately developed and tested units,
we focus on interactions among these units. (Co-functioning might be a good term.) Interfaces are
structural; interaction is behavioral.
We can now define an integration testing analog of the DD-Path graph that serves unit testing so
effectively.
Definition
Given a set of units, their MM-Path graph is the directed graph in which nodes are module
execution paths and edges correspond to messages and returns from one unit to another.
Notice that MM-Path graphs are defined with respect to a set of units. This directly supports
composition of units and composition based integration testing. We can even compose down to the
level of individual module execution paths, but that is probably more detailed than necessary.
Definition
An atomic system function (ASF) is an action that is observable at the system level in terms of port
input and output events.
An atomic system function begins with a port input event, traverses one or more MM-Paths, and
terminates with a port output event. When viewed from the system level, there is no compelling
reason to decompose an ASF into lower levels of detail (hence the atomicity). In the SATM system,
digit entry is a good example of an ASF; so are card entry, cash dispensing, and session closing.
PIN entry is probably too big; it might be called a molecular system function.
The first guideline for MM-Paths: points of quiescence are natural endpoints for an MM-Path.
Our second guideline also serves to distinguish integration from system testing.
(The numbered pseudocode listing for SATM Main, ValidatePIN, and GetPIN, statements 1
through 75, begins as follows; the body of the listing did not survive extraction, apart from the
ScreenDriver PIN-echo strings 'X-', 'XX-', 'XXX-', and 'XXXX' near the end of GetPIN.)
Main Program
State = AwaitCard
CASE State OF
AwaitCard:
...
There are 20 source nodes in SATM Main: 1, 5, 6, 8, 9, 10, 12, 14, 15, 17, 18, 19, 20, 21, 22, 23,
24, 25, 26, 27. ValidatePIN has 11 source nodes: 29, 31, 34, 35, 39, 41, 46, 47, 48, 53; and in
GetPIN there are 6 source nodes: 56, 65, 70, 71, 72, 73.
SATM Main contains 16 sink nodes: 4, 5, 7, 8, 9, 11, 13, 14, 16, 18, 20, 22, 23, 24, 25, 28. There
are 14 sink nodes in ValidatePIN : 30, 33, 34, 37, 38, 40, 41, 44, 47, 48, 51, 52, 55; and 5 sink
nodes in GetPIN: 64, 69, 70, 71, 72.
Most of the module execution paths in SATM Main are very short; this pattern is due to the high
density of messages to other units. Here are the first two module execution paths in SATM Main:
<1, 2, 3, 4>, <5> and <6, 7>, <8> . The module execution paths in ValidatePIN are slightly longer:
<29, 30>, <31, 32, 33>, <34>, <35, 36, 37>, and so on. The beginning portion of GetPIN is a good
example of a module execution path: the sequence < 58, 59, 60, 61, 62, 63, 64> begins with a
source node (58) and ends with a sink node (64) which is a call to the KeyHit procedure. This is
also a point of event quiescence, where nothing will happen until the customer touches a key.
There are four MM-Paths in statements 64 through 72: each begins with KeySensor observing a
port input event (a keystroke) and ends with a closely knit family of port output events (the calls to
ScreenDriver with different PIN echoes). We could name these four MM-Paths GetDigit1,
GetDigit2, GetDigit3, and GetDigit4. They are slightly different because the later ones include the
earlier IF statements. (If the tester were the designer, this module might be reworked so that the
WHILE loop repeated a single MM-Path.) Technically, each of these is also an atomic system
function since they begin and end with port events.
There are interesting ASFs in ValidatePIN. This unit controls all screen displays relevant to the PIN
entry process. It begins with the display of screen 2 (which asks the customer to enter his/her PIN).
Next, GetPIN is called, and the system is event quiescent until a keystroke occurs. These keystrokes
initiate the GetDigit ASFs we just discussed. Here we find a curious integration fault. Notice that
screen 2 is displayed in two places: by the THEN clauses in the WHILE loop in GetPIN and by the
first statements in each CASE clause in ValidatePIN. We could fix this by removing the screen
displays from GetPIN and simply returning the string (e.g., X) to be displayed.
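The suggested fix, having GetPIN return the echo string rather than drive the screen itself, might look like this in Python. The interfaces here are hypothetical, reduced to what the text describes; only the echo strings ('X-', 'XX-', 'XXX-', 'XXXX') come from the listing:

```python
# Hypothetical refactor: GetPIN builds the masked PIN echo and returns it;
# only ValidatePIN talks to the ScreenDriver, so screen 2 (and every echo)
# is displayed in exactly one place.
def pin_echo(digits_entered):
    # echoes 'X-', 'XX-', 'XXX-', 'XXXX', following the listing's strings
    return 'X' * digits_entered + ('-' if digits_entered < 4 else '')

def screen_driver(text):
    # stand-in for the real ScreenDriver port output
    print(text)

def validate_pin_display(digits_entered):
    screen_driver(pin_echo(digits_entered))  # the single point of display
```

With the display logic concentrated in one unit, the duplicate screen-2 fault described above cannot recur, and the GetDigit MM-Paths end at a single, well-defined port output event.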