
OPERATING SYSTEM (ETCS-212)

Model Question Paper


Q. 1. (a) What is the main advantage of the layered approach to system design? [5]
Ans. Advantages of the Layered Approach: There are several advantages of the layered approach:
(i) The main advantage of this approach is modularity. The layers are selected such that each layer uses only the functions and services of the layers below it.
(ii) The first layer can be debugged without any concern for the rest of the system, because, by definition, it uses only the basic hardware (which is assumed correct) to implement its functions.
(iii) The design and implementation of the system are simplified when the system is broken into layers.
Q.1. (b) What advantage is there in having different time-quantum sizes on different levels of a multilevel queueing system? (5)
Ans. Processes that need frequent servicing, for instance interactive processes such as editors, can be placed in a queue with a small time quantum. Processes with no need for frequent servicing can be placed in a queue with a large time quantum, requiring fewer context switches to complete their processing and thus making more efficient use of the computer.
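As a minimal illustration in C (the quantum values are assumptions for the sketch, not from the text), each level of a multilevel queue can simply carry its own quantum, growing toward the batch levels:

    /* Illustrative only: per-level time quanta in a multilevel queue. */
    struct queue_level {
        int quantum_ms;          /* time slice for processes at this level */
    };

    struct queue_level mlq[3] = {
        { 8 },                   /* level 0: interactive jobs, small quantum */
        { 16 },                  /* level 1: intermediate jobs */
        { 64 },                  /* level 2: batch jobs, fewer context switches */
    };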
Q.1. (c) Consider a system that supports the strategies of contiguous, linked and indexed allocation. What criteria should be used in deciding which strategy is best utilized for a particular file? (5)
Ans. (i) Contiguous allocation is best for files that are small and accessed sequentially.
(ii) Linked allocation is best for large files accessed sequentially.
(iii) Indexed allocation is best for large, randomly accessed files.
Q.1. (d) State the advantages and disadvantages of placing functionality in a device controller. (5)

Ans. Advantages of placing functionality in a device controller:
(i) Bugs are less likely to cause an operating-system crash.
(ii) Performance can be improved by utilizing dedicated hardware and hard-coded algorithms.
(iii) The kernel is simplified by moving algorithms out of it.
Disadvantages of placing functionality in a device controller:
(i) Bugs are harder to fix: a new firmware version or new hardware is needed.
(ii) Improving the algorithms likewise requires a hardware update rather than just a kernel or device-driver update.
(iii) Embedded algorithms could conflict with the application's use of the device, causing decreased performance.
Q.2. (a) Compare the use of network sockets with the use of shared memory as a mechanism for communicating data between processes on a single computer. What are the advantages of each method? When might each be preferred? (6)
Ans. Using network sockets rather than shared memory for local communication has a number of advantages. The main advantage is that the socket programming interface features a rich set of synchronization features. A process can easily determine when new data has arrived on a socket connection, how much data is present, and who sent it. Processes can block until new data arrives on a socket, or they can request that a signal be delivered when data arrives. A socket also manages separate connections: a process with a socket open for receiving can accept multiple connections to that socket and will be told when new processes try to connect or when old processes drop their connections.
Shared memory offers none of these features. There is no way for a process to determine whether another process has delivered or changed data in shared memory other than by looking at the contents of that memory. It is impossible for a process to block and request a wakeup when data is delivered to shared memory, and there is no standard mechanism for other processes to establish a shared-memory link to an existing process.
However, shared memory has the advantage that it is very much faster than socket communication in many cases. When data is sent over a socket, it is typically copied from memory to memory multiple times. Shared-memory updates require no data copies: if one process updates a data structure in shared memory, that update is immediately visible to all other processes sharing that memory. Sending or receiving data over a socket requires that a kernel system call be made to initiate the transfer, whereas shared-memory communication can be performed entirely in user mode with no transfer of control required.
Socket communication is typically preferred when connection management is important or when there is a requirement to synchronize the sender and receiver. For example, server processes will usually establish a listening socket to which clients can connect when they want to use that service. Once the socket is established, individual requests are also sent using the socket, so that the server can easily determine when a new request arrives and who it arrived from.
In some cases, however, shared memory is preferred. Shared memory is often a better solution when either large amounts of data are to be transferred or two processes need random access to a large common data set. In this case, however, the communicating processes may still need an extra mechanism in addition to shared memory to achieve synchronization between themselves.
The X Window System, a graphical display environment for UNIX, is a good example of this: most graphics requests are sent over sockets, but shared memory is offered as an additional transport in special cases where large bitmaps are to be displayed on the screen. In this case, a request to display the bitmap will still be sent over the socket, but the bulk data of the bitmap itself will be sent via shared memory.
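As a hedged sketch of the shared-memory side of this comparison, the following C fragment maps a POSIX shared-memory segment; the segment name "/demo_shm" and its size are illustrative assumptions. Note that, exactly as the answer explains, the write generates no notification: a reader must poll or use a separate synchronization mechanism.

    /* Minimal POSIX shared-memory writer; link with -lrt on some systems. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        const char *name = "/demo_shm";              /* illustrative name */
        int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
        if (fd == -1) { perror("shm_open"); return 1; }
        if (ftruncate(fd, 4096) == -1) { perror("ftruncate"); return 1; }
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        strcpy(p, "hello");    /* no copy through the kernel, no system call, */
                               /* and no notification to any reading process  */
        munmap(p, 4096);
        close(fd);
        shm_unlink(name);      /* remove the segment when done */
        return 0;
    }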

Q.3. Describe the actions taken by the operating system when a page fault occurs. (4)
Ans. Steps in handling a page fault:
Step 1: We check an internal table (usually kept with the process control block for this process) to determine whether the reference was a valid or an invalid memory access.
Step 2: If the reference was invalid, we terminate the process. If it was valid, but we have not yet brought in that page, we now page it in.
Step 3: We find a free frame (by taking one from the free-frame list).
Step 4: We schedule a disk operation to read the desired page into the newly allocated frame.
Step 5: When the disk read is complete, we modify the internal table kept with the process and the page table to indicate that the page is now in memory.
Step 6: We restart the instruction that was interrupted by the illegal-address trap. The process can now access the page as though it had always been in memory.
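The six steps can be condensed into a C-like sketch; every helper routine here (is_valid_reference, get_free_frame, and so on) is hypothetical and stands in for the corresponding kernel machinery:

    /* Hypothetical sketch of the page-fault path; not any real kernel's API. */
    void handle_page_fault(process_t *p, vaddr_t addr) {
        if (!is_valid_reference(p, addr)) {   /* Step 1: check internal table */
            terminate(p);                     /* Step 2: invalid -> terminate */
            return;
        }
        frame_t f = get_free_frame();         /* Step 3: from free-frame list */
        schedule_disk_read(p, addr, f);       /* Step 4: read page into frame */
        wait_for_io();                        /* Step 5: on completion,       */
        mark_page_resident(p, addr, f);       /*         update both tables   */
        restart_instruction(p);               /* Step 6: re-run faulting op   */
    }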
Q.4. Define semaphores. How can a semaphore be used as a general synchronization tool? What advantage does a semaphore have as compared to a hardware test-and-set instruction? (6)
Ans. Semaphores: A semaphore is defined as a shared integer variable (with non-negative values) that, apart from initialization, is accessed only through two indivisible (atomic) operations: wait and signal. These operations were originally termed P (for wait) and V (for signal).
The classical definitions of wait and signal are:
wait(S):   while S <= 0 do no-op;
           S := S - 1;
signal(S): S := S + 1;
Usage as a general synchronization tool: We can use a semaphore to deal with the n-process critical-section problem. The n processes share a semaphore, mutex (standing for mutual exclusion), initialized to 1. Each process Pi is organised as shown in the figure below.
We can also use semaphores to solve various synchronization problems. For example, consider two concurrently running processes P1, with a statement S1, and P2, with a statement S2. Suppose that we require that S2 be executed only after S1 has completed. We can implement this scheme readily by letting P1 and P2 share a common semaphore synch, initialized to 0, and by inserting the statements
S1;
signal(synch);
in process P1, and the statements
wait(synch);
S2;
in process P2. Because synch is initialized to 0, P2 will execute S2 only after P1 has invoked signal(synch), which is after S1 has completed.
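A minimal runnable version of this S1-before-S2 scheme, using POSIX semaphores as a stand-in for the text's wait/signal (an assumption; compile with -pthread):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    sem_t synch;                                 /* initialized to 0, as above */

    void *p1(void *arg) {
        (void)arg;
        printf("S1\n");                          /* statement S1 */
        sem_post(&synch);                        /* signal(synch) */
        return NULL;
    }

    void *p2(void *arg) {
        (void)arg;
        sem_wait(&synch);                        /* wait(synch) */
        printf("S2\n");                          /* S2 runs only after S1 */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        sem_init(&synch, 0, 0);
        pthread_create(&t2, NULL, p2, NULL);     /* even if P2 starts first, */
        pthread_create(&t1, NULL, p1, NULL);     /* it still waits for S1    */
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        sem_destroy(&synch);
        return 0;
    }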

repeat
    wait(mutex);
        critical section
    signal(mutex);
        remainder section
until false;
Figure: Mutual-exclusion implementation with a semaphore.
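A runnable counterpart to the figure, again using POSIX semaphores for the text's wait/signal (compile with -pthread):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    sem_t mutex;                         /* the semaphore "mutex" of the figure */
    int shared_counter = 0;

    void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            sem_wait(&mutex);            /* wait(mutex): entry section  */
            shared_counter++;            /* critical section            */
            sem_post(&mutex);            /* signal(mutex): exit section */
        }
        return NULL;                     /* remainder section omitted   */
    }

    int main(void) {
        pthread_t t1, t2;
        sem_init(&mutex, 0, 1);          /* initialized to 1, as in the answer */
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("%d\n", shared_counter);  /* 200000: no lost updates */
        sem_destroy(&mutex);
        return 0;
    }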
Advantages of a semaphore over the test-and-set instruction:
1. Test-and-set hardware solutions are not easy to generalize to more complex synchronization problems. To overcome this difficulty, we can use a higher-level synchronization tool, the semaphore.
2. A plain test-and-set hardware solution is not feasible in a multiprocessor environment, whereas semaphores provide a solution that extends to multiprocessor environments.
Q.5. Define the critical-section problem. What are the necessary requirements for a solution to the critical-section problem? Explain.

(6)
Ans. Critical-Section Problem: Consider a system consisting of n processes {P0, P1, ..., Pn-1}. Each process has a segment of code, called a critical section, in which the process may be changing common variables, updating a table, writing a file, and so on. The important feature of the system is that, when one process is executing in its critical section, no other process is to be allowed to execute in its critical section. Thus, the execution of critical sections by the processes is mutually exclusive in time. The critical-section problem is to design a protocol that the processes can use to cooperate: each process must request permission to enter its critical section.
A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual Exclusion: If process Pi is executing in its critical section, then no other process can be executing in its critical section.
2. Progress: If no process is executing in its critical section and there exist some processes that wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in the decision of which will enter its critical section next, and this selection cannot be postponed indefinitely.
3. Bounded Waiting: There exists a bound on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
Q.6. Explain the four necessary conditions that must be in effect for a deadlock to exist. Explain two methods for handling deadlocks. (6.5)
Ans. Four necessary conditions for deadlock:
1. Mutual Exclusion Condition: The resources involved are non-shareable.
Explanation: At least one resource must be held in a non-shareable mode; that is, only one process at a time claims exclusive control of the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.
2. Hold and Wait Condition: A requesting process holds resources already allocated to it while waiting for the requested resources.
Explanation: There must exist a process that is holding a resource already allocated to it while waiting for additional resources that are currently being held by other processes.
3. No Preemption Condition: Resources already allocated to a process cannot be preempted.
Explanation: Resources cannot be forcibly removed from the processes holding them; they are released only after the process has used them to completion, or voluntarily by the process holding them.
4. Circular Wait Condition: The processes in the system form a circular list or chain in which each process in the list is waiting for a resource held by the next process in the list.
Two methods for handling deadlocks are deadlock avoidance and deadlock detection with recovery. The Banker's algorithm is a resource-allocation and deadlock-avoidance algorithm developed by Edsger Dijkstra that tests for safety by simulating the allocation of the predetermined maximum possible amounts of all resources, and then makes an "s-state" check to test for possible deadlock conditions for all other pending activities, before deciding whether the allocation should be allowed to continue.
The algorithm was developed in the design process for the THE operating system and originally described (in Dutch) in EWD108. The name is by analogy with the way that bankers account for liquidity constraints.
Data structures for the Banker's algorithm:
Available: vector of length m. If Available[j] = k, there are k instances of resource type Rj available.
Max: n x m matrix. If Max[i, j] = k, then process Pi may request at most k instances of resource type Rj.
Allocation: n x m matrix. If Allocation[i, j] = k, then Pi is currently allocated k instances of Rj.
Need: n x m matrix. If Need[i, j] = k, then Pi may need k more instances of Rj to complete its task.
Need[i, j] = Max[i, j] - Allocation[i, j].
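A hedged C sketch of the safety check built on these data structures follows; the sizes N and M and the array contents are assumptions, taken to be filled in elsewhere by the resource manager:

    #include <stdbool.h>
    #include <string.h>

    #define N 5                       /* number of processes (illustrative)  */
    #define M 3                       /* number of resource types            */

    int Available[M];                 /* filled in by the resource manager   */
    int Allocation[N][M], Need[N][M]; /* Need[i][j] = Max[i][j]-Allocation   */

    bool is_safe(void) {
        int work[M];
        bool finish[N] = { false };
        memcpy(work, Available, sizeof work);   /* Work := Available */
        int done = 0;
        while (done < N) {
            bool progressed = false;
            for (int i = 0; i < N; i++) {
                if (finish[i]) continue;
                bool can_finish = true;
                for (int j = 0; j < M; j++)
                    if (Need[i][j] > work[j]) { can_finish = false; break; }
                if (can_finish) {               /* Pi can run to completion: */
                    for (int j = 0; j < M; j++) /* reclaim its allocation    */
                        work[j] += Allocation[i][j];
                    finish[i] = true;
                    progressed = true;
                    done++;
                }
            }
            if (!progressed) return false;      /* no process can finish  */
        }
        return true;                            /* a safe sequence exists */
    }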
Q.7. (a) Explain deadlock detection in the centralized and the fully distributed approach. Why is deadlock detection much more expensive in a distributed environment than in a centralized environment? (6.5)
Ans. Centralized Approach: The simplest way to detect cycles in the global wait-for graph is to use a centralized scheme. One of the local coordinators is designated as the central coordinator. It collects the local WFGs (wait-for graphs) from all other coordinators, assembles them into a complete global graph, and then analyzes the global graph for the presence of cycles. The local graphs can be sent to the central coordinator whenever an edge is added or removed, or, less frequently, by grouping together multiple changes to reduce message traffic.
Disadvantages of the Centralized Approach:
1. The central coordinator becomes a performance bottleneck, as well as a single point of failure.
2. It is prone to detecting non-existent deadlocks, referred to as phantom deadlocks.
Fully Distributed Approach: A fully distributed deadlock-detection algorithm attempts to find cycles in the global wait-for graph by tracing the different paths through the graph without gathering it in one central location. The basic idea is to send a special message, called a probe, which replicates itself along all outgoing edges. If one of the replicas returns to the original sender of the probe, a cycle, and thus a deadlock, has been detected. There are many different implementations of this basic concept, referred to as edge-chasing or path-pushing approaches. What distinguishes them is the type and amount of information carried by the probe, and the type and amount of information that must be maintained by each local coordinator.
Let us consider the conceptually simplest approach, where each probe is the concatenation of the edges it has traversed so far. The initial probe is launched by a process pi when it becomes blocked on a request for a resource currently held by another process pj. The local coordinator extends the probe by adding to it the edge (pi -> pj) and replicates it along all edges emanating from pj. If there are no such edges, indicating that the current process is not blocked, the probe is discarded. A cycle is detected whenever a process appears on the probe twice, indicating that the probe must have returned to an already visited process.
Deadlock detection is much more expensive in a distributed environment because the wait-for graph is spread across many sites: assembling or tracing it requires exchanging messages over the network, whereas in a centralized environment the whole graph is locally available and can be searched directly.
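A schematic C fragment of the probe handling just described; the message layout and every helper routine here are assumptions for illustration, not any specific published protocol:

    #define MAX_PATH 32

    typedef struct {
        int initiator;                 /* process that launched the probe */
        int path[MAX_PATH];            /* processes traversed so far      */
        int path_len;
    } probe_t;

    void on_probe_arrival(probe_t *probe, int holder) {
        for (int i = 0; i < probe->path_len; i++)
            if (probe->path[i] == holder) {     /* seen twice: a cycle  */
                report_deadlock(probe);         /* hypothetical helper  */
                return;
            }
        if (!is_blocked(holder))                /* no outgoing edges:   */
            return;                             /* discard the probe    */
        if (probe->path_len < MAX_PATH)
            probe->path[probe->path_len++] = holder;
        forward_along_all_edges(probe, holder); /* replicate the probe  */
    }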
Q.7. (b) The head of a moving-head disk with 100 tracks, numbered 0 to 99, is currently serving a request at track 55. If the queue of requests is 10, 70, 75, 23, 65, which of the two disk-scheduling algorithms FCFS and SSTF (Shortest Seek Time First) will require less head movement? Find the total head movement for each algorithm. (6)
Ans. FCFS: requests are served in queue order, starting from track 55:
55 -> 10 -> 70 -> 75 -> 23 -> 65
Total head movement in FCFS = 45 + 60 + 5 + 52 + 42 = 204 tracks.


SSTF (Shortest Seek Time First): the head always serves the nearest pending request:
55 -> 65 -> 70 -> 75 -> 23 -> 10
Total head movement in SSTF = 10 + 5 + 5 + 52 + 13 = 85 tracks.
Hence SSTF requires less head movement.
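The two totals can be checked with a few lines of C; the service orders are taken directly from the answer above:

    #include <stdio.h>
    #include <stdlib.h>

    /* Sum of seek distances for a given service order. */
    int head_movement(int start, const int *order, int n) {
        int total = 0, pos = start;
        for (int i = 0; i < n; i++) {
            total += abs(order[i] - pos);
            pos = order[i];
        }
        return total;
    }

    int main(void) {
        int fcfs[] = { 10, 70, 75, 23, 65 };  /* queue order           */
        int sstf[] = { 65, 70, 75, 23, 10 };  /* nearest-first from 55 */
        printf("FCFS: %d tracks\n", head_movement(55, fcfs, 5)); /* 204 */
        printf("SSTF: %d tracks\n", head_movement(55, sstf, 5)); /* 85  */
        return 0;
    }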


Q.8. (a) Explain file concept, access methods, and directory implementation. (6.5)
Ans. File Concept: Computers can store information on several different storage media, such as magnetic disks, magnetic tapes and optical disks. So that the computer system will be convenient to use, the operating system provides a uniform logical view of information storage. The operating system abstracts from the physical properties of its storage devices to define a logical storage unit, the file. Files are mapped, by the operating system, onto physical devices. These storage devices are usually non-volatile, so the contents are persistent through power failures and system reboots. Related concepts include:
File attributes
File operations
File types
File structure
Internal file structure
Access Methods: Files store information. When it is used, this information must be accessed and read into computer memory. There are several ways in which the information in a file can be accessed.
Sequential Access: The simplest access method is sequential access. Information in the file is processed in order, one record after the other. This mode of access is by far the most common; for example, editors and compilers usually access files in this fashion.
Direct Access: Another method is direct access (or relative access). A file is made up of fixed-length logical records that allow programs to read and write records rapidly in no particular order. The direct-access method is based on a disk model of a file, since disks allow random access to any file block.
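A small C sketch contrasting the two access methods; the file name and the 64-byte record size are illustrative assumptions:

    #include <stdio.h>

    #define RECLEN 64                       /* illustrative record size */

    int main(void) {
        char rec[RECLEN];
        FILE *f = fopen("data.bin", "rb");  /* hypothetical record file */
        if (!f) return 1;

        /* Sequential access: process records in order, one after another. */
        while (fread(rec, RECLEN, 1, f) == 1) {
            /* process rec */
        }

        /* Direct access: jump straight to record k with a single seek. */
        long k = 42;
        fseek(f, k * RECLEN, SEEK_SET);
        if (fread(rec, RECLEN, 1, f) == 1) {
            /* process record k */
        }

        fclose(f);
        return 0;
    }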
Q.8. (b) Explain file types and file access (sequential access and random access). (6)
Ans. File Types: One major consideration in designing a file system, and the entire operating system, is whether the operating system should recognise and support file types.
If an operating system recognizes the type of a file, it can then operate on the file in reasonable ways. For example, a common mistake occurs when a user tries to print the binary object form of a program.
A common technique for implementing file types is to include the type as part of the file name. The name is split into two parts, a name and an extension, usually separated by a period character.
File type        Usual extension           Function
Executable       exe, com, bin, or none    ready-to-run machine-language program
Object           obj, o                    compiled, machine language, not linked
Source code      c, p, pas, f77, asm       source code in various languages
Batch            bat, sh                   commands to the command interpreter
Text             txt, doc                  textual data, documents
Word processor   wp, tex, rtf, doc         various word-processor formats
Library          lib, a                    libraries of routines for programmers
Print or view    ps, dvi, gif              ASCII or binary file in a format for printing or viewing
File Allocation Methods:
1. Contiguous Allocation: This method requires each file to occupy a set of contiguous addresses on the disk. Disk addresses define a linear ordering on the disk. The difficulty with contiguous allocation is finding space for a new file. It also suffers from external fragmentation.
2. Linked Allocation: In linked allocation, each file is a linked list of disk blocks. The directory contains a pointer to the first (and optionally the last) block of the file. There is no external fragmentation with linked allocation; any free block can be used to satisfy a request. The problem with it is that it is inefficient for direct access; it is effective only for sequential-access files.
3. Indexed Allocation: This allocation method is the solution to the problems of both contiguous and linked allocation. This is done by bringing all the pointers together into one location, called the index block. Each file has its own index block, which is an array of disk-block addresses. The directory contains the address of the index block of the file.
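The practical difference between the three methods shows up in how the i-th block of a file is located. The following C fragment is a simplified illustration; next_block_on_disk is a hypothetical helper that reads a link pointer from the disk:

    /* Contiguous: the i-th block is computed directly. */
    int contiguous_block(int start, int i) {
        return start + i;
    }

    /* Linked: reaching block i requires following i pointers on disk. */
    int linked_block(int first, int i) {
        int b = first;
        while (i-- > 0)
            b = next_block_on_disk(b);      /* hypothetical disk read */
        return b;
    }

    /* Indexed: one lookup in the file's index block. */
    int indexed_block(const int *index_block, int i) {
        return index_block[i];
    }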
Q.9. Write short notes on: (12.5)
(i) Log-structured file system
(ii) Efficiency and usage of disk space
(iii) File-system mounting
(iv) The protection strategies provided for files
Ans. (i) Log-Structured File System: A log-structured file system is a file system in which data and metadata are written sequentially to a circular buffer, called a log. The design is based on the hypothesis that conventional file-system layouts will no longer be effective, because ever-increasing memory sizes on modern computers allow most reads to be served from the cache, leading to I/O becoming write-heavy.
A log-structured file system thus treats its storage as a circular log and writes sequentially to the head of the log.
The design rationale for log-structured file systems assumes that most reads will be optimized away by ever-enlarging memory caches. This assumption does not always hold:
(i) on magnetic media;
(ii) on flash memory.
(ii) Efficiency and Usage of Disk Space: The efficient use of disk space depends heavily on the disk-allocation and directory algorithms in use. For instance, UNIX inodes are preallocated on a partition, so even an empty disk has a percentage of its space lost to inodes. However, by preallocating the inodes and spreading them across the partition, we improve the file system's performance. This improved performance is a result of the UNIX allocation and free-space algorithms, which try to keep a file's data blocks near that file's inode block to reduce seek time.
As an example, consider how efficiency is affected by the size of the pointers used to access data. Most systems use either 16-bit or 32-bit pointers throughout the operating system. These pointer sizes limit the length of a file to either 2^16 bytes (64 KB) or 2^32 bytes (4 GB). Some systems implement 64-bit pointers to increase this limit to 2^64 bytes, which is a very large number indeed. However, 64-bit pointers take more space to store, and in turn make the allocation and free-space-management methods (linked lists, indexes and so on) use more disk space.
(iii) File-System Mounting: Just as a file must be opened before it is used, a file system must be mounted before it can be available to processes on the system. The mount procedure is straightforward. The operating system is given the name of the device and the location within the file structure at which to attach the file system (called the mount point). For instance, on a UNIX system, a file system containing users' home directories might be mounted as /home; then, to access the directory structure within that file system, one could precede the directory names with /home, as in /home/jane. Mounting that file system under /users would instead result in the path name /users/jane to reach the same directory.
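On Linux, for example, such a mount can be performed with the mount(2) system call; the device name and file-system type below are assumptions for the sketch (the shell equivalent would be "mount -t ext4 /dev/sdb1 /home", run as root):

    #include <stdio.h>
    #include <sys/mount.h>

    int main(void) {
        /* Attach the file system on /dev/sdb1 at the mount point /home. */
        if (mount("/dev/sdb1", "/home", "ext4", 0, NULL) == -1) {
            perror("mount");
            return 1;
        }
        return 0;
    }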
Consider the actions of the Macintosh operating system. Whenever the system encounters a disk for the first time (hard disks are found at boot time; floppy disks are seen when they are inserted into the drive), the Macintosh operating system searches for a file system on the device. If it finds one, it automatically mounts the file system at the root level, adding a folder icon on the screen labeled with the name of the file system.
(iv) Protection Strategies provided for Files: When information is kept in a computer system, a major concern is its protection from both physical damage (reliability) and improper access (protection).
Reliability is generally provided by duplicate copies of files. Many computers have system programs that automatically copy disk files to tape at regular intervals to maintain a copy should a file system be accidentally destroyed. File systems can also be damaged by hardware problems.
Protection can be provided in many ways. For a small single-user system, we might provide protection by physically removing the floppy disks and locking them in a desk drawer or file cabinet. In a multiuser system, however, other mechanisms are needed.
