
Approved by AICTE, Affiliated to VTU, Accredited as grade A Institution by NAAC.

(All UG branches – CSE, ECE, EEE, ISE & Mech.E accredited by NBA for academic years
2018-19 to 2020-21 & valid upto 30.06.2021)
Post box no. 7087, 27th cross, 12th Main, Banashankari 2nd Stage, Bengaluru- 560070, INDIA

Ph: 91-80- 26711780/81/82 Email: principal@bnmit.in, bnmitprincipal@gmail.com, www.bnmit.org

Department of Electronics and Communication Engineering


Question Bank for Module I
Course Name: Operating System Faculty Name: Kiran S M
Subject Code: 17EC553 Year of Study: 2018-19
1. What is an operating system? Explain abstract view of component of a computer system.
2. Explain the goals of an Operating System.
3. What is OS? What are the common tasks performed by OS and when are they performed?
4. List and explain the services provided by an OS that are designed to make computer system more
convenient for users
5. Explain the basic operations of OS
6. Explain the typical computational structures and mention the OS responsibilities in each
7. Explain static and dynamic allocation with respect to resource and memory.
8. Explain sequential and concurrent sharing of resources with examples.
9. Explain the terms Efficiency, System performance and User service
10. Briefly explain the different classes of OS, specify the primary concern and key concepts used.
11. Describe the batch processing system and functions of scheduling and memory management for the
same.
12. Explain turnaround time in batch processing system.
13. Explain the key concepts of multiprogramming OS and explain the functions of multiprogramming
OS/multiprogramming kernel.
14. Why should an I/O bound program be given higher priority in a multiprogramming environment? Illustrate with an example.
15. Explain the key concepts of time sharing OS and explain memory management in time sharing OS.
16. Explain time sharing OS with respect to scheduling.
17. Explain the key concepts and features of real time operating system.
18. Define distributed system. Give key concepts and techniques used in distributed OS.
19. What is distributed operating system? What are the features/advantages of distributed OS.
20. Explain the following terms:
• Computational structures
• Sub-request
• Throughput
• Scheduling
• Response time
• Turn-around time
• Pre-emption
• Virtual resources
• Time slice
• Swapping
• Hard real time system
• Soft real time system
• Real time application
• Virtual machines
21. Distinguish between:
• CPU burst and I/O burst jobs
• Batch system and time sharing system

1. What is an operating system? Explain abstract view of component of a computer system.


An operating system (OS) means different things to different users.
• To a school or college student, the OS is the software that permits access to the Internet.
• To a programmer, the OS is the software that makes it possible to develop programs on a computer system.
• To a user of an application package, the OS is simply the software that makes it possible to use the package.
• To a technician in, say, a computerized chemical plant, the OS is the invisible component of a computer system that controls the plant.
Abstract View of OS: A user's abstract view contains important features of a system from a user's
perspective. It helps an OS designer to understand the requirements of a user, which helps in planning the
features of an OS. The key advantage of using an abstract view in design is that it helps to control
complexity of the design by separating the concerns of different parts of a system. Abstract views are also
useful for understanding the design and implementation of a system.
Figure 1.1 contains an abstract view of the structure of an OS, which shows three main parts of an OS.
Each part consists of a number of programs. The kernel is the core of the OS. It controls operation of the
computer and provides a set of functions and services to use the CPU and resources of the computer. Non-
kernel programs implement user commands. These programs do not interact with the hardware, they use
facilities provided by kernel programs. Programs in the user interface part either provide a command line
interface or a graphical user interface (GUI) to the user. These programs use facilities provided by non-
kernel programs. A user interacts with programs in the user interface—typically with the command
interpreter—to request use of resources and services provided by the system.

Figure 1.1: A designer's abstract view of an OS


Figure 1.1 has interesting properties. It contains a hierarchical arrangement of program layers in which
programs in a higher layer use the facilities provided by programs in the layer below it. In fact, each layer
takes an abstract view of the layer below it. In this view the lower layer appears as a system that is capable
of executing certain commands. It could even be a machine that is capable of performing certain
operations. The fact that the lower layer consists of a set of programs rather than a computer system makes
no difference to the higher layer. Each layer extends capabilities of the machine provided by the lower
layer. The machine provided by the user interface layer understands the commands in the command
language of the OS.

2. Explain the goals of an Operating System.


The fundamental goals of an operating system are:
• Efficient use: Ensure efficient use of a computer’s resources.
• User convenience: Provide convenient methods of using a computer system.
• Noninterference: Prevent interference in the activities of its users.
Efficient Use: An operating system must ensure efficient use of the fundamental computer system
resources of memory, CPU, and I/O devices such as disks and printers. Poor efficiency can result if a
program does not use a resource allocated to it, e.g., if memory or I/O devices allocated to a program
remain idle. Such a situation may have a snowballing effect: Since the resource is allocated to a program, it
is denied to other programs that need it. These programs cannot execute, hence resources allocated to them
also remain idle. In addition, the OS itself consumes some CPU and memory resources during its own
operation, and this consumption of resources constitutes an overhead that also reduces the resources
available to user programs. To achieve good efficiency, the OS must minimize the waste of resources by
programs and also minimize its own overhead. Efficient use of resources can be obtained by monitoring
use of resources and performing corrective actions when necessary
User Convenience: User convenience has many facets, as Table 1.1 indicates. In the early days of
computing, user convenience was synonymous with bare necessity: the mere ability to execute a program
written in a higher level language was considered adequate. Experience with early operating systems led to
demands for better service, which in those days meant only fast response to a user request.
Other facets of user convenience evolved with the use of computers in new fields. Early operating systems
had command-line interfaces, which required a user to type in a command and specify values of its
parameters. However, simpler interfaces were needed to facilitate use of computers by new classes of
users. Hence graphical user interfaces (GUIs) were evolved. These interfaces used icons on a screen to
represent programs and files and interpreted mouse clicks on the icons and associated menus as commands
concerning them. In many ways, this move can be compared to the spread of car driving skills in the first
half of the twentieth century.
Table 1.1: Facets of User Convenience

Facet Examples

Fulfilment of necessity Ability to execute programs, use the file system

Good service Speedy response to computational requests

User friendly interfaces Easy to use commands, GUI

New programming model Concurrent programming

Web oriented features Means to set up web enabled services

Evolution Add new features, use new computer technologies


Noninterference: A computer user can face different kinds of interference in his computational activities.
Execution of his program can be disrupted by actions of other persons, or the OS services which he wishes
to use can be disrupted in a similar manner. The OS prevents such interference by allocating resources for
exclusive use of programs and OS services, and preventing illegal accesses to resources. Another form of
interference concerns programs and data stored in user files. A computer user may collaborate with some
other users in the development or use of a computer application, so he may wish to share some of his files
with them. Attempts by any other person to access his files are illegal and constitute interference. To
prevent this form of interference, an OS has to know which files of a user can be accessed by which
persons. It is achieved through the act of authorization, whereby a user specifies which collaborators can
access what files. The OS uses this information to prevent illegal accesses to files.

3. What is OS? What are the common tasks performed by OS and when they are performed.
The operating system (OS) means different things to different users.
• To a school or college student, the OS is the software that permits access to the Internet.
• To a programmer, the OS is the software that makes it possible to develop programs on a computer system.
• To a user of an application package, the OS is simply the software that makes it possible to use the package.
• To a technician in, say, a computerized chemical plant, the OS is the invisible component of a computer system that controls the plant.
Common tasks performed by OS:

• Maintain authorization information: when a user specifies which collaborators can access what programs or data
• Construct a list of all the resources in the system: during booting
• Initiate execution of programs: at user commands
• Maintain resource usage information by programs and current status of all the programs: continuously during OS operation
• Maintain current status of all resources and allocate resources to programs when required: at resource request or release
• Perform scheduling: during OS operation
• Maintain information for protection: during OS operation
• Handle requests made by users and their programs: at user requests


4. List and explain the services provided by an OS that are designed to make computer system more
convenient for users
User convenience has many facets, as Table 1.2 indicates. In the early days of computing, user convenience
was synonymous with bare necessity—the mere ability to execute a program written in a higher level
language was considered adequate. Experience with early operating systems led to demands for better
service, which in those days meant only fast response to a user request. Other facets of user convenience
evolved with the use of computers in new fields. Early operating systems had command-line interfaces,
which required a user to type in a command and specify values of its parameters. Users needed substantial
training to learn use of the commands, which was acceptable because most users were scientists or
computer professionals. However, simpler interfaces were needed to facilitate use of computers by new
classes of users. Hence graphical user interfaces (GUIs) were evolved. These interfaces used icons on a
screen to represent programs and files and interpreted mouse clicks on the icons and associated menus as
commands concerning them. In many ways, this move can be compared to the spread of car driving skills
in the first half of the twentieth century. Over a period of time, driving became less of a specialty and more
of a skill that could be acquired with limited training and experience.
Computer users attacked new problems as computing power increased. New models were proposed for
developing cost-effective solutions to new classes of problems. Some of these models could be supported
by the compiler technology and required little support from the OS; modular and object-oriented program
design are two such models. Other models like the concurrent programming model required specific
support features in the OS. Advent of the Internet motivated setting up of Web-enabled servers, which
required networking support and an ability to scale up or scale down the performance of a server in
response to the amount of load directed at it. Users and their organizations invest considerable time and
effort in setting up their applications through an operating system. This investment must be protected when
new application areas and new computer technologies develop, so operating systems need to evolve to
provide new features and support new application areas through new computer technologies.
Table 1.2: Facets of User Convenience

Facet Examples

Fulfilment of necessity Ability to execute programs, use the file system

Good service Speedy response to computational requests

User friendly interfaces Easy to use commands, GUI

New programming model Concurrent programming

Web oriented features Means to set up web enabled services

Evolution Add new features, use new computer technologies

5. Explain the basic operations of OS


The primary concerns of an OS during its operation are execution of programs, use of resources, and
prevention of interference with programs and resources. Accordingly, its three principal functions are:
• Program management: The OS initiates programs, arranges their execution on the CPU, and terminates them when they complete their execution. Since many programs exist in the system at any time, the OS performs a function called scheduling to select a program for execution.
• Resource management: The OS allocates resources like memory and I/O devices when a program needs them. When the program terminates, the OS deallocates these resources and allocates them to other programs that need them.
• Security and protection: The OS implements noninterference in users’ activities through joint actions of the security and protection functions. As an example, consider how the OS prevents illegal accesses to a file. The security function prevents nonusers from utilizing the services and resources in the computer system, hence none of them can access the file. The protection function prevents users, other than the file owner or users authorized by him, from accessing the file.

6. Explain the typical computational structures and mention the OS responsibilities in each
A computational structure is a configuration of one or more programs that work towards a common goal. It is created by issuing one or more commands that specify relationships between programs and initiate their execution. Some typical computational structures are:
• A single program
• A sequence of single programs
• Co-executing programs.
Table 1.3: Computational structures and OS responsibilities

• Single program: perform program initiation/termination, resource management
• Sequence of single programs: implement program dependence, i.e., terminate the sequence if a program faces abnormal termination
• Co-executing programs: provide appropriate interfaces between programs, perform termination of the co-executing programs

Single program: A single program computation consists of the execution of a program on a given set of
data. The program can be either sequential or concurrent. A single program is the simplest computational
structure; it matches with the conventional notion of a program. In a concurrent program, different parts of
the program can execute concurrently. The OS needs to know the identities of these parts to organize their
concurrent execution. This function is not served by the user interface of the OS.
Sequence of single programs: Each single program in the sequence is initiated by the user through a
separate command. However, a sequence of single programs has its own semantics—a program should be
executed only if the previous programs in the sequence executed successfully. To achieve a common goal,
the programs must explicitly interface their inputs and outputs with other programs in the sequence.
Co-executing programs: A user initiates co-executing programs by indicating their names in a single
command. Co-execution semantics require that the programs should execute at the same time, rather than
one after another as in a sequence of programs, and interact with one another during their execution. The
nature of interaction is specific to a co-execution command. The OS provides appropriate interfaces between the co-executing programs.

7. Explain static and dynamic allocation with respect to resource and memory.
Static allocation: Static allocation is also called partitioning of resources. In the resource partitioning approach, the OS decides a priori what resources should be allocated to a user program. This approach is called static allocation because the allocation is made before the execution of a program begins. Static resource allocation is simple to implement. However, it lacks flexibility, which leads to problems like wastage of resources that are allocated to a program but remain unused, and the inability of the OS to grant additional resources to a program during its execution. These difficulties arise because allocation is made on the basis of perceived needs of a program, rather than its actual needs.
Dynamic allocation: Dynamic allocation is also called pool-based allocation. In the pool-based approach the OS maintains a common pool of resources and allocates from this pool whenever a program requests a resource. This approach is called dynamic allocation because allocation takes place during execution of a program. It avoids wastage of allocated resources, hence it can provide better resource utilization.
A simple resource allocation scheme uses a resource table as the central data structure (Table 1.4). Each entry in the table contains the name and address of a resource unit and its present status, i.e., whether it is free or allocated to some program. This table is built by the boot procedure by sensing the presence of I/O devices in the system.
Table 1.4: Resource allocation table

Resource name Class Address Allocation status

Printer 1 Printer 101 Allocated to P1

Printer 2 Printer 102 Free

Printer 3 Printer 103 Free

Disk 1 Disk 201 Allocated to P1

Disk 2 Disk 202 Allocated to P2

cdw 1 CD writer 301 Free

In the partitioned resource allocation approach, the OS considers the number of resources and programs in the system and decides how many resources of each kind would be allocated to a program. For example, an OS may decide that a program can be allocated 1 MB of memory, 2000 disk blocks and a monitor. Such a collection of resources is called a partition.
In the pool-based allocation approach, OS consults the resource table when a program makes a request for
a resource, and allocates accordingly. When many units of a resource class exist in the system, a resource
request only indicates the resource class and the OS checks if any unit of that class is available for
allocation. Pool based allocation incurs overhead of allocating and deallocating resources individually;
however, it avoids both problems faced by the resource partitioning approach by adapting the allocation to
resource requirements of programs.


Figure 1.2: (a) Resource partitioning, (b) pool based allocation
Figure 1.2(a) shows a set of partitions that are defined during boot time. The resource table contains entries for resource partitions rather than for individual resources. A free partition is allocated to each program before its execution is initiated. Figure 1.2(b) illustrates pool based allocation. Program P1 has been allocated a monitor, a disk area of 2000 blocks and 1 MB of memory. Program P2 has been allocated a monitor and 2 MB of memory; disk area is not allocated because P2 did not request it. Thus pool based allocation avoids allocation of resources that are not needed by a program. Programs with large or unusual resource requirements can be handled by the OS so long as the required resources exist in the system.
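
As an illustration of the pool-based scheme just described, the following C sketch models a resource table with allocate and release operations. The table contents, the structure fields and the linear search are assumptions made only for this sketch; they do not describe any particular OS.

```c
#include <stdio.h>
#include <string.h>

/* One entry of the hypothetical resource table sketched above. */
struct resource {
    char name[16];     /* e.g. "Printer 2"                */
    char class[16];    /* resource class, e.g. "Printer"  */
    int  address;      /* device address                  */
    int  allocated;    /* 0 = free, 1 = allocated         */
    char owner[8];     /* program it is allocated to      */
};

static struct resource table[] = {
    {"Printer 1", "Printer", 101, 1, "P1"},
    {"Printer 2", "Printer", 102, 0, ""  },
    {"Disk 1",    "Disk",    201, 1, "P1"},
    {"Disk 2",    "Disk",    202, 0, ""  },
};
static const int N = sizeof table / sizeof table[0];

/* Pool-based allocation: the request names only a resource class; the
   OS scans the table for any free unit of that class.                 */
int allocate(const char *class, const char *prog)
{
    for (int i = 0; i < N; i++)
        if (!table[i].allocated && strcmp(table[i].class, class) == 0) {
            table[i].allocated = 1;
            strcpy(table[i].owner, prog);
            return table[i].address;       /* success                   */
        }
    return -1;                             /* no free unit of the class */
}

/* Deallocation marks the unit free so another program can be given it. */
void release(int address)
{
    for (int i = 0; i < N; i++)
        if (table[i].address == address) {
            table[i].allocated = 0;
            table[i].owner[0] = '\0';
        }
}

int main(void)
{
    int d = allocate("Disk", "P2");        /* P2 asks for any free disk */
    printf("P2 got the disk at address %d\n", d);
    release(d);
    return 0;
}
```

With the assumed table above, the request allocate("Disk", "P2") returns the address of Disk 2, the only free unit of the Disk class, and release later marks it free again.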

8. Explain sequential and concurrent sharing of resources with examples.


In sequential sharing, a resource is allocated for exclusive use by a program. When the resource is
deallocated, it is marked free in the resource table. Now it can be allocated to another program. In
concurrent sharing, two or more programs can concurrently use the same resource. Examples of concurrent
sharing are data files like bus time tables. Most other resources cannot be shared concurrently.
The OS deallocates a resource when the program to which it is allocated either terminates or makes an
explicit request for deallocation. Sometimes it deallocates a resource by force to ensure fairness in its
utilization by programs, or to realize certain system-level goals. This action is called resource preemption.
A program that loses the CPU in this manner is called a preempted program.
CPU sharing: The CPU can be shared only in a sequential manner, so it can be assigned to only one
program at a time. Other programs in the system have to wait their turn on the CPU. The OS must share the
CPU among programs in a fair manner. Therefore, after a program has executed for a reasonable amount of
time, it preempts the program and gives the CPU to another program. The function of deciding which
program should be given the CPU, and for how long, is called scheduling.

Figure 1.3: A schematic of scheduling


Figure 1.3 shows a schematic for CPU scheduling. Several programs await allocation of the CPU. The
scheduler selects one of these programs for execution on the CPU. A preempted program is added to the set
of programs waiting for the CPU.
Memory sharing: Parts of memory can be treated as independent resources. Both partitioning and pool-
based allocation can be used to manage memory. Partitioning is simple to implement. It also simplifies
protection of memory areas allocated to different programs. The pool-based allocation achieves better use
of memory. Memory can be preempted from inactive programs and used to accommodate active programs.
The special term swapping is used for memory preemption.
Figure 1.4 illustrates the approaches to memory allocation. In the fixed partitioned approach the memory is
divided a priori into many areas. A partition is allocated to a program at its initiation. Figure 1.4(a)
illustrates the situation when memory has been divided into equal sized partitions and two of them have
been allocated to programs A and B. Two partitions are currently free. Note that the size of B is smaller
than the size of the partition, so some memory allocated to it remains unused. Figure 1.4(b) illustrates the
pool based approach. Newly initiated programs are allocated memory from the pool. Each program is
allocated only as much memory as requested by it, so allocated memory is not wasted.

Figure 1.4: Memory allocation schematics: (a) partitioned allocation, (b) pool-based allocation
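
The fixed-partition case of Figure 1.4(a) can be sketched in C as follows; the partition count, partition size and program sizes are assumed values chosen only to show the internal wastage mentioned above.

```c
#include <stdio.h>

#define NPART 4            /* number of fixed partitions (assumed)       */
#define PART_SIZE 1024     /* each partition is 1024 KB (assumed)        */

static int owner[NPART];   /* 0 = free, otherwise id of resident program */
static int used[NPART];    /* memory actually used inside the partition  */

/* Fixed partitioning: a whole partition is given to the program even if
   the program is smaller, so (PART_SIZE - size_kb) KB remains unused.   */
int alloc_partition(int prog, int size_kb)
{
    if (size_kb > PART_SIZE)
        return -1;                        /* program does not fit at all */
    for (int i = 0; i < NPART; i++)
        if (owner[i] == 0) {
            owner[i] = prog;
            used[i]  = size_kb;
            printf("program %d -> partition %d, %d KB of it unused\n",
                   prog, i, PART_SIZE - size_kb);
            return i;
        }
    return -1;                            /* all partitions are in use   */
}

int main(void)
{
    alloc_partition(1, 1024);   /* program A fills its partition         */
    alloc_partition(2, 700);    /* program B leaves 324 KB unused        */
    return 0;
}
```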
Disk sharing: Different parts of a disk can be treated as independent resources. Both partitioning and pool
based approaches are feasible; however, modern OSs show a preference for the pool based approach. Disk
preemption is not practiced by an operating system; individual users can preempt their disk areas by
copying their files onto tape cartridges.

9. Explain the terms Efficiency, System performance and User service

• Efficiency of use: CPU efficiency (percent utilization of the CPU)
• System performance: Throughput (amount of 'work' done per unit time)
• User service: Turn-around time (time to complete a job or process) and Response time (time to implement one interaction between a user and his/her process)

Efficiency and system performance: Although good efficiency of use is important, it is rarely a design
goal of an operating system. Good performance in its computing environment is an important design goal.
Efficiency of use may be a means to this end, for example, efficiency of resources like the CPU and disks
is an important parameter for fine-tuning the performance of a system. System performance is typically measured as throughput: the throughput of a system is the number of jobs, programs, processes or subrequests completed by it per unit time.
The unit of work used in computing throughput depends on the nature of the computing environment. In a
non-interactive environment, throughput of an OS is measured in terms of number of jobs, programs or
processes completed per unit time. In an interactive environment, throughput may be measured in terms of
the number of subrequests completed per unit time. In a specialized computing environment, performance
may be measured in terms meaningful to the application being serviced, e.g., the number of transactions in
a banking environment. Throughput can also be used as a measure of performance for I/O devices. For
example, the throughput of a disk can be measured as the number of I/O operations completed per unit time
or the number of bytes transferred per unit time.
User service: User service is a measurable aspect of user convenience. It indicates how a user's
computation has been treated by the OS. We define two measures of user service—turn-around time and
response time. These are used in non-interactive and interactive computing environments, respectively.
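
A small C example may help make the two measures concrete. The job submission and completion times below are hypothetical; the program merely applies the definitions of throughput and turn-around time given above.

```c
#include <stdio.h>

int main(void)
{
    /* Hypothetical data: submission and completion times, in minutes,
       of four jobs processed by an OS in a non-interactive environment. */
    double submitted[] = {0, 5, 10, 15};
    double completed[] = {20, 30, 45, 60};
    int n = 4;

    /* Throughput: number of jobs completed per unit time over the
       observed period (here converted to jobs per hour).                */
    double period = completed[n - 1] - submitted[0];
    printf("throughput = %.2f jobs per hour\n", n / period * 60.0);

    /* Turn-around time of a job: completion time minus submission time. */
    double total = 0;
    for (int i = 0; i < n; i++)
        total += completed[i] - submitted[i];
    printf("mean turn-around time = %.2f minutes\n", total / n);
    return 0;
}
```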

10. Briefly explain the different classes of OS, specify the primary concern and key concepts used.
Table 1.5: Key Features of Classes of Operating Systems

Classes of operating systems have evolved over time as computer systems and users’ expectations of them
have developed. Table 1.5 lists five fundamental classes of operating systems that are named according to
their defining features. The table shows when operating systems of each class first came into widespread
use; what fundamental effectiveness criterion, or prime concern, motivated its development; and what key
concepts were developed to address that prime concern.
Batch Processing Systems: In a batch processing operating system, the prime concern is CPU efficiency.
The batch processing system operates in a strict one-job-at-a-time manner; within a job, it executes the
programs one after another. Thus only one program is under execution at any time. The opportunity to
enhance CPU efficiency is limited to efficiently initiating the next program when one program ends, and
the next job when one job ends, so that the CPU does not remain idle.
Multiprogramming Systems: A multiprogramming operating system focuses on efficient use of both the
CPU and I/O devices. The system has several programs in a state of partial completion at any time. The OS
uses program priorities and gives the CPU to the highest-priority program that needs it. It switches the CPU
to a low-priority program when a high-priority program starts an I/O operation, and switches it back to the
high-priority program at the end of the I/O operation. These actions achieve simultaneous use of I/O
devices and the CPU.
Time-Sharing Systems: A time-sharing operating system focuses on facilitating quick response to sub-
requests made by all processes, which provides a tangible benefit to users. It is achieved by giving a fair
execution opportunity to each process through two means: The OS services all processes by turn, which is
called round-robin scheduling. It also prevents a process from using too much CPU time when scheduled to
execute, which is called time-slicing. The combination of these two techniques ensures that no process has
to wait long for CPU attention.
Real-Time Systems: A real-time operating system is used to implement a computer application for
controlling or tracking of real-world activities. The application needs to complete its computational tasks in
a timely manner to keep abreast of external events in the activity that it controls. To facilitate this, the OS
permits a user to create several processes within an application program, and uses real-time scheduling to
interleave the execution of processes such that the application can complete its execution within its time
constraint.
Distributed Systems: A distributed operating system permits a user to access resources located in other
computer systems conveniently and reliably. To enhance convenience, it does not expect a user to know the
location of resources in the system, which is called transparency. To enhance efficiency, it may execute
parts of a computation in different computer systems at the same time. It uses distributed control; i.e., it
spreads its decision-making actions across different computers in the system so that failures of individual
computers or the network do not cripple its operation.

11. Describe the batch processing system and functions of scheduling and memory management for
the same.
A batch is a sequence of user jobs formed for the purpose of processing by a batch processing operating
system. Each job in the batch is independent of other jobs in the batch; jobs typically belong to different
users. A computer operator forms a batch by organizing a set of user jobs in a sequence and inserting
special marker cards to indicate the start and end of the batch. The operator submits a batch as a unit of
processing by the batch processing operating system. The primary function of the batch processing system
is to service the jobs in a batch one after another without requiring the operator's intervention. This is
achieved by automating the transition from execution of one job to that of the next job in the batch.

Figure 1.5: schematic of a batch processing system


Batch processing is implemented by the kernel (also called the batch monitor), which resides in one part of
the computer's memory. The remaining memory is used for servicing a user job—the current job in the
batch. When the operator gives a command to initiate the processing of a batch, the batch monitor sets up
the processing of the first job of the batch. At the end of the job, it performs job termination processing and
initiates execution of the next job. At the end of the batch, it performs batch termination processing and
awaits initiation of the next batch by the operator. Thus the operator needs to intervene only at the start and
end of a batch.
Figure 1.5 shows a schematic of a batch processing system. The batch consists of n jobs job1, job2, ..., jobn,
one of which is currently in execution. The figure depicts a memory map showing the arrangement of the
batch monitor and the current job of the batch in the computer's memory. The part of memory occupied by
the batch monitor is called the system area, and the part occupied by the user job is called the user area.
12. Explain turnaround time in batch processing system.
The notion of turn-around time is used to quantify user service in a batch processing system. Due to
spooling, the turn-around time of a job jobi processed in a batch processing system includes the following
time intervals:
a) Time until a batch is formed (i.e., time until the jobs jobi+1, ..., jobn are submitted)
b) Time spent in executing all jobs of the batch.
c) Time spent in printing and sorting the results belonging to different jobs.
Thus, the turn-around time for jobi is a function of many factors, its own execution time being only one of
them. It is clear that use of batch processing does not guarantee improvements in the turn-around times of
jobs. In fact, the service to individual users would probably deteriorate due to the three factors mentioned
above. This is not surprising because batch processing does not aim at improving user service—it aims at
improving CPU utilization.

Figure 1.6: Turn around time in Batch processing system

Example: Figure 1.6 illustrates different components in the turn-around time of a job. The user submits the job at time t0. However, the batch is not formed immediately. The staff of the computer center forms a batch only after a sufficient number of jobs have been submitted. Consequently, the batch actually gets formed at time t1. Its processing starts at time t2 and ends at t3. Printing of the results is commenced at time t4 and completes at t5. The results are returned to the user only at time t6. Thus the turn-around time of the job is (t6 - t0). It has no direct relation to its own execution time.
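
As a purely illustrative calculation with assumed clock times (not taken from the figure): if the job is submitted at t0 = 9:00, the batch is formed at t1 = 9:40, batch execution runs from t2 = 9:45 to t3 = 10:30, printing and sorting of the results run from t4 = 10:35 to t5 = 10:50, and the results are returned to the user at t6 = 11:00, then the turn-around time of the job is t6 - t0 = 2 hours, even though the job's own execution within the batch may have taken only a few minutes.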


13. Explain the key concepts of multiprogramming OS and explain the functions of
multiprogramming OS/multiprogramming kernel.
Concurrency of operation between the CPU and the I/O subsystem can be exploited to get more work done
in the system. The OS can put many user programs in the memory, and let the CPU execute instructions of
one program while the I/O subsystem is busy with an I/O operation for another program. This technique is
called multiprogramming.
Figure 1.7 illustrates operation of a multiprogramming OS. The memory contains three programs. An I/O
operation is in progress for program1, while the CPU is executing program2. The CPU is switched to
program3 when program2 initiates an I/O operation, and it is switched to program1 when program1’s I/O
operation completes. The multiprogramming kernel performs scheduling, memory management and I/O
management. It uses a simple scheduling policy and performs simple partitioned or pool-based allocation
of memory and I/O devices. The multiprogramming arrangement ensures synchronization of CPU and I/O
activities in a simple manner: it allocates the CPU to a program only when the program is not performing an I/O operation.

Figure 1.7: Operation of a multiprogramming system


Functions of multiprogramming OS /multiprogramming kernel:
Important functions of the multiprogramming kernel are:
a) Scheduling
b) Memory management
c) I/O management.
Scheduling is performed after servicing every interrupt. Multiprogramming systems use a simple priority-based scheduling scheme. Functions b and c involve allocation of memory and I/O devices; simple partitioned or pool-based allocation can be used for this purpose. Resource sharing necessitates protection against mutual interference—the instructions, data, and I/O operations of one program should be protected against interference by other programs. Two provisions are used to achieve this: memory protection is used to prevent a program from interfering with memory allocated to other programs, and the CPU is put in the non-privileged mode while executing user programs. Any effort by a user program to access memory locations situated outside its memory area, or to use a privileged instruction, now leads to an interrupt. Interrupt processing routines for these interrupts simply terminate the program that caused the interrupt.
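
The memory protection provision mentioned above can be sketched in C as follows. The lower and upper bound values and the way the "interrupt" terminates the program are modelling assumptions for illustration only; real hardware performs this check on every memory access.

```c
#include <stdio.h>
#include <stdlib.h>

/* Assumed bound values loaded by the kernel when it schedules a user
   program: the program may access only addresses in [lower, upper).   */
static unsigned lower = 4000, upper = 6000;

/* Models the memory-protection interrupt: the interrupt processing
   routine simply terminates the offending program.                    */
static void protection_interrupt(unsigned addr)
{
    printf("address %u is outside the allocated area: program terminated\n", addr);
    exit(1);
}

/* Every memory access by the user program is checked against the
   bounds; an out-of-range access leads to the interrupt.              */
static void access_memory(unsigned addr)
{
    if (addr < lower || addr >= upper)
        protection_interrupt(addr);
    printf("access to %u allowed\n", addr);
}

int main(void)
{
    access_memory(4500);    /* inside the program's memory area           */
    access_memory(7000);    /* interference attempt, raises the interrupt */
    return 0;
}
```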


14. Why I/O bound program should be given higher priority in a multiprogramming environment?
Illustrate with example.
In a multiprogramming environment, an I/O bound program should be given higher priority for the following reasons:
• CPU utilization is reasonable.
• I/O utilization is reasonable (however, I/O idling would exist if the system contains many devices capable of operating in the DMA mode).
• Periods of concurrent CPU and I/O activities are frequent.

Figure 1.8: Timing chart when I/O bound program has highest priority
Figure 1.8 depicts the operation of the system when the I/O bound program has higher priority. prog_iob is the higher priority program, hence it is given the CPU whenever it needs it, i.e., whenever it is not performing I/O. When prog_iob initiates an I/O operation, prog_cb gets the CPU. Being a CPU-bound program, prog_cb keeps the CPU busy until prog_iob's I/O completes. prog_cb is preempted when prog_iob completes its I/O since prog_iob has a higher priority than prog_cb. This explains the system behavior in the period t0-t26. Deviations from this behavior occur when prog_cb initiates an I/O operation. Now both programs are engaged in I/O and the CPU remains idle until one of them completes its I/O. This explains the CPU-idle periods t26-t27 and t28-t29. I/O-idle periods occur whenever prog_iob executes on the CPU and prog_cb is not performing I/O.


15. Explain the key concepts of time sharing OS and explain memory management in time sharing
OS.
A time-sharing operating system focuses on facilitating quick response to sub-requests made by all
processes, which provides a tangible benefit to users. It is achieved by giving a fair execution opportunity
to each process through two means: The OS services all processes by turn, which is called round-robin
scheduling. It also prevents a process from using too much CPU time when scheduled to execute, which is
called time-slicing. The combination of these two techniques ensures that no process has to wait long for
CPU attention.
Memory management: The technique of swapping is used in time sharing system for managing memory.
This technique provides an alternative whereby a computer system can support a large number of users
without having to possess a large memory. Swapping is the technique of temporarily removing inactive
programs from the memory of a computer system.

Figure 1.9: A schematic of swapping


An inactive program is one which is neither executing on the CPU, nor performing an I/O operation. Figure
1.9 illustrates swapping as used in a practical situation. The programs existing in the memory are classified
into three categories:
a) Active programs: one active program executes on the CPU while others perform I/O.
b) Programs being swapped out of the memory.
c) Programs being swapped into the memory.
Whenever an active program becomes inactive, the OS swaps it out by copying its instructions and data
onto a disk. A new program is loaded in its place. The new program is added at the end of the scheduling
list; it receives CPU attention in due course. Use of swapping is feasible in time sharing systems because
the time sharing kernel can estimate when a program is likely to be scheduled next. It can use this estimate
to ensure that the program is swapped in before its turn on the CPU. Swapping increases the OS overhead
due to the disk I/O involved.
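
A toy C model of the swap-out decision described above is given below; the program states, the in-memory flag and the choice of which inactive program to remove are simplifying assumptions, not the actual mechanism of any kernel.

```c
#include <stdio.h>

enum state { EXECUTING, DOING_IO, INACTIVE };

struct program {
    int id;
    enum state st;
    int in_memory;          /* 1 = resident, 0 = swapped out to disk */
};

/* Swap-out: copying the program's instructions and data to disk is
   modelled here by just clearing a flag.                              */
void swap_out(struct program *p)
{
    p->in_memory = 0;
    printf("program %d swapped out to disk\n", p->id);
}

/* The kernel scans resident programs and swaps out an inactive one,
   i.e. one that is neither executing nor performing an I/O operation. */
void reclaim_memory(struct program progs[], int n)
{
    for (int i = 0; i < n; i++)
        if (progs[i].in_memory && progs[i].st == INACTIVE) {
            swap_out(&progs[i]);
            return;             /* memory freed for a new program */
        }
}

int main(void)
{
    struct program progs[] = {
        {1, EXECUTING, 1}, {2, DOING_IO, 1}, {3, INACTIVE, 1}
    };
    reclaim_memory(progs, 3);   /* program 3 is chosen for swapping */
    return 0;
}
```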

16. Explain time sharing OS with respect to scheduling.


User service in a time sharing system is characterized by response times to user requests, so the time
sharing kernel must provide good response times to all users. To realize this goal all users must get an
equal opportunity to present their computational requests and have them serviced. Each user must also
receive reasonable service.


Two provisions are made to ensure this.


a. Programs are not assigned priorities because assignment of priorities may deny OS attention to low
priority programs. Instead, programs are executed by turn.
b. A program is prevented from consuming unreasonable amounts of CPU time when scheduled to
execute. This provision ensures that every request will receive OS attention without unreasonable
delays.
These provisions are implemented using the techniques of round-robin scheduling and time slicing,
respectively.
Round-robin scheduling: When a user makes a computational request to his program, the program is
added to the end of a scheduling list. The scheduler always removes the first program from the scheduling
list and gives the CPU to it. When a program finishes computing its response to a request, it is removed
from the CPU and the first program in the scheduling list is selected for execution. When the user makes another
request, the program is once again added to the end of the scheduling list.
Time slicing: The notion of a time slice is used to prevent monopolization of the CPU by a program. Time
slicing is an implementation of the notion of a time slice. Every program is subjected to the time limit
specified by the time slice. A program exceeding this limit is preempted. Figure 1.10 illustrates a schematic
of round-robin scheduling with time slicing. A preempted program is added to the end of the scheduling
list. A program may be scheduled and preempted a few times before it produces a response.

Figure 1.10: A schematic of round-robin scheduling with time slicing


The time sharing kernel uses the interval timer of the computer to implement time slicing. The interval
timer consists of a register called timer register that can store an integer number representing a time
interval in hours, minutes and seconds. The contents of the register are decremented with an appropriate
periodicity, typically a few times every second. A timer interrupt is raised when the contents of the timer
register become zero, that is, when the time interval elapses. A time sharing OS uses round-robin
scheduling with time slicing.
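
The combination of round-robin scheduling and time slicing can be illustrated with a short C simulation. The time-slice value and the CPU time needed by each request are assumed figures; the circular scan is only an approximation of the scheduling list described above.

```c
#include <stdio.h>

#define SLICE 10    /* assumed time slice, in milliseconds */

int main(void)
{
    /* Remaining CPU time needed to compute the response to three
       pending subrequests (assumed values, in milliseconds).      */
    int need[] = {25, 8, 17};
    int n = 3;

    /* The circular scan approximates the round-robin scheduling list:
       each scheduled program runs for at most one time slice and, if it
       has not finished, is preempted and waits for its next turn.       */
    int t = 0, remaining = n;
    while (remaining > 0) {
        for (int i = 0; i < n; i++) {
            if (need[i] == 0)
                continue;                        /* already completed    */
            int run = need[i] < SLICE ? need[i] : SLICE;
            t += run;
            need[i] -= run;
            if (need[i] == 0) {
                printf("program %d produces its response at t = %d ms\n",
                       i + 1, t);
                remaining--;
            } else {
                printf("program %d preempted at t = %d ms\n", i + 1, t);
            }
        }
    }
    return 0;
}
```

With these assumed values the three requests complete at 50 ms, 18 ms and 45 ms respectively, and no single program monopolizes the CPU.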

17. Explain the key concepts and features of real time operating system.
The key concept of a real time operating system is meeting deadlines, i.e., completing the required actions before a specified time. In a class of applications called real time applications, users need the computer to perform some actions in a timely manner to control the activities in an external system, or to participate in them. The timeliness of actions is determined by the time constraints of the external system. If the application takes too long to respond to an activity, a failure can occur in the external system. Therefore a real time system is implemented to provide services to real time applications before their deadlines are reached.


A real time OS provides many special features:

• Permits creation of multiple processes within an application
• Permits priorities to be assigned to processes
• Permits a programmer to define interrupts and interrupt processing routines
• Uses priority driven or deadline oriented scheduling
• Provides fault tolerance and graceful degradation capabilities
A real time application can be coded such that the OS can execute its parts concurrently. When priority-based scheduling is used, we have a situation analogous to multiprogramming within the application: if one part of the application initiates an I/O operation, the OS would schedule another part of the application. Thus, CPU and I/O activities of the application can be overlapped with one another, which helps to reduce the worst case response time of the application. Deadline scheduling is a special scheduling technique that helps an application meet its deadline. Specification of domain specific interrupts and interrupt servicing actions for them enables a real time application to respond to special conditions and events in the external system in a timely manner.
Hard real time systems partition their resources and allocate them permanently to competing processes in
the application. This approach reduces the allocation overhead and guarantees that the kernel response for a
resource request made by one process would be independent of resource utilization by other processes in
the system. Hard real time systems also avoid use of features whose performance cannot be predicted
precisely. A real time OS employs two techniques to ensure continuity of operation when faults occur—
fault tolerance, and graceful degradation. A fault tolerant computer system uses redundancy of resources to
ensure that the system will keep functioning even if a fault occurs. Graceful degradation is the ability of a
system to fall back to a reduced level of service when a fault occurs and to revert to normal operations
when the fault is rectified. The programmer can assign high priorities to crucial functions so that they
would be performed in a timely manner even when the system operates in a degraded mode.
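
The deadline-oriented scheduling mentioned above can be sketched as an earliest-deadline-first selection in C; the process set and the deadline values are hypothetical.

```c
#include <stdio.h>

struct rt_process {
    int id;
    int deadline;   /* absolute deadline, in ms from system start */
    int ready;      /* 1 if the process is ready to run           */
};

/* Deadline-oriented selection: among the ready processes, pick the one
   whose deadline is nearest, so that time constraints are most likely
   to be met.                                                           */
int pick_earliest_deadline(const struct rt_process p[], int n)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (p[i].ready && (best == -1 || p[i].deadline < p[best].deadline))
            best = i;
    return best;            /* index of selected process, or -1 if none */
}

int main(void)
{
    struct rt_process p[] = { {1, 300, 1}, {2, 120, 1}, {3, 500, 0} };
    int k = pick_earliest_deadline(p, 3);
    if (k >= 0)
        printf("schedule process %d (deadline %d ms)\n", p[k].id, p[k].deadline);
    return 0;
}
```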

18. Define distributed system. Give key concepts and techniques used in distributed OS.
A distributed system is more than a mere collection of computers connected to a network; the functioning of individual computers must be integrated to achieve effective utilization of the services. This is achieved through participation of all computers in the control functions of the operating system.
A distributed system can be defined as: “A distributed system is a system consisting of two or more nodes,
where each node is a computer system with its own memory, some networking hardware, and a capability
of performing some of the control functions of an OS”. The control functions performed by individual
computer systems contribute to effective utilization of the distributed system.
Three key concepts and techniques used in a distributed OS are:
1. Distributed control: Distributed control is the opposite of centralized control; it implies that the control
functions of the distributed system are performed by several computers in the system, instead of being
performed by a single computer. Distributed control is essential for ensuring that failure of a single
computer, or a group of computers, does not halt operation of the entire system.


2. Transparency of a resource: Transparency of a resource or service implies that a user should be able to
access it without having to know which node in the distributed system contains it. This feature enables
the OS to change the position of a software resource or service to optimize its use by computations. For
example, in a system providing transparency, a distributed file system may move a file to the node that
contains computation using the file, so that the delays involved in accessing the file over the network
would be eliminated.
3. Remote procedure call (RPC): The remote procedure call (RPC) is used by an application to execute a
procedure in another computer in the distributed system. The remote procedure may perform a part of
the computation in the application, or it may use a resource located in that computer.

19. What is distributed operating system? What are the features/advantages of distributed OS.
A distributed computer system consists of several individual computer systems connected through a
network. Thus, many resources of a kind, e.g., many memories, CPUs and I/O devices, exist in the
distributed system. A distributed operating system exploits the multiplicity of resources and the presence of
a network to provide the advantages of resource sharing across computers, reliability of operation, speedup
of applications, and communication between users.
Features/advantages of distributed operating systems are:
1. Resource sharing: “Improves resource utilization across boundaries of individual computer systems”.
2. Reliability: “Availability of resources and services despite failures”.
3. Computation speed-up: “Parts of a computation can be executed in different computer systems to
speed-up the computation”.
4. Communication: “Provides means of communication between remote entities”.
5. Incremental growth: “Capabilities of a system (e.g., its processing power) can be enhanced at a price proportional to the nature and size of the enhancement.”
Resource sharing has been the traditional motivation for distributed operating systems. The earliest form of
a distributed operating system was a network operating system, which enabled the use of specialized
hardware and software resources by geographically distant users. Resource sharing continues to be an
important aspect of distributed operating systems today, although the nature of distribution and sharing of
resources has changed due to advances in the networking technology. Sharing of resources is now equally
meaningful in a local area network (LAN). Thus, low-cost computers and workstations in an office or a
laboratory can share some expensive resources like laser printers.
One aspect of reliability is availability of a resource despite failures in system. A distributed environment
can offer enhanced availability of resources through redundancy of resources and communication paths.
For example, availability of a disk resource can be increased by having two or more disks located at
different sites in the system. If one disk is unavailable due to a failure, a process can use some other disk.
Availability of a data resource, e.g., a file, can be similarly enhanced by keeping copies of the file in
various parts of the system.
Computation speed-up implies obtaining better response times or turnaround times for an application. This
is achieved by dispersing processes of an application to different computers in the distributed system, so
that they can execute at the same time. This arrangement is qualitatively different from the overlapped
operation of processes of an application in a conventional operating system.
Users of a distributed operating system have user ids and passwords that are valid throughout the system.
This feature greatly facilitates communication between users in two ways. First, communication through
user ids automatically invokes the security mechanisms of the OS and thus ensures authenticity of
communication. Second, users can be mobile within the distributed system and still be able to
communicate with other users of the system conveniently.

20. Explain the following terms:

• Computational structure: A computational structure is a configuration of one or more programs that work towards a common goal. It is created by issuing one or more commands that specify relationships between programs and initiate their execution.
• Sub-request: In an interactive environment, a subrequest is a computational requirement presented by the user to a program.
• Throughput: The throughput of a system is the number of jobs, programs, processes or subrequests completed by it per unit time.
• Scheduling: The function of deciding which program should be given the CPU, and for how long, is called scheduling.
• Response time: The response time provided to a subrequest is the time between the submission of the subrequest by a user and the formulation of the process’s response to it.
• Turn-around time: The turn-around time of a job, program or a process is the time from its submission for processing to the time its results become available to the user.
• Pre-emption: Preemption is the forced deallocation of the CPU from a program.
• Time slice: The time slice is the largest amount of CPU time any program can consume when scheduled to execute on the CPU.
• Swapping: Swapping is a technique of temporarily removing inactive programs from the memory of a computer system.
• Hard real time system: A hard real time system is typically dedicated to processing real time applications, and provably meets the response requirements of an application under all conditions.
• Soft real time system: A soft real time system makes the best effort to meet the response requirements of a real time application but cannot guarantee that it will be able to meet them under all conditions.
• Real time application: A real time application is a program that responds to activities in an external system within a maximum time determined by the external system.
• Virtual resource: A virtual resource is a fictitious resource; it is an illusion supported by an OS through the use of a real resource. An OS may use the same real resource to support several virtual resources. This
way, it can give the impression of having a larger number of resources than it actually does. Each use of
a virtual resource results in the use of an appropriate real resource.
• Virtual machines: Virtual machines are created by partitioning the memory and I/O devices of a real machine. The advantage of this approach is twofold. Allocation of a virtual machine to each user eliminates mutual interference between users. It also permits each user to select an OS of his choice to execute on his virtual machine.

21. Distinguish between:


a. CPU burst and I/O burst jobs

CPU bound/CPU burst program:
• Involves a lot of computation and very little I/O.
• Is given lower priority than I/O bound programs in a multiprogramming system.

I/O bound/I/O burst program:
• Involves very little computation and a lot of I/O.
• Is given higher priority than CPU bound programs in a multiprogramming system.

b. Batch system and time sharing system

Batch system:
• Prime concern: CPU efficiency (avoiding CPU idle time).
• Time slicing is not used.
• No program pre-emption.
• Results are not given to the user immediately after completion of a job.

Time sharing system:
• Prime concern: good response time.
• Time slicing is used.
• Programs are pre-empted when their time slice expires.
• Results are given to the user immediately after completion of a program.

