UNIT 1
1.1 INTRODUCTION
Mainframe computer systems were the first computers used to tackle many commercial and scientific
applications.
Mainframe systems used the following types of operating systems:
1. Batch Systems
2. Multiprogrammed Systems
3. Time-Shared Systems
1. Batch Systems
The batch system was the first rudimentary operating system.
o Batch processing is execution of a series of programs ("jobs") on a computer without manual
intervention
o The common input devices of batch system were card readers and tape drives.
o The output devices of batch system were line printers, tape drives and card punches.
o The user prepared a job which consisted of the programmed data and some control information
about the nature of the job (control cards) and submitted it to the computer operators.
o The job was usually in the form of punched cards.
o The output consisted of the result of the program as well as the dump of final memory and
registers for debugging.
The operating system keeps several jobs in main memory at the same time; this set of jobs is called the job pool.
The OS picks and begins to execute one of the jobs. In a non-multiprogrammed system, when that job has to wait for some task (e.g., for an I/O operation to complete), the CPU simply sits idle.
In a multiprogramming system, the OS simply switches to and executes another job.
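The switching decision described above can be sketched as a toy model: when the running job blocks for I/O, a multiprogrammed OS immediately picks another job instead of idling. The job data and function name here are invented purely for illustration.

```python
# Toy illustration of multiprogramming: skip over jobs blocked on I/O.
jobs = [
    {"name": "job1", "blocked": True},   # waiting for an I/O operation
    {"name": "job2", "blocked": False},
    {"name": "job3", "blocked": False},
]

def pick_next(jobs):
    """Return the name of the first job not waiting for I/O, else None."""
    for job in jobs:
        if not job["blocked"]:
            return job["name"]
    return None   # without multiprogramming, the CPU would sit idle here

print(pick_next(jobs))  # job2
```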
Multiprogramming system
1-4 MULTIPROCESSOR SYSTEMS OR PARALLEL SYSTEMS
Multiprocessor systems are systems with more than one CPU in close communication.
They are also called parallel systems or tightly coupled systems.
The processors are in close communication, sharing the computer bus, the clock, and sometimes memory and peripheral devices.
Tightly coupled system – the processors share memory and a clock; communication usually takes place through the shared memory.
Asymmetric multiprocessing
Each processor is assigned a specific task.
A master processor controls the system; the other processors either look to the master for instructions or have predefined tasks.
The master processor schedules and allocates work to the slave processors.
This scheme is more common in extremely large systems.
Loosely coupled system – each processor has its own local memory; processors communicate
with one another through various communications lines, such as high-speed buses or telephone
lines.
Advantages of distributed systems.
1. Resources Sharing
2. Computation speed up – load sharing
3. Reliability
4. Communications
Requires networking infrastructure.
Local area networks (LAN) or Wide area networks (WAN)
1. Computer Networks
It is a network communication path between two or more systems. It depends on networking for their
functionality.
Networks vary by the protocols used, the distance between nodes, and transport media. It needs an
interface device a network adapter.
It deals with
LAN Local Area Network
Computers within a room, floor or a building
WAN Wide Area Network
It usually links buildings, cities or countries.
MAN Metropolitan Area Network
It could link buildings within a city.
In a client–server system, the server provides an interface to which a client can send requests to perform an action; the server executes the action and sends back the result to the client.
In a peer-to-peer system, services can be provided by several nodes distributed throughout the network.
A node must first join the network of peers. Once it has joined, it can both provide services to and request services from other nodes. Discovering which node provides a desired service is accomplished in one of the following ways:
1. When a node joins the network, it registers its service with a centralized lookup service on the network; a node desiring a service first contacts this lookup service.
2. A node acting as a client must first discover which node provides a desired service by broadcasting a request for the service to all other nodes in the network; the node providing the service responds.
Hard real-time:
A hard real time system guarantees that critical tasks can be completed on time.
In hard real time system secondary storage is usually limited or absent. Data is stored in short
term memory, or read-only memory (ROM)
Most advanced operating system features are missing.
Conflicts with time-sharing systems, not supported by general-purpose operating systems.
Soft real-time
In a soft real time system a critical task gets priority over the other tasks and retains that priority
until it completes.
They have more limited utility than hard real-time systems.
They are risky to use in robotics.
They are useful in applications (multimedia, virtual reality) requiring advanced operating-system features.
1.8 HARDWARE PROTECTION
To ensure proper operations, we must protect the operating system and all the programs and their data
from any malfunctioning program.
Sharing system resources requires operating system to ensure that an incorrect program cannot cause
other programs to execute incorrectly. Protection is needed for any shared resource.
1. Dual-Mode Operation
At system boot time, the hardware starts in monitor mode. The operating system is then loaded and starts user processes in user mode. Whenever a trap or interrupt occurs, the hardware switches from user mode to monitor mode.
The lack of a hardware supported dual mode can cause serious shortcomings in the operating system
2. I/O protection
All I/O instructions are privileged instructions; this prevents users from performing illegal I/O.
Thus users cannot issue I/O instructions directly; they must do so through the OS by means of a system call.
Use of a system call to perform I/O
3. Memory Protection
To ensure correct operation, we must protect the interrupt vector and interrupt-service routines from modification by a user program.
This protection is provided by two registers, a base register and a limit register, that together define the range of legal addresses a program may access (e.g., base = 300040 and limit = 120900, so the legal addresses run from 300040 up to 420940).
The base and limit registers can be loaded only by the operating system.
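The base/limit check can be sketched in a few lines; the numbers mirror the example above (base = 300040, limit = 120900), and the function name is my own.

```python
# Minimal sketch of base/limit memory protection: an address is legal
# only if it lies in the half-open range [base, base + limit).
def is_legal(address, base=300040, limit=120900):
    """Return True if the address is within the program's legal range."""
    return base <= address < base + limit

print(is_legal(300040))  # True: the first legal address
print(is_legal(420940))  # False: base + limit is already out of range
```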
4. CPU Protection
We must protect the CPU by ensuring that the OS maintains control, i.e., we must prevent a user program from getting stuck (for example, in an infinite loop) and never returning control to the OS. A timer is used for this purpose.
A timer can be set to interrupt the computer after a specified period. The period may be fixed or variable.
Timer is decremented every clock tick.
When timer reaches the value 0, an interrupt occurs.
Timer commonly used to implement time sharing.
Timer also used to compute the current time.
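The countdown behaviour described above can be modelled as a toy class; the class and method names are illustrative, not any real OS or hardware API.

```python
# Toy model of the hardware timer: loaded with a count (only the OS may
# do this, since load-timer is privileged), decremented on every clock
# tick, and an "interrupt" fires when the count reaches zero.
class Timer:
    def __init__(self, count):
        self.count = count          # set by the OS via the privileged load

    def tick(self):
        """Decrement on a clock tick; return True when the interrupt fires."""
        self.count -= 1
        return self.count == 0

t = Timer(3)
print([t.tick() for _ in range(3)])  # [False, False, True]
```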
Load-timer is a privileged instruction.
1.9 SYSTEM COMPONENTS
We can create a system as large and complex as an operating system only by partitioning it into
smaller pieces, each piece should be a well – defined portion of the system, with carefully defined inputs,
outputs and functions.
Process Management
A program by itself is not a process: a program is a passive set of instructions, whereas a process is a program in execution.
A time shared user program such as a compiler is a process.
A word processing program being run by an individual user on a PC is a process.
A system task, such as sending output to a printer, is also a process.
Here program is a passive entity, such as the contents of a file stored on disk, whereas process is an
active entity, with a program counter specifying the next instruction to execute. The execution of a
process must be sequential.
The operating system is responsible for the following activities in connection with process
management:
Creating and deleting both user and system processes.
Suspending and resuming processes.
Providing mechanisms for process synchronization.
Providing mechanisms for process communication.
Providing mechanisms for deadlock handling.
Memory Management
The main memory is central to the operation of a modern computer system. Main memory is a large array
of words or bytes, ranging in size from hundreds of thousands to billions. Each word or byte has its own
address. Main memory is a repository of quickly accessible data shared by the CPU and I/O devices.
Many different memory- management schemes are available, and the effectiveness of the different
algorithms depends on the particular situation.
The operating system is responsible for the following activities in connection with memory
management:
Keeping track of which parts of memory are currently being used and by whom.
Deciding which processes are to be loaded into memory when memory space becomes
available.
Allocating and deallocating memory space as needed.
File Management
File management is one of the most visible components of operating systems. Computers can store
information on several different types of physical media. Magnetic tape, Magnetic disk and optical disk
are the most common media. Each of these media has its own characteristics and physical organization.
Each medium is controlled by a device, such as a disk drive or tape drive, that also has unique
characteristics. These properties include access speed, capacity, data transfer rate and access method.
The operating system is responsible for the following activities in connection with file management:
Creating and deleting files
Creating and deleting directories
Supporting primitives for manipulating files and directories
Mapping files onto secondary storage
Backing up files on stable (nonvolatile) storage media.
I/O System Management
One of the purposes of an operating system is to hide the peculiarities of specific hardware devices from the user.
The I/O subsystem consists of
A memory – management component that includes buffering, caching and spooling.
A general device – driver interface.
Drivers for specific hardware devices.
Secondary Storage Management
The main purpose of a computer system is to execute programs. These programs, with the data they access, must be in main memory, or primary storage, during execution. Because main memory is too small to hold all data and programs permanently, the computer system must provide secondary storage (usually disks) to back up main memory.
The operating system is responsible for the following activities in connection with disk
management:
Free – space management
Storage allocation
Disk scheduling.
Networking
A distributed system is a collection of processors that do not share memory, peripheral devices, or a clock.
Each processor has its own local memory and clock, and the processors communicate with one
another through various communication lines, such as high – speed buses or networks.
The processors in a distributed system vary in size and function. They may include small microprocessors, workstations, minicomputers, and large, general-purpose computer systems.
Protection System
If a computer system has multiple users and allows the concurrent execution of multiple processes,
then the various processes must be protected from one another’s activities.
Protection can improve reliability by detecting latent errors at the interfaces between component
subsystems. Early detection of interface errors can often prevent contamination of a healthy subsystem by
another subsystem that is malfunctioning.
Command-Interpreter System
One of the most important system programs for an operating system is the command interpreter, which is the interface between the user and the operating system.
Some of the operating systems include the command interpreter in the kernel.
When a new job is started in a batch system, or when a user logs on to a time – shared system, a
program that reads and interprets control statements is executed automatically.
This program is sometimes called the control-card interpreter or the command-line interpreter, and is often known as the shell. Its function is simple: to get the next command statement and execute it.
An operating system provides an environment for the execution of programs; one set of OS services provides functions that are helpful to the user.
1. Program execution:
The system must be able to load a program into memory and run that program.
2. I/O Operations:
A running program may require I/O, which may involve a file or an I/O device.
4. Communications:
One process needs to communicate with another process by exchanging or passing messages
between them.
5. Error detection:
The OS constantly needs to be aware of possible errors. Errors may be detected by the OS in the hardware, in I/O devices, or in user programs.
6. Resource Allocation:
When there are multiple users or multiple jobs running at the same time, resources must be allocated to each of them. Many different types of resources are managed by the OS.
The OS allocates the resources required to execute a program.
7. Accounting:
It keeps track of which resources are used by which program, and of the status of each program.
System calls provide the interface between a process and the operating system.
A system call is invoked much like an ordinary function call: a user-defined program contains two parts, a main program and functions, and a function is invoked by a call statement. In the same way, a program requests an OS service by making a system call.
As an example of how system calls are used, consider a program that copies data from one file to another. The program first needs the names of the two files. The user can use the mouse to select the source file name, and a window can be opened for the destination name to be specified; this already requires many I/O system calls.
Once the two file names are obtained, the program must open the input file and create the output file. Each of these operations requires another system call. When opening a file, the OS checks whether the file exists or not. If the input file exists, the output file must be created, which again requires a system call.
System calls occur in different ways, depending on the computer in use. Often, more information is required than simply the identity of the desired system call. The exact type and amount of information vary according to the particular operating system and call.
Three general methods are used to pass parameters to the operating system.
1. The simplest approach is to pass the parameters in registers.
2. In some cases, there may be more parameters than registers. The parameters are then stored in a block (or table) in memory, and the address of the block is passed as a parameter in a register.
3. Parameters can also be placed, or pushed, onto the stack by the program and popped off by the OS (the PUSH approach).
The diagram shows parameters being passed as a table in memory, with the table's address placed in a register.
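The block method can be sketched in Python; `memory` and `registers` here are plain dictionaries standing in for real hardware, and all names are illustrative.

```python
# Sketch of the "block" method of passing parameters to a system call:
# the parameters are stored in a block of memory, and only the block's
# address is placed in a register for the OS to follow.
memory = {}
registers = {}

def pass_params_in_block(params, block_address):
    memory[block_address] = list(params)   # store the parameter block
    registers["R1"] = block_address        # register holds just the address

def read_params():
    return memory[registers["R1"]]         # the OS follows the address

pass_params_in_block(["file.txt", "r"], block_address=0x500)
print(read_params())  # ['file.txt', 'r']
```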
1. Process control
A running program needs to be able to halt its execution either normally or abnormally.
A program in execution is referred to as a process, and a process can be controlled by detecting errors. The end call means the program completed successfully; the abort call means the program did not complete successfully and is to be aborted.
Initially, the program is loaded into memory and executed.
The attributes of a process can be obtained, such as:
the name of the process
its memory space
the execution speed or priority of the process
the resources it utilizes.
Based on this information, the OS can also set the attributes of a process. If memory space is needed to store a program, it can be allocated; if a program is deleted from secondary storage, the freed space can be managed for future use.
At system startup, only the kernel and the command interpreter are in memory; the rest of memory is free because no user process has been loaded.
During execution of programs, memory holds the kernel, the command interpreter, the running processes, and the remaining free memory.
UNIX RUNNING MULTIPLE PROGRAMS
The command interpreter can start processes B, C, and D, so that three user programs execute concurrently.
2. File management
The system calls in this category include:
1. Creating files
2. Deleting files
3. Renaming files
4. Modifying file contents
5. Creating and deleting directories.
3. Device management
A running program may need additional resources to execute: more main memory, disk drives, access to files, and so on. If the resources are available, they can be granted and control returned to the user process; otherwise, the process must wait until sufficient resources are available. The resources controlled by the OS can be thought of as devices.
4. Information Maintenance
1. Many system calls exist simply to transfer information between the user program and the OS; for example, most systems have a call to return the current time and date.
2. Other system calls may return information about the system, such as the number of current users, the version number of the OS, the amount of free memory or disk space, and so on.
5. Communication
Before communication can take place, a connection must be opened. The name of the other communicator must be known, be it another process on the same CPU or a process on another computer connected by a communications network.
System programs provide a convenient environment for program development and execution. It
can be divided into following categories.
1. File Management
These programs create, delete, copy, rename, print, dump, list and manipulate files and
directories.
2. Status Information
It deals with the amount of memory or disk space, numbers of users, date, time / similar status
information.
3. File Modification
Before modifying a file, we should check whether the file exists. If it exists, we can modify it; if not, an error occurs.
6. Communication
These programs provide the mechanism for processes to communicate by passing messages and to share resources between them.
In addition to system programs, most operating systems are supplied with programs that are useful in solving common problems or performing common operations.
Such programs include web browsers, word processors and text formatters, spreadsheets, database systems, compilers, plotting and statistical-analysis packages, and games.
Process State:
If the process executes, it changes its state.
It has the following states,
1. New -> The process is being created.
2. Running -> Instructions are being executed.
3. Waiting -> The process is waiting for some event to occur (such as the completion of an I/O operation).
4. Ready -> The process is waiting to be assigned to a processor.
5. Terminated -> The process has finished execution
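The legal state transitions above can be written down as a small table; a transition not listed (e.g. new directly to running) is illegal. The names mirror the list in the text; the table itself is only a sketch.

```python
# Process state transitions as a lookup table.
TRANSITIONS = {
    "new":     {"ready"},                            # admitted
    "ready":   {"running"},                          # scheduler dispatch
    "running": {"ready", "waiting", "terminated"},   # interrupt / I/O wait / exit
    "waiting": {"ready"},                            # I/O or event completes
}

def move(state, new_state):
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

s = "new"
for nxt in ["ready", "running", "waiting", "ready", "running", "terminated"]:
    s = move(s, nxt)
print(s)  # terminated
```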
Diagram of process state
Each process is represented in the operating system by a process control block (PCB) also called
task control block.
1. process state:
The state may be new, ready, running, waiting, halted, and so on.
2. Program Counter:
The counter indicates address of the next instruction to be executed of this process.
3. CPU Register:
The registers are used to store temporary results. They include accumulators, index registers, stack pointers, general-purpose registers, and condition-code (cc) information, which indicates the status of the process.
4. CPU – Scheduling information:
It includes the process priority and pointers to scheduling queues.
6. Accounting information:
It includes the amount of CPU and real time used, time limits, account numbers, and job or process numbers.
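A PCB can be sketched as a plain record holding the fields listed above; the field names are my own, chosen to mirror the text, and do not correspond to any real kernel structure.

```python
# A process control block as a simple record.
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    state: str = "new"            # new, ready, running, waiting, terminated
    program_counter: int = 0      # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0             # CPU-scheduling information
    cpu_time_used: float = 0.0    # accounting information

pcb = PCB(pid=1, priority=5)
print(pcb.state, pcb.program_counter)  # new 0
```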
Threads:
A thread is a flow of control within a process. A traditional program in execution is a process that performs a single thread of execution.
The main objective of multiprogramming is to have some process running at all times to
maximize CPU utilization.
The main objective of time-sharing is to switch the CPU among different processes so frequently
that users can interact with each program while it is running.
A system with one CPU can have only one running process at any time. As user jobs enter the system, they are put on a queue called the "job pool", which consists of all jobs in the system.
Processes that reside in main memory that are ready to execute are put on the “Ready Queue”. A
queue has a “header” node which contains pointers to the first and the last PCBs in the list.
There are also other queues in the system. The list of processes waiting for a particular I/O device is called a device queue; each device has its own device queue.
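The bookkeeping above can be sketched with FIFO queues of process IDs; `collections.deque` stands in for the linked list of PCBs, and the queue names are invented for the demo.

```python
# A ready queue and per-device queues as FIFO lists of PIDs.
from collections import deque

ready_queue = deque()
device_queues = {"disk0": deque(), "printer0": deque()}

ready_queue.extend([1, 2, 3])           # three processes ready to run
running = ready_queue.popleft()         # dispatch the first process
device_queues["disk0"].append(running)  # it issues an I/O request and waits

print(running, list(ready_queue), list(device_queues["disk0"]))
```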
A common representation of process scheduling is a queuing diagram.
[Figure: queueing diagram showing the ready queue and a set of device queues.]
A new process is initially put in the ready queue, where it waits until it is selected for execution. Once the process is allocated the CPU, one of several events could occur: it could issue an I/O request and be placed in a device queue; it could be removed from the CPU as a result of an interrupt and be put back in the ready queue; or it could terminate.
Schedulers:
Long-term Scheduler (Job Scheduler)
It selects processes from a mass-storage (i.e., hard disk) device where they are spooled and loads them into memory for execution. Also referred to as the job scheduler, it selects a job to run and creates a new process.
Short term Scheduler (CPU Scheduler)
It is responsible for scheduling ready processes that are already loaded in memory and are ready
to execute.
The long-term scheduler brings the processes into memory and hands them over to the CPU scheduler. It
controls the number of processes that the CPU scheduler is handling thus maintains the degree of
multiprogramming which corresponds to the number of processes in memory.
The long-term scheduler has to make a careful selection among I/O-bound and CPU-bound processes. I/O-bound processes spend more of their time doing I/O than computation, while CPU-bound processes spend more of their time on computation than on I/O.
A long-term scheduler should pick a relatively good mix of I/O- and CPU-bound processes so
that the system resources are better utilized. If all processes are I/O-bound, the ready queue will almost
always be empty. If all processes are CPU-bound, I/O queue will be almost always empty. A balanced
combination should be selected for system efficiency.
Context Switching
Switching the CPU from one process to another requires saving the state of the running process into its PCB and loading the saved state of the new process from its PCB. This is known as a context switch.
Switching involves copying registers, local and global data, file buffers, and other information to the PCB.
Process Creation
The long-term scheduler creates a job’s first process as the job is selected for execution from the
job pool.
A process may create new processes via system calls. The created processes are called child processes, while the creating process is referred to as the parent process. On Unix, the system call is fork(). Creating a process involves creating a PCB for it and scheduling it for execution.
Process Execution
Depending on OS policy, a newly created process may inherit its resources from its
parent or it may acquire its own resources from the OS.
When a child process is restricted to the parent’s resources, new processes do not overload the system. At
the same time some initialization data may be passed from the parent to the child process.
In Unix, each process has a unique process identifier (the process number referred to above), and a new child process begins with a copy of its parent's address space. This eases communication between parent and child processes.
When a new process (child) is created, either the parent runs concurrently with its child or parent
waits until the child terminates.
Process Termination
Having completed its execution and sent its output to its parent, a process terminates by signaling
the OS that it’s finished. On Unix, this is accomplished via the exit() system call.
The OS de-allocates memory and reclaims resources, such as I/O buffers and open files, that were allocated to that process.
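The fork()/exit()/wait sequence described above can be demonstrated with the real POSIX calls exposed by Python's os module (this assumes a Unix-like system; the exit status 7 is arbitrary).

```python
# Create a child with fork(), terminate it with an exit status,
# and have the parent wait for it and recover that status.
import os

def spawn_and_wait():
    pid = os.fork()
    if pid == 0:                    # child: runs, then terminates
        os._exit(7)                 # exit() system call with status 7
    _, status = os.waitpid(pid, 0)  # parent waits until the child terminates
    return os.WEXITSTATUS(status)   # recover the child's exit status

print(spawn_and_wait())  # 7
```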
On some systems, when a parent process terminates, the OS also terminates all children processes.
Likewise if:
The child has exceeded its usage of resources it has been allocated; or
The task assigned to the child is no longer required; or
The OS does not allow a child to continue after its parent terminates,
Then the parent may terminate its child process. Terminating all the children of a terminated parent in this way is known as cascading termination.
Processes running concurrently may be either independent or cooperating. An independent process neither affects nor is affected by the other processes executing in the system. A cooperating process can affect, or be affected by, the state of other processes, by sharing memory, sending signals, etc.
Example:
Consider the producer–consumer problem, a typical example of cooperating processes that demonstrates the classical inter-process communication problem.
The idea is that an operating system may have many processes that need to communicate.
Imagine a print program that produces output which is later consumed by a printer driver.
In the case of unbounded-buffer producer-consumer problem, there is no restriction on the size of the
buffer. On the other hand, bounded-buffer producer-consumer problem assumes that there is a fixed
buffer size. A producer process produces information that is consumed by a consumer process. Producer
places its production into a buffer and consumer takes its consumption from the buffer. When buffer is
full, producer must wait until consumer consumes at least an item; likewise, when buffer is empty,
consumer must wait until producer places at least an item into the buffer. Consider a shared-memory
solution to the bounded-buffer problem.
Shared variables:
var n; // buffer size
type item = …; // kind of items kept in the buffer
var buffer : array [0 .. n-1] of item; // buffer
in, out : 0 .. n-1; // in & out indexes to buffer.
Producer code:
var nextp : item; // local variable of item produced
repeat
….
produce an item in nextp;
….
while (in + 1) mod n = out do no-op; // busy-wait while the buffer is full
buffer [in] := nextp;
in := (in + 1) mod n;
until false;
Consumer code:
repeat
while in = out do no-op; // busy-wait while the buffer is empty
nextc := buffer [out];
out := (out + 1) mod n;
….
Consume the item in nextc;
….
until false;
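The shared-memory scheme above can be translated into Python as a sketch. As in the pseudocode, one slot is always left empty so that in = out means "empty" and (in + 1) mod n = out means "full"; the busy-waiting is replaced here by simple success/failure return values.

```python
# Bounded buffer of n slots implemented as a circular array.
n = 4
buffer = [None] * n
in_, out = 0, 0            # "in" is a Python keyword, hence in_

def produce(item):
    global in_
    if (in_ + 1) % n == out:
        return False       # buffer full; a real producer would wait
    buffer[in_] = item
    in_ = (in_ + 1) % n
    return True

def consume():
    global out
    if in_ == out:
        return None        # buffer empty; a real consumer would wait
    item = buffer[out]
    out = (out + 1) % n
    return item

for x in "abc":
    print(produce(x))          # True True True; a fourth item would not fit
print(consume(), consume())    # a b
```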
[Figure: circular buffer of n slots with in and out pointers. The buffer is empty when in = out, and full when (in + 1) mod n = out.]
An IPC facility basically provides two operations: send (message) and receive (message). The
messages may be of fixed or variable length.
If two processes want to communicate, a communication link must exist between them so that
they can send and receive messages to and from each other.
Direct Communication
In this type of communication, each process that wants to communicate must explicitly name the recipient or sender of the communication:
send (P, message) : Process P is the recipient.
receive (Q, message): Process Q is the sender.
Consider the producer – consumer problem: direct communication algorithm for producer and
consumer are as follows:
Producer:
repeat
…
produce an item in nextp;
…
send (Consumer, nextp);
until false;

Consumer:
repeat
receive (Producer, nextc);
…
consume the item in nextc;
…
until false;
The main disadvantage of direct communication is limited modularity: the names of the sender and receiver must be specified correctly every time they are used.
Indirect Communication
The messages are sent to and received from mailboxes also known as ports. Each mailbox has a
unique identification into which messages can be placed by processes. Then two processes can
communicate only if they share the same mailbox. The instructions for communication are in the
following form:
send (A, message)
receive (A, message), where A is the name of a mailbox.
Indirect communication has the following properties:
A link is established between a pair of processes only if a shared mailbox is available;
A link may be associated with more than two processes;
Between two processes, there may be a number of different links, each link corresponding to a
mailbox;
A link may be unidirectional or bidirectional.
[Figure: processes P1, P2, and P3 sharing mailbox A.]
What if a single mailbox is shared by more than two processes? Referring to the figure, assume that P1 sends a message to mailbox A. Which of the other processes will receive it, P2 or P3? To resolve this issue, either
a link is associated with at most two processes, or
one process at a time can execute the receive operation, or
the system identifies the recipient.
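Indirect communication can be sketched with mailboxes as named queues that any process sharing the name may use; the helper names and mailbox name "A" are illustrative only.

```python
# Mailboxes (ports) as named FIFO queues.
from collections import deque

mailboxes = {}

def send(name, message):
    mailboxes.setdefault(name, deque()).append(message)

def receive(name):
    box = mailboxes.get(name)
    return box.popleft() if box else None   # None if no message is waiting

send("A", "hello")    # P1 places a message in mailbox A
print(receive("A"))   # hello — whichever process reads A first gets it
print(receive("A"))   # None
```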
Buffering (Link Capacity)
A link capacity determines the number of messages that can reside in it temporarily. A link may be
viewed as a queue of messages which can be implemented in one of three ways:
1. Zero Capacity (No buffering): The queue has maximum length 0, so the link cannot hold any messages waiting in it. The sender must wait until the previously sent message is received.
2. Bounded Capacity: The queue has finite length n, so at most n messages can reside in it. If the link is full, the sender must be delayed.
3. Unbounded Capacity: The queue has potentially infinite length, so any number of messages can wait in it and the sender is never delayed.
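Bounded capacity can be illustrated with Python's `queue.Queue`: at most n messages may wait in the link, and a further non-blocking send fails. (Zero capacity would require a rendezvous, which `queue.Queue` does not model directly; note that `maxsize=0` in Python actually means unbounded.)

```python
# A bounded link with n = 2 slots.
import queue

link = queue.Queue(maxsize=2)
link.put("m1")
link.put("m2")
print(link.full())          # True: the link holds its maximum of 2 messages
try:
    link.put_nowait("m3")   # the sender would have to be delayed
except queue.Full:
    print("sender delayed")
```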
Message Acknowledgement
After receiving a message, in order to inform sender of the receipt of the message, receiver sends
an acknowledgement message.
Exception Conditions
Message system is useful in distributed environments since the probability of error during
communication is larger than that of a single machine environment. In a single machine environment,
shared memory system is generally used.
If an error occurs in a computer system, error recovery or exception condition handling must take
place. Some typical errors that must be handled can be as follows:
a) A Process Terminates:
A sender or receiver process may terminate before a message is processed. This situation leaves behind messages that will never be received, or processes waiting for messages that will never be sent.
b) Lost Messages
A message may be lost in transit due to hardware or communication-line failure. Either
1) the OS is responsible for detecting the loss and responding to it, or
2) the sending process is responsible for detecting the loss and should retransmit the message.
Responding in case of (1) may also be implemented by the OS notifying the sending process that the message has been lost.
The most common method for detecting lost messages is to use timeouts. For instance, if an acknowledgement does not arrive at the sending process within a specified time interval, the message is assumed to be lost and is resent.
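The timeout-and-retransmit scheme can be sketched with `queue.Queue`: the sender waits a bounded time for an acknowledgement and retransmits if none arrives. All names, the timeout, and the retry count are invented for the demo.

```python
# Retransmission on timeout, using queue.Queue to stand in for channels.
import queue

def send_reliably(channel, ack_channel, message, timeout=0.05, retries=3):
    for attempt in range(retries):
        channel.put(message)                  # (re)transmit the message
        try:
            ack_channel.get(timeout=timeout)  # wait for the acknowledgement
            return attempt + 1                # number of transmissions used
        except queue.Empty:
            continue                          # assume lost; retransmit
    raise TimeoutError("message lost after all retries")

ch, ack = queue.Queue(), queue.Queue()
ack.put("ACK")                                # receiver already acknowledged
print(send_reliably(ch, ack, "data"))  # 1
```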
c) Scrambled Messages
Because of noise in communication channels, the delivered message may be scrambled (modified) in transit. Error-checking codes such as checksums are commonly used to detect this type of error.