

CS8493 OPERATING SYSTEMS


UNIT I OPERATING SYSTEM OVERVIEW 7
Computer System Overview - Basic Elements, Instruction Execution, Interrupts, Memory Hierarchy, Cache
Memory, Direct Memory Access, Multiprocessor and Multicore Organization. Operating system overview -
objectives and functions, Evolution of Operating System. Computer System Organization - Operating System
Structure and Operations - System Calls, System Programs, OS Generation and System Boot.

Basic Elements
 A computer consists of processor, memory, and I/O components, with one or more modules of each
type.
 These components are interconnected in some fashion to achieve the main function of the computer,
which is to execute programs.
 There are four main structural elements:
 Processor
 I/O Modules
 Main Memory
 System Bus

Processor:
 Controls the operation of the computer and performs its data processing functions.
 When there is only one processor, it is often referred to as the central processing unit (CPU).

Main memory:
 Stores data and programs.
 This memory is typically volatile; that is, when the computer is shut down, the contents of the
memory are lost.
 In contrast, the contents of disk memory are retained even when the computer system is shut down.
 Main memory is also referred to as real memory or primary memory.

I/O modules:
 Move data between the computer and its external environment.
 The external environment consists of a variety of devices, including secondary memory devices
(e.g., disks), communications equipment, and terminals.
System bus:
 Provides for communication among processors, main memory, and I/O modules.
Top level Components:
 The top-level components are shown in the figure.
 One of the processor’s functions is to exchange data with memory.
 For this purpose, it typically makes use of two internal (to the processor) registers.
MAR:
 A memory address register (MAR), which specifies the address in memory for the next read or
write;
MBR:
 A memory buffer register (MBR), which contains the data to be written into memory or which
receives the data read from memory.
I/O AR:
 An I/O address register (I/OAR) specifies a particular I/O device.
I/O BR:
 An I/O buffer register (I/OBR) is used for the exchange of data between an I/O module and the
processor.
 A memory module consists of a set of locations, defined by sequentially numbered addresses.
 Each location contains a bit pattern that can be interpreted as either an instruction or data. An I/O
module transfers data from external devices to processor and memory, and vice versa.
 It contains internal buffers for temporarily holding data until they can be sent on.
Processor Register
A processor register is a small amount of fast storage within the processor that holds data being processed by
the CPU. Processor registers occupy the top-most position in the memory hierarchy, providing high-speed
storage and the fastest access to data. A register may hold the address of a memory location rather than the
data itself.
 User-visible registers
 User-visible registers are the registers that a programmer can reference directly; they are the
only registers a program can make use of.
 These registers include general-purpose and special-purpose registers.
Example: Data Register & Address Register.
 Types of registers
Data Registers
o Some registers are used for floating-point operations; others are used for integer
operations.
Address Registers
o Index register - Involves adding an index to a base value to get an address.
o Segment pointer - When memory is divided into segments, memory is referenced by
a segment and an offset.
o Stack pointer - Points to top of stack.
 Control and status registers
 Used by the processor to control its own operation
 Used by privileged operating-system routines to control the execution of programs
o Program Counter (PC) – contains the address of the next instruction to be fetched
o Instruction Register (IR) – contains the instruction most recently fetched
o Program Status Word (PSW) – contains status information
o Condition codes/flags – bits set by the processor hardware as a result of
operations. Example: positive, negative, zero, or overflow result.
Instruction Execution
 A program to be executed by a processor consists of a set of instructions stored in memory.
 Instruction processing consists of two steps:
 The processor reads (fetches) instructions from memory one at a time and executes each instruction.
 Program execution consists of repeating the process of instruction fetch and instruction execution.
 Instruction execution may involve several operations and depends on the nature of the instruction.

 The processing required for a single instruction is called an instruction cycle.


 The two steps are referred to as the fetch stage and the execute stage.
 Program execution halts only if the processor is turned off, some sort of unrecoverable error occurs, or a
program instruction that halts the processor is encountered.
Instruction Fetch & Execute:
 At the beginning of each instruction cycle, the processor fetches an instruction from memory.
 The program counter (PC) holds the address of the next instruction to be fetched.
 The processor always increments the PC after each instruction fetch so that it will fetch the next
instruction in sequence (i.e., the instruction located at the next higher memory address).
Instruction Register:
 The fetched instruction is loaded into the instruction register (IR).
 The instruction contains bits that specify the action the processor is to take.
 The processor interprets the instruction and performs the required action.
 These actions fall into four categories:
• Processor-memory: Data may be transferred from processor to memory or from memory to
processor.
• Processor-I/O: Data may be transferred to or from a peripheral device by transferring between the
processor and an I/O module.
• Data processing: The processor may perform some arithmetic or logic operation on data.
• Control: An instruction may specify that the sequence of execution be altered.
 An instruction’s execution may involve a combination of these actions.
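The fetch-execute cycle above can be sketched in code. This is a minimal toy machine, not any real instruction set: the opcodes, the single accumulator, and the `(opcode, operand)` memory format are all illustrative inventions, but the loop mirrors the stages described (PC addresses memory, the fetched word lands in the IR, the PC is incremented, and the opcode selects one of the four action categories).

```python
# Toy fetch-execute loop: illustrative opcodes, not a real ISA.

def run(memory):
    """Execute a toy program; each memory cell is (opcode, operand)."""
    pc, acc = 0, 0          # program counter and a single data register
    while True:
        ir = memory[pc]     # fetch stage: instruction register <- memory[PC]
        pc += 1             # PC is incremented after every fetch
        op, arg = ir        # decode
        if op == "LOAD":    # processor-memory: memory -> processor
            acc = memory[arg][1]
        elif op == "ADD":   # data processing: arithmetic on data
            acc += memory[arg][1]
        elif op == "STORE": # processor-memory: processor -> memory
            memory[arg] = ("DATA", acc)
        elif op == "HALT":  # control: stop execution
            return acc

# Program: acc = mem[4] + mem[5]; store the result in mem[6]
prog = [("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", 0),
        ("DATA", 3), ("DATA", 9), ("DATA", 0)]
print(run(prog))  # 12
```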

Interrupts
 An interrupt is a high-priority signal, from hardware or software, that requires the processor to
service it immediately.
Types of Interrupts:
 Interrupts take priority over other signals. There are many types of interrupts, but the basic
types are:
 Hardware Interrupts: If the signal to the processor comes from an external device or hardware, it is
called a hardware interrupt. Example: pressing a key on the keyboard generates a signal that is sent
to the processor to perform an action; such interrupts are hardware interrupts. Hardware interrupts
are classified into two types:
1. Maskable Interrupt: A hardware interrupt that can be delayed when a higher-priority
interrupt has occurred.
2. Non-Maskable Interrupt: A hardware interrupt that cannot be delayed and must be processed
by the processor immediately.
 Software Interrupts: Software interrupts are also divided into two types:
1. Normal Interrupts: interrupts caused deliberately by software instructions are called
normal (software) interrupts.
2. Exception: an unplanned interrupt raised while executing a program is called an exception.
For example, encountering a division by zero while executing a program raises an
exception.
Interrupts
 Virtually all computers provide a mechanism by which other modules (I/O, memory) may interrupt
the normal sequencing of the processor.
 Interrupts are provided primarily to improve processor utilization:
 most I/O devices are much slower than the processor
 without interrupts, the processor must pause to wait for the device
 this is a wasteful use of the processor

Transfer of Control via Interrupts


 For the user program, an interrupt suspends the normal sequence of execution.
 When the interrupt processing is completed, execution resumes (Figure below).
 Thus, the user program does not have to contain any special code to accommodate interrupts;
 The processor and the OS are responsible for suspending the user program and then resuming it at the
same point.

Interrupt Processing
An interrupt triggers a number of events, both in the processor hardware and in software. When an I/O
device completes an I/O operation, the following sequence of hardware events occurs:
 The device issues an interrupt signal to the processor.
 The processor finishes execution of the current instruction before responding to the interrupt.
 The processor tests for a pending interrupt request, determines that there is one, and sends an
acknowledgment signal to the device that issued the interrupt. The acknowledgment allows the device to
remove its interrupt signal.
 The processor next needs to prepare to transfer control to the interrupt routine. To begin, it saves
information needed to resume the current program at the point of interrupt. The minimum information
required is the program status word (PSW) and the location of the next instruction to be executed,
which is contained in the program counter (PC). These can be pushed onto a control stack.
 The processor then loads the program counter with the entry location of the interrupt-handling routine
that will respond to this interrupt. Depending on the computer architecture and OS design, there may be
a single interrupt-handling program; one program for each type of interrupt; or one program for each
device and each type of interrupt. If there is more than one interrupt-handling routine, the processor
must determine which one to invoke. This information may have been included in the original interrupt
signal, or the processor may have to issue a request to the device that issued the interrupt to get a
response that contains the needed information.
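The event sequence above can be sketched as code. This is a hedged illustration, not a real architecture: the dictionary-based CPU state, the vector table, and the handler addresses are all invented for the example, but the steps match the text (save PSW and PC on a control stack, load the PC with the handler entry point, and later restore the saved state so the interrupted program resumes at the same point).

```python
# Illustrative sketch of interrupt entry and return; names are invented.

control_stack = []

def take_interrupt(cpu, vector_table, irq):
    # Steps 1-3 (device signal, finish current instruction, acknowledge)
    # are assumed to have happened in hardware before this point.
    # Step 4: save the minimum state needed to resume: PSW and PC.
    control_stack.append((cpu["psw"], cpu["pc"]))
    # Step 5: load the PC with the entry location of the handler for this IRQ.
    cpu["pc"] = vector_table[irq]

def return_from_interrupt(cpu):
    # Restore saved state so the interrupted program resumes transparently.
    cpu["psw"], cpu["pc"] = control_stack.pop()

cpu = {"pc": 100, "psw": "user"}
vectors = {0: 5000, 1: 5100}   # one handler entry point per interrupt type
take_interrupt(cpu, vectors, 1)
print(cpu["pc"])               # 5100: now executing the handler
return_from_interrupt(cpu)
print(cpu["pc"])               # 100: user program resumes at the same point
```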
Memory Hierarchy

 Caching – copying information into a faster storage system; main memory can be viewed as the last
cache for secondary storage.
 Main memory – the only large storage medium that the CPU can access directly
 Random access
 Typically volatile
 Secondary storage – an extension of main memory that provides large nonvolatile storage capacity
 Magnetic disks – rigid metal or glass platters covered with magnetic recording material
 Disk surface is logically divided into tracks, which are subdivided into sectors
 The disk controller determines the logical interaction between the device and the computer
 Memory is characterized on the basis of two factors:
1. Capacity: the amount of information or data that a memory can store.
2. Access Time: the time interval between a read or write request and the
availability of the data.
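The trade-off between these two factors is what defines the hierarchy: each level below the registers is larger but slower. The figures in this sketch are illustrative orders of magnitude, not vendor-specific numbers.

```python
# Illustrative (order-of-magnitude) capacity and access-time figures for
# the hierarchy levels discussed below; not measurements of any real system.

hierarchy = [              # (level, capacity in bytes, access time in seconds)
    ("register",      1e2,  1e-9),
    ("cache",         1e6,  5e-9),
    ("main memory",   1e10, 1e-7),
    ("SSD",           1e12, 1e-4),
    ("magnetic disk", 1e13, 1e-2),
]

# The defining property: each level is larger AND slower than the one above it.
for (_, cap1, t1), (_, cap2, t2) in zip(hierarchy, hierarchy[1:]):
    assert cap2 > cap1 and t2 > t1
print("capacity and access time both grow going down the hierarchy")
```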
 Storage systems organized in hierarchy.
 Speed
 Cost
 Volatility
1. Registers
 CPU registers are at the topmost level of this hierarchy; they hold the most frequently used data. They
are very limited in number and are the fastest.
 They are often used by the CPU and the ALU for performing arithmetic and logical operations and for
temporary storage of data.
2. Cache
 The very next level consists of small, fast cache memories near the CPU. They act as staging areas for a
subset of the data and instructions stored in the relatively slow main memory.
 There are often two or more levels of cache as well. The cache at the topmost level after the registers is
the primary cache; the others are secondary caches.
 Often there is a cache on the same chip as the CPU, along with other cache levels outside the
chip.
3. Main Memory:
 The next level is main memory, which stages data stored on large, slow disks often called hard disks.
These hard disks are also called secondary memory and sit at a lower level in the hierarchy; main
memory is also called primary memory.
 Secondary memory often serves as a staging area for data stored on the disks or tapes of other
machines connected by networks.
4. Electronic disk/Solid State Drive (SSD)
 SSD incorporates the storage technique implemented in microchip-based flash memory, where data is
electronically stored on flash memory chips. An SSD is an entirely electronic storage device, and its
physical assembly contains no mechanical objects.
 A SSD has two key components:
 Flash memory: Contains storage memory.
 Controller: An embedded microprocessor that processes functions, like error correction, data
retrieval and encryption. Manages access to input/output (I/O) and read/write (R/W) operations
between the SSD and host computer.
5. Magnetic disks:
 rigid metal or glass platters covered with magnetic recording material
 Disk surface is logically divided into tracks, which are subdivided into sectors
 The disk controller determines the logical interaction between the device and the computer
6. Optical disk:
 An optical disk is primarily used as a portable and secondary storage device. It can store more data than
the previous generation of magnetic storage media, and has a relatively longer lifespan. Compact disks
(CD), digital versatile/video disks (DVD) and Blu-ray disks are currently the most commonly used
forms of optical disks. These disks are generally used to:
 Distribute software to customers
 Store large amounts of data such as music, images and videos
 Transfer data to different computers or devices
 Back up data from a local machine
7. Magnetic tapes:
 Magnetic tape is a sequential-access storage medium that uses a magnetization process to write, rewrite
and access data.
 It is covered with a magnetic coating and stores data in tracks.
 Tapes are primarily used for backup and archival storage.

Cache Memory
 Cache memory is volatile computer memory located very close to the CPU, so it is also called
CPU memory.
 The cache memory stores the program (or its part) currently being executed or which may be executed
within a short period of time. The cache memory also stores temporary data that the CPU may
frequently require for manipulation.
 The cache memory works according to various algorithms, which decide what information it has to
store. These algorithms work out the probability to decide which data would be most frequently needed.
This probability is worked out on the basis of past observations.
 It acts as a high-speed buffer between the CPU and main memory and is used to temporarily store very
active data and instructions during processing. Since cache memory is faster than main memory, processing
speed is increased by making the data and instructions needed by the current processing available in the cache.
Cache memory is very expensive and hence is limited in capacity.

Type of Cache memory


 Cache memory improves the speed of the CPU, but it is expensive. Cache memory is divided
into different levels: L1, L2 and L3.
Level 1 (L1) cache or Primary Cache
L1 is the primary cache memory. The size of the L1 cache is very small compared to the others, typically
between 2KB and 64KB depending on the processor. It is embedded in the processor chip (CPU) itself.
Instructions required by the CPU are searched first in the L1 cache. Examples of on-chip registers are the
accumulator, address register, program counter, etc.
Level 2 (L2) cache or Secondary Cache
L2 is the secondary cache memory. The L2 cache is more capacious than L1, typically between 256KB and
512KB, and is located on or close to the processor chip, connected to it by a high-speed system bus. If an
instruction is not found in the L1 cache, the processor next searches the L2 cache.
Level 3 (L3) cache
The L3 cache is larger but slower than L1 and L2; its size is typically between 1MB and
8MB. In multicore processors, each core may have its own L1 and L2 caches, but all cores share a common L3
cache. The L3 cache is still roughly twice as fast as main memory (RAM).
Advantages
 Cache memory is faster than main memory.
 It consumes less access time as compared to main memory.
 It stores the program that can be executed within a short period of time.
 It stores data for temporary use.
Disadvantages
 Cache memory has limited capacity.
 It is very expensive.
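The speed benefit of a cache can be quantified with the standard effective-access-time formula, EAT = h*Tc + (1-h)*Tm, where h is the hit ratio, Tc the cache access time, and Tm the main-memory access time. This uses the common simplification that a miss costs only the memory access time; the numbers below are illustrative.

```python
# Effective access time of a cache + main-memory pair (simplified model:
# a miss costs only the main-memory access). Timings are illustrative.

def effective_access_time(hit_ratio, t_cache, t_memory):
    return hit_ratio * t_cache + (1 - hit_ratio) * t_memory

# 95% of references hit a 5 ns cache; the remaining 5% go to 100 ns memory.
eat = effective_access_time(0.95, 5, 100)
print(eat)  # 9.75 ns -- far closer to cache speed than to memory speed
```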
Direct Memory Access
 Direct memory access (DMA) is a method that allows an input/output (I/O) device to send or receive
data directly to or from the main memory, bypassing the CPU to speed up memory operations. The
process is managed by a chip known as a DMA controller (DMAC).
 A computer's system resource tools are used for communication between hardware and software. The
four types of system resources are:
 I/O addresses
 Memory addresses
 Interrupt request numbers (IRQ)
 Direct memory access (DMA) channels
 Three techniques are possible for I/O operations: programmed I/O, interrupt-driven I/O, and direct
memory access (DMA).
 When the processor is executing a program and encounters an instruction relating to I/O, it executes that
instruction by issuing a command to the appropriate I/O module.
 In the case of programmed I/O, the I/O module performs the requested action and then sets the
appropriate bits in the I/O status register, but takes no further action to alert the processor.
 In particular, it does not interrupt the processor.
 Thus, after the I/O instruction is invoked, the processor must take some active role in determining when
the I/O instruction is completed.

 For this purpose, the processor periodically checks the status of the I/O module until it finds that the
operation is complete.
 With programmed I/O, the processor has to wait a long time for the I/O module of concern to be ready
for either reception or transmission of more data.
 The processor, while waiting, must repeatedly interrogate the status of the I/O module.
 As a result, the performance level of the entire system is severely degraded.
 An alternative, known as interrupt-driven I/O, is for the processor to issue an I/O command to a
module and then go on to do some other useful work.
 The I/O module will then interrupt the processor to request service when it is ready to exchange data
with the processor.
 The processor then executes the data transfer, as before, and then resumes its former processing.
 Interrupt-driven I/O, though more efficient than simple programmed I/O, still requires the active
intervention of the processor to transfer data between memory and an I/O module, and any data transfer
must traverse a path through the processor.
 Thus, both of these forms of I/O suffer from two inherent drawbacks:
1. The I/O transfer rate is limited by the speed with which the processor can test and service a
device.
2. The processor is tied up in managing an I/O transfer; a number of instructions must be
executed for each I/O transfer.
 When large volumes of data are to be moved, a more efficient technique is required: direct memory
access (DMA).
 The DMA function can be performed by a separate module on the system bus or it can be
incorporated into an I/O module.
 In either case, the technique works as follows.
 When the processor wishes to read or write a block of data, it issues a command to the DMA
module, by sending to the DMA module the following information:
 Whether a read or write is requested
 The address of the I/O device involved
 The starting location in memory to read data from or write data to
 The number of words to be read or written
 The processor then continues with other work.
 It has delegated this I/O operation to the DMA module, and that module will take care of it.
 The DMA module transfers the entire block of data, one word at a time, directly to or from memory
without going through the processor.
 When the transfer is complete, the DMA module sends an interrupt signal to the processor.
 Thus, the processor is involved only at the beginning and end of the transfer.
 The DMA module needs to take control of the bus to transfer data to and from memory.
 Because of this competition for bus usage, there may be times when the processor needs the bus and
must wait for the DMA module.
 Note that this is not an interrupt; the processor does not save a context and do something else.
 The processor pauses for one bus cycle (the time it takes to transfer one word across the bus). The
overall effect is to cause the processor to execute more slowly during a DMA transfer when processor
access to the bus is required.
 Nevertheless, for a multiple-word I/O transfer, DMA is far more efficient than interrupt-driven or
programmed I/O.
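The DMA sequence above can be sketched as code. This is an illustration, not a device driver: the function name, the event list, and the word-at-a-time loop are invented for the example, but the shape matches the text (the processor issues one command naming direction, device, start address, and word count, then is involved again only at the completion interrupt).

```python
# Illustrative sketch of a DMA block transfer; names are invented.

def dma_transfer(memory, device_data, start, count, events):
    # The processor issues the command and continues with other work.
    events.append("cpu: issue DMA command, continue other work")
    for i in range(count):
        # The DMA module moves one word per stolen bus cycle,
        # without going through the processor.
        memory[start + i] = device_data[i]
    # When the transfer is complete, the DMA module interrupts the processor.
    events.append("dma: interrupt -- transfer complete")

memory = [0] * 16
events = []
dma_transfer(memory, device_data=[7, 8, 9, 10], start=4, count=4, events=events)
print(memory[4:8])   # [7, 8, 9, 10]
print(len(events))   # 2: processor involved only at the start and the end
```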

Multiprocessor and Multicore Organization


 A processor executes programs by executing machine instructions in sequence, one at a time. Each
instruction is executed in a sequence of operations (fetch instruction, fetch operands, perform operation,
store results).
 As computer technology has evolved and as the cost of computer hardware has dropped, computer
designers have sought more and more opportunities for parallelism, usually to improve performance
and, in some cases, to improve reliability.
 The three most popular approaches to providing parallelism by replicating processors are:
 Symmetric multiprocessors (SMPs)
 Multicore computers
 Clusters

I) Symmetric Multiprocessors
An SMP can be defined as a stand-alone computer system with the following characteristics:
1. There are two or more similar processors of comparable capability.
2. These processors share the same main memory and I/O facilities and are interconnected by a bus or
other internal connection scheme, such that memory access time is approximately the same for each
processor.
3. All processors share access to I/O devices, either through the same channels or through different
channels that provide paths to the same device.
4. All processors can perform the same functions (hence the term symmetric ).
5. The system is controlled by an integrated operating system that provides interaction between processors
and their programs at the job, task, file, and data element levels.

In an SMP, individual data elements can constitute the level of interaction, and there can be a high
degree of cooperation between processes. An SMP organization has several potential advantages over a
uniprocessor organization:
 Performance: If the work to be done by a computer can be organized so that some portions of the work
can be done in parallel, then a system with multiple processors will yield greater performance than one
with a single processor of the same type.
 Availability: In a symmetric multiprocessor, because all processors can perform the same functions, the
failure of a single processor does not halt the machine. Instead, the system can continue to function at
reduced performance.
 Incremental growth: A user can enhance the performance of a system by adding an additional
processor.
 Scaling: Vendors can offer a range of products with different price and performance characteristics
based on the number of processors configured in the system.

Organization

 Figure illustrates the general organization of an SMP. There are multiple processors, each of which
contains its own control unit, arithmetic logic unit, and registers.
 Each processor has access to a shared main memory and the I/O devices through some form of
interconnection mechanism called a shared bus.
 The processors can communicate with each other through memory (messages and status information left
in shared address spaces). It may also be possible for processors to exchange signals directly. The
memory is often organized so that multiple simultaneous accesses to separate blocks of memory are
possible.

Figure: Symmetric Multiprocessor Organization
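Communication through shared memory can be sketched with threads standing in for processors. This is an analogy, not an SMP implementation: the shared counter plays the role of a shared memory location, and the lock serializes access to it so the cooperating "processors" do not corrupt each other's updates.

```python
# Threads standing in for SMP processors, cooperating through a shared
# memory location protected by a lock. Illustrative, not an OS mechanism.
import threading

counter = 0
lock = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:              # serialize access to the shared location
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 -- correct only because the lock serializes updates
```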

II)Multicore Computers
 A multicore computer, also known as a chip multiprocessor, combines two or more processors (called
cores) on a single piece of silicon (called a die).
 Typically, each core consists of all of the components of an independent processor, such as registers,
ALU, pipeline hardware, and control unit, plus L1 instruction and data caches.
 In addition to the multiple cores, contemporary multicore chips also include L2 cache and, in some
cases, L3 cache.
 Example: Intel Core i7, which includes four x86 processors, each with a dedicated L2 cache, and with a
shared L3 cache.
 The Core i7 chip supports two forms of external communications to other chips.
 The DDR3 memory controller brings the memory controller for the DDR (double data rate) main
memory onto the chip.
Figure: Intel Core i7 Block Diagram

 The interface supports three channels that are 8 bytes wide for a total bus width of 192 bits, for an
aggregate data rate of up to 32 GB/s.
 With the memory controller on the chip, the Front Side Bus is eliminated.
 The QuickPath Interconnect (QPI) is a point-to-point link electrical interconnect specification. It
enables high-speed communications among connected processor chips.
 The QPI link operates at 6.4 GT/s (gigatransfers per second).

Operating System Overview


An OS is a program that controls the execution of application programs and acts as an interface between
applications and the computer hardware.

Objectives of OS:
 Convenience: An OS makes a computer more convenient to use.
 Efficiency: An OS allows the computer system resources to be used in an efficient manner.
 Ability to evolve: An OS should be constructed in such a way as to permit the effective
development, testing, and introduction of new system functions without interfering with service.
Functions of OS:
Program development:
 The OS provides a variety of facilities and services, such as editors and debuggers, to assist the
programmer in creating programs.
 Typically, these services are in the form of utility programs that, while not strictly part of the core of the
OS, are supplied with the OS and are referred to as application program development tools.
Program execution:
 A number of steps need to be performed to execute a program. Instructions and data must be loaded into
main memory, I/O devices and files must be initialized, and other resources must be prepared.
 The OS handles these scheduling duties for the user.
Access to I/O devices:
 Each I/O device requires its own peculiar set of instructions or control signals for operation.
 The OS provides a uniform interface that hides these details so that programmers can access such
devices using simple reads and writes.
Controlled access to files:
 For file access, the OS must reflect a detailed understanding of not only the nature of the I/O device
(disk drive, tape drive) but also the structure of the data contained in the files on the storage medium.
 In the case of a system with multiple users, the OS may provide protection mechanisms to control
access to the files.
System access:
 For shared or public systems, the OS controls access to the system as a whole and to specific system
resources.
 The access function must provide protection of resources and data from unauthorized users and must
resolve conflicts for resource contention.
Error detection and response:
 A variety of errors can occur while a computer system is running. These include internal and external
hardware errors, such as a memory error, or a device failure or malfunction; and various software
errors, such as division by zero, attempt to access forbidden memory location, and inability of the OS to
grant the request of an application.
 In each case, the OS must provide a response that clears the error condition with the least impact
on running applications.
 The response may range from ending the program that caused the error, to retrying the operation,
to simply reporting the error to the application.
Accounting:
 A good OS will collect usage statistics for various resources and monitor performance parameters such
as response time.
 On any system, this information is useful in anticipating the need for future enhancements and in tuning
the system to improve performance.
 On a multiuser system, the information can be used for billing purposes.
Evolution of Operating System
 To understand the key requirements for an OS and the significance of the major features of a
contemporary OS, it is useful to consider how operating systems have evolved over the years
Serial Processing

 From the late 1940s to the mid-1950s, the programmer interacted directly with the computer hardware;
there was no operating system.
 Computers ran from a console with display lights, toggle switches, some form of input device, and a
printer.
 Programs in machine code were loaded via the input device (e.g., a card reader), and the output
appeared on the printer.
 These early systems presented two main problems:

1. Scheduling:
 Most installations used a hardcopy sign-up sheet to reserve computer time
 Time allocations could run short or long, resulting in wasted computer time
2. Setup time
 A single program, called a job, could involve loading the compiler plus the high-level language
program (source program) into memory, saving the compiled program and then loading and linking
together the object program and common functions. Each of these steps could involve mounting or
dismounting tapes or setting up card decks.
 A considerable amount of time was spent just in setting up the program to run.
 This mode of operation could be termed serial processing

Simple Batch Systems


 Early computers were very expensive, and therefore it was important to maximize processor utilization.
 The central idea behind the simple batch-processing scheme is the use of a piece of software known as
the monitor.
 In this OS, the user no longer has direct access to the processor. Instead, the user submits the job on
cards or tape to a computer operator, who batches the jobs together sequentially and places the entire
batch on an input device, for use by the monitor.
 Each program is constructed to branch back to the monitor when it completes processing, at which point
the monitor automatically begins loading the next program.

Two points of view:

Monitor Point of View

 The monitor controls the sequence of events. To do this, much of the monitor must always be in main
memory and available for execution; that portion is referred to as the resident monitor.
 The rest of the monitor consists of utilities and common functions that are loaded as subroutines to the
user program at the beginning of any job that requires them
 The monitor reads in jobs one at a time from the input device (card reader), and the current job is placed
in the user program area, and control is passed to this job. When the job is completed, control returns to
the monitor, which reads in the next job. The results of each job are sent to an output device (printer).
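The resident-monitor loop just described can be sketched in a few lines. This is a toy, not a real monitor: jobs are plain Python callables standing in for card decks, and the returned list stands in for the printer output, but the control flow matches the text (read a job, pass control to it, regain control when it completes, repeat).

```python
# Toy sketch of a batch monitor's job loop; jobs are plain callables.

def monitor(job_queue):
    results = []
    for job in job_queue:        # read the next job from the "card reader"
        results.append(job())    # control is passed to the job...
        # ...and returns here when the job completes (back to the monitor)
    return results               # each job's output goes to the "printer"

jobs = [lambda: "job1 done", lambda: "job2 done"]
print(monitor(jobs))  # ['job1 done', 'job2 done']
```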
Memory Layout for a Resident Monitor
Processor point of view:

 Processor executes instruction from the memory containing the monitor


 Executes the instructions in the user program until it encounters an ending or error condition
 “control is passed to a job” means processor is fetching and executing instructions in a user
program
 “control is returned to the monitor” means that the processor is fetching and executing instructions
from the monitor program
 The monitor performs a scheduling function: a batch of jobs is queued up and jobs are executed as
rapidly as possible.
 With each job, instructions are included in a primitive form of job control language (JCL). This is
a special type of programming language used to provide instructions to the monitor.
 The monitor, or batch OS, is simply a computer program. It relies on the ability of the processor to
fetch instructions from various portions of main memory to alternately seize and relinquish control.

Certain other hardware features are also desirable:

Memory protection:
While the user program is executing, it must not alter the memory area containing the monitor

Timer:
A timer is used to prevent a single job from monopolizing the system.

Privileged instructions:
Certain machine level instructions are designated privileged and can be executed only by the monitor.

Interrupts:
Early computer models did not have this capability. This feature gives the OS more flexibility in relinquishing
control to and regaining control from user programs. Considerations of memory protection and privileged
instructions lead to the concept of modes of operation.

1. User mode:
A user program executes in a user mode, in which certain areas of memory are protected from the
user’s use and in which certain instructions may not be executed.

2. Kernel Mode (System mode):


The monitor executes in kernel mode in which privileged instructions may be executed and protected
areas of memory may be accessed

Batch System disadvantage:


 With a batch OS, processor time alternates between execution of user programs and execution of the
monitor. There have been two sacrifices: Some main memory is now given over to the monitor and
some processor time is consumed by the monitor.

Batch System advantage:


 The simple batch system improves utilization of the computer.

Multiprogrammed Batch Systems:

 Even with the automatic job sequencing provided by a simple batch OS, the processor is often idle. The
problem is that I/O devices are slow compared to the processor.
System Utilization Example

 The calculation concerns a program that processes a file of records and performs, on average, 100
machine instructions per record. In this example, the computer spends over 96% of its time waiting for I/
O devices to finish transferring data to and from the file.
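The arithmetic behind this figure can be made concrete. The sketch below uses the per-record timings from Stallings' classic illustration of this example (read 0.0015 s, compute 0.0001 s, write 0.0015 s); treat them as assumed values, not measurements:

```python
# CPU utilization for the record-processing example.
# Timings are the assumed values from the classic textbook illustration.
read_record  = 0.0015   # seconds to read one record from the input device
execute      = 0.0001   # seconds to run ~100 machine instructions
write_record = 0.0015   # seconds to write one record to the output device

total = read_record + execute + write_record
utilization = execute / total           # fraction of time the CPU is busy
print(f"CPU utilization: {utilization:.1%}")          # about 3.2%
print(f"Time waiting on I/O: {1 - utilization:.1%}")  # about 96.8%
```

The CPU is busy for only about 3.2% of each cycle, which is where the "over 96% of its time waiting for I/O" figure comes from.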
Uniprogramming:
 The processor spends a certain amount of time executing, until it reaches an I/O instruction; it must then
wait until that I/O instruction concludes before proceeding

Multiprogramming:
 When one job needs to wait for I/O, the processor can switch to the other job, which may not be waiting.
Furthermore, we might expand memory to hold three, four, or more programs and switch among all of
them ( Figure 2.5c ). The approach is known as multiprogramming, or multitasking. It is the central
theme of modern operating systems.

Fig: (c) Multiprogramming with three programs


Time-Sharing Systems:

 This technique is referred to as time sharing, because processor time is shared among multiple users.
In a time-sharing system, multiple users simultaneously access the system through terminals, with the
OS interleaving the execution of each user program in a short burst or quantum of computation.

 If there are n users actively requesting service at one time, each user will only see on average 1/n of
the effective computer capacity, not counting OS overhead. However, given the relatively slow human
reaction time, the response time on a properly designed system should be similar to that on a dedicated
computer.
Batch Multiprogramming vs. Time Sharing

                              Batch Multiprogramming          Time Sharing

Principal objective           Maximize processor use          Minimize response time
Source of directives to the   Job control language commands   Commands entered at the
operating system              provided with the job           terminal

Operating System Structures


1. Simple Structure

Fig: In MS-DOS, applications may bypass the operating system.


 Operating systems such as MS-DOS and the original UNIX did not have well-defined structures.
 There was no distinction between user and kernel CPU execution modes, so errors in applications
could cause the whole system to crash.
2. Monolithic Approach
 Functionality of the OS is invoked with simple function calls within the kernel, which is one large
program.
 Device drivers are loaded into the running kernel and become part of the kernel.
Fig: A monolithic kernel, such as Linux and other Unix systems.

3. Layered Approach
This approach breaks up the operating system into different layers.
 This allows implementers to change the inner workings, and increases modularity.
 As long as the external interface of the routines doesn't change, developers have more freedom to change
the inner workings of the routines.
 With the layered approach, the bottom layer is the hardware, while the highest layer is the user interface.
o The main advantage is simplicity of construction and debugging.
o The main difficulty is defining the various layers.
o The main disadvantage is that the OS tends to be less efficient than other implementations.

Fig: The Microsoft Windows NT Operating System. The lowest level is a monolithic kernel, but many OS
components are at a higher level, but still part of the OS.

4. Microkernels
This structures the operating system by removing all nonessential portions of the kernel and implementing them
as system and user level programs.
 Generally they provide minimal process and memory management, and a communications facility.
 Communication between components of the OS is provided by message passing.
Benefits:
 Extending the operating system becomes much easier.
 Any changes to the kernel tend to be fewer, since the kernel is smaller.
 The microkernel also provides more security and reliability.
Disadvantage: poor performance due to increased system overhead from message passing.

Fig: A Microkernel architecture.

Operating – System Operations


Modern operating systems are interrupt driven. If there are no processes to execute, no I/O devices to
service, and no users to whom to respond, an operating system will sit quietly, waiting for something to
happen. Events are almost always signaled by the occurrence of an interrupt or a trap.
 A trap (or an exception) is a software-generated interrupt caused either by an error (for example,
division by zero or invalid memory access) or by a specific request from a user program that an
operating-system service be performed.

Dual-Mode Operation
In order to ensure the proper execution of the operating system, we must be able to distinguish between the
execution of operating-system code and user defined code. We need two separate modes of operation: user
mode and kernel mode (also called supervisor mode, system mode, or privileged mode). A bit, called
the mode bit, is added to the hardware of the computer to indicate the current mode: kernel (0) or user (1).
With the mode bit, we are able to distinguish between a task that is executed on behalf of the operating system
and one that is executed on behalf of the user.
 When the computer system is executing on behalf of a user application, the system is in user mode.
 However, when a user application requests a service from the operating system (via a system call),
it must transition from user to kernel mode to fulfill the request. As we shall see, this
architectural enhancement is useful for many other aspects of system operation as well.
 At system boot time, the hardware starts in kernel mode.
 The operating system is then loaded and starts user applications in user mode.
 Whenever a trap or interrupt occurs, the hardware switches from user mode to kernel mode (that is,
changes the state of the mode bit to 0).
 Thus, whenever the operating system gains control of the computer, it is in kernel mode.
 The system always switches to user mode (by setting the mode bit to 1) before passing control to a user
program.
The dual mode of operation - protecting the operating system from errant users: The hardware
allows privileged instructions to be executed only in kernel mode. If an attempt is made to execute a
privileged instruction in user mode, the hardware does not execute the instruction but rather treats it as illegal
and traps it to the operating system.

 When a system call is executed, it is treated by the hardware as a software interrupt.


 Control passes through the interrupt vector to a service routine in the operating system, and the mode bit
is set to kernel mode.
 The system call service routine is a part of the operating system.
 The kernel examines the interrupting instruction to determine what system call has occurred; a
parameter indicates what type of service the user program is requesting.
 The kernel verifies that the parameters are correct and legal, executes the request, and returns control to
the instruction following the system call.
 The lack of a hardware-supported dual mode can cause serious shortcomings in an operating system.
 When a program error occurs, the operating system must terminate the program abnormally.
 This situation is handled by the same code as is a user-requested abnormal termination.
 An appropriate error message is given, and the memory of the program may be dumped.
 The memory dump is usually written to a file so that the user or programmer can examine it and
perhaps correct it and restart the program.
Timer
 We must ensure that the operating system maintains control over the CPU.
 We must prevent a user program from getting stuck in an infinite loop or not calling system services
and never returning control to the operating system. To accomplish this goal, we can use a timer.
 A timer can be set to interrupt the computer after a specified period. The period may be fixed (for
example, 1/60 second) or variable (for example, from 1 millisecond to 1 second).
 A variable timer is generally implemented by a fixed-rate clock and a counter.
 The operating system sets the counter. Every time the clock ticks, the counter is decremented.
 When the counter reaches 0, an interrupt occurs.
 Before turning over control to the user, the operating system ensures that the timer is set to interrupt.
 If the timer interrupts, control transfers automatically to the operating system, which may treat the
interrupt as a fatal error or may give the program more time.
 Clearly, instructions that modify the content of the timer are privileged. Thus, we can use the timer to
prevent a user program from running too long.
 Every second, the timer interrupts and the counter is decremented by 1.
 As long as the counter is positive, control is returned to the user program.
 When the counter becomes negative, the operating system terminates the program for exceeding the
assigned time limit.
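The fixed-rate-clock-and-counter scheme described above can be sketched in a few lines. `run_with_timer` and its arguments are illustrative names, not a real kernel interface:

```python
def run_with_timer(job_ticks, quantum):
    """Simulate the timer: the OS sets a counter, each clock tick
    decrements it, and an interrupt fires when it reaches 0.
    Returns 'finished' if the job completes within its quantum,
    or 'interrupted' if the timer fires first."""
    counter = quantum
    for _ in range(job_ticks):       # each iteration is one clock tick of work
        counter -= 1                 # the clock tick decrements the counter
        if counter == 0:
            return "interrupted"     # timer interrupt: OS regains control
    return "finished"

print(run_with_timer(job_ticks=3, quantum=10))   # short job finishes
print(run_with_timer(job_ticks=50, quantum=10))  # long job is interrupted
```

Because only the OS may reload the counter (the instructions that modify the timer are privileged), no user program can run past its assigned quantum.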

System Calls
 System calls provide an interface to the services made available by an operating system. These calls are
generally available as routines written in C and C++, and assembly-language instructions.
 Application developers often do not have direct access to the system calls, but can access them through
an application programming interface (API). The functions that are included in the API invoke the
actual system calls. By using the API, certain benefits can be gained:
 Portability: as long as a system supports an API, any program using that API can compile and run.
 Ease of Use: using the API can be significantly easier than using the actual system call.
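On a Unix-like system the relationship between an API routine and the underlying system call can be observed directly. This sketch (assuming Linux or macOS, where `ctypes.CDLL(None)` exposes the C library) calls libc's `getpid` wrapper and Python's own `os.getpid`; both ultimately invoke the same system call:

```python
import ctypes
import os

libc = ctypes.CDLL(None)         # the C library, whose routines wrap system calls
pid_via_libc = libc.getpid()     # API routine that invokes the getpid system call
pid_via_python = os.getpid()     # Python's higher-level wrapper around the same call
print(pid_via_libc, pid_via_python)   # identical: same process, same system call
```

The application never needs to know the system-call number or the trap mechanism; the API hides those details.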

System Call Parameters


Three general methods exist for passing parameters to the OS:
1. Parameters can be passed in registers.
2. When there are more parameters than registers, parameters can be stored in a block in memory, and
the address of the block can be passed as a parameter in a register.
3. Parameters can also be pushed on or popped off the stack by the operating system.
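Method 2 can be illustrated with a toy model: the caller writes its parameters into a block, and only the block's address is handed to the OS. All names here are invented for illustration, not a real ABI:

```python
memory = {}          # stand-in for main memory: address -> parameter block

def store_param_block(address, params):
    """Caller side: write the parameter block into 'memory'; the returned
    address is the single value that would go into a register."""
    memory[address] = dict(params)
    return address

def os_handle_write(block_address):
    """Kernel side: follow the address in the register to read the
    parameters out of the block."""
    block = memory[block_address]
    return f"write {block['nbytes']} bytes from buffer {block['buffer']:#x} to fd {block['fd']}"

reg = store_param_block(0x1000, {"fd": 1, "buffer": 0x2000, "nbytes": 64})
print(os_handle_write(reg))
```

However many parameters the call needs, only one register is consumed.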
Types of System Calls
There are 5 different categories of system calls: They are process control, file manipulation, device
manipulation, information maintenance and communication.
Process Control
 A running program needs to be able to stop execution either normally or abnormally. When execution is
stopped abnormally, often a dump of memory is taken and can be examined with a debugger.
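A minimal process-control sequence (create a process, terminate it, wait for it) can be sketched with the POSIX wrappers in Python's `os` module, assuming a Unix-like system:

```python
import os

pid = os.fork()                    # process-control call: create a child process
if pid == 0:
    os._exit(7)                    # child terminates normally with status 7

_, status = os.waitpid(pid, 0)     # parent waits for the child to finish
child_exit = os.WEXITSTATUS(status)
print("child exited with status", child_exit)
```

An abnormal termination would follow the same path, except the status would encode the error (and, as noted above, a memory dump might be taken for debugging).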
File Management
 Some common system calls are create, delete, read, write, reposition, or close. Also, there is a need to
determine the file attributes – get and set file attribute. Many times the OS provides an API to make
these system calls.
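The file-management calls listed above map directly onto the POSIX-style wrappers in Python's `os` module. This sketch creates, writes, reads, and deletes a scratch file (the file name is arbitrary):

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "sysdemo.txt")   # illustrative path

fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC)   # create/open
os.write(fd, b"hello")                                      # write
os.close(fd)                                                # close

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 100)                                     # read
os.close(fd)
os.unlink(path)                                             # delete
print(data)
```

Each line corresponds to one of the system calls named in the text: create, write, close, read, and delete.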
Device Management
 Processes usually require several resources to execute; if these resources are available, they will be granted
and control returned to the user process. These resources are also thought of as devices. Some are
physical, such as a video card, and others are abstract, such as a file.
 User programs request the device, and when finished they release the device. Similar to files, we
can read, write, and reposition the device.
Information Management
 Some system calls exist purely for transferring information between the user program and the operating
system. An example of this is time, or date.
 The OS also keeps information about all its processes and provides system calls to report this
information.
Communication
There are two models of interprocess communication, the message-passing model and the shared memory
model.
 Message-passing uses a common mailbox to pass messages between processes.
 Shared memory uses certain system calls to create and gain access to regions of memory owned by
other processes. The two processes exchange information by reading and writing in the shared data.
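A pipe is one concrete message-passing mechanism a kernel may provide. This sketch, using Python's `os` wrappers on a Unix-like system, passes a message through a kernel-managed channel (here both ends live in one process, purely for illustration):

```python
import os

read_end, write_end = os.pipe()   # ask the kernel for a message channel
os.write(write_end, b"ping")      # sender deposits a message with a system call
message = os.read(read_end, 4)    # receiver retrieves it with another system call
os.close(read_end)
os.close(write_end)
print(message)
```

In the message-passing model the kernel copies the data between the processes; in the shared-memory model the processes map a common region once and then exchange data without further system calls.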
System Programs
System programs provide a convenient environment for program development and execution. They can be
divided into several categories:
1. File management: These programs create, delete, copy, rename, print, dump, list, and generally
manipulate files and directories.
2. Status information: These programs report status information such as date, time, amount of available
memory or disk space, and number of users.
3. File modification: Several text editors may be available to create and modify the content of files stored
on disk or tape.
4. Programming-language support: Compilers, assemblers, and interpreters for common programming
languages are often provided to the user with the operating system.
5. Program loading and execution: The system may provide absolute loaders, relocatable loaders,
linkage editors, and overlay loaders.
6. Communications: These programs provide the mechanism for creating virtual connections among
processes, users, and different computer systems. (email, FTP, Remote log in)
7. Application programs: Programs that are useful to solve common problems, or to perform common
operations. Eg: Web browsers, database systems.

Operating System Generation


 It is possible to design, code, and implement an operating system specifically for one machine at one
site. More commonly, however, operating systems are designed to run on any of a class of machines at a
variety of sites with a variety of peripheral configurations. The system must then be configured or
generated for each specific computer site, a process sometimes known as system generation (SYSGEN).
 The operating system is normally distributed on disk or CD-ROM. To generate a system, we use a
special program. The SYSGEN program reads from a given file, or asks the operator of the system for
information concerning the specific configuration of the hardware system, or probes the hardware
directly to determine what components are there. The following kinds of information must be
determined.
1. What CPU is to be used? What options (extended instruction sets, floating point arithmetic, and so
on) are installed? For multiple CPU systems, each CPU must be described.
2. How much memory is available? Some systems will determine this value themselves by
referencing memory location after memory location until an "illegal address" fault is generated. This
procedure defines the final legal address and hence the amount of available memory.
3. What devices are available? The system will need to know how to address each device (the device
number), the device interrupt number, the device's type and model, and any special device
characteristics.
4. What operating-system options are desired, or what parameter values are to be used? These
options or values might include how many buffers of which sizes should be used, what type of CPU-
scheduling algorithm is desired, what the maximum number of processes to be supported is, and so
on. Once this information is determined, it can be used in several ways.
 At one extreme, a system administrator can use it to modify a copy of the source code of the operating
system. The operating system then is completely compiled. Data declarations, initializations, and
constants, along with conditional compilation, produce an output object version of the operating system
that is tailored to the system described. At a slightly less tailored level, the system description can cause
the creation of tables and the selection of modules from a precompiled library.
 These modules are linked together to form the generated operating system. Selection allows the library
to contain the device drivers for all supported I/O devices, but only those needed are linked into the
operating system. Because the system is not recompiled, system generation is faster, but the resulting
system may be overly general.
 At the other extreme, it is possible to construct a system that is completely table driven. All the code is
always part of the system, and selection occurs at execution time, rather than at compile or link time.
System generation involves simply creating the appropriate tables to describe the system.
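The two extremes of SYSGEN can be contrasted in a toy model: selecting driver modules when the system is generated versus consulting a table at execution time. All names below are invented for illustration:

```python
# Precompiled library of drivers for all supported devices (invented names).
DRIVER_LIBRARY = {"disk": "disk_driver", "printer": "printer_driver",
                  "terminal": "terminal_driver", "tape": "tape_driver"}

def link_time_selection(configured_devices):
    """Module-selection approach: only the drivers for devices actually
    present at this site are linked into the generated operating system."""
    return {dev: DRIVER_LIBRARY[dev] for dev in configured_devices}

def table_driven_selection(device):
    """Fully table-driven approach: every driver is always part of the
    system, and the table is consulted at execution time."""
    return DRIVER_LIBRARY.get(device)

generated_os = link_time_selection(["disk", "terminal"])
print(sorted(generated_os))                 # only the needed drivers are included
print(table_driven_selection("printer"))    # any driver can be looked up at run time
```

The first approach yields a smaller, tailored system; the second yields a larger but more general one that needs no regeneration when the configuration changes.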
System Boot
 After an operating system is generated, it must be made available for use by the hardware. But how
does the hardware know where the kernel is or how to load that kernel?
 The procedure of starting a computer by loading the kernel is known as booting the system.
 A small piece of code known as the bootstrap program or bootstrap loader locates the kernel, loads it
into main memory, and starts its execution.

 The bootstrap program can perform a variety of tasks. Usually, one task is to run diagnostics to
determine the state of the machine. If the diagnostics pass, the program can continue with the booting
steps.
 It can also initialize all aspects of the system, from CPU registers to device controllers to the contents of main memory.
 Some systems—such as cellular phones, PDAs, and game consoles—store the entire operating system
in ROM. Storing the operating system in ROM is suitable for small operating systems. A problem with
this approach is that changing the bootstrap code requires changing the ROM hardware chips. All forms
of ROM are also known as firmware. A problem with firmware in general is that executing code there
is slower than executing code in RAM. Some systems store the operating system in firmware and copy
it to RAM for fast execution. A final issue with firmware is that it is relatively expensive, so usually
only small amounts are available.
 For large operating systems, the bootstrap loader is stored in firmware, and the operating system is on
disk. In this case, the bootstrap runs diagnostics and has a bit of code that can read a single block at a
fixed location (say block zero) from disk into memory and execute the code from that boot block. A
disk that has a boot partition is called a boot disk or system disk.
 Now that the full bootstrap program has been loaded, it can traverse the file system to find the operating
system kernel, load it into memory, and start its execution. It is only at this point that the system is said
to be running.
