
Operating System

An operating system (OS) is a collection of software that manages computer hardware
resources and provides common services for computer programs. The operating system is a
vital component of the system software in a computer system.
Types of Operating System

1) Serial Processing:

Serial processing operating systems execute instructions in sequence: the instructions
given by the user are executed in FIFO (first in, first out) order. Instructions entered
first into the system are executed first, and instructions entered later are executed
later. The program counter keeps track of which instruction is to be executed next.

2) Batch Processing:

Batch processing is similar to the serial processing technique, but in batch processing
jobs of a similar type are first prepared and stored on cards, and the cards are then
submitted to the system for processing. The system performs the operations on the
instructions one by one, incrementing its program counter to execute the next instruction;
the user cannot supply any input while a job runs.

The main problem is that the jobs prepared for execution must be of the same type, and any
job that requires input from the user cannot be served.

3) Multi-Programming:

In multiprogramming, we can execute multiple programs on the system at a time, and the
CPU never sits idle, because with the help of multiprogramming many programs can run on
the system concurrently.

Multiprogramming operating systems never use cards, because processes are entered on the
spot by the user. The operating system handles allocation and deallocation of memory,
providing memory space to all the running and all the waiting processes, and there must
be proper management of all the running jobs.
4) Real Time System:

In real-time systems, the response time is fixed in advance: the time to display results
after processing is bounded. Real-time systems are used at those places where we require
a fast and timely response, such as reservation systems, where the CPU must act as soon
as we submit a request. There are two types of real-time system:

1) Hard real-time systems: the timing is fixed and no deviation in processing time is
allowed; the CPU must process the data as soon as it is entered.
2) Soft real-time systems: some deviation is tolerated; after giving a command to the
CPU, the CPU may perform the operation a short time later.

5) Distributed Operating System

Distributed means data is stored and processed at multiple locations: data is stored on
multiple computers placed at different locations, and those computers are connected with
each other in a network.

If we want to take some data from another computer, we use the distributed processing
system, and we can also move data from one location to another. Data is shared between
many users, and all the input and output devices can likewise be accessed by multiple
users.

6) Multiprocessing: Generally a computer has a single processor, that is, just one CPU for
processing instructions, and running multiple jobs on it decreases processing speed. To
increase the speed of processing we use multiprocessing: two or more CPUs in a single
operating system. If one CPU fails, another CPU provides backup for it. With the help of
multiprocessing we can execute many jobs at a time; the operations are divided among the
CPUs, and if the first CPU completes its work before the second, the remaining work of the
second CPU is divided between the first and the second.

7) Parallel operating systems


They are used to interface multiple networked computers to complete tasks in parallel. The
architecture of the software is often a UNIX-based platform, which allows it to
coordinate distributed loads between multiple computers in a network. Parallel operating systems
are able to use software to manage all of the different resources of the computers running in
parallel, such as memory, caches, storage space, and processing power.
Parallel operating systems also allow a user to directly interface with all of the computers in the
network.

Single user operating systems


Single user operating systems can be split into two types:

 single user, single application operating systems


 single user, multitasking operating systems

1) Single user, single tasking

This type of operating system only has to deal with one person at a time, running one user
application at a time.

An example of this kind of operating system would be found on a mobile phone. There can only
be one user using the mobile and that person is only using one of its applications at a time.

2) Single user, multi-tasking

This is the kind of operating system typically found on a personal computer.

The operating system is designed mainly with a single user in mind, but it can deal with many
applications running at the same time. For example, you might be writing an essay, while
searching the internet, downloading a video file and also listening to a piece of music.

Example operating systems are

 Windows
 Linux
 Mac OS X

The difference compared to the single user, single application operating system is that it must
now handle many different applications all running at the same time.
The memory available is also very different, for example it is quite normal to have Gigabytes of
RAM available on a personal computer which is what allows so many applications to run.

Multi-User, Multi-Tasking
This kind of operating system can be found on mainframes and supercomputers.

They are highly sophisticated and are designed to handle many people running their
programmes on the computer at the same time.

Examples of this kind of operating system include various versions of UNIX, Linux, IBM's z/OS,
OS/390, MVS and VM.

Now the operating system has to manage

 Each user logged on to the system, their workspace and so on.


 Allocate resources to the jobs they want to run.
 Keep logs of how much processing time and resources they use
 Work out the most efficient use of computer processing cycles
 Maintain security

When a program is being executed in memory, it is called a 'process'. Many people may be
using the same process at the same time; each person is running a 'thread' of execution
within the process.

Components of an Operating System


1) Process Management

The major activities of an operating system in regard to process management are:

 Creation and deletion of user and system processes.


 Suspension and resumption of processes.
 A mechanism for process synchronization.
 A mechanism for process communication.
 A mechanism for deadlock handling.
 Process Scheduling
 Process completion

2) Main-Memory Management

The major activities of an operating system in regard to memory management are:

 Keep track of which parts of memory are currently being used and by whom.
 Decide which processes are loaded into memory when memory space becomes available.
 Allocate and deallocate memory space as needed.

3) File Management

The five major activities of an operating system in regard to file management are

1. The creation and deletion of files.
2. The creation and deletion of directories.
3. The support of primitives for manipulating files and directories.
4. The mapping of files onto secondary storage.
5. The backup of files on stable storage media.

4) I/O System Management

1. Disk management functions such as free space management, storage allocation,
fragmentation removal, and head scheduling.
2. A consistent, convenient software-to-I/O-device interface through buffering/caching and
custom drivers for each device.

5) Secondary-Storage Management

The three major activities of an operating system in regard to secondary storage management are:

1. Managing the free space available on the secondary-storage device.
2. Allocation of storage space when new files have to be written.
3. Scheduling requests for storage access.

6) Networking

The major activities are


 TCP/IP, IPX, IPng
 Connection/Routing strategies
 "Circuit" management; message and packet switching
 Communication mechanism
 Data/Process migration

7) Protection System

Protection refers to a mechanism for controlling the access of programs, processes, or users to
the resources defined by a computer system.
Controlling access to the system

1. Resources --- CPU cycles, memory, files, devices


2. Users --- authentication, communication
3. Mechanisms, not policies

Services provided by an Operating System

Following are a few common services provided by operating systems.

 Program execution

 I/O operations

 File System manipulation

 Communication

 Error Detection

 Resource Allocation

 Protection

Program execution
Following are the major activities of an operating system with respect to program management.

 Loads a program into memory.

 Executes the program.

 Handles program's execution.


 Provides a mechanism for process synchronization.

 Provides a mechanism for process communication.

 Provides a mechanism for deadlock handling.

I/O Operation
Operating System manages the communication between user and device drivers. Following are the
major activities of an operating system with respect to I/O Operation.

 I/O operation means read or write operation with any file or any specific I/O device.

 Program may require any I/O device while running.

 Operating system provides the access to the required I/O device when required.

File system manipulation


A file represents a collection of related information. Computers can store files on the disk
(secondary storage) for long-term storage purposes.

Following are the major activities of an operating system with respect to file management.

 Program needs to read a file or write a file.

 The operating system gives the program permission to perform operations on a file.

 Permissions vary: read-only, read-write, denied, and so on.

 Operating System provides an interface to the user to create/delete files.

 Operating System provides an interface to the user to create/delete directories.

 Operating System provides an interface to create the backup of file system.

Communication
Following are the major activities of an operating system with respect to communication.

 Two processes often require data to be transferred between them.


 Both processes can be on one computer or on different computers connected
through a computer network.

 Communication may be implemented by two methods: either by Shared Memory or by
Message Passing.

Error handling
Errors can occur at any time and anywhere. An error may occur in the CPU, in I/O devices or in
the memory hardware. Following are the major activities of an operating system with respect to
error handling.

 OS constantly remains aware of possible errors.

 OS takes the appropriate action to ensure correct and consistent computing.

Resource Management
Following are the major activities of an operating system with respect to resource management.

 OS manages all kinds of resources using schedulers.

 CPU scheduling algorithms are used for better utilization of CPU.

Protection
Protection refers to a mechanism or a way to control the access of programs, processes, or users to
the resources defined by a computer system. Following are the major activities of an operating
system with respect to protection.

 OS ensures that all access to system resources is controlled.

 OS ensures that external I/O devices are protected from invalid access attempts.

 OS provides authentication feature for each user by means of a password.

Process
A process is a program in execution. The execution of a process must progress in a sequential
fashion. The definition of a process is as follows.
 A process is defined as an entity which represents the basic unit of work to be
implemented in the system.

The components of a process are the following.

1. Object Program - the code to be executed.
2. Data - the data to be used for executing the program.
3. Resources - the resources the program may require while executing.
4. Status - the status of the process's execution. A process can run to completion only when all
requested resources have been allocated to it. Two or more processes may be executing the same
program, each using its own data and resources.

Process State

 Processes may be in one of five states:


o New - The process is in the stage of being created.
o Ready - The process has all the resources available that it needs to run,
but the CPU is not currently working on this process's instructions.
o Running - The CPU is working on this process's instructions.
o Waiting - The process cannot run at the moment, because it is waiting
for some resource to become available or for some event to occur. For
example the process may be waiting for keyboard input, disk access
request, inter-process messages, a timer to go off, or a child process to
finish.
o Terminated - The process has completed.
Process Scheduling
Process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a particular
strategy.

Process scheduling is an essential part of a multiprogramming operating system. Such operating
systems allow more than one process to be loaded into executable memory at a time, and the
loaded processes share the CPU using time multiplexing.

Process scheduler
The process scheduler is the part of the operating system that decides which process runs at a
certain point in time. It usually has the ability to pause a running process, move it to the back of
the running queue and start a new process; such a scheduler is known as a preemptive scheduler,
otherwise it is a cooperative scheduler.

Schedulers are of three types

 Long Term Scheduler


 Short Term Scheduler
 Medium Term Scheduler

Long Term Scheduler


It is also called the job scheduler. The long-term scheduler determines which programs are
admitted to the system for processing. The job scheduler selects processes from the queue and
loads them into memory for execution, where they become subject to CPU scheduling. The
primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O-bound
and processor-bound jobs. It also controls the degree of multiprogramming.

On some systems, the long-term scheduler may be absent or minimal; time-sharing operating
systems have no long-term scheduler. It is the long-term scheduler that moves a process from
the new state to the ready state.

Short Term Scheduler


It is also called the CPU scheduler. Its main objective is to increase system performance in
accordance with the chosen set of criteria. It effects the change of a process from the ready state
to the running state: the CPU scheduler selects one process among the processes that are ready
to execute and allocates the CPU to it.

The short-term scheduler, also known as the dispatcher, executes most frequently and makes the
fine-grained decision of which process to execute next. The short-term scheduler is faster than
the long-term scheduler.

Medium Term Scheduler


Medium-term scheduling is part of swapping. It removes processes from memory and thereby
reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling
the swapped-out processes.

A running process may become suspended if it makes an I/O request. A suspended process
cannot make any progress towards completion. In this condition, to remove the process from
memory and make space for another process, the suspended process is moved to secondary
storage. This is called swapping, and the process is said to be swapped out or rolled out.
Swapping may be necessary to improve the process mix.

Comparison between Schedulers

1. The long-term scheduler is a job scheduler; the short-term scheduler is a CPU scheduler;
the medium-term scheduler is a process-swapping scheduler.
2. The speed of the long-term scheduler is less than that of the short-term scheduler; the
short-term scheduler is the fastest of the three; the speed of the medium-term scheduler lies
in between the other two.
3. The long-term scheduler controls the degree of multiprogramming; the short-term scheduler
provides less control over the degree of multiprogramming; the medium-term scheduler reduces
the degree of multiprogramming.
4. The long-term scheduler is almost absent or minimal in time-sharing systems; the short-term
scheduler is also minimal in time-sharing systems; the medium-term scheduler is a part of
time-sharing systems.
5. The long-term scheduler selects processes from the pool and loads them into memory for
execution; the short-term scheduler selects from among the processes that are ready to execute;
the medium-term scheduler can re-introduce a process into memory so that its execution can be
continued.

Scheduling Criteria
CPU utilization – keep the CPU as busy as possible

Throughput – the number of processes that complete their execution per time unit

Turnaround time – amount of time to execute a particular process

Waiting time – amount of time a process has been waiting in the ready queue

Response time – amount of time it takes from when a request was submitted until the first response is
produced, not output (for time-sharing environment)

CPU burst time – the amount of time a process spends executing on the CPU.

I/O burst time – the time a process spends performing I/O operations.

Submission time – the creation time of the process.

Arrival time – the time at which the process reaches the ready queue.

Scheduling algorithms

The major scheduling algorithms are the following:

 First Come First Serve (FCFS) Scheduling

 Shortest-Job-First (SJF) Scheduling

 Priority Scheduling

 Round Robin(RR) Scheduling

 Multilevel Queue Scheduling


First Come First Serve (FCFS)
 Jobs are executed on a first come, first served basis.

 Easy to understand and implement.

 Poor in performance as average wait time is high.

The wait time of each process is as follows. In these worked examples, processes P0, P1, P2
and P3 arrive at times 0, 1, 2 and 3 with burst times 5, 3, 8 and 6 respectively (the Gantt
charts from which the tables were computed are not reproduced here).

Process   Wait Time : Service Time - Arrival Time

P0        0 - 0 = 0

P1        5 - 1 = 4

P2        8 - 2 = 6

P3        16 - 3 = 13

Average Wait Time: (0 + 4 + 6 + 13) / 4 = 5.75
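
A minimal C sketch of this calculation, using the process set stated above; the loop reproduces
the table's wait times:

    #include <stdio.h>

    /* FCFS: processes run in arrival order; wait = service start - arrival. */
    int main(void) {
        int arrival[] = {0, 1, 2, 3};   /* process set inferred from the table */
        int burst[]   = {5, 3, 8, 6};
        int n = 4, clock = 0;
        double total = 0;

        for (int i = 0; i < n; i++) {
            if (clock < arrival[i])
                clock = arrival[i];     /* CPU idles until the job arrives */
            int wait = clock - arrival[i];
            printf("P%d  %d - %d = %d\n", i, clock, arrival[i], wait);
            total += wait;
            clock += burst[i];          /* next job starts after this burst */
        }
        printf("Average wait time: %.2f\n", total / n);
        return 0;
    }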

Shortest Job First (SJF)


 Best approach to minimize waiting time.
 Impossible to implement in practice, since the processor would need to know in
advance how much time each process will take.

The wait time of each process is as follows (non-preemptive SJF on the same process set; P0
runs first because it is the only process that has arrived at time 0, after which the shortest
remaining jobs are P1, P3 and P2 in that order).

Process   Wait Time : Service Time - Arrival Time

P0        0 - 0 = 0

P1        5 - 1 = 4

P3        8 - 3 = 5

P2        14 - 2 = 12

Average Wait Time: (0 + 4 + 5 + 12) / 4 = 5.25
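
A minimal C sketch of non-preemptive SJF on the same process set: at each completion it runs
the shortest job among those that have already arrived, reproducing the table above:

    #include <stdio.h>

    int main(void) {
        int arrival[] = {0, 1, 2, 3};
        int burst[]   = {5, 3, 8, 6};
        int done[4] = {0};
        int n = 4, clock = 0, completed = 0;
        double total = 0;

        while (completed < n) {
            int pick = -1;              /* shortest arrived, unfinished job */
            for (int i = 0; i < n; i++)
                if (!done[i] && arrival[i] <= clock &&
                    (pick < 0 || burst[i] < burst[pick]))
                    pick = i;
            if (pick < 0) { clock++; continue; }   /* idle until next arrival */
            int wait = clock - arrival[pick];
            printf("P%d  %d - %d = %d\n", pick, clock, arrival[pick], wait);
            total += wait;
            clock += burst[pick];
            done[pick] = 1;
            completed++;
        }
        printf("Average wait time: %.2f\n", total / n);
        return 0;
    }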

Priority Based Scheduling


 Each process is assigned a priority. Process with highest priority is to be executed first and
so on.

 Processes with same priority are executed on first come first serve basis.
 Priority can be decided based on memory requirements, time requirements or any other
resource requirement.

The wait time of each process is as follows (non-preemptive priority scheduling on the same
process set, assuming P3 has the highest priority, followed by P1, P0 and P2; P0 runs first
because it is the only process that has arrived at time 0).

Process   Wait Time : Service Time - Arrival Time

P0        0 - 0 = 0

P3        5 - 3 = 2

P1        11 - 1 = 10

P2        14 - 2 = 12

Average Wait Time: (0 + 2 + 10 + 12) / 4 = 6.00
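
The same skeleton implements non-preemptive priority scheduling by picking the
highest-priority arrived job instead of the shortest; the numeric priorities below are
illustrative, chosen so that P3 > P1 > P0 > P2 as assumed above:

    #include <stdio.h>

    int main(void) {
        int arrival[]  = {0, 1, 2, 3};
        int burst[]    = {5, 3, 8, 6};
        int priority[] = {3, 2, 4, 1};  /* smaller number = higher priority */
        int done[4] = {0};
        int n = 4, clock = 0, completed = 0;
        double total = 0;

        while (completed < n) {
            int pick = -1;              /* highest-priority arrived job */
            for (int i = 0; i < n; i++)
                if (!done[i] && arrival[i] <= clock &&
                    (pick < 0 || priority[i] < priority[pick]))
                    pick = i;
            if (pick < 0) { clock++; continue; }
            int wait = clock - arrival[pick];
            printf("P%d  %d - %d = %d\n", pick, clock, arrival[pick], wait);
            total += wait;
            clock += burst[pick];
            done[pick] = 1;
            completed++;
        }
        printf("Average wait time: %.2f\n", total / n);
        return 0;
    }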

Round Robin Scheduling


 Each process is given a fixed time to execute, called a quantum.

 Once a process has executed for the given time period, it is preempted and another
process executes for its time period.
 Context switching is used to save the states of preempted processes.

The wait time of each process is as follows (same process set, with the time quantum of 3
implied by the table).

Process Wait Time : Service Time - Arrival Time

P0 (0-0) + (12-3) = 9

P1 (3-1) = 2

P2 (6-2) + (14-9) + (20-17) = 12

P3 (9-3) + (17-12) = 11

Average Wait Time: (9+2+12+11) / 4 = 8.5
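
A minimal C sketch of round robin on the same process set with the quantum of 3; each
process's wait time comes out as turnaround time minus burst time, matching the table:

    #include <stdio.h>

    int main(void) {
        int arrival[] = {0, 1, 2, 3};
        int burst[]   = {5, 3, 8, 6};
        int remain[]  = {5, 3, 8, 6};
        int n = 4, quantum = 3, clock = 0, left = n, next = 1;
        int queue[32], head = 0, tail = 0;
        double total = 0;

        queue[tail++] = 0;              /* P0 arrives at time 0 */
        while (left > 0) {
            int p = queue[head++];
            int slice = remain[p] < quantum ? remain[p] : quantum;
            clock += slice;
            remain[p] -= slice;
            while (next < n && arrival[next] <= clock)
                queue[tail++] = next++; /* arrivals join the queue first ... */
            if (remain[p] > 0) {
                queue[tail++] = p;      /* ... then the preempted process */
            } else {
                int wait = clock - arrival[p] - burst[p];
                printf("P%d wait = %d\n", p, wait);
                total += wait;
                left--;
            }
        }
        printf("Average wait time: %.2f\n", total / n);
        return 0;
    }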

Multi Queue Scheduling


 Multiple queues are maintained for processes.

 Each queue can have its own scheduling algorithms.

 Priorities are assigned to each queue.


Threads
A thread is a basic unit of CPU utilization, consisting of a program counter, a stack,
and a set of registers.
OR
A thread is a single sequential stream of execution within a process. Because threads have
some of the properties of processes, they are sometimes called lightweight processes.
Within a process, threads allow multiple streams of execution.

Why Threads?

Following are some reasons why we use threads in designing operating systems.

1. A process with multiple threads makes a great server, for example a print
server.
2. Because threads can share common data, they do not need to use
interprocess communication.
3. By their very nature, threads can take advantage of multiprocessors.

Threads are cheap in the sense that


1. They only need a stack and storage for registers; therefore, threads are cheap to
create.
2. Threads use very few resources of the operating system in which they are
working. That is, threads do not need a new address space, global data, program
code or operating system resources.
3. Context switching is fast when working with threads, because we only have to
save and/or restore the PC, SP and registers (see the sketch below).
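
As an illustration of how lightweight threads are, a minimal POSIX threads sketch (compile
with cc -pthread; the worker function and the thread count are illustrative). All four
threads share one address space and need only private stacks and registers:

    #include <pthread.h>
    #include <stdio.h>

    /* Each thread shares the process's address space; only its stack and
       registers are private. */
    static void *worker(void *arg) {
        int id = *(int *)arg;
        printf("thread %d running in the shared address space\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t t[4];
        int id[4];
        for (int i = 0; i < 4; i++) {
            id[i] = i;
            pthread_create(&t[i], NULL, worker, &id[i]); /* no new address space */
        }
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);    /* wait for all threads to finish */
        return 0;
    }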

Levels of Threads

1) User-Level Threads
2) Kernel-Level Threads
3) System-Level Threads

1) User-Level Threads

User-level threads are implemented in user-level libraries rather than via system calls, so
thread switching does not need to call the operating system or cause an interrupt to the
kernel.

Advantages:

Some advantages are

 User-level threads do not require modifications to the operating system.


 Simple Representation:
Each thread is represented simply by a PC, registers, stack and a small
control block, all stored in the user process address space.
 Simple Management:
This simply means that creating a thread, switching between threads and
synchronization between threads can all be done without intervention of the
kernel.
 Fast and Efficient:
Thread switching is not much more expensive than a procedure call.

Disadvantages:

 There is a lack of coordination between threads and the operating system kernel.
Therefore, the process as a whole gets one time slice irrespective of whether the
process has one thread or 1000 threads within it. It is up to each thread to
relinquish control to the other threads.
 User-level threads require non-blocking system calls.
 For example, if one thread causes a page fault, the whole process blocks.

2) Kernel-Level Threads

In this method, the kernel knows about and manages the threads. No runtime system is
needed in this case. Instead of a thread table in each process, the kernel has a single
thread table that keeps track of all threads in the system. In addition, the kernel also
maintains the traditional process table to keep track of processes. The operating system
kernel provides system calls to create and manage threads.

Advantages:

 Because the kernel has full knowledge of all threads, the scheduler may decide to
give more time to a process having a large number of threads than to a process
having a small number of threads.
 Kernel-level threads are especially good for applications that frequently block.

Disadvantages:

 Kernel-level threads are slow and inefficient: thread operations are hundreds of
times slower than those of user-level threads.
 The kernel must manage and schedule threads as well as processes, which adds
overhead.
Process Synchronization
Process synchronization refers to the idea that multiple processes are to join up
or handshake at a certain point, in order to reach an agreement or commit to a certain
sequence of action.
Process synchronization is required when one process must wait for another to complete some
operation before proceeding.
Process synchronization was introduced to handle problems that arise when multiple processes
execute concurrently. Some of the problems are discussed below.

1) The Critical-Section Problem

A Critical Section is a code segment that accesses shared variables and has to be executed as an
atomic action. It means that in a group of cooperating processes, at a given point of time, only
one process must be executing its critical section. If any other process also wants to execute its
critical section, it must wait until the first one finishes.

The general idea is that in a number of cooperating processes, each has a critical
section of code, with the following conditions and terminologies:

 Only one process in the group can be allowed to execute in their critical
section at any one time.
 The code preceding the critical section, and which controls access to the
critical section, is termed the entry section. It acts like a carefully controlled
locking door.
 The code following the critical section is termed the exit section. It generally
releases the lock on someone else's door, or at least lets the world know that
they are no longer in their critical section.
 The rest of the code not included in either the critical section or the entry or
exit sections is termed the remainder section.
General structure of a typical process Pi
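
A sketch of that structure in C, using a POSIX mutex as the entry/exit mechanism (any locking
primitive would serve; the shared counter is illustrative):

    #include <pthread.h>
    #include <stdio.h>

    pthread_mutex_t door = PTHREAD_MUTEX_INITIALIZER;
    int shared_counter = 0;

    static void *process_i(void *arg) {
        for (int i = 0; i < 3; i++) {
            pthread_mutex_lock(&door);      /* entry section: the locking door */
            shared_counter++;               /* critical section: shared state  */
            pthread_mutex_unlock(&door);    /* exit section: release the lock  */
            /* remainder section: code that touches no shared state */
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, process_i, NULL);
        pthread_create(&b, NULL, process_i, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %d\n", shared_counter);  /* always 6 under mutual exclusion */
        return 0;
    }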

 A solution to the critical section problem must satisfy the following three
conditions:
1. Mutual Exclusion - Only one process at a time can be executing in their
critical section.
2. Progress - If no process is currently executing in their critical section,
and one or more processes want to execute their critical section,
processes cannot be blocked forever waiting to get into their critical
sections.
3. Bounded Waiting - There exists a limit as to how many other processes
can get into their critical sections after a process requests entry into
its critical section and before that request is granted.

2) Semaphores

In 1965, Dijkstra proposed a new and very significant technique for managing concurrent processes
by using the value of a simple integer variable to synchronize the progress of interacting processes.
This integer variable is called a semaphore. It is basically a synchronizing tool and is accessed
only through two standard atomic operations, wait and signal, designated by P() and V()
respectively.

The classical definitions of wait and signal are:

 Wait: decrement the value of its argument S as soon as doing so would leave S non-negative
(that is, wait until S > 0, then decrement S).

 Signal: increment the value of its argument S as a single atomic operation.
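
In code terms, the classical definitions look roughly like the busy-waiting sketch below.
This is illustrative only: a real semaphore performs the test and decrement atomically and
puts the caller to sleep instead of spinning.

    /* Illustrative only: real implementations make wait_op atomic and put
       the caller to sleep rather than spinning. */
    typedef struct { volatile int value; } semaphore;

    void wait_op(semaphore *s) {       /* P(): wait until S > 0, then decrement */
        while (s->value <= 0)
            ;                          /* busy-wait for a signal */
        s->value--;
    }

    void signal_op(semaphore *s) {     /* V(): increment S as one atomic step */
        s->value++;
    }

    int main(void) {
        semaphore s = { 1 };           /* initialized to 1: resource is free */
        wait_op(&s);                   /* enter */
        signal_op(&s);                 /* leave */
        return 0;
    }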

Properties of Semaphores

1. Simple

2. Works with many processes

3. Can have many different critical sections with different semaphores

4. Each critical section has unique access semaphores

5. Can permit multiple processes into the critical section at once, if desirable.

Types of Semaphores

Semaphores are mainly of two types:

1. Binary Semaphore

It is a special form of semaphore used for implementing mutual exclusion, hence it is often
called a mutex. A binary semaphore is initialized to 1 and only takes the values 0 and 1
during execution of a program.

2. Counting Semaphores

These are used to implement bounded concurrency.
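
A sketch of bounded concurrency using POSIX counting semaphores; the limit of three
simultaneous users and the thread count are illustrative:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    sem_t slots;                        /* counting semaphore: free slots */

    static void *user(void *arg) {
        long id = (long)arg;
        sem_wait(&slots);               /* P(): take a slot, block if none free */
        printf("thread %ld inside (at most 3 at once)\n", id);
        sem_post(&slots);               /* V(): release the slot */
        return NULL;
    }

    int main(void) {
        pthread_t t[10];
        sem_init(&slots, 0, 3);         /* initial value 3 bounds the concurrency */
        for (long i = 0; i < 10; i++)
            pthread_create(&t[i], NULL, user, (void *)i);
        for (int i = 0; i < 10; i++)
            pthread_join(t[i], NULL);
        sem_destroy(&slots);
        return 0;
    }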

Limitations of Semaphores

1. Priority inversion is a big limitation of semaphores.

2. Their use is not enforced, but is by convention only.


3. With improper use, a process may block indefinitely. Such a situation is called deadlock;

deadlocks are studied in detail in the next section.

Deadlock
A condition that occurs when two processes are each waiting for the other to complete before
proceeding. The result is that both processes hang. Deadlocks occur most commonly
in multitasking and client/server environments. Ideally, the programs that are deadlocked, or
the operating system, should resolve the deadlock, but this doesn't always happen.
A deadlock is also called a deadly embrace.

In order for deadlock to occur, four conditions must be true.

 Mutual exclusion - Each resource is either currently allocated to exactly


one process or it is available. (Two processes cannot simultaneously control
the same resource or be in their critical section).
 Hold and Wait - processes currently holding resources can request new
resources
 No preemption - Once a process holds a resource, it cannot be taken away
by another process or the kernel.
 Circular wait - Each process is waiting to obtain a resource which is held by
another process.
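
The circular-wait condition is easy to reproduce. In the sketch below, two threads acquire
the same two locks in opposite orders; the sleeps make the bad interleaving near-certain,
and both threads hang:

    #include <pthread.h>
    #include <unistd.h>

    pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

    static void *t1(void *arg) {
        pthread_mutex_lock(&A);         /* holds A ... */
        sleep(1);                       /* give t2 time to take B */
        pthread_mutex_lock(&B);         /* ... and waits forever for B */
        pthread_mutex_unlock(&B);
        pthread_mutex_unlock(&A);
        return NULL;
    }

    static void *t2(void *arg) {
        pthread_mutex_lock(&B);         /* holds B ... */
        sleep(1);
        pthread_mutex_lock(&A);         /* ... and waits forever for A */
        pthread_mutex_unlock(&A);
        pthread_mutex_unlock(&B);
        return NULL;
    }

    int main(void) {
        pthread_t x, y;
        pthread_create(&x, NULL, t1, NULL);
        pthread_create(&y, NULL, t2, NULL);
        pthread_join(x, NULL);          /* never returns: circular wait */
        pthread_join(y, NULL);
        return 0;
    }

Preventing this particular deadlock amounts to changing the rules: if both threads agree to
take the locks in the same global order (always A before B), the cycle cannot form.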

Solutions to deadlock

There are several ways to address the problem of deadlock in an operating system.
1. Ignore deadlock

The text refers to this as the Ostrich Algorithm. Just hope that deadlock doesn't
happen.

If deadlock does occur, it may be necessary to bring the system down, or at least
manually kill a number of processes, but even that is not an extreme solution in most
situations.

2. Deadlock detection and recovery

If there is only one instance of each resource, it is possible to detect deadlock by
constructing a resource allocation/request graph and checking for cycles. Graph
theorists have developed a number of algorithms to detect cycles in a graph.
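
With one instance per resource the check reduces to cycle detection. Below is a
depth-first-search sketch over an illustrative wait-for matrix (an edge from Pi to Pj
means Pi is waiting for a resource held by Pj):

    #include <stdio.h>

    #define N 4                          /* processes P0..P3 */

    /* wait_for[i][j] = 1 means Pi waits for a resource held by Pj. */
    int wait_for[N][N] = {
        {0, 1, 0, 0},                    /* P0 -> P1 */
        {0, 0, 1, 0},                    /* P1 -> P2 */
        {1, 0, 0, 0},                    /* P2 -> P0 closes the cycle */
        {0, 0, 0, 0},                    /* P3 waits for nothing */
    };

    int color[N];                        /* 0 unvisited, 1 on path, 2 done */

    int has_cycle(int u) {
        color[u] = 1;
        for (int v = 0; v < N; v++)
            if (wait_for[u][v]) {
                if (color[v] == 1) return 1;              /* back edge: deadlock */
                if (color[v] == 0 && has_cycle(v)) return 1;
            }
        color[u] = 2;
        return 0;
    }

    int main(void) {
        for (int i = 0; i < N; i++)
            if (color[i] == 0 && has_cycle(i)) {
                printf("deadlock: cycle reachable from P%d\n", i);
                return 0;
            }
        printf("no deadlock\n");
        return 0;
    }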

3. Deadlock avoidance

This works only if the system knows in advance what requests for resources a process will
make, and this is an unrealistic assumption. The text describes the banker's algorithm but
then points out that it is essentially impossible to implement because of this assumption.

4. Deadlock Prevention

The difference between deadlock avoidance and deadlock prevention is:

Deadlock avoidance refers to a strategy where, whenever a resource is requested, it is
only granted if granting it cannot result in deadlock. Deadlock prevention strategies
involve changing the rules so that processes will not make requests that could result in
deadlock.

Few Terms:
1) Thrashing
When referring to a computer, thrashing or disk thrashing describes a hard drive being
overworked by moving information between the system memory and virtual memory excessively.
Thrashing usually occurs when the system does not have enough memory, the system swap file
is not properly configured, or too much is running at the same time and the system is low
on resources.
When thrashing occurs you will notice the hard drive constantly working and a decrease in
system performance. Thrashing is hard on the hard drive because of the amount of work it
has to do, and if left unfixed it can cause an early hard drive failure.
To resolve hard drive thrashing, a user can do any of the below.
1. Increase the amount of RAM in the computer.
2. Decrease the number of programs being run on the computer.
3. Adjust the size of the swap file.
2) Page Fault
An interrupt that occurs when a program requests data that is not currently in real memory.
The interrupt triggers the operating system to fetch the data from virtual memory (the
backing store on disk) and load it into RAM.
An invalid page fault or page fault error occurs when the operating system cannot find the data
in virtual memory. This usually happens when the virtual memory area, or the table that maps
virtual addresses to real addresses, becomes corrupt.

3) Paging
Paging is a memory management technique in which the memory is divided into fixed-size
pages. Paging is used for faster access to data: when a program needs a page, it is already
available in main memory, because the OS copies a certain number of pages from the storage
device into main memory. Paging allows the physical address space of a process to be
noncontiguous.
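
Address translation under paging is simple arithmetic: split the virtual address into a page
number and an offset, look the page up in the page table, and reattach the offset. A sketch
with an illustrative 4 KB page size and a toy page table:

    #include <stdio.h>

    #define PAGE_SIZE 4096u             /* illustrative 4 KB pages */

    /* Toy page table: page_table[virtual page number] = physical frame. */
    unsigned page_table[] = {5, 9, 2, 7};

    int main(void) {
        unsigned vaddr  = 2 * PAGE_SIZE + 123;  /* an address on virtual page 2 */
        unsigned page   = vaddr / PAGE_SIZE;    /* high bits: page number */
        unsigned offset = vaddr % PAGE_SIZE;    /* low bits: offset in the page */
        unsigned paddr  = page_table[page] * PAGE_SIZE + offset;
        /* frames 5, 9, 2, 7 are scattered, yet the process sees one
           contiguous virtual address space */
        printf("virtual %u -> page %u offset %u -> physical %u\n",
               vaddr, page, offset, paddr);
        return 0;
    }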
4) Segmentation
Segmentation is a memory management technique in which memory is divided into variable-sized
chunks that can be allocated to processes. Each chunk is called a segment. A table stores
the information about all such segments and is called the Global Descriptor Table (GDT). A
GDT entry is called a Global Descriptor.
5) Fragmentation
Fragmentation refers to the condition of a disk in which files are divided into pieces scattered
around the disk. Fragmentation occurs naturally when you use a disk frequently, creating,
deleting, and modifying files. At some point, the operating system needs to store parts of a file
in noncontiguous clusters. This is entirely invisible to users, but it can slow down the speed at
which data is accessed because the disk drive must search through different parts of the disk to
put together a single file.
6) Semaphore
In Unix systems, semaphores are a technique for coordinating or synchronizing activities in
which multiple processes compete for the same operating system resources.
OR
A semaphore is a variable or abstract data type that is used for controlling access, by
multiple processes, to a common resource in a parallel programming or multi-user
environment.
7) Starvation

Starvation is a resource management problem where a process does not get the resources it
needs for a long time because the resources are being allocated to other processes.
The solution to starvation is the technique of aging.

8) Aging

Aging is a technique to avoid starvation in a scheduling system. It
works by adding an aging factor to the priority of each request. The
aging factor must increase the request's priority as time passes and
must ensure that a request will eventually become the highest-priority
request (after it has waited long enough).
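
A sketch of the idea; the base priority and aging factor here are illustrative:

    #include <stdio.h>

    /* Effective priority grows with waiting time, so every request
       eventually becomes the highest-priority one. */
    int effective_priority(int base, int wait_ticks) {
        const int aging_factor = 1;     /* +1 priority per tick spent waiting */
        return base + aging_factor * wait_ticks;
    }

    int main(void) {
        for (int t = 0; t <= 30; t += 10)
            printf("after %2d ticks: priority %d\n", t, effective_priority(5, t));
        return 0;
    }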

9) DMA
Stands for "Direct Memory Access." DMA is a method of transferring data from the
computer's RAM to another part of the computer without processing it using the CPU.
While most data that is input or output from your computer is processed by the CPU,
some data does not require processing, or can be processed by another device. In these
situations, DMA can save processing time and is a more efficient way to move data from
the computer's memory to other devices.
10) Process
A process is an instance of a program running in a computer. In Unix and some other operating
systems, a process is started when a program is initiated.
A process can initiate a subprocess, which is called a child process (and the initiating process
is sometimes referred to as its parent).
Processes can exchange information or synchronize their operation through several methods of
interprocess communication (IPC).
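
A minimal Unix sketch of the parent/child relationship: fork() creates the child, and both
processes continue from the same point:

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();             /* create a child process */
        if (pid == 0) {
            printf("child  pid=%d parent=%d\n", getpid(), getppid());
        } else {
            printf("parent pid=%d child=%d\n", getpid(), pid);
            wait(NULL);                 /* parent waits for the child to finish */
        }
        return 0;
    }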

11) Swapping
Swapping is a mechanism in which a process can be swapped temporarily out of main memory to a
backing store and then brought back into memory for continued execution.

The major time-consuming part of swapping is transfer time. Total transfer time is directly
proportional to the amount of memory swapped.

12) Critical Region

A critical region is a simple mechanism that prevents multiple threads from simultaneously
executing code protected by the same critical region.

The code fragments could be different, and in completely different modules, but as long as the
critical region is the same, no two threads should call the protected code at the same time.
