
08.705 RTOS (EL-III)


Module 1 University Questions and
Solutions

Part A Questions and Solutions


1. What are the functions of an OS? (Nov. 2012)
Ans:
The basic functions of an operating system:
The OS provides services to programs and to the users of those programs, and it provides an environment for the execution of programs.
The common functions are:
1. Program execution: Load a program into memory and run it; the program must also be able to end its execution, either normally or abnormally.
2. I/O operations: Open or close an I/O device or a file.
3. File-system management: Read from or write to a file.
4. Communications: Share data between different processes.
5. Error detection: Detect errors and take appropriate action to correct them, for consistent computing.

2. What are the benefits of a multiuser OS as against a single-user OS? (Nov. 2012)
Ans:
Advantages of a multi-user OS:
1. As the OS handles multiple applications at a time, most of the CPU time is used efficiently.
2. Resources such as memory and the CPU are utilized to the maximum.
3. Multiple users can use the system at the same time.
4. Networked and distributed systems can be implemented.

3. Describe how system calls are used while a file is copied from a source location to a destination location. (Oct. 2011)
Ans:
System calls:
Provide an interface to the OS services.
Available as routines written in C and C++; certain low-level tasks may be written in assembly language.
Types of system calls:
1. Process control
2. File manipulation
3. Device manipulation
4. Information maintenance
5. Communication
6. Protection

An example illustrating how system calls are used in a simple program that reads data from one file and copies it to another file is given below.
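A minimal C sketch using the POSIX system calls open(), read(), write(), and close() (file names and buffer size are placeholders, and error handling is kept brief):

    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void)
    {
        char buf[4096];
        ssize_t n;

        /* Acquire the input and output files (names are placeholders). */
        int in  = open("source.txt", O_RDONLY);
        int out = open("dest.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (in < 0 || out < 0) {
            perror("open");
            return 1;
        }

        /* Loop: read a block from the source and write it to the
           destination, until read() signals end of file by returning 0. */
        while ((n = read(in, buf, sizeof buf)) > 0)
            write(out, buf, n);

        /* Release both files. */
        close(in);
        close(out);
        return 0;
    }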

4. What is a microkernel? (Oct. 2011)
Ans:
Kernel:
In one view, the kernel is the lowest level of software, immediately above the hardware architectural level. In another view, it is the part of the OS that resides permanently in main memory. In yet another view, the kernel is the part of the OS that runs in a protected mode.
Micro-kernel:
An OS kernel can be quite small, consisting mostly of two classes of functions (process and thread management, and interrupt and trap handling) plus some limited resource management, usually processor scheduling and perhaps some low-level virtual memory allocation.

The two classes are:
1. Process and thread management: Process creation, destruction, and basic interprocess communication and synchronization.
2. Interrupt and trap handling: Responding to signals triggered by various system events, among them the termination of a process, completion of an I/O operation, a timer signal indicating a timeout or clock tick, an error caused by a program, or a hardware malfunction.

5. List the criteria for considering the scheduler design. (Oct. 2011)
Ans:
Scheduling criteria:
Different CPU-scheduling algorithms have different properties, and the choice of a particular algorithm may favor one class of processes over another.
The criteria include the following:
1. CPU utilization: Keep the CPU as busy as possible.
2. Throughput: The number of processes that are completed per time unit. For long processes, this rate may be one process per hour; for short transactions, it may be ten processes per second.

3. Turnaround time: The interval from the time of submission of a process to the time of completion. It is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
4. Waiting time: The sum of the periods spent waiting in the ready queue.
5. Response time: The time it takes to start responding, not the time it takes to output the response. The turnaround time is generally limited by the speed of the output device.
It is desirable to maximize CPU utilization and throughput, and to minimize turnaround time, waiting time, and response time.

6. What is the shared-data problem? (Nov. 2012)
Ans:
Shared-data problem:
ISRs and task code must share one or more variables that they can use to communicate with one another.
The shared-data problem arises when the ISRs need to communicate with the rest of the code through such shared variables.

A classic piece of code exhibiting the shared-data problem is shown below.
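The original listing is not reproduced in this copy; the following C sketch reconstructs it from the discussion below (variable and function names follow the text, and the hardware reads are placeholders):

    static volatile int iTemperatures[2];

    /* ISR: would be registered with the interrupt vector; the 'interrupt'
       function qualifier used in such examples is compiler-specific and
       is omitted here. */
    void vReadTemperatures(void)
    {
        iTemperatures[0] = 0;  /* placeholder: read temperature 0 from hardware */
        iTemperatures[1] = 0;  /* placeholder: read temperature 1 from hardware */
    }

    int main(void)
    {
        int iTemp0, iTemp1;

        for (;;)
        {
            /* If the ISR fires between these two reads, iTemp0 and iTemp1
               can hold values from different moments: the shared-data bug. */
            iTemp0 = iTemperatures[0];
            iTemp1 = iTemperatures[1];
            if (iTemp0 != iTemp1)
                ;  /* set off the alarm */
        }
    }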

The main code monitors two temperatures, which are always supposed to be equal; if they differ, it raises an alarm.
The main code stays in an infinite loop, and the ISR vReadTemperatures runs periodically, triggered by a hardware interrupt whenever one or both of the temperatures change.
Suppose both temperatures have been 73 degrees for a while and the processor has just finished executing the line iTemp0 = iTemperatures[0]; so that iTemp0 = 73.
Now suppose an interrupt occurs and both temperatures change to 74 degrees. The ISR writes the new values into the elements of the iTemperatures array.

When the ISR ends, the processor continues with the line iTemp1 = iTemperatures[1]; which sets iTemp1 = 74. When the processor then compares iTemp0 and iTemp1, they differ, and the system raises the alarm even though the actual temperatures are the same.
This situation is called the shared-data problem.

7. What is the critical section problem? (Oct. 2011)
Ans:
Atomic and critical section:
A part of a program is said to be atomic if it cannot be interrupted. The shared-data problem arises precisely because the task code's use of the shared data is not atomic.
If we disable interrupts around the lines of task code that use the shared data, we make that collection of lines atomic and thereby solve the problem.
Strictly, a part of the code need not be atomic in the sense that it cannot be interrupted at all; it only needs to be protected from interruption by anything that might mess up the data it is using.
A set of instructions that must be atomic for the system to work properly is called a critical section.
The critical section problem is a shared-data problem: it arises when ISRs need to communicate with the rest of the code.
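A minimal C sketch of this fix; disable() and enable() stand in for whatever interrupt-masking primitives the compiler or RTOS provides (assumed names, not a specific API):

    extern volatile int iTemperatures[2];   /* shared with the ISR */
    void disable(void);                     /* assumed: mask interrupts   */
    void enable(void);                      /* assumed: unmask interrupts */

    void vTaskCode(void)
    {
        int iTemp0, iTemp1;

        disable();                      /* begin critical section          */
        iTemp0 = iTemperatures[0];      /* these two reads are now atomic  */
        iTemp1 = iTemperatures[1];      /* with respect to the ISR         */
        enable();                       /* end critical section            */

        if (iTemp0 != iTemp1)
            ;  /* set off the alarm */
    }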

Part B Questions and Solutions


1. Define process. Explain the operations on a process. (Oct. 2011)
Ans:
Process:
The OS executes a variety of programs, which are referred to as jobs, tasks, etc.
A process is a program in execution, and process execution must progress in a sequential fashion.
The structure of a process in memory is sketched below.
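The typical layout, with high addresses at the top:

    +-----------+  high address
    |   stack   |  (grows downward)
    |           |
    |   heap    |  (grows upward)
    |   data    |
    |   text    |
    +-----------+  low address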

A process is more than the program code, which is sometimes known as the text section.
It includes the current activity, as represented by
the value of the program counter (PC) and the
contents of the processor's registers.
A process generally also includes the process stack,
which contains temporary data (such as function
parameters, return addresses, and local variables),
and a data section, which contains global variables.

A process may also include a heap, which is memory that is dynamically allocated during process run time.
A process is an active entity, whereas a program is a passive entity, such as a file containing the executable code.
Operations on a process:
The processes in most systems can execute concurrently, and they may be created and deleted dynamically. The two mechanisms provided by the OS for this are:
1. Process creation
2. Process termination

Process creation:
A process may create several new processes using a create-process system call during its execution. The creating process is called the parent process, and the new processes are called the children of that process, forming a tree of processes.
Most OSs identify processes by a unique process identifier (pid), which is typically an integer.
In terms of resource sharing, three cases are possible: parent and children share all resources; children share a subset of the parent's resources; or parent and child share no resources.
When a process creates a new process, two possibilities exist in terms of execution:
Parent and children execute concurrently.
Parent waits until the children terminate.

There are also two possibilities in terms of the address space of the new process:
The child is a duplicate of the parent.
The child has a new program loaded into it.
A UNIX example:

fork() is a system call that creates a new process. It allows the parent process to communicate with the child process. Both processes continue execution after the fork() call, with one difference: the return code of fork() is zero in the child process, whereas the (nonzero) pid of the child is returned to the parent.
exec() is typically called after fork() to replace the process's memory image with a new program. The parent can then create more children, or, if it has nothing else to do while the child runs, it can issue a wait() system call to move itself off the ready queue until the child terminates.
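A short C sketch of this fork()/exec()/wait() sequence (the choice of /bin/ls as the program to exec is just for illustration):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();                 /* create a child process      */

        if (pid < 0) {                      /* fork failed                 */
            fprintf(stderr, "fork failed\n");
            return 1;
        }
        else if (pid == 0) {                /* child: fork() returned 0    */
            execlp("/bin/ls", "ls", (char *)NULL);  /* load a new program  */
        }
        else {                              /* parent: got the child's pid */
            wait(NULL);                     /* block until the child exits */
            printf("Child complete\n");
        }
        return 0;
    }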

Process termination:
A process terminates when it executes its last statement and asks the OS to delete it using the exit system call.
The child may return output data to its parent, which collects it via the wait system call.
The process's resources are deallocated by the OS.
A parent may terminate the execution of its children via an abort system call, for a variety of reasons, such as:
The child has exceeded its allocated resources.
The task assigned to the child is no longer required.
The parent is exiting, and the operating system does not allow a child to continue if its parent terminates.
In UNIX, a process terminates itself with the system call exit().

2. Explain various memory management schemes. (Nov. 2012)
Ans:
Memory allocation schemes:
The main memory must accommodate both the OS and the various user processes, so main memory needs to be allocated in the most efficient way possible.
There are two types of memory allocation scheme:
1. Contiguous allocation
2. Non-contiguous allocation

Contiguous allocation:
Main memory is usually divided into two partitions:
1. The resident OS, usually held in low memory together with the interrupt vector.
2. User processes, held in high memory.

Multiple-partition allocation comes in two flavors: fixed-sized partitions and variable-sized partitions.
The simplest method for allocating memory is to divide memory into several fixed-sized partitions. When a partition is free, a process is selected from the input queue and is loaded into the free partition. When the process terminates, the partition becomes available for another process.
In the variable-sized partition scheme, the OS keeps a table indicating which parts of memory are available and which are occupied.

Initially, all memory is available for user processes and is considered one large block of available memory, called a hole. When a process arrives, it is allocated memory from a hole large enough to accommodate it. Over time, the available memory comprises a set of holes of various sizes scattered throughout memory.

The OS maintains information about the allocated partitions and the free partitions (holes).
This gives rise to the dynamic storage-allocation problem: how to satisfy a request of size n from a list of free holes.
There are many solutions to this problem; the first-fit, best-fit, and worst-fit strategies are the ones commonly used to select a free hole from the set of available holes.
1. First-fit: Allocate the first hole that is big enough.

2. Best-fit: Allocate the smallest hole that is big enough; the entire list must be searched, unless it is ordered by size. This produces the smallest leftover hole.
3. Worst-fit: Allocate the largest hole; the entire list must also be searched. This produces the largest leftover hole.
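As a concrete illustration of these strategies, here is a minimal first-fit sketch in C over a free-hole list (the hole_t structure and its fields are illustrative assumptions, not from the source):

    #include <stddef.h>

    typedef struct hole {
        size_t start;          /* start address of the free block */
        size_t size;           /* size of the free block          */
        struct hole *next;
    } hole_t;

    /* Return the first hole large enough for a request of size n, or NULL.
       Best-fit would instead track the smallest adequate hole seen during
       a full scan; worst-fit would track the largest. */
    hole_t *first_fit(hole_t *free_list, size_t n)
    {
        for (hole_t *h = free_list; h != NULL; h = h->next)
            if (h->size >= n)
                return h;      /* allocate here; shrink or remove the hole */
        return NULL;           /* no hole big enough: the request waits    */
    }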
This type of allocation causes external fragmentation: because of the gaps between allocated regions of contiguous memory, enough total memory space may exist to satisfy a request even though no single contiguous block does.

Non-contiguous allocation:
Paging:
A memory-management scheme that permits the physical address space of a process to be noncontiguous; a process is allocated physical memory wherever it is available.
Paging avoids external fragmentation and the need for compaction.
Physical memory is divided into fixed-sized blocks called frames (the size is a power of 2, between 512 bytes and 8192 bytes).
Logical memory is divided into blocks of the same size, called pages.

The OS keeps track of all free frames. To run a program of size n pages, it must find n free frames and load the program, and it sets up a page table to translate logical addresses to physical addresses.
Paging causes internal fragmentation: allocated memory may be slightly larger than the requested memory, and this size difference is memory internal to a partition that goes unused.

Hardware support for paging:
Every logical address generated by the CPU is divided into:
A page number (p), used as an index into the page table, which contains the base address of each page in physical memory.
A page offset (d), combined with that base address to define the physical memory address that is sent to the memory unit.

3. What is paging? How is paging implemented with a TLB? (Oct. 2011)
Ans: Paging:
A non-contiguous memory allocation scheme: a memory-management scheme that permits the physical address space of a process to be noncontiguous, with a process allocated physical memory wherever it is available.
Paging avoids external fragmentation and the need for compaction.
Physical memory is divided into fixed-sized blocks called frames (the size is a power of 2, between 512 bytes and 8192 bytes).

Logical memory is divided into blocks of the same size, called pages.
The OS keeps track of all free frames; to run a program of size n pages, it must find n free frames and load the program, and it sets up a page table to translate logical addresses to physical addresses.
Paging causes internal fragmentation.

Hardware support for paging:
The address generated by the CPU is divided into:
1. A page number (p), used as an index into the page table, which contains the base address of each page in physical memory.
2. A page offset (d), combined with that base address to define the physical memory address that is sent to the memory unit.

The paging model:
If the size of the logical address space is 2^m and the page size is 2^n addressing units (bytes or words), then the high-order m - n bits of a logical address designate the page number p, and the n low-order bits designate the page offset d.
The logical address is thus laid out as follows:

    | page number p (m - n bits) | page offset d (n bits) |

where p is an index into the page table and d is the displacement within the page.
Implementation of the page table:
The page table must be implemented so that page access remains fast even with a large number of pages. The hardware implementation of the page table can be done in several ways.

The simplest way is to implement the page table as a set of dedicated registers. For example, if an address consists of 16 bits and the page size is 8 KB, the page table consists of eight entries kept in fast registers. Using registers for the page table is satisfactory only if the page table is reasonably small (for example, 256 entries); it is not suitable for a large page table, because a large bank of fast registers in the CPU is not feasible.
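A worked C illustration of the page number/offset split, using the 16-bit address and 8 KB (2^13-byte) page size mentioned above (the concrete address value is made up):

    #include <stdio.h>
    #include <stdint.h>

    #define OFFSET_BITS 13   /* n = 13, since the page size is 2^13 = 8 KB */

    int main(void)
    {
        uint16_t logical = 0x5ABC;                         /* example address     */
        uint16_t p = logical >> OFFSET_BITS;               /* high m-n = 3 bits   */
        uint16_t d = logical & ((1u << OFFSET_BITS) - 1);  /* low n = 13 bits     */
        printf("page number = %u, offset = %u\n", p, d);   /* prints 2 and 6844   */
        return 0;
    }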

Another method is to keep the page table in main memory. A page-table base register (PTBR) points to the page table, and a page-table length register (PTLR) indicates the size of the page table.
In this scheme, every data/instruction access requires two memory accesses: one for the page-table entry and one for the data/instruction itself. Memory access is thus slowed by a factor of 2.
The two-memory-access problem can be solved by using a special fast-lookup hardware cache called associative memory, or translation look-aside buffers (TLBs).

The TLB is associative, high-speed memory. Each entry in the TLB consists of two parts: a key (or tag) and a value. When the associative memory is presented with an item, the item is compared with all keys simultaneously; if the item is found, the corresponding value field is returned.
The search is fast, but the hardware is expensive. Typically, the number of entries in a TLB is small, often between 64 and 1,024.

Paging hardware with a TLB works as follows. The TLB contains only a few of the page-table entries. When a logical address is generated by the CPU, its page number is presented to the TLB. If the page number is found, its frame number is immediately available and is used to access memory. If the page number is not in the TLB (a TLB miss), a memory reference to the page table must be made.
If the TLB is already full of entries, the OS must select one entry for replacement. Replacement policies range from least recently used (LRU) to random.
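A toy software model of the lookup path just described (the entry count and fields are illustrative; a real TLB performs the key comparison in parallel hardware):

    #include <stdint.h>
    #include <stdbool.h>

    #define TLB_ENTRIES 64

    typedef struct {
        bool     valid;
        uint32_t page;    /* key (tag) */
        uint32_t frame;   /* value     */
    } tlb_entry_t;

    static tlb_entry_t tlb[TLB_ENTRIES];

    /* On a hit, the frame number is available immediately; on a miss the
       page table in main memory must be consulted and the TLB updated
       (possibly evicting an entry, e.g. by LRU or at random). */
    bool tlb_lookup(uint32_t page, uint32_t *frame)
    {
        for (int i = 0; i < TLB_ENTRIES; i++) {
            if (tlb[i].valid && tlb[i].page == page) {
                *frame = tlb[i].frame;   /* TLB hit  */
                return true;
            }
        }
        return false;                    /* TLB miss */
    }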

4. Consider the following set of processes, with CPU bursts given in milliseconds.
a) Draw the Gantt chart that illustrates the execution of these processes using the following scheduling algorithms: FCFS, SJF, and non-preemptive priority (a smaller priority number implies a higher priority).
b) What is the turnaround time and waiting time of each process for the above algorithms?
(Oct. 2011)

Ans:
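The process table from the original question is not reproduced in this copy, so the method is illustrated here with an assumed set of processes, all arriving at time 0:

    Process   CPU burst (ms)   Priority
    P1        8                2
    P2        4                1
    P3        9                4
    P4        5                3

a) Gantt charts:

    FCFS:     | P1 0-8 | P2 8-12 | P3 12-21 | P4 21-26 |
    SJF:      | P2 0-4 | P4 4-9  | P1 9-17  | P3 17-26 |
    Priority: | P2 0-4 | P1 4-12 | P4 12-17 | P3 17-26 |

b) With all arrivals at time 0, turnaround time = completion time, and waiting time = turnaround time - burst time:

    FCFS:     turnaround P1=8,  P2=12, P3=21, P4=26;  waiting P1=0, P2=8, P3=12, P4=21
    SJF:      turnaround P1=17, P2=4,  P3=26, P4=9;   waiting P1=9, P2=0, P3=17, P4=4
    Priority: turnaround P1=12, P2=4,  P3=26, P4=17;  waiting P1=4, P2=0, P3=17, P4=12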

5. a) Explain the performance measures in RTOS. (Nov. 2012)
Ans: (Refer to C. M. Krishna and Kang G. Shin, Real-Time Systems, Chapter 2, pages 12 to 18.)
b) Briefly discuss atomic and critical section. (Nov. 2012)
Ans:
Atomic and critical section:
A part of a program is said to be atomic if it cannot be interrupted. The shared-data problem arises precisely because the task code's use of the shared data is not atomic.
If we disable interrupts around the lines of task code that use the shared data, we make that collection of lines atomic and thereby solve the problem.
Strictly, a part of the code need not be atomic in the sense that it cannot be interrupted at all; it only needs to be protected from interruption by anything that might mess up the data it is using.
A set of instructions that must be atomic for the system to work properly is called a critical section.

6. Describe the steps followed after an interrupt is caused by an I/O call. (Nov. 2012)
Ans:
The interrupt-driven I/O cycle proceeds through the following steps.
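1. The device driver initiates the I/O operation by loading the appropriate registers of the device controller.
2. The device controller carries out the requested I/O and, on completion (or error), signals the interrupt-request line.
3. The CPU senses the interrupt-request line after each instruction, saves its state (at least the return address), and jumps through the interrupt vector to the interrupt-handler routine.
4. The interrupt handler determines the cause of the interrupt, transfers data or status from the controller, and performs the necessary processing.
5. The handler executes a return-from-interrupt instruction, the saved state is restored, and the CPU resumes the interrupted computation.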

S-ar putea să vă placă și