
RVS TECHNICAL CAMPUS - COIMBATORE

COIMBATORE 641 402


INTERNAL TEST III, SEP 2016
EC6703 EMBEDDED AND REAL TIME SYSTEMS
Degree/Dept/Sem: BE/ECE/VII
Date & Session: 30.09.2016 & FN
Duration: 1.30 Hrs
Max Marks: 50

Part A
(Answer ALL the questions)
5 x 2 = 10 Marks

1. Why distributed systems?
Building an embedded system in a multiprocessor configuration, with the processors talking over a network, is more complicated than using a single microprocessor. Even so, distributed systems are used in the design of accelerator systems.


2. Mention the goals of the design process.
* Time to market
* Design cost
* Quality

3. Define earliest deadline first (EDF) scheduling.
Earliest deadline first (EDF), or least time to go, is a dynamic scheduling algorithm used in real-time operating systems to place processes in a priority queue. Whenever a scheduling event occurs (a task finishes, a new task is released, etc.), the queue is searched for the process closest to its deadline.
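As a quick illustration of this definition, here is a minimal C sketch of the EDF selection step, assuming a simple singly linked ready queue; the task structure and field names are invented for the example, not taken from any particular RTOS.

```c
#include <stddef.h>

/* Simplified task descriptor: only what EDF selection needs. */
struct task {
    const char   *name;
    unsigned long deadline;   /* absolute deadline, in timer ticks */
    struct task  *next;       /* singly linked ready queue */
};

/* On every scheduling event, pick the ready task closest to its deadline. */
struct task *edf_select(struct task *ready_queue)
{
    struct task *best = ready_queue;
    for (struct task *t = ready_queue; t != NULL; t = t->next)
        if (t->deadline < best->deadline)
            best = t;
    return best;   /* NULL if the queue is empty */
}
```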
4. What are the states of a process?
The states of a process are:
Running
Ready
Blocked
Suspended

5. Define scheduling.
An RTOS has an advanced algorithm for scheduling. Scheduler flexibility enables a wider, computer-system orchestration of process priorities, but a real-time OS is more frequently dedicated to a narrow set of applications. Key factors in a real-time OS are minimal interrupt latency and minimal thread switching latency; a real-time OS is valued more for how quickly or how predictably it can respond than for the amount of work it can perform in a given period of time.

Part B

(2 x 16 marks, either-or type)
(1 x 8 + 2 x 16 = 40 Marks)

6. Explain in detail about real-time operating systems. [8]


A real-time operating system (RTOS) is an operating system that guarantees a certain
capability within a specified time constraint. For example, an operating system might be designed to
ensure that a certain object was available for a robot on an assembly line. In what is usually called a
"hard" real-time operating system, if the calculation could not be performed for making the object
available at the designated time, the operating system would terminate with a failure. In a "soft" real-time
operating system, the assembly line would continue to function but the production output might be lower
as objects failed to appear at their designated time, causing the robot to be temporarily unproductive.
Some real-time operating systems are created for a special application and others are more general purpose. Some existing general-purpose operating systems claim to be real-time operating systems. To
some extent, almost any general purpose operating system such as Microsoft's Windows 2000 or IBM's
OS/390 can be evaluated for its real-time operating system qualities. That is, even if an operating system
doesn't qualify, it may have characteristics that enable it to be considered as a solution to a particular
real-time application problem.
In general, real-time operating systems are said to require:
Multitasking
Process threads that can be prioritized
A sufficient number of interrupt levels
Real-time operating systems are often required in small embedded operating systems that are packaged
as part of micro devices. Some kernels can be considered to meet the requirements of a real-time
operating system. However, since other components, such as device drivers, are also usually needed for a
particular solution, a real-time operating system is usually larger than just the kernel.
Scheduling:
In typical designs, a task has three states:
Running (executing on the CPU);
Ready (ready to be executed);
Blocked (waiting for an event, I/O for example).

Most tasks are blocked or ready most of the time because generally only one task can run at a time per
CPU. The number of items in the ready queue can vary greatly, depending on the number of tasks the
system needs to perform and the type of scheduler that the system uses. On simpler non-preemptive but
still multitasking systems, a task has to give up its time on the CPU to other tasks, which can cause the
ready queue to have a greater number of overall tasks in the ready to be executed state (resource
starvation).
Usually the data structure of the ready list in the scheduler is designed to minimize the worst-case
length of time spent in the scheduler's critical section, during which preemption is inhibited, and, in
some cases, all interrupts are disabled. But the choice of data structure depends also on the maximum
number of tasks that can be on the ready list.

If there are never more than a few tasks on the ready list, then a doubly linked list of ready tasks is likely
optimal. If the ready list usually contains only a few tasks but occasionally contains more, then the list
should be sorted by priority. That way, finding the highest priority task to run does not require iterating
through the entire list. Inserting a task then requires walking the ready list until reaching either the end of
the list, or a task of lower priority than that of the task being inserted.
Care must be taken not to inhibit preemption during this search. Longer critical sections should be
divided into small pieces. If an interrupt occurs that makes a high priority task ready during the insertion
of a low priority task, that high priority task can be inserted and run immediately before the low priority
task is inserted.
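The insertion rule just described can be sketched in C. This is a minimal illustration assuming a singly linked ready list kept sorted in descending priority order; the tcb structure and names are invented for the example.

```c
#include <stddef.h>

struct tcb {                 /* task control block (illustrative) */
    int         priority;    /* higher number = higher priority   */
    struct tcb *next;
};

/* Insert into a ready list sorted by descending priority, so the
 * head is always the highest-priority ready task. */
void ready_list_insert(struct tcb **head, struct tcb *task)
{
    struct tcb **pp = head;
    /* Walk until the end of the list or a task of lower priority. */
    while (*pp != NULL && (*pp)->priority >= task->priority)
        pp = &(*pp)->next;
    task->next = *pp;
    *pp = task;
}
```

With this ordering, the dispatcher finds the highest-priority task in O(1) at the head of the list, at the cost of an O(n) insertion, which is why the text warns against inhibiting preemption for the whole search.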
The critical response time, sometimes called the flyback time, is the time it takes to queue a new ready
task and restore the state of the highest priority task to running. In a well-designed RTOS, readying a
new task will take 3 to 20 instructions per ready-queue entry, and restoration of the highest-priority ready
task will take 5 to 30 instructions.
In more advanced systems, real-time tasks share computing resources with many non-real-time tasks,
and the ready list can be arbitrarily long. In such systems, a scheduler ready list implemented as a linked
list would be inadequate.
Intertask communication and resource sharing
A multitasking operating system like Unix is poor at real-time tasks. The scheduler gives the highest
priority to jobs with the lowest demand on the computer, so there is no way to ensure that a time-critical
job will have access to enough resources. Multitasking systems must manage sharing data and hardware
resources among multiple tasks. It is usually unsafe for two tasks to access the same specific data or
hardware resource simultaneously. There are three common approaches to resolve this problem:
Temporarily masking/disabling interrupts
Binary semaphores
Message passing
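As one concrete illustration of the binary-semaphore approach, the following sketch uses the POSIX semaphore and thread APIs (a desktop stand-in; an RTOS would provide its own equivalent calls) to serialize access to a shared counter.

```c
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t lock;          /* binary semaphore guarding shared_count */
static int   shared_count;  /* resource shared between tasks */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&lock);    /* enter critical section */
        shared_count++;     /* safe: only one task here at a time */
        sem_post(&lock);    /* leave critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&lock, 0, 1);  /* initial value 1 => binary semaphore */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("count = %d\n", shared_count);  /* expected 200000 */
    return 0;
}
```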
Interrupt handlers and the scheduler
Since an interrupt handler blocks the highest priority task from running, and since real time operating
systems are designed to keep thread latency to a minimum, interrupt handlers are typically kept as short
as possible. The interrupt handler defers all interaction with the hardware if possible; typically all that is
necessary is to acknowledge or disable the interrupt (so that it won't occur again when the interrupt
handler returns) and notify a task that work needs to be done. This can be done by unblocking a driver
task through releasing a semaphore, setting a flag or sending a message. A scheduler often provides the
ability to unblock a task from interrupt handler context.
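The short-handler pattern might look like the following sketch. The register and function names here are hypothetical placeholders, since the real acknowledge register and the post-from-ISR call depend entirely on the hardware and the RTOS in use.

```c
/* Illustrative only: UART_IRQ_ACK and the semaphore calls stand in
 * for a real hardware register and a real RTOS API. */
extern volatile unsigned *UART_IRQ_ACK;     /* hypothetical ack register */
extern void sem_post_from_isr(void *sem);   /* hypothetical RTOS call    */
extern void *uart_driver_sem;               /* driver task waits on this */

void uart_isr(void)
{
    *UART_IRQ_ACK = 1;                  /* acknowledge/disable the source
                                           so it does not refire on return */
    sem_post_from_isr(uart_driver_sem); /* unblock the driver task; the
                                           real work happens in the task,
                                           keeping ISR latency minimal */
}
```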
7. [a] Discuss the power management and optimization for processes. [16] [or]
The RTOS and system architecture can use static and dynamic power management mechanisms to help manage the system's power consumption.

A power management policy [Ben00] is a strategy for determining when to perform certain
power management operations. A power management policy in general examines the state of the system
to determine when to take actions.
However, the overall strategy embodied in the policy should be designed based on the
characteristics of the static and dynamic power management mechanisms. Going into a low-power mode
takes time; generally, the more that is shut off, the longer the delay incurred during restart. Because
power-down and power-up are not free, modes should be changed carefully. Determining when to switch
into and out of a power-up mode requires an analysis of the overall system activity.
Avoiding a power-down mode can cost unnecessary power.
Powering down too soon can cause severe performance penalties.
Re-entering run mode typically costs a considerable amount of time. A straightforward method is to
power up the system when a request is received. This works as long as the delay in handling the request
is acceptable. A more sophisticated technique is predictive shutdown. The goal is to predict when the
next request will be made and to start the system just before that time, saving the requestor the start-up
time. In general, predictive shutdown techniques are probabilistic; they make guesses about activity
patterns based on a probabilistic model of expected behavior. Because they rely on statistics, they may
not always correctly guess the time of the next activity. This can cause two types of problems:
The requestor may have to wait for an activity period. In the worst case, the requestor may not make a
deadline due to the delay incurred by system start-up.
The system may restart itself when no activity is imminent. As a result, the system will waste power.
Clearly, the choice of a good probabilistic model of service requests is important. The policy mechanism should also not be too complex, since the power it consumes to make decisions is part of the total system power budget. Several predictive techniques are possible. A very simple technique is to use fixed times. For instance, if the system does not receive inputs during an interval of length T_on, it shuts down; a powered-down system waits for a period T_off before returning to the power-on mode. The choice of T_off and T_on must be determined by experimentation. Srivastava and Eustace [Sri94] found one useful rule for graphics terminals. They plotted the observed idle time (T_off) of a graphics terminal versus the immediately preceding active time (T_on). The result was an L-shaped distribution as illustrated in Figure 6.17. In this distribution, the idle period after a long active period is usually very short, and the length of the idle period after a short active period is uniformly distributed. Based on this distribution, they proposed a shutdown threshold that depended on the length of the last active period; they shut down when the active period length was below a threshold, putting the system in the vertical portion of the L distribution.
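The fixed-time technique can be stated in a few lines of C. This sketch assumes a 1 ms periodic tick and a platform-supplied enter_low_power_mode() hook; the threshold constant plays the role of T_on above and would be tuned by experimentation.

```c
/* Fixed-timeout power-down policy (sketch). Assumes a 1 ms system tick. */
#define T_ON_MS 5000   /* idle time before shutting down (by experiment) */

static unsigned idle_ms;                 /* ticks since the last input   */
extern void enter_low_power_mode(void);  /* hypothetical platform hook   */

void on_input_event(void)                /* call from the input ISR/driver */
{
    idle_ms = 0;
}

void power_policy_tick(void)             /* call once per millisecond */
{
    if (++idle_ms >= T_ON_MS) {
        idle_ms = 0;
        enter_low_power_mode();          /* wake-up is handled by hardware */
    }
}
```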
The Advanced Configuration and Power Interface (ACPI) is an open industry standard for power management services. It is designed to be compatible with a wide variety of OSs. It was targeted initially to PCs. The role of ACPI in the system is illustrated in the figure.

ACPI provides some basic power management facilities and abstracts the hardware layer. The OS has its own power management module that determines the policy, and the OS then uses ACPI to send the required controls to the hardware and to observe the hardware's state as input to the power manager.
ACPI supports the following five basic global power states:
G3, the mechanical off state, in which the system consumes no power.
G2, the soft off state, which requires a full OS reboot to restore the machine to working condition. This state has four substates:
S1, a low wake-up latency state with no loss of system context;
S2, a low wake-up latency state with a loss of CPU and system cache state;
S3, a low wake-up latency state in which all system state except for main memory is lost; and
S4, the lowest-power sleeping state, in which all devices are turned off.
G1, the sleeping state, in which the system appears to be off and the time required to return to working condition is inversely proportional to power consumption.
G0, the working state, in which the system is fully usable.
The legacy state, in which the system does not comply with ACPI.
The power manager typically includes an observer, which receives messages through the ACPI interface that describe the system behavior. It also includes a decision module that determines power management actions based on those observations.
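Conceptually, the observer/decision split can be sketched as below. The event and action names are invented for illustration; a real power manager would receive its observations through the ACPI interface rather than these stubs.

```c
/* Sketch of a power manager's decision module (names are illustrative). */
enum pm_event  { PM_EVT_IDLE, PM_EVT_ACTIVITY, PM_EVT_BATTERY_LOW };
enum pm_action { PM_STAY, PM_SLEEP_S1, PM_SLEEP_S3 };

/* Pure policy: maps observed events to power management actions. */
enum pm_action pm_decide(enum pm_event evt, unsigned idle_ms)
{
    switch (evt) {
    case PM_EVT_BATTERY_LOW: return PM_SLEEP_S3;  /* aggressive saving */
    case PM_EVT_IDLE:        return idle_ms > 5000 ? PM_SLEEP_S1 : PM_STAY;
    default:                 return PM_STAY;      /* activity: keep working */
    }
}
```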

[b] (i) What is meant by context switching? [8]


Context switching
As a task executes it utilizes the processor / microcontroller registers and accesses RAM and ROM just as any other
program. These resources together (the processor registers, stack, etc.) comprise the task execution context.
A task is a sequential piece of code - it does not know when it is going to get suspended (swapped out or switched out) or
resumed (swapped in or switched in) by the kernel and does not even know when this has happened. Consider the
example of a task being suspended immediately before executing an instruction that sums the values contained within
two processor registers. While the task is suspended other tasks will execute and may modify the processor register
values. Upon resumption the task will not know that the processor registers have been altered - if it used the modified
values the summation would result in an incorrect value.
To prevent this type of error it is essential that upon resumption a task has a context identical to that immediately prior to
its suspension. The operating system kernel is responsible for ensuring this is the case - and does so by saving the
context of a task as it is suspended. When the task is resumed its saved context is restored by the operating system kernel
prior to its execution. The process of saving the context of a task being suspended and restoring the context of a task
being resumed is called context switching.
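What a saved context contains can be made concrete with a sketch. The register set below is an invented minimal one; real RTOS ports define this structure per CPU architecture and perform the actual save and restore in assembly.

```c
#include <stdint.h>

/* Illustrative execution context for an imaginary 4-register CPU. */
struct context {
    uint32_t r0, r1, r2, r3;  /* general-purpose registers */
    uint32_t sp;              /* stack pointer             */
    uint32_t pc;              /* program counter           */
    uint32_t status;          /* flags / status register   */
};

struct tcb {
    struct context ctx;       /* saved context while not running */
    /* ... stack area, priority, state, etc. ... */
};

/* Conceptual context switch: capture the outgoing task's CPU state,
 * reload the incoming task's. Shown as data copies for clarity; on
 * real hardware these are register save/restore instructions. */
void context_switch(struct tcb *from, struct tcb *to,
                    const struct context *cpu_now, struct context *cpu_next)
{
    from->ctx = *cpu_now;     /* suspend: save current CPU state      */
    *cpu_next = to->ctx;      /* resume: restore the saved state      */
}
```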
Multitasking
Most commonly, within some scheduling scheme, one process must be switched out of the CPU so another process can
run. This context switch can be triggered by the process making itself unrunnable, such as by waiting for an I/O or
synchronization operation to complete. On a pre-emptive multitasking system, the scheduler may also switch out
processes which are still runnable. To prevent other processes from being starved of CPU time, preemptive schedulers
often configure a timer interrupt to fire when a process exceeds its time slice. This interrupt ensures that the scheduler
will gain control to perform a context switch.
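The time-slice mechanism just described can be sketched as a tick handler that charges the running task's quantum and requests a reschedule when it expires; the field and flag names are illustrative, not from a specific kernel.

```c
#define QUANTUM_TICKS 10          /* time slice length, in timer ticks */

struct task {
    int ticks_left;               /* remaining quantum for this slice */
    /* ... */
};

static struct task *current;      /* task now running on the CPU      */
static int need_resched;          /* checked by the scheduler on exit
                                     from the interrupt               */

/* Called from the periodic timer interrupt. */
void timer_tick(void)
{
    if (current && --current->ticks_left <= 0) {
        current->ticks_left = QUANTUM_TICKS;  /* recharge for next turn  */
        need_resched = 1;         /* preempt: context switch on return   */
    }
}
```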
Interrupt handling
Modern architectures are interrupt driven. This means that if the CPU requests data from a disk, for example, it does not
need to busy-wait until the read is over; it can issue the request and continue with some other execution. When the read
is over, the CPU can be interrupted and presented with the read. For interrupts, a program called an interrupt handler is
installed, and it is the interrupt handler that handles the interrupt from the disk.
When an interrupt occurs, the hardware automatically switches a part of the context (at least enough to allow the handler
to return to the interrupted code). The handler may save additional context, depending on details of the particular
hardware and software designs. Often only a minimal part of the context is changed in order to minimize the amount of
time spent handling the interrupt. The kernel does not spawn or schedule a special process to handle interrupts, but
instead the handler executes in the (often partial) context established at the beginning of interrupt handling. Once
interrupt servicing is complete, the context in effect before the interrupt occurred is restored so that the interrupted
process can resume execution in its proper state.
User and kernel mode switching

When a transition between user mode and kernel mode is required in an operating system, a context switch is not
necessary; a mode transition is not by itself a context switch. However, depending on the operating system, a context
switch may also take place at this time.
(ii) What are the types of scheduling? [8]
Scheduling:
In typical designs, a task has three states:

Running (executing on the CPU);

Ready (ready to be executed);

Blocked (waiting for an event, I/O for example).

Most tasks are blocked or ready most of the time because generally only one task can run at a time per CPU. The number
of items in the ready queue can vary greatly, depending on the number of tasks the system needs to perform and the type
of scheduler that the system uses. On simpler non-preemptive but still multitasking systems, a task has to give up its time
on the CPU to other tasks, which can cause the ready queue to have a greater number of overall tasks in the ready to be
executed state (resource starvation).
Usually the data structure of the ready list in the scheduler is designed to minimize the worst-case length of time spent in
the scheduler's critical section, during which preemption is inhibited, and, in some cases, all interrupts are disabled. But
the choice of data structure depends also on the maximum number of tasks that can be on the ready list.
If there are never more than a few tasks on the ready list, then a doubly linked list of ready tasks is likely optimal. If the
ready list usually contains only a few tasks but occasionally contains more, then the list should be sorted by priority. That
way, finding the highest priority task to run does not require iterating through the entire list. Inserting a task then requires
walking the ready list until reaching either the end of the list, or a task of lower priority than that of the task being
inserted.
Care must be taken not to inhibit preemption during this search. Longer critical sections should be divided into small
pieces. If an interrupt occurs that makes a high priority task ready during the insertion of a low priority task, that high
priority task can be inserted and run immediately before the low priority task is inserted.
The critical response time, sometimes called the flyback time, is the time it takes to queue a new ready task and restore
the state of the highest priority task to running. In a well-designed RTOS, readying a new task will take 3 to 20
instructions per ready-queue entry, and restoration of the highest-priority ready task will take 5 to 30 instructions.
In more advanced systems, real-time tasks share computing resources with many non-real-time tasks, and the ready list
can be arbitrarily long. In such systems, a scheduler ready list implemented as a linked list would be inadequate.
8. [a] Write short notes on priority scheduling. [16]

[or]

Priority Based Scheduling

Priority scheduling is a non-preemptive algorithm and one of the most common scheduling algorithms in batch
systems.

Each process is assigned a priority. The process with the highest priority is executed first, and so on. Processes with the same priority are executed on a first come, first served basis. Priority can be decided based on memory requirements, time requirements, or any other resource requirement.

The wait time of each process is as follows (the service start times come from the schedule's Gantt chart, which is not reproduced here):

Process    Wait Time : Service Time - Arrival Time
P0         9 - 0 = 9
P1         6 - 1 = 5
P2         14 - 2 = 12
P3         0 - 0 = 0

Average Wait Time: (9 + 5 + 12 + 0) / 4 = 6.5
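The table's arithmetic can be reproduced with a short simulation. The burst times and priorities below are assumed values consistent with the wait times shown (they are not stated in the text itself), with a larger number meaning higher priority and first come, first served breaking ties.

```c
#include <stdio.h>

/* Assumed process set consistent with the wait-time table above. */
struct proc { const char *name; int arrival, burst, priority; };

int main(void)
{
    struct proc p[] = {            /* higher number = higher priority */
        {"P0", 0, 5, 1}, {"P1", 1, 3, 2}, {"P2", 2, 8, 1}, {"P3", 0, 6, 3},
    };
    int n = 4, done = 0, t = 0;
    int finished[4] = {0};
    double total_wait = 0;

    while (done < n) {
        int best = -1;
        for (int i = 0; i < n; i++)    /* pick highest-priority ready job;  */
            if (!finished[i] && p[i].arrival <= t &&
                (best < 0 || p[i].priority > p[best].priority))
                best = i;              /* ties fall to the earlier arrival  */
        if (best < 0) { t++; continue; }   /* CPU idle: no one is ready     */
        int wait = t - p[best].arrival;    /* service time - arrival time   */
        printf("%s  %d - %d = %d\n", p[best].name, t, p[best].arrival, wait);
        total_wait += wait;
        t += p[best].burst;            /* non-preemptive: run to completion */
        finished[best] = 1;
        done++;
    }
    printf("Average Wait Time: %.1f\n", total_wait / n);  /* prints 6.5 */
    return 0;
}
```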


Shortest Remaining Time
Shortest remaining time (SRT) is the preemptive version of the SJN algorithm. The processor is allocated to the job closest to completion, but it can be preempted by a newer ready job with a shorter time to completion. It is impossible to implement in interactive systems where the required CPU time is not known. It is often used in batch environments where short jobs need to be given preference.

Round Robin Scheduling

Round Robin is a preemptive process scheduling algorithm. Each process is provided a fixed time to execute, called a quantum. Once a process has executed for the given time period, it is preempted and another process executes for its time period. Context switching is used to save the states of preempted processes.

The wait time of each process is as follows (again based on the schedule's Gantt chart, not reproduced here):

Process    Wait Time : Service Time - Arrival Time
P0         (0 - 0) + (12 - 3) = 9
P1         (3 - 1) = 2
P2         (6 - 2) + (14 - 9) + (20 - 17) = 12
P3         (9 - 3) + (17 - 12) = 11

Average Wait Time: (9 + 2 + 12 + 11) / 4 = 8.5
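The round-robin figures can likewise be reproduced in code. This sketch assumes a quantum of 3 and the same burst times as the priority example; both are values inferred from the table rather than stated in the text.

```c
#include <stdio.h>

#define QUANTUM 3   /* assumed time slice, consistent with the table */

struct proc { const char *name; int arrival, remaining, wait, last_ready; };

int main(void)
{
    /* Bursts are assumed values consistent with the wait-time table. */
    struct proc p[] = {
        {"P0", 0, 5, 0, 0}, {"P1", 1, 3, 0, 1},
        {"P2", 2, 8, 0, 2}, {"P3", 3, 6, 0, 3},
    };
    int n = 4, queue[32], head = 0, tail = 0, queued = 0, t = 0;

    while (head < tail || queued < n) {
        /* Enqueue newly arrived processes (ahead of any re-queued one). */
        while (queued < n && p[queued].arrival <= t)
            queue[tail++] = queued++;
        if (head == tail) { t++; continue; }        /* CPU idle           */
        int i = queue[head++];
        p[i].wait += t - p[i].last_ready;           /* time spent waiting */
        int run = p[i].remaining < QUANTUM ? p[i].remaining : QUANTUM;
        t += run;
        p[i].remaining -= run;
        while (queued < n && p[queued].arrival <= t)  /* arrivals mid-run */
            queue[tail++] = queued++;
        if (p[i].remaining > 0) {                   /* preempted: re-queue */
            p[i].last_ready = t;
            queue[tail++] = i;
        }
    }
    double total = 0;
    for (int i = 0; i < n; i++) {
        printf("%s wait = %d\n", p[i].name, p[i].wait);
        total += p[i].wait;
    }
    printf("Average Wait Time: %.1f\n", total / n);  /* prints 8.5 */
    return 0;
}
```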


[b] Write in detail about evaluating operating system performance. [16]
The scheduling policy does not tell us all that we would like to know about the performance of a real system running
processes. Our analysis of scheduling policies makes some simplifying assumptions:
We have assumed that context switches require zero time. Although it is often reasonable to neglect context switch time when it is much smaller than the process execution time, context switching can add significant delay in some cases.
We have assumed that we know the execution time of the processes. In fact, we learned in Section 5.6 that program
time is not a single number, but can be bounded by worst-case and best-case execution times.
We probably determined worst-case or best-case times for the processes in isolation. But, in fact, they interact with each other in the cache. Cache conflicts among processes can drastically degrade process execution time.
The zero-time context switch assumption used in the analysis of RMS is not correct; we must execute instructions to save and restore context, and we must execute additional instructions to implement the scheduling policy. On the other hand, context switching can be implemented efficiently; context switching need not kill performance. The effects of nonzero context switching time must be carefully analyzed in the context of a particular implementation to be sure that the predictions of an ideal scheduling policy are sufficiently accurate. The example below shows that context switching can, in fact, cause a system to miss a deadline.
Scheduling and context switching overhead
Appearing below is a set of processes and their characteristics. First, let us try to find a schedule assuming that context switching time is zero; for the given sequence of data arrivals there is a feasible schedule that meets all the deadlines. (The process table and the schedule appeared as figures in the source.)

Now let us assume that the total time to initiate a process, including context switching and scheduling policy evaluation, is one time unit. It is easy to see that there is no feasible schedule for the above release time sequence, since we require a total of 2 T_P1 + T_P2 = 2 x (1 + 3) + (1 + 3) = 12 time units to execute one period of P2 and two periods of P1.
Each process uses half the cache, so only two processes can be in the cache at the same time. Appearing below is a first schedule that uses a least-recently-used cache replacement policy on a process-by-process basis.

In the first iteration, we must fill up the cache, but even in subsequent iterations, competition among all three processes ensures that a process is never in the cache when it starts to execute. As a result, we must always use the worst-case execution time.
Another schedule in which we have reserved half the cache for P1 is shown below. This leaves P2 and P3 to fight over the other half of the cache.

In this case, P2 and P3 still compete, but P1 is always ready. After the first iteration, we can use the average-case execution time for P1, which gives us some spare CPU time that could be used for additional operations.

Faculty

HoD-ECE
