Part A (5 × 2 = 10 Marks)
Define Earliest Deadline First (EDF) scheduling.
In EDF scheduling, whenever a scheduling event occurs (a task finishes, a new task is released, etc.), the ready queue is searched for the process closest to its deadline, and that process is scheduled next.
What are the states of a process?
The states of a process are:
Running
Ready
Blocked
Suspended
Define Scheduling.
An RTOS has an advanced algorithm for scheduling. Scheduler flexibility enables a wider, computer-system orchestration of process priorities, but a real-time OS is more frequently dedicated to a narrow set of applications. Key factors in a real-time OS are minimal interrupt latency and minimal thread switching latency; a real-time OS is valued more for how quickly or how predictably it can respond than for the amount of work it can perform in a given period of time.
Part B
Most tasks are blocked or ready most of the time because generally only one task can run at a time per
CPU. The number of items in the ready queue can vary greatly, depending on the number of tasks the
system needs to perform and the type of scheduler that the system uses. On simpler non-preemptive but
still multitasking systems, a task has to give up its time on the CPU to other tasks, which can cause the
ready queue to have a greater number of overall tasks in the ready to be executed state (resource
starvation).
Usually the data structure of the ready list in the scheduler is designed to minimize the worst-case
length of time spent in the scheduler's critical section, during which preemption is inhibited, and, in
some cases, all interrupts are disabled. But the choice of data structure depends also on the maximum
number of tasks that can be on the ready list.
If there are never more than a few tasks on the ready list, then a doubly linked list of ready tasks is likely
optimal. If the ready list usually contains only a few tasks but occasionally contains more, then the list
should be sorted by priority. That way, finding the highest priority task to run does not require iterating
through the entire list. Inserting a task then requires walking the ready list until reaching either the end of
the list, or a task of lower priority than that of the task being inserted.
Care must be taken not to inhibit preemption during this search. Longer critical sections should be
divided into small pieces. If an interrupt occurs that makes a high priority task ready during the insertion
of a low priority task, that high priority task can be inserted and run immediately before the low priority
task is inserted.
The critical response time, sometimes called the flyback time, is the time it takes to queue a new ready
task and restore the state of the highest priority task to running. In a well-designed RTOS, readying a
new task will take 3 to 20 instructions per ready-queue entry, and restoration of the highest-priority ready
task will take 5 to 30 instructions.
In more advanced systems, real-time tasks share computing resources with many non-real-time tasks,
and the ready list can be arbitrarily long. In such systems, a scheduler ready list implemented as a linked
list would be inadequate.
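The insertion walk described above (stop at the end of the list or at the first task of lower priority) can be sketched as follows; the tuple representation and the "higher number = higher priority" convention are illustrative assumptions, not from any particular RTOS:

```python
# Sketch of a priority-sorted ready list (higher number = higher priority).
# A real RTOS would typically use an intrusive doubly linked list and keep
# the insertion walk preemptible, as described above.

def insert_ready(ready_list, task):
    """Insert a (priority, name) task so the list stays sorted, highest first."""
    for i, existing in enumerate(ready_list):
        if existing[0] < task[0]:       # first task of lower priority found
            ready_list.insert(i, task)  # insert in front of it
            return
    ready_list.append(task)             # reached the end of the list

ready = []
insert_ready(ready, (2, "logger"))
insert_ready(ready, (5, "motor_ctrl"))
insert_ready(ready, (3, "ui"))
# The highest-priority task is now at the head: (5, "motor_ctrl")
```

Because the list is kept sorted, finding the next task to run is O(1); the cost is paid at insertion time, which is why the text insists that the insertion walk itself must remain preemptible.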
Intertask communication and resource sharing
A multitasking operating system like Unix is poor at real-time tasks. The scheduler gives the highest
priority to jobs with the lowest demand on the computer, so there is no way to ensure that a time-critical
job will have access to enough resources. Multitasking systems must manage sharing data and hardware
resources among multiple tasks. It is usually unsafe for two tasks to access the same specific data or
hardware resource simultaneously. There are three common approaches to resolve this problem:
Temporarily masking/disabling interrupts
Binary semaphores
Message passing
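As a sketch of the second approach, a binary semaphore can serialize access to shared data; this example uses Python's threading primitives rather than a real RTOS API:

```python
import threading

# A binary semaphore (initial count 1) guarding a shared counter: only one
# task may be inside the critical section at a time.
sem = threading.Semaphore(1)
shared = {"count": 0}

def task(n_increments):
    for _ in range(n_increments):
        sem.acquire()            # take the semaphore (blocks if already held)
        shared["count"] += 1     # critical section: access the shared data
        sem.release()            # give the semaphore back

threads = [threading.Thread(target=task, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# shared["count"] == 4000: every increment happened inside the semaphore
```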
Interrupt handlers and the scheduler
Since an interrupt handler blocks the highest priority task from running, and since real time operating
systems are designed to keep thread latency to a minimum, interrupt handlers are typically kept as short
as possible. The interrupt handler defers all interaction with the hardware if possible; typically all that is
necessary is to acknowledge or disable the interrupt (so that it won't occur again when the interrupt
handler returns) and notify a task that work needs to be done. This can be done by unblocking a driver
task through releasing a semaphore, setting a flag or sending a message. A scheduler often provides the
ability to unblock a task from interrupt handler context.
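The short-ISR pattern described above can be sketched as a handler that only records the event and notifies a driver task; the function names and the message mechanism here are illustrative, not a specific RTOS API:

```python
import queue
import threading

work_queue = queue.Queue()    # carries "work to do" notifications to the driver
handled = threading.Event()   # set once the driver task has processed the work

def interrupt_handler(device_id):
    # Keep the handler short: acknowledge the device and notify the driver
    # task by sending a message; all real work is deferred.
    work_queue.put(device_id)

def driver_task():
    device_id = work_queue.get()  # blocks until the handler sends a message
    # ... interact with the hardware here, outside interrupt context ...
    handled.set()

t = threading.Thread(target=driver_task)
t.start()
interrupt_handler(device_id=7)    # simulate the interrupt firing
t.join()                          # the driver task has now handled the work
```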
7. [a] Discuss power management and optimization for processes. [16] [or]
The RTOS and system architecture can use static and dynamic power management mechanisms to help manage the system's power consumption.
A power management policy [Ben00] is a strategy for determining when to perform certain
power management operations. A power management policy in general examines the state of the system
to determine when to take actions.
However, the overall strategy embodied in the policy should be designed based on the
characteristics of the static and dynamic power management mechanisms. Going into a low-power mode
takes time; generally, the more that is shut off, the longer the delay incurred during restart. Because
power-down and power-up are not free, modes should be changed carefully. Determining when to switch
into and out of a power-up mode requires an analysis of the overall system activity.
Avoiding a power-down mode can cost unnecessary power.
Powering down too soon can cause severe performance penalties.
Re-entering run mode typically costs a considerable amount of time. A straightforward method is to
power up the system when a request is received. This works as long as the delay in handling the request
is acceptable. A more sophisticated technique is predictive shutdown. The goal is to predict when the
next request will be made and to start the system just before that time, saving the requestor the start-up
time. In general, predictive shutdown techniques are probabilistic; they make guesses about activity
patterns based on a probabilistic model of expected behavior. Because they rely on statistics, they may
not always correctly guess the time of the next activity. This can cause two types of problems:
The requestor may have to wait for an activity period. In the worst case, the requestor may not make a
deadline due to the delay incurred by system start-up.
The system may restart itself when no activity is imminent. As a result, the system will waste power.
Clearly, the choice of a good probabilistic model of service requests is important. The policy mechanism should also not be too complex, since the power it consumes to make decisions is part of the total system power budget. Several predictive techniques are possible. A very simple technique is to use fixed times. For instance, if the system does not receive inputs during an interval of length Toff, it shuts down; a powered-down system waits for a period Ton before returning to the power-on mode. The choice of Toff and Ton must be determined by
experimentation. Srivastava and Eustace [Sri94] found one useful rule for graphics terminals. They plotted the observed idle time (Toff) of a graphics terminal versus the immediately preceding active time (Ton). The result was an L-shaped distribution as illustrated in Figure 6.17. In this distribution, the idle period after a long active period is usually very short, and the length of the idle period after a short active period is uniformly distributed. Based on this distribution, they proposed a shutdown threshold that depended on the length of the last active period: they shut down when the active period length was below a threshold, putting the system in the vertical portion of the L distribution.
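The fixed-timeout policy described above (shut down after an idle interval of Toff, stay down at least Ton before powering back up) can be sketched as a small state machine; the timeout values and the event model are illustrative assumptions:

```python
# Fixed-timeout policy: shut down after T_OFF seconds of inactivity, and
# stay powered down at least T_ON seconds before returning to the on mode.
T_OFF = 5.0   # idle interval before shutdown (illustrative value)
T_ON = 2.0    # minimum powered-down period (illustrative value)

class PowerPolicy:
    def __init__(self):
        self.state = "on"
        self.state_entered = 0.0   # time at which the current state began

    def tick(self, now, input_seen):
        if self.state == "on":
            if input_seen:
                self.state_entered = now          # activity resets the idle timer
            elif now - self.state_entered >= T_OFF:
                self.state = "off"                # idle too long: power down
                self.state_entered = now
        elif input_seen and now - self.state_entered >= T_ON:
            self.state = "on"                     # request after T_ON: power up
            self.state_entered = now

p = PowerPolicy()
p.tick(6.0, input_seen=False)   # 6 s idle >= T_OFF, so the policy powers down
p.tick(9.0, input_seen=True)    # 3 s down >= T_ON, so a request powers it up
# p.state == "on"
```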
The Advanced Configuration and Power Interface (ACPI) is an open industry standard for power management services. It is designed to be compatible with a wide variety of OSs. It was targeted initially to PCs. The role of ACPI in the system is illustrated in Figure.
ACPI provides some basic power management facilities and abstracts the hardware layer. The OS has its own power management module that determines the policy, and the OS then uses ACPI to send the required controls to the hardware and to observe the hardware's state as input to the power manager.
ACPI supports the following five basic global power states:
G3, the mechanical-off state, in which the system consumes no power;
G2, the soft-off state, which requires a full OS reboot to restore the machine to working condition;
G1, the sleeping state, in which the system appears to be off; for example, S1 is a low wake-up latency state with no loss of system context, and S2 is a low wake-up latency state with a loss of CPU and system cache state;
G0, the working state, in which the system is fully usable;
the legacy state, in which the system does not comply with ACPI.
The power manager typically includes an observer, which receives messages through the ACPI interface that describe the system behavior. It also includes a decision module that determines power management actions based on those observations.
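The observer/decision split described above can be sketched as follows; the state names mirror ACPI's global states, but the idle thresholds and the decision rule itself are illustrative assumptions, since ACPI standardizes the states rather than the policy:

```python
# Toy power manager: an observer records activity reports, and a decision
# module maps them to an ACPI-like global state.
class PowerManager:
    def __init__(self):
        self.idle_time = 0.0           # observer state: seconds of idleness

    def observe(self, idle_seconds):
        self.idle_time = idle_seconds  # observation received via the ACPI interface

    def decide(self):
        # Decision module: illustrative thresholds, not part of ACPI itself.
        if self.idle_time < 1.0:
            return "G0"        # working state
        if self.idle_time < 60.0:
            return "G1/S1"     # sleeping, low wake-up latency, context kept
        return "G2"            # soft off

pm = PowerManager()
pm.observe(30.0)
# pm.decide() == "G1/S1"
```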
When a transition between user mode and kernel mode is required in an operating system, a context switch is not
necessary; a mode transition is not by itself a context switch. However, depending on the operating system, a context
switch may also take place at this time.
(ii) What are the types of scheduling? [8]
Scheduling:
In typical designs, a task has three states: running (executing on the CPU), ready (ready to be executed), and blocked (waiting for an event, e.g. I/O).
Most tasks are blocked or ready most of the time because generally only one task can run at a time per CPU. The number
of items in the ready queue can vary greatly, depending on the number of tasks the system needs to perform and the type
of scheduler that the system uses. On simpler non-preemptive but still multitasking systems, a task has to give up its time
on the CPU to other tasks, which can cause the ready queue to have a greater number of overall tasks in the ready to be
executed state (resource starvation).
Usually the data structure of the ready list in the scheduler is designed to minimize the worst-case length of time spent in
the scheduler's critical section, during which preemption is inhibited, and, in some cases, all interrupts are disabled. But
the choice of data structure depends also on the maximum number of tasks that can be on the ready list.
If there are never more than a few tasks on the ready list, then a doubly linked list of ready tasks is likely optimal. If the
ready list usually contains only a few tasks but occasionally contains more, then the list should be sorted by priority. That
way, finding the highest priority task to run does not require iterating through the entire list. Inserting a task then requires
walking the ready list until reaching either the end of the list, or a task of lower priority than that of the task being
inserted.
Care must be taken not to inhibit preemption during this search. Longer critical sections should be divided into small
pieces. If an interrupt occurs that makes a high priority task ready during the insertion of a low priority task, that high
priority task can be inserted and run immediately before the low priority task is inserted.
The critical response time, sometimes called the flyback time, is the time it takes to queue a new ready task and restore
the state of the highest priority task to running. In a well-designed RTOS, readying a new task will take 3 to 20
instructions per ready-queue entry, and restoration of the highest-priority ready task will take 5 to 30 instructions.
In more advanced systems, real-time tasks share computing resources with many non-real-time tasks, and the ready list
can be arbitrarily long. In such systems, a scheduler ready list implemented as a linked list would be inadequate.
8. [a] Write short notes on priority scheduling. [16]
[or]
Priority scheduling is a non-preemptive algorithm and one of the most common scheduling algorithms in batch systems.
Each process is assigned a priority. The process with the highest priority is executed first, and so on. Processes with the same priority are executed on a first-come, first-served basis. Priority can be decided based on memory requirements, time requirements, or any other resource requirement.
Waiting time of each process (Wait Time = Service Time - Arrival Time):
Process   Wait Time
P0        9 - 0 = 9
P1        6 - 1 = 5
P2        14 - 2 = 12
P3        0 - 0 = 0

Process   Wait Time
P0        (0 - 0) + (12 - 3) = 9
P1        (3 - 1) = 2
P2
P3        (9 - 3) + (17 - 12) = 11
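A non-preemptive priority scheduler of the kind described above can be sketched as follows; the process set (arrival times, burst times, priorities) is illustrative, and lower numbers are assumed to mean higher priority:

```python
# Non-preemptive priority scheduling: at each step, among the processes that
# have arrived, run the highest-priority one to completion.
# Each process is (name, arrival_time, burst_time, priority); lower = higher.
def priority_schedule(processes):
    remaining = sorted(processes, key=lambda p: p[1])   # order by arrival
    time, waits = 0, {}
    while remaining:
        # Candidates that have arrived; if the CPU is idle, take the next arrival.
        arrived = [p for p in remaining if p[1] <= time] or [remaining[0]]
        p = min(arrived, key=lambda q: q[3])   # highest priority among arrived
        name, arrival, burst, _prio = p
        time = max(time, arrival)              # idle until the process arrives
        waits[name] = time - arrival           # wait = start time - arrival time
        time += burst                          # run to completion (non-preemptive)
        remaining.remove(p)
    return waits

waits = priority_schedule([
    ("P0", 0, 5, 1),
    ("P1", 1, 3, 2),
    ("P2", 2, 8, 1),
    ("P3", 3, 2, 3),
])
# waits == {"P0": 0, "P2": 3, "P1": 12, "P3": 13}
```

Ties in priority fall back to arrival order, matching the first-come, first-served rule stated above.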
Each process uses half the cache, so only two processes can be in the cache at the same time. Appearing below is a first schedule that uses a least-recently-used cache replacement policy on a process-by-process basis.
In the first iteration, we must fill up the cache, but even in subsequent iterations, competition among all three processes ensures that a process is never in the cache when it starts to execute. As a result, we must always use the worst-case execution time.
Another schedule in which we have reserved half the cache for P1 is shown below. This leaves P2 and P3 to fight over the other half of the cache.
In this case, P2 and P3 still compete, but P1 is always ready. After the first iteration, we can use the average-case execution time for P1, which gives us some spare CPU time that could be used for additional operations.
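The benefit of reserving cache for P1 can be illustrated numerically; the execution times below are illustrative assumptions, not values taken from the original schedules:

```python
# If P1's code stays in its reserved cache partition, later iterations can be
# budgeted at P1's average-case execution time instead of its worst case.
WCET = {"P1": 4, "P2": 6, "P3": 5}   # worst-case execution times per iteration
ACET_P1 = 2                          # P1's average-case time once it is cached

period_no_reservation = sum(WCET.values())                   # 15 time units
period_with_reservation = ACET_P1 + WCET["P2"] + WCET["P3"]  # 13 time units
spare = period_no_reservation - period_with_reservation      # 2 units spare CPU time
```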
Faculty
HoD-ECE