automobile engines, printers, and cell phones. In all these systems, certain operations must be executed
periodically, and each operation is executed at its own rate.
Part B
[a] Discuss power management and optimization for processes. [16] [or]
The RTOS and system architecture can use static and dynamic power management mechanisms
to help manage the system's power consumption.
A power management policy [Ben00] is a strategy for determining when to perform certain
power management operations. A power management policy in general examines the state of the system
to determine when to take actions.
However, the overall strategy embodied in the policy should be designed based on the
characteristics of the static and dynamic power management mechanisms. Going into a low-power mode
takes time; generally, the more that is shut off, the longer the delay incurred during restart. Because
power-down and power-up are not free, modes should be changed carefully. Determining when to switch
into and out of a power-up mode requires an analysis of the overall system activity.
Avoiding a power-down mode can cost unnecessary power.
Powering down too soon can cause severe performance penalties.
Re-entering run mode typically costs a considerable amount of time. A straightforward method is to
power up the system when a request is received. This works as long as the delay in handling the request
is acceptable. A more sophisticated technique is predictive shutdown. The goal is to predict when the
next request will be made and to start the system just before that time, saving the requestor the start-up
time. In general, predictive shutdown techniques are probabilistic: they make guesses about activity
patterns based on a probabilistic model of expected behavior. Because they rely on statistics, they may
not always correctly guess the time of the next activity. This can cause two types of problems:
The requestor may have to wait for an activity period. In the worst case, the requestor may not make a
deadline due to the delay incurred by system start-up.
The system may restart itself when no activity is imminent. As a result, the system will waste power.
Clearly, the choice of a good probabilistic model of service requests is important. The policy
mechanism should also not be too complex, since the power
it consumes to make decisions is part of the total system
power budget. Several predictive techniques are possible. A
very simple technique is to use fixed times. For instance, if
the system does not receive inputs during an interval of
length T_on, it shuts down; a powered-down system waits
for a period T_off before returning to the power-on mode.
The choice of T_on and T_off must be determined by
experimentation. Srivastava and Eustace [Sri94] found one useful rule for graphics terminals. They
plotted the observed idle time (T_off) of a graphics terminal versus the immediately preceding active
time (T_on). The result was an L-shaped distribution as illustrated in Figure 6.17. In this
distribution, the idle period after a long active period is usually very short, and the length of the idle
period after a short active period is uniformly distributed. Based on this distribution, they proposed a
shutdown threshold that depended on the length of the last active period: they shut down when the
active period length was below a threshold, putting the system in the vertical portion of the L
distribution.
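The fixed-timeout policy described above can be sketched as follows. This is a minimal Python model (the class and method names are illustrative, not from any real RTOS API): the system powers down after T_on seconds of inactivity, and a request arriving while powered down pays the restart delay.

```python
class FixedTimeoutPolicy:
    """Minimal model of a fixed-timeout power-down policy."""

    def __init__(self, t_on, t_restart):
        self.t_on = t_on            # idle interval before shutting down
        self.t_restart = t_restart  # delay to re-enter run mode
        self.last_activity = 0.0
        self.powered = True

    def on_request(self, now):
        """Handle a request; return the extra latency the requestor sees."""
        delay = 0.0 if self.powered else self.t_restart
        self.powered = True
        self.last_activity = now
        return delay

    def tick(self, now):
        """Periodic check: shut down after t_on seconds of inactivity."""
        if self.powered and now - self.last_activity >= self.t_on:
            self.powered = False
```

A request arriving shortly after a shutdown pays the full restart penalty, which is exactly the trade-off between wasted power and start-up delay that the text describes.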
The Advanced Configuration and Power Interface (ACPI) is an open industry standard for power
management services. It is designed to be compatible with a wide variety of OSs. It was targeted initially
to PCs. The role of ACPI in the system is illustrated in Figure.
ACPI provides some basic power management facilities and abstracts the hardware layer. The OS has
its own power management module that determines the policy; the OS then uses ACPI to send the
required controls to the hardware and to observe the hardware's state as input to the power manager.
ACPI supports the following five basic global power states:
G3, the mechanical off state, in which the system consumes no power;
G2, the soft off state, which requires a full OS reboot to restore the machine to working condition;
G1, the sleeping state, in which the system appears to be off;
G0, the working state, in which the system is fully usable;
the legacy state, in which the system does not comply with ACPI.
The power manager typically includes an observer, which receives messages through the ACPI
interface that describe the system behavior. It also includes a decision module that determines power
management actions based on those observations.
Most tasks are blocked or ready most of the time because generally only one task can run at a time per CPU.
The number of items in the ready queue can vary greatly, depending on the number of tasks the system needs
to perform and the type of scheduler that the system uses. On simpler non-preemptive but still multitasking
systems, a task has to give up its time on the CPU to other tasks, which can cause the ready queue to have a
greater number of overall tasks in the ready to be executed state (resource starvation).
Usually the data structure of the ready list in the scheduler is designed to minimize the worst-case length of
time spent in the scheduler's critical section, during which preemption is inhibited, and, in some cases, all
interrupts are disabled. But the choice of data structure depends also on the maximum number of tasks that can
be on the ready list.
If there are never more than a few tasks on the ready list, then a doubly linked list of ready tasks is likely
optimal. If the ready list usually contains only a few tasks but occasionally contains more, then the list should
be sorted by priority. That way, finding the highest priority task to run does not require iterating through the
entire list. Inserting a task then requires walking the ready list until reaching either the end of the list, or a task
of lower priority than that of the task being inserted.
Care must be taken not to inhibit preemption during this search. Longer critical sections should be divided into
small pieces. If an interrupt occurs that makes a high priority task ready during the insertion of a low priority
task, that high priority task can be inserted and run immediately before the low priority task is inserted.
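The insertion walk just described can be sketched as follows, assuming a plain Python list kept sorted highest-priority-first and tasks represented as (priority, name) pairs (both illustrative; a real RTOS would do this over a linked list in C, re-enabling preemption between steps):

```python
def insert_ready(ready, task):
    """Insert a task into a ready list kept sorted by descending priority.

    Walk the list until reaching either the end or a task of lower
    priority, then insert before it (larger number = higher priority).
    """
    priority, _name = task
    for i, (p, _) in enumerate(ready):
        if p < priority:
            ready.insert(i, task)
            return
    ready.append(task)

ready = []
insert_ready(ready, (5, "logger"))
insert_ready(ready, (9, "motor_control"))
insert_ready(ready, (7, "sensor_poll"))
# ready[0] is always the highest-priority runnable task
```

Because the list stays sorted, dispatching the next task is a constant-time read of the head, at the cost of the linear walk on insertion.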
The critical response time, sometimes called the flyback time, is the time it takes to queue a new ready task
and restore the state of the highest priority task to running. In a well-designed RTOS, readying a new task will
take 3 to 20 instructions per ready-queue entry, and restoration of the highest-priority ready task will take 5 to
30 instructions.
In more advanced systems, real-time tasks share computing resources with many non-real-time tasks, and the
ready list can be arbitrarily long. In such systems, a scheduler ready list implemented as a linked list would be
inadequate.
EDF guarantees schedulability provided that the total utilization satisfies

U = C_1/T_1 + C_2/T_2 + ... + C_n/T_n <= 1,

where the C_i are the worst-case computation times of the n processes and the T_i are their respective
inter-arrival periods (assumed to be equal to the relative deadlines).
That is, EDF can guarantee that all deadlines are met provided that the total CPU utilization is not more than
100%. Compared to fixed priority scheduling techniques like rate-monotonic scheduling, EDF can guarantee
all the deadlines in the system at higher loading.
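The utilization bound can be checked mechanically. A minimal sketch, assuming periodic tasks with deadlines equal to periods, given as (C_i, T_i) pairs:

```python
def edf_schedulable(tasks):
    """EDF feasibility test: total utilization sum(C_i / T_i) must be <= 1."""
    return sum(c / t for c, t in tasks) <= 1.0

# Three tasks with utilizations 0.25 + 0.25 + 0.5 = 1.0: feasible under
# EDF even though this exceeds the rate-monotonic utilization bound
# (about 0.78 for three tasks), illustrating "higher loading" above.
feasible = edf_schedulable([(1, 4), (2, 8), (1, 2)])
```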
However, when the system is overloaded, the set of processes that will miss deadlines is largely unpredictable
(it will be a function of the exact deadlines and time at which the overload occurs.) This is a considerable
disadvantage to a real-time system designer. The algorithm is also difficult to implement in hardware,
and there is a tricky issue of representing deadlines in different ranges (deadlines cannot be more
precise than the granularity of the clock used for scheduling). If modular arithmetic is used to calculate
future deadlines relative to now, the field storing a future relative deadline must accommodate at least
the value ((duration of the longest expected time to completion) * 2) + now. Therefore EDF is not
commonly found in industrial real-time computer systems.
Instead, most real-time computer systems use fixed priority scheduling (usually rate-monotonic scheduling).
With fixed priorities, it is easy to predict that overload conditions will cause the low-priority processes to miss
deadlines, while the highest-priority process will still meet its deadline.
There is a significant body of research dealing with EDF scheduling in real-time computing; it is possible to
calculate worst case response times of processes in EDF, to deal with other types of processes than periodic
processes and to use servers to regulate overloads.
[a] Write short notes on priority scheduling. [16]
[or]
Worked example (computed per-process times):

P0: 9 - 0 = 9
P1: 6 - 1 = 5
P2: 14 - 2 = 12
P3: 0 - 0 = 0

P0: (0 - 0) + (12 - 3) = 9
P1: (3 - 1) = 2
P2:
P3: (9 - 3) + (17 - 12) = 11
with care in RT applications! The application programmer should be sure about the maximum amount of time
that a task can be delayed because of locks held by other tasks; and this maximum should be less than the
specified timing constraints of the system. The lock concept can easily lead to unpredictable latencies in the
scheduling of a task. It doesn't protect the data directly, but synchronizes the code that accesses the data. As
with scheduling priorities, locks give disciplined programmers a means to reach deterministic performance
measures. But discipline is not sufficient to guarantee consistency in large-scale systems.
The problem with using locks is that they make an application vulnerable to the priority inversion problem.
This should be prevented in the design phase. Another problem occurs when the CPU on which the task
holding the lock is running suddenly fails, or when that task enters a trap or exception, because then the
lock is not released or, at best, its release is delayed.
Some remarks about the types of synchronization:
Semaphore - spinlock
A semaphore is a lock for which the normal behaviour of the locking task is to go to sleep. Hence, it
involves the overhead of a context switch, so don't use semaphores for critical sections that should take only
a very short time; in such cases spinlocks are the more appropriate choice.
Semaphore - mutex
Many programmers also tend to think that a semaphore is necessarily a more primitive RTOS function than a
mutex. This is not necessarily so, because one can implement a counting semaphore with a mutex and a
condition variable.
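A sketch of that construction, using Python's threading primitives (whose Condition bundles exactly the mutex-plus-condition-variable pair the text mentions):

```python
import threading

class CountingSemaphore:
    """Counting semaphore built from a mutex and a condition variable."""

    def __init__(self, count=0):
        self._count = count
        self._cond = threading.Condition(threading.Lock())

    def acquire(self):
        with self._cond:                 # take the mutex
            while self._count == 0:      # re-check after every wakeup
                self._cond.wait()        # sleep until a release() signals
            self._count -= 1

    def release(self):
        with self._cond:
            self._count += 1
            self._cond.notify()          # wake one waiting task
```

The while loop (rather than an if) guards against spurious wakeups, which is the standard discipline when waiting on a condition variable.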
Spinlock
Spinlocks work if the programmer is disciplined enough to use them with care, that is, only for guaranteed
very short critical sections. In principle, the latency induced by a spinlock is NOT deterministic (not RT). But
they offer a good solution when the scheduling and context-switching overhead generated by the use of
sleeping locks is larger than the time required to execute the critical section the spinlock is guarding.
IPC and data exchange
The mechanisms of all data-exchange IPC are quite similar: the OS reserves some memory space for the data
that has to be exchanged and uses some synchronisation IPC primitives for reading from or writing to that
memory space. The main difference between the forms of data exchange lies in their policy. Two or more
tasks often have some form of shared memory. Possible problems are the available space in RAM and
control over the freshness of the shared data. Shared memory has the properties of a block device: programs
can access arbitrary blocks on the device in any sequence. Character devices can only access data in a linear
sequence; a FIFO is such a device. In an RT system, no lock is needed on the real-time side, as no user
program can interrupt the real-time side. Another form of IPC data exchange uses messages and mailboxes.
Again, for a real-time system there are some things to pay attention to: an IPC approach which uses dynamic
memory allocation is not feasible for an RT system. Circular buffers are another possibility for IPC. There
are some possible problems with data loss, though, so an RT system uses some specific options: locking the
buffer in memory and signalling at buffer-half-full. A better solution is a swinging buffer. This is an advanced
circular buffer with a deadlock-free lock. A swinging buffer is non-blocking and loss-prone.
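A swinging buffer of the kind just mentioned can be sketched as a two-slot buffer in which the writer always fills the slot the reader is not using and then "swings" it in. This single-writer/single-reader Python model is illustrative only; a real-time implementation would use an atomic index update in C:

```python
class SwingingBuffer:
    """Two-slot swinging buffer: non-blocking for the writer, loss-prone.

    The reader always sees the most recently published complete sample;
    intermediate samples may be silently overwritten.
    """

    def __init__(self):
        self._slots = [None, None]
        self._read_idx = 0   # slot currently visible to the reader

    def write(self, data):
        write_idx = 1 - self._read_idx   # the slot not being read
        self._slots[write_idx] = data
        self._read_idx = write_idx       # swing: publish the new slot

    def read(self):
        return self._slots[self._read_idx]
```

Neither side ever blocks, but a sample written between two reads is simply lost, which is why the text calls the structure non-blocking and loss-prone.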
Issues in Real Time System Design
A real-time system's goal is to behave deterministically. Determinism implies two aspects. First, if a
process asks for CPU, RAM, or communication, it should receive it from the coordinating system. Second,
if a failure occurs, the system should know what to do. For a system designer, the most important features
of a real-time application are scheduling tasks, coping with failure, and using available resources.
Scheduling tasks
The system designer has to make sure that (at least a clearly identified, small) part of the system's processes
are scheduled in a predictable way because, even more than timing, the sequence of processes is an important
part of the application. Scheduling of the processes can, for example, be done by the scheduler. It is essential
that the sequence is determined in a deterministic way, but the scheduler might not suffice here. For example,
if two processes have the same priority, it might be unclear which process will run first. A system designer
should not expect to know the duration of a process, even if this process gets the full capacity of the processor.
Therefore the designer has to make sure that the scheduling works even in the worst-case scenario.
Failure
The system should behave reliably during internal or external failure. Possible failures should be known to the
real-time system. If a process can't recover from a failure, the system should go into a fail-safe or graceful-
degradation mode. If the system can't satisfy a task's timing constraints, and therefore the quality demands, it
should take action by triggering an error. In such a case, it is important to check whether it is possible to relax
certain constraints. An internal failure can be a hardware or a software failure in the system. To cope with
software failure, tasks should be designed to safeguard against error conditions. Hardware failure can be
processor, board, or link failure. To detect failure, the system designer should implement watchdog systems: if
the main program neglects to regularly service the watchdog, the watchdog triggers a system reset.
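The watchdog mechanism can be sketched as a counter that the main program must reset ("service") before a periodic timer advances it past a threshold. The names here are illustrative; on real hardware the reset line is driven by a dedicated watchdog timer peripheral:

```python
class Watchdog:
    """Software model of a watchdog timer."""

    def __init__(self, timeout):
        self.timeout = timeout   # ticks allowed between services
        self.counter = 0

    def service(self):
        """Called regularly from the main loop to prove liveness."""
        self.counter = 0

    def tick(self):
        """Called from a periodic timer interrupt.

        Returns True when the watchdog has expired and a system
        reset should be triggered.
        """
        self.counter += 1
        return self.counter > self.timeout
```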
Resources and services
In the context of resources and services, the term Quality of Service (QoS) is important. It means that a task
must get a fixed amount of service per time unit, where service includes hardware, processing time, and
communication, and is deterministically known by the system. In contrast to a general-purpose OS, the
hardware needs to be specifically assigned to every process. In assigning service to tasks, the programmer
should take into account worst-case scenarios: if various tasks could need a service, then sooner or later they
will want it at the same time. The sequence then has to maximise the quality of service.
Complexity
There are three classes of system complexity. The first, C1, has centralized hardware and a centralized state.
Everything is determined and external factors have no influence on the process. Such a system can be designed
by one person. The simplest robots belong to this category. The last, C3, has decentralized hardware and a
decentralized state. The system adapts the process due to external factors when necessary. Such a system
requires more (approximately 100) designers. An example is the RoboCup team. C2 is an intermediate level
and it has decentralized hardware and a centralized state, like for example industrial robots do. Such systems
require about 10 designers. The higher the complexity of the interactions (synchronisation and
communication) in the system, the harder it is to make the system deterministic, because many more aspects
must be taken into account.
Faculty
HoD-ECE