
RVS TECHNICAL CAMPUS - COIMBATORE

COIMBATORE 641 402


INTERNAL TEST III, SEP 2016
EC6703 EMBEDDED AND REAL TIME SYSTEMS
Degree/Dept/Sem: BE/ECE/VII
Date & Session: 30.09.2016 & FN
Duration: 1.30 Hrs
Max Marks: 50

Part A

(Answer ALL the questions)

(5 x 2 = 10 Marks)

1. Define Interprocess Communication?


Interprocess communication (IPC) is a set of programming interfaces that allow a programmer to
coordinate activities among different program processes that can run concurrently in an operating system.
This allows a program to handle many user requests at the same time.
2. What are the data types available in C language?

3. What are the advantages of high level languages?


High-level language programs are portable.
Program development is faster: high-level instructions mean fewer lines of code.
Program maintenance is easier.
The compiler translates the program to the target machine language.

4. Define RMS (Rate Monotonic Scheduling)?


In computer science, rate-monotonic scheduling (RMS) is a scheduling algorithm used in real-time
operating systems (RTOS) with a static-priority scheduling class. The static priorities are assigned
according to the cycle duration of the job, so a shorter cycle duration results in a higher job priority.
These operating systems are generally preemptive and have deterministic guarantees with regard to
response times. Rate-monotonic analysis is used in conjunction with those systems to provide scheduling
guarantees for a particular application.

5. What is a multirate system?


Implementing code that satisfies timing requirements is even more complex when multiple rates of
computation must be handled. Multirate embedded computing systems are very common, including

automobile engines, printers, and cell phones. In all these systems, certain operations must be executed
periodically, and each operation is executed at its own rate.
Part B

(2 x 16 marks, either-or type)

(8 + 2 x 16 = 40 Marks)

6. What are the goals of RTOS? [8]


This topic concerns the process, and operating systems that use processes to create systems. A process is an execution of a
program; an embedded system may have several processes running concurrently. A separate real-time
operating system (RTOS) controls when the processes run on the CPU. Processes are important to
embedded system design because they help us juggle multiple events happening at the same time. A real-time embedded system that is designed without processes usually ends up as a mess of spaghetti code
that does not operate properly.
We begin by introducing the process abstraction. A process is defined by a combination of the program
being executed and the current state of the program. We will learn how to switch contexts between
processes.
We cover the fundamentals of interprocess communication, including the various styles of
communication and how they can be implemented.
In order to make use of processes, we must be able to schedule them. We discuss process priorities and
how they can be used to guide scheduling.
The real-time operating system is the software component that implements the process abstraction and
scheduling. We study how RTOSs implement schedules, how programs interface to the operating system,
and how we can evaluate the performance of systems built from RTOSs.
Tasks introduce a new level of complexity to performance analysis. Our study of real-time scheduling
provides an important foundation for the study of multi-tasking systems.
Not only does an answering machine require real-time operation (telephone data are regularly sampled
and stored to memory) but it must juggle several tasks at once. The answering machine must be able to
operate the user interface simultaneously with recording voice data. In the most complex version of the
answering machine, we must also simultaneously compress voice data during recording and uncompress
it during playback. To emphasize the role of processes in structuring real-time computation, we compare
the answering machine design with and without processes. It becomes apparent that the implementation
that does not use processes will be considerably harder to design and debug.
7. [a] Discuss power management and optimization for processes. [16] [or]
The RTOS and system architecture can use static and dynamic power management mechanisms
to help manage the system's power consumption.
A power management policy [Ben00] is a strategy for determining when to perform certain
power management operations. A power management policy in general examines the state of the system
to determine when to take actions.
However, the overall strategy embodied in the policy should be designed based on the
characteristics of the static and dynamic power management mechanisms. Going into a low-power mode
takes time; generally, the more that is shut off, the longer the delay incurred during restart. Because
power-down and power-up are not free, modes should be changed carefully. Determining when to switch
into and out of a power-up mode requires an analysis of the overall system activity.
Avoiding a power-down mode can cost unnecessary power.
Powering down too soon can cause severe performance penalties.
Re-entering run mode typically costs a considerable amount of time. A straightforward method is to
power up the system when a request is received. This works as long as the delay in handling the request
is acceptable. A more sophisticated technique is predictive shutdown. The goal is to predict when the
next request will be made and to start the system just before that time, saving the requestor the start-up
time. In general, predictive shutdown techniques are probabilistic: they make guesses about activity
patterns based on a probabilistic model of expected behavior. Because they rely on statistics, they may
not always correctly guess the time of the next activity. This can cause two types of problems:
The requestor may have to wait for an activity period. In the worst case, the requestor may not make a
deadline due to the delay incurred by system start-up.
The system may restart itself when no activity is imminent. As a result, the system will waste power.
Clearly, the choice of a good probabilistic model of service requests is important. The policy
mechanism should also not be too complex, since the power it consumes to make decisions is part of the
total system power budget. Several predictive techniques are possible. A very simple technique is to use
fixed times. For instance, if the system does not receive inputs during an interval of length Ton, it shuts
down; a powered-down system waits for a period Toff before returning to the power-on mode.
The choice of Ton and Toff must be determined by
experimentation. Srivastava and Eustace [Sri94] found one useful rule for graphics terminals. They
plotted the observed idle time (Toff) of a graphics terminal versus the immediately preceding active time
(Ton). The result was an L-shaped distribution as illustrated in Figure 6.17. In this
distribution, the idle period after a long active period is usually very short, and the length of the idle
period after a short active period is uniformly distributed. Based on this distribution, they proposed a
shutdown threshold that depended on the length of the last active period: they shut down when the
active period length was below a threshold, putting the system in the vertical portion of the L
distribution.
The Advanced Configuration and Power Interface (ACPI) is an open industry standard for power
management services. It is designed to be compatible with a wide variety of OSs. It was targeted initially
to PCs. The role of ACPI in the system is illustrated in Figure.
ACPI provides some basic power management facilities and abstracts the hardware layer; the OS has
its own power management module that determines the policy, and the OS then uses ACPI to send the
required controls to the hardware and to observe the hardware's state as input to the power manager.
ACPI supports the following five basic global power states:
G3, the mechanical off state, in which the system consumes no power.
G2, the soft off state, which requires a full OS reboot to restore the machine to working condition.
G1, the sleeping state, in which the system appears to be off and the time required to return to working
condition is inversely proportional to power consumption. This state has four substates:
S1, a low wake-up latency state with no loss of system context;
S2, a low wake-up latency state with a loss of CPU and system cache state;
S3, a low wake-up latency state in which all system state except for main memory is lost; and
S4, the lowest-power sleeping state, in which all devices are turned off.
G0, the working state, in which the system is fully usable.
The legacy state, in which the system does not comply with ACPI.
The power manager typically includes an observer, which receives messages through the ACPI
interface that describe the system behavior. It also includes a decision module that determines power
management actions based on those observations.

[b] (i) What are the types of Scheduling? [8]


Scheduling:
In typical designs, a task has three states:

Running (executing on the CPU);

Ready (ready to be executed);

Blocked (waiting for an event, I/O for example).

Most tasks are blocked or ready most of the time because generally only one task can run at a time per CPU.
The number of items in the ready queue can vary greatly, depending on the number of tasks the system needs
to perform and the type of scheduler that the system uses. On simpler non-preemptive but still multitasking
systems, a task has to give up its time on the CPU to other tasks, which can cause the ready queue to have a
greater number of overall tasks in the ready to be executed state (resource starvation).
Usually the data structure of the ready list in the scheduler is designed to minimize the worst-case length of
time spent in the scheduler's critical section, during which preemption is inhibited, and, in some cases, all
interrupts are disabled. But the choice of data structure depends also on the maximum number of tasks that can
be on the ready list.
If there are never more than a few tasks on the ready list, then a doubly linked list of ready tasks is likely
optimal. If the ready list usually contains only a few tasks but occasionally contains more, then the list should
be sorted by priority. That way, finding the highest priority task to run does not require iterating through the
entire list. Inserting a task then requires walking the ready list until reaching either the end of the list, or a task
of lower priority than that of the task being inserted.
Care must be taken not to inhibit preemption during this search. Longer critical sections should be divided into
small pieces. If an interrupt occurs that makes a high priority task ready during the insertion of a low priority
task, that high priority task can be inserted and run immediately before the low priority task is inserted.
The critical response time, sometimes called the flyback time, is the time it takes to queue a new ready task
and restore the state of the highest priority task to running. In a well-designed RTOS, readying a new task will
take 3 to 20 instructions per ready-queue entry, and restoration of the highest-priority ready task will take 5 to
30 instructions.
In more advanced systems, real-time tasks share computing resources with many non-real-time tasks, and the
ready list can be arbitrarily long. In such systems, a scheduler ready list implemented as a linked list would be
inadequate.

(ii) Define earliest deadline first (EDF) scheduling? [8]


Earliest deadline first (EDF) or least time to go is a dynamic scheduling algorithm used in real-time operating
systems to place processes in a priority queue. Whenever a scheduling event occurs (task finishes, new task
released, etc.) the queue will be searched for the process closest to its deadline. This process is the next to be
scheduled for execution.
EDF is an optimal scheduling algorithm on preemptive uniprocessors, in the following sense: if a collection of
independent jobs, each characterized by an arrival time, an execution requirement and a deadline, can be
scheduled (by any algorithm) in a way that ensures all the jobs complete by their deadline, the EDF will
schedule this collection of jobs so they all complete by their deadline.
With scheduling periodic processes that have deadlines equal to their periods, EDF has a utilization bound of
100%. Thus, the schedulability test for EDF is:

U = C1/T1 + C2/T2 + ... + Cn/Tn <= 1

where the Ci are the worst-case computation times of the n processes and the Ti are their respective
inter-arrival periods (assumed to be equal to the relative deadlines).
That is, EDF can guarantee that all deadlines are met provided that the total CPU utilization is not more than
100%. Compared to fixed priority scheduling techniques like rate-monotonic scheduling, EDF can guarantee
all the deadlines in the system at higher loading.
However, when the system is overloaded, the set of processes that will miss deadlines is largely unpredictable
(it will be a function of the exact deadlines and the time at which the overload occurs). This is a considerable
disadvantage to a real-time system designer. The algorithm is also difficult to implement in hardware, and
there is a tricky issue of representing deadlines in different ranges (deadlines cannot be more precise than the
granularity of the clock used for the scheduling). If modular arithmetic is used to calculate future deadlines
relative to now, the field storing a future relative deadline must accommodate at least the value of
(twice the duration of the longest expected time to completion) plus "now". Therefore EDF is not commonly
found in industrial real-time computer systems.
Instead, most real-time computer systems use fixed priority scheduling (usually rate-monotonic scheduling).
With fixed priorities, it is easy to predict that overload conditions will cause the low-priority processes to miss
deadlines, while the highest-priority process will still meet its deadline.
There is a significant body of research dealing with EDF scheduling in real-time computing; it is possible to
calculate worst case response times of processes in EDF, to deal with other types of processes than periodic
processes and to use servers to regulate overloads.
[a] Write short notes on priority scheduling. [16]

[or]

Priority Based Scheduling


Priority scheduling is, in its simplest form, a non-preemptive algorithm and one of the most common
scheduling algorithms in batch systems.
Each process is assigned a priority. The process with the highest priority is executed first, and so on.
Processes with the same priority are executed on a first come, first served basis. Priority can be decided
based on memory requirements, time requirements, or any other resource requirement.

Wait time of each process is as follows:

Process   Wait Time = Service Time - Arrival Time
P0        9 - 0 = 9
P1        6 - 1 = 5
P2        14 - 2 = 12
P3        0 - 0 = 0

Average Wait Time: (9 + 5 + 12 + 0) / 4 = 6.5


Shortest Remaining Time
Shortest remaining time (SRT) is the preemptive version of the SJN algorithm. The processor is allocated to
the job closest to completion, but it can be preempted by a newly ready job with a shorter time to completion.
It is impossible to implement in interactive systems where the required CPU time is not known. It is often
used in batch environments where short jobs need to be given preference.

Round Robin Scheduling


Round Robin is a preemptive process scheduling algorithm. Each process is provided a fixed time to execute,
called a quantum. Once a process has executed for the given time period, it is preempted and another process
executes for its time period. Context switching is used to save the states of preempted processes.

Wait time of each process is as follows:

Process   Wait Time = Service Time - Arrival Time
P0        (0 - 0) + (12 - 3) = 9
P1        (3 - 1) = 2
P2        (6 - 2) + (14 - 9) + (20 - 17) = 12
P3        (9 - 3) + (17 - 12) = 11

Average Wait Time: (9 + 2 + 12 + 11) / 4 = 8.5

[b] Explain interprocess communication using signals. [16]
Inter-process communication or interprocess communication (IPC) refers specifically to the mechanisms
an operating system provides to allow processes it manages to share data. Typically, applications using
IPC are categorized as clients and servers, where the client requests data and the server responds to
client requests. Many applications are both clients and servers, as commonly seen in distributed
computing. Methods for achieving IPC are divided into categories which vary based on software
requirements, such as performance and modularity requirements, and system circumstances, such as network
bandwidth and latency.
Because there are tasks that have to communicate with each other, the need exists for synchronization of
different tasks, as well as for data exchange between them. The RTOS should provide easy and safe IPC
primitives which can be used for programmers to build their software systems. These primitives can have
different effects on task scheduling (blocking, non-blocking, conditional blocking, blocking with time out), can
use different degrees of coupling (named connection, broadcast, blackboard, object request broker) and
buffering.
The origin of most problems with resource sharing (or allocation) in multi-tasking and multi-processor systems
is the fact that operations on resources can usually not be performed atomically, i.e., as if they were executed
as one single, non-interruptible instruction that takes zero time. Indeed, a task that interfaces with a resource
can at any instant be pre-empted, and hence, when it gets re-scheduled again, it cannot just take for granted
that the data it uses now is in the same state before the pre-emption. Some parts of the code are called 'critical
sections' because it is critical to the validity of the code that the access to the data used for a certain statement
be executed atomically: un-interruptible by anything else. (Most) machine code instructions of a given
processor execute atomically, but instructions in higher-level programming languages are usually translated
into a sequence of many machine code instructions.
The main problem is called a race condition: two or more tasks compete against each other to get access to
the same shared resources. Problems that can arise from such contention include deadlock, livelock, and
starvation. Algorithms are available that avoid these conditions, some more complex than others.
There are many types of synchronization: barrier , semaphore, mutex, spinlock, read/write lock (and lock-free
for data exchange).
When a task reaches its so-called critical section, it requests a lock. Now it can get the lock if it isn't taken by
another task and enters the critical section, or it has to wait (blocked, sleeps) till the other task releases the
lock at the end of its critical section. A blocked task cannot be scheduled for execution, so locks are to be used
with care in RT applications! The application programmer should be sure about the maximum amount of time
that a task can be delayed because of locks held by other tasks; and this maximum should be less than the
specified timing constraints of the system. The lock concept can easily lead to unpredictable latencies in the
scheduling of a task. It doesn't protect the data directly, but synchronizes the code that accesses the data. As
with scheduling priorities, locks give disciplined programmers a means to reach deterministic performance
measures. But discipline is not sufficient to guarantee consistency in large-scale systems.
The problem with using locks is that they make an application vulnerable to the priority inversion problem.
This should be prevented in the design phase. Another problem occurs when the CPU on which the task
holding the lock is running, suddenly fails, or when that task enters a trap and/or exception, because then the
lock is not released, or, at best its release is delayed.
Some remarks about the types of synchronization:
Semaphore - spinlock
A semaphore is a lock for which the normal behaviour of the locking task is to go to sleep. Hence, this
involves the overhead of context switching, so don't use semaphores for critical sections that should take only
a very short time; in these cases spinlocks are a more appropriate choice.
Semaphore - mutex
Many programmers also tend to think that a semaphore is necessarily a more primitive RTOS function than a
mutex. This is not necessarily so, because one can implement a counting semaphore with a mutex and a
condition variable.
Spinlock
Spinlocks work if the programmer is disciplined enough to use them with care, that is for guaranteed very
short critical sections. In principle, the latency induced by a spinlock is NOT deterministic (not RT). But they
offer a good solution in the case that the scheduling and context switching times generated by the use of locks,
are larger than the time required to execute the critical section the spinlock is guarding.
IPC and data exchange
Mechanisms for all IPC data exchange are quite similar: the OS has some memory space reserved for the data
that has to be exchanged and uses some synchronisation IPC primitives for reading or writing to that memory
space. The main difference between different forms of data exchange lies in their policy. Two or more tasks
often have some form of shared memory. Possible problems are the available space in RAM and some
control over the freshness of the shared data. Shared memory has the properties of a block device; programs
can access arbitrary blocks on the device in any sequence. Character devices can only access data in a linear
sequence. A FIFO is such a device. In an RT system no lock is needed on the real-time side, as no user
program can interrupt the real-time side. Another form of IPC data exchange is using messages and
mailboxes. Again, for a real-time system there are some things to pay attention to. An IPC approach which
uses dynamic memory allocation is not feasible for an RT system. Circular buffers are another possibility for
IPC. There are some possible problems with data loss though, so an RT system uses some specific options:
locking in memory and buffer half full. A better solution is using a swinging buffer. This is an advanced
circular buffer and a deadlock-free lock. A swinging buffer is non-blocking and loss-prone.
Issues in Real Time System Design
A Real Time system's goal is to behave deterministically. Determinism implies two aspects. First, if a
process asks for CPU, RAM or communication, it should receive it from the coordination. Second, if a failure
occurs, the system should know what to do. For a system designer, the most important features of a Real Time
application are scheduling tasks, coping with failure and using available resources.
Scheduling tasks

The system designer has to make sure that (at least a clearly identified, small) part of the system's processes
are scheduled in a predictable way because, even more than timing, the sequence of processes is an important
part of the application. Scheduling of the processes can, for example, be done by the scheduler. It is essential
that the sequence is determined in a deterministic way, but the scheduler might not suffice here. For example,
if two processes have the same priority, it might be unclear which process will come first. A system designer
should not expect to know the duration of a process, even if this process gets the full capacity of the processor.
Therefore the designer has to make sure that the scheduling works, even in the worst-case scenario.
Failure
The system should behave reliably during internal or external failure. Possible failures should be known by the
real-time system. If a process can't recover from a failure, the system should go into a fail-safe or graceful-degradation
mode. If the system can't satisfy a task's timing constraints, and therefore also the quality demands, it
should take action by triggering an error. In such a case, it is important to check if it is possible to remove
certain constraints. An internal failure can be a hardware or a software failure in the system. To cope with
software failure, tasks should be designed to safeguard against error conditions. A hardware failure can be a
processor, board or link failure. To detect failures the system designer should implement watchdog systems.
If the main program neglects to regularly service the watchdog, the watchdog can trigger a system reset.
Resources and services
In the context of resources and services the term Quality of Service (QoS) is important. It means that the task
must get a fixed amount of service per time unit. This service includes hardware, processing time and
communication, and is deterministically known by the system. In contrast to a general OS, the hardware needs
to be specifically assigned for every process. In assigning service to tasks, the programmer should take into
account worst-case scenarios: if various tasks could need a service, then sooner or later they will want it at the
same time. The sequence then has to maximise the quality of service.
Complexity
There are three classes of system complexity. The first, C1, has centralized hardware and a centralized state.
Everything is determined and external factors have no influence on the process. Such a system can be designed
by one person. The simplest robots belong to this category. The last, C3, has decentralized hardware and a
decentralized state. The system adapts the process due to external factors when necessary. Such a system
requires more (approximately 100) designers. An example is the RoboCup team. C2 is an intermediate level:
it has decentralized hardware and a centralized state, as for example industrial robots do. Such systems
require about 10 designers. The higher the complexity of the interactions (synchronisation and
communication) in the system, the harder it is to make it deterministic, because many more aspects must be
taken into account.
Faculty
HoD-ECE
