
SRI MANAKULA VINAYAGAR ENGINEERING COLLEGE

Department of Computer Science and Engineering


Subject Name: EMBEDDED SYSTEMS

Subject Code: CS T56

Prepared By:

Mr. P. Karthikeyan, AP/CSE
Mr. B. Thiyagarajan, AP/CSE
Mrs. P. Subha Priya, AP/CSE
Verified by :

Approved by :

UNIT I

Introduction to Embedded System: Components of Embedded System


Classification - Characteristics of embedded systems - Microprocessors & Microcontrollers -
Introduction to embedded processors - Embedded software architectures: Simple control loop -
Interrupt controlled system - Cooperative multitasking - Preemptive multitasking or
multi-threading - Microkernels and exokernels - Monolithic kernels - Exotic custom operating
systems.

EMBEDDED SYSTEMS

SRI MANAKULA VINAYAGAR ENGINEERING COLLEGE

2 MARKS
UNIT 1
1. What is an embedded system? [April 2012]
An embedded system employs a combination of hardware & software (a computational
engine) to perform a specific function; is part of a larger system that may not be a computer;
works in a reactive and time-constrained environment.
2. What are the components of the Embedded Systems? [NOV 2011]
An embedded system is basically a computer controlled device designed to perform some
specific tasks. In most cases these tasks revolve around real-time control of machines or
processes. Embedded systems are more cost effective to implement than systems composed of
general purpose computers, such as PCs. The components of an ES are:
Processor
Memory
System Clock
Peripherals
3. What is the Classification of an embedded system? [April 2013]
The Embedded system is classified into following category
Small Scale Embedded System
Medium Scale Embedded System
Sophisticated Embedded System
4. What is Sophisticated Embedded System?
The sophisticated embedded system has the following features,
Enormous hardware and software Complexity.
This may need scalable processor or configurable processor and programming logic
arrays.
Constrained by the processing speed available in their hardware units.
5. What are the characteristics of an embedded system?
The typical characteristics of embedded systems are as follows:
Embedded systems are designed to do some specific task, rather than be a general-purpose
computer for multiple tasks. Some also have real-time performance constraints that must be
met, for reasons such as safety and usability; others may have low or no performance
requirements, allowing the system hardware to be simplified to reduce costs.
Embedded systems are not always separate devices. Most often they are physically built-in
to the devices they control.
The software written for embedded systems is often called firmware, and is stored in
read-only memory or Flash memory chips rather than a disk drive. It often runs with
limited computer hardware resources: small or no keyboard, screen, and little memory.
6. What are the advantages of embedded system?
The main advantage of an embedded system is customization, which yields lower area,
power, and cost.
7. What are the disadvantages of embedded system?
Higher hardware/software development overhead (design, compilers, debuggers, etc.)
May result in delayed time to market.
8. What are the various embedded system requirements?
Types of requirements imposed by embedded applications:
R1 Functional requirements
R2 Temporal requirements
R3 Dependability requirements
9. What are the functional requirements of embedded system?
The functional requirements of the embedded systems are as follows:
Data Collection
Sensor requirements
Signal conditioning
Alarm monitoring
Direct Digital Control
Actuators
Man-Machine Interaction
Informs the operator of the current state of the controlled object assists the
operator in controlling the system.


10. What are the temporal requirements of the embedded systems?


The temporal requirements of the embedded systems:
Tasks may have deadlines
Minimal latency jitter
Minimal error detection latency
Timing requirements due to tight software control loops
Human interface requirements.
11. What are dependability requirements of an embedded system?
The dependability requirements of an embedded system are as follows:
Safety: critical failure modes, certification
Maintainability: MTTR (Mean Time To Repair)
Availability: A = MTTF / (MTTF + MTTR)
Security
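The availability formula above is simple enough to check in code. A minimal sketch in C (the function name and the figures used are illustrative, not from the text):

```c
#include <assert.h>
#include <math.h>

/* Availability as defined above: the fraction of time the system is
 * operational, A = MTTF / (MTTF + MTTR). Any consistent time unit
 * works, since the units cancel. */
double availability(double mttf, double mttr)
{
    return mttf / (mttf + mttr);
}
```

For instance, a system that fails on average every 999 hours and takes 1 hour to repair has A = 999 / 1000 = 0.999, i.e. "three nines" of availability.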

12. What is a Microprocessor?


A silicon chip that contains a CPU. In the world of personal computers, the terms
microprocessor and CPU are used interchangeably. At the heart of all personal computers and
most workstations sits a microprocessor. Microprocessors also control the logic of almost all
digital devices, from clock radios to fuel-injection systems for automobiles.
13. What is a Microcontroller? [NOV 2012]
A microcontroller is a small and low-cost computer built for the purpose of dealing with
specific tasks, such as displaying information on a microwave oven's LED display or receiving
information from a television's remote control. Microcontrollers are mainly used in products
that require a degree of control to be exerted by the user.


14. What are differences between Microprocessor and Microcontroller?


MICROPROCESSOR:
The functional blocks are ALU, registers, and timing & control unit.
Bit handling instructions are few, only one or two types.
Rapid movement of code and data between external memory and the MP.
It is used for designing general purpose digital computer systems.

MICROCONTROLLER:
It includes the functional blocks of a microprocessor and in addition has timer, parallel I/O,
RAM, EPROM, ADC and DAC.
Many types of bit handling instructions.
Rapid movement of code and data within the MC.
They are used for designing application-specific dedicated systems.

15. What are the various embedded system designs?


Modeling
Refining (or partitioning)
HW-SW partitioning
16. What are the complicating factors in embedded design?
Complicating factors in the design of embedded systems
Many of the subtasks in design are intertwined.
Allocation depends on the partitioning, and scheduling presumes a certain allocation.
Predicting the time for implementing the modules in hardware or software is not very
easy, particularly for tasks that have not been performed before.
17. What are the real time requirements of an embedded system?
Hard-real time systems: where there is a high penalty for missing a deadline
e.g., control systems for aircraft/space probes/nuclear reactors; refresh rates for video, or DRAM.


Soft real-time systems: where there is a steadily increasing penalty if a deadline is missed.
e.g., laser printer: rated by pages-per-minute, but can take differing times to print a page
(depending on the "complexity" of the page) without harming the machine or the customer.
18. Explain digital signal processing in embedded system?
Continued digitization of signals is increasing the role of DSP in embedded systems.
Signals are represented digitally as sequences of samples.
ADCs are moving closer to the signals.
19. List the various processors that are present? Expand GPP and ASSP. [NOV 2012]
General Purpose Processor (GPP): Microprocessor, Microcontroller, Embedded Processor,
Digital Signal Processor
Application Specific System Processor (ASSP)
Multi Processor System using GPPs
20. What is the Embedded Processor? [April 2013]
Special microprocessors and microcontrollers are often called embedded processors. An
embedded processor is used when fast processing, fast context switching and atomic ALU
operations are needed.
Examples: ARM7, Intel i960, AMD 29050.
21. Explain reactivity in embedded systems?
Closed systems:
Execution indeterminacy is confined to one source.
Causal relations are easily established.
Open systems:
Indeterminacy comes from multiple sources, not controllable or observable by the programmer,
so it is not possible to infer causal relations.


22. What are embedded cores?


More and more vendors are selling or giving away their processors and peripherals in a
form that is ready to be integrated into a programmable logic-based design. They either
recognize the potential for growth in the system-on-a-chip area or want a piece of the royalties or
want to promote the use of their particular FPGA or CPLD by providing libraries of ready-to-use
building blocks. Either way, you will gain with lower system costs and faster time-to-market.

23. What are hybrid chips?


The vendors of hybrid chips are betting that a processor core embedded within a
programmable logic device will require far too many gates for typical applications. So they've
created hybrid chips that are part fixed logic and part programmable logic. The fixed logic
contains a fully functional processor and perhaps even some on-chip memory. This part of the
chip also interfaces to dedicated address and data bus pins on the outside of the chip.
Application-specific peripherals can be inserted into the programmable logic portion of the chip,
either from a library of IP cores or the customer's own designs.

24. Give the diversity of embedded computing?


Diversity in embedded computing:
Pocket remote control RF transmitter: 100 KIPS, crush-proof, long battery life; software
optimized for size.
Industrial equipment controller: 1 MIPS, safety-critical, 1 MB memory; software control loops.
Military signal processing: 1 GFLOPS, 1 GB/sec I/O, 32 MB.

25. What is a kernel? [April 2012]


The kernel is a program that constitutes the central core of a computer operating system.
It has complete control over everything that occurs in the system. A kernel can be contrasted
with a shell (such as bash, csh or ksh in Unix-like operating systems), which is the outermost
part of an operating system and a program that interacts with user commands. The kernel itself
does not interact directly with the user, but rather interacts with the shell and other programs as
well as with the hardware devices on the system, including the processor (also called the central
processing unit or CPU), memory and disk drives.


26. What are the types of Kernel?


There are four popular categories or kinds of Kernels namely monolithic kernels,
microkernels, hybrid kernels and exokernels. Monolithic kernels are part of Unix-like operating
systems like Linux, FreeBSD etc. These types of kernels consist of the core functions of the
operating system and the device drivers with the ability to load modules at runtime.

27. Define Cooperative Multitasking?


A type of multitasking in which the process currently controlling the CPU must offer
control to other processes. It is called cooperative because all programs must cooperate for it to
work. If one program does not cooperate, it can hog the CPU. In contrast, preemptive
multitasking forces applications to share the CPU whether they want to or not. Versions 8.0-9.2.2
of Macintosh OS and Windows 3.x operating systems are based on cooperative multitasking,
whereas UNIX, Windows 95, Windows NT, OS/2, and later versions of Mac OS are based on
preemptive multitasking.

28. What is Preemptive Multitasking?


The term preemptive multitasking is used to distinguish a multitasking operating
system, which permits preemption of tasks, from a cooperative multitasking system wherein
processes or tasks must be explicitly programmed to yield when they do not need system
resources.

29. What is Exotic custom operating system?


A small fraction of embedded systems require safe, timely, reliable or efficient behavior
unobtainable with the one of the above architectures. In this case an organization builds a system
to suit. In some cases, the system may be partitioned into a "mechanism controller" using special
techniques, and a "display controller" with a conventional operating system. A communication
system passes data between the two.

30. List the applications of Embedded Systems?


Embedded Systems: Applications:
Consumer electronics, e.g., cameras, camcorders,


Consumer products, e.g., washers, microwave ovens,


Automobiles (anti-lock braking, engine control,)
Industrial process controllers & avionics/defense applications
Computer/Communication products, e.g., printers, FAX machines,
Emerging multimedia applications & consumer electronics
31. What are the two essential units of a Processor? [NOV 2011]
Program Flow Control Unit (CU)
Execution Unit (EU)

11 MARKS

1. Describe Embedded System In Detail?


System:
A system is a way of working, organizing or doing one or many tasks according to a
fixed plan, program or set of rules. A system is also an arrangement in which all its units assemble
and work together according to the plan or program.

System Examples:

1. Watch
It is a time display system.
Parts: Hardware, Needles, Battery, Dial, Chassis and Strap.
Rules: (1) All needles move clockwise only
(2) A thin needle rotates every second.
(3) A long needle rotates every minute.
(4) A short needle rotates every hour.
(5) All needles return to the original position after 12 hours.
2. Washing Machine
It is an automatic clothes washing system.
Parts: Status display panel, switches and dials, Motor, Power Supply, Control Unit, Inner water
level sensor and solenoid valve


Rules: (1) Wash by spinning


(2) Rinse
(3) Drying
(4) Wash over by Blinking
(5) Each step displays the process stages.
(6) In case of interruption, execute only the remaining steps.

Embedded Systems:
Definition:
An embedded system is one that has computer hardware with software embedded in it as one of
its important components.

[Figure: Embedded system = Hardware + Software program]

[Its software is embedded in ROM (Read-Only Memory). It does not need secondary memory as
in a computer.]
Computer hardware that can be used:
A Microprocessor
A Large Memory (RAM, ROM and caches)
Input Units (Keyboard, Mouse, Scanner, etc.)
Output Units (Monitor, Printer, etc.)
Networking Units (Ethernet Card, Drivers, etc.)
I/O Units (Modem, Fax cum Modem, etc.)

An embedded system is a
Microcontroller based
Software driven
Reliable
Real time control system
Autonomous or human interactive
Operating on diverse physical variables
In diverse environments

An embedded system is hardware with software embedded in it, for a dedicated application.

2. Explain The Components, Classification And Characteristics Of Embedded System
Briefly? [NOV 2011], [April 2012], [April 2013]
I. COMPONENTS OF EMBEDDED SYSTEM:


An embedded system is basically a computer controlled device designed to perform some

specific tasks. In most cases these tasks revolve around real-time control of machines or
processes. Embedded systems are more cost effective to implement than systems composed of
general purpose computers, such as PCs.
Processor

The main part of an embedded system is its processor. This can be a generic
microprocessor or a microcontroller. The processor is programmed to perform the
specific tasks for which the embedded system has been designed.

Memory

Electronic memory is an important part of embedded systems. This memory is of


essentially three types: RAM, or random access memory, ROM, or read-only memory,
and cache. The RAM is where program components are temporarily stored during
execution. The ROM contains the basic input-output routines that are needed by the
system at startup. The cache is used by the processor as a temporary storage during
processing and data transfer.

System Clock

The system clock is a very important part of an embedded system since all processes in
an embedded system run on clock cycles and require precise timing information. This
clock generally consists of an oscillator and some associated circuitry.


Peripherals
The peripherals interface an embedded system with other components. The peripheral devices
are provided on the embedded system boards for easy integration. Typical peripherals include
serial port, parallel port, network port, keyboard and mouse ports, memory drive port and
monitor port. Some specialized embedded systems also have other ports such as CAN-bus port.
II. CLASSIFICATION OF EMBEDDED SYSTEMS

Small Scale Embedded Systems

Medium Scale Embedded Systems

Sophisticated Embedded Systems

SMALL SCALE EMBEDDED SYSTEMS

Single 8-bit or 16-bit microcontrollers.
Little hardware and software complexity.
They may even be battery operated.
Usually C is used for developing these systems.
They need to limit power dissipation when the system is running continuously.

MEDIUM SCALE EMBEDDED SYSTEMS

Single or a few 16- or 32-bit microcontrollers, digital signal processors (DSPs) or reduced
instruction set computers (RISCs).
Both hardware and software complexity.

SOPHISTICATED EMBEDDED SYSTEMS

Enormous hardware and software complexity.
This may need scalable processors or configurable processors and programmable logic arrays.
Constrained by the processing speed available in their hardware units.

III. CHARACTERISTICS OF EMBEDDED SYSTEMS


1) Embedded systems are designed to do some specific task, rather than be a general-purpose
computer for multiple tasks. Some also have real-time performance constraints that must be
met, for reasons such as safety and usability; others may have low or no performance
requirements, allowing the system hardware to be simplified to reduce costs.
2) Embedded systems are not always separate devices. Most often they are physically
built-in to the devices they control.
3) The software written for embedded systems is often called firmware, and is stored in
read-only memory or Flash memory chips rather than a disk drive. It often runs with limited
computer hardware resources: small or no keyboard, screen, and little memory.

3. What is a Microcontroller? Explain with an example. [NOV 2011]

A microcontroller (sometimes abbreviated µC, uC or MCU) is a small computer on a


single integrated circuit containing a processor core, memory, and programmable input/output
peripherals. Program memory in the form of NOR flash or OTP ROM is also often included on
chip, as well as a typically small amount of RAM. Microcontrollers are designed for embedded
applications, in contrast to the microprocessors used in personal computers or other general
purpose applications.
Microcontrollers are used in automatically controlled products and devices, such as
automobile engine control systems, implantable medical devices, remote controls, office
machines, appliances, power tools, and toys. By reducing the size and cost compared to a design
that uses a separate microprocessor, memory, and input/output devices, microcontrollers make it
economical to digitally control even more devices and processes. Mixed signal microcontrollers
are common, integrating analog components needed to control non-digital electronic systems.

The die from an Intel 8742, an 8-bit microcontroller that includes a CPU running at 12
MHz, 128 bytes of RAM, 2048 bytes of EPROM, and I/O in the same chip.


[Figure: 8051 microcontroller signal pins]

VARIOUS MICROCONTROLLERS
INTEL: 8031, 8032, 8051, 8052, 8751, 8752
PIC: 8-bit PIC16, PIC18; 16-bit dsPIC33/PIC24; PIC16C7X
MOTOROLA: MC68HC11

4. What is Microprocessor? Explain with an example? [NOV 2012]


A microprocessor incorporates the functions of a computer's central processing unit
(CPU) on a single integrated circuit (IC, or microchip).
It is a multipurpose, programmable, and clock-driven, register based electronic device
that accepts binary data as input, processes it according to instructions stored in its memory, and
provides results as output.
The first microprocessors emerged in the early 1970s and were used for electronic
calculators, using binary-coded decimal (BCD) arithmetic on 4-bit words. Other embedded uses
of 4-bit and 8-bit microprocessors, such as terminals, printers, and various kinds of automation,
followed soon after. Affordable 8-bit microprocessors with 16-bit addressing also led to the first
general-purpose microcomputers from the mid-1970s on.

Intel 4004, the first general-purpose, commercial microprocessor


INTEL 4004 MICROPROCESSOR SIGNAL PINS (16-pin DIP)

Pin 1  D0   (bidirectional)     Pin 16  RAM0 (output)
Pin 2  D1   (bidirectional)     Pin 15  RAM1 (output)
Pin 3  D2   (bidirectional)     Pin 14  RAM2 (output)
Pin 4  D3   (bidirectional)     Pin 13  RAM3 (output)
Pin 5  Vss  (+5 V)              Pin 12  Vdd  (-10 V)
Pin 6  CLK1 (input)             Pin 11  ROM  (output)
Pin 7  CLK2 (input)             Pin 10  TST  (input)
Pin 8  SYNC (output)            Pin 9   RST  (input)


The first microprocessor in history, the Intel 4004 was a 4-bit CPU designed for use in
calculators, or, as we say now, designed for "embedded applications". Clocked at 740 kHz, the
4004 executed up to 92,000 single-word instructions per second, and could access 4 KB of
program memory and 640 bytes of RAM. Although the Intel 4004 was a perfect fit for
calculators and similar applications, it was not very suitable for microcomputer use due to its
somewhat limited architecture. The 4004 lacked interrupt support, had only a 3-level-deep
stack, and used a complicated method of accessing the RAM. Some of these shortcomings were
fixed in the 4004's successor, the Intel 4040.

Intel C4004
740 KHz
16-pin ceramic DIP

4004 (C4004) processors in the white ceramic package were produced until the second half of
1976. This particular processor is dated the 8th week of 1975. Today the C4004 with visible
traces (like the one in the picture) is the rarest and most expensive version of the Intel 4004.

Intel D4004
740 KHz
16-pin ceramic DIP

4004 microprocessors in plastic and ceramic (not white ceramic) packages were introduced
around 1976. This ceramic 4004 is dated 45th week of 1976.


Intel P4004
740 KHz
16-pin plastic DIP

National Semiconductor 1NS4004D (INS4004D)


16-pin ceramic DIP
Purple ceramic/gold top/gold pins

National Semiconductor was the only second-source company for the Intel 4004. The naming
convention for its 4004 processors was "INS4004" plus one letter representing the package type.
The chip in the picture is mistakenly marked as "1NS4004D".

National Semiconductor INS4004J


16-pin plastic DIP

Embedded microprocessor systems are computer chips that are incorporated into products
such as cars, fridges, traffic lights, industrial equipment, and so on. Embedded microprocessor
systems are used every day by millions of people, but these systems are not seen because, as the
name implies, they are buried inside the product or the equipment. And because they are not
seen, they do not receive as much attention from the media as does the personal computer (PC).
However, the number of embedded microprocessor system computers and their economic
importance is considerable. It was reported that as far back as 1997, around 30 million
microprocessor chips were used by PC manufacturers, whilst close to three billion were used in
numerous embedded systems applications.

VARIOUS MICROPROCESSORS

INTEL: 4004, 4040; 8080, 8085; 8086, 8088; 80186, 80188; 80286, 80386; x86-64
Zilog: Z80, Z180, Z280; Z8, eZ8; and others

5. Explain about an Embedded Processor?


An embedded processor is simply a processor that has been embedded into a device.
It is software programmable but interacts with different pieces of hardware. It performs both
control and computation. It offers more performance than a microcontroller but not as much as
a general purpose processor. They are used in cars, phones, media devices, wireless devices and
printers.
WHAT IS IT REALLY?
Typically an embedded processor is a single-issue, in-order RISC processor with a little
cache. It is then sold as a piece of silicon, custom layout, netlist, or architectural description.
They are designed to be small, low power, and most importantly correct. Often, due to the
real-time constraints of an application area, they are designed to have a small, deterministic
worst-case time per instruction.

Example ARM


WHY USE AN EMBEDDED PROCESSOR?


The main reason is simple: cost.
Embedded processors are small, so they don't take up much die area and thus they are cheap.
Embedded processors are verified, so engineers won't spend a bunch of man-hours tracking
down hardware bugs.
Embedded processors run software; the key part of that is the "soft": software can deal with
changing specs.
DESIGN CRITERIA
How do we design a good embedded processor?
The three most important design criteria are performance, power, and cost.
Performance is a function of the parallelism, instruction encoding efficiency, and cycle time (or
the good old NumInstr, CPI, Freq).
Power is approximately a function of the voltage, area, and switching frequency; it is also a
function of execution time, due to leakage.
Cost is a function of both area (how many fit on a die) and the complexity of use (in terms of
engineering cost).
ISA OPTIONS
What sort of architecture do we want to design?
What sort of ISA should I provide (pros/cons)?
Register-Register / Memory-Memory
RISC/ CISC
Predication
Compound Instructions (MAC,PostInc)
Instruction Encoding
Registers (number and access)
VLIW / SIMD / Vector
DESIGN OPTIONS
What parts should be included (pros/cons)
Core
Instruction Cache
Data Cache
Multiplier


Scratch Pad Memory


MMU
Write Buffer
TLB
Branch Prediction

FUTURE OF EMBEDDED PROCESSORS


Pipeline lengths are starting to get very long. How does high-performance architecture handle
this? Branch prediction: Intel's XScale has branch prediction tables. Embedded processor
designs borrow heavily from high-performance processor designs, but now under different
constraints.
What else will migrate to the embedded space?
VLIW processors: multiple-issue machines, with scheduling done by the compiler.
Customized processors, such as those from Tensilica, allow more cost-effective design as we
now pick only what is important.
Instruction compaction: Thumb is good, but we need to do better as more and more
functionality moves to software.

6. Explain The Embedded System Architectures In Detail? [April 2010], [NOV 2012]

There are several different types of software architecture in common use.


I. SIMPLE CONTROL LOOP


A common model for this kind of design is a state machine, which identifies a set of

states that the system can be in and how it changes between them, with the goal of providing
tightly defined system behavior. This system's strength is its simplicity, and on small pieces of
software the loop is usually so fast that nobody cares that its timing is not predictable. It is
common on small devices with a stand-alone microcontroller dedicated to a simple task.
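The state-machine style described above can be sketched in C. This is an illustrative example only; the thermostat, its thresholds and the function names are invented for the sketch:

```c
/* Hypothetical super-loop state machine: a thermostat that turns a
 * heater on below LOW degrees and off above HIGH degrees. In real
 * firmware, read_temp() and set_heater() would touch hardware; here
 * the pure transition function step() carries all the logic. */
enum state { HEAT_OFF, HEAT_ON };

#define LOW  18   /* switch heater on below this temperature (deg C)  */
#define HIGH 22   /* switch heater off above this temperature (deg C) */

enum state step(enum state s, int temp_c)
{
    switch (s) {
    case HEAT_OFF: return (temp_c < LOW)  ? HEAT_ON  : HEAT_OFF;
    case HEAT_ON:  return (temp_c > HIGH) ? HEAT_OFF : HEAT_ON;
    }
    return s;
}

/* The simple control loop itself would be:
 *   for (;;) { s = step(s, read_temp()); set_heater(s == HEAT_ON); }
 */
```

Because the transition function is pure, the loop's timing depends only on how fast step() and the hardware accesses run, which is why, for small programs, nobody cares that it is not formally predictable.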
II. INTERRUPT CONTROLLED SYSTEM

An interrupt controlled system can include logic systems with at least one
interrupt to a microprocessor. The interrupt can come not only from the processor but also from
external components such as the memory, the graphics controller, a keyboard, or other I/O
devices. Events are asynchronous in nature, but processors are synchronous in nature.
When an interrupt occurs, the processor detects it as an interrupt and performs the required
instructions. This is referred to as an Interrupt Service Routine (ISR). Interrupt latency is the
time the processor requires to begin executing the instructions of an ISR. This interrupt latency
should be reduced. For that, ultra-wideband media access control (UWB MAC) devices were
introduced. These have an interrupt control system that detects a set of particular instructions
from the processor core to instruction random access memory (I-RAM). The interrupt control
system provides the core with computer-executable instructions that include a branch
instruction such that the processor can branch directly to an interrupt service routine (ISR) that
provides the computer instructions for processing the event.
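The ISR mechanism described above is commonly coded as a short handler that records the event and defers the real work to the main loop. A minimal C sketch (the UART names are hypothetical; on real hardware the handler would be installed in the interrupt vector table and would read a receive register):

```c
#include <stdint.h>

/* Hypothetical UART receive path: the ISR only records the event and
 * returns, keeping interrupt latency low; the main loop processes the
 * byte later. `volatile` stops the compiler from caching the flag
 * across the asynchronous update. */
volatile uint8_t uart_rx_ready;
volatile uint8_t uart_rx_byte;

/* Stand-in for the real interrupt handler. */
void uart_rx_isr(uint8_t byte_from_hw)
{
    uart_rx_byte  = byte_from_hw;
    uart_rx_ready = 1;
}

/* One pass of the main loop: returns the byte handled, or -1 if none. */
int poll_uart(void)
{
    if (!uart_rx_ready)
        return -1;
    uart_rx_ready = 0;
    return uart_rx_byte;
}
```

Keeping the handler this short is the standard way to reduce interrupt latency for whatever interrupt fires next.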
III. COOPERATIVE MULTITASKING
A type of multitasking in which the process currently controlling the CPU must offer

control to other processes. It is called cooperative because all programs must cooperate for it to
work. If one program does not cooperate, it can hog the CPU. In contrast, preemptive
multitasking forces applications to share the CPU whether they want to or not. Versions 8.0-9.2.2
of Macintosh OS and Windows 3.x operating systems are based on cooperative multitasking,
whereas UNIX, Windows 95, Windows NT, OS/2, and later versions of Mac OS are based on
preemptive multitasking.
A nonpreemptive multitasking system is very similar to the simple control loop scheme,
except that the loop is hidden in an API. The programmer defines a series of tasks, and each task
gets its own environment to run in. When a task is idle, it calls an idle routine, usually called
pause, wait, yield, nop (stands for no operation), etc.
The advantages and disadvantages are very similar to the control loop, except that adding
new software is easier, by simply writing a new task, or adding to the queue-interpreter.
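The task/yield arrangement described above can be sketched in C: each task does one short unit of work and returns, which is its way of yielding, and the hidden loop calls the tasks in turn. The task names and counters here are invented for illustration:

```c
/* Two invented cooperative tasks; each does one short unit of work
 * and returns, which is how it "yields". */
int blink_count, sample_count;

void blink_task(void)  { blink_count++;  }   /* e.g. toggle an LED */
void sample_task(void) { sample_count++; }   /* e.g. read a sensor */

/* The hidden loop of the nonpreemptive system: call each task in
 * round-robin order. A task that never returned would hog the CPU,
 * exactly as the text warns. */
void (*tasks[])(void) = { blink_task, sample_task };

void run_scheduler(int rounds)
{
    int n = sizeof tasks / sizeof tasks[0];
    for (int r = 0; r < rounds; r++)
        for (int i = 0; i < n; i++)
            tasks[i]();              /* returning == yielding */
}
```

Adding new software really is just adding a function pointer to the tasks array, which is the advantage the text mentions over a hand-written control loop.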
IV. PREEMPTIVE MULTITASKING OR MULTI-THREADING


The term preemptive multitasking is used to distinguish a multitasking operating

system, which permits preemption of tasks, from a cooperative multitasking system wherein
processes or tasks must be explicitly programmed to yield when they do not need system
resources.
In this type of system, a low-level piece of code switches between tasks or threads based
on a timer (connected to an interrupt). This is the level at which the system is generally
considered to have an "operating system" kernel. Depending on how much functionality is
required, it introduces more or less of the complexities of managing multiple tasks running
conceptually in parallel.
As any code can potentially damage the data of another task (except in larger systems
using an MMU), programs must be carefully designed and tested, and access to shared data must
be controlled by some synchronization strategy, such as message queues, semaphores or a
non-blocking synchronization scheme.
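As one example of the synchronization strategies just mentioned, a single-producer/single-consumer message queue (ring buffer) lets an interrupt or one task pass data safely to another. This is an illustrative sketch, not from the text; the size and names are assumed:

```c
#include <stdint.h>

/* Single-producer/single-consumer ring buffer: with exactly one writer
 * (say, an ISR calling q_put) and one reader (a task calling q_get),
 * `q_head` is written only by the producer and `q_tail` only by the
 * consumer, so no lock is needed on most single-core targets. */
#define QSIZE 8u                    /* capacity; must be a power of two */

volatile uint8_t q[QSIZE];
volatile unsigned q_head, q_tail;

int q_put(uint8_t v)                /* producer side */
{
    if (q_head - q_tail == QSIZE)
        return 0;                   /* full: drop or count the overrun */
    q[q_head % QSIZE] = v;
    q_head++;
    return 1;
}

int q_get(uint8_t *v)               /* consumer side */
{
    if (q_head == q_tail)
        return 0;                   /* empty */
    *v = q[q_tail % QSIZE];
    q_tail++;
    return 1;
}
```

The indices run freely and wrap with unsigned arithmetic; only the modulo by the power-of-two capacity maps them into the array, which keeps full and empty unambiguous.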
Because of these complexities, it is common for organizations to use a real-time
operating system (RTOS), allowing the application programmers to concentrate on device
functionality rather than operating system services, at least for large systems; smaller systems
often cannot afford the overhead associated with a generic real time system, due to limitations
regarding memory size, performance, and/or battery life. The choice of an RTOS, however,
brings its own issues, as the selection must be made prior to starting the application
development process. This timing forces developers to choose the embedded operating system
for their device based upon current requirements and so restricts future options to a large extent.
The restriction of future options becomes more of an issue as product life decreases. Additionally
the level of complexity is continuously growing as devices are required to manage many
variables such as serial, USB, TCP/IP, Bluetooth, Wireless LAN, trunk radio, multiple channels,
data and voice, enhanced graphics, multiple states, multiple threads, numerous wait states and so
on. These trends are leading to the uptake of embedded middleware in addition to a real time
operating system.
V. KERNELS
The kernel is a program that constitutes the central core of a computer operating system.

It has complete control over everything that occurs in the system. A kernel can be contrasted
with a shell (such as bash, csh or ksh in Unix-like operating systems), which is the outermost
part of an operating system and a program that interacts with user commands. The kernel itself
does not interact directly with the user, but rather interacts with the shell and other programs as
well as with the hardware devices on the system, including the processor (also called the central
processing unit or CPU), memory and disk drives.


MICROKERNELS AND EXOKERNELS


A microkernel is a logical step up from a real-time OS.
The usual arrangement is that the operating system kernel allocates memory and switches
the CPU to different threads of execution. User mode processes implement major functions such
as file systems, network interfaces, etc.
In general, microkernels succeed when task switching and intertask communication are
fast, and fail when they are slow.
Exokernels communicate efficiently by normal subroutine calls. The hardware and all the
software in the system are available to, and extensible by, application programmers.
MONOLITHIC KERNELS
In this case, a relatively large kernel with sophisticated capabilities is adapted to suit an
embedded environment. This gives programmers an environment similar to a desktop operating
system like Linux or Microsoft Windows, and is therefore very productive for development; on
the downside, it requires considerably more hardware resources, is often more expensive, and
because of the complexity of these kernels can be less predictable and reliable.
Common examples of embedded monolithic kernels are Embedded Linux and Windows CE.
Despite the increased cost in hardware, this type of embedded system is increasing in
popularity, especially on the more powerful embedded devices such as Wireless Routers and
GPS Navigation Systems. Here are some of the reasons:

Ports to common embedded chip sets are available.

They permit re-use of publicly available code for Device Drivers, Web Servers,
Firewalls, and other code.

Development systems can start out with broad feature-sets, and then the distribution can
be configured to exclude unneeded functionality, and save the expense of the memory
that it would consume.

Many engineers believe that running application code in user mode is more reliable,
easier to debug and that therefore the development process is easier and the code more
portable.

Many embedded systems lack the tight real-time requirements of a control system; a
system such as Embedded Linux may be fast enough for many such applications.


Features requiring faster response than can be guaranteed can often be placed in
hardware.

Many RTOS systems have a per-unit cost. When used on a product that is or will become
a commodity, that cost is significant.

VI.

EXOTIC CUSTOM OPERATING SYSTEMS


A small fraction of embedded systems require safe, timely, reliable or efficient behavior
unobtainable with any of the above architectures. In this case an organization builds a system
to suit. In some cases, the system may be partitioned into a "mechanism controller" using special
techniques, and a "display controller" with a conventional operating system. A communication
system passes data between the two.

7. Explain Any One Of The Applications Of The Embedded Systems In Detail?


I.

Military and aerospace embedded software applications


From in-orbit embedded systems to jumbo jets to vital battlefield networks, designers of

mission-critical aerospace and defense systems requiring real-time performance, scalability, and
high-availability facilities consistently turn to the LynxOS RTOS and the LynxOS-178 RTOS
for software certification to DO-178B.
Rich in system resources and networking services, LynxOS provides an off-the-shelf
software platform with hard real-time response backed by powerful distributed computing
(CORBA), high reliability, software certification, and long-term support options.
The LynxOS-178 RTOS for software certification, based on the RTCA DO-178B
standard, assists developers in gaining certification for their mission- and safety-critical systems.
Real-time systems programmers get a boost with LynuxWorks' DO-178B RTOS training
courses.
LynxOS-178 is the first DO-178B and EUROCAE/ED-12B certifiable, POSIX-compatible RTOS solution.


II.

Medical electronics technology


With the introduction of the LynxSecure separation kernel and embedded
hypervisor, medical devices can now have the best of both worlds, with hard real-time
applications running alongside commercial desktop operating systems on the same
industry-standard Intel processors.
The separation kernel offers both safety and security partitioning for applications.
The embedded hypervisor functionality allows "guest" operating systems and their
applications to run in their own partitions.

III.

Communications applications
"Five-nines" availability, CompactPCI hot swap support, and hard real-time response

LynxOS delivers on these key requirements and more for today's carrier-class systems. Scalable
kernel configurations, distributed computing capabilities, integrated communications stacks, and
fault-management facilities make LynxOS the ideal choice for companies looking for a single
operating system for all embedded telecommunications applications, from complex central
controllers to simple line/trunk cards.
LynuxWorks' JumpStart for Communications package enables OEMs to rapidly develop
mission-critical communications equipment, with pre-integrated, state-of-the-art data
networking and porting software components, including source code for easy customization.
The Lynx Certifiable Stack (LCS) is a secure TCP/IP protocol stack designed especially
for applications where standards certification is required.

IV.

Electronics applications and consumer devices


As the number of powerful embedded processors in consumer devices continues to rise,

the LynxOS real-time operating system provides a highly reliable option for systems designers.
For makers of low-cost consumer electronic devices who wish to integrate the LynxOS real-time
operating system into their products, we offer special MSRP-based pricing to reduce royalty fees
to a negligible portion of the device's MSRP.


V.

Industrial automation and process control software


Designers of industrial and process control systems know from experience that

LynuxWorks operating systems provide the security and reliability that their industrial
applications require.
From ISO 9001 certification to fault-tolerance, POSIX conformance, secure partitioning
and high availability, we've got it all. Take advantage of our two decades of experience.

8. Discuss the shared data problem and its remedies with example. [April 2010]
Some data is common to different processes or tasks. Examples are as follows:
Time, which is updated continuously by one process, is also used by a display process in the system.
Port input data, which is received by one process and further processed and analyzed by
another process.
Memory buffer data, which is inserted by one process and further read (deleted), processed and
analyzed by another.
Assume that at some instant an operation on the value of a variable is in progress, and
only a part of the operation is completed while another part remains incomplete. At that
moment, assume that there is an interrupt.
Assume that there is another function that also shares the same variable. The value of the
variable may differ from the one expected had the earlier operation been completed.


Whenever another process shares the same partly operated data, the shared data problem
arises.
Example:
Consider x, a 128-bit variable, b127...b0.
Assume that the operation OPsl is shift left by 2 bits, to multiply x by 4 and find y = 4x.
Let OPsl be done non-atomically in four sub-operations, OPAsl, OPBsl, OPCsl and OPDsl,
for bits b31...b0, b63...b32, b95...b64 and b127...b96, respectively.
Assume that at an instant OPAsl, OPBsl and OPCsl have completed and OPDsl remains
incomplete.
Now interrupt I occurs at that instant.
I calls some function which uses x, if x is the global variable.
It modifies x, b127...b0.
On return from the interrupt, since OPDsl did not complete, OPDsl operates on
b127...b96.
The resulting value of x is different, due to the problem of the incomplete operation before I
occurred.
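This failure can be reproduced with a short simulation (Python; the 128-bit variable is modelled as four 32-bit slices computed one at a time, and the "interrupt" overwrites x between the third and fourth sub-operations; all names here are illustrative):

```python
MASK32 = (1 << 32) - 1
MASK128 = (1 << 128) - 1

def shift_slice(x, k):
    """One sub-operation: compute 32-bit slice k (k = 0 is b31...b0) of y = x << 2."""
    return ((x << 2) >> (32 * k)) & MASK32

def shift_nonatomic(x, interrupt_after, new_x):
    """Perform x << 2 in four word-sized sub-operations (OPAsl...OPDsl).
    After `interrupt_after` sub-operations, an "ISR" overwrites x with new_x,
    so the remaining sub-operations see the modified value."""
    slices = []
    for k in range(4):
        if k == interrupt_after:
            x = new_x  # the interrupting function modifies the shared variable
        slices.append(shift_slice(x, k))
    return sum(s << (32 * k) for k, s in enumerate(slices)) & MASK128

x = 0x0123456789ABCDEF0123456789ABCDEF
good = (x << 2) & MASK128                             # atomic result: y = 4x
bad = shift_nonatomic(x, interrupt_after=3, new_x=0)  # OPDsl runs after the ISR

print(hex(good))
print(hex(bad))  # low three words match, but the top word reflects the modified x
```

The remedy is to make the whole operation atomic, for example by disabling the interrupt around the four sub-operations or by guarding x with a semaphore.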


9. Explain the Bus structure of an embedded system. [April 2010]

Using VLSI tools, a processor itself can be designed. A system specific processor (ASIP)
is the one that does not use the GPP (standard available CISC or RISC microprocessor or
microcontroller or signal processor). The processor on chip incorporates a section of the CISC or
RISC instruction set.
This specific processor may have especially configurable instruction-set for an
application. An ASIP can also be configurable. Using appropriate tools, an ASIP can be designed
and configured for the instructions needed in the following exemplary functions: DSP functions,
controller signal processing functions, adaptive filtering functions and communication
protocol-implementing functions. On a VLSI chip, an embedded ASIP in a special system can be a unit
within an ASIC or SoC.
On a VLSI chip, there may be high-level components. These are components that possess
gate-level sophistication in circuits above that of the counter, register, multiplier, floating-point
operation unit and ALU. A standard source solution for synthesizing a higher-level component
by configuring an FPGA core or a core of a VLSI chip may be available as an Intellectual Property,
called an IP. The copyright for the synthesized design of a higher-level component for gate-level
implementation of an IP is held by the designer or designing company.
One has to pay a royalty for every chip shipped. An embedded system may incorporate one or
more IPs. An IP may provide a hardwired, implementable design of a transform, or of an encryption
algorithm or a deciphering algorithm. An IP may provide a design for adaptive filtering of a
signal. An IP may provide full design for implementing Hyper Text Transfer Protocol (HTTP)
or File Transfer Protocol (FTP) to transmit a web page or a file on the Internet. An IP may be
designed for the PCI or USB bus controller.
A General Purpose Processor (GPP) can be embedded on a VLSI chip. Recently,
exemplary GPPs, called ARM 7 and ARM 9, which embed onto a VLSI chip, have been
developed by ARM and their enhancements by Texas Instruments.
An ARM-processor VLSI-architecture is available either as a CPU chip or for integrating
it into VLSI or SoC. ARM provides CISC functionality with RISC architecture at the core. An
application of an ARM-embedded circuit is ICE (In-Circuit Emulator), which is used for debugging an
embedded system. Exemplary ARM 9 applications are set-top boxes, cable modems, and wireless
devices such as mobile handsets.


10. Explain briefly about Interrupt [April 2010]

In systems programming, an interrupt is a signal to the processor emitted by hardware or


software indicating an event that needs immediate attention. An interrupt alerts the processor to a
high-priority condition requiring the interruption of the current code the processor is executing
(the current thread). The processor responds by suspending its current activities, saving its state,
and executing a small program called an interrupt handler (or interrupt service routine, ISR) to
deal with the event. This interruption is temporary, and after the interrupt handler finishes, the
processor resumes execution of the previous thread.


Two types of interrupts:


A hardware interrupt is an electronic alerting signal sent to the processor from an external
device, either a part of the computer itself such as a disk controller or an external peripheral. For
example, pressing a key on the keyboard or moving the mouse triggers hardware interrupts that
cause the processor to read the keystroke or mouse position. Unlike the software type (below),
hardware interrupts are asynchronous and can occur in the middle of instruction execution,
requiring additional care in programming. The act of initiating a hardware interrupt is referred to
as an interrupt request (IRQ).
A software interrupt is caused either by an exceptional condition in the processor itself, or a
special instruction in the instruction set which causes an interrupt when it is executed. The
former is often called a trap or exception and is used for errors or events occurring during
program execution that are exceptional enough that they cannot be handled within the program
itself. For example, if the processor's arithmetic logic unit is commanded to divide a number by
zero, this impossible demand will cause a divide-by-zero exception, perhaps causing the
computer to abandon the calculation or display an error message. Software interrupt instructions
function similarly to subroutine calls and are used for a variety of purposes, such as to request
services from low level system software such as device drivers. For example, computers often
use software interrupt instructions to communicate with the disk controller to request data be
read or written to the disk.
Each interrupt has its own interrupt handler. The number of hardware interrupts is limited by the
number of interrupt request (IRQ) lines to the processor, but there may be hundreds of different
software interrupts. Interrupts are a commonly used technique for computer multitasking,
especially in real-time computing. Such a system is said to be interrupt-driven.
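The suspend/save/handle/resume sequence described above can be illustrated with a toy simulation (Python; the "processor state" is reduced to a program counter and a flag stands in for the IRQ line; real hardware saves far more state than this):

```python
trace = []  # records the order in which work is done

def isr():
    """Interrupt service routine: handles the 'device' event."""
    trace.append("ISR")

def run(program, irq_before):
    """Execute `program`; an interrupt request arrives just before
    instruction `irq_before`. The CPU saves its state (here, just the
    program counter), runs the ISR, then resumes where it left off."""
    pc = 0
    pending = False
    while pc < len(program):
        if pc == irq_before:
            pending = True   # "hardware" raises the IRQ line
        if pending:
            saved_pc = pc    # save processor state
            isr()            # execute the interrupt handler
            pc = saved_pc    # restore state and resume
            pending = False
        trace.append(program[pc])
        pc += 1

run(["instr0", "instr1", "instr2"], irq_before=1)
print(trace)  # ['instr0', 'ISR', 'instr1', 'instr2']
```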

11. Explain in detail about Monolithic Kernel. [April 2012], [NOV 2012]
In this case, a relatively large kernel with sophisticated capabilities is adapted to suit an
embedded environment. This gives programmers an environment similar to a desktop operating
system like Linux or Microsoft Windows, and is therefore very productive for development; on
the downside, it requires considerably more hardware resources, is often more expensive, and
because of the complexity of these kernels can be less predictable and reliable.
Common examples of embedded monolithic kernels are Embedded Linux and Windows CE.


Despite the increased cost in hardware, this type of embedded system is increasing in
popularity, especially on the more powerful embedded devices such as Wireless Routers and
GPS Navigation Systems. Here are some of the reasons:

Ports to common embedded chip sets are available.

They permit re-use of publicly available code for Device Drivers, Web Servers,
Firewalls, and other code.

Development systems can start out with broad feature-sets, and then the distribution can
be configured to exclude unneeded functionality, and save the expense of the memory
that it would consume.

Many engineers believe that running application code in user mode is more reliable,
easier to debug and that therefore the development process is easier and the code more
portable.

Many embedded systems lack the tight real-time requirements of a control system; a
system such as Embedded Linux may be fast enough for many such applications.

Features requiring faster response than can be guaranteed can often be placed in
hardware.

Many RTOS systems have a per-unit cost. When used on a product that is or will become
a commodity, that cost is significant.


12. Explain the skill required for embedded system designer. [April 2012]
An embedded system designer has to develop a product using the available tools within the given
specifications, cost and time frame. (Chapters 7 and 12 will cover the design aspects of
embedded systems.)
Skills for Small Scale Embedded System Designer: Author Tim Wilmshurst, in the book referred
to above, has said that the following skills are needed in the individual or team that is developing a
small-scale system: Full understanding of microcontrollers with a basic knowledge of computer
architecture, digital electronic design, software engineering, data communication, control
engineering, motors and actuators, sensors and measurements, analog electronic design and IC
design and manufacture. Specific skills will be needed in specific situations. For example,
control engineering knowledge will be needed for design of control systems and analog
electronic design knowledge will be needed when designing the system interfaces. Basic aspects
of the following topics will be described in this book to prepare the designer who already has a
good knowledge of the microprocessor or microcontroller to be used. (i) Computer architecture
and organization. (ii) Memories. (iii) Memory allocation. (iv) Interfacing the memories. (v)
Burning (a term used for porting) the executable machine codes in PROM or ROM (Section
2.3.1). (vi) Use of decoders and demultiplexers. (vii) Direct memory accesses. (viii) Ports. (ix)
Device drivers in assembly. (x) Simple and sophisticated buses. (xi) Timers. (xii) Interrupt
servicing mechanism. (xiii) C programming elements. (xiv) Memory optimization. (xv) Selection
of hardware and microcontroller. (xvi) Use of ICE (In-Circuit-Emulators), cross-assemblers and
testing equipment. (xvii) Debugging the software and hardware bugs by using test vectors. Basic
knowledge in the other areas (data communication, control engineering, motors and actuators,
sensors and measurements, analog electronic design, and IC design and manufacture) can be
obtained from the standard textbooks available.
Skills for Medium Scale Embedded System Designer: C programming and RTOS
programming and program modeling skills are a must to design a medium-scale embedded
system. Knowledge of the following becomes critical.
(i) Tasks and their scheduling by RTOS.
(ii) Cooperative and preemptive scheduling.
(iii) Inter-process communication functions.
(iv) Use of shared data, and programming the critical sections and re-entrant functions. (v) Use
of semaphores, mailboxes, queues, sockets and pipes.
(vi) Handling of interrupt-latencies and meeting task deadlines.
(vii) Use of various RTOS functions.
(viii) Use of physical and virtual device drivers. A designer must have access to an RTOS
programming tool with Application Programming Interfaces (APIs) for the specific
microcontroller to be used. Solutions to various functions like memory allocation, timers, device
drivers and interrupt handing mechanism are readily available as the
APIs of the RTOS. The designer needs to know only the hardware organization and use of these
APIs. The microcontroller or processor then represents a small system element for the designer
and a little knowledge may suffice.
Skills for Sophisticated Embedded System Designer: A team is needed to co-design and solve
the high level complexities of the hardware and software design. An embedded system hardware
engineer should have full skills in hardware units and basic knowledge of C, RTOS and other
programming tools. A software engineer should have basic knowledge of hardware and a thorough
knowledge of C, RTOS and other programming tools. A final optimum design solution is then
obtained by system integration.

QUESTION BANK
TWO MARKS
1. What is an embedded system? [April 2012]
2. What are the components of the Embedded Systems? [NOV 2011]
3. What is the Classification of an embedded system? [April 2013]
4. What is Sophisticated Embedded System?
5. What are the characteristics of an embedded system?
6. What are the advantages of embedded system?
7. What are the disadvantages of embedded system?
8. What are the various embedded system requirements?
9. What are the functional requirements of embedded system?
10. What are the temporal requirements of the embedded systems?
11. What are dependability requirements of an embedded system?
12. What is a Microprocessor?
13. What is a Microcontroller? [NOV 2012]
14. What are differences between Microprocessor and Microcontroller?
15. What are the various embedded system designs?
16. What are the complicating factors in embedded design?
17. What are the real time requirements of an embedded system?
18. Explain digital signal processing in embedded system?
19. List the various processors that are present?
20. What is the Embedded Processor? [April 2013]
21. Give the reactivities in embedded systems?
22. What are embedded cores?
23. What are hybrid chips?
24. Give the diversity of embedded computing?
25. What is a kernel? [April 2012]
26. What are the types of Kernel?


27. Define Cooperative Multitasking?


28. What is Preemptive Multitasking?
29. What is Exotic custom operating system?
30. List the applications of Embedded Systems?
31. What are the two essential units of a Processor? [NOV 2011]

ELEVEN MARKS
1. Describe Embedded System In Detail? [Page: 9]
2. Explain The Components, Classification And Characteristics Of Embedded System Briefly?
[NOV 2011], [April 2012], [April 2013] [Page: 11]
3. What Is A Microcontroller Explain With An Example? [NOV 2011] [Page: 13]
4. What Is A Microprocessor Explain With An Example? [NOV 2012] [Page: 14]
5. What Is An Embedded Processor? [Page: 18]
6. Explain The Embedded System Architectures In Detail? [April 2010], [NOV 2012] [Page:
21]
7. Explain Any One Of The Applications Of The Embedded Systems In Detail [Page: 25]
8. Discuss the shared data problem and its remedies with example. [April 2010]
[Page: 29]
9. Explain the Bus structure of an embedded system. [April 2010] [Page: 31]
10. Explain briefly about Interrupt [April 2010] [Page: 33]
11. Explain in detail about Monolithic Kernel. [April 2012], [NOV 2012] [Page: 35]
12. Explain the skill required for embedded system designer. [April 2012] [Page: 36]

PONDICHERRY UNIVERSITY QUESTIONS


1. Explain The Components, Classification And Characteristics Of Embedded System Briefly?
[NOV 2011], [April 2012], [April 2013] [Page: 11]
2. What Is A Microcontroller Explain With An Example? [NOV 2011] [Ref. Page No.: 13]
3. What Is A Microprocessor Explain With An Example? [NOV 2012] [Ref. Page No.: 14]
4. Explain The Embedded System Architectures In Detail? [April 2010], [NOV 2012]
[Ref. Page No.: 21]
5. Discuss the shared data problem and its remedies with example. [April 2010]
[Ref. Page No.: 29]
6. Explain the Bus structure of an embedded system. [April 2010] [Ref. Page No.: 31]
7. Explain briefly about Interrupt [April 2010] [Ref. Page No.: 33]
8. Explain in detail about Monolithic Kernel. [April 2012], [NOV 2012] [Ref. Page No.: 35]
9. Explain the skill required for embedded system designer. [April 2012] [Ref. Page No.: 36]


UNIT II
Embedded Hardware Architecture :32 Bit Microcontrollers:
ARM 7 TDMI core based 32 Bit microcontrollers and family of processors:
Register, Memory and Data transfer, Arithmetic and Logic instructions, Assembly
Language, I/O operations interrupt structure, ARM cache. ARM Bus, Embedded
systems with ARM.
Networks for Embedded systems:
Serial bus protocols: The CAN bus, and the USB bus, Parallel bus protocols: The
PCI Bus and GPIB bus,


1. Give the summary of I/O devices used in embedded system


Program, data and stack memories occupy the same memory space. The total addressable
memory size is 64 KB.
Program memory - program can be located anywhere in memory. Jump, branch and call
instructions use 16-bit addresses, i.e. they can be used to jump/branch anywhere within 64 KB.
All jump/branch instructions use absolute addressing.
Data memory - the processor always uses 16-bit addresses so that data can be placed anywhere.
Stack memory is limited only by the size of memory. Stack grows downward.
First 64 bytes in a zero memory page should be reserved for vectors used by RST instructions.
I/O ports
256 Input ports
256 Output ports
Registers
Accumulator or A register is an 8-bit register used for arithmetic, logic, I/O and load/store
operations.
2. Define bus.
Buses: The exchange of information.
Information is transferred between units of the microcomputer by collections of conductors
called buses. There will be one conductor for each bit of information to be passed, e.g., 16 lines
for a 16 bit address bus. There will be address, control, and data buses
3. What are the classifications of I/O devices?
Synchronous serial input and output
Asynchronous serial UART input and output
Parallel one bit input and output
Parallel port input and output
4. Give some examples for serial input I/O devices.
Audio input, video input, dial tone, transceiver input, scanner, serial IO bus input, etc.,
5. Give the steps for accomplishing input output data transfer
Accomplishing input/output data transfer
There are three main methods used to perform/control input/output data transfers. They are,
Software programming (scanning or polling)


Interrupt controlled
Direct memory access (DMA)
6. Give the limitations of polling technique.
The polling technique, however, has limitations.
It is wasteful of the processor's time, as it needlessly checks the status of all devices all
the time.
It is inherently slow, as it checks the status of all I/O devices before it comes back to
check any given one again.
When fast devices are connected to a system, polling may simply not be fast enough to
satisfy the minimum service requirements.
The priority of a device is determined only by its position in the polling sequence.
7. What do you meant by bus arbitration?
Bus Arbitration
Most processors use special control lines for bus arbitration, ie, controlling the use of the address
and data bus,
An input which the DMAC uses to request the bus
An output(s) indicating the bus status
An output indicating acceptance of the DMAC's bus request
8. What are the two characteristics of synchronous communication?
Bytes/frames maintain constant phase difference and should not be sent at random time
intervals. No handshaking signals are provided during the communication.
Clock pulse is required to transmit a byte or frame serially. Clock rate information is
transmitted by the transmitter.
9. What do you mean by asynchronous communication?
In asynchronous communication, bytes or frames need not maintain a constant phase
difference and may be sent at variable time intervals. The transmitter does not send clock
information along with the data; handshaking signals are used instead.
10. What are the characteristics of asynchronous communication?
Variable bit rate need not maintain constant phase difference
Handshaking method is used
Transmitter need not transmit clock information along with data bit stream


11. What are the three ways of communication for a device?


Separate clock pulse along with data bits
Data bits modulated with clock information
Embedded clock information with data bits before transmitting
12. Expand a) SPI b) SCI
SPI - Serial Peripheral Interface
SCI - Serial Communication Interface
13. What are the features of SPI?
SPI has programmable clock rates
Full-duplex mode
Crystal clock frequency is 8MHz
Open-drain or totem-pole output from master to slave
14. Define software timer.
A software timer is software that increments or decrements a count value on each
interrupt from a hardware timer or RTC. A software timer is used as a virtual timing device.
15. What are the forms of timer?
Hardware interrupt timer
Software timer
User software controlled hardware timer
RTOS controlled hardware timer
UP/DOWN count action timer
One-shot timer (No reload after overflow and finished states)
16. Define RTC
RTC stands for Real-Time Clock. Once the system starts, it does not stop/reset, and the
count value cannot be reloaded.
17. What is I2C?

Inter-Integrated Circuit (a 2-wire/line protocol) which offers synchronous communication.
Standard speed: 100 Kbps; high speed: 400 Kbps
18. What are the bits in I2C corresponding to?
SDA - Serial Data Line and SCL - Serial Clock Line


19. What is a CAN bus? Where is it used?


CAN stands for Controller Area Network. It is a serial, bi-directional bus used in
automobiles, and operates at a rate of up to 1 Mbps.

20. What is USB? Where is it used? [November 2012]


USB - Universal Serial Bus
Operating speed: up to 12 Mbps in full-speed mode and 1.5 Mbps in low-speed mode.
21. What are the features of the USB protocol?
A device can be attached, configured and used, reset, reconfigured and used, detached and
reattached, share the bandwidth with other devices.
22. What are the four types of data transfer used in USB?
Control transfer
Bulk transfer
Interrupt-driven data transfer
Isochronous transfer
23. Explain briefly about PCI and PCI/X buses. (or) List any two parallel buses used in
embedded Systems [April/May 2014]
Used for most PC based interfacing
Provides superior throughput than EISA
Platform-independent
Clock rate is nearest to sub-multiples of system clock
24. Mention some advanced bus standard protocols;
GMII (Gigabit Media-Independent Interface)
XGMII (10-Gigabit Media-Independent Interface)
CSIX-1 (6.6 Gbps)
RapidIO interconnect specification v1.1 (8 Gbps)
25. What do you meant by high speed device interfaces?
Fail-over clustering would not be practical without some way for the redundant servers to
access remote storage devices without taking a large performance hit, as would occur if these
devices were simply living on the local network. Two common solutions to this problem are
double-ended SCSI and Fibre Channel.
26. Mention some I/O standard interfaces.
HSTL - High-Speed Transceiver Logic (used in high-speed operations)
SSTL - Stub Series Terminated Logic (used when buses need to be isolated from a large
number of stubs)

27. What is the ARM bus architecture? [April/May 2012]
ARM processors use the AMBA (Advanced Microcontroller Bus Architecture) specification for
on-chip buses. AMBA defines the AHB (Advanced High-performance Bus) for high-speed
modules and the APB (Advanced Peripheral Bus) for low-speed peripherals.

28. What are the types of Interrupt handler?


There are two types of Interrupt hander. They are as follows:
Nested Interrupt Handling
Non Nested Interrupt Handling
29. What are the arithmetic instructions?

The basic form of an arithmetic instruction is

OPcode Rd, Rn, Rm

For example, ADD R0, R2, R4 performs the operation R0 ← [R2] + [R4], and
SUB R0, R6, R5 performs the operation R0 ← [R6] - [R5].

Immediate mode: ADD R0, R3, #17 performs the operation R0 ← [R3] + 17

The second operand can be shifted or rotated before being used in the operation
For example, ADD R0, R1, R5, LSL #4 operates as follows: the second operand
stored in R5 is shifted left 4-bit positions (equivalent to [R5]x16), and its is then added to
the contents of R1; the sum is placed in R0
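These register-transfer descriptions can be checked with a small sketch (Python; each "register" is a 32-bit integer, and the optional shift models the ARM barrel shifter applied to the second operand; the register values are illustrative):

```python
MASK32 = (1 << 32) - 1

def add(rn, rm, lsl=0):
    """Model ADD Rd, Rn, Rm, LSL #lsl: Rd <- [Rn] + ([Rm] << lsl), modulo 2^32."""
    return (rn + ((rm << lsl) & MASK32)) & MASK32

def sub(rn, rm):
    """Model SUB Rd, Rn, Rm: Rd <- [Rn] - [Rm], modulo 2^32."""
    return (rn - rm) & MASK32

r1, r2, r3, r4, r5, r6 = 10, 20, 7, 30, 3, 100  # illustrative register contents

r0       = add(r2, r4)         # ADD R0, R2, R4         -> 20 + 30
r0_sub   = sub(r6, r5)         # SUB R0, R6, R5         -> 100 - 3
r0_imm   = add(r3, 17)         # ADD R0, R3, #17        -> 7 + 17
r0_shift = add(r1, r5, lsl=4)  # ADD R0, R1, R5, LSL #4 -> 10 + (3 x 16)

print(r0, r0_sub, r0_imm, r0_shift)  # 50 97 24 58
```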

30. What are the Logic Instructions?

The logic operations AND, OR, XOR, and Bit-Clear are implemented by instructions
with the OP codes AND, ORR, EOR, and BIC.

For example
AND R0, R0, R1: performs R0 ← [R0] AND [R1] (bitwise AND)

The Bit-Clear instruction (BIC) is closely related to the AND instruction.


It complements each bit in operand Rm before ANDing them with the bits in
register Rn.
For example, BIC R0, R0, R1. Let R0 = 0x02FA62CA and R1 = 0x0000FFFF. The
instruction places the pattern 0x02FA0000 in R0

The Move Negative instruction complements the bits of the source operand and places
the result in Rd.
For example, MVN R0, R3
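The same logic operations can be mirrored in Python to check the worked examples. This is a sketch; the function names simply echo the ARM mnemonics, and all values are masked to 32 bits.

```python
MASK32 = 0xFFFFFFFF  # ARM registers hold 32-bit values

def AND(rn, rm): return rn & rm              # bitwise AND
def ORR(rn, rm): return rn | rm              # bitwise OR
def EOR(rn, rm): return rn ^ rm              # bitwise exclusive-OR
def BIC(rn, rm): return rn & (~rm & MASK32)  # AND with complemented Rm
def MVN(rm):     return ~rm & MASK32         # move complemented value

# BIC R0, R0, R1 with R0 = 0x02FA62CA, R1 = 0x0000FFFF clears the low 16 bits
print(hex(BIC(0x02FA62CA, 0x0000FFFF)))  # 0x2fa0000
```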

31. Define register. [April/May 2012]


Registers are used to quickly accept, store, and transfer data and instructions that are
being used immediately by the CPU. There are various types of registers, each used for a
different purpose. Among the most commonly used registers are the Accumulator (AC), the Data
Register (DR), the Address Register (AR), the Program Counter (PC), the Memory Data Register
(MDR), the Index Register, and the Memory Buffer Register (MBR).

32. List out the design goals of ARM. [APRIL 2013]


There are a number of physical features that have driven the ARM processor design.
First, portable embedded systems require some form of battery power. The ARM processor has
been specifically designed to be small to reduce power consumption and extend battery
operation, which is essential for applications such as mobile phones and personal digital assistants
(PDAs).

High code density is another major requirement since embedded systems have limited
memory due to cost and/or physical size restrictions. High code density is useful for applications
that have limited on-board memory, such as mobile phones and mass storage devices.

In addition, embedded systems are price sensitive and use slow and low-cost memory
devices. For high-volume applications like digital cameras, every cent has to be accounted for in
the design. The ability to use low-cost memory devices produces substantial savings.

Another important requirement is to reduce the area of the die taken up by the embedded
processor. For a single-chip solution, the smaller the area used by the embedded processor, the
more available space for specialized peripherals. This in turn reduces the cost of the design and
manufacturing since fewer discrete chips are required for the end product.

ARM has incorporated hardware debug technology within the processor so that software
engineers can view what is happening while the processor is executing code. With greater
visibility, software engineers can resolve issues faster, which has a direct effect on the time
to market and reduces overall development costs.

The ARM core is not a pure RISC architecture because of the constraints of its primary
application: the embedded system. In some sense, the strength of the ARM core is that it does
not take the RISC concept too far. In today's systems the key is not raw processor speed but total
effective system performance and power consumption.

UNIT 2
11 MARKS
1. DISCUSS THE ARM architecture in Detail ( APRIL 2013)
The ARM processor core is a key component of many successful 32-bit embedded systems.
You probably own one yourself and may not even realize it! ARM cores are widely used in
mobile phones, handheld organizers, and a multitude of other everyday portable consumer
devices. ARM's designers have come a long way from the first ARM1 prototype in 1985. Over
one billion ARM processors had been shipped worldwide by the end of 2001. The ARM

Company bases their success on a simple and powerful original design, which continues to
improve today through constant technical innovation. In fact, the ARM core is not a single core,
but a whole family of designs sharing similar design principles and a common instruction set.
For example, one of ARM's most successful cores is the ARM7TDMI. It provides up to
120 Dhrystone MIPS and is known for its high code density and low power
consumption, making it ideal for mobile embedded devices. In this section we discuss how
the RISC (reduced instruction set computer) design philosophy was adapted by ARM to create a
flexible embedded processor. We then introduce an example embedded device and discuss the
typical hardware and software technologies that surround an ARM processor.

The RISC design philosophy:


The ARM core uses a RISC architecture. RISC is a design philosophy aimed at delivering simple
but powerful instructions that execute within a single cycle at a high clock speed. The RISC
philosophy concentrates on reducing the complexity of instructions performed by the hardware
because it is easier to provide greater flexibility and intelligence in software rather than
hardware. As a result, a RISC design places greater demands on the compiler. In contrast, the
traditional complex instruction set computer (CISC) relies more on the hardware for instruction
functionality, and consequently the CISC instructions are more complicated. Figure 1.1
illustrates these major differences.

The RISC philosophy is implemented with four major design rules:

1. Instructions: RISC processors have a reduced number of instruction classes. These classes
provide simple operations that can each execute in a single cycle. The compiler or programmer
synthesizes complicated operations (for example, a divide operation) by combining several
simple instructions. Each instruction is a fixed length to allow the pipeline to fetch future
instructions before decoding the current instruction. In contrast, in CISC processors the
instructions are often of variable size and take many cycles to execute.
2. Pipelines: The processing of instructions is broken down into smaller units that can be
executed in parallel by pipelines. Ideally the pipeline advances by one step on each cycle for

maximum throughput. Instructions can be decoded in one pipeline stage. There is no need for an
instruction to be executed by a miniprogram called microcode as on CISC processors.
3. Registers: RISC machines have a large general-purpose register set. Any register can contain
either data or an address. Registers act as the fast local memory store for all data
processing operations. In contrast, CISC processors have dedicated registers for specific
purposes.

Fig. 1.1. CISC vs. RISC
4. Load-store architecture: The processor operates on data held in registers. Separate load and
store instructions transfer data between the register bank and external memory. Memory accesses
are costly, so separating memory accesses from data processing provides an advantage because
you can use data items held in the register bank multiple times without needing multiple memory
accesses. In contrast, with a CISC design the data processing operations can act on memory
directly. These design rules allow a RISC processor to be simpler, and thus the core can operate
at higher clock frequencies. In contrast, traditional CISC processors are more complex and
operate at lower clock frequencies. Over the course of two decades, however, the distinction
between RISC and CISC has blurred as CISC processors have implemented more RISC
concepts.

The ARM processor controls the embedded device. Different versions of the ARM
processor are available to suit the desired operating characteristics. An ARM processor
comprises a core (the execution engine that processes instructions and manipulates data)
plus the surrounding components that interface it with a bus. These components can
include memory management and caches.
Controllers coordinate important functional blocks of the system. Two commonly found
controllers are interrupt and memory controllers.
The peripherals provide all the input-output capability external to the chip and are
responsible for the uniqueness of the embedded device.
A bus is used to communicate between different parts of the device

Applications
The operating system schedules applications: code dedicated to handling a particular
task. An application implements a processing task; the operating system controls the
environment. An embedded system can have one active application or several applications
running simultaneously.
ARM processors are found in numerous market segments, including networking,
automotive, mobile and consumer devices, mass storage, and imaging. Within each segment

ARM processors can be found in multiple applications. For example, the ARM processor is
found in networking applications like home gateways, DSL modems for high-speed Internet
communication, and 802.11 wireless communication. The mobile device segment is the largest
application area for ARM processors because of mobile phones. ARM processors are also found
in mass storage devices such as hard drives, and imaging products such as inkjet printers:
applications that are cost sensitive and high volume. In contrast, ARM processors are not found
in applications that require leading-edge high performance. Because these applications tend to be
low volume and high cost, ARM has decided not to focus designs on these types of applications.
2. WHAT ARE THE ARM PROCESSOR FAMILIES?
There are currently eight product families which make up the ARM processor range:
ARM Processor Instruction Set Architecture

Cortex Processor Family

ARM7 processor family

ARM9 processor family

ARM9E processor family

ARM10E processor family

ARM11 processor family

Secure Core processor family

CORTEX FAMILY PROCESSOR


ARM Cortex-A8, ARM Cortex-A9 MPCore, ARM Cortex-A9 Single Core Processor,
ARM Cortex-M0, ARM Cortex-M1, ARM Cortex-M3 and ARM Cortex-R4(F)
The Cortex family of processors provides ARM Partners with a range of solutions
optimized around specific market applications across the full performance spectrum. This
underlines ARM's strategy of aligning technology around specific market applications and
performance requirements.
The ARM Cortex family comprises three series, which all implement the Thumb-2
Instruction set to address the increasing performance and cost demands of various markets:

The ARM Cortex-A Series is a family of applications processors for complex OS and
user applications. The Cortex-A8 and Cortex-A9 processors support the ARM, Thumb and
Thumb-2 instruction sets.
The ARM Cortex-R Series is a family of embedded processors for real-time
systems. These processors support the ARM, Thumb, and Thumb-2 instruction sets. Currently
this family comprises the Cortex-R4 and the Cortex-R4F processors
The ARM Cortex-M Series is a family of deeply embedded processors optimized for cost-sensitive
applications. These processors support the Thumb-2 instruction set only. This family
comprises the Cortex-M3, the Cortex-M1 and the Cortex-M0 processors

ARM7 FAMILY PROCESSOR


ARM720T, ARM7EJ-S, ARM7TDMI and ARM7TDMI-S
The ARM7 family is a range of low-power 32-bit RISC microprocessor cores optimized
for cost and power-sensitive applications.
Introduced in 1994, the ARM7 family continues to be used in a variety of designs, but
newer and more demanding designs are increasingly making use of latest ARM processors such
as the Cortex-M0 and Cortex-M3 both of which offer several significant enhancements over the
ARM7 family.

The ARM Cortex-M3 32-bit processor has been specifically developed to


provide a high-performance, low-cost platform for a broad range of applications
including microcontrollers, automotive body systems, industrial control systems
and wireless networking. The Cortex-M3 processor provides outstanding
computational performance and exceptional system response to interrupts while
meeting low cost requirements through small core footprint, industry leading code
density enabling smaller memories, reduced pin count and low power
consumption.

The ARM Cortex-M0 processor is the smallest, lowest power and most
energy-efficient ARM processor available. The exceptionally small silicon area,
low power and minimal code footprint of the processor enables developers to
achieve 32-bit performance at an 8-bit price point, bypassing the step to 16-bit
devices.

The ARM7 family offers up to 130 MIPS (Dhrystone 2.1) and incorporates the 16-bit Thumb
instruction set. The family consists of the ARM7TDMI, ARM7TDMI-S and ARM7EJ-S
processors, each of which was developed to address different market requirements:

ARM7TDMI - Integer processor

ARM7TDMI-S - Synthesizable version of the ARM7TDMI processor

ARM7EJ-S - Synthesizable core with DSP and Jazelle technology enhancements for Java
acceleration

ARM720T - Cached core with Memory Management Unit (MMU) supporting operating systems
including Windows CE, Palm OS, Symbian OS and Linux

Applications

Personal audio (MP3, WMA, AAC players)

Entry level wireless handsets

Two-way pagers

ARM7 family

Established, high-volume 32-bit RISC architecture

Up to 130 MIPS (Dhrystone 2.1) performance on a typical 0.13µm process

Small die size and very low power consumption

High code density, comparable to 16-bit microcontroller

Wide operating system and RTOS support - including Windows CE, Palm OS, Symbian
OS, Linux and market-leading RTOS

Wide choice of development tools

Simulation models for leading EDA environments

Excellent debug support for SoC designers, including ETM interface

Multiple sourcing from industry-leading silicon vendors

Availability in 0.25µm, 0.18µm and 0.13µm processes

Migration and support across new process technologies

Code is forward-compatible to ARM9, ARM9E and ARM10 processors as well as Intel's
XScale technology

ARM9 FAMILY PROCESSOR


The ARM9 processor family is built around the ARM9TDMI processor and incorporates
the 16-bit Thumb instruction set, which improves code density by as much as 35%. The ARM9
family's comprehensive feature set enables developers to implement leading-edge systems, while
delivering considerable savings in chip area, time-to-market, development costs and power
consumption. The ARM9 family consists of the ARM922T cached processor macrocell.
Applications

Next-generation hand-held products: videophones, portable communicators, PDAs

Digital consumer products: set-top boxes, home gateways, games consoles, MP3 audio,
MPEG4 video

Imaging: desktop printers, still picture cameras, digital video cameras

Automotive: telematic and infotainment systems

ARM922 Features

32-bit RISC processor with ARM and Thumb instruction sets

5-stage integer pipeline achieves 1.1 MIPS/MHz

Up to 300 MIPS (Dhrystone 2.1) in a typical 0.13µm process

Single 32-bit AMBA bus interface

MMU supporting Windows CE, Symbian OS, Linux and Palm OS

Integrated instruction and data caches

Excellent debug support for SoC designers, including ETM interface

8-entry write buffer avoids stalling the processor when writes to external memory are
performed

Portable to latest 0.18µm, 0.15µm and 0.13µm silicon processes.

ARM 9E FAMILY PROCESSOR


ARM926EJ-S , ARM946E-S, ARM966E-S and ARM968E-S
The ARM9E processor family enables single-processor solutions for microcontroller, DSP and
Java applications, offering savings in chip area and complexity, power consumption, and
time-to-market. The ARM9E family of products is a set of DSP-enhanced 32-bit RISC processors, well

suited for applications requiring a mix of DSP and microcontroller performance. The family
includes the ARM926EJ-S, ARM946E-S and ARM968E-S processor macrocells, each of which
has been developed to address different application requirements. They include signal processing
extensions to enhance 16-bit fixed-point performance using a single-cycle 32 x 16 multiply-accumulate (MAC) unit, and implement the 16-bit Thumb instruction set giving excellent code
density, maximising savings on system cost. The ARM926EJ-S processor also includes ARM
Jazelle technology which enables the direct execution of Java bytecodes in hardware.
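The single-cycle multiply-accumulate (MAC) unit mentioned above is the workhorse of DSP inner loops such as FIR filters. A rough Python model of the accumulate step follows; it is illustrative only, since real code on these cores would be ARM assembly or compiler intrinsics.

```python
MASK32 = 0xFFFFFFFF  # results wrap to 32 bits, like the register file

def mac(acc, a, b):
    """One multiply-accumulate step: acc = acc + a * b, wrapped to 32 bits."""
    return (acc + a * b) & MASK32

# A dot product: the kernel a hardware MAC unit executes one step per cycle
samples = [1, 2, 3]
coeffs = [4, 5, 6]
acc = 0
for s, c in zip(samples, coeffs):
    acc = mac(acc, s, c)
print(acc)  # 1*4 + 2*5 + 3*6 = 32
```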

Jazelle technology, memory management unit (MMU), variable size instruction and data
caches (4K - 128K), instruction and data tightly coupled memory (TCM) interfaces

Variable size instruction and data cache (0K- 1M), instruction and data TCM(0 - 1M),
and memory protection unit for embedded applications.

Targets "hard real-time" applications requiring predictable instruction execution timings
with high performance and low power consumption.

Applications

Next-generation hand-held products


Videophones, portable communicators, Internet appliances

Digital consumer products


Set-top boxes, home gateways, games consoles,

Imaging
Desktop printers, still picture cameras, digital video cameras

Storage
HDD and DVD drives

Automotive
Power train, infotainment, ABS, body control systems

Industrial control systems


Motion controls, power delivery

Networking
VoIP, Wireless LAN, xDSL

ARM9E Family

32-bit RISC processor with ARM, Thumb and DSP instruction sets

ARM Jazelle technology delivers 8x Java acceleration (ARM926EJ-S)

5-stage integer pipeline achieves 1.1 MIPS/MHz

Up to 300 MIPS (Dhrystone 2.1) in a typical 0.13µm process

Integrated real-time trace and debug support

Optional VFP9 coprocessor delivers floating-point performance

215 MFLOPS for 3D graphics and real-time control systems

High-performance AHB system

MMU supporting Windows CE, Symbian OS, Linux, Palm OS (ARM926EJ-S)

Integrated instruction and data caches

Real-time debug support for SoC designers, including ETM interface

16-entry write buffer avoids stalling the processor when writes to external memory are
performed

Flexible soft IP delivery, synthesizable to the latest 0.18µm, 0.15µm and 0.13µm silicon
processes.

ARM 10E FAMILY PROCESSOR


ARM1026EJ-S High Performance, Jazelle-enhanced Macrocell:
The ARM1026EJ-S macrocell is a fully synthesizable processor delivering a high level of
performance, functionality and flexibility to enable innovative SoC applications. A Jazelle
Technology enhanced 32-bit RISC ARM10EJ-S CPU with extensive 64-bit internal bussing is
combined with configurable instruction and data caches, configurable tightly coupled memories
(TCM), support for parity protection on SRAM arrays, memory management and protection
units (MMU and MPU), vector interrupt controller interface, advanced vector floating point
support and dual 64/32-bit configurable AMBA AHB system interfaces. The ARM1026EJ-S
core implements the ARMv5TEJ instruction set and includes an enhanced 16 x 32-bit multiplier.
The ARMv5TEJ instruction set includes 16-bit fixed-point DSP instructions to enhance
performance of many signal processing algorithms and applications as well as supporting Thumb
and Java bytecode execution.
Applications:

Hand-held products

internet appliances, portable communicators and PDAs

Digital consumer products


set-top boxes, home gateways, VoIP telephones, web tablets, laser
printers, game consoles, digital cameras and TV

Automotive control systems


powertrain control, drive-by-wire control, infotainment and navigation

Industrial control systems


motion controls, power delivery and signal processing

Features:

32-bit performance-optimized processor core implementing the ARM, Thumb, DSP, and
Java ISAs (v5TEJ)

Highly-efficient ARM10EJ-S core achieves 1.35 MIPS/MHz on Dhrystone 2.1 without
inlining

Extensive 64-bit internal bussing delivers increased bandwidth for applications with large
working sets

Full MMU support for Windows CE, Linux, Palm OS, Symbian OS, and Java OS

Full MPU support for a broad range of real time operating systems

Separate instruction and data caches
- Configurable sizes (4kB - 128kB) with 4-way associativity

Separate instruction and data TCM
- Configurable sizes (0 - 1MB) and support for wait-state insertion

Parity protection support on SRAM arrays for maximum field reliability

Dual 64 or 32-bit AMBA AHB bus interfaces

Direct-attach vector interrupt controller interface for improved interrupt response

Support for optional vector floating point and embedded trace coprocessors

EmbeddedICE-RT logic for real-time debug

Fully-synthesizable and process portable design delivered as RTL

Benefits:

Flexible, high-performance, low-power core for innovative SoC applications

Runs all major OSs and existing middleware

Enables low-cost, single-chip MCU, DSP, and Java solutions

High-performance hardware Java bytecode execution

Single development toolkit for reduced development costs and shorter development cycle
time

Synthesizable design allows sourcing from multiple industry-leading silicon vendors

Excellent real-time debug support for SoC designers via optional ETM10RV macrocell

Instruction set can be extended by the use of coprocessors

ARM-EDA Reference Methodology deliverables significantly reduce the time to


generate a specific technology implementation of the core and to generate industry
standard views and models.

ARM11 FAMILY PROCESSOR


ARM11 MPCore, ARM1136J(F)-S, ARM1156T2(F)-S and ARM1176JZ(F)-S
Based on the ARM11 microarchitecture, the ARM11 processor family comprises a range
of high performance microprocessors, delivering up to 740 Dhrystone 2.1 MIPS in 0.13µm process
technology. The family comprises four main product lines, the ARM1136J(F)-S processor, the
ARM1156T2(F)-S processor and the ARM1176JZ(F)-S processor, each optimized for the
specific requirements of different market segments, and the ARM11 MPCore multiprocessor.
Applications Processors for Consumer and Wireless
The ARM1176JZ-S and ARM1176JZF-S processors featuring ARM TrustZone
technology, and ARM Jazelle technology for efficient embedded Java execution, are designed
for use as applications processors in consumer and wireless products. ARM TrustZone
technology provides support within the CPU and platform architecture for building the trusted
computing environments required to enable protection of critical system functions from
downloaded applications, copyright protection of downloaded media, safe over-the-air system
upgrades.
Both processors feature the ARMv6 instruction set architecture, with media processing
extensions, ARM Jazelle technology, and ARM Thumb for compact code. The ARM1176JZF-S
processor also features an integrated floating point coprocessor, which makes it particularly

suitable for embedded 3D-graphics applications. These processors feature AXI interfaces
compatible with the latest AMBA 3 AXI specification, offering higher system bus
bandwidth with fewer bus layers and rapid timing closure.
High Performance Processors for Automotive, Data Storage, Imaging and Embedded
Control
The ARM1156T2-S and ARM1156T2F-S processors incorporate the latest ARM Thumb-2 technology for even higher code density and instruction set efficiency. Thumb-2 technology
uses 31 percent less memory than pure 32-bit code to reduce system cost, while at the same time
delivering up to 38 percent better performance than existing Thumb technology-based solutions.
These processors also feature optional parity protection for caches and Tightly Coupled
Memories (TCM), and non-maskable interrupts, making them ideal for embedded control
applications where high reliability or high availability are paramount.
The ARM1156T2F-S processor includes an integrated floating point coprocessor - ideal
for embedded control applications developed from mathematical models. Both processors feature
an enhanced Memory Protection Unit (MPU) and offer an ideal upgrade path for embedded
control applications currently using ARM946E-S, ARM966E-S or older 16-bit processors.
These processors feature AMBA 3 AXI specification interfaces, offering higher system
bus bandwidth with fewer bus layers and rapid timing closure.
Processors for Network Infrastructure, Consumer, and Automotive Infotainment
The award-winning ARM1136J-S and ARM1136JF-S processors, feature the ARMv6
instruction set with media extensions, ARM Jazelle technology, ARM Thumb code compression,
and optional floating point coprocessor. As with all ARM11 family processors, media processing
extensions offer up to 1.9x acceleration of media-processing tasks such as MPEG4 encode,
instruction and data cache sizes are configurable, and optional Tightly Coupled Memories can be
added to accelerate interrupt handling and data-processing. These processors feature AMBA 2
AHB interfaces compatible with a wide range of system IP and peripherals.

ARM11 MPCore Multiprocessor


The ARM11 MPCore synthesizable multiprocessor is based on the ARM11
microarchitecture and can be configured to contain between one and four processors delivering
up to 2600 Dhrystone MIPS of performance.

The ARM11 MPCore multiprocessor solution delivers greater performance at lower
frequencies than comparable single processor solutions, bringing significant cost savings to
system designers, while maintaining full compatibility with existing EDA tools and flows. The
ARM11 MPCore processor also simplifies otherwise complex multiprocessor designs, reducing
time-to-market and design costs.
Supporting Products
The ARM11 family of processors is complemented by the range of ARM11 PrimeXsys
Platforms and ETK11 which provide a foundation for efficient and rapid implementation of
ARM11 core-based designs. ARM PrimeXsys Platforms provide a comprehensive set of
peripherals, pre-configured with a flexible high-performance interconnect, and are readily
extendable with customers' own or third party IP.
The CoreSight range of embedded debug products provides the ability to trace execution
of the ARM11 processors in real-time and gives debug visibility of the entire ARM11 SoC.
A wide range of development tools is also available from ARM and third parties.
*ARM1176JZ-S and ARM1176JZF-S processors are also suitable for all application
areas covered by ARM1136J-S, but with the added advantage of TrustZone technology where a
trusted computing platform is required.

ARM11 Family Features:

Powerful ARMv6 instruction set architecture

ARM Thumb instruction set reduces memory bandwidth and size requirements by up to
35%

ARM Jazelle technology for efficient embedded Java execution

ARM DSP extensions

SIMD (Single Instruction Multiple Data) media processing extensions deliver up to 2x
performance for video processing

ARM TrustZone technology for on-chip security foundation (ARM1176JZ-S and
ARM1176JZF-S cores)

Thumb-2 core technology for enhanced performance, energy efficiency and code density
(ARM1156T2-S and ARM1156T2F-S cores)

58

EMBEDDED SYSTEMS

SRI MANAKULA VINAYAGAR ENGINEERING COLLEGE

Low power consumption:


*0.6mW/MHz (0.13µm, 1.2V) including cache controllers;
*Energy saving power-down modes address static leakage currents in
advanced processes

High performance integer processor:


*8-stage integer pipeline delivers high clock frequency (9 stages for
ARM1156T2(F)-S)
*Separate load-store and arithmetic pipelines
*Branch Prediction and Return Stack

High performance memory system design:


*Supports 4kB - 64kB cache sizes
*Optional tightly coupled memories with DMA for multi-media
applications
*High-performance 64-bit memory system speeds data access for media
processing and networking applications
*ARMv6 memory system architecture accelerates OS context-switch

Vectored interrupt interface and low-interrupt-latency mode speeds interrupt response
and real-time performance

Optional Vector Floating Point coprocessor (ARM1136JF-S, ARM1176JZF-S and
ARM1156T2F-S cores) for automotive/industrial controls and 3D graphics acceleration

All ARM11 cores are delivered as ARM-Synopsys Reference Methodology compliant
deliverables, which significantly reduce the time to generate a specific technology
implementation of the core and to generate a complete set of industry standard views and
models

3. WHAT ARE THE REGISTERS AND MEMORY ACCESS PROCESSES IN ARM?


[APRIL 2013]

In the ARM architecture


Memory is byte addressable
32-bit addresses

59

EMBEDDED SYSTEMS

SRI MANAKULA VINAYAGAR ENGINEERING COLLEGE

32-bit processor registers

Two operand lengths are used in moving data between the memory and the processor
registers
Bytes (8 bits) and words (32 bits)

Word addresses must be aligned, i.e., they must be multiples of 4


Both little-endian and big-endian memory addressing are supported

When a byte is loaded from memory into a processor register or stored from a register
into the memory
It is always located in the low-order byte position of the register
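The alignment rule and byte placement can be checked with a small Python model. `word_aligned`, `load_byte` and `load_word` are hypothetical helpers built for this sketch, not an ARM API; the dictionary stands in for byte-addressable memory, and `load_word` shows how the same four bytes yield different words under little-endian and big-endian ordering.

```python
def word_aligned(addr):
    """ARM word addresses must be multiples of 4."""
    return addr % 4 == 0

def load_byte(mem, addr):
    """LDRB-style load: the byte lands in the low-order 8 bits of the register."""
    return mem[addr] & 0xFF

def load_word(mem, addr, little_endian=True):
    """Assemble a 32-bit word from four consecutive bytes in either byte order."""
    bs = [mem[addr + i] for i in range(4)]
    if not little_endian:
        bs.reverse()
    return bs[0] | bs[1] << 8 | bs[2] << 16 | bs[3] << 24

mem = {0: 0x78, 1: 0x56, 2: 0x34, 3: 0x12}
assert word_aligned(0) and not word_aligned(6)
print(hex(load_word(mem, 0, little_endian=True)))   # 0x12345678
print(hex(load_word(mem, 0, little_endian=False)))  # 0x78563412
```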

Register Structure

There are 15 additional general-purpose registers called the banked registers


They are duplicates of some of the R0 to R14 registers
They are used when the processor switches into Supervisor or Interrupt modes of
operation

Saved copies of the Status register are also available in the Supervisor and Interrupt
modes

60

EMBEDDED SYSTEMS

SRI MANAKULA VINAYAGAR ENGINEERING COLLEGE

ARM Instruction Format


Each instruction is encoded into a 32-bit word
Access to memory is provided only by Load and Store instructions
The basic encoding format for the instructions, such as Load, Store, Move, Arithmetic, and
Logic instructions, is shown below

An instruction specifies a conditional execution code (Condition), the OP code, two or three
registers (Rn, Rd, and Rm), and some other information

Conditional Execution of Instructions

A distinctive and somewhat unusual feature of ARM processors is that all instructions are
conditionally executed
Depending on a condition specified in the instruction

The instruction is executed only if the current state of the processor condition code flags
satisfies the condition specified in bits b31-b28 of the instruction
Thus, instructions whose condition does not match the processor condition code
flags are not executed

One of the conditions is used to indicate that the instruction is always executed
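The flag test behind conditional execution can be sketched for a few condition codes. This is a simplified model: the real architecture defines sixteen conditions over the N, Z, C and V flags, and `execute_if` is a helper invented here for illustration.

```python
# Condition codes are evaluated against the CPSR flags N, Z, C, V.
# Only a small subset is modelled; names follow the ARM mnemonics.
CONDITIONS = {
    'EQ': lambda f: f['Z'],        # equal (Z set)
    'NE': lambda f: not f['Z'],    # not equal (Z clear)
    'MI': lambda f: f['N'],        # negative (N set)
    'AL': lambda f: True,          # always executed
}

def execute_if(cond, flags, action):
    """Run `action` only when the condition matches the current flags."""
    if CONDITIONS[cond](flags):
        return action()
    return None  # instruction is skipped, not trapped

flags = {'N': False, 'Z': True, 'C': False, 'V': False}
print(execute_if('EQ', flags, lambda: 'executed'))  # executed
print(execute_if('NE', flags, lambda: 'executed'))  # None
```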

Memory Addressing Modes

Pre-indexed mode
The effective address of the operand is the sum of the contents of the base register Rn
and an offset value

Pre-indexed with writeback mode


The effective address of the operand is generated in the same way as in the
Pre-indexed mode, and then the effective address is written back into Rn

Post-indexed mode

The effective address of the operand is the contents of Rn. The offset is then added to
this address and the result is written back into Rn
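The three indexed modes differ only in which address is used for the access and whether Rn is updated, which a short sketch makes explicit. Each helper is hypothetical, returning the pair (effective address, new value of Rn).

```python
def pre_indexed(rn, offset):
    """EA = Rn + offset; Rn is left unchanged."""
    return rn + offset, rn

def pre_indexed_writeback(rn, offset):
    """EA = Rn + offset; the EA is then written back into Rn."""
    ea = rn + offset
    return ea, ea

def post_indexed(rn, offset):
    """EA = Rn; Rn is then updated to Rn + offset."""
    return rn, rn + offset

base = 0x1000
assert pre_indexed(base, 8) == (0x1008, 0x1000)
assert pre_indexed_writeback(base, 8) == (0x1008, 0x1008)
assert post_indexed(base, 8) == (0x1000, 0x1008)
```

Post-indexed addressing is what makes tight array-walking loops possible: the pointer register advances as a side effect of each transfer.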

ARM Indexed Addressing Modes

Relative Addressing Mode

The operand must be within the range of 4095 bytes forward or backward from the
updated PC.

Pre-Indexed Addressing Mode

Post-Indexed Addressing with Writeback

4. HOW IS DATA TRANSFERRED IN ARM USING INSTRUCTIONS? [November 2012]
Data transfer instructions
Data transfer instructions move data between ARM registers and memory. There are
three basic forms of data transfer instruction in the ARM instruction set:
Single register load and store instructions.
These instructions provide the most flexible way to transfer single data items between an
ARM register and memory. The data item may be a byte, a 32-bit word, or a 16-bit half-word.
(Older ARM chips may not support half-words.)
Multiple register load and store instructions.
These instructions are less flexible than single register transfer instructions, but enable
large quantities of data to be transferred more efficiently. They are used for procedure entry and
exit, to save and restore workspace registers, and to copy blocks of data around memory.
Single register swap instructions.
These instructions allow a value in a register to be exchanged with a value in memory,
effectively doing both a load and a store operation in one instruction. They are little used in
user-level programs, so they will not be discussed further in this section. Their principal use is to
implement semaphores to ensure mutual exclusion on accesses to shared data structures in
multiprocessor systems, but don't worry if this explanation has little meaning for you at the moment.
It is quite possible to write any program for the ARM using only the single register load
and store instructions, but there are situations where the multiple register transfers are much
more efficient, so the programmer should be familiar with them.
Register-indirect addressing
Towards the end of Section 1.4 on page 14 there was a discussion of memory addressing
mechanisms that are available to the processor instruction set designer. The ARM data transfer
instructions are all based around register-indirect addressing, with modes that include
base-plus-offset and base-plus-index addressing.
Register-indirect addressing uses a value in one register (the base register) as a memory
address and either loads the value from that address into another register or stores the value from
another register into that memory address.


These instructions are written in assembly language as follows:


LDR r0, [r1]    ; r0 := mem32[r1]
STR r0, [r1]    ; mem32[r1] := r0

Other forms of addressing all build on this form, adding immediate or register off-sets to
the base address. In all cases it is necessary to have an ARM register loaded with an address
which is near to the desired transfer address, so we will begin by looking at ways of getting
memory addresses into a register.
Initializing an address pointer
To load or store from or to a particular memory location, an ARM register must be
initialized to contain the address of that location, or, in the case of single register transfer
instructions, an address within 4 Kbytes of that location.
If the location is close to the code being executed it is often possible to exploit the fact
that the program counter, r15, is close to the desired address. A data processing instruction can
be employed to add a small offset to r15, but calculating the appropriate offset may not be that
straightforward. However, this is the sort of tricky calculation that assemblers are good at, and
ARM assemblers have an inbuilt 'pseudo instruction', ADR, which makes this easy. A pseudo
instruction looks like a normal instruction in the assembly source code but does not correspond
directly to a particular ARM instruction. Instead, the assembler has a set of rules which enable it
to select the most appropriate ARM instruction or short instruction sequence for the situation in
which the pseudo instruction is used. (In fact, ADR is always assembled into a single ADD or
SUB instruction.) As an example, consider a program which must copy data from TABLE1 to
TABLE2, both of which are near to the code:


Here we have introduced labels (COPY, TABLE1 and TABLE2), which are simply
names given to particular points in the assembly code. The first ADR pseudo instruction causes
r1 to contain the address of the data that follows TABLE1; the second ADR likewise causes r2
to hold the address of the memory starting at TABLE2. Of course any ARM instruction can be
used to compute the address of a data item in memory, but for the purposes of small programs
the ADR pseudo instruction will do what we require.
Single register load and store instructions
These instructions compute an address for the transfer using a base register, which should
contain an address near to the target address, and an offset which may be another register or an
immediate value. We have just seen the simplest form of these instructions, which does not use
an offset:
LDR r0, [r1]    ; r0 := mem32[r1]
STR r0, [r1]    ; mem32[r1] := r0
The notation used here indicates that the data quantity is the 32-bit memory word
addressed by r1. The word address in r1 should be aligned on a 4-byte boundary, so the two least
significant bits of r1 should be zero. We can now copy the first word from one table to the other.

We could now use data processing instructions to modify both base registers ready for the
next transfer:


Note that the base registers are incremented by 4 (bytes), since this is the size of a word.
If the base register was word-aligned before the increment, it will be word-aligned afterwards
too.
All load and store instructions could use just this simple form of register-indirect
addressing. However, the ARM instruction set includes more addressing modes that can make
the code more efficient.
Base plus offset addressing
If the base register does not contain exactly the right address, an offset of up to 4 Kbytes
may be added (or subtracted) to the base to compute the transfer address:
LDR r0, [r1, #4]    ; r0 := mem32[r1 + 4]

This is a pre-indexed addressing mode. It allows one base register to be used to access a
number of memory locations which are in the same area of memory.
Sometimes it is useful to modify the base register to point to the transfer address. This
can be achieved by using pre-indexed addressing with auto-indexing, and allows the program to
walk through a table of values:
LDR r0, [r1, #4]!    ; r0 := mem32[r1 + 4]; r1 := r1 + 4

The exclamation mark indicates that the instruction should update the base register after
initiating the data transfer. On the ARM this auto-indexing costs no extra time since it is


performed on the processor's data path while the data is being fetched from memory. It is exactly
equivalent to preceding a simple register-indirect load with a data processing instruction that
adds the offset (4bytes in this example) to the base register, but the time and code space cost of
the extra instruction are avoided.
Another useful form of the instruction, called post-indexed addressing, allows the base to
be used without an offset as the transfer address, after which it is auto-indexed:
LDR r0, [r1], #4    ; r0 := mem32[r1]; r1 := r1 + 4

Here the exclamation mark is not needed, since the only use of the immediate offset is as
a base register modifier. Again, this form of the instruction is exactly equivalent to a simple
register-indirect load followed by a data processing instruction, but it is faster and occupies less
code space.
Using the last of these forms we can now improve on the table copying program example
introduced earlier:

The load and store instructions are repeated until the required number of values has been
copied into TABLE2, and then the loop is exited. Control flow instructions are required to
determine the loop exit; they will be introduced shortly.
In the above examples the address offset from the base register was always an immediate
value. It can equally be another register, optionally subject to a shift operation before being


added to the base, but such forms of the instruction are less useful than the immediate offset
form.
As a final variation, the size of the data item which is transferred may be a single
unsigned 8-bit byte instead of a 32-bit word. This option is selected by adding a letter B onto the
opcode:
LDRB r0, [r1]    ; r0 := mem8[r1]

In this case the transfer address can have any alignment and is not restricted to a 4-byte
boundary, since bytes may be stored at any byte address. The loaded byte is placed in the bottom
byte of r0 and the remaining bytes in r0 are filled with zeros. All but the oldest ARM processors
also support signed bytes, where the top bit of the byte indicates whether the value should be
treated as positive or negative, and signed and unsigned 16-bit half-words.
Multiple register data transfers
Where considerable quantities of data are to be transferred, it is preferable to move
several registers at a time. These instructions allow any subset (or all) of the 16 registers to be
transferred with a single instruction. The trade-off is that the available addressing modes are
more restricted than with a single register transfer instruction. A simple example of this
instruction class is:

LDMIA r1, {r0, r2, r5}

Since the transferred data items are always 32-bit words, the base address (r1) should be
word-aligned.
The transfer list, within the curly brackets, may contain any or all of r0 to r15. The order
of the registers within the list is insignificant and does not affect the order of transfer or the
values in the registers after the instruction has executed. It is normal practice, however, to
specify the registers in increasing order within the list.
Note that including r15 in the list will cause a change in the control flow, since r15 is the
PC. We will return to this case when we discuss control flow instructions and will not consider it
further until then.


The above example illustrates a common feature of all forms of these instructions: the
lowest register is transferred to or from the lowest address, and then the other registers
are transferred in order of register number to or from consecutive word addresses above
the first. However there are several variations on how the first address is formed, and
auto-indexing is also available (again by adding a '!' after the base register).
Stack addressing
The addressing variations stem from the fact that one use of these instructions is to
implement stacks within memory. A stack is a form of last-in-first-out store which supports
simple dynamic memory allocation, that is, memory allocation where the address to be used to
store a data value is not known at the time the program is compiled or assembled. An example
would be a recursive function, where the depth of recursion depends on the value of the
argument. A stack is usually implemented as a linear data structure which grows up (an
ascending stack) or down (a descending stack) memory as data is added to it and shrinks back as
data is removed. A stack pointer holds the address of the current top of the stack, either by
pointing to the last valid data item pushed onto the stack (a full stack), or by pointing to the
vacant slot where the next data item will be placed (an empty stack).
The above description suggests that there are four variations on a stack, representing all
the combinations of ascending and descending full and empty stacks. The ARM multiple register
transfer instructions support all four forms of stack:
Full ascending: the stack grows up through increasing memory addresses and the base
register points to the highest address containing a valid item.
Empty ascending: the stack grows up through increasing memory addresses and the base
register points to the first empty location above the stack.
Full descending: the stack grows down through decreasing memory addresses and the
base register points to the lowest address containing a valid item.
Empty descending: the stack grows down through decreasing memory addresses and the
base register points to the first empty location below the stack.
Block copy addressing
Although the stack view of multiple register transfer instructions is useful, there are
occasions when a different view is easier to understand. For example, when these instructions are
used to copy a block of data from one place in memory to another, a mechanistic view of the


addressing process is more useful. Therefore the ARM assembler supports two different views of
the addressing mechanism, both of which map onto the same basic instructions, and which can
be used interchangeably. The block copy view is based on whether the data is to be stored above
or below the address held in the base register and whether the address incrementing or
decrementing begins before or after storing the first value. The block copy view shows how
each variant stores three registers into memory and how the base register is modified if
auto-indexing is enabled. The base register value before the instruction is r9, and after the
auto-indexing it is r9'.
To illustrate the use of these instructions, here are two instructions which copy eight
words from the location r0 points to, to the location r1 points to:
LDMIA r0!, {r2-r9}
STMIA r1,  {r2-r9}
After executing these instructions r0 has increased by 32 since the '!' causes it to
auto-index across eight words, whereas r1 is unchanged. If r2 to r9 contained useful values, we
could preserve them across this operation by pushing them onto a stack:
STMFD r13!, {r2-r9}    ; save regs onto stack
LDMIA r0!,  {r2-r9}
STMIA r1,   {r2-r9}
LDMFD r13!, {r2-r9}    ; restore from stack

Here the 'FD' postfix on the first and last instructions signifies the full descending stack
addressing mode as described earlier. Note that auto-indexing is almost always specified for stack
operations in order to ensure that the stack pointer has a consistent behavior.
The load and store multiple register instructions are an efficient way to save and restore
processor state and to move blocks of data around in memory. They save code space and operate
up to four times faster than the equivalent sequence of single register load or store instructions
(a factor of two due to improved sequential behavior and another factor of nearly two due to the
reduced instruction count). This significant advantage suggests that it is worth thinking carefully
about how data is organized in memory in order to maximize the potential for using multiple
register data transfer instructions to access it.
These instructions are, perhaps, not pure 'RISC' since they cannot be executed in a single
clock cycle even with separate instruction and data caches, but other RISC architectures are
beginning to adopt multiple register transfer instructions in order to increase the data bandwidth
between the processor's registers and the memory.


The ARM multiple register transfer instructions are uniquely flexible in being able to
transfer any subset of the 16 currently visible registers.

5. WHAT ARE THE Load/Store Multiple Operands IN ARM PROCESSOR?

In ARM processors, there are two instructions for loading and storing multiple operands.
They are called block transfer instructions.

Any subset of the general purpose registers can be loaded or stored


Only word operands are allowed, and the OP codes used are LDM (Load Multiple)
and STM (Store Multiple).

The memory operands must be in successive word locations

All of the forms of pre- and post-indexing with and without writeback are available

They operate on a base register Rn specified in the instruction, and the offset is always 4.

LDMIA R10!, {R0, R1, R6, R7}

IA (Increment After) corresponds to post-indexing.

6. Classify the instruction set of ARM and explain [April/May 2012]


6. Explain the Arithmetic Instructions of ARM?

The basic expression for arithmetic instructions is


OPcode Rd, Rn, Rm

For example, ADD R0, R2, R4 performs the operation R0 ← [R2] + [R4].

SUB R0, R6, R5 performs the operation R0 ← [R6] - [R5].

Immediate mode: ADD R0, R3, #17 performs the operation R0 ← [R3] + 17.

The second operand can be shifted or rotated before being used in the operation.
For example, ADD R0, R1, R5, LSL #4 operates as follows: the second operand
stored in R5 is shifted left 4-bit positions (equivalent to [R5] x 16), and it is then added to
the contents of R1; the sum is placed in R0.


7. What are the Logic Instructions of ARM?

The logic operations AND, OR, XOR, and Bit-Clear are implemented by instructions
with the OP codes AND, ORR, EOR, and BIC.

For example, AND R0, R0, R1 performs R0 ← [R0] ∧ [R1].

The Bit-Clear instruction (BIC) is closely related to the AND instruction.


It complements each bit in operand Rm before ANDing them with the bits in
register Rn. For example, BIC R0, R0, R1. Let R0 = 02FA62CA and R1 = 0000FFFF.
Then the instruction results in the pattern 02FA0000 being placed in R0.

The Move Negative instruction complements the bits of the source operand and places
the result in Rd.
For example, MVN R0, R3

8. Explain the Branch Instructions in detail?

Conditional branch instructions contain a signed 24-bit offset that is added to the updated
contents of the Program Counter to generate the branch target address

The format for the branch instructions is shown as below

Offset is a signed 24-bit number. It is shifted left two bit positions (all branch
targets are aligned word addresses), sign-extended to 32 bits, and added to the updated
PC to generate the branch target address.
The updated PC points to the instruction that is two words (8 bytes) forward from the
branch instruction.

The BEQ instruction (Branch if Equal to 0) causes a branch if the Z flag is set to 1


Setting Condition Codes

Some instructions, such as Compare, given by CMP Rn, Rm, which performs the
operation [Rn] - [Rm], have the sole purpose of setting the condition code flags based on
the result of the subtraction operation.

The arithmetic and logic instructions affect the condition code flags only if explicitly
specified to do so by a bit in the OP-code field. This is indicated by appending the suffix
S to the OP-code
For example, the instruction ADDS R0, R1, R2 sets the condition code flags,
but ADD R0, R1, R2 does not.

An Example of Adding Numbers


9. Explain the Assembly Language of ARM Processor?

The ARM assembly language has assembler directives to reserve storage space, assign
numerical values to address labels and constant symbols, define where program and data
blocks are to be placed in memory, and specify the end of the source program text

The AREA directive, which uses the argument CODE or DATA, indicates the beginning
of a block of memory that contains either program instructions or data

The ENTRY directive specifies that program execution is to begin at the following LDR
instruction

In the data area, which follows the code area, the DCD directives are used to label and
initialize the data operands

An Example of Assembly Language

An EQU directive can be used to define symbolic names for constants

For example, the statement


TEN EQU 10

When a number of registers are used in a program, it is convenient to use symbolic names
for them that relate to their usage
The RN directive is used for this purpose
For example, COUNTER RN 3 establishes the name COUNTER for register R3


The register names R0 to R15, PC (for R15), and LR (for R14) are predefined by the
assembler. R14 is used as the link register (LR).

10. What are Pseudo-Instructions?

An alternative way of loading the address into register R2 is also provided in the
assembly language

The pseudo-instruction ADR Rd, ADDRESS loads the 32-bit value ADDRESS into Rd
This instruction is not an actual machine instruction
The assembler chooses appropriate real machine instructions to implement
pseudo-instructions

For example,
The combination of the machine instruction LDR R2, POINTER and the data
declaration directive POINTER DCD NUM1 is one way to implement the
pseudo-instruction ADR R2, NUM1.

Subroutines

A Branch and Link (BL) instruction is used to call a subroutine

The return address is loaded into register R14, which acts as a link register

When subroutines are nested, the contents of the link register must be saved on a stack by
the subroutine.
Register R13 is normally used as the pointer for this stack

Example

Byte-Sorting Program



11. EXPLAIN THE I/O OPERATIONS AND INTERRUPT STRUCTURE


There are two types of interrupts available on ARM processor. The first type is the
interrupt caused by external events from hardware peripherals and the second type is the SWI
instruction.
The ARM core has only one FIQ pin, that is why an external interrupt controller is
always used so that the system can have more than one interrupt source which are prioritized
with this interrupt controller and then the FIQ interrupt is raised and the handler identifies which
of the external interrupts was raised and handle it.
How are interrupts assigned?


It is up to the system designer who can decide which hardware peripheral can produce which
interrupt request. By using an interrupt controller we can connect multiple external interrupts to
one of the ARM interrupt requests and distinguish between them. There is a standard design for
assigning interrupts adopted by system designers:

SWIs are normally used to call privileged operating system routines.

IRQs are normally assigned to general purpose interrupts like periodic timers.

FIQ is reserved for one single interrupt source that requires fast response time, like

DMA or any time critical task that requires fast response.

Interrupt Latency
It is the interval of time from an external interrupt signal being raised to the first
fetch of an instruction of the ISR of the raised interrupt signal. System architects must balance
between two things, first is to handle multiple interrupts simultaneously, second is to minimize
the interrupt latency. Minimization of the interrupt latency is achieved by software handlers by
two main methods, the first one is to allow nested interrupt handling so the system can respond
to new interrupts during handling an older interrupt. This is achieved by enabling interrupts
immediately after the interrupt source has been serviced but before finishing the interrupt
handling. The second one is the possibility to give priorities to different interrupt sources; this is
achieved by programming the interrupt controller to ignore interrupts of the same or lower
priority than the interrupt being handled if there is one.
IRQ and FIQ exceptions
Both exceptions occur when a specific interrupt mask is cleared in the CPSR. The ARM
processor will continue executing the current instruction in the pipeline before handling the
interrupt. The processor hardware goes through the following standard procedure:
The processor changes to a specific mode depending on the received interrupt.
The previous mode CPSR is saved in SPSR of the new mode.
The PC is saved in the LR of the new mode.
Interrupts are disabled, either IRQ or both IRQ and FIQ.
The processor branches to a specific entry in the vector table.
Enabling/Disabling FIQ and IRQ exceptions is done on three steps; at first loading the
contents of CPSR then setting/clearing the mask bit required then copy the updated contents back
to the CPSR.


Interrupt stack
Exception handling uses stacks extensively because each exception has a specific mode
of operation, so switching between modes occurs and saving the previous mode data is required
before switching so that the core can switch back to its old state successfully. Each mode has a
dedicated register containing a stack pointer. The design of these stacks depends on some factors
like operating system requirements for stack design and target hardware physical limits on size
and position in memory. Most ARM-based systems have the stack designed such that the top of
it is located at a high memory address. A good stack design tries to avoid stack overflow because
this causes instability in embedded systems.
In the following figure we have two memory layouts which show how the stack is placed in
memory:

The first is the traditional stack layout. The second layout has the advantage that when overflow
occurs, the vector table remains untouched so the system has the chance to correct itself.
Interrupt handling schemes
Here we introduce some interrupt handing schemes with some notes on each scheme
about its advantages and disadvantages.
Non-nested interrupt handling
This is the simplest interrupt handler. Interrupts are disabled until control is returned back
to the interrupted task. So only one interrupt can be served at a time and that is why this scheme
is not suitable for complex embedded systems which most probably have more than one interrupt
source and require concurrent handling. Figure 5 shows the steps taken to handle an interrupt:
Initially interrupts are disabled. When an IRQ exception is raised, the ARM processor
disables further IRQ exceptions from occurring. The mode is changed to the new mode


depending on the raised exception. The register CPSR is copied to the SPSR of the new mode.
Then the PC is set to the correct entry in the vector table and the instruction there will direct the
PC to the appropriate handler. Then the context of the current task is saved (a subset of the
current mode's non-banked registers). Then the interrupt handler executes some code to identify the
interrupt source and decide which ISR will be called. Then the appropriate ISR is called. And
finally the context of the interrupted task is restored, interrupts are enabled again and the control
is returned to the interrupted task.

Non-nested interrupt handling summary:


Handle and service individual interrupts sequentially.
High interrupt latency.
Relatively easy to implement and debug.
Not suitable for complex embedded systems.
Nested interrupt handling
In this handling scheme handling more than one interrupt at a time is possible. This is
achieved by re-enabling interrupts before the handler has fully served the current interrupt. This
feature increases the complexity of the system but improves the latency. The scheme should be
designed carefully to protect the context saving and restoration from being interrupted. The
designer should balance between efficiency and safety by using defensive coding style that
assumes problems will occur. The goal of nested handling is to respond to interrupts quickly and
to execute periodic tasks without any delays. Re-enabling interrupts requires switching out of the
IRQ mode to user mode to protect link register from being corrupted. Also performing context
switch requires emptying the IRQ stack because the handler will not perform switching if there is


data on the IRQ stack, so all registers saved on the IRQ stack have to be transferred to task stack.
The part of the task stack used in this process is called stack frame. The main disadvantage of
this interrupt handling scheme is that it doesn't differentiate between interrupts by priority, so a
lower priority interrupt can block higher priority interrupts.

Nested interrupt handling summary:


Handle multiple interrupts without a priority assignment.
Medium or high interrupt latency.
Enable interrupts before the servicing of an individual interrupt is complete.
No prioritization, so low priority interrupts can block higher priority interrupts.
Prioritized simple interrupt handling
In this scheme the handler will associate a priority level with a particular interrupt source.
A higher priority interrupt will take precedence over a lower priority interrupt. Handling
prioritization can be done by means of software or hardware. In case of hardware prioritization
the handler is simpler to design because the interrupt controller will give the interrupt signal of
the highest priority interrupt requiring service. But on the other side the system needs more
initialization code at start-up since priority level tables have to be constructed before the system
being switched on.


When an interrupt signal is raised, a fixed number of comparisons with the available set
of priority levels is done, so the interrupt latency is deterministic, but at the same time this could
be considered a disadvantage because both high and low priority interrupts take the same amount
of time.
Prioritized simple interrupt handling summary:
Handle multiple interrupts with a priority assignment mechanism.
Low interrupt latency.
Deterministic interrupt latency.
Time taken to get to a low priority ISR is the same as for high priority ISR.
Other schemes
There are some other schemes for handling interrupts, designers have to choose the suitable one
depending on the system being designed.
Re-entrant interrupt handler:
The basic difference between this scheme and the nested interrupt handling is that
interrupts are re-enabled early on the re-entrant interrupt handler which can reduce interrupt
latency. The interrupt of the source is disabled before re-enabling interrupts to protect the system
from getting an infinite interrupt sequence. This is done by using a mask in the interrupt


controller. By using this mask, prioritizing interrupts is possible but this handler is more
complex.
Prioritized standard interrupt handler:
It is the alternative approach of prioritized simple interrupt handler; it has the advantage
of low interrupt latency for higher priority interrupts than the lower priority ones. But the
disadvantage now is that the interrupt latency is nondeterministic.
Prioritized grouped interrupt handler:
This handler is designed to handle large amount of interrupts by grouping interrupts
together and forming a subset which can have a priority level. This way of grouping reduces the
complexity of the handler since it doesn't scan through every interrupt to determine the priority.
If the prioritized grouped interrupt handler is well designed, it will improve the overall system
response times dramatically, on the other hand if it is badly designed such that interrupts are not
well grouped, then some important interrupts will be dealt as low priority interrupts and vice
versa. The most complex and possibly critical part of such scheme is the decision on which
interrupts should be grouped together in one subset.

12. WHAT IS ARM CACHE? [November 2012]


All modern ARM-based processors are equipped with either on-chip L1 cache or on-chip
memory. In most implementations, cache is split into separate instruction and data caches. Both
caches are typically virtually addressed and set-associative with high degree of associativity (up
to 64-way). The instruction cache is read-only, while the data cache is read/write with copy-back
write strategy. One less standard feature present in most ARM processors is cache lock-down
capability that allows locking critical sections of code or data in cache.

13. EXPLAIN ARM BUS IN DETAIL? [November 2012]


AMBA (Advanced Microcontroller Bus Architecture)
AHB (Advanced High-performance Bus)

AMBA-AHB interfaces the memory, external DRAM (dynamic RAM) controller and on-chip
I/O devices.

AMBA-AHB connects to 32-bit data and 32-bit address lines at high speed.

The maximum AHB bandwidth in bps is sixteen times the ARM processor clock.


AMBA (Advanced Microcontroller Bus Architecture)

APB (Advanced Peripheral Bus)

AMBA-APB interfaces the ARM processor with the memory (on AMBA-AHB) and external
off-chip I/O devices, which operate at low speed, using a bridge (the AMBA-APB bridge).
AMBA-APB Bridge

Switches ARM CPU communication with the AMBA bus to APB bus.

An ARM processor based microcontroller has a single data bus in AMBA-AHB that
connects to the bridge, which is integrated onto the same integrated circuit as the
processor to reduce the number of chips required to build a system and thus the system
cost.

The bridge communicates with the memory through the AMBA-AHB, a dedicated set of
wires that transfer data between these two systems.

A separate APB I/O bus connects the bridge to the I/O devices.

ARM BUS

APB bus connects

I2C

touch screen


SDIO

MMC (multimedia card)

USB

CAN bus and other required interfaces to an ARM microcontroller

CONCLUSION
We learnt:
The ARM bus has two types: AMBA-AHB and AMBA-APB.
AHB connects high-speed memory.
APB connects the external peripherals to the system memory bus through a bridge.

14. EXPLAIN EMBEDDED SYSTEMS WITH ARM?[November 2011]


The ARM processor core is a key component of many successful 32-bit embedded
systems. ARM cores are widely used in mobile phones, handheld organizers, and a multitude of
other everyday portable consumer devices. ARM's designers have come a long way from the
first ARM1 prototype in 1985. Over one billion ARM processors had been shipped worldwide
by the end of 2001. The ARM company bases their success on a simple and powerful original
design, which continues to improve today through constant technical innovation. In fact, the
ARM core is not a single core, but a whole family of designs sharing similar design principles
and a common instruction set. For example, one of ARM's most successful cores is the
ARM7TDMI. It provides up to 120 Dhrystone MIPS and is known for its high code density and
low power consumption, making it ideal for mobile embedded devices. In this first chapter we
discuss how the RISC (reduced instruction set computer) design philosophy was adapted by
ARM to create a flexible embedded processor. We then introduce an example embedded device
and discuss the typical hardware and software technologies that surround an ARM processor.
The RISC design philosophy
The ARM core uses a RISC architecture. RISC is a design philosophy aimed at delivering simple
but powerful instructions that execute within a single cycle at a high clock speed. The RISC
philosophy concentrates on reducing the complexity of instructions performed by the hardware
because it is easier to provide greater flexibility and intelligence in software rather than
hardware. As a result, a RISC design places greater demands on the compiler. In contrast, the
traditional complex instruction set computer (CISC) relies more on the hardware for instruction


functionality, and consequently the CISC instructions are more complicated. Figure 1.1
illustrates these major differences. The RISC philosophy is implemented with four major design
rules:
1. Instructions: RISC processors have a reduced number of instruction classes. These classes
provide simple operations that can each execute in a single cycle. The compiler or programmer
synthesizes complicated operations (for example, a divide operation) by combining several
simple instructions. Each instruction is a fixed length to allow the pipeline to fetch future
instructions before decoding the current instruction. In contrast, in CISC processors the
instructions are often of variable size and take many cycles to execute.
2. Pipelines: the processing of instructions is broken down into smaller units that can be
executed in parallel by pipelines. Ideally the pipeline advances by one step on each cycle for
maximum throughput. Instructions can be decoded in one pipeline stage. There is no need for an
instruction to be executed by a mini program called microcode as on CISC processors.
3. Registers: RISC machines have a large general-purpose register set. Any register can contain
either data or an address. Registers act as the fast local memory store for all data processing
operations. In contrast, CISC processors have dedicated registers for specific purposes.

Figure 1.1: CISC emphasizes hardware complexity; RISC shifts complexity to the compiler.


4. Load-store architecture: the processor operates on data held in registers. Separate load and
store instructions transfer data between the register bank and external memory. Memory accesses
are costly, so separating memory accesses from data processing provides an advantage because
you can use data items held in the register bank multiple times without needing multiple memory
accesses. In contrast, with a CISC design the data processing operations can act on memory
directly.
These design rules allow a RISC processor to be simpler, and thus the core can operate at higher
clock frequencies. In contrast, traditional CISC processors are more complex and operate at


lower clock frequencies. Over the course of two decades, however, the distinction between RISC
and CISC has blurred as CISC processors have implemented more RISC concepts.

15. WHAT ARE THE SERIAL BUS PROTOCOLS [November 2014]

CAN BUS

CAN (Controller Area Network) is a serial bus system, which was originally developed
for automotive applications in the early 1980s.

The CAN protocol was internationally standardized in 1993 as ISO 11898-1 and
comprises the data link layer of the seven layer ISO/OSI reference model.

CAN provide two communication services: the sending of a message (data frame
transmission) and the requesting of a message (remote transmission request, RTR).

The equivalent of the CAN protocol in human communication is, for example, the Latin
alphabet: CAN users still have to define the language/grammar and the
words/vocabulary with which to communicate.

CAN provides:
* A multi-master hierarchy, which allows building intelligent and redundant systems.
* Broadcast communication: a sender transmits information to all devices on the bus;
every receiving device reads the message and then decides whether it is relevant to it.
* Sophisticated error-detecting mechanisms and re-transmission of faulty messages.

EXAMPLE OF CAN BUS


CAN PROTOCOL
1. PRINCIPLES OF DATA EXCHANGE

As a result of the content-oriented addressing scheme, a high degree of system and
configuration flexibility is achieved.

It is easy to add stations to an existing CAN network without making any hardware or
software modifications to the present stations as long as the new stations are purely
receivers.

This allows for a modular concept and also permits the reception of multiple data and the
synchronization of distributed processes.

Data transmission is not based on the availability of specific types of stations allowing
simple servicing and upgrading of the network.

2. REAL TIME DATA TRANSMISSION

In real-time processing the urgency of messages to be exchanged over the network can
differ greatly.

The priority, at which a message is transmitted compared to another less urgent message,
is specified by the identifier of each message.


Bus access conflicts are resolved by bit-wise arbitration of the identifiers involved, with
each station observing the bus level bit for bit. This happens in accordance with the
wired-AND mechanism, by which the dominant state overwrites the recessive state.

Transmission requests are handled in order of their importance for the system as a whole.

3. MESSAGE FRAME FORMATS

The CAN protocol supports two message frame formats, the only essential difference
being in the length of the identifier.

The CAN base frame supports a length of 11 bits for the identifier and the CAN
extended frame supports a length of 29 bits for the identifier.

4. DETECTING AND SIGNALING ERRORS

For error detection the CAN protocol implements three mechanisms at the message level:
Cyclic Redundancy Check (CRC)
Frame check
ACK errors

The CAN protocol also implements two mechanisms for error detection at the bit level:


Monitoring: Each station that transmits also observes the bus level and thus
detects differences between the bit sent and the bit received.
Bit stuffing: The bit representation used by CAN is Non-Return-to-Zero (NRZ)
coding. Synchronization edges are generated by means of bit stuffing: after five
consecutive bits of equal level, a stuff bit of the complementary value is inserted,
which the receivers remove.
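The stuffing rule can be illustrated with a small sketch (the string representation of the bit stream is purely illustrative): after five consecutive bits of the same level, the transmitter inserts one bit of the complementary value, which receivers later strip out:

```cpp
#include <string>

// Insert a stuff bit of complementary value after every run of five
// identical bits, as a CAN transmitter does to guarantee edges for
// receiver synchronization under NRZ coding.
std::string stuffBits(const std::string& in) {
    std::string out;
    int run = 0;
    char last = 0;
    for (char b : in) {
        if (b == last) ++run; else { last = b; run = 1; }
        out += b;
        if (run == 5) {                 // five equal bits in a row
            last = (b == '0') ? '1' : '0';
            out += last;                // insert the complementary stuff bit
            run = 1;                    // the stuff bit starts a new run
        }
    }
    return out;
}
```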
CAN hardware implementations cover the lower two layers of the OSI reference model while
various software solutions (higher layer protocols) cover the layers three to seven.

CAN protocol define the data link layer and part of the physical layer in the OSI model,
which consists of seven layers. The International Standards Organization (ISO) defined a
standard, which incorporates the CAN specifications as well as a part of physical layer:
the physical signaling, which comprises bit encoding and decoding (Non-Return-to- Zero,
NRZ) as well as bit timing and synchronization.

The CAN physical medium is a two-wire bus line with a common return, terminated at both
ends by resistors. A differential signal is used for better noise immunity. The following figure
shows a transmit signal from a CAN controller, the differential signal emitted on the line
and the receive signal received by the CAN controller to monitor the CAN bus.

A typical CAN bus in a factory automation application is a single-line bus with stubs to
connect equipment such as PLCs, sensors and drives, as illustrated by the figure below:


CAN DESIGN

USB BUS
Introduction to the USB
USB (Universal Serial Bus) is, as its name suggests, based on a serial-type architecture.
However, it is an input-output interface much quicker than standard serial ports. Serial
architecture was used for this type of port for two main reasons:

serial architecture gives the user a much higher clock rate than a parallel interface, because a
parallel interface does not support very high frequencies (in a high-speed architecture, bits
circulating on each wire arrive with lag, causing errors);

serial cables are much cheaper than parallel cables.

USB standards
So, from 1995, the USB standard has been developed for connecting a wide range of devices.
The USB 1.0 standard offers two modes of communication:

12 Mb/s in high speed mode,

1.5 Mb/s in low speed.

The USB 1.1 standard provides several clarifications for USB device manufacturers but does not
change anything in the speed.


The USB 2.0 standard makes it possible to obtain speeds which can reach 480 Mbit/s.
If there is no logo, the best way of determining whether something is a low- or high-speed
USB device is to consult the product documentation, insofar as the connectors are the same.
Compatibility between USB 1.0, 1.1 and 2.0 is assured. However, the use of a USB 2.0
device on a low-speed USB port (i.e. 1.0 or 1.1) will limit the speed to 12 Mbit/s maximum.
Furthermore, the operating system is likely to display a message explaining that the speed will be
restricted.
Types of connectors
There are two types of USB connectors:

Connectors known as type A, whose shape is rectangular and which are generally used for less
bandwidth-intensive devices (keyboard, mouse, webcam, etc.);

Connectors known as type B, whose shape is square and which are mainly used for high-speed
devices (external hard disks, etc.);

1. Power supply +5V (VBUS) 100mA maximum


2. Data (D-)
3. Data (D+)
4. Ground (GND)
Operation of the USB
One characteristic of USB architecture is that it can supply electricity to the devices to which
it connects, with a limit of 2.5 W (500 mA at 5 V) per device. To do so, it uses a cable made up of
four wires (the GND ground wire, the VBUS supply wire and two data wires called D- and D+).


The USB standard allows devices to be chained by using a bus or star topology. So,
devices can either be connected one to another or branched.

Branching is done using boxes called "hubs", each comprising a single input and several outputs.
Some are active (supplying electric energy), others passive (power supplied by the computer).

Communication between the host (computer) and devices is carried out according to
a protocol (communication language) based on the token ring principle. This means that
bandwidth is temporarily shared between all connected devices. The host (computer) issues a
signal to begin the sequence every millisecond (ms), the time interval during which it will
simultaneously give each device the opportunity to "speak". When the host wants to
communicate with a device, it transmits a token (a data packet, containing the address of the
device coded over 7 bits) designating a device, so it is the host that decides to "talk" with the
devices. If the device recognizes its address in the token, it sends a data packet (between 8 and
255 bytes) in response; if not, it passes the packet to the other connected devices. Data
exchanged in this way is coded according to NRZI coding.
Since the address is coded over 7 bits, 128 devices (2^7) can simultaneously be
connected to a port of this type. In practice, it is advisable to reduce this number to 127,
because address 0 is a reserved address.
Due to the maximum cable length of 5 meters between two devices and a
maximum of 5 (supplied) hubs, it is possible to create a chain 25 meters in length.
USB ports support hot plug and play. So, devices can be connected without turning off
the computer (hot plug). When a device is connected to the host, the host detects the addition of
a new item thanks to a change in the voltage between the D+ and D- wires. At that moment, the
computer sends an initialization signal to the device for 10 ms, then it supplies the current using
the GND and VBUS wires (up to 100 mA). The device is then supplied with electric current and
temporarily takes over the default address (address 0). The following stage consists of
giving it its definitive address (this is the enumeration procedure). To do so, the computer
interrogates the devices already connected to learn their addresses and allocates a new one, which


identifies it by return. The host, having all the necessary characteristics, is then able to load the
appropriate driver.

16. EXPLAIN THE PARALLEL BUS PROTOCOLS? [November 2011]


PCI BUS
What is PCI Bus?

Expansion bus to a processor bus. Devices connected to the PCI bus appear to the
processor as if they were directly connected to the processor bus. A low-cost bus, truly
processor independent.

Growing demand for bus bandwidth to support high speed disks, graphic and video
devices, and specialized needs of multiprocessor systems.

A plug and play capability for connecting I/O devices.

The PCI Local Bus is a high performance 32-bit or 64-bit bus with multiplexed address and data
lines.
The bus is intended for use as an interconnect mechanism between highly integrated peripheral
controller components, peripheral add-in boards and processor/memory systems.

PCI Features
The PCI component and add-in card interface is processor independent enabling an efficient
transition to future processor generations and use with multiple processor architectures.
A transparent 64-bit extension of the 32-bit data and address buses is defined doubling the bus
bandwidth and offering forward and backward compatibility of 32-bit and 64-bit PCI
peripherals.


A forward and backward compatible 66 MHz specification is also defined doubling the
capabilities of the 33 MHz definition.
Configuration registers are specified for PCI components and add-in cards. A system with
embedded auto-configuration software offers true ease-of-use by automatically configuring PCI
add-in cards at power on. PCI specifies one interrupt line for a single-function device and up to 4
interrupt lines for a multi-function device.
PCI Local Bus Overview

PCI Local Bus Features and Benefits:


High Performance: Transparent upgrade from 32-bit data path at 33 MHz (132 MB/s peak) to
64-bit data path at 33 MHz (264 MB/s peak) and from 32-bit data path at 66 MHz (264 MB/s
peak) to 64-bit data path at 66 MHz (528 MB/s peak).
Low Latency: Random accesses (60 ns write-access latency for 33 MHz PCI, or 30 ns for 66
MHz PCI) to slave registers from a master parked on the bus.
Concurrency: Capable of full concurrency with processor / memory subsystem.
Synchronous: The bus is synchronous, with operation up to 33 MHz or 66 MHz.
Bus Management: Hidden overlapped Centralized Bus Arbitration.
Reduced Pin Count: Multiplexed Architecture reduces pin count to 47 signals for target and 49
for the Master.
Portability: Single PCI add-in card works on ISA, EISA or MCA based computer systems.
Ease of Use: PCI devices contain registers with the device information required for
configuration.
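The peak bandwidth figures quoted above follow from a simple calculation: during a burst, one data phase can complete per clock, so peak MB/s = bus width in bytes x clock in MHz. A minimal sketch:

```cpp
// Peak PCI burst bandwidth: one data phase per clock, so
// (width in bytes) x (clock in MHz) gives MB/s.
// 32-bit @ 33 MHz: 4 x 33 = 132 MB/s; 64-bit @ 66 MHz: 8 x 66 = 528 MB/s.
int peakBandwidthMBps(int widthBits, int clockMHz) {
    return (widthBits / 8) * clockMHz;
}
```

These values match the four combinations listed under "High Performance" above.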
Use of a PCI Bus in a Computer System


GPIB BUS
GPIB Description [IEEE488]

GPIB SYSTEM
The IEEE-488 interface bus, also known as the General Purpose Interface Bus (GPIB), is an
8-bit-wide, byte-serial, bit-parallel interface system which incorporates:
5 control lines
3 handshake lines
8 bi-directional data lines.
The entire bus consists of 24 lines, with the remaining lines occupied by ground wires.
Additional features include: TTL logic levels (negative true logic), the ability to communicate in
a number of different language formats, and no minimum operational transfer limit. The
maximum data transfer rate is determined by a number of factors, but is assumed to be 1 MB/s.
Devices exist on the bus in any one of 3 general forms:
1. Controller
2. Talker
3. Listener
A single device may incorporate all three options, although only one option may be
active at a time. The Controller makes the determination as to which device becomes active on
the bus. The GPIB can handle only 1 active controller on the bus, although it may pass


operation to another controller. Any number of active listeners can exist on the bus with an
active talker, as long as no more than 15 devices are connected to the bus. The controller
determines which devices become active by sending interface messages over the bus to a
particular instrument. Each individual device is associated with a unique 5-bit address.
By using this address, the controller can coordinate the activities on the bus,
and the individual devices can be made to talk or listen (un-talk, un-listen) as determined by the
controller. A controller can only select a particular function of a device if that function is
incorporated within the device; for example, a listen-only device cannot be made to talk to the
controller. The Talker sends data to other devices. The Listener receives the information from the
Talker. In addition to the 3 basic functions of the controller, talker, and listener the system also
incorporates a number of operational features, such as; serial poll, parallel poll, secondary talk
and listen addresses, remote/local capability, and a device clear (trigger). Device dependent
messages are moved over the GPIB in conjunction with the data byte transfer control lines.
These three lines (DAV, NRFD, and NDAC) are used to form a three wire interlocking
handshake which controls the passage of data. The active talker would control the DAV line
(Data Valid) and the listener(s) would control the NRFD (Not Ready for Data), and the
NDAC (Not Data Accepted) line. In the steady state mode the talker will hold DAV high (no
data available) while the listener would hold NRFD high (ready for data) and NDAC low (no
data accepted). After the talker has placed data on the bus, it takes DAV low (data valid).
The listener(s) would then send NRFD low and send NDAC high (data accepted). Before the
talker lifts the data off the bus, DAV will be taken high signifying that data is no longer valid.
If the ATN (attention) line is high while this process occurs, the information is considered data
(a device-dependent message), but with the ATN line low the information is regarded as an
interface message, such as listen, talk, un-listen or un-talk. The other five lines on the bus
(ATN included) are the bus management lines. These lines enable the controller and other
devices on the bus to enable, interrupt, flag, and halt the operation of the bus. All lines in the
GPIB are tri-state except for SRQ, NRFD, and NDAC, which are open-collector. The
standard bus termination is a 3K resistor connected to 5 volts in series with a 6.2K resistor to
ground, all values having a 5% tolerance.
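The three-wire interlocked handshake described above can be sketched as a sequence of signal states. The struct layout and step list are illustrative, and the values are the logic levels used in the text rather than the negative-true voltages on the wire:

```cpp
#include <vector>

// One step of the DAV/NRFD/NDAC handshake (logic levels, talker's view).
struct HandshakeStep { bool dav, nrfd, ndac; };

// Idealized sequence for transferring a single byte:
std::vector<HandshakeStep> oneByteTransfer() {
    return {
        {true,  true,  false},  // idle: DAV high (no data), NRFD high (ready), NDAC low
        {false, true,  false},  // talker takes DAV low: data valid on the bus
        {false, false, true},   // listener: NRFD low (not ready), NDAC high (accepted)
        {true,  false, true},   // talker raises DAV: data no longer valid
        {true,  true,  false},  // listener ready again; next byte may start
    };
}
```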


The standard also allows for identification of the devices on the bus. Each device should
have a string of 1 or 2 letters placed somewhere on the body of the device (near or on the GPIB
connector). These letters signify the capabilities of the device on the GPIB bus.
C Controller
T Talker
L Listener
AH Acceptor Handshake
SH Source Handshake
DC Device Clear
DT Device Trigger
RL Remote Local
PP Parallel Poll
TE Talker Extended
LE Listener Extended

Devices are connected together on the bus in a daisy chained fashion.


Normally the GPIB connector (after being connected to the device with the male side) has a
female interface so that another connector may be attached to it. This allows the devices to be
daisy chained. Devices are connected together in either a Linear or Star fashion.
Most devices operate either via front-panel control or via HPIB control (REMOTE).
While using the front panel the device is in the Local state; when receiving commands via the
HPIB, the device is in the Remote state. The device is placed in the Remote state whenever the
System Controller is reset or powered on, and also when the system controller sends out an Abort
message. In addition, if the device is addressed, it then enters the Remote state.


UNIT II
TWO MARKS
1. Give the summary of I/O devices used in embedded system
2. Define bus.
3. What are the classifications of I/O devices?
4. Give some examples for serial input I/O devices.
5. Give the steps for accomplishing input output data transfer
6. Give the limitations of polling technique.
7. What do you meant by bus arbitration?
8. What are the two characteristics of synchronous communication?
9. What do you mean by asynchronous communication?
10. What are the characteristics of asynchronous communication?
11. What are the three ways of communication for a device?
12. Expand a) SPI b) SCI
13. What are the features of SPI?
14. Define software timer.
15. What are the forms of timer?
16. Define RTC
17. What is I2C?
18. What are the bits in I2C corresponding to?
19. What is a CAN bus? Where is it used?
20. What is USB? Where is it used? [November 2012]
21. What are the features of the USB protocol?
22. What are the four types of data transfer used in USB?
23. Explain briefly about PCI and PCI/X buses. (or) List any two parallel buses used in
embedded Systems [April/May 2014]
24. Mention some advanced bus standard protocols;
25. What do you meant by high speed device interfaces?
26. Mention some I/O standard interfaces.
27. What is the ARM bus architecture? [April/May 2012]
28. What are the types of Interrupt handler?


29. What are the arithmetic instructions?


30. What are the Logic Instructions?
31. Define register. [April/May 2012]
32. List out the design goals of ARM. [APRIL 2013]

UNIT 2
11 MARKS
1. Discuss the ARM architecture in detail (April 2013) [Ref. Page no.: 45]
2. What is the ARM family of processors?
3. What are the registers and memory access processes in ARM? [April 2013] [Ref. Page no.: 59]
4. How is data transferred in ARM using instructions? [November 2012] [Ref. Page no.: 64]
5. What are the load/store multiple operands in the ARM processor?
6. Classify the instruction set of ARM and explain. [April/May 2012] [Ref. Page no.: 73]
6. Explain the arithmetic instructions of ARM.
7. What are the logic instructions of ARM?
8. Explain the branch instructions in detail.
9. Explain the assembly language of the ARM processor.
10. What are pseudo-instructions?
11. Explain the I/O operations interrupt structure.
12. What is the ARM cache? [November 2012] [Ref. Page no.: 84]
13. Explain the ARM bus in detail. [November 2012] [Ref. Page no.: 84]
14. Explain embedded systems with ARM. [November 2011] [Ref. Page no.: 45]
15. What are the serial bus protocols? [November 2014] [Ref. Page no.: 86]
16. Explain the parallel bus protocols. [November 2011] [Ref. Page no.: 93]


PONDICHERRY UNIVERSITY QUESTIONS


1. Discuss the ARM architecture in detail (April 2013) [Ref. Page no.: 45]
2. What are the registers and memory access processes in ARM? [April 2013] [Ref. Page no.: 59]
3. How is data transferred in ARM using instructions? [November 2012] [Ref. Page no.: 64]
4. Classify the instruction set of ARM and explain. [April/May 2012] [Ref. Page no.: 73]
5. What is the ARM cache? [November 2012] [Ref. Page no.: 84]
6. Explain the ARM bus in detail. [November 2012] [Ref. Page no.: 84]
7. Explain embedded systems with ARM. [November 2011] [Ref. Page no.: 45]
8. What are the serial bus protocols? [November 2014] [Ref. Page no.: 86]
9. Explain the parallel bus protocols. [November 2011] [Ref. Page no.: 93]



UNIT III
Software Development:

Embedded Programming in C and C++ - Source Code Engineering Tools for Embedded C/C++ -
Program Modeling Concepts in Single and Multiprocessor Systems - Software Development
Process - Software Engineering Practices in the Embedded Software Development - Hardware /
Software Co-design in an Embedded System.


1. What are the advantages of Assembly language?


It gives the precise control of the processor internal devices and full use of processor
specific features in its instruction sets and addressing modes.
The machine codes are compact, requiring only a small memory.
Device drivers need only a few assembly instructions.
2. What are advantages of high level languages?
Data type declaration
Type checking
Control structures
Probability of non-processor specific codes
3. Define In -line assembly
Inserting an assembly code in between is said to be in-line assembly.
4. Mention the elements of C program.
Files:
o Header files
o Source files
o Configuration files
o Preprocessor directives
Functions:
o Macro function
o Main function
o Interrupt service routines or device drivers
Others:
o Data types
o Data structures
o Modifiers
o Statements
o Loops and pointers
5. What is the use of MACRO function?
A macro function executes a named small collection of codes, with the values passed by
the calling function through its arguments.


Unlike a function call, it has no context saving and retrieving overheads.
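A hedged sketch of such macro functions (the names are illustrative): the preprocessor expands them textually at the call site, so there is no call/return overhead, and arguments should be parenthesized to avoid precedence surprises:

```cpp
// Macro "functions": expanded by the preprocessor at the call site.
// Each argument is parenthesized so expressions like AREA(1 + 2, 4)
// expand correctly.
#define AREA(l, b) ((l) * (b))
#define MAX2(a, b) (((a) > (b)) ? (a) : (b))
```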


6. What is the use of interrupt service routines or device drivers?
An ISR or device driver executes a named set of codes in response to an interrupt from a
device or from software.
An ISR must be small (short), reentrant, or must have a solution for the shared-data problem.
7. What are the datatypes available in C language?
char: 8 bit; byte: 8 bit; short: 16 bit; unsigned short: 16 bit; unsigned int: 32 bit;
int: 32 bit; long double: 64 bit; float: 32 bit; double: 64 bit
8. Mention the data structures available in C language.
1. Queue
2. Stack
3. Array (one-dimensional and multi-dimensional)
4. List
5. Tree
6. Binary-tree
9. Write the syntax for declaration of pointer and Null-pointer.
Syntax for pointer:
void *portAdata;
Syntax for Null-pointer:
#define NULL (void*) 0x0000
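A small sketch of how such a port pointer and null-pointer guard might be used. The port address and the register variable are illustrative stand-ins for a real memory-mapped device:

```cpp
#include <cstddef>
#include <cstdint>

// Illustrative stand-in for a memory-mapped port register.
uint32_t simulatedRegister = 0;

// Write a value through a (possibly null) port pointer; 'volatile' tells the
// compiler the target may change outside program control, as a device
// register does.
void writePort(volatile uint32_t *portAdata, uint32_t value) {
    if (portAdata == NULL) return;  // null-pointer guard: no device mapped
    *portAdata = value;
}
```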
10. Explain pass by values.
The values are copied into the arguments of the function.
The called function does not change the values of the caller's variables.
11. What are the three conditions that must be satisfied by the re-entrant function?
1. All the arguments are passed by value; none of the arguments is a pointer.
2. When an operation is non-atomic, the function should not operate on any variable
declared outside it (no shared or global data).
3. The function does not call any function that is itself not reentrant.
12. Explain pass by reference.
When an argument is passed to a function through a pointer, the called function can
change the value.


New value in the calling function will be returned from the called function.
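Both conventions can be shown side by side (the function names are illustrative):

```cpp
// Pass by value: the function receives a copy, so the caller's variable
// is unchanged after the call.
void incByValue(int x)    { x = x + 1; }     // modifies only the copy

// Pass by reference via a pointer: the called function changes the value,
// and the new value is visible in the calling function on return.
void incByPointer(int *x) { *x = *x + 1; }   // modifies the caller's variable

int demoValue()   { int a = 5; incByValue(a);    return a; }  // still 5
int demoPointer() { int a = 5; incByPointer(&a); return a; }  // now 6
```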
13. Write the syntax for function pointer.
Syntax:
void *<function_name> (function arguments)
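A minimal sketch of declaring a function pointer and calling through it (the operations are illustrative); this indirect-call pattern underlies ISR vector tables and device-driver dispatch:

```cpp
// Two candidate operations with the same signature.
int addOp(int a, int b) { return a + b; }
int subOp(int a, int b) { return a - b; }

// 'op' is a pointer to a function taking (int, int) and returning int;
// the call through it is resolved at run time.
int apply(int (*op)(int, int), int a, int b) {
    return op(a, b);   // indirect call through the function pointer
}
```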
14. Define queue.
A structure with a series of elements.
Uses FIFO mode.
It is used when an element is not directly accessed using pointer and index but only
through FIFO.
Two pointers are used for insertion and deletion.
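A minimal circular FIFO queue along these lines, with one index for insertion and one for deletion (the size and names are illustrative):

```cpp
// Fixed-size circular FIFO queue: 'tail' is the insertion index,
// 'head' the deletion index; elements leave in first-in, first-out order.
#define QSIZE 8
typedef struct {
    int buf[QSIZE];
    int head, tail, count;
} Queue;

void qInit(Queue *q) { q->head = q->tail = q->count = 0; }

int qInsert(Queue *q, int v) {          // 0 on success, -1 if full
    if (q->count == QSIZE) return -1;
    q->buf[q->tail] = v;
    q->tail = (q->tail + 1) % QSIZE;
    q->count++;
    return 0;
}

int qDelete(Queue *q, int *v) {         // oldest element comes out first
    if (q->count == 0) return -1;
    *v = q->buf[q->head];
    q->head = (q->head + 1) % QSIZE;
    q->count--;
    return 0;
}
```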
15. Define stack.
A structure with a series of elements which uses LIFO mode.
An element can be pushed only at the top and only one pointer is used for POP.
Used when an element is not accessible through pointer and index, but only through
LIFO.
16. Define List.
Each element has a pointer to its next element.
Only the first element is identifiable and it is done using list-top pointer (header).
Other element has no direct access and is accessed through the first element.
17. What is Object oriented programming?
An object-oriented programming language is used when there is a need for re-usability of
defined objects or a set of objects that are common for many applications.
18. What are the advantages of OOPs?
Data encapsulation
Reusable software components
Inheritance
19. What are the characteristics of OOPs?
An identity reference to a memory block
A state data, field and attributes
A behavior methods to manipulate the state of the object


20. Define Class.


A class declaration defines a new type that links code and data. It is then used to declare
objects of that class. Thus a class is a logical abstraction but an object has physical existence.
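A minimal sketch of a class linking code and data (the counter example is illustrative): the field is the state, the methods are the behaviour, and objects declared from the class have physical existence:

```cpp
// A class defines a new type linking code and data.
class Counter {
    int count;                  // state: data/field
public:
    Counter() : count(0) {}
    void tick() { ++count; }    // behaviour: method manipulating the state
    int value() const { return count; }
};

// Declare an object of the class (physical existence) and use it.
int counterDemo() {
    Counter c;
    c.tick();
    c.tick();
    return c.value();           // 2
}
```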
21. Define NULL function
NULL defines empty stack or no content in the stack/queue/list.
22. What is Multiple Inheritances?
Inheritance is the process by which objects of one class acquire the properties of objects
of another class; with multiple inheritance, a class inherits from more than one base class.
In OOP, the concept of inheritance provides the idea of reusability.
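A small sketch of multiple inheritance (the classes are illustrative): the derived class acquires the members of both base classes, reusing their code:

```cpp
// Two independent base classes.
class Input  { public: int read()  const { return 1; } };
class Output { public: int write() const { return 2; } };

// IoPort inherits from both bases, acquiring read() and write().
class IoPort : public Input, public Output {};

int ioDemo() {
    IoPort p;
    return p.read() + p.write();   // members from both bases are available
}
```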
23. Define Exception handling
Exceptions are used to report error conditions. Exception handling is built upon three
keywords:
1. try
2. catch
3. throw
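A minimal sketch of the three keywords working together (the divide example is illustrative): the error site throws, and the nearest matching catch handles the reported condition:

```cpp
#include <stdexcept>

// throw: report the error condition at the point it is detected.
int safeDivide(int a, int b) {
    if (b == 0) throw std::runtime_error("divide by zero");
    return a / b;
}

// try/catch: attempt the operation and handle the reported error.
int divideOrDefault(int a, int b, int dflt) {
    try {
        return safeDivide(a, b);   // may throw
    } catch (const std::runtime_error &) {
        return dflt;               // error handled, fall back to a default
    }
}
```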

24. What is a Preprocessor Directive?


A preprocessor directive starts with # sign. The following are the types of preprocessor
directives:
1. Preprocessor global variables
2. Preprocessor constants

25. Mention the flags available for queue.


QerrrorFlag
HeaderFlag
TrailingFlag
cirQuFlag
PolyQuFlag
26. Give any two disadvantages of C++ [November 2011]
It does not readily support building GUI-oriented software.

Compiled binaries are not portable: the same executable cannot run on every platform
(Windows, UNIX, etc.).

It compiles to .OBJ/native object files, which can be disassembled, so the code is easier
to reverse-engineer.

Application storage is file-based only; there is no built-in database support.

Poor support for multitasking.

27. What is meant by source file? [November 2011]


In computing, source code is any collection of computer instructions (possibly with comments)
written using some human-readable computer language, usually as text. The source code of a program is
specially designed to facilitate the work of computer programmers, who specify the actions to be
performed by a computer mostly by writing source code. The source code is often transformed by a
compiler program into low-level machine code understood by the computer. The machine code might
then be stored for execution at a later time. Alternatively, an interpreter can be used to analyze and
perform the outcomes of the source code program directly on the fly.
28. What are the features of UML?[November 2014]

UML is designed to meet some very specific objectives so that it can truly be a standard
that addresses the practical needs of the software development community. Any effort to be all
things to all people is doomed to fail, so the UML authors have taken care to establish clear
boundaries for the features of the UML.
The goals of UML
The OMG knows that the success of UML hinges on its ability to address the widely
diverse real-world needs of software developers. The standard will fail if it is too rigid or too
relaxed, too narrow in scope or too all-encompassing, too bound to a particular technology or so
vague that it cannot be applied to real technologies. To ensure that the standard will, in fact, be
both practical and durable, the OMG established a list of goals.
29. Write down the uses of DFGs and CDFGs. [November 2012]
Data-flow analysis over a data-flow graph (DFG) is a technique for gathering information
about the possible set of values calculated at various points in a computer program. A program's
control flow graph (CFG) is used to determine those parts of a program to which a particular
value assigned to a variable might propagate; a control/data-flow graph (CDFG) combines the
control and data views. The information gathered is often used by compilers when optimizing
a program. A canonical example of a data-flow analysis is reaching definitions.


UNIT III
11 MARKS

1. Specify the C++ features that suits embedded program. [April/May 2014]
An object oriented language is used when there is a need for reusability of the defined
object or set of objects that are common within a program between many applications. When a
large program is to be made, an object-oriented language offers many advantages. Data
encapsulation, design of reusable software components and inheritance are the advantages
derived from the OOPs.
An object-oriented language provides for defining the objects and methods that manipulate the
objects modifying their definitions. It provides for the data and methods for encapsulation. An
object can be characterized by the following:

An identity, a reference to a memory block that holds its state and behavior.

A state (its data property fields and attributes).

A behavior (method or methods that manipulate the state of the object).

In a procedure-based language, like FORTRAN, COBOL, Pascal and C, large programs
are split into simpler functional blocks and statements.

In an object- oriented language like Smalltalk, C++ or Java, logical groups (also known
as classes) are first made. Each group defines the data and the methods of using the data.

A set of these groups then gives an application program.

Each group has internal user-level fields for the data and the methods of processing that
data at these fields.

Each group can then create many objects by copying the group and making it functional.
Each object is functional.

Each object can interact with other objects to process the user's data.

The language provides for formation of classes by the definition of a group of objects
having similar attributes and common behavior.

A class creates the objects. An object is an instance of a class.


Embedded Programming in C++

Embedded system code can be optimized when using an OOP language as follows:

Declare as many class members private as possible. It helps in optimizing the generated code.

Use char, int and boolean (scalar data types) in place of the objects (reference data types)
as arguments and use local variables as much as feasible.

Recover memory already used once by releasing the reference to the object.

A special compiler for an embedded system can facilitate the disabling of specific
features provided in C++.

Embedded C++ is a version of C++ that provides for a selective disabling of the above
features so that there is a less runtime overhead and less runtime library.

The solutions for the library functions in an embedded C++ compiler are also reentrant.

So using embedded C++ compilers or the special compilers make the C++ a significantly
more powerful coding language than C for embedded systems.

GNU Compiler

GNU C/C++ compilers (called gcc) find extensive use in the C++ environment in
embedded software development.

Embedded C++ is a new programming tool with a compiler that provides a small runtime
library.

It satisfies small runtime RAM needs by selectively de-configuring features like
templates, multiple inheritance, virtual base classes, etc.

This gives less runtime overhead and a smaller runtime library. Selectively removed
(de-configured) features could be templates, run-time type identification, multiple
inheritance, exception handling, virtual base classes, I/O streams and foundation classes.


Another embedded-system C++ compiler (other than gcc) is the Diab compiler from Diab Data. It
also provides target-specific (embedded-system processor) code optimization. Its run-time
analysis tools check for expected run-time errors and give a profile that is visually interactive.

Embedded C++ is a C++ version, which makes large program development simpler by
providing object oriented programming features of using an object, which binds state and
behavior and which is defined by an instance of a class.
The Diab compiler has two special features: processor-specific code optimization and run-time
analysis tools for finding expected run-time errors.

2. Write a short note on software tools for embedded programs. [April/May 2014]

Introduction
Application programs are typically developed, compiled, and run on host
system
Embedded programs are targeted to a target processor (different from the
development/host processor and operating environment) that drives a device
or controls a machine or process
What tools are needed to develop, test, and locate embedded software into the
target processor and its operating environment?
Distinction
Host: where the embedded software is developed, compiled, tested,
debugged and optimized, prior to its translation into the target device. (The
host has keyboards, editors, monitors, printers, more memory, etc. for
development, while the target may not have these capabilities for
developing the software.)
Target: after development, the code is cross-compiled, cross-assembled,
linked (into the target processor instruction set) and located into the target.
Cross-Compilers
Native tools are good for the host, but to port/locate embedded code to the
target, the host must have a tool-chain that includes a cross-compiler: one
which runs on the host but produces code for the target processor.
Cross-compiling doesn't guarantee correct target code, due to differences
in, e.g., word sizes, instruction sizes, variable declarations and library functions.
Cross-Assemblers and Tool Chain


The host uses a cross-assembler to assemble code in the target's instruction syntax for
the target.
A tool chain is a collection of compatible translation tools, which are
pipelined to produce a complete binary/machine code that can be linked and
located into the target processor.
(See Fig 9.1)

A source code engineering tool is of great help for source-code development, compiling
and cross compiling. The tools are commercially available for embedded C /C++ code
engineering, testing and debugging.

The features of a typical tool are comprehension, navigation and browsing, editing, debugging
and configuring (disabling and enabling the C++ features). A tool for C and C++ is SNiFF+,
from Wind River Systems. A version, SNiFF+ PRO, provides a code module as well as a debug
module. Main features of the tool are as follows:

It searches and lists the definitions symbols and hierarchy of the classes, and class
inheritance trees.

It searches and lists the dependencies of symbols and defined symbols, variables,
functions (methods) and other symbols.

It monitors, enables and disables the implementation of the virtual functions.

It finds the full effect of any code change on the source code.

It searches and lists the dependencies and hierarchy of included header files

It navigates to and fro between the implementation and symbol declaration

It navigates to and fro between the overridden and overriding methods. [An overriding
method is a method in a daughter class with the same name and the same number and types of
arguments as in the parent class. An overridden method is the method of the parent class,
which has been redefined in the daughter class.]

It browses through information regarding instantiation (object creation) of a class.

It browses the information encapsulation of variables among the members and browses
through the public, private and protected visibility of the members.

It browses through object components relationships.

It automatically removes error prone and unused tasks.

It provides easy and automated search and replacement.

The embedded software designer for sophisticated applications uses source code engineering
tool for program coding, profiling, testing and debugging of embedded software design.


3. Discuss in detail the multiprocessor system. [November 2011]

A multiprocessor system uses two or more processors for faster execution of the (i) Program
functions, (ii) tasks or (iii) single instruction multiple data instructions
A large complex program

Partitioned into the tasks or sets of instructions (or processes or threads) and the ISRs.

The tasks and ISRs can run concurrently on different processors and by some mechanism
the tasks can communicate with each other

Multiple-instructions multiple- data instructions

Very long instruction words (VLIWs)

Static scheduling of tasks and ISRs on two processors

Static scheduling of tasks and ISRs on two cores of same processor

Static scheduling


Means that a compiler compiles such that the codes are run on different processors or processing
units as per the schedule decided and this schedule remains static during the program run even if
a processor waits for the others to finish the scheduled processing.
The Multiprocessor System memory interface Models

Share the same address space through a common bus (tight coupling)

Processors have different autonomous address spaces (like in a network) as well as
shared data sets and arrays, called loose coupling.

Tight coupling of two processors to a common memory

Loose coupling of two processors to a common memory

The Multiprocessor System Programming Model Needs

Partition the program into tasks or sets of instructions between the various processors

Schedule the instructions over the available processor times and resources so that there is
optimum performance.

Partition of processes, instruction sets and instruction(s)

Schedule the instructions, SIMDs, MIMDs, and VLIWs within each process and
scheduling them for each processor

Concurrent processing of processes on each processor

Concurrent processing on each superscalar unit and pipeline in the processor


Static scheduling by compiler, analogous to the scheduling in a superscalar processor.

Hardware scheduling, for example, whether static scheduling of hardware (processors
and memories) is feasible or not [It is simpler and its use depends on the types of
instructions when it does not affect the system performance.]

Static scheduling issue [For example, when the performance is not affected and when the
processing actions are predictable and synchronous.]

Synchronizing issues: synchronization means use of inter-processor or process
communications (IPCs) such that there is a definite order (precedence) in which the
computations are fired on any processor in the multiprocessor system.

Dynamic scheduling issues [For example, when the performance is affected when there
are interrupts and when the services to the tasks are asynchronous. It is also relevant
when there is preemptive scheduling as that is also asynchronous.]

Methods of scheduling and synchronizing the execution of instructions, SIMDs, MIMDs,
and VLIWs
Scheduling is done after analyzing the scheduling and synchronizing options for the concurrent
processing and scheduling of instructions, SIMDs, MIMDs and VLIWs
Method 1 of concurrent processing

Schedule each task so that it is executed on different processors and synchronizes the
tasks by some inter processor communication mechanism

Method 2 of concurrent processing

When an SIMD or MIMD or VLIW instruction has different data (for example, different
coefficients in the filter example):

Each task is processed on different processors (tightly coupled processing) for different
data.

This is analogous to the execution of a VLIW in TMS320C6, a recent Texas Instruments
DSP series processor.

TMS320C6 employs two identical sets of four units and a VLIW instruction word can be
within four and thirty-two bytes.

TMS320C6 has instruction level parallelism when a compiler schedules such that the
processors run the different instruction elements into the different units in parallel.


The compiler does static scheduling for VLIWs.

Method 3

An alternate way is that a task-instruction is executed on the same processor, or different
instructions of a task can be done on different processors (loosely coupled).

A compiler schedules the various instructions of the tasks among the processors at an
instance.

Performance cost

Suppose one processor finishes computations earlier than the other.

Performance cost is more if there is idle time left from the available time.

If one task needs to send a message to another and the other waits (blocks) till the
message is received, performance cost is proportional to the waiting period.

A multiprocessor system uses two or more processors for faster execution of the (i) Program
functions, (ii) tasks or (iii) single instruction multiple data instructions
Partitioning of the processes and scheduling of processes on multiple processor can be static or
dynamic.

4. Explain the embedded software development process and tools. [November 2011]

Software development process


A software development process is a structure imposed on the development of a
software product. Synonyms include software life cycle and software process. There are several
models for such processes, each describing approaches to a variety of tasks or activities that take
place during the process.


Overview
A large and growing body of software development organizations implement process
methodologies. Many of them are in the defense industry, which in the U.S. requires a rating based on
'process models' to obtain contracts.

The international standard for describing the method of selecting, implementing and
monitoring the life cycle for software is ISO 12207.
A decades-long goal has been to find repeatable, predictable processes that improve
productivity and quality. Some try to systematize or formalize the seemingly unruly task of
writing software. Others apply project management techniques to writing software. Without
project management, software projects can easily be delivered late or over budget. With large
numbers of software projects not meeting their expectations in terms of functionality, cost, or
delivery schedule, effective project management appears to be lacking.
Organizations may create a Software Engineering Process Group (SEPG), which is the
focal point for process improvement. Composed of line practitioners who have varied skills, the
group is at the center of the collaborative effort of everyone in the organization who is involved
with software engineering process improvement.
Software development activities
Requirements analysis
The most important task in creating a software product is extracting the requirements or
requirements analysis. Customers typically have an abstract idea of what they want as an end
result, but not what software should do. Incomplete, ambiguous, or even contradictory
requirements are recognized by skilled and experienced software engineers at this point.
Frequently demonstrating live code may help reduce the risk that the requirements are incorrect.
Once the general requirements are gleaned from the client, an analysis of the scope of the
development should be determined and clearly stated. This is often called a scope document.
Certain functionality may be out of scope of the project as a function of cost or as a result of


unclear requirements at the start of development. If the development is done externally, this
document can be considered a legal document so that if there are ever disputes, any ambiguity of
what was promised to the client can be clarified.
Domain Analysis is often the first step in attempting to design a new piece of software,
whether it is an addition to existing software, a new application, a new subsystem or a whole
new system. Assuming that the developers (including the analysts) are not sufficiently
knowledgeable in the subject area of the new software, the first task is to investigate the so-called "domain" of the software. The more knowledgeable they are about the domain already, the
less work required. Another objective of this work is to make the analysts, who will later try to
elicit and gather the requirements from the area experts, speak with them in the domain's own
terminology, facilitating a better understanding of what is being said by these experts. If the
analyst does not use the proper terminology it is likely that they will not be taken seriously, thus
this phase is an important prelude to extracting and gathering the requirements.
Specification
Specification is the task of precisely describing the software to be written, possibly in a
rigorous way. In practice, most successful specifications are written to understand and fine-tune
applications that were already well-developed, although safety-critical software systems are
often carefully specified prior to application development. Specifications are most important for
external interfaces that must remain stable. A good way to determine whether the specifications
are sufficiently precise is to have a third party review the documents making sure that the
requirements and Use Cases are logically sound.
Architecture
The architecture of a software system or software architecture refers to an abstract
representation of that system. Architecture is concerned with making sure the software system
will meet the requirements of the product, as well as ensuring that future requirements can be
addressed. The architecture step also addresses interfaces between the software system and other
software products, as well as the underlying hardware or the host operating system.
Design, implementation and testing


Implementation is the part of the process where software engineers actually program the
code for the project.
Software testing is an integral and important part of the software development process.
This part of the process ensures that bugs are recognized as early as possible.
Documenting the internal design of software for the purpose of future maintenance and
enhancement is done throughout development. This may also include the authoring of an API, be
it external or internal.
Deployment and maintenance
Deployment starts after the code is appropriately tested, is approved for release and sold
or otherwise distributed into a production environment.
Software Training and Support is important because a large percentage of software
projects fail because the developers fail to realize that it doesn't matter how much time and
planning a development team puts into creating software if nobody in an organization ends up
using it. People are often resistant to change and avoid venturing into an unfamiliar area, so as a
part of the deployment phase, it is very important to have training classes for new clients of your
software.
Maintenance and enhancing software to cope with newly discovered problems or new
requirements can take far more time than the initial development of the software. It may be
necessary to add code that does not fit the original design to correct an unforeseen problem or it
may be that a customer is requesting more functionality and code can be added to accommodate
their requests. It is during this phase that customer calls come in and you see whether your
testing was extensive enough to uncover the problems before customers do
Models
Iterative processes


Iterative development prescribes the construction of initially small but ever larger
portions of a software project to help all those involved to uncover important issues early before
problems or faulty assumptions can lead to disaster. Iterative processes are preferred by
commercial developers because it allows a potential of reaching the design goals of a customer
who does not know how to define what they want.
Agile software development
Agile software development processes are built on the foundation of iterative
development. To that foundation they add a lighter, more people-centric viewpoint than
traditional approaches. Agile processes use feedback, rather than planning, as their primary
control mechanism. The feedback is driven by regular tests and releases of the evolving software.
Interestingly, surveys have shown the potential for significant efficiency gains over the
waterfall method. For example, a survey, published in August 2006 by VersionOne and Agile
Alliance and based on polling more than 700 companies claims the following benefits for an
Agile approach.
XP: Extreme Programming
Extreme Programming (XP) is the best-known iterative process. In XP, the phases are
carried out in extremely small (or "continuous") steps compared to the older, "batch" processes.
The (intentionally incomplete) first pass through the steps might take a day or a week, rather than
the months or years of each complete step in the Waterfall model. First, one writes automated
tests, to provide concrete goals for development. Next is coding (by a pair of programmers),
which is complete when all the tests pass, and the programmers can't think of any more tests that
are needed. Design and architecture emerge out of refactoring, and come after coding. Design is
done by the same people who do the coding. (Only the last feature - merging design and code - is
common to all the other agile processes.) The incomplete but functional system is deployed or
demonstrated for (some subset of) the users (at least one of which is on the development team).
At this point, the practitioners start again on writing tests for the next most important part of the
system.


Waterfall processes
The waterfall model shows a process, where developers are to follow these steps in order:
1. Requirements specification (AKA Verification or Analysis)
2. Design
3. Construction (AKA implementation or coding)
4. Integration
5. Testing and debugging (AKA validation)
6. Installation (AKA deployment)
7. Maintenance
After each step is finished, the process proceeds to the next step, just as builders don't revise
the foundation of a house after the framing has been erected.
There is a misconception that the process has no provision for correcting errors in early steps
(for example, in the requirements). In fact this is where the domain of requirements management
comes in which includes change control.
This approach is used in high risk projects, particularly large defense contracts. The problems
in waterfall do not arise from "immature engineering practices, particularly in requirements
analysis and requirements management." Studies of the failure rate of the DOD-STD-2167
specification, which enforced waterfall, have shown that the more closely a project follows its
process, specifically in up-front requirements gathering, the more likely the project is to release
features that are not used in their current form. Often the supposed stages are part of review
between customer and supplier, the supplier can, in fact, develop at risk and evolve the design
but must sell off the design at a key milestone called Critical Design Review (CDR). This shifts
engineering burdens from engineers to customers who may have other skills.
Other models
Capability Maturity Model Integration


The Capability Maturity Model Integration (CMMI) is one of the leading models
and based on best practice. Independent assessments grade organizations on how well
they follow their defined processes, not on the quality of those processes or the software
produced. CMMI has replaced CMM.

ISO 9000
ISO 9000 describes standards for formally organizing processes with
documentation.
ISO 15504
ISO 15504, also known as Software Process Improvement Capability
Determination (SPICE), is a "framework for the assessment of software processes". This
standard is aimed at setting out a clear model for process comparison. SPICE is used
much like CMMI. It models processes to manage, control, guide and monitor software
development. This model is then used to measure what a development organization or
project team actually does during software development. This information is analyzed to
identify weaknesses and drive improvement. It also identifies strengths that can be
continued or integrated into common practice for that organization or team.
Six sigma
Six sigma is a methodology to manage process variations that uses data and
statistical analysis to measure and improve a company's operational performance. It
works by identifying and eliminating defects in manufacturing and service-related
processes. The maximum permissible defects are 3.4 per one million opportunities.
However, Six Sigma is manufacturing-oriented and needs further research on its
relevance to software development.
Test Driven Development
Test Driven Development (TDD) is a useful output of the Agile camp but some
suggest that it raises a conundrum. TDD requires that a unit test be written for a class
before the class is written. It might be thought, then, that the class firstly has to be
"discovered" and secondly defined in sufficient detail to allow the write-test-once-and-code-until-class-passes model that TDD actually uses. This would be actually counter to
Agile approaches, particularly (so-called) Agile Modeling, where developers are still


encouraged to code early, with light design. However, to get the claimed benefits of TDD
a full design down to class and responsibilities (captured using, for example, Design By
Contract) is not necessary. This would count towards iterative development, with a
design locked down, but not iterative design - as heavy refactoring and re-engineering
might negate the usefulness of TDD.

Formal methods
Formal methods are mathematical approaches to solving software (and hardware) problems
at the requirements, specification and design levels. Examples of formal methods include the B-Method, Petri nets, RAISE and VDM. Various formal specification notations are available, such
as the Z notation. More generally, automata theory can be used to build up and validate
application behavior by designing a system of finite state machines.
Finite state machine (FSM) based methodologies allow executable software specification and
by-passing of conventional coding (see virtual finite state machine or event driven finite state
machine).
Formal methods are most likely to be applied in avionics software, particularly where the
software is safety critical. Software safety assurance standards, such as DO-178B, demand formal
methods at the highest level of categorization (Level A).
Formalization of software development is creeping in, in other places, with the application of
Object Constraint Language (and specializations such as Java Modeling Language) and
especially with Model-driven architecture allowing execution of designs, if not specifications.
Another emerging trend in software development is to write a specification in some form of
logic (usually a variation of FOL), and then to directly execute the logic as though it were a
program. The OWL language, based on Description Logic, is an example. There is also work on
mapping some version of English (or another natural language) automatically to and from logic,
and executing the logic directly. Examples are Attempto Controlled English, and Internet
Business Logic, which does not seek to control the vocabulary or syntax. A feature of systems


that support bidirectional English-logic mapping and direct execution of the logic is that they can
be made to explain their results, in English, at the business or scientific level.
The Government Accountability Office, in a 2003 report on one of the Federal Aviation
Administration's air traffic control modernization programs, recommends following the agency's
guidance for managing major acquisition systems by:

establishing, maintaining, and controlling an accurate, valid, and current performance
measurement baseline, which would include negotiating all authorized, unpriced work
within 3 months;

conducting an integrated baseline review of any major contract modifications within 6
months; and

preparing a rigorous life-cycle cost estimate, including a risk assessment, in accordance
with the Acquisition System Toolset's guidance, and identifying the level of uncertainty
inherent in the estimate.

5. Explain the software engineering practices in embedded software development.

Practice for Success


Develop iteratively - Iterative development dates back to the mid-1950s and has been known
under various names (Rapid Prototyping or the Boehm Spiral Methodology). Basically this
technique calls for developing in such a way as to test as soon as possible. Preferably, the highest
risks should be identified, isolated and developed first. Recent developments surrounding the
Agile Manifesto (see http://www.agilemanifesto.org/) have re-enforced this methodology. Stated
in "agile" terms, we should strive for "early and continuous delivery of valuable software".
Manage requirements - Start by writing them down. Next, we need to continuously maintain
these documents - at least through the entire development cycle. During the early stages, it is
important to maintain a list of TBD's (because there will be many) with dates and resources
associated with them. It is also important to know the cost of each and every requirement. Often,
designs can be cheaper and cleaner if we can change some requirement. This principle is
overlooked all the time.


Re-use - This is probably the most misunderstood practice. We have found that although proper
design can maximize re-use, two principles need to be instilled in all of your software engineers
if you want to maximize re-use:
A well designed late product is not necessarily better than an on-time mediocre design.
If you think you can do it faster by starting from scratch than by understanding the existing
code, come back when you understand the existing code.
Maximize Tool Usage - Very few projects allocate time up front to evaluate what tools could be
made or bought that will reduce the time to market. The tools could be used for modeling,
automatic code generation, testing or any number of other areas.
Continuously verify quality - Poor performance and poor reliability are common results of an
inadequate focus on software quality. Rational places an emphasis on quality throughout the
project lifecycle, with testing conducted in each iteration. This is in contrast to a more traditional
approach that leaves the testing of integrated software until late in the project's lifecycle.
Manage change - A key challenge when developing software-intensive systems is coordinating
the work of multiple developers organized into different teams, sometimes located at different
sites, working on multiple iterations, releases, products, and platforms. By establishing
repeatable procedures for managing changes to software and other development artifacts, you
minimize chaos, maximize efficient allocation of resources, and ensure your freedom to
change.
Embedded software development has evolved into a large-scale, globally distributed endeavor,
posing significant engineering management challenges. Embedded projects now involve huge
teams of developers, outsourcers, third-party software technology vendors, chipset partners and
even open source. However, software development methods and practices are largely the same as
ten years ago, especially in the areas of integration and testing. As a result, companies are
struggling with the challenge of managing, integrating and verifying a vast array of components
from many different sources. As a short-term fix, software managers add more engineers and
resources but with limited effectiveness and at very high cost. Often, they still end up delivering
software releases late and with compromised quality.


The growing complexity of embedded software development requires a new, more reliable and
scalable approach: one that adopts early developer testing practices and implements automated
software verification to prevent and detect more defects sooner. The objective is to have all
developers and integrators create reusable tests that can be shared and automated throughout the
development cycle. This strategy replaces ad hoc testing with continuous automated testing to
ensure the on-time delivery of fully tested and properly functioning products.

The market for sophisticated embedded software-based products is exploding. As the industry
evolves to support an ever growing range of capabilities and features, the underlying
microprocessors become smaller and faster, and the software content just keeps multiplying.
A few years ago, a typical embedded application included a few thousand lines of monolithic
code developed by a few developers at a single location. Today's embedded application may
incorporate millions of lines of code developed by more than a hundred developers. It has
become a complex software platform comprising a huge number of software components
brought together from various sources and locations.
In the past, the embedded software industry was driven by technical competence, meaning that
smart technical guys somehow always came through. There has never really been a push to
invest in processes and technologies to address this rapid increase in complexity. Now, in the
race to beat the competition, product developers and manufacturers face greater time-to-market


pressures and tighter product release schedules. Yet they have more code to manage, limited
ability to properly test it, and less time to find and fix problems. This is a recipe for disaster.
It's very apparent that the current approaches and technologies that were sufficient in the last
decade are no longer adequate. Software methods, practices and technology simply have not
evolved in response to these new complex integration issues.
Possibly the most alarming observation is the relative time and resources devoted to quality
assurance (QA) or product testing. In many companies, time spent on software coding or
implementation is relatively short, while integration activities can take twice as long. However,
product testing efforts are truly daunting, taking 5-10 times as long as implementation, while
staffed with very large teams that continue to grow.
At many companies, integration testing is merely a "smoke test" or "sanity test" to confirm a
viable software build by manually executing a rudimentary set of tests. Even when integration
testing is more extensive, the test coverage is limited by the time-consuming nature of manual
testing. Often the first time all embedded software components are extensively tested as an
integrated whole is during QA or production testing.
As a result, QA engineers usually uncover large volumes of defects. A hiatus ensues as managers
redirect developers from other work to isolate, characterize and debug large numbers of critical
and serious defects. Engineers try in vain to salvage a release schedule, but they know the odds
are stacked against them. By catching defects late, developers are fixing bugs when they're the
most difficult, time-consuming and expensive to resolve. With so many critical and serious
defects, software managers inevitably not only miss their delivery schedule, but also find it
difficult to predict new delivery dates. Worse yet, they cannot be sure that the code they
ultimately release is high quality and free from costly or dangerous errors.


Objectives for a New Verification Strategy


Clearly the embedded industry is overdue for a more scalable and effective software testing and
integration strategy. Any effort to improve software integration and verification should address
several key objectives.
First, manual testing by developers or integrators must be minimized. Manual testing is too
tedious, time-consuming, error-prone, and is not a good use of valuable engineers. Also, for
companies building multiple products concurrently from the same code base, engineers do not
have access to all hardware-software permutations. Secondly, the process must achieve phase
containment to prevent or catch defects early, before QA. Important metrics of a new process
will be significantly fewer defects escaping into QA, and reducing the time and resources
required in product testing.
Next, processes and infrastructure must be implemented that are truly scalable and capable of
supporting integration and testing of many components for multiple product lines concurrently.
And finally, improving the predictability of releases requires earlier visibility into the software


health of releases. Managers must be able to pinpoint trouble spots, and make decisions based on
real data or metrics.
An Automated, Test-Driven Strategy
To achieve these objectives, the mantra needs to be "Automate, Automate, Automate". Then,
reuse and automate the tests whenever code has changed, or at various integration points
throughout the development cycle. This strategy sounds conceptually simple, but it does
nonetheless require a process change involving adoption of new methods, implementation of
automated infrastructure, and a change in mentality regarding the importance of developer
testing. Let's examine this strategy more closely, looking at certain test-driven software practices
and the role of an embedded software verification platform.
Due to the complexity of today's embedded software, developers need to actively participate in
the verification of their code. As the creators and architects of software components, only
developers truly understand the inner workings of their code. Following a best practice from
Test-Driven Development (TDD), developers would ideally create tests up front, before
implementing code. Developing tests in advance has the added benefit of fleshing out your
design, and especially aids in designing for testability. By thinking about testing up front, you'll
take into account and implement facilities to access internal APIs, data structures or other
information that aid in testing.
Thus, the first role of an embedded software verification platform is to assist developers in
creating reusable, automated tests quickly and easily by providing specialized tools and
techniques. For example, the verification platform might enable developers to quickly break
dependencies by simulating missing code with a GUI, simple scripts or C/C++ code. Or perhaps
the verification platform would support recording and playback to automate a series of manual
test operations.
An embedded software verification platform also provides value at this stage by enabling
developers to validate their tests before code is available. For API-level testing, a software
verification platform can execute and verify tests by simulating or modeling APIs through simple


scripts, C/C++ code, or a GUI. This provides developers with a very simple, quick means to
execute tests and feed them canned responses to validate them. For example, if code-under-test
depends on the return values of another application interface, the developer can dynamically
mock up the desired return values. This new approach succeeds only if developers and integrators
create and deliver automated tests that are reusable by anyone, and if the tests can be aggregated
and automated in large-scale testing. To do this requires that a common test framework and test
guidelines be used by all software developers, integrators and testers.
The first key is the software verification platform, which serves as the common test framework
that supports, manages and automates tests from all of the developers and integrators. With the
diversity of embedded software components, this means that the test framework should ideally
be flexible enough to support various testing strategies. Depending on the type of embedded
software component, certain approaches may be more suitable. Hard real-time code may
require tests written in native code to be directly built into the target; soft real-time
applications can be exercised remotely from the host, possibly using a scripting language;
network protocols might have internal state machines that should be verified through white-box
techniques; data-centric APIs may require facilities to efficiently enter complex data.
Secondly, the development team must conform to a level of uniformity when it creates tests. For
example, guidelines might require that tests be written to be self-contained, i.e., not dependent on
the preceding execution of other tests. Standard entry and exit criteria would guarantee that tests
enter and leave the target in a consistent, known state, enabling tests to be executed in any
sequence. All tests would leverage the same error handling and recovery mechanisms. Internal
policies and conventions would establish naming conventions, archiving and maintenance
policies, and the standard languages for implementing tests.


Continually Leveraging Automated Testing


Once everyone has made the effort to deliver automated tests, the development team begins to
derive major benefits from applying them. There are many opportunities to leverage the
automated tests, virtually for free. For example, developers can perform regression testing of
their own code whenever they modify their software. Tests can also be re-executed at
integration points, or whenever developers integrate their code with other software
components.
The developers' tests can be aggregated and automatically executed with an automated complete
target build, at regularly scheduled intervals. This practice is referred to as "continuous
integration" and is very effective for achieving and maintaining software stability, because
defects are uncovered earlier and frequently, providing continual visibility and metrics into the
software's overall health. The complete collection of developers' and integrators' tests can then
also be executed by the QA team to complement their black-box testing.
In these cases, the role of the software verification platform is to provide a framework for
management, reporting and automation by aggregating, organizing, controlling and executing
tests, and then collecting, analyzing and displaying results.
Implementing this unified verification approach will indeed transform the software development
process with significant new attributes. Tests from all developers, even geographically widely
separated, will be managed and automated from a single common framework. Anyone will be
able to reuse and execute any test for any component at any time. A growing portfolio of
developers' tests can be automated for regression or verification at various integration points or
whenever code is modified. This will enable metrics for software health and completeness to be


collected much earlier in the development cycle.

Even if this strategy is applied incrementally or selectively to certain development teams, the
impact can be dramatic. A greater volume of defects can be caught and prevented before product
test, resulting in shorter cycles and fewer resources required in product test. This, along with
increased visibility and predictability into software health and delivery schedules, will lead to a
higher quality product, on time. These are all tangible benefits that can transform a company's
embedded software engineering into a true competitive advantage.
6. Discuss the Hardware/Software Co-design in an Embedded system.

The concurrent design of the hardware and software that together implement a desired
function is called hardware/software co-design. A successful co-design fits the desired
function really well. Today it has become a compulsion to blend software and hardware
design, as the conventional methodologies are no longer effective. All items that contain a
chip, memory and a set of executable instructions or programs to monitor their functioning
are products of hardware/software co-design.

Focus of Co-design
The emphasis of co-design is on the areas of system specification, hardware/software
partitioning, architectural design, and the iteration between software and hardware as the
design proceeds to the next stage. The task is completed by hardware and software integration.


The new and old integrated circuits are finding their place in new designs in the form of
embedded cores in a mixed fashion. Hardware/software co-design makes it possible to make
handy devices that can be carried by individuals; with these systems one does not have to stay
in touch with a computer in order to run software applications. The most common example of
hardware/software co-design is embedded systems. The objective of the co-design is to
combine CPU, memory and programs to control physical operations.
A computer system which has been specifically designed to perform particular functions, but
which faces real-time computing constraints, is known as an embedded system. Embedded
systems can be controlled by digital signal processors and microcontrollers. Whenever you see
software guiding hardware, it is an example of an embedded system. Software and hardware
co-design is a broader horizon which incorporates all embedded systems. There are today
unlimited examples of embedded systems, like the modern Robot Guitar that tunes its own
strings. Embedded systems deal with complex GUIs, much like desktop computers. It is
thanks to the rise of embedded systems that we are able to use touch-screen devices. The key
point is that any hardware on which special-purpose software is fixed or stamped is an
embedded system. Embedded systems have many distinct features; for example, they consume
less electricity. On the other hand, the processing speed of these systems is low and the
capability to store data is also low.
Examples of Hardware/Software Co-design
The applications of hardware/software co-design range from everyday items to
special-purpose machines. Common examples include televisions, automobiles, GPS units,
microwaves, thermostats, network routers and game consoles. Some special-purpose examples
include ATM machines, kiosks, aircraft, satellites, sensing, consumer electronics, smart
phones, industrial automation, avionics, medical and IT hardware. The best examples of
co-design technology are multi-core processors, the iPhone and the PS3. The examples of
hardware/software co-design therefore include all intelligent devices; these systems can be
configured for personalized uses. Let us examine a simple example of an MP3 player. It has a
large memory which is capable of storing many songs. Songs are stored in digitally
compressed form. The CPU runs the program in main memory. The audio


is present in compressed form, and raw audio is generated from it in the form of digital signals.
This information is then displayed on the screen with the help of software working in the
memory. The hardware/software co-design method is used for the implementation of the MP3
audio decoder, which helps meet the real-time specification of the MP3 player. Recent
developments in this field can be seen in telecommunications.

QUESTION BANK
2 - MARKS

1. What are the advantages of Assembly language?


2. What are advantages of high level languages?
3. Define In -line assembly
4. Mention the elements of C program.
5. What is the use of MACRO function?
6. What is the use of interrupt service routines or device drivers?
7. What are the datatypes available in C language?
8. Mention the data structures available in C language.
9. Write the syntax for declaration of pointer and Null-pointer.
10. Explain pass by values.
11. What are the three conditions that must be satisfied by the re-entrant function?
12. Explain pass by reference.
13. Write the syntax for function pointer.
14. Define queue.


15. Define stack.


16. Define List.
17. What is Object oriented programming?
18. What are the advantages of OOPs?
19. What are the characteristics of OOPs?
20. Define Class.
21. Define NULL function
22. What is Multiple Inheritances?
23. Define Exception handling
24. What is a Preprocessor Directive?
25. Mention the flags available for queue.
26. Give any two disadvantages of C++. [November 2011]
27. What is meant by source file? [November 2011]
28. What are the features of UML?[November 2014]

29. Write down the uses of DFGs and CDFGs.[November 2012]


UNIT III
11 MARKS

1. Specify the C++ features that suit embedded programs. [April/May 2014] [Ref. Page No.: 109]
2. Write a short note on software tools for embedded programs. [April/May 2014] [Ref. Page No.: 111]
3. Discuss in detail the multiprocessor system. [November 2011] [Ref. Page No.: 114]


4. Explain embedded software development process and tools. [November 2011] [Ref. Page No.: 117]
5. Explain the software engineering practices in the Embedded software development.
6. Discuss the Hardware/Software Co-design in an Embedded system.

PONDICHERRY UNIVERSITY QUESTIONS

1. Specify the C++ features that suit embedded programs. [April/May 2014] [Ref. Page No.: 109]
2. Write a short note on software tools for embedded programs. [April/May 2014] [Ref. Page No.: 111]
3. Discuss in detail the multiprocessor system. [November 2011] [Ref. Page No.: 114]
4. Explain embedded software development process and tools. [November 2011] [Ref. Page No.: 117]


UNIT IV
Real Time Operating Systems: Tasking Models, Task States, Services and Transitions -
Real-Time Scheduling Algorithms: Round-Robin, FIFO, Priority-Based Preemptive
Scheduling - Rate-Monotonic Scheduling - Priority Inversion and Priority Ceiling -
Deadlocks - Process Synchronization - IPC - Shared Memory, Memory Locking,
Memory Allocation - Signals - Semaphore Flag or mutex as Resource key - Message
Queues - Mailboxes - Pipes - Virtual Sockets.


UNIT IV
2 MARKS
1. What is an RTOS?
An RTOS is an OS for embedded systems, designed for bounded response times and event-controlled processes.
2. What are the basic RTOS services? (April 2013)
RTOS services:
Basic OS functions - PM, RM, MM, DM, FSM, I/O, etc.
RTOS main functions - RT task scheduling and latency control
Time management - time allocation, time slicing and monitoring for efficiency
Predictability - predicting time behavior and initiation of task synchronization
Priorities management - allocation and inheritance
IPC - synchronization of tasks using IPC

3. Why do we need an RTOS?
We need an RTOS for the following reasons:
When efficient scheduling is needed for multiple tasks with time constraints.
When task synchronization is needed.
When interrupt latency control is needed.

4. On what occasions do we not need an RTOS?
Small scale embedded systems never use an RTOS.
Instead of RTOS functions, standard C library functions can be used. Example:
malloc(), free(), fopen(), fclose(), etc.


5. What are the RTOS task scheduling models? (April 2013)

Control Flow Strategy

Data Flow Strategy

Control Data Flow Strategy

6. What are the features of the control flow strategy?
Complete control of inputs and outputs.
A co-operative scheduler adopts this strategy.
Worst-case latencies are well defined.

7. What are the features of the data flow strategy?
Interrupt occurrences are predictable.
Task control is not deterministic (e.g. a network).
A pre-emptive scheduler adopts this strategy.

8. What are the features of the control-data flow strategy?
Task scheduler functions are designed with predefined time-out delays.
Worst-case latency is deterministic, because the maximum delay is pre-defined.
Cyclic co-operative scheduling, pre-emptive scheduling, fixed time scheduling and
dynamic RT scheduling use this strategy.

9. What are the basic functions of an RTOS?


There are various functions that are available in RTOS. They are as follows:

Kernel


Error Handling Functions

Service and system clock Functions

Device drivers, Network Stack send and receive Functions

Time and delay Functions

Initiate and start Functions

Task state switching Functions

ISR Functions

Memory Functions

IPC Functions

10. What is the need for a tested RTOS?
While designing a complex embedded system, we need tested, bug-free code for the
following:
Multiple task functions in C or C++.
Real time clock based software timers (RTCSWT).
Software for Co-operative scheduler.
Software for a Pre-emptive scheduler.
Device drivers and Device managers.
Functions for Inter Process Communications.
Network functions
Error handling functions and exception handling functions.
Testing and System debugging software.


A readily available RTOS package provides the above functions and a lot of time is saved for
coding the same.
11. What are the options in RTOS?
There are various options for RTOS. They are as follows:

Own RTOS.

Linux Based RTOS.

μC/OS-II.
pSOS, VxWorks, Nucleus, Win CE, Palm OS.

12. What are the different phases of the system development methodology?


To understand the reasons for this study we need to take a look at the different phases of the
system development methodology and observe where the characteristics of the RTOS emerge.

Four fundamental development phases come to light:

Analysis: determines WHAT the system or software has to do;

Design: HOW the system or software will satisfy these requirements;

Implementation: DOING IT, i.e. implementing the system or software;

Maintenance: USING IT, i.e. using the system or software.

In a waterfall model, one supposes that all these phases are consecutive. In practice, this is
never possible. Most developments end up being chaotic, with all the phases being executed
simultaneously. Adopting a pragmatic approach, the methodology used should just be a
framework to guide producing the correct documents, whilst performing appropriate reviews and
audits at the right time.


13. What is bounded dispatch time?
When the system is not loaded, there will be just one thread waiting in the ready state to be
executed. With higher loads, there might be multiple threads in the ready list. The dispatch time
should be independent of the number of threads in the list.
In a good RTOS design, taking into account the thread priorities, the list is organized when an
element is added to the list, so that when a dispatch occurs, the first thread in the list can be taken.
14. What is the need for a maximum number of tasks?
A task, thread or process may be considered as an OS object. Each object in the OS needs
some memory space for the object definition. The more complex the object, the more attributes it
will have, and the bigger the definition space will be. If there is for example an MMU in the
system, the mapping tables are extra attributes for the task and more system space is needed for
all this.
This definition space may be part of the system or part of the task. If it is part of the
system, then in most cases the RTOS would like to reserve the maximum space it allocates to
these tables. In this case, the maximum number of tasks which may coexist in the system is then
a system parameter. Another approach could be full dynamic allocation of this space. The
maximum number of tasks is then only limited by the available memory in the system shared
among object tables, code, etc.
15. Why is minimum RAM required per task?
Memory footprint is an important issue in an embedded system, despite the cost
reductions in silicon and disk memory these days. The size of the OS, or the system space
necessary to run the OS with all the objects defined, is important. A task needs RAM for
the changing parts of the task control block (the task object definition) and for the stack and heap
needed to execute the program (which might be in ROM or RAM). It is the RTOS
vendor's choice as to the minimum level of RAM to allocate for this. It is also up to the vendor
to indicate the size of the minimum application for the intended purpose of the RTOS.


16. What is the need for maximum addressable memory space?
Each task can address a certain memory space. Each vendor may have a different
memory model, depending on whether he relies on X86 segments or not. This depends largely on
the product history of the RTOS - for instance, whether it was initially developed for the flat
address space of Motorola processors or for the segmented X86 Intel series. The 64K segments
of an 8086 processor are an important limitation for modern software. RT systems that need to be
backward-compatible with this type of hardware constitute a burden for the designer. On the other
hand, it should be noted that this limitation is not imposed by the RTOS, making new designs
possible outside the 64K space per module.
17. Draw the task state transition diagram

18. When does priority inversion occur?
Priority inversion is a situation wherein a lower-priority task runs, blocking a higher-priority
task that is waiting for a resource (mutex). The solution to priority inversion is priority
inheritance.
19. What are classical problems of synchronization?

Bounded-Buffer Problem

Readers and Writers Problem

Dining-Philosophers Problem

20. How can the deadlock be recovered?


The deadlock condition can be recovered from by:
Preemption
Rollback
Killing a process

21. Define process.


A process is a program that performs a specific function.
22. Define task and task state.
A task is a program that is within a process. It has the following states:
1. Ready
2. Running
3. Blocked
4. Idle
23. Define TCB.
TCB stands for Task Control Block, which holds the control information for all the tasks.
It has a separate stack and program counter for each task.
24. What is a thread?
A thread, otherwise called a lightweight process (LWP), is a basic unit of CPU utilization;
it comprises a thread id, a program counter, a register set and a stack. It shares with other
threads belonging to the same process its code section, data section, and operating system
resources such as open files and signals.


25. What are the benefits of multithreaded programming?


The benefits of multithreaded programming can be broken down into four major
categories:
Responsiveness
Resource sharing
Economy
Utilization of multiprocessor architectures
26. Compare user threads and kernel threads.
User threads:
- Supported above the kernel and implemented by a thread library at the user level.
- Thread creation and scheduling are done in user space, without kernel intervention; therefore they are fast to create and manage.
- A blocking system call will cause the entire process to block.
Kernel threads:
- Supported directly by the operating system.
- Thread creation, scheduling and management are done by the operating system; therefore they are slower to create and manage compared to user threads.
- If a thread performs a blocking system call, the kernel can schedule another thread in the application for execution.
27. Define RTOS.
A real-time operating system (RTOS) is an operating system that has been developed for
real-time applications. It is typically used for embedded applications, such as mobile telephones,
industrial robots, or scientific research equipment.


28. Define task and task rates.
An RTOS facilitates the creation of real-time systems, but does not guarantee that they
are real-time; this requires correct development of the system-level software. Nor does an RTOS
necessarily have high throughput; rather, through specialized scheduling algorithms and
deterministic behavior, it allows the guarantee that system deadlines can be met. That is, an RTOS is
valued more for how quickly it can respond to an event than for the total amount of work it can
do. Key factors in evaluating an RTOS are therefore maximal interrupt and thread latency.
29. Define cpu scheduling.
CPU scheduling is the process of switching the CPU among various processes. CPU
scheduling is the basis of multi-programmed operating systems. By switching the CPU among
processes, the operating system can make the computer more productive.
30. Define synchronization.
Message passing can be either blocking or non-blocking. Blocking is considered to be
synchronous and non-blocking is considered to be asynchronous.
31. Define inter process communication.
Inter-process communication (IPC) is a set of techniques for the exchange of data among
multiple threads in one or more processes. Processes may be running on one or more computers
connected by a network. IPC techniques are divided into methods for message passing,
synchronization, shared memory, and remote procedure calls (RPC). The method of IPC used
may vary based on the bandwidth and latency of communication between the threads, and the
type of data being communicated.
32. Define semaphore.
A semaphore S is a synchronization tool which is an integer value that, apart from
initialization, is accessed only through two standard atomic operations; wait and signal.


Semaphores can be used to deal with the n-process critical section problem. It can be also used to
solve various synchronization problems.
The classic definition of wait:
wait(S) {
    while (S <= 0)
        ;    /* busy wait */
    S--;
}
The classic definition of signal:
signal(S) {
    S++;
}
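The classic definitions above can be exercised directly in C. The sketch below is illustrative only: a real kernel performs the test-and-decrement in wait as one atomic step, and the helper names here (csem_init and friends) are made up for this example.

```c
#include <assert.h>

/* A semaphore is just an integer count, per the classic definition. */
typedef struct { int value; } csem_t;

void csem_init(csem_t *s, int initial) { s->value = initial; }

/* Classic wait: spin until the count is positive, then decrement.
 * NOTE: in a real kernel the test and the decrement form one atomic
 * operation; plain C is used here for illustration only. */
void csem_wait(csem_t *s) {
    while (s->value <= 0)
        ;               /* busy wait */
    s->value--;
}

/* Classic signal: increment the count, potentially releasing a waiter. */
void csem_signal(csem_t *s) {
    s->value++;
}
```

With an initial value of 2, two waits bring the count to 0 and a signal restores it to 1, matching the definitions in the text.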
33. What is a semaphore?
A semaphore is a software, blocking, OS-assisted solution to the mutual exclusion problem. It is basically a non-negative integer variable that saves the number of wakeup signals sent, so that they are not lost if the process is not sleeping. Another interpretation is that the semaphore value represents the number of resources available.
34. Give the semaphore related functions.(November-2011)
A semaphore enforces mutual exclusion and controls access to the process critical
sections. Only one process at a time can call the function fn.
SR Program: A Semaphore Prevents the Race Condition.
SR Program: A Semaphore Prevents Another Race Condition.


35. When the error will occur when we use the semaphore?
i. When a process interchanges the order of the wait(mutex) and signal(mutex) operations.
ii. When a process replaces a signal(mutex) with wait(mutex).
iii. When a process omits the wait(mutex), the signal(mutex), or both.
36. Differentiate counting semaphore and binary semaphore.
Binary Semaphore:
The general-purpose binary semaphore is capable of addressing the requirements of both
forms of task coordination: mutual exclusion and synchronization.
A binary semaphore can be viewed as a flag that is available (full) or unavailable
(empty).
Counting Semaphore:
Counting semaphores are another means to implement task synchronization and mutual exclusion. The counting semaphore works like the binary semaphore except that it keeps track of the
number of times a semaphore is given. Every time a semaphore is given, the count is
incremented; every time a semaphore is taken, the count is decremented. When the count reaches
zero, a task that tries to take the semaphore is blocked. As with the binary semaphore, if a
semaphore is given and a task is blocked, it becomes unblocked. However, unlike the binary
semaphore, if a semaphore is given and no tasks are blocked, then the count is incremented. This
means that a semaphore that is given twice can be taken twice without blocking.
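The give/take behaviour described above can be sketched in C. The names sem_give and sem_take echo common RTOS terminology but are hypothetical here, and instead of blocking the caller, a failed take simply returns 0 to indicate the task would block.

```c
#include <assert.h>

/* Counting semaphore sketch: "give" increments the count, "take"
 * decrements it. take() returns 1 on success; 0 means the caller
 * would block because the count is already zero. */
typedef struct { int count; } counting_sem_t;

void sem_give(counting_sem_t *s) { s->count++; }

int sem_take(counting_sem_t *s) {
    if (s->count == 0)
        return 0;       /* would block: no resources available */
    s->count--;
    return 1;
}
```

This directly demonstrates the statement above: a semaphore that is given twice can be taken twice without blocking, and a third take would block.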


37. What is priority inheritance?


Priority inheritance is a method for eliminating priority inversion problems. Using this
programming method, a process scheduling algorithm will increase the priority of a process to
the maximum priority of any process waiting for any resource on which the process has a
resource lock.
38. Define message queue.
A message queue is a buffer managed by the operating system. Message queues allow a
variable number of messages, each of variable length, to be queued. Tasks and ISRs can send
messages to a message queue, and tasks can receive messages from a message queue (if it is
nonempty). Queues can use a FIFO (First In, First Out) policy or it can be based on priorities.
Message queues provide an asynchronous communications protocol.
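The FIFO buffering policy described above can be sketched as a fixed-size ring buffer in C. The names and sizes are hypothetical; a real RTOS queue would additionally block or unblock tasks and may support priority ordering.

```c
#include <assert.h>
#include <string.h>

enum { MQ_CAP = 8, MQ_MSG_LEN = 32 };

/* Minimal FIFO message queue: messages enter at the tail and leave
 * from the head, so receivers see them in the order they were sent. */
typedef struct {
    char msgs[MQ_CAP][MQ_MSG_LEN];
    int head, tail, count;
} msgq_t;

int msgq_send(msgq_t *q, const char *msg) {
    if (q->count == MQ_CAP) return -1;          /* queue full */
    strncpy(q->msgs[q->tail], msg, MQ_MSG_LEN - 1);
    q->msgs[q->tail][MQ_MSG_LEN - 1] = '\0';
    q->tail = (q->tail + 1) % MQ_CAP;
    q->count++;
    return 0;
}

int msgq_receive(msgq_t *q, char *out) {
    if (q->count == 0) return -1;               /* queue empty */
    strcpy(out, q->msgs[q->head]);
    q->head = (q->head + 1) % MQ_CAP;
    q->count--;
    return 0;
}
```

Because send and receive only touch the counters and buffer, the queue decouples sender from receiver, which is what makes the communication asynchronous.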
39. Define mailbox and pipe.
A mailboxes are software-engineering components used for interprocess communication,
or for inter-thread communication within the same process. A mailbox is a combination of a
semaphore and a message queue (or pipe).
Message queue is same as pipe with the only difference that pipe is byte oriented while queue
can be of any size.
40. Define socket.
A socket is an endpoint for communications between tasks; data is sent from one socket
to another.
41. Define remote procedure call.
Remote Procedure Calls (RPC) is a facility that allows a process on one machine to call a
procedure that is executed by another process on either the same machine or a remote machine.
Internally, RPC uses sockets as the underlying communication mechanism.


42. Define thread cancellation & target thread.


The thread cancellation is the task of terminating a thread before it has completed. A
thread that is to be cancelled is often referred to as the target thread. For example, if multiple
threads are concurrently searching through a database and one thread returns the result, the
remaining threads might be cancelled.
43. What are the different ways in which a thread can be cancelled?
Cancellation of a target thread may occur in two different scenarios:
Asynchronous cancellation: One thread immediately terminates the target thread.
Deferred cancellation: The target thread periodically checks whether it should terminate, allowing it an opportunity to terminate itself in an orderly fashion.
44. What is preemptive and non-preemptive scheduling?(November-2012)
Under non-preemptive scheduling once the CPU has been allocated to a process, the
process keeps the CPU until it releases the CPU either by terminating or switching to the waiting
state.
Preemptive scheduling can preempt a process which is utilizing the CPU in between its
execution and give the CPU to another process.
45. What is a dispatcher?
The dispatcher is the module that gives control of the CPU to the process selected by the
short-term scheduler. This function involves:
Switching context
Switching to user mode
Jumping to the proper location in the user program to restart that program.


46. What is dispatch latency?


The time taken by the dispatcher to stop one process and start another running is known
as dispatch latency.
47. What are the various scheduling criteria for cpu scheduling?
The various scheduling criteria are

CPU utilization

Throughput

Turnaround time

Waiting time

Response time

48. Define throughput?


Throughput in CPU scheduling is the number of processes that are completed per unit
time.
For long processes, this rate may be one process per hour; for short transactions, throughput
might be 10 processes per second.
49. What is turnaround time?
Turnaround time is the interval from the time of submission to the time of completion of
a process. It is the sum of the periods spent waiting to get into memory, waiting in the ready
queue, executing on the CPU, and doing I/O.
50. Define race condition.
When several processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the accesses take place, this is called a race condition. To avoid race conditions, only one process at a time may manipulate the shared variable.
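The "one process at a time" rule can be demonstrated with pthreads: two threads increment a shared counter, and a mutex makes each read-modify-write a critical section. The helper name run_counter_demo is made up for this sketch.

```c
#include <pthread.h>
#include <assert.h>

enum { ITERS = 100000 };

static long counter;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Each worker increments the shared counter ITERS times. Without the
 * mutex, the counter++ read-modify-write would race and updates could
 * be lost; with it, only one thread at a time touches the variable. */
static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERS; i++) {
        pthread_mutex_lock(&lock);
        counter++;                 /* critical section */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

long run_counter_demo(void) {
    counter = 0;
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return counter;
}
```

With the mutex in place, the final count is deterministically 2 * ITERS, regardless of how the two threads interleave.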


51. What is critical section problem?


Consider a system consisting of n processes. Each process has a segment of code called a critical section, in which the process may be changing common variables, updating a table, or writing a file. When one process is executing in its critical section, no other process is allowed to execute in its critical section.
52. What are the requirements that a solution to the critical section problem must satisfy?
The three requirements are
Mutual exclusion
Progress
Bounded waiting
53. Define deadlock.
A process requests resources; if the resources are not available at that time, the process
enters a wait state. Waiting processes may never again change state, because the resources they
have requested are held by other waiting processes. This situation is called a deadlock.
54. What are conditions under which a deadlock situation may arise?
A deadlock situation can arise if the following four conditions hold simultaneously in a
system:
1. Mutual exclusion
2. Hold and wait
3. No pre-emption
4. Circular wait
55. What are the various shared data operating system services?
explain how operating systems provide abstraction from the computer hardware;
describe the meaning of processes, threads and scheduling in a multitasking operating system;
describe the role of memory management, explaining the terms memory swapping, memory paging, and virtual memory;
contrast the way that MS-DOS and UNIX implement file systems, and compare the design of some real operating systems.
56.What is meant by operating system?
An operating system (sometimes abbreviated as "OS") is the program that, after being
initially loaded into the computer by a boot program, manages all the other programs in a
computer. The other programs are called applications or application programs. The application
programs make use of the operating system by making requests for services through a defined
application program interface (API). In addition, users can interact directly with the operating
system through a user interface such as a command language or a graphical user interface (GUI).
57.Define protection mechanism?
In computer science, protection mechanisms are built into a computer architecture to
support the enforcement of security policies. A simple definition of a security policy is "to set
who may use what information in a computer system".
58.What is decomposition of Task?(November-2012)
Task Decomposition is the division of a larger (root) task into smaller, more manageable
elements or sub-tasks to deal with the root task at the lowest possible level and therefore with
higher simplicity. It creates the root tasks hierarchical model or breakdown that includes a series
of sub-tasks.
59. Write down the methods of fixed scheduling? (November-2011)
Each task has a fixed (static) priority computed off-line.
The ready tasks are dispatched to execution in the order determined by their priority.
In real-time systems the priority of a task is derived from its temporal requirements, not its importance to the correct functioning of the system or its integrity.
11 MARKS

1. What is a Co-Design Process and what is the Scheduling Policies?


Co-Design process:
In real-time systems, both hardware and software are dealt with in what these days are
called a co-design process. In such a process, the phases can be defined as follows:
Feasibility study: how much effort will it take to build the required system;
System analyses (SA): WHAT is the system going to do; draft, refined or detailed requirements;
System architectural design (SAD): HOW will we meet requirements by defining subsystems working in (real) parallel;
Subsystem software analyses (SSA): WHAT is a particular subsystem going to do;
Subsystem software architectural design (SSAD): similar to system architectural design, but
here we define a pseudo-parallelism in a multitasking model;
Software detailed design (SDD): design all the tasks in the system by subdividing them into
modules
Implementation: code writing, debugging, testing, and integration of the subsystem;
System integration: integrating all subsystems;
System delivery.
As can be seen, 2 important steps are concerned with architectural design. In both these
steps, the OS considered as a building block is an important factor. In the SSAD the
multitasking capabilities of the OS are important. In the SAD, the capability of supporting
multiple processor architectures, interconnected in different ways, is important.


We should be able to implement SSAD without knowing the RTOS used. However,
commercial RTOS vendors have made certain choices and you just have to work with the
possibilities and limitations of products actually available.
All products are different in terms of the choices made. This means that an SSAD will
largely depend on the RTOS chosen. This also means that porting the application to another
RTOS environment is just an illusion, even if the RTOS is POSIX compliant.
Scheduling Policies:
The scheduler is one of the basic parts of an OS. It has the function of switching from one
task, thread or process to another. As this activity constitutes overhead, it should be done as
quickly as possible. To understand scheduling, it has to be understood that each task has different
states. At least 3 states are needed to allow an OS to run smoothly: running, blocked and ready.
A task is running if it is using a processor to execute the task code. If it has to wait for another system resource, then the task is blocked (waiting for I/O, memory, etc.); it becomes runnable again once the missing resources it needs have been allocated to it. As different tasks probably want to run simultaneously, and only as many tasks can run as there are available processors, what is needed is a "waiting to run" queue. A task in this queue is considered ready, and the queue is called the ready list. In a symmetrical multiprocessor system, there is only one queue for all
If there is more than one task in the ready queue, you need a decision-making algorithm
determining which task can use the processor first. This is also called the scheduling policy.
There are probably as many policies as there are engineers inventing them. Therefore we
have to limit ourselves to the ones that are actually of use in RT systems.
In all RT systems, a deadline-driven scheduling policy is required. However this is still
under development and is not currently commercially available.
A pre-emptive priority scheduling policy is a minimum requirement. You cannot develop
a hard predictable system without it. If you apply RMS, then each task should have a different
priority level.


In more complex systems, only part of the system is hard real-time, with other parts being
soft or non-real time. The soft real-time parts should be designed like the hard RT part, with the
fact that not all the needed processor power will always be available being taken into account.
The same scheduling policy applies. In the non-RT part, a more general purpose OS (GPOS)
approach may be desirable. In GPOS systems, the philosophy is maximum usage of all system
resources. This philosophy is at odds with the RT requirement of being predictable. If you want
to give each task an equal share of the processor, a round robin scheme is more appropriate. The
non real-time part of a complex system should therefore be capable of using an RRS. Most
RTOSs implement this when you put more than one task on the same priority level. Other
RTOSs have an RRS explicitly defined for certain priority ranges.
Conclusion: an RTOS should always support pre-emptive priority scheduling. For
complex applications, where for some parts of the system a more GPOS-oriented philosophy is
needed, RRS or some other mechanisms might be useful.
2. What is a Task? Explain the Various Task States?
Tasks
Tasks = Code + Data + State (context)
Task State is stored in a Task Control Block (TCB) when the task is not running on the processor

Task states
Executing: running on the CPU
Ready: could run but another one is using the CPU
Blocked: waits for something (I/O, signal, resource, etc.)


Dormant: created but not executing yet


Terminated: no longer active
The RTOS implements a Finite State Machine for each task, and manages its transitions.
Task State Transitions
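The state list above can be sketched as a small finite state machine in C. The exact transition set varies between RTOSs; this is one plausible model, with comments naming the events that trigger each transition.

```c
#include <assert.h>

/* Task states from the list above. The RTOS implements such a finite
 * state machine for each task and manages its transitions. */
typedef enum { DORMANT, READY, EXECUTING, BLOCKED, TERMINATED } task_state_t;

/* Returns 1 if the transition is legal in this simplified model. */
int can_transition(task_state_t from, task_state_t to) {
    switch (from) {
    case DORMANT:    return to == READY;                 /* activated */
    case READY:      return to == EXECUTING;             /* dispatched */
    case EXECUTING:  return to == READY                  /* preempted */
                         || to == BLOCKED                /* waits for resource */
                         || to == TERMINATED;            /* finished */
    case BLOCKED:    return to == READY;                 /* resource granted */
    default:         return 0;                           /* terminated: no exit */
    }
}
```

Note that a blocked task cannot go straight to EXECUTING: when its resource arrives it first joins the ready list and must be dispatched like any other ready task.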

WHAT IS TASK MODEL?

Precedence constraints: specify if any task(s) must precede other tasks.

Release (arrival) time - r(i,j): The release time of the jth instance of the i-th task

Phase φ(i): The release time of the first instance of the i-th task.

Response time: Time span between task activation and its completion

Absolute deadline - d(i,j): The instant by which the jth instance of the i-th task must
complete

Relative deadline D(i): The maximum allowable response time for the task

Laxity type: Notion of urgency or leeway in a task's execution

Period - p(i): The minimum length of intervals between the release times of consecutive
tasks.


Execution time e(i): The (maximum) amount of time required to complete the
execution of the i-th task when it executes alone and has all the resources it requires.

Some basic identities:

φ(i) = r(i,1)                        // First release time

r(i,k) = φ(i) + (k-1) * p(i)         // Periodic tasks

d(i,j) = φ(i) + (j-1) * p(i) + D(i)  // Absolute deadline

If D(i) == p(i) then

d(i,k) = r(i,k) + p(i) = φ(i) + k * p(i)
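These identities translate directly into C. Here phi stands for the phase φ(i), p for the period, D for the relative deadline, and instances are numbered from 1, as in the text.

```c
#include <assert.h>

/* r(i,k) = phi(i) + (k-1) * p(i): release time of the k-th instance. */
int release_time(int phi, int p, int k) {
    return phi + (k - 1) * p;
}

/* d(i,j) = phi(i) + (j-1) * p(i) + D(i): absolute deadline of the
 * j-th instance, D being the relative deadline. */
int abs_deadline(int phi, int p, int D, int j) {
    return phi + (j - 1) * p + D;
}
```

For example, with phase 2 and period 10, the third instance is released at 2 + 2*10 = 22; with D equal to the period, its absolute deadline is the release time plus one period, exactly as the last identity states.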

Simple task model:

All tasks are strictly periodic.

The relative deadline of a task is equal to its period.

All tasks are independent: there are no precedence constraints.

No task has non-preemptible sections; the cost of preemption is negligible.

Only processing requirements count; memory and I/O requirements are negligible.

3. Explain Round Robin Scheduling Briefly And What Is FIFO? Explain Its Working In
Detail?
The ROUND ROBIN SCHEDULING ALGORITHM is designed especially for time-sharing systems. A small unit of time, called a time quantum, is defined. A time quantum is generally from ten to a hundred milliseconds. The ready queue is treated as a circular queue. The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to one time quantum.


To implement round robin scheduling, we keep the ready queue as a FIFO queue of processes. New processes are added to the tail of the ready queue. The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after one time quantum, and dispatches the process. If a process's CPU burst exceeds one time quantum, that process is preempted and put back in the ready queue. The RR scheduling algorithm is therefore preemptive.
Round robin calls for the distribution of processing time equitably among all processes requesting the processor: run a process for one time slice, then move it to the back of the queue, so each process gets an equal share of the CPU. Most systems use some variant of this.
FIFO
This is a Non-Preemptive scheduling algorithm. FIFO strategy assigns priority to processes in the
order in which they request the processor. The process that requests the CPU first is allocated the
CPU first. When a process comes in, add its PCB to the tail of ready queue. When running
process terminates, dequeue the process (PCB) at head of ready queue and run it.
Consider the example with P1=24, P2=3, P3=3.

Gantt chart for FCFS: 0-24 P1, 24-27 P2, 27-30 P3

Turnaround time for P1 = 24
Turnaround time for P2 = 24 + 3 = 27
Turnaround time for P3 = 24 + 3 + 3 = 30

Average turnaround time = (24*3 + 3*2 + 3*1) / 3 = 27

In general we have (n*a + (n-1)*b + ...) / n.

If we want to minimize this, a should be the smallest, followed by b and so on.
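The turnaround arithmetic above can be checked in C. With all processes arriving at time 0 and served in order, each turnaround time is simply the running sum of the bursts served so far; the function name is made up for this sketch.

```c
#include <assert.h>

/* Average turnaround time under FCFS for bursts served in arrival
 * order, all arriving at time 0. Each process's turnaround equals
 * its completion time, the cumulative sum of bursts up to it. */
double fcfs_avg_turnaround(const int *burst, int n) {
    int finish = 0, total = 0;
    for (int i = 0; i < n; i++) {
        finish += burst[i];       /* completion time of process i */
        total += finish;          /* its turnaround (arrival at 0) */
    }
    return (double)total / n;
}
```

For the bursts 24, 3, 3 this yields (24 + 27 + 30) / 3 = 27, matching the worked example.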


Comments: While the FIFO algorithm is easy to implement, it ignores the service time request
and all other criteria that may influence the performance with respect to turnaround or waiting
time.
Problem: One Process can monopolize CPU
Solution: Limit the amount of time a process can run without a context switch. This time is
called a time slice.
4. What is Preemptive Priority Based Scheduling and Explain Rate Monotonic Scheduling
in Detail? (April-2013)
With a preemptive priority based scheduler, each task has a priority and the kernel ensures that the CPU is allocated to the highest priority task that is ready to run. This scheduling method is preemptive in that if a task that has a higher priority than the current task becomes ready to run, the kernel immediately saves the current task's context and switches to the context of the higher priority task.
The Wind kernel has 256 priority levels(0-255). Priority 0 is the highest and priority 255
is the lowest. Tasks are assigned a priority when created; however, while executing, a task can
change its priority using taskPrioritySet().
Example: Preemptive Priority Based Scheduling
One of the arguments to taskSpawn() is the priority at which the task is to execute:
id = taskSpawn(name, priority, options, stacksize, function, arg1,.. , arg10);
By varying the priority (0-255) of the task spawned, you can affect the priority of the task. Priority 0 is the highest and priority 255 is the lowest. Note that the priority of a task is relative to the priorities of other tasks; the task priority number has no particular significance by itself.
In addition, a task's priority can be changed after it is spawned using the following routine:

taskPrioritySet(int tid, int newPriority): Change the priority of a task.


In the example below, there are three tasks with different priorities (HIGH, MID, LOW). The result of running the program is that the task with the highest priority, "taskThree", will run to completion first, followed by the next highest priority task, "taskTwo", and finally the task with the lowest priority, "taskOne".
Rate Monotonic Scheduling
The Rate Monotonic Scheduling Algorithm (RMS) is important to real-time systems
designers because it allows one to guarantee that a set of tasks is schedulable. A set of tasks is
said to be schedulable if all of the tasks can meet their deadlines. RMS provides a set of rules
which can be used to perform a guaranteed schedulability analysis for a task set. This analysis
determines whether a task set is schedulable under worst-case conditions and emphasizes the
predictability of the system's behavior. It has been proven that:
RMS is an optimal static priority algorithm for scheduling independent, preemptible,
periodic tasks on a single processor.
RMS is optimal in the sense that if a set of tasks can be scheduled by any static priority algorithm, then RMS will be able to schedule that task set. RMS bases its schedulability analysis
RMS calls for the static assignment of task priorities based upon their period. The shorter
a task's period, the higher its priority. For example, a task with a 1 millisecond period has higher
priority than a task with a 100 millisecond period. If two tasks have the same period, then RMS
does not distinguish between the tasks. However, RTEMS specifies that when given tasks of
equal priority, the task which has been ready longest will execute first. RMS's priority
assignment scheme does not provide one with exact numeric values for task priorities. For
example, consider the following task set and priority assignments:
TASK    PERIOD (in milliseconds)    PRIORITY
1       100                         Low
2       50                          Medium
3       50                          Medium
4       25                          High

RMS only calls for task 1 to have the lowest priority, task 4 to have the highest priority,
and tasks 2 and 3 to have an equal priority between that of tasks 1 and 4. The actual RTEMS
priorities assigned to the tasks must only adhere to those guidelines.
Many applications have tasks with both hard and soft deadlines. The tasks with hard
deadlines are typically referred to as the critical task set, with the soft deadline tasks being the
non-critical task set. The critical task set can be scheduled using RMS, with the non-critical tasks
not executing under transient overload, by simply assigning priorities such that the lowest
priority critical task (i.e. longest period) has a higher priority than the highest priority non-critical
task. Although RMS may be used to assign priorities to the non-critical tasks, it is not necessary.
In this instance, schedulability is only guaranteed for the critical task set.
5. What Is Priority Inversions? (November-2011)
Priority inversion is a problematic scenario in scheduling when a higher priority task is
indirectly preempted by a lower priority task effectively "inverting" the relative priorities of the
two tasks.
This violates the priority model that high priority tasks can only be prevented from
running by higher priority tasks and briefly by low priority tasks which will quickly complete
their use of a resource shared by the high and low priority tasks.
Example of a priority inversion
Consider there is a task L, with low priority. This task requires resource R. Consider that
L is running and it acquires resource R. Now, there is another task H, with high priority. This
task also requires resource R. Consider H starts after L has acquired resource R. Now H has to
wait until L relinquishes resource R.


Everything works as expected up to this point, but problems arise when a new task M starts with medium priority during this time. Since R is still in use (by L), H cannot run. Since M is the highest priority unblocked task, it will be scheduled before L. Since L has been preempted by M, L cannot relinquish R. So M will run until it is finished; then L will run, at least up to a point where it can relinquish R, and then H will run. Thus, in the above scenario, a task with medium priority ran before a task with high priority, effectively giving us a priority inversion.
In some cases, priority inversion can occur without causing immediate harm: the delayed execution of the high priority task goes unnoticed, and eventually the low priority task releases the shared resource. However, there are also many situations in which priority inversion can cause serious problems. If the high priority task is left starved of resources, it might lead to a system malfunction or the triggering of pre-defined corrective measures, such as a watchdog timer resetting the entire system.
Priority inversion can also reduce the perceived performance of the system. Low priority tasks usually have a low priority because it is not important for them to finish promptly (for example, they might be a batch job or another non-interactive activity). Similarly, a high priority task has a high priority because it is more likely to be subject to strict time constraints: it may be providing data to an interactive user, or acting subject to real-time response guarantees. Because priority inversion results in the execution of the low priority task blocking the high priority task, it can lead to reduced system responsiveness, or even the violation of response time guarantees.
A similar problem called deadline interchange can occur within earliest deadline first
scheduling (EDF).
Solutions
The existence of this problem has been known since the 1970s, but there is no fool-proof
method to predict the situation. There are however many existing solutions, of which the most
common ones are:
Disabling all interrupts to protect critical sections


When disabled interrupts are used to prevent priority inversion, there are only two
priorities: preemptible, and interrupts disabled. With no third priority, inversion is
impossible. Since there's only one piece of lock data (the interrupt-enable bit),
misordering locking is impossible, and so deadlocks cannot occur. Since the critical
regions always run to completion, hangs do not occur. Note that this only works if all
interrupts are disabled. If only a particular hardware device's interrupt is disabled, priority
inversion is reintroduced by the hardware's prioritization of interrupts. A simple
variation, "single shared-flag locking" is used on some systems with multiple CPUs. This
scheme provides a single flag in shared memory that is used by all CPUs to lock all interprocessor critical sections with a busy-wait. Interprocessor communications are
expensive and slow on most multiple CPU systems. Therefore, most such systems are
designed to minimize shared resources. As a result, this scheme actually works well on
many practical systems. These methods are widely used in simple embedded systems,
where they are prized for their reliability, simplicity and low resource use. These schemes
also require clever programming to keep the critical sections very brief, under 100
microseconds in practical systems. Many software engineers consider them impractical in
general-purpose computers.
A priority ceiling
With priority ceilings, the shared mutex process (that runs the operating system
code) has a characteristic (high) priority of its own, which is assigned to the task locking
the mutex. This works well, provided the other high priority task(s) that tries to access the
mutex does not have a priority higher than the ceiling priority.
Priority inheritance
Under the policy of priority inheritance, whenever a high priority task has to wait
for some resource shared with an executing low priority task, the low priority task is
temporarily assigned the priority of the highest waiting priority task for the duration of its
own use of the shared resource, thus keeping medium priority tasks from pre-empting the
(originally) low priority task, and thereby effecting the waiting high priority task as well.
Once the resource is released, the low priority task continues at its original priority level.


6. What Is Priority Ceiling and Explain Memory Allocation Strategies?


Priority Ceiling:
In real-time computing, the priority ceiling protocol is a synchronization protocol for
shared resources to avoid unbounded priority inversion and mutual deadlock due to wrong
nesting of critical sections. In this protocol each resource is assigned a priority ceiling, which is a
priority equal to the highest priority of any task which may lock the resource.
For Eg: With priority ceilings, the shared mutex process (that runs the operating system
code) has a characteristic (high) priority of its own, which is assigned to the task locking the
mutex. This works well, provided the other high priority task(s) that tries to access the mutex
does not have a priority higher than the ceiling priority.
In the Immediate Ceiling Priority Protocol (ICPP), when a task locks the resource its priority is temporarily raised to the priority ceiling of the resource, so no other task that may lock the resource can get scheduled. This allows a low priority task to defer the execution of higher-priority tasks.
The Original Ceiling Priority Protocol (OCPP) has the same worst-case performance
but is subtly different in the implementation which can provide finer grained priority inheritance
control mechanism than ICPP.
A task will not get scheduled if any resource it may lock actually has been locked by
another task, and therefore the priority ceiling protocol prevents deadlocks.
ICPP is called "Priority Protect Protocol" in POSIX and "Priority Ceiling Emulation" in
RTSJ.
Memory Allocation Strategies
There are many allocation strategies. They depend upon how blocks are allocated and
freed. One characterization is by how a block is chosen for allocation.


First Fit
First fit just starts at the front of the list of free storage and grabs the first block which is
"big enough". If it is "too big" it "splits" the block, returning you a pointer to the front part of the
block and returning the tail of the block back to the free pool.
First-fit has lousy performance. It may require skipping over many blocks that are too small, thus wasting time, and it tends to split big blocks, thus increasing memory fragmentation. The original C allocators were not only first-fit, but they had to scan the entire heap from start to finish, skipping over the already-allocated blocks. On a modern machine, where you might have a gigabyte of allocation in tens of thousands of blocks, this is guaranteed to maximize your page-faulting behavior. So it has truly lousy allocation performance and fragments memory badly; it is considered a truly losing algorithm.
Best Fit
"Best fit" tries to find a block that is "just right". The problem is that this requires
keeping your free list in sorted order, so you can see if there is a really good fit, but you still have
to skip over all the free blocks that are too small. As memory fragments, you get more and more
small blocks that interfere with allocation performance, and deallocation performance requires
that you do the insertion properly in the list. So again, this allocator tends to have truly lousy
performance.
Quick Fit
The idea here is that you keep a "cache" of free blocks of storage rooted in their sizes.
Sizes at this level are always multiples of some basic allocation granularities, such as DWORD.
Like most algorithms that work well, it is based on patterns of observed behavior. The L1 and
L2 caches rely on what is called "locality of reference" to in effect prefetch and cache data that is
very likely to be used in the near future. LRU page replacement is based on the fact that a page
which hasn't been used in a long time is unlikely to be used in the near future. Working set is
based on the fact that, like caches, the pages you used most recently are likely to be the pages
you are most likely to use in the future. QuickFit is based on the observed premise that most


programs allocate only a small number of discrete sizes of objects. You might allocate and free
objects of type A hundreds of times a second, so the same few sizes recur constantly.
QuickFit relies on lazy coalescing. If you free a block of size n, you are very likely, in the near
future, to allocate a block of size n, because what you really did was free an object of type A
(sizeof(A) == n) and you are therefore fairly likely in the near future to allocate a new object of
type A. So instead of returning the block to the general heap, and possibly coalescing it with
nearby blocks, you just hang onto it, keeping it in a list of blocks of size n. The next time you
allocate an object of size n, the allocator looks on the free list[n] to see if there are any blocks
laying around to be reallocated, and if one is, your allocation is essentially instantaneous. Only
if the list is empty do you revert to one of the slower algorithms.
In QuickFit, coalescing is done before you decide to ask the operating system for more storage.
First you run a coalesce pass, and then see if you now have a big enough block. If you don't,
then you get more space from the system.
7. Explain The Concept Deadlock in Detail?
A set of processes is deadlocked if each process in the set is waiting for an event that only
another process in the set can cause (including itself).
Waiting for an event could be:
 waiting for access to a critical section
 waiting for a resource (note that it is usually non-preemptable; pre-emptable resources can be yanked away and given to another process)

Conditions for Deadlock
 Mutual exclusion: resources cannot be shared.
 Hold and wait: processes request resources incrementally, and hold on to what they've got.
 No preemption: resources cannot be forcibly taken from processes.
 Circular wait: a circular chain of waiting, in which each process is waiting for a resource held by the next process in the chain.

Strategies for dealing with Deadlock
 ignore the problem altogether (the ostrich algorithm): deadlock may occur very infrequently, and the cost of detection/prevention may not be worth it
 detection and recovery
 avoidance by careful resource allocation
 prevention by structurally negating one of the four necessary conditions

Deadlock Prevention
The difference from avoidance is that here the system itself is built in such a way that there are
no deadlocks: make sure at least one of the 4 deadlock conditions is never satisfied. This may,
however, be even more conservative than a deadlock avoidance strategy.
 Attacking the mutex condition
o never grant exclusive access; but this may not be possible for several resources
 Attacking preemption
o not something you want to do
 Attacking the hold and wait condition
o make a process hold at most 1 resource at a time
o make all the requests at the beginning: an all-or-nothing policy; if refused, retry, e.g. 2-phase locking
 Attacking circular wait
o order all the resources
o make sure that requests are issued in the correct order so that there are no cycles present in the resource graph
o resources are numbered 1 ... n and can be requested only in increasing order, i.e. you cannot request a resource whose number is less than any you may be holding
Deadlock Avoidance
Avoid actions that may lead to a deadlock. Think of it as a state machine moving from one
state to another as each instruction is executed.
Safe State
A safe state is one where
o it is not a deadlocked state
o there is some sequence by which all requests can be satisfied
To avoid deadlocks, we try to make only those transitions that will take us from one safe state
to another. We avoid transitions to an unsafe state (a state that is not deadlocked, but is not
safe either).
e.g.
Total # of instances of the resource = 12
(Max, Allocated, Still Needs)
P0 (10, 5, 5)
P1 (4, 2, 2)
P2 (9, 2, 7)
Free = 3
The sequence <P1, P0, P2> is a reducible sequence, so the first state is safe.
What if P2 requests 1 more and is allocated 1 more instance? This results in an unsafe state,
so do not allow P2's request to be satisfied.

Banker's Algorithm for Deadlock Avoidance
When a request is made, check to see whether, after the request is satisfied, there is (at least
one!) sequence of moves that can satisfy all the requests, i.e. the new state is safe. If so,
satisfy the request, else make the request wait.
How do you find out if a state is safe?
n processes and m resources:
Max[n * m]
Allocated[n * m]
Still_Needs[n * m]
Available[m]
Temp[m]
Done[n]

Temp[j] = Available[j] for all j
while (true) {
    find an i such that
        a) Done[i] == FALSE
        b) Still_Needs[i,j] <= Temp[j] for all j
    if such an i exists {
        Temp[j] += Allocated[i,j] for all j
        Done[i] = TRUE
    } else break
}
if Done[i] == TRUE for all i then the state is safe
else the state is unsafe
Detection and Recovery
Is there a deadlock currently?
 One resource of each type (1 printer, 1 plotter, 1 terminal etc.)
o Check if there is a cycle in the resource graph: for each node N in the graph, do a DFS (depth first search) with N as the root; if in the DFS you come back to a node already traversed, then there is a cycle.
 Multiple resources of each type
o m resources, n processes
o Max resources in existence = [E1, E2, E3, .... Em]
o Current Allocation = C1-n,1-m
o Resources currently Available = [A1, A2, ... Am]
o Request matrix = R1-n,1-m
o Invariant: Sum(Cij) + Aj = Ej
o Define A <= B for 2 vectors A and B, if Ai <= Bi for all i
o Overview of the deadlock detection algorithm: check the matrix and find a row i such that Ri <= A. If such a process is found, add Ci to A and remove process i from the system. Keep doing this till either you have removed all processes, or you cannot remove any other process. Whatever is remaining is deadlocked.
o The basic idea is that there is at least 1 execution order which will un-deadlock the system.
Recovery
 through preemption
o take away the resource from the process currently having it
 rollback
o keep checkpointing periodically; when a deadlock is detected, see which resource is needed and take it away from the process currently having it. Later on, you can restart this process from a checkpointed state, where it may need to reacquire the resource.
 killing processes
o where possible, kill a process that can be rerun from the beginning without ill effects

8. Explain Process Synchronization (November-2012)
 The Critical-Section Problem
 Classic Problems of Synchronization
Critical-Section Problem
Producer-Consumer Problem:
Producer Process: produces information
Consumer Process: consumes the information
 unbounded-buffer: places no practical limit on the size of the buffer
 bounded-buffer: assumes that there is a fixed buffer size

Solution to the Critical-Section Problem
A solution must satisfy the following three conditions:
1. Mutual Exclusion - If process Pi is executing in its critical section, then no other
processes can be executing in their critical sections.
2. Progress - If no process is executing in its critical section and there exist some processes
that wish to enter their critical sections, then the selection of the process that will enter
the critical section next cannot be postponed indefinitely.
3. Bounded Waiting - A bound must exist on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its
critical section and before that request is granted.
Classical Problems of Synchronization
 Bounded-Buffer Problem
 Readers and Writers Problem
 Dining-Philosophers Problem

Bounded-Buffer Problem
 N buffers, each can hold one item
 Semaphore mutex initialized to the value 1
 Semaphore full initialized to the value 0
 Semaphore empty initialized to the value N

Readers-Writers Problem
 A data set is shared among a number of concurrent processes
 Readers only read the data set; they do not perform any updates
 Writers can both read and write
 Problem: allow multiple readers to read at the same time, but only one single writer can access the shared data at a time
Shared Data
 Data set
 Semaphore mutex initialized to 1
 Semaphore wrt initialized to 1
 Integer readcount initialized to 0

Dining-Philosophers Problem
Shared data
 Bowl of rice (data set)
 Semaphore chopstick[5] initialized to 1
The structure of philosopher i:
while (true) {
    wait ( chopstick[i] );
    wait ( chopstick[ (i + 1) % 5] );
    // eat
    signal ( chopstick[i] );
    signal ( chopstick[ (i + 1) % 5] );
    // think
}
9. Explain Inter Process Communication Briefly?
Inter-process communication (IPC) is a set of interfaces that allows a programmer to
communicate between a series of processes. This allows programs to run concurrently in an
operating system.
There are quite a number of methods used in inter-process communications. They are:
Pipes: This allows the flow of data in one direction only. Data from the output is usually
buffered until the input process receives it which must have a common origin.
Named Pipes: This is a pipe with a specific name. It can be used by processes that do not
share a common process origin. An example is a FIFO, where the pipe that data is written to is
named first.
Message queuing: This allows messages to be passed between processes using either a
single queue or several message queues. This is managed by the system kernel. These messages
are coordinated using an application program interface (API).
Semaphores: These are used in solving problems associated with synchronization and
avoiding race conditions. They are integer values which are greater than or equal to zero.


Shared Memory: This allows the interchange of data through a defined area of memory.
A semaphore value has to be obtained before a process can get access to the shared memory.
Sockets: This method is mostly used to communicate over a network, between a client
and a server. It allows for a standard connection which is computer and operating system
independent.
Mutual exclusion by busy waiting has a shortcoming: it wastes processor time.
There are inter-process primitives that block instead of wasting the processor time.
Some of these are:
Sleep and Wakeup
SLEEP is a system call that causes the caller to block, that is, be suspended until another
process wakes it up. The WAKEUP call has one parameter, the process to be awakened.
The Producer-Consumer Problem
In this case, two processes share a common, fixed-size buffer. One of the processes, the
producer, puts information into the buffer, and the other one, the consumer, takes it out. With 3
or more processes one wakeup-waiting bit is insufficient; as another patch, a second
wakeup-waiting bit could be added, or 8 or 32 of them, but the problem of race conditions will
still be there.
Events Counter
This makes it possible to program the problem without requiring mutual exclusion. An
event counter's value can only increase and never decrease. There are three operations defined
on an event counter, for example E:
1. Read (E): Return the value of E.
2. Advance (E): Atomically increment E by 1.
3. Await (E, v): Wait until E has a value of v or more.

Two event counters are used. The first one, in, counts the cumulative number
of items that the producer discussed above has put into the buffer since the program started
running. The other one, out, counts the cumulative number of items that the consumer has
removed from the buffer so far. Therefore, it is clear that in must be greater than or equal to out,
but in - out must not exceed the size of the buffer. This is the method that works with the pipes
discussed above.
Monitors
This is about the best way of achieving mutual exclusion.
A monitor is a collection of procedures, variables, and data structures that are grouped
together in a special kind of module or package. The monitor uses wait and signal. The
"WAIT" indicates to the other process that the buffer is full; it causes the calling process
to block and allows a process that was earlier prohibited from entering to do so at this point.
"SIGNAL" allows the other process to be awakened by the process that entered during the
"WAIT".
10. What is the Concept behind Shared Memory?
Shared memory is memory that is accessible to a number of processes. By several orders
of magnitude, it is the quickest way of sharing information among a set of processes. Keep in
mind that shared memory is available on all operating systems. Only the calls will be different.
Shared memory is persistent. It does not go away when no program is referencing it. This can
be a good thing, but it can tie up system resources.
shmget system call. Gets or creates a shared memory segment. The call has the following
format:
#include <sys/types.h>


#include <sys/ipc.h>
#include <sys/shm.h>
int shmget ( key_t key, size_t size, int shmflg)
Returns shared memory ID if successful and -1 if error.
key identifies the segment; a key is typically generated from a path with ftok, which
returns a unique key if successful and (key_t) -1 if it fails (it fails if the path does not
exist).
size is used when creating the shared memory segment to specify the size. If the shared
memory segment exists, it may be 0 or used to specify the minimum size of the shared
memory. The size of the shared memory you request is less than or equal to the amount
you actually get: because shared memory is allocated in whole pages, the actual amount
of shared memory allocated may be bigger than the size requested.
shmflg are flags specifying options for this command. Right-most 9 bits are
permissions. (Same bits as in open) SHM_R and SHM_W for owner and these shifted 3
bits to the right for group and these shifted 6 bits to the right for other. Also have
IPC_CREAT and IPC_EXCL bits. These are similar to O_CREAT and O_EXCL for the
file system. There are additional flags that are not as general. For example, on Solaris,
there is the ability to make the shared memory dynamically resizable.
shmat system call. Attaches the shared memory segment to our process. This is separate from
shmget because there is a limit on the number of shared memory segments to which we can
attach. This is a system-imposed limit. (Let's try and figure this out when I do an example.) So,
if you are dealing with a large number of shared memory segments, a good strategy would be to
get the IDs for all that we need and then attach only to those shared memory segments that we
are actively using. We can detach from a shared memory segment.
The system call shmat is defined as follows:
#include <sys/types.h>


#include <sys/ipc.h>
#include <sys/shm.h>
void *shmat ( int shmid, const void *shmaddr, int shmflg );
Returns a pointer to the shared memory segment or -1 if error.
shmid is the shared memory ID
shmaddr should be NULL. This argument can allow you to specify the address to
associate with the shared memory. For modern computers, it is hard to find a reason for
using this argument. Do not specify an address unless you have a very good reason.
shmflg are flags for this call. The only one that we will care about is SHM_RDONLY.
This makes the shared memory segment read only. This is like opening a file to be read
only. So, if we don't want read only shared memory, this argument will be set to zero.
shmdt detaches a shared memory segment from a process. There is a limit to how many
shared memory segments you may attach to. shmdt is used so that another segment may
be attached. The format of this call is as follows:
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
int shmdt (const void *shmaddr );
shmaddr is the address of the shared memory segment.
Returns 0 if successful and -1 if fails.
shmctl performs a bunch of utility functions on shared memory. Its description follows:


#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
int shmctl ( int shmid, int cmd, struct shmid_ds *buf)
Returns 0 if successful and -1 if fails.
cmd is the command that we wish to have executed.
IPC_RMID removes the shared memory segment when no one is using it.
IPC_STAT gets data associated with shared memory segment into buf
IPC_SET allows 3 fields in buf to be changed.
buf is used by certain commands. The most often used command is IPC_RMID.

11. What Is Memory Locking? Explain Its Function?


Memory locking is one way to ensure that a process stays in main memory and is exempt
from paging. In a realtime environment, a system must be able to guarantee that it will lock a
process in memory to reduce latency for data access, instruction fetches, buffer passing between
processes, and so forth. Locking a process's address space in memory helps ensure that the
application's response time satisfies realtime requirements. As a general rule, time-critical
processes should be locked into memory.
Realtime application developers should consider memory locking as a required part of
program initialization. Many realtime applications remain locked for the duration of execution,
but some may want to lock and unlock memory as the application runs. DIGITAL UNIX
memory-locking functions let you lock the entire process at the time of the function call and
throughout the life of the application, or selectively lock and unlock as needed.


Memory locking applies to a process's address space. Only the pages mapped into a
process's address space can be locked into memory. When the process exits, pages are removed
from the address space and the locks are removed.
Two functions, mlock and mlockall, are used to lock memory. The mlock function allows
the calling process to lock a selected region of address space. The mlockall function causes all of
a process's address space to be locked. Locked memory remains locked until either the process
exits or the application calls the corresponding munlock or munlockall function.
Memory locks are not inherited across a fork and all memory locks associated with a
process are unlocked on a call to the exec function or when the process terminates.
For most realtime applications the following control flow minimizes program complexity and
achieves greater determinism by locking the entire address into memory.
1. Perform nonrealtime tasks, such as opening files or allocating memory
2. Lock the address space of the process calling mlockall function
3. Perform realtime tasks
4. Release resources and exit
The memory-locking functions are as follows:
FUNCTION        DESCRIPTION
mlock           Locks a specified region of a process's address space
mlockall        Locks all of a process's address space
munlock         Unlocks a specified region of a process's address space
munlockall      Unlocks all of a process's address space

Locking and Unlocking a Specified Region


The mlock function locks a preallocated specified region. The address and size arguments
of the mlock function determine the boundaries of the preallocated region. On a successful call to
mlock, the specified region becomes locked. Memory is locked by the system according to
system-defined pages. If the address and size arguments specify an area smaller than a page, the
kernel rounds up the amount of locked memory to the next page. The mlock function locks all
pages containing any part of the requested range, which can result in locked addresses beyond
the requested range.
Repeated calls to mlock could request more physical memory than is available; in such
cases, subsequent processes must wait for locked memory to become available. Realtime
applications often cannot tolerate the latency introduced when a process must wait for lockable
space to become available. Preallocating and locking regions is recommended for realtime
applications.
If the process requests more locked memory than will ever be available in the system, an
error is returned.
Memory Allocation with mlock

The mlock function locks all pages defined by the range addr to addr+len-1 (inclusive).
The area locked is the same as if the len argument were rounded up to a multiple of the page
size before decrementing by 1. The address must be on a page boundary and all pages mapped by
the specified range are locked. Therefore, you must determine how far the return address is from
a page boundary and align it before making a call to the mlock function.
Use the sysconf(_SC_PAGE_SIZE) function to determine the page size. The size of a
page can vary from system to system. To ensure portability, call the sysconf function as part of
your application or profile when writing applications that use the memory-locking functions. The
sys/mman.h header file defines the maximum amount of memory that can be locked. Use the
getrlimit function to determine the amount of total memory.
Exercise caution when you lock memory; if your processes require a large amount of
memory and your application locks memory as it executes, your application may take resources
away from other processes. In addition, you could attempt to lock more virtual pages than can be
contained in physical memory.
Locked space is automatically unlocked when the process exits, but you can also
explicitly unlock space. The munlock function unlocks the specified address range regardless of
the number of times the mlock function was called. In other words, you can lock address ranges
over multiple calls to the mlock function, but can remove the locks with a single call to the
munlock function. Space locked with a call to the mlock function must be unlocked with a
corresponding call to the munlock function.
12. What are Signals Related IPC Functions?
One way of messaging is to use an OS function signal ( ). It is provided in Unix, Linux
and several RTOSes. Unix and Linux OSes use signals profusely and have 31 different types of
signals for the various events. The task or process sending the signal uses the function signal ( )
with an integer number n in the argument. A signal function executes like a software
interrupt instruction INT n or SWI n.
A signal provides the shortest communication. The signal ( ) sends a number n to a
process, which enables the OS to unmask a signal in the signal mask of a process or task as per n. The
called task is the signal handler and has coding similar to the ones in an ISR. The handler runs in a way


similar to a highest priority ISR. An ISR runs on an hardware interrupt provided that the
interrupt is not masked. The handler runs on the signal provided that the signal is not masked.
The signal ( ) forces the OS to first run a signaling process or task called signal handler.
When there is return from the signaled or forced task or process, the process, which sent the
signal, runs the codes as happens on a return from an ISR. A signal mask is the software
equivalent of the flag in a register that is set on masking a hardware interrupt. Unless masked by a
signal mask, the signal allows the execution of the signal handler.
An integer number (for example n) represents each signal and that number associates a
function (or process or task) signal handler, an equivalent of the ISR. The signal handler has a
function called whenever a process communicates that number.
A signal handler is not called by a code. When the signal is sent from the process, OS
interrupts the process execution and calls the function for signal handling. On return from the
signal hander, the process continues as before.
For example, signal (5). The signal mask of a signal handler 5 is reset. The signal handler
and connect function associate the number 5. The function represented by number 5 is forced run
by the signal handler.
An advantage of using it is that unlike semaphores it takes the shortest possible CPU time
to force a handler to run. The signals are the interrupts that can be used as the IPC function of
synchronizing.
A signal is unlike a semaphore. A semaphore is used as a token or resource key to
let another task or process block, or to lock a resource to a particular task or process for a given
section of code. A signal is just an interrupt that is shared and used by another interrupt-servicing
process. A signal raised by one process forces another process (the signal handler) to
interrupt and catch that signal, in case the signal is not masked (i.e. use of that signal handler is not
disabled). Signals are to be handled carefully and only for forcing the run of very high priority
processes, as they may disrupt the usual schedule and priority inheritance mechanism. They may
also cause reentrancy problems.


An important application of signals is to handle exceptions. (An exception is a
process that is executed on a specific reported run-time condition.) A signal reports an error
(called an exception) during the running of a task and then lets the scheduler initiate an
error-handling process or function or task. An error-handling signal handler handles the error
logging of the other task. The device driver functions also use signals to call the handlers (ISRs).
The following are the signal-related IPC functions, which are generally not provided in an
RTOS such as µC/OS-II and are provided in an RTOS such as VxWorks or an OS such as Unix and Linux.
1. sigHandler ( ) to define the signal handler and its signal context. The context corresponds
to a signal identified by the signal number and defines a pointer to the signal context,
which saves the registers on a signal.
2. Connect an interrupt vector to a signal; this provides the PC value for the signal handler
function address.
3. A function signal ( ) to send a signal identified by a number in the argument to a task.
4. Mask the signal.
5. Unmask the signal.
6. Ignore the signal.

13. What is Semaphore Flag and Explain Mutex as Resource Key?

Semaphore Flag
The OS uses a semaphore as a notice or token for an event occurrence.
A semaphore facilitates IPC by notifying (through the scheduler) a waiting task section
to change to the running state upon an event at the presently running code section in an ISR or
task. A binary semaphore is a token or resource key. The OS also provides
for mutex access to a set of codes in a task (or thread or process). The use of mutex is
such that the priority inversion problem is not solved in some OSes while it is solved in
others. The OS also provides counting semaphores. The OS may provide the
POSIX standard P and V semaphores, which can be used for notifying event occurrence,
as a mutex, or for counting. A timeout can be defined in the argument of the wait
function for the semaphore IPC. An error pointer can also be defined in the arguments
of a semaphore IPC function.

The following are the functions generally provided in an OS, for example
µC/OS-II, for semaphores.
1. OSSemCreate, a semaphore function to create the semaphore in an event control
block (ECB). Initialize it with an initial value.
2. OSSemPost, a function which sends the semaphore notification to ECB and it
value increments on event occurrence (used in ISRs as well as tasks).
3. OSSemPend, a function, which waits the semaphore from an event, and its value
decrements on taking note of that event occurrence (used in tasks not in ISRs).
4. OSSemAccept, a function, which reads and returns the present semaphore value
and if it shows occurrence of an event (by non-zero value) then it takes note of
that and decrements that value (no wait; used in ISRs as well as tasks).
5. OSSemQuery, a function, which queries the semaphore for an event occurrence or
non-occurrence by reading its value and returns the present semaphore value, and
returns pointer to the data structure OSSemdata. The semaphore value does not
decrease. The OSSemData points to the present value and table of the tasks
waiting for the semaphore (used in tasks).
Mutex as Resource Key
An OS using a mutex blocks a critical section in a task when another task's critical
section has taken the mutex; the other task unlocks it on releasing the mutex. The wait for the
mutex by the blocked task can be for a specified timeout.
There is a function in the kernel called lock ( ). It locks a process to a resource till that process
executes unlock ( ). A wait loop is created, and when the wait is over the other process waiting for the
lock starts. Use of lock ( ) and unlock ( ) involves little overhead compared to the use of
OSSemPend ( ) and OSSemPost ( ) when using a mutex. Overhead means the number of operations
needed for blocking one process and starting another. However, a high-priority process
should not lock out the other processes by blocking an already running task in the following
situation: suppose a task is running and a little is left for its completion; the running time left for
it is less than the time that would be taken in blocking it and context switching. There
is an innovative concept of spin locking in certain OS schedulers, and a spin lock is a powerful tool
in the situation described before. The scheduler's locking process for a task J waits in a loop to
cause the blocking of the running task I, first for a time interval t, then for (t - Δt), then (t - 2Δt) and so
on. When this time interval spins down to 0, the task that requested the lock of the processor
now unlocks the running task I and blocks it from further running. The request is now granted to
task J to unblock and start running, provided that task is of higher priority. A spin lock does not
let a running task be blocked instantly, but first successively tries with decreasing
trial periods before finally blocking the task. A spin lock obviates the need for context switching by
pre-emption and for mutex function calls to the OS.
14. Explain Message Queue in Detail?
Some OSes do not distinguish, or make little distinction, between the use of queues, pipes
and mailboxes during the message communication among processes, while other OSes regard the
use of queues as different.
A message queue is an IPC with the following features.
1. An OS provides for inserting and deleting the message pointers or messages.
2. Each queue for the message or message-pointers needs initialization (creation) before
using functions in kernel for the queue.
3. Each created queue has an ID.
4. Each queue has a user-definable size (upper limits for number of bytes).
5. When an OS call inserts a message into the queue, the bytes inserted are as per the pointed number of bytes. For example, for an integer or float variable as pointer, 4 bytes are inserted per call. If the pointer is for an array of eight integers, then 32 bytes are inserted into the queue. When a message pointer is inserted into the queue, 4 bytes are inserted, assuming 32-bit addresses.
6. When a queue becomes full, there is an error-handling function to handle that.
The OS functions for a queue, for example in μC/OS-II, can be as follows:

189


1. OSQCreate, a function that creates a queue and initializes the queue.


2. OSQPost, a function that sends a message into the queue as per the queue tail pointer; it can be used by tasks as well as ISRs.
3. OSQPend waits for a queue message at the queue, and reads and deletes it when received (wait; used by tasks only, not by ISRs).
4. OSQAccept deletes the present message at the queue head after checking for its presence, yes or no, and after the deletion the queue head pointer increments (no wait; used by ISRs as well as tasks).
5. OSQFlush deletes the messages from queue head to tail. After the flush, the queue head and tail point to QTop, which is the pointer at the start of the queue (used by ISRs and tasks).
6. OSQQuery queries the queue message block, but the message is not deleted. The function returns a pointer to the message queue *QHEAD if there are messages in the queue, or else returns NULL. It returns a pointer to the queue data structure with *QHEAD, the number of queued messages, the size, and the table of tasks waiting for messages from the queue (used in tasks).
7. OSQPostFront sends a message to the front pointer, *QHEAD. This function is used when a message is urgent or of higher priority than all the previously posted messages in the queue (used in ISRs and tasks).
In certain RTOSes, a queue is given a select option, and the option provided is for priority or FIFO. If the priority option is selected, the waiting task having the highest priority deletes a queue message first. If the FIFO option is selected, the task pending for the longest period deletes a queue message first.
15. What is the Purpose of Mailbox in Real Time Operating Systems? (April/May-2012)
A message mailbox is for an IPC message that can be used only by a single destined task. The mailbox message is a message pointer or can be a message. (μC/OS-II provides for sending a message pointer into the box.) The source (mail sender) is the task that sends the message pointer to a created (initialized) mailbox; the box initially has the NULL pointer before the message posts into the box. The destination is the place where the OSMBoxPend function waits for the mailbox message and reads it when received.

190


A mobile phone LCD display task is an example that uses message mailboxes as an IPC. When the time and date message from a clock process arrives in the mailbox, the time is displayed at the side corner of the top line. When a message arrives from another task to display a phone number, the number is displayed in the middle of a line. When a message arrives from another task to display the signal strength at the antenna, it is displayed as the vertical bar on the left.
Another example of using a mailbox is the mailbox for an error-handling task, which handles the different error logs from other tasks.
The following may be the provisions at an OS for IPC functions when using the mailbox.
1. A task may put into the mailbox only a pointer to the message block, or a number of message bytes, as per the OS provisioning.
2. There are three types of mailbox provisions:
A queue may be assumed to be a special case of a mailbox with provision for multiple messages or message pointers. An OS can provide a queue from which a read (deletion) can be on a FIFO basis; alternatively, an OS can provide for multiple mailbox messages with each message having a priority parameter. The read (deletion) can then only be on a priority basis in case the mailbox has multiple messages with priorities assigned. Even if the messages are inserted in a different order, the deletion is as per the assigned priority parameter.
An OS may provide for the mailbox and queue separately. A mailbox will permit one message pointer per box, and the queue will permit multiple messages or message pointers. The μC/OS-II RTOS is an example of such an OS.
The RTOS functions for the mailbox, for use by the tasks, can be the following:
1. OSMBoxCreate creates a box and initializes the mailbox contents with a NULL pointer.
2. OSMBoxPost sends (writes) a message to the box.
3. OSMBoxPend waits for a mailbox message, which is read when received.
4. OSMBoxAccept reads the current message pointer after checking for its presence, yes or no (no wait).
5. OSMBoxQuery queries the mailbox for its message and waiting-task information without removing the message.

191


An ISR can post (but not wait) into the mailbox of a task.
16. What are Pipes? How are they used in a Real Time Operating System?
The OS pipe functions are unlike the message queue functions: pipe functions are similar to the ones used for devices such as files.
A message pipe is a device for inserting (writing) and deleting (reading) messages between two given interconnected tasks or two sets of tasks. Writing to and reading from a pipe is like using the C function fwrite with a file name to write into a named file, and fread with a file name to read from a named file. Pipes are also like the Java PipedInputStream and PipedOutputStream classes.
1. One task, using the function fwrite in a set of tasks, can insert (write) to the pipe at the back pointer address, *pBACK.
2. Another task, using the function fread in a set of tasks, can delete (read) from the pipe at the front pointer address, *pFRONT.
3. In a pipe there may be no fixed number of bytes per message, but there is an end pointer. A pipe can therefore have a limited number of inserted bytes and a variable number of bytes per message between the initial and final pointers.
4. A pipe is unidirectional. One thread or task inserts into it and the other one deletes from it.
An example of the need for messaging and thus for IPC using a pipe is a network stack.
The OS functions for a pipe are the following:
i. pipeDevCreate ( ) for creating a device which functions as a pipe.
ii. open ( ) for opening the device to enable its use from the beginning of its allocated buffer; its use is with options and restrictions (or permissions) defined at the time of opening.
iii. connect ( ) for connecting a thread or task inserting bytes into the pipe to the thread or task deleting bytes from the pipe.
iv. write ( ) for inserting (writing) from the bottom of the empty memory spaces in the buffer allocated to it.

192


v. read ( ) for deleting (reading) from the pipe from the bottom of the unread memory spaces in the buffer filled after writing into the pipe.
vi. close ( ) for closing the device, to enable its use from the beginning of its allocated buffer only after opening it again.

17. Explain About the Virtual Socket? (April/May-2012)


The use of a pipe between a process at the card and a process at the host will have the following problems.
1. We need the card information to be transferred from a process A as a byte stream to the host machine process B, and B sends messages as a byte stream to A. There is a need for bi-directional communication between A and B.
2. We need A's and B's IDs or ports, as well as address information, when communicating. These must be specified either for the destination alone or for both source and destination. (It is similar to sending messages in a letter along with the address specification.)
A protocol provides for communication along with the byte information of the address or port of the destination alone, or the addresses or ports of both source and destination. A protocol may provide for the addresses as well as the ports of both source and destination in the case of remote processes (for example, in the IP protocol). Also, there are two types of protocols.
1. There may be the need for a connectionless protocol when sending and receiving message streams. An example of such a protocol is UDP (User Datagram Protocol). The UDP protocol requires a UDP header, which contains the source port (optional) and destination port number, the length of the datagram, and a checksum for the header bytes. Port means a process or task for a specific application; the number specifies the process. Connectionless means there is no connection establishment between source and destination before the actual transfer of the data stream takes place. A datagram is a set of data which is independent and need not be in sequence with the previously sent data.

193


Checksum is the sum of the bytes that enables checking for erroneous data transfer. For remote communication, an address, for example an IP address, is also required in the header.
2. There may be the need for a connection-oriented protocol, for example TCP. A connection-oriented protocol provides that there must first be a connection establishment between the source and destination, and then the actual transfer of the data stream can take place. At the end, there must be a connection termination.
A socket provides a device-like mechanism for bi-directional communication. It provides for using a protocol between the source and destination processes for transferring the bytes; it provides for establishing and closing a connection between the source and destination processes using a protocol for transferring the bytes; and it may provide for listening from multiple sources or multicasting to multiple destinations. Two tasks at two distinct places are locally interconnected through the sockets. Multiple tasks at multiple distinct places interconnect through sockets to a socket at a server process. The client and server sockets can run on the same CPU or on distant CPUs on the Internet.
Sockets can use different domains. For example, one socket domain can be TCP, another socket domain may be UDP; the card and host example socket domains are different.
The use of a socket for IPC is analogous to the use of sockets for an Internet connection between a browser and a web server. A socket provides a bi-directional transfer of messages and may also send a protocol header. The transfer is between two processes, or between multiple clients and a server process. Each socket may have a task source address (similar to a network or IP address) and a port (similar to a process or thread) number. The source and destination sets of tasks (addresses) may be on the same computer or on a network.
18. Describe the functions of an RTOS and when it is necessary. (April/May-2012)(November-2011)(November-2012)
A Real-Time Operating System (RTOS) is a computing environment that reacts to input within a specific time period. A real-time deadline can be so small that the system's reaction appears instantaneous. The term real-time computing has also been used, however, to describe "slow real-time" output that has a longer, but fixed, time limit.

194


Basic functions of RTOS

Time management
Task management
Interrupt handling
Memory management
Exception handling
Task synchronization
Task scheduling

Time management
A high-resolution hardware timer is programmed to interrupt the processor at a fixed rate (the timer interrupt). Each timer interrupt is called a system tick (the time resolution):
Normally, the tick can vary in microseconds (depending on hardware)
The tick may be selected by the user
All time parameters for tasks should be multiples of the tick
With a 32-bit system time:
One tick = 1 ms: your system can run 50 days
One tick = 20 ms: your system can run 1000 days = 2.5 years
One tick = 50 ms: your system can run 2500 days = 7 years
Timer interrupt routine:
Save the context of the task in execution
Increment the system time by 1; if current time > system lifetime, generate a timing error
Update the timers in the queue of timers (reduce each counter by 1)
Activate periodic tasks in the idling state
Schedule again - call the scheduler

195


Other functions, e.g.:
Remove all terminated tasks - deallocate their data structures, e.g. TCBs
Check for any deadline misses for hard tasks (monitoring)
Load the context for the first task in the ready queue

Need for RTOS


An RTOS must respond in a timely manner to changes, but that does not necessarily
mean that an RTOS can handle a large throughput of data. In fact in an RTOS, small response
times are valued much higher than computing power, or data speed. Sometimes an RTOS will
even need to drop data to ensure that it meets its strict deadlines. In essence, that provides us
with a perfect definition: an RTOS is an operating system designed to meet strict deadlines.
Beyond that definition, there are few requirements as to what an RTOS must be, or what features
it must have. Some RTOS implementations are very complete and very robust, while other
implementations are very simple, and suited for only one particular purpose.
An RTOS may be either event-driven or time-sharing. An event-driven RTOS is a system that changes state only in response to an incoming event. A time-sharing RTOS is a system that changes state as a function of time.
An operating system is considered real-time if it invariably enables its programs to
perform tasks within specific time constraints, usually those expected by the user. To meet this
definition, some or all of the following methods are employed:

The RTOS performs few tasks, thus ensuring that the tasks will always be executed before
the deadline

The RTOS drops or reduces certain functions when they cannot be executed within the time
constraints ("load shedding")

The RTOS monitors input consistently and in a timely manner

196


The RTOS monitors resources and can interrupt background processes as needed to ensure
real-time execution

The RTOS anticipates potential requests and frees enough of the system to allow timely
reaction to the user's request

The RTOS keeps track of how much of each resource (CPU time per timeslice, RAM, communications bandwidth, etc.) might possibly be used in the worst case by the currently running tasks, and refuses to accept a new task unless it "fits" in the remaining un-allocated resources.

19. How is the interrupt handled by the RTOS? (April-2013)


1. ISRs in RTOSes
a) ISRs have higher priorities than the RTOS functions and the tasks.
b) An ISR should not wait for a semaphore, mailbox message, or queue message.
c) An ISR should also not wait for a mutex; else it has to wait for other critical-section code to finish before the critical codes in the ISR can run.
d) Only the IPC accept and post functions for these events (semaphore, mailbox, queue) can be used in an ISR, not the pend (wait) functions.
e) There are three alternative ways for systems to respond to hardware source calls from interrupts.
2. Direct call to an ISR, and the ISR enter message
On an interrupt, the process running on the CPU is interrupted and:
a) The ISR corresponding to that source starts executing.
b) A hardware source calls an ISR directly.
c) The ISR just sends an ISR enter message to the RTOS. The ISR enter message informs the RTOS that an ISR has taken control of the CPU.
3. RTOS first interrupting on an interrupt, then the RTOS calling the corresponding ISR
On interrupt of a task, say the k-th task, the RTOS first gets the hardware source call itself and initiates the corresponding ISR after saving the present processor status (or context).

197


a) The ISR, during execution, can then post one or more outputs for the events and messages into the mailboxes or queues.
b) The ISR must be short, and it must simply post the messages for another task.
c) This task runs the remaining codes whenever it is scheduled.
d) The RTOS schedules only the tasks (processes) and switches contexts between the tasks only.
e) An ISR executes only during a temporary suspension of a task.

QUESTION BANK
UNIT-IV
TWO MARKS
1. What is an RTOS?
2. What are RTOS basic services? (April-2013)
3. Why do we need an RTOS?
4. What are the occasions where we do not need an RTOS?
5. What are RTOS task scheduling models? (April-2013)
6. What are the features of the control flow strategy?
7. What are the features of the data flow strategy?
8. What are the features of the control-data flow strategy?
9. What are the basic functions of an RTOS?

198


10. What is the need for a tested RTOS?

11. What are the options in RTOS?
12. What are the different phases of the system development methodology?
13. What is bounded dispatch time?
14. What is the need for a maximum number of tasks?
15. Why is minimum RAM required per task?
16. What is the need for maximum addressable memory space?
17. Draw the task state transition diagram.
18. When does priority inversion occur?
19. What are classical problems of synchronization?
20. How can a deadlock be recovered?
21. Define process.
22. Define task and task state.
23. Define TCB.
24. What is a thread?
25. What are the benefits of multithreaded programming?
26. Compare user threads and kernel threads.
27. Define RTOS.
28. Define task and task rates.
29. Define CPU scheduling.
30. Define synchronization.

199


31. Define inter-process communication.

32. Define semaphore.
33. What is a semaphore?
34. Give the semaphore-related functions. (November-2011)
35. When will an error occur when we use a semaphore?
36. Differentiate counting semaphore and binary semaphore.
37. What is priority inheritance?
38. Define message queue.
39. Define mailbox and pipe.
40. Define socket.
41. Define remote procedure call.
42. Define thread cancellation & target thread.
43. What are the different ways in which a thread can be cancelled?
44. What is preemptive and non-preemptive scheduling? (November-2012)
45. What is a dispatcher?
46. What is dispatch latency?
47. What are the various scheduling criteria for CPU scheduling?
48. Define throughput.
49. What is turnaround time?
50. Define race condition.
51. What is the critical section problem?
52. What are the requirements that a solution to the critical section problem must satisfy?

200


53. Define deadlock.

54. What are the conditions under which a deadlock situation may arise?
55. What are the various shared-data operating system services?
56. What is meant by operating system?
57. Define protection mechanism.
58. What is decomposition of a task? (November-2012)
59. Write down the methods of fixed scheduling. (November-2011)
11-MARKS:
1. What is a Co-Design Process and what are the Scheduling Policies?
2. What is a Task? Explain the Various Task States.
3. Explain Round Robin Scheduling Briefly. What Is FIFO? Explain Its Working in Detail.
4. What is Preemptive Priority Based Scheduling and Explain Rate Monotonic Scheduling in Detail? (April-2013)
5. What Is Priority Inversion? (November-2011)
6. What Is Priority Ceiling and Explain Memory Allocation Strategies?
7. Explain the Concept of Deadlock in Detail.
8. Explain Process Synchronization. (November-2012)
9. Explain Inter-Process Communication Briefly.
10. What is the Concept behind Shared Memory?
11. What Is Memory Locking? Explain Its Function.
12. What are Signal-Related IPC Functions?

201


13. What is Semaphore Flag and Explain Mutex as Resource Key?
14. Explain Message Queue in Detail.
15. What is the Purpose of Mailbox in Real Time Operating Systems? (April/May-2012)
16. What are Pipes? How are they used in a Real Time Operating System?
17. Explain About the Virtual Socket. (April/May-2012)
18. Describe the functions of an RTOS and when it is necessary. (April/May-2012)(November-2011)(November-2012)
19. How is the interrupt handled by the RTOS? (April-2013)

PONDICHERRY UNIVERSITY QUESTIONS


1. What is Preemptive Priority Based Scheduling and Explain Rate Monotonic Scheduling in Detail? (April-2013) [Ref. Page No.: 161]
2. What Is Priority Inversion? (November-2011) [Ref. Page No.: 163]
3. Explain Process Synchronization. (November-2012) [Ref. Page No.: 173]
4. What is the Purpose of Mailbox in Real Time Operating Systems? (April/May-2012) [Ref. Page No.: 193]
5. Explain About the Virtual Socket. (April/May-2012) [Ref. Page No.: 190]
6. Describe the functions of an RTOS and when it is necessary. (April/May-2012)(November-2011)(November-2012) [Ref. Page No.: 194]
7. How is the interrupt handled by the RTOS? (April-2013) [Ref. Page No.: 197]

202


SRI MANAKULA VINAYAGAR ENGINEERING COLLEGE

Department of Computer Science and Engineering


Subject Name: EMBEDDED SYSTEMS

Subject Code: CS T56

Prepared By :

Mr.P.Karthikeyan,AP/CSE
Mr.B.Thiyagarajan, AP/CSE
Mrs.P.Subha Priya, AP/CSE
Verified by :

Approved by :

UNIT V
Study of Micro C/OS-II or VxWorks: RTOS System Level Functions - Task Service Functions - Time Delay Functions - Memory Allocation Related Functions - Semaphore Related Functions - Mailbox Related Functions - Queue Related Functions - Case Studies of Programming with RTOS.

203


1. What are the various options of RTOS?(April/May-2012)


The various options for RTOS are as follows:
Own RTOS
Linux-based RTOS
μC/OS-II
VxWorks

2. What is MUCOS? (November-2011)
μC/OS-II is a free, open-source RTOS designed by Jean J. Labrosse in 1992.
μC/OS-II is intended for non-commercial use.
μC/OS-II code is in C, and a few CPU-specific modules are in assembly.
μC/OS-II code ports to many processors that are commonly used in embedded system design.
3. What are the various features of μC/OS-II?
μC/OS-II is a scalable OS.
μC/OS-II uses a pre-emptive scheduler (for multitasking).
μC/OS-II has system-level functions.
μC/OS-II has task service functions.
μC/OS-II has task delay functions.
μC/OS-II has memory allocation functions.
μC/OS-II has IPC functions.
μC/OS-II has semaphore, queue, and mailbox functions.
4. What are the types of Source Code files of μC/OS-II?
There are 2 types of source code files of μC/OS-II. They are as follows:
Processor-dependent source files
Processor-independent source files

5. What are the Processor Dependent Source Files?


os_cpu.h - processor definition header file
os_cfg.h - kernel building configuration file
os_tick.c - C file for the ISR and RTOS timers
os_cpu_c.c - processor C codes file
os_cpu_a.s12 - assembly code for the task-switching functions (68HC12)

6. What are the Processor Independent Source Files?

205

ucos_ii.h - MUCOS header file
ucos_ii.c - MUCOS C source file
os_core.c - for the RTOS core
os_time.c - for the RTOS timer
os_task.c - for the RTOS task-related functions
os_mem.c - for memory partitioning
os_sem.c - for the semaphore-related functions
os_q.c - for the queue-related functions


7. What is VxWorks?
VxWorks is a real-time operating system made and sold by Wind River Systems of Alameda, California, USA. Intel acquired Wind River Systems on July 17, 2009. VxWorks is designed for use in embedded systems. Unlike "self-hosting" systems such as Unix, VxWorks development is done on a "host" machine running Linux, Unix, or Microsoft Windows, cross-compiling target software to run on various "target" CPU architectures.
8. What are the features of VxWorks?
The various features of VxWorks are as follows:
SCSI support
Performance monitoring tools
Exception handling
Symbolic debugging
Message queues

9. What are the differences between pSOS and VxWorks?

Actually there is not much difference between using pSOS and VxWorks. A few differences in features are:

The pSOS priority ordering is the reverse of VxWorks.

pSOS supports POSIX 1003.1, while for VxWorks it is 1003.1b.

The pSOS device driver architecture is different from that of VxWorks.

VxWorks has an interrupt latency < 4.33 microseconds, while pSOS's is higher.

Other than these, both work in the same manner and follow the same architecture. Also, as pSOS is being phased out, no fresh development work is supported by Wind River for pSOS. The VxWorks development environment is much more user-friendly than the pSOS environment because the VxWorks IDE mostly mimics Visual Studio.

206


10. What are the various System Level Functions?

The system level functions are:
void OSInit (void) - called at the beginning, prior to OSStart ()
void OSStart (void) - called after OSInit () and the task-creating functions
void OSTickInit (void) - to initialize the system timer ticks
void OSIntEnter (void) - called just after the start of the ISR codes
void OSIntExit (void) - called before return from the ISR codes
OS_ENTER_CRITICAL - macro to disable all interrupts
OS_EXIT_CRITICAL - macro to enable all interrupts
11. What are the various Task Service functions? (November-2012)
These functions are used to create tasks, suspend and resume them, and set and retrieve the time:
unsigned byte OSTaskCreate () - must be called before running a task
unsigned byte OSTaskSuspend () - called for blocking a task
unsigned byte OSTaskResume () - called for resuming a blocked task
void OSTimeSet () - called when the system time is to be set
unsigned int OSTimeGet (void) - finds the present count when the time is read
12. What are the various Time Delay Functions? (November-2012)
MUCOS time delay functions for the tasks are:
void OSTimeDly () - to delay a task by a count value
unsigned byte OSTimeDlyResume () - to resume a delayed task before its preset delay expires

207


void OSTimeDlyHMSM () - time delay (in hours, minutes, seconds and milliseconds) to block a task


13. What are the various Memory Allocation related Functions?
MUCOS memory-related functions for the tasks are:
OS_MEM *OSMemCreate () - to create and initialize a memory partition
void *OSMemGet () - to get a pointer to a memory block from a partition
unsigned byte OSMemQuery () - to find the pointers of the memory control blocks and their data structures
unsigned byte OSMemPut () - to return the pointer of a memory block to its partition
14. What are the various Semaphore Related Functions?
When a semaphore is created by the OS and used as a resource-acquiring key, it must be initialized with 1 to indicate that the resource is available.
MUCOS semaphore-related functions for tasks are:
OS_EVENT *OSSemCreate () - to create and initialize a semaphore
void OSSemPend () - to check whether a semaphore is pending, and wait if it is not
unsigned short OSSemAccept () - to check whether semVal > 0, without waiting
unsigned byte OSSemPost () - if semVal is 0 or more, increments it and makes the semaphore again not pending
unsigned byte OSSemQuery () - to get semaphore information
15. What are the other functions that are related to MUCOS?
Apart from the above functions, MUCOS also has functions related to:
Mailbox functions
Queue functions

208


16. What are the general Mailbox Related Functions? (November-2011)
mu-mailbox-open - Opens a mailbox specified by URL.
mu-mailbox-close - Closes mailbox MBOX.
mu-mailbox-get-url - Returns the URL of the mailbox.
mu-mailbox-get-port - Returns a port associated with the contents of the MBOX. MODE is a string defining the operation mode of the stream; it may contain any of the two characters: r for reading, w for writing.
mu-mailbox-get-message - Retrieves message # MSGNO from MBOX.
mu-mailbox-messages-count - Returns the number of messages in the mailbox.
mu-mailbox-expunge - Expunges deleted messages from the mailbox.
mu-mailbox-append-message - Appends the message to the mailbox.

17. What are the general Queue Related Functions?
QS_ALLEVENTS - An input, WM_TIMER, WM_PAINT, WM_HOTKEY, or posted message is in the queue.
QS_ALLINPUT - Any message is in the queue.
QS_ALLPOSTMESSAGE - A posted message (other than those listed here) is in the queue.
QS_HOTKEY - A WM_HOTKEY message is in the queue.
QS_INPUT - An input message is in the queue.
QS_KEY - A WM_KEYUP, WM_KEYDOWN, WM_SYSKEYUP, or WM_SYSKEYDOWN message is in the queue.
QS_MOUSE - A WM_MOUSEMOVE message or a mouse-button message (WM_LBUTTONUP, WM_RBUTTONDOWN, and so on) is in the queue.
QS_MOUSEBUTTON - A mouse-button message (WM_LBUTTONUP, WM_RBUTTONDOWN, and so on) is in the queue.
QS_MOUSEMOVE - A WM_MOUSEMOVE message is in the queue.
QS_PAINT - A WM_PAINT message is in the queue.
QS_POSTMESSAGE - A posted message (other than those listed here) is in the queue.
QS_SENDMESSAGE - A message sent by another thread or application is in the queue.

209


QS_TIMER - A WM_TIMER message is in the queue.

18. When do you use OS_ENTER_CRITICAL ( ) and OS_EXIT_CRITICAL ( )?


OS_ENTER_CRITICAL ( ) is not actually a function; it is a macro that inserts the machine instructions into your code to disable all interrupts.
OS_EXIT_CRITICAL ( ) is likewise a macro that inserts the machine instructions to enable interrupts.
19. How is the taskLock ( ) function used?
Disabling preemption offers a somewhat less restrictive form of mutual exclusion. While no other task is allowed to preempt the currently executing task, ISRs are able to execute:
funcA ()
{
    taskLock ();
    .
    . /* critical region of code that cannot be interrupted */
    .
    taskUnlock ();
}
However, this method can lead to unacceptable real-time response. Tasks of higher priority are
unable to execute until the locking task leaves the critical region, even though the higher-priority
task is not itself involved with the critical region. While this kind of mutual exclusion is simple,
if you use it, make sure to keep the duration short.

210


20. When will synchronous context switches occur?


There are three situations where a context switch needs to occur. They are:

Multitasking

Interrupt handling

User and kernel mode switching

21. How is a task scheduled in the Windows operating system?


With Scheduled Tasks, you can schedule any script, program, or document to run at a
time that is most convenient for you. Scheduled Tasks starts every time that you start Windows
XP and runs in the background, and it starts each task that you schedule at the time that you
specify when you create the task.
22. What is Wind Message Queues?
Wind message queues are created, used, and deleted with the routines listed below. This library provides messages that are queued in FIFO order, with a single exception: there are two priority levels, and messages marked as high priority are attached to the head of the queue.
Wind Message Queue Control:
msgQCreate( ) - Allocates and initializes a message queue.
msgQDelete( ) - Terminates and frees a message queue.
msgQSend( ) - Sends a message to a message queue.
msgQReceive( ) - Receives a message from a message queue.
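The FIFO-with-urgent-head behaviour described above can be sketched as a small mock in C. This is an illustration only, not the real VxWorks msgQLib: the queue here is a plain array, the function names msgSend/msgReceive are ours, and a real msgQReceive( ) would pend rather than return NULL when empty.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Mock wind-style message queue: normal-priority messages append at the
 * tail (FIFO); urgent messages attach at the head of the queue. */
enum { MSG_PRI_NORMAL = 0, MSG_PRI_URGENT = 1, MAX_MSGS = 8 };

typedef struct {
    const char *buf[MAX_MSGS];
    int count;
} MsgQ;

int msgSend(MsgQ *q, const char *msg, int priority)
{
    if (q->count == MAX_MSGS)
        return -1;                        /* queue full */
    if (priority == MSG_PRI_URGENT) {     /* attach at head of queue */
        memmove(&q->buf[1], &q->buf[0], (size_t)q->count * sizeof q->buf[0]);
        q->buf[0] = msg;
    } else {
        q->buf[q->count] = msg;           /* FIFO: append at tail */
    }
    q->count++;
    return 0;
}

const char *msgReceive(MsgQ *q)
{
    if (q->count == 0)
        return NULL;                      /* a real task would pend here */
    const char *msg = q->buf[0];
    q->count--;
    memmove(&q->buf[0], &q->buf[1], (size_t)q->count * sizeof q->buf[0]);
    return msg;
}
```

Sending two normal messages followed by one urgent message makes the urgent message the first one received, exactly the exception noted above.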
23. How a message queue can send Events to a Task?
A message queue can send events to a task, if it is requested to do so by the task. To
request that a message queue send events, a task must register with the message queue using
msgQEvStart( ). From that point on, every time the message queue receives a message and


there are no tasks pending on it, the message queue sends events to the registered task. To
request that the message queue stop sending events, the registered task calls msgQEvStop( ).
24. How are real-time systems structured?
Real-time systems are often structured using a client-server model of tasks. In this model,
server tasks accept requests from client tasks to perform some service, and usually return a reply.
The requests and replies are usually made in the form of inter task messages.
25. How does the VxWorks exception handler work?
The default exception handler suspends the task that caused the exception, and saves the
state of the task at the point of the exception. The kernel and other tasks continue uninterrupted.
A description of the exception is transmitted to the Tornado development tools, which can be
used to examine the suspended task.
26. Comparison of taskLock( ) and intLock( )
When using taskLock( ), remember that it does not achieve mutual exclusion with respect to
interrupts. Generally, if interrupted by hardware, the system will eventually return to your task;
however, if you block, you lose the task lockout. Thus, before you return from the routine, taskUnlock( ) should be called.
When a task is accessing a variable or data structure that is also accessed by an ISR, you
can use intLock( ) to achieve mutual exclusion. Using intLock( ) makes the operation "atomic"
in a single processor environment. It is best if the operation is kept minimal, meaning a few lines
of code and no function calls. If the call is too long, it can directly impact interrupt latency and
cause the system to become far less deterministic.
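The intLock( ) pattern just described can be shown as a short sketch. The lock calls are stubbed here so the fragment compiles off-target (stubs are ours, for illustration); on VxWorks, intLock( ) returns a lock-out key that must be passed back to intUnlock( ).

```c
/* Stub stand-ins for VxWorks intLock()/intUnlock() so the pattern can be
 * compiled and exercised outside the RTOS.  On a real target these
 * disable and restore processor interrupts. */
int intLock(void)      { return 0; }     /* returns the lock-out key */
void intUnlock(int key) { (void)key; }

volatile int sharedCount;   /* also updated by an ISR on the real target */

/* Keep the locked region minimal: a few lines, no function calls,
 * otherwise interrupt latency suffers. */
int sharedCountIncrement(void)
{
    int key = intLock();          /* mutual exclusion with the ISR */
    int newValue = ++sharedCount; /* the operation made "atomic" */
    intUnlock(key);
    return newValue;
}
```

On a single processor this makes the increment atomic with respect to the ISR, as the answer above states.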
27. How is wind scheduler enabled and disabled?
The wind scheduler can be explicitly disabled and enabled on a per-task basis with the
routines taskLock( ) and taskUnlock( ). When a task disables the scheduler by calling
taskLock( ), no priority-based preemption can take place while that task is running.


If the task explicitly blocks or suspends, the scheduler selects the next highest-priority
eligible task to execute. When the preemption-locked task unblocks and begins running again,
preemption is again disabled.
28. What are the routines that can be called by Task Switch Hooks?

bLib - All routines

fppArchLib - fppSave( ), fppRestore( )

intLib - intContext( ), intCount( ), intVecSet( ), intVecGet( ), intLock( ), intUnlock( )

lstLib - All routines except lstFree( )

mathALib - All are callable if fppSave( )/fppRestore( ) are used

rngLib - All routines except rngCreate( ) and rngDelete( )

taskLib - taskIdVerify( ), taskIdDefault( ), taskIsReady( ), taskIsSuspended( ), taskTcb( )

vxLib - vxTas( )

29. What is called reentrancy?


A subroutine is reentrant if a single copy of the routine can be called from several task
contexts simultaneously without conflict. Such conflict typically occurs when a subroutine
modifies global or static variables, because there is only a single copy of the data and code. A
routine's references to such variables can overlap and interfere in invocations from different task
contexts.
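The difference can be seen in a small example using nothing beyond standard C: the first routine keeps its result in a single static buffer (one shared copy, hence non-reentrant); the second uses only dynamic stack variables supplied by the caller, so each invocation has its own copy.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Non-reentrant: the static buffer is a single shared copy, so two task
 * contexts calling this simultaneously would overwrite each other. */
const char *formatIdNonReentrant(int id)
{
    static char buf[16];
    snprintf(buf, sizeof buf, "task%d", id);
    return buf;
}

/* Reentrant: the result lives in the caller's storage (a dynamic stack
 * variable in the caller), so invocations cannot interfere. */
void formatIdReentrant(int id, char *out, size_t outLen)
{
    snprintf(out, outLen, "task%d", id);
}
```

The reentrant form is the pattern listed below as "dynamic stack variables".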
30. What are the reentrancy techniques used by majority of Vx Works?
The majority of VxWorks routines use the following reentrancy techniques:
dynamic stack variables

global and static variables guarded by semaphores

task variables

31.Give any 2 time delay function (November-2013)

void OSTimeSet (unsigned int counts) - used when the system time is to be set by counts.

unsigned int OSTimeGet (void) - to find the present counts when the system time is read.

void OSTimeDly (unsigned short delayCount) - to delay a task by a period of count-inputs equal to (delayCount - 1).

32. How can you create a queue for an IPC? (November-2013)

msgQCreate( ) - to allocate and initialize a message queue.

33. Define TCB? (April/May-2012)
A Process Control Block (PCB, also called Task Control Block or Task Struct) is a data
structure in the operating system kernel containing the information needed to manage a particular
process. The PCB is "the manifestation of a process in an operating system".
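A simplified sketch of the kind of fields a TCB holds follows; the field names here are illustrative only, not taken from any particular kernel.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative Task Control Block: identity, scheduling state, and the
 * saved execution context needed to resume the task after a switch. */
typedef enum { TASK_READY, TASK_RUNNING, TASK_BLOCKED, TASK_SUSPENDED } TaskState;

typedef struct TaskControlBlock {
    unsigned int  taskId;          /* identity (in MUCOS, the priority) */
    TaskState     state;           /* scheduling state */
    unsigned int  priority;        /* used by the scheduler */
    void         *stackPointer;    /* saved top of the task's stack */
    unsigned int  savedRegs[16];   /* CPU registers saved at last switch */
    struct TaskControlBlock *next; /* link in the kernel's task list */
} TaskControlBlock;
```

The kernel keeps one such structure per process or task; it is in this sense that the PCB is "the manifestation of a process in an operating system".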
34. Goals of embedded software?
An embedded system is a computer system with a dedicated function within a larger mechanical
or electrical system, often with real-time computing constraints. It is embedded as part of a
complete device often including hardware and mechanical parts.

UNIT V
11 MARKS
1. Explain the System-Level Functions in detail? (November-2011)
The system-level functions are:
void OSInit (void) - at the beginning, prior to OSStart( ).
void OSStart (void) - after OSInit( ) and the task-creating functions.
void OSTickInit (void) - to initialize the system timer ticks.
void OSIntEnter (void) - just after the start of the ISR codes.
void OSIntExit (void) - before return from the ISR codes.
OS_ENTER_CRITICAL - macro to disable all interrupts.


OS_EXIT_CRITICAL - macro to enable all interrupts.


1. Initiating the OS before starting the use of the RTOS functions. Function void OSInit
(void) is used to initiate the OS. Its use is compulsory before calling any OS kernel
functions. It returns no parameter.
2. Starting the use of RTOS multitasking functions and running the tasks. Function void
OSStart (void) is used to start the initiated OS and the created tasks. Its use is compulsory
for the multitasking OS kernel operations. It returns no parameter.
3. Starting the use of the RTOS system clock. Function void OSTickInit (void) is used to
initiate the system-clock ticks and interrupts at regular intervals, as per
OS_TICKS_PER_SEC predefined during configuration of MUCOS. Its use is compulsory
for the multitasking OS kernel operations when the timer functions are to be used. It
returns or passes no parameter.
4. Sending a message to the RTOS kernel for taking control at the start of an ISR. Function
void OSIntEnter (void) is used at the start of an ISR. It is for sending a message to the
RTOS kernel for taking control. Its use is compulsory to let the multitasking OS kernel
control the nesting of the ISRs in case of occurrences of multiple interrupts of varying
priorities. It returns no parameter.
5. Sending a message to the RTOS kernel for quitting the control from an ISR. Function
void OSIntExit (void) is used just before the return from the running ISR. It is for sending
a message to the RTOS kernel for quitting control from the nesting loop. Its use is
compulsory to let the OS kernel quit the ISR from the nested loop of the ISRs. It returns
no parameter.
6. Sending a message to the RTOS kernel for taking control at the start of a critical section.
The macro OS_ENTER_CRITICAL is used at the start of a critical section in a task or
ISR. It is for sending a message to the MUCOS kernel and disabling the interrupts. Its use
is compulsory to let the OS kernel take note of and disable the interrupts of the system. It
returns no parameter.
7. Sending a message to the RTOS kernel for quitting the control at the return from a
critical section. The macro OS_EXIT_CRITICAL is used just before the return from the
critical section. It is for sending a message to the RTOS kernel for quitting control from
the section. Its use is compulsory to let the OS kernel quit the section and enable the
interrupts to the system. It returns no parameter.
8. Locking the OS scheduler. OSSchedLock( ) disables preemption by a higher-priority task.
This function inhibits preemption by a higher-priority task but does not disable interrupts.
If an interrupt occurs, locking ensures the return of OS control to the task that executed
this function; the control returns to the task after the ISR completes.
9. Unlocking the OS scheduler. OSSchedUnlock( ) enables preemption by a higher-priority
task, allowing return of OS control to the high-priority task after the execution of
OSSchedUnlock. In case of any interrupt occurring after executing OSSchedUnlock, at
the end of the ISR the higher-priority task that is ready will execute on return from the
ISR.
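The start-up ordering in points 1 and 2 (OSInit first, then task creation, then OSStart) can be sketched with stubbed OS calls that record a call log. The stubs below are ours, for illustration only; the real functions are supplied by MUCOS and OSStart( ) never returns on a target.

```c
#include <string.h>

/* Call log so the required ordering can be checked off-target. */
static char callLog[8];
static int  callCount;

static void logCall(char c) { if (callCount < 7) callLog[callCount++] = c; }

/* Stubs standing in for the MUCOS system-level functions. */
void OSInit(void)  { logCall('I'); }
unsigned char OSTaskCreate(void (*task)(void *), void *pmdata,
                           void *stack, unsigned char prio)
{ (void)task; (void)pmdata; (void)stack; (void)prio; logCall('C'); return 0; }
void OSStart(void) { logCall('S'); }   /* on a target, never returns */

static void firstTask(void *pmdata) { (void)pmdata; }

const char *startKernel(void)
{
    OSInit();                                         /* 1: before any OS call */
    OSTaskCreate(firstTask, (void *)0, (void *)0, 8); /* 2: create the tasks */
    OSStart();                                        /* 3: start multitasking */
    callLog[callCount] = '\0';
    return callLog;
}
```

The log "ICS" confirms the compulsory order stated above: initialize, create, then start.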
2. What are the Task Service Functions? Explain in detail?
These functions are used for task creation, suspension and resumption, and for time setting and
retrieval:
unsigned byte OSTaskCreate( ) - must be called before running a task.
unsigned byte OSTaskSuspend( ) - called for blocking a task.
unsigned byte OSTaskResume( ) - called for resuming a blocked task.
void OSTimeSet( ) - when the system time is to be set.
unsigned int OSTimeGet (void) - to find the present count when the time is read.

1. Creating a task. Function unsigned byte OSTaskCreate (void (*task) (void *taskPointer),
void *pmdata, OS_STK *taskStackPointer, unsigned byte taskPriority) is explained as
follows.
A preemptive scheduler preempts a task of higher priority. Therefore, each user task is to
be assigned a priority, which must be set between 8 and OS_MAX_TASKS - 9 (or 8 and
OS_LOWEST_PRIO - 8). The OS reserves the eight highest and eight lowest task priorities for its own


functions. The total number of tasks that MUCOS manages can be up to 64. The MUCOS task
priority is also the task identifier. There is no task-ID-defining function in MUCOS because each
task has to be assigned a distinct priority.
If the maximum number of user tasks is eight, then OS_MAX_TASKS is 24 (including
eight system-level tasks and eight lowest-priority system tasks), and the priority must be set
between 8 and 15, because MUCOS will assign priorities 16 to 23 to the eight lowest-priority
system-level tasks. The priorities 0 to 7 and 16 to 23 will then be for MUCOS internal uses.
OS_LOWEST_PRIO and OS_MAX_TASKS are user-defined constants in the preprocessor
codes that are needed for configuring MUCOS for the user applications. Defining 20 user
tasks unnecessarily when actually only 4 tasks are created by the user is to be avoided,
because a larger OS_MAX_TASKS means unnecessarily higher memory-space allocation by
the system to the user tasks.
Task parameters passing PA:
a) *taskPointer is a pointer to the codes of the task being created.
b) *pmdata is a pointer for an optional message-data reference passed to the task. If
none, we assign it as NULL.
c) *taskStackPointer is a pointer to the stack of the task being created.
d) taskPriority is the task priority and must be within 8 to 15, if the macro
OS_LOWEST_PRIO sets the lowest priority equal to 23.
Returning RA: The lowest priority of any task, OS_LOWEST_PRIO, is 23. For the
application program, the task priority assigned must be within 8 to 15. The function
OSTaskCreate( ) returns the following:
i. OS_NO_ERR, when creation succeeds;
ii. OS_PRIO_EXIST, if the priority value that is passed already exists;
iii. OS_PRIO_INVALID, if the priority value that is passed is more than OS_LOWEST_PRIO;
iv. OS_NO_MORE_TCB, when no more memory blocks for the task control are available.


A task can create other tasks, but an ISR is not allowed to create a task. An exemplary
use is in creating a task, Task1_Connect, for a connection task:
OSTaskCreate (Task1_Connect, (void *) 0, (void *) &Task1_ConnectStack[100], 10).
Task parameters passed as arguments are as follows.
a) Task1_Connect, a pointer to the codes of Task1_Connect for the task being created.
b) The pointer for an optional message-data reference passed to the task is NULL.
c) *Task1_ConnectStack is a pointer to the stack of Task1_Connect and it is given size = 100
addresses in the memory.
d) taskPriority is the task priority allotted at 10, the highest but two that can be allocated. The
call will generate the error parameters: OS_NO_ERR = true in case creation succeeds;
OS_PRIO_INVALID = true, if the passed priority parameter is higher than
OS_LOWEST_PRIO - 8; OS_NO_MORE_TCB = false, when a TCB is available for
Task1_Connect.
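The priority rule above (user priorities from 8 up to OS_LOWEST_PRIO - 8, since the OS reserves the eight highest and eight lowest priorities) can be expressed as a small check. The helper function and the numeric error values below are illustrative, not MUCOS code; only the constant names follow the text.

```c
/* Illustrative check of the MUCOS user-task priority range described in
 * the text: valid user priorities are 8 .. OS_LOWEST_PRIO - 8, i.e.
 * 8 .. 15 when OS_LOWEST_PRIO is 23. Error values are made up here. */
enum { OS_LOWEST_PRIO = 23 };
enum { OS_NO_ERR = 0, OS_PRIO_INVALID = 42 };  /* illustrative values */

unsigned char checkUserTaskPriority(unsigned char prio)
{
    if (prio < 8 || prio > OS_LOWEST_PRIO - 8)
        return OS_PRIO_INVALID;   /* reserved for system-level tasks */
    return OS_NO_ERR;
}
```

With OS_LOWEST_PRIO = 23, priority 10 (as in the Task1_Connect example) is valid, while 7 and 16 fall in the reserved bands.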
2. Suspending (blocking) a task. Function unsigned byte OSTaskSuspend (unsigned byte
taskPriority).
Task parameters passing PB: taskPriority is the task priority and must be within 8 to 15
for 8 user tasks.
Returning RB: The function OSTaskSuspend( ) returns the error parameter OS_NO_ERR
when the blocking succeeds; OS_PRIO_INVALID, if the priority value that is passed is more
than the OS_LOWEST_PRIO constant value; OS_TASK_SUSPEND_PRIO, if the priority
value that is passed does not exist; OS_TASK_SUSPEND_IDLE, if attempting to suspend
the idle task, which is illegal.
An exemplary use is in blocking the task Task1_Connect of priority =
Task1_Connect_priority as follows: OSTaskSuspend (Task1_Connect_priority). The task
parameter passed as argument is 10. The following error parameters will be returned by
this function.
a) OS_NO_ERR = true, when the blocking succeeds.
b) OS_PRIO_INVALID = false, as 10 is a valid priority and is not more than
OS_LOWEST_PRIO.
c) OS_LOWEST_PRIO = 23.
d) OS_TASK_SUSPEND_PRIO = false, as the priority value that is passed already
exists.


e) OS_TASK_SUSPEND_IDLE = false, when attempting to suspend a task that


was not the idle task. MUCOS executes the idle task OSTaskIdle( ) when all tasks are
either waiting for timer expiry or for an IPC, e.g., a semaphore IPC.
3. Resuming (unblocking) a task. Function unsigned byte OSTaskResume (unsigned byte
taskPriority) resumes a suspended task.
Task parameters passing PC: taskPriority is the task priority of the task which is to
resume, and must be within 8 to 15 when OS_LOWEST_PRIO is 23 and the number of user
tasks = 8.
Returning RC: The function OSTaskResume( ) returns OS_NO_ERR when the unblocking
succeeds; OS_PRIO_INVALID, if the priority value that is passed is more than 23, the
OS_LOWEST_PRIO constant value; OS_TASK_RESUME_PRIO, if the task of the priority
value that is passed was already resumed; OS_TASK_NOT_SUSPENDED, if attempting to
resume a task that is not suspended (blocked).
An exemplary use is in unblocking Task1_Connect of priority = Task1_Connect_priority
as follows: OSTaskResume (Task1_Connect_priority). The task parameter passed as
argument is 10, as Task1_Connect_priority = 10. The following error parameters will be
returned by the task-resuming function.
a) OS_NO_ERR = true, when the unblocking succeeds and the task of priority 10 reaches
the running state.
b) OS_PRIO_INVALID = false, as 10 is a valid priority and is not more than
OS_LOWEST_PRIO.
c) OS_LOWEST_PRIO = 23.
d) OS_TASK_RESUME_PRIO = false, as the task of the priority value that is passed was
not already resumed.
e) OS_TASK_NOT_SUSPENDED = false, when attempting to resume a task that was
suspended.
4. Setting the time of the system clock. Function void OSTimeSet (unsigned int count) returns no
value. The passing parameter PD as argument is given next.


PD: It passes a 32-bit integer for the count (sets the number of ticks for the current time,
which will increment after each system-clock tick).
An exemplary use of the function OSTimeSet is to preset the time: OSTimeSet (0) sets the
present count = 0. Caution: it is suggested that the OSTimeSet function be used before the
OSTickInit function and only once, and never within a task function, as other functions that
rely on the timer may then malfunction. Let the OS timer clock count continue as in a
free-running counter. There is little need later of using the set-time function, because at any
instant the time can be read using the get function, and at any other instant it can be defined
again by adding a value to this time.
5. Getting the time of the system clock. Function unsigned int OSTimeGet (void) returns the
current number of ticks as an unsigned integer. No parameter is passed as argument.
RE: Returns a 32-bit integer, the current number of ticks at the system clock.
3. Explain the Time Delay Functions in detail? (November-2011) (April/May-2012)
MUCOS time-delay functions for the tasks are:
void OSTimeDly( ) - to delay a task by (count - 1) ticks.
unsigned byte OSTimeDlyResume( ) - to resume a task before its preset delay expires.
void OSTimeDlyHMSM( ) - time delay to block a task, given in hours, minutes, seconds and milliseconds.
1. Delaying by defining a time delay by number of clock ticks. Function void OSTimeDly
(unsigned short delayCount) delays a task by (delayCount - 1) ticks of the system clock. It
returns no parameter.
Task parameter passing PF: a 16-bit integer, delayCount, to delay a task at least till the
system-clock count inputs (ticks) equal (delayCount - 1) + count, where count is the present
number of ticks at the system clock.
An exemplary use as a function in a task is OSTimeDly (10000). It delays that task for at
least about 100,000 ms if the system clock ticks every 10 ms.
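The (delayCount - 1) relation in the example can be worked as a small helper, assuming the 10 ms tick used above: OSTimeDly (10000) gives a minimum delay of (10000 - 1) x 10 = 99,990 ms, i.e. roughly the 100,000 ms quoted.

```c
/* Minimum delay in milliseconds for a given delayCount, per the
 * (delayCount - 1) tick formula above.  The 10 ms tick period is an
 * assumption taken from the worked example, not a MUCOS constant. */
enum { MS_PER_TICK = 10 };

unsigned long minDelayMs(unsigned short delayCount)
{
    return (unsigned long)(delayCount - 1) * MS_PER_TICK;
}
```

So minDelayMs(10000) evaluates to 99990 ms; the actual delay can be longer if higher-priority tasks are ready when the delay expires.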


2. Resuming a delayed task by OSTimeDly. Function unsigned byte OSTimeDlyResume
(unsigned byte taskPriority) resumes a previously delayed task, whether the delay
parameter was in terms of the delayCount ticks or hours, minutes and seconds. Note: in
case the defined delay is more than 65,535 system-clock ticks, OSTimeDlyResume will
not resume that delayed task.
Returning RG: Returns the following error parameters.
a) OS_NO_ERR = true, when resumption after delay succeeds.
b) OS_TASK_NOT_EXIST = true, if the task was not created earlier.
c) OS_TIME_NOT_DLY = true, if the task was not delayed.
d) OS_PRIO_INVALID, when the taskPriority parameter that was passed is more
than OS_LOWEST_PRIO (= 23).
Task parameter passing PG: taskPriority is the priority of the task that was delayed before
resumption. An exemplary use is OSTimeDlyResume (Task_ReadPortPriority). It
resumes a delayed task that the OS identifies by the priority Task_ReadPortPriority.
3. Delaying by defining a time delay in units of hours, minutes, seconds and milliseconds.
Function void OSTimeDlyHMSM (unsigned short hr, unsigned short mn, unsigned short
sec, unsigned short mils) delays a task by up to 65,535 ticks, with the delay time defined
by hr hours, mn minutes between 0 and 59, sec seconds between 0 and 59, and mils
milliseconds between 0 and 999. The milliseconds adjust to an integral multiple of the
number of system-clock ticks. The task in which the function is defined is delayed.
Returning RH: The function OSTimeDlyHMSM () returns an error code as following.
a) OS_NO_ERR, when four arguments are valid and resumption after delay succeeds.
b) OS_TIME_INVALID_HOURS, OS_TIME_INVALID_MINUTES,
OS_TIME_INVALID_SECONDS and OS_TIME_INVALID_MILLI, if the
corresponding arguments are greater than 55, 59, 59 and 999, respectively.
c) OS_TIME_ZERO_DLY, if all the arguments passed are 0.

Task parameters passed PH: hr, mn, sec and mils are the delay times in hours,
minutes, seconds and milliseconds by which the task delays before resuming.
An exemplary use is the OSTimeDlyHMSM (0, 0, 0, 999) function in the codes of a
task. It delays that task by about 999 ms; the delay is at least 990 ms if the system
clock ticks every 10 ms. (If the delay is defined as 9,000,000 ms, OSTimeDlyResume
shall not be able to resume this task when asked. The number of ticks must be less
than 65,535, which means the maximum delay can be 655,350 ms if the system
clock ticks every 10 ms.)

4. What are the Memory Allocation-Related functions? (April-2013) (April/May-2012)


MUCOS memory-related functions for the tasks are:
OS_MEM *OSMemCreate( ) - to create and initialize a memory partition.
void *OSMemGet( ) - to get a pointer to a memory block from a partition.
unsigned byte OSMemQuery( ) - to find the pointers of the memory control block and its data structure.
unsigned byte OSMemPut( ) - to return a memory block to a partition.
Memory functions are required to allocate fixed-size memory blocks from a memory
partition having an integer number of blocks. The allocation takes place without
fragmentation. The allocation and de-allocation take place in a fixed and deterministic
time.
1. Creating memory blocks at a memory address. Function OS_MEM *OSMemCreate (void
*memAddr, MEMTYPE numBlocks, MEMTYPE blockSize, unsigned byte *memErr) is
an OS function which partitions the memory from an address into blocks. The creation
and initialization of the memory partition into blocks helps the OS in resource allocation.
Returning RI: The function *OSMemCreate( ) returns a pointer to a control block for
the created memory partition. If none is created, the create function returns a NULL pointer.


Task parameters passing PI: MEMTYPE is the data type according to the memory,
whether 16-bit or 32-bit CPU memory addresses are there; for example, 16-bit in
68HC11 and 8051.
i. *memAddr is the pointer for the memory starting address of the blocks.
ii. numBlocks is the number of blocks into which the memory must be
partitioned (must be 2 or more).
iii. blockSize is the memory size in bytes of each block.
iv. *memErr is a pointer to the address that holds the error codes. At the address
*memErr the following global error-code variables change from false to true:
OS_NO_ERR = true when creation succeeds; OS_MEM_INVALID_BLKS =
true, when at least two blocks are not passed as arguments.
v. OS_MEM_INVALID_PART = true, when memory for the partition is not
available.
vi. OS_MEM_INVALID_SIZE = true, when the block size is smaller than a pointer
variable.

2. Getting a memory block at a memory address. Function void *OSMemGet
(OS_MEM *memCBPointer, unsigned byte *memErr) is to retrieve a memory block from the
partition created earlier.
Returning RJ: The function OSMemGet( ) returns a pointer to a memory block from the
partition. It returns NULL if no free block exists there.
Task parameter passing PJ:
i. Passes a pointer as argument for the control block of a memory partition.
ii. The function OSMemGet( ) passes the error-code pointer *memErr so that it later
returns one of the following: OS_NO_ERR, when a memory block is allocated from
the memory partition, or OS_MEM_NO_FREE_BLKS, when no free memory block
remains in the partition.

3. Querying a memory block. Function unsigned byte OSMemQuery (OS_MEM
*memCBPointer, OS_MEMDATA *memData) is to query and return the error code and
pointers for the memory partitions. It returns OS_NO_ERR as 1 if a memory address
exists at *memData, else it returns 0.
Returning RK: The function OSMemQuery( ) returns an error code, which is an unsigned
byte. The code is OS_NO_ERR = 1 when querying succeeds, else 0.
Task parameter passing PK:
i. The function OSMemQuery( ) passes a pointer memCBPointer of the memory
partition created earlier.
ii. A pointer of the data structure, OS_MEM_DATA. As pointers are passed as references,
the information about the memory partition returns with the memory control block
pointer.

4. Putting a memory block into a partition. Function unsigned byte OSMemPut (OS_MEM
*memCBPointer, void *memBlock) returns the memory block pointed to by *memBlock
to the memory partition whose control block is pointed to by *memCBPointer.
Returning RL: The function OSMemPut( ) returns one of the following error codes:
i. OS_NO_ERR, when the memory block is returned to the memory partition, or
ii. OS_MEM_FULL, when the memory block cannot be put into the memory
partition as it is full.
Task parameters passing PL:
i. The function OSMemPut( ) passes a pointer *memCBPointer of the memory
control block for the memory partition into which the block is to be put.
ii. A pointer of the memory block *memBlock that is to be put into the partition.
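The fixed-block, fragmentation-free behaviour described above can be mocked in a few lines of standard C. This is a sketch in the spirit of OSMemCreate( )/OSMemGet( )/OSMemPut( ), not the MUCOS sources; the names and structure are ours. Free blocks are linked through the blocks themselves, which is why get and put take fixed, deterministic time.

```c
#include <assert.h>
#include <stddef.h>

/* Mock fixed-block partition: the free list is threaded through the
 * unused blocks, so allocation and de-allocation are O(1). */
typedef struct MemPartition {
    void *freeList;      /* head of the free-block list */
    int   blockSize;     /* bytes per block (>= sizeof(void *)) */
    int   numFree;
} MemPartition;

int memCreate(MemPartition *p, void *addr, int numBlocks, int blockSize)
{
    if (numBlocks < 2 || blockSize < (int)sizeof(void *))
        return -1;                        /* cf. OS_MEM_INVALID_BLKS/_SIZE */
    char *blk = (char *)addr;
    p->freeList = addr;
    for (int i = 0; i < numBlocks - 1; i++, blk += blockSize)
        *(void **)blk = blk + blockSize;  /* link block i to block i+1 */
    *(void **)blk = NULL;                 /* last block ends the list */
    p->blockSize = blockSize;
    p->numFree = numBlocks;
    return 0;
}

void *memGet(MemPartition *p)             /* cf. OSMemGet() */
{
    void *blk = p->freeList;
    if (blk != NULL) {
        p->freeList = *(void **)blk;      /* unlink the head block */
        p->numFree--;
    }
    return blk;                           /* NULL when no free block */
}

void memPut(MemPartition *p, void *blk)   /* cf. OSMemPut() */
{
    *(void **)blk = p->freeList;          /* push block back on the list */
    p->freeList = blk;
    p->numFree++;
}
```

Because every block has the same size, returning a block can never fragment the partition, which is the property the answer above emphasizes.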

5. What are the Semaphore-Related Functions? (November-2012)


When a semaphore is created by the OS and used as a resource-acquiring key, it must be
initialized with 1 to indicate that the resource is available.


MUCOS semaphore-related functions for the tasks are:
OS_Event *OSSemCreate( ) - to create and initialize a semaphore.
void OSSemPend( ) - to check whether a semaphore is pending.
unsigned short OSSemAccept( ) - to check whether semVal > 0.
unsigned byte OSSemPost( ) - if semVal is 0 or more, increments it and makes the semaphore
again not pending.
unsigned byte OSSemQuery( ) - to get semaphore information.
When a semaphore created by this OS is used as a resource-acquiring key (as a mutex), the
semaphore value to start with is 1, which means that the resource is available, and 0 will mean not
available. The value to start with is 0 or n when using the semaphore as an event-signaling flag
or for counting, respectively.
1. Creating a semaphore for the IPCs. Function OS_Event *OSSemCreate (unsigned short
semVal) is for creating an OS ECB (Event Control Block) for an IPC, returning a pointer
which points to the ECB. A semaphore is created and initialized with the value = semVal.
Returning RM: The function OSSemCreate( ) returns a pointer *eventPointer for the ECB
allocated to the semaphore, or NULL if none is available.
Task parameter passing PM: A semVal between 0 and 65535 is passed. For an IPC as an
event-signaling flag, SemFlag must pass 0, and as a resource-acquiring key, SemKey must
pass 1. For an IPC as a counting semaphore, SemCount must be either 0 or a count value to be
passed in the beginning.
2. Waiting for an IPC for semaphore release. Function void OSSemPend (OS_Event
*eventPointer, unsigned short timeout, unsigned byte *SemErrPointer) is for letting a
task wait till the release event of a semaphore: SemFlag or SemKey or SemCount. The
latter is at the ECB pointed to by *eventPointer. SemFlag or SemKey or SemCount
becoming greater than 0 is an event that signals the release of the tasks in the waiting
states. The tasks now become ready for running (they run if no other higher-priority task
is ready). The tasks also become ready after a predefined timeout. SemFlag or SemKey or
SemCount decrements, and if it becomes 0 then it makes the semaphore pending again
and the other tasks using OSSemPend( ) have to wait for its release.
Returning RN: The function OSSemPend( ), when a semaphore is pending, suspends the
task till semVal > 0 (release) and decrements semVal on unblocking that task. The
following macros return true.
i. OS_NO_ERR returns true, when the semaphore search succeeds (semVal > 0).
ii. OS_TIMEOUT returns true, if the semaphore did not release (did not become > 0)
during the ticks defined for the timeout.
iii. OS_ERR_PEND_ISR returns true, if this function call was made by an ISR, which
is an error, since an ISR should not be blocked for taking the semaphore.
iv. OS_ERR_EVENT_TYPE returns true, when *eventPointer is not pointing to
the semaphore.

Task parameter passing PN:
i. The OS_Event *eventPointer passes as a pointer to the ECB that associates with the
semaphore: SemFlag or SemKey or SemCount.
ii. Passes an argument for the number of timer ticks for the timeout. The task unblocks
after a delay equal to (timeout - 1) ticks even when the semaphore is not
released. It prevents an infinite wait. It must pass 0 if this provision is not used.
iii. Passes *err, a pointer for holding the error codes.

3. Check for availability of an IPC after a semaphore release. Function unsigned short
OSSemAccept (OS_Event *eventPointer) checks for a semaphore value at the ECB and
whether it is greater than 0. An unsigned 16-bit value is retrieved and then
decremented.
Returning RO: The function OSSemAccept( ) decrements semVal if > 0 and returns
the pre-decremented value as an unsigned 16-bit number. It returns 0 if semVal was 0 and
pending when posted (released). After this, the task codes run further.
Task parameter passing PO: The OS_Event *eventPointer for the ECB that associates
with the semaphore, semVal.
4. Sending an IPC after a semaphore release. Function unsigned byte OSSemPost
(OS_Event *eventPointer) is for letting a waiting task know, now or afterwards, that an
IPC is sent for the release event of the semaphore, SemFlag or SemKey or SemCount.
The IPC is at the ECB pointed to by *eventPointer. SemFlag or SemKey or SemCount
increments, and if it becomes greater than 0, it is an event that signals the release of a task
from the waiting state. The task now becomes ready for running (runs if no higher-priority
task is ready). SemFlag or SemKey or SemCount decrements on running that task and if it
becomes 0 then it makes the semaphore pending again and the other tasks have to wait for
its release. If the IPC is posted from an ISR, then the pending task can run only after
OSIntExit( ) executes and the ISR returns. If the presently running task is of higher
priority than the task pending for the want of the IPC, then the present task will continue
to run unless blocked or delayed by executing some function.
Returning RP: The function OSSemPost( ) increments semVal if it is 0 or > 0, and the
following macros return true from the error codes:
i. OS_NO_ERR returns true, if the semaphore signaling succeeded (semVal > 0 or 0).
ii. OS_ERR_EVENT_TYPE returns true, if *eventPointer is not pointing to the
semaphore.
iii. OS_SEM_OVF returns true, when semVal overflows (cannot increment, being
already 65,535).
Task parameter passing PP: The OS_Event *eventPointer passes as a pointer to the
ECB that associates with the semaphore.

5. Retrieve the error information for a semaphore. Function unsigned byte OSSemQuery
(OS_EVENT *eventPointer, OS_SEM_DATA *SemData) puts the data values for the
semaphore at the pointer, SemData.
Returning RQ: After OSSemQuery( ) runs, SemData gets OSCnt, which is the
semaphore's present value (count). SemData also gets the list of the tasks that are waiting
for the semaphore; the list is at the pointers OSEventTbl[ ] and OSEventGrp. The
semaphore error-information parameters can be found by running the macros
OS_NO_ERR and OS_ERR_EVENT_TYPE:
i. OS_NO_ERR returns true, when querying succeeds, or
ii. OS_ERR_EVENT_TYPE returns true, if *eventPointer is not pointing to the
semaphore.
Task parameters passing PQ: The function OSSemQuery( ) passes a pointer of the
semaphore created earlier at *eventPointer and a pointer of the data structure at *SemData
for that created semaphore.
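The counter semantics of OSSemAccept( ) and OSSemPost( ) described above can be mocked without any blocking. This is a sketch, not MUCOS code: the type and function names are ours, and a real pend would suspend the task instead of simply observing the count.

```c
/* Mock of the semaphore counter semantics: post increments semVal;
 * accept takes it if > 0, returning the pre-decremented value, as the
 * text describes for OSSemAccept(). */
typedef struct { unsigned short semVal; } Sem;

void semInit(Sem *s, unsigned short initial) { s->semVal = initial; }

unsigned short semAccept(Sem *s)        /* cf. OSSemAccept() */
{
    unsigned short before = s->semVal;
    if (before > 0)
        s->semVal--;                    /* take the semaphore */
    return before;                      /* 0 means it was pending */
}

int semPost(Sem *s)                     /* cf. OSSemPost() */
{
    if (s->semVal == 0xFFFF)
        return -1;                      /* cf. OS_SEM_OVF at 65,535 */
    s->semVal++;                        /* release: may ready a waiter */
    return 0;
}
```

Initializing with 1 gives the mutex-style key described at the start of this answer: the first accept succeeds, the second finds the semaphore pending until a post releases it.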
6. Explain the Mailbox-Related Functions in detail? (April-2013)
Let there be a pointer *msg to the message to be sent to the mailbox, and another pointer
*mboxPointer for the message-sending event and for retrieving the message itself. The
MUCOS mailbox IPC functions for the tasks are as follows.
1. Creating a mailbox for an IPC. Function OS_Event *OSMboxCreate (void *msg) is for
creating an ECB at the RTOS and thus initializing a pointer *mboxPointer to msg. The
msg pointer is NULL if the created mailbox is initialized as an empty mailbox.
Task parameters passing M1: *msg is the message pointer to which *mboxPointer will initialize.
For an IPC, sending the message pointer *mboxPointer communicates the msg.
Returning M1: The function OSMboxCreate( ) returns a pointer to the ECB at MUCOS,
and mboxPointer at the ECB points to msg.
2. Check for availability of an IPC after a message at mailbox. Function void *OSMboxAccept (OS_EVENT *mboxPointer) checks for a mailbox message at the ECB at mboxPointer (as event pointer). The pointer for the message, msg, returns from the function if a message is available (mboxPointer pointing not to NULL but to msg). After returning, the mailbox empties, and mboxPointer will point to NULL on emptying of the mailbox. The difference from the OSMboxPend function is that OSMboxPend suspends the task if the message is not available and waits until mboxPointer is not equal to NULL.
Task parameter passing M2: The OS_Event *mboxPointer passes as pointer to ECB that
associates with the mailbox.
Returning M2: The function OSMboxAccept ( ) checks the message at *mboxPointer and returns the message pointer *msg presently at mboxPointer. The function returns a NULL pointer if a message is not available at mboxPointer. Later, *mboxPointer will point to NULL, because the mailbox empties.
3. Waiting for availability of an IPC for a message at mailbox. Function void *OSMboxPend (OS_EVENT *mboxPointer, unsigned short timeout, unsigned byte *MboxErr) checks for a mailbox message pointer msg at the ECB event pointer mboxPointer. A pointer for the message is retrieved on return if a message is available (mboxPointer pointing not to NULL but to msg); else the task waits till a message is available or till the timeout, whichever is earlier. If the timeout argument value is 0, it means wait indefinitely till the message is available.
Task parameters passing M3:
i. The OS_EVENT *mboxPointer passes as a pointer to the ECB that is associated with the mailbox.
ii. Passes the argument timeout. This resumes the blocked task after a delay equal to (timeout - 1) count inputs (ticks) at the system-clock timer.
iii. Passes the reference *mboxErr, a pointer that will hold the error codes.

Returning M3: The function OSMboxPend ( ) checks as well as waits for the message at *mboxPointer and returns msg. After returning, the mailbox empties; *mboxPointer will later point to NULL. When a message is not available, the function suspends and blocks the task as long as the message pointer is NULL. It returns a NULL pointer if the message is still not available on timeout. The following macros will then return true:


i. OS_NO_ERR returns true, when the mailbox message search succeeds;
ii. OS_TIMEOUT returns true, if the mailbox message does not succeed during the ticks defined for the timeout > 0;
iii. OS_ERR_PEND_ISR returns true, if this function call was from the ISR;
iv. OS_ERR_EVENT_TYPE returns true, when *mboxPointer is not pointing to the pointer-type variable for msg.

4. Send a message for an IPC through mailbox. Function unsigned byte OSMboxPost
(OS_EVENT *msgPointer, void *msg) sends mailbox message at ECB event pointer,
*msgPointer. The message sent is at msg, as well as at mboxPointer after the posting.
Task parameters passing M4: The OS_Event *msgPointer passes as a pointer to ECB that
associates with the message. The pointer msg passes to the mailbox address *msgPointer.
Returning M4: The function OSMboxPost ( ) sends the message and then returns the error
code on running the macros as follows:
i. OS_NO_ERR returns true, if mailbox message signaling succeeded, or
ii. OS_ERR_EVENT_TYPE returns true, if *msgPointer does not point to the mailbox message type, or
iii. OS_MBOX_FULL returns true, when the mailbox at msgPointer already has a message that is not accepted or returned.

5. Finding mailbox data and retrieving the error information for a mailbox. Function unsigned byte OSMboxQuery (OS_EVENT *msgPointer, OS_MBOX_DATA *mboxData) checks for mailbox data and places it at mboxData. It also finds the error information parameters, OS_NO_ERR and OS_ERR_EVENT_TYPE, for the ECB.
Task parameter passing M5: The function OSMboxQuery ( ) passes
i. a pointer of the mailbox message created earlier at *msgPointer and
ii. a pointer of the data structure at mboxData.

Returning M5: The function OSMboxQuery ( ) returns information in the pointer mboxData, a data structure with the current contents of the message (OSMsg) and the list of tasks waiting for the message. The error-code macro OS_NO_ERR returns true when message querying succeeds, or OS_ERR_EVENT_TYPE returns true if msgPointer does not point to a mailbox-type msg.
7. What are the Queue-Related Functions? (April/May-2012)
The message pointers can be posted into a queue by the tasks, either at the back as in a queue or at the front as in a stack. A task can thus insert a given message for deletion either in FIFO mode or in LIFO mode.
The post-front function enables insertion such that the waiting task does LIFO retrieval of the message pointer, and hence of the message.
1. Creating a queue for an IPC. OS_EVENT *QMsgPointer = OSQCreate (void **QTop, unsigned short qsize) is used for creating an OS ECB for the QTop; the queue is an array of pointers at QMsgPointer. The array size can be declared up to 65,535 elements. Initially, the array at QMsgPointer points to NULL.
Task parameters passing R: The **QTop passes as a pointer for an array of voids. The qsize is the size of this array (the number of message pointers that the queue can hold before a read; it is within 0 and 65,535).
Returning R: The function OSQCreate ( ) returns a pointer to the ECB allocated to the queue. It is an array of voids initially; NULL if none is available.
2. Waiting for an IPC message at a queue. Function void *OSQPend (OS_EVENT *QMsgPointer, unsigned short timeout, unsigned byte *Qerr) checks if a queue has a message pending at the ECB QMsgPointer (QMsgPointer not pointing to NULL). The message pointer points to the queue front (head) at the ECB for the queue defined by QMsgPointer. It suspends the task if no message is pending, until either a message arrives or the timeout ticks of the system timer expire. The queue head pointer at the ECB will later increment to point to the next message after returning the pointer for the message.
Returning S: The function returns a pointer to a message in the queue at the ECB. It also returns the following on running the macros:


i. OS_NO_ERR returns true, when the queue message search succeeds;
ii. OS_TIMEOUT returns true, if the queue did not get the message during the ticks defined by the timeout;
iii. OS_ERR_PEND_ISR returns true, if this function call was from the ISR;
iv. OS_ERR_EVENT_TYPE returns true, when *QMsgPointer is not pointing to the queue message.
Task parameter passing S:
i. OS_EVENT *QMsgPointer passes as a pointer to the ECB that is associated with the queue.
ii. It passes a 16-bit unsigned integer argument timeout. It resumes the task after a delay equal to (timeout - 1) count inputs (ticks) at the system clock.
iii. It passes *err, a pointer for holding the error codes.

3. Emptying the queue and eliminating all the message pointers. Function unsigned byte OSQFlush (OS_EVENT *QMsgPointer) checks if a queue has messages pending at QMsgPointer (the queue between the queue front pointer and queue back pointer at the ECB). It returns an error code, and the queue pointers at the ECB will later point to NULL on return from the function.
Task parameters passing T: The OS_Event *QMsgPointer passes as pointer to the ECB that
is associated with the queue.
Returning T: After the function OSQFlush ( ) executes, the error macros return as follows: OS_NO_ERR returns true, if the message queue flush succeeds, or OS_ERR_EVENT_TYPE returns true, if QMsgPointer is not pointing to the message queue.
4. Sending a message pointer to the queue. The function unsigned byte OSQPost (OS_EVENT *QMsgPointer, void *QMsg) sends a pointer of the message, *QMsg. The queue tail pointer at QMsgPointer points to QMsg.
Task parameters passing U: The OS_EVENT *QMsgPointer passes as a pointer to the ECB that is associated with the queue tail.


Returning U: After the function OSQPost ( ) executes, the message pointer *QMsg passes for the message to *QMsgPointer, and the error macros return the error codes as follows:
i. OS_NO_ERR returns true, if queue signaling succeeded;
ii. OS_ERR_EVENT_TYPE returns true, if *QMsgPointer is not pointing to the queue; and
iii. OS_Q_FULL returns true, when the queue message cannot be posted (qsize cannot exceed the limit set on creating the queue).

5. Sending a message pointer and inserting it at the queue front. The function unsigned byte OSQPostFront (OS_EVENT *QMsgPointer, void *QMsg) sends the QMsg pointer to the queue, but at the queue front pointer in the ECB, where the pointer for QMsg now stores, pushing the other message pointers backwards.
Task parameters passing V: The OS_EVENT *QMsgPointer passes as a pointer to the ECB that is associated with the queue. The second argument is the message QMsg address, which becomes the queue front.
Returning V: After the function OSQPostFront ( ) executes, the following error macros return as under:
i. OS_NO_ERR returns true, if the message at the queue front is placed successfully;
ii. OS_ERR_EVENT_TYPE returns true, if the pointer QMsgPointer is not pointing to the message queue;
iii. OS_Q_FULL returns true, if qsize was declared n and the queue already had n messages waiting for the read.

6. Querying to find the message and error information for the queue ECB. The function unsigned byte OSQQuery (OS_EVENT *QMsgPointer, OS_Q_DATA *QData) checks for queue data and places it at QData. It also finds the error information parameters on executing the following macros: OS_NO_ERR and OS_ERR_EVENT_TYPE.
Task parameters passing X: The function OSQQuery ( ) passes
i. a pointer of the queue ECB at *QMsgPointer and
ii. a pointer of the data structure at *QData.
Returning X: QData has a pointer to the message at OSMsg, the number of messages at OSNMsgs, OSQSize as the queue size in terms of the number of entries permitted, and the list of tasks waiting for the message. After the function, the following macros return true:
i. OS_NO_ERR, when querying succeeds;
ii. OS_ERR_EVENT_TYPE, if *QMsgPointer is not pointing to the queue message.

8. Explain Any Case Study of RTOS
A Case Study in Embedded System Design: An Engine Control Unit
Abstract
A number of techniques and software tools for embedded system design have been
recently proposed. However, the current practice in the designer community is heavily based on
manual techniques and on past experience rather than on a rigorous approach to design. To
advance the state of the art it is important to address a number of relevant design problems and
solve them to demonstrate the power of the new approaches.
An industrial example is chosen in automotive electronics to validate the design
methodology: an existing commercially available Engine Control Unit. We will see in detail the
specification, the implementation philosophy, and the architectural trade-off analysis. Then we
analyze the results obtained with the approach and compare them with the existing design
underlining the advantages offered by a systematic approach to embedded system design in
terms of performance and design time.
Introduction
Hardware/software co-design and embedded system design techniques advocate a formal design procedure for embedded systems, based on unbiased specification, simulation-based validation, various forms of automated partitioning, and module and interface synthesis.
However, the design methodology followed in practice is far from being at the
sophistication level implied by the approaches listed above. The common resistance offered by
designers to innovation is made even stronger in the case of embedded system design because of


the safety concerns and of the strict constraints on implementation costs. To demonstrate the applicability of the design methodology and tools, an industrially relevant design is tackled: an Engine Control Unit for a commercial vehicle. The case study is relevant because it represents an actual product and because of its complexity. In particular, we were able to compare the results obtained with our design methodology with the present implementation, showing its advantages both in terms of performance and design time.
Specification of the ECU
An electronic Engine Control Unit (ECU) consists of a set of sensors which periodically
measure the engine status, an electronic unit which processes data coming from the sensors and
drives the actuators, and the actuators themselves which execute the commands received from
the control unit. A control strategy is implemented in the electronic unit to optimize the fuel
injection and ignition; in particular it should minimize fuel consumption, minimize emissions of
polluting substances and maximize torque and power, when possible. These requirements are
usually competing, so the algorithm must find the best compromise for each situation.
The two main tasks of an ECU are the control of injection and ignition. The control
specifications for these two tasks are as follows:
Injection: in order to burn the fuel completely and correctly, the ratio between the air and the fuel which go into each cylinder should be kept constant and close to the value 14.7 (for a gasoline engine). This is achieved by controlling the opening time of each injector.
Ignition: in order to give the fuel enough time to burn completely, the spark should be fired in advance with respect to the instant when the piston is at its highest point. This parameter also affects consumption and emissions, and it is basically computed from the engine RPM.
Both injection and ignition can be adapted dynamically with very high precision, by processing the input signals coming from the sensors.


Functional architecture of ECU


New methodologies are required in the design of digital electronic systems, due to the increased
complexity and reduced time-to-market. Moreover, a product should be flexible to adapt to
changes during its lifetime, which is best obtained by using software, and must also meet tight
timing constraints, which is most suitable for hardware components. Partitioning between
hardware and software is therefore a critical step in the design flow, and CAD tools should help
the designer in making the right choices. We have shown how to perform partitioning by using
fast co-simulation and software estimation. We have also shown how to optimize the system
with respect to code size and running time, by selectively collapsing or dividing modules, and by
moving the threshold between control and data-flow in conditional statements. The results
obtained are very promising, and were achieved on a real design and in a relatively short period of
time.
9. Describe the task service functions with examples. (November-2013)
These functions are used to create a task, suspend and resume a task, and set and retrieve the system time.


Unsigned byte OSTaskCreate ( ) must be called before running a task.
Unsigned byte OSTaskSuspend ( ) is called for blocking a task.
Unsigned byte OSTaskResume ( ) is called for resuming a blocked task.
Void OSTimeSet ( ) is called when the system time is to be set.
Unsigned int OSTimeGet (void) finds the present count when the time is read.

2. Creating a task. Function unsigned byte OSTaskCreate (void (*task)(void *taskPointer), void *pmdata, OS_STK *taskStackPointer, unsigned byte taskPriority) is explained as follows.
A preemptive scheduler preempts a running task when a task of higher priority becomes ready. Therefore, each user task is to be assigned a priority, which must be set between 8 and OS_MAX_TASKS - 9 (or 8 and OS_LOWEST_PRIO - 8). The OS reserves the eight highest and eight lowest priorities for its own functions. The total number of tasks that MUCOS manages can be up to 64. A MUCOS task priority is also the task identifier; there is no task-ID-defining function in MUCOS, because each task has to be assigned a distinct priority.
If the maximum number of user tasks is eight, then OS_MAX_TASKS is 24 (including eight system-level tasks and eight lowest-priority system tasks), and the priority must be set between 8 and 15, because MUCOS will assign priorities 16 to 23 to the eight lowest-priority system-level tasks. The priorities 0 to 7 and 16 to 23 will then be for MUCOS internal uses. OS_LOWEST_PRIO and OS_MAX_TASKS are user-defined constants in preprocessor codes that are needed for configuring MUCOS for the user applications. Defining 20 user tasks unnecessarily, when only 4 tasks are actually created by the user, is to be avoided, because a larger OS_MAX_TASKS means unnecessarily higher memory-space allocation by the system to the user tasks.
Task parameters passing PA:
a) *taskPointer is a pointer to the codes of the task being created.
b) *pmdata is a pointer for an optional message-data reference passed to the task. If none, we assign it as NULL.
c) *taskStackPointer is a pointer to the stack of the task being created.
d) taskPriority is the task priority and must be within 8 to 15, if the macro OS_LOWEST_PRIO sets the lowest priority equal to 23.
Returning RA: The lowest priority of any task, OS_LOWEST_PRIO, is 23. For the application program, the task priority assigned must be within 8 to 15. The function OSTaskCreate ( ) returns the following:
i. OS_NO_ERR, when creation succeeds;
ii. OS_PRIO_EXIST, if the priority value that passed already exists;
iii. OS_PRIO_INVALID, if the priority value that passed is more than OS_LOWEST_PRIO;
iv. OS_NO_MORE_TCB, when no more memory blocks for the task control blocks are available.
A task can create other tasks, but an ISR is not allowed to create a task. An exemplary use is in creating a task, Task1_Connect, for a connection task:
OSTaskCreate (Task1_Connect, (void *) 0, &Task1_ConnectStack[100], 10)
Task parameters passed as arguments are as follows.
a) Task1_Connect, a pointer to the codes of Task1_Connect, the task being created.
b) The pointer for an optional message-data reference passed to the task is NULL.
c) &Task1_ConnectStack[100] is a pointer to the stack of Task1_Connect, which is given size = 100 addresses in the memory.
d) taskPriority is the task priority allotted at 10, the highest but two that can be allocated.
It will generate the error parameters OS_NO_ERR = true, in case creation succeeds; OS_PRIO_INVALID = true, if the passed priority parameter is higher than OS_LOWEST_PRIO - 8; OS_NO_MORE_TCB = false, when a TCB is available for Task1_Connect.


3. Suspending (blocking) a task. Function unsigned byte OSTaskSuspend (unsigned byte taskPriority).
Task parameters passing PB: taskPriority is the task priority and must be within 8 to 15 for 8 user tasks.
Returning RB: The function OSTaskSuspend ( ) returns the error parameter OS_NO_ERR when the blocking succeeds; OS_PRIO_INVALID, if the priority value that passed is more than 23, the OS_LOWEST_PRIO constant value; OS_TASK_SUSPEND_PRIO, if the priority value that passed does not exist; OS_TASK_SUSPEND_IDLE, if attempting to suspend the idle task, which is illegal.
An exemplary use is in blocking the task Task1_Connect of priority = Task1_Connect_priority, as follows: OSTaskSuspend (Task1_Connect_priority). The task parameter passed as argument is 10. The following error parameters will be returned by this function:
a) OS_NO_ERR = true, when the blocking succeeds.
b) OS_PRIO_INVALID = false, as 10 is a valid priority and is not more than OS_LOWEST_PRIO.
c) OS_LOWEST_PRIO = 23.
d) OS_TASK_SUSPEND_PRIO = false, as the priority value that passed already exists.
e) OS_TASK_SUSPEND_IDLE = false, as the task being suspended is not the idle task. MUCOS executes the idle task OSTaskIdle ( ) when all tasks are either waiting for timer expiry or for an IPC, e.g., a semaphore IPC.

4. Resuming (enabling, unblocking) a task. Function unsigned byte OSTaskResume (unsigned byte taskPriority) resumes a suspended task.
Task parameters passing PC: taskPriority is the task priority of the task which is to resume; it must be within 8 to 15 when OS_LOWEST_PRIO is 23 and the number of user tasks = 8.
Returning RC: The function OSTaskResume ( ) returns OS_NO_ERR when the unblocking succeeds; OS_PRIO_INVALID, if the priority value that passed is more than 23, the OS_LOWEST_PRIO constant value; OS_TASK_RESUME_PRIO, if the priority value that passed already resumed; OS_TASK_NOT_SUSPENDED, if attempting to resume a task that is not suspended (blocked).


An exemplary use is in unblocking Task1_Connect of priority = Task1_Connect_priority, as follows:
OSTaskResume (Task1_Connect_priority). The task parameter passed as argument is 10, as Task1_Connect_priority = 10. The following error parameters will be returned by the task-resuming function:
a) OS_NO_ERR = true, when the unblocking succeeds and the task of priority 10 reaches the running state.
b) OS_PRIO_INVALID = false, as 10 is a valid priority and is not more than OS_LOWEST_PRIO.
c) OS_LOWEST_PRIO = 23.
d) OS_TASK_RESUME_PRIO = false, as the priority value that passed already resumed.
e) OS_TASK_NOT_SUSPENDED = false, as the task being resumed was suspended.
5. Setting time of the system clock. Function void OSTimeSet (unsigned int count) returns no value. The passing parameter PD as argument is given next.
PD: It passes a 32-bit integer for the count (it sets the number of ticks for the current time, which will increment after each system-clock tick).
An exemplary use of the function OSTimeSet is to preset the time. The function OSTimeSet (0) sets the present count = 0. Caution: it is suggested that the OSTimeSet function should be used before the OSTickInit function, and only once; it should never be used within a task function, as other functions that rely on the timer will then malfunction. Let the OS timer clock count continue as in a free-running counter. There is little need later of using the set-time function, because at any instant the time can be read using the get function, and at any other instant it can be defined again by adding a value to this time.
6. Getting time of the system clock. Function unsigned int OSTimeGet (void) returns the current number of ticks as an unsigned integer. The passing parameter as argument is none.
RE: Returns a 32-bit integer, the current number of ticks at the system clock.


10. List out the semaphore related functions and explain them. (November-2013)(November-2012)
When a semaphore is created by the OS and used as a resource-acquiring key, it must be initialized with 1 to indicate that the resource is available.
The MUCOS semaphore-related functions for tasks are:
OS_EVENT *OSSemCreate ( ) to create and initialize a semaphore.
Void OSSemPend ( ) to check whether a semaphore is pending.
Unsigned short OSSemAccept ( ) to check whether semVal > 0.
Unsigned byte OSSemPost ( ), which, if semVal is 0 or more, increments it and makes the semaphore again not pending.
Unsigned byte OSSemQuery ( ) to get semaphore information.
When a semaphore created by this OS is used as a resource-acquiring key (as mutex), the semaphore value to start with is 1, which means that the resource is available, and 0 will mean not available. The value to start with is 0 or n when using the semaphore as an event-signaling flag or for counting, respectively.
2. Creating a semaphore for the IPCs. Function OS_EVENT *OSSemCreate (unsigned short semVal) is for creating an OS ECB (Event Control Block) for an IPC with semVal, returning a pointer which points to the ECB. A semaphore is created and initialized with the value = semVal.
Returning RM: The function OSSemCreate ( ) returns a pointer *eventPointer for the ECB allocated to the semaphore; NULL if none is available.
Task parameter passing PM: A semVal between 0 and 65,535 is passed. For an IPC as an event-signaling flag, SemFlag must pass 0; as a resource-acquiring key, SemKey must pass 1. For an IPC as a counting semaphore, SemCount must be either 0 or a count value to be passed in the beginning.


3. Waiting for an IPC for semaphore release. Function void OSSemPend (OS_EVENT *eventPointer, unsigned short timeout, unsigned byte *SemErrPointer) is for letting a task wait till the release event of a semaphore: SemFlag or SemKey or SemCount. The latter is at the ECB pointed to by *eventPointer. SemFlag or SemKey or SemCount becoming greater than 0 is an event that signals the release of the tasks in the waiting states. The tasks now become ready for running (they run if no other higher-priority task is ready). The tasks also become ready after a predefined timeout. SemFlag or SemKey or SemCount decrements, and if it becomes 0, it makes the semaphore pending again, and the other tasks using OSSemPend ( ) have to wait for its release.
Returning RN: When a semaphore is pending, the function OSSemPend ( ) suspends the task till semVal > 0 (release) and decrements semVal on unblocking that task. The following macros return true:
i. OS_NO_ERR returns true, when the semaphore search succeeds (semVal > 0);
ii. OS_TIMEOUT returns true, if the semaphore did not release (did not become > 0) during the ticks defined for the timeout;
iii. OS_ERR_PEND_ISR returns true, if this function call was by an ISR, which is an error, since an ISR should not be blocked for taking the semaphore;
iv. OS_ERR_EVENT_TYPE returns true, when *eventPointer is not pointing to the semaphore.

Task parameter passing PN:
i. The OS_EVENT *eventPointer passes as a pointer to the ECB that associates with the semaphore: SemFlag or SemKey or SemCount.
ii. Passes an argument for the number of timer ticks for the timeout. The task unblocks after a delay equal to (timeout - 1) ticks, even when the semaphore is not released. It prevents an infinite wait. It must pass 0 if this provision is not used.
iii. Passes *err, a pointer for holding the error codes.

4. Check for availability of an IPC after a semaphore release. Function unsigned short OSSemAccept (OS_EVENT *eventPointer) checks for a semaphore value at the ECB and whether it is greater than 0. An unsigned 16-bit value is retrieved and then decremented.
Returning RO: The function OSSemAccept ( ) decrements semVal if > 0 and returns the pre-decremented value as an unsigned 16-bit number. It returns 0 if semVal was 0, the semaphore still pending, when checked. After this, the task codes run further.
Task parameter passing PO: The OS_EVENT *eventPointer passes for the ECB that associates with the semaphore, semVal.
5. Sending an IPC after a semaphore release. Function unsigned byte OSSemPost (OS_EVENT *eventPointer) is for letting another waiting task run now or afterwards; an IPC is sent for the release event of the semaphore, SemFlag or SemKey or SemCount. The IPC is at the ECB pointed to by *eventPointer. SemFlag or SemKey or SemCount increments, and if it becomes greater than 0, it is an event that signals the release of a task from the waiting state. The task now becomes ready for running (runs if no higher-priority task is ready). SemFlag or SemKey or SemCount decrements on running that task, and if it becomes 0, it makes the semaphore pending again, and the other tasks have to wait for its release. If the IPC is posted from an ISR, then the pending task can run only after OSIntExit ( ) executes and the return from the ISR occurs. If the presently running task is of higher priority than the task pending for want of the IPC, then the present task will continue to run unless blocked or delayed by executing some function.
Returning RP: The function OSSemPost ( ) increments semVal if it is 0 or > 0, and the following macros then return true from the error codes as follows:
i. OS_NO_ERR returns true, if semaphore signaling succeeded (semVal > 0 or 0);
ii. OS_ERR_EVENT_TYPE returns true, if *eventPointer is not pointing to the semaphore;
iii. OS_SEM_OVF returns true, when semVal overflows (cannot increment, being already 65,535).
Task parameter passing PP: The OS_Event *eventPointer passes as pointer to
ECB that associates with the semaphore.


7. Retrieve the error information for a semaphore. Function unsigned byte OSSemQuery (OS_EVENT *eventPointer, OS_SEM_DATA *SemData) puts the data values for the semaphore at the pointer SemData.
Returning RQ: After OSSemQuery ( ) runs, SemData gets OSCnt, which is the present semaphore value (count). SemData also gets the list of tasks waiting for the semaphore; the list is in the pointers OSEventTbl [ ] and OSEventGrp. The semaphore error information parameters can be found on running the macros OS_NO_ERR and OS_ERR_EVENT_TYPE:
i. OS_NO_ERR returns true, when querying succeeds, or
ii. OS_ERR_EVENT_TYPE returns true, if *eventPointer is not pointing to the semaphore.

Task parameters passing PQ: The function OSSemQuery ( ) passes a pointer of the semaphore created earlier at *eventPointer and a pointer of the data structure at *SemData for that created semaphore.
11. Discuss the features of VxWorks? (November-2012)
The features of VxWorks are:
- High-performance, Unix-like, multitasking environment; scalable and hierarchical RTOS
- Host- and target-based development approach
- Supports Device Software Optimization, a methodology that enables development and running of device software faster, better and more reliably
1. VxWorks RTOS kernel
- VxWorks 6.x processor abstraction layer
- The layer enables application design for later versions by just changing the layer-hardware interface
- Supports advanced processor architectures: ARM, ColdFire, MIPS, Intel, SuperH
2. VxWorks RTOS kernel
- Hard real-time applications
- Supports kernel-mode execution of tasks
- Supports open-source Linux and the TIPC (transparent inter-process communication) protocol
3. Scalability
- Scalable OS: only the needed OS functions become part of the application codes
- The configuration file includes the user definitions for the needed IPC functions
4. Hierarchical
- Kernel extendibility; the interfaces hierarchy includes timers, signals, TCP/IP sockets, a queuing functions library, NFS, RPCs, Berkeley ports and sockets, pipes, a Unix-compatible loader, a language interpreter, a shell, debugging tools and a linking loader for Unix.
QUESTION BANK
UNIT 5
2 MARKS

1. What are the various options of RTOS? (April/May-2012)
2. What is MUCOS? (November-2011)
3. What are the various features of µC/OS-II?
4. What are the types of Source Code files of µC/OS-II?
5. What are the Processor Dependent Source Files?
6. What are the Processor Independent Source Files?
7. What is VxWorks?
8. What are the features of VxWorks?
9. What are the differences between pSOS and VxWorks?


10. What are the various system level Functions?


11. What are the various Task Service functions? (November-2012)
12. What are the various Time Delay Functions?(November-2012) (November-2013
13. What is the various Memory Allocation related Functions?
14. What are the various Semaphore Related Functions?
15. What are the other functions that are related to MUCOS?
16. What are the general Mailbox Related Functions? (November-2011)
17. What are the general Queue Related Functions?
18. When do you use OS_ENTER-CRITICAL ( ) and OS_EXIT_CRITICAL ( )?
19. How to perform taskLock( ) Function?
20. When will the synchronous context switches occur?
21. How the task is scheduled in windows operating System?
22. What are Wind Message Queues?
23. How can a message queue send Events to a Task?
24. How is a real-time system structured?
25. How does the VxWorks Exception Handler work?
26. Compare taskLock() and intLock().
27. How is wind scheduler enabled and disabled?
28. What are the routines that can be called by Task Switch Hooks?
29. What is called reentrancy?
30. What are the reentrancy techniques used by the majority of VxWorks routines?
31. Give any 2 time delay functions. (November-2013)
32. How can you create a queue for an IPC? (November-2013)
33. Define TCB? (April/May-2012)
11 MARKS
1. Explain the System-Level Functions in detail? (November-2011)
2. What are the Task Service Functions? Explain in detail?
3. Explain the Time Delay Functions in detail? (November-2011)(April/May-2012)


4. What are the Memory Allocation-Related functions? (April-2013) (April/May-2012)


5. What are the Semaphore-Related Functions? (November-2012)
6. Explain the Mailbox-Related Functions in detail? (April-2013)
7. What are the Queue-Related Functions? (April/May-2012)
8. Explain Any Case Study of RTOS
9. Describe the task service functions with examples. (November-2013)
10. List out the semaphore related functions and explain them. (November-2013) (November-2012)
11. Discuss the features of VxWorks? (November-2012)
PONDICHERRY UNIVERSITY QUESTIONS
1. Explain the System-Level Functions in detail? (November-2011) [Ref. Page No.: 214]
2. Explain the Time Delay Functions in detail? (November-2011) (April/May-2012) [Ref. Page No.: 220]
3. What are the Memory Allocation-Related functions? (April-2013) (April/May-2012) [Ref. Page No.: 222]
4. What are the Semaphore-Related Functions? (November-2012) [Ref. Page No.: 224]
5. Explain the Mailbox-Related Functions in detail? (April-2013) [Ref. Page No.: 228]
6. What are the Queue-Related Functions? (April/May-2012) [Ref. Page No.: 231]
7. Describe the task service functions with examples. (November-2013) [Ref. Page No.: 236]
8. List out the semaphore related functions and explain them. (November-2013) (November-2012) [Ref. Page No.: 241]
9. Discuss the features of VxWorks? (November-2012) [Ref. Page No.: 244]


UNIT WISE PONDICHERRY UNIVERSITY QUESTIONS


CS T56 EMBEDDED SYSTEMS
UNIT I
1. Explain the Components, Classification and Characteristics of Embedded Systems briefly? [NOV 2011], [April 2012], [April 2013] [Ref. Page No.: 11]
2. What is a Microcontroller? Explain with an example. [NOV 2011] [Ref. Page No.: 13]
3. What is a Microprocessor? Explain with an example. [NOV 2012] [Ref. Page No.: 14]
4. Explain the Embedded System Architectures in detail? [April 2010], [NOV 2012] [Ref. Page No.: 21]
5. Discuss the shared data problem and its remedies with example. [April 2010] [Ref. Page No.: 29]
6. Explain the Bus structure of an embedded system. [April 2010] [Ref. Page No.: 31]
7. Explain briefly about Interrupts. [April 2010] [Ref. Page No.: 33]
8. Explain in detail about the Monolithic Kernel. [April 2012], [NOV 2012] [Ref. Page No.: 35]
9. Explain the skills required of an embedded system designer. [April 2012] [Ref. Page No.: 36]

UNIT II
1. Discuss the ARM architecture in detail. (April 2013) [Ref. Page No.: 45]
2. What are the registers and memory access processes in ARM? [April 2013] [Ref. Page No.: 59]
3. How is data transferred in ARM using instructions? [November 2012] [Ref. Page No.: 64]
4. Classify the instruction set of ARM and explain. [April/May 2012] [Ref. Page No.: 73]
5. What is the ARM cache? [November 2012] [Ref. Page No.: 84]
6. Explain the ARM bus in detail? [November 2012] [Ref. Page No.: 84]
7. Explain embedded systems with ARM? [November 2011] [Ref. Page No.: 45]
8. What are the serial bus protocols? [November 2014] [Ref. Page No.: 86]
9. Explain the parallel bus protocols? [November 2011] [Ref. Page No.: 93]


UNIT III
1. Specify the C++ features that suit embedded programs. [April/May 2014] [Ref. Page No.: 109]
2. Write a short note on software tools for embedded programs. [April/May 2014] [Ref. Page No.: 111]
3. Discuss multiprocessor systems in detail. [November 2011] [Ref. Page No.: 114]
4. Explain the embedded software development process and tools. [November 2011] [Ref. Page No.: 117]
UNIT IV
1. What is Preemptive Priority-Based Scheduling? Explain Rate Monotonic Scheduling in detail. (April-2013) [Ref. Page No.: 161]
2. What is Priority Inversion? (November-2011) [Ref. Page No.: 163]
3. Explain Process Synchronization. (November-2012) [Ref. Page No.: 173]
4. What is the purpose of a Mailbox in Real-Time Operating Systems? (April/May-2012) [Ref. Page No.: 193]
5. Explain about the Virtual Socket. (April/May-2012) [Ref. Page No.: 190]
6. Describe the functions of an RTOS and when it is necessary. (April/May-2012) (November-2011) (November-2012) [Ref. Page No.: 194]
7. How are interrupts handled by the RTOS? (April-2013) [Ref. Page No.: 197]


UNIT V
1. Explain the System-Level Functions in detail? (November-2011) [Ref. Page No.: 214]
2. Explain the Time Delay Functions in detail? (November-2011) (April/May-2012) [Ref. Page No.: 220]
3. What are the Memory Allocation-Related functions? (April-2013) (April/May-2012) [Ref. Page No.: 222]
4. What are the Semaphore-Related Functions? (November-2012) [Ref. Page No.: 224]
5. Explain the Mailbox-Related Functions in detail? (April-2013) [Ref. Page No.: 228]
6. What are the Queue-Related Functions? (April/May-2012) [Ref. Page No.: 231]
7. Describe the task service functions with examples. (November-2013) [Ref. Page No.: 236]
8. List out the semaphore related functions and explain them. (November-2013) (November-2012) [Ref. Page No.: 241]
9. Discuss the features of VxWorks? (November-2012) [Ref. Page No.: 244]

