
Embedded Systems Unit1

Lecture No: 1

Introduction to Embedded Systems and Definition


Our day-to-day life is becoming more and more dependent on embedded systems and
digital techniques. Embedded technologies are blending into our daily activities even
without our knowledge.
DEFINITION OF AN EMBEDDED SYSTEM:-
An embedded system is an electronic and/or electro-mechanical system designed to
perform a specific function, as a combination of both hardware and software.
Every embedded system is unique, and the hardware as well as the firmware is highly
specialized to the application domain. Embedded systems are becoming an inevitable part
of any product or equipment in all fields, including home appliances, telecommunications,
medical equipment, industrial control, consumer products, etc.
INTRODUCTION TO EMBEDDED SYSTEMS:-
Generally, our day-to-day life is becoming more dependent on embedded systems
and digital techniques. Many people do not realize that refrigerators, DVD players,
air conditioners, motor vehicles and security systems are all driven by the intelligence of
embedded systems.
Embedded systems are like reliable servants: they work without revealing their identity
and do not complain about workloads.
Nowadays we enjoy many comforts in our daily life, and every such facility
and comfort is derived from the power of embedded systems.

B Anil Kumar & P Kalyan Chakravarthi ,ECE Department,GMRIT, RAJAM


Lecture No: 2

EMBEDDED SYSTEM vs. GENERAL COMPUTING SYSTEM

The main objective of this lecture is to give a detailed description of general purpose
computing systems and to compare them with embedded systems.

1. Definition: An embedded system is a combination of special purpose hardware and an
embedded OS for executing a specific set of applications. A general purpose computing
system is a combination of generic hardware and a general purpose operating system for
executing a variety of tasks.
2. Operating system: An embedded system may or may not contain an operating system
for its functioning. A general purpose computing system contains a general purpose
operating system.
3. Time criticality: For certain categories of embedded systems, mostly real-time
applications, the responses are very time critical. Responses of a general purpose system
are not necessarily time critical.
4. Determinism: Execution behavior is highly deterministic for certain types of embedded
systems, such as hard real-time systems. A general purpose system need not be
deterministic in execution behavior.
5. Deciding factors: Application-specific requirements are the key deciding factors for an
embedded system. Performance is the key deciding factor in the selection of a general
purpose system.
6. Programmability: The firmware of an embedded system is pre-programmed and cannot
be programmed again by the end user. Applications of a general purpose system are
programmable by the user.
7. Power: An embedded system is highly tailored to take advantage of the power saving
modes supported by the hardware and OS. A general purpose system is less (or not)
tailored towards reduced operating power requirements, though it offers options for
different levels of power management.

Lecture No: 3
History & Classification of Embedded Systems
The main objective of this lecture is to give a detailed description of the history of
embedded systems.
One of the first recognizably modern embedded systems was the Apollo Guidance
Computer, developed by Charles Stark Draper at the MIT Instrumentation Laboratory. At
the project's inception, the Apollo guidance computer was considered the riskiest item in
the Apollo project as it employed the then newly developed monolithic integrated circuits
to reduce the size and weight. An early mass-produced embedded system was the
Autonetics D-17 guidance computer for the Minuteman missile, released in 1961. When
the Minuteman II went into production in 1966, the D-17 was replaced with a new
computer that was the first high-volume use of integrated circuits. This program alone
reduced prices on quad NAND gate ICs from $1000 each to $3 each, permitting their use
in commercial products.
Since these early applications in the 1960s, embedded systems have come down
in price, and there has been a dramatic rise in processing power and functionality. For
example, the first microprocessor, the Intel 4004, was designed for calculators and other
small systems, but still required many external memory and support chips. In 1978 the
National Electrical Manufacturers Association released a "standard" for programmable
microcontrollers, covering almost any computer-based controller, including single board
computers and numerical and event-based controllers.
As the cost of microprocessors and microcontrollers fell it became feasible to
replace expensive knob-based analog components such as potentiometers and variable
capacitors with up/down buttons or knobs read out by a microprocessor even in consumer
products. By the early 1980s, memory, input and output system components had been
integrated into the same chip as the processor forming a microcontroller.
Microcontrollers find applications where a general-purpose computer would be too
costly.
A comparatively low-cost microcontroller may be programmed to fulfill the same
role as a large number of separate components. Although in this context an embedded
system is usually more complex than a traditional solution, most of the complexity is
contained within the microcontroller itself. Very few additional components may be
needed and most of the design effort is in the software. Software prototyping and testing
can be quicker compared with the design and construction of a new circuit not using an
embedded processor.


Lecture No: 5
Classification of Embedded Systems:
It is possible to have a multitude of classifications for embedded systems based on
different criteria.
1. Based on generation
2. Complexity and performance requirements
3. Based on deterministic behavior
4. Based on triggering.
Based on Generation:

This classification is based on the order in which embedded processing systems
evolved from the first version to where they are today. As per this criterion,
embedded systems are classified as:
i. First generation
ii. Second Generation
iii. Third Generation
iv. Fourth Generation

First Generation:
The early embedded systems were built around 8-bit microprocessors like the 8085
and Z80, and 4-bit microcontrollers. They had simple hardware circuits, with firmware
developed in assembly code. Digital telephone keypads, stepper motor control units,
etc. are examples of this generation.
Second Generation:
These are embedded systems built around 16-bit microprocessors and 8- or 16-bit
microcontrollers, following the first generation embedded systems. The instruction
sets of the second generation processors/controllers were much more complex and
powerful than those of the first generation.
Third Generation:
With advances in processor technology, embedded system developers started making
use of powerful 32-bit processors and 16-bit microcontrollers for their designs. A new
concept of application- and domain-specific processors, such as Digital Signal
Processors (DSPs) and Application Specific Integrated Circuits (ASICs), also evolved
in this generation.
Fourth Generation:
The advent of System on Chips (SoCs), reconfigurable processors and multicore
processors is bringing high performance, tight integration and miniaturization into the
embedded device market. The SoC technique implements a total system on a chip by
integrating different functionalities with a processor core on a single integrated circuit.

Lecture No: 6
Major application areas of Embedded Systems
Embedded systems are deployed in various applications and span all aspects of modern
life.
The main application areas of embedded systems include:

Card readers: Barcode readers, smart card readers, hand-held devices, etc.
Banking and retail: Automatic teller machines (ATMs), currency counters, point of
sale (POS) terminals, etc.
Measurement and instrumentation: Digital multimeters, digital CROs, logic analyzers,
PLC systems, etc.
Computer networking systems: Network routers, switches, hubs, firewalls, etc.
Computer peripherals: Printers, scanners, fax machines, etc.

Lecture No: 7
PURPOSE OF EMBEDDED SYSTEM:
Each embedded system is designed to serve the purpose of any one or a combination of
the following tasks:
1. Data collection / storage / representation.
For example, an underwater sensor network uses embedded systems for collecting
information and storing it.
2. Data communication.
The networks are designed to carry different types of communication including voice,
data and video signals. Even systems with a single original purpose like telephony have
been exploited for the transfer of other traffic, like data transfer for computers. Another
development that has increased interest in general purpose communication is the internet.
The majority of embedded communication systems can be classified as either
point-to-point networks (data links) or shared media networks (data highways).
3. Data (signal) processing.
Digital signal processors are particularly designed for implementing digital signal
processing algorithms such as the fast Fourier transform (FFT) and filtering algorithms.
Digital signal processors use a dedicated circuit for performing multiply and
accumulate operations. Some embedded processors also have special fuzzy logic
instructions (such as a fuzzy AND instruction), because inputs to an embedded system
are sometimes better represented as fuzzy variables.
4. Monitoring and Control.
A typical example is an embedded system for the control of temperature and light
intensity, with continuous monitoring, built in a single system using sensors, a
microcontroller and an LCD. Controlling logic incorporated in the firmware drives any
connected device when specific conditions are met.
5. Application specific user interface.
Embedded systems range from having no user interface at all (when dedicated to only
one task) to complex graphical user interfaces that resemble modern desktop operating
systems. Simple embedded devices use buttons, LEDs, and graphic or character LCDs
with a simple menu system.

Lecture No: 9
CORE OF THE EMBEDDED SYSTEM
Embedded systems are domain and application specific and are built around a central
core. The core of an embedded system may fall into one of the following categories:
1) GENERAL PURPOSE AND DOMAIN SPECIFIC PROCESSORS:
1.1) MICROPROCESSORS
1.2) MICROCONTROLLERS
1.3) DIGITAL SIGNAL PROCESSORS
2) APPLICATION SPECIFIC INTEGRATED CIRCUITS (ASICs)
3) PROGRAMMABLE LOGIC DEVICES (PLDs)
4) COMMERCIAL OFF-THE-SHELF COMPONENTS (COTS)
1.GENERAL PURPOSE AND DOMAIN SPECIFIC PROCESSORS:
MICROPROCESSORS:
The CPU is a unit that centrally fetches and processes a set of general-purpose
instructions. The CPU instruction set includes instructions for data transfer operations,
ALU operations, stack operations, input and output (I/O) operations, and program
control, sequencing and supervising operations. The general purpose instruction set is
specific to each CPU. Any CPU must possess the following basic functional units.
1. A control unit to fetch and control the sequential processing of a given command or
instruction and for communicating with the rest of the system.
2. An ALU for the arithmetic and logical operations on bytes or words. It may be
capable of processing 8-, 16-, 32- or 64-bit words at a time.
A microprocessor is a single VLSI chip that has a CPU and may also have some other
units (for example, caches, a floating point arithmetic unit, pipelining and superscalar
units) that are additionally present and that result in faster processing of instructions.
The earlier generation microprocessors' fetch-and-execute cycle was guided by a clock
frequency of the order of 1 MHz. Processors now operate at clock frequencies of 2 GHz
and above. [Intel released a 2 GHz processor on August 25, 2001, which also marked the
twentieth anniversary of the introduction of the IBM PC. Intel released the 3 GHz
Pentium 4 on April 14, 2003.] Since early 2002, a few highly sophisticated embedded
systems (for example, Gbps transceivers and encryption engines) have incorporated the GHz

processor. [Gbps means gigabits per second. A transceiver is a transmitting-cum-receiving
circuit with appropriate processing and controls, for example for bus collisions.]
One example of an older generation microprocessor is Intel 8085. It is an 8-bit processor.
Another is the Intel 8086 or 8088, which is a 16-bit processor. Intel 80x86 (also referred
to as x86) processors are the 32-bit successors of the 8086. [The x here means extended
8086 for 32 bits.] Examples of 32-bit processors in the 80x86 series are the Intel 80386
and 80486.
Mostly, the IBM PCs use 80x86 series of processors and the embedded systems
incorporated inside the PC for specific tasks (like graphic accelerator, disk controllers,
network interface card) use these microprocessors.
An example of the new generation of 32- and 64-bit microprocessors is the classic
Pentium series of processors from Intel. These have a superscalar architecture. They also
possess powerful ALUs and floating point processing units. An example of the use of a
Pentium III operating at a 1 GHz clock frequency in an embedded system is an
encryption engine, which gives encrypted data at the rate of 0.464 Gbps.
MICROCONTROLLER:
Just as a microprocessor is the most essential part of a computing system, a
microcontroller is the most essential component of a control or communication circuit. A
microcontroller is a single-chip VLSI unit (also called ‘microcomputer’) which, though
having limited computational capabilities, possesses enhanced input-output capabilities
and a number of on-chip functional units. Microcontrollers are particularly suited for
use in embedded systems for real-time control applications with on-chip program
memory and devices.
DIGITAL SIGNAL PROCESSOR:
Just as a microprocessor is the most essential unit of a computing system, a digital signal
processor (DSP) is an essential unit of an embedded system for a large number of
applications needing processing of signals. Exemplary applications are in image
processing, multimedia, audio, video, HDTV, DSP modem and telecommunication
processing systems. DSPs also find use in systems for fast recognition of an image
pattern or a DNA sequence.

Like a general purpose processor, the DSP is a single-chip VLSI unit. It possesses the
computational capabilities of a microprocessor and also has one or more Multiply and
Accumulate (MAC) units. Nowadays, a typical DSP has a 16 x 32 MAC unit. A DSP
provides fast, discrete-time signal-processing instructions. It has Very Long Instruction
Word (VLIW) processing capabilities; it processes Single Instruction Multiple Data
(SIMD) instructions fast; it processes Discrete Cosine Transformation (DCT) and inverse
DCT (IDCT) functions fast. The latter are a must for fast execution of the algorithms for
signal analysis, coding, filtering, noise cancellation, echo elimination, compression and
decompression, etc.
2. APPLICATION SPECIFIC INTEGRATED CIRCUITS:
An application-specific integrated circuit is an integrated circuit (IC) customized for a
particular use, rather than intended for general-purpose use. For example, a chip designed
to run in a digital voice recorder is an ASIC. Application-specific standard
products (ASSPs) are intermediate between ASICs and industry standard integrated
circuits like the 7400 or the 4000 series.
As feature sizes have shrunk and design tools improved over the years, the maximum
complexity (and hence functionality) possible in an ASIC has grown from 5,000 gates to
over 100 million. Modern ASICs often include entire microprocessors, memory blocks
including ROM, RAM, EEPROM and Flash, and other large building blocks. Such an
ASIC is often termed an SoC (system-on-chip). Designers of digital ASICs use a hardware
description language (HDL), such as Verilog or VHDL, to describe the functionality of
ASICs.
3.PROGRAMMABLE LOGIC DEVICES:
A programmable logic device or PLD is an electronic component used to
build reconfigurable digital circuits. Unlike a logic gate, which has a fixed function, a
PLD has an undefined function at the time of manufacture. Before the PLD can be used
in a circuit it must be programmed, that is, reconfigured.
An early form of PLD uses a ROM as combinational logic, with the address lines as
inputs and the data lines as outputs. The advantage of using a ROM in this way is that
any conceivable function of all possible combinations of the m inputs can be made to
appear at any of the n outputs, making this the most general-purpose combinational
logic device available for m input pins and n output pins.

Also, PROMs (programmable ROMs), EPROMs (ultraviolet-erasable PROMs)
and EEPROMs (electrically erasable PROMs) are available that can be programmed
using a standard PROM programmer without requiring specialized hardware or software.
However, there are several disadvantages:
 they are usually much slower than dedicated logic circuits,
 they cannot necessarily provide safe "covers" for asynchronous logic transitions
so the PROM's outputs may glitch as the inputs switch,
 they consume more power,
 they are often more expensive than programmable logic, especially if high speed
is required.
Since most ROMs do not have input or output registers, they cannot be used stand-alone
for sequential logic. An external TTL register was often used for sequential designs such
as state machines. Common EPROMs, for example the 2716, are still sometimes used in
this way by hobby circuit designers, who often have some lying around.
PLA
In 1970, Texas Instruments developed a mask-programmable IC based on the IBM read-
only associative memory, or ROAM. This device, the TMS2000, was programmed by
altering the metal layer during the production of the IC. The TMS2000 had up to 17
inputs and 18 outputs, with 8 JK flip-flops for memory. TI coined the term Programmable
Logic Array for this device.
A programmable logic array (PLA) has a programmable AND gate array, which links to
a programmable OR gate array, which can then be conditionally complemented to
produce an output.
PAL
PAL devices have arrays of transistor cells arranged in a "fixed-OR, programmable-
AND" plane used to implement "sum-of-products" binary logic equations for each of the
outputs in terms of the inputs and either synchronous or asynchronous feedback from the
outputs.
MMI introduced a breakthrough device in 1978, the Programmable Array Logic or PAL.
The architecture was simpler than that of Signetics FPLA because it omitted the
programmable OR array. This made the parts faster, smaller and cheaper. They were

available in 20 pin 300 mil DIP packages while the FPLAs came in 28 pin 600 mil
packages. The PAL Handbook demystified the design process. The PALASM design
software (PAL Assembler) converted the engineers' Boolean equations into the fuse
pattern required to program the part. The PAL devices were soon second-sourced by
National Semiconductor, Texas Instruments and AMD.
After MMI succeeded with the 20-pin PAL parts, AMD introduced the 24-
pin 22V10 PAL with additional features. After buying out MMI (1987), AMD spun off a
consolidated operation as Vantis, and that business was acquired by Lattice
Semiconductor in 1999.
An innovation of the PAL was the generic array logic device, or GAL, invented
by Lattice Semiconductor in 1985. This device has the same logical properties as the
PAL but can be erased and reprogrammed. The GAL is very useful in the prototyping
stage of a design, when any bugs in the logic can be corrected by reprogramming. GALs
are programmed and reprogrammed using a PAL programmer, or by using the in-circuit
programming technique on supporting chips.
Lattice GALs combine CMOS and electrically erasable (E2) floating gate technology for
a high-speed, low-power logic device.
A similar device called a PEEL (programmable electrically erasable logic) was
introduced by the International CMOS Technology (ICT) corporation.
CPLDs
PALs and GALs are available only in small sizes, equivalent to a few hundred logic
gates. For bigger logic circuits, complex PLDs, or CPLDs, can be used. These contain the
equivalent of several PALs linked by programmable interconnections, all in
one integrated circuit. CPLDs can replace thousands, or even hundreds of thousands, of
logic gates.
Some CPLDs are programmed using a PAL programmer, but this method becomes
inconvenient for devices with hundreds of pins. A second method of programming is to
solder the device to its printed circuit board, then feed it with a serial data stream from a
personal computer. The CPLD contains a circuit that decodes the data stream and
configures the CPLD to perform its specified logic function.
FPGAs

While PALs were busy developing into GALs and CPLDs (all discussed above), a
separate stream of development was happening. This type of device is based on gate
array technology and is called the field-programmable gate array (FPGA). Early
examples of FPGAs are the 82S100 array and the 82S105 sequencer by Signetics,
introduced in the late 1970s. The 82S100 was an array of AND terms; the 82S105 also
had flip-flop functions.
FPGAs use a grid of logic gates, and once stored, the data doesn't change, similar to that
of an ordinary gate array. The term "field-programmable" means the device is
programmed by the customer, not the manufacturer.
FPGAs are usually programmed after being soldered down to the circuit board, in a
manner similar to that of larger CPLDs. In most larger FPGAs the configuration is
volatile, and must be re-loaded into the device whenever power is applied or different
functionality is required. Configuration is typically stored in a
configuration PROM or EEPROM. EEPROM versions may be in-system programmable
(typically via JTAG).
4. COMMERCIAL OFF-THE-SHELF COMPONENTS:
Commercial off-the-shelf (COTS) software is pre-built software, usually from a third-
party vendor. COTS software can be purchased, leased or licensed to the general public.
COTS provides some of the following advantages:
 Applications are provided at a reduced cost.
 The application is more reliable when compared to custom built software because
its reliability is proven through the use by other organizations.
 COTS is more maintainable because the systems documentation is provided with
the application.
 The application is of higher quality because competition improves the product
quality.
 COTS can be of higher complexity because specialists within the industry have
developed the software.
 The marketplace, not a single customer, drives the development of the application.
 The delivery schedule is reduced because the basic application already exists.

Security implications of COTS
According to the United States Department of Homeland Security, software security is a
serious risk of using COTS software. If the COTS software contains severe security
vulnerabilities, it can introduce significant risk into an organization's software supply
chain. The risks are compounded when COTS software is integrated or networked with
other software products to create a new composite application or a system of systems.
The composite application can inherit risks from its COTS components.
The US Department of Homeland Security has sponsored efforts to manage supply chain
cyber security issues related to the use of COTS. However, software
industry observers such as Gartner and the SANS Institute indicate that supply chain
disruption poses a major threat. Gartner predicts that "enterprise IT supply chains will be
targeted and compromised, forcing changes in the structure of the IT marketplace and
how IT will be managed moving forward." Also, the SANS Institute published a survey
of 700 IT and security professionals in December 2012 that found that only 14% of
companies perform security reviews on every commercial application brought in house,
and over half of other companies do not perform security assessments. Instead companies
either rely on vendor reputation (25%) and legal liability agreements (14%) or they have
no policies for dealing with COTS at all and therefore have limited visibility into the
risks introduced into their software supply chain by COTS.
COTS issues in other industries
In the medical device industry, COTS is referred to as SOUP (Software of Unknown
Pedigree or Provenance), i.e. software that has not been developed with a known software
development process or methodology. In this industry, faults in software components
may become system failures in the device itself. The standard IEC 62304:2006, "Medical
device software – Software life cycle processes", outlines specific practices to ensure that
SOUP components support the safety requirements for the device being developed.
Where the software components are COTS, DHS best practices for COTS risk review
can be applied.

Lecture No: 11
Communication interface:
A communication interface can be viewed from two perspectives:
1) Device/board level communication interface (onboard communication interface)
2) Product level communication interface (external communication interface)
• An embedded product is a combination of different types of components
(chips/devices) on a PCB.
• Serial interfaces like I2C, SPI, UART and 1-Wire, and parallel bus interfaces, are
examples of onboard communication interfaces.
Onboard communication interface:
• This refers to the different communication channels/buses for interconnecting the
various circuits and peripherals within the embedded system.
• There are various onboard interfaces, including the following.
Inter-Integrated Circuit (I2C) bus:
• Synchronous, bi-directional, half-duplex, two-wire serial bus
• Developed by Philips Semiconductors in the 1980s
• The original intention of I2C was to provide an easy way of connecting
microprocessor/microcontroller systems to peripheral chips in TV sets
Serial Peripheral Interface (SPI) bus:
• Synchronous, bi-directional, full-duplex, 4-wire serial interface bus
• Introduced by Motorola
• It is a single master, multi slave system
• Requires 4 signal lines for communication:
1) Master Out Slave In (MOSI)
2) Master In Slave Out (MISO)
3) Serial Clock (SCLK)
4) Slave Select (SS)
1-Wire interface:
• Asynchronous, half-duplex communication protocol
• Developed by Dallas Semiconductor (now part of Maxim)
• Hence it is also known as the Dallas 1-Wire protocol

• It uses a single signal line (wire), called DQ, for communication and follows a
master-slave communication model
Other onboard interfaces include the parallel bus interface and UART.
External communication interface:
This is used to communicate with the external world, for example:
1) RS-232C and RS-485
2) USB:
• Wired high-speed serial bus for data communication
• A USB port can support up to 127 connections, including slave peripherals/devices
and other hosts
• It follows a star topology
• USB transmits data in packet form, and each data packet has a standard format
• The USB host contains a host controller, which is responsible for controlling the
data communications
IEEE 1394 (FireWire):
• Wired, isochronous, high-speed serial communication bus
• Also known as the High Performance Serial Bus (HPSB)
• Research was started by Apple Inc. in 1985
• Allows peer-to-peer and point-to-multipoint communication; devices are connected
in a tree topology
Other external interfaces include Infrared, Wi-Fi, Bluetooth, ZigBee and GPRS
(General Packet Radio Service).

Lecture No: 13
Firmware For Embedded Systems
Writing code for embedded systems is not the same as writing user code for a PC
(personal computer); it is more like writing a driver for a PC. If no special embedded
operating system (OS) like TinyOS is used, the firmware engineer has to take care of all
the basic things, such as setting up hardware registers, himself. The code should be
minimalistic, efficient, real-time, stable, easy to read, etc.
• Your system is embedded, meaning that it is typically connected to real hardware like
motors, sensors, communication buses, etc. This means that software errors can have
dramatic and expensive results. Pins can get shorted in software, leading to high currents
and damage to the electronics. Batteries that should be monitored can be under- or
overcharged and explode. Sensors can get destroyed when not operated correctly.
Gearboxes of motors can break without proper control. So test your system first in a safe
environment, using power supplies with a selectable current limit, and make sure that the
pins of your microcontroller are correctly configured as inputs and outputs.
• Never ever do busy waiting with while loops. Always use timers! Why: busy waiting
keeps the processor in a busy state, unable to do anything useful during that time.
Remember: embedded systems have to run in real time. There is typically no scheduler,
as there is when writing software for an operating system, that automatically switches
between different processes.
• Make yourself familiar with design strategies like interrupts and DMA (direct memory
access). Interrupts will help you avoid actively waiting for a condition, like a change of
an input signal, and will instead allow you to react only when the change happens. DMA
will free your processor from the dumb work of just waiting to be ready to transfer some
data from one memory to another.
• Never ever put a lot of code into an interrupt service routine. These callback functions
are only there as signals and should be as short as possible. So if you need to do a lot of
work after an interrupt: set a flag and get the work done in the main loop.
• Be aware of the fact that embedded systems have to face unknown time conditions. E.g.
you never know when and in which precise order a user will use a switch or push button
or you will receive some data through a UART interface. So be prepared for this.

B Anil kumar & P Kalyan Chakravarthi,ECE Department, GMRIT, RAJAM


Embedded Systems Unit1
Lecture No: 13
• To make your code readable and reusable take care of coding standards and be
consistent.
• Separate your code into different files and functions in a meaningful manner. Do not
write spaghetti code. Very good C-code looks like C++. Make use of structs where
useful. Do not make extensive use of functions where it is not necessary. Each call
requires the current variables to be copied on the stack and after the call to be copied
back. This is time consuming and can seriously decrease the speed of your code.
• Use pointers and buffers to generate efficient code. Avoid copying data.
• Make use of macros to design efficient and readable code. You can even implement
simple functions with macros since macros take parameters.
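A few illustrative parameterized macros (the names are our own, not from any particular vendor header); every argument is parenthesized so expressions expand safely.

```c
/* Parameterized macros behave like tiny inline functions with no call
   overhead.  Parenthesize arguments so SET_BIT(reg, i + 1) is safe. */
#define BIT(n)            (1u << (n))
#define SET_BIT(reg, n)   ((reg) |=  BIT(n))
#define CLEAR_BIT(reg, n) ((reg) &= ~BIT(n))
#define MAX(a, b)         (((a) > (b)) ? (a) : (b))
```

Note that unlike functions, macro arguments are evaluated textually, so side effects like `MAX(i++, j)` should be avoided.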
• Document your code. Write the documentation in a way that lets you use automatic
extraction tools like Doxygen.
• It is always a good idea to use a version control system such as Subversion (SVN) to
keep track of the different versions of your code.
• Microcontrollers and their development environments typically come with debugging
functionalities. Use them!
• Where performance matters, consider using some highly efficient assembler code.
The other system components refer to the circuits/components/ICs which are necessary
for the proper functioning of the embedded system.
Clock oscillator circuit and clocking unit(s)
After the power supply, the clock is the next important unit of a system. A processor
needs a clock oscillator circuit. The clock controls the various clocking requirements of
the CPU, of the system timers and of the CPU machine cycles. The machine cycles are for
(i) fetching the codes and data from memory and then decoding and executing them at the
processor, and
(ii) transferring the results to memory.
The clock controls the time for executing an instruction. The clock circuit uses either a
crystal (external to the processor), a ceramic resonator (internally associated with the
processor) or an external oscillator IC attached to the processor. (a) The crystal resonator
gives the highest stability in frequency with temperature and drift in the circuit. The
crystal, in association with an appropriate resistance in parallel and a pair of series

capacitances at both pins, resonates at a frequency that is either equal to or double the
crystal frequency. Further, the crystal is kept as near as feasible to two pins of the
processor. (b) The internal ceramic resonator, if available in a processor, saves the use
of the external crystal and gives a reasonable though not very highly stable frequency. [A
typical drift of a ceramic resonator is about ten minutes per month, compared to a typical
drift of 1 to 5 minutes per month for a crystal.] (c) The external IC-based clock
oscillator has a significantly higher power dissipation compared to the internal
processor-resonator. However, it provides a higher driving capability, which might be
needed when the various circuits of the embedded system are driven concurrently.
For example, a multiprocessor system needs a clock circuit which gives high driving
capability and enables control of all processors concurrently.
Reset circuit, power-up reset and watchdog timer reset
Reset means that the processor starts processing instructions from a starting address.
That address is the one set by default in the processor program counter (or instruction
pointer and code segment registers in x86 processors) on power-up. From that address in
memory, the fetching of program instructions starts following the reset of the processor.
[In certain processors, for example the 68HC11 and HC12, there are two start-up addresses:
one as per the power-up reset vector, and the other as per the reset vector after a Reset
instruction or after a time-out (for example from a watchdog timer).]
The reset circuit activates for a fixed period (a few clock cycles) and then deactivates.
The processor circuit keeps the reset pin active and then deactivates it to let the
program proceed from a default beginning address. The reset pin or the internal reset
signal, if connected to the other units (for example, I/O interface or serial interface)
in the system, is activated again by the processor; it becomes an outgoing pin to enforce
the reset state in the other units of the system. On deactivation of the reset that
follows processor activation, the program executes from the start-up address.
Reset can be activated by one of the following:
1. An external reset circuit that activates on power-up, on a switch-on reset of the
system, or on detection of a low voltage (for example < 4.5 V when 5 V is required on the
system supply rails). This circuit output connects to a pin called the reset pin of the

processor. This circuit may be a simple RC circuit, an external IC circuit or a
custom-built IC. Examples of such ICs are the MAX 6314 and Motorola MC 34064.
2. By (a) software instruction or (b) time-out by a programmed timer known as watchdog
timer (or on an internal signal called COP in 68HC11 and 68HC12 families) or (c) a
clock monitor detecting a slowdown below certain threshold frequencies due to a fault.
The watchdog timer is a timing device that resets the system after a predefined timeout.
This timeout is usually configured, and the watchdog timer is activated within the first
few clock cycles after power-up. It has a number of applications. In many embedded
systems, reset by a watchdog timer is essential because it helps in rescuing the system
if a fault develops and the program gets stuck.
On restart, the system can function normally. Most microcontrollers have on-chip
watchdog timers.
Consider a system controlling the temperature. Assume that when the program starts
executing, the sensor inputs work all right. However, before the desired temperature is
achieved, the sensor circuit develops some fault. The controller will continue delivering
the current nonstop if the system is not reset. Consider another example of a system for
controlling a robot. Assume that the interfacing motor control circuit in the robot arm
develops a fault during the run. In such cases, the robot arm may continue to move unless
there is watchdog timer control. Otherwise, the robot could break its own arm!
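The kick/expire logic can be mimicked in plain C to show the idea; the timeout value and function names below are invented for illustration, since a real part refreshes the watchdog through its own vendor-specific register (consult the datasheet).

```c
#include <stdbool.h>
#include <stdint.h>

/* Simulated watchdog.  On real hardware, wdt_kick() would write the
   magic refresh value to the watchdog register instead. */
#define WDT_TIMEOUT_TICKS 100u

static uint32_t wdt_counter = 0;

void wdt_kick(void)
{
    wdt_counter = 0;                    /* program is alive: refresh */
}

/* Called once per tick; returns true when the watchdog would reset the
   system because the program stopped kicking it (e.g. it got stuck). */
bool wdt_tick(void)
{
    return ++wdt_counter > WDT_TIMEOUT_TICKS;
}
```

A healthy main loop calls `wdt_kick()` each iteration; if the program hangs, the kicks stop, the counter overruns, and the system is reset to a known-good state.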



Embedded Systems Unit1
Lecture No: 14
PCB
Nowadays almost every electronic device is small in size due to developments in
fabrication technology. All the components are placed on a single board, the "PRINTED
CIRCUIT BOARD" or "PRINTED WIRING BOARD", called a 'PCB' or 'PWB' for short. In the
manufacturing of any system we first work out the logic, decide which components are to
be used, and define the interconnections; after that the real work starts, i.e. the
arrangement of those components with the proper interconnections. This is done by
creating a schematic design, according to which the PCB is fabricated. Without a proper
PCB design the system won't work correctly; that is why the PCB is called the backbone of
an embedded system. The PCB acts as a platform for the components used for the working of
the embedded system and also for the testing of the embedded firmware.
A PCB is mainly used because it does not have discrete wires; instead it has printed
interconnections called 'PCB TRACKS'. The PCB has the component layout attached to an
insulator sheet. This insulator sheet is of the 'GLASS EPOXY' or 'PERTINAX' type.
TYPES OF PCB
Different types of PCBs exist, depending upon the component placement and PCB track
routes.
Different methods are used for the fabrication of PCBs.



Embedded Systems Unit1
Lecture No: 15
PASSIVE ELEMENTS:
As we all know, both active and passive elements exist in electronics. PASSIVE elements
are elements which consume energy, whereas ACTIVE elements produce energy; i.e. VOLTAGE
SOURCES and CURRENT SOURCES are active elements, whereas RESISTORS, CAPACITORS and
DIODES are passive elements.
These passive elements also play a vital role in the working of embedded systems. They
are the co-workers of the various chips in the embedded system hardware. They are very
essential for the proper functioning of the embedded system.
These passive components serve various purposes; for example, a capacitor charges in
order to provide the voltage for the working of digital chips. Another important use of
passive components is providing a regulated, ripple-free supply voltage to the system by
using a regulator IC and spike-suppressor filter capacitors.
For converting the AC supply into a ripple-free supply we use rectifiers along with
filters. In the rectifiers we use passive components like resistors and diodes. The
process of half-wave rectification is given below; full-wave rectification is done in a
similar way, and its output is given to the filter to get pure, ripple-free DC.



Embedded Systems Unit 2
Lecture No: 17
Introduction to Characteristics of Embedded Systems
A system is a way of working, organizing or doing one or many tasks according to a
fixed plan, program, or set of rules. A system is also an arrangement in which all its units
assemble and work together according to the plan or program.
EMBEDDED SYSTEM
An embedded system is one that has computer hardware with software embedded in it as
one of its most important components. It is a dedicated computer-based system for an
application or product. It may be either an independent system or a part of a larger
system. As its software usually embeds in ROM (Read Only Memory), it does not need
secondary memory as in a computer.
EMBEDDED SYSTEM HAS THREE MAIN COMPONENTS:
1. It has hardware
2. It has main application software. The application software may perform concurrently
the series of tasks or multiple tasks.
3. It has a real-time operating system (RTOS) that supervises the application software
and provides a mechanism to let the processor run a process as per the schedule and do
the context switch between the various processes (tasks). The RTOS defines the way the
system works. It organizes access to a resource in sequence of the series of tasks of
the system. It schedules their working and execution by following a plan to control the
latencies and to meet the deadlines. [Latency refers to the waiting period between the
instance at which the need for a task arises and the running of the task's code.] It
sets the rules during the execution of the application software. A small-scale embedded
system may not need an RTOS.
CHARACTERISTICS OF EMBEDDED SYSTEM
1 User interface
Embedded systems range from no user interface at all — dedicated only to one task — to
complex graphical user interfaces that resemble modern computer desktop operating
systems. Simple embedded devices use buttons, LEDs, and graphic or character LCDs (for
example the popular HD44780 LCD) with a simple menu system.
2 Processors in embedded systems
2.1 READY MADE COMPUTER BOARDS

2.2 ASIC AND FPGA SOLUTIONS
A common configuration for very-high-volume embedded systems is the system on a chip
(SoC), which contains a complete system consisting of multiple processors, multipliers,
caches and interfaces on a single chip. SoCs can be implemented as an
application-specific integrated circuit (ASIC) or using a field-programmable gate
array (FPGA).
3 PERIPHERALS
4 TOOLS
5 DEBUGGING
5.1 TRACING
Real-time operating systems (RTOS) often support tracing of operating system events. A
graphical view is presented by a host PC tool, based on a recording of the system
behavior. The trace recording can be performed in software, by the RTOS, or by special
tracing hardware. RTOS tracing allows developers to understand timing and performance
issues of the software system and gives a good understanding of the high-level system
behavior. Commercial tools like RTXC Quadros or IAR Systems exist.
6 RELIABILITY
7 HIGH VS LOW VOLUME
For high volume systems such as portable music players or mobile phones, minimizing
cost is usually the primary design consideration. Engineers typically select hardware that
is just “good enough” to implement the necessary functions.
For low-volume or prototype embedded systems, general purpose computers may be
adapted by limiting the programs or by replacing the operating system with a real-time
operating system.



Embedded Systems Unit 2
Lecture No: 18
QUALITY ATTRIBUTES OF EMBEDDED SYSTEMS:
Quality attributes are the non functional requirements that need to be documented
properly in any system design. If the quality attributes are more concrete and measurable
it will give a positive impact on the system development process and the product.
The various quality attributes used in embedded system development are classified as
1. Operational quality attributes
2. Non operational quality attributes
1. OPERATIONAL QUALITY ATTRIBUTES:
The operational quality attributes represent the relevant quality attributes related to
the embedded system when it is in operational mode or 'online mode'. The important
operational quality attributes are
1. Response
2. Throughput
3. Reliability
4. Maintainability
5. Security
6. Safety
1. THROUGHPUT:
Throughput deals with the efficiency of a system. It can be defined as the rate of
production or operation of a defined process over a stated period of time. The rates can
be expressed in terms of units of product, batches produced, or any other meaningful
measurement. Throughput is generally measured in terms of a benchmark; a benchmark is a
reference point by which something can be measured.
2. RESPONSE:
Response is a measure of the quickness of a system. It gives an idea of how fast the
system is tracking changes in its input variables. Most embedded systems demand a fast
response in real time, but it is not necessary for all embedded systems to work in real
time; for some systems there is no specific deadline within which they must respond.



Embedded Systems Unit 2
Lecture No: 19
3. RELIABILITY:
Reliability is a measure of how much you can rely upon the proper functioning of the
system, or what the system's susceptibility to failures is. Mean time between failures
(MTBF) and mean time to repair (MTTR) are the terms used in defining system reliability.
MTBF gives the frequency of failures in hours/weeks/months. MTTR specifies how long the
system is allowed to be out of order following a failure; for an embedded system with
critical application needs, it should be of the order of minutes.
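As a small worked sketch, the steady-state availability implied by these two figures is MTBF / (MTBF + MTTR), i.e. the fraction of time the system is up:

```c
/* Steady-state availability from MTBF and MTTR, both expressed in the
   same unit (e.g. hours).  Returns a fraction in (0, 1). */
double availability(double mtbf, double mttr)
{
    return mtbf / (mtbf + mttr);
}
```

For example, an MTBF of 999 hours with an MTTR of 1 hour gives an availability of 0.999, i.e. "three nines".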
4. MAINTAINABILITY:
Maintainability deals with support and maintenance to the end user or client in case of
technical issues and product failures, or on the basis of a routine system checkup. A
more reliable system means a system with fewer corrective maintenance requirements and
vice versa.
Maintainability is of two types: 1. scheduled or periodic maintenance, and
2. maintenance to handle unexpected failures.
5. SECURITY:
Confidentiality, integrity and availability are the three measures of information
security. Confidentiality deals with the protection of data and applications from
unauthorized disclosure. Integrity deals with the protection of data and applications
from unauthorized modification. Availability deals with ensuring that the data and
application remain available to authorized users. An example of a product where security
matters is a personal digital assistant (PDA).
6. SAFETY:
Safety deals with the possible damage that can happen to the operators, public and
environment due to the breakdown of an embedded system or due to the emission of
radioactive or hazardous materials from the embedded products. The breakdown of an
embedded system may occur due to a hardware failure or a firmware failure.



Embedded Systems Unit 2
Lecture No: 21
NON OPERATIONAL QUALITY ATTRIBUTES:
The quality attributes that need to be addressed for the product 'not' on the basis of
operational aspects are grouped under this category. The important non-operational
quality attributes are
1. Testability & Debuggability
2. Evolvability
3. Portability
4. Time to prototype and market
5. Per unit and total cost
TESTABILITY & DEBUGGABILITY:
Testability deals with how easily one can test the design and application, and by which
means one can test it. In an embedded system it is applicable to both the embedded
hardware and the firmware. Embedded hardware testing ensures that the peripherals and the
total hardware function in the desired manner, whereas firmware testing ensures that the
firmware functions in the expected way.
Debuggability is a means of debugging the product as such, for figuring out the probable
source that creates unexpected behavior in the total system. Debuggability has two
aspects in the embedded system development context, namely hardware-level debugging and
firmware-level debugging. Hardware debugging is used for figuring out the issues created
by hardware problems, whereas firmware debugging is employed to figure out the probable
errors that appear as a result of flaws in the firmware.
PORTABILITY:
Portability is a measure of system independence. An embedded product is said to be
portable if the product is capable of functioning as such in various environments, target
processors/controllers and embedded operating systems. A standard embedded product
should always be flexible and portable. In embedded products the term porting represents
the migration of the embedded firmware.



Embedded Systems Unit 2
Lecture No: 22
Evolvability:
Evolvability is a term closely related to biology, where it refers to non-heritable
variation. For an embedded system, the quality attribute 'evolvability' refers to the
ease with which the embedded product (including firmware and hardware) can be modified
to take advantage of new firmware or hardware technologies.
TIME-TO-PROTOTYPE AND MARKET
Time-to-market is the time elapsed between the conceptualization of a product and the
time at which the product is ready for selling or use. Commercial embedded products are
highly competitive, so bringing the product to market at the time it is really needed is
key to its commercial success. If you come up with a new design and it takes a long time
to develop and market it, a competitor's product may take advantage with their product.
If you start your design by making use of a new technology and it takes a long time to
develop and market the product, by the time you market it the technology might have been
superseded by a newer one.
PER UNIT COST AND REVENUE:
Product prototyping helps a lot in reducing the time-to-market.
Cost is a factor which is closely monitored by both the end user and the product
manufacturer. It is a highly sensitive factor for commercial products. If the cost of the
product is not at a nominal rate, the product may fail in the market. A proper market
study and cost-benefit analysis should be carried out before taking a decision. The
ultimate aim of the product is to generate marginal profit. Every embedded product has
its own product life cycle, which starts with the design and development phase. Product
idea generation, prototyping, road-map definition, and actual product design and
development are the activities carried out during this phase.
During this phase there are no RETURNS, only INVESTMENTS...


The curve below shows the PRODUCT LIFE CYCLE.


Once the product is ready to sell, it is introduced into the market; this is the
PRODUCT INTRODUCTION STAGE. The initial sales level is low, with less competition, and
then sales increase with time. In the GROWTH PHASE, sales volume, market share and
revenue increase. The product retirement/decline phase starts with a drop in sales
volume, market share and revenue.



Embedded Systems Unit 2
Lecture No: 23
Comparison of QUALITY ATTRIBUTES OF EMBEDDED SYSTEMS:
Quality attributes are the non functional requirements that need to be documented
properly in any system design. If the quality attributes are more concrete and measurable
it will give a positive impact on the system development process and the product.
The various quality attributes used in embedded system development are classified as
1. Operational quality attributes
2. Non operational quality attributes
1. OPERATIONAL QUALITY ATTRIBUTES:
The operational quality attributes represent the relevant quality attributes related to
the embedded system when it is in operational mode or 'online mode'. The important
operational quality attributes are
1. Response
2. Throughput
3. Reliability
4. Maintainability
5. Security
6. Safety
NON OPERATIONAL QUALITY ATTRIBUTES:
The quality attributes that need to be addressed for the product 'not' on the basis of
operational aspects are grouped under this category. The important non-operational
quality attributes are
1. Testability & Debuggability
2. Evolvability
3. Portability
4. Time to prototype and market
5. Per unit and total cost



Embedded Systems Unit 2
Lecture No: 25
Application-Specific Embedded Systems
Embedded systems are application and domain specific, meaning they are specially built
for certain applications in certain domains like telecom, automotive etc. In general-
purpose computing it is possible to replace a system with another system that closely
matches the existing system, whereas this is not the case for embedded systems; they are
highly specialized in functioning and dedicated to a specific application. Hence it is
not possible to replace an embedded system developed for a specific application in a
specific domain with another embedded system designed for some other application in some
other domain.



Embedded Systems Unit 2
Lecture No: 26
Difference Between Real time and Non Real time Applications :
Definition of Real-Time Systems
An operation within a larger dynamic system is called a
real-time operation if the combined reaction- and operation-time of a task operating on
current events or input, is no longer than the maximum delay allowed, in view of
circumstances outside the operation. The task must also occur before the system to be
controlled becomes unstable. A real-time operation is not necessarily fast, as slow
systems can allow slow real-time operations. This applies for all types of dynamically
changing systems. The polar opposite of a real-time operation is a batch job with
interactive timesharing falling somewhere in between the two extremes. Alternately, a
system is said to be hard real-time if the correctness of an operation depends not only
upon the logical correctness of the operation but also upon the time at which it is
performed. An operation performed after the deadline is, by definition, incorrect, and
usually has no value. In a soft real-time system the value of an operation declines steadily
after the deadline expires.
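The hard/soft distinction above can be sketched as toy "value of a result delivered at time t" functions; the linear decline for the soft case is just one illustrative choice, not a standard formula.

```c
/* Hard real-time: the result is worthless the instant the deadline
   passes.  Soft real-time: its value declines over a grace period. */
double hard_rt_value(double t, double deadline)
{
    return (t <= deadline) ? 1.0 : 0.0;
}

double soft_rt_value(double t, double deadline, double grace)
{
    if (t <= deadline)          return 1.0;
    if (t >= deadline + grace)  return 0.0;
    return 1.0 - (t - deadline) / grace;   /* steady decline afterwards */
}
```

A batch job would correspond to a value curve that stays flat almost indefinitely, which is why it sits at the opposite end of the spectrum.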
Common Architecture of Real-Time Embedded Systems
Unlike general-purpose computers, a generic architecture cannot be defined for real-time
embedded systems. There are as many architectures as there are manufacturers.
Generalizing them would severely dilute the sole purpose of embodiment and
specialization. However, for the sake of our understanding we can discuss some common
forms of systems at the block-diagram level. Any system can be hierarchically divided
into subsystems. Each subsystem may be further segregated into smaller systems, and each
of these smaller systems may consist of some discrete parts. This is called the hardware
configuration. Some of these parts may be
programmable and therefore must have some place to keep these programs. In RTES the
on-chip or on-board non-volatile memory does keep these programs. These programs are
the part of the Real Time Operating System (RTOS) and continually run as long as the
gadget is receiving power. A part of the RTOS also executes itself in the stand-by mode
while taking very little power from the battery. This is also called the sleep mode of the
system. Both the hardware and software coexist in a coherent manner. Tasks which can
be carried out by either software or hardware affect the design process of the system. For
example a multiplication action may be done by hardware or it can be done by software
by repeated additions. Hardware based multiplication improves the speed at the cost of

increased complexity of the arithmetic logic unit (ALU) of the embedded processor. On
the other hand software based multiplication is slower but the ALU is simpler to design.
These are some of the conflicting requirements which need to be resolved based on the
constraints imposed by the overall system. This is known as Hardware-Software Co-design,
or simply Co-design.
Non-real time, or NRT, is a term used to describe a process or event that does not occur
immediately. For example, a forum can be considered non-real time, as responses often do
not occur immediately and can sometimes take hours or days.



Embedded Systems Unit 2
Lecture No: 27
WASHING MACHINE
An embedded system is a computer system with a dedicated function within a larger
mechanical or electrical system, often with real-time computing constraints. It is
embedded as part of a complete device often including hardware and mechanical parts.
By contrast, a general-purpose computer, such as a personal computer (PC), is designed to be flexible
and to meet a wide range of end-user needs. Embedded systems control many devices in
common use today.
Embedded systems contain processing cores that are either microcontrollers or digital
signal processors (DSP).
A processor is an important unit in the embedded system hardware. It is the heart of the
embedded system. Let us examine the following two examples.
It is an automatic clothes-washing system. The important hardware parts include its status
display panel, the switches and dials for user-defined programming, a motor to rotate or
spin, its power supply and control unit, an inner water-level sensor, a solenoid valve for
letting water in and another valve for letting water drain out. These parts organize to
wash clothes automatically according to a program preset by a user. The system-program
is to wash the dirty clothes placed in a tank, which rotates or spins in pre-programmed
steps and stages. It follows a set of rules. Some of these rules are as follows: (i) Follow
the steps strictly in the following sequence.
Step I: Wash by spinning the motor according to a programmed period.
Step II: Rinse in fresh water after draining out the dirty water, and rinse a second time if
the system is not programmed in water-saving mode.
Step III: After draining out the water completely, spin fast the motor for a programmed
period for drying by centrifuging out water from the clothes.
Step IV: Show the wash-over status by a blinking display.
I. Sound the alarm for a minute to signal that the wash cycle is complete
II. At each step, display the process stage of the system.
III. In case of an interruption, execute only the remaining part of the program, starting
from the position when the process was interrupted. There can be no repetition from Step
I unless the user resets the system by inserting another set of clothes and resets the
program.
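The step sequence above can be sketched as a small state machine; the state names and the transition function are our own illustration of the rules, with the second rinse skipped when the user programmed water-saving mode.

```c
#include <stdbool.h>

/* Steps I-IV of the wash program as states; DONE corresponds to the
   blinking wash-over display and alarm. */
typedef enum { WASH, RINSE1, RINSE2, SPIN_DRY, DONE } wash_state_t;

wash_state_t next_state(wash_state_t s, bool water_saving)
{
    switch (s) {
    case WASH:     return RINSE1;                           /* Step I  -> II  */
    case RINSE1:   return water_saving ? SPIN_DRY : RINSE2; /* skip 2nd rinse */
    case RINSE2:   return SPIN_DRY;                         /* Step II -> III */
    case SPIN_DRY: return DONE;                             /* Step III -> IV */
    default:       return DONE;
    }
}
```

Resuming after an interruption then simply means restarting the main loop from the stored current state rather than from WASH.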

Now consider a watch. It is a time-display system. Its parts are its hardware: needles
and battery, with the beautiful dial, chassis and strap. These parts organize to show the real
time every second and continuously update the time every second. The system-program
updates the display using three needles after each second. It follows a set of rules. Some
of these rules are as follows:
(i) All needles move clockwise only.
(ii) A thin and long needle rotates every second such that it returns to same position
after a minute.
(iii) A long needle rotates every minute such that it returns to same position after an
hour.
(iv) A short needle rotates every hour such that it returns to same position after
twelve hours.
(v) All three needles return to the same inclinations after twelve hours each day.
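Rules (i)-(v) can be captured in a tiny per-second update function (an illustrative sketch, not the watch's actual firmware):

```c
#include <stdint.h>

/* One update per second.  Seconds and minutes wrap at 60, hours at 12,
   so all three needles realign every twelve hours (rule v). */
typedef struct { uint8_t h, m, s; } watch_t;

void tick(watch_t *w)
{
    if (++w->s == 60) {
        w->s = 0;                        /* rule (ii): full turn per minute */
        if (++w->m == 60) {
            w->m = 0;                    /* rule (iii): full turn per hour  */
            if (++w->h == 12)
                w->h = 0;                /* rule (iv): twelve-hour cycle    */
        }
    }
}
```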



Embedded Systems Unit 2
Lecture No: 29
Automotive -Domain Specific Example of Embedded Systems

The major application domains of embedded systems are consumer, industrial, automotive,
telecom, etc., of which the telecom and automotive industries hold a big market share.
Inner workings of Automotive embedded systems :
Automotive embedded systems are those where electronics take control over the mechanical
systems. The presence of automotive embedded systems in a vehicle varies from simple
mirror and wiper controls to complex airbag controllers and antilock brake systems.
Automotive embedded systems are normally built around microcontrollers or DSPs, or a
hybrid of the two, and are generally known as electronic control units (ECUs). The number
of embedded controllers in an ordinary vehicle varies from 20 to 40, whereas a luxury
vehicle like the Mercedes S-class or BMW 7-series may contain 75 to 100 embedded
controllers. Government regulations on fuel economy, environmental factors and emission
standards, and increasing customer demands on safety, comfort and infotainment, force
automotive manufacturers to opt for sophisticated embedded control units within the
vehicle. The first embedded system used in an automotive application was the
microprocessor-based fuel injection system introduced by the Volkswagen 1600 in 1968.
The various types of electronic control units used in the automotive embedded industry
can be broadly classified into two: high-speed embedded control units and low-speed
embedded control units.



Embedded Systems Unit 2
Lecture No: 30
High-speed electronic control units: High-speed electronic control units are deployed in
critical control units requiring fast response. They include fuel injection systems,
antilock brake systems, engine control, electronic throttle, steering controls, the
transmission control unit and the central control unit.
Low-speed ECUs:
Low-speed electronic control units are deployed in applications where response time is
not so critical. They are generally built around low-cost microprocessors/
microcontrollers and digital signal processors (DSPs). Audio controllers, passenger and
driver door locks, door glass controls, wiper control, mirror control, seat control
systems, headlamp and tail-lamp controls, the sunroof control unit, etc. are examples of
LECUs.
Automotive communication buses:
Automotive applications make use of serial buses for communication, which greatly
reduces the amount of wiring required inside a vehicle. The following section will give
you an overview of the different types of serial interface buses deployed in automotive
embedded applications.
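The best known of these automotive serial buses is the Controller Area Network (CAN), a multi-drop bus on which a dominant bit (0) electrically overrides a recessive bit (1), so the node transmitting the numerically lowest message identifier wins arbitration. As an illustrative sketch only (the node identifiers below are invented), this arbitration rule can be simulated:

```python
# Illustrative sketch: CAN-style bitwise arbitration (hypothetical node IDs).
# A dominant bit (0) on the bus overrides a recessive bit (1), so the node
# with the numerically lowest 11-bit identifier wins the bus.

def arbitrate(node_ids, id_bits=11):
    """Return the identifier that wins a CAN-style arbitration round."""
    contenders = list(node_ids)
    for bit in range(id_bits - 1, -1, -1):                       # MSB sent first
        bus_level = min((nid >> bit) & 1 for nid in contenders)  # wired-AND
        # Nodes that sent recessive (1) while the bus shows dominant (0)
        # lose arbitration and stop transmitting.
        contenders = [nid for nid in contenders if (nid >> bit) & 1 == bus_level]
    return contenders[0]

if __name__ == "__main__":
    print(arbitrate([0x65A, 0x120, 0x3FF]))  # -> 288 (0x120): lowest ID wins
```

This non-destructive arbitration is why CAN suits the multi-drop wiring described above: losing nodes simply retry later, and no bandwidth is wasted on collisions.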



Embedded Systems Unit 3
Lecture No: 31
Introduction to Embedded hardware
The hardware of an embedded system is built around analog electronic components and
circuits, digital electronic components and circuits, and integrated circuits. A printed
circuit board (PCB) provides a platform for placing all the necessary hardware components for
building an embedded product.
For a commercial product you cannot go for a breadboard as an interconnection platform,
since it makes the product bulky and breadboard connections are highly unstable;
the printed circuit board is the backbone of embedded hardware. This chapter is organized in such a way as to refresh the
student's knowledge of various analog and digital electronic components and circuits,
familiarize them with integrated circuit design, and provide the fundamentals of
printed circuit boards and their design and development using electronic design automation (EDA) tools.



Embedded Systems Unit 3
Lecture No: 33
ANALOG ELECTRONIC COMPONENTS & DIGITAL ELECTRONIC
COMPONENTS

Digital electronics deal with digital or discrete signals. Microprocessors,
microcontrollers and systems on chip (SoCs) work on digital principles. They interact
with the rest of the world through digital I/O interfaces and process digital data.
Embedded systems employ various digital electronic circuits for 'glue logic'
implementation. 'Glue logic' is the custom digital electronic circuitry required to achieve a
compatible interface between two different integrated circuit chips. Address decoders,
latches, encoders/decoders, etc. are examples of glue logic circuits. Transistor Transistor
Logic (TTL), Complementary Metal Oxide Semiconductor (CMOS) logic, etc. are some of
the standards describing the electrical characteristics of digital signals in a digital system.
The following sections give an overview of the various digital I/O interface standards and
the digital circuits/components used in embedded system development.

Open Collector and Tri-State Output:

Open collector is an I/O interface standard in digital system design. The term 'open
collector' is commonly used in conjunction with the output of an Integrated Circuit (IC)
chip. It facilitates the interfacing of the IC output to other systems which operate at different
voltage levels. In the open collector configuration, the output line from an IC circuit is
connected to the base of an NPN transistor. The collector of the transistor is left
unconnected (floating) and the emitter is internally connected to the ground signal of the IC.
Figure 8.1 illustrates an open collector output configuration.

For the output pin to function properly, the output pin should be pulled, to the desired
voltage for the output device, through a pull-up resistor. The output signal of the IC is fed to

the base of the open collector transistor. When the base drive to the transistor is OFF, the
collector is in the open state and the output pin floats. This state is also known as the 'high
impedance' state; here the output is neither driven to logic 'high' nor logic 'low'. If a pull-up
resistor is connected to the output pin, when the base drive is ON, the output pin becomes
logic 0 (0 V), and when the base drive is OFF, the output will be at logic high
(voltage = Vcc). The advantages of open collector output in embedded system design are
listed below.
1. It facilitates the interfacing of devices, operating at a voltage different from the IC, with
the IC. Thereby, it eliminates the need for additional interface circuits for connecting
devices at different voltage levels.
2. An open collector configuration supports multi-drop connection, i.e., connecting more
than one open collector output to a single line. It is a common requirement in modern
embedded systems supporting communication interfaces like I2C, 1-Wire, etc. Please
refer to the various interfaces described in Chapter 2 under the section 'Onboard
Communication Interfaces'.
3. It is easy to build 'Wired AND' and 'Wired OR' configurations using open collector output
lines.
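The 'Wired AND' behaviour can be sketched in a few lines: with a shared line and a pull-up resistor, the line reads high only when no open collector output pulls it low. The following Python model is illustrative only (the device states are assumptions, not any real bus):

```python
# Illustrative model of a shared open collector line with a pull-up resistor.
# Each device either pulls the line low (True) or leaves it floating (False).

def line_level(pulling_low):
    """Wired-AND: the line reads logic 1 only if no device pulls it low."""
    return 0 if any(pulling_low) else 1

print(line_level([False, False, False]))  # all outputs floating -> pulled up to 1
print(line_level([False, True, False]))   # any one device pulling low -> 0
```

This is exactly the property that buses like I2C rely on: any device can assert the line low without fighting another device that is driving it high.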
The output of a standard logic device has two states, namely 'Logic 0 (LOW)' and 'Logic
1 (HIGH)', and the output will be at any one of these states at a given point of time,
whereas tri-state devices have three states for the output, namely 'Logic 0 (LOW)',
'Logic 1 (HIGH)' and 'High Impedance (FLOAT)'. A tri-state logic device contains a
device activation line called 'Device Enable'. When the 'Device Enable' line is activated
(set at 'Logic 1' for an active 'HIGH' enable input and at 'Logic 0' for an active 'LOW'
enable input), the device acts like a normal logic device and the output will be in one
of the logic conditions, 'Logic 0 (LOW)' or 'Logic 1 (HIGH)'. When the 'Device Enable'
line is de-activated (set at 'Logic 0' for an active 'HIGH' enable input and at 'Logic 1' for
an active 'LOW' enable input), the output of the logic device enters a high impedance
state and the device is said to be in the floating state. The tri-stated output condition
produces the effect of 'removing' the device from a circuit and allows more than one
device to share a common bus. When multiple tri-stated devices share a common bus,
only one device is allowed to drive the bus (drive it to either 'Logic 0' or 'Logic 1')
at any given point of time, and the rest of the devices should be in the tri-stated condition.
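The bus-sharing rule described above can be modelled in a short sketch (the device tuples are illustrative, not a real electrical model): the bus takes the level of the single enabled device, floats if none is enabled, and is in contention if more than one drives it.

```python
# Illustrative model of tri-state outputs sharing a common bus: at most one
# device may be enabled (driving); all others must float (high impedance).

def bus_value(devices):
    """devices: list of (enabled, level). Return the driven level or 'Z'."""
    drivers = [level for enabled, level in devices if enabled]
    if len(drivers) > 1:
        raise ValueError("bus contention: more than one device driving the bus")
    return drivers[0] if drivers else "Z"   # 'Z' denotes the floating bus

print(bus_value([(False, 1), (True, 0), (False, 1)]))  # -> 0
print(bus_value([(False, 1), (False, 0)]))             # -> Z
```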



Embedded Systems Unit 3
Lecture No: 34
I/O TYPES AND EXAMPLES

Real time clock and timers:

A timer circuit suitably configured is the system clock, also called the real-time clock (RTC).
An RTC is used by schedulers and for real-time programming. An RTC is designed as
follows: assume a processor generates a clock output every 0.5 µs. When a system timer
is configured by a software instruction to issue a timeout after 200 inputs from the
processor clock outputs, then there are 10,000 interrupts (ticks) each second. The RTC
ticking rate is then 10 kHz and it interrupts every 100 µs. The RTC is also used to obtain
software-controlled delays and time-outs.
Uses and applications of a timer device:
A timer is a device which counts the input at regular time intervals using clock pulses at
its input. The count increments on each pulse and is stored in a register, called the count register.
The counts multiplied by the interval give the time.
Uses of a timer device:
(i) Real Time Clock ticks (system heart beats). [A real time clock is a clock which, once
the system starts, does not stop, cannot be reset and its count value cannot be reloaded.
Real time endlessly flows and never returns back!]
The Real Time Clock is set for ticks using prescaling bits (or rate-set bits) in appropriate
control registers.
(ii) Initiating an event after a preset delay time. The delay is as per the count value loaded.
(iii) Initiating an event (or a pair of events or a chain of events) after a comparison
between the preset time(s) and the counted value(s). [This is similar to a preset alarm.]
(iv) A preset time is loaded in a Compare Register. [This is similar to presetting an alarm.]
(v) Capturing the count value at the timer on an event. The information of time (instance
of the event) is thus stored in the capture register.
(vi) Finding the time interval between two events. Counts are captured at each event in
capture register(s) and read. The intervals are thus found out.
(vii) Waiting for a message from a queue or mailbox or semaphore for a preset time when
using an RTOS. A predefined waiting period elapses before the RTOS lets a task run.
(viii) Watchdog timer. It resets the system after a defined time.
(ix) Baud or bit rate control for serial communication on a line or network. Timer
timeout interrupts define the time of each baud.
(x) Input pulse counting when using a timer which is ticked by giving non-periodic
inputs instead of clock inputs. The timer acts as a counter if, in place of clock inputs,
the inputs are given to the timer for each instance to be counted.
(xi) Scheduling of various tasks. A chain of software timers interrupt, and the RTOS uses
these interrupts to schedule the tasks.
(xii) Time slicing of various tasks. A multitasking or multi-programmed operating system
presents the illusion that multiple tasks or programs are running simultaneously by
switching between programs very rapidly, for example, after every 16.6 ms. This
process is known as a context switch. [The RTOS switches after a preset time delay from
one running task to the next; each task can therefore run in predefined slots of time.]
(xiii) Time division multiplexing (TDM). A timer device is used for multiplexing the input
from a number of channels. Each channel input is allotted a distinct and fixed time slot
to get a TDM output. [For example, multiple telephone calls are the inputs and the TDM
device generates the TDM output for launching it into the optical fiber.]
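The RTC arithmetic sketched earlier (a fast processor clock divided down by a prescale count) can be checked with a few lines. A 0.5 µs clock prescaled by 200 yields the 10 kHz tick rate mentioned in the text; the helper name below is ours:

```python
# Sketch of the RTC tick arithmetic: a processor clock of a given period,
# prescaled (divided) by a count, gives the RTC tick rate and tick period.
# Integer nanoseconds are used to keep the arithmetic exact.

def rtc_ticks_per_second(clock_period_ns, prescale):
    """Ticks per second for a timer that divides the processor clock."""
    period_ns = clock_period_ns * prescale       # one tick every period_ns
    return 1_000_000_000 // period_ns

ticks = rtc_ticks_per_second(500, 200)  # 0.5 us clock, prescale of 200
print(ticks)         # 10000 ticks/s, i.e. a 10 kHz RTC
print(500 * 200)     # 100000 ns = 100 us between interrupts
```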
Signals during a transfer of a byte using the I2C bus, and the frame format:
I2C Bus:
(i) The bus has two lines that carry its signals: one line for the clock and one for
bi-directional data.
(ii) There is a standard protocol for the I2C bus.
Device addresses and the master in the I2C bus:
(i) Each device has a 7-bit address using which the data transfers take place.
(ii) A master can address 127 other slaves at an instance.
(iii) The master has a processing element functioning as bus controller, or a microcontroller
with an I2C (Inter Integrated Circuit) bus interface circuit.
Slaves and masters in the I2C bus:
(i) Each slave can also optionally have an I2C (Inter Integrated Circuit) bus controller and
processing element.
(ii) A number of masters can be connected on the bus.

(iii) However, at an instance, the master is the one which initiates a data transfer on the SDA (serial
data) line and which transmits the SCL (serial clock) pulses. From the master, a data frame
has fields beginning from the start bit.
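As a small illustration of the addressing described above: the first byte the master sends after the start condition packs the 7-bit slave address together with a read/write bit. The helper below is a sketch of that bit layout, not any particular controller's API; the example address 0x50 is an assumption (a common EEPROM address).

```python
# Sketch: forming the I2C address byte -- the 7-bit slave address followed
# by the R/W bit (0 = write, 1 = read), sent by the master after START.

def i2c_address_byte(address_7bit, read):
    if not 0 <= address_7bit <= 0x7F:
        raise ValueError("I2C addresses are 7 bits (0..0x7F)")
    return (address_7bit << 1) | (1 if read else 0)

print(hex(i2c_address_byte(0x50, read=False)))  # write to 0x50 -> 0xa0
print(hex(i2c_address_byte(0x50, read=True)))   # read from 0x50 -> 0xa1
```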



Embedded Systems Unit 3
Lecture No: 35
SERIAL COMMUNICATION DEVICES, PARALLEL DEVICE PORTS AND
WIRELESS DEVICES

A task is the execution of a sequential program. It starts with reading of the input
data and of the internal state of the task, and terminates with the production of the results
and updating the internal state. The control signal that initiates the execution of a task
must be provided by the operating system. The time interval between the start of the task
and its termination, given an input data set x, is called the actual duration dact(task,x) of
the task on a given target machine. A task that does not have an internal state at its point
of invocation is called a stateless task; otherwise, it is called a task with state.
Simple Task (S-task):
If there is no synchronization point within a task, we call it a simple task (S-task),
i.e., whenever an S-task is started, it can continue until its termination point is reached.
Because an S-task cannot be blocked within the body of the task, the execution time of an
S-task is not directly dependent on the progress of the other tasks in the node, and can be
determined in isolation. It is possible for the execution time of an S-task to be extended
by indirect interactions, such as by task preemption by a task with higher priority.
Complex Task (C-Task):
A task is called a complex task (C-Task) if it contains a blocking synchronization
statement (e.g., a semaphore operation "wait") within the task body. Such a "wait"
operation may be required because the task must wait until a condition outside the task is
satisfied, e.g., until another task has finished updating a common data structure, or until
input from a terminal has arrived. If a common data structure is implemented as a
protected shared object, only one task may access the data at any particular moment
(mutual exclusion). All other tasks must be delayed by the "wait" operation until the
currently active task finishes its critical section. The worst-case execution time of a
complex task in a node is therefore a global issue because it depends directly on the
progress of the other tasks within the node, or within the environment of the node.
A task can be in one of the following states: running, waiting or ready-to-run.
A task is said to be in the running state if it is being executed by the CPU.
A task is said to be in the waiting state if it is waiting for another event to occur.
A task is said to be in the ready-to-run state if it is waiting in a queue for CPU time.

Task Scheduler:
An application in a real-time embedded system can always be broken down into a number
of distinctly different tasks. For example,
Keyboard scanning
Display control
Input data collection and processing
Responding to and processing external events
Communicating with host or others
Each of the tasks can be represented by a state machine. However, implementing a single
sequential loop for the entire application can prove to be a formidable task. This is
because of the various time constraints in the tasks: the keyboard has to be scanned, the display
controlled, the input channel monitored, etc. One method of solving the above problem is to
use a simple task scheduler. The various tasks are handled by the scheduler in an orderly
manner. This produces the effect of simple multitasking with a single processor. A bonus
of using a scheduler is the ease of implementing the sleep mode in microcontrollers,
which can reduce the power consumption dramatically (from mA to µA). This is
important in battery operated embedded systems.
There are several ways of implementing the scheduler: preemptive or cooperative, round
robin or with priority. In a cooperative or non-preemptive system, tasks cooperate with

one another and relinquish control of the CPU themselves. In a preemptive system, a task
may be preempted or suspended by a different task, either because the latter has a higher
priority or because the time slice of the former is used up. A round robin scheduler switches in
one task after another in a round robin manner, whereas a system with priority will switch
in the highest priority task. For many small microcontroller based embedded systems, a
cooperative (or non-preemptive), round robin scheduler is adequate. This is the simplest
to implement and it does not take up much memory. Ravindra Karnad has implemented
such a scheduler for the 8051 and other microcontrollers. In his implementation, all tasks
must behave cooperatively.
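A minimal cooperative, round robin scheduler of the kind described can be sketched as follows. The task names and bodies are invented placeholders; a real microcontroller implementation (such as the 8051 one mentioned) would be written in C or assembly and would differ in detail.

```python
# Cooperative round-robin scheduler sketch: each task performs one short
# step and returns control voluntarily (no preemption).

def keyboard_scan():
    return "scanned keys"

def display_update():
    return "refreshed display"

def input_collect():
    return "read input channel"

TASKS = [keyboard_scan, display_update, input_collect]

def run_scheduler(rounds):
    """Switch in one task after another, round robin, for `rounds` passes."""
    log = []
    for _ in range(rounds):
        for task in TASKS:       # each task must cooperate by returning promptly
            log.append(task())
    return log

print(run_scheduler(1))
```

Because no task is ever preempted, shared data needs no locking; the trade-off is that one misbehaving task that never returns stalls the whole system.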



Embedded Systems Unit 3
Lecture No: 37
TIMERS AND COUNTING DEVICES, WATCHDOG TIMER AND REAL TIME
CLOCK

Watchdog timer:
A watchdog timer is a timing device that is set for a preset time interval; an event must occur
during that interval, else the device will generate a timeout signal on failure to get
that event in the watched time interval. On that event, the watchdog timer is
disabled, to disable generation of the timeout or reset. A timeout may result in the processor
starting a service routine or starting from the beginning. Assume that we anticipate that a set of tasks
must finish in a 100 ms interval. The watchdog timer is disabled and stopped by a
program instruction in case the tasks finish within the 100 ms interval. In case a task does not
finish (the timer is not disabled by the program instruction), the watchdog timer generates an interrupt after
100 ms and executes a routine, which is programmed to run because of the failure to
finish the task in the anticipated interval.
A watchdog timer (WDT; sometimes called a computer operating properly or COP timer,
or simply a watchdog) is an electronic timer that is used to detect and recover from
computer malfunctions. During normal operation, the computer regularly restarts the
watchdog timer to prevent it from elapsing, or "timing out". If, due to a hardware fault or
program error, the computer fails to restart the watchdog, the timer will elapse and
generate a timeout signal. The timeout signal is used to initiate corrective action or
actions. The corrective actions typically include placing the computer system in a safe
state and restoring normal system operation. Watchdog timers are commonly found in
embedded systems and other computer-controlled equipment where humans cannot easily
access the equipment or would be unable to react to faults in a timely manner. In such
systems, the computer cannot depend on a human to reboot it if it hangs; it must be self-
reliant. For example, remote embedded systems such as space probes are not physically
accessible to human operators; these could become permanently disabled if they were
unable to autonomously recover from faults. A watchdog timer is usually employed in
cases like these. Watchdog timers may also be used when running untrusted code in a

sandbox, to limit the CPU time available to the code and thus prevent some types of
denial-of-service attacks.
Application:
An application in mobile phones is that the display is switched off in case no GUI interaction takes
place within a watched time interval. The interval is usually set at 15 s, 20 s, 25 s or 30 s
in mobile phones. This saves power.
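The reset-unless-refreshed behaviour described above can be sketched as a small simulation. The 100 ms timeout follows the example in the text; the function name and durations are illustrative.

```python
# Simulation sketch of a watchdog timer: if the watched task does not finish
# (and so disable/refresh the watchdog) within the timeout, a reset occurs.

def run_with_watchdog(task_duration_ms, timeout_ms=100):
    """Return 'completed' if the task finishes before the watchdog expires,
    otherwise 'watchdog reset' (the corrective action)."""
    if task_duration_ms <= timeout_ms:
        return "completed"        # firmware disables the watchdog in time
    return "watchdog reset"       # timeout elapsed: system is restarted

print(run_with_watchdog(80))     # completed
print(run_with_watchdog(250))    # watchdog reset
```

In real hardware the "refresh" is a periodic register write (often called kicking or feeding the watchdog); the simulation above only captures the timing decision.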
Real time clock :
A real-time clock (RTC) is a computer clock (most often in the form of an integrated
circuit) that keeps track of the current time. Although the term often refers to the devices
in personal computers, servers and embedded systems, RTCs are present in almost any
electronic device which needs to keep accurate time.
Timer and counting devices:
Like a computer system, all embedded systems need at least one timing device (for the system
clock): Real Time Clock, pulse accumulator counter, watchdog timer, serial
communication rate control timer, OS timer for task scheduling. Both hardware timers
and software timers are used in these systems.
Uses of a timer:
Real Time Clock ticks (system heart beats).
Real Time Clock: a clock that does not stop once the system starts, cannot be reset, and whose
count value cannot be reloaded. Initiating an event after a preset delay time; the delay is as
per the count value loaded. Initiating an event after a comparison between the preset time
and a counted value; the preset time is loaded in a comparison register. Capturing a count
value at the timer on an event; the information of time is stored in the capture register. Finding
the time interval between two events; the time is captured at each event and the intervals are then
found.
Waiting for a message from a queue or mailbox or semaphore for a preset time when using an
RTOS; there is a predefined waiting period before the RTOS starts to run. Watchdog timer:
it resets the system after a defined time. Baud or bit rate control for serial communication
on a line or network; timer timeout interrupts define the time δT of each baud or ∆T for
each bit. Input pulse counting when using a timer which is ticked by giving non-periodic

inputs instead of clock inputs; the timer acts as a counter for the inputs given to it for
each instance to be counted.
Scheduling of various tasks: a chain of software timers interrupt, and the RTOS uses these
interrupts to schedule the tasks. Time slicing of various tasks: the RTOS switches after a preset
time delay from one running task to the next, so each task can run in a predefined
slot of time. Time division multiplexing: a timer is used for multiplexing the input from a
number of channels, with each channel input allotted a distinct and fixed time slot to get a
TDM output.
Hardware Timers
There can be a limited number of hardware timers present in the system. The system has at least
one hardware timer, from which the system clock is configured. A microcontroller may
have 2, 3 or 4 hardware timers. One of the hardware timers ticks from the inputs of the
internal clock of the processor and generates the system clock. Using the system clock
or internal clock, the other hardware timers present can be driven. These
timers are programmable by the device driver programs.
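A device driver typically programs such a hardware timer by loading a count derived from the clock that feeds it: for an up-counting timer, the reload value is chosen so the counter overflows at the desired tick rate. The sketch below illustrates that arithmetic; the clock frequency, counter width and function name are assumptions, not a specific microcontroller's registers.

```python
# Sketch: computing the reload value for an up-counting hardware timer so
# that it overflows (interrupts) at a desired tick rate.

def timer_reload(clock_hz, tick_hz, counter_bits=16):
    """Start value to load so the counter overflows every clock_hz/tick_hz counts."""
    counts = clock_hz // tick_hz
    max_count = 1 << counter_bits
    if counts > max_count:
        raise ValueError("tick too slow for this counter width; add a prescaler")
    return max_count - counts        # value written into the count register

print(timer_reload(12_000_000, 1000))  # 12 MHz clock, 1 kHz tick -> 53536
```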



Embedded Systems Unit 3
Lecture No: 38
EMBEDDED FIRMWARE DESIGN APPROACHES

The embedded firmware is responsible for controlling the various peripherals of
the embedded hardware and generating responses in accordance with the functional
requirements of the particular embedded product.
Firmware is considered as the master brain of the embedded
system. Imparting intelligence to an embedded system is a one-time
process and it can happen at any stage: it can be immediately after the fabrication of the
embedded hardware, or at a later stage. Once intelligence is imparted to the embedded
product, by embedding the firmware in the hardware, the product starts functioning
properly and will continue serving the assigned task till a hardware breakdown occurs or a
corruption of the embedded firmware occurs. In case of hardware breakdown, the damaged
component may need to be replaced by a new component, and for firmware corruption
the firmware should be re-loaded to bring the embedded product back to normal
functioning. Coming back to the newborn baby example, the newborn baby is very
adaptive in terms of intelligence, meaning it learns from mistakes and updates its memory
each time a mistake or a deviation in expected behaviour occurs, whereas most
embedded systems are less adaptive or non-adaptive. For most embedded
products the embedded firmware is stored in a permanent memory (ROM) and is
not alterable by end users. Some of the embedded products used in the control and
instrumentation domain are adaptive. This adaptability is achieved by making use of
configurable parameters which are stored in an alterable permanent memory area. The
parameters get updated in accordance with the deviations from expected behaviour, and the
firmware makes use of these parameters for creating the response the next time similar
variations occur.
Designing embedded firmware requires understanding of the particular embedded
product hardware, like the various component interfacing, memory map details, I/O port
details, configuration and register details of the various hardware chips used, and some
programming language (either target processor/controller specific low level assembly
language or a high level language like C, C++ or Java).
Embedded firmware development starts with the conversion of the firmware
requirements into a program model using modelling tools like UML or flow chart based
representation. The UML diagrams or flow chart give a diagrammatic representation of
the decisions to be taken and the tasks to be performed. Once the program model is
created, the next step is the implementation of the tasks and actions by capturing the
model using a language which is understandable by the target processor/controller. The
following sections are designed to give an overview of the various steps involved in
embedded firmware design and development.
Embedded Firmware Design Approaches:
The firmware design approach for an embedded product is purely dependent on the
complexity of the functions to be performed, the speed of operation required, etc. Two
basic approaches are used for embedded firmware design. They are 'Conventional
Procedure Based Firmware Design' and 'Embedded Operating System (OS) Based
Design'. The conventional procedure based design is also known as the 'Super Loop
Model'. We will discuss each of them in detail in the following sections.
The Super Loop Based approach:
The super loop based firmware development approach is adopted for applications that
are not time critical and where the response time is not so important (embedded systems
where missing deadlines is acceptable). It is very similar to conventional procedural
programming, where the code is executed task by task. The task listed at the top of the
program code is executed first, and the tasks just below the top are executed after
completing the first task. This is a typical procedural approach. In a multiple task based
system, each task is executed in series in this approach. The firmware execution flow for
this will be:
1. Configure the common parameters and perform initialization for various hardware
components, memory, registers, etc.
2. Start the first task and execute it
3. Execute the second task
4. Execute the next task
5. …
6. …
7. Execute the last defined task
8. Jump back to the first task and follow the same flow
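The execution flow listed above maps directly onto a 'super loop'. Sketched in Python (the task names are placeholders, and the loop is bounded only so the example terminates; real firmware loops forever), it is simply:

```python
# Super loop sketch: initialize once, then execute the tasks one after
# another and jump back to the first.

def super_loop(iterations):
    log = ["init"]                     # step 1: configure and initialize
    for _ in range(iterations):        # steps 2-8, repeated endlessly in firmware
        log += ["task1", "task2", "taskN"]
    return log                         # the loop itself is the "jump back"

print(super_loop(2))
```

Note that a slow task delays every task after it, which is exactly why this model is reserved for applications where missing a deadline is acceptable.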

The Embedded Operating System (OS) Based approach
The operating system (OS) based approach makes use of an operating system, which can be
either a General Purpose Operating System (GPOS) or a Real Time Operating System
(RTOS), to host the user-written application firmware. The general purpose OS based
design is very similar to conventional PC based application development, where the
device contains an operating system (Windows, Linux, etc. for desktop PCs) and you
create and run user applications on top of it. An example of a GPOS used in
embedded product development is Microsoft Windows XP Embedded.
Examples of embedded products using Microsoft Windows XP Embedded are Personal Digital
Assistants (PDAs).



Embedded Systems Unit 3
Lecture No: 39
EMBEDDED FIRMWARE DEVELOPMENT LANGUAGES

We can use either a target processor controller specific language {Generally known as
Assembly language or low level language) or a target processor controller independent
language [Like C, CH-_. JAVA, etc. commonly known as High Level Language} or a
combination of Assembly and High level Language. We will discuss where each of the
approach is used and the relative merits and de-merits of each, in the following sections.
Assembly language based development:
Assembly level language is the human readable notation of 'machine language', whereas 'machine language' is a processor understandable language. Processors deal only with binaries: machine language is a binary representation and it consists of 1s and 0s. Machine language is made readable by using specific symbols called 'mnemonics'. Hence machine language can be considered as an interface between processor and programmer. Assembly language and machine language are processor/controller dependent, and an assembly program written for one processor/controller family will not work with others.
Assembly language programming is the task of writing processor specific machine code in mnemonic form, converting the mnemonics into actual processor instructions (machine language) and associated data using an assembler.
Assembly language programming was the most common type of programming adopted in the beginning of the software revolution. If we look back at the history of programming, we can see that a large number of programs were written entirely in assembly language. Even in the 1990s, the majority of console video games were written in assembly language, including most popular games written for the Sega Genesis and the Super Nintendo Entertainment System. The popular arcade game NBA Jam, released in 1993, was also coded entirely in assembly language.
Even today, almost all low level, system related programming is carried out using assembly language. Some operating system dependent tasks require low-level languages. In particular, assembly language is often used in writing the low level interaction between the operating system and the hardware, for instance in device drivers.

The general format of an assembly language instruction is an opcode followed by operands. The opcode tells the processor/controller what to do and the operands provide the data and information.



Embedded Systems Unit 3
Lecture No: 41

ISR CONCEPT, INTERRUPT SOURCES


Interrupt means an event which invites the attention of the processor for some action on a hardware or software event.
1. When a device or port is ready, the device or port generates an interrupt, or when it completes the assigned action, it generates an interrupt. This interrupt is called a hardware interrupt.
2. When a software run-time exception condition is detected, either the processor hardware or a software instruction generates an interrupt. This interrupt is called a software interrupt or trap or exception.
3. Software can execute the software instruction for interrupt to signal the execution of an ISR. The interrupt due to a signal is also a software interrupt. [The signal differs from the function in the sense that the execution of the signal handler function (ISR) can be masked, and till the mask is reset the handler will not execute. A function, on the other hand, always executes on the call after a call instruction.]
In response to the interrupt, the routine or program which is running at present gets interrupted and an ISR is executed. The ISR is also called a device driver ISR in the case of devices, and is called an exception or signal or trap handler in the case of software interrupts. Device driver ISRs execute on software interrupts from device open(), close(), read(), write() or other device functions.
Examples of Software Interrupts and ISRs
Interrupts and device drivers play the major role in using the system hardware and devices. Think of any system hardware and it will have devices and thus needs device drivers. The embedded software or the operating system for application software must consist of the codes for the device: (i) configuring (initializing), (ii) activating (also called opening or attaching), (iii) the driving function for read, (iv) the driving function for write and (v) resetting (also called deactivating or closing or detaching). Each device task is completed by first using an ISR: a device driver function calls the ISR by using a software interrupt instruction (SWI).
A program must detect an error condition or run-time exceptional condition encountered during running. In a program, either the hardware detects this condition (called a trap)

or an instruction SWI is used that executes on detecting the exceptional run-time condition during computations or communication: for example, detecting that the square root of a negative number is being calculated, detecting an illegal argument in the function, or detecting that the connection to the network is not found. Detection of an exceptional run-time condition is called an exception by the program. An interrupt service routine (exception handler routine) executes, which is called a catch function as it executes on catching the exception thrown by executing an SWI.
INTERRUPT SOURCES
Hardware sources can be from internal devices or external peripherals, which interrupt the ongoing routine and thereby cause diversion to the corresponding ISR. Software sources for interrupt are related to (i) the processor detecting (trapping) a computational error or an illegal op-code during execution, or (ii) execution of an SWI instruction to cause a processor interrupt of the ongoing routine.
Each of the interrupt sources (when not masked), or groups of interrupt sources, demands a temporary transfer of control from the presently executed routine to the ISR corresponding to the source.
The internal sources and devices differ in different processors or microcontrollers or devices and their versions and families. This gives a classification of hardware and software interrupts from several sources. Not all the given types of sources in the table may be present or enabled in a given system. Further, there may be some other special types of sources provided in the system.
Hardware Interrupts Related to Internal Devices: There are a number of hardware interrupt sources which can interrupt an ongoing program. These are processor, microcontroller or internal device hardware specific. An example of a hardware-related interrupt is the timer overflow interrupt generated by the microcontroller hardware. Row 1 of the table lists common internal device interrupt sources.
Hardware Interrupts Related to External Devices: There can be an external hardware interrupt source for interrupting an ongoing program that also provides the ISR address or vector address or interrupt-type information through the data bus. Row 2 of the table lists these interrupt sources. External hardware interrupts with ISR address information sent by the devices themselves are device hardware-specific.



Embedded Systems Unit 3
Lecture No: 42
INTERRUPT SERVICING MECHANISM

The meaning of 'interrupt' is to break the sequence of operation. While the CPU is executing a program, an 'interrupt' breaks the normal sequence of execution of instructions and diverts execution to some other program called the Interrupt Service Routine (ISR). After executing the ISR, control is transferred back again to the main program. Interrupt processing is an alternative to polling.
Need for Interrupt: Interrupts are particularly useful when interfacing I/O devices that provide or require data at relatively low data transfer rates.
Types of Interrupts: There are two types of Interrupts in 8086.
They are:
(i)Hardware Interrupts and
(ii)Software Interrupts
(i) Hardware Interrupts (External Interrupts). The Intel microprocessors support
hardware interrupts through:
• Two pins that allow interrupt requests, INTR and NMI
• One pin that acknowledges, INTA, the interrupt requested on INTR.

INTR and NMI


• INTR is a maskable hardware interrupt. The interrupt can be enabled/disabled
using STI/CLI instructions or using more complicated method of updating the
FLAGS register with the help of the POPF instruction.
• When an interrupt occurs, the processor stores FLAGS register into stack,
disables further interrupts, fetches from the bus one byte representing interrupt
type, and jumps to interrupt processing routine address of which is stored in
location 4 * <interrupt type>. Interrupt processing routine should return with the
IRET instruction.
• NMI is a non-maskable interrupt. Interrupt is processed in the same way as the
INTR interrupt. Interrupt type of the NMI is 2, i.e. the address of the NMI
processing routine is stored in location 0008h. This interrupt has higher priority
than the maskable interrupt.
Ex: NMI, INTR.


(ii) Software Interrupts (Internal Interrupts and Instructions).


Software interrupts can be caused by:
• INT 3 instruction – breakpoint interrupt. This is a type 3 interrupt.
• INT <interrupt number> instruction – any one interrupt from available 256
interrupts.
• INTO instruction – interrupt on overflow
• Single-step interrupt – generated if the TF flag is set. This is a type 1 interrupt.
When the CPU processes this interrupt it clears TF flag before calling the
interrupt processing routine.
• Processor exceptions: Divide Error (Type 0), Unused Opcode (type 6) and Escape
opcode (type 7).
• Software interrupt processing is the same as for the hardware interrupts.
Ex: INT n (Software Instructions)
• Control is provided through:
• IF and TF flag bits
• IRET and IRETD
Interrupt vectors:
Interrupt vectors and the vector table are crucial to an understanding of hardware and
software interrupts.
• The interrupt vector table is located in the first 1024 bytes of memory at addresses
000000H–0003FFH and contains 256 different four-byte interrupt vectors.
• An interrupt vector contains the address (segment and offset) of the interrupt
service procedure.
• The first five interrupt vectors are identical in all Intel processors
• Intel reserves the first 32 interrupt vectors
• the last 224 vectors are user-available
• each is four bytes long in real mode and contains the starting address of the
interrupt service procedure.
• the first two bytes contain the offset address
• the last two contain the segment address

Dedicated Interrupts:
• Type 0 The divide error occurs whenever the result from a division overflows or an
attempt is made to divide by zero.
• Type 1 Single-step or trap occurs after execution of each instruction if the trap
(TF) flag bit is set.
– upon accepting this interrupt, TF bit is cleared so the interrupt service
procedure executes at full speed

• Type 2 The non-maskable interrupt occurs when a logic 1 is placed on the NMI
input pin to the microprocessor.
– non-maskable—it cannot be disabled
• Type 3 A special one-byte instruction (INT 3) that uses this vector to access its
interrupt-service procedure.
– often used to store a breakpoint in a program for debugging
• Type 4 Overflow is a special vector used with the INTO instruction. The INTO
instruction interrupts the program if an overflow condition exists.
– as reflected by the overflow flag (OF)

Reserved interrupts (for future processors):

• Types 5 to 31 are reserved for future processors like the 80186, 80286, 80386, etc.
Available interrupts:
• Types 32 to 255 are known as available interrupts.



Embedded Systems Unit 3
Lecture No: 43
MULTIPLE INTERRUPTS & DMA, DEVICE DRIVER PROGRAMMING

INTERRUPT VECTOR TABLE:

HARDWARE INTERRUPTS:
• The two processor hardware interrupt inputs:
– non-maskable interrupt (NMI)
– interrupt request (INTR)
• When NMI input is activated, a type 2 interrupt occurs
– Because NMI is internally decoded
• The INTR input must be externally decoded to select a vector.
• Any interrupt vector can be chosen for the INTR pin, but we usually use an interrupt
type number between 20H and FFH.
• Intel has reserved interrupts 00H - 1FH for internal and future expansion.
• INTA is also an interrupt pin on the processor.
• It is an output used in response to INTR input to apply a vector type number to the data
bus connections D7–D0 .
• Fig. 2 shows the three user interrupt connections on the microprocessor.

Fig. 1: The timing of the INTR input and INTA output. *This portion of the data
bus is ignored and usually contains the vector number.

• The non-maskable interrupt (NMI) is an edge-triggered input that requests an interrupt on the positive edge (0-to-1 transition).
• After a positive edge, the NMI pin must remain logic 1 until recognized by the
microprocessor
• Before the positive edge is recognized, NMI pin must be logic 0 for at least two
clocking periods
• The NMI input is often used for parity errors and other major faults, such as power
failures.
• Power failures are easily detected by monitoring the AC power line and causing an NMI
interrupt whenever AC power drops out.
• The interrupt request input (INTR) is level-sensitive, which means that it must be held
at a logic 1 level until it is recognized.
• INTR is set by an external event and cleared inside the interrupt service procedure
• INTR is automatically disabled once accepted.
• re-enabled by IRET at the end of the interrupt service procedure


Fig. 2: A simple method for generating interrupt vector type number FFH in
response to INTR

• 80386–Core2 use IRETD in protected mode.


• In 64-bit protected mode, IRETQ is used
• The processor responds to INTR by pulsing INTA output in anticipation of receiving an
interrupt vector type number on data bus connections D7–D0.
• Fig. 1 shows the timing diagram for the INTR and INTA pins of the microprocessor.
• Two INTA pulses generated by the system insert the vector type number on the data
bus.
• Fig. 2 shows a circuit to apply interrupt vector type number FFH to the data bus in
response to an INTR.

DMA - Direct memory access or autonomous transfer
What is direct memory access? Direct memory access is a sophisticated I/O technique in which a DMA controller replaces the CPU and takes care of the access of both the I/O device and the memory for fast data transfers. Using DMA you get the fastest data transfer rates possible.
Special hardware writes to / reads from memory directly (without CPU intervention) and saves the timing associated with op-code fetch and decoding and with incrementing and testing the addresses of source and destination.
The DMA controller may either stop the CPU and access the memory (cycle stealing DMA) or use the bus while the CPU is not using it (hidden cycle DMA).
The DMA controller has some control lines (to do a handshake with the CPU negotiating
to be a bus master and to emulate the CPU behaviour while accessing the memory), an
address register which is auto-incremented (or auto-decremented) at each memory access,
and a counter used to check for final byte (or word) count.
The DMA controller is programmed with (at least):
- initial memory address
- number of bytes to be input
- address of the source

Direct Memory Access is a capability provided by some computer bus architectures that
allows data to be sent directly from an attached device (such as a disk drive) to the

memory on the computer's motherboard. The microprocessor is freed from involvement
with the data transfer, thus speeding up overall computer operation.
Usually a specified portion of memory is designated as an area to be used for direct
memory access. In the ISA bus standard, up to 16 megabytes of memory can be
addressed for DMA. The EISA and MCA standards allow access to the full range of
memory addresses (assuming they're addressable with 32 bits). PCI accomplishes DMA
by using bus mastering (with the microprocessor "delegating" I/O control to the PCI
controller).
An alternative to DMA is the Programmed Input/Output (PIO) interface in which all data
transmitted between devices goes through the processor. A newer protocol for the
ATA/IDE interface is Ultra DMA/33, which provides a burst data transfer rate up to 33
MB (megabytes) per second. Hard drives that come with Ultra DMA/33 also support PIO
modes 1, 3, and 4, and multiword DMA mode 2 (at 16.6 megabytes per second).
DEVICE DRIVER PROGRAMMING
A system has a number of physical devices. A device may have multiple functions. Each device function requires a driver. Examples of multiple functions in a device are as follows:
1. A timer device performs timing functions as well as counting functions. It also performs the delay function and periodic system calls.
2. A transceiver device transmits as well as receives. It may not be just a repeater. It may also do the jabber control and collision control. (Jabber control means prevention of continuous streams of unnecessary bytes in case of system fault. Collision control means that it must first sense the network bus availability and only then transmit.)
3. A voice-data-fax modem device has transmitting as well as receiving functions for voice, fax as well as data.
A common driver or separate drivers for each device function are required. Device drivers and their corresponding ISRs are the important routines in most systems. The driver has the following features.
1. The driver provides a software layer (interface) between the application and the actual device: When running an application, the devices are used. A driver provides a routine that facilitates the use of a device function in the application. For example, an application

for mailing generates a stream of bytes. These are to be sent through a network driver card after packing the stream messages as per the protocol used in the various layers, for example, TCP/IP. The network driver routine will provide the software layer between the application and the network for using the network interface card (device).
2. The driver facilitates the use of a device by executing an ISR: The driver function is usually written in such a manner that it can be used like a black box by an application developer. Simple commands from a task or function can then drive the device. Once a driver function is available for writing the codes, the application developer does not need to know anything about the mechanism, addresses, registers, bits and flags used by the device. For example, consider a case when the system clock is to be set to tick every 10,000 µs (100 times each second). The user application simply makes a call to an OS function like OS_Ticks(100). It is not necessary for the user of this function to know which timer device will perform it, what the addresses used by the driver are, which device register will hold the value 100 for the ticks, or what control bits will be set or reset. OS_Ticks(100), when run, simply interrupts the system and executes the SWI instruction, which calls the signalled routine (driver ISR) for the system ticking device. Then the driver ISR which executes takes 100 as input and configures the real time clock to let the system clock tick every 10,000 µs and generate the system clock interrupts continuously every 10,000 µs to get 100 ticks each second.
Generic device driver functions in a high level language are used in high level language programs. The functions are open, close, read, write, listen, accept, etc.
Device driver ISR programming in assembly needs an understanding of the processor, system and I/O buses and the addresses of the device registers in the specific hardware. It needs an in-depth understanding of how the software application program will seek the device data or write into the device data, and what the platform is. Platform means the operating system and hardware which interfaces with the system buses.
A common method of using the drivers is as follows: a device (or device function module) is opened (or registered or attached) before using the driver. It means the device is first initialized and configured by setting and resetting the control bits of the device control register, and use of the interrupt service is enabled. Using a user function or an OS

function, a device (or device function module) can also be closed or de-registered or detached by another process. After executing that process, the device driver is not accessible till the device is re-opened (re-registered or re-attached).



Embedded Systems Unit 3
Lecture No: 45
CONCEPTS OF C VERSUS EMBEDDED C, COMPILER VERSUS CROSS-COMPILER

Concepts of C versus Embedded C :


C is for desktop computers; Embedded C usually is for microcontroller based applications.
C uses the resources of desktop computers (memory, OS, etc.).
Embedded C uses only the limited resources available on the chip (limited RAM, ROM, ports, etc.).
Embedded C could be a subset of C.
PROGRAMMING IN EMBEDDED C
Whenever the conventional 'C' language and its extensions are used for programming embedded systems, it is referred to as 'Embedded C' programming. Programming in 'Embedded C' is quite different from conventional desktop application development using the 'C' language for a particular platform. Desktop computers contain working memory in the range of Megabytes (nowadays Gigabytes) and storage memory in the range of Gigabytes.
'C' is a well structured, well defined and standardized general purpose programming language with extensive bit manipulation support. 'C' offers a combination of the features of a high level language and assembly, and helps in hardware access programming (system level programming) as well as business package development (application development, like payroll systems, banking applications, etc.). The conventional 'C' language follows the ANSI standard and it incorporates various library files for different operating systems. A platform (operating system) specific application, known as a compiler, is used for the conversion of programs written in 'C' to the target processor (on which the OS is running) specific binary files. Hence it is a platform specific development.
Embedded 'C' can be considered as a subset of the conventional 'C' language. Embedded 'C' supports all 'C' instructions and incorporates a few target processor specific functions. It should be noted that the standard ANSI 'C' library implementation is always tailored to the target processor/controller library files in Embedded 'C'. The implementation of target processor/controller specific functions depends upon the processor/controller as well as the supported cross-compiler for the particular Embedded

'C' language. A software program called a 'cross-compiler' is used for the conversion of programs written in Embedded 'C' to target processor/controller specific instructions (machine language).
Compiler vs. Cross-Compiler
A compiler is a software tool that converts source code written in a high level language, on top of a particular operating system running on a specific target processor architecture (e.g. Intel). Here the operating system, the compiler program and the application making use of the source code run on the same target processor. The source code is converted to the target processor specific machine instructions. The development is platform specific (OS as well as the target processor on which the OS is running). Such compilers are generally termed 'Native Compilers'. A native compiler generates code for the same machine (processor) on which it is running.
Cross-compilers are the software tools used in cross-platform development applications. In cross-platform development, the compiler running on a particular processor converts the source code to machine code for a target processor whose architecture and instruction set is different from the processor on which the compiler is running, or for an operating system which is different from the current development environment OS. Embedded system development is a typical example of cross-platform development, where embedded firmware is developed on a machine other than the target processor and the same is converted into machine code for the target processor architecture.



Embedded Systems Unit 4
Lecture No: 46
FUNDAMENTAL ISSUES IN HARDWARE AND SOFTWARE CO-DESIGN

This chapter is about giving the reader some practical processes and techniques that have
proven useful over the years. Defining the system and its architecture, if done correctly,
is the phase of development which is the most difficult and the most important of the
entire development cycle. Figure shows the different phases of development as defined
by the Embedded System Design and Development Lifecycle Model.

This model indicates that the process of designing an embedded system and taking that
design to market has four phases:
1. Phase 1. Creating the Architecture, which is the process of planning the design of
the embedded system.
2. Phase 2. Implementing the Architecture, which is the process of developing the
embedded system.
3. Phase 3. Testing the System, which is the process of testing the embedded system
for problems, and then solving those problems.

4. Phase 4. Maintaining the System, which is the process of deploying the embedded
system into the field, and providing technical support for users of that device for
the duration of the device’s lifetime.
This model also indicates that the most important time is spent in phase 1, creating the
architecture. At this phase of the process, no board is touched and no software is coded. It
is about putting full attention, concentration and investigative skills into gathering
information about the device to be developed, understanding what options exist, and
documenting those findings. If the right preparation is done in defining the system’s
architecture, determining requirements, understanding the risks, and so on, then the
remaining phases of development, testing and maintaining the device will be simpler,
faster, and cheaper. This, of course assumes that the engineers responsible have the
necessary skills. In short, if phase 1 is done correctly, then less time will be wasted on
deciphering code that doesn’t meet the system requirements, or guessing what the
designers’ intentions were, which most often results in more bugs, and more work. That
is not to say that the design process is always smooth sailing. Information gathered can
prove inaccurate, specifications can change, and so on, but if the system designer is
technically disciplined, prepared, and organized, new hurdles can be immediately
recognized and resolved. This results in a development process that is much less stressful,
with less time and money spent and wasted. Most importantly, the project will, from a
technical standpoint, almost certainly end in success.



Embedded Systems Unit 3
Lecture No: 47

COMPUTATIONAL MODELS IN EMBEDDED DESIGN


We implement a system’s processing behavior with processors. But to accomplish this,
we must have first described that processing behavior. One method we’ve discussed for
describing processing behavior uses assembly language. Another, more powerful method
uses a high-level programming language like C. Both these methods use what is known
as a sequential program computation model, in which a set of instructions executes
sequentially. A high-level programming language provides more advanced constructs for
sequencing among the instructions than does an assembly language, and the instructions
are more complex, but nevertheless, the sequential execution model (one statement at a
time) is the same. However, embedded system processing behavior is becoming very
complex, requiring more advanced computation models to describe that behavior. The
increasing complexity results from increasing IC capacity: the more we can put on an IC,
the more functionality we want to put into our embedded system. Thus, while embedded
systems previously encompassed applications like washing machines and small games
requiring perhaps hundreds of lines of code, today they also extend to fairly sophisticated
applications like television set-top boxes and digital cameras requiring perhaps hundreds
of thousands of lines. Trying to describe the behavior of such systems can be extremely
difficult. The desired behavior is often not even fully understood initially. Therefore,
designers must spend much time and effort simply understanding and describing the
desired behavior of a system, and some studies have found that most system bugs come
from mistakes made describing the desired behavior rather than from mistakes in
implementing that behavior. The common method today of using an English (or some
other natural language) description of desired behavior provides a reasonable first step,
but is not nearly sufficient, because English is not precise. Trying to describe a system
precisely in English can be an arduous and often futile endeavor -- just look at any legal
document for an example of attempting to be precise in a natural language. A
computation model assists the designer to understand and describe the behavior by
providing a means to compose the behavior from simpler objects. A computation model
provides a set of objects, rules for composing those objects, and execution semantics of
the composed objects. For example, the sequential program model provides a set of
statements, rules for putting statements one after another, and semantics stating how the

statements are executed one at a time. Unfortunately, this model is often not enough.
Several other models are therefore also used to describe embedded system behavior.
These include the communicating process model, which supports description of multiple
sequential programs running concurrently. Another model is the state machine model,
used commonly for control-dominated systems. A control-dominated system is one
whose behavior consists mostly of monitoring control inputs and reacting by setting
control outputs. Yet another model is the dataflow model, used for data-dominated
systems. A data-dominated system’s behavior consists mostly of transforming streams of
input data into streams of output data, such as a system for filtering noise out of an audio
signal as part of a cell phone. An extremely complex system may be best described using
an object-oriented model, which provides an elegant means for breaking the complex
system into simpler, well-defined objects. A model is an abstract notion, and therefore we
use languages to capture the model in a concrete form. For example, the sequential
program model can be captured in a variety of languages, such as C, C++, Pascal, Java,
Basic, Ada, VHDL, and Verilog. Furthermore, a single language can capture a variety of
models. Languages typically are textual, but may also be graphical. For example,
graphical languages have been proposed for sequential programming (though they have
not been widely adopted).
An earlier chapter focused on the sequential program model. This chapter will focus on
the state machine and concurrent process models, both of which are commonly used in
embedded systems.



Embedded Systems Unit 4
Lecture No: 49
HARDWARE-SOFTWARE TRADE-OFFS

Embedded systems are large in numbers, and those numbers are growing every
year as more electronic devices gain a computational element. Embedded systems
possess several common characteristics that differentiate them from desktop systems, and
that pose several challenges to designers of such systems. The key challenge is to
optimize design metrics, which is particularly difficult since those metrics compete with
one another. One particularly difficult design metric to optimize is time-to-market,
because embedded systems are growing in complexity at a tremendous rate, and the rate
at which productivity improves every year is not keeping up with that growth. This book
seeks to help improve productivity by describing design techniques that are standard and
others that are very new, and by presenting a unified view of software and hardware
design. This goal is worked towards by presenting three key technologies for embedded
systems design: processor technology, IC technology, and design technology. Processor
technology is divided into general-purpose, application-specific, and single-purpose
processors. IC technology is divided into custom, semi-custom, and programmable logic
IC’s. Design technology is divided into compilation/synthesis, libraries/IP, and
test/verification. Design technology involves the manner in which we convert our concept
of desired system functionality into an implementation. We must not only design the
implementation to optimize design metrics, but we must do so quickly. As described
earlier, the designer must be able to produce larger numbers of transistors every year, to
keep pace with IC technology. Hence, improving design technology to enhance
productivity has been a focus of the software and hardware design communities for
decades.
To understand how to improve the design process, we must first understand the
design process itself. Variations of a top-down design process have become popular in
the past decade, an ideal form of which is illustrated in Figure. The designer refines the
system through several abstraction levels. At the system level, the designer describes the
desired functionality in some language, often a natural language like English, but
preferably an executable language like C; we shall call this the system specification. The
designer refines this specification by distributing portions of it among chosen processors
(general or single purpose), yielding behavioral specifications for each processor. The

designer refines these specifications into register-transfer (RT) specifications by
converting behavior on general-purpose processors to assembly code, and by converting
behavior on single-purpose processors to a connection of register-transfer components
and state machines. The designer then refines the register-transfer-level specification of a
single-purpose processor into a logic specification consisting of Boolean equations.
Finally, the designer refines the remaining specifications into an implementation,
consisting of machine code for general-purpose processors, and a gate-level netlist for
single-purpose processors. There are three main approaches to improving the design
process for increased productivity, which we label as compilation/synthesis, libraries/IP,
and test/verification. Several other approaches also exist. We now discuss all of these
approaches. Each approach can be applied at any of the four abstraction levels.



Embedded Systems Unit 4
Lecture No: 50
INTEGRATION OF HARDWARE AND FIRMWARE

Having the explicit architecture documentation helps the engineers and programmers on
the development team to implement an embedded system that conforms to the
requirements. Throughout this book, real-world suggestions have been made for
implementing various components of a design that meet these requirements. In addition
to understanding these components and recommendations, it is important to understand
what development tools are available that aid in the implementation of an embedded
system. The development and integration of an embedded system’s various hardware and
software components are made possible through development tools that provide
everything from loading software into the hardware to providing complete control over
the various system components. Embedded systems aren’t typically developed on one
system alone—for example, the hardware board of the embedded system—but usually
require at least one other computer system connected to the embedded platform to
manage development of that platform. In short, a development environment is typically
made up of a target (the embedded system being designed) and a host (a PC, Sparc
Station, or some other computer system where the code is actually developed). The target
and host are connected by some transmission medium, whether serial, Ethernet, or other
method. Many other tools, such as utility tools to burn EPROMs or debugging tools, can
be used within the development environment in conjunction with host and target. The key
development tools in embedded design can be located on the host, on the target, or can
exist stand-alone. These tools typically fall under one of three categories: utility,
translation, and debugging tools. Utility tools are general tools that aid in software or
hardware development, such as editors (for writing source code), VCS (Version Control
Software) that manages software files, ROM burners that allow software to be put onto
ROMs, and so on. Translation tools convert code a developer intends for the target into a
form the target can execute, and debugging tools can be used to track down and correct
bugs in the system. Development tools of all types are as critical to a project as the
architecture design, because without the right tools, implementing and debugging the
system would be very difficult, if not impossible.
The Main Software Utility Tool: Writing Code in an Editor or IDE

Source code is typically written with a tool such as a standard ASCII text editor, or an
Integrated Development Environment (IDE) located on the host (development) platform,
as shown in Figure . An IDE is a collection of tools, including an ASCII text editor,
integrated into one application user interface. While any ASCII text editor can be used to
write any type of code, independent of language and platform, an IDE is specific to the
platform and is typically provided by the IDE’s vendor, a hardware manufacturer (in a
starter kit that bundles the hardware board with tools such as an IDE or text editor), OS
vendor, or language vendor (Java, C, etc.).



Embedded Systems Unit 4
Lecture No: 51
ICE, ISSUES IN EMBEDDED SYSTEM DESIGN

Hardware and firmware engineering design teams often run into problems and conflicts
when trying to work together. They come from different development environments,
have different tool sets and use different terminology. Often they are in different
locations within the same company or work for different companies. The two teams have
to work together, but often have conflicting differences in procedures and methods. Since
their resulting hardware and firmware work have to integrate successfully to build a
product, it is imperative that the hardware/firmware interface – including people,
technical disciplines, tools and technology – be designed properly.
This article provides seven principles of hardware/firmware co-design that, if followed,
will ensure that such collaborations are a success. They are:
Collaborate on the Design;
Set and Adhere to Standards;
Balance the Load;
Design for Compatibility;
Anticipate the Impacts;
Design for Contingencies; and
Plan Ahead.
Collaborate on the Design
Designing and producing an embedded product is a team effort. Hardware engineers
cannot produce the product without the firmware team; likewise, firmware engineers
cannot produce the product without the hardware team.
Even though the two groups know that the other exists, they sometimes don’t
communicate with each other very well. Yet it is very important that the interface where
the hardware and firmware meet—the registers and interrupts—be designed carefully
and with input from both sides.
Collaborating implies proactive participation on both sides. Figure 2.1 shows a picture of
a team rowing a boat. Some are rowing on the right side and some on the left. There is a
leader steering the boat and keeping the team rowing in unison. Both sides have to work
and work together. If one side slacks off, it is very difficult for the other side and the
leader to keep the boat going straight.


In order to collaborate, both the hardware and firmware teams should get together to
discuss a design or solve a problem. Collaboration needs to start from the very early
stages of conceptual hardware design all the way to the late stages of final firmware
development. Each side has a different perspective, that is, a view from their own
environment, domain, or angle.
Collaboration helps engineers increase their knowledge of the system as a whole,
allowing them to make better decisions and provide the necessary features in the design.
The quality of the product will be higher because both sides are working from the same
agenda and specification.
Documentation is the most important collaborative tool. It ranges from high-level product
specification down to low-level implementation details. The hardware specification
written by hardware engineers with details about the bits and registers forming the
hardware/firmware interface is the most valuable tool for firmware engineers. They have
to have this to correctly code up the firmware. Of course, it goes without saying that this
specification must be complete and correct.
Software tools are available on the market to assist in collaborative efforts. In some, the
chip specifications are entered and the tool generates a variety of hardware (Verilog,
VHDL. . . ), firmware (C, C++ . . . ), and documentation (*.rtf, *.xls, *.txt . . . ) files.
Other collaborative tools aid parallel development during the hardware design phase,
such as co-simulation, virtual prototypes, FPGA-based prototype boards, and modifying
old products.
Collaboration needs to happen, whether it is achieved by walking over to the desk on the
same floor, or by using email, phone, and video conferencing, or by occasional trips to
another site in the same country or halfway around the world.
This principle, collaboration, is the foundation to all of the other principles. As we shall
see, all of the other principles require some amount of collaboration between the
hardware and firmware teams to be successful.
Set and Adhere to Standards
Standards need to be set and followed within the organization. I group standards into
industry standards and internal standards.

Industry standards exist in many areas, such as ANSI C, POSIX, PCI Express, and JTAG.
Stay true to industry standards. Don’t change them. Changing a standard will break
the protocol, interoperability, and any off-the-shelf components, such as IP, device
drivers, and test suites.

For example, USB is widely known and used for connecting devices to computers. If this
standard is adhered to, any USB-enabled device can plug into any computer and a well-
defined behavior will occur (even if it is “unknown USB device installed”).
Industry standards evolve but still behave in a well-defined manner. USB has evolved,
from 1.1, to 2.0, and now 3.0, but it still has a well-defined behavior when plugging one
version into another.
By internal standards, I mean that you have set standards, rules, and guidelines that
everybody must follow within your organization. Modules are written in a certain
fashion, specific quality checks are performed, and documentation is written in a
specified format. Common practices and methods are defined to promote reuse and avoid
the complexity of multiple, redundant ways of doing the same thing.
In the same way that industry standards allow many companies to produce similar
products, following internal standards allows many engineers to work together and
encourages them to make refinements to the design. It provides consistency among
modules, creation of common test suites and debugging tools, and it spreads expertise
among all the engineers.
Look at the standards within your organization. Look for best practices that are being
used and formalize them to make them into standards that everybody abides by. There are
many methods and techniques in the industry that help with this, such as CMMI
(capability maturity model integration, an approach for improving processes;
sei.cmu.edu/cmmi), ISO (International Organization for Standardization, international
standards for business, government, and society; iso.org), and Agile (software
development methods promoting regular inspection and adaptation; agilealliance.org).
Adapt and change your internal standards as necessary. If a change needs to be made, it
needs to go through a review and approval process by all interested parties.

Once such a change has been approved, make sure that it is published within your
organization. Apply version numbers if necessary. There is no such thing as a
“customized standard.” Something is either a standard or customized, but not both.
If you break away from a standard, be sure you have a good reason.
Balance the Load
Hardware and firmware each have their strengths and weaknesses when it comes to
performing tasks. The challenge is to achieve the right balance between the two. What
applies in one embedded system will not necessarily apply in another. Differences exist
in CPU performance, bus architectures, clock speeds, memory, firmware load, and other
parameters.
Proper balance between hardware and firmware depends on the given product and
constraints. It requires studying what the tradeoffs will be for a given situation and
adjusting as necessary.
An embedded system without a proper balance between hardware and firmware may
have bottlenecks, performance issues, and stability problems. If firmware has too much
work, it might be slow responding to hardware and/or it might not be able to keep
hardware busy.
Alternatively, hardware might have too big a load, processing and moving data
excessively, which may impact its ability to keep up with firmware requests. The quality
of the system is also impacted by improper load balancing. The side with the heavier load
may be forced to take shortcuts, fall behind, or lose some work.
System-level requirements
In order to be competitive in the marketplace, embedded systems require that the
designers take into account the entire system when making design decisions.
End-product utility
The utility of the end product is the goal when designing an embedded system, not the
capability of the embedded computer itself. Embedded products are typically sold on the
basis of capabilities, features, and system cost rather than which CPU is used in them or
cost/performance of that CPU.
One way of looking at an embedded system is that the mechanisms and their associated
I/O are largely defined by the application. Then, software is used to coordinate the

mechanisms and define their functionality, often at the level of control system equations
or finite state machines. Finally, computer hardware is made available as infrastructure to
execute the software and interface it to the external world. While this may not be an
exciting way for a hardware engineer to look at things, it does emphasize that the total
functionality delivered by the system is what is paramount.
Design challenge:
Software- and I/O-driven hardware synthesis (as opposed to hardware-driven software
compilation/synthesis).
System safety & reliability
An earlier section discussed the safety and reliability of the computing hardware itself.
But, it is the safety and reliability of the total embedded system that really matters. The
Distributed system example is mission critical, but does not employ computer
redundancy. Instead, mechanical safety backups are activated when the computer system
loses control in order to safely shut down system operation.
A bigger and more difficult issue at the system level is software safety and reliability.
While software doesn't normally "break" in the sense of hardware, it may be so complex
that a set of unexpected circumstances can cause software failures leading to unsafe
situations. This is a difficult problem that will take many years to address, and may not be
properly appreciated by non-computer engineers and managers involved in system design
decisions ([12] discusses the role of computers in system safety).
Design challenges:
Reliable software.
Cheap, available systems using unreliable components.
Electronic vs. non-electronic design tradeoffs.
Controlling physical systems
The usual reason for embedding a computer is to interact with the environment, often by
monitoring and controlling external machinery. In order to do this, analog inputs and
outputs must be transformed to and from digital signal levels. Additionally, significant
current loads may need to be switched in order to operate motors, light fixtures, and other
actuators. All these requirements can lead to a large computer circuit board dominated by
non-digital components.

In some systems "smart" sensors and actuators (that contain their own analog interfaces,
power switches, and small CPUs) may be used to off-load interface hardware from the
central embedded computer. This brings the additional advantage of reducing the amount
of system wiring and number of connector contacts by employing an embedded network
rather than a bundle of analog wires. However, this change brings with it an additional
computer design problem of partitioning the computations among distributed computers
in the face of an inexpensive network with modest bandwidth capabilities.
Design challenge:
Distributed system tradeoffs among analog, power, mechanical, network, and digital
hardware plus software.
Power management
A less pervasive system-level issue, but one that is still common, is a need for power
management to either minimize heat production or conserve battery power. While the
push to laptop computing has produced "low-power" variants of popular CPUs,
significantly lower power is needed in order to run from inexpensive batteries for 30 days
in some applications, and up to 5 years in others.
Design challenge:
Ultra-low power design for long-term battery operation.



Embedded Systems Unit 4
Lecture No: 52
THE MAIN SOFTWARE UTILITY TOOL

The hardware components within an embedded system can only directly transmit, store, and
execute machine code, a basic language consisting of ones and zeros. Machine code was used
in earlier days to program computer systems, which made creating any complex application a
long and tedious ordeal. In order to make programming more efficient, machine code was made
visible to programmers through the creation of a hardware-specific set of instructions, where
each instruction corresponded to one or more machine code operations. These hardware-specific
sets of instructions were referred to as assembly language. Over time, other programming
languages, such as C, C++, Java, etc., evolved with instruction sets that were (among other
things) more hardware-independent. These are commonly referred to as high-level languages
because they are semantically further away from machine code, they more closely resemble
human languages, and are typically independent of the hardware. This is in contrast to a
low-level language, such as assembly language, which more closely resembles machine code.
Unlike high-level languages, low-level languages are hardware dependent, meaning there is a
unique instruction set for processors with different architectures. Table outlines this
evolution of programming languages.

Because machine code is the only language the hardware can directly execute, all other
languages need some type of mechanism to generate the corresponding machine code. This
mechanism usually includes one or some combination of preprocessing, translation, and
interpretation. Depending on the language, these mechanisms exist on the programmer's host
system (typically a nonembedded development system, such as a PC or Sparc station), or the
target system (the embedded system being developed). See Figure.
Preprocessing is an optional step that occurs before either the translation or interpretation

of source code, and whose functionality is commonly implemented by a preprocessor.
The preprocessor’s role is to organize and restructure the source code to make translation
or interpretation of this code easier. As an example, in languages like C and C++, it is a
preprocessor that allows the use of named code fragments, such as macros, that simplify
code development by allowing the use of the macro’s name in the code to replace
fragments of code. The preprocessor then replaces the macro name with the contents of
the macro during preprocessing. The preprocessor can exist as a separate entity, or can be
integrated within the translation or interpretation unit.

Many languages convert source code, either directly or after having been preprocessed,
through use of a compiler, a program that generates a particular target language—such as
machine code and Java byte code—from the source language. A compiler typically “translates”
all of the source code to some target code at one time. As is usually the case in embedded
systems, compilers are located on the programmer's host machine and generate target code for
hardware platforms that differ from the platform the compiler is actually running on. These
compilers are commonly referred to as cross-compilers. In the case of assembly language, the
compiler is simply a specialized cross-compiler referred to as an assembler, and it always
generates machine code. Other high-level language compilers are commonly referred to by the
language name plus the term “compiler,” such as “Java compiler” and “C compiler.” High-level
language compilers vary widely in terms of what is generated. Some generate machine code,
while others generate other high-level code, which then requires what is produced to be run
through at least one more compiler or interpreter, as discussed later in this section. Other
compilers generate assembly code, which then must be run through an assembler.

After all the compilation on the programmer's host machine is completed, the remaining
target code file is commonly referred to as an object file, and can contain anything from
machine code to Java byte code (discussed later in this section), depending on the
programming language used. As shown in Figure 2-4, after linking this object file to any
system libraries required, the object file, commonly referred to as an executable, is then
ready to be transferred to the target embedded system's memory.


A decompiler represents executable binary files in a readable form. More precisely, it
transforms binary code into text that software developers can read and modify. The software
security industry relies on this transformation to analyze and validate programs. The
analysis is performed on the binary code because the source code (the text form of the
software) traditionally is not available, as it is considered a commercial secret.

Programs to transform binary code into text form have always existed. Simple one-to-one
mapping of processor instruction codes into instruction mnemonics is performed by
disassemblers. Many disassemblers are available on the market, both free and commercial.
The most powerful disassembler is IDA Pro, published by Datarescue. It can handle binary
code for a huge number of processors and has an open architecture that allows developers to
write add-on analytic modules.

Decompilers are different from disassemblers in one very important aspect. While both
generate human-readable text, decompilers generate much higher-level text, which is more
concise and much easier to read. Compared to low-level assembly language, a high-level
language representation has several advantages:
✔ It is concise.
✔ It is structured.
✔ It doesn't require developers to know the assembly language.
✔ It recognizes and converts low-level idioms into high-level notions.
✔ It is less confusing and therefore easier to understand.
✔ It is less repetitive and less distracting.
✔ It uses data flow analysis.

Let's consider these points in detail.
Usually the decompiler's output is five to ten times shorter than the disassembler's output.
For example, a typical modern program contains from 400KB to 5MB of binary code. The
disassembler's output for such a program will include around 5-100MB of text, which can take
anything from several weeks to several months to analyze completely. Analysts cannot spend
this much time on a single program for economic reasons. The decompiler's output for a
typical program will be from 400KB to 10MB. Although this is still a big volume to read and
understand (about the size of a thick book), the time needed for analysis is divided by 10
or more.

The second big difference is that the decompiler output is structured. Instead of a linear
flow of instructions where each line is similar to all the others, the text is indented to
make the program logic explicit. Control flow constructs such as conditional statements,
loops, and switches are marked with the appropriate keywords.

The decompiler's output is easier to understand than the disassembler's output because it is
high level. To be able to use a disassembler, an analyst must know the target processor's
assembly language. Mainstream programmers do not use assembly languages for everyday tasks,
but virtually everyone uses high-level languages today. Decompilers remove the gap between
the typical programming languages and the output language. More analysts can use a
decompiler than a disassembler.

Decompilers convert assembly-level idioms into high-level abstractions. Some idioms can be
quite long and time consuming to analyze. The following one-line statement

x = y / 2;

can be transformed by the compiler into a series of 20-30 processor instructions. It takes
at least 15-30 seconds for an experienced analyst to recognize the pattern and mentally
replace it with the original line. If the code includes many such idioms, an analyst is
forced to take notes and mark each pattern with its short representation. All this slows
down the analysis tremendously.

Decompilers remove this burden from the analysts.

The amount of assembler instructions to analyze is huge. They look very similar to each
other and their patterns are very repetitive. Reading disassembler output is nothing like
reading a captivating story. In a compiler-generated program, 95% of the code will be really
boring to read and analyze. It is extremely easy for an analyst to confuse two
similar-looking snippets of code, and simply lose his way in the output. These two factors
(the size and the boring nature of the text) lead to the following phenomenon: binary
programs are never fully analyzed. Analysts try to locate suspicious parts by using some
heuristics and some automation tools. Exceptions happen when the program is extremely small
or an analyst devotes a disproportionately huge amount of time to the analysis.

Decompilers alleviate both problems: their output is shorter and less repetitive. The output
still contains some repetition, but it is manageable by a human being. Besides, this
repetition can be addressed by automating the analysis. Repetitive patterns in the binary
code call for a solution. One obvious solution is to employ the computer to find patterns
and somehow reduce them into something shorter and easier for human analysts to grasp. Some
disassemblers (including IDA Pro) provide a means to automate analysis. However, the number
of available analytical modules stays low, so repetitive code continues to be a problem. The
main reason is that recognizing binary patterns is a surprisingly difficult task. Any
“simple” action, including basic arithmetic operations such as addition and subtraction, can
be represented in an endless number of ways in binary form. The compiler might use the
addition operator for subtraction and vice versa. It can store constant numbers somewhere in
its memory and load them when needed. It can use the fact that, after some operations, the
register value can be proven to be a known constant, and just use the register without
reinitializing it. The diversity of methods used explains the small number of available
analytical modules.
Decompilers remove the gap between the typical programming languages and the output
language. More analysts can use a decompiler than a disassembler. Decompilers convert
assembly level idioms into high-level abstractions. Some idioms can be quite long and
time consuming to analyze. The following one line code

B.Anilkumar, P Kalyan Chakravarthi ,ECE Department, GMRIT, RAJAM


Embedded Systems Unit 4
Lecture No: 52
x = y / 2; can be transformed by the compiler into a series of 20-30 processor
instructions. It takes at least 15-30 seconds for an experienced analyst to recognize the
pattern and mentally replace it with the original line.. If the code includes many such
idioms, an analyst is forced to take notes and mark each pattern with its short
representation. All this slows down the analysis tremendously. Decompilers remove this
burden from the analysts. The amount of assembler instructions to analyze is huge. They
look very similar to each other and their patterns are very repetitive. Reading
disassembler output is nothing like reading a captivating story. In a compiler generated
program 95% of the code will be really boring to read and analyze. It is extremely easy
for an analyst to confuse two similar looking snippets of code, and simply lose his way
in the output. These two factors (the size and the boring nature of the text) lead to the
following phenomenon: binary programs are never fully analyzed. Analysts try to locate
suspicious parts by using some heuristics and some automation tools. Exceptions happen
when the program is
extremely small or an analyst devotes a disproportionally huge amount of time to the
analysis.
Decompilers alleviate both problems: their output is shorter and less repetitive. The output still contains some repetition, but it is manageable by a human being. Besides, this repetition can be addressed by automating the analysis. Repetitive patterns in the binary code call for a solution. One obvious solution is to employ the computer to find patterns and somehow
reduce them into something shorter and easier for human analysts to grasp. Some
disassemblers (including IDA Pro) provide a means to automate analysis. However, the
number of available analytical modules stays low, so repetitive code continues to be a
problem. The main reason is that recognizing binary patterns is a surprisingly difficult
task. Any “simple” action, including basic arithmetic operations such as addition and
subtraction, can be represented in an endless number of ways in binary form. The
compiler might use the addition operator for subtraction and vice versa. It can store
constant numbers somewhere in its memory and load them when needed. It can use the
fact that, after some operations, the register value can be proven to be a known constant, and just use the register without reinitializing it. The diversity of methods used explains
the small number of available analytical modules. The situation is different with a
decompiler. Automation becomes much easier because the decompiler provides the
analyst with high level notions. Many patterns are automatically recognized and replaced
with abstract notions. The remaining patterns can be detected easily because of the
formalisms the decompiler introduces. For example, the notions of function parameters
and calling conventions are strictly formalized. Decompilers make it extremely easy to
find the parameters of any function call, even if those parameters are initialized far away
from the call instruction. With a disassembler, this is a daunting task, which requires
handling each case individually.
Decompilers, in contrast with disassemblers, perform extensive data flow analysis on the input.
This means that questions such as, “Where is the variable initialized?” and, “Is this
variable used?” can be answered immediately, without doing any extensive search over
the function. Analysts routinely pose and answer these questions, and having the answers
immediately increases their productivity. If decompilers are so useful, why are there so few of them? Two reasons: 1) they are tough to build because decompilation theory is in its infancy; and 2) decompilers have to make many
assumptions about the input file, and some of these assumptions may be wrong. Wrong
assumptions lead to incorrect output. In order to be practically useful, decompilers must
have a means to remove incorrect assumptions and be interactive in general.
Building interactive applications is more difficult than building offline (batch) applications. In short, these two obstacles make creating a decompiler a difficult endeavor both in theory and in practice. Given all the above, we are proud to present our analytical tool, the Hex-Rays Decompiler. It embodies almost 10 years of proprietary research and implements many new approaches to the problems discussed above. The highlights of our decompiler are:
✔ It can handle real world applications.
✔ It has both automatic (batch) and interactive modes.
✔ It is compiler-agnostic to the maximum possible degree.
✔ Its core does not depend on the processor.

✔ It has a type system powerful enough to express any C type.
✔ It has been tested on thousands of files, including huge applications consisting of tens of megabytes.
✔ It is interactive: analysts may change the output, rename the variables and specify
their type.
✔ It is fast: it decompiles a typical function in under a second.



Embedded Systems Unit 4
Lecture No: 54
CAD AND HARDWARE, TRANSLATION TOOLS

Target Hardware Debugging


During the development process, a host system is used for developing the code.
The code is then located and burned into the target board.
The target board hardware and software are later copied to get the final embedded system.
The final system functions exactly as the one tested, debugged, and finalized during the development process.
Host system (a PC, workstation, or laptop) typically has:
High-performance processor with caches and large RAM memory
ROM BIOS (read-only memory basic input-output system)
Very large memory on disk
Keyboard, display monitor, mouse, and network connection
Program development kit for a high-level language program, or an IDE
Host processor compiler and cross-compiler
Cross-assembler

Sophisticated target system:

Target and final systems:

The target system differs from the final system.
The target system interfaces with the host computer and also works as a standalone system.
In the target system, codes might be downloaded repeatedly during the development phase.





Embedded Systems Unit 4
Lecture No: 57
COMPILERS, LINKERS, AND DEBUGGING TOOLS

Computer-Aided Design (CAD) and the Hardware:


Computer-Aided Design (CAD) tools are commonly used by hardware engineers to
simulate circuits at the electrical level in order to study a circuit’s behavior under various
conditions before they actually build the circuit.

The figure is a snapshot of a popular standard circuit simulator, called PSpice. This circuit
simulation software is a variation of another circuit simulator that was originally
developed at University of California, Berkeley called SPICE (Simulation Program with
Integrated Circuit Emphasis). PSpice is the PC version of SPICE, and is an example of a
simulator that can do several types of circuit analysis, such as nonlinear transient,
nonlinear dc, linear ac, noise, and distortion, to name a few. As shown in the figure, circuits created in this simulator can be made up of a variety of active and/or passive elements.
Many commercially available electrical circuit simulator tools are generally similar to
PSpice in terms of their overall purpose, and mainly differ in what analysis can be done,
what circuit components can be simulated, or the look and feel of the user interface of the
tool. Because of the importance of and costs associated with designing hardware, there
are many industry techniques in which CAD tools are utilized to simulate a circuit. Given
a complex set of circuits in a processor or on a board, it is very difficult, if not
impossible, to perform a simulation on the whole design, so a hierarchy of simulators and
models are typically used. In fact, the use of models is one of the most critical factors in
hardware design, regardless of the efficiency or accuracy of the simulator. At the highest level, a behavioral model of the entire circuit is created for both analog and digital
circuits, and is used to study the behavior of the entire circuit. This behavioral model can
be created with a CAD tool that offers this feature, or can be written in a standard
programming language. Then depending on the type and the makeup of the circuit,
additional models are created down to the individual active and passive components of
the circuit, as well as for any environmental dependencies (temperature, for example) that
the circuit may have. Aside from using some particular method for writing the circuit
equations for a specific simulator, such as the tableau approach or modified nodal
method, there are simulation techniques for handling complex circuits that include one or some combination of: dividing more complex circuits into smaller circuits and then combining the results; utilizing special characteristics of certain types of circuits; and utilizing high-speed vector and/or parallel computers.
Translation Tools—Preprocessors, Interpreters, Compilers, and Linkers:
Translating code was introduced earlier, along with a brief introduction to some of the tools used in translating code, including preprocessors, interpreters, compilers, and linkers. As a
review, after the source code has been written, it needs to be translated into machine
code, since machine code is the only language the hardware can directly execute. All
other languages need development tools that generate the corresponding machine code
the hardware will understand. This mechanism usually includes one or some combination
of preprocessing, translation, and/or interpretation machine code generation techniques.
These mechanisms are implemented within a wide variety of translating development
tools. Preprocessing is an optional step that occurs either before the translation or
interpretation of source code, and whose functionality is commonly implemented by a
preprocessor. The preprocessor’s role is to organize and restructure the source code to
make translation or interpretation of this code easier. The preprocessor can be a separate
entity, or can be integrated within the translation or interpretation unit.
Many languages convert source code, either directly or after having been preprocessed, to target code through the use of a compiler, a program which generates some target language, such as machine code or Java byte code, from the source language, such as assembly, C, or Java.
A compiler typically translates all of the source code to a target code at one time. As is usually the case in embedded systems, most compilers are located on the programmer’s host machine and generate target code for hardware platforms that differ from the platform the
compiler is actually running on. These compilers are commonly referred to as cross-
compilers. In the case of assembly, an assembly compiler is a specialized cross-compiler
referred to as an assembler, and will always generate machine code. Other high-level
language compilers are commonly referred to by the language name plus “compiler” (e.g.,
Java compiler, C compiler). High-level language compilers can vary widely in terms of
what is generated. Some generate machine code while others generate other high-level
languages, which then require what is produced to be run through at least one more
compiler. Still other compilers generate assembly code, which then must be run through
an assembler. After all the compilation on the programmer’s host machine is completed,
the remaining target code file is commonly referred to as an object file, and can contain
anything from machine code to Java byte code, depending on the programming language
used. As shown in the figure, a linker integrates this object file with any other required
system libraries, creating what is commonly referred to as an executable binary file,
either directly onto the board’s memory or ready to be transferred to the target embedded
system’s memory by a loader.
EMBEDDED SOFTWARE DEVELOPMENT TOOLS
Application programs are typically developed, compiled, and run on the host system.
Embedded programs are targeted to a target processor (different from the development/host processor and operating environment) that drives a device or controls an embedded environment.
What tools are needed to develop, test, and locate embedded software into the target processor and its operating environment?
Host: where the embedded software is developed, compiled, tested, debugged, and optimized prior to its translation into the target device. (The host has keyboards, editors, monitors, printers, more memory, etc. for development, while the target may have none of these capabilities for developing the software.)
Target: after development, the code is cross-compiled, cross-assembled, linked (into the target processor's instruction set), and located into the target system.
Cross-Compilers – Native tools are good for the host, but to port/locate embedded code to the target, the host must have a tool chain that includes a cross-compiler: one which runs on the host but produces code for the target processor. Cross-compiling doesn't guarantee correct target code, due to, e.g., differences in word sizes, instruction sizes, variable declarations, and library functions.
Cross-Assemblers and Tool Chain – The host uses a cross-assembler to assemble code in the target's instruction syntax for the target. A tool chain is a collection of compatible translation tools, which are 'pipelined' to produce a complete binary/machine code that can be linked and located into the target processor.
EMBEDDED SOFTWARE DEVELOPMENT TOOLS
Linker/Locators for Embedded Software
Native linkers are different from cross-linkers (or locators), which perform additional tasks to locate embedded binary code into target processors.
Address Resolution –
Native Linker: produces host machine code on the hard-drive (in a named file), which the
loader loads into RAM, and then schedules (under the OS control) the program to go to
the CPU.
In RAM, the application program's logical addresses (e.g., for variables/operands and function calls) are ordered or organized by the linker. The loader then maps the logical addresses into physical addresses – a process called address resolution – and loads the code accordingly into RAM. In the process, the loader also resolves the addresses for calls to the native OS routines.

Locator: produces target machine code (which the locator glues into the RTOS), and the combined code (called a map) gets copied into the target ROM. The locator doesn't stay in the target environment, hence all addresses are resolved, guided by locating tools and directives, prior to running the code.
EMBEDDED SOFTWARE DEVELOPMENT TOOLS
Locating Program Components – Segments
The unchanging embedded program (binary code) and constants must be kept in ROM so they are remembered even on power-off.

Changing program segments (e.g., variables) must be kept in RAM.
Tool chains separate program parts using the segments concept.
Tool chains (for embedded systems) also require the 'start-up' code to be in a separate segment, 'located' at a microprocessor-defined location where the program starts execution. Some cross-compilers have defaults, or allow the programmer to specify segments for program parts; cross-assemblers have no default behavior, and the programmer must specify segments for program parts.
Telling/directing the locator where (in which segments) to place parts:
The –Z directive tells which segments (a list of segments) to use and the start address of the first segment.
The first line tells which segments to use for the code parts, starting at address 0; the second line tells which segments to use for the data parts, starting at 0x8000.
The proper names and address information for directing the locator are usually in the cross-compiler documentation.
Other directives: the range of RAM and ROM addresses, the end-of-stack address (the segment is placed below this address so the stack can grow toward its end).
Segments/parts can also be grouped, and the group is located as a unit.
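As an illustration of the –Z directives described above, a locator command file might contain lines like the following sketch (the syntax follows the IAR XLINK style; the segment names here are hypothetical, not taken from any particular project):

```
-Z(CODE)IVECS,CSTART,CODE=0
-Z(DATA)IDATA,UDATA,CSTACK=8000
```

The first line places the listed code segments one after another starting at address 0, and the second places the data segments starting at 0x8000; the actual segment names and addresses come from the cross-compiler documentation.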
Initialized Data and Constant Strings
Segments with initialized values in ROM are shadowed (copied into RAM) so that initialized variables are reset correctly, in RAM, each time the system comes up (especially initial values that come from #define constants and can be changed).
In C programs, a host compiler may set all uninitialized variables to zero or null, but this is not generally the case for embedded software cross-compilers (unless the start-up code in ROM does so).
If parts of a constant string are expected to be changed during run-time, the cross-compiler must generate code to allow 'shadowing' of the string from ROM.
The output files of locators are maps, which list the addresses of all segments.
Maps are useful for debugging.
An 'advanced' locator is capable of running (albeit slowly) start-up code in ROM, which (could decompress and) load the embedded code from ROM into RAM so it executes quickly, since RAM is faster, especially for RISC microprocessors.

Getting Embedded Software into Target System:
One way of moving maps into ROM or PROM is to create a ROM using hardware tools, or to use a PROM programmer (for small and changeable software, during debugging).
If a PROM programmer is used (for changing or debugging software), place the PROM in a socket (which makes it erasable – for EPROM – or removable/replaceable) rather than 'burnt' into circuitry.
PROMs can be pushed into sockets by hand, and pulled using a chip puller.
The PROM programmer must be compatible with the format (syntax/semantics) of the map.
ROM Emulators – Another approach is to use a ROM emulator (hardware) which emulates the target system, has all the ROM circuitry, and has a serial or network interface to the host system. The locator loads the map into the emulator, especially for debugging purposes.
The software on the host that loads the map file into the emulator must understand (be compatible with) the map's syntax/semantics.
Getting Embedded Software into Target System – 2
Using Flash Memory
For debugging, a flash memory can be loaded with the target map code using software on the host over a serial port or network connection (just like using an EPROM).
Advantages:
No need to pull the flash (unlike a PROM) for debugging different embedded code.
Transferring code into flash (over a network) is faster and hassle-free.
New versions of the embedded software (supplied by the vendor) can be loaded into flash memory by customers over a network. This requires (a) protecting the flash programmer by saving it in RAM, executing it from there, and reloading it into flash after the new version is written, and (b) the ability to complete loading the new version even if there are crashes, protecting the start-up code as in (a).
Modifying and/or debugging the flash-programming software requires moving it into RAM, modifying/debugging it there, and reloading it into the target flash memory using the above methods.



Embedded Systems Unit 4
Lecture No: 58
QUALITY ASSURANCE AND TESTING OF THE DESIGN

Among the goals of testing and assuring the quality of a system are finding bugs within a
design and tracking whether the bugs are fixed. Quality assurance and testing is similar to
debugging, discussed earlier in this chapter, except that the goals of debugging are to
actually fix discovered bugs. Another main difference between debugging and testing the
system is that debugging typically occurs when the developer encounters a problem in
trying to complete a portion of the design, and then typically tests-to-pass the bug fix
(meaning tests only to ensure the system minimally works under normal circumstances).
With testing, on the other hand, bugs are discovered as a result of trying to break the
system, including both testing-to-pass and testing-to-fail, where weaknesses in the system
are probed. Under testing, bugs usually stem from either the system not adhering to the
architectural specifications— i.e., behaving in a way it shouldn’t according to
documentation, not behaving in a way it should according to the documentation,
behaving in a way not mentioned in documentation— or the inability to test the system.
The types of bugs encountered in testing depend on the type of testing being done. In
general, testing techniques fall under one of four models: static black box testing, static
white box testing, dynamic black box testing, or dynamic white box testing (see the matrix
in Figure 12-9). Black box testing occurs with a tester that has no visibility into the
internal workings of the system (no schematics, no source code, etc.). Black box testing is
based on general product requirements documentation, as opposed to white box testing (also referred to as clear box or glass box testing), in which the tester has access to source
code, schematics, and so on. Static testing is done while the system is not running,
whereas dynamic testing is done when the system is running.


Within each of the models (shown in Figure 12-10), testing can be further broken down
to include unit/module testing (incremental testing of individual elements within the
system), compatibility testing (testing that the element doesn’t cause problems with other
elements in the system), integration testing (incremental testing of integrated elements),
system testing (testing the entire embedded system with all elements integrated),
regression testing (rerunning previously passed tests after system modification), and
manufacturing testing (testing to ensure that manufacturing of system didn’t introduce
bugs), just to name a few. From these types of tests, an effective set of test cases can be
derived that verify that an element and/or system meets the architectural specifications, as
well as validate that the element and/or system meets the actual requirements, which may
or may not have been reflected correctly or at all in the documentation. Once the test
cases have been completed and the tests are run, how the results are handled can vary depending on the organization, typically ranging between informal, where information is
exchanged without any specific process being followed, and formal design reviews, or
peer reviews where fellow developers exchange elements to test, walkthroughs where the
responsible engineer formally walks through the schematics and source code, inspections
where someone other than the responsible engineer does the walk through, and so on.
Specific testing methodologies and templates for test cases, as well as the entire testing
process, have been defined in several popular industry quality assurance and testing
standards, including ISO9000 Quality Assurance standards, Capability Maturity Model
(CMM), and the ANSI/IEEE 829 Preparation, Running, and Completion of Testing
standards.



Embedded Systems Unit 4
Lecture No: 59
TESTING ON HOST MACHINE, SIMULATORS AND LABORATORY TOOLS

Debugging Tools:
Aside from creating the architecture, debugging code is probably the most difficult task
of the development cycle. Debugging is primarily the task of locating and fixing errors
within the system. This task is made simpler when the programmer is familiar with the
various types of debugging tools available and how they can be used (the type of
information shown in Table ). As seen from some of the descriptions in Table ,
debugging tools reside and interconnect in some combination of standalone devices, on
the host, and/or on the target board.


Some of these tools are active debugging tools and are intrusive to the running of the
embedded system, while other debug tools passively capture the operation of the system
with no intrusion as the system is running. Debugging an embedded system usually
requires a combination of these tools in order to address all of the different types of
problems that can arise during the development process.

