
DEVICES AND COMMUNICATION BUSES FOR DEVICES NETWORK

SPI, SCI, SI and SDIO Ports/Devices for Serial Data Communication

Microcontroller internal devices for SPI or SCI or SI


Synchronous Serial Peripheral Interface (SPI) port, for example, in 68HC11 and 68HC12 microcontrollers
Asynchronous UART Serial Communications Interface (SCI), for example, the SCI port in the 68HC11/12
Asynchronous UART-mode Serial Interface (SI), for example, the SI in the 8051

SPI: Serial Peripheral Interface


Full-duplex synchronous communication. The SPI is a synchronous serial interface in which 8-bit (1 byte) data can be shifted in and/or out one bit at a time. It can be used to communicate with a serial peripheral device or with another microcontroller (e.g., a 68HC12) that has an SPI interface. The Serial Peripheral Interface (SPI) circuit is a synchronous serial data link that provides communication with external devices in master or slave mode.

SPI Cont.
The Serial Peripheral Interface is essentially a shift register that serially transmits data bits to other SPIs. During a data transfer, one SPI system acts as the "master", which controls the data flow, while the other devices act as "slaves", which have data shifted in and out by the master.
SCLK, MOSI and MISO carry the serial clock from the master, output from the master and input to the master, respectively. Selection of a device as master or slave is done by a signal on the hardware input SS (slave select when 0) pin.

Operation of SPI protocol


A slave device is selected when the master asserts its NSS/SS signal. If multiple slave devices exist, the master generates a separate slave-select signal for each slave. The SPI system consists of two data lines and two control lines:
Master Out Slave In (MOSI): this data line carries the output data from the master, shifted into the input(s) of the slave(s).

Master In Slave Out (MISO): This data line supplies the output data from a slave to the input of the master. There may be no more than one slave transmitting data during any particular transfer.
Serial Clock (SPCK/SCLK): This control line is driven by the master and regulates the flow of the data bits. The master may transmit data at a variety of baud rates; the SPCK/SCLK line cycles once for each bit that is transmitted. Slave Select (NSS/SS): This control line allows slaves to be turned on and off by hardware.
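The master/slave shift-register exchange described above can be modelled in a few lines. This is an illustrative Python sketch, not tied to any particular microcontroller: after eight SCLK cycles the master's and slave's bytes have swapped places, which is why a master must transmit a (possibly dummy) byte in order to read one.

```python
# Illustrative model of one full-duplex SPI byte exchange: master and slave
# each hold an 8-bit shift register; on every SCLK cycle each shifts its MSB
# out on its data line and shifts the other device's bit in at the bottom.
def spi_exchange(master_byte: int, slave_byte: int) -> tuple:
    for _ in range(8):
        mosi = (master_byte >> 7) & 1      # master drives MOSI with its MSB
        miso = (slave_byte >> 7) & 1       # slave drives MISO with its MSB
        master_byte = ((master_byte << 1) & 0xFF) | miso
        slave_byte = ((slave_byte << 1) & 0xFF) | mosi
    return master_byte, slave_byte
```

After eight clocks the two registers have exchanged contents, so a transfer is always simultaneously a write and a read.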

Operation of SPI protocol

Figure 1: Master/slave serial peripheral interface.

Operation of SPI protocol


Shift Register (SSPSR): loaded by incoming SPI data and by outgoing data from SSPBUF


Operation of SPI protocol


Serial Buffer (SSPBUF): received data is placed here at the end of an SPI transfer, and data placed here for sending is loaded into the SSPSR for the next transfer.


Operation of SPI protocol


Control / SPI clock: SCK carries the SPI clock from the master to the slave. SS controls whether the slave is selected or idle.


The SPI Registers


An SPI transmission is always initiated by the master; the peripheral device is called the slave. The master initiates a transfer by storing a byte in the SPI data register (SP0DR in the 68HC12).
SP0CR1 : SPI Control Register 1
SP0CR2 : SPI Control Register 2
SP0BR : SPI Baud Rate Register
SP0SR : SPI Status Register
SP0DR : SPI Data Register

SPI Registers Addresses

Table 2: 68HC12 SPI Clock Rate Selection

Master with multiple slaves


Figure 2: Master and Multiple Slaves (the master's SCLK, MOSI and MISO lines connect to SPI Slave #1 and SPI Slave #2).

Examples
The 68HC11/12 use synchronous serial communication (the SPI protocol).
The 68HC12 SPI device operates at up to 4 Mbps; the 68HC11 SPI device operates at up to 2 Mbps.

SCI: Serial Communications Interface Port


UART asynchronous-mode port. Full-duplex mode. The SCI is programmable for transmission and for reception. This interface uses three dedicated pins: transmit data (TXD), receive data (RXD) and SCI serial clock (SCLK). It supports industry-standard asynchronous bit rates and protocols, as well as high-speed (up to 5 Mbps with a 40 MHz clock) synchronous data transmission.

The SCI consists of separate transmit and receive sections whose operations can be asynchronous with respect to each other.

SCI Features
Three-Pin Interface:

TXD - Transmit Data
RXD - Receive Data
SCLK - Serial Clock (optional, for synchronous communication)

781.25 Kbps NRZ Asynchronous Communications Interface (50-MHz System Clock) 6.25 Mbps Synchronous Serial Mode (50-MHz System Clock)

Multidrop Mode for Multiprocessor Systems:

Two Wakeup Modes: Idle Line and Address Bit
Wired-OR Mode

On-Chip or External Baud Rate Generation/Interrupt Timer Four Interrupt Priority Levels Fast or Long Interrupts

SCI Full duplex signals

SCI Bit format

The SCI configuration allows a number of options for data transmission and reception. In the simplest configuration, ten bits are involved: a start bit (logical 0), the 8 data bits, and a stop bit (logical 1). An example transmission waveform, such as what you might see if you hooked a logic analyzer to the TX pin, accompanies this slide. Note that the data bits are sent LSB (bit 0) first and MSB (bit 7) last. Thus, the middle 8 bits in the pulse train are sent in the order "10110100" as a time-domain signal, corresponding to an actual data byte of 0x2D (%0010 1101), which is an ASCII minus ("-") sign.
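The framing and bit order are easy to check with a small model (a Python sketch; the function name is illustrative):

```python
# Sketch of the 10-bit SCI frame described above: a start bit (0), the
# 8 data bits sent LSB first, then a stop bit (1).
def sci_frame(byte: int) -> list:
    data_bits = [(byte >> i) & 1 for i in range(8)]   # bit 0 (LSB) goes first
    return [0] + data_bits + [1]
```

Framing 0x2D reproduces the "10110100" data-bit sequence given in the text.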

SCI Bit format cont..

The data rate out of/into the SCI is determined by the baud rate, and is an essential value to consider in any SCI setup.
For our Dragon 12 running with a PLL controlled clock at 24 MHz, the baud rate is determined by the following formula:
SCI BAUD rate = 24,000,000/(16 x BR)

where BR is the content of the SCI1BDH/L Baud Rate Registers at addresses $00D0 and $00D1. For example, for a baud rate of 9600 (baud period ~ 100 usec) , we want a BR of about 156 = $9C, so we write $00 to SCI1BDH and $9C to SCI1BDL. Note that the baud rate is the same for transmission and reception for a given SCI port; we can't set one baud rate for transmission, and another for reception on a given port.
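The BR computation above can be captured in a helper (a Python sketch; the function name is mine):

```python
# SCI baud rate = bus_clock / (16 * BR), so the register value for a desired
# baud rate is the nearest integer BR = bus_clock / (16 * baud).
def sci_br(bus_clock_hz: int, baud: int) -> int:
    return round(bus_clock_hz / (16 * baud))
```

For the 24 MHz example, sci_br gives 156 ($9C), and the actual rate 24,000,000 / (16 × 156) ≈ 9615 baud is within normal tolerance of 9600.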

Serial Transmission Rules


The basic mechanics of serial transmission are simple:
(1) We load a data register with 8 bits of data; (2) Once the data register is full, the data is transferred to a shift register automatically by the hardware; (3) The data in the shift register are padded with a start/stop bit, and then shifted out on the TX pin a bit at a time at the set baud rate. For serial reception, the basic mechanics are reversed:

(1) We wait for the shift register connected to the RX pin to fill with serial data;
(2) Once the RX shift register is filled, the data is immediately transferred to the data register (the transfer is automatically done by the hardware);

(3) We then read the data register to see what data was received.

68HC11 SCI signals at Port PD

SI: Serial Interface Port

Serial Interface (SI) Port


UART 10T or 11T mode asynchronous port interface. It also functions as a USRT (universal synchronous receiver and transmitter). SI is therefore a synchronous-asynchronous serial communication port, called a USART (universal synchronous-asynchronous receiver and transmitter) port.

SI is an internal serial IO device in 8051.

SI Half-duplex signals Mode 0

SI Full duplex signals Mode 1, 2 or 3

SI Control bits programming


Mode 0 is the half-duplex synchronous mode of operation. When a 12 MHz crystal is attached to the 8051 processor, the clock bits are at intervals of 1 µs. Modes 1, 2 and 3 provide full-duplex asynchronous serial communication.

SDIO: Secure Digital Input Output

Secure Digital Association (SD)


SD is an association of over 700 companies that grew from 3 companies in 1999. It created a new flash-memory card format, called the SD format, for IOs.

The SDIO card has become a popular feature in handheld mobile devices, PDAs, digital cameras and embedded systems.
The SD card size is just 0.14 cm × 2.4 cm × 3.2 cm. It is allowed to stick out of the handheld device's open slot, which can be at the top in order to facilitate insertion of the SD card.

SDIO card host controller


A processing element uses the SDIO host controller to process the IOs.
The controller may include an SPI controller to support SPI mode for the IOs, and it also supports the needed protocol functionality internally.

SD Memory Card System Concept


1. Read-Write Property
Read/Write (RW) cards
Read Only Memory (ROM) cards

2. Supply Voltage
High Voltage SD Memory Cards: 2.7-3.6 V
Dual Voltage SD Memory Cards: operate within a Low Voltage Range (T.B.D) as well as 2.7-3.6 V

3. Card Capacity
Standard Capacity SD Memory Cards: up to and including 2 GB
High Capacity SD Memory Cards: up to and including 32 GB (specification 2.00)
Two types of High Capacity SD Memory Card are specified:
Type A (Single State Card) has a single High Capacity memory area
Type B (Dual State Card) has both a High Capacity memory area and a Standard Capacity memory area; in a Type B card, only one memory area can be used at any given time

4. Speed Class
Four Speed Classes are defined; they indicate the minimum performance of the cards:
Class 0 - These cards do not specify performance. The class includes all legacy cards prior to specification 2.00, regardless of their performance.
Class 2 - Performance greater than or equal to 2 MB/sec.
Class 4 - Performance greater than or equal to 4 MB/sec.
Class 6 - Performance greater than or equal to 6 MB/sec.

5. Bus Topology
The SD bus includes the following 9 signals:
CLK : host-to-card clock signal
CMD : bidirectional command/response signal
DAT0 - DAT3 : 4 bidirectional data signals
VDD, VSS1, VSS2 : power and ground signals

6. Bus Protocol
Communication over the SD bus is based on command and data bit streams that are initiated by a start bit and terminated by a stop bit.
Command: a token that starts an operation. A command is sent from the host either to a single card (addressed command) or to all connected cards (broadcast command), and is transferred serially on the CMD line.
Response: a token sent from an addressed card, or (synchronously) from all connected cards, to the host as an answer to a previously received command. A response is transferred serially on the CMD line.

Data: data can be transferred from the card to the host or vice versa. Data is transferred via the data lines.
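A command token on the CMD line is 48 bits: a start bit (0) and transmission bit (1), a 6-bit command index, a 32-bit argument, a 7-bit CRC, and an end bit (1). The sketch below builds one in Python; the helper names are mine, while the CRC7 generator x^7 + x^3 + 1 (0x89) is the one the SD specification defines for the CMD line.

```python
# CRC7 over the first 40 bits of the token, MSB first, generator 0x89.
def crc7(data: bytes) -> int:
    crc = 0
    message_bits = [(byte >> (7 - i)) & 1 for byte in data for i in range(8)]
    for bit in message_bits + [0] * 7:     # 7 zero bits flush out the remainder
        crc = (crc << 1) | bit
        if crc & 0x80:                     # divide by x^7 + x^3 + 1
            crc ^= 0x89
    return crc & 0x7F

# 48-bit command token: 0x40 | index carries the start (0) and transmission
# (1) bits plus the 6-bit index; then the 32-bit argument, CRC7, end bit 1.
def sd_command(index: int, argument: int) -> bytes:
    head = bytes([0x40 | (index & 0x3F)]) + argument.to_bytes(4, "big")
    return head + bytes([(crc7(head) << 1) | 1])
```

CMD0 (GO_IDLE_STATE) with a zero argument produces the well-known token ending in 0x95.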

SDIO Signals Description

Figure-3: Command Token Format

Figure-4: Response Token Format

Figure-5: no response and no data Operations

Figure-6: (Multiple) Block Read Operation

Figure-7: (Multiple) Block Write Operation

Data packet format for the SD card:


Usual data (8-bit width): usual data are sent LSByte (Least Significant Byte) first and MSByte (Most Significant Byte) last. But within each individual byte, the order is MSB (Most Significant Bit) first, LSB (Least Significant Bit) last.

Figure-8: Data Packet Format - Usual Data
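The byte/bit ordering just described can be illustrated for a 16-bit quantity (a Python sketch; the helper name is mine):

```python
# SD 'usual data' ordering: least significant BYTE of the quantity goes out
# first, but the most significant BIT goes first within each byte.
def sd_usual_data_bits(value: int, nbytes: int) -> list:
    bits = []
    for i in range(nbytes):                # LSByte of the quantity first
        byte = (value >> (8 * i)) & 0xFF
        bits += [(byte >> b) & 1 for b in range(7, -1, -1)]   # MSbit first
    return bits
```

For 0xAB12, the byte 0x12 is serialized first (MSbit first), then 0xAB.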

SDIO Hardware Design

COMMUNICATION BUSES FOR DEVICES NETWORK


Parallel port at devices

Parallel Port
8-bit IOs Short distances, generally within a circuit board or IC or nearby external devices

Parallel Port in the device


Advantage
All 8 bits transfer over the wires in parallel. High data transfer rate.

Disadvantage
More wires are needed. The capacitive effect between the parallel wires reduces the length over which parallel communication can take place. High capacitance delays the transition of the bits at the other end from 0 to 1 or from 1 to 0. High capacitance can also result in noise and crosstalk (induced signals) between the wires.

Parallel Port Interfacing

Parallel port interfacings for keypad, LCD display and modem

Parallel IO port handshaking and Interfacing

Handshaking signals to and from an external peripheral device for input at port
The device makes a strobe request to the port, STROBE, when it is ready to send the byte, and the system I/O port sends the acknowledgement, PORT READY. The system I/O port receives the data into a buffer and then issues an interrupt signal, INT, to the processor to enable execution of an ISR.

Handshaking signals to and from an external peripheral device for Output at port
The device sends the ACKNOWLEDGE message when it accepts the byte, and the I/O port sends the BUFFER FULL signal to inform it that the buffer is full. The processor is sent the INTERRUPT REQUEST message when the transmitting buffer is empty (available for the next write).

Port Interrupt to processor


When the receiving buffer is full (available for next read)
When the transmitting buffer is empty (available for next write)

Bidirectional Port Handshaking signals


STROBE, PORT READY, BUFFER-FULL, ACKNOWLEDGE, INTERRUPT REQUEST

SELF STUDY TOPICS


1. Parallel Port Interfacing with Switches and LEDs
2. Parallel Port Interfacing with Matrix Keyboard
3. Parallel Port Interfacing with Stepper Motor
4. Parallel Port Interfacing with LCD Controller

SERIAL BUS COMMUNICATION PROTOCOLS


Introduction to I2C
I2C is a well-known bus invented by Philips. I2C stands for Inter-Integrated Circuit. This type of bus first became popular in TV circuit boards and then came to the computer environment. It has a speed of 100 kbps, extendable to 400 kbps in fast mode. The one requirement is that the processor have the I2C protocol bus capability inbuilt.

Technical Specifications

Bit format of I2C

State Diagram of I2C Bus Master


(State diagram of the I2C bus master: from Idle, send the Address; then either DATA Read (get DATA) or DATA Write (send DATA); then return to Idle.)

SERIAL BUS COMMUNICATION PROTOCOLS

CAN Protocol

Serial Communication: Controller Area Network (CAN) Bus


Controller Area Network
Example - a network of embedded systems in an automobile
It has a data rate of 1 Mbps
It uses a multi-master bus
A CAN module is required at each node

CANBUS and the OSI Model


CAN is a closed network
no need for security, sessions or logins
no user interface requirements

Physical and Data Link layers in silicon.

OSI:-> Open Systems Interconnection


CANBUS Physical Layer


Physical medium: two wires terminated at both ends by resistors.
Differential signal - better noise immunity.
Benefits: reduced weight, reduced cost; fewer wires = increased reliability.

Figure: Conventional multi-wire looms vs. a CAN bus network (http://canbuskit.com/what.php)


Message Oriented Transmission Protocol


Each node is both a receiver and a transmitter. A sender of information transmits to all devices on the bus. All nodes read the message, then decide if it is relevant to them. All nodes verify that reception was error-free, and all nodes acknowledge reception.

CAN bus

2005 Microchip Technology Incorporated. All Rights Reserved.


Message Format
Each message has an ID, data and overhead.
Data - 8 bytes max
Overhead - start, end, CRC, ACK


Bus Arbitration
Arbitration is needed when multiple nodes try to transmit at the same time. Only one transmitter is allowed to transmit at a time. A node waits for the bus to become idle before transmitting.
Nodes with more important messages continue transmitting.


Bus Arbitration
Message importance is encoded in the message ID: lower value = more important. As a node transmits each bit, it verifies that it sees the same bit value on the bus that it transmitted. A 0 on the bus wins over a 1. The losing node stops transmitting; the winner continues.
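This wired-AND arbitration can be simulated directly (a Python sketch): identifiers are compared MSB first, a dominant 0 from any node overrides recessive 1s, and the node with the lowest identifier always wins without any bit of its message being lost.

```python
# Simulate CAN bitwise arbitration over an 11-bit identifier, MSB first.
# At each bit position the bus carries the wired-AND of all transmitted bits;
# nodes that sent a recessive 1 but see a dominant 0 drop out.
def arbitrate(ids, width=11):
    contenders = list(ids)
    for pos in range(width - 1, -1, -1):
        bus = min((i >> pos) & 1 for i in contenders)   # any dominant 0 wins
        contenders = [i for i in contenders if (i >> pos) & 1 == bus]
    return contenders[0]
```

The winner is always the contender with the numerically lowest (highest-priority) identifier.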


CAN protocol
There is a CAN controller between the CAN line and the host node. The CAN controller has a BIU (Bus Interface Unit) consisting of a buffer and a driver. The method for arbitration is CSMA/AMP (Carrier Sense Multiple Access with Arbitration on Message Priority basis).

Each Distributed Node Uses:


Twisted-pair connection up to 40 m, for bidirectional data. Each line pulls to logic 1 through a resistor between the line and +4.5 V to +12 V; the line idle state is logic 1 (recessive state). A sender pulls the CAN line down to the dominant (active) state, logic 0 (ground, ~0 V), which receivers detect as input presence on the line.

Physical Layer
It has two states
1. Dominant State(Logic 0) 2. Recessive State(Logic 1)

Data Link Layer


Bit Format of CAN Bus
START (1 bit) | ARBITRATION FIELD (12 bits) | CONTROL FIELD (6 bits) | DATA FIELD (0 to 64 bits) | CRC FIELD (16 bits) | ACK FIELD (2 bits) | END OF FRAME (7 bits)

There are 5 fields in the CAN data frame format, plus the START and END bits:
1. Arbitration Field
2. Control Field [specifies the number of bytes of data to follow (0-8)]
3. Data Field
4. CRC Field [cyclic redundancy check code]
5. Acknowledge Field

First field, of 12 bits: the arbitration field, the protocol-defined first field in the frame bits.
It carries an 11-bit destination address and the RTR bit (Remote Transmission Request). The destination device address is specified in the 11-bit sub-field, and the 1-bit sub-field specifies whether the byte being sent is data for the device or a request to the device.
A maximum of 2^11 (= 2048) devices can connect to a CAN controller in the case of the standard 11-bit address field.

Arbitration Field
Identifies (with 11 bits) the device to which data is being sent or from which data is being requested. When the RTR bit is '0' (dominant state), the frame carries data for the device at the destination address; when it is '1' (recessive), the frame is a remote request for data from the device.
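Packing the 12-bit arbitration field is a one-liner (a Python sketch; the function name is mine): the 11-bit identifier occupies the upper bits and the RTR bit the last position, which is also why 2^11 = 2048 distinct addresses exist in the standard format.

```python
# 12-bit arbitration field: 11-bit identifier followed by the RTR bit.
def arbitration_field(device_id: int, rtr: int) -> int:
    if not 0 <= device_id < 2 ** 11:     # standard frame: 2048 identifiers
        raise ValueError("standard CAN identifiers are 11 bits")
    return (device_id << 1) | (rtr & 1)
```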

Control Field (6 bits)


The second field, of 6 bits, is the control field.
The first bit is for the identifier extension. The second bit is always '1'.
The last 4 bits specify the code for the data length.

Data Field (up-to 8 bytes data)


The third field, of 0 to 64 bits. Its length depends on the data length code in the control field.

CRC Field
The fourth field (third if the data field has no bits present), of 16 bits, holds the CRC (Cyclic Redundancy Check) bits. The receiver node uses it to detect errors, if any, during the transmission.

ACK Field
The fifth field, of 2 bits. The first bit is the 'ACK slot': a receiver drives '0' (dominant) in this slot when it receives the frame without error. The second bit is the 'ACK delimiter'; it signals the end of the ACK field.
If the transmitting node does not sense '0' in the ACK slot, i.e. it receives no acknowledgement of the data frame within the slot, it retransmits the data frame.

EOF Field
The sixth field, of 7 bits, is the end-of-frame specification, consisting of seven recessive bits.

Summary
CAN bus - Controller Area Network bus. Primarily used for building ECU (engine control unit) networks in automotive applications. Two wires. OSI - physical and data link layers. Differential signalling for noise immunity. Up to 1 Mbit/s; 120 Ω termination. Messages contain up to 8 bytes of data.


Self Study
USB Protocol

Bluetooth PANs IEEE 802.15.1

DEVICES AND COMMUNICATION BUSES FOR DEVICES NETWORK


PARALLEL BUS DEVICE PROTOCOLS


PCI PARALLEL BUS PROTOCOL


PCI (Peripheral Component Interconnect )


The PCI (Peripheral Component Interconnect) bus was developed as a successor to the ISA (Industry Standard Architecture) bus.

Introduced by Intel in 1992.


PCI provides direct access to system memory.

This configuration allowed higher performance without slowing down the processor.
The PCI bus originally ran at 33 MHz and was later extended to 66 MHz. The PCI bus took off with the release of Windows 95 and its Plug and Play technology.
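The quoted clock rates translate into peak bandwidths by a simple calculation (a back-of-the-envelope Python sketch; the function name is mine): one transfer of the full bus width per clock cycle, as achieved in burst mode.

```python
# Peak transfer rate in MB/s: bus width in bytes times clock rate in MHz,
# assuming one transfer per clock cycle (burst mode).
def pci_bandwidth_mbps(bus_width_bits: int, clock_mhz: int) -> float:
    return bus_width_bits / 8 * clock_mhz
```

A 32-bit bus at 33 MHz gives 132 MB/s (usually rounded to "133 MBps" in the literature); a 64-bit bus at 66 MHz gives 528 MB/s.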

Image of PCI Slots


32- Bit vs 64- Bit Slots/ Boards


PCI Pin-out Diagram


PCI Bus In Relation to System Bus


The PCI adapter is a custom circuit with these main functions:
To act as a PCI bus target when a PCI bus master requests a read or write to memory
To act as a PCI bus master when a CPU requests a PIO operation
To manage PCI bus arbitration, allocating bus use to devices as they request it
To interface PCI interrupt signals to the system bus and the CPU

PCI System Bus Performance


What makes the PCI bus one of the fastest I/O buses used today? Three features make this possible:
Burst mode: allows multiple sets of data to be sent in one transaction.
Full bus mastering: the ability of devices on the PCI bus to perform transfers directly.
High bandwidth options: allow for increased speed of the PCI bus.
A PCI driver can access the hardware automatically as well as through programmer-assigned addresses.
Simplified addition and deletion (attachment and detachment) of system peripherals.

How PCI Compares to Other Buses


ISA: 16-bit bus width, 8 MHz bus speed, 16 MBps. Advantages: low cost, compatibility, widely used. Disadvantages: low speed, jumpers & DIP switches, becoming obsolete.

PCI: 64-bit bus width, 133 MHz bus speed, 1 GBps. Advantages: very high speed, Plug & Play, dominant board-level bus. Disadvantages: incompatible with older systems, can cost more.

CompactPCI: 64-bit bus width, 33 MHz bus speed, 132 MBps. Advantages: designed for industrial use, hot swapping/Plug & Play, ideal for embedded systems. Disadvantages: lower speed than PCI, need adapter for PC use, incompatible with older systems.

Table 1: How PCI compares to other buses (Tyson, 2004a; Quatech, 2004c)


Plug and Play


Requirements for full implementation:
Plug and Play BIOS Extended System Configuration Data (ESCD) Plug and Play operating system

Tasks it automates:
Interrupt Requests (IRQ) Direct Memory Access (DMA) Memory Addresses Input/Output (I/O) Configuration


How PCI Works: Installing A New Device


Once a new device has been inserted into a PCI slot on the motherboard:
1. The operating system's Basic Input/Output System (BIOS) initiates the Plug and Play (PnP) BIOS.
2. The PnP BIOS scans the PCI bus for any new hardware connected to the bus. If new hardware is found, it asks for identification.
3. The device responds with its identification, sending its device ID to the BIOS through the bus.
4. PnP checks the Extended System Configuration Data (ESCD) to see whether configuration data already exists for the card. (If the card is new, there will be no data for it.)


New Device Cont


5. PnP assigns an Interrupt Request line, Direct Memory Access channel, memory address and Input/Output settings to the card, then stores the information in the ESCD.
6. When the Windows software loads, it checks the PCI bus and the ESCD to see if there is new hardware. If new hardware is installed, Windows alerts the user that new hardware has been found and identifies it.
7. Windows determines the device and attempts to install its driver. The operating system may ask the user to insert a disk containing the driver or direct it to where the driver is located. If Windows is unable to determine what the device is, it provides a dialog window so the user can identify the hardware and load its driver.


How a Device Works


Example: PCI-based sound card
1. The sound card converts the analog signal to a digital signal.
2. The digital audio data is carried across the PCI bus to the bus controller, which determines which device on the PCI bus has priority to send data to the central processing unit (CPU), and whether the data will go directly to the CPU or to system memory.
3. If the sound card is in recording mode, the bus controller assigns a high priority to the data coming from the sound card and sends the sound card's data over the bus bridge to the system bus.
4. The system bus saves the data in system memory. When the recording is complete, it is up to the user whether the data from the sound card is saved to the hard drive or remains in memory for additional processing.


PCI-X (PCI extended)


133 MBps up to as much as 1 GBps
Backward compatible with existing PCI cards
Used in high-bandwidth devices (Fibre Channel, Gigabit Ethernet, and processors that are part of a cluster)
Maximum 264 MBps throughput; uses 8-, 16-, 32- or 64-bit transfers


PCI bus Applications


It connects the display monitor, printer, character devices, network subsystems, video card (AGP controller), modem card and hard disk controller.

ARM BUS
Self study

Overview of Bluetooth
What is Bluetooth?

Bluetooth is a short-range wireless communications technology. It works in the unlicensed ISM band at 2.4 GHz. Devices within 10 m can share up to 720 kbps of capacity. Point-to-point or point-to-multipoint. Voice and data. Supports both synchronous and asynchronous services.

Why Bluetooth?

Cable replacement between devices. Supported by major companies. Low power consumption Connection can be initiated without user interaction. Devices can be connected to multiple devices at the same time.

Technical features of Bluetooth IEEE 802.15.1

Typical Bluetooth Scenario


Bluetooth supports wireless point-to-point and point-to-multipoint (broadcast) connections between devices in a piconet.
Point-to-point link: a master-slave relationship. Bluetooth devices can function as masters or as slaves.

Piconet: the network formed by a master and one or more slaves (max 7). Each piconet is defined by a different hopping channel, to which its users synchronize. Each piconet has a maximum capacity of 1 Mbps.

Piconet Structure
(Diagram: example piconet structures, each with one Master (M) and up to seven Active Slaves (S); legend: Master, Active Slave, Parked Slave, Standby.)

All devices in a piconet hop together.

Ad-hoc Network the Scatternet


Inter-piconet communication Up to 10 scatternet. piconets in a

Multiple piconets can operate within same physical space This is an ad-hoc, peer to peer (P2P) network

Bluetooth Protocol Stack

Radio or Antenna
Bluetooth devices operate on 2.4 GHz Industrial Scientific Medical band (ISM band). Unlicensed in most countries.
Techniques to minimize packet loss: Frequency Hopping Adaptive power control Short data packets

Frequency Hopping Spread Spectrum (FHSS)


Frequency hopping is jumping from one frequency to another within the ISM (Industrial, Scientific and Medical) radio band. The hopping interval is 625 µs, and the number of hopped frequencies is 79.
Frequency synthesis: hopping over 2.400-2.4835 GHz; carriers at 2402 + k MHz, k = 0, ..., 78; 1,600 hops per second.
Efficient use of the entire bandwidth. Provides a basic level of security. FH occurs by jumping from one channel to another in a pseudorandom sequence.
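The hop-frequency mapping above is simple enough to sketch (Python; constant and function names are mine):

```python
# 79 Bluetooth hop carriers spaced 1 MHz apart in the 2.4 GHz ISM band,
# one hop per 625 µs slot.
SLOT_US = 625
HOPS_PER_SECOND = 1_000_000 // SLOT_US     # 1,600 hops per second

def hop_frequency_mhz(k: int) -> int:
    """Carrier frequency for hop channel k: 2402 + k MHz, k = 0..78."""
    if not 0 <= k <= 78:
        raise ValueError("Bluetooth defines 79 hop channels, k = 0..78")
    return 2402 + k
```

Channel 0 sits at 2402 MHz and channel 78 at 2480 MHz, inside the 2.400-2.4835 GHz band.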

Frequency Hopping

The master shall start its transmission in even-numbered time slots only, and the slave shall start its transmission in odd-numbered time slots only. The packet start shall be aligned with the slot start.


Multislot Frames

For a multislot packet, the RF hop frequency to be used for the entire packet is derived from the Bluetooth clock value in the first slot of the packet. The RF hop frequency in the first slot after a multislot packet shall use the frequency as determined by the current Bluetooth clock value.

Baseband layer

Baseband protocol
AMA: Active Member Address; PMA: Parked Member Address

Standby - unconnected state; waiting to join a piconet
Inquire - ask about available radios
Page - connecting state; connect to a specific radio
Connected - active state; actively on a piconet (as master or slave)
Park/Hold - low-power connected states

(State diagram: Standby -> Inquiry -> Page -> Connected (AMA) <-> Transmit data (AMA); from Connected the device may enter the PARK (PMA) or HOLD (AMA) low-power states.)


Baseband: Links
Between master and slave(s), two different types of links can be established
Synchronous Connection-Oriented (SCO) link: a symmetric point-to-point link between a master and a single slave in the piconet. It uses reserved slots to support a circuit-switched connection. The master can support up to three SCO links to the same slave or to different slaves.
Asynchronous Connection-Less (ACL) link: an asymmetric point-to-multipoint link between the master and all active slaves participating in the piconet. It uses the slots not reserved for SCO to provide a packet-switched connection. Between a master and a slave, only a single ACL link can exist.

Connection State Machine

Slave Connection State Modes

Active - participates in the piconet; listens, transmits and receives frames
Sniff - only listens on specified slots
Hold - does not support ACL frames; reduced power status; may still participate in SCO exchanges
Park - does not participate in the piconet, but is still retained as part of it

Link Manager Protocol



The Link Manager carries out link setup, authentication and link configuration.
Channel control: all work related to channel control is managed by the master, which uses a polling process. The master is the device that initiates the connection.
These roles can change (master-slave role switch).

Logical Link Control and Adaptation Protocol (L2CAP) Layer.


Provides a link-layer protocol between entities with a number of services.

Relies on lower layer for flow and error control.


Provides two alternative services to upper-layer protocols
Connection-less service Connection-oriented service

Gives specifications for transmission mode, supervision, power-level monitoring, synchronisation, and the exchange of capability parameters (packet flow latency, peak data rate, average data rate, maximum burst size) with lower and higher layers.

Advantage and Disadvantages of Bluetooth


Advantages
No line of sight required
Low power consumption
2.4 GHz radio frequency ensures worldwide operation
Upgradeable
Long-lasting technology
Disadvantages
Limited data rate
Limited range


Wireless Personal Area Protocols


802.11 WLAN and 802.11b WiFi (Self Study ) ZigBee 900 MHz (Self Study)

Embedded Software Development


B. Jaiswal Embedded Systems Raj Kamal

Operating System
Developed so that bare machines (hardware) can be used
Essential software required to work with a computer
Manages basic hardware resources and provides an interface to users and their programs
Also controls the execution of application programs

ET-2012

SVBIT,Gandhinagar


Appointment with CPU


Process
Process is a program in execution. It includes the current activity as represented by the PC, registers, stack, etc. It is the unit of work in modern time-sharing systems.

Process: a sequence of instructions; a dynamic entity, that is, a program in execution; the active part - during execution it gives the result; stored in memory; competes for computing resources such as CPU time and memory.

Program: contains the instructions; a static entity made up of program statements; the passive part - it gives no result until execution starts and it becomes a process; stored on disk; does not compete for computing resources.

Process State
As a process executes, it changes state. The state is defined by the current activity of the process.

Each process may be in one of the following states:

1. New: to start execution of a program, a new process is created in memory.

2. Ready: the process is waiting to be assigned to the CPU for further execution. A process which is not waiting for an external event and is not running is in the READY state. When the CPU is free, the OS chooses one process from the list of ready processes and dispatches it for execution as per the scheduling algorithm.

3. Running: instructions are being executed. Only one process executes on the CPU at any given moment.

4. Blocked/Waiting: the process is waiting for some event to occur, e.g. an I/O operation to finish. A blocked process cannot be scheduled for running even if the CPU is free.

5. Terminated: the process has finished its execution.


Process State Switching: 5-state model

A process runs when it is scheduled to run by the OS (kernel). The OS gives control of the CPU on a process's request (system call). The process runs by executing instructions, and continuous changes of its state take place as the program counter (PC) changes.

(State diagram: new -admitted-> ready -scheduler dispatch-> running -exit-> terminated; running -interrupt-> ready; running -I/O or event wait-> waiting; waiting -I/O or event completion-> ready.)

Process Definition
A process is an executing unit of computation, controlled by mechanisms of the OS:

- a scheduling mechanism that lets it execute on the CPU
- a resource-management mechanism that lets it use system memory and other system resources such as network, file, display or printer
- an access-control mechanism for inter-process communication among concurrently running processes

An application program can be said to consist of a number of processes.

Process Control Block

Information about each process in the computer is stored in a Process Control Block (PCB), also known as a Task Control Block. It is created when a user creates a process, removed from the system when the process terminates, and stored in a protected memory area of the kernel (memory reserved for the OS).

PCB Contents
Process state: the current process state, such as new, ready, running, waiting, etc.
Program counter: the address of the next instruction to be executed in the process.

CPU registers: the CPU has various registers such as accumulators, stack pointer, working registers and instruction pointer; the PCB stores the state of all these registers when the CPU switches from one process to another.
CPU scheduling information: process priority, pointers to the ready queue and device queues, and other scheduling parameters.
Accounting information: total CPU time used, real time used, process number, etc.

I/O status information: the list of I/O devices allocated to the process, and the list of files the process has open on disk (each opened either for reading or for writing).
Memory-management information: the values of the base and limit registers, the address of the page table or segment table, and other memory-management information.
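The PCB fields listed above can be sketched as a plain record. This is a minimal illustration, assuming nothing about any real kernel's layout; the field names simply mirror the list above.

```python
# Illustrative PCB record; one instance per process, held by the kernel.
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                                        # unique process number
    state: str = "new"                              # process state
    program_counter: int = 0                        # next instruction address
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0                               # CPU scheduling information
    open_files: list = field(default_factory=list)  # I/O status information
    base: int = 0                                   # memory-management info
    limit: int = 0
    cpu_time_used: float = 0.0                      # accounting information
```

On a context switch the kernel would fill `registers` and `program_counter` from the hardware before loading another process's values.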

Context Switching
When the CPU switches from one process to another, it saves the information about the current process into its PCB and then starts the new process. The current CPU registers, including the program counter and stack pointer, are called the context. Once the context is saved in the register-save area of the PCB and on the process stack, the running process stops.

The context of the other process is then loaded and that process runs: the context has switched.
A context switch is pure overhead, because the system performs no useful work while switching.

Context-switch times are highly hardware dependent; the speed varies from machine to machine with the memory speed, the number of registers to be copied, and the existence of special instructions.

Thread

Much of the software that runs on desktop PCs is multithreaded. An application is typically implemented as a separate process with several threads of control. A web browser may have one thread displaying images or text while another thread retrieves data from the network. As another example, a word processor may have one thread for displaying graphics, another thread for reading keystrokes from the user, and a third thread for performing spelling and grammar checking in the background.

Thread
A thread, sometimes called a lightweight process (LWP), is the basic unit of CPU utilization. Each thread has its own thread ID, program counter, register set, stack, priority and present status. A traditional heavyweight process (a kernel-level controlled entity) has a single thread of control; if a process has multiple threads of control, it can do more than one task at a time.


Thread
A thread can be either a sub-process within a process (kernel-level thread) or a process within an application program (user-level thread). Thread states: starting, running, blocked (sleeping) and finished. A thread does not call another thread to run.


Process vs. Thread

Process:
Considered heavyweight.
Unit of resource allocation and of protection.
Process creation is very costly in terms of resources.
Programs executing as processes are relatively slow.
A process cannot access the memory area belonging to another process.
Process switching is time consuming.
One process can contain several threads.

Thread:
Considered lightweight.
Unit of CPU utilization.
Thread creation is very economical.
Programs executing using threads are comparatively faster.
A thread can access the memory area belonging to another thread within the same process.
Thread switching is faster.
One thread can belong to exactly one process.

Task
Task is the term used for a process in an RTOS for embedded systems. An application program consists of tasks, whose behaviors in various states are controlled by the OS. A task is like a process or thread in an OS. It runs when it is scheduled to run by the OS (kernel), which gives control of the CPU on a task request (system call) or a message. It runs by executing instructions, and its state changes continuously as the program counter (PC) changes. A task is an independent process: no task can call another task.

Task
A task includes its task context and a TCB. The TCB is a data structure holding the information the OS uses to control the task state. It is stored in a protected memory area of the kernel and contains the information about the task state.


Task
Task ID, e.g. a number between 0 and 255; task priority, between 0 and 255, represented by a byte; parent task (if any); child task (if any); and the address of the TCB of the task that will run next.


Task States
1. Idle state [not attached or not registered]
2. Ready state [attached or registered]
3. Running state
4. Blocked (waiting) state
5. Delayed for a preset period
The number of possible states depends on the RTOS.


Scheduling
Scheduling is the selection of a process for execution in a way that meets the objectives of the system (throughput, response time, etc.). The scheduler is the program responsible for scheduling.


Scheduling Criteria
Turn around Time
Interval of time between submission & completion

Waiting Time
Time spent waiting in ready queue

Response Time
Time from submission until response received

CPU Utilisation
Keep the CPU as busy as possible

Throughput
Number of processes completed per unit time

Fairness
No process should starve

Enforcing Priorities
The scheduling policy should favour higher-priority processes

Scheduling Algorithms
Non-Preemptive Mode: the process leaves the processor only when one of the following events occurs:
its completion; it requires some I/O; it executes a (blocking) system call.
1. FCFS (First Come First Serve) 2. SJF (Shortest Job First) 3. Priority

Preemptive Mode: the process can additionally be taken away from the processor when one of the following events occurs:
a timeout occurs; a new process with higher priority arrives.
1. Round Robin 2. SRT (Shortest Remaining Time) 3. Priority

First Come First Serve Scheduling


Non-preemptive algorithm; the simplest: the process that requests the CPU first is allocated the CPU first. Implemented with a FIFO queue: when a process enters the ready queue, its PCB is inserted at the tail of the queue; when the CPU is free, it is allocated to the process at the head of the queue, and the now-running process is removed from the queue.


FCFS Example
The code is simple to write and understand.
Convoy effect: a large process blocks the smaller processes behind it, which degrades the scheduling metrics.
Assume burst times P1=24, P2=3, P3=3 (in ms).

Order P1, P2, P3:
P2 waits 24, P3 waits 24+3=27; average waiting time = (0+24+27)/3 = 17 ms (AWT or ART); average turnaround time ATAT = ((0+24)+(24+3)+(27+3))/3 = 27 ms.

Order P2, P3, P1:
P3 waits 3, P1 waits 3+3=6; average waiting time = (0+3+6)/3 = 3 ms; ATAT = ?
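The arithmetic above can be checked with a few lines of illustrative Python (the function name `fcfs` is ours, not from the slides). All processes are assumed to arrive at time 0, as in the example.

```python
# FCFS metrics for processes that all arrive at t=0; bursts in ms,
# given in the order they entered the ready queue.
def fcfs(bursts):
    """Return (average waiting time, average turnaround time)."""
    wait, turnaround, clock = [], [], 0
    for b in bursts:
        wait.append(clock)        # waits for all earlier bursts to finish
        clock += b
        turnaround.append(clock)  # completion time = turnaround (arrival 0)
    n = len(bursts)
    return sum(wait) / n, sum(turnaround) / n
```

Running it on the first order `[24, 3, 3]` reproduces AWT = 17 ms and ATAT = 27 ms; the second order `[3, 3, 24]` gives AWT = 3 ms and ATAT = 13 ms.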

Shortest Job First Scheduling


Non-preemptive algorithm: the process with the shortest burst time is given the first chance to execute. Burst time cannot be determined exactly; it can only be estimated by an empirical formula. Example: P1=6, P2=8, P3=7, P4=3; SJF order: P4, P1, P3, P2.
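A minimal sketch of the selection rule (all processes assumed to arrive at t=0; the function name `sjf` is ours):

```python
# Non-preemptive SJF: run the shortest burst first.
def sjf(bursts):
    """Return (execution order as indices, average waiting time)."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    clock, wait = 0, [0] * len(bursts)
    for i in order:
        wait[i] = clock           # waits until all shorter bursts finish
        clock += bursts[i]
    return order, sum(wait) / len(bursts)
```

For the example bursts `[6, 8, 7, 3]` the order is P4, P1, P3, P2 with average waiting time (0+3+9+16)/4 = 7 ms.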


Shortest Remaining Time Scheduling

Preemptive version of SJF: the process with the shortest remaining time is given the first chance to execute. Burst time cannot be determined exactly; it can only be estimated by an empirical formula.


Round Robin Scheduling


Preemptive FCFS algorithm, implemented by adding a time-slice mechanism to the FIFO queue. Each process is assigned to the processor for a time interval called a time slice or time quantum; no process is allocated more than one time quantum in a row. If a process's CPU burst exceeds one time quantum, that process is preempted and put back in the ready queue. If the time slice is too short, too much time is spent in context switching; if it is too long, other processes wait too long and the behaviour degenerates toward FCFS.

Round Robin Example

P1=24, P2=3, P3=3 (in ms); time quantum = 4 ms. By the RR strategy: P1 runs 0-4, P2 runs 4-7 (and finishes), P3 runs 7-10 (and finishes), then P1 runs alone until it completes at 30.
P1 waits 6 ms, P2 waits 4 ms, P3 waits 7 ms, so AWT = (6+4+7)/3 ≈ 5.66 ms, improved over the 17 ms of the FCFS order P1, P2, P3.
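The example can be simulated directly. This is an illustrative sketch (all processes arrive at t=0; `round_robin` is our own name), not any particular OS's implementation.

```python
# Round-robin simulation: cycle through the ready queue, running each
# process for at most one quantum before moving to the next.
from collections import deque

def round_robin(bursts, quantum):
    """Return the average waiting time."""
    remaining = list(bursts)
    queue = deque(range(len(bursts)))
    clock, wait = 0, [0] * len(bursts)
    last_seen = [0] * len(bursts)       # when each process last left the CPU
    while queue:
        i = queue.popleft()
        wait[i] += clock - last_seen[i] # time spent back in the ready queue
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        last_seen[i] = clock
        if remaining[i] > 0:
            queue.append(i)             # quantum expired: back to the tail
    return sum(wait) / len(bursts)
```

With bursts `[24, 3, 3]` and a 4 ms quantum this gives the 5.66 ms average waiting time quoted above.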


Priority Scheduling Algorithm

Works in both preemptive and non-preemptive modes:
Preemptive: the priority of a newly arrived process is compared with that of the running process; if it is higher, the current process is preempted, otherwise the current process continues.
Non-preemptive: the newly arrived process is simply put at the head of the ready queue.

The process with the highest priority is scheduled for execution first; equal priorities are scheduled FCFS.
Internal priorities (computed by the OS):
time limits, memory requirements, number of open files
External priorities (set outside the OS):
type of process, resources paid for running the process

Scheduling Examples

AWT = 8.2
Drawback: in a heavily loaded system, a continuous stream of higher-priority processes can keep a lower-priority process from ever getting access to the CPU (starvation).
Solution: aging, the technique of gradually increasing the priority of processes that wait in the queue.
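The aging idea fits in a few lines. This sketch assumes a convention where a larger number means higher priority, capped at 255 as in the TCB description later in these notes; the function name and the dict-based process records are ours.

```python
# Aging: periodically boost the priority of every process still waiting
# in the ready queue, so low-priority processes cannot starve forever.
def age(ready_queue, boost=1, cap=255):
    for proc in ready_queue:
        proc["priority"] = min(proc["priority"] + boost, cap)
    return ready_queue
```

Calling `age` on each scheduler tick means any waiting process eventually reaches a priority high enough to be dispatched.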

Principles of Concurrency
Three contexts:
Multiple applications: multiprogramming was invented to allow the computer's processing time to be shared dynamically among a number of active jobs.

Structured applications: as an extension of the principles of modular design and structured programming, some applications can be effectively implemented as a set of concurrent processes.
Operating-system structure: the same structuring advantages apply to the systems programmer, and some operating systems are themselves implemented as a set of processes.

Single vs. Multi-processor


In a single-processor multiprogramming system, processes are interleaved in time to yield the appearance of simultaneous execution. (a)

In a multiple-processor system, it is possible not only to interleave processes but also to overlap them. (b)


Process Interaction
Independent Process (unaware of each other)
A process is independent if it cannot affect and cannot be affected by another process executing in the system; it shares no data with any other process in the system.

Cooperative Process
A process is cooperative if it can affect or be affected by another process executing in the system:
1. Cooperation by sharing (access to some common object): indirectly aware
2. Cooperation by communication: directly aware


Process Cooperation by Sharing (Process Synchronisation)


Race condition: when several processes access and manipulate shared data concurrently, the outcome of execution depends on the particular order in which the accesses take place.
Critical section: each process has a segment of code in which it may be
changing common variables, updating a table, writing to a file, etc.

Control Requirements to Critical Section


1. Mutual Exclusion
If a process is inside its critical section, no other process may be allowed to enter the critical section.

2. Progress
If no process is inside the critical section and some process wants to enter it, that process must not be stopped by another process (one that is not in its critical section).

3. Bounded Waiting
There must be a limit on the number of times other processes are allowed to enter the critical section after a process has requested entry; the decision to enter the critical section must not be postponed indefinitely.

All three of the above conditions must be satisfied.

Requirements for Mutual Exclusion


1. Mutual exclusion must be enforced: among all processes that have critical sections for the same shared object, only one process at a time is allowed into its critical section.
2. A process that halts in its non-critical section must do so without interfering with other processes.
3. It must not be possible for a process requiring access to a critical section to be delayed indefinitely: no deadlock or starvation.
4. When no process is in a critical section, any process that requests entry into its critical section must be permitted to enter without delay.
5. No assumptions are made about relative process speeds or the number of processors.
6. A process remains inside its critical section for a finite time only.

Process Cooperation by Communication (Inter-Process Communication)


Various processes participate in a common effort that links them all. Communication consists of messages of some sort. Nothing is shared between the processes in the act of communicating, so mutual exclusion is NOT a control requirement. Concurrency mechanisms for inter-process communication and synchronisation include:

Queues to communicate data
Mailboxes/Messages
Pipes and Sockets
Semaphores to trigger actions
Signals
Remote Procedure Calls (RPC) for distributed systems

Semaphore
A programming-language concept proposed for IPC, with two elements: a count and a queue.

A semaphore is a variable upon which three operations are defined:

1. The count is always initialized to a non-negative value.
2. The WAIT operation decrements the semaphore value; if the value becomes negative, the process that executed the wait is blocked.
To receive a signal via a semaphore, a process executes the primitive wait(s).
3. The SIGNAL operation increments the semaphore value; if the value is still not positive, the semaphore unblocks one of the processes from its queue and moves it to the ready queue.
To transmit a signal via a semaphore, a process executes the primitive signal(s).
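The count-and-queue behaviour can be modelled in a few lines. This is a single-threaded teaching sketch, not a real OS primitive: real wait/signal must be atomic, and the class and method names here are ours. A negative count means |count| processes are blocked.

```python
# Count-and-queue semaphore model: wait() may drive the count negative,
# signal() releases the longest-waiting process (strong/FIFO semaphore).
from collections import deque

class Semaphore:
    def __init__(self, count):
        assert count >= 0                 # count initialized non-negative
        self.count = count
        self.queue = deque()              # processes blocked on this semaphore

    def wait(self, pid):
        self.count -= 1
        if self.count < 0:
            self.queue.append(pid)        # the caller would block here
            return "blocked"
        return "continue"

    def signal(self):
        self.count += 1
        if self.count <= 0:
            return self.queue.popleft()   # FIFO release: longest wait first
        return None                       # nobody was waiting
```

Initializing the count to 1 turns this into a binary semaphore (mutex), as described next.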

Semaphore
Other than these three operations, there is no way to inspect or manipulate a semaphore. The wait and signal primitives are assumed to be atomic: they cannot be interrupted, and each routine can be treated as an indivisible step. P (the wait operation) derives from the Dutch word proberen, 'to test': the P semaphore function signals that the task requires a resource and, if it is not available, waits for it.

V (the signal operation) derives from the word verhogen, 'to increment': with the V semaphore function the task signals to the OS that the resource is now free for other users. Semaphores can also be used to count saved wakeups: a positive value means one or more wakeups are pending.
For any semaphore, a queue is used to hold the processes waiting on the semaphore.

Semaphore
General semaphore (counting or P-V semaphore)
The count can take any integer value; used for process synchronisation.

Binary semaphore (MUTEX)
The count takes only the values 0 or 1; easier to implement, used for mutual exclusion. A process using the mutex locks onto a critical section in a task.

Strong semaphore
The queue is implemented as a FIFO queue: the process blocked longest is released first. Never suffers starvation; convenient to implement.

Weak semaphore
The queue is implemented as a priority queue; may suffer from starvation.

Mutex Semaphore
Mutex means mutually exclusive key. A mutex is a binary semaphore usable for protecting a resource so that only one task's code section uses it at an instance. Let the key sm have initial value 1. When the key is taken by a code section, sm decrements from 1 to 0 and the waiting task's code starts. sm increments from 0 to 1 to signal or notify the end of use of the key by that section of code in the task or thread.
When sm = 0, the key is assumed taken (accepted): another task's code section has not released it yet and is using the resource.
When sm = 1, the key is assumed released (sent or posted): another task's code section can now take the key and use the resource.
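A runnable sketch of the take/release protocol, using Python's `threading.Lock` as the binary-semaphore "key" (the shared `counter` and the task function are our illustrative choices):

```python
# Four tasks increment one shared counter; the lock plays the role of
# the key sm: taken on entry to the critical section, released on exit.
import threading

counter = 0
sm = threading.Lock()               # the key: free until some task takes it

def task():
    global counter
    for _ in range(50_000):
        with sm:                    # take the key (blocks if already taken)
            counter += 1            # critical section
                                    # key released automatically on exit

threads = [threading.Thread(target=task) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Without the lock, the read-modify-write on `counter` could interleave between threads and lose updates; with it, the final count is exactly 4 x 50,000.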

Signal
A signal is the software equivalent of a flag in a register that is set on a hardware interrupt. Unless masked by a signal mask, the signal allows execution of the signal-handling function, just as a hardware interrupt allows execution of an ISR. A signal is a one- or two-byte IPC used for signalling from a process to the OS (to enable the start of another process): a provision for an interrupt-message from one process or task to another. When the IPC functions for signals are not provided by an OS, the OS employs semaphores for the same purpose.

Signal Pros and Cons

PROS
1. The signal is the simplest IPC for messaging and synchronising processes.
2. Unlike semaphores, it takes the shortest possible CPU time.
3. A signal is the software equivalent of a flag in a register that is set on a hardware interrupt.
4. A signal is identical to setting a flag that is shared and used by another interrupt-servicing process.
5. It is sent on some exception or on some condition which can be set during the running of a process, task or thread.
6. Sending a signal is the software equivalent of throwing an exception in a C/C++ or Java program.
7. A signal raised by one process forces another process to interrupt and catch that signal, provided the signal is not masked at that process.
8. Unless masked by a signal mask, the signal allows execution of the signal-handling process, just as a hardware interrupt allows execution of an ISR.

CONS
1. A signal is handled only by a very high priority process (service routine); that may disrupt the usual schedule and the usual priority-inheritance mechanism.
2. A signal may cause a reentrancy problem: the process may not return to a state identical to the one before the signal-handler process executed.

Queue
Every OS provides queue IPC functions for inserting and deleting message pointers or messages in FIFO or priority mode.

Each message queue needs initialization (creation) before the scheduler's message-queue functions can be used.
Each queue has either a user-definable size (an upper limit on the number of bytes) or a fixed pre-defined size assigned by the scheduler. When a queue becomes full, there may be a need for error handling and user code for blocking the task(s); there may not be self-blocking.
Queue functions: Create, Post, PostFront, Pend, Flush, Query, Accept and Delete.
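As an illustrative stand-in for the RTOS functions named above, Python's `queue.Queue` maps roughly onto them: the constructor is Create (with a fixed upper limit), `put` is Post (insert at the back, FIFO) and `get` is Pend (remove from the front). The message strings are made up for the example.

```python
# FIFO message-queue IPC sketch with a bounded capacity.
import queue

mq = queue.Queue(maxsize=8)          # Create: fixed upper limit of 8 entries

mq.put("sensor-reading-1")           # Post: insert at the back
mq.put("sensor-reading-2")
msg = mq.get()                       # Pend: remove from the front
```

A full queue makes `put` block (or raise, with `block=False`), which is where the error handling or task blocking mentioned above comes in.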

Mailbox
A mailbox (for messages) is an IPC through a message block at the OS that can be used only by a single destined task. Each mailbox usually has one message pointer only, which can point to a message. OS mailbox functions: Create, Query, Post, Pend, Accept, Delete.


Pipe
A pipe is a device used for inter-process communication. A pipe has the functions create, connect and delete, and functions similar to a device driver's (open, write, read, close).

A message pipe is a device for inserting (writing) and deleting (reading) bytes between two inter-connected tasks or two sets of tasks.
Pipes are also like Java I/O streams: a pipe carries a byte stream with no fixed number of bytes per message, with pointers for the back and the front. A pipe is unidirectional.

A pipe can be used by one process to insert a byte stream and by another process to delete (read) bytes from that stream.
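On a POSIX system the create/write/read/close cycle looks like this (a minimal single-process sketch; in real use the read and write ends would be held by two different processes, e.g. across a fork):

```python
# Unidirectional byte-stream pipe: os.pipe() creates it and returns
# the (read end, write end) descriptor pair.
import os

r, w = os.pipe()                 # create
os.write(w, b"hello, pipe")      # the writer inserts bytes at the back
os.close(w)                      # close the write end: signals end of stream
data = os.read(r, 1024)          # the reader deletes (reads) bytes from the front
os.close(r)
```

The read is destructive, matching the comparison table below: once bytes are read from the pipe, they are gone.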

Socket
A socket provides a bi-directional pipe-like device which also uses a protocol between the source and destination processes for transferring bytes. It provides for establishing and closing a connection between the source and destination processes using that protocol.

Two tasks at two distinct places, or locally, interconnect through sockets.
Multiple tasks at multiple distinct places interconnect through sockets to a socket at a server process. The client and server sockets can run on the same CPU or on distant CPUs across the Internet.

Sockets can use different domains/protocols.

Two processes (or sections of a task) at two sets of ports interconnect (perform IPC) through a socket at each end. A pipe does not provide protocol-based inter-processor communication, while a socket does. A socket can be a client-server socket, where the client-socket and server-socket functions differ, or a peer-to-peer socket IPC, where the source and destination sockets have similar functions.
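The bi-directional property is easy to see with `socket.socketpair()`, which creates two already-connected local endpoints (a stand-in for a full client/server setup; the "ping"/"pong" payloads are made up):

```python
# Two connected endpoints; unlike a pipe, each end can both send and
# receive over the same connection.
import socket

a, b = socket.socketpair()
a.sendall(b"ping")               # endpoint a -> endpoint b
request = b.recv(16)
b.sendall(b"pong")               # endpoint b -> endpoint a (same socket)
reply = a.recv(16)
a.close()
b.close()
```

A networked client/server pair would instead use `socket.socket()` with `bind`/`listen`/`accept` on the server side and `connect` on the client side, but the two-way byte exchange is the same.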

Pipe vs. Socket

Pipe:
Used for IPC. Created by the pipe system call, which returns two descriptors.
A linear array of bytes, as is a file, but used solely as an I/O stream.
Supports destructive reading (once read, the data vanishes).
Two types: unnamed pipes and named pipes (FIFOs).
Used as a connection link; unidirectional. Piping is a process where the output of one process is made the input of another.

Socket:
Used for IPC and also for communication between processes over a network.
Created by the socket system call, which returns a descriptor for it; it is a logical connection for communication and supports various communication semantics.
Exists as long as some process holds a descriptor referring to it.
A socket, when named, is called a port.
Provides point-to-point, two-way communication between two processes.
Sockets are usually datagram oriented.

Pipe vs. Queue

Pipe:
Pipes are a layer over message queues.
A pipe is a technique for passing information from one process to another: two processes, one feeding the pipe with data while the other extracts the data at the other end.
A linear array of bytes, as is a file, but used solely as an I/O stream (a streaming interface).
Supports destructive reading (once read, the data vanishes).
One-way communication only.

Queue:
A message queue is managed by the kernel; all the queue memory is allocated at creation.
A message queue is a method by which processes can pass data using an interface; a queue can be created by one process and used by multiple processes that read/write messages to it.
A queue is not a streaming interface; it has datagram-like behaviour: reading an entry removes it from the queue, and if you don't read the entire data, the rest is lost.
Has a maximum number of elements, each with a maximum size.

Remote Procedure Call


A method for connecting two remotely placed functions by first using a protocol for connecting the processes; it is used for distributed tasks.
An RTOS can provide RPCs, which permit a distributed environment for embedded systems. The OS IPC function allows a function or method to run in another address space, on a shared network or another remote computer. The client makes the call to a function that is local or remote, and the server response is likewise either remote or local. Both systems work in peer-to-peer communication mode; each system in peer-to-peer mode can make an RPC.

An RPC permits remote invocation of processes in distributed systems.

Real Time Operating System

Topics
1. Operating System Services
a. GOAL, b. MODES AND c. STRUCTURE

2. Process Management (Kernel service) 3. Memory Management (Kernel service)

What is a kernel, and what are its services in the OS?

Kernel and its Services in the OS


What is a kernel?
The kernel is the heart of the operating system: it connects the application software to the hardware of the computer. The kernel is the program that constitutes the central core of a computer operating system, and it has complete control over everything that occurs in the system. It provides the basic management facilities (processor management, I/O management, memory management and process management) needed for efficient execution of the system.

A kernel can be contrasted with a shell, the outermost part of an operating system that interacts with user commands.

Kernel and shell are terms used more frequently in Unix operating systems than in IBM mainframe or Microsoft Windows systems.

Figure 2: We can visualize a whole Operating System like an Atom structure.

Kernel basic facilities


The kernel's primary function is to manage the computer's resources and allow other programs to run and use these resources. Typically, the resources consist of:

The central processing unit. This is the most central part of a computer system, responsible for running or executing programs. The kernel takes responsibility for deciding at any time which of the many running programs should be allocated to the processor or processors (each of which can usually run only one program at a time).

The computer's memory. Memory is used to store both program instructions and data; typically, both need to be present in memory in order for a program to execute. Often multiple programs will want access to memory, frequently demanding more memory than the computer has available. The kernel is responsible for deciding which memory each process can use, and for determining what to do when not enough is available.

Any input/output (I/O) devices present in the computer, such as keyboards, mice, disk drives, USB devices, printers, displays and network adapters. The kernel allocates requests from applications to perform I/O to an appropriate device (or subsection of a device, in the case of files on a disk or windows on a display) and provides convenient methods for using the device (typically abstracted to the point where the application does not need to know implementation details of the device).

Kernels also usually provide methods for synchronization and communication between processes, called inter-process communication (IPC).

Kernel Services in the OS


1. Process Management. 2. Memory Management. 3. I/O Device Management. 4. Processor Management.

Process Management Kernel Services in an OS.


Creation to deletion of Processes Process structure maintenance Processing resource requests Scheduling Processes Inter process Communication (IPC) (communication between Tasks, ISRs, OS functions)

Process Creation
Step 1: At processor reset in a computer system, the OS is initialized first, enabling the use of the OS functions, including the function to create processes.
Step 2: Using the OS process-creation function, a process, which can be called the initial process, is created.
Step 3: The OS starts and calls the initial process to run.

Step 4: When the initial process runs, it creates subsequent processes; processes can be created hierarchically. The OS schedules the threads and provides context switching between the threads (or tasks).

Creation of a process
Creating a process means defining the resources for the process and the address spaces (memory blocks) for the created process, its stack and its data, and placing the process's initial information in a PCB (Process Control Block) or TCB (Task Control Block).

What is PCB or TCB?????


Process Control Block (PCB, Process descriptor or also called Task Controlling Block) is a data structure in the operating system kernel containing the information needed to manage a particular process or task.

Information provided by PCB


The PCB contains important information about the specific process including: 1. The current state of the process i.e., whether it is ready, running, waiting, or whatever.

2. Unique identification of the process in order to track "which is which" information.


3. A pointer to parent process. 4. Similarly, a pointer to child process (if it exists). 5. The priority of process (a part of CPU scheduling information). 6. Pointers to locate memory of processes. 7. A register save area.

8. The processor it is running on.

Example of Process Creation


Suppose we have to create an OS display function for a mobile phone device. The OS function first creates the Display_process. The display process then creates the following threads: 1. Display_Time_DateThread

2. Display_BatteryThread
3. Display_SignalThread 4. Display_ProfileThread 5. Display_MessageThread

6. Display_Call StatusThread
7. Display_MenuThread

Process Manager Functions


Implements CPU sharing (called scheduling) Must allocate resources to processes in conformance with certain policies Implements process synchronization and inter-process communication Implements deadlock strategies and protection mechanisms

Memory Management

Memory Management Kernel Services in an OS.


Memory allocation and de-allocation for tasks, ISRs and OS functions; shared-memory management; it takes care of memory protection as well.

Memory allocation
When a process is created, the memory manager allocates the memory addresses (blocks) to it by mapping the process address space. Threads of a process share the memory space of the process

Memory Management after Initial Allocation


The memory manager of the OS must be secure, robust and well protected: no memory leaks and no stack overflows. A memory leak is memory that was allocated but never freed, so the usable memory shrinks over time; illegal accesses, i.e. writes to memory blocks not allocated to the process or data structure, must likewise be prevented. Stack overflow means the stack exceeding its allocated memory block(s).

Memory managing Strategy for a system

Topics
Basic OS Functions
Process Management,(done) Memory Management(done) Device Management(SS) I/O Device Management(SS)

RTOS task scheduling models

Scheduling
Definition: scheduling is the method by which threads, processes or data flows are given access to system resources (e.g. processor time, communication bandwidth). With a proper scheduling algorithm, we can perform multiple tasks in a given time. The scheduler is concerned mainly with:
Throughput - the total number of processes that complete their execution per unit time.
Response time - the amount of time from when a request is submitted until the first response is produced.
Waiting time - the time for which a process remains in the ready queue; fairness means giving equal CPU time to each process (or, more generally, time appropriate to each process's priority).

Preemptive vs. Non-Preemptive


A scheduling algorithm is:
Preemptive: if the active process or task or thread can be temporarily suspended to execute a more important process or task or thread . Non-Preemptive: if the active process or task or thread cannot be suspended, i.e., always runs to completion.


RTOS task scheduling models


1. Cooperative scheduling of ready tasks in a queue.
2. Cyclic and round robin (time slicing) scheduling.
3. Preemptive scheduling.
4. Rate-Monotonic Scheduling (RMS).
5. Earliest Deadline First (EDF) scheduling.

1. Cooperative Scheduling of ready tasks in a queue

Disadvantage: the long execution time of a low-priority task makes a higher-priority task wait until it finishes.

2. Cyclic and Round Robin (time slicing) Scheduling

Cyclic Scheduling

Round Robin (time slicing) Scheduling

Example

[Figure] Task program contexts at the five instances in the time-slicing schedule, cycles C1 to C5.

Priority-based Scheduling
A typical RTOS is based on a fixed-priority preemptive scheduler:

Assign each process a priority.

At any time, the scheduler runs the highest-priority process that is ready to run. A process runs to completion unless preempted.

Typical RTOS Task Model

Each task is a triplet: (execution time, period, deadline). Usually, deadline = period. A task can be initiated any time during its period.

[Figure] Timeline of one task: initiation, execution time and deadline within one time period.

Priority-based Preemptive Scheduling


Always run the highest-priority runnable process.

[Figure] Three processes with priorities 1 (P1, highest) to 3 (P3): whenever a higher-priority process is initiated, it preempts the running lower-priority one; after completing the higher-priority processes, the CPU switches back to the next interrupted lower-priority process.

RMS: Rate-Monotonic Scheduling

The utilization bound (UB) test allows schedulability analysis by comparing the calculated total utilization for a set of n tasks against the theoretical bound for that number of tasks:

U = sum over i of (Ci / Ti) <= U(n) = n * (2^(1/n) - 1)

If this inequality is satisfied, all of the tasks will always meet their deadlines. If the total utilization is greater than 100%, the system will certainly have scheduling problems; between U(n) and 100% the test is inconclusive.

Utilization Bound Test


Assumes rate-monotonic priority assignment: the task with the smaller period is assigned the higher priority.
The task set is guaranteed to be schedulable if the test succeeds; if it fails, the test is inconclusive and the set may still be schedulable.
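The UB test can be sketched in a few lines of Python. This is a minimal illustration; `rms_utilization_bound` and `ub_test` are hypothetical names chosen here, and tasks are given as (Ci, Ti) pairs as in the examples that follow.

```python
def rms_utilization_bound(n):
    """Theoretical RMS bound for n tasks: n * (2^(1/n) - 1)."""
    return n * (2 ** (1.0 / n) - 1)


def ub_test(tasks):
    """tasks: list of (Ci, Ti). Returns 'pass', 'inconclusive', or 'fail'."""
    u = sum(c / t for c, t in tasks)
    bound = rms_utilization_bound(len(tasks))
    if u <= bound:
        return "pass"          # guaranteed schedulable under RMS
    if u <= 1.0:
        return "inconclusive"  # may still be schedulable; check a timing diagram
    return "fail"              # overloaded: cannot be scheduled


print(round(rms_utilization_bound(3), 3))   # three-task bound, about 0.78
print(ub_test([(1, 4), (2, 6), (3, 12)]))   # U = 0.833 -> 'inconclusive'
print(ub_test([(2, 4), (3, 6), (3, 12)]))   # U = 1.25  -> 'fail'
```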

Utilization Bound Test Example


No. of Tasks (n)    Utilization Bound U(n)
1                   100.0%
2                   82.8%
3                   78.0%
4                   75.7%
5                   74.3%
10                  71.8%

Example: Checking for schedulability using RMS

Processes    Execution Time (Ci)    Time Period (Ti)
P1           1                      4
P2           2                      6
P3           3                      12

The time period of process P1 is 4 s, meaning a new instance of P1 is released every 4 s.

Solution:
Golden rule:
1. Shortest period = highest priority.
2. Check for schedulability.

1. Deciding priority levels

Processes    Execution Time (Ci)    Time Period (Ti)    Priority
P1           1                      4                   High (1)
P2           2                      6                   Medium (2)
P3           3                      12                  Low (3)

2. Checking for schedulability

U = 1/4 + 2/6 + 3/12 = 0.25 + 0.333 + 0.25 = 0.833

Here U < 1, though U exceeds the three-task bound U(3) = 0.78, so the UB test alone is inconclusive. The timing diagram confirms that the processes can be scheduled with RMS: every instance meets its deadline within the hyperperiod.

Scheduling timing diagram. Total time = LCM of the process periods = LCM(4, 6, 12) = 12.

Processes    Execution Time (Ci)    Time Period (Ti)
P1           1                      4
P2           2                      6
P3           3                      12

Reconstructed schedule over one hyperperiod (time 0 to 12):
Time  0-1:  P1 (released at 0)
Time  1-3:  P2
Time  3-4:  P3
Time  4-5:  P1 (released at 4, preempts P3)
Time  5-6:  P3
Time  6-8:  P2 (released at 6, preempts P3)
Time  8-9:  P1 (released at 8)
Time  9-10: P3 (completes its 3 units)
Time 10-12: idle

All deadlines are met, so the task set is schedulable under RMS.
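The timing diagram can be reproduced with a small unit-time simulator. This is a sketch, not an RTOS implementation: `rm_schedule` is a hypothetical name, tasks are (name, Ci, Ti) tuples, and rate-monotonic priority is derived directly from the period.

```python
def rm_schedule(tasks, horizon):
    """Unit-time, fixed-priority preemptive simulation.
    tasks: list of (name, C, T); smaller period = higher priority (RMS)."""
    remaining = {name: 0 for name, _, _ in tasks}
    timeline = []
    for t in range(horizon):
        for name, c, p in tasks:
            if t % p == 0:
                remaining[name] = c          # new job released at each period start
        ready = [(p, name) for name, _, p in tasks if remaining[name] > 0]
        if ready:
            _, run = min(ready)              # smallest period wins
            remaining[run] -= 1
            timeline.append(run)
        else:
            timeline.append(None)            # CPU idle
    return timeline


tasks = [("P1", 1, 4), ("P2", 2, 6), ("P3", 3, 12)]
print(rm_schedule(tasks, 12))
# ['P1', 'P2', 'P2', 'P3', 'P1', 'P3', 'P2', 'P2', 'P1', 'P3', None, None]
```

Each slot matches the timing diagram: P3's three units are spread across slots 3, 5, and 9 because releases of P1 and P2 preempt it.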

Example: Checking for schedulability using RMS

Processes    Execution Time (Ci)    Time Period (Ti)
P1           2                      4
P2           3                      6
P3           3                      12

Solution: U = 2/4 + 3/6 + 3/12 = 0.5 + 0.5 + 0.25 = 1.25. The CPU utilization is greater than 1, so this task set cannot be scheduled.

Earliest deadline first Scheduling


RMS assumes fixed priorities, with schedulability first checked using the utilization formula. Earliest deadline first (EDF) instead assigns priorities dynamically: the process with the soonest deadline is given the highest priority.

Example: Checking for schedulability using EDF

Processes    Execution Time (Ci)    Time Period (Ti)
P1           1                      3
P2           1                      4
P3           2                      5

Solution
Step 1: Check for schedulability.
U = 1/3 + 1/4 + 2/5 = 59/60, approximately 0.98, which is less than 1, so the given task set can be scheduled. (For EDF with deadlines equal to periods, U <= 1 is both necessary and sufficient.)
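The utilization check above can be done exactly with Python's standard `fractions` module, avoiding the rounding in 0.98. The task list here simply mirrors the example's (Ci, Ti) pairs.

```python
from fractions import Fraction

# EDF schedulability for implicit-deadline tasks: schedulable iff sum(Ci/Ti) <= 1
tasks = [(1, 3), (1, 4), (2, 5)]            # (Ci, Ti) from the example above
U = sum(Fraction(c, t) for c, t in tasks)
print(U)        # 59/60, exact total utilization
print(U <= 1)   # True -> schedulable under EDF
```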

Solution
Step 2: Construct the EDF schedule. The full hyperperiod is LCM(3, 4, 5) = 60; the first 16 time units are shown.

Processes    Execution Time (Ci)    Time Period (Ti)
P1           1                      3
P2           1                      4
P3           2                      5

Absolute deadlines within the first 16 units: d(P1) at t = 3, 6, 9, 12, 15; d(P2) at t = 4, 8, 12, 16; d(P3) at t = 5, 10, 15.

Reconstructed EDF schedule (at each unit, the ready job with the earliest absolute deadline runs; ties are broken in favor of the lower-numbered process):
Time  0-1:  P1        Time  1-2:  P2        Time  2-4:  P3
Time  4-5:  P1        Time  5-6:  P2        Time  6-7:  P1
Time  7-9:  P3        Time  9-10: P1        Time 10-11: P2
Time 11-12: P3        Time 12-13: P1        Time 13-14: P3
Time 14-15: P2        Time 15-16: P1

Every job meets its deadline, confirming the schedulability result from Step 1.
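The EDF schedule in this example can be reproduced with a small unit-time simulator. As with the RMS sketch, `edf_schedule` is a hypothetical name; tasks are (name, Ci, Ti) tuples with deadline equal to period, and ties on equal deadlines go to the task listed first.

```python
def edf_schedule(tasks, horizon):
    """Unit-time EDF simulation. tasks: list of (name, C, T), deadline = period.
    Ties on equal deadlines are broken by task order in the list."""
    remaining = {}   # name -> units left of the current job
    deadline = {}    # name -> absolute deadline of the current job
    timeline = []
    for t in range(horizon):
        for name, c, p in tasks:
            if t % p == 0:                   # new job released at each period start
                remaining[name] = c
                deadline[name] = t + p
        ready = [name for name, _, _ in tasks if remaining.get(name, 0) > 0]
        if ready:
            run = min(ready, key=lambda n: deadline[n])  # earliest deadline first
            remaining[run] -= 1
            timeline.append(run)
        else:
            timeline.append(None)
    return timeline


tasks = [("P1", 1, 3), ("P2", 1, 4), ("P3", 2, 5)]
print(edf_schedule(tasks, 16))
# ['P1', 'P2', 'P3', 'P3', 'P1', 'P2', 'P1', 'P3',
#  'P3', 'P1', 'P2', 'P3', 'P1', 'P3', 'P2', 'P1']
```

Because Python's `min` is stable, equal deadlines resolve to the earlier task in the list, matching the tie-breaking convention used in the worked schedule.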
