
Unit -1

INTRODUCTION TO EMBEDDED
SYSTEMS
Introduction to embedded real time systems
The build process for embedded systems
Embedded system design process
Embedded computing applications
Types of memory
Memory management methods
Definition
• An embedded system is a computer system
designed for specific control tasks.
• It is dedicated to a specific task.
• Design engineers can optimize it to reduce the
size and cost of the product and to increase its
reliability and performance.
A “short” list of Embedded Systems
• Auto focus cameras • Network cards
• ATMs • Pagers
• Avionic systems • PDAs
• Automatic transmission • Photocopiers
• Battery Chargers • Portable Video games
• Camcorders • Point of Sale terminals
• Cell phones • Printers
• Cordless Phones • Scanners
• Digital Cameras • Satellite phones
• DVD players • Teleconferencing systems
• Electronic Toys/Games • Televisions
• Fax machines • Set-top boxes
• Medical Equipment • VCRs
• Microwave ovens • Video phones
• Modems • Washers and dryers
and many more ….
Embedded systems from real life
Mobile Phones and Base Stations
Mobile phones:
• Multiprocessor
▫ 8-bit/32-bit for UI; DSP for signals
▫ 32-bit in IR port; 32-bit in Bluetooth
• 8-100 MB of memory
• All custom chips
Base stations:
• Massive signal processing
▫ Several processing tasks per connected call
• Based on DSPs
▫ Standard or custom
▫ 100s of processors
Embedded systems from real life
Cars
• Multiple processors
▫ Up to 100
▫ Networked together
• Large diversity in processor types:
▫ 8-bit – door locks, lights, etc.
▫ 16-bit – most functions
▫ 32-bit – engine control, airbags
• Multiple networks
▫ Body, engine, telematics, media, safety
• Functions by embedded processing:
▫ ABS: Anti-lock braking systems
▫ ESP: Electronic stability control
▫ Airbags
▫ Efficient automatic gearboxes
▫ Theft prevention with smart keys
▫ Blind-angle alert systems
▫ ... etc ...
Embedded systems from real life

Extremely Large
• Functions requiring
computers:
▫ Radar
▫ Weapons
▫ Damage control
▫ Navigation
▫ basically everything

• Computers:
▫ Large servers
▫ 1000s of processors
Embedded systems from real life

Inside Your PC
• Custom processors
▫ Graphics, sound
• 32-bit processors
▫ IR, Bluetooth
▫ Network, WLAN
▫ Hard disk
▫ RAID controllers
• 8-bit processors
▫ USB
▫ Keyboard, mouse
Characteristics of Embedded Systems (1)

• Must be dependable:
▫ Reliability R(t): probability of the system working correctly, provided that it was working at t = 0
▫ Maintainability M(d): probability of the system working correctly d time units after an error occurred
▫ Availability A(t): probability of the system working at time t
▫ Safety: no harm to be caused
▫ Security: confidential and authentic communication
Even perfectly designed systems can fail if the assumptions
about the workload and possible errors turn out to be wrong.
Making the system dependable must not be an afterthought;
it must be considered from the very beginning.
Characteristics of Embedded Systems (2)

• Must be efficient:
▫ Energy efficient
▫ Code-size efficient (especially for systems on a chip)
▫ Run-time efficient
▫ Weight efficient
▫ Cost efficient
• Dedicated towards a certain application
Knowledge about behavior at design time can be used to
minimize resources and to maximize robustness.
• Dedicated user interface
(no mouse, keyboard and screen)
• Hybrid systems (analog + digital parts).
Characteristics of Embedded Systems (3)

• Many ES must meet real-time constraints:
▫ A real-time system must react to stimuli from the controlled
object (or the operator) within the time interval dictated by
the environment.
▫ For real-time systems, right answers arriving too late are
wrong.
▫ "A real-time constraint is called hard, if not meeting
that constraint could result in a catastrophe" [Kopetz, 1997].
▫ All other time-constraints are called soft.
▫ A guaranteed system response has to be explained without
statistical arguments.
• Frequently connected to the physical environment
through sensors and actuators.
Characteristics of Embedded Systems (4)

• Typically, ES are reactive systems:
"A reactive system is one which is in continual
interaction with its environment and executes at a pace
determined by that environment" [Bergé, 1995]
Behavior depends on input and current state.
▫ Hence an automata model is appropriate;
a model of computable functions is inappropriate.

Not every ES has all of the above characteristics.

Def.: Information processing systems having most of
the above characteristics are called embedded systems.
Why Embedded Systems Are Different

• Embedded systems are dedicated to specific tasks, whereas
PCs are generic computing platforms.
• Embedded systems are supported by a wide array of processors
and processor architectures.
• Embedded systems are usually cost sensitive.
• Embedded systems have real-time constraints.
• If an embedded system is using an operating system at all, it is
most likely using a real-time operating system (RTOS), rather
than Windows 9X, Windows NT, Windows 2000, Unix, Solaris, or
HP- UX.
• The implications of software failure are much more severe in
embedded systems than in desktop systems.
• Embedded systems often have power constraints.
• Embedded systems often must operate under extreme
environmental conditions.
Why Embedded Systems Are Different

• Embedded systems have far fewer system
resources than desktop systems.
• Embedded systems often store all their
object code in ROM.
• Embedded systems require specialized tools
and methods to be efficiently designed.
• Embedded microprocessors often have
dedicated debugging circuitry.
An embedded system example -- a digital camera

Digital camera chip

[Figure: block diagram of the digital camera chip: the lens and CCD feed a CCD preprocessor and A2D converter; a pixel coprocessor and D2A converter drive the output; supporting blocks include a JPEG codec, microcontroller, multiplier/accumulator, DMA controller, display controller, memory controller, ISA bus interface, UART and LCD controller]

• Single-functioned -- always a digital camera
• Tightly-constrained -- low cost, low power, small, fast
• Reactive and real-time -- only to a small extent
Embedded System – A simplified view

[Figure: analog input and analog output connected to an embedded computer containing a CPU and memory]

Build process
Typical Embedded System

Source : Embedded System Design Issues, Philip J. Koopman, ICCD96


The Embedded Design Life Cycle

Phase 1: Product Specification
Phase 2: HW / SW Partitioning
Phase 3: Iteration and Implementation
Phase 4: Detailed HW / SW Design
Phase 5: HW / SW Integration
Phase 6: Acceptance Testing
Phase 7: Maintenance and Upgrade

[Figure: cost to fix bugs across the phases of the life cycle]

[Figure: distribution of project time – System Specification and Design 37%, HW & SW Development / Debug (Hardware Design, Software Design) 51%, System Test and Product Release 12%]

Source: Embedded Systems Design, Arnold S. Berger
Product Specification
• A common factor among successful products was
that the design team (senior management,
marketing, sales, quality assurance, and
engineering) shared a common vision of the
product they were designing.
• In-circuit emulators
Hardware/Software partitioning
• Partitioning decision: deciding which portion of the
problem will be solved in hardware and which in
software.
• It is possible to implement that algorithm purely in
software (the CPU without the FPU example), purely
in hardware (the dedicated modem chip example),
or in some combination of the two (the video card
example).
Iteration and Implementation
• This phase represents the early design work before the
hardware and software teams build “the wall” between
them
• The hardware designers might be using simulation
tools, such as architectural simulators, to model the
performance of the processor and memory systems.
• The software designers are probably running code
benchmarks on self-contained, single-board
computers that use the target microprocessor.
Hardware/Software Integration
• Big Endian/Little Endian Problem
The Selection Process

• Is it available in a suitable implementation?


• Is it capable of sufficient performance?
• Is it supported by a suitable operating system?
• Is it supported by appropriate and adequate
tools?
Packaging the Silicon
Microprocessor versus Microcontroller
• Most embedded systems use microcontrollers instead of microprocessors
Microprocessor versus Microcontroller
• In a microprocessor-based system, the CPU and the various
I/O functions are packaged as separate ICs

• In a microcontroller-based system many, if not all, of the I/O
functions are integrated into the same package with the CPU.

• The advantages of the microcontroller's higher level of
integration are easy to see:
▫ Lower cost — One part replaces many parts.
▫ More reliable — Fewer packages, fewer interconnects.
▫ Better performance — System components are optimized
for their environment.
▫ Faster — Signals can stay on the chip.
▫ Lower RF signature — Fast signals don’t radiate from a
large PC board
Adequate Performance
Performance-Measuring Tools
• The Dhrystone benchmark is a simple C program that compiles
to about 2,000 lines of assembly code and is independent of
operating system services
Meaningful Benchmarking
• Real benchmarking involves carefully balancing
system requirements and variables
• it’s important to analyze the real-time behavior
of the processor
• Real-time performance can be generally
categorized into two buckets: interrupt handling
and task switching.
EEMBC
• The EEMBC benchmark consists of industry-specific tests

Memory Management

• Ideally programmers want memory that is


▫ large
▫ fast
▫ non volatile

• Memory hierarchy
▫ small amount of fast, expensive memory – cache
▫ some medium-speed, medium price main memory
▫ gigabytes of slow, cheap disk storage

• Memory manager handles the memory hierarchy



Basic Memory Management


Monoprogramming without Swapping or Paging

Three simple ways of organizing memory - an operating system with one user process

Multiprogramming with Fixed Partitions

• Fixed memory partitions


▫ separate input queues for each partition
▫ single input queue

Modeling Multiprogramming

Degree of multiprogramming

CPU utilization as a function of number of processes in memory


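The model behind this curve is worth stating: if each process waits for I/O a fraction p of its time, then with n processes in memory the probability that all of them are waiting at once is p^n, so CPU utilization is roughly 1 - p^n. A minimal C sketch of this estimate (the function name is illustrative, not from the source):

#include <math.h>
#include <stdio.h>

/* CPU utilization with n processes in memory, each waiting on I/O
   a fraction p of the time: utilization = 1 - p^n. */
double cpu_utilization(double p, int n)
{
    return 1.0 - pow(p, (double)n);
}

int main(void)
{
    /* With 80% I/O wait, four jobs keep the CPU busy about 59% of the time. */
    printf("%.2f\n", cpu_utilization(0.80, 4));
    return 0;
}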

Analysis of Multiprogramming
System Performance

• Arrival and work requirements of 4 jobs


• CPU utilization for 1 – 4 jobs with 80% I/O wait
• Sequence of events as jobs arrive and finish
▫ note: numbers show amount of CPU time jobs get in each interval

Relocation and Protection

• Cannot be sure where a program will be loaded in memory
▫ address locations of variables and code routines cannot be absolute
▫ must keep a program out of other processes' partitions

• Use base and limit values (see the sketch below)
▫ address locations are added to the base value to map to a physical address
▫ address locations larger than the limit value are an error
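A minimal C sketch of the base-and-limit check described above; the function name and types are illustrative, not from the source:

#include <stdint.h>
#include <stdbool.h>

/* Map a program-relative address to a physical address using base and
   limit registers. Returns false on a limit violation. */
bool relocate(uint32_t addr, uint32_t base, uint32_t limit, uint32_t *phys)
{
    if (addr >= limit)          /* addresses beyond the limit are an error */
        return false;
    *phys = base + addr;        /* relocation: add the base value */
    return true;
}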

Swapping (1)

Memory allocation changes as


▫ processes come into memory
▫ leave memory
Shaded regions are unused memory

Swapping (2)

• Allocating space for growing data segment


• Allocating space for growing stack & data segment

Memory Management with Bit Maps

• Part of memory with 5 processes, 3 holes


▫ tick marks show allocation units
▫ shaded regions are free
• Corresponding bit map
• Same information as a list

Memory Management with Linked Lists

Four neighbor combinations for the terminating process, X

Virtual Memory Paging (1)

The position and function of the MMU



Paging (2)
The relation between virtual addresses and physical memory addresses given by the page table

Page Tables (1)

Internal operation of the MMU with sixteen 4-KB pages



Page Tables (2)

[Figure: a top-level page table pointing to second-level page tables]

• 32-bit address with 2 page table fields
• Two-level page tables (sketched below)
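A minimal C sketch of a two-level translation for a 32-bit address with two page-table fields, assuming 4-KB pages and 10-bit table indices; the field widths and names are illustrative:

#include <stdint.h>

#define PT1(va)    (((va) >> 22) & 0x3FFu)   /* 10-bit top-level index      */
#define PT2(va)    (((va) >> 12) & 0x3FFu)   /* 10-bit second-level index   */
#define OFFSET(va) ((va) & 0xFFFu)           /* 12-bit offset (4-KB pages)  */

/* Walk the top-level table, then the second-level table, then add the offset.
   Present/modified bits of a real page table entry are ignored in this sketch. */
uint32_t translate(uint32_t va, uint32_t **top_level)
{
    uint32_t *second_level = top_level[PT1(va)];   /* walk the top level     */
    uint32_t frame = second_level[PT2(va)];        /* then the second level  */
    return (frame << 12) | OFFSET(va);             /* frame number + offset  */
}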

Page Tables (3)

Typical page table entry



TLBs – Translation Lookaside Buffers

A TLB to speed up paging



Inverted Page Tables

Comparison of a traditional page table with an inverted page table



Page Replacement Algorithms

• Page fault forces choice


▫ which page must be removed
▫ make room for incoming page

• Modified page must first be saved


▫ unmodified just overwritten

• Better not to choose an often used page


▫ will probably need to be brought back in soon

Optimal Page Replacement Algorithm

• Replace page needed at the farthest point in future


▫ Optimal but unrealizable

• Estimate by …
▫ logging page use on previous runs of process
▫ although this is impractical

Segmentation with Paging: Pentium (1)

A Pentium selector

Segmentation with Paging: Pentium (2)

• Pentium code segment descriptor


• Data segments differ slightly

Segmentation with Paging: Pentium (3)

Conversion of a (selector, offset) pair to a linear address



Segmentation with Paging: Pentium (4)

Mapping of a linear address onto a physical address



Segmentation with Paging: Pentium (5)


Protection on the Pentium


Unit - 2
EMBEDDED SYSTEM ORGANIZATION
Structural units in processor
Selection of processor & memory devices
DMA – I/O devices
Timer & counting devices
Serial communication using I2C, CAN, USB buses
Parallel communication using ISA, PCI, PCI-X buses
Device drivers
MDR, MAR, BIU, PC AND SP
BUSES
REGISTERS
ALU, FLPU
CACHES AOU
UNIT-III
PROGRAMMING AND SCHEDULING
1. INTEL I/O INSTRUCTIONS
2. SYNCHRONIZATION - TRANSFER RATE, LATENCY
3. INTERRUPT-DRIVEN INPUT AND OUTPUT - NONMASKABLE INTERRUPTS, SOFTWARE INTERRUPTS, PREVENTING INTERRUPT OVERRUN - DISABLING INTERRUPTS
4. MULTITHREADED PROGRAMMING - CONTEXT SWITCHING, PREEMPTIVE AND NON-PREEMPTIVE MULTITASKING, SEMAPHORES
5. SCHEDULING - THREAD STATES, PENDING THREADS, CONTEXT SWITCHING
1. INTEL I/O INSTRUCTIONS
I/O DEVICE CONTROL
• Devices controlled via special registers of a device controller
• CPU needs to control devices
▫ Read or write control signals or data in special registers of a device
controller
• General interfaces
▫ Port I/O: Special I/O machine instructions that trigger the bus to select
the proper I/O port & move the data into or out of a device register
 - E.g., IN, INS, OUT, OUTS on numerous Intel architectures
▫ Memory mapped I/O: The I/O registers of the device become part of
the linear memory space of the “RAM” of the computer; standard
memory load and store machine instructions are used
• E.g., a serial port might use port I/O, while a graphics
controller could have a memory-mapped image of the
computer screen
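As an illustration of the memory-mapped case above, the sketch below accesses device registers with ordinary loads and stores; the register addresses and bit mask are hypothetical, and port I/O would instead use the IN/OUT instructions shown on the following slides:

#include <stdint.h>

/* Hypothetical device registers mapped into the processor's address space. */
#define UART_STATUS ((volatile uint32_t *)0x40001000u)
#define UART_DATA   ((volatile uint32_t *)0x40001004u)
#define TX_READY    0x01u     /* hypothetical "transmitter ready" status bit */

uint32_t uart_status(void)
{
    return *UART_STATUS;      /* ordinary memory load reads the register */
}

void uart_write(uint8_t byte)
{
    *UART_DATA = byte;        /* ordinary memory store writes the register */
}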
I/O structure
• IN and OUT transfer data between an I/O device and the
microprocessor's accumulator (AL, AX or EAX).
▫ The I/O address is stored in:
• Register DX (variable addressing).
• The byte, p8, immediately following the opcode (fixed
address).
Example
IN AL,19H ; 8 bits are saved to AL from I/O port 19H
IN EAX, DX ; 32 bits are saved to EAX
OUT DX,EAX ; 32 bits are written to port DX from EAX
OUT 19H,AX ; 16 bits are written to I/O port 0019H
EXAMPLE I/O PORT INSTRUCTIONS
Contd..
• Only 16-bits (A0 to A15) are decoded.
• Address connections above A15 are undefined
for I/O instructions.
▫ 0000H-03XXH are used for the ISA bus.
Contd..
• Interaction between processor and memory: to execute an
instruction, the processor must be able to request three
things from memory:
1. instruction fetch (FETCH)
2. operand (variable) load (LOAD)
3. operand (variable) store (STORE)
• Memory itself only needs to be able to do two operations:
1. read (fetch or load)
2. write (store)
POLLING
• Polling, or polled operation, refers to actively
sampling the status of an external device by a
client program as a synchronous activity.
• Polling is most often used in the context of input/output
(I/O), and is also referred to as polled I/O or software-driven
I/O.
GENERAL METHODS FOR ACCESSING DEVICES

• Two general methods by which devices can be accessed
▫ Each of these methods can be used with either
ports or memory-mapped I/O
• Polling
▫ A busy wait, repeatedly checking a device for some
change in status
• Interrupts
▫ Using interrupt line of CPU to provide service to
devices
POLLING
• Also called programmed I/O
• CPU is used (e.g., by a device driver), in a program
code loop, to continuously check a device to see if it
is ready to accept data or commands, or produce
output
▫ Continuous polling
• In some cases, a process periodically checks a
device, and then blocks until next period elapsed
▫ Periodic polling
▫ E.g., a USB mouse might be periodically polled for data at
100 Hz rate
Loop (until device ready)
Check device
End-loop
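A minimal C sketch of the continuous-polling (busy-wait) loop above, assuming a hypothetical memory-mapped status register and ready bit:

#include <stdint.h>

#define DEV_STATUS ((volatile uint32_t *)0x40002000u)  /* hypothetical register */
#define DEV_READY  0x01u                               /* hypothetical ready bit */

/* Busy wait: the CPU repeatedly checks the device until it reports ready. */
void wait_until_ready(void)
{
    while ((*DEV_STATUS & DEV_READY) == 0)
        ;   /* the CPU does no other work while it waits */
}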
ISSUES WITH CONTINUOUS POLLING
• CPU is tied up in communication with device until
I/O operation is done
• No other work can be accomplished by CPU
• Typically one CPU in system, but many I/O devices
• May be useful if controller and device are fast
▫ Programmed I/O can be more efficient than interrupt-
driven I/O, if the number of cycles spent busy-waiting is
not excessive
• If polling interleaved with other processing
▫ Device may be idle because we don’t supply it with
information or a new request
▫ Device might have a fixed length buffer, and we may get
buffer overrun conditions, resulting in data loss
2. SYNCHRONIZATION - TRANSFER
RATE, LATENCY
SYNCHRONIZATION
• The CPU and I/O devices are physically
independent.
• Events occurring within an I/O device that
determine when data is available for input or
when the device is ready to receive output data
are not under the control of the computer.
• To transfer data to or from an I/O device
requires synchronizing the CPU to the I/O
device by somehow checking the status of the
device and waiting until it is ready to transfer.
TRANSFER RATE
• Transfer rate is simply a measure of the number
of bytes per second transferred between the CPU
and an external device.
• The fastest transfer rates are provided by DMA,
but at the expense of additional hardware
complexity.
• Polled waiting loops provide data rates that are a
bit slower, but still quite reasonable.
LATENCY
• Latency is a measure of the delay from the
instant that the device is ready until the time the
first data byte is transferred.
• Latency = response time
• Latency can be quite high with polled waiting
loop
• Latency is reasonable with interrupt driven I/O
• Latency is very low with DMA
3. INTERRUPT-DRIVEN INPUT AND OUTPUT – NONMASKABLE INTERRUPTS, SOFTWARE INTERRUPTS, PREVENTING INTERRUPT OVERRUN - DISABLING INTERRUPTS

INTERRUPT

• Every embedded system typically takes input from its
environment or its user. The interrupt mechanism is one of
the common ways to interact with the user.
[Figure: a microcontroller (µC) interacting with three devices, D1, D2 and D3]
Types of interrupt

[Figure: interrupts are classified into hardware and software interrupts, and further into normal interrupts and exceptions]
Hardware interrupt
• Interrupts are set by hardware components (for instance, the
timer component) or by peripheral devices such as a hard disk.
• There are two basic types of hardware interrupts: non-maskable
interrupts (NMI) and (maskable) interrupt requests (IRQ).
Software interrupt
• Software interrupts are initiated with an INT instruction
and, as the name implies, are triggered via software.
• For example, the instruction INT 33h issues the
interrupt with the hex number 33h.
• In the real-mode address space of the i386, 1024 (1K)
bytes are reserved for the interrupt vector table (IVT).
• The table contains an interrupt vector for each of the
256 possible interrupts. Every interrupt vector in
real mode consists of four bytes and gives the jump
address of the ISR (also known as the interrupt handler) for
the particular interrupt in segment:offset format.
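A small C sketch of the real-mode vector layout described above; each vector is four bytes (a 16-bit offset followed by a 16-bit segment), so the vector for interrupt number n starts at linear address n * 4 (the struct and function names are illustrative):

#include <stdint.h>

/* One real-mode interrupt vector: jump address in segment:offset form. */
struct ivt_entry {
    uint16_t offset;
    uint16_t segment;
};

/* Linear address of the vector for interrupt number n (IVT begins at 0000:0000). */
uint32_t ivt_vector_address(uint8_t n)
{
    return (uint32_t)n * 4u;   /* interrupt number multiplied by four */
}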
Operation of Interrupt

• When an interrupt is issued, the processor
automatically transfers the current flags, the
code segment (CS) and the instruction pointer
(IP) onto the stack.
• The interrupt number is internally multiplied by
four and then provides the offset in segment
00h where the interrupt vector for handling the
interrupt is located. The processor then loads
EIP and CS with the values in the table.
Exception
• This particular type of interrupt originates in the
processor itself, when the processor cannot handle an
internal error caused by system software on its own.
• There are three main classes of exceptions, discussed briefly below.
• Fault: A fault issues an exception prior to
completing the instruction. The saved EIP value then
points to the same instruction that created the exception.
Contd.
• Trap: A trap issues an exception after
completing the instruction execution. Traps are useful
when, despite the fact that the instruction was processed
without errors, program execution should be stopped, as
in the case of debugger breakpoints.
• Abort: This is not a good omen. Aborts
usually signal very serious failures, such as
hardware failures or invalid system tables.
Hardware interrupt processing
• Hardware interrupt request occurs → CPU
• Interrupt response sequence (CPU pushes flags)
• Interrupt Service Routine:
▫ Re-enable higher priority interrupts
▫ Preserve CPU registers
▫ Transfer data (also clears the interrupt request)
▫ Re-enable lower priority interrupts
▫ Restore CPU registers
▫ Pop EIP, CS, and EFlags and return to the interrupted code
• Interrupt complete
Interrupt-driven I/O cycle


Hardware response
• Hardware response corresponds to the sequence of events within the processor:

Action                | Detailed description                    | Bytes transferred
Push EFlags register  | ESP ← ESP - 4                           | 4 (stack write)
Disable interrupts    | IF flag ← 0                             | n/a
Push return segment   | ESP ← ESP - 4                           | 4 (stack write)
Push return offset    | ESP ← ESP - 4                           | 4 (stack write)
Identify interrupt    | Read interrupt type code from data bus  | 1 (I/O read)
Load CS and EIP       |                                         | 8 (IDT read) + 8 (GDT read)
Pentium Processor Event-Vector Table
Interrupt handler
• An interrupt handler, also known as an interrupt service
routine (ISR), is a callback subroutine in an operating system or
device driver whose execution is triggered by the reception of an
interrupt. Interrupt handlers have a multitude of functions, which
vary based on the reason the interrupt was generated and the speed
at which the interrupt handler completes its task.

• An interrupt handler is a low-level counterpart of event handlers.
These handlers are initiated by either hardware interrupts or
interrupt instructions in software, and are used for servicing
hardware devices and transitions between protected modes of
operation such as system calls.
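A minimal C sketch of the shape of an ISR for a hypothetical timer peripheral; the register address, flag bit and counter are illustrative, and the way the routine is attached to the interrupt vector is compiler- and processor-specific:

#include <stdint.h>

#define TIMER_STATUS ((volatile uint32_t *)0x40003000u)  /* hypothetical register */
#define TIMER_FLAG   0x01u                               /* hypothetical flag bit */

volatile uint32_t ticks;    /* shared between the ISR and the main program */

/* Keep the ISR short: acknowledge the request, do the minimum work, return. */
void timer_isr(void)
{
    *TIMER_STATUS = TIMER_FLAG;   /* clear the interrupt request */
    ticks++;
}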
4. MULTITHREADED PROGRAMMING –CONTEXT
SWITCHING, PREEMPTIVE AND NON-PREEMPTIVE
MULTITASKING, SEMAPHORES.

• Kernel
▫ Piece of a multitasking system that controls tasks
▫ Primary task = context switching
▫ Extra ROM and RAM needed -> a problem for single-chip processors
▫ RAM gets eaten
▫ Extra CPU time needed (2 to 5%)
• Scheduler (or: dispatcher)
▫ Controls which task will run
▫ Mostly priority based
▫ Priority-based kernel = gives the highest priority task that is ready to run control of the CPU
MULTITASKING
• Multitasking is the ability of a computer to run more
than one program, or task , at the same time.
Multitasking contrasts with single-tasking, where one
process must entirely finish before another can begin.
MS-DOS is primarily a single-tasking environment,
while Windows 3.1 and Windows NT are both multi-
tasking environments.
• Within multitasking, there are two major sub-
categories: preemptive and non-preemptive (or
cooperative).
• In non-preemptive multitasking , use of the
processor is never taken from a task; rather, a task
must voluntarily yield control of the processor before
any other task can run. Windows 3.1 uses non-
preemptive multitasking for Windows applications.
• Preemptive multitasking differs from non-preemptive multitasking in
that the operating system can take control of the processor without the
task's cooperation. (A task can also give it up voluntarily, as in non-
preemptive multitasking.) The process of a task having control taken
from it is called preemption. Windows NT uses preemptive
multitasking for all processes except 16-bit Windows 3.1 programs.
As a result, a Windows NT application cannot take over the processor
in the same way that a Windows 3.1 application can.
• A preemptive operating system takes control of the processor from a
task in two ways:
1. When a task's time quantum (or time slice) runs out. Any given task
is only given control for a set amount of time before the operating
system interrupts it and schedules another task to run.
2. When a task that has higher priority becomes ready to run. The
currently running task loses control of the processor when a task
with higher priority is ready to run regardless of whether it has time
left in its quantum or not.
Preemptive and non-preemptive kernels
• non-preemptive kernel
▫ also: "cooperative multitasking"
▫ A task has to give up the CPU itself
▫ releasing the CPU has to be done fast
▫ small interrupt latency
▫ every task can run completely
before the CPU is released
▫ less danger for corruption
(non-reentrant functions)
▫ bad response on high-priority
tasks
▫ crash of a task, or endless
loop --> ??
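A rough C sketch of the cooperative (non-preemptive) model above: tasks are run from a table and each must return quickly, i.e. give up the CPU voluntarily; all names are illustrative and not tied to any particular kernel:

/* Cooperative "run-to-completion" loop: a task that never returns
   would hang the whole system, which is exactly the risk noted above. */
typedef void (*task_fn)(void);

static void poll_keypad(void)    { /* read the keys, then return  */ }
static void update_display(void) { /* refresh the display, return */ }

static task_fn tasks[] = { poll_keypad, update_display };

int main(void)
{
    for (;;) {
        for (unsigned i = 0; i < sizeof tasks / sizeof tasks[0]; i++)
            tasks[i]();          /* each task voluntarily yields by returning */
    }
}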
Preemptive and non-preemptive kernels
• Preemptive kernel
▫ Used when response time important
▫ eg. microC/OS-II
▫ Highest priority task ALWAYS gets control
▫ Highest priority tasks gets
immediate control
▫ response time very short
▫ Non-reentrant functions
cannot be used!
▫ After execution of task
continue with highest priority
task (not interrupted task)
5. SCHEDULING-THREAD STATES,
PENDING THREADS, CONTEXT SWITCHING

• Scheduling is a key concept in computer multitasking,
multiprocessing operating system, and real-time operating
system designs.
• Scheduling refers to the way processes are assigned to run
on the available CPUs, since there are typically many more
processes running than there are available CPUs. This
assignment is carried out by software known as a scheduler
and dispatcher.
THREAD STATES
Life Cycle of a Thread
• born state
▫ The thread state in which a new thread begins life.
• dead state
▫ The state of a thread after it has been terminated.
• sleeping state
▫ A blocked thread state which can be transitioned to ready when notified of a
timeout event.
• waiting state
▫ A blocked thread state which can be transitioned to ready when notified of an event.
• The blocked state can effectively be divided into the sleeping state
(blocked for a designated time interval), the waiting state (waiting for an
asynchronous event), and the blocked state (waiting for an I/O event).
Thread States and Transitions

[Figure: state diagram — running to ready via Thread::Yield (voluntary or involuntary); running to blocked via Thread::Sleep (voluntary); blocked to ready via Thread::Wakeup; ready to running via Scheduler::Run]
Thread Operations
• Create - thread creation is a subset of process creation, and
is faster.
• Exit (terminate)
• Suspend
• Resume
• Sleep
• Awaken
• Cancel (Cancel signals can be masked.)
• Join
• join
▫ Thread operation in which the calling thread is blocked
until the thread it joins terminates.
Thread Implementation Considerations
• synchronous signal
▫ signal generated due to the execution of the currently running thread.
• asynchronous signal
▫ signal generated for reasons unrelated to the current instruction of the running
thread.
• signal mask
▫ A data structure which blocks specified signals from being delivered to a thread.
• pending signal
▫ A signal which has not yet been delivered.
• A signal can be pending because the recipient thread is not running, or has
masked the signal.
• signal handler
▫ Code that is executed in response to a particular signal type.
Thread states
• Inactive: the thread is unknown to the scheduler
• Pending: the thread is waiting for an external
event or a shared resource
• Ready: the thread is waiting for the scheduler to
give it control
• Running: the thread is currently running
Pending thread

[Figure: state diagram showing transitions among the Inactive, Pending, Ready, and Running states]
Context switching

[Figure: during a context switch the running thread moves to the Ready state and a ready thread starts Running]

In computing, a context switch is the process of storing and restoring the state
(context) of a process or thread so that execution can be resumed from the same point
at a later time. This enables multiple processes to share a single CPU and is an essential
feature of a multitasking operating system. What constitutes the context is determined
by the processor and the operating system.

Context switches are usually computationally intensive, and much of the design of
operating systems is to optimize the use of context switches. Switching from one
process to another requires a certain amount of time for doing the administration –
saving and loading registers and memory maps, updating various tables and lists etc.
A context switch can mean a register context switch, a task context switch, a stack
frame switch, a thread context switch, or a process context switch.
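As a concrete illustration of saving and restoring a context, the sketch below uses the POSIX <ucontext.h> functions to switch between two user-level contexts on a desktop host; an RTOS performs the same save/restore in assembly on the target processor:

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[16384];

static void task(void)
{
    printf("task: running\n");
    swapcontext(&task_ctx, &main_ctx);   /* save task context, resume main */
    printf("task: resumed\n");
}

int main(void)
{
    getcontext(&task_ctx);                       /* initialize the context       */
    task_ctx.uc_stack.ss_sp = task_stack;        /* give the task its own stack  */
    task_ctx.uc_stack.ss_size = sizeof task_stack;
    task_ctx.uc_link = &main_ctx;                /* return here when task ends   */
    makecontext(&task_ctx, task, 0);

    swapcontext(&main_ctx, &task_ctx);   /* save main context, run the task */
    printf("main: back in main\n");
    swapcontext(&main_ctx, &task_ctx);   /* resume the task where it left off */
    printf("main: done\n");
    return 0;
}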
Semaphores
• In computer science, particularly in operating systems, a semaphore is a variable or abstract
data type that is used for controlling access, by multiple processes, to a common resource in a
parallel programming or a multi user environment.
• A useful way to think of a semaphore is as a record of how many units of a particular
resource are available, coupled with operations to safely (i.e., without race conditions) adjust
that record as units are required or become free, and, if necessary, wait until a unit of the
resource becomes available. Semaphores are a useful tool in the prevention of race
conditions; however, their use is by no means a guarantee that a program is free from these
problems. Semaphores which allow an arbitrary resource count are called counting
semaphores, while semaphores which are restricted to the values 0 and 1 (or
locked/unlocked, unavailable/available) are called binary semaphores.
• The semaphore concept was invented by Dutch computer scientist Edsger Dijkstra in 1962 or
1963,[1] and has found widespread use in a variety of operating systems.
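A minimal sketch of a binary semaphore protecting a shared counter, using the POSIX semaphore and pthread APIs on a desktop host; an RTOS exposes the same pend/post (P/V) pattern through its own calls, so this is only an illustration:

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t lock;              /* binary semaphore (initial count 1)  */
static int shared_counter;      /* resource shared by the two threads  */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&lock);        /* P / pend: take the unit, or block    */
        shared_counter++;       /* critical section                     */
        sem_post(&lock);        /* V / post: give the unit back         */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&lock, 0, 1);      /* count of 1 makes it a binary semaphore */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("%d\n", shared_counter);   /* 200000 with the semaphore in place */
    return 0;
}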
