Introduction
The operating system layer
Protection
Processes and threads
Communication and invocation
Operating system architecture
Summary
6.1 Introduction
In this chapter we shall continue to focus on remote
invocations without real-time guarantees
An important theme of the chapter is the role of the system
kernel
The chapter aims to give the reader an understanding of the
advantages and disadvantages of splitting functionality
between protection domains (kernel and user-level code)
We shall examine the relationship between the operating
system layer and the middleware layer, and in particular how
well the requirements of middleware can be met by the
operating system:
Efficient and robust access to physical resources
The flexibility to implement a variety of resource-management policies
Introduction (2)
The task of any operating system is to provide
problem-oriented abstractions of the underlying
physical resources (for example, sockets rather
than raw network access):
Processors
Memory
Communications
Storage media
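The socket example above can be made concrete in a few lines of Python: a connected socket pair stands in for raw network access, hiding addressing and framing behind a problem-oriented read/write interface. A minimal sketch:

```python
import socket

# socket.socketpair() returns two already-connected endpoints; the
# application sees only a byte-stream abstraction, not the raw network.
a, b = socket.socketpair()
a.sendall(b"ping")      # write through the abstraction
data = b.recv(4)        # read from the peer endpoint
a.close()
b.close()
print(data)
```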
Introduction (3)
Network operating systems
They have a networking capability built into them and so can
be used to access remote resources. Access is network-transparent for some, but not all, types of resource.
Multiple system images
The nodes running a network operating system retain autonomy in
managing their own processing resources
Single system image
One could envisage an operating system in which users
are never concerned with where their programs run, or the
location of any resources. The operating system has
control over all the nodes in the system.
An operating system that produces a single system image
like this for all the resources in a distributed system is
called a distributed operating system.
Figure 6.1
System layers
[Layers at each node (Node 1, Node 2): applications and services; middleware; OS (kernel, libraries & servers: OS1, OS2) providing processes, threads, communication, etc.; computer & network hardware. The OS and hardware at each node together form the platform.]
Encapsulation: provide a useful service interface to their resources
Protection
Concurrent processing
Communication
Scheduling
Figure 6.2
Core OS functionality
Process manager: handles the creation of and operations upon processes
Thread manager: thread creation, synchronization and scheduling
Communication manager: communication between threads attached to different processes on the same computer
Memory manager: management of physical and virtual memory
Supervisor: dispatching of interrupts, system call traps and other exceptions
6.3 Protection
The first is to ensure that each of a file's two operations (read and
write) can be performed only by clients with the right to perform it;
illegitimate invocations would upset the normal use of the file.
The other type of illegitimate access, which we shall address here, is
where a misbehaving client sidesteps the operations that a resource
exports, invoking an operation the file was never designed to export,
such as setFilePointerRandomly.
We can protect resources from such illegitimate invocations by using a
kernel with hardware-enforced protection domains, or by using a
type-safe programming language (Java or Modula-3).
Figure 6.3
Address space
[The address space runs from 0 to 2^N: the text region at the bottom, the heap growing upwards above it, auxiliary regions, and the stack growing downwards from the top.]
Location policy
Determines which node should host a new process
selected for transfer. This decision may depend on the
relative loads of nodes, on their machine architectures and
on any specialized resources they may possess.
A load manager collects information about the nodes and
uses it to allocate new processes to nodes.
Centralized: one load-manager component
Hierarchical: several load managers organized in a tree structure
Decentralized: nodes exchange information with one another directly to make
allocation decisions
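The centralized variant can be sketched in a few lines of Python (all names here are hypothetical, not from the text): the load manager records each node's reported load and places every new process on the least-loaded node.

```python
class LoadManager:
    """Centralized load manager (sketch): collects load information
    from the nodes and uses it to allocate new processes."""

    def __init__(self):
        self.load = {}                  # node name -> current load

    def report(self, node, load):
        """A node reports its current relative load."""
        self.load[node] = load

    def place(self):
        """Location policy: host the new process on the least-loaded node."""
        node = min(self.load, key=self.load.get)
        self.load[node] += 1            # account for the newly placed process
        return node

lm = LoadManager()
lm.report("node1", 3)
lm.report("node2", 1)
chosen = lm.place()
print(chosen)
```

A real policy would also weigh machine architectures and specialized resources, as the text notes; this sketch uses load alone.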
Figure 6.4
Copy-on-write
[Region RB is copied from region RA. (a) Before the write: A's page table and B's page table point to shared frames in the kernel, and the pages are initially write-protected at the hardware level. (b) After the write: a page fault occurs on the first write; the page-fault handler allocates a new frame for process B and copies the original frame's data into it byte by byte.]
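The copy-on-write mechanism in Figure 6.4 can be illustrated with a toy simulation (all names hypothetical): two page tables share frames after a fork, and a write to a still-shared frame triggers the "page fault" path that allocates and copies a fresh frame.

```python
class Frame:
    def __init__(self, data):
        self.data = bytearray(data)

def fork_region(parent_table):
    """Copy-on-write fork: the child's page table shares the parent's
    frames; nothing is copied yet."""
    return list(parent_table)

def write(table, page, offset, byte, all_tables):
    frame = table[page]
    shared = sum(t[page] is frame for t in all_tables) > 1
    if shared:                      # simulated page fault: frame still shared
        frame = Frame(frame.data)   # allocate a new frame, copy the data
        table[page] = frame
    frame.data[offset] = byte

RA = [Frame(b"hello")]              # region RA with a single page
RB = fork_region(RA)                # RB copied from RA: frames shared
assert RB[0] is RA[0]               # before the write: shared frame
write(RB, 0, 0, ord("j"), [RA, RB]) # B's write triggers the copy
print(bytes(RA[0].data), bytes(RB[0].data))
```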
6.4.3 Threads
A thread is the unit of CPU scheduling, with its own program
counter, register set and stack. A process (task) comprises an
execution environment (code section, data section and other OS
resources) shared by one or more threads.
The next key aspect of a process to consider in more detail is
multithreading, which enables a server process to possess more
than one thread.
Figure 6.5
Client and server with threads
[Worker-pool architecture: in the client, thread 1 generates results and thread 2 makes requests to the server; in the server, an input-output thread performs receipt and queuing of incoming requests, which a pool of N worker threads then serves.]
A disadvantage of this architecture is its inflexibility.
Another disadvantage is the high level of switching between the
I/O and worker threads as they manipulate the shared queue.
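The worker-pool architecture above can be sketched with Python threads (a minimal illustration, not a full server): an I/O side enqueues requests on a shared queue, and a fixed pool of N workers dequeues and executes them.

```python
import queue
import threading

requests = queue.Queue()            # shared request queue ("receipt & queuing")
results = []
lock = threading.Lock()

def worker():
    """Worker thread: repeatedly dequeue a request and execute it."""
    while True:
        req = requests.get()
        if req is None:             # sentinel: shut this worker down
            return
        with lock:
            results.append(req * 2) # stand-in for real request handling

N = 4                               # fixed pool size: the inflexibility noted above
pool = [threading.Thread(target=worker) for _ in range(N)]
for t in pool:
    t.start()
for r in range(10):                 # the I/O thread enqueues incoming requests
    requests.put(r)
for _ in pool:
    requests.put(None)
for t in pool:
    t.join()
print(sorted(results))
```

Note that every request passes through the one shared queue, which is exactly where the switching overhead between I/O and worker threads arises.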
Figure 6.6
Alternative server threading architectures (see also Figure 6.5)
a. Thread-per-request: the I/O thread spawns a worker thread for each request. Advantage: the threads do not contend for a shared queue, and throughput is potentially maximized. Disadvantage: the overheads of the thread creation and destruction operations.
b. Thread-per-connection: associates a thread with each connection.
c. Thread-per-object: associates a thread with each remote object.
In each of these last two architectures the server benefits from lowered thread-management overheads compared with the thread-per-request architecture. Their disadvantage is that clients may be delayed while a worker thread has several outstanding requests but another thread has no work to perform.
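For contrast with the worker pool, a thread-per-request sketch (hypothetical, in the same style) spawns a short-lived thread for every request, so there is no shared queue, but creation and destruction costs are paid per request.

```python
import threading

results = []
lock = threading.Lock()

def handle(req):
    """Each request is handled by its own short-lived thread."""
    with lock:
        results.append(req * 2)     # stand-in for real request handling

threads = []
for req in range(5):                # I/O thread spawns one thread per request
    t = threading.Thread(target=handle, args=(req,))
    t.start()                       # thread-creation overhead paid here
    threads.append(t)
for t in threads:
    t.join()                        # thread-destruction overhead paid here
print(sorted(results))
```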
Figure 6.7
State associated with execution environments and threads
Execution environment: address space tables; communication interfaces, open files; semaphores, other synchronization objects; list of thread identifiers.
Thread: saved processor registers; priority and execution state (such as BLOCKED); software interrupt handling information; execution environment identifier.
Held in common: pages of address space resident in memory; hardware cache entries.
Thread scheduling
Thread implementation
The types of event that the kernel notifies to the user-level scheduler include the following:
SA blocked
An SA has blocked in the kernel, and the kernel is using a fresh SA to notify the
scheduler: the scheduler sets the state of the corresponding thread to
BLOCKED and can allocate a READY thread to the notifying SA.
SA unblocked
An SA that was blocked in the kernel has become unblocked and is ready to
execute at user level again; the scheduler can now return the corresponding
thread to the READY list. In order to create the notifying SA, the kernel has
either allocated the process a new virtual processor or preempted another SA in
the same process. In the latter case, it also communicates the preemption event
to the scheduler, which can re-evaluate its allocation of threads to SAs.
SA preempted
The kernel has taken away the specified SA from the process (although it may
do this to allocate a processor to a fresh SA in the same process); the
scheduler places the preempted thread in the READY list and re-evaluates the
thread allocation.
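The event handling described above can be sketched as a toy user-level scheduler (all names hypothetical) that reacts to the kernel's scheduler-activation upcalls by updating thread states and reallocating READY threads:

```python
READY, RUNNING, BLOCKED = "READY", "RUNNING", "BLOCKED"

class UserScheduler:
    """Toy user-level scheduler reacting to scheduler-activation upcalls."""

    def __init__(self, threads):
        self.state = {t: READY for t in threads}

    def _pick_ready(self):
        for t, s in self.state.items():
            if s == READY:
                self.state[t] = RUNNING
                return t
        return None

    def sa_blocked(self, thread):
        """An SA blocked in the kernel: mark its thread BLOCKED and
        allocate a READY thread to the fresh notifying SA."""
        self.state[thread] = BLOCKED
        return self._pick_ready()

    def sa_unblocked(self, thread):
        """A blocked SA became unblocked: its thread returns to READY."""
        self.state[thread] = READY

    def sa_preempted(self, thread):
        """The kernel preempted an SA: its thread goes back to READY."""
        self.state[thread] = READY

sched = UserScheduler(["t1", "t2"])
nxt = sched.sa_blocked("t1")    # t1 blocks; scheduler hands t2 to the new SA
sched.sa_unblocked("t1")        # t1 may be scheduled again later
print(nxt, sched.state)
```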
Figure 6.10
Scheduler activations
[Two processes, A and B, run on virtual processors assigned by the kernel. The kernel notifies each process's user-level scheduler of events such as P added, P idle, P needed, SA blocked, SA unblocked and SA preempted. A scheduler activation (SA) is a call from the kernel to a process.]
Figure 6.11
Invocations between address spaces
[(a) System call: a thread crosses the protection domain boundary into the kernel and back. (b) RPC within one computer: thread 1 in user address space 1 invokes, via the kernel, thread 2 in user address space 2. (c) RPC between computers: thread 1 in user space 1 invokes thread 2 in user space 2 on another machine, the call passing through kernel 1, the network and kernel 2.]
Figure 6.12
RPC delay against parameter size
[Client RPC delay plotted against requested data size (bytes; ticks at 1000 and 2000, with the network packet size marked). The delay is roughly proportional to the size until the size reaches a threshold at about network packet size.]
The following are the main components accounting for remote invocation delay,
besides network transmission times:
Marshalling
Data copying
Packet initialization
Thread scheduling and context switching
Waiting for acknowledgements
Marshalling: marshalling and unmarshalling, which involve copying and converting data, become a significant overhead as the amount of data grows.
Data copying: potentially, even after marshalling, message data is copied several times in the course of an RPC:
1. across the user-kernel boundary, between the client or server address space and kernel buffers;
2. across each protocol layer (for example, RPC/UDP/IP/Ethernet);
3. between the network interface and kernel buffers.
Packet initialization: this involves initializing protocol headers and trailers, including checksums. The cost is therefore proportional, in part, to the amount of data sent.
Thread scheduling and context switching: several system calls (that is, context switches) are made during an RPC, as stubs invoke the kernel's communication operations; one or more server threads is scheduled; and if the operating system employs a separate network-manager process, then each Send involves a context switch to one of its threads.
Waiting for acknowledgements: the choice of RPC protocol may influence delay, particularly when large amounts of data are sent.
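The marshalling component can be made tangible by timing the serialization of growing payloads. A rough sketch using Python's pickle as a stand-in marshaller (absolute numbers are machine-dependent; only the trend matters):

```python
import pickle
import time

def marshal_time(n):
    """Time the marshalling (serialization) of an n-element payload."""
    payload = list(range(n))
    start = time.perf_counter()
    data = pickle.dumps(payload)        # the marshalling step being measured
    return time.perf_counter() - start, len(data)

sizes = [1_000, 100_000]
measured = [marshal_time(n) for n in sizes]
for n, (t, length) in zip(sizes, measured):
    print(f"{n} elements -> {length} marshalled bytes in {t:.6f}s")
```

As the text predicts, the marshalled size (and typically the time) grows with the amount of data, which is why marshalling becomes a significant overhead for large arguments.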
Figure 6.13
A lightweight remote procedure call
[Lightweight RPC between a client and a server on the same machine, using a shared argument stack (the A stack): 1. the client stub copies the arguments onto the A stack; 2. it traps to the kernel; 3. the kernel makes an upcall into the server stub; 4. the server executes the procedure and copies the results; 5. return to the client via a trap to the kernel.]
Figure 6.14
Times for serialized and concurrent invocations
[Serialized invocations: the client processes the arguments, marshals and Sends each request, then waits to Receive, unmarshal and process the results before issuing the next request; the server Receives, unmarshals, executes the request, marshals and Sends the reply.
Concurrent invocations: the client marshals and Sends the second request while the first is still in transmission or being executed at the server, pipelining the invocations; replies are Received, unmarshalled and processed as they arrive, so the total elapsed time is lower than in the serialized case.]
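The timing benefit shown in Figure 6.14 can be approximated with threads and a simulated per-invocation network delay (a hypothetical sketch, not a real RPC system): three serialized invocations take roughly three times the delay, while three concurrent invocations overlap.

```python
import threading
import time

DELAY = 0.05                     # simulated per-invocation network delay

def invoke(i, out):
    """Stand-in for marshal / Send / Receive / unmarshal of one invocation."""
    time.sleep(DELAY)            # transmission plus remote execution
    out[i] = i * 2               # the unmarshalled result

out = {}
start = time.perf_counter()
for i in range(3):               # serialized: wait for each reply in turn
    invoke(i, out)
serial = time.perf_counter() - start

start = time.perf_counter()
threads = [threading.Thread(target=invoke, args=(i, out)) for i in range(3)]
for t in threads:                # concurrent: the delays overlap (pipelining)
    t.start()
for t in threads:
    t.join()
concurrent = time.perf_counter() - start
print(f"serialized {serial:.3f}s, concurrent {concurrent:.3f}s")
```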
Figure 6.15
Monolithic kernel and microkernel
[In the monolithic kernel, servers S1, S2, S3, S4, ... execute inside one large kernel; in the microkernel design, the microkernel provides only the most basic abstractions, principally address spaces, threads and local interprocess communication, while the servers run on top of it.]
Where these designs differ primarily is in the decision as
to what functionality belongs in the kernel and what is to
be left to server processes that can be dynamically loaded
to run on top of it.
Figure 6.16
The role of the microkernel
[The microkernel sits above the hardware; on top of it run language support subsystems, an OS emulation subsystem and other subsystems, which in turn support the middleware.]
Comparison