
Four conditions must hold simultaneously for a deadlock to occur:
Mutual Exclusion: Only one process may use a critical resource at a time.
Hold & Wait: A process may be allocated some resources while waiting for
others.
No Pre-emption: No resource can be forcibly removed from a process holding
it.
Circular Wait: A closed chain of processes exists such that each process holds
at least one resource needed by another process in the chain.
What is thrashing?
Thrashing is a phenomenon in virtual memory schemes in which the processor
spends most of its time swapping pages rather than executing instructions,
due to an inordinate number of page faults. In short, it is an instance of
high paging activity crowding out useful work.
Turnaround time is the interval between the submission of a job and its
completion. Response time is the interval between the submission of a request
and the first response to that request.
Virtual memory is a memory management technique that allows processes to
execute even when they are not entirely in physical memory. This is especially
useful when an executing program cannot fit in the physical memory.
Paging is a memory management scheme that permits the physical-address
space of a process to be noncontiguous. It avoids the considerable problem of
having to fit variable-sized memory chunks onto the backing store.
Preemptive multitasking allows an operating system to interrupt a running
program and switch the processor to another. This lets multiple programs run
without any one of them monopolizing the processor or hanging the whole
system.
Spooling is normally associated with printing. When different applications
want to send output to the printer at the same time, spooling writes each
print job to a disk file and queues the jobs to the printer one at a time.
Caching is the practice of using a region of fast memory to hold frequently
accessed data. A cache is usually much more efficient because of its
high access speed.
Context switch is the process of storing and restoring the state (context) of
a process or thread so that execution can be resumed from the same point at
a later time. This enables multiple processes to share a single CPU and is an
essential feature of a multitasking operating system. What constitutes the
context is determined by the processor and the operating system.
For Windows, critical sections are lighter-weight than mutexes.
Mutexes can be shared between processes, but always result in a system call
to the kernel which has some overhead.
Critical sections can only be used within one process, but have the advantage
that they only switch to kernel mode in the case of contention: uncontended
acquires, which should be the common case, are incredibly fast. In the case
of contention, they enter the kernel to wait on some synchronization primitive
(like an event or semaphore).
From a theoretical perspective, a critical section is a piece of code that must
not be run by multiple threads at once because the code accesses shared
resources.
A mutex is an algorithm (and sometimes the name of a data structure) that is
used to protect critical sections.
Semaphores and Monitors are common implementations of a mutex.
Dynamic dispatch: when a method is invoked on an object, the object itself
determines what code gets executed by looking up the method at run time in
a table associated with the object.
A reentrant lock is one where a process can claim the lock multiple times
without blocking on itself. It's useful in situations where it's not easy to keep
track of whether you've already grabbed a lock. If a lock is non-reentrant you
could grab the lock, then block when you go to grab it again, effectively
deadlocking your own process.
Reentrancy in general is a property of code where it has no central mutable
state that could be corrupted if the code was called while it is executing. Such
a call could be made by another thread, or it could be made recursively by an
execution path originating from within the code itself.
Re-entrant mutexes
A simple mutex is not re-entrant as only one thread can be in the critical
section at a given time. If you grab the mutex and then try to grab it again a
simple mutex doesn't have enough information to tell who was holding it
previously. To do this recursively you need a mechanism where each thread
has a token so you can tell who has grabbed the mutex. This makes the
mutex mechanism somewhat more expensive so you may not want to do it in
all situations.
The typical difference is that threads (of the same process) run in a shared
memory space, while processes run in separate memory spaces.
Process: Each process provides the resources needed to execute a program. A
process has a virtual address space, executable code, open handles to
system objects, a security context, a unique process identifier, environment
variables, a priority class, minimum and maximum working set sizes, and at
least one thread of execution. Each process is started with a single thread,
often called the primary thread, but can create additional threads from any of
its threads.
Thread: A thread is the entity within a process that can be scheduled for
execution. All threads of a process share its virtual address space and system
resources. In addition, each thread maintains exception handlers, a
scheduling priority, thread local storage, a unique thread identifier, and a set
of structures the system will use to save the thread context until it is
scheduled.
A thread is a subset of the process.
An application consists of one or more processes. A process, in the simplest
terms, is an executing program. One or more threads run in the context of the
process. A thread is the basic unit to which the operating system allocates
processor time. A thread can execute any part of the process code, including
parts currently being executed by another thread. A fiber is a unit of
execution that must be manually scheduled by the application. Fibers run in
the context of the threads that schedule them.
traceroute is a computer network diagnostic tool for displaying the route
(path) and measuring transit delays of packets across an Internet Protocol (IP)
network. The history of the route is recorded as the round-trip times of the
packets received from each successive host (remote node) in the route
(path); the sum of the mean times in each hop indicates the total time spent
to establish the connection. Traceroute proceeds unless all (usually three)
sent packets are lost more than twice; in that case the connection is lost and
the route cannot be evaluated. Ping, on the other hand, only computes the
final round-trip times from the destination point.
The function malloc is used to allocate a certain amount of memory during
the execution of a program. The malloc function will request a block of
memory from the heap. If the request is granted, the operating system will
reserve the requested amount of memory.
When the memory is no longer needed, you must release it by calling the
function free, so the block can be reused.
