
Paul John F. Cadag
Operating System TTh (9:00 – 10:00)

1. Spool

(Simultaneous Peripheral Operations OnLine) The overlapping of low-speed operations
with normal processing. Spooling originated on mainframes to optimize slow operations
such as reading cards and printing: card input was read onto disk and printer output was
stored on disk, so the business data processing itself ran at high speed, receiving input
from disk and sending output to disk. Spooling is still used today to buffer data for
printers as well as for remote batch terminals.
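A minimal sketch of the idea in C, assuming a POSIX system; the slow_print() routine here is a made-up stand-in for a slow peripheral such as a printer. The application writes its output to a spool file on disk at full speed, and a separate drain step feeds the slow device afterwards.

/* Minimal spooling sketch: fast disk write now, slow device output later. */
#include <stdio.h>
#include <unistd.h>

/* Stand-in for a slow peripheral such as a printer. */
static void slow_print(const char *line) {
    usleep(100000);                      /* pretend each line takes 100 ms */
    fputs(line, stdout);
}

int main(void) {
    /* 1. Processing phase: output goes to a spool file, not to the device. */
    FILE *spool = fopen("job.spool", "w");
    if (!spool) return 1;
    for (int i = 0; i < 5; i++)
        fprintf(spool, "report line %d\n", i);   /* fast disk write */
    fclose(spool);

    /* 2. Spooler phase: drain the file to the slow device later; in a real
     *    system this overlaps with other processing.                       */
    char line[128];
    spool = fopen("job.spool", "r");
    while (fgets(line, sizeof line, spool))
        slow_print(line);
    fclose(spool);
    return 0;
}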

2. Interrupt

Is an asynchronous signal indicating the need for attention, or a synchronous event in
software indicating the need for a change in execution.
A hardware interrupt causes the processor to save its state of execution and begin executing an
interrupt handler.
Software interrupts are usually implemented as instructions in the instruction set, which cause a
context switch to an interrupt handler similar to that of a hardware interrupt.
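A user-space analogue, assuming a POSIX system: a signal handler behaves much like an interrupt handler in that the normal flow is suspended asynchronously, the handler runs, and execution then resumes. In this sketch, pressing Ctrl-C plays the role of the asynchronous event.

/* Asynchronous interruption sketch using a POSIX signal. */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

/* The "interrupt handler": kept tiny, it only records the event. */
static void handler(int signum) {
    (void)signum;
    got_signal = 1;
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_handler = handler;
    sigaction(SIGINT, &sa, NULL);        /* register the handler */

    while (!got_signal) {
        puts("doing normal work...");
        sleep(1);                        /* Ctrl-C "interrupts" this */
    }
    puts("handler ran; normal flow resumes and the program exits");
    return 0;
}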

3. Virtual memory

Is a computer system technique that gives an application program the impression that it
has contiguous working memory (an address space), while in fact it may be physically
fragmented and may even overflow onto disk storage.
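A small illustration of that illusion, assuming a POSIX system with mmap: the program reserves a large contiguous virtual range, and the operating system supplies physical pages only for the parts that are actually touched (and may later page them out to disk under memory pressure).

/* Virtual memory sketch: a large contiguous virtual range, lazily backed. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 1ULL << 30;             /* 1 GiB of virtual address space */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Only the pages we actually write are materialized in physical RAM. */
    strcpy(p, "hello");
    strcpy(p + len - 64, "world");
    printf("%s ... %s\n", p, p + len - 64);

    munmap(p, len);
    return 0;
}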

4. Firmware

Is a term often used to denote the fixed, usually rather small, programs and data
structures that internally control various electronic devices. Typical examples of devices
containing firmware range from end-user products such as remote controls or calculators,
through computer parts and devices like hard disks, keyboards, TFT screens or memory cards, all
the way to scientific instrumentation and industrial robotics.

5. Kernel

Is the central component of most computer operating systems; it can be thought of as the
bridge between applications and the actual data processing done at the hardware level. The
kernel's responsibilities include managing the system's resources and the communication
between hardware and software components.
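One way to see that bridge, assuming a POSIX system: the application never drives the terminal hardware itself; it asks the kernel through the write() system call, and the kernel's driver performs the actual device I/O on its behalf.

/* Kernel-as-bridge sketch: the application requests I/O, the kernel does it. */
#include <unistd.h>

int main(void) {
    const char msg[] = "hello via the kernel\n";
    /* Trap into the kernel; the device driver does the real hardware work. */
    write(STDOUT_FILENO, msg, sizeof msg - 1);
    return 0;
}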

6. Threading of execution
Results from a fork of a computer program into two or more concurrently running tasks.
The implementation of threads and processes differs from one operating system to another, but in
most cases, a thread is contained inside a process. Multiple threads can exist within the same
process and share resources such as memory, while different processes do not share these
resources.
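A minimal sketch with POSIX threads: two threads inside the same process increment one shared counter, something separate processes could not do without an explicit shared-memory mechanism. The mutex serializes access to the shared data.

/* Threads within one process sharing memory (POSIX threads). */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                     /* shared by all threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);      /* 200000: same address space */
    return 0;
}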

Operating systems schedule threads in one of two ways:

1. Preemptive multithreading - is generally considered the superior approach, as it
allows the operating system to determine when a context switch should occur. The disadvantage
of preemptive multithreading is that the system may make a context switch at an inappropriate
time, causing priority inversion or other negative effects that may be avoided by cooperative
multithreading.
2. Cooperative multithreading - on the other hand, relies on the threads themselves to
relinquish control once they reach a stopping point (see the sketch after this list). This can
create problems if a thread is waiting for a resource to become available.
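A minimal sketch of cooperative switching using the POSIX ucontext interface (task names and step counts here are illustrative): each task runs until it explicitly yields to the other, so a task that never yields would starve its peer.

/* Cooperative multithreading sketch with ucontext: explicit yields. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t ctx_main, ctx_a, ctx_b;

static void task_a(void) {
    for (int i = 0; i < 3; i++) {
        printf("task A step %d\n", i);
        swapcontext(&ctx_a, &ctx_b);     /* voluntarily yield to B */
    }
}

static void task_b(void) {
    for (int i = 0; i < 3; i++) {
        printf("task B step %d\n", i);
        swapcontext(&ctx_b, &ctx_a);     /* voluntarily yield to A */
    }
}

int main(void) {
    char stack_a[64 * 1024], stack_b[64 * 1024];

    getcontext(&ctx_a);
    ctx_a.uc_stack.ss_sp = stack_a;
    ctx_a.uc_stack.ss_size = sizeof stack_a;
    ctx_a.uc_link = &ctx_main;           /* return here when A finishes */
    makecontext(&ctx_a, task_a, 0);

    getcontext(&ctx_b);
    ctx_b.uc_stack.ss_sp = stack_b;
    ctx_b.uc_stack.ss_size = sizeof stack_b;
    ctx_b.uc_link = &ctx_main;
    makecontext(&ctx_b, task_b, 0);

    swapcontext(&ctx_main, &ctx_a);      /* start with task A */
    return 0;
}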
