
NOTE: These notes are by Allan Gottlieb, and are reproduced here, with superficial modifications, with his

permission. "I" in this text generally refers to Prof. Gottlieb, except in regards to administrative matters.

================ Start Lecture #2 (Jan. 28) ================

Chapter 1: Introduction
Homework: Read Chapter 1 (Introduction)

Levels of abstraction (virtual machines)

Software (and hardware, but that is not this course) is often implemented in layers. The higher layers use the facilities provided by lower layers. Alternatively said, the upper layers are written using a more powerful and more abstract virtual machine than the lower layers. Alternatively said, each layer is written as though it runs on the virtual machine supplied by the lower layer and in turn provides a more abstract (pleasant) virtual machine for the higher layer to run on. Using a broad brush, the layers are:

1. Applications and utilities
2. Compilers, Editors, Command Interpreter (shell, DOS prompt)
3. Libraries
4. The OS proper (the kernel, runs in privileged/kernel/supervisor mode)
5. Hardware

Compilers, editors, shell, loader, etc. run in user mode. The kernel itself is normally layered, e.g.

1. ...
2. Filesystems
3. Machine independent I/O
4. Machine dependent device drivers

The machine independent I/O part is written assuming ``virtual (i.e. idealized) hardware''. For example, the machine independent I/O portion simply reads a block from a ``disk''. But in reality one must deal with the specific disk controller. Often the machine independent part is more than one layer.

The term OS is not well defined. Is it just the kernel? How about the libraries? The utilities? All these are certainly system software, but it is not clear how much is part of the OS.

1.1: What is an operating system?

The kernel itself raises the level of abstraction and hides details. For example a user (of the kernel) can write to a file (a concept not present in hardware) and ignore whether the file resides on a floppy, a CD-ROM, or a hard magnetic disk. The kernel is a resource manager (so users don't conflict).

How is an OS fundamentally different from a compiler (say)? Answer: Concurrency! Per Brinch Hansen in Operating Systems Principles (Prentice Hall, 1973) writes:

The main difficulty of multiprogramming is that concurrent activities can interact in a time-dependent manner, which makes it practically impossible to locate programming errors by systematic testing. Perhaps, more than anything else, this explains the difficulty of making operating systems reliable.

1.2 History of Operating Systems


1. Single user (no OS).

2. Batch, uniprogrammed, run to completion.
   o The OS now must be protected from the user program so that it is capable of starting (and assisting) the next program in the batch.

3. Multiprogrammed
   o The purpose was to overlap CPU and I/O.
   o Multiple batches
     IBM OS/MFT (Multiprogramming with a Fixed number of Tasks)
       OS for the IBM System/360. The (real) memory is partitioned and a batch is assigned to a fixed partition. The memory assigned to a partition does not change. Jobs were spooled from cards into the memory by a separate processor (an IBM 1401). Similarly output was spooled from the memory to a printer (I believe a 1403) by the 1401.
     IBM OS/MVT (Multiprogramming with a Variable number of Tasks) (then other names)
       Each job gets just the amount of memory it needs. That is, the partitioning of memory changes as jobs enter and leave. MVT is a more ``efficient'' user of resources but is more difficult. When we study memory management, we will see that with varying size partitions questions like compaction and ``holes'' arise.
   o Time sharing
     This is multiprogramming with rapid switching between jobs (processes). Deciding when to switch and which process to switch to is called scheduling. We will study scheduling when we do processor management.

4. Personal Computers
   o Serious PC operating systems such as Linux, Windows NT (2000), and (the newest) MacOS are multiprogrammed OSes.
   o GUIs have become important. There is debate as to whether the GUI should be part of the kernel.
   o Early PC operating systems were uniprogrammed and their direct descendants still are (e.g. Windows ME).

1.3: OS Zoo
There is not as much difference between mainframe, server, multiprocessor, and PC OSes as Tanenbaum suggests. For example Windows NT and 2000 are used in all (except mainframes), and Unix and Linux are used on all.

1.3.1: Mainframe Operating Systems


Used in data centers, these systems offer tremendous I/O capabilities.

1.3.2: Server Operating Systems


Perhaps the most important servers today are web servers. Again, I/O (and network) performance is critical.

1.3.3: Multiprocessor Operating systems


These existed almost from the beginning of the computer age, but are no longer exotic.

1.3.4: PC Operating Systems (client machines)


Some OSes (e.g. Windows ME) are tailored for this application. One could also say they are restricted to this application.

1.3.5: Real-time Operating Systems


These are often embedded systems. One distinguishes soft from hard real time: in the latter, missing a deadline is a fatal error--sometimes literally. Very important commercially, but not covered much in this course.

1.3.6: Embedded Operating Systems

The OS is ``part of'' the device. For example, PDAs, microwave ovens, cardiac monitors. These are often real-time systems. Very important commercially, but not covered much in this course.

1.3.7: Smart Card Operating Systems


Very limited in power (both meanings of the word).

Multiple computers

Network OS: Make use of the multiple PCs/workstations on a LAN.
Distributed OS: A ``seamless'' version of the above. Not part of this course (but often in G22.2251).

1.4: Computer Hardware Review


Tanenbaum's treatment is very brief and superficial. Mine is even more so. The picture on the right is very simplified. For one thing, today separate buses are used for memory and video.

1.4.1: Processors
We will ignore processor concepts such as program counters and stack pointers. We will also ignore computer design issues such as pipelining and superscalar execution. We do, however, need the notion of a trap, that is, an instruction that atomically switches the processor into privileged mode and jumps to a pre-defined physical address.

1.4.2: Memory
We will ignore caches, but will (later) discuss demand paging, which is very similar although it uses completely disjoint terminology. In both cases, the goal is to combine large slow memory with small fast memory and achieve the effect of large fast memory.

The central memory in a system is called RAM (Random Access Memory). A key point is that it is volatile, i.e. the memory loses its value if power is turned off.

Disk Hardware

I don't understand why Tanenbaum discusses disks here instead of in the next section entitled I/O devices, but he does. I don't.

ROM / PROM / EPROM / EEPROM / Flash RAM

ROM (Read Only Memory) is used to hold data that will not change, e.g. the serial number of a computer or the program used in a microwave. ROM is non-volatile. But often this unchangeable data needs to be changed (e.g., to fix a bug). This gives rise first to PROM (Programmable ROM), which, like a CD-R, can be written once (as opposed to being mass produced already written like a CD-ROM), and then to EPROM (Erasable PROM; not Erasable ROM as in Tanenbaum), which is like a CD-RW. An EPROM is especially convenient if it can be erased with a normal circuit (EEPROM, Electrically EPROM, or Flash RAM).

Memory Protection and Context Switching

As mentioned above when discussing OS/MFT and OS/MVT, multiprogramming requires that we protect one process from another. That is, we need to translate the virtual addresses of each program into distinct physical addresses. The hardware that performs this translation is called the MMU or Memory Management Unit. When context switching from one process to another, the translation must change, which can be an expensive operation.
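To make the MMU's job concrete, here is a minimal sketch in C of the translation step, assuming a toy single-level page table with 4 KiB pages. PAGE_SIZE, NUM_PAGES, and page_table are invented names for illustration, not any real hardware's or OS's structures.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u            /* toy 4 KiB pages */
#define NUM_PAGES 16u              /* toy address space: 16 pages */

/* Hypothetical per-process page table: virtual page -> physical frame. */
static uint32_t page_table[NUM_PAGES] = { 7, 3, 12, 5 };

/* What an MMU does conceptually: split the virtual address into a page
   number and an offset, look the page up, and reattach the offset to the
   physical frame. */
uint32_t translate(uint32_t vaddr) {
    uint32_t page   = vaddr / PAGE_SIZE;
    uint32_t offset = vaddr % PAGE_SIZE;
    return page_table[page] * PAGE_SIZE + offset;
}

int main(void) {
    printf("virtual 0x%x -> physical 0x%x\n",
           (unsigned)0x1234u, (unsigned)translate(0x1234u));
    return 0;
}

In this picture a context switch amounts to pointing the MMU at the new process's table (and discarding any cached translations), which is part of why switching is expensive.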

1.4.3: I/O Devices

Show a real disk opened up and illustrate the components:

Platter, Surface, Head, Track, Sector, Cylinder, Seek time, Rotational latency, Transfer time

Devices are often quite complicated to manage and a separate computer, called a controller, is used to translate simple commands (read sector 123456) into what the device requires (read cylinder 321, head 6, sector 765). Actually the controller does considerably more, e.g. it calculates a checksum for error detection.

How does the OS know when the I/O is complete?

1. It can busy wait, constantly asking the controller if the I/O is complete. This is the easiest (by far) but has low performance. It is also called polling or PIO (Programmed I/O).
2. It can tell the controller to start the I/O and then switch to other tasks. The controller must then interrupt the OS when the I/O is done. Less waiting, but harder (concurrency!).
3. Some controllers can do DMA (Direct Memory Access), in which case they deal directly with memory after being started by the CPU. This takes work from the CPU and halves the number of bus accesses. We discuss this more in chapter 5. In particular, we explain the last point about halving bus accesses there.
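A minimal sketch of option 1 (busy waiting / PIO) in C. The register addresses, command encoding, and status bit are all invented for illustration; a real controller's registers are device specific and this would not run on an actual machine.

#include <stdint.h>

/* Hypothetical memory-mapped controller registers (addresses made up). */
#define CTRL_STATUS ((volatile uint32_t *)0xFFFF0000u)
#define CTRL_CMD    ((volatile uint32_t *)0xFFFF0004u)

#define CMD_READ_SECTOR 0x1u   /* invented command encoding */
#define STATUS_BUSY     0x1u   /* invented "I/O still in progress" bit */

/* Programmed I/O: issue the command, then spin until the controller says
   it is done.  Easy to write, but the CPU does nothing useful meanwhile. */
void read_sector_polled(uint32_t sector) {
    *CTRL_CMD = CMD_READ_SECTOR | (sector << 8);
    while (*CTRL_STATUS & STATUS_BUSY)
        ;   /* busy wait: keep asking "are you done yet?" */
}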

1.4.4: Buses

I don't care so much about the names of the buses, but the diagram given in the book doesn't show a modern design. The one on the right does.

1.5: Operating System Concepts


This will be very brief. Much of the rest of the course will consist in ``filling in the details''.

1.5.1: Processes
A program in execution. If you run the same program twice, you have created two processes. For example if you have two editors running in two windows, each instance of the editor is a separate process. Often one distinguishes the state or context (memory image, open files) from the thread of control. Then if one has many threads running in the same task, the result is a ``multithreaded process''. The OS keeps information about all processes in the process table. Indeed, the OS views the process as the entry. This is an example of an active entity being viewed as a data structure (cf. discrete event simulations), an observation made by Finkel in his (out of print) OS textbook. The set of processes forms a tree via the fork system call. The forker is the parent of the forkee, which is called a child. If the system blocks the parent until the child finishes, the ``tree'' is quite simple, just a line. But the parent (in many OSes) is free to continue executing and in particular is free to fork again, producing another child.

A process can send a signal to another process to cause the latter to execute a predefined function (the signal handler). This can be tricky to program since the programmer does not know when in his ``main'' program the signal handler will be invoked.

Each user is assigned a User IDentification (UID) and all processes created by that user have this UID. One UID is special (the superuser or administrator) and has extra privileges. A child has the same UID as its parent. It is sometimes possible to change the UID of a running process. A group of users can be formed and given a Group IDentification, GID. Access to files and devices can be limited to a given UID or GID.
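To make the signal mechanism concrete, here is a minimal sketch using the standard Posix signal() call; the handler body and the choice of SIGINT (delivered when you type cntl-C) are just for illustration.

#include <signal.h>
#include <unistd.h>

/* The "predefined function": invoked asynchronously, at some point the
   main program cannot predict -- which is what makes signals tricky. */
static void handler(int signum) {
    (void)signum;
    write(1, "got SIGINT\n", 11);   /* write() is safe inside a handler */
}

int main(void) {
    signal(SIGINT, handler);        /* install the handler for cntl-C */
    for (;;)
        pause();                    /* sleep; each signal wakes us once */
}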

1.5.2: Deadlocks
A set of processes each of which is blocked by a process in the set. The automotive equivalent, shown at right, is gridlock.

1.5.3: Memory Management


Each process requires memory. The loader produces a load module that assumes the process is loaded at location 0. The operating system ensures that the processes are actually given disjoint memory. Current operating systems permit each process to be given more (virtual) memory than the total amount of (real) memory on the machine.

1.5.4: Input/Output

There are a wide variety of I/O devices that the OS must manage. For example, if two processes are printing at the same time, the OS must not interleave the output. The OS contains device specific code (drivers) for each device as well as device-independent I/O code.

1.5.5: Files
Modern systems have a hierarchy of files. A file system tree.

In MSDOS the hierarchy is a forest, not a tree. There is no file or directory that is an ancestor of both a:\ and c:\. In Unix the existence of (hard) links weakens the tree to a DAG. Unix also has symbolic links, which, when used indiscriminately, permit directed cycles (i.e., the result is not a DAG).

You can name a file via an absolute path starting at the root directory or via a relative path starting at the current working directory. In addition to regular files and directories, Unix also uses the file system namespace for devices (called special files), which are typically found in the /dev directory. Often utilities that are normally applied to (ordinary) files can be applied as well to some special files. For example, when you are accessing a Unix system using a mouse and do not have anything serious going on (e.g., right after you log in), type the following command
cat /dev/mouse

and then move the mouse. You kill the cat by typing cntl-C. I tried this on my linux box and no damage occurred. Your mileage may vary. Before a file can be accessed, it must be opened and a file descriptor obtained. Many systems have standard files that are automatically made available to a process upon startup. These (initial) file descriptors are fixed:

standard input: fd=0
standard output: fd=1
standard error: fd=2
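A minimal sketch of these conventions in C: the three standard descriptors are already open when the process starts, and open() hands back the next free descriptor for subsequent read/close calls. The file name is arbitrary.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    char buf[512];

    write(2, "starting up\n", 12);            /* fd 2: standard error */

    int fd = open("notes.txt", O_RDONLY);     /* arbitrary file name */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    ssize_t n = read(fd, buf, sizeof buf);    /* read up to 512 bytes */
    if (n > 0)
        write(1, buf, (size_t)n);             /* fd 1: standard output */
    close(fd);
    return 0;
}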

A convenience offered by some command interpreters is a pipe or pipeline. The pipeline


dir | wc

which pipes the output of dir into a character/word/line counter, will give the number of files in the directory (plus other info).
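How a command interpreter might build such a pipeline is sketched below in C, using the Posix pipe(), fork(), dup2(), and execlp() calls; the two commands (ls and wc) and the minimal error handling are just for brevity.

#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    pipe(fds);                      /* fds[0] = read end, fds[1] = write end */

    if (fork() == 0) {              /* first child: the producer */
        dup2(fds[1], 1);            /* its standard output becomes the pipe */
        close(fds[0]); close(fds[1]);
        execlp("ls", "ls", (char *)0);
        _exit(127);                 /* reached only if exec fails */
    }
    if (fork() == 0) {              /* second child: the consumer */
        dup2(fds[0], 0);            /* its standard input becomes the pipe */
        close(fds[0]); close(fds[1]);
        execlp("wc", "wc", (char *)0);
        _exit(127);                 /* reached only if exec fails */
    }
    close(fds[0]); close(fds[1]);   /* parent keeps neither end open */
    while (wait(NULL) > 0)          /* wait for both children */
        ;
    return 0;
}

The point is that neither command knows it is talking to a pipe; each simply reads fd 0 and writes fd 1.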

1.5.6: Security
Files and directories normally have permissions. Normally there are at least rwx (read, write, execute) permissions for each of user, group, and world. A more general mechanism is an access control list. Often files have ``attributes'' as well. For example the linux ext2 file system supports a ``d'' attribute that is a hint to the dump program not to back up this file. When a file is opened, permissions are checked and, if the open is permitted, a file descriptor is returned that is used for subsequent operations.
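A minimal sketch of inspecting the nine rwx bits with the Posix stat() call; the file name is arbitrary.

#include <sys/stat.h>
#include <stdio.h>

int main(void) {
    struct stat sb;
    if (stat("notes.txt", &sb) != 0) {        /* arbitrary file name */
        perror("stat");
        return 1;
    }
    /* Nine permission bits: rwx for the owning user, the group, and world. */
    printf("user:  %c%c%c\n",
           (sb.st_mode & S_IRUSR) ? 'r' : '-',
           (sb.st_mode & S_IWUSR) ? 'w' : '-',
           (sb.st_mode & S_IXUSR) ? 'x' : '-');
    printf("group: %c%c%c\n",
           (sb.st_mode & S_IRGRP) ? 'r' : '-',
           (sb.st_mode & S_IWGRP) ? 'w' : '-',
           (sb.st_mode & S_IXGRP) ? 'x' : '-');
    printf("world: %c%c%c\n",
           (sb.st_mode & S_IROTH) ? 'r' : '-',
           (sb.st_mode & S_IWOTH) ? 'w' : '-',
           (sb.st_mode & S_IXOTH) ? 'x' : '-');
    return 0;
}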

1.5.7: The Shell or Command Interpreter (DOS Prompt)


The command line interface to the operating system. The shell permits the user to

Invoke commands
Pass arguments to the commands
Redirect the output of a command to a file or device
Pipe one command to another (as illustrated above via dir | wc)

1.6: System Calls


System calls are the way a user (i.e., a program) directly interfaces with the OS. Some textbooks use the term envelope for the component of the OS responsible for fielding system calls and dispatching them. On the right is a picture showing some of the OS components and the external events for which they are the interface. Note that the OS serves two masters. The hardware (below) asynchronously sends interrupts, and the user makes system calls and generates page faults.

What happens when a user executes a system call such as read()? We show a more detailed picture below, but at a high level what happens is

1. Normal function call (in C, Ada, Pascal, etc.).
2. Library routine (in C).
3. Small assembler routine.
   1. Move arguments to predefined place (perhaps registers).
   2. Poof (a trap instruction) and then the OS proper runs in supervisor mode.
   3. Fixup result (move to correct place).

A minimal sketch of this path follows.
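On Linux the library's small assembler stub can be made visible with the generic syscall() wrapper: it moves the system call number and arguments to the places the kernel expects, executes the trap, and hands the result back. Ordinary code would just call read(); this sketch only exposes the mechanism.

#include <sys/syscall.h>   /* SYS_read, SYS_write: system call numbers */
#include <unistd.h>        /* syscall(): generic trap wrapper (Linux) */

int main(void) {
    char buf[128];
    /* Equivalent to n = read(0, buf, sizeof buf): the wrapper performs
       the argument setup and the trap on our behalf. */
    long n = syscall(SYS_read, 0, buf, sizeof buf);
    if (n > 0)
        syscall(SYS_write, 1, buf, n);   /* echo what was typed */
    return 0;
}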

The following actions occur when the user executes the (Unix) system call
count = read(fd,buffer,nbytes)

which reads up to nbytes from the file described by fd into buffer. The actual number of bytes read is returned (it might be less than nbytes if, for example, an eof was encountered).

1. Push third parameter on to the stack.
2. Push second parameter on to the stack.
3. Push first parameter on to the stack.
4. Call the library routine.
5. Machine/OS dependent actions. One is to put the system call number for read in a well defined place, e.g., a specific register. This requires assembly language.

6. Trap to the kernel (assembly language). This enters the operating system proper and shifts the computer to privileged mode.
7. The envelope uses the system call number to access a table of pointers to find the handler for this system call.
8. The read system call handler processes the request (see below).
9. Some magic instruction returns to user mode and jumps to the location right after the trap.
10. The library routine returns (there is more; e.g., the count must be returned).
11. The stack is popped (ending the function call read).

A major complication is that the system call handler may block. Indeed for read it is likely. In that case a switch occurs to another process. This is far from trivial and is discussed later in the course.

A Few Important Posix/Unix/Linux and Win32 System Calls

Process Management

Posix     Win32                Description
fork      CreateProcess        Clone current process
exec(ve)  (none)               Replace current process
exit      ExitProcess          Terminate current process & return status
waitpid   WaitForSingleObject  Wait for a child to terminate

File Management

Posix     Win32                Description
open      CreateFile           Open a file & return descriptor
close     CloseHandle          Close an open file
read      ReadFile             Read from file to buffer
write     WriteFile            Write from buffer to file
lseek     SetFilePointer       Move file pointer
stat      GetFileAttributesEx  Get status info

Directory and File System Management

Posix     Win32                Description
mkdir     CreateDirectory      Create new directory
rmdir     RemoveDirectory      Remove empty directory
link      (none)               Create a directory entry
unlink    DeleteFile           Remove a directory entry
mount     (none)               Mount a file system
umount    (none)               Unmount a file system

Miscellaneous

Posix     Win32                Description
chdir     SetCurrentDirectory  Change the current working directory
chmod     (none)               Change permissions on a file
kill      (none)               Send a signal to a process
time      GetLocalTime         Elapsed time since 1 Jan 1970

The table above shows some system calls; the descriptions are accurate for Unix and close for Win32. To show how the four process management calls enable much of process management, consider the following highly simplified shell.
while (true)
    display_prompt()
    read_command(command)
    if (fork() != 0)        // true in parent, false in child
        waitpid(...)
    else
        execve(command)     // the command itself executes
        exit()
    endif
endwhile

1.7: OS Structure
I must note that Tanenbaum is a big advocate of the so called microkernel approach in which as much as possible is moved out of the (supervisor mode) microkernel into separate processes. In the early 90s this was popular. Digital Unix (now called Tru64) and Windows NT are examples. Digital Unix is based on Mach, a research OS from Carnegie Mellon University. Lately, the growing popularity of Linux has called into question the belief that ``all new operating systems will be microkernel based''.

1.7.1: Monolithic approach


The previous picture: one big program. The system switches from user mode to kernel mode during the poof and then back when the OS does a ``return''. But of course we can structure the system better, which brings us to...

1.7.2: Layered Systems


Some systems have more layers and are more strictly structured. An early layered system was ``THE'' operating system by Dijkstra. The layers were:

1. The operator
2. User programs
3. I/O mgt
4. Operator-process communication
5. Memory and drum management

The layering was done by convention, i.e. there was no enforcement by hardware and the entire OS is linked together as one program. This is true of many modern OS systems as well (e.g., linux). The multics system was layered in a more formal manner. The hardware provided several protection layers and the OS used them. That is, arbitrary code could not jump to or access data in a more protected layer.

1.7.3: Virtual Machines


Use a ``hypervisor'' (beyond supervisor, i.e. beyond a normal OS) to switch between multiple Operating Systems. Made popular by IBM's VM/CMS

Each App/CMS runs on a virtual 370. CMS is a single user OS. A system call in an App traps to the corresponding CMS. CMS believes it is running on the machine so issues I/O instructions but ...

... I/O instructions in CMS trap to VM/370. This idea is still used. A modern version (used to ``produce'' a multiprocessor from many uniprocessors) is ``Cellular Disco'', ACM TOCS, Aug. 2000. Another modern usage is the JVM, the ``Java Virtual Machine''.

1.7.4: Exokernels
Similar to VM/CMS but the virtual machines have disjoint resources (e.g., distinct disk blocks) so less remapping is needed.

1.7.5: Client Server

When implemented on one computer, a client server OS uses the microkernel approach, in which the microkernel just supplies interprocess communication and the main OS functions are provided by a number of separate processes. This does have advantages. For example an error in the file server cannot corrupt memory in the process server. This makes errors easier to track down. But it does mean that when a (real) user process makes a system call there are more process switches. These are not free. A distributed system can be thought of as an extension of the client server concept where the servers are remote.

Homework: 14, 18, 23
