
An operating system

is software, consisting of programs and data, that runs on computers, manages computer hardware resources, and provides common services for the execution of various application software. The operating system is the most important type of system software in a computer system. Without an operating system, a user cannot run an application program on their computer, unless the application program is self-booting. For hardware functions such as input, output, and memory allocation, the operating system acts as an intermediary between application programs and the computer hardware,[1][2] although the application code is usually executed directly by the hardware and will frequently call the OS or be interrupted by it. Operating systems are found on almost any device that contains a computer, from cellular phones and video game consoles to supercomputers and web servers. Examples of popular modern operating systems are BSD, Linux (Ubuntu, Fedora, openSUSE, Debian, etc.), Mac OS X, Microsoft Windows, and Unix.[3]

Types

Real-time
A real-time operating system is a multitasking operating system that aims at executing real-time applications. Real-time operating systems often use specialized scheduling algorithms to achieve deterministic behaviour. The main objective of real-time operating systems is quick and predictable response to events. They have an event-driven or time-sharing design, and often aspects of both. An event-driven system switches between tasks based on their priorities or on external events, while a time-sharing operating system switches tasks based on clock interrupts.

Multi-user vs. Single-user
A multi-user operating system allows multiple users to access a computer system concurrently. Time-sharing systems can be classified as multi-user systems, as they enable multiple users to access a computer through the sharing of time. Single-user operating systems, by contrast, are usable by only a single user at a time. Being able to have multiple accounts on a Windows operating system does not by itself make it a multi-user system; rather, only the network administrator is the real user. On a Unix-like operating system, however, two users can log in at the same time, and this capability makes it a multi-user operating system.

Multi-tasking vs. Single-tasking
When only a single program is allowed to run at a time, the system is classed as a single-tasking system; when the operating system allows the execution of multiple tasks at one time, it is classified as a multi-tasking operating system. Multi-tasking can be of two types: pre-emptive or co-operative. In pre-emptive multitasking, the operating system slices the CPU time and dedicates one slot to each of the programs. Unix-like operating systems such as Solaris and Linux support pre-emptive multitasking. Cooperative multitasking is achieved by relying on each process to give time to the other processes in a defined manner. Versions of MS Windows prior to Windows 95 supported cooperative multitasking; a minimal sketch of cooperative yielding appears below.
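The following sketch shows the cooperative model using the POSIX ucontext API (deprecated in recent standards but still available on glibc). The two tasks, the step counts, and the yield_to helper are illustrative assumptions, not part of any particular operating system:

```c
/* Minimal sketch of cooperative multitasking: each task keeps the CPU
 * until it voluntarily yields. Nothing preempts a running task. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx[2];
static int current = 0;

static void yield_to(int next) {           /* the cooperative "yield" */
    int prev = current;
    current = next;
    swapcontext(&task_ctx[prev], &task_ctx[next]);
}

static void task(int id) {
    for (int i = 0; i < 3; i++) {
        printf("task %d, step %d\n", id, i);
        yield_to(1 - id);                  /* hand the CPU to the other task */
    }
    swapcontext(&task_ctx[id], &main_ctx); /* done: return to main */
}

int main(void) {
    static char stacks[2][16384];
    for (int i = 0; i < 2; i++) {
        getcontext(&task_ctx[i]);
        task_ctx[i].uc_stack.ss_sp   = stacks[i];
        task_ctx[i].uc_stack.ss_size = sizeof stacks[i];
        task_ctx[i].uc_link = &main_ctx;
        makecontext(&task_ctx[i], (void (*)(void))task, 1, i);
    }
    swapcontext(&main_ctx, &task_ctx[0]);  /* start task 0 */
    return 0;
}
```

A task that never calls yield_to would starve the other task indefinitely, which is exactly the weakness that pre-emptive multitasking removes.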
Distributed
A distributed operating system manages a group of independent computers and makes them appear to be a single computer. The development of networked computers that could be linked to and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine; when the computers in a group work in cooperation, they form a distributed system.

Embedded
Embedded operating systems are designed to be used in embedded computer systems. They are designed to operate on small machines, such as PDAs, with limited autonomy, and they can operate with a limited number of resources. They are very compact and extremely efficient by design. Windows CE and MINIX 3 are examples of embedded operating systems.

OS Protection and Security


Computer protection and security mechanisms provided by an operating system must address the following requirements:

Confidentiality (or privacy): the requirement that information maintained by a computer system be accessible only by authorised parties (users, and the processes that run as or represent those users). Interception occurs when an unauthorised party gains access to a resource; examples include illicit file copying and the invocation of programs.

Integrity: the requirement that a computer system's resources can be modified only by authorised parties. Modification occurs when an unauthorised party not only gains access to a resource, such as data or a running process, but changes it.

Availability: the requirement that a computer system be accessible at required times by authorised parties. Interruption occurs when an unauthorised party reduces the availability of, or access to, a resource.

Authenticity: the requirement that a computer system can verify the identity of a user. Fabrication occurs when an unauthorised party inserts counterfeit data amongst valid data.

1 Assets and their Vulnerabilities

Hardware is mainly vulnerable to interruption, either by theft or by vandalism. Physical security measures are used to prevent these attacks.

Software is also vulnerable to interruption, as it is very easy to delete. Backups are used to limit the damage caused by deletion. Modification or fabrication through alteration (e.g. by viruses) is a major problem, as it can be hard to spot quickly. Software is also vulnerable to interception through unauthorised copying: this problem is still largely unsolved.

Data is vulnerable in many ways. Interruption can occur through the simple destruction of data files. Interception can occur through unauthorised reading of data files, or, more perniciously, through unauthorised analysis and aggregation of data. Modification and fabrication are also obvious problems with potentially huge consequences.

Communications are vulnerable to all types of threats. Passive attacks take the form of eavesdropping, and fall into two categories: reading the contents of a message, or, more subtly, analysing patterns of traffic to infer the nature of even secure messages. Passive attacks are hard to detect, so the emphasis is usually on prevention. Active attacks involve modification of a data stream, or creation of a false data stream. One entity may masquerade as another (presumably one with more or different privileges), perhaps by capturing and replaying an authentication sequence. Replay is a similar attack, usually on data. Message contents may also be modified, often to induce incorrect behaviour in other users. Denial-of-service attacks aim to inhibit the normal use of communication facilities. Active attacks are hard to prevent entirely, so the emphasis is usually on detection and damage control.

2 Protection

Multiprogramming involves the sharing of many resources, including the processor, memory, I/O devices, programs, and data. Protection of such resources runs along the following spectrum:

No protection may be adequate, e.g. if sensitive procedures are run at separate times.

Isolation implies that entities operate separately from each other in the physical sense.

Share all or nothing implies that an object is either totally private or totally public.

Share via access limitation implies that different entities enjoy different levels of access to an object, at the gift of the owner. The OS acts as a guard between entities and objects to enforce correct access.

Share via dynamic capabilities extends the former to allow rights to be varied dynamically.

Limit use of an object implies that not only is access to the object controlled, but the use to which it may be put also varies across entities.

The above spectrum is listed roughly in order of increasing fineness of control for owners, and also of increasing difficulty of implementation.

3 Intruders

Intruders and viruses are the two most publicised security threats. We identify three classes of intruders:

A masquerader is an unauthorised individual (an outsider) who penetrates a system to exploit legitimate users' accounts.

A misfeasor is a legitimate user (an insider) who accesses resources to which they are not privileged, or who abuses such privilege.

A clandestine user is an individual (an insider or an outsider) who seizes control of a system to evade auditing controls, or to suppress audit collection.

Intruders are usually trying to gain access to a system, or to increased privileges to which they are not entitled, often by obtaining the password for a legitimate account. Many methods of obtaining passwords have been tried: trying default passwords; exhaustively testing short passwords; trying words from a dictionary, or from a list of common passwords; collecting personal information about users; using a Trojan horse; eavesdropping on communication lines. The usual methods for protecting passwords are one-way encryption and limiting access to password files. However, passwords are inherently vulnerable. A sketch of one-way password storage follows.
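This is a minimal sketch of one-way password protection using the POSIX crypt() function (on glibc, link with -lcrypt; the salt string and password here are illustrative assumptions). Only the hash is stored; at login the candidate password is re-hashed with the same salt and the results compared:

```c
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <crypt.h>

int main(void) {
    /* At enrolment: hash the password ("$6$" selects SHA-512 on glibc).
     * crypt() returns a static buffer, so copy the result before reuse. */
    char *stored = strdup(crypt("correct horse", "$6$examplesalt$"));

    /* At login: re-hash the candidate using the stored hash as the salt
     * specification, and compare the two hashes. */
    const char *attempt = crypt("correct horse", stored);
    printf("login %s\n", strcmp(stored, attempt) == 0 ? "accepted" : "rejected");

    free(stored);
    return 0;
}
```

Because the stored value cannot feasibly be inverted, stealing the password file does not directly reveal passwords, although dictionary and brute-force attacks against the hashes remain possible.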

4 Malicious Software

The most sophisticated threats to computer systems come through malicious software, sometimes called malware. Malware attempts to cause damage to, or to consume the resources of, a target system. Malware can be divided into programs that can operate independently and those that need a host program, and also into programs that can replicate themselves and those that cannot.

A trap door is a secret entry point into a program, often left by the program's developers, or sometimes delivered via a software update.

A logic bomb is code embedded in a program that "explodes" when certain conditions are met, e.g. a certain date or the presence of certain files or users. Logic bombs also often originate with the developers of the software.

A Trojan horse is a useful (or apparently useful) program that contains hidden code to perform some unwanted or harmful function.

A virus is a program that can infect other programs by modification, as well as causing local damage. Such modification includes a copy of the virus, which can then spread further to other programs.

A worm is an independent program that spreads via network connections, typically using email, remote execution, or remote login to deliver or execute a copy of itself on another system, as well as causing local damage.

A zombie is an independent program that secretly takes over a system and uses that system to launch attacks on other systems, thus concealing the original instigator. Such attacks often involve further replication of the zombie itself. Zombies are often used in denial-of-service attacks.

The last three of these involve replication. In all cases, prevention is much easier than detection and recovery.

5 Trusted Systems

So far we have discussed protecting a given resource from attack by a given user. Another requirement is to protect a resource on the basis of levels of security, as in the military-style system where users are granted clearance to view certain categories of data. This is known as multi-level security. The basic principle is that a subject at a higher level may not convey information to a subject at a lower level against the wishes of the authorised user. This principle has two facets:

No read-up implies that a subject can only read objects of lesser or equal security level.

No write-down implies that a subject can only write objects of greater or equal security level.

These requirements are implemented by a reference monitor, which has three roles:

Complete mediation implies that the rules are imposed on every access.

Isolation implies that the monitor and its database are protected from unauthorised modification.

Verifiability implies that the monitor is provably correct.

Such a system is known as a trusted system. These requirements are very difficult to meet, both in terms of assuring correctness and in terms of delivering adequate performance. A minimal sketch of the two access rules appears below.
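The sketch encodes the no-read-up and no-write-down rules directly; the security levels and function names are illustrative assumptions:

```c
/* Minimal sketch of multi-level security access checks. */
#include <stdbool.h>
#include <stdio.h>

enum level { UNCLASSIFIED = 0, CONFIDENTIAL, SECRET, TOP_SECRET };

/* No read-up: a subject may read only objects at or below its own level. */
bool may_read(enum level subject, enum level object) {
    return object <= subject;
}

/* No write-down: a subject may write only objects at or above its own level. */
bool may_write(enum level subject, enum level object) {
    return object >= subject;
}

int main(void) {
    printf("SECRET reads CONFIDENTIAL:  %s\n",
           may_read(SECRET, CONFIDENTIAL) ? "allowed" : "denied");
    printf("SECRET writes CONFIDENTIAL: %s\n",
           may_write(SECRET, CONFIDENTIAL) ? "allowed" : "denied");
    return 0;
}
```

Together the two rules ensure that information can only flow upward in security level, never down, regardless of what individual subjects attempt.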

6 Protection and Security Design Principles

Saltzer and Schroeder (1975) identified a core set of principles for operating system security design:

Least privilege: every object (users and their processes) should work within a minimal set of privileges; access rights should be obtained by explicit request, and the default level of access should be none.

Economy of mechanisms: security mechanisms should be as small and simple as possible, aiding in their verification. This implies that they should be integral to an operating system's design, and not an afterthought.

Acceptability: security mechanisms must be robust yet non-intrusive. An intrusive mechanism is likely to be counter-productive, and avoided by users if possible.

Complete: mechanisms must be pervasive, with access control checked during all operations, including backup and maintenance tasks.

Open design: an operating system's security should not remain secret, nor be provided by stealth. Open mechanisms are subject to scrutiny, review, and continued refinement.

A process
A process is an execution stream in the context of a particular process state. An execution stream is a sequence of instructions. Process state determines the effect of the instructions; it usually includes (but is not restricted to):
o Registers
o Stack
o Memory (global variables and dynamically allocated memory)
o Open file tables
o Signal management information

Key concept: processes are separated: no process can directly affect the state of another process.

The process is a key OS abstraction that users see: the environment you interact with when you use a computer is built up out of processes. The shell you type into is a process. When you execute a program you have just compiled, the OS generates a process to run the program. Your web browser is a process. Organizing system activities around processes has proved to be a useful way of separating different activities into coherent units.

Two concepts: uniprogramming and multiprogramming.

Uniprogramming: only one process at a time. Typical example: DOS. Problem: users often wish to perform more than one activity at a time (load a remote file while editing a program, for example), and uniprogramming does not allow this. So DOS and other uniprogrammed systems added things like memory-resident programs that are invoked asynchronously, but these still have separation problems. One key problem with DOS is that there is no memory protection: one program may write over the memory of another program, causing weird bugs.

Multiprogramming: multiple processes at a time. Typical of Unix and of all currently envisioned new operating systems. It allows the system to separate activities cleanly, but introduces the resource-sharing problem: which processes get to use the physical resources of the machine, and when? One crucial resource is the CPU. The standard solution is preemptive multitasking: the OS runs one process for a while, then takes the CPU away from that process and lets another process run. It must save and restore process state. The key issue is fairness: the OS must ensure that all processes get their fair share of the CPU.

How does the OS implement the process abstraction? It uses a context switch to switch from running one process to running another. How does the machine implement a context switch? A processor has a limited amount of physical resources; for example, it has only one register set, yet every process on the machine has its own set of registers. The solution is to save and restore the hardware state on a context switch, storing the state in a Process Control Block (PCB). What is in the PCB? It depends on the hardware. Almost all machines save the registers and the Processor Status Word in the PCB. What about memory? Most machines allow memory from multiple processes to coexist in the physical memory of the machine; some require Memory Management Unit (MMU) changes on a context switch. But some early personal computers swapped all of a process's memory out to disk (!!!). A minimal sketch of a PCB appears below.
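The sketch shows what a PCB might contain and how a context switch uses it. The field layout, register count, and stub save/restore functions are illustrative assumptions; a real kernel performs these steps in hardware-specific assembly:

```c
/* Minimal sketch of a Process Control Block and a context switch. */
#include <stdio.h>
#include <stdint.h>

enum proc_state { READY, RUNNING, BLOCKED };

struct pcb {
    int             pid;             /* process identifier              */
    enum proc_state state;
    uint64_t        regs[16];        /* saved general-purpose registers */
    uint64_t        pc;              /* saved program counter           */
    uint64_t        psw;             /* saved processor status word     */
    void           *page_table;      /* MMU state for this process      */
    int             open_files[16];  /* open-file table                 */
};

/* Stand-ins for the hardware-specific assembly a real kernel would use. */
static void save_state(struct pcb *p)    { (void)p; /* store CPU regs into p */ }
static void restore_state(struct pcb *p) { (void)p; /* load CPU regs from p  */ }

static void context_switch(struct pcb *from, struct pcb *to) {
    save_state(from);                /* capture registers, PC, PSW of 'from' */
    from->state = READY;
    to->state = RUNNING;
    restore_state(to);               /* execution would now resume in 'to'   */
}

int main(void) {
    struct pcb a = { .pid = 1, .state = RUNNING };
    struct pcb b = { .pid = 2, .state = READY };
    context_switch(&a, &b);
    printf("pid %d is now running\n", b.pid);
    return 0;
}
```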

Operating systems are fundamentally event-driven systems: they wait for an event to happen, respond appropriately to the event, then wait for the next event. Examples:

A user hits a key, and the keystroke is echoed on the screen.

A user program issues a system call to read a file. The operating system figures out which disk blocks to bring in and generates a request to the disk controller to read those blocks into memory. The disk controller finishes reading in the disk block and generates an interrupt. The OS moves the read data into the user program and restarts the user program.

A Mosaic or Netscape user asks for a URL to be retrieved. This eventually generates requests to the OS to send request packets out over the network to a remote WWW server. The OS sends the packets. The response packets come back from the WWW server, interrupting the processor. The OS figures out which process should get the packets, then routes the packets to that process.

The time-slice timer goes off. The OS must save the state of the current process, choose another process to run, then give the CPU to that process.

When building an event-driven system with several distinct serial activities, threads are a key structuring mechanism of the OS. A thread is again an execution stream in the context of a thread state. The key difference between processes and threads is that multiple threads share parts of their state. Typically, multiple threads are allowed to read and write the same memory (recall that no process could directly access the memory of another process), but each thread still has its own registers. Each thread also has its own stack, although other threads can read and write that stack's memory.

What is in a thread control block? Typically just the registers. Nothing needs to be done to the MMU when switching threads, because all threads can access the same memory.

Typically, an OS will have a separate thread for each distinct activity. In particular, the OS will have a separate thread for each process, and that thread will perform OS activities on behalf of the process; in this case we say that each user process is backed by a kernel thread. When a process issues a system call to read a file, the process's thread takes over, figures out which disk accesses to generate, and issues the low-level instructions required to start the transfer. It then suspends until the disk finishes reading in the data. When a process starts up a remote TCP connection, its thread handles the low-level details of sending out network packets. Having a separate thread for each activity allows the programmer to program the actions associated with that activity as a single serial stream of actions and events; the programmer does not have to deal with the complexity of interleaving multiple activities on the same thread.

Why allow threads to access the same memory? Because inside the OS, threads must coordinate their activities very closely. If two processes issue read file system calls at close to the same time, the OS must serialize the disk requests appropriately. When one process allocates memory, its thread must find some free memory and give it to the process, and the OS must ensure that multiple threads allocate disjoint pieces of memory. Having threads share the same address space makes it much easier to coordinate activities: one can build data structures that represent system state and have threads read and write those data structures to figure out what to do when they need to process a request.

One complication that threads must deal with is asynchrony. Asynchronous events happen arbitrarily as a thread is executing, and may interfere with the thread's activities unless the programmer does something to limit the asynchrony.
Examples:

An interrupt occurs, transferring control away from one thread to an interrupt handler.

A time-slice switch occurs, transferring control from one thread to another.

Two threads running on different processors read and write the same memory.

Asynchronous events, if not properly controlled, can lead to incorrect behavior. Examples:

Two threads need to issue disk requests. The first thread starts to program the disk controller (assume it is memory-mapped, and that multiple writes are needed to specify a disk operation). In the meantime, the second thread runs on a different processor and also issues the memory-mapped writes to program the disk controller. The disk controller gets horribly confused and reads the wrong disk block.

Two threads need to write to the display. The first thread starts to build its request, but before it finishes, a time-slice switch occurs and the second thread starts its own request. The combination of the two threads issues a forbidden request sequence, and smoke starts pouring out of the display.

For accounting reasons the operating system keeps track of how much time is spent in each user program. It also keeps a running sum of the total amount of time spent in all user programs. Two threads increment their local counters for their processes, then concurrently increment the global counter. Their increments interfere, and the recorded total time spent in all user processes is less than the sum of the local times.

So programmers need to coordinate the activities of the multiple threads so that these bad things don't happen. The key mechanism is synchronization operations. These operations allow threads to control the timing of their events relative to events in other threads; appropriate use allows programmers to avoid problems like the ones outlined above. A sketch of protecting the shared counter with a mutex follows.
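This minimal sketch applies one such synchronization operation, a POSIX mutex, to the shared accounting counter from the example above, so that concurrent increments no longer interfere. The thread count and iteration count are illustrative assumptions (compile with -lpthread):

```c
#include <pthread.h>
#include <stdio.h>

static long total_time = 0;                        /* shared global counter */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *account(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        /* A per-process local counter would need no lock; the shared
         * total must only be updated inside the critical section. */
        pthread_mutex_lock(&lock);
        total_time += 1;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, account, NULL);
    pthread_create(&t2, NULL, account, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("total_time = %ld\n", total_time);      /* always 200000 */
    return 0;
}
```

Without the lock, the two read-modify-write sequences can interleave and lose updates, reproducing exactly the undercounting described above.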

File system
A file system (sometimes written as filesystem) is a method of storing and organizing arbitrary collections of data in a form that is human-readable. A file system organizes data into an easy-to-manipulate database of human-readable names for the data, usually with a human-readable hierarchical organization of the data, for storage, organization, manipulation, and retrieval by the computer's operating system. Each discrete collection of data in a file system is referred to as a computer file.

Programs do not necessarily require file names or directories to access data; direct data access is possible by hard-coding programs to directly access data regions on a storage device. Similarly, directories or folders are technically unnecessary, and all data could be arranged in a flat manner, identifying data using some external locating method, such as typed pages in a binder. Such a system would, however, be extremely difficult for day-to-day management and organization of data by human users.

File systems are used on data storage devices such as hard disks or CD-ROMs to maintain the physical location of the files. Beyond this, they might provide access to data on a file server by acting as clients for a network protocol (e.g., NFS, SMB, or 9P clients), or they may be virtual and exist only as an access method for virtual data (e.g., procfs). A file system is distinct from a directory service or a registry.

Windows makes use of the FAT and NTFS file systems. Unlike many other operating systems, Windows uses a drive-letter abstraction at the user level to distinguish one disk or partition from another. For example, the path C:\WINDOWS represents a directory WINDOWS on the partition represented by the letter C. The C drive is most commonly used for the primary hard disk partition, on which Windows is usually installed and from which it boots. This "tradition" has become so firmly ingrained that bugs arose in older applications which assumed that the drive the operating system was installed on was C. The tradition of using "C" for the drive letter can be traced to MS-DOS, where the letters A and B were reserved for up to two floppy disk drives. This in turn derived from CP/M in the 1970s, and ultimately from IBM's CP/CMS of 1967.

FAT

The File Allocation Table (FAT) file system, supported by all versions of Microsoft Windows, was an evolution of the one used in Microsoft's earlier operating system (MS-DOS, which in turn was based on 86-DOS). FAT ultimately traces its roots back to the short-lived M-DOS project and Standalone Disk BASIC before it. Over the years, various features have been added to it, inspired by similar features found on file systems used by operating systems such as Unix.

Older versions of the FAT file system (FAT12 and FAT16) had file-name length limits, a limit on the number of entries in the root directory of the file system, and restrictions on the maximum size of FAT-formatted disks or partitions. Specifically, FAT12 and FAT16 had a limit of 8 characters for the file name and 3 characters for the extension (such as .exe); this is commonly referred to as the 8.3 filename limit. VFAT, an extension to FAT12 and FAT16 introduced in Windows NT 3.5 and subsequently included in Windows 95, allowed long file names (LFN). A minimal sketch of name-based access through a file system follows.
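This sketch illustrates name-based access: the program never touches disk blocks directly, only the human-readable names the file system maintains, via the POSIX directory API. The path /tmp is an illustrative choice:

```c
#include <stdio.h>
#include <dirent.h>

int main(void) {
    DIR *dir = opendir("/tmp");              /* open a directory by name   */
    if (dir == NULL) {
        perror("opendir");
        return 1;
    }
    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL)   /* walk its entries           */
        printf("%s\n", entry->d_name);       /* the FS-maintained name     */
    closedir(dir);
    return 0;
}
```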

NTFS

NTFS, introduced with the Windows NT operating system, added ACL-based permission control. Hard links, multiple file streams, attribute indexing, quota tracking, sparse files, encryption, compression, and reparse points (directories working as mount points for other file systems; symlinks, junctions, and remote storage links) are also supported, though not all of these features are well documented.

Kernel mode and user mode


A kernel is the main program or component of an operating system; it's the thing that does all the work, and without it you have no operating system. Linux is actually nothing more than a kernel; the programs that make up the rest of the operating system are generally GNU software, so the entire operating system is often referred to as GNU/Linux.

User mode is the normal mode of operation for programs. Web browsers, calculators, etc. all run in user mode. They don't interact directly with the kernel's internals; instead, they just tell the kernel what needs to be done (a minimal sketch of such a request appears below), and the kernel takes care of the rest. Kernel mode, on the other hand, is where code communicates directly with the kernel. A good example of this would be device drivers. A device driver must tell the kernel exactly how to interact with a piece of hardware, so it must be run in kernel mode. Because of this close interaction, the kernel is also a lot more vulnerable to programs running in this mode, so it becomes highly crucial that drivers are properly debugged before being released to the public.

Finally, yes, updating Linux simply involves installing a new kernel because, as I said earlier, that's all 'Linux' really is. If you want to update the GNU programs that make up the rest of the operating system, you're going to have to use the tools that your Linux distribution provider supplied with the operating system.
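This sketch shows a user-mode program asking the kernel to do work on its behalf: on Linux, the write system call can be issued explicitly with syscall(2). The message text is an illustrative assumption:

```c
#include <unistd.h>
#include <sys/syscall.h>

int main(void) {
    const char msg[] = "hello from user mode\n";
    /* Control transfers to kernel mode for the duration of this call;
     * the kernel writes to file descriptor 1 (standard output). */
    syscall(SYS_write, 1, msg, sizeof msg - 1);
    return 0;
}
```

In ordinary code the C library wrapper write() is used instead of raw syscall(); either way, the CPU switches into kernel mode for the call and back to user mode afterwards.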

Linux
Linux (commonly pronounced /ˈlɪnəks/ LIN-əks in English,[4][5] also pronounced /ˈlɪnʊks/ LIN-uuks[6] in Europe) refers to the family of Unix-like computer operating systems using the Linux kernel. Linux can be installed on a wide variety of computer hardware, ranging from mobile phones, tablet computers, routers and video game consoles, to desktop computers, mainframes and supercomputers.[7][8][9][10] Linux is a leading server operating system, and runs the 10 fastest supercomputers in the world.[11]

The development of Linux is one of the most prominent examples of free and open source software collaboration; typically all the underlying source code can be used, freely modified, and redistributed, both commercially and non-commercially, by anyone under licenses such as the GNU General Public License.

Typically Linux is packaged in a format known as a Linux distribution for desktop and server use. Some popular mainstream Linux distributions include Debian (and its derivatives such as Ubuntu), Fedora and openSUSE. Linux distributions include the Linux kernel, supporting utilities and libraries, and usually a large amount of application software to fulfill the distribution's intended use. A distribution oriented toward desktop use may include the X Window System and the GNOME and KDE Plasma desktop environments. Other distributions may include a less resource-intensive desktop such as LXDE or Xfce, for use on older or less powerful computers. A distribution intended to run as a server may omit any graphical environment from the standard install and instead include other software, such as the Apache HTTP Server and an SSH server like OpenSSH. Because Linux is freely redistributable, it is possible for anyone to create a distribution for any intended use. Commonly used applications with desktop Linux systems include the Mozilla Firefox web browser, the OpenOffice.org or LibreOffice office application suites, and the GIMP image editor.

The name "Linux" comes from the Linux kernel, originally written in 1991 by Linus Torvalds. The main supporting user-space system tools and libraries from the GNU Project (announced in 1983 by Richard Stallman) are the basis for the Free Software Foundation's preferred name, GNU/Linux.

Unix
The Unix operating system was conceived and implemented in 1969 at AT&T's Bell Laboratories in the United States by Ken Thompson, Dennis Ritchie, Douglas McIlroy, and Joe Ossanna. First released in 1971, it was initially written entirely in assembly language, a common practice at the time. Later, in a key pioneering approach in 1973, Unix was re-written in the programming language C by Dennis Ritchie (with the exception of parts of the kernel and I/O). The availability of an operating system written in a high-level language allowed easier portability to different computer platforms. With a legal glitch forcing AT&T to license the operating system's source code to anyone who asked,[14] Unix quickly grew and became widely adopted by academic institutions and businesses. In 1984, AT&T divested itself of Bell Labs; free of the legal glitch requiring free licensing, Bell Labs began selling Unix as a proprietary product.

Memory management
is the act of managing computer memory. In its simpler forms, this involves providing ways to allocate portions of memory to programs at their request, and freeing that memory for reuse when it is no longer needed. The management of main memory is critical to the computer system.

Virtual memory systems separate the memory addresses used by a process from actual physical addresses, allowing separation of processes and increasing the effectively available amount of RAM by using paging or swapping to secondary storage. The quality of the virtual memory manager can have a big impact on overall system performance.

Garbage collection is the automated allocation and deallocation of computer memory resources for a program. It is generally implemented at the programming-language level, in contrast to manual memory management, the explicit allocation and deallocation of computer memory resources. Region-based memory management is an efficient variant of explicit memory management that can deallocate large groups of objects simultaneously; a minimal sketch of a region allocator appears at the end of this section.

Requirements

Memory management systems on multi-tasking operating systems usually deal with the following issues.

Relocation
In systems with virtual memory, programs in memory must be able to reside in different parts of the memory at different times. This is because there is often not enough free space in one location of memory to fit the entire program. The virtual memory management unit must also deal with concurrency. Memory management in the operating system should therefore be able to relocate programs in memory, and to handle memory references and addresses in the code of the program so that they always point to the right location in memory.

Protection
Processes should not be able to reference the memory of another process without permission. This is called memory protection, and it prevents malicious or malfunctioning code in one program from interfering with the operation of other running programs.

Sharing
Even though the memory for different processes is normally protected from each other, different processes sometimes need to be able to share information and therefore to access the same part of memory. Shared memory is one of the fastest techniques for inter-process communication.

Logical organization
Programs are often organized in modules. Some of these modules can be shared between different programs, some are read-only, and some contain data that can be modified. Memory management is responsible for handling this logical organization, which differs from the physical linear address space. One way to arrange this organization is segmentation.

Physical organization
Memory is usually divided into fast primary storage and slow secondary storage. Memory management in the operating system handles moving information between these two levels of memory.
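The sketch below illustrates region-based memory management as mentioned above: objects are carved out of one block by bumping a pointer, and the whole region is deallocated in a single step. The API names and sizes are illustrative assumptions (a production allocator would also align each allocation):

```c
#include <stdlib.h>
#include <stdio.h>

struct region {
    char  *base;
    size_t size, used;
};

struct region region_create(size_t size) {
    struct region r = { malloc(size), size, 0 };
    return r;
}

/* Allocate by bumping a pointer: O(1), and no per-object free exists. */
void *region_alloc(struct region *r, size_t n) {
    if (r->base == NULL || r->used + n > r->size)
        return NULL;                       /* region exhausted */
    void *p = r->base + r->used;
    r->used += n;
    return p;
}

/* Deallocate every object in the region in one step. */
void region_destroy(struct region *r) {
    free(r->base);
    r->base = NULL;
    r->size = r->used = 0;
}

int main(void) {
    struct region r = region_create(1024);
    int  *a = region_alloc(&r, 100 * sizeof(int));
    char *s = region_alloc(&r, 32);
    printf("a=%p s=%p used=%zu\n", (void *)a, (void *)s, r.used);
    region_destroy(&r);                    /* frees a and s together */
    return 0;
}
```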

External fragmentation
External fragmentation is the phenomenon in which free storage becomes divided into many small pieces over time.[1] It is a weakness of certain storage allocation algorithms, occurring when an application allocates and deallocates ("frees") regions of storage of varying sizes, and the allocation algorithm responds by leaving the allocated and deallocated regions interspersed. The result is that although free storage is available, it is effectively unusable because it is divided into pieces that are too small to satisfy the demands of the application. The term "external" refers to the fact that the unusable storage is outside the allocated regions. In a partitioned main memory, the wastage of an entire partition is likewise said to be external fragmentation.

For example, in dynamic memory allocation, a block of 1000 bytes might be requested, but the largest contiguous block of free space has only 300 bytes. Even if there are ten blocks of 300 bytes of free space, separated by allocated regions, one still cannot allocate the requested block of 1000 bytes, and the allocation request will fail; the sketch below simulates this case.

External fragmentation also occurs in file systems as many files of different sizes are created, change size, and are deleted. The effect is even worse if a file which is divided into many small pieces is deleted, because this leaves similarly small regions of free space.
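This minimal sketch simulates the 1000-byte example: the free list holds 3000 bytes in total, but no single hole is large enough, so a first-fit allocation fails. The free-list contents are illustrative assumptions:

```c
#include <stddef.h>
#include <stdio.h>

#define NFREE 10

int main(void) {
    size_t free_block[NFREE];
    size_t total = 0;
    for (int i = 0; i < NFREE; i++) {   /* ten free holes of 300 bytes each */
        free_block[i] = 300;
        total += free_block[i];
    }

    size_t request = 1000;
    int satisfied = 0;
    for (int i = 0; i < NFREE; i++)     /* first fit needs ONE hole >= request */
        if (free_block[i] >= request) { satisfied = 1; break; }

    printf("total free: %zu bytes, request: %zu bytes -> %s\n",
           total, request, satisfied ? "allocated" : "fails");
    return 0;
}
```

Compaction (sliding allocated regions together) or paging (removing the need for contiguity) are the usual remedies for this situation.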
