
Technocrats Institute of Technology

Bhopal
Department of Computer Science & Engineering
(1st year)
Subject: Basic Computer Engineering
Subject Code: BE-205

Faculty:
Vikas Kumar Tiwari
Assistant Professor
TIT

Vikas kumar tiwari BE205

Page 1

Unit 1

Introduction of Computer
Computer
A computer is an electronic machine that takes data and instructions as input from the user, processes the data, and provides useful information. A computer is a programmable machine. The two principal characteristics of a computer are: it responds to a specific set of instructions in a well-defined manner, and it can execute a prerecorded list of instructions (a program).

Classification of computer
According to Size and Power

Computers can be generally classified by size and power as follows, though there is considerable overlap:
Personal computer: a small, single-user computer based on a microprocessor.
Microcomputer: a microcomputer is defined as a computer built around a microprocessor.
Workstation: a powerful, single-user computer. A workstation is like a personal computer, but it has a more powerful microprocessor and a higher-quality monitor.
Minicomputer: a multi-user computer capable of supporting from 10 to hundreds of users simultaneously.
Mainframe: a powerful multi-user computer capable of supporting many hundreds or thousands of users simultaneously.
Supercomputer: an extremely fast computer that can perform hundreds of millions of instructions per second.

CPU - Central Processing Unit


Pronounced as separate letters, CPU is the abbreviation for central processing unit, sometimes referred to simply as the central processor. The CPU is the brain of the computer, where most calculations take place. In terms of computing power, the CPU is the most important element of a computer system.


Two typical components of a CPU are the following:


1. The arithmetic logic unit (ALU), which performs arithmetic and logical operations.
2. The control unit (CU), which extracts instructions from memory and decodes and executes them, calling on
the ALU when necessary.
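The ALU's role can be sketched as a small dispatch table mapping operation codes to arithmetic and logical functions (the operation names below are illustrative, not a real instruction set):

```python
# A minimal sketch of an ALU: each opcode selects an arithmetic or
# logical function applied to the two operands.
def alu(op, a, b):
    ops = {
        "ADD": lambda x, y: x + y,   # arithmetic operations
        "SUB": lambda x, y: x - y,
        "AND": lambda x, y: x & y,   # logical (bitwise) operations
        "OR":  lambda x, y: x | y,
    }
    return ops[op](a, b)

print(alu("ADD", 6, 7))            # 13
print(alu("AND", 0b1100, 0b1010))  # 8
```

The control unit would select `op` after decoding an instruction, then hand the operands to the ALU.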

GENERATION OF COMPUTER
A generation, in computer terminology, is a change in the technology with which computers are built. Initially, the term generation was used to distinguish between varying hardware technologies. Nowadays, however, a generation includes both hardware and software, which together make up an entire computer system.
There are five computer generations known to date. Each generation is discussed below along with its time period and characteristics. The dates given against each generation are approximate and generally accepted.
Following are the main five generations of computers:

First Generation
The period of the first generation was 1946-1959.
The first generation of computers used vacuum tubes as the basic components for memory and for the circuitry of the CPU (Central Processing Unit). These tubes, like electric bulbs, produced a lot of heat and were prone to frequent burnouts; the installations were therefore very expensive and could be afforded only by very large organisations. In this generation, mainly batch-processing operating systems were used, with punched cards, paper tape, and magnetic tape as input and output devices. Programming was done in machine code and by means of electrically wired boards.
Examples of computers of this generation: ENIAC, EDVAC, UNIVAC, IBM-701, IBM-650.

The main features of First Generation are:


Vacuum tube technology
Unreliable
Supported machine language only
Very costly
Generated a lot of heat
Slow input/output devices


Huge size
Need of A.C.
Non-portable
Consumed a lot of electricity

Second Generation
The period of second generation was 1959-1965.
This generation using the transistor were cheaper, consumed less power, more compact in size, more reliable and
faster than the first generation machines made of vaccum tubes.In this generation, magnetic cores were used as
primary memory and magnetic tape and magnetic disks as secondary storage devices.
In this generation assembly language and high level programming language like FORTRAN, COBOL were used.
There was Batch processing and Multiprogramming Operating system used
The main features of Second Generation are:
Use of transistors
Reliable as compared to First generation computers
Smaller size as compared to First generation computers
Generate less heat as compared to First generation computers
Consumed less electricity as compared to First generation computers
Faster than first generation computers
Still very costly
A.C. needed
Support machine and assembly languages
Examples: IBM 1620, IBM 7094, CDC 1604, CDC 3600.

Third Generation
The period of the third generation was 1965-1971.
The third generation of computers is marked by the use of Integrated Circuits (ICs) in place of transistors. A single IC has many transistors, resistors, and capacitors along with the associated circuitry. The IC was invented by Jack Kilby. This development made computers smaller in size, more reliable, and more efficient. In this generation, remote processing, time-sharing, real-time, and multiprogramming operating systems were used.
High-level languages (FORTRAN-II to IV, COBOL, PASCAL, PL/1, BASIC, ALGOL-68, etc.) were used during this generation.
The main features of Third Generation are:
IC used
More reliable
Smaller size
Generate less heat
Faster
Lesser maintenance


Still costly
A.C needed
Consumed lesser electricity
Support high level language
Examples: IBM-360 series, Honeywell-6000 series, PDP (Programmed Data Processor) series, IBM-370/168, TDC-316.

Fourth Generation
The period of the fourth generation was 1971-1980.
The fourth generation of computers is marked by the use of Very Large Scale Integrated (VLSI) circuits. VLSI circuits, having about 5,000 transistors and other circuit elements with their associated circuits on a single chip, made it possible to build the microcomputers of the fourth generation. Fourth-generation computers became more powerful, compact, reliable, and affordable. As a result, they gave rise to the personal computer (PC) revolution.
In this generation, time-sharing, real-time, network, and distributed operating systems were used.
High-level languages like C, C++, DBASE, etc. were used in this generation.

The main features of Fourth Generation are:


VLSI technology used
Very cheap
Portable and reliable
Use of PC's
Very small size
Pipeline processing
No A.C. needed
Concept of internet was introduced
Great developments in the fields of networks
Computers became easily available
Examples: DEC 10, STAR 1000, PDP 11, CRAY-1(Super Computer), CRAY-X-MP(Super Computer)
Fifth Generation
The period of the fifth generation is 1980 to date.
In the fifth generation, VLSI technology became ULSI (Ultra Large Scale Integration) technology, resulting in the production of microprocessor chips having ten million electronic components.
This generation is based on parallel-processing hardware and AI (Artificial Intelligence) software. AI is an emerging branch of computer science concerned with the means and methods of making computers think like human beings.
High-level languages like C, C++, Java, .Net, etc. are used in this generation.
AI includes:
Robotics
Neural networks


Game Playing
Development of expert systems to make decisions in real life situations.
Natural language understanding and generation.
The main features of Fifth Generation are:
ULSI technology
Development of true artificial intelligence
Development of Natural language processing
Advancement in Parallel Processing
Advancement in Superconductor technology
More user friendly interfaces with multimedia features
Availability of very powerful and compact computers at cheaper rates
Examples: Desktop, Laptop, NoteBook, UltraBook, ChromeBook

Registers
A register consists of a group of flip-flops with a common clock input. Registers are commonly used to store and
shift binary data. A counter is constructed from two or more flip-flops which change state in a prescribed sequence.
In computer architecture, a processor register is a small amount of storage available as part of a CPU or other
digital processor.
User-accessible registers
The most common division of user-accessible registers is into data registers and address registers.
Data registers can hold numeric values such as integer and floating-point values, as well as characters, small bit
arrays and other data. In some older and low end CPUs, a special data register, known as the accumulator, is used
implicitly for many operations.
Address registers hold addresses and are used by instructions that indirectly access primary memory.
Some processors contain registers that may only be used to hold an address or only to hold numeric values (in some
cases used as an index register whose value is added as an offset from some address); others allow registers to hold
either kind of quantity. A wide variety of possible addressing modes, used to specify the effective address of an
operand, exist.
The stack pointer is used to manage the run-time stack. Rarely, other data stacks are addressed by dedicated address
registers, see stack machine.

Memory Buffer Register (MBR): used for storing data received from, or sent to, memory. The MBR is the register in a computer's processor (central processing unit, CPU) that stores the data being transferred to and from the immediate access store. It acts as a buffer, allowing the processor and memory units to act independently without being affected by minor differences in operation. A data item is copied to the MBR ready for use at the next clock cycle, when it can be either used by the processor or stored in main memory.


The Memory Data Register : It is used for storing operands and data. MDR is the register of a computer's control
unit that contains the data to be stored in the computer storage (e.g. RAM), or the data after a fetch from the
computer storage.
The MDR is a two-way register. When data is fetched from memory and placed into the MDR, it is written to in one
direction. When there is a write instruction, the data to be written is placed into the MDR from another CPU register,
which then puts the data into memory.
In a computer, the Memory Address Register (MAR) is a CPU register that either stores the memory address from
which data will be fetched to the CPU or the address to which data will be sent and stored.
The program counter, PC, is a special-purpose register that is used by the processor to hold the address of the next
instruction to be executed.
Instruction Register (IR): in computing, the instruction register is the part of a CPU's control unit that stores the instruction currently being executed or decoded.
Accumulator (ACC): used for storing the results produced by the arithmetic and logic unit.
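The interplay of these registers can be sketched as a toy fetch-decode-execute loop; the three-instruction "ISA" (LOAD, ADD, HALT) and the memory contents are invented purely for illustration:

```python
# Toy machine: instructions and data share one memory; PC, MAR, MBR,
# IR, and ACC cooperate to fetch and execute each instruction.
memory = [("LOAD", 5), ("ADD", 6), ("HALT", 0), None, None, 10, 32]

pc, acc = 0, 0
running = True
while running:
    mar = pc                 # MAR holds the address to fetch from
    mbr = memory[mar]        # MBR receives the word read from memory
    ir = mbr                 # IR holds the instruction being decoded
    pc += 1                  # PC advances to the next instruction
    op, operand = ir
    if op == "LOAD":
        acc = memory[operand]      # load data into the accumulator
    elif op == "ADD":
        acc += memory[operand]     # ALU result lands in the accumulator
    elif op == "HALT":
        running = False

print(acc)  # 42
```

Real CPUs perform the same cycle in hardware, with the MDR playing the two-way data role described above.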

Bus Architecture

Memory bus
The memory bus (also called the system bus, since it interconnects the subsystems) interconnects the processor with the memory system and also connects to the I/O bus.


It carries three sets of signals: the address bus, the data bus, and the control bus.

System Bus
A system bus's characteristics are chosen according to the needs of the processor: its speed, and the word length for instructions and data. The processor's internal bus(es) differ in their characteristics from the external system bus(es).

Address Bus
Through the address bus, the processor issues the address of the instruction byte or word to the memory system. Through the address bus, the processor's execution unit, when required, issues the address of the data (byte or word) to the memory system.
Data Bus
When the processor issues the address of an instruction, it gets back the instruction through the data bus. When it issues the address of data, it loads the data through the data bus; when it issues the address of data for a store, it stores the data in memory through the data bus. A 32-bit data bus fetches, loads, or stores a 32-bit instruction or data item at one time.
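As a sketch of that last point, packing a 32-bit word to and from the 4-byte representation that would cross such a bus in one transfer (the value chosen is arbitrary):

```python
# A 32-bit data bus moves 4 bytes per transfer; struct shows the
# byte-level view of one 32-bit word.
import struct

word = 0xDEADBEEF                        # one 32-bit data item
raw = struct.pack("<I", word)            # its 4-byte (little-endian) form
print(len(raw))                          # 4
print(hex(struct.unpack("<I", raw)[0]))  # 0xdeadbeef
```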

Fig: Buses interconnecting the processor's functional units with the memory and I/O systems

Memory System
There are two kinds of computer memory: primary and secondary. Primary memory is accessible directly by the processing unit; RAM is an example of primary memory. As soon as the computer is switched off, the contents of the


primary memory are lost. You can store and retrieve data much faster with primary memory than with secondary memory. Secondary memory, such as floppy disks, magnetic disks, etc., is not accessed directly by the processing unit. Primary memory is more expensive than secondary memory; because of this, the size of primary memory is smaller than that of secondary memory.
Computer memory is used to store two things:
i) instructions to execute a program, and
ii) data.
When the computer is doing any job, the data that have to be processed are stored in the primary memory. This data may come from an input device like the keyboard, or from a secondary storage device like a floppy disk.
The following terms related to the memory of a computer are discussed below:

Random Access Memory (RAM)

The primary storage is referred to as random access memory (RAM) because it is possible to randomly select and use any location of the memory to directly store and retrieve data, and it takes the same time to access any address of the memory as the first address. It is also called read/write memory. The storage of data and instructions inside the primary storage is temporary: it disappears from RAM as soon as the power to the computer is switched off. Memories which lose their content on failure of the power supply are known as volatile memories, so we can say that RAM is a volatile memory.

Read Only Memory (ROM)


There is another memory in the computer, called Read Only Memory (ROM). ICs inside the PC form the ROM. The storage of programs and data in ROM is permanent. The ROM stores some standard processing programs supplied by the manufacturer to operate the personal computer.
The ROM can only be read by the CPU; it cannot be changed. The basic input/output program stored in the ROM examines and initializes the various equipment attached to the PC when the power is switched on. Memories which do not lose their content on failure of the power supply are known as non-volatile memories; ROM is a non-volatile memory.

Memory Hierarchy


Software
Software is a set of programs.


Application software
Application software is a defined subclass of computer software that employs the capabilities of a computer directly
to a task that the user wishes to perform. Application software is all the computer software that causes a computer
to perform useful tasks (compare with Computer viruses) beyond the running of the computer itself. A specific
instance of such software is called a software application, application or app.
System software (or systems software) is software, such as an operating system, designed to operate and control the computer hardware and to provide a platform for running application software. System software is used to run the application software.

Computer Ethics
Ethics is a set of moral principles that govern the behavior of a group or individual. Computer ethics, therefore, is a set of moral principles that regulates the use of computers. Some common issues of computer ethics include intellectual property rights (such as copyrighted electronic content), privacy concerns, and how computers affect society.
For example, while it is easy to duplicate copyrighted electronic (or digital) content, computer ethics would suggest
that it is wrong to do so without the author's approval.

Computer Applications
e-Business
eBusiness (e-Business), or Electronic Business, is the practice of conducting business via the Internet. This includes the buying and selling of goods and services, along with providing technical or customer support through the Internet.

Bioinformatics
Bioinformatics has become an important part of many areas of biology. In experimental molecular biology,
bioinformatics techniques such as image and signal processing allow extraction of useful results from large amounts
of raw data. At a more integrative level, it helps analyze and catalogue the biological pathways and networks that are


an important part of systems biology. In structural biology, it aids in the simulation and modeling of DNA, RNA,
and protein structures as well as molecular interactions.
GIS and remote sensing
A geographic information system (GIS) is a computer-based tool for mapping and analyzing feature events on earth.
GIS technology integrates common database operations, such as query and statistical analysis, with maps. GIS
manages location-based information and provides tools for display and analysis of various statistics, including
population characteristics, economic development opportunities, and vegetation types.
Remote sensing
Remote sensing is the acquisition of information about an object or phenomenon without making physical contact
with the object. In modern usage, the term generally refers to the use of aerial sensor technologies to detect and
classify objects on Earth (both on the surface, and in the atmosphere and oceans) by means of propagated signals
(e.g. electromagnetic radiation emitted from aircraft or satellites).

Animation
'To animate' literally means to give life to. Animating is making something move that cannot move on its own. Animation adds to graphics the dimension of time, which tremendously increases the potential for transmitting the desired information. In order to animate something, the animator has to be able to specify, directly or indirectly, how the 'thing' has to move through time and space.
Over time, the technique of animation has become more and more computer-assisted and computer-generated. All such techniques require a trade-off between the level of control that the animator has over the finer details of the motion and the amount of work that the computer does on its own. Broadly, computer animation falls into three basic categories: keyframing, motion capture, and simulation.
Multimedia
Many clients choose multimedia to explain technical issues. Multimedia uses a navigational approach to accessing
data, allowing one to display video, animation, graphics, drawings, documents, and still images as needed during a
presentation or testimony.
Meteorology
Meteorology is the interdisciplinary scientific study of the atmosphere. Studies in the field stretch back millennia,
though significant progress in meteorology did not occur until the 18th century. The 19th century saw breakthroughs
occur after observing networks developed across several countries.
Climatology
Climatology is the study of climate, scientifically defined as weather conditions averaged over a period of time. This
modern field of study is regarded as a branch of the atmospheric sciences and a subfield of physical geography,
which is one of the Earth sciences. Climatology now includes aspects of oceanography and biogeochemistry.

Instruction set: a set of instructions that are executed by a processor to perform different operations. Instruction sets are of two types, defined on the basis of the complexity and the number of instructions used:
1. Complex instruction set, and
2. Reduced instruction set.


Complex instruction set

A large number of instructions (100 to 250), complex in nature.
Memory-based instructions, which involve frequent references to memory and take a long time to execute.
A large number of addressing modes.
Variable-length instruction formats (not limited to 32 bits).

Reduced instruction set

A small number of instructions (fewer than 100).
Register-based instructions: load and store are the only memory-based instructions.
This is the set of instructions most frequently used by the processor for execution of a program.
Fixed-length instruction formats (32 bits).
RISC vs CISC

RISC                                            CISC
Small set of instructions with fixed size       Large set of instructions with variable size (16 to 64 bits)
3-5 addressing modes                            12-24 addressing modes
32-192 general-purpose registers                8-24 general-purpose registers
Clock rate: 50-150 MHz in 1993                  Clock rate: 32-50 MHz in 1992


Unit 2

Operating System
Operating System

Definition: An operating system is a computer program that manages the resources of a computer. It accepts keyboard or mouse input from users, displays the results of the actions, and allows the user to run applications or communicate with other computers via network connections.

Also known as an "OS," this is the software that communicates with computer hardware on the most basic level. Without an operating system, no software programs can run. The OS is what allocates memory, processes tasks, accesses disks and peripherals, and serves as the user interface.

Functions Of Operating System


Today, most operating systems perform the following important functions:
1. Processor management, that is, assignment of the processor to the different tasks being performed by the computer system.
2. Memory management, that is, allocation of main memory and other storage areas to the system programmes as well as user programmes and data.
3. Input/output management, that is, co-ordination and assignment of the different input and output devices while one or more programmes are being executed.
4. File management, that is, the storage of files on the various storage devices and the transfer of files from one storage device to another. It also allows all files to be easily changed and modified through the use of text editors or other file-manipulation routines.
5. Establishment and enforcement of a priority system, that is, determining and maintaining the order in which jobs are to be executed in the computer system.
6. Automatic transition from job to job as directed by special control statements.
7. Interpretation of commands and instructions.
8. Co-ordination and assignment of compilers, assemblers, utility programs, and other software to the various users of the computer system.


Different Types of Operating Systems


An operating system is a software component of a computer system that is responsible for the management of various activities of the computer and the sharing of computer resources. It hosts the several applications that run on a computer and handles the operations of the computer hardware. Users and application programs access the services offered by the operating system by means of system calls and application programming interfaces. Users interact with operating systems through Command Line Interfaces (CLIs) or Graphical User Interfaces (GUIs). In short, an operating system enables user interaction with computer systems by acting as an interface between users or application programs and the computer hardware. Here is an overview of the different types of operating systems.
Real-time Operating System: This is a multitasking operating system that aims at executing real-time applications. Real-time operating systems often use specialized scheduling algorithms so that they can achieve a deterministic nature of behavior. The main objective of real-time operating systems is their quick and predictable response to events. They have either an event-driven or a time-sharing design: an event-driven system switches between tasks based on their priorities, while a time-sharing operating system switches tasks based on clock interrupts.
Multi-user and Single-user Operating Systems: Operating systems of this type allow multiple users to access a computer system concurrently. Time-sharing systems can be classified as multi-user systems, as they enable multiple users to access a computer through the sharing of time. Single-user operating systems, as opposed to multi-user operating systems, are usable by a single user at a time. Being able to have multiple accounts on a Windows operating system does not make it a multi-user system; rather, only the network administrator is the real user. But for a Unix-like operating system, it is possible for two users to log in at a time, and this capability of the OS makes it a multi-user operating system.
Multi-tasking and Single-tasking Operating Systems: When a single program is allowed to run at a time, the system is grouped under single-tasking systems, while if the operating system allows the execution of multiple tasks at one time, it is classified as a multi-tasking operating system. Multi-tasking can be of two types, namely pre-emptive or co-operative. In pre-emptive multitasking, the operating system slices the CPU time and dedicates one slot to each of the programs. Unix-like operating systems such as Solaris and Linux support pre-emptive multitasking. Cooperative multitasking is achieved by relying on each process to give time to the other processes in a defined manner. MS Windows prior to Windows 95 supported cooperative multitasking.
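The cooperative case can be sketched with Python generators, where each task voluntarily yields the CPU back to a simple round-robin scheduler (the task names and step counts are illustrative):

```python
# Cooperative multitasking sketch: tasks run until they yield; the
# scheduler never forcibly interrupts them.
log = []

def task(name, steps):
    for i in range(steps):
        log.append(f"{name}{i}")   # do one unit of work
        yield                      # voluntarily give up the CPU

ready = [task("A", 2), task("B", 3)]   # the ready queue
while ready:
    t = ready.pop(0)
    try:
        next(t)              # run the task until its next yield
        ready.append(t)      # still alive: back of the queue
    except StopIteration:
        pass                 # task finished

print(log)  # ['A0', 'B0', 'A1', 'B1', 'B2']
```

A misbehaving task that never yields would starve every other task, which is exactly the weakness that pre-emptive multitasking removes.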
Distributed Operating System: An operating system that manages a group of independent computers and makes
them appear to be a single computer is known as a distributed operating system. The development of networked
computers that could be linked and communicate with each other, gave rise to distributed computing. Distributed
computations are carried out on more than one machine. When computers in a group work in cooperation, they
make a distributed system.
Embedded System: The operating systems designed for being used in embedded computer systems are known as
embedded operating systems. They are designed to operate on small machines like PDAs with less autonomy. They
are able to operate with a limited number of resources. They are very compact and extremely efficient by design.
Windows CE, FreeBSD and Minix 3 are some examples of embedded operating systems.
The operating systems thus contribute to the simplification of the human interaction with the computer hardware.
They are responsible for linking application programs with the hardware, thus achieving an easy user access to the
computers.


Types of Operating Systems


Within the broad family of operating systems, there are generally four types, categorized based on the types of
computers they control and the sort of applications they support. The categories are:
Real-time operating system (RTOS) - Real-time operating systems are used to control machinery,
scientific instruments and industrial systems. An RTOS typically has very little user-interface capability,
and no end-user utilities, since the system will be a "sealed box" when delivered for use. A very important
part of an RTOS is managing the resources of the computer so that a particular operation executes in
precisely the same amount of time, every time it occurs. In a complex machine, having a part move more
quickly just because system resources are available may be just as catastrophic as having it not move at all
because the system is busy.
Single-user, single task - As the name implies, this operating system is designed to manage the computer
so that one user can effectively do one thing at a time. The Palm OS for Palm handheld computers is a good
example of a modern single-user, single-task operating system.
Single-user, multi-tasking - This is the type of operating system most people use on their desktop and
laptop computers today. Microsoft's Windows and Apple's MacOS platforms are both examples of
operating systems that will let a single user have several programs in operation at the same time. For
example, it's entirely possible for a Windows user to be writing a note in a word processor while
downloading a file from the Internet while printing the text of an e-mail message.
Multi-user - A multi-user operating system allows many different users to take advantage of the
computer's resources simultaneously. The operating system must make sure that the requirements of the
various users are balanced, and that each of the programs they are using has sufficient and separate
resources so that a problem with one user doesn't affect the entire community of users. Unix, VMS and
mainframe operating systems, such as MVS, are examples of multi-user operating systems.

It's important to differentiate between multi-user operating systems and single-user operating systems that support
networking. Windows 2000 and Novell Netware can each support hundreds or thousands of networked users, but the
operating systems themselves aren't true multi-user operating systems. The system administrator is the only "user"
for Windows 2000 or Netware. The network support and all of the remote user logins the network enables are, in the
overall plan of the operating system, a program being run by the administrative user.
File systems under Microsoft Windows
Windows makes use of the FAT and NTFS file systems.
FAT
The File Allocation Table (FAT) filing system, supported by all versions of Microsoft Windows, was an evolution
of that used in Microsoft's earlier operating system (MS-DOS which in turn was based on 86-DOS). FAT ultimately
traces its roots back to the short-lived M-DOS project and Standalone disk BASIC before it. FAT32 also addressed
many of the limits in FAT12 and FAT16, but remains limited compared to NTFS.
NTFS
NTFS, introduced with the Windows NT operating system, allowed ACL-based permission control. Hard links, multiple file streams, attribute indexing, quota tracking, sparse files, encryption, compression, and reparse points (directories working as mount points for other file systems, symlinks, junctions, remote storage links) are also supported.


Process
In computing, a process is an instance of a computer program that is being executed. It contains the program code
and its current activity. Depending on the operating system (OS), a process may be made up of multiple threads of
execution that execute instructions concurrently.
Multitasking is a method to allow multiple processes to share processors (CPUs) and other system resources. Each
CPU executes a single task at a time. However, multitasking allows each processor to switch between tasks that are
being executed without having to wait for each task to finish. Depending on the operating system implementation,
switches could be performed when tasks perform input/output operations, when a task indicates that it can be
switched, or on hardware interrupts.

Process states

The various process states, displayed in a state diagram, with arrows indicating possible transitions between
states.
An operating system kernel that allows multi-tasking needs processes to have certain states. Names for these states are not standardised, but they have similar functionality. [1]
First, the process is "created": it is loaded from a secondary storage device (hard disk, CD-ROM, etc.) into main memory. After that, the process scheduler assigns it the "waiting" state.
While the process is "waiting", it waits for the scheduler to perform a so-called context switch and load the process into the processor. The process state then becomes "running", and the processor executes the process's instructions.
If a process needs to wait for a resource (for user input, for a file to open, etc.), it is assigned the "blocked" state. The process state is changed back to "waiting" when the process no longer needs to wait.


Once the process finishes execution, or is terminated by the operating system, it is no longer needed. The process is either removed instantly or moved to the "terminated" state; once in the terminated state, it waits to be removed from main memory.

Process Control Block


All of the information needed to keep track of a process when switching is kept in a data package called a process
control block. The process control block typically contains:
An ID number that identifies the process
Pointers to the locations in the program and its data where processing last occurred
Register contents
States of various flags and switches
Pointers to the upper and lower bounds of the memory required for the process
A list of files opened by the process
The priority of the process
The status of all I/O devices needed by the process

Process management

Process management is an integral part of any modern day operating system (OS). The OS must allocate resources
to processes, enable processes to share and exchange information, protect the resources of each process from other
processes and enable synchronisation among processes. To meet these requirements, the OS must maintain a data
structure for each process, which describes the state and resource ownership of that process, and which enables the
OS to exert control over each process.
Process creation
Operating systems need some way to create processes. In a very simple system designed for running only a single
application (e.g., the controller in a microwave oven), it may be possible to have all the processes that will ever be
needed be present when the system comes up. In general-purpose systems, however, some way is needed to create
and terminate processes as needed during operation.
There are four principal events that cause a process to be created:


System initialization.
Execution of process creation system call by running a process.
A user request to create a new process.
Initiation of a batch job.
When an operating system is booted, typically several processes are created. Some of these are foreground
processes, which interact with a (human) user and perform work for them. Others are background processes, which are
not associated with particular users, but instead have some specific function. For example, one background process
may be designed to accept incoming e-mails, sleeping most of the day but suddenly springing to life when an
incoming e-mail arrives. Another background process may be designed to accept incoming requests for web pages
hosted on the machine, waking up when a request arrives to service that request.
Process termination
There are many reasons for process termination:
Batch job issues halt instruction
User logs off
Process executes a service request to terminate
Error and fault conditions
Normal completion
Time limit exceeded
Memory unavailable
Bounds violation; for example: attempted access of (non-existent) 11th element of a 10-element array
Protection error; for example: attempted write to read-only file
Arithmetic error; for example: attempted division by zero
Time overrun; for example: process waited longer than a specified maximum for an event
I/O failure
Invalid instruction; for example: when a process tries to execute data (text)
Privileged instruction
Data misuse
Operating system intervention; for example: to resolve a deadlock
Parent terminates so child processes terminate (cascading termination)
Parent request

Memory Management
Memory management is the act of managing computer memory. In its simpler forms, this involves
providing ways to allocate portions of memory to programs at their request, and freeing it for reuse when
no longer needed.
Two main techniques are used in memory management:
1. Contiguous Memory allocation
2. Non Contiguous Memory allocation

Contiguous Allocation
Contiguous memory allocation is a classical memory allocation model that assigns a process
consecutive memory blocks. The memory is usually divided into two partitions: one for the
resident operating system and one for the user processes.


Fixed Partition Scheme


Memory is broken up into fixed-size partitions
The sizes of the partitions may differ from one another
Each partition can hold exactly one process
When a process arrives, allocate it a free partition
Different policies can be applied to choose a partition
Easy to manage
Problems:
Maximum size of a process is bounded by the maximum partition size
Large internal fragmentation is possible
Multiple-partition allocation
Hole: a block of available memory; holes of various sizes are scattered throughout memory
When a process arrives, it is allocated memory from a hole large enough to accommodate it
The operating system maintains information about:
a) allocated partitions b) free partitions (holes)

Dynamic Storage-Allocation Problem


How to satisfy a request of size n from a list of free holes:
First-fit: allocate the first hole that is big enough
Best-fit: allocate the smallest hole that is big enough; must search the entire list, unless it is
ordered by size. Produces the smallest leftover hole
Worst-fit: allocate the largest hole; must also search the entire list. Produces the largest leftover
hole
Fragmentation Issues
External fragmentation: total memory space exists to satisfy a request, but it is not contiguous
Internal fragmentation: allocated memory may be slightly larger than the requested memory; this
size difference is memory internal to a partition that is not being used
Reduce external fragmentation by compaction


Shuffle memory contents to place all free memory together in one large block
Compaction is possible only if relocation is dynamic, and is done at execution time
I/O problem:
Latch the job in memory while it is involved in I/O
Do I/O only into OS buffers

Non Contiguous Memory Allocation: Parts of a process can be allocated noncontiguous chunks of
memory
Paging
Divide physical memory into fixed-sized blocks called frames. Keep track of all free frames
Divide logical memory into blocks of same size called pages
To run a program of size n pages, need to find n free frames.
Set up a page table to translate logical to physical addresses
Remove/reduce external fragmentation. Internal fragmentation exists


Address Translation

File Management System

The operating system is responsible for the following activities in connection with file management:

File creation and deletion.

Directory creation and deletion.

Support for manipulating files and directories.

Mapping files onto secondary storage.

File backup on stable (nonvolatile) storage media.

Identify and locate a selected file

Use a directory to describe the location of all files plus their attributes.

File Operations
Create
Delete
Open
Close
Read
Write


Device Management

Track status of each device (such as tape drives, disk drives, printers, plotters, and terminals).

Use preset policies to determine which process will get a device and for how long.

Allocate the devices.

Deallocate the devices at two levels:

At the process level, when an I/O command has been executed and the device is temporarily released

At the job level, when the job is finished and the device is permanently released.

GENERATIONS OF PROGRAMMING LANGUAGE


A low-level programming language is a programming language that provides little or no abstraction from the
computer's microprocessor. A high-level programming language is a programming language that is more abstract,
easier to use, and more portable across platforms.
LEVELS OF PROGRAMMING LANGUAGE

FIRST GENERATION OF PROGRAMMING LANGUAGE


The first generation of programming language, or 1GL, is machine language. Machine language is a set of
instructions and data that a computer's central processing unit can execute directly. Machine language statements are
written in binary code, and each statement corresponds to one machine action.
SECOND GENERATION PROGRAMMING LANGUAGE
The second generation programming language, or 2GL, is assembly language. Assembly language is a human-readable
notation for the machine language used to control specific computer operations. An assembly language
programmer writes instructions using symbolic instruction codes that are meaningful abbreviations, or mnemonics.
An assembler is a program that translates assembly language into machine language.
THIRD GENERATION PROGRAMMING LANGUAGE
The third generation of programming language, 3GL, or procedural language, uses a series of English-like words
that are closer to human language to write instructions.
High-level programming languages make complex programming simpler and easier to read, write and maintain.
Programs written in a high-level programming language must be translated into machine language by a compiler or
interpreter. PASCAL, FORTRAN, BASIC, COBOL, C and C++ are examples of third generation programming
languages.
FOURTH GENERATION PROGRAMMING LANGUAGE
The fourth generation programming language or non-procedural language, often abbreviated as 4GL, enables users
to access data in a database. A very high-level programming language is often referred to as goal-oriented
programming language because it is usually limited to a very specific application and it might use syntax that is
never used in other programming languages. SQL, NOMAD and FOCUS are examples of fourth generation
programming languages.

Vikas kumar tiwari BE205

Page 22

FIFTH GENERATION PROGRAMMING LANGUAGE


The fifth generation programming language, or visual programming language, is also known as natural
language. It provides a visual or graphical interface, called a visual programming environment, for creating
source code. Fifth generation programming allows people to interact with computers without needing any
specialised knowledge: people can talk to computers, and voice recognition systems can convert spoken sounds
into written words. Prolog and Mercury are the best known fifth-generation languages.

Characteristics of a Good Programming Language


Several characteristics believed to be important with respect to making a programming language good are briefly
discussed below.

Simplicity
A good programming language must be simple and easy to learn and use. It should provide the programmer with a
clear, simple and unified set of concepts that can be easily grasped. It should also be easy to develop and
implement a compiler or an interpreter for the language.

Naturalness
It should provide appropriate operators, data structures, control structures, and a natural syntax in order to facilitate
the users to code their problem easily and efficiently. FORTRAN and COBOL are good examples of scientific and
business languages respectively.

Abstraction
Abstraction means the ability to define and then use complicated structures or operations in ways that allow many of
the details to be ignored. The degree of abstraction allowed by a programming language directly affects its
writability. For example, object-oriented languages support a high degree of abstraction; hence, writing programs
in object-oriented languages is much easier.

Efficiency
Programs written in a good programming language are efficiently translated into machine code, are
efficiently executed, and occupy as little space in memory as possible.

Structuredness
Structuredness means that the language should have necessary features to allow its users to write their
programs based on the concepts of structured programming.

Locality
A good programming language should be such that, while writing a program, a programmer need not jump
around visually as the text of the program is prepared. This allows the programmer to concentrate almost
solely on the part of the program around the statements currently being worked with. COBOL lacks
locality because data definitions are separated from processing statements, perhaps by many pages of
code.


Extensibility
A good programming language should allow extension through simple, natural, and elegant mechanisms.

Suitability to its Environment


Depending upon the type of application for which a programming language has been designed, the
language must also be made suitable to its environment

Concepts of OOP:
Objects
Classes
Data Abstraction and Encapsulation
Inheritance
Polymorphism
Objects
Objects are the basic run-time entities in an object-oriented system. A programming problem is analyzed in terms of
objects and the nature of communication between them. When a program is executed, objects interact with each other
by sending messages. Different objects can also interact with each other without knowing the details of their data or code.
Classes
A class is a collection of objects of similar type. Once a class is defined, any number of objects can be created which
belong to that class.
Data Abstraction and Encapsulation
Abstraction refers to the act of representing essential features without including the background details or explanations.
Classes use the concept of abstraction and are defined as a list of abstract attributes. Storing data and functions in a single
unit (a class) is encapsulation. The data is not accessible to the outside world; only those functions which are stored in
the class can access it.
Inheritance
Inheritance is the process by which objects of one class can acquire the properties of objects of another class.
Polymorphism
Polymorphism means the ability to take more than one form. An operation may exhibit different behaviors in different
instances; the behavior depends on the data types used in the operation.


Difference Between Procedure Oriented Programming (POP) and Object Oriented Programming (OOP)

Divided Into: In POP, a program is divided into small parts called functions. In OOP, a program is divided into parts called objects.
Importance: In POP, importance is given not to data but to functions and the sequence of actions to be done. In OOP, importance is given to data rather than to procedures or functions, because it models the real world.
Approach: POP follows a top-down approach. OOP follows a bottom-up approach.
Access Specifiers: POP does not have any access specifiers. OOP has access specifiers named public, private, protected, etc.
Data Moving: In POP, data can move freely from function to function in the system. In OOP, objects communicate with each other through member functions.
Expansion: Adding new data and functions in POP is not easy. OOP provides an easy way to add new data and functions.
Data Access: In POP, most functions use global data for sharing, which can be accessed freely from function to function. In OOP, data cannot move easily from function to function; it can be kept public or private, so access to the data can be controlled.
Data Hiding: POP has no proper way of hiding data, so it is less secure. OOP provides data hiding and therefore more security.
Overloading: In POP, overloading is not possible. In OOP, overloading is possible in the form of function overloading and operator overloading.
Examples: Examples of POP languages are C, VB, FORTRAN, and Pascal. Examples of OOP languages are C++, Java, VB.NET, and C#.NET.