
1. Shell Programming
AIM :

A. Write a program to handle a student database with the options given below:
a) Create database. b) View database. c) Insert a record.
d) Delete a record. e) Modify a record. f) Result of a particular student. g) Exit.
B. Menu driven program for
a) Find the factorial of a number. b) Find the greatest of three numbers. c) Check whether a number is prime.
d) Check whether a number is a palindrome. e) Check whether a string is a palindrome.
C. Write shell program using command-line argument for
a. Finding biggest of three numbers
b. Reversing a number
c. Accept a number N and a word and print the word N times, one word per line
d. Sum of individual digits of a 4-digit number
(1234 -> 1+2+3+4=10)

OBJECTIVE:

Understanding of UNIX shell commands & shell programming.

THEORY:
1) What is Shell?
Linux comes with various command interpreters, called shells in UNIX terminology. The shell
sits between the kernel of the operating system and the user: whatever the user wants the kernel
to do is expressed in terms of shell commands. Once you provide a valid command for the
required operation, the shell hands the request over to the operating system kernel, and the job is
finally done by the system.
There are various shells available in the Linux environment, but the following are
the standard shells.
UNIX/Linux Shells: the Bourne Shell (whose modern descendant is Bash), the C Shell (whose
enhanced version is tcsh), and the Korn Shell.

The shells used in the Linux operating system have a dual capability. On one hand, a shell is a
tool which accepts commands, interprets them, and hands them over to the operating system
kernel; due to this capability it is called a command-line interpreter. On the other hand, a shell
can be used as a programming language. Shell programming is interpretive by nature and is
mostly used to assist in system administration tasks.
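As a small illustration of the programming side, and of AIM C-a, here is a minimal sketch of a
script that prints the biggest of three numbers passed as command-line arguments (the file name
big3.sh is just an assumption):

#!/bin/sh
# big3.sh - print the biggest of three numbers given as arguments
# Usage: sh big3.sh 12 45 7
a=$1
b=$2
c=$3
if [ $a -gt $b ] && [ $a -gt $c ]
then
    echo "$a is the biggest"
elif [ $b -gt $c ]
then
    echo "$b is the biggest"
else
    echo "$c is the biggest"
fi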

2) Explain shell commands with examples:
Special Characters
Before we continue to learn about Linux shell commands, it is important to know that there are
many symbols and characters that the shell interprets in special ways. This means that
certain typed characters: a) cannot be used in certain situations, b) may be used to
perform special operations, or, c) must be “escaped” if you want to use them in a normal way.

Character Description
\ Escape character. If you want to reference a special character, you must “escape” it with a
backslash first.
Example: touch /tmp/filename\*
/ Directory separator, used to separate a string of directory names.
Example: /usr/src/linux
. Current directory. Can also “hide” files when it is the first character in a filename.
.. Parent directory
~ User's home directory
* Represents 0 or more characters in a filename, or by itself, all files in a directory.
Example: pic*2002 can represent the files pic2002, picJanuary2002,
picFeb292002, etc.
? Represents a single character in a filename.
Example: hello?.txt can represent hello1.txt, helloz.txt, but not
hello22.txt
[ ] Can be used to represent a range of values, e.g. [0-9], [A-Z], etc.
Example: hello[0-2].txt represents the names hello0.txt,
hello1.txt, and hello2.txt
| “Pipe”. Redirect the output of one command into another command.
Example: ls | more
> Redirect output of a command into a new file. If the file already exists, over-write it.
Example: ls > myfiles.txt
>> Redirect the output of a command onto the end of an existing file.
Example: echo “Mary 555-1234” >> phonenumbers.txt
< Redirect a file as input to a program.
Example: more < phonenumbers.txt
; Command separator. Allows you to execute multiple commands on a single line.
Example: cd /var/log ; less messages



&& Command separator as above, but only runs the second command if the first one
finished without errors.
Example: cd /var/logs && less messages
& Execute a command in the background, and immediately get your shell back.
Example: find / -name core > /tmp/corefiles.txt &

Executing Commands

The Command PATH:


Most common commands are located in your shell's “PATH”, meaning that you can
just type the name of the program to execute it.
Example: Typing “ls” will execute the “ls” command.
Your shell's “PATH” variable includes the most common program locations, such as
/bin, /usr/bin, /usr/X11R6/bin, and others.
To execute commands that are not in your current PATH, you have to give the
complete location of the command.
Examples: /home/bob/myprogram
./program (Execute a program in the current directory)
~/bin/program (Execute program from a personal bin directory)

Command Syntax
Commands can be run by themselves, or you can pass in additional arguments to make them
do different things. Typical command syntax can look something like this:
command [-argument] [-argument] [--argument] [file]
Examples: ls List files in current directory
ls -l Lists files in “long” format
ls -l --color As above, with colourized output
cat filename Show contents of a file
cat -n filename Show contents of a file, with line numbers

Help
When you're stuck and need help with a Linux command, help is usually only a few
keystrokes away! Help on most Linux commands is typically built right into the commands
themselves, available through online help programs (“man pages” and “info pages”), and of
course online.


Using a Command's Built-In Help


Many commands have simple “help” screens that can be invoked with special command flags.
These flags usually look like “-h” or “--help”.
Example: grep --help

Online Manuals: “Man Pages”


The best source of information for most commands can be found in the online manual
pages, known as “man pages” for short. To read a command's man page, type “man
command”.
Examples: man ls Get help on the “ls” command.
man man A manual about how to use the manual!
To search for a particular word within a man page, type “/word”. To quit from a man page,
just type the “Q” key.
Sometimes, you might not remember the name of Linux command and you need to search
for it. For example, if you want to know how to change a file's permissions, you can search the
man page descriptions for the word “permission” like this:
man -k permission
If you look at the output of this command, you will find a line that looks something
like:
chmod (1) - change file access permissions
Now you know that “chmod” is the command you were looking for. Typing “man chmod”
will show you the chmod command's manual page!

Navigating the Linux File system


The Linux file system is a tree-like hierarchy of directories and files. At the base of
the file system is the “/” directory, otherwise known as the “root” (not to be confused with the
root user). Unlike DOS or Windows file systems that have multiple “roots”, one for each disk
drive, the Linux file system mounts all disks somewhere underneath the / filesystem. The
following table describes many of the most common Linux directories.

Info Pages
Some programs, particularly those released by the Free Software Foundation, use info
pages as their main source of online documentation. Info pages are similar to man pages,
but instead of being displayed on one long scrolling screen, they are presented in shorter
segments with links to other pieces of information. Info pages are accessed with the
“info” command, or on some Linux distributions, “pinfo” (a nicer info browser).
For example: info df Loads the “df” info page.


The Linux Directory Layout

Directory Description
The nameless base of the filesystem. All other directories, files, drives, and
devices are attached to this root. Commonly (but incorrectly) referred to as
the “slash” or “/” directory. The “/” is just a directory separator, not a
directory itself.
/bin Essential command binaries (programs) are stored here (bash, ls, mount,
tar, etc.)
/boot Static files of the boot loader.
/dev Device files. In Linux, hardware devices are accessed just like other files, and
they are kept under this directory.
/etc Host-specific system configuration files.
/home Location of users' personal home directories (e.g. /home/susan).
/lib Essential shared libraries and kernel modules.
/proc Process information pseudo-filesystem. An interface to kernel data structures.
/root The root (superuser) home directory.
/sbin Essential system binaries (fdisk, fsck, init, etc).
/tmp Temporary files. All users have permission to place temporary files here.
/usr The base directory for most shareable, read-only data (programs, libraries,
documentation, and much more).
/usr/bin Most user programs are kept here (cc, find, du, etc.).
/usr/include Header files for compiling C programs.
/usr/lib Libraries for most binary programs.
/usr/local “Locally” installed files. This directory only really matters in environments
where files are stored on the network. Locally-installed files go in
/usr/local/bin, /usr/local/lib, etc.). Also often used for
software packages installed from source, or software not officially shipped
with the distribution.
/usr/sbin Non-vital system binaries (lpd, useradd, etc.)
/usr/share Architecture-independent data (icons, backgrounds, documentation, terminfo,
man pages, etc.).
/usr/src Program source code. E.g. The Linux Kernel, source RPMs, etc.
/usr/X11R6 The X Window System.
/var Variable data: mail and printer spools, log files, lock files, etc.


Commands for Navigating the Linux File systems


The first thing you usually want to do when learning about the Linux file system is take some
time to look around and see what's there! These next few commands will: a) Tell you where
you are, b) take you somewhere else, and c) show you what's there. The following table
describes the basic operation of the pwd, cd, and ls commands, and compares them to
certain DOS commands that you might already be familiar with.
Linux Command DOS Command Description
pwd cd “Print Working Directory”. Shows the current
location in the directory tree.
cd cd, chdir “Change Directory”. When typed all by itself, it
returns you to your home directory.
cd directory cd directory Change into the specified directory name.
Example: cd /usr/src/linux
cd ~ “~” is an alias for your home directory. It can be
used as a shortcut to your “home”, or other
directories relative to your home.
cd .. cd.. Move up one directory. For example, if you are in
/home/vic and you type “cd ..”, you will end
up in /home.
cd - Return to previous directory. An easy way to get
back to your previous location!
ls dir /w List all files in the current directory, in column
format.
ls directory dir directory List the files in the specified directory.
Example: ls /var/log
ls -l dir List files in “long” format, one file per line. This
also shows you additional info about the file, such
as ownership, permissions, date, and size.
ls -a dir /a List all files, including “hidden” files. Hidden files
are those files that begin with a “.”, e.g. The
.bash_history file in your home directory.
ls -ld directory A “long” list of “directory”, but instead of showing
the directory contents, show the directory's detailed
information. For example, compare the output of
the following two commands:
ls -l /usr/bin
ls -ld /usr/bin
ls /usr/bin/d* dir d*.* List all files whose names begin with the letter “d”
in the /usr/bin directory.


Piping and Re-Direction


Before we move on to learning even more commands, let's side-track to the topics of piping
and re-direction. The basic UNIX philosophy, and therefore by extension the Linux philosophy,
is to have many small programs and utilities that each do a particular job very well. It is the
responsibility of the programmer or user to combine these utilities to make more useful
command sequences.

Piping Commands Together


The pipe character, “|”, is used to chain two or more commands together. The output of the
first command is “piped” into the next program, and if there is a second pipe, the output is sent
to the third program, etc. For example:
ls -la /usr/bin | less
In this example, we run the command “ls -la /usr/bin”, which gives us a long listing
of all of the files in /usr/bin. Because the output of this command is typically very long,
we pipe the output to a program called “less”, which displays the output for us one screen at a
time.

Redirecting Program Output to Files


There are times when it is useful to save the output of a command to a file, instead of
displaying it to the screen. For example, if we want to create a file that lists all of the MP3
files in a directory, we can do something like this, using the “>” redirection character:
ls -l /home/vic/MP3/*.mp3 > mp3files.txt
A similar command can be written so that instead of creating a new file called
mp3files.txt, we can append to the end of the original file:
ls -l /home/vic/extraMP3s/*.mp3 >> mp3files.txt

Other Linux Commands


The following sections describe many other commands that you will find on most Linux
systems. I can't possibly cover the details of all of these commands in this document, so don't
forget that you can check the “man pages” for additional information. Not all of the listed
commands will be available on all Linux or UNIX distributions.

Working With Files and Directories


These commands can be used to: find out information about files, display files, and
manipulate them in other ways (copy, move, delete).

Linux Command DOS Command Description
file Find out what kind of file it is.
For example, “file /bin/ls” tells us that it is a Linux
executable file.
cat type Display the contents of a text file on the screen. For
example: cat mp3files.txt would display the file we
created in the previous section.
head Display the first few lines of a text file.
Example: head /etc/services
tail Display the last few lines of a text file.
Example: tail /etc/services
tail -f Display the last few lines of a text file, and then output
appended data as the file grows (very useful for following
log files!).
Example: tail -f /var/log/messages
cp copy Copies a file from one location to another.
Example: cp mp3files.txt /tmp
(copies the mp3files.txt file to the /tmp directory)
mv rename, ren, move Moves a file to a new location, or renames it.
For example: mv mp3files.txt /tmp
(moves the file to /tmp, deleting it from the original location)
rm del Delete a file. Example: rm /tmp/mp3files.txt
mkdir md Make Directory. Example: mkdir /tmp/myfiles/
rmdir rd, rmdir Remove Directory. Example: rmdir /tmp/myfiles/

Finding Things
The following commands are used to find files. “ls” is good for finding files if you already
know approximately where they are, but sometimes you need more powerful tools such as
these:

Linux Description
Command
which Shows the full path of shell commands found in your path. For example, if
you want to know exactly where the “grep” command is located on the
filesystem, you can type “which grep”. The output should be something
like: /bin/grep



whereis Locates the program, source code, and manual page for a command (if all
information is available). For example, to find out where “ls” and its man
page are, type: “whereis ls” The output will look something like:
ls: /bin/ls /usr/share/man/man1/ls.1.gz
locate A quick way to search for files anywhere on the filesystem. For example, you
can find all files and directories that contain the name “mozilla” by typing:
locate mozilla
find A very powerful command, but sometimes tricky to use. It can be used to
search for files matching certain patterns, as well as many other types of
searches. A simple example is:
find . -name \*mp3
This example starts searching in the current directory “.” and all sub-
directories, looking for files with “mp3” at the end of their names.

Informational Commands
The following commands are used to find out some information about the user or the system.

Linux Command Explanation


ps Lists currently running processes (programs).
w Show who is logged on and what they are doing.
id Print your user-id and group id's
df Report filesystem disk space usage (“Disk Free” is how I remember it)
du Disk Usage in a particular directory. “du -s” provides a summary
for the current directory.
top Displays CPU processes in a full-screen GUI. A great way to see the
activity on your computer in real-time. Type “Q” to quit.
free Displays amount of free and used memory in the system.
cat /proc/cpuinfo Displays information about your CPU.
cat /proc/meminfo Display lots of information about current memory usage.
uname -a Prints system information to the screen (kernel version, machine type,
etc.)

Here are some other commands that are useful to know.

Linux Command Description


clear Clear the screen
echo Display text on the screen. Mostly useful when writing shell scripts. For
example: echo “Hello World”



more Display a file, or program output one page at a time. Examples:
more mp3files.txt
ls -la | more
less An improved replacement for the “more” command. Allows you to scroll
backwards as well as forwards.
grep Search for a pattern in a file or program output. For example, to find out
which TCP network port is used by the “nfs” service, you can do this:
grep “nfs” /etc/services
This looks for any line that contains the string “nfs” in the file “/etc/services”
and displays only those lines.
lpr Print a file or program output. Examples:
lpr mp3files.txt - Print the mp3files.txt file
ls -la | lpr - Print the output of the “ls -la” command.
sort Sort a file or program output. Example: sort mp3files.txt
su “Switch User”. Allows you to switch to another user's account temporarily.
The default account to switch to is the root/superuser account. Examples:
su - Switch to the root account
su - - Switch to root, and log in with root's environment
su larry - Switch to Larry's account

INPUT:

OUTPUT:

FAQS:

PRACTISE ASSIGNMENTS / EXERCISE / MODIFICATIONS: (Max – 5)


1) WAP to implement string operations.
2) WAP for sorting.



2. AWK Programming
AIM:
A. Write a program to handle a student database with the options given below:
a) Create database. b) View database. c) Insert a record.
d) Delete a record. e) Modify a record. f) Result of a particular
student. g) Exit.
B. Menu driven program for
a) Find the factorial of a number. b) Find the greatest of three numbers. c) Check whether a number is prime.
d) Check whether a number is a palindrome. e) Check whether a string is a palindrome.

OBJECTIVE:

This assignment covers the AWK tool under the UNIX OS, which can also be used as a
programming language. The goal is to study and implement AWK programming and understand
the concepts and terms related to it.

THEORY:

The name awk comes from the initials of its designers:

1. Alfred V. Aho

2. Peter J. Weinberger

3. Brian W. Kernighan

AWK supports user-defined functions, multiple input streams, and computed regular
expressions. The awk utility interprets a special-purpose programming language that
makes it possible to handle simple data-reformatting jobs easily with just a few lines of
code.

USING AWK YOU CAN :

a. Manage small, personal databases

b. Generate reports

c. Validate data

d. Produce indices and perform other document preparation tasks.

AWK is a pattern-matching program. It takes two inputs, a data file and a command file; the data
file contains text. The command file contains pattern-matching instructions; it is equivalent to an
ordinary computer program. AWK, as an interpreter, executes commands from the


command file on the data file. It is a very versatile program, especially when used as
part of a pipe.

There are three execution types of commands:

1. Starting commands, first word "BEGIN", which are executed only once for each
input file, at the beginning of the file.

2. Pattern matching commands , each of which is executed once for each line in the
data file

3. Ending commands, first word "END", which are executed only once for each
input file, when the end of the file has been reached.

The AWK program consists of a series of rules. Syntactically, a rule consists of a pattern
followed by an action. The action is enclosed in curly braces to separate it from the
pattern. Rules are usually separated by newlines.

HOW TO RUN AWK PROGRAMS:

• If the program is short , include it in the command line .

awk 'program' input-file1 input-file2 . . .

• When the program is long, it is usually more convenient to put it in a file and then
run it.

awk -f program-file input-file1 input-file2 ...

• Typical command line to run awk is :

awk -f program.awk inputfile > outputfile

awk compiles the program into an internal form, and then proceeds to read each file
named in the ARGV array. If there are no files named on the command line, awk reads the
standard input.


PATTERNS:

The pattern part contains zero or more patterns to be matched against each line of the data file.

If the whole pattern is missing, the rule matches every line and its command is always executed.
A pattern consists of the character "/", a regular expression, and the character "/".

Several patterns may be combined in the pattern part by using the logical operators
(!, &&, ||).
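For example, the following rule prints every line containing the string "error", together with its
line number (the file name logfile is an assumption):

awk '/error/ { print NR ": " $0 }' logfile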

COMMANDS:

If the command is missing, then the default command, print the line, is executed.
A command is enclosed between curly braces "{" and "}".

Braces may be nested and used to extend the command over several lines.

INBUILT VARIABLES:

There are several inbuilt variables:

a. NF: number of fields (words) on the current line

b. NR: number of records read so far, i.e., the current line number

c. FILENAME: name of the input file

d. FS: input field separator (space, tab, etc.)

e. RS: input record separator (newline, etc.)
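For instance, the following one-liners print each line's record number and field count, and use
FS (set with the -F option) to split /etc/passwd on colons; the file name datafile is an
assumption:

awk '{ print NR, NF }' datafile
awk -F: '{ print $1 }' /etc/passwd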

Awk allows comments starting with "#"; the comment runs to the end of the line. A comment
can also be simulated using an assignment and a character string, like: {comment = "this is a
comment string"} Common awk functions are:

a) length(variable)

b) substr(string, first-char, no. of characters)

c) int (numeric variable)

d) exp (numeric variable)

e) log (numeric variable)


f) sqrt (numeric variable)

Note that the C-like part of awk is a very small subset of C. AWK is stateless, i.e., it
treats each new line similarly. However, you can use variables and conditional
instructions to create states.

Regular expressions are a way to define conditional character strings. Common regular
expressions are composed in the following ways:

a) . : Any one character

b) [string]: Any one character in the string

c) [a-k]: Any one character from "a" to "k"

d) * : Zero or more repeats of the previous regular expression

e) ^ : At the beginning of a regular expression, limits it to the beginning of the line

f) $ : At the end of a regular expression, limits it to the end of the line

g) \ : Removes a metacharacter's special meaning

h) \(\) : Grouping brackets, as in mathematics

You can refer to the whole input line using $0, to the first field using $1, to the second using $2,
and so on, when reading from a file. Repeating statements are:

a) while (condition) command

b) for (command1; (condition); command2) command3

Functions in awk are defined as follows:

function name (parameter-list) { statements }

The word "func" may be used in place of "function".

The "return" statement can be used to return a value from a function.

COMMAND LINE OPTIONS:

1. Options begin with a "-" (minus) sign and consist of a single character.

2. If an option takes an argument, it is immediately followed by an equals sign "=" and the
argument's value.

3. -f source-file indicates that the awk program is to be found in source-file instead of in the
first non-option argument.

4. If -f is not used, then the first non-option command-line argument is expected to be the
program text.

5. When the -f option is used more than once, awk acts as though the program texts in all the
specified files were concatenated.

6. This is useful for creating libraries of awk functions.

7. You can type a program at the terminal and still use library functions.

INPUT:

Provide student details such as roll number, name, and marks in various subjects.
For the second assignment, provide integer numbers according to the requirements.

OUTPUT:

Formatted student information on the screen

FAQS:
1. What is the use of awk programming?

2. How do you run an awk script?


PRACTISE ASSIGNMENTS / EXERCISE / MODIFICATIONS: (Max – 5)

Example 1: square of a number

#!/bin/awk -f
BEGIN {
print "type a number";
}
{
print "The square of ", $1, " is ", $1*$1;
print "type another number";
}

END {
print "Done"
}

Example 2: remove only directories

ls -l | grep '^d' | awk '{print "rm -r "$9}' | sh
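Example 3: result of a particular student

A further sketch toward the AIM's "result of a particular student" option; it assumes a file
students.txt whose fields are roll number, name, and total marks, with the roll number passed
in via -v:

awk -v r=101 '$1 == r { print "Roll:", $1, "Name:", $2, "Marks:", $3 }' students.txt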



3. Process Control in UNIX
AIM:

1. Program where the parent process sorts array elements in ascending order and the child
process sorts array elements in descending order.

2. Count the number of vowels in a given sentence; implement the program using vfork().

OBJECTIVE:

This assignment covers the UNIX process control commonly called for process creation,
program execution and process termination. Also covers process model, including process
creation, process destruction and daemon processes.

THEORY:

Process in UNIX:

A process is the basic active entity in most operating-system models.

Process IDs

Each process in a Linux system is identified by its unique process ID, sometimes referred to as
pid. Process IDs are 16-bit numbers that are assigned sequentially by Linux as new processes are
created.

When referring to process IDs in a C or C++ program, always use the pid_t typedef, which is
defined in <sys/types.h>. A program can obtain the process ID of the process it’s running in with
the getpid() system call, and it can obtain the process ID of its parent process with the getppid()
system call.

Creating Processes

Two common techniques are used for creating a new process.

1. using system() function.


2. using fork() system calls.

1. Using system

The system function in the standard C library provides an easy way to execute a command from
within a program, much as if the command had been typed into a shell. In fact, system creates a
subprocess running the standard Bourne shell (/bin/sh) and hands the command to that shell for
execution.


The system function returns the exit status of the shell command. If the shell itself cannot be run,
system returns 127; if another error occurs, system returns –1.
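A minimal sketch of using system (the command string is only an illustration):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Run "ls -l /" through /bin/sh and report the returned status. */
    int status = system("ls -l /");
    if (status == -1)
        perror("system failed");
    else
        printf("Shell returned status %d\n", status);
    return 0;
}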

2. Using fork

A process can create a new process by calling fork. The calling process becomes the parent, and
the created process is called the child. The fork function copies the parent's memory image so
that the new process receives a copy of the address space of the parent. Both processes continue
at the instruction after the fork statement (executing in their respective memory images).

SYNOPSIS

#include <unistd.h>

pid_t fork(void);

The fork function returns 0 to the child and returns the child's process ID to the parent. When
fork fails, it returns –1.
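A minimal sketch showing how the return value of fork distinguishes parent from child:

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == -1)
        perror("fork failed");
    else if (pid == 0)          /* in the child, fork returns 0 */
        printf("Child: my pid is %ld\n", (long)getpid());
    else                        /* in the parent, fork returns the child's pid */
        printf("Parent: created child with pid %ld\n", (long)pid);
    return 0;
}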

The wait Function

When a process creates a child, both parent and child proceed with execution from the point of
the fork. The parent can execute wait to block until the child finishes. The wait function causes
the caller to suspend execution until a child's status becomes available or until the caller receives
a signal.

SYNOPSIS

#include <sys/wait.h>

pid_t wait(int *status);

If wait returns because the status of a child is reported, these functions return the process ID of
that child. If an error occurs, these functions return –1

Example:

pid_t childpid;

childpid = wait(NULL);
if (childpid != -1)
    printf("Waited for child with pid %ld\n", (long)childpid);


Status values

The status argument of wait is a pointer to an integer variable. If it is not NULL, this function
stores the return status of the child in this location. The child returns its status by calling exit,
_exit or return from main.

A zero return value indicates EXIT_SUCCESS; any other value indicates EXIT_FAILURE.

POSIX specifies six macros for testing the child's return status. Each takes the status value
returned by a child to wait as a parameter. Following are two such macros:

SYNOPSIS

#include <sys/wait.h>

WIFEXITED(int stat_val)
WEXITSTATUS(int stat_val)

New program execution within the existing process (The exec Function)

The fork function creates a copy of the calling process, but many applications require the child
process to execute code that is different from that of the parent. The exec family of functions
provides a facility for overlaying the process image of the calling process with a new image. The
traditional way to use the fork–exec combination is for the child to execute (with an exec
function) the new program while the parent continues to execute the original code.

SYNOPSIS

#include <unistd.h>

extern char **environ;

1. int execl(const char *path, const char *arg0, ... /*, (char *)0 */);
2. int execle(const char *path, const char *arg0, ... /*, (char *)0,
char *const envp[] */);
3. int execlp(const char *file, const char *arg0, ... /*, (char *)0 */);
4. int execv(const char *path, char *const argv[]);
5. int execve(const char *path, char *const argv[], char *const envp[]);
6. int execvp(const char *file, char *const argv[]);

All exec functions return –1 if unsuccessful. In case of success these functions never return
to the calling function.

Process Termination


Normally, a process terminates in one of two ways. Either the executing program calls the exit()
function, or the program’s main function returns. Each process has an exit code: a number that
the process returns to its parent. The exit code is the argument passed to the exit function, or the
value returned from main.

Zombie Processes

If a child process terminates while its parent is calling a wait function, the child process vanishes
and its termination status is passed to its parent via the wait call. But what happens when a child
process terminates and the parent is not calling wait? Does it simply vanish? No, because then
information about its termination—such as whether it exited normally and, if so, what its exit
status is—would be lost. Instead, when a child process terminates, it becomes a zombie process.

A zombie process is a process that has terminated but has not been cleaned up yet. It is the
responsibility of the parent process to clean up its zombie children. The wait functions do this,
too, so it’s not necessary to track whether your child process is still executing before waiting for
it. Suppose, for instance, that a program forks a child
process, performs some other computations, and then calls wait. If the child process has not
terminated at that point, the parent process will block in the wait call until the child process
finishes. If the child process finishes before the parent process calls wait, the child process
becomes a zombie. When the parent process calls wait, the zombie child’s termination status is
extracted, the child process is deleted, and the wait call returns immediately.

vfork: an alternative to fork

vfork creates a new process and is typically used when the new process will immediately exec a
new program.

Compared with fork:

1. vfork creates the new process without fully copying the address space of the parent.

2. vfork guarantees that the child runs first, until the child calls exec or exit.

3. When the child calls either of these two functions (exit, exec), the parent resumes.

INPUT:

1. An integer array of a specified size.

2. A sentence in which to count the number of vowels.

OUTPUT:
1. Sorted array in ascending and descending order.
2. Count of vowels in the sentence.
FAQS:

PRACTISE ASSIGNMENTS / EXERCISE / MODIFICATIONS: (Max – 5)


Example 4

The following function determines the exit status of a child.

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>

void show_return_status(void)
{
    pid_t childpid;
    int status;

    childpid = wait(&status);
    if (childpid == -1)
        perror("Failed to wait for child");
    else if (WIFEXITED(status))
        printf("Child %ld terminated with return status %d\n",
               (long)childpid, WEXITSTATUS(status));
}


Example 5: A program that creates a child process to run ls -l.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t childpid;

    childpid = fork();
    if (childpid == -1) {
        perror("Failed to fork");
        return 1;
    }
    if (childpid == 0) {
        /* child code */
        execl("/bin/ls", "ls", "-l", NULL);
        perror("Child failed to exec ls");
        return 1;
    }

    if (childpid != wait(NULL)) {
        /* parent code */
        perror("Parent failed to wait due to signal or error");
        return 1;
    }
    return 0;
}

Example 8
Demo of multiprocess application using fork()system call


Example 9:
Program where the parent process sorts array elements in ascending order and the child
process sorts array elements in descending order.
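The original listing for this example is a figure in the manual; a minimal sketch of one way to
write it, using fork together with qsort and two comparison functions, is:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>
#include <sys/wait.h>

static int asc(const void *a, const void *b)  { return *(const int *)a - *(const int *)b; }
static int desc(const void *a, const void *b) { return *(const int *)b - *(const int *)a; }

static void show(const char *who, const int *a, int n)
{
    printf("%s:", who);
    for (int i = 0; i < n; i++)
        printf(" %d", a[i]);
    printf("\n");
}

int main(void)
{
    int a[] = { 5, 1, 4, 2, 3 };
    int n = sizeof(a) / sizeof(a[0]);
    pid_t pid = fork();

    if (pid == -1) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {                       /* child: sort descending */
        qsort(a, n, sizeof(int), desc);
        show("Child (descending)", a, n);
        return 0;
    }
    wait(NULL);                           /* parent: sort ascending */
    qsort(a, n, sizeof(int), asc);
    show("Parent (ascending)", a, n);
    return 0;
}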


Example 10: Count the number of vowels in a given sentence; implement the program using
vfork().
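The original listing is again a figure. A minimal sketch follows; the global counter is visible to
the parent because vfork does not copy the address space, and the child calls _exit rather than
returning (strictly, POSIX only guarantees the child may call exec or _exit after vfork, so this is
the usual lab-style illustration rather than fully portable code):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int vowels;                         /* shared with the child: vfork does not copy memory */

int main(void)
{
    const char *s = "operating system laboratory";
    pid_t pid = vfork();

    if (pid == -1) {
        perror("vfork");
        return 1;
    }
    if (pid == 0) {                 /* child runs first; parent is suspended */
        for (size_t i = 0; s[i] != '\0'; i++)
            if (strchr("aeiouAEIOU", s[i]))
                vowels++;
        _exit(0);                   /* never plain return after vfork */
    }
    wait(NULL);                     /* parent resumes once the child exits */
    printf("\"%s\" contains %d vowels\n", s, vowels);
    return 0;
}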



4. CPU scheduling Algorithms
AIM :
CPU scheduling algorithms.

OBJECTIVE:
The main aim of this assignment is to learn the scheduling operations handled by the operating
system and how CPU scheduling algorithms actually work.

THEORY:

1. CPU Scheduler
CPU Scheduler selects from among the processes in memory that are ready to execute, and
allocates the CPU to one of them. CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
Scheduling under 1 and 4 is nonpreemptive, whereas all other scheduling is preemptive.
Scheduling Criteria

By switching the CPU among processes, the Operating System can make the computer more
productive.
• Throughput – number of processes that complete their execution per time unit
• Turnaround time – amount of time to execute a particular process
• Waiting time – amount of time a process has been waiting in the ready queue
• Response time – amount of time from when a request was submitted until the first response is
produced, not the output (for time-sharing environments)

2. Types of processor scheduling


Aim of the processor scheduling is to assign processor to processes over time, in a way that
meets system objectives i.e. throughput, response time and processor efficiency. Scheduling is
broken down into three categories:
– Long term scheduling.
– Medium term scheduling.
– Short term scheduling.

Long term scheduling: is performed when a new process is created. This is a decision to add a
new process to the set of processes that are currently active. It means the long term scheduler is
responsible to assign it into ready queue or blocked queue according to the nature of that process.
Medium term scheduling: determines which processes shall be allowed to compete for the CPU.
It is responsible for transferring processes from the temporarily suspended state to the ready
state. Processes in the suspended state are waiting for their I/O completion; whenever such a
process becomes ready to be assigned the CPU, it is the medium-term scheduler that selects it
from the blocked queue and moves it to the ready queue. We can say that this scheduler acts as a
buffer between the creation of processes and the assigning of the CPU to these processes.


Short term scheduler: determines which ready process will be assigned the CPU when it next
becomes available and actually assign the CPU to this process (i.e. it dispatches the CPU to the
process). Short term scheduling is performed by the dispatcher, which operates many times per
second.
Scheduling Algorithms:
Short-Term Scheduling Criteria

The main objective of short-term scheduling is to allocate processor time in such a way as to
optimize one or more aspects of system behavior. Generally, a set of criteria is established
against which various scheduling policies may be evaluated.

The Use of Priorities


In many systems, each process is assigned a priority and the scheduler will always choose a
process of higher priority over one of lower priority. Figure 1 below illustrates the use of
priorities. For clarity, the queuing diagram is simplified, ignoring the existence of multiple
blocked queues and of suspended processes. Instead of a single ready queue, we provide a set of
queues in descending order of priority: RQ0, RQ1, ..., RQn, with priority[RQi] > priority[RQj]
for i < j. When a scheduling selection is to be made, the scheduler will start at the
highest-priority ready queue (RQ0). If there are one or more processes in the queue, a process is
selected using some scheduling policy. If RQ0 is empty, then RQ1 is examined, and so on.
One problem with a pure priority scheduling scheme is that lower- priority processes may suffer
starvation. This will happen if there is always a steady supply of higher-priority ready processes.
If this behavior is not desirable, the priority of a process can change with its age or execution
history.

Figure 1: Priority Queuing

First-Come, First-Served (FCFS) Scheduling

Consider three processes P1, P2, and P3 with CPU burst times of 24, 3, and 3 time units,
respectively (these burst times follow from the waiting times given below).

Suppose that the processes arrive in the order: P1, P2, P3

The Gantt chart for the schedule is:

| P1 (0–24) | P2 (24–27) | P3 (27–30) |

Waiting time for P1 = 0; P2 = 24; P3 = 27

Average waiting time: (0 + 24 + 27)/3 = 17

Suppose instead that the processes arrive in the order: P2, P3, P1

The Gantt chart for the schedule is:

| P2 (0–3) | P3 (3–6) | P1 (6–30) |

Waiting time for P1 = 6; P2 = 0; P3 = 3


Average waiting time: (6 + 0 + 3)/3 = 3
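A minimal sketch of computing the FCFS waiting times for this example (the burst times 24, 3,
and 3 are taken from the first arrival order above):

#include <stdio.h>

/* FCFS: each process waits for the total burst time of all processes before it. */
int main(void)
{
    int burst[] = { 24, 3, 3 };            /* P1, P2, P3 arriving in this order */
    int n = 3, wait = 0, total = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d\n", i + 1, wait);
        total += wait;
        wait += burst[i];
    }
    printf("Average waiting time: %.2f\n", (double)total / n);
    return 0;
}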
Shortest-Job-First (SJF) Scheduling

Associate with each process the length of its next CPU burst. Use these lengths to schedule the
process with the shortest time.
Two schemes:
Non preemptive – once CPU given to the process it cannot be preempted until completes its CPU
burst.
Preemptive – if a new process arrives with a CPU burst length less than the remaining time of the
currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First
(SRTF).
Example of Non-Preemptive SJF

SJF (non-preemptive)

Example of Preemptive SJF

Average waiting time = (9 + 1 + 0 + 2)/4 = 3
4. CPU scheduling Algorithms

Priority Scheduling

A priority number (an integer) is associated with each process, and the CPU is allocated to the
process having the highest priority; hence the name. Equal-priority processes are scheduled
according to the FCFS algorithm. The SJF algorithm is a particular case of the general priority
algorithm, in which priority is the inverse of the next CPU burst time: the larger the next CPU
burst, the lower the priority, and vice versa. In the following example, we will assume lower
numbers to represent higher priority.

Priority based algorithms can be either preemptive or nonpreemptive. In case of preemptive
scheduling, if a new process joins the ready queue with a priority higher than the process that is
executing, then the current process is preempted and CPU allocated to the new process. But in
case of nonpreemptive algorithm, the new process having highest priority from among the ready
processes, is allocated the CPU only after the current process gives up the CPU.

Round Robin (RR)


Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds. After
this time has elapsed, the process is preempted and added to the end of the ready queue. If there
are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the
CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time
units.
Example of RR with Time Quantum = 20


The Gantt chart is:
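The chart itself is a figure in the manual; a minimal sketch that simulates such a round-robin
schedule is given below (the burst times 53, 17, 68, and 24 are assumptions chosen to match a
classic quantum-20 example):

#include <stdio.h>

int main(void)
{
    int rem[] = { 53, 17, 68, 24 };        /* remaining burst time of each process */
    int n = 4, q = 20, t = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (rem[i] == 0)
                continue;                  /* process already finished */
            int slice = rem[i] < q ? rem[i] : q;   /* run for at most one quantum */
            printf("t=%3d: P%d runs for %d\n", t, i + 1, slice);
            t += slice;
            rem[i] -= slice;
            if (rem[i] == 0)
                done++;
        }
    }
    printf("All processes finish at t=%d\n", t);
    return 0;
}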

INPUT:
1. Number of processes from the user.
2. Accept burst time, priority, and arrival time.
OUTPUT:
1. Display the Gantt chart.
2. Gantt chart displaying process execution for every algorithm.
3. Average waiting time and turnaround time.

FAQS:
1. What is the purpose of scheduling algorithms?
2. What are different scheduling criteria?
3. Which are different types of scheduler?
4. What is multilevel queue scheduling and multilevel feedback queue scheduling?
5. Difference between preemptive and non preemptive scheduling algorithms?
6. What is preemptive and non preemptive scheduling?

PRACTISE ASSIGNMENTS / EXERCISE / MODIFICATIONS:


1. Implement same assignment by using different data structure.
2. Make a comparative statement of different scheduling algorithm.
3. Execute problem statement for various inputs.



5. Banker's Algorithm
AIM :
Banker’s algorithm for deadlock avoidance.

OBJECTIVE:
Understand safe and unsafe states and how they are used to handle deadlock situations in the
system, and how the banker's algorithm is used to avoid deadlock altogether.

THEORY:
Deadlock avoidance using Banker's Algorithm.

1. Deadlock

A set of processes is deadlocked if each process in the set is waiting for an event that only
another process in the set can cause. Because all the processes are waiting, none of them will
ever cause any of the events that could wake up any of the other members of the set, and all
the processes continue to wait forever. For this model, we assume that processes have only a
single thread and that there are no interrupts possible to wake up a blocked process. The
no-interrupts condition is needed to prevent an otherwise deadlocked process from being awakened.
Conditions for Deadlock
Coffman et al. (1971) showed that four conditions must hold for there to be a deadlock:
1. Mutual exclusion condition. Each resource is either currently assigned to exactly one
process or is available.
2. Hold and wait condition. Processes currently holding resources granted earlier can request
new resources.
3. No preemption condition. Resources previously granted cannot be forcibly taken away
from a process. They must be explicitly released by the process holding them.
4. Circular wait condition. There must be a circular chain of two or more processes, each of
which is waiting for a resource held by the next member of the chain.


The first three conditions are necessary but not sufficient for a deadlock to exist. The fourth
condition is actually a potential consequence of the first three: given that the first three
conditions exist, a sequence of events may occur that leads to an unresolvable circular wait.
The circular wait is in fact the definition of deadlock.
2. Dealing with deadlock
Basically there are three strategies to deal with the deadlock:



Prevention: if any of the four necessary conditions is denied, a deadlock cannot occur. So we
find restrictions under which these necessary conditions cannot arise in the system.

Deadlock avoidance: like prevention, we find strategies under which deadlock cannot occur in
the system.

Recovery: detect when deadlock has occurred and recover from it.
3. Deadlock Avoidance
In most systems, resources are requested one at a time. The system must be able to decide
whether granting a resource is safe or not and only make the allocation when it is safe. Thus the
question arises: Is there an algorithm that can always avoid deadlock by making the right choice
all the time? The answer is a qualified yes—we can avoid deadlocks, but only if certain
information is available in advance.
Safe and Unsafe States
At any instant of time, there is a current state consisting of E, A, C, and R (the existing-resource
vector, the available-resource vector, the current-allocation matrix, and the request matrix). A
state is said to be safe if it is not deadlocked and there is some scheduling order in which every
process can run to completion even if all of them suddenly request their maximum number of
resources immediately. It is easiest to illustrate this concept by an example using one resource.

we have a state in which A has 3 instances of the resource but may need as many as 9
eventually. B currently has 2 and may need 4 altogether, later. Similarly, C also has 2 but
may need an additional 5. A total of 10 instances of the resource exist, so with 7 resources
already allocated, there are 3 still free.

The state of Fig. (a) is safe because there exists a sequence of allocations that allows all
processes to complete. Namely, the scheduler could simply run B exclusively, until it
asked for and got two more instances of the resource, leading to the state of Fig. (b). When B
completes, we get the state of Fig. (c). Then the scheduler can run C, leading eventually to Fig.
(d). When C completes, we get Fig. (e).
Now A can get the six instances of the resource it needs and also complete. Thus the state of Fig.
(a) is safe because the system, by careful scheduling, can avoid deadlock.
Now suppose we have the initial state shown in Fig. (a), but this time
A requests and gets another resource, giving Fig. (b). Can we find a sequence that is guaranteed
to work? Let us try. The scheduler could run B until it asked for all its resources, as shown in
Fig. (c).


Eventually, B completes and we get the situation of Fig. (d). At this point we are stuck. We only
have four instances of the resource free, and each of the active processes needs five. There is no
sequence that guarantees completion. Thus the allocation decision that moved the system from
Fig. (a) to Fig. (b) went from a safe state to an unsafe state. Running A or C next starting at Fig.
(b) does not work either. In retrospect, A's request should not have been granted. It is worth
noting that an unsafe state is not a deadlocked state. Starting at Fig. (b), the system can run for a
while. In fact, one process can even complete. Furthermore, it is possible that A might release a
resource before asking for any more, allowing C to complete and avoiding deadlock altogether.
Thus the difference between a safe state and an unsafe state is that from a safe state the system
can guarantee that all processes will finish; from an unsafe state, no such guarantee can be given.

4. Banker’s Algorithms
The Banker’s Algorithm for a Single Resource
A scheduling algorithm that can avoid deadlocks is due to Dijkstra (1965) and is known as the
banker’s algorithm and is an extension of the deadlock detection algorithm. It is modeled on
the way a small-town banker might deal with a group of customers to whom he has granted lines
of credit. What the algorithm does is check to see if granting the request leads to an unsafe state.
If it does, the request is denied. If granting the request leads to a safe state, it is carried out.

In Fig. (a) we see four customers, A, B, C, and D, each of whom has been granted a certain
number of credit units (e.g., 1 unit is 1K dollars).
The banker knows that not all customers will need their maximum credit immediately, so he has
reserved only 10 units rather than 22 to service them. (In this analogy, customers are processes,
units are, say, tape drives, and the banker is the operating system.)

The customers go about their respective businesses, making loan requests from time to time
(i.e., asking for resources). At a certain moment, the situation is as shown in Fig. (b). This state is



safe because with two units left, the banker can delay any requests except C’s, thus letting C
finish and release all four of his resources. With four units in hand, the banker can let either D or
B have the necessary units, and so on.


Consider what would happen if a request from B for one more unit were granted in Fig. (b). We
would have the situation of Fig. (c), which is unsafe.

If all the customers suddenly asked for their maximum loans, the banker could not satisfy any of
them, and we would have a deadlock. An unsafe state does not have to lead to deadlock, since a
customer might not need the entire credit line available, but the banker cannot count on this
behavior. The banker’s algorithm considers each request as it occurs, and sees if granting it leads
to a safe state. If it does, the request is granted; otherwise, it is postponed until later. To see if a
state is safe, the banker checks to see if he has enough resources to satisfy some customer. If so,
those loans are assumed to be repaid, and the customer now closest to the limit is checked, and
so on. If all loans can eventually be repaid, the state is safe and the initial request can be granted.
The Banker’s Algorithm for Multiple Resources
The banker’s algorithm can be generalized to handle multiple resources.
Figure below shows how it works.

Several data structures must be maintained to implement the banker's algorithm. These data
structures encode the state of the resource- allocation system. Let n be the number of processes
in the system and m be the number of resource types. We need the following data structures.



Available: A vector of length m indicates the number of available resources of each type. If
Available[j] equals k, there are k instances of resource type Rj available

Max: An n × m matrix defines the maximum demand of each process. If Max[i][j] equals k, then
process Pi may request at most k instances of resource type Rj.

Allocation: An n × m matrix defines the number of resources of each type currently allocated to
each process. If Allocation[i][j] equals k, then process Pi is currently allocated k instances of
resource type Rj.

Need: An n × m matrix indicates the remaining resource need of each process. If Need[i][j]
equals k, then process Pi may need k more instances of resource type Rj to complete its task.
Note that Need[i][j] = Max[i][j] − Allocation[i][j].
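A minimal sketch of the safety check over these data structures (the fixed sizes N and M and the
function name is_safe are assumptions; a real program would read the matrices from the user):

#define N 5                                /* number of processes */
#define M 3                                /* number of resource types */

/* Returns 1 and fills seq[] with a safe sequence if the state is safe, else 0. */
int is_safe(int avail[M], int max[N][M], int alloc[N][M], int seq[N])
{
    int need[N][M], work[M], finish[N] = { 0 };

    for (int i = 0; i < N; i++)            /* Need = Max - Allocation */
        for (int j = 0; j < M; j++)
            need[i][j] = max[i][j] - alloc[i][j];
    for (int j = 0; j < M; j++)
        work[j] = avail[j];

    for (int count = 0; count < N; ) {
        int found = 0;
        for (int i = 0; i < N; i++) {
            if (finish[i])
                continue;
            int ok = 1;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { ok = 0; break; }
            if (ok) {                      /* Pi can finish; reclaim its resources */
                for (int j = 0; j < M; j++)
                    work[j] += alloc[i][j];
                finish[i] = 1;
                seq[count++] = i;
                found = 1;
            }
        }
        if (!found)
            return 0;                      /* no process can proceed: unsafe */
    }
    return 1;
}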

An Illustrative Example
Consider a system with five processes P0 through P4 and three resource types A, B, C. Resource
type A has 10 instances, resource type B has 5 instances, and resource type C has 7 instances.
Suppose that, at time T0, the following snapshot of the system has been taken.

The content of the matrix Need is defined to be Max - Allocation and is as shown above.
We claim that the system is currently in a safe state. Indeed, the sequence <P1, P3, P4, P2, P0>
satisfies the safety criteria. Suppose now that process P1 requests one additional instance of
resource type A and two instances of resource type C, so Request1 = (1,0,2). To decide whether
this request can be immediately granted, we first check that Request1 ≤ Available—that is,



(1,0,2) ≤ (3,3,2), which is true. We then pretend that this request has been fulfilled, and we arrive
at the following new state


We must determine whether this new system state is safe. To do so, we execute our safety
algorithm and find that the sequence <P1, P3, P4, P0, P2> satisfies our safety requirement.
Hence, we can immediately grant the request of process P1. You should be able to see, however,
that when the system is in this state, a request for (3,3,0) by P4 cannot be granted, since the
resources are not available. Furthermore, a request for (0,2,0) by P0 cannot be granted, even
though the resources are available, since the resulting state is unsafe.

INPUT:
Accept the allocation, maximum, and available resource matrices from the user.

OUTPUT:
Display the need matrix.
Display the safe sequence.
Determine whether the system will remain in a safe state after allocation of the requested resources.
FAQS: (min – 5 & max – 15)
1. What is deadlock?

2. What are the necessary and sufficient conditions for deadlock to occur?

3. What are deadlock avoidance and deadlock prevention techniques?

4. What is a wait-for graph?

5. What is a safe sequence?

6. What are the weaknesses of the banker's algorithm?

PRACTISE ASSIGNMENTS / EXERCISE / MODIFICATIONS: (Max – 5)


1. Extend the program to calculate the available matrix if it is not given.

2. Handle an additional request from any process after the first safe sequence.



6. Memory allocation algorithms
AIM :
Memory allocation algorithms

OBJECTIVE:
The focus of this assignment is the implementation of memory allocation algorithms such as first
fit, best fit, and next fit, which are used for the dynamic storage-allocation problem.

THEORY:
Explain memory allocation strategies with example.

First Fit: A resource allocation scheme (usually for memory). First Fit fits data into memory
by scanning from the beginning of available memory to the end, until the first free space which is
at least big enough to accept the data is found. This space is then allocated to the data. Any left
over becomes a smaller, separate free space.

If the data to be allocated is bigger than the biggest free space, the request cannot be met, and an
error is generated.

Best Fit - A resource allocation scheme (usually for memory). Best Fit tries to determine the best
place to put the new data. The definition of 'best' may differ between implementations, but one
example might be to try and minimize the wasted space at the end of the block being allocated -
i.e. use the smallest space which is big enough.

By minimizing wasted space, more data can be allocated overall, at the expense of a more time-
consuming allocation routine.

Next Fit: The first-fit approach tends to fragment the blocks near the beginning of the list
without considering blocks further down the list. Next fit is a variant of the first-fit strategy. The
problem of small holes accumulating is addressed by the next-fit algorithm, which starts each
search where the last one left off, wrapping around to the beginning when the end of the list is
reached (a form of one-way elevator).
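A minimal sketch of first fit over an array of free partition sizes (the partition and process sizes
are assumptions; best fit and next fit differ only in how the hole is chosen):

#include <stdio.h>

int main(void)
{
    int block[] = { 100, 500, 200, 300, 600 };   /* free partition sizes, in order */
    int proc[]  = { 212, 417, 112, 426 };        /* process sizes to place */
    int nb = 5, np = 4;

    for (int p = 0; p < np; p++) {
        int placed = 0;
        for (int b = 0; b < nb; b++) {
            if (block[b] >= proc[p]) {           /* first hole big enough wins */
                printf("Process %d (%d) -> block %d (now %d left)\n",
                       p + 1, proc[p], b + 1, block[b] - proc[p]);
                block[b] -= proc[p];             /* leftover becomes a smaller hole */
                placed = 1;
                break;
            }
        }
        if (!placed)
            printf("Process %d (%d) cannot be allocated\n", p + 1, proc[p]);
    }
    return 0;
}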

INPUT:
Accept memory partitions in order.

Accept processes to be fitted in the given memory partitions.

OUTPUT:
Show graphical or textual process allocation to particular memory blocks according to all three
algorithms.



FAQS:
1. What is contiguous and non-contiguous memory allocation?
2. What is fragmentation?
3. How can internal and external fragmentation be avoided?
4. How is address binding done?
5. What is logical vs. physical address space?
6. What is swapping?

PRACTISE ASSIGNMENTS / EXERCISE / MODIFICATIONS: (Max – 5)



7. Page Replacement Algorithms
AIM :

OBJECTIVE:
The focus of this assignment is the implementation of virtual memory techniques. Various
techniques like paging and demand paging are covered here, so the main learning objective is
to understand the various page replacement algorithms.

THEORY:
1. Virtual Memory

Virtual memory is a technique that allows the execution of processes that may not be
completely in memory. One advantage of this scheme is that programs can be larger than
physical memory. Virtual memory can be implemented via:
• Demand paging

• Demand segmentation

Virtual memory concepts

Virtual memory disassociates the addresses referenced in a running process from the addresses
available in primary memory. Virtual addresses: the addresses referenced by the running process.
Real addresses: the addresses available in primary memory.

Virtual address space: the range of virtual addresses the running process may reference is
called that process’s virtual address space V. Real address space: the range of addresses
available on a particular computer system is called that computer’s real address space R.

Components of VM system
The user sees a large linear virtual address space. Only parts of the virtual address space are in
physical memory. The rest of it is "virtual" and is kept on the disk until needed. The disk
contains an image of the entire virtual address space, even the parts that are in physical
memory.

2. Paging

Most virtual memory systems use a technique called paging. Main memory is partitioned into
equal fixed-size chunks known as frames and the process is divided into equal fixed-size
chunks known as pages. A virtual address in a paging system is an ordered pair (p, d),
where p is the page number in virtual memory on which the referenced item resides, and d is
the displacement within page p at which the referenced item is located.

3. Segmentation

Paging is an arbitrary division of the logical address space into small fixed-size pieces. Instead
of using pages, we could divide the address space of a process into pieces based on the
semantics of the program. Such pieces are called segments. Although the user can now refer
to objects in the program by a two-dimensional address, the actual physical memory
is still a one-dimensional sequence of bytes. Thus, we must define an implementation to map
two-dimensional user-defined addresses into one-dimensional physical addresses. This
mapping is effected by the segment table.

4. Demand Paging

It is similar to paging with swapping; processes reside on secondary memory. Rather than
swapping the entire process into memory, a lazy swapper is used. A lazy swapper never swaps a
page into memory unless that page will be needed. We use the term pager rather than swapper in
demand paging.

Transfer of a Paged Memory to Contiguous Disk Space

It is the conventional wisdom that a process’s pages should be loaded on demand. No page
should be brought from secondary to primary storage until it is explicitly referenced by a
running process. Demand paging guarantees that the only pages brought to main memory are
those actually needed by processes.

Steps in Handling a Page Fault

We check an internal table (usually in PCB) for this process, to determine whether the
reference was valid or invalid memory access. If the reference was invalid, we terminate the
process. If it was valid, but we have not yet brought in that page, we now page it in. We find a
free frame. We schedule a disk operation to read the desired page into the newly allocated
frame. When the disk read is complete, we modify the internal table kept with the process
and the page table to indicate that the page is now in memory. We restart the instruction that
was interrupted by the illegal address trap. The process can now access the page as though it
had always been in memory.

Page Replacement

If no frame is free, we find one that is not currently being used and free it: we write its
contents to swap space and change the page table to indicate that the page is no longer in
memory. The freed frame can then be used to hold the page for which the process faulted.
The page-fault service routine is thus modified to include page replacement.


5. Page Replacement Algorithms

There are many different page replacement algorithms; probably every OS has its own
replacement scheme. We evaluate an algorithm by running it on a particular string of memory
references and computing the number of page faults. The string of memory references is called a
reference string.

A succession of memory references made by a program executing on a computer may be:

14489, 1448B, 14494, 14596, ...

When analyzing page replacement algorithms we are interested only in the pages being referenced. The referenced pages are obtained simply by omitting the two least significant hex digits:

144, 144, 144, 145, ...


To reduce the amount of data, we note two things.

For a given page size (generally fixed by the hardware), we need to consider only the page
number, not the entire address. If we have a reference to a page p, then any immediately
following references to page p will never cause a page fault: page p will be in memory after
the first reference, so the immediately following references will not fault.

For example, if we trace a particular process, we might record the following address sequence:

0100, 0432, 0101, 0612, 0102, 0103, 0104, 0101, 0611, 0102, 0103, 0104, 0101, 0610, 0102, 0103, 0104, 0101, 0609, 0102, 0105

which, at 100 bytes per page, is reduced to the following reference string:

1, 4, 1, 6, 1, 6, 1, 6, 1, 6, 1


First-In-First-Out (FIFO) Algorithm

FIFO associates with each page the time when it was brought into memory; when a page must be replaced, the oldest page is chosen.

Belady's anomaly

Belady's anomaly reflects the fact that, for some page-replacement algorithms, the page-fault
rate may increase as the number of allocated frames increases. FIFO algorithm suffers from
this problem.

We would expect that giving more memory to a process would improve its performance. In
some early research, investigators noticed that this assumption was not always true. Belady's
anomaly was discovered as a result.

Optimal Algorithm

Replace the page that will not be used for the longest period of time.

LRU Algorithm

If the optimal algorithm is not feasible, perhaps an approximation to the optimal algorithm is
possible. The key distinction between the FIFO and OPT algorithms (other than looking
backward or forward in time) is that the FIFO algorithm uses the time when a page was
brought into memory; the OPT algorithm uses the time when a page is to be used. If we use
the recent past as an approximation of the near future, then we will replace the page that has
not been used for the longest period of time. This approach is the least recently used (LRU)
algorithm.
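
As a concrete illustration, here is a minimal FIFO page-fault counter in C (a sketch; the reference string and frame count match the INPUT example below):

#include <stdio.h>

int main(void)
{
    int ref[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
    int n = sizeof(ref) / sizeof(ref[0]);
    int nframes = 3, frames[3] = {-1, -1, -1};
    int next = 0, faults = 0;                /* next = oldest (FIFO) position */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < nframes; j++)    /* is the page already resident? */
            if (frames[j] == ref[i]) { hit = 1; break; }
        if (!hit) {                          /* page fault */
            frames[next] = ref[i];           /* replace the oldest page */
            next = (next + 1) % nframes;
            faults++;
        }
    }
    printf("page faults = %d\n", faults);    /* 15 for this string and 3 frames */
    return 0;
}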


INPUT:

1) Accept a string of memory references, e.g. 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1

2) Accept the number of frames

OUTPUT:

1) Display frame after each memory reference

2) Display number of page faults

FAQS:
1) Which is the best page replacement algorithm & why?
2) Why do you need page reference strings to evaluate page replacement algorithms? What do
you do with them?
3) What are the advantages and disadvantages of paging the operating system address space?
4) Compare paging and segmentation.
5) What is Virtual memory?



8. Mutual Exclusion and Synchronization of threads
AIM: Implementation of Reader-Writer/Producer – consumer problem using mutex and
semaphore

OBJECTIVE:

 Learn basic thread concepts.


 Experiment with POSIX thread calls.
 Explore threaded application design.
 Learn the basics of thread synchronization and semaphores and their properties.
 Experiment with mutex locks and semaphores.
 Understand the working of semaphore and mutexes in general and in particular with
POSIX threads.
 Explore critical section behavior.

THEORY:

Thread Basics:

Threads, like processes, are a mechanism to allow a program to do more than one thing at a
time. As with processes, threads appear to run concurrently. Conceptually, a thread exists
within a process. A thread is a unit of execution associated with a process, with its own ID,
stack, stack pointer, PC, condition codes and general-purpose registers. Multiple threads
associated with a process run concurrently in the context of that process, sharing its code,
data, heap, shared libraries and open files.

Traditional view of a process:

Process = process context + code, data, and stack

Modern view of a process:

Process = thread + code, data and kernel context

A process with multiple threads:

Multiple threads can be associated with a process. Each thread has its own logical control flow
(sequence of PC values). Each thread shares the same code, data and kernel context. Each thread
has its own thread id (tid).

Differences between thread execution and process execution:

A thread context switch is faster than a process context switch. Threads, unlike processes, are
not organized in a rigid parent-child hierarchy. The threads associated with a process form a pool
of peers, independent of which threads were created by which other threads. The main thread is
distinguished from other threads only in the sense that it is always the first thread to run in the
process. A thread can kill any of its peers, or wait for any of its peer to terminate. Each peer can
read and write the same shared data.
Summary: Threads Vs Process

How threads & processes are similar:

 Each has its own logical control flow.


 Each can run concurrently.
 Each is context switched.

How threads & processes are different:

 Threads share code & data, processes do not.


 Threads are somewhat less expensive than processes.

Thread levels

a. User Level Threads (Thread libraries).

b. Kernel Level Threads (System calls).

c. Combined ULT and KLT

User Level Threads (ULT)

The kernel is not aware of thread activity, but it still manages process activity. When a thread
makes a system call, the whole process is blocked, yet to the thread library that thread is
still in the running state. So thread states are independent of process states.

Advantages of ULT:

Thread switching does not involve the kernel, so no mode switch is needed. Scheduling can be
application specific: choose the best algorithm for the application. ULTs can run on any OS;
only a thread library is needed.

Disadvantages of ULT:

Most system calls are blocking and the kernel blocks processes, so all threads within the
process will be blocked. The kernel can only assign processes to processors, so two threads
within the same process cannot run simultaneously on two processors.

Kernel Level Threads (KLT)

All thread management is done by the kernel. There is no thread library, but an API (system
calls) to the kernel thread facility exists. The kernel maintains context information for the
process and the threads. Switching between threads requires the kernel. Scheduling is
performed on a per-thread basis.

Advantages of KLT:

The kernel can simultaneously schedule many threads of the same process on many processors.
Blocking is done on a thread level. Kernel routines can be multithreaded.

Disadvantages of KLT:

Thread switching within the same process involves the kernel. E.g. if we have 2 mode switches
per thread switch, this results in a significant slow down.

Combined ULT/KLT approaches


The idea is to combine the best of both approaches. Thread creation is done in user space, as is
the bulk of the scheduling and synchronization of threads. The programmer may adjust the
number of KLTs. Solaris is an example of an OS with this approach.

Thread programming with POSIX thread library <pthread.h>

Creating Threads:

To create a new thread we use the pthread_create() function.

 int pthread_create(pthread_t *tid, const pthread_attr_t *attr,
                    void *(*start_routine)(void *), void *arg);

return value: 0 if OK, non-zero on error.

The pthread_create() function gives back a thread identifier that can be used in other calls. The
second parameter is a pointer to a thread attribute object that you can use to set the thread's
attributes; a null pointer means to use the default attributes. The third parameter is a pointer to
the function the thread is to execute. The final parameter is the argument to that function. By
using void pointers here, any sort of data can be sent to the thread function, provided proper
casts are applied.

Terminating threads

A thread terminates in one of the following two ways:

 The thread terminates implicitly when its top level thread routine returns.
 The thread terminates explicitly by calling the routine: pthread_exit();

void pthread_exit(void *thread_return);

This function does not return to its caller.


Another thread terminates the current thread by calling the pthread_cancel function with the ID
of the current thread.

 int pthread_cancel(pthread_t tid);


returns: 0 if OK, non-zero on error

Wait for thread termination


int pthread_join(pthread_t tid, void **thread_return);

returns: 0 if OK, non-zero on error


The pthread_join function blocks until thread tid terminates, assigns the (void *) pointer returned
by the thread routine to the location pointed to by thread_return, and then reaps any memory
resources held by the terminated thread.

If thread_return is not NULL, the return value of tid is stored in the location pointed to by
thread_return. The return value of tid is either the argument it gave to pthread_exit or
PTHREAD_CANCELED if tid was canceled.

Returning Results from Threads

If the second argument we pass to pthread_join is non-null, the thread's return value will be
placed in the location pointed to by that argument. The thread return value, like the thread
argument, is of type void *. If you want to pass back a single int or other small number, you can
do this easily by casting the value to void * and then casting back to the appropriate type after
calling pthread_join.

Thread Synchronization

In order to effectively work together the threads in a program usually need to share information
or coordinate their activity. Many ways to do this have been devised and such techniques usually
go under the name of thread synchronization.
Mutual Exclusion

When writing multi-threaded programs it is frequently necessary to enforce mutually exclusive
access to a shared data object. This is done with mutex objects. The idea is to associate a mutex
with each shared data object and then require every thread that wishes to use the shared data
object to first lock the mutex before doing so.
Here are the particulars:

1. Declare an object of type pthread_mutex_t.

2. Initialize the object by calling pthread_mutex_init().

3. Call pthread_mutex_lock() to gain exclusive access to the shared data object.

4. Call pthread_mutex_unlock() to release the exclusive access and allow another thread to use the
shared data object.

5. Get rid of the object by calling pthread_mutex_destroy().

It is important to understand that if a thread attempts to lock the mutex while some other thread
has it locked, the second thread is blocked until the first releases the mutex with
pthread_mutex_unlock().
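
For instance, here is a minimal sketch of two threads incrementing a shared counter under a mutex (illustrative names; compile with the -pthread flag):

#include <stdio.h>
#include <pthread.h>

long counter = 0;                            /* the shared data object */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);           /* enter critical section */
        counter++;
        pthread_mutex_unlock(&lock);         /* leave critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);      /* always 200000 with the mutex */
    return 0;
}

Without the lock/unlock pair the final value would vary from run to run; this is exactly the racing problem described below.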
Other synchronization primitives used in POSIX threads are: semaphore and condition variables.


Process Synchronization

Process synchronization is a mechanism to ensure a systematic sharing of resources
amongst concurrent processes. In broader terms, process synchronization is when one
process waits for notification of an event that will occur in another process.

1. Racing Problem

Suppose two processes P0 and P1 are accessing a common integer variable A, say with P0
executing A = A - 100 and P1 executing A = A + 200, each as a separate load, compute and
store (the amounts are chosen to match the outcomes below):

 Suppose the initial value of A = 1000

The expected result of the execution of these two processes is A = 1100.

This is only possible if the execution is:

(a) P0 followed by P1, or
(b) P1 followed by P0.

But suppose P0 and P1 are permitted to execute in any arbitrary fashion; then the following
two possibilities could happen.

Possibility 1:

Execution interleaves in the sequence P0, P1, P0, P1: both processes load A = 1000, then P0
stores 900 and P1 stores 1200. The end value of A will be 1200, which is wrong.

Possibility 2:

Execution interleaves in the sequence P0, P1, P0: P0 loads A = 1000, P1 then executes
completely (A = 1200), and P0 finally stores 900. The end value of A will be 900, which is
also wrong.
Such a situation, when the end result of execution of two or more concurrent processes is
arbitrary and depends on the relative order of their execution, is called a racing problem (race
condition). The concurrent processes are racing with each other towards the shared resource in
an arbitrary order, producing wrong results.

How to avoid the Racing Problem?

In the previous example the end result is correct if the execution follows the sequence:
(a) P0 followed by P1, or
(b) P1 followed by P0.

This implies that at a time only one of the processes should be executing in its critical section,
not both. This is known as mutual exclusion of the two processes.

Cooperating processes:

Two or more concurrent processes sharing a common resource have to follow some well-defined
protocols to avoid the racing problem. Such processes are called cooperating processes.

2. Critical Section

Critical section refers to the code segment of a process in which it accesses a shared
resource. Processes P0 and P1 execute their respective critical sections to modify the
value of A; the shared variable A represents the common resource between the two processes.

Critical Section Problem:

Consider a set of concurrent processes {P0, P1, P2,....,Pn-1} sharing a common resource R,
through the execution of their critical sections. These processes have to cooperate with each
other to provide a solution to the critical section problem.

Requirements of a Critical-Section Solution

An ideal critical-section solution should meet the following three requirements:

(a) Mutual exclusion: at any time, at most one of the cooperating processes should be executing
in its critical section.

(b) Progress: if no process is executing in its critical section and there exist some processes
that wish to enter their critical sections, then only those processes that are not executing in their
remainder sections can participate in the decision of which will enter its critical section next,
and this decision cannot be postponed indefinitely. This requirement of progress must be met
under all possible conditions: if no process is in its critical section, the decision of who enters
is made quickly; and since only one process can enter the critical section, the others are, in
practice, put on a queue.

(c) Bounded waiting: There must exist a finite upper bound on the number of times that other
cooperating processes can enter their critical section, after a process P1 has requested entry
into its critical section and before the request is granted. Normally the upper bound is 1.
The wait is the time from when a process makes a request to enter its critical section until
that request is granted. In practice, once a process enters its critical section, it does not get
another turn until a waiting process gets a turn (managed as a queue).

Critical Section Solutions structure

The general structure of a critical-section solution is as follows (a code skeleton is sketched after the section descriptions):

Entry section: refers to the code segment of a process that is executed when the process intends
to enter its critical section.

Critical section: This is the code segment, wherein the process or thread will access a shared
resource.

Exit section: this section of code will be executed by the process immediately after its exit from
the critical section.

Remainder section: This is the remaining part of a process's code. When a process is executing in
this section, it implies that it is not waiting to enter its critical section.
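
Putting the four sections together, each cooperating process typically has the following skeleton (a sketch; the entry and exit sections would be implemented with the semaphore or mutex primitives described next):

while (1) {
    /* entry section: negotiate permission to enter */

    /* critical section: access the shared resource */

    /* exit section: release the permission */

    /* remainder section: the rest of the code */
}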

3. Semaphores

Semaphores are the OS tools for synchronization. Two types:


1. Binary Semaphore.
2. Counting Semaphore.

1. Binary Semaphore

A binary semaphore is an integer variable which can be accessed by a cooperating process
through the use of two primitives:
 Wait
 Signal

A binary semaphore is initialized by the OS to 1 and it can assume only one of the two values
(either 1 or 0).

 int S = 1; /* Let S be a Binary semaphore, initialized to 1 */

Wait: the first process invoking wait will make the semaphore value 0 and proceed to enter its
critical section. If a subsequent process Pi requests entry into its critical section while another
cooperating process is still executing in its critical section, then Pi will be made to wait. So at a
time only one of the waiting processes is permitted to enter its critical section. A waiting process
Pi will repeatedly check the value of the semaphore until it is found to be 1; then it will
decrement the value to 0 and proceed to enter its critical section.

The two steps, testing the value of the semaphore (while (*S == 0)) and decrementing it to 0
(*S--), have to be executed atomically; otherwise it would be possible for more than one
waiting process to find the semaphore value to be 1, decrement it to 0, and proceed to enter
its critical section simultaneously. Thus the requirement of mutual exclusion would be violated.



Signal: this primitive is invoked by a cooperating process when it is exiting from its critical
section. The operation comprises incrementing the value of the semaphore to 1, to allow one of
the waiting processes to enter its critical section.

A process Pi can synchronize access to its critical section using these primitives.
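
A conceptual busy-waiting sketch of the pattern (illustrative; a real implementation must make the test and the assignment execute atomically, e.g. with hardware support):

int S = 1;             /* binary semaphore, initialized to 1 */

void wait(int *s)
{
    while (*s == 0)
        ;              /* busy-wait until the semaphore becomes 1 */
    *s = 0;            /* take the semaphore */
}

void signal(int *s)
{
    *s = 1;            /* release; one waiting process may now proceed */
}

/* process Pi */
void Pi(void)
{
    wait(&S);
    /* ... critical section ... */
    signal(&S);
}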

2. Counting semaphore

Counting semaphores are free of the limitations of binary semaphores. A counting semaphore
comprises: an integer variable, initialized to a value K (K >= 0), which during operation can
assume any value <= K; and a pointer to a process queue. The queue holds the PCBs of all those
processes waiting to enter their critical sections. The queue is implemented as FCFS, so that the
waiting processes are served in FCFS order.

A counting semaphore can be implemented as follows:
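
A structural sketch (pseudo-C; queue_t, enqueue, dequeue, block and wakeup stand in for real kernel mechanisms, and the count may go negative to record waiting processes):

typedef struct {
    int count;                             /* initialized to K (K >= 0) */
    queue_t q;                             /* FCFS queue of waiting PCBs */
} csemaphore;

void wait(csemaphore *s)
{
    s->count--;
    if (s->count < 0) {                    /* no busy waiting: */
        enqueue(&s->q, current_process);
        block(current_process);            /* move to the "blocked" state */
    }
}

void signal(csemaphore *s)
{
    s->count++;
    if (s->count <= 0) {
        process_t *p = dequeue(&s->q);     /* FCFS order */
        wakeup(p);                         /* move to the "ready" state */
    }
}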

Operation of a counting semaphore:

1. Let the initial value of the semaphore count be 1.


2. When semaphore count = 1, it implies that no process is executing in its critical section
and no process is waiting in the semaphore queue.
3. When semaphore count = 0, it implies that one process is executing in its critical section
but no process is waiting in the semaphore queue.



4. When semaphore count = -N, it implies that one process is executing in its critical section
and N processes are waiting in the semaphore queue.
5. When a process is waiting in semaphore queue, it is not performing any busy waiting. It
is rather in a “waiting” or “blocked” state.
6. When a waiting process is selected for entry into its critical section, it is transferred from
“Blocked” state to “ready” state.

4. POSIX pthread mutex synchronization object

As described under Thread Synchronization above, the idea is to associate a mutex with each
shared data object and require every thread that wishes to use the object to lock the mutex first
and unlock it afterwards; a thread that attempts to lock a mutex already held by another thread
blocks until the holder releases it with pthread_mutex_unlock().

mutex functions:

1. pthread_mutex_init()
2. pthread_mutex_lock()
3. pthread_mutex_unlock()

Mutex Locks

A mutex is a special variable that can be either in the locked state or the unlocked state. If the
mutex is locked, it has a distinguished thread that holds or owns the mutex. If no thread holds the
mutex, we say the mutex is unlocked, free or available. When the mutex is free and a thread
attempts to acquire the mutex, that thread obtains the mutex and is not blocked. The mutex or
mutex lock is the simplest and most efficient thread synchronization mechanism. Programs use
mutex locks to preserve critical sections and to obtain exclusive access to resources. A mutex is
meant to be held for short periods of time. Mutex functions are not thread cancellation points and
are not interrupted by signals.

Creating and initializing a mutex

POSIX uses variables of type pthread_mutex_t to represent mutex locks. A program must
always initialize pthread_mutex_t variables before using them for synchronization. For
statically allocated pthread_mutex_t variables, simply assign PTHREAD_MUTEX_INITIALIZER to
the variable. For mutex variables that are dynamically allocated or that don't have the default
mutex attributes, call pthread_mutex_init to perform initialization.

int pthread_mutex_init(
pthread_mutex_t *mutex,
const pthread_mutexattr_t *restrict attr
);

The mutex parameter of pthread_mutex_init is a pointer to the mutex to be initialized. Pass
NULL for the attr parameter of pthread_mutex_init to initialize a mutex with the default
attributes.

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

Locking and unlocking a mutex

POSIX has two functions, pthread_mutex_lock and pthread_mutex_trylock, for acquiring a
mutex. The pthread_mutex_lock function blocks until the mutex is available, while
pthread_mutex_trylock always returns immediately. The pthread_mutex_unlock function
releases the specified mutex. All three functions take a single parameter, mutex, a pointer to a
mutex.

int pthread_mutex_lock(pthread_mutex_t *mutex);
int pthread_mutex_trylock(pthread_mutex_t *mutex);
int pthread_mutex_unlock(pthread_mutex_t *mutex);

5. POSIX Unnamed Semaphores as a synchronization object

A POSIX semaphore is a variable of type sem_t with associated atomic operations for
initializing, incrementing and decrementing its value.
The POSIX:SEM Semaphore Extension defines two types of semaphores, named and unnamed.
The difference between unnamed and named semaphores is analogous to the difference between
ordinary pipes and named pipes (FIFOs).

Declaration of a semaphore variable called sem:

#include <semaphore.h>

sem_t sem;

The POSIX Extension does not specify the underlying type of sem_t. One possibility is that
sem_t acts like a file descriptor.


Initialization of POSIX SEM: semaphore variable

int sem_init(sem_t *sem, int pshared, unsigned value);

If successful, sem_init initializes sem. POSIX does not specify the return value on success, but
the rationale mentions that sem_init may be required to return 0 in a future specification. If
unsuccessful, sem_init returns –1.

The sem_init function initializes the unnamed semaphore referenced by sem to value. The
value parameter cannot be negative. Our examples use unnamed semaphores with pshared
equal to 0, meaning that the semaphore can be used only by threads of the process that
initializes the semaphore. If pshared is nonzero, any process that can access sem can use the
semaphore.

The following code segment initializes an unnamed semaphore to be used by threads of the
process.

sem_t semA;

if (sem_init(&semA, 0, 1) == -1)
perror("Failed to initialize semaphore semA");

POSIX:SEM Semaphore Operations

The sem_post function implements classic semaphore signaling. If no threads are blocked on
sem, then sem_post increments the semaphore value. If at least one thread is blocked on sem,
then the semaphore value is zero. In this case, sem_post causes one of the threads blocked on
sem to return from its sem_wait function, and the semaphore value remains at zero.

int sem_post(sem_t *sem);

If successful, sem_post returns 0. If unsuccessful, sem_post returns –1.

The sem_wait function implements the classic semaphore wait operation. If the semaphore value
is 0, the calling thread blocks until it is unblocked by a corresponding call to sem_post.



int sem_wait(sem_t *sem);

If successful, this function returns 0. If unsuccessful, this function returns –1.

6. Readers/Writers problem

The readers/writers problem is defined as follows:

There is a data area shared among a number of processes. The data area could be a file, a block
of main memory, or even a bank of processor registers. There are a number of processes that
only read the data area (readers) and a number that only write to the data area (writers).

The conditions that must be satisfied are as follows:

1. Any number of readers may simultaneously read the file.


2. Only one writer at a time may write to the file.
3. If a writer is writing to the file, no reader may read it.

Thus, readers are processes that are not required to exclude one another and writers are processes
that are required to exclude all other processes, readers and writers alike.
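
A minimal sketch of the classic readers-preference solution using one mutex and one semaphore (illustrative; assumes sem_init(&wrt, 0, 1) has been called):

#include <pthread.h>
#include <semaphore.h>

sem_t wrt;                        /* excludes writers, and readers as a group */
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
int readcount = 0;                /* number of readers currently reading */

void reader(void)
{
    pthread_mutex_lock(&m);
    if (++readcount == 1)         /* first reader locks out writers */
        sem_wait(&wrt);
    pthread_mutex_unlock(&m);

    /* ... read the shared data area ... */

    pthread_mutex_lock(&m);
    if (--readcount == 0)         /* last reader lets writers back in */
        sem_post(&wrt);
    pthread_mutex_unlock(&m);
}

void writer(void)
{
    sem_wait(&wrt);               /* one writer at a time, and no readers */
    /* ... write the shared data area ... */
    sem_post(&wrt);
}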

7. Producer/Consumer problem

The general statement is this: there are one or more producers generating some type of data
(records, characters) and placing these in a buffer, and there is a single consumer taking items
out of the buffer one at a time.
The system is to be constrained to prevent the overlap of buffer operations: only one agent
(producer or consumer) may access the buffer at any one time. The problem is to make sure
that the producer won't try to add data to the buffer when it is full, and that the consumer
won't try to remove data from an empty buffer.
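
A sketch of the standard semaphore solution for a bounded buffer of N slots (empty counts free slots, full counts filled slots, and a mutex protects the buffer indices; names are illustrative, and sem_init(&empty, 0, N) and sem_init(&full, 0, 0) are assumed):

#include <pthread.h>
#include <semaphore.h>

#define N 10
int buffer[N];
int in = 0, out = 0;
sem_t empty, full;
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void producer(int item)
{
    sem_wait(&empty);             /* block while the buffer is full */
    pthread_mutex_lock(&m);
    buffer[in] = item;
    in = (in + 1) % N;
    pthread_mutex_unlock(&m);
    sem_post(&full);              /* one more item available */
}

int consumer(void)
{
    int item;
    sem_wait(&full);              /* block while the buffer is empty */
    pthread_mutex_lock(&m);
    item = buffer[out];
    out = (out + 1) % N;
    pthread_mutex_unlock(&m);
    sem_post(&empty);             /* one more free slot */
    return item;
}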

INPUT:
Number of readers and writers
OUTPUT:
A shared file is created and read item by item by the reader process.

FAQS: (min – 5 & max – 15)

1. What is mutual exclusion and what are the requirements to enforce mutual exclusion?



2. What is meant by critical section?
3. Explain the concept of a semaphore.
4. Explain wait and signal functions associated with semaphores.
5. What is meant by binary and counting semaphores?


PRACTISE ASSIGNMENTS / EXERCISE / MODIFICATIONS: (Max – 5)

Example 1:

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

void *thread(void *vargp);

int main()
{
    pthread_t tid;
    pthread_create(&tid, NULL, thread, NULL);
    exit(0);    /* note: main exits immediately; the thread may never get to run */
}

/* Thread routine */
void *thread(void *vargp)
{
    printf("Hello World!\n");
    return NULL;
}

Example 2:

#include <stdio.h>
#include <pthread.h>

void *thread(void *vargp);

int main()
{
    pthread_t tid;
    pthread_create(&tid, NULL, thread, NULL);
    pthread_join(tid, NULL);    /* wait for the thread to finish before exiting */
    return 0;
}

/* Thread routine */
void *thread(void *vargp)
{
    printf("Hello World!\n");
    return NULL;
}

Example 3:

#include <stdio.h>
#include <stdint.h>
#include <pthread.h>

typedef struct
{
    int num1;
    int num2;
} NUM;

void *sum_function(void *argp);

int main()
{
    pthread_t th1;
    NUM N1;
    void *ret_val;

    printf("Enter num1\n");
    scanf("%d", &N1.num1);
    printf("Enter num2\n");
    scanf("%d", &N1.num2);
    pthread_create(&th1, NULL, sum_function, (void *)&N1);
    pthread_join(th1, &ret_val);               /* collect the thread's return value */
    printf("sum = %d\n", (int)(intptr_t)ret_val);
    return 0;
}

void *sum_function(void *argp)
{
    NUM *N2 = (NUM *)argp;
    int sum = N2->num1 + N2->num2;
    return (void *)(intptr_t)sum;              /* pass a small int back through void* */
}

Example 4: A shared variable protected by semaphores.
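
A minimal sketch of the idea, using a POSIX unnamed semaphore as a binary lock around a shared counter (illustrative; compile with -pthread):

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

long shared = 0;                  /* the shared variable */
sem_t lock;                       /* used as a binary semaphore */

void *adder(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        sem_wait(&lock);          /* enter critical section */
        shared++;
        sem_post(&lock);          /* leave critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    if (sem_init(&lock, 0, 1) == -1) {    /* binary: initial value 1 */
        perror("Failed to initialize semaphore");
        return 1;
    }
    pthread_create(&t1, NULL, adder, NULL);
    pthread_create(&t2, NULL, adder, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %ld\n", shared);     /* 200000 when properly locked */
    sem_destroy(&lock);
    return 0;
}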



9. Inter process communication in UNIX
AIM:
Producer-Consumer problem using UNIX pipe

OBJECTIVE:
Learn Inter-process communication (UNIX pipe) as a data transfer mechanism among processes.

THEORY:

Pipes

Pipes are the oldest form of UNIX System IPC and are provided by all UNIX systems. Pipes
have two limitations.

1. Historically, they have been half duplex (i.e., data flows in only one direction).
2. Pipes can be used only between processes that have a common ancestor. Normally, a pipe
is created by a process, that process calls fork, and the pipe is used between the parent
and the child.

SYNOPSIS

#include <unistd.h>

int pipe(int fildes[2]);

The pipe function creates a communication buffer that the caller can access through the file
descriptors fildes[0] and fildes[1]. The data written to fildes[1] can be read from
fildes[0] on a first-in-first-out basis.

If successful, pipe returns 0. If unsuccessful, pipe returns –1.

A pipe has no external or permanent name, so a program can access it only through its two
descriptors. For this reason, a pipe can be used only by the process that created it and by
descendants that inherit the descriptors on fork.

When a process calls read on a pipe, the read returns immediately if the pipe is not empty. If the
pipe is empty, the read blocks until something is written to the pipe, as long as some process has
the pipe open for writing. On the other hand, if no process has the pipe open for writing, a read
from an empty pipe returns 0, indicating an end-of-file condition.

Example:

/* define an array to store the two file descriptors */
int filedes[2];
int rc;

/* now create the pipe */
rc = pipe(filedes);
if (rc == -1)
{
    perror("pipe failed");
    exit(1);
}

fork and pipe

A single process would not normally use a pipe. Pipes are used when two processes wish to
communicate in a one-way fashion. A pipe opened before fork becomes shared between the two
processes.



Kernel activity

Data flowing in a pipe are managed directly by the kernel; we can think of the data as flowing
through the kernel.


Writing into a pipe and Reading from a pipe

For this purpose the write() and read() system calls are used. The following file descriptors can
be used for block I/O with the write() and read() system calls:

write( pfd[1], buf, size );


read ( pfd[0], buf, size );

read() system call

Prototype:

#include<unistd.h>

ssize_t read(int fd, void *buf, size_t count);

read() attempts to read up to count bytes from file descriptor fd into the buffer starting at buf. If
count is zero, read() returns 0 and has no other effects.

Return value:

• On success, the number of bytes read is returned.


• On error –1 is returned

write () system call

Prototype:

#include<unistd.h>

ssize_t write(int fd, const void *buf, size_t count);


write() writes up to count bytes to the file referenced by the file descriptor fd from the buffer
starting at buf. On success, the number of bytes written is returned (zero indicates nothing was
written); on error, –1 is returned.
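
Putting pipe(), fork(), write() and read() together, here is a minimal sketch of the assignment's producer-consumer pattern (the parent produces integers, the child consumes them; illustrative):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int pfd[2], item;

    if (pipe(pfd) == -1) {
        perror("pipe failed");
        exit(1);
    }
    if (fork() == 0) {                        /* child: consumer */
        close(pfd[1]);                        /* not writing */
        while (read(pfd[0], &item, sizeof(item)) > 0)
            printf("consumed %d\n", item);    /* read returns 0 at end-of-file */
        close(pfd[0]);
        exit(0);
    }
    close(pfd[0]);                            /* parent: producer, not reading */
    for (item = 1; item <= 5; item++)
        write(pfd[1], &item, sizeof(item));   /* produce items */
    close(pfd[1]);                            /* signals EOF to the consumer */
    wait(NULL);
    return 0;
}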

INPUT:
Number of producer and consumer processes.

OUTPUT:
Producer and consumer processes executed and data items stored in files.

FAQS: (min – 5 & max – 15)

1. What do you mean by cooperating and non-cooperating processes?


2. What are the various ways in which two processes can communicate with each other?
3. Explain various functions along with their parameters for implementing named pipe and
unnamed pipe.

PRACTISE ASSIGNMENTS / EXERCISE / MODIFICATIONS: (Max – 5)


