
* The UN*X operating system provides a flexible set of simple tools to allow you to perform a
wide variety of system-management, text-processing, and general-purpose tasks. These simple
tools can be used in very powerful ways by tying them together programmatically, using
"shell scripts" or "shell programs".

The UN*X "shell" itself is a user-interface program that accepts commands from the user and
executes them. It can also accept the same commands written as a list in a file, along with
various other statements that the shell can interpret to provide input, output, decision-making,
looping, variable storage, option specification, and so on. This file is a shell program.

Shell programs are, like any other programming language, useful for some things but not for
others. They are excellent for system-management tasks but not for general-purpose
programming of any sophistication. Shell programs, though generally simple to write, are also
tricky to debug and slow in operation.

There are three common versions of the UN*X shell: the original "Bourne shell (sh)"; the "C shell
(csh)", a later shell with a C-like syntax; and the "Korn shell (ksh)", an extension of the Bourne
shell that is in predominant use.

This document focuses on the Bourne shell. The C shell is more powerful but has various
limitations, and while the Korn shell is cleaner and more powerful than the other two shells, it is a
superset of the Bourne shell: anything that runs under the Bourne shell runs under the Korn shell as
well. Since the Bourne shell's capabilities are probably more than most people require, there's no
reason to elaborate much beyond them in an introductory document, and the rest of the discussion
assumes use of the Bourne shell unless otherwise stated.

[1] GETTING STARTED

* The first thing to do in understanding shell programs is to understand the elementary system
commands that can be used in them. A list of fundamental UN*X system commands follows:

ls # Give a simple listing of files.
ll # Give a listing of files with file details.
cp # Copy files.
mv # Move or rename files.
rm # Remove files.
rm -r # Remove entire directory subtree.
cd # Change directories.
pwd # Print working directory.
cat # Lists a file or files sequentially.
more # Displays a file a screenful at a time.
pg # Variant on "more".
mkdir # Make a directory.
rmdir # Remove a directory.
The shell executes such commands when they are typed in from the command prompt with
their appropriate parameters, which are normally options and file names.
* The shell also allows sets of file names to be specified using "wildcard characters" that match a
range of files. The "*" wildcard character substitutes for any string of characters, so:

rm *.txt
-- deletes all files that end with ".txt". The "?" wildcard character substitutes for any single
character, so:
rm book?.txt
-- deletes "book1.txt", "book2.txt", and so on. More than one wildcard character can be used at
a time, for example:
rm *book?.txt
-- deletes "book1.txt", "mybook1.txt", "bigbook2.txt", and so on.

* Another shell capability is "input and output redirection". The shell accepts input by default
from what is called "standard input", and generates output by default to what is called "standard
output". These are normally defined as the keyboard and display, respectively, or what is referred
to as the "console" in UN*X terms.

However, you can "redirect" standard input or output to a file or another program if needed.
Consider the "sort" command. This command sorts a list of words into alphabetic order, so if you
enter:

sort
PORKY
ELMER
FOGHORN
DAFFY
WILE
BUGS
<CTL-D>
-- the shell spits back out at you:
BUGS
DAFFY
ELMER
FOGHORN
PORKY
WILE
Note that the CTL-D key input terminates direct keyboard input. You could just as well store the
same words in a file and then "redirect" the contents of that file to standard input with the "<"
operator:
sort < names.txt
This would list the sorted names to the display as before. If you wanted to store the sorted
names in a file, you could redirect them to standard output with the ">" operator:
sort < names.txt > output.txt
You can also append to an existing file using the ">>" operator:
sort < names.txt >> output.txt
In these cases, you don't see any output, since the command just executes and ends. However,
you can fix that by connecting the "tee" command to the output through a "pipe", designated
by "|". A pipe allows the standard output of one command to be chained into the standard input
of another command. In the case of "tee", it accepts text on its standard input and redirects it
both to a file and to standard output:
sort < names.txt | tee output.txt
So this both displays the names and puts them in the output file. You can chain together many
commands to "filter" information through several processing steps. This ability to combine the
effects of commands is one of the beauties of shell programming.
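For instance, here is a minimal sketch of a longer "filter" chain; the file names used are purely
hypothetical:

cat part1.txt part2.txt | sort | tee merged.txt | pg

Here "cat" concatenates the two files, "sort" orders the combined text, "tee" saves a copy in
"merged.txt" while passing the text along, and "pg" displays the result a screenful at a time.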

By the way, "sort" has some handy additional options:

sort -u # Eliminate redundant lines in output.
sort -r # Sort in reverse order.
sort -n # Sort numbers.
sort +1 # Skip first field in sorting.
* If a command generates an error, the error message is sent to what is called "standard error",
rather than standard output; standard error also defaults to the console. Error messages will not
be redirected by ">". However, you can use "2>" to redirect them. For example:
ls xyzzy 2> /dev/null
-- will give an error message if the file "xyzzy" doesn't exist, but the error will be redirected to
the file "/dev/null". This is actually a "special file" that exists under UN*X where everything sent
to it is simply discarded.
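If you want both normal output and error messages captured in the same file, you can combine
the redirections. A small sketch, using the hypothetical file names "myfile.txt" and "results.txt":

ls myfile.txt xyzzy > results.txt 2>&1

The "2>&1" tells the shell to send standard error to the same place as standard output, so both
the listing of "myfile.txt" and the complaint about the missing "xyzzy" end up in "results.txt".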

* The shell allows you to execute multiple commands sequentially on one line by chaining them
with a ";":

rm *.txt ; ls
A time-consuming program can also be run in the background by following it with a "&":
sort < bigfile.txt > output.txt &
* These commands and operations are essential elements for creating shell programs. They can
be stored in a file and then executed by the shell.

You instruct the shell that the file contains commands by marking it as "executable" with the
"chmod" command. Each file under UN*X has a set of "permission" bits, listed by an "ll" as:

rwxrwxrwx
The "r" gives "read" permission, the "w" gives "write" permission, and the "x" gives "execute"
permission. There are three sets of these permission bits, one for the user, one for other
members of a local group of users on a system, and one for everyone who can access the
system -- remember that UN*X is normally a multiuser environment.
You can use "chmod" to set these permissions by specifying them as an octal code. For example:

chmod 644 myfile.txt


This gives you both read and write permission on the file, but everybody else only gets read
permission. You can use the same octal scheme to set execute permission, or just use the "+x"
option:
chmod +x mypgm
This done, if you enter the name "mypgm" at the prompt, the shell reads the commands out of
"mypgm" and executes them. You can remove the execute permission with the "-x" option.

For example, suppose you want to be able to inspect the contents of a set of archive files stored
in the directory "/users/group/archives". You could create a file named "ckarc" and store the
following command string in it:

ls /users/group/archives | pg
This is a very simple shell program. As noted, the shell has control constructs, supports storage
variables, and has several options that can be set to allow much more sophisticated programs.

The following sections describe these features in a quick outline fashion. Please remember that
you can't really make much effective use of most of the features until you've learned about all of
them, so if you get confused just keep on going and then come back for a second pass.


[2] SHELL VARIABLES

* The first useful command to know about in building shell programs is "echo", which allows
you to perform output from your shell program:

echo "This is a test!"


This sends the string "This is a test!" to standard output. It is recommended that your shell
programs generate some output to inform the user of what they are doing.

The shell allows you to store values in variables. All you have to do to declare a variable is
assign a value to it:

shvar="This is a test!"
The string is enclosed in double-quotes to ensure that the variable swallows the entire string
(more on this later), and there are no spaces around the "=". The value of the shell variable can
be obtained by preceding it with a "$":
echo $shvar
This displays "This is a test!". If you hadn't stored a value in that shell variable, you would have
simply got a blank line. Values stored in shell variables can be used as parameters to other
programs as well:
ll $lastdir
You can wipe out a value stored in a shell variable by assigning the "null string" to it:
shvar=""
There are some subtleties in using shell variables. For example, suppose your shell program
performed the assignment:
allfiles=*
-- and then performed:
echo $allfiles
This would echo a list of all the files in the directory. However, only the string "*" would be
stored in "allfiles". The expansion of "*" only occurs when the "echo" command is executed.

Another subtlety is in modifying the values of shell variables. Suppose you have a file name in a
shell variable named "myfile" and want to copy that file to another with the same name, but with
"2" tacked on to the end. You might think to try:

mv $myfile $myfile2
-- but you'd quickly realize that the shell will think that "myfile2" is a different shell variable,
and this won't work. Fortunately, there is a way around this. You can perform the change as
follows:
mv $myfile ${myfile}2
* Your UN*X installation will have some variables installed by default, most importantly
$HOME, which gives the location of your home directory.
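For example, $HOME can be used anywhere a directory name is expected. A quick sketch, with
the hypothetical file name "notes.txt":

echo $HOME # Display the pathname of your home directory.
cp notes.txt $HOME # Copy the file into your home directory.
cd $HOME # Return to your home directory.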

If you want to call other shell programs from a shell program and have them use the same shell
variables as the calling program, you have to "export" them as follows:

shvar="This is a test!"
export shvar
echo "Calling program two."
shpgm2
echo "Done!"
If "shpgm2" simply contains:
echo $shvar
-- then it will echo "This is a test!".

[3] QUOTING AND COMMAND SUBSTITUTION



The next step is to consider shell command substitution. Like any programming language, the
shell does exactly what you tell it to do and so you have to be very specific when you tell it to do
something.

As an example, consider the "fgrep" command, which searches a file for a string. Suppose you
want to search a file named "source.txt" for the string "Coyote". You could do this with:

fgrep Coyote source.txt


-- and it would print out the matching lines. However, suppose you wanted to search for "Wile
E. Coyote". If you did this as:
fgrep Wile E. Coyote source.txt
-- you'd get an error message that "fgrep" couldn't open "E.". You need to enclose the string in
double-quotes (""):
fgrep "Wile E. Coyote" source.txt
If a string has a special character in it, such as "*" or "?", that you want to be interpreted as a
"literal" and not a wildcard, the shell can get a little confused. If you want to ensure that the
wildcards are not interpreted, you can either "escape" the wildcard with a backslash ("\*" or
"\?") or enclose the string in single quotes, which prevents the shell from interpreting any of
the characters within the string.

For example, if you executed:

echo "$shvar"
-- from a shell program, you would output the value of the shell variable "$shvar". If instead you
used:
echo '$shvar'
-- you would get the string "$shvar".

* Having considered "double-quoting" and "single-quoting", let's now consider "back-quoting".


This is a little tricky to explain. As a useful tool, consider the "expr" command, which allows
you to do simple math from the command line:

expr 2 + 4
This returns the value "6". You must have spaces between the parameters, and if you
perform a multiplication you have to "escape" the "*" so the shell doesn't interpret it as a wildcard:
expr 3 \* 7
Now suppose you stored the string "expr 12 / 3" in a shell variable named "shcmd". If you
executed:
echo $shcmd
-- or:
echo "$shcmd"
-- you'd get "expr 12 / 3". If you used single-quotes:
echo '$shcmd'
-- you'd get the string "$shcmd". But if you used back-quotes, the reverse form of a single
quote:
echo `$shcmd`
-- you'd get the value "4", since the string inside "shcmd" is executed. This is an extremely
powerful technique that can be very confusing to use in practice.
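One common use of back-quotes is to capture the output of a command in a shell variable. A
minimal sketch:

today=`date` # Store the current date and time.
nfiles=`ls | wc -l` # Count the files in the current directory.
echo "It is $today and there are $nfiles files here."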

[4] COMMAND-LINE ARGUMENTS

* In general, shell programs operate in a "batch" mode, that is, without interaction from the user,
and so most of their parameters are obtained on the command line.

Each argument on the command line can be seen inside the shell program as a shell variable of
the form "$1", "$2", "$3", and so on, with "$1" corresponding to the first argument, "$2" the
second, "$3" the third, and so on.

There is also a "special" argument variable, "$0", that gives the name of the shell program itself.
Other special variables include "$#", which gives the number of arguments supplied, and "$*",
which gives a string with all the arguments supplied.

The argument variables only run from "$1" to "$9", so what happens if you have more
than 9 arguments? No problem: you can use the "shift" command to move the arguments down
through the argument list. That is, when you execute "shift", the second argument becomes
"$1", the third argument becomes "$2", and so on; if you do a "shift" again, the third
argument becomes "$1", and so on. You can also add a count to cause a multiple shift:

shift 3
-- shifts the arguments three times, so that the fourth argument ends up in "$1".
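Putting these pieces together, a hypothetical shell program named "argdemo" might look like the
following sketch:

echo "Program name: $0"
echo "Number of arguments: $#"
echo "All arguments: $*"
echo "First argument: $1"
shift
echo "After a shift, the first argument is now: $1"

Running "argdemo red green blue" would report three arguments and show "$1" changing from
"red" to "green" after the shift.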

[5] DECISION-MAKING AND LOOP CONTROL

* Shell programs can perform conditional tests on their arguments and variables and execute
different commands based on the results. For example:

if [ "$1" = "hyena" ]
then
echo "Sorry, hyenas not allowed."
exit
elif [ "$1" = "jackal" ]
then
echo "Jackals not welcome."
exit
else
echo "Welcome to Bongo Congo."
fi
echo "Do you have anything to declare?"
-- checks the command line to see if the first argument is "hyena" or "jackal" and bails out,
using the "exit" command, if it is. Other arguments allow the rest of the file to be
executed. Note how "$1" is enclosed in double quotes, so the test will not generate an error
message if it yields a null result.

There are a wide variety of such test conditions:

[ "$shvar" = "fox" ] String comparison, true if match.
[ "$shvar" != "fox" ] String comparison, true if no match.
[ "$shvar" = "" ] True if null variable.
[ "$shvar" != "" ] True if not null variable.

[ "$nval" -eq 0 ] Integer test; true if equal to 0.
[ "$nval" -ge 0 ] Integer test; true if greater than or equal to 0.
[ "$nval" -gt 0 ] Integer test; true if greater than 0.
[ "$nval" -le 0 ] Integer test; true if less than or equal to 0.
[ "$nval" -lt 0 ] Integer test; true if less than 0.
[ "$nval" -ne 0 ] Integer test; true if not equal to 0.

[ -d tmp ] True if "tmp" is a directory.
[ -f tmp ] True if "tmp" is an ordinary file.
[ -r tmp ] True if "tmp" can be read.
[ -s tmp ] True if "tmp" is nonzero length.
[ -w tmp ] True if "tmp" can be written.
[ -x tmp ] True if "tmp" is executable.
There is also a "case" control construct that checks for equality with a list of items. It can be
used with the example at the beginning of this section:
case "$1"
in
"hyena") echo "Sorry, hyenas not allowed."
exit;;
"jackal") echo "Jackals not welcome."
exit;;
*) echo "Welcome to Bongo Congo.";;
esac
The string ";;" is used to terminate each "case" clause.

* The fundamental loop construct in the shell is based on the "for" command. For example:

for nvar in 1 2 3 4 5
do
echo $nvar
done
ëë echoes the numbers 1 through 5. You could echo the names of all the files in the current
directory with:
for file in *
do
echo $file
done
One nice little feature of the shell is that if you don't actually specify the "in" parameters for the
"for" command, it just assumes you want the command-line arguments. If you simply typed in:
for file
do
echo $file
done
-- it would echo each command-line argument in turn.
* There is a "break" command to allow you to exit a loop if necessary:
for file
do
if [ "$file" = punchout ]
then
break
else
echo $file
fi
done
There is also a "continue" command that allows you to start the next iteration of the loop
immediately. You must have a command in the "then" or "else" clauses, or you'll get an error
message. If you don't want to, say, actually do anything in the "then" clause, you can use ":" as
a "no-op" command:
then
:
else
* There are two other looping constructs available as well, "while" and "until". For an example
of "while":
n=10
while [ "$n" -ne 0 ]
do
echo $n
n=`expr $n - 1`
done
-- counts down from 10 to 1. The "until" loop has similar syntax but tests for a false condition:
n=10
until [ "$n" -eq 0 ]
do
...

[6] OTHER SHELL FEATURES

* There are other useful features available for writing shell programs. For example, you can
comment your programs by preceding the comments with a "#":

# This is an example shell program.


cat /users/group/grouplog.txt | pg # Read group log file.
It is strongly recommended that you provide comments in all your shell programs. If they are
just one-liners, a simple comment line will do. If they are complicated shell programs, they
should have a title, revision number, revision date, and revision history along with descriptive
comments.

This will prevent confusion if you find copies of the same file that don't have the same
comments, or try to modify the program later. Shell programs can be obscure, even by the
standards of programming languages, and it is useful to provide a few hints.
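There is no fixed format for such a header; one possible layout, with purely illustrative names
and dates, is:

# backlog -- archive the group log files.
# v1.0.1 / 24 November 1995 / coyote
#
# Revision history:
# v1.0.0: initial release.
# v1.0.1: added error checks on the archive directory.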

* You can read standard input into a shell program using the "read" command. For example:

echo "What is your name?"


read myname
echo $myname
-- echoes your own name. The "read" command will read each item of standard input into a list
of shell variables until it runs out of shell variables, and then it will read all the rest of standard
input into the last shell variable. As a result, in the example above your entire name is stored
into "myname".
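To see how "read" distributes its input, consider this sketch with two shell variables:

echo "What is your name?"
read first rest
echo "First name: $first"
echo "The rest: $rest"

If you type in "Wile E. Coyote", then "Wile" is stored in "first" and "E. Coyote" is stored in
"rest".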

* If you have a command too long to fit on one line, you can use the line continuation character
"\" to put it on more than one line:

echo "This is a test of \
the line continuation character."
* There is a somewhat cryptic command designated by "." that allows you to execute a file of
commands within your current shell program. For example:
. mycmds
-- will execute the commands stored in the file "mycmds". It's something like an "include"
command in other languages.
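This is handy for keeping shared settings in one place. As a sketch, suppose a hypothetical file
named "settings" contains nothing but variable assignments:

# Contents of the file "settings":
# logdir=/users/group/logs
# owner=coyote
. settings
echo "Logs are kept in $logdir and belong to $owner."

Because the "." command runs the file in the current shell rather than a subshell, the variables it
sets remain available afterward.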

* If you want to trace the execution of a shell program, you can use the "-x" option with the
shell:

sh -x mypgm *
This traces out the steps "mypgm" takes during the course of its operation.

* One last comment on shell programs before proceeding: What happens if you have a shell
program that just performs, say:

cd /users/coyote
-- to allow you to change to another directory? Well, if you do this, you'll find that nothing
happens. After you run the shell program, you're still in the same directory you were when you
started.

The reason is that the shell creates a new shell, or "subshell", to run the shell program, and when
the shell program is finished, the subshell vanishes -- along with any changes made in that
subshell's environment. It is easier, at least in this simple case, to define a command alias in your
UN*X "login" shell rather than struggle with the problem in shell programs.
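For reference, such an alias in a C shell or Korn shell login environment might look like the
following sketch (the alias name is arbitrary); the Bourne shell itself has no aliases, but you can
get much the same effect by putting the "cd" in a file and executing it with the "." command
described earlier:

alias gohome 'cd /users/coyote' # C shell form.
alias gohome='cd /users/coyote' # Korn shell form.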

[7] USEFUL TOOLS

* Before we go on to practical shell programs, let's consider a few more useful tools.

The "paste" utility takes a list of text files and concatenates them on a line-by-line basis. For
example:

paste names.txt phone.txt > data.txt


-- takes a file containing names and a file containing corresponding phone numbers and
generates a file with each name and its corresponding number paired on a single line.

* The "head" and "tail" utilities list the first 10 or last 10 lines in a file respectively. You can
specify the number of lines to be listed if you like:

head -5 source.txt # List first 5 lines.
tail -5 source.txt # List last 5 lines.
tail +5 source.txt # List lines from line 5 onward.
* The "tr" utility translates from one set of characters to another. For example, to translate
uppercase characters to lowercase characters:
tr '[A-Z]' '[a-z]' < file1.txt > file2.txt
You can of course make the reverse conversion using:
tr '[a-z]' '[A-Z]' < file1.txt > file2.txt
A "-d" option allows you to delete a character. For example:
tr -d '*'
-- deletes all asterisks from the input stream. Note that "tr" only works on single characters.

* The "uniq" utility removes duplicate consecutive lines from a file. It has the syntax:

uniq source.txt output.txt


A "-c" option provides an additional count of the number of times a line was duplicated, while a
"-d" option allows you to display only the duplicated lines in a file.

* The "wc" ("word count") utility tallies up the characters, words, and lines of text in a text file.
You can also invoke it with the following options (a couple of pipeline examples follow the list):

wc -c # Character count only.
wc -w # Word count only.
wc -l # Line count only.
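Since "wc" reads standard input when no file name is given, it is often used at the end of a pipe;
two quick sketches:

who | wc -l # Count the number of users currently logged in.
ls | wc -l # Count the number of files in the current directory.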
* The "find" utility is extremely useful, if a little hard to figure out. Essentially, it traverses a
directory subtree and performs whatever action you want to perform on each file or directory
that matches your criteria. For example:
find / -name findtest.txt -print
This searches from the root directory ("/") for "findtest.txt", as designated by the "-name"
option, and then prints the full pathname of the file, as designated by the "-print" option.
Incidentally, you must tell "find" to do something on a match, or it won't do anything and will
keep right on searching.

There are a wide variety of selection criteria. If you just want to print out the directories in a
search from your current directory, you can do so with:

find . -type d -print


You can also find files based on their username, date of last modification, size, and so on.
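A few sketches of such searches; the user name here is hypothetical:

find $HOME -mtime -7 -print # Files modified within the last 7 days.
find . -size +1000 -print # Files larger than 1000 blocks.
find /users -user coyote -print # Files owned by the user "coyote".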

One of the things that makes "find" extremely useful is that not only can you perform searches,
you can perform an action when a search has a match, using the "-exec" option. For example, if
you want to get the headers of all the files on a match into a single file, you could do so as:

find . -name log.txt -exec head {} \; >> ./log


Note how the name of each matched file is substituted for the "{}", and how the executed
command string is terminated by "\;".

[8] REGULAR EXPRESSIONS

* An advanced set of tools allows you to perform searches on text strings in files and, in some
cases, manipulate the strings found. These tools are known as "grep", "sed", and "awk" and are
based on the concept of a "regular expression", which is a scheme by which specific text patterns
can be specified by a set of special or "magic" characters.

The simplest regular expression is just the string you are searching for. For example:

grep Taz *.txt


-- finds every example of the string "Taz" in all files ending in ".txt", then displays the name of
the file and the line of text containing the string.

But using the magic characters provides much more flexibility. For example:

grep ^Taz *.txt


-- finds the string "Taz" only if it is at the beginning of the line. Similarly:
grep Taz$ *.txt
-- matches it only if it is at the end of the line.

Now suppose you want to be able to match both "Taz" and "taz". You can do that with:

[Tt]az
The square brackets ("[]") allow you to match any one of a set of characters. You can list the
characters of the set explicitly, such as:
group_[abcdef]
This matches the strings "group_a", "group_b", and so on up to "group_f". This range
specification can be simplified to:
group_[a-f]
Similarly:
set[0123456789]
-- can be simplified to:
set[0-9]
If you want to match all characters except a specific set, you can do that as follows:
unit_[^xyz]
This matches "unit_a" or "unit_b", but not "unit_x" or "unit_y" or "unit_z".

Other magic characters provide a wildcard capability. The "." character can substitute for any
single character, while the "*" substitutes for zero or more repetitions of the preceding regular
expression. For example:

_*$
-- matches any line that is padded with spaces to the right margin (for clarity the space
character is represented here by a "_"). If you want to match a magic character as a literal item
of text, you have to precede it with a "\":
test\.txt
This matches "test.txt".

[9] GREP, SED, AND AWK

* Now that we understand regular expressions, we can consider "grep", "sed", and "awk" in more
detail.

The name "grep" comes from "global regular expression print", and as noted it searches a file
for matches to a regular expression like "^Taz" or "_*$". It has a few useful options as well. For
example:

grep -v <regular_expression> <file_list>


-- lists all lines that do not match the regular expression. Other options include:
grep -n # List line numbers of matches.
grep -i # Ignore case.
grep -l # Only list file names for a match.
If you are simply searching for literal strings and not using regular expressions, there is a variation
on "grep" called "fgrep" (for "fixed-string grep") that searches for matches on fixed strings and
runs faster; we used "fgrep" in an earlier example. It uses the same options as described for "grep"
above.
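These options can be combined as with most UN*X commands; for example, a case-insensitive
search that also reports line numbers might look like this sketch:

grep -in "taz" *.txt
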
* The name "sed" stands for "stream editor" and it provides, in general, a search-and-replace
capability. Its syntax for this task is as follows:

sed 's/<regular_expression>/<replacement_string>/[g]' source.txt


The optional "g" parameter specifies a "global" replacement. That is, if you have multiple
matches on the same line, "sed" will replace them all. Without the "g" option, it will only
replace the first match on that line.

For example, to replace the string "flack" with "flak", you would use "sed" as follows:

sed 's/flack/flak/g' source.txt > output.txt


You can also specify deletion of lines that match a pattern:
sed '/bozo/d'
-- or perform substitutions and deletions from a list of such specifications stored in a file:
sed -f sedcmds.txt source.txt > output.txt
Another useful feature allows you to quit on a pattern match:
sed '/^Target/q' source.txt > output.txt
-- or append a file to the output after a pattern match:
sed '/^Target/ r newtext.txt' source.txt > output.txt
The "sed" utility has a wide variety of other options, but a full discussion of its capabilities is
beyond the scope of this document.

* Finally, "awk" is a full-blown text processing language that looks something like a mad cross
between "grep" and "C". In operation, "awk" takes each line of input and performs text
processing on it. It recognizes the current line as "$0", with each word in the line recognized as
"$1", "$2", "$3", and so on.

This means that:

awk '{ print $0,$0 }' source.txt


-- prints each line with duplicate text. You can specify a regular expression to identify a pattern
match. For example, if you want to tally the lines with the word "Taz" on them, you could do
that with:
awk '/Taz/ { taz++ }; END { print taz }' source.txt
The END clause used in this example allows execution of "awk" statements after the line-
scanning has been completed. There is also a BEGIN clause that allows execution of "awk"
statements before line-scanning begins.
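As another small sketch, "awk" can do arithmetic on fields; assuming a hypothetical file
"sales.txt" whose second column holds numbers, this totals up that column:

awk '{ total += $2 }; END { print "Total:", total }' sales.txt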

You can do very simple or very complicated things with "awk" once you know how it works. Its
syntax is much like that of "C", though it is much less finicky to deal with. Details of "awk" are
discussed in another Vectorsite document.

[10] PRACTICAL SHELL PROGRAMS

* The most elementary use of shell programs is to reduce complicated command strings to
simpler commands and to provide handy utilities.

For example, I can never remember the options for compiling an ANSI C program, so I store
them in a script program named "compile":

cc $1.c -Aa -o $1
Similarly, I like to timestamp my documents in a particular format, so I have a shell program
named "td" ("timedate") that invokes "date" as follows:
date +"date: %A, %d %B %Y %H%M %Z"
This gives, for example:
date: Friday, 24 November 1995 1340 MST
Another simple example is a shell script to convert file names from uppercase to lowercase:
for file
do
mv $file `echo $file | tr "[A-Z]" "[a-z]"`
done
In this example, "for" is used to sequence through the file arguments, and "tr" and back-quoting
are used to establish the lower-case name for the file.

[11] QUICK REFERENCE

* This final section provides a fast lookup reference for the materials in this document. It is a
collection of thumbnail examples and rules that will be cryptic if you haven't read through the
text.

* Useful commands:

cat # Lists a file or files sequentially.
cd # Change directories.
chmod +x # Set execute permissions.
chmod 666 # Set universal read-write permissions.
cp # Copy files.
expr 2 + 2 # Add 2 + 2.
fgrep # Search for string match.
grep # Search for string pattern matches.
grep -v # Search for no match.
grep -n # List line numbers of matches.
grep -i # Ignore case.
grep -l # Only list file names for a match.
head -5 source.txt # List first 5 lines.
ll # Give a listing of files with file details.
ls # Give a simple listing of files.
mkdir # Make a directory.
more # Displays a file a screenful at a time.
mv # Move or rename files.
paste f1 f2 # Paste files by columns.
pg # Variant on "more".
pwd # Print working directory.
rm # Remove files.
rm -r # Remove entire directory subtree.
rmdir # Remove a directory.
sed 's/txt/TXT/g' # Scan and replace text.
sed '/txt/d' # Scan and delete lines.
sed '/txt/q' # Scan and then quit.
sort # Sort input.
sort +1 # Skip first field in sorting.
sort -n # Sort numbers.
sort -r # Sort in reverse order.
sort -u # Eliminate redundant lines in output.
tail -5 source.txt # List last 5 lines.
tail +5 source.txt # List lines from line 5 onward.
tr '[A-Z]' '[a-z]' # Translate to lowercase.
tr '[a-z]' '[A-Z]' # Translate to uppercase.
tr -d '_' # Delete underscores.
uniq # Find unique lines.
wc # Word count (characters, words, lines).
wc -w # Word count only.
wc -l # Line count only.
* Elementary shell capabilities:
shvar="Test 1" # Initialize a shell variable.
echo $shvar # Display a shell variable.
export shvar # Allow subshells to use shell variable.
mv $f ${f}2 # Append "2" to file name in shell variable.
$1, $2, $3, ... # Command-line arguments.
$0 # Shell-program name.
$# # Number of arguments.
$* # Complete argument list.
shift 2 # Shift argument variables by 2.
read v # Read input into variable "v".
. mycmds # Execute commands in file.
* IF statement:
if [ "$1" = "red" ]
then
echo "Illegal code."
exit
elif [ "$1" = "blue" ]
then
echo "Illegal code."
exit
else
echo "Access granted."
fi

[ "$shvar" = "red" ] String comparison, true if match.
[ "$shvar" != "red" ] String comparison, true if no match.
[ "$shvar" = "" ] True if null variable.
[ "$shvar" != "" ] True if not null variable.

[ "$nval" -eq 0 ] Integer test; true if equal to 0.
[ "$nval" -ge 0 ] Integer test; true if greater than or equal to 0.
[ "$nval" -gt 0 ] Integer test; true if greater than 0.
[ "$nval" -le 0 ] Integer test; true if less than or equal to 0.
[ "$nval" -lt 0 ] Integer test; true if less than 0.
[ "$nval" -ne 0 ] Integer test; true if not equal to 0.

[ -d tmp ] True if "tmp" is a directory.
[ -f tmp ] True if "tmp" is an ordinary file.
[ -r tmp ] True if "tmp" can be read.
[ -s tmp ] True if "tmp" is nonzero length.
[ -w tmp ] True if "tmp" can be written.
[ -x tmp ] True if "tmp" is executable.
* CASE statement:
case "$1"
in
"red") echo "Illegal code."
exit;;
"blue") echo "Illegal code."
exit;;
*) echo "Access granted.";;
esac
* Loop statements:
for nvar in 1 2 3 4 5
do
echo $nvar
done

for file # Cycle through command-line arguments.
do
echo $file
done

while [ "$n" != "Joe" ] # Or: until [ "$n" = "Joe" ]
do
echo "What's your name?"
read n
echo $n
done
There are "break" and "continue" commands that allow you to exit or skip to the end of loops
as the need arises.

[12] COMMENTS AND REVISION HISTORY

* This document was originally written during the 1990s, but I yanked it in 2001 as it didn't seem
to be attracting much attention. In the spring of 2003 I retrieved it and put it back up as I realized
it offered some value and it made no sense just to keep it archived, gathering dust.

Unfortunately, by that time I had lost track of its revision history. For want of anything better to
do, I simply gave the resurrected document the initial revcode of "v1.0.0". I believe it is unlikely
that any earlier versions of this document are available on the Internet, but since I had switched
from a two-digit revcode format ("v1.0") to a three-digit format ("v1.0.0") in the interim, any
earlier copies will have a two-digit revcode.
