
Shell script

From Wikipedia, the free encyclopedia



This article is about scripting in UNIX-like systems. For batch programming in DOS, OS/2 and Windows,
see Batch file. For batch programming in Windows PowerShell shell, see Windows PowerShell#Scripting.






Editing a FreeBSD shell script for configuring ipfirewall
A shell script is a computer program designed to be run by the Unix shell, a command-line interpreter.[1]

The various dialects of shell scripts are considered to be scripting languages.
Typical operations performed by shell scripts include file manipulation, program execution, and printing
text.
Contents
1 Capabilities
o 1.1 Shortcuts
o 1.2 Batch jobs
o 1.3 Generalization
o 1.4 Verisimilitude
o 1.5 Programming
2 Other scripting languages
3 Life cycle
4 Advantages and disadvantages
5 See also
6 References
7 External links
Capabilities[edit]
Shortcuts[edit]
In its most basic form, a shell script can provide a convenient variation of a system command where
special environment settings, command options, or post-processing apply automatically, but in a way that
allows the new script to still act as a fully normal Unix command.
One example would be to create a shortened version of ls, the command to list files, giving it the command
name l, normally saved in a user's bin directory as /home/username/bin/l, with a default set of command
options pre-supplied.
#!/bin/sh
LC_COLLATE=C ls -FCas "$@"
Here, the first line (a shebang) indicates which interpreter should execute the rest of the script, and the
second line produces a listing with options for file-format indicators, columns, all files (none omitted), and
sizes in blocks. LC_COLLATE=C sets the collation order so that upper and lower case are not folded together
and dotfiles are not intermixed with normal filenames (a side effect of ignoring punctuation in the names;
dotfiles are usually only shown if an option like -a is used). The "$@" causes any parameters given to
l to pass through as parameters to ls, so that all of the normal options and other syntax known to ls can
still be used.
The user could then simply use l for the most commonly used short listing.
Another example of a shell script that could be used as a shortcut would be to print a list of all the files
and directories within a given directory.
#!/bin/sh

clear
ls -al
In this case, the shell script would start with its normal starting line of #!/bin/sh. Following this, the
script executes the command clear which clears the terminal of all text before going to the next line.
The following line provides the main function of the script: the ls -al command lists the files and
directories in the directory from which the script is being run. The ls command options can be
changed to reflect the needs of the user.
Note: If an implementation does not have the clear command, try using the clr command instead.
Batch jobs[edit]
Shell scripts allow several commands that would be entered manually at a command-line interface to be
executed automatically, and without having to wait for a user to trigger each stage of the sequence. For
example, in a directory with three C source code files, rather than manually running the four commands
required to build the final program from them, one could instead create a C shell script, here named
build and kept in the directory with them, which would compile them automatically:
#!/bin/csh
echo compiling...
cc -c foo.c
cc -c bar.c
cc -c qux.c
cc -o myprog foo.o bar.o qux.o
echo done.
The script would allow a user to save the file being edited, pause the editor, and then just run ./build to
create the updated program, test it, and then return to the editor. Since the 1980s or so, however, scripts of
this type have been replaced with utilities like make which are specialized for building programs.
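As a sketch of the replacement (GNU make syntax assumed, file names as above), the same build expressed for make rebuilds only the object files whose sources have changed, which is the main advantage over rerunning every command in the script:

```make
# Hypothetical Makefile for the same three-file program.
myprog: foo.o bar.o qux.o
	cc -o myprog foo.o bar.o qux.o

%.o: %.c
	cc -c $<
```

Running make after editing only bar.c recompiles bar.o and relinks, leaving foo.o and qux.o untouched.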
Generalization[edit]
Simple batch jobs are not unusual for isolated tasks, but using shell loops, tests, and variables provides
much more flexibility to users. A Bash script to convert JPEG images to PNG images, where the
image names are provided on the command line (possibly via wildcards) instead of each being listed
within the script, can be created with this file, typically saved as
/home/username/bin/jpg2png
#!/bin/bash
for jpg; do                              # use $jpg in place of each filename given, in turn
    png="${jpg%.jpg}.png"                # construct the PNG filename by replacing .jpg with .png
    echo converting "$jpg" ...           # output status info to the user running the script
    if convert "$jpg" jpg.to.png ; then  # use the convert program (common in Linux) to create the PNG in a temp file
        mv jpg.to.png "$png"             # if it worked, rename the temporary PNG image to the correct name
    else                                 # ...otherwise complain and exit from the script
        echo 'jpg2png: error: failed output saved in "jpg.to.png".' >&2
        exit 1
    fi                                   # the end of the "if" test construct
done                                     # the end of the "for" loop
echo all conversions successful          # tell the user the good news
exit 0
The jpg2png command can then be run on an entire directory full of JPEG images with just
/home/username/bin/jpg2png *.jpg
Verisimilitude[edit]
A key feature of shell scripts is that the invocation of their interpreters is handled as a core operating
system feature. So rather than a user's shell only being able to execute scripts in that shell's language, or a
script only having its interpreter directive handled correctly if it was run from a shell (both of which were
limitations in the early Bourne shell's handling of scripts), shell scripts are set up and executed by the OS
itself. A modern shell script is not just on the same footing as system commands, but rather many system
commands are actually shell scripts (or more generally, scripts, since some of them are not interpreted by
a shell, but instead by Perl, Python, or some other language). This extends to returning exit codes like
other system utilities to indicate success or failure, and allows them to be called as components of larger
programs regardless of how those larger tools are implemented.
Like standard system commands, shell scripts classically omit any kind of filename extension unless
intended to be read into a running shell through a special mechanism for this purpose (such as sh's "."
command, or csh's "source").
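A minimal sketch of that mechanism (the file name settings.sh is illustrative): because the file is read into the current shell rather than run as a child process, its variable assignments survive after it finishes.

```shell
# Create a small settings file meant to be sourced, not executed:
cat > settings.sh <<'EOF'
PROJECT_DIR=/tmp/myproject
EOF

# "." (sh) or "source" (bash/csh) reads it into the *current* shell,
# so the variable persists here afterwards:
. ./settings.sh
echo "$PROJECT_DIR"
```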
Programming[edit]
Many modern shells also supply various features usually found only in more sophisticated general-
purpose programming languages, such as control-flow constructs, variables, comments, arrays,
subroutines, and so on. With these sorts of features available, it is possible to write reasonably
sophisticated applications as shell scripts. However, they are still limited by the fact that most shell
languages have little or no support for data typing systems, classes, threading, complex math, and other
common full language features, and are also generally much slower than compiled code or interpreted
languages written with speed as a performance goal.
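As a sketch of those features together in Bash (the names are illustrative): an array, a function acting as a subroutine, a control-flow loop, and integer arithmetic.

```shell
#!/bin/bash
scores=(70 85 90)              # an array

total=0
sum() {                        # a shell function (subroutine)
    local n
    for n in "$@"; do          # a control-flow construct
        total=$(( total + n )) # integer arithmetic
    done
}

sum "${scores[@]}"
echo "total=$total"
```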
Other scripting languages[edit]
Main article: scripting language
Many powerful scripting languages have been introduced for tasks that are too large or complex to be
comfortably handled with ordinary shell scripts, but for which the advantages of a script are desirable and
the development overhead of a full-blown, compiled programming language would be disadvantageous.
The specifics of what separates scripting languages from high-level programming languages are a frequent
source of debate, but generally speaking a scripting language is one that requires an interpreter.
Life cycle[edit]
Shell scripts often serve as an initial stage in software development, and are often subject to conversion
later to a different underlying implementation, most commonly being converted to Perl, Python, or C. The
interpreter directive allows the implementation detail to be fully hidden inside the script, rather than being
exposed as a filename extension, and provides for seamless reimplementation in different languages with
no impact on end users.
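A small sketch of that property (greet is a hypothetical command name): callers only ever see the name greet, so later swapping the interpreter directive inside the file changes the implementation language without affecting any user.

```shell
# Install a hypothetical command; users never see what is inside it.
cat > greet <<'EOF'
#!/bin/sh
echo "hello from a shell script"
EOF
chmod +x greet

./greet    # reimplementing it later only means editing the #! line
```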
Advantages and disadvantages[edit]
Perhaps the biggest advantage of writing a shell script is that the commands and syntax are exactly the
same as those directly entered at the command line. The programmer does not have to switch to a totally
different syntax, as they would if the script were written in a different language, or if a compiled language
was used.
Often, writing a shell script is much quicker than writing the equivalent code in other programming
languages. The many advantages include easy program or file selection, quick start, and interactive
debugging. A shell script can be used to provide a sequencing and decision-making linkage around
existing programs, and for moderately sized scripts the absence of a compilation step is an advantage.
Interpretive running makes it easy to write debugging code into a script and re-run it to detect and fix
bugs. Non-expert users can use scripting to tailor the behavior of programs, and shell scripting provides
some limited scope for multiprocessing.
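The multiprocessing mentioned is typically limited to launching commands in the background with & and collecting them with wait; a sketch:

```shell
#!/bin/sh
# Three commands started concurrently; "wait" blocks until all finish,
# so the whole block takes about one second rather than three.
sleep 1 &
sleep 1 &
sleep 1 &
wait
echo "all background jobs finished"
```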
On the other hand, shell scripting is prone to costly errors. Inadvertent typing errors such as rm -rf * /
(instead of the intended rm -rf */) are folklore in the Unix community; a single extra space converts the
command from one that deletes everything in the subdirectories to one that deletes everything, and
also tries to delete everything in the root directory. Similar problems can transform cp and mv into
dangerous weapons, and misuse of the > redirect can delete the contents of a file. This is made more
problematic by the fact that many UNIX commands differ in name by only one letter: cp, cd, dd, df, etc.
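A defensive sketch, not from the article: the POSIX ${var:?} expansion aborts a command when the variable is unset or empty, which guards rm -rf against the empty- or mistyped-variable disaster described above.

```shell
#!/bin/sh
dir=""                 # imagine a typo or unset variable left this empty
# Without the guard, rm -rf "$dir"/* would expand to rm -rf /* ...
# With it, the expansion itself fails and rm is never run:
( rm -rf "${dir:?dir is not set}"/* ) 2>/dev/null \
    || echo "refused: dir was empty, nothing deleted"
```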
Another significant disadvantage is the slow execution speed and the need to launch a new process for
almost every shell command executed. When a script's job can be accomplished by setting up a pipeline
in which efficient filter commands perform most of the work, the slowdown is mitigated, but a complex
script is typically several orders of magnitude slower than a conventional compiled program that performs
an equivalent task.
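A sketch of that mitigation: the shell below only wires the pipeline together, while the compiled filters tr, sort, and wc do the actual work on the data.

```shell
#!/bin/sh
printf 'to be or not to be\n' > words.txt
# Count distinct words: each stage is an efficient compiled filter program.
tr -s '[:space:]' '\n' < words.txt | sort -u | wc -l
```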
There are also compatibility problems between different platforms. Larry Wall, creator of Perl, famously
wrote that "It is easier to port a shell than a shell script."
Similarly, more complex scripts can run into the limitations of the shell scripting language itself; the
limits make it difficult to write quality code, and extensions by various shells to ameliorate problems with
the original shell language can make problems worse.[2]

Many disadvantages of using some script languages are caused by design flaws within the language
syntax or implementation, and are not necessarily imposed by the use of a text-based command line; there
are a number of shells which use other shell programming languages or even full-fledged languages like
Scsh (which uses Scheme).
See also[edit]
Glue code
Interpreter directive
Shebang symbol (#!)
Unix shells
Windows PowerShell
Windows Script Host
References[edit]
1. Jump up ^ Kernighan, Brian W.; Pike, Rob (1984), "3. Using the Shell", The UNIX Programming
Environment, Prentice Hall, Inc., p. 94, ISBN 0-13-937699-2, "The shell is actually a programming
language: it has variables, loops, decision-making, and so on."
2. Jump up ^ "Csh Programming Considered Harmful"
External links[edit]

Wikibooks has a book on the topic of: Ad Hoc Data Analysis From The Unix Command Line
An Introduction To Shell Programming by Greg Goebel
UNIX / Linux shell scripting tutorial by Steve Parker
Shell Scripting Primer (Apple)
What to watch out for when writing portable shell scripts by Peter Seebach
Free Unix Shell scripting books


Script
From Wikipedia, the free encyclopedia
For computer scripts that can be used with Wikipedia, see Wikipedia:Scripts.

Look up script, scripted, scripting, or scripts in Wiktionary, the free dictionary.
Script or scripting may refer to:
Contents
1 Computing
2 Media
3 Music
4 Psychology
5 Places
6 Writing systems
7 Medicine
8 Other
9 See also
Computing[edit]
Scripting languages for computers
Script (computing), a small non-compiled program written for a scripting language or command
interpreter
script (Unix), a Unix command that records a tty session
SCRIPT (markup), a text formatting language developed by IBM
Scripts (artificial intelligence), a structure for representing procedural knowledge
Computer programming, also referred to as "scripting"
Media[edit]
Script (comics), the dialogue for a comic book or comic strip
Screenplay, the dialog and instructions for a film, known as a "script"
Manuscript, any written document often story based and unpublished
Script, a defunct literary magazine edited by Rob Wagner
Scripted sequence, a predefined series of events in a video game triggered by player location or
actions
Music[edit]
The Script, an Irish band
o The Script (album), an album by The Script
Scripted, the debut album by American rock band Icon for Hire
Psychology[edit]
Behavioral script, a sequence of expected behaviors
Life (or childhood) script in transactional analysis
Places[edit]
SCRIPT (AHRC Centre) Scottish Centre for Research in Intellectual Property and Technologies
Writing systems[edit]
A distinctive writing system, based on defined elements or symbols generally known as a "script"
Script (Unicode), collections of letters and other written signs used to represent textual
information in writing systems, each assigned to a Unicode number
Script typeface, having characteristics of handwriting
Medicine[edit]
A common abbreviation or slang-type usage of medical prescription
SCRIPT (medicine), a standard for the electronically transmitted medical prescriptions in the
United States
Other[edit]
Scrip, any currency substitute
Scripted (company), an online marketplace that allows businesses to hire freelance writers for
blogs, articles, and bulk social media posts
See also[edit]
All pages with titles containing "Script"
All pages beginning with "Script"



Scripting language
From Wikipedia, the free encyclopedia

A scripting language or script language is a programming language that supports scripts, programs
written for a special run-time environment that can interpret (rather than compile) and automate the
execution of tasks that could alternatively be executed one-by-one by a human operator. Environments
that can be automated through scripting include software applications, web pages within a web browser,
the shells of operating systems (OS), and embedded systems. A scripting language can be viewed as a
domain-specific language for a particular environment; in the case of scripting an application, this is also
known as an extension language. Scripting languages are also sometimes referred to as very high-level
programming languages, as they operate at a high level of abstraction, or as control languages,
particularly for job control languages on mainframes.
The term "scripting language" is also used loosely to refer to dynamic high-level general-purpose
languages, such as Perl,[1] Tcl, and Python,[2] with the term "script" often used for small programs (up to a
few thousand lines of code) in such languages, or in domain-specific languages such as the text-
processing languages sed and AWK. Some of these languages were originally developed for use within a
particular environment, and later developed into portable domain-specific or general-purpose languages.
Conversely, many general-purpose languages have dialects that are used as scripting languages. This
article discusses scripting languages in the narrow sense of languages for a specific environment;
dynamic, general-purpose, and high-level languages are discussed at those articles.
The spectrum of scripting languages ranges from very small and highly domain-specific languages to
general-purpose programming languages used for scripting. Standard examples of scripting languages for
specific environments include: Bash, for the Unix or Unix-like operating systems; ECMAScript
(JavaScript), for web browsers; and Visual Basic for Applications, for Microsoft Office applications. Lua
is a language designed and widely used as an extension language. Python is a general-purpose language
that is also commonly used as an extension language, while ECMAScript is still primarily a scripting
language for web browsers, but is also used as a general-purpose language. The Emacs Lisp dialect of
Lisp (for the Emacs editor) and the Visual Basic for Applications dialect of Visual Basic are examples of
scripting language dialects of general-purpose languages. Some game systems, notably the Trainz
franchise of railroad simulators, have been extensively extended in functionality by scripting extensions.
Contents
1 Characteristics
2 History
3 Types of scripting languages
o 3.1 Glue languages
o 3.2 Job control languages and shells
o 3.3 GUI scripting
o 3.4 Application-specific languages
o 3.5 Extension/embeddable languages
4 See also
5 References
6 External links
Characteristics[edit]
In principle any language can be used as a scripting language, given libraries or bindings for a specific
environment. Formally speaking, "scripting" is a property of the primary implementations and uses of a
language, hence the ambiguity about whether a language "is" a scripting language for languages with
multiple implementations. However, many languages are not very suited for use as scripting languages
and are rarely if ever used as such.
Typically, scripting languages are intended to be very fast to pick up and author programs in. This
generally implies relatively simple syntax and semantics; for example, it is uncommon to use Java as a
scripting language due to its lengthy syntax and restrictive rules about which classes exist in which files,
in contrast to Python, where it is possible to briefly define some functions in a file. A scripting language is
usually interpreted from source code or bytecode.[3] By contrast, the software environment the scripts are
written for is typically written in a compiled language and distributed in machine-code form. Scripting
languages may be designed for use by end users of a program (end-user development) or may be only
for internal use by developers, so they can write portions of the program in the scripting language.
Scripting languages abstract their users from variable types and memory management.
Scripts are often created or modified by the person executing them,[4] though they are also often
distributed, such as when large portions of games are written in a scripting language. In many
implementations a script or portions of one may be executed interactively on a command line.
History[edit]
Early mainframe computers (in the 1950s) were non-interactive, instead using batch processing. IBM's
Job Control Language (JCL) is the archetype of languages used to control batch processing.[5]
The first interactive shells were developed in the 1960s to enable remote operation of the first time-
sharing systems, and these used shell scripts, which controlled running computer programs within a
computer program, the shell. Calvin Mooers, in his TRAC language, is generally credited with inventing
command substitution, the ability to embed commands in scripts that, when interpreted, insert a character
string into the script.[6] Multics calls these active functions.[7] Louis Pouzin wrote an early processor for
command scripts called RUNCOM for CTSS around 1964. Stuart Madnick at MIT wrote a scripting
language for IBM's CP/CMS in 1966; he originally called this processor COMMAND, later naming it
EXEC.[8] Multics included an offshoot of CTSS RUNCOM, also called RUNCOM.[9] EXEC was
eventually replaced by EXEC 2 and REXX.
Languages such as Tcl and Lua were specifically designed as general purpose scripting languages that
could be embedded in any application. Other languages such as Visual Basic for Applications (VBA)
provided strong integration with the automation facilities of an underlying system. Embedding of such
general purpose scripting languages instead of developing a new language for each application also had
obvious benefits, relieving the application developer of the need to code a language translator from
scratch and allowing the user to apply skills learned elsewhere.
Some software incorporates several different scripting languages. Modern web browsers typically provide
a language for writing extensions to the browser itself, and several standard embedded languages for
controlling the browser, including JavaScript (a dialect of ECMAScript) or XUL.
Types of scripting languages[edit]
Glue languages[edit]

Scripting is often contrasted with system programming, as in Ousterhout's dichotomy or "programming in
the large and programming in the small". In this view, scripting is particularly glue code, connecting
system components, and a language specialized for this purpose is a glue language. Pipelines and shell
scripting are archetypal examples of glue languages, and Perl was initially developed to fill this same
role. Web development can be considered a use of glue languages, interfacing between a database and
web server. The characterization of glue languages as scripting languages is ambiguous, however: if a
substantial amount of logic is part of the "glue" code, it is better characterized as simply another software
component.
A glue language is a programming language (usually an interpreted scripting language) that is designed
or suited for writing glue code, that is, code that connects software components. They are especially useful
for writing and maintaining:
Custom commands for a command shell
Smaller programmes than those that are better implemented in a compiled language
"Wrapper" programmes for executables, like a batch file that moves or manipulates files and does
other things with the operating system before or after running an application like a word
processor, spreadsheet, data base, assembler, compiler, etc.
Scripts that may change
Rapid prototypes of a solution eventually implemented in another, usually compiled, language.
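As a sketch of the "wrapper" use listed above (all file names illustrative), a script that does housekeeping before and after the step it wraps:

```shell
#!/bin/sh
printf 'draft\n' > report.txt
cp report.txt report.txt.bak               # housekeeping before the real step
tr a-z A-Z < report.txt.bak > report.txt   # the wrapped operation
# housekeeping after: note whether anything actually changed
cmp -s report.txt report.txt.bak || echo "report.txt changed"
```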
Glue language examples:
Erlang
Unix Shell scripts (ksh, csh, bash, sh and others)
Windows PowerShell
ecl
DCL
Scheme
JCL
m4
VBScript
JScript and JavaScript
AppleScript
Python
Ruby
Lua
Tcl
Perl
PHP
Pure
REXX
XSLT
Macro languages exposed to operating system or application components can serve as glue languages.
These include Visual Basic for Applications, WordBasic, LotusScript, CorelScript, PerfectScript,
Hummingbird Basic, QuickScript, SaxBasic, and WinWrap Basic. Other tools like awk can also be
considered glue languages, as can any language implemented by an ActiveX WSH engine (VBScript,
JScript and VBA by default in Windows and third-party engines including implementations of Rexx, Perl,
Tcl, Python, XSLT, Ruby, Delphi, &c). A majority of applications can access and use operating system
components via the object models or their own functions.
Other devices like programmable calculators may also have glue languages; the operating systems of
PDAs such as Windows CE may have available native or third-party macro tools that glue applications
together, in addition to implementations of common glue languages (including Windows NT, MS-DOS
and some Unix shells, Rexx, PHP, and Perl). Depending upon the OS version, WSH and the default script
engines (VBScript and JScript) are available.
Programmable calculators can be programmed in glue languages in three ways. For example, the Texas
Instruments TI-92 can, by factory default, be programmed with a command-script language; the inclusion of
the scripting and glue language Lua in the TI-NSpire series of calculators could be seen as a successor to
this. The primary on-board high-level programming languages of most graphing calculators (most often
Basic variants, sometimes Lisp derivatives, and more uncommonly C derivatives) in many cases can glue
together calculator functions, such as graphs, lists, and matrices. Third-party implementations of more
comprehensive Basic variants that may be closer to the glue languages listed in this article are
available, and attempts to implement Perl, Rexx, or various operating system shells on the TI and HP
graphing calculators have also been mentioned. PC-based C cross-compilers for some of the TI and HP
machines, used in conjunction with tools that convert between C and Perl, Rexx, awk, as well as shell
scripts to Perl, and VBScript to and from Perl, make it possible to write a programme in a glue language for
eventual implementation (as a compiled programme) on the calculator.
Job control languages and shells[edit]
Main article: Shell script
A major class of scripting languages has grown out of the automation of job control, which relates to
starting and controlling the behavior of system programs. (In this sense, one might think of shells as being
descendants of IBM's JCL, or Job Control Language, which was used for exactly this purpose.) Many of
these languages' interpreters double as command-line interpreters such as the Unix shell or the MS-DOS
COMMAND.COM. Others, such as AppleScript, offer the use of English-like commands to build scripts.
GUI scripting[edit]
With the advent of graphical user interfaces, a specialized kind of scripting language emerged for
controlling a computer. These languages interact with the same graphic windows, menus, buttons, and so
on that a human user would. They do this by simulating the actions of a user. These languages are
typically used to automate user actions. Such languages are also called "macros" when control is through
simulated key presses or mouse clicks.
These languages could in principle be used to control any GUI application, but in practice their use is
limited because it needs support from the application and from the operating system. There are a few
exceptions to this limitation: some GUI scripting languages are based on recognizing graphical
objects from their display screen pixels, and these do not depend on support from the operating
system or application.
Application-specific languages[edit]
Many large application programs include an idiomatic scripting language tailored to the needs of the
application user. Likewise, many computer game systems use a custom scripting language to express the
programmed actions of non-player characters and the game environment. Languages of this sort are
designed for a single application; and, while they may superficially resemble a specific general-purpose
language (e.g. QuakeC, modeled after C), they have custom features that distinguish them. Emacs Lisp,
while a fully formed and capable dialect of Lisp, contains many special features that make it most useful
for extending the editing functions of Emacs. An application-specific scripting language can be viewed as
a domain-specific programming language specialized to a single application.
Extension/embeddable languages[edit]
A number of languages have been designed for the purpose of replacing application-specific scripting
languages by being embeddable in application programs. The application programmer (working in C or
another systems language) includes "hooks" where the scripting language can control the application.
These languages may be technically equivalent to an application-specific extension language but when an
application embeds a "common" language, the user gets the advantage of being able to transfer skills from
application to application. A more generic alternative is simply to provide a library (often a C library) that
a general-purpose language can use to control the application, without modifying the language for the
specific domain.
JavaScript began as and primarily still is a language for scripting inside web browsers; however, the
standardization of the language as ECMAScript has made it popular as a general purpose embeddable
language. In particular, the Mozilla implementation SpiderMonkey is embedded in several environments
such as the Yahoo! Widget Engine. Other applications embedding ECMAScript implementations include
the Adobe products Adobe Flash (ActionScript) and Adobe Acrobat (for scripting PDF files).
Tcl was created as an extension language but has come to be used more frequently as a general purpose
language in roles similar to Python, Perl, and Ruby. On the other hand, Rexx was originally created as a
job control language, but is widely used as an extension language as well as a general purpose language.
Perl is a general-purpose language, but had the Oraperl (1990) dialect, consisting of a Perl 4 binary with
the Oracle Call Interface compiled in. This has however since been replaced by a library (Perl module),
DBD::Oracle.[10][11]

Other complex and task-oriented applications may incorporate and expose an embedded programming
language to allow their users more control and give them more functionality than can be available through
a user interface, no matter how sophisticated. For example, the Autodesk Maya 3D authoring tools embed
the MEL scripting language, while Blender uses Python to fill this role.
Some other types of applications that need faster feature addition or tweak-and-run cycles (e.g. game
engines) also use an embedded language. During the development, this allows them to prototype features
faster and tweak more freely, without the need for the user to have intimate knowledge of the inner
workings of the application or to rebuild it after each tweak (which can take a significant amount of time).
The scripting languages used for this purpose range from the more common and more famous Lua and
Python to lesser-known ones such as AngelScript and Squirrel.
Ch is another C compatible scripting option for the industry to embed into C/C++ application programs.
See also[edit]
Architecture description language
Build automation
Interpreter directive / Shebang (Unix)
Templating language
References[edit]
1. Jump up ^ Sheppard, Doug (2000-10-16). "Beginner's Introduction to Perl". dev.perl.org. Retrieved 2011-
01-08.
2. Jump up ^ Programming is Hard, Let's Go Scripting..., Larry Wall, December 6, 2007
3. Jump up ^ Brown, Vicki. "Scripting Languages". Retrieved 2009-07-22.
4. Jump up ^ IEEE Computer, 2008, In praise of scripting, Ronald Loui author
5. Jump up ^ IBM Corporation (1967). IBM System/360 Operating System Job Control Language (C28-
6529-4).
6. Jump up ^ Mooers, Calvin. "TRAC, A Procedure-Describing Language for the Reactive Typewriter".
Archived from the original on 2001-04-25. Retrieved Mar 9, 2012.
7. Jump up ^ Van Vleck(ed.), Thomas. "Multics Glossary -A- (active function)". Retrieved Mar 9, 2012.
8. Jump up ^ Varian, Melinda. "VM AND THE VM COMMUNITY: Past, Present, and Future". Retrieved
Mar 9, 2012.
9. Jump up ^ Van Vleck, Thomas(ed.). "Multics Glossary -R- (RUNCOM)". Retrieved Mar 9, 2012.
10. Jump up ^ Oraperl, CPAN
11. Jump up ^ Perl, Underground Oracle FAQ
External links[edit]
Patterns for Scripted Applications at the Wayback Machine (archived October 10, 2004)
Glue code
From Wikipedia, the free encyclopedia
In programming, glue code is code that does not contribute any functionality towards meeting the
program's requirements, but instead serves solely to "glue together" different parts of code that would not
otherwise be compatible. Glue code often appears in code written to let existing libraries or programs
interoperate, as in language bindings or foreign function interfaces like the Java Native Interface, or when
mapping objects to a database using object-relational mapping, or when integrating two or more
commercial off-the-shelf programs. Glue code may be written in the same language as the code it is
gluing together, or in a separate glue language.
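The adapter below is a minimal, self-contained sketch of such glue code; all of the names (Thermometer, LegacyThermometer, ThermometerAdapter) are invented for illustration. The adapter contributes no functionality of its own; it only converts between two interfaces that could not otherwise interoperate.

```java
// Modern code expects temperatures through this interface.
interface Thermometer {
    double celsius();
}

// Legacy code reports Fahrenheit and cannot be changed.
class LegacyThermometer {
    double readFahrenheit() { return 77.0; }
}

// The glue code: no new functionality, only conversion between
// the two incompatible interfaces.
class ThermometerAdapter implements Thermometer {
    private final LegacyThermometer legacy;
    ThermometerAdapter(LegacyThermometer legacy) { this.legacy = legacy; }
    public double celsius() {
        return (legacy.readFahrenheit() - 32.0) * 5.0 / 9.0;
    }
}

public class GlueDemo {
    public static void main(String[] args) {
        Thermometer t = new ThermometerAdapter(new LegacyThermometer());
        System.out.println(t.celsius()); // prints 25.0
    }
}
```

If the legacy class were in another language, the adapter's body would instead call through a foreign function interface, but its role as glue would be the same.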
See also[edit]
Scripting language
Shell script
SWIG
Lua (programming language)
Glue logic
WinGlue
Build automation
Build automation is the act of scripting or automating a wide variety of tasks that software developers do
in their day-to-day activities, including:
compiling computer source code into binary code
packaging binary code
running tests
deployment to production systems
creating documentation and/or release notes
Contents
[hide]
1 History
2 New breed of tools
3 Advanced build automation
4 Advantages
5 Types
6 Makefile
7 Requirements of a build system
8 See also
9 References
History[edit]
Historically, developers used build automation to call compilers and linkers from inside a build script
rather than issuing the compiler calls from the command line. It is simple to use the command line to pass
a single source module to a compiler and then to a linker to create the final deployable object. However,
when attempting to compile and link many source code modules in a particular order, the manual command
line process is not a reasonable solution. The make scripting language offered a better alternative: a
build script could be written to call, in series, the compile and link steps needed to build a software
application. Tools in the make family also offered features such as makedepend, which allowed some source
code dependency management as well as incremental build processing. This was the beginning of build
automation, and its primary focus was on automating the calls to the compilers and linkers. As the build
process grew more complex, developers began adding pre- and post-actions around the compiler calls, such as
checking sources out of version control before the build and copying deployable objects to a test location
afterwards. The term "build automation" now includes managing these pre- and post-compile-and-link
activities as well as the compile and link activities themselves.
New breed of tools[edit]
In recent years, build management tools have provided further relief in automating the build process. Both
commercial and open source tools are available to perform more automated build and workflow processing.
Some tools focus on automating the pre- and post-steps around calling the build scripts, while others go
beyond that and also streamline the actual compile and linker calls without much manual scripting. These
tools are particularly useful for continuous integration builds, where frequent calls to the compile
process and incremental build processing are required.
Advanced build automation[edit]
Advanced build automation offers remote agent processing for distributed builds and/or distributed
processing. The term "distributed builds" means that the actual calls to the compiler and linkers can be
served out to multiple locations for improving the speed of the build. This term is often confused with
"distributed processing".
Distributed processing means that each step in a process or workflow can be sent to a different machine
for execution. For example, a post step to the build may require the execution of multiple test scripts on
multiple machines. Distributed processing can send the different test scripts to different machines.
Distributed processing is not distributed builds. Distributed processing cannot take a make, ant or maven
script, break it up and send it to different machines for compiling and linking.
The distributed build process must have the machine intelligence to understand the source code
dependencies in order to send the different compile and link steps to different machines. A build
automation tool must be able to manage these dependencies in order to perform distributed builds. Some
build tools can discover these relationships programmatically (Rational ClearMake distributed,[1] Electric
Cloud ElectricAccelerator[2]), while others depend on user-configured dependencies (Platform LSF
lsmake[3]).
Build automation that can sort out source code dependency relationships can also be configured to run the
compile and link activities in a parallelized mode. This means that the compiler and linkers can be called
in multi-threaded mode using a machine that is configured with more than one core.
Not all build automation tools can perform distributed builds. Most only provide distributed processing
support. In addition, most products that do support distributed builds can only handle C or C++. Build
automation products that support distributed processing are often based on make and many do not support
Maven or Ant.
The deployment task may require configuration of external systems, including middleware. In cloud
computing environments the deployment step may even involve creation of virtual servers to deploy build
artifacts into.[4]
Advantages[edit]
The advantages of build automation to software development projects include:
Improve product quality
Accelerate the compile and link processing
Eliminate redundant tasks
Minimize "bad builds"
Eliminate dependencies on key personnel
Have history of builds and releases in order to investigate issues
Save time and money - because of the reasons listed above.[5]
Types[edit]
On-Demand automation such as a user running a script at the command line
Scheduled automation such as a continuous integration server running a nightly build
Triggered automation such as a continuous integration server running a build on every commit
to a version control system.
Makefile[edit]
One specific form of build automation is the automatic generation of Makefiles. See List of build
automation software.
Requirements of a build system[edit]
Basic requirements:
1. Frequent or overnight builds to catch problems early.[6][7][8]
2. Support for Source Code Dependency Management
3. Incremental build processing
4. Reporting that traces source to binary matching
5. Build acceleration
6. Extraction and reporting on build compile and link usage
Optional requirements:[9]
1. Generate release notes and other documentation such as help pages
2. Build status reporting
3. Test pass or fail reporting
4. Summary of the features added/modified/deleted with each new build
See also[edit]
Continuous integration
Continuous delivery
List of build automation software
Product family engineering
Release engineering
Software configuration management
Unit testing
References[edit]
1. Jump up ^ Dr. Dobb's Distributed Loadbuilds, retrieved 2009-04-13
2. Jump up ^ Dr. Dobb's Take My Build, Please
3. Jump up ^ LSF User's Guide - Using lsmake, retrieved 2009-04-13
4. Jump up ^ Amies, Alex; Zou P X; Wang Yi S (29 Oct 2011). "Automate development and management of
cloud virtual machines". IBM developerWorks (IBM).
5. Jump up ^ http://www.denverjug.org/meetings/files/200410_automation.pdf
6. Jump up ^ http://freshmeat.net/articles/view/392/
7. Jump up ^ http://www.ibm.com/developerworks/java/library/j-junitmail/
8. Jump up ^ http://buildbot.net/trac
9. Jump up ^ http://www.cmcrossroads.com/content/view/12525/120/
Notes
Mike Clark: Pragmatic Project Automation, The Pragmatic Programmers ISBN 0-9745140-3-9
Retrieved from "http://en.wikipedia.org/w/index.php?title=Build_automation&oldid=603747231"
Unit testing
In computer programming, unit testing is a method by which individual units of source code, sets of one
or more computer program modules together with associated control data, usage procedures, and operating
procedures, are tested to determine if they are fit for use.[1] Intuitively, one can view a unit as the
smallest testable part of an application. In procedural programming, a unit could be an entire module, but
it is more commonly an individual function or procedure. In object-oriented programming, a unit is often
an entire interface, such as a class, but could be an individual method.[2] Unit tests are short code
fragments[3] created by programmers or occasionally by white box testers during the development process.
Ideally, each test case is independent from the others. Substitutes such as method stubs, mock objects,[4]
fakes, and test harnesses can be used to assist testing a module in isolation. Unit tests are typically
written and run by software developers to ensure that code meets its design and behaves as intended.
Contents
[hide]
1 Benefits
o 1.1 Find problems early
o 1.2 Facilitates change
o 1.3 Simplifies integration
o 1.4 Documentation
o 1.5 Design
2 Separation of interface from implementation
3 Parameterized unit testing
4 Unit testing limitations
5 Applications
o 5.1 Extreme programming
o 5.2 Techniques
o 5.3 Unit testing frameworks
o 5.4 Language-level unit testing support
6 See also
7 Notes
8 External links
Benefits[edit]
The goal of unit testing is to isolate each part of the program and show that the individual parts are
correct.[1] A unit test provides a strict, written contract that the piece of code must satisfy. As a
result, it affords several benefits.
Find problems early[edit]
Unit tests find problems early in the development cycle.
In test-driven development (TDD), which is frequently used in both Extreme Programming and Scrum,
unit tests are created before the code itself is written. When the tests pass, that code is considered
complete. The same unit tests are run against that function frequently as the larger code base is
developed, either as the code is changed or via an automated process with the build. If the unit tests
fail, it is
considered to be a bug either in the changed code or the tests themselves. The unit tests then allow the
location of the fault or failure to be easily traced. Since the unit tests alert the development team of the
problem before handing the code off to testers or clients, it is still early in the development process.
Facilitates change[edit]
Unit testing allows the programmer to refactor code at a later date, and make sure the module still works
correctly (e.g., in regression testing). The procedure is to write test cases for all functions and methods so
that whenever a change causes a fault, it can be quickly identified.
Readily available unit tests make it easy for the programmer to check whether a piece of code is still
working properly.
In continuous unit testing environments, through the inherent practice of sustained maintenance, unit tests
will continue to accurately reflect the intended use of the executable and code in the face of any change.
Depending upon established development practices and unit test coverage, up-to-the-second accuracy can
be maintained.
Simplifies integration[edit]
Unit testing may reduce uncertainty in the units themselves and can be used in a bottom-up testing style
approach. By testing the parts of a program first and then testing the sum of its parts, integration
testing becomes much easier.[citation needed]
An elaborate hierarchy of unit tests does not equal integration testing. Integration with peripheral units
should be included in integration tests, but not in unit tests.[citation needed] Integration testing
typically still relies heavily on humans testing manually; high-level or global-scope testing can be
difficult to automate, such that manual testing often appears faster and cheaper.[citation needed]
Documentation[edit]
Unit testing provides a sort of living documentation of the system. Developers looking to learn what
functionality is provided by a unit and how to use it can look at the unit tests to gain a basic understanding
of the unit's interface (API).
Unit test cases embody characteristics that are critical to the success of the unit. These characteristics can
indicate appropriate/inappropriate use of a unit as well as negative behaviors that are to be trapped by the
unit. A unit test case, in and of itself, documents these critical characteristics, although many software
development environments do not rely solely upon code to document the product in development.
By contrast, ordinary narrative documentation is more susceptible to drifting from the implementation of
the program and will thus become outdated (e.g., design changes, feature creep, relaxed practices in
keeping documents up-to-date).
Design[edit]
When software is developed using a test-driven approach, the combination of writing the unit test to
specify the interface plus the refactoring activities performed after the test is passing, may take the place
of formal design. Each unit test can be seen as a design element specifying classes, methods, and
observable behaviour. The following Java example will help illustrate this point.
Here is a set of test cases that specify a number of elements of the implementation. First, that there must
be an interface called Adder, and an implementing class with a zero-argument constructor called
AdderImpl. It goes on to assert that the Adder interface should have a method called add, with two integer
parameters, which returns another integer. It also specifies the behaviour of this method for a small range
of values over a number of test methods.
public class TestAdder {

    // can it add the positive numbers 1 and 1?
    public void testSumPositiveNumbersOneAndOne() {
        Adder adder = new AdderImpl();
        assert(adder.add(1, 1) == 2);
    }

    // can it add the positive numbers 1 and 2?
    public void testSumPositiveNumbersOneAndTwo() {
        Adder adder = new AdderImpl();
        assert(adder.add(1, 2) == 3);
    }

    // can it add the positive numbers 2 and 2?
    public void testSumPositiveNumbersTwoAndTwo() {
        Adder adder = new AdderImpl();
        assert(adder.add(2, 2) == 4);
    }

    // is zero neutral?
    public void testSumZeroNeutral() {
        Adder adder = new AdderImpl();
        assert(adder.add(0, 0) == 0);
    }

    // can it add the negative numbers -1 and -2?
    public void testSumNegativeNumbers() {
        Adder adder = new AdderImpl();
        assert(adder.add(-1, -2) == -3);
    }

    // can it add a positive and a negative?
    public void testSumPositiveAndNegative() {
        Adder adder = new AdderImpl();
        assert(adder.add(-1, 1) == 0);
    }

    // how about larger numbers?
    public void testSumLargeNumbers() {
        Adder adder = new AdderImpl();
        assert(adder.add(1234, 988) == 2222);
    }
}
In this case the unit tests, having been written first, act as a design document specifying the form and
behaviour of a desired solution, but not the implementation details, which are left for the programmer.
Following the "do the simplest thing that could possibly work" practice, the easiest solution that will
make the test pass is shown below.
interface Adder {
    int add(int a, int b);
}

class AdderImpl implements Adder {
    public int add(int a, int b) {
        return a + b;
    }
}
Unlike other diagram-based design methods, using unit-tests as a design specification has one significant
advantage. The design document (the unit-tests themselves) can be used to verify that the implementation
adheres to the design. With the unit-test design method, the tests will never pass if the developer does not
implement the solution according to the design.
It is true that unit testing lacks some of the accessibility of a diagram, but UML diagrams are now easily
generated for most modern languages by free tools (usually available as extensions to IDEs). Free tools,
like those based on the xUnit framework, outsource to another system the graphical rendering of a view
for human consumption.
Separation of interface from implementation[edit]
Because some classes may have references to other classes, testing a class can frequently spill over into
testing another class. A common example of this is classes that depend on a database: in order to test the
class, the tester often writes code that interacts with the database. This is a mistake, because a unit test
should usually not go outside of its own class boundary, and especially should not cross such
process/network boundaries because this can introduce unacceptable performance problems to the unit
test-suite. Crossing such unit boundaries turns unit tests into integration tests, and when test cases fail,
makes it less clear which component is causing the failure. See also Fakes, mocks and integration tests.
Instead, the software developer should create an abstract interface around the database queries, and then
implement that interface with their own mock object. By abstracting this necessary attachment from the
code (temporarily reducing the net effective coupling), the independent unit can be more thoroughly
tested than may have been previously achieved. This results in a higher quality unit that is also more
maintainable.
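A minimal sketch of that approach, with invented names (UserStore, InMemoryUserStore, Greeter): the unit under test depends only on an abstract interface, and the test supplies an in-memory implementation in place of the real database, so no process or network boundary is crossed.

```java
import java.util.HashMap;
import java.util.Map;

// Abstract interface around the data access the unit needs.
interface UserStore {
    String findName(int id);
}

// Test double: an in-memory stand-in for the database.
class InMemoryUserStore implements UserStore {
    private final Map<Integer, String> rows = new HashMap<>();
    void put(int id, String name) { rows.put(id, name); }
    public String findName(int id) { return rows.get(id); }
}

// Unit under test: depends on the interface, never on a real database.
class Greeter {
    private final UserStore store;
    Greeter(UserStore store) { this.store = store; }
    String greet(int id) { return "Hello, " + store.findName(id) + "!"; }
}

public class MockDemo {
    public static void main(String[] args) {
        InMemoryUserStore store = new InMemoryUserStore();
        store.put(7, "Ada");
        System.out.println(new Greeter(store).greet(7)); // prints Hello, Ada!
    }
}
```

In production code the same Greeter would be constructed with a UserStore implementation backed by real database queries; only the test wiring differs.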
Parameterized unit testing[edit]
Parameterized unit tests (PUTs) are tests that take parameters. Unlike traditional unit tests, which are
usually closed methods, PUTs take any set of parameters. PUTs have been supported by TestNG, JUnit
and various .NET test frameworks. Suitable parameters for the unit tests may be supplied manually or in
some cases are automatically generated by the test framework. Testing tools such as QuickCheck exist to
generate test inputs for PUTs.
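The idea can be sketched without any particular framework: one test body driven by an explicit table of parameter tuples. Frameworks such as JUnit and TestNG supply the table through annotations or data providers instead; the class and method names here are invented for illustration.

```java
public class ParameterizedAddTest {
    // unit under test
    static int add(int a, int b) { return a + b; }

    public static void main(String[] args) {
        // parameter table: each row is {a, b, expected}
        int[][] cases = {
            {1, 1, 2},
            {0, 0, 0},
            {-1, -2, -3},
            {1234, 988, 2222},
        };
        // one test body, run once per parameter tuple
        for (int[] c : cases) {
            int got = add(c[0], c[1]);
            if (got != c[2])
                throw new AssertionError(c[0] + "+" + c[1] + " gave " + got);
        }
        System.out.println(cases.length + " cases passed");
    }
}
```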
Unit testing limitations[edit]
Testing will not catch every error in the program, since it cannot evaluate every execution path in any but
the most trivial programs. The same is true for unit testing. Additionally, unit testing by definition only
tests the functionality of the units themselves. Therefore, it will not catch integration errors or broader
system-level errors (such as functions performed across multiple units, or non-functional test areas such
as performance). Unit testing should be done in conjunction with other software testing activities, as they
can only show the presence or absence of particular errors; they cannot prove a complete absence of
errors. In order to guarantee correct behavior for every execution path and every possible input, and
ensure the absence of errors, other techniques are required, namely the application of formal methods to
proving that a software component has no unexpected behavior.
Software testing is a combinatorial problem. For example, every boolean decision statement requires at
least two tests: one with an outcome of "true" and one with an outcome of "false". As a result, for every
line of code written, programmers often need 3 to 5 lines of test code.[5] This obviously takes time and
its investment may not be worth the effort. There are also many problems that cannot easily be tested at
all, for example those that are nondeterministic or involve multiple threads. In addition, code for a unit
test is likely to be at least as buggy as the code it is testing. Fred Brooks in The Mythical Man-Month
quotes: "Never go to sea with two chronometers; take one or three."[6] Meaning, if two chronometers
contradict, how do you know which one is correct?
Another challenge related to writing the unit tests is the difficulty of setting up realistic and useful
tests. It is necessary to create relevant initial conditions so the part of the application being tested
behaves like part of the complete system. If these initial conditions are not set correctly, the test will
not be exercising the code in a realistic context, which diminishes the value and accuracy of unit test
results.[7]
To obtain the intended benefits from unit testing, rigorous discipline is needed throughout the software
development process. It is essential to keep careful records not only of the tests that have been performed,
but also of all changes that have been made to the source code of this or any other unit in the software.
Use of a version control system is essential. If a later version of the unit fails a particular test that it had
previously passed, the version-control software can provide a list of the source code changes (if any) that
have been applied to the unit since that time.
It is also essential to implement a sustainable process for ensuring that test case failures are reviewed
daily and addressed immediately.[8] If such a process is not implemented and ingrained into the team's
workflow, the application will evolve out of sync with the unit test suite, increasing false positives and
reducing the effectiveness of the test suite.
Unit testing embedded system software presents a unique challenge: since the software is being developed
on a different platform than the one it will eventually run on, you cannot readily run a test program in
the actual deployment environment, as is possible with desktop programs.[9]
Applications[edit]
Extreme programming[edit]
Unit testing is the cornerstone of extreme programming, which relies on an automated unit testing
framework. This automated unit testing framework can be either third party, e.g., xUnit, or created within
the development group.
Extreme programming uses the creation of unit tests for test-driven development. The developer writes a
unit test that exposes either a software requirement or a defect. This test will fail because either the
requirement isn't implemented yet, or because it intentionally exposes a defect in the existing code. Then,
the developer writes the simplest code to make the test, along with other tests, pass.
Most code in a system is unit tested, but not necessarily all paths through the code. Extreme programming
mandates a "test everything that can possibly break" strategy, over the traditional "test every execution
path" method. This leads developers to develop fewer tests than classical methods, but this isn't really a
problem, more a restatement of fact, as classical methods have rarely ever been followed methodically
enough for all execution paths to have been thoroughly tested.[citation needed] Extreme programming simply
recognizes that testing is rarely exhaustive (because it is often too expensive and time-consuming to be
economically viable) and provides guidance on how to effectively focus limited resources.
Crucially, the test code is considered a first class project artifact in that it is maintained at the same
quality as the implementation code, with all duplication removed. Developers release unit testing code to
the code repository in conjunction with the code it tests. Extreme programming's thorough unit testing
allows the benefits mentioned above, such as simpler and more confident code development and
refactoring, simplified code integration, accurate documentation, and more modular designs. These unit
tests are also constantly run as a form of regression test.
Unit testing is also critical to the concept of emergent design. As emergent design is heavily dependent
upon refactoring, unit tests are an integral component.[10]
Techniques[edit]
Unit testing is commonly automated, but may still be performed manually. The IEEE does not favor one over
the other.[11] The objective in unit testing is to isolate a unit and validate its correctness. A manual
approach to unit testing may employ a step-by-step instructional document. However, automation is
efficient for achieving this, and enables the many benefits listed in this article. Conversely, if not
planned carefully, a careless manual unit test case may execute as an integration test case that involves
many software components, and thus preclude the achievement of most if not all of the goals established
for unit testing.
To fully realize the effect of isolation while using an automated approach, the unit or code body under test
is executed within a framework outside of its natural environment. In other words, it is executed outside
of the product or calling context for which it was originally created. Testing in such an isolated manner
reveals unnecessary dependencies between the code being tested and other units or data spaces in the
product. These dependencies can then be eliminated.
Using an automation framework, the developer codes criteria into the test to verify the unit's correctness.
During test case execution, the framework logs tests that fail any criterion. Many frameworks will also
automatically flag these failed test cases and report them in a summary. Depending upon the severity of a
failure, the framework may halt subsequent testing.
As a consequence, unit testing is traditionally a motivator for programmers to create decoupled and
cohesive code bodies. This practice promotes healthy habits in software development. Design patterns,
unit testing, and refactoring often work together so that the best solution may emerge.
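A toy sketch of what such an automation framework does - running each registered test, catching assertion failures, and reporting a summary - can be written in a few lines. Real frameworks such as JUnit discover tests via reflection and annotations; here the test list is wired by hand and every name is illustrative.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TinyRunner {
    // Fails a criterion by throwing, which the runner catches and logs.
    static void check(boolean condition) {
        if (!condition) throw new AssertionError();
    }

    public static void main(String[] args) {
        // hand-wired test registry: name -> test body
        Map<String, Runnable> tests = new LinkedHashMap<>();
        tests.put("addsPositives", () -> check(1 + 1 == 2));
        tests.put("zeroIsNeutral", () -> check(0 + 5 == 5));

        // run every test, logging failures instead of aborting
        int failed = 0;
        for (Map.Entry<String, Runnable> t : tests.entrySet()) {
            try {
                t.getValue().run();
            } catch (AssertionError e) {
                failed++;
                System.out.println("FAIL " + t.getKey());
            }
        }
        // summary report, as described above
        System.out.println((tests.size() - failed) + " passed, " + failed + " failed");
    }
}
```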
Unit testing frameworks[edit]
See also: List of unit testing frameworks
Unit testing frameworks are most often third-party products that are not distributed as part of the compiler
suite. They help simplify the process of unit testing, having been developed for a wide variety of
languages. Examples of testing frameworks include open source solutions such as the various code-driven
testing frameworks known collectively as xUnit, and proprietary/commercial solutions such as TBrun,
JustMock, Isolator.NET, Isolator++, Parasoft Development Testing (Jtest, Parasoft C/C++test, dotTEST),
Testwell CTA++ and VectorCAST/C++.
It is generally possible to perform unit testing without the support of a specific framework by writing
client code that exercises the units under test and uses assertions, exception handling, or other control
flow mechanisms to signal failure. Unit testing without a framework is valuable in that there is a barrier
to entry for the adoption of unit testing; having scant unit tests is hardly better than having none at
all, whereas once a framework is in place, adding unit tests becomes relatively easy.[12] In some
frameworks many advanced unit test features are missing or must be hand-coded.
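For example, a framework-free test might be nothing more than client code that exercises the unit and throws on failure; the names below are invented for illustration.

```java
public class NoFrameworkTest {
    // unit under test
    static boolean isEven(int n) { return n % 2 == 0; }

    public static void main(String[] args) {
        // plain control flow signals failure: no framework required
        if (!isEven(4)) throw new AssertionError("4 should be even");
        if (isEven(3))  throw new AssertionError("3 should be odd");
        System.out.println("all checks passed");
    }
}
```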
Language-level unit testing support[edit]
Some programming languages directly support unit testing. Their grammar allows the direct declaration
of unit tests without importing a library (whether third party or standard). Additionally, the boolean
conditions of the unit tests can be expressed in the same syntax as boolean expressions used in non-unit
test code, such as what is used for if and while statements.
Languages that directly support unit testing include:
ABAP
C#
D
Go[13]
Java
Obix
Python[14]
Ruby[15]
Scala
See also[edit]
Software Testing portal
Acceptance testing
Characterization test
Component-Based Usability Testing
Design predicates
Design by contract
Extreme programming
Integration testing
List of unit testing frameworks
Unit testing frameworks for Ruby
Regression testing
Software archaeology
Software testing
Test case
Test-driven development
xUnit, a family of unit testing frameworks.
Notes[edit]
1. ^ Jump up to: a b Kolawa, Adam; Huizinga, Dorota (2007). Automated Defect Prevention: Best Practices in
Software Management. Wiley-IEEE Computer Society Press. p. 426. ISBN 0-470-04212-5.
2. Jump up ^ Xie, Tao. "Towards a Framework for Differential Unit Testing of Object-Oriented Programs".
Retrieved 2012-07-23.
3. Jump up ^ "Unit Testing". Retrieved 2014-01-06.
4. Jump up ^ Fowler, Martin (2007-01-02). "Mocks aren't Stubs". Retrieved 2008-04-01.
5. Jump up ^ Cramblitt, Bob (2007-09-20). "Alberto Savoia sings the praises of software testing". Retrieved
2007-11-29.
6. Jump up ^ Brooks, Frederick J. (1995) [1975]. The Mythical Man-Month. Addison-Wesley. p. 64.
ISBN 0-201-83595-9.
7. Jump up ^ Kolawa, Adam (2009-07-01). "Unit Testing Best Practices". Retrieved 2012-07-23.
8. Jump up ^ daVeiga, Nada (2008-02-06). "Change Code Without Fear: Utilize a regression safety net".
Retrieved 2008-02-08.
9. Jump up ^ Kucharski, Marek (2011-11-23). "Making Unit Testing Practical for Embedded Development".
Retrieved 2012-05-08.
10. Jump up ^ "Agile Emergent Design". Agile Sherpa. 2010-08-03. Retrieved 2012-05-08.
11. Jump up ^ IEEE Standards Board, "IEEE Standard for Software Unit Testing: An American National
Standard, ANSI/IEEE Std 1008-1987" in IEEE Standards: Software Engineering, Volume Two: Process
Standards; 1999 Edition; published by The Institute of Electrical and Electronics Engineers, Inc. Software
Engineering Technical Committee of the IEEE Computer Society.
12. Jump up ^ Bullseye Testing Technology (2006-2008). "Intermediate Coverage Goals". Retrieved 24
March 2009.
13. Jump up ^ golang.org. "testing - The Go Programming Language". Retrieved 3 December 2013.
14. Jump up ^ Python Documentation (1999-2012). "unittest -- Unit testing framework". Retrieved 15
November 2012.
15. Jump up ^ Ruby-Doc.org. "Module: Test::Unit::Assertions (Ruby 2.0)". Retrieved 19 August 2013.
External links[edit]
Test Driven Development (Ward Cunningham's Wiki)
Retrieved from "http://en.wikipedia.org/w/index.php?title=Unit_testing&oldid=605159258"
Test Driven Development
When you code, alternate these activities:
add a test, get it to fail, and write code to pass the test (DoSimpleThings, CodeUnitTestFirst)
remove duplication (OnceAndOnlyOnce, DontRepeatYourself, ThreeStrikesAndYouAutomate)
This inner loop pumps the outer loops of ExtremeProgramming - ContinuousIntegration, DailyDeployment?, FrequentReleases, and
SteeringSoftwareProjects. Tests help us keep promises regarding the quality, cost, and existence of previously installed
features.
Using this system, all my code is highly decoupled (meaning easy to re-use) because it all already has two users - its clients, and its test rigs.
Classes typically resist the transition from one user to two, then the rest are easy. I make reuse easy as a side-effect of coding very fast.
Then, the "remove duplication" phase forces one to examine code for latent abstractions that one could express via virtual methods and other
techniques that naturally make code more extendable. This is the "reuse" that the OO hype of the 1980s screamed about.

1. Think about what you want to do.
2. Think about how to test it.
3. Write a small test. Think about the desired API.
4. Write just enough code to fail the test.
5. Run and watch the test fail. (The test-runner, if you're using something like JUnit, shows the "Red Bar".) Now you know that your test
is going to be executed.
6. Write just enough code to pass the test (and pass all your previous tests).
7. Run and watch all of the tests pass. (The test-runner, if you're using JUnit, etc., shows the "Green Bar".) If it doesn't pass, you did
something wrong; fix it now, since it's got to be something you just wrote.
8. If you have any duplicate logic, or inexpressive code, refactor to remove duplication and increase expressiveness -- this includes
reducing coupling and increasing cohesion.
9. Run the tests again; you should still have the Green Bar. If you get the Red Bar, then you made a mistake in your refactoring. Fix it
now and re-run.
10. Repeat the steps above until you can't find any more tests that drive writing new code.
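The red/green loop above can be run in miniature with Python's unittest. Everything here is invented for illustration (the function plural and its rule are not from the text): historically the first version of plural was just `return word + "s"`, test_sibilant_noun then showed the Red Bar, and that failure drove the extra branch.

```python
import unittest

def plural(word):
    # Just enough code to pass the tests below -- no more. The sibilant
    # branch exists only because test_sibilant_noun failed without it.
    if word.endswith("s"):
        return word + "es"
    return word + "s"

class TestPlural(unittest.TestCase):
    def test_regular_noun(self):
        self.assertEqual(plural("cat"), "cats")

    def test_sibilant_noun(self):
        # Added second; it turned the bar Red and drove the code above.
        self.assertEqual(plural("bus"), "buses")

# Run the suite programmatically; result.wasSuccessful() is the Green Bar.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestPlural)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The point of the sketch is the sequence, not the code: each branch of the production function is there because a specific test demanded it.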
Please note that the first item is by far the most difficult, followed by the second. But if you can't do those, you probably shouldn't start writing
any code. The rest of the list is really pretty easy, but the first two are critical.
Uh, item 1 is what your OnsiteCustomer keeps screaming about, and item 2 is just item 1 stated inside-out. They are all easy, especially in this
order.
There is a big step between hearing the words of an OnsiteCustomer and understanding the meaning. Translating a business statement into
technical language can be a difficult job and one should respect that difficulty. Item 2 recognizes that testing the code often requires exposing
some things not necessarily required by the end user. There is a step to go beyond what the user needs to what the test will need.

I use code to find patterns that I am interested in. I can imagine many possible solutions to programming problems but some are a lot better
than others. Rather than use my brain to model the computer in high resolution, I use the computer itself to do the modeling and all I need is to
start coding somewhere, make incremental changes and follow what turns out to be interesting. Most of this kind of code is thrown away so why
would I want to make it bullet proof up front? If I was creating a UI for a piece of code, I would create many versions until I zeroed in on the one I
like. Test first is great if you know exactly the best way to program an explicitly defined program but I rarely get that kind of explicit definition and
even if I did, how would I know that technique "best fits" that problem? I would know that if I had coded something very like it before but if I had,
then I would just take the code I wrote before and make modifications to it. Creating tests that prove the code works is very hard, except in the
simple cases, and those probably don't need a test in any case. Tests should be created for code that is a "keeper" which, in my case, is only a
small fraction of the code I write.
How do you write a test for something that is constantly changing and you don't know what its shape or structure will look like?
-- DavidClarkd

Systems created using CodeUnitTestFirst, RelentlessTesting & AcceptanceTests might just be better designed than traditional systems. They
would also certainly support CodeUnitTestFirst better, for those who build on them, than our current set of systems does.
But because so danged many of them were not built that way, we are a little blind to what we could have available on the shelf.

A list of ways that test-first programming can affect design:
* Re-use is good. Test-first code is born with two clients, not one. This makes adding a third client twice as easy.
* Refactoring test-first code results in equally tested code, permitting more aggressive refactorings (RefactorMercilessly). Cruft is not
allowed, and code is generally in better shape to accept more refactorings.
* When paying attention during all of the little steps, you may discover patterns in your code.
* Test code is easy to write. It's usually a couple of calls to the server object, then a list of assertions. Writing the easy code first makes
writing the hard code easy.
* DesignPatterns may be incremented in, not added all of a bunch up front.
* Test-first code is written interface first. You think of the simplest interface that can show function.
* Code tends to be less coupled. Effective unit tests only test one thing. To do this you have to move the irrelevant portions out of the
way (e.g., MockObjects). This forces out what might be a poor design choice.
* UnitTests stand as canonical & tested documentation for objects' usage. Developers read them, and do what they do in production
code to the same objects. This keeps projects annealed and on track.
* When the developer has to write tests for what he is going to do, he is far less likely to add extraneous capabilities. This really puts a
damper on developer-driven scope creep.
* Test First Design forces you to really think about what you are going to do. It gets you away from "It seemed like a good idea at the
time" programming.
- This sure isn't a page for irresponsible people. Is any programming activity for irresponsible people? See CodeAndFix, WaterFall.
- Nope. the problem is: They don't know! [UnskilledAndUnawareOfIt]

I have been working my way through Kent's TDD book for a while now, and applying the principles quite rigorously. I am a real dullard
sometimes, because it takes me a horribly long time to understand even simple stuff. I had probably been applying TDD for more than a week
before I realized why it works so well (at least for me). There are three parts to this:
1. The tests (obviously) help find bugs in the application code, and the micro-steps taken with TDD mean that any "bugs" are in the very
code I have just been writing, and hence which still has a relevant mental model in my small brain.
2. By doing the absolutely simplest thing in the application code in order to get each test to run, I often have small AhaMoments, where I
see that I am writing such overly concrete code (i.e. just enough to get the tests to work) that the tests (even though they run) cannot
possibly be adequate to cover the "real" requirements. So to justify writing more abstract application code, I need to add more test
cases that demand that abstraction, and this forces me to explore the requirement further. Therefore, the application code actually
helps me debug the tests. That is, since the tests are the specification, feedback from the application code helps me debug the
specification.
3. As I have these "aha!" moments (mentioned in 2 above) I follow Kent's practice of adding them to the ToDoList. It took my stupid head
quite some time to realize that the TODO list is actually a list of Micro-Stories, which I constantly prioritize (since I am the customer at
this level). Following AlistairCockburn's insight that Stories are promises to have a conversation with the customer, I see, then, that
the Micro-Stories in the TODO list are a promise to have a conversation with myself and (here is the weird bit) to have a conversation
with the code (since it gives me feedback - it tells me things - and TDD tunes me in to listening to the code).
-- AnthonyLauder

Test Driven Development (TDD) by KentBeck
ISBN 0321146530 - a book.
Mailing list: http://groups.yahoo.com/group/testdrivendevelopment

Test Driven Development (TDD) by DavidAstels?
ISBN 0131016490 - another book.

JohnRusk worries that one danger of TestDrivenDevelopment is that developers may not take that step that you take. I.e. developers may stay
with overly concrete code that satisfies the tests but not the "real" requirements.
To look at it another way, I have always felt that it was dangerous to approach design with (only) particular test cases in mind, since it's usually
necessary to think about boundary cases, and other unusual cases.
How does XP address that danger? By encouraging developers to write sufficiently comprehensive tests? Or by relying on developers to take
that step which you mention, which is saying, "OK, this actually passes my tests, but it's not really adequate for the real world because...".
XP addresses that danger with PairProgramming. When obvious boundary cases are overlooked by the programmer driving the keyboard, the
programmer acting as navigator points out the oversight. This is an excellent example of a case where a single practice is, by itself, insufficient
to reasonably guarantee success but, in combination with a complementary practice, provides excellent results.
TestFirst is a cool way to program source code. XP extends TestFirst to all scales of the project. One tests the entire project by frequently
releasing it and collecting feedback.
Q: What "real" requirements can you not test?
Those requirements which are not stated.
Requirements which require highly specialized, unaffordable, or non-existent support hardware to test. E.g., with hardware such as
the Intellasys SEAforth chips, it's possible to generate pulses on an I/O pin as narrow as 6ns under software control. To see these
pulses, you need an oscilloscope with a bandwidth no less than 166MHz, and to make sure their waveform is accurate, you need a
bandwidth no less than 500MHz, with 1GHz strongly preferred. However, at 1GHz bandwidths, you're looking at incredibly expensive
sampling oscilloscopes. Thus, if you cannot afford this kind of hardware, you pretty much have to take it on faith things are OK. Then,
you need some means of feeding this waveform data back to the test framework PC (which may or may not be the PC running the
actual test), which adds to the cost.
TDD cannot automatically fix algorithms; neither can any other technique. Where rigorous testing helps is ensuring that the details of your
algorithm remain the same even if you refactor or add features. A test case can easily check _too_ much, and fail even if the production code
would have worked. For example, suppose one step of an algorithm returns an array. A test could fail if that array is not sorted, even if the
algorithm does not require the array, at that juncture, to be sorted.
This is a good thing; it makes you stop, revert your change, and try again.
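The sorted-array point above can be shown in a few lines. The function factors is invented for illustration: suppose callers only ever need the *set* of divisors, but the test also pins down their order.

```python
def factors(n):
    # Returns divisors in ascending order as an implementation detail;
    # no caller is assumed to rely on that order.
    return [d for d in range(1, n + 1) if n % d == 0]

# Over-specified: this test asserts an ordering the algorithm does not
# require, so a refactoring that returns divisors unsorted would fail
# it even though production behaviour is unchanged.
assert factors(12) == [1, 2, 3, 4, 6, 12]

# Testing only the required property leaves the refactoring free.
assert set(factors(12)) == {1, 2, 3, 4, 6, 12}
```

The second assertion survives any reordering of the result; the first one is the kind of test that "checks too much" and makes you stop and revisit your change.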

Note: here is the link to the pdf file on the yahoo groups area:
http://groups.yahoo.com/group/testdrivendevelopment/files/TDD17Jul2002.pdf
I found this unexpectedly awkward to locate by searching, so I thought I'd drop the link in here. -- DavidPlumpton

I've sometimes been an advocate of test driven development, but my enthusiasm has dropped after I've noticed that CodeUnitTestFirst goes
heavily against prototyping. I prefer a style of coding where the division of responsibility between units, and the interfaces between them, are
still in flux at the beginning of the development cycle, and writing a test for those before the actual code is written will seriously hinder the speed
of development and almost certainly end up testing the wrong thing.
Agreed. Please see my blog entry about adapting TDD for mere mortals at http://agileskills2.org/blog/2010/02/07/tdd-adapted-for-mere-mortals/
-- KentTong
: Assuming you're not shipping the prototype, there's nothing particularly in conflict. The prototype is in itself a sort of design test. The trouble
with prototypes is that they have this habit of becoming the product...
[RE: prototypes becoming "the" product. Arrg.. I agree. No reason for it to happen. TDD makes rewriting slops as clean code absurdly
easy]
Design is a lot more difficult than implementing a design. But UnitTests explicitly require one to define the design before making it work.
There are other problems, too. Like OO methodology, the difficulties which test driven development helps with are sometimes caused by itself.
Every extra line of code adds to the complexity of the program, and tests slow down serious refactoring. This is most apparent in hard-to-test
things like GUIs, databases and web applications, which sometimes get restructured to allow for testing and end up a lot more complicated. --
PanuKalliokoski
CodeUnitTestFirst goes heavily against prototyping? Strange, I haven't found that to be true, myself. Closer to the opposite, in fact - I start with a
very thin shell of a ProtoType?, and as I make progress it fills in with real features. I wonder how what you actually do differs from what I actually
do.
Strongly agree. A big side benefit of CodeUnitTestFirst that doesn't get enough attention is how it rearranges your thinking. Instead of thinking
"oh I'll need these accessors on these classes, etc" you think in terms of use cases. And you end up with *exactly* what you need, nothing more
and nothing less.
I find I have a huge increase in speed in all areas of development when I CodeUnitTestFirst. I honestly believe that anyone who doesn't
experience this is either busy knocking down StrawMans, isn't doing it right, or hasn't really given it a chance.
TDDing GUIs is quite frustrating. It may be a lot easier if there were a GUI toolkit available that has been TDDed itself, from the ground up.
Anyone interested in such a TDDedGuiFramework project?
You mean like RubyOnRails? -- PhlIp
:-) No, I mean a toolkit for standalone clients (or maybe a hybrid one, people have been experimenting with this already). Something like SWT,
but a lot more intuitive :-)

Note that some TDDers abuse MockObjects. Dynamic mock systems like http://classmock.sf.net can make this too easy. A TDD design should
be sufficiently decoupled that its native objects work fine as stubs and test resources; they help to test other objects without runaway
dependencies. One should mock only the few remaining things which are too hard to adapt to testing, such as random numbers or filesystem errors.
Some of us disagree with that view, see http://www.mockobjects.com/files/mockrolesnotobjects.pdf for an alternative.

I'd like to revisit a comment that JohnRusk made above:
It seems to me that one danger of TestDrivenDevelopment is that developers may _not_ take that step that you take. I.e. developers may stay
with overly concrete code that satisfies the tests but not the "real" requirements.
See FakeItUntilYouMakeIt

The principles of TDD can be applied quite well to analysis and design, also. There's a tutorial on Test-driven Analysis & Design at
http://www.parlezuml.com/tutorials/tdad/index_files/frame.htm which neatly introduces the ideas.
-- DaveChan

What about extending the principles of TDD beyond testing, analysis, and design? How about using the principles also on user documentation?
This idea is described in Purpose Driven Development (PDD) at http://jacekratzinger.blogspot.com/2012/01/purpose-driven-development-pdd.html
-- JacekRatzinger

"Roman Numerals" is often held up as a good sample project to learn TDD. I know TDD but I'm bad at math, so I tried the project, and put its
results here:
http://www.xpsd.org/cgi-bin/wiki?TestDrivenDevelopmentTutorialRomanNumerals
-- PhlIp

<shameless plug> Up to date info on tools and practices at http://testdriven.com
-- DavidVydra

I found GNU/Unix command line option parsing to be a good TDD exercise as well. My results here:
http://home.comcast.net/~pholser/software/pholser-getopts.zip (3.5Mb download; 4.7Mb unpacked)
-- PaulHolser

OrganicTesting of the TgpMethodology is one way to practice TestDrivenDevelopment. Organic Testing is an AutomatedTests methodology that
bears resemblance to both UnitTest and IntegrationTest. Like UnitTests, the tests are run by the developers whenever they want (before
check-in). Unlike UnitTests, only the framework for the test is provided by the programmers, while the actual data of the test is given by
BusinessProfessionals. Like IntegrationTests, each run activates the whole software (or a whole module). -- OriInbar

Another reference: article "Improving Application Quality Using Test-Driven Development" from Methods & Tools
http://www.methodsandtools.com/archive/archive.php?id=20

But what about SecureDesign?? SecurityAsAnAfterthought? is a bad idea and it seems that test-driven development (and a number of other
agile processes, though perhaps not all) has a bad habit of ignoring security except as test cases, which isn't always the best way to approach
the problem.
-- KyleMaxwell

Hmmmm. Seems to me that TDD deals with security (as well as things like performance) just like any other functional requirement. You have a
story (or task) explicitly stating what functionality is needed (e.g., user needs 2 passwords to login, which are stored in an LDAP server; or
algorithm needs to perform 5000 calculations per second). You then write tests that will verify functionality. And then you write the functionality
itself.
And unlike traditional development, you now have regression tests to make sure that this functionality never gets broken. (i.e., if, due to
subsequent coding, the security code gets broken, or the algorithm performance drops off, you'll have a broken test to alert you of that.)
-- DavidRosenstrauch
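The regression-test point above can be sketched. The password policy here (minimum length, must mix letters and digits) is invented for illustration and is not from the text; the point is only that a security requirement, once written as an ordinary test, runs on every build.

```python
def password_acceptable(pw):
    # Hypothetical policy: at least 8 characters, containing both
    # letters and digits. Any real policy would come from the story.
    return (len(pw) >= 8
            and any(c.isalpha() for c in pw)
            and any(c.isdigit() for c in pw))

# These assertions act as the regression suite: if later coding
# weakens the policy, the build breaks instead of shipping silently.
assert password_acceptable("s3cret-passw0rd")
assert not password_acceptable("short1")
assert not password_acceptable("lettersonly")
```

This only covers security stated as positive functionality, which is exactly the limitation the next comment raises: negative requirements ("must not do X") don't reduce to assertions this easily.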
One of the reasons that security is hard is that security is not just a piece of functionality. Okay, there are things like passwords which are
security features, but the rest of the code also has to not have security holes, which are NegativeRequirements?; i.e., code must not do X.
This is obvious in the case of 'must not overflow buffer', which TDD does address, but is less obvious in things like 'component X should not be
able to affect component Y'. How do you test that? (I probably read this in 'Security Engineering', by Ross Anderson, which is now free on the
web).
-- AlexBurr?
In the case of "Component A must not affect Component B", how would you evaluate this without test-driven development? If you can't formally
define this requirement, then TDD is no better or worse than hoping for the best. (One answer in this case may be rule-based formal validation
of the system, which is easy enough to plug into a TDD framework.) -- JevonWright?

Some interesting info on test driven development from E. Dijkstra:
http://www.cs.utexas.edu/users/EWD/ewd10xx/EWD1012.PDF
(above is from http://www.cs.utexas.edu/users/EWD/index10xx.html)
Bit Torrent founder (Bram) uses test driven development, and thinks Formal methods suck:
http://z505.com/cgi-bin/qkcont/qkcont.cgi?p=Bittorrent-Founder-Wrong

IEEE Software will publish a special issue on Test-Driven Development in July 2007. For more information, see
IeeeSoftwareSpecialIssueOnTestDrivenDevelopment

Found this useful illustration of TDD in .NET (Flash screen cast) http://www.parlezuml.com/tutorials/tdd.html

I'd like to see TDD really integrated tightly into the programming environment. In Python, for example, there is a module in the standard library
called DocTest. Python has DocStrings built into the language to encourage standardized code commenting. These DocStrings are accessed
via a built-in help() function and encourage a friendly development community. DocTest goes further, encouraging smart design of tests within
the DocStrings themselves, in ways that simultaneously teach the user what your method or class is supposed to do. Ideally, DocTest should
be moved out of the library and built into the interpreter environment, with a companion function to help() called test(). This would bring the
"add a test, get it to fail, and write code to pass" cycle to its friendliest form of LiterateProgramming. -- MarkJanssen
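The DocTest style described above looks like this in practice (the function add is an invented example; doctest itself is real Python standard library):

```python
import doctest

def add(a, b):
    """Return the sum of a and b.

    The examples below are both documentation (shown by help()) and
    executable tests (run by doctest):

    >>> add(2, 3)
    5
    >>> add(-1, 1)
    0
    """
    return a + b

# doctest extracts the >>> examples from the docstring and runs them,
# comparing actual output against the text that follows each example.
results = doctest.run_docstring_examples(add, {"add": add})
```

A failing example prints a diff of expected versus actual output, which is the "Red Bar" of this workflow.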

BDD (Behavior Driven Development) is a form of TDD (Test Driven Development) where the tests are specified through definition of desired
Behaviors, as opposed to writing tests in code (the same code language used for the product). The BDD camp says that you use natural
language to describe desired behaviour, and employ testing tools which translate the natural language behaviour specification into tests which
validate the product code. This approach recognizes and attempts to address a couple of challenges with testing which I elaborate upon below.
See 'Cucumber' (http://cukes.info/) as one example of a BDD test toolkit.
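The flavour of BDD can be sketched without Cucumber. In real Cucumber the scenario lives in a Gherkin file and step definitions are matched by regular expressions; the string-matching dispatcher and the Cart class below are stand-ins invented for this sketch, not Cucumber's actual API.

```python
# A natural-language scenario, mapped step by step onto executable checks.
scenario = [
    "Given an empty cart",
    "When I add an item priced 5",
    "Then the cart total is 5",
]

class Cart:
    def __init__(self):
        self.items = []
    def add(self, price):
        self.items.append(price)
    @property
    def total(self):
        return sum(self.items)

def run(steps):
    cart = None
    for step in steps:
        if step == "Given an empty cart":
            cart = Cart()
        elif step.startswith("When I add an item priced "):
            cart.add(int(step.rsplit(" ", 1)[1]))
        elif step.startswith("Then the cart total is "):
            assert cart.total == int(step.rsplit(" ", 1)[1])
        else:
            raise ValueError("undefined step: " + step)
    return cart

cart = run(scenario)
```

The scenario text is readable by a BusinessProfessional; only the step mapping requires a programmer, which is the skills split the discussion below examines.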
One strategy with BDD is that you employ test (SDT) developers with much different skills than your product (SDE) developers. You can thereby
segregate SDT and SDE developers by skills. Another strategy is that your developers use different tools for product development from test
development. This suggests that you use people with different skills (less product intensive skills, btw), to write tests (or the same folks using
different skills). But when you choose TDD or BDD as a methodology practice, you need to consider (answer) the questions exposed below.
Building test code to automate the testing of your product code produces a regression problem. You want to avoid placing 'faith' in the
production code, and ensure that the code has been tested (verified), but now you have moved your 'faith' into the test code.
This regression problem is a challenge with testing, where you must provide tests which cover the requirements and features of your product, so
you have moved (regressed) the problem of (potential) defects in the development into a problem with (potential) defects in the test code. You
have now written more code (probably in the same language), but possibly replicated or moved the defect into test code. You are still placing
faith, but now in your test code, rather than in your production code.
One strategy is to use SDT's to develop tests, and SDE's to develop products. This can help catch misunderstandings in requirements (win), but
only increases the amount of code written, and thus the potential number of defects, adding personnel to develop these tests, and thus adding
costs. And you now have a recruiting and motivation problem. You must staff SDE's to build products and SDT's to build tests.
However, we can consider that by using different personnel, defects are statistically less likely to align between product code and test code,
because we assume that the SDE's and SDT's are independent. We assume a stochastically independent defect generation (SDE's and SDT's
are different people). Thus we expect them to generate defects in different places.
But are these activities stochastically independent? We are relying upon this independence. But agile asks that one agile team combine
developers writing production code and developers writing test code. So the same (hard) problems are viewed by the same team, and the same
conceptual issues are tackled in code by the same pool of developers. Using different developers to write product and test code gains little
practical independence, as developers (SDE or SDT) have essentially the same training and experience.
Consider the strategy that you require different training and experience from SDE and SDT. This regains some missing independence. But
unless all developers (SDE and SDT) have essentially the same training, capabilities, and skills, they cannot perform interchangeably on an
agile team. And developers undertake extensive training. And now you must separate developers by skill and role. Now you face the choice
whether to use more skilled developers to write the product, or to write the tests? Using less skilled developers to write the product impairs your
product. Using less skilled developers to write your tests means that you impair your tests. This reduces to a Faustian choice, do you effectively
subvert your testing and quality process, or do you sacrifice your product development?
Revisit the recruiting and motivation problem. Suppose you decide to staff SDT's to build tests and SDE's to build products. You have
introduced stratification and competition into your development team. Are you going to get equally qualified candidates for both SDE and SDT?
Even with the different requirements? Assume that most developers want to advance their careers, and gain recognition and rewards, and
become the best developers they can become.
Which path will the best and the brightest want to pursue? Consider that Google hires less than 1% of applicants (lots of people want to work
there, so they must want to pursue the 'best' career path). Joel Spolsky (co-founder, Fog Creek Software), who writes a blog on software
development (http://www.joelonsoftware.com/), says that you should hire people who are "Smart, and Get Things Done".
Can you effectively use the same people writing both product code and test code? And gain the stochastic independence testing needs? Can
you use people with different training and skills, and have them independently build tests? And not sacrifice product development on the altar of
quality?
The BDD camp says that you use natural language to describe desired behaviour, which would employ developers with quite different skills,
and thereby segregate developers by skill. This suggests that you use people with different (less product-intensive, btw) skills to write
tests. This has been exposed as a suspect strategy; but assuming you choose TDD or BDD as a methodology practice anyway, how do you
achieve the best results?
-- ChuckCottrill

TDD in general and BehaviorDrivenDevelopment in particular are firmly grounded in HoareTriple. I find a useful parallel between a written
UseCase document and BDD Feature / Story Narrative.
-- MartinSpamer

See TestFirstUserInterfaces CalvinAndHobbesDiscussTdd TestDrivenAnalysisAndDesign TestDrivenDevelopmentaPracticalGuide
TestDrivenDevelopmentChallenges TestDrivenDesignPhaseShift, PowerOfTdd, IeeeSoftwareSpecialIssueOnTestDrivenDevelopment,
BehaviorDrivenDevelopment, TestFoodPyramid

CategoryTesting CategoryBook CategoryTestDrivenDevelopment CategoryExtremeProgramming

EditText of this page (last edited April 19, 2014) or FindPage with title or text search
