Sunteți pe pagina 1din 40

LAZY DATA DOMAINS

ROBERT CARTWRIGHT
Computer Science Department, Rice University, Houston, TX 77005-1892

JIM DONAHUE
Xerox Palo Alto Research Center, Palo Alto, CA 94304
Abstract. Lazy evaluation has gained widespread acceptance among
the advocates of functional programming. The implementation of lazy
evaluation is easy to describe but its semantic consequences are deceptively complex. This paper develops a comprehensive semantic theory of
lazy data domains and explores several approaches to formalizing that
theory within a programming logic. In the process, the paper presents
four interesting results.
First, there are several semantically distinct definitions of lazy evaluation that plausibly capture the intuitive notion. The differences
among the various definitions are significant; simple programs produce different results under the different definitions.
Second, non-trivial lazy domains are similar in structure (under
the approximation ordering) to universal domains such as the P
and T [Plot78] models of the untyped lambda calculus. In fact,
both P and T are isomorphic is simple lazy data domains.
Third, we prove that equational specifications do not have the
power to define lazy domains. This result establishes a fundamental limitation on the power of equational theories as data type
specifications.
Fourth, although lazy domains have the same higher-order structure as universal domains such as P , they nevertheless have an
elegant, natural characterization within first order logic. In this
paper, we develop a simple, yet comprehensive first order theory
of lazy domains which subsumes LCF.

A forerunner of this paper appeared in the Proceedings of the 1982 ACM Conference on
LISP and Functional Programming under the title The Semantics of Lazy Evaluation.
This research was partially supported by NSF grants MCS-8104209, MCS-8403530 and
by Xerox Corporation.
1

1. Introduction
Since the publication of two influential papers on lazy evaluation in 1976
[Hend76, Frie76], the idea has gained widespread acceptance among the
advocates of functional programming [Hend80, Back78]. There are two basic
reasons for the popularity of lazy evaluation. First, by making some of the
data constructors in a functional language non-strict, it supports programs
that manipulate infinite objects such as recursively enumerable sequences,
making some applications easier to program. Second, by delaying evaluation
of arguments until they are actually needed, it may speed up computations
involving ordinary finite objects.
Despite the popularity of lazy evaluation, its semantics are deceptively
complex. Although the implementation of lazy evaluation is easy to describe, its semantic consequences are not. In lazy domains, the existence
of infinite objects nullifies the usual principle of structural induction for
program data. Replacing conventional data constructors by their lazy counterparts radically alters the topological structure of the data domain. As
a result, reasoning about programs defined over lazy domains is a subtle,
often counterintuitive endeavor. Many simple theorems about ordinary data
objects do not hold in the context of lazy evaluation. For example, although
the function reverse reverse is the identity function on ordinary linear lists,
it does not equal the identity function in the context of lazy evaluation;
applying reverse to an infinite list yields the undefined object . In response to these issues, this paper develops a comprehensive semantic theory
of lazy data domains and explores several approaches to formalizing that
theory within a programming logic. In the process, the paper presents four
interesting new results.
First, there are several semantically distinct definitions of lazy evaluation
that plausibly capture the intuitive notion. In contrast to the implementationoriented approaches in the literature, we define lazy evaluation as a change
in the value space over which computation is performed. We use a small collection of constructors from denotational semantics [Scot76, Scot81, Scot83]
to build abstract value spaces that correspond to the meanings of computations using various lazy constructors. Our abstract approach to defining
lazy domains accommodates several different interpretations of the informal
concept of lazy lists developed in the literature [Frie76, Hend76]. The differences among the various interpretations are significant; simple programs
produce different results under the different interpretations.
Second, non-trivial lazy domains are similar in structure (under the approximation ordering) to universal domains (as defined by Scott [Scot76])
such as the P and T [Plot78] models of the untyped lambda calculus.
Specifically, we show that P is isomorphic to the simple lazy domain

TrivSeq = Triv TrivSeq

LAZY DATA DOMAINS

where Triv is the trivial data domain consisting of two objects {, T} and
denotes the Cartesian product of two sets. Moreover, the primitive
operations on TrivSeq corresonding to the the primtive operations of P
({0, suc, pred, ifthenelse, K, S, apply}) are definable by first order recursion
equations using only the constants T and , the constructor and selector
functions for TrivSeq, and the logical operations and por (parallel or) on
Triv. An analogous relationship exists between T (including an appropriate
set of primitive operations) and the corresponding lazy domain
B = B B
where B is the Boolean data domain consisting of three objects {, T, F}.
Hence, lazy sequences provide an elegant model of the untyped lambda
calculus that is intuitively familiar to most computer scientists.
Third, we prove that equational specifications [ADJ76,77] [Gutt78] do not
have the power to define lazy domains. This result establishes a fundamental
limitation on the power of equational theories as data type specifications.
Fourth, although lazy domains have the same higher-order structure as
universal domains such as P , they nevertheless have an elegant, natural
characterization within first order logic. In this paper, we develop a simple,
yet comprehensive first order theory of lazy domains relying on three axiom
schemes asserting
(1) the principle of structural induction for finite objects;
(2) the existence of least upper bounds for directed sets; and
(3) the continuity of functions.
To demonstrate the deductive power of the system, we show that there is a
simple, natural translation of the higher-order logic LCF [Gord77] into our
first order system. In addition, we derive a generalized induction rule (analogous to fixed point induction in LCF) for admissible predicates called lazy
induction which extends conventional structural induction to lazy domains,
simplifying the proof of many theorems. An instance of this generalized rule
reduces to ordinary fixed point induction.
The remainder of the paper is divided into six sections. Section 2 provides
a brief overview of Scotts theory of data domains [Scot76,Scot81,Scot83].
Section 3 develops the specific machinery required to define the abstract
semantics of lazy data domains. Using this machinery, Section 4 presents
a taxonomy of lazy lists, demonstrating that there are many semantically
distinct data domains that capture the intuitive notion of lazy evaluation.
Section 5 explores various approaches to formalizing our semantic definition
of lazy domains within a logical theory. In the process, we prove that algebraic specification is too weak to accomplish the task and that lazy domains
have the same rich higher-order structure as P . Finally, in Section 6, we
present a simple first order theory for lazy data domains and demonstrate
that it is as least as powerful as the corresponding theory formulated in the
higher-order logic LCF. Section 7 assesses the intuitive significance of our
results and speculates about promising directions for future research.

LAZY DATA DOMAINS

2. Background
2.1. Mathematical Preliminaries. The following group of definitions rigorously describes our concept of data domain, which is an adaptation and
distillation of several different expositions by Scott [Scot 76,81,83].
Definition 2.1. A partial order is a pair hS, vi consisting of a set S of
objects, called the universe of S, and a binary relation v over S such that
(1) v is reflexive: x S x v x.
(2) v is antisymmetric: x, y S [x v y y v x x = y].
(3) v is transitive: x, y, z S [x v y y v z x v z] .
A subset R S is consistent (in S) iff u S such that r R (r v u);
u is called an upper bound of R. A subset R S is directed iff every finite
subset E R has an upper bound in R. The restriction of the relation v
to the set R is denoted vR .
Notation It is customary to abuse notation and denote a partially ordered
set hS, vi by the symbol S for the universe. In this case, the partial ordering
relation corresponding to S is denoted vS . We will follow this practice
except when more precise notation is necessary for the sake of clarity.
Definition 2.2. A partial order S is complete iff every directed subset R S
(including the empty set) has a least upper bound (denoted tR) in S. The
least upper bound in S of the empty set is denoted . The phrase complete
partial order is frequently abbreviated cpo.
Definition 2.3. Two partial orders A and B are isomorphic (written A v
B) iff there exists a bijective (one-to-one and onto) function h : A B that
preserves the partial ordering on A:
x, y A [x vA y h(x) vB h(y)] .
Definition 2.4. A partial order B is a finitary basis iff
(1) B is countable, and
(2) every finite consistent subset of B (including the empty set ) has a
least upper bound in B.
Definition 2.5. Let S be a partial order. A non-empty subset R S is an
ideal over S iff
(1) R is directed, and
(2) R is downward closed: r R s S s v r s R.
The set of ideals over S is denoted dSe. For every r S, the set r
{s S | s v r} is an ideal in dSe; it is called the principal ideal determined
by r.
Lemma 2.1. The set of ideals over a partial order S forms a cpo hdSe, i
under the subset ordering .
Definition 2.6. Let B be a finitary basis. The domain determined by B is
the cpo hdBe, i.

LAZY DATA DOMAINS

Definition 2.7. An element s of a cpo S is finite iff for every directed


subset R S, s v tS R implies that r S such that s v r. The symbol
S 0 denotes the set of finite elements of S.
Lemma 2.2. An element R in the domain of ideals determined by the finitary basis B is finite iff R is a principal ideal. Moreover, r, s B r v
s r s.
Remark It is convenient to think of the set dBe as a superset of Bcalled
the ideal completion of Bbecause the finite elements of dBe form a partial
order isomorphic to B. The remaining ideals correspond to limit points
that are added to B to make it form a cpo. From this perspective, a domain
is a cpo hD, vi where hD0 , vD i is a finitary basis and hD, vi is isomorphic
to the domain determined by hD0 , vD i.
Definition 2.8. A presentation of the domain D is a function : N D
such that (N) = D0 .
Remark The function delta identifies the finite elements of D. Equivalently,
delta assigns indices in N to the elements of the finitary basis for D. Since
is not necessarily 11, the range of can be finite (implying that D is
finite).
Definition 2.9. Let D be an arbitrary domain. A function f : D#f D
is continuous iff
x1 , . . . , x#f D f (x1 , . . . , x#f ) =
V
t{f (d1 , . . . , d#f ) | d1 , . . . , d#f D0 1i#f di v xi } .
A continuous function f : D#f D is strict iff the image of every argument
list containing is , i.e.,
x1 , . . . , x#f D [x1 = . . . x#f = f (x1 , . . . , x#f ) = ].
Definition 2.10. A domain R is a subspace of the domain D iff
(1) R D, vR vD , and R = D .
(2) R D0 = R0 .
The function R defined by
R (x) = t{r R0 | r v x}
is called the projection corresponding to R.
Remark Some formulations of domain theory use a weaker definition of subspace where subspaces are the ranges of functions called finitary retractions
instead of projections. In contrast to projections, finitary retractions do not
preserve the computational structure of domains because they generally do
not finite elements to finite elements. See [Plot78].
Many common data domains such as the natural numbers and ordinary
(finite) lists are degenerate in the sense that they contain no limit points
(points that are not finite); in these domains, every element is finite (in the

LAZY DATA DOMAINS

topological sense). We will use two such domains For example, the domain
of Boolean truth values, consisting of the set
{, T, F}
under the partial ordering vB defined by
x vB y x = y x = .
is a degenerate domain.
An example of a more interesting domain is P , the power set of the
natural numbers under the partial ordering v determined by set inclusion.
The finite elements of P are precisely the finite sets of natural numbers.
2.2. Domain Constructions. In specifying domains, it is often convenient
to construct composite domains from simpler ones. There are two fundamental mechanisms for constructing composite domains: the Cartesian product
construction and the continuous function construction. We will discuss several other constructions later in the paper, but they are all based on these
two mechanisms.
We will define the two constructions without proving that the constructed
domains are well-formed. The interested reader is encouraged to verify that
they are.
Definition 2.11. Given domains A and B with presentations and , the
Cartesian product domain, denoted [AB], is the domain h[A B], v[AB] i
with presentation where A B is the set of pairs
{hx, yi | x A, y B},
v[AB] is the relation defined by
(x1 , y1 ) v[AB] (x2 , y2 ) x1 vA x2 y1 vB y2
and is the enumeration hi | i Ni consisting of all pairs haj , bk i sorted
by rank
(j + k) (j + k + 1)
+ m.
r(j, k) =
2
The bottom element of A B is hA , B i where A and B denote the
least elements of A and B.
Lemma 2.3. [A B]0 v A0 B 0 .
Notation The square brackets appearing in the expression [A B] are
significant because they distinguish the Cartesian product construction on
domains A and B from the standard set-theoretic Cartesian product construction on the universes of these domains. The latter is commonly used to
specify the domains of multi-ary functions. The distinction is important in
domain theory: f : [A B] C is a unary function that maps arguments
from the set [A B] into C while g : A B C is a binary function that
takes two arguments x A and y B and produces a result in C.

LAZY DATA DOMAINS

The second fundamental domain construction is the formation of the domain of approximable maps from one domain into another. An approximable
map is a data object that denotes a function.
Definition 2.12. Assume that we are given data domains A and B. A
partial map from A into B is a binary relation R A0 B 0 such that R is
consistent: a A0 , the set
{b B 0 | x A0 [x v a xRb]}
is consistent. An approximable map from A into B is a partial map R from
A into B that satisfies the following two closure properties:
(1) R is downward-closed: ha, bi A0 B 0 (hx, yi R [x v a b v
y] xRy) .
(2) f is directed-closed: a A0 b1 , b2 B 0 (aRb1 aRb2 aR(b1 t
b2 )) .
Definition 2.13. Given a partial map R from A into B, the function determined by R is the function f : A B defined by
G
f (a) = {b B 0 | x A0 [x v a xRb]}.
Lemma 2.4. If R is a partial map from A into B, then the function f :
A B determined by R is continuous.
Definition 2.14. Given the domains A and B with presentations hai | i
Ni and hbi | i Ni, the domain of approximable maps A , B is is the
domain hC, vC i with presentation where C is the set of approximable
maps from A into B, vC is the relation defined by
R1 vC R2 R1 R2
and is the enumeration consisting of the set
{R | finite R0 [A0 B 0 ] such that R is the downward, directed closure of R0 }
sorted by rank
X

2r(i,j) .

hai ,bj if

The least element of A , B is the relation


{ha, B i | a A0 }
which is the downward, directed closure of the trivial partial map {hA , B i};
it determines the everywhere undefined function
x.B .
An approximable map R is finite iff it is the downward, directed closure of
a finite partial map.

LAZY DATA DOMAINS

Definition 2.15. For any domain D, the function applyD : (D , D)D


D is defined by
x D applyD (R, x) = f (x).
where f D D is the function determined by the approximable map
R D , D.
Lemma 2.5. applyD is continuous.
Although we have only defined the notion of approximable maps corresponding to unary continuous functions, there is a standard transformation (usually called currying) that converts a multiple argument function
f : D#f D to an equivalent unary function f 0 : D [D ... [D
D] . . . ] defined by the lambda expression:
x1 . . . . . x#f . f (x1 , . . . , x#f ).
2.3. Computability. In order to formalize the idea of computable functions on a domain, we must identify a concrete representation for the elements of the domain.
Definition 2.16. A presentation of a domain A is effective iff the following
two relations are recursive:
(1) The binary relation Con defined by
Con(i, j) ki v k j v k .
(2) The ternary relation Lub defined by
Lub = {hi, j, ki | k = i t j } .
Fortunately, both of the fundamental domain constructionsCartesian
product and approximable mappreserve effectiveness.
Theorem 2.6. If the presentations of the domains A and B are effective,
then the presentations the composite domains [A B] and [A B] are
effective.
The notion of effectiveness extends in a natural way to subspaces.
Definition 2.17. Let B be a domain with presentation and let A be a
subspace of B. A is an effective subspace of B iff the index set {i | i A}
is recursively enumerable.
Remark The function implicitly determines an effective presentation for
A: .
In an abstract implementation of a domain A with effective presentation
, each element x of the universe is represented by a natural number xR
encoding the set index(x) = {i1 , i2 , . . .} of indicies of the finite elements
{Ai1 , Ai2 , . . .} approximating x. More precisely, there is a binary total recursive function such that for all x A, k .(xR , k ) has range index(x).
In this context, a computable function f over A is implemented by a #f -ary

LAZY DATA DOMAINS

partial recursive function f R such that for all x1 , . . . , x#f A, the function
R
R
k.(f R (xR
1 , . . . , x#f ), k )) has range index(f (x1 , . . . , x#f )) .
Given the preceding motivation, we formalize the notions of computable
function and computable map as follows.
Definition 2.18. An element a of a domain A is accessible iff the index set
of the ideal corresponding to a is recursively enumerable.
Definition 2.19 (Computable). Given the domains A and B with effective
presentations and , an approximable map f [A , B] is computable iff
the index set {hi, ji | hi , j i f } for f is recursively enumerable. The function f determined by an approximable map f from A into B is computable
iff f is computable.
A computable function f : A B is computable in the sense that
given any accessible element x A (represented by the code xR ), we can
enumerate the set of basis elements that approximate f (x) B.
Definition 2.20 (Effectively isomorphic). Let A and B be domains with
effective presentations and . A and B are are effectively isomorphic iff
there exists a computable function h : A B that establishes an isomorphism between A and B.
Definition 2.21 (Signature). A signature is a pair hO , # i consisting
of a finite set of function symbols O distinct from V ar and a function #
mapping O into the natural numbers N specifying the arity # f for each
symbol f in O . If # f is 0, f is called a constant.
Notation For the sake of notational simplicity, we will frequently write
instead of O . For example, f means f O . Similarly, we will write
#f instead of # f in contexts where there is no ambiguity.
Definition 2.22 (Domain Algebra). A domain algebra D with signature
is a pair hD, D i consisting of a domain D and an interpretation function D
mapping each symbol o into an continuous #o-ary function oD (called
an operation) over D.
Definition 2.23 (Isomorphic algebras). Two algebras D1 and D2 with signature are isomorphic iff there exists a bijective (one-to-one and onto)
function : D1 D2 such that
the domains D1 and D2 are isomorphic under , and
for each operation symbol f ,
x1 , . . . , x#f D1 (f D1 (x1 , . . . , x#f )) = f D2 ((x1 ), . . . , (x#f )) .
The obvious difference between a domain and a domain algebra is that
a domain algebra identifies a collection of primitive operationsin addition to a universe of valuesthat form a set of building blocks for defining
new functions over the universe. In contrast, a domain leaves the primitive
operations on data unspecified.

10

LAZY DATA DOMAINS

Definition 2.24 (Term). Let be a signature and let V ar be a denumerable


set of symbols {x0 , x1 , . . .} called variables that are distinct from the symbols
in . A term M over is either
a variable xi V ar; or
a string of symbols f (1 , . . . , k ) where f ; the symbols in are
distinct from the metasymbols (, ), and ,; k = #f ; and for
each i , i is a term over .
If the term M contains no variables, it is variable-free. The set of terms over
the signature is . The set of variable-free terms over the signature is
denoted 0 .
Definition 2.25 (Environment). Let D be an arbitrary domain algebra with
signature . An environment is a function mapping the set of variables
V ar into A. Given D and , the meaning of an arbitrary term M over
(denoted D[[M ]] ) is inductively defined as follows:
If M has the form xj , then
D[[M ]] = (xj ).
If M has the form fi (M1 , . . . , M#f ), then
D[[M ]] = fi D (D[[M1 ]] ), . . . , D[[M#f ]] .
A domain algebra D with signature satisfies an equation = (where
and are terms over ) iff for all environments , D[[]] and D[[]] denote
the same element of the universe D. D satisfies a set of equations iff it
satisfies each equation in .
Definition 2.26 (Recursive Program). A recursive program P over a domain algebra D with signature is a finite set of equations of the form:
{f1 (x1 ) = M1 ,
f2 (x2 ) = M2 ,
..
.
fn (xn ) = Mn }
where n > 0; the set F of function symbols {f1 , f2 , . . . , fn } is disjoint from
Os and V ar; x1 , x2 , . . . , xn are (possibly empty) lists of variables from V ar;
each term Mi is composed solely from the variables xi and the operation
symbols O F . The functional corresponding to P is the n-ary function
mapping (D#f1 D) (D#fn D) into itself defined by the equation
(f1 , . . . , fn ) = hx1 . D[[M1 ]], . . . , xn .D[[Mn ]]i .
Theorem (Kleene) For every recursive program P over a domain algebra
D, the corresponding functional has a least fixed-point hf1 , . . . , fn i in the
domain (D#f1 D) (D#fn D) .

LAZY DATA DOMAINS

11

Corollary The domain algebra consisting of D augmented by least fixedpoint interpretation for the function symbols f1 , . . . , fn satisfies the recursive
program P .
Definition 2.27 (Recursively Definable). A function f : Dn D (where
n 0)1 is recursively definable in D iff there exists a recursive program P
such that f is one of functions in the least fixed-point hf1 , . . . , fn i of the
functional corresponding to P .
Definition 2.28 (Expressive, Reflectively Complete). A domain algebra
D = hD, D i with effective presentation is expressive iff every accessible
element of D is recursively definable in D. D is computationally complete iff
every computable function f : Dn D (n 0) is recursively definable in
D. D is reflexively complete iff the following three properties hold:
(1) D is expressive.
(2) There exists an effective subspace D, of D that is effectively isomorphic to the domain D , D under the function h : [D , D] D, .
(3) The function apply : D, D D defined by
f [D , D] x D apply(h(f ), x) = applyD (f, x)
is recursively definable in D.
Remark We will call elements of the subspace D, internal maps to distinguish them from the elements of the domain D , D.
Lemma 2.7. If an algebra D is reflexively complete, then it is computationally complete.
Proof Let f be an arbitrary computable n-ary function over D. Since D is
expressive and D , D is effectively isomorphic to D, , there is a recursive
program
f=M
that defines the curried internal map f corresponding to f . The recursive
program for f is the equation
f = M
f (x1 , . . . , xn ) = apply(. . . (apply(apply(f, x1 ), x2 ) . . . , xn ).
2.4. Projections on the Universal Domain. An interesting collection
of domains can be constructed from a collection of primitive domains (such
as Nat and Bool) by composing the Cartesian product and approximable
map constructions. However, there are many important domains such as
infinite cartesian products of primitive domains that are beyond the scope of
this simple scheme. Dana Scott has developed a much more comprehensive
approach to the problem of constructing domains based on the concept of a
universal domain.
1In the special case n = 0, we define D 0 as the empty universe containing the single
element . Hence, each element d D is identified with the continuous function mapping
tod.

12

LAZY DATA DOMAINS

Definition 2.29 (Universal Domain). A domain U with effective presentation is universal iff the following two properties hold:
(1) Every domain D is isomorphic to a subspace UD of U .
(2) Given an effective presentation for a domain D, there is an effective
subspace of U that is effectively isomorphic to D.
Since every domain D has an isomorphic image UD within a universal
domain U , the problem of defining an arbitrary domain can be reduced to
defining an arbitrary subspace of U . A simple, elegant way to identify an
arbitrary subspace UD of U is to define a map, called a projective map,
in U , U that uniquely characterizes the subspace UD . Moreover, since
U, is an effective subspace of U , there is an internal map Map D U,
corresponding to D.
2.4.1. Projections.
Definition 2.30 (Projective map). A projective map on a domain D is an
approximable map : D , D with the property that there exists a subset
P D0 satisfying the constraint
= {hx, yi | p P p v x y v p}.
In other words, is the downward, directed-closure of the set {hp, pi | p P }.
Definition 2.31 (Projective map domain). Given the domain D with presentation , the domain D of projective maps over D is the domain hE, vE i
with presentation where E is the set of projective maps of D, vE is the
subset ordering, and is the enumeration function for the set of all finite
maps D sorted by rank
X
2i
{i | hbi ,bi ij<ibj 6=bi }

It is easy to verify that D is a domain with presentation .


Lemma If the presentation of the domain D is effective, then the corresponding presentation of the domain D of projective maps over D is
effective. The domain P rojD with presentation is an effective subspace of
D , D. The accessible elements of D are precisely the projections that
identify effective subspaces of D.
2.4.2. Defining Projections within a Universal Domain. By the definition
of universality, a universal domain U must contain an effective subspace
U, that is isomorphic to U , U . Similarly, U, must contain an effective
subspace U that is isomorphic to U . Consequently, for any domain D
with effective presentation , we can define a unique internal projective map
D in U U, that uniquely characterizes a subspace UD that is isomorphic
to D.
Definition 2.32 (Universal domain algebra). An algebra U = hU, U i is
universal iff U is a universal domain and U is reflexively complete.

LAZY DATA DOMAINS

13

Remark Given a universal domain U , we can construct a universal algebra


as follows. Let U, denote the subspace of U that is isomorphic to U , U .
We identify a signature designating a finite set of functions O over U such
that:
(1) the operation apply : U, U U defined in Section 2.3 is recursively definable in hU, U i, and
(2) every recursively enumerable element of U is denoted by some variablefree term formed from O.
Moreover, since U is reflexively complete, there is an term f (composed
from O) for each operation f that is recursively definable in hU, U i, such
that

x1 , . . . , x#f U apply((. . . apply(f , x1 ), . . . ), x#f ) = f (x1 , . . . , x#f ) .

Notation To simplify the syntax of expressions over a universal algebra


U, we will adopt the following conventions. First, since there is an element f U, for each recursively definable operation f , we will use the
internal map f in place of each operation f except for constants and the
special operation apply. Hence, instead of the expression f (x, y) we will
write apply(apply(f , x), y). Second, we will abbreviate every application of
the form apply(u, v) by (u v). Third, we will elide parentheses by making
application left associative; hence u v w abbreviates ((u v) w). Finally, we
will abbreviate applications of the form f (g x) by f g x. This notation
is consistent with the conventions usually employed in the untyped lambda
calculus [Bare77].
Although there are many different possible formulations of the universal
domain, the particular choice is unimportant. Given an arbitrary universal
algebra U, we can recursively define (in terms of the primitive operations
O on the universal domain) the basic set of operations Olazy that we need
to construct lazy domains. Olazy consists of the internal projective maps
Bool , , and , identifying subspaces isomorphic to B (the flat domain
containing the elements {T, F, }) [U U ], and U , U , and the internal
maps

T, F
ifthenelse
or
not
left
S

:
:
:
:
:
:

B
B (U (U U ))
B (B B)
BB
U U
U, ((U U U U ))

and
por
pair
right
K

:
:
:
:
:
:

U B
B (B B)
B (B B)
U (U U )
U U
U (U U )

14

LAZY DATA DOMAINS

satisfying the axioms:


=
ifthenelse T x y = x
ifthenelse x y =
or x y = ifthenelse x T y
x = T y = T (por x y) = T
left (pair x y) = x
S x y z = x z (y z)

x 6= (x) = T
ifthenelse F x y = y
and x y = ifthenelse x y F
x 6= T y 6= T (por x y) = (or x y)
not x = ifthenelse x F T
right (pair x y) = y
Kxy = x .

The notation f : D1 D2 means that f is an internal map in U, such that


x D1 f x D2 .
The behavior of f on points outside of the domain D1 is not specified.
With the exception of por, S, and K, these maps are generalizations of
familiar operations from lazy LISP (where left, right, and pair correspond
to car, cdr, and cons). The declared domain for each map is its intended
domain of usage. Each map is actually defined over the entire universal
domain U ; domain declarations can be enforced if necessary by projecting
argument values outside the declared domain onto the declared domain D
(using the projection map D ).
Since Olazy includes the apply operation and the internal maps S and K,
we can form a variable-free term that denotes the map corresponding to any
function that is recursively definable in terms of the operations Olazy . It is
well known [Bare77] that any closed term (no free variables) in the (untyped)
lambda calculus can be converted to an equivalent term constructed solely
from applications of S and K. Moreover, the least fixed point operator
Y : (U U ) U that maps a continuous function into its least fixed point
is defined by the lambda expression
f . (x . f (x x)) (x . f (x x)).
The corresponding internal map Y is defined by:
Y = SAA
I = SKK
A = (S (S (K S) (S (K K) I))(S (K S)(K I)) (K I))
Consequently, the map corresponding to an arbitrary recursive definition
f (x1 , . . . , x#f ) = Mf
is simply
Y( x1 . . . . . x#f . Mf )
where x . N denotes the term (formed using S and K) signifying the internal map corresponding to the function x . N .
Notation As a notational convenience, we will use -expressions to denote
maps instead applications of S and Ksince they are much easier to read.
On a formal level, these -expressions simply abbreviate the corresponding

LAZY DATA DOMAINS

15

compositions of S and K. Similarly, we will elide applications of the Y


operator by using the equation
f x1 , . . . x#f = x1 . . . . . x#f . Mf
to abbreviate the recursive definition
f = Y( f . x1 . . . . . x#f . Mf
of the internal map f.
We will also use the standard infix abbreviations for applications of Boolean
maps:
if x then y else z
x or y
x and y
x por y

ifthenelse x y z
or x y
and x y
por x y .

3. The Construction of Lazy Domains


In constructing a composite space (such as a Cartesian product or discriminated union) from component spaces, we must decide how to form the
bottom element of the composite space, i.e., determine which constructed
objects are identified with the undefined composite object. This decision determines whether the constructor functions for the composite space require
lazy or industrious evaluation.
Let D1 and D2 be arbitrary effective subspaces of our universal domain
U characterized by the internal projection maps 1 and 2 in U, . Using the
Cartesian map pair : U (U U, ), we can form a wide variety of simple
composite spaces using the following domain constructions.
(1) Ordinary product. D1 D2 = {hx, yi | x D1 , y D2 }. The corresponding basic maps are:
Pair : D1 (D2 (D1 D2 ))
1st : (D1 D2 ) D1
2nd : (D1 D2 ) D2
: U (D1 D2 )

=
=
=
=

pair
z . left z
z . right z
x . Pair (1 1st x) (2 2nd x)

(2) Coalesced product. D1 D2 = {hx, yi | x D1 , y D2 , x 6= , y 6= }


{}. The corresponding basic maps are:
Pair : D1 (D2 (D1 D2 ))
1st : (D1 D2 ) D1
2nd : (D1 D2 ) D2
: U (D1 D2 )

=
=
=
=

x . y.if x and y then pair x y else


z . left z
z . right z
x . if x then Pair (1 1st x) (2 2nd x) else

16

LAZY DATA DOMAINS

(3) Separated product. D1 D2 = {hT, hx, yii | x D1 , y D2 } . The


corresponding basic maps are:
Pair : D1 (D2 (D1 D2 ))
1st : (D1 D2 ) D1
2nd : (D1 D2 ) D2
: U D1 D2

=
=
=
=

x . y . pair T (pair x y)
z . left right z
z . right right z
x . Pair (1 1st x) (2 2nd x)

(4) Coalesced sum. D1 D2 = {hT, xi | x D1 , x 6= }{hF, yi | y D2 , y 6= }


{} . The corresponding basic maps are:
inL : D1 (D1 D2 )
inR : D2 D1 D2
outL : (D1 D2 ) D1
outR : (D1 D2 ) D2
L? : (D1 D2 ) Bool
R? : (D1 D2 ) Bool
: U (D1 D2 )

=
=
=
=
=
=
=

x . if x then pair T x else


x . if x then pair F xelse
z . right z
z . right z
z . left z
z . not left z
x . if L? x then inL 1 outL x
else inR 2 outR x

(5) Separated sum. D1 +D2 = {hT, xi | x D1 }{hF, yi | y D2 }{}


. The corresponding basic maps are:
inL+ : D1 (D1 + D2 )
inR+ : D2 (D1 + D2 )
outL+ : (D1 + D2 ) D1
outR+ : (D1 + D2 ) D2
L? : (D1 + D2 ) Bool
R? : (D1 + D2 ) Bool
+ : U D1 + D2

=
=
=
=
=
=
=

x . pair T x
y . pair F y
z . right z
z . right z
z . left z
z . not left z
x . if L? x then inL+ 1 outL+ x
else inR+ 2 outR+ x

(6) Lifted domain. D = {hT, xi | x D} {}. Let D be the projection map corresponding to D. The basic maps corresponding to D
are:
delay : D D = x . pairT x
force : D D = z . right z
: U D = x . delay D force x
In constructing products and unions, there are three plausible symmetric
ways to handle composite objects containing an undefined component:
(1) A composite object (e.g. an ordered pair) containing an undefined
component is identified with the undefined object in the constructed
domain. Coalesced products () and sums () obey this convention.

LAZY DATA DOMAINS

17

(2) A constructed object containing at least one defined component is


distinguished from the bottom element of the composite domain. In
this case, two such objects are equal only if all of their corresponding
components are equal. Ordinary Cartesian products () obey this
convention.
(3) A composite object is always distinguished from the bottom element of the constructed domain. In this case, the bottom element is
outside the range of the constructor function corresponding to the
composite domain. Separated products ( ), separated sums (+),
and lifted domains ( ) all obey this convention.
Each of these three different approaches to constructing composite data
objects corresponds to a different evaluation protocol (sometimes called a
computation rule [Manna 74]) for evaluating applications of constructor
functions to argument expressions. The first scheme corresponds to conventional call-by-value computation: evaluate all argument expressions
before forming the composite object. We refer to this evaluation strategy
as industrious evaluation. The second scheme corresponds to dovetailing
the evaluation of all argument expressions until one of them converges, and
forming a composite lazy object (where the arguments other than the one
that converged remain unevaluated as closures [Hend80]). The third scheme
corresponds to forming a composite lazy object without evaluating any of
the argument expressions. We refer to both the second and third schemes
as lazy evaluation schemes because they permit components of a composite
data object to be undefined.
In a lazy composite object, unevaluated arguments are evaluated only
when the corresponding selector function (e.g. car and cdr in lazy LISP)
is applied to the composite object. If such an application does not occur
in the course of executing a program, the corresponding argument is never
evaluated.
The lifting operator provides an explicit mechanism for constructing a
domain of suspended or unevaluated elements corresponding to a given
domain D. Note that the composition of the lifted domain construction with
the coalesced product construction is identical to the separated product
construction, i.e.,
A B = A B .
Similarly, the separated sum construction can be defined in terms of the
appropriate composition of the lifting operator with the coalesced sum construction:
A + B = A B .
Consequently, without loss of generality, we can confine our attention (when
it is convenient) to the four domain constructors: (ordinary product),
(coalesced product), (coalesced sum), and (lifting operator).

18

LAZY DATA DOMAINS

4. A Taxonomy of Lists
The variety of mechanisms available for constructing lazy domains suggests that there may be several different lazy domains that correspond to
an ordinary (industrious) recursive data domain (such as lists)each with
subtly different properties. In fact, the number of semantically distinct possibilities is surprisingly large. We will illustrate this phenomenon by studying list domains in detail. In particular, we are interested in determining
and classifying the possible lazy variations on the domain algebra consisting
of the subspace L

(0)

L = A (L L) ,

and the set of operations OL

:
Pair :
cons :
cdr :
A? :

L
L
L2 L
LL
LL

A
true, false, a1 , a2 , . . .
car
if-then-else
Pair ?

:
:
:
:
:

L
L
LL
L3 L
LL

where true, false, a1 , a2 , . . . are constants denoting lists that are atoms. We
presume that A is an unspecified flat, expressive subdomain of U including
the elements true and false and a set of object isomorphic to the natural
numbers N.
The domain List defined in equation (0) is the project characterized by
the projection map

List = u .
if L? u then inL A outL u
l
else inR (L 1st outR u) (L 2nd outR u))

where A is the projection for A. In accordance with the conventions we


adopted in Section 2.5, we will define the maps in U determining the operations OL . The corresponding internal maps of U representing the operations

LAZY DATA DOMAINS

19

in OL satisfy the equations:


=
A = inL
Pair = cons
true = inL T
false = inL F
ai = inL a0i
cons = x . y . inR Pair x y
car = x . 1st outR x
cdr = x . 2nd outR x
ifthenelse = x . y . z . ifthenelse (outL x) y z
atom? = x . if L? xthen T else F
pair? = x . if R? xthen T else F
where a0i denotes the element of A corresponding to ai .
In the process of classifying lazy variations on the domain L, we will identify which ones correspond to the two major implementation-oriented semantic definitions for Lazy LISP presented in the literature [Hend76, Frie76].
Our investigation will demonstrate that that apparently innocuous variations in the definition of recursive data domains have profound semantic
consequences.
The obvious syntactic variations on industrious List domain defined above
replace by +, or by or . The variant domains are:
(1)
(2)
(3)
(4)
(5)

L
L
L
L
L

=
=
=
=
=

A + (L L)
A + (L L)
A (L L)
A (L L)
A + (L L)

In each variant domain, the primitive operations OL are defined in the


obvious way analogous to their definition in domain (0). For example, in
variation (1), the functions cons, car , cdr are determined by the following
maps:
cons = x . y . inR+ Pair x y
car = x . 1st outR+ x
cdr = x . 2nd outR+ x
We will subsequently consider other possible variations that involve the
explicit use of the operator.
As a gross categorization, we can classify list domains on the basis of
whether they accommodate infinite lists. The ordinary industrious domain

20

LAZY DATA DOMAINS

(0) does not, but all of the lazy variants (1)-(5) do. For example, the list
zeros defined by the equation
zeros = cons 0 zeros
denotes the undefined element of the industrious domain (0) while it
denotes a linear list of 0s in each of the other domains (1)-(5).
Within the class of domains that support infinite objects, there are significant differences in the kinds of infinite and undefined objects that can
appear within infinite and partial objects. By applying this form of analysis,
we can demonstrate that the first four domains (1)-(4) have fundamentally
different internal structure. We can also show that domain (5) is distinct
from the other domains, but the difference between it and domain (1) is
not significant because the two domains (and corresponding algebras) are
isomorphic.
In domain (1), lists can contain undefined atoms (the element hT, i,
undefined pairs (the element hF, i, and undefined lists (). In domain (2),
lists can contain undefined atoms and the undefined pair but not undefined
lists. In domain (3), lists can contain undefined lists but not undefined
atoms and undefined pairs. In domain (4), lists can contain undefined lists
and undefined pairs, but not undefined atoms. In domain (5), as in domain
(1), lists can contain undefined atoms, undefined pairs, and undefined lists.
However, domain (5) contains a different form of undefined pair (hT, hT, ii)
than domains (1), (2), and (4).
By inspecting a few simple examples, we can easily prove that the first
four lazy domains are distinct (non-isomorphic); corresponding computations yield different answers. In domain (1), we can define
(1) the infinite list containing no atoms;
(2) the infinite sequence containing undefined lists () alternating with
zeros; and
(3) the list consisting of undefined atoms
by the expressions
(1) BigTree = cons BigTree BigTree
(2) AltSeq = cons (cons 0 AltSeq), and
(3) A .
However, in the other three domains (2)-(4), at least one of the corresponding lists does not exist. In domain (2), AltSeq denotes the undefined
pair Pair ; lists may not contain undefined lists. In domain (3), both
BigTree and A denote the undefined list ; every defined list must contain a defined atom. In domain (4), Atom denotes the undefined list ;
lists cannot contain undefined atoms. Hence, domains (1), (2), (3), and
(4) are structurally distinct (nonisomorphic); the set of finite elements is
fundamentally different in each case.
Although each pair (created by a cons operation) in domain (5) contains
a redundant level of lifting, domain (5) is isomorphic to domain (1) under

LAZY DATA DOMAINS

21

the function h : U U determined by the map


h = x . if L? x then xelse pair T h(right right x).
The function h simply strips one level of lifting from the representation of
every L pair. The interested reader should confirm that all of the operations
in OL (restricted to their respective domains) are preserved by h.
With the aid of the operator, we can define an even wider class of lazy
list domains. First, we can define three more basic variations on lazy lists
(domains (6), (7), and (8) below) completing an enumeration of the eight
possible ways (domains (0)-(8) excluding (5)) to include or exclude undefined
atoms, undefined pairs, and undefined lists. Second, we can define pairing
operators that are lazy in only one argument (unlike P air , P air ). Finally, we can add redundant levels of delayed evaluation in the formation of
either atomic lists or paired lists analogous to the extra level that appears
in paired lists in domain (5). Since every domain in the final class (involving
redundant levels of lifting) is isomorphic to a domain outside the class, we
will not discuss this class any further.
To facilitate classifying the extra domains, we rewrite the definitions of
the five basic lazy list domains (1)-(5) in terms of the operators , , +,
, and :
(1)
(2)
(3)
(4)
(5)

L
L
L
L
L

=
=
=
=
=

A (L L)
A (L L)
A (L L)
A (L L)
A [(L L) ] .

In this standardized form, the close relationship between domain (5) and
domain (1) is evident.
The remaining interesting variations on lazy lists are:
(6)
(7)
(8)
(9)
(10)
(11)
(12)

L
L
L
L
L
L
L
.

=
=
=
=
=
=
=

A (L L)
A (L L)
A (L L)
A (L L)
A (L L )
A (L L)
A (L L )

Variation (6) accommodates undefined atoms and undefined lists, but not
undefined pairs. Variation (7) does exactly the opposite; it accommodates
undefined pairs, but not undefined atoms or lists. Variation (8) is only
marginally lazy: within lists it accommodates undefined atoms, but not
undefined lists or undefined pairs. Variations (9), (10), (11), (12) all delay

22

LAZY DATA DOMAINS

the evaluation of only one argument of a paired list. As a result, domains (9)
and (11) allow infinitely deep lists but not infinitely long ones while domains
(10) and (12) do the opposite. Spaces (9) and (10) prohibit undefined atoms
while domains (11) and (12) accommodate them.
At this point, the question arises: which denotational definition of lazy
lists corresponds to the standard implementation-oriented definition given
in the literature [Frie76]? The answer is (4), because their domain accommodates undefined lists and undefined pairs but not undefined atoms.
The situation is somewhat more complicated in the case of the semantics
presented in [Hend76]. Their semantic definition describes a domain isomorphic to (1), but the definable data points are contained within a subdomain
isomorphic to (4), because the operations in their domain cannot generate
undefined atoms.
5. Axiomatizing Lazy Domains
Since there are significant differences between various formulations of lazy
data domains, it is important to develop clear, comprehensive axiomatic
definitions for the alternatives. Naively, we might attempt to specify a lazy
domain like
(1)

L=A+LL

(given an axiomatization for A) by devising a list of equations such as those


presented in section 3 and designating the lazy domain as the corresponding
initial algebra [ADJ76,77] (or alternatively the corresponding final algebra
[Kami80]). From our previous discussion, it seems reasonable to conjecture
that this task will be deceptively difficult given the variety of lazy domains
available. In fact, it is impossible. No recursively enumerable set of equations can specify a non-trivial lazy domain as either the initial or final algebra
corresponding to the specification. We will formally prove this fact after we
establish a few important properties of lazy domains.
Unlike ordinary data domains, lazy domains have infinite strictly ascending chains of objects d0 v d1 v d2 v . . . (where v denotes the approximation relation introduced in Section 3) where each object di is constructed in
exactly the same way as di+1 except that di uses to approximate substructures of di+1 . In ordinary industrious data domains (such as LISP lists), the
undefined object cannot be embedded inside constructed objects, which
precludes the existence of infinite ascending chains of successively more complete approximations.
This apparently small change in the definition of data constructors (e.g.
the LISP cons operation) profoundly changes the structure of the data domain. Ordinary structural induction, for example, no longer holds, because
lazy domains contain the limit elements of infinite ascending chainswhich
cannot be constructed from primitive constants (e.g. atoms) in a finite number of steps. For example, in the domain of industrious lists, L(0) , let the

LAZY DATA DOMAINS

23

operation leafcount be recursively defined by the equation:


leafcount(x) = if Atom?(x) then 1 else leafcount(car (x)) + leafcount(cdr (x)),
where if then else abbreviates if-then-else(, , ) and the addition operation (+) is defined on integer atoms in the usual way. Then the following
theorem is easily proved by structural induction on x:
x [x 6= leafcount(x) > 0] .
On the other hand, as soon as we extend the domain L(0) to include limit
points, the principle of structural induction fails. In a version of L including
the object BigTree (such as L(1) ), the preceding theorem is clearly false.
Since lazy domains include limit points, they have a much more complex
topological structure than their industrious counterparts. An important
illustration of this phenomenon is the following observation. Let Triv denote
the trivial subspace of U consisting of the objects T and . Although the
industrious domain
TrivSeq0 = Triv TrivSeq0
is completely degenerate (it contains no elements other than ), the corresponding lazy domain
TrivSeq = Triv TrivSeq
is isomorphic to Scotts P model for the untyped lambda calculus under
the mapping h defined by:
h(x) = {i | xi = T}
where xi denotes the ith element of x = hx0 , x1 , . . . , xj , . . . i .
P is the domain consisting of all subsets of the natural numbers under
the approximation ordering defined by the subset relation. If we modify
the definition of a domain by adding the requirement the every domain
must contain a maximum element and we weaken the definition of subspace
as discussed in Section 2.1, then P is a universal domain. Hence, P
contains a subspace P , that D is isomorphic to P , P . Moreover,
if we augment the domain P by a very small set of operations OP , the
resulting alebra is universal. OP consists of the constant 0 denoting the
singleton set {0}, the primitive binary operation apply : P 2 P (defined
exactly as in Section 2.3), and the the primitive maps (which are constant
operations)
suc
pred
ifthenelse
K
S

:
:
:
:
:

P
P
P
P
P

P
P
(P (P P ))
(P P )
(P (P P ))

24

LAZY DATA DOMAINS

defined by
suc x
pred x
ifthenelse x y z
K
S

=
=
=
=
=

{e + 1 | e x}
{e | e + 1 x}
{e | e y 0 x} {e | e z 1 y}
x . y . x)
Map(x . y . z . apply(apply(x, z), apply(y, z))) .

where Map : (P , P ) P , is the function mapping continuous functions on P to their images in P . Surprisingly, all of these operations are
recursively definable in the a domain containing the lazy subspaces TrivSeq
and Triv together with the obvious structural operations
T,
por , and
cons
hd
tl

:
:
:
:
:

Triv
Triv2 Triv
Triv TrivSeq TrivSeq
TrivSeq TrivSeq
TrivSeq TrivSeq.

Note that the Cartesian product symbol in the definition above indicates that cons is a binary functionnot a unary function on pairs. The
recursive programs defining the primitive operations OP over appear in the
Appendix.
P together with the binary operation apply : P 2 P and maps S and
K, forms a model for the untyped lambda calculus (excluding -reduction).
Consequently, the lazy space TrivSeq together with the corresponding operations also constitutes a model for the untyped lambda calculus. TrivSeq is
a particularly attractive model for computer scientists, because it is based
on widely understood concepts from applicative programming. Lazy domains are the natural higher order generalization of familiar recursive
data structures.
The most widely publicized method for specifying data domains in the
literature is equational specification (frequently called algebraic specification). We formally define the method as follows.
Definition 5.1 (Quotient Algebra). Let be an algebra and let be an
equivalence relation on the universe D. is compatible with iff every
operation f in is well-defined with respect to . If is compatible with
the quotient algebra / corresponding to is the pair hD/, OD/ i.
is called the parent algebra corresponding to /.
Definition 5.2. The free term algebra with signature is the algebra
with universe T and interpretation function O defined by
O (f ) = f
where
f (x1 , . . . , x#f ) = f (x1 , . . . , x#f )
A term algebra is a quotient algebra corresponding to a free term algebra.

LAZY DATA DOMAINS

25

Definition 5.3 (Equational Specification). An equational specification


with signature is a pair hE, Ii consisting of a recursively enumerable set
of equations E over and a non-empty, recursively enumerable set of inequations I over .
Remark The usual convention in equational specification is to force all
signatures to include the constants true and false and to fix the set of
inequations I as the single inequation true 6= false. It is easy to show that
this convention does not restrict the class of definable algebras.2
Definition 5.4 (Relation determined by equational specification). The binary relation MustEqual determined by an equational specification =
hE, Ii is defined by
MustEqual E |= = .
Lemma MustEqual is an equivalence relation.
Definition 5.5 (Initial Algebra). The initial algebra determined by is the
term algebra determined by the equivalence relation MustEqual.
Definition 5.6 (Consistent equational specification). An equational specification is consistent iff it has a model.
Theorem An equational specification is consistent iff the initial algebra
is a model for .
Definition 5.7 (CannotEqual). The binary relation CannotEqual determined by an equational specification = hE, Ii is defined by
CannotEqual E, I |= A 6= .
Definition 5.8 (Sufficiently complete). An equational specification is
sufficiently complete iff the complement of the relation CannotEqual is an
equivalence relation.
Definition 5.9 (Final algebra). The final algebra determined by a sufficiently complete equational specification is the term algebra determined
by the complement of the CannotEqual relation.
We have finally developed enough machinery to assess the capacity of
equational specifications to describe lazy domains. We begin with two observations.
First, there is an obvious discrepancy in cardinality between domains defined by equational specification and non-trivial lazy domains. Since the
domains defined by equational specifications are finitely generated, they
must be countable. Yet non-trivial lazy domains (like TrivSeq) contain uncountably many elements. Obviously, there is no way that an equational
specification can define a non-trivial lazy domain if we force the defined
2Assuming that we expand the signature to include hidden operations when

necessary.

26

LAZY DATA DOMAINS

domain to include all of the limit points of directed sets of basis elements.
But this requirement is too stringent. The only elements of a lazy domain
that are accessible in a program are the ones with approximating sets (of
basis elements) that are recursively enumerable. Since there only countably
many recursively enumerable subsets of the basis set, the computationally
significant portion of a lazy domain is countable.
Without performing a deeper analysis than cardinality arguments, we
cannot rule out the possibility that equational specification can define a
domain that contains an isomorphic image of all of the accessible elements
of a non-trivial lazy domain. In fact, a machine implementation of a lazy
domain obviously includes only countably many elements; yet, it suffices
as a crude (non-extensional) formal characterization of the lazy domain.
Similarly, given any logical theory for a non-trivial lazy domain (including
all of the limit points), we can construct a countable non-standard model.3
Consequently, it is meaningful to ask whether equational specification can
define domains containing an isomorphic image of the accessible elements
of a non-trivial lazy domain. We will use computability arguments to show
that the answer to this question is no. In fact, we will prove a much stronger
result, namely that equational specification is too restrictive to define any
non-trivial computationally expressive domainlazy or industrious! The
crux of the problem is that the equality relation in an initial specification is
recursively enumerable and the inequality relation in a final specification is
recursively enumerable, yet neither the equality relation nor the inequality
relation in a non-trivial, computationally expressive domain is recursively
enumerable.
Definition 5.10 (Non-trivial domain). A domain D is non-trivial iff it contains a recursively enumerable, infinite set E = {e0 , e1 , . . . , ei , . . .} of mutually incomparable (under the approximation ordering vD ) basis elements.4
Theorem 5.1. Neither the initial interpretation nor the final interpretation of an equational specification can define a non-trivial, computationally
expressive algebra.
Proof. Let A be a non-trivial, computationally expressive algebra with the recursively enumerable set E = {e0, e1, . . . , ei, . . .} of incomparable basis elements. We can encode the set of natural numbers N within A by using each basis element ei in E to represent the corresponding natural number i. The effective subdomain E⁺ = E ∪ {⊥} is clearly isomorphic to the flat domain N⊥ consisting of N augmented with the undefined element ⊥. Since A is computationally expressive and E⁺ is an effective subdomain of A, A has the following two properties. First, for each basis element ei, there is a variable-free term Mi over the signature of A denoting ei. Similarly, there is a variable-free
term M⊥ denoting ⊥. Second, for every computable function f mapping the flat subdomain E⁺ into itself, there is a corresponding term Mf(x) such that Mf(ei) denotes f(ei). The space of computable functions over E⁺ is obviously isomorphic to the space of computable functions over N⊥.

3 This result is an immediate consequence of the Löwenheim-Skolem theorem; see Enderton [Ende72] for more details.
4 Technically, it is the index set corresponding to E that is recursively enumerable.
We prove by contradiction that no initial or final equational specification characterizes A. The proof breaks down into two cases: (a) the inadequacy
of initial specifications and (b) the inadequacy of final specifications.
Case (a). Assume there is an initial equational specification that defines A. Then the set of pairs of equivalent variable-free terms over its signature is recursively enumerable. Since every computable function f over N⊥ is represented by a term Mf over the signature, the set of divergent computations over E⁺ (the variable-free terms equivalent to M⊥) must be recursively enumerable. But this contradicts the fact that the halting problem is undecidable.
Case (b). Assume that there is a final equational specification that defines A. Then the set of pairs of inequivalent variable-free terms over its signature is recursively enumerable. Let Δ(x) be the term representing the function D defined by

    D(x) = e0   if x ≠ ⊥
    D(x) = ⊥    otherwise.

Let Θ be the set of variable-free terms of the form Δ(M), where M is an arbitrary variable-free term over the signature. Θ is clearly recursively enumerable. Hence, the subset Θ0 of terms in Θ that are inequivalent to M0 must be recursively enumerable. Similarly, the set of terms {M | Δ(M) ∈ Θ0} is recursively enumerable. But this set is simply the set of divergent computations over E⁺, which is not recursively enumerable. Hence, we have a contradiction.
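Note that the function D used in Case (b) is itself easily computable; only the enumeration of the terms inequivalent to M0 is problematic. A minimal Haskell sketch, under the assumption that elements of E⁺ behave like values of a flat Haskell type (dFun and e0 are our names):

    -- D(x) = e0 if x ≠ ⊥, and ⊥ otherwise: force the argument, return e0.
    dFun :: a -> b -> b
    dFun x e0 = x `seq` e0

    -- dFun m e0 converges exactly when m does, so an effective enumeration
    -- of the terms Δ(M) inequivalent to M0 would enumerate the divergent
    -- terms M, which is impossible.
    main :: IO ()
    main = print (dFun (2 + 2 :: Int) "e0")   -- prints "e0"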

Since lazy domains have essentially the same structure as Pω, an obvious approach to formulating a logic for lazy domains is to use a higher order logic based on the lambda calculus (similar to Edinburgh LCF) that conveniently expresses the properties of Pω.5 However, we would prefer not to abandon first-order logic for two reasons. First, first-order theories (such as first-order Peano arithmetic) based on structural induction provide a simple, elegant characterization of ordinary data domains. In this context, recursive programs can be formalized as definitions extending the theory. The highly successful Boyer-Moore LISP Verifier [Boye75, Boye79] is based on such a first-order system.6 We would like to extend this approach to handle lazy lists as well. Second, the completeness theorem for first order logic provides an invaluable tool for analyzing the deductive power of any theory. If a first
order theory is too weak to establish a particular theorem, there must be a
non-standard model in which that theorem is false. In higher order logics, on
the other hand, a theory may be too weak to prove an important theorem,
yet there may be no model that refutes it.
5 See [Gile78] for an LCF axiomatization of lazy lists.
6 A rigorous explication of this approach to program semantics appears in [Cart84].
6. A First-Order Theory of Lazy Domains


The chief obstacle to extending ordinary first-order structural induction
theories to lazy domains is that conventional structural induction is applicable only to well-founded sets, yet lazy domains under the (proper) containment (substructure) ordering determined by the constructors are not well-founded because a limit element (e.g. BigTree) can properly contain itself.
Let A = ⟨D, G⟩ be a domain algebra with signature Σ such that:
(1) G contains two constants true and false denoting inconsistent finite elements of D and the standard ternary conditional function if-then-else defined as in Section 4.
(2) G contains a finite set of constructor functions C = {c1, . . . , cn} that generate the basis of D. In other words, C satisfies the following properties.
(a) For every basis element b ∈ D0, there exists a term composed solely from operations in C that denotes b.
(b) For all c ∈ C,
    ∀x1, . . . , x#c ∈ D0   c(x1, . . . , x#c) ∈ D0.
(c) For all ci, cj ∈ C,
    ∀x1, . . . , x#ci, y1, . . . , y#cj ∈ D0
      [ci(x1, . . . , x#ci) ⊑ cj(y1, . . . , y#cj) ∧ ci(x1, . . . , x#ci) ≠ ⊥
        ⟹ (i = j ∧ x1 ⊑ y1 ∧ . . . ∧ x#ci ⊑ y#ci)].
(d) For each constructor c ∈ C, G contains selector functions sj, j = 1, . . . , #c such that
    sj(c(x1, . . . , x#c)) = xj   if c(x1, . . . , x#c) ≠ ⊥
and a characteristic function c? : D → B such that
    c?(x) = ⊥   if x = ⊥
    c?(x) = T   if x ≠ ⊥ ∧ c(s1(x), . . . , s#c(x)) = x
    c?(x) = F   otherwise.
(e) The basis D0 of D forms a well-founded set under the substructure ordering ≺, which is the transitive closure of the binary relation
    ⋃ { ⟨xj, c(x1, . . . , x#c)⟩ | c ∈ C, j ∈ {1, . . . , #c}, x1, . . . , x#c ∈ D0, c(x1, . . . , x#c) ≠ ⊥ }.
Note that the substructure ordering ≺ is not an approximation ordering.
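Clauses (b)-(d) can be read off directly from a lazy implementation. Below is a minimal Haskell sketch (our own names; only one binary constructor is shown) of a non-strict constructor together with its selectors and characteristic function; pattern matching forces the argument, so the recognizer yields ⊥ on ⊥, as clause (d) requires.

    data LazyList a = Nil | Cons a (LazyList a)

    cons :: a -> LazyList a -> LazyList a
    cons = Cons                         -- non-strict in both arguments

    car :: LazyList a -> a
    car (Cons x _) = x                  -- s1(cons(x1, x2)) = x1 when the argument is not ⊥
    car Nil        = undefined

    cdr :: LazyList a -> LazyList a
    cdr (Cons _ y) = y                  -- s2(cons(x1, x2)) = x2
    cdr Nil        = undefined

    -- cons?: diverges (⊥) on ⊥, T on a cons cell, F otherwise.
    isCons :: LazyList a -> Bool
    isCons (Cons _ _) = True
    isCons Nil        = False

    main :: IO ()
    main = print (car (cons (1 :: Int) undefined))   -- prints 1: the tail is never forced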


If D is industrious, then D = D0, and the substructure ordering ≺ on D is the conventional well-founded ordering used in the structural induction scheme for D. It is a straightforward (but tedious and error-prone) task to devise a first order axiomatization (comparable in deductive power to the first order formulation of Peano's axioms) for an industrious domain D consisting of:


(1) implications between equations relating the operations in G (e.g., constructors, selectors, characteristic functions, if-then-else);
(2) inequations asserting that the Boolean truth values T, F, and the undefined object ⊥ are all distinct;
(3) axioms describing the substructure ordering ≺ and the approximation ordering ⊑ (which are both predicates);
(4) the structural induction scheme
    ⋀_{c ∈ C} [∀x1, . . . , x#c (⋀_{i = 1, . . . , #c} φ(xi) ⟹ φ(c(x1, . . . , x#c)))] ⟹ ∀x φ(x)
or, equivalently,
    ∀x [∀x′ (x′ ≺ x ⟹ φ(x′)) ⟹ φ(x)] ⟹ ∀z φ(z).
A detailed account of this process appears in [Cart80].
The corresponding problem for lazy domains D is more subtle. If we
construct the axiomatization described above for a lazy domain D, then
the specified domain contains only the finite objects (basis elements) of
the lazy domain.7 The structural induction scheme (4) has the effect of
banning infinite objects (limit points) from the domain. In fact, if we extend
the axiomatized algebra to include the characteristic predicate Finite? for
finite objects and augment the axiomatization by a sentence asserting that
constructors map finite objects to finite objects, then we can prove
    ∀x Finite?(x) = T
by structural induction.
As a result, recursive definitions over the domain may not have least fixed points because directed sets do not necessarily have least upper bounds. For example, if we consider a domain consisting of the finite objects in TrivSeq, the function definition
    f(x) = cons(true, f(x))
is contradictory, because we can prove by structural induction that
    ∀x, y [x ≠ cons(y, x)].
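By contrast, under lazy evaluation the same definition is perfectly sensible: it denotes the infinite sequence of trues, which is exactly the kind of limit point that the finite-objects-only axiomatization excludes. A minimal Haskell sketch:

    -- f x = cons(true, f(x)) is a productive definition in a lazy language.
    f :: a -> [Bool]
    f x = True : f x

    main :: IO ()
    main = print (take 5 (f ()))   -- [True,True,True,True,True]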
If we replace induction scheme (4) by an induction axiom scheme restricted to finite objects:
(4′)   ∀x [IsFin(x) ⟹ [∀x′ [x′ ≺ x ⟹ φ(x′)] ⟹ φ(x)]] ⟹ ∀z [IsFin(z) ⟹ φ(z)],
then the lazy domain is a model for our axiomatization, but so is the subspace containing only finite objects. In such a theory, we could not prove
any interesting statements about infinite objects.
7 Non-standard models may contain infinite objects, but their behavior does not resemble that of lazy data objects. For a detailed discussion of this issue, see [Cart84].

7. A Satisfactory Axiomatization
The solution to the problem is to augment the axiomatization consisting
of (1), (2), (3), and (4′) above by two additional schemes asserting that:
(5) Every definable directed set has a least upper bound.
(6) Every term t(x) over the domain operations G is continuous in the variable x.
and by an extra axiom asserting that:
(7) Every element is the least upper bound of the set of finite elements that approximate it.
They are formalized as follows. Let φ(u) and t(u) be an arbitrary formula and term, respectively, in the language of the data domain, and let x, y, z be variables not free in either φ(u) or t(u). Let Diru{t(u) | φ(u)} abbreviate the formula
    ∀x, y [φ(x) ∧ φ(y) ⟹ ∃z (φ(z) ∧ t(x) ⊑ t(z) ∧ t(y) ⊑ t(z))]
which asserts that {t(u) | φ(u)} is a directed set. Let ⊔u{t(u) | φ(u)} = v abbreviate the formula
    ∀x [φ(x) ⟹ t(x) ⊑ v] ∧ ∀z [∀x (φ(x) ⟹ t(x) ⊑ z) ⟹ v ⊑ z]
which asserts that v is the least upper bound of the set {t(u) | φ(u)}.8 Then the additional axiom schemes and axiom are:
(5) (the existence of least upper bounds)
    Diru{t(u) | φ(u)} ⟹ ∃v [⊔u{t(u) | φ(u)} = v]
(6) (the continuity of functions)
    ⊔u{u | φ(u)} = v ⟹ ⊔u{t(u) | φ(u)} = t(v)
(7) (the finite basis property)
    ⊔u{u | u ⊑ x ∧ Finite?(u)} = x
where t(u) and φ(u) are an arbitrary term and formula containing no free variables other than u.
Scheme (5) asserts that if the set {t(u) | φ(u)} is directed, then it has a least upper bound. Scheme (6) asserts that if the set {u | φ(u)} has a least upper bound v, then the function λu. t(u) is continuous at v. Axiom (7) asserts that x is the least upper bound of the set of finite elements approximating x.
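Axiom (7) is the logical counterpart of the familiar approximation idiom for lazy lists: every list, finite or infinite, is the least upper bound of the directed set of its finite truncations. A minimal Haskell sketch of the truncation function (approxN is our name; the least upper bound itself is not computed, we can only observe ever larger approximations):

    -- approxN n xs is a finite element approximating xs: it copies the first
    -- n constructors of xs and replaces the remainder by ⊥ (undefined).
    approxN :: Int -> [a] -> [a]
    approxN 0 _        = undefined
    approxN _ []       = []
    approxN n (x : xs) = x : approxN (n - 1) xs

    -- The chain approxN 0 xs ⊑ approxN 1 xs ⊑ ... is directed and its least
    -- upper bound is xs itself, mirroring schemes (5) and (7).
    main :: IO ()
    main = print (take 3 (approxN 3 ([1 ..] :: [Int])))   -- [1,2,3]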
Although there are no blatant sources of incompleteness in this axiomatization9 (consisting of (1), (2), (3), (4′), (5), (6), and (7)), it is not obvious
8 Note that u is not free in either Diru{t(u) | φ(u)} or ⊔u{t(u) | φ(u)} = v.
9 For a non-trivial lazy domain, the axiomatization is obviously not complete by Gödel's first incompleteness theorem.


that the system is strong enough to prove all of the important properties
of particular lazy domains. For this reason, it is interesting to compare the
power of our first-order system with the corresponding theory in LCF, a
logic specifically designed to accommodate higher order domains like Pω.
The LCF theory looks similar except:
(1) It includes the typed lambda calculus in the term syntax for the
logic.
(2) The induction axiom scheme is fixed point induction on recursively
defined functions. This scheme has the form
    (φ(⊥) ∧ ∀f [φ(f) ⟹ φ(Mf)]) ⟹ φ(Y(λf. Mf))
where φ(f) is a formula that admits induction on f. Fixed-point induction is applicable only to admissible formulas, where admissibility
is a complex syntactic test (described in [Gord77]) that analyzes the
types of terms within the formula.
The closest analog of structural induction in LCF is fixed point induction on a projection characterizing the domain of interest. The fixed point
induction scheme has the form:
(8)   (φ(⊥) ∧ ∀f [φ(f) ⟹ φ(Mf)]) ⟹ φ(Y(λf. Mf))
where f is a function variable, Mf is a term of the same type as f, φ(f) is a formula that admits induction on f, and Y is the least fixed point operator.
After studying the two systems, we were surprised to discover that our
system subsumes LCF both in expressiveness and deductive power. In particular, we can systematically translate arbitrary LCF statements into equivalent statements in our first order system by:
(1) Converting all lambda expressions into equivalent expressions formed
using the standard S and K combinators.
(2) Converting all function applications to explicit applications (using
the primitive operation apply) of corresponding maps.
Unlike many translations between formal systems, this translation does
not mutilate the syntactic structure of the original formula. In fact, if we use
the abbreviated notation for terms described in section 2.4, the first order
translation of an LCF formula is identical to the original formula!
Under this translation, all of the LCF proof rules and axioms (expressed
in terms of translated formulas) are derivable in our first-order system. In
particular, we can derive the LCF fixed point induction scheme for admissible formulas. The derivation critically relies on the structural induction
scheme for finite objects (4′), the least upper bound scheme (5), and
the continuity scheme (6).
We call the first order analog of fixed-point induction lazy induction. If we use the abbreviated notation described in Section 2.4, then the lazy induction
scheme is identical in appearance to the fixed point scheme (8). The formal
derivation of lazy induction within our system is a tedious induction on the


structure of formulas that is beyond the scope of this paper, but the basic
idea underlying the proof is instructive.
The admissibility test in LCF ensures that passing to the limit of a directed set (of lazy data objects) does not change the meanings of subformulas that determine the truth of the entire formula. The idea behind the
derivation is that the metamathematical justification for fixpoint induction
on a function within a particular admissible formula can be translated into a
proof in our first order system consisting of two parts. The first part utilizes
conventional structural induction to establish that the formula holds for all
finite approximations to the function. The second part extends the result to
the entire function (an infinite lazy object) by appealing to the definition of
admissibility and the fact that all functions in the domain are continuous.
Although the admissibility test required for lazy induction is awkward, the rule can be a useful shortcut in certain situations. A particularly important example is lazy induction on the projection πD characterizing the recursive data type D defined by the domain equation
    D = D^n1 + . . . + D^nk
where n1, . . . , nk are positive integers. For each component D^ni of D, let ci?, ci, and si,j, j = 1, . . . , ni denote the recognizer, constructor, and selector functions, respectively, used to identify, build, and tear apart objects of form D^ni within D. Then πD is defined by the equation:
    πD = λx . if c1? x then c1(πD s1,1 x) . . . (πD s1,n1 x)
              else if . . .
              else if ck? x then ck(πD sk,1 x) . . . (πD sk,nk x)
              else ⊥.
When we apply lazy induction to this projection, the premises of the rule reduce to the premises of conventional structural induction for the finite objects of the domain. Similarly, the conclusion of the rule reduces to an assertion that the hypothesis holds for all objects in D. Hence, if a formula is admissible, conventional structural induction establishes that the formula holds for all objects in D, not just finite ones!
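For lazy lists the projection specializes to the function sketched below (a Haskell rendering under our own naming; the atom case is modelled by the empty list). It rebuilds its argument constructor by constructor and therefore behaves as the identity on every list, finite or infinite, which is why lazy induction on it yields conclusions about all of D.

    projList :: [a] -> [a]
    projList xs = case xs of
      []       -> []                -- the "atom" component is passed through
      (y : ys) -> y : projList ys   -- rebuild the cons cell, project the tail

    main :: IO ()
    main = print (take 4 (projList (cycle [1, 2 :: Int])))   -- [1,2,1,2]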
7.1. Sample Program Proofs. Consider the recursive definition
    append(x, y) = if Atom? x then y
                   else cons(car(x), append(cdr(x), y))
over the data domain L(1). The following formula
    ∀x, y, z append(x, append(y, z)) = append(append(x, y), z)
is obviously true on the domain of finite objects (including ⊥). The proof is a trivial induction on the structure of x. Does the same theorem hold
for all lazy lists? The answer must be yes, because the formula stating the theorem is admissible! Lazy induction enables us to prove theorems about lazy domains using conventional structural induction.
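The associativity law can be observed on infinite arguments in any lazy language. Here is a minimal Haskell sketch, with append transcribed from the recursive definition above (atoms are modelled by the end of the list); since full equality of infinite lists is not observable, the test compares a finite prefix.

    append :: [a] -> [a] -> [a]
    append x y = case x of
      []       -> y
      (a : as) -> a : append as y

    main :: IO ()
    main = do
      let x = 1 : x :: [Int]                       -- an infinite lazy list
      print (take 6 (append x (append [2, 3] [4]))
               == take 6 (append (append x [2, 3]) [4]))   -- True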
On the other hand, lazy induction is not sound if the induction formula is not admissible. For instance, consider the formula
(9)   ∀x ∈ L(1) [zap(x) ⊏ x]
where the function zap and the relation ⊏ are defined by the formulas
    zap(x) = if Atom? x then ⊥
             else cons(car(x), zap(cdr(x)))
    x ⊏ y ⟺ (x ≠ y) ∧ (x ⊑ y).
By induction on x, we can trivially prove the formula (9), yet it is clearly false for lazy lists since
    zap(BigTree) = BigTree
where BigTree is defined as in Section 4. Lazy induction fails in this case because the formula (9) is not admissible.
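The failure is easy to watch in a lazy language. Transcribing zap into Haskell (again modelling atoms by the end of the list, an assumption of this sketch), zap rebuilds an infinite list unchanged, so zap(x) and x agree on every observable prefix and the strict approximation claimed by (9) is false.

    -- zap replaces the terminal atom of a list by ⊥; on an infinite list
    -- the atom case is never reached.
    zap :: [a] -> [a]
    zap x = case x of
      []       -> undefined        -- "if Atom? x then ⊥"
      (a : as) -> a : zap as       -- "else cons(car(x), zap(cdr(x)))"

    main :: IO ()
    main = do
      let big = 1 : big :: [Int]   -- an infinite list playing the role of BigTree
      print (take 5 (zap big) == take 5 big)   -- True: zap(BigTree) = BigTree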
8. Conclusions and Future Research
Although implementation-oriented definitions of lazy evaluation provide
some insight into the behavior of particular computations, they are inadequate as the basis of a logical theory of lazy domains. They also blur subtle
but important semantic distinctions between different forms of lazy evaluation. Our abstract characterization in terms of domain constructors provides
a much clearer picture of the mathematical properties of lazy domains and
directly corresponds to a natural formal system for reasoning about them.
Since lazy domains have essentially the same complex structure as Scott's Pω model of the untyped lambda calculus, they cannot be specified by
restrictive methods such as equational specification. One approach is to
axiomatize lazy domains within a higher order logic such as LCF. In this
paper we have presented a first-order theory of lazy domains that we prefer
to higher order formalizations because it relies on conventional structural induction rather than fixed point induction as the fundamental axiom scheme.
In our system, the admissibility test for fixed-point induction is simply a
sufficient set of conditions for its derivation. Moreover, our system extends
conventional structural induction (as implemented in the Boyer-Moore LISP
Verifier [Boye75, Boye79]) to the context of lazy data domains, providing the programmer with a simple, intuitive framework for reasoning about functions
that manipulate lazy data objects.
Since computable functions have a natural extensional representation10 as
lazily evaluated graphs (maps), our first-order formalization of lazy domains
accommodates function domains as well. However, we must overcome one
10 There are still multiple partial maps corresponding to the same function, but the only difference between an arbitrary map and the canonical one for the equivalence class is that the canonical one contains every possible piece of redundant information.


major obstacle to make our treatment of functions intuitively accessible to programmers: our reliance on combinators rather than lambda expressions
to denote computable maps. In response to this issue, we are developing
a collection of combinators that closely correspond to conventional lambda
notation.
9. Appendix: Lazy Domains Are Universal
Each data object x in the lazy domain TrivSeq is an infinite sequence
    x0, x1, . . . , xi, . . .
in which each element xi is either T or ⊥. In effect, a member of TrivSeq is a potentially infinite enumeration of natural numbers (the indices of the convergent elements). Consequently, the abstraction function α : TrivSeq → Pω defined by
    α(x) = {i | xi = T}
establishes a natural isomorphism between the two spaces. This appendix contains a recursive program defining the operations OP′ over TrivSeq corresponding to the basic operations OP of Pω. The style of this program is rather unusual because all computations over TrivSeq are infinite enumerations in which the subcomputations determining individual elements are dovetailed (performed in parallel), an unfamiliar phenomenon in conventional applicative languages such as Pure LISP.
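One direction of the isomorphism is easy to exercise in a lazy language. Given a set presented as a (possibly infinite) enumeration of naturals, the corresponding member of TrivSeq has T at index i when i is eventually enumerated, and is undefined there otherwise; divergence of the membership search is exactly the element ⊥. A minimal Haskell sketch (the names are ours):

    data Triv = T deriving Show

    -- The i-th element converges to T exactly when i occurs in the
    -- enumeration; otherwise elem either returns False (finite enumeration,
    -- so the element is undefined) or searches forever (the element is ⊥).
    fromEnumeration :: [Integer] -> [Triv]
    fromEnumeration ns = [ if i `elem` ns then T else undefined | i <- [0 ..] ]

    main :: IO ()
    main = print (fromEnumeration [0, 2 ..] !! 4)   -- T: 4 is an even number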
For the sake of clarity, each individual recursive function definition in the program obeys the following syntactic conventions.
(1) Each definition has the form
        f(x) ≈ informal-definition = formal-definition
    where an informal-definition is a mathematical description of the value of the function and formal-definition is the actual body of the function definition. If the formal-definition is transparent, then the informal-definition may be omitted.
(2) The names of TrivSeq operations (functions that return values of type TrivSeq) are capitalized; the names of Triv operations (functions that return values of type Triv) are not. Decimal digits are considered capital letters. Triv operations are used as subfunctions within the definitions of the functions in OP′.
(3) Variables ranging over TrivSeq that are intended to denote arbitrary sets in Pω are capitalized. Variables ranging over TrivSeq that are intended to denote individual natural numbers (singleton sets) are not. No variables range over Triv.
(4) In every unary function application, the parentheses enclosing the
argument are omitted. Note that this is not the same abbreviation we employed in connection with maps in the main body of the
paper. In the following program, every application within an expression is explicitly written down; consequently, a chain of unary

applications f g h x associates to the right [f(g(h(x)))], rather than the left [(((f g) h) x)].
(5) In informal definitions (comments), the following special notation appears.
(a) The symbol ī denotes the finite set in Pω corresponding to the binary representation of the integer i:
        {j | bit j in the binary representation of i is 1}
    where bits are numbered from right to left starting with 0.
(b) The function symbol α⁻¹ denotes the inverse of the abstraction function α, i.e. α⁻¹(s) is the infinite sequence denoting the set of natural numbers s ∈ Pω.
(c) The bracketed pair ⟨i, j⟩ abbreviates the arithmetic expression
        (i + j)(i + j + 1)/2 + i.
    The binary function λi, j . ⟨i, j⟩ is a commonly used bijective pairing function (see the sketch after this list).
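For reference, a minimal Haskell sketch of the pairing function of convention (5c) (pairIdx is our name):

    -- <i, j> = (i + j)(i + j + 1)/2 + i, a bijection between N x N and N.
    pairIdx :: Integer -> Integer -> Integer
    pairIdx i j = (i + j) * (i + j + 1) `div` 2 + i

    main :: IO ()
    main = print (pairIdx 1 2)   -- 7, since (3 * 4)/2 + 1 = 7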

9.1. Auxiliary Operations. The following collection of auxiliary operations OAux is used in the definition of the operations OP′ corresponding to the primitive operations OP of Pω.

def X            ≈ ∃i [i ∈ X]
                 = (hd X) por (def Tl X)
Plus(I, J)       ≈ {i + j | i ∈ I ∧ j ∈ J}
                 = Cons([hd I] and [hd J],
                        Cons([(hd Tl I) and (hd J)] por [(hd I) and (hd Tl J)],
                             Plus(Tl I, Tl J)))
Times(I, J)      ≈ {i · j | i ∈ I ∧ j ∈ J}
                 = Cons([(def I) and (hd J)] por [(hd I) and (def J)],
                        Plus(Tl I, Times(I, Tl J)))
Pair(I, J)       ≈ {⟨i, j⟩ | i ∈ I ∧ j ∈ J}
                 = Plus(Halve Times(Plus(I, J), Plus(Plus(I, J), Suc 0)), I)
1st X            ≈ {i | ∃j ⟨i, j⟩ ∈ X}
                 = 1st1(0, X)
1st1(k, X)       ≈ {i − k | ∃j ⟨i, j⟩ ∈ X}
                 = Cons(any2nd(k, 0, X), 1st1(Suc k, X))
any2nd(i, k, X)  ≈ ∃j ≥ k [⟨i, j⟩ ∈ X]
                 = Overlap(Pair(i, k), X) por any2nd(i, Suc k, X)
2nd X            ≈ {j | ∃i [⟨i, j⟩ ∈ X]}
                 = 2nd1(0, X)
2nd1(k, X)       ≈ {j − k | ∃i [⟨i, j⟩ ∈ X]}
                 = Cons(any1st(0, k, X), 2nd1(Suc k, X))
any1st(k, j, X)  ≈ ∃i ≥ k [⟨i, j⟩ ∈ X]
                 = Overlap(Pair(k, j), X) por any1st(Suc k, j, X)
Overlap(I, J)    ≈ ∃i [i ∈ I ∧ i ∈ J]
                 = [hd I and hd J] por Overlap(Tl I, Tl J)
Top              ≈ {i | i ∈ N}
                 = Cons(T, Top)
odd X            ≈ ∃i [2i + 1 ∈ X]
                 = (hd Tl X) por (odd Tl Tl X)
Halve X          ≈ {i | 2i ∈ X} ∪ {j | 2j + 1 ∈ X}
                 = Cons([hd X] por [hd Tl X], Halve Tl Tl X)
approx(i, X)     ≈ ī ⊆ X
                 = [hd i] por [([(odd i) and (hd X)] por [odd Tl i]) and
                               approx(Halve i, Tl X)]


9.2. Primitive Operations. Recursive definitions for the operations OP′ = {0, Suc, Pred, Cond, K, S, apply} in terms of the auxiliary operations OAux appear below.
0                   ≈ {0}
                    = Cons(T, ⊥)
Suc                 = Sucgraph 0
Sucgraph k          ≈ {⟨i, j⟩ − k | ⟨i, j⟩ ≥ k ∧ j ∈ Suc ī}
                    = Cons(approx(2nd k, Suc 1st k), Sucgraph Suc k)
Suc I               ≈ {i + 1 | i ∈ I}
                    = Cons(⊥, I)
Pred                = Predgraph 0
Predgraph k         ≈ {⟨i, j⟩ − k | ⟨i, j⟩ ≥ k ∧ j ∈ Pred ī}
                    = Cons(approx(2nd k, Pred 1st k), Predgraph Suc k)
Pred I              ≈ {i | i + 1 ∈ I}
                    = Tl I
Cond                = Condgraph 0
Condgraph k         ≈ {⟨i, j⟩ − k | ⟨i, j⟩ ≥ k ∧ j ∈ Cond1 ī}
                    = Cons(approx(2nd k, Cond1 1st k), Condgraph Suc k)
Cond1 X             = Condgraph1(X, 0)
Condgraph1(X, k)    ≈ {⟨i, j⟩ − k | ⟨i, j⟩ ≥ k ∧ j ∈ Cond2(X, ī)}
                    = Cons(approx(2nd k, Cond2(X, 1st k)), Condgraph1(X, Suc k))
Cond2(X, Y)         ≈ λZ . Cond(X, Y, Z)
                    = Condgraph2(X, Y, 0)
Condgraph2(X, Y, k) ≈ {⟨i, j⟩ − k | ⟨i, j⟩ ≥ k ∧ j ∈ Cond3(X, Y, ī)}
                    = Cons(approx(2nd k, Cond3(X, Y, 1st k)), Condgraph2(X, Y, Suc k))
Cond3(I, Y, Z)      ≈ {i ∈ Y | 0 ∈ I} ∪ {j ∈ Z | ∃w [w + 1 ∈ I]}
                    = Cons([(hd I) and (hd Y)] por [(def Tl I) and (hd Z)],
                           Cond3(I, Tl Y, Tl Z))
K                   = Kgraph 0
Kgraph k            ≈ {⟨i, j⟩ − k | ⟨i, j⟩ ≥ k ∧ j ∈ K1 ī}
                    = Cons(approx(2nd k, K1 1st k), Kgraph Suc k)
K1 X                ≈ {⟨i, j⟩ | j ∈ X}
                    = Pair(Top, Filter X)
Filter I            ≈ {i | ī ⊆ I}
                    = Filter1(I, 0)
Filter1(I, k)       ≈ {i − k | i ≥ k ∧ ī ⊆ I}
                    = Cons(approx(k, I), Filter1(I, Suc k))


S                   = Sgraph 0
Sgraph k            ≈ {⟨i, j⟩ − k | ⟨i, j⟩ ≥ k ∧ j ∈ S1 ī}
                    = Cons(approx(2nd k, S1 1st k), Sgraph Suc k)
S1 X                ≈ λY . S2(X, Y)
                    = Sgraph1(X, 0)
Sgraph1(X, k)       ≈ {⟨i, j⟩ − k | ⟨i, j⟩ ≥ k ∧ j ∈ S2(X, ī)}
                    = Cons(approx(2nd k, S2(X, 1st k)), Sgraph1(X, Suc k))
S2(X, Y)            ≈ λZ . S3(X, Y, Z)
                    = Sgraph2(X, Y, 0)
Sgraph2(X, Y, k)    ≈ {⟨i, j⟩ − k | ⟨i, j⟩ ≥ k ∧ j ∈ S3(X, Y, ī)}
                    = Cons(approx(2nd k, S3(X, Y, 1st k)), Sgraph2(X, Y, Suc k))
S3(X, Y, Z)         = apply(apply(X, Z), apply(Y, Z))
apply(F, X)         ≈ {j | ∃i [⟨i, j⟩ ∈ F ∧ ī ⊆ X]}
                    = 2nd apply1(0, F, X)
apply1(F, X, k)     ≈ {p − k | p ≥ k ∧ p ∈ F ∧ 1st p ⊆ X}
                    = Cons(test(k, X, F), apply1(F, X, Suc k))
test(p, X, F)       ≈ p ∈ F ∧ 1st p ⊆ X
                    = Overlap(p, F) and approx(1st p, X)

References

[ADJ76]   Goguen, J., J. Thatcher and E. Wagner. An Initial Algebra Approach to the Specification, Correctness and Implementation of Abstract Data Types. IBM Research Report RC-6478, Yorktown Heights, 1976.
[ADJ77]   Goguen, J., J. Thatcher, E. Wagner and J. Wright. Initial Algebra Semantics and Continuous Algebras. JACM 24, 1977, 68-95.
[Back78]  Backus, J. Can Programming be Liberated from the von Neumann Style? A Functional Style and its Algebra of Programs. CACM 21, 1978, 613-641.
[Bare77]  Barendregt, H. The Type Free Lambda Calculus. Handbook of Mathematical Logic, J. Barwise, ed., North-Holland, Amsterdam, 1091-1132.
[Boye75]  Boyer, R.S., and Moore, J S. Proving Theorems About LISP Functions. JACM 22, 1975, 129-144.
[Boye79]  Boyer, R.S., and Moore, J S. A Computational Logic, Academic Press, New York, 1979.
[Cart76]  Cartwright, R. User-Defined Data Types as an Aid to Verifying LISP Programs. Automata, Languages and Programming, Edinburgh University Press, 1976.
[Cart80]  Cartwright, R. A Constructive Alternative to Axiomatic Data Type Definitions. Proceedings 1980 LISP Conference, Stanford, 1980.
[Cart83]  Cartwright, R. Non-standard Fixed Points. Proceedings of the Workshop on Logics of Programs, Carnegie-Mellon University, June 1983, Springer-Verlag Lecture Notes in Computer Science, Volume 164, 86-100.
[Cart84]  Cartwright, R. Recursive Programs As Definitions in First Order Logic. SIAM Journal of Computing 13, 1984, 374-408.
[Ende72]  Enderton, H. B. A Mathematical Introduction to Logic, Academic Press, New York, 1972.
[Frie76]  Friedman, D. and D. Wise. CONS Should Not Evaluate Its Arguments. Automata, Languages and Programming, Edinburgh University Press, 1976, 257-284.
[Gile78]  Giles, D. An LCF Axiomatization of Lazy Lists. Technical Report CSR-31-78, Computer Science Department, Edinburgh University.
[Gord77]  Gordon, M., R. Milner and C. Wadsworth. Edinburgh LCF. Technical Report CSR-11-77, Computer Science Department, Edinburgh University.
[Gutt78]  Guttag, J. and J. Horning. The Algebraic Specification of Abstract Data Types. Acta Informatica 10, 1978, 27-52.
[Hend80]  Henderson, P. Functional Programming: Application and Implementation, Prentice-Hall, London, 1980.
[Hend76]  Henderson, P. and J. Morris, Jr. A Lazy Evaluator. In Proc. Third Symposium on Principles of Programming Languages, 1976, 95-103.
[Kami80]  Kamin, S. Final Data Type Specifications: A New Data Type Specification Method. In Proc. Seventh Symposium on Principles of Programming Languages, 1980, 131-138.
[Plot78]  Plotkin, G. Tω as a Universal Domain. Journal of Computer and System Sciences 17, 1978, 209-236.
[Scot76]  Scott, D. S. Data Types as Lattices. SIAM J. Computing 5, 1976, 522-587.
[Scot81]  Scott, D. S. Lectures on a Mathematical Theory of Computation. Technical Monograph PRG-19, Oxford University Computing Laboratory, Oxford.
[Scot83]  Scott, D. S. Domains for Denotational Semantics. In Proc. International Conference on Automata, Languages, and Programming, Lecture Notes in Computer Science 140, Springer-Verlag, Berlin, 1982.
[Stoy77]  Stoy, J. Denotational Semantics: The Scott-Strachey Approach to Programming Language Theory, MIT Press, 1977.
[Tars55]  Tarski, A. A Lattice-Theoretical Fixpoint Theorem and its Applications. Pacific J. Math. 5, 1955, 285-309.
