Joseph Y. Halpern
Cornell University
Dept. of Computer Science
Ithaca, NY 14853
halpern@cs.cornell.edu
http://www.cs.cornell.edu/home/halpern
1 Introduction
The standard approach to modeling uncertainty is probability theory. In recent years, researchers, motivated by varying concerns, including dissatisfaction with some of the axioms of probability and a desire to represent information more qualitatively, have introduced a number of generalizations of and alternatives to probability, including Dempster-Shafer belief functions [Shafer, 1976], possibility measures [Dubois and Prade, 1990], lexicographic probability [Blume et al., 1991], and many others. Rather than investigating each of these approaches piecemeal, I consider here an approach to representing uncertainty that generalizes them all, and lets us understand their commonalities and differences.

2 Plausibility Measures
A plausibility measure [Friedman and Halpern, 1995] associates with a set a plausibility, which is just an element in a partially ordered space. The only real requirement is that if A is a subset of B, then the plausibility of A is less than or equal to the plausibility of B. Probability measures are clearly plausibility measures; every other representation of uncertainty that I am aware of can also be viewed as a plausibility measure. Given how little structure plausibility measures have, it is perhaps not surprising that plausibility measures generalize so many other notions. This very lack of structure turns out to be a significant advantage. By adding structure on an as-needed basis, it is possible to characterize what is required to ensure that a plausibility measure has certain properties of interest. This both gives insight into the essential features of the properties in question and makes it possible to prove general results that apply to many representations of uncertainty.

In this paper, I discuss three examples of this phenomenon.
Formally, a plausibility measure Pl maps subsets of a set W of worlds to a set D of plausibility values, partially ordered by ≤, with a bottom element ⊥ and a top element ⊤, such that:

Pl1. Pl(∅) = ⊥.

Pl2. Pl(W) = ⊤.

Pl3. If A ⊆ B, then Pl(A) ≤ Pl(B).

Since ≤ is a partial order, Pl3 says that, if A ⊆ B, then the plausibility of A is comparable to the plausibility of B and, moreover, Pl(A) ≤ Pl(B).

¹ Frequently it is also assumed that a probability measure μ is countably additive, i.e., if A₁, A₂, … are pairwise disjoint, then μ(∪ᵢ Aᵢ) = Σᵢ μ(Aᵢ). Since I focus on finite state spaces here, countable additivity does not play a significant role, so I do not assume it.
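As an illustration, the three axioms can be checked mechanically on a small finite space. The following Python sketch is mine, not the paper's (the state space, the unnormalized weights, and the helper names are all invented for illustration); it treats a plausibility measure as a map from events to values in a partially ordered set:

```python
from itertools import chain, combinations

def powerset(worlds):
    """All subsets of `worlds`, as frozensets."""
    return [frozenset(s) for s in
            chain.from_iterable(combinations(sorted(worlds), r)
                                for r in range(len(worlds) + 1))]

def satisfies_pl_axioms(worlds, pl, leq, bottom, top):
    """Check Pl1-Pl3 for `pl`, a map from events (frozensets) to
    plausibility values partially ordered by `leq`."""
    events = powerset(worlds)
    pl1 = pl[frozenset()] == bottom        # Pl1: Pl(empty set) = bottom
    pl2 = pl[frozenset(worlds)] == top     # Pl2: Pl(W) = top
    pl3 = all(leq(pl[a], pl[b])            # Pl3: A subset of B => Pl(A) <= Pl(B)
              for a in events for b in events if a <= b)
    return pl1 and pl2 and pl3

# An (unnormalized, hypothetical) probability measure on W = {1, 2, 3},
# viewed as a plausibility measure with the usual order on numbers.
worlds = {1, 2, 3}
weight = {1: 1, 2: 2, 3: 5}
pl = {a: sum(weight[w] for w in a) for a in powerset(worlds)}
print(satisfies_pl_axioms(worlds, pl, lambda x, y: x <= y, 0, 8))  # True
```

Breaking monotonicity on any single event makes the check fail, which is exactly what Pl3 rules out.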
3 Conditional Plausibility
Suppose an agent's beliefs are represented by a plausibility measure Pl. How should these beliefs be updated in light of new information? The standard approach to updating in probability theory is by conditioning. Most other representations of uncertainty have an analogue to conditioning. Indeed, compelling arguments have been made in the context of probability to take conditional probability as a primitive notion, rather than unconditional probability. The idea is to start with a primitive notion μ(A | B) satisfying some constraints (such as μ(A ∪ B | C) = μ(A | C) + μ(B | C) if A and B are disjoint) rather than starting with an unconditional probability measure and defining conditioning in terms of it. The advantage of taking conditional probability as primitive is that it then makes sense to condition on events whose unconditional probability is 0.
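For a finite space where the conditioning event has positive probability, the disjoint-additivity constraint on conditionals can be checked directly. A small Python sketch (the measure and event names are hypothetical; exact rationals avoid floating-point noise):

```python
from fractions import Fraction

def cond(mu, a, b):
    """Conditional probability mu(a | b) for a measure `mu` given as
    world -> Fraction. Left undefined (an error) when mu(b) = 0."""
    pb = sum((mu[w] for w in b), Fraction(0))
    if pb == 0:
        raise ValueError("mu(b) = 0: not determined by the unconditional measure")
    return sum((mu[w] for w in a & b), Fraction(0)) / pb

# A hypothetical uniform measure on four worlds.
mu = {w: Fraction(1, 4) for w in "pqrs"}
A, B, C = {"p"}, {"q"}, {"p", "q", "r"}
# The constraint: mu(A ∪ B | C) = mu(A | C) + mu(B | C) for disjoint A, B.
print(cond(mu, A | B, C) == cond(mu, A, C) + cond(mu, B, C))  # True
```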
² The word "plausibility" is slightly overloaded, appearing both in the context of "plausibility function" and "plausibility measure." Plausibility functions will play only a minor role in this paper, so there should not be much risk of confusion.
A belief function Bel on W is a function Bel: 2^W → [0, 1] satisfying certain axioms [Shafer, 1976]. These axioms certainly imply property Pl3, so a belief function is a plausibility measure. There is a corresponding plausibility function Plaus defined as Plaus(A) = 1 − Bel(W − A).²

Given a set P of probability measures, let P∗(A), the lower probability of A, be the infimum of μ(A) over the measures μ in P; the upper probability P*(A) is the corresponding supremum. Mapping A to the pair (P∗(A), P*(A)), ordered componentwise, clearly satisfies Pl1–3, so it is indeed a plausibility measure, but one that puts only a partial (pre)order on events. A similar plausibility measure can be associated with a belief/plausibility function.
The trouble with P∗, P*, and even the pair (P∗, P*) is that they lose information. For example, it is not hard to find a set P of probability measures and subsets A and B of W such that μ(A) ≤ μ(B) for all μ in P and μ(A) < μ(B) for some μ in P, but P∗(A) = P∗(B) and P*(A) = P*(B). Indeed, there exists an infinite set P of probability measures such that μ(A) < μ(B) for all μ in P but P∗(A) = P∗(B) and P*(A) = P*(B). If all the probability measures in P agree that A is less likely than B, it seems reasonable to conclude that A is less likely than B. However, none of P∗, P*, or the pair necessarily draw this conclusion.
It is not hard to associate yet another plausibility measure with P that does not lose this important information (and does indeed conclude that A is less likely than B). Suppose, without loss of generality, that there is some index set I such that P = {μᵢ : i ∈ I}. Thus, for example, if P = {μ₁, μ₂}, then I = {1, 2}. (In general, I may be infinite.) Let D consist of all functions from I to [0, 1]. The standard pointwise ordering on functions, that is, f ≤ g if f(i) ≤ g(i) for all i ∈ I, gives a partial order on D. Note that ⊥ is the function that is 0 everywhere and ⊤ is the function that is 1 everywhere. For A ⊆ W, let f_A be the function such that f_A(i) = μᵢ(A) for all i ∈ I. Define the plausibility measure Pl_P by taking Pl_P(A) = f_A. Thus, Pl_P(A) ≤ Pl_P(B) iff μᵢ(A) ≤ μᵢ(B) for all i ∈ I. It is easy to see that Pl_P satisfies Pl1–3. Pl1 and Pl2 follow since f_∅ = ⊥ and f_W = ⊤, while Pl3 holds because if A ⊆ B, then μᵢ(A) ≤ μᵢ(B) for all i ∈ I.

To see how this representation works, consider a simple example where a coin which is known to be either fair or double-headed is tossed. The uncertainty can be represented by two probability measures on W = {heads, tails}: μ₁, which gives heads probability 1/2, and μ₂, which gives heads probability 1. Taking the index set to be {1, 2}, this gives us a plausibility measure Pl such that Pl(heads) is the function f with f(1) = 1/2 and f(2) = 1; similarly, Pl(tails) is the function g with g(1) = 1/2 and g(2) = 0.
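The coin example can be worked out concretely. In the Python sketch below (the function and variable names are mine), Pl_P(A) is represented as the tuple of values μᵢ(A), compared pointwise:

```python
from fractions import Fraction

def pl_set(measures, event):
    """Pl_P(event): the tuple (mu_i(event) for each measure mu_i)."""
    return tuple(sum(mu[w] for w in event) for mu in measures)

def leq(f, g):
    """Pointwise order on tuples; a partial order (may be incomparable)."""
    return all(x <= y for x, y in zip(f, g))

half = Fraction(1, 2)
mu_fair = {"heads": half, "tails": half}          # mu_1: the fair coin
mu_double = {"heads": Fraction(1), "tails": Fraction(0)}  # mu_2: double-headed
P = [mu_fair, mu_double]

heads, tails = {"heads"}, {"tails"}
print(pl_set(P, heads))  # mu_1(heads) = 1/2, mu_2(heads) = 1
print(pl_set(P, tails))  # mu_1(tails) = 1/2, mu_2(tails) = 0
# tails is less plausible than heads, but not conversely:
print(leq(pl_set(P, tails), pl_set(P, heads)))  # True
print(leq(pl_set(P, heads), pl_set(P, tails)))  # False
```

Unlike lower/upper probability, this representation keeps the per-measure information, which is exactly why it can draw the comparison discussed above.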
A conditional plausibility measure Pl(· | ·) is required to behave, for each fixed conditioning event C, like a plausibility measure (so that analogues of Pl1–3 hold for Pl(· | C)), and to satisfy a coherence condition, CPl5, which ensures that comparisons of plausibility conditional on C agree with comparisons of the corresponding unconditional plausibilities; see [Halpern, 2000a] for the formal statement. A conditional plausibility space (cps) is algebraic if, in addition, there are operations ⊕ and ⊗ on plausibility values, analogous to addition and multiplication, such that, roughly:

Alg1. If A and B are disjoint, then Pl(A ∪ B | C) = Pl(A | C) ⊕ Pl(B | C).

Alg2. Pl(A ∩ B | C) = Pl(A | B ∩ C) ⊗ Pl(B | C).

Alg3. ⊗ distributes over ⊕.

Alg4. If a ⊗ c ≤ b ⊗ c and c ≠ ⊥, then a ≤ b.

(More precisely, Alg3 and Alg4 are required to hold only for tuples of plausibility values in an appropriate domain DomPl determined by Pl; see [Halpern, 2000a] for the details.) For probability, ⊕ and ⊗ are just addition and multiplication; analogous definitions of conditioning can be given for most other representations of uncertainty, including possibility measures and LPSs. I leave it to the reader to check that these definitions indeed make the cps algebraic. A construction similar in spirit can be used to define a notion of conditioning appropriate for the representation of a set of plausibility measures; this also leads to an algebraic cps [Halpern, 2000a].
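Under the probabilistic reading (⊕ is + and ⊗ is ×), Alg1 and Alg2 reduce to familiar identities that can be verified numerically. A Python sketch with a hypothetical measure on four worlds:

```python
from fractions import Fraction

def p(mu, a):
    """Unconditional probability of event a."""
    return sum((mu[w] for w in a), Fraction(0))

def cond(mu, a, b):
    """Conditional probability mu(a | b); assumes p(mu, b) > 0."""
    return p(mu, a & b) / p(mu, b)

# A hypothetical measure on worlds 1..4.
mu = {1: Fraction(1, 2), 2: Fraction(1, 4), 3: Fraction(1, 8), 4: Fraction(1, 8)}
W = set(mu)
A, B, C = {1, 2}, {2, 3}, W

# Alg1 with ⊕ = +: the plausibility of a disjoint union adds.
disjoint_ok = cond(mu, {1} | {3}, C) == cond(mu, {1}, C) + cond(mu, {3}, C)
# Alg2 with ⊗ = ×: the chain rule Pl(A ∩ B | C) = Pl(A | B ∩ C) ⊗ Pl(B | C).
chain_ok = cond(mu, A & B, C) == cond(mu, A, B & C) * cond(mu, B, C)
print(disjoint_ok, chain_ok)  # True True
```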
Recall that a filter F on W is a set of subsets of W that (1) is closed under supersets, (2) is closed under finite intersection, and (3) does not contain the empty set. Given a filter F, an agent is said to believe A iff A ∈ F. Note that the sets that are given probability 1 by a probability measure form a filter. Conversely, every filter F defines a finitely additive probability measure: the sets in F get probability 1; all others get probability 0. We can also obtain a filter from the Kripke-structure definition of knowledge. If the agent considers possible the set A of worlds, then let F consist of all supersets of A. This is clearly a filter (consisting of precisely the events the agent believes). Conversely, in a finite space, a filter F determines a Kripke structure: the agent considers possible precisely the intersection of all the sets in F (which is easily seen to be nonempty). In an infinite space, a filter may not determine a Kripke structure, precisely because the intersection of all sets in the filter may be empty. The events believed in a Kripke structure form a filter whose sets are closed under arbitrary intersection.
Another approach to modeling belief, due to Brandenburger and Keisler [2000], uses LPSs (lexicographic probability systems). Say that an agent believes A in LPS (μ₁, …, μₙ) if there is some j such that μᵢ(A) = 1 for all i ≤ j and μᵢ(A) < 1 for all i > j. It is easy to see that beliefs defined this way are closed under intersection. Brandenburger and Keisler give an elegant decision-theoretic justification for this notion of belief. Interestingly, van Fraassen [1995] defines a notion of belief using conditional probability spaces that can be shown to be closely related to the definition given by Brandenburger and Keisler.
Plausibility measures provide a framework for understanding what all these approaches have in common. Say that an agent believes A with respect to plausibility measure Pl if Pl(A) > Pl(W − A); that is, the agent believes A if A is more plausible than not. It is easy to see that, in general, this definition is not closed under conjunction. In the case of probability, for example, this definition just says that A is believed if the probability of A is greater than 1/2. What condition on a plausibility measure Pl is needed to guarantee that this definition of belief is closed under conjunction? Trivially, the following restriction does the trick:

Pl4′. If Pl(A) > Pl(W − A) and Pl(B) > Pl(W − B), then Pl(A ∩ B) > Pl(W − (A ∩ B)).
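The probabilistic reading of "more plausible than not" gives the standard counterexample to closure under conjunction. A minimal Python sketch with a uniform measure on three worlds (the worlds and events are mine):

```python
from fractions import Fraction

mu = {1: Fraction(1, 3), 2: Fraction(1, 3), 3: Fraction(1, 3)}  # uniform

def p(a):
    return sum(mu[w] for w in a)

def believed(a):
    """Belief as 'more plausible than not': probability greater than 1/2."""
    return p(a) > Fraction(1, 2)

A, B = {1, 2}, {2, 3}
print(believed(A), believed(B))  # True True: each has probability 2/3
print(believed(A & B))           # False: P(A ∩ B) = 1/3
```

So Pl4′ fails for probability measures, which is exactly why probability-above-a-threshold beliefs are not closed under conjunction.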
I actually want a stronger version of this property, to deal with conditional beliefs. An agent believes A conditional on B if, given B, A is more plausible than not, that is, if Pl(A | B) > Pl(W − A | B). In the presence of CPl5 (which I implicitly assume for this section), conditional beliefs are closed under conjunction if the following holds:

Pl4′′. If Pl(A₁ | B) > Pl(W − A₁ | B) and Pl(A₂ | B) > Pl(W − A₂ | B), then Pl(A₁ ∩ A₂ | B) > Pl(W − (A₁ ∩ A₂) | B).

A more elegant requirement is the following:

Pl4. If A, B, and C are pairwise disjoint sets, Pl(A ∪ B) > Pl(C), and Pl(A ∪ C) > Pl(B), then Pl(A) > Pl(B ∪ C).

In words, Pl4 says that if A ∪ B is more plausible than C and if A ∪ C is more plausible than B, then A by itself is already more plausible than B ∪ C.

Remarkably, in the presence of Pl1–3, Pl4 and Pl4′′ are equivalent:
Proposition 4.1. A plausibility measure satisfying Pl1–3 satisfies Pl4 if and only if it satisfies Pl4′′.
OR. From φ₁ → ψ and φ₂ → ψ infer (φ₁ ∨ φ₂) → ψ.

CM. From φ → ψ₁ and φ → ψ₂ infer (φ ∧ ψ₁) → ψ₂ (cautious monotonicity).

Belief revision [Gärdenfors, 1988] attempts to describe how an agent should accommodate a new belief (possibly inconsistent with his other beliefs) about a static world. Belief update [Katsuno
4.2 Belief Revision
Because Pl4∗ does not hold for Pl_lot, it can be used to represent the lottery paradox. Because Pl4∗ does hold for the plausibility measure corresponding to beliefs in a Kripke structure, belief in Kripke structures is closed under infinite conjunction. A countable version of Pl4∗ holds for σ-additive probability measures, which is why probability-1 beliefs are closed under countable conjunctions (but not necessarily under arbitrary infinite conjunctions).
4.3 Default Reasoning
Thus, for plausibility measures, Pl4 is necessary and sufficient to guarantee that conditional beliefs are closed under conjunction. Proposition 4.1 helps explain why all the notions of belief discussed above are closed under conjunction. More precisely, for each notion of belief discussed earlier, it is trivial to construct a plausibility measure satisfying Pl4 that captures it: give plausibility 1 to the events that are believed and plausibility 0 to the rest.

Pl4 is required for beliefs to be closed under finite intersection (i.e., finite conjunction). It does not guarantee closure under infinite intersection. This is a feature: beliefs are not always closed under infinite intersection. The classic example is the lottery paradox [Kyburg, 1961]: Consider a situation with infinitely many individuals, each of whom holds a ticket to a lottery. It seems reasonable to believe that individual i will not win, for any i, yet that someone will win. If Aᵢ is the event that individual i does not win, this amounts to believing each Aᵢ and also believing ∪ᵢ (W − Aᵢ) (and not believing ∩ᵢ Aᵢ). It is easy to capture this with a plausibility measure. Let W = {w₁, w₂, …}, where wᵢ is the world where individual i wins (so that Aᵢ = W − {wᵢ}). Let Pl_lot be a plausibility measure that assigns plausibility 0 to the empty set, plausibility 1/2 to all finite nonempty sets, and plausibility 1 to all infinite sets. It is easy to see that Pl_lot satisfies Pl4. Nevertheless, each Aᵢ is believed according to Pl_lot, as is ∪ᵢ (W − Aᵢ).

As shown in [Friedman et al., 2000], the key property that guarantees that (conditional) beliefs are closed under infinite intersection is an infinitary generalization of Pl4, denoted Pl4∗.
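The lottery measure Pl_lot can be sketched symbolically, representing each event of the infinite space as either a finite set or the complement of one. The encoding below is mine and only illustrates the belief claims in the text:

```python
# An event is ("fin", S): the finite set S, or ("cofin", S): the
# complement of the finite set S (hence infinite).
def pl_lot(e):
    """Pl_lot: 0 for the empty set, 1/2 for finite nonempty sets, 1 for infinite sets."""
    kind, s = e
    if kind == "fin":
        return 0 if not s else 0.5
    return 1

def complement(e):
    kind, s = e
    return ("cofin", s) if kind == "fin" else ("fin", s)

def believed(e):
    """Belief as 'more plausible than the complement'."""
    return pl_lot(e) > pl_lot(complement(e))

A = lambda i: ("cofin", frozenset({i}))  # A_i: "individual i does not win"
print(all(believed(A(i)) for i in range(10)))  # True: each A_i is believed
print(believed(("fin", frozenset())))          # False: their (empty) intersection is not
```

Each A_i is cofinite (plausibility 1) while its complement {w_i} is finite (plausibility 1/2), so each A_i is believed even though the intersection of all of them is empty.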
and Mendelzon, 1991], on the other hand, attempts to describe how an agent should change his beliefs as a result of learning about a change in the world.

Belief revision and belief update describe only two of the many ways in which beliefs can change. Using plausibility, it is possible to construct a general framework for reasoning about belief change (see [Friedman and Halpern, 1997]). The key point is that it is possible to describe belief change using conditioning with plausibility, even though this cannot be done with probability. Starting with a conditional plausibility measure satisfying Pl4 (this is necessary for belief to have the right properties) and conditioning on new information gives us a general model of belief change. Belief revision and belief update can be captured by putting appropriate constraints on the initial plausibility [Friedman and Halpern, 1999]. The same framework can be used to capture other notions of belief change, such as general Markovian models of belief change [Friedman and Halpern, 1996b] and belief change with unreliable observations [Boutilier et al., 1998]. The key point is that belief change simply becomes conditioning (and iterated belief change becomes iterated conditioning).
This result shows that if the KLM properties are sound with respect to a class of structures, then they are almost inevitably complete as well. More generally, Theorems 4.2 and 4.4 explain why the KLM properties are sound and complete for so many approaches.

The discussion up to now has focused on propositional defaults, but using plausibility, it is fairly straightforward to extend to the first-order case; see [Friedman et al., 2000].
Pl5. If Pl(A) = Pl(B) = ⊥, then Pl(A ∪ B) = ⊥.
LLE states that the syntactic form of the antecedent is irrelevant. Thus, if φ₁ and φ₂ are equivalent, we can deduce φ₂ → ψ from φ₁ → ψ. RW describes a similar property of the consequent: If ψ₁ (logically) entails ψ₂, then we can deduce φ → ψ₂ from φ → ψ₁. This allows us to combine default and logical reasoning. REF states that φ is always a default conclusion of φ. AND states that we can combine two default conclusions: if we can conclude by default both ψ₁ and ψ₂ from φ, then we can also conclude ψ₁ ∧ ψ₂ from φ. OR states that we are allowed to reason by cases: if the same default conclusion follows from each of two antecedents, then it also follows from their disjunction. CM states that if ψ₁ and ψ₂ are two default conclusions of φ, then discovering that ψ₁ holds when φ holds (as would be expected, given the default) should not cause us to retract the default conclusion ψ₂.

The fact that the KLM properties characterize so many different semantic approaches has been viewed as rather surprising, since these approaches seem to capture quite different intuitions. As Pearl [1989] said of the equivalence between ε-semantics and preferential structures, "It is remarkable that two totally different interpretations of defaults yield identical sets of conclusions and identical sets of reasoning machinery." Plausibility measures help us understand why this should be so. In fact, plausibility measures provide a much deeper understanding of exactly what properties a semantic approach must have in order to be characterized by the KLM properties.
The first step to obtaining this understanding is to give semantics to defaults using plausibility. A plausibility structure is a tuple (W, Pl), where Pl is a plausibility measure on W. A conditional φ → ψ holds in this structure if either Pl([φ]) = ⊥ or Pl([φ ∧ ψ]) > Pl([φ ∧ ¬ψ]) (where [φ] is the set of worlds satisfying the formula φ). This approach is just a generalization of the approach first used to define defaults with possibility measures [Dubois and Prade, 1991]. Note that if Pl satisfies CPl5, this is equivalent to saying that φ → ψ holds iff Pl([ψ] | [φ]) > Pl([¬ψ] | [φ]) whenever Pl([φ]) ≠ ⊥ (the implicit assumption here is that conditioning on [φ] is defined iff Pl([φ]) ≠ ⊥).

While this definition of defaults in terms of plausibility is easily seen to satisfy REF, RW, and LLE, it does not satisfy AND, OR, or CM in general. It is easy to construct counterexamples taking Pl to be a probability measure μ (in which case the definition boils down to μ([φ]) = 0 or μ([φ ∧ ψ]) > μ([φ ∧ ¬ψ])). As observed earlier, if Pl satisfies Pl4 (which it does not in general if Pl is a probability measure), then the AND rule is satisfied. As shown in [Friedman and Halpern, 1996a], Pl4 also suffices to guarantee CM (cautious monotonicity). The only additional property that is needed to guarantee that OR holds is Pl5.
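One concrete instance of this semantics uses a ranking function (smaller rank = more plausible), a special case of a plausibility measure with the order reversed. The worlds and ranks below are a made-up bird/penguin example of mine, not from the paper:

```python
import math

# kappa gives each world a rank; lower rank = more plausible.
kappa = {
    ("bird", "flies"): 0,
    ("bird",): 1,
    ("bird", "penguin"): 1,
    ("bird", "penguin", "flies"): 2,
}
worlds = set(kappa)

def rank(event):
    """Rank of a set of worlds: the rank of its most plausible world."""
    return min((kappa[w] for w in event), default=math.inf)

def default(phi, psi):
    """phi |~ psi: the phi-and-psi worlds are strictly more plausible
    than the phi-and-not-psi worlds (or phi holds nowhere)."""
    a = {w for w in worlds if phi(w) and psi(w)}
    b = {w for w in worlds if phi(w) and not psi(w)}
    return rank(a) < rank(b) or not (a or b)

bird = lambda w: "bird" in w
penguin = lambda w: "penguin" in w
flies = lambda w: "flies" in w

print(default(bird, flies))                      # birds typically fly
print(default(penguin, lambda w: not flies(w)))  # penguins typically don't
```

Because rankings satisfy the analogue of Pl4, both defaults hold simultaneously without contradiction, which a single probability threshold could not support.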
Define worst(a) = min over worlds w of u(a, w), the utility of the worst outcome of action a. The minimax rule says to choose the action a (or one of them, in case of ties) such that worst(a) is highest. The action chosen according to this rule is the one with the best worst-case outcome. Notice that minimax makes sense no matter how uncertainty is represented. Now take the plausibility values and the utilities to be real numbers, both with the standard order, and consider the plausibility measure Pl, where Pl(A) = 1 if A ≠ ∅ and Pl(∅) = 0. Let ⊕ be min and let ⊗ be multiplication. With this choice of ⊕ and ⊗, it is easy to see that the generalized expected utility of a is worst(a), so expected utility maximization with respect to Pl is minimax.
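This reduction is easy to check numerically. In the Python sketch below (the actions, worlds, and utilities are invented), every world gets plausibility 1, ⊗ is multiplication, and ⊕ is min:

```python
# Worst-case utilities for two hypothetical actions over three worlds.
utility = {
    "carry_umbrella": {"rain": 5, "cloudy": 4, "sun": 4},
    "no_umbrella":    {"rain": 0, "cloudy": 6, "sun": 8},
}

def worst(action):
    return min(utility[action].values())

# Minimax: pick the action with the best worst-case outcome.
minimax_choice = max(utility, key=worst)
print(minimax_choice)  # carry_umbrella (worst case 4 vs 0)

# The same ordering as generalized expected utility:
# combine the terms Pl(w) ⊗ u(w) with ⊕ = min, where Pl(w) = 1 for every world.
def expected(action):
    e = None
    for w, u in utility[action].items():
        term = 1 * u                              # Pl(w) ⊗ u(w)
        e = term if e is None else min(e, term)   # ⊕ = min
    return e

print(all(expected(a) == worst(a) for a in utility))  # True
```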
that have a potential influence on X. This influence is mediated through the parents of X, those ancestors of X directly connected to X. That means that X should be conditionally independent of its ancestors, given its parents. The formal definition requires, in fact, that X be independent not only of its ancestors, but of its nondescendants, given its parents, where the nondescendants of X are those nodes Y such that X is not an ancestor of Y.
A qualitative Bayesian network gives qualitative information about dependence and independence, but does not actually give the values of the conditional probabilities. A quantitative Bayesian network provides more quantitative information, by associating with each node X in G a conditional probability table (cpt) that quantifies the effects of the parents of X on X. For example, if X's parents in G are Y and Z, then the cpt for X would have an entry for each setting (j, k) of (Y, Z), with j, k ∈ {0, 1}. As the notation is meant to suggest, this entry is the conditional probability that X = 1 given Y = j and Z = k, for the measure represented by the network. (Of course, there is no need to have an entry for the probability that X = 0 given Y = j and Z = k, since this is just 1 minus the entry for X = 1.) Formally, a quantitative Bayesian network is a pair (G, f) consisting of a qualitative Bayesian network G and a function f that associates with each node X in G a cpt, where there is an entry in the interval [0, 1] in the cpt for each possible setting of the parents of X. If X is a root of G, then the cpt for X can be thought of as giving the unconditional probability that X = 1.
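The space savings can be made concrete by counting cpt entries for a small hypothetical dag (for binary variables, one entry per setting of a node's parents):

```python
# A hypothetical five-node dag, given as node -> parents.
parents = {
    "A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"], "E": ["D"],
}
n = len(parents)
naive = 2 ** n - 1  # numbers needed to list the joint distribution explicitly
cpt_entries = sum(2 ** len(ps) for ps in parents.values())
print(naive, cpt_entries)  # 31 11
```

The gap grows exponentially with n when the number of parents per node stays bounded, which is the point of the Bayesian-network representation.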
Suppose that W is a set of possible worlds characterized by n binary random variables X₁, …, Xₙ (or, equivalently, primitive propositions). That is, a world w ∈ W is a tuple (x₁, …, xₙ), where xᵢ ∈ {0, 1} is the value of Xᵢ. That means that there are 2^n worlds in W. A naive description of a probability measure on W requires 2^n − 1 numbers, one giving the probability of each world. (Of course, the probability of the last world is determined by the other probabilities, since they must sum to 1.) If n is relatively small, describing a probability measure in this naive way is not so unreasonable, but if n is, say, 1000 (certainly not unlikely in many practical applications), then it is completely infeasible. One of the most significant recent advances in AI has been in work on Bayesian networks [Pearl, 1988], a tool that allows probability measures to be represented in a compact way and manipulated in a computationally feasible way. I briefly review Bayesian networks here and then discuss the extent to which the ideas can be applied to other representations of uncertainty. More details can be found in [Halpern, 2000a].

Recall that a (qualitative) Bayesian network (sometimes called a belief network) is a dag G, that is, a directed acyclic graph, whose nodes are labeled by random variables. Informally, the edges in a Bayesian network can be thought of as representing causal influence. Given a Bayesian network G and a node X in G, think of the ancestors of X in the graph as those random variables

Let the set of plausibility values be that used for representing a set of probability measures indexed by I, that is, the functions from I to [0, 1], with the pointwise ordering. Let the utility values be the functions from I to the reals, let ⊕ be pointwise addition, and let ⊗ be pointwise multiplication. The difference between the two rules being compared is captured by considering two different orders on these utility functions. If E is the expectation function corresponding to this definition of ⊕ and ⊗, then which actions have higher expected utility depends on which of the two orders is used; two expected utilities may be incomparable according to one order but comparable according to the other.
It can be shown that every partial order on actions can be represented as the ordering induced by expected utility according to some plausibility measure on W. That is, given some partial order ⪯ on actions that can be taken in some set W of possible worlds, there is a plausibility measure Pl on W, an expectation domain into which both the plausibility values and the utilities map, and a utility function on W, such that a ⪯ b iff the expected utility of a is at most that of b. Thus, viewing decision rules as instances of expected utility maximization with respect to the appropriate expectation function provides a general framework in which to study decision rules. For example, it becomes possible to ask what properties of an expectation domain are needed to get various of Savage's [1954] postulates. I hope to report on this in future work.
It is well known that:

1. Every probability measure is represented by a qualitative Bayesian network (in fact, in general there are many qualitative Bayesian networks that represent a given probability measure).
An equivalent definition of X and Y being independent with respect to a probability measure μ is that μ(X = x ∧ Y = y) = μ(X = x) × μ(Y = y). However, I want to give a definition of independence that does not require an analogue to multiplication. But even in an algebraic cps, the requirement that Pl(X = x ∧ Y = y) = Pl(X = x) ⊗ Pl(Y = y) is not always equivalent to the definition given here (see [Halpern, 2000a]). Also note that if Pl(Y = y) = ⊥ (in the case of probability, this would correspond to Y = y having probability 0), then Pl(X = x | Y = y) is not defined. In this case, there is no requirement that Pl(X = x | Y = y) = Pl(X = x). A similar observation holds if Pl(X = x) = ⊥.
7 Conclusions
There is no reason to believe that one representation of uncertainty is best for all applications. This makes it useful to
have a framework in which to compare representations. As I
hope I have convinced the reader, plausibility measures give
us such a framework, and provide a vantage point from which
to look at representations of uncertainty and understand what
makes them tick: what properties of each one are being used
to get results of interest. More discussion of these and related
topics can be found in [Halpern, 2000c].
References
[Adams, 1966] E. Adams. Probability and the logic of conditionals. In J. Hintikka and P. Suppes, editors, Aspects of Inductive Logic, pages 265–316. North Holland, 1966.
[Alchourrón et al., 1985] C. E. Alchourrón, P. Gärdenfors, and D. Makinson. On the logic of theory change: partial meet functions for contraction and revision. Journal of Symbolic Logic, 50:510–530, 1985.
[Blume et al., 1991] L. Blume, A. Brandenburger, and E. Dekel. Lexicographic probabilities and choice under uncertainty. Econometrica, 59(1):61–79, 1991.
[Boutilier et al., 1998] C. Boutilier, J. Y. Halpern, and N. Friedman. Belief revision with unreliable observations. In Proc. AAAI '98, pages 127–134, 1998.
[Brandenburger and Dekel, 1987] A. Brandenburger and E. Dekel. Common knowledge with probability 1. Journal of Mathematical Economics, 16:237–245, 1987.