Our assumption of private ownership means that all firms are owned by agents in the economy. To keep our analysis simple, we will assume that the shares of the firms owned by each agent are fixed, which implies that the financial side of the economy is trivial. We let $\theta_i^j$ denote the share of firm $j$ owned by agent $i$. Hence, we have $\theta_i^j \in [0, 1]$ and $\sum_{i=1}^M \theta_i^j = 1$. With this assumption, the economy is now fully described by
$$\mathcal{E} = \left( \left(X_i, \succeq_i, \omega_i\right)_{i=1}^M,\ \left(Y_j\right)_{j=1}^J,\ \left(\theta_i^j\right)_{\substack{i=1,\dots,M \\ j=1,\dots,J}} \right).$$
Maximizing Behavior
Producers in the model are assumed to act to maximize profit,
$$\pi_j(p) = \max_{y_j \in Y_j} p \cdot y_j.$$
The rationale for this assumption is the interests of the owners of the firm, who are the residual claimants to the firm's profits. As long as some owner has non-satiated preferences, they will prefer more income to less, and will wish for the firm to act to maximize profits. We let $\eta_j(p)$ be the set of solutions to firm $j$'s optimization, so that
$$\eta_j(p) = \left\{ y_j \in Y_j \mid p \cdot y_j = \pi_j(p) \right\}.$$
Consumers in the model act to maximize utility subject to budget constraints. The income or wealth of consumer $i$ for given prices $p$ is
$$w_i(p) = p \cdot \omega_i + \sum_{j=1}^J \theta_i^j \pi_j(p).$$
The budget set of consumer $i$ is
$$B_i(p) = \left\{ x \in X_i \mid p \cdot x \le w_i(p) \right\},$$
and the demand correspondence is
$$x_i(p) = \left\{ x \in B_i(p) \mid \forall x' \in B_i(p),\ x \succeq_i x' \right\}.$$
Definition: A Walrasian equilibrium for $\mathcal{E}$ consists of a price vector $p^*$ and an allocation $(x^*, y^*)$ such that:
1. $p^* \neq 0$;
2. $(x^*, y^*)$ is feasible;
3. For all $j$, $y_j^* \in \eta_j(p^*)$;
4. For all $i$, $x_i^* \in x_i(p^*)$.
The set of Walrasian equilibria of $\mathcal{E}$ is denoted $WE(\mathcal{E})$.
Aggregating individual behavior, define
$$z(p) = \sum_{i=1}^M x_i(p) - \sum_{j=1}^J y_j(p) - \omega \quad \text{(excess demand)},$$
where $\omega = \sum_{i=1}^M \omega_i$. We normalize prices to lie in the unit simplex
$$\Delta = \left\{ p \in \mathbb{R}^\ell_+ \mid p \cdot \mathbf{1} = 1 \right\}, \qquad \mathbf{1} = [1, \dots, 1].$$
Walras' Law: The economy satisfies $p \cdot z(p) = 0$ for all $p \in \Delta$. To see this, write
$$p \cdot z(p) = \sum_{i=1}^M p \cdot x_i(p) - \sum_{j=1}^J p \cdot y_j(p) - \sum_{i=1}^M p \cdot \omega_i = \sum_{i=1}^M \left[ p \cdot x_i(p) - p \cdot \omega_i - \sum_{j=1}^J \theta_i^j \pi_j(p) \right] = \sum_{i=1}^M \left[ p \cdot x_i(p) - w_i(p) \right] = 0,$$
using $\sum_{i=1}^M \theta_i^j = 1$ to distribute the firms' profits across consumers.
Notice that we have not said anything so far about the existence of equilibrium. Walras' Law holds (under the non-satiation assumption) as an identity for all prices, by virtue of the budget constraints being satisfied with equality for all consumers.
Existence of Equilibrium
To show the existence of equilibrium in the general Arrow-Debreu model,
we will need the following technical result (which we will apply without proof).
Kakutani's Fixed-Point Theorem: Let $\Phi : X \rightrightarrows X$ be a correspondence from the compact, convex metric space $X$ to itself. If $\Phi$ is non-empty-valued, compact- and convex-valued, and upper-hemi-continuous, then there exists a point $\hat{x} \in \Phi(\hat{x})$.
To apply this result, we will construct a correspondence with the properties required by the theorem, and the resulting fixed point will then constitute an equilibrium. To construct the correspondence, we note first that the excess demand correspondence $z(p)$ is upper-hemi-continuous. This follows from the maximum theorem (which implies that individual excess demand correspondences are uhc) and standard aggregation results for uhc correspondences. We now also add the assumption that for all $i$, $\succeq_i$ is convex. With this assumption, $z(p)$ will also be convex-valued. Next, define the correspondence
$$B(x) = \left\{ p \in \Delta \mid p \cdot x = \max_{q \in \Delta} q \cdot x \right\} \quad \text{for } x \in \mathbb{R}^\ell.$$
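Concretely, over the unit simplex $p \cdot x$ is maximized by putting all price weight on the largest coordinates of $x$, so $B(x)$ is the face of the simplex spanned by those coordinates. A minimal sketch (the helper name `B_vertices` is ours, not from the text):

```python
# B(x) over the unit simplex: p . x is maximized by concentrating all
# weight on the argmax coordinates of x, so B(x) is the face of the
# simplex spanned by those vertices.

def B_vertices(x, tol=1e-12):
    """Indices of the coordinates of x on which maximizing prices live."""
    m = max(x)
    return [k for k, xk in enumerate(x) if xk > m - tol]

assert B_vertices([0.2, 0.7, 0.1]) == [1]      # all weight on good 2
assert B_vertices([0.5, 0.5, -1.0]) == [0, 1]  # any mix of goods 1 and 2
```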
Problem: To apply the Kakutani theorem, we need the excess demand correspondence to be compact-valued. If some consumer is globally non-satiated, then $z(p)$ will not be compact-valued for any $p \in \partial\Delta$. In this case, however, we know that such a price can't be an equilibrium. If, on the other hand, preferences are such that some $p_k = 0$ is consistent with equilibrium, then $z$ will be compact-valued at such prices. Hence, we make the following modification. For $K \subseteq \mathbb{R}^\ell$ with $0 \in \operatorname{int} K$, and $K$ compact and convex, define $z_K(p) = z(p) \cap K$. If $K$ is sufficiently large, then at any equilibrium price $p^*$, the excess demand will be such that $z(p^*) \subseteq K$. With this modification:

1. $z_K(p)$ is non-empty, compact- and convex-valued, and uhc for all $p \in \Delta$.
2. With $B : K \rightrightarrows \Delta$ defined as above, $B$ is uhc (by the maximum theorem), and compact- and convex-valued. Compact-valuedness follows from the compactness of $\Delta$, while convex-valuedness follows from the fact that if $p_1, p_2 \in B(x)$, then $p_1 \cdot x = p_2 \cdot x$, so for any $\lambda \in [0, 1]$, $[\lambda p_1 + (1 - \lambda) p_2] \cdot x = \max_{q \in \Delta} q \cdot x$.
The example is a simple pure exchange economy in which there are 3 agents who trade 3 goods. We specify person 1's utility function as
$$u_1(x_1, x_2, x_3) = -\left( \frac{b^3}{x_1^2} + \frac{1}{x_2^2} \right)$$
for $b > 3$, and let $\omega_1 = [1, 0, 0]$. The preferences and endowments for agents 2 and 3 are then obtained by cyclically permuting the indices on the goods and prices. Thus, for example, agent 2 will have the same utility function as agent 1, except that $x_1$ would be replaced by $x_2$, $x_2$ by $x_3$, and $x_3$ by $x_1$. Similarly, agent 3 would have the same utility function, with $x_3$ replacing $x_1$, $x_1$ replacing $x_2$, and $x_2$ replacing $x_3$. Agent 2's endowment is then $\omega_2 = [0, 1, 0]$, and agent 3's is $\omega_3 = [0, 0, 1]$.
With these specifications we calculate demand functions in the usual way: equating agent 1's MRS to the ratio of the prices yields the first-order condition
$$\left( \frac{b x_2}{x_1} \right)^3 = \frac{p_1}{p_2}.$$
Solving this for $x_2$ in terms of $x_1$, substituting into the budget constraint, and solving the resulting equation for $x_1$ then yields the demand functions
$$x_1 = \frac{b p_1^{2/3}}{b p_1^{2/3} + p_2^{2/3}}, \qquad x_2 = \frac{p_1}{b p_1^{2/3} p_2^{1/3} + p_2}.$$
One obtains the demand functions for the other agents similarly.
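As a quick numerical check (a sketch; the parameter values are arbitrary), the closed-form demands satisfy agent 1's budget constraint and first-order condition:

```python
# Numerical check of agent 1's demand functions in the example
# (the closed forms below are those derived in the text).

def demand_agent1(p1, p2, b):
    """Agent 1's demands for goods 1 and 2 at prices (p1, p2)."""
    x1 = b * p1 ** (2 / 3) / (b * p1 ** (2 / 3) + p2 ** (2 / 3))
    x2 = p1 / (b * p1 ** (2 / 3) * p2 ** (1 / 3) + p2)
    return x1, x2

b, p1, p2 = 5.0, 1.3, 0.7
x1, x2 = demand_agent1(p1, p2, b)

# Budget constraint: endowment [1, 0, 0] gives wealth p1.
assert abs(p1 * x1 + p2 * x2 - p1) < 1e-12

# First-order condition: (b x2 / x1)^3 = p1 / p2.
assert abs((b * x2 / x1) ** 3 - p1 / p2) < 1e-9
```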
Now, make a change of variable $\alpha_i = p_i^{1/3}$ and substitute into the demand functions you have obtained. By Walras' Law, it suffices to consider only the demands (and excess demands) for goods 1 and 2. Making this substitution, we obtain excess demand functions
$$z_1 = \frac{b \alpha_1^2}{b \alpha_1^2 + \alpha_2^2} + \frac{\alpha_3^3}{b \alpha_3^2 \alpha_1 + \alpha_1^3} - 1, \qquad z_2 = \frac{\alpha_1^3}{b \alpha_1^2 \alpha_2 + \alpha_2^3} + \frac{b \alpha_2^2}{b \alpha_2^2 + \alpha_3^2} - 1.$$
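As a sanity check on these expressions, both excess demands vanish at the symmetric equilibrium $\alpha_1 = \alpha_2 = \alpha_3 = 1$ for any $b$ (a sketch):

```python
# Sanity check: at the symmetric equilibrium alpha = (1, 1, 1),
# both excess demands should vanish for any value of b.

def z1(a1, a2, a3, b):
    return b * a1**2 / (b * a1**2 + a2**2) + a3**3 / (b * a3**2 * a1 + a1**3) - 1

def z2(a1, a2, a3, b):
    return a1**3 / (b * a1**2 * a2 + a2**3) + b * a2**2 / (b * a2**2 + a3**2) - 1

for b in (4.0, 10.0, 100.0):
    assert abs(z1(1, 1, 1, b)) < 1e-12
    assert abs(z2(1, 1, 1, b)) < 1e-12
```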
[Figure 1 omitted: plot in the $\alpha$ coordinates.]
To analyze the stability of the tatonnement dynamics (expressed in the $\alpha$ coordinates), we compute the Jacobian of $(z_1, z_2)$ with respect to $(\alpha_1, \alpha_2)$. Evaluating at the equilibrium $\alpha_1 = \alpha_2 = \alpha_3 = 1$ then yields
$$Dz = \begin{bmatrix} \dfrac{b-3}{(b+1)^2} & -\dfrac{2b}{(b+1)^2} \\[8pt] \dfrac{b+3}{(b+1)^2} & \dfrac{b-3}{(b+1)^2} \end{bmatrix},$$
so the characteristic roots solve
$$\lambda^2 - \frac{2(b-3)}{(b+1)^2}\,\lambda + \frac{3\left(b^2 + 3\right)}{(b+1)^4} = 0.$$
Clearly, the roots are complex conjugates with positive real parts as long
as b > 3, so the Walras tatonnement is completely unstable at the unique
competitive equilibrium for this economy.
Geometrically, the dynamic system corresponding to the Walras tatonnement for this economy spirals away from the competitive equilibrium, and
approaches a limit cycle, as illustrated in Figure 2.
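The instability can also be checked numerically by finite-differencing the excess demands at the equilibrium; the trace and determinant of the resulting $2 \times 2$ Jacobian imply complex roots with positive real part whenever $b > 3$ (a sketch; the closed-form values used for comparison are $2(b-3)/(b+1)^2$ and $3(b^2+3)/(b+1)^4$):

```python
# Finite-difference the Jacobian of (z1, z2) with respect to
# (alpha1, alpha2) at alpha = (1, 1, 1), holding alpha3 fixed, and
# verify that the eigenvalues are complex with positive real part.

def z1(a1, a2, a3, b):
    return b * a1**2 / (b * a1**2 + a2**2) + a3**3 / (b * a3**2 * a1 + a1**3) - 1

def z2(a1, a2, a3, b):
    return a1**3 / (b * a1**2 * a2 + a2**3) + b * a2**2 / (b * a2**2 + a3**2) - 1

def jacobian(b, h=1e-6):
    J = [[0.0, 0.0], [0.0, 0.0]]
    for col, (d1, d2) in enumerate([(h, 0.0), (0.0, h)]):
        J[0][col] = (z1(1 + d1, 1 + d2, 1, b) - z1(1 - d1, 1 - d2, 1, b)) / (2 * h)
        J[1][col] = (z2(1 + d1, 1 + d2, 1, b) - z2(1 - d1, 1 - d2, 1, b)) / (2 * h)
    return J

for b in (4.0, 10.0):
    (j11, j12), (j21, j22) = jacobian(b)
    trace, det = j11 + j22, j11 * j22 - j12 * j21
    # Complex roots with positive real part: trace > 0 and trace^2 < 4 det.
    assert trace > 0 and trace * trace < 4 * det
    # Compare with the closed forms.
    assert abs(trace - 2 * (b - 3) / (b + 1) ** 2) < 1e-4
    assert abs(det - 3 * (b * b + 3) / (b + 1) ** 4) < 1e-4
```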
Research over the past decade into the problem of tatonnement has shown that it is always possible to construct a tatonnement procedure, i.e. to specify a differential equation of the form
$$\dot{p} = H[z(p)]$$
which converges to the competitive equilibrium for any given economy. Unfortunately, the specification of the appropriate function $H$ requires knowledge of all agents' preferences, at least up to the second derivatives of the utility functions. One such approach is based on a global version of Newton's Method for finding the zeros of a function. This approach, first derived by Smale, is based on the differential equation
$$D_p z(p)\,\dot{p} = -z(p),$$
which can be shown to converge to some competitive equilibrium for almost any starting value of $p$ in the unit simplex. Note, however, that because the Jacobian matrix $D_p z(p)$ will generally be non-diagonal, this procedure not only requires that we have information about the derivatives of the excess demand, it also requires some mechanism for coordinating the rates at which prices adjust in any given market with the rates of price adjustment in other markets. It is difficult to think of obvious economic institutions or mechanisms capable of implementing this kind of tatonnement procedure. Thus, while economists generally believe that the concept of a competitive equilibrium does manifest itself in the real world of commerce, we don't know how it gets implemented.
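A discrete analogue of this Newton-style adjustment can be sketched for the three-good example above, with good 3 as numeraire ($\alpha_3 = 1$); the starting point and parameter values are illustrative assumptions. Unlike the raw tatonnement, the iteration converges to the equilibrium $\alpha = (1, 1)$:

```python
# Newton-style adjustment: solve D z(a) da = -z(a) at each step, a
# discrete analogue of Smale's global Newton flow, for the example's
# excess demands with alpha3 = 1 as numeraire.

def z(a, b):
    a1, a2 = a
    z1 = b * a1**2 / (b * a1**2 + a2**2) + 1 / (b * a1 + a1**3) - 1
    z2 = a1**3 / (b * a1**2 * a2 + a2**3) + b * a2**2 / (b * a2**2 + 1) - 1
    return [z1, z2]

def jac(a, b, h=1e-7):
    """Finite-difference Jacobian columns of z at a."""
    cols = []
    for k in range(2):
        ap, am = list(a), list(a)
        ap[k] += h
        am[k] -= h
        zp, zm = z(ap, b), z(am, b)
        cols.append([(zp[0] - zm[0]) / (2 * h), (zp[1] - zm[1]) / (2 * h)])
    return [[cols[0][0], cols[1][0]], [cols[0][1], cols[1][1]]]

b = 5.0
a = [1.2, 0.8]  # illustrative starting prices (alpha coordinates)
for _ in range(50):
    (j11, j12), (j21, j22) = jac(a, b)
    f1, f2 = z(a, b)
    det = j11 * j22 - j12 * j21
    # One Newton step: a <- a - J^{-1} z(a).
    a = [a[0] - (j22 * f1 - j12 * f2) / det,
         a[1] - (-j21 * f1 + j11 * f2) / det]

assert abs(a[0] - 1) < 1e-8 and abs(a[1] - 1) < 1e-8
```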
One alternative to the idea of tatonnement processes is to postulate that economic equilibria, unlike those of physical systems, must be learned by the agents in the system. In physical systems, equilibria occur as natural "rest points" in dynamic processes based on fixed laws of motion of the system. Economic equilibrium, on the other hand, involves not only a physical rest point condition (market-clearing), but also a psychological condition (satisfaction of needs or wants) interacting with an artificial construct (prices) derived from the psychological condition. Indeed, it is easy to find market-clearing allocations: any bully can do it very effectively. It is less easy to find Pareto optimal allocations, although well-defined systems of property rights, together with enforcement mechanisms for ensuring that contracts are honored, make implementing Pareto-improving trades possible, which can lead under very simple search procedures to optimal allocations. But, as we already know, not every Pareto optimum is a competitive equilibrium. Thus, it seems likely that if the concept of the competitive equilibrium is to be useful in economic analysis, we need a mechanism for explaining how agents in the economy might learn what the competitive prices are.
Recent work by Gode and Sunder took a very different approach to the problem of implementing the competitive equilibrium. In their approach, they analyzed a partial equilibrium situation involving a single market with many interacting agents, based on the standard supply and demand experiments pioneered by Vernon Smith and Charlie Plott. In the experimental version of this market, one group of agents play the role of buyers, the other the role of sellers. Buyers can purchase one unit of the good, and this one unit is worth a given reservation price to them. Hence, if they buy the good for a price at or below their reservation value, they earn a profit. Sellers can each sell up to one unit of the good. If they sell their unit, they incur a given production cost. Hence, if they sell at a price at or above their cost, they make a profit. It is well established in the literature on experimental markets by now that human traders in this environment eventually end up trading the competitive amount of the good at prices that closely approximate the competitive equilibrium (i.e., transactions take place according to the price and aggregate quantity specified by the intersection of the supply and demand schedules for the experiment). We note, however, that agents in this experiment generally require several rounds of trading before they learn what the relevant equilibrium prices are, so that the data generated in such experiments exhibit a convergence of prices and quantities to the predicted competitive equilibrium prices and quantities, rather than an abrupt and direct implementation of the equilibrium.
Gode and Sunder asked whether this process of finding the right prices and allocations was one requiring very sophisticated learning, or whether it could be implemented with "zero intelligence" search procedures. They proceeded to replicate the basic experimental setup using computerized robots. The robot traders in their model generated simple random bids (if they were buyers) or offers (if they were sellers), with the only restriction on behavior being that no bid or offer should make an agent worse off. Thus, buyers were restricted to bid below their reservation price, while sellers were restricted to ask above their costs. In simulations of the model, Gode and Sunder found that while prices don't converge to the competitive equilibrium prices (as they do with human subjects), the infra-marginal prices (i.e. the prices of the last observed transactions) always occur at or near the CE price, while the efficiency of the market is in excess of 90% of the maximum (which occurs when the quantity of the good traded is the CE quantity). These results tell us that the double auction mechanism of the classic supply and demand experiment will implement the competitive equilibrium under very mild conditions on agents' behavior.
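A minimal sketch of a zero-intelligence double auction in this spirit (the valuations, costs, random matching, and midpoint pricing rule below are illustrative assumptions, not Gode and Sunder's exact design):

```python
import random

# Zero-intelligence trading sketch: buyers bid uniformly below their
# reservation values, sellers ask uniformly above their costs; a trade
# occurs when bid >= ask, at the midpoint price.

random.seed(0)
values = [1.0, 0.9, 0.8, 0.7, 0.6, 0.5]   # buyers' reservation values
costs  = [0.2, 0.3, 0.4, 0.5, 0.6, 0.7]   # sellers' unit costs

buyers  = list(range(len(values)))
sellers = list(range(len(costs)))
trades = []
for _ in range(10000):
    if not buyers or not sellers:
        break
    i = random.choice(buyers)
    j = random.choice(sellers)
    bid = random.uniform(0.0, values[i])   # never bid above value
    ask = random.uniform(costs[j], 1.0)    # never ask below cost
    if bid >= ask:
        trades.append((i, j, (bid + ask) / 2))
        buyers.remove(i)                   # each side trades one unit
        sellers.remove(j)

# Surplus efficiency: realized gains from trade over the maximum possible.
realized = sum(values[i] - costs[j] for i, j, _ in trades)
maximum = sum(max(v - c, 0.0) for v, c in zip(values, costs))
assert realized / maximum > 0.7
```

Even with random pairing, every executed trade has value above cost, so efficiency stays high; what is lost relative to the maximum comes from extramarginal mismatches, echoing the Gode-Sunder finding.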
The zero intelligence trading result does not, however, answer the question of whether the competitive paradigm can be implemented easily in environments where many agents trade many goods.
Follow-on work to the Gode and Sunder work by Gode, Spear and Sunder showed that, at least in the context of a 2-agent, 2-good exchange economy, simple random search easily finds Pareto optimal equilibria. (Notes on this research can be found on the course web-site in PDF format.) The random search process does not, however, find the competitive equilibrium. The reason for this is self-evident: the random search process generates a uniform set of random trajectories from the initial endowment to the contract curve. The ending allocations are, therefore, distributed on the contract curve about the average trajectory generated by the search procedure.
More recent research by Gode, Spear and Sunder has focused on the question of how much additional intelligence is required of agents in the zero-intelligence setting in order to implement the competitive equilibrium. The key observation is that a Pareto optimal allocation $\hat{x}$ is priced by the common normalized utility gradient
$$p = \frac{Du_i(\hat{x}_i)}{\left\| Du_i(\hat{x}_i) \right\|}.$$
Comment on Coordinates: Since we wish to work in the generalized Edgeworth box context, we will adopt a coordinate system consisting of the allocations received by the first $M - 1$ agents, the last agent's allocation then being defined as the difference between the total resources and what every other agent gets. For notational symmetry, we will denote the initial endowment allocation as
$$y_0^0 = \left[ x_i^0 \right]_{i=1}^M = (\omega_i)_{i=1}^M,$$
where the subscript $n$ in $y_n^0$ denotes the iteration of the algorithm within stage zero. In later stages $t$, we will denote these allocations as $y_n^t$.
At stage 0, we simply generate a near Pareto optimal allocation using the following procedure. Fix a number $\varepsilon > 0$ and a number $0 < r < 1$. For each agent $i = 1, \dots, M - 1$, let $Q_i^\varepsilon$ be the (boundary of the) cube of side $\varepsilon$ centered at $x_i^0$. Let
$$S_i^0 = Q_i^\varepsilon \cap \left\{ x \in \mathbb{R}^\ell_+ \mid u_i(x) \ge u_i(\omega_i) \right\}.$$
Now, choose a vector $z_i^0$ from $Q_i^\varepsilon$ at random, taking the probability measure on $Q_i^\varepsilon$ as (normalized) Lebesgue measure. Since the at-least-as-good-as set $\left\{ x \in \mathbb{R}^\ell_+ \mid u_i(x) \ge u_i(\omega_i) \right\}$ has open interior (as long as the initial endowment is not Pareto optimal), there is a positive probability that $z_i^0 \in S_i^0$. If we take a sequence of independent draws from the distribution on $Q_i^\varepsilon$, the Borel-Cantelli lemma implies that we will realize a vector in $S_i^0$ with probability one. Since the process of taking independent draws from a uniform distribution on $Q_i^\varepsilon$ will generate vectors which are either in $S_i^0$ or in $Q_i^\varepsilon \setminus S_i^0$, we can apply standard formulas for determining waiting times for events drawn according to a binomial distribution to estimate how many realizations it takes on average to obtain a vector in $S_i^0$. Let $q = \operatorname{prob}\left(Q_i^\varepsilon \setminus S_i^0\right)$. Then, the probability that it takes more than $r$ tries to obtain the first vector in $S_i^0$ is given by
$$\operatorname{prob}(r) = q^r.$$
We can use this formula to determine how many trials we need, given $q$, in order to ensure that the probability of not obtaining a vector in $S_i^0$ in $r$ tries is less than any desired probability. Letting $\hat{b}$ be the desired probability, we find
$$r = \frac{\ln \hat{b}}{\ln q}.$$
For the case where $q = 0.5$ and we want the waiting probability to be $\hat{b} = 0.00001$, this yields a value of $r \approx 16$. For $\hat{b} = 0.000001$, we get $r \approx 20$.
Note that as $q \to 1$ (so that the improving region for this individual becomes vanishingly small), the expected waiting times will diverge. We will return to this issue later in our discussion of the simulations.
If the endowment allocation is itself Pareto optimal, then, since the Pareto set has Lebesgue measure zero in the generalized Edgeworth box, it will follow that the value of $q$ is one, in which case the waiting time we calculated above will be infinite. We can use this fact to use the number of draws we make as a criterion for determining whether we are close to an optimal allocation, in the sense that $q$ is close to one. If, for example, we take our target value of $q = 0.99$ with a target probability of $\hat{p} = 0.01$, then $r \approx 460$. To keep $\hat{p} = 0.00001$, we would require $r \approx 1147$.
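The waiting-time formula is easy to tabulate (a sketch; `draws_needed` is our hypothetical helper, and rounding up gives figures close to those quoted in the text):

```python
import math

# Number of draws r needed so that the probability of not yet hitting
# the improving set is below a target b_hat, given q = prob(miss on a
# single draw): q**r <= b_hat  =>  r = ln(b_hat) / ln(q).

def draws_needed(q, b_hat):
    return math.ceil(math.log(b_hat) / math.log(q))

assert draws_needed(0.5, 0.00001) == 17     # text reports roughly 16
assert draws_needed(0.5, 0.000001) == 20
assert draws_needed(0.99, 0.01) == 459      # text reports roughly 460
assert draws_needed(0.99, 0.00001) == 1146  # text reports roughly 1147
```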
Once we obtain a collection of vectors $\left( z_i^0 \right)_{i=1}^{M-1}$, we form the new allocation $y_1^0 = \left[ x_i^1 \right]_{i=1}^M$ by taking $x_i^1 = z_i^0$ for $i = 1, \dots, M - 1$, and
$$x_M^1 = \sum_{i=1}^M \omega_i - \sum_{i=1}^{M-1} x_i^1.$$
If $u_M\left(x_M^1\right) \ge u_M(\omega_M)$, then the allocation $y_1^0$ Pareto improves on $y_0^0$, and we adopt the new allocation. If the new allocation does not Pareto improve on the old, we repeat stage 0 until we reach the iteration bound, at which point we conclude that the allocation is near optimal. If the allocation is not near optimal, we repeat the procedure starting from $y_1^0$ to obtain a new improving allocation $y_2^0$, and we continue this process until we obtain a near optimal allocation, which we denote as $y^1$.
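A minimal sketch of this stage-0 random search for a 2-agent, 2-good exchange economy (the Cobb-Douglas-style utilities, endowments, and parameters are illustrative assumptions, not from the text): draws near the current allocation are accepted whenever both agents weakly improve, and the process drifts toward the contract curve.

```python
import random

# Stage-0 random search sketch: draw agent 1's allocation from a small
# cube around the current point; accept when both agents weakly improve.

random.seed(1)

def u(x):
    return x[0] * x[1]            # illustrative utility for both agents

total = [1.0, 1.0]                # aggregate endowment
x1 = [0.9, 0.1]                   # agent 1's current allocation
eps = 0.05                        # side of the sampling cube

def x2(x1):
    """Agent 2 gets the remainder of the aggregate endowment."""
    return [total[0] - x1[0], total[1] - x1[1]]

for _ in range(20000):
    cand = [min(max(x1[k] + random.uniform(-eps, eps), 0.0), total[k])
            for k in range(2)]
    if u(cand) >= u(x1) and u(x2(cand)) >= u(x2(x1)):
        x1 = cand

# For these utilities the contract curve is the diagonal x1[0] == x1[1],
# so the search should end near it.
assert abs(x1[0] - x1[1]) < 0.1
```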
Stage t+1
Given an allocation from stage $t$, $y^t$, we price this allocation using the common normalized gradients
$$p^t = \frac{Du_i\left(\hat{x}_i^t\right)}{\left\| Du_i\left(\hat{x}_i^t\right) \right\|},$$
where $\hat{x}_i^t$ is agent $i$'s final allocation at the end of stage $t$.
Now, define the $i$th agent's loss at this allocation as
$$\ell_i^t = p^t \cdot \left( \hat{x}_i^t - \omega_i \right).$$
If $\ell_i^t < 0$, then we say that agent $i$ is subsidizing other agents. We note that if no agent in the economy is providing any subsidies, then we must be at a competitive equilibrium, since $\ell_i^t \ge 0$ for all $i$ implies
$$p^t \cdot \sum_{i=1}^M \left( \hat{x}_i^t - \omega_i \right) \ge 0,$$
while feasibility requires that the net trades sum to zero, so that in fact $\ell_i^t = 0$ for every $i$.
At stage $t + 1$, the random draws are required to satisfy
$$p^t \cdot \left( x_i^{t+1} - \omega_i \right) > \ell_i^t + \delta,$$
where $\delta$ is small and positive, for any $i$ such that $\ell_i^t < 0$. These constraints guarantee that any agent who was providing subsidies at stage $t$ will be providing smaller subsidies at stage $t + 1$.
Note that in passing from one stage to the next, we must always carry along the subsidization constraints, even if we move to a new allocation in which an agent who was providing a subsidy in the previous stage is receiving a subsidy in the current stage. If we forget the past subsidization constraint, then the algorithm could go back to an allocation in which this agent was again making losses, possibly larger than in the previous stage. Hence, at each stage $t$, the data required for each agent is $\left[ \hat{x}_i^t, \bar{\ell}_i, \bar{p} \right]$, where $\left( \bar{\ell}_i, \bar{p} \right)$ are the loss and price in the last stage at which agent $i$ incurred a loss. We illustrate this in Figure 3, for the 2-by-2 economy. Here, at some earlier allocation marked A, agent 1 incurred a loss, so subsequent generations of optimal allocations must lie above the upper line parallel to the tangent line at A. Similarly, at a different stage, agent 2 incurred a loss at the allocation marked B, so subsequent allocations must lie below the lower line parallel to the tangent line at B. This restricts the set of potential reallocations that the ZI mechanism can draw from to those in the grey shaded area. In the absence of the loss constraints, the set of potential reallocations would be the full lens-shaped region between the pair of indifference curves containing the endowment point.
Since only the CE allocation has no agent making a loss, the manner in which we increment the loss constraints implies that the set of potential allocations from which the ZI algorithm selects must decrease in size from one stage to the next, which implies in turn that the ZI algorithm must converge to an approximate competitive equilibrium.
This algorithm improves significantly on the tatonnement results, to the extent that it requires only that agents be able to price Pareto optimal allocations, a process which requires information only about the common normalized utility gradient at the optimal point, rather than information about both first- and second-derivatives of all agents' utility functions. It also provides a more realistic foundation for actually implementing the competitive equilibrium than does the fictitious price-adjusting auctioneer, based on standard bargaining theory. In this framework, once agents learn that they are in fact subsidizing other agents, they may use the threat of refusing to trade with agents who are benefitting from this subsidization to extract concessions (in the form of trades which reduce the degree of subsidization). As we will see in the following section, this threat is made quite credible by the fact that in a large economy, it is always possible for subsets of agents (trading among themselves) to Pareto improve on any non-CE allocation. Finally, the implication that the competitive equilibrium must be learned also explains the recurrence of the idea throughout economic history of competitive prices as "normal" prices around which actual market prices fluctuate.
Welfare Properties of Walrasian Equilibrium
While our results on learning and competitive equilibrium suggest that markets may not attain equilibrium quite as quickly as economists had typically assumed, they do provide robust support for the notion of the competitive equilibrium as an economic attractor, in the sense that realistic trading processes will tend toward the competitive equilibrium given sufficient time for learning to take place. Thus, we are justified in asking what properties the competitive equilibrium exhibits, and the most important of these are its welfare properties.
Definition: The set of individually rational feasible allocations in $\mathcal{E}$ is defined by
$$IR(\mathcal{E}) = \left\{ (x, y) \in F(\mathcal{E}) \mid \forall i,\ x_i \succeq_i \omega_i \right\}.$$
Proposition 2: If $(x^*, y^*, p^*) \in WE(\mathcal{E})$, then $(x^*, y^*) \in IR(\mathcal{E})$.
Proof: Exercise.
First Welfare Theorem: If $(x^*, y^*, p^*) \in WE(\mathcal{E})$ and preferences are locally non-satiated, then $(x^*, y^*)$ is Pareto optimal.
Proof: Suppose, to the contrary, that $(\tilde{x}, \tilde{y}) \in F(\mathcal{E})$ Pareto improves on $(x^*, y^*)$. Local non-satiation implies that each budget constraint holds with equality, and that $\tilde{x}_i \succeq_i x_i^*$ requires $p^* \cdot \tilde{x}_i \ge w_i(p^*) = p^* \cdot x_i^*$, with strict inequality for any strictly preferring agent. Summing over consumers,
$$p^* \cdot \sum_{i=1}^M \tilde{x}_i > p^* \cdot \sum_{i=1}^M x_i^* = p^* \cdot \left( \sum_{i=1}^M \omega_i + \sum_{j=1}^J y_j^* \right) \ge p^* \cdot \left( \sum_{i=1}^M \omega_i + \sum_{j=1}^J \tilde{y}_j \right) = p^* \cdot \sum_{i=1}^M \tilde{x}_i,$$
which is a contradiction. (The second inequality uses profit maximization: $p^* \cdot y_j^* \ge p^* \cdot \tilde{y}_j$ for all $j$.)

For the second welfare theorem, we need the following assumptions:

1. Each $\succeq_i$ is locally non-satiated.
2. For each $i$, the set $\left\{ x' \in X_i \mid x' \succeq_i x \right\}$ is convex for all $x \in X_i \subseteq \mathbb{R}^\ell_+$.
3. Each $Y_j$ is convex.
4. Free disposal: $-\mathbb{R}^\ell_+ \subseteq Y$.
To prove the second welfare theorem, let $(x^*, y^*)$ be a Pareto optimal allocation; we seek a price vector $p$ supporting it, in the sense that each $x_i^*$ is optimal in the budget set $\{ x \in X_i \mid p \cdot x \le w_i(p) \}$ and each $y_j^*$ maximizes profit at $p$. Define the set
$$G = \left\{ x \mid x \succ_1 x_1^* \right\} + \sum_{i=2}^M \left\{ x \mid x \succeq_i x_i^* \right\} - \sum_{j=1}^J Y_j.$$
(a) Claim: $\omega \notin G$. If $\omega \in G$, then there exists $(\tilde{x}, \tilde{y})$ such that for all $j = 1, \dots, J$, $\tilde{y}_j \in Y_j$, and
$$\omega = \sum_{i=1}^M \omega_i = \sum_{i=1}^M \tilde{x}_i - \sum_{j=1}^J \tilde{y}_j,$$
i.e. $(\tilde{x}, \tilde{y}) \in F(\mathcal{E})$, with $\tilde{x}_1 \succ_1 x_1^*$ and $\tilde{x}_i \succeq_i x_i^*$ for $i = 2, \dots, M$. But this contradicts the assumed optimality of $(x^*, y^*)$.
(b) By assumptions 2 and 3, the set $G$ is convex. By the separating hyperplane theorem (Minkowski's theorem), then, and the fact that $\omega \notin G$, there exists $p \in \mathbb{R}^\ell$, $p \neq 0$, such that
$$p \cdot \sum_{i=1}^M \omega_i \le p \cdot \left( \sum_{i=1}^M \tilde{x}_i - \sum_{j=1}^J \tilde{y}_j \right)$$
for every $\left( \sum_i \tilde{x}_i - \sum_j \tilde{y}_j \right) \in G$.
(c) Claim: $p$ is a price system. We need to show that $p \in \mathbb{R}^\ell_+$. Suppose for some good $k$ (w.l.o.g. $k = 1$) we have $p_1 < 0$. Then by assumption 4, $-\delta e_1 \in Y$, where $e_1 = [1, 0, \dots, 0]$ and $\delta > 0$. By the argument in (b), for any $\tilde{x}$ such that for all $i = 1, \dots, M$, $\tilde{x}_i \in X_i$, $\tilde{x}_1 \succ_1 x_1^*$, and $\tilde{x}_i \succeq_i x_i^*$ for $i = 2, \dots, M$, we have that
$$p \cdot \sum_{i=1}^M \omega_i \le p \cdot \left( \sum_{i=1}^M \tilde{x}_i + \delta e_1 \right) = p \cdot \sum_{i=1}^M \tilde{x}_i + \delta p_1$$
for any $\delta > 0$. But this is impossible for sufficiently large values of $\delta$.
(d) Claim: Firms maximize profits. Let $\left\{ x_1^q \right\}_{q=1}^\infty$ be a sequence such that for all $q$, $x_1^q \in X_1$ and $x_1^q \succ_1 x_1^*$, and $x_1^q \to x_1^*$. (Note that the existence of such a sequence is guaranteed by assumption 1.) Let $y \in Y$. Then by the argument in (b),
$$p \cdot \sum_{i=1}^M \omega_i \le p \cdot \left( x_1^q + \sum_{i=2}^M x_i^* - y \right).$$
Taking limits as $q \to \infty$ yields
$$p \cdot \sum_{i=1}^M \omega_i \le p \cdot \left[ \sum_{i=1}^M x_i^* - y \right]$$
for every $y \in Y$. Now take $y = y_1 + \sum_{j=2}^J y_j^*$ for an arbitrary $y_1 \in Y_1$, so that
$$p \cdot \sum_{i=1}^M \omega_i \le p \cdot \left[ \sum_{i=1}^M x_i^* - y_1 - \sum_{j=2}^J y_j^* \right] = p \cdot \sum_{i=1}^M \omega_i + p \cdot y_1^* - p \cdot y_1,$$
using the feasibility condition $\sum_i x_i^* = \sum_i \omega_i + \sum_j y_j^*$. Hence $p \cdot y_1^* \ge p \cdot y_1$; that is, $y_1^*$ maximizes firm 1's profit at $p$, and the argument for the other firms is identical. Finally, if
$$p \cdot x_i^* > \inf \left\{ p \cdot x \mid x \in X_i \right\},$$
a standard argument shows that $x_i^*$ is maximal for $\succeq_i$ in consumer $i$'s budget set at prices $p$.
Example: Consider a 2 × 2 exchange economy, and assume that there is one unit of each commodity available to allocate among agents.
The two agent types are denoted by A and B, and are completely characterized by their preferences and endowments. Type A agents' preferences are given by the utility function
$$u_A\left(x^A\right) = \min\left\{ x_1^A,\ 2x_2^A \right\},$$
and type B agents' preferences by
$$u_B\left(x^B\right) = \min\left\{ 2x_1^B,\ x_2^B \right\}.$$
Endowments are denoted by $\omega^A$ for type A agents, and by $\omega^B$ for type B agents.
1. Find the Pareto set and illustrate it in the Edgeworth box.
2. Consider the case where
$$\omega^A = [\varepsilon, \varepsilon], \qquad \omega^B = [1 - \varepsilon, 1 - \varepsilon],$$
for small $\varepsilon > 0$.