
Organisational Economics

Introduction to Incentives
Hongyi Li
July 2016

1 Introduction: The Principal-Agent Framework


In this lecture, we will take a first look at what is called the Principal-Agent Problem. Basically, we will discuss how to motivate one party to act on behalf of another.
Premise: One party (the Principal) needs the other party (the Agent) to perform a task on
his behalf
e.g. Principal = Employer, Agent = Employee
Problem: principal and agent have different objectives, so agent may not do what principal
wants him to do.
Differing objectives may take many forms. One example that will be the focus of this lecture:
lazy agents
Principal wants agent to work hard, but agent reluctant to exert effort.
Principal can motivate agent by rewarding agent for effort.
Let's write down a model of this situation.
Basic idea: model is a game between principal and agent
As with any game, we need to specify: (i) objectives of players, (ii) rules of the game

1.1 A Simple Principal-Agent Model

Objectives are represented by payoff functions for each player: a higher value for a player's payoff function means the player is more satisfied.
The Principal's payoff function is: Up = x − w
We may think of Up as net profit of principal.
x is the agent's output, which we assume equals the amount of effort exerted by the
agent: x = e.
w is total wage paid to agent
The Agent's payoff function is: Ua = w − (1/2)ce²
agent is happier when w is higher

has to exert effort to increase the Principal's revenue; dislikes exerting effort


Now, the rules of the game. (i.e., how can the principal reward the agent?)
For now, make it simple: the principal offers to pay the agent a share α of output, i.e. w = αe.
(We'll consider more sophisticated incentive schemes later.)
Principal gets to choose α, i.e. he decides how strongly he wants to motivate the agent.
So the game proceeds as follows:
1. Principal chooses α, i.e. offers the agent incentive scheme w = αx
2. Agent (who learns α) chooses effort level e
3. Principal pays the agent wage w = αx

1.2 Solving the Model

Let's analyze this game, i.e. figure out what the players will do.
We'll do this by working backwards. We'll first figure out how much effort the agent will exert, given the incentive scheme that the principal offers. Then, we'll put ourselves in the principal's shoes: anticipating the agent's response to his offers, he chooses the offer that will make the most profit for himself.

1.3 The Agent's Maximization Problem

First, consider the decision of the agent (that is, how much effort to exert) after he receives an offer w = αe from the principal.
after the principal makes the offer, the agent knows he will receive a wage of αx = αe
So, for a given choice of e, the agent's payoff is Ua = αe − (1/2)ce².
Our agent is rational, so he will choose e to exactly maximize his payoff, Ua:

    e* = arg max_e [ αe − (1/2)ce² ]    (1)
       = α/c.                           (2)
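To see where (2) comes from, take the first-order condition of the agent's problem (with c > 0, as the effort-cost interpretation requires):

    d/de [ αe − (1/2)ce² ] = α − ce = 0   ⇒   e* = α/c,

and the second derivative is −c < 0, so this is indeed a maximum.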

This means that as the principal increases incentive strength α, the agent responds by increasing effort.
So, our first insight: agent responds to incentives!

1.4 The Principal's Maximization Problem

Next, consider the principal's problem.
Assume that the principal is rational: he can anticipate perfectly how the agent will respond to his choice of incentive scheme (specifically, e* = α/c).

So, the principal anticipates that his profit will be

    Up = x − w
       = e* − αe*
       = α/c − α²/c.

He chooses α to maximize his profit; so α = 1/2.


This means that for every dollar of revenue that the principal receives (from the efforts of the
agent), he gives half to the agent.

1.5 Efficiency

Now, let's take a step back and look at the outcome of the game played between the principal and agent, from the eyes of a third party whom we call the planner.
In particular, we're interested in what the planner would like the principal and agent to do. But first we have to specify the preferences of the planner.
To make things simple, suppose this planner is benevolent and seeks to maximize total value, i.e. the sum of the principal's and agent's payoffs:

    Utotal = Up + Ua.    (3)

(For example, we might think of the planner as the government.) What would be the preferred
outcome from the planner's point of view?
We call the planner's preferred outcome the efficient outcome.
But note that there are many possible notions of efficiency. Right now, we focus on the
simple notion that efficiency is about maximizing total value.
Let's write out the expression for Utotal:

    Utotal = Up + Ua                      (4)
           = (x − w) + (w − (1/2)ce²)     (5)
           = x − (1/2)ce²                 (6)
           = e − (1/2)ce².                (7)

Notice that the planner doesn't care about w: an increase in w of 1 dollar just means that the agent gets 1 dollar more while the principal gets 1 dollar less, which cancels out in the calculation of total value. All the planner cares about is the effort level.
We can calculate what effort level the third party prefers. He seeks to maximize e − (1/2)ce²; the first-order condition 1 − ce = 0 shows that this expression is maximized at e = 1/c.

If the planner could force everyone involved to do what he wanted, he would choose this efficient effort level, which we denote as e_eff = 1/c.
Compare this to the outcome of the game played between the principal and agent: e* = (1/2)/c = 1/(2c). Effort is lower than the efficient level. In other words, the outcome of the game is inefficient from the point of view of someone whose goal is to maximize total payoffs. Why?
The principal can induce the agent to put in effort by increasing the incentive strength α. In fact, he can get the agent to put in the efficient effort e = 1/c by choosing incentive strength α = 1.
But remember that when the principal chooses incentive strength α, he has to share α of every dollar of revenue that he receives with the agent.
If he chooses α = 1, he'd have to give away all of his revenue to the agent, so he would make zero profit! Instead, to maximize profits, he raises α high enough to induce the agent to put in some effort, but not so high that he has to give away too much of the revenue. This optimum occurs at α = 1/2.
Put another way, in this game, there is a conflict between efficiency and profit maximization for the principal. The principal prioritizes profit maximization; so he chooses incentives that are weaker than what the planner would consider optimal, so as to keep more profit for himself.
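To put numbers on this trade-off, write the principal's profit as a function of the incentive strength, Up(α) = α/c − α²/c, and compare the two candidate choices:

    Up(1/2) = 1/(2c) − 1/(4c) = 1/(4c),   whereas   Up(1) = 1/c − 1/c = 0.

The efficient incentive strength α = 1 induces efficient effort but hands every dollar of revenue to the agent; the profit-maximizing choice α = 1/2 keeps half of each dollar for the principal, at the cost of lower effort.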

2 Enriching the Incentive Scheme


Now, let's take a further step back, and re-examine the assumptions of our model.
One of the rules of the game was: the principal offers an incentive scheme of the form w = αe.
In other words, incentives are strictly in the form of a share of the agent's effort/output; the agent never loses any money no matter how little effort he puts in.
What if we allow negative rewards: that is, what if the principal can ask the agent for
a participation fee?
Let's modify the rule about incentive schemes to give the principal more flexibility. Now the principal can offer a fixed payment β plus a variable bonus αx: w = β + αx.
In particular, if the fixed payment β < 0, this means that if the agent puts in zero effort, his wage is negative:

    w = β + α·0    (8)
      = β < 0.     (9)

This may be interpreted as a participation fee, or alternatively, a fine for poor performance. (But either way, the model works the same!)

2.1 Introducing Outside Options to the Model

But let's be careful about adding this flexibility to the model. To understand why, suppose you are the agent, and the principal says "I offer you an incentive scheme with fixed payment β = −1000000 and bonus α = 1." What would your response be?
"No way! I'd be guaranteed to lose money that way. I'd rather do nothing, or find another job with better pay."
So we should add the following feature to the model, to capture the point that the agent has
to be willing to accept the incentive scheme chosen by the principal:
The agent has the chance to either accept or reject the incentive scheme offered by the
principal.
If the agent accepts the offer, then the game proceeds as usual: the agent chooses an
effort level, and the principal pays the agent a wage based on the chosen incentive scheme.
If he rejects the offer, then the agent receives an outside option payoff U0 , and the game
ends without the agent working for the principal (so the principal receives zero payoff).
This outside option U0 is a fixed number that doesn't depend on the decisions of either player. We may think of it as the payoff that the agent expects to receive if he doesn't work for the principal, and instead takes a vacation, or perhaps finds other employment.
To simplify things, we'll assume for now that U0 is small enough that the outside option is not so lucrative that the agent always rejects the principal's offer.
Once we add this exit option, the game now proceeds as follows:

1. Principal chooses an incentive scheme w = β + αe to offer the agent
2. Agent chooses whether to accept the offer. If the agent rejects, then he receives outside option payoff U0, and the game ends.
3. If the agent accepts, then he chooses effort level e
4. (If the agent accepts, then) Principal pays the agent wage w = β + αe.

2.2 The Agent's Maximization Problem

To calculate the outcome, we'll again work backwards. Suppose that we're at step 3, so that the agent has accepted an incentive scheme of w = β + αe from the principal, and has to choose an effort level.
Given this incentive scheme, the agent chooses effort to maximize his payoff

    Ua = w − (1/2)ce²           (10)
       = β + αe − (1/2)ce²;     (11)

solving the maximization problem, we get


    e* = α/c.    (12)

Note that this is exactly the same relationship between incentive strength and effort as in the earlier model with no fixed payments (w = αe).
So, the fixed payment β doesn't affect the agent's effort level; only the incentive strength α does.
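To see why β drops out, note that it enters the agent's payoff additively, so it disappears from the first-order condition:

    d/de [ β + αe − (1/2)ce² ] = α − ce = 0   ⇒   e* = α/c.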
Next, let's move one step back, to step 2. Here, the agent has received an offer w = β + αx from the principal, and is deciding whether to accept it.
As always, he tries to make this decision to maximize his payoff. So, let's compare.
If he rejects the offer, then he gets his outside option payoff U0.
If he accepts the offer, then he anticipates that he will receive

    Ua = w* − (1/2)c(e*)²    (13)

where e* is the effort he will choose in step 3, and w* = β + αx* = β + αe* is the wage he will receive based on e*.
So, comparing the two payoffs, he will choose to accept the offer if and only if Ua ≥ U0.
To summarize: by including an exit option for the agent in step 2, we ensure that the agent
will only accept an incentive scheme that gives him at least as much payoff as his outside
option.

2.3 The Principal's Maximization Problem

Now, let's move to step 1. Here, the principal wants to choose an incentive scheme w = β + αx to maximize his payoff

    x* − w* = e* − (β + αe*).    (14)

But he is subject to the restriction that the agent has to be willing to accept his offer. This means, from step 2, that the agent must receive at least his outside option payoff if he accepts the principal's offer:

    Ua = w* − (1/2)c(e*)² ≥ U0.    (15)
This looks complicated! So, let's try to break it down piece by piece to understand the
underlying logic.
Our first step is to show that the principal will choose an incentive scheme to ensure that the agent's payoff exactly equals his outside option level:

    Ua = U0.    (16)

Why? Suppose that the incentive scheme is such that the agent receives more than his
outside option payoff. Then this cannot be the profit-maximizing incentive scheme for
the principal.
In fact, the principal can increase his profits further by modifying the incentive scheme slightly: he can reduce the fixed payment β a little bit.
By doing so, the principal increases his profits because he pays the agent slightly less in wages. But at the same time, he doesn't take so much that the agent rejects his offer.
So, to maximize profits, he takes just enough that the agent is barely willing to work
for him, i.e. Ua = U0 .
In other words: the principal will choose an incentive scheme that ensures that the
agent gets exactly his outside option.
Our next step is to show that the principal's maximization problem is equivalent to the planner's maximization problem. This looks like a bit of a mathematical trick, so make sure you understand what's going on in terms of economics.
First, let's rewrite the equation we just obtained, Ua = U0, to become Ua − U0 = 0.
This means that as long as we know that the agent gets exactly his outside option, we can rewrite the principal's payoff function as

    Up′ = Up + 0             (17)
        = Up + Ua − U0       (18)
        = Utotal − U0        (19)

(We'll use Up′ instead of Up to denote that this is the principal's payoff taking into account that the agent receives exactly his outside option payoff.)

Notice that this is exactly the same as the planner's payoff function, except with an additional −U0 term (which, as we will see shortly, doesn't really change things)! In other words, if we can ensure that the agent gets exactly his outside option payoff, then maximizing the principal's payoff is exactly the same as maximizing total value, minus the agent's outside option.
(Here's a useful way to think about this expression: Utotal − U0 is in fact the surplus from the principal-agent relationship. That is, Utotal is the total value within the relationship, whereas U0 is the total value if the agent does not work for the principal, because then the agent receives his outside option U0 while the principal receives zero. And the principal's payoff turns out to be exactly the difference between these two expressions, i.e. the surplus from the relationship!)
Now, remember that from our calculations in the last section,

    Up + Ua = (x − w) + (w − (1/2)ce²)    (20)
            = e − (1/2)ce²,               (21)

so we can substitute in this expression to get

    Up′ = (Up + Ua) − U0                  (22)
        = e − (1/2)ce² − U0.              (23)

This expression is maximized when e = 1/c. So, the principal's preferred effort level is exactly the same as the planner's preferred effort level.
So, the principal wants to:
Choose an incentive strength α that induces the agent to choose this effort level e = 1/c.
Given such an α, choose a fixed payment β that gets the agent exactly his outside option payoff.
The principal's choice of incentive strength is straightforward. We know from step 3 that the agent will choose e* = α/c. Thus, to induce e* = 1/c, the principal simply chooses α = 1.
The choice of fixed payment β is less interesting. All we need to know is that, once we know what α to choose, we can always adjust β appropriately so that the agent gets exactly his outside option payoff. But let's go through the calculations anyway. We start with the fact that the agent's payoff equals the outside option:

    Ua = U0.    (24)

Since we know that Ua = w − (1/2)ce² and that α = 1 and e* = 1/c, we can write

    Ua = w − (1/2)c(e*)²             (25)
       = (β + αe*) − (1/2)c(e*)²     (26)
       = β + 1/(2c)                  (27)
       = U0;                         (28)

rearranging, we get

    β = U0 − 1/(2c).                 (29)
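As a quick check on this contract (α = 1, β = U0 − 1/(2c)), we can compute both players' payoffs directly:

    Ua = β + αe* − (1/2)c(e*)² = (U0 − 1/(2c)) + 1/c − 1/(2c) = U0,
    Up = e* − (β + αe*) = 1/c − (U0 + 1/(2c)) = 1/(2c) − U0.

So the agent gets exactly his outside option, and the principal captures the whole surplus Utotal − U0 = 1/(2c) − U0, just as equation (19) says.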

2.4 Discussion

Now, let's recap.


Previously, when the principal wasn't allowed to extract a participation fee from the agent, he had to share some of his profit with the agent to motivate him; but he wasn't willing to share all of his profit, so the result was that effort was inefficiently low.
But once we allow fixed payments, the principal doesn't have to worry about giving the agent too much of the profit through the bonus, because he can extract all of the profit by decreasing the fixed payment β to a negative value (leaving the agent with just enough that he is willing to accept the principal's offer).
So he can offer the agent the efficient level of incentives, to induce the agent to put in
the level of effort that maximizes total value;
Then, adjust the fixed payment to capture all of the total value (minus the outside option
payoff needed to keep the agent around).
Another way to think about this: when the principal is allowed to extract participation
fees, then his maximization problem coincides with the maximization problem of the planner
(equation 19). So, he induces the agent to take the efficient level of effort.
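To put numbers on the comparison between the two models: without fixed payments (Section 1) the outcome is α = 1/2 and e* = 1/(2c), so total value is

    Utotal = 1/(2c) − (1/2)c(1/(2c))² = 3/(8c);

with fixed payments (Section 2) the outcome is α = 1 and e* = 1/c, so total value rises to

    Utotal = 1/c − (1/2)c(1/c)² = 1/(2c).

Allowing the participation fee therefore raises total value from 3/(8c) to 1/(2c); how that value is split between the two parties depends on β and on U0.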

3 Comparing and Interpreting Models


So, which model (with or without fixed payments in the incentive scheme) is more useful, and
in what settings?
In many employment relationships, the employer typically isn't allowed to fine the employee. In fact, labour laws often require a positive minimum wage. Our model suggests that incentives may be inefficiently low in such employment relationships, so that employees are less productive than would be efficient.
On the other hand, in some principal-agent relationships, we do observe fines. For example,
taxi drivers (other than owner-operators) have to rent their taxis (approx. $600 per week in
Sydney), but get to keep (almost) all profits from driving.
Here, it's more natural to think of the fine as a rental fee instead. So, in effect, the agent is renting the entire enterprise from the principal.
In fact, this type of principal-agent arrangement where the agent pays the principal in
exchange for the right to keep all the revenues is quite common.
In barbershops, the barber pays for the right to operate a seat, and gets to keep all the fees.
McDonald's franchisees purchase a franchise from McDonald's Inc.
There are two major advantages from such an arrangement:

As we've discussed using the model, when the agent gets to keep all revenues, the agent has the optimal incentives to exert effort (and ends up maximizing total value).
Another reason that is outside our model: in many such arrangements, the principal doesn't even have to monitor the agent to ensure that he is putting in the requisite effort. He simply sells/rents everything to the agent, and then lets the agent worry about revenues. (e.g. taxi drivers, McDonald's franchises.)
On the other hand, if the arrangement was for the agent to keep only a fraction of the
revenues, the principal would have to monitor the agent to make sure that he is not
stealing/skimming.
Such arrangements (where the agent pays the principal a fee and then gets to keep all the revenues) tend to be more common outside of firms, rather than within firms. (We'll discuss reasons why this is so in future lectures.) So, roughly speaking, the model from Section 2 taught us more about principal-agent relationships outside firms, rather than within firms.
Nonetheless, in future lectures, we'll continue to use the model from Section 2 to study principal-agent relationships within firms. This is because it is very useful as a baseline model to which we can add other sources of inefficiency, and study their effects. (In contrast, the model from Section 1 already has a source of inefficiency: the conflict between maximizing total value and maximizing the principal's profit. Adding additional sources of inefficiency would make things too complicated.)
Lesson: we want models to be useful and insightful, but that doesn't mean they have to be realistic. We may remove realistic features if that allows us to focus on specific questions that we are asking.

