
PUBLIC MANAGEMENT

EVALUATING POLICY ADVICE:
THE AUSTRALIAN EXPERIENCE

PATRICK WELLER AND BRONWYN STEVENS

Patrick Weller is Professor of Politics and Public Policy and Director of the Centre for Australian Public Sector Management at Griffith University; Bronwyn Stevens is Lecturer in Politics at the Sunshine Coast University College, Queensland. They would like to thank Michael Keating and Glyn Davis for their comments on drafts of this article.

Public Administration Vol. 76 Autumn 1998 (579-589)

Mandarins give policy advice. The process is shrouded in the mystique of confidentiality. Policy advising is regarded as the ultimate skill, the height of ambition for civil servants, far above responsibility for the exercise of executive authority in the delivery of services. For departmental secretaries this policy-advising relationship with the minister is a crucial element of their success; for senior officials access to the minister may be an indication of their centrality to the department's interests. The challenge of advising ministers is the ultimate ambition for many in the public service.
But how well is the job done? In part there seems to be an implicit
assumption that good people give good advice. If the system ensures those
who reach the top of the civil service are properly talented, then it follows
that the quality of advice will be based on firm foundations. Where policies
fail, it can be argued, it is because the government failed to listen. Since
ministers take decisions, failures need not indicate the advice provided by
officials was poor. However, such assumptions need to be tested. Programme evaluation is now routinely undertaken, but the evaluation of policy advice has been left to more informal mechanisms. Yet policy advice is
a crucial determinant in public sector activity.
Policy advice may be difficult to assess. It is, after all, ultimately a matter of judgement, an art or craft rather than a science (Wildavsky 1979; Waller 1996, p. 12). It usually requires the reduction of a complex problem to a set
of options, based on assumptions about causation, and compatible with
government policies or directions. There is no guarantee ministers will
accept the advice. Nor should there be. Hence in the process of evaluation
the emphasis must be more on the development of advice than on resultant
action. Acceptance cannot be an absolute criterion of good advice. Even
when advice is accepted without change, there may be problems of implementation. Further, since advice is usually confidential, there may be some problems in seeking to judge its quality. Ministers have many sources of advice and disentangling them is difficult, so there can be no certainty that the civil service's policy, for better or worse, was responsible for the outcome.
None of these caveats is novel. Nor are they restricted to any one country. The literature on policy making emphasizes its contingent nature. Wildavsky's (1973) famous strictures on planning can readily be applied to advice; it needs a theory of causation and clear objectives against which progress can be tested. The fact of acceptance is not a sign of its success, as politicians and officials may agree to a proposal for reasons outside the policy itself (Wildavsky 1979, p. 213). The best policy is negotiated, the outcome of a political process, rather than being determined by the edicts of experts. The accepted view of policy making emphasizes the ebb and flow of debate, the processes of policy learning, and the creation of coalitions. Any link between the provision of advice, its acceptance or otherwise, and its impact cannot be seen as linear or simple.
But are these satisfactory reasons for not trying to assess whether the
advice is of good quality? In an era which stresses efficiency of service
delivery, and where there is an effort to put a price on government services,
is there not a case for seeking to test the effectiveness of policy advice?
There is, as always, a judgement to be made. Recognizing the difficulties,
will evaluation promise enough benefits to justify the cost of the exercise,
and is there a methodology that might suggest the promise of advantages
to the political system? In addition to the theoretical desirability, there may
be some pressing strategic reasons for considering such a process. At a time
when there are serious proposals that policy advice should be privatized
or contracted out, there is a need to justify its current mode of existence.
Indeed if policy advice is to be contracted out, there is an even greater need
to devise procedures for determining whether that advice is of good quality.
The imperative to try is made more urgent by the perceived costs of
policy mistakes, sometimes made on a grand scale and costing billions of
pounds, and in part generated by poor advice. In his analysis of policy
disasters, Dunleavy identifies as a cause the lack of appropriate professional expertise to be able to appreciate risk and cost; he claims there was 'a premium on political responsiveness and a de-emphasis on policy work' (Dunleavy 1995, p. 62). He concludes that the shrinkage of the central state machine, produced by ceaseless reorganization and efficiency savings, has led to 'sadly reduced core competencies' (1995, p. 63).
His diagnosis reflects a broader dissatisfaction. Plowden (1994, p. 104) cites a commentator who claims that 'the most obvious sign of ill-health in the machinery of state is poor performance at policy work'. Plowden argues that properly thought-out advice from civil servants ought to include 'alternative types of solution to those in vogue' (1994, p. 105). Questions of Procedure for Ministers instructs ministers to give 'fair consideration and due weight to informed and impartial advice' from civil servants, but clearly
there is some feeling that all is not working well. Yet this diagnosis of ill-
health has not led to proposals for assessing when that advice is, or is not,
well informed and of good quality.
The New Zealand government has gone further. It has calculated the cost of policy advice ($NZ 245 million in the 1992-93 financial year), thus estimating what ministers are purchasing (Boston et al. 1996, p. 129). The government has devised a performance management framework for examining quality, with seven defining characteristics: purpose, logic, accuracy, options, consultation, practicality and presentation. Ministers are asked every three months to indicate their level of satisfaction, as an input to the review of the performance of departmental secretaries by the State Services Commission (Hunn 1994, pp. 31-2). But the authors of the initiative acknowledge that ministers may not always be the best people to judge quality and that they may have a particular problem in those cases when the advice was necessary, but not welcome. In addition, the evaluation of advice was part of the process of assessing official performance; policy advice was not the principal subject of inquiry. Even after this attempt to calculate where there is the best performance, observers can only comment that 'some departments have acquired a well-deserved reputation for producing excellent work, whereas others have been much less successful' (Boston et al. 1996, p. 133).
That the problem is hard is not in dispute. That difficulty creates a pressing requirement to assess the methodology and impact of those experiments that have been undertaken. In that spirit this article examines an experiment in Australia where a series of Policy Management Reviews (PMRs) was undertaken to assess the performance of central agencies in the provision of advice and, more broadly, in developing that advice. This article describes the Australian process, explains some of the dilemmas, and suggests the valuable, if limited, benefits that can flow from this method of evaluating policy advice.

THE ORIGINS OF POLICY MANAGEMENT REVIEWS


The second half of the 1980s saw an extension of a broad managerialist
agenda within the Australian public service. Departments were required to
develop corporate plans, set performance targets and evaluate their performance. Budgets were to be determined by programmes and outputs,
rather than traditional inputs. Evaluation became mandatory, with the
emphasis on determining the cost effectiveness of programmes in meeting
their policy objectives. For most departments providing services to the public, a range of methodologies could be adopted to test performance.
But central agencies were faced with a different problem. When parliamentary committees asked what value-added they provided, the response was policy advice. If the question was then asked, 'how well was it provided?', there was little evidence to justify the confidence that it was of a
high standard. The central agencies therefore had to develop a means of
assessing policy advice in order to conform to the same standards of evaluation imposed on line departments.
A working party made up of officials of the four central agencies
(Prime Minister and Cabinet, Treasury, Finance and the Public Service
Commission) was established to examine the problem. It first explored the
experience of other nations:
The overseas trip was interesting and useful in teasing out very different cultural and philosophical approaches to whether, and if so how, the business of policy advising should be evaluated. With the French it really was the Gallic shrug, as though to say that we really were quite mad in wanting to ask any of these questions. Whitehall was somewhat different. Paraphrasing Dr Johnson, the British saw it as not a question of whether you ought to do policy evaluation well. Instead it was a question of why on earth would you want to evaluate policy advice at all given that, self-evidently, anybody in the Whitehall policy advising game must be doing it well.
The US was quite different again, and useful. Though structurally and constitutionally distinct from the system that we are accustomed to here, there was a great deal more yeast and openness in terms of questioning the mores, the value and the appropriate application of policies (Waller 1996, p. 70).
Yet if some nations were sceptical of the value of the exercise, Australian
central agencies felt they had no choice if they were to answer the questions
being asked by Parliament. The problem was therefore not whether to
evaluate, but how to do so.
The Task Force's report, Performance Assessment of Policy Work (PAPW 1992), was judicious and careful. (An abbreviated version was soon published: Waller 1992.) It explained the process for developing good advice and acknowledged the contingent nature of evaluating good policy advice, which involves:
- taking a difficult and sometimes poorly understood problem or issue and structuring it so that it can be thought about in a systematic way;
- gathering the minimum necessary information and applying the appropriate analytical framework;
- formulating effective options, addressing, where necessary, mechanisms for implementation and evaluation; and
- communicating the results of the work to the government in a timely and understandable way (PAPW 1992, pp. 8-9).
The object of the evaluation of policy advice was seen as more than merely consideration of the final paper to the minister; it incorporated the whole process of understanding the problems, co-ordinating the responses and formulating their advice: in essence, the whole policy cycle from issue identification through to implementation and evaluation.
The report noted the need to consider three aspects: inputs (staff, management practices, information), process and outputs. It acknowledged the
first two were easier to analyse than the utility of the advice. Assessing that
utility, the report pointed out, involves assessing the effectiveness of policy
advice, firstly in terms of its effect on the recipient of that advice, secondly
whether that person adopts the advice and advocates it and thirdly its effect
on the decision-making process. This need to understand causal links
between the quality of the advice and the outcome is made harder by the
problem of judging impact, and deciding whether short-term or long-term
perspectives are the more appropriate time horizons.
Good advice is therefore both procedural (getting the content right) and presentational (telling it as it is). As the report says:
policy advice is an art or a craft, rather than a science. What the policy
adviser is required to do is reach a professional judgement about both
the underlying situation and the appropriate course for policy. These
judgements must be honest, disciplined and rigorous and be transparent
to those to whom the advice is directed (PAPW 1992, p. 9).
Good advice is what ministers should hear. To avoid giving unpalatable
information because that advice may be unwelcome is an exercise not in
politicization but in servility.
The working party emphasized that policy advising must be confidential,
to protect the relationship between ministers and their advisers. Any review must respect this confidentiality, which restricts both the available options for evaluation and the people who might be able to undertake it.
The working party report (but not the version later published) then canvassed options. It admitted that informal methods of analysis (ministerial advisers, a better informed private sector, interest groups and governmental research institutes) provide some analysis through the process of contesting and debating policy alternatives. If the public service once had a near monopoly on policy advice, this is no longer true. Ministers are, anyway, always able to make informal assessments of policy advisers. Some political assessment has existed, if not in a systematic way, through parliamentary committees. On behalf of Parliament the Australian Audit Office could assess programme efficiency and report to the Public Accounts Committee. But these methods were informal or tangential to the central issue; none of them provided any formal consideration of the procedures and problems of formulating advice.
If a more formal process was required, the report argued that the choice
was either an internal peer assessment, or a Policy Management Review
(PMR) undertaken by an external evaluator with specific terms of reference.
Either process would need to capture the complexity and time pressures
involved in developing policy advice, and to identify suitable areas of policy (particularly where there are cross-portfolio implications) for evaluation.


Further, either process had to be economic in its use of resources, and to
rely on a suitable supply of competent people to undertake the analysis.
The report saw problems with peer assessment, particularly in the assessment of single policy issues, but saw benefits in PMRs as long as they were undertaken selectively.
The report, therefore, tentatively recommended a process by which a suitably qualified and experienced expert, with access to the documentation and to officials, would survey the subjects under review. It recognized that there would be a severe limit on the number of suitable evaluators who might be available. It argued that the PMRs would be able to assess inputs and processes and, possibly, outputs.
The report had been commissioned by the central agencies; it was therefore incumbent on them to set the example and examine some of the programmes in which they had been involved.

CONTINUING DILEMMAS
There is now a familiar series of dilemmas facing those who undertake
PMRs. Some relate to context, to the well-defined professional environment
within which senior officials must work. For instance, Dr Michael Keating,
then the Secretary of the Department of the Prime Minister and Cabinet,
argues that the key ethical obligation of public service advisers is:
to ensure that decisions are fully informed and ministers are not misled. To do this the Public Service has to be able to give its advice frankly, without fear or favour. In particular, it has a responsibility to draw on its professional knowledge and accumulated experience to point out any possibly unpalatable implications of particular problems which might otherwise have remained unforeseen or been glossed over (Keating 1995).
A further problem is the political context. While the obligation is to provide advice that is frank and fair, to what extent should, or can, advice reflect the interests of the government? Dr Keating noted with approval a colleague's comment that 'for a public servant to be aware of the government's mandate, philosophic approach, and of the political constraints is not to be political. It is part of being professional to ensure that the advice will be relevant' (Keating 1996, p. 65). There is, of course, a fine balance
between determining what might be the political context and introducing
partisan elements. Nevertheless the task cannot be avoided.
A further professional requirement is the basis on which officials can
make their recommendation. Policy advice need not masquerade as the general or public interest, as though the department is somehow the guardian of that vague doctrine (Keating 1995). Rather it has to be based on a
distinct (even if contested) description of the problem, rigorous analysis
and logical options and recommendations. The last may be particularly significant. As Sir John Crawford, a distinguished departmental head, wrote forty years ago: 'The Permanent Head must accept the responsibility for final advice to the Minister. Pros and cons, yes; but come off the fence too. It is the Permanent Head who takes the jump' (Crawford 1954, p. 163).
Judgement must be exercised. Given the complexity of most issues, there
is unlikely to be one answer. Ministers may adopt a different option, but
they have to know the range of choices and the advice of policy professionals.
There are also problems in defining what constitutes success, and in determining cause and effect. It would be easy to assert that acceptance
by ministers is an indication of good advice. But it may not be, in part
because advice may be accurate but unpalatable, in part because ministers
invariably have several sources of advice. The final decision is the outcome
of a contestable process and may reflect a range of influences that are difficult to recapture. Of course, if a department's policy advice is never accepted, it may be because the agency misunderstands the government's directions or because the government is so driven by expediency that it will not hear good advice. 'If a department's advice is not sought, then clearly it has failed' (Sedgwick 1996b, p. 128). But these varying problems may need to be disentangled.
All PMRs will necessarily be retrospective, with the benefits of hindsight
and with the advantages of seeing the outcomes. In some cases failure may
be attributed to poor policy design, but at other times the advice may be
sound but the administration poor (Nicholson 1996, p. 38). The real costs, too, become available for the PMR, when at the time they could only be estimated. But these are all essentially problems of judgement for the analysts,
problems common to all post-hoc qualitative analyses.
THE POLICY MANAGEMENT REVIEWS
Since the 1992 report, five PMRs have been completed. All were commissioned by central agencies, keen to explore ways of assessing their performance. Four of the five have been made publicly available, at least in part (in one case only one half was released). One has been published in an academic journal (Weller 1996). In another case, the reflections of the author have been reproduced in different forums (Uhr 1996a, 1996b).
In order of completion, the PMRs (for a summary, see Gregory 1996) were:
(1) The Evaluation of the Policy Development of the 1992-93 Carers Package, undertaken by Graham Glenn, a former department secretary, for the Department of the Prime Minister and Cabinet, April 1993. The PMR examined the work of the Interdepartmental Committee, which was established to consider the possibility of extended provision of assistance for carers for the elderly. The government wanted to shift towards a greater focus on care in the community. The Interdepartmental Committee was to organize the data and develop options for the Cabinet. The review was to give specific emphasis to the role of the Department of Prime Minister and Cabinet in the activities of the committee.
(2) Review of Treasury policy advice on the government's response to the report of the parliamentary committee of inquiry into banking, prepared by Fred Argy for the Treasury, May 1993. This report has not been released.
(3) The Evaluation of the Forward Estimate Strategy Papers for the 1993-94 Budget, prepared by Graham Glenn for the Department of Finance in April 1994. This PMR examined the preparation of the forward estimate strategy papers and their impact on ministers. The 1993-94 Budget Paper had sought to give greater emphasis to strategic direction than in earlier years by developing plausible new options for containing outlay growth. The review examined in detail two strategy papers, on environmental programmes and child care, from the 48 prepared by the department. The former related to cross-portfolio programmes, the latter to a specific, well-understood growth programme.
(4) Review of Finance's Role in Promoting and Using Evaluation, a paper prepared by Dr John Uhr, Federalism Research Centre, Australian National University, for the Department of Finance in June 1993. The review examined the performance of two interdepartmental committees, in which Finance was a key participant, to test the use of programme evaluation. The two cases were aged care services for Aboriginal and Torres Strait Islander people, and cost recovery practices in the Commonwealth management of Australian fisheries.
(5) Review of Commonwealth-State Reform Processes, prepared for the Department of Prime Minister and Cabinet by Professor Patrick Weller, Centre for Australian Public Sector Management, Griffith University, June 1995. The PMR considered the preparation of agenda and advice for the Council of Australian Governments (an intergovernmental body made up of the Prime Minister and state premiers) in relation to micro-economic reform. The PMR took six areas of reform and reviewed reasons why some succeeded more than others; in particular it considered the impact of the Department of Prime Minister and Cabinet on facilitating the process. It drew conclusions about the strategies that worked best and was published soon after completion (Weller 1996).

The methodology of the PMRs was consistent: an examination of available documents, coupled with interviews with the key actors. But they did not follow any pre-determined path. For the PMR on the Commonwealth-State Reform Processes, for instance, seven policy areas were examined, and interviews held with four state premiers and 36 officials across the Commonwealth government, six states and one territory. It therefore provided a fairly rigorous cross-checking of perceptions on what might work.
There is a criticism that, failing a common methodology, there is little reason to accept the findings as anything but an individual's views, and that it is not possible to see how findings spring from the review (Colebatch
1996). Yet it is difficult to see to what degree any common external methodology could be applied to the disparate subjects. These ranged from the analysis of the strategies of providing policy advice to the Council of Australian Governments (a broad, multi-purpose council of first ministers) to internal advice on forward estimates. To require a rigid methodology
would restrict the usefulness and the possible range of subjects that might
be amenable to that process. In addition, it is difficult in developing the
project to be precise about exactly what information will become available.
The PAPW report recognized the need to select evaluators carefully and
rely on their judgement. Indeed, if policy advice is accepted as the exercise
of judgement, how else can it be evaluated other than by a further exercise
of judgement?

SOME LESSONS
To this point all PMRs have been undertaken by those promoting the idea,
the central agencies (primarily the Departments of Prime Minister and Cabinet and Finance). Perhaps they had a greater need to explain their value-adding participation in the advisory process. The initiation of PMRs by these central agencies has a necessary consequence: the evaluation is looking for the contribution made by the central agency, which is often as much
concerned with co-ordination and process as with outcome. In some of the
cases the central agency may have acted as the driver and catalyst for policy
development; the Department of Prime Minister and Cabinet, for instance,
was the initiator of competition policy and of many of the changes to federal-state relations. Yet much of the later detailed work has to be done
in line departments. In other cases, when the policy initiative lies with line
departments, central agencies will assist to shape proposals, set some of
the rules and provide some input, whether based on political realism or
whole-of-government perspectives. For that reason, it is perhaps harder to
identify exactly what central departments add in policy terms. PMRs on
central agencies are therefore starting with cases that will be harder and
more complex than those where policy advice is primarily limited to a line
department. It would be easy to extend them to those more traditional
areas, where causal links may be easier to identify. It might also be useful,
as one observer noted, to examine a disaster in policy advice, to see what
went wrong (if any agency is prepared to participate in such a soul-
baring exercise).
Each PMR evaluator tried to draw lessons from the analysis. The PMR
on carers identified areas that should have been done better, and proposed that the department in future ensure certain policy conditions were met. The
PMR on forward estimates concluded that, since these papers had little
impact on ministers, a less costly process might be developed if the primary
benefit of the demands for the forward estimates was to act as an internal
catalyst for work on expenditure growth. The report on Commonwealth-State Reform Processes noted conditions that led to successful reforms and
the lessons which might be drawn. Two PMRs provided checklists for policy advisers, as possible guides for action or as aides-mémoire when asking what else might need to be done.
The PMRs have therefore tended to report in two parts: to explain what
happened, and to provide lessons that might be adopted for future practice,
and for training. None of the findings has perhaps been startling for our understanding of the way policy-making works. They reflect and develop the accepted wisdom, rather than shake it; but they do provide a means of examining processes in a structured way, something rare at the frenetic pace of central government.
The recipients have been satisfied that some benefits have accrued. Dr Keating wrote to department heads in May 1995 arguing that 'I see potential for more widespread application of the options for performance assessment of policy work, including the use of the PMR methodology, particularly as a means of expanding the pool of information available to policy managers' (quoted in Gregory 1996, p. 160). The Secretary of the Department of Finance concluded that, while their early caution was valid, 'the results of PMR undertaken to date show that the technique has promise, including for the policy advising of line agencies' (Sedgwick 1996a, p. 89).

CONCLUSIONS
The Australian experience represents a sustained attempt to come to terms
with the problems of assessing policy advice. The contingent nature of
advice is recognized, the problems of identifying the multiple strands
accepted. The PMRs are designed to analyse the process of developing
policy advice for ministers, while acknowledging a need to protect the
confidentiality of the relationship.
The PMR process does not pretend to provide a complete answer, nor does it replace the need for more formal evaluations of government programmes. Rather, a PMR can provide insight into the process of providing advice, and more particularly into the capacities of co-ordinating agencies in providing inputs.
The existing PMRs were all undertaken when a government had been in
power for some time; this may be a significant factor. Procedures may have
been well established and consequently lessons could be applied to well-
established routines. But the basic questions about quality will not change
with the election of a new government.
The Australian PMR outcomes suggest that the number of evaluations that can usefully be completed may be limited. PMRs provide ideas and give
guides to action. Every line department could undertake analyses and draw lessons. But, since precisely the same circumstances will not recur, it is
these lessons, more than the analysis of the particular case, that will be of
the greatest benefit. In addition, as the initial report noted, the availability
of suitable evaluators is likely to be limited.
Yet policy evaluations are worth developing. The cost of PMRs is fairly
small compared to the potential benefits in an area where too little is known
and too much is assumed. As long as policy advice is a recognized skill,
its quality should be assessed. PMRs are an initial step worth taking.

REFERENCES
Boston, J., J. Martin, J. Pallot and P. Walsh. 1996. Public management: the New Zealand experience. Auckland: Oxford University Press.
Colebatch, H. 1996. 'Testing the policy capacity of budgetary agencies: a comment on Uhr', Australian Journal of Public Administration 55, 4.
Crawford, Sir John. 1954. 'The role of the Permanent Head', Public Administration (Sydney) 13, 3.
Dunleavy, P. 1995. 'Policy disasters: explaining the UK's record', Public Policy and Administration 10, 2.
Gregory, P. 1996. 'Policy management reviews', in Uhr and Mackay 1996 (see below).
Hunn, D.K. 1994. 'Measuring performance in policy advice: a New Zealand perspective', in Performance measurement in government. OECD Public Management Occasional Papers, No. 5. Paris: OECD.
Keating, M. 1995. Public service values. 1995 Peter Wilenski Memorial Lectures, National Press Club.
Keating, M. 1996. 'Defining the policy advising function', in Uhr and Mackay 1996 (see below).
Nicholson, J. 1996. 'Measures for monitoring policy advice', in Uhr and Mackay 1996 (see below).
PAPW. 1992. Performance assessment of policy work. Report of Working Party, Canberra.
Sedgwick, S. 1996a. 'Lessons from Finance', in Uhr and Mackay 1996 (see below).
Sedgwick, S. 1996b. 'Discussion', in Uhr and Mackay 1996 (see below).
Uhr, J. 1996a. 'Lessons from an external reviewer', in Uhr and Mackay 1996 (see below).
Uhr, J. 1996b. 'Testing the policy capacities of budgetary agencies: lessons from finance', Australian Journal of Public Administration 55, 4.
Uhr, J. and K. Mackay. 1996. Evaluating policy advice: learning from Commonwealth experiences. Canberra: Federalism Research Centre and Department of Finance.
Waller, M. 1992. 'Evaluating policy advice', Australian Journal of Public Administration 51, 4.
Waller, M. 1996. 'The changing environment of policy making', in Uhr and Mackay 1996 (see above).
Weller, P. 1996. 'Commonwealth-state reform processes: a policy management review', Australian Journal of Public Administration 55, 1.
Wildavsky, A. 1973. 'If planning is everything, maybe it's nothing', Policy Sciences 4.
Wildavsky, A. 1979. Speaking truth to power. Boston: Little, Brown.

Date received 26 June 1997. Date accepted 20 October 1997.

