
Université Libre de Bruxelles

Faculté des Sciences

Institut de Gestion de l’Environnement et d’Aménagement du Territoire

Indicators for Sustainable Development:


A Discussion of their Usability

Thèse présentée par

Tom Bauler
En vue de l’obtention du grade académique de Docteur en Environnement
Mai-Juin 2007

Directeur : Prof. Edwin Zaccaï (ULB)


Jury : Profs. Hans Bruyninckx (KU Leuven), Walter Hecq (ULB-CEESE), Frédéric Varone
(Université de Genève), Philippe Vincke (ULB), Edwin Zaccaï (ULB)
Acknowledgements 7
Introduction 9

Chapter 1 ‘Indicators’ and ‘Indicators for Sustainable Development’ 15

1.1 Indicators for Sustainable Development 18


1.1.1 The relationship between indicators for sustainable development and assessments 18
1.1.2 ‘Indicators of sustainable development’ or ‘indicators for sustainable development’? 19
1.1.3 Incremental versus structural levers to policy change 21
1.1.4 Indicators and the self-generation of Sustainable Development 23
1.2 Characterizations of Indicators for Sustainable Development 24
1.2.1 Historical backgrounds 25
1.2.2 Definitions 28
1.2.3 Types and typologies of indicators 31
1.2.4 Typologies of indicators: some examples 42
Conclusion to the chapter 45

Chapter 2 A Procedural Understanding of Sustainable Development:


Principles and Processes 47

2.1 Sustainable Development: Systems’ Approaches and Processes 50


2.1.1 Systems approaches 52
2.1.2 Processes 59
2.2 Translating the ‘Projet de Société’ into Principles 62
Conclusion to the chapter 65

Chapter 3 Sustainable Development and Assessment 67

3.1 Decision-making and Information: Attempts to assess Information 74


3.1.1 Decision-aiding and rationalities 78
3.1.2 Handling information in decision spaces 83
3.1.3 Towards a generic model for information assessment 92
3.2 SD and Evaluations: Principles of Sustainable Development as Limits to
the Utilisation of Assessments 99
3.2.1 Multiple Dimensions: L,C,S-criteria and the principle of ‘integration’ 100
3.2.2 Participative Assessments: L,C,S-criteria and the principle of ‘participation’ 105

3.3. Approaches towards assessing Influence of Indicators for Sustainable
Development 110
3.3.1 Selected approaches towards assessing ISD-use 111
3.3.2 Legitimacy, Credibility, Salience at the level of ISD 123
3.3.3 Linkages between the L,C,S-framework and the approaches for the
assessment of the indicator influence 135
Conclusion to the chapter 139

Chapter 4 Institutionalisation of ISD: a major Factor to characterize the


Usability of Indicators for Sustainable Development? 141

4.1 Institutionalisation and ISD 144


4.1.1 Institutionalization and institutional embeddedness 145
4.1.2 Limits to the institutionalization of ISD 150
4.1.3 The L,C,S use-criteria facing the institutionalisation of ISD 153
4.2 ‘Institutional Embeddedness’ as a second Axis to the Assessment of
ISD-Usability 156
4.2.1 Indicators and their institutional embeddedness: a matter of
‘boundary organisations’? 156
4.2.2 Steering ‘boundary organisations’: ‘reflexive governance’ applied to ISD 158
4.2.3 Integrating ‘Institutional Embeddedness’ to the L,C,S-framework 160
Conclusion to the chapter 166

Conclusion :
From ‘proceduralism’ to ‘institutional embeddedness’ to ‘reflexive institutionalisation’:
a prudent outlook on enhancing the usability of ISD (and more?) 169

References 175

Acknowledgments

It is, as always, very difficult to find the right, not too clumsy, opening for the acknowledgments of
an enterprise such as a thesis. I do not excel at this type of exercise.
Of course, I owe a particular debt to my parents: without their sincere enthusiasm to see me evolve in
research and higher education, it would have been much more difficult for me to find the motivation
to invest in this enterprise. In a very different dimension, and in countless respects, I am deeply
indebted to Sara, the other stable, continuous pillar in my life (in parallel to an unfinished thesis
project). I also owe very much to those of my very, very good friends who made me laugh at the
surrealism of my own condition every time I met them.
Not strictly in parallel to these more personal than professional acknowledgments, I am very
appreciative of Edwin Zaccaï, director of the present thesis, the main reason it is completed today,
and very much a reference person to me. Particular credit also goes to Walter Hecq, without whom I
would not have entered ULB, or the study field of ISD.
Big waves to all the others who were more or less involved in the present enterprise, if only because
they were polite enough not to ask me to explain my work to them.

Introduction

Context

Nearly 20 years after its emergence as a policy referent, the effective translation of Sustainable
Development (SD) into policy processes remains a matter of debate, even of experimentation, as new
policy initiatives are still being developed. For instance, precisely at the moment we complete the
present effort, in mid-April 2007, the Belgian parliament is deciding to include an article on SD
in the Belgian constitution, while parts of the federal Belgian administration are triggering a
debate on the necessity of, and opportunities for, institutionalising a long-term participatory
planning process for SD.
Obviously, since the Brundtland report and the Rio summit there have been shifts in the
interpretations of SD. Over the years, different, and sometimes innovative, approaches have also
emerged to operationalise and implement SD in public policy processes. Two generic, parallel
stances have been followed. On the one hand, one observed the emergence of specific SD-policies,
and notably of SD-strategies, which were meant to provide a top-down reference point for
policy-makers. On the other hand, a handful of specific instruments, processes and tools were
initiated which were intended to help mainstream the translation of SD-criteria and principles
into policy processes by influencing the configuration of policy moments such as public policy
evaluation and assessment, policy communication, policy formulation…
Within this second, non-programmatic, approach to translating SD into policies, the choice of the
instruments, processes and tools to be preferred and promoted appears to have changed considerably
over the last years. Indeed, right after the Rio summit a wave of processes at national, global, local,
urban… level was initiated, concerned with the construction of indicators for sustainable
development (ISD). Before the turn of the millennium, nearly every country in the developed world
had its ISD-initiative either accomplished or in the pipeline. In addition, hundreds of local
communities had stepped into their own ISD-processes, and private corporations had accomplished
a first version of international standards for ISD-processes at the corporate level.
Today, many of these initiatives have weakened, both in intensity and in number. Even if some
countries still publish yearly updates of their SD-reports based on ISD, and some initiatives are only
now concretizing their processes with a first ISD-report (e.g. Eurostat’s SDI), hardly any notable
ISD-processes or approaches seem to have emerged for some years now. In parallel to this diminished
interest, or maybe because of it, or maybe as a response to the initial explosion in ISD, the singular
question of the utility of ISD has appeared. In fact, many observers and supposed users of ISD had
trouble detecting what would have justified the relatively intensive investment of resources and
hopes in ISD. ISD did not seem able to live up to expectations; many of them simply did not deliver
the awaited effects and seemed to be rather strictly discarded by their supposed users. Of course,
there could be many reasons for this, one obvious one being that the expectations were not realistic.
In any case, the question surfaces among researchers and policy actors alike, and has some pertinence
outside of the debate on ISD: evidence-based policy making is becoming the general paradigm in
public management, and debating the utility-mechanisms of information, indicators, assessments…
could help to understand the operationalisation of ‘New Public Management’ in the realm of SD.
The present thesis aims to be a contribution to this debate at the level of ISD.

Research questions

As in many thesis projects, our research questions evolved during the exercise. At the beginning
stood the generic aim of analysing and understanding the impacts of ISD on decision-making, and
more particularly of deriving from this a series of construction and configuration standards which
could contribute to enhancing the impacts of SD-assessments and of ISD in policy-making processes:
What are the success factors that characterize the impacts of ISD on policy-making, and how can
these be translated into construction criteria? This initial question underwent a double refinement.
First, it appeared more accurate to conceptualize the link between ISD and policy-making in terms
of utilisation, instead of impacts, because utilisation better accounts for the multiple, diversified
uses which can be made of information in policy situations. Considering the question at the level of
the utilisation of ISD appeared, however, to be too large a conceptualisation when engaging in a
debate on ISD-performance. Indeed, utilisation can be sequenced into a series of sub-concepts: it can
be expressed in terms of information usability, information accessibility, information digestion,
information use by policy-makers, information influence on policy-makers, or information impact on
the policy decision. Because it was important to us to focus on those aspects which allow ISD to be
considered as science-based decision objects, and thus to feed our discussion back into the
characterisation of ISD-processes, that is to say their configuration, we focused on those aspects of
indicator utilisation which derive directly from ISD as objects. We thus focused our analysis on the
first element in the utilisation chain, namely usability, i.e. the potential of ISD to enter into
decision processes.
Second, the quest to translate the results of the discussion of indicator utilisation into construction
criteria and standards quickly revealed some fallacies. As one might expect, indicator utilisation is
far from being a linear process. Utilisation can happen without apparent reason or even without an
identifiable source of information; it can be considerably delayed; it can run counter to what was
expected… Indicator utilisation also depends on the context of the decision process, the
configuration of the decision processes, and the articulation between the decision actors. Assuming
that it would be possible to create ISD construction criteria, even prudent and non-normative ones,
which would effectively guide influence would imply an implicitly mechanistic comprehension of
utilisation. We thus focused the analysis on the characteristics of ISD-processes, and on how these
characteristics link to the context of the decision; i.e., in a certain sense, on how usability can be
linked to the decision context.

From these two refinements of the initial research question, a double interrogation finally emerged
to guide the analysis:
What are the characteristics of ISD-initiatives that influence the usability of ISD in decision
situations? And: Can we identify a key which allows us to read and analyse these characteristics,
i.e. to construct the usability-profile of ISD-processes, with respect to the configuration of the
decision situation?

Methodology

The present thesis is organized as a discussion. Consequently, we will present a series of arguments
that will be highlighted from different perspectives. The arguments presented, as well as the
perspectives that allow their discussion, originate from analyses of ISD-processes, but even more so
from a thorough literature study. Indeed, rather wide fields of disciplines, and thus of literature, can
be mobilized to gain insight into the research questions; as a consequence, the literature used
presents an interwoven pattern of disciplines, research questions and research domains (see figure 1).

[Figure 1 – Visualisation of the literature-study (partial). The figure shows overlapping literature
fields: evaluation use and evaluation utilisation; indicator utilisation; utilisation of indicators for
sustainable development; policy processes; indicators for sustainable development; information
processes in sustainable development; environmental decision-aiding instruments; procedural
perspectives on sustainable development; and sustainable development.]

In contrast to this plethora of anchors in the literature, literature of relatively frontal and direct use
to our research questions is fairly rare (except for Boulanger 2006; Hezri 2006; Ortega-Cerda 2005;
Gudmundsson 2003; Rosenström 2002 and ‘in press’).

Facing such a variety of literature domains and disciplines, the present effort strives to be
interdisciplinary, in the sense that we transpose the original literature and its concepts from their
original, often disciplinary, fields of research onto our own, while enlightening them with
perspectives and arguments taken from other disciplines and researchers. However, our approach
faces a structural limit: by definition, interdisciplinarity can only be simulated when undertaken by a
single researcher, as its operationalisation necessarily calls for different disciplinary actors to
interact and produce a new level of analysis.

Organisation of the thesis

Because of the multiple overlapping literature perspectives and analyses used during the thesis,
there is no unique rationale that could have been applied to structure the work.
In order to discuss the first level of our research questions, i.e. what are the characteristics of ISD-
initiatives that influence the usability of ISD in decision situations?, we investigate in the first
place the interpretations which can be given to our object of study. Chapter 1 is thus concerned with
exploring definitions, configurations and particularities of indicators for sustainable development.
We investigate to what extent indicators can be understood as levers for policy change, and how they
are organized with respect to processes of sustainable development. Finally, we present different
methodological constructions of ISD, and illustrate a series of typologies to bundle and structure the
different approaches to ISD.
Among the different perspectives which can be taken on ISD, one of the more obvious ones is
influenced by the interpretation to be given to the object of the assessment, i.e. the interpretation of
Sustainable Development (SD). Chapter 2 thus presents a synthetic excursion into the possible
conceptualisations of SD. We identify three different approaches to illuminate SD: the systemic
character of SD, the procedural understanding of SD, and the normative-political comprehension of
SD in terms of a collection of principles. Rather than being antagonistic, all three approaches can be
interlinked at the level of indicators for sustainable development.

The first and second chapters are basically needed to delimit the signification of ISD and to
investigate the policy domain they are meant to operate in. Subsequently, we penetrate to the heart of
the problematisation.

Chapter 3 is concerned with elaborating an analysis of the linkages between SD and evaluation use in
the case of ISD, with the objective of identifying those characteristics that co-define the usability of
ISD in policy making. In the first place, we investigate the fact that the operationalisation of SD,
through its dynamic and procedural components, necessarily calls for assessments. Consequently, we
take a step back and explore the linkages between public policy and decision making in general, and
the information flows that sustain these processes. We will identify a collection of assessment
characteristics, which have been developed to describe the mechanics of the integration of
assessments into policy making. On the basis of these characteristics, we will discuss how far SD,
and more precisely some of its main defining principles, can be acknowledged to be generically
counterproductive to the production of usable assessments. Subsequently, we discuss the
interrelationships between ISD and the identified assessment characteristics.
Chapter 4 will discuss a further axis of analysis for the usability-characteristics of ISD. The
preceding analysis and discussion of the characteristics could not satisfy our second research
question, i.e. can we identify a key which allows us to read and analyse these characteristics, i.e. the
usability-profile of ISD-processes, with respect to the configuration of the decision situation? By
confronting the usability-characteristics with the organisation and proceduralism of ISD, i.e. their
institutionalisation, we will propose a second axis to the previously established usability-
characteristics. This supplementary level of reading the usability of ISD is intended as a step towards
providing a structure for the management of usability.

Chapter 1

‘Indicators’ and
‘Indicators for sustainable development’

Indicators are still at the heart of the debate on sustainable development, whatever the level or stance
taken: sectoral issues (e.g. transport and environment; climate change; the greening of public
procurement…) claim to use and develop indicators, as do global, multidimensional issues (e.g. the
monitoring of the ‘Millennium Development Goals’). State-of-the-Environment reporting at country
level is inextricably linked to the use of indicators, as is the implementation of the ‘Global
Reporting Initiative’ at the level of firms, which has issued its 3rd version of monitoring principles.
And so on.
Some develop indicators in order to help define their strategies, whereas others use them to assess
the success of their strategy. Indicators are used to evaluate and communicate on the performance of
buildings and construction sites, as they are at the level of urban planning. Indicators are initiated
for small-scale evaluations of public space management or of the allocation and use of local
development funds. Simultaneously, indicators are used to communicate on large-scale ex ante
‘Sustainability Impact Assessments’. Sustainability indexes are developed to rank stock portfolios
and pension funds. Academia strives to discuss composite indicators, which are supposed to replace
or complement Gross Domestic Product (GDP) in the near future, whereas adaptations of the same
GDP to integrate environmental and social variables are meant to keep the economic aggregate alive.
Sometimes indicators seem to be mere by-products of data treatments, such as maps extracted from
the latest GIS software. Sometimes indicators represent the visible tip of an empirical calculation, as
is the case when communicating the outcome of an extensive Life-Cycle Analysis (LCA). On other
occasions, indicators condense the results of complex and time-consuming data-collection and
data-structuring efforts, as is the case with the attempts to green the national accounts by
constructing satellite environmental accounts. Lately, indicators are outputs of ‘Sustainability
Impact Assessments’ as well as of processes using ‘Multi-Criteria Decision Analysis’. Sometimes
indicators are nothing more than a more or less successful combination of reheated data already
existing elsewhere in the administration.

Obviously, what is so harmoniously called ‘indicators for sustainable development’ (ISD) cannot be
related to a single well-defined object. The multitude of initiatives and perspectives referring to
indicators in the context of sustainable development renders an ambiguous and heterogeneous
picture of the object. This first chapter is thus concerned with a general presentation of ISD.
For the sake of internal coherence and understanding of the following chapters, we need to clarify
some notions: it is the multiplicity of possible understandings of what an indicator is that forces us
to present our view on indicators as clearly and unambiguously as possible. While stressing and
exploring the diversity of the object of study, we intend to distil some general features of
‘indicators’, which will help us to develop usable working definitions of different types of
indicators. The aim here is to present what could be named the dénominateur commun, i.e. the
minimal characteristics shared by the different interpretations and conceptualisations of ISD.
In the first section, we define indicators and present the contexts in which these instruments inscribe
themselves. In the second section, we characterize ISD more specifically through their historical
background, their appearance, their definitions and their methodologies. We also sketch a selection
of ISD-typologies in order to account for the diversity of existing ISD approaches. Finally, the third
section will explore the roles which ISD are assumed to fulfil, as they appear in the general literature
on sustainable development (SD) or on ISD. This last section will be of particular interest, because it
introduces the heart of the present work, i.e. to provide a contribution to a better understanding of the
levers for ISD in decision processes and policy making, and hence to discuss how far these roles and
expectations could be met with ISD.
In what follows, the term ‘indicator’ refers to ANY type of indicator (including indicators for
sustainable development): we use it as the generic term. ‘Indicator’ is thus NOT to be taken as a
short form for ‘indicators for sustainable development’, which will be addressed in short solely as
‘ISD’.

1.1 Indicators for sustainable development

Before presenting a working definition for ISD (see 1.2), the following paragraphs merely introduce
a number of observations which are regularly made when encountering indicators and/or ISD.
It is often assumed that the performance of ISD is decided upon at first sight: as indicators synthesize
information in such a way as to render it comprehensible to a large number of users in a short time
period, it is their potential to be read and understood quickly which is of importance when assessing
the performance of ISD.
While a more thorough usability framework will be developed later (see chapters 3 and 4), the aim
here is simply to approach indicators and to account for a series of ambiguities raised by our object
of study: What should or can be seen when approaching ISD? Is it possible to distinguish them from
other types of policy initiatives or policy domains that refer to indicators? How far is this necessary?
Are ISD to be differentiated from indicator assessments in the context of SD?

1.1.1 The relationship between ISD and assessments

A first ambiguity encountered when dealing with ISD stems from the relationship and hierarchy
between ISD and assessment (or ISD and evaluation); as a matter of fact, ISD can either be products
of an assessment, or the development of ISD can itself be the basis for the assessment.

When indicators are developed as products of assessments, they render the results of an assessment,
evaluation or monitoring exercise. In these cases, it is the methodology used for the assessment
which mainly induces the choice, quality, perspective and coherence of the produced ISD.
Life-Cycle Analyses (LCA) are good examples in this regard, as indicators are used in LCA to
synthesize extensive data-gathering and data-treatment into an easily readable and understandable
message. However, the definition of the indicators, the data used, the definition of the system
boundaries… do not depend on methodological concerns about indicators. The parameters of the
assessment are defined by the LCA-methodology, and indicators are merely used to present the
output data of the LCA and to facilitate the comparison of results, which in this case is the ranking
of products and services according to their environmental consumption. The quest for coherence,
both methodological and at the level of the individual indicator, is left to the methodology of the
assessment. In other terms, the indicators’ methodological quality is dependent on the quality of the
assessment methodology. In turn, in our example, all the flaws and uncertainties of LCA are also
injected into the indicators.

On the other hand, when indicators are at the core of the assessment, the concern for methodological
coherence and robustness is rooted at the level of the indicators: the coherence between the
indicators, as well as the robustness of each indicator, gains importance. Among the numerous
examples, the most obvious ones of this type of assessment are the ISD-lists developed after 1995 by
the UN’s Commission for Sustainable Development (2001), which were meant to contribute to the
assessment of the implementation of the Agenda 21 signed in Rio. Lately, at the international level,
the same type of indicator-based assessment mechanism was introduced for the Millennium
Development Goals. Other examples of this type of assessment based on indicators are the numerous
State-of-the-Environment reports at international or regional scale (EEA 2005) as well as at national
scale: a number of these assessments are developed along methodological frameworks, such as
DPSIR indicator lists (see section 1.2.3), which stress the fact that the coherence of the assessment
is organized from the level of the indicators.
Arthur Lyon Dahl (personal communication, 2004) introduced a slightly different distinction between
these two fundamentally different starting points for the development of ISD. He labelled the
approaches “assessments with indicators” (i.e. indicators are by-products) and “assessments based
on indicators” (i.e. indicators are the core of the assessment). Dahl also points out that while both
approaches co-exist within the SD-community, there are only very few occasions where they
converge methodologically. Consequently, the results obtained by either approach will differ
largely, because the choice of perspective prescribes the setting of the system boundaries used for
the assessment, and because boundary-setting is of predominant influence.
Both relationships between ISD and assessments can be equally important in their contribution to an
SD-process, be it in order to develop a more nuanced image of the relationship between humans and
environment, or to present nuanced assessments to decision-makers. Nevertheless, the differences
between the two approaches to ISD are more profound than is apparent at first sight, especially with
regard to the utilisation that can be made of them.

1.1.2 ‘Indicators of sustainable development’ or ‘indicators for sustainable development’?

A second ambiguity when treating indicator initiatives stems from the usage of terminology, and can
be illustrated with a detour through semiotics. ISD are rendered either as ‘indicators for sustainable
development’ or as ‘indicators of sustainable development’. While some rare authors seem to use
both terms (or either term) without paying attention to the implicit differences they convey, these
differences exist and reflect a fundamentally different understanding of the object under assessment.

Assessing a situation with the use of ‘indicators of sustainable development’ implies indirectly that
sustainable development has been defined as a precise object (in some cases, even as a policy target).
In such an understanding, ISD are then configured with respect to the object, e.g. the distance to the
objective state. We will discuss considerations about the uncertainties and indeterminacy of SD later
(see chapters 2 and 3), but for now it should be acknowledged that SD is a dynamic process of social
transformation. SD cannot be translated directly into a series of parameters which would define a
state of SD per se.
For instance, extensive knowledge about carrying capacities would be needed if we intended to
define SD as a target state. As these are largely unknown today, as are the impact functions and
causalities for most of the interlinkages between environment, economy and society, we can at best
identify a set of principles (see chapter 2) that delimit the terrain of SD. At the same time, this
wording conveys a message of ISD being specifically developed for, and solely applicable to, SD,
which is rarely the case with the indicators composing ISD-lists or ISD-composites. Rather, it is the
way of composing the lists or the aggregates (process, objective, utilisation…) which is specifically
influenced by the SD paradigm, not necessarily the indicators therein.

In parallel, when referring to ‘indicators for sustainable development’, the uncertainties attached to
SD and the inherent impossibility of clearly defining SD as an operational target in terms of a state
pose fewer problems. Indicators are simply meant to contribute to an assessment which is operated
with reference to the socio-political process of SD; indicators are meant to contribute to the
apprehension of the pathway to SD. This pathway towards SD can, contrary to SD as a state, be
constructed, for instance by contradiction: even if we do not know exactly the endpoint of the
journey, the situation of today comprises a number of socio-environmental evolutions that can be
identified as unsustainable. Indicators could be constructed to monitor these evolutions.

Formally speaking, and in our context here, we prefer to speak of ‘indicators for sustainable
development’, all the more so as this wording underlines the indicators’ capacity to operate both as
a lever and as an instrument for the process of accomplishing and working towards SD.
Other ambiguities exist at the level of terminology, as ‘indicators for sustainable development’ is not
the sole wording used to describe ISD. For instance, ‘sustainability indicators’ is also used (for
instance, Bell and Morse 1999), but its reference to the more holistic ‘sustainability’ concept (for a
comment on the differences between ‘sustainability’ and ‘sustainable development’, see for instance
Robinson, 2004) makes the indicators developed under this label mostly eco-systemic in nature and
discipline.

1.1.3 Incremental versus structural levers to policy change

Necessarily, ISD-initiatives reflect the understanding of their authors with regard to the nature of the
changes in decision-making that are needed to lead to SD. As different shades of green, red and blue
tendencies coexist and occasionally overlap in the SD-community, there is no common
understanding either on the levers that ISD have to contribute to. Basically, we want to distinguish
two approaches at this level: incrementalism and structuralism.

Incrementalism

In an incremental understanding, decision-making for SD is a matter of integrating and redirecting
mostly existing knowledge so as to allow decision-makers to re-evaluate the relevance and
importance of the constraints and opportunities they take into account. In this sense,
incrementalism in a context of SD-policies is often understood as a matter of readjusting the
prominence of economic rationalities in public decision-making by promoting the recognition, for
instance, of environmental limits and negative social impacts. ISD are conceived as tools that help
decision-makers take notice of these potential impacts of, and limits to, their decisions: moving
beyond merely mono-dimensional (administrative and hierarchical) decision-making, ISD aim to
render multi-dimensional information accessible for decision processes. Incrementalism thus posits
that ISD insert themselves into the conditions and processes decision-makers find themselves in:
time and budget constraints, inappropriate and incomplete knowledge of issues that lie outside of
their usual sphere of influence, concentration on their direct spheres of influence while discarding,
for instance, long-term and distant impacts, technocratic and expert-driven decision-support…
Acceptance of incrementalism has a number of consequences for ISD as objects in policy making;
ISD should, for instance:
! be rather few in number (e.g. comprise a small set of headline indicators, as in the
case of the EUROSTAT-ISD) and as aggregated as possible (e.g. see the
Environmental Sustainability Index): decision-makers are already overwhelmed by
traditional information sources. Any additional type of information which is to be
grasped by decision-makers has to be very concise;
! strongly simplify interactions and messages (e.g. as is done with the Ecological
Footprint): decision-makers are assumed to have only a limited amount of
comprehension, time, capacity and interest to invest in the numerous emerging
issues of modern society;
! be directed towards influencing the configuration of policy outcomes, e.g. by
promoting potential levers of action (e.g. use decoupling indicators to promote
policies of eco-efficiency).
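Such a decoupling indicator typically compares the growth of an environmental pressure with that of its economic driving force. The following minimal sketch illustrates the decoupling factor as commonly defined in OECD work on the topic; the function name and all figures are our own, purely illustrative assumptions:

```python
def decoupling_factor(pressure_start, pressure_end, driver_start, driver_end):
    """Decoupling factor: 1 - (EP/DF)_end / (EP/DF)_start, where EP is an
    environmental pressure (e.g. emissions) and DF its economic driving
    force (e.g. GDP). Values in (0, 1] indicate decoupling; 0 or below, none."""
    intensity_start = pressure_start / driver_start
    intensity_end = pressure_end / driver_end
    return 1 - intensity_end / intensity_start

# Fictitious figures: emissions grow by 10% while GDP grows by 30%,
# i.e. relative decoupling (pressure still rises, but more slowly than GDP).
factor = decoupling_factor(pressure_start=100, pressure_end=110,
                           driver_start=1000, driver_end=1300)
print(round(factor, 3))  # → 0.154
```

A factor of 1 would mean the pressure has fallen to zero; absolute decoupling (falling pressure with growing GDP) simply yields a larger positive factor than the relative decoupling shown here.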

Incrementalism is based on the idea of accepting existing reference points and mechanisms in policy-
making. ISD are meant to provide additional, more diversified information on emerging policy issues

21
and domains, but stick to the prevalent norms and conventions when it comes to the configuration of
the information and its integration into policy processes. Ziegler (op. cit. : 167) posed the challenge
in terms of making “a different type of development ‘measurable’ in a similarly reductionist mode as
that of the ruling paradigm which provides politics with metaphors encapsulated in simple
numbers”.

Structuralism

Structuralism takes a different stance. Acknowledging that the ‘bad’ performances of modern
societies with regard to environmental and social criteria (e.g. in terms of natural resource depletion
or loss of biodiversity…) are rooted in inadequate decisions taken on the basis of inadequate
information in inadequate decision-contexts formed by inadequate decision-processes implies
recognizing the necessity to reform the entire decision-making system. In this understanding, ISD are
seen as means and levers which contribute to reworking that decision-making system. The associated
claims for transparency of decision-making processes are additionally expected to be met by
rendering underlying, hidden information flows and contents (i.e. a sort of meta-information)
accessible and transparent. ISD basically allow citizens and pressure groups to gain access to the
same level of information as institutional or political decision-makers.
Simultaneously, the claim for more participative decision-making is met by allowing these
stakeholders to exert influence on the design, structure and content of the ISD. The change in the
power-relations at the level of information configuration and access is expected to induce an equal
change in power-relations in general: whoever participates in the construction of the information
participates in the decision. The impacts at the level of ISD are then to:
! Adjust ISD as closely as possible to ‘reality’, i.e. use holistic representations of the
systems under scrutiny, and do not refrain from integrating, if appropriate, theoretical
and conceptual notions related for instance to systems’ theory. Present unambiguous
facts and data. It is up to the decision-maker to extract potential actions and policies
from the information that is made accessible to him. It is thus not necessary to
participate in the interpretation of the data presented, or even to link ISD exclusively
to policies.
! Do not artificially limit the number of indicators. Complex issues cannot be
represented with a handful of variables. As a consequence, instead of accepting that
decision-makers use little information, they rather need to be trained to gain the
capacity to process larger amounts of more diversified information, or as put by
Ziegler (2002 : 168), “(…) the core of the discussion about sustainability is in
learning to understand the world in its complexity rather than simplifying it”.

Both incrementalism and structuralism have major flaws. Grossly: incrementalism refuses to accept
that the current decision-making system, and the information treatment it induces, is neither perfect
nor unchangeable. Structuralism, on the other hand, oversimplifies, and bases its thinking on a too
naïve picture of current decision-making. The challenge lies probably in reconciling these
approaches.

1.1.4 Indicators and the Self-generation of Sustainable Development

A further ambiguity of ISD stems from their unclear relationship with the process of SD itself.
The initiators of the ISD debate at the international level had a very precise idea of the linkage
between ISD and SD, namely "(…) indicators of sustainable development need to be developed to
provide solid bases for decision-making at all levels and to contribute to a self-regulating
sustainability of integrated environment and development systems." (§ 40.4) (UNCED, 1992). The
same type of understanding of the relationship between indicators and SD appears to be widespread
at the local level: within Local Agenda 21, ISD took on a major function in generating and
sustaining the process. Logically, the quality of SD-processes, independently of the institutional
level, depends to a certain extent on the potential and performance of some form of SD-Plan or
Implementation Strategy. The delivery of such strategies depends partly on the mechanism put in
place to measure and evaluate the distance to the targets, or the evolution of the trends, that the
strategies appointed. Simultaneously, the initial picture and understanding of the existing situation
often needs to be fine-tuned in order to be able to derive the desirable targets and trends, which calls
again for the development of some form of measurement system. Both of these needs, the initial pre-
strategy picture as well as the ‘continuous’ evaluation of the situation, are reputed to be satisfied
by ISD (at least to a certain degree).
At the same time, we should not forget that ISD and SD pay tribute to their ‘époque’, the nineties,
when it suddenly seemed possible to give world citizens access to a large amount of background
information. The information society made a big leap ahead and its products seemed to become the
keystone for improving the living conditions of the masses (whether in the North or the South):
access to information was held to be a direct proxy for access to power and to decisions. The only
visible hurdle on the way to a generalized empowerment through information - but one which all of
a sudden seemed technically manageable - was to facilitate access to that vital information. ISD
were supposed to contribute strongly to this accessibility: they are simplified, communicable,
distributed, synthesized information.

If most authors and institutions have become more careful since then, the underlying power of
indicator evaluations is still widely acknowledged as an important part of SD-processes. But even
very enthusiastic and rational-minded institutions ask today whether it could and should be the
indicators that pull the whole process of SD, as was underlined at UNCED in 1992.

The few issues we addressed here do not at all exhaustively report the numerous ambiguities of ISD.
As said before, these first pages were rather meant to hint at the most direct interrogations that
can be raised when having a first look behind ISD. Throughout the rest of the text we will thus
come back to one or another of these questions and develop them further, not necessarily to
present answers or portray best practice and solutions to any of them, but with the aim to see how far
we can deconstruct these ambiguities and subsequently reconstruct them into a series of underlying
and overarching questions. Obviously, the deconstruction and analyses we intend to provide take a
utilitarian turn: the central question we identified across all of these issues is that of the utilisation of
ISD, and in a wider perspective, it is the impact mechanisms of ISD that we want to shed light on.
In order to achieve this, we could also have used a precise decision-making model, which would
have allowed us to monitor in a stricter sense the points of influence of ISD. Typically, using such an
approach would have meant sequencing the text into: here is the model; these are the observations
we made; show how the observations can be explained by the model; and see if the deviations from
the model call for a new model. We could have selected and adapted one of the decision-process
models that exist in the literature, and used it to describe deviations or compliance between the model
and what we interpreted from observed uses of ISD. While such was our initial intention, it appeared
that our object of investigation – i.e. ISD - was itself too slippery and ambiguous to be conveniently
tested for such compliance, especially as the hypothetical selection of a model is already rendered
sufficiently complicated by the inextricable complexity and vagueness of SD. Selecting and using a
model would have meant delimiting SD with an operational definition, or in other terms, it would
have meant abandoning a considerable part of the concept’s attractiveness in order to seal it into a
manageable and managerial question of public choice.
The subsequent work thus remains utterly unfinished in a sense: it lays only a first brick towards the
construction of a wider model of ISD-use. On the other hand, the advantage of what follows is to
present a number of quite independent analyses.

However slippery and ambiguous ISD are, in the remainder of this introductory chapter we first
portray, interpret and contextualize a series of definitions of ISD. Apart from the fact that these
reveal that many different accents are possible when qualifying ISD, it appears that there are
just as many different ISD: our object of study is completely heterogeneous, not only in the
understandings that different people have of it, but also in its very basic characteristics.

1.2 Characterizations of Indicators for Sustainable Development

There are many different possible approaches to defining indicators in conjunction with SD, and
perhaps the best would be to refrain from doing so. Without reference to a clear context or precise
policy-situation, it appears that “attempts to define the characteristics of indicators per se are not
helpful” (Bosch, 2002 : 77). Obviously, such a desertion from defining ISD cannot in turn be helpful
at all for our enterprise. In the following we analyse some of the more conventional definitions and
characterizations of indicators in the context of SD in order to circumscribe the meanings of ISD.

First we explore the historical context of ISD, then we stipulate and discuss a definition and finally
we explore some of the existing types of ISD.

1.2.1 Historical backgrounds

If SD is a rather contemporary ‘paradigm’ or policy domain, indicators in the sense they are defined
under SD have a historical background. At least since T.R. Malthus (1766-1834) or J.S. Mill (1806-
1873), questions with respect to the measurement and monitoring of societal development have
occasionally been at the centre of the policy debate. One of the latest such episodes at international
level was the public and expert debate in the mid-1990s around the international UNDP-measurement
of Human Development (Sen, 1999).
More generically, ‘Public choice’ has always been a matter of monitoring, and thus valuing, societal
evolution and development against criteria defined by individual or group preferences1. In this sense,
ISD have ancestors, and three episodes seem of major importance to characterize ISD:
! the development of the System of National Accounts (SNA)2 in the 1940s
! the social indicators’ movement of the 1970s
! the formalization of environmental policy performance indicators since the 1980s.

System of National Accounts (SNA)

The intellectual initiation by the USA, and the subsequent world-wide (in the context of Bretton-
Woods) use and standardisation of a system of accounting at national level, was intended to permit
the monitoring of monetary flows and stocks. National accounting, and thus the SNA, is at the basis
of the macroeconomic indicators we use today, the most prominent being Gross Domestic (or
National) Product (GDP/GNP). While any economic debate based on analysis owes largely to the
System and its ongoing developments (e.g. PPP – ‘Purchasing Power Parity’ – is one of its younger
derivates), it has also inspired a wealth of people who oppose either the underlying economic
thinking, or who do not accept the supremacy of the indicators derived from it. A number of
improvements have thus been proposed through the years, notably in order to redefine the
approaches to measuring development.
As GDP / GNP was perverted through the years into a proxy measure for welfare, well-being or
societal development, it triggered a number of ISD-approaches that intend to find replacement
indicators that would more accurately assess issues of non-economic development and societal
progress (e.g. indicators of happiness or quality of life). Alongside the replacement of GDP / GNP,

1 For a brief introduction into the history of ‘public choice’ and the articulation of ‘public choice’ as a fundamental
economic question: Saint-Upéry (1999).
2 The current system, updated from SNA1958 and SNA1973, is formalized in the SNA1993 nomenclature developed
jointly by the main international organisations. An overview and in-depth knowledge can be accessed via the UN –
Statistics Division’s Internet Data-base at http://unstats.un.org/unsd/sna1993/introduction.asp .

a number of ISD-attempts make efforts to adjust GDP / GNP methodologically (e.g. the Index of
Sustainable Economic Welfare or the Genuine Progress Indicator) so as to integrate into its calculus
the major types of flows that are acknowledged to keep GDP from being a measure of welfare: 1)
issues that generate monetary flows, but are universally understood as negative contributions to
human welfare (e.g. pollution, accidents…); 2) issues which are not taken into account in the
classical GDP calculus but have a positive economic value (e.g. the informal trading sector,
household keeping).
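The adjustment logic just described (subtracting welfare-negative monetary flows, adding welfare-positive non-market flows) can be sketched as a minimal, purely illustrative computation. The function name, the flow categories retained and all figures are our own simplifications and do not reproduce the actual ISEW or GPI methodologies:

```python
def adjusted_welfare(gdp, defensive_costs, environmental_damage,
                     household_work, informal_sector):
    """Simplified welfare adjustment in the spirit of ISEW/GPI:
    subtract monetary flows counted in GDP that reduce welfare,
    add welfare-positive flows that GDP omits."""
    return (gdp
            - defensive_costs       # e.g. accident and pollution clean-up costs
            - environmental_damage  # e.g. monetised resource depletion
            + household_work        # imputed value of unpaid domestic work
            + informal_sector)      # non-registered but welfare-positive exchange

# Fictitious figures (in billions): the adjusted measure ends up above GDP
# here only because the added non-market flows outweigh the subtracted damages.
print(adjusted_welfare(gdp=500, defensive_costs=40, environmental_damage=25,
                       household_work=60, informal_sector=15))  # → 510
```

The sketch makes the methodological stakes visible: every term requires a valuation convention (how to monetise damage or housework), which is precisely where the value judgments discussed later in this chapter enter the calculus.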
However, as the System of National Accounting is undeniably a successful attempt to standardize
measurement, the national accounting approach is also used as a blueprint for ISD which attempt to
account for complementary, non-financial flows: environmental accounting as well as social
accounting are both under development. Environmental accounting3 in particular shows promising
developments, and indicators derived from it are starting to be used as information complementary
to economic indicators. As logical derivates, their complementarity to economic accounting makes
them particularly interesting candidates for use in policy-making.

Social indicators’ movement

Since their origins, indicators have been very much linked to monitoring and assessing social
phenomena and policy responses. The evaluation of a society’s achievements - as well as of the
distribution of goods and bads among society - took however different turns according to the
‘époque’ and to scientific evolutions. During the 1970s, issues of social well-being were somewhat
prioritized in public policy-making, exceeding the popularity of an economic reading of welfare.
Consequently, researchers and public authorities took a wide interest in improving the construction
of indicators of social interactions. Issues of social inequality, citizen empowerment, democracy,
education… were approached by quite a large section of academic works and institutional structures
also as a major measurement challenge4. Academically, a real movement can be identified around
the issue of ‘social indicators’ in a number of countries, notably the USA and France. In terms of
influence on policy-making and as decision-aiding instruments, these indicator developments had
however little impact, which not only made the public authorities’ fervour of investment tend to nil
after a while, but also generated an increasingly difficult situation for academics seeking to justify
research in the field. One of the main reasons for this loss of interest in the configuration and
measurement of social indicators appears to be the lack of a common understanding between experts
(comprising policy actors) on how to assess social development in terms of methods and approaches
(Boulanger, 2004; Perret, 2001; Cobb et al. 1998).
Without direct filiation, their work incidentally created the basis, in the early 1990s, for the
development of a now quite important social development indicator, the Human Development Index
(UNDP, yearly). Lately, research activities in Europe are again growing fast around the development of

3 See for instance the developments in the field of Material Flow Accounting, but also the European NAMEA and
SERIEE approaches.
4 For a passionate insight into the social indicators’ movement see Cobb and Rixford (1998) for the USA episode or
Boulanger (2004) for a general overview. For an analysis of the rise and fall of ‘social indicators’ as an academic
branch, see Miles, 1985.

European social indicator reports (e.g. Atkinson et al., 2002) and even indices (e.g. Defeyt and
Boulanger, 2003). The latter approaches draw on a particularly active research community5. An
increasing number of challenging new approaches to social indicators, indexes and reporting have
developed since the mid-1990s and are sustained by an increasing interest from national and
international institutions.
After the social indicators’ movement’s proponents scattered in the 1980s, a strong latent expertise
in multi-dimensional assessment through indicators subsisted, some of which found its way into ISD.
Not least, the sociologists’ influence is echoed in the process-orientation of ISD-development, and
more generally in the procedural and discursive understanding of the role of ISD in a context of
‘public choice’.

Environmental Policy Performance indicators

After environmental issues started to be institutionalized in the developed countries during the
1970s and 1980s, calls for the organization of a sufficient knowledge base for policy-making
contributed largely to the development of environmental policy-making indicators. Their systematic
and structured publication was - among other influences - forged by the OECD’s ‘Environmental
performance reviews’, which have been promoted by some of its member states since 1989 (see
Boisvert et al. 1998). This demand formalized in the early 1990s (Lehtonen, 2003) into extensive
inter-agency review processes of national performances in environmental policy-making.
Some of the OECD’s early methodological and procedural contributions to environmental indicators
- such as the P-S-R framework (see section 1.2.3.2) – contributed to characterize the logic of ISD-
reporting.
Simultaneously, environment agencies at national and European level have developed, since the
early 1990s, periodic State-of-the-Environment (SoE) reports. Initially based largely on data
accounting for the biophysical and chemical quality of the environment, SoE reports evolved towards
more extensive usage of indicators and indicator frameworks (such as the P-S-R framework), while
expanding their topics to the economic and social pressures on, and responses to, environmental
evolution.

These three episodes, from the SNA to environmental performance indicators, contributed to
characterizing ISD in the mid-1990s as integrated, process-oriented, multidimensional, structured,
systematic data-processing initiatives. From the sum of these characteristics, we will distil hereafter
our own working definition for ISD. It should be understood from this history of ISD that the issues
raised in this work have mostly been raised elsewhere or in earlier policy domains, be it in relation to

5 Gadrey and Jany-Catrice (2003) develop an exhaustive overview of the most interesting initiatives on social
indicators (including indicators, indexes and reporting). Otherwise, for an insight into the scientific production of the
community, see journals such as “Social Indicators Research” (Kluwer).

economic accounting or social indicators or environmental reporting. Performing a more thorough
analysis of earlier indicator movements would however exceed the scope of this work.

1.2.2 Definitions

In relation to our very specific context of ISD, it is the OECD which provides us with the most
commonly accepted definition6 of an indicator as “a parameter, or a value derived from parameters,
which points to, provides information about, describes the state of a phenomenon/environment/area,
with a significance extending beyond that directly associated with a parameter value” (OECD 1993,
2002, 2003). It should be emphasised at this stage that no direct reference is made to the relationship
between indicators and ‘reality’ or ‘observation’, a link which obviously is not central to an
indicator’s characterization. Indicators are meant to ‘point’, ‘provide’ and ‘describe’.
Slightly more subtly and elegantly, Boulanger (2004 : 3) defines an indicator as “an observable
variable used to account for an unobservable reality”. Boulanger adds a general definition of
social indicators given by Bauer et al. (1966 : 1 in Boulanger 2004 : 3): “statistics, statistical series,
and all other forms of evidence that enable us to assess where we stand and are going with respect to
our values and goals”. Thirty years after Bauer et al. (1966), the OECD explicitly acknowledged that
the value added by indicators is more than pure numbers, and hence that values and personal (or
community) objectives play an important role in constructing and using indicators.
Many authors took inspiration from the OECD definition and from the working parties initiated at
OECD level. In the context of environmental policy performance reviews for the Netherlands,
Adriaanse (1993) developed a widely used definition which appears to have a direct filiation to the
OECD’s: "an indicator is supposed to make a certain phenomenon perceptible that is not - or at least
not immediately - detectable. This means that an indicator has a significance extending beyond that
[which] is directly obtained from observation. (...) Indicators generally simplify in order to make
complex phenomena quantifiable in such a manner that communication is either enabled or
promoted". Adriaanse inserted an argument which calls for sustaining further procedural interest in
indicators: they are meant to trigger communication among actors.
Building on Adriaanse’s procedural understanding, Rotmans (et al. 1997), quoted by his research
associates (Greeuw et al. 2001), developed Adriaanse’s definition into: “Indicators describe complex
phenomena in a (quasi-) quantitative way by simplifying them in such a way that communication is
possible with specific user groups.” They add that “the term ‘quasi’ indicates that, although
indicators are mostly quantitative in nature, in principle they can also be qualitative. Qualitative
indicators may be preferable to quantitative indicators where the underlying quantitative information
is not available, or the subject of interest is not inherently quantifiable.”

Interestingly, if we step outside of the SD- and environmental indicator sphere, for instance by

6 We give here the current version of the OECD’s definition of an indicator. Through the years, the wording of that
definition was slightly adapted to policy-discourse. Essentially however, the definition remained constant over the last
decade.

simply having a look at other OECD departments, the emphasis on what provides identity to an
indicator shifts slightly. As one example among many, the OECD’s glossary on (development)
evaluation, assembled in 2002, defines an indicator as a “quantitative or qualitative factor or
variable that provides a simple and reliable means to measure achievement, to reflect the changes
connected to an intervention, or to help assess the performance of a development actor”. This
definition is thus much more focused on policy performance: indicators are assessment tools which
relate directly to policy. This understanding of the links between policy evaluation and indicators
has also found adherence within the small emerging community of researchers active in the field of
evaluation for sustainable development. The EASY-ECO7 (2002) research network issued a working
definition for indicators, describing them as “a signal that reveals progress (or lack thereof) towards
objectives; means of measuring what actually happens against what has been planned in terms of
quantity, quality and timeliness”.
If, instead of leaving the environmental field, we turn to environmental indicators and have a look at
ecologists’ definitions of indicators, we find yet another emphasis. Bossel, an eminent (and emeritus)
ecological systemist from Kassel University (Germany), elaborated a systemic view of sustainability
(Bossel 1999) and its measurement. Consequently, for him indicators should be "system variables
that provide us with all essential information about the health (viability) of a system and its rate of
change, and about how that contributes to the goals we want to achieve with the help of that system"
(Bossel 1998 : 72).

Many more indicator definitions exist. Drawing on this generalized, nuanced understanding of ISD,
we use a working definition for indicators which builds on an earlier8 definition we contributed to:
Indicators for sustainable development provide an interpretation of the evolutions of stocks and/or
flows in order to account for human-environment interactions. Simplifying the complexity of
reality, indicators are meant to participate in the self-generation of sustainable development by
enhancing communication. Shaped by technical, methodological and scientific conventions, the
definition, selection and interpretation of indicators imply an articulation of scientific and societal
values at various levels and depths.

In short, the following elements can be highlighted.

‘Simplifying the complexity of reality’ by using ‘Interpretation’

The indirect link between indicators (for sustainable development) and ‘reality’ should not be
understood as a weakness of ISD, just as the inherent complexity in SD should not be regarded as
undermining the viability of the concept or its assessment. The representativeness9 of indicators is
not only influenced by the extent of the ‘measurable’ (e.g. advances in scientific knowledge and

7 Evaluation for Sustainability research network: http://www.sustainability.at/easy.
8 “An indicator is a sign or signal used to represent phenomena, events or complex systems. Always defined by
conventions and values, an indicator renders an empirical interpretation of reality” (Zaccaï, Bauler, forthcoming).
9 Representativeness refers here to the capacity of an indicator to render phenomena with sufficient accuracy.

data-quality) and the ‘measured’ (e.g. advances in data-availability), but even more so by the views
of the indicator’s developers and users on the issues under assessment. For instance, the
measurement of economic welfare is influenced by our understanding of the dynamic components of
welfare (i.e. the measurable), by institutional arrangements between providers of data (i.e. the
measured), and by the societal interpretation of what constitutes welfare. Obviously, the last factor
largely influences the first two. In turn, the ‘societal’ translation of reality into indicators will be
largely determined by the value referents of the actors involved.

‘Human – environment interactions’ considered as ‘stocks and flows’

Many different approaches to defining SD exist and in the second chapter we discuss some of them.
For the time being, we consider SD as the attempt to manage the interactions of human activities with
their biological, chemical and physical environments. This implies an onion-like representation of
SD: 1) at the heart of SD, we integrate the social, cultural, institutional, economic… dimensions, into
what could be called the human dimension; 2) the environmental dimension hierarchically encloses
the human dimension: without the environmental sphere, there are no human activities. This
conception does not however imply that we disregard the interactions within both spheres (for
instance, between economic and social development), which are ruled by their own principles (e.g.
for the human dimension: equity, justice…, or for the environmental dimension: thermodynamics,
genetics…). In other words, our representation of SD is not environmentally centered, but recognizes
that human activities are embedded in the environment and should not be conducted without due
consideration of their consequences. The interactions within and between the dimensions are
comprehended hereafter as variations in stocks and flows.
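As a minimal sketch of this stock-and-flow bookkeeping (our own illustration, with fictitious figures, not a formal model from the SD literature):

```python
def update_stock(stock, inflow, outflow):
    """One-period stock variation: stock(t+1) = stock(t) + inflow - outflow.
    An indicator may report the stock level, the net flow, or both."""
    return stock + inflow - outflow

# Fictitious renewable resource: regeneration (inflow) below extraction (outflow).
forest = 1000.0  # stock, e.g. standing timber in thousands of m3
forest = update_stock(forest, inflow=30.0, outflow=45.0)
print(forest)  # → 985.0: a declining stock signals an unsustainable interaction
```

The same bookkeeping applies whether the stock is environmental (timber, fish, carbon budgets) or human (skills, infrastructure), which is precisely what makes the stock-and-flow reading a common denominator across the dimensions of SD.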

‘Self-generation of SD’ needs to ‘Enhance communication’

The initial political documents related to SD, such as Agenda 21, translated the comprehension of
SD into a number of conditions to be fulfilled. Some of the more fundamental conditions relate to
issues of communication, such as transparency and openness of decision-making processes,
stakeholder participation… Among others, indicators are seen as foundational instruments to
simplify the communication of facts and thus to participate in the achievement of SD via awareness-
raising and/or capacity-building. Many authors nuance such an automatist vision of a self-generating
SD and the prominent role given to ISD (and to information in general) in that dynamic.
Fundamentally however, even the prudent reading we apply here acknowledges that configuring
policy, i.e. inducing change, is very much linked to the development and communication of
knowledge.

‘Conventions’ and ‘Values’

The always difficult link between ‘indicators’ and ‘reality’ leaves ample room for interpretation (and
thus for dialogue) to those who construct, select and use indicators. Value judgments are inextricably
linked to ISD at many levels, from the conceptualization to the utilization of indicators. For instance,
Cobb and Rixford (1998 : 1-3) wrote that if “technically speaking, an indicator refers to a set of
statistics that can serve as a proxy or metaphor for phenomena that are not directly measurable”, it
appears also that “the absence of objectivity stems primarily from inevitable biases in the selection of
topics on which to gather data. There are also hidden biases in techniques of gathering and
publication of data. The pretense of objectivity stands in the way of public appreciation of those
biases”. The ‘intelligent’ management of these ‘value dialogues’ is one major condition for the
successful translation of indicators into fully-fledged decision-aiding instruments.
Apart from value interferences at the methodological level, there exist many other levels where
judgments interfere with the definition of ISD. For instance, Boisvert (1998 : 107) refers to ‘purpose’
and ‘uncertainty’ in order to introduce two levels of judgment: “(the) main aim of indicators is to
translate scientific information, full of uncertainty and inaccessible to laypeople, into operational
data”. Indicators, as tools to help “bridge the gap”, become a matter of translating ‘Science’ for
‘Society’.

1.2.3 Types and typologies of indicators

Indicators are heterogeneous objects. Besides introducing the diversity of possible approaches to
indicators, we take the opportunity here to specify terminology10.
What is commonly called ISD refers to a series of different types of indicators with various levels of
aggregation. Fundamentally, three types of aggregation are applicable to indicators:
- Spatial aggregation: aggregating data and measurements originating from different monitoring spots to a degree which confers some spatial sense. Spatial sense can be obtained by using for instance institutional boundaries (e.g. national borders) as spatial aggregation masks, or by using natural or geographical referents (e.g. continents or river basins).
- Temporal aggregation: aggregating data and measurements originating from different monitoring spots at different periods to a degree which confers some temporal sense. Obviously only very few natural or social phenomena can be monitored continuously in order to be simply averaged. The rest of the measurements stems from data which have been aggregated from corrected samples taken at specific moments in time. In the context of SD, temporal aggregation can play a further role, namely to allow bridging the gap between current and past observations on the one hand, and future generations on the other.

10 More than before, this part relies resolutely on personal points of view. What is considered an indicator and what an index is mostly a matter of personal choice rather than of hard theory, especially in a field of enquiry as multi-disciplinary as ours. The following hierarchical leveling of indicators should thus be taken as a way to clarify unambiguously the terms and wording used in the rest of the text.

- Thematic aggregation: aggregation of data and measurements related to different phenomena or (policy) dimensions. The coherence of the thematic aggregation scheme is determined by the coherence of the definition of the phenomenon considered to construct the indicator. For instance, indicators of quality of life are thematic aggregations constructed along a declension of the meaning of ‘the good life’, possibly comprising thematic issues such as education, income, health, satisfaction…

When subsequently speaking of aggregated indicators, composite indicators, indices and indexes, we usually refer to the latter, thematic aggregation. Obviously, the first two types of aggregation are merely statistical manipulations inherent in any indicator. It should be noted that the three types of aggregation are not mutually exclusive, on the contrary: thematically aggregated indicators necessarily rely on temporal and spatial aggregation as well.

The traditional pyramidal splitting of information levels (Bauler et al., in press) concentrates on thematic aggregation. It identifies as its basis those ‘indicators’ that are the result of quantitative or qualitative data being gathered, treated statistically and assembled for communication purposes. As mentioned earlier, indicators at this level of aggregation can themselves be multi-dimensional to a certain extent, as is the case for instance with ‘Greenhouse gas emissions’ (i.e. an assemblage of emissions of different gases weighted according to their individual warming potential) or with ‘Adjusted life expectancy’ (i.e. life expectancy at birth diminished by years of illness). It is thus not necessarily the level of multi-dimensionality which decides on the terminology to be used (indices vs. indicators).
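The implicit weighting hidden inside such a ‘single’ indicator can be made explicit in a short sketch. The computation below aggregates per-gas emissions into one CO2-equivalent figure; the 100-year global-warming potentials follow the IPCC AR4 convention, and the inventory figures are invented for illustration.

```python
# Sketch of the weighting embedded in a 'single' indicator such as total
# greenhouse gas emissions expressed in CO2-equivalents. The 100-year
# global-warming potentials below follow the IPCC AR4 convention
# (CO2 = 1, CH4 = 25, N2O = 298); treat them as illustrative values.

GWP_100 = {"co2": 1, "ch4": 25, "n2o": 298}

def co2_equivalent(emissions_tonnes):
    """Aggregate per-gas emissions (in tonnes) into a single CO2e number."""
    return sum(GWP_100[gas] * tonnes for gas, tonnes in emissions_tonnes.items())

# Hypothetical national inventory, in tonnes per year.
inventory = {"co2": 100_000, "ch4": 2_000, "n2o": 100}
print(co2_equivalent(inventory))  # 100000 + 50000 + 29800 = 179800
```

The point of the sketch is that even this ‘basic’ indicator rests on a scientific weighting convention that most users never see.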
We use the term ‘indices’ to describe a combination of indicators that statistically aggregates different dimensions of a single issue of SD into a single number. ‘Issues’ are defined - in the sense given by the ASI-workshop in 2004 (Moldan et al., in press) - as the critical challenges for human development in the coming decades. They include issues such as climate change, biodiversity, water access, welfare…, and are not necessarily translated into policy dimensions. Indices thus measure phenomena according to issues whose multidimensionality is too broad to be rendered on the basis of a single phenomenon. For instance the indicator ‘Adjusted life expectancy’, mentioned above as a multidimensional indicator, will not allow the issue ‘health conditions’ of a population to be measured accurately. In order to do so, the indicator would need to integrate assessments of the quality of the health care system as well as many other concerns. Indices statistically combine, via the application of normalization and/or aggregation techniques, a series of indicators in order to describe an issue and to communicate on the issue at stake with a single number.
In some situations, however, such a single number cannot be attained robustly. The components of the issue under consideration might not be sufficiently known and understood, as might be their interactions; data might simply not be compatible, available or robust; human or budgetary resources might not suffice to engage in extensive data-treatment procedures; issues might simply be too multidimensional to generate sufficient consensus on an aggregation key; the target group for the indices might not be sufficiently trained to grasp complex multidimensional indices. In such situations ‘proxy indicators’ (sometimes labeled ‘substitute indicators’) can be preferred to the construction of indices. These are indicators which are put forward in order to assess an issue with a

single number, but without being ‘indices’: they are thus essentially non-aggregated. Basically, proxy indicators can take two distinct forms: they can be metaphors for the functioning of an issue (e.g. ‘number of countries having ratified a treaty’ as a metaphor for the political consensus to act on a problem), or they can be representative of the issue’s development by assessing crucial breakpoints (e.g. ‘glacier volumes’ as crucial markers in the overall chain of causal impacts of climate change).
A combined assessment of several issues (e.g. combining environmental with economic, social and cultural aspects) in a single number will hereafter be addressed as ‘index’ or ‘composite indicator’. Often, indexes are statistical combinations of several indices, in some cases of indicators. However, we restrict the term ‘index’ to a very limited number of specific cases where indices are combined into overarching indexes: environmental index, economic index, social index, institutional index and sustainable development index.

The right number

The common ground of indices, indexes and proxy indicators is that they strive for the production of a single number. Many arguments can be made in favor of single numbers. Most authors interpret the need for simplification as the necessity to adapt the message generated with indicators to the level, form and content of ambient mass-media information: the average citizen could not cope with the complex and complementary messages generated by a large number of indicators, and decision-makers would not dare to consider detailed information taking too much of their precious time.
Perret (2001) assesses such a comprehension (which is also his own) of indicator use and impact as fundamentally constructivist: indicators are meant to contribute to the construction of societal awareness. Hence their simplification towards the single number seems unavoidable in order to enable the widest possible comprehension of the phenomena under assessment.
In a more general context of decision-making and information input (see also chapter 3), Herbert
Simon (quoted by Perret, 2001 : 26) stated that “what information consumes is rather obvious: it
consumes the attention of its recipients. Hence, a wealth of information creates a poverty of attention
and a need to allocate that attention efficiently among the overabundance of information sources that
might consume it”.

Many authors and indicator developers who rally behind this ‘constructivist’ approach interpret such injunctions as a call to develop indexes. In this interpretation of the necessity to develop indexes, it is possible to identify aspects of the struggle between structuralism and incrementalism.
Counter-arguments to indexes point to the fact that promoting the use of multiple indicators will, in the longer run, induce societal capacity-building towards a better handling by all citizens of the complexity of trade-offs and uncertainty that reign in today’s public life. Indicators could thus be more than a simple information tool. Indicators could be the triggers which in the long run would contribute to initiating capabilities such as a proper handling of multiple indicators, a proper

interpretation of non-convergent messages, a proper identification of causal chains… However, in order to trigger these capabilities, which appear as vital necessities in a complex world striving towards SD, users of indicators need to be confronted with contradictions, multiple non-convergent messages and causal chains. Hence, according to this (structural) interpretation of the societal impact of indicators, indicators need to be multiple and should not be integrated into a single index (which eventually supports an unwanted opacity of cause-effect chains).

Does the divergence of opportunities that users attach to indexes or indicators mirror the divergence between the proponents of awareness-raising and those who count on capacity-building as the foundations of SD?
When it comes to the adequate number of indicators for a given decision-support process, opinions diverge to a point where it appears to become more a matter of faith than of confirmed arguments, let alone ‘proof’. Proponents of simplification and single numbers adhere to Paul Klemmer’s words (2002 : 60): “If I were against sustainability policies, I would support all projects which create a hundred, two hundred or three hundred indicators. (…) To be specific: indicator systems with more than four to ten indicators are of no use to policy making”. Klemmer’s argument is thus straightforward: if SD is to have a chance in the political and societal debate of the contemporary media- and information-society, then SD has to make its own those principles that apparently reign in that world. One of these fundamental principles is ‘keep it simple’, and simplicity articulated in an information tool boils down to ‘few numbers’. Klemmer’s opinion and call for simplification stem directly from the usage he allows indicators to fulfill. In this case, Klemmer acknowledges that indicators are directed towards being used by average citizens as awareness-raising instruments only. Raising generalized awareness among citizens for SD is meant, in the end, to be the best way to influence the political determination of decision-makers to engage in operationalizing SD.
However, in order to illustrate the divergence of opinions on the mechanisms of indicators and indexes and on the right number, it suffices to turn but a few pages of the same book (Klemmer 2002) and to read Brühl (2002 : 75), who states: “However desirable it may seem to develop a limited number of easy-to-handle indicators, practice tells us otherwise. Even for purely economic evaluation or steering, it is not enough to know just the national product of an economy (…). The same principle applies to social and ecological data”.
Interestingly, Brühl was concerned in his paper with indicators at a relatively less complex level (i.e. at the level of industry), whereas Klemmer was discussing indicators in a policy context of SD. One would have expected that the more complex situation of a generalized political SD-process, as considered in Klemmer’s case, would more easily generate a call for masses of adjacent and complementary indicators, allowing the policy-processes to be further fine-tuned.
As could be expected by cynics, two pages further on in the same publication, a third author brought the problem probably beyond the point of dogma when he affirmed (Bosch 2002 : 77) that the only possible answer to questions like “should there be many or few; should they be simple or complex?” could be that “(…) both simple and complex, globally comprehensive indicators can be useful. (…) Different problems and different people tackling them require different indicators”.

We agree with Bosch that it is all a matter of using the right amount of complexity at the right moment and for the right purpose. It is thus all a matter of what the indicator initiative is meant to be used for. As our aim is to assess and evaluate the potential of ISD, discussing usability is logically the central part of this thesis. The episode above shows, however, that even though Klemmer and Brühl were both using the same term ‘ISD’, they had something quite different in mind.
Klemmer was referring to ISD as general tools for the operationalization of SD, and probably saw rightfully that the currently rather weak support for SD in most developed countries’ governments points towards adopting an approach to indicators as tools for awareness-raising. Brühl, on the contrary, was referring to ISD as a precise decision-making tool at the level of industry, allowing one to monitor, guide and decide upon trade-offs.

More decisive for the success of an indicator initiative than the number of indicators is their comprehensibility. In this respect, it would have been insightful for both Brühl and Klemmer to read their co-author Simonis (2002 : 52) quoting Galtung: “an indicator which can not be understood within five minutes by a person with a standard level of education is not a means of investigation, but an instrument of political power”. The demonstration of indicators’ potential as instruments of political empowerment will be further discussed in the subsequent chapters, and will be addressed more specifically when questioning the roles of indicators. This discussion will allow us to see that Galtung’s words are ambiguous and depend strongly on what is understood by a person. In effect, Galtung was speaking in favor of the Human Development Index (UNDP, yearly) and its apparent simplification. The HDI presents itself as a single score per country obtained by normalization and (weightless) aggregation of three rather straightforward indicators. Such a single score allows laypersons to compare the performance of countries across time11 (i.e. longitudinal comparison) as well as across countries. The five-minute countdown set by Galtung thus seems perfectly applicable in the case of the HDI.
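The HDI-style construction discussed here can be sketched in a few lines: each sub-indicator is min-max normalised against fixed ‘goalposts’ and the results are averaged without weights. The goalpost and country values below are hypothetical, and the sketch follows the arithmetic-mean formulation; the real HDI methodology has changed over the years (including a switch to a geometric mean in 2010).

```python
# A minimal sketch of an HDI-style index: min-max normalisation of each
# sub-indicator against fixed 'goalposts', then an unweighted arithmetic
# average. Goalposts and country data are illustrative assumptions.

GOALPOSTS = {  # (minimum, maximum) per dimension -- illustrative only
    "life_expectancy": (25.0, 85.0),
    "literacy":        (0.0, 100.0),
    "log_gdp":         (2.0, 4.6),   # log10 of GDP per capita
}

def normalise(value, lo, hi):
    """Min-max normalisation onto the 0..1 interval."""
    return (value - lo) / (hi - lo)

def toy_hdi(country):
    scores = [normalise(country[d], *GOALPOSTS[d]) for d in GOALPOSTS]
    return sum(scores) / len(scores)  # unweighted arithmetic mean

country = {"life_expectancy": 70.0, "literacy": 90.0, "log_gdp": 3.9}
print(round(toy_hdi(country), 3))  # 0.794
```

Note how the whole ‘five-minute’ readability of the single score rests on these construction choices remaining invisible to the lay reader.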
However, if the aim is to comprehend the entire construction of the HDI, namely the normalization method, the sensitivity of the indicators and the index, the data extrapolations…, then five minutes is hardly enough, regardless of the educational level of the reader. A thorough scrutiny of the HDI vastly expands the time necessary for a full comprehension of its methodological, technical and statistical weaknesses and strengths. However, understanding the HDI, or any other index, is for many authors a prerequisite of correct indicator use.
What then is the depth of comprehension that users need in order to utilize indicators with the necessary knowledge and prudence? How far should users understand the methodological

11 The capacity to compare across time depends of course, in the first place, on methodological stability. Particularly with the HDI, such stability is not given across the whole range of years (1991 - ) the reports have been produced. The methodology has undergone a series of adaptations, which for an exercise going on for more than a decade now is a necessity in itself. However, as the normalisation and standardisation protocols of the HDI generate scores that are very close to each other for whole groups of countries, adapting methodologies only slightly from one year to another can imply that rankings shift fundamentally and chaotically without this reflecting any real policy event. The authors of the reports themselves consider these to be among the main limits of the HDI: its relative non-applicability for comparing individual countries’ evolutions across time, combined with its inability to rank adequately and in non-arbitrary fashion countries which show more or less the same level of economic development. Given that this is, however, more or less exactly what the media do, e.g. compare their country’s rank with its last year’s performance and with the performance of neighbouring countries, not too many illusions should be entertained about people’s ability to show a learning curve in using statistics and reading methodological annexes.

construction, and deduce from it the weaknesses and strengths of the indexes they are using? We will leave this question open until a later stage, when we address the roles and usages of ISD.

Lists of indicators and indicator frameworks

Differences between levels of aggregation, i.e. between indicators and indexes, are thus not that evident once we step to a deeper level of analysis. As mentioned before, the single major difference between indicator and index stems from the fact that indexes are aggregated (often also normalized12 and standardized13) statistics. Compared to indicators, indexes thus rely on an additional stage of statistical treatment. We follow Boulanger (2004 : 5) to sketch hereafter (see Figure 2) a standard construction process leading from the identification of the concept to be assessed to the final index.
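The additional stage of statistical treatment that separates an index from a list of indicators can be rendered schematically: raw sub-indicator values are normalised against benchmarks (in the OECD sense quoted in footnote 12) and then aggregated with explicit weights. All names, benchmark values and weights below are invented for illustration.

```python
# Schematic rendering of a concept-to-index construction: normalisation
# of raw sub-indicator values against benchmarks, followed by a weighted
# aggregation into a single number. Values are illustrative assumptions.

def normalise(value, benchmark):
    # Normalisation in the OECD sense: measured value divided by a
    # benchmark value of the same variable.
    return value / benchmark

def build_index(raw, benchmarks, weights):
    """Weighted average of normalised sub-indicators -> one number."""
    assert set(raw) == set(benchmarks) == set(weights)
    total_w = sum(weights.values())
    return sum(weights[k] * normalise(raw[k], benchmarks[k]) for k in raw) / total_w

raw        = {"air_quality": 42.0, "water_quality": 8.0, "biodiversity": 0.6}
benchmarks = {"air_quality": 50.0, "water_quality": 10.0, "biodiversity": 1.0}
weights    = {"air_quality": 1.0,  "water_quality": 1.0, "biodiversity": 2.0}
print(round(build_index(raw, benchmarks, weights), 3))  # 0.71
```

Every line of this sketch, from the choice of benchmarks to the weights, corresponds to one of the value-laden construction stages discussed in the text.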

Without going into the detail of such a process at this time, we simply raise some fundamental issues that illustrate the difficulty of some of its stages.
If, technically, the final aggregation, normalization/standardization and weight-assignment occur only with indexes and indices, evidence shows (see chapter 3) that when it comes to the use of indicators, plain non-aggregated indicators grouped in a list cannot fully avoid their own ‘aggregation’ phase, even if it is of a specific nature: users of indicator lists seem to aggregate cognitively. In other words, assigning personal priority to one or the other phenomenon under assessment (i.e. to one or the other indicator within a list) is to a certain extent comparable to assigning weights during aggregation. The difference is that during a thorough methodological aggregation process leading to the construction of an index, the weights assigned to each sub-indicator, along with the selection of the domains and dimensions, will follow a rationale agreed upon by the constructors. In the case of cognitive aggregation (i.e. when users are confronted with a number of indicators), the rationale sits individually with each user.

Furthermore, statistically and methodologically, even some of the more basic operations in aggregation are not value-free. For instance, the very basic choice of the number of sub-indicators used to construct an index cannot be considered a neutral operation and thus needs to follow some form of rationale as well. This can again be illustrated with the calculus of the Human Development Index: as a non-weighted average of three sub-indicators (i.e. GDP, life expectancy, literacy), GDP amounts to one third of the overall HDI-score of a country. Let us consider for a moment that the HDI had been conceived initially, or would be revised, as an index consisting of four sub-indexes rather than three, e.g. by integrating a measure of environmental destruction (Hammer and Hinterberger 2003; Neumayer 2004). Then the relative importance of economic development (or more precisely of the

12 OECD (2002a : 17) defines normalisation as “measured value divided by some benchmark value of the same variable”.
13 OECD (2002a : 17) defines standardisation as “(measured value minus sample mean)/standard deviation”.

annual added-value produced by a national economy) of a country, as measured with GDP, would diminish from a third to a fourth. In the eyes of many people, such a seemingly innocent methodological choice made during construction has to be acknowledged as revealing paradigmatic and ideological lines of thought. In the case of the HDI, the rationale behind the calculus of the index is based on the notoriety of its developer, i.e. A. Sen, and the acceptance of his views14 on development processes, which echo in the selection of the sub-indicators as well as in their number.

Figure 2: From concept to indices (Boulanger 2004 : 5)

On the other hand, in the case of aggregated indicators, when it comes to deciding on the weighting of the individual indicators, non-weighted indexes also reveal a form of choice operated at the moment of determining the methodology of index construction: all the constituent elements are considered of equal importance for the final result. E.g. human development as measured by UNDP is equally composed of revenue, level of education and the length of a life. Methodologically, speaking of non-weighted indicators thus appears as a misuse of terminology: not weighting sub-indicators corresponds to assigning each component the same weight (i.e. in the case of a non-weighted average, each weight equals “1”). In effect, truly not weighting a component would rather imply assigning it the weight “0”, meaning that the component is no longer part of the index.
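The arithmetic behind these two observations can be made explicit with a short sketch, using invented component scores: an ‘unweighted’ average is in fact an equal-weights average, adding a fourth component dilutes each component’s implicit share from a third to a fourth, and a weight of zero removes a component altogether.

```python
# 'Unweighted' means equal weights: each of n components implicitly
# carries a 1/n share of the index. The component names and scores
# below are invented for illustration.
from fractions import Fraction

def implicit_share(n_components):
    """Share of the index carried by one component of an unweighted mean."""
    return Fraction(1, n_components)

print(implicit_share(3))  # 1/3 with three HDI-style sub-indicators
print(implicit_share(4))  # 1/4 once a fourth (e.g. environmental) one is added

# Assigning the weight 0 is not 'not weighting' a component:
# it excludes the component from the index altogether.
scores  = {"gdp": 0.9, "life": 0.7, "literacy": 0.8, "environment": 0.5}
weights = {"gdp": 1, "life": 1, "literacy": 1, "environment": 0}
index = sum(weights[k] * scores[k] for k in scores) / sum(weights.values())
print(round(index, 3))  # 0.8, i.e. the plain three-component average
```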

For the efficient usage of indicators within the highly normative context of SD, there are (Boulanger, 2004) thus a number of good reasons to avoid cognitive aggregations and to leave the aggregation process to the developers of the indicators during the construction phase of the index,
14 Basically, Sen sees development as influenced by three main parameters, which can be rendered as: ‘to be’ (i.e. represented by the HDI sub-indicator relative to average life-expectancy), ‘to have’ (i.e. GDP/capita), and ‘to be able’ (i.e. the literacy component).

instead of to the individual user during the use of the indicators. Revealing preferences and choices, through communicating the methodology of the index explicitly, is acknowledged as an elementary step towards enhancing the transparency of these decision-making instruments.

On the other hand, confronted with such a number of fundamental difficulties attached to the process of developing an aggregated indicator from measurements, there are also some good reasons to simplify the process of developing an indicator tool by simply omitting the final steps of aggregation. By leaving it at the level of indicator lists (see figure 1 above), at least some of the more classical fallacies linked to aggregation processes seem to be avoided by definition.
As a response to the above-mentioned implicit cognitive aggregation by users, there are ways to reduce a user’s ‘freedom’ to cognitively (and to a certain extent, unconsciously) reinterpret non-aggregated indicator lists. Indicator frameworks have developed through the years as subsidiary techniques (with respect to aggregation) to assemble indicators in a structured, non-aggregated way, while still conferring some internal logic, rationale and thus coherence on a listing of indicators.
The single most common indicator framework used today to structure indicators is seemingly15 derived from the work of the UN’s Statistics Division during the 1980s (Bartelmus, 2002), which developed a Stress-Response framework for environmental indicators (UN, 1984). The Stress-Response framework was part of a number of recommendations elaborated by the UN’s Statistics Division, meant to strengthen coherence in the development of environmental statistics. This series of recommendations developed into a common Framework for the Development of Environment Statistics (FDES) and was endorsed officially by the Statistics Commission of the UN in 1985 (UN, 2000). OECD’s environmental statistics, developed during the 1990s (OECD 1994), built upon this Stress-Response framework to develop the now hegemonic Pressure-State-Response framework (PSR), which eventually developed into a series of derivations such as the Driving-force – Pressure – State – Impact – Response framework (DPSIR).
DPSIR builds upon the sequencing of the causal chains of human-environment interactions (see Figure 3): the general economic, demographic, political… conditions of an entity (e.g. GDP per capita) are considered to shape the Driving-forces. Their evolution induces a series of Pressures (e.g. CO2 emissions per capita) on the environmental systems, which as a result change their State (e.g. the concentration of greenhouse gases (GHG) in the atmosphere). Ultimately, the alteration of the states of environmental systems induces Impacts on human development (e.g. damage costs to repair flood-avoidance infrastructure), which are Responded to politically or societally (e.g. the levy of a CO2 emission tax) in order to influence either of the stages of the human-environment interaction sequence.
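The DPSIR sequence just described can be laid out schematically, placing the example indicators from the text at each stage; the assignment of a given indicator to a given stage is, as the text goes on to argue, itself a judgment call rather than a neutral operation.

```python
# Sketch of the DPSIR sequencing, with the example indicators from the
# text placed at each stage. The stage assignments merely restate the
# illustration above; other analysts might place them differently.

DPSIR = {
    "driving_force": "GDP per capita",
    "pressure":      "CO2 emissions per capita",
    "state":         "atmospheric GHG concentration",
    "impact":        "damage costs to flood-avoidance infrastructure",
    "response":      "CO2 emission tax levied",
}

# Reading the framework as a causal chain, stage by stage:
for stage, indicator in DPSIR.items():
    print(f"{stage:>13} -> {indicator}")
```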
The sequence adhered to by DPSIR appears to describe a causal chain (from driving-force to response). Reality is eventually less determinate. For instance, policy responses are meant to avoid (rather than to await) impacts on society, and ideally are taken before negative impacts arise. Responses which influence the driving-forces directly and massively, e.g. by taking position for a

15 Gadrey and Jany-Catrice (2003 : 37) affirm that the P-S-R framework was developed in the 1970s by Anthony Friend (a Canadian statistician), and that the OECD SoE Group (a working group on State-of-the-Environment reports) adapted Friend’s framework. It is beyond the reach of this study to verify the exact paternity of the framework.

major ecological tax reform, are not commonly considered to be viable policy options. And uncertainties often make it difficult to assign clearly and unambiguously the linkages between an altered environmental state and its impacts on society (especially when it comes to health issues). Responses are rarely triggered by the simple existence of information on a cause-effect chain.
Interestingly, terminology changed the significance of DPSIR through the years. The UN Statistics Division, when working on the Stress-Response framework in 1984, wanted “the stress-response approach (to) focus on impacts of human intervention within the environment (stress) and the environment's subsequent transformation (environmental response)” (UN, 2000 : 11). Today, in the context of DPSIR, ‘responses’ are widely acknowledged to be the responses of society (policy, behavior, economic impulse…) as a reaction to identified alterations of environmental states.

Figure 3 - DPSIR framework

Of course, other indicator frameworks and architectures exist alongside the DPSIR-family frameworks. Among the more atypical examples, one should mention an architecture built on an eco-systemic representation of the properties of human-environment interactions, i.e. Bossel’s (1999) framework, which articulates the viability of a system into ‘basic orientors’ (e.g. viability criteria) and identifies six sub-systems of the human-environment interaction (see also section 2.1.1).
Also to be included among the more intellectually appealing examples is the equally complex French ISD-framework built upon a sociological and macro-economic representation of human-nature interactions. It materializes as an architecture of interdependent modules, initially developed by J. Theys (2000) for the Institut Français pour l’ENvironnement (IFEN). As an example of the much simpler diagrammatic frameworks generally used throughout the world, one can cite the themes/properties matrix used by the Swiss Federal Statistics Office (2003) to organize its ISD-list.

Using such frameworks - or architectures - to structure indicator lists has a series of advantages.
Following the argument developed earlier, an explicit architecture for indicators can fulfill a similar

need as aggregations do: rendering the developers’ comprehension of the structure of human-environment interactions more explicit to the user. When presented in a transparent and comprehensive way, architectures allow users to perform a second-level reading of the indicator list presented. Users are shown that the evolution of a certain phenomenon measured by a specific indicator (e.g. an increase in CO2 emissions) is linked to the evolution of an upstream phenomenon (e.g. an increase in private transportation activities) and implies that other phenomena evolve as well (e.g. an increase of mean global temperature). If, in the example of CO2 emissions and their impacts, the causal chain is evident and widely acknowledged by now, this isn’t necessarily the case for more ambiguous phenomena, such as the link between income distribution and environmental justice. Very basically, seeing where the developers place the different phenomena in their architecture (e.g. is income distribution interpreted as a cause, i.e. driving-force, or as an effect, i.e. impact?) allows indicator users to gain insight into what constitutes the general thought-system of the developers.
Nevertheless, many indicator initiatives have lately refrained from using frameworks to structure their indicator lists. Especially the frameworks stemming from the PSR-family, as well as complex frameworks such as those developed by Theys and Bossel, tend not to be used anymore. Simple diagrammatic matrixes (for instance, crossing policy dimensions with sustainability criteria) and tree-like structures (for instance, giving different levels of detail of indicators) seem to be gaining the advantage over the more elaborate systemic architectures based on more complex representations of human-environment interactions.
Among the many reasons that have been identified for discarding these types of frameworks, the most obvious one is that most of the widely used frameworks (especially if they lean on DPSIR) tend to make sense only as long as they are used in areas with relatively few uncertainties, especially with respect to the causal relationships existing between the phenomena assessed. Such frameworks are thus particularly operational for issues which can be assessed with mono-dimensional indicator lists. For instance, OECD’s usage of the PSR framework is well established in State-of-the-Environment reporting. However, once the environmental scope is exceeded for more thoroughly multidimensional issues, as is necessarily the case for SD-indicators, interlinkages become less clearly identified and are generally prone to uncertainties. The individual indicators identified to assess each phenomenon are difficult to link unequivocally with other indicators. If the causal relationships (in scope, scale and direction) between phenomena cannot be unambiguously determined, DPSIR frameworks necessarily either mirror fundamental uncertainties (e.g. by inscribing the same indicators at different places in the framework) or mirror evidence which does not exist (e.g. by linking indicators of cause and of effect where knowledge of such a relationship is not given). As a consequence, DPSIR frameworks have been identified, after a series of trials, as performing rather weakly when assessing environmental phenomena along with their social and economic causes and effects. DPSIR thus performs rather weakly with ISD.

A second difficulty encountered when constructing indicators on the basis of DPSIR-frameworks could be seen as trivial, but partly reveals the strategic nature of selecting indicators for any kind of list. On a number of occasions, developers of DPSIR-based ISD initiatives, after putting a lot of energy into agreeing internally on the causal sequence of their indicator lists (i.e. on deciding upon the fundamental uncertainties with regard to cause-effect chains mentioned above), saw

themselves inevitably engaged into reiterating the same type of discussion with their users. What
should be labeled a pressure, a state or an impact is not unambiguously determinable when
considering environment-economy interactions, and depends largely on the point of view taken by
those engaged in selecting the respective indicators. Per se, assigning indicators to one or the other
sequence is not that vital and could be considered secondary, as long as the internal coherence of the
DPSIR sequencing is preserved. However, in a classical, institutional or organizational indicator
selection process where a predetermined number of indicators per sequence is supposed to be
selected, the bargaining over including or excluding the assessment of one or the other phenomenon
easily extends to arguments which sound like this: ‘Tropospheric ozone concentrations are a
pressure, not a state. Considering the number of pressure indicators already identified, and which
are undeniably accounting for greater potential effects, we cannot include tropospheric ozone in the
list’. DPSIR sequencing, like other frameworks, can thus introduce a new level of strategic
bargaining during the indicator selection process between the implied stakeholders and
administrators.
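The classification ambiguity described above can be illustrated with a toy sketch (our own hypothetical example in Python; the indicator names and category assignments are illustrative assumptions, not drawn from any actual DPSIR initiative):

```python
# Toy DPSIR classification in which one indicator plausibly fits several
# categories. All indicator names and assignments are hypothetical.

dpsir = {
    "Driving force": ["passenger-car kilometres travelled"],
    "Pressure": ["NOx emissions from road transport"],
    "State": ["tropospheric ozone concentration"],
    "Impact": ["respiratory hospital admissions"],
    "Response": ["low-emission-zone coverage"],
}

# Depending on the point of view, the same phenomenon can be argued into
# different slots: ozone is a 'state' of the atmosphere, but also a
# 'pressure' on human health and vegetation.
contested = {
    "tropospheric ozone concentration": {"State", "Pressure"},
}

def ambiguous(indicator):
    """Return True if the indicator has more than one defensible category."""
    return len(contested.get(indicator, set())) > 1

print(ambiguous("tropospheric ozone concentration"))  # True
```

The point of the sketch is simply that the framework itself cannot resolve such contested assignments: the choice is made in the selection process, not by the DPSIR logic.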

Thirdly, and perhaps the most important critique that can be raised against frameworks: their value-
added for users is not clear. For frameworks to really enable a second-level reading of an indicator
selection, i.e. to generate a further useful comprehension of the interlinkages between socio-
environmental phenomena, they need to perform quite well with regard to their representation of the
interlinkages between the dimensions of SD. In other words, the frameworks would need to be
complex, thus evolve against the simplification principle, thus become difficult to understand in
themselves, and thus add a second-level complication for users rather than a second-level reading.
Frameworks can thereby add a further level of difficulty for indicator constructors, as well as a
further potential source of misconception. The above-mentioned frameworks elaborated by Bossel
and Theys experienced this. Widely acknowledged as being of superior quality to other more
common frameworks (especially DPSIR), they found no candidates willing to apply them outside
their initial prototype setting. It should also be noted that the more complex approaches to
frameworks impose conditions on indicator lists which remain very selective. For instance, to be
correctly applied, Bossel’s framework asks for a couple of hundred indicators to be selected and
included in the list. In a context of ‘the fewer the better’, following such conditions becomes
difficult for institutional actors, if only in terms of resource allocation for new data gathering.

There are thus not necessarily fewer problems with indicator initiatives that refrain from
aggregation and choose to develop along frameworks. As with aggregation, the framework need not
necessarily be visible to the end-user (at the risk of adding confusion), but can be an opportunity for
those who want a deeper methodological and conceptual insight into the interpretation of the
statistical treatment administered to the stakes at hand. Agreeing with some of the issues raised above,
Rickart (et al., in Moldan et al. (in press)) summarizes her institutional experience with frameworks
at the European Environment Agency as follows: “Indicator developers use frameworks to
provide a common language and perspective of the issue and its solution. This facilitates the
indicator development process in particular when there are many different actors involved. The way
in which issues are framed has no policy relevance, as the final indicator will stand-alone. However
the framing becomes important in the interpretation and deeper analyses of the results. The
frameworks are the assumptions and rationale on which the indicator is based and should be made
available to those wishing to interpret the indicators. Importantly understanding these assumptions
and the frameworks are essential in order to compare and discuss indicators from different
institutions, that may be based on different frameworks. For the majority of users however showing
the frameworks themselves, or categories from such frameworks, would only add an unnecessary
additional degree of complexity that might distract them from the results.”

1.2.4 Typologies of indicators: some examples

We emphasized above that ISD are far from homogeneous. Different approaches to indicators exist,
and the context of SD adds to their complexity and diversity. Apart from the fact that indicators can
be catalogued, as seen, according to the depth of the statistical treatment they underwent (i.e. indexes
or indicators), indicator approaches can be grouped according to the reference system they adhere to
or stem from: economics, engineering, system-dynamics, political science, management… The table
below (see table 1) shows, uncommented, some of the most widespread ‘families’ of indicators. The
typologies presented cannot be complete and exhaustive. Some families of indicators are not
considered; for instance, indicators linked to energy systemics, such as exergy- and emergy-
indicators, are not presented here for the simple reason that until now these remain theoretical
constructions. At the same time, some of the typologies presented could be extended to include more
or other indicator types; for instance, the policy-performance approach could include output
indicators, context indicators, institutional performance indicators… The attempt here is merely to
illustrate the multiple perspectives that can be taken when trying to characterize indicators, as well
as the multiple disciplines that can be linked to these indicators. Change, progress, evolution,
revolution, innovation… are the main ‘ingredients’ of SD; they can be conceived very
heterogeneously and are assessed in many ways, following different logics.

Table 1. Indicator typologies

Functional approach
- Descriptive indicators: assessment of the prevalent existing situation.
- Prescriptive indicators: assessment of progress achieved with regard to a desired outcome.
- Normative indicators: assessment of the evolution of phenomena with regard to defined limits or norms.
(+) Permits identifying indicators according to their purpose in an evaluation exercise.
(-) Often difficult to distinguish clearly between the 3 types of indicators and to identify a given indicator as belonging to one or the other type.

Policy-performance approach
- Effectiveness indicators: assessment of the impacts (i.e. the effects) of a policy or of a change in the conditions addressed by policy.
- Efficiency indicators: assessment of the performance of resources (human, economic or environmental) allocated to support a change in a given system.
- Outcome indicators: assessment of the means liberated by the policy decision meant to cope with the problems identified.
(+) Allows considering the quality of efforts made and of the obtained change, rather than limiting assessment to the quantity of change induced.
(-) Notable influence of the selection of the evaluation’s timeframe on the assessment’s verdict.
(-) Increasingly difficult to identify unambiguously the effects of single policy measures.

Systems approach
- Input indicators: assessment of the flow of material, energy or substances entering a system (e.g. a nation, a city, an industrial sector); measured in absolute or relative values.
- Output indicators: assessment of the flows leaving a system.
- Throughput indicators: assessment of the flows passing through a system without notably altering the system’s quality.
(+) Clear relationship to a logical and hierarchical framework of interdependent systems.
(-) Large influence of the definition of the system’s boundaries, the division of systems into subsystems and the hierarchy between systems.
(-) Ignores the evolution of the quality of the considered system, i.e. black-box.

Economic approach
- Capital or Stock indicators: assessment of the quantity or quality of resources (human, natural, infrastructural, knowledge…).
- Rates or Flow indicators: assessment of the extent, speed or quality of change of given resource capitals.
(+) Allows increasing the transparency of trade-offs (i.e. substitutions) between different capitals.
(+) Permits following the effects of policy measures.
(-) Dependent on the formalization of a comprehensive model of the different types of capitals.
(-) Calls for agreement on the valuation of the quality and quantity of all types of capitals (including human, environmental, social, cultural…).
(-) Calls for agreement on rules of substitution between capitals.

Process-oriented approach
- Guide-beam indicators; Distance-to-target indicators: assessment of evolutions with regard to a desired outcome. Scientific, societal or political norms define a corridor of desired evolutions, or the value of the target situation.
- Non-sustainability indicators: assessment of evolutions with regard to an initial non-desired situation.
- Capacity building or institutional or human capital indicators: assessment of the capabilities developed by a society (or institution) and their adaptability to stress, change, crisis.
(+) Calls upon actors to become explicit about their targets, needs/wants and norms. Development of scenarios, or of limits and thresholds, allows for greater transparency on policy.
(+) Allows for easy communication of the steps (to be) achieved and directions (to be) followed.
(-) Dependent on the strength, accuracy and robustness of the process of identifying the targets and thresholds.
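The distance-to-target logic in table 1 lends itself to a minimal formalization. The sketch below is our own illustration (the function name and example values are assumptions, not taken from any cited initiative): it expresses an indicator's current value as the share of the baseline-to-target distance already covered.

```python
def distance_to_target(current, baseline, target):
    """Share of the baseline-to-target distance already covered
    (0 = still at baseline, 1 = target reached, >1 = target exceeded)."""
    if target == baseline:
        raise ValueError("target must differ from baseline")
    return (baseline - current) / (baseline - target)

# Hypothetical example: emissions of 90 units today, 100 in the baseline
# year, with a politically agreed target of 60 units.
progress = distance_to_target(current=90, baseline=100, target=60)
print(f"{progress:.0%} of the way to the target")  # 25% of the way to the target
```

Such a normalization is what makes distance-to-target indicators easy to communicate, while shifting the whole evaluative burden onto the processes that set the baseline and the target.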

Conclusion to the chapter

The intention of the present thesis is to discuss the utilisation of ISD in decision situations. In this first
chapter, we defined (see section 1.2) the potential roles of ISD in general terms, e.g. participating in the
generation of SD, and in terms of communicating information. These purposes of indicators need to be
detailed further, notably so that we can later focus our analysis on the usability of indicators in the realm
of SD. In effect (see section 3.1.2), usability is a matter of context, and more precisely of the general use-
functions attached to ISD: depending on what ISD are meant to be used for, the definition of the conditions
and characteristics of their usability is adapted. In other words, intuitively, strong ISD-initiatives are those
which are best adjusted to the roles they are meant to fulfil; ISD are not an end in themselves, but
merely a means towards an (operational) objective.
However, neither ‘participating in the generation of SD’, nor ‘communicating for awareness-raising’,
nor ‘providing information for decision-making’, is sufficiently discriminating to serve as an operational
objective for ISD. ISD-initiatives often conceptualize their roles on the basis of classical representations
of policy-making cycles (e.g. agenda-setting, policy formulation, decision-making, policy implementation,
policy evaluation). Generically, for each of these stages a series of functions or roles is then attached to
ISD. For instance, in an agenda-setting phase one would expect an information system on emerging or
recurrent societal problems or policy failures. ISD could thus be called to fulfil such an alarm-function for
emerging problems, or, conversely, to play an evaluative role in identifying policy failure. As we will see
hereafter (section 3.1), such policy-cycles are very coarse representations of actual policy action and
decision situations, and are therefore not accurate enough to discriminate between different operational
objectives of policy-aiding instruments, let alone to attach roles, purposes or uses for ISD to them.
A multitude of roles, which are repeatedly attached to ISD, can be found at the level of indicator
initiatives themselves. By way of example, ISD are imagined to (Martunen and Palosaari 2004; Hezri
2006; OCDE 2001a and b; Lehtonen 2002): Assess intangible impacts; Compare incommensurable
impacts; Identify stakeholders’ values and objectives; Support learning process of involved stakeholders;
Discriminate among competing hypotheses (for scientific exploration); Structure understanding of issues
and conceptualise solutions; Track performance as determined by results-based management;
Discriminate among alternative policies either for specific decisions or general policy directions; Inform
general users (public, stakeholders, community); Follow change and progress in differing SD-domains;
Evaluate the efficiency of policy implementation; Guarantee the transparency of decision-making; Allow
to prioritize and allocate resources; Monitor the respect of international engagements; Monitor the
integration of environmental policies into the formulation and implementation of economic and sector
policies; Catch the attention of the public on the principal questions of SD; Offer synthetic information to
high-level decision-makers; Generate feedback on policy impacts; Promote the responsibility and
accountability of decision-makers;…

Obviously, the items of such lists of possible or expected roles for indicators do not enable a detailed
discussion of ISD-usability, e.g. identifying the exact signification of a particular role, discussing its
linkage to decision-making situations, elaborating on the linkages between the identified roles and
usability in decision situations… However, they make it clear that we need to define two more elements
in what follows in order to construct the context for a discussion of the usability of ISD in decision
situations. First (chapter 2), we will explore further the interpretations to be given to ‘sustainable
development’. Subsequently, in chapter 3, we will examine the influence of information and assessments
on decision situations, and explore different approaches to assessing that influence.

Chapter 2
A Procedural Understanding of Sustainable
Development: Principles and Processes

The first chapter introduced the object of our study, i.e. indicators, and specified that indicators have
many different materializations. One of the underlying factors influencing the forms (and thus, at least to
a certain extent, also the functions) of ISD is the interpretation of the dynamics assigned to sustainable
development, and more precisely of the necessary institutional and societal change. Whether
sustainability is felt to need an incremental operationalisation strategy, or whether SD is felt to call for a
fundamental, structural questioning of the mechanics of decision spaces, has a major influence on the
comprehension of ISD processes, hence of ISD functions, hence of usages. Consequently, the present
chapter discusses in a little more detail the conceptualisations of SD.

Quite obviously, the interpretation one attaches to SD influences the downstream configuration of any
‘operationalization’ scheme and implementation instrument. Indicators, belonging inevitably to the
larger family of ‘tools for decision-making’, are largely influenced by the interpretation of SD that is
used, and not only because "(…) the sustainability concept we adopt has consequences: our
interpretation of the concept directs our focus to certain indicators at the neglect of others” (Bossel 1999
: 3). This relationship is not unidirectional, because “(…) if we rely on a given set of indicators, we can
only see the information transmitted by these indicators, and this defines and limits both the system and
the problems we can perceive, and the kind of sustainable development we can achieve” (Bossel 1999 :
3).
Whereas these interrelationships in defining and implementing SD and ISD are of course not exclusive to
the policy domain of SD, they nevertheless imply that the direct and multilevel influences between Policy,
Science, Practice and Paradigm must be rendered visible in the encountered decision situations. The roles
of the different societal actor arenas, which intervene in ISD-configuration and SD-translation, necessarily
have to be adapted to decision situations concerned with SD. In this sense, for instance, Funtowicz and
Ravetz (1990) called for the implementation of extended peer reviews and, more generically, for the
practice of post-normal science when decisions occur in policy domains where facts are uncertain, values
are disputed, stakes are high and decisions urgent. As we will try to argue throughout the present chapter,
SD is such a policy domain. Even if post-normal science can be disputed as a concept and as a fruitful
reaction to the challenges of SD, the present chapter will nevertheless be oriented to show that decision
situations which are characterized by SD imply a specifically procedural response. The necessary re-
definition of the roles between societal actors is thus emphasized by situations where decisions are built
upon high uncertainties and encounter the potential irreversibility of some of the impacts of the decision.
Redefining procedures16, processes and institutionalised arrangements is one acknowledged option for
such a structural reorientation of decision situations.

When defining SD, terminology such as holistic, systemic, multi-dimensional, stakeholder… is in
widespread use. In parallel, hardly any indicator initiative fails to refer to the same type of

16
“(…) the more uncertain a decision is, the more it risks being irreversible. Beyond a certain threshold, uncertainty and
irreversibility justify a common reorientation towards the search for procedural rather than substantive solutions”
(Faucheux et al. 1993 : 83, our translation).

vocabulary, and even the graphical representations of the indicators often refer to these concepts. In the
following sections, we will approach SD first through systems theory, or at least through what is meant to
be applicable17 from the latter to SD. We are attempting here neither to define SD nor to present a
complete picture of all its components or interpretations, as other authors (see for instance Zaccaï 2002)
have already done this quite extensively. In this synthetic chapter, we will address three important sets of
vocabulary which are relentlessly used in the realm of SD and ISD. After a brief historical reading of SD
and some precisions with regard to its definitions, we discuss the links between SD and systems’ theory,
as well as the latent proceduralism of SD. Finally, we will briefly investigate the idea of SD as a
collection of normative principles, only to conclude that all three notions are inextricably linked in the
case of ISD.

2.1 Sustainable Development: Systems’ Approaches and Processes

When, some 20 years ago, the Brundtland Commission presented its report (WCED, 1987), it notably
wanted to draw the attention of scholars, politicians and activists to the necessity of finally coming to an
integration of economic, social and environmental objectives into a global development path.
Simultaneously, the commission was calling for the long-term and global-local perspectives to be
internalized as basic decision-making variables for the future. Many analysts (e.g. Sachs 2002; Becker et
al. 1997 : 12) distil the essence of the report, and of the subsequent Rio-conference18, to the fact that
development met environmental issues for the first time. Of course, some members of academia and of
the organised civil society, as well as a number of civil servants, had always had to cope with multi-
dimensional constraints while defining the actions, recommendations and policies they were dealing
with. Indeed, since the dawn of economic, sociological or environmental thinking, the careful analyst has
been aware of the interlinkages between (what we now call) the multiple pillars of SD. Rio was however
one of the first times that such interlinkages were debated and recognized at the level of high-profile
politicians, and in a global, long-term perspective.

The Brundtland Commission defined sustainable development as being "a development that meets the
needs of the present without compromising the ability of future generations to meet their own needs"
(WCED, 1987). Even while carefully reading their report 'Our Common Future', sustainable development
as a policy domain does not get very much clearer than this. Today, after 20 years of more or less

17
See also Musters (et al 1998 : 245), who agree with the father of systems theory, Van Bertalanffy, “that system theory
should be regarded as a ‘guiding idea’, rather than a modelling tool”.
18
United Nations Conference on Environment and Development (UNCED) was held in 1992 in Rio de Janeiro, Brazil.
Often simply referred to as the 'Rio Summit' or 'Earth Summit'.

intensive efforts by all involved societal parties, SD has grown into a paradigm that serves as a point of
reference for many professionals when defining, implementing or promoting their activities. The lack of
initial conceptual clarity has been beneficial to most initiatives, partly because their initiators were
reasonably free to interpret SD in relation to their own constraints and opportunities. As a direct
consequence, the collective understanding and debate of SD may still be multi-faceted and contradictory
(Zaccaï 2002), but SD has undeniably gained some project-related clarity since Rio for individual
activities and initiatives.
However, it is obviously not the sum of individual operational definitions of SD that will aggregate into
an unambiguous and conceptually robust definition of SD, especially as “for the proliferation of definitions
is not just a matter of analysts trying to add conceptual precision to Brundtland’s rather vague
formulations. It is also an issue of different interests with substantive concerns trying to stake their
claims in the sustainable development territory” (Dryzek 1996 : 124). However, the multiplication of
different, differing and diverging ‘sustainable developments’ led to the emergence of a consensual
vocabulary. The resulting common translation of SD can be expressed in terms of an
“implicit objective function that take the forms of such statements as: sustain only, develop mostly,
develop only but sustain somewhat, sustain or develop – for favoured objectives” (Parris and Kates 2003
: 3). Parris and Kates point to the fact that in the absence of a clear normative and implementable
definition of SD, actors choose to operationalize their SD by directly assigning it management objectives,
such as, for instance: ‘do not exploit renewable resources above their rate of renewability’. In this sense,
SD becomes only the overarching concept, which allows structuring, selecting and linking different
operational management objectives. As a consequence, Sachs (2001, in Bartelmus 2001 : 190) develops
SD as a normative concept close to other more classical ethical imperatives: “Sustainable development is,
however, not an operational concept. Rather – very much like peace or democracy – it is a guiding idea
for the development of societies. As such it contains two major aspirations, which have been foundational
for the formation of the concept in the last twenty years. These are, first, that humanity should respect the
finiteness of the biosphere and, second, that the recognition of global biophysical limits should not
preclude the search for greater justice in the world”.

The Brundtland Commission's main achievement was probably to have ensured that SD today provides
a structure, a common point of reference, an umbrella concept. On the one hand, by spelling out a name
for a hypothetical, utopian situation that refers to the optimal, but improbable, integration of multiple
dimensions, opposing concepts and antagonistic principles (see section 2.2). On the other hand, by
proving that such concepts are able to draw wide attention to the necessity of stepping onto a sustainable
development path. Not least, this relatively dynamic and procedural interpretation of SD, which we will
focus on somewhat during the present chapter, has been said to be conditioned by the Commission's -
and the UNCED's - own difficult internal processes of reaching consensus (Spangenberg et al. 2000).

Both of these achievements of the WCED's report - naming the concept and stressing the process -
contributed19 to composing the necessary conditions for SD to catch the attention of policy- and
decision-makers and to advance towards becoming one of the main policy Leitbilder of the last decade.
It is the openness - some would say the inherent procedural dimension - of the concept of SD that
brought to the surface questions "(…) such as: what exactly is sustainable development? What kind of
development is meant? What must be developed in a sustainable way? Is it the world, the earth, mankind,
a country, a society, or sectors such as agriculture or transport? How long must development be
sustainable? How can today's decision-makers take the needs of future generations into account?" (de
Graaf et al. 1996 : 206).
As we will see in the following chapter, the perception of the 'evaluandum', i.e. in our case of
‘sustainable development’, can have a major influence on the configuration of an evaluation process as
well as of the evaluation instrument. Just as you will encounter problems when trying to measure the
speed of your car with a thermometer, indicators need to be built for their object of measurement, which
could mean: for an understanding of SD that is as precise as possible. As a very precise vision of SD is
however rarely given (some would say: impossible), the relationship between SD and its assessment is
influenced only indirectly: as SD largely refers to accounting for the interlinkages between different
dimensions, it is the formal representations that decision actors have of the scope, scale and interactions
between the SD-dimensions that are of major importance to the definition of SD.

On the level of formal representations of SD, two approaches exist in the literature and in policy:
systems approaches and process-oriented approaches. In the following sections, we explore both.

2.1.1 Systems approaches

Elements of systems' theory20 (or at least of its vocabulary) are at the basis of many representations of
SD. This is hardly surprising, as systems' theory may be the most powerful and influential scientific
paradigm (Von Bertalanffy 1968) at least since the mid-20th century: hardly any scientific discourse
fails to refer explicitly to systems, subsystems, positive and negative feedbacks, hierarchies...
In the following we will try to give a synthetic, non-exhaustive and partial overview of the application of
systems’ theory to indicators for SD.

In the context of SD, systems vocabulary is foremost used to describe the place of Man in his physical
and social environment, and to render the multiple interactions between those elements. Systems' theory

19
Other characteristics of SD can be identified as vital to the promotion and success of the concept. Notably, the reference
to global environmental policy issues can be seen as one of the major success factors of SD.
20
It falls of course outside the scope of this text to present a complete picture of systems' theory. An amazingly abundant
literature exists, from introductory textbooks to scientific discussions. For the reader interested in discussions of systems'
theory in relation to SD, we refer to Giampietro (1994), Bossel (1998), Musters et al. (1998), Köhn (1998).

is actually a rather convenient basis for a categorization and hierarchization of the elements composing
our world. But there is much more to it that is useful to SD: when using concepts of systems theory, it
becomes relatively straightforward to explore the relationships between the (sub)systems, i.e. to explore
the mechanisms of integration which are central to SD and its implementation. In parallel, systems’
theory has had a direct influence on the conceptualisation of the ‘evolution’ of the world (e.g. resilience,
stability…), which has translated, for instance, into many actors’ comprehension of the ‘development’
component of SD. SD is translated into ‘development that lasts’, just as ecosystemics (for instance) is
occupied with understanding the characteristics of ecosystems which survive.

However, no unique representation of the ‘System Sustainability’ exists. Different authors use different
hierarchies and classifications to (de)construct SD.
The most widespread interpretation orders the (sub)systems hierarchically, seeing in ‘Nature’ (i.e. the
physical, natural, mineral environment) the overarching system on which the subsequent systems depend.
This follows the popular argument21 that without Nature there would be no living beings, hence no
humans, hence no society, hence no economy, no culture,…: hierarchical interdependencies are
emphasized as vital for overall sustainability, as "(…) human society is a complex adaptive system
embedded in another complex adaptive system – the natural environment – on which it depends for
support" (Bossel 1999 : 2).

Following this argument, Köhn (1998 : 183) proposes a three-level pyramidal systems representation
(see Figure 4), with the natural system representing the basis on which society develops. The second-
level social system (Köhn according to Boulding (1966)) is itself composed of three sub-systems: the
policy system, the economic system and the cultural system. All five (sub)systems are interrelated with
each other directly and indirectly, meaning for instance that the policy system interacts with the natural
system directly, but also indirectly by first interacting with the intermediate system (i.e. the social
system). Interrelations can be described “as functions of the rate of exchange of material, energy and
information between systems” (Köhn 1998 : 176). Using the framework of Köhn and Boulding makes it
obvious that the three social subsystems are of critical importance to policy-making for SD.
On the other hand, if systems’ theory is applied formally to SD, a series of problems emerges, notably
with regard to the multiple and controversial ‘scales’ (spatial and temporal) that need to be coped with in
policy-making for SD.
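The hierarchy and its direct/indirect interaction paths can be sketched as a toy structure (our own illustrative rendering in Python, not a formalization found in Köhn (1998); the names and the path logic are assumptions):

```python
# Toy rendering of the three-level hierarchy: NATURE contains SOCIETY,
# which contains the policy, economic and cultural subsystems.
HIERARCHY = {"nature": {"society": {"policy": {}, "economy": {}, "culture": {}}}}

def interaction_paths(subsystem):
    """Each social subsystem reaches 'nature' both directly and
    indirectly, via the intermediate social system."""
    return [
        (subsystem, "nature"),             # direct interaction
        (subsystem, "society", "nature"),  # mediated by the social system
    ]

for s in HIERARCHY["nature"]["society"]:
    print(s, interaction_paths(s))
```

The sketch only encodes the nesting and the two interaction routes discussed above; it deliberately says nothing about the rates of exchange of material, energy and information that Köhn uses to qualify the interrelations.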

21
The argument can be refuted with an anthropocentric vision of the relations between man and nature, by implicitly
asking the provocative question whether Nature would exist without man’s ability to perceive her - i.e. whether the
environmental system is dependent on man’s faculty of consciously perceiving his surroundings. Eventually, once man,
as the only species capable of consciousness, is extinct, Earth (i.e. the natural system) might cease to exist as a perceived
entity.

Using systems’ theory as a mechanism to describe and qualify (not to speak of quantify) the interactions
between the different pillars and scales of SD implies assigning clear boundaries to the system, its
subsystems and their intersections. This however remains problematic in SD, as the definition of
temporal and spatial scales (e.g. local to global; inter-generational to intra-generational) as well as of the
scopes (e.g. ecology to environment) stays fuzzy and is prone to be influenced by personal and collective
value judgments (see Giampietro (1994) for a series of conceptual solutions to the scale problem).

[Figure: a pyramid with NATURE at the base, SOCIETY in the middle, and POLICY, ECONOMY and CULTURE at the top]

Figure 4 – A three-level systems' hierarchy to represent sustainable development (adapted from Köhn 1998)

The question remains, however, how to recognize when a system, or a collection of interacting
subsystems, can be considered sustainable. Most authors are not concerned with this question, as they use
elements of systems vocabulary only at the level of the characterization of linkages, or as a way to
establish a hierarchy between the systems. They do not link their interpretations of SD explicitly to
systems’ theory (Musters et al. 1998). Nevertheless, a series of authors derive characteristics from their
representation of the ‘sustainability system’. For instance, Köhn (1998 : 185) affirms that “according to
the model, environmental sustainability has to include social sustainability. If we accept this, economic
sustainability is part of social sustainability. Hence, there is no longer any point in considering economic
sustainability as it is a superfluous category since the economic system cannot act separately”. If this
statement is surely refreshing for most traditional economists, it must be remembered that for the still
prevalent economic utility theory, the environment is no more than a pool of resources and as such is
denied any other utility, hence not considered in its own right either. Unsurprisingly, it thus appears that
not only the threshold of what is considered a sustainable state or evolution varies between authors, but
that ideas diverge even on the expected contributions of the different subsystems.

Probably the most comprehensive, and formally correct, contribution linking SD to systems’ theory is
developed by H. Bossel (1998, 1999), for whom sustainable development is reached when the human
system (see figure 5 for Bossel’s representation of the system interdependencies) assures its viability (as a
subsystem) without impinging any further on the viability of the overarching systems (i.e. the support
systems and the environment and resource system) on which it depends.

Figure 5 – The Six ‘basic system orientors’ (left) as response to systems' interaction (right) (Bossel 1999)

Bossel uses the concept of viability, stemming from ecosystemics, to derive a series of criteria that any
system would have to satisfy in order to be considered viable in the long term. These criteria, or
basic system orientors, are selected according to the fundamental properties of systems’ interaction (see
figure 5). Besides these six basic orientors (existence, effectiveness, freedom, security, adaptability,
coexistence) determined by the system’s environment, three further orientors (reproduction,
psychological needs, responsibility) are reserved for the human system in order to account for the
consciousness of man: we are in the presence of sentient, self-reproducing and conscious beings (i.e.
mankind). He further claims that these basic orientors can be transformed into characteristics and criteria
to determine the viability of systems. The criteria are applicable indifferently to environmental, social,
economic or cultural systems, and the same criteria apply also to
characterize the interlinkages22 between the subsystems within the 'parent-system'.
The advantage of, and argument for, this representation of SD, especially with respect to the
configuration of indicators, lies in the relative simplicity and rationale of the construction: assessing the

22
By applying the same criteria to the viability of the internal relationships of the different systems, and to their
external relationships to the other systems, Bossel also constructs his viewpoint on the substitution between systems.

viability of the systems, i.e. their sustainability, entails assessing the states of the subsystems, their
contributions to the state of the ‘parent-system’, and the state of the 'parent-system' itself against the
general systemic viability criteria.
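Bossel’s criteria-based notion of viability lends itself to a compact formal sketch. The following fragment is purely illustrative: the orientor names are taken from the text above, but the numeric scores, the threshold and the all-or-nothing aggregation rule are our own simplifying assumptions, not Bossel’s formalism.

```python
# Illustrative sketch of a Bossel-style viability check (assumptions:
# numeric scores, a uniform threshold, and all-orientors-must-pass logic).

BASIC_ORIENTORS = ["existence", "effectiveness", "freedom",
                   "security", "adaptability", "coexistence"]
# Three further orientors are reserved for the human (conscious) system.
HUMAN_ORIENTORS = BASIC_ORIENTORS + ["reproduction",
                                     "psychological needs", "responsibility"]

def is_viable(scores, human=False, threshold=0.5):
    """A subsystem counts as viable only if every applicable orientor
    is satisfied above the (arbitrary) threshold."""
    orientors = HUMAN_ORIENTORS if human else BASIC_ORIENTORS
    return all(scores.get(o, 0.0) >= threshold for o in orientors)

def system_sustainable(subsystems):
    """The parent system is sustainable only if all subsystems are viable."""
    return all(is_viable(s["scores"], s.get("human", False))
               for s in subsystems.values())

subsystems = {
    "environment": {"scores": {o: 0.8 for o in BASIC_ORIENTORS}},
    "human": {"scores": {o: 0.6 for o in HUMAN_ORIENTORS}, "human": True},
}
print(system_sustainable(subsystems))  # both subsystems pass -> True
```

The point of the sketch is the structure of the argument rather than the numbers: sustainability of the parent system is reduced to a conjunction of per-subsystem viability checks against one shared list of criteria.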
However, by accepting this representation of SD, one has to endorse a fundamental hypothesis, namely
that human (social) interaction is, in the essence of its mechanisms, structured identically to biological
or chemical interactions23; that, for instance, economic competition follows the same basic mechanisms
as competition between species, or humans, or ideas. This is of course a very strong
hypothesis, both highly rationalist and positivistic.

The link between systems’ theory and indicators for SD has thus been acknowledged as a direct one by
the cited authors. The positive evolution, and thus the sustainability, of social systems (in the sense of
Köhn) relies heavily on processes of collective and individual learning, which themselves rely on the
improvement of information exchange (one very basic condition for learning to occur).
Boulding (quoted by Köhn 1998: 177) states that “these systems (i.e. the social systems) are linked
together dynamically through the process of human learning, which is the main dynamic factor in all
social systems”. Köhn adds that “one may conclude that all three subsystems (policy, economy, culture)
are components of a more comprehensive social system based on the replication and evolution of human
knowledge (information). Thus, social subsystems are based on that part of the information pool which
has been codified as institutions (rules, norms, rights)”.
Expressed more rudimentarily: the feedback (positive and negative) necessary for social systems
to evolve consists of information24. This conception of systems’ theory is at the basis of statements
by high-profile systems’ thinkers like Meadows and Randers (1992 : 209) for whom “from a systems
point of view a sustainable society is one that has in place information, social and institutional
mechanisms to keep in check the positive feedback loops that cause exponential population and capital
growth”.

In order to steer systems, information is thus needed, and more precisely information on their state and
rate of change, so that they can evolve over time, avoid stagnancy, and recognize and adapt to new situations. The
guiding potential of information is thus clearly emphasized by this application of systems’ theory to the
policy domain of SD. As indicators represent one particular form of information for decision-making,
they could gain considerably in potential use, and foremost could grow into an essential part of the
mechanisms for steering systems towards SD.
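The role that Meadows and Randers assign to balancing feedback can be made concrete with a toy numerical model. The growth rate and the carrying-capacity term below are arbitrary assumptions chosen for illustration, not parameters taken from the cited authors.

```python
# Toy model: a pure positive feedback loop grows exponentially, while an
# added negative (balancing) feedback keeps the same system in check.

def simulate(steps, growth=0.05, capacity=None, x0=1.0):
    """Iterate x <- x + growth * x; if a capacity is given, damp the
    feedback by the classic logistic factor (1 - x / capacity)."""
    x = x0
    for _ in range(steps):
        feedback = growth * x
        if capacity is not None:  # the balancing (negative) feedback loop
            feedback *= (1 - x / capacity)
        x += feedback
    return x

unchecked = simulate(200)               # positive feedback only
checked = simulate(200, capacity=10.0)  # balancing feedback added
print(unchecked > 1000, checked < 10.5)  # -> True True
```

In Meadows’ terms, a ‘sustainable society’ is one that has institutionalized the equivalent of the capacity term: information-based mechanisms that damp the exponential loops.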

23
As a matter of fact, Bossel differentiates between the natures of the systems, and uses hierarchies: the viability of social
systems, i.e. human viability, as compared to physical systems, is constrained by some additional criteria such as
psychological needs. From the simplest (i.e. mineral) to the human system, Bossel (1999 : 21) introduces the following
hierarchy of systems: static systems; metabolic systems; self-supporting systems; selective systems; protective systems;
self-organizing systems; non-isolated systems; self-reproducing systems; sentient systems; conscious systems.
24
Information is understood here in its wide sense, i.e. including also norms, conventions, traditions, rules…

A series of authors (e.g. Boyd et al. 1985, Ayres 1988, Dawkins 1989) develop even more drastic roles for
information by drawing an analogy between organisms using their gene pools (i.e. genetic information) to
replicate, evolve and survive, and society using information, basically stored in institutions and
organizational arrangements, in order to survive. Information becomes the fuel with which society
enhances collective and individual learning processes, which are vital to the survival of social systems,
in the same way that the exchange of genes is vital to the survival of the natural system.

However, the general attractiveness of the systemic comprehension of SD may lie on a more superficial
level, which is nevertheless strongly relevant for indicators: the apparent simplicity of the characterization of
the relationships between the (sub-)systems. Instead of having to face the difficulty of recognizing when,
where and how sustainability is reached, systems' theory calls for assessing whether or not the (sub)systems
are viable and their linkages resilient, i.e. whether they mutually reinforce each other so as to
strengthen the overarching system.
Such a simplified formalization of SD shows that the trade-offs between the different sub-systems can
be designed neither at the expense of the viability of the different (sub)systems, nor at the expense of the
viability of the overarching system. Furthermore, it introduces the aforementioned idea (Giampietro 1994)
that hierarchies are to be organized between the subsystems and the overarching system.

However mentally intuitive these systemic formalizations of SD appear, their implementation into ISD
has not yet been achieved in a very convincing way. Maybe the advantages of referring to systems’
theory and conceptualisations are to be found on a different level: numerous authors use codifications of
systems’ theory to popularize and simplify their representations of the interactions between elements of
SD.
The economist Herman Daly (1977) is one among many examples of what can be drawn from
systems’ theory after such ‘popularization’. Daly’s steady-state logic relies on the now classical
argument that, as the natural system is finite, it is not possible for the embedded economic system to
grow indefinitely and encroach on the capacities of the natural system.
One of the most influential representations of SD, in relation to indicators, has been developed by
Munasinghe (Munasinghe, Shearer 1995). Starting from a mere systemic vision, SD lies for
Munasinghe somewhere between the three poles (i.e. subsystems) of a triangle with an economic, a social
and an environmental dimension. Each pole defines a series of relatively mono-dimensional policy issues
(e.g. poverty reduction at the level of the social dimension, economic growth for the economic
dimension…) and mono-disciplinary criteria (e.g. efficiency and cost-effectiveness for the economic
dimension, resilience for the environmental dimension…). In this understanding, SD thus means
implementing decisions that have been weighed in the light of the criteria relative to the three poles: in
short, decisions should simultaneously be economically efficient, socially equitable and environmentally
sound.
Interestingly, Munasinghe's representation thus moves away from the systemic ideas of hierarchy and
scale, which a traditional systemic representation would necessarily induce. Rather, he conceptually
approaches SD as a sheer problem of horizontal integration of economic, social and environmental
constraints and opportunities. As the three dimensions are of comparable importance, ‘horizontal’ trade-
offs25 and compromises rule SD (hence the evident link to governance and participation issues).
It is from this type of systemically-inspired representation of SD that the early multi-national ISD
developments (see for instance the early efforts of the UN Commission on Sustainable Development, or
of the OECD), which juxtaposed economic, social and environmental indicators, have been derived. In these
attempts, systemic interrelations were rendered simply by applying some loose formal cause-effect
simplifications (e.g. as they occur in PSR or DPSIR schemes) to categorize the different indicators.
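The ‘loose cause-effect categorization’ that these early schemes performed can be sketched in a few lines. The indicator names below are hypothetical examples chosen for illustration; actual PSR/DPSIR applications bin real indicator sets in exactly this flat, non-formal way.

```python
# Minimal sketch of DPSIR-style binning: indicators are simply assigned
# to a category, with no formal model of the causal links between them.

DPSIR = ("Driving force", "Pressure", "State", "Impact", "Response")

# Hypothetical (indicator, category) pairs, for illustration only.
indicators = [
    ("GDP per capita", "Driving force"),
    ("CO2 emissions", "Pressure"),
    ("Ambient air quality", "State"),
    ("Respiratory illness rate", "Impact"),
    ("Environmental tax revenue", "Response"),
]

def categorize(pairs):
    """Group indicator names under their DPSIR category."""
    scheme = {category: [] for category in DPSIR}
    for name, category in pairs:
        scheme[category].append(name)
    return scheme

for category, names in categorize(indicators).items():
    print(f"{category}: {', '.join(names)}")
```

The sketch makes the weakness tangible: the scheme records which box an indicator falls into, but nothing about the strength or direction of the systemic interrelations.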

Therein, however, lies one of the major weaknesses of these systemically-inspired representations when
used as a blueprint for indicators. Such representations are concerned only with one type of integration
(horizontal integration between the different dimensions), and do not allow accounting for temporal
(future generations) or spatial (local-global) integration, or any other type of integration. One may
of course construct triangular SDs for each geographical area, just as one may use different triangular
representations for different timescales. But what this approach neglects is to characterize the
relationships between these different triangles or dimensions. How would they connect? It is obvious (at
least in terms of systemic representations) that economic, social and environmental issues do not use
the same scales (temporal and spatial), and that they cannot be understood without any hierarchy. Or as
Becker (et al. 1997 : 20) put it “distinguishing between different sustainability-related processes,
however, does not imply that different types of sustainability can be assessed separately. Sustainability,
in other words, cannot be approached by simply adding the requirements and imperatives of different
types of processes together, but is closely linked to, and emerges out of, the interrelations and
interactions between them”.

A number of variations of these dimensional approaches exist, and specifically the number of dimensions
to be considered for SD has no clear-cut answer. For Bossel (1999 : 2), for instance, "sustainable
development of human society has environmental, material, ecological, social, economic, legal, cultural,
political and psychological dimensions (…)". One of the initially very influential indicator initiatives, the
initial UN-CSD indicator scheme, uses four dimensions (adding an institutional dimension to
Munasinghe) to categorize its indicators. Even if, in practice, these institutional indicators were of
tremendously low quality, they represented a fundamental improvement on Munasinghe, as the indicators for
the institutional dimension were meant to evaluate the overarching conditions that strengthen or weaken the
SD process, e.g. transparency, good governance, participation, free press, corruption. The added
overarching fourth dimension implicitly introduced a hierarchical relationship between the
dimensions, hence addressing vertical integration and capacity building. To a certain extent, a linkage
was thereby established to more procedural interpretations of SD (as presented in the following section).

25
Of course, these trade-offs can be configured along different understandings of the mechanisms of the SD paradigm. For
instance, adopting a ‘weak’ or a ‘strong’ (Neumayer 2003) representation of sustainability has very different
consequences for how the trade-offs are to be found.

Systemic representations, or their avatars in terms of indicators, are only one possible key towards
understanding the relationship between SD and indicators. Systems’ theory might be useful to a certain
degree to structure a description of the general linkages of the situation under assessment, but it does not
provide an understanding of the processes and exchanges between the societal actors that are
conceptualising, operationalizing and implementing SD. Neither does it allow understanding the dynamics
(or statics) between different actors defending different representations of SD. It is thus not sufficient
for an assessment of SD to focus solely on the state or dynamics of the system(s): the
processes of the systems’ transformations and the influences stemming from the different actors have to
be accounted for in some way. When we are confronted with conscious systems, it no longer suffices
to consider the potentially fuzzy behaviour of systems. It becomes necessary to take into account the
fact that conscious beings can imagine processes that allow them to account for the adaptiveness of
society: procedures of domination, influence, power and negotiation may have been devised by
the actors in order to influence and steer the processes of development and adaptation. In the following
section, we will briefly explore the main procedural arguments that can be encountered in the debate on
SD and ISD.

2.1.2. Processes

Whether we consider SD as a societal project of collaborative learning and apprenticeship, or simply as a
qualitative and normative characterization of development, processes are always at the centre of the
concept. By definition, the explicit reference to development makes sustainable development procedural
and evolutionary. Statements such as “Sustainability / non-sustainability is a qualification of states and
processes within a continuum of possible states and processes” (Becker et al. 1997 : 21) are omnipresent
in the literature and define, formally and structurally, the procedural nature of the SD paradigm. Stressing the
process-nature of a development path is of course somewhat tautological.
From a sheer linguistic interpretation of the procedural character of SD to the fundamentally structuring
implications of a reflexive representation of SD (see chapter 4), at least two levels and depths of
‘procedurality’ can be identified.

Direct procedurality

It has already been emphasised that any development, be it economic, social, individual or cultural,
manifests itself as a dynamic, and is thus necessarily process-oriented. Every development

can be deconstructed in terms of steps, hierarchies, priorities, (dis)engagements, barriers, feedback…
Furthermore, when the overall objective is to exert influence on, and orient, development paths, processes
of control and orientation, of influence and assessment, are unavoidable. These, again, can be procedural
and open to bargaining.
In direct relation to the above-mentioned systemic representations of SD, and the necessary resilience of
viable systems, Spangenberg (2000 : 21) states that “(…) a dynamic system (…) includes not only the
ability to resist externally enforced changes, but even more the ability to recover from and adapt to
pressures. The resulting state of the system is not a basic balance, but a dynamic process of permanent
change, i.e. development process”. It is the reference to unavoidable permanent change, and the
collective will to redirect this permanent change into desirable development paths - which are assumed to
unfold into better and more sustainable futures - that is at the very heart of this first level of
procedurality.
Cynical observers of the polity of SD argue that what is presented as permanent change is, especially in
the light of national and international implementations of SD, interpretable rather as a form of permanent,
negotiated and ‘scientised’ standstill, and that the procedural determination of desirable sustainable
futures is merely a new field of activity for the departments of political marketing. The repeated calls
to prioritise above all the generation of communication and information might be interpreted as
reinforcing such a view; e.g. ISD might be developed to divert attention from the more
operational policy challenges.
We label this first level of process-orientation ‘direct procedurality’. We posit that it is not specifically
related to the nature of SD, but is amplified by the development character of SD. Hence ‘direct
procedurality’ does not generate any characteristics specific to SD: the tools and instruments for its
implementation are classical and common to many policy processes. As with the more general striving
towards evidence-based policy-making, ex ante evaluations, modelling and information gathering take
a leading role in the definition of policy, as well as in its implementation. The aspect of direct
procedurality thus allows us to link the field of indicators for sustainable development to classical
indicator developments - and by extension to other decision-aiding tools and processes - in the
economic, social or environmental fields of activity.

Indirect procedurality

On this second level (Schneidewind 1997 : 183), SD is understood as a regulative idea, which defines
merely an orientation scheme that orients and influences societal processes of development. SD can be
discussed in the light of the mechanisms of societal institutions, which operationalize SD as the interplay
of societal actors. In other words, the concept of SD is able to define only some loose procedural patterns
which should translate into ‘institutional’ action, but lacks the depth to define the actions (or
development paths) themselves. These patterns for action have to be seen as nothing other than procedural

guidelines or mechanisms enabling the definition of the interplay of actors. “The major focus becomes a search
for processes that enable the systematic exploration of development issues and policy in a
transdisciplinary way” (Meppem, Gill 1998 : 123).

It is to a large extent this indirect procedural nature of SD that makes the concept attractive to many, and
difficult to implement for everybody. The implications for assessment tools such as indicators are
manifold and occur on different levels of the construct.
On the one hand, indicators themselves are conceived as the outcome of societal processes of collective
evaluation, rather than as solely scientific exercises in efficiency or effectiveness assessment. Our very
field of study boils down to what Dratwa (2003 : 10) calls ‘political epistemology’, i.e. the study of the
“processes through which a society produces and selects the knowledge or the episteme on which the
decisions taken are grounded”26. This also implies that the definition of the rules and mechanisms of the
process of collective assessment gains in importance.
On the other hand, indirect procedurality raises a fundamental risk of ‘self-reference’: ISD processes are
basically to respond to the same process criteria as their evaluandum: both indicators and the implementation
of SD are recommended to be developed through participatory, multi-stakeholder, multi-disciplinary
(in one word: transdisciplinary) processes. The object of the indicator-based assessment, for instance
the implemented SD policies, is assumed to be a societal development process, which is measured against
a series of rules and criteria (such as participation and multi-disciplinarity) stemming logically from the
same paradigm. The result is that both the evaluandum and the evaluation process are to respond to an
identical bouquet of characteristics. Apart from such double processes being extremely
complicated, and thus time-consuming and costly, to implement, stretching the application of SD
characteristics to the limit will necessarily be counterproductive in the long term: it implies replacing
substantive, result-oriented assessment criteria with terms of reference that are defined solely
procedurally. The two aspects of an assessment should rather be integrated.

SD can thus somehow be explained by applying a double, intertwined logic. On the one hand, the content
of SD and the interactions which arise out of the multi-dimensional concept can be based on aspects of
systems’ theory and rendered explicit in its vocabulary: at a first level, systems’ theory is
applicable to the construction and to the (visual and mental) representation of the network of dimensions,
as well as, to a certain extent, to the inherent behaviour of the dimensions’ interrelationships.
Simultaneously, a few authors extend the applicability of systems’ theory to make it one of the main
constituents and characteristics of SD.
On the other hand, SD is widely acknowledged as a process-oriented policy-making scheme, which
allows, to a certain extent, the reorganization and legitimisation of a new series of interactions of actors within an

26
Own translation.

institutional decision-making process. For instance, procedural aspects can be kept at the sole level of
managing irreducible uncertainty with regard to development paths. More profoundly, they can be
expanded towards a scheme of institutional principles that rule the interplay of interests.

Before going any further in the construction of our argument in favour of procedural systems (see chapter
3), we will explore the underlying principles into which SD can be translated. In the absence of any
clearly defined development paths, and without the help of uncontested thresholds or limits,
SD can only be defined - and subsequently evaluated - as development which successfully takes into
account a series of principles.

2.2. Translating the ‘projet de société’ into principles

Using ‘principles of SD’ as a substitute for - or a refinement of - definitions of SD was already the reflex
followed in the Rio Declaration of 1992, which formulated 27 principles for SD. Using
principles to limit the options within decision-making processes is widespread in numerous humanistic
policy domains (see, for instance, Jonas’ principle of responsibility and Rawls’ maximin principle). More
disciplinary approaches have developed their own principles, such as the capital substitution principle (or weak
sustainability) that rules environmental economists’ theories.
These series of principles are meant to provide a general integrated framework of characteristics that SD
should follow. At the international level, the collection of principles includes a principle of
subsidiarity, hence indirectly calling on the different levels of public authorities to develop their own
series of adapted principles, in the hope that the process of selecting, defining, interpreting and applying
these principles will be an essential, preliminary step towards an effective, efficient and representative
operationalization of SD. In that sense, it is of course not the individual elements or principles that will
add up to a coherent SD project. The principles are not meant to be followed on a one-by-one basis,
but rather as a conglomerate that should be integrated into practice as a whole, an
integration which will eventually be translated into coherent SD policies.
To picture our conclusion from the last section, two of the very basic postulates27 of systems’
theory appear between the lines of this interpretation: “the whole is more than the sum of its parts”, and “all
parts are equally important to the constitution of the whole”. Or, as Zaccaï (2002 : 31) puts it: “By
integrating a certain number of characteristics, principles, dimensions, SD multiplies the levels of
justification to satisfy the concept. It is not enough to meet this or that principle (e.g. of

27
For a very interesting sociological discussion of these postulates with a focus on environmental policy, see Luhmann
1988.

environmental protection, or with regard to an economic criterion), but it is necessary to meet the whole
construction of principles, or to produce a justification with regard to them” 28.

The fundamental question of how, when and with whom the selection and definition of the principles
will be realized appears to be of primary importance to the operationalization and implementation of
the projet de société29 (i.e. of SD). It thus appears that the discursive, collective processes of
defining or delimiting the principles of SD need special attention, as these processes emerge as the
potential constituents and levers of its translation into policy.
As such, SD breaks with more traditional development programmes, which seemingly are often
developed with “the belief (…) that the clearer the image of the destination, the more force the future
will exert on the present, pulling us into the desired future state” (Meppem, Gill 1998 : 128). With SD,
the destination is hardly sketched, or at best heavily controversial, and neither are the pathways to be
embarked on known or even securely identifiable.
This is however not unique to SD (see Zaccaï 2002 : 31-39 for a discussion). SD has been equated with
normative concepts - supposedly organizing public life in the western world - such as democracy, liberty
of speech, issues of equity… A common characteristic of these is that they can be read along
two levels of comprehension (Jacobs 1998 quoted in Zaccaï 2002 : 35). At a first level,
hardly anybody opposes the rightfulness of the notion itself. Few people
contest their adherence to SD (or to other normative concepts of public life, such as democracy or
human rights). A second level of comprehension faces the uncertainties and disagreements
which occur within these different normative notions: fundamental debate is necessary at every level of
society to perform the necessary interpretations and translations onto an operational level of public
policies. Dissent on the meaning of SD is thus virtually unavoidable, and even represents
a major constituent of the necessarily discursive implementation of the normative concept.

Fundamentally, this second level of comprehension implies a process of political and societal negotiation.
In this sense, it needs to take into account the institutional settings
(Spangenberg 2000) in order to be translated into decision-making mechanisms. For instance,
principle 10 of the Rio Declaration, which reads ‘environmental issues are best handled with the
participation of all concerned citizens, at the relevant level’, calls implicitly for the elaboration of
procedural rules for decision-making that integrate relevant groups of societal actors in order to:
configure a more representative decision; rely on existing endogenous capacities and develop
new ones; foster recognition of the decision; and improve the efficiency of, or at least the adherence to, its
implementation.

28
Own translation.
29
For a clearer vision of the implications of interpreting SD as a ‘projet de société’, see Crabbé (1997).

While we cannot develop here an exhaustive overview and analysis of all existing30 series of principles
attached to SD, the table below nevertheless gives a short insight into some examples.

Author / Institution — Principles of SD

WCED, 1987 (Brundtland report – Tokyo declaration): Revive Growth; Change the Quality of Growth; Conserve and Enhance the Resource Base; Ensure a Sustainable Level of Population; Reorient Technology and Manage Risks; Integrate Environment and Economics in Decision-Making; Reform International Economic Relations; Strengthen International Co-operation.

IUCN, UNEP, WWF, 1991: Respect and care for the community of life; Improve the quality of human life; Conserve the Earth’s vitality and diversity; Minimize the depletion of non-renewable resources; Keep within the Earth’s carrying capacity; Change personal attitudes and practices; Enable communities to care for their own environments; Provide a national framework for integrating development and conservation; Create a global alliance.

Federal Planning Bureau – Belgium, 1999: Planetary conscience; Long-term vision; Integration of components; Recognition of scientific uncertainties; Participatory and responsible approach.

Ramonet I., 2002: Precautionary principle; Inter- and intra-generational equity principle; Principle of participation in decision-making mechanisms.

Table 2 : Selection of SD-principles

It is not the object here to develop in detail the articulation of the numerous existing series of principles
that have been developed with regard to SD, each of which translates a specific interpretation
of SD. This fundamental discussion has been conducted by others (see, for instance, Zaccaï 2002) and is
far from closed. Neither is it the object of this section to embrace one specific series of principles
and detail the articulation of its principles as a specific interpretation of SD. However, some recurrent
features of these lists of principles can be stressed. Even if neither the initial political documents, such as
the Rio Declaration and the Brundtland Report, nor the later conceptual dissertations fully rely on the
same sets of principles, some of the principles have imposed themselves as unavoidable, and appear to be
more representative of what SD processes try to achieve, regardless of the institutional level
at stake. These agreed-upon principles could be acknowledged as a societal compromise on a series of
First Order Principles of SD :
- Integrate environmental, social and economic criteria into decision-making at every level
- Enhance equity, both inter-generational and intra-generational
- Base international negotiations on a principle of common but differentiated responsibility between the developed and the developing world

30
A collection of series of principles has been initiated by the International Institute for Sustainable Development (IISD)
and can be accessed at : http://www.iisd.org/sd/principle.asp (visited on 12 April 2007).

- Apply the precautionary principle, and recognize the existence of uncertainties
- Improve participation in decision-making and enhance the quality of and the access to information
- Favour multi-actor, multi-stakeholder and global partnerships

In this perspective, SD is definable and implementable as a series of principles. The negotiation processes, as well as the interpretation processes, which will translate and operationalize the principles into policy choices, call for adapted institutional settings in order to be successful (i.e. efficient, effective…) in the medium and long term. As a consequence, the characteristics of the institutional settings will ultimately define the quality of the developed principles, their interpretation, their application…
In this understanding, it is the existence of efficient and adaptable institutional settings which will be the basic and preliminary condition for operationalizing SD.

Conclusion to the chapter

The present chapter has synthetically shown the very ambiguities and difficulties which attach to defining and conceptualizing SD, and has developed more specifically the consequences of these ambiguities. SD can be imagined in different ways. The three discussed conceptualisations, i.e. SD as a system, SD as a process and SD as a series of principles, are interlinked. All three provide a specific level of analysis with which to attain further clarity for SD in specific policy situations.

In order to configure, discuss and translate some of the principles to the level of an SD-policy, a procedural setting unavoidably needs to be agreed upon. Process-oriented mechanisms of SD can provide the norms, conventions and rules within which to situate the debate. Because principles cannot be defined in absolute terms, and because their translation into a policy situation cannot be standardized, they need to be procedurally determined. Systemic representations of SD, for instance in terms of sub-systems, dimensions and viability criteria, can simplify such a procedural determination of principles by setting a common vocabulary and a common criteria-framework.
In the following chapter, we will explore these basic notions further and apply them more specifically to ISD. Applied to evaluations and assessments, the three notions gain clarity. The translation of SD into systemic terms can help to define an assessment's dimensions and basic evaluative criteria. The procedural reading of SD gives a strong argument in favour of the importance of evaluation cycles per se in SD-policies. And finally, SD-principles can help structure the assessment processes and, more precisely, the configuration of the processes themselves, as the principles can be translated into guiding criteria and characteristics for SD-policies. SD is thus acknowledged here as a systemic representation influenced by a collection of principles, which needs procedural mechanics in order to be translated into policy.

In the following chapter, we will discuss how this gives rise to the necessity of strongly enhancing evaluations and assessments, and how the procedural settings of such an evaluative setting are actually translatable in terms of institutions.

Chapter 3
Sustainable development and assessment

“Problems created by man can be solved by man”
JF Kennedy quoted in Gouv.lux. (1998)

“The problems we have created in the world today will not be solved by the level of thinking that created
them.”
Einstein quoted in Thompson (1995)

“Values are what we care about… values should be the driving force of our decision-making. They
should be the basis for the time and effort spent thinking about decisions.”
Keeney quoted by Beinat (1997 : 21)

We have already introduced the fact that SD demonstrates a strong potential as a generator of information, notably because SD emphasises the demand for “reliable knowledge” (Shi 2004 : 23) as an attempt to cope with the complexity of ‘socio-environmental’ (Musters et al. 1998) interdependencies, or what Jacobs termed “socio-ecological systems” (1996 : 14). This information demand is specifically intense for information originating from objectified, recurrent and standardized evaluation, assessment or monitoring processes that allow for the comprehension of the interactions of the social system (human and institutional) with its environment on a variety of temporal and spatial scales. In other words: in the realms of SD, Science is to develop and communicate knowledge of nature-society interdependencies.
However, many problems related to the socio-environmental system remain unknown, which renders it difficult to develop the necessary scientific knowledge for their management. Such socio-environmental problems persist not only because of uncertainties, incertitude or fundamental ignorance, or because of the existence of ‘irreducible complexities’, all of which impose limits on deterministic frameworks of analysis. In many decision-making situations related to SD, the informational bottleneck also occurs further downstream, at the level of passing (existing, relatively robust and available) information to its potential political and administrative users.
In the first place, this points to the fact that the generation of information is of course not a goal in itself, but merely represents an intermediary state between ‘data’ and ‘knowledge’, where, simplistically: data must be comprehensible to users in order to become information, and information must be comprehended by users in order to develop into knowledge (see for instance, Hansson 2002; Wildavsky 1979). Simultaneously, we will develop hereafter that, because policy makers acquire vast arrays of knowledge, i.e. information which is understood by the ‘consumer’, across a very wide field of policy domains, such knowledge can only rarely be mobilized explicitly. Policy makers are only rarely able to cite or point to the source of their knowledge, i.e. a specific piece of information, on specific issues (Hezri 2006). Cognitively, the link between the information (which can be materialized in reports, interviews, notes…) and knowledge is often lost.

In parallel, we are acquainted with decision situations where information is incomplete, uncertain and conflicting. What appears to be singularly innovative with SD is that the recognition of multiple interdependencies within the socio-environmental system induces an interdependency of incomplete, uncertain and conflicting bits of information - a situation which renders decision-making exponentially confusing and creates a strong demand for collaborative processes of decision-aiding and evaluation.
Giampietro (1999 : 5), in the context of explaining problems of scale and complexity to a series of PhD
students, used the following words when interpreting the result of recognizing these interdependencies
for decision-aiding: “Human societies and ecosystems are generated by processes operating on several
hierarchical levels over a cascade of different scales. Therefore, they are perfect examples of dissipative
hierarchical systems that require a lot of non-equivalent descriptions to be used in parallel in order to
analyze their relevant features in relation to sustainability.” Giampietro stresses the fact that SD encloses infinite and non-comparable human and ecological sub-systems, which means that understanding them requires a large number of differing, but articulated, analyses that deliberately use a number of different scales (temporal and geographical). An attempt to understand the interactions between the environmental and the social systems, a sine qua non condition of managing SD, thus necessarily generates a cascade of analyses, which have to be sufficiently interdependent if the aim is to render a coherent, articulated, multi-dimensional picture of the situation.

The demand to produce interrelated assessments might not appear to be very specific to SD, as other more traditional fields and domains of the social sciences call for the same. However, in the realms of SD the articulation between different domains of study often appears particularly difficult because of their original independence; a good example is the often-invoked necessity to explore the interrelations between social and environmental issues (i.e. so-called socio-environmental inequities), which remains widely unsatisfied to this day because of serious conceptual and technical difficulties in linking both policy-domains in a consistent manner.
A further problem develops from the general impossibility of rendering, for decision-makers, analyses of situations that exceed three dimensions - or, in other words, from the obvious presence of addressees who struggle to comprehend representations and analyses with multiple scales and dimensions. This limitation is even more fundamental and critical. Not only is it impossible to render usable ‘pictures’ which are sufficiently multi-dimensional to represent correctly the interactions of the systems; worse, it is not even possible to apprehend the totality of information necessary for their complete understanding. A double simplification thus has to occur: at the level of understanding and analysing the systems at stake, as well as at the level of using and communicating such analyses. While the former is a typical science-related issue, the latter explains the popularity of indicator schemes to represent analyses.
Simplifications of information, which permit such confusion to be reduced, are thus needed, but they tend inevitably to mask the diversity and controversy within existing information. Conversely, beyond communicating aspects related to the quality of information, it is more and more acknowledged as vital to unravel for decision-makers the controversies as well as the context of information (Denisov et al. 2001; Giampietro 1999). The discussion of these antagonistic tendencies - between the need for simplification and the necessity for ‘complexification’ of information - is at the core of the first part of this chapter: “(…) meaning that the larger the number of non-equivalent perspectives (...) that we can use to represent and model the behaviour of the natural system, the richer will be our understanding of the reality. On the negative side, a larger number of non-equivalent representations will make more difficult to handle the resulting information space. (...) Unavoidable dilemma between ‘relevance’ and ‘compression’.” (Giampietro 1999 : 14).

The integration of information into decision-aiding processes raises a series of other questions, which need to be addressed. The growing voices asking to promote, as part of the solution, the regulatory standardisation of SIAs31, of enlarged SEAs32, of sIAs33, etc., at the European as well as at national or

31 Sustainable Impact Assessments.
32 Strategic Environmental Assessments.
33 Social Impact Assessments.

regional levels, are not only stemming from those who have an obvious interest (admittedly often of a pecuniary or strategic nature, e.g. lurking consultancy contracts or other hidden agendas) in pushing towards the development and implementation of such instruments.
These types of evaluation instruments - and the domination in discourse and argumentation which they imply during their development and standardization (if ever) - have their roots in a generalized faith in “(…) a problem-solving discourse which emphasizes the role of the expert rather than the citizen or producer/consumer in social problem solving, and which stresses social relationships of hierarchy rather than equality or competition” (Dryzek 1996 : 65). Such a vision of public management, centred on technocratic problem-solving, is what Dryzek terms ‘administrative rationalism’ (see Dryzek 1996 : 65 - 83), which he defines as a rationalism in which administrations and other collective organisations (e.g. large firms, NGOs, scientific networks,…) are assumed to contribute successfully towards reaching optimized policies by applying hierarchical and procedural rules of decision-making. Dryzek states that administrative rationalism - one form, admittedly widespread in Europe, of the public management of environmental issues - can be complemented by, or in some cases even opposed to, other management schemes, being either:
- open to non-appointed members of society: ‘democratic pragmatism’;
- articulated by market forces: ‘economic rationalism’;
- or explained by engineering perspectives: ‘ecological modernization’.

Departing from Dryzek, for whom SD is a relative34 of “environmentally benign growth” (1996 : 123), we posit that SD lies somewhere between, or on top of, these four forms of management, ready to embrace ‘institutional change’ (even if voices diverge on the depth of change needed).

The preceding chapter introduced some of the consequences of qualifying SD as a dynamic. Managing such dynamics calls for the development and implementation of tools or processes that are able to recognize the value of the inherent characteristics of any dynamic, such as adaptability, resilience, feedback enhancement, information generation…: dynamic situations and policy-making contexts seem to cry out for periodic assessments. SD is in this respect no exception to the rule (Munda 1995).
One general characteristic of qualifying dynamics is the relative uncertainty of the concerned system’s development path: are we on the right track? Does this action make us divert from the original track? In which direction? Should we interfere, or is the system strong enough to bounce back onto the original pathway? Is it the original pathway that copes best with our needs and wants?… The more uncertainty is inherent in or added to the system, the more such questions arise and become unanswerable with formal decision instruments.

34 We abuse Dryzek, whose discussion is far more subtle than simply positing SD as a direct synonym of growth. Rather, he acknowledges the impossibility of defining limits to growth (economic, demographic...) within the paradigm of SD, the absence and impossibility of defining such limits being for him a hint that sustainable growth might not be excluded as a solution.

The feasibility of evaluating dynamics with straightforward decision-aiding instruments is strongly dependent on the presence and depth of two characteristics: the commensurability and the comparability of the different alternative development pathways. Within these, one distinguishes between:
- ‘Strong commensurability’, where the systems under analysis can be apprehended with “a common measure of the different consequences of an action based on a cardinal scale of measurement” (Funtowicz et al. 1998 : 4): e.g. ‘I prefer 10 times over being at a terrace on a sunny afternoon drinking a coffee with my friends to returning to the office to work on that final report’.
- ‘Weak commensurability’, which allows ordinal-scale measurement.
- ‘Strong comparability’, where all options can be ranked with a “single comparative term” (Funtowicz et al. 1998 : 4), for instance via the price of a good or service.
- ‘Weak comparability’, “where irreducible value conflict is unavoidable, but is still compatible with rational choice employing practical judgement” (Funtowicz et al. 1998 : 4).

Decision situations as encountered in the struggle for SD, and as understood by many, mostly correspond to none of these cases: by presenting a fundamental incommensurability of values, i.e. an irreducibility of options’ valuation to one single unit, be it monetary or physical (see Martinez-Alier et al. 1998 for incommensurability of values as a foundation of Ecological Economics, and enjoy the bonus of a very interesting historical analysis of the valuation discussion), they induce a fundamental incomparability of alternatives. Much confusion exists in the use of this terminology, and probably many parallel understandings compete across disciplines over the implications of incommensurability and/or incomparability.
However, as understood in the general Ecological Economics paradigm35, incommensurability does not automatically imply incomparability of options if the latter can still be addressed by applying multiple types of values (see again Martinez-Alier et al. 1998). Decision-making alternatives might thus still be comparable while the situation at stake is fundamentally incommensurable. Decision-making situations which refer clearly to SD, however, will mostly present an incommensurability of values in a way that implies incomparability of alternatives. We will discuss this point later under a series of headings, but one acknowledged soft solution out of this considerable difficulty for evaluation (and valuation) is to apply non-formal participative methods permitting the collaborative development of consensuses on the criteria, stakes and values. These soft solutions are not expected to render direct advantage or miraculous relief in all situations, but are meant to favour “the learning process for all involved and the result of this interaction process will more often than not improve the decisions taken in terms of results and acceptance or legitimacy” (Söderbaum 1999 : 297).

35 We refer here for the second time to Ecological Economics as representing the basis of our foundational understanding of environment-society interactions. We thus need to clarify the objectives of this epistemic community, best described as “action-oriented (research) to link theory and practice by facilitating the influence of theoretical insights in decision-making processes” (Shi 2004 : 27), while still recognizing that “ecological and economic rationality are not sufficient to lead to correct decisions, thus environmental decisions must be taken by using a democratic scientific-political decision-process” (Munda 1997 : 228, quoted by Shi 2004).

In other words, applying an understanding of decision-making which relies fundamentally on elements of procedural rationality seems to imply investing in soft processes that understand decision-making as long-term conflict resolution and refute the solely formal and quantitative use of decision-making tools (Funtowicz et al. 1997; Faucheux and O’Connor 1998).
Translating into our context a hierarchical typology by Norton and Minteer (2003 : 374), developed to “(…) think about environmental values and theories for understanding these values (…), leading to 4 different types of theories of environmental evaluation”, we alternatively structure the linkages between sustainable development and evaluation as follows:
- Questions in relation to the nature of sustainable development are addressed with ontological theories of SD.
- Questions in relation to the measurement of SD can be addressed by developing a theory of measurement.
- Questions in relation to the influence of SD on policy can be seen as related to epistemological theories of SD.
- Questions in relation to developing satisfactory policies, given the fundamental multiplicity of meanings of SD, are to be understood as being in the realm of ‘theories of the process of policy formation’.

This hierarchical layout emphasises that, even if our main objective here is to discuss the second type of questions, relative to the measurement and assessment of SD, we inevitably have to elaborate on the manifold relationships between policy-making and evaluation processes as addressed by the other three types of questions. Inextricably, processes of policy formation and evaluation appear to be intertwined.

It is our intention in this chapter to explore the needs and implications of ‘institutional change’, notably as generated or sustained by the information generated in assessments. As construed by Hezri (2006), institutional change and policy processes are intrinsically and mutually linked. In the following, we will shed light on the nature of the aforementioned manifold demands for information, and thus for assessments and evaluation. Indeed, we understand these demands not as signs of a creeping and potentially dangerous bureaucratisation and technocratisation, but want to think of them as attempts to provide an institutional answer to repetitive calls for more open and transparent policy-making. If, in this respect, the growing demand for information can be explained, it is another question to analyse whether the supply of information (i.e. in our case, the development of indicators) has evolved sufficiently to avoid unnecessary bureaucratisation and proceduralism. However, it appears that in modern democracies open and transparent policy-making implies an increase in the degree of proceduralism, basically because “policy is a process as well as a product” (Wildavsky, cited by Hezri 2006 : 93). The question of the proportionality between effort (i.e. processes to generate information) and result (i.e. influence of information on the decision), or between the product and the process, remains however widely unanswered and difficult to generalize.

The first part of this chapter will investigate why evaluative processes become the more unavoidable and necessary, the less static the characteristics of the situation to be accounted for: managing dynamics implies the development of assessments. If, however, evaluations are growing ever more unavoidable because our worlds interconnect and decision situations grow more complex, it is necessary to explore whether and how decision-making processes are able to cope with these complexities.
Before investigating assessments specifically, and later indicators as one form of assessment process, we will explore the more general case, i.e. whether and how public decision-making is connected to information-flows in general, and how information-flows and decision-processes are intertwined and non-separable while still remaining potentially antagonistic and counter-productive (notably when confronted with decision-situations filled with complexity). From this analysis, for which we draw largely on the very diverse literature on ‘knowledge use’, we explore the conditions that have been identified as qualifying ‘successful’ information, and attempt to translate these quality criteria onto Indicators for Sustainable Development (ISD). As will become obvious in the following, we do not concentrate on the more traditional quality criteria for information assessment, but quite directly translate quality into ‘usefulness for policy processes’, i.e. usability.
Further down the chapter, we will see that some of the characteristics and principles directly related to SD (participation, long term…) can be inherently counter-productive when it comes to producing valuable assessments and information for decision-making. Finally, we will close this chapter by concluding that, even if information (and indicators and evaluations) is inherently difficult to construct and use properly in the context of SD, there are methodological and/or procedural characteristics that are beginning to be recognized as fundamental to enhancing correct and sustained usage of ISD.

3.1 Decision-making and Information: Attempts to assess Information

“In a world where attention is among the scarcest of major resources, information can be a costly luxury, for it may divert our attention from what is important to what is not. We cannot afford to process a piece of information simply because it is there.”
Herbert Simon, quoted by Leca J. (1993) in Perret B. (2002 : 2)

Evaluations are “(…) assessments, as systematic and impartial as possible, of an activity, project,
programme, strategy, policy, topic, theme, sector, operational area, institutional performance, etc. (…)
An evaluation should provide evidence-based information that is credible, reliable and useful, enabling
the timely incorporation of findings, recommendations and lessons into the decision-making processes
(…)” (UNEG, 2005 : 4). If we take a larger perspective on evaluation for decision-making, with the aim
to gain a (simplified) representation of what evaluations might be for real-life situations, evaluations
appear as processes, which generate validated information for situations where an actor is confronted

74
with a choice. We will discuss later in this text the ambiguities attached to the validation of information
in the context of SD. For the present, we pose that evaluation and assessment are largely synonyms, as
well as that indicators are information. ISD, as a second-generation evolution of indicators, in turn are a
specific form of assessment, or evaluation, basically representing a process whose outcome is validated
information. The main difference between indicators and evaluations situates at the level of the link to a
decision situation, a link which we will thus have to discuss too.
In generic terms, it is our aim here to further understand the part that information is playing in decision-
making.

A first ambiguity encountered when introducing what information represents in decision-making situations originates from the obvious characteristic that, if we simplistically acknowledge an evaluation as a process of constructing validated information (be it data, rankings, scenarios…), then information is both an input to and a product of this process. Very obviously, constructing one type of information demands the use of a series of other types of information.
Applied to decision-making situations, a similar logic can be revealed: decision-makers use available information to settle their decisions, produce information to communicate their decision (e.g. regulatory texts, recommendations…), and reuse the feedback of their decision as information to adapt, reject or reinforce it: decisions generate information, which is then an input to a next round of decision-making. Many prescriptive36 decision-making cycles have been built on top of such a linear understanding of the relationship between information and decision-making, with more or less complicated feedback loops and interrelationships, with more or less empirical validation, and with more or less acknowledgment of the imperfections sitting within their axioms, hypotheses and observations. Generally, their advocates distinguish a sequence of decision-making similar to the one hereafter:
- Problem identification and goal definition
- Identification of alternatives and assessment of the status quo
- Information gathering, analyses and description of likely effects
- Application of impact assessments in order to rank alternatives according to their effects with regard to the criteria of the decision-maker(s)
- Decision-making
- Decision implementation
- Evaluation of outcomes and outputs, which serves as information for the next cycle of problem identification and goal definition.

In the real world, however, such sequential cause-effect decision-making processes do not exist, and complexity and complication occur at each level: for instance, the first step (i.e. problem identification) is highly dependent on a precursory ‘problem qualification’. Necessarily, this qualification of problems is societally and institutionally determined (Luhmann 1988) and as such not linear at all.

36 Here we intentionally do not follow the distinction between prescriptive and descriptive decision-making analysis as developed by Chechile (Chechile and Carlisle 1991 : 1-13), for whom such decision-making cycles, even if ideal and hypothetical, can be seen as descriptive, i.e. as derived from the observation of real-life decision-making.

Fundamentally, however, the use and influence of information is painted in desperately optimistic and too bright colours when it comes to qualifying, interpreting, understanding and reacting to evolutions that are reckoned to have developed into problems. Sequential policy-making cycles might be useful simplifications in some contexts; when it comes to understanding the influence of information, however, they are quite useless. Even more so as we saw earlier (with the reported positivists’ worldviews falling into disgrace) that even the most rational scientific mindset armed with the most sophisticated instrumentation would inevitably be unable to escape the influence of what is often considered irrelevant noise, such as mood, education and culture. It is eventually not validated output information (e.g. of an evaluation process) that acts as the one and only, or even as the most influential, element in a decision process. Generally speaking, different decision-makers use different sources of information, are trained differently to translate information into alternatives and options, value the options and criteria differently, and assess the information outputs differently. This irreducibly individual treatment of information in decision-making reinforces two formal procedural solutions, which are meant to attenuate the influence of the individual decision-maker’s ‘information treatment’:
- engage in collaborative decision-making and evaluation processes in order to integrate a larger set of information-treatment customs;
- introduce decision-aiding processes, which structure rather than solve decision problems, with the aim of raising self-consciousness within the individual decision-maker about his inherent value structure and the influence of the latter on decision-making.

With respect to the first panel of collaborative solutions37, Denisov (2001) identifies 3 generic advantages of opening the construction of (environmental) information to the participation of the subsequent audience and users:
- Strengthen the capacities of participants to handle environmental information;
- Improve the quality and acceptance of information due to multilateral inputs and controls;
- Raise better awareness of the findings among participants as well as among a wider audience due to the direct involvement of the former.

In his words, “(…) the broad involvement of decision-makers and other stakeholders in the very process
of information development (…) turns into a continuous interaction among various and often different
information sources and viewpoints. The process itself serves as a communication tool, the impact
steadily builds as produced information reaches all process participants including most relevant
decision-makers, and then a wider audience beyond the ‘inner’ circle. The results are well
institutionalized and are likely to sustain, and the established social dialogue may even be capable of
changing social values, more than what would be possible with one-way broadcasting.” (Denisov 2001 :
11)

37 We will detail later to what extent collaborative experiences are part of the solution and how they simultaneously generate a series of problems that can lead to serious decisional lock-ins.

These points as raised by Denisov (and others in similar contexts) should however be treated prudently, as opening information-generation exercises to a wider panel of participants can induce counter-effects such as:
! A stronger advocacy by individual participants (or groups of participants) of their ‘personal’
positions, which renders the achievement of consensus difficult, e.g. agreement on which
information to construct, for whom, by whom... Worst cases show that the consensus that
could eventually be reached was not only ‘politically’ weak and fragile, but also that the
consensually constructed information was worthlessly diluted and failing to address critical
points. In a series of situations, non-collaborative, but multiple controversial positions can
serve better those who are to decide.
! A diminished transparency with regard to the origins of the information and its authors. Dealing with a group of
participants does not at all imply that the information thus obtained is value-free, objective or
legitimate. Rather, it is more difficult for the wider audience of non-participants to clearly
interpret the origins of the information, which, in the end, renders it considerably more difficult
to use this information as an argument. In collaborative exercises, agendas (hidden or not)
are not easily delivered in a transparent and comprehensive way.
! A serious risk that the search for quality of the participative construction gains too much
importance as compared to the information product itself. As with any other constructed
information, collaborative information should not be acknowledged as being totally
legitimated simply by the quality of the process. As with any type of information, such
information has to be constantly put into its specific context, without this context being
solely responsible for legitimating the information.
! A demystification of traditionally rather opaque scientific and institutional information
processes. Controversially, such demystification of the construction of ‘factual’ scientific
information should in principle be regarded as a positive evolution towards a more open and
reciprocal science-policy interaction. The danger in the context of SD, however, is that at one
point even the most open and procedural information-generation exercise will simply have
to ‘accept’ that scientific evidence per se is scarce in complex decision situations:
the theoretically necessary resources (in terms of time, budget, people, competences…) to
construct the information are necessarily finite, as is our knowledge of socio-environmental
interactions. The foundations of information are thus not necessarily very robust, or to be
more precise: even when they are robust from a scientific perspective, the reasons to believe them
robust can be difficult to communicate to non-scientific actors. In parallel, rendering
transparent such spaces of scientific controversy will reveal the whole panoply of bargaining
and manoeuvring of which scientists are capable in defending their stances and ideas,
careers and reputations.

The aim of the following three sections is to construct a generic framework of the presumably multiple
and complex uses and influences of information, and specifically of ISD, on decision-making. We will
proceed in two steps: first, by exploring what can be considered the mechanics of decision-aiding,
pointing to different understandings of decision- and policy-making rationalities; second, by exploring
the types of uses or influences of information on decisions or actors. From this point it is then tempting to
try to understand the characteristics which information has to meet in order to gain influence. Finally,
we will present and discuss such a framework of characteristics, which we think useful for understanding
and systematizing the mechanics of the influences of information.

3.1.1 Decision-aiding and rationalities

Our perspective on decision-making is that of an institutional, societal process in which a number of actors,
or groups of actors, intervene. In this sense, ‘decision-making’ can also be labelled less generically as
‘policy processes’, which revolve around “factors affecting policy formulation and implementation, as
well as the subsequent effects on policy” (Sabatier 1991 : 144). The reference to processes contains two
propositions, “first an activity taking place in time, second, an activity that changes and transforms an
entity in the course of handling it” (Burch and Wood 1990 in : Hezri 2006 : 93). In order to qualify as
policy processes they take place in ‘policy systems’, which can be characterised as “the overall
institutional pattern within which policies are made” (Dunn 1981 : 60), and which encompass three
elements: the public policies, the policy stakeholders (i.e. the actors) and the policy environments.

However, if such processes have their own characteristics and behaviours at the level of the group of
people intervening, the individual decision-maker, or person, retains a notable influence. E.g. within
groups, dominance by some individuals may well determine the group’s behaviour directly (e.g. by
imposing a point of view) or indirectly (e.g. by attracting sympathy or by being recognized as
resourceful). We thus have to discuss briefly the relationship between the individual and the group,
between individual behaviour and decision-making schemes and the orientation taken by the group in
response to the sum of individual behaviours. It is not our aim, however, to deeply explore the relationship
between individual decision-making and social choice. The literature38 is far too vast, diversified and
controversial to be apprehended within a single section. Rather, we deliberately choose to
stick to a very classical dichotomy and understanding of the interactions between the individual and the
group in decision-making processes.
The main questions we try to address are situated on another level. The aim of the following is
simply to open up the existing diversity of understandings of decision- and policy-making processes,
and eventually to explore the link of some of these elements to the discourse of SD.

Even if the general vocabulary in use seems to permit such a conclusion, decisions are not ‘made’; rather,
they are shaped through social processes. Within these processes, the personal interference of the single
‘decision-shaper’ is generally recognized as being rather limited: it is mainly complex societal dynamics

38
Even within the main disciplines handling the issue (e.g. economics, social psychology, sociology), giving a mere
synthesis of the different currents of thought would quickly become an encyclopaedic matter.

that shape the elements of decision-making cycles such as the alternatives identified, the pros and cons
assigned to each alternative, the power of conviction that the different options and alternatives can
mobilize… It would be very optimistic to believe, for instance, that alternatives develop their power of
conviction on the basis of the strength of their arguments alone, or on the robustness of the data that
underpin the alternative. Decision-shaping relies less on a set of rationalistic elements forming into
coherent rules than on a heap of inextricably linked interactions between context, data, issues, stakes...
Simultaneously, extrapolating from basic logical rules for individual behaviour in decision situations (as
developed in economic theory and underlined by experimental economics) to rules applicable to
situations of social choice (for instance, by way of aggregating individual behavioural schemes) has been
denounced repeatedly as difficult or even impossible: it has been acknowledged that on the level of social
choice “no consistent decision-making rule exists that is not either dictatorial or completely arbitrary”
(Merkhofer 1987 : 143). This indeterminism of the process has been extended occasionally to the product
of the process, i.e. to actions: "(...) a great deal of what we do to help people actually hurts them, and a
great deal of what we do to hurt people actually helps them. (...) It is the sheer complexity of the real
interrelations of the social system, as compared with the very much over simplified models of the system,
with which most decision makers operate in either politics or other fields of life, which creates this 'law
of political irony'” (Boulding 1978 : 123).

Formal tools for decision-aiding, such as Multi-Criteria Decision Analysis (MCDA), slowly developed
towards recognizing these individually subjective and societally empowering interferences in decision-
making processes. This has necessarily been the case when these tools were applied to decision situations
in the context of SD, where notions such as participation, communication, transparency of procedures,
stakeholders… gain significance. As a partial response, procedural issues were given increasing
importance (Munda 1995, 1997; Funtowicz 1997, 1998; Shi 2004; Faucheux et al. 1998) in the
conceptualisation and application of the instruments. Analysts developed tools for decision-making into
processes for decision aiding: “(…) while the former is meant to select an action in a well stated
decision-context with multiple criteria, the latter should help the decision maker either to group or to
select possible actions, or to clarify about the relevant criteria and their respective importance”
(Rauschmayer 1999).
Using other terms, Roy (1990 : 28) expressed similar views when explaining that the aim of MCDA is
not necessarily to identify the solution to a problem but “(…) to construct or create something which is
viewed as liable to help an actor taking part in a decision process either to shape, and/or to argue,
and/or to transform his preferences, or to make a decision in conformity with his goals”.

Decision-aiding instruments developed thus into processes which make it possible to structure the
decision context and decision space in a way “which allows for incommensurability of values, ignorance,
uncertainties, and which consider the fuzziness of the set of feasible alternatives” (Rauschmayer 1999).
These developments in decision aiding followed the rise of Simon’s (1976, 1982, 1983) early concept of

the ‘satisficing’39 decision. However, even if Simon’s developments imply “moving the scientists out
from their cocoon of self-proclaimed neutrality” (Giampietro 1999 : 13) and opened the initial search for
an optimal decision to a level where the definition of trade-offs takes priority over optimization, we do
not entirely follow Giampietro’s (1999 : 13) further argument: for him, Simon’s satisficing decision rule
implies a sufficient opening of the ‘rational agent’ corset by providing the space for “(i) a discussion of
what are the criteria to be included as the most relevant in the multicriteria performance space; (ii) a
continuous process of adjustment of the weights given to the various irreducible criteria used in the
integrated assessment; (iii) a ‘quality control’ on the process which is generating the integrated
assessment, which implies inputs such as political will of negotiate with other stakeholders, trust,
fairness, reciprocity”.
Simon still restricted himself to substituting ‘substantive rationality’ in decision-making with ‘procedural
rationality’. If the latter appears without any doubt as the better framework for considering decision-making
in the context of SD, notably as a convenient substitute for the neo-classical ‘rational agent’ logic, it will
however not be sufficient to provide a perfect blueprint for ISD: it is not solely procedural criteria that
will decide the performance of ISD frameworks.
Many systems for cataloguing rationalities exist, several of which could be (or have been) translated to SD. One of
the more comprehensive ones (see Van Gigch 1991), which still renders sufficient detail for analysis,
is applicable to SD (Brodhag 2000) and is directed towards evaluation questions, distinguishes
between ‘structural rationality’ (i.e. rationality which helps to understand the organisational structure
and framework of the decision situation, e.g. what is the decision space, who participates…), ‘evaluative
rationality’ (i.e. which helps to understand the determination of the objectives, evaluandum and criteria),
‘substantive rationality’ (i.e. which helps to understand the acknowledgment of the dynamics and characteristics of the object
under evaluation) and ‘procedural rationality’ (i.e. which allows one to understand the choice of processes
implemented).

Other researchers, taking full account of the integration of socio-psychological aspects in decision-
making, opened the debate to more fuzziness - and fewer rationalities (see for instance the tremendous
developments by Kahneman and Tversky, who departed from normative schemes of decision-making to
engage in research on descriptive theories). As a counter-example to conceiving decision-making in
terms of rationalities, Etzioni (1992) proposed to cope with the ambiguities (occurring in contextual
decision-making situations and in individual preference determination) by integrating subjectivity as the
basis of individual decision-making. Formally, he adjusts the influence of the traditional logical-
empirical (L/E) factors in decision-making with normative-affective (N/A) factors. The former are related

39
As a major achievement in decision theory, Simon departed widely from the quest for optimal decisions. He came up
with the satisficing decision, where the decision-maker or agent is recognized as no longer chasing the absolutely optimal
solution, but is trading this unrealizable quest for the realization of an ‘easier’ solution that presents
sufficient advantages to him. The irreducibility of trade-offs as they appear in decision-making became integrated into
decision-making rules (i.e. procedural rationality). However, Simon still bases his developments on the existence of a
formal rationality of the agents, even if this rationality occurs at the level of the process. As such he does not
acknowledge fundamental inconsistencies and irrationalities such as those that have been pointed out, for instance, by
experiments in socio-psychology.

to the classical vision of different forms of rationalities interfering in decision-making processes. Etzioni
focuses on N/A factors to model the influence of sensitivity, compliance, individual systems of values,
and the like.
Effectively, rationality-based models of characterisation rely on the existence of the sequence
‘better decision through better information’, even if different types of more or less ‘rational’
rationalities are evoked. If this type of sequence existed, then the generation and communication of
information (read: the construction and use of indicators) could be interpreted not simply as a necessary
input to decision-making, but even as quasi-sufficient to reach ‘sustainable’ decisions (i.e.
supposedly the better decisions)! Sustainable development would inevitably be reached with a sort of
‘societal rationality’ autopilot (formerly known as social progress, social innovation40 or social
transformation41). Indicators, as they represent simplified (i.e. accessible) and validated (i.e. better)
information, could logically be thought of as influential objects towards taking better decisions (i.e.
‘sustainable’ decisions), and hence towards SD.

In order to situate the discussion we develop hereafter, and throughout the rest of the exercise, we have
to specify a little further what is meant by the ‘utilisation’ of information, an ambiguous issue per se.
Utilisation can be subdivided and attached to a series of stages of a policy process where information has an impact.
Wildavsky has developed one of the most frequently cited conceptualisations (i.e. ‘the ladder of
Wildavsky’) of such a chain of influence between information and decision stages. For our specific
context of ISD and their usability, a more refined segmentation - still based on the same principles -
seems more adequate and precise (see figure below).
Sequential representations of the influence of indicators on policy situations, while grossly
simplified, also provide a representation of our interest in the analysis of the ‘usability’ of ISD
(see figure 7). While influence, impact and use concentrate on the interactions within the policy
domain, usability allows one to question the utilisation of ISD on the basis of the characteristics of the
indicators, e.g. what renders indicators more or less usable for the subsequent phases of the policy
process? Conveying a certain usability to ISD is admittedly the first condition for indicators to be
considered as input to policy processes. Besides, usability is also the most adequate level of analysis when
concerned with a conceptual exploration of the potential for ISD processes to impact on decision
situations. As a consequence, by aiming to situate the analysis at the level of the ‘object’ ISD, and the
way the characteristics of the object interact with decision situations, the analysis needs to be made
at the level of usability.

We will discuss the above-mentioned ‘logical’ sequence - from information to SD - throughout the next
section, in order to show some of the limiting facets of these conceptualisations when they are translated
across a variety of policy domains. At the same time, we will have to point to the fact that even if the
paradigm of sustainable development tries to open up and add some complexity, fuzziness and

40
For a discussion of the specific role of socio-environmental innovation, see for instance Rennings et al. 2003.
41
For a very instructive view on SD as basic blueprint for Social Transformation, see Becker et al. 1997.

proceduralism to the sequence, the essence of rationality-based reasoning tends to remain the intellectual basis
and reference of policy texts within the SD domain (see notably § 40.4 in: UNCED, 1992). The point we
discuss in the subsequent chapter is the link between the (quality of a) decision and the (quality or
quantity of) information. What is the influence of information in decision-making situations?

Indicator system (measuring, monitoring key trends) → Policy process (initiating, preparing, reviewing) → Policy output (decisions, documents) → Policy outcome (changes in systems, behaviour or ultimate effects)

! Usability (indicator system): technical adequacy, policy relevance, perceived usefulness
! Use (policy process): informing, deciding, justifying, persuading, ignoring
! Influence (policy output): direct, conceptual, symbolic, political, tactical, non-use
! Impact (policy outcome): cognitive, behavioural, institutional, physical

The chain runs from information input through decision to implementation.

Figure 7 – Sequential information chain (adapted from Gudmundsson, personal communication)

3.1.2 Handling information in decision spaces

“The more complex the problem which is being ignored, the greater are the chances for fame and
success”. von Foerster (1972) quoted in Merkhofer (1987 : 144)

Even if “(…) knowledge in general, and the global flow of information in particular, have become
increasingly important forces shaping the course of world affairs” (Clark 2002 : 1), decision-makers will
always lack some information; i.e. the information that would have allowed them to take decisions with
sufficient knowledge of their direct and indirect consequences, both here and now as well as there and in
the future. All the same, it is trivial to state that science will never be able, at any time and at any cost, to
provide all the missing elements of knowledge. Besides the evident shortcomings that occur with regard to
the construction and supply of information, we will try hereafter to understand the demand and use side
of information: what happens when information is on the verge of being integrated into a decision-making
process?
We subsequently try to understand some of the forces that shape the influence of some types
of information on some types of policy decisions. Perversely, some possible answers to these questions let
us infer that even in situations where quality information (e.g. a hypothetical quasi-perfect
indicator scheme) is available to those confronted with a decision-making situation, it occurs that
people deliberately – and occasionally unconsciously – ‘choose’ to ignore information, or as the European
Environment Agency stated more diplomatically: “There is no simple, one-way relationship between
awareness, information and action – each can influence the others in complex and subtle ways” (EEA
quoted in Denisov 2001 : 7).

It is one aspect of these multi-dimensional influences between information, decision, action, behaviour
and attitude that we try to elucidate hereafter. The direct impact of one piece of information on one
decision is difficult to detect. This might be feasible in some rare cases, but surely not to the extent
of providing generally applicable conditions. The amount of intermingled causes and effects, both direct
and indirect, renders the picture indecomposable. However, by changing our analytical perspective,
some valid points can be raised, notably when complexifying the image of ‘direct’ impact, and when
subdividing impact into more subtle effects.
The aim of this section is not to present innovations in decision or information theory, but rather to
attempt to structure information usages with regard to information types. By focusing on
one type of information, i.e. indicators, we simultaneously narrow the discussion to the potential
usability of indicators for decision-makers. We use literature and experience that is mostly concerned
with environmental decision situations and environmental information. Partly, we are constrained to do
so by the fact that SD decision situations are too rare and programmatic to serve as a robust
reference point. On the other hand, some rather detailed studies have recently been conducted on the
impact of environmental information. Finally, environmental information is in many policy
situations seen as an integral part of a wider SD policy.
Whatever the policy domain of the decision situation, an increasing number of different types of
information from an increasing number of sources coexist and present themselves to decision-makers:
intuition, personal advice, commercial presentations, scientific data, scientific research results… While
all of them, and some of them significantly so, influence decision-makers, we focus here on constructed
information. This means that we assume throughout the following that indicators are one particular
type of scientific information, i.e. outcomes of applied science: it is mostly researchers that develop
the indicators (or at least steer the process), mostly scientific data that are used, scientific
vocabulary that is used to interpret them, and mostly scientific methods that are used to analyse
their consistency, robustness and relevance.

By definition, complex issues are not knowable to the end, even if scientific resources were infinite
in terms of money, time, motivation and expertise. The complexity of interactions is such that the
behaviour of complex systems is indeterminate or at best uncertain (i.e. no objective probability functions
can be attached to the behaviour of the system). More pragmatically, trade-offs between generating
sufficient knowledge of the systems at stake (i.e. information costs) and the need to act on the conditions
of the systems (i.e. opportunity costs) are necessarily made. This entails that even in non-complex
decision situations the decision-maker is never fully informed.

On the technical side, scientific information for decision-making relies upstream on a series of techniques
in order to simplify and represent reality. Merkhofer (1987 : 146) refers to the implications of such
unavoidable techniques as moments of confrontation with ‘unknown uncertainties’ or ‘inherent
incompleteness’: “The model of a decision situation can never be an absolutely complete representation
of reality. This is true because resources available for analysis are finite, but the possible interactions
and implications of a decision are infinite. Any complex real-world problem will always have more
dimensions than a model can capture”.

If we agree with the general statement that information becomes ever more important in the management of
the complex affairs of public life, an evolution which can be inferred from the increasing references in
public discourse to evidence-based policy making, it remains ambiguous for research to clearly
establish the conditions for information to impact on a decision. Apart from rare direct impacts
on decisions, scientific information, by its very nature, characteristics and dissemination channels, tends
to play its role in the background and over a longer term than other types of policy information and policy
knowledge (Denisov and Christoffersen 2001); its influence is incremental rather than abruptly structural.
For some actors, and specifically for ‘information institutions’ (e.g. the United Nations’ GRID-
ARENDAL, or the EU’s EEA), this incremental, indirect, immeasurable lever of action might grow into a
rather large problem, on the one hand of self-esteem (e.g. when in a specific policy situation their
recommendations are not followed), but also of external recognition, and thus of long-term existence. It
is however this long-term existence of such information institutions which has been identified (Cobb and
Rixford 1998) as one of the minimal, but very important, conditions for information to gain usability for
decisions.

Types of information- and evaluation use

In the past, many research endeavours have been devoted to cataloguing information use, and more
particularly evaluation and assessment use, and to a lesser extent indicator use. While many typologies
co-exist, each with its own terminology, a general consensus seems to prevail (Weiss et al. 2005)
that information can be utilised in three generic ways: instrumentally, conceptually or politically.
An initial and important differentiation can be formulated between the instrumental use of information and
the conceptual use of information.

Instrumental use means a direct use of information as input either to policy formulation, to
direct practice or to the definition of implementation, e.g. a research report’s findings on an ecosystem’s carrying
capacity being used to define environmental policy thresholds. Instrumental utilisation seems essentially
applicable in policy situations with small societal impact, e.g. situations of “low-level decisions, where
the stakes are small and users’ interest relatively unaffected” (Weiss 1981 in Scott 2000 quoted by
Denisov 2001 : 6), or in practice-related decision contexts (e.g. informing the design of an
implementation instrument). Some economic decisions still seem to be taken on a rather instrumental
basis following a linear ‘incentive-reaction’ chain; for instance, the adaptation of interest rates by central
banks if inflation or exchange rates exceed a given threshold. However, even these mono-dimensional
decisions cannot be retraced to one single information event: it is rather a thorough and ongoing
evaluation by the banks’ experts, using an enormous amount of information, that allows them at a certain
moment to develop a reaction and adapt interest rates.
In any case, instrumental use has a very limited potential as a mode of utilisation in multi-dimensional
and multi-scale (read: environmental or SD) decision spaces, which reveal high complexity and high stakes.
Consequently, the credo of the instrumental use of ISD has diminished over the years within the specific
SD-indicator developing communities. It now seems rather widely acknowledged by indicator developers
that ISD are not systematically useful in a linear way in policy-making. Within the community of policy-
makers and policy stakeholders, the failure to acknowledge the limits of the instrumental use of
indicators has even been denounced by some, e.g. “the crux of the problem is the assumption of
instrumental rationality associated with indicator development. It is often assumed that indicators will
inform decision in a linear and mechanistic manner once they are made available” (Hezri 2006 : 11).
In most non-local policy-making environments, such linear thinking about the policy use of ISD is thus dated.
This is especially so in progressive policy environments, where the shift from ‘government’ to ‘governance’ has
been initiated and where these evolutions coincide with the emergence of a new conception of
policy-making tools and processes.
The evolution from ‘government’ to ‘governance’ in the realm of SD policy-making gives indicators
a new role, resolutely non-instrumental and indirect, but nevertheless important in the sense, for instance,
that a series of value issues will already be integrated and discussed at the level of the indicator process.
An ‘upstreaming’ of the policy debate can thus potentially be observed, from the more traditional policy
negotiation space to the moment of constructing policy knowledge, e.g. discussing the assessment tools.
Such observations have been made very concretely in the context of a recently introduced EU-level
evaluation scheme42, in the context of which environmental stakeholders feared that the traditional entries
into policy cenacles and processes they have managed to obtain over the years might be jeopardized by the
introduction of this new, upstream process. They also feared that the unpredictability of the IA
process, both in timing and in the inclusiveness of different types of impacts, would require them to invest
considerably more resources than they had available.

42
i.e. the European Impact Assessment process, which is an ex ante, ‘participatory’ and multi-dimensional evaluation
process for policy proposals introduced in 2001.

Participatory debates at the level of the assessment tool thus also acquire a new role, partly to counteract
an overly technocratic and top-down closing of the policy debate, thus integrating
democratic, or at least discursive, reflexes at the moment of discussing policy orientations.

However, the rather widespread original belief in the instrumental use of indicators still has serious
implications for the way indicator schemes are conceptualized and, consequently, the way
indicator development processes are configured. While many initiatives in ISD thus no longer assume
an instrumental use of ISD, they knowingly tend to perpetuate a simplified representation of
policy-making cycles and, in this context, tend to assume that an instrumental impact of information still
is a valid (if admittedly gross) simplification of reality. Hence, instrumental use is still assumed to
capture at least the impetus role of indicators, a role which is still widely sought after. In parallel, uses
of ISD other than instrumental still seem to be regarded by some policy actors as abnormal; in many
regards, policy briefs and even research programmes (see the latest EU-FP7 programme) clearly orient
their calls towards the identification of misuse of indicators in policy situations, misuse being implicitly
understood as non-instrumental use. A rather uncritical surface reading of the repetitive calls
for evidence-based policy making probably reinforces such actors’ calls for ‘usable’ indicators.

Conceptual information use applies, by contrast, to decision situations where stakes tend to be high
and where information is not directly translated into policy action for a series of reasons (e.g. too much
complexity, divided opinions, contrasting positions, lack of opportunity), but where policy actors still
continue to think of the information as being useful. For instance, Lehtonen (2003 : 235), when analyzing
the impact of the OECD’s national environmental policy reviews, and on the basis of the literature on
evaluation use, came to the conclusion that “(…) most empirical studies have shown that the direct
instrumental use of evaluation results in decision-making is rather an exception than a rule, (…) while
various indirect uses, often seen in terms of enlightenment, are much more common”. Especially in
‘difficult’ policy contexts, information is more likely to be used in terms of enlightenment, informing
problem framing, informing world views or influencing values; in other words, in a wider perspective
of social or collaborative learning, or as a means of participating in the “gradual
sedimentation of insights, theories, concepts and ways of looking at the world” (C. Weiss, cited in
Balthasar 2006 : 354). In the context of such policy-making situations, conceptual information use has
been identified as contributing to ‘organisational learning’, ‘cognitive processing’ or, more directly, ‘policy
learning’ (see Hezri 2006), which again can be delineated according to whether the
learning happens at the level of the instrument (i.e. instrumental policy-learning), the governance
function of the instrument (i.e. governmental policy-learning), the inscription of the instrument into a
policy programme (social policy-learning), or even at the level of using the instrument itself as a means
to achieve political support and reach political goals (political policy-learning).

While most evaluation analyses agree on the pertinence of distinguishing between instrumental and
conceptual uses of information, interpretations vary on other, parallel information functions.

A third differentiation has been proposed (for instance Hezri 2006; Balthasar 2006) and added to the
instrumental and conceptual use functions, to capture what has variably been labelled tactical, political
and/or symbolic uses of information. Within this third category of information use, the following
nuances can be identified. Political use refers to the use of evaluation information to confirm, or
disconfirm, already acquired knowledge, and to bend the information so as to legitimate ex post
decisions which were already taken before the evaluation was available. A nuance to this ‘perversion’ of
information processing is introduced with the symbolic use of information, where the
information-producing process itself is used as a means to reassure stakeholders by demonstrating the
particular importance attached to the objectivation of decisions. Still slightly different, tactical use of
information has been identified where the initialisation of information gathering and evaluation
processes is used as a delaying strategy, or as a justification for non-action.

Other analysts (Balthasar 2006; Vedung 1997) formalized a further strand of evaluation use. Besides
instrumental, conceptual and symbolic/tactical/political uses, they identified process-related
information use. Process-related utilisation is meant to account for the benefits stemming from the
procedural interactions and learning processes which are linked to the evaluations’ being
commissioned and implemented; sub-effects of process-related use have been labelled (Balthasar 2006)
with terminology such as ‘learning to learn’, developing and strengthening policy networks and
communities, creating a shared understanding, developing team players… However, process-related
utilisation of evaluations has also been argued (Weiss et al. 2005) to pertain to another level than the
aforementioned uses, as it is not linked unequivocally to the output of the evaluation (whether this is a
decision, a non-decision, a legitimation…), but to the process itself: “instrumental use is presumed to
yield decisions of one kind or another. Conceptual use yields ideas and understanding. Political use
yields support and justification for action or no action. Process use tells how evaluation’s influence
arose” (Weiss 2005 : 14).
Alternatively, the mechanics of process use can be read in such a way as to leave discussions of
evaluation use altogether and enter issues of evaluation influence (Henry and Mark, 2003).

It is important to state that, to our understanding, these different utilisation types do not imply a
judgement or a gradient with regard to the desirability or rightfulness of the different evaluation uses.
For instance, instrumental use is not a nobler way of using evaluation information than when
policy-makers use it to re-inform decisions already taken on the basis of other evidence or even intuition.
Instrumental use of information and evaluations is acknowledged to be rare, and empirical studies
largely confirm that conceptual use of information is the most evident and most frequently encountered
effect of evaluation information. Ex post or ex nunc legitimisation of decisions, as it occurs in symbolic
use of information, has been identified as a major and rather recurrent way of using information: on a
routine basis, when defending positions or decisions with empirical material, policy-makers selectively
use available information, and do not refrain from consciously searching for information
which would legitimate their case. Of course, if policy-makers were, in this context, to hide bits of the
evaluation, and all the more so if they were to distort its messages, then symbolic use becomes
synonymous with misutilisation. However, such negative utilisation is not reserved for symbolic use
functions: on
another level, it is very much conceivable for policy-makers to instrumentally use information to trigger
a policy which runs counter to the evaluation’s message. As repeatedly stated in the literature (for
instance, Weiss et al. 2005), evaluation and information are but one small input to policy-making
processes, which should not be overemphasised in comparison with many other sources of influence; in
other words, “(b)y viewing policy knowledge as an interrelated body of beliefs, information, evidence,
and explanations, we can begin to understand how it is that a policy-maker uses personal knowledge to
make decisions” (Hezri 2006 : 114). Information, and ISD and their processes, are thus part of the more
generic accumulation of information for policy-making, which one can refer to as ‘policy knowledge’.

Furthermore, it appears from empirical research (see, among others, the Global Environmental
Assessment Project) that there exists no linear and easy relationship between an information’s
characteristics (for instance its precision and completeness) and its impact on decisions. Moser (1999),
in a study on climate research efforts, points to the fact that it is not the ongoing
scientific struggle to develop more robust information on climate change impacts that influences the
terms of the international negotiations: “Increases in the reliability and certainty of climate science…
may not necessarily lead to more effective decision-making at all… Why? Because there are non-
scientific human-dimension uncertainties that may matter just as much or even more in determining
whether scientific information is actually used in decision-making” (Moser 1999 quoted by Denisov
2001 : 8). And she points further towards the intrinsic differences in the integration of existing
information into decisions: “People differ in how they expect the world to work, how they value scientific
knowledge, in their attitudes towards uncertainty and ignorance… In addition, there are uncertainties in
how… signals are perceived, how people define the problem, who the involved actors are, what policy-
making and management institutions are involved, which policy choices and strategies are available,
feasible, chosen and implemented”.
We establish here that if limits to using science in policy-making apply to scientifically constructed and
thoroughly validated, peer-reviewed information (e.g. the IPCC reports in the context of climate
change), then they can be assumed to hold all the more for indicators, on the grounds that indicators are
‘softer’ scientific information, as they unavoidably integrate an array of value judgements of their
authors. This potentially diminishes their intrinsic scientific quality (read: objectivity) in the eyes of the
potential user.

Evidence for the perception by decision-makers of the scientific ‘softness’ (and hence societal fragility
and political malleability) of indicators is sometimes provided in those cases where indicators present
uncomfortable results.

When the World Economic Forum published its ‘Environmental Sustainability Index’ (ESI), elaborated
by YCELP and CIESIN43, and Belgium was twice (WEF 2001, 2002) ranked among the least

43
Yale Center for Environmental Law and Policy (Yale University) and Center for International Earth Science Information
Network (Columbia University).

‘environmentally sustainable’ nations of the world, the political and institutional debate44 in Belgium
centred mainly on refuting the methodological stances taken by the authors of the report. Comments by
Belgian institutional actors (such as federal and regional statistics offices) and counter-studies (financed
by regional environmental administration) demonstrated with great ability that the authors of the reports
did not take into account relevant data, that their interpretation of ‘environmental sustainability’ was
strongly biased by North-American and neo-liberal conceptions of the world, that their knowledge of
Belgium was rather imperfect. While most of these elements of critique are formally correct, they only
address the symptoms of the Belgian ranking. The cause of the country’s relatively bad position in the
ranking (apart from the fact that Belgium is probably one of the worst environmental performers in
Europe) was the poor communication of updated and representative environmental data to international
bodies and institutions, even in topic areas where robust information was collected: the authors of the
ESI used mainly data from such international bodies for their indicators. Even if considerable efforts
have lately been made to improve Belgium’s reporting record towards international institutions (at least
the European ones), the past situation reflected a rather chaotic and unprofessional institutional setting
(e.g. regions fighting each other over the methodologies to follow), which led to a situation where the
absence of methodological consensus implied silence (i.e. no data were communicated to supra-national
bodies).
Even if the publication of the ESI-reports triggered a direct reaction in the realm of the Belgian
indicator- and data-community, it was of no instrumental use for policy-making: Belgium had
particularly bad scores on water quality and quantity, but policies in both domains were not reconsidered
in either region. However, the ESI showed decision-makers that further investments in data-treatment
and communication were absolutely necessary if they wanted to avoid future front pages in the
newspapers. The ESI somehow increased the “political cost of doing nothing” (Clark 2002 : 18), or, to
be more specific, the political cost of not implementing a proactive and steered information policy which
would force inter-administrative communication and results on these issues. As a consequence,
institutional settings and the priorities of some civil servants’ missions were adapted (if only slightly),
showing that some form of collective learning among policy-makers had occurred: the ESI-reports’
conceptual use for policy-making can thus be asserted.
In the longer run, one could have hoped that the ESI-reports would strengthen the impulse to elaborate
better environmental indicators and to provide comparable data across the regions. The future existence
of such scientifically more robust and abundant information could potentially contribute to more
enlightened environmental decision-making. That the road to such enlightenment might still be ‘long
and winding’ was demonstrated shortly after the ESI-episode, when both regions engaged, one after the
other, in developing their own methodological interpretations of TMR-indicators45.

44
It has to be said that the political and institutional debate was triggered by unusually high media coverage of the
reports. No other international or European report assessing Belgian environmental quality via indicators has ever
entered these spheres of public debate. This is probably less linked to the intrinsic characteristics of the reports than
to the political and media ‘importance’ of the Davos meetings, where the reports are presented to the press.
45
TMR : Total Material Requirement. These indicators belong to the family of physical indicators. The aim is to
measure material flows in economies as complementary information to financial flows (as expressed with indicators
such as GDP).

Some social learning seemed, however, to have occurred within the data-community, as an information
meeting between the two regions’ sub-contractors was finally organized for the development of both
regional TMR-indicators.

Information use and ‘reflexive governance’

Lehtonen (2003) articulated the potential indirect influences of indicators in an even wider sense: “(…)
use of evaluations and the development of indicators can be seen as instruments enhancing
the reflexivity of modernization (Giddens 1990) and deliberative democracy through inclusive,
participatory decision-making, which should ultimately contribute to sustainability through what has
been called social learning (Van der Knaap 1995)”.
The importance of evaluation, assessments and thus also indicators in a governance-based
understanding of modern policy-making has been articulated in a more general context by Hezri (2006).
In a first instance, he sees a co-evolutionary reconfiguration of public space during the 90s, which
included both the birth of ‘sustainable development’ as an overarching, societal strategy-goal and the
development of the policy discourse from ‘government’ (e.g. command-and-control) to ‘governance’
(e.g. deliberation, accountability…). This double evolution results in the fact that (Hezri 2006 : 106) “an
important feature of an advanced democracy is a shift from technocratic to reflexive policy discourse.
Reflexivity entails turning the problem of discourse in on itself through, for example: centring the
problem of communication in participation; and realigning the power of policy actors by including non-
experts and citizens for a democratic policy discourse. The creation of democratic spaces and the
widespread use of auditing in governance may enhance reflexivity in governance”. In a second instance,
and here more implicitly than explicitly, the announced shift towards (reflexive) governance entails an
emerging role for a series of policy instruments, among which collaborative decision-tools, informative
‘propaganda’ frameworks, support for accountability…, or in other words softer management tools
(including ISD).

Influencing the evolution of an issue domain

In the specific case of interaction between indicators and SD, one needs however to specify the influence
of information on policy-making a little further than simply with ‘enlightenment’, ‘collaborative
learning’, ‘social learning’, ‘enhancing reflexivity of modernization’ or ‘contributing to discursive
democracy’. In this respect, Clark (2002 : 6) states that “even influential assessments rarely impact
policy choices directly, but rather exert substantial indirect influence on long term issue development”.
This is also the generic stance we take hereafter: indicators are prone to influence agenda setting, i.e.
shape debate. Agenda-setting can have multiple faces and operationalisations, including specific
situations where ISD are influencing operational outcomes of policy processes in terms, for instance, of
setting carrying capacity thresholds.

In practice, such a reading accords with the Belgian ESI-episode, for instance, but also with many other
episodes. During the consultative preparation of the EU-structural indicators in 2001 (CEC 2000, 2002),
the activities of some environmental lobbying groups on the list of indicators were directed towards
agenda setting. They pushed hard at the time to include in the selection of indicators some measurement
of Europe’s material intensity, such as the Ecological Footprint (Wackernagel and Rees 1996, 1997).
Obviously such a methodologically weak46 (but
communicative) indicator had no real chance to make it into the very small number of structural
indicators. However, indirectly the debate helped the European Environment Agency to raise intra-
institutional interest at the EU-level for indicators measuring material intensity, which ultimately
translated into the emergence of a series of pioneering EEA activities on TMR.
Influence of information, as it is developed in evaluations and with indicators, could thus be explained by
the enhancement of an ‘issue domain’, i.e. “a group of people and/or organizations interacting regularly
over periods of a decade or more (…) within a given policy area” (Sabatier et al. 1999 : 135). For such
an issue domain to emerge and develop, a series of conditions has to be met, such as shared interest
among a group of actors, long-term existence of such interest and action, and institutionalisation of the
interactions and hence of information exchange. It can be assumed, in the case of an issue domain
‘Sustainable Development’, that the existence and stability of commonly agreed indicators would
contribute to the issue domain’s emergence through the development of a common data language or the
standardization of basic and periodic reporting. Development of the issue domain thus induces a
sequence of interlinked conditions on indicators, starting with ‘avoid one-shot elaboration and privilege
long-term development and processes’ (which is highly linked to ‘get institutional recognition of the
topic under evaluation’).
On a general basis, it can be said that the dynamics of indicator utilisation are far from simple.
However, since Rio and the subsequent development of Local Agendas with their indicator batteries,
some sort of sustainability fairy tale has emerged, which Bell and Morse (1999 : xiii) caricatured with: “(…) the
tacit and somewhat naïve assumption (…) that sustainability is ‘good’ and all want it. Hence by
association, Sustainability Indicators are ‘good’ and people will eventually learn to want, love and trust
them. It all becomes a matter of faith”.

We saw above that the more complex the issues, the more vital information will be missing to
decision-makers. We stressed further that even for those elements of information that are readily
available, a direct and instrumental impact on decisions should not be expected; rather, the impact
should be comprehended in terms of ‘developing issue domains’, a process that can be understood as
long-term societal agenda setting. Obviously, influencing issue domains is not the exclusive preserve of
information: more trivial conditions often have a stronger impact on the emergence and persistence of an
issue domain, among which budgetary cuts, departure of the main ‘animator’, human failure and
incapacity, power games… Furthermore, institutions with their finite resources have to operate trade-offs

46
For an in-depth discussion, methodologically and societally, of the Ecological Footprint, refer to a special issue of
Ecological Economics (2000) volume 32 : 3, 341-500.

between the issue domains they want and can support: competition between issue domains, especially the
emerging ones, is thus everyday reality.

3.1.3 Towards a generic model for information assessment

Without denying the importance of these regime-internal mechanisms, we will continue to concentrate in
the following paragraphs on the linkages between information and issue domains: which characteristics
does information need to develop in order to be usable for the creation of an issue domain?
In this respect, Clark et al. (2002), on whom Parris and Kates (2003) and others (Beco 2006; …) have
leaned, pointed to an interesting three-attribute framework, which we will largely follow hereafter as a
generic criteria-framework: “The most influential assessments are those that are simultaneously
perceived by a broad array of actors to possess 3 attributes: salience, credibility and legitimacy” (Clark
et al. 2002 : 7).

The first important point to be raised is that information influence, and by extension information
usability, is determined at the level of the actors’ perception of the assessment. We can reaffirm what we
mentioned before, namely that it is not the intrinsic, objective qualities of information, which can be
procedural and substantive, that play a major role in generating influence on decisions, but rather the
individual’s (or the group’s) subjective judgement of the information’s qualities (i.e. adequateness,
timeliness, robustness…). Because of this subjectivity, decision-makers will attach attention both to the
information product and to the construction process (i.e. the evaluation process; the indicator
development process).
Second, issue domains are societal and collective phenomena, and thus the influence of the assessment on
the societal development of the issue domain is effective only when a sufficiently large number of
decision-makers share a comparable appreciation of the same information. It should be noted that Clark
et al. (2002) indirectly confirm that the solitary decision-maker is a non-existent entity, i.e. that
decision-making inevitably engages an array of actors.

Hereafter we present an introduction to each of the 3 use-criteria, and develop their interactions. In the
following, when referring to this combination of use-criteria, we speak of the ‘L,C,S-framework of
criteria’ (Legitimacy, Credibility, Salience).

Salience

Salience refers to the correspondence between the actors’ perception of the stakes addressed in the
evaluation and what they perceive as being their own stakes. Does the assessment refer to the
questions deemed relevant by the decision-maker? Again, what determines salience is not the objective
quality of the system closure or of the boundary setting, nor the quality of the systemic decomposition of
reality into systems and subsystems, nor the deductive capacity of the evaluator to single out pertinent
and significant questions. Rather, it is the evaluators’ comprehension of the decision-makers’ own
translation of the issue under evaluation that influences salience.
We should clarify that ‘decision-maker’, in our understanding, is a reference to participants in the
decision-shaping activity. It refers thus to all types of societal actors, including NGOs, experts, civil
servants, politicians…, and is not restricted to the person or institution who will endorse responsibility
for the ‘final’ decision.
Implicitly, a second condition emerges which codetermines salience: effective communication of the
stakes which the assessment addressed. In most decision-making situations, salient and relevant
information could be gathered from many sources and many actors. Fierce competition has thus long
since entered the information market, and it is the information that effectively communicates its having
taken into account the stakes of the main decision-makers which will eventually make it through. This
also explains the recent success of those evaluation processes which integrate sociological techniques
allowing evaluators to gather in-depth knowledge of decision-makers’ perceptions of their stakes. For
instance, the collaborative drafting of the IPCC reports’ executive summaries by decision-makers and
scientists proactively promotes the salience of the reports, because an important array of societal actors
in the debate is directly involved in shaping the information. In policy situations in particular, salience
has also been translated as policy relevance.

Credibility

Credibility reflects, after Clark et al. (2002 : 7), “whether an actor perceives the assessment’s arguments
to meet standards of scientific plausibility and technical adequacy”. Again, decision-makers do not
explicitly and methodologically assess the quality of the scientific arguments and rationales that underlie
the construction of the information. Such an enterprise, apart from being time-consuming and
resource-intensive, would in most cases exceed their capacity to comprehend scientific processes.
Instead, as they have to trust the quality control mechanisms of science, they rather judge whether the
process of information construction made truthful and thorough use of such scientific mechanisms of
quality assurance (e.g. did the evaluator expose his findings to peer-review?). Of course,
decision-makers do not judge these processes, and the information they gathered, in absolute terms.
Rather, they compare the credibility of the assessments that have been made accessible to them: there
thus exists competition between information also at the level of their respective credibility.
In most issues, scientific credibility is just as difficult to evaluate as truthfulness: for the lay-man it is
intrinsically difficult to perceive if scientific quality control was effective or not. Often, such an
enterprise can only be successfully realized with considerable and long-term knowledge of the scientific
discipline. When it comes to multi-disciplinary assessments, such a credibility-check on the level of the
information itself is thus hardly possible anymore: credibility is then evaluated “by proxy” (Clark et al.
2002 : 23) such as by the former recognition of the evaluator’s expertise (or even the evaluator’s
institution’s expertise) by other decision-making bodies. Credibility seems also negatively correlated to
the amount of consensus on the issue under scrutiny; in issues of high uncertainty and complexity,

93
sufficient credibility is thus often difficult to reach as opposing views and contradictory voices diminish
the decision-makers’ abilities to assign credibility to any of the evaluations.

Legitimacy

The third attribute, legitimacy, is the most procedural and societal one. An assessment gains legitimacy
if decision-makers, stakeholders and the evaluator perceive that the evaluation has been elaborated with
sufficient procedural fairness with respect to political or societal standards. Legitimacy thus depends not
only on the perception of the decision-maker; the evaluator equally has to perceive the process as being
fair and as meeting acceptable standards. Clark et al. (2002 : 25) point out that “even assessments that
make recommendations that run counter to a participant’s interest may be accepted as legitimate if that
participant believes his concerns were considered, even if rejected”. Of course, procedural legitimacy is
difficult to evaluate once the information is on the desk of the decision-makers. More strongly than the
other two attributes, legitimacy reinforces the calls for the communication of meta-information on the
assessment’s process. However, even in the presence of such meta-information and an enhanced
transparency of the process, the intrinsic legitimacy of processes remains very difficult to assess. As with
credibility, actors use proxies to develop their judgement on the legitimacy of the assessment, for
instance: who participated? Were representatives of a variety of worldviews and stakes integrated in the
evaluation process?

A simple illustrative model: science–policy interactions and the issue of creating legitimate, credible and
salient information

In order to illustrate the types of interactions between legitimacy, credibility and salience with regard to
information production and digestion, an image produced by the original authors of the L,C,S-frame
seems to us particularly well constructed. We have chosen to transcribe it in part hereafter (Cash et al.
2002 : 2-3):

“Caricatures of linking knowledge to action


(…) Consider a situation where a single scientist provides knowledge for a single decision maker. (…)
What the scientist considers relevant information may not be the same as what the decision maker
considers relevant, and vice versa. There must be some way that the expert can know what kinds of
questions to ask in order to produce knowledge that is salient to the decision maker. (…) that is, the
scientist must communicate somehow with the decision maker. Without this bridging, there is little
chance that the information will be salient, and thus found to be useful by the decision maker.
Next, we can complicate the situation by adding a second scientist (perhaps from a different region of the
world or a different scientific discipline). The two scientists provide different information and conflicting
recommendations. In this case, the decision maker will likely listen to the source and/or information that
she views as more plausible and/or accurate. Who, then, is the decision maker to believe and why? What
are the criteria by which a decision maker will judge the believability or trustworthiness of the two
scientists? Holding salience constant, the expert who is most credible will be most likely to influence
decision making. Information that is not credible, regardless of how salient, is likely to be ignored.
Inversely, we can complicate the situation by adding multiple decision makers. This situation helps us to
illuminate the issue of legitimacy. Here, how problems are framed, how concerns are addressed and how
policy options are considered all affect how the various decision makers view the system of connecting
knowledge to action as more or less “fair.” What is a fair process by which decision making happens,
and what is a fair and legitimate process through which experts are engaged in the scientific endeavor?
How are research agendas set, concerns placed on the table, and dimensions of the problem chosen and
focused upon? How are boundaries bridged between the two decision makers, between the expert and
decision makers, and between all three actors in such a way that legitimacy is facilitated? As in the
previous world, it is possible that regardless of how salient or credible information is, if the process of
producing and using information is not seen as legitimate there is a high probability of the information
not being used.
As we continue to complicate our model by simultaneously allowing for multiple sources of information
and multiple decision makers and multiple issues, we begin to see how the complexity of the vast majority
of sustainable development situations are plagued by interacting problems linked to perceptions of
salience, credibility and legitimacy that cross multiple boundaries.”

Mechanics and dynamics between L,C,S-criteria

To simplify: information thus has the best chances of being ‘influential’ in a decision-making process if
the actors of the decision process agree that the assessment correctly formulates and integrates the stakes
of most actors, that the assessors gained sufficient credibility in the past on comparable assessments, and
that the assessment process was carried out with state-of-the-art procedural fairness.
It is obvious that decision situations in which all three attributes are perceived as being met at the same
time are rather rare, and can only belong to the simpler, uncontroversial decision spaces. What, then, is
the residual influence of information when confronted with a complex decision situation (i.e. many
different actors with many different stakes and agendas) where actors have to deliver on issues that are
highly uncertain, complex and controversial (i.e. high competition between many different assessments
with varying attributes)? Does the impact of information tend towards nil? For issues linked to SD, often
unarguably complex issues with a serious amount of uncertainty, ignorance and indeterminacy, such a
verdict would be devastating, as ever more research into progressive evaluation techniques is being
financed on the subject.
In complex decision situations, trade-offs and substitutions between the three attributes are inevitably
realized, and it is seldom possible to reach a satisfying level for each criterion simultaneously.
Increasing complexity of the issues implies, for instance, that scientific evaluators enter into scientific
discourse among peers about a series of ‘technical’ issues in order to first gain their own group’s
credibility. In turn, such disciplinary scientific debate can undermine salience, and eventually also
credibility with other disciplines and non-scientific actors. Inversely, an international evaluation effort
such as the IPCC might be relatively credible, but nevertheless at one point risked losing legitimacy in
the eyes of the South, as most of the IPCC’s members represent ‘Northern Science’ and developing
countries’ stakes are not sufficiently represented47.
Assessments can also be too salient, especially if evaluations are carried out ‘in house’ by institutions and when the independence of the evaluation’s agenda is not sufficiently transparent to the outside observer. This type of diminished quality due to excessive salience can also result from situations (as observable in Belgium) where, due to scarce institutional resources, the institutional evaluators become the reference persons for the issue domain within the entire institutional landscape. Evaluators then tend to become actively involved in agenda setting and policy definition, hence potentially jeopardizing the legitimacy of the processes.

Parallel to the necessary existence of these trade-offs between the L,C,S-criteria, it seems obvious that the levels of credibility, legitimacy and salience of an assessment are not stable through time: actors’ perceptions are not stable, and demands in terms of information change with the evolution of the issue domain (see for instance Boulanger 2006). Major shifts in the relative levels of the three attributes occur as the decision situation passes through different stages of its institutional realisation; as the intensity of the socio-political debate on the issue accentuates or vanishes; as actors gain new insights; as issue domains evolve; as evaluators are replaced…

We have already stressed that the basic conditions for usability in terms of L,C,S might be met in the longer run by using careful procedural settings for the evaluation. If most of the pitfalls of evaluation usability cannot be avoided, they can be anticipated, and decision-aiding tools and processes adapted accordingly. However, anticipation and adaptation have to be pursued thoroughly, as there seems to be no generalized recognition of these pitfalls. In this respect, Parris et al. (2003 : 19) affirm: “The contrast between the dominant stated goal, to inform decision making, and the relatively weak efforts to ensure salience, credibility and legitimacy is striking and indicates a surprising degree of political naïveté among sustainable development indicators community”.

At the level of the evaluation process, Clark et al. (2002) identify at least three particular points of interest that appear to influence the process-related impact of information on decision-making: who participates in the evaluation process, and to what degree? What is discussed in the evaluations, and how is it discussed? How is the process itself framed, and does it respect minimal procedural openness, fairness and transparency?

47
Of course, it could be argued that this over-representation of Northern science is partly inherent to the issue at stake, which requires a considerable amount of R&D infrastructure and resources for expertise to grow. These resources are often not available to developing countries; hence expertise is sparse, hence their initially sparse representation in the IPCC.

Furthermore, most of the questions raised here with regard to the evaluation’s settings, configurations, extent… can be managed by the general institutional background into which the evaluations are to be integrated. The success of integrating the evaluation thus also depends on institutional capacities, both at the level of the decision-makers and at the level of the evaluators. Creating evolving and flexible institutions (including evaluators) thus seems one major prerequisite for evaluations to fall on fertile ground. We will discuss the institutional implications of indicator usability in the subsequent chapter, so at this point we refer only briefly to three conditions Clark advances as being of major influence here: institutional embeddedness of the evaluations; institutional capacity to act as (or create) bridges between science and policy; institutional capacity for learning and critical self-reflection.

Evidence of the rather small and indirect influence of such information has been established in ‘objectifiable’ situations (see for instance Gell-Mann, 1994), where hard scientific evidence can be and has been constructed, where alternatives are clearly determined, and where the range of impacts of the different decision alternatives is solidly quantifiable. ‘Hard scientific evidence’, i.e. factual and objectified information, seems to have only little more potential than ‘soft information’ to become a valuable and valued input to decisions. Contextual information on the construction patterns of information (Giampietro 1999) seems, however, one of the unavoidable necessities repeatedly stressed by many: “(…) what is essential, is how the quantified and unquantified elements relate to describe overall system behaviour. Without that larger picture, policy decisions may be made without placing information in context, leading to policies that may be seriously misguided” (Meppem and Gill 1998 : 125).
The absence of such contextual and/or procedural ‘information on information’, inducing a certain obscuring of a process’s limitations and an information product’s uncertainties, has in some cases been the major cause of public dispute. In this respect, Van der Sluijs (2002) reports an episode in which the Dutch RIVM (i.e. one of the main Dutch public environmental data providers) was seriously questioned about the validity of the major data-sets the Institute constructed, and subsequently communicated to the public via indicators and state-of-the-environment (SOE) reporting. As a matter of fact, most of RIVM’s environmental data, as is the case everywhere, are constructed at one point or another with computer-based ecosystem models, which monitor and model the supposed chains of causality. As with any model, the Dutch models produced data tainted with greater or lesser approximations and with different levels and types of uncertainties. For obvious reasons of simplification and comprehensibility of its SOE-reports, RIVM deliberately did not communicate the levels and depths of uncertainty of its underlying data-sets when publishing SOE-reports. Such contextual information was judged worth neither simplifying nor publishing, because of its relatively high degree of technicality.
In the aftermath of a public debate triggered by specialists, the fact that RIVM did not communicate the extent of these uncertainty factors, the imperfections inherent in the modelling assumptions, or the approximations and readjustments that were necessary to render data series coherent in time… appeared as a major mistake: the Institute’s credibility, and to a lesser extent its mode of institutionalisation, was seriously questioned, even though at no moment was its technical handling of uncertainties in modelling put into question.
As a consequence of this episode, the Institute did not have to change its modelling and data-construction considerably. The subsequent institutional change instead improved background communication, i.e. improved the procedural setting of the institution’s handling of uncertainties by developing an information policy with regard to contextual information.

In the light of the above, ‘handling information’ is a general pattern of actors’ behaviour in decision situations, at least whenever the necessary conditions cannot be aligned to make information - as originating from assessments or other decision-aiding processes - the unique, timely basis for decision. However, an assessment that was used in one decision situation might be relatively ignored in subsequent similar decision situations: a slight change in the conditions of construction, dissemination, periodicity… might have changed considerably the balance of trade-offs between perceived salience, credibility and legitimacy. As every decision situation ideally needs its very own assessments and information pool, the value of standardized information is thus very limited: e.g. the dream of the real-time indicator panel, permanently accessible to all and updated very regularly, is thus not only scientifically unrealizable, but could be qualified as merely producing ‘background noise’ in terms of its usability for decisions. On the other hand, one of the basic conditions for enhanced credibility, for instance, is to assure the long-term viability of ISD-processes. In other terms: some types of assessments such as ISD tend to gain in usability in the longer term, a fact that would speak for redirecting one-shot assessment or monitoring experiences into periodic assessment investments.
In the following, we will elucidate these contradictions a little further by observing to what extent SD imposes limiting conditions on the construction of indicators and assessments that are usable. Are some of the general principles which define the political and societal project of SD counterproductive to the construction of information that scores satisfyingly on the L,C,S-criteria, thus counteracting the usability of indicators as decision-tools? For instance, how far is the repeated call to consider the SD-policy domain as necessarily participative a threat to information usability? We will weigh SD-principles against their ‘costs’ with respect to salience, legitimacy and credibility.
By concentrating on the threats, we deliberately take a negative posture. The consequences in terms of opportunities, as derivable from the same criteria and attributes, will be developed later, namely with respect to their influence on the layout of evaluation processes. From a first discussion of the influence of the SD-policy domain on the L,C,S-criteria, we will elaborate further the influence of ISD on the L,C,S-criteria, and finally, in the subsequent chapter, develop the influence of the procedural, institutional dynamics of ISD on the L,C,S-criteria.

3.2 Sustainable development and evaluations: principles of SD
as limits to the utilisation of assessments

From the previous sections emerges a general dilemma. On the one hand, assessments (such as ISD) are becoming relatively unavoidable tools for decision-making in complex issue domains such as SD, because relying on intuition is insufficient and, notably, because of the potential of assessment processes to render some transparency on the value systems of decision actors and assessors. On the other hand, once the seemingly modest instrumental impact of information on decisions is accepted, conceptual use of evaluation and information is acknowledged as the better approach to understanding influence. But conceptual use seems heavily dependent on a series of institutional factors and prerequisites, which themselves depend on the characteristics of the issue domain. Obviously, the nature of the issue domain does play a role in information usability. A link needs to be established between the difficulties of constructing salient, credible and legitimate information and those difficulties that arise directly from the SD-issue domain itself.
In order to illustrate this link, we use hereafter a limited series of principles of SD. Most likely, and as pointed out in former sections, other authors and actors would use other SD-principles, because no universal consensus exists on the framing of SD with principles. Hereafter, we do not discuss or justify the adequacy of these principles as a representation of SD. Neither do we intend to present an exhaustive or inherently coherent and systemic set of principles. The aim of the following is rather to develop a discussion of the impacts (on evaluation processes and products) of some characteristics intrinsically linked to the policy domain in which we are positioning our analysis. We thus focus on two of the principles which we see as most influential for assessments within SD.
The question we want to address partly here is: how far is SD usefully ‘evaluable’, whatever the construction of the information? At this point we thus focus on the limitations that SD imposes on the perception of the salience, legitimacy and credibility of evaluation/information.

In the following section, we will structure the discussion along three points. For each of the identified principles of SD, a synthetic exploration of its necessity and importance for SD seems unavoidable to set the scene, even if most elements have already been presented elsewhere. The next step is then to raise the generic difficulties each principle of SD potentially exerts on evaluation products and processes: how far is the principle counterproductive? The last part adds to the discussion by trying to develop the opportunities and threats of each principle for the subsequent use of assessment information: what is the potential impact of the principle on the perception of credibility, legitimacy and salience?

3.2.1 Multiple dimensions : L,C,S-criteria and the principle of ‘integration’

Integration and sustainable development

We have already drawn attention to the necessity of viewing SD as an integrative (and integrated) process combining multiple ‘dimensions’ (e.g. environment, economy, health…) into horizontal integration, the multiple levels of policy development (local, regional, global…) into vertical integration, the extended, multiplied profiles of policy actors (administration, politicians, civil society…) into participative integration, etc.
However, the number, definition, boundaries, scopes and scales of the ‘dimensions’ to be integrated remain issues of debate, as are the mechanics or models of such integrations (e.g. Scrase and Sheate, 2002). Even the significance to be given to the ‘dimensions’ is not settled (see for instance Lehtonen 2004, showing how dimensions can variously be translated into capabilities, capitals or institutions). The very basic principle commonly acknowledged is that SD induces the integration of environmental issues with social and economic ones (i.e. integration of dimensions), and this on different levels of policy-making (i.e. integration of scales) and using different types of policy processes (i.e. integration of processes). We will stick to these basic interpretations of the SD-principle of integration, as they are sufficiently precise for our purpose here.

Integration and evaluation

Analytically, the struggle to integrate multiple dimensions, scales or processes into a coherent framework induces a complex understanding of policy issues, and consequently of decisions, hence of information/evaluations. Specifically, the integration of multiple dimensions not only results in considering simultaneously a series of separate ‘intradimensional’ uncertainties (e.g. with regard to the dynamics operating within national economies) and ‘interdimensional’ uncertainties (e.g. with regard to socio-economic or socio-environmental interactions); analyses and policy instruments also need to be brought to the level of ‘integrated assessments’ (Rotmans et al. 2000; Greeuw et al. 2001), which make it possible to comprehend and assess the interrelationships between the considered dimensions.
However, as mentioned, many differing understandings co-exist on the significance of integration in general, and of integrated assessment in particular (Scrase and Sheate, 2002). Simultaneously, it appears that efforts to promote integrated assessments as a substitute for multiple adjacent ‘mono-dimensional’ assessments (e.g. gender, efficiency, environmental, cost-effectiveness…) do not necessarily fulfil the expectations they raised in the first place as, for instance in the case of environmental policies, “integration appears to be part of a discourse around a new set of policy goals linked to environmentally sustainable development, but on closer inspection it can serve as a smokescreen, diverting attention away from precisely that challenge” (Scrase et al. 2002 : 276). Others have raised complementary reservations on the weaknesses of the practice of integrated assessments48, as well as of integration in itself. While intellectually appealing, integrated evaluations and assessments do not yet appear to have reached the characteristics necessary to fulfil the expectations raised with policy-making actors.
On the one hand, drawbacks are linked to the often excessive, but always inherent, complexity and difficulty of constructing (e.g. in terms of setting the boundaries, scales, scopes…) and of rendering (e.g. operating and communicating) such integrated evaluative information. On the other hand, the first experiences gained by different stakeholders in policy-processes contribute to questioning the desirability of integrated assessments as such, not only because of the unavoidable dilution of (e.g. environmental) objectives they induce, but even more so because they alter stakeholders’ conventional channels of power and influence over the policy-processes. As a matter of fact, in those policy-situations where decision-makers encounter high complexity and low determinacy of the decision (i.e. situations where decision-making is difficult, but where decisions are not predetermined, for instance by the lack of alternatives, or by generalized path-dependency of policy), integrated evaluations seem to favour49 the opacity of lobbying efforts and other influences (see Wilkinson 2004).
We thus fall back on the general antagonism related to the implementation of SD: on the one hand, an operational necessity to complexify issues via the integration of multiple dimensions (e.g. ‘GDP is not sufficient as a measure of a nation’s welfare, if welfare is based on resource depletion’) and scales (e.g. ‘GDP is not sufficient as a measure of a nation’s welfare, if this is based on creating global threats’), and on the other hand, decision-makers calling for ever more simplified and synthetic information (e.g. ‘the list of EC structural indicators50 should not be composed of more than 30 mono-dimensional non-composite indicators’).
The evident danger of integrated assessments in this understanding is that they might contribute to masking important information and even important issues, as in practice integrated assessments require, more than other types of assessments, a relatively strict selection of the issues to be considered for evaluation. In the context of the complexification of the interactions between components, with components not sufficiently known by science (e.g. understanding ecosystems), and with the nature and intensity of the interactions being largely hypothetical for science, indicators, and the simplification of reality they perform, turn into widely imperfect tools for decision-making.
Generally, ‘reducing’ complexity is achieved by decomposing systems into their sub-systems, until those parts of the system have been distilled which are sufficiently simple and causal to be treated with ‘normal science’ and ‘normal’ analytical tools or instruments.

48
See for instance Wilkinson (2004) or Opoku (2004), who critically assessed the practice of integrated assessment as created by the Impact Assessment regime at the level of the European Commission.
49
Paradoxically enough, integrated assessments are meant to contribute to exactly the opposite evolution, namely to facilitating issues such as “good governance”, “transparent decision-making”, “liability”, “improved regulatory processes”…
50
These indicators are prepared yearly by the Commission to be presented to the Council at the Spring Summit as a sort of ‘Synthesis Report on the State of Europe’. The list has been adapted since its inception, but seems to stabilize around a set of +/- 30 indicators tracing evolutions on economic and social issues. The integration of environmental indicators into the top-list (see the relationship between indicators and the Lisbon process as analyzed by Hinterberger et al. 2003), though an explicit objective, remains anecdotal as only energy and climate change are taken into account. Initial efforts to push into the list indicators that were more salient to environmental organisations were rapidly abandoned.

However, “it should be noted that a
system’s complexity, nonlinearities, in a system’s adaptation, co-evolution and evolution processes and
uncertainties arising from these processes hinder the creation of a formal mathematical model (von
Bertalanffy 1968 : 85)” (Köhn 1998 : 174). This means that the decomposition and simplified re-composition (i.e. creating the model) are complex by definition as soon as the object under study reveals uncertainties.
As pointed out further by Merkhofer (1987 : 144), such decomposition can lead to the analytical ‘destruction’ of vital elements of the original system under examination, hence leading to a deficient comprehension of the real-world problem: “(…) destroying the respondent’s natural understanding, decomposition procedures impose a foreign response mode that does not allow people to articulate their understanding of (holistic) value issues” (Fischhoff quoted by Merkhofer 1987 : 146). And as “(…) decompositions are not unique, and, therefore, different decompositions can lead to different judgments of the same issue” (Merkhofer 1987 : 146), the efforts conducted to reconstruct real-world interlinkages with the help of indicators are seriously influenced by such societal mechanisms.
More profoundly, Tribe (1972) emphasised sociological elements of the potentially weak impact of indicators, linked to the respective policy domain under scrutiny. For instance: facing too many uncertainties, potential users of such information may simply refuse to take the given integrated information into account, as they might interpret its construction as a sign of the “superiority of the elicitor’s overall perspective (and the overall social importance of analysis and its purveyors). It conveys a message of analyzability or solvability where that may be inappropriate”. Especially with integrated assessments, which induce complex boundary closures, this level of inappropriate analyzability might be reached with some users, which in turn might imply a more general refusal to take such information into account at all.

However, it is not necessarily these very classical antagonisms - between the complexity of the object of study and the necessity of simplification - which reduce the potential use of indicators to address multiple dimensions, as pointed out by Bell and Morse (1999 : 31, referring to Slobodkin): “(…) any simplification limits our capacity to draw conclusions, (…) this is by no means unique to ecology. Essentially, all science is the study of either very small bits of reality or simplified surrogates for complex whole systems. How we simplify can be critical”. Complexity and uncertainty, as inherent features of human presence in a natural environment, have generated a long-term apprenticeship in their management, notably as “in human societies uncertainty is mitigated by creating suitable institutional structures or institutions” (Köhn 1998 : 175).
Subsequently, we will discuss further whether, in a certain sense, the realisation of periodical evaluations and indicators can be understood as forming such suitable institutional structures - or more precisely, whether ISD can be considered as what Guston (1996, 1999) articulates as ‘boundary institutions’ - and as contributing to mitigating uncertainty in decision-making by allowing a consensus to develop in time on the ways and means to integrate multiple dimensions into the assessment.
The major threat stemming from the integration principle of SD is thus more or less directly linked to the scientific handling of uncertainty in indicator developments. Linking the dimensions of SD, as well as assigning proper scale and scope to ISD, is thus partly the responsibility of the ‘scientificity’ of the techniques used, as “(c)onducting any assessment involves a choice of what to include in, and what to exclude from, analysis: such choices generally involve trade-offs on the assessment’s credibility, salience and legitimacy to particular users” (Eckley 2001 : 18). This is what we illustrate hereafter: the impacts of the integration principle on the L,C,S-criteria.

Integration and credibility

Currently, the generalized struggle among scientific proponents, disciplines and schools to develop a consistent and analytically robust scheme of integration and integrated assessment can rightfully be acknowledged as a limitation to the perception of credibility by users of information. The multitude of integration schemes and integrated assessment methodologies ‘on the market’ adds to the intellectual curiosity of working on SD. The existence of many integration schemes also contributes to the richness of the concept, and most likely even to its current political and societal success. Simultaneously, meta-conceptual debates on how, what and with whom to integrate dimensions, as well as the difficult selection of the evaluation methodologies best adapted to the situation at hand, add to the general confusion of the public and policy-makers (Owens et al. 2002) about the ability of, and role to be played by, scientific actors. In effect, stakeholders expect input from science which is as close as possible to evidence, knowledge and certainty. In that sense, the commonly shared recognition that none of these can be delivered thoroughly in the case of SD-evaluations risks reaching its limits.

As stated above, badly governed controversy can limit the credibility of the scientific actors in the eyes of policy-makers and the public alike, and the inverse relationship seems true too. It should be noted, however, that even if the difficulties arising from the call for integration often do not allow the definition of an unambiguously embraced evaluation methodology, this does not necessarily affect the overall quality of the evaluations’ outputs.

Credibility is also influenced by the level of scientific institutionalisation of the domain under study: the more assessments can build on existing knowledge and on a well-constructed and organised scientific community, the more credible an assessment that anchors itself on these foundations appears. Integrated assessments in the realm of SD are necessarily conducted in a multi- or inter-disciplinary fashion. Such interdisciplinary scientific collaborations are often configured individually and temporarily around the domain under scrutiny; in many such domains, science actors have not built up an ‘institutionalized’ base, i.e. the epistemic community has not yet constructed formal relationships such as interdisciplinary scientific journals, societies, chairs…, and the field of study as such is not standardized by the construction of an institutional base (e.g. it is not recognized by official funding agencies as a ‘discipline’). As a consequence of the lack of a common and solidified knowledge-base, with its formal interlinkages and typical science institutionalisations, decision actors cannot necessarily recognize the state-of-the-art in these science domains. They lack a yardstick against which they could compare the
evaluation’s use of what the evaluation asserts to be the common knowledge base, and against which decision actors could compare the assessors’ integration into the institutionalized mechanisms of their field of study. Furthermore, as there are also no unequivocal foundations at the level of ‘integrated assessment’ as a field of study, such as common methodological foundations and developments, a shared handling of uncertainties, a standardized treatment of system boundaries…, these interdisciplinary assessments mostly reconstruct these from scratch for their specific study domain. These temporary or thematic recombinations of assessment methodologies, building on a series of disciplinary developments, the fact that their maturity might change from one domain of study to another and from one study to another (Eckley 2001), as well as the fact that these recombinations are necessarily constructed and discussed at a meta-disciplinary level, might induce an image of arbitrariness, which can lead to a rather poor credibility of such evaluations per se.

Integration and salience

However, assessments might gain considerably in salience as a consequence of the very problems, developed above, of building credibility for inter-disciplinary integrated assessments. Most basically, the recurrent lack of institutionalized funding schemes (for instance, through the national science foundations) in these domains of study translates into a necessity to use third-party funding, and thus to link closely to the policy actors and their institutions. These linkages to the needs and demands of policy actors induce a relatively high degree of salience. In parallel, inter-disciplinary methodological constructions and studies imply a propensity for openness to consider different, controversial, opposing… knowledge-bases and referents; in the first place, those of other disciplines, but also those of non-scientific actors in the policy domain. It is easier and more common for integrated assessments to make a further step towards transdisciplinary settings than it might be for less integrated and multi-dimensional assessments. Integrating and actively referring to a large array of different types of knowledges can in turn improve the salience of the assessment. Finally, the individual (re)construction of methodological bases, i.e. of the integrated assessments’ methodological approaches, around the domain under study can be a further element which improves the salience of an assessment. Integrated assessments are often directed towards policy questions, and thus comprise a level of responses which reverberates more easily with other policy actors’ referents; e.g. they include policy recommendations. While such policy recommendations can be positive for the salience of the assessment, they also represent a potential danger to its credibility.
In general, integrated assessments tend to be accountable to policy actors more than to their scientific community (which might actually be relatively non-existent or poorly developed). From this, assessments naturally develop a relatively high degree of salience, but can lack credibility.

Integration and legitimacy

Integrated assessments pose a particular challenge to legitimacy. Because most of the ‘integrational’ boundaries need to be based on a specific consensus, established specifically for the assessment at stake, legitimacy (i.e. procedural fairness) is unavoidable. The lack of traditionally, e.g. scientifically, constructed boundaries to the assessments calls for integrating the configuration of the boundaries into the assessment process itself. However, the high expectations and the fundamental necessity to develop adequate procedural responses to define the assessment as well as its approach can seriously jeopardize the feasibility of the study. The more controversial and/or unfocused the domain (e.g. climate change, GMOs…), the more pressure there is to develop a very robust and fair process to steer, configure and control the assessment. Such procedural fairness can be hard to achieve, and even more so to sustain over a longer period of time (e.g. in the case of periodic assessments).

The principle of integration is thus both a chance and a risk for assessments to perform at a satisfying level against the challenges of the L,C,S-frame. The relative absence of a common and solidified scientific knowledge base with its institutionalizations, as well as the necessary inter-disciplinary orientation of such assessments, poses challenges to integrated assessments, but also conveys opportunities to take into account new knowledges and to link the assessments more freely and directly to the policy domain and policy questions. As a consequence, integrated assessments call for a very high degree of constructed and balanced procedures to define and conduct them, which again can be both a chance and a threat to the legitimacy of the evaluation.

3.2.2 Participative assessments : L,C,S-criteria and the principle of ‘participation’

Participation and SD-assessments

The linkages between participation and SD are wide-ranging and growing, and can be analyzed along a
number of different lines (Van den Hove 2001): political, societal, procedural, scientific, technical… In
effect, several rationales can be distilled that help clarify the need for participation in SD-assessment
exercises.
In the context of environmental assessments, Van der Sluijs (2002 : 135) categorized these calls as
follows (for a similar analysis in the case of climate change, see Jürgens 2002) : “The instrumental
rationale reads that participation may decrease conflict and increase acceptance of or trust in the
science that feeds into environmental management processes. The normative rationale reads that the
processes of environmental assessment and environmental management should be legitimate. The
substantive rationale is that relevant wisdom is not limited to scientific specialists and public officials”.
If such rationales also apply to the more integrated issue of SD, and there are strong arguments that
they do, then participation processes might be one keystone to constructing evaluations. It appears that
the presence or absence of certain characteristics of the object under study determines the necessity and
opportunity to apply participatory processes. The more complex and multi-dimensional the issue at
stake, the more pressing the calls for participation become at the level of the three rationales. Or, as Van
den Hove (2000 : 464) states: “(…) participatory processes can be shown to emerge as logical
consequences of the complexity and indeterminacy dimensions of environmental issues”; more precisely,
she draws attention to the fact that the emergence of participation is highly linked to O’Connor’s
“irreducible plurality of pertinent analytical perspectives” (1999 in: Van den Hove, 2000 : 464).

The call for participation as a way to integrate such multiple analytical perspectives, in line with a
political and administrative democratization of public affairs, raises conflicts that occur regularly
between involved actors (Rauschmayer 1999). Renn (1995) identified three types of conflicts with regard
to environmental issues:
- Conflicts based on incommensurability of values, i.e. conflicts that emerge from
antagonistic interpretations of principles (e.g. on the right to use ‘commons’)
- Conflicts based on factual arguments
- Conflicts based on a loss of public confidence in institutions or actors to manage
environmental issues

In the decision domain of SD, these conflicts link together to form a complicated pattern of societal
interactions between institutions and stakeholders: while lay stakeholders mostly appear to refer to the
first type of conflict, agencies and institutions interpret their discourse differently. In situations of
conflict resolution, the latter focus mainly on technical issues, with the consequence that lay stakeholders
have to comply with this level of argument: “(…) citizens who participate are thus forced to use first
level (factual) arguments to rationalize their value concerns. Unfortunately, this is often understood by
experts as ‘irrationality’ on the part of the public” (Renn 1995 : 357).
Generally speaking, the risks of over-emphasizing the capability of participation to pragmatically
re-define SD are numerous and should be taken seriously, as the following demonstrates: “For if
we have reached consensus, we agree upon what development we call ‘sustainable’. So, development in
accordance with consensus is by definition ‘sustainable development’. But does consensus guarantee
some kind of stability? No. We can only try to lower the risk of future problems by considering all the
possible side effects we know, or we want to know. Negotiations seem the best way of doing so”
(Hourcade et al. (1992), quoted by De Graff et al. (1996 : 210)).

Rather than a real solution, participatory evaluation processes thus seem to be the lesser evil. In effect,
there is a parallelism between the expected theoretical advantages of participatory processes for evaluation
and the pragmatic difficulties occurring during their implementation (see notably Van Asselt et al. 2005),
and above all during their ‘standardisation’. However, even if we suppose that the principle of participation
has been efficiently applied to a specific evaluation process, this would not be sufficient to ensure the
integration of such information into decision-making.
On the one hand, it is obvious that the participation of stakeholders will alter their perception of the
evaluation’s characteristics and attributes. Whether their perception with regard to salience, credibility
and legitimacy alters positively or negatively depends largely on the conditions of implementation of the
participatory processes. Experience shows that flaws in implementation can originate in seemingly
anodyne events, or small inattentions of the process leader (Van Asselt et al. 2005). A further
decisive element of the impact of evaluations on decisions is surely the proximity of the participant to the
implementation not only of the evaluation, but of the decision. The more a participant (or group of
participants) is concerned by the effects of the decision’s implementation, the more attentive, prudent
and resentful (or, on the contrary, open, willing and curious) they will be when led into situations that
could alter their perceptions.

Participation and credibility

The impact of participation on the perception of the evaluation’s credibility is largely a matter of the
configuration of the process itself.
On the one hand, the evaluator’s credibility should be cemented when stakeholders directly
experience the means and capabilities the evaluator deploys to guarantee a sufficient level of
accuracy for the analyses. For instance, in the case of scientific evaluators, stakeholders confronted
directly with the multiple layers of peer review, quality control, sensitivity analysis… carried out
in ‘normal science’ will necessarily alter their perception of the credibility of the scientific input to the
assessment. Simultaneously, gaining knowledge of scientific quality assurance opens a new field of criticism
for participants, as they will perceive more clearly the residual part of human interaction and judgement in
these seemingly standardized scientific processes. The number and extent of uncertainties attached to the
developed information will also appear more clearly. Whereas these might well be comprehended as
irreducible, the awareness of such irreducibility might provoke a more generalized mistrust in Science
and in evaluation.
Such potential to lose instead of gain credibility in open processes is of course not restricted to the case
of scientific actors acting as evaluators. Gaining knowledge about the internal organisation and
mechanisms of evaluation actors will enhance comprehension of the constraints attached to the
realization of the evaluation. It might, however, also result in clearer questioning of the ways and
means used by the evaluators to manage such constraints.
Questions linked to participation thus also include the configuration of the assessors: to which
institutions they ‘participate’, and more specifically to whom they are accountable (Eckley 2001);
whether assessors are accountable primarily to their scientific peers or to policy makers influences in
many respects the perceived credibility of the assessment.

Participation and salience

Participatory evaluations are mostly set up in order to improve the integration of the stakes of the
concerned actors. The perception of salience should thus be considerably improved, as actors get the
chance to properly integrate their agendas and clarify with the evaluators their interpretations of the
stakes at hand. Being asked to state and make explicit their needs and wants, stakeholders recognize the
multilateralism of the evaluation’s result.

Obviously, however, participatory evaluations rather enhance competition between agendas and stakes,
and above all can internalize such competition into the evaluation process itself. Even if this internalization
of conflicts is part of the objective of the participatory process, namely in order to reach consensus in
advance on the right balance of issues and stakes, the risk remains that actors perceive the result (i.e.
the evaluation’s product) of the process as insufficient, inappropriate, or worse, ineffective with regard to
representing their stakes.
Most formalizations of models, when integrating variables into the algorithms of the evaluation
instrument, require relatively neat cause-effect linkages. They are less capable of running on
consensual “if - then maybe” input. Consequently, the inherent logic and mechanism of the evaluation
instrument itself might impose limits on taking into account ‘non consistent’ or ‘irrational’ points of
view through very large, open participatory processes.
There is no reason to think of participatory processes as a no-man’s-land where the participatory
integration of stakes would not be sensitive to power and political negotiation. If their role is, at best, to
mitigate such power relationships by internalizing them explicitly, they can do so only by
installing procedural rules, which certainly are necessary, but again: not sufficient. Eckley (2001) states
that the salience of assessments is greatest if stakeholders gain a substantive role in assessments, on top
of the currently spreading, merely procedural meta-roles: a number of assessments in the realm of SD have
integrated quite efficiently the calls for participation of stakeholders at meta-level, e.g. by implementing
so-called users’ committees to which assessors report occasionally, and with which they can debate,
during the execution of the study. Eckley found a series of studies which enhanced their salience by
asking participants to take an active part in some of the assessment tasks.

Participation and legitimacy

At this level of defining the procedural rules for stakeholder representation and the integration of stakes,
the perceived legitimacy of the process is of course itself at stake. Up to a certain point, participation
enhances legitimacy: processes aimed at improving the information base for decision situations do so
quite automatically. However, whether this momentarily improved legitimacy will be durable is a
matter of several conditions. On the one hand, legitimacy needs time to build up; more than the other two
attributes, legitimacy is thus highly dependent on the continuity of the process. Secondly, legitimacy
depends directly on the quality of the process, and thus necessarily on the quality of the people animating,
guiding and accompanying it. It is thus also the quality of the ‘animators’ (both at the level of
institutions and at the level of individuals) and their capacity to durably lead such processes to an end
that will influence the perception of legitimacy. Moreover, enhancing legitimacy needs sufficient time
and opportunities to develop and adapt the process: openness, transparency and potential for feedback
can be considered necessary conditions.
At another level, the question of who is invited to participate in the assessment is of major importance to
its legitimacy. Especially in many SD-policy domains, where stakes might be opposed, the right choice
of representatives is important. Excluding overly opposed visions, in order for instance to facilitate
consensus-building on values or to prevent too strong a controversy from entering the assessment
process, might be crucial for assessment processes to be implemented at all. Simultaneously, barring
non-conventional worldviews from the process might jeopardize its perceived fairness.

Needless to say, at the level of participatory processes too, the perception of each attribute against the
others is itself a matter of trade-offs, which these processes cannot bypass. For instance, enhancing
legitimacy by installing repeated long-term evaluation processes that attribute a high degree of
proceduralism and use feedback mechanisms works against actors’ perceptions of salience: in the longer
term, each stakeholder will adapt the profile, and eventually smooth out the edges, of their stakes in the
process of achieving the general consensus with regard to both process and product.

Both of these principles of SD thus impose a series of consequences on the acknowledgement to be given
to the usability criteria (L,C,S). The policy domain as such thus already preconditions, to a certain
extent, the extension and interpretations to be given to L,C,S. In the following sections, we will explore in
more detail the meaning of the L,C,S-frame for ISD.
First, however, we will review the existing assessments of ISD-influence on policy making. In
effect, a small number of studies exist which took a research question identical to ours as their point of
departure.
Second, we will explore the application of the L,C,S-framework to ISD. For each of the criteria, we
identify and discuss a series of issues (i.e. influences, flaws, interrogations…) which arise once the
framework is used to assess ISD-usability.
Finally, we will compare both points by confronting the existing studies with our assessment framework.

3.3 Approaches towards assessing influence of ‘indicators for
sustainable development’

While some of the roles of indicators, as well as their theoretical insertion into policy making, are very
similar to those of evaluations, there is also a series of evident differences (Gudmundsson 2003) which
necessitate the following sections, where we refocus the identified L,C,S-criteria on the specific field of
ISD.

One of the main differences between evaluations (or assessments) and indicators lies in the
‘permanent’ character of most indicator processes. Even if evaluation schemes can be defined to trigger
specific evaluation exercises in a recurrent way - such as Environmental Impact Assessments
being triggered at each occurrence of a building permit demand - the evaluation in itself is
particular to the object to be evaluated. Indicators, on the other hand, are ‘permanent’ and stable through
time as they benefit from periodic updates, at least in principle.

Second, evaluation exercises are specifically directed towards an evaluandum. The evaluation questions
and processes are thus constructed around the object of the assessment. Indicators, even if linked to a
clearly identified concept or policy domain (e.g. sustainable development, mobility, poverty…), are
in the first instance developed as monitoring devices, or: “(w)hile ‘evaluations’ usually attempt to estimate
various outcomes of a specific activity or program, ‘indicators’ are often much broader in scope,
addressing a wide range of conditions in various natural and human systems (e.g. Sustainable
Development)” (Gudmundsson 2003 : 2).

Finally, evaluation exercises comprise not only an assessment of the state, but (mostly) a thorough
analysis and interpretation of the observed evolutions. Indicators largely rely on an analytical
interpretation by the observer. Even if graphs and data are commented, the entire cause-impact-effect
chain is rarely exposed in the case of indicators. The aim of indicators, in our context, is not necessarily
to search for the underlying causes of the observed evolutions, but rather to trigger further analysis (or
evaluations) if the observed evolutions are deemed ‘suspicious’ or alarming, or not sufficiently understood.

These and other, perhaps more formal, differences between evaluations and indicators thus call for a
specific translation of the information-use criteria identified above from evaluations and assessments to
indicators for sustainable development. Whether these differences between the two objects induce a
principal difference in the usage functions is not clear from the literature. Arguments go both ways, at
least for the instrumental use of indicators or evaluation results. On the one hand, results from
evaluations are deemed more directly usable than indicators because they are constructed upon the
specificities of the evaluandum, and hence meet the salience criterion more readily. On the other hand
(see for instance Gudmundsson 2003), indicators, with their repetitive and quantitative character, are
thought to be more easily used in an instrumental fashion because they need no lengthy translation and
interpretation of the trends shown, while their repetitiveness positively influences the emergence of
credibility.
It is of course likely that both logics apply, and that the difference in interpretation of the instrumental use
function lies in the evolution of the usage of information for decision-making through time (see Innes
1998): indicators could be more prone to instrumental usage than evaluations, provided the users have
had the opportunity to develop confidence in the indicator construct, in the authors, in the data used…

But before discussing such use functions in detail and linking ISD to the above-identified L,C,S-criteria,
we will hereafter account for some of the existing meta-evaluations and assessments of ISD. The aim is
to show the broadness of the debate as well as the differentiated conclusions the authors reach, setting
the stage for the subsequent discussion of applying L,C,S-criteria to ISD.
The aim here is of course not to develop a comparison of evaluation-use against indicator-use. Where
differences between both information processes apply and are obvious, we will show them, but we
will concentrate foremost on discussing at the level of ISD, hence the specificity of the SD policy
domain.

3.3.1 Selected approaches towards assessing ISD use

Research on knowledge- and information-use is a widespread and multidimensional field of enquiry in
the social sciences, investigated by many different disciplines and approaches. Not surprisingly, then,
some investigations have also been conducted specifically in the domain of ISD. In the
following section we will account for some of the works in this field. The objective here is to present the
material necessary at a later stage to identify how far the criteria used in some of the existing indicator-
use assessments can be captured at the meta-level, and eventually be ‘cognitively aggregated’, with the
L,C,S-criteria. In other words, the question becomes: can L,C,S-criteria capture the criteria and
conclusions identified in meta-evaluation studies on the use of ISD? Can the findings of these other
meta-evaluations of ISD-use be translated or explained with the L,C,S-criteria?

Evaluative exercises which try to assess specific ISD-initiatives are of growing interest to the
commissioners of ISD-schemes as well as to science. On the one hand, the cyclicality of ISD-processes,
i.e. the revision of the schemes beyond the simple periodical updating of the data, is more and
more acknowledged. And as many of the more important ISD-initiatives were launched in the
second half of the 1990s, they are now entering their first structural revision phase, if they
survived the arrival and success of ‘newer’ instruments of policy making in the policy domain of SD.
On the other hand, after the initial general enthusiasm of the 1990s for developing ISD, developers and
policy makers quickly had to face serious criticism of the usefulness of such, often expensive and
extensive, processes, which can easily be attacked for their non-apparent instrumental
value-added. Other possible use-functions thus needed to be translated specifically to ISD, generating a
first interest in the usefulness of ISD. This interest was, however, rather rarely developed into empirical
studies. As a secondary consequence, today, the question of the usefulness of indicators in general, and
more widely of reporting schemes, for policy-making is gaining interest51 also in policy domains other
than ISD.

In the following, we thus synthesize a series of use-evaluations of ISD. Subsequently, in section 3.3.2, we
also discuss the linkages which can be established between these different exercises, and develop how far
their conclusions relate to the L,C,S-criteria.

Analysing and comparing the linkage between ISD-processes and policy systems in Malaysia and Australia52

We have already partially accounted for A. Hezri’s (2006) work in the previous chapters. In this section,
we concentrate on his empirical analysis of the Malaysian and Australian states’ indicator endeavours.
The study does not exactly assess or evaluate indicator use, but tries to identify (through qualitative
interviews with a series of policy-makers) and analyse the meaning different actors in the states’ ISD-
initiatives give to the conducted ISD-processes. His approach is nevertheless important to the subsequent
work as it shows where and how users and policy actors claim to hook (or not) into ISD during policy-
making.
Drawing on Weiss’ (1997) typology of research- and information-use, Hezri (2006 : 175) identifies 5
generic types of indicator usage: “instrumental use: used for action and problem solving where there is
a linear relationship between indicators and decision outcomes; conceptual use: used for enlightenment
where indicators sensitise users’ understanding; tactical use: used as a delaying strategy, substitute for
actions and criticism deflection; symbolic use: ritualistic assurances whereby indicator production
implies a sign or symbol of other reality; political use: the content of indicators becomes ammunition to
support a predetermined position of a user”.
With these 5 use-types in mind, Hezri further uses the concept of policy learning, which in this context is
defined as the increase of knowledge about policy making, and the influence of indicator processes on
such policy learning. To clarify, policy learning has to be understood here as a sort of meta-apprenticeship
of the way policy making is organised. Hezri is thus analysing (at least partly) at the level
of the instrument ‘indicators’ (as compared to other instruments for policy making) and its influence on
policy making, and not comparing the use of different indicators. He subdivides policy learning into 4
differentiated mechanisms (2006 : 178), including instrumental learning (i.e. learning to use, adapt,
implement indicator programmes), governmental learning (i.e. learning to create or adapt organisational
structures and processes in order to produce or adapt indicator programmes), social learning (i.e. learning
by a wider policy community to (re)think the social construction of policy problems) and political
learning (i.e. learning by a policy coalition to use or adapt the policy domain and to influence bargaining
situations and agenda setting). These 4 types of policy learning are spread across 2 distinct levels.
Instrumental and governmental learning are in direct linkage to the instrument of learning, i.e. in our case
the indicator programmes, whereas political and social learning are situated at the level of learning about
the policy domain, i.e. in our case at the level of ‘sustainable development’.

51 A very convincing argument showing this widened, general interest in analysing the policy-use of indicators can be read
in the latest Framework Programme of the European Commission, where 2 ‘topics’ are specifically addressed towards the
exploration of this issue and adjacent questions, while not being specifically linked to the SD-policy domain: FP7-SSH-
8.6.1 ‘How indicators are used in policy’ and FP7-SSH-8.6.2 ‘Developing better indicators for policy’.
52 The exercise has been undertaken in the context of a doctoral thesis, submitted in October 2006 at the Australian
National University: A.A. Hezri (2006), Connecting Sustainability Indicators to Policy Systems.
The findings of applying this analytical grid to both Malaysia and Australia are partly consistent with
our theoretical analysis above. In both cases, even if wide differences exist between the two countries,
foremost because of the considerable differences in the political and democratic culture of policy making,
there is – among many other conclusions drawn by Hezri – no apparent and identifiable link, or even
correlation, between the way indicators are used (e.g. in an instrumental way) and the way this usage
impacts the further development of the policy domain (for instance, a change in how stakeholders
act in policy negotiation). Linkages between indicator use and policy learning are thus widely
unforeseeable, and appear to be non-linear at best. This impossibility to identify valid recipes for
enhancing an indicator’s influence on policy learning with regard to the policy domain is unsurprising,
though. Hezri further noted that instrumental, direct use of indicators is acknowledged in both
countries to be very rare, whereas conceptual use (i.e. enlightenment) of indicators is pointed at by the
interviewees as largely responsible for environmental policy development (in the case of Malaysia) and
the development of the sustainability policy discourse (mostly, in the case of Australia). In parallel, at the
level of policy learning, Hezri identified that instrumental and governmental policy learning prevailed
over political and social learning in both countries.
However, and here Hezri’s qualitative interview series of Australian and Malaysian policy makers is of
importance for the present work, the author tried to identify a series of conditions (see below) which would
contribute to the development of a (non-deterministic) link between indicator use and policy learning.
This series of general conditions identified by Hezri - which would promote that indicators
are used (even if it is not programmable how they will be used) and that this use influences policy
learning (even if the type of policy learning is not foreseeable either) - is not set at the level of the
indicators or even of the instrument, but in the wider context of the organisational arrangements existing
within institutions, which contribute to the development of a favourable policy-making culture
(i.e. “institutional cognition” (Hezri 2006 : 341)). In short, Hezri sees the source and determinants of
indicator use in the development of societal institutions, including public administration, which promote
policy learning in its variable forms. More particularly, he emphasises the importance of the democratic
organisational form (of which federalism is identified as a particularly unfavourable democratic setting
for the necessary coordination between policy actors), the policy content within a country’s sustainability
discourse (for instance, it is assumed that the more environmental the discourse, the less complex and the
more easily indicators could be used instrumentally for policy learning), and the existence and combination
of a series of favourable ‘meta-policy frameworks’53 (such as the ‘transparency policies and programmes’
to introduce and favour ‘evidence-based policy making’). While the first two points raised are not of
particular interest here, the third one reveals some importance in the sense that it attaches utmost
importance to the administrative and institutional practices of policy-making as a main criterion for
indicator use. The fact that indicator processes, in many institutional contexts, themselves belong to the
collection of ‘meta-policy frameworks’, and thus that the argument presented could be reversed, is
acknowledged by Hezri (2006 : 336): “in institutional terms, indicator systems not only function as a
mechanism for informing rules and codes of conduct, that are also constrained by these. Thus,
sustainability indicator systems are embedded in institutions”.

The policy use of ‘Transport&Environment’ TERM-indicators at European level54

Gudmundsson (2003), while starting from the same initial question on the use of indicators for policy
making, follows a different empirical construction. His work builds on document analyses, trying to
trace which indicators were used in which policy documents. The final aim of the endeavour is to
use the results of the document analyses and check for patterns of use in order to construct a (2003 : 2)
“framework to guide empirical research in the use and impact of environmental integration indicators on
policy making”.

The context of Gudmundsson’s study is also very different, so we account for it briefly. Gudmundsson
focuses on a specific policy domain’s indicators. TERM (Transport and Environment Reporting
Mechanism) is a process launched in 1999 by the European Commission in order to assess the integration
of environmental and transport-related policy objectives, representing the transport-related pillar of
the so-called Cardiff process: the TERM-indicators are particularly focused on the interaction between
the policy domains of transport and environment. Gudmundsson’s assessment is particularly interesting
and timely as TERM is often cited as the model case for the development of CEC integration indicators,
which other policy domains (e.g. energy, regional development…) should eventually imitate in order to
assess the integration of environmental issues in any sector policy. TERM was repeatedly mentioned as
a blueprint instrument in the wider Environmental Policy Integration (EPI) strategy.

Gudmundsson, following the above-cited seminal research on evaluation use (Weiss, but more
particularly Shulha et al. 1997), divides use into 3 potential functions (2003 : 7): instrumental or direct
use (i.e. “use of variables and values of certain indicators in policy making”), conceptual use (i.e. “use
of key elements in the conceptual frameworks in conceiving a certain policy or policy change”) and
symbolic use (i.e. “the mentioning of certain indicators or frameworks in a relevant policy context,
without any discernible impact on the policy in which the mentioning takes place itself”). Translated into
the context of the document analyses performed, these functions become the following. He detects direct
use if a TERM-indicator, or the value of a TERM-indicator, is mentioned explicitly in one of the
analysed documents. Conceptual use is ticked if the identical or an adjacent policy framework (i.e. the
approach labelled ‘TERM’s seven policy questions’) is detected in the document as it was developed for
the TERM-indicators. Symbolic use covers any mention of the TERM-process in the analysed document.

53 Hezri calls ‘meta-policy frameworks’, or “policies for making policies” (2006 : 334), the series of horizontal policy
programmes meant to configure policy-making processes, or in more general terms the actions of public bodies. A typical
example, in a European context, would be the policy programme around the Aarhus convention on the right of access to
(environmental) information, or, at Belgian federal level, the programmes around the ‘simplification administrative’.
54 Gudmundsson H. (2003)
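Gudmundsson’s three detection rules lend themselves to a simple rule-based sketch. The function below is a hypothetical illustration only: the indicator keywords are invented examples, not drawn from the actual TERM indicator list:

```python
import re

def classify_use(document_text):
    """Tag a policy document with Gudmundsson's three use functions.

    Illustrative only: the indicator keywords are invented examples,
    not the actual TERM indicators.
    """
    text = document_text.lower()
    uses = set()
    # Symbolic use: any mention of the TERM process at all.
    if re.search(r"\bterm\b", text):
        uses.add("symbolic")
    # Conceptual use: the 'seven policy questions' framework is reproduced.
    if "seven policy questions" in text:
        uses.add("conceptual")
    # Direct use: a specific indicator (or its value) is cited explicitly.
    if any(name in text for name in ("modal split", "transport emissions")):
        uses.add("direct")
    return uses
```

On such a coding scheme, a document citing a modal-split figure in a TERM context would be tagged both ‘symbolic’ and ‘direct’; note that the symbolic rule fires far more easily than the other two, which is consistent with Gudmundsson’s finding that symbolic mentions dominate.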
On the basis of the document analyses, Gudmundsson comes to a series of unequivocal conclusions.
Hardly any direct use was detected, basically explained by the rather young history of TERM and by its
nature as a hybrid between information and monitoring schemes, missing the accountability mechanisms
necessary to develop a stronger impact. Furthermore, conceptual use of TERM-indicators could not be
unambiguously detected either, which could stem from the small number of documents analysed as well
as from their weak timeliness. Finally, almost all of the uses detected in the document analyses were
symbolic in nature, simply mentioning the existence of the TERM-process.
While Gudmundsson’s analysis is of some importance here as a representative of the more classical
empirical and quantitative ‘evaluation use’ literature, his conclusions surely suffer from the limited
number of documents analysed (basically, only one strategy document and a white paper), and even more
so from the nature of the documents, which were neither sufficiently varied (for instance,
back-office notes would have been considerably better suited to detecting direct influence), nor sufficiently
spread in time (all were published quite soon after the first TERM-report, thus probably not
leaving indicators sufficient time to percolate into policy-making practice).

Whatever the limits of the conclusions presented, the interesting issue raised by Gudmundsson is his attempt
to link indicator uses to a limited number of ‘indicator frameworks’, which would be better termed
‘interpretation frameworks’, in order to explain the potential nature of the interlinkage between indicators
and policy-making. The framework model he develops has two distinct levels: on the first level, utilization
frameworks and conceptual frameworks; on the second level, distilled from the utilization frameworks,
control frameworks, monitoring frameworks and information frameworks.
Conceptual frameworks refer to the interpretation models developed to structure and select indicators,
such as the OECD’s Pressure-State-Response model cited before. They refer to the internal organisation of
the indicator sets or indexes.
Utilization frameworks describe the generic way developers assume that their indicator sets
correspond to policy-making tasks. They show the nature and presence of the accountability
mechanisms to which the indicators are assumed to contribute. Gudmundsson distinguishes three such
accountability mechanisms. Control frameworks refer to indicators which are developed to steer the
policy domain (or regulation) according to its objectives, targets, benchmarks or thresholds. Monitoring

frameworks are developed to follow the development of a situation within the policy domain, searching for
instance for policy success or failure, in order to provide policy makers with basic feedback on a
policy. Information frameworks are the vaguest of the three sub-models, as they are merely meant to
inform, in a loose way, a wide audience of policy makers as well as the public on the evolution of a
situation or item. The three utilization frameworks implicitly show a gradation with regard to the nature of
the use of indicators: control frameworks are more prone to generate direct, instrumental use of
indicators, whereas information frameworks lend themselves more readily to loose conceptual or
symbolic uses of indicators.
What Gudmundsson’s typology nicely shows is that there is no better or worse way of using indicators:
all utilization frameworks are to be considered of equal importance to policy making, merely
interfering differently, by nature and also by timing, with different policy-making tasks or moments. With
his framework setting, he further supposes that there is a link between the accountability mechanism
the indicator scheme aims to contribute to (be it monitoring, controlling or enlightening), and the
predominant type of utilization of the indicator concerned (instrumental, conceptual or symbolic).

Evaluating the process of ISD development in Finland 55

Finland was among the first countries to develop a national ISD-reporting scheme. In 1997,
Finland’s Environmental Administration was commissioned to initiate an ISD process on the basis of
expert and stakeholder participation. Without going into the details of this process here, or describing its
output in terms of the indicator set, we briefly account in the following for two exercises carried out by one
of the Finnish civil servants leading the process, to assess the process and the use of the ISD scheme.

The first exercise, geared towards assessing the ‘use’ of ISD, is based on the more traditional approaches
of qualitative use evaluations. Rosenström (2002) undertook in 2000 a round of interviews (28
interviewees) with a sample of potential users of the first version of the Finnish ISD report, published in
2000. The users considered were nearly all politicians active in the domain, mainly members of the
Finnish and European parliaments, plus the minister of the environment. While Rosenström does address these
users as policy makers, we think it better to describe them as decision-makers in the policy domain; as
explained above, strong differences exist in the information needs and usages of different categories of
policy makers (politicians, administrations, agencies…). The methodological setting of the study is quite
traditional in the sense that the selected users were exposed to a face-to-face interview to determine their
level of knowledge of and acquaintance with the ISD publication. No categorical differentiation was
developed to account for different types of usage (e.g. instrumental, conceptual, symbolic, political…),
as we saw in the above-mentioned exercises. The findings were then, with regard to the strength and
percolation of the Finnish ISD scheme, very meagre: a very large majority (5/28) of the interviewees

55
Rosenström U. (2002), The potential for the use of sustainable development indicators in policy making in Finland,
Futura 2 : 19-25. Rosenström U., Kyllönen S. (in press), Impacts of a participatory approach to developing national level
sustainable development indicators in Finland, Journal of Environmental Management.

admitted not having made any use of the reports at any moment, most of them not even knowing exactly what
the report consisted of (even if knowledge of the report’s existence was somewhat more
widespread). Rosenström assesses this as particularly alarming, especially as most of the interviewees
were recruited from environmentally related policy domains (e.g. being members of a parliamentary
commission concerned with environmental issues), and blames, not the general hypothesis that
indicators would be useful for this category of policy makers, but partly the lack of investment on the part of
the indicator schemes in promoting themselves. This is probably not a wrong statement, as there is in effect
a tendency to prioritize analytical and scientific soundness over investment in the promotion of ISD
reports; but the fact that many respondents admitted having no further knowledge of any other ISD
(including such widely communicated ones as the Ecological Footprint) seems, to our understanding, to
point towards a certain impermeability of politicians to indicators, data, figures and the
like. The fact is that politicians are probably more concerned with using information on discourses, positions
and arguments than on facts or the evolution of phenomena. Widening the audience to the high- and mid-level
administration and to the executive branches of Finland’s policy makers would have been a good reflex
to check how far the politicians’ (non-)uses of ISD can be generalized.

While this first assessment of the use of the ISD process was very exploratory, non-exhaustive and
ultimately not sufficiently investigative to give us any clear understanding of the situation, the second
evaluative attempt by Rosenström (et al., in press) took a very different, and original, direction. The
emphasis here was on an assessment of the participatory construction of the Finnish ISD process, with the
aim of seeing whether the process could be labelled ‘transparent and fair’. As mentioned above, the Finnish
exercise relied largely on first-order stakeholders (interest groups, civil servants…) and expert
committees, involving around 100 persons at various stages of the process. The author particularly
stresses this fact, and mentions that the aim was not to develop a large-scale participation process, but
to implement what could be termed a ‘participatory technocracy’, with the objective of integrating as many
differing expertises on issues linked to SD as possible without drowning the exercise in difficult consensus-
building. Consensus-building between experts, whether from different ‘employers’ or not, seems
implicitly thought of by the author as being easier than with the broader public.
The seven criteria, derived from other assessments of participatory processes (Petts, 1995; DETR, 2000), read
as follows: the representativity of the participants; the transparency and openness of the process to the
participants; the timing of the participants’ involvement with regard to the overall timing of the project; the
clarity of the definition of the tasks set for the participants; the compatibility of the programme and
mandate for participants with the objectives pursued by the participants; the emergence and degree of
shared awareness and common knowledge generated by the process; and the legitimacy of the end-product
and the participatory benefits accruing from the consensus.

As acknowledged (in different terms though) by the author, these criteria do not allow a clear
link to be made to the objectives pursued with the specific participatory setting, nor an assessment of the extent to which the
participatory design allows these objectives to be fulfilled. Participatory processes can be used for
multiple purposes, including ‘information gathering’, legitimization, constructing societal empowerment,

learning by doing… Not all of these objectives call for the same ‘amount’ and type of participation,
and differences in the dosage of transparency and fairness are certainly justified. In this sense,
Rosenström points out that, according to her, one of the main participation-related shortcomings of
the Finnish process is its rather non-programmed and non-targeted participatory process, especially at
the level of the implication of users and the wider public, which lacked an identification of objectives and a
conscious construction of the process to reach those goals. In particular, with regard to the implication of
the users (who were identified as being the politicians!), participation could have been extended to
encompass other objectives than the sheer integration of expertise and the diversification of the knowledge-
base. According to Rosenström, this failure to seek the upstream familiarization of users with the
ISD scheme by integrating them into the configuration of the tool was acknowledged as problematic by the
officials and much better anticipated in the subsequent rounds of Finnish ISD development (2002 and
2004).

Political uses of social indicators, an application to ISD 56

On a more original track, Boulanger (2006) constructs an interpretation of ISD use which is built notably
on the social indicators literature. In his reading, the potential policy uses of ISD are linked to a
sequential and temporal representation of policy making. While largely agreeing with most analyses in
the field as regards the relative non-use of ISD for policy making, the author restates the problem
of such assessments: the link between indicators and their usage operates at the level of discourse-setting
and -development, at the level of influencing bargaining, and at the level of problem-solving.
Boulanger attaches the different types of impact to different understandings or models of policy- and
decision-making. Instrumental usages stem from a rationalistic, positivist interpretation of policy-making
(i.e. problem-solving). Communicative usages of indicators can be understood as an equivalent of
conceptual use of ISD, contributing to the reframing of the problem and the construction of a common
understanding of the situation; they are linked to a discursive-constructivist understanding of policy-
making. Thirdly, indicators can be used in a strategic comprehension of policy processes, linked to
bargaining and voting processes.
The three types of indicator utilisation thus refer to three different comprehensions of policy-making:
policy-making as a result of information treatment and the comparison of alternatives; policy-making as a
collective effort to construct norms and discursively agree on how to implement these norms; and policy-
making as an ongoing bargaining effort in public conflict resolution.
Interestingly, then, Boulanger takes the ideas of the aforementioned authors a little further. First, he sees
no antagonism between the three types of utilisation: he values each one identically and accepts their
coexistence. It is this coexistence which brings the more interesting consequences: different people can
simultaneously use a given indicator X in many different ways, or as Boulanger states it, the three different

56
Boulanger P.M. (2006), Political Uses of social indicators: overview and application to sustainable development
indicators, International Conference on Uses of Sustainable Development Indicators, 3-4 April 2006, Montpellier, France.

decision- and policy-making models used are not competing ones, but each of them is “locally true,
corresponding to particular moments and/or facets in the life of social problems” (Boulanger 2006: 7).
This comprehension implies, however, if one accepts the reading that specific indicator uses can be
attached to specific policy-making concepts, that the many users refer to different
understandings of one and the same policy situation. Hence a policy situation can be understood
as a sort of sequence, an evolution: policy situations can refer first to the rationalistic model, then
to the discursive one and finally to the strategic one. Policy situations mature through time, and
each stage of their maturation process will call (among other things) for a different understanding of the
indicators used, and eventually also for different indicators to be mobilized for specific policy situations (see table
below).

Model of politics and policy-making | Rational-positivist | Discursive-constructivist | Strategic
Politics as… | Collective problem-solving | Discourses-practices | Conflicting interests
Decision as… | Calculation | Debate-deliberation | Voting - bargaining
Rationality | Instrumental (Zweckrationalität) | Communicative (in terms of Habermas) | Strategic
What is at stake | Alternative means; costs and benefits; facts | Definition of the situation; norms and values | Payoffs
Role of information | Parametric: quantifying objectives and evaluating alternatives | Heuristic: framing the problem; building a common discourse; building a public (mobilization) | Strategic: signalling-manipulation
Requirements for indicators | Objectivity; sensitivity; specificity; timeliness | Salience; communicability; dramatization | Ambiguity; flexibility
Example of indicators | Millennium Development Goal indicators; GHG emissions | Ecological Footprint; genuine savings | Current unemployment rates in most western countries

Table 3 : Three models of policy-making (from Boulanger, 2006)

Furthermore, following earlier work by Blumer, Boulanger identifies a series of stages a policy problem
passes through: problem emergence, legitimization, mobilization of a receptive public, formation
of an implementation plan, and implementation of the action. At each stage, the requirements for ISD
change. For instance, policy problems still mostly emerge in expert circles and “in small professional
or specialized arenas” (p. 7) based on evidence, hence calling at that point of the policy process for an
informational, instrumental indicator usage.

However, these sequential and temporal interpretations of policy processes are used in many contexts
(notably in many institutional contexts, where they form typical policy-making cycles), and linking them
to ISD does not in itself seem particularly groundbreaking. What is, in our understanding, of
interest in Boulanger’s work lies on another level, namely the affirmation that the existing
differing interpretations of policy situations (and thus of indicator usages) should not be comprehended in
terms of a simple competition between the policy situations for the ‘attention’ of their public, but in terms
of coevolving struggles and discourses between the policy problems. Boulanger’s interpretation thus
allows some original conclusions to be drawn on common problems with ISD (2006: 11): “(…) to date there
are not many – if any – SD indicator lists (…) on which a sufficient consensus has been reached (…).
This means that most institutionalized SD indicator sets have been so prematurely and that they will
probably remain unused and ineffective if planned in rational problem-solving policy-making
perspective. On the other hand it is likely that most of them have been adopted for strategic political
reasons (…)”. As a consequence, planning indicator development calls for conscious choices and
the development of what could be termed an explicit ‘use strategy’, which would need to be strongly
adapted to the degree of maturity of the targeted policy problem: the criteria for developing successful
and ‘effective’ indicators change according to which policy-making model is in use in the policy
domain.
During the phase of emergence of the policy problem and the parallel call for evidence, for example, during which
a rationalist-positivist interpretation of policy-making is prevalent, the indicators can be appraised against
criteria such as objectivity, sensitivity and timeliness. During the phase of discourse formulation, and the
prevalence of the discursive model of the policy evolution, indicators are instead appraised against
criteria such as salience, communicability and dramatization.
Hence, with the potentially differing roles for indicators, the criteria for interpreting their success (in terms of
usability) also have to be adapted. However, and here Boulanger’s proposal lacks some clarity,
whereas his proposed reading does in effect offer a powerful explanatory framework for indicator
(ab)use, it seems inappropriate for use at the level of designing or positioning forthcoming ISD. The
explanatory framework does not, therefore, translate directly into proper guidelines for the configuration of
ISD exercises. The reason is that if it did, it would mean that it is possible to
anticipate the evolution of a policy problem, e.g. to foresee where it will stand in some months and to anticipate
which needs the policy domain would have in order to evolve into the wanted state.

The sustainability discourse and its indicators: an analysis of the influence of indicators on discourse 57

On the same explanatory level as Boulanger, Ortega-Cerda (2005) analyses, in a very synthetic
paper, the potential uses of ISD as elements in the formulation and development of the ‘sustainability
discourse’. The originality of this analysis, compared to others which situate themselves at the
policy-making level, is that the author uses a theoretical frame to explain the role which indicators
can play, under certain conditions and in certain situations, in influencing the ‘discourse of sustainability’. We are thus not

57
Ortega-Cerda, M. (2005), Sustainability indicators as discursive elements, Paper presented at the 6th International
conference of the European Ecological Economics Society, Lisbon, 14-17 June 2005.

dealing here with an analysis of the use of ISD for policy-making, but operating at the more theoretical ‘meta-
level’.
The frame used by Ortega-Cerda, explicitly taken from Foucault, is that of ‘discourse analysis’. In
this logic, SD can be labelled a discourse, i.e. “an identifiable collection of utterances governed by
rules of construction and evaluation which determine within some thematic area what may be said, by
whom, in what context, and with what effect” (Gordon 2000, cited by Ortega 2005: 4), which interacts
and coexists with other discourses (such as security and terrorism, climate change…). As indicators
represent, in the eyes of the beholder, an objectified, science-based measurement tool, the question
posed is how far and in which ways indicators can influence the setting of the SD discourse, as well as
the discourse’s positioning in the public sphere (for instance, relative to other discourses). Ortega-
Cerda further argues that indicators can be understood to have a major influence on the SD
discourse’s performance, because ISD are tools which follow the rules set by the ‘science
discourse’, and the science discourse is asserted to be of prominent importance to the credibility of any
discourse. In other words, Ortega-Cerda’s logic unfolds as follows: 1° indicators are science-based
tools for policy-making, because they follow the logic and argumentation of what is recognizable, by
many members of a specific discourse, as scientific practice; 2° being recognizable as stemming from or
leaning on the science discourse, i.e. being recognized as a science-based tool for decision-making, is a
strong support for the credibility of the tool itself, and thereby for its influence on policy actors. In
parallel to the ‘scientificity’ of the discourse setting, discourses are, according to Foucault, influenced
by power and political action, two elements of public action in which indicators explicitly play a major
role too.
Ortega-Cerda discusses how indicators can influence these different levels: indicators and their influence
on the competition between discourses and the emergence of the SD discourse, as well as indicators and their
influence on the interaction of ‘institutions’ (in the wider sense). Finally, Ortega-Cerda develops a short
critique of the more classical approaches which see in indicators elements that exert influence within
decision-making processes through direct use, communicative use, rhetorical elements…
Most important in the approach taken by Ortega-Cerda is that the framework he applies to ISD is
strongly explanatory. A series of developments in indicator configurations are opened up for discussion
through the lens of ‘discourse analysis’. Discourse competition (e.g. the environmental discourse vs. security
issues, the nuclear energy discourse vs. climate change…), and more specifically the tendency of individual
institutions (e.g. NGOs, lobbyists, administrations…) to push ‘their’ indicators as a substitute for pushing
their interpretation of the SD discourse, can be analysed very convincingly. Ortega-Cerda identifies a
series of uses which indicators seem to fulfil and which contribute to strengthening their importance:
“(indicators) fulfil a number of uses : to support the different perspectives in the sustainability discourse,
to increase the power of the organisations that create them, in many different ways in the decision-
making process, and finally they play a role as discursive elements in the learning process of
communities in relation with sustainability” (Ortega 2005: 13).

However, while Ortega-Cerda’s analysis of the bidirectional influence between indicators and the SD
discourse is relevant as an explanatory frame, we hold that explaining the indicators’ influence on policy-
making solely through the credibility lens is somewhat limited. Vast science exercises such as those
developed by the Intergovernmental Panel on Climate Change (IPCC) contribute to giving sense and
interpretation to the SD discourse, but credibility is not the only lever of importance. In the
case of the IPCC, for instance, tremendous efforts have been invested in assuring the credibility of the
outputs: large numbers of experts were appointed, working groups discuss openly and transparently
to reach scientific consensus on specific issues, and participants are selected on scientific merit
only… And still, the IPCC was accused by a number of countries, specifically countries of the
southern hemisphere, of developing a biased knowledge-stream: the vast majority of IPCC members
stem from universities of the developed world. While highly credible in terms of
the quality of its outputs, the IPCC’s legitimacy could thus be challenged, as the representation of
‘southern’ science was not sufficiently institutionalized in a policy domain (i.e. climate change) where
the stakes of the developed and the developing world diverge strongly.

Before drawing some conclusions from these existing assessments of ISD use, we will in the following
section pass through the three afore-identified criteria of legitimacy, credibility and salience, in order to
discuss each of them and its linkages, not merely at the level of SD principles (i.e. at the level of the
policy domain), but at the level of the instrument itself, i.e. within the specific area of indicators for
sustainable development (ISD). We will first discuss, for each of the identified criteria, its implications
and significations at the level of ISD; in a second step we will tentatively integrate these points
with the other assessment approaches discussed above, in order to distil eventual commonalities and
divergences. Finally, we prepare for the last chapter by pointing to the fact that a thorough analysis and
reading of the usability of ISD might necessarily require developing a second analytical axis,
complementary to the L,C,S-axis.

3.3.2 Legitimacy, Credibility, Salience at the level of ISD

After this exploratory round of possible and existing evaluation and interpretation frames for ISD, we
take the analysis back to our initial analytical frame. We will also give a brief insight into how the L,C,S-
framework has recently been used at the very pragmatic level, namely to perform meta-evaluations of
specific ISD endeavours: in the following we thus cover the limited number of exercises which
have been conducted in the recent past to evaluate the use of specific indicators according to the above-
mentioned L,C,S-frame. Above (see section 3.2), a more general analysis was performed, at the level of a
selection of SD principles and their linkages to the L,C,S-criteria. The issue in the following sections is
to narrow our exploration and application of the L,C,S-frame down to the object of our investigation:
indicators for sustainable development.

Hereafter, we will investigate a range of theoretical linkages which can be established between ISD and
the L,C,S-frame. Illustrations and examples stem partly from the two existing58 L,C,S-assessment
exercises which empirically apply the framework to ISD.
Additionally, on the basis of the alternative assessment frameworks presented above (3.3.1), we will try to
establish which approaches can be encompassed by the L,C,S-approach. The final question for this
section is thus: how far can the L,C,S-frame be considered a valid overarching criteria-set for performing
use-analyses of ISD?

As we pass the three criteria of the L,C,S-frame in review, we start with a discussion of the credibility
criterion and its linkages to ISD. Credibility is the most ‘usual’ characteristic scientific information seeks
to achieve. Second, we discuss salience, another traditional characteristic for information to hold
in the science-policy linkage: answering the wrong questions, or asking the wrong ones, is typically blamed
by both sides of the science-policy boundary for causing major misunderstandings and leading to
non-use of information. Finally, we will introduce the link between legitimate information and indicators,
and their potential mechanics of influence on policy making.

a° Credibility and indicators for sustainable development

As developed above, credibility is the characteristic of information that refers to its being felt and perceived
by policy actors as responding to solid and robust standards and norms of scientific work.
Typical and basic elements of a scientific process which help to foster the credibility of information are
procedures such as peer review, statistical tests such as sensitivity analysis, or practices such as the
deliberate repetition of laboratory results. The perception of credibility (credibility itself often being
difficult for outsiders to the scientific world, or even to the discipline, to assess) is mostly guided by
‘proxies’ such as the ‘fame’ of the scientific actor, the smoothness of his career, the importance of the
institution he is attached to, the previous contracts and expert missions he has been appointed to…
For some authors (e.g. Ortega-Cerda, 2005) the perception of credibility and the perception of objectivity are
largely synonymous. We argue that while objectivity can, in our context, foremost be assessed at the level
of an ISD exercise’s outcome (i.e. at the level of the product), credibility more directly involves an
assessment of the capacities of the actors of the ISD exercise, and of the initiative itself (i.e. at the
level of the process).
Credibility can be negatively influenced (at least in some highly controversial topics such as nuclear power
or GMOs) by frequent appearance in the media, by overly unilateral contacts with private
enterprises, but also by strong proximity to politicians and administrations. The construction of the

58
The first one (Parris and Kates, 2003) takes us to the source and origin of the framework: the authors are part of the
wider research group coordinated by Harvard University, which developed the L,C,S-frame and applied it to
environmental assessments. The second, much more limited attempt (Beco, 2006) is based on original work done in the
context of a Master’s thesis presented in 2006 at the Université Libre de Bruxelles (Belgium), which applies the L,C,S-
framework to a specific ISD initiative, i.e. the ‘Environmental Sustainability Index (ESI)’.

perception of the credibility of scientists is still implicitly guided by the basic image of the ‘non-
communicative ivory tower inhabitant’, an image which is hugely unrealistic in times where scientists
struggle for funds in many different places and thus need, in many disciplines at least, to integrate a
certain ‘commercial’ sense in order to ‘sell’ their ideas and concepts for projects and endeavours to their
potential funding agencies (whether in-house or third party). As a result, scientific actors who are
deemed credible are often not only very successful communicators of their research issues, but see their
modus operandi used by others to ‘pre-construct’ projects and research endeavours so as to
‘maximize’ the eventual perception of credibility by outsiders. The striving for credibility is all-
encompassing in the applied sciences.

Nonetheless, some of the particularities of ISD processes, compared to less policy-oriented scientific
activities, entail a series of consequences for the ‘performance’ of ISD with regard to the credibility
criterion. These particularities depend on and are linked to issues such as the nature of the information
tool itself, the nature of the organisational setting of ISD development, the nature of the indicator
itself, the nature of the communication tools, the nature of the ISD developers, and the nature of SD. In the
following, we will discuss these issues one by one.

Credibility and the nature of ISD

Perhaps the most persistent threat to the credibility of ISD is directly linked to the main characteristic of
ISD: ISD are “(s)implifying the complexity of reality, (and) the definition, selection and interpretation of
indicators imply an articulation of scientific and societal values at various levels and depths” (see
section 1.2.2). It is this request addressed to ISD, i.e. to simplify reality in a coherent way
while remaining easily comprehensible to users (e.g. policy makers), which involves an articulation of
scientific and societal values. The exercise itself, from its very definition, is explicitly asked not to
emphasize only the scientific qualities of an ISD, but to reconcile scientific and societal values in the
construction of the tool. As a consequence, and correlated with the general limits on the resources
available to such initiatives, indicator exercises invest simultaneously in fostering their scientific
robustness and their communicability. Up to a certain level, however, and certainly at the moment of the
‘investment choices in the enhancement of usability’, the two issues are antagonistic and require a
certain trade-off to be decided upon: fostering the scientific robustness of the indicator through relatively time-
and resource-consuming scientific practice does not, at the margin, necessarily entail supplementary
value-added at the level of the communicability of the ISD. As a consequence, some indicator initiatives
partly reverse the ‘credibility’ process: the more traditional scientific control mechanisms and review
processes, which remain the main levers and the first steps in constructing qualitatively serious ISD, are
partly engaged only after the initiative has successfully passed the first hurdles in the battle for the attention
of the media and/or policy makers.
The Ecological Footprint (EF) was partly confronted with such a problem, inducing an inversion in the construction phases of the index. As a concept, the EF was developed in M. Wackernagel's PhD thesis.
Rapidly taken up and operationalized in numbers by Redefining Progress, a California-based NGO and think-tank, and later by the World Wildlife Fund (e.g. WWF, 2005), its scientific robustness was long criticized59. Only recently, as the EF made its way into institutional publications (see for instance EEA, 2005), did the developers of the EF find the resources to invest in an attempt to address methodological flaws and limits, to countercheck the adaptability of the methodology to different national and sub-national contexts, and to construct a peer-reviewed, science-led process60. From its procedural construction, but also from the choice of its scientific partner institutes, this process of revision seems very deliberately directed towards enhancing the credibility of the index, so as to accompany the index out of its original NGO-terrain and into an institutionalized policy instrument.
The initial 'conceptors' behind the EF seem to have applied some of the lessons obtained from their own analytical work on indicators (Cobb, Rixford, 1998), as the latest initiative seems to imitate the principles of the normalisation and standardization processes which have existed since the 1940s for economic indexes and national accounts (such as GDP/GNP). In this specific example, the (relative) lack of scientific credibility thus deliberately leads to a 'post-construction' of credibility, by using procedurally well-acclaimed mechanisms and gathering 'independent' scientific expertise under the umbrella of the index - all this under the 'cover' of a self-organized critical appraisal of the index.

Credibility and the organisational nature of ISD

A potential limit to the credibility of ISD stems from the organisational nature of typical ISD-processes,
and to a lesser extent of other indicator initiatives. As ISD are ‘boundary constructions’ (see next
chapter) between science and policy, their development phase is only rarely following the traditional
innovation sequence from development in science and communication to policy for use and their
eventual institutionalization in the statistics institutes and offices.
Quite contrary: principles of SD and the evermore-pressing arguments to engage into what is termed by
some ‘post-normal’ science (Funtowicz et al, 1994) or by others ‘reflexive governance’ (Voss et al.,
2006; see also next chapter) when conceiving instruments for policy making, is impacting on
development processes notably by opening this space to integrate stakeholders and users at very early
moments of the conceptual phase. Notably in order to use the expertise these actors gained with their
personal practice in order to calibrate instruments towards better usability by opening value-laden
decisions during the conception phase (e.g. the allocation of weights to individual sub-indicators, the
selection of the most intelligible indicators…). For indexes and composite indicators, the integration of
such ‘non-scientific’ expertise operationalizes mostly as a discussion of the appropriate dimensions and

59 E.g. Van den Bergh et al. (1999), Haberl et al. (2001), van Kooten et al. (2000), Rees (2000), Opschoor (2000), Moffat (2000), Ayres (2000).
60 With reference to the 10/10 initiative launched by the Global Footprint Network, which set up 10 research initiatives to address methodological flaws of the indicator in 10 different countries, with the aim of developing the robustness of the EF over the next 10 years.

weights, sometimes with a prudent co-construction with the scientific authors of the conceptual frame
used to derive the index.
However, for lists of ISD and indicator sets (see for instance the Eurostat SDI-list), the involvement of stakeholders is often considerably wider. In many cases, scientific actors are mere animators of a set of diverse (e.g. sectoral, thematic, spatial…) working groups, which are appointed to select the frame of the ISD-list, the issues and domains to be assessed, the indicators to be monitored, the templates for publishing the reports… The composition of these stakeholder working parties varies from case to case: citizens, administrations, civil society, politicians, private corporations (and of course mixes of all these). Such stakeholder processes have good reasons to exist in many contexts (see above), not least because they are thought to enhance the later use of the instrument by participants and their peers. However, by lifting the process out of the circles of experts and academia, and especially so for very open processes of ISD-development, the indicator sets see their base for credibility altered. Not necessarily to the detriment of the initiative, though, if for instance process participation by some representatives of the community entails a wider acceptance of the tool and thus an improved legitimacy of the ISD-set. On the other hand, as a consequence of their altered credibility profile, these indicators seldom durably penetrate the negotiation space of polity: the implicit (and sought-after) subjectivities which the indicators and the sets reflect can too conveniently be raised as arguments during negotiations in order to refuse the further use of these indicator sets61 to monitor issues.
In the same realm, some of the larger indicator initiatives (e.g. the indicators of the UN-Commission for Sustainable Development, or the Eurostat-SDIs) have mirrored institutional and political boundaries during their construction phase, thus probably entering an institutionalization logic too early (see also Boulanger, 2006). The organisational imperatives of such stakeholder/user processes can in effect weaken the credibility of the tool. In both examples (UN-CSD and Eurostat), the participants in the construction phase were simply the national representatives of the parties. Countries sent 'their' stakeholder or user to participate in the working groups, the latter dispatching the gained knowledge or the open questions to national experts in order to come up with a joint position/proposal for the next round of working groups. For obvious reasons, the members of the working groups were nearly all civil servants in their home countries. The animators and referees of the initiatives were themselves civil servants of the convening institution (i.e. UN-CSD or Eurostat). Such 'indicator diplomacy', and the technocratic structures it operationalizes already during the conception phases, cannot always contribute in a direct sense to the construction of the credibility of the set of indicators.

61 In one of the later developments of indicator sets we followed, participants and stakeholders already anticipated this effect of their involvement, and deliberately chose the indicators and the conceptual frame of the set from the most official sources. By doing so, they largely perverted the exercise and, to a certain extent, annihilated much of the use of such participative structures.

Credibility and indicator sets vs indices

A third influence on the credibility of an ISD-initiative depends directly on the nature of the indicator, i.e. whether the ISD are organised as sets of indicators or as indexes/indices. For reasons already raised earlier, indexes/indices are more expert-intensive than most sets of indicators. Sets are mainly operated as copy-paste activities: once a coherent, and maybe specific, framework and structure has been constructed, the selection of the indicators to populate the structure of the set in most cases involves comparatively less expertise. It is within the organisational logic of most indicator sets to rely on existing indicators and data, and to reach the set's value-added through an original reconfiguration and interlinkage of existing indicators. Indicator sets are in most cases stakeholder-intensive processes, oriented towards digesting existing information for new types of users. The simplification axiom, which can (quite paradoxically) be stronger in indicator sets than in indices/indexes, is detrimental to the involvement of larger expertise-based processes. Indicator sets thus in general count less on credibility than on legitimacy to develop value-added.
In parallel, in the case of indices/indexes, a series of touchy technical questions arise which are best solved by relying on relatively heavy scientific processes, if the solutions proposed are not to negatively influence the usability of the indices/indexes. The best example here is of course the configuration of the weights and the aggregation modes of the indexes/indices. Determining a weighting scheme is routinely accused of being the ultimate point of entry for personal and societal values to spoil an index initiative. The credibility of the indices/indexes thus also depends on the credibility of the weighting scheme, and more explicitly on the credibility of the procedural configuration through which the weighting scheme is determined: does the latter reflect good/best scientific practice? This extends the call for credibility at the level of the weighting scheme also to those index initiatives which deliberately organise participatory processes in order to address the question of selecting and weighting the indices/indexes' sub-dimensions. The credibility of the organisers and initiators directly determines the credibility of the weighting scheme developed, and thus influences the usability of the indexes/indices.

Credibility and the indicators’ modes of communication and dissemination

An indicator's credibility is inevitably influenced by another characteristic of ISD: their modes of communication. Since the beginning of their development, ISD have been communicated to users and decision-makers with particular imagination: flyers, leaflets and websites are very common, and playing cards, TV commercials or computer games have been exploited as well. Of course, the imagination shown in communicating the indicators is inherent in the logic of ISD: the message the indicators want to distribute is new, and so the modes of communication and dissemination have been adapted to reflect this innovative character of the instrument. The same logic wants that when new types of decision-makers (e.g. consumers) need to be reached, the diffusion is adapted to them. As a consequence, most ISD-initiatives did not rely on the more standard forms of scientific communication and
dissemination modes (e.g. reports, journals, working papers…), which would have allowed users to clearly identify the instrument as an analytical, science-based tool. To a certain extent, then, the selected modes of communication and dissemination of ISD-initiatives did not allow users to critically assess the credibility criterion, i.e. to identify clearly the scientific backing given to the initiative.

Credibility and the inter-disciplinarity of ISD-initiatives

On the same level as the afore-mentioned, the necessity for ISD-initiatives to be operated as inter-disciplinary (or even trans-disciplinary) exercises contributes to users applying incorrect 'reading codes' to the initiative: users do not necessarily recognize the exercise for what it is. Every scientific discipline has developed its own specificities and protocols to ensure, assess and control the credibility of research done under its auspices. These disciplinary protocols are, at least partly, known to the users of disciplinary information: information users can thus derive some elements of a credibility assessment of information on the basis that they recognize the signature of the application of the disciplinary protocol. These protocols, while they follow a logic overarching all scientific endeavour, do not necessarily completely coincide from one discipline to another (e.g. the place of the 'first' author in a journal, the seriousness of the funding agency, the reputation of a university or research centre in the research domain…). Inevitably, the signature of inter-disciplinary protocols can be new to potential users. For initiatives such as the construction of ISD, where many disciplines are required to participate in a common frame, it is thus occasionally difficult for the user to correctly interpret the outcomes of the protocols, rendering it difficult for users to assess one dimension of the credibility characteristic.

Credibility and the innovative character of ISD

Inevitably, the credibility of an ISD-initiative also depends on the societal and institutional insertion of the indicator(s). Typically, authors assert that an indicator only mirrors its home institution's reputation, and thus credibility, after a decade of constant (institutional and media) presence. As with any other type of institutionally framed policy instrument, indicators first need to generate and integrate the necessary institutional processes leading to acceptance, which are notoriously slow. Institutionalization of indicators - whether in a public, private or academic organisation - is thus definitely also a matter of passing the test of time.
In the case of ISD, this temporality of the creation of credibility could thus be advanced as a fairly strong explanation: SD as a tangible policy-making framework has hardly existed for more than a decade, and while only a few ISD-initiatives were developed from the moment of the birth of the SD policy frame anyway, even fewer have institutionally survived since the late 90s. A good example of the relative fragility of ISD-initiatives is the ISD of the UN-Commission for Sustainable Development, which began to be developed quite early, in the mid-90s,
as a large-scale, international exercise which mobilized many experts and institutional actors, but which did not last a decade before becoming a largely abandoned initiative.

b° Legitimacy and indicators for sustainable development

Legitimacy has been defined earlier as the perception of the fairness of the indicator initiative's process in coping with the different actors' and stakeholders' stakes. Legitimacy could thus be shortened to the perception of 'procedural fairness of ideas'. In this context, fairness does not, however, mean that the process needs to be configured so as to include all potential types of actors and stakeholders, for instance in a quest to implement representative democracy within the indicator process. Neither does it necessarily imply that the configuration of the process should be oriented towards fostering a consensus, or the vision of the majority of actors.

Procedural fairness of ideas - legitimacy - refers to something more basic, but more difficult to assess: the way different ideas, comments and stakes have been taken up during the process and integrated into the debate. Legitimacy is difficult to assess from the outside, or on the basis of the output and result of a process. ISD publications necessarily reflect trade-offs which were operated during their construction phase, and often these trade-offs are fairly visible to the observer. However, how the different stakes at the basis of these final trade-offs were welcomed and integrated in the process is as invisible as are those stakes which did not materialize in the end-product. Despite the evident inconsistencies with the definition of legitimacy, proxy measures for legitimacy can be relatively non-procedural aspects such as the diversity of actors participating in the process, the perceived neutrality and professionalism of the animator of the process, the existence of periodic re-assessments of the pertinence of the indicators, the general transparency of the processes…
Instead of the procedural fairness of ideas, it is thus often much more the blunt organisational configuration of the process which determines legitimacy in the eyes of potential ISD-users. However, a sound organisational configuration is in many indicator initiatives only an (imperfect) basic condition which might eventually simplify the development of procedural fairness of ideas.
We will briefly discuss hereafter a series of characteristics of ISD which influence the performance of an ISD-scheme with regard to the legitimacy criterion: the participatory character of many ISD-initiatives, including issues such as the profile of the animator/developer; the influence of the spatial scale of the ISD-scheme; the specificities of communicating on the procedural aspects of an initiative; and finally we will come back in some more detail to the importance of the procedural design of the initiative.

Legitimacy and participatory processes for ISD

Participatory configuration is traditionally taken as one of the most obvious, evident and necessary characteristics of ISD-processes. Some of the problems and interrogations generated by participation in ISD-processes have already been raised earlier. However, insofar as participatory schemes have quite a direct influence on the perception of the legitimacy of ISD, some of the issues raised need to be explored anew. We will concentrate our exploration on the threats which participation poses to legitimacy. By doing so, we are not denying that participatory processes also contribute to gaining legitimacy for ISD.
First, neither 'participation' nor 'fairness of ideas' are univocal concepts. Quite the contrary: both are malleable and, to start with, present no clear direction or even threshold. Furthermore, it is unclear which mechanics might lead to a perception of fairness: can one assume an initiative to be fair as long as any participant is guaranteed the possibility to submit ideas? Or should one speak of fairness in participation once different stakes are recognizably integrated into the outcome? In short, there exists a first strong ambivalence between 'fairness in means' and 'fairness in ends'. The consequences for the perception of legitimacy are many, but foremost: different actors will have different interpretations of what 'fairness of ideas' signifies and of the moment when it is sufficiently met. As these conceptual understandings cannot be harmonized across different actors, the resulting variability in assessing the legitimacy of specific ISD-initiatives might account for a non-negligible part of the differentiated assessments by different actors which are regularly pinpointed when comparing indicator initiatives.
Participation is thus, to a certain extent, a risk for the legitimacy of an ISD-initiative. This risk can emerge from a second layer of ambiguities: participatory processes in general are difficult to steer or control. The initial intentions behind the participatory process can be difficult to accomplish, both in keeping the actual procedural sequencing of the initiative in line with the original sketch, and at the level of the tangible outcomes of the process. As a consequence, fairness of ideas can be elevated to a founding principle when designing the participatory process, but the relative arbitrariness of such processes does not easily allow guaranteeing the targeted level of fairness in the result as well. In terms of achieving a sufficient level of legitimacy for a specific ISD-initiative, participatory processes thus represent one of the greater challenges to cope with.
Especially so, as participatory processes are of course far from exempt from strategic manoeuvres with regard to legitimacy by the participating actors. The quest for legitimacy can be deliberately perverted by participants. Ensuring 'fairness of ideas' in such processes is often interpreted by groups of participants as 'freedom of speech' and can eventually end up in 'open-outcome debates', jeopardizing the previously agreed structure of the participatory process. 'Fairness of ideas' can further be used to slow down processes, or to block specific issues and stakes. A very concrete threat in this sense can be the deliberate manoeuvring by participants to push towards the smallest, most consensual denominator, potentially jeopardizing the profile of the ISD and thus endangering the salience of the initiative.

Many more issues could be raised here which determine the challenge of implementing the quest for legitimacy by way of participatory processes. A clear linkage exists between 'fairness of ideas' - hence legitimacy - and participation. However, this linkage can be positive or negative, or both at the same time, and can even change during the ISD-process. To use an image, what matters for achieving legitimacy is the way the levers of participation are manipulated during
participatory processes, much more so than establishing a clear-cut initial strategy for legitimacy (which would anyway be difficult to transpose 1:1 during the process). The quality and characteristics of the 'animator' of the process thus become of outstanding importance. Depending on the type of ISD-initiative which is scheduled (e.g. a technocratic exercise, a stakeholder-oriented exercise, a more scientific exercise…), the characteristics and qualities of the animators would need to show some coherence. At this level, the legitimacy criterion presents a rather clear link to the criterion of credibility: if the animator is perceived as credible (by his experience and expertise, by his qualities…), then the initiative might be assessed as potentially legitimate too.

Legitimacy and the scale – and level of aggregation - of the ISD-initiative

As with credibility, the organisation and implementation of legitimacy depends strongly on the nature of an ISD-initiative: whether the exercise, for instance, is directed towards developing an index or a set of indicators is not neutral for the organisation of legitimacy. The two types of ISD require different moments and mechanics for confronting the initiative with societal values, which in turn has a direct influence on the legitimacy of the initiative. While with indexes the integration of values needs, for instance, to be carefully organised at the level of the design of the aggregation scheme and the implementation of the weights, in the case of indicator sets the construction of legitimacy concentrates on the phases of selecting the indicators and constructing the indicator framework. In this sense, indexes have long been denied legitimacy, and thus desirability in general, because the construction of their aggregation schemes and mechanics was thought to be necessarily opaque or at best technocratic - hence not capable of integrating, in a reflexive and open way, the value systems of anyone other than their constructors. In other words, ISD simply could not be indexes because it was assumed that the aggregated nature of indexes hid the inherent value choices operated during their construction. Hence, indexes could not develop any societal legitimacy in the eyes of many stakeholders.
Currently, this perception of indexes is changing. On the one hand, it has become obvious that sound societal value integration into indicator sets is no less problematic an issue than with indexes, but just a different one. In parallel, a series of index initiatives have successfully shown that it is methodologically feasible - and actually relatively straightforward - to develop index configurations as reflexive processes, thus being proactive on the issue of integrating values. Also, multi-dimensional index initiatives have multiplied over the years, making it easier to contrast their respective messages (and foremost their rankings) and to generate multiple perspectives on identical issues. Finally, the strongest opponents of indexes were in many cases environmental stakeholders, who - since their massive promotion of the Ecological Footprint index - have had to acknowledge the right of other indexes in other domains to be developed.
Part of the importance of the nature of the ISD can be assigned to the spatial level for which the ISD are
developed. In effect, the scale of the ISD-initiatives, whether local and urban indicators or multinational
and global indexes, has some influence on the legitimacy of the ISD.

Of course, working at differing scales implies differing mechanics for ensuring legitimacy. For instance, it is not feasible to think that global ISD should interpret 'fairness of ideas' as an obligation for the authors of the ISD to ensure open access to a common construction platform. Interactions to enhance legitimacy have to be much more constructed and elaborate. Also, the larger the scale, the more interacting with different stakes means relying on representatives of different thematically organized stakes (e.g. typically, all environmentally-related stakes being represented by a single environmental lobbyist). Conversely, for small-scale indicators, stakes are often represented by the base, i.e. directly by the citizens themselves.

Legitimacy and the problem of communication

As raised earlier, the response of an ISD to any of the 3 L,C,S-criteria cannot be assessed in an objective and deterministic fashion: an ISD-initiative responds to the criteria in terms of the perception which actors have of it. The performance of an ISD on each of the criteria is thus subjective in its essence. Of course, how an issue is perceived can in turn be influenced by a series of means - evidently, by communicating on the issue. While such communication raises questions at the level of each of the 3 L,C,S-criteria, the problems are somewhat exacerbated at the level of legitimacy. Considering that an actor's assessment with regard to legitimacy relies mainly on an assessment of some of the characteristics of the process, influencing perception through communication entails largely informing about the procedural settings of the ISD-initiative.
Successfully informing at this level of procedural meta-information supposes in many cases that the actors receiving the information have some basic acquaintance with the matters at hand, i.e. with procedural issues in the realm of ISD. However, such knowledge is for obvious reasons only narrowly spread, and certainly not to the point of forming a common ground of knowledge among actors. Without the necessary common ground, however, such information is difficult to communicate, if only because of the unavoidably technocratic language it will rely on in most cases. More profoundly, it remains very difficult to communicate widely on such procedural issues without the information appearing hollow and shallow to many actors: even a detailed description of the processes and interactions during an ISD-exercise cannot really account for the degree of fairness with which ideas were allowed to percolate into the resulting ISD. While it is quite easy to make worthless processes shine in a reflexive light, especially ex post, it is difficult to account for a good one.

c° Salience and indicators for sustainable development

We defined salience as an ISD-user's perception of the adequacy between the indicator scheme and the stakes he values as important for the domain at hand.
Salience, specifically in institutional policy-making contexts, has also been simplified as 'perceived policy relevance'. However, as we are in principle dealing here with a wider set of types of ISD, and not
only with institutionalized indicators directed towards policy making, we also give salience a wider signification. In effect, quite obviously, some ISD-initiatives have been developed for other purposes than as input to policy making. In the case of 'salience', quite in contrast to the other 2 criteria discussed above, a deeper differentiation thus needs to be introduced, which can be linked to the functions and roles the indicators are supposed to have. The functions of the ISD determine to a certain extent the proximity of the ISD to policy making, and thus determine the correct interpretation to be assigned to the ISD on a case-by-case basis. For instance, indicators designed to function as 'monitoring' devices obviously need to be valued against a restrictive interpretation of salience, i.e. as 'policy relevance'. However, indicators developed as mere instruments for awareness-raising will require an interpretation of salience at the level of 'relevance with regard to stakes'. The interpretation we assign to 'salience' can thus be wider, exceeding pure policy relevance: salience is interpreted as the relevance of an indicator scheme with regard to the stakes an indicator-user wants to see pursued. Whether these stakes coincide with existing policy domains or not is thus not of importance for an assessment of the salience of an ISD.
Contrary to the other 2 criteria, 'salience' can be assessed by a user in a more straightforward way. The salience of an ISD-initiative can be determined at the level of the product, and thus doesn't need any specific proxy measures. Indicator users do not need to look behind the scenes of the ISD-process (as in the case of legitimacy) or investigate the qualities of the indicator developers (as in the case of credibility): the concordance with their own prioritization of stakes can be largely checked by studying the output, e.g. the indicator report.

Subsequently, we discuss 4 issues which relate to the salience criterion. First, we explore how the definition to be given to salience varies according to different indicator users. Then we account for the influence of the aggregation level on salience. We also take a closer look, more specifically in the case of policy indicators, at the evolution of salience and indicators. Finally, we comment on the flaws which a too strong commitment to developing salience can induce in the case of policy indicators.

Salience and the user-context of ISD

Salience, as with the other 2 criteria, is context-sensitive: the assessment of an ISD-initiative with regard to its salience will of course differ for different types of users (i.e. user-relevance), and will differ for different types of ISD.
As said above, we do not equate salience with sheer 'policy relevance'. In the present context, the focus is on the stakes of the users. In this perspective, salience could thus also be translated as 'user-relevance', i.e. the extent to which a given ISD-initiative is relatable to the stakes of a specific user (group). Prolonging this, one could then assume that for indicators to be influential in policy making, and to be salient to policy makers, they should necessarily be linked to given policy domains. Thus the most salient ISD-initiative (for policy-makers) would be the one which best copies existing institutional priorities. However, things are not as simple. In order to have a large societal basis and be used widely, what matters for 'salience' to develop is that the ISD-initiative provokes a
concordance of salience-assessments between different actors and users. Developing salience across
different categories of users, can of course be difficult to achieve, as prioritized domains of policy
making may well change considerably between different users with different backgrounds. This
difference is even more evident when salience needs to be constructed within policy domains, for
instance on the way to act on a problem i.e. at the level of ‘policy response’-indicators. In the case of
climate change, for instance, while the priority given to the stake might be shared across different types
of actors, the way to act upon the problem will diverge from simple adjustments (e.g. carbon tax) to a
more drastic perception of needed change (e.g. ‘décroissance’).

Salience and aggregation

In the particular context of policy-making indicators, the salience criterion can be advanced as an
explanatory factor to distinguish the relevance for policy making of indicators with different levels of
aggregation. In the case of policy indicators, salience will to a certain extent make the users’ stakes
coincide with policy domains. However, in the case of aggregated indexes, the link between the indexes’
dimensions and policy domains is in most cases only incidental. Indexes and composite indicators, for
reasons of methodological coherence, necessarily follow a conceptually-defined and theory-based
structure in a much more straightforward way than non-aggregated indicators do. For instance, the HDI
(i.e. UNDP’s Human Development Index) has a clearly established link to the underlying concepts of
Human Development (A. Sen). The underlying methodological logic in the case of indexes is thus not to
link them to prevailing policy domains in the first place, but to establish the selection of their
dimensions along the conceptual dimensions of the theoretical framework they refer to. This missing
link to policy domains could explain why indexes are often perceived as being less salient in the
context of policy making.
Of course, this does not mean that indexes are less salient in general terms. While not being linked to
policy-salience, indexes can well be linked to other types of users’ stakes, and thus reveal their usability
in differentiated contexts other than policy making situations. If the stakes addressed by the indexes
coincide, for instance, with what could be termed the ‘conceptual ideas of life’ of users, then indexes might
well perform with regard to salience. The fact that in some cases the stakes addressed by
aggregated indicators mirror very well the ‘conceptual ideas of life’ of, for instance, citizens or
pressure groups, could in contrast explain why some indices gained in popularity very rapidly in the past.
The Ecological Footprint is, again, a good example: the index was neither considered
scientifically very robust by the science community, nor was it considered particularly
useful by policy actors. And still, the Ecological Footprint (EF) is currently the most prominent ISD in
the media, and supposedly also the best known to citizens. Apart from the fact that the EF of course benefited
from promotion by environmental pressure groups (such as the WWF), part of the explanation of its
popularity and apparent use (notably, in terms of its visibility in the media) could be that
the conceptual ideas which the EF addresses (i.e. the finiteness of the Earth’s resources and the
current devastating North-South distribution of their use) coincide with those of a large number of
people (for the latter, see for instance the periodic ‘Eurobarometer’ polls).

Salience and the evolution of stakes and policy domains

Much more than for the other two criteria, the perception of the salience of an ISD-initiative changes over
time, even rapidly so in some cases. For credibility and legitimacy, the perception of the potential
user is not supposed to change considerably over the lifespan of an ISD, unless the ISD and/or its process
is altered. Salience, however, is directly dependent on the evolution of people’s stakes or, in the case of
policy indicators, on the evolution of policy domains. And, obviously, both the stakes of people and
policy domains can be highly volatile, and are in any case evolutionary. As argued by some (e.g.
Boulanger 2006; Ortega-Cerda 2005) and as implicit in a number of concepts of social change (see for
instance Transition Management (Voss et al. 2007)), policy domains undergo a maturity cycle as
they are discovered, addressed, and traded-in by policy actors, and different policy domains compete
to gain priority. Citizens’ stakes can evolve in comparable ways.
As a consequence, the perception of the salience of an ISD will not be stable through time: it will depend
on the evolution of its societal context, either of policy domains or of societal stakes in general. As a
second consequence, the profile of the most ‘responsive’ user to a specific ISD is also prone to change
over time: while an ISD might today be valued as particularly salient by a certain type of citizens
(e.g. suburban dwellers), as users’ stakes shift, along with those of other types of citizens,
tomorrow it might be ‘rurals’ who most value the salience of the same ISD. In principle, the evolution of
stakes/policy domains is not problematic in itself. It may become so when the indicator initiative is
rendered relatively unusable in terms of the intentions it pursued, because the ISD was very precisely
targeted at a specific type of user whose stakes have since evolved.
For the construction and robustness of the salience of an ISD, the consequence for the ISD-scheme itself
is that of addressing a moving target: ISD-schemes should be designed in an adaptive/reactive way.
The evolutionary character of salience is thus a call to build reflexivity into any ISD process (see also
the next chapter).

Salience, technocracy and boundary setting

Conforming ISD to the criterion of salience implies, notably in the case of policy indicators, conforming
the indicator’s settings (e.g. issues addressed, interpretations of concepts, segmentation of stakes…) to
existing, commonly-agreed policy agendas. For policy indicators, salience thus implies conforming to
existing administrative, institutional and political boundaries. In some cases this can help explain why ISD
have perceptible difficulties in questioning system settings or in re-interpreting societal stakes.
This relative necessity to conform to the existing ‘boundary settings’ could also be interpreted as an
argument against the recurrently cited use-function of indicators as ‘initiators’ of new societal stakes. In
effect, following this argument, indicators would have difficulties developing wide usability
when frontally addressing stakes which are not yet represented in the public arenas. This line of thought
could contribute a more nuanced explanation for the rise in popularity of a series of indicators, and for
the sequence of their popularity. In the case of the Ecological Footprint, for instance, its current
popularity would thus not be grounded in the fact that the indicator contributed to triggering a vision of
the finiteness of the Earth’s resources, but quite the contrary: because of the latent existence of such a
vision, and its continuous spreading among citizens and policy actors, the EF developed the necessary
links to emerging societal stakes, and hence fostered its usability.
In configuring the processes which build up policy indicators, the striving for salience could
consequently also be used indirectly as an argument to structure the processes along the lines of existing
institutional boundaries with a direct link to existing policy agendas. Recognizing the dependence of ISD
on the emergence, existence and evolution of stakes in technocratic settings will in many situations
simplify the configuration of the necessary procedural and institutional settings. Many ISD-initiatives try
hard to integrate the idea that SD, to be dealt with in a conceptually correct manner, necessitates
horizontal, reflexive processes. Such processes will often enough not correspond to the existing boundaries
of policy agendas. The mechanics of the salience criterion would however imply that such strivings for
procedural innovation tend to overemphasize the innovative force of ISD at the expense of the usability of
the indicators.

3.3.3 Linkages between the L,C,S-framework and the approaches for the
assessment of indicator influence

After having explored the consequences of each of the three criteria in the case of ISD, the question
remains how the L,C,S-scheme can be related to the above-cited (see 3.3.1) approaches to indicator
influence. The intention we pursue with the present section is to discuss how far the L,C,S-criteria scheme
can be recognized in, and concurs, partly or not, with the identified conceptual representations of the
influence of indicators. The aim here is not to check the robustness of the L,C,S-scheme, nor of the other
approaches, by trying to integrate them. We intend to identify how far the L,C,S-scheme has an
encompassing and general character and could account for the distinct dynamics of indicator influence raised
by the illustrated approaches. The central question here is thus: can we link elements of the L,C,S-scheme
to the above-mentioned approaches to indicator influence?

The construct of Gudmundsson (2003), which aims at characterizing different types of indicator uses
(conceptual, instrumental, symbolic) and linking them to specific policy mechanisms (monitoring,
enlightening, controlling), is not really linkable to the L,C,S-scheme. This is due to two features. First,
Gudmundsson constructed his framework primarily in order to pursue his investigation of indicator use
empirically, i.e. with document analyses. His aim was not to extrapolate to what extent a specific ISD
was responding to specific characteristics. While some stipulations could be made to link the three policy
mechanisms to L,C,S-profiles (for instance, if an ISD is to participate in ‘monitoring’, then it
should supposedly perform particularly well on the criterion of credibility and scientific robustness), we
do not see enough common ground between the approaches. Second, Gudmundsson is not concerned
with discussing the usability of ISD, but much more with tracking them down in different locations.
His approach does not call for exploring the characteristics of the indicator process, or for assessing its
qualities and flaws. L,C,S operates, however, at exactly that level of analysis. Both approaches could thus
be complementary, as further investigations might show, but at our level we can hardly see a sufficiently
robust link between them.

The case is different with the analytical framework of Hezri. According to the author, the usability of
indicators for policy-making “(…) is captured by the criterion of ‘resonance’, where resonance connotes
a situation where an indicator ‘strikes a chord’ with its intended audience” (Hezri et al. 2006: 92). Such
‘acoustic’ resonance is partly dependent on the ‘audience’ which the indicator addresses, and partly on
the qualities of the indicators. Hezri thus indirectly raises two families of performance
criteria for ISD-schemes: procedural, audience-based criteria, and substantive, content-based criteria.
These two underlying strands of Hezri’s performance framework for ISD can be linked, on the one hand,
to the credibility criterion, i.e. performance with regard to the construction of the ‘content’ (i.e. the
scientific and technical robustness) of indicators. On the other hand, Hezri interprets the procedural
performance of indicators, via the fact that an indicator should link to its audience, in a direction similar
to what we described with the criterion of ‘legitimacy’.

The frameworks of Ortega-Cerda and Boulanger have in common that they acknowledge ISD as
evolutionary, discursive instruments of public policy making, which evolve according to the policy
domains they are intended for. For both authors, the consequence is that the profiles of ISD necessarily
adapt over time. We acknowledged an identical issue, i.e. the evolutionary character of the ISD-profile,
at the level of the ‘salience’ criterion.
More specifically, Ortega-Cerda links the performance of ISD explicitly to the fact that ISD are
perceived as ‘science-based’ tools, and hence that they have to respond in the first place to issues of
credibility. The lever to enhance the usability of ISD would thus be to trigger the right credibility with
the right societal actors in the right way. Ultimately, Ortega-Cerda’s analysis links ‘credibility’ more
explicitly with the idea of the evolution of the policy discourse: in other words, ‘credibility’ should be
‘salient’ to the policy domain for indicators to become usable in specific contexts.
Boulanger states a similar idea, which however extends the concept of Ortega-Cerda as follows. In
order to adapt over time to the evolution of the given policy domain, the profiles of the ISD need to
change entirely. No specific priority is given to the issue of credibility, as is done by Ortega-Cerda, who
reads the evolution of ISD-profiles as stemming from the necessary adaptation of the ISD to the
credibility challenges of the evolving policy domain. We could thus extrapolate from Boulanger that an
evolving and maturing policy domain simply asks for alternative indicator profiles to emerge, stressing
either the credibility criterion (when the domain is situated in its rationalistic phase), or the performance
with regard to salience (during the discursive policy phase), or legitimacy (during the strategizing
phase). If we applied the same image to Boulanger as to Ortega-Cerda, we could assert that the influence
of ISD will be positively directed if the profile of the ISD-scheme is ‘salient’ with respect to the
condition of the policy domain.

Finally, at the level of the more operationally-oriented criteria set of Rosenström, the link to the L,C,S-
scheme can be identified through the criterion of legitimacy. In effect, Rosenström sees the influence of
ISD as determined by a series of procedural settings and conditions (for instance, ‘transparency and
openness of the process to the participants’, ‘timing of the participants’ involvement with regard to the
overall timing of the project’, ‘the clarity of the definition of the tasks set for the participants’). To a
certain extent, Rosenström’s set of conditions could be seen as an attempt to translate legitimacy onto
ISD-processes by providing a series of prescriptive procedural ‘orientors’. However, as raised
earlier, in our understanding the L,C,S-scheme should only be used as a descriptive framework.
Obviously, it will remain difficult, even with Rosenström’s operational set of conditions, to create an
ISD-scheme from scratch which, as a result of following the procedural prescriptions, will perform well
on legitimacy.

Linking the L,C,S-scheme to the developed meta-approaches on indicator influence illustrates the
intrinsic differences between the five approaches. While they all operate on slightly different planes,
being either resolutely operational or solely conceptual and explanatory, one observation needs to be
raised. Those approaches which are the most discursive (i.e. Boulanger and Ortega-Cerda), those which
emphasize the evolutionary character of policy domains and of societal stakes, seem to demand that ISD
strive particularly for high salience. In parallel, legitimacy is more easily connected to
approaches which deconstruct the usability of ISD with regard to their procedural settings (notably,
Rosenström). Finally, the search for credibility can be attributed equally to both families of approaches;
apparently, the scientific robustness of ISD is thought to be of importance in any situation.

Conclusion to the chapter:

Towards understanding the striving for L,C,S as the challenge of the
institutionalization of boundary organisations?

The preceding section intended to identify the mechanics of the insertion of ISD into policy situations. In
order to discuss these, we used the L,C,S-framework, which was initially developed to deconstruct and
analyse the potential influence of large-scale, global environmental assessments on policy-making (e.g.
the IPCC). We first showed that the L,C,S-framework needed to be partly re-translated to our field of
inquiry, i.e. sustainable development, as the underlying principles of SD pre-impose a series of
conditions (be integrative, participative…) on the policy domain at stake, which have consequences
for the interpretation of the L,C,S-framework. As a next step, we discussed the interpretations to be
given to the L,C,S-framework when applied to indicators for sustainable development.

By discussing how the L,C,S-framework can be linked to a series of different meta-approaches of
‘indicator influence’, we stressed that the conceptualisations given to the mechanics of ‘indicator
influence’ alter the interpretations to be given to the L,C,S-criteria. We deduced from this that when
the representations used by the different authors to describe ‘indicator influence’ are translated to the
level of the L,C,S-scheme, they induce differentiated prioritisations between the three L,C,S-criteria;
i.e. they induce ISD to adopt a specific profile with regard to each of the three criteria.
The profile of ISD with regard to the three L,C,S-criteria can also be shaped in a different sense. Obviously,
ISD-initiatives pursue specific objectives, they address specific users and usages, and they thus seek to
interact with policy-making processes by using differentiated levers. These differences between
ISD-initiatives confer on them what could be termed an ‘intended use-profile’, a representation by
the authors of an ISD-initiative of the ways their instrument interacts with the policy domain. On the
basis of the above discussion, we posit that the ‘use-profile’ of an ISD-initiative can thus be
characterised by its ‘performance’ with regard to the L,C,S-scheme.

The existence of a more direct relationship between the ‘intention’ behind positioning an ISD-initiative
and the attachment to a specific representation of policy making will however remain unverified here. The
present work does not address the following question: could a specific ‘intended use-profile’ of an
ISD-initiative, as determined by the L,C,S-framework, provide information on the representation of
policy-making to which its authors are referring (implicitly or explicitly)? We saw above, for instance,
that the more discursive and evolutionary representations of indicator influence (e.g. Boulanger and
Ortega-Cerda) point towards ISD-initiatives which particularly develop their performance with regard to
the ‘salience’ criterion. It would thus be tempting to try and invert the relationship, and ask whether it is
possible to qualify ISD-initiatives which are, for instance, particularly salient, as being influenced by a
more discursive representation of policy making. We will have to leave the discussion of this inverted
link for future explorations.

However, it is possible to adjust and refine the above explorations with another level of analysis, which
we will do in the following chapter. We saw that ISD can be characterised according to their specific
‘intended use-profiles’ by using the L,C,S-framework. In the following, we will show that these
characteristics of the profile can also be articulated in terms of the ‘institutionalisation of the ISD-
scheme’.
ISD are instruments which are projected in-between a number of different ‘actor systems’: indicators
(and evaluations or assessments in general) are situated at the intersection of science, administration,
society, polity… The resulting policy-making object ‘indicators for sustainable development’ can thus
be interpreted as a boundary object, situated at the crossroads of different actor systems. As indicators are
better circumscribed as processes than as objects, indicator systems have been labelled ‘boundary
organisations’, situated between different organisational, societal systems, e.g. between science and
polity.
In other words, the orientations taken at the level of the organisational arrangements during the
construction and implementation of the ISD (of the ‘boundary organisation’) co-determine the
performance of ISD with regard to the L,C,S-criteria. The response of an ISD seeking to implement better
performance with regard to the L,C,S-criteria is in effect mostly organisational in nature in the first
place. For instance, in order to enhance the legitimacy of an ISD, a series of organisational arrangements
can be taken which will lead to a better uptake by the ISD-process of the stakes of the involved societal
actors, e.g. appointing neutral animators for the process or simplifying access to background information.
In the following chapter, we will discuss more thoroughly the consequences of adding such a further
level of analysis to the conceptualisation of ISD-usability. Subsequently, we explore how this
organisational reading needs to be widened towards an institutional reading of ISD when applying the
reference to ‘boundary organisations’. In other words, the question we intend to discuss hereafter is:
how can we categorize the linkages between the institutionalisation of ISD and their performance
with regard to the L,C,S-framework?

Chapter 4
Institutionalisation of Indicators for Sustainable
Development: a major factor in characterizing the
usability of ISD?

Before entering the heart of this chapter, a clarification needs to be made. In the earlier parts of the
present work, we deliberately did not introduce a clear differentiation between environmental indicators
and sustainability indicators. We took ISD as part of the larger family of environmental indicators. The
justification also leaned on the fact that a good deal of the indicator initiatives we refer to as
ISD are linked to traditional environmental policy domains (as is the case, for instance, with the
Ecological Footprint), or were for the most part actually composed of environmental issues and
environmental sub-indicators (as is the case, for instance, with the Environmental Sustainability Index
(ESI)), or derived their framework-logic from environmental indicators (as is the case, for instance, with
the DPSIR-framework). This position is furthermore relatively coherent with other analyses which have
been conducted at the level of the SD-discourse itself, and which show that SD can be interpreted, both
in its origins and through its proponents, as being unequivocally linked to environmental issues.

As the current chapter enters a different, institutional, level of analysis, this simplifying
analogy becomes less adequate (Lafferty 2004). In the following we thus need to distinguish
environmental indicators from SD-indicators, and we will focus on the latter. The reason for differentiating
the two indicator families is straightforward: the organisational and procedural patterns of the
operationalisation of SD do not necessarily match those experimented with in the ‘purer’ environmental
policy domains. The SD-domain is larger, more horizontally-oriented, and should have a different approach
to expertise and to citizens, as well as to non-state actors. Consequently, it is the organisation and
institutionalisation of the indicator processes which we use as the distinctive mark of ISD against
environmental indicators.
Of course, some institutionalized mechanisms developed in support of SD-policy tend to be very
similar to their environmental cousins (for instance, mechanisms such as environmental policy integration
(EPI), which are now also translated to SD), or are interpreted as representing mere evolutions of existing
environmental practice (for instance, sustainability impact assessments (SIA) have repeatedly been
interpreted as being procedurally close to strategic environmental assessments). However, these
environmental policy mechanisms are different from their SD-avatars. In the case of SIA/SEA, for
instance, the underlying mechanics and reasoning are very similar. But the simple fact that SIA has
to integrate multi-dimensional impacts (environmental, social, economic, cultural…), together with the
inherent difficulties of assessing, comparing and weighing these impacts against each other, combined with
the relative impossibility of relying solely on quantitative assessment tools to perform the assessment of the
impacts, calls for an institutional response when implementing SIA which accounts far more deeply than
environmental assessments do for uncertainty, complexity, the use of non-technical expertise,
opinion, risk, incommensurability…

The rationale of the present chapter flows from our preceding discussion of the consequences of
applying the L,C,S-framework to the usability-assessment of ISD. We saw that each of the three criteria
is influenced by a series of characteristics of the ISD-framework under assessment (e.g. indexes vs
indicators, and their influence on credibility). These characteristics can be of many natures: substantive if
linked to the nature of the indicators (e.g. index vs indicator), contextual if linked to the evolution or
nature of the evaluandum (e.g. policy domains which evolve over time), etc.
We also saw that the performance of an ISD-framework with regard to L,C,S depends on the
perception of the actors engaged in the ISD-initiative; L,C,S cannot be assessed in absolute terms, but
it is the combination of the different societal actors’ perceptions of L,C,S which gives us some idea of its
usability. How different societal actor groups link to an ISD-initiative and to each other, and how
their perceptions combine, is obviously a matter of organisation. In the terms we will
develop hereafter, the perception of L,C,S is thus influenced in the first place by the institutionalisation of
the ISD-framework under analysis. As a consequence, we intend to add to the previously described
L,C,S-framework a second axis of reading (i.e. the institutional level), which should allow us to further
characterize the usability of ISD by raising questions such as: how does the perception of legitimacy
evolve at the level of the interaction between specific societal actor groups? The present chapter intends
to construct such an institutional key to the reading of L,C,S.
At this point, the discussion aims solely to base such an extension of the L,C,S-framework on a series of
theoretical and conceptual arguments. The intention is to ground the extension in theory. A thoroughly
conducted application and case-study, as well as a subsequent evaluation of the robustness of the
(L,C,S*i)62-scheme, will be left for future research.

In the following section (4.1.1), we will define what we call institutions, institutionalisation and
institutional embeddedness. We will then conceptualize (section 4.1.2) the institutionalisation of ISD by
using the reference to ‘boundary organisations’. This section will also show a series of flaws and limits
to the institutionalisation of ISD: the aim is not to render a complete picture of these, but to show briefly
some of the operational difficulties and trade-offs which appear at the very base of ISD-development.
Subsequently, we will develop the implications of the institutional challenges of ISD for the use-profile of
indicators: we link L,C,S with the questions of institutionalization. Finally, in the last section (4.2), we
will show that the discussion with regard to the institutionalisation of ISD allows us to add a specific level
of reading to the L,C,S-criteria. We will conclude (section 4.2.2) by discussing a usability-analysis matrix
(‘L,C,S x institutional spheres’), and extrapolate on how far the incorporation of the issues of
institutionalisation can be taken as a complementary, explanatory factor in the assessment of the usability
of ISD.

62
Occasionally, we will use the acronym (L,C,S*i) when we speak of the extended L,C,S-framework: the L,C,S-framework
extended by an institutional level of reading.

4.1 Institutionalisation and ISD

Subsequently, we will see that ISD can be interpreted in a certain sense as specific ‘arenas’, which act as
interfaces between different actor-spheres (e.g. science, policy, civil society…). The interactions happening
between these actor-spheres, and by extension the evolution of the arenas, are defined and configured by
the modes of institutionalisation of the ISD. Institutionalisation, in the understanding we apply here, does
not solely mean that the organisational and procedural structure given to an ISD-exercise influences
the evolution of the arenas, but that these arenas evolve according to a network of societal interactions
(comprising not exclusively those which are formalized as organisations or as procedures). A series of
concepts and differentiations thus need to be introduced briefly in order to sharpen the wording we will
apply throughout the rest of the present chapter.

4.1.1. Institutionalization and institutional embeddedness

Conceptualizations of the meaning, and the evolution, of ‘institutions’ in the realm of environmental or
SD-policy are numerous in the literature (see notably Connor and Dovers 2004, Kissling-Näf and Varone
2000, Vatn 2005, Young 2002).

According to Young (2002), “(…) institutions are clusters of rights, rules, and decision-making
procedures that give rise to social practices, assign roles to participants in these practices, and govern
interactions among occupants of those roles”. Institutions allow us to govern human activities, as they
permit us to interact within a shared comprehension of the roles of these interactions, defining and
organising societal practice in terms of the roles that different individuals or groups internalize. Vatn
(2005: 60) offered a somewhat different definition: “(…) institutions are the conventions, norms and formally
sanctioned rules of a society. They provide expectations, stability and meaning essential to human
existence and coordination. Institutions regularize life, support values and produce and protect
interests”.
An important differentiation needs to be made: institutions do not correspond to organisations.
Organisations, like an environmental administration for instance, are formalized structures, often even
material entities, which use personnel, budget, equipment etc. Organisations often have a legal
personality, and they are inserted into a web of other organisations through hierarchical relationships.
Institutions, in turn, are wider entities comprising formal and informal elements, as for instance in
the case of the legal system of a country, which comprises a number of different organisations (such as a
ministry of justice, a lawyer association, law schools…), but also a series of non-organisational elements
which contribute to erecting the institution into a system of governance, such as jurisprudence, academic
literature, legal tradition… Institutions can also be much smaller in size and comprise fewer, or no,
formalized elements; for instance, we can speak of an institution in order to account for the networks
of rules and rights which govern small-scale, local resources (such as a particular river or a wood,
for instance) on the basis of traditional customs of exploitation only. Such networks of rules and rights,
i.e. institutions, have also been labelled, in the case of environmental policy, ‘environmental resource
regimes’.
North (1990, quoted by Hezri 2006 : 99) defined institutions in an even wider sense, as “the rules of
the game in a society”, which form “the humanly devised constraints that structure political, economic
and social interaction”. These constraints (i.e. laws, rights, customs, codes of conduct, behaviour
patterns, morals…) have also been acknowledged (notably by Dryzek) as the ‘institutional hardware’ of
society, whereas the communicational and semantic aspects of human interaction, which notably permit
the development of a common understanding of the world and its interactions, have been labelled the
‘institutional software’.
Institutions have also been interpreted (Vatn 2005, 2006) as more than just a collection of constraints,
whether formal or informal: they are the entities in a society which create meaning for the individual
and the group, steer societal and individual choice, and define motivations and relationships. In
effect, North's image of institutions as constraints should be complemented by a more positive vision of
institutions as providers of common structures and rules of interaction and understanding.

Much has been said in the literature about past and present evolutions of organisations and
institutions. In the context of ISD, some (e.g. Hezri, 2006) argue for instance that a second wave of
institutionalisation of environmental policy-making - i.e. the expansion of environmental policies
towards SD-issues in the 80s and early 90s - brought about a change in the choice of the policy
instruments necessary to govern resource regimes. Among the ‘new’ policy instruments promoted to handle
SD-issues were informative and evaluative instruments, a category to which ISD also belong. This shift
in the choice of policy instruments is supposed to be supported by the paradigm change from government
to governance, as well as by the striving to implement rules of governance derived from ‘New Public
Management’ (responding to principles such as evaluation, transparency, accountability…).
Another, similar line of thought puts emphasis on the fact that current policy problems request a change
in ‘rationality’, and that this needs to be attained through institutional change: “(…) the problem our
generation faces, is that we are caught between two developments. On the one hand, institutional change
over the last two centuries has to a large extent been directed to foster ‘I’ rationality (…). On the other
hand, we are confronted with increasing environmental degradation and social despair. The latter
developments demand increased capacity to act cooperatively. In simple terms: as the threat to
sustainability is institutional, the response must certainly be institutional too, but based on a different
direction than the present.” (Vatn 2006 : 4). And more precisely : “(…) today’s economic and political
institutions are not capable of fostering sustainability. (…) What is needed is therefore an institutional
reform based on integrated decision-making, where the role of social rationality is strengthened and
where multi-dimensional assessments are instituted” (Vatn 2006 : 1). This call can also be linked to
the perceived need for the development of ISD. However, the direction to take for the needed
institutional reform for SD is far from clear to all, as “there has been insufficient institutional
change for SD and that what institutional change has taken place has either proved inadequate or too
recent and piecemeal for clear ideas to emerge as to what kinds of institutional reforms will work”
(Connor and Dovers 2004 : 203).

Deriving from the above, we can speak of institutionalisation when interactions of human activities
create - as a response to a request for their governance - a web of formal and informal rights, rules and
processes. Institutionalisation is thus not to be confused with ‘creating an organisation’, or with
internalizing an object of public action in an existing organisation, as happened for instance in the early 70s with the
creation of environmental administrations at the level of the nation-states, or more recently with the
creation of environmental departments and units in many sectoral administrations. Institutionalisations
are thus organic evolutions of public interests. In the case of SD, institutionalisation can thus not be
attached or restricted to ‘historical’ moments such as the creation of the Brundtland Commission, the Rio
Summit, the creation of the Commission for Sustainable Development at the level of the UN, the creation
of SD-administrations in some countries…, but should comprise all other webs of societal interaction
which permitted the development of a larger community of academic actors in the domain, the
emergence of specialized NGOs, the growing publication of ‘corporate SD-reports’, the development of
multi-dimensional evaluation procedures within administration, the integration of a reference to SD into
some national constitutions… Goodin (1996) defined institutionalization, somewhat too generally, as “the
stable, recurring, repetitive, patterned nature of the behaviour of institutions, and because of them“.
With reference to North's definition (see above), institutionalisation occurs when the constraints which
govern social interaction have consolidated sufficiently, over a certain period of time, for a public
domain to be governed with reference to an identifiable collection of rules of conduct.
In many domains, institutionalisations comprise at one stage or another the creation of an organisation,
i.e. a formal structure which is created to govern the issue at hand. In common language,
institutionalisation is mostly used in the sense of ‘creating an organisation’, or of internalizing the
issue by mandating civil servants or politicians with its governance. The degree of institutionalisation is hence
often used as a proxy for the evolution of the recognition of the importance of the problem at stake. In the
logic of administrations, which are the classical operators of government/governance when society is
facing an emerging regime, the moment of the creation of an organisation (e.g. a unit, department,
ministry, international secretariat…) can thus be identified as the moment of the institutionalisation of the
issue.
The reference to institutionalisation to describe the evolving governance of policy domains can thus be
misleading, and is eminently imprecise. With respect to ISD, we therefore prefer to interpret
institutionalisation in terms of institutional embeddedness.

Institutional embeddedness can be defined, in the context of ISD, as the creation of relatively ‘soft’
processes for managing issues of SD. Being embedded, as opposed to plainly integrated into organisations
(what is commonly called ‘institutionalisation’), these processes are temporary and may vary (sometimes
strongly) over time and space. These processes are themselves part of ‘governing the governing
function’, or, following a wording advanced by Ulrich Beck, part of the ‘politics of politics’; or, as
seen by Voss (et al., 2006), they are co-founding and accompanying measures and processes of
implementing ‘reflexive governance’ (note: we will come back to these notions in section 4.2.2).
Other authors, like Hezri (2006 : 226), see in institutional embeddedness a consequence of indicator
schemes participating in institutional cognition. He acknowledges that “for a society to comprehend the
scope of a new policy area such as sustainability, a higher order surveillance and intelligence capacity,
or institutional cognition is imperative”. In this sense, Hezri sees “sustainability indicator systems as
the policy instrument that facilitates the cognitive capacity of institutions”. Indicators are
appreciated as informative tools necessary for the functioning of institutions, in terms of input to
decision and reflection, and thus to the development of institutional capacity and adaptation. Building
on this relative necessity of indicators and their importance for institutional development, Hezri
(2006 : 226) further asserts that “indicators are embedded in institutions”, notably because not only do
they shape the specific decision space of sustainability (as a policy domain) and/or its institutional
setting, but, vice versa, indicators are necessarily receptive and responsive to the (cultural, social…)
constraints which exist in handling and integrating information in the given policy institution.
Generally, the institutional embeddedness of information has been more systematically clarified by Innes
(1998). She contends that information is prone to influence decision-making by becoming embedded in
people, institutions, values… Instead of the more instrumental, direct use-functions for
decision-making, she sees in embeddedness those information flows and processes which develop
systemically to influence decision-making ‘culture’ (rather than organisation) by streamlining the
collective understanding of the policy issues at hand.
As developed before, indicators have a role to play in the shift from a conception in which
‘government’ treats problem resolution to one in which ‘governance’ structures and processes become the
main policy-driver and policy-maker. Of course, when indicators join governance processes, they
inevitably change their role: from the merely positive, quantitative pieces of information they have
been described as in a rationalist policy-making setting, they develop into procedural, post-normal
arrangements leading to mutual learning, acceptance and commitment to ‘policy decisions’. Indicators,
and more specifically their development and implementation in processes, are indeed becoming what has
been termed “an important new experiment in governance” (Miller, 2005 : 405). Such procedural
experiments would only abusively be labelled institutions, and consequently we prefer to stick to the
terminology of institutional embeddedness (IE) to designate procedural arrangements which conceive of
indicators as soft tools for the policy integration of SD.
In the following, we use institutional embeddedness simply to distinguish policy-sustaining processes,
such as the development of ISD, which do not materialize into organisations, i.e. do not become the core
business of a specific governmental agency (for instance, a department of a statistical office), as is
often (but not necessarily) the case with vertical policy instruments such as environmental reporting.
Institutional embeddedness might thus better describe the issues related to more horizontal, transversal
policy issues which are not ‘problems’ in themselves but can better be understood as management
processes addressing the general development of capacity (in our case, evaluation capacity) for steering
problems. As a secondary, paradoxical conclusion, we might suddenly have to attribute to ISD a more
serious importance in decision-making processes than we concluded above.
“If one accepts ‘government’ as a continuum of ‘governance’, than a new set of corresponding demands
on policy systems is required. In the interests of accountability and efficiency, the decision process
broadens, away from simple coercive mechanisms, towards consensus building and other less direct or
formal arrangements. In governance, the utility of indicators as a policy tool whose traditional role was
to fulfil the instrumental need of rationality must, in the new reality, enhance ‘steering’, ‘mapping’,
‘weaving’. ‘Steering’ towards sustainability necessitates multiple, appropriate information flows, for
wider communication of sustainability values. Therefore, sustainability indicators represent “an
important new experiment in governance”, beyond the mere technical fix or improvement in
measurement protocols.” (Hezri 2006 : 88)
Informational policy instruments, and especially indicators, to which we denied almost any direct,
instrumental use-function, might be of greater importance once they become institutionally embedded as
major tools for governance. The processes of developing, updating, selecting and using indicators - if
these enhance processes of mutual learning and commitment as outlined above, and thus contribute to the
development of governance - might actually induce participants, and to a lesser degree external users,
to make major use of the knowledge they gained, especially since this knowledge is based not on a purely
instrumental comprehension of facts, but on a co-constructed, often largely consensual but value-laden,
reconfiguration of data. Hezri et al. (2006) come to an identical conclusion, stating “(…) that
sustainability indicator systems have broader functional implications in governance, which would entail
departure from the old wisdom that views informational policy instruments as the ‘most lenient form of
government tool’ (Vedung and van der Doelen, 1998)”.

This consequence, which we will discuss further in the current chapter, nevertheless makes us ask the
question of the nature of the process. Accepting that indicators, as information-gathering processes,
gain in importance as governance tools, and thus as policy-driving factors, does not abolish the
necessity of controlling the process. It does, among other things, partly refocus the question of
‘robust, scientifically-reviewed information protocols’ towards an issue of ensuring the quality,
efficiency, equity, control… of the process. Instead of abolishing issues of control, it raises a series
of new challenges to be integrated into information and indicator assessments, thus calling for a new
vector of criteria to be integrated into the analysis of the usability of indicator initiatives, more
precisely with regard to the nature of their institutional embeddedness. In the following, we use
‘institutionalisation’ in the sense of ‘institutional embeddedness’.

4.1.2 Limits to institutionalisation of ISD

Before discussing further the central question of the additional ‘vectors of criteria’ for the analysis
of usability in section 4.2, we take a step back and briefly discuss a selection of problems and
fallacies which ISD face during the process of ‘institutionalisation’. Most of the issues raised
hereafter are not specific to ISD, but rather recurrent ones, ranging from very operational problems to
more conceptual considerations.
A first level of difficulties can originate, for instance, at the very basic level of the technical and statistical
operations, which can be affected by manipulation errors and ‘operational fallacies’ as listed by
Merkhofer (1987 : 148):
- Bias in collected data (e.g. wrong manipulation of the monitoring devices);
- Errors in reporting data (e.g. typing errors, cut-and-paste errors, transmission failure);
- Errors in treating and processing the data (e.g. bad application of statistical methods);
- Bad choice of methodological treatment of data (e.g. wrong choice of statistical method);
- Errors due to bad aggregation techniques;
- Use of bad proxies for missing data (e.g. non-concordance of the proxy with the behaviour of the real
data, notably at the limits of the data distribution);
- Errors of judgment and interpretation (e.g. good results, bad interpretation due to the presence of
over-complexity).

A second level of difficulties, arising with the institutionalization of ISD, emerges when defining the
methodological background structure of the assessment (i.e. the meta-methodologies, or the articulation
between the sequences of necessary evaluation methodologies). In this sense, Muster et al. (1996 : 206)
stated that: “Any activity, whether aimed at social, economic, cultural or ecological goals, can have
stated that: “Any activity, whether aimed at social, economic, cultural or ecological goals, can have
negative side effects and, at the same time, side effects can be social, economic, cultural, or ecological
problems. How can we prevent problems linked to the use of our environment if the relation between
activities and problems is so complicated? We cannot try to steer isolated activities, because it would
leave out solutions based on optimal combinations of activities. We cannot concentrate on solving
existing problems either, because this could not prevent new problems”. The difficulty of
institutionalizing ISD thus stems also from the necessary linkages to be created between partial
assessments or sectoral methodologies and sectoral institutions or agencies. Indeed, complexity and
uncertainty (i.e. incommensurability) hardly allow for one single assessment system to act as a
stand-alone (Petit 1997). Rather, the focus has to lie on the correct sequencing between different forms
of assessments: “(…) however, if societal processes are to be evaluated in terms of their
sustainability, then the different indicator systems cannot be merely reviewed, reformulated or
supplemented. Instead, the question of their connection and interaction must become the central point of
investigation” (Becker et al. 1997 : 10). Achieving these connections between assessments can be very
difficult, as they call for coordination at least of sectoral evaluations (horizontal integration of
evaluation methodologies)
as well as of the criteria of these evaluations (consistency of the general frameworks used for
evaluation). Boulanger63 refers to these two types of consequences as “knowledge, or technical
integration (integration of partial social, economical and environmental appraisals into an overall
appraisal)”, and posits that ideally a complementary integration mechanism should be realized, which he
addresses as “procedural or institutional integration (integration of economic, social and environmental
appraisals in the policy-making process)”.
The relationship between figures and ‘their’ reality is of course technically difficult, and stressful
for most of the people implicated, both analysts and users. One element of explanation for these
tensions, which develop into a third level of difficulties, and which we have already addressed earlier,
is linked to the unrealistic expectations (and the fears that build on them) raised with regard to the
value-added of a particular piece of information: logically, expectations in terms of enlightenment tend
to be higher the more the context is blurred and difficult to apprehend. Boulanger (2006 : 11) phrases the issue
as follows : “(…) It is important to stress that institutionalisation per se is insufficient for characterizing
an indicator as successful. This has special relevance in the SD domain, where it is not uncommon to see
countries or intergovernmental organisations adopting plans and strategies with quantified targets and
indicators sets, which are never really implemented and remain therefore totally ineffective. The
indicators have been institutionalised but in a way that can be considered paying lip service to SD
without any real commitment to it or without a clear perception of the state of evolution of the problem in
other non-official public arenas”. The underlying reasons for the generation of such unreasonable
expectations can be explained in many ways. One of them freely develops on the saying “what counts gets
counted”: decision-makers fear foremost that if something gets counted and points in the wrong
direction, public opinion will hold them accountable for it. And this occurs regardless of whether the
accounted evolution is ‘real’ or significantly influenced by assumptions made by experts or agencies
during the assessment and the construction of the numbers. Decision-makers thus ‘allow’ themselves to
take some influence or control when institutions define what should be counted; with which underlying
assumptions this should be realized; how it should be done and by whom. Occasionally, the influence
taken by the public authority on the development of the figures crosses the borderline between
legitimately inserting political values and taking manipulative influence. Maris (1999 : 98) reports
such a case for France with the closing of the CERC (Centre d’études des revenus et des coûts64) for
having assessed too thoroughly the growth of inequalities. Other examples of indirect manipulations are
numerous, ranging from the classical periodic (and electoral) redefinitions of the criteria to evaluate
unemployment, to the appointment of ‘friendly’ top-level civil servants, to endlessly shifting missions
between agencies, to de-appointing
experts, to cutting budgets, to delaying publications… Cobb and Rixford (1998) give a very expressive
and impressive overview of the extreme breadth of possible direct and indirect manipulations at the level
of the institutionalisation of indicators, when reviewing the rise and fall of the social indicators’
movement in the USA. Considering the explicit openness of SD to embracing stakeholders’ and users’
preferences and values in the configuration of its evaluation, SD is potentially exposed to such
fallacious influences, all the more so if SD were to develop into a major object of political and
electoral marketing.

63 Personal communication, September 2003.
64 Center for Studies of Revenues and Costs.

The process of institutionalisation (or institutional embeddedness) of ISD is thus characterized by a
number of quite ordinary operational, (meta-)methodological and institutional fallacies. One convenient
solution, often advanced for managing these mutually reinforcing difficulties when institutionalizing
ISD, is to add multiple and new forms of stakeholder participation to the reorganization of the public
enterprise, including the development of instruments and means of collaborative evaluation.
Such institutionalized governance has multiple faces, and definitions are all the more numerous. One of
the definitions most compatible with our objectives here, applicable as a blueprint for governance in
evaluation in the context of SD, describes governance as “(…) the structured ways and means in which the
divergent preferences of inter-dependent actors are translated into policy choices to allocate values,
so that the plurality of interests is transformed into co-ordinated action and the compliance of actors
is achieved” (Eising and Kohler-Koch 1999 : 5). Participative and institutionalized evaluation processes
can be seen as a constitutive element of governance. Clark et al. (2002) argue that the strength of such
assessment processes (in our case: ISD) in this regard also depends on a series of organisational
conditions.
First, assessment processes need to be sufficiently integrated in organisations, i.e. “the degree to
which scientific assessment processes are circumscribed by the organization using the assessment to
inform or validate its policy decisions” (Clark et al 2002 : 34). This does not imply that successful
evaluations or indicator developments need to be conducted in-house by the organisation; conversely,
neither is it sufficient for the targeted organisation to restrict its role to that of funding provider.
Rather, integration should be regarded in terms of the active involvement of the organisation in the
entire ‘cradle-to-grave’ sequence of the evaluation, i.e. construction of the terms of reference,
configuration of the process, definition of the boundaries of the system under evaluation…
Second, Clark (et al., 2006) stresses that organisations need to be fit for interconnectedness with the
executing actor, i.e. be able to “bridge the gulf separating those carrying out the assessment from
those using the assessment” (Clark et al 2002 : 36). In most cases of ISD construction, evaluators are
scientific actors, or scientists at least have a broad role to play in the development. The construction
of ISD thus calls for effective and efficient science-policy interfaces. However, in most observed
cases, such interfaces do not exist. The process of constructing ISD is then recurrently assigned a
second objective, namely to act itself as a science-policy interface for the wider effort of
implementing SD! Instead of being able to rely on interfaces, the assessment process turns out to act
not only as its own interface, but even as the general interface for SD-processes.
Thirdly, and most obviously, organisations need to be prepared to integrate lessons learned from the
ISD-processes themselves. This should necessarily be the case for the results of the assessment, which
are of course meant to steer policy, but learning should also occur, in a long-term perspective, inside
organisations (see section 3.1.2). One of the factors influencing the usability of ISD is the continuity
of the process over a considerable period: not only producing periodic and regular indicators, but also
allowing for adjustments in the process. Clark et al. (2002) point to a major flaw appearing in
continuous processes exempt of sufficient learning: even if the process grows more efficient and
produces better assessments over time, the latter risk becoming more and more distant from the evolving
needs of the organisation. The result is that indicators get better at “doing the job right”, but
eventually fail to “do the right job”.

4.1.3 The L,C,S use-criteria facing the institutionalization of ISD

The institutionalization of ISD, i.e. the process of configuring ISD as a web of norms, conventions,
organisations, rules…, has a direct influence on the interpretation to be given to the previously
developed criteria of the usability of ISD. In the present section, we thus discuss a limited series of
consequences and questions with regard to the L,C,S-framework which can be raised concerning the
institutionalisation of an ISD-initiative. This should allow us to better comprehend and circumscribe
the variability of the interpretation, but also of the performance, of the L,C,S-framework in the
discussion on the institutionalisation of ISD. Some of the issues raised hereafter have been raised
earlier (see section 3.3.2), but need to be re-discussed from an ‘institutional’ perspective.

We argue that the performance of ISD with regard to the usability criteria (i.e. the L,C,S-framework) is
co-dependent on the characteristics of their institutional embeddedness. If there is concordance between
the modes of institutionalisation of the ISD and the roles the ISD are meant to play, then this
adequation, for instance, “increases the likelihood that the participatory and procedural rules that
govern the production of an assessment will be consistent with those that infuse assessments with
salience, credibility and legitimacy in the eyes of the organisation in which the assessment processes
are embedded” (Alcock 2001 : 13). There are of course difficulties in attaining such adequation, and in
the following we briefly develop some of the major ones.

Institutional dissensus on the enhancement of L,C,S-performance

Obviously, there is no common, objective measure to evaluate the performance of an ISD-framework with
regard to L,C,S. It depends on the perceptions of the different actors (or actor categories) involved.
As a direct consequence, a number of dissensual appreciations can prevail at the institutional level,
which could hamper a more generalized integration of the ISD into institutional practice.
First, dissensus can of course exist within one specific actor category, or arena. The generic
understanding we use here to characterize the differing arenas (Science, Policy, Society) oversimplifies
the fact that within each of these arenas strong differentiations prevail. For instance, a Policy arena
is far from being homogeneous: different hierarchical levels coexist, and differences exist between
members of different policy domains (e.g. the administration of the environment and the administration
of economic development), between administrations and politicians… All the same, strong differences in
the appreciation of the L,C,S of an ISD-initiative can prevail in the Science arena, between academic
disciplines, between schools of thought… Insofar as the conceptualisation of usability on the basis of the
L,C,S-framework supposes that a certain level of consensus on the perceived L,C,S-performance of a given
ISD-process needs to develop among users in order to entail usability of the ISD-initiative, any
residual dissensus can seriously question the roles ISD can play.
All the same, an established consensus on the perception of the L,C,S of an ISD-initiative will not
necessarily remain stable over time between different institutional actors, or arenas. ‘Coalitions’ with
regard to the perception of the L,C,S of an ISD-initiative can thus change considerably across different
arenas.
Existing dissensus and changing consensus coalitions will have as a consequence that ISD-initiatives
will face a changing configuration of potential users. This, in turn, can pose a serious challenge to
the adaptability and resilience of ISD-processes, requiring in effect a continuous redefinition of the
underlying processes (Connor and Dovers, 2004) in order to keep ISD sufficiently embedded at the level
of the concerned actor arenas.
As a consequence, it becomes difficult to develop an efficient and satisfactory strategy to enhance the
performance of an ISD-initiative with regard to L,C,S: changing actor coalitions and dissensus within a
specific arena make it difficult to programme the necessary adjustments to an ISD-initiative which would
enhance its usability.

Steering trade-offs between potential users

At another level, the difficulty of steering ISD towards enhanced L,C,S-performance is just as obvious:
there is an underlying competition between potential direct users and indirect users. Process
adaptations at the level of an ISD-scheme which improve its performance with regard to one of the
L,C,S-criteria for one specific actor arena (e.g. policy-makers) can be strongly counterproductive for
the perceived performance of the ISD for a second actor arena. It would thus make sense to distinguish
potential direct users from potential indirect users, and to positively and consciously steer the
probable trade-offs between these different levels of potential users. This steering would be obtained
through the configuration of the institutional embeddedness of the ISD.

Variability of roles of ISD through time

The volatility and variability of the perceptions of the usability of ISD-initiatives can have yet
another character. If we come back to the conceptualization raised by Boulanger (forthcoming), who
considers that policy domains mature over time in correspondence with different models of policy-making
(from rational-positivist and discursive-constructivist to strategic), then one of the consequences will
be that the roles ISD are expected to fulfil will change too (see section 3.3.1). This in turn implies
that the type and evolution of the institutional embeddedness of the ISD need to adapt to these
evolutions as well. It will not only be the users, and their respective positions with regard to one
another, which will change; the entire decision context in which ISD-initiatives evolve will be subject
to adaptations too.
Hezri (2006) equally points to possibly changing roles of indicators, which appear as a result of
changing conditions in decision-making processes. Policy opportunities can be, depending on the issue
domain, highly volatile, shifting from situations of mere political blockage to windows of opportunity
which allow policy actors to (re)launch specific policy processes. Analogously, on the basis of
studies performing ‘knowledge use’-analyses, Hezri (2006) points to a series of shifts in types of
indicator-use due to changing policy conditions. For instance, ‘tactical use’, i.e. use of indicator
processes as a delaying tactic or as a substitute for policy development or implementation, can quickly
switch to a more ‘instrumental use’ of indicators (i.e. direct use of the main message of the indicator to
define policy) once the political conditions of the decision-space have developed into a ‘window of
opportunity’ conducive to more direct action. In this context, he speaks of ‘second-order’ use, or even
of ‘indirect’ use.

Institutional distance and usability

Another important issue of institutionalisation, which entails variability in the usability of ISD-initiatives,
is linked to the difficult question of the right ‘distance’ between the ISD-process and the users. In
evaluation research, this component has been addressed (see for instance Balthasar, 2006) as the
‘institutional distance’ between the evaluandum (i.e. the object of evaluation) and the evaluator. In our
context, ‘institutional distance’ will describe the distance of the identified users to the ISD-initiative. In
other terms, ‘institutional distance’ will also characterise the participatory implication of actors in the
ISD-process. The evaluation literature does not seem to have entirely clarified the link between
institutional distance and evaluation use: in some situations, proximity between evaluation and
evaluandum appears to enhance usability; in others, the conclusions point in the opposite direction.
Likewise, the search for the best possible performance on the L,C,S-criteria will surely be influenced by
the institutional distance to the ISD-process; however, it remains entirely unclear whether this distance
will influence the perception of L,C,S positively or negatively.

The institutional embeddedness of ISD thus leads to a series of difficulties and ambiguities which cannot
easily be condensed into general ‘rules’. However, the depth and difficulty of the institutional
influence on ISD show in parallel that an ‘institutional’ level of reading continues to make sense
when analysing usability. The next section will first establish some further arguments for including an
institutional reading in the analysis of the usability of ISD. In its last part, we articulate and describe the
axis which we propose to add to the L,C,S-framework in order to account for such an institutional reading.

4.2. ‘Institutional Embeddedness’ as a second axis to the assessment of ISD-usability

Institutional embeddedness, as well as institutionalisation, are notions sufficiently vague to raise
interest in the interactions between different actor arenas, as shown in the discussions of the
preceding sections. However, in order to refine our analytical L,C,S-framework, both notions seem too
imprecise. There exist a number of interesting attempts to characterize institutions (Vatn 2005, Young
2002,…) or to develop quality criteria for processes of institutionalisation (Connor and Dovers 2004,
Boin and Christensen 2004,…), but none seemed entirely satisfactory for our purpose of conceiving
a usability assessment of ISD. In the following, we develop an alternative way to conceptualize
ISD and their integration/institutionalization into actor arenas.

4.2.1 Indicators and their institutional embeddedness: a matter of ‘boundary organisations’?

The preceding section has already introduced the fact that the very importance given to the institutional
embeddedness of ISD calls for including a characterisation of the nature of such processes when striving
to account for the usability of ISD. In the present section, we will underpin this argument by using a
second perspective, i.e. boundary organisations (see, among many others, Jasanoff 1987 or Guston 1999
and 2001).

Every arena, or actor sphere - be it Science, Policy or Society - can be acknowledged to have its own
particular modes of organisation and functioning, its institutional setting following its own rules, norms,
conventions… In order to steer policy domains such as SD - or, in other terms, to steer society towards SD -
it is necessary to find ways of co-organisation and co-evolution between these different societal arenas; for
instance, in order to address a challenge such as climate change, a coalition of different arenas is clearly
needed. However, in order for such co-organisation and mutual development to emerge, a number of
conditions need to be met. One of them is the correct transfusion of information from one arena
into the other(s), as well as the mutual configuration of the needed information. Considering that each
arena has its own specific capacities - follows its own norms and conventions - in dealing with, consuming
and producing information, the boundaries between arenas can be a serious limitation to the
development of co-organisation. Indicators for Sustainable Development are, among many other tools
and instruments, meant to transform information in such a way as to make the transfusion of information
from one arena into the others possible and easier. In this regard, the institutional embeddedness of ISD-
processes also acquires a more precise meaning, as it develops into a logical call for the co-construction
of information in a setting where the emergence of common norms, conventions and needs has to
precede the merely technical parts of indicator construction.
Of course, boundaries between arenas have their raison d’être and importance, because they “demarcate
the socially constructed and negotiated borders between science and policy, between disciplines, across
nations, and across multiple levels. They serve important functions (e.g., protecting science from the
biasing influence of politics, or helping organize and allocate authority)(…)” (Cash et al., under review).

The different arenas thus produce ‘boundaries’ between each other. Cash et al. (under review), who in
their particular domain of enquiry, global environmental assessments, needed to conceptualize only two
actor spheres (Science and Policy), also speak of gaps occurring between ‘science knowledge’ and
‘policy action’, gaps which can make the former ineffective and unused, and the latter unresponsive to
already recognized challenges.
SD hence calls for the configuration of instruments allowing to bridge these gaps or boundaries, notably
(but not only) in terms of information exchanges. In this context, ISD, and evaluations in general, have
been labelled ‘boundary organisations’. They are instruments which belong entirely to none of the arenas
and bridge two or three of them. The literature also speaks of ‘science-policy interfaces’ (see for instance,
de Bruyn et al. 2005), but the idea is identical: by the fact that ISD sit in-between different arenas, the
ISD-processes gain an identity of their own.
Such ‘boundary organisations’, despite the vocabulary used, actually correspond by their nature and
characteristics to what we defined above as ‘institutions’. It would thus be more coherent to speak here of
‘boundary institutions’, as they rely on a web of commonly shared norms, conventions, rules…, and in
some cases they comprise formal organisations having legal status, budget, personnel… (as is the case,
for instance, of the IPCC or the Millennium Ecosystem Assessment). How ISD-processes interact with
Science, Policy and Society thus depends basically on the way the boundary organisation is configured
with regard to the different arenas. Or, in other terms, the institutional embeddedness of ISD-processes
can be characterised by the nature of the interactions of the boundary organisation with the societal
arenas (Science, Society, Policy).

We will address hereafter the necessary issues of the configuration of boundary organisations: obviously,
by extension, it is the characteristics of this configuration which will influence whether and how the
usability of ISD evolves. Which interactions between which arenas are particularly enhanced by a given
ISD-process is what will link boundary organisations with the L,C,S-framework.

4.2.2 Steering ‘boundary organisations’: ‘reflexive governance’ applied to ISD

Boundary organisations such as ISD need to be steered if the aim is to enhance their usability. Such
steering actually translates into configuring the appropriate institutional embeddedness of ISD-processes
in-between the concerned actor arenas. If we concede, in turn, that ISD are meant to steer action (public or
private) towards SD, and that a conditio sine qua non is that ISD themselves need to be steered
in a way that enhances their usability, then the simple intention to influence the potential usability of ISD
develops in the end into a discussion on the ‘steering of the steering’, or - as developed in more general
terms, most notably by U. Beck - into ‘the politics of politics’.
Beck’s argument, developed in a different (in many regards more environmentally-centred) domain, is the
following. Our society faces a series of new forms of risk, which make it impossible to sufficiently
control the quality and desirability of societal evolutions in terms of their outcomes: the uncertainties
attached to some of the prevailing risks are of a nature that makes such an outcome-centred assessment
impossible. Hence the calls to develop, in parallel, a control and assessment of the processes of a
society’s evolution themselves, i.e. for instance of the way the procedural rules are configured. If
societal actors start to discuss how to configure initiatives at this level, for instance at the level of the
configuration of the procedural rules, then one can speak of ‘rule-altering’ politics (Voss, Bauknecht
and Kemp, 2006).

In the case of ISD, we situate the configuration of the boundary organisations (to enhance usability) at
this identical meta-level of the ‘politics of politics’ (Beck 2006; Beck, Giddens, Lash 1996): steering the
institutional embeddedness of ISD with regard to the targeted actor arenas, notably in an attempt to
improve their usability. Finally, the steering of societal evolutions at this meta-level has been addressed
by concepts such as ‘reflexive governance’ (Beck, 2006; Voss, Bauknecht and Kemp, 2006), ‘adaptive
management’ (Sendzimir et al. 2006) or ‘adaptive planning’ (Weber 2006): e.g. “reflexive governance
refers to the problem of shaping societal development in the light of reflexivity of steering strategies – the
phenomenon that thinking and acting with respect to an object of steering also affects the subject and its
ability to steer” (Voss and Kemp, 2006: 4). More precisely, “unintended consequences (of societal
actions) cause new, often more severe problems that are more difficult to handle because they require
setting aside specialised problem solving. These can be called second-order problems. Sustainability is
one, if not the main second-order problem of modernist problem-solving. Second-order problems work
successively to disrupt the structure of modernist problem solving because to grasp them – to reconstruct
them cognitively, to assess them and to get competences together to act on them – they require putting
aside the isolation of instrumental specialisation, widening filters of relevance, trading off values and
engaging in interaction with other specialists. In short, these problems require transgressing the
cognitive, evaluative and institutional boundaries, which paradoxically, undermines the modernist
problem-solving approach. Problem solving becomes paradoxical in that it is oriented towards
constriction and selection to reduce complexity but is forced into expansion and amalgamation to
contend with the problems it generates. This is what we call the constellation of reflexive problem
handling or, on societal level, reflexive governance” (Voss and Kemp, 2006: 6).
Even without stepping into the details of the problematic of second- and first-order problems, the
translation of Voss and Kemp’s rationale for developing reflexive problem handling onto the domain of
ISD is quite straightforward. In a certain sense, it could stand as a synthesis of the present work.
Sustainable development, through its conceptualisation and ‘problematisation’, induces the necessity to
develop new or other forms and processes of information gathering and diffusion (see section 1.1.3).
Traditional, modernist, scientific expertise, which is the most evident source and type of information
when managing public action, shows some fundamental limits in living up to the challenge of providing
accurate and mobilizing information for decisions. ISD - comprising the processes which are configured
to construct the indicators - are among the ‘post-modernist’ instruments for decision-making which
could contribute to transforming expertise in a way that sees it more successfully integrated into
societal decision-shaping processes. However, in order to give ISD a chance to live up to these
expectations, the configuration of the ISD-products and -processes needs in turn to be steered (or
governed) by using new forms of gathering expertise and knowledge. It is this reflexive governance - in
other words, the steering of the ‘governing’ instruments - which can be conceptualized (in different
terms) as the management of the institutional embeddedness of the ISD: reflexive governance will call,
among other things, for a conscious and shared positioning of the ISD-process between the different
societal actor arenas. The process of institutionalising the ISD-process between the Society, Policy and
Science arenas can thus, in our understanding, be read as an attempt to operationalize ‘reflexive
governance’. As the characteristics of the institutional embeddedness of ISD between the three arenas
will be of major influence on the performance on the L,C,S-criteria, ‘reflexive governance’ in the case of
ISD will also comprise the management of usability.

In the following, and last, section, we will deduce from this argument for constructing reflexive
governance at the level of ISD a second axis for the analysis of the usability of ISD. This second axis
should in turn allow us to explore a more refined picture of the dynamics which exist between the
characteristics of ISD (in terms of boundary organisations) and the performances of ISD (with respect
to L,C,S).

4.2.3 Integrating ‘Institutional Embeddedness’ to the L,C,S-framework

ISD-frameworks are thus interfaces between the Science, Policy and Society arenas. The degree of their
usability will also (partly) define their performance with respect to translating information from one
arena to the others. As a consequence, the usability of an ISD-framework (i.e. its performance with
regard to the L,C,S-criteria) will also be determined and defined by the way the ISD-framework
manages its linkages to the three actor arenas. As developed above, these linkages - as well as the
positioning of the ISD-framework in the policy domains - can be comprehended in terms of the
‘institutional embeddedness’ of the ISD-frame. Reflexive governance applied to the usability of ISD, in
turn, can be interpreted as a call to develop a shared co-management of the performance of the ISD with
respect to the L,C,S-criteria framework. In other terms, reflexive governance will ask for co-management
of the institutional embeddedness of the ISD; the way an ISD links to the actor arenas should be
configured consciously, using a series of managerial techniques and processes which allow the
use-profile of an ISD-framework to be constructed.

The objective in the present chapter is to explore whether it is feasible to propose an extension of the
analytical usability grid (i.e. the L,C,S-framework) which takes into account the institutional
embeddedness of an ISD-framework, and thus provides a more diversified insight into the framework’s
potential use-profile. The aim here can only be exploratory, i.e. to propose how the above-developed
concepts could be converted into a refinement of the usability-analysis, and to discuss the limits of such
a further level of analysis.
Existing usability assessments on the basis of the L,C,S-framework (such as, for instance, Parris and
Kates, 2003) have shown that the framework does not provide sufficient detail and direction to
analytically order and address the number of interrogations raised by an ISD-initiative. The raw
L,C,S-framework can only serve as a rough and synthetic assessment of ISD-initiatives, basically
providing only some comparative information.
The original authors of the framework did themselves propose a refinement of their scheme, which
should allow ‘assessment designers’ to critically discuss the effectiveness of their assessments: “three
major themes (…) were identified (…) to be critical design choices influencing the credibility, salience,
and legitimacy of environmental assessments. These were the issues of participation (who is involved in
assessment processes); science and governance (how are assessments conducted, particularly with
respect to the interactions between scientific experts and policy-makers); and focus (what is within, or
excluded from, the assessment’s scope)” (Eckley, 2001: 5). These three themes are meant to determine an
assessment’s characteristics, which in turn will influence the L,C,S of the assessment under investigation.
Parallel to these elements of assessment characteristics, the authors developed two other families of more
contextual descriptors which also influence the L,C,S of an assessment. First, it was proposed to detail
user characteristics into a user’s concern, capacity and openness; second, the historical context of the
assessment was identified as important, comprising sub-items such as the characteristics of the issue,
the linkages and the attention cycle.

While this second, enriched framework did show some added value, notably in the discussion of the
effectiveness of environmental assessments (see for instance, Eckley, 2001), we identified some
limitations when trying to adapt it to the realm of ISD.
First, the added characteristics of an assessment do not seem sufficiently focused on accounting for the
mechanics of interaction between the arenas. If we accept the conceptualisation of boundary
organisations, i.e. seeing ISD as interfaces between a series of interconnected arenas, it seems important
that an analytical grid should focus on characterising the possible interactions between the arenas and
the ISD-scheme. Second, the proposed grid does not appear sufficiently closed as an analytical
framework, and does not lift the ambiguities in the interpretation of each of the criteria. As they are
defined, ‘participation’ and ‘science&governance’ overlap considerably, which would introduce some
randomness into the assessment of ISD-schemes. Finally, the three assessment characteristics apply
different perspectives when assessing usability: ‘focus’ is basically linked to the assessment’s
configuration in terms of content, whereas the other two criteria are more linked to process.

Departing from this basic, but potentially too limiting, extension of the L,C,S-framework by its original
authors, we applied the above-documented conceptualisation of ‘boundary organisations’ in order to
develop a second axis for the usability assessment grid, one which directly accounts for a
characterisation of the interlinkages between the arenas and the usability of an ISD-scheme; hence, for
the institutional embeddedness of the ISD-scheme between the three arenas.
The result is a very straightforward proposal to construct L,C,S-evaluations of ISD according to the three
possible couples of arenas, i.e. Science – Policy, Policy – Society, Society – Science. The matrix
obtained (see Table 4) would thus allow one to account, for each of the three L,C,S-criteria, for how it
was operationalized and managed at the level of each of the three possible arena-couples. In other words,
the grid accounts for each of the possible bi-dimensional interfaces between which a particular ISD will
be embedded: how is the ISD-scheme embedded between science and society, and how will this
particular embeddedness influence each of the three usability criteria?
Of course, the institutional context into which an ISD will be embedded is not composed of a single arena
couple: indicators cannot be developed only as interfaces operating between, for instance, the science
and policy arenas. We propose to ‘artificially’ section the entire interface-space ‘Science – Society –
Policy’ in order to be in a position to characterize more clearly the interactions which could be operated
at the level of ISD in order to stimulate usability. As seen above, ISD can be interpreted as boundary
organisations, having to cope with translating information across the different boundaries of the three
arenas. Instead of striving to discuss these boundary interactions in an analytically difficult
three-dimensional space, it makes sense here to separate the possible boundary interactions.
Such a separation notably facilitates the identification of trade-offs. As seen earlier, trade-offs are
unavoidable in the management of the L,C,S-criteria, as it is simply impossible to construct an
ISD-scheme which performs equally well on each of the three criteria, be it at the level of the criteria
themselves (for instance, it is often impossible to promote the salience of an ISD without losing on one
of the two other criteria) or at the level of developing usability for all possible target users (for instance,
what one actor arena would interpret as enhancing salience could be interpreted differently by another
arena). The double entry ‘couples of arenas’ and ‘L,C,S’ will allow these different trade-offs to be
explored individually.
The grid will principally enhance the identification of the usability-profile of an ISD-initiative. After
having identified and discussed L,C,S at the different two-dimensional interfaces, a second phase of a
usability-analysis of an ISD-initiative will necessarily entail constructing, on this basis, a global
usability-profile for the ISD-scheme, by letting emerge the trade-offs which are operated at the level of
the ISD-initiative: that is, with respect to which of the three L,C,S-criteria, and at which specific
boundaries, the analysed ISD-processes were configured so as to develop particularly performing
responses to the challenges raised by the three L,C,S-criteria.

A further clarification needs to be made in order to situate the level of analysis. First of all, the grid is
aimed at providing a structure for the analysis of existing ISD-schemes, not at discussing the
interactions at a theoretical level. Second, for reasons developed above, we are predominantly interested
in analysing how a given ISD-initiative responded to the challenges raised by the L,C,S-criteria.
Furthermore, we situate our endeavour at the level of ‘reflexive governance’, i.e. we try to identify
which ‘operations’ have been realized at the level of an ISD-scheme in order to steer and govern it in an
attempt to improve its usability. These operations will mainly be of a managerial type, e.g.
implementing a specific ‘tool’ or sub-process within the ISD-initiative’s process.
More precisely, the type of question which will seek an answer at the level of the grid reads as follows:
how does the ISD-initiative manage salience at the level of the ‘policy-science’ interface? I.e., how is
(the perception of) the integration of the stakes managed when coping with the different actors of the
policy and science arenas?
Operationally, at the level of an analysis, the questions could also be turned into an exploration of the
managerial (or governance) operations which were implemented, for instance: what are the mechanisms
which have been initiated to manage salience within the policy-science interface? In the following
example of an analytical grid, we show for each of the cells a series of possible management or
governance responses which ISD-initiators could have put in place. E.g., at the level of managing
salience for the policy-science interface, one could expect that the ISD-process encompassed an
‘Exploration of the detailed needs and demands of policy actors w/r to ISD-outcome as well as process;
e.g. perform a needs assessment’. In this eventuality, a better understanding of the needs and demands of
the targeted users will allow their expectations with regard to the issues and stakes to be integrated into
the ISD-scheme to be better respected, which in turn will influence the salience attached to the
ISD-initiative.

The three columns of the grid correspond to the L,C,S-criteria:
- Legitimacy: perception of the ISD-process’ fairness in coping with stakes.
- Credibility: perception of the implementation of high standards of scientific work.
- Salience: perception of the integration of the stakes valued as important in the domain.

Policy – Science interface:
- Legitimacy (less determinant?): configuration of the system’s boundaries should be wider than the policy boundaries; e.g. integrate policy actors from other institutional levels than the targeted level.
- Credibility (determinant): configuration of the ISD-process and outcome should reveal a high amount of innovation; e.g. develop the ISD-process as a pristine science exercise.
- Salience (determinant): exploration of the detailed needs and demands of policy actors w/r to ISD-outcome as well as process; e.g. perform a needs assessment.

Society – Policy interface:
- Legitimacy (determinant): participation of society and policy at the meta-level of the ISD-process; e.g. in order to exclude political bargaining and ‘partisanism’, for instance at the moment of determining the indicator scheme’s dimensions/themes.
- Credibility (less determinant?): ‘post-normal’ peer reviews and evaluations; e.g. members of the review teams not chosen by discipline, including non-scientific actors.
- Salience (determinant): procedural facilitation of the inclusion of non-conventional issues, domains, ideas…; e.g. flexibility to operate an eventual reframing of process and/or outcome.

Society – Science interface:
- Legitimacy (determinant): transparency of the ‘terms of reference’; e.g. by organizing feedback on their definition.
- Credibility (determinant): provision of information on science actors’ profiles; e.g. communicate on funding parties and amounts, as well as former implication in projects and consortia.
- Salience (less determinant?): evaluation of the relevance of the (policy) domains included in the ISD-framework; e.g. countercheck for emerging issue domains which are not (yet) policy domains.

Table 4 - Analytical grid to determine the usability-profiles of ISD-initiatives (filled in with examples of possible management responses)

Another example could be picked up at the intersection between the credibility-criterion and the ‘Society
– Policy’ interface. In order to govern the ISD-scheme towards more credibility, it can be envisaged to
configure an evaluation of the scientific robustness of the ISD-initiative. If one could thus detect, at the
level of an ISD-initiative, the configuration of a stakeholder-oriented (post-normal, i.e. including
non-scientific actors) assessment of the scientific background of the ISD-scheme, then this will have an
influence on the credibility of the ISD.
Salience at the ‘society – science’ interface could be governed by critically reviewing the issues which
define the ISD-initiative’s orientation and content. One of the important issues here is the definition of
the dimensions (such as energy efficiency, poverty alleviation…) which will be addressed by the
ISD-initiative. Salience in the eyes of the ‘society – science’ interface could mean being able to identify
dimensions, issues and stakes which do not stem from the policy arena, i.e. which are for instance not
(yet) formally or informally translated into policy domains, i.e. emerging issue domains. In this sense,
governing salience at this interface will translate into leading a constructed investigation of the nature
and origin of the dimensions of the ISD-initiative.

We will not pass through all the examples given in the table above; they serve only as illustrations. We
assert that once an ISD-initiative and its processes are analysed at the level the grid stipulates, a profile
of the ISD’s institutional embeddedness with regard to the three arenas can be identified. The linkages
that the ISD wants to establish between the three arenas will become identifiable. From these intended
linkages, from the processes initiated to govern them within the ISD-process, and from the interpretation
of their impact and influence on the L,C,S-criteria, a usability-profile will emerge. Whether an analysed
ISD-initiative preferred to implement a process governing the society-science interface in order to
influence the credibility of the ISD-initiative, rather than a different operation managing, for instance,
legitimacy at the level of the policy-science interface, allows one to distil what could be called the
usability-profile of the ISD-initiative. This usability-profile can also be interpreted as part of the
identification of ‘reflexive governance’ at the level of ISD, because it is a representation of the
operational approaches chosen to steer and govern an ISD-initiative, thus to ‘steer the steering’.

We further introduce - however as mere question marks - a differentiation of the relative importance for
usability of the different cells. As a matter of fact, the possible interactions between two actor arenas
will necessarily focus on those issues in which one or the other of the actors has a relatively direct lever
to influence the governance of the criterion at stake. In this sense, legitimacy can be more precisely
linked to the ‘society’ arena, credibility to the ‘science’ arena and salience to the ‘policy’ arena (see
also section 3.1.3). It could follow that, in those governing situations where the main operator of the
lever does not participate in the given arena-couple, the management of the criterion is less determinant
for the usability of the ISD-initiative. For instance, within the arena-couple ‘society-policy’ we will find
less obvious traces of management of the credibility criterion, as the main lever-operator (the ‘science’
arena) does not participate in it. The management options taken in this specific cell will thus be less
determining for the usability of the ISD-initiative.
However, we raise this differentiation merely as a raw hypothesis, and there are arguments against it. In
the game of trading off the different characteristics and qualities of an ISD-initiative, profiles of
usability will most probably be distributed without following such a linear reasoning. Furthermore, the
absence of the main lever-operator of a criterion does not imply an absence or weakening of governing
mechanisms with regard to this criterion; the examples of possible management options given in the
table above show that there are indeed some potential governance actions which can be implemented.
Furthermore, it is probably an over-interpretation to state that a specific actor arena has a preponderance
over one of the criteria: of course, credibility is of particular importance to science actors, and it is they
who put particular emphasis on managing their own performance with regard to that specific criterion,
but credibility is all the same influenced by the stances taken by the other two arenas. This is especially
true if the management action taken at the level of one arena is meant to induce a negative impact on the
criterion, e.g. if policy actors intend to discredit the science actors. Finally, it might well be that the
hypothesis should purely and simply be inverted, in the sense that the governance of a criterion at the
level of two particular actor arenas might actually be specifically influenced by the management of this
criterion by the missing third actor arena: both the ‘society’ arena and the ‘policy’ arena could be relying
on the development of a strong management option by the ‘science’ arena to develop themselves a
strong profile on this criterion.

As presented here, the analytical grid has its limits and would obviously need a series of empirical
applications; this, however, is not the objective of the present work. Nevertheless, a series of generic
limits can already be addressed here.
One of the most pressing limitations is that the grid in itself does not differentiate how a specific
governance action was implemented, or institutionalized. Depending on how governance and
management actions were implemented and institutionalized, they have of course different impacts on
the strength and profile of an ISD-initiative with respect to that criterion. An exploration of a series of
implementation/institutionalization frameworks was undertaken. It was, for instance, considered
possible to introduce such a level of analysis by referring to the two generic forms of institutionalisation
developed by Krasner (1988), vertical depth and horizontal width65, and subsequently to monitor, for
each encountered governance action, whether it contributes more to one or the other of the two.
However, even with such a rather straightforward two-criteria frame, it was not possible to clarify what
such an additional level of analysis would add to the analysis. One of the problems was obviously the
‘non-directionality’ of the vertical/horizontal frame: supposing it were possible to discriminate each
management option between the two forms of institutionalization, it was still not entirely clear which of
the two should be preferred in order to enhance a specific criterion with respect to its actor-arenas. For
the time being, such a discrimination of the nature and orientation of specific governance actions was
thus deemed neither feasible nor necessary.

A second limitation lies in the fact that the operationalisation of a specific management action cannot be
attributed exclusively to a specific interface. Whether the organisation of a scientific committee, backed
by a stakeholder-commission, to decide on the selection of an ISD-initiative’s dimensions is supposed to
influence credibility or salience can be a matter of personal judgment by the assessor. Whether such a
procedural validation of an ISD-initiative is attributable to the ‘science-society’ interface or to another
one can also be negotiated (even if a little less so in our example here).
However, the intention was to propose a simple grid that should help to improve the organization of
assessments of ISD-usability, not to develop a high-performance discriminator of effects. Equally, then,
using this type of grid will inevitably lead to placing specific management and governance operations in

65 Vertical depth stresses to what extent institutional factors influence individual attitudes and actions, i.e. it emphasizes
how significant the institutional norms and values are (relative to the formal ones). (…) Horizontal width refers to the
coupling between the various types of activities that take place within an institution. This dimension “gauges” to what
extent an organization is horizontally integrated or fragmented in an institutional sense: are activities coordinated and
logically linked, do organizational members feel they are in the “same boat”, do they share common norms, values and
commitments?

differing cells, which in turn provides a much more accurate reading of the multiple effects of such
governance operations.

Conclusion to the chapter

From this short exploration of a possible grid for discussing usability-profiles, we conclude first of all
that the approach appears to be feasible: institutional embeddedness can be analysed at the level of the
interactions between the three distinct arenas, and can be related to the L,C,S-criteria. Institutional
embeddedness is one of the adequate levels of analysis for discussing the L,C,S of an ISD-initiative,
and by consequence it is a determinant of the usability of an ISD-initiative. Furthermore, the analytical
grid shows that the inverse also holds: usability can be analysed at the level of the institutional
embeddedness of ISD-initiatives. These relationships let us assume that ‘institutional embeddedness’
does actually function as an important descriptor and discriminator of the usability of ISD, and hence
of ISD-initiatives.
The level of ‘institutional embeddedness’ is not the only possible level of analysis when we speak of
usability. It is, however, the one which to our understanding links best with a series of conceptualisations
of growing importance, such as ‘boundary organisations’ and ‘reflexive governance’.

While our proposal to establish a link between ‘institutional embeddedness’ and ‘L,C,S’ seems
theoretically coherent and viable, it is clear that the next step is an empirical assessment of
ISD-initiatives, notably by applying the grid.
Apart from checking the operational viability of the interlinkages, empirical applications promise to
identify ad hoc profiles and strands of processes, which could facilitate the management of the usability
of an ISD-process. Such an approach has been promoted notably by Dale and Beyeler (2001), who argue
that enhancing the usability of indicators (in their case, environmental indicators) would require a certain
standardisation of indicator-processes. They promoted such a standardisation notably in order to allow
certain automatisms to emerge, at the level of potential users, in the comprehension of the contextual
information attached to indicators. They see in the growing complexity of indicator processes a
dangerous potential for users to withdraw too much of their attention from the main indicator messages
(i.e. the outcome) towards the comprehension of indicator processes, with the risk of losing the main
chain of influence of ISD. By analogy with Dale and Beyeler, an empirical exploration of the linkages
between the institutional embeddedness of ISD-processes and their L,C,S-performance could lead to the
identification of a certain number of standardized profiles, for instance in terms of best practice in the
management of legitimacy in the ‘science-policy’ interface.
However, we do not follow these ideas. First of all, Dale and Beyeler situate the use of indicators mainly
at an instrumental, direct level. As shown repeatedly above, such an instrumental viewpoint on indicator-
influence is outdated. Second, ISD-initiatives need to struggle for the best possible adaptation of their
process to the given context: the institutional embeddedness of an ISD-initiative calls for the construction
of very specific and ‘individualized’ governing processes. Attaining standardisation would point in
exactly the opposite direction. At the level of ISD, the entire approach of reflexive governance actually
calls for keeping such processes open and highly adaptable to specific situations and occurrences. It
might, however, be that standardisation of processes is feasible and desirable in the context of
environmental indicators, i.e. the context of Dale and Beyeler: environmental indicators surface (among
other differences) as less multi-dimensional and value-driven, and as more enclosed in rather traditional
administrative and technocratic processes, than is the case for ISD. In this environmental policy-context,
it should not be entirely excluded that instrumental and direct use could be observable (even if empirical
studies seem to disagree slightly; see for instance Gudmundsson 2003).

Conclusion
From ‘proceduralism’ to ‘institutional embeddedness’ to ‘reflexive
institutionalisation’: a prudent outlook on enhancing the usability of ISD
(and more?)

The initial research questions and the ‘thesis’ of the present work, as they were constructed some years
ago, were centred on investigating the patterns of impacts of ISD on decision situations by applying a
procedural understanding to ISD-initiatives. What consequences would a thoroughly procedural reading
of ISD entail? How could or should decision processes interlink with ISD-processes to provoke mutual
reinforcement? How focused on process-related issues should ISD-initiatives be? Are ISD-processes an
effective catalyst for generating institutional capacity-building for SD (which in turn would facilitate the
integration of SD-principles in public action)?
Those questions arose out of the flourishing literature on ISD, where the analyses of the uses and impacts
of ISD remained largely vague. Where these issues were explored, the process-related consequences of
ISD were widely held to be the primary (positive) effects of ISD, for instance in terms of developing
networks of SD-conscious participants. In other words, the literature on ISD assumed that the procedural
configuration of ISD provides the primary explanation of ISD-success or failure, notably through the
configuration and implementation of the participatory component in ISD. As a result, we held it possible
to investigate these ‘proceduralisms’, and their effects, in order, for instance, to distil a series of ‘impact
rules’ and ‘minimal conditions’ which could be spelled out and would eventually contribute to the
improvement of the impacts of ISD on decision-making. However, during the investigations, the
analysis of the interlinkages between ISD, their processes, the decision situations and the policy
processes grew more complex and difficult than expected; the procedural reading of ISD was not precise
enough. Notably, it emerged that the linkages between the process of ISD and the policy-making process
could be discussed more accurately when they were conceived at the level of the institutionalisation of
ISD, i.e. the web of norms, rules, organisations, processes, culture… which link ISD to their policy
context.

One of our original intentions was also to explore the ‘proceduralisms’ of ISD as conditions or
facilitators of ‘institutional capacity building’ at the level of policy-makers, notably in order to
understand further how these mechanisms could account for the success of the integration of SD into
policy-making. It appears, however, that the sought-after ‘integration of ISD (and of SD) into
policy-making’ is an inadequate or ambiguous point of departure for an analysis of the influence of ISD
on policy processes; ‘integrational’ aspects in environmental and SD-policy processes have been
depicted as utterly vague and too diverse in their possible interpretations.
The more precise (though in some ways also more generic) ‘utilisation of ISD in decision situations’
still needed to be dissected and rendered more precise. Because we wanted to start our analysis directly
at the level of the ISD (both product and processes), we concentrated on exploring the characteristics of
ISD which presumably condition its utilisation. Hence we concentrated on discussing the ‘usability’ of
ISD, supposing that attaining a certain degree of usability is a first condition to be met by an ISD if an
initiative is to stand a chance of being used and referred to during policy processes. Setting our analysis
at the level of ‘usability’ was also the best perspective to take, as our analysis wanted to explore how
ISD as ‘science objects’ potentially resonate at the policy level; i.e. how an ‘object’ such as an
ISD-initiative (comprising ISD as product and process), which is meant to translate and distribute
scientific assessments, monitoring and knowledge to policy-makers by simplifying and synthesizing,
potentially feeds into policy processes.

From these developments and ramifications of the analysis, i.e. moving from a procedural level of
analysis to an institutional reading, and from the utilisation and integration of ISD in policy-processes
to the usability of ISD for decision situations, our main research question distilled more precisely into
the following: which characteristics of ISD-initiatives influence the usability of ISD in decision
situations? At a secondary level, the research question was directed towards the identification of a key
which allows these characteristics, i.e. the usability-profile of ISD-processes, to be read and analysed
with respect to the configuration of the decision situation.

Our discussion of the mechanics of decision-making processes, and of the handling of information
within them, identified that the generic utilisation of assessments in policy-making could be apprehended
in terms of three different characteristics: legitimacy, credibility and salience (L,C,S). Applied to the
context of ISD, and put synthetically, legitimacy refers to the policy-actors’ perception of procedural
fairness, credibility to the perception of scientific soundness, and salience to the perception of
stakeholder- and policy-relevance. A three-dimensional reading of ISD-processes with respect to the
L,C,S-framework helped us to further discuss the characteristics of usability-profiles of ISD in policy
situations. It further emerged, through a discussion of alternative and existing utilisation-analyses of ISD,
that the L,C,S-framework had sufficient depth and width to figure as a potential, overarching framework
of ISD-characteristics.

Simultaneously, the confrontation of the L,C,S-framework with the specificities of the issue domain of
SD, as well as a more detailed discussion of the translation of L,C,S to the level of ISD-initiatives,
showed that a secondary level of analysis was possible, and even necessary. A solely
procedural/substantive reading of ISD-initiatives would not provide the necessary wider comprehension
of the possible insertion of ISD into decision contexts, and thus of their usability. The linkages between
an L,C,S-based analysis of the usability-profiles of ISD, the principles of SD and the policy-making
processes were identified as best discussed at the level of the institutionalisation of ISD. The
institutionalisation of ISD, i.e. the solidification of a web of norms, rules, conventions, processes… that
transforms and governs the ‘object ISD’ into an ‘institution’, was better apprehended with the concept
of ‘institutional embeddedness’, i.e. in our case, the embedding of ‘soft’ information-processes for
SD-management into public decision-making culture.
Within the context of an analysis of the institutional embeddedness of ISD, some supplementary and
explanatory concepts could be connected to ISD. First of all, ISD are identified as ‘boundary
organisations’, i.e. objects set up to facilitate the interactions between different existing actor arenas. In
our context, ISD are ‘boundary organisations’ that help to cross the boundaries between different
cultures of understanding, constructing, organising and digesting information. ISD can in this respect be
conceptualized as assessment objects which help to translate, mediate and communicate information
from the science-arena to the society-arena and the policy-arena, and which simultaneously help to adjust
and configure the assessment-information and -process at the level of the science-arena with respect to
the demands and culture of the society- and policy-arenas. By inextricably interlinking the arenas in both
directions, ISD turn into ‘boundary organisations’. As a consequence, we proposed to add to the
L,C,S-based analysis of the usability of ISD a second, institutional axis which situates the mechanics of
L,C,S between the three arenas. An L,C,S-assessment framework, coupled with an institutional reading
of the interactions between the actor arenas participating in an ISD-initiative, in effect makes it possible
to conceive a ‘usability-profile’ for the specific ISD-initiative.
The institutional reading of ISD-initiatives can be developed further. In order to enhance their usability,
ISD-processes need to be governed and steered: their usability can be managed and co-constructed
through the lenses of the three usability-characteristics. Simultaneously, ISD are themselves
acknowledged as being part of the government- and governance-instruments of the SD-domain. By
translating information between actor-arenas, ISD fulfil a ‘governance-enhancing’ function, which in the
end renders ISD part of the steering (or governance) instruments of SD. As a consequence, the
enhancement or even management of the usability of ISD distils down to ‘steering the steering’. Such a
double-bound governance function can be addressed as ‘reflexive governance’, i.e. the governance of the
governance instrument.

The institutionalisation of ISD-processes, or their ‘institutional embeddedness’, can thus be
acknowledged as a secondary key for the analysis of the L,C,S-characteristics of ISD-initiatives. It might
only be one key among other possible levels of analysis, but in effect it suggests that ISD should be
understood as ‘boundary organisations’ and that the enhancement of their usability could be understood
as a sort of catalyst for the implementation of ‘reflexive governance’ at the level of SD.

At least two further exploratory and concluding thoughts can be developed from this line of argument.
First, information and decision-aiding instruments, thus including ISD-processes, might in the future be
re-explored in a different light with respect to their positive contribution to the institutionalisation of
sustainable development per se. They could in effect be discussed with respect to their potential to
participate in what could, by extension of the above, be termed the ‘reflexive institutionalisation’ of SD,
i.e. the institutionalisation of the institutionalisation instrument. This argument would run as follows: a
further institutionalisation (or, in simpler terms, a further integration) of SD into public policy-making
could be facilitated by the adequate institutional embeddedness of ISD with respect to L,C,S. Whether
such a conceptually rather appealing extension will lead to any concrete and operational improvements
over the more traditional (but vaguer) quest for the integration of SD into policy-making is difficult to
anticipate at this stage. At the operational level, the nuances between the two approaches might be
small, but ‘reflexive institutionalisation’ seems to convey one advantage over SD-policy integration: the
reflexive character of the process almost automatically raises the question of organizing the control and
monitoring of the mechanisms, notably in order to structure feedback. In terms of ISD, and at a very
pragmatic level, such a call for monitoring and controlling the quality and effectiveness of their
institutionalisation could, for instance, reinforce the call to develop specific ‘meta-evaluation standards’
applicable to ISD.
Second, a thorough empirical application of the L,C,S-framework, coupled with an institutional reading
in line with what we proposed, to a larger series of ISD-initiatives could identify existing patterns of
usability-profiles. In effect, diverse ISD-initiatives could show a series of identical management
constellations as institutional responses to the challenge of steering the L,C,S-performance of the ISD.
As a consequence, it could be tempting to develop, from the identification of these constellations, some
standard institutional or managerial responses to the challenges of L,C,S, and hence, to a certain extent,
to standardize ISD-processes according to the policy-situation they are felt to be confronted with. While
such a procedural standardization of ISD-processes might in effect be tempting, we remain sceptical
with regard to its feasibility, and to a larger degree with regard to its desirability; nevertheless, both
questions may be worth exploring.

Discussing the usability of ISD has not proven a straightforward task. Simply translating the existing
knowledge-base on evaluation- and assessment-use to the policy domain of SD seemed an inadequate
approach for contributing to a better comprehension of the strengths and weaknesses of ISD with respect
to their potential in a policy situation. The specificities of the SD-policy domain, linked to the emerging
demands from policy-making paradigms such as ‘New Public Management’ to invest more strongly in
soft, alternative and process-oriented tools for policy-making, nevertheless set a series of very concrete
demands, from policy-makers and stakeholders alike, to develop a better comprehension of the
mechanisms of decision-aiding tools such as ISD.
Our discussion aims to contribute to this emerging debate, which is of course not restricted to the policy
domain of SD. Nevertheless, the specific policy domain of SD had some influence on the level at which
we felt such a discussion should be conducted at this stage, i.e. at the level of confronting, integrating
and translating a series of concepts and perspectives. As a policy domain, SD is a principles-led,
process-oriented referent which is meant to develop into an overarching (or at least a structuring)
rationale for policy decisions. Discussing the usability of ISD thus also had to incorporate an attempt to
comprehend the relationship between ISD and the formation and steering of such a rationale.
Conceptually, these discussions are far from closed, as shown by the emerging debate on the adaptation
of the principles of reflexive governance to SD.

Whether these discussions, including the present one, and more specifically the theoretical level at which
they are led, will really develop into a different and fruitful operationalisation of SD in policy-making
remains, however, open to question at this moment.

References

Adriaanse A. (1993), ‘Environmental Policy Performance Indicators: a study on the development of
indicators for environmental policy in the Netherlands’, Den Haag, Netherlands Ministry of Housing,
Spatial Planning and the Environment.

Alcock F. (2001), ‘Embeddedness and Influence: a contrast of assessment failure in New England and
Newfoundland’. John F. Kennedy School of Government, Harvard University, Belfer Center for Science
& International Affairs, 2001-19.

Atkinson T., Cantillon B., Marlier E. and Nolan B. (2002), Social Indicators: The EU and Social
Inclusion, Oxford University Press, Oxford.

Ayres R. (1988), ‘Self-organization in biology and economics’, International Institute for Applied Systems
Analysis, Laxenburg, Austria.

Ayres R.U. (2000), ‘Commentary on the utility of the ecological footprint concept’, Ecological
Economics 32: 347-349.

Balthasar A. (2006), ‘The effects of institutional design on the utilization of evaluation. Evidence using
Qualitative Comparative Analysis (QCA)’, Evaluation 12 (3), 353-371.

Bartelmus P. (ed.) (2002), Unveiling wealth. On money, quality of life and sustainability. Kluwer
Academic Publishers. Dordrecht, Boston, London.

Bauer R.A., Biderman A. and Gross B. (eds) (1966), Social indicators. MIT Press, Cambridge, Mass.,
USA.

Bauler T., Douglas I., Daniels P., Demkine V., Eisenmenger N., Grosskurth J., Hak T., Knippenberg L.,
Martin J., Mederly P., Prescott-Allen R., Scholes B. and Van Woerden J., (in press) ‘Identifying
methodological challenges’. In : Moldan B., Hak T., Bourdeau P. and Dahl R. (eds), Assessment of
Sustainability Indicators. SCOPE publication series. Island Press.

Bauler T. (2000), ‘Some theoretical considerations in response to the claim after information for decision-
making’, International Sustainable Development Research Conference, ERP, University of Leeds.

Bauler T. and Paternotte V. (2000), ‘Meandering towards sustainable development : On a necessary - but
not sufficient - procedural approach to SD-issues’, Conference proceedings. Man and City. Towards a
Human and Sustainable Development, Napoli, 6th- 8th September 2000.

Beck U. (1986), Risikogesellschaft. Auf dem Weg in eine andere Moderne. Suhrkamp, Frankfurt/Main,
Germany.

Beck U. (2006), ‘Reflexive governance: politics in the global risk society’, In: Voss J-P., Bauknecht D.
and Kemp R. (2006), Reflexive Governance for Sustainable Development. Edward Elgar, Cheltenham
UK.

Beck U., Giddens A. and Lash S. (1996), Reflexive Modernisierung. Eine Kontroverse. Suhrkamp,
Frankfurt/Main, Germany.

Becker E., Jahn T., Stiess I. and Wehling P. (1997), ‘Sustainability: A cross-disciplinary concept for social
transformations’, Management of Social Transformations (most) Policy Paper, Institute for Social-
Ecological Research - ISOE, Germany. UNESCO, Paris.

Beco M. (2006), ‘L’expertise comme influence: impact de systèmes d’indicateurs environnementaux sur la
prise de décision politique en matière d’environnement’, Unpublished Master Thesis, Université Libre de
Bruxelles, Belgium.

Bell S. and Morse S. (1999), Sustainability Indicators. Measuring the immeasurable. Earthscan London.

Berkes F. and Folke C. (1994), ‘Investing in cultural capital for sustainable use of natural capital’, Beijer
Reprint Series N°22, 128-149.

Beinat E. (1997), Value functions for environmental management. Kluwer, Dordrecht.

Boin R. and Christensen T. (2004), ‘Reconsidering leadership and institutions in the public sector. A
question of design?’, Paper presented at the International workshop ‘The early years of public
institutions’, Leiden University - Netherlands, 10th- 11th June 2004.

Boisvert V., Holec N. and Vivien F-D. (1998), ‘Economic and Environmental Information for
Sustainability’, In: Faucheux S. and O’Connor M. (eds) (1998), Valuation for Sustainable Development:
methods and policy indicators. Edward Elgar, Cheltenham, UK.

Bosch G. (2002), ‘Indicators of sustainable employment’, In: Bartelmus P. (ed.) (2002), Unveiling wealth.
On money, quality of life and sustainability. Kluwer Academic Publishers. Dordrecht, Boston, London.

Bossel H. (1998), Earth at a Crossroads. Paths to a sustainable Future. Cambridge University Press.

Bossel H. (1999), ‘Indicators for Sustainable development : theory, method, applications. A report to the
Balaton Group’, International Institute for Sustainable Development, Winnipeg, Canada.

Boulanger P.M. (2006), ‘Political Uses of social indicators: overview and application to sustainable
development indicators’, International Conference on Uses of Sustainable Development Indicators,
Montpellier - France, 3rd – 4th April 2006.

Boulanger P-M. (2004), ‘Les indicateurs de développement durable: un défi scientifique, un enjeu
démocratique’, Paper presented at the seminar Développement durable, Institut du Développement
durable et des Relations Internationales – Paris – France, 24th April 2004.

Boulanger P-M. (forthcoming), ‘Political Uses of Social Indicators: overview and application to
sustainable development indicators’, International Journal of Sustainable Development.

Boulding K. (1966), ‘The economics of the coming spaceship earth’, In: Jarrett H. (ed), Environmental
quality in a growing economy. John Hopkins Press, Baltimore.

Boulding K.E. (1978), Ecodynamics. A new theory of societal evolution. Sage, London.

Boyd R., Richerson P. (1985), Culture and the evolutionary process. University of Chicago Press.

Brodhag Ch. (2000), ‘Evaluation, rationalité et développement durable’, paper presented at the annual
colloquium of the French Evaluation Society, Rennes - France, June 2006.

Brühl W. (2002), ‘The debate about sustainability in industry’, In: Bartelmus P. (2002), Unveiling wealth.
On money, quality of life and sustainability. Kluwer Academic Publishers. Dordrecht, Boston, London.

Burch M. and Wood B. (1990), Public Policy in Britain. Basil Blackwell, Oxford.

Cash D., Clark W., Alcock F., Dickson N., Eckley N. and Jäger J. (2002), ‘Salience, Credibility,
Legitimacy and Boundaries: Linking research, assessment and decision making’, John F. Kennedy
School of Government, Harvard University, Faculty Research Working Papers Series, November 2002
(RWP02-046).

Cash D., Clark W., Alcock F., Dickson N., Eckley N. and Jäger J. (under review), ‘Salience, Credibility,
Legitimacy and Boundaries: Linking Research, Assessment and Decision Making’. To be published in:
Policy Sciences.

Commission of the European Community (CEC) (2000), Indicateurs structurels, COM(2000) 594 final,
Bruxelles.

CEC (2002), Indicateurs structurels, COM(2002) 551 final, Bruxelles.

CEC (2002), Indicateurs structurels, COM(2002) 691 final, Bruxelles.

CEC (2002), Rapport de la Commission au Conseil. Analyse de la liste ouverte d'indicateurs-clés
environnementaux, COM(2002) 524 final, Bruxelles.

Clark W., Mitchell R., Cash D. and Alcock F. (2002), ‘Information as Influence: How institutions mediate
the impact of scientific assessments on global environmental affairs’, John F. Kennedy School of
Government, Harvard University, Faculty Research Working Papers Series, November 2002 (RWP02-
044).

Cobb C. and Rixford G. (1998), ‘Lessons learned from the history of social indicators’, Redefining
Progress, San Francisco.

Connor R. and Dovers S. (2004), Institutional Change for Sustainable Development. Edward Elgar,
Cheltenham UK.

Costanza R. (1999), ‘The ecological, economic, and social importance of the oceans’, Ecological
Economics 31: 199-213.

Costanza, R., d’Arge, R., de Groot, R., Farber, S., Grasso, M., Hannon, B., Naeem, S., Limburg, K.,
Paruelo, J., O’Neill, R.V., Raskin, R., Sutton, P. and van den Belt, M. (1997), ‘The value of the world’s
ecosystem services and natural capital’, Nature 387, 253–260.

Crabbé Ph. (1997), ‘Sustainable development : concept, measures, market and policy failures at the Open
Economy, Industry and Firm levels’, paper presented at the Conference of the Canadian Society for
Ecological Economics, Hamilton, 1997.

Dale V. and Beyeler S. (2001), ‘Challenges in the development and use of ecological indicators’,
Ecological Indicators 1: 3-10.

Daly H.E. (1977), Steady-state economics. The economics of biophysical equilibrium and moral growth.
Freeman, San Francisco.

Dawkins R. (1989), The selfish gene. Oxford University Press.

De Bruyn T., Bauler T. and Frendo L. (2005), ‘The Belgian Platform for Indicators for Sustainable
Development: an evaluation of a process-oriented science-policy interface’, paper presented at the 6th
International conference of the European Ecological Economics Society, 14th – 17th June 2005, Lisbon.

De Graaf H.J., Musters C.J.M. and Ter Keurs W.J. (1996), ‘Sustainable development: looking for new
strategies’, Ecological Economics 16: 205-216.

Defeyt Ph. and Boulanger P-M. (2003), ‘BEL-INSOC-10: un indicateur d’insécurité sociale en Belgique’,
Institut pour un Développement Durable, Ottignies, Belgium.

Denisov N. and Christoffersen L. (2001), ‘Impact of environmental information on decision-making
processes and the environment’, UNEP/GRID-Arendal, Occasional paper 01/2001. Arendal, Norway.

Dratwa J. (2003), ‘Taking risks with the precautionary principle. An inquiry into collective
experimentations with science and governance, into collective enunciations of policies and polities,
experimenting Europe from international organizations to the constitution of our common world’,
unpublished PhD-thesis, Ecole des Mines de Paris, France.

Dryzek J.S. (1996), The Politics of the Earth. Environmental discourses. Oxford University Press, Oxford.

Dunn W. N. (1981), Public Policy Analysis: An Introduction. Prentice Hall, Englewood Cliffs, New Jersey.

Eckley N. (2001), ‘Designing effective assessments: the role of participation, science and governance, and
focus’, Environmental Issue Report N°26, European Environment Agency, Copenhagen, Denmark.

Eising R. and Kohler-Koch B. (1999), Governance in the European Union: A Comparative Assessment.
Routledge. London.

Etzioni A. (1992), ‘Normative-Affective Factors Toward a New Decision-Making Model’, In: Zey M.
(1992), Decision-making: alternatives to rational choice models. Sage, London.

European Environment Agency (EEA) (2005), ‘The Ecological Footprint: A resource accounting
framework for measuring human demand on the biosphere’, in coordination with Global Footprint
Network, Copenhagen – Denmark.

European Environment Agency (EEA) (2005), ‘The European environment - State and outlook 2005’,
Copenhagen – Denmark.

EvaluAtion of SustainabilitY – European COnference – EASY-ECO (2002), ‘Glossary of terms’, Vienna University of Economics and Business Administration, Proceedings and Conference preparation material. Access also via http://www.sustainability.at/easy.

Faucheux S., Froger G. and Noël J-F. (1993), ‘Quelle hypothèse de rationalité pour le développement
soutenable’, Economie Appliquée, Tome XLVI, N° 4, pp 59-103.

Faucheux S. and Noël J-F. (1995), Economie des ressources naturelles et de l’environnement, Paris,
Armand Colin.

Faucheux S. and O’Connor M. (eds) (1998), Valuation for Sustainable Development: methods and policy
indicators. Edward Elgar, Cheltenham, UK.

Federal Planning Bureau (1999), ‘Sur la voie d’un Développement durable? Rapport fédéral sur le
Développement durable’, Task-Force Développement durable, Belgium.

Forsyth T. (2003), Critical Political Ecology. The politics of environmental science. Routledge, London,
UK.

Foucault M. (1983), ‘Discourse and Truth: the Problematization of Parrhesia’, six lectures given by Michel
Foucault at the University of California at Berkeley, Oct-Nov. 1983, transcribed and edited by Joseph
Pearson in 1985, available at: http://foucault.info/documents/parrhesia/ .

Funtowicz S. and Ravetz J. (1994), ‘The worth of a songbird: ecological economics as a post-normal
science’, Ecological Economics 10 (1994), 197-207.

Funtowicz, S., De Marchi B., Lo Cascio S. and Munda G. (1998), ‘The Troina Water Valuation Case
Study’, Unpublished Research Report to the Valse Project (VALuation for Sustainable Environment),
European Commission Joint Research Centre, Ispra, Italy.

Funtowicz, S., O’Connor, M. and Ravetz, J. (1997), ‘Emergent complexity and ecological economics’, In:
van den Bergh J. and van der Straaten J. (eds.), Economy and Ecosystems in Change: Analytical and
Historical Approaches. Edward Elgar, Cheltenham.

Gadrey J. and Jany-Catrice F. (2003), ‘Les indicateurs de richesse et de développement. Un bilan international en vue d’une initiative française’, DARES, Paris - France.

Gell-Mann M. (1994), The Quark and the Jaguar: Adventures in the simple and the complex. Abacus,
London.

Georgescu-Roegen N. (1971), ‘The entropy law and the economic system’, In: Daly H. (ed), Towards a
steady-state economy. Freeman, San Francisco.

Giampietro M. (1994), ‘Using hierarchy theory to explore the concept of sustainable development’,
Futures 26 (6), 616-625.

Giampietro M. (1999), ‘Implications of complexity for an integrated assessment of sustainability trade-
offs: participative definition of a Multi-Objective Multiple-Scale Performance Space’, Lecture from the
25th September 1999, Advanced Study Course ‘Decision tools and processes for integrated environmental
assessment’, Universitat Autonoma Barcelona - Spain.

Giddens A. (1990), The Consequences of Modernity. Cambridge: Polity Press.

Goodin R. (1996), ‘Institutions and their design’, In: Goodin E. (ed), The theory of institutional design.
Cambridge University Press.

Gordon, C. (2000), ‘Introduction’, In: Rabinow P. (ed), Michel Foucault Power. Essential works of Michel
Foucault 1954-1984. Vol 3, The New Press, NY.

Gouvernement du Grand-Duché du Luxembourg (1998), Plan National pour un développement durable pour le Grand-Duché de Luxembourg, Ministère de l’Environnement.

Greeuw S., Kok K. and Rothman D. (2001), ‘Factors, Actors, Sectors and Indicators: the concepts and
application in MedAction’, Working paper of the International Centre for Integrative Studies, University
of Maastricht - NL.

Gudmundsson H. (2003), ‘The policy use of environmental indicators – Learning from evaluation
research’, Journal of Transdisciplinary Environmental Studies 2 (2), 1-12.

Guston D. (1996), ‘Principal-agent theory and the structure of science policy’, Science and Public Policy
24(4).

Guston D. (1999), ‘Stabilizing the boundary between politics and science: the role of the Office of
Technology Transfer as a boundary organisation’, Social Studies of Science 29(1), 87-112.

Guston, D. H. (2001), ‘Boundary organizations in environmental policy and science: an introduction’, Science, Technology, and Human Values 26 (4): 399-408.

Haberl H., Erb K-H. and Krausmann F. (2001), ‘How to calculate and interpret ecological footprints for
long periods of time: the case of Austria 1926–1995’, Ecological Economics 38 : 25–45.

Hammer M. and Hinterberger F. (2003), ‘A sustainable Human Development Index. A suggestion for
greening the UN’s index of social and economic welfare’, paper presented at the ‘Quo vadis MFA ?’
workshop, October 2003, Wuppertal Institute - Germany.

Hansson S.O. (2002), ‘Uncertainties in the knowledge society’, International Social Science Journal 54
(171), 39–46.

Henry G.T. and Mark M.M. (2003), ‘Beyond use: understanding evaluation’s influence on attitudes and
actions’, American Journal of Evaluation 24 (3): 293-314.

Hezri, A. A. and Dovers, S. R. (2006), ‘Sustainability indicators, policy, governance: issues for ecological
economics’, Ecological Economics 60 (2006) 86-99.

Hinterberger F. and Zacherl R. (2003), ‘Ways towards Sustainability in the European Union – beyond the
European Spring Summit 2003’, Sustainable Europe Research Institute (SERI), Cologne&Vienna.

Hourcade J., Salles J. and Thery D. (1992), ‘Ecological Economics and scientific controversies. Lessons
from some recent policy-making in the EEC’, Ecological Economics 6 (1992), 211-233.

Innes, J. E. (1998), ‘Information in Communicative Planning’, Journal of the American Planning Association 64, 1 (1998).

Innes J. E. and Booher D.E. (2000), ‘Indicators for sustainable communities: a strategy building on
complexity theory and distributed intelligence’, Planning Theory and Practice 1 (2), 173-186.

IUCN, UNEP and WWF (1991), Caring for the Earth: A strategy for sustainable living. Gland,
Switzerland.

Jacobs M. (1996), ‘What is socio-ecological economics?’, Ecological Economics Bulletin 1 (2), 14-16.

Jacobs M. (1998), ‘Sustainable development as a Contested Concept’, In: Dobson A. eds (1998), Fairness
and Futurity. Oxford University Press, Oxford.

Jasanoff, S. (1987), ‘Contested boundaries in policy-relevant science’, Social Studies of Science 17: 195-
230.

Jürgens I. (2002), ‘Science-Stakeholder Dialogue and Climate Change. Towards a participatory notion of
communication’, paper presented at the Conference on the Human Dimensions of Global Environmental
Change, 6th -7th December, Berlin - Germany.

Keeney R. (1992), Value-focussed thinking. Harvard University Press, Cambridge.

Kissling-Näf I. and Varone F. (eds) (2000), Institutionen für eine nachhaltige Ressourcennutzung:
innovative Steuerungsansätze am Beispiel der Ressourcen Luft und Boden. Verlag Rüegger,
Chur&Zürich - Switzerland.

Klemmer P. (2002), ‘Economic, ecological and social indicators’, In: Bartelmus P. (2002), Unveiling
wealth. On money, quality of life and sustainability. Kluwer Academic Publishers. Dordrecht, Boston,
London.

Knoepfel P., Kissling-Näf I. and Varone F. (eds) (2003), Régimes institutionnels de ressources naturelles
en action. Institutionelle Regime natürlicher Ressourcen in Aktion. Verlag Helbing&Lichtenhahn, Basel
- Switzerland.

Köhn J. (1998), ‘Thinking in terms of system hierarchies and velocities. What makes development
sustainable?’, Ecological Economics 26 (1998) 173-197.

Krasner, S. D. (1988), ‘Sovereignty. An Institutional Perspective’, Comparative Political Studies, 21 (1): 66-94.

Kuhn T. (1970). La structure des révolutions scientifiques. (2e Ed.), Flammarion, Paris – France.

Kurtz, J.C., Jackson L.E. and Fisher W.S. (2001), ‘Strategies for evaluating indicators based on guidelines
from the Environmental Protection Agency’s Office of Research and Development’, Ecological
Indicators 1 : 49-60.

Lafferty W. (ed) (2004), Governance for Sustainable Development : the challenge of adapting form to
function. Edward Elgar, Cheltenham, UK.

Leca J. (1993), ‘Sur le rôle de la connaissance dans la modernisation de l’Etat’, Revue française
d’administration publique n°66, avril-juin 1993.

Lehtonen M. (2002), ‘Les indicateurs d’environnement et de développement durable de l’OCDE: quel rôle
dans la mondialisation ?’, Paper presented at the seminary ‘Mondialisation, Institutions et
Développement Durable’, March 2002, C3ED, Université de Versailles Saint Quentin en Yvelines -
France.

Lehtonen M. (2003), ‘OECD environmental performance review programme: accountability (f)or learning?’, paper presented at the EASY-ECO2 conference, May 2003, University of Vienna - Austria.

Lehtonen M., (2004), ‘The environmental-social interface of sustainable development: capabilities, social
capital, institutions’, Ecological Economics 49 (2004)199-214.

Luhmann N. (1988), Ökologische Kommunikation: kann die moderne Gesellschaft sich auf ökologische
Gefährdungen einstellen? Westdeutscher Verlag, Opladen - Germany.

MacGillivray A. and Zadek S. (1995), Accounting for Change. Indicators for Sustainable Development. New
Economics Foundation, London – United Kingdom.

Maris B. (1999), Lettre ouverte aux gourous de l’économie qui nous prennent pour des imbéciles. Albin
Michel, Paris - France.

Martinez Alier, J., Munda, G. and O’Neill, J. (1998), ‘Weak comparability of values as a foundation for
ecological economics’, Ecological Economics 26 (1998), 277-286.

Marttunen M. and Palosaari M. (2004), ‘Participatory and systematic approach to the use and production
of indicators’, paper presented at the PEER Conference, 18th November 2004, Helsinki.

Meadows D., Meadows D. and Randers J. (1992), Beyond the limits: confronting global collapse, envisioning a sustainable future. Chelsea Green, Post Mills.

Meppem T. and Gill R. (1998), ‘Planning for sustainability as a learning concept’, Ecological Economics
26 (1998), 121-137.

Merkhofer M.W. (1987), Decision Science and Social Risk Management. A comparative evaluation of
Cost-Benefit Analysis, Decision Analysis and other formal Decision-Aiding Approaches. Reidel. Kluwer.

Miller C.A. (2005), ‘New civic epistemologies of quantification: making sense of indicators of local and global sustainability’, Science, Technology and Human Values, 30 : 403 – 432.

Miles I. (1985), Social Indicators of Human Development. Pinter, London.

Mitchell R., Clark W., Cash D. and Alcock F. (eds) (forthcoming), Global environmental assessments:
Information, institutions, and influence. MIT Press, Cambridge - USA.

Moffatt I. (2000), ‘Ecological footprints and sustainable development’, Ecological Economics 32 : 359–
362

Moldan B., Hak T., Bourdeau P. and Dahl R. (eds) (in press), Assessment of Sustainability Indicators.
SCOPE publication series. Island Press.

Moser S. (1999), ‘Impact Assessment and Decision-making : How can we connect the two?’, White paper
presented for the SLR impact assessment workshop, Charleston. Belfer Center, Harvard University-
USA.

Munasinghe M. and McNeely J. (1995), ‘Key concepts and terminology of sustainable development’, in:
Munasinghe M. and Shearer (1995), Defining and measuring sustainable development. WorldBank,
Washington.

Munasinghe M. and Shearer (1995), Defining and measuring sustainable development. WorldBank,
Washington.

Munda G. (1995), Multicriteria Evaluation in a Fuzzy Environment. Theory and Applications in Ecological Economics. Physica-Verlag, Berlin.

Munda G. (1997), ‘Environmental Economics, ecological economics and the concept of sustainable
development’, Environmental Values 6 (1997), 213-233.

Musters C.J.M, De Graaf H.J. and ter Keurs W.J. (1998), ‘Defining socio-environmental systems for
sustainable development’, Ecological Economics 26 (1998) 243-258.

Neumayer E. (2003), Weak versus Strong Sustainability. Exploring the Limits of two opposing paradigms.
Edward Elgar, Cheltenham, UK.

Neumayer E. (2004), ‘Sustainability and Well-being Indicators’, Research Paper No. 2004/XX, World
Institute for Development Economics, United Nations University.

North D. (1991), ‘Institutions’, Journal of Economic Perspectives, 5 : 97-112.

Norton B. and Minteer B. (2003), ‘From environmental ethics to environmental public philosophy:
ethicists and economists, 1973 – future’, In: Tietenberg T. and Folmer H. (eds) (2003), The international
Yearbook of environmental and resource Economics 2002/2003. A survey of current issues. Edward
Elgar, Cheltenham, UK.

O’Connor M. (1999), ‘Dialogue and Debate in a post-normal practice of science: A reflection’, Futures
31, 671-687.

OECD (1993), Environmental indicators for environmental performance reviews. Paris.

OECD (1994), Environmental Indicators. OECD, Paris.

OECD (2001a). Indicateurs clés d'environnement. Direction de l'environnement de l'OCDE. Paris

OECD (2001b), Sustainable Development: Critical Issues. Paris.

OECD (2002a), Aggregated environmental indices: review of aggregation methodologies in use. Working
group on environmental information and outlooks. Paris.

OECD (2002b), Glossary of Key Terms in Evaluation and Results Based Management. Working party on
Aid evaluation. Paris

OECD (2003), Environmental indicators. Development, measurement and use. Paris.

Opoku, C. and Jordan, A. (2004), ‘Impact Assessment in the EU: A global sustainable development
perspective’, paper presented at the Berlin Conference on the Human Dimensions of Global
Environmental Change, December 2004, Berlin - Germany.

Opschoor H. (2000), ‘The ecological footprint: measuring rod or metaphor?’, Ecological Economics 32 :
363–365

Ortega-Cerda, M. (2005), ‘Sustainability indicators as discursive elements’, paper presented at the 6th
International conference of the European Ecological Economics Society, 14th - 17th June 2005, Lisbon –
Portugal.

Owens S. and Cowell R. (2002), Land and Limits: interpreting sustainability in the planning process.
Routledge, London, UK.

Parris T. and Kates R. (2003), ‘Characterizing and Measuring Sustainable Development’, Annual Review
of Environmental Resources 28:13 (2003) 1-28.

Paternotte V. (2002), ‘Scientific assessment of sustainable mobility questioning its use. Beyond paralysing quibble’, Unpublished PhD thesis. Université Libre de Bruxelles – Belgium.

Perret B. (2001), ‘Indicateurs sociaux: Etat des lieux et perspectives’, Conseil de l’Emploi, des Revenus et
de la Cohésion sociale. Papiers du CERC, N°2002-01, janvier 2002. Paris.

Petit O. (1997), ‘Institutionnalisme et Développement Durable. Une tentative d’adéquation méthodologique’, Unpublished post-graduate thesis. Université de Versailles Saint-Quentin-en-Yvelines. Paris, France.

Rauschmayer F. (1999), ‘Decisions in the context of sustainable development: Ethics and implementation
of multi criteria analysis’, In: Ring I., Klauer B., Wätzold F. and Mansson B. (eds), Regional
Sustainability. Physica, Heidelberg, Germany.

Rauschmayer F. (2000), ‘Ethics of Multi Criteria Analysis’, International Journal of Sustainable Development, vol. 3/1.

Rees W.E. (2000), ‘Eco-footprint analysis: merits and brickbats’, Ecological Economics 32 : 371–374.

Renn O., Webler T. and Wiedemann P. (eds) (1995), Fairness and competence in citizen participation.
Kluwer, Dordrecht.

Rennings K., Kemp R., Bartolomeo M., Hemmelskamp J. and Hitchens D. (2003), ‘Blueprints for an
Integration of Science, Technology and Environmental Policy’, Final report. CE – R&D programme
‘Improving Human Potential Programme by the Strategic Analysis of Specific Political Issues –
STRATA’.

Rickard L., Jesinghaus J., Amann C., Glaser G., Hall S., Cheatle M., Ayang Le Karma A., Lippert E.,
McGlade J., Plock-Fichelet V., Ruffing K. and Zaccai E. (in press), ‘Ensuring Policy Relevance’, In:
Moldan B., Hak T., Bourdeau P. and Dahl R. (eds), Assessment of Sustainability Indicators. SCOPE
publication series. Island Press.

Rist G. (1997), The History of Development. From Western Origins to Global Faith. Edition 2002. Zed
Books, London, UK.

Robinson J. (2004), ‘Squaring the circle? Some thoughts on the idea of sustainable development’,
Ecological Economics 48 (2004) : 369-384.

Rosenström U. (2002), ‘The potential for the use of sustainable development indicators in policy making
in Finland’, Futura 2 : 19-25.

Rosenström U. and Kyllönen S. (in press), ‘Impacts of a participatory approach to developing national
level sustainable development indicators in Finland’, Journal of Environmental Management.

Rotmans J. and de Vries H. (1997), Indicators for Sustainable Development. Perspectives on Global
Change: The TARGETS approach. Cambridge University Press, UK.

Rotmans J., Van Asselt M. and Vellinga P. (2000), ‘An integrated planning tool for sustainable cities’,
Environmental Impact Assessment Review 20 (2000) 265–276.

Roy B. (1990), ‘Decision Aid and decision making’, In: Bana e Costa C. (ed), Readings in Multiple
Criteria Decision Aid. Springer Verlag, Berlin.

Rydin, Yvonne (2004), ‘A role for Sustainability Indicators’, presented at PEER Conference ‘Use of
Indicators for Sustainability’, November 17th – 18th, Helsinki - Finland.

Sabatier P. (1991), ‘Political science and public policy’, Political Science & Politics, Vol. 24: 144-146.

Sabatier P. and Jenkins-Smith H. (eds) (1993), Policy change and learning: an advocacy coalition
approach. Boulder.

Sachs I. (2002), ‘Dix ans après Rio, quel bilan pour le développement durable ?’, presented at a conference at the Cité des Sciences, 19th June, Paris - France.

Sachs W. (2001), ‘Post-fossil development patterns in the North’, In: Bartelmus P. (eds) (2001), Unveiling
Wealth. On Money, Quality of Life and Sustainability. Kluwer Academic Publishers, Dordrecht, NL.

Saint-Upéry M. (1999), ‘Amartya Sen ou l’économie comme science morale’, In: Sen A. (1999),
L’économie est une science morale. La Découverte, Paris.

Schneidewind et al. (1997), ‘Institutionelle Reformen für eine Politik der Nachhaltigkeit: vom Was zum Wie in der Nachhaltigkeitsdebatte’, Gaia 6, N°3.

Scrase J. I. and Sheate W. R. (2002), ‘Integration and Integrated Approaches to Assessment: what do they
mean for the environment?’, Journal for Environmental Policy and Planning 4 (2002) : 275-294.

Scriven M. (1997), ‘Truth and Objectivity in Evaluation’, in: Chelimsky E. and Shadish W. (eds),
Evaluation for the 21st century: a handbook. Sage Publications, Thousand Oaks, USA.

Sen A. (1999), Development as Freedom. Anchor Books, NY – USA.

Sendzimir J., Magnuszewski P., Balogh P. and Vari A. (2006), ‘Adaptive Management to restore
ecological and economic resilience in the Tisza river basin’, In: Voss J-P., Bauknecht D. and Kemp R.
(eds) (2006), Reflexive Governance for Sustainable Development. Edward Elgar, Cheltenham UK.

Shi T. (2004), ‘Ecological economics as a policy science: rhetoric or commitment towards an improved
decision-making process on sustainability’, Ecological Economics 48 (2004) : 23-36.

Shulha L., Cousins J. and Bradley M. (1997), ‘Evaluation use; theory, research and practice since 1986’,
Evaluation Practice 18 (3) :195-208.

Simon H. (1976), ‘From substantive to procedural rationality’, In: Latsis S.J. (ed), Method and appraisal
in economics. Cambridge: Cambridge University Press.

Simon H. (1982), Models of bounded rationality, Cambridge, MIT Press.

Simon H. (1983), Reason in human affairs, Stanford University Press.

Simonis U. (2002), ‘On expectations and efforts’, In: Bartelmus P. (2002), Unveiling wealth. On money,
quality of life and sustainability. Kluwer Academic Publishers. Dordrecht, Boston, London.

Söderbaum P. (1999), ‘Politics and economics in relation to environment and development: on participation and responsibility in the conceptual framework of economics’, In: Köhn J., Gowdy J., Hinterberger F. and van der Straaten J. (eds.), Sustainability in Question: The Search for a Conceptual Framework. Edward Elgar, Cheltenham.

Spangenberg J., Pfahl S. and Deller K. (2000), ‘Elaboration of institutional indicators for sustainable
development’, Wuppertal Institute for Climate, Environment, Energy. Division for Material Flows and
Structural change, Final report, Germany.

Stirling A. (1999a), ‘On science and precaution in the management of technological risk’, SPRU,
University of Sussex, Final report, UK.

Stirling A. (1999b), ‘The appraisal of Sustainability: some problems and possible responses’, Local
Environment 4 (2) : 111-135.

Swiss Federal Statistics Office (2003), Le Développement Durable de Suisse. Indicateurs et commentaires.
Neuchâtel, Switzerland.

Theys J. (2000), ‘Vers des indicateurs pour un développement durable : se mettre d’accord sur une architecture avant d’empiler les briques’, In: Développement durable. Villes et Territoires. Innover et décloisonner pour anticiper les ruptures. Dossier N°13. Centre de Prospective et de Veille Scientifique. Paris – France.

Thompson J. (1995), ‘The renaissance of learning in business’, In : Chawla S. and Renesch J. (eds) (1995),
Learning Organisations; developing cultures for tomorrow’s workplace. Productivity Press, Portland.

United Nations (1984), A Framework for the development of environmental statistics, NY - USA.

United Nations (2001), Report of the workshop on environment statistics, United Nations Statistics
Division, NY - USA.

United Nations Commission for Sustainable Development (UNCSD) (2000), Indicators of Sustainable
Development : guidelines and methods, NY – USA.

United Nations Evaluation Group (UNEG) (2005), Norms for Evaluation in the UN System. United Nations Evaluation Forum. www.uneval.org (last visited: 5th April 2007).

United Nations – UNCED (1992), Agenda 21, NY – USA.

Van Asselt M. B. A., Rotmans J. and Rothman D. S. (2005), Scenario innovation: Experiences from a
European experimental garden, Taylor & Francis, Oxford, UK.

Van den Bergh C. J. M. and Verbruggen H. (1999), ‘Spatial sustainability, trade and indicators: an
evaluation of the ‘ecological footprint’’, Ecological Economics 29 : 61-72.

Van den Hove S. (2000), ‘Approches participatives pour les problèmes d’environnement. Caractérisations,
justifications, et illustrations par le cas du changement climatique’, Unpublished PhD thesis, Université
de Versailles-Saint-Quentin-en-Yvelines, France.

Van den Hove S. (2000), ‘Participatory approaches to environmental policy-making: the European
Commission Climate Policy Process as a case study’, Ecological Economics 33 (2000), 457 – 472.

Van der Knaap P. (1995), ‘Policy Evaluation and learning: Feedback, enlightenment or argumentation?‘,
Evaluation 1 : 189-216.

Van der Sluijs J. (2002), ‘A way out of the credibility crisis of models used in integrated environmental
assessment’, Futures 34 (2002) 133-146.

Van Gigch J.P. (1991), System design modeling and metamodelling, Plenum Press.

Van Kooten G.C. and Bulte E.H. (2000), ‘The ecological footprint: useful science or politics?’, Ecological
Economics 32 : 385–389

Vatn A. (2005), Institutions and the Environment. Edward Elgar, Cheltenham UK.

Vatn A. (2005), ‘Rationality, Institutions and Environmental Policy’, Ecological Economics 55 (2005)
203– 217.

Vatn A. (2006), ‘Sustainability - the need for institutional change. The institutional roots of environmental degradation and the need for multi-dimensional governance structures’, Paper presented at the 9th biennial conference of the International Society for Ecological Economics, 15th-19th December, Delhi – India.

Vedung E. (1997), Public Policy and Program Evaluation. New Brunswick, New Jersey, Transaction.

Von Bertalanffy L. (1968), General System Theory. Foundations, Development, Applications. Braziller,
NY - USA.

Voss J-P., Bauknecht D. and Kemp R. (eds.) (2006), Reflexive Governance for Sustainable Development,
Edward Elgar, Cheltenham, UK.

Voss J-P. and Kemp R. (2006), ‘Sustainability and reflexive governance: introduction’, In: Voss J-P.,
Bauknecht D. and Kemp R. (2006), Reflexive Governance for Sustainable Development. Edward Elgar,
Cheltenham - UK.

Wackernagel M. and Rees W. (1996), ‘Our Ecological Footprint: Reducing Human Impact on the Earth’,
The new catalyst bioregional series, vol. 9.

Wackernagel M. and Rees W. (1997), ‘Perceptual and structural barriers to investing in natural capital:
economics from an ecological footprint perspective’, Ecological Economics 20 (1), 3–24.

Weber K.M. (2006), ‘Foresight and adaptive planning as complementary elements in anticipatory policy-
making: a conceptual and methodological approach’, In: Voss J-P., Bauknecht D. and Kemp R. (2006),
Reflexive Governance for Sustainable Development. Edward Elgar, Cheltenham UK.

Weiss C.H., Murphy-Graham E. and Brikeland S. (2005), ‘An alternative route to policy influence: how
evaluations affect D.A.R.E.’, American Journal of Evaluation 26 (1) : 12-30.

Wildavsky A. (1979), Speaking Truth to Power: The Art and Craft of Policy Analysis. Little Brown, Boston.

Wilkinson D., Fergusson M., Bowyer C., Brown J., Ladefoged A., Monkhouse C. and Zdanowicz A.
(2004), ‘Sustainable Development in the European Commission’s Integrated Impact Assessments for
2003’, Institute for European Environmental Policy, London – UK, Final Report.

World Commission on Environment and Development (1987), ‘The Tokyo declaration’, In: WCED
(1987), Our Common Future, Oxford University Press.

World Economic Forum, Global Leaders for Tomorrow (2001), ‘2001 Environmental Sustainability
Index’, CIESIN and YCELP, Yale University, USA. http://www.ciesin.columbia.edu/indicators/ESI

World Economic Forum, Global Leaders for Tomorrow (2002), ‘2002 Environmental Sustainability
Index’, CIESIN and YCELP, Yale University, USA. http://www.ciesin.columbia.edu/indicators/ESI

Wynne, B. (1992), ‘Uncertainty and environmental learning: reconceiving science and policy in the
preventive paradigm’, Global Environmental Change 2 (2) 111-127.

Young O. (2002), The institutional dimensions of environmental change. Fit, interplay and scale, MIT
Press, Cambridge Massachusetts, USA.

Young O. (2006), Environmental Institutions: Rights, Rules, and Decision-making Systems, Course
Syllabus (ESM 248 Winter 2006), Bren School of Environmental Science and Management, University
of California at Santa Barbara - USA.

Zaccaï E. (2002), Le développement durable. Dynamique et constitution d’un projet, P.I.E.- Peter Lang,
Bern, Bruxelles.

Zaccaï E. and Bauler T. (forthcoming), ‘Les indicateurs de développement durable’, In : Le Dictionnaire du développement durable. Institut pour un Développement Durable – Ottignies / Centrum voor Duurzame Ontwikkeling – UGent. SSTC, Belgium.

Zey M. (1992), Decision-making: alternatives to rational choice models. Sage, London.

Ziegler H. (2002), ‘How can sustainability become a measure of success in politics?’, In: Bartelmus P. (ed.)
(2002), Unveiling wealth. On money, quality of life and sustainability. Kluwer Academic Publishers.
Dordrecht, Boston, London.

