
The current issue and full text archive of this journal is available at

www.emeraldinsight.com/2046-6099.htm

SASBE
2,2

106

A review of building/
infrastructure sustainability
reporting tools (SRTs)
Renard Y. J. Siew, Maria C. A. Balatbat and David G. Carmichael
The University of New South Wales, Sydney, New South Wales, Australia
Abstract
Purpose – Buildings/infrastructure are recognised to have a significant impact on the environment
and the community, and hence there is pressure on industry practitioners to incorporate environmental
and social considerations in addition to the traditional cost, time and quality. The development of
sustainability reporting tools (SRTs) to assist in the management of green building/infrastructure
projects is pivotal in informing on progress in sustainability practices. However, the rapid growth
of SRTs in the last decade, with different criteria and methodology, has created complications
for stakeholders.
Design/methodology/approach – The paper provides a comprehensive review of tools to guide
practitioners, property investors, policy makers and developers towards making informed choices in
green building/infrastructure projects. Comparative analyses, benefits and limitations of these tools
are discussed in the paper.
Findings – Some of the findings from the analysis of SRTs include: an emphasis on environmental
issues; scoring which does not account for uncertainty or variability in assessors' perceptions; lack of
published reasoning behind the allocation of scores; inadequate definition of scales to permit
differentiation among projects; and the existence of non-scientific benchmarks.
Originality/value – The paper departs from earlier reviews to include a discussion on infrastructure
SRTs, life cycle tools, and issues broader than the environment. Changes and additions, subsequent to
earlier reviews, have been made to SRTs, making the updated review provided here useful.
Keywords Sustainable development, Infrastructure, Buildings, Rating tools, Sustainability criteria,
Sustainability indicators, Reporting
Paper type General review

Smart and Sustainable Built Environment
Vol. 2 No. 2, 2013
pp. 106-139
© Emerald Group Publishing Limited
2046-6099
DOI 10.1108/SASBE-03-2013-0010

1. Introduction
Sustainable development has been internationally agreed as a key goal for policy
makers to guide development at global, national and local levels (Singh et al., 2009).
The World Economic Forum (2011, p. 11) identifies the building sector as an area which
needs to be addressed because it accounts for 40% of the world's energy use, 40% of
carbon output and consumes 20% of available water. The large use of electricity in
buildings has been identified as one of the main culprits for high emissions across the
globe. The Centre for International Economics Canberra and Sydney (2007) reports
that 23 per cent of the total greenhouse gas emissions in Australia come from
the energy demand of the building sector, while the US Green Building Council
(USGBC, 2011) claims that residential and commercial buildings together account for
39 per cent of total emissions in the USA, more than in any other country in the
world except China.
The increased recognition that buildings are substantial carbon dioxide (CO2)
emitters (Reed et al., 2009; Ürge-Vorsatz and Novikova, 2008; Buchanan and Honey,
1994; Levermore, 2008), and contribute significantly to climate change, puts pressure
on construction industry practitioners to incorporate sustainability goals aside from
the traditional project goals of cost, time and quality (Fernandez-Sanchez and
Rodríguez-López, 2010). Translating sustainability goals into action at the project level
is complicated by the individual characteristics of countries, their cultures, climates
and types of buildings (Ugwu and Haupt, 2007).
Against this background, there is a widely recognised need to identify metrics and
tools that would help articulate the extent to which current activities are either
sustainable or not sustainable (Singh et al., 2009). This has been the key motivator
for the development and increased popularity of sustainability reporting tools
(SRTs) in the building sector and the civil engineering infrastructure sector.
Infrastructure includes transport (roads and bridges, bus and cycle ways, footpaths,
railways), water (sewage and drainage, water storage and supply), energy
(transmission and distribution) and communication (transmission and distribution)
among others (AGIC, 2012). This paper provides a review of available tools used
to assess and report sustainability in the infrastructure and building sectors. The tools
are commonly used in their country of origin, particularly where use is legislated, but are
also adopted in other countries.
Cole (1999) suggests that SRTs, used with the intent to evaluate green
performance, usually share a few common characteristics: an emphasis on the
assessment of resource use and ecological loadings; assessment of design
intentions and potential through prediction rather than actual real-world performance;
use of performance scoring as an additive process; and a performance summary,
certificate or label. Cole (2005) adds that SRTs are not only a means to facilitate
the reduction of environmental impacts, but are also increasingly used by the banking
industry as a basis for risk and real estate valuations and in obtaining development
approval. SRTs provide industry standard guidelines and allow comparability
across projects. For building owners and operators, using SRTs demonstrates
commitment to corporate social responsibility (CSR), and permits staying ahead of
future government regulations (Green Building Council of Australia (GBCA), 2012b).
Developing an ideal SRT is challenging because it needs to be able to satisfy all
stakeholders' concerns (Ding, 2008).
SRTs have been in existence for over a decade in a number of countries, and
were introduced in an effort to better understand the sustainability level of buildings and
infrastructure. While it may be argued that different climates and cultures, and the
different nature of buildings/infrastructure for each country may warrant unique
reporting tools, the rapid growth of SRTs has made sustainability comparisons more
complicated for stakeholders, for example, property investors (Reed et al., 2009), who rely
on such tools to make informed investment decisions. According to Nguyen and Altan
(2011), although there are many registered building SRTs, only a few of them are widely
acknowledged. Infrastructure SRTs and life cycle tools are less commonly discussed.
This paper departs from other reviews (Ding, 2008; Reed et al., 2009; Mitchell, 2010;
Berardi, 2012; Sev, 2011), to include a discussion on infrastructure SRTs, life cycle tools,
and issues broader than the environment. Berardi (2012) provides a review of building
SRTs in three categories: total quality assessment, life cycle analysis (LCA) and energy
consumption evaluation. However, because of the wide scope adopted, the review on life
cycle tools is not extensive. Changes subsequent to the above reviews have been made to
some SRTs, for example, Leadership in Energy and Environmental Design (LEED),
making the updated review provided here useful. As well, this paper summarises the
empirical evidence on the benefits of engaging in SRTs, and provides a critique of the tools.
The Global Reporting Initiative (GRI) was set up with the intention of providing an
international sustainability reporting framework (Global Reporting Initiative (GRI), 2011).



Under this framework, specific reporting guidelines for the construction and real
estate sector are available. However, since the context provided by the GRI guidelines
is more applicable at a corporate level, it is not reviewed here. The scope of this paper is
SRTs for building/infrastructure projects.
The structure of the paper is as follows. The following sections explore the nature
of major building SRTs, infrastructure SRTs and life cycle tools applicable to both
buildings and infrastructure. A critique of these tools and suggestions for future
research are given.
This paper acknowledges that the multiplicity of terms in SRTs can be confusing to
the reader. As such, it is important to clarify some of this terminology upfront.
Typically, for most SRTs, there are hierarchical levels of sustainability criteria. To
ensure consistency, the top (highest) level will be referred to here as criteria, and the
next (lower) level as subcriteria.
The review provided here will be of interest to a range of stakeholders:
construction industry practitioners, real estate investors and developers involved in
making decisions about green building/infrastructure projects. As well, it will serve
as a useful reference for the development of the next generation of SRTs.
2. SRTs for buildings
A review of some of the major tools applicable to buildings is given. This is followed,
after a similar review for infrastructure and life cycle tools, by a critique.
Building Research Establishment Environmental Assessment Method (BREEAM)
BREEAM, established in 1990, was first launched in the UK with office buildings
in mind (Bonham-Carter, 2010; Sharifi and Murayama, 2013) but later expanded
in scope to also include specific schemes for residential housing and neighbourhoods.
It is perceived to be one of the world's foremost environmental reporting tools for
buildings (Crawley and Aho, 1999). Scores are awarded to ten criteria (management,
health and well-being, energy, transport, water, materials, waste, land use and ecology,
pollution and innovation) according to performance, and summed to produce an
overall score. This score is then matched to an award: pass, good, very good, excellent
or outstanding.
Table I highlights both the criteria and subcriteria in BREEAM. Scores are awarded
upon meeting the agreed performance targets for each of the subcriteria.
The award benchmarks for new buildings, refurbishments and, where applicable,
fit-out projects, are presented in Table II. The BREEAM tool offers a set of
weightings to be taken into account as part of the assessment process (see Table III)
(BREEAM, 2012).
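The additive scoring and banding just described can be sketched in a few lines. This is an illustrative calculation only, not an official BREEAM calculator: the weightings follow Table III (new-build column) and the award bands follow Table II, while the treatment of innovation as bonus credits on top of the 100 per cent base, and the section percentages used, are assumptions of the sketch.

```python
# Illustrative BREEAM-style aggregation (not an official calculator).
# Section percentages are weighted per Table III (new-build column);
# innovation is treated here as up to 10% of additional credits, an
# assumption of this sketch; the total is banded per Table II.

WEIGHTS = {  # new-build weightings (%), Table III
    "Management": 12, "Health and well-being": 15, "Energy": 19,
    "Transport": 8, "Water": 6, "Materials": 12.5, "Waste": 7.5,
    "Land use and ecology": 10, "Pollution": 10,
}
INNOVATION_WEIGHT = 10  # additional credits on top of the 100% base

BANDS = [(85, "Outstanding"), (70, "Excellent"), (55, "Very Good"),
         (45, "Good"), (30, "Pass"), (0, "Unclassified")]

def breeam_award(section_pct, innovation_pct=0):
    """section_pct: % of available credits achieved per criterion."""
    overall = sum(WEIGHTS[c] * section_pct[c] / 100 for c in WEIGHTS)
    overall += INNOVATION_WEIGHT * innovation_pct / 100
    return overall, next(a for floor, a in BANDS if overall >= floor)

# Hypothetical project achieving 60% of credits in every criterion
score, award = breeam_award({c: 60 for c in WEIGHTS})  # -> 60, "Very Good"
```

Because the scoring is a simple weighted sum, a project's band can be read off directly once the per-criterion percentages are known.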
LEED
LEED was developed by the USGBC in 2000. Since its inception, LEED has grown to
encompass more than 14,000 projects in the USA and more than 30 countries (Nguyen
and Altan, 2011). This tool promotes sustainable building and development practices
through a suite of reporting, and recognises projects which are committed to better
environmental and health performance (LEED, 2012). Two major building typologies
covered by LEED are:
(1) New Construction and Major Renovations v2009. The criteria and available scores (in parentheses) for each criterion are as follows: sustainable sites (26), water efficiency (10), energy and atmosphere (35), indoor environmental quality (IEQ) (15), innovation in design (6), regional priority (4) and materials and resources (14) (LEED, 2009a).
(2) Existing Buildings: Operations and Maintenance v2009. The criteria and available scores (in parentheses) for each criterion are as follows: sustainable sites (26), water efficiency (14), energy and atmosphere (35), IEQ (15), innovation in design (6), regional priority (4) and materials and resources (10) (LEED, 2009b).

Table I. BREEAM criteria and subcriteria
Management: commissioning; construction site impacts; security
Health and well-being: daylight; occupant thermal comfort; acoustics; indoor air and water quality; lighting
Energy: CO2 emissions; low or zero carbon technologies; energy sub-metering; energy efficient building tools
Transport: public transport network connectivity; pedestrian and cyclist facilities; access to amenities; travel plans and information
Water: water consumption; leak detection; water re-use and recycling
Materials: embodied life cycle impact of materials; materials re-use; responsible sourcing; designing for robustness
Waste: construction waste; recycled aggregates; recycling facilities
Pollution: refrigerant use and leakage; flood risk; NOx emissions; watercourse pollution; external light and noise pollution
Land use and ecology: site selection; protection of ecological features; mitigation/enhancement of ecological value
Innovation: new design and construction methods not formally recognised
Source: BREEAM (2012)

Table II. BREEAM award benchmarks
Unclassified: <30%
Pass: ≥30%
Good: ≥45%
Very Good: ≥55%
Excellent: ≥70%
Outstanding: ≥85%
Source: BREEAM (2012)

For both typologies, scores are accumulated using a base of 100 (innovation in design
and regional priority are added separately), and rated according to the scale shown
in Table IV.

Table III. BREEAM's criteria weightings
Criterion | New builds, extensions and major refurbishments (%) | Building fit-out only, where applicable to scheme (%)
Management | 12 | 13
Health and well-being | 15 | 17
Energy | 19 | 21
Transport | 8 | 9
Water | 6 | 7
Materials | 12.5 | 14
Waste | 7.5 | 8
Land use and ecology | 10 | n/a
Pollution | 10 | 11
Innovation | 10 | 10
Source: BREEAM (2012)

There are embedded prerequisites within each criterion (except for sustainable sites
under Existing Buildings: Operations and Maintenance v2009) which must be met
before a score is awarded. LEED for Neighbourhood Development (2009) is the latest
USGBC reporting tool, which incorporates site selection, design and construction
elements (Hurley and Horne, 2006), taking into account both landscape and regional
contexts (Sharifi and Murayama, 2013).
Green Star
Green Star, developed by the Green Building Council of Australia (GBCA), is a
comprehensive voluntary building SRT. It was initially developed to accommodate the
needs of buildings operating in hot climatic areas (Roderick et al., 2009; Tronchin and
Fabbri, 2008). It incorporates ideas from other tools, such as BREEAM and LEED, and
other environmental criteria specific to the Australian environment (Lockwood, 2006).
Green Star covers the nine criteria shown in Table V, where scores are awarded if
targets are met.
A single, overall score is calculated based on a series of steps. First, for each
criterion, a score is determined. Then, given weightings are applied. All the weighted
criteria scores are summed. Innovation points can be obtained by either engaging with
innovative strategies and technologies or exceeding the Green Star benchmark.
Innovation points are added to the weighted criteria scores. This gives an overall score,
which is then matched to an award (see Table VI). The GBCA only certifies buildings
with 4, 5 or 6 Green Stars.
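The aggregation steps above can be illustrated with a short sketch. The criterion weightings and scores below are hypothetical, not official GBCA values; only the star bands follow Table VI.

```python
# Sketch of the Green Star aggregation described above: weight each
# criterion score, sum, add innovation points, then band per Table VI.
# The weightings and scores are hypothetical illustrations.

STARS = [(75, "6 star"), (60, "5 star"), (45, "4 star"),
         (30, "3 star"), (20, "2 star"), (10, "1 star")]

def green_star(criterion_scores, weightings, innovation_points=0):
    """Weighted sum of criterion scores plus additive innovation points."""
    weighted = sum(criterion_scores[c] * weightings[c] / 100
                   for c in weightings)
    overall = weighted + innovation_points
    for floor, award in STARS:
        if overall >= floor:
            # Note: the GBCA certifies only 4, 5 and 6 star results
            return overall, award
    return overall, "Not rated"

weights = {"Energy": 25, "Water": 15, "IEQ": 20, "Materials": 15,
           "Transport": 10, "Management": 10, "Emissions": 5}  # hypothetical
overall, award = green_star({c: 55 for c in weights}, weights,
                            innovation_points=3)  # -> 58, "4 star"
```

The additive innovation points are what allow a project to exceed the Green Star benchmark, lifting it into a higher star band.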
Table IV. LEED awards
Certified: 40-49
Silver: 50-59
Gold: 60-79
Platinum: 80 and above
Source: LEED (2012)

Table V. Green Star criteria
Management: scores address the adoption of sustainable development principles from project conception through design, construction, commissioning, tuning and operation
Energy: scores target reduction of greenhouse emissions from building operation by addressing energy demand reduction, use efficiency, and generation from alternative sources
Water: scores address reduction of potable water use through efficient design of building services, water reuse and substitution with other water sources (specifically rainwater)
Land use and ecology: scores address a project's impact on its immediate ecosystem, by discouraging degradation and encouraging restoration of flora and fauna
Indoor Environment Quality (IEQ): scores target environmental impact along with occupant wellbeing and performance by addressing heating, ventilation and air conditioning (HVAC), lighting, occupant comfort and pollutants
Transport: scores reward the reduction of demand for individual cars by both discouraging car commuting and encouraging use of alternative transportation
Materials: scores target resource consumption through material selection, reuse initiatives and efficient management practices
Emissions: scores address sources of pollution from buildings and building services to the atmosphere, watercourse and local ecosystems
Innovation: Green Star seeks to reward marketplace innovation that fosters the industry's transition to sustainable building
Source: GBCA (2012a)

Table VI. Green Star awards
1 star: 10-19 (Minimum practice)
2 star: 20-29 (Average practice)
3 star: 30-44 (Good practice)
4 star: 45-59 (Best practice)
5 star: 60-74 (Australian excellence)
6 star: ≥75 (World leadership)
Sources: GBCA (2012a), Roderick et al. (2009), Mitchell (2010)

Comprehensive Assessment System for Building Environmental Efficiency (CASBEE)
CASBEE was introduced by the Japan Sustainable Building Consortium in 2002 to
promote the concept of sustainable buildings (Sev, 2011). CASBEE defines sustainable
buildings as those that are designed to save energy and resources, recycle materials,
reduce emissions of toxic substances to the environment, harmonise with the local
climate, traditions and culture, and lastly to sustain and improve the quality of human
life while maintaining the capacity of the ecosystem at both local and global levels
(CASBEE, 2002).
In CASBEE, buildings are assessed across their life cycles. Environmental load (L)
and building performance (Q) are distinguished, where scoring uses progressive levels
1 to 5, leading to the Building Environmental Efficiency ratio, BEE = Q/L. BEE
represents the overall environmental performance of a building; Q incorporates quality
(consisting of combined scores of various subcriteria, such as indoor environment,
quality of services and outdoor environment on site), and L incorporates energy,
resources and materials, and off-site environment (CASBEE, 2002).
The BEE graph (Figure 1) shows that the higher the Q value and the lower the
L value, the more sustainable the building is. Ordinary buildings are represented by a
gradient of BEE = 1.0. Depending on which region of Figure 1 a BEE value falls
into, a different CASBEE award is given: C (Poor), B- (Fairly poor), B+ (Good),
A (Very good) and S (Excellent). In 2008, CASBEE included the consideration of global
warming by estimating life cycle CO2 emissions as part of its off-site environment
subcriterion (Sharifi and Murayama, 2013).
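A minimal sketch of the CASBEE calculation follows, using only the BEE gradient thresholds visible in Figure 1; the official tool applies further conditions on Q and L which are not modelled here, and the Q and L values used are hypothetical.

```python
# Simplified CASBEE sketch: BEE = Q / L, with the award taken from the
# BEE gradient regions of Figure 1 only. The official assessment applies
# additional conditions not modelled here; Q and L below are hypothetical.

def casbee_award(Q, L):
    bee = Q / L
    if bee >= 3.0:
        return bee, "S (Excellent)"
    if bee >= 1.5:
        return bee, "A (Very good)"
    if bee >= 1.0:
        return bee, "B+ (Good)"
    if bee >= 0.5:
        return bee, "B- (Fairly poor)"
    return bee, "C (Poor)"

bee, award = casbee_award(Q=80, L=40)  # hypothetical scores -> BEE = 2.0
```

Because BEE is a ratio, the same award can be reached either by raising building performance Q or by lowering environmental load L, which is exactly the trade-off the BEE graph visualises.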
Hong Kong Building Environmental Assessment Method (HK-BEAM)
HK-BEAM was introduced in 1996 by the Hong Kong BEAM Society, a not-for-profit
organisation consisting of professionals within the building industry (Chan and Chu,
2010). It began primarily as a voluntary environmental reporting tool for high-rise
buildings, and subsequently branched out into two main typologies covering all local
buildings: the HK-BEAM Version 4/04 for new buildings (for planning, design,
construction, commissioning, with design and specifications for deconstruction) and
the HK-BEAM Version 5/04 for existing buildings (for management, operation and
maintenance) (Lee et al., 2007; Chan and Chu, 2010). From January 2013, HK-BEAM
Plus v1.2 became mandatory.
HK-BEAM is comparable to other SRTs. The criteria (scores + bonus points) are
as follows: site aspects (18 + 1); material aspects (11 + 2); energy aspects (39 + 2);
water aspects (7 + 2); IEQ (30 + 3) and innovative techniques (1 + 5) (HKGBC and
BEAM Society, 2012). Suggested weightings for these criteria are shown in Table VII.
These weightings differ depending on whether it is an existing or new building.

Figure 1. The BEE graph: building performance Q (Quality, vertical axis, 0-100) is plotted against environmental load L (Load, horizontal axis, 0-100); gradient lines at BEE = 3.0, 1.5, 1.0 and 0.5 divide the plane into the award regions S, A, B+, B- and C. Source: CASBEE (2002)

Similar to BREEAM, the overall assessment grade is determined by the percentage of
applicable scores obtained under each criterion, taking into account its weighting
factor. SA, EU and IEQ are perceived to be important, and therefore a minimum
percentage must be obtained in each of these criteria to qualify for an overall grade (see
Table VIII). The overall grade (per cent) achieved is mapped to an award (Table VIII).
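The grading logic can be sketched as follows, using the new-building weightings of Table VII and the minimum-percentage thresholds of Table VIII; the per-criterion percentages fed in are hypothetical.

```python
# Sketch of the HK-BEAM grading logic described above: a weighted overall
# percentage plus minimum percentages in SA, EU and IEQ (Table VIII).
# Weightings follow Table VII (new buildings); inputs are hypothetical.

WEIGHTS_NEW = {"SA": 25, "MA": 8, "EU": 35, "WU": 12, "IEQ": 20}

# (award, minimum overall %, minimum % in each of SA, EU and IEQ)
GRADES = [("Platinum", 75, 70), ("Gold", 65, 60),
          ("Silver", 55, 50), ("Bronze", 40, 40)]

def hk_beam_award(pct):
    """pct: percentage of applicable credits obtained per criterion."""
    overall = sum(WEIGHTS_NEW[c] * pct[c] / 100 for c in WEIGHTS_NEW)
    for award, min_overall, min_criterion in GRADES:
        if overall >= min_overall and all(
                pct[c] >= min_criterion for c in ("SA", "EU", "IEQ")):
            return award
    return "Unclassified"

award = hk_beam_award({c: 60 for c in WEIGHTS_NEW})  # -> "Silver"
```

The per-criterion minima mean that a project cannot compensate for a weak energy or indoor environment result with strength elsewhere, unlike a purely additive scheme such as BREEAM.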


Nationwide House Energy Rating Scheme (NatHERS)


NatHERS was initiated in 1993, but only implemented under the Energy Smart Home
Policy in 1998, as part of the Australian Government's initiative to improve the thermal
performance of buildings (Kordjamshidi et al., 2006). The heat energy gains and losses
associated with the design of a building are calculated, and the amount of artificial
heating and cooling required to maintain a comfortable temperature is determined. The
results are used to establish a star award for the building, from 0 (poor performance)
through to 10 (requiring virtually no energy for heating or cooling). Awards
are usually given before a residential building is occupied (Department of Climate
Change and Energy Efficiency, 2010).


Building Sustainability Index (BASIX)


BASIX is a web-based self-assessment tool (Vijayan and Kumar, 2005) which analyses
the design of a proposed building (single dwelling, multi-dwelling or alterations and
additions) and benchmarks against anticipated water consumption and greenhouse
gas emission targets (New South Wales (NSW) Government Planning and
Infrastructure, 2013). These targets are derived from averages for similar
developments (NSW Government Planning and Infrastructure, 2013). BASIX can be
used across all residential building types and is part of the development application
process (NSW Government Planning and Infrastructure, 2013).

Table VII. HK-BEAM criteria weightings
Criterion | Existing buildings (%) | New buildings (%)
Site aspects (SA) | 18 | 25
Material aspects (MA) | 12 | 8
Energy use (EU) | 30 | 35
Water use (WU) | 15 | 12
Indoor environmental quality (IEQ) | 25 | 20
Source: HKGBC and BEAM Society (2012)

Table VIII. HK-BEAM awards (minimum percentages required)
Award | Overall (%) | SA (%) | EU (%) | IEQ (%)
Platinum | 75 | 70 | 70 | 70
Gold | 65 | 60 | 60 | 60
Silver | 55 | 50 | 50 | 50
Bronze | 40 | 40 | 40 | 40
Source: HKGBC and BEAM Society (2012)



Building actual performance


While SRTs generally focus on the potential environmental impact of design (design
performance), some tools only inform on actual building performance. Typically,
assessments of actual building performance are conducted on an annual basis
(NABERS, 2011). Two examples are National Australian Built Environmental Ratings
Scheme (NABERS) and Energy STAR, which are reviewed here.
NABERS
NABERS, launched in 1998 in Australia, informs on the actual environmental
performance of buildings, tenancies and homes. Criteria that are assessed include
water usage, energy usage, waste and indoor environment. There are four types of
reporting tools available for: offices, shopping centres, hotels and homes (NABERS,
2011). The awards range from 1 (worst) to 5 (best), reflecting the point-in-time annual
performance of buildings (with reference to data from 12 months of occupation/use). For
office buildings, there is a subdivision into tenancy, base building and whole building
as shown in Table IX. The tenancy subdivision covers only tenanted space and is
applicable to tenants occupying either a leased or privately owned space within a
commercial building. For building owners and property managers, two subdivisions
are applicable: base building which focuses on central building services and common
areas; and whole building which covers tenanted space, central building services and
common areas (NABERS, 2011).
Mitchell (2010) describes the assessment process in relation to NABERS' energy
criterion. The first step involves converting energy use into greenhouse gas
equivalents. This is done with reference to the emissions intensity of the standard
energy mix across the relevant state/territory of Australia. For example, if the
building is located in Victoria, its emissions relating to electricity are calculated
based on Victoria's electricity mix, which mainly comes from high-emitting
brown-coal-fired power stations. The calculated raw emissions are then normalised
by taking into consideration the hours of use of the premises, the occupant and
equipment density, and the local climate. These normalised values are then divided by
the area assessed, giving emissions per square metre. Finally, this is compared against
the benchmark for the relevant state/territory and type of building, to establish a
suitable award. An example of a NABERS award for a base building using normalised
emissions per square metre is shown in Table X. The normalisation and benchmarking
process is reviewed periodically. Intermediate performance, described in terms of half
stars, is allowed.
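The normalisation steps just described can be sketched as follows. The emission factor, normalisation multiplier and building inputs here are hypothetical, not official NABERS parameters; only the star thresholds follow Table X.

```python
# Sketch of the NABERS energy steps described above: convert energy use
# to emissions with a state/territory emission factor, normalise for
# hours of use, density and climate, divide by the assessed area, then
# band against Table X. Factor, multiplier and inputs are hypothetical.

BANDS = [(71, 5), (103, 4), (135, 3), (167, 2), (199, 1)]  # kg CO2/m2

def nabers_stars(energy_kwh, emission_factor, norm_multiplier, area_m2):
    """emission_factor: kg CO2 per kWh for the state's electricity mix;
    norm_multiplier: adjustment for hours of use, density and climate."""
    raw_kg = energy_kwh * emission_factor
    per_m2 = raw_kg * norm_multiplier / area_m2
    for threshold, stars in BANDS:
        if per_m2 <= threshold:
            return per_m2, stars
    return per_m2, 0  # worse than the 1-star benchmark

per_m2, stars = nabers_stars(200_000, 1.2, 0.5, 1_000)  # -> 120.0, 3 stars
```

The normalisation step is what makes awards comparable across buildings: two buildings with identical raw consumption can receive different stars once occupancy hours, density and climate are factored in.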
Confusion may exist between Green Star and NABERS, since the awards are quite
similar (Mitchell, 2010). The differences between Green Star and NABERS are
summarised in Table XI.

Table IX. NABERS office building subdivisions
Tenancy: tenanted space
Base building: central building services and common areas
Whole building: a combination of the above
Source: NABERS (2011)

Energy STAR
Energy STAR derives from the US Environmental Protection Agency and the US
Department of Energy. Essentially, it is a tool used to track and benchmark a building's
energy performance. An energy performance scale is developed based on (Energy
STAR, 2011):
• Identification of the best available survey data representative of buildings nationwide, differentiated by size, energy use and operation. One such example is the US Department of Energy's Commercial Building Energy Consumption Survey (CBECS), conducted once every four years (see EIA, 2012).
• Assessment of the characteristics of the buildings surveyed, via a statistical analysis.
• From the results of the statistical analysis, development of a model to predict the energy use of a certain type of building, accounting for its location and type of operation.
• For each surveyed building, calculation of an energy efficiency ratio (actual to predicted energy use).
• Use of the energy efficiency ratio to create a distribution of energy performance for the population of buildings. This forms the Energy STAR performance scale from 1 to 100, where a score of 50 means that the building is at an average level.
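The last two steps above can be sketched as a simple percentile calculation. The scoring rule and survey ratios below are illustrative assumptions, not the EPA's actual regression-based methodology.

```python
# Sketch of the final Energy STAR steps described above: rank a
# building's efficiency ratio (actual / predicted energy use) against a
# survey population to get a 1-100 score, where 50 is roughly average.
# The rule and survey data are illustrative assumptions only.

def energy_star_score(ratio, survey_ratios):
    """A lower ratio means the building uses less energy than predicted."""
    worse_or_equal = sum(1 for r in survey_ratios if r >= ratio)
    score = round(100 * worse_or_equal / len(survey_ratios))
    return max(1, min(100, score))  # clamp to the 1-100 scale

survey = [0.6, 0.8, 1.0, 1.2, 1.4]  # hypothetical survey ratios
score = energy_star_score(1.0, survey)
```

Normalising through a predicted-use model before ranking is what lets a small, intensively used building be compared fairly against a large, lightly used one.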

Table X. NABERS award for a base building (normalised emissions per square metre)
1 star: 199 kg CO2/m2 (poor energy management)
2 star: 167 kg CO2/m2 (average building performance)
3 star: 135 kg CO2/m2 (very good; current market best practice)
4 star: 103 kg CO2/m2 (excellent; strong performance)
5 star: 71 kg CO2/m2 (exceptional; best building performance)

Table XI. Differences between Green Star and NABERS
Environmental impact assessment: Green Star - potential; NABERS - actual
Scope: Green Star - design; NABERS - performance
Phase: Green Star - design phase; NABERS - in operation/use
Owner: Green Star - GBCA; NABERS - Department of Environment, Climate Change and Water NSW
Coverage: Green Star - office, retail, healthcare, education, industrial; NABERS - office, homes, hotel, shopping centres
Certifiable awards: Green Star - 4, 5 or 6 stars; NABERS - 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5 or 5 stars
Legislation: Green Star - accreditation is on a voluntary basis; NABERS - energy rating must be disclosed when leasing or selling
Source: Mitchell (2010)


3. SRTs for infrastructure
A review of some of the major tools applicable to infrastructure is given. This is
followed by a critique.
A Sustainability Poverty Infrastructure Routine for Evaluation (ASPIRE)
ASPIRE was developed by ARUP and Engineers Against Poverty (EAP) for a range
of stakeholders committed to the development of sustainable pro-poor infrastructure
(Gryc, 2012); it informs on poverty reduction performance of infrastructure projects.
It considers four major criteria (the environment, society, economics and
institutions) with breakdowns of four to six subcriteria under each criterion, as
depicted in Figure 2.
A primary environmental consideration is in terms of how a development reduces
impact on natural resources such as air, land, water, biodiversity and materials.
Infrastructure is assessed in terms of how well it meets society's needs equitably and
how it reduces poverty via public health, culture and accessibility to services. Project
viability, macroeconomic effects, livelihood opportunity and the creation of an
equitable economy are considered. The institutions criterion encompasses four
subcriteria, namely policy, governance, skills and reporting; these represent the
capacity and effectiveness of the institutional environment in supporting the delivery
of the infrastructure (Gryc, 2012). In the assessment, the user goes through a series of
questions and is provided with illustrations of best to worst case scenarios to help in
the allocation of non-weighted scores. The aggregated scores for each criterion are then
represented using a traffic light scheme, where green indicates strength and red
indicates weakness.
Australian Green Infrastructure Council (AGIC)
AGIC officially released its Infrastructure Sustainability Rating Tool v1.0 in December
2012. Compared to the majority of SRTs, AGIC adopts a much broader range of criteria,
including: management, procurement and purchasing, climate change adaptation,
energy, water, material, discharges to air, land and water, land, waste, ecology, health,
heritage, stakeholder participation, urban design and innovation. There are three types
of reporting under AGIC, summarised in Table XII, and the level of sustainability of a
project is scored on a 100-point scale (Table XIII).

Figure 2. ASPIRE's framework
Environment: air; land; water; biodiversity; energy; materials
Society: stakeholders; culture; population; services; health; vulnerability
Economics: equity; livelihoods; macro; viability
Institutions: structures; skills; policies; reporting
Source: Gryc (2012)

Non-formal SRTs
Publications to date on infrastructure reporting have focused on the development of
sustainability criteria. For example, Fernandez-Sanchez and Rodríguez-López (2010)
recommend more than 80 criteria for infrastructure projects in Spain. From their study,
they find that 11 of the criteria voted by stakeholders into the top 30 (based on the
analytic hierarchy process) largely involve economic and social issues. These criteria
(with relative importance as a percentage in parentheses) include: health and safety
(3.85 per cent), necessity of work/urgency of work (3.77 per cent), life cycle cost (3.72 per
cent), economic cost/economic benefit (3.24 per cent), project declaration of general
interest (2.96 per cent), public participation and control of the project (2.59 per cent),
barrier effect of the project (2.38 per cent), project governance and strategic
management (2.26 per cent), accessibility for human biodiversity (2.26 per cent), respect
for local customs (2.05 per cent) and increase in economic value (1.42 per cent). Shen et al.
(2007) suggest a sustainability project performance checklist across a project's life
cycle: inception, design, construction, operation and demolition. Ugwu and Haupt
(2007) identify key performance indicators (KPIs) and assessment methods for
infrastructure sustainability from a South African construction industry perspective.
Sahely et al. (2005) propose sustainability criteria for urban infrastructure, focusing on
key interactions and feedback mechanisms between infrastructure and wider
environmental, social and economic concerns. Morrissey et al. (2012) critically appraise
project impacts from an ecological limits perspective.


4. Life cycle tools for buildings/infrastructure
Life cycle tools include LCA, input-output (IO) analysis and material flow accounting
(MFA), the Pavement Life Cycle Assessment Tool for Environmental and Economic
Effects (PaLATE), and Life Cycle Analyser.
Table XII. AGIC's three reporting types
Design (applied at the end of planning and design): awarded based on the inclusion of design elements and construction requirements
As-built (applied at the end of construction): includes sustainability performance measured during construction and built into the infrastructure asset
Operation (applied during operation): given after 24 months of operation, based on the measured green performance of the operating infrastructure
Source: AGIC (2012)

Award

Score

Not eligible
Commended
Excellent
Leading

o25
25-o50
50-o75
75-100

Source: AGIC (2012)

Table XII.
AGICs three
reporting types

Table XIII.
AGICs award
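The award banding in Table XIII is a simple step function of the certified score. As a minimal sketch (Python is used for illustration only; the function name is ours, not part of the AGIC scheme):

```python
def agic_award(score: float) -> str:
    """Map an AGIC IS rating score (0-100) to its award band (Table XIII)."""
    if score < 25:
        return "Not eligible"
    if score < 50:
        return "Commended"
    if score < 75:
        return "Excellent"
    return "Leading"

# Boundary scores fall into the higher band, e.g. a score of exactly 50
# is rated "Excellent" rather than "Commended".
```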


LCA
In LCA, the environmental impacts of activities and raw materials over a building's life
cycle (manufacture, transportation, deconstruction and recycling) are assessed. The
four phases in LCA are: goal and scope, life cycle inventory, life cycle impact
assessment and improvement (Hsu, 2010).
In setting a goal and scope, a person may wish to investigate, for example, which
structural design has a lesser environmental impact, or how a structure can be
further improved to lessen its impact on the environment. Life cycle inventory
involves collecting inflow and outflow data and modelling. Life cycle impact involves
selecting, classifying and characterising the impact on the environment. The final
phase of improvement involves revisiting the earlier phases. This is necessary
"in order to identify the most important aspects of impact assessment, check
the validity of results and redo aspects of the LCA that need more work"
(Hsu, 2010, p. 15).
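As a minimal numerical sketch of the inventory and impact assessment phases, inventory flows per life cycle stage can be characterised into a single impact category and summed. (All flows and characterisation factors below are hypothetical; real studies draw on databases such as Ecoinvent.)

```python
# Characterise inventory flows into one impact category (global warming
# potential, kg CO2-eq) and aggregate across life cycle stages.
GWP_FACTORS = {"CO2": 1.0, "CH4": 28.0}  # kg CO2-eq per kg emitted (illustrative)

inventory = {  # kg emitted per stage (hypothetical building example)
    "manufacture":    {"CO2": 900.0, "CH4": 1.25},
    "transportation": {"CO2": 150.0, "CH4": 0.25},
    "deconstruction": {"CO2": 60.0,  "CH4": 0.0},
}

impact_by_stage = {
    stage: sum(GWP_FACTORS[gas] * kg for gas, kg in flows.items())
    for stage, flows in inventory.items()
}
total_gwp = sum(impact_by_stage.values())  # life cycle total, kg CO2-eq
```

The stage-level breakdown is what supports the improvement phase: it shows where (here, manufacture) redesign effort would reduce impact most.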
Mroueh et al. (2001) use LCA for road and earth construction. Mithraratne and Vale
(2004) take into account embodied and operating energy requirements as well as life
cycle costs over the useful life of a house. Many countries have also developed specific
LCA tools, for example, BEES in the USA, BOUSTEAD and ENVEST in England,
Ecoinvent in Switzerland and GaBi in Germany (see Appendix Table AI).
IO analysis
IO analysis, as used in macroeconomic studies of monetary flows, has been adapted to
environmental impact analysis (Piluso et al., 2008). IO tables (Xu et al., 2010) have rows
representing outputs, and columns representing inputs. From an IO table, a matrix of
IO coefficients can be derived. These IO coefficients (also known as technical
coefficients) represent the amount of input required to produce one unit of output
(see Xu et al., 2010 for a mathematical formulation). For sustainability analysis,
typically a simplified IO matrix is adopted (Piluso et al., 2008; Xu et al., 2010) and the
technical coefficients could potentially help answer questions such as how much CO2
has been emitted in the production of one tonne of steel (Born, 1996). Norman et al.
(2006) use IO and LCA combined to estimate the energy use and greenhouse gas
emissions associated with the manufacture of construction materials for infrastructure,
buildings and transportation.
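The mechanics can be sketched with a hypothetical two-sector example (all figures are invented for illustration): technical coefficients are obtained by dividing each column of the IO table by that sector's total output, and the Leontief inverse then gives total (direct plus indirect) requirements, which can be combined with direct emission coefficients to estimate embodied CO2 intensities.

```python
import numpy as np

# Hypothetical two-sector IO table: Z[i, j] = input from sector i
# consumed by sector j; x = total output of each sector.
Z = np.array([[20.0, 30.0],
              [40.0, 10.0]])
x = np.array([100.0, 80.0])

A = Z / x                          # technical coefficients (input per unit output)
L = np.linalg.inv(np.eye(2) - A)   # Leontief inverse: total requirements

e = np.array([1.8, 0.6])           # direct CO2 per unit output (hypothetical)
intensity = e @ L                  # embodied CO2 per unit of final demand
```

Each embodied intensity exceeds the corresponding direct coefficient in e because it also counts emissions embodied in the supply chain, which is what makes the approach useful for questions like the steel example above.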
MFA
MFA is used to characterise the flow of materials, products and substances in a defined
system (Huang and Tsu, 2003; Kahhat and Williams, 2012). It applies a conservation
law: total inputs must equal total outputs plus any net accumulation (EUROSTAT,
2001). EUROSTAT (2001, pp. 20-24) suggests making a distinction between material
flows: direct vs indirect; used vs unused; domestic vs rest of the world.
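The conservation law amounts to a simple balance check on the accounts (the flow categories and figures below are hypothetical):

```python
# Material flow account for a defined system over one year, in tonnes (hypothetical)
inputs = {"domestic_extraction": 120.0, "imports": 45.0}
outputs = {"exports": 30.0, "emissions_and_waste": 95.0}
net_accumulation = 40.0  # net addition to stock (e.g. buildings, infrastructure)

# Conservation law: total inputs = total outputs + net accumulation,
# so a consistent account should balance to (approximately) zero.
balance = sum(inputs.values()) - sum(outputs.values()) - net_accumulation
```

A non-zero balance signals either a measurement error or an unaccounted flow, which is how MFA exposes gaps in the system definition.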
PaLATE
PaLATE is a spreadsheet-based tool used in the assessment of environmental and
economic impacts for pavements and roads. The tool depends on knowing the design,
roadway cost, construction materials and transportation information (both mode and
distances), as well as any road maintenance involved. The environmental effects
covered by PaLATE include energy consumption, CO2 emissions, NOx emissions,
PM10 emissions, CO emissions and leachate (Horvarth et al., 2007).

Life Cycle Analyzer


Life Cycle Analyzer is software developed specifically to analyse the entire life cycle of
concrete in all types of production, be it site-poured or used in prefabrication. It allows
the calculation of both environmental and cost impacts of different concrete mix
designs. The output from this software can feed into major building SRTs such as
BREEAM and LEED (BASF, 2012).
5. Benefits of engaging with SRTs
Several studies have highlighted the positive impact of engaging with building/
infrastructure SRTs. Lee and Guerin (2010) find that LEED-certified buildings yield
positive benefits in relation to employee job performance. Miller et al. (2008) address
the question of the benefits of investing in energy savings and environmental design.
In their study, they use US-based Energy STAR office buildings as one set of green
buildings, together with LEED-certified buildings as an alternative, with large samples
of non-Energy STAR and non-LEED-rated buildings included in the analysis. They
conclude that "going green does pay off" with significant rental rate differentials.
Similar conclusions are found across other studies such as Fuerst and McAllister (2008)
cited in Miller et al. (2008), who compare rentals for LEED and non-LEED buildings,
and Eichholtz et al. (2009), who compare rentals for green and non-green buildings.
Ries et al. (2006), through a case study, explore the benefits of moving into a new
LEED-certified facility. They find that manufacturing productivity increased by 25 per
cent, while energy usage decreased by 30 per cent on a square foot basis. Heerwagen
(2000) suggests that green buildings are often linked to higher productivity, better
health and well-being, and improvements in organisational performance, such as lower
staff turnover.
Wiley et al. (2010) conduct an empirical study to test the relationship between
energy-efficient design and leasing markets for commercial real estate, and find that
green buildings have superior rents and occupancy rates. Fuerst (2009) finds no
reversal of this trend due to the economic downturn for both LEED- and Energy
STAR-certified buildings, and also argues that growth for this market segment is
likely. Research focusing on the financial performance of green office buildings in
Australia finds that a 5-star NABERS energy-rated building delivers a higher premium
value compared to a 3- to 4.5-star NABERS energy-rated building. Newell et al. (2011)
report that Green Star-certified buildings show a green premium value. Hes (2007)
justifies the effectiveness of SRTs using nine measures: reduction in environmental
impact, positive social impact, positive effect on occupant comfort, positive effect on
productivity, cost savings, ease of use, rating and modelling accuracy, ability to be
dynamic and support continuous improvement and ability to support innovation
in design.
6. Critique of building/infrastructure SRTs
There has been some recently published critique of mainstream SRTs such as
BREEAM, LEED, CASBEE, HK-BEAM and Green Star (Nguyen and Altan, 2011).
Similarities exist between them as shown in Table XIV. It is observed that all five tools
provide some form of guidance on the design and review phases of projects, offer
training and certification processes to ensure that assessors are up-to-date with these
tools, use both prescriptive and performance-based criteria in the evaluation process,
adopt different schemes most notably for new construction buildings and existing
buildings, as well as provide case studies and manuals. Verification by third parties

Building/
infrastructure
SRTs
119

CASBEE

HK-BEAM






Percentage of credits
achieved (%)

Note:  , Indicates the presence of an attribute


Sources: Nguyen and Altan (2011), BREEAM (2012), LEED (2012), CASBEE (2002), GBCA (2012a), HKGBC and BEAM Society (2012)




score




Design, review
Design
and construction
review
and as-built

Green Star




Graphical representation Total
(BEE graph)




Preliminary design,
Design,
construction design, and
and operation post design

LEED

Offer training and certification




Use of prescriptive and performance based criteria


Existence of different schemes (i.e. new construction,
existing building, etc.)


Provision of case studies and manuals


Verification (by third party)


End product presentation
Percentage of credits Total Score
achieved (%)

Design, review
and construction

Stages of project

Table XIV.
Similarities across
reporting tools
BREEAM

120

Item

SASBE
2,2

exists for all tools. In terms of the final presentation, BREEAM and HK-BEAM
reporting is in the form of percentage of credits achieved, CASBEEs reporting is done
in the form of a BEE graph as illustrated in Figure 1, while the reporting in LEED and
Green Star is done in terms of a total score.
Nguyen and Altan (2011) compare several attributes of these five building SRTs by
scoring them based on: popularity and influence, availability, user-friendliness,
applicability, data collecting process, development and quality of results presentation.
The outcomes of their assessment (Table XV) show that BREEAM and LEED may be
better in terms of applicability and popularity. CASBEE, on the other hand, gives the
highest methodology score, possibly due to its more rigorous nature.
The other strengths of these mainstream SRTs are summarised as follows.
BREEAM:
• encourages energy reduction leading to zero net CO2 emissions;
• sets a minimum standard for sub-metering energy use;
• demonstrates more rigour in terms of public transport accessibility (taking into account routes, hours of service and frequency level); and
• can be independently assessed (Adegbile, 2013).

LEED:
• criteria are publicly reviewed by more than 15,000 members and are arguably more transparent compared to BREEAM (LEED, 2012);
• credits are allocated for heat island effect (trees as shades, or the specification of high solar reflectance materials) (LEED, 2009a);
• credit is given for verification of thermal comfort (post-occupancy); and
• allows for credit interpretation request (CIR) in the event that developers have an alternative means of meeting a credit point.

CASBEE:
• highly rigorous and versatile methodology (Adegbile, 2013); and
• graphical representation of assessments, which can be interpreted easily; this is not present in other tools (CASBEE, 2002).

Table XV. Overall score of reporting tools

Attribute                       BREEAM   LEED   CASBEE   Green Star   HK-BEAM
Popularity (/10)                10       10     6        5            5
Availability (/10)              7        7      7        8            8
Methodology (/15)               11       10     13       9            11
Applicability (/20)             13       13     11.5     10           9
Data-collecting process (/10)   7        7      6        9            8
Accuracy (/10)                  8        7      9        5            5
User-friendliness (/10)         8        10     6        8            8
Development (/10)               8        8      7        8            8
Presentation (/5)               3        3      4        3            4
Final score (/100)              75       75     69.5     65           66

Source: Nguyen and Altan (2011)

HK-BEAM:
• mandatory adoption of BEAM Plus Version 1.2 from 1 January 2013, which allows for standardisation and better comparability (HKGBC and BEAM Society, 2012); and
• allows for CIR in the event that developers have an alternative means of meeting a credit point.

Green Star:
• designed specifically to cater for Australia's unique conditions (energy modelling is consistent with the NABERS tool) (Green Building Council of Australia (GBCA), 2012a); and
• allows for compliance interpretation request (CIR) in the event that developers have an alternative means of meeting a credit point.

These building SRTs have also been criticised for a lack of attention to life cycle
perspectives. Bowyer et al. (2006) claim that there is no requirement for consideration of
life cycle inventory data in LEED; Scheuer and Keoleian (2002) find LEED to be an
unreliable sustainability assessment tool when looked at from a life cycle perspective.
However, a majority of SRTs have now started or are in the process of incorporating
life cycle thinking.
Other concerns regarding SRTs have been raised. Baird (2009) argues for the
inclusion of user performance criteria, claiming that buildings which perform poorly
from a user's point of view are unlikely to be sustainable. Chew and Das (2007, p. 10)
highlight that one of the issues with SRTs is that scores are lost for credits that are beyond
the scope of a project. For example, sustainable site development or provisions related
to fuel-efficient vehicles are not feasible in the case of a commercial building on a tight
site, downtown with a well-established public transport system. Fard (2012) as well as
Fenner and Ryce (2008) argue that "point-hunting" or "green-washing" may become an
issue, where building owners are only concerned about gaining the required points
for certification without actually addressing pertinent issues relating to energy
efficiency and resource preservation. Saunders (2008) notes that different standards are
used in different SRTs, and this makes it difficult to do comparisons between tools.
Based on a normalised set of conditions, Saunders (2008) claims that LEED (USA) uses
a less rigorous, and to a certain extent lower building code standard compared to
Green Star (Australia) or BREEAM (UK).
An analysis of 14 SRTs for buildings/infrastructure, carried out by this paper's
authors, is shown in Appendix Tables AII-AIV, and covers: criteria and subcriteria;
nature of the scoring used; and identification of international standards embedded.
Note that for brevity, only the major tools are explained in detail in
this paper; more information about the other listed tools is available on the respective
web sites indicated in Appendix Table AIII. The findings are summarised here:
First, from Appendix Table AII, it is observed that SRTs have a strong
environmental focus, where a majority of them have adopted subcriteria in areas such
as energy, water, waste, land use and ecology, as well as materials. Researchers (Fenner
and Ryce, 2008; Mateus and Braganca, 2011; Watson et al., 2013; Ding, 2008; Todd
et al., 2001) have also highlighted this point across different SRTs. Many SRTs
(CASBEE, EPRA, BCA Green Mark for Districts, Estidama Pearl Community, Green
Globe, Sustainable Design Scorecard, DGNB-Seal, Protocol ITACA) do not explicitly

mention the incorporation of financial aspects in their assessment (Watson et al., 2013).
Only ASPIRE is found to encompass more balanced triple bottom line criteria of
environmental, social and economic. Ding (2008, p. 456) argues for the inclusion of
economic criteria, claiming that even though a project may be environmentally sound
it could be very expensive to develop. While it might be reasoned that economic
concerns are not downplayed by SRTs, because they have been taken into
account under environmental impacts (e.g. cost savings that emanate from energy
reduction; see LEED, 2009a), this does not appear to be the case for a majority of
SRTs. CASBEE, for example, makes no explicit mention of cost issues in its
assessment. As well, from Appendix Table AIV, it is clear that ISO 14001, concerning
environmental management, is the most commonly embedded standard across SRTs
such as BREEAM, Green Star, AGIC, BCA Green Mark for Districts, Estidama Pearl
Community, Green Globe, HK-Beam and DGNB Seal. Only one SRT out of the 14
analysed, namely HK-BEAM, has included OHSAS 18001, a standard in health
and safety.
Second, from Appendix Table AIII, it can be observed that all SRTs analysed have
adopted deterministic scoring which does not account for variability or uncertainty in
value judgments. Uncertainty in judgments needs to be accounted for because of the
existence of subjective criteria and measurement scales. Some examples follow.
In Green Star, the "building users' guide" criterion gives that 1 point is awarded for an
easy-to-use guide that includes information relevant to users, occupants and tenants.
However, different people will have different understandings of what constitutes
"easy-to-use", and they may not arrive at the same score. The conditional requirement
set under the energy criterion states that before points can be awarded against
greenhouse gas emissions, the project's predicted greenhouse gas emissions must not
exceed 110 kg CO2/m2/annum as determined according to energy modelling tools
(referring to either the Australian Building Greenhouse Rating Validation Protocol or
the Green Star energy calculator). Uncertainty lies in these energy modelling tools and
calculations may differ depending on how optimistic or pessimistic the inputs of
energy or gas consumption are (see GBCA, 2012a). These calculations also depend on
the phase of the project, whether it is at pre-design or post-design phase. The
environmental design initiative criterion in Green Star states that 1 point is awarded if
an initiative in a project viably addresses a valid environmental concern outside
the scope of the tool; opinions may differ as to what constitutes a valid environmental
concern. LEED incorporates post-occupancy thermal comfort surveys which are
based on the value judgement of users and are carried out in a deterministic manner
(LEED, 2012). Insua and French (1991), Wolters and Mareschal (1995) and Hyde et al.
(2004) demonstrate that uncertainty in input parameters needs to be incorporated
into decision-making processes due to its influence on the ranking of alternatives. In
summary, SRTs largely ignore uncertainty in value judgements and behavioural
issues, which have the potential to affect a building's overall performance. Fenner and
Ryce (2008) share a similar view, arguing that assessors' opinions tend to differ and
therefore inconsistencies are unavoidable in sustainability assessments.
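One way such variability could be incorporated is to elicit a range of judgments per criterion and propagate it, rather than a single deterministic score. The sketch below (all scores and distributions are hypothetical, and this is not part of any current SRT) estimates the probability that one project outranks another when each criterion score is drawn from a triangular distribution over assessors' pessimistic, most likely and optimistic judgments:

```python
import random

# (pessimistic, most likely, optimistic) judgments per criterion (hypothetical)
project_a = [(2, 3, 4), (1, 2, 3), (3, 4, 5)]
project_b = [(1, 3, 5), (2, 3, 4), (2, 3, 4)]

def sampled_total(criteria, rng):
    """One plausible total score: each criterion is drawn from a triangular
    distribution spanning the elicited judgments."""
    return sum(rng.triangular(lo, hi, mode) for lo, mode, hi in criteria)

rng = random.Random(0)
trials = 10_000
a_wins = sum(
    sampled_total(project_a, rng) > sampled_total(project_b, rng)
    for _ in range(trials)
)
probability_a_outranks_b = a_wins / trials
```

For these inputs the two projects have equal expected totals, so the probability sits near one half: a deterministic scoring would report a single winner, while the propagated version reveals that the ranking is not robust to assessor variability.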
Third, not all SRTs consider criteria weighting. From Appendix Table AIII, seven
out of 14 SRTs (ASPIRE, AGIC, BCA Green Mark for Districts, Estidama Pearl
Community, Green Globe, Sustainable Design Scorecard and Protocol ITACA) do not
explicitly mention criteria weighting. There is currently no consensus to guide the
derivation of the weightings. CASBEE, for example, derives its weighting from a
survey of building owners, operators and designers. BREEAM claims that its

environmental weightings are derived from a combination of consensus-based
weightings and ranking by a panel of experts, but makes no mention of who
constitutes the panel of experts. Green Star derives its environmental weightings from
the following parties:
• the Organisation for Economic Co-operation and Development (OECD) Sustainable Buildings Project Report;
• the Australian Greenhouse Office, Environment Australia, CSIRO, the Cooperative Research Centre for Construction and the Commonwealth Department of Environment and Heritage; and
• a national survey conducted by the Green Building Council, which informed the development of the tool and assisted in assessing regional variation.
Understanding how the weightings are derived, and understanding the stakeholders
involved, is important because this will have a bearing on the overall assessment of
buildings/infrastructure. Future harmonisation efforts should consider this aspect in an
attempt to develop an overarching standard for weightings or, at the very least, there
needs to be specific mention or justification of the process involved in deriving these
weightings.
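The bearing of weightings on outcomes is easy to demonstrate: with identical criterion scores, two defensible weighting schemes can reverse a ranking. (All scores and weights below are invented for illustration.)

```python
# Normalised criterion scores (0-1) for two hypothetical projects
scores = {
    "Project A": {"energy": 0.9, "water": 0.4, "indoor_quality": 0.5},
    "Project B": {"energy": 0.5, "water": 0.8, "indoor_quality": 0.7},
}

# Two plausible weighting schemes, each summing to 1
energy_led = {"energy": 0.6, "water": 0.2, "indoor_quality": 0.2}
balanced   = {"energy": 0.2, "water": 0.4, "indoor_quality": 0.4}

def weighted_total(project, weights):
    """Weighted sum of a project's criterion scores."""
    return sum(weights[c] * scores[project][c] for c in weights)

top_energy_led = max(scores, key=lambda p: weighted_total(p, energy_led))
top_balanced = max(scores, key=lambda p: weighted_total(p, balanced))
# The top-ranked project flips between the two schemes.
```

Because the choice of weights alone decides the winner here, an SRT that does not publish how its weightings were derived leaves users unable to judge whether its rankings reflect anything more than the weighting panel's preferences.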
Fourth, there are issues over benchmarks and relative comparisons, based on the
scoring used by the tools. Sharifi and Murayama (2013) argue that the benchmarks set
are non-scientific. Mitchell (2010) claims that Green Star has been criticised for being
"too idealistic", showing hallmarks of something developed by architects rather than
people with practical experience in the commercial building industry. As an example,
for Green Star under the IEQ criterion, 1 point is awarded when a daylight factor of 2
per cent is achieved over 90 per cent of the nominated area and 1 point is awarded
when high-frequency ballasts are installed in fluorescent luminaires over a minimum
of 95 per cent of the nominated area. The question remains as to how these standards
are set. There is no empirical evidence to justify that achieving a daylight factor of 2
per cent over the 90 per cent nominated area is actually beneficial to stakeholders.
Fenner and Ryce (2008) highlight that building SRTs rely heavily on designers to
estimate the amount of energy and resources consumed by building occupiers.
Fifth, there is a lack of published reasoning behind the scores allocated for each
criterion, further suggesting that users are merely applying these tools without really
understanding what lies behind the tools. Berardi (2012) and Ding (2008) claim that the
reasons behind the selection of criteria, allocation of scores and weights are not explicit.
SRTs are designed based on opinions, as opposed to a rigorous analysis of building/
infrastructure effects on the environment, economy and society (Fard, 2012; Fowler and
Rauch, 2006; Rumsey and McLellan, 2005 cited in Berardi, 2012; Udall and Schendler,
2005). For example, under the ecological value criterion for Green Star, 4 credit points
are available when the site has no threatened or vulnerable species, no reduction of
native vegetation cover or if the ecological value of the site is not diminished. No
further explanation is provided as to why these criteria are proposed or the reason
behind the allocation of 4 credit points. It is questionable as to whether rewarding
credit points for such criteria in this manner would lead to better environmental
impact. It could occur that a project developer has coincidentally acquired land, which
happens to meet all of the above criteria, without applying additional effort. It would be
helpful if more detailed rationalisation and explanation accompanied the criteria
proposed. Justification as to why certain criteria are allocated more credit points

compared to others would be helpful to users (e.g. at present there is no explanation in
Green Star as to why 4 credit points are available for the ecological value of site
criterion vs 1 credit point for the maintenance of topsoil criterion). Having the
reasoning behind score allocation would help with efforts to improve SRTs.
Lastly, the tools do not sufficiently account for possible project variety or for
sufficient differentiation between projects. Sharifi and Murayama (2013, p. 80) explain
this limitation, with reference to BREEAM: "[…] to maintain a minimum point the
developer should demonstrate that 50-74% of the development site that was built on
previously developed/brownfield land will be brought back to use […] the problem is
there is no justification for setting 50% as the minimum and awarding the same points
for two different projects that have corresponding percentages that are in the same
range but with significant differences". As a further example, in the LEED material
reused criterion, 1 point is awarded if 5 per cent of materials are reused out of the total
value of the project, and 2 points if 10 per cent of materials are reused. This may not
sufficiently account for possible project differences. For example, 5 per cent of reused
materials for a large project compared to 5 per cent of reused materials in a small
project have different environmental impacts. Under Green Star, the "electric lighting
levels" criterion states that "[…] one point is awarded where it is demonstrated that the
facility lighting design provides a maintenance illuminance of no more than 25%";
thus a project that has achieved a maintenance illuminance of 5 per cent and another
project which has achieved 25 per cent obtain the same score for this criterion. There are
many other examples of this blurred project differentiation, which can be observed across SRTs.
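The threshold structure underlying this critique can be made explicit. A sketch of the LEED-style material reuse banding described above (thresholds taken from the text; the function itself is ours):

```python
def material_reuse_points(reused_fraction: float) -> int:
    """Points for materials reuse: 1 point at 5% of total project value,
    2 points at 10% (thresholds as described in the text)."""
    if reused_fraction >= 0.10:
        return 2
    if reused_fraction >= 0.05:
        return 1
    return 0

# Projects at opposite ends of a band (5% vs 9.9%), or of very different
# sizes, receive identical points despite different absolute impacts.
```

Because the score depends only on which band the percentage falls into, the function discards both the within-band variation and the project scale, which is precisely the differentiation the tools fail to capture.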
In contrast to the claimed benefits of engaging with SRTs, as noted in Section 5, a
few researchers have challenged their usefulness. Newsham et al. (2009) find that
28-35 per cent of LEED-certified buildings actually use more energy than traditional
buildings. Torcellini et al. (2006) find that actual energy usage in six
high-performance buildings is higher than predicted. Williamson et al. (2001) find
evidence that there is little correlation between a NatHERS award and actual
heating and cooling energy consumption.
This paper's authors investigated whether there is any value in obtaining higher
Green Star awards (buildings), as compared with the base award of 4 stars. Green Star
only certifies buildings that achieve 4, 5 or 6 stars. Buildings that do not meet at
least the minimum 4-star requirement are not publicly disclosed. Two databases are
compared: one from the Green Star web site (Green Building Council of Australia
(GBCA), 2012c), which rates buildings based on adherence to specific sustainable
design specifications, and the second from the NABERS web site (NABERS, 2012),
which rates buildings by measuring energy and water efficiency. Table XVI shows the
comparison, where data were available.
From Table XVI, it is seen that a better Green Star award does not necessarily
mean better performance in terms of energy and water efficiency (using NABERS
award as a gauge of building performance). For example, although the building
occupied by E has a higher Green Star award (6 star) compared to the building
occupied by B (5 star), the NABERS award (energy) is lower for E (3.5 star) compared
to B (5 star). It could be that the afore-mentioned limitations in SRTs (namely, that they
do not sufficiently account for project variety, have subjective benchmarks, etc.) result
in this conclusion. Naturally, this casts doubt over the reliability and effectiveness
of current SRTs. This also raises the concern that building developers might select the
SRT that results in the highest rating. Further investigation is warranted to validate
the findings presented here.


Table XVI. Comparison between Green Star and NABERS

Building occupant   Green Star award (design)   NABERS award (energy)   NABERS award (water)
A                   4 star                      5 star                  5.5 star
B                   5 star                      5 star                  3.5 star
C                   6 star                      4.5 star                na
D                   4 star                      4.5 star                na
E                   6 star                      3.5 star                na
F                   4 star                      3.5 star                4.5 star
G                   6 star                      na                      2 star

Note: As of 20 May 2012; na, not available

7. Conclusions and future research


This paper provides an overview of SRTs used for sustainable development of
building/infrastructure projects. The paper gives a compilation of information about
the criteria and scoring used in SRTs. Empirical research that has been conducted on
the benefits of SRTs, and a detailed critique of SRTs, are reviewed. Some of the findings
of the analysis of SRTs include: the emphasis on environmental issues; scoring which
does not account for uncertainty or variability in assessors' perceptions; lack of published
reasoning behind the allocation of scores; inadequate definition of scales to permit
differentiation among projects; and the existence of non-scientific benchmarks.
Future research
In light of this review, much remains to be done to enhance building/infrastructure
SRTs and the current understanding of users of these tools. Some suggestions for
future research include:
• Expanding the list of criteria to include more measurable social and economic issues. Current SRTs for buildings are predominantly focused on the environment.
• Exploring the possibility of inter-linking different sustainability criteria. Lozano and Huisingh (2011) observe that a majority of the guidelines and standards address sustainability issues through compartmentalisation, that is, separating economic, environmental and social criteria. They argue that, as a result of this approach, sustainability efforts are not properly integrated.
• The GRI has been introduced to guide sustainability reporting among corporations. There is a need to bridge the current gap and look at avenues by which building/infrastructure SRTs can interlock with GRI.
• The need to incorporate uncertainty/variability in SRTs, given that assessors' perceptions differ.
• Currently, there are tools that report potential environmental impact and tools that report actual performance. Future research could look into harmonising both tools to reduce confusion. A lot of work will be required around the integration process. Mitchell (2010) suggests that perhaps a single agency responsible for both tools would speed up this process.
• The varying standards across SRTs developed in different countries make comparability difficult. Having a common standard would assist in better benchmarking of projects internationally. A memorandum of understanding was signed in 2009 between Green Star, BREEAM and LEED to jointly develop common metrics to report CO2 emissions and align the reporting tools (Mitchell, 2010). Future research could work on facilitating this unification.
• More empirical research is needed to assess the validity of any benchmarks proposed in SRTs. Currently, a majority of the benchmarks set are based on perception, which may be inaccurate.

References
Adegbile, M.B.O. (2013), Assessment and adaption of an appropriate green building rating
system for Nigeria, Journal of Environment and Earth Science, Vol. 3 No. 1, pp. 1-11.
AGIC (2012), Australian Green Infrastructure Council IS Rating Scheme, Australian Green
Infrastructure Council (AGIC), Sydney, available at: www.agic.net.au/ISratingscheme1.htm
(accessed 7 January 2013).
Baird, G. (2009), Incorporating user performance criteria into building sustainability rating
tools (BSRTs) for buildings in operation, Sustainability, Vol. 1 No. 4, pp. 1069-1086.
BASF (2012), Life cycle Analyzer, BASF, Ludwigshafen, available at: www.basf.com/group/
corporate/en/function/conversions:/publish/content/news-and-media-relations/newsreleases/downloads/2012/P210_Life_Cycle_Analizer_e.pdf (accessed 27 January 2013).
Berardi, U. (2012), Sustainability assessment in construction sector: rating systems and rated
buildings, Sustainable Development, Vol. 20 No. 6, pp. 411-424.
Bonham-Carter, C. (2010), Sustainable communities in the UK, cited in Sharifi and Murayama
(2013).
Born, P. (1996), Input-output analysis: input of energy, CO2 and work to produce goods, Journal
of Policy Modelling, Vol. 18 No. 2, pp. 217-221.
Bowyer, J., Howe, J., Fernholz, K. and Lindburg, A. (2006), Designation of Environmentally
Preferable Building Materials: Fundamental Change Needed Within LEED, Dovetail
Partners Inc, Minneapolis, MN, available at: www.dovetailinc.org/files/Dovetail
LEED0606.pdf (accessed 27 January 2013).
BREEAM (2012), What is BREEAM? Building Research Establishment (BRE), Watford, available
at: www.breeam.org/about.jsp?id66 (accessed 20 November 2012).
Buchanan, A.H. and Honey, B.G. (1994), Energy and carbon dioxide implications of building
construction, Energy and Buildings, Vol. 20 No. 3, pp. 205-217.
CASBEE (2002), The Assessment Method Employed by CASBEE, Japan Green Build Council
and Japan Sustainable Building Consortium, available at: www.ibec.or.jp/CASBEE/
english/methodE.htm (accessed 20 November 2012).
Centre for International Economics Canberra and Sydney (2007), Embodied carbon metrics will
avoid higher than desired carbon content and additional costs, cited in Davis Langdon,
available at: www.davislangdon.com/ANZ/Sectors/Sustainability/ECM/ (accessed 24
March 2012).
Chan, P. and Chu, C. (2010), HK-BEAM (Hong Kong Building Environmental Assessment
Method): Assessing Healthy Buildings, BEAM Society, Hong Kong, available at:
www.mixtechnology.com/files/download/HK_BEAM.pdf (accessed 6 January 2012).
Chew, M.Y.L. and Das, S. (2007), Building grading systems: a review of the state-of-art,
Architectural Science Review, Vol. 51 No. 1, pp. 3-13.

SASBE
2,2

128

Cole, R.J. (1999), Building environmental assessment methods: clarifying intentions, Building
Research and Information, Vol. 27 Nos 4-5, pp. 230-246.
Cole, R.J. (2005), Building environmental assessment methods: redefining intentions and roles,
Building Research and Information, Vol. 35 No. 5, pp. 455-467.
Crawley, D. and Aho, I. (1999), Building environmental assessment methods: applications and
development trends, Building Research and Information, Vol. 27 Nos 4-5, pp. 300-308.
Department of Climate Change and Energy Efficiency (2010), NaTHERS, Department of Climate
Change and Energy Efficiency, Canberra, available at: www.climatechange.gov.au/whatyou-need-to-know/buildings/homes/B/media/publications/buildings/nationwide-homeenergy-rating-scheme.pdf (accessed 23 February 2013).
Ding, G.K.C. (2008), Sustainable construction the role of environmental assessment tools,
Journal of Environmental Management, Vol. 86 No. 3, pp. 451-464.
EIA (2012), 2012 Commercial Buildings Energy Consumption Survey (CBECS), US Department
of Energy, Washington, DC, available at: www.eia.gov/survey/form/eia_871/2012/
2012%20CBECS%20Form%20871A%20Building%20Questionnaire.pdf (accessed 10
January 2013).
Eichholtz, P., Kok, N. and Quigley, J.M. (2009), Doing well by doing good?, working paper,
Green Office Buildings, Centre for the Study of Energy Markets (CSEM), Berkeley, CA,
August.
Energy STAR (2011), Energy STAR Performance Ratings Technical Manual, US Environmental
Protection Agency and US Department of Energy, available at: www.energystar.gov/ia/
business/evaluate_performance/General_Overview_tech_methodology.pdf (accessed 7
December 2011).
EUROSTAT (2001), Economy-Wide Material Flow Accounts and Derived Indicators: A
Methodological Guide, European Commission, Luxembourg, available at: http://
epp.eurostat.ec.europa.eu/portal/page/portal/environmental_accounts/documents/3.pdf
(accessed 27 January 2013).
Fard, N.H. (2012), Energy-based sustainability rating system for buildings: case study of
Canada, Master of Applied Science thesis, The University of British Columbia,
Okanagan.
Fenner, R.A. and Ryce, T. (2008), A comparative analysis of two building rating systems, part I:
evaluation, Engineering Sustainability, Vol. 161 No. 1, pp. 55-63.
Fernández-Sánchez, G. and Rodríguez-López, F. (2010), A methodology to identify sustainability indicators in construction project management: application to infrastructure projects in Spain, Ecological Indicators, Vol. 10 No. 6, pp. 1193-1201.
Fowler, K.M. and Rauch, E.M. (2006), Sustainable Building Rating Systems Summary, Pacific
Northwest National Laboratory, US Department of Energy, Battelle Washington, DC.
Fuerst, F. (2009), Building momentum: an analysis of investment trends in LEED and
Energy STAR-certified properties, Journal of Retail and Leisure Property, Vol. 8 No. 4,
pp. 285-297.
Fuerst, F. and McAllister, P. (2008), Pricing sustainability: an empirical investigation of the value
impacts of green building certification, working paper presented at ARES, cited in Miller
et al. (2008).
Global Reporting Initiative (GRI) (2011), Sustainability Reporting Guidelines v3.1, GRI,
Amsterdam, available at: www.globalreporting.org/resourcelibrary/G3.1-Sustainability-Reporting-Guidelines.pdf (accessed 24 January 2013).
Green Building Council of Australia (GBCA) (2012a), Green Star-Rating Tools, Green Building
Council of Australia (GBCA), Sydney, available at: www.gbca.org.au/green-Star/rating-tools/ (accessed 20 November 2012).

Green Building Council of Australia (GBCA) (2012b), Green Star-Performance Benefits, Green
Building Council of Australia (GBCA), Sydney, available at: www.gbca.org.au/green-Star/
green-Star-performance/benefits/ (accessed 3 January 2012).
Green Building Council of Australia (GBCA) (2012c), Green Star Project Directory, Green
Building Council of Australia (GBCA), Sydney, available at: www.gbca.org.au/project-directory.asp (accessed 12 December 2012).
Gryc, H. (2012), Sustainable Structural Engineering-Gaining Greater Benefits to Communities in
Developing Nations, Structures Congress, ASCE, Chicago, IL, available at: http://
ascelibrary.org/doi/pdf/10.1061/9780784412367.178 (accessed 20 November 2012).
Heerwagen, J. (2000), Green buildings, organizational success and occupant productivity,
Building Research and Information, Vol. 28 Nos 5-6, pp. 353-367.
Hes, D. (2007), Effectiveness of green building rating tools: a review of performance, The
International Journal of Environmental, Cultural, Economic & Social Sustainability, Vol. 3
No. 4, pp. 143-152.
HKGBC and BEAM Society (2012), BEAM Plus Existing Buildings Version 1.2, BEAM Society,
Hong Kong, available at: www.beamsociety.org.hk/files/BEAM_Plus_For_Existing_
Buildings_Version_1_2.pdf (accessed 6 January 2013).
Horvath, A., Pacca, S., Masanet, E. and Canapa, R. (2007), PaLATE, University of California,
Berkeley, CA, available at: www.ce.berkeley.edu/~horvath/palate.html (accessed 27
December 2012).
Hsu, S.L. (2010), Life cycle assessment of materials and construction in commercial structures:
variability and limitations, Master's thesis, Massachusetts Institute of Technology,
Cambridge, MA.
Huang, S.-L. and Tsu, W.-L. (2003), Materials flow analysis and emergy evaluation of Taipei's urban construction, Landscape and Urban Planning, Vol. 63 No. 2, pp. 61-74.
Hurley, J. and Horne, R. (2006), Review and analysis of tools for the implementation and
assessment of sustainable urban development, Environment Institute of Australia and
New Zealand (EIANZ) report, Adelaide.
Hyde, K.M., Maier, H.R. and Colby, C.B. (2004), Reliability-based approach to multi-criteria
decision analysis for water resources, Journal of Water Resources Planning and
Management, Vol. 130 No. 6, pp. 429-438.
Insua, D.R. and French, S. (1991), A framework for sensitivity analysis in discrete
multi-objective decision making, European Journal of Operational Research, Vol. 54 No. 2,
pp. 176-190.
Kahhat, R. and Williams, E. (2012), Materials flow analysis of e-waste: domestic flows and
exports of used computers from the United States, Resources, Conservation and Recycling,
Vol. 67, pp. 67-74.
Kordjamshidi, M., King, S. and Prasad, D. (2006), Why are rating schemes always wrong?
Regulatory frameworks for passive design and energy efficiency, 23rd Conference on
Passive and Low Energy Architecture, Geneva, 6-8 September, available at: www.unige.ch/
cuepe/html/plea2006/Vol2/PLEA2006_PAPER176.pdf (accessed 23 February 2013).
Lee, Y.S. and Guerin, D.A. (2010), Indoor environmental quality differences between office
types in LEED-certified buildings in the US, Building and Environment, Vol. 45 No. 5,
pp. 1104-1112.
Lee, W.L., Yik, F.W.H. and Burnett, J. (2007), Assessing energy performance in the latest
versions of Hong Kong building environmental assessment method (HK-BEAM), Energy
and Buildings, Vol. 39 No. 3, pp. 343-354.
LEED (2009a), LEED for new construction and major renovations v2009, available at: http://
new.usgbc.org/credits/new-construction/v2009 (accessed 26 January 2013).


LEED (2009b), LEED for existing buildings: operations and maintenance v2009, available at:
http://new.usgbc.org/credits/existing-buildings/v2009 (accessed 26 January 2013).
LEED (2012), What is LEED? US Green Building Council, Washington, DC, available at: https://
new.usgbc.org/leed (accessed 20 November 2012).
Levermore, G.J. (2008), A review of the IPCC assessment report four: part I: the IPCC process
and greenhouse gas emission trends from buildings worldwide, Building Services
Engineering Research and Technology, Vol. 29 No. 4, pp. 349-361.
Lockwood, C. (2006), Building the green way, Tool kit, Harvard Business Review, Boston, MA,
pp. 1-9, available at: http://ecologicdesignlab.com/files/Eco-Urban/VIII.1_HBR_building_
green_way.pdf (accessed 12 December 2012).
Lozano, R. and Huisingh, D. (2011), Inter-linking issues and dimensions in sustainability
reporting, Journal of Cleaner Production, Vol. 19 Nos 2-3, pp. 99-107.
Mateus, R. and Bragança, L. (2011), Sustainability assessment and rating of buildings: developing the methodology SBToolPT-H, Building and Environment, Vol. 46 No. 10,
pp. 1962-1971.
Miller, N., Spivey, J. and Florance, A. (2008), Does green pay off?, Journal of Real Estate
Portfolio Management, Vol. 14 No. 4, pp. 385-399.
Mitchell, L.M. (2010), Green Star and NABERS: learning from the Australian experience with
green building rating tools in Bose, R.K. (Ed.), Energy Efficient Cities: Assessment Tools
and Benchmarking Practices, The International Bank for Reconstruction and
Development/The World Bank, Washington, DC, pp. 93-124.
Mithraratne, N. and Vale, B. (2004), Life cycle analysis model for New Zealand houses, Building
and Environment, Vol. 39 No. 4, pp. 483-492.
Morrissey, J., Iyer-Raniga, U., McLaughlin, P. and Mills, A. (2012), A strategic appraisal
framework for ecologically sustainable urban infrastructure, Environmental Impact
Assessment Review, Vol. 33 No. 1, pp. 55-65.
Mroueh, U.-M., Eskola, P. and Laine-Ylijoki, J. (2001), Life-cycle impacts of the use of industrial
by-products in road and earth construction, Waste Management, Vol. 21 No. 3,
pp. 271-277.
NABERS (2011), Preparing for NABERS Office Rating Application, NSW Office of Environment
and Heritage, Sydney, available at: www.nabers.gov.au/public/WebPages/DocumentHandler.ashx?docType=3&id=15&attId=0 (accessed 6 January 2013).
NABERS (2012), Rating Register, NSW Office of Environment and Heritage, Sydney, available at:
www.nabers.gov.au/public/WebPages/ContentStandard.aspx?module=30&template=3&id=310&side=Ratingsfrom20July.htm (accessed 7 January 2013).
Newell, G., MacFarlane, J. and Kok, N. (2011), Building Better Returns: A Study of the Financial
Performance of Green Office Buildings in Australia, Australian Property Institute and
Property Funds Association, University of Western Sydney and the University of
Maastricht in conjunction with Jones Lang LaSalle and CBRE, Australian Property
Institute, Sydney, available at: www.api.org.au/assets/media_library/000/000/219/original.pdf?1315793106 (accessed 5 January 2013).
Newsham, G.R., Mancini, S. and Birt, B.J. (2009), Do LEED-certified buildings save energy? Yes,
but, Energy and Buildings, Vol. 41 No. 8, pp. 897-905.
New South Wales (NSW) Government (2013), Environmental Planning and Assessment
Regulation 2000, New South Wales (NSW) Government, Sydney, available at:
www.legislation.nsw.gov.au/maintop/view/inforce/subordleg+557+2000+cd+0+N
(accessed 24 February 2013).
Nguyen, B.K. and Altan, H. (2011), Comparative review of five sustainable rating systems,
Procedia Engineering, Vol. 21, pp. 376-386.

Norman, J., MacLean, H. and Kennedy, C. (2006), Comparing high and low residential density:
life-cycle analysis of energy use and greenhouse gas emissions, Journal of Urban
Planning and Development, Vol. 132 No. 1, pp. 10-21.
NSW Government Planning and Infrastructure (2013), Using the BASIX Assessment Tool, NSW
Government Planning and Infrastructure, Sydney, available at: www.basix.nsw.gov.au/basixcms/getting-started/using-the-basix-assessment-tool.html (accessed 23 February
2013).
Piluso, C., Huang, Y. and Lou, H.L. (2008), Ecological input-output analysis-based sustainability
analysis of industrial systems, Industrial & Engineering Chemistry Research, Vol. 47
No. 6, pp. 1955-1966.
Reed, R., Bilos, A., Wilkinson, S. and Schulte, K.-W. (2009), International comparison of
sustainable rating tools, Journal of Sustainable Real Estate, Vol. 1 No. 1, pp. 1-22.
Ries, R., Bilec, M.M., Gokhan, N.M. and Needy, K.L. (2006), The economic benefits of green
buildings: a comprehensive case study, The Engineering Economist, Vol. 51 No. 3,
pp. 259-295.
Roderick, Y., McEwan, D., Wheatley, C. and Alonso, C. (2009), Comparison of energy
performance assessment between LEED, BREEAM and Green Star, 11th International
IBPSA Conference, Glasgow, July 27-30.
Rumsey, P. and McLellan, J.F. (2005), The green edge: the green imperative, Environmental Design and Construction, pp. 55-56, cited in Berardi (2012).
Sahely, H.R., Kennedy, C.A. and Adams, B.J. (2005), Developing sustainability criteria for urban
infrastructure systems, Canadian Journal of Civil Engineering, Vol. 32 No. 1, pp. 72-85.
Saunders, T. (2008), A Discussion Document Comparing International Environmental Assessment
Methods for Buildings, Building Research Establishment (BRE), Watford, available at:
www.dgbc.nl/images/uploads/rapport_vergelijking.pdf (accessed 29 July 2012).
Scheuer, C.W. and Keoleian, G.A. (2002), Evaluation of LEED Using Life Cycle Assessment
Methods, National Institute of Standards and Technology (NIST), Gaithersburg, MD,
available at: www.fire.nist.gov/bfrlpubs/build02/PDF/b02170.pdf (accessed 27 January
2013).
Sev, A. (2011), A comparative analysis of building environmental assessment tools and
suggestions for regional adaptations, Civil Engineering and Environmental Systems,
Vol. 28 No. 3, pp. 231-245.
Sharifi, A. and Murayama, A. (2013), A critical review of seven selected neighbourhood
sustainability assessment tools, Environmental Impact Assessment Review, Vol. 38,
pp. 73-87.
Shen, L.-Y., Hao, J.L., Tam, V.W.-Y. and Yao, H. (2007), A checklist for assessing sustainability
performance of construction projects, Journal of Civil Engineering and Management,
Vol. XIII No. 4, pp. 273-281.
Singh, R.K., Murty, H.R., Gupta, S.K. and Dikshit, A.K. (2009), An overview of sustainability
assessment methodologies, Ecological Indicators, Vol. 9 No. 2, pp. 189-212.
Todd, J.A., Crawley, D., Geissler, S. and Lindsey, G. (2001), Comparative assessment of
environmental performance tools and the role of the green building challenge, Building
Research and Information, Vol. 29 No. 5, pp. 324-335.
Torcellini, P., Pless, S., Deru, M., Griffith, B., Long, N. and Judkoff, R. (2006), Lessons learned
from case studies of six high-performance buildings, Technical Report No. NREL/TP-550-37542, National Renewable Energy Laboratory, Golden, CO, June.
Tronchin, L. and Fabbri, K. (2008), Energy performance building evaluation in Mediterranean
countries: comparison between software simulations and opening rating simulation,
Energy and Buildings, Vol. 40 No. 7, pp. 1176-1187.


Udall, R. and Schendler, A. (2005), LEED is Broken; Let's Fix It, iGreenBuild.com, San Mateo, CA, available at: www.igreenbuild.com/cd_1706.aspx (accessed 27 January 2013).
Ugwu, O.O. and Haupt, T.C. (2007), Key performance indicators and assessment methods for
infrastructure sustainability: a South African construction industry perspective,
Building and Environment, Vol. 42 No. 2, pp. 665-680.
Ürge-Vorsatz, D. and Novikova, A. (2008), Potential and costs of carbon dioxide mitigation in the world's buildings, Energy Policy, Vol. 36 No. 2, pp. 642-661.
USGBC (2011), Buildings and Climate Change, US Green Building Council, Washington, DC,
available at: www.documents.dgs.ca.gov/dgs/pio/facts/LA%20workshop/climate.pdf
(accessed 20 June 2012).
Vijayan, A. and Kumar, A. (2005), A review of tools to assess the sustainability in building
construction, Environmental Progress, Vol. 24 No. 2, pp. 125-132.
Watson, P., Jones, D. and Mitchell, P. (2013), Are Australian building eco-assessment tools
meeting stakeholder decision making needs?, available at: www.construction-innovation.
info/images/pdfs/Research_library/ResearchLibraryB/RefereedConferencePapers/Are_
Australian_building_eco-assessement_tools.pdf (accessed 3 May 2013).
Williamson, T.J., O'Shea, S. and Menadue, V. (2001), NatHERS: science and non-science, 35th ANZAScA Conference, Wellington, November, available at: www.pc.gov.au/__data/assets/pdf_file/0017/45116/sub028attachment1.pdf (accessed 24 February 2013).
Wiley, J.A., Benefield, J.D. and Johnson, K.H. (2010), Green design and the market for
commercial office space, Journal of Real Estate Finance and Economics, Vol. 41 No. 2,
pp. 228-243.
Wolters, W.T.M. and Mareschal, B. (1995), Novel types of sensitivity analysis for
additive MCDM methods, European Journal of Operational Research, Vol. 81 No. 2,
pp. 281-290.
World Economic Forum (2011), A Profitable and Resource Efficient Future: Catalysing Retrofit
Finance and Investing in Commercial Real Estate, World Economic Forum, Geneva,
October, available at: www3.weforum.org/docs/WEF_IU_CatalysingRetrofitFinanceInvestingCommercialRealEstate_Report_2011.pdf (accessed 6 January 2013).
Xu, M., Allenby, B. and Kim, J. (2010), Input-Output Analysis for Sustainability, Centre for
Sustainable Engineering, Tempe, AZ, available at: www.ce.cmu.edu/~cse/5aug07%20Allenby%20IO.pdf (accessed 27 January 2013).
Further reading
AS/NZS:4801 (2001), Occupational Health and Safety Management Systems Specification With
Guidance for Use, Standards Australia, Sydney, available at: http://infostore.
saiglobal.com/store2/Details.aspx?ProductID=386329 (accessed 16 October 2012).
BSI (2013), OHSAS 18001 Occupational Health and Safety, BSI, London, available at: www.
bsigroup.com.au/en-au/Assessment-and-Certification-services/Management-systems/
Standards-and-schemes/OHSAS-18001/ (accessed 24 January 2013).
EMAS (2013), What is EMAS?, European Commission, Brussels, available at: http://ec.
europa.eu/environment/emas/index_en.htm (accessed 23 January 2013).
ISO9001 (2008), Quality Management Systems Requirements, ISO, Geneva, available
at: www.iso.org/iso/home/store/catalogue_tc/catalogue_detail.htm?csnumber=46486
(accessed 16 December 2011).
ISO14001 (2004), Environmental Management Systems Requirements with Guidance for Use,
ISO, Geneva, available at: www.iso.org/iso/home/store/catalogue_tc/catalogue_detail.
htm?csnumber=31807 (accessed 16 December 2011).

Appendix

Table AI. LCA tools

BOUSTEAD
Developer: BOUSTEAD Consulting UK
Web link: www.boustead-consulting.co.uk/
Coverage: LCA tool across a number of categories (fuel production, fuel use, process, transport, biomass)
Outputs/types of analysis: global warming potential; conservation of fossil fuels; acidification; grid electricity use; public water use

ENVEST
Developer: Edge Environment
Web link: http://edgeenvironment.com.au/envest/
Coverage: LCA tool for the earlier phase of building design
Outputs/types of analysis: reveals operational impacts and embodied impacts of the building as the design evolves; provides estimates of construction cost and whole life cycle cost

Ecoinvent
Developer: Ecoinvent Centre
Web link: www.ecoinvent.org/database/
Coverage: contains data sets in the areas of agriculture, energy supply, transport, biofuels, construction materials, metals processing, electronics and waste treatment
Outputs/types of analysis: life cycle inventory which can be used with other major LCA tools

GaBi
Developer: PE International
Web link: www.gabi-software.com/australia/software/gabi-software/gabi-5/
Coverage: users have the flexibility to construct the life cycle of products at any stage
Outputs/types of analysis: life cycle assessment across different modules (design for environment, eco-efficiency, eco-design, efficient value chains); life cycle cost (designing and optimising products and services for cost reduction); life cycle reporting with modules across sustainable product marketing, sustainability reporting and LCA knowledge sharing; life cycle working environment (developing manufacturing processes that address social responsibilities)

BEES
Developer: National Institute of Standards and Technology (NIST)
Web link: http://ws680.nist.gov/bees/
Coverage: all stages in the life of a product (raw material acquisition, manufacture, transportation, installation, recycling, waste management)
Outputs/types of analysis: economic performance measured using the standard life cycle cost method; economic and environmental performance combined into one overall performance using multi-attribute decision analysis
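Table AI notes that BEES combines economic and environmental performance into one overall score using multi-attribute decision analysis. The sketch below illustrates the general idea with a simple weighted sum; the function name, weights and scores are invented for illustration and are not taken from the BEES manual.

```python
# Illustrative weighted-sum aggregation in the spirit of BEES-style
# multi-attribute decision analysis. All names and numbers here are
# assumptions for demonstration, not values from the BEES tool.

def overall_score(env_score, econ_score, env_weight=0.5):
    """Combine normalised environmental and economic scores.

    env_weight is the stakeholder-chosen importance of the environmental
    dimension; the economic dimension receives the remainder.
    """
    if not 0.0 <= env_weight <= 1.0:
        raise ValueError("env_weight must lie in [0, 1]")
    return env_weight * env_score + (1.0 - env_weight) * econ_score

# Hypothetical comparison of two building products on 0-100 scales
product_a = overall_score(env_score=40.0, econ_score=70.0, env_weight=0.6)
product_b = overall_score(env_score=55.0, econ_score=50.0, env_weight=0.6)
print(product_a, product_b)
```

Because the aggregation is a single deterministic weighted sum, it shares the limitation noted in this paper: assessor uncertainty in the individual scores and weights is not carried through to the overall result.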

Table AII. Broad comparison of criteria across a selection of SRTs

The table records the presence (✓) of each criterion in each of the following SRTs: BREEAM, LEED, Green Star, CASBEE, ASPIRE, BCA Green Mark for Districts, EPRA, Estidama Pearl Community, Green Globe, Sustainable Design Scorecard, BEAM, DGNB-Seal, Protocol ITACA and AGIC.

Criteria and subcriteria compared:
Environmental: energy; water; waste; pollution/emissions/air; land use and ecology; biodiversity; materials; management (i.e. integrated process, sustainable procurement, etc.)
Social: health and well-being/IEQ
Economic: innovation; equity of economic opportunity; livelihood opportunity; macroeconomic effects

Note: ✓, presence of criteria

Table AIII. Analysis of building/infrastructure reporting tools

For each SRT the table also records whether weighting for criteria and deterministic scoring for criteria are present (✓) or not available (na).

BREEAM
Owner: Building Research Establishment, UK
Web link: http://breeam.org
Comments: environmental weightings exist and are allocated to the ten criteria identified: management, waste, health and well-being, energy, transport, water, pollution, land use and ecology, materials and innovation

LEED
Owner: US Green Building Council
Web link: www.usgbc.org
Comments: similarities with BREEAM; the management criterion is not present in LEED

Green Star
Owner: Green Building Council Australia
Web link: www.gbca.org.au/green-Star
Comments: similar to BREEAM in terms of all the criteria assessed

CASBEE
Owner: Japan Green Build Council and Japan Sustainable Building Consortium
Web link: www.ibec.or.jp
Comments: applies the concept of eco-efficiency; based on BEE = Q/L, where Q represents quality and L represents load

ASPIRE
Owner: ARUP and EAP
Web link: http://engineersagainstpoverty.com/majorinitiatives/aspire.cfm
Comments: scores are guided by illustrations of best case and worst case scenarios; traffic light idea where green indicates strength and red indicates weakness

AGIC
Owner: Australian Green Infrastructure Council
Web link: www.agic.net.au/AGICschem.htm
Comments: an infrastructure assessment tool, launched in 2012; scores below 25 points are not eligible for a certified award; 25 to below 50 points, Good; 50 to below 75 points, Excellent; and 75-100 points, Leading

BCA Green Mark for Districts
Owner: Building and Construction Authority, Singapore
Web link: www.bca.gov.sg/GreenMark/green_mark_buildings.htm
Comments: embedded weightings are present under Section 5.1, Green buildings within district (where 25% is allocated for Platinum, 20% for Gold Plus and 10% for Gold)

EPRA Sustainability Assessment Tool
Owner: East Perth Redevelopment Authority and GHD, WA
Web link: www.epra.wa.gov.au/
Comments: utilises Green Star awards and tiering (i.e. development in Tier one: 6 Star Green Star; Tier two: 5 Star Green Star; Tier three: 4 Star Green Star)

Estidama Pearl Community
Owner: Abu Dhabi Urban Planning Council (UPC)
Web link: www.estidama.org/
Comments: scoring is awarded at three different stages: design, construction and operational

Green Globe
Owner: Green Globe International
Web link: www.greenglobe.org/
Comments: quite similar to BREEAM, LEED and Green Star in terms of the criteria assessed

Sustainable Design Scorecard
Owner: Moreland City Council
Web link: www.portphilip.vic.gov.au
Comments: based on aggregated scoring; allocated points for environmental criteria have been weighted to be in line with the Council's sustainability compliance priorities

HK-BEAM
Owner: Hong Kong BEAM Society
Web link: www.mixtechnology.com/files/download/HK_BEAM.pdf
Comments: scores are allocated to each assessment criterion taking into account international consensus; weightings exist for the different criteria

DGNB-Seal
Owner: German Sustainable Building Council
Web link: www.dgnb.de/_en/certification-system/Evaluation/evaluation.php
Comments: each criterion receives a maximum of 10 points based on documented or calculated quality; there is flexibility to increase the weighting of each criterion by as much as threefold; three performance standards: Gold (80%), Silver (50%) or Bronze (35%)

Protocol ITACA
Owner: ITACA
Web link: www.irbdirekt.de/daten/iconda/CIB9084.pdf
Comments: all performance criteria are set within performance scales from −2 to +5, where 0 is the minimum acceptable performance in the industry; the overall score is also based on this rating scale

Note: na, not available; ✓, presence of attribute
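Table AIII describes CASBEE as scoring eco-efficiency through the ratio BEE = Q/L, where Q is the environmental quality score and L the environmental load score. A minimal sketch of that calculation follows; the rank thresholds used here are illustrative assumptions, not an authoritative restatement of the CASBEE manual.

```python
# Sketch of CASBEE's eco-efficiency ratio BEE = Q / L. The rank bands
# below are assumed for illustration only.

def bee(quality, load):
    """Building Environmental Efficiency: higher Q and lower L raise BEE."""
    if load <= 0:
        raise ValueError("load must be positive")
    return quality / load

def rank(quality, load):
    """Map a (Q, L) pair to an indicative CASBEE-style rank band."""
    value = bee(quality, load)
    if value >= 3.0 and quality >= 50:
        return "S"   # assumed band: excellent
    if value >= 1.5:
        return "A"
    if value >= 1.0:
        return "B+"
    if value >= 0.5:
        return "B-"
    return "C"

print(rank(quality=60, load=15))  # prints S
```

The ratio form means a project can improve its standing either by raising quality or by reducing load, which is the eco-efficiency idea the table refers to.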

Table AIV. International standards embedded

The table records the presence (✓) of each international standard in each of the following SRTs: BREEAM, LEED, Green Star, CASBEE, ASPIRE, AGIC, BCA Green Mark, EPRA, Estidama, Green Globe, Sustainable Design Scorecard, HK-BEAM, DGNB Seal and Protocol ITACA.

International standards compared: ISO 14001; ISO 9001; AS/NZS 4804; EMAS; OHSAS 18001

Note: ✓, presence of a standard
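The AGIC entry in Table AIII defines four award bands over a 100-point score. Expressed as a small lookup (the function name is ours, the thresholds are those quoted in the table):

```python
# AGIC award bands as quoted in Table AIII: below 25 points no certified
# award, then Good, Excellent and Leading in 25-point steps.

def agic_award(points):
    """Return the AGIC award band for a score out of 100 points."""
    if not 0 <= points <= 100:
        raise ValueError("points must lie in [0, 100]")
    if points < 25:
        return "Not eligible for certified award"
    if points < 50:
        return "Good"
    if points < 75:
        return "Excellent"
    return "Leading"

print(agic_award(62))  # prints Excellent
```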

About the authors


Renard Y.J. Siew is a PhD candidate at the University of New South Wales, where he also
graduated with a Bachelor of Civil Engineering (First Class Honours). He also holds a Certificate
in International Auditing (CertIA) issued by the ACCA. As an undergraduate, he was the
recipient of the Yayasan Sime Darby Scholarship, the Brookfield Multiplex Engineering
Construction Management Prize, the Australian Conferences Management Education for
Engineers (ACMEE) Award; he was also on the Dean's Honours List and a member of the Golden Key
International Honours Society. Passionate about volunteerism, Renard is an active member of
Engineers Without Borders (EWB).
Dr Maria C.A. Balatbat is a senior lecturer at the Australian School of Business, University of
New South Wales (UNSW) and a Fellow of CPA Australia. Maria is also a research coordinator at
the Centre for Energy and Environmental Markets at UNSW. She teaches advanced financial
accounting in the undergraduate and post-graduate programs. Recently, she has developed a
post-graduate course on reporting for climate change and sustainability and teaches this to
business and environmental management students.
David G. Carmichael is a professor of Civil Engineering and former Head of the Department
of Engineering Construction and Management at the University of New South Wales. He is a
graduate of the Universities of Sydney and Canterbury; a Fellow of the Institution of Engineers,
Australia; a Member of the American Society of Civil Engineers; and a former graded arbitrator
and mediator. He publishes, teaches, and consults widely in most aspects of project management,
construction management, systems engineering and problem solving. He is known for his
left-field thinking on project and risk management (Project Management Framework, A. A.
Balkema, Rotterdam, 2004), and project planning (Project Planning, and Control, Taylor and
Francis, London, 2006). David G. Carmichael is the corresponding author and can be contacted
at: D.Carmichael@unsw.edu.au


