Annex 2.1: Reliability and availability
March 2014
This annex has been written to provide additional supporting information and explanation for the
propositions we have made in the core narrative.
The annex includes detailed information on our past and current reliability performance, and how
this benchmarks against our peers. It provides detailed information on the investments that we will
need to make and the technologies that we will employ to meet our target improvements in
reliability performance.
This annex is primarily aimed at stakeholders with expertise in asset management and an interest
in gaining a deeper understanding of how we maintain our strong reliability performance.
While we have sought to limit the use of technical language, in order to provide full information for
stakeholders we have inevitably included some concepts and terminology that may not be familiar
to the general reader.
The relevant sections in the core narrative that relate to this annex are:
Document history
This document is similar to the version that we published in June 2013 as part of our well-justified
business plan. It has been updated to reflect stakeholder feedback on our well-justified business
plan and our latest view of the costs of meeting the outputs that we have proposed in our plan. We
have also updated it to ensure that cross-references to other parts of the plan remain accurate.
* We have included at annex GL.1 a glossary that explains the key technical terms and abbreviations
used in our business plan.
* For more detail on how this plan differs from our June 2013 plan, please refer to annex G.12.
Contents
1 Keeping the lights on – our proposal
1.1 Our proposal in a little more detail…
1.2 Why we think our proposal is the right one
1.3 Evaluating the “realistic” options
2 Our target reliability-output performance
3 Our plans for proposed future reliability performance
3.1 Maintaining underlying reliability performance
3.2 Planned reduction in number of power cuts
3.3 Planned reduction in duration of power cuts
3.4 Dealing with long-duration power cuts
3.5 Improving reliability for the worst-performing parts of our network
3.6 Improving intermittent-fault performance
3.7 Minimising the inconvenience of planned power cuts
4 Our plans for ensuring network resilience
4.1 Installing additional flood defences
4.2 Improving our storm network resilience
4.3 Black-start resilience
4.4 Increased security measures at strategic sites
5 Managing future uncertainties
5.1 Smart meters
5.2 Metal theft
5.3 Low-carbon technology
5.4 Climate change adaptation
6 Benchmarking of our performance
7 Our past and present reliability performance
1 5th CEER benchmarking report on the quality of electricity supply 2011
availability to all customers. We plan to improve our performance by targeting a 20% reduction in the
average time for getting the lights back on compared with present levels.2 The system-wide average
reliability figures for our 3.9m customers can hide extremes of performance. Consequently we will
also continue to reduce long-duration power cuts. We will move our operational response capability
from the present 18-hour maximum for restoration to 12 hours. Where we are unable to achieve the
12-hour standard we will provide compensation: this affects relatively few of our customers, as fewer
than 1% of them experience a power cut.
Domestic customers typically felt the worst-served areas should be prioritised for improvements,
regardless of the number of customers affected, as everyone should be able to expect the same level
of service: therefore we are especially determined to make significant improvements for those
customers who receive a particularly poor service. Our plan is focused on improving service for those
customers who suffer significantly more power cuts than the average. We will do this by prioritising
our replacement of network components on the power lines that supply those customers, with £42m
of our asset replacement expenditure specifically allocated for that purpose.
The electricity network performs a vital role in the provision of a service upon which daily life in the UK
depends. So, as well as looking to maintain the underlying level of reliability, we seek to identify and
manage risks to the network from hazards that are outside our control. In managing these hazards we
maintain the longer-term level of network reliability. The hazards are usually low-probability but high-
consequence events such as widespread power cuts due to natural disasters, e.g. flooding or ice
storms. Through working with government agencies and other key stakeholders we have developed
plans to improve the resilience of the network. These plans involve the installation of protective
measures for our assets and improvements to the way we respond to major incidents. These
protective measures include installation of flood defences, creation of vegetation-free corridors
around our overhead lines and additional security measures at strategic sites.
2 Average restoration time on our network is 62 minutes for a high-voltage fault and 200 minutes for a low-voltage fault
Prior to evaluating our options in more detail, we established at a high level the potential range of
options that were available to us. These were then narrowed down by applying stakeholder
requirements and economic/technical constraints to derive a set of suitable ‘realistic’ options.
First, we considered a number of options that would result in a step-change reduction in the number
of faults, their duration and/or the number of customers affected by each fault. We know
that in some parts of the world customer minutes lost (CML) arising from the high-voltage network
have been all but eliminated by investing in automation technology in every substation on the high-
voltage (HV) feeders. This ensures that, for a single HV fault, the network can be reconfigured within
the three-minute watershed, wherever the fault is on the feeder. Given the cost of this programme,
however, it is only suitable for areas of very high population density. For example, Hong Kong has
applied this approach with some success, but it has a population density of 26,000 per square
kilometre compared with 4,000 for urban Leeds. We have ruled out this kind of approach because the
costs to apply it to urban areas (i.e. ground-mounted substations) alone, would be grossly
disproportionate to the benefits.
We have also considered whether we could deliver a step change in reliability, perhaps 60%, by
significantly increasing cable-overlay and overhead-line refurbishment/rebuilding programmes so as
to materially reduce the risk of individual asset faults relative to current levels. Targeting a level that
would deliver material success would be massively expensive, perhaps increasing asset replacement
expenditure ten- or twentyfold. This is because network assets are, as a population, highly reliable.
For example, the proportion of low-voltage (LV) mains that develop a fault in a given year is tiny
relative to the population, of the order of 1% or 2%. Only those new installations that replaced assets
that would otherwise have failed would deliver any real benefit; but, to be confident of preventing one
future fault, i.e. of actually catching one LV main that would have faulted had we not replaced it, we
would need to replace at least 10 LV mains, because our hit-rate would be no better than 1 in 10 and
probably worse. In our view this approach would be both financially inefficient and irresponsibly
wasteful of physical resources.
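To make the hit-rate arithmetic above concrete, the short calculation below works through the numbers quoted in this section. It is an illustration only; the tenfold targeting gain is our assumed reading of the "1 in 10" hit-rate, not a figure derived from our asset data.

# Illustrative only: expected number of LV-main replacements needed to
# pre-empt one future fault, assuming faults are rare and independent.
annual_fault_rate = 0.01        # ~1% of LV mains fault in a given year (assumed)
target_faults_prevented = 1

# Untargeted, each replacement has roughly a 1% chance of hitting a main
# that would otherwise have faulted, so ~100 replacements pre-empt one
# fault on average. Even a tenfold improvement from condition targeting
# leaves the hit-rate at about 1 in 10.
untargeted = target_faults_prevented / annual_fault_rate
targeting_gain = 10             # assumed improvement from condition targeting
targeted = untargeted / targeting_gain

print(f"Untargeted replacements per fault prevented: {untargeted:.0f}")
print(f"With tenfold condition targeting: {targeted:.0f}")  # ~10, matching the 1-in-10 hit-rate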
Finally, and in order to overcome this problem, we considered whether deployment of predictive
technology on each feeder would yield significant improvements in supply availability at reasonable
cost. Currently, however, such technology is not cost-effective enough to be deployed as standard
across the general mass of cable assets. It is also ineffective on cables with multiple branches and at
low voltage. Its use would therefore be limited to problem high-voltage feeders, and we believe this is
unlikely to change during the RIIO-ED1 price control period; if it does, we will adopt it at our own
expense. A similar problem exists with overhead lines. With these assets we can and do assess
condition, and we are therefore confident of being better able to spot those that might potentially fail.
In reality, however, a high proportion of overhead faults are caused by environmental factors such as
bad weather, and accurate prediction of which assets will fail, and in what order, is only partially
successful even with condition data. We do stack the odds in the customer’s favour, but the hit-rate is
far from 100%.
For these reasons we have concluded that targeting a radically improved reliability provision in the
short to medium term would be expensive, inefficient, difficult to implement and, in short, not in
customers’ best interests. We do not, therefore, believe that a step change in reliability performance
is a realistic option for our plan. We do, however, face choices within a range of reasonable cost and
output combinations, whereby some additional expenditure might deliver incremental reliability
improvements. In drawing up our plan, we had to evaluate the choices and trade-offs within this range
of options. We discuss this evaluation in the next section.
- We could pursue reducing the customers per fault by sectioning the HV network further.
We have a clear policy in this regard based on a cost/benefit analysis: our policy is to
reduce customer numbers per feeder to the point where the costs, which we recover
from customers, would start to outweigh the benefits to customers of increased
reliability. The evaluation we make is of the costs of various engineering solutions against
the level of consequential improvement in reliability performance, costed using the
Interruptions Incentive Scheme (IIS) incentive rates (a simplified sketch of this test
follows this list). Our policy sets out several different potential solutions, such as
application of new protection points or automated switches, or splitting of HV circuits for
different volumes of connected customers. Our plan has been developed to meet this
policy and has the efficient costs built in already. If we find any feeders that are not
compliant with this policy, we will fix that at our cost.
- We have placed the greatest emphasis on targeting improved restoration times within
our plan, because it is the most cost-effective component to pursue to improve overall
availability. The gains that can be made are material and can be secured at reasonable
cost. The initiatives we will deploy cover installation of new assets, introduction of
revised operational procedures, use of advanced fault management devices and
leveraging of new IT systems to provide an efficient targeted design or operational
response. Using this portfolio of solutions, our analysis suggests that a 20% improvement in
restoration times is consistent with the cost-effective level of expenditure using the 2015-
23 incentive rates set on behalf of all customers. It is possible to make further gains
beyond those we plan, but we believe this would be inefficient.
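As referenced in the first bullet above, the cost/benefit test we apply can be sketched in a few lines of code. This is a minimal illustration of the logic only; the incentive rate, intervention costs and CML savings are invented placeholder values, not figures from our plan or from the actual IIS rates.

# Hedged sketch: compare the cost of an engineering intervention against
# the value of the reliability improvement it buys, priced at an assumed
# incentive rate. All numbers are illustrative placeholders.
IIS_RATE_PER_CML = 0.15   # assumed GBP value per customer-minute lost avoided

def worth_doing(capital_cost_gbp, cml_saved_per_year, years=8):
    """Return True if the valued CML saving over the period covers the cost."""
    benefit = cml_saved_per_year * years * IIS_RATE_PER_CML
    return benefit >= capital_cost_gbp

# Candidate interventions on one HV feeder (illustrative figures):
options = {
    "automated mid-point switch": (40_000, 60_000),   # (cost GBP, CML saved/yr)
    "extra protection point":     (25_000, 20_000),
    "split feeder into two":      (250_000, 90_000),
}
for name, (cost, cml) in options.items():
    print(f"{name}: {'proceed' if worth_doing(cost, cml) else 'do not proceed'}")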
Reduce the duration of power cuts when they do occur by more than 20%.
As a component of overall availability, the duration of faults is a particular irritant to our
customers and we have carefully considered the level of improvement in this area that we
should pursue. Essentially our analysis shows that an improvement of this order can be
secured within the current quantum of technology and resource deployment, which itself is
justified by customers’ willingness to pay, as reflected in the incentive rates. In other words,
securing an improvement materially greater than 20% requires a different solution set. So, for
example, automating normal open points on HV rings is part of the tool kit that secures 20%.
To go beyond this we would have to automate not just the normal open point but all potential
open points on the ring. The whole point of a ring system is that any point can act as the open
point when required, so this means automating all other substations on the ring. In other
words, the benefit does not simply continue to manifest itself linearly once all normal open
points are automated. Our plan is not to stop focusing on reducing fault duration once we
achieve a 20% reduction; rather, it simply recognises in the cost proposals that this is the
efficient improvement available at current levels of customer valuation. We could deliver less
than a 20% improvement, but we believe that we should target the highest level of
performance that we can justifiably deliver under the incentive regime.
Target planned investment more/less intensively to address network weak spots promptly
when they occur.
Where intermittent faults and poorly-performing sections of network are concerned, we
believe there are few real alternatives to our policy of responding to these issues as they arise.
The feedback from our customers indicates that the repeated fault situation leads to the
greatest level of frustration and poor customer service. Although such situations are rare and
investment could be increased in these areas, it is unclear that this would bring much definite
benefit: certainly such problems cannot be completely eliminated, even at grossly inefficient
levels of expenditure. Customers tell us that they understand this approach and hence we
believe the correct policy is one that drives a prompt response to the issue that customers
see: accordingly, that is what we have built into our plan. This response consists of initial
operational measures followed by appropriate asset replacement where the scope of works is
sufficient to prevent the underlying cause of interruptions from reappearing. In doing this we
believe we have a plan that optimises for the service customers want and the costs they are
willing to meet. Should the costs of meeting that service expectation exceed what we have put
in our plan, any additional costs of meeting it are at our risk.
Take steps to improve network resilience: as alternative options we considered doing more
or less resilience-enhancing work such as vegetation management and installation of flood
defences.
We have worked closely with the Environment Agency, Defra and DECC on developing plans
and policies that we believe strike the right balance between cost and risk for greater
resilience against flooding, widespread transmission failure (black start), storms and security
threats, as set out later in this annex. We believe that this level of governmental input limits
our options at this stage
and it is now for us to execute the agreed policies. Our commitment to customers is to deliver
these obligations, and our cost assumptions are such that, if delivering the levels of resilience
set out involves costs greater than we have planned for, we will assume that risk and manage
it without scaling back on our delivery commitment.
Reconfigure our operational response capability to move from the current 18-hour
maximum for restoration so that we restore supplies within a maximum of 12 hours and pay
compensation if we are not successful.
For long-duration faults we have set out our operational plan to move from an 18-hour
backstop to a 12-hour backstop. To do this we will reconfigure our operational response
capability to be closer to the customer base. There is no technically plausible change that can
be made to the network that would not result in grossly disproportionate costs. So, as set out
in Section 1 of our main plan document, we will be optimising our operational bases and
resources for ‘lights-out’ response and not just focusing on cost minimisation. We believe that
the move from an 18-hour standard to a 12-hour standard is matched by customers’
willingness to pay (see our stakeholder engagement annex G.9), but this approach is scalable
so that, should the standard tighten again in the future, we could localise our operational
capability even further. Of course, customers’ willingness to pay would need to match that,
because it is clear that costs increase the more we localise, and at some point that effect
accelerates as operational capability becomes increasingly fragmented.
We accept that the method used by Ofgem in setting these targets is an entirely appropriate one and it is
our intention to respond to the incentive arrangements that are in place and perform better than the
IIS targets set by Ofgem.
[Figures: customer interruptions and customer minutes lost, Northeast and Yorkshire]
Planned power cuts can be required for us to replace, repair or maintain our network assets. As part of
our planning for these events we assess whether to use generator sets to maintain supplies, but this is
not possible in all cases. When a power cut is required we will ensure that the disruption is kept to a
minimum. Advance notification and assistance are provided to all the affected customers. We will
report on our planned power cut performance over 2015-23 separately from our performance in
relation to unplanned power cuts. We accept the target-setting mechanism for planned power cuts
proposed by Ofgem, as it will respond to the level of work required on the network while still
encouraging us to minimise the number and duration of power cuts.
To supplement the main or primary measures of reliability performance (i.e. CI and CML), we have a
number of secondary measures, which are deliverables that we commit to in terms of asset health and
criticality. Further details on the planned levels of secondary output deliverables associated with
reliability are presented in section 2.8 of the business plan. This annex will focus on our commitments
relating to delivery of primary output measures of reliability as seen by our customers.
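For readers less familiar with the primary measures, CI records unplanned interruptions per 100 connected customers per year and CML records the average customer minutes lost per connected customer per year. The sketch below shows how both are computed from incident records; the incident data are invented for illustration.

# Illustrative computation of CI and CML from incident records.
# Each record: (customers_interrupted, minutes_off_supply). Invented data.
incidents = [(1_200, 45), (300, 180), (5_000, 12)]
connected_customers = 3_900_000   # our 3.9m customer base, per this annex

ci = sum(n for n, _ in incidents) / connected_customers * 100
cml = sum(n * mins for n, mins in incidents) / connected_customers

print(f"CI:  {ci:.4f} interruptions per 100 connected customers")
print(f"CML: {cml:.4f} customer minutes lost per connected customer")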
3 Annex G.9: Stakeholder research reports summary – RIIO Phase 1 stakeholder consultation summary
ongoing investment and upgrading programme in order to address weak spots on the existing network.
Respondents in previous weak-spot areas recommended upgrading the network because, in their
view, repairs in response to a fault were temporary and they were looking for a long-term solution.
- As part of the upgrading of our network management system used to remotely control
the network, we are introducing state-of-the-art network-automation functionality.
- This will allow our networks to be automatically reconfigured in less than three minutes
to prevent measured customer interruptions from occurring (a simplified sketch of this
switching logic follows this list).
- The system will be expanded as further assets capable of being operated remotely are
installed.
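A hedged illustration of the switching logic referenced above is sketched below. It is a simplification for explanatory purposes, not a description of our network management system; the section and switch naming is invented.

# Hedged sketch of automatic HV sectionalising: isolate the faulted
# section and back-feed the healthy sections within the three-minute
# watershed, so the event does not register as a measured interruption.
WATERSHED_SECONDS = 180

def reconfigure(sections, faulted, elapsed_seconds):
    """Return switching actions to isolate `faulted` and restore the rest."""
    actions = [
        f"open switch upstream of section {faulted}",
        f"open switch downstream of section {faulted}",
        "close normally-open tie point to back-feed healthy sections",
    ]
    restored = [s for s in sections if s != faulted]
    counts_as_interruption = elapsed_seconds >= WATERSHED_SECONDS
    return actions, restored, counts_as_interruption

actions, restored, counted = reconfigure(["A", "B", "C", "D"], "B", elapsed_seconds=90)
print(actions)
print("Sections restored:", restored)
print("Counts as a measured interruption:", counted)   # False: under 3 minutes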
Delivering our core asset maintenance and replacement plan:
- We will inspect our assets to understand their condition and undertake maintenance of
those assets in line with our policies.
- We will replace assets that are in poor condition or that are unreliable, to prevent future
asset failure.
- This will ensure that we maintain the long-term integrity of our network.
Managing the vegetation surrounding our overhead lines:
- Having previously cut back any vegetation from our lines we will revisit those lines on a
regular basis to maintain that clearance.
Adding more remote control to our network:
- We will continue to add more remote control to our assets, where economic for our
customers.
- This will provide our control engineers with more options to rapidly reconfigure the
network.
Improving our fault response:
- Not all faults can be restored remotely and many require us to dispatch staff to restore
customers’ supplies via network rerouting or repair work.
- Our new IT systems will allow us to develop ways of more rapidly dispatching skilled staff
to site and providing accurate fault-status information for our customers.
Strengthening our operational management:
- We are reviewing our operational management structure to ensure that we can deliver a
cost-effective fault-response service for our customers.
- This involves assessing the location and skills mix of our workforce to ensure that our
working patterns match the type, volume and timing of fault activity.
cable faults. In rural areas such as North Yorkshire and Northumberland, on the other hand, we will
increase the availability of linesmen to deal with overhead line faults. This transformation will involve
locating our staff nearer to the population centres they serve and recruiting future staff, through our
workforce renewal programme, into those local areas.
Additionally, our analysis of the types of fault that lead to a longer-duration power cut indicates that a
significant number are reported to us in the evening. This presents a challenge in restoring power
supplies to customers when a repair is required. Overnight working to undertake repairs is often not
supported by domestic customers, for reasons of safety and disruption, and feedback continues to
reinforce this; business customers, by contrast, generally supported overnight working. In addition,
carrying out this type of work during certain night-time hours can be prohibited by local authorities.
Our plan assumes that our engineers will stop working before midnight and then continue any
necessary repair work in a safe manner first thing in the morning. Where practical and should
customers agree, we will use mobile
generation as a means of temporarily restoring power supplies overnight.
Finally, our recent survey of domestic customer priorities identified the provision of compensation as
being important to them. When we do have an unplanned power cut in excess of 12 hours we will
provide compensation to all customers in line with and above our regulator’s proposal for the
guaranteed standard that governs supply restoration in normal weather conditions. We hope also, in
doing this, to demonstrate that these payments represent a genuine apology on our part for below-
par performance in such a fundamental area of our activities, rather than just something that we are
compelled to provide.
Better integration of our IT systems and development of our GROND reliability-analysis system will
allow our design engineers to select the investments that will provide the most cost-effective
improvements in overall reliability for our customers. We will also be able to analyse the
improvements in customer service from this targeted asset replacement.
To support this analysis, we will continue to operate a hotspot management process to ensure that
timely network investment interventions are based on customer feedback in conjunction with our
desktop activity. A customer hotspot is an area of the network that we select for more intensive
management attention, based on customer feedback and the type of reliability problems
experienced. These areas are the subject of detailed plans for improvement and customer care.
history of both planned and unplanned power cuts within the last 12 months to ensure that customers
only suffer further interruptions where it is absolutely necessary.
[Figure: expenditure (£m), Yorkshire and Northeast]
[Figure: number of customers, Yorkshire and Northeast]
We will be supplementing this flood-defence investment with additional defence work at substations
that are at risk of surface-water flooding, in line with feedback from customers.4 We are working with
the Environment Agency and other network operators to expand industry guidance for flood-risk
assessment and defence measures. Our preliminary analysis concentrates on the risk presented to our
substation sites from 1-in-300-year surface-water flooding events. Our assessment has produced a
plan to defend:
- 56 sites belonging to our Northeast licensee at a cost of £17m, mitigating surface-water flood
risk for sites serving 2m customers
- 70 sites belonging to our Yorkshire licensee at a cost of £22m, mitigating surface-water flood
risk for sites serving 2.5m customers
As the national guidance evolves we will change our plans accordingly.
4 Customer prioritisation research summary – Explain January 2013 – Annex G.9 report 1
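As a rough proportionality check (our own arithmetic on the figures quoted above, not an output of the plan), the proposed expenditure equates to under £10 per customer served in each licence area:

# Cost per customer served, from the flood-defence figures quoted above.
plans = {"Northeast": (17e6, 2.0e6), "Yorkshire": (22e6, 2.5e6)}  # (GBP, customers)
for area, (cost, customers) in plans.items():
    print(f"{area}: £{cost / customers:.2f} per customer served")
# Northeast: £8.50 per customer; Yorkshire: £8.80 per customer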
[Table: climate-change hazards assessed (heavy rain, heavy snow, gradual warming, lightning) against
asset and activity areas (overhead lines, transformers, circuit breakers, earthing, underground cables,
vegetation management, emergency response, protection, routine business, customer service)]
The risks detailed were assessed on a probability and impact basis, using Northern Powergrid’s risk-
management process. The risks were assessed for the current climate, as well as for the climate
conditions predicted for the 2020s, 2050s and 2080s. Actions were then considered for any areas
where the risks were felt to be unacceptably high.
In order to ensure that climate-change adaptation is adequately embedded within our long-term
investment programmes, where appropriate all new and replacement plant will be specified to take
account of the possible climate-change effects over the lifetime of the equipment. That said, we
believe that current standards for equipment are sufficient to cater for the existing projections within
the Government’s analysis and we therefore are not proposing additional investment relating to
specification changes.
For example, the quality-of-supply benchmarking work of the Council of European Energy Regulators
indicates that UK customers experience a reliability performance that is positioned strongly in the
group of 27 countries assessed.5 Benchmarking activity that Northern Powergrid has undertaken with
electricity distribution companies in the US shows that UK reliability performance ranks alongside the
very best utilities in the US, as highlighted in Figure 5 below. The comparison is provided against
results from a survey covering 88 electricity distribution networks in the US using the IEEE
benchmarking framework for outage duration. The benchmarking measures performance using the
IEEE measure of system average interruption duration index (SAIDI), which is equivalent to the CML
measure used in Great Britain.
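For reference, SAIDI has a standard IEEE definition; the formulation below is the textbook one rather than a quotation from the survey:

\[ \text{SAIDI} \;=\; \frac{\sum_{i} N_i \, r_i}{N_T} \]

where N_i is the number of customers affected by sustained interruption i, r_i is its restoration time in minutes and N_T is the total number of customers served. Expressed in minutes per connected customer per year, this is the same quantity as CML.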
[Figure 7: UK benchmarking of performance – HV CML performance. Customer minutes lost, 2009-2012;
series: First Quartile IEEE Survey, Northern Powergrid (Northeast), Northern Powergrid (Yorkshire)]
5 5th CEER Benchmarking Report on the Quality of Electricity Supply 2011
Another way for customers to assess their level of service is to compare distribution network
operators’ performance against our regulator’s targets. The methodology for setting these targets
uses industry benchmarking data and improvement factors to determine a set of unplanned
interruption targets. These targets encourage distributors to reduce the number and duration of
power cuts by financially penalising or rewarding them. This financial incentive to improve reliability
performance is calibrated based on customers’ willingness to pay. Our historical performance against
these targets is demonstrated in the next section.
To date our performance improvements have been delivered through investment in network
equipment to reduce the duration of power cuts, operational initiatives to improve how our staff
respond to faults, initiatives to prevent faults from occurring and the use of innovative equipment on
the network.
In the past, interruptions to supply on remote power lines could cut off customers for hours whilst
engineers searched for the problem. Now such interruptions can be limited to less than three
minutes, thanks to the more than 5,500 remote-control switches installed across our area, a number
that continues to grow. They enable us to isolate the faulty area and reroute the supply around it,
saving valuable time in restoring power. Since 2010 we have invested over £10m in these types of
network asset. Figures 13 and 14 show the increasing volume of remotely-controlled switches that
have been installed and the rising percentage of customer supplies that have been restored using this
facility.
Two-thirds of our network is underground and faults can be harder to trace here than on overhead
lines, because the supply system is out of sight. We use devices that allow us to quickly pinpoint the
location of the problem, saving the trouble and cost, say, of digging up the road in the wrong place.
Additionally customers may often experience intermittent faults that disappear when we first try to
restore supply. Unfortunately in these situations the fault can often reappear many times, weeks or
months later, causing further disruption. This type of fault is very difficult to find and fix permanently.
In these circumstances, therefore, we are using an innovative device to help us to proactively tackle
intermittent faults on low-voltage underground cables. The device does two things: firstly it will help
us restore supply more quickly to limit the time customers are without electricity, and secondly it will
try to trace the fault before it disappears again. Once we get a good idea of where the problem is, we
can proactively carry out the necessary repairs and restore a more reliable supply.
Part of our strategy to improve reliability has been to prevent power cuts from occurring in the first
place. Two significant initiatives targeted at high-voltage overhead lines have been the installation of
arc-suppression-coil earthing and the implementation of a rigorous vegetation (trees etc)
management programme.
We have installed over 60 arc-suppression coils at primary substations. These substations have been
selected because they supply customers via a large proportion of overhead line and have a history of
transient or short-duration supply interruptions. A proportion of faults occurring on high-voltage
overhead lines are of a short duration and the lines are not permanently damaged. Lightning,
windblown debris and tree branches are typical causes of these faults. Although they are transient
faults, they do cause operation of a circuit breaker and supplies to the area are interrupted for tens of
seconds as the automatic control system tries to restore the supply. The arc-suppression coil earthing
system can prevent the circuit breaker from operating for a number of these transient faults by
limiting the current that flows at the point of fault, allowing the arc to extinguish itself. This prevents customers from even seeing
a momentary dip in the lights. Although we believe we have reached the point of saturation on our
network with these devices, we continue to explore via our innovation research the further
application of similar devices.
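For the technically minded, the tuning principle behind arc-suppression (Petersen) coil earthing is standard: the coil inductance is chosen so that its inductive current cancels the network’s capacitive earth-fault current. The resonance condition below is the textbook formulation, not a figure from our design standards:

\[ \omega L \;=\; \frac{1}{3\,\omega C} \quad\Rightarrow\quad L \;=\; \frac{1}{3\,\omega^{2} C} \]

where L is the coil inductance, C is the per-phase capacitance to earth of the connected network and \( \omega = 2\pi f \) at f = 50 Hz. With the coil tuned close to this point, the residual current at the fault is usually small enough for a transient arc to extinguish without the circuit breaker operating.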
Throughout the present regulatory period we have delivered a vegetation management programme
that minimises the level of vegetation interference with our overhead lines. We accomplish this by
assessing the type of vegetation in the locality of our overhead lines, cutting the vegetation back to a
minimum clearance distance and then maintaining that clearance through regular visits, as described
in annex 1.6. This
programme commenced in 2005 and the final initial cuts were completed in 2012. Over this time we
have seen a reduction in faults caused by vegetation.