
Proceedings of the 2012 9th International Pipeline Conference IPC2012 September 24-28, 2012, Calgary, Alberta, Canada



Luc Huyse Chevron ETC Houston, Texas, USA

Shahani Kariyawasam TransCanada Pipelines Calgary, Alberta, Canada

ABSTRACT

The main objective in using reliability-based methodologies is to provide consistent safety by explicitly accounting for uncertainties in a probabilistically quantified manner. Reliability methods also allow the articulation of the level of safety. This level of consistency in safety cannot be achieved in a deterministic analysis using safety factors. However, reliability-based methods can be used to calibrate and improve deterministic methods and so improve the consistency of the safety level. Providing consistent safety enables the optimization of maintenance activities, which in turn enables the safest system to be provided with the available resources. Currently used deterministic and reliability-based methods are both examined and discussed. Gaps and areas of improvement are identified with the objective of improving safety and explicitly articulating and communicating the level of safety. Effective use of quantitative risk and reliability methodologies requires quantitative data that describes the current state of the pipeline, the anticipated future state, and the failure limit state. In maintaining oil and gas pipelines, this level of quantitative data is available when pipelines are in-line inspected. Although reliability-based assessments are by no means restricted to corrosion management, the reliability-based maintenance program at Pipeline Research Council International (PRCI) has foremost been applied to corrosion management, because in-line inspection (ILI) data is sufficiently accurate to support reliability-based assessments. Guidelines for a reliability-based maintenance program have been developed, and projects have been executed to validate and demonstrate the implementation of these methodologies. The main learnings from these guidelines and subsequent validation projects have been useful in identifying the process for improving integrity-related decision making, the sensitivities of these methodologies, the impact of physical uncertainty and knowledge uncertainty, and the challenges in defining and applying target criteria. These identified areas are explored and discussed.

INTRODUCTION

Integrity of an engineered system can be achieved using either a deterministic analysis based on safety factors or a reliability-based approach, where the inherent variability of the variables and the uncertainties in the model equations are explicitly quantified as random variables and the reliability is expressed as a calculated probability of exceedance or failure. For a particular system, there is a one-to-one relationship between the safety factor and the predicted reliability. In a safety factor based assessment framework, a code, standard, or recommended practice is used to determine the necessary safety factor. The numerical values of these safety factors imply a specific level of scatter in the data; larger values generally point to an increased level of scatter or uncertainty, and the factors can be calibrated to match the performance of a full probabilistic analysis. As explained later in the paper, two types of uncertainty exist. The first is inherent variability, which describes natural variation in properties such as the wall thickness, operating pressure, material strength, and the like. The second is model or knowledge uncertainty; an example is the misfit between predicted and observed burst pressures for corrosion anomalies. Although both types are mathematically represented by random variables, there is a significant

Copyright 2012 by ASME

difference in the interpretation of the results. While the inherently variable parameters really do fluctuate over a certain range, there truly is only a single (yet unknown) value associated with a knowledge uncertainty parameter.

Figure 1: Relationship between safety margin, probability and level of uncertainty. A wider scatter on the resistance, while maintaining the same safety margin, increases the failure probability.

THE CHALLENGE, PROMISE AND APPEAL OF RELIABILITY-BASED METHODS

Reliability-based methods aim to explicitly quantify the aggregated conservatism in terms of probabilities and risk. Accurate reliability estimates are not possible without accurate computational prediction models for the limit states and adequate quantification of the inputs and assumptions. Although this statement may seem self-evident, it should not be made light-heartedly. In fact, just about every analysis step in the pipeline integrity assessment procedures contains an inherent, yet unquantified, level of conservatism. Examples are abundant: the typical burst equation models (e.g. ASME B31G or modified B31G [1]), the corrosion growth rate assumptions (e.g. application of a maximum corrosion growth rate that is constant in time [2]), as well as defect interaction and clustering rules [3] are all intrinsically conservative. This represents the challenge that has to be overcome by reliability-based methods. Deterministic methods have a lot of practical appeal due to their familiarity among practicing engineers. Given the additional level of complexity that comes with a reliability-based approach, one may wonder what the merit of reliability-based methods is and what tangible benefits they bring over a deterministic, safety-factor based approach. It can be shown that for a specific engineering system there is, in general, a relationship between the employed (partial) safety factors and the calculated reliability [4, 5]. Among other factors, the safety factor and aggregated level of conservatism required to achieve a specified level of reliability depend on the scatter in the input distributions: the higher the scatter, the higher the safety factor required to achieve a particular reliability level. Precisely because the necessary level of conservatism depends on the amount of scatter in a system, a (slightly) different safety

factor should be employed for each pipeline and defect in order to achieve a constant reliability (or risk) level across all pipelines in a network. This is practically impossible, and a limited set of safety factors must be employed (e.g. class dependent). As a result, not all pipelines in the network will have identical risk when a safety factor approach is used. When a reliability-based approach is used, a more consistent risk level can be achieved across the entire network. Precisely because of this reduced spread, a higher level of safety with minimal variation can be obtained in a reliability-based approach than in a safety factor based approach for the same level of effort. In other words, reliability-based approaches achieve higher safety efficiency than safety factor based approaches. This is the promise of reliability-based methods. It was stated above that an accurate prediction of the true likelihood of an adverse event is impossible without significant research into determining and understanding the, usually conservative, bias in the engineering models that are currently employed in the pipeline integrity state of the practice. This is not a trivial task and in principle requires a fundamental revisiting of every assumption. Precisely because of this huge burden, reliability-based methods are most often used not to make accurate predictions of the likelihood, but to draw up a relative ranking of the risk for various design alternatives or maintenance strategies. The underlying assumption is that if the same engineering model is consistently used to evaluate all alternatives under consideration, the bias in the engineering models becomes less relevant to the relative ranking.
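The dependence of the required safety factor on scatter can be illustrated with a minimal numerical sketch. Assuming, purely for illustration, that load and resistance are independent Normal random variables (the distributions and numbers below are hypothetical, not taken from any code or standard), the failure probability for a fixed central safety factor grows rapidly with the resistance scatter:

```python
import math

def norm_cdf(x):
    """Standard Normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def failure_probability(sf, cov_r, cov_s):
    """P(R < S) for independent Normal resistance R and load S.

    The load mean is normalized to 1, so the central safety factor sf
    is also the resistance mean; cov_r and cov_s are the coefficients
    of variation (illustrative values only).
    """
    mean_margin = sf - 1.0
    std_margin = math.sqrt((sf * cov_r) ** 2 + cov_s ** 2)
    return norm_cdf(-mean_margin / std_margin)

# Same safety factor, increasing resistance scatter -> higher failure probability
for cov_r in (0.05, 0.10, 0.20):
    print(f"COV(R) = {cov_r:.2f}: pF = {failure_probability(1.39, cov_r, 0.05):.2e}")
```

With these hypothetical inputs the calculated pF rises by several orders of magnitude as the resistance COV goes from 5% to 20%, which is precisely why a single blanket safety factor cannot deliver a uniform reliability level across a network.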
An interesting feature of this approach is that it is not necessary to state the calculated reliability of each of the alternatives, and that no direct correlation between calculated reliability and historical failure rates can or should be inferred. It should be noted that this approach is typically followed during a reliability-based code calibration [6]. Such an approach could also be used to improve the reliability of an existing system by a predetermined amount without ever having to explicitly state a hard reliability target. This is the appeal of reliability-based methods. In pipeline integrity maintenance, reliability-based methodologies have been adopted within the last three decades. In maintaining oil and gas pipelines, an adequate level of quantitative data is available when pipelines are in-line inspected. Although reliability-based assessments are by no means restricted to corrosion management, this paper will consider ILI assessments as the primary example.

RELIABILITY METHODS IN PIPELINE INTEGRITY

A reliability-based approach to pipeline integrity explicitly considers the uncertainties in the data and models and aims to predict the likelihood of failure. To this end, it is necessary to have a good understanding of the true variability and/or conservatism in the models. Just about every analysis step in the pipeline integrity assessment procedures contains an inherent, yet unquantified, level of conservatism: the typical burst equation models (e.g. ASME B31G or modified B31G


[1]), the corrosion growth rate assumptions (e.g. application of a maximum corrosion growth rate that is constant in time [2]), as well as defect interaction and clustering rules [3] are all intrinsically conservative. This represents the challenge that has to be overcome by reliability-based methods. If conservative assumptions are made, conservative likelihood estimates will ensue. It is at this point unclear how such conservative likelihood estimates should be compared to risk or reliability targets. Many of the burst pressure equations in the industry were developed for use with deterministic methods, and the primary intent of these models was to define a safe pressure rather than an actual burst pressure. In 1995, Brown et al. [11] modified the B31G equation [1] for use with reliability methodologies. In deterministic methods, all models and inputs are supposed to be conservative, whereas in reliability methods the inputs and models strive to be as accurate as possible; the conservatism is applied at the end stage of the assessment, when comparing the calculated reliability to a target reliability. Consequently, Brown et al. [11] and Stephens and Nessim [12] modified the burst pressure equations to reduce both the bias and the scatter. Brown et al. found that the definition of the corroded area is the most significant factor influencing model prediction accuracy. An assumption of a standard shape, such as the parabolic shape or the 0.85 factor shape, reduced accuracy when compared to using the effective area (RSTRENG) [8] or the total area as recommended by Brown et al. This is reasonable considering that failure is proportional to the axial area loss, and the total or effective area better represents this axial area. This modified equation was shown to have accuracy similar to the effective area method (RSTRENG).
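For reference, the widely published modified B31G (0.85·dL area) burst pressure formulation can be sketched as follows. This is an illustrative transcription from commonly cited forms of the equation, with hypothetical pipe and anomaly values; any real assessment should verify the coefficients and applicability limits against ASME B31G itself.

```python
import math

def modified_b31g_burst_pressure(depth, length, diameter, wt, smys):
    """Estimated burst pressure per the modified B31G (0.85 dL area) model.

    All lengths in mm, stresses in MPa. Illustrative sketch only;
    verify against ASME B31G before any real use.
    """
    flow_stress = smys + 68.95          # flow stress = SMYS + 10 ksi (in MPa)
    z = length ** 2 / (diameter * wt)   # normalized flaw length L^2/(D*t)
    if z <= 50.0:
        folias = math.sqrt(1.0 + 0.6275 * z - 0.003375 * z ** 2)
    else:
        folias = 0.032 * z + 3.3
    d_over_t = depth / wt
    return (2.0 * wt * flow_stress / diameter) * \
        (1.0 - 0.85 * d_over_t) / (1.0 - 0.85 * d_over_t / folias)

# Hypothetical anomaly: 40%WT deep, 100 mm long, on 610 mm OD, 7.1 mm WT, X52 pipe
print(modified_b31g_burst_pressure(2.84, 100.0, 610.0, 7.1, 359.0))
```

Note how the predicted burst pressure reduces smoothly from the intact-pipe value as depth increases; it is this deterministic mapping from (depth, length) to burst pressure that the probabilistic treatments of Brown et al. [11] and Stephens and Nessim [12] wrap with random inputs.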
Stephens and Nessim modified this equation to be more appropriate for high strength steels, using the ultimate tensile strength, as opposed to the yield strength, to define the flow stress.

EFFECT OF UNCERTAINTIES

The burst pressure equations B31G [1] and RSTRENG [8] were developed on the basis of exact defect measurements, which are readily obtained at a test site, and perhaps in the ditch (although the possibility of significant ditch sizing error should not be altogether dismissed). When these equations are used with ILI-measured values, the measurement error is significant and needs to be accounted for. In the pipeline industry, safety factors are often considered adequate to account for these uncertainties [9], and consequently the reported measured values have been used directly in assessments. However, federal regulators in both Canada (NEB, the National Energy Board) and the USA (PHMSA, the Pipeline and Hazardous Materials Safety Administration) ask operators to account for measurement error. The specified error of most corrosion defect measurement tools is given as 10% wall thickness (wt) for depth and 10 mm for length, both at 80% confidence. The depth error (assuming a Normal distribution) is shown graphically in Figure 2. Both PHMSA and the NEB have suggested operators add 10%wt to depth measurements to account for depth error.

In effect, this specifies that the confidence that the actual depth does not exceed the value (dm + 10%wt) is 90%.
Figure 2: Specified depth measurement error of ILI tools (if it is assumed to follow a Normal distribution). The distribution has mean equal to the measured depth dm and standard deviation 7.8%wt; tick marks are shown at dm ± 10%wt and dm ± 20%wt.
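The numbers in Figure 2 follow directly from Normal-distribution arithmetic: a ±10%wt tolerance quoted at 80% confidence implies a standard deviation of 10/1.2816 ≈ 7.8%wt, and adding 10%wt to the measured depth then corresponds to roughly 90% one-sided confidence. A quick check:

```python
import math

def norm_cdf(x):
    """Standard Normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

tol = 10.0    # tool tolerance, %WT
sigma = 7.8   # implied standard deviation, %WT (= 10 / 1.2816)

within_tolerance = norm_cdf(tol / sigma) - norm_cdf(-tol / sigma)
one_sided = norm_cdf(tol / sigma)  # P(actual depth <= dm + 10%WT)
print(f"P(|error| <= 10%WT) = {within_tolerance:.3f}")   # ~0.80
print(f"P(depth <= dm + 10%WT) = {one_sided:.3f}")       # ~0.90
```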

It is immediately clear that adding a 10%wt tolerance to the anomaly depths will result in a different (and not necessarily consistent) level of conservatism for each anomaly.

TYPES OF UNCERTAINTIES

There are two main types of variability. Table 1 gives the variables involved and the type of uncertainty or variability associated with each variable:
1. Physical variability; i.e., spatial and temporal variations. Examples are variability in MOP, material and geometric properties, as well as corrosion growth rates.
2. Knowledge uncertainty; i.e., uncertainty due to lack of knowledge, idealization, or limitation of technology, such as the variation in defect sizing, growth, and model error. This occurs when only a single value for a parameter exists, yet it is unknown.
Table 1: Causes of variability

Variable                         | Type of variability
---------------------------------|------------------------------------------------
Maximum Operating Pressure (MOP) | variation with time
Material properties              | spatial variation, varies along pipeline
Geometric properties             | spatial variation, varies along pipeline
Defect sizing                    | uncertainty due to lack of knowledge or limitation of technology
Defect growth                    | spatial/time variation and lack of knowledge
Limit state model error          | uncertainty due to idealization, lack of knowledge or limitation of technology


Physical variability is due to spatial and temporal variations and therefore represents variation in the physical pipe. A higher variation in these variables will lead to a higher likelihood of the pipeline failing. Higher knowledge uncertainty, in contrast, does not affect the physical failure rate of the pipeline, but it does affect our knowledge of the probability of failure (POF) and our confidence in the reliability estimate. Unlike most physical uncertainty, knowledge uncertainty can in principle be reduced by better technology, more accurate equations, and better models. For example, if a pipeline is inspected using a low resolution ILI tool, as opposed to a high resolution tool, the result will be a higher calculated POF; the condition of the pipeline is the same whichever tool is used, and only the level of knowledge uncertainty is different. Similarly, the model error does not represent the physical condition of the pipeline but our knowledge of that condition. Growth rate is strongly affected both by temporal and spatial variations and by the knowledge uncertainty in modeling it. In this context, the state of knowledge is said to be perfect when complete statistical information and perfect models are available (Der Kiureghian, 1989). Under a perfect state of knowledge, a precise assessment of the reliability or risk can be made. In engineering problems, the designer typically makes decisions using incomplete information. The reliability problem can then be reformulated as a two-level reliability problem: the physical reliability problem sits inside another reliability problem, which reflects our confidence in the reliability estimate. If the calculated reliability is fairly insensitive to changes in the model uncertainty parameters, we have high confidence in the reliability estimate (tight confidence bounds).
Alternatively, if the reliability estimate is highly sensitive to the knowledge uncertainty, the confidence bounds around this expected reliability curve will be very wide (Paté-Cornell, 2002).

ACCEPTABLE LEVEL OF CALCULATED RELIABILITY

Typical levels of risk experienced in society are well documented in the literature (e.g. HSE, 2001; CCPS, 2007) [17, 18]. The numbers quoted in these publications refer to observed, historical rates; these are sometimes also referred to as objective frequencies. In general, annual fatality rates above 10^-4 are considered broadly unacceptable, whereas annual fatality rates of less than 10^-7 are considered de minimis and virtually safe. Target failure probabilities for ultimate limit state violations are specified in the Probabilistic Model Code Part I [6]. These two sets of values should not be directly compared, because the violation of an ultimate limit state (e.g. pipeline leak or rupture) does not automatically result in a (single) fatality. Therefore, ultimate limit state (ULS) violation probabilities ought to be compared to incident frequency statistics and not to fatality rates. The occurrence of injuries or fatalities is a conditional event, given that a ULS violation has taken place. Although they are often intended to be used in conjunction with explicit safety factors, deterministic engineering

assessment equations are often intrinsically conservative. It was stated above that an accurate prediction of the true likelihood of an adverse event is impossible on the basis of biased or inaccurate model descriptions of the limit states. The calculated reliability will be lower if a conservatively biased model formulation is used than if an unbiased model is used. In most cases, a risk assessment on the basis of a less accurate, yet unbiased, engineering model will result in a lower calculated reliability than if a more precise model is used. Engineering models are not perfect and, specifically in pipeline integrity assessments, significant sources of scatter (knowledge uncertainties) exist, as explained in an earlier section. Practical examples of knowledge uncertainty in pipeline integrity assessments are: sizing uncertainty for in-line inspection data, unexplained scatter in burst prediction equations, and the simplified mathematical description of actual corrosion growth. This brings up a subtle, yet quite significant, point: how does one meaningfully compare a calculated probability to a historical rate? Given that the knowledge uncertainties tend to inflate the calculated rates, it is obviously conservative to require that the calculated incident rates (ULS violations) fall below the historical rates. However, internal work by both authors has indicated that the effect of knowledge uncertainties on the predicted probability can be two orders of magnitude (a factor of 100). Unless these effects can be adequately addressed, this raises the question of whether a hard reliability target is practically meaningful, or even desirable, in cases where significant knowledge uncertainty exists. Note that, as stated in the introduction, many of the practical advantages of the reliability-based approach can be realized using relative risk ranking without explicitly defining hard reliability targets.
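The inflating effect of knowledge uncertainty on calculated rates can be demonstrated with a toy Monte Carlo sketch (all numbers, and the simple exceedance limit state, are hypothetical). The pipe condition, a single fixed pit depth, is identical in every case, yet the calculated POF grows with the sizing error of the tool used to measure it:

```python
import random

def calculated_pof(true_depth, sizing_sigma, wt=10.0, n=200_000, seed=42):
    """Toy limit state: 'failure' if depth exceeds 60%WT (hypothetical).

    The actual depth is a single fixed value; the spread in the
    calculated POF is driven entirely by the sizing (knowledge)
    uncertainty of the measurement, modeled as Gaussian noise.
    """
    rng = random.Random(seed)
    limit = 0.6 * wt
    hits = sum(rng.gauss(true_depth, sizing_sigma) > limit for _ in range(n))
    return hits / n

# Same pipe (true depth 4 mm, limit 6 mm), different tools:
# high-resolution (0.5 mm sizing sigma) vs low-resolution (1.5 mm)
for sigma in (0.5, 1.5):
    print(f"sizing sigma = {sigma} mm -> calculated POF = {calculated_pof(4.0, sigma):.4f}")
```

With perfect knowledge the POF here is exactly zero (4 mm is below the 6 mm limit), but the low-resolution tool yields a calculated POF near 9%: several orders of magnitude of apparent risk created purely by knowledge uncertainty, consistent with the factor-of-100 effects noted above.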
DIFFERENCES BETWEEN SAFETY-FACTOR AND RELIABILITY-BASED ASSESSMENT FOR A SINGLE ANOMALY

Some examples of the impact of basic assumptions on the calculated reliabilities can be found in Kariyawasam and Huyse (2011). The traditional measure of safety (expressed as either an ERF or an RPR) is generally proportional to log10(pF), where pF is the burst probability. However, two anomalies with an identical safety factor (ERF or RPR) can easily show differences in the calculated reliability of an order of magnitude (i.e., a factor of 10), or a full unit on the log10(pF) scale. The following example illustrates the difference between inherent uncertainty and knowledge uncertainty. The burst probability is computed for a single anomaly using the ASME B31G approach. Although the wall thickness WT, outer diameter OD, and specified minimum yield stress (SMYS) are also considered random variables, we will focus on the effects of uncertainties in the maximum pit depth, the applied pressure, and the shape factor. These three uncertainties are the most significant contributors to the calculated burst probability, and they are different in nature. Whereas the applied pressure may truly vary over time, so that a random variable is intuitive, there is only a single, correct value for the pit depth. The exact value may, however,


become uncertain if it is measured indirectly (e.g. through a magnetic flux leakage (MFL) ILI tool). Likewise for the shape factor: there is only one particular value of the shape factor that results in the correct burst pressure prediction, but its value may be unknown and therefore represented by a random variable. Modeling uncertainties are somewhat different, though, as the correct value of the shape factor may become dependent on the actual values of depth and applied pressure (this dependence would then really be caused by a mis-specification of the ASME B31G functional relationship).
Figure 3: Calculated burst probability depending on model and parameter assumptions. (The figure plots burst probability, on a logarithmic scale from 1 down to 1E-20, against the deviation of each parameter from its mean in standard deviations, from -2 to +2, for the cases d@10%WT, Pressure, Shape, d@15%WT, and All RVs.)

The sensitivity of the burst probability to each of the variables is shown in Figure 3. In this figure, the teal-colored line (All RVs) shows the burst probability when all variables are treated as random. Each of the other lines illustrates the conditional probability when one of the random variables is treated as a parameter while all other variables are kept random. Each of the parameters is varied over a range equal to 95% of the probability content ([-2σ, +2σ] for a Normal distribution). For the pit depth, two random variables are used: one corresponds to a sizing error of 10%WT with 80% confidence and the other corresponds to a sizing error of 15%WT with 80% confidence. The calculated probability of burst is quite sensitive to the value of the applied pressure, the shape factor, and the pit depth. Making all three variables random at the same time, using their respective distributions, results in a different calculated probability than when the knowledge uncertainties are either removed or ignored. The figure illustrates a number of important facts:
- the conditional probability is highly dependent on the assumed value of the parameter, and the relationship is extremely non-linear;
- the total probability (the teal-colored All RVs line in Figure 3), obtained as the integral of the conditional probabilities over the probability density of the variable, is mostly determined by a small fraction of the entire range;
- it is possible to set a model parameter at a particular value and obtain the same value for the conditional probability (ignoring the variability of the parameter) as for the total probability; this value is obtained at the intersection of the All RVs line and the parameter-specific lines in Figure 3;
- there is no single constant probability level at which each of the variables can be set such that the same reliability is obtained as for the full problem. For example, setting the depth equal to the 90th percentile (+1.28 standard deviations) and leaving all other variables random would overestimate the probability of burst. For the specific case considered, the reliability level for the depth value is around 66% if the sizing tolerance is 10%WT and around 72% if the sizing tolerance is 15%WT; both are well short of the 90th percentile that is sometimes proposed (which corresponds to 1.28 sigma in Figure 2).
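The relationship between the conditional and total probabilities in Figure 3 is simply the total probability theorem: integrating the conditional failure probability over the parameter's density recovers the all-random result. A toy check with Normal variables (the exceedance model and all numbers below are hypothetical, chosen only to make the arithmetic transparent):

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

# Toy model: failure when depth + remaining_variability > 6.0
MU_D, SIG_D = 4.0, 1.0   # depth treated as the uncertain parameter
SIG_E = 0.5              # all remaining (inherent) variability, lumped

def conditional_pf(depth):
    """P(failure | depth): only the inherent variability remains."""
    return 1.0 - norm_cdf((6.0 - depth) / SIG_E)

# Total probability: midpoint-rule integral of the conditional pF
# over the depth density (the 'All RVs' construction)
steps = 4000
lo, hi = MU_D - 6.0 * SIG_D, MU_D + 6.0 * SIG_D
h = (hi - lo) / steps
total = 0.0
for i in range(steps):
    x = lo + (i + 0.5) * h
    total += conditional_pf(x) * norm_pdf(x, MU_D, SIG_D) * h

# Closed form with both uncertainty sources combined
closed = 1.0 - norm_cdf((6.0 - MU_D) / math.sqrt(SIG_D ** 2 + SIG_E ** 2))
print(total, closed)
```

The two values agree, and inspecting the integrand shows that most of its mass comes from the upper tail of the depth density, consistent with the observation that the total probability is determined by a small fraction of the parameter range.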

It is therefore concluded that it is inappropriate to directly compare calculated probabilities to historical failure rates. The validity of any such comparison degrades as knowledge uncertainty becomes more important, and the relative importance of the knowledge uncertainty increases with increasing variability and with increasing sensitivity of the limit state to the uncertain knowledge parameter.

SUMMARY

The main objective in using reliability-based methods is to provide consistent safety by accounting for uncertainties that are not easily addressed by deterministic, blanket safety factors. Through probabilistic calculations, reliability methods also allow the articulation and explicit quantification of safety. This better specification and improved consistency enable the optimization of safety. Deterministic methods adopt conservative values of both loads and resistances and combine them using safety factors for further conservatism. The final level of conservatism in deterministic methods is unknown; consequently, the total level of safety afforded is not quantified or justifiable. To increase consistency in the safety level provided, the values used as input to deterministic models need to be specified in a manner that accounts for the variability in parameters in a more systematic way. For instance, the conservatism in different burst equations needs to be acknowledged and accounted for in a risk-consistent manner, without aggregating arbitrary and variable levels of conservatism. Deterministic methods have a lot of practical appeal due to their familiarity among practicing engineers. The safety factor and aggregated level of conservatism required to achieve a


specified level of reliability depends on the scatter in the input distributions: the higher the scatter, the higher the safety factor required to achieve a particular reliability level. It is not practical to specify a different safety factor for each different level of scatter. When a reliability-based approach is used, a more consistent risk level can be achieved across the entire network. Precisely because of the reduced spread of safety that is achieved in a reliability-based approach, a higher level of safety with minimal variation can be obtained than in a deterministic approach for the same amount of resources. In other words, reliability-based approaches achieve higher safety efficiency than deterministic approaches. In achieving more consistent and well specified safety levels using reliability methods, it is important to identify the sensitivity of the reliability to the input variables. Appropriate modelling of the significant variables is critical to optimizing decisions in pipeline integrity. The variation in each of these parameters can be due to physical uncertainty or knowledge uncertainty. Knowledge uncertainty has a significant effect on reliability in pipeline integrity at the current level of available technologies. As a result, the calculated probability of failure can be significantly affected by knowledge uncertainty, and thus calculated probabilities of failure cannot be directly compared to historical rates. All of these aspects need to be further investigated and researched.

DEFINITIONS

Outputs from qualitative risk methods are relative, and qualitative risk measures can be highly subjective. Quantitative risk means that measures of likelihood or probability are estimated or assigned, and that measurable parameters are used for consequences (dollars, losses of life, spill volumes, etc.). Quantitative risk methods can be more objective, if data and models are used within their range of applicability, or subjective, if probabilities are assigned subjectively based on judgment. In applying the results of a quantitative risk analysis, the user should be aware of the implications of the levels of subjectivity (due to assumptions and idealizations) and accuracy implicit in the numerical estimates. Quantitative risk methods can be deterministic or probabilistic. In probabilistic risk methods, probabilities are estimated quantitatively using statistical data and uncertainty modeling and then mathematically combined with measurable consequences. Probabilistic risk estimates provide more objective measures of risk by combining numerical estimates of frequencies, probabilities, and consequences; as in deterministic methods, the modeling assumptions will bring subjective elements into the assessment.

Reliability is the probability that a component or system will perform its required function without failure during a specified time interval (usually one year), equal to 1.0 minus the probability of failure.

Reliability-based assessment is an assessment method in which the pipeline is assessed and operated to meet specified target reliability levels. A reliability-based assessment is performed by quantifying all relevant input uncertainties.

Estimated Repair Factor (ERF) = MOP / Safe Pressure

Rupture Pressure Ratio (RPR) = Burst Pressure / (Required Safety Factor × MOP)

Safety factor = Burst Pressure / MOP

Risk is the expected value of loss (often expressed as damage per year, e.g. barrels spilled or the expected number of annual injuries or fatalities).

Risk = Likelihood (or probability) of failure × Consequence of failure

Probability of failure, or likelihood, is a measure of how likely an event is. The probability of failure is the probability that a component or a system will fail; it is typically expressed as an annual probability, equal to 1.0 minus the reliability.

Consequence is a measure of the outcome or loss from an event.

Site-specific risk assessment is a risk assessment of a specific segment, used to understand the details of the risk associated with a localized region. Generally, much better quality and more detailed data are available for a site-specific assessment than for a system-wide risk assessment.

Qualitative risk means that a user-defined ranking or scale (such as high, medium, low, or a 1 to 10 index) is used to define risk measures. Qualitative risk analysis methods are often employed as a screening tool to identify potentially high-risk scenarios that warrant a more detailed quantitative analysis.
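The deterministic safety measures defined above are simple ratios and can be written down directly. The helper names and the numerical inputs below are illustrative, not from any standard:

```python
def erf_measure(mop, safe_pressure):
    """Estimated Repair Factor: ERF = MOP / Safe Pressure."""
    return mop / safe_pressure

def rupture_pressure_ratio(burst_pressure, mop, required_sf):
    """RPR = Burst Pressure / (Required Safety Factor x MOP)."""
    return burst_pressure / (required_sf * mop)

def safety_factor(burst_pressure, mop):
    """Safety factor = Burst Pressure / MOP."""
    return burst_pressure / mop

def risk(pof, consequence):
    """Risk = likelihood (probability) of failure x consequence of failure."""
    return pof * consequence

# Hypothetical anomaly: MOP 7.0 MPa, safe pressure 8.0 MPa, burst pressure 10.5 MPa
print(erf_measure(7.0, 8.0))                     # 0.875
print(rupture_pressure_ratio(10.5, 7.0, 1.25))   # 1.2
print(safety_factor(10.5, 7.0))                  # 1.5
```

Note that all three measures collapse the full probabilistic picture into a single ratio, which is why, as discussed earlier, two anomalies with identical ERF or RPR can still differ by an order of magnitude in calculated burst probability.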

REFERENCES

1. ASME, 2009, ASME B31G, Manual for Determining the Remaining Strength of Corroded Pipelines: Supplement to ASME B31 Code for Pressure Piping, An American National Standard, The American Society of Mechanical Engineers.
2. NACE, 2010, In-Line Inspection of Pipelines, Standard Practice SP0102-2010 (formerly RP0102), Houston, Texas, USA.
3. Pipeline Operator Forum, 2009, Specification and requirements for intelligent pig inspection of pipelines.
4. Elishakoff, I., 2004, Safety Factors and Reliability: Friends or Foes?, Kluwer Academic Publishers, 295 pp.
5. Kariyawasam, S. N. and Peterson, W. B., 2008, Revised Corrosion Management with Reliability Based Excavation Criteria, Proceedings of IPC 2008, International Pipeline Conference, Paper No. IPC2008-64536, Calgary, September.
6. JCSS, 2001, A Probabilistic Model Code Part I: Basis of Design, ETH Zurich, Switzerland.
7. Chauhan, V., Brister, J., and Dafea, M., 2009, A Review of Methods for Assessing the Remaining Strength of Corroded Pipelines, US Department of Transportation, Report No. 6781, September.
8. Vieth, P. H. and Kiefner, J. F., 1993, RSTRENG2 User's Manual, Final Report on PR-218-9205 to Pipeline Corrosion Supervisory Committee, Pipeline Research Committee, Pipeline Research Council International, Inc., Catalog No. 51688, Kiefner & Associates, Inc., March.
9. Kiefner, J. F., 2008, Safety Factors for Assessing Pipeline Anomalies, INGAA white paper, P-PIC report, April 30.
10. Stephens, M. J. and Nessim, M. A., 2009, Guidelines for Reliability Based Pipeline Integrity Methods, Final Report on PR-244-05302, Pipeline Research Council International, Inc., September.
11. Brown, M., Nessim, M., and Greaves, H., 1995, Pipeline Defect Assessment: Deterministic and Probabilistic Considerations, Second International Conference on Pipeline Technology, Ostend, Belgium, September.
12. Stephens, M. J. and Nessim, M. A., 2006, A Comprehensive Approach to Corrosion Management Based on Structural Reliability Methods, Proceedings of IPC 2006, International Pipeline Conference, Paper No. IPC2006-10458, Calgary, September.
13. Desjardins, G., Sahney, R., and Spencer, K., 2011, Draft Final Reports for Phase II of Field Demonstration and Benchmarking Work for Reliability Based Guidelines for Pipeline Integrity, Final Reports on Project EC1-6, Pipeline Research Council International, Inc., January.
14. Kariyawasam, S. N. and Peterson, W. B., 2010, Effective Improvements to Reliability Based Corrosion Management, Proceedings of IPC 2010, International Pipeline Conference, Paper No. IPC2010-31425, Calgary, September.
15. Dawson, S. J. and Kariyawasam, S. N., 2009, Understanding and Accounting for Pipeline Corrosion Growth Rates, Joint Technical Meeting, Paper No. 22, Milan, Italy, May.
16. Huyse, L., Van Roodselaar, A., Onderdonk, J., Wimolsukpirakul, B., Baker, J., Beuker, T., Palmer, J., and Jemari, N. A., 2010, Improvements in the Accurate Estimation of Top of the Line Internal Corrosion of Subsea Pipelines on the Basis of In-Line Inspection Data, Proceedings of IPC 2010, International Pipeline Conference, Paper No. IPC2010-31038, Calgary, September.
17. Health and Safety Executive, 2001, Reducing Risks, Protecting People: HSE's Decision-Making Process, Norwich, UK.
18. CCPS, 2007, Guidelines for Risk-Based Process Safety, Center for Chemical Process Safety, an AIChE Industry Technology Alliance, Wiley.
19. Kariyawasam, S. and Huyse, L., 2011, Quantitative Risk and Reliability Methods in Pipeline Integrity Maintenance, Joint Technical Meeting on Pipeline Research, San Francisco, CA, USA, May.
20. Der Kiureghian, A., 1989, Measures of Structural Safety Under Imperfect States of Knowledge, ASCE Journal of Structural Engineering, 115(5), 1119-1140.
