
and use of reliability techniques

Status (P)

BRIME

PL97-2220

Project

Coordinator:

Partners:

Centro de Estudios y Experimentacion de Obras Publicas (CEDEX)

Laboratoire Central des Ponts et Chaussées (LCPC)

Norwegian Public Roads Administration (NPRA)

Slovenian National Building and Civil Engineering Institute (ZAG)

Date:

PROJECT FUNDED BY THE EUROPEAN

COMMISSION UNDER THE TRANSPORT

RTD PROGRAM OF THE

4th FRAMEWORK PROGRAM

by

P. Haardt, R. Kaschner, Bundesanstalt für Strassenwesen (BASt)

S. Fjeldheim, Norwegian Public Roads Administration (NPRA)

Deliverable D6

PL97-2220

CONTENTS

Page

Executive Summary

SCOPE ............................................................................................................................................................... 1

SUMMARY ....................................................................................................................................................... 1

IMPLEMENTATION ........................................................................................................................................ 1

ABSTRACT....................................................................................................................................................... 1

1.

INTRODUCTION ................................................................................................. 1

1.1.1. Studies on samples .................................................................................................................... 2

1.1.2. In-situ examination................................................................................................................................... 3

1.2. STRUCTURAL BEHAVIOUR .................................................................................................................... 7

1.2.1. Global displacement measurements........................................................................................ 7

1.2.1.1. Topographic checking ..................................................................................................................... 7

1.2.1.2. Deformation measurements under loading....................................................................................... 7

1.2.2. Force measurements................................................................................................................................. 8

1.2.2.1. Measure of the support reaction ....................................................................................................... 8

1.2.2.2. Other direct measurements ............................................................................................................... 9

1.2.3. Geometrical study of the cracks: crack mappings.................................................................................. 10

1.2.4. Local measurements (strains, crack lengths).......................................................................................... 11

2.1. The need to deal with uncertainties in structural safety........................................................................... 11

2.2. Definition-hypotheses .................................................................................................................................. 15

2.2.1. Probabilistic description of strengths and loads ..................................................................................... 16

2.2.2. Mathematical and numerical techniques ................................................................................................ 16

2.2.2.1. Hypotheses ..................................................................................................................................... 16

2.2.2.2. Component reliability..................................................................................................................... 16

2.2.2.3. Reliability assessment..................................................................................................................... 17

2.2.2.4. Rosenblatt transform ...................................................................................................................... 21

2.2.2.5. Algorithm for calculating the reliability index ............................................................................... 23

2.2.2.6. Sensitivity factors ........................................................................................................................... 24

2.3. Modelling of structural components .......................................................................................................... 25

2.4. System reliability.......................................................................................................................................... 26

2.4.1. Some definitions..................................................................................................................................... 27

2.4.2. Formal system representation................................................................................................................. 28

2.4.3. Example................................................................................................................................... 28

2.4.4. Calculation of the failure probabilities of systems ................................................................................. 30

2.4.4.1. Series systems................................................................................................................................. 30

2.4.4.2. Parallel systems .............................................................................................................................. 32

2.4.4.3. Example of a series system............................................................................................................. 33

2.5. Event margins............................................................................................................................................... 35

2.5.1. Reliability updating with quantitative information ................................................................................ 36

2.5.2. Reliability updating with qualitative information .................................................................................. 36

2.6. Conventional probability of failure ............................................................................................................ 36

2.6.2. The problem of the minimum safety definition...................................................................................... 37

2.6.3. Life-safety criterion................................................................................................................................ 37

2.6.4. Calibration.............................................................................................................................................. 38

2.6.5. Adjustments............................................................................................................................................ 38

3. APPLICATION ..................................................................................................... 39

3.1. Time-dependent losses .................................................................................................................. 40

3.1.1. Losses due to concrete shrinkage ........................................................................................................... 40

3.1.2 Losses due to concrete creep................................................................................................................... 40

3.1.3. Losses due to steel relaxation................................................................................................................. 41

3.1.4. Determination of the concrete strength .................................................................................................. 41

3.1.5. Probabilistic models ............................................................................................................................... 41

3.2. Reliability of prestressed sections ............................................................................................................... 42

3.3. The Vauban bridge ...................................................................................................................................... 43

3.4. Reliability updating ..................................................................................................................................... 45

3.4.1. The measurement techniques ................................................................................................................. 45

3.4.2. Updating................................................................................................................................................. 45

5. REFERENCES ..................................................................................................... 49

EXECUTIVE SUMMARY

SCOPE

Europe has a large capital investment in the road network including bridges, which are the

most vulnerable element. The network contains older bridges, built when traffic loading was

lighter and before modern design standards were established. In some cases, therefore, their

carrying capacity may be uncertain. Furthermore, as bridges grow older, deterioration caused

by heavy traffic and an aggressive environment becomes increasingly significant resulting in

a higher frequency of repairs and possibly a reduced load carrying capacity.

The purpose of the BRIME project is to develop a framework for the management of bridges

on the European road network. This would enable bridges to be maintained at minimum

overall cost, taking all factors into account including condition of the structure, load carrying

capacity, rate of deterioration, effect on traffic, life of the repair and the residual life of the

structure.

The objective of WP 2: Assessing the load carrying capacity of existing bridges is to derive

general guidelines for structural assessment. For this purpose, this report describes some of

the most used experimental methods in bridge assessment. It also introduces the reliability

theory concepts which can constitute an interesting approach for bridge assessment. An

example (assessment of a prestressed concrete beam at the Serviceability Limit State)

highlights the different concepts: the computation of system and component probabilities of

failure as well as the use of results from experimental assessment for updating these

probabilities. Finally, the problem of load testing is also introduced and expressed through the

concepts of the reliability theory.

SUMMARY

This report describes different experimental assessment methods and the concepts from

reliability theory.

As a first step (section 1), the report presents general information about experimental

assessment techniques. The main objective of experimental assessment is to provide

information about the state of a structure. This information can be valuable for updating knowledge about structural safety.

Section 2 introduces the concepts of structural reliability theory. The basic features of this theory are presented, highlighting its advantages and disadvantages compared to other approaches dealing with structural safety (partial safety factors, allowable stress design). A full example, given in section 3, introduces all these concepts; it concerns the reliability assessment of prestressed concrete beams. Details regarding the computation of the probabilities of failure for systems and components are given, as well as the manner in which experimental assessment results are used for updating these probabilities. Section 4 presents some possibilities for applying reliability theory to proof-load testing.

IMPLEMENTATION

This report forms the basis for a subsequent discussion and evaluation of bridge assessment

procedures which will ultimately lead to the development of proposals and guidelines.

Materials presented in this report are linked to proposals made in deliverable D1 Review of


current procedures for assessing load carrying capacity and require information from

deliverable D5: Development of models (traffic and material strength), especially when

using reliability theory concepts. This report contributes to the general conclusions presented

in D10: Guidelines for assessing load carrying capacity.

The report also plays a fundamental part in defining the approach adopted in Workpackage 3:

Modelling of deteriorated structures and its deliverable D11: Assessment of deteriorated

bridges. In addition, structural safety is a significant parameter for priority ranking, as

examined in Workpackage 6: Priority ranking and prioritisation and the decision-making

process which is being studied through Workpackage 5: Decision: repair, strengthening,

replacement. All of these components are fundamental to the development of an effective

bridge management system which will be developed in Workpackage 7: Systems for bridge

management.


ABSTRACT

This report describes different experimental assessment methods and the concepts from

reliability theory.

Section 1 provides general information about experimental assessment techniques. Section 2 introduces the concepts of structural reliability theory. The basic features of this theory are presented, highlighting its advantages and disadvantages compared to other approaches dealing with structural safety (partial safety factors, allowable stress design). A full example, given in section 3, introduces all these concepts; it concerns the reliability assessment of prestressed concrete beams. Details are also given regarding the introduction of target reliability indexes for structural assessment. Section 4 presents some possibilities for applying reliability theory to proof-load testing.

1. INTRODUCTION

Experimental assessment usually starts after the detailed inspection and the interpretation of the observations, which are primarily visual. There is no general method for experimental assessment applicable to all bridges, nor even to a family of bridges. The causes, and therefore the methods to be used, differ according to the nature of the disorders.

The establishment of an experimental assessment programme thus comes after a very detailed

examination of the disorders noted at the time of the preliminary visit. In practice, it is necessary first to have an idea of the possible causes of the disorders; these hypotheses directly guide the experimental assessment.

The general objectives of an experimental assessment are of two kinds:

to assess the quality of materials in place;

to analyse the real structural behaviour.

These two objectives can be used to distinguish the techniques and the elementary means used in

the experimental assessment; nevertheless, it should be stressed that, generally, the two

objectives coexist in the same investigation campaign. It can indeed happen that a material

defect has a direct incidence on the structural behaviour; conversely, poor structural behaviour can be the consequence of a deterioration, at least partial, of some constituent materials.

The means of assessing the state of materials include:


tests on materials in-situ, either visual, or by more refined and more powerful methods

(radiography, electromagnetic methods, electrochemical methods, etc).

The techniques for assessing the structural behaviour are varied, and it is often necessary to combine several of them in the same assessment. One can distinguish:

topographic or geometrical measurements (strain or displacement under loading);

direct force measurements;

local measurements (measurement of local deformation, strain measurement, etc).

This section briefly presents the various means for performing an experimental assessment.

Further details (as well as a set of references) can be found in /1/, /25/, /26/, /27/.

1.1. MATERIALS

1.1.1.Studies on samples

The studies carried out on samples have a double objective: identification of the materials and evaluation of their properties. Let us recall that, for the identification of the materials, consultation of documents which should, in theory, be found in the construction records can be as important as tests on samples.

Taking a sample (coring) from a structure has the major disadvantage of being partially destructive. Consequently, one seeks to extract the smallest possible samples, in limited number, and at the least vital places on the structure. A second disadvantage results: the information may not be representative of the whole structure.

Generally, these samples are used as calibration references or points of comparison, to complement information from non-destructive tests carried out on the bridge (figure 1.1).

(photo LCPC)


The traditional tests (compression, traction, etc.) are usually carried out on test samples whose form and dimensions can differ notably from those of the standardised test samples. The interpretation of the results is sensitive to all the observations that can be made from the extraction of the test samples (coring) until the end of the tests. Tests can have objectives other than the estimation of a tensile or compressive strength. For example, detailed examination of the stress/strain diagram and of the fracture topography of a metal sample can give interesting information on the nature of the material employed during construction.

Other tests mainly measure physical properties such as density, porosity, water content, etc. Methods for chemical and physico-chemical analysis have also been developed; they have the advantage of requiring only small samples. In addition, the nature of the information provided makes the local character of the sample less awkward than for the measurements previously mentioned. Chemical and physico-chemical studies can be expensive. The type of test to be carried out will depend on the objectives of the experimental assessment campaign; it is thus necessary to define the required objective carefully beforehand. This is the case, for instance, for the mineralogical analysis of a hardened concrete.

Metallographic analyses of metals are well established in metallurgy. Combined with a determination of the elementary components by chemical analysis, they make it possible to determine very completely the nature of the metal, and consequently its properties.

1.1.2. In-situ examination

The majority of the techniques for in-situ material assessment extrapolate the results obtained on

samples. Indeed, at the present time, no non-destructive method able to give sufficiently reliable results exists.


For cables, various assessment and monitoring methods have been developed to detect defects able to cause the failure of a cable, such as corrosion and rupture of elementary wires (eddy (Foucault) currents, acoustic monitoring, etc.).

For concrete, ultrasonic testing consists in measuring the velocity v of an ultrasonic wave in the material. More exactly, one measures a travel time t between a transmitter and a receiver separated by a known distance (figure 1.2). The longitudinal or compressive wave velocity is easily obtained (it is the wave that arrives first at the receiver). By using suitable, correctly oriented receivers, it is possible to measure the transverse velocity as well. The mass density being measured on samples, the two velocities in principle allow the Young modulus E and the Poisson coefficient ν to be calculated. Unfortunately, concrete is far from being a homogeneous, linear and isotropic material! It is a micro-cracked material which contains water and whose mechanical characteristics are oriented according to the casting direction. These properties strongly disturb wave propagation in the concrete and prevent a good estimation of E and ν. It should be noticed that the value of E determined in this way is appreciably higher than the value obtained by compressive tests (the variation can reach 40 %).
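Under the idealised assumption that the medium is homogeneous, isotropic and linearly elastic (which, as just noted, concrete only approximates), E and ν follow directly from the two velocities and the density: nu = (vp^2 - 2 vs^2) / (2 (vp^2 - vs^2)) and E = 2 rho vs^2 (1 + nu). A minimal sketch, with illustrative input values only:

```python
def elastic_constants(v_p, v_s, rho):
    """Estimate Young's modulus E [Pa] and Poisson's ratio nu from the
    longitudinal velocity v_p [m/s], the transverse velocity v_s [m/s]
    and the mass density rho [kg/m^3], assuming a homogeneous,
    isotropic, linearly elastic medium."""
    nu = (v_p**2 - 2.0 * v_s**2) / (2.0 * (v_p**2 - v_s**2))
    E = 2.0 * rho * v_s**2 * (1.0 + nu)  # E = 2*G*(1 + nu), with G = rho*v_s^2
    return E, nu

# Illustrative values: v_p ~ 4000 m/s as quoted in the text; v_s and rho
# are assumed for the example.
E, nu = elastic_constants(v_p=4000.0, v_s=2400.0, rho=2400.0)
print(f"E = {E / 1e9:.1f} GPa, nu = {nu:.3f}")
```

As the text stresses, values obtained this way on real concrete are only indicative; the dynamically determined E can exceed the static value by up to 40 %.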


Figure 1.2. Principle of ultrasonic testing: a transmitter and a receiver on either side of the concrete wall.

In the case of non-destructive testing of concrete, these considerations thus have only little practical interest. The method is therefore more useful for appreciating the homogeneity of a concrete, locating and appreciating the importance of a defect, or giving in certain cases an estimate of the strength of the concrete, in correlation with a calibration on samples.

For a typical concrete, v values are about 4 000 m/s. A measurement length is 1 to 2 m, with a step of 10 or 20 cm. The measurement of the travel time is thus beyond the capability of traditional clocks. Until about ten years ago this time measurement was done by means of an oscilloscope (using the delayed time-base sweep), the received signal being visualised on the screen of the apparatus: the accuracy was better than a microsecond. Today, compact and autonomous apparatuses are used. The time measurement is performed automatically by an electronic counter; the result appears directly in numerical form.

The number of measurement points depends on the problem; in general, one measurement line per m² gives an order of magnitude. One interpretation of the results consists in plotting iso-velocity curves: the variations in the quality of the concrete can thereby be visualised at a general level. A crack shows up as a discontinuity on the graph.
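As a sketch of this interpretation step, the following converts travel times measured along one line into velocities and flags any point whose velocity drops sharply, the kind of discontinuity that reveals a crack on an iso-velocity plot. The values and the 15 % threshold are assumptions for the example:

```python
def velocities(travel_times_us, distance_m):
    """Convert travel times [microseconds] measured over a known
    transmitter-receiver distance [m] into pulse velocities [m/s]."""
    return [distance_m / (t * 1e-6) for t in travel_times_us]

def flag_discontinuities(v, drop=0.15):
    """Flag points whose velocity falls more than `drop` (15 % by
    default, an assumed threshold) below the running maximum -- a crude
    stand-in for the discontinuities seen on an iso-velocity plot."""
    flags, v_max = [], v[0]
    for vi in v:
        v_max = max(v_max, vi)
        flags.append(vi < (1.0 - drop) * v_max)
    return flags

# One measurement line with a 1 m path (values assumed); the third
# reading is markedly slower, as a crack would produce.
v = velocities([250.0, 252.0, 330.0, 251.0, 249.0], 1.0)
print(flag_discontinuities(v))  # -> [False, False, True, False, False]
```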

Industrial radiography has also long been used for the control of welded joints in steel construction. Its application to prestressed concrete is much more recent; its development goes back to 1970. The principle is relatively simple: a gamma or X-ray source is placed on one side of the wall to be studied and the radiation, after crossing the wall, exposes a photographic film. The film is exposed differently according to the received intensity. The presence of a body with a higher density than the concrete (a cable, for example) shows up as a clearer trace; the presence of a void (lack of grout, for example) causes a more pronounced blackening of the film (figure 1.3).

Figure 1.3. Principle of radiographic testing: the emission source irradiates the concrete; on the film, a steel bar appears as a clear trace and a void as a dark one.

The objective is thus to see inside the concrete and, of course, to obtain good-quality photographs. The choice of source, film, filters, screens, source-film distance and exposure time depends on the problem, the concrete thickness being the principal parameter in that choice. The exposure time is inversely proportional to the activity of the source, proportional to the square of the source-film distance, and grows exponentially with the thickness of concrete crossed.
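These proportionality rules can be turned into a simple scaling formula for planning an exposure. The attenuation coefficient mu and all the reference values below are assumed parameters for illustration, not values from the report:

```python
import math

def exposure_time(t_ref, d_ref, d, A_ref, A, mu, x_ref, x):
    """Scale a reference exposure time t_ref according to the rules in
    the text: proportional to the square of the source-film distance d,
    inversely proportional to the source activity A, and growing
    exponentially with the concrete thickness x. The attenuation
    coefficient mu [1/m] is an assumed parameter."""
    return t_ref * (d / d_ref) ** 2 * (A_ref / A) * math.exp(mu * (x - x_ref))

# Illustrative: going from 30 cm to 40 cm of concrete and moving the
# source from 1.0 m to 1.5 m, with mu assumed at 13 per metre.
t = exposure_time(t_ref=10.0, d_ref=1.0, d=1.5, A_ref=1.0, A=1.0,
                  mu=13.0, x_ref=0.30, x=0.40)
print(f"required exposure: {t:.1f} (same unit as t_ref)")
```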

In the Seventies, the most common use of radiography on concrete was the control of grout injection during construction. In the Eighties and until now, its use has been essential in the study of deteriorated structures. One can even say that, in the case of prestressed concrete bridges with disorders, one of the very first investigations after the detailed visit consists in performing a radiographic analysis. Indeed, even if there is no suspicion about the integrity of the cables, it is wise to make sure of the injection quality: a crack offers an opportunity for water to penetrate, and the structural behaviour will differ according to whether a cable is well or badly injected (figure 1.4).

Figure 1.4. Some results obtained by radiographic testing: crack visualisation, poor injection, cable stress release. (Photo LCPC)


Another method, based on the measurement of the electrochemical potential, makes it possible to locate zones of steel corrosion near the concrete surface. The measurement is made by moving a reference electrode over the surface of the concrete along the layout of the reinforcement, the electrode being electrically connected to the reinforcement. A certain number of conditions have to be respected for the measurements to be valid: the reinforcement must be electrically continuous, the concrete of the cover zone must have a sufficient water content to ensure a minimal conductivity, and there should be no paint on the surface of the concrete, which could act as an electrical insulator (figure 1.5).

Measurements of electrode potential can be performed point by point or continuously; it is thus possible to draw isopotential maps, whose interpretation relies on the definition of three classes separated by threshold values such as those provided by ASTM C 876-80:

Class S: -200 mV < E

Class M: -350 mV < E < -200 mV (corrosion is possible)

Class R: E < -350 mV (corrosion is probable)

The most negative values indicate the presence of zones where the steel is corroding. It should nevertheless be pointed out that the application of this method alone is not sufficient to establish the state of corrosion of the reinforcement; it only indicates the probability of the presence of corrosion.
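The threshold classification quoted above can be sketched as a small helper. How to classify a reading that falls exactly on a threshold is an assumption, since the report does not specify it:

```python
def corrosion_class(E_mV):
    """Classify a half-cell potential reading E [mV] into the three
    classes quoted from ASTM C 876-80. The handling of readings exactly
    on a threshold is an assumption."""
    if E_mV > -200.0:
        return "S"  # class S: no corrosion indicated by this reading
    if E_mV >= -350.0:
        return "M"  # class M: corrosion is possible
    return "R"      # class R: corrosion is probable

print(corrosion_class(-300.0))  # -> M
```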

Figure 1.5. Principle of the potential measurement: a voltmeter connects the reference electrode to the steel bars.


The depth of carbonation is checked with the aid of an indicator, phenolphthalein, after notches of various depths have been cut in the concrete. The reagent colours pink the uncarbonated parts of the concrete, i.e. those for which the pH is higher than 9. This standardised test is very easy to use.

The surface permeability of the concrete can be assessed by placing a bell-shaped apparatus against the concrete surface, creating a vacuum inside the bell, and measuring the time taken to return to atmospheric pressure. This time is a function of the permeability characteristics of the substrate.

1.2. STRUCTURAL BEHAVIOUR

1.2.1. Global displacement measurements

Topography and unloaded levelling can give information about the general state of the bridge, while the measurement of its deformations under loading informs about its structural behaviour.

1.2.1.1. Topographic checking

Upstream of the experimental assessment itself, it is necessary to check the general geometry of the bridge, and in particular its levelling. This is especially useful if a point-zero levelling was made.

When disorders occur in the foundations, they often show up as support displacements. Bridge inspection must therefore include routine checking of the stability of the bearings (by topographic survey). In the case of an important bridge, the best solution consists in equipping each support with targets. This allows, in addition to periodic inspection, a check to be carried out easily whenever disorders raise suspicion of an instability of the foundations.

The same provisions can be made for the deck. Permanent deformations, in particular in

levelling, can indeed be the apparent sign of major disorders.

1.2.1.2. Deformation measurements under loading

The global structural behaviour under known loading can, in certain cases, give valuable

information. Bridge deflections are generally measured, but other measurements can be

performed.

Deflection measurements of the deck under loading can be obligatory before putting a bridge into service; the test then serves as a reference for all later tests. In certain cases, loading tests can be made on old bridges to study the way in which they react. It should be noted that, if the bridge presents disorders liable to affect its bearing capacity, it may be advisable to proceed with progressive loading: a good method consists in increasing the loads in steps, stopping as soon as the structural response becomes abnormal.
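The stopping rule just described (increase the load in steps, stop when the response becomes abnormal) can be sketched as follows; the number of reference stages and the 10 % deviation threshold are assumptions for illustration, not values from the report:

```python
def progressive_load_check(loads, deflections, n_ref=3, tol=0.10):
    """Sketch of the stopping rule described in the text: fit the linear
    deck response on the first n_ref load stages, then stop as soon as
    a measured deflection deviates from the linear prediction by more
    than tol (10 % here; n_ref and tol are assumed). Returns the index
    of the last stage judged normal."""
    # Least-squares slope through the origin, from the reference stages.
    k = (sum(d * p for d, p in zip(deflections[:n_ref], loads[:n_ref]))
         / sum(p * p for p in loads[:n_ref]))
    for i in range(n_ref, len(loads)):
        predicted = k * loads[i]
        if abs(deflections[i] - predicted) > tol * predicted:
            return i - 1  # stop before the abnormal stage
    return len(loads) - 1

# Assumed record: the response stays linear up to stage 3, then softens.
print(progressive_load_check([100, 200, 300, 400, 500],
                             [1.0, 2.0, 3.0, 4.0, 6.0]))  # -> 3
```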

Deflection measurements are traditionally made at mid-span. A limitation is the precision of the measurements, which is generally of the order of a millimetre.

The structural behaviour of a bridge under test loads can be studied by measuring support rotations or cross-section rotations using clinometers, as well as the slope of piers or walls using a pendulum (figure 1.6). Even if deflection measurements are used more often than rotation measurements, the latter have the advantage of being more precise. A diversified range of apparatuses makes it possible to measure rotations, from the simple mechanical system with a sensitivity of 10^-4 rad to the clinometer supplied by the company TELEMAC, which reaches a sensitivity of 10^-8 rad. This type of inclinometer is extremely precise but fragile, and requires meticulous installation as well as a solid protective cover. Electric clinometers can reach a sensitivity of 10^-6 rad.

(Photo LCPC)

In certain cases, the structural behaviour can be studied dynamically; mainly accelerometers or seismographs are used. The latter can be employed to measure any component of a displacement, such as the dynamic component of the deflection or horizontal displacements of the pier heads. The data provided by accelerometers require a double integration of the signal to obtain displacements.
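The double integration required for accelerometer data can be sketched with the trapezoidal rule. In practice the signal must first be detrended and high-pass filtered, otherwise the integration drifts; this sketch omits that step:

```python
def double_integrate(acc, dt):
    """Twice integrate a uniformly sampled acceleration signal [m/s^2]
    with time step dt [s] (trapezoidal rule) to recover displacement.
    Assumes zero initial velocity and displacement; real records need
    detrending/filtering first to control drift."""
    vel, disp = [0.0], [0.0]
    for i in range(1, len(acc)):
        vel.append(vel[-1] + 0.5 * (acc[i - 1] + acc[i]) * dt)
        disp.append(disp[-1] + 0.5 * (vel[-2] + vel[-1]) * dt)
    return disp

# Check against constant acceleration a = 2 m/s^2: after 1 s the exact
# displacement is a*t^2/2 = 1.0 m.
print(double_integrate([2.0] * 11, 0.1)[-1])
```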

1.2.2. Force measurements

1.2.2.1. Measure of the support reaction

The main objective is to measure the time evolution of the load distribution, either for information on the phenomenon or for assessment purposes. A very important phenomenon to take into account is the effect of thermal gradients. To give an idea, this redistribution can create, in the central span, a moment of about half the bending moment due to the traffic load. Like the thermal gradient itself, it leads to variations in the support reactions which can easily reach 20 % of the total capacity within a single day.


The measurement is performed by a series of jacks used to raise the deck while measuring the necessary force. In addition, dial gauges measure the vertical displacement of the deck with precision (figure 1.7). If the necessary force is plotted against the displacement, one obtains a curve whose first part corresponds to the release of the bearings. The second part, which is linear, represents the bending of the deck, and its slope gives its stiffness; the value corresponding to zero displacement is the support reaction. This method has proved very useful for the diagnosis of deteriorated bridges, and it has also helped to clarify the importance of thermal gradients.
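Extracting the stiffness and support reaction from the linear branch of the jack record amounts to fitting a straight line and reading off its slope and intercept. A minimal sketch, assuming the data have already been restricted to the linear branch:

```python
def support_reaction(u, F):
    """Least-squares line F = k*u + F0 through the linear branch of the
    jack record: the slope k estimates the deck stiffness and the
    intercept F0 (force at zero displacement) the support reaction.
    The (u, F) pairs are assumed already restricted to the linear branch."""
    n = len(u)
    mu_, mF = sum(u) / n, sum(F) / n
    k = (sum((ui - mu_) * (Fi - mF) for ui, Fi in zip(u, F))
         / sum((ui - mu_) ** 2 for ui in u))
    return mF - k * mu_, k

# Assumed record (displacement in mm, force in kN):
F0, k = support_reaction([1.0, 2.0, 3.0], [110.0, 120.0, 130.0])
print(F0, k)  # -> 100.0 10.0
```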

1.2.2.2. Other direct measurements

A simple method for measuring the tension in a cable was developed from the theory of vibrating strings. This method is used for suspension cables and stay cables, but it can also be applied to determine the tension of external prestressing cables.
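For an ideal taut string with no bending stiffness, the tension follows from the fundamental transverse frequency f1 as T = 4 * m * L^2 * f1^2 (m the mass per unit length, L the free length). A minimal sketch with assumed cable values; real stay cables need corrections for bending stiffness and end conditions, which are omitted here:

```python
def cable_tension(f1, L, m_lin):
    """Tension T [N] of an ideal vibrating string from its fundamental
    frequency f1 [Hz], free length L [m] and linear mass m_lin [kg/m]:
    f1 = (1 / (2*L)) * sqrt(T / m_lin), hence T = 4 * m_lin * L^2 * f1^2."""
    return 4.0 * m_lin * L**2 * f1**2

# Assumed stay-cable values: L = 100 m, m_lin = 60 kg/m, f1 = 0.35 Hz.
T = cable_tension(0.35, 100.0, 60.0)
print(f"T = {T / 1e3:.0f} kN")  # -> T = 294 kN
```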

Measurement of concrete stresses has become a major tool in assessing the residual level of prestress in post-tensioned concrete bridges. Another method, called the crossbow method, was developed to measure local forces in wires or strands of internal prestressing cables. The crossbow method makes it possible to measure the residual stress in cables, on the basis of the principle that the force necessary to deviate a cable is related to its tension. In practice, disturbing effects make calibration tests necessary; the precision is approximately 3 %. The direct measurement of stresses is indeed of major utility, not only for establishing the diagnosis, but especially for defining a repair project. With regard to concrete, one such method is the stress release method, whose principle was initially developed in rock mechanics. The method makes it possible to measure the normal stress in the concrete directly. It consists in carrying out a local and partial release of the stresses by creating a notch, followed by a compensation of pressure applied using an extra-flat jack introduced into the notch.


The stress release method provides a global picture of the state of stress in the bridge. The technique has an accuracy of about 0.5 N/mm² in the laboratory and 1 N/mm² for site testing.

The gauge arrangement for a 75mm core comprises a central array of four 50mm Demec gauges

to measure stress releases on the core, an array of four 100mm Demec gauges across the hole to

measure the distortion of the hole and an array of eight 64mm vibrating wire gauges to measure


the release of stresses around the hole. The pattern can accommodate, to some extent, lack of concentricity in drilling and changes in material properties over a larger area. Concrete should be

cored on the side of the section with the highest prestress component to improve the accuracy in

the back-calculation of the prestressing forces. At least three measurements are required at

representative positions. Areas with stress concentration or high stress gradient should be

avoided. For example, a reasonable distance should be kept from the corners of box girders,

transverse diaphragms and anchorage areas. Instrumentation should be carried out after the

surface of concrete is cleaned and a covermeter survey performed to avoid the steel bars. As

locked-in stresses due to differential shrinkage and temperature, and temperature restraints are

also released, tests should be carried out when these effects are at a minimum. Very cold or

warm weather would affect the results significantly. It should be remembered that the most

reliable piece of information is the difference between the magnitude of the principal stresses.

An in-situ jacking system has been developed so that, by reloading the drilled holes, the in-situ stress and elastic modulus can be calculated. With the 75 mm core, avoiding reinforcement is not difficult, and cutting small-diameter bars does not affect the results, owing to the relative size of the steel and the core. However, coring close to large-diameter bars affects the jacking test, although it has less influence on the release strains.

1.2.3. Geometrical study of the cracks: crack mappings.

In a concrete bridge, a detailed record of the cracks, as well as their evolution in time, constitutes a very important element of the diagnosis. Cracking in concrete is indeed the external manifestation of the structural behaviour. A crack is in particular the witness of the existence, at a certain time, of tensile stresses in the concrete: the fact that the crack exists indicates that the tensile stress reached, at some time, the tensile strength of the concrete. Moreover, in a stress field, cracks occur perpendicular to the direction of the principal tensile stress. This information can also be very useful. A crack map is thus very valuable for a diagnosis, provided it is suitably drawn up.


Deflection measurements, support reaction measurements and crack mapping are procedures which provide information on the overall state of a structure. They generally do not make it possible to analyse the detailed behaviour of the bridge at a given point, and must be complemented by more specific measurements.

Strain gauges are generally used to measure local deformations under the effect of various actions; crack opening measurements assess the relative displacements, under the effect of external actions, of the two lips of a crack at the surface of a section.

Strain measurement is generally used as a means of evaluating the variation of the stress field at a point. In linear elasticity, stresses and strains are related by the Lamé equations. Only variations of strain relative to an initial state can be measured (on a bridge, this initial state is in general the state of stress under dead loads). In steel construction, a sensitivity of 10 MPa is sufficient; for reinforced or prestressed concrete, the reference is rather a tenth of an MPa.

Measuring the relative displacement of the lips of a crack is often useful, in particular for estimating the overstresses, under a given external action, in the (passive or prestressing) steel crossing the crack. A sensitivity of 0.01 mm is sufficient; for cracks in prestressed concrete, 0.001 mm is sometimes required.

2.1. THE NEED TO DEAL WITH UNCERTAINTIES IN STRUCTURAL

SAFETY

For many centuries, the builder was left to his own intuition, to his professional ability, to his

experience and to that of his predecessors (the limits often being determined by the observed

accidents or collapses) for designing structures.

Such empiricism however did not allow the design of new structures with new materials. The

emergence of the science of building, with the mechanics of structures and the strength of

materials, occurred only much later and very gradually. The disappearance of empiricism to the

benefit of engineering sciences was largely served by the development of steel construction.

However, even at that stage, the concept of "structural safety" was not yet mentioned in the

technical literature and the use of reduction factors applied to strength appeared to be the true

expression of safety. The adopted safety principle consisted in verifying that the maximum

stresses calculated in any section of any part of a structure, and under worst case loading,

remained lower than a so-called allowable stress. This stress was derived from the failure stress R_f of the material by dividing the latter by a conventionally set safety factor K. The structural assessment aimed to verify:


S ≤ R_allowable = R_f / K    (2.1)

The design method based on the principle of allowable stresses was used in the first part of this

century without the definition of these allowable stresses really being considered. Their values

were set arbitrarily on the basis of the mechanical properties of the materials used.

Improvements in the production of steel, as well as in the design and construction of structures, led to the raising of the allowable design stresses by lowering the safety factors K.

Attempts to improve the design rules based upon the allowable stress principle to obtain a better

definition of loads and strengths revealed the scattered nature of the data and of the results. The

need to use tools dealing with these variabilities became obvious.

Furthermore, failure stresses were not necessarily the most appropriate quantities. They are for a material with brittle behaviour like cast iron, but not for ductile materials like mild steel or aluminium, for which the resistance limit is associated with very large deformations that are unacceptable for a structure. The elastic limit is a characteristic almost as important as the failure stress. Equation (2.1) also takes no account of adaptive phenomena such as plasticity or creep, nor of the diversity of loads which can be applied to structures. In fact, two problems were identified in using the allowable stress principle for assessing structural safety:

- to replace the criterion of allowable stresses by other criteria such as limit states,
- to rationalise the way safety is introduced.

For this reason, many engineers have tried to approach the problem from a different point of view by defining safety by means of a probability threshold. Under the stimulus of some

engineers and scientists, the concept of probabilistic safety of structures was born. However, it

was not until the Sixties and Seventies that mathematical tools were developed for studying the

reliability of structures.

In a probabilistic approach, the stress S applied to a structural element, and the variable

characteristic of the strength R of this element, are randomly described because their values are

not perfectly known. The verification of the criterion related to the limit state is expressed by the inequality:

S ≤ R    (2.2)

failure of the component corresponding to this limit state being exceeded. The probability Pf of the event S ≥ R characterises the reliability level of the component with regard to the considered limit state:

Pf = Prob(R ≤ S)    (2.3)


The semi-probabilistic approach used in many design codes schematically replaces this probability calculation by the verification of a criterion involving characteristic values of R and S, noted R_d and S_d, and partial safety factors γ_R and γ_S, which may be represented in the following form:

γ_S S_d ≤ R_d / γ_R    (2.4)

This format relies on statistics and probability in the evaluation of the input data, the formulation of assessment criteria, and the determination of load and resistance factors. However, from the designer's point of view, the application of the partial safety approach in specifications is still deterministic. The

partial factors approach does not provide relationships or methods that would allow the designer

to assess the actual risk or reserves in carrying capacity of structural members resulting from the

semi-probabilistic procedure.

Partial safety factors are designed to cover a large number of uncertainties and may thus not be highly representative of the real needs when evaluating the safety of a particular structure. For exceptional or damaged structures, the reliability may be overestimated or underestimated.

Figure 2.1 shows a schematic of the three reliability approaches: the allowable stress approach, the partial safety format (or semi-probabilistic approach) and the probabilistic approach. In that figure, the limit state is represented by a line R = S. In the allowable stress and semi-probabilistic approaches, the reliability assessment procedure is based first of all on the definition of the so-called design point and the corresponding characteristic values of the load effect and of the resistance. The assessment procedure contains three parts:

- determination of the load effect S_d, representing the resulting combination of individual load effects,
- determination of the characteristic value of the resistance R_d,
- performance of the reliability check expressed by the condition S_d ≤ R_d.

In a deterministic approach such as the allowable stress principle, S_d = S_m is the mean load effect and R_d = R_allowable is the mean resistance R_m reduced by the safety factor K. In the semi-probabilistic approach, the position of the design point leads to the characteristic values S_d = γ_S S_m and R_d = R_m / γ_R. In a probabilistic approach, the determination of the design point is completely bypassed and the probability of failure is obtained by analysis of the safety function M = R − S. The reliability check is expressed by the condition:


Pf ≤ P_conventional    (2.5)

where P_conventional is the conventional probability of failure which must not be exceeded. We shall come back to that specific point later.


Figure 2.1. Reliability assessment approaches:

allowable stress, semi-probabilistic, probabilistic.

The introduction of uncertainties appears necessary for rationalising the evaluation of safety, for various reasons:

- the evolution of loads with time is often not handled,
- the properties of materials are liable to evolve in an unfavourable direction, for example through corrosion, loss of durability or fatigue,
- the combination of multi-component load effects (such as the combination of axial and bending effects) is poorly handled,
- real elements are often different from the specimens on which their performance was measured,
- studies on sensitivity to errors in modelling the behaviour of structures are generally omitted,
- poor workmanship is unfortunately statistically inevitable,
- construction requirements discovered while the works are being carried out may lead to alternative solutions which bring about an overall behaviour of the structure slightly different from the one provided for in the design.

A method taking the uncertainties in the variables into account appears to be a realistic safety assessment criterion. Probabilistic methods therefore constitute today an alternative to semi-probabilistic approaches. They are based on:

- identifying all the variables influencing the expression of the limit state criterion,
- studying statistically the variability of each of these variables, often considered to be stochastically independent,
- calculating the probability that the limit state criterion is not satisfied,
- comparing the probability obtained to a previously accepted limit probability.

These methods are generally grouped under the name of reliability theory. Although extremely attractive, probabilistic reliability theory is limited by many factors:

- some data are difficult to measure,
- required statistical data often do not exist,
- probability calculations quickly become insurmountable.

These considerations are decisive in determining what may be expected from probabilistic methods and where their limits lie. They imply in particular that the probabilities are only estimates of frequencies (sometimes not observable) based upon an evolving set of partial data. They also rest on hypotheses (the choice of the type of distribution, for example) which make them conventional. Consequently, the outcome of a probabilistic approach depends strongly on the assumptions made about the uncertainties associated with the variables. If these assumptions are not founded on adequate data, estimates of safety will be misleading. Indeed, probabilistic methods are often abused when variables are not carefully modelled. It is therefore essential that the quality of the data and the validity of the assumptions are borne in mind when using a probabilistic approach to make decisions about the apparent safety of a structure. This can be ensured by standardising the approach and by specifying how data are to be used with it (see deliverable D1 /2/).

The widely used methods of bridge assessments are sometimes considered to be unduly

conservative. New more sophisticated methods are therefore needed, and the reliability theory

has so far been used only in a limited manner although the potential benefits are considerable. It

has remained a method for experts and most of the applications, at least for bridges, have been in

the context of design code calibration. However the method is being used increasingly for the

assessment of existing bridges, particularly for investigating optimal maintenance strategies.

Many bridge engineers and managers have heard of the reliability theory, but would be reluctant

to use it or recommend its use by consultants without knowing the advantages and drawbacks of

such application. The purpose of this report is to give an introduction to the potential of reliability theory.

2.2. DEFINITION-HYPOTHESES

The theory of structural reliability is defined as the set of mathematical and numerical techniques which, from a probabilistic description of the loads on and strengths of a structure, aims to estimate the probability that the structure leaves its regular use conditions, and to compare this probability with a conventional failure probability. Some concepts introduced by this definition need to be detailed.


The essential parameters which characterise the structural resistance or the applied loads cannot be defined solely in terms of characteristic values reduced by partial safety factors; they must be described as random variables characterised by their means and moments.

Choice of stochastic models for resistance variables such as yield strength and modulus of

elasticity can be based on information from a number of sources:

- experimental results/measurements: statistical methods can be used to fit probability density functions to such data (see below). One main problem with fitting probability density functions on the basis of experimental results is that usually most of the data are obtained in the central part of the density function, whereas the most interesting parts from a reliability point of view are the tails: for a resistance variable the lower tail, and for a load variable the upper tail.

- physical reasoning: in some cases, the physical origin of a quantity modelled as a stochastic variable makes it possible to identify which stochastic model should in theory be used. Three examples are described below, namely the normal, lognormal and Weibull distributions. When a stochastic model can be based on physical reasoning, the tail sensitivity problem mentioned above is avoided.

- subjective reasoning: in many cases there are not sufficient data to determine a reasonable distribution function for a stochastic variable, and it is not possible to identify the underlying distribution function on the basis of physical reasoning. In such situations, subjective reasoning may be the only way to select a distribution function. Especially for this type of stochastic modelling, it can be expected that code-based rules will in future be established prescribing which distribution types to use for the most common stochastic variables.

Note : The reader will find further information related to bridges in deliverable D5 /3/.

2.2.2.1. Hypotheses

In the theory of structural reliability, it is assumed that the structural behaviour and its state are

completely defined by the realisations of a finite number of random variables and by a finite

number of relations between them. These variables can be characteristic of the structure

(geometry, resistance) or of the applied loads. The relations between variables can describe

component failures or total structural failure.

2.2.2.2. Component reliability

Let us consider a cantilever beam with perfect elasto-plastic behaviour on which normal and

bending effects are applied (figure 2.2):


In this example, the maximum bending effect is located at the base of the beam. The beam being under axial compressive stress, the buckling risk is greatest at the mid-section. Consequently, two failure modes can occur: buckling or yielding.

This example shows that more than one risk of failure can arise in the same structural element. In structural reliability, a component will therefore be defined by:

- a structural element which describes the geometry and the mechanical properties, i.e. the location of a physical phenomenon,
- a set of load and strength variables,
- a failure criterion which describes the physical phenomenon, together with a model linking the load and strength variables,
- a probabilistic description of all the variables.

A bridge is therefore a system composed of components.

2.2.2.3. Reliability assessment

Let us come back to the beam in figure 2.2 and let us consider the failure component

corresponding to the yielding criterion:

M_S ≤ M_R    (2.6)

When the condition expressed by equation (2.6) is fulfilled, the structural element is said to be in a safe state with respect to the yielding criterion. The equality condition

M_R = M_S    (2.7)

defines the limit state. When

M_R ≤ M_S    (2.8)

the regular use condition has passed beyond the limit state to become an unsafe use condition: this is the failure state. Consequently, the safety margin M = M_R − M_S distinguishes three states (figure 2.3):


- the safe state or safety domain expressed by equation (2.6): M > 0,
- the limit state expressed by equation (2.7): M = 0,
- the unsafe state or failure domain expressed by equation (2.8): M ≤ 0.

The probability of failure in this simple case is therefore:

Pf = P(M ≤ 0) = P(M_R − M_S ≤ 0)    (2.9)

If the two variables are independent, then the joint probability density function is expressed by

the product of the individual density functions. That leads to:

Pf = ∫∫_{Df} f_{MR,MS}(r, s) dr ds = ∫∫_{Df} f_{MR}(r) f_{MS}(s) dr ds = ∫_{−∞}^{+∞} f_{MR}(r) [1 − F_{MS}(r)] dr    (2.10)
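For two independent normal variables, equation (2.10) can be checked numerically. The sketch below uses illustrative moment values (not taken from this report) and compares the closed-form result Pf = Φ(−(m_R − m_S)/√(σ_R² + σ_S²)) with a trapezoidal evaluation of the integral:

```python
from math import erf, exp, pi, sqrt

def phi_cdf(x):
    # standard normal cumulative distribution function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def pdf_normal(x, m, s):
    return exp(-0.5 * ((x - m) / s) ** 2) / (s * sqrt(2.0 * pi))

# Hypothetical moments of resistance M_R and load effect M_S
mR, sR = 350.0, 30.0
mS, sS = 250.0, 40.0

# Closed form for independent normal variables
pf_exact = phi_cdf(-(mR - mS) / sqrt(sR**2 + sS**2))

# Numerical evaluation of (2.10): Pf = ∫ f_MR(r) [1 - F_MS(r)] dr,
# with 1 - F_MS(r) = Phi(-(r - mS)/sS); trapezoidal rule over mR ± 8 sigma
lo, hi, n = mR - 8 * sR, mR + 8 * sR, 20000
h = (hi - lo) / n
pf_num = 0.0
for i in range(n + 1):
    r = lo + i * h
    w = 0.5 if i in (0, n) else 1.0
    pf_num += w * pdf_normal(r, mR, sR) * phi_cdf(-(r - mS) / sS)
pf_num *= h

print(pf_exact, pf_num)  # the two estimates agree closely
```

With these values the reliability index is 2.0 and Pf ≈ 2.3 %.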

Figure 2.3. Failure domain (g(Z) ≤ 0), safety domain (g(Z) > 0) and limit state (g(Z) = 0) in the space of the variables Z.

When more than two variables are considered and the safety margin is expressed by a non-linear function g of these variables, the probability of failure is:

Pf = P(g(Z) ≤ 0) = ∫_{Df} f_Z(z) dz    (2.11)


The evaluation of equation (2.11) is often a very difficult task, except for linear limit states and jointly normal variables. Probabilistic methods are divided into two families according to the approach used for calculating the probability of failure:

Exact -or level 3- methods, which evaluate the probability of failure directly, by numerical integration or by simulation computations /4/. Monte Carlo simulation is a straightforward and easy to

understand method but requires a powerful computer for the type of problems encountered in

structural engineering. With Monte Carlo simulation, the probability density function and the

associated statistical parameters of the safety margin are estimated approximately. Random

sampling (using a pseudo-random number generator available in most computers) is employed to

obtain an outcome of the random vector of basic variables. The safety margin is then evaluated for this set of

values to ascertain whether failure has occurred. This procedure is repeated many times, and the probability of failure is estimated as the number of trials leading to failure divided by the total number of trials. In this way the probability of failure is evaluated directly, without the need for specific algorithms.

The procedure outlined here is the so-called Direct or Crude Monte Carlo method which is not

likely to be of use in practical problems because of the large sample required in order to estimate

with an appropriate degree of confidence the failure probability. Note that the required sample

(and the corresponding number of trial evaluations) increases as the failure probability decreases.

Simple rules may be found, of the form N > C / Pf , where N is the required sample size and C is

a constant related to the confidence level and the type of function being evaluated. A typical

value for C might be 100 or greater. The objective of more advanced simulation methods is to

reduce the size of the sample required for failure probability estimation. Such methods can be

divided into two groups, namely indicator function methods (such as Importance Sampling) and

conditional expectation methods (such as Directional Simulation). Advanced simulation methods

have been developed in recent years, and are nowadays used instead of or in conjunction with

approximate techniques. The need for the combined use arises in cases where it becomes

important to check the accuracy of approximate methods, for example with multi-mode or multi-component failure. The potential of Response Surface Methodology has also contributed to the increasing use of various simulation techniques for structural reliability assessment.
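As an illustration of the crude Monte Carlo procedure described above, the following sketch estimates Pf for the simple margin M = R − S with independent normal R and S (illustrative values, not from the report). With Pf on the order of 2 %, the rule N > C / Pf with C = 100 would already be satisfied by a few thousand trials:

```python
import random
from math import erf, sqrt

random.seed(42)  # fixed seed so the run is reproducible

# Illustrative limit state M = R - S with independent normal variables
mR, sR = 350.0, 30.0
mS, sS = 250.0, 40.0

N = 200_000            # comfortably above C / Pf for C = 100
failures = 0
for _ in range(N):
    r = random.gauss(mR, sR)
    s = random.gauss(mS, sS)
    if r - s <= 0.0:   # failure event M <= 0
        failures += 1

pf_mc = failures / N
# closed-form reference for this linear normal case
pf_exact = 0.5 * (1.0 + erf(-((mR - mS) / sqrt(sR**2 + sS**2)) / sqrt(2.0)))
print(pf_mc, pf_exact)
```

The scatter of the estimate decreases only as 1/√N, which is why advanced methods such as Importance Sampling are preferred for small failure probabilities.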

Approximate -or level 2- methods, which approximate the calculation of the probability of failure. Because level 3 methods are very difficult to handle, level 2 methods try to provide quick and reliable approximations. The best-known are FORM (First Order Reliability Method) and SORM (Second Order Reliability Method). The first step consists in transforming the problem into a space of standard normal distributions: all the initial variables Z (which are random and possibly statistically dependent) are transformed into a set of independent normal random variables U with zero mean and unit standard deviation (called standard normal variables). When the initial random variables are independent, this transform can be a one-to-one transform. In the standardised space, the point of the new limit state gU(U) = 0 nearest to the origin is called the design point, and its distance β from the origin is the reliability index. The approximation of the failure surface at the design point can be linear (the so-called FORM approximation) or via some other approximate function, such


as a Taylor series with second order terms retained (as in the so-called SORM approximation).

The function U = T(Z) is called the Rosenblatt transform /5/. It is built in such a way that probabilities are not modified by the transform. Consequently, if gU(U) = 0 is the new failure surface in the U-space, then:

Pf = P(g(Z) ≤ 0) = P(gU(U) ≤ 0)    (2.12)

The reliability index was introduced by Hasofer and Lind in 1974 for characterising the reliability of a component /6/; β is often called the Hasofer-Lind reliability index (figure 2.4).

Figure 2.4. FORM and SORM approximations of the limit state in the standard normal space (bidimensional normal distribution; safety domain, limit state and failure domain).

In the FORM approach, the failure surface gU(U) = 0 is approximated by the tangent hyperplane at the design point (figure 2.4). The probability of failure is then approximated by:

Pf ≈ Φ(−β)    (2.13)

In the SORM approach, the failure surface is approximated by a hyper-paraboloid which passes through the design point and has the same curvatures (figure 2.4). The probability of failure is given by:

Pf ≈ Φ(−β) ∏_{i=1}^{n−1} (1 − β κ_i)^{−1/2}    (2.14)

where the κ_i are the principal curvatures of the limit state surface at the design point.


Let us finally note that, when the random variables are all normal and the limit state is linear, g(Z) = a0 + Σ_{i=1..n} a_i Z_i, it can be proved that:

β = ( a0 + [a1 … an] [E(Z1) … E(Zn)]ᵀ ) / √( [a1 … an] [C_Z] [a1 … an]ᵀ )    (2.15)

where [C_Z] is the variance-covariance matrix of the Z_i and E(Z_i) their mean values. That index was initially introduced by Cornell in 1967 /7/.
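The Cornell index of equation (2.15) is straightforward to compute. The sketch below, with illustrative values (not taken from the report), evaluates β for the linear margin g(Z) = Z1 − Z2 of jointly normal variables, together with the corresponding failure probability Φ(−β):

```python
from math import erf, sqrt

def cornell_beta(a0, a, means, cov):
    """Cornell reliability index for g(Z) = a0 + sum(a_i Z_i),
    Z jointly normal with mean vector `means` and covariance matrix `cov`."""
    n = len(a)
    mean_g = a0 + sum(a[i] * means[i] for i in range(n))
    var_g = sum(a[i] * cov[i][j] * a[j] for i in range(n) for j in range(n))
    return mean_g / sqrt(var_g)

# Illustrative margin g(Z) = Z1 - Z2 (resistance minus load effect)
means = [350.0, 250.0]
cov = [[30.0**2, 0.0],
       [0.0, 40.0**2]]
beta = cornell_beta(0.0, [1.0, -1.0], means, cov)
pf = 0.5 * (1.0 + erf(-beta / sqrt(2.0)))  # Pf = Phi(-beta), exact here
print(beta, pf)  # beta = 2.0
```

For this linear normal case the Cornell and Hasofer-Lind indices coincide and Φ(−β) is exact, not an approximation.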

2.2.2.4. Rosenblatt transform

As mentioned previously, the initial variables are transformed into standard normal variables through the Rosenblatt transform T(.). The T(.) transform is usually implicit, because very few variables have an analytical relation with the standard normal variable, and because this relation is often difficult to obtain. For these reasons, the Rosenblatt transform is constructed using only the probability functions of the variables. For one variable, this gives:

Φ(u) = F_Z(z), i.e. u = T(z) = Φ⁻¹(F_Z(z))    (2.16)

and, by differentiation:

du/dz = dT/dz = f_Z(z) / φ(u) = f_Z(z) / φ(Φ⁻¹(F_Z(z)))    (2.17)

For more than one variable, if the multi-dimensional probability function is known, a set of

independent standard normal variables is obtained by fixing the equality between the

probabilities of the two sets of variables (initial and standard):

Φ(u1) = F1(z1)
Φ(u2) = F2(z2 | z1)
...
Φ(un) = Fn(zn | z1, …, z_{n−1})    (2.18)


where the conditional density function is given by:

f_i(z_i | z1, …, z_{i−1}) = f_{Z1,…,Zi}(z1, …, z_i) / f_{Z1,…,Zi−1}(z1, …, z_{i−1})    (2.19)

i.e.

F_i(z_i | z1, …, z_{i−1}) = [ ∫_{−∞}^{z_i} f_{Z1,…,Zi}(z1, …, z_{i−1}, s) ds ] / f_{Z1,…,Zi−1}(z1, …, z_{i−1})    (2.20)

The transform is thus:

U1 = Φ⁻¹(F1(Z1))
U2 = Φ⁻¹(F2(Z2 | Z1))
...
Un = Φ⁻¹(Fn(Zn | Z_{n−1}, …, Z1))    (2.21)

and the inverse transform is obtained successively from the first variable:

Z1 = F1⁻¹(Φ(U1))
Z2 = F2⁻¹(Φ(U2) | Z1)
...
Zn = Fn⁻¹(Φ(Un) | Z_{n−1}, …, Z1)    (2.22)

In fact, the joint density function is rarely known, which makes it impossible to assess the conditional probability functions. An approximation is therefore used:

1. each individual variable is transformed into a standard normal variable,
2. the correlation coefficients between the standard normal variables are assessed from the correlation coefficients of the initial variables,
3. the dependent standard normal variables are transformed into independent standard normal variables.

Table 2.1 provides some inverse Rosenblatt transforms frequently used in reliability analysis.

Variable                                        | Transform
Normal with mean m and standard deviation σ     | Z = m + σU
Lognormal with mean m and standard deviation σ  | Z = [m / √(1 + σ²/m²)] exp( U √(ln(1 + σ²/m²)) )
Exponential with parameter λ                    | Z = −(1/λ) ln[Φ(−U)]
Gumbel with parameters α and u                  | Z = u − (1/α) ln[−ln[Φ(U)]]

Table 2.1. Examples of inverse Rosenblatt transforms
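The transforms of Table 2.1 can be coded directly. The following sketch (standard library only; the parametrisations are the usual ones and should be checked against the intended conventions) maps a standard normal realisation u to a realisation z of the physical variable:

```python
from math import erf, exp, log, sqrt

def phi_cdf(u):
    # standard normal cumulative distribution function
    return 0.5 * (1.0 + erf(u / sqrt(2.0)))

def z_normal(u, m, s):
    return m + s * u

def z_lognormal(u, m, s):
    # lognormal parametrised by its mean m and standard deviation s
    v = log(1.0 + (s / m) ** 2)
    return m / sqrt(1.0 + (s / m) ** 2) * exp(u * sqrt(v))

def z_exponential(u, lam):
    return -log(phi_cdf(-u)) / lam

def z_gumbel(u, alpha, u0):
    # Gumbel with scale parameter alpha and mode u0
    return u0 - log(-log(phi_cdf(u))) / alpha

# At u = 0 (the median of U) each transform returns the median of Z:
print(z_normal(0.0, 10.0, 2.0))   # 10.0
print(z_exponential(0.0, 0.5))    # ln(2)/0.5, the median of Exp(0.5)
```

Because Φ is monotonic, each transform preserves probabilities, which is exactly the property required of the Rosenblatt transform.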

When the variables are transformed into independent standard normal variables, the reliability index can be calculated. Since it is the distance from the origin to the closest point of the failure surface, the reliability index is obtained as soon as the design point is determined. That determination is formulated as a minimisation problem:

β = min √(uᵀu)  subject to  gU(u) = 0    (2.23)

Numerous algorithms can be used. The Rackwitz and Fiessler algorithm /8/ is certainly the most widely used because of its simplicity and the good results it provides. Nevertheless, convergence problems occur in some cases.

The algorithm starts from an initial point u⁰ (for example the origin), and the limit state gU(u) = 0 is linearised in the vicinity of u⁰. The intersection of the tangent hyperplane with the variables space gives an approximate linear failure surface. The closest point to the origin on that surface is the next iteration point u¹. The procedure is then iterated.

The design point u* is the limit of the sequence of points u⁰, u¹, …, u^k. The unit normal vector to the level surface

gU(u) = gU(u^k)    (2.24)

is

α^k = − ∇gU(u^k) / ‖∇gU(u^k)‖    (2.25)

where ∇gU(u) denotes the gradient of gU.

The intersection of the tangent hyperplane at u^k with the variables space satisfies:

gU(u^k) + ∇gU(u^k)ᵀ (u − u^k) = 0    (2.26)

The intersection point closest to the origin is the next iterate u^{k+1}:

u^{k+1} = [ ∇gU(u^k)ᵀ u^k − gU(u^k) ] ∇gU(u^k) / ( ∇gU(u^k)ᵀ ∇gU(u^k) )    (2.27)

which, using the unit normal α^k, gives:

u^{k+1} = [ α^kᵀ u^k + gU(u^k) / ‖∇gU(u^k)‖ ] α^k    (2.28)

The algorithm requires knowledge of the limit state function gU(u) = 0, that is, the ability to transform all the variables by the inverse Rosenblatt transform.
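The iteration of equations (2.26)-(2.28) can be sketched as follows. The limit state used is an illustrative linear margin already expressed in the standard normal space (so no Rosenblatt transform is needed), for which the exact answer β = 2 is known:

```python
from math import sqrt

def hlrf(g, grad_g, n, tol=1e-8, max_iter=100):
    """Rackwitz-Fiessler (HL-RF) iteration in the standard normal space.
    g: limit state gU(u); grad_g: its gradient; n: number of variables.
    Returns (beta, design point u*)."""
    u = [0.0] * n                       # start at the origin
    for _ in range(max_iter):
        gu = g(u)
        dg = grad_g(u)
        norm2 = sum(d * d for d in dg)
        # u_{k+1} = [grad.u - g] grad / |grad|^2   (equation 2.27)
        lam = (sum(dg[i] * u[i] for i in range(n)) - gu) / norm2
        u_new = [lam * d for d in dg]
        if sqrt(sum((a - b) ** 2 for a, b in zip(u_new, u))) < tol:
            u = u_new
            break
        u = u_new
    beta = sqrt(sum(x * x for x in u))
    return beta, u

# Illustrative linear margin in u-space: g(u) = (350 + 30 u1) - (250 + 40 u2)
g = lambda u: (350.0 + 30.0 * u[0]) - (250.0 + 40.0 * u[1])
grad = lambda u: [30.0, -40.0]
beta, u_star = hlrf(g, grad, 2)
print(beta, u_star)  # beta = 2.0 for this linear case
```

For a linear limit state the algorithm converges in one step; the non-linear case simply repeats the linearisation at each new iterate, which is where the convergence problems mentioned above can arise.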

2.2.2.6. Sensitivity factors

It is sometimes useful to appraise the sensitivity of the probability of failure with respect to a particular parameter. For that purpose, sensitivity factors have been derived for a distribution parameter and for a parameter of the limit state /9/:

∂β/∂p_i = (1/β) u*ᵀ ∂T(z*, p)/∂p_i   for a parameter p_i of a distribution,

∂β/∂p_i = [ ∂gU(u*, p)/∂p_i ] / ‖∇gU(u*, p)‖   for a parameter p_i of the limit state.

It is also of interest to evaluate whether all the variables need to be kept as random. The omission coefficient /10/ provides such information:

ι_i = β(Z_i = m_i) / β    (2.29)


where β(Z_i = m_i) is the reliability index obtained when the variable Z_i is replaced by its median value m_i. These coefficients express the influence of the different variables on the reliability index: when the value is close to 1, the variable can be kept deterministic and equal to its median value. They can be approximated by:

ι_i ≈ 1 / √(1 − α_i²)    (2.30)

where α_i is the i-th component of the unit normal vector to the tangent hyperplane at the design point (directed towards the failure surface).
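Equation (2.30) gives a quick screening rule. A minimal sketch, with hypothetical direction cosines α_i (they must satisfy Σ α_i² = 1):

```python
from math import sqrt

def omission_coefficients(alpha):
    """Approximate omission coefficients iota_i ~ 1/sqrt(1 - alpha_i^2)
    from the components of the unit normal at the design point (eq. 2.30)."""
    return [1.0 / sqrt(1.0 - a * a) for a in alpha]

# Hypothetical direction cosines at a design point
alpha = [0.6, -0.8]
iota = omission_coefficients(alpha)
print(iota)  # a value near 1 means the variable may be fixed at its median
```

Here the second variable (|α| = 0.8) dominates the reliability index, while fixing the first would change β by about 25 %.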

In the previous sections, we have insisted on the fact that structural components and failure components are different. Indeed, a structural component can have more than one failure mode, and consequently has to be viewed as a set of failure components. The reliability assessment of a structure, as a system of structural components, requires knowledge of the structural behaviour of each element, especially after failure (post-failure behaviour). Two categories of failure component behaviour are mainly used in reliability analysis: ductile and brittle. With brittle behaviour, the failure component is no longer considered after failure. With ductile behaviour, the component maintains its carrying capacity in the considered failure mode (for instance yielding). The extra load effects are redistributed to the other components. These two post-failure behaviours do not describe all the post-failure modes. Nevertheless, it is possible to cover buckling as well as brittle and ductile behaviours by introducing a parameter η which relates the rupture load R_f to the post-failure remaining capacity R_af = η R_f. If η = 0, the behaviour is brittle, while if η = 1 it is ductile. If 0 < η < 1, a buckling-type behaviour is described by the model (figure 2.6).


Figure 2.6. Post-failure behaviours (load-displacement curves): a) ductile, b) brittle, c) buckling (residual capacity η·R_f).

In this section, two kinds of systems are considered: parallel and series systems. These systems play major roles in the reliability analysis of structures.

Let us consider, for instance, the structure described in figure 2.7. That structure has an internal degree of static indeterminacy equal to 0. If one failure mode is assigned to each structural component, structural failure, defined as the loss of stability, will occur as soon as one failure component fails. The structure is then described as a series system where each structural component -in that case- is a failure component (figure 2.8).

Let us now consider the structure of figure 2.9. The structural failure will occur when a number

of failure components will fail. This set is called a failure mechanism.


A failure mechanism is formally described as a parallel system (figure 2.10a). The structure is

modelled by a series system where each component is a failure mechanism (figure 2.10b).

[Figure 2.10: (a) a parallel system; (b) the general series/parallel representation.]

2.4.1. Some definitions

Definition.1: A system is a set of failure components. As a structural component is not

necessarily a failure component, a structure is not necessarily a system according

to that definition.

Definition.2: A failure mechanism is a subset of failure components which, when all failed,

leads to the system failure. The system failure occurs when all the components of

the same failure mechanism have failed.

Definition.3: A system where each failure mechanism is composed of only one component is a

series system. A system which has only one failure mechanism is called a parallel

system.

A system is, by definition, a set of components. Each component is either functioning or not functioning (failed). As a matter of fact, the failure component N_i can be described by a boolean variable F_i which is equal to 1 if the component has not failed (otherwise, it takes the value 0). Similarly, the state of the system S, at time t, depends on the states of its m constituting components. The value of the system boolean variable F_S is therefore a function of the values taken by the F_i, allowing F_S to be expressed as the image of the variables (F_i), 1 \le i \le m, by a characteristic system function \Phi:


F_S = \Phi(F_1, \dots, F_m)    (2.31)

S is a series system as soon as one failed component leads to the system failure. The characteristic function is then:

\Phi(F_1, \dots, F_m) = \prod_{i=1}^{m} F_i    (2.32)

S is a parallel system when its failure requires the failure of all its components. The characteristic function is:

\Phi(F_1, \dots, F_m) = 1 - \prod_{i=1}^{m} (1 - F_i)    (2.33)
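The two characteristic functions (2.32) and (2.33) can be written down directly on boolean component states. A minimal sketch:

```python
from math import prod

def series_state(F):
    """Series system (eq. 2.32): functioning only if every component functions."""
    return prod(F)

def parallel_state(F):
    """Parallel system (eq. 2.33): failed only if every component has failed."""
    return 1 - prod(1 - f for f in F)

# Boolean component states: 1 = functioning, 0 = failed
states = [1, 1, 0]
```

With one component failed, `series_state` returns 0 (system failed) while `parallel_state` returns 1 (system still functioning), in agreement with the definitions above.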

A system can always be described as a series system composed of parallel subsystems. Two approaches can be used: the link set and the cut set approaches. For the latter, the concept of failure mechanism is also used.

A link L for S is a subset of components such that the system is functioning if all the components of L are functioning and the components which do not belong to L are not functioning. If no link L' included in L exists, L is said to be minimal. A link L is therefore a series system, since it is no longer functioning when one of its components has failed. The system S is then described as a parallel system constituted of series subsystems which are the minimal links.

A cut set (or failure mechanism) C for S is a subset of components such that the system is no longer functioning if all the components in C have failed and the components not belonging to C are functioning. If no cut set C' included in C exists, C is said to be minimal, or the failure mechanism C is said to be fundamental. A cut set C is therefore a parallel system, since it is functioning when one of its components is functioning. The system S is then described by a series system constituted of parallel subsystems.

These two representations are essential in the analysis of the reliability of structures. They facilitate the assessment of failure probabilities.

2.4.3. Example

Let us consider the truss of figure 2.11.


The failure of the structure S is defined by the loss of stability. The failure mode for each structural component occurs when its compressive strength is exceeded. S can be viewed as a system with 5 failure components. The minimal links can be easily deduced: (1,2,3), (2,3,4), (1,2,4), (1,3,4), (1,2,5), (1,3,5), (3,4,5), (2,4,5). S is functioning if one link is functioning. The system S - and therefore the structure - is represented by the parallel system of figure 2.12a.

The minimal cut sets are (1,4), (2,3), (1,2,5), (2,3,5), (2,4,5), (3,4,5). The cut set (1,2,3) is not minimal since there exists a subset (2,3), included in (1,2,3), which is a cut set. The structure S is described by the series/parallel system of figure 2.12b.
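The cut-set representation lends itself to a direct check of the system state. A minimal sketch using the cut sets of the truss example (the function name is illustrative):

```python
def system_functioning(failed, min_cut_sets):
    """Cut-set representation: the system has failed as soon as every
    component of at least one minimal cut set has failed."""
    return not any(cut <= failed for cut in min_cut_sets)

# Minimal cut sets of the 5-member truss of the example
cuts = [{1, 4}, {2, 3}, {1, 2, 5}, {2, 3, 5}, {2, 4, 5}, {3, 4, 5}]
```

For instance, the failure of member 1 alone leaves the system functioning, while the joint failure of members 1 and 4 (a minimal cut set) brings it down.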

[Figure 2.12: (a) representation with minimal link sets; (b) representation with minimal cut sets.]

2.4.4.1. Series systems

Calculations with bounds

We have seen in the previous sections that the characteristic function of a series system is expressed by:

\Phi(F_1, \dots, F_m) = \prod_{i=1}^{m} F_i    (2.34)

so that

F_S = F_1 F_2 \cdots F_{m-1} F_m    (2.35)

or

F_S = 1 - \left[ (1 - F_1) + F_1 (1 - F_2) + F_1 F_2 (1 - F_3) + \dots + F_1 F_2 \cdots F_{m-1} (1 - F_m) \right]    (2.36)

Since the variables are boolean (values equal to 0 or 1), it can be written:

\max_i (1 - F_i) \le 1 - F_S \le \sum_{i=1}^{m} (1 - F_i)    (2.37)

Taking expected values, the probability of failure of the system is then bounded by the individual probabilities of failure:

\max_{i \in (1,m)} P_{fi} \le P_{fS} \le \sum_{i=1}^{m} P_{fi}    (2.38)

These bounds can be improved by taking into account the joint failure probabilities between pairs of components. Bounding the products of boolean variables in (2.36) (inequalities (2.39) to (2.41)) leads to the Ditlevsen bounds:

P_{fS} \ge P_{f1} + \sum_{i=2}^{m} \max\left( 0 \,;\; P_{fi} - \sum_{j=1}^{i-1} P_{fij} \right)    (2.42)

P_{fS} \le \sum_{i=1}^{m} P_{fi} - \sum_{i=2}^{m} \max_{j<i} P_{fij}    (2.43)

where the joint failure probability of two components is

P_{fij} = \mathrm{Prob}\big( (g_i(Z) < 0) \cap (g_j(Z) < 0) \big)
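Once the individual and joint failure probabilities are available, evaluating the simple bounds (2.38) and the Ditlevsen bounds (2.42)-(2.43) is elementary. A sketch with illustrative numbers (for two components the two Ditlevsen bounds coincide):

```python
def simple_bounds(pf):
    """Simple series-system bounds on the failure probability (eq. 2.38)."""
    return max(pf), min(1.0, sum(pf))

def ditlevsen_bounds(pf, pfij):
    """Ditlevsen bounds (eqs. 2.42-2.43); pf[i] are individual failure
    probabilities, pfij[i][j] (j < i) the joint failure probabilities."""
    m = len(pf)
    lower = pf[0] + sum(max(0.0, pf[i] - sum(pfij[i][j] for j in range(i)))
                        for i in range(1, m))
    upper = sum(pf) - sum(max(pfij[i][j] for j in range(i))
                          for i in range(1, m))
    return lower, upper

# Illustrative values for a two-component series system
pf = [1.0e-3, 5.0e-4]
pfij = [[0.0, 0.0], [1.0e-4, 0.0]]
lo, hi = ditlevsen_bounds(pf, pfij)
```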

The difficulty in the use of the Ditlevsen bounds is the computation of the joint probabilities of the events g_i(Z) < 0 and g_j(Z) < 0. That can be done by linearising, in a first step, the limit states near their design points and, in a second step, by evaluating the joint probability with the bi-dimensional normal distribution \Phi_2(X, Y, \rho_{XY}), where X, Y are two standard normal variables with correlation \rho_{XY}. Indeed, if the limit states g_{Uj}(U), in the U-space, are linearised near their design points, we obtain:

M_j = g_{Uj}(U) \approx -\alpha_j^{t} U + \beta_j = L_j(U) + \beta_j, \qquad j = 1, \dots, m    (2.44)

where \alpha_j is the orthonormal vector at the design point (directed towards the failure set). The joint probability of failure of two components is therefore written:

P_{fij} = \mathrm{Prob}\big( (L_i(U) \le -\beta_i) \cap (L_j(U) \le -\beta_j) \big) = \Phi_2(-\beta_i ; -\beta_j ; \rho_{ij})    (2.45)


We have seen that a series system can be represented by the union of failure events. If the limit states are linearised near their design points in the U-space, the probability of failure can be approximated by:

P_{fS} = 1 - \mathrm{Prob}\big( (g_{U1}(U) > 0) \cap \dots \cap (g_{Um}(U) > 0) \big)    (2.46)

The second member of equation (2.46) is nothing else than the value of the probability function of the multi-dimensional normal distribution \Phi_m(\beta ; C), where \beta is the vector composed of the m reliability indexes and C the correlation matrix of the different linearised margins. The correlation matrix is m \times m and is obtained from the orthonormal vectors \alpha_j of the tangent hyperplanes:

C_{ij} = \sum_{r=1}^{n} \alpha_{ri}\, \alpha_{rj}    (2.47)

The probability of failure and the system reliability index are then:

P_{fS} = 1 - \Phi_m(\beta ; C)    (2.48)

\beta_S = -\Phi^{-1}\big( 1 - \Phi_m(\beta ; C) \big)    (2.49)

The method provides good results, but is difficult to implement.

2.4.4.2. Parallel systems

If g_{Uj}(U) are the limit state functions for the m components in the standard normal space, then the first step consists in linearising each function by a tangent hyperplane L_j(U) + \beta_j. The probability of failure of the parallel system is then:

P_{fS} = \mathrm{Prob}\left( \bigcap_{i=1}^{m} \big( L_i(U) < -\beta_i \big) \right) = \Phi_m(-\beta ; C)    (2.50)

\beta_S = -\Phi^{-1}\big( \Phi_m(-\beta ; C) \big)    (2.51)

Let us consider the frame described in figure 2.13, loaded by a horizontal force H and a vertical force P. Figure 2.14 gives two fundamental failure mechanisms.

For each mechanism, a limit state is defined by the equality between the work of the external forces and the work of the internal forces. It is therefore easy to obtain the two limit state functions as follows:

M_1 = R_1 + R_2 + R_6 + R_7 - L \cdot H
M_2 = R_3 + 2 R_4 + R_5 - L \cdot P    (2.52)

Let us assume that all the variables are normal variables as mentioned in Table 2.2. If we only take into account these two failure mechanisms, then the structure is formally described by a series system where the two components are the failure mechanisms (figure 2.14).

Variable                        Mean       Coefficient of variation
R1, R2, R3, R4, R5, R6, R7      135 kN.m   10%
L                               5 m        /
P                               45 kN      10%
H                               55 kN      10%

Table 2.2. Characteristics of the variables of the frame example

Let us now assume that the strengths 1, 2, 6 and 7 are fully correlated (\rho = 1), as are the strengths 3, 4 and 5. Under these hypotheses, the joint probability density function of the random vector M composed of the safety margins is also normal:

M = \begin{pmatrix} M_1 \\ M_2 \end{pmatrix}
  = \begin{pmatrix} 1 & 1 & 0 & 0 & 0 & 1 & 1 & -L & 0 \\ 0 & 0 & 1 & 2 & 1 & 0 & 0 & 0 & -L \end{pmatrix}
    \begin{pmatrix} R_1 \\ R_2 \\ R_3 \\ R_4 \\ R_5 \\ R_6 \\ R_7 \\ H \\ P \end{pmatrix}
  = \Lambda Z    (2.53)

which gives:

E(M) = \Lambda\, E(Z)    (2.54)

The correlation matrix of Z can be easily deduced under the previous hypotheses:

[\rho] = \begin{pmatrix}
1 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 0 \\
0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}    (2.55)

34

The covariance matrix of Z is then:

C_Z = \begin{pmatrix} \sigma(Z_1) & & 0 \\ & \ddots & \\ 0 & & \sigma(Z_9) \end{pmatrix}
      [\rho]
      \begin{pmatrix} \sigma(Z_1) & & 0 \\ & \ddots & \\ 0 & & \sigma(Z_9) \end{pmatrix}    (2.56)

and the covariance matrix of the safety margins follows:

C_M = \Lambda\, C_Z\, \Lambda^{t}    (2.57)

The reliability indexes of the two mechanisms are finally:

\beta_1 = \frac{E(M_1)}{\sqrt{C_{M_1 M_1}}} = 4.373, \qquad \beta_2 = \frac{E(M_2)}{\sqrt{C_{M_2 M_2}}} = 5.385    (2.58)

The probability of failure of the series system is then evaluated:
- by the bi-dimensional distribution: P_{fS} = 0.612 \cdot 10^{-5}
- by the Ditlevsen bounds: 0.612 \cdot 10^{-5} \le P_{fS} \le 0.618 \cdot 10^{-5}
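Because the mechanisms (2.52) are linear combinations of normal variables, the indexes in (2.58) can be reproduced by hand. A minimal sketch using the values of Table 2.2, assuming full correlation within each strength group (as stated above) and independence between strengths and loads:

```python
import math

# Data from Table 2.2 (means and 10% coefficients of variation)
mR, sR = 135.0, 13.5      # strengths R1..R7 (kN.m)
L = 5.0                   # deterministic length (m)
mP, sP = 45.0, 4.5        # load P (kN)
mH, sH = 55.0, 5.5        # load H (kN)

# Mechanism 1: M1 = R1 + R2 + R6 + R7 - L*H; fully correlated
# strengths add linearly in standard deviation
m1 = 4 * mR - L * mH
s1 = math.hypot(4 * sR, L * sH)
beta1 = m1 / s1

# Mechanism 2: M2 = R3 + 2*R4 + R5 - L*P
m2 = 4 * mR - L * mP
s2 = math.hypot(4 * sR, L * sP)
beta2 = m2 / s2
```

The sketch returns beta1 = 4.373 and beta2 = 5.385, in agreement with (2.58).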

The concept of safety margin has been introduced in the previous sections in order to describe the limit state between failure and safety. Other sets of margins can be defined: the event margins.

Qualitative or quantitative information can be given by inspections. Each of these results is an event, associated with an event margin H and an occurrence probability. Qualitative inspection results are information upon the detection or the non-detection of an event related to a particular phenomenon. The information is expressed by:

H \le 0    (2.59)

Quantitative inspection results are measurements of a quantity related to a particular phenomenon. The information is expressed by:

H = 0    (2.60)

Let us assume that we are studying the reliability of a component described by its safety margin M. Let us also assume that different qualitative and quantitative inspection results are available and described by a set of event margins (H_{quant,i}), 1 \le i \le n, and (H_{qual,j}), 1 \le j \le m. Then the probability of failure of the component when this qualitative and quantitative information is known is given by the conditional probability:

P_f^{updated} = \mathrm{Prob}\left( M < 0 \;\Big/\; \bigcap_{i=1}^{n} \big( H_{quant,i} = 0 \big) \cap \bigcap_{j=1}^{m} \big( H_{qual,j} < 0 \big) \right)    (2.61)

The calculation of this updated probability of failure is difficult to handle when the two sets of events are simultaneously present. When only quantitative or only qualitative information is available, the calculations are more amenable. The reader is referred to reference /13/ for details.

2.5.1. Reliability updating with quantitative information

When only a set of quantitative information is available, it can be shown /13/ that:

\beta^{updated} = \frac{\beta - \rho_{MH}^{t}\, \rho_{HH}^{-1}\, \beta_H}{\sqrt{1 - \rho_{MH}^{t}\, \rho_{HH}^{-1}\, \rho_{MH}}}    (2.62)

where \beta, \beta_H, \rho_{MH}, \rho_{HH} are respectively the reliability index for M before updating, the vector of reliability indexes given by the event margins, the vector of correlations between the safety margin and the event margins, and the correlation matrix between the event margins.
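For a single quantitative event margin, (2.62) reduces to scalar operations. A minimal sketch, using the values later reported for the Vauban bridge at t = 30 years (Table 3.2: beta = 1.49, beta_H = 0.164, correlation -0.349):

```python
import math

def updated_beta(beta, beta_h, rho):
    """First-order updated reliability index (eq. 2.62) for one
    quantitative event margin H = 0 with correlation rho to M."""
    return (beta - rho * beta_h) / math.sqrt(1.0 - rho * rho)

# Vauban bridge values (Table 3.2), at t = 30 years
b_upd = updated_beta(1.49, 0.164, -0.349)
```

The sketch returns approximately 1.65, the updated index quoted in Table 3.2.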

2.5.2. Reliability updating with qualitative information

For a set of qualitative event margins, the updated probability of failure is given by /13/:

P_f^{updated} = \frac{\Phi_{m+1}\big( (-\beta, -\beta_H) ;\, \rho \big)}{\Phi_m\big( -\beta_H ;\, \rho_{HH} \big)}    (2.63)

where \rho is the correlation matrix of the margins (M, H_1, \dots, H_m).

The structural reliability theory acknowledges that no code or standard is able to guarantee full safety. Consequently, reliability theory tries to estimate the risk that a structure or one of its components can fail, provided that failure has been previously defined.

Reliability requirements for an existing structure, as well as for a new one, are expressed in terms of a probability of failure corresponding to a specific reference period. The remaining service life predetermined at the assessment is often considered as the reference period. A shorter reference period might be reasonable for the ultimate limit state.

2.6.2. The problem of the minimum safety definition

The theory of structural reliability described in the previous sections does not in itself give rules for the choice of the reliability level. The open problem is what level should be required in order that the structure - or one of its components - in the light of the available information, can be declared to be sufficiently safe. In the absence of a universal consensus, it is a widespread attitude that the safety level should not be changed drastically when an authority introduces a new code of practice or revises an old standard. Changes should be made in a prudent evolutionary way, beginning with a calibration against existing design practice. It is nevertheless obvious that some superior principles must be formulated for a rational control of this evolution of the reliability level. Decision theory seems today to offer new trends in defining minimum safety in codes and standards.

This problem is more acute for existing bridges. A large percentage of existing bridges no longer satisfy current standards, and the funds available to upgrade them are limited. This puts strong economic pressure on determining fully, without compromising human safety, both the capacity and the life of bridges.

Current bridge design safety factors are based on a criterion of structural safety. The reliability index used to determine design safety factors for the ultimate limit states is generally 3.5 or 3.8, based on a 50-year reference period. This basic criterion results in the same design safety factors for all bridge components, irrespective of the different consequences of failure for different components. Epidemiological evidence - because of the high safety level of current codes - is the lack of bridge failures in recent years. The use of a single safety level for bridge design is, however, economical, because the marginal difference in cost for failure situations where the criterion could be reduced is small. For assessment, however, even a small difference in the criterion can result in a major cost of bridge repairs.

The probability of death or injury due to structural failure is equal to the probability of structural failure times the probability of death or injury given that failure occurs. For design based on the theory of structural reliability, the latter probability is taken equal to 1.0. Experience shows that some failures are much less likely to result in death or injury than others. To take into account the life-safety aspects of structural failure, the Canadian Standards Association (1981) has adopted for bridge assessment /14/:


P_{conventional} = \frac{A \cdot K}{W \sqrt{n}}    (2.64)

where P_{conventional} is the target annual probability of failure based on life-safety consequences, K is a constant based on calibration to existing experience which is known to provide satisfactory life safety, A is the activity factor which reflects the risk to human life associated with the activities for which the structure is used, W is the warning factor corresponding to the probability that, given failure or recognition of approaching failure, a person at risk will be killed or seriously injured, and \sqrt{n} is the importance factor based on the number of people, n, at risk if failure occurs (this is essentially an aversion factor that takes into account the proportionately greater public concern for hazards that may result in many fatalities as opposed to those that can result only in a few).

For highways, the CSA recommends taking A = 3. For the importance factor \sqrt{n}, the number of people at risk, if a bridge collapses, is equal to the number of people who drive into the gap after the collapse. The latter depends on the traffic and on visual circumstances such as weather, time of day, lighting and geometry of approach. For normal bridges on heavily used highways under normal traffic and visual conditions, n is assumed equal to 10. For W, a value equal to 1.0 is chosen (no warning of collapse).
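Combining (2.64) with the values quoted above (A = 3, W = 1.0, n = 10) and the base value K = 2.3 x 10^-4 given in Section 2.6.4 yields a target annual probability of the order of 2 x 10^-4. A minimal sketch:

```python
import math

def target_annual_pf(A, K, W, n):
    """CSA life-safety target annual probability of failure (eq. 2.64)."""
    return A * K / (W * math.sqrt(n))

# Values quoted in the text: A = 3 (highways), W = 1.0 (no warning),
# n = 10 persons at risk, K = 2.3e-4 (calibration, Section 2.6.4)
p = target_annual_pf(3.0, 2.3e-4, 1.0, 10)
```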

2.6.4. Calibration

For bridge elements, the reliability index for 1 year, corresponding to a reliability index of 3.5

for 50 years, is comprised between 3.5 for elements carrying dead load only and 4.0 for elements

carrying traffic load only. A base probability of failure of K = 2.3 10 4 has therefore been

adopted. This takes into account that regular inspection programs and years of satisfactory

performance have identified and corrected design and construction errors, thereby reducing a

principal cause of most failures.

2.6.5. Adjustments

The CSA proposes to adjust the target reliability indexes depending on the behaviour of the element and on the behaviour of the structural system given failure of the element. If an element, such as a girder in a multi-girder bridge, fails without collapse because of redundancy, then the risk to life is reduced. If an element fails gradually (by yielding for instance), then the failure is likely to be noticed before collapse takes place. In summary, structural behaviour affects the warning factor W.

Experience shows that avoidance of bridge failures and inspection are closely related. The better and more systematic the inspection, the more likely it is that damaged components will be identified and evaluated and steps taken to avoid failure. Of course, some components cannot be inspected. In summary, inspection also affects the warning factor W.

Finally, the adjustment in the reliability index for the traffic category depends on the activity factor A.

The reliability index in Table 2.3 is based on a 1-year time interval for all traffic categories except for permit-controlled and supervised vehicles (PC), where it is based on a single passage.

\beta = 3.5 - (E + S + I + PC) \ge 2.0

Adjustment for element behaviour (E):
  Sudden loss of capacity with little or no warning                                  0.0
  Sudden failure with little or no warning but retention of post-failure capacity    0.25
  Gradual failure with probable warning                                              0.5

Adjustment for system behaviour (S):
  Element failure leads to total collapse                                            0.0
  Element failure probably does not lead to total collapse                           0.25
  Element failure leads to local failure only                                        0.5

Adjustment for inspection (I):
  Component not inspectable                                                          -0.25
  Component regularly inspectable                                                    0.0
  Critical component inspected by evaluator                                          0.25

Adjustment for traffic category (PC):
  All traffic categories except PC                                                   0.0
  Traffic category PC                                                                0.6

Table 2.3. Adjustments to the target reliability index (from /15/)
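The adjustment rule above is a simple clipped sum. A sketch, with the adjustment values taken from Table 2.3:

```python
def target_beta(e, s, i, pc):
    """Adjusted target reliability index: beta = 3.5 - (E + S + I + PC),
    bounded below by 2.0 (Table 2.3)."""
    return max(2.0, 3.5 - (e + s + i + pc))

# Example: gradual failure (0.5), local failure only (0.5),
# regularly inspectable (0.0), normal traffic (0.0)
beta = target_beta(0.5, 0.5, 0.0, 0.0)
```

With these adjustments the target index drops from 3.5 to 2.5; more favourable combinations are clipped at the floor value of 2.0.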

3. APPLICATION

For prestressed bridges, checking the structural capacity at the Serviceability Limit State implies properly assessing the prestress value. Unlike reinforced concrete, and even in the absence of degrading phenomena, the prestress value is subject to losses which fall into two classes:

- instantaneous losses due to the anchorage technology and the cable profiles
- time-dependent losses due to concrete delayed strains and steel stress losses

The two classes introduce uncertainties the engineer has to deal with. The instantaneous losses can be assessed with sufficient precision; the major source of uncertainty concerns the values of the friction and wobble coefficients used in the exponential formula for assessing the frictional losses. The second class of losses requires more care. There are basically two types of uncertainties: external and internal. External uncertainties arise from the uncertainty in the influencing parameters (humidity, ...). Internal uncertainties are those inherent in the creep, shrinkage or relaxation mechanisms. Roughly speaking, they depend on the material models used. Other important uncertainties come from the structural analysis itself: loads and geometrical parameters are as many parameters which can be treated as random variables.

The French code B.P.E.L.91 (B.P.E.L., 1991) provides predictive models for creep, shrinkage and steel relaxation which allow the time-dependent prestress losses to be calculated. The study presented here starts from these models and then assumes as random the different variables introduced by these models. In the same way, the Serviceability Limit State (SLS) is chosen as the set of limit state functions defined by the B.P.E.L.91 code /16/.

3.1.1. Losses due to concrete shrinkage

Shrinkage is a time-dependent shortening phenomenon of unloaded concrete. It is the linear superposition of two basic shrinkage phenomena, among which the drying shrinkage linked to hydric exchanges between concrete and the environment.

Different formulas have been introduced by the design codes in order to transcribe the influence of the different parameters upon concrete shrinkage. The B.P.E.L.91 code proposes the following expression for the induced strain at time t since time t_0:

\varepsilon_r(t, t_0) = \varepsilon_r \left[ r(t) - r(t_0) \right]    (3.1)

where \varepsilon_r is the shrinkage final value and r(t) is a function defined on [0, +\infty[ with values in [0, 1[:

r(t) = \frac{t}{t + 9\, r_m}    (3.2)

t is expressed in days and r_m is the mean radius of the section (i.e. the ratio of the cross-section area to the length of its outline in contact with air). The French code provides values for \varepsilon_r relative to the French territory, comprised between 1.5 and 5. The corresponding prestress loss is then:

\Delta\sigma_r(t, t_0) = \varepsilon_r \left[ r(t) - r(t_0) \right] E_c    (3.3)
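The kinetics function (3.2) and the strain model (3.1) are straightforward to evaluate. A minimal sketch, in which the numerical values of the mean radius and of the final shrinkage are illustrative only:

```python
def r(t, rm):
    """B.P.E.L.91 kinetics function (eq. 3.2); t in days."""
    return t / (t + 9.0 * rm)

def shrinkage_strain(t, t0, eps_r, rm):
    """Shrinkage strain developed between t0 and t (eq. 3.1)."""
    return eps_r * (r(t, rm) - r(t0, rm))

# Illustrative values: final shrinkage 3e-4, mean radius 10
eps = shrinkage_strain(t=365.0, t0=28.0, eps_r=3e-4, rm=10.0)
```

The same r(t) function is reused below to time-index the creep and relaxation losses.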

Unlike shrinkage, concrete creep implies a slow strain evolution under time-sustained stress. Different parameters have an effect upon this phenomenon, among which:

- the initial time when the concrete piece has been loaded
- the number of cables, which limits the creep phenomenon

The B.P.E.L.91 code proposes an approximate evaluation of the total losses due to creep:

\Delta\sigma_{cr} = 2\, \sigma_c\, \frac{E_s}{E_c}    (3.4)

where \sigma_c is the ultimate compressive stress, and E_c and E_s are respectively the concrete Young modulus at the ultimate stage and the steel Young modulus. The delayed losses can be calculated by multiplying the previous equation by the r(t) function defined for shrinkage.

3.1.3. Losses due to steel relaxation

Relaxation is a tension slackening phenomenon at constant length. It only appears for cables whose stresses are greater than 30%-40% of their rupture limit, and depends on the steel properties and treatment. Steels can be separated into two families: steels with normal relaxation and steels with very low relaxation. A steel is characterised by its relaxation at 1000 hours, \rho_{1000}, and its relaxation loss is given by the B.P.E.L.91 code as follows:

\Delta\sigma_r = \frac{6}{100}\, \rho_{1000} \left( \frac{\sigma_{s\,initial}}{\sigma_{rupt}} - \mu_0 \right) \sigma_{s\,initial}    (3.5)

where \sigma_{s\,initial} is the steel initial tension (i.e. after instantaneous losses), \sigma_{rupt} the certified rupture limit and \mu_0 a coefficient taken equal to 0.43 for TBR steels, 0.30 for RN steels and 0.35 for the others. The previous loss can be time-indexed by multiplying it by the function r(t).

3.1.4. Determination of the concrete strength

According to the B.P.E.L.91 code and for time t \ge 28 days, the compressive concrete strength is taken constant and equal to the 28-day compressive strength \sigma_{c28}. The tensile strength is then deduced from this strength by the expression (in MPa):

\sigma_{t28} = 0.06\, \sigma_{c28} + 0.6    (3.6)

3.1.5. Probabilistic models

The previous models introduce numerous variables which may be expressed in probabilistic terms. The difficulty in probabilising variables lies in the choice of adequate probability functions and related parameters. Different models are available for these random variables. For instance, the admissible stresses can be chosen as normal or lognormal. Nevertheless, the lack of statistical information concerning the other variables is a handicap for properly modelling them. As a matter of fact, research efforts are still necessary in this field. The B.P.E.L.91 code provides some interesting value ranges for some parameters, which are helpful for fixing appropriate coefficients of variation. In the next sections, we have chosen to probabilistically model the losses themselves rather than the parameters from which they are calculated. Indeed, it is in general easier for engineers to assess the coefficients of variation of the different prestress losses than those of the other parameters.

The B.P.E.L.91 code assumes that the structure behaviour at S.L.S. is linear. It requires that, for a cross-section, the admissible compressive and tensile stresses must not be exceeded under minimum and maximum load effects at the bottom and the top of the beam. Checking a cross-section at S.L.S. therefore requires validating four inequalities:

\frac{P}{S_n} + \frac{P e_0 v_n}{I_n} + \frac{M_p v_n}{I_n} \ge \sigma_{t\,sup}

\frac{P}{S_n} - \frac{P e_0 v'_n}{I_n} - \frac{M_p v'_n}{I_n} \le \sigma_{c\,inf}

\frac{P}{S_n} + \frac{P e_0 v_n}{I_n} + \frac{M_p v_n}{I_n} + \frac{(M_t + M_s) v_n}{I_h} \le \sigma_{c\,sup}

\frac{P}{S_n} - \frac{P e_0 v'_n}{I_n} - \frac{M_p v'_n}{I_n} - \frac{(M_t + M_s) v'_n}{I_h} \ge \sigma_{t\,inf}    (3.7)

where

- M_p, M_s, M_t are respectively the moments implied by the dead loads, the superstructures and the traffic loads,
- P, e_0 are the prestress value and the cable profile position,
- I_h, I_n, S_n are the homogenised cross-section inertia, the net cross-section inertia and the net cross-section area. The net section is the total cross-section reduced by all the ducts which will be filled later; the homogenised section is the net section plus 5 times the longitudinal steel area. These definitions are issued from the B.P.E.L. code,
- \sigma_{c\,sup}, \sigma_{c\,inf}, \sigma_{t\,sup}, \sigma_{t\,inf} are the admissible compressive and tensile stresses at the top and the bottom of the beam,
- v_n, v'_n are the distances from the cross-section centre of gravity to the top and bottom fibres of the beam respectively.

Consequently, assessing the reliability of a cross-section with respect to the Serviceability Limit State implies studying a series system composed of four components. The limit state functions are respectively:

G_1(t) = \frac{P}{S_n} + \frac{P e_0 v_n}{I_n} + \frac{M_p v_n}{I_n} - \sigma_{t\,sup}

G_2(t) = -\frac{P}{S_n} + \frac{P e_0 v'_n}{I_n} + \frac{M_p v'_n}{I_n} + \sigma_{c\,inf}

G_4(t) = -\frac{P}{S_n} - \frac{P e_0 v_n}{I_n} - \frac{M_p v_n}{I_n} - \frac{(M_t + M_s) v_n}{I_h} + \sigma_{c\,sup}

G_3(t) = \frac{P}{S_n} - \frac{P e_0 v'_n}{I_n} - \frac{M_p v'_n}{I_n} - \frac{(M_t + M_s) v'_n}{I_h} - \sigma_{t\,inf}    (3.8)

The models described in Sections 3.1 and 3.2 have been applied to the Vauban bridge in Strasbourg. This bridge belongs to the family of viaducts with simply supported spans and prestressed concrete beams, so-called VI-PP in France. The Vauban bridge was built in 1957. The bridge is 137.49 m long and is composed of four spans of respectively 36.60 m, 41.20 m, 41.20 m and 21.15 m. Here, the attention has been focused on the 36.60 m long span. Figure 1 gives a diagram of the median cross-section of the bridge beams.

The computations have been made by the First-Order Reliability Method, and Table 3.1 gives the mean and the coefficient of variation (C.O.V.) of the basic random variables used in the calculation (fourth beam of the bridge and limit state G_3). The distributions are taken from the literature, as are the coefficients of variation /17/. The mean values are issued from the technical notes and from experiments. As mentioned earlier, and for the sake of simplicity, the instantaneous and time-dependent losses have been randomised instead of directly using the variables introduced in Section 3.1. In this case, the prestress variable at time t is written as:

P(t) = P_{initial} - \Delta P_{inst} - \Delta P_{time\text{-}dep}\; r(t)    (3.9)

where P_{initial}, \Delta P_{inst} and \Delta P_{time\text{-}dep} are respectively the initial prestress value, the prestress instantaneous losses and the final prestress delayed losses.
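With the mean values of Table 3.1 (initial prestress 8.081 MN, instantaneous losses 1.240 MN, final delayed losses 2.087 MN), the mean prestress history (3.9) can be sketched as follows; the mean radius r_m is illustrative, since its value for the Vauban beams is not given here:

```python
def mean_prestress(t, rm=10.0):
    """Mean prestress history P(t) in MN (eq. 3.9), using the mean
    values of Table 3.1; rm (mean radius of the section) is illustrative."""
    r = t / (t + 9.0 * rm)          # kinetics function (3.2)
    return 8.081 - 1.240 - 2.087 * r

p0 = mean_prestress(0.0)      # just after transfer: instantaneous losses only
p_inf = mean_prestress(1e9)   # asymptotic value: all delayed losses developed
```

The mean prestress thus decays from 6.841 MN towards about 4.75 MN as the delayed losses develop.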

The live load effects are Gumbel-distributed. The parameters are fitted on histograms issued from computations using the corresponding bridge influence line and a special highway traffic record (French highway A10) similar to the traffic going across the bridge. The live load effects for a 100-year reference period are extrapolated from a one-week period.


Variable                  Mean     C.O.V.   Type   Unit
Dead load                 3.780    5%       N      MN.m
Superstructures           1.335    5%       N      MN.m
Initial prestress         8.081    10%      N      MN
Instantaneous losses      1.240    10%      N      MN
Final losses              2.087    10%      N      MN
v'/In                     3.248    10%      N      m-3
v'/Ih                     2.800    10%      N      m-3
Sn                        0.930    1%       N      m2
e0                        -1.309   10%      N      m
Tensile concrete stress   -2.700   5%       LN     MPa
Traffic load              1.313    1.991    G      MN.m

Table 3.1. Characteristics of the variables of the study case for the limit state G_3(t)
(N = normal; LN = lognormal; G = Gumbel)
(from /19/)

The computations have been made for the limit state G_3(t) alone and for the series system (four conditions). G_3(t) is the weakest component in the set of the four conditions. It is particularly interesting to assess whether further studies can be reduced to this limit state only, instead of working on all four limit states. Figure 2.15 shows that, for the Vauban bridge and under the chosen hypotheses, the reliability indexes given by G_3(t) alone and by the four conditions are roughly identical after 25 years. Under 25 years, computations with a series system are necessary.

[Figure 2.15: Evolution of the reliability index over the bridge lifetime (0-105 years): condition 3 alone, the four-condition series system, and the updated curve.]

Figure 2.15 also shows that, for this bridge and according to the calculation hypotheses, the reliability indexes are lower than the usual design reliability index, fixed at 2 for the Serviceability Limit State as defined, for instance, in Eurocode 1.

3.4.1. The measurement techniques

The Crossbow Method and the Release Method were applied to the Vauban bridge for assessing the prestress value of undamaged beams at 30 years. These two measurement procedures are complementary and enable reasoning in terms of stresses rather than in terms of strains, as is commonly done. They provide immediate access to the applied forces and moments at the instrumented cross-section.

3.4.2. Updating

As expressed earlier, additional information of different types may become available during the bridge life. It is not restricted to prestress measurements, but can also concern deformations, strains, loading measurements, humidity, etc. This information can be used to update the probability of failure of the component or, more generally, of the structure. Furthermore, it sometimes refers to one of the basic random variables taken into account in the reliability calculation. This is especially the case for the prestress measurements, which provide valuable information for updating the prestress distribution. Updating procedures have been extensively presented and successfully applied to fatigue crack growth modelling.

Let us denote by (P_{m,i}), 1 \le i \le n, the n measured prestress values at times t = t_i. These n observations can be expressed by the equalities:

H_i = P_{m,i} - P(t_i)    (3.10)

The updated probability of failure is then:

P_{f,updated}(t) = \mathrm{Prob}\big( M(t) \le 0 \,/\, H_1 = 0 \cap \dots \cap H_n = 0 \big)    (3.11)

Madsen /13/ provides an expression for the updated first-order index \beta_{upd} according to the measurements (P_{m,i}):

\beta_{upd}(t) = \frac{\beta(t) - \{\rho_{M(t)H_i}\}^{t} \left[\rho_{H_i H_j}\right]^{-1} \{\beta_i\}}{\sqrt{1 - \{\rho_{M(t)H_i}\}^{t} \left[\rho_{H_i H_j}\right]^{-1} \{\rho_{M(t)H_i}\}}}    (3.12)


where \{\beta_i\}, [\rho_{H_iH_j}] and \{\rho_{M(t)H_i}\} are respectively the vector of the reliability indexes of the events (H_i \le 0), the correlation matrix between the margins H_i and H_j, and the correlation vector between the margins H_i and M(t). The different correlation coefficients are the inner products of the unit vectors composed of the sensitivity coefficients of the different margins. In the present study, a single measurement has been performed at t = 30 years.

Margin     Reliability index (30 years)   Correlation between M and H   Updated reliability index
M(t30)     1.49                           -0.349                        1.65
H          0.164

Table 3.2. Updated reliability index

The measurements provide the value P_m = 5500 kN /18/. Assuming a 10% measurement error, the updated reliability index can be calculated. Table 3.2 summarises the results concerning the margins M(t30) and H. They show that the model describing the prestress losses is conservative and over-estimates them. The measurement therefore allows a slightly better estimation of the failure probability of the considered median cross-section at time t = 30 years.

The same approach can be applied to times t ≥ 30 years, which makes it possible to predict the updated reliability loss after the inspection instant. Figure 2.15 illustrates the new evolution of the reliability index taking into account the measurement at time t = 30 years. It can be seen that the discrepancy in the reliability level is maintained along the bridge lifetime: at time t = ∞, the safety level is 0.25 times higher than the initially predicted safety level.
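Equation (3.12) can be sketched numerically. The function below is a generic first-order implementation of Madsen's formula written for this report's notation, not code from the study; with the single measurement of Table 3.2 it reproduces the updated index of 1.65.

```python
import numpy as np

def updated_beta(beta_m, beta_h, rho_mh, rho_hh=None):
    """Updated first-order reliability index after Madsen /13/, Eq. (3.12).

    beta_m  -- reliability index of the margin M(t)
    beta_h  -- reliability indexes of the measurement margins H_i
    rho_mh  -- correlation vector between M(t) and the H_i
    rho_hh  -- correlation matrix of the H_i (identity if omitted)
    """
    beta_h = np.atleast_1d(np.asarray(beta_h, dtype=float))
    rho_mh = np.atleast_1d(np.asarray(rho_mh, dtype=float))
    R = np.eye(len(beta_h)) if rho_hh is None else np.asarray(rho_hh, dtype=float)
    # numerator: beta(t) - rho^T R^-1 {beta_i}
    num = beta_m - rho_mh @ np.linalg.solve(R, beta_h)
    # denominator: sqrt(1 - rho^T R^-1 rho)
    den = np.sqrt(1.0 - rho_mh @ np.linalg.solve(R, rho_mh))
    return num / den

# Single measurement at t = 30 years, with the values of Table 3.2
print(round(float(updated_beta(1.49, 0.164, -0.349)), 2))  # 1.65
```

Because the measured margin H is positively reliable (β_H = 0.164) while being negatively correlated with M, conditioning on the measurement raises the index, which is consistent with the conclusion that the prestress-loss model is conservative.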

The notion that further testing provides additional information for models of structural or

material strength is not new. All structural design is based ultimately on such practical

information /20/. Usually testing is performed on components and on materials. Only seldom are

whole structures tested. In a sense proof-loading of an existing in-service structure is the ultimate

test, but what does it actually reveal about the structure?

If the proof load is highly correlated to the matter of interest, such as measuring stiffness to make

inferences about ultimate strength for reinforced concrete beams, a proof load test can be useful

/21/. Also a proof load test can determine the minimum strength or capacity of a component,

such as a reinforced concrete beam. The load supported by the beam can then be used to truncate the probability density function of its strength below the proof load. Typically, this

increases the reliability index /22/, /23/. But the test itself may damage the structure and/or carry a risk of failure during testing. Generally, only rather high levels of proof load have a significant effect on the predicted reliability.

There are only rather general guidelines for load testing given in some structural design codes. Typically, they require satisfactory performance under test loads that correspond to the factored loads for gradual (bending) failure, and to slightly higher load levels (e.g. 10% higher) for shear (brittle) failure /2/.

A typical load test consists of the following steps:

(1) slow application of the loading in several approximately equal increments, allowing sufficient time between increments for the structural response to become reasonably steady (say about one hour)

(2) continued application of load steps until the code-specified proof load level is reached, provided that at each stage it is considered safe to continue loading

(3) measurement of deformations at each stage, and noting of cracking patterns and other signs of possible distress

(4) at the maximum load level, holding the load for, say, 24 hours and continuing to monitor structural behaviour

(5) slow unloading of the structure, monitoring deformations.

For steel and reinforced concrete structures the proof load test is often considered successful if the residual deformations of the structure are less than about 25% of the maximum deformations, suggesting that inelastic behaviour of this magnitude is tolerable for the types of steel used. For modern reinforced concrete using steels with a less pronounced yield level this may be optimistic. Importantly, the proof-load test says nothing about how close the proof load might have been to the ultimate capacity of the structure, how much ductility remains, or whether some damage has been caused by the test itself. Also, a proof load test result supplies little information about how the structure compares with the requirements of the relevant design code.
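The 25% residual-deformation criterion can be sketched as a simple acceptance check; the function name and the numerical values are illustrative, not taken from a code of practice.

```python
def proof_load_acceptable(max_deformation, residual_deformation, limit_ratio=0.25):
    """Accept the test if the residual deformation after unloading is less
    than limit_ratio times the maximum deformation reached under load."""
    if max_deformation <= 0:
        raise ValueError("maximum deformation must be positive")
    return residual_deformation / max_deformation < limit_ratio

print(proof_load_acceptable(40.0, 8.0))   # residual is 20% of peak -> True
print(proof_load_acceptable(40.0, 12.0))  # residual is 30% of peak -> False
```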

Traditionally, design code rules for proof-load testing are rather arbitrary. However, probabilistic arguments might be invoked to make sensible use of the information, as illustrated in the simple example below.

Let us consider a simply supported bridge under proof-loading at mid-span (Figure 2.16).

Let us assume that, at mid-span, the failure domain is defined by


M_u − M_a ≤ 0

(4.1)

where M_u is the ultimate bending moment and M_a the applied bending moment. The initial risk can be assessed by the probability of failure:

P_f = Prob(M_u − M_a ≤ 0)

(4.2)

Now, let us consider a test to be performed at mid-span. This test produces a bending effect M_t. The test risk, defined as the risk that the structure fails under the test itself, is evaluated by the probability:

P_f,test = Prob(M_u − M_t ≤ 0)

(4.3)

If the test succeeds, the residual risk is the risk of failure after the test has been performed. This risk can be assessed by the conditional probability of failure:

P_f,updated = Prob(M_u − M_a ≤ 0 | M_t − M_u ≤ 0)

(4.4)

The probability of failure given by Equation (4.4) can be calculated from the multi-dimensional integral of Equation (2.63).
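A crude Monte Carlo sketch of Equations (4.2) to (4.4) follows; the lognormal resistance parameters, the normal load-effect parameters and the proof-load value are illustrative assumptions, not data from the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Illustrative (assumed) distributions of the bending moments, in kNm
Mu = rng.lognormal(mean=np.log(3000.0), sigma=0.10, size=n)  # ultimate moment
Ma = rng.normal(loc=2000.0, scale=300.0, size=n)             # applied moment
Mt = 2400.0                                                  # proof-load moment

pf_initial = np.mean(Mu - Ma <= 0.0)          # Eq. (4.2): initial risk
pf_test = np.mean(Mu - Mt <= 0.0)             # Eq. (4.3): risk during the test
survived = Mu - Mt > 0.0                      # a successful test truncates Mu at Mt
pf_updated = np.mean((Mu - Ma <= 0.0) & survived) / np.mean(survived)  # Eq. (4.4)

print(pf_initial, pf_test, pf_updated)
```

Conditioning on survival removes the realisations with M_u ≤ M_t, so pf_updated comes out smaller than pf_initial: the successful test truncates the resistance distribution below the proof load, as discussed above.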

As a result of the cooperative research projects EXTRA I + II, supported by the German Ministry of Education, Science, Research and Technology, a new technique of experimental assessment was developed by the universities of Bremen, Dresden, Leipzig and Weimar. Like calculation methods, it is based on a comparison of limit states and yields information on the ultimate limit state (ULS) or the serviceability limit state (SLS).

If a structure is loaded by an increasing test load, it shows different reactions, which are measurable to a large extent. Structural damage starts if an ultimate test load obsRu is exceeded. Within an extensive test programme this limit is determined in situ and has to be strictly respected during the load test. From the ultimate test load obsRu, the design value of the structural resistance expRd is determined by taking safety factors into account. In general this value exceeds the calculated value by Od. It serves to increase the permitted load or to compensate for structural faults. Major prerequisites are:

Ductile structures

Careful analysis of the structure

Flexible and adjustable loading equipment and safety mechanisms

On-line data logging and presentation

Experience
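The derivation of expRd from obsRu can be sketched as follows; the partial safety factor and its value of 1.35 are illustrative assumptions only, since the draft guideline defines its own safety format.

```python
def design_resistance_from_test(obs_Ru, gamma_R=1.35):
    """Derive a design value of structural resistance expRd from the
    ultimate test load obsRu observed in situ, dividing out an assumed
    partial safety factor gamma_R (the value 1.35 is illustrative)."""
    return obs_Ru / gamma_R

print(design_resistance_from_test(2700.0))  # kN -> 2000.0
```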

Currently this method is under discussion in Germany. A draft guideline for application in civil engineering, excluding bridge structures, was presented in 1998. For more information refer to /28, 29/.


5. REFERENCES

/1/

Calgaro, J.A., Lacroix, R., Maintenance et Réparation des Ponts, Presses de l'ENPC, 1997

/2/

Deliverable D1, Review of current procedures for assessing load carrying capacity,

BRIME, 1999

/3/

Deliverable D5, Development of models (traffic and material strength), BRIME, 1999

/4/

Marek, P., Gustar, M., Anagnos, T., Simulation-based reliability assessment for structural

engineers, CRC Press, 1996

/5/

Statistics, Vol.23, 470-472, 1952

/6/

Hasofer, A.M., Lind, N.C., Exact and invariant second moment code format, Journal of

the Engineering Mechanics Division, Vol.100, 111-121, 1974

/7/

Cornell, C.A., Some thoughts on "Maximum Probable Loads and Structural Safety Insurance", Memorandum to ASCE Structural Safety Committee, MIT, 1967

/8/

Rackwitz, R., Fiessler, B., Structural reliability under combined random load sequences,

Computers and structures, Vol.9, 489-494, 1978

/9/

Madsen, H.O., Krenk S., Lind N.C., Methods of structural safety, Prentice-Hall, 1986

/10/ Madsen, H.O., Omission sensitivity factors, Structural Safety, N.5, 35-45, 1988

/11/ Ditlevsen, O., Narrow reliability bounds for structural systems, Journal of Structural

Mechanics, Vol.7, 453-472, 1979

/12/ Hohenbichler, M., An approximation to the multivariate normal distribution, Euromech

155 (DIALOG), Danish Engineering Academy, Lyngby, Denmark, 1982

/13/ Madsen, H.O., Model Updating in First-Order Reliability Theory with Application to

Fatigue Crack Growth, 2nd International Workshop on Stochastic Methods in Structural

Mechanics, Pavia, Italy, 1985

/14/ CSA S-136, 1981; CSA S-6, 1990, Supplement N.1 to CAN/CSA-S6-88


/15/ Allen, D.E., Criteria for Structural Evaluation and Upgrading of Existing Buildings,

NRCC, Ottawa, Ontario, 1991

/16/ B.P.E.L., Règles Techniques de Conception et de Calcul des Ouvrages et Constructions en Béton Précontraint suivant la Méthode des Etats-Limites, Fascicule N.62, Titre 1, Section II, 1991

/17/ ASCE, Journal of the Structural Division, Vol.104, ST9, 1978

/18/ Abdunur, C., Testing and Modeling to Assess Prestressed Bridges Capacity, International Colloquium on Remaining Structural Capacity, Copenhagen, Denmark, 353-360, 1992

/19/ Cremona, C., Mise à Jour de la Fiabilité de Ponts Précontraints au moyen de mesures de la force de précontrainte, Bulletin de Liaison des LPC, 199, 63-70, 1995

/20/ Hall, W.B., Tsai, M., Load testing, structural reliability and test evaluation, Structural

Safety, 6, 285-302, 1989

/21/ Veneziano, D., Galeota, D., Giammatteo, M.M., Analysis of bridge proof-load data I:

Model and statistical procedures, Structural Safety, 2, 91-104, 1984

/22/ Fujino, Y., Lind, N.C., Proof-load factors and reliability, Journal of the Structural

Division, Vol.103, ST4, 853-870, 1977

/23/ Fu, G., Tang, J., Risk-based proof-load requirements for bridge evaluation, Journal of

Structural Engineering, Vol.121, N.3, 542-556, 1995

/24/ Nowak, A.S., Tharmabala, T., Bridge reliability evaluation using load tests, Journal of

Structural Engineering, Vol.114, N.10, 2268-2279, 1988

/25/ Jungwirth, D., Beyer, E., Grübel, P., Dauerhafte Betonbauwerke, Beton-Verlag, Düsseldorf, 1986

/26/ Grube, H., Kern, E., Quittmann, H.-D.: "Instandhaltung von Betonbauwerken", in:

Betonkalender 1990, Verlag Ernst und Sohn, Berlin, 1990

/27/ Krieger, J., Erprobung und Bewertung zerstörungsfreier Prüfmethoden für Betonbrücken, in: Berichte der Bundesanstalt für Straßenwesen, Heft B 18, Wirtschaftsverlag NW, Bremerhaven, 1998

/28/ Bucher, C., et al., EXTRA II Pilotobjekt Weserwehrbrücken Drakenburg, in: Bautechnik 74, 1997, Heft 5

/29/ Steffens, K., et al., Experimentelle Tragsicherheitsbewertung von Massivbrücken, in: Bautechnik 76, 1999, Heft 1

