Foreword
These lecture notes provide an insight into the concepts, methods and procedures of
structural reliability and risk analysis in the presence of random uncertainties. The
course is addressed to undergraduate students of the Faculty of Engineering in Foreign
Languages instructed in English, as well as to postgraduate students in structural
engineering. The objectives of the course are:
- to provide the necessary background to carry out reliability-based design and risk-based decision making and to apply the concepts and methods of performance-based engineering.
The content of the lectures in Structural Reliability and Risk Analysis is:
- formulation of reliability concepts for structural components; exact solutions; first-order reliability methods; reliability indices; basis for probabilistic design codes.
Table 1.1. Sample of strength values, x (daN/cm2):
380, 340, 390, 350, 360, 400, 330, 370, 400, 360, 340, 350, 370, 360, 350, 350, 360, 350, 360, 390
The statistical relevance of the information contained in Table 1.1 is revealed if one
orders the data in ascending order in Table 1.2 (320, 330 and so on). The number of
occurrences of each value from Table 1.1 is listed in the second column of Table 1.2. It indicates how often the
corresponding value x occurs in the sample and is called the absolute frequency of that value x in
the sample. Dividing it by the size n of the sample one obtains the relative frequency listed in
the third column of Table 1.2.
If for a certain value x one sums all the absolute frequencies corresponding to the sample
values which are smaller than or equal to that x, one obtains the cumulative frequency
corresponding to that x. This yields the values listed in column 4 of Table 1.2. Division by the
size n of the sample yields the cumulative relative frequency in column 5 of Table 1.2.
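The construction of Table 1.2 described above can be sketched in Python, using, for illustration, twenty values from Table 1.1:

```python
from collections import Counter

# sample values (daN/cm2), for illustration
sample = [380, 340, 390, 350, 360, 400, 330, 370, 400, 360,
          340, 350, 370, 360, 350, 350, 360, 350, 360, 390]
n = len(sample)

counts = Counter(sample)              # absolute frequency of each value x
table = []                            # rows of a Table 1.2-style summary
cumulative = 0
for x in sorted(counts):              # ascending order, as in Table 1.2
    cumulative += counts[x]
    table.append((x,                  # value
                  counts[x],          # absolute frequency
                  counts[x] / n,      # relative frequency
                  cumulative,         # cumulative frequency
                  cumulative / n))    # cumulative relative frequency
```

Each row collects the five columns of Table 1.2 for one distinct sample value.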
The graphical representation of the sample values is given by histograms of relative
frequencies and/or of cumulative relative frequencies (Figure 1.1 and Figure 1.2).
If a certain numerical value does not occur in the sample, its frequency is 0. If all the n values
of the sample are numerically equal, then this value has the frequency n and the relative
frequency is 1. Since these are the two extreme possible cases, one has:
0 ≤ relative frequency ≤ 1.
Figure 1.1. Histogram of relative frequencies of the sample values x (daN/cm2)
Figure 1.2. Histogram of cumulative relative frequencies of the sample values x (daN/cm2)
If a sample consists of too many numerically different sample values, the process of grouping
may simplify the tabular and graphical representations, as follows (Kreyszig, 1979).
A sample being given, one chooses an interval I that contains all the sample values. One
subdivides I into subintervals, which are called class intervals. The midpoints of these
subintervals are called class midpoints. The sample values in each such subinterval are said to
form a class. The number of sample values in each such subinterval is called the
corresponding class frequency. Division by the sample size n gives the relative class
frequency. This frequency is called the frequency function of the grouped sample, and the
corresponding cumulative relative class frequency is called the distribution function of the
grouped sample.
If one chooses few classes, the distribution of the grouped sample values becomes simpler but
a lot of information is lost, because the original sample values no longer appear explicitly.
When grouping the sample values the following rules should be obeyed (Kreyszig, 1979):
- the class intervals should be chosen so that the class midpoints correspond to simple numbers;
- if a sample value xj coincides with the common point of two class intervals, one takes it into the class interval that extends from xj to the right.
The mean value of a sample x1, x2, …, xn is denoted by x̄ (or mx) and is defined by the
formula:
x̄ = (1/n)·Σ(j=1..n) xj			(1.1)
The mean value of the sample is simply the sum of all the sample values divided by the size n
of the sample. Obviously, it measures the average size of the sample values.
The variance of a sample x1, x2, …, xn is denoted by s2 and is defined by the formula:
s2 = 1/(n−1)·Σ(j=1..n) (xj − x̄)2			(1.2)
The sample variance is the sum of the squares of the deviations of the sample values from the
mean x̄, divided by n−1. It measures the spread or dispersion of the sample values and is always
non-negative. The square root of the sample variance s2 is called the standard deviation of the
sample and is denoted by s.
The coefficient of variation of a sample x1, x2, ,xn is denoted by COV and is defined as the
ratio of the standard deviation of the sample to the sample mean:
COV = s / x̄			(1.3).
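Relations (1.1)-(1.3) can be checked with the Python standard library; note that `statistics.variance` uses the divisor n−1 of Equation (1.2):

```python
import statistics

# sample values (daN/cm2), for illustration
sample = [380, 340, 390, 350, 360, 400, 330, 370, 400, 360,
          340, 350, 370, 360, 350, 350, 360, 350, 360, 390]

x_bar = statistics.mean(sample)       # Eq. (1.1), sample mean
s2 = statistics.variance(sample)      # Eq. (1.2), divisor n - 1
s = statistics.stdev(sample)          # standard deviation of the sample
cov = s / x_bar                       # Eq. (1.3), coefficient of variation
```

For the twenty values above this gives a mean of 363 daN/cm2 and a coefficient of variation of about 5.5%.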
1.3. Probability
A random experiment or random observation is a process that has the following properties
(Kreyszig, 1979):
- it can, at least in principle, be repeated any number of times under essentially the same conditions;
- the result of each performance depends on chance (that is, on influences which we cannot control) and therefore cannot be uniquely predicted.
The result of a single performance of the experiment is called the outcome of that trial.
The set of all possible outcomes of an experiment is called the sample space of the experiment
and will be denoted by S. Each outcome is called an element or point of S.
Experience shows that most random experiments exhibit statistical regularity or stability of
relative frequencies; that is, in several long sequences of such an experiment the
corresponding relative frequencies of an event are almost equal. Since most random
experiments exhibit statistical regularity, one may assert that for any event E in such an
experiment there is a number P(E) such that the relative frequency of E in a great number of
performances of the experiment is approximately equal to P(E). For this reason one postulates
the existence of a number P(E) which is called probability of an event E in that random
experiment. Note that this number is not an absolute property of E but refers to a certain
sample space S, that is, to a certain random experiment. The probability thus introduced is the
counterpart of the empirical relative frequency (Kreyszig, 1979).
The probability P(E) of an event E satisfies the following axioms:
0 ≤ P(E) ≤ 1			(1.4)
P(S) = 1			(1.5)
P(A ∪ B) = P(A) + P(B) for mutually exclusive events (A ∩ B = ∅)			(1.6)
For the complement Ec of an event E it follows that
P(Ec) = 1 − P(E)			(1.7)
One shall now define and consider continuous random variables. A random variable X and the
corresponding distribution are said to be of continuous type or, briefly, continuous if the
corresponding cumulative distribution function F(x) = P(X ≤ x) can be represented by an
integral in the form
F(x) = ∫(−∞..x) f(u)·du			(1.8)
where the integrand is continuous and non-negative. The integrand f is called the probability
density function of X, PDF or, briefly, the density of the distribution. Differentiating, one
notices that
F′(x) = f(x)			(1.9)
Since F(+∞) = 1, the density integrates to unity:
∫(−∞..+∞) f(u)·du = 1			(1.10)
Moreover, for any a < b:
P(a < X ≤ b) = F(b) − F(a) = ∫(a..b) f(u)·du			(1.11)
Hence this probability equals the area under the curve of the density f(x) between x=a and
x=b, as shown in Figure 1.7.
Figure 1.7. The probability P(a < X < b) as the area under the density curve f(x)
The mean value of a distribution is denoted by μX and is defined by the formula
μX = ∫(−∞..+∞) x·f(x)·dx			(1.12)
where f(x) is the density of continuous random variable X. The mean is also known as the
mathematical expectation of X and is sometimes denoted by E(X).
The variance of a distribution is denoted by σX2 and is defined by the formula
σX2 = ∫(−∞..+∞) (x − μX)2·f(x)·dx			(1.13).
The positive square root of the variance is called the standard deviation and is denoted by σX.
Roughly speaking, the variance is a measure of the spread or dispersion of the values which
the corresponding random variable X can assume.
The coefficient of variation of a distribution is denoted by VX and is defined by the formula
VX = σX / μX			(1.14).
If a random variable X has mean μX and variance σX2, then the corresponding variable
Z = (X − μX)/σX has the mean 0 and the variance 1. Z is called the standardized variable
corresponding to X.
The central moment of order i of a distribution is defined as:
μi = ∫(−∞..+∞) (x − μX)i·f(x)·dx			(1.15).
From relations (1.13) and (1.15) it follows that the variance is the central moment of second
order.
The skewness coefficient is defined with the following relation:
β1 = μ3 / μ2^(3/2) = μ3 / σX3			(1.16).
For a symmetric distribution the skewness coefficient vanishes:
β1 = 0			(1.17).
For a symmetric distribution the mean, the mode and the median are coincident and the
skewness coefficient is equal to zero.
An asymmetric distribution does not comply with relation (1.17). Asymmetric distributions
may have either positive asymmetry (skewness coefficient larger than zero) or negative
asymmetry (skewness coefficient less than zero). Distributions with positive asymmetry have
the peak of the distribution shifted to the left (mode smaller than mean); distributions with
negative asymmetry have the peak of the distribution shifted to the right (mode larger than
mean).
A negative skewness coefficient indicates that the tail on the left side of the probability
density function is longer than the right side and the bulk of the values (including the median)
lies to the right of the mean. A positive skew indicates that the tail on the right side of the
probability density function is longer than the left side and the bulk of the values lie to the left
of the mean. A zero value indicates that the values are relatively evenly distributed on both
sides of the mean, typically implying a symmetric distribution.
Figure 1.4. Asymmetric distributions with positive asymmetry (left) and negative asymmetry
(right) (www.mathwave.com)
2. DISTRIBUTIONS OF PROBABILITY
2.1. Normal distribution
The continuous distribution having the probability density function, PDF
f(x) = 1/(σX·√(2π)) · e^(−(1/2)·((x − μX)/σX)2)			(2.1)
is called the normal distribution or Gauss distribution. A random variable having this
distribution is said to be normal or normally distributed. This distribution is very important,
because many random variables of practical interest are normal or approximately normal or
can be transformed into normal random variables. Furthermore, the normal distribution is a
useful approximation of more complicated distributions.
In Equation 2.1, μX is the mean and σX is the standard deviation of the distribution. The curve of
f(x) is called the bell-shaped curve. It is symmetric with respect to μX. Figure 2.1 shows f(x) for
the same mean μX and various values of σX (and various values of the coefficient of variation V).
Figure 2.1. PDF of the normal distribution for the same mean and various values of the coefficient of variation V
The corresponding cumulative distribution function, CDF is:
F(x) = 1/(σX·√(2π)) · ∫(−∞..x) e^(−(1/2)·((v − μX)/σX)2)·dv			(2.2)
Figure 2.2 shows F(x) for the same mean μX and various values of σX (and various values of the
coefficient of variation V).
From (2.2) one obtains
P(a < X ≤ b) = F(b) − F(a) = 1/(σX·√(2π)) · ∫(a..b) e^(−(1/2)·((v − μX)/σX)2)·dv			(2.3).
The integral in (2.2) cannot be evaluated by elementary methods, but can be represented in
terms of the integral
Φ(z) = 1/√(2π) · ∫(−∞..z) e^(−u2/2)·du			(2.4)
which is the distribution function of the standardized variable with mean 0 and variance 1 and
has been tabulated. In fact, if one sets (v − μX)/σX = u, then du/dv = 1/σX, and one has to integrate
from −∞ to z = (x − μX)/σX.
The density function and the distribution function of the normal distribution with mean 0 and
variance 1 are presented in Figure 2.3.
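Values of Φ in Equation (2.4) need not be read from tables: Python's standard library exposes the normal CDF, and the substitution z = (x − μX)/σX handles any normal variable. The mean and standard deviation below are assumed for illustration:

```python
from statistics import NormalDist

phi = NormalDist(0.0, 1.0).cdf        # standard normal CDF, Eq. (2.4)

mu, sigma = 300.0, 30.0               # assumed mean and standard deviation
x = 360.0
p = phi((x - mu) / sigma)             # P(X <= x) via z = (x - mu) / sigma
```

Here z = (360 − 300)/30 = 2, so p equals Φ(2), about 0.977.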
Figure 2.2. CDF of the normal distribution for the same mean and various values of the coefficient of variation V
F(x) = Φ((x − μX)/σX) = 1/√(2π) · ∫(−∞..(x−μX)/σX) e^(−u2/2)·du			(2.5).
Figure 2.3. PDF and CDF of the normal distribution with mean 0 and variance 1
The fractile xp is defined as the value of the random variable X with non-exceedance probability p:
P(X ≤ xp) = p			(2.6).
F(xp) = Φ((xp − μX)/σX) = p			(2.7).
xp = μX + kp·σX			(2.8).
The meaning of kp becomes clear if one refers to the reduced standard variable z = (x − μX)/σX.
Thus, x = μX + z·σX and kp represents the value of the reduced standard variable for which Φ(z)
= p.
The most common values of kp are given in Table 2.1.
Table 2.1. Values of kp for different non-exceedance probabilities p
p      0.01     0.02     0.05     0.95    0.98    0.99
kp    -2.326   -2.054   -1.645   1.645   2.054   2.326
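The values of Table 2.1 can be reproduced with `statistics.NormalDist.inv_cdf`, which inverts Φ; Equation (2.8) then gives the fractile. The mean and standard deviation in the last line are illustrative assumptions:

```python
from statistics import NormalDist

inv_phi = NormalDist().inv_cdf        # kp such that phi(kp) = p

table_2_1 = {0.01: -2.326, 0.02: -2.054, 0.05: -1.645,
             0.95: 1.645, 0.98: 2.054, 0.99: 2.326}
for p, kp in table_2_1.items():
    assert abs(inv_phi(p) - kp) < 5e-4   # matches Table 2.1 to 3 decimals

mu_x, sigma_x = 363.0, 20.0           # assumed distribution parameters
x_95 = mu_x + inv_phi(0.95) * sigma_x  # Eq. (2.8): fractile for p = 0.95
```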
2.2. Lognormal distribution
A random variable X follows a lognormal distribution if lnX is normally distributed; mlnX and
σlnX denote the mean and the standard deviation of lnX. The CDF of X is:
F(ln x) = 1/(σlnX·√(2π)) · ∫(−∞..ln x) e^(−(1/2)·((ln v − mlnX)/σlnX)2)·d(ln v) = 1/(σlnX·√(2π)) · ∫(0..x) e^(−(1/2)·((ln v − mlnX)/σlnX)2)·(1/v)·dv			(2.9).
Since:
F(ln x) = ∫(0..x) f(v)·dv			(2.10)
the probability density function PDF results from (2.9) and (2.10):
f(x) = 1/(σlnX·√(2π)) · (1/x) · e^(−(1/2)·((ln x − mlnX)/σlnX)2)			(2.11).
The lognormal distribution is asymmetric with positive asymmetry, i.e. the peak of the
distribution is shifted to the left. The skewness coefficient for the lognormal distribution is:
β1 = 3·VX + VX3			(2.12)
where VX is the coefficient of variation of the random variable X. The higher the variability,
the larger the shift of the lognormal distribution.
The mean and the standard deviation of the random variable lnX are related to the mean and
the standard deviation of the random variable X as follows:
mlnX = ln( mX / √(1 + VX2) )			(2.13)
σlnX = √( ln(1 + VX2) )			(2.14).
In the case of the lognormal distribution the following relation between the mean and the
median holds true:
mln X = ln x0.5
(2.15).
Combining (2.13) and (2.15) it follows that the median of the lognormal distribution is
linked to the mean value by the following relation:
x0.5 = mX / √(1 + VX2)			(2.16)
For small values of the coefficient of variation VX the following approximations hold:
mlnX ≅ ln mX			(2.17)
σlnX ≅ VX			(2.18)
The PDF and the CDF of the random variable X are presented in Figure 2.4 for different
coefficients of variation.
Figure 2.4. Probability density function, f(x) and cumulative distribution function, F(x)
of the log-normal distribution for various values of V
If one uses the reduced variable (ln v − mlnX)/σlnX = u, then du/dv = 1/(v·σlnX), and one has to integrate
from −∞ to z = (ln x − mlnX)/σlnX. From (2.4) one obtains:
F(x) = Φ(z) = 1/√(2π) · ∫(−∞..(ln x − mlnX)/σlnX) e^(−u2/2)·du			(2.19)
The fractile xp that is defined as the value of the random variable X with p non-exceedance
probability (P(X ≤ xp) = p) is computed as follows, given that lnX is normally distributed:
ln xp = mlnX + kp·σlnX			(2.20)
xp = e^(mlnX + kp·σlnX)			(2.21)
where kp represents the value of the reduced standard variable for which Φ(z) = p.
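Combining relations (2.13), (2.14), (2.20) and (2.21) gives a direct route from the mean mX and the coefficient of variation VX to a lognormal fractile; a sketch, with assumed numeric inputs:

```python
import math
from statistics import NormalDist

def lognormal_fractile(m_x, v_x, p):
    """Fractile xp with non-exceedance probability p of a lognormal X."""
    sigma_ln = math.sqrt(math.log(1 + v_x ** 2))      # Eq. (2.14)
    m_ln = math.log(m_x / math.sqrt(1 + v_x ** 2))    # Eq. (2.13)
    k_p = NormalDist().inv_cdf(p)                     # phi(kp) = p
    return math.exp(m_ln + k_p * sigma_ln)            # Eqs. (2.20)-(2.21)

median = lognormal_fractile(300.0, 0.2, 0.5)  # equals mX / sqrt(1 + VX^2)
```

For p = 0.5 the result reproduces the median of relation (2.16), since k0.5 = 0.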
Figure 2.5. Distribution of all values, of minima and of maxima of random variable X
2.3. Gumbel distribution for maxima
The CDF of the Gumbel distribution for maxima is:
F(x) = e^(−e^(−α·(x − u)))			(2.22)
where u = mx − 0.45·σx is the mode of the distribution (Figure 2.7) and α = 1.282/σx is the
dispersion coefficient.
The PDF follows by differentiation:
f(x) = dF(x)/dx = α·e^(−α·(x − u))·e^(−e^(−α·(x − u)))			(2.23)
The PDF of the Gumbel distribution for maxima for the random variable X with the same mean
mx and different coefficients of variation Vx is represented in Figure 2.6.
Figure 2.5. CDF of the Gumbel distribution for maxima for the random variable X
with the same mean mx and different coefficients of variation Vx
Figure 2.6. PDF of the Gumbel distribution for maxima for the random variable X
with the same mean mx and different coefficients of variation Vx
One can notice in Figure 2.6 that the higher the variability of the random variable, the larger
the shift to the left of the peak of the PDF.
Figure 2.7. PDF of the Gumbel distribution for maxima; the mode of the distribution is u = mx − 0.45·σx
Given that X follows the Gumbel distribution for maxima, the fractile xp that is defined as the value of
the random variable X with p non-exceedance probability (P(X ≤ xp) = p) is computed as
follows:
F(xp) = P(X ≤ xp) = p = e^(−e^(−α·(xp − u)))			(2.24).
xp = u − (1/α)·ln(−ln p) = mx − 0.45·σx − (σx/1.282)·ln(−ln p) = mx + kpG·σx			(2.25)
where:
kpG = −0.45 − ln(−ln p)/1.282			(2.26).
The values of kpG for different non-exceedance probabilities are given in Table 2.2.
Table 2.2. Values of kpG for different non-exceedance probabilities p
p      0.50     0.90    0.95    0.98
kpG   -0.164   1.305   1.866   2.593
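Relations (2.25)-(2.26) are easy to check numerically; the functions below reproduce Table 2.2 (small differences in the last decimal come from rounding in the table):

```python
import math

def k_p_gumbel(p):
    # Eq. (2.26): kpG = -0.45 - ln(-ln p) / 1.282
    return -0.45 - math.log(-math.log(p)) / 1.282

def gumbel_fractile(m_x, sigma_x, p):
    # Eq. (2.25): xp = mx + kpG * sigma_x
    return m_x + k_p_gumbel(p) * sigma_x
```

For example, `gumbel_fractile(300.0, 30.0, 0.98)` gives the value exceeded with 2% annual probability for an assumed mean of 300 and standard deviation of 30.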
The CDF of the maxima in N years is related to the CDF of the annual maxima by:
F(x)N years = [F(x)1 year]^N			(2.27)
where:
F(x)N years is the CDF of random variable X in N years
F(x)1 year is the CDF of random variable X in 1 year.
The Gumbel distribution for maxima has a very important property, the reproducibility of the
Gumbel distribution: if the annual maxima (in 1 year) follow a Gumbel distribution for
maxima, then the maxima in N years will also follow a Gumbel distribution for maxima:
F(x)N = (F(x)1)^N = (e^(−e^(−α1·(x − u1))))^N = e^(−N·e^(−α1·(x − u1))) = e^(−e^(−α1·(x − u1) + ln N)) = e^(−e^(−α1·(x − (u1 + ln N/α1)))) = e^(−e^(−αN·(x − uN)))			(2.28)
where:
u1 is the mode of the distribution in 1 year
uN = u1 + ln N/α1 is the mode of the distribution in N years
αN = α1, i.e. the dispersion coefficient remains unchanged.
Figure 2.8. PDF of Gumbel distribution for maxima in 1 year and in N years
Figure 2.9. CDF of Gumbel distribution for maxima in 1 year and in N years
The mean recurrence interval, MRI of a value x is defined as:
MRI(X > x) = 1 / P1 year(X > x) = 1/(1 − p) = 1/(1 − FX(x))			(2.29)
where:
p is the annual probability of the event (X ≤ x)
FX(x) is the cumulative distribution function of X.
Thus the mean recurrence interval of a value x is equal to the reciprocal of the annual
probability of exceedance of the value x. The mean recurrence interval or return period has an
inverse relationship with the probability that the event will be exceeded in any one year. For
example, a 10-year flood has a 0.1 or 10% chance of being exceeded in any one year and a
50-year flood has a 0.02 (2%) chance of being exceeded in any one year. It is commonly
assumed that a 10-year earthquake will occur, on average, once every 10 years and that a 100-year earthquake is so large that we expect it to occur only once every 100 years. While this may be
statistically true over thousands of years, it is incorrect to think of the return period in this
way. The term return period is actually misleading. It does not necessarily mean that the
design earthquake of a 10 year return period will return every 10 years. It could, in fact, never
occur, or occur twice. This is why the term return period is gradually replaced by the term
recurrence interval. Researchers proposed to use the term return period in relation with the
effects and to use the term recurrence interval in relation with the causes.
The mean recurrence interval is often related with the exceedance probability in N years. The
relation among MRI, N and the exceedance probability in N years, Pexc,N is:
Pexc,N ≅ 1 − e^(−N/MRI)			(2.30)
Usually the number of years, N is considered equal to the lifetime of ordinary buildings, i.e.
50 years. Table 2.3 shows the results of relation (2.30) for some particular cases considering
N=50 years.
Table 2.3. Correspondence amongst MRI, Pexc,1 year and Pexc,50 years
Mean recurrence interval, years    Probability of exceedance    Probability of exceedance
MRI                                in 1 year, Pexc,1 year       in 50 years, Pexc,50 years
10                                 0.10                         0.99
30                                 0.03                         0.81
50                                 0.02                         0.63
100                                0.01                         0.39
225                                0.004                        0.20
475                                0.002                        0.10
975                                0.001                        0.05
2475                               0.0004                       0.02
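Relations (2.29) and (2.30) connect the columns of Table 2.3; a quick numerical check:

```python
import math

def p_exceedance(mri, n_years):
    # Eq. (2.30): probability of exceedance in n_years for a given MRI
    return 1 - math.exp(-n_years / mri)

def p_annual(mri):
    # Eq. (2.29): annual probability of exceedance, reciprocal of MRI
    return 1 / mri
```

For MRI = 475 years and N = 50 years this recovers the 10% value quoted for design ground motions.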
The modern earthquake resistant design codes consider the definition of the seismic hazard
level based on the probability of exceedance in 50 years. The seismic hazard due to ground
shaking is defined as horizontal peak ground acceleration, elastic acceleration response
spectra or acceleration time-histories. The level of seismic hazard is expressed by the mean
recurrence interval (mean return period) of the design horizontal peak ground acceleration or,
alternatively by the probability of exceedance of the design horizontal peak ground
acceleration in 50 years. Four levels of seismic hazard are considered in FEMA 356
Prestandard and Commentary for the Seismic Rehabilitation of Buildings, as given in Table
2.4. The correspondence between the mean recurrence interval and the probability of
exceedance in 50 years, based on Poisson assumption, is also given in Table 2.4.
Table 2.4. Correspondence between mean recurrence interval and probability of exceedance
in 50 years of design horizontal peak ground acceleration as in FEMA 356
Seismic Hazard Level    Probability of exceedance    Mean recurrence interval, years
SHL1                    50% in 50 years              72
SHL2                    20% in 50 years              225
SHL3                    10% in 50 years              475
SHL4                    2% in 50 years               2475
As an example, for a simply supported beam of span l subjected to a uniformly distributed load q,
the load effect is the midspan bending moment S = q·l2/8 and the resistance is R = σy·W, where σy
is the yield stress and W is the section modulus.
For the random variable X one knows: the probability density function, PDF, the cumulative
distribution function, CDF, the mean, mX and the standard deviation, σX. The unknowns are the
PDF, CDF, mY and σY for the random variable Y = a·X + b.
If one applies the equal probability formula, Figure 2.12:
fY(y)·dy = fX(x)·dx			(2.31)
fY(y) = fX(x)·|dx/dy| = (1/a)·fX(x) = (1/a)·fX((y − b)/a)			(2.32)
mY = ∫ y·fY(y)·dy = ∫ (a·x + b)·fX(x)·dx = a·∫ x·fX(x)·dx + b·∫ fX(x)·dx = a·mX + b
σY2 = ∫ (y − mY)2·fY(y)·dy = ∫ (a·x + b − a·mX − b)2·fX(x)·dx = a2·∫ (x − mX)2·fX(x)·dx = a2·σX2
σY = a·σX			(2.33)
mY = a·mX + b
VY = σY/mY = a·σX/(a·mX + b)
Figure 2.12. Equal probability transformation of distributions: fY(y)·dy = fX(x)·dx
Observations:
1. If Y = y(X1, X2, …, Xn) is a function of n independent random variables Xi, a first-order
Taylor expansion of y(·) around the mean values gives:
mY ≅ y(mX1, mX2, …, mXn)			(2.34)
σY2 ≅ Σ(i=1..n) (∂y/∂xi)2·σXi2 = (∂y/∂x1)2·σX12 + (∂y/∂x2)2·σX22 + … + (∂y/∂xn)2·σXn2			(2.35)
where the partial derivatives are evaluated at the mean values.
Relations 2.34 and 2.35 are the basis for the so-called First Order Second Moment Models,
FOSM.
A few examples of FOSM results are provided in the following:
Y = a·X + b:
mY = a·mX + b
σY2 = a2·σX2
σY = a·σX
Y = X1·X2:
mY = mX1·mX2
σY2 = mX22·σX12 + mX12·σX22
VY = σY/mY = √(mX22·σX12 + mX12·σX22)/(mX1·mX2) = √(σX12/mX12 + σX22/mX22) = √(VX12 + VX22)
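The FOSM result for the product Y = X1·X2 can be checked against a Monte Carlo simulation; the means and standard deviations below are assumed for illustration (for a product of independent variables the FOSM mean is in fact exact):

```python
import math
import random

m1, s1 = 200.0, 20.0          # assumed mean and standard deviation of X1
m2, s2 = 50.0, 10.0           # assumed mean and standard deviation of X2

m_y = m1 * m2                 # FOSM mean of Y = X1 * X2
v_y = math.sqrt((s1 / m1) ** 2 + (s2 / m2) ** 2)   # VY = sqrt(V1^2 + V2^2)

random.seed(1)                # reproducible sampling
n = 200_000
products = [random.gauss(m1, s1) * random.gauss(m2, s2) for _ in range(n)]
mc_mean = sum(products) / n   # Monte Carlo estimate of mY
```

With these inputs VY = sqrt(0.1^2 + 0.2^2) ≅ 0.224, and the simulated mean agrees with mY = 10000 to well under one percent.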
3. STRUCTURAL RELIABILITY ANALYSIS
The probability of failure Pf of a structural component with resistance R subjected to the load
effect S may be expressed as:
Pf = P(R ≤ S)			(3.1a)
Pf = P(R − S ≤ 0)			(3.1b)
Pf = P(R/S ≤ 1)			(3.1c)
Pf = P(ln R − ln S ≤ 0)			(3.1d)
or, in general
Pf = P(G(R, S) ≤ 0)			(3.1e)
where G(·) is termed the limit state function and the probability of failure is identical with the
probability of limit state violation.
Quite general density functions fR and fS for R and S respectively are shown in Figure 3.1
together with the joint (bivariate) density function fRS(r,s). For any infinitesimal element
(Δr·Δs) the latter represents the probability that R takes on a value between r and r + Δr and S a
value between s and s + Δs as Δr and Δs each approach zero. In Figure 3.1, the Equations (3.1)
are represented by the hatched failure domain D, so that the probability of failure becomes:
Pf = P(R − S ≤ 0) = ∫∫(D) fRS(r,s)·dr·ds			(3.2).
When R and S are independent, fRS(r,s) = fR(r)·fS(s), and (3.2) becomes:
Pf = ∫∫(s ≥ r) fR(r)·fS(s)·dr·ds = ∫(−∞..+∞) FR(x)·fS(x)·dx			(3.3)
Relation (3.3) is also known as a convolution integral, with a meaning easily explained by
reference to Figure 3.2. FR(x) is the probability that R ≤ x, i.e. the probability that the actual
resistance R of the member is less than some value x. Let this represent failure. The term fS(x)
represents the probability that the load effect S acting in the member has a value between x
and x + Δx as Δx → 0. By considering all possible values of x, i.e. by taking the integral over all
x, the total probability of failure is obtained. This is also seen in Figure 3.3, where the density
functions fR(r) and fS(s) have been drawn along the same axis.
Figure 3.1. Joint density function fRS(r,s), marginal density functions fR(r) and fS(s)
and failure domain D, (Melchers, 1999)
Figure 3.2. FR(x) and fS(x) representation of the convolution integral (3.3)
Figure 3.3. Densities fR(x) and fS(x) drawn along the same axis; the amount of overlap of fR(·) and fS(·) is a rough indicator of Pf
Pf = ∫(−∞..+∞) [1 − FS(x)]·fR(x)·dx			(3.4)
which is simply the sum of the failure probabilities over all cases of resistance for which the
load effect exceeds the resistance.
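For normal R and S the convolution integral (3.4) can be evaluated numerically and compared with the closed-form result (3.7); the distribution parameters below are assumed:

```python
import math
from statistics import NormalDist

m_r, s_r = 60.0, 6.0          # assumed resistance parameters
m_s, s_s = 40.0, 8.0          # assumed load effect parameters
R, S = NormalDist(m_r, s_r), NormalDist(m_s, s_s)

# Riemann sum for Eq. (3.4): Pf = integral of [1 - FS(x)] * fR(x) dx
dx = 0.01
pf = sum((1 - S.cdf(i * dx)) * R.pdf(i * dx) * dx for i in range(12001))

beta = (m_r - m_s) / math.hypot(s_r, s_s)   # reliability index, cf. Eq. (3.7)
pf_exact = NormalDist().cdf(-beta)          # Pf = phi(-beta)
```

Here beta = 20/10 = 2 and both routes give Pf ≅ 0.0228, illustrating that the convolution and the safety-margin formulation agree.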
If R and S are normal random variables, the safety margin Z = R − S is also a normal random
variable, with:
μZ = μR − μS			(3.5a)
σZ2 = σR2 + σS2			(3.5b)
Pf = P(Z ≤ 0) = Φ((0 − μZ)/σZ) = Φ(−μZ/σZ)			(3.6)
where Φ(·) is the standard normal distribution function (for the standard normal variate with
zero mean and unit variance). The random variable Z = R − S is shown in Figure 3.4, in which
the failure region Z ≤ 0 is shown shaded. Using (3.5) and (3.6) it follows that (Cornell, 1969)
Pf = Φ( −(μR − μS)/(σR2 + σS2)^(1/2) ) = Φ(−β)			(3.7)
where β = (μR − μS)/(σR2 + σS2)^(1/2) is the reliability index.
Figure 3.4. Distribution of the safety margin Z = R − S; the failure region Z < 0 is shaded and its probability is Pf
If R and S are lognormal random variables, Z = ln(R/S) then has a normal distribution with:
μZ = μlnR − μlnS ≅ ln(mR/mS)			(3.8a)
σZ = (σlnR2 + σlnS2)^(1/2) ≅ (VR2 + VS2)^(1/2)			(3.8b)
Pf = P(ln(R/S) ≤ 0) = P(Z ≤ 0) = Φ(−μZ/σZ)			(3.9)
where Φ(·) is the standard normal distribution function (zero mean and unit variance). The
random variable Z = ln(R/S) is shown in Figure 3.5, in which the failure region Z ≤ 0 is shown
shaded.
Figure 3.5. Distribution of Z = ln(R/S); the failure region Z < 0 is shaded and its probability is Pf
Using (3.8) and (3.9) it follows that:
Pf = Φ( −ln(mR/mS)/(VR2 + VS2)^(1/2) ) = Φ(−β)			(3.10)
where the reliability index is:
β = ln(mR/mS) / (VR2 + VS2)^(1/2)			(3.11)
For 1/3 ≤ VR/VS ≤ 3 one may use the linear approximation (VR2 + VS2)^(1/2) ≅ 0.75·(VR + VS),
so that relation (3.11) yields:
mR/mS ≅ e^(0.75·β·(VR + VS)) = e^(0.75·β·VR)·e^(0.75·β·VS)			(3.12).
where e^(0.75·β·VR) and e^(0.75·β·VS) are called safety coefficients, SC with respect to the mean. But one
needs the SC with respect to the characteristic values of the loads and resistances, the so-called partial safety coefficients, PSC. To this aim, one defines the design condition:
Rdesign = Sdesign			(3.13)
which, expressed with the characteristic values R0.05 and S0.98, becomes:
S0.98·γ0.98 = R0.05·γ0.05			(3.14)
where γ0.98 and γ0.05 are called partial safety coefficients, PSC.
Assuming that S and R are log-normally distributed, one has:
R0.05 = e^(mlnR − 1.645·σlnR) ≅ e^(ln mR)·e^(−1.645·VR) = mR·e^(−1.645·VR)			(3.15)
S0.98 = e^(mlnS + 2.054·σlnS) ≅ e^(ln mS)·e^(2.054·VS) = mS·e^(2.054·VS)			(3.16)
R0.05/S0.98 = (mR·e^(−1.645·VR))/(mS·e^(2.054·VS)) = e^(0.75·β·VR)·e^(0.75·β·VS)·e^(−1.645·VR)/e^(2.054·VS) = e^((0.75·β − 1.645)·VR)·e^((0.75·β − 2.054)·VS)			(3.17)
Rearranging, one obtains the form of relation (3.14):
R0.05·e^((1.645 − 0.75·β)·VR) = S0.98·e^((0.75·β − 2.054)·VS)			(3.18)
γ0.05 = e^((1.645 − 0.75·β)·VR)			(3.19)
γ0.98 = e^((0.75·β − 2.054)·VS)			(3.20)
The partial safety coefficients γ0.05 and γ0.98 as defined by Equations 3.19 and 3.20 depend on
the reliability index β and on the coefficients of variation of resistances and loads, respectively.
If the reliability index β is increased, the partial safety coefficient for loads γ0.98 increases
while the partial safety coefficient for resistance γ0.05 decreases. The theory of partial safety
coefficients based on the lognormal model is incorporated in the Romanian Code CR0-2005
named Cod de proiectare. Bazele proiectarii structurilor in constructii (Design Code. Basis
of Structural Design).
Figure 3.6. Variation of partial safety coefficients with the coefficient of variation of load
effect (left) and of resistance (right)
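The partial safety coefficients can be tabulated directly; the sketch below assumes the linearization √(VR²+VS²) ≅ 0.75·(VR+VS) used in deriving relations (3.19)-(3.20), and checks that the design relation (3.14) reproduces the central safety condition (3.12):

```python
import math

def gamma_r(beta, v_r):
    # Eq. (3.19): partial safety coefficient on the resistance side
    return math.exp((1.645 - 0.75 * beta) * v_r)

def gamma_s(beta, v_s):
    # Eq. (3.20): partial safety coefficient on the load side
    return math.exp((0.75 * beta - 2.054) * v_s)

# consistency check: with mR/mS = exp(0.75*beta*(VR+VS)) (Eq. 3.12),
# R0.05 * gamma_r must equal S0.98 * gamma_s (Eq. 3.14)
beta, v_r, v_s = 3.0, 0.10, 0.20      # assumed reliability index and COVs
m_s = 100.0                           # assumed mean load effect
m_r = m_s * math.exp(0.75 * beta * (v_r + v_s))
r_005 = m_r * math.exp(-1.645 * v_r)  # characteristic resistance, Eq. (3.15)
s_098 = m_s * math.exp(2.054 * v_s)   # characteristic load effect, Eq. (3.16)
```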
4. SEISMIC HAZARD ANALYSIS
Figure 4.1. Steps of deterministic seismic hazard analysis: identification of the seismic sources, selection of the controlling earthquakes of magnitude M at source-to-site distances R, and evaluation of the ground motion parameter Y at the site
Figure 4.2. Steps of probabilistic seismic hazard analysis: characterization of the seismic sources (rate and magnitude recurrence), source-to-site distances R, ground motion prediction, and computation of the probability of exceedance P(Y > y*)
The recurrence of earthquake magnitudes within a source is commonly described by the
Gutenberg-Richter relation, which gives the mean annual rate λM of earthquakes with
magnitude larger than or equal to M:
lg λM = a − b·M, i.e. λM = 10^(a − b·M)			(4.1)
ln λM / ln 10 = a − b·M			(4.2)
ln λM = a·ln 10 − b·ln 10·M = α − β·M			(4.3)
λM = e^(α − β·M)			(4.4)
where α = a·ln 10 and β = b·ln 10.
(4.4)
fM(M) =
M min
=1-
e M
M min
= 1 - e ( M M min )
(4.5)
dFM ( M )
= e ( M M min) .
dM
(4.6)
λMmin is the mean annual rate of earthquakes of magnitude M larger than or equal to Mmin.
If both a lower threshold magnitude Mmin and a higher threshold magnitude Mmax are taken
into account, the probabilistic distribution of magnitudes can be obtained as follows (McGuire
and Arabasz, 1990).
The cumulative distribution function must have the unity value for M = Mmax. This yields:
FM(M) = P[Mag. ≤ M | Mmin ≤ M ≤ Mmax] = FM(M)/FM(Mmax) = [1 − e^(−β·(M − Mmin))]/[1 − e^(−β·(Mmax − Mmin))]			(4.7)
fM(M) = dFM(M)/dM = β·e^(−β·(M − Mmin))/[1 − e^(−β·(Mmax − Mmin))]			(4.8)
The mean annual rate of earthquakes with magnitude larger than or equal to M then becomes:
λM = λMmin·[1 − FM(M)]			(4.9)
where λMmin = e^(α − β·Mmin) is the mean annual rate of earthquakes of magnitude M larger or
equal than Mmin. Finally one gets (McGuire and Arabasz, 1990):
λM = λMmin·e^(−β·(M − Mmin))·[1 − e^(−β·(Mmax − M))]/[1 − e^(−β·(Mmax − Mmin))]			(4.10)
λM = e^(α − β·M)·[1 − e^(−β·(Mmax − M))]/[1 − e^(−β·(Mmax − Mmin))]			(4.11)
The temporal occurrence of earthquakes is commonly described by a Poisson model, which
assumes that:
- the number of occurrences in one time interval is independent of the number of occurrences in any other non-overlapping time interval;
- the probability of occurrence during a very short time interval is proportional to the length of the time interval;
- the probability of more than one occurrence during a very short time interval is negligible.
If N is the number of occurrences of a particular event during a given time interval, the
probability of having n occurrences in that time interval is:
P[N = n] = μ^n·e^(−μ)/n!			(4.12)
where μ is the average number of occurrences of the event in that time interval.
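Equation (4.12) with μ = λ·t also recovers relation (2.30): the probability of at least one occurrence in t years is 1 − P[N = 0]. A small sketch with an assumed annual rate:

```python
import math

def poisson_pmf(n, mu):
    # Eq. (4.12): P[N = n] = mu^n * e^(-mu) / n!
    return mu ** n * math.exp(-mu) / math.factorial(n)

lam = 0.01                    # assumed annual rate of the event
t = 50                        # exposure time, years
mu = lam * t
p_at_least_one = 1 - poisson_pmf(0, mu)   # = 1 - e^(-lam*t), cf. Eq. (2.30)
```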
The ground motion parameter of interest Y (e.g., peak ground acceleration) at the site is
predicted as a function of the source characteristics:
Y = g(X)			(4.13)
where X is a vector of random variables that influence Y (usually magnitude, M and source-to-site distance, R). Assuming M and R independent, for a given earthquake recurrence, the
probability of exceeding a particular value, y*, is calculated using the total probability
theorem (Cornell, 1968, Kramer, 1996):
P(Y > y*) = ∫∫ P(Y > y* | m, r)·fM(m)·fR(r)·dm·dr			(4.14)
where:
P(Y > y*|m,r) is the probability of exceedance of y* given the occurrence of an earthquake of
magnitude m at source-to-site distance r.
The mean annual rate of exceedance of a given peak ground acceleration value, PGA* is:
λ(PGA > PGA*) = λMmin·∫∫ P(PGA > PGA* | m, r)·fM(m)·fR(r)·dm·dr			(4.15)
where:
λ(PGA > PGA*) is the mean annual rate of exceedance of PGA*;
λMmin is the mean annual rate of earthquakes of magnitude M larger than or equal to Mmin;
P(PGA > PGA*|m,r) is the probability of exceedance of PGA* given the occurrence of an
earthquake of magnitude m at source-to-site distance r.
The mean annual rate of exceedance of PGA (the hazard curve) for Bucharest site and
Vrancea seismic source is represented in Figure 4.5.
Figure 4.5. Hazard curve for Bucharest from Vrancea seismic source
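A minimal numerical sketch of Equation (4.15) for a single source at a fixed distance, with an assumed bounded Gutenberg-Richter recurrence and a hypothetical lognormal attenuation model (all parameter values are invented for illustration, not taken from the Vrancea hazard study):

```python
import math
from statistics import NormalDist

beta_gr = 2.0                 # assumed beta = b * ln(10)
m_min, m_max = 5.0, 7.5       # assumed magnitude bounds
lam_min = 0.2                 # assumed annual rate of M >= m_min
r = 100.0                     # single source-to-site distance, km

def f_m(m):
    # Eq. (4.8): bounded Gutenberg-Richter magnitude density
    c = 1 - math.exp(-beta_gr * (m_max - m_min))
    return beta_gr * math.exp(-beta_gr * (m - m_min)) / c

def p_exceed(pga_star, m):
    # hypothetical lognormal attenuation: ln PGA ~ N(ln_median, 0.6)
    ln_median = -1.0 + 1.2 * m - 1.1 * math.log(r)
    return 1 - NormalDist(ln_median, 0.6).cdf(math.log(pga_star))

def hazard(pga_star, dm=0.01):
    # discretized Eq. (4.15) with a single, fixed distance r
    steps = int((m_max - m_min) / dm)
    ms = [m_min + (i + 0.5) * dm for i in range(steps)]
    return lam_min * sum(p_exceed(pga_star, m) * f_m(m) * dm for m in ms)
```

Evaluating `hazard` over a grid of PGA* values traces a hazard curve of the type shown in Figure 4.5: the mean annual rate of exceedance decreases as the target ground motion level increases.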
5.1. Background
Physical phenomena of common interest in engineering are usually measured in terms of
amplitude versus time function, referred to as a time history record. There are certain types of
physical phenomena where specific time history records of future measurements can be
predicted with reasonable accuracy based on one's knowledge of physics and/or prior
observations of experimental results, for example, the force generated by an unbalanced
rotating wheel, the position of a satellite in orbit about the earth, the response of a structure to
a step load. Such phenomena are referred to as deterministic, and methods for analyzing their
time history records are well known. Many physical phenomena of engineering interest,
however, are not deterministic, that is, each experiment produces a unique time history record
which is not likely to be repeated and cannot be accurately predicted in detail. Such processes
and the physical phenomena they represent are called random or stochastic. In this case a
single record is not as meaningful as a statistical description of the totality of possible
records.
As just mentioned, a physical phenomenon and the data representing it are considered random
when a future time history record from an experiment cannot be predicted with reasonable
experimental error. In such cases, the resulting time history from a given experiment
represents only one physical realization of what might have occurred. To fully understand the
data, one should conceptually think in terms of all the time history records that could have
occurred, as illustrated in Figure 5.1.
Figure 5.1. Ensemble of time history records x1(t), …, xi(t), …, xn(t) defining a random process
This collection of all time history records xi(t), i = 1, 2, 3, …, which might have been produced
by the experiment, is called the ensemble that defines a random process {x(t)} describing the
phenomenon. Any single individual time history belonging to the ensemble is called a sample
function. Each sample function is sketched as a function of time. The time interval involved is
the same for each sample. There is a continuous infinity of different possible sample
functions, of which only a few are shown. All of these represent possible outcomes of
experiments which the experimenter considers to be performed under identical conditions.
Because of variables beyond his control the samples are actually different. Some samples are
more probable than others and to describe the random process further it is necessary to give
probability information.
The following ensemble averages describe the random process at time tj:
- the mean: mx(tj) = (1/n)·Σ(i=1..n) xi(tj)			(5.1); mx(t) is a deterministic function of time;
- the variance: σx2(tj) = (1/n)·Σ(i=1..n) [xi(tj) − mx(tj)]2			(5.2), also time dependent;
- the standard deviation: σx(tj) = √(σx2(tj))			(5.3).
Considering two different random processes x(t) and x*(t) one may get the same mean and
the same variance for both processes (Figure 5.2).
Obviously, the mean and the variance cannot describe completely the internal structure of a
random (stochastic) process and additional higher order average values are needed for a
complete description.
The average product of the data values at times tj and tk, called the correlation function, is
given by:
Rx(tj, tk) = (1/n)·Σ(i=1..n) xi(tj)·xi(tk)			(5.4)
Figure 5.2. Two different processes with same mean and same variance
Furthermore, the average product of the data values at times t and t+τ is called the autocorrelation function at time delay τ:

R_x(t, t+\tau) = \frac{1}{n} \sum_{i=1}^{n} x_i(t) \, x_i(t+\tau)    (5.5)

If τ = 0: R_x(t, t) = \overline{x^2}(t)    (5.6)
Similarly, the average product of the deviations from the mean at times tj and tk, called the covariance function, is given by:

K_x(t_j, t_k) = \frac{1}{n} \sum_{i=1}^{n} \left[ x_i(t_j) - m_x(t_j) \right] \left[ x_i(t_k) - m_x(t_k) \right]    (5.7)
An important property of the covariance function is easily obtained by developing the right-hand side of Equation (5.7):

K_x(t_j, t_k) = \frac{1}{n} \sum_{i=1}^{n} x_i(t_j) \, x_i(t_k) - m_x(t_k) \frac{1}{n} \sum_{i=1}^{n} x_i(t_j) - m_x(t_j) \frac{1}{n} \sum_{i=1}^{n} x_i(t_k) + m_x(t_j) \, m_x(t_k)

K_x(t_j, t_k) = R_x(t_j, t_k) - m_x(t_j) \, m_x(t_k)    (5.8)

If t_j = t_k = t: \sigma_x^2(t) = \overline{x^2}(t) - m_x^2(t)    (5.9)
Ergodicity: All the averages discussed so far have been ensemble averages. Given a single
sample x(t) of duration T it is, however, possible to obtain averages by averaging with respect
time along the sample. Such an average is called a temporal average in contrast to the
ensemble averages described previously.
Within the subclass of stationary random processes there exists a further subclass known as
ergodic processes. An ergodic process is one for which ensemble averages are equal to the
corresponding temporal averages taken along any representative sample function.
Structural Reliability and Risk Analysis Lecture Notes
For almost all stationary data, the average values computed over the ensemble at time t will equal the corresponding average values computed over time from a single time history record (theoretically, infinitely long, T → ∞; practically, long enough). For example, the average values may be computed as:
m_x = \frac{1}{T} \int_0^T x(t) \, dt    (5.10)

\overline{x^2} = \frac{1}{T} \int_0^T x^2(t) \, dt

\sigma_x^2 = \frac{1}{T} \int_0^T \left[ x(t) - m_x \right]^2 dt    (5.11)

Developing the integrand of Equation (5.11):

\sigma_x^2 = \frac{1}{T} \int_0^T \left[ x^2(t) - 2 m_x x(t) + m_x^2 \right] dt = \frac{1}{T} \int_0^T x^2(t) \, dt - \frac{2 m_x}{T} \int_0^T x(t) \, dt + \frac{m_x^2}{T} \int_0^T dt = \overline{x^2} - 2 m_x^2 + m_x^2

\sigma_x^2 = \overline{x^2} - m_x^2    (5.12)
Similarly, the temporal autocorrelation function is:

R_x(\tau) = \frac{1}{T} \int_0^T x(t) \, x(t+\tau) \, dt    (5.13)

For τ = 0: R_x(0) = \overline{x^2}    (5.14)
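The temporal averages of Equations 5.10, 5.11 and 5.13 can be sketched for a discretized record. The example below is a hypothetical illustration (function name and the alternating test record are mine, not from the notes); the discrete sums approximate the integrals over the record length.

```python
def temporal_stats(x, dt, lag):
    """Temporal mean, variance and autocorrelation (Eqs. 5.10, 5.11, 5.13),
    estimated from a single record x sampled at a constant step dt."""
    n = len(x)
    m = sum(x) / n                                  # m_x, Eq. 5.10 (discrete form)
    var = sum((xi - m) ** 2 for xi in x) / n        # sigma_x^2, Eq. 5.11
    k = int(round(lag / dt))
    # R_x(tau), Eq. 5.13, averaged over the overlapping part of the record
    R = sum(x[i] * x[i + k] for i in range(n - k)) / (n - k)
    return m, var, R

# hypothetical alternating record: zero mean, unit variance
x = [1.0, -1.0] * 50
m, var, R0 = temporal_stats(x, 0.01, 0.0)
```

At zero lag the estimate returns the mean square, in agreement with Equation 5.14.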
The autocorrelation function is an even (symmetric) function of the time delay τ that attains its maximum at the origin:

R_x(-\tau) = R_x(\tau)    (5.15)

|R_x(\tau)| \le R_x(0) = \overline{x^2}    (5.16)
The temporal covariance function is:

K_x(\tau) = \frac{1}{T} \int_0^T \left[ x(t) - m_x \right] \left[ x(t+\tau) - m_x \right] dt    (5.17)

Developing the integrand:

K_x(\tau) = R_x(\tau) - \frac{m_x}{T} \int_0^T x(t+\tau) \, dt - \frac{m_x}{T} \int_0^T x(t) \, dt + \frac{m_x^2}{T} \int_0^T dt

K_x(\tau) = R_x(\tau) - m_x^2    (5.18)
If τ = 0: K_x(0) = \sigma_x^2, thus \sigma_x^2 = \overline{x^2} - m_x^2    (5.19)
For stationary data, the properties computed from time averages over individual records of the
ensemble will be the same from one record to the next and will equal the corresponding
properties computed from an ensemble average over the records at any time t if (Papoulis,
1991):
\frac{1}{T} \int_0^T \left[ R_x(\tau) - m_x^2 \right] d\tau \to 0 \quad \text{as } T \to \infty    (5.20)
In practice, violations of Equation 5.20 are usually associated with the presence of periodic
components in the data.
In conclusion, for stationary ergodic processes, the ensemble is replaced by one representative
function, thus the ensemble averages being replaced by temporal averages, Figure 5.5.
An ergodic process is necessarily stationary since the temporal average is a constant while the
ensemble average is generally a function of time at which the ensemble average is performed
except in the case of stationary processes. A random process can, however, be stationary
without being ergodic. Each sample of an ergodic process must be completely representative
for the entire process.
It is possible to verify experimentally whether a particular process is or is not ergodic by
processing a large number of samples, but this is a very time-consuming task. On the other
hand, a great simplification results if it can be assumed ahead of time that a particular process
is ergodic. All statistical information can be obtained from a single sufficiently long sample.
In situations where statistical estimates are desired but only one sample of a stationary process
is available, it is common practice to proceed on the assumption that the process is ergodic.
Zero mean: It is very convenient to perform the computations for zero mean random
processes. Even if the stationary random process is not zero mean, one can perform a
translation of the time axis with the magnitude of the mean value of the process (Figure 5.6).
In the particular case of zero mean random processes, the following relations hold true:
R_x(0) = \overline{x^2} = \sigma_x^2 + m_x^2 = \sigma_x^2    (5.21)

K_x(\tau) = R_x(\tau) - m_x^2 = R_x(\tau)    (5.22)
(Figure 5.5: the ensemble x1(t), …, xn(t) replaced by one representative sample function x(t))
Figure 5.6. Zero mean stationary ergodic random process
Normality (Gaussian): Before stating the normality assumption, some considerations will be given to the probability distribution of stochastic processes. Consider the probability that a measured value of the process at time t1 is at most equal to some amplitude ξ:

Prob[x(t_1) \le \xi] = \lim_{N \to \infty} \frac{N[x(t_1) \le \xi]}{N}    (5.23)

where N[x(t1) ≤ ξ] is the number of measurements with an amplitude less than or equal to ξ at time t1. The probability statement in Equation 5.23 can be generalized by letting the amplitude ξ take on arbitrary values, as illustrated in Figure 5.7. The resulting function of ξ is the cumulative distribution function, CDF, of the random process {x(t)} at time t1, and is given by

F_x(\xi, t_1) = Prob[x(t_1) \le \xi]    (5.24)
(Figure 5.7: estimation of the CDF from the ensemble x1(t), …, xN(t) at time t1)
The probability distribution function then defines the probability that the instantaneous value
of {x(t)} from a future experiment at time t1 will be less than or equal to the amplitude of
interest. For the general case of nonstationary data, this probability will vary with the time t1.
For the special case of stationary ergodic data, the probability distribution function will be the
same at all times and can be determined from a single measurement x(t) by
F_X(\xi) = Prob[x(t) \le \xi] = \lim_{T \to \infty} \frac{T[x(t) \le \xi]}{T}    (5.25)

where T[x(t) ≤ ξ] is the total time that x(t) is less than or equal to the amplitude ξ, as illustrated
in Figure 5.8.
In this case, the probability distribution function defines the probability that the instantaneous
value of x(t) from a future experiment at an arbitrary time will be less than or equal to any
amplitude of interest.
There is a very special theoretical process with a number of remarkable properties, one of the most remarkable being that, in the stationary case, the spectral density (or autocorrelation function) does provide enough information to construct the probability distributions. This special random process is called the normal random process.
Many random processes in nature which play the role of excitations to vibratory systems are
at least approximately normal. The normal (Gaussian) assumption implies that the probability
distribution of all ordinates of the stochastic process follows a normal (Gaussian) distribution
(Figure 5.9).
A very important property of the normal or Gaussian process is its behavior with respect to
linear systems. When the excitation of a linear system is a normal process the response will be
in general a very different random process but it will still be normal.
(Figure 5.9: normal distribution of the process ordinates, f_X(x) \, dx = P(x \le X(t) < x + dx))
R_x(\tau) = \int_{-\infty}^{+\infty} S_x(\omega) \, e^{i\omega\tau} \, d\omega    (6.1)

where S_x(ω) is essentially (except for the factor 2π) the Fourier transform of R_x(τ):

S_x(\omega) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} R_x(\tau) \, e^{-i\omega\tau} \, d\tau    (6.2)

Setting τ = 0 in Equation 6.1:

R_x(0) = \overline{x^2} = \int_{-\infty}^{+\infty} S_x(\omega) \, d\omega    (6.3)
The mean square of the process equals the sum over all frequencies of S_x(ω)dω, so that S_x(ω) can be interpreted as a mean square spectral density (or power spectral density, PSD). If the process has zero mean, the mean square of the process is equal to the variance of the process:

R_x(0) = \int_{-\infty}^{+\infty} S_x(\omega) \, d\omega = \overline{x^2} = \sigma_x^2 + m_x^2 = \sigma_x^2    (6.4)

Equations 6.1 and 6.2, used to define the spectral density, are usually called the Wiener-Khintchine relations and point out that R_x(τ) and S_x(ω) are a pair of Fourier transforms.

Note that the dimensions of S_x(ω) are mean square per unit of circular frequency. Note also that, according to Equation 6.3, both negative and positive frequencies are counted.

It is as if the random process were decomposed into a sum of harmonic oscillations of random amplitudes at different frequencies, and the variance of the random amplitudes were plotted against frequency. Thus the power spectral density (PSD) is obtained (Figure 6.1). The power spectral density function gives the image of the frequency content of the stochastic process.

As one can notice from Equation 6.3 and from Figure 6.1, the area enclosed by the PSD function in the range [ω1, ω2] represents the variance of the amplitudes of the process in that particular range. The predominant frequency of the process, ωp, indicates the frequency around which most of the power of the process is concentrated.
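Equation 6.3 can be checked numerically for a textbook transform pair. The sketch below is a hypothetical example (the parameter values are mine): for the exponential autocorrelation R_x(τ) = σ² e^{-a|τ|}, the bilateral PSD obtained from Equation 6.2 is S_x(ω) = σ²a / [π(a² + ω²)], and integrating S_x over all frequencies must recover the variance of the zero-mean process.

```python
import math

sigma2, a = 4.0, 2.0  # hypothetical process parameters: variance and decay rate
S = lambda w: sigma2 * a / (math.pi * (a * a + w * w))  # bilateral PSD S_x(w)

# trapezoidal integration of S_x over a wide symmetric frequency band (Eq. 6.3)
wmax, n = 2000.0, 200000
dw = 2.0 * wmax / n
area = 0.5 * dw * (S(-wmax) + S(wmax)) + dw * sum(S(-wmax + i * dw) for i in range(1, n))
```

The computed area approaches σ² = 4 as the truncation frequency grows; the small remainder is the tail of the spectrum beyond the integration band.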
For practical applications, the PSD determined experimentally is defined in terms of frequency f (in Hz) and for positive frequencies only:

G_x(f) = 4\pi \, S_x(\omega)    (6.5)

The factor 4π is made up of a factor of 2π accounting for the change in frequency units and a factor of 2 accounting for the consideration of positive frequencies only, instead of both positive and negative frequencies for an even function of frequency. Instead of relation (6.3) one has

\sigma_x^2 = \int_0^{+\infty} G_x(f) \, df    (6.6)

for the relation between the variance of the zero-mean process and the experimental PSD. The following notation is used:

- S_x(ω) - bilateral PSD
- G_x(ω) = 2 S_x(ω) - unilateral PSD (with ω = 2πf)
- G_x(f) = 2π G_x(ω)
- g_x(f) = G_x(f) / σ_x² - normalized unilateral PSD
The PSD is the Fourier transform of a temporal average (the autocorrelation function) and plays the role of a density distribution of the variance along the frequency axis.

For real processes the autocorrelation function is an even function of τ that assumes its peak value at the origin. If the process contains no periodic components then the autocorrelation function approaches zero as τ → ∞. If the process has a sinusoidal component of frequency ω0 then the autocorrelation function will have a sinusoidal component of the same frequency as τ → ∞.

If the process contains no periodic components then the PSD is finite everywhere. If, however, the process has a sinusoidal component of frequency ω0 then the sinusoid contributes a finite amount to the total mean square and the PSD must be infinite at ω = ω0; i.e., the spectrum has an infinitely high peak of zero width enclosing a finite area at ω0. In particular, if the process does not have zero mean then there is a finite zero-frequency component and the spectrum will have a spike at ω = 0.
For practical computations, Equations 6.1 and 6.2 become:

R_x(\tau) = \int_{-\infty}^{+\infty} S_x(\omega) \cos \omega\tau \, d\omega    (6.7)

S_x(\omega) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} R_x(\tau) \cos \omega\tau \, d\tau    (6.8)

if one uses e^{-i\omega t} = \cos \omega t - i \sin \omega t and keeps just the real parts of the integrands in relations (6.1) and (6.2).
Given a stationary random process x(t), the random processes \dot{x}(t) = \frac{d}{dt}[x(t)] and \ddot{x}(t) = \frac{d^2}{dt^2}[x(t)], each of whose sample functions is the temporal derivative of the corresponding sample x(t), are also stationary random processes. Furthermore, the following relations hold true:

R_{\dot{x}}(\tau) = -\frac{d^2}{d\tau^2}\left[ R_x(\tau) \right]    (6.9)

R_{\ddot{x}}(\tau) = \frac{d^4}{d\tau^4}\left[ R_x(\tau) \right]    (6.10)
Using Equation 6.3 in 6.9 and 6.10, for a zero mean process it follows:

\sigma_{\dot{x}}^2 = \int_{-\infty}^{+\infty} \omega^2 S_x(\omega) \, d\omega = -R_x''(0)

\sigma_{\ddot{x}}^2 = \int_{-\infty}^{+\infty} \omega^4 S_x(\omega) \, d\omega = R_x''''(0)    (6.11)
The frequency content of a stationary random process can be characterized by several indicators:

i. the Cartwright & Longuet-Higgins indicator ε and the q indicator, defined from the spectral moments;

ii. the Kennedy-Shinozuka indicators f10, f50 and f90, which are fractile frequencies below which 10%, 50% and 90% of the total cumulative power of the PSD occur;

iii. the frequencies f1, f2 and f3 corresponding to the highest 1, 2, 3 peaks of the PSD.

To define the frequency content indicators, one has first to introduce the spectral moment of order i:
\lambda_i = \int_0^{\infty} \omega^i \, G_x(\omega) \, d\omega    (6.12)

\lambda_0 = \int_0^{\infty} G_x(\omega) \, d\omega = \sigma_x^2    (6.13)

\lambda_2 = \int_0^{\infty} \omega^2 G_x(\omega) \, d\omega = \sigma_{\dot{x}}^2    (6.14)

\lambda_4 = \int_0^{\infty} \omega^4 G_x(\omega) \, d\omega = \sigma_{\ddot{x}}^2    (6.15)

The spectral moments are computed using the unilateral PSD; the odd-order spectral moments would be equal to zero if the bilateral PSD were used in the calculations.

The Cartwright & Longuet-Higgins indicator is:

\varepsilon = \sqrt{1 - \frac{\lambda_2^2}{\lambda_0 \lambda_4}}    (6.16)
Wide frequency band processes have ε values close to 2/3 and smaller than 0.85. Narrow frequency band seismic processes of long predominant period (i.e. the superposition of a single harmonic process at a short predominant frequency fp and a wide band process) are characterized by ε values greater than 0.95. The processes with ε values in between 0.85 and 0.95 have intermediate frequency band.
The relation for computing the q indicator is:

q = \sqrt{1 - \frac{\lambda_1^2}{\lambda_0 \lambda_2}}    (6.17)

The Kennedy-Shinozuka fractile frequencies are defined by:

\int_0^{f_{10}} g_x(f) \, df = 0.1    (6.18)

\int_0^{f_{50}} g_x(f) \, df = 0.5    (6.19)

\int_0^{f_{90}} g_x(f) \, df = 0.9    (6.20)
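The indicators of Equations 6.12-6.17 can be evaluated numerically from any discretized unilateral PSD. The sketch below is a hypothetical illustration (helper name and the flat test band on [10, 20] rad/s are mine): the spectral moments are computed by the trapezoidal rule, then ε and q follow directly.

```python
import math

def spectral_moment(omega, G, i):
    """lambda_i = integral of w^i * G(w) dw (Eq. 6.12), trapezoidal rule."""
    s = 0.0
    for k in range(len(omega) - 1):
        dw = omega[k + 1] - omega[k]
        s += 0.5 * dw * (omega[k] ** i * G[k] + omega[k + 1] ** i * G[k + 1])
    return s

# hypothetical unilateral band-limited white-noise PSD: G = 1 on [10, 20] rad/s
n = 2000
omega = [10.0 + 10.0 * k / n for k in range(n + 1)]
G = [1.0] * (n + 1)

lam = {i: spectral_moment(omega, G, i) for i in (0, 1, 2, 4)}
q = math.sqrt(1.0 - lam[1] ** 2 / (lam[0] * lam[2]))    # Eq. 6.17
eps = math.sqrt(1.0 - lam[2] ** 2 / (lam[0] * lam[4]))  # Eq. 6.16
```

For this flat band (α = ω1/ω2 = 0.5) the numerical values agree with the closed-form expressions derived later in the chapter.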
Figure 6.4. Example of wide band spectrum
under the spectrum. Nevertheless, the ideal white noise model can sometimes be used to
provide physically meaningful results in a simple manner.
(figure: ideal white noise spectrum, S_x(ω) = S0 at all frequencies)
The band-limited white noise spectrum shown in Figure 6.6 is a close approximation of many
physically realizable random processes.
(Figure 6.6: band-limited white noise spectrum of constant intensity S0 and total area σ_x²)
For band-limited white noise the autocorrelation function is, Figure 6.7:

R_x(\tau) = \int_{-\infty}^{+\infty} S_x(\omega) \, e^{i\omega\tau} d\omega = 2 \int_0^{+\infty} S_x(\omega) \cos(\omega\tau) \, d\omega = 2 \int_{\omega_1}^{\omega_2} S_0 \cos(\omega\tau) \, d\omega = \frac{2 S_0}{\tau} \left[ \sin \omega_2 \tau - \sin \omega_1 \tau \right]    (6.21)

For a wide band process the autocorrelation function vanishes quickly (within about two cycles); there is a finite variance \sigma_x^2 = 2 S_0 (\omega_2 - \omega_1) and a nonzero correlation with the past and the future, at least for short time intervals.
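Equation 6.21 can be cross-checked against a direct numerical integration of the spectrum. The sketch below is a hypothetical example (function name and band parameters are mine); at τ = 0 the closed form returns the variance 2 S0 (ω2 − ω1).

```python
import math

def R_blwn(tau, S0, w1, w2):
    """Autocorrelation of band-limited white noise, Eq. 6.21.
    At tau = 0 it returns the variance 2*S0*(w2 - w1)."""
    if abs(tau) < 1e-12:
        return 2.0 * S0 * (w2 - w1)
    return 2.0 * S0 * (math.sin(w2 * tau) - math.sin(w1 * tau)) / tau

S0, w1, w2 = 0.5, 5.0, 15.0
var = R_blwn(0.0, S0, w1, w2)  # variance of the process

# cross-check at tau = 0.3 by midpoint-rule integration of 2*S0*cos(w*tau)
tau, n = 0.3, 50000
dw = (w2 - w1) / n
num = sum(2.0 * S0 * math.cos((w1 + (k + 0.5) * dw) * tau) for k in range(n)) * dw
```

The numerical integral and the closed form agree to high accuracy, which confirms the sine-difference expression.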
Figure 6.7. Autocorrelation function corresponding to the band-limited white noise spectra
Figure 6.8. Narrow band spectrum
If it can be assumed that the process is normal then it is possible to compute the average or expected frequency of the cycles. When a stationary normal random process with zero mean has a narrow-band spectrum, the statistical average frequency or expected frequency is ω0, where:

\omega_0^2 = \frac{\int_{-\infty}^{+\infty} \omega^2 S_x(\omega) \, d\omega}{\int_{-\infty}^{+\infty} S_x(\omega) \, d\omega} = \frac{-R_x''(0)}{R_x(0)} = \frac{\sigma_{\dot{x}}^2}{\sigma_x^2}    (6.22)

This result is simply stated and has a simple interpretation; i.e., ω0² is simply a weighted average of ω² in which the PSD is the weighting function. The establishment of this result by S. O. Rice represented a major advance in the theory of random processes.
If one assumes that x(t) is a stationary normal zero-mean process, one finds the expected number of crossings of the level x = a with positive slope:

\nu_a^+ = \frac{1}{2\pi} \frac{\sigma_{\dot{x}}}{\sigma_x} \exp\left( -\frac{a^2}{2\sigma_x^2} \right)    (6.23)

If one sets a = 0 in relation (6.23), the expected frequency, in crossings per unit time, of zero crossings with positive slope is obtained. Finally, if the process is narrow-band, the probability is very high that each such crossing implies a complete cycle and thus the expected frequency, in cycles per unit time, is ν0+. It should be emphasized that this result is restricted to normal processes with zero mean.
For the band-limited white noise of Figure 6.9, of constant intensity S0 between ω1 and ω1 + Δω, the autocorrelation function is:

R_x(\tau) = \int_{-\infty}^{+\infty} S_x(\omega) \, e^{i\omega\tau} d\omega = 2 \int_0^{+\infty} S_x(\omega) \cos(\omega\tau) \, d\omega = 2 \int_{\omega_1}^{\omega_1 + \Delta\omega} S_0 \cos(\omega\tau) \, d\omega = \frac{2 S_0}{\tau} \left[ \sin\left( (\omega_1 + \Delta\omega) \tau \right) - \sin(\omega_1 \tau) \right]    (6.24)

(Figure 6.9: band-limited white noise spectrum on [ω1, ω1 + Δω] and its autocorrelation function)
For a unilateral PSD of constant intensity G0 in the frequency band [ω1, ω2], the spectral moments are:

\lambda_i = \int_{\omega_1}^{\omega_2} \omega^i G_0 \, d\omega = G_0 \frac{\omega_2^{i+1} - \omega_1^{i+1}}{i+1} = \frac{G_0 \, \omega_2^{i+1} \left( 1 - \alpha^{i+1} \right)}{i+1}    (6.25)

where \alpha = \frac{\omega_1}{\omega_2}. Substituting the spectral moments in Equation 6.17:

q = \sqrt{1 - \frac{\lambda_1^2}{\lambda_0 \lambda_2}} = \sqrt{1 - \frac{3 (1 + \alpha)^2}{4 (1 + \alpha + \alpha^2)}}    (6.26)
- since the q indicator is a measure of the scattering of the PSD values with respect to the central frequency, the q value increases as the frequency band of the PSD increases;
- for narrow band random processes, the q values are in the range [0, 0.25];
- for wide band random processes, the q values are in the range (0.25, 0.50].
Similarly, substituting the spectral moments in Equation 6.16:

\varepsilon = \sqrt{1 - \frac{\lambda_2^2}{\lambda_0 \lambda_4}} = \sqrt{1 - \frac{5 (1 + \alpha + \alpha^2)^2}{9 (1 + \alpha + \alpha^2 + \alpha^3 + \alpha^4)}}    (6.27)

- for wide band random signals, the ε values are in the vicinity of 2/3.
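The closed forms of Equations 6.26 and 6.27 can be coded directly. The sketch below is an illustration (function names are mine); the limiting values reproduce the ranges stated above: for α → 0 (widest band, starting at zero frequency) q → 0.5 and ε → 2/3, while for α → 1 (vanishing bandwidth, a pure harmonic) both indicators go to 0.

```python
import math

def q_band(alpha):
    """Closed-form q for band-limited white noise, Eq. 6.26; alpha = w1/w2."""
    return math.sqrt(1.0 - 3.0 * (1.0 + alpha) ** 2 / (4.0 * (1.0 + alpha + alpha ** 2)))

def eps_band(alpha):
    """Closed-form epsilon for band-limited white noise, Eq. 6.27."""
    num = 5.0 * (1.0 + alpha + alpha ** 2) ** 2
    den = 9.0 * (1.0 + alpha + alpha ** 2 + alpha ** 3 + alpha ** 4)
    return math.sqrt(1.0 - num / den)
```

Both functions decrease monotonically with α, consistent with q and ε being measures of bandwidth.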
(figures: the q and ε indicators of band-limited white noise plotted against α = ω1/ω2)
Let us consider a random process expressed as the sum of two random signals, one with a PSD of constant value GA within a narrow frequency band [ω1, ω2] and one with a PSD of constant value GB within a wide frequency band [ω2, ω3], Figure 6.13.

(Figure 6.13: PSD of situation 1 - narrow band of level GA on [ω1, ω2] plus wide band of level GB on [ω2, ω3])

\lambda_i = \int_0^{\infty} \omega^i G(\omega) \, d\omega = \int_{\omega_1}^{\omega_2} \omega^i G_A \, d\omega + \int_{\omega_2}^{\omega_3} \omega^i G_B \, d\omega = \frac{G_A \, \omega_2^{i+1}}{i+1} \left( 1 - \alpha_A^{i+1} \right) + \frac{G_B \, \omega_3^{i+1}}{i+1} \left( 1 - \alpha_B^{i+1} \right) = \lambda_{iA} + \lambda_{iB}    (6.28)

where \alpha_A = \frac{\omega_1}{\omega_2} and \alpha_B = \frac{\omega_2}{\omega_3}.
The frequency content indicators of the sum process follow from Equations 6.16 and 6.17 with the combined spectral moments:

q = \sqrt{1 - \frac{\lambda_1^2}{\lambda_0 \lambda_2}} = \sqrt{1 - \frac{(\lambda_{1A} + \lambda_{1B})^2}{(\lambda_{0A} + \lambda_{0B})(\lambda_{2A} + \lambda_{2B})}}    (6.29)

\varepsilon = \sqrt{1 - \frac{\lambda_2^2}{\lambda_0 \lambda_4}} = \sqrt{1 - \frac{(\lambda_{2A} + \lambda_{2B})^2}{(\lambda_{0A} + \lambda_{0B})(\lambda_{4A} + \lambda_{4B})}}    (6.30)

Two situations are considered:
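The two-band indicators of Equations 6.28-6.30 can be sketched as follows. This is a hypothetical illustration (function names and band limits are mine); the flat-band moments use the closed form of Equation 6.25 applied to each band.

```python
import math

def lam_band(G, wlo, whi, i):
    """Spectral moment of order i for a flat band of level G on [wlo, whi]."""
    return G * (whi ** (i + 1) - wlo ** (i + 1)) / (i + 1)

def indicators_two_bands(GA, w1, w2, GB, w3):
    """q and epsilon (Eqs. 6.29-6.30) for a band [w1, w2] of level GA
    plus a band [w2, w3] of level GB."""
    lam = [lam_band(GA, w1, w2, i) + lam_band(GB, w2, w3, i) for i in range(5)]
    q = math.sqrt(1.0 - lam[1] ** 2 / (lam[0] * lam[2]))
    eps = math.sqrt(1.0 - lam[2] ** 2 / (lam[0] * lam[4]))
    return q, eps

# example: dominant narrow band (GA >> GB) versus wide band alone
q_narrow, eps_narrow = indicators_two_bands(100.0, 9.5, 10.0, 1.0, 100.0)
```

Setting one of the two levels to zero recovers the single-band closed forms, which provides a convenient consistency check.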
- Situation 1, of Figure 6.13, where the narrow band random signal is of high frequency;
- Situation 2, of Figure 6.14, where the narrow band random signal is of low frequency.
(Figure 6.14: PSD of situation 2 - low frequency narrow band of level GA plus wide band of level GB)
In order to investigate the influence of the frequency band on the values of the frequency content indicators, several cases of αA and αB values are considered. For situation 1, the following case is considered: αA = 0.05 and αB = 0.95. For situation 2, the following case is considered: αA = 0.95 and αB = 0.05. The values of the frequency content indicators are presented in the following. The results are represented as a function of the ratio GA to GB.
(figures: the q indicator versus the GA/GB ratio, for situation 1 (αA = 0.05, αB = 0.95) and situation 2 (αA = 0.95, αB = 0.05))
- for situation 1:
  - for small GA/GB values, the wide band random signal has little influence, the value of q approaching 0, as for a pure sinusoid. As expected, the approach towards zero is more pronounced as the narrow band random signal has a narrower PSD;
  - for large GA/GB values, the value of q approaches 0.50, the value corresponding to a white noise random signal. The value only approaches 0.50 but never reaches it, since the signal is band-limited white noise;
  - a superposition of a wide band random signal with a high frequency narrow band random signal with strong contrast in terms of PSD ordinates (GA/GB << 1) will produce a random signal very close to a pure sinusoid. In other words, only the narrow band random signal will influence the value of q;
- for situation 2:
  - for small GA/GB values, the value of q approaches 0.50, the value corresponding to a white noise random signal. The value only approaches 0.50 but never reaches it, since the signal is band-limited white noise;
  - for large GA/GB values, the value of q increases above 0.50, up to a GA to GB ratio of 100 for case 1, up to 1000 for case 2 and up to 10000 for case 3. The maximum q value is 0.60 for case 1, 0.75 for case 2 and 0.85 for case 3;
  - a superposition of a wide band random signal with a low frequency narrow band random signal with strong contrast in terms of PSD ordinates (GA/GB >> 1) will produce a random signal with q values larger than 0.50. The increase in q values is steadier as the PSD of the narrow band random signal is narrower.
(figures: the ε indicator versus the GA/GB ratio, for situation 1 (αA = 0.05, αB = 0.95) and situation 2 (αA = 0.95, αB = 0.05))
- for situation 1:
  - for small GA/GB values, the wide band random signal has little influence, the value of ε approaching 0, as for a pure sinusoid. As expected, the approach towards zero is more pronounced as the narrow band random signal has a narrower PSD;
  - for large GA/GB values, the value of ε approaches 2/3, the value corresponding to a white noise random signal. The value only approaches 2/3 but never reaches it, since the signal is band-limited white noise;
  - a superposition of a wide band random signal with a high frequency narrow band random signal with strong contrast in terms of PSD ordinates (GA/GB << 1) will produce a random signal very close to a pure sinusoid. In other words, only the narrow band random signal will influence the value of ε;
- for situation 2:
  - for small GA/GB values, the value of ε approaches 2/3, the value corresponding to a white noise random signal. The value only approaches 2/3 but never reaches it, since the signal is band-limited white noise;
  - for large GA/GB values, the value of ε increases above 2/3, up to a GA to GB ratio of 100 for case 1. For cases 2 and 3 the value is steadily increasing. The maximum ε value is 0.90 for case 1, 0.95 for case 2 and 0.99 for case 3;
  - a superposition of a wide band random signal with a low frequency narrow band random signal with strong contrast in terms of PSD ordinates (GA/GB >> 1) will produce a random signal with ε values larger than 2/3. The increase in ε values is steadier and larger as the PSD of the narrow band random signal is narrower.
7.1. Introduction
When attention is focused on the response of structural systems to random vibration it is
generally possible to identify an excitation or input and a response or output. The excitation
may be a motion (i.e., acceleration, velocity, or displacement), or a force history. The
response may be a desired motion history or a desired stress history. When the excitation is a
random process, the response quantity will also be a random process. The central problem of
this chapter is the determination of information regarding the response random process from
corresponding information regarding the excitation process.
The excitation time-history is x(t) and the response time-history is y(t). In preparation for the
case where input and output are both random vibrations we review the case where they are
individual particular functions. If x(t) is a specified deterministic time history then it is
possible to obtain a specific answer for y(t) by integrating the differential equations of motion,
subject to initial conditions.
It is a property of linear time-invariant systems that when the excitation is steady state simple
harmonic motion (without beginning or end) then the response is also steady state simple
harmonic motion at the same frequency. The amplitude and phase of the response generally
depend on the frequency. A concise method of describing the frequency dependence of the
amplitude and phase is to give the complex frequency response or transfer function H().
This has the property that when the excitation is the real part of eit then the response is the
real part of H()eit. The complex transfer function H() is obtained analytically by
substituting
x = e i t
(7.1)
y = H ( )e it
(7.2)
in the equations of motion, cancelling the eit terms, and solving algebraically for H().
Since H() is essentially an output measure for unit input its dimensions will have the
dimensions of the ratio y/x.
Knowledge of the transfer function H(ω) for all frequencies contains all the information necessary to obtain the response y(t) to an arbitrary known excitation x(t). The basis for this remark is the principle of superposition, which applies to linear systems. The superposition here is performed in the frequency domain using Fourier's method. When x(t) has a period then it can be decomposed into sinusoids forming a Fourier series. The response to each sinusoid, separately, is provided by H(ω) and these responses form a new Fourier series representing the response y(t). When x(t) is not periodic but has a Fourier transform

X(\omega) = \int_{-\infty}^{+\infty} x(t) \, e^{-i\omega t} \, dt    (7.3)
then an analogous superposition is valid. For each frequency component separately Equations
(7.1) and (7.2) yield:
Y(\omega) = H(\omega) \, X(\omega)    (7.4)

as the Fourier transform of the response y(t). The response itself is given by the inverse Fourier transform representation

y(t) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} Y(\omega) \, e^{i\omega t} \, d\omega    (7.5)
Figure 7.1. SDOF system and excitation-response diagram associated with this system
The equation of motion of the SDOF system subjected to the acceleration a(t) is:

m \, \ddot{y}(t) + c \, \dot{y}(t) + k \, y(t) = m \, a(t)    (7.6)

which simplifies to

\ddot{y}(t) + 2\beta\omega_0 \, \dot{y}(t) + \omega_0^2 \, y(t) = a(t)    (7.7)

where:
β - damping ratio
ω0 - natural circular frequency of the SDOF system.

Equation 7.7 can be solved in the time domain or in the frequency domain. In the time domain, the solution is given by the convolution (Duhamel) integral:

y(t) = \int_0^t a(\tau) \, h(t - \tau) \, d\tau    (7.8)

where:

h(t) = \frac{1}{\omega_d} e^{-\beta\omega_0 t} \sin \omega_d t    (7.9)

is the response function to a unit impulse and the damped circular frequency is

\omega_d = \omega_0 \sqrt{1 - \beta^2}    (7.10)
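The Duhamel integral of Equations 7.8-7.10 can be sketched as a discrete convolution. This is a hypothetical illustration (function name, time step and excitation are mine, not from the notes); feeding in a discrete approximation of a unit impulse must reproduce the impulse response h(t) itself.

```python
import math

def sdof_response(a, dt, beta, w0):
    """Response by the Duhamel convolution integral (Eq. 7.8) with the
    unit-impulse response h(t) of Eq. 7.9 and wd of Eq. 7.10."""
    wd = w0 * math.sqrt(1.0 - beta ** 2)
    h = [math.exp(-beta * w0 * k * dt) * math.sin(wd * k * dt) / wd
         for k in range(len(a))]
    # discrete convolution: y_n = sum_k a_k * h_{n-k} * dt
    return [sum(a[k] * h[n - k] for k in range(n + 1)) * dt for n in range(len(a))]

# hypothetical excitation: discrete approximation of a unit impulse at t = 0
dt, beta, w0 = 0.01, 0.05, 12.57
a = [1.0 / dt] + [0.0] * 299
y = sdof_response(a, dt, beta, w0)
```

The direct convolution is O(n²); for long records an FFT-based convolution would normally be preferred, which is exactly the frequency-domain route of Equation 7.11.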
In the frequency domain, the solution is:

Y(\omega) = H(\omega) \, A(\omega)    (7.11)

where:
A(ω) - Fourier transform of the excitation (input acceleration), given by

A(\omega) = \int_{-\infty}^{+\infty} a(t) \, e^{-i\omega t} \, dt    (7.12)
Substituting relations (7.1) and (7.2) in Equation 7.7 one obtains:

H(\omega) \left[ -\omega^2 + \omega_0^2 + 2 i \beta \omega_0 \omega \right] = 1

H(\omega) = \frac{1}{\omega_0^2 \left[ 1 - \left( \frac{\omega}{\omega_0} \right)^2 + 2 i \beta \frac{\omega}{\omega_0} \right]}    (7.13)

Note that here H(ω) has the dimensions of displacement per unit acceleration. The square of the modulus of the complex transfer function is given by

|H(\omega)|^2 = \frac{1}{\omega_0^4 \left\{ \left[ 1 - \left( \frac{\omega}{\omega_0} \right)^2 \right]^2 + 4 \beta^2 \left( \frac{\omega}{\omega_0} \right)^2 \right\}}    (7.14)
The modulus of the non-dimensional complex transfer function is given by, Figure 7.2:

|H_0(\omega)| = \frac{1}{\sqrt{\left[ 1 - \left( \frac{\omega}{\omega_0} \right)^2 \right]^2 + 4 \beta^2 \left( \frac{\omega}{\omega_0} \right)^2}}    (7.15)

where

H_0(i\omega) = \omega_0^2 \, H(i\omega)    (7.16)

The maximum value of the response modulus is reached for ω = ω0 and is given by, Figure 7.2:

\max |H(\omega)| = \frac{1}{2 \beta \omega_0^2} = \frac{1}{2\beta} \cdot \frac{m}{k}    (7.17)

where m is the mass of the SDOF system and k is the stiffness of the SDOF system ( k = m \omega_0^2 ).
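Equations 7.15-7.17 translate directly into code. The sketch below is an illustration (function name and the β = 0.05, ω0 = 12.57 rad/s values follow the worked example later in the chapter); at ω = ω0 the modulus equals 1/(2β), and at ω = 0 it reduces to the static value 1.

```python
import math

def H0_mod(w, beta, w0):
    """Modulus of the non-dimensional transfer function, Eq. 7.15."""
    r = w / w0
    return 1.0 / math.sqrt((1.0 - r * r) ** 2 + 4.0 * beta ** 2 * r * r)

beta, w0 = 0.05, 12.57
peak = H0_mod(w0, beta, w0)    # at w = w0: 1/(2*beta), Eq. 7.17
static = H0_mod(0.0, beta, w0) # at w = 0: static response factor of 1
```

For β = 0.05 the resonant amplification is 1/(2·0.05) = 10, which is why lightly damped SDOF systems filter the input so strongly around ω0.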
Once the solution of Equation 7.11 is computed, one can go back to the time domain by using the inverse Fourier transform:

y(t) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} Y(i\omega) \, e^{i\omega t} \, d\omega    (7.18)
(For a linear time-invariant system, a stationary, ergodic, zero-mean, normal action produces a stationary, ergodic, zero-mean, normal response.)
The previous section has dealt with the excitation-response or input-output problem for linear
time-invariant systems in terms of particular responses to particular excitations. In this section
the theory is extended to the case where the excitation is no longer an individual time history
but is an ensemble of possible time histories; i.e., a random process. When the excitation is a
stationary random process then the response is also a stationary random process. The
statistical properties of the response process can be deduced from knowledge of the system
and of the statistical properties of the excitation.
The mean of the response is:

E[y(t)] = E[x(t)] \cdot H(0)    (7.19)

For the SDOF system subjected to a zero-mean acceleration process:

m_y(t) = E[y(t)] = H(0) \, m_a(t) = \frac{1}{\omega_0^2} \, m_a(t) = 0    (7.20)
The power spectral density of the response is:

S_y(\omega) = H(\omega) \, H^*(\omega) \, S_x(\omega)    (7.21)

A slightly more compact form is achieved by noting that the product of H(ω) and its complex conjugate may be written as the square of the modulus of H(ω); i.e.,

S_y(\omega) = |H(\omega)|^2 \, S_x(\omega)    (7.22)

The power spectral density of the response is equal to the square of the modulus of the transfer function multiplied by the power spectral density of the excitation. Note that this is an algebraic relation and only the modulus of the complex transfer function H(ω) is needed.
The mean square of the response is:

E[y^2] = \int_{-\infty}^{+\infty} S_y(\omega) \, d\omega = \int_{-\infty}^{+\infty} |H(\omega)|^2 S_x(\omega) \, d\omega = \frac{1}{\omega_0^4} \int_{-\infty}^{+\infty} |H_0(\omega)|^2 S_x(\omega) \, d\omega    (7.23)

In particular, if the output process has zero mean, the mean square value is equal to the variance of the stationary response process y(t) and relation (7.23) turns into:

\sigma_y^2 = \int_{-\infty}^{+\infty} S_y(\omega) \, d\omega = \int_{-\infty}^{+\infty} |H(\omega)|^2 S_x(\omega) \, d\omega    (7.24)

The procedure involved in relation (7.24) is depicted in Figure 7.4 for a stationary ergodic zero-mean acceleration input process.
Figure 7.4. PSD of output process from the PSD of the input process and transfer function
Thus, let us suppose that the excitation has zero mean. Then according to relation (7.19) the
response also has zero mean.
The spectrum of the input is simply the constant S0. The spectrum of the output follows from Equation 7.22, Figure 7.5:

S_y(\omega) = |H(\omega)|^2 S_0 = \frac{S_0}{\omega_0^4 \left\{ \left[ 1 - \left( \frac{\omega}{\omega_0} \right)^2 \right]^2 + 4 \beta^2 \left( \frac{\omega}{\omega_0} \right)^2 \right\}}    (7.25)
The mean square of the response (equal to the variance of the process in the case of zero-mean processes) can be deduced from the spectral density of the response using:

\sigma_y^2 = \int_{-\infty}^{+\infty} |H(\omega)|^2 S_0 \, d\omega = \frac{S_0}{\omega_0^4} \int_{-\infty}^{+\infty} |H_0(i\omega)|^2 \, d\omega = \frac{S_0}{\omega_0^4} \cdot \frac{\pi \omega_0}{2\beta} = \frac{\pi S_0}{2 \beta \omega_0^3}    (7.26)
In case the process is normal or Gaussian it is completely described statistically by its spectral
density. The first-order probability distribution is characterized by the variance alone in this
case where the mean is zero.
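The closed-form variance of Equation 7.26 can be verified by direct numerical integration of Equation 7.24. The sketch below is a hypothetical check (function name, truncation frequency and grid are mine; β = 0.05 and ω0 = 4.19 rad/s match the worked example later in the chapter).

```python
import math

def Hmod2(w, beta, w0):
    """Square of the transfer function modulus, Eq. 7.14."""
    r = w / w0
    return 1.0 / (w0 ** 4 * ((1.0 - r * r) ** 2 + 4.0 * beta ** 2 * r * r))

beta, w0, S0 = 0.05, 4.19, 1.0
# midpoint-rule integration of S_y = |H|^2 * S0 over a wide symmetric band (Eq. 7.24)
wmax, n = 400.0, 200000
dw = 2.0 * wmax / n
sigma2_num = sum(Hmod2(-wmax + (k + 0.5) * dw, beta, w0) * S0 for k in range(n)) * dw
sigma2_exact = math.pi * S0 / (2.0 * beta * w0 ** 3)  # closed form, Eq. 7.26
```

The numerical and closed-form values agree to within a small fraction of a percent; the residual difference comes from the truncated spectral tail, which decays as 1/ω⁴.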
Figure 7.5. Band limited white noise and transfer function of the system
maximum amplitude. Because of this particular aspect of the transfer function of SDOF systems with low damping, the product under the integral in relation (7.24) will take significant values only in the interval ω ≈ ω0, and one can assume that the PSD of the input excitation is constant in this interval and equal to S_a(ω0), Figure 7.6. Thus, relation (7.26), valid for band-limited white noise, is also applicable and the variance of the response process is approximately:

\sigma_y^2 = \frac{\pi \, S_a(\omega_0)}{2 \beta \omega_0^3}
(Figure 7.6: the response variance splits into a resonant part and a non-resonant part; for low damping the non-resonant part is neglected)
Figure 7.6. Computation of response process variance for SDOF systems with low damping
The previous analytical developments are illustrated below considering as input the PSD of the March 4, 1977 seismic ground motion recorded at INCERC station on the N-S direction and two SDOF systems with damping ratio β = 0.05 and natural circular frequencies ω0 of 12.57 rad/s and 4.19 rad/s, respectively. Figure 7.7 presents the input PSD, the transfer function of the SDOF system with β = 0.05 and ω0 = 12.57 rad/s and the response PSD; Figure 7.8 presents the input PSD, the transfer function of the SDOF system with β = 0.05 and ω0 = 4.19 rad/s and the response PSD.
Figure 7.7 PSD of March 4, 1977 INCERC N-S accelerogram (upper), transfer function of the SDOF system with β = 0.05 and ω0 = 12.57 rad/s (middle) and PSD of response (lower)
Figure 7.8 PSD of March 4, 1977 INCERC N-S accelerogram (upper), transfer function of the SDOF system with β = 0.05 and ω0 = 4.19 rad/s (middle) and PSD of response (lower)
Note: If the mean and the variance of the displacement response process are known, the mean
and the variance of the velocity and the acceleration response processes can be found using
the relations:
\sigma_{\dot{y}} = \omega_0 \, \sigma_y, \qquad \sigma_{\ddot{y}} = \omega_0^2 \, \sigma_y

m_{\dot{y}} = \omega_0 \, m_y = 0, \qquad m_{\ddot{y}} = \omega_0^2 \, m_y = 0

\sigma_{\dot{y}}^2 = \omega_0^2 \, \sigma_y^2 = \frac{\pi \, S_a(\omega_0)}{2\beta\omega_0}, \qquad \sigma_{\ddot{y}}^2 = \omega_0^4 \, \sigma_y^2 = \frac{\pi \, \omega_0 \, S_a(\omega_0)}{2\beta}
The expected number of crossings of the level b with positive slope, per unit time, is:

\nu_b^+ = \frac{1}{2\pi} \frac{\sigma_{\dot{y}}}{\sigma_y} \exp\left( -\frac{b^2}{2\sigma_y^2} \right) = \frac{1}{2\pi} \sqrt{\frac{\int_{-\infty}^{+\infty} \omega^2 S_y(\omega) \, d\omega}{\int_{-\infty}^{+\infty} S_y(\omega) \, d\omega}} \exp\left( -\frac{b^2}{2\sigma_y^2} \right)    (7.27)
(figure: response process y(t) and the distribution of its maxima)
If one uses the spectral moments \lambda_i = \int_{-\infty}^{+\infty} \omega^i S_y(\omega) \, d\omega, the variance of the response is equal to the spectral moment of order zero, \lambda_0 = \sigma_y^2, the variance of the first derivative of the response is equal to the spectral moment of second order, \lambda_2 = \sigma_{\dot{y}}^2, and relation (7.27) becomes:

\nu_b^+ = \frac{1}{2\pi} \sqrt{\frac{\lambda_2}{\lambda_0}} \exp\left( -\frac{b^2}{2\sigma_y^2} \right)    (7.28)

The expected frequency, in crossings per unit time, of zero crossings with positive slope (the average number of zero crossings with positive slope in unit time) is obtained setting b = 0 in relation (7.28):

\nu_0^+ = \frac{1}{2\pi} \sqrt{\frac{\lambda_2}{\lambda_0}}    (7.29)

Finally, combining relations (7.28) and (7.29), one gets the expected number of crossings of the level b with positive slope:

\nu_b^+ = \nu_0^+ \exp\left( -\frac{b^2}{2\sigma_y^2} \right)    (7.30)
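Equations 7.28-7.30 can be sketched in a few lines. This is a hypothetical illustration (function name and the response moments are mine; the moments are chosen so that the zero up-crossing rate comes out as exactly 1 crossing per second).

```python
import math

def nu_plus(b, lam0, lam2):
    """Up-crossing rate of level b, Eqs. 7.28-7.30:
    nu_b+ = nu_0+ * exp(-b^2 / (2*lam0)), nu_0+ = sqrt(lam2/lam0) / (2*pi)."""
    nu0 = math.sqrt(lam2 / lam0) / (2.0 * math.pi)
    return nu0 * math.exp(-b * b / (2.0 * lam0))

# hypothetical response moments chosen so that nu_0+ = 1 crossing per second
lam0, lam2 = 1.0, (2.0 * math.pi) ** 2
```

With σ_y = 1 the rate of up-crossing the level b = 2σ_y is a factor e^{-2} smaller than the zero-crossing rate, which quantifies how quickly high barriers become rare events.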
For getting the distribution of the peak values ymax of the response process one uses Davenport's approach. The maximum (peak) response is normalized as:

\eta = \frac{y_{max}}{\sigma_y}    (7.31)

The normalized response process as well as the distribution of the peak values of the response are represented in Figure 7.10.
Let us consider the issue of up-crossing the barrier ymax, Figure 7.11. Setting the barrier b = ymax in relation (7.30) one gets the expected number of crossings of the level ymax with positive slope:

\nu_{y_{max}}^+ = \nu_0^+ \exp\left( -\frac{y_{max}^2}{2\sigma_y^2} \right)    (7.32)

Since y_{max} = \eta \, \sigma_y, this can be written as:

\nu_{y_{max}}^+ = \nu_0^+ \exp\left( -\frac{\eta^2}{2} \right)    (7.33)
If the barrier b = ymax is high enough, the upcrossings are considered rare independent events that follow a Poisson type of probability distribution. If the expected number of upcrossings of level ymax in time t is \nu_{y_{max}}^+ t, the probability of having n upcrossings in that time interval is:

P[N = n] = \frac{\left( \nu_{y_{max}}^+ t \right)^n \exp\left( -\nu_{y_{max}}^+ t \right)}{n!}    (7.34)

Setting n = 0 in relation (7.34) one gets the probability of having zero upcrossings in time t, thus the probability of non-exceedance of level ymax in time t, that is the cumulative distribution function of the random variable ymax:

F_{y_{max}} = \exp\left( -\nu_{y_{max}}^+ t \right)    (7.35)
Since the relation between ymax and η is linear, the cumulative distribution function of ymax is also valid for η. Moreover, combining relations (7.33) and (7.35) one gets the double exponential form of the cumulative distribution function of η:

F_\eta = \exp\left[ -\nu_0^+ t \exp\left( -\frac{\eta^2}{2} \right) \right] = e^{-\nu_0^+ t \, e^{-\eta^2 / 2}}    (7.36)
The probability density function, PDF, is obtained by differentiating the CDF with respect to η:

f_\eta = \frac{dF_\eta}{d\eta}    (7.37)

The probability density function and the cumulative distribution function of the normalized peak response are represented in Figure 7.12 and Figure 7.13, respectively.
The mean value of the distribution, m_\eta (mean of the peak values of the normalized response), is:

m_\eta = \sqrt{2\ln(\nu_0^+ t)} + \frac{0.577}{\sqrt{2\ln(\nu_0^+ t)}}   (7.38)

and its standard deviation is:

\sigma_\eta = \frac{\pi}{\sqrt{6}}\cdot\frac{1}{\sqrt{2\ln(\nu_0^+ t)}}   (7.39)
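A minimal numerical sketch of relations (7.38) and (7.39), taking \nu_0^+ t = 1000 as an illustrative value (e.g. \nu_0^+ = 1 Hz over a 1000 s interval):

```python
import math

def peak_stats(nu0_t):
    """Mean and standard deviation of the normalized peak response,
    relations (7.38) and (7.39); nu0_t = nu0+ * t."""
    a = math.sqrt(2.0 * math.log(nu0_t))
    mean = a + 0.577 / a                    # relation (7.38)
    std = (math.pi / math.sqrt(6.0)) / a    # relation (7.39)
    return mean, std

m_eta, s_eta = peak_stats(1000.0)
```

Note that the mean peak factor grows only logarithmically with the duration, while its dispersion shrinks.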
Recalling that the relation between y_{max} and \eta is linear and combining relations (7.31) and (7.38), one gets the mean of the peak values of the response:

m_{y_{max}} = m_\eta\,\sigma_y = \left[\sqrt{2\ln(\nu_0^+ t)} + \frac{0.577}{\sqrt{2\ln(\nu_0^+ t)}}\right]\sigma_y   (7.40)
[Figure 7.12. Probability density function f_\eta of the normalized peak response, for \nu_0^+ t = 100, 1000 and 10000]
[Figure 7.13. Cumulative distribution function F_\eta of the normalized peak response, for \nu_0^+ t = 100, 1000 and 10000]
Finally, one has to get the peak response with non-exceedance probability p in the time interval t, Figure 7.14, that is given by the following relation:

y_{max}(p,t) = \eta_{p,t}\,\sigma_y   (7.41)

The normalized peak \eta_{p,t} with non-exceedance probability p follows from setting F_\eta(\eta_{p,t}) = p in relation (7.36):

p = \exp\left[-\nu_0^+\,t\,\exp\left(-\frac{\eta_{p,t}^2}{2}\right)\right]   (7.42)
It follows from relation (7.42) that the peak normalized response with p non-exceedance
probability in the time interval t is:
-\ln(p) = \nu_0^+\,t\,e^{-\eta_{p,t}^2/2}

\ln(-\ln(p)) = \ln(\nu_0^+ t) - \frac{\eta_{p,t}^2}{2}

\eta_{p,t}^2 = 2\ln(\nu_0^+ t) - 2\ln(-\ln(p))

\eta_{p,t} = \sqrt{2\ln(\nu_0^+ t) - 2\ln(-\ln(p))}   (7.43)
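The derivation above can be checked numerically: computing \eta_{p,t} from relation (7.43) and substituting it back into the CDF of relation (7.36) must recover p. A sketch, with \nu_0^+ t = 1000 as an illustrative value:

```python
import math

def eta_p(nu0_t, p):
    """Normalized peak with non-exceedance probability p, relation (7.43)."""
    return math.sqrt(2.0 * math.log(nu0_t) - 2.0 * math.log(-math.log(p)))

def cdf_eta(eta, nu0_t):
    """CDF of the normalized peak, relation (7.36)."""
    return math.exp(-nu0_t * math.exp(-eta**2 / 2.0))

eta99 = eta_p(1000.0, 0.99)   # round trip: cdf_eta(eta99, 1000.0) -> 0.99
```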
The peak normalized response with non-exceedance probability p in the time interval t, \eta_{p,t}, is represented in Figure 7.15.
[Figure 7.15. Peak normalized response \eta_{p,t} versus \nu_0^+ t (logarithmic scale, 10^0 to 10^7), for p = 0.5 and p = 0.99]
The peak response with non-exceedance probability p in the time interval t, y_{max}(p,t), is:

y_{max}(p,t) = \eta_{p,t}\,\sigma_y = \sqrt{2\ln(\nu_0^+ t) - 2\ln(-\ln(p))}\;\sigma_y   (7.44)
In the case of the seismic response of SDOF systems one has to consider the issue of out-crossing the barrier \pm y_{max}, Figure 7.16; since it is a double-barrier problem, one has to replace \nu_0^+ t with 2\nu_0^+ t in the previous relations (7.33), (7.36), (7.38)-(7.40) and (7.42)-(7.44).
[Figure 7.16. Response process y(t) out-crossing the double barrier b = +y_{max} and b = -y_{max}]
8. Stochastic Modelling of Wind Action

The stochastic modelling of wind action on buildings and structures is presented according to the Romanian technical regulation NP 082-04 "Cod de proiectare. Bazele proiectarii si actiuni asupra constructiilor. Actiunea vantului" (Design Code. Basis of Design and Loads on Buildings and Structures. Wind Loads), which is in line with and compatible with EUROCODE 1991: Actions on structures, Part 1-4: General actions. Wind actions.
8.1. General
Wind effects on buildings and structures depend on the exposure of buildings, structures and
their elements to the natural wind, the dynamic properties, the shape and dimensions of the
building (structure).
The random field of natural wind velocity is decomposed into a mean wind in the direction of
air flow (x-direction) averaged over a specified time interval and a fluctuating and turbulent
part with zero mean and components in the longitudinal (x-) direction, the transversal (y-)
direction and the vertical (z-) direction.
The sequence of maximum annual mean wind velocities can be assumed to be a Gumbel
distributed sequence with possibly direction dependent parameters. The turbulent velocity
fluctuations can be modelled by a zero mean stationary and ergodic Gaussian process.
The reference wind velocity, U_ref, is the maximum annual mean wind velocity at 10 m above ground, in open terrain exposure (category II), averaged over a 10 min time interval (EUROCODE 1 and NP 082-04). For other than 10 min averaging intervals, in open terrain exposure, the following relationships may be used:

1.05\,U_{ref}^{1h} = U_{ref}^{10min} = 0.84\,U_{ref}^{1min} = 0.67\,U_{ref}^{3s}   (8.1)
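Relation (8.1) converts a 10 min mean velocity to other averaging intervals by dividing by the tabulated factors. A minimal sketch, with an illustrative 10 min value of 30 m/s:

```python
def from_10min(u10):
    """Mean wind velocities for other averaging intervals from the
    10 min value, open terrain, relation (8.1)."""
    return {"1h": u10 / 1.05, "10min": u10,
            "1min": u10 / 0.84, "3s": u10 / 0.67}

v = from_10min(30.0)   # e.g. U_ref(10 min) = 30 m/s
```

Shorter averaging intervals always give larger velocities, since short gusts are smoothed out by long averaging.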
The distribution of the maximum annual mean wind velocity fits a Gumbel distribution for maxima:

F_U(x) = \exp\{-\exp[-\alpha_1(x - u_1)]\}   (8.2)
The mode u_1 and the parameter \alpha_1 of the distribution are determined from the mean m_1 and the standard deviation \sigma_1 of the set of maximum annual mean wind velocities:

u_1 = m_1 - \frac{0.577}{1.282}\,\sigma_1 = m_1 - 0.45\,\sigma_1   (8.3)

\alpha_1 = \frac{1.282}{\sigma_1}   (8.4)

The mean m_N and the standard deviation \sigma_N of the maximum mean wind velocity over N years follow as:

m_N = m_1 + \frac{\ln N}{1.282}\,\sigma_1,\qquad \sigma_N = \sigma_1   (8.5)
The reference wind velocity having the probability of non-exceedance during one year p = 0.98 is the so-called characteristic velocity, U_{0.98}. The mean recurrence interval (MRI) of the characteristic velocity is T = 50 yr. For any probability of non-exceedance p, the fractile U_p of the Gumbel distributed random variable can be computed as follows:

U_p = u_1 - \frac{\ln(-\ln p)}{\alpha_1}   (8.6)

U_p = m_1 - \left[0.45 + \frac{\ln(-\ln p)}{1.282}\right]\sigma_1   (8.7)
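Relations (8.3), (8.4) and (8.6) can be combined in a short routine. The statistics below (m_1 = 25 m/s, V_U = 0.23) are illustrative assumptions, not values from the code:

```python
import math

def gumbel_fractile(m1, s1, p):
    """Fractile U_p of the Gumbel distribution for maxima,
    relations (8.3), (8.4) and (8.6)."""
    u1 = m1 - 0.45 * s1          # mode, relation (8.3)
    alpha1 = 1.282 / s1          # relation (8.4)
    return u1 - math.log(-math.log(p)) / alpha1

# characteristic velocity U_0.98 (MRI = 50 yr) for illustrative
# statistics m1 = 25 m/s, V_U = 0.23
u_098 = gumbel_fractile(25.0, 0.23 * 25.0, 0.98)
```

For p = e^{-1} the fractile reduces to the mode u_1, a convenient sanity check of the implementation.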
Generally, it is not possible to infer the maxima over dozens of years from observations covering only a few years. For reliable results, the number of years of available records must be of the same order of magnitude as the required mean recurrence interval.
The reference wind velocity pressure can be determined from the reference wind velocity (standard air density \rho = 1.25 kg/m³) as follows:

Q_{ref}\,[\mathrm{Pa}] = \frac{1}{2}\,\rho\,U_{ref}^2 = 0.625\,U_{ref}^2 \quad (U_{ref}\ \mathrm{in\ m/s})   (8.8)
The conversion of the velocity pressure averaged over 10 min into the velocity pressure averaged over other time intervals can be computed from relation (8.1):

1.1\,Q_{ref}^{1h} = Q_{ref}^{10min} = 0.7\,Q_{ref}^{1min} = 0.44\,Q_{ref}^{3s}   (8.9)
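Relations (8.8) and (8.9) can be sketched as follows, with an illustrative reference velocity of 30 m/s:

```python
def q_ref(u_ref, rho=1.25):
    """Reference velocity pressure in Pa from U_ref in m/s, relation (8.8)."""
    return 0.5 * rho * u_ref**2

q10 = q_ref(30.0)           # 10 min velocity pressure, Pa
q3s = q10 / 0.44            # 3 s velocity pressure via relation (8.9)
```

The pressure conversion factors of relation (8.9) are simply the squares of the velocity factors of relation (8.1).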
The Gumbel model for maxima is also the one adopted by the American Standard for Minimum Design Loads for Buildings, ASCE 7, and fits very well the available Romanian data.
The coefficient of variation of the maximum annual wind velocity is in the range V_U = 0.15 to 0.3, with mean 0.23 and standard deviation 0.06. The coefficient of variation of the velocity pressure (square of the velocity) is approximately double the coefficient of variation of the velocity:

V_Q = 2\,V_U   (8.10)
Figure 8.1. The reference wind velocity pressure [kPa] with mean recurrence interval MRI = 50 yr (wind velocity averaged over 10 min, at 10 m above ground, in open terrain). The mapped values are valid for sites situated below 1000 m altitude.
8.4. Terrain roughness and variation of the mean wind with height
The roughness of the ground surface is aerodynamically described by the roughness length, zo,
(in meters), which is a measure of the size of the eddies close to the ground level, Table 8.1.
Alternatively, the terrain roughness can be described by the surface drag coefficient, \kappa, corresponding to the roughness length z_0:

\kappa = \left[\frac{k}{\ln(z_{ref}/z_0)}\right]^2 = \left[\frac{1}{2.5\,\ln(10/z_0)}\right]^2   (8.11)
Table 8.1. Roughness length z_0, in meters, for various terrain categories 2)

Terrain category | Terrain description | z_0 [m]
0   | Sea or coastal area exposed to the open sea | 0.003
I   | Lakes or flat and horizontal area with negligible vegetation and without obstacles | 0.01
II  | Area with low vegetation such as grass and isolated obstacles (trees, buildings) with separations of at least 20 obstacle heights | 0.05
III | Area with regular cover of vegetation or buildings or with isolated obstacles with separations of maximum 20 obstacle heights (such as villages, suburban terrain, permanent forest) | 0.3
IV  | Area in which at least 15 % of the surface is covered with buildings and their average height exceeds 15 m | 1.0

2) For the full development of the roughness category, the terrains of types 0 to III must prevail in the upwind direction for a distance of at least 500 m to 1000 m, respectively. For category IV this distance is more than 1 km.
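Relation (8.11) gives the surface drag coefficient directly from the roughness lengths of Table 8.1. A short sketch for terrain categories II and IV:

```python
import math

def drag_coefficient(z0):
    """Surface drag coefficient from the roughness length z0 in m,
    relation (8.11), with z_ref = 10 m and 1/k = 2.5."""
    return (1.0 / (2.5 * math.log(10.0 / z0)))**2

kappa_open = drag_coefficient(0.05)   # terrain category II (open terrain)
kappa_town = drag_coefficient(1.0)    # terrain category IV (urban terrain)
```

Rougher terrain yields a larger drag coefficient, i.e. a stronger retardation of the flow near the ground.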
The variation of the mean wind velocity with height over horizontal terrain of homogeneous roughness can be described by the logarithmic law:

U(z) = \frac{1}{k}\,u_*(z_0)\,\ln\frac{z}{z_0}   (8.12)

where:
z_0 is the roughness length;
z_{min} is the minimum height of validity of the profile;
u_*(z_0) is the friction velocity;
U(z) is the mean wind velocity at height z;
k \cong 0.4 is von Karman's constant.

With 1/k = 2.5, the friction velocity follows from the mean velocity measured at height z:

u_*(z_0) = \frac{U(z)}{2.5\,\ln(z/z_0)}   (8.13)
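Relations (8.12) and (8.13) can be sketched as a round trip: calibrate the friction velocity from one measured mean velocity, then evaluate the profile at another height. The values U(10 m) = 30 m/s and z_0 = 0.05 m are illustrative:

```python
import math

def mean_wind(z, z0, u_star, k=0.4):
    """Mean wind velocity at height z, logarithmic law, relation (8.12)."""
    return (u_star / k) * math.log(z / z0)

def friction_velocity(u, z, z0):
    """Friction velocity from a measured mean velocity, relation (8.13)."""
    return u / (2.5 * math.log(z / z0))

u_star = friction_velocity(30.0, 10.0, 0.05)   # from U(10 m) = 30 m/s
u_50 = mean_wind(50.0, 0.05, u_star)           # profile value at 50 m
```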
The logarithmic profile is valid for moderate and strong winds (mean velocity > 10 m/s) in neutral atmosphere (where the vertical thermal convection of the air may be neglected). Even though relation (8.12) holds true throughout the whole atmospheric boundary layer, its use is recommended only in the lowest 200 m, or 0.1\delta, where \delta is the depth of the boundary layer. Relation (8.12) is more precisely valid as:

U(z) = \frac{1}{k}\,u_*(z_0)\,\ln\frac{z - d_0}{z_0};\quad z \ge d_0   (8.14)

where d_0 is the zero-plane displacement height of the terrain.
The mean wind velocity at height z over terrain of roughness z_0 can be related to the reference wind velocity (measured at z_{ref} = 10 m over open terrain with z_{0,ref} = 0.05 m):

\frac{U(z)}{U_{ref}} = \left(\frac{z_0}{z_{0,ref}}\right)^{0.07}\frac{\ln(z/z_0)}{\ln(z_{ref}/z_{0,ref})}   (8.15)

The mean velocity pressure at height z is:

Q(z) = \frac{1}{2}\,\rho\,U^2(z)   (8.16)

Dividing Q(z) by the reference velocity pressure Q_{ref} one obtains the (squared) roughness factor:

c_r^2(z) = \frac{Q(z)}{Q_{ref}} = \left(\frac{U(z)}{U_{ref}}\right)^2 = \left(\frac{z_0}{z_{0,ref}}\right)^{0.14}\left[\frac{\ln(z/z_0)}{\ln(z_{ref}/z_{0,ref})}\right]^2 = k_r^2(z_0)\,\ln^2\frac{z}{z_0}   for z > z_{min}   (8.17a)

c_r(z) = c_r(z = z_{min})   for z \le z_{min}   (8.17b)

where k_r(z_0) = (z_0/z_{0,ref})^{0.07}/\ln(z_{ref}/z_{0,ref}).
[Figure 8.2. Roughness factor c_r^2(z) = k_r^2(z_0)\,\ln^2(z/z_0) versus height z (0 to 200 m), for terrain categories 0, I, II and IV]
The variances of the turbulent velocity fluctuations are proportional to the square of the friction velocity:

\sigma_u^2 = \beta_u\,u_*^2   Longitudinal   (8.18)

\sigma_v^2 = \beta_v\,u_*^2   Transversal   (8.19)

\sigma_w^2 = \beta_w\,u_*^2   Vertical   (8.20)
For z < 0.1h, the ratios \sigma_v/\sigma_u and \sigma_w/\sigma_u near the ground are constant irrespective of the roughness of the terrain (ESDU, 1993):

\frac{\sigma_v}{\sigma_u} \cong 0.78   (8.21)

\frac{\sigma_w}{\sigma_u} \cong 0.55   (8.22)
[Figure 8.3. Stochastic process of the wind velocity at height z above ground: U(z,t) = U(z) + u(z,t)]
The variance of the longitudinal velocity fluctuations can be expressed from non-linear regression of measurement data, as a function of the terrain roughness (Solari, 1987):

4.5 \le \beta_u = 4.5 - 0.856\,\ln z_0 \le 7.5   (8.23)
The intensity of longitudinal turbulence is the ratio of the root mean square value of the longitudinal velocity fluctuations to the mean wind velocity at height z (i.e. the coefficient of variation of the velocity fluctuations at height z):

I_u(z) = \frac{\sqrt{\overline{u^2(z,t)}}}{U(z)} = \frac{\sigma_u}{U(z)}   (8.24)

Substituting \sigma_u = \sqrt{\beta_u}\,u_* and U(z) = 2.5\,u_*\,\ln(z/z_0):

I_u(z) = \frac{\sqrt{\beta_u}\,u_*}{2.5\,u_*\,\ln(z/z_0)} = \frac{\sqrt{\beta_u}}{2.5\,\ln(z/z_0)}   for z > z_{min}   (8.25a)

I_u(z) = I_u(z = z_{min})   for z \le z_{min}   (8.25b)
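Relations (8.23) and (8.25a) combine into a one-line profile of the turbulence intensity; the sketch below clips \beta_u to the stated bounds and evaluates open terrain (z_0 = 0.05 m) at two heights:

```python
import math

def turbulence_intensity(z, z0):
    """Intensity of longitudinal turbulence, relations (8.23) and (8.25a)."""
    beta_u = min(max(4.5 - 0.856 * math.log(z0), 4.5), 7.5)  # relation (8.23)
    return math.sqrt(beta_u) / (2.5 * math.log(z / z0))      # relation (8.25a)

i10 = turbulence_intensity(10.0, 0.05)     # open terrain, z = 10 m
i100 = turbulence_intensity(100.0, 0.05)   # open terrain, z = 100 m
```

The intensity decreases with height and increases with terrain roughness, consistent with the profiles plotted above.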
[Figure: Intensity of longitudinal turbulence I_u(z) versus height z (0 to 200 m), for terrain categories 0, I, II and IV; I_u ranging from 0.05 to 0.40]
The gust factor is the ratio of the peak (expected maximum) velocity pressure to the mean velocity pressure of the wind:

c_g(z) = \frac{q_{peak}(z)}{Q(z)} = \frac{Q(z) + g\,\sigma_q}{Q(z)} = 1 + g\,V_q = 1 + g\,[2\,I_u(z)]   (8.26)

where:
Q(z) is the mean velocity pressure of the wind;
\sigma_q = \left[\overline{q^2(z,t)}\right]^{1/2} is the root mean square value of the fluctuating velocity pressure   (8.27)
g is the peak factor of the velocity pressure.
[Figure: Gust factor c_g(z) versus height z (0 to 200 m), for terrain categories 0, I, II and IV; c_g ranging from 1.50 to 4.00]
c_g(z) = c_g(z = z_{min})   for z \le z_{min}   (8.28)
The exposure factor is defined as the product of the gust and roughness factors:
ce(z) = cg(z) cr(z).
(8.29)
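Relation (8.29) chains the previous results. The sketch below builds c_r(z) from the log-law velocity ratio (relation 8.15, squared to give the pressure ratio) and uses the illustrative peak factor g = 3.5, an assumption as before:

```python
import math

def roughness_factor(z, z0, z_ref=10.0, z0_ref=0.05):
    """Velocity-pressure roughness factor c_r(z) = Q(z)/Q_ref, from the
    squared log-law ratio of mean velocities (relation 8.15)."""
    cr_v = (z0 / z0_ref)**0.07 * math.log(z / z0) / math.log(z_ref / z0_ref)
    return cr_v**2

def exposure_factor(z, z0, i_u, g=3.5):
    """Exposure factor c_e(z) = c_g(z) * c_r(z), relation (8.29);
    g = 3.5 is an illustrative assumption for the peak factor."""
    c_g = 1.0 + g * 2.0 * i_u   # gust factor, relation (8.26)
    return c_g * roughness_factor(z, z0)

ce = exposure_factor(10.0, 0.05, 0.2)   # open terrain, 10 m, I_u = 0.2
```

At the reference conditions (z = 10 m, open terrain) the roughness factor is unity, so the exposure factor reduces to the gust factor alone.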
[Figure: Exposure factor c_e(z) versus height z (0 to 200 m), for terrain categories 0, I, II and IV; c_e ranging from 1.0 to 5.0]
References

Aldea, A., Arion, C., Ciutina, A., Cornea, T., Dinu, F., Fulop, L., Grecea, D., Stratan, A., Vacareanu, R., Constructii amplasate in zone cu miscari seismice puternice (coordinators Dubina, D., Lungu, D.), Ed. Orizonturi Universitare, Timisoara, 2003, ISBN 973-8391-90-3, 479 p.
Benjamin, J. R. & Cornell, C. A., Probability, Statistics and Decisions for Civil Engineers, John Wiley, New York, 1970
Cornell, C. A., A Probability-Based Structural Code, ACI Journal, Vol. 66, pp. 974-985, 1969
FEMA 356, Prestandard and Commentary for the Seismic Rehabilitation of Buildings, FEMA & ASCE, 2000
Ferry Borges, J. & Castanheta, M., Siguranta structurilor (translation from English), Editura Tehnica, 1974
Hahn, G. J. & Shapiro, S. S., Statistical Models in Engineering, John Wiley & Sons, 1967
Kreyszig, E., Advanced Engineering Mathematics, fourth edition, John Wiley & Sons, 1979
Lungu, D., Vacareanu, R., Aldea, A., Arion, C., Advanced Structural Analysis, Conspress, 2000
Madsen, H. O., Krenk, S., Lind, N. C., Methods of Structural Safety, Prentice-Hall, 1986
Melchers, R. E., Structural Reliability Analysis and Prediction, John Wiley & Sons, 2nd Edition, 1999
Papoulis, A., Probability, Random Variables, and Stochastic Processes, McGraw-Hill, New York, 1991
Rosenblueth, E., Esteva, L., Reliability Bases for some Mexican Codes, Probabilistic Design of Reinforced Concrete Buildings, ACI Publication SP-31, pp. 1-41, 1972