Concrete Slabs
Abstract
In this paper, the reliability analysis of RC two-way and one-way rectangular slabs
under uniformly distributed loading is investigated. First, load modelling and
approximate methods of reliability analysis are reviewed. The relevant deterministic
design procedure for RC slabs is explained. Next, the modelling of the basic
variables is described, following which the limit state function is set up. Finally, the
reliability analyses of one-way and two-way slabs with different boundary conditions
using various time-variant reliability methods are described. The results of the
analyses are discussed in the context of the Indian code of practice IS-456.
1 Introduction
The variability of loading on structures is known from the results of load surveys
and field measurements (Mitchell and Woodgate, 1971; Culver, 1976).
During the lifetime of any structure, it is subjected to many loads that exhibit
temporal and spatial randomness. The loads (dead, live, wind, earthquake,
etc.) may act individually or in combination with other loads. The `load
combination' problem has been traditionally handled by using experience and
judgement. However, these over-simplified procedures fail to rationally account
for uncertainties and risk. In recognition of these shortcomings, code-making
authorities in some countries are engaged in a process of `code calibration' and
formulation of improved recommendations. First and foremost in this procedure
is the modelling of the individual loads and then the load combination
problem. The presently used models for load modelling and load combination
studies are reviewed in the following sections.
The dead load usually consists of the self-weight of the structure, which
comprises the weight of structural and non-structural components.
The basic load model is given by

w_DL = ∫_V γ_conc dV   (1)

where γ_conc is the equivalent unit weight of the material and V is the volume
bounded by the boundaries of the material.
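As a brief numerical illustration of Eq. (1), for a slab of uniform thickness the integral reduces to γ_conc times the slab volume. All numerical values below (panel dimensions, unit weight) are assumed for illustration only:

```python
# Dead load of a hypothetical RC slab panel via Eq. (1), taking the
# unit weight of reinforced concrete as 25 kN/m³ (an assumed value).
gamma_conc = 25.0            # equivalent unit weight, kN/m³
Lx, Ly, t = 5.0, 4.0, 0.15   # assumed plan dimensions and thickness, m
V = Lx * Ly * t              # volume bounded by the material, m³
w_DL = gamma_conc * V        # total dead load, kN (here 75.0 kN)
w_udl = w_DL / (Lx * Ly)     # expressed as a UDL, kN/m² (here 3.75)
```
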
The basic model is given as (Pier and Cornell, 1973; McGuire and Cornell,
1974; Corotis and Tsay, 1983):

w_ij(x, y) = m + γ_bldg + γ_flr + ε_ij(x, y)   (2)

where w_ij(x, y) is the arbitrary point in time (APIT) load intensity at co-ordinates
(x, y) on the ith floor of the jth building; m is the mean of the load intensity;
γ_bldg is the deviation of the floor load from the mean m for building j; γ_flr is
the deviation of the floor load from the mean m for floor i; and ε_ij is a zero-mean
random field describing the spatial variability on the ith floor of the jth building.
It is assumed that the γ terms and ε_ij are independent. The load effect
S(x, y) in linear elastic systems may be obtained by applying the principle of
superposition as:
S(x, y) = ∫∫_A w_ij(x, y) I(x, y) dx dy   (3)

where I(x, y) is defined as the influence function of the load effect over the
area under consideration. The equivalent uniformly distributed load is then

q(x, y) = [∫∫_A w_ij(x, y) I(x, y) dx dy] / [∫∫_A I(x, y) dx dy]   (4)
E[q(x, y)] = m;   Var[q(x, y)] = σ_bldg² + σ_flr² + k π d σ_ε² Υ(A) / A   (5)

Υ(A) = erf(√A / d) − (d / √(Aπ)) [1 − exp(−A / d²)]   (6)

k = [∫∫_A I²(x, y) dx dy] / [∫∫_A I(x, y) dx dy]²   (7)
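The factor k in Eq. (7) depends only on the influence function and can be evaluated numerically. The sketch below assumes, purely for illustration, a half-sine influence surface over a rectangular area, for which the closed-form value is k = π⁴/(64LB):

```python
import numpy as np

def peak_factor_k(I, L, B, n=400):
    """Numerically evaluate k = ∫∫ I² dA / (∫∫ I dA)² of Eq. (7)
    by a midpoint rule over the rectangular area A = L × B."""
    dx, dy = L / n, B / n
    x = (np.arange(n) + 0.5) * dx          # cell-centre abscissae
    y = (np.arange(n) + 0.5) * dy
    X, Y = np.meshgrid(x, y, indexing="ij")
    Z = I(X, Y)
    int_I = Z.sum() * dx * dy              # ∫∫_A I dx dy
    int_I2 = (Z**2).sum() * dx * dy        # ∫∫_A I² dx dy
    return int_I2 / int_I**2

# Assumed influence surface: a half-sine wave in each direction.
L = B = 1.0
I = lambda x, y: np.sin(np.pi * x / L) * np.sin(np.pi * y / B)
k = peak_factor_k(I, L, B)   # analytically π⁴/(64 L B) ≈ 1.522 here
```
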
The common assumption in all reliability studies is that the safe set is simply
connected and that the limit state function is continuous and piecewise
differentiable. The load processes used in this study are stationary in time and
homogeneous in space, and the time and space characteristics are assumed to be
independent. Linear load combinations are assumed and the upcrossing is over a
constant threshold.
Time-invariant reliability methods use random variables to describe the uncertain
parameters involved. The reliability index and the probability of failure are
the main descriptors used to assess the risk of failure. For obtaining the
reliability index, the two most commonly used methods are the first order
reliability method (FORM) and the second order reliability method (SORM). As the
names suggest, the former uses a linear (first order Taylor series) expansion of
the limit state function, whereas the latter retains the second order Taylor
series terms of the function (in terms of the curvatures).
β_HL = min √(uᵀu)   (8)

where the minimum is taken over the limit state surface g(u) = 0 in the standard
normal space.
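The constrained minimisation in Eq. (8) is commonly carried out with the HL-RF iteration. The following is a minimal sketch for independent normal basic variables; the limit state g = R − S and all the statistics below are assumed for illustration:

```python
import numpy as np

def hasofer_lind(g, grad_g, mu, sigma, tol=1e-10, itmax=100):
    """Hasofer-Lind index via the HL-RF iteration for independent
    normal basic variables; g and grad_g act on the physical
    variables x = mu + sigma * u (u is the standardised vector)."""
    u = np.zeros_like(mu, dtype=float)
    for _ in range(itmax):
        x = mu + sigma * u
        gval = g(x)
        grad_u = grad_g(x) * sigma              # chain rule: ∂g/∂u
        u_new = (grad_u @ u - gval) / (grad_u @ grad_u) * grad_u
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return float(np.linalg.norm(u))             # β_HL = min √(uᵀu)

# Assumed resistance R ~ N(30, 3²) and load effect S ~ N(20, 4²):
mu = np.array([30.0, 20.0])
sigma = np.array([3.0, 4.0])
g = lambda x: x[0] - x[1]                       # limit state g = R − S
grad = lambda x: np.array([1.0, -1.0])
beta = hasofer_lind(g, grad, mu, sigma)
```

For this linear limit state the iteration converges in two steps to the exact value β = (µ_R − µ_S)/√(σ_R² + σ_S²) = 2.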
More general transformations than the above-mentioned methods are the Rosenblatt
transformation (Rosenblatt, 1952) and the Nataf transformation (Ditlevsen and
Madsen, 1995). If the joint probability density function is completely described,
the Rosenblatt transformation can be used to obtain a set of independent standard
normal random variables. If this information is not available (i.e., only the
marginal probability distributions and the correlation structure are available),
the Nataf transformation is used. The Rosenblatt transformation is explained
below.
The vector U of independent standard normal variables may be obtained by the
following relation:

u_i = Φ⁻¹[F(x_i | x_1, x_2, ..., x_{i−1})],   i = 1, 2, ..., n   (9)

where the conditional distribution function is given by

F(x_i | x_1, x_2, ..., x_{i−1}) = [∫_{−∞}^{x_i} f(x_1, x_2, ..., x_{i−1}, s) ds] / f(x_1, x_2, ..., x_{i−1})   (10)
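When the joint density is specified through a marginal and a conditional distribution, Eqs. (9)-(10) amount to mapping each successive conditional CDF through Φ⁻¹. The dependent pair below (an exponential X1 and a conditionally exponential X2) is a constructed example, not taken from the paper:

```python
from statistics import NormalDist
from math import exp

ndist = NormalDist()  # standard normal: .cdf is Φ, .inv_cdf is Φ⁻¹

def rosenblatt(x1, x2):
    """Rosenblatt transformation for a constructed dependent pair:
    X1 ~ Exp(1) and, conditionally, X2 | X1 = x1 ~ Exp(mean = x1).
    Successive conditioning (Eqs. 9-10) maps (x1, x2) to a pair of
    independent standard normal variables (u1, u2)."""
    F1 = 1.0 - exp(-x1)            # marginal CDF of X1
    F2_cond = 1.0 - exp(-x2 / x1)  # conditional CDF of X2 given X1 = x1
    return ndist.inv_cdf(F1), ndist.inv_cdf(F2_cond)

u1, u2 = rosenblatt(1.0, 2.0)
# The medians map to zero: rosenblatt(ln 2, (ln 2)²) gives (0, 0).
```
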
The second-order reliability method (SORM) was developed using the principle of
quadratic approximations (Fiessler et al., 1979). A simple closed-form solution
using a second order approximation was developed by Breitung (1984) as:

P_f ≈ Φ(−β) ∏_{i=1}^{n−1} (1 + β κ_i)^{−1/2}   (11)

where the κ_i are the principal curvatures of the limit state function at the
design point. This result has been derived using the concept of asymptotic
integration.
The corresponding first order estimate of the failure probability is

P_f = Φ(−β)   (12)
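Eqs. (11)-(12) are straightforward to evaluate numerically; the curvature values below are assumed for illustration:

```python
from statistics import NormalDist
from math import prod

Phi = NormalDist().cdf   # standard normal distribution function

def breitung_pf(beta, kappas):
    """Breitung's asymptotic SORM estimate (cf. Eq. 11):
    P_f ≈ Φ(−β) · Π_i (1 + β κ_i)^(−1/2), where the κ_i are the
    principal curvatures at the design point."""
    return Phi(-beta) * prod((1.0 + beta * k) ** -0.5 for k in kappas)

beta = 3.0
pf_form = Phi(-beta)                       # first-order estimate, Eq. (12)
pf_sorm = breitung_pf(beta, [0.10, 0.05])  # assumed curvatures
# Positive curvatures reduce the estimate below the FORM value.
```
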
The Monte Carlo simulation (MCS) method can be used to obtain a more exact
value of the probability of failure, especially when the limit state function
is highly non-linear. Various sampling techniques have been developed to
increase the efficiency of the MCS method. These techniques constrain the
sample to be representative or distort the sample to emphasise the important
aspects of the failure function in question. Some of these methods are
importance sampling, adaptive sampling, randomisation sampling, etc. (Melchers,
1999). The result can also be expressed in terms of the generalised reliability
index, which is defined as −Φ⁻¹(P_f).
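A minimal crude MCS sketch for an assumed linear limit state g = R − S with normal variables (all statistics assumed for illustration), including the generalised reliability index −Φ⁻¹(P_f):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

# Assumed limit state g = R − S with independent normal variables, so the
# exact answer is known: β = (30 − 20)/√(3² + 4²) = 2, P_f = Φ(−2) ≈ 0.0228.
n = 1_000_000
R = rng.normal(30.0, 3.0, n)              # resistance samples
S = rng.normal(20.0, 4.0, n)              # load-effect samples
pf_mcs = float(np.mean(R - S < 0.0))      # crude MCS estimate of P_f
beta_gen = -NormalDist().inv_cdf(pf_mcs)  # generalised reliability index
```
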
3.2 Ferry Borges Castenheta (FBC) Model
The Ferry Borges-Castenheta load model has received much attention due to its
usefulness in code specification and calibration. A scalar FBC process is a
sequence of rectangular load pulses of fixed duration τ following immediately
after each other (no gaps between the pulses). The elements of this sequence
are all mutually independent and identically distributed random variables.
Let this be denoted by X(t, τ). An n-combination FBC process, given by
{X_1(t, τ_1), X_2(t, τ_2), ..., X_n(t, τ_n)}, is a vector process comprising
scalar FBC processes such that τ_1 ≥ τ_2 ≥ ... ≥ τ_n and τ_i/τ_j ∈ ℤ for
i ≤ j. A 3-combination FBC process is illustrated in Fig. 2.3. The T-duration
envelope process of an FBC process X(t, τ) is again an FBC process X(t, T) in
which the pulse duration T is an integer multiple of τ and in which the
amplitude of the envelope is defined as the maximum of X(t, τ) over the time T.
Consider the
sum ∑_{i=1}^{n} X_i(t, τ_i). Then, in the interval [0, τ_1], we have:

max ∑_{i=1}^{n} X_i(t, τ_i) = max [ ∑_{i=1}^{n−2} X_i(t, τ_i) + Z_{n−1}(t, τ_{n−1}) ]   (13)

where Z_{n−1}(t, τ_{n−1}) = X_{n−1}(t, τ_{n−1}) + X_n(t, τ_{n−1}) is another FBC process.
Thus, the n-combination problem has been converted into an (n−1)-combination
problem. The distribution function of the amplitudes of Z_{n−1}(t, τ_{n−1}) is
derived as:

F_{Z_{n−1}(t, τ_{n−1})}(z) = ∫_{−∞}^{∞} F_{X_n(t, τ_{n−1})}(z − x) f_{X_{n−1}(t, τ_{n−1})}(x) dx   (14)

where F_{X_n(t, τ_{n−1})}(x) = [F_{X_n(t, τ_n)}(x)]^{τ_{n−1}/τ_n}. The
repetition of these steps finally leads to a single scalar FBC process. Thus,
the distribution function of the maximal load effect can be obtained by n−1
successive convolution integrations. Generally, this calculation would be
difficult by the use of standard numerical methods. However, if it is assumed
that the amplitude distributions are absolutely continuous, the first order
reliability method can be used. The Rackwitz-Fiessler algorithm has been
used in this regard (Rackwitz and Fiessler, 1978). By this procedure, the
cumulative distribution functions and the probability density functions of the
actual and the equivalent normal variables are equated at the design point,
say x, as discussed in Section 2.3.1. Then, the mean and the standard
deviation of the equivalent normal distribution are given as follows:

µ = x − σ Φ⁻¹(F(x));   σ = φ[Φ⁻¹(F(x))] / f(x)   (15)
where φ and Φ are the standard normal density function and distribution
function respectively.
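The envelope exponentiation and convolution of Eq. (14) can also be evaluated directly by numerical integration. The sketch below assumes, for illustration only, normal pulse amplitudes and an integer repetition ratio r = τ_{n−1}/τ_n:

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(t):
    """Standard normal distribution function Φ."""
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

def fbc_sum_cdf(z, mu1, s1, mu2, s2, r, n=4001, span=8.0):
    """CDF of Z = X_{n-1} + (τ_{n-1}-envelope of X_n), cf. Eq. (14),
    assuming normal pulse amplitudes (an illustration only) and an
    integer repetition ratio r = τ_{n-1}/τ_n, so F_env = F_{X_n}**r.
    The convolution integral is evaluated with the trapezoidal rule."""
    x = np.linspace(mu1 - span * s1, mu1 + span * s1, n)   # X_{n-1} values
    f1 = np.exp(-0.5 * ((x - mu1) / s1) ** 2) / (s1 * np.sqrt(2 * np.pi))
    Fenv = np.array([norm_cdf((z - xi - mu2) / s2) for xi in x]) ** r
    g = Fenv * f1
    return float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(x)))

# One pulse of X1 ~ N(5, 1) plus the maximum of r = 4 pulses of
# X2 ~ N(3, 0.5): probability that the combination stays below 10.
p_env = fbc_sum_cdf(10.0, 5.0, 1.0, 3.0, 0.5, r=4)
```

For r = 1 the envelope is the pulse itself and the result reduces to the CDF of a sum of two normal variables, which provides a convenient check.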
Let us consider the case when n = 2. This procedure calculates the approximate
value of the distribution function of X_1(t) + X_2(t, τ_1) at any pre-selected
value, say z. A set of points (x_1, x_2) is chosen such that x_1 + x_2 = z.
The distribution function of X_1(t) is approximated at the point x_1 by the
normal distribution with mean µ_1 and standard deviation σ_1 as given in
Eqn. (15). The distribution function of the τ_1-duration envelope X_2(t, τ_1)
of X_2(t) is again approximated as a normal distribution in the same fashion,
with statistical parameters µ_2 and σ_2. Thus, the distribution function of
X_1(t) + X_2(t, τ_1) is approximated as N(µ_1 + µ_2, σ_1² + σ_2²). It is clear
that the accuracy of this method depends on the choice of x_1 and x_2. A new
approximation point
(x_1, x_2) is chosen on the straight line x_1 + x_2 = z at which the product of
the two approximating normal density functions with statistical parameters
(µ_1, σ_1) and (µ_2, σ_2) has the maximal value. This point is

(x_1, x_2) ≡ ( µ_1 + β_z σ_1² / √(σ_1² + σ_2²),  µ_2 + β_z σ_2² / √(σ_1² + σ_2²) )   (16)

where β_z = (z − µ_1 − µ_2) / √(σ_1² + σ_2²). This iterative procedure is
carried out until convergence is reached. Turkstra and Madsen (1980) have shown
the application of the Rackwitz-Fiessler algorithm with respect to the FBC
model.
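The full n = 2 procedure of Eqs. (15)-(16) can be sketched as follows; the pulse distributions (a lognormal X1 and a normal X2 with r = 4 repetitions) and all parameter values are assumed for illustration only:

```python
from statistics import NormalDist
from math import log, sqrt

nd = NormalDist()

def equiv_normal(F, f, x):
    """Normal-tail approximation of Eq. (15) at the point x."""
    u = nd.inv_cdf(F(x))
    s = nd.pdf(u) / f(x)     # σ = φ(Φ⁻¹(F(x))) / f(x)
    return x - s * u, s      # µ = x − σ Φ⁻¹(F(x))

def rf_combination_cdf(z, F1, f1, F2, f2, r, iters=30):
    """Rackwitz-Fiessler estimate of P[X1(t) + X2(t, τ1) ≤ z] for the
    2-combination FBC problem (Eqs. 15-16); the τ1-duration envelope
    of X2 has CDF F2**r, with r = τ1/τ2 pulse repetitions."""
    Fe = lambda x: F2(x) ** r                    # envelope CDF
    fe = lambda x: r * F2(x) ** (r - 1) * f2(x)  # envelope density
    x1 = x2 = z / 2.0                            # start on the line x1 + x2 = z
    for _ in range(iters):
        m1, s1 = equiv_normal(F1, f1, x1)
        m2, s2 = equiv_normal(Fe, fe, x2)
        sz = sqrt(s1 * s1 + s2 * s2)
        beta_z = (z - m1 - m2) / sz
        x1 = m1 + beta_z * s1 * s1 / sz          # new point, Eq. (16)
        x2 = m2 + beta_z * s2 * s2 / sz
    return nd.cdf(beta_z)

# Assumed pulse amplitudes: ln X1 ~ N(0, 0.25) and X2 ~ N(1.0, 0.2),
# with r = 4 repetitions of X2 within each pulse of X1.
F1 = lambda x: nd.cdf(log(x) / 0.25)
f1 = lambda x: nd.pdf(log(x) / 0.25) / (0.25 * x)
F2 = lambda x: nd.cdf((x - 1.0) / 0.2)
f2 = lambda x: nd.pdf((x - 1.0) / 0.2) / 0.2
p = rf_combination_cdf(3.0, F1, f1, F2, f2, r=4)
```
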