
Load Combination Studies for Reinforced Concrete Slabs

K. Krishnan Nair, A. Meher Prasad and D. Menon

Department of Civil and Environmental Engineering, Stanford University, CA, USA, 94305. (Corresponding Author)
Structural Engineering Laboratory, Department of Civil Engineering, Indian Institute of Technology Madras, Chennai, India 673601.

Abstract
In this paper, the reliability analysis of RC two-way and one-way rectangular slabs under uniformly distributed loading is investigated. First, load modelling and approximate methods of reliability analysis are reviewed. The relevant deterministic design procedure for RC slabs is explained. Next, the modelling of the basic variables is described, following which the limit state function is set up. Finally, the reliability analyses of one-way and two-way slabs with different boundary conditions using the various time-variant reliability methods are described. The results of the analyses are discussed in the context of the Indian code of practice IS-456.

1 Introduction

During the lifetime of any structure, it is subjected to different loads. Most of these loads are random and exhibit spatial and temporal variations. These loads may act singly or in combination with one another. Thus, it is necessary to model these loads, accounting for their variability, and to study their combination characteristics. These loads may be realistically modelled in a probabilistic framework as random processes. Loads may be broadly classified, according to their time periods, into micro-scale fluctuations (very close to the natural period of the structure) and macro-scale fluctuations (larger than the natural period of the structure). The former is essentially dynamic, whereas the latter is primarily static. Although extensive research has been carried out in the case of

Email addresses: kknair@stanford.edu, prasadam,dmenon@civil.iitm.ac.in (K. Krishnan Nair, A. Meher Prasad and D. Menon).

Preprint submitted to Elsevier November 15, 2008


micro-scale loading (wind and earthquake loads) and the associated structural
response, it is seen that macro-scale loading has not been explored in detail.

The variability of loading on structures is known from the results of load surveys and field measurements (Mitchell and Woodgate, 1971; Culver, 1976). During its lifetime, a structure is subjected to many loads that exhibit temporal and spatial randomness. The loads (dead, live, wind, earthquake, etc.) may act individually or in combination with other loads. The 'load combination' problem has traditionally been handled using experience and judgement. However, these over-simplified procedures fail to rationally account for uncertainties and risk. In recognition of these shortcomings, code-making authorities in some countries are engaged in a process of 'code calibration' and the formulation of improved recommendations. First and foremost in this procedure is the modelling of the individual loads, followed by the load combination problem. The models presently used for load modelling and load combination studies are reviewed in the following sections.

2 Review of Load and Resistance Models

Ellingwood et al (1980) classified loads into permanent, sustained and transient load cases (Fig. 2.1). Permanent loads (dead loads) maintain a constant magnitude with a relatively small random variation. Sustained loads (live loads) may be thought of as step functions in time. Transient loads (extreme wind forces and earthquake forces) occur infrequently, last for a very short duration and may be thought of as pulses. In this section, the dead, live and wind loads used for code calibration are reviewed. It is seen that Gaussian and renewal pulse processes provide a reasonable representation of macro-time loading.

2.1 Dead Load

The dead load usually consists of the self-weight of the structure. The self-weight comprises the weight of structural and non-structural components. It is seen that the dead load has the following characteristics:

• The probability of occurrence at an arbitrary point in time (APIT) is close to one;
• The variability with time is normally negligible;
• The uncertainty is small in comparison to other loads, and the dead load is thus modelled as a normal or log-normal random variable.

The basic load model is given by

w_DL = ∫_V γ_conc dV    (1)

where γ_conc is the equivalent unit weight of the material and V is the volume bounded by the boundaries of the material.
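As a minimal illustration of Eq. (1), when the unit weight is constant the volume integral reduces to the unit weight times the slab thickness per unit plan area; the unit weight and thickness used below are illustrative values, not taken from this study.

```python
# Minimal sketch of Eq. (1): for a solid slab of constant unit weight, the
# volume integral w_DL = ∫ γ_conc dV reduces to γ_conc * thickness per unit
# plan area. The numerical values are illustrative only.

def slab_dead_load(gamma_conc: float, thickness: float) -> float:
    """Dead load intensity (kN/m^2) of a solid slab of uniform thickness (m),
    given the concrete unit weight (kN/m^3)."""
    return gamma_conc * thickness

w_dl = slab_dead_load(25.0, 0.15)  # 25 kN/m^3 concrete, 150 mm slab
print(w_dl)  # 3.75 kN/m^2
```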

2.2 Live Load

The live load (sometimes referred to as 'imposed load') usually consists of the weight of the furniture, equipment, and the occupants of the building. The live load is categorised in terms of occupancy, such as office buildings, domestic buildings, hotels, etc. The live load is modelled as a random field varying both spatially and temporally. With respect to the temporal variation, it has a sustained part as well as a transient part (Ellingwood et al, 1980). The sustained part generally consists of the gravity loading due to furniture, equipment, people, etc. The magnitude of the sustained loading depends on the type of occupancy and can vary from time to time, but is nearly constant within a given time interval (Fig. 2.1b). The transient part of the load includes gravity forces due to extraordinary loads such as temporary storage, crowding of people, etc. This has a very short duration (Fig. 2.1c) and may be modelled as an impulse. Some features of the live loads in office buildings obtained by load surveys (Mitchell and Woodgate, 1971; Culver, 1976; Choi, 1991) are:

• Occupancy changes produce changes in the sustained loading;
• Variation of loading within rooms and between rooms bears some correlation;
• Correlation exists between loadings on different floors;
• Loading intensity is area dependent.

The basic model is given as (Pier and Cornell, 1973; McGuire and Cornell, 1974; Corotis and Tsay, 1983):

w_ij(x, y) = m + γ_bldg + γ_flr + ε_ij(x, y)    (2)

where w_ij is the arbitrary point in time (APIT) load intensity at co-ordinates (x, y) on the i-th floor of the j-th building; m is the mean of the load intensity; γ_bldg is the deviation of the floor load from the mean m for building j; γ_flr is the deviation of the floor load from the mean m for floor i; and ε_ij is a zero-mean random field describing the spatial variability on the i-th floor of the j-th building. It is assumed that γ_bldg, γ_flr and ε_ij are independent. The load effect S(x, y) in linear elastic systems may be obtained by applying the principle of superposition as:

S(x, y) = ∬_A w_ij(x, y) I(x, y) dx dy    (3)

where I(x, y) is defined as the influence function of the load effect over the area under consideration.
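The superposition integral in Eq. (3) can be sketched numerically with a midpoint rule; the load field and influence function below are illustrative assumptions (a uniform load with a unit influence function), chosen so that the result reduces to load intensity times area.

```python
# Sketch of Eq. (3): S = ∬_A w(x, y) I(x, y) dx dy, evaluated with a simple
# midpoint rule over a rectangular panel [0, a] x [0, b]. The load field w and
# influence function I used below are illustrative assumptions.

def load_effect(w, I, a, b, n=100):
    """Midpoint-rule approximation of the double integral of w * I."""
    dx, dy = a / n, b / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * dx
        for j in range(n):
            y = (j + 0.5) * dy
            total += w(x, y) * I(x, y) * dx * dy
    return total

# A uniform 2 kN/m^2 load with a unit influence function recovers w * area.
S = load_effect(lambda x, y: 2.0, lambda x, y: 1.0, a=4.0, b=3.0)
print(S)  # 24.0
```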

An equivalent uniformly distributed load (UDL) q(x, y) of the sustained component of the live load is given as:

q(x, y) = ∬_A w_ij(x, y) I(x, y) dx dy / ∬_A I(x, y) dx dy    (4)

The statistical parameters of q(x, y) are derived as:

E[q(x, y)] = m;   Var[q(x, y)] = σ²_bldg + σ²_flr + k π d σ²_ε Υ(A) / A    (5)

Υ(A) = erf(√A / d) − (d / √(πA)) [1 − exp(−A / d²)]    (6)

k = ∬_A I²(x, y) dx dy / [∬_A I(x, y) dx dy]²    (7)

where A is the area under consideration and d is a constant to be evaluated. The three parameters σ_bldg, σ_flr and σ_ε have to be evaluated from the survey data; they are derived by Pier and Cornell (1973). It has also been demonstrated that the maximum equivalent uniformly distributed load is relatively insensitive to the bay aspect ratio, the average rate of change of occupancy and the lifetime of the building (Pier and Cornell, 1973; McGuire and Cornell, 1974).
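Equations (5) and (6) can be sketched as below; the survey-derived parameters σ_bldg, σ_flr, σ_ε, d and the influence constant k are placeholders, not survey results, and the form of Υ(A) follows the reconstruction in Eq. (6).

```python
import math

# Sketch of Eqs. (5)-(6): statistics of the equivalent UDL q(x, y). The
# parameters sigma_bldg, sigma_flr, sigma_eps, d and the influence-function
# constant k must come from survey data (Pier and Cornell, 1973); the values
# below are placeholders.

def upsilon(A: float, d: float) -> float:
    """Reduction function Υ(A) of Eq. (6)."""
    r = math.sqrt(A) / d
    return math.erf(r) - (d / math.sqrt(math.pi * A)) * (1.0 - math.exp(-r * r))

def equiv_udl_stats(m, s_bldg, s_flr, s_eps, d, A, k):
    """Mean and variance of q(x, y) per Eq. (5)."""
    var_q = s_bldg ** 2 + s_flr ** 2 + k * math.pi * d * s_eps ** 2 * upsilon(A, d) / A
    return m, var_q

mean_q, var_q = equiv_udl_stats(m=0.55, s_bldg=0.10, s_flr=0.15, s_eps=0.30,
                                d=1.5, A=20.0, k=2.2)
print(mean_q, var_q)
```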

3 Methods of Reliability Analysis

The common assumption in all reliability studies is that the safe set is simply connected and that the limit state function is continuous and piecewise differentiable. The load processes used in this study are stationary in time and homogeneous in space, and the time and space characteristics are assumed to be independent. Linear load combinations are assumed, and the upcrossing is over a constant threshold.

3.1 Time Invariant (TI) Reliability Method

Time invariant reliability methods use random variables to describe the uncertain parameters involved. The reliability index and the probability of failure are the main descriptors used to assess the risk of failure. For obtaining the reliability index, the two most commonly used methods are the first order reliability method (FORM) and the second order reliability method (SORM). As the names suggest, the former uses a linear (first order Taylor series) expansion of the limit state function, whereas the latter includes the second order Taylor series terms of the function (in terms of the curvature).

The Hasofer-Lind index (β_HL) is defined as the minimum distance from the origin of the transformed space of independent zero-mean unit normal variables to the limit state function (Hasofer and Lind, 1974). This index is formulation invariant and is the most commonly used reliability index. β_HL is defined as:

β_HL = min √(uᵀu)    (8)

where u is the vector of the transformed independent unit normal random variables (Fig. 2.2). The minimum distance point on the limit state function is called the 'design point'. The deficiency in the Hasofer-Lind formulation is that it is applicable to normal variables only. Since a normal random variable is completely described by its mean and standard deviation, any two approximating conditions could be used to construct an equivalent normal for a non-normal variable. The Rackwitz-Fiessler algorithm (Rackwitz and Fiessler, 1978), the Chen-Lind method (Chen and Lind, 1983) and the Wu-Wirsching algorithm (Wu and Wirsching, 1987) can be used for this purpose.
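A minimal FORM sketch using the HL-RF iteration in standard normal space is given below; the limit state g = R − S with independent normal R and S is an illustrative assumption, for which β has the closed form (µ_R − µ_S)/√(σ_R² + σ_S²) that the iteration should reproduce.

```python
import math

# Sketch of FORM via the HL-RF iteration in standard normal space. The limit
# state g = R - S with independent normal variables is illustrative; for this
# linear case the iteration reproduces the exact reliability index.

mu_R, sig_R = 40.0, 5.0  # resistance statistics (illustrative)
mu_S, sig_S = 25.0, 4.0  # load-effect statistics (illustrative)

def g(u):
    # map standard normal u back to the physical variables, then evaluate g
    return (mu_R + sig_R * u[0]) - (mu_S + sig_S * u[1])

def grad_g(u):
    return [sig_R, -sig_S]  # constant gradient for a linear limit state

def hlrf_beta(g, grad_g, n=2, iters=25):
    u = [0.0] * n
    for _ in range(iters):
        dg = grad_g(u)
        norm2 = sum(d * d for d in dg)
        c = (sum(d * ui for d, ui in zip(dg, u)) - g(u)) / norm2
        u = [c * d for d in dg]  # HL-RF update onto the linearised surface
    return math.sqrt(sum(ui * ui for ui in u))

beta = hlrf_beta(g, grad_g)
print(beta)  # equals (40 - 25) / sqrt(5^2 + 4^2) = 15 / sqrt(41)
```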

More general transformations than the above-mentioned methods are the Rosenblatt transformation (Rosenblatt, 1952) and the Nataf transformation (Ditlevsen and Madsen, 1995). If the joint probability density function is completely described, the Rosenblatt transformation can be used to obtain a set of independent standard normal random variables. If this information is not available (i.e., only the marginal probability distributions and the correlation structure are available), the Nataf transformation is used. The Rosenblatt transformation is explained below.

Consider the vector of n random variables X with a joint probability distribution function F_X(x). Then, a vector of independent standard normal variables U may be obtained by the following relations:

u₁ = Φ⁻¹[F₁(x₁)]
u₂ = Φ⁻¹[F₂(x₂ | x₁)]
⋮
u_n = Φ⁻¹[F_n(x_n | x₁, x₂, ..., x_{n−1})]    (9)

Equation (9) represents the Rosenblatt transformation. The conditional probability distribution function may be derived as

F(x_i | x₁, x₂, ..., x_{i−1}) = ∫_{−∞}^{x_i} f(x₁, x₂, ..., x_{i−1}, s) ds / f(x₁, x₂, ..., x_{i−1})    (10)
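For a bivariate normal pair, where the conditional distribution in Eq. (10) is available in closed form, the transformation in Eq. (9) can be sketched as below; all parameter values are illustrative.

```python
import math
from statistics import NormalDist

# Sketch of the Rosenblatt transformation (Eq. 9) for a bivariate normal pair
# (X1, X2) with correlation rho, for which the conditional CDF F(x2 | x1) is
# normal with a shifted mean and reduced standard deviation. Parameter values
# are illustrative.

std = NormalDist()

def rosenblatt_bivariate_normal(x1, x2, mu1, s1, mu2, s2, rho):
    u1 = std.inv_cdf(NormalDist(mu1, s1).cdf(x1))
    mu_c = mu2 + rho * (s2 / s1) * (x1 - mu1)  # conditional mean of X2 | x1
    s_c = s2 * math.sqrt(1.0 - rho * rho)      # conditional std of X2 | x1
    u2 = std.inv_cdf(NormalDist(mu_c, s_c).cdf(x2))
    return u1, u2

u1, u2 = rosenblatt_bivariate_normal(12.0, 9.0, mu1=10.0, s1=2.0,
                                     mu2=8.0, s2=1.5, rho=0.6)
print(u1, u2)  # independent standard normal coordinates
```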

The second-order reliability method (SORM) was developed using the principle of quadratic approximations (Fiessler et al, 1979). A simple closed-form solution using a second order approximation was developed by Breitung (1984) as:

P_f ≈ Φ(−β) ∏_{i=1}^{n−1} (1 + βκ_i)^{−1/2}    (11)

where κ_i is the i-th principal curvature of the limit state function at the design point. This result is derived using the concept of asymptotic integration.
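Breitung's correction in Eq. (11) is straightforward to evaluate; the reliability index and principal curvatures below are illustrative values, not results for any slab.

```python
import math
from statistics import NormalDist

# Sketch of Breitung's SORM estimate (Eq. 11): the FORM probability Phi(-beta)
# scaled by a curvature correction. beta and the curvatures kappa_i are
# illustrative values only.

def sorm_breitung(beta, kappas):
    pf = NormalDist().cdf(-beta)
    for k in kappas:
        pf /= math.sqrt(1.0 + beta * k)
    return pf

pf_sorm = sorm_breitung(beta=3.0, kappas=[0.10, 0.05])
pf_form = NormalDist().cdf(-3.0)
print(pf_sorm, pf_form)  # positive curvatures reduce Pf below the FORM value
```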

The nominal probability of failure P_f corresponding to a particular value of β may be defined as

P_f = Φ(−β)    (12)

The Monte Carlo simulation (MCS) method can be used to obtain a more exact value of the probability of failure, especially when the limit state function is highly non-linear. Various sampling techniques have been developed to increase the efficiency of the MCS method. These techniques constrain the sample to be representative, or distort the sample to emphasise the important aspects of the failure function in question. Some of these methods are importance sampling, adaptive sampling, randomisation sampling, etc. (Melchers, 1999). The result can also be expressed in terms of the generalised reliability index, which is defined as −Φ⁻¹(P_f).
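A crude Monte Carlo sketch for a linear limit state with normal variables (illustrative parameters, not from this study) shows the simulated P_f agreeing with Φ(−β):

```python
import math
import random
from statistics import NormalDist

# Sketch: crude Monte Carlo estimate of Pf for g = R - S with independent
# normal R and S, checked against the exact result Phi(-beta). Parameters are
# illustrative only.

random.seed(0)
mu_R, sig_R = 40.0, 5.0
mu_S, sig_S = 25.0, 4.0
N = 200_000

failures = sum(
    1 for _ in range(N)
    if random.gauss(mu_R, sig_R) - random.gauss(mu_S, sig_S) < 0.0
)
pf_mc = failures / N

beta = (mu_R - mu_S) / math.sqrt(sig_R ** 2 + sig_S ** 2)
pf_exact = NormalDist().cdf(-beta)
print(pf_mc, pf_exact)  # both close to 9.6e-3
```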

3.2 Ferry Borges Castenheta (FBC) Model

The Ferry Borges-Castenheta load model has received much attention due to its usefulness in code specification and calibration. A scalar FBC process is a sequence of rectangular load pulses of fixed duration τ following immediately after each other (no gaps between the pulses). The pulse amplitudes of this sequence are all mutually independent and identically distributed random variables. Let this process be denoted by X(t, τ). An n-combination FBC process, given by {X₁(t, τ₁), X₂(t, τ₂), ..., X_n(t, τ_n)}, is a vector process comprising scalar FBC processes such that τ₁ ≥ τ₂ ≥ ... ≥ τ_n and τ_i/τ_j is an integer for i ≤ j. A 3-combination FBC process is illustrated in Fig. 2.3. The T-duration envelope process of an FBC process X(t, τ) is again an FBC process X(t, T) in which the pulse duration T is an integer multiple of τ and in which the pulse amplitude is defined as the maximum of X(t, τ) over the duration T. Consider the sum Σ_{i=1}^{n} X_i(t, τ_i). Then, in the interval [0, τ₁], we have:

max Σ_{i=1}^{n} X_i(t, τ_i) = max [ Σ_{i=1}^{n−2} X_i(t, τ_i) + Z_{n−1}(t, τ_{n−1}) ]    (13)

where Z_{n−1}(t, τ_{n−1}) = X_{n−1}(t, τ_{n−1}) + X_n(t, τ_{n−1}) is another FBC process. Thus, the n-combination problem has been converted into an (n−1)-combination problem. The distribution function of the amplitudes of Z_{n−1}(t, τ_{n−1}) is derived as:

F_{Z_{n−1}(t, τ_{n−1})}(z) = ∫_{−∞}^{∞} F_{X_n(t, τ_{n−1})}(z − x) f_{X_{n−1}(t, τ_{n−1})}(x) dx    (14)

where F_{X_n(t, τ_{n−1})}(x) = [F_{X_n(t, τ_n)}(x)]^{τ_{n−1}/τ_n}. The repetition of these steps finally leads to a single scalar FBC process. Thus, the distribution function of the maximal load effect can be obtained by n−1 successive convolution integrations. Generally, this calculation would be difficult using standard numerical methods. However, if it is assumed that the amplitude distributions are absolutely continuous, the first order reliability method can be used. The Rackwitz-Fiessler algorithm has been used in this regard (Rackwitz and Fiessler, 1978). In this procedure, the cumulative distribution function and the probability density function of the actual and the equivalent normal variables are equated at the design point, say x, as discussed in Section 3.1. Then, the mean and the standard deviation of the equivalent normal distribution are given as follows:

µ = x − σ Φ⁻¹(F(x));   σ = φ[Φ⁻¹(F(x))] / f(x)    (15)

where φ and Φ are the standard normal density function and distribution function, respectively.
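The normal-tail approximation of Eq. (15) can be sketched for a lognormal variable, whose CDF and PDF are available in closed form; the point x and the lognormal parameters below are illustrative.

```python
import math
from statistics import NormalDist

# Sketch of Eq. (15): equivalent-normal parameters (mu, sigma) of a non-normal
# variable at a point x, illustrated for a lognormal variable with log-mean
# lam and log-standard-deviation zeta (illustrative values).

std = NormalDist()

def lognormal_cdf(x, lam, zeta):
    return std.cdf((math.log(x) - lam) / zeta)

def lognormal_pdf(x, lam, zeta):
    return std.pdf((math.log(x) - lam) / zeta) / (x * zeta)

def equivalent_normal(x, cdf, pdf):
    """Equate the CDF and PDF of the actual and equivalent normal variables at x."""
    u = std.inv_cdf(cdf(x))
    sigma = std.pdf(u) / pdf(x)
    mu = x - sigma * u
    return mu, sigma

mu_eq, sig_eq = equivalent_normal(
    1.2, lambda x: lognormal_cdf(x, 0.0, 0.25), lambda x: lognormal_pdf(x, 0.0, 0.25)
)
print(mu_eq, sig_eq)  # for a lognormal, sigma_eq = zeta * x = 0.3
```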

Let us consider the case n = 2. The procedure calculates the approximate value of the distribution function of X₁(t) + X₂(t, τ₁) at any pre-selected value, say z. A set of points (x₁, x₂) is chosen such that x₁ + x₂ = z. The distribution function of X₁(t) is approximated at the point x₁ by the normal distribution with mean µ₁ and standard deviation σ₁, as given in Eqn. (15). The distribution function of the τ₁-duration envelope X₂(t, τ₁) of X₂(t) is approximated in the same fashion as a normal distribution with statistical parameters µ₂ and σ₂. Thus, the distribution of X₁(t) + X₂(t, τ₁) is approximated as N(µ₁ + µ₂, σ₁² + σ₂²). It is clear that the accuracy of this method depends on the choice of x₁ and x₂. A new approximation point (x₁, x₂) is chosen on the straight line x₁ + x₂ = z at which the product of the two approximating normal density functions with statistical parameters (µ₁, σ₁) and (µ₂, σ₂) has its maximal value. This point is

(x₁, x₂) ≡ ( µ₁ + β_z σ₁² / √(σ₁² + σ₂²) ,  µ₂ + β_z σ₂² / √(σ₁² + σ₂²) )    (16)

where β_z = [z − (µ₁ + µ₂)] / √(σ₁² + σ₂²). This iterative procedure is carried out until convergence is reached. Turkstra and Madsen (1980) have demonstrated the application of the Rackwitz-Fiessler algorithm to the FBC model.
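The convolution of Eq. (14), with the envelope CDF raised to the power τ_{n−1}/τ_n, can also be evaluated by direct numerical integration; in the sketch below both pulse-amplitude distributions are assumed normal purely for illustration.

```python
from statistics import NormalDist

# Sketch of Eq. (14): CDF of Z = X_{n-1} + (tau_{n-1}-duration envelope of
# X_n), computed by a midpoint-rule convolution. Normal pulse amplitudes and
# the repetition ratio r = tau_{n-1} / tau_n are illustrative assumptions.

def fbc_combined_cdf(z, dist_n1, dist_n, r, lo=-20.0, hi=20.0, steps=2000):
    """F_Z(z) = ∫ [F_Xn(z - x)]^r f_Xn-1(x) dx, midpoint rule on [lo, hi]."""
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * h
        total += (dist_n.cdf(z - x) ** r) * dist_n1.pdf(x) * h
    return total

X_n1 = NormalDist(2.0, 0.5)  # longer (sustained) pulse amplitude
X_n = NormalDist(1.0, 0.8)   # shorter pulse amplitude, repeated r times
p_low = fbc_combined_cdf(3.0, X_n1, X_n, r=4)
p_high = fbc_combined_cdf(6.0, X_n1, X_n, r=4)
print(p_low, p_high)  # the combined CDF increases with the threshold z
```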

3.3 Wen's Load Coincidence (LC) Model

The extreme value distribution or the first passage probability distribution of an individual filtered Poisson process is difficult to obtain. Very few solutions exist for linear combinations of such load histories. The combination of filtered Poisson processes has been extensively studied by Wen (1977, 1990). In the model proposed by Wen, called the Load Coincidence (LC) model, the approximation is made for processes of low intensity with non-overlapping pulses. The different possibilities of load coincidence, their mean rates of occurrence and the conditional probabilities of crossing the threshold level, given a partial coincidence of loads, are also discussed. It is seen that the coincidence part contributes significantly to the maximum load effect. This section is restricted to the combination of Poisson pulse processes (Fig. 2.4). The mathematical preliminaries concerning Poisson pulse processes are discussed in the Appendix. Initially, the combination of two Poisson pulse processes is studied, and this is then extended to n processes.
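A small sketch of the load-coincidence idea: for two independent, sparse Poisson pulse processes, the mean rate of pulse coincidences is approximately λ₁λ₂(µ_d1 + µ_d2), the standard first-order result attributed to Wen (1977); the rates and durations below are illustrative values.

```python
# Sketch of the load-coincidence rate for two independent Poisson pulse
# processes with occurrence rates lam1, lam2 (per year) and mean pulse
# durations mu_d1, mu_d2 (years). For sparse pulses (lam * mu_d << 1) the
# mean coincidence rate is approximately lam1 * lam2 * (mu_d1 + mu_d2)
# (Wen, 1977). All numbers are illustrative.

def coincidence_rate(lam1, mu_d1, lam2, mu_d2):
    return lam1 * lam2 * (mu_d1 + mu_d2)

# e.g. a transient live load ~1/yr lasting ~1 day, combined with a windstorm
# ~10/yr lasting ~8 hours
lam12 = coincidence_rate(1.0, 1.0 / 365.0, 10.0, 8.0 / 8760.0)
print(lam12)  # mean coincidences per year
```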
