
# NPTEL Course on STRUCTURAL RELIABILITY

Module # 04, Lecture 6
Course Format: Web

Instructor: Dr. Arunasis Chakraborty, Department of Civil Engineering, Indian Institute of Technology Guwahati

## 6. Lecture 06: Subset Simulation

As discussed in the previous section, in the ideal case the failure probability $P_F$ is calculated from the PDF information of the random variables, as given in Eq. 4.6.1, where $q(\mathbf{x})$ is the joint PDF of the random variables and $I_F(\mathbf{x})$ is the indicator function: $I_F(\mathbf{x}) = 1$ if the sample satisfies failure, $I_F(\mathbf{x}) = 0$ otherwise.

$$P_F = P(\mathbf{x} \in F) = \int I_F(\mathbf{x})\, q(\mathbf{x})\, d\mathbf{x} \tag{4.6.1}$$

In common cases, however, direct integration of Eq. 4.6.1 is not efficient for a large number of random variables or for limit state functions of complicated shape. In this scenario, simulation based approaches are better suited, i.e. Monte Carlo simulation (MCS) or Latin Hypercube sampling (LHS). In the case of MCS, the required number of samples is of the order of $1/P_F$, where $P_F$ is the failure probability of the system. If a very small probability of failure must be estimated, the system has to be solved a very large number of times; this drawback is shared by all population based methods. For this reason, importance sampling or adaptive sampling techniques are used. These techniques give reliable results for a small number of random variables, but as the number of random variables increases it becomes difficult to identify the region of sampling in importance sampling. For that case, another method is discussed here: subset simulation. Let the failure event be denoted by $F$, and consider intermediate events $F_1, F_2, \ldots, F_m$ forming a decreasing sequence $F_1 \supset F_2 \supset \cdots \supset F_m = F$. Then any event can be written as $F_k = \bigcap_{i=1}^{k} F_i$, $k = 1, 2, \ldots, m$, so that $P_F$ can be expressed as given in Eq. 4.6.2.

$$P_F = P(F_m) = P\left( \bigcap_{i=1}^{m} F_i \right) \tag{4.6.2}$$

$$P_F = P\left( \bigcap_{i=1}^{m} F_i \right) = P\left( F_m \,\middle|\, \bigcap_{i=1}^{m-1} F_i \right) P\left( \bigcap_{i=1}^{m-1} F_i \right) = P(F_1) \prod_{i=1}^{m-1} P(F_{i+1} \mid F_i) \tag{4.6.3}$$

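The nested-event identity behind Eq. 4.6.3 can be checked numerically on a toy one-dimensional problem (the thresholds, sample size, and standard-normal variable below are illustrative assumptions, not from the lecture): with nested events $F_1 = \{x > b_1\} \supset F_2 = \{x > b_2\}$, the product $P(F_1)\,P(F_2 \mid F_1)$ reproduces the direct estimate of $P(F_2)$.

```python
import random

random.seed(0)
N = 200_000
b1, b2 = 1.0, 2.0  # nested thresholds: F1 = {x > b1} contains F2 = {x > b2}

xs = [random.gauss(0.0, 1.0) for _ in range(N)]
n1 = sum(x > b1 for x in xs)   # samples falling in F1
n2 = sum(x > b2 for x in xs)   # samples falling in F2 (a subset of F1)

p1 = n1 / N                    # P(F1), plain indicator average
p2_given_1 = n2 / n1           # P(F2 | F1), estimated on the F1 samples
p2_direct = n2 / N             # P(F2) estimated directly
```

At the sample level the identity is exact, since $p_1 \cdot p_{2|1} = (n_1/N)(n_2/n_1) = n_2/N$; the point of subset simulation is that each factor is large enough to estimate with far fewer samples than $n_2/N$ itself.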
From Eq. 4.6.3, it is clear that the failure probability can be expressed as the product of a sequence of conditional probabilities. By calculating these conditional probabilities, it becomes possible to evaluate a very low probability of failure. For example, to obtain $P_F = 10^{-4}$, one only needs to estimate a conditional probability of the order of $10^{-2}$ twice; as a result, instead of calculating with about $10^{4}$ samples, the same result can be reproduced using roughly $2 \times 100$ samples. So, to calculate the actual $P_F$, one needs to calculate $P(F_1)$ and $P(F_{i+1} \mid F_i)$ for $i = 1, 2, \ldots, m-1$. To calculate $P(F_1)$, MCS is sufficient, as follows:

$$P(F_1) \approx \tilde{P}_1 = \frac{1}{N} \sum_{k=1}^{N} I_{F_1}(\mathbf{x}_k) \tag{4.6.4}$$

where $\mathbf{x}_k$ are independent and identically distributed samples obtained from the joint PDF of the random variables. For the remaining conditional probabilities, MCS is not efficient at populating the samples, since plain MCS takes no account of the conditional PDF of the samples. In that case, the Markov chain Monte Carlo simulation method with the Metropolis algorithm is adopted. In this method, for every component $j = 1, \ldots, n$, let $p_j^*(\xi \mid x_j)$, called the proposal PDF, be a one-dimensional PDF for $\xi$ centered at $x_j$ with the symmetry property $p_j^*(\xi \mid x_j) = p_j^*(x_j \mid \xi)$. Generate a sequence of samples $\mathbf{x}^{(1)}, \mathbf{x}^{(2)}, \ldots$ from a given sample $\mathbf{x}^{(1)}$ by computing $\mathbf{x}^{(k+1)}$ from $\mathbf{x}^{(k)} = \left( x_1^{(k)}, \ldots, x_n^{(k)} \right)$, $k = 1, 2, \ldots$, as follows:

1. Generate a candidate state $\tilde{\mathbf{x}}$: for each component $j = 1, \ldots, n$, simulate $\xi_j$ from $p_j^*\left( \cdot \mid x_j^{(k)} \right)$. Compute the ratio $r_j = q_j(\xi_j) / q_j\left( x_j^{(k)} \right)$. Set $\tilde{x}_j = \xi_j$ with probability $\min\{1, r_j\}$, and set $\tilde{x}_j = x_j^{(k)}$ with the remaining probability $1 - \min\{1, r_j\}$.
2. Accept/Reject $\tilde{\mathbf{x}}$: check the location of $\tilde{\mathbf{x}}$. If $\tilde{\mathbf{x}} \in F_i$, accept it as the next sample, i.e. $\mathbf{x}^{(k+1)} = \tilde{\mathbf{x}}$; otherwise reject it and take the current sample as the next sample, i.e. $\mathbf{x}^{(k+1)} = \mathbf{x}^{(k)}$.

By this method, after generating $\tilde{\mathbf{x}}$, either $\tilde{\mathbf{x}}$ or $\mathbf{x}^{(k)}$ is taken as the next sample $\mathbf{x}^{(k+1)}$, depending on whether or not $\tilde{\mathbf{x}}$ lies in $F_i$. In this method the next sample will be distributed as $q(\cdot \mid F_i)$ if the current sample is. As shown in Step 1, the individual components are generated independently, and the transition PDF of the Markov chain between any two states in $F_i$ is as per Eq. 4.6.5.

$$p\left( \mathbf{x}^{(k+1)} \,\middle|\, \mathbf{x}^{(k)} \right) = \prod_{j=1}^{n} p_j\left( x_j^{(k+1)} \,\middle|\, x_j^{(k)} \right) \tag{4.6.5}$$

where $p_j$ is the transition PDF for the $j$th component of $\mathbf{x}$. For the case $x_j^{(k+1)} \neq x_j^{(k)}$, the relation is given by Eq. 4.6.6.

$$p_j\left( x_j^{(k+1)} \,\middle|\, x_j^{(k)} \right) = p_j^*\left( x_j^{(k+1)} \,\middle|\, x_j^{(k)} \right) \min\left\{ 1, \frac{q_j\left( x_j^{(k+1)} \right)}{q_j\left( x_j^{(k)} \right)} \right\} \tag{4.6.6}$$

From the symmetry property of $p_j^*$ and the identity $a \min\{1, b/a\} = \min\{a, b\} = b \min\{1, a/b\}$ for $a, b > 0$, Eq. 4.6.6 can be expressed as below:

$$q_j\left( x_j^{(k)} \right) p_j\left( x_j^{(k+1)} \,\middle|\, x_j^{(k)} \right) = q_j\left( x_j^{(k+1)} \right) p_j\left( x_j^{(k)} \,\middle|\, x_j^{(k+1)} \right) \tag{4.6.7}$$

Eq. 4.6.7 shows that $p_j$ satisfies the reversibility condition with respect to $q_j$. Since all the states lie in $F_i$, Eqs. 4.6.5 and 4.6.7 give that the transition PDF for the whole state also satisfies the reversibility condition with respect to $q(\cdot \mid F_i)$, as given by Eq. 4.6.8.

$$q\left( \mathbf{x}^{(k)} \,\middle|\, F_i \right) p\left( \mathbf{x}^{(k+1)} \,\middle|\, \mathbf{x}^{(k)} \right) = q\left( \mathbf{x}^{(k+1)} \,\middle|\, F_i \right) p\left( \mathbf{x}^{(k)} \,\middle|\, \mathbf{x}^{(k+1)} \right) \tag{4.6.8}$$
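The component-wise candidate generation and accept/reject steps described above can be sketched as follows (the uniform proposal window, standard-normal marginals $q_j$, and the performance function used here are illustrative assumptions, not from the lecture):

```python
import math
import random

def modified_metropolis_step(x, g, threshold, rng, width=1.0):
    """One step of the component-wise Metropolis algorithm used in
    subset simulation.

    x:          current sample (list of floats), assumed to lie in F_i
    g:          performance function; F_i = {x : g(x) < threshold}
    Step 1: per-component candidate from a symmetric uniform proposal,
            accepted with ratio of standard-normal marginals q_j.
    Step 2: accept the whole candidate only if it stays inside F_i.
    """
    cand = []
    for xj in x:
        xi = xj + rng.uniform(-width, width)      # symmetric proposal p_j*
        r = math.exp(-0.5 * (xi * xi - xj * xj))  # q_j(xi) / q_j(xj)
        cand.append(xi if rng.random() < min(1.0, r) else xj)
    # accept/reject against the intermediate failure event F_i
    return cand if g(cand) < threshold else x

rng = random.Random(3)
g = lambda x: -(x[0] + x[1])   # illustrative performance function
x = [2.0, 2.0]                 # g(x) = -4 < 0, so x starts inside F_i
for _ in range(50):
    x = modified_metropolis_step(x, g, threshold=0.0, rng=rng)
```

Because a candidate that leaves $F_i$ is replaced by the current sample, the chain never exits $F_i$; this is exactly the property that keeps the samples distributed as $q(\cdot \mid F_i)$.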

So, the newly obtained samples are distributed as per $q(\cdot \mid F_i)$; Eq. 4.6.9 establishes this statement for any region $B \subseteq F_i$.

$$P\left( \mathbf{x}^{(k+1)} \in B \right) = \int P\left( \mathbf{x}^{(k+1)} \in B \,\middle|\, \mathbf{x}^{(k)} = \mathbf{x} \right) q(\mathbf{x} \mid F_i)\, d\mathbf{x} = \int \int_B p(\mathbf{y} \mid \mathbf{x})\, q(\mathbf{x} \mid F_i)\, d\mathbf{y}\, d\mathbf{x} \tag{4.6.9}$$

$$= \int_B \left[ \int p(\mathbf{x} \mid \mathbf{y})\, d\mathbf{x} \right] q(\mathbf{y} \mid F_i)\, d\mathbf{y} = \int_B q(\mathbf{y} \mid F_i)\, d\mathbf{y}$$
Here, $\int p(\mathbf{x} \mid \mathbf{y})\, d\mathbf{x} = 1$, and the reversibility condition of Eq. 4.6.8 has been used in the second line. By this it is confirmed that the next Markov chain sample $\mathbf{x}^{(k+1)}$ will also be distributed as $q(\cdot \mid F_i)$. The steps of subset simulation are given below:

1. By direct MCS, samples are generated as per the PDF information of the corresponding random variables.
2. $P(F_1)$ is calculated from the generated samples.
3. Starting from each sample generated in step 1 that lies in $F_1$, new samples are generated using the Markov chain method with the Metropolis algorithm. With these samples, $P(F_2 \mid F_1)$ can be calculated.
4. Again, from each sample generated in step 3 that lies in $F_2$, new samples are generated to calculate $P(F_3 \mid F_2)$. This process continues until the target failure event $F_m = F$ is reached. For an intermediate level, the conditional probability of failure is estimated as in Eq. 4.6.10.
$$P(F_{i+1} \mid F_i) \approx \tilde{P}_{i+1} = \frac{1}{N_i} \sum_{k=1}^{N_i} I_{F_{i+1}}\left( \mathbf{x}_k^{(i)} \right) \tag{4.6.10}$$

where $\mathbf{x}_k^{(i)}$, $k = 1, \ldots, N_i$, are the Markov chain samples conditional on $F_i$.


5. Finally, $P_F$ is obtained as the product of all the intermediate probabilities of failure, as given in Eq. 4.6.11.

$$P_F \approx \tilde{P}_1 \prod_{i=1}^{m-1} \tilde{P}_{i+1} \tag{4.6.11}$$
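In code, Eq. 4.6.11 is a plain product over the level estimates; for example, with assumed (illustrative) values $\tilde{P}_1 = 0.1$, $\tilde{P}_2 = 0.12$, $\tilde{P}_3 = 0.08$:

```python
# illustrative level estimates: P(F1), P(F2|F1), P(F3|F2)
p_levels = [0.1, 0.12, 0.08]

pf = 1.0
for p in p_levels:
    pf *= p   # Eq. 4.6.11: product of intermediate probabilities
# pf = 0.1 * 0.12 * 0.08 = 9.6e-4
```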

In the subset simulation method, the efficiency depends on the choice of the intermediate failure events $\{F_i\}$. Two factors are considered in selecting $\{F_i\}$. The first is the parameterization of the target failure event; this allows the generation of intermediate failure events by varying the value of the defined parameter. The second is the choice of the specific sequence of values of the defined parameter; this affects the values of the conditional probabilities $P(F_{i+1} \mid F_i)$.
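One common way to make both choices at once is to fix a target conditional level $p_0$ and set each intermediate threshold $b_i$ adaptively as the $p_0$-quantile of the sampled performance values, which is what the MATLAB algorithm below does with `gsort(n*psub+1)`. A sketch (the sample values and $p_0$ are illustrative):

```python
def intermediate_threshold(g_values, p0):
    """Adaptive choice of the intermediate event F_i = {g < b_i}:
    b_i sits just above the n*p0 smallest performance values, so that
    the realized conditional probability is close to the target p0."""
    gsort = sorted(g_values)
    k = int(len(gsort) * p0)   # number of samples retained as seeds
    return gsort[k]

g_values = [0.5, -0.2, 1.3, 0.1, -0.7, 0.9, 0.3, 0.4, -0.1, 0.8]
b1 = intermediate_threshold(g_values, p0=0.1)   # -> -0.2
```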

## Algorithm

Here, a general MATLAB algorithm is given for subset simulation:

```matlab
m            % Number of random variables
n            % Sample size of each subset
psub         % Failure probability assigned to each subset
ns = 1/psub; % Number of new samples from each seed sample
z            % Generate random variables as per their property

flag = 0; pf_ss = 1;
while (flag == 0)
    for ii = 1:n
        g(ii) = Pfun(z(ii,:));        % Values of performance function
    end
    [gsort, index] = sort(g);         % Rank the values of performance function
    gt = gsort(n*psub+1);             % Locate n*psub smallest values and
                                      % define the criteria for the subset
    if (gt < 0)                       % Final subset reached
        i = 1;
        while gsort(i) < 0
            i = i + 1;
        end
        pf_ss = pf_ss*(i-1)/n;
        flag = 1;
        break;
    else
        pf_ss = pf_ss*psub;
    end
    w = zeros(n, m);
    for i = 1:n*psub                  % Loop over the seed samples
        seed = z(index(i), :);
        for j = 1:ns
            u = rand(1, m);
            u = (seed - 0.5) + u;     % Candidate from a unit-width proposal
            pdf2 = exp(-0.5*sum(seed.^2));
            pdf1 = exp(-0.5*sum(u.^2));
            I = 0;
            if Pfun(u) < gt           % Candidate lies in the current subset
                I = 1;
            end
            alpha = min(1, I*pdf1/pdf2); % Acceptance probability
            V = rand(1, 1);
            if V < alpha              % Accept u with probability alpha
                w(ns*(i-1)+j, :) = u;
            else
                w(ns*(i-1)+j, :) = seed;
            end
            seed = w(ns*(i-1)+j, :);
        end
    end
    z = w;
end
pf_ss                                 % Probability of failure
betass = -norminv(pf_ss)              % Reliability index
```

```matlab
function [c, ceq] = Pfun(z)
% Performance function: failure corresponds to c = X1*X2 - X3*L < 0
X1 = z(1); X2 = z(2); X3 = z(3);
L = 2;
c = X1*X2 - X3*L;
ceq = [];
```
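For reference, the same procedure can be mirrored in a compact Python sketch (the function name, parameter choices, and the one-dimensional limit state are illustrative assumptions, not part of the lecture):

```python
import math
import random

def subset_simulation(g, n_dim, N=1000, p0=0.1, seed=7, max_levels=20):
    """Subset simulation with the component-wise Metropolis step,
    for standard-normal variables and failure event F = {g(x) < 0}."""
    rng = random.Random(seed)
    samples = [[rng.gauss(0.0, 1.0) for _ in range(n_dim)] for _ in range(N)]
    pf = 1.0
    for _ in range(max_levels):
        gv = sorted((g(x), x) for x in samples)   # rank performance values
        nc = int(N * p0)                          # number of seed samples
        gt = gv[nc][0]                            # adaptive threshold (p0-quantile)
        if gt < 0:                                # final level: count failures
            return pf * sum(1 for gx, _ in gv if gx < 0) / N
        pf *= p0
        seeds = [x for _, x in gv[:nc]]
        samples = []
        for s in seeds:                           # grow a chain from each seed
            x = s[:]
            for _ in range(int(1 / p0)):
                cand = []
                for xj in x:
                    xi = xj + rng.uniform(-1.0, 1.0)      # symmetric proposal
                    r = math.exp(-0.5 * (xi * xi - xj * xj))
                    cand.append(xi if rng.random() < min(1.0, r) else xj)
                if g(cand) < gt:                  # stay inside current subset
                    x = cand
                samples.append(x[:])
    return pf

# illustrative limit state: failure when x1 > 3 (exact pf is about 1.35e-3)
pf = subset_simulation(lambda x: 3.0 - x[0], n_dim=1)
```

With $N = 1000$ samples per level and $p_0 = 0.1$, the estimate typically settles after two or three intermediate levels, far cheaper than the roughly $10^{5}$ plain MCS samples such a small probability would require.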