
Sensitivity Analysis

Jake Blanchard
Fall 2010

Introduction
Sensitivity analysis = the study of how uncertainty in the output
of a model can be apportioned to different input parameters
Local sensitivity = focus on sensitivity at a particular set of
input parameters, usually using gradients or partial derivatives
Global or domain-wide sensitivity = consider the entire range of inputs

Typical Approach
Consider a point reactor kinetics problem:

dP/dt = ((ρ − β)/Λ) P(t) + λ C(t)
dC/dt = (β/Λ) P(t) − λ C(t)
P(0) = P0 = 1
C(0) = β P0 / (λ Λ)
λ = 0.08

[Figure: P(t) versus time (s), comparing the baseline solution with the
solution when an input parameter is increased by 50%.]
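The point kinetics system above can be integrated numerically. Below is a minimal sketch using a classical RK4 integrator; the slides give λ = 0.08 and a baseline mean lifetime Λ = 0.001 s, while the delayed-neutron fraction β and the reactivity ρ used here are illustrative assumptions, not values from the slides.

```python
import numpy as np

# One-group point reactor kinetics:
#   dP/dt = ((rho - beta)/Lambda) P + lambda C
#   dC/dt = (beta/Lambda) P - lambda C
# LAM and lam come from the slides; beta and rho are assumed for illustration.
LAM = 0.001      # mean neutron lifetime Lambda (s), baseline from the slides
lam = 0.08       # delayed-neutron decay constant lambda (1/s), from the slides
beta = 0.0065    # delayed-neutron fraction (assumed)

def rhs(y, rho, beta):
    P, C = y
    dP = (rho - beta) / LAM * P + lam * C
    dC = beta / LAM * P - lam * C
    return np.array([dP, dC])

def solve(rho, beta=beta, t_end=3.0, dt=1e-3):
    """Integrate with classical RK4 from equilibrium initial conditions."""
    P0 = 1.0
    y = np.array([P0, beta * P0 / (lam * LAM)])  # C(0) = beta*P0/(lambda*Lambda)
    t = 0.0
    while t < t_end - 1e-12:
        k1 = rhs(y, rho, beta)
        k2 = rhs(y + dt / 2 * k1, rho, beta)
        k3 = rhs(y + dt / 2 * k2, rho, beta)
        k4 = rhs(y + dt * k3, rho, beta)
        y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return y[0]  # P(t_end)

# Baseline versus a perturbed input (here, illustratively, beta raised by 50%)
P_base = solve(rho=0.001)
P_pert = solve(rho=0.001, beta=1.5 * beta)
```

With ρ = 0 the initial conditions are an exact equilibrium, so P(t) stays at 1; any positive ρ raises the power, which is the behavior sampled in the sensitivity sweeps on the following slides.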

Results
P(t) normalized to P0
Mean lifetime normalized to baseline value (0.001 s)

[Figure: relative change in P(t) versus relative change in each input
parameter, at t = 3 s.]

Results
P(t) normalized to P0
Mean lifetime normalized to baseline value (0.001 s)

[Figure: relative change in P(t) versus relative change in each input
parameter, at t = 0.1 s.]

Putting all on one chart

[Figure: dimensionless variation in P(t) versus dimensionless variation
in input variable, all inputs together, at t = 0.1 s.]

Putting all on one chart

[Figure: dimensionless variation in P(t) versus dimensionless variation
in input variable, all inputs together, at t = 3 s.]

Quantifying Sensitivity
To first order, our measure of sensitivity is the gradient of an
output with respect to some particular input variable.
Suppose Ps, Pt, and Pj are all uncertain and

Y = Cs Ps + Ct Pt + Cj Pj

Then, if the inputs are independent,

ȳ = Cs p̄s + Ct p̄t + Cj p̄j
σy² = Cs² σs² + Ct² σt² + Cj² σj²
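The variance-propagation formula for the linear model can be checked by brute force. A minimal Monte Carlo sketch, with arbitrary illustrative coefficients and standard deviations:

```python
import numpy as np

# For Y = Cs*Ps + Ct*Pt + Cj*Pj with independent inputs,
# var(Y) = Cs^2 var(Ps) + Ct^2 var(Pt) + Cj^2 var(Pj).
# Coefficient and sigma values are arbitrary illustrative choices.
rng = np.random.default_rng(0)
C = np.array([2.0, -1.5, 0.7])
sigma = np.array([0.3, 0.5, 0.1])

# 200k independent samples of the three inputs (scale broadcasts per column)
X = rng.normal(loc=1.0, scale=sigma, size=(200_000, 3))
Y = X @ C

analytic = np.sum(C**2 * sigma**2)
empirical = Y.var()
```

The empirical variance agrees with the analytic sum to within Monte Carlo noise (a fraction of a percent at this sample size).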

Quantifying Sensitivity
The most obvious calculation of sensitivity is

Sx = ∂Y/∂Px

This is the slope of the curves we just looked at.
We can normalize about some point (y0):

y0 = Cs ps0 + Ct pt0 + Cj pj0
Sx = (px0/y0) ∂Y/∂Px

Quantifying Sensitivity
This normalized sensitivity says nothing about the expected
variation in the inputs.
If we are highly sensitive to a variable that varies little, it may
not matter in the end.
Normalize to the input variances instead:

Sx = (σx/σy) ∂Y/∂Px

Rewriting:

Ss = Cs σs/σy,  St = Ct σt/σy,  Sj = Cj σj/σy
σy² = Cs² σs² + Ct² σt² + Cj² σj²
1 = Cs² σs²/σy² + Ct² σt²/σy² + Cj² σj²/σy²  (i.e., Ss² + St² + Sj² = 1)
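For the linear model the σ-normalized sensitivities can be computed in closed form, and their squares sum to one by construction. A short sketch with arbitrary illustrative values:

```python
import numpy as np

# Sigma-normalized sensitivities for the linear model Y = sum_i C_i P_i:
#   S_i = C_i * sigma_i / sigma_y,  with  sum_i S_i^2 = 1.
# Coefficient and sigma values are arbitrary illustrative choices.
C = np.array([2.0, -1.5, 0.7])
sigma = np.array([0.3, 0.5, 0.1])

sigma_y = np.sqrt(np.sum(C**2 * sigma**2))
S = C * sigma / sigma_y
```

Each S_i² is the fraction of the output variance contributed by that input, which is why this normalization accounts for how much each input actually varies.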

A Different Approach
Question: If we could eliminate the variation in a single input
variable, how much would we reduce the output variation?
Hold one input (Px) constant at some value px
Find the output variance V(Y|Px=px)
This will vary as we vary px
So now do this for a variety of values of px and find the expected
value E(Y|Px)
Now normalize:

Sx = V(E(Y|Px)) / V(Y)

This is often called the importance measure, sensitivity index,
correlation ratio, or first order effect.
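The conditional-variance recipe above can be carried out by brute-force nested Monte Carlo. The sketch below uses an illustrative model Y = X1 + 2·X2 + X1·X2 with independent standard-normal inputs (chosen only for demonstration; its exact first-order effect for X1 is 1/6):

```python
import numpy as np

# First-order effect S_x = V(E(Y|X1)) / V(Y), estimated by nested sampling:
# outer loop samples X1, inner loop averages over X2 with X1 held fixed.
rng = np.random.default_rng(1)

def model(x1, x2):
    return x1 + 2.0 * x2 + x1 * x2  # illustrative model, not from the slides

n_outer, n_inner = 2000, 2000
x1 = rng.standard_normal(n_outer)
# E(Y | X1 = v), approximated by averaging over fresh draws of X2
cond_mean = np.array(
    [model(v, rng.standard_normal(n_inner)).mean() for v in x1]
)

# Unconditional output variance V(Y) from a large plain sample
x1f = rng.standard_normal(1_000_000)
x2f = rng.standard_normal(1_000_000)
VY = model(x1f, x2f).var()

S1 = cond_mean.var() / VY  # importance measure / first order effect of X1
```

Here E(Y|X1) = X1, so V(E(Y|X1)) = 1 while V(Y) = 6, and the estimate converges to S1 = 1/6. The nested loop is expensive; practical variance-based methods use smarter sampling schemes to get the same quantity.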

Variance-Based Methods
Assume

Y = f(x) = f0 + Σi fi(xi) + Σi<j fij(xi, xj) + ... + f1,2,...,k(x1, x2, ..., xk)

Choose each term such that it has a mean of 0.
Hence, f0 is the average of f(x), and

fi(xi) = E(Y|xi) − f0
fij(xi, xj) = E(Y|xi, xj) − fi(xi) − fj(xj) − f0

Variance Methods
Since the terms are orthogonal, we can square everything and
integrate over our domain:

Vi = ∫ fi²(xi) dxi = V(E(Y|xi))
Vf = Σi Vi + Σi<j Vij + Σ Vijk + ... + V1,2,...,k
Si = Vi / Vf
1 = Σi Si + Σi<j Sij + Σ Sijk + ... + S1,2,...,k
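For a purely additive model the decomposition has no interaction terms, so the first-order indices alone sum to one. A minimal sketch with arbitrary illustrative coefficients and input variances:

```python
import numpy as np

# Additive model Y = sum_i C_i * X_i with independent inputs:
# each partial variance is V_i = C_i^2 * var(X_i), all interaction
# terms vanish, and S_i = V_i / V(Y) sum to exactly 1.
# Coefficient and variance values are arbitrary illustrative choices.
C = np.array([1.0, 2.0, 0.5])
var_x = np.array([1.0, 0.25, 4.0])

Vi = C**2 * var_x          # first-order partial variances
Vf = Vi.sum()              # total output variance (no interactions)
S = Vi / Vf                # first-order sensitivity indices
```

When the model has interactions, the S_i sum to less than one and the shortfall is carried by the higher-order indices.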

Variance Methods
Si is the first order (or main) effect of xi
Sij is a second order index. It measures the effect of the pure
interaction between any pair of input variables
Other values of S are higher order indices
Typical sensitivity analysis just addresses first order effects

Suppose k=4
1 = S1+S2+S3+S4+S12+S13+S14+S23+S24+S34+S123+S124+S134+S234+S1234

Total # of terms is 4+6+4+1 = 15 = 2^4 − 1
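The term count is just the number of non-empty subsets of the k inputs, which binomial coefficients make explicit:

```python
from math import comb

# Number of Sobol indices of each order for k inputs: choose(k, r)
# indices of order r, so the total over r = 1..k is 2^k - 1.
k = 4
counts = [comb(k, r) for r in range(1, k + 1)]  # [4, 6, 4, 1] for k = 4
total = sum(counts)
```

This is why exhaustive variance decompositions become impractical as k grows, and why typical analyses stop at the first-order (and sometimes total-effect) indices.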
