
Part I

Severity, Frequency, and Aggregate Loss
1 Basic Probability
Raw moments: $\mu'_n = E[X^n]$
Central moments: $\mu_n = E[(X - \mu)^n]$
Skewness: $\gamma_1 = \mu_3 / \sigma^3$
Kurtosis: $\gamma_2 = \mu_4 / \sigma^4$
Coefficient of variation: $CV = \sigma / \mu$
Covariance: $\operatorname{Cov}[X, Y] = E[(X - \mu_X)(Y - \mu_Y)] = E[XY] - E[X]\,E[Y]$
Correlation: $\rho_{XY} = \operatorname{Cov}[X, Y] / (\sigma_X \sigma_Y)$
MGF: $M_X(t) = E[e^{tX}]$
PGF: $P_X(t) = E[t^X] = M_X(\log t)$
Moments via MGF: $M_X^{(n)}(0) = E[X^n]$
Moments via PGF: $P_X^{(n)}(1) = E[X(X-1)\cdots(X-n+1)]$
Conditional mean (double expectation): $E[X] = E_Y[E_X[X \mid Y]]$
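As a quick numerical check of the moment definitions above, here is a minimal sketch (assuming Python with NumPy; the sample values are invented for illustration) that computes skewness, kurtosis, and the coefficient of variation from central moments.

```python
import numpy as np

# Hypothetical sample of loss amounts (illustration only).
x = np.array([1.2, 0.7, 3.5, 2.1, 0.9, 4.8, 1.6, 2.4])

mu = x.mean()          # first raw moment E[X]
sigma = x.std()        # population standard deviation

def central(n):
    """n-th central moment E[(X - mu)^n] of the sample."""
    return np.mean((x - mu) ** n)

skewness = central(3) / sigma ** 3   # gamma_1 = mu_3 / sigma^3
kurtosis = central(4) / sigma ** 4   # gamma_2 = mu_4 / sigma^4
cv = sigma / mu                      # coefficient of variation

print(skewness, kurtosis, cv)
```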
2 Variance
Variance of the sample mean: $\operatorname{Var}[\bar{X}] = \operatorname{Var}\left[\frac{1}{n}\sum_{i=1}^{n} X_i\right] = \frac{\operatorname{Var}[X]}{n}$
Mixtures: $F(x) = \sum_{i=1}^{n} w_i F_{X_i}(x)$, where $w_1 + w_2 + \cdots + w_n = 1$
Bernoulli shortcut: if $Y$ takes the value $a$ with probability $q$ and $b$ with probability $1 - q$, then $\operatorname{Var}[Y] = (a - b)^2 q(1 - q)$
3 Conditional Variance
Conditional variance (law of total variance): $\operatorname{Var}[X] = \operatorname{Var}[E[X \mid I]] + E[\operatorname{Var}[X \mid I]]$
4 Expected Values
Payment per loss with deductible: $E[(X - d)_+] = \int_d^{\infty} (x - d) f(x)\,dx = \int_d^{\infty} S(x)\,dx$
Payment per loss with claims limit / limited expected value: $E[X \wedge d] = \int_0^{d} x f(x)\,dx + d\,S(d) = \int_0^{d} S(x)\,dx$
Decomposition relation: $E[X] = E[(X - d)_+] + E[X \wedge d]$
Payment per payment / mean residual life: $e(d) = E[X - d \mid X > d] = \frac{E[(X - d)_+]}{1 - F(d)}$
LEV higher moments: $E[(X \wedge d)^k] = \int_0^{d} k x^{k-1} S(x)\,dx$
Deductible plus limit (maximum payment $u - d$): $E[Y^L] = E[X \wedge u] - E[X \wedge d]$
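The decomposition $E[X] = E[(X - d)_+] + E[X \wedge d]$ can be verified numerically. A small sketch, assuming Python with SciPy and an arbitrary illustrative exponential mean and deductible:

```python
import math
from scipy.integrate import quad

theta, d = 1000.0, 250.0                 # illustrative exponential mean and deductible
S = lambda x: math.exp(-x / theta)       # survival function of Exponential(theta)

lev, _ = quad(S, 0, d)                   # E[X ^ d]   = integral_0^d S(x) dx
excess, _ = quad(S, d, math.inf)         # E[(X-d)_+] = integral_d^inf S(x) dx

print(lev + excess, theta)               # the sum should equal E[X] = theta
```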
5 Parametric Distributions
Tail weight:
1. Compare moments: more finite moments implies less tail weight
2. Density ratios: if the ratio of densities tends to zero, the numerator has less tail weight
3. Hazard rate: an increasing hazard rate implies less tail weight
4. Mean residual life: a decreasing MRL implies less tail weight
6 Lognormal Distribution
Continuously compounded growth rate: $\alpha$
Continuously compounded dividend return: $\delta$
Volatility: $\sigma_v$
Lognormal parameters: $\mu = (\alpha - \delta - \tfrac{1}{2}\sigma_v^2)t$, \quad $\sigma = \sigma_v \sqrt{t}$
Asset price at time $t$: $S_t = S_0 \exp(\mu + \sigma Z)$
Strike price: $K$
European call (option to buy): payoff $C = \max\{0, S_T - K\}$
European put (option to sell): payoff $P = \max\{0, K - S_T\}$
American options: may be exercised at any time up to $T$
Black-Scholes arguments: $\hat{d}_1 = \frac{\log(K/S_0) - \mu - \sigma^2}{\sigma}$, \quad $\hat{d}_2 = \frac{\log(K/S_0) - \mu}{\sigma}$
Cumulative distribution: $\Pr[S_t < K] = \Phi(\hat{d}_2)$
Limited expected values: $E[S_t \mid S_t < K] = S_0 e^{(\alpha - \delta)t}\,\frac{\Phi(\hat{d}_1)}{\Phi(\hat{d}_2)}$, \quad $E[S_t \mid S_t > K] = S_0 e^{(\alpha - \delta)t}\,\frac{\Phi(-\hat{d}_1)}{\Phi(-\hat{d}_2)}$
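A numerical sketch of the formulas above, assuming Python with SciPy and the parameterization reconstructed here ($\mu = (\alpha - \delta - \tfrac{1}{2}\sigma_v^2)t$, $\sigma = \sigma_v\sqrt{t}$); the price, strike, and rates are illustrative only.

```python
from math import log, exp, sqrt
from scipy.stats import norm

S0, K, t = 100.0, 110.0, 1.0          # illustrative price, strike, horizon
alpha, delta, sigma_v = 0.08, 0.02, 0.30

mu = (alpha - delta - 0.5 * sigma_v**2) * t
sigma = sigma_v * sqrt(t)

d1_hat = (log(K / S0) - mu - sigma**2) / sigma
d2_hat = (log(K / S0) - mu) / sigma

prob_below = norm.cdf(d2_hat)                                       # Pr[S_t < K]
cond_below = S0 * exp((alpha - delta) * t) * norm.cdf(d1_hat) / norm.cdf(d2_hat)
cond_above = S0 * exp((alpha - delta) * t) * norm.cdf(-d1_hat) / norm.cdf(-d2_hat)

# Check: the two conditional means must average back to E[S_t] = S0 * e^{(alpha-delta)t}.
expected = prob_below * cond_below + (1 - prob_below) * cond_above
print(prob_below, cond_below, cond_above, expected, S0 * exp((alpha - delta) * t))
```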
7 Deductibles, LER, Inflation
Loss elimination ratio: $LER(d) = E[X \wedge d] / E[X]$
Inflation: if $Y = (1 + r)X$, then $E[Y \wedge d] = (1 + r)\,E\left[X \wedge \frac{d}{1+r}\right]$
8 Other Coverage Modifications
With deductible $d$, maximum covered loss $u$, coinsurance $\alpha$, and policy limit (maximum payment) $L = \alpha(u - d)$:
Payment per loss: $E[Y^L] = \alpha\,(E[X \wedge u] - E[X \wedge d])$
Second moment of the per-loss payment (with $\alpha = 1$): $E[(Y^L)^2] = E[(X \wedge u)^2] - E[(X \wedge d)^2] - 2d\,E[Y^L]$
9 Bonuses
With earned premium $P$, losses $X$, and proportion of premiums $r$:
Bonus: $B = c(rP - X)_+$
10 Discrete Distributions
$(a, b, 0)$ recursion: $\frac{p_k}{p_{k-1}} = a + \frac{b}{k}$
Zero-truncated relation: $p_0^T = 0$, \quad $p_n^T = \frac{p_n}{1 - p_0}$
Zero-modified relation: $p_n^M = \frac{1 - p_0^M}{1 - p_0}\,p_n$
11 Poisson/Gamma
If $N \sim \mathrm{Poisson}(\lambda)$, where $\lambda$ varies by risk according to a $\mathrm{Gamma}(\alpha, \theta)$ distribution, then the unconditional distribution of $N$ is $\mathrm{NegBinomial}(r = \alpha,\ \beta = \theta)$.
12 Frequency Distributions: Exposure and Coverage Modifications
Parameters under the original policy, after an exposure modification (from $n_1$ to $n_2$ exposures), and after a coverage modification under which each loss produces a payment with probability $v = \Pr[X > 0]$:
Exposure: original $n_1$; exposure mod. $n_2$; coverage mod. $n_1$
Poisson ($\lambda$): exposure mod. $(n_2/n_1)\lambda$; coverage mod. $v\lambda$
Binomial ($m, q$): exposure mod. $(n_2/n_1)m,\ q$; coverage mod. $m,\ vq$
Neg. Binomial ($r, \beta$): exposure mod. $(n_2/n_1)r,\ \beta$; coverage mod. $r,\ v\beta$
13 Aggregate Loss Models: Approximating Distribution
Compound variance: $\operatorname{Var}[S] = \operatorname{Var}[X]\,E[N] + E[X]^2\,\operatorname{Var}[N]$
14 Aggregate Loss Models: Recursive Formula
Frequency: $p_n = \Pr[N = n]$
Severity: $f_n = \Pr[X = n]$
Aggregate loss: $g_n = \Pr[S = n] = f_S(n)$
$(a, b, 0)$ recursion: $g_k = \frac{1}{1 - a f_0} \sum_{j=1}^{k} \left(a + \frac{b j}{k}\right) f_j\, g_{k-j}$
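A minimal sketch of the recursion, assuming Python, a Poisson frequency (so $a = 0$, $b = \lambda$), and an invented discrete severity on the nonnegative integers. The starting value $g_0 = P_N(f_0)$ (here $e^{-\lambda(1 - f_0)}$ for the Poisson) is needed to seed the recursion.

```python
import math

lam = 2.0                                  # Poisson mean: a = 0, b = lambda
a, b = 0.0, lam
f = {0: 0.1, 1: 0.4, 2: 0.3, 3: 0.2}       # illustrative severity pmf f_n = Pr[X = n]

g = {0: math.exp(lam * (f.get(0, 0.0) - 1.0))}   # g_0 = P_N(f_0) for the Poisson

def g_k(k):
    """Recursion: g_k = (1/(1 - a f_0)) * sum_{j=1}^{k} (a + b j / k) f_j g_{k-j}."""
    total = sum((a + b * j / k) * f.get(j, 0.0) * g[k - j] for j in range(1, k + 1))
    return total / (1.0 - a * f.get(0, 0.0))

for k in range(1, 21):
    g[k] = g_k(k)

print(sum(g.values()))                     # approaches 1 as more terms are added
```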
15 Aggregate Losses: Aggregate Deductible
With the severity (and hence $S$) discretized on multiples of $h$ and $d$ a multiple of $h$: $E[S \wedge d] = d\,\Pr[S \ge d] + \sum_{j=0}^{d/h - 1} h j\, g_{hj}$
16 Aggregate Losses: Misc. Topics
If $X \sim \mathrm{Exponential}(\theta)$ and $N \sim \mathrm{Geometric}(\beta)$, then $S$ has a point mass of $\frac{1}{1+\beta}$ at 0 and $F_S(x) = \frac{1}{1+\beta} + \frac{\beta}{1+\beta}\,F_{X^*}(x)$ for $x \ge 0$, where $X^* \sim \mathrm{Exponential}(\theta(1+\beta))$.
If $S = X_1 + \cdots + X_n$ with the $X_i$ i.i.d. $\mathrm{Exponential}(\theta)$, then $S \sim \mathrm{Gamma}(\alpha = n,\ \theta)$.
Method of rounding: $p_k = F_X(k + 1/2) - F_X(k - 1/2)$
17 Ruin Theory
Ruin probability, discrete time, finite horizon: $\tilde{\psi}(u, t)$
Survival probability, continuous time, infinite horizon: $\phi(u)$
Part II
Empirical Models
18 Review of Mathematical Statistics
Bias: $\operatorname{Bias}_{\hat\theta}(\theta) = E[\hat\theta \mid \theta] - \theta$
Consistency: $\lim_{n \to \infty} \Pr[|\hat\theta_n - \theta| < \epsilon] = 1$ for all $\epsilon > 0$
Mean square error: $\operatorname{MSE}_{\hat\theta}(\theta) = E[(\hat\theta - \theta)^2 \mid \theta]$
Sample variance: $s^2 = \frac{1}{n - 1} \sum_{i=1}^{n} (x_i - \bar{x})^2$
Variance of the sample mean: $\operatorname{Var}[\bar{x}] = \sigma^2 / n$
MSE/bias relation: $\operatorname{MSE}_{\hat\theta}(\theta) = \operatorname{Var}[\hat\theta] + \operatorname{Bias}_{\hat\theta}(\theta)^2$
19 Empirical Distribution for Complete Data
Total number of observations: $n$
Observations in the $j$-th interval: $n_j$
Width of the $j$-th interval: $c_j - c_{j-1}$
Empirical density: $f_n(x) = \frac{n_j}{n\,(c_j - c_{j-1})}$
20 Variance of Empirical Estimators with Complete Data
Empirical variance: $\widehat{\operatorname{Var}}[S_n(x)] = \frac{S_n(x)\,(1 - S_n(x))}{n} = \frac{n_x (n - n_x)}{n^3}$
21 Kaplan-Meier and Nelson-Aalen Estimators
Risk set at time $y_j$: $r_j$
Loss events at time $y_j$: $s_j$
Kaplan-Meier product-limit estimator: $S_n(t) = \prod_{i=1}^{j-1} \left(1 - \frac{s_i}{r_i}\right)$ for $y_{j-1} \le t < y_j$
Nelson-Aalen cumulative hazard estimator: $\hat{H}(t) = \sum_{i=1}^{j-1} \frac{s_i}{r_i}$ for $y_{j-1} \le t < y_j$
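A compact sketch of both estimators, assuming Python; the event times, risk sets, and event counts are invented. Greenwood's variance approximation from Section 23 could be accumulated in the same loop.

```python
# Observed event times y_j with risk sets r_j and numbers of events s_j (illustrative).
ys = [3.0, 5.0, 8.0, 12.0]
rs = [20, 17, 12, 7]
ss = [2, 1, 3, 1]

S, H = 1.0, 0.0
for y, r, s in zip(ys, rs, ss):
    S *= 1.0 - s / r          # Kaplan-Meier product-limit step
    H += s / r                # Nelson-Aalen cumulative hazard step
    print(f"t in [{y}, next event): S_n = {S:.4f}, H_hat = {H:.4f}")
```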
22 Estimation of Related Quantities
Exponential extrapolation: fit $S_n(y_k) = \exp(-y_k/\theta)$ and solve for the parameter $\theta$.
23 Variance of Kaplan-Meier and Nelson-Aalen Estimators
Greenwood's approximation for Kaplan-Meier: $\widehat{\operatorname{Var}}[S_n(y_j)] = S_n(y_j)^2 \sum_{i=1}^{j} \frac{s_i}{r_i (r_i - s_i)}$
Variance approximation for Nelson-Aalen: $\widehat{\operatorname{Var}}[\hat{H}(y_j)] = \sum_{i=1}^{j} \frac{s_i}{r_i^2}$
$100(1-\alpha)\%$ log-transformed confidence interval for Kaplan-Meier: $\left(S_n(t)^{1/U},\, S_n(t)^{U}\right)$, where $U = \exp\left(\frac{z_{\alpha/2} \sqrt{\widehat{\operatorname{Var}}[S_n(t)]}}{S_n(t)\, \log S_n(t)}\right)$
$100(1-\alpha)\%$ log-transformed confidence interval for Nelson-Aalen: $\left(\hat{H}(t)/U,\, \hat{H}(t)\,U\right)$, where $U = \exp\left(\frac{z_{\alpha/2} \sqrt{\widehat{\operatorname{Var}}[\hat{H}(t)]}}{\hat{H}(t)}\right)$
24 Kernel Smoothing
Uniform kernel density: $k_y(x) = \frac{1}{2b}$ for $y - b \le x \le y + b$
Uniform kernel CDF: $K_y(x) = 0$ for $x < y - b$; \quad $K_y(x) = \frac{x - (y - b)}{2b}$ for $y - b \le x \le y + b$; \quad $K_y(x) = 1$ for $x > y + b$
Triangular kernel: density with height $1/b$ and base $2b$
Empirical probability at $y_i$: $p_n(y_i)$
Fitted density: $\hat{f}(x) = \sum_{i=1}^{n} p_n(y_i)\, k_{y_i}(x)$
Fitted distribution: $\hat{F}(x) = \sum_{i=1}^{n} p_n(y_i)\, K_{y_i}(x)$
Use conditional expectation formulas to find moments of kernel-smoothed distributions; condition on the empirical distribution.
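A small sketch of a uniform-kernel density estimate built exactly as above, assuming Python and equal empirical weights $p_n(y_i) = 1/n$; the data points and bandwidth are illustrative.

```python
def uniform_kernel_density(x, data, b):
    """f_hat(x) = sum_i p_n(y_i) * k_{y_i}(x) with a uniform kernel of bandwidth b."""
    n = len(data)
    p = 1.0 / n                                           # empirical probability at each y_i
    k = lambda y: 1.0 / (2 * b) if y - b <= x <= y + b else 0.0
    return sum(p * k(y) for y in data)

data = [2.0, 3.5, 5.0, 5.5, 9.0]                          # illustrative observations
print(uniform_kernel_density(4.0, data, b=1.0))
```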
25 Approximations for Large Data Sets
Right/upper endpoint of the $j$-th interval: $c_j$
Number of new entrants in $[c_j, c_{j+1})$: $d_j$
Number of withdrawals in $(c_j, c_{j+1}]$: $u_j$
Number of events in $(c_j, c_{j+1}]$: $s_j$
Risk set for the interval $(c_j, c_{j+1}]$: $r_j$
Conditional mortality rate in $(c_j, c_{j+1}]$: $q_j$
Population at time $c_j$: $P_j = \sum_{i=0}^{j-1} (d_i - u_i - s_i)$
Generalized relation: $r_j = P_j + \alpha d_j - \beta u_j$
Uniform distribution of entrants/withdrawals: $\alpha = \beta = 1/2$
Part III
Parametric Models
26 Method of Moments
For a $k$-parameter distribution, set the first $k$ fitted moments equal to the corresponding empirical moments: $E[X^m] = \frac{1}{n}\sum_{i=1}^{n} x_i^m$ for $m = 1, \ldots, k$.
27 Percentile Matching
Interpolated $k$-th order statistic: $x_{k+w} = (1 - w)\,x_{(k)} + w\,x_{(k+1)}$ for $0 < w < 1$
Smoothed empirical $100p$-th percentile: $\hat{\pi}_p = x_{(p(n+1))}$
28 Maximum Likelihood Estimators
Likelihood function: $L(\theta) = \prod_{i=1}^{n} \Pr[X \in X_i \mid \theta]$
Loglikelihood: $l = \log L$
The $X_i$ are the observed events; each is a subset of the sample space. Maximize $l$ by finding $\hat\theta$ such that $\partial l / \partial \theta_i = 0$ for each parameter of the fitted distribution.
29 MLEs: Special Techniques
Exponential: MLE of $\theta$ = sample mean
Gamma (fixed $\alpha$): MLE = method of moments
Normal: MLE($\mu$) = sample mean, MLE($\sigma^2$) = population variance (divide by $n$, not $n - 1$)
Poisson: MLE of $\lambda$ = sample mean
Neg. Binomial: MLE of the mean $r\beta$ = sample mean
Lognormal: take logs of the sample, then use the Normal shortcut
Censored exponential MLE: take each observation (including censored ones), subtract the deductible, sum the results, and divide by the number of uncensored observations.
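A sketch of the censored-exponential shortcut, assuming Python; the payment data, deductible, and censoring flags are invented. Maximizing the exponential loglikelihood directly gives the same answer.

```python
# Ground-up observations above a deductible d; censored == True means the policy
# limit was reached, so only a lower bound on the loss is known (illustrative data).
d = 100.0
observations = [(250.0, False), (400.0, False), (600.0, True), (180.0, False), (600.0, True)]

# Shortcut: subtract the deductible from every observation (censored or not),
# sum, and divide by the number of UNCENSORED observations.
total = sum(x - d for x, _ in observations)
n_uncensored = sum(1 for _, censored in observations if not censored)
theta_hat = total / n_uncensored
print(theta_hat)
```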
30 Estimating Parameters of a Lognormal Distribution
31 Variance of MLEs
For $n$ estimated parameters $\hat\theta = (\theta_1, \theta_2, \ldots, \theta_n)$, the estimated variance of a function of the MLEs is computed with the delta method:
Covariance matrix: $\Sigma(\hat\theta) = \begin{pmatrix} \sigma_1^2 & \sigma_{12} & \cdots & \sigma_{1n} \\ \sigma_{21} & \sigma_2^2 & \cdots & \sigma_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{n1} & \sigma_{n2} & \cdots & \sigma_n^2 \end{pmatrix}$
Delta method: $\operatorname{Var}[g(\hat\theta)] = (\partial g)^{\mathsf{T}}\, \Sigma(\hat\theta)\, (\partial g)$, where $\partial g$ is the vector of partial derivatives of $g$ evaluated at $\hat\theta$
Fisher information: $I(\theta)_{rs} = -E\left[\frac{\partial^2 l(\theta)}{\partial\theta_r\,\partial\theta_s}\right]$
Covariance-information relation: $\Sigma(\hat\theta)\,I(\hat\theta) = I_n$ (the identity matrix), i.e. $\Sigma(\hat\theta) = I(\hat\theta)^{-1}$
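A one-parameter sketch of the delta method, assuming Python; the point estimate, its variance, and the function $g$ are illustrative. With several parameters the scalar derivative becomes the gradient and the variance becomes the covariance matrix above.

```python
import math

theta_hat = 1500.0          # illustrative MLE of an exponential mean theta
var_theta = 2500.0          # illustrative estimated variance of that MLE

# Function of the parameter whose variance we want, e.g. g(theta) = E[X ^ 1000].
g = lambda t: t * (1 - math.exp(-1000.0 / t))

h = 1e-4 * theta_hat
g_prime = (g(theta_hat + h) - g(theta_hat - h)) / (2 * h)    # numeric derivative of g
var_g = g_prime ** 2 * var_theta                             # delta method approximation
print(g(theta_hat), var_g)
```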
32 Fitting Discrete Distributions
To choose which $(a, b, 0)$ distribution to fit to a set of data, compute the empirical mean and variance. Then note:
Binomial: $E[N] > \operatorname{Var}[N]$
Poisson: $E[N] = \operatorname{Var}[N]$
Negative Binomial: $E[N] < \operatorname{Var}[N]$
33 Cox Proportional Hazards Model
Hazard class $i$ / covariate: $z_i$
Logarithm of the proportionality constant for class $i$: $\beta_i$
Proportionality constant / relative risk: $c = \exp(\beta_1 z_1 + \beta_2 z_2 + \cdots + \beta_n z_n)$
Baseline cumulative hazard: $H_0(t)$
Hazard relation: $H(t \mid z_1, \ldots, z_n) = H_0(t)\,c$
34 Cox Proportional Hazards Model: Partial Likelihood
Number at risk at time $y$: $k$
Proportionality constants of the members at risk: $\{c_1, c_2, \ldots, c_k\}$
Failures at time $y$: $\{j_1, j_2, \ldots, j_d\}$
Breslow's partial likelihood (factor contributed at time $y$): $\frac{\prod_{i=1}^{d} c_{j_i}}{\left(\sum_{i=1}^{k} c_i\right)^{d}}$
35 Cox Proportional Hazards Model: Estimating Baseline Survival
Risk set at time $y_j$: $R(y_j)$
Proportionality constants for the members of risk set $R(y_j)$: $c_i$
Baseline cumulative hazard estimate: $\hat{H}_0(t) = \sum_{y_j \le t} \frac{s_j}{\sum_{i \in R(y_j)} c_i}$
36 The Generalized Linear Model
37 Hypothesis Tests: Graphic Comparison
Adjusted fitted density: $f^*(x) = \frac{f(x)}{1 - F(d)}$
Adjusted fitted distribution: $F^*(x) = \frac{F(x) - F(d)}{1 - F(d)}$
$D(x)$ plot: $D(x) = F_n(x) - F^*(x)$
Empirical observations: $x_1, x_2, \ldots, x_n$
$p$-$p$ plot: the points $(F_n(x_j), F^*(x_j))$
Normal probability plot: the points $(x_j, \Phi^{-1}(F_n(x_j)))$
Where the $p$-$p$ plot has slope greater than 1, the fitted distribution puts more weight there than the empirical distribution; where the slope is less than 1, the fitted distribution puts less weight there.
38 Hypothesis Tests: Kolmogorov-Smirnov
Kolmogorov-Smirnov statistic: $D = \max_x |F_n(x) - F^*(x; \hat\theta)|$
The KS statistic is the largest absolute difference between the fitted and empirical distribution functions. It should be used on individual data, although bounds on $D$ can be established with grouped data. The fitted distribution must be continuous. The statistic weights the whole distribution uniformly. The critical value should be lowered when parameters are fitted, and it decreases for larger samples.
39 Hypothesis Tests: Anderson-Darling
Anderson-Darling statistic: $A^2 = n \int_{t}^{u} \frac{(F_n(x) - F^*(x))^2}{F^*(x)\,(1 - F^*(x))}\, f^*(x)\, dx$
The AD statistic is used only on individual data. It places heavier weight on the tails of the distribution. Its critical value is independent of sample size but decreases when parameters are fitted.
40 Hypothesis Tests: Chi-square
Total number of observations: $n$
Hypothetical probability that $X$ is in the $j$-th group: $p_j$
Number of observations in the $j$-th group: $n_j$
Chi-square statistic: $Q = \sum_{j=1}^{k} \frac{(n_j - E_j)^2}{E_j}$, where $E_j = n\,p_j$
Degrees of freedom: df = (total number of groups) minus (number of estimated parameters), minus 1 if $n$ is predetermined
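A short sketch of the chi-square test, assuming Python with SciPy; the group counts, fitted probabilities, and the single estimated parameter used in the degrees-of-freedom count are all illustrative.

```python
from scipy.stats import chi2

n_j = [30, 25, 25, 20]                 # observed counts in each group (illustrative)
p_j = [0.35, 0.25, 0.22, 0.18]         # fitted probabilities for each group
n = sum(n_j)

E_j = [n * p for p in p_j]             # expected counts E_j = n * p_j
Q = sum((o - e) ** 2 / e for o, e in zip(n_j, E_j))

df = len(n_j) - 1 - 1                  # groups - estimated parameters (1 here) - 1
critical = chi2.ppf(0.95, df)          # 95th percentile of chi-square with df degrees of freedom
print(Q, critical, Q > critical)
```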
41 Likelihood Ratio Algorithm, Schwarz Bayesian Criterion
Likelihood ratio: compute the loglikelihood for each parametric model. Twice the difference of the loglikelihoods must exceed the $100(1 - \alpha)\%$ percentile of the chi-square distribution with df equal to the difference in the number of parameters between the compared models.
Schwarz Bayesian Criterion: compute each loglikelihood and subtract $\frac{r}{2} \log n$, where $r$ is the number of estimated parameters in the model and $n$ is the sample size of each model.
Part IV
Credibility
42 Limited Fluctuation Credibility: Poisson Frequency
Poisson frequency of claims: $\lambda$
Margin of acceptable fluctuation: $k$
Confidence that the fluctuation is within $k$: $P$
Severity coefficient of variation: $CV_s^2 = \sigma_s^2 / \mu_s^2$
Standard for full credibility: $n_0 = \left(\frac{\Phi^{-1}\left(\frac{1+P}{2}\right)}{k}\right)^2$
Standards for full credibility, by the quantity being estimated:
Exposure units needed $e_F$: frequency $n_0/\lambda$; severity $(n_0/\lambda)\,CV_s^2$; aggregate $(n_0/\lambda)(1 + CV_s^2)$
Number of claims needed $n_F$: frequency $n_0$; severity $n_0\,CV_s^2$; aggregate $n_0\,(1 + CV_s^2)$
Aggregate losses needed $s_F$: frequency $n_0\,\mu_s$; severity $n_0\,\mu_s\,CV_s^2$; aggregate $n_0\,\mu_s\,(1 + CV_s^2)$
43 Limited Fluctuation Credibility: Non-Poisson Frequency
Standards for full credibility, by the quantity being estimated:
Exposure units needed $e_F$: frequency $\frac{n_0}{\mu_f}\cdot\frac{\sigma_f^2}{\mu_f}$; severity $\frac{n_0}{\mu_f}\cdot\frac{\sigma_s^2}{\mu_s^2}$; aggregate $\frac{n_0}{\mu_f}\left(\frac{\sigma_f^2}{\mu_f} + \frac{\sigma_s^2}{\mu_s^2}\right)$
Number of claims needed $n_F$: frequency $n_0\,\frac{\sigma_f^2}{\mu_f}$; severity $n_0\,\frac{\sigma_s^2}{\mu_s^2}$; aggregate $n_0\left(\frac{\sigma_f^2}{\mu_f} + \frac{\sigma_s^2}{\mu_s^2}\right)$
Aggregate losses needed $s_F$: frequency $n_0\,\mu_s\,\frac{\sigma_f^2}{\mu_f}$; severity $n_0\,\frac{\sigma_s^2}{\mu_s}$; aggregate $n_0\,\mu_s\left(\frac{\sigma_f^2}{\mu_f} + \frac{\sigma_s^2}{\mu_s^2}\right)$
Poisson frequency is the special case $\mu_f = \sigma_f^2 = \lambda$. If a compound Poisson frequency model is used, you cannot use the Poisson formula; you must use the moments of the mixed distribution (e.g., a Poisson/Gamma mixture is Negative Binomial).
44 Limited Fluctuation Credibility: Partial Credibility
Credibility factor: $Z = \sqrt{n/n_F} = \sqrt{e/e_F} = \sqrt{s/s_F}$
Manual premium (assumed value before observations): $M$
Observed premium: $\bar{X}$
Credibility premium: $P_C = Z\bar{X} + (1 - Z)M$
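A worked sketch combining the full-credibility standard and partial credibility, assuming Python with SciPy, Poisson frequency (so the aggregate standard is $n_0(1 + CV_s^2)$), and invented claim counts and premiums.

```python
from scipy.stats import norm

P, k = 0.90, 0.05                          # within 5% with probability 90%
n0 = (norm.ppf((1 + P) / 2) / k) ** 2      # full-credibility standard, about 1082.41

cv_s2 = 1.5                                # illustrative squared CV of severity
n_full = n0 * (1 + cv_s2)                  # claims needed for full credibility of aggregate losses

n_observed = 800                           # illustrative observed claim count
Z = min(1.0, (n_observed / n_full) ** 0.5) # partial credibility factor

x_bar, manual = 1200.0, 1000.0             # observed and manual premiums (illustrative)
print(Z, Z * x_bar + (1 - Z) * manual)     # credibility premium
```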
45 Bayesian Estimation and Credibility: Discrete Prior
Constructing a table: Row 1 is the prior probability, the chance of membership in a particular class before any observations are made. Row 2 is the likelihood of the observation(s) given membership in that class. Row 3 is the joint probability, the product of Rows 1 and 2. Row 4 is the posterior probability, Row 3 divided by the sum of Row 3. Row 5 is the hypothetical mean or conditional probability, the expectation or probability of the desired outcome given that the observations belong to that class. Row 6 is the Bayesian premium or expected probability, the desired result given the observations: the sum of the products of Rows 4 and 5.
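A sketch of the tabular procedure, assuming Python; the two risk classes, their prior probabilities and Poisson hypothetical means, and the single observation of 2 claims are invented.

```python
import math

# Two hypothetical classes with Poisson claim counts (illustration only).
prior = [0.75, 0.25]                      # Row 1: prior probabilities
means = [0.5, 1.5]                        # hypothetical Poisson means per class
x = 2                                     # observed number of claims

poisson = lambda lam, k: math.exp(-lam) * lam ** k / math.factorial(k)

likelihood = [poisson(m, x) for m in means]                          # Row 2
joint = [p * l for p, l in zip(prior, likelihood)]                   # Row 3
posterior = [j / sum(joint) for j in joint]                          # Row 4
hypothetical = means                                                 # Row 5
bayes_premium = sum(p * h for p, h in zip(posterior, hypothetical))  # Row 6
print(posterior, bayes_premium)
```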
46 Bayesian Estimation and Credibility: Continuous Prior
Observations: $\mathbf{x} = (x_1, x_2, \ldots, x_n)$
Prior density: $\pi(\theta)$
Model density: $f(\mathbf{x} \mid \theta)$
Joint density: $f(\mathbf{x}, \theta) = f(\mathbf{x} \mid \theta)\,\pi(\theta)$
Unconditional (marginal) density: $f(\mathbf{x}) = \int f(\mathbf{x}, \theta)\, d\theta$
Posterior density: $\pi(\theta \mid x_1, \ldots, x_n) = \frac{f(\mathbf{x}, \theta)}{f(\mathbf{x})}$
Predictive density: $f(x_{n+1} \mid \mathbf{x}) = \int f(x_{n+1} \mid \theta)\,\pi(\theta \mid \mathbf{x})\, d\theta$
Loss function minimizing MSE: posterior mean $E[\theta \mid \mathbf{x}]$
Loss function minimizing absolute error: posterior median
Zero-one loss function: posterior mode
A conjugate prior is a prior distribution such that the prior and posterior distributions belong to the same parametric family.
47 Bayesian Credibility: Poisson/Gamma
With Poisson frequency with mean $\lambda$, where $\lambda$ is Gamma distributed with parameters $\alpha$, $\theta$:
Number of claims: $x$
Number of exposures: $n$
Average claims per exposure: $\bar{x} = x/n$
Conjugate prior parameters: $\alpha$, $\gamma = 1/\theta$
Posterior parameters: $\alpha^* = \alpha + x$, \quad $\gamma^* = \gamma + n$
Credibility premium: $P_C = \alpha^*/\gamma^*$
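A minimal sketch of the conjugate update, assuming Python and the rate parameterization $\gamma = 1/\theta$ used above; the prior parameters, exposure count, and claim total are invented.

```python
# Prior: lambda ~ Gamma(alpha, theta); work with the rate gamma = 1/theta.
alpha, theta = 3.0, 0.2                 # illustrative prior parameters
gamma = 1.0 / theta

x_total, n = 4, 10                      # 4 claims observed on 10 exposures (illustrative)

alpha_post = alpha + x_total            # alpha* = alpha + x
gamma_post = gamma + n                  # gamma* = gamma + n

credibility_premium = alpha_post / gamma_post   # posterior mean claim frequency
print(credibility_premium)
```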
48 Bayesian Credibility: Normal/Normal
With a Normal model with mean $\theta$ and fixed variance $v$, where $\theta$ is Normal with mean $\mu$ and variance $a$:
Posterior mean: $\mu^* = \frac{v\mu + na\bar{x}}{v + na}$
Posterior variance: $a^* = \frac{va}{v + na}$
Credibility factor: $Z = \frac{n}{n + v/a}$
Credibility premium: $P_C = \mu^*$
49 Bayesian Credibility: Binomial/Beta
With Binomial frequency with parameters $m$, $q$, where $q$ is Beta with parameters $a$, $b$:
Number of trials: $m$
Number of claims in $m$ trials: $k$
Posterior parameters: $a^* = a + k$, \quad $b^* = b + m - k$
Credibility premium: $P_C = a^*/(a^* + b^*)$
50 Bayesian Credibility: Exponential/Inverse Gamma
With exponential severity with mean $\theta$, where $\theta$ is inverse Gamma with parameters $\alpha$, $\beta$:
Posterior parameters: $\alpha^* = \alpha + n$, \quad $\beta^* = \beta + n\bar{x}$
51 Bühlmann Credibility: Basics
Expected hypothetical mean: $\mu = E[E[X \mid \Theta]]$
Variance of hypothetical means (VHM): $a = \operatorname{Var}[E[X \mid \Theta]]$
Expected value of the process variance (EPV): $v = E[\operatorname{Var}[X \mid \Theta]]$
Bühlmann's $k$: $k = v/a$
Bühlmann credibility factor: $Z = n/(n + k)$
Bühlmann credibility premium: $P_C = Z\bar{X} + (1 - Z)\mu$
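A sketch computing $\mu$, $v$, $a$, and the Bühlmann premium from a discrete prior, assuming Python; the two classes, their hypothetical means and process variances, and the observations are invented.

```python
# Two risk classes with equal prior weight (illustration only).
prior = [0.5, 0.5]
hyp_mean = [0.2, 0.6]          # E[X | class]
proc_var = [0.2, 0.6]          # Var[X | class]  (Poisson-like: variance = mean)

mu = sum(p * m for p, m in zip(prior, hyp_mean))                   # expected hypothetical mean
v = sum(p * s for p, s in zip(prior, proc_var))                    # EPV
a = sum(p * m ** 2 for p, m in zip(prior, hyp_mean)) - mu ** 2     # VHM

obs = [0, 1, 0]                 # three observed periods (illustrative)
n = len(obs)
k = v / a
Z = n / (n + k)
premium = Z * (sum(obs) / n) + (1 - Z) * mu
print(mu, v, a, Z, premium)
```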
52 Bühlmann Credibility: Discrete Prior
No additional formulas.
53 Bühlmann Credibility: Continuous Prior
No additional formulas.
54 Bühlmann-Straub Credibility
55 Exact Credibility
Bühlmann credibility equals Bayesian credibility when the model distribution is a member of the linear exponential family and its conjugate prior is used.
Frequency/Severity and the corresponding Bühlmann $k$:
Poisson/Gamma: $k = 1/\theta = \gamma$
Normal/Normal: $k = v/a$
Binomial/Beta: $k = a + b$
56 Bühlmann As Least Squares Estimate of Bayes
Variance of observations: $\operatorname{Var}[X] = \sum p_i X_i^2 - \bar{X}^2$
Bayesian estimates: $Y_i$
Covariance: $\operatorname{Cov}[X, Y] = \sum p_i X_i Y_i - \bar{X}\bar{Y}$
Mean relationship: $E[X] = E[Y] = E[\hat{Y}]$
Regression slope / Bühlmann credibility estimate: $b = Z = \frac{\operatorname{Cov}[X, Y]}{\operatorname{Var}[X]}$
Regression intercept: $a = (1 - Z)\,E[X]$
Bühlmann predictions: $\hat{Y}_i = a + b X_i$
57 Empirical Bayes Non-Parametric Methods
For uniform exposures:
Number of exposures/years of data: $n$
Number of classes/groups: $r$
Observation for group $i$, year $j$: $x_{ij}$
Unbiased manual premium: $\hat\mu = \bar{x} = \frac{1}{rn} \sum_{i=1}^{r} \sum_{j=1}^{n} x_{ij}$
Unbiased EPV: $\hat{v} = \frac{1}{r} \sum_{i=1}^{r} \frac{1}{n-1} \sum_{j=1}^{n} (x_{ij} - \bar{x}_i)^2$
Unbiased VHM: $\hat{a} = \frac{1}{r-1} \sum_{i=1}^{r} (\bar{x}_i - \bar{x})^2 - \frac{\hat{v}}{n}$
Bühlmann credibility factor: $Z = \frac{n}{n + \hat{k}}$, where $\hat{k} = \hat{v}/\hat{a}$
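A sketch of the uniform-exposure estimators above, assuming Python with NumPy; the 3-class by 4-year data array is invented.

```python
import numpy as np

# r classes (rows) observed for n years (columns); illustrative data.
x = np.array([[3.0, 5.0, 4.0, 4.0],
              [6.0, 8.0, 7.0, 7.0],
              [1.0, 2.0, 1.0, 2.0]])
r, n = x.shape

mu_hat = x.mean()                                    # manual premium
v_hat = x.var(axis=1, ddof=1).mean()                 # EPV: average within-class sample variance
a_hat = x.mean(axis=1).var(ddof=1) - v_hat / n       # VHM estimate

k_hat = v_hat / a_hat
Z = n / (n + k_hat)
premiums = Z * x.mean(axis=1) + (1 - Z) * mu_hat     # credibility premium per class
print(mu_hat, v_hat, a_hat, Z, premiums)
```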
58 Empirical Bayes Semi-Parametric Methods
When the model is Poisson, $v = \mu$, and we have:
EPV: $\hat{v} = \hat\mu = \bar{x}$
VHM: $\hat{a} = \hat\sigma^2 - \hat{v}$
Sample variance: $\hat\sigma^2 = \frac{\sum (x_i - \bar{x})^2}{n - 1}$
Part V
Simulation
59 Simulation: Inversion Method
Random number: $u \in [0, 1]$
Inversion relationship: $\Pr[F^{-1}(U) \le x] = \Pr[U \le F(x)] = F(x)$
Method: $x_i = F^{-1}(u_i)$
Simply take the generated uniform random number $u_i$ and apply the inverse CDF to obtain the corresponding simulated value $x_i$.
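A sketch of the inversion method for an exponential, whose inverse CDF has a closed form; assuming Python, with an illustrative mean and seed.

```python
import math
import random

theta = 1000.0                               # illustrative exponential mean
F_inv = lambda u: -theta * math.log(1 - u)   # inverse CDF of Exponential(theta)

random.seed(42)
simulated = [F_inv(random.random()) for _ in range(10000)]
print(sum(simulated) / len(simulated))       # should be close to theta
```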