
Accepted Manuscript

Continuous Optimization
Robust Portfolio Optimization with Copulas
Iakovos Kakouris, Berc Rustem
PII: S0377-2217(13)01006-0
DOI: http://dx.doi.org/10.1016/j.ejor.2013.12.022
Reference: EOR 12048
To appear in: European Journal of Operational Research
Received Date: 6 August 2012
Accepted Date: 14 December 2013
Please cite this article as: Kakouris, I., Rustem, B., Robust Portfolio Optimization with Copulas, European Journal
of Operational Research (2013), doi: http://dx.doi.org/10.1016/j.ejor.2013.12.022
This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers
we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and
review of the resulting proof before it is published in its final form. Please note that during the production process
errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Robust Portfolio Optimization with Copulas
Iakovos Kakouris^a, Berc Rustem^b

^a Corresponding Author, Department of Computing, Imperial College London, 180 Queens Gate, London, SW7 2AZ, UK, tel: +44 (0) 20 7594 8341, fax: +44 (0) 20 7581 8024, iak05@ic.ac.uk
^b Department of Computing, Imperial College London, 180 Queens Gate, London, SW7 2AZ, UK, tel: +44 (0) 20 7594 8345, fax: +44 (0) 20 7581 8024, br@doc.ic.ac.uk
Abstract
Conditional Value at Risk (CVaR) is widely used in portfolio optimization
as a measure of risk. CVaR is clearly dependent on the underlying probability
distribution of the portfolio. We show how copulas can be introduced to any
problem that involves distributions and how they can provide solutions for the
modeling of the portfolio. We use this to provide the copula formulation of the
CVaR of a portfolio. Given the critical dependence of CVaR on the underlying
distribution, we use a robust framework to extend our approach to Worst Case
CVaR (WCVaR). WCVaR is achieved through the use of rival copulas. These rival
copulas have the advantage of exploiting a variety of dependence structures,
both symmetric and asymmetric. We compare our model against two other models,
Gaussian CVaR and Worst Case Markowitz. Our empirical analysis shows that WCVaR
can assess the risk more adequately than the two competitive models during
periods of crisis.

1 Introduction
In this paper we look into the problem of portfolio optimization where the assets
of the portfolio are described by random variables. In this situation the selection of
the optimal portfolio depends on the underlying assumptions on the behaviour of
the assets and the choice of risk measure. Usually the objective is to find the
optimal risk-return trade-off.
One of the pioneers of portfolio optimization was Markowitz (1952), who proposed
the mean-variance framework for risk-return analysis. Although the most common
measure for the estimation of the return of the portfolio remains the expected
return, many other ways of calculating the risk have been developed. A widely used
measure of risk is Value at Risk (VaR). VaR is the measure of risk that is
recommended as a standard by the Basel Committee. However, VaR has been criticized
in recent years, mainly for two reasons. Firstly, VaR does not satisfy subadditivity
and hence is not a coherent measure of risk in the way defined by Artzner et al.
(1998). Also, it is not a convex measure of risk and thus may have many local
extrema, which cause technical issues when optimizing a portfolio. Secondly, it
gives a percentile of the loss distribution that does not provide an adequate
picture of the possible losses in the tail of the distribution. Szegö (2005) uses
this argument to state that VaR does not measure risk. He then suggests alternative
measures of risk, one of them being Conditional Value at Risk (CVaR).
CVaR is the expectation of the distribution above VaR. Thus, the value of CVaR is
affected by the fatness of the tail of the distribution. Hence, CVaR provides a
better description of the loss in the tail of the distribution. Rockafellar and
Uryasev (2000, 2002) proposed a minimization formulation that usually results in a
convex or linear problem. These are desirable aspects of CVaR and have paved the
way for its use in risk management and portfolio optimization. A literature review
on CVaR can be found in Zhu and Fukushima (2009) and the references therein.
Following the formulation of Rockafellar and Uryasev (2000, 2002), in order to
calculate CVaR one has to make some assumptions on the underlying distribution of
the assets. This can also be in the form of an uncertainty domain, like a hypercube
or an ellipsoidal set, in which all feasible uncertainty values lie (Zhu and
Fukushima, 2009). An alternative is assuming some multivariate distribution (Zhu
and Fukushima, 2009). In this paper we focus on the selection of multivariate
distributions.
The Gaussian distribution is the most commonly used multivariate case. It is easy
to calibrate and there are very efficient algorithms to simulate Gaussian data.
This also applies, to some extent, to the elliptical family of distributions, with
the Student's t-distribution being widely used in credit risk (Chan and Kroese,
2010; Clemente and Romano, 2004; Romano). One disadvantage of using the Gaussian
distribution is its symmetry. This implies that the probability of losses is the
same as the probability of gains. Studies suggest that, at least in the context of
financial markets, assets exhibit stronger comovements during a crisis as opposed
to prosperity (Ang and Chen, 2002; Hu, 2002, 2006). The second disadvantage is that
it uses linear correlation as a measure of dependence. As the name suggests, linear
correlation is characterized by linear dependencies. Since the observation of
asymmetric comovements mentioned above suggests non-linear dependencies, linear
correlation may not be an adequate measure of dependence (Artzner et al., 1998;
Szegö, 2005).
One way of addressing the limitations of the symmetry underlying elliptical
distributions is to consider mixture distributions. A linear combination of a set
of distributions is used to fit the given sample by optimizing the combination
weights. Hasselblad (1966, 1969) was one of the first to look into mixture
distributions and how their parameters can be estimated. Zhu and Fukushima (2009)
avoid the assumptions needed on the set of distributions and their parameters and
also avoid the estimation of the weights. Subsets of historical returns are used to
represent data arising from different distributions, and a worst-case scenario
approach is applied to avoid the calibration of the weights. Hu (2002, 2006) and
Smillie (2007) use mixture copulas to fit their data samples in the bivariate case.
The work of Hu (2002, 2006) and Zhu and Fukushima (2009) motivates us to introduce
copulas within a worst case robust scenario framework.

Copulas are multivariate distribution functions whose one-dimensional margins are
uniformly distributed on the closed interval [0, 1] (Cherubini et al., 2004;
Nelsen, 2006; Sklar, 1959). The uniform margins can be replaced by univariate
cumulative distributions of random variables (Cherubini et al., 2004; Nelsen, 2006;
Sklar, 1959). Hence, copulas consider the dependency between the marginal
distributions of the random variables instead of focusing directly on the
dependency between the random variables themselves. This makes them more flexible
than standard distributions, because it is possible to separate the selection of
the multivariate dependency from the selection of the univariate distributions. As
an extension to that, the calibration of the multivariate distribution can be
separated into two steps (Cherubini et al., 2004). Also, the fact that copulas
describe the dependency between the marginal distributions, which are monotonic,
makes them invariant under monotonic transformations (Cherubini et al., 2004;
Nelsen, 2006). Copulas are associated with many measures of dependence that measure
the monotonic dependencies between two random variables. Furthermore, like copulas
themselves, these monotonic measures are invariant under monotonic transformations
(Embrechts et al., 2001; Schweizer and Wolff, 1981; Wolff, 1980).
In this paper, we mainly focus on Archimedean copulas. This is a family of copulas
that exhibits some interesting characteristics that can be utilized in our
distribution modelling, as discussed in the next section.
The paper is structured as follows. In Section 2 we introduce copulas and the
associated measures of dependence, together with some theoretical background. In
Section 3 we derive CVaR for copulas. We extend CVaR to WCVaR through the use of
mixture copulas. We conclude the section by stating the generalized optimization
problem for WCVaR. In Section 4 we construct a model based on the theory of the
previous sections. Then, we provide two numerical examples where we assess the
performance of our model. Finally, we close with conclusions.

2 Copulas
Copulas arise from the theory of probabilistic metric functions and were first
introduced by Sklar in 1959 (Sklar, 1959). Copulas are multivariate distribution
functions whose one-dimensional margins are uniformly distributed on the closed
interval I ≡ [0, 1]. A more rigorous definition of copulas is given below.

Definition 2.1. An n-dimensional copula (n-copula) is a function C from I^n to I
with the following properties:

1. C(u_1, ..., u_i, ..., u_n) = 0 if any u_i = 0, for i = 1, 2, ..., n (we also
describe a function with this property as grounded);

2. C(1, ..., 1, u_m, 1, ..., 1) = u_m for all u_m ∈ I, where m = 1, 2, ..., n;

3. the C-volume V_C(B) ≥ 0 for every n-box B ⊆ I^n (we also describe a function
with this property as n-increasing).
We continue with the definition of distribution functions and joint multivariate
distribution functions. This is used when we discuss the relation between
distribution functions and copulas.

Definition 2.2. A distribution function is a function F from ℝ̄ to I with the
following properties:

1. F is nondecreasing;
2. F(−∞) = 0 and F(∞) = 1.

Definition 2.3. A joint multivariate distribution function is a function from ℝ̄^n
to I with the following properties:

1. F is n-increasing;
2. F is grounded;
3. F(∞, ..., ∞) = 1;
4. F(∞, ..., ∞, x_m, ∞, ..., ∞) = F_m(x_m), where m = 1, 2, ..., n.
We can use copulas to replace probability distributions in all of their
applications thanks to Sklar's theorem. Sklar's theorem is probably the most
important result that links copulas to probability distributions. This, together
with the corollary that follows, provides the relation between n-copulas and
multivariate distributions. Sklar introduced his theorem in 1959 (Sklar, 1959),
where the proof for its bivariate case can also be found. The multivariate case is
discussed by Schweizer and Sklar (1983) (proofs of the theorem and corollary can be
found therein).

Theorem 2.4 (Sklar's Theorem). Let F be an n-dimensional distribution function with
margins F_1, ..., F_n. Then there exists an n-copula C such that, for all x ∈ ℝ̄^n,

F(x_1, \ldots, x_n) = C(F_1(x_1), \ldots, F_n(x_n)). \qquad (1)

Furthermore, if F_1, ..., F_n are continuous, then C is unique; otherwise C is
unique on Ran F_1 × ... × Ran F_n (Ran ≡ Range).
Corollary 2.5. Let F be an n-dimensional distribution function with margins
F_1, ..., F_n, and let C be an n-copula. Then, for any u ∈ I^n,

C(u_1, \ldots, u_n) = F(F_1^{-1}(u_1), \ldots, F_n^{-1}(u_n)), \qquad (2)

where F_1^{-1}, ..., F_n^{-1} are the quasi-inverses of the margins.
The margins F_1, ..., F_n and the multivariate distribution function F are as
defined by Definitions 2.2 and 2.3. The margins u_i can be replaced by F_i(x_i),
because they both belong to the domain I and are uniformly distributed, i.e. let
u ∼ U(0, 1); then P(F(x) ≤ u) = P(x ≤ F^{-1}(u)) = F(F^{-1}(u)) = u.
Using Theorem 2.4 and Corollary 2.5 we can also derive the relation between
probability density functions and copulas. In the following definition, f is the
multivariate probability density function of the probability distribution F, and
f_1, ..., f_n are the univariate probability density functions of the margins
F_1, ..., F_n.

Definition 2.6. The copula density of an n-copula C is the function
c : I^n → [0, ∞) such that

c(u_1, \ldots, u_n) \equiv \frac{\partial^n C(u_1, \ldots, u_n)}{\partial u_1 \cdots \partial u_n} = \frac{f(x_1, \ldots, x_n)}{\prod_{i=1}^{n} f_i(x_i)}. \qquad (3)
Equations (1)-(3) imply that copulas decouple the multivariate probability
distribution from its margins. The margins F_1, ..., F_n can be any distributions
of our choice, while the copula simply describes the monotonic relation between the
margins. This is one of the biggest advantages of copulas, because they divide the
problem of finding the correct distribution into two parts: first, finding the
distributions of the margins, and second, the dependency between them. This is much
easier than finding directly the multivariate dependency between the random
variables. Hence, the calibration of the copulas becomes an easier task. The
calibration methods for copulas can be found in Cherubini et al. (2004). Also, an
introduction to copulas can be found in Nelsen (2006), and Schweizer and Sklar
(1983) discuss the relationship of copulas to probabilistic metric spaces and the
underlying theory.
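To make the two-step decomposition concrete, the short sketch below evaluates the copula implied by a bivariate normal distribution through Corollary 2.5. This is an illustration we add here under stated assumptions (the correlation value and function names are ours), not code from the paper.

# Minimal numerical illustration of Corollary 2.5: the copula implied by a
# bivariate normal joint distribution, C(u1, u2) = F(F1^{-1}(u1), F2^{-1}(u2)).
import numpy as np
from scipy.stats import norm, multivariate_normal

rho = 0.6                                       # assumed linear correlation
joint = multivariate_normal(mean=[0.0, 0.0],
                            cov=[[1.0, rho], [rho, 1.0]])

def gaussian_copula_cdf(u1, u2):
    x = np.array([norm.ppf(u1), norm.ppf(u2)])  # quasi-inverses of the margins
    return joint.cdf(x)

# Property 2 of Definition 2.1: C(1, u) = u, up to numerical accuracy.
print(gaussian_copula_cdf(1.0 - 1e-12, 0.3))    # approximately 0.3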
2.1 Special cases of copulas and related measures of dependence
We introduce the copulas to be used in our examples. Together with the copulas we
consider their associated measures of dependence. The focus is on a special family
of copulas called Archimedean. We also consider the Gaussian copula, which is the
copula version of the multivariate normal distribution.
Archimedean copulas were first introduced by Ling (1965). They belong to the family
of probabilistic metric spaces that have some of the properties of the Archimedes
triangle function, hence the name (Schweizer and Sklar, 1983). This is a family of
copulas that arises differently from the rest. Instead of using Theorem 2.4, we
construct them directly using a function φ, known as a generator, which enables us
to write the expression for the copula in a closed form.
Definition 2.7. Given a function φ : I → [0, ∞] such that φ(1) = 0 and φ(0) = ∞,
and having inverse φ^{-1} completely monotone, an n-place Archimedean copula is a
function C_φ : I^n → I such that

C_{\varphi}(\mathbf{u}) = \varphi^{-1}\left(\varphi(u_1) + \cdots + \varphi(u_n)\right).
Extended literature regarding Archimedean copulas can be found in Cherubini et al.
(2004); Nelsen (2006); Schweizer and Sklar (1983). In our analysis we focus on
three Archimedean copulas: Clayton, Gumbel and Frank. Our motivation for using
these particular copulas stems from Hu (2002, 2006), who focuses on the calibration
of bivariate mixture copulas (see also Hasselblad (1966, 1969) for mixture
distributions). There are two reasons why these particular copulas have been
chosen. First, each copula better describes a different type of dependency. Clayton
and Gumbel are non-symmetric copulas that describe more adequately negative and
positive dependencies, i.e. stronger dependence below and above the 50th percentile
respectively. The Frank copula is symmetric but has different properties to the
Gaussian copula. Hence, by using them in a mixture structure we cover a large
spectrum of possible dependencies. Second, these three copulas are very easy to
calibrate.
The definitions of the three Archimedean copulas are the following:

Definition 2.8. Given a generator of the form φ(u) = u^{−θ} − 1 with θ ∈ (0, ∞),
the Clayton n-copula is given by

C^{Cl}(\mathbf{u}) = \max\left[\left(u_1^{-\theta} + \cdots + u_n^{-\theta} - n + 1\right)^{-1/\theta},\; 0\right].
Definition 2.9. Given a generator of the form φ(u) = (−ln(u))^θ with θ ∈ (1, ∞),
the Gumbel n-copula is given by

C^{Gu}(\mathbf{u}) = \exp\left\{-\left[(-\ln u_1)^{\theta} + \cdots + (-\ln u_n)^{\theta}\right]^{1/\theta}\right\}.

Definition 2.10. Given a generator of the form
φ(u) = −ln[(exp(−θu) − 1)/(exp(−θ) − 1)] with θ ∈ (0, ∞), the Frank n-copula is
given by

C^{Fr}(\mathbf{u}) = -\frac{1}{\theta} \ln\left[1 + \frac{(e^{-\theta u_1} - 1) \cdots (e^{-\theta u_n} - 1)}{(e^{-\theta} - 1)^{n-1}}\right].
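For concreteness, a minimal NumPy sketch of Definitions 2.8-2.10 is given below; the function names and test values are our own, and the formulas transcribe the three definitions directly.

# Illustrative implementations of the Clayton, Gumbel and Frank n-copulas;
# u is an array of marginal probabilities in (0, 1).
import numpy as np

def clayton(u, theta):
    # theta in (0, inf); the guard implements the max[. , 0] in Definition 2.8
    s = np.sum(u ** (-theta)) - len(u) + 1.0
    return s ** (-1.0 / theta) if s > 0 else 0.0

def gumbel(u, theta):
    # theta in (1, inf)
    s = np.sum((-np.log(u)) ** theta)
    return np.exp(-s ** (1.0 / theta))

def frank(u, theta):
    # theta in (0, inf)
    num = np.prod(np.expm1(-theta * u))      # (e^{-theta u_1}-1)...(e^{-theta u_n}-1)
    den = np.expm1(-theta) ** (len(u) - 1)   # (e^{-theta} - 1)^{n-1}
    return -np.log1p(num / den) / theta

u = np.array([0.3, 0.7, 0.5])
print(clayton(u, 2.0), gumbel(u, 2.0), frank(u, 2.0))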
For the calibration of the free parameters, θ, of the Archimedean copulas we will
use Kendall's τ. Kendall's τ is a bivariate measure of dependence, defined by the
following equation:

\tau(X_1, X_2) = 4 \int\!\!\int F(x_1, x_2)\, dF(x_1, x_2) - 1 = 4 \int_0^1\!\!\int_0^1 C(u_1, u_2)\, dC(u_1, u_2) - 1. \qquad (4)
As we can see from equation (4), Kendall's τ measures the dependency between the
cumulative distributions of the random variables X_1 and X_2 and does not depend on
the random variables themselves. Thus, τ is a measure of monotonic dependence and
is invariant under monotonic transformations. This makes it a more robust measure
of dependence when compared to linear correlation. For comparison purposes, we also
define linear correlation as
\rho(X_1, X_2) = \frac{1}{\sigma(X_1)\,\sigma(X_2)} \int\!\!\int \left[F(x_1, x_2) - F_1(x_1) F_2(x_2)\right] dx_1\, dx_2
               = \frac{1}{\sigma(X_1)\,\sigma(X_2)} \int_0^1\!\!\int_0^1 \left[C(u_1, u_2) - u_1 u_2\right] dF_1^{-1}(u_1)\, dF_2^{-1}(u_2), \qquad (5)
where σ(X_i) denotes the standard deviation of the random variable X_i. It can be
seen that, in the copula version of ρ, the dependency on the random variables X_1
and X_2 remains in the form of the volatility σ(X_i). An extensive literature on
monotonic measures of dependence can be found in Nelsen (2006); Schweizer and Wolff
(1981); Smillie (2007); Wolff (1980) and the references therein.
Let us denote the free parameter, θ, of each of the Archimedean copulas by θ_Cl for
C^Cl (Definition 2.8), θ_Gu for C^Gu (Definition 2.9) and θ_Fr for C^Fr
(Definition 2.10). For these three cases we have closed-form relations with
Kendall's τ of equation (4) (Cherubini et al., 2004). For C^Cl we have that

\tau = \frac{\theta_{Cl}}{\theta_{Cl} + 2}, \qquad (6)

for C^Gu

\tau = 1 - \frac{1}{\theta_{Gu}}, \qquad (7)

and for C^Fr we have

\tau = 1 + \frac{4\left[D_1(\theta_{Fr}) - 1\right]}{\theta_{Fr}}, \qquad (8)

where

D_k(\theta) = \frac{k}{\theta^k} \int_0^{\theta} \frac{x^k}{\exp(x) - 1}\, dx \quad \text{for } k = 1, 2.
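To illustrate how these relations are used in calibration, the sketch below inverts equations (6)-(8) to recover θ from a target τ. The Frank relation has no closed-form inverse, so it is solved numerically through the Debye function D_1; the helper names and the bracketing interval are our assumptions.

# Hedged sketch: copula parameter theta from a target Kendall's tau.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def debye1(theta):
    # D_1(theta) = (1/theta) * integral_0^theta x / (exp(x) - 1) dx
    integral, _ = quad(lambda x: x / np.expm1(x), 0.0, theta)
    return integral / theta

def theta_clayton(tau):
    return 2.0 * tau / (1.0 - tau)        # inverts tau = theta / (theta + 2)

def theta_gumbel(tau):
    return 1.0 / (1.0 - tau)              # inverts tau = 1 - 1/theta

def theta_frank(tau):
    # solve tau = 1 + 4 (D_1(theta) - 1) / theta over an assumed bracket
    f = lambda t: 1.0 + 4.0 * (debye1(t) - 1.0) / t - tau
    return brentq(f, 1e-6, 100.0)

tau = 0.4
print(theta_clayton(tau), theta_gumbel(tau), theta_frank(tau))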
As we can see from Definitions 2.8-2.10, there is only one free parameter θ to
calibrate regardless of the dimension of the copula. On the other hand, we have a τ
for each pair of random variables. Our solution to this problem is to calculate the
τ for all the pairs and select the largest; a sketch of this rule follows below. To
do that we calculate τ(X_i, X_j) for i, j = 1, 2, ..., n and i ≠ j. A higher
positive τ implies stronger negative dependence in the case of Clayton and stronger
positive dependence in the case of Gumbel, etc. Hence, by selecting the highest
positive τ we deliberately choose the most extreme behaviour for our copulas. This
is consistent with the use of copulas within a worst case robust framework. It also
allows an easy calibration.
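The following is an illustrative sketch of this selection rule: compute Kendall's τ for every pair of assets and keep the largest. The data here is synthetic and all names are ours.

# Select the single calibration tau as the largest pairwise Kendall's tau.
import numpy as np
from scipy.stats import kendalltau

def max_pairwise_tau(returns):
    # returns: (T, n) array of asset returns; scan all pairs i < j
    n = returns.shape[1]
    taus = [kendalltau(returns[:, i], returns[:, j])[0]
            for i in range(n) for j in range(i + 1, n)]
    return max(taus)

rng = np.random.default_rng(0)
common = rng.standard_normal((500, 1))          # a common factor induces dependence
sample = rng.standard_normal((500, 4)) + 0.5 * common
print(max_pairwise_tau(sample))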
Other ways of estimating θ are mentioned in the literature by Genest and Rivest
(1993) and Smillie (2007). The former employ a more sophisticated, semi-parametric
methodology for the decomposition of τ. The latter follows a similar approach to
ours but, instead of taking the maximum τ, takes the median of the estimated τs. In
both cases the focus is to provide the best possible fit for the data. Hence, a
more optimistic estimate will be provided, affecting the robustness of the model.
We believe that our method is more suitable for the worst case framework that we
seek.
Finally, we give the definition of the Gaussian copula (Cherubini et al., 2004;
Embrechts et al., 2001; Smillie, 2007).

Definition 2.11. Given an n-place standard multivariate normal distribution
function Φ_n parameterized by a dispersion matrix P ∈ [−1, 1]^{n×n}, the Gaussian
copula is the function C^{Ga} : I^n → I such that

C^{Ga}(\mathbf{u}) = \Phi\left(\Phi_1^{-1}(u_1), \ldots, \Phi_n^{-1}(u_n)\right). \qquad (9)

For C^{Ga} to be called a Gaussian copula, all the margins {Φ_i}_{i=1}^{n} have to
be normally distributed, but they can have different means and variances.
3 Worst Case Conditional Value at Risk
Having introduced the theorems that enable us to associate copulas with
distributions, we now derive the copula formulation of the Worst Case Conditional
Value at Risk (WCVaR). Zhu and Fukushima (2009) have derived the WCVaR for
distributions. We follow a similar approach to derive the WCVaR for copulas. At
every step involving the use of distributions, we present the equivalent copula
formulation. For the derivation of the copula formulation we use equations (1)-(3).
In order to define the WCVaR, we first have to define Value at Risk (VaR) and
Conditional Value at Risk (CVaR). We first discuss VaR.

Proposition 3.1. Let w ∈ W ⊆ ℝ^m be a decision vector, u ∈ I^n a random vector,
g(w, u) the cost function, and F(x) = (F_1(x_1), ..., F_n(x_n)) a set of marginal
distributions, where u = F(x). Also, let us assume that u follows a continuous
distribution with copula density function c(·). Then VaR_β for a confidence level β
is defined as

\mathrm{VaR}_{\beta}(w) \equiv \min\{\alpha \in \mathbb{R} : C(\mathbf{u} \mid g(w, \mathbf{u}) \le \alpha) \ge \beta\}. \qquad (10)
Proof. Given a decision w ∈ W and a random vector x ∈ ℝ^n which follows a
continuous distribution with density function f(·), the probability of g(w, x) not
exceeding a threshold α is represented by

\Psi(w, \alpha) \equiv \int_{g(w,\mathbf{x}) \le \alpha} f(\mathbf{x})\, d\mathbf{x}
               = \int_{g(w,\mathbf{x}) \le \alpha} c(F(\mathbf{x})) \prod_{i=1}^{n} f_i(x_i)\, d\mathbf{x}
               = \int_{g(w,\mathbf{u}) \le \alpha} c(\mathbf{u})\, d\mathbf{u}
               = C(\mathbf{u} \mid g(w, \mathbf{u}) \le \alpha),

where f_i(x_i) = ∂F_i(x_i)/∂x_i is the univariate probability density of the
individual elements of the random vector x (see Definition 2.6). Here
g(w, u) = g(w, F^{-1}(u)), where F^{-1}(u) = (F_1^{-1}(u_1), ..., F_n^{-1}(u_n))
maps the domain of the cost function from ℝ^n to I^n, as implied by the
transformation u_i = F_i(x_i). For the derivation of the copula version of Ψ(w, α)
we use equation (3). Having defined Ψ(w, α), we consider the VaR. Given a fixed
w ∈ W and a confidence level β, VaR_β is defined as

\mathrm{VaR}_{\beta}(w) \equiv \min\{\alpha \in \mathbb{R} : \Psi(w, \alpha) \ge \beta\} = \min\{\alpha \in \mathbb{R} : C(\mathbf{u} \mid g(w, \mathbf{u}) \le \alpha) \ge \beta\}. \qquad \square

We continue with the definition of the CVaR with respect to VaR.


Proposition 3.2. Given w, u, F(x) and g(w, u) as in Proposition 3.1, we define
CVaR_β for a confidence level β as

\mathrm{CVaR}_{\beta}(w) \equiv \frac{1}{1-\beta} \int_{g(w,\mathbf{u}) \ge \mathrm{VaR}_{\beta}(w)} g(w, \mathbf{u})\, c(\mathbf{u})\, d\mathbf{u}. \qquad (11)
Proof. We start from the equation of CVaR that arises from the probability density
function f(·) and derive the copula form:

\mathrm{CVaR}_{\beta}(w) = \frac{1}{1-\beta} \int_{g(w,\mathbf{x}) \ge \mathrm{VaR}_{\beta}(w)} g(w, \mathbf{x})\, f(\mathbf{x})\, d\mathbf{x}
                         = \frac{1}{1-\beta} \int_{g(w,\mathbf{x}) \ge \mathrm{VaR}_{\beta}(w)} g(w, \mathbf{x})\, c(F(\mathbf{x})) \prod_{i=1}^{n} f_i(x_i)\, d\mathbf{x}
                         = \frac{1}{1-\beta} \int_{g(w,\mathbf{u}) \ge \mathrm{VaR}_{\beta}(w)} g(w, \mathbf{u})\, c(\mathbf{u})\, d\mathbf{u}. \qquad (12) \quad \square

Following Rockafellar and Uryasev (2000), we formulate equation (12) as the
following minimization problem (here [t]^+ ≡ max(t, 0)):

G_{\beta}(w, \alpha) = \alpha + \frac{1}{1-\beta} \int_{\mathbf{x} \in \mathbb{R}^n} [g(w, \mathbf{x}) - \alpha]^{+} f(\mathbf{x})\, d\mathbf{x}
                     = \alpha + \frac{1}{1-\beta} \int_{\mathbf{u} \in I^n} [g(w, \mathbf{u}) - \alpha]^{+} c(\mathbf{u})\, d\mathbf{u}. \qquad (13)

Hence, we have

\mathrm{CVaR}_{\beta}(w) = \min_{\alpha \in \mathbb{R}} G_{\beta}(w, \alpha). \qquad (14)

By solving the minimization problem in equation (14), we directly obtain the values
of both CVaR and VaR. From Proposition 3.1 we have that the value of VaR is the
optimal value of α.
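The following sketch illustrates equations (13)-(14) on a finite sample of losses, anticipating the Monte Carlo approximation (27) below: minimizing G_β over α returns VaR_β at the minimizer and CVaR_β as the minimum value. The function names and test data are ours.

# Hedged sketch of equations (13)-(14): VaR and CVaR at level beta by
# minimizing the sampled Rockafellar-Uryasev objective over alpha.
import numpy as np
from scipy.optimize import minimize_scalar

def var_cvar(losses, beta=0.95):
    G = lambda a: a + np.mean(np.maximum(losses - a, 0.0)) / (1.0 - beta)
    res = minimize_scalar(G, bounds=(losses.min(), losses.max()),
                          method="bounded")
    return res.x, res.fun                # alpha* is VaR, minimum of G is CVaR

rng = np.random.default_rng(1)
losses = rng.standard_normal(100_000)
print(var_cvar(losses))                  # roughly (1.645, 2.063) for N(0,1) losses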
In order for the above definitions to be computed, exact knowledge of the
distribution f(x), or of the copula density c(u) and the margins F(x), is needed.
As the aim in this paper is to represent distributions with copulas, we shall omit
using f(x) and use c(u) instead. The equivalence of the two has been discussed in
Section 2. Knowledge of the copula C(u) and its margins {u_i = F_i(x_i)}_{i=1}^{n}
implies knowledge of f(x) and c(u). A copula representation of the distribution of
x cannot be expected to be exact. Thus, we assume that our copula representation
belongs to a set of copulas, c(·) ∈ C. The notion of robustness with respect to C
involves the worst performing copula (or copulas, since the worst case may not be
unique), i.e. the copula for which we obtain the greatest CVaR. Hence, we define
WCVaR.
Definition 3.3. The WCVaR for fixed w ∈ W with respect to C is defined as

\mathrm{WCVaR}_{\beta}(w) \equiv \sup_{c(\cdot) \in \mathcal{C}} \mathrm{CVaR}_{\beta}(w). \qquad (15)
It is known that CVaR is a coherent measure of risk (Artzner et al., 1998; Szegö,
2005; Zhu and Fukushima, 2009). For a measure of risk ρ mapping a random vector X
to be coherent, it has to satisfy the following properties:

(i) Subadditivity: for all random vectors X and Y, ρ(X + Y) ≤ ρ(X) + ρ(Y);
(ii) Positive homogeneity: for positive constant λ, ρ(λX) = λρ(X);
(iii) Monotonicity: if X ≤ Y for each outcome, then ρ(X) ≤ ρ(Y);
(iv) Translation invariance: for constant m, ρ(X + m) = ρ(X) + m.

Zhu and Fukushima (2009) prove that WCVaR preserves coherence. They also give the
following lemma from Fan (1953), which allows us to formulate the problem as a
tractable one.

Lemma 3.4. Suppose that W and X are nonempty convex sets in ℝ^n and ℝ^m,
respectively, and the function z(w, x) is convex in w for any x, and concave in x
for any w. Then we have

\min_{w \in W} \max_{x \in X} z(w, x) = \max_{x \in X} \min_{w \in W} z(w, x). \qquad (16)

We also use Lemma 3.4 to extend the proof from Zhu and Fukushima (2009) to copulas
and eventually formulate our problem as a min-max problem.
3.1 Mixture Copula
In this example the distribution of the vector of returns x is described by a
mixture copula

C(F(\mathbf{x})) = \boldsymbol{\lambda}^{T} \mathbf{C}, \qquad (17)

where λ ∈ Λ = {λ : e^T λ = 1, λ ≥ 0, λ ∈ ℝ^l}, C = (C_1(F(x)), ..., C_l(F(x))) is
the vector of copulas and F(x) = (F_1(x_1), ..., F_n(x_n)) is the vector of the
cumulative univariate distributions. We can apply equation (3) to equation (17) to
obtain the density of the mixture copula. Then, using this density (whose
components are the densities of the copulas in Definitions 2.8-2.11) in
equation (13), we obtain

G_{\beta}(w, \alpha, \boldsymbol{\lambda}) = \alpha + \frac{1}{1-\beta} \int_{\mathbf{u} \in I^n} [g(w, \mathbf{u}) - \alpha]^{+} \sum_{i=1}^{l} \lambda_i c_i(\mathbf{u})\, d\mathbf{u} = \sum_{i=1}^{l} \lambda_i G_{\beta}^{i}(w, \alpha), \qquad (18)

where

G_{\beta}^{i}(w, \alpha) = \alpha + \frac{1}{1-\beta} \int_{\mathbf{u} \in I^n} [g(w, \mathbf{u}) - \alpha]^{+} c_i(\mathbf{u})\, d\mathbf{u} \quad \text{for } i = 1, 2, \ldots, l. \qquad (19)

The optimization problem that we need to solve is stated by the following theorem
and corollary from Zhu and Fukushima (2009):
Theorem 3.5. For each w, WCVaR_β(w) with respect to C is given by

\mathrm{WCVaR}_{\beta}(w) = \min_{\alpha \in \mathbb{R}} \max_{\boldsymbol{\lambda} \in \Lambda} G_{\beta}(w, \alpha, \boldsymbol{\lambda}), \qquad (20)

where Λ = {λ : e^T λ = 1, λ ≥ 0, λ ∈ ℝ^l}.

Corollary 3.6. Minimizing WCVaR_β(w) over W can be achieved by the following
minimization:

\min_{w \in W} \mathrm{WCVaR}_{\beta}(w) = \min_{w \in W} \min_{\alpha \in \mathbb{R}} \max_{\boldsymbol{\lambda} \in \Lambda} G_{\beta}(w, \alpha, \boldsymbol{\lambda}). \qquad (21)

More specifically, if (w*, α*) attains the right-hand-side minimum, then w* attains
the left-hand-side minimum.

Zhu and Fukushima (2009) provide the proof for the case of mixture distributions.
The theorems in Section 2, together with Proposition 3.1 and Proposition 3.2, show
that Theorem 3.5 and Corollary 3.6 can be applied to copulas. For the sake of
completeness we give the motivation behind the proof and we continue with the
formulation of the optimization problem.
In order to optimize the portfolio we need to solve

\min_{w \in W} \mathrm{WCVaR}_{\beta}(w) \equiv \min_{w \in W} \max_{\boldsymbol{\lambda} \in \Lambda} \min_{\alpha \in \mathbb{R}} G_{\beta}(w, \alpha, \boldsymbol{\lambda}). \qquad (22)

Since the mixture copula (17) is linear in λ, Zhu and Fukushima (2009) use
Lemma 3.4 to show that (22) can be written as

\min_{w \in W} \min_{\alpha \in \mathbb{R}} \max_{\boldsymbol{\lambda} \in \Lambda} G_{\beta}(w, \alpha, \boldsymbol{\lambda}). \qquad (23)
An epigraph formulation can be used to reduce problem (23) to a minimization
problem as follows:

\min_{(w, \alpha, \theta) \in W \times \mathbb{R} \times \mathbb{R}} \left\{ \theta : \sum_{i=1}^{l} \lambda_i G_{\beta}^{i}(w, \alpha) \le \theta, \; \boldsymbol{\lambda} \in \Lambda \right\}, \qquad (24)

where θ must satisfy

G_{\beta}^{i}(w, \alpha) \le \theta, \quad \text{for } i = 1, 2, \ldots, l. \qquad (25)

Problem (24) can thus be reduced to

\min_{(w, \alpha, \theta) \in W \times \mathbb{R} \times \mathbb{R}} \left\{ \theta : G_{\beta}^{i}(w, \alpha) \le \theta, \; i = 1, 2, \ldots, l \right\}. \qquad (26)
A straightforward approach to evaluating problem (26) is by Monte Carlo simulation.
Rockafellar and Uryasev (2000) give an approximation of G_β(w, α) for which Monte
Carlo simulation can be used. They write G_β(w, α) as

\tilde{G}_{\beta}(w, \alpha) = \alpha + \frac{1}{S(1-\beta)} \sum_{k=1}^{S} \left[g(w, \mathbf{u}^{[k]}) - \alpha\right]^{+}, \qquad (27)

where u^{[k]} is the k-th sample vector (again, here we give the copula version,
where u^{[k]} = F(x^{[k]})). Thus, using equation (27), we can express problem (26)
for evaluation using Monte Carlo simulations:

\min_{(w, \alpha, \theta) \in W \times \mathbb{R} \times \mathbb{R}} \left\{ \theta : \alpha + \frac{1}{S_i(1-\beta)} \sum_{k=1}^{S_i} \left[g(w, \mathbf{u}_i^{[k]}) - \alpha\right]^{+} \le \theta, \; i = 1, 2, \ldots, l \right\}, \qquad (28)

where u_i^{[k]} is the k-th sample arising from copula C_i of the mixture copula
(equation (17)), and S_i is the size of the sample that arises from C_i.
Following Zhu and Fukushima (2009), we write the minimization problem as

\min \; \theta \qquad (29)
\text{s.t.} \quad w \in W, \; v \in \mathbb{R}^m, \; \alpha \in \mathbb{R}, \; \theta \in \mathbb{R}, \qquad (30)
\alpha + \frac{1}{S_i(1-\beta)} \mathbf{1}_i^{T} v^{i} \le \theta, \quad i = 1, \ldots, l, \qquad (31)
v_k^{i} \ge g(w, \mathbf{u}_i^{[k]}) - \alpha, \quad k = 1, \ldots, S_i, \; i = 1, \ldots, l, \qquad (32)
v_k^{i} \ge 0, \quad k = 1, \ldots, S_i, \; i = 1, \ldots, l, \qquad (33)

where v = (v^1; ...; v^l) ∈ ℝ^m with m = \sum_{i=1}^{l} S_i and
\mathbf{1}_i = (1; ...; 1) ∈ ℝ^{S_i}.
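A hedged sketch of the linear program (29)-(33) with SciPy's linprog is given below, for a loss function that is linear in w (as in the portfolio application of Section 4, where g(w, u) = -w^T x with x = F^{-1}(u)). A budget constraint e^T w = 1 is assumed, the return-floor constraint introduced later is omitted for brevity, and all names are ours; the paper's own implementation uses Yalmip and CPLEX.

# scen[i]: (S_i, n) array of scenario vectors x = F^{-1}(u_i^{[k]}) from copula C_i.
import numpy as np
from scipy.optimize import linprog

def wcvar_portfolio(scen, beta=0.95, w_lo=0.0, w_hi=1.0):
    n = scen[0].shape[1]
    sizes = [s.shape[0] for s in scen]
    m = sum(sizes)                         # total number of auxiliary v-variables
    N = n + 2 + m                          # decision vector z = (w, alpha, theta, v)
    c = np.zeros(N)
    c[n + 1] = 1.0                         # objective (29): minimize theta

    A_ub, b_ub, off = [], [], n + 2
    for i, S in enumerate(sizes):
        row = np.zeros(N)                  # (31): alpha + 1^T v^i / (S_i(1-beta)) <= theta
        row[n], row[n + 1] = 1.0, -1.0
        row[off:off + S] = 1.0 / (S * (1.0 - beta))
        A_ub.append(row); b_ub.append(0.0)
        for k in range(S):                 # (32): -x_k^T w - alpha - v_k^i <= 0
            row = np.zeros(N)
            row[:n] = -scen[i][k]
            row[n], row[off + k] = -1.0, -1.0
            A_ub.append(row); b_ub.append(0.0)
        off += S

    A_eq = np.zeros((1, N))
    A_eq[0, :n] = 1.0                      # assumed budget constraint: e^T w = 1
    bounds = [(w_lo, w_hi)] * n + [(None, None)] * 2 + [(0.0, None)] * m  # (33)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=[1.0], bounds=bounds, method="highs")
    return res.x[:n], res.x[n], res.x[n + 1]   # weights, VaR (alpha*), WCVaR (theta*)

rng = np.random.default_rng(2)
scen = [rng.normal(0.0005, 0.01, size=(1000, 4)) for _ in range(3)]
print(wcvar_portfolio(scen))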
4 Portfolio management under a worst case copula scenario
In this section we demonstrate how the theory in Section 3 can be used for the
optimization of a portfolio of financial assets. Financial assets can be described
by distributions and their risk can be measured using CVaR.
We consider a portfolio of n financial assets A_1, ..., A_n. We assume that the
returns of the assets are log-normally distributed and consistent with the Black
and Scholes (1973) representation, given by
x_i = \frac{dA_i(t)}{A_i(t)} = \mu_i\, dt + \sigma_i\, dB_i(t), \qquad (34)

where μ_i and σ_i are the mean and the standard deviation of the random variable
x_i, and dB_i(t) denotes a Wiener process. We have the return vector
x = (x_1, ..., x_n) ∈ ℝ^n and u = (u_1, ..., u_n) = (Φ_1(x_1), ..., Φ_n(x_n)). We
also define the decision vector w = (w_1, ..., w_n) ∈ ℝ^n, which denotes the amount
of investment in each financial asset in the portfolio. We also define the loss
function

g(w, \mathbf{u}) = -w^{T} \Phi^{-1}(\mathbf{u}), \qquad (35)

where Φ^{-1}(u) = (Φ_1^{-1}(u_1), ..., Φ_n^{-1}(u_n)). Hence, the loss function is
the negative of the portfolio return w^T Φ^{-1}(u).
We have selected above the univariate distributions that describe the asset
returns. We now consider the selection of the copula that describes the dependency
between these returns. We first solve a simple optimization problem using the
Gaussian copula. Consider the problem

\min_{w \in W} \mathrm{CVaR}(w), \qquad (36)

where W defines the domain of w as described by its constraints and u ∼ C^{Ga} (see
equation (12)). This is equivalent to equations (29)-(33) when l = 1.
The advantage of problem (36) is that it employs the Gaussian copula. The latter is
the most commonly used copula in practice for characterizing multivariate
dependencies. It is also easy to use. Furthermore, the Gaussian copula is a
desirable reference point for assessing portfolio performance. There are, however,
drawbacks to the Gaussian copula. The first is its symmetry. Studies show that
assets have stronger negative comovements than positive (Ang and Chen, 2002; Hu,
2002, 2006). A second disadvantage is the linear correlation that can only capture
linear dependencies between assets, which may not be realistic (Artzner et al.,
1998; Szegö, 2005). Both of these disadvantages may lead to bad performance in the
presence of market shocks.
We aim to compensate for some of these disadvantages by using a mixture copula. In
problem (37), the mixture set C contains the copulas from Section 2.1. The aim is
to cover all types of dependencies and thereby use a robust measure of dependence
(Kendall's τ, equation (4)). This robustness is further augmented by the worst-case
approach. Thus, the second problem we solve is

\min_{w \in W} \mathrm{WCVaR}(w), \qquad (37)

where W is defined as above, u ∼ c, and c ∈ C is a set of copulas (see equation
(15)).
For further comparison we introduce a third and final model. The third model is an
extension of Markowitz (1952). The Markowitz (1952) model is given by

\min_{\theta \in \mathbb{R},\, w \in W} \left\{ \theta : w^{T} \Sigma w \le \theta \right\}, \qquad (38)

where Σ, the covariance matrix, is symmetric positive semidefinite. Problem (38)
constitutes a quadratic convex problem. The extended model takes into consideration
the uncertainty that governs Σ. The solution is provided within a worst case
framework where Σ is unknown, but is assumed to belong to a set S_+, which
comprises symmetric positive semidefinite matrices (Kontoghiorghes et al., 2002;
Rustem et al., 2000). Worst Case Markowitz (WCM) has the form

\min_{\theta \in \mathbb{R},\, w \in W} \sup_{\Sigma \in S_+} \left\{ \theta : w^{T} \Sigma w \le \theta \right\}. \qquad (39)

Given a discrete set S_+, problem (39) can be reformulated as the convex
optimization problem

\min_{\theta \in \mathbb{R},\, w \in W} \left\{ \theta : w^{T} \Sigma_i w \le \theta, \; \forall \Sigma_i \in S_+ \right\}. \qquad (40)

WCM provides us with a nice comparison model, since it is an extended, robust
(similarly to (37)) version of the well-known classic Markowitz model, and uses Σ
to describe the dependence structure, in a similar fashion to problem (36).
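As an illustration, problem (40) can be stated in a few lines with the cvxpy modelling package; this is our choice for the sketch, whereas the implementation described below uses Yalmip and CPLEX in Matlab.

# Hedged sketch of Worst Case Markowitz (40): minimize the worst-case variance
# over a discrete set of rival covariance matrices.
import cvxpy as cp
import numpy as np

def wcm_portfolio(sigmas, w_lo=0.0, w_hi=1.0):
    n = sigmas[0].shape[0]
    w, t = cp.Variable(n), cp.Variable()
    cons = [cp.sum(w) == 1, w >= w_lo, w <= w_hi]
    cons += [cp.quad_form(w, S) <= t for S in sigmas]   # one constraint per Sigma_i
    cp.Problem(cp.Minimize(t), cons).solve()
    return w.value

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
sigma1 = A @ A.T / 10.0                                  # an arbitrary PSD covariance
sigmas = [sigma1, np.diag(np.diag(sigma1))]              # e.g. Sigma_1 and its diagonal
print(wcm_portfolio(sigmas))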
Problems (36) and (37) are solved using equations (29)-(33), and WCM using (40).
Also, for all three problems we assume W to be convex and, without loss of
generality (and for simplicity), we define it by the following constraints:

e^{T} w = 1, \qquad (41)

where e is the vector with all-unity elements. Furthermore, to assure portfolio
diversification, the additional constraints

\underline{w} \le w \le \overline{w} \qquad (42)

can be imposed, for example in the case of a portfolio comprising only long
positions. In that instance we may require \underline{w}, \overline{w} ∈ [0.1, 1]^n,
where \underline{w}, \overline{w} are the lower and upper bounds respectively. This
way we prevent the portfolio from consisting of a single asset position. Of course,
such constraints are not compulsory and vary according to the investor's
preference.
Finally, since we optimize an asset portfolio we are interested in its performance.
Hence, it is often desirable to impose an additional performance restriction in
terms of the minimum expected return μ_r:

E(w^{T} F^{-1}(\mathbf{u})) \ge \mu_r. \qquad (43)
4.1 Numerical examples
We use the following seven indices: Nikkei225, FTSE100, Nasdaq, DAX30, Sensex,
Bovespa and a Gold index. These represent six different stock exchange markets from
different parts of the world and one commodity index. The markets corresponding to
the indices are Japan, UK, USA, Germany, India and Brazil. These markets, with the
inclusion of the commodity, are intended to lead to a diversified portfolio.
The data used covers the period November 1998 - July 2011, during which we look at
the daily returns. This time line includes the dot-com bubble, the South American
crisis and the Asian crisis. These three events took place between 1998 and 2002,
and they had a large negative impact on the world markets. The data also include
the 2008 Global Recession. Both periods of crisis can be observed in Figure 1.
In both numerical examples the data between 1998 and 2003 (1200 time steps) is used
for the calibration of the three problems, (36), (37) and (40). This way we include
data from a period of crisis. We expect to show that the risk and the dependencies
between assets during a crisis period can be assessed more efficiently with the use
of WCVaR as opposed to using only Gaussian copula CVaR. The Worst Case Portfolio
(WCP), given by (37), should perform more robustly than the Gaussian Portfolio
(GP), given by (36). Overall, and particularly during a period of crisis, we expect
that the WCP will perform better in the presence of downside shocks. Also, it will
be interesting to see how the Worst Case Markowitz Portfolio (WCMP), given by (40),
fares against the other two portfolios, since it combines some of their individual
traits.

Figure 1. Seven indices (NIKKEI225, FTSE100, NASDAQ, DAX30, SENSEX, BOVESPA, GOLD)
from 1998 to 2011. All the indices are normalized to 1 in July 2003.
Thus, the critical test is the performance of all three portfolios during the 2008 crisis.
4.1.1 Static portfolio
In our first example, we consider a static portfolio in which the weights of the
portfolio are calculated only once. A daily rebalancing is performed using the same
weights throughout the entire lifespan of the portfolio. For the computation of the
weights we calibrate our copulas using the period between 1998 and 2003 and then
solve problems (36) and (37). To create the set S_+ for WCM we calculate six Σ. In
more detail, Σ_1 is calculated from the period between 1998 and 2003. Σ_2 is the
diagonal of Σ_1, with the off-diagonal elements all equal to zero to assume
independence. Σ_3 is calculated through the relation ρ = sin(0.5πτ) (Smillie, 2007)
(which holds under the normality assumption, but in our case gives a matrix
different from Σ_1). Σ_4 is calculated from a smaller sample concentrated around
the peak of the 2000 crisis for more extreme effects. Finally, Σ_5 and Σ_6 are
calculated from different sample periods prior to 2003. The means, variances, and
the covariance matrix (Σ_1) of the seven assets as estimated from the period
between 1998 and 2003 are given in Table 1. The former two are used for the
univariate distributions of the assets as defined by equation (34). The univariate
distributions, together with the four copulas of Section 2, are used to run Monte
Carlo simulations in order to provide the inputs u_i^{[k]} needed to solve
equations (29)-(33). Problem (40) does not require any simulations because it is
not a stochastic problem.
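The sketch below illustrates one way to build the first three rival matrices described above. It is our reconstruction under stated assumptions, not the authors' code, and the τ-implied matrix Σ_3 may need a projection onto the positive semidefinite cone in practice.

# Illustrative construction of part of the rival set S_+: Sigma_2 zeroes the
# off-diagonal of Sigma_1; Sigma_3 rebuilds a correlation matrix from pairwise
# Kendall's tau via rho = sin(0.5 pi tau) and rescales by the volatilities.
import numpy as np
from scipy.stats import kendalltau

def rival_covariances(returns):
    sigma1 = np.cov(returns, rowvar=False)
    sigma2 = np.diag(np.diag(sigma1))               # independence scenario
    n = returns.shape[1]
    rho = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            tau = kendalltau(returns[:, i], returns[:, j])[0]
            rho[i, j] = rho[j, i] = np.sin(0.5 * np.pi * tau)
    vol = np.sqrt(np.diag(sigma1))
    sigma3 = rho * np.outer(vol, vol)               # back to covariance scale
    return [sigma1, sigma2, sigma3]                 # sigma3 may need a PSD repair

rng = np.random.default_rng(4)
print([S.shape for S in rival_covariances(rng.standard_normal((600, 7)))])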
Table 1. The mean and variance of the seven assets' daily returns between November
1998 and June 2003. All entries ×10^-3.

            Mean   Variance  Covariance
Nikkei225   -0.41   0.21
FTSE100     -0.32   0.12      0.04
Nasdaq      -0.22   0.45      0.04  0.12
DAX30       -0.38   0.23      0.04  0.19  0.24
Sensex       0.18   0.31      0.05  0.03  0.02  0.03
Bovespa      0.36   1.14      0.02  0.09  0.27  0.14  0.02
Gold         0.14   0.06      0.01 -0.02 -0.03 -0.03 -0.00 -0.02
                              Nikkei225 FTSE100 Nasdaq DAX30 Sensex Bovespa
Simulating data from the Gaussian copula is straightforward with the use of the
Cholesky decomposition of the correlation matrix. Simulating data from other
copulas can, in general, be a difficult task, which makes them less attractive. The
simulation of data from the three Archimedean copulas uses the algorithms found in
Melchiori (2006). Melchiori (2006) provides a summary of the results from Devroye
(1986); Marshall and Olkin (1988); Nolan (2012).
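For illustration, the Clayton case of the Marshall and Olkin (1988) frailty algorithm can be sketched as follows; the Gumbel and Frank cases replace the gamma frailty with positive stable and logarithmic-series variates respectively (see Melchiori (2006)). The marginal parameters at the end are placeholders, not the calibrated values of Table 1.

# Hedged sketch: Marshall-Olkin sampling of the Clayton n-copula. The frailty
# V ~ Gamma(1/theta, 1) has Laplace transform (1 + t)^(-1/theta), which is the
# inverse generator of the Clayton copula.
import numpy as np
from scipy.stats import norm

def sample_clayton(theta, n_assets, n_samples, rng):
    v = rng.gamma(1.0 / theta, 1.0, size=(n_samples, 1))
    e = rng.exponential(1.0, size=(n_samples, n_assets))
    return (1.0 + e / v) ** (-1.0 / theta)    # uniform margins, Clayton dependence

rng = np.random.default_rng(5)
u = sample_clayton(theta=2.0, n_assets=7, n_samples=10_000, rng=rng)
x = norm.ppf(u) * 0.01 + 0.0005               # to returns via assumed normal margins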
To solve problems (36), (37) and (40), we also have to define the constraint set W.
We use as a starting point equations (41)-(43). We specify \underline{w} ≥ 0. This
implies that we only allow long positions in the portfolio. The upper bound
\overline{w} in equation (42) is implied by the budget constraint (41).
The optimization problems are solved using the Yalmip Matlab package, together with
the CPLEX solver. The PC used for the implementation of the numerical examples is
an Intel Core 2 Duo, 2.8 GHz with 4 GB memory. For each of the four copulas, we run
10000 simulations for each of the seven assets. Simulating and solving all three
problems takes less than a minute.
All three problems are solved for μ_r = 0 to 0.00025, where μ_r is the required
minimum daily return (see (43)).

Figure 2. Static portfolios: Out of Sample portfolio performance for μ_r = 0 (see
equation (43)). Top panel: notional value of the normal distribution, worst case
and worst case Markowitz portfolios. Bottom panel: WCP - GP level difference.
The results are presented in Figure 2, Table 2 and Table 3. Table 2 illustrates the
performance of the three portfolios (GP, WCP and WCMP), both In Sample (November
1998 - June 2003) and Out of Sample (June 2003 - July 2011). The In Sample
performance shows that the lower bound μ_r is always satisfied by at least one of
the three portfolios in the form of AR. Also, the overall (in and out of sample)
performance of the WCP always has higher volatility (Vol) and CVaR. This is to be
expected, since the WCP has to satisfy more constraints in the optimization
problem, i.e., the same constraints that exist for the Gaussian copula in the GP
have to be satisfied for all four copulas used in the WCP. Hence, the CVaR obtained
for the WCP is the CVaR of the worst case copula, which is equivalent to the
requirement imposed by inequality (25). These copulas also have fat tails, and
hence higher Vol. On the other hand, WCMP, as a variance minimization problem, has
the smallest Vol and CVaR of the three, at least for μ_r = 0 to 0.00015. This
advantage stops when higher returns are required.
We expect WCP and WCMP to perform better than the GP, at least under the worst case
scenarios. This should apply throughout the Out of Sample testing period.

Table 2. Comparison of the performance of the Gaussian optimal (I), Worst Case
optimal (II) and Worst Case Markowitz optimal (III) portfolios.

                        In Sample                    Out of Sample
 μ_r          AR^1    Vol^2   CVaR_0.95   AR^1    Vol^2   CVaR_0.95   MD1^3   MD2^4   TR^5
 (10^-3)     (10^-3) (10^-3)  (10^-3)    (10^-3) (10^-3)  (10^-3)     (%)     (%)     (%)
 0.00   I     0.00    6.66    14.4        0.56    9.00    22.1       -7.44   -31.8    198
        II    0.13    9.29    19.2        0.70   12.2     29.1       -7.23   -28.9    277
        III   0.01    6.50    14.2        0.55    8.89    21.9       -7.67   -35.9    191
 0.10   I     0.10    7.05    15.1        0.66    9.84    23.9       -7.11   -31.4    263
        II    0.13    9.29    19.2        0.70   12.2     29.1       -7.23   -28.9    277
        III   0.10    6.79    14.7        0.64    9.46    23.1       -7.14   -35.9    254
 0.15   I     0.15    7.56    16.2        0.71   10.4     25.3       -7.02   -31.6    300
        II    0.15    9.04    18.7        0.71   12.0     28.8       -7.07   -29.6    286
        III   0.15    7.18    15.6        0.70   10.0     24.2       -6.95   -35.1    294
 0.20   I     0.20    7.95    16.9        0.73   10.2     24.9       -6.59   -39.7    319
        II    0.20    8.80    18.1        0.72   11.1     26.8       -6.89   -35.4    300
        III   0.20    7.99    17.2        0.73   10.2     24.9       -6.52   -40.6    322
 0.25   I     0.25   11.3     24.0        0.74   11.8     28.5       -7.68   -49.8    313
        II    0.25   11.8     24.2        0.72   12.0     28.4       -8.07   -44.7    296
        III   0.25   11.8     25.9        0.75   12.4     30.0       -7.48   -52.3    314

1. AR: average daily return over the period.
2. Vol: the volatility, defined by the standard deviation.
3. MD1: maximum draw-down; the worst return between two consecutive days.
4. MD2: maximum draw-down; the worst return between two days within a period of at
most 6 months.
5. TR: the total return from the beginning to the end of the period.
Hence, we focus on the Out of Sample period. Table 3 shows that the performance of
the WCP up to January 2008 (the beginning of the crisis) was similar to or worse
than the GP, with the WCMP consistently having the highest returns. The better
performance of the WCMP may suggest that assuming some uncertainty in the
estimation of the covariance or correlation matrix may improve the assessment of
risk, at least during periods of prosperity. As μ_r increases, the difference
between the returns of the WCP and the other two portfolios increases. This can
also be verified in Figure 2 for the case of μ_r = 0. It is most evident in the
bottom panel of Figure 2, where the difference between the notional values of the
WCP and GP is displayed. The difference up to January 2008 is very close to zero,
after which it starts to increase, with the WCP

outperforming the GP.
Table 3. Average daily return (×10^-3) of the Gaussian optimal, Worst Case optimal
and Worst Case Markowitz optimal portfolios up to January 2008.

 μ_r (10^-3)   0.00   0.10   0.15   0.20   0.25
 GP            0.71   0.82   0.88   1.02   1.20
 WCP           0.73   0.73   0.76   0.90   1.03
 WCM           0.76   0.87   0.92   1.05   1.28
This behaviour changes from 2008 onwards. In Table 2 we see that the performance of
the GP with respect to Average Return (AR) and Total Return (TR) is worse than or
similar to the WCP for all μ_r tested. The reason is the robustness of the
performance of the WCP during the 2008 crisis. It is surprising that the WCMP does
not exhibit similar robustness. On the contrary, its behaviour is similar to the
GP. Assessing the risk between the assets through the use of the covariance matrix,
similarly to the correlation of problem (36), does not carry any information about
the characteristics of the tail of the distribution of each asset. Such a model
proves inadequate in emulating risk during periods of extreme movements. This is
evident from the Maximum Drawdown (MD2), which measures the greatest loss within a
6 month period. For the given dataset this occurs at the time of the crisis. We can
see that the WCP MD2 is better for all μ_r. Another observation from Table 2 is
that the higher the requirement for μ_r is, the more similar the performance of the
three portfolios becomes with respect to AR and TR. This is the result of our
demand for higher returns, which forces all three portfolios to select high return
assets. Thus, the number of assets that can be selected becomes smaller and hence
the portfolios become similar. This becomes more apparent in the case of
μ_r = 0.00025, in which case, in addition to the AR and TR, the Vol and CVaR are
much closer.
With the above in mind, we can conclude that an optimistic portfolio, like the GP,
is more suitable during times of prosperity. Also, the addition of uncertainty with
respect to the values of the covariance matrix, as in the WCMP, may improve our
assessment of risk during prosperity even further. In contrast, a more robust
portfolio like the WCP proves to be beneficial during a crisis period.

4.1.2 Dynamic portfolio
We now consider the case where the optimal weights of the assets are recomputed. In
this more dynamic portfolio, the weights of the assets are recalibrated on a
monthly basis. At every step, we extend the in-sample calibration window by a
month, then we solve problems (36) and (37) to obtain the weights. For problem
(40), to obtain the weights, we use the set S_+, as described in the previous
section, to which we add Σ_7. Σ_7 is evaluated from the expanding sample. This way
we make sure that new information is taken into account. Then, for all three
portfolios, we keep the same weights for the rest of that month. During the month,
daily rebalancing is performed using the constant weights. We do not adopt a moving
calibration window, since we do not want to lose the information from the old
crises, in our case the crises between 1998 and 2003.
The expanding calibration window causes the estimated values of μ_i, used in
equation (34), to change every month. Thus, we have to make sure that the
constraint (43) remains within the feasible set of w. To do so, we replace
constraint (43) with the dynamic constraint

E(w^{T} F^{-1}(\mathbf{u})) \ge \max[\mathit{wEq}^{T} \boldsymbol{\mu},\; 0], \qquad (44)

where μ = (μ_1, ..., μ_n) is the vector of the asset return means as calculated
over the calibration period and wEq = (1/n, ..., 1/n), i.e. all the weights are
equal. The rest of the constraints remain as in Section 4.1.1.
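A small sketch of the dynamic floor in constraint (44), recomputed each month from the expanding calibration window, is given below; the names and data are illustrative.

# max[wEq^T mu, 0] with equal weights wEq = (1/n, ..., 1/n).
import numpy as np

def dynamic_floor(history):
    # history: (T, n) array of daily returns seen so far (expanding window)
    mu = history.mean(axis=0)
    w_eq = np.full(history.shape[1], 1.0 / history.shape[1])
    return max(w_eq @ mu, 0.0)

rng = np.random.default_rng(6)
history = rng.normal(0.0002, 0.01, size=(1500, 7))
print(dynamic_floor(history))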
For comparison purposes, we also include a simple portfolio not based on
optimization. The Equally Weighted Portfolio (EWP) has equal positions in all
assets, i.e. we always use wEq as the weights of the portfolio.
The out-of-sample performance of all four portfolios (GP, WCP, WCMP and EWP) is
shown in Figure 3 and Table 4. The observations are very similar to those of
Section 4.1.1. The CVaR and Vol of the WCP are higher than those of the GP, but the
MD2, AR and TR of the WCP are significantly higher. The WCMP exhibits similar
behaviour to the GP. Also, in the case of the EWP, the Vol and the CVaR lie between
those of the other two portfolios. At the same time its performance is worse with
respect to the AR, TR and MD2, i.e., the EWP sustains the largest losses during the
2008 Global Recession.
Figure 3. Dynamic portfolio performance with μ_r = max[0, wEq^T μ] (see equation
(44)). Top panel: notional value of the normal distribution, worst case, worst case
Markowitz and equal weights portfolios. Bottom panel: WCP - GP level difference.
5 Summary and Conclusions
In this paper we demonstrated one way of using copulas in a portfolio optimization
framework where the worst-case copula is considered. In particular, we focused on
the derivation of CVaR and WCVaR for copulas. In the case of WCVaR we showed how a
mixture copula can be used in order to obtain a convex optimization problem.
By introducing copulas into the CVaR framework we allow more flexibility in the
selection of the distribution. The most commonly used distribution for modelling
multivariate dependencies is the Gaussian copula. This is due to the simplicity of
its construction and the availability of efficient methods of simulation. Its
disadvantages are its symmetry and its ability to describe only linear
dependencies, via the use of linear correlation in its structure. However,
symmetric behaviour and linear
Table 4. Comparison of the performance of the Gaussian optimal, Worst Case optimal,
Worst Case Markowitz optimal and Equally Weighted portfolios.

                       Out of Sample
        AR^1     Vol^2    CVaR_0.95   MD1^3   MD2^4   TR^5
        (10^-3)  (10^-3)  (10^-3)     (%)     (%)     (%)
 GP      0.53     8.76    21.5       -7.49   -33.0    174
 WCP     0.71    12.3     29.2       -7.20   -28.8    274
 WCMP    0.51     8.77    21.7       -7.60   -36.2    168
 EWP     0.45     9.73    23.9       -6.52   -45.4    130

1. AR: average daily return over the period.
2. Vol: the volatility, defined by the standard deviation.
3. MD1: maximum draw-down; the worst return between two consecutive days.
4. MD2: maximum draw-down; the worst return between two days within a period of at
most 6 months.
5. TR: the total return from the beginning to the end of the period.
dependencies among assets are unrealistic (Ang and Chen, 2002; Artzner et al.,
1998; Hu, 2002, 2006). We discussed alternative distribution functions, in the form
of copulas, that can exhibit asymmetric behaviour and utilize monotonic measures of
dependence in their formulation. These are the three Archimedean copulas of Section
2.1, which are also easy to simulate from using the algorithms given by Melchiori
(2006).
The advantage of using non-symmetric distribution functions was demonstrated in the
numerical examples of Section 4.1. In Section 4.1 we provided a comparison between
three competitive models: WCVaR, Gaussian copula CVaR and WCM. We showed that for
both static and dynamic strategies, for low minimum expected portfolio return, the
WCP outperforms both the GP and the WCMP in every statistic except Vol and CVaR. In
particular, during the 2008 crisis the WCP performed more robustly than the other
two portfolios, and that was true even for a high minimum expected return
requirement. This shows that the assumption of symmetry and the use of the
correlation or covariance matrix as a measure of dependence provide insufficient
information for assessing the risk during periods of crisis.
We compared the performance of the dynamic portfolios with that of the EWP. The EWP
performance shows that following a naive approach in which risk is neglected is not
necessarily the correct way forward. Although the EWP performed relatively
adequately soon after 2002, it suffered the biggest loss during the 2008 crisis. As
a result, the EWP became the worst performing portfolio among all portfolios in the
numerical examples.
It seems reasonable to conclude that when optimizing a portfolio, the associated
risk needs to be taken into account. All possible dependencies have to be
considered in order to obtain robust results. One way of achieving this is through
copulas and mixture copulas, which allow dependency systems with greater
flexibility in their description than a single distribution. This flexibility
allows non-symmetric behaviour and fat tails in the design of the system.
References
A. Ang and J. Chen. Asymmetric correlations of equity portfolios. Journal of Financial Economics, 63:443-494, 2002.
P. Artzner, F. Delbaen, J.-M. Eber, and D. Heath. Coherent measures of risk. Mathematical Finance, 9:203-228, 1998.
F. Black and M. Scholes. The pricing of options and corporate liabilities. The Journal of Political Economy, 81(3):637-654, 1973.
J. C. C. Chan and D. P. Kroese. Efficient estimation of large portfolio loss probabilities in t-copula models. European Journal of Operational Research, 205(2):361-367, 2010.
U. Cherubini, E. Luciano, and W. Vecchiato. Copula Methods in Finance. Finance Series. John Wiley and Sons, 2004.
A. D. Clemente and C. Romano. Measuring and optimizing portfolio credit risk: a copula-based approach. Economic Notes, 33:325-357, 2004.
L. Devroye. Non-Uniform Random Variate Generation. Springer-Verlag, 1986.
P. Embrechts, F. Lindskog, and A. McNeil. Modelling dependence with copulas and applications to risk management. Department of Mathematics, ETH Zürich, Switzerland, 2001.
K. Fan. Minimax theorems. Proceedings of the National Academy of Sciences, 39(1):42-47, 1953.
C. Genest and L. P. Rivest. Statistical inference procedures for bivariate Archimedean copulas. Journal of the American Statistical Association, 88(423):1034-1043, 1993.
V. Hasselblad. Estimation of parameters for a mixture of normal distributions. Technometrics, 8(3):431-444, 1966.
V. Hasselblad. Estimation of finite mixtures of distributions from the exponential family. Journal of the American Statistical Association, 64(328):1459-1471, 1969.
L. Hu. Dependence patterns across financial markets: methods and evidence. Department of Economics, Ohio State University, 2002.
L. Hu. Dependence patterns across financial markets: a mixed copula approach. Applied Financial Economics, 16:717-729, 2006.
E. J. Kontoghiorghes, B. Rustem, and S. Siokos. Computational Methods in Decision-Making, Economics and Finance. Kluwer Academic Publishers, 2002.
C. H. Ling. Representation of associative functions. Publicationes Mathematicae Debrecen, 12:189-212, 1965.
H. Markowitz. Portfolio selection. The Journal of Finance, 7(1):77-91, 1952.
A. W. Marshall and I. Olkin. Families of multivariate distributions. Journal of the American Statistical Association, 83(403):834-841, 1988.
M. R. Melchiori. Tools for sampling multivariate Archimedean copulas. Technical report, Universidad Nacional del Litoral, Santa Fe, Argentina, 2006.
R. B. Nelsen. An Introduction to Copulas. Springer Series in Statistics. Springer, 2nd edition, 2006.
J. P. Nolan. Stable Distributions - Models for Heavy Tailed Data. Birkhäuser, Boston, 2012. In progress, Chapter 1 online at academic2.american.edu/jpnolan.
R. T. Rockafellar and S. Uryasev. Optimization of conditional value-at-risk. Journal of Risk, 2(3), 2000.
R. T. Rockafellar and S. Uryasev. Conditional value-at-risk for general loss distributions. Journal of Banking and Finance, 26(7):1443-1471, 2002.
C. Romano. Applying copula function to risk management. Part of the author's Ph.D. thesis "Extreme Value Theory and coherent risk measures: applications to risk management".
B. Rustem, R. G. Becker, and W. Marty. Robust min-max portfolio strategies for rival forecast and risk scenarios. Journal of Economic Dynamics & Control, 24:1591-1621, 2000.
B. Schweizer and A. Sklar. Probabilistic Metric Spaces. North-Holland, 1983.
B. Schweizer and E. F. Wolff. On nonparametric measures of dependence for random variables. The Annals of Statistics, 9(4):879-885, 1981.
A. Sklar. Fonctions de répartition à n dimensions et leurs marges. Publ. Inst. Statist. Univ. Paris, 8:229-231, 1959.
A. Smillie. New copula models in quantitative finance. PhD thesis, Imperial College of Science, Technology and Medicine, University of London, 2007.
G. Szegö. Measures of risk. European Journal of Operational Research, 163:5-19, 2005.
E. F. Wolff. N-dimensional measures of dependence. Stochastica, 4(3):175-188, 1980.
S. Zhu and M. Fukushima. Worst-case conditional value-at-risk with application to robust portfolio management. Operations Research, 57(5):1155-1168, 2009.
