
RiskMetrics™ Monitor

J.P. Morgan/Reuters

Second quarter 1996


New York
August 13, 1996

RiskMetrics News

• J.P. Morgan and Reuters will collaborate to develop a more powerful version of RiskMetrics. This Monitor is the first edition under the new collaboration between the two firms.
• RiskMetrics News also introduces FourFifteen, an Excel-based VaR calculator and report generator developed and distributed by J.P. Morgan.
• Some new additions to the RiskMetrics software vendor list.
• We review changes to the government yield curve compounding method used for volatility estimation.

Morgan Guaranty Trust Company


Risk Management Advisory
Jacques Longerstaey
(1-212) 648-4936
riskmetrics@jpmorgan.com

Reuters Ltd
International Marketing
Martin Spencer
(44-171) 542-3260
martin.spencer@reuters.com

Research, Development, and Applications


This quarter, we address the following subjects:
An improved methodology for measuring VaR

Most Value-at-Risk (VaR) methodologies show their limitations when dealing with events that
are relatively infrequent. These shortcomings appear when risk managers select confidence intervals above 99%, and this applies not only to variance-covariance VaR techniques such as
RiskMetrics, but also to historical simulation.
In the case of RiskMetrics, the lack of coverage in the tail of the distributions has been attributed to the assumption that returns follow a conditional normal distribution. Since the distributions of many observed financial return series have tails that are fatter than those implied by
conditional normality, risk managers may underestimate the effective level of risk. The purpose
of this article is to describe a RiskMetrics VaR methodology that allows for a more realistic
model of financial return tail distributions.
A Value-at-Risk analysis of currency exposures (page 26)

In this article, we compute the VaR of a portfolio of foreign exchange flows that consist of exposures to OECD and emerging market currencies, most of which are not yet covered by the RiskMetrics data sets. In doing so, we underscore the limitations of standard VaR practices when
underlying market return distributions deviate significantly from normality.
Estimating index tracking error for equity portfolios (page 34)

In our RiskMetrics Technical Document, we outlined a single-index equity VaR approach to estimate the systematic market risk of equity portfolios. In this article, we discuss the principal variables influencing the process of portfolio diversification, and suggest an approach to quantifying expected tracking error to market indices.


RiskMetrics News
Jacques Longerstaey
Morgan Guaranty Trust Company
Risk Management Advisory
(1-212) 648-4936
riskmetrics@jpmorgan.com

Martin Spencer
Reuters Ltd
International Marketing
(44-171) 542-3260
martin.spencer@reuters.com

J.P. Morgan and Reuters team up on RiskMetrics


J.P. Morgan and Reuters have recently announced their decision to collaborate in the development of a
more powerful and sophisticated version of RiskMetrics.
Since the launch of RiskMetrics in October 1994, we have received numerous requests to add new
products, instruments, and markets to the daily volatility and correlation data sets. We have also perceived the need in the market for a more flexible VaR data tool than the standard matrices that are currently distributed over the Internet.
The new partnership with Reuters, which will be based on the precept that both firms will focus on their
respective strengths, will help us achieve these things:

Methodology
J.P. Morgan will continue to develop the RiskMetrics set of VaR methodologies and publish them in
the quarterly RiskMetrics Monitor and in the annual RiskMetrics Technical Document.
RiskMetrics data sets
Reuters will take over the responsibility for data sourcing as well as production and delivery of the risk
data sets.
The current RiskMetrics data sets will continue to be available on the Internet and will be further improved as a benchmark tool designed to broaden the understanding of the principles of market risk measurement.
When J.P. Morgan first launched RiskMetrics in October 1994, the objective was to go for broad market coverage initially, and follow up with more granularity in terms of the markets and instruments covered. This, over time, would reduce the need for proxies and would provide additional data to measure more accurately the risk associated with non-linear instruments.
The partnership will address these new markets and products and will also introduce a new customizable service, which will be available over the ReutersWeb service. The customizable RiskMetrics approach will give risk managers the ability to scale data to meet the needs of their individual trading
profiles. Its capabilities will range from providing customized covariance matrices needed to run VaR
calculations, to supplying data for historical simulation and stress-testing scenarios.
More details on these plans will be discussed in later editions of the RiskMetrics Monitor.
Systems
Both J.P. Morgan and Reuters, through its Sailfish subsidiary, have developed client-site RiskMetrics
VaR applications. These products, together with the expanding suite of third-party applications, will
continue to provide RiskMetrics implementations.


J.P. Morgan launches FourFifteen


On April 15, 1996, J.P. Morgan launched FourFifteen, an Excel-based VaR calculator and report
generator that uses the RiskMetrics methodology and data sets.
FourFifteen is a practical tool for measuring the market risk in complex portfolios of financial instruments, and runs on both Windows and Macintosh platforms.
J.P. Morgan designed FourFifteen around the unique database of volatilities and correlations, updated daily, that is needed to generate key market risk information. FourFifteen performs VaR analyses of portfolios containing financial products including fixed income, foreign exchange, equities, and their derivatives in 23 currencies. Users can specify the base currency, horizon, and risk threshold.
FourFifteen also provides an array of standard reports to present risk information in a clear and useful manner. Users with more specific reporting needs can create customized reports to aid in the identification of their key sources of risk.
Why FourFifteen? FourFifteen is named after J.P. Morgan's market risk report produced at 4:15
p.m. each day. The 4:15 Report, a single sheet of paper, summarizes the daily earnings at risk for
J.P. Morgan worldwide.
FourFifteen allows for a more informed perspective on a wide range of risk management issues in
the areas of trading management, benchmark performance evaluation, asset/liability management, resource allocation, and regulatory reporting. It is a useful aid to risk analysis and decision-making at
micro, macro and strategic levels. The system's inherent flexibility delivers risk analysis tailored to various users, from senior executives to financial analysts.
For more information on FourFifteen, contact your local J.P. Morgan representative.

Additions to the list of RiskMetrics systems developers


microcomp GmbH
Robert-Heuser-Strasse 15, 50968 Koeln, Germany
Peter Jumpertz, (49-221) 937 08020, Fax (49-221) 937 08015,
100276.3233@compuserve.com
ValueRisk is a front- and middle-office tool that can easily be connected to position-keeping systems and other back-office applications. It helps to manage portfolios of all kinds of financial products, including fixed income, equities, commodities, foreign exchange, and derivatives. The system is targeted at the individual trader, fund manager, or treasurer, as well as at the manager of a trading or treasury department.
ValueRisk introduces Risk Percentage, a common measurement for risk in different kinds of
financial markets. Risk is measured as a percentage relative to a fully leveraged position. It is
based on the accepted returns-on-capital concept at mark-to-market prices. The concept is a
basis for clear and communicable capital allocation decisions. It helps to implement well-defined
diversification strategies, and limits the loss of a risk-taking unit. The system also
serves as the basis for performance measurement.


ValueRisk monitors price series and cross-correlation of virtually all traded financial products.
The system calculates volatility and determines the total risk level as well as the stop-out
probability based on historical cross-correlations of all products for various portfolios.
Accounting for unprecedented market dynamics, the system offers quick and powerful simulation capabilities and stress-testing functions. It is fully compliant with the latest BIS (Basel Committee) and BAK (Bundesaufsichtsamt für das Kreditwesen) risk controlling requirements.
Midas-Kapiti International
45 Broadway, New York, N.Y. 10006
Abby Friedman (1-212) 898 9500, FAX (1 212) 898 9510
The TMark trader analytics system provides comprehensive high-speed pricing, portfolio analysis,
and hedging facilities to support trading of off-balance-sheet instruments. The recently introduced
Release 2.4 supports the J.P. Morgan RiskMetrics VaR methodology. Specific features of
Release 2.4 include multi-user processing, deal ticket generation, increased number of currencies and rates, and revaluation of cash flows from basis swaps. TMark is a PC-based system
and runs under Microsoft Windows.
Midas-Kapiti International is one of the world's leading suppliers of software solutions to banks, financial institutions, and corporations. Applications software for derivatives, market data integration, international banking, trade finance, and risk management has been our focus for more than 20 years. With over 24 international offices serving 1,000 customers in 90 countries, we have an unparalleled distribution network which allows us to offer sales and support to our customers wherever their offices may be located.
Value & Risk GmbH
Quellenweg 5a, 61348 Bad Homburg, Germany
Tako Ossentjuk (49-6172) 685051, FAX (49-6172) 685053, 100714.3446@compuserve.com
Value & Risk calculates Value-at-Risk with several selectable methodologies, such as the covariance methodology, quantile simulation, user-defined/stress-test simulation, worst case, principal component analysis, Monte Carlo, historical simulation, and RiskMetrics. It supports various distributions and manifold position-equivalent calculations as well as stochastic and non-stochastic hedge proposals. It also offers substantial drill-down and reporting functionality, including numerous graphics.
Value & Risk co-operates with SAS Institute and is able to implement complete solutions from
data integration to decision support for traders, managers and controllers in a multi-user environment.
Besides supplying the above-mentioned methodologies, Value & Risk offers the standard method for capital requirements according to the Basel Committee and CAD. All markets and their instruments are covered, including a variety of exotic options.



For the current list of software vendors that support RiskMetrics, please refer to the J.P. Morgan Web
page at:
http://www.jpmorgan.com/MarketDataInd/RiskMetrics/Third_party_directory.html

Change to the RiskMetrics yield calculation


Until now, our price volatility for government zeros has been estimated by using discrete compounding, i.e., price = 1/(1 + r)^t, where r = yield and t = term, to first establish the current price for each zero yield interval.
However, continuous compounding, i.e., price = exp(-rt), should have been used instead, given the yield calculation method of the term structure model published in the RiskMetrics Technical Document.
We will switch to continuous compounding for government zeros beginning June 17th.
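The two compounding conventions, and the present-value gap between them, can be sketched directly (illustrative yields only, not RiskMetrics data):

```python
import math

def zero_price_discrete(r, t):
    """Zero-coupon price with annual discrete compounding: 1 / (1 + r)**t."""
    return 1.0 / (1.0 + r) ** t

def zero_price_continuous(r, t):
    """Zero-coupon price with continuous compounding: exp(-r * t)."""
    return math.exp(-r * t)

# Illustrative yields: a low-yielding and a high-yielding market
for label, r, t in [("low yield, 20-year", 0.03, 20),
                    ("high yield, 30-year", 0.10, 30)]:
    p_disc = zero_price_discrete(r, t)
    p_cont = zero_price_continuous(r, t)
    drop = 100.0 * (p_disc - p_cont) / p_disc
    print(f"{label}: discrete {p_disc:.4f}, continuous {p_cont:.4f}, "
          f"PV reduction {drop:.1f}%")
```

The present-value reduction grows with yield level and maturity, which is the dampening effect on VaR discussed below.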
The effect of this change on estimated VaR varies according to maturity interval and a currency's yield
levels. Chart 1 shows that the average price volatility for most currencies will increase an average of 5
to 8%. Higher yielding currencies, for example, Lira or Peseta, will increase an average of 10 to 12%.
On the other hand, the volatility for low yielding currencies, for example, Yen, will average an increase
of 2 to 5%.
The increase in average price volatility, however, does not translate into a direct, corresponding increase in VaR. The change in VaR is also a function of the time to maturity. The reduction in a cash flow's present value using continuous versus discrete compounding is greater the longer the time to maturity. Thus, a nearer government interval, for example, 2 years, will have more of an increase in VaR than a farther one. This is because the reduction in present value has a dampening effect vis-à-vis the increase in price volatility. This is particularly noticeable when looking at the overall effect on VaR
of high versus low yielding currencies. Chart 2 on page 6 shows the change in VaR for the 30 year Lira
and the 20 year Yen. In fact, the reduction in present value for the 30 year Lira more than offsets the
increase in its price volatility and the revised VaR estimate is actually less than the current estimate.
Chart 2 also compares the change in present value of the 20 year Yen versus the 30 year Lira.
Chart 1
Changes in average volatility estimates
RiskMetrics data, Jan-92 to Feb-96
[Bar chart: increase in average price volatility for USD, JPY, FRF, NZD, LIR, DEM, ESP, DKK, and CAD at the 2-, 10-, 20-, and 30-year intervals; vertical scale 0.0% to 15.0%]


Chart 2
Change in average estimated VaR
RiskMetrics data, Jan-92 to Feb-96
[Bar chart: change in average estimated VaR for USD, JPY, FRF, NZD, LIR, DEM, ESP, DKK, and CAD at the 2-, 10-, 20-, and 30-year intervals; vertical scale -10.0% to 15.0%]

Revision to series used in EM algorithm for swap and government rates


When applying the Expectation Maximization (EM) algorithm to fill in missing data, we predefine a grouping of instruments related to the instrument type. Until the middle of May, when we filled missing swap or government data, we used only the swap or government series, respectively, for each currency. This, however, led to instances of discontinuity in the swap spread over governments. On the day EM was applied, the filled-in swap yields would hardly move even though the corresponding government yields might have changed ±20 basis points. As a result, a previously high correlation between swaps and governments, particularly in the same currency, would decline for a period before recovering. To remedy this, we now include all the swap and government series for each currency when using the EM algorithm for either missing swap or government yields.


An improved methodology for measuring VaR


Peter Zangari
Morgan Guaranty Trust Company
Risk Management Advisory
(1-212) 648-8641
zangari_p@jpmorgan.com

Since its release in October 1994, RiskMetrics has inspired an important discussion on VaR methodologies. A focal point of this discussion has been a RiskMetrics assumption that returns follow a conditional normal distribution. Since the distributions of many observed financial return series have tails
that are fatter than those implied by conditional normality, risk managers may underestimate the risk
of their positions if they assume returns follow a conditional normal distribution. In other words, large
financial returns are observed to occur more frequently than predicted by the conditional normal distribution. Therefore, it is important to be able to modify the current RiskMetrics model to account for
the possibility of such large returns.
The purpose of this article is to describe a RiskMetrics VaR methodology that allows for a more realistic model of financial return tail distributions. The article is organized as follows: the first section
reviews the fundamental assumptions behind the current RiskMetrics calculations, in particular, the
assumption that returns follow a conditional normal distribution. The second section (A new VaR
methodology on page 10) introduces a simple model that allows us to incorporate fatter-tailed distributions. The third section (A statistical model for estimating return distributions and probabilities on
page 18) shows how we estimate the unknown parameters of this model so that they can be used in conjunction with current RiskMetrics volatilities and correlations. The fourth section (on page 23) is the
conclusion to this article.

A review of the implications of the current RiskMetrics assumptions about return distributions
In a normal market environment, RiskMetrics VaR forecasts are given by the bands of a confidence interval that is symmetric around zero. These bands represent the largest expected change in the value of a portfolio with a specified level of probability. For example, the VaR bands associated with a 90% confidence interval are given by {-1.65σp, +1.65σp}, where -/+1.65 are the 5th/95th percentiles of the standardized normal distribution, and σp is the portfolio standard deviation, which may depend on correlations between returns on individual instruments. The scale factors -/+1.65 result from the assumption that standardized returns (i.e., a mean-centered return divided by its standard deviation) are normally distributed. When this is true we expect 5% of the standardized observations to lie below -1.65 and 5% to lie above +1.65. Often, whether complying with regulatory requirements or internal policy, risk managers compute VaR at different probability levels such as 95% and 98%. Under the assumption that returns are conditionally normal, the scale factors associated with these confidence intervals are -/+1.96 and -/+2.33, respectively. It is our experience that while RiskMetrics VaR estimates provide reasonable results for the 90% confidence interval, the methodology does not do as well at the 95% and 98% confidence levels.1 Therefore, our goal is to extend the RiskMetrics model to provide better VaR estimates at these larger confidence levels.
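These scale factors are simply quantiles of the standard normal distribution; a minimal sketch using Python's standard library (not RiskMetrics code) reproduces them:

```python
from statistics import NormalDist

def var_bands(sigma_p, confidence=0.90):
    """Symmetric VaR bands {-z*sigma_p, +z*sigma_p} for a two-sided
    confidence interval, assuming standardized returns are N(0, 1)."""
    z = NormalDist().inv_cdf(1.0 - (1.0 - confidence) / 2.0)
    return (-z * sigma_p, z * sigma_p)

# The familiar 1.65, 1.96, and 2.33 are these quantiles, rounded
for conf in (0.90, 0.95, 0.98):
    lo, hi = var_bands(1.0, conf)
    print(f"{conf:.0%} confidence: ({lo:+.3f}, {hi:+.3f})")
```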
Before we can build on the current RiskMetrics methodology, it is important to understand exactly what RiskMetrics assumes about the distribution of financial returns. RiskMetrics assumes that returns follow a conditional normal distribution. This means that while returns themselves are not normal, returns divided by their respective forecasted standard deviations are normally distributed with mean 0 and variance 1. For example, let r(t) denote the time t return, i.e., the return on an asset over a one-day period. Further, let σ(t) denote the forecast of the standard deviation of returns for time t based on historical data. It then follows from our assumptions that while r(t) is not necessarily normal, the standardized return, r(t)/σ(t), is normally distributed. The distinction between these two types of returns is that, unlike conditional returns, unconditional returns have fat tails. Second, time-varying, persistent volatility, which is a feature of the conditional assumption, is a real phenomenon.
1. See Darryl Hendricks, "Evaluation of Value-at-Risk Models Using Historical Data," FRBNY Economic Policy Review, April 1996.


Extending the RiskMetrics VaR framework to include the possibility of large returns begins with an understanding of the dynamics of market returns. Chart 1 illustrates such dynamics by showing a time series of returns for the Nikkei 225 index over the period April 1990 through April 1996.
Chart 1
Nikkei index returns, r(t)
Returns (in percent) for April 1990 to April 1996
[Line chart: daily Nikkei index returns from Mar-90 to Feb-97, ranging roughly from -10 to +10 percent, with periods of high and low volatility annotated]

In addition to the typical feature of volatility clustering (i.e., periods of high and low volatility), Chart 1 displays large returns that appear to be inconsistent with the remainder of the data series. To show how the observed returns, r(t), differ from their standardized counterparts, r(t)/σ(t), the standardized returns for the Nikkei index are presented in Chart 2 on page 8. (To create the standard errors for scaling the returns, we used the current RiskMetrics daily forecasting methods.)
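The standardization step can be reproduced in outline. The sketch below applies the RiskMetrics exponentially weighted volatility recursion (decay factor 0.94 for daily data, per the RiskMetrics Technical Document) to a synthetic return series rather than the Nikkei data; the seed value for the recursion is a simplification:

```python
import math
import random

def ewma_vol_forecasts(returns, lam=0.94):
    """One-day volatility forecasts via the RiskMetrics recursion:
    sigma2(t) = lam * sigma2(t-1) + (1 - lam) * r(t-1)**2."""
    sigma2 = max(returns[0] ** 2, 1e-8)   # crude seed for the recursion
    vols = []
    for r in returns:
        vols.append(math.sqrt(sigma2))    # forecast formed before observing r
        sigma2 = lam * sigma2 + (1.0 - lam) * r * r
    return vols

random.seed(0)
rets = [random.gauss(0.0, 1.0) for _ in range(500)]   # synthetic, not Nikkei data
vols = ewma_vol_forecasts(rets)
standardized = [r / v for r, v in zip(rets, vols)]    # r(t) / sigma(t)
```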
Chart 2
Standardized returns (r(t)/σ(t)) of the Nikkei index
April 4, 1990 to March 26, 1996
[Line chart: standardized Nikkei returns from Mar-90 to Feb-97, ranging roughly from -10 to +8; a large negative spike is labeled "Conditional event"]

Comparing Charts 1 and 2, note the large negative standardized return (called "Conditional event" in Chart 2) and that it results in part from the low level of volatility that preceded the corresponding observed return (Chart 1). We can interpret such large conditional non-normal returns as surprises because immediately prior to the observed return there was a period of low return volatility. Hence, the conditional event return is unexpected.
Conversely, observed returns that appear large may no longer do so when standardized because their
values were expected. In other words, these large returns occur in periods of relatively high volatility.
This scenario is demonstrated in Chart 3, which shows an observed time series of spot gold contract
returns and their standardized values.
Chart 3
Observed spot gold returns (A) and standardized returns (B)
April 4, 1990 to March 26, 1996
[Two line charts from Mar-90 to Feb-97: (A) observed spot gold returns, roughly -8 to +4 percent; (B) standardized gold returns on a similar scale]

Chart 3 demonstrates the effect of standardization on return magnitudes. For example, observed returns (A) show more volatility clustering relative to their standardized counterparts (B).
To summarize, RiskMetrics assumes that financial returns divided by their respective volatility forecasts are normally distributed with mean 0 and variance 1. This assumption is crucial because it recognizes that volatility changes over time.


A new VaR methodology


In developing a new way to measure VaR, we must meet three criteria that relate to RiskMetrics VaR:
• First, continue using the current RiskMetrics volatilities and correlations, because users have shown a strong interest in using these estimates when computing VaR. (We believe that this interest results from the intuitive interpretation of volatility and correlation.)
• Second, build a model in which it is fairly straightforward to aggregate risks of individual instruments. Keeping this in mind, we must develop a model that builds on the conditional normal distribution.
• And third, set up conditions such that VaR is easily computed. Therefore, the number of new parameters required to compute VaR must not be so large as to impede implementation.
We motivate the development of the new VaR methodology as follows: Instead of assuming that standardized returns are normally distributed with mean 0 and variance 1, we assume that the standardized
returns are generated from a mixture of two different normal distributions.
For example, suppose that we believe that on most days standardized returns are generated according to the conditional normal distribution with a zero mean and variance close to 1. However, on other days, say on days where a large return is observed, we assume that standardized returns are still normally distributed but with a different mean and variance. In fact, we would expect the variance of this latter normal distribution to be large. To be more specific, let p1 be the probability that a standardized return was generated from the normal distribution N1, where N1 is identified by its mean μ1 and variance σ1². Similarly, let p2 be the probability that a standardized return was generated from the normal distribution N2, where N2 is identified by its mean μ2 and variance σ2². Mathematically, the standardized return distribution is generated according to the following probability density function:

[1]   PDF = p1·N1(μ1, σ1²) + p2·N2(μ2, σ2²)

Eq. [1] is known as a normal mixture model. An interesting feature of this model is that it allows us to assign large returns a larger probability (compared to the standard normal model). Chart 4 on page 11 shows two simulated densities, one from the standard normal distribution, the other from a normal mixture model with the following parameters:

p1 = 0.98, p2 = 0.02, μ1 = μ2 = 0, σ1² = 1, and σ2² = 100
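A quick simulation makes the tail behavior implied by these parameters concrete (a sketch, not the code behind Chart 4):

```python
import random
from statistics import NormalDist

def draw_mixture(n, p2=0.02, sd2=10.0):
    """Draws from the mixture: N(0, 1) with probability 1 - p2,
    N(0, sd2**2) with probability p2 (variance 100 when sd2 = 10)."""
    return [random.gauss(0.0, sd2 if random.random() < p2 else 1.0)
            for _ in range(n)]

random.seed(7)
sample = draw_mixture(200_000)
tail_mix = sum(abs(x) > 3.0 for x in sample) / len(sample)
tail_normal = 2.0 * (1.0 - NormalDist().cdf(3.0))
print(f"P(|R| > 3): mixture ~{tail_mix:.4f} vs. standard normal {tail_normal:.4f}")
```

Even with only a 2% mixing weight, returns beyond three standard deviations become several times more likely than under the standard normal model.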


Chart 4
Standard normal and normal mixture probability density functions (PDF)
[Plot of the two densities, labeled "Normal" and "Mixture", over standardized returns from -5.00 to 4.50; PDF scale 0.00 to 0.40]

Since the normal mixture model can assign relatively larger-than-normal probabilities to big returns, we choose to model standardized returns as the sum of a normal return, n(t), with mean zero and variance σn², and another normal return, ε(t), with mean μ and variance σ², that occurs each period with probability p. Note that if we set σn = 1, then n(t) represents the part of the return that is modeled correctly according to RiskMetrics. It then follows that we can write a standardized return, R(t), as generated from the following model:

[2]   R(t) = n(t) + δ(t)·ε(t)

where δ(t) = 1 with probability p, or δ(t) = 0 with probability 1 - p. When δ(t) = 1, the standardized return is normally distributed with mean μ and variance σ² + σn². Otherwise, it is distributed normally with mean zero and variance σn². Also, we assume that δ(t) and ε(t) are both independent of n(t). For each observed time series we can estimate μ, σ², p, and σn² using historical data. A description of this estimation process is provided in "A statistical model for estimating return distributions and probabilities" on page 18.
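Sampling from this model is straightforward once the parameters are in hand. The sketch below uses the parameter estimates reported for the two-degrees-of-freedom case in Table 1, with signs as printed there:

```python
import random

def draw_standardized_return(p, mu, sigma, sigma_n):
    """One standardized return: R = n + delta * epsilon, with
    n ~ N(0, sigma_n**2), delta ~ Bernoulli(p), epsilon ~ N(mu, sigma**2)."""
    n = random.gauss(0.0, sigma_n)               # the RiskMetrics part, n(t)
    delta = 1 if random.random() < p else 0      # occurrence of the extra return
    eps = random.gauss(mu, sigma) if delta else 0.0
    return n + delta * eps

random.seed(1)
draws = [draw_standardized_return(p=0.013, mu=0.1357, sigma=6.414, sigma_n=1.440)
         for _ in range(100_000)]
```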
Recall that Eq. [2] on page 11 was motivated by the need to have our VaR model account for large returns, i.e. returns that occur less than 5% of the time according to the RiskMetrics model. Unfortunately, due to data limitations, it is very difficult in practice to determine the accuracy of forecasting
returns that occur less than 2.5% of the time. One way to get around the situation of not having enough
data to properly test Eq. [2] is to perform a Monte Carlo simulation. Under controlled settings we can
simulate as much data as we need to, and then conduct inference based on the simulated data. The only
requirement is that the simulated data have fat tails so as to reflect observed financial return distributions. One commonly used distribution that is known to have fat tails, and that is easy to simulate, is the t-distribution.


We then conduct a Monte Carlo study on single instruments to determine how accurately our new model captures fat tails. The experiment is performed in three steps. First, we simulate 10,000 observations from a t-distribution and then compute the simulated percentiles at the 0.5%, 1%, 2.5%, and 5% probability levels. Second, we estimate the parameters of Eq. [2] by using the simulated data, and determine the percentiles implied by Eq. [2]. (We refer to the percentiles generated from Eq. [2] as "mix" since its associated PDF is essentially a normal mixture model.) Third, we compare the actual percentiles generated from the t-distribution to those produced by Eq. [2] and by the standard normal model (standard RiskMetrics).
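Step one of the experiment can be sketched as follows; the t draws come from the classical normal/chi-square construction rather than any particular statistics library:

```python
import random
from statistics import NormalDist

def student_t_draw(df):
    """One Student-t draw: Z / sqrt(chi2_df / df)."""
    z = random.gauss(0.0, 1.0)
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df))
    return z / (chi2 / df) ** 0.5

random.seed(3)
n = 10_000
sample = sorted(student_t_draw(4) for _ in range(n))
for prob in (0.005, 0.01, 0.025, 0.05):
    t_pctile = sample[int(prob * n)]            # empirical lower-tail percentile
    z_pctile = NormalDist().inv_cdf(prob)       # what the normal model would use
    print(f"{prob:.1%}: t ~ {t_pctile:.3f}, normal = {z_pctile:.3f}")
```

The fatter the tails (smaller df), the further the empirical t percentiles sit beyond the normal ones, which is the gap column 5 of Table 1 measures.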
Table 1 reports the results of our study. The data in column 2 are simulated from t-distributions with 2, 4, 10, and 100 degrees of freedom. Notice that the smaller the degrees of freedom, the fatter the tails of the simulated distribution (compare columns 4 and 5).
Table 1
Comparing percentiles of t-distribution and normal mixture
Simulated data from t-distribution with 2, 4, 10, and 100 degrees of freedom

Two degrees of freedom
Parameters:* μ = 0.1357, σ = 6.414, σn = 1.440, p = 1.3%

Test results
(1)              (2)       (3)     (4)                    (5)
Percentile (%)   t-dist.   mix     relative error         relative error
                                   (mix-t-dist)/(t-dist)  (normal-t-dist)/(t-dist)
0.5              5.113     4.453   13%                    50%
1                3.832     3.601    6%                    40%
2.5              2.796     2.930    5%                    30%
5                2.195     2.428   10%                    25%

Four degrees of freedom
Parameters:* μ = 0.0753, σ = 4.394, σn = 1.148, p = 0.66%

Test results
(1)              (2)       (3)     (4)                    (5)
Percentile (%)   t-dist.   mix     relative error         relative error
                                   (mix-t-dist)/(t-dist)  (normal-t-dist)/(t-dist)
0.5              3.130     3.121    0%                    17%
1                2.719     2.753    1%                    14%
2.5              2.239     2.288    2%                    13%
5                2.811     1.910   32%                    40%

* Parameters are defined as follows:
μ = mean of the normal distribution, ε(t), describing the standardized return
σ = standard deviation of the normal distribution, ε(t)
σn = standard deviation of the normal return, n(t)
p = probability that the standardized return is generated from the normal distribution ε(t)


Table 1 (continued)
Comparing percentiles of t-distribution and normal mixture
Simulated data from t-distribution with 2, 4, 10, and 100 degrees of freedom

Ten degrees of freedom
Parameters:* μ = 0.226, σ = 3.397, σn = 1.029, p = 0.32%

Test results
(1)              (2)       (3)     (4)                    (5)
Percentile (%)   t-dist.   mix     relative error         relative error
                                   (mix-t-dist)/(t-dist)  (normal-t-dist)/(t-dist)
0.5              2.896     2.710   6%                     12%
1                2.496     2.428   3%                      7%
2.5              2.053     2.034   0.1%                    5%
5                1.677     1.703   2%                      2%

One hundred degrees of freedom (close to standard normal distribution)
Parameters:* μ = 0.4189, σ = 2.518, σn = 1.026, p = 0.2%

Test results
(1)              (2)       (3)     (4)                    (5)
Percentile (%)   t-dist.   mix     relative error         relative error
                                   (mix-t-dist)/(t-dist)  (normal-t-dist)/(t-dist)
0.5              2.579     2.626   2%                     0%
1                2.392     2.366   1%                     3%
2.5              1.972     1.990   1%                     0%
5                1.648     1.669   1%                     1%

* Parameters are defined as follows:
μ = mean of the normal distribution, ε(t), describing the standardized return
σ = standard deviation of the normal distribution, ε(t)
σn = standard deviation of the normal return, n(t)
p = probability that the standardized return is generated from the normal distribution ε(t)

The results in Table 1 (columns 4 and 5) clearly show how the mixture model is superior to the normal model in recovering the tail percentiles of the t-distribution (notably in the cases of 2 and 4 degrees of freedom). Also, reading from the top of the table downwards, notice how the estimates of σ, σn, and p change as the distribution becomes more normal. When there are very fat tails, there is a greater chance of observing a return from the normal distribution with the large variance (1.3% for 2 degrees of freedom vs. 0.2% for 100 degrees of freedom). Furthermore, the estimated standard deviation σ becomes smaller as the simulated distribution becomes more and more normal.
Next, we consider the case of calculating the VaR of a portfolio of returns. Unlike the single-instrument case described above, when aggregating returns we must grapple with issues such as the correlation between the different returns' δ(t)'s and ε(t)'s. For example, consider a portfolio that consists of two instruments with returns R1(t) and R2(t) expressed as follows:

[3]   R1(t) = n1(t) + δ1(t)·ε1(t)
      R2(t) = n2(t) + δ2(t)·ε2(t)

Let w1 and w2 denote the amount invested in instruments 1 and 2, respectively. We can write the portfolio return, Rp(t), as


[4]   Rp(t) = w1·σ1(t)·R1(t) + w2·σ2(t)·R2(t)
            = [w1·σ1(t)·n1(t) + w2·σ2(t)·n2(t)]                  (standard RiskMetrics component)
            + [w1·σ1(t)·δ1(t)·ε1(t) + w2·σ2(t)·δ2(t)·ε2(t)]     (new portfolio component)

Notice that when aggregating returns we must multiply the standardized returns by the RiskMetrics standard deviations (σ1(t) and σ2(t)) to preserve the proper scale.
Now, to compute VaR it is simple to evaluate the standard RiskMetrics component, since all that is required are the RiskMetrics standard deviations and correlations, which are readily available. However, things are not so straightforward with the new portfolio component. For example, we must determine whether we should use estimates of the correlation between Δ_1t and Δ_2t or simply assume how they are correlated. Similar issues apply to δ_1t and δ_2t although, according to our estimation procedure, it is not possible to estimate their correlation. Through extensive experimentation with portfolios of various sizes, we have concluded that treating both the Δ_t's and δ_t's as independent offers the best results relative to the standard normal model and the assumption that the Δ_t's are perfectly correlated. The implication of this result is that for any portfolio of arbitrary size, all that is required to compute its VaR are the RiskMetrics volatilities and correlations, the probabilities p, the mean estimates μ_Δ, and the standard deviations σ_Δ. Note that there is one p, μ_Δ, and σ_Δ for each time series. We now offer an example to show how we reached the conclusion to treat the Δ_t's and δ_t's as independent.
Consider the case of a portfolio that consists of five assets. We assume that the true returns are generated from the following model:

[5]  R_jt = ε_njt + δ_jt Δ_jt    for j = 1, 2, 3, 4, 5

where the vector Δ = (Δ_1, …, Δ_5) is distributed multivariate normal with mean vector (0,0,0,0,0), standard deviation vector (10,10,10,10,10), and correlation matrix

[6]  Ω = | 1.0  0.5  0.5  0.5  0.5 |
         | 0.5  1.0  0.5  0.5  0.5 |
         | 0.5  0.5  1.0  0.5  0.5 |
         | 0.5  0.5  0.5  1.0  0.5 |
         | 0.5  0.5  0.5  0.5  1.0 |

The vector δ = (δ_1, …, δ_5) is distributed according to a correlated multinomial distribution with probabilities p_j = 0.02 for j = 1,2,3,4,5 and a correlation matrix that is the same as Ω. Finally, ε_nt is multivariate normal with mean vector (0,0,0,0,0), standard deviation vector (1,1,1,1,1), and correlation matrix equal to Ω. We form the portfolio return series by weighting each return series by 1 unit (i.e., w_j = 1 for j = 1,2,3,4,5). After generating 5000 portfolio returns according to Eq. [5] on page 14, we calculate the percentiles at the 0.5%, 1%, 2.5%, and 5% probability levels. These percentiles are reported in the first column of Table 2 on page 15. Table 2 also reports percentiles computed under the following conditions: (1) the standard normal assumption, i.e., just accounting for ε_nt; (2) the assumption that the Δ_t's are independent; and (3) the assumption that the Δ_t's are perfectly correlated.
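This simulation experiment can be sketched as follows. One assumption on our part: the paper does not spell out how it draws the correlated event indicators, so here they are generated with a Gaussian-copula approximation, thresholding correlated standard normals at the 2% point (z ≈ −2.054):

```python
import numpy as np

rng = np.random.default_rng(1)
T, k = 5000, 5
omega = np.full((k, k), 0.5) + 0.5 * np.eye(k)   # Eq. [6]: 1 on diagonal, 0.5 off
L = np.linalg.cholesky(omega)

eps_n = rng.standard_normal((T, k)) @ L.T             # normal component, corr = omega
big = 10.0 * (rng.standard_normal((T, k)) @ L.T)      # event returns, std 10, corr = omega
# Correlated event indicators: threshold correlated normals at P(Z < -2.054) = 2%.
ind = (rng.standard_normal((T, k)) @ L.T) < -2.054

port = (eps_n + ind * big).sum(axis=1)                # unit weight on each asset
sigma_p = np.sqrt(omega.sum())                        # std of the normal part of the portfolio
for q in (5.0, 2.5, 1.0, 0.5):
    print(f"{q}%: {np.percentile(port, q) / sigma_p:6.2f}")
```

Dividing by the standard deviation of the normal component puts the percentiles on the same standardized scale as Table 2; with a different seed and event mechanism the numbers will not match Table 2 exactly, but the fattening of the 1% and 0.5% tails is the same effect.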


Recall that we ignore the correlations between the Δ_t's because, in practice, it is very difficult to get good estimates of the correlation matrix Ω.
Table 2
Testing assumptions on correlation between Δ_t's
5000 simulated returns from correlated mixture model

Confidence                        Percentiles:
interval     True     Normal     Independent Δ_t's     Perfectly correlated Δ_t's
5.0%         1.63     1.65       1.74                  1.88
2.5%         2.15     1.96       2.14                  2.40
1.0%         3.32     2.32       3.32                  3.10
0.5%         5.08     2.57       4.20                  8.54

Next, we conduct the same experiment but this time on real data. We form a portfolio of returns from
5 foreign exchange series again weighting each return series by one unit. Table 3 reports the parameter
estimates after fitting each of the five return series to Eq. [2] on page 11. The percentiles implied by
Eq. [2] under different aggregation assumptions are reported in Table 4.
Table 3
Parameter estimates of the normal mixture model
Fitting the model (Eq. [2]) to 5 foreign exchange return series

Currencies     μ_Δt     σ_Δt     σ_nt     p (%)
Austria        0.80     3.09     1.01     1.2
Australia      0.33     3.48     1.10     1.5
Belgium        0.68     3.46     1.03     1.5
Canada         0.25     3.20     1.01     1.4
Denmark        1.34     3.00     1.08     1.4

Table 4
Testing assumptions on correlation between Δ_t's
Portfolio returns generated from 5 foreign exchange series

Confidence               Percentiles:
interval     Normal      Independent Δ_t's     Perfectly correlated Δ_t's
5.0%         1.65        1.71                  1.69
2.5%         1.96        2.05                  2.10
1.0%         2.32        2.46                  2.56
0.5%         2.57        2.77                  3.16

Tables 2 and 4 show that the independence and perfect-correlation assumptions give similar results. However, at the smaller percentiles, say 1% and smaller, the assumption that the Δ_t's are independent tends to be more accurate.
To help summarize how to compute VaR under this new proposed methodology, we refer the reader to the flow chart presented in Chart 5 on page 17. The gray shaded boxes represent the data required (which we would supply) to estimate VaR under the new methodology. In addition to the RiskMetrics volatilities and correlations, we would supply p, μ_Δ, and σ_Δ for each time series, together with the formulae that take as inputs the portfolio weights, the RiskMetrics volatilities and correlations, and p, μ_Δ, and σ_Δ, and produce a VaR estimate at a prespecified confidence level. The italicized words tell when the data would be updated.


Chart 5
Flow chart of VaR calculation

Standard branch: Start with RiskMetrics price returns → Map positions to standard RiskMetrics vertices → Estimate volatilities and correlations (updated daily) → Use RiskMetrics covariance matrix → Standard RiskMetrics VaR.

New branch: Fit individual standardized return series to a statistical model (updated monthly) → Estimate standard deviations (σ_Δ), means (μ_Δ), and probabilities, p (updated monthly) → Compute adjusted percentile → New VaR.
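The "compute adjusted percentile" step can be sketched as a root-finding problem. Assuming the event return is normal and independent of the normal component, the standardized return's CDF is F(x) = (1 − p)Φ(x/σ_n) + p·Φ((x − μ_Δ)/√(σ_n² + σ_Δ²)), and the adjusted percentile solves F(x) = α. The parameter values below are illustrative, and the helper names are ours:

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the complementary error function."""
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def mixture_cdf(x, p, mu_d, sigma_d, sigma_n=1.0):
    """CDF of R = eps_n + delta * Delta, with eps_n ~ N(0, sigma_n^2),
    P(delta = 1) = p, and Delta ~ N(mu_d, sigma_d^2) independent of eps_n."""
    no_event = (1.0 - p) * norm_cdf(x / sigma_n)
    event = p * norm_cdf((x - mu_d) / math.hypot(sigma_n, sigma_d))
    return no_event + event

def adjusted_percentile(alpha, p, mu_d, sigma_d, lo=-50.0, hi=0.0):
    """Solve mixture_cdf(x) = alpha by bisection (F is increasing in x)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mixture_cdf(mid, p, mu_d, sigma_d) < alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Adjusted 1% percentile for an illustrative parameter set:
print(adjusted_percentile(0.01, p=0.02, mu_d=0.0, sigma_d=10.0))
```

The resulting percentile is more negative than the −2.33 of the conditional normal model, which is exactly the tail correction the new VaR applies.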


Thus far, there has been no mention of how p, μ_Δ, and σ_Δ are estimated. The next section describes the statistical model used to estimate p, μ_Δ, and σ_Δ. The estimation process is rather technical, and uninterested readers should skip to Conclusions on page 23.

A statistical model for estimating return distributions and probabilities


Recall from the previous section that returns divided by the RiskMetrics volatility forecasts are assumed to be generated from a mixture of normal distributions. Using the notation ~, which is interpreted "distributed as," we write such returns as being generated from the following model:

[7]  R_t = ε_nt + δ_t Δ_t

where

  ε_nt ~ N(0, σ_n²)
  Prob(δ_t = 1) = p
  Prob(δ_t = 0) = 1 − p

and

  Δ_t ~ N(0, σ_Δ²)
We analyze Eq. [7] within a Bayesian framework. Given prior distributions and values on Δ_t, p, δ_t, and σ_n², we derive the marginal posterior distributions of p, Δ_t, and σ_n², as well as a time series of the posterior probabilities of events, i.e., Prob(δ_t = 1) at each point in time. The basic computational tool used is the Gibbs sampler, which uses random draws from the conditional distributions of each variable of a random vector, given all other variables, to obtain samples from the marginal distributions.² The sampler thus only requires the ability to draw random samples from the conditional distributions of the variables involved. This minimum requirement makes the sampler particularly useful, as the joint distribution of the variables p, Δ_t, and σ_n² is complicated.
As previous research has shown, traditional maximum likelihood analysis of models such as Eq. [7] is
complicated because the random mechanism, p, can give rise to unknown numbers of shifts at arbitrary
time points. Alternatively, formulating Eq. [7] by using a Bayesian approach and applying the Gibbs
sampler is an effective way to obtain the marginal posterior distributions. A major advantage of the
Bayesian approach is that there is no need to consider the number of level shifts in the series; the shifts
are in effect governed by the probability p.
Following the Bayesian paradigm, in order to estimate Eq. [7], we must first specify the prior distributions for p, Δ_t, and σ_n². The prior distribution for the variance of the standard normal random variable ε_nt is

[8]  σ_n² ~ λν / χ²_ν

² See Appendix on page 24 for a description of this technique.



where σ_n² is an inverse chi-square random variable with ν degrees of freedom, with mean and variance given by

  E(σ_n²) = λν / (ν − 2),                    ν > 2
  V(σ_n²) = 2λ²ν² / [(ν − 2)²(ν − 4)],       ν > 4

The prior distribution for the event returns, Δ_t, is normal, i.e.,

[9]  Δ_t ~ NID(0, σ_Δ²)

where NID stands for "normal, independently distributed."

Finally, it is assumed that the prior probability that an event occurs, p, follows a beta distribution, i.e., p ~ Beta(γ₁, γ₂), with mean and variance

  E(p) = γ₁ / (γ₁ + γ₂)
  V(p) = γ₁γ₂ / [(γ₁ + γ₂)²(γ₁ + γ₂ + 1)]
The hyperparameters, i.e., the parameters (λ, ν, γ₁, γ₂, σ_Δ²) for the prior distributions, are assumed known. In practice, these hyperparameters can be specified by using substantive prior information on the series under study. The purpose of the Gibbs sampler is to find the conditional posterior distributions of subsets of the unknown parameters (Δ, δ, p, σ_n²). Denoting the conditional probability density of x given y by p(x|y) and using some standard Bayesian techniques, we obtain the posterior distributions, i.e., the distributions after we update our priors with data.³

To quantify our a priori beliefs, we set the priors to the following values:

  γ₁ = 2;  γ₂ = 98
  ν = 3, with λ set so that the prior mean of σ_n² is 1
  σ_Δ² = 100

These settings imply that before we estimate the parameters of Eq. [7], we believe an event occurs 2% of the time, the expected value of the standard deviation of ε_nt is one, and event returns, Δ_t, are distributed normally with a mean of zero and a standard deviation of 10. Chart 6 shows how the prior distribution on event returns compares to the N(0,1) prior distribution of standardized returns.

³ The posterior distributions are given in the appendix to this paper (page 24).


Chart 6
Prior distributions for standardized returns and event returns
[Chart compares the peaked N(0,1) density of standardized returns with the nearly flat Prior(0,100) density of event returns over the range −10 to 10.]

Note that by choosing a large standard deviation we are implying that we do not have strong a priori
beliefs on the values of event returns. Also, assuming a mean of zero implies that there is no bias toward
either positive or negative events.
Combining the observed returns with our prior settings, we estimate the marginal distributions of Δ, p, and σ_n², which represent the probability distributions of the event returns, the probability of an event, and the variance of the normal standardized random variable, respectively. Chart 7 presents the estimated posterior distribution of event returns for gold spot contracts. Specifically, given our priors, we fit Eq. [7] on page 18 to gold spot contract returns for the period April 4, 1990 through March 26, 1996 and estimate the distribution of Δ.
Chart 7
Posterior distribution of event returns (Δ) of gold spot contracts
X-axis is in units of standardized returns
[Bimodal density over spot gold standardized returns, plotted from −10.00 to 8.00.]

Notice how the event returns have a symmetric bimodal distribution. Effectively, what this shape tells us is that there is important information in the data: the data has transformed the prior distribution into something very different. When event returns are negative, values around −4.5 occur most frequently. Similarly, when event returns are positive, values around 4.3 occur most often. This is in stark contrast to the prior distribution, which assumes that zero is the most commonly observed value for event returns.
In fact, when fitting Eq. [7] to 215 time series in the RiskMetrics database, which includes foreign exchange, money market rates, government bonds, commodities, and equities, we have found similar shapes for the event return distributions. For example, Charts 8 and 9 show event return distributions for the DEM/USD foreign exchange series and the US S&P 500 equity index for the period April 4, 1990 through March 26, 1996.
Chart 8
Posterior distribution of DEM/USD foreign exchange event returns
X-axis is in units of standardized returns
[Symmetric bimodal density over DEM/USD standardized returns, plotted from −8.00 to 8.00.]

Note that whereas the DEM/USD event returns have a symmetric bimodal distribution, the S&P 500's event return distribution has a pronounced mode at −5.0. As Chart 9 on page 22 shows, when events in the S&P 500 occur, they are more likely to be negative than positive.


Chart 9
Posterior distribution of S&P 500 event returns
X-axis is in units of standardized returns
[Asymmetric density over S&P 500 standardized returns, plotted from −10.00 to 8.00, with most mass on negative values.]

Charts 7, 8, and 9 presented the distribution of Δ for three different return series; similarly, we can estimate the posterior distribution of p, the probability that an event occurs. Chart 10 shows the distribution of the probability of an event occurring for the S&P 500.
Chart 10
Posterior distribution of the probability of an event (p) for the S&P 500
April 4, 1990 through March 26, 1996
[Density over event probabilities from 0.004 to 0.044, peaking near 0.01.]

Recall that the prior probability of observing an event followed a beta distribution, and it was expected that, on average, an event would occur 2% of the time. However, after using the data to update our priors, we see that for the S&P 500 the estimated distribution of p is positively skewed, with a peak around 1%.


Finally, note that throughout this section we derived probability distributions of the parameters of interest. However, our VaR methodology requires point estimates, not entire distributions. When computing VaR, we take the means of the posterior distributions of p, μ_Δ, and σ_Δ for their point estimates.

Conclusions
This article has developed a new methodology to measure VaR that explicitly accounts for fat-tailed distributions. Building on the current RiskMetrics model, VaR under the new methodology takes as inputs the portfolio weights and two sets of parameters: the current RiskMetrics volatilities and correlations, and estimates of p, μ_Δ, and σ_Δ. This model was developed keeping in mind three requirements: (1) build a VaR model that continues to use RiskMetrics volatilities and correlations, (2) use the conditional normal distribution as the baseline model, and (3) keep the number of additional parameter estimates required to compute VaR to a minimum. The upshot of meeting these criteria is a relatively simple VaR framework that goes a long way towards modeling large portfolio losses. Furthermore, the new VaR methodology serves as a basis from which a risk manager can perform structured Monte Carlo.
In closing, we are interested in your feedback related to this proposed methodology. Send the author
E-mail with any comments or questions you may have regarding the contents of this article.


Appendix
Conditional posterior distributions

The conditional posterior of σ_n² is inverse chi-squared:

[A.1]  σ_n² ~ (λν + s²) / χ²_{ν+T}

where

  s² = Σ (R_t − R̄)²

and R̄ = sample mean of R_t.
At any point in time, the conditional posterior probability of δ_t is given by

[A.2]  Prob(δ_t = 1 | R_t, Δ_t, σ_n, p) = p g₁(R_t) / [p g₁(R_t) + (1 − p) g₀(R_t)]

where

[A.3]  g₁(R_t) = (2πσ_n²)^(−1/2) exp{ −(1/2) [(R_t − Δ_t)/σ_n]² }    if δ_t = 1

[A.4]  g₀(R_t) = (2πσ_n²)^(−1/2) exp{ −(1/2) [R_t/σ_n]² }            if δ_t = 0

This result follows from a direct application of Bayes' rule, i.e.,

[A.5]  Prob(δ_t = 1 | R_t)
         = Prob(δ_t = 1) Prob(R_t | δ_t = 1)
           / [Prob(δ_t = 1) Prob(R_t | δ_t = 1) + Prob(δ_t = 0) Prob(R_t | δ_t = 0)]

The conditional posterior distribution of Δ_t is as follows:

If δ_t = 0, there is no information on Δ_t except its prior, so that Δ_t ~ NID(0, σ_Δ²).
If δ_t = 1, we use standard results on the relation between a normal prior and likelihood to derive the posterior distribution of Δ_t:

[A.6]  Δ_t | (δ_t = 1) ~ NID(Δ_t*, σ*²)


where

  Δ_t* = σ_Δ² R_t / (σ_Δ² + σ_n²)

and

  σ*² = σ_Δ² σ_n² / (σ_Δ² + σ_n²)

Finally, the conditional posterior distribution of p depends only on δ_t. Let k be the number of 1s in the T x 1 vector δ = (δ_1, δ_2, …, δ_T); that is, k is the number of events in the time series. Because the prior of p is Beta(γ₁, γ₂), the conditional posterior distribution of p is Beta(γ₁ + k, γ₂ + T − k).

The Gibbs sampler

The Gibbs sampler is a Monte Carlo integration algorithm which is an adaptation of Markov Chain Monte Carlo (MCMC) methods. It has been the focus of many academic papers in the statistics literature. A main reason for its popularity is its conceptual simplicity. The Gibbs sampler generates random variables from a distribution indirectly, without having to calculate the density. In other words, rather than having to compute or approximate the probability density function p(x) directly, the Gibbs sampler makes it possible to generate a sample X_1, …, X_N from the cumulative distribution function P(x) without knowing p(x).

The Gibbs sampler simulates random variables from marginal and joint distributions as follows. Consider a set of random variables r = (r_1, …, r_N) which has a joint distribution function P(r_1, …, r_N). It is assumed that the joint distribution is determined uniquely by the full conditional distributions P(r_j | r_i, i ≠ j), j = 1, …, N, and that it is possible to sample from these conditional distributions. For each j, let P(r_j) denote the marginal distribution. Given an arbitrary set of starting values r_2^(0), …, r_N^(0), draw r_1^(1) from P(r_1 | r_2^(0), …, r_N^(0)), then r_2^(1) from P(r_2 | r_1^(1), r_3^(0), …, r_N^(0)), and so on up to r_N^(1) to complete one iteration of the scheme. After A such iterations the result is (r_1^(A), r_2^(A), …, r_N^(A)). It can be shown that, for large enough A, (r_1^(A), r_2^(A), …, r_N^(A)) is a simulated draw from the joint cumulative distribution function P(r_1, r_2, …, r_N). In this paper we run the sampler 1,000 times, discard the first 500 draws, and use the last 500 simulated samples for statistical inference.
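The conditional posteriors [A.1] through [A.6] can be assembled into a sampler along the following lines. This is a sketch, not the authors' production code: the choice λ = 1/3 (giving a prior mean of 1 for σ_n²) and the use of the residuals R_t − δ_t Δ_t in the sum of squares of step [A.1] are our assumptions.

```python
import numpy as np

def gibbs_mixture(returns, n_iter=1000, burn=500, seed=0,
                  g1=2.0, g2=98.0, nu=3.0, lam=1.0 / 3.0, sigma_d2=100.0):
    """Gibbs sampler sketch for the mixture model R_t = eps_nt + delta_t * Delta_t.
    Priors: sigma_n^2 ~ lam*nu/chi2(nu), Delta_t ~ N(0, sigma_d2), p ~ Beta(g1, g2).
    Returns an array of post-burn-in draws of (p, sigma_n^2)."""
    rng = np.random.default_rng(seed)
    R = np.asarray(returns, dtype=float)
    T = R.size
    p, sigma_n2 = g1 / (g1 + g2), 1.0
    delta = np.zeros(T, dtype=bool)
    Delta = rng.normal(0.0, np.sqrt(sigma_d2), T)
    draws = []
    for it in range(n_iter):
        # [A.1] sigma_n^2 | rest: scaled inverse chi-square (residual-based s^2).
        s2 = np.sum((R - delta * Delta) ** 2)
        sigma_n2 = (lam * nu + s2) / rng.chisquare(nu + T)
        # [A.2]-[A.4] delta_t | rest: Bernoulli with odds p*g1(R_t) : (1-p)*g0(R_t).
        log_g1 = -0.5 * (R - Delta) ** 2 / sigma_n2
        log_g0 = -0.5 * R ** 2 / sigma_n2
        m = np.maximum(log_g1, log_g0)          # guard against exp underflow
        w1 = p * np.exp(log_g1 - m)
        w0 = (1.0 - p) * np.exp(log_g0 - m)
        delta = rng.random(T) < w1 / (w1 + w0)
        # [A.6] Delta_t | rest: prior draw when delta_t = 0, normal-normal update otherwise.
        post_var = sigma_d2 * sigma_n2 / (sigma_d2 + sigma_n2)
        post_mean = sigma_d2 * R / (sigma_d2 + sigma_n2)
        Delta = np.where(delta,
                         rng.normal(post_mean, np.sqrt(post_var)),
                         rng.normal(0.0, np.sqrt(sigma_d2), T))
        # p | delta ~ Beta(g1 + k, g2 + T - k), with k the number of events.
        k = int(delta.sum())
        p = rng.beta(g1 + k, g2 + T - k)
        if it >= burn:
            draws.append((p, sigma_n2))
    return np.array(draws)
```

Run on data simulated from the model itself, the posterior means of p and σ_n² should land near the true values, which is a useful sanity check before applying the sampler to market returns.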


A Value-at-Risk analysis of currency exposures


Peter Zangari
Morgan Guaranty Trust Company
Risk Management Advisory
(1-212) 648-8641
zangari_p@jpmorgan.com

In this article, we compute the VaR of a portfolio of foreign exchange flows that consists of the exposures provided in Table A.1 in the Appendix on page 30. In so doing, we underscore the limitations of standard VaR analysis when return distributions deviate significantly from normality. All exposures are assumed to have been converted to U.S. dollar equivalents at the current spot rate.

Briefly, for a given forecast horizon and confidence level, VaR is the maximum expected loss of a portfolio's current value. Based on the standard RiskMetrics methodology, Table 1 reports the portfolio's VaR over different forecast horizons and confidence levels. For example, over the next year, there is a 95% chance that the current portfolio value of USD 2,502MM will not fall by more than USD 129.42MM.
Table 1
Portfolio VaR estimates (USD MM)

Confidence   Time Horizon                             Annual Risk
Interval     1 Quarter   2 Quarters   3 Quarters      Diversified   Undiversified
99%          91.24       129.03       157.92          182.48        469.57
95%          64.71       91.51        112.08          129.42        333.03
90%          50.21       71.01        86.97           100.43        258.43

The RiskMetrics methodology is based on the precept that risks across instruments are not perfectly additive, given the lack of perfect positive correlation. As a result, the total risk in a portfolio of positions is often less than the sum of the instrument risks taken separately. This diversification benefit can be estimated by taking the difference between VaR assuming all correlations between exposures are 1 (no diversification) and VaR based on estimated correlations (as presented in Table 1). Consider the VaR estimates (95% confidence) on the individual exposures for a one-year horizon. The total sum of these numbers is USD 333.03MM (sum of annual VaR estimates in Table A.1 on page 30). This is equivalent to the VaR estimate of USD 129.42MM reported in Table 1 plus the diversification benefit of USD 203.61MM.
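The diversified/undiversified comparison can be sketched with two hypothetical exposures; the figures below are illustrative, not the article's 52-currency data:

```python
import numpy as np

w = np.array([100.0, 200.0])        # exposures, USD MM (hypothetical)
vol = np.array([0.12, 0.08])        # annual return volatilities (hypothetical)
corr = np.array([[1.0, 0.3],
                 [0.3, 1.0]])
z = 1.65                            # 95% one-sided confidence multiplier

# Undiversified VaR: sum of individual VaRs, equivalent to correlations of 1.
individual_var = z * w * vol
undiversified = individual_var.sum()

# Diversified VaR: uses the estimated correlation matrix.
cov = np.outer(vol, vol) * corr
diversified = z * np.sqrt(w @ cov @ w)

print(f"undiversified VaR:       {undiversified:.2f}")
print(f"diversified VaR:         {diversified:.2f}")
print(f"diversification benefit: {undiversified - diversified:.2f}")
```

With any correlation below 1, the diversified VaR is strictly smaller, and the gap is the diversification benefit described above.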
In addition to the portfolio's VaR, we compute the VaR of each foreign exchange exposure for a forecast horizon of one year and a 95% confidence level. Table A.1 reports VaR estimates for the 52 positions. To further help understand the riskiness of the individual foreign exchange positions, Table A.1 also reports the volatility forecasts of foreign exchange returns (i.e., not weighted by the size of the foreign exchange positions).
To obtain the aforementioned risk estimates, we conducted the VaR analysis closely following the RiskMetrics methodology. For each of the 52 time series (Table A.1) we used 86 historical weekly prices for the period 7/15/95 through 4/31/96. Missing observations, due to country-specific holidays, were forecast using the statistical routine known as the EM algorithm. The VaR estimates reported in Tables A.1 and A.2 were computed using the standard RiskMetrics delta valuation methodology. This requires the computation of volatilities for each exposure and correlations between exposures. Volatilities and correlations were computed using exponentially weighted averages with a decay factor of 0.94, which implies that our volatility and correlation forecasts effectively use 75 historical data points. Recall that exponential weighting is a simple way of capturing the dynamics of returns. Its key feature is that it weighs recent data more heavily than data recorded in the distant past. See the RiskMetrics Technical Document for exact formulae.
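A sketch of the exponentially weighted volatility estimator and the square-root-of-time scaling used here (zero-mean returns are assumed, per the RiskMetrics convention; the sample data below are simulated, not the article's):

```python
import numpy as np

def ewma_vol(returns, decay=0.94):
    """Exponentially weighted volatility forecast: an observation that is
    `age` periods old gets weight proportional to decay**age."""
    r = np.asarray(returns, dtype=float)
    ages = np.arange(r.size)[::-1]          # 0 = most recent observation
    weights = (1.0 - decay) * decay ** ages
    weights /= weights.sum()                # renormalize over the finite sample
    return np.sqrt(np.sum(weights * r ** 2))

# Weekly VaR scaled to one year with the article's 60-weeks-per-year convention.
rng = np.random.default_rng(0)
weekly_returns = rng.normal(0.0, 0.01, 85)   # 85 weekly returns, as in the article
weekly_vol = ewma_vol(weekly_returns)
position = 100.0                              # USD MM (hypothetical)
weekly_var = 1.65 * position * weekly_vol
annual_var = weekly_var * np.sqrt(60)         # scale by sqrt of horizon in weeks
```

With a decay of 0.94 the weights shrink to negligible size after roughly 75 observations, which is why the article notes that the forecasts "effectively use 75 historical data points."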
When computing VaR, RiskMetrics assumes that portfolio returns are distributed conditionally normal. That is, it is assumed that returns divided by their respective standard deviations are normally distributed. It is important to distinguish this assumption from simply assuming that currency returns are normally distributed. Table A.2 on page 32 reports various test statistics to determine the validity of this assumption. All results were computed from normalized returns, that is, returns divided by their standard errors.
RiskMetrics also assumes that returns are independent over time. Finally, to generate VaR estimates
over different time horizons, we simply scale the weekly VaR forecast by the square root of the time
horizon in weeks. For example, since we define 60 weeks to be one year,1 we obtain the one-year VaR
forecast by multiplying the one-week VaR forecast by the square root of 60.

Discussion
Table A.2 on page 32 presents evidence that several return series are clearly not conditionally normal. For example, Mexico's skew statistic is 13.71, which is very non-normal. When returns are aggregated into a portfolio, it would not be unreasonable to expect the portfolio's return distribution to become more normal. However, as the sample statistics for the portfolio in Table A.2 show, this is not the case. In fact, the non-normality of the portfolio's return is a result of the relatively high weights on returns that are very non-normal (e.g., China has a weight of 74). Fortunately, there are advanced statistical methods that allow us to adjust our VaR estimates to reflect the portfolio's skewness and kurtosis (see RiskMetrics Monitor, 1st quarter, 1996). In the current analysis, we did not adjust the VaR estimates for skewness and kurtosis.
In addition to the general deviations from conditional normality, there is the issue of event risk. For
various reasons, events appear as very large returns that occur with only a small probability. While we
are currently developing a VaR methodology that allows users to explicitly account for event risk in
their VaR calculations, it is not included in this analysis. For a discussion of fat-tail distributions, see
the preceding paper in this edition, An improved methodology for measuring VaR on page 7.
One way to measure how sensitive the VaR estimates in Tables A.1 and A.2 are to their underlying assumptions is to compare these values to VaR estimates produced by historical simulation. Under historical simulation, no statistical distribution for returns is assumed. Instead, sample returns over the 85-week historical sample period and the portfolio weights are used to construct the portfolio's profit and loss (P&L) distribution. It is then assumed that this distribution holds in the future. Chart 1 on page 28 shows the P&L distribution of the portfolio for a one-year horizon. For a 95% confidence level, VaR is given by the 5th percentile of this distribution, which is USD 174MM.

¹ In this analysis, we use the convention of 5 weeks per month, 15 weeks per quarter, and 60 weeks per year.


Chart 1
Portfolio profit & loss distribution over one year based on historical simulation
VaR = USD 174MM at 95% confidence level
[Histogram of portfolio value (USD MM), plotted over the range −147 to 147.]

Table 2 presents portfolio VaR estimates based on historical simulation for various forecast horizons.
Table A.1 on page 30 reports annual VaR estimates for individual foreign exchange exposures.
Table 2
Portfolio VaR estimates (USD MM)
Historical simulation

Confidence    Time Horizon
interval¹     1 Quarter    2 Quarters    3 Quarters    1 Year
95%           84           119           146           174
90%           80           113           138           160

¹ Due to an insufficient amount of data, we do not report results for the 99% confidence level.

It should be noted that the accuracy of the VaR estimates produced by historical simulation is very dependent upon the sample size. In general, when no statistical distribution is assumed for returns, it is difficult to obtain accurate estimates of the 1st, 5th, and 10th percentiles without a sufficient amount of data. For example, to find the 5th percentile of the profit and loss (P&L) distribution consisting of 85 data points, we first sort the data and then select the 4th largest loss. (Actually, the fifth percentile of 85 is 4.25, and using the fourth observation is a rough approximation to the true percentile value.) Similarly, the 1st and 10th percentiles are represented by the first and ninth data points of the sorted P&L series, respectively. Table 3 presents the first nine P&L values from the sorted distribution as well as their corresponding percentiles.


Table 3
First nine values of the portfolio's profit and loss (P&L) distribution
One year forecast horizon

Order    P&L Values    Approximate Percentiles
1st      288           1st
2nd      279
3rd      258
4th      177           5th
5th      165
6th      160
7th      160
8th      160
9th      140           10th
Note that the 5th percentile in Table 3 does not match its counterpart in Table 2 because, there, the percentile is computed by interpolating between the fourth and fifth observations: 0.75 x 177 + 0.25 x 165 = 174. Nevertheless, we present Table 3 to show how percentile estimates are very sensitive to the number of data points used to construct the P&L distribution. More specifically, by computing the confidence intervals for the estimated percentiles, we find that there is a 20% chance that the estimated 5th percentile will be less than 160 or greater than 279; there is only a 67% chance that the 5th percentile lies between 165 and 258. The fact that it is difficult to get robust estimates of the percentiles in the above analysis is one reason for the differences between RiskMetrics VaR and VaR according to historical simulation. Another reason for the difference is related to the relative data-weighting schemes used by the two methodologies. In historical simulation, all occurrences have equal weights. Under the standard RiskMetrics approach, market movements are exponentially weighted.
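The interpolation behind Table 2's 5th percentile can be sketched as follows (the helper function is ours; loss magnitudes are as reported in Table 3):

```python
import math

def var_from_losses(losses, q, n=None):
    """q-th percentile VaR from the worst observed loss magnitudes,
    by linear interpolation between order statistics."""
    x = sorted(losses, reverse=True)             # worst loss first
    pos = q * (n if n is not None else len(x))   # e.g. 0.05 * 85 = 4.25
    lo = int(math.floor(pos))                    # 4th order statistic
    frac = pos - lo                              # 0.25
    return (1.0 - frac) * x[lo - 1] + frac * x[lo]

# Table 3's nine worst P&L magnitudes out of 85 observations:
losses = [288, 279, 258, 177, 165, 160, 160, 160, 140]
print(round(var_from_losses(losses, 0.05, n=85), 2))  # 0.75*177 + 0.25*165 = 174.0
```

Because the 4.25th order statistic falls between two observations that are 12 apart, small changes in the sample move the estimate noticeably, which is the sensitivity the article highlights.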


Appendix

Table A.1
Portfolio composition and VaR¹

                            Annual Volatility      Annual Value at Risk
Country          Weight     (1.65 Std. Dev.)       RiskMetrics    Hist. Simulation
OECD
Australia            35         12.361                 4.330           4.340
Austria              66         22.565                14.890          13.360
Belgium              32         16.592                 5.310           6.220
Denmark              59         15.850                 9.350          11.070
France               28         15.616                 4.370           5.280
Germany              37         16.847                 6.230           7.370
Greece               80         15.220                12.180          14.460
Holland              30         16.738                 5.020           5.880
Italy                82         11.190                 9.180          12.470
New Zealand          98         10.948                10.730           7.360
Portugal             28         15.208                 4.260           5.270
Spain                48         14.843                 7.120           8.120
Turkey               68         30.132                20.490          18.450
UK                   81         11.641                 9.430          14.170
Switzerland          41         19.810                 8.120           8.330
Latin Amer. Econ. System
Brazil                6          6.637                 0.400           0.680
Chile                 4         10.770                 0.430           0.520
Colombia             83         13.462                11.170           7.540
Costa Rica           53          4.353                 2.310           1.460
Dominican Rep.       77          8.974                 6.910          12.630
El Salvador          29          0.637                 0.180           0.260
Equador              51         12.132                 6.190           8.370
Guatemala            90         10.521                 9.470          10.870
Honduras             10         11.272                 1.130           1.840
Jamaica              64         15.298                 9.790           8.000
Mexico               99         33.727                33.390          53.560
Nicaragua             9          4.030                 0.360           0.160
Peru                 27         10.536                 2.840           1.870
Trinidad             12         16.731                 2.010           2.020
Uruguay              74          7.491                 5.540           8.320


Table A.1 (continued)
Portfolio composition and VaR¹

                            Annual Volatility      Annual Value at Risk
Country          Weight     (1.65 Std. Dev.)       RiskMetrics    Hist. Simulation
ASEAN
Malaysia             54          5.065                 2.730           2.250
Philippines          79          5.521                 4.360           9.850
Thailand             34          2.813                 0.960           0.950
Fiji                 84          5.651                 4.750           4.980
Hong Kong            79          0.346                 0.270           0.360
Reunion Island       41         15.614                 6.400           7.730
Southern African Dev. Comm.
Malawi               55         18.211                10.020          19.350
South Africa         85          7.953                 6.760           8.120
Zambia               11         19.112                 2.100           3.130
Zimbabwe             57          6.968                 3.970           5.100
Ivory Coast          53         15.607                 8.270           9.990
Uganda                2         18.700                 0.370           0.300
Others
China                 1          3.964                 0.040           0.020
Czech Repub          64         12.784                 8.180           8.300
Hungary              83         12.984                10.780          11.560
India                94         17.787                16.720          13.960
Romania              43         25.866                11.120           4.520
Russia               82         14.730                12.080          20.030

Total             2,502

¹ Countries are grouped by major economic groupings as defined in Political Handbook of the World: 1995-1996. New York: CSA Publishing, State University of New York, 1996. Countries not formally part of an economic group are listed in their respective geographic areas.


Table A.2
Testing for conditional normality¹
Normalized return series; 85 total observations

                                                                        Tail Prob. (%)⁸     Tail value⁹
Country         Skewness²  Kurtosis³   SW⁴     BL(18)⁵   Mean⁶  Std.Dev.⁷  <−1.65  >1.65    <−1.65  >1.65
OECD
Australia        0.314      3.397     0.958     6.105    0.120   0.943     2.900   5.700    2.586   2.306
Austria          0.369      0.673     0.961    13.517    0.085   1.037     8.600   5.700    1.975   2.499
Belgium          0.157      2.961     0.943    19.172    0.089   0.866     8.600   2.900    1.859   2.493
Denmark          0.650      4.399     0.932    17.510    0.077   0.903    11.400   2.900    1.915   2.576
France           0.068      3.557     0.950    17.642    0.063   0.969     8.600   2.900    2.140   2.852
Germany          0.096      4.453     0.937    18.064    0.085   0.872     5.700   2.900    1.821   2.703
Greece           0.098      2.259     0.940    15.678    0.154   0.943    11.400   2.900    1.971   2.658
Holland          0.067      4.567     0.939    18.360    0.086   0.865     5.700   2.900    1.834   2.671
Italy            0.480      0.019     0.984     7.661    0.101   0.763     —       2.900    1.853   —
New Zealand      1.746      7.829     0.963     8.808    0.068   1.075     2.900   2.900    2.739   3.633
Portugal         1.747      0.533     0.947    21.201    0.062   0.889    11.400   2.900    1.909   2.188
Spain            6.995      1.680     0.935    14.062    0.044   0.957     8.600   2.900    2.293   1.845
Turkey          30.566    118.749     0.865     2.408    0.761   1.162    11.400   —        2.944   —
UK               7.035      2.762     0.936    11.711    0.137   0.955     8.600   2.900    2.516   1.811
Switzerland      0.009      0.001     0.992     6.376    0.001   0.995     2.900   5.700    2.415   2.110
Latin Amer. Econ. System
Brazil           0.880      1.549     0.976    10.900    0.224   0.282     —       —        —       —
Chile            1.049      0.512     0.983    11.035    0.291   0.904     8.600   —        2.057   —
Colombia         2.010      4.231     0.927     4.041    0.536   1.289    11.400   2.900    3.305   2.958
Costa Rica       0.093     33.360     0.878    19.893    0.865   0.425     5.700   —        2.011   —
Dominican Rep.   0.026     41.011     0.872    10.796    0.050   1.183     5.700   5.700    3.053   3.013
El Salvador      2.708     49.717     0.672     9.626    0.014   0.504     2.900   —        1.776   —
Equador          0.002     50.097     0.852    10.463    0.085   1.162     5.700   5.700    3.053   3.013
Guatemala        0.026      1.946     0.959    12.276    0.280   1.036     8.600   5.700    2.365   2.237
Honduras        42.420     77.277     0.705     5.794    0.575   1.415    14.300   —        3.529   —
Jamaica         81.596    451.212     0.674     6.030    0.301   1.137     2.900   2.900    6.163   1.869
Mexico          13.71      30.237     0.930    15.156    0.158   0.597     2.900   2.500    —       —
Nicaragua        0.051      2.847     0.977   132.183    0.508   0.117     —       —        —       —
Peru           122.807    672.453     0.560     2.713    0.278   1.365     5.700   —        5.069   —
Trinidad         0.813      0.339     0.980    10.271    0.146   1.063     8.600  11.400    2.171   1.915
Uruguay          0.724      0.106     0.989     9.464    0.625   0.371     —       —        —       —


Table A.2 (continued)
Testing for conditional normality1
Normalized return series; 85 total observations

                                                                          Tail probability (%)8    Tail value9
Country         Skewness2  Kurtosis3    SW4   BL(18)5   Mean6  Std.Dev.7  < -1.65   > 1.65       < -1.65   > 1.65
ASEAN
  Malaysia         1.495     0.265    0.977   28.815    0.318    0.926       n/a       n/a          n/a      n/a
  Philippines      1.654     0.494    0.975   22.944    0.082    0.393       n/a       n/a          n/a      n/a
  Thailand         0.077     0.069    0.987   10.099    0.269    0.936      8.600     2.900       -2.184    1.955
Fiji               4.073     6.471    0.965    6.752    0.129    0.868      2.900     2.900       -3.102    1.737
Hong Kong          5.360    29.084    0.906   12.522    0.032    1.001      5.700     5.700       -2.233    2.726
Reunion Island     0.068     3.558    0.950   17.641    0.063    0.969      8.600     2.900       -2.140    2.853
Southern African Dev. Comm.
  Malawi           0.157     9.454    0.870   14.143    0.001    0.250       n/a       n/a          n/a      n/a
  South Africa    34.464    58.844    0.837    7.925    0.333    1.555       n/a       n/a          n/a      n/a
  Zambia          22.686    39.073    0.891    9.462    0.007    0.011       n/a       n/a          n/a      n/a
  Zimbabwe        20.831    29.234    0.895    9.142    0.487    0.762       n/a       n/a          n/a      n/a
Ivory Coast        0.068     3.564    0.950   17.643    0.064    0.970      8.600     2.900       -2.144    2.857
Uganda            40.815    80.115    0.767    9.629    0.203    1.399      8.600     2.900       -4.092    1.953
Others
  Czech Repub      0.167    12.516    0.937    2.761    0.108    0.824      5.700     2.900       -2.088    2.619
  Hungary          1.961     0.006    0.984    8.054    0.342    0.741       n/a       n/a          n/a      n/a
  India             n/a       n/a      n/a      n/a      n/a      n/a        n/a       n/a          n/a      n/a
  Romania           n/a       n/a      n/a      n/a      n/a      n/a        n/a       n/a          n/a      n/a
  Russia            n/a       n/a      n/a      n/a      n/a      n/a        n/a       n/a          n/a      n/a
  China            5.633     3.622    0.950    9.683    0.462    1.336     17.100     5.700       -2.715    1.980
Portfolio         21.010    11.057    0.951   11.940    0.340    0.926       n/a       n/a          n/a      n/a
Normal             0.000     3.000    1.000  <18.000    0.000    1.000      5.000     5.000       -2.067    2.067

1 Countries are grouped by major economic groupings as defined in Political Handbook of the World: 1995-1996. New York: CSA Publishing, State University of New York, 1996. Countries not formally part of an economic group are listed in their respective geographic areas.
2 If returns are conditionally normal, the skewness value is zero.
3 If returns are conditionally normal, the kurtosis value is three.
4 If returns are conditionally normal, this value is one. (SW stands for the Shapiro-Wilk test.)
5 If there is autocorrelation in the time series, this value is greater than 18.31. (BL stands for the Box-Ljung test statistic.)
6 Sample mean of the return series.
7 Sample standard deviation of the normalized return series.
8 Tail probabilities give the observed probabilities of normalized returns falling below -1.65 and above +1.65. Under conditional normality, these values are 5%.
9 Tail values give the observed average value of normalized returns falling below -1.65 and above +1.65. Under conditional normality, these values are -2.067 and +2.067, respectively.


Estimating index tracking error for equity portfolios


Alan Laubsch
Morgan Guaranty Trust Company
Risk Management Advisory
(1-212) 648-8369
laubsch_alan@jpmorgan.com

In the RiskMetrics Technical Document, we outlined a single-index equity VaR approach for estimating the systematic market risk of equity portfolios. In this paper, we discuss the principal variables influencing the process of portfolio diversification and suggest an approach to quantifying expected tracking error to market indices.1

The current RiskMetrics framework


The market risk of a stock, VaR_S, is defined as the market value of the investment in that stock, MV_S, multiplied by the price volatility estimate of that stock's returns, 1.65\sigma_{R_S}:

[1]   VaR_S = MV_S \times 1.65\sigma_{R_S}

Since RiskMetrics does not publish volatility estimates for the universe of international stocks, equity positions are mapped to their respective local indices. This methodology maps the return of a stock to the return of a stock (market) index in order to forecast the correlation structure between securities. Let the return of a stock, R_S, be defined as

[2]   R_S = \beta_S R_M + \alpha_S + \epsilon_S

where

R_M = the return of a stock (market) index
\beta_S = a measure of the expected change in R_S given a change in R_M (beta)
\alpha_S = the expected value of the stock's return that is firm-specific
\epsilon_S = the random element of the firm-specific return, with

E[\epsilon_S] = 0   and   E[\epsilon_S^2] = \sigma^2_{\epsilon_S}

As such, the returns of assets are explained by market-specific2 (\beta_S R_M) and stock-specific (\alpha_S + \epsilon_S) components. Similarly, the total variance of a stock is a function of the market- and firm-specific variances:

[3]   \sigma^2_{R_S} = \beta_S^2 \sigma^2_{R_M} + \sigma^2_{\epsilon_S}

Since the firm-specific component can be diversified away by increasing the number of different equities that comprise a given portfolio, the market risk of the stock can be expressed as a function of the stock index alone:

[4]   \sigma_{R_S} = \beta_S \sigma_{R_M}

1 This paper is an addendum to the RiskMetrics Technical Document (3rd ed.), New York, May 1995, Section C: Mapping to describe positions, pp. 107-156.

2 A number of equity analytics firms provide estimates of stock betas across a large number of markets.


Substituting Eq. [4] into Eq. [1] yields

[5]   VaR_S = MV_S \beta_S \times 1.65\sigma_{R_M}

where

1.65\sigma_{R_M} = the RiskMetrics volatility estimate for the appropriate stock index.

Using this framework, the VaR of a portfolio of stocks becomes:

[6]   VaR_p = 1.65\sigma_{R_M} \sum_{i=1}^{N} MV_{S_i} \beta_{S_i}
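Eq. [6] can be sketched in a few lines of code. The positions, betas, and the 3.46% monthly index volatility below are illustrative assumptions, not RiskMetrics data:

```python
# Minimal sketch of Eq. [6]: systematic portfolio VaR as the index VaR
# scaled by the beta-weighted sum of market values.
# All inputs are hypothetical.

def systematic_var(positions, index_vol, scale=1.65):
    """positions: list of (market_value, beta) tuples; index_vol: sigma(R_M)."""
    return scale * index_vol * sum(mv * beta for mv, beta in positions)

# four hypothetical stocks: market values in USD MM, and betas
portfolio = [(25.0, 0.5), (25.0, 1.5), (25.0, 1.0), (25.0, 0.8)]
var_p = systematic_var(portfolio, 0.0346)  # 3.46% monthly index volatility
print(round(var_p, 2))                     # roughly USD 5.4MM
```

Because betas enter linearly, doubling any one position's market value or beta raises the portfolio's systematic VaR by the same beta-weighted amount.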

The process of portfolio diversification


For well-diversified equity portfolios, the current RiskMetrics technique should yield reasonable risk estimates. For example, as a rule of thumb, portfolios of 25 or more stocks in the U.S. equity market are generally well diversified (although the level of diversification will vary depending on whether there are significant concentrations, for example in a particular industry sector).
Chart 1 shows the effect of diversification for U.S. equities, based on a study of monthly volatilities for all stocks listed on the New York Stock Exchange.3 Total risk rapidly declines and approaches the index (or fully diversified) volatility level.
Chart 1
Portfolio diversification effect
[Chart: portfolio standard deviation (0.0 to 3.0, where 1.0 represents fully diversified market risk) plotted against the number of assets in the portfolio (1 to 700); total risk declines rapidly and levels off near the index volatility.]

3 Edwin J. Elton and Martin J. Gruber, Modern Portfolio Theory and Investment Analysis (4th ed.), New York: John Wiley & Sons, Inc., 1991, p. 33.


In Modern Portfolio Theory and Investment Analysis, Elton and Gruber derive a formula that illustrates
the process of diversification with respect to the number of different stocks in a portfolio.
The variance of a portfolio of N stocks is

[7]   \sigma^2_P = \sum_{J=1}^{N} X_J^2 \sigma_J^2 + \sum_{J=1}^{N} \sum_{K=1, K \neq J}^{N} X_J X_K \sigma_{JK}

where

X_J = proportion of the portfolio held in stock J
\sigma_J^2 = variance of returns for stock J
\sigma_{JK} = covariance of returns between stocks J and K

For an equally weighted portfolio of N assets (i.e., the proportion held in each security is X_J = 1/N), the formula for portfolio variance becomes

\sigma^2_P = (1/N) \bar{\sigma}_J^2 + ((N-1)/N) \bar{\sigma}_{JK}

where

\sigma^2_P = variance of the stock portfolio's returns
N = number of securities in the portfolio
\bar{\sigma}_J^2 = average variance of returns for individual securities
\bar{\sigma}_{JK} = average covariance of returns (approximately the diversified index variance)

To better illustrate the process of diversification, we can rearrange the terms in the following manner:

\sigma^2_P = (1/N) (\bar{\sigma}_J^2 - \bar{\sigma}_{JK}) + \bar{\sigma}_{JK}

The first term of the equation (1/N times the difference between the average variance of individual securities and the average covariance) corresponds to the residual, firm-specific risk of a portfolio. The second term (the average covariance) represents the diversified market risk component. Now we can see that as the number of stocks in a portfolio increases, the firm-specific component declines to zero, and we are left only with undiversifiable market risk. Therefore, the variance of a broad market index, such as the S&P 500, should approximate the average covariance of stock returns (\bar{\sigma}_{JK} \approx \sigma^2_{R_M}). Substituting the market index variance for the average covariance yields

[8]   \sigma^2_P = (1/N) (\bar{\sigma}_J^2 - \sigma^2_{R_M}) + \sigma^2_{R_M}

RiskMetrics Monitor
Second quarter 1996
page 37

Estimating index tracking error for equity portfolios (continued)

Applications
Elton and Gruber's derivation clarifies the process of diversification and highlights the key variables of portfolio risk: (a) average variance, (b) average covariance, and (c) the number of elements in a portfolio. Using these key variables, we can compare the effect of diversification across different equity markets. For example, Elton and Gruber show that significantly more risk is diversifiable in the Netherlands and Belgium (76% and 80%, respectively) than in the more correlated equity markets of Germany and Switzerland (56%). In general, significant risk reduction through diversification is possible when the average covariance of a population is small compared to the average variance.
Elton and Gruber's derivation can also be applied to estimate the tracking error to a broad market index when a portfolio is not fully diversified. For example, we can calculate a Diversification Scaling Factor (i.e., the ratio of expected total risk to systematic risk) to estimate the incremental risk given the number of elements within a portfolio.
[9]   Diversification Scaling Factor = \sigma_P / \sigma_{R_M} = \sqrt{ (\bar{\sigma}_J^2 - \sigma^2_{R_M}) / (N \sigma^2_{R_M}) + 1 }

Using this formula, RiskMetrics users could adjust their equity VaR estimates upward to reflect firm-specific risk:

Adjusted VaR_p = Diversification Scaling Factor \times VaR_p

The potential applications of this technique are broad. For example, the diversification scaling factor could be used as a back-of-the-envelope estimate of how much residual risk to expect in a stock portfolio, given the number of different stocks held. The advantage of Elton and Gruber's derivation lies in its simplicity: using basic variables (the number of securities in a portfolio, and the proportion of firm-specific to diversified risk), one can get an estimate of index tracking error.
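The scaling factor and the VaR adjustment can be sketched as follows, under the derivation's assumptions (randomly selected, equally weighted stocks). The function names and the numeric inputs are ours, chosen to mirror the article's illustrative figures:

```python
# Sketch of Eq. [9] and the adjusted-VaR rule. All inputs are hypothetical.
import math

def diversification_scaling_factor(n, avg_stock_vol, index_vol):
    """Ratio of expected total risk to systematic risk for n equal positions."""
    return math.sqrt((avg_stock_vol**2 - index_vol**2) / (n * index_vol**2) + 1.0)

dsf = diversification_scaling_factor(4, 0.089, 0.0346)  # 4-stock portfolio
adjusted_var = dsf * 5.43   # scale a systematic VaR of USD 5.43MM upward
print(round(dsf, 2))        # roughly 1.55, i.e. 155% of systematic risk
print(round(adjusted_var, 2))
```

As N grows, the factor falls toward 1.0 and the adjustment disappears, which matches the diversification behavior shown in Chart 1.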

Practical considerations
To estimate the average volatility of stocks within an index, one can take either an evenly weighted or a market-cap-weighted volatility of each component security.4 Depending on client demand and resource availability, J.P. Morgan could potentially include this volatility estimate in future releases of the RiskMetrics data set. Its integration into the RiskMetrics data set would be relatively straightforward because the overall correlation matrix would be unaffected (we assume that firm-specific risk is independent).

4 Component securities of broad market indices are available from a number of sources, for example, Bloomberg or BARRA.


Example 1
Consider a portfolio consisting of four U.S. stocks with market values of USD 25MM each, a one-month standard deviation for the S&P 500 Index of 3.46%, and an average standard deviation of stocks within the S&P 500 of 8.9%.

Example summary
Average equity volatility (1.65 std. dev.):      14.69%
Diversified index volatility (1.65 std. dev.):    5.72%
Number of securities:                             4

Security    Market Value    Beta    Systematic Risk
Stock A        $25.00        0.5        $0.71
Stock B        $25.00        1.5        $2.14
Stock C        $25.00        1.0        $1.43
Stock D        $25.00        0.8        $1.14

Total systematic equity VaR (1)       $5.43
Diversification scaling factor (2)     155%
Adjusted equity VaR (3)               $8.41

Each parameter is calculated as shown in its respective numbered section below. Total systematic equity VaR assumes a perfect correlation of 1 and is therefore additive, i.e., equal to the sum of the values in the Systematic Risk column.

(1) Total systematic equity VaR

VaR_p = 1.65\sigma_{R_M} \sum_{i=1}^{N} MV_{S_i} \beta_{S_i}
      = 1.65\sigma_{S&P500} [ MV_A \beta_A + MV_B \beta_B + MV_C \beta_C + MV_D \beta_D ]
      = 1.65 (3.46%) (25MM) [ 0.5 + 1.5 + 1.0 + 0.8 ]
      = 5.72% (25MM) (3.8)
      = USD 5.43MM


(2) Diversification scaling factor

\sigma_p / \sigma_{S&P500} = \sqrt{ (\bar{\sigma}_J^2 - \sigma^2_{S&P500}) / (N \sigma^2_{S&P500}) + 1 }
                           = \sqrt{ ((8.9%)^2 - (3.46%)^2) / (4 (3.46%)^2) + 1 }
                           = 155%

(3) Adjusted equity VaR

Finally, adjust the systematic risk to reflect the expected total risk in the portfolio:

Adjusted equity VaR_p = Diversification scaling factor \times Total systematic equity VaR
                      = 155% (5.43MM)
                      = USD 8.41MM

Note that Elton and Gruber's derivation assumes that stocks are randomly selected from a population and that portfolios are evenly distributed. The technique becomes less accurate for asymmetric portfolios or when there are significant concentrations. For portfolios with significant industry concentrations, one could apply a sub-index that more closely reflects the portfolio composition (for example, an oil stock index or a bank stock index). Alternatively, one could depart from the one-factor CAPM approach and use a multi-factor approach for equity VaR.

Total equity VaR for asymmetrically distributed portfolios


For a more precise calculation of portfolio volatility, one should consider the exact weighting of assets. The following shows how to calculate VaR for a single-market portfolio, assuming an average variance for equities:

VaR = \sqrt{ (market risk)^2 + (residual risk)^2 }

where

market risk = 1.65\sigma_{R_M} \sum_{i=1}^{N} MV_{S_i} \beta_{S_i}

residual risk = 1.65 \sqrt{ \sum_{i=1}^{N} MV_{S_i}^2 ( \bar{\sigma}_J^2 - \beta_{S_i}^2 \sigma^2_{R_M} ) }

Market risk for individual stocks is aggregated linearly (correlation = 1), while residual risk is aggregated assuming independence (i.e., as the square root of the sum of the squares).
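These aggregation rules can be sketched as one function. The position data and volatilities below are illustrative assumptions mirroring the figures used in this article:

```python
# Sketch of the single-market total VaR rule: systematic risk adds linearly,
# residual risk adds in quadrature, and the two components are combined as
# sqrt(m^2 + r^2). All inputs are hypothetical.
import math

def total_equity_var(positions, avg_stock_vol, index_vol, scale=1.65):
    """positions: list of (market_value, beta) tuples."""
    market = scale * index_vol * sum(mv * b for mv, b in positions)
    residual = scale * math.sqrt(sum(
        mv**2 * (avg_stock_vol**2 - (b * index_vol)**2) for mv, b in positions))
    return math.sqrt(market**2 + residual**2)

positions = [(10.0, 0.5), (20.0, 1.5), (30.0, 1.0), (40.0, 0.8)]  # USD MM
print(round(total_equity_var(positions, 0.089, 0.0346), 2))  # 9.28 (USD MM)
```

Because residual risks are assumed independent, a position's residual contribution grows with the square of its market value, which is why concentrated positions dominate the residual term.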


Example 2
Consider the same parameters outlined in Example 1, except that we now hold different proportions of the same stocks.

Example summary
Average equity volatility (1.65 std. dev.):      14.69%
Diversified index volatility (1.65 std. dev.):    5.72%
Number of securities:                             4

Security    Market Value    Beta    Systematic Risk    Residual Risk
Stock A        $10.00        0.5        $0.29              $1.44
Stock B        $20.00        1.5        $1.71              $2.39
Stock C        $30.00        1.0        $1.71              $4.06
Stock D        $40.00        0.8        $1.83              $5.58

Total systematic equity VaR (1)    $5.54
Total residual risk (2)            $7.44
Total equity VaR (3)               $9.28

Each parameter is calculated as shown in its respective numbered section below. Total systematic equity VaR assumes a perfect correlation of 1 and is therefore additive, i.e., equal to the sum of the values in the Systematic Risk column. Total residual risk assumes a correlation of zero and is therefore aggregated as the square root of the sum of the squares.

(1) Total systematic equity VaR

Systematic VaR_p = 1.65\sigma_{R_M} \sum_{i=1}^{N} MV_{S_i} \beta_{S_i}
                 = 1.65\sigma_{S&P500} [ MV_A \beta_A + MV_B \beta_B + MV_C \beta_C + MV_D \beta_D ]
                 = 1.65 (3.46%) [ (10MM)(0.5) + (20MM)(1.5) + (30MM)(1.0) + (40MM)(0.8) ]
                 = 5.72% (97MM)
                 = USD 5.54MM


(2) Total residual risk

Total residual risk = 1.65 \sqrt{ \sum_{i=1}^{N} MV_{S_i}^2 ( \bar{\sigma}_J^2 - \beta_{S_i}^2 \sigma^2_{R_M} ) }

  = 1.65 \sqrt{ MV_A^2 (\bar{\sigma}_J^2 - \beta_A^2 \sigma^2_{R_M}) + MV_B^2 (\bar{\sigma}_J^2 - \beta_B^2 \sigma^2_{R_M}) + MV_C^2 (\bar{\sigma}_J^2 - \beta_C^2 \sigma^2_{R_M}) + MV_D^2 (\bar{\sigma}_J^2 - \beta_D^2 \sigma^2_{R_M}) }

  = 1.65 \sqrt{ 10^2 [8.9^2 - (0.5 \times 3.46)^2] + 20^2 [8.9^2 - (1.5 \times 3.46)^2] + 30^2 [8.9^2 - (1.0 \times 3.46)^2] + 40^2 [8.9^2 - (0.8 \times 3.46)^2] } %

  = USD 7.44MM

(3) Total equity VaR

Total equity VaR_p = \sqrt{ (market risk)^2 + (total residual risk)^2 }
                   = \sqrt{ (5.54)^2 + (7.44)^2 }
                   = USD 9.28MM

Multi-Market Portfolio
VaR for a portfolio consisting of equities from several different markets follows the same methodology of aggregating market and residual risk. The difference is that the correlations between the different market indices, as well as FX risk, must be incorporated.
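The multi-market aggregation described above can be sketched as follows. The two markets, their systematic VaRs, the 0.5 index correlation, and the residual figure are purely hypothetical, and FX risk is omitted for brevity:

```python
# Minimal sketch of the multi-market extension: per-market systematic VaRs
# are combined through the correlation between the market indices, and
# independent residual risk is added in quadrature. All figures are
# hypothetical; FX risk is not modeled here.
import math

market_vars = [5.5, 3.2]           # systematic VaR per market, USD MM
corr = [[1.0, 0.5], [0.5, 1.0]]    # correlation between the two market indices
residual = 4.0                     # residual risk, independent of both markets

systematic_sq = sum(market_vars[i] * corr[i][j] * market_vars[j]
                    for i in range(2) for j in range(2))
total_var = math.sqrt(systematic_sq + residual**2)
print(round(total_var, 2))
```

With a correlation below 1, the combined systematic term is smaller than the simple sum of the two market VaRs, which is the diversification benefit of holding equities across markets.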


Previous editions of the RiskMetrics Monitor


1st Quarter 1996: January 23, 1996

Basel Committee revises market risk supplement to 1988 Capital Accord.

A look at two methodologies that use a basic delta-gamma parametric VaR precept but achieve
results similar to simulation.

4th Quarter 1995: October 12, 1995


Exploring alternative volatility forecasting methods for the standard RiskMetrics monthly
horizon.
How accurate are the risk estimates in portfolios that contain Treasury bills proxied by LIBOR
data?
A solution to the standard cashflow mapping algorithm, which sometimes leads to imaginary
roots.

3rd Quarter 1995: July 5, 1995


Mapping and estimating VaR for interest rate swaps.
Adjusting correlations obtained from nonsynchronous data.


RiskMetrics products

Introduction to RiskMetrics: An eight-page document that broadly describes the RiskMetrics methodology for measuring market risks.

RiskMetrics Directory: Available exclusively on-line, a list of consulting practices and software products that incorporate the RiskMetrics methodology and data sets.

RiskMetrics Monitor: A quarterly publication that discusses broad market risk management issues and statistical questions as well as new software products built by third-party vendors to support RiskMetrics.

RiskMetrics data sets: Two sets of daily estimates of future volatilities and correlations of approximately 420 rates and prices, with each data set totaling 88,000+ data points. One set is for computing short-term trading risks, the other for medium-term investment risks. The data sets currently cover foreign exchange, government bond, swap, and equity markets in up to 22 currencies. Eleven commodities are also included. A RiskMetrics Regulatory data set, which incorporates the latest recommendations from the Basel Committee on the use of internal models to measure market risk, is now available.

Bond Index Cash Flow Maps: A monthly insert into the Government Bond Index Monitor outlining synthetic cash flow maps of J.P. Morgan's bond indices.

Trouble accessing the Internet? If you encounter any difficulties in either accessing the J.P. Morgan home page at http://www.jpmorgan.com or in downloading the RiskMetrics data files, you can call 1-800-JPM-INET in the United States.

Worldwide RiskMetrics contacts

For more information about RiskMetrics, please contact the author or any other person listed below.

North America
New York        Jacques Longerstaey (1-212) 648-4936      longerstaey_j@jpmorgan.com
Chicago         Michael Moore (1-312) 541-3511            moore_mike@jpmorgan.com
Mexico          Beatrice Sibblies (52-5) 540-9554         sibblies_beatrice@jpmorgan.com
San Francisco   Paul Schoffelen (1-415) 954-3240          schoffelen_paul@jpmorgan.com
Toronto         Dawn Desjardins (1-416) 981-9264          desjardins_dawn@jpmorgan.com

Europe
London          Benny Cheung (44-71) 325-4210             cheung_benny@jpmorgan.com
Brussels        Isabelle Vanderstricht (32-2) 508-8060    vanderstricht_i@jpmorgan.com
Paris           Ciaran O'Hagan (33-1) 4015-4058           ohagan_c@jpmorgan.com
Frankfurt       Robert Bierich (49-69) 712-4331           bierich_r@jpmorgan.com
Milan           Roberto Fumagalli (39-2) 774-4230         fumagalli_r@jpmorgan.com
Madrid          Jose Luis Albert (34-1) 577-1722          albert_j-l@jpmorgan.com
Zurich          Viktor Tschirky (41-1) 206-8686           tschirky_v@jpmorgan.com

Asia
Singapore       Michael Wilson (65) 326-9901              wilson_mike@jpmorgan.com
Tokyo           Yuri Nagai (81-3) 5573-1168               nagai_y@jpmorgan.com
Hong Kong       Martin Matsui (85-2) 973-5480             matsui_martin@jpmorgan.com
Australia       Debra Robertson (61-2) 551-6200           robertson_d@jpmorgan.com

RiskMetrics is based on, but differs significantly from, the market risk management systems developed by J.P. Morgan for its own use. J.P. Morgan does not warrant any results
obtained from use of the RiskMetrics data, methodology, documentation or any information derived from the data (collectively the Data) and does not guarantee its sequence,
timeliness, accuracy, completeness or continued availability. The Data is calculated on the basis of historical observations and should not be relied upon to predict future market
movements. The Data is meant to be used with systems developed by third parties. J.P. Morgan does not guarantee the accuracy or quality of such systems.
Additional information is available upon request. Information herein is believed to be reliable, but J.P. Morgan does not warrant its completeness or accuracy. Opinions and estimates constitute our judgement and are
subject to change without notice. Past performance is not indicative of future results. This material is not intended as an offer or solicitation for the purchase or sale of any financial instrument. J.P. Morgan may hold a
position or act as market maker in the financial instruments of any issuer discussed herein or act as advisor or lender to such issuer. Morgan Guaranty Trust Company is a member of FDIC and SFA. Copyright 1996 J.P.
Morgan & Co. Incorporated. Clients should contact analysts at and execute transactions through a J.P. Morgan entity in their home jurisdiction unless governing law permits otherwise.
