
VALUE-AT-RISK

Value-at-Risk (VaR) measures the worst expected loss under normal market conditions over a specific time interval at a given confidence level. As one of our references states, VaR answers the question: how much can I lose with x% probability over a pre-set horizon? Another way of expressing this is that VaR is a quantile of the distribution of potential losses that can occur within a given portfolio during a specified time period. The basic time period T and the confidence level (the quantile) q are the two major parameters, and they should be chosen in a way appropriate to the overall goal of risk measurement. The time horizon can differ from a few hours for an active trading desk to a year for a pension fund. When the primary goal is to satisfy external regulatory requirements, such as bank capital requirements, the quantile is typically very small (for example, 1% of worst outcomes). However, for an internal risk management model used by a company to control its risk exposure, the typical number is around 5% (see the references for more details). In the jargon of VaR, suppose that a portfolio manager has a daily VaR equal to $1 million at 1%. This means that there is only one chance in 100 that a daily loss bigger than $1 million occurs under normal market conditions.

Value-at-Risk is a widely used measure of financial risk, which provides a way of quantifying and managing the risk of a portfolio. VaR was conceived in 1993, partly in response to various financial disasters. Work on its development had started in 1988, after central banks wanted a methodology for setting minimum capital requirements in banks to protect against credit risk in trading. Banks began adopting it around 1993-1995 as a key component of the management of market risk for many financial institutions [2]. It is used as an internal risk management tool, and has also been chosen by the Basel Committee on Banking Supervision as the international standard for external regulatory purposes [3]. In recent years, non-bank energy traders and end-users have also begun to use VaR, and the majority of major oil companies and traders now use the VaR method for risk measurement [4].

There are many approaches to measuring VaR. Any valuation model for computing VaR simply represents a possible reality or a possible outcome, based on certain probability and confidence parameters. VaR measures the worst expected loss over a given time horizon with a certain confidence or probability level. VaR allows management to see the probable risk their company is taking or, in the case of companies hedging, it can also illustrate the reduction in possible financial exposure. VaR can summarise all the market risks of the entire portfolio of a bank or a firm as one number, for example in Slovak crowns (SKK). The key use of VaR is for assessing market risk (exposure to losses in the market place through physical and derivative positions), although VaR is being used more frequently to assess credit risk (credit VaR modelling). However, VaR does not give a consistent method for measuring risk, as different VaR models will come up with different VaR results. It should also be noted that VaR only measures quantifiable risks; it cannot measure risks that cannot be quantified.
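To make the quantile definition concrete, here is a minimal sketch in Python; the portfolio and its profit-and-loss distribution are hypothetical illustrations, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical one-day P&L of a portfolio, in millions.
# In practice this distribution would come from one of the methods
# described later (variance-covariance, historical or Monte Carlo).
pnl = rng.normal(loc=0.0, scale=1.5, size=100_000)

confidence = 0.99                      # i.e. the 1% worst outcomes
# VaR is the loss exceeded with probability (1 - confidence): the
# (1 - confidence) quantile of P&L, reported as a positive number.
var_99 = -np.quantile(pnl, 1 - confidence)

print(f"One-day 99% VaR: {var_99:.2f} million")
```

Under these assumptions the manager could say there is one chance in 100 that the daily loss exceeds the printed figure, exactly as in the $1 million example above.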

DIAGRAM TO SHOW RISK MEASUREMENT METHODS

What is Value at Risk?

In its most general form, Value at Risk measures the potential loss in value of a risky asset or portfolio over a defined period for a given confidence interval. Thus, if the VaR on an asset is $100 million at a one-week, 95% confidence level, there is only a 5% chance that the value of the asset will drop more than $100 million over any given week. In its adapted form, the measure is sometimes defined more narrowly as the possible loss in value from normal market risk as opposed to all risk, requiring that we draw distinctions between normal and abnormal risk as well as between market and non-market risk. While Value at Risk can be used by any entity to measure its risk exposure, it is used most often by commercial and investment banks to capture the potential loss in value of their traded portfolios from adverse market movements over a specified period; this can then be compared to their available capital and cash reserves to ensure that the losses can be covered without putting the firms at risk.

Key Aspects of Value at Risk

To estimate the probability of the loss, with a confidence interval, we need to define the probability distributions of individual risks, the correlation across these risks and the effect of such risks on value. In fact, simulations are widely used to measure the VaR for asset portfolios. The focus in VaR is clearly on downside risk and potential losses. Its use in banks reflects their fear of a liquidity crisis, where a low-probability catastrophic occurrence creates a loss that wipes out the capital and creates a client exodus. The demise of Long Term Capital Management, the investment fund with top-pedigree Wall Street traders and Nobel Prize winners, was a trigger in the widespread acceptance of VaR. There are three key elements of VaR: a specified level of loss in value, a fixed time period over which risk is assessed, and a confidence interval. The VaR can be specified for an individual asset, a portfolio of assets or for an entire firm. While the VaR at investment banks is specified in terms of market risks (interest rate changes, equity market volatility and economic growth), there is no reason why the risks cannot be defined more broadly or narrowly in specific contexts. Thus, we could compute the VaR for a large investment project for a firm in terms of competitive and firm-specific risks, and the VaR for a gold mining company in terms of gold price risk.

A Short History of VaR

While the term Value at Risk was not widely used prior to the mid 1990s, the origins of the measure lie further back in time. The mathematics that underlie VaR were largely developed in the context of portfolio theory by Harry Markowitz and others, though their efforts were directed towards a different end: devising optimal portfolios for equity investors. In particular, the focus on market risks and the effects of the co-movements in these risks are central to how VaR is computed. The impetus for the use of VaR measures, though, came from the crises that beset financial service firms over time and the regulatory responses to these crises. The first regulatory capital requirements for banks were enacted in the aftermath of the Great Depression and the bank failures of the era, when the Securities Exchange Act established the Securities and Exchange Commission (SEC) and required banks to keep their borrowings below 2000% of their equity capital. In the decades thereafter, banks devised risk measures and control devices to ensure that they met these capital requirements. With the increased risk created by the advent of derivative markets and floating exchange rates in the early 1970s, capital requirements were refined and expanded in the SEC's Uniform Net Capital Rule (UNCR), promulgated in 1975, which categorized the financial assets that banks held into twelve classes, based upon risk, and required different capital levels for each, ranging from 0% for short-term treasuries to 30% for equities. Banks were required to report on their capital calculations in quarterly statements titled Financial and Operating Combined Uniform Single (FOCUS) reports. The first regulatory measures that evoke Value at Risk, though, were initiated in 1980, when the SEC tied the capital requirements of financial service firms to the losses that would be incurred, with 95% confidence over a thirty-day interval, in different security classes; historical returns were used to compute these potential losses. Although the measures were described as haircuts and not as Value or Capital at Risk, it was clear that the SEC was requiring financial service firms to embark on the process of estimating one-month 95% VaRs and to hold enough capital to cover the potential losses. At about the same time, the trading portfolios of investment and commercial banks were becoming larger and more volatile, creating a need for more sophisticated and timely risk control measures. Ken Garbade at Bankers Trust, in internal documents, presented sophisticated measures of Value at Risk in 1986 for the firm's fixed income portfolios, based upon the covariance in yields on bonds of different maturities. By the early 1990s, many financial service firms had developed rudimentary measures of Value at Risk, with wide variations in how it was measured. In the aftermath of numerous disastrous losses associated with the use of derivatives and leverage between 1993 and 1995, culminating with the failure of Barings, the British investment bank, as a result of unauthorized trading in Nikkei futures and options by Nick Leeson, a young trader in Singapore, firms were ready for more comprehensive risk measures. In 1995, J.P. Morgan provided public access to data on the variances of and covariances across various security and asset classes, which it had used internally for almost a decade to manage risk, and allowed software makers to develop software to measure risk.
It titled the service RiskMetrics and used the term Value at Risk to describe the risk measure that emerged from the data. The measure found a ready audience with commercial and investment banks, and the regulatory authorities overseeing them, who warmed to its intuitive appeal. In the last decade, VaR has become the established measure of risk exposure in financial service firms and has even begun to find acceptance in non-financial service firms.

Measuring Value at Risk

There are three basic approaches that are used to compute Value at Risk, though there are numerous variations within each approach. The measure can be computed analytically by making assumptions about return distributions for market risks, and by using the variances in and covariances across these risks. It can also be estimated by running hypothetical portfolios through historical data or from Monte Carlo simulations. In this section, we describe and compare the approaches.

Variance-Covariance Method

Since Value at Risk measures the probability that the value of an asset or portfolio will drop below a specified value in a particular time period, it should be relatively simple to compute if we can derive a probability distribution of potential values. That is basically what we do in the variance-covariance method, an approach that has the benefit of simplicity but is limited by the difficulties associated with deriving probability distributions. This method assumes that stock returns are normally distributed. In other words, it requires that we estimate only two factors - an expected (or average) return and a standard deviation - which allow us to plot a normal distribution curve. Here we plot the normal curve against the same actual return data.

The idea behind the variance-covariance method is similar to the idea behind the historical method - except that we use the familiar normal curve instead of actual data. The advantage of the normal curve is that we automatically know where the worst 5% and 1% lie on the curve. They are a function of our desired confidence level and the standard deviation (σ).

The blue curve above is based on the actual daily standard deviation of the QQQ, which is 2.64%. The average daily return happened to be fairly close to zero, so we will assume an average return of zero for illustrative purposes. Plugging the actual standard deviation into the formulas above: with 95% confidence, the worst expected daily loss is about 1.65 × 2.64% ≈ 4.36%, and with 99% confidence it is about 2.33 × 2.64% ≈ 6.15%.
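The calculation can be reproduced in a few lines. Here is a minimal sketch of the parametric formula VaR = z·σ using the QQQ figures quoted above; note that exact normal quantiles are used, so the results differ slightly from the rounded multipliers 1.65 and 2.33:

```python
from scipy.stats import norm

sigma = 0.0264   # actual daily standard deviation of the QQQ, as above
mu = 0.0         # average daily return assumed to be zero, as above

for confidence in (0.95, 0.99):
    z = norm.ppf(confidence)   # exact quantiles: 1.645 and 2.326
    var = z * sigma - mu       # worst expected one-day loss
    print(f"{confidence:.0%} one-day VaR: {var:.2%}")
```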

Historical Simulation
Historical simulations represent the simplest way of estimating the Value at Risk for many portfolios. In this approach, the VaR for a portfolio is estimated by creating a hypothetical time series of returns on that portfolio, obtained by running the portfolio through actual historical data and computing the changes that would have occurred in each period. Consider first the historical simulation method for a simple stock. The goal of the historical-simulation method is to find a large number of possible scenarios for tomorrow's asset price given today's price. These scenarios will build up the distribution, so that any risk measure can be computed. Since, for the most part, these scenarios are computed artificially, rather than observed on the market, we will name the set of such scenarios the simulated distribution. We will begin with the computation of the simulated distribution for a simple stock and then gradually extend it to more complex assets. For simplicity, it will be assumed that the horizon period for the evaluation of risk is one day. The computation of the scenario distribution for a simple stock is straightforward and is built on the stationarity assumption: the return of the stock from today to tomorrow is assumed to be distributed like the historical frequency of the daily returns for the same stock during a given past period, for example, two years. However, it is possible to consider a shorter reference period, such as one year, or a longer period, such as five years. Experience has shown that using two years of daily data is optimal: the period is long enough to have a statistically relevant number of samples (about 500) and short enough to react adequately to market changes.
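A minimal sketch of this scenario construction for one stock, assuming a two-year (roughly 500-observation) series of daily closing prices is available as a NumPy array; the price series below is synthetic, purely for illustration:

```python
import numpy as np

def historical_var(prices: np.ndarray, confidence: float = 0.99) -> float:
    """One-day historical-simulation VaR for a single stock,
    as a fraction of today's price."""
    # Daily returns observed over the reference period (e.g. two years).
    returns = prices[1:] / prices[:-1] - 1.0
    today = prices[-1]
    # Stationarity assumption: tomorrow's return is distributed like the
    # historical daily returns, so each observed return yields one
    # scenario for tomorrow's price.
    scenarios = today * (1.0 + returns)
    pnl = scenarios - today            # the simulated distribution of P&L
    return -np.quantile(pnl, 1 - confidence) / today

# Illustration on synthetic data (a real application would use
# about 500 actual daily closes).
rng = np.random.default_rng(0)
prices = 100 * np.cumprod(1 + rng.normal(0.0005, 0.02, 500))
print(f"One-day 99% VaR: {historical_var(prices):.2%} of position value")
```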

Historical simulation method


The historical simulation method provides a straightforward implementation of full valuation (Figure 3). The simulated market states are produced by adding to the base case the period-to-period changes in market variables in the specified historical time series. The key assumption in historical simulation is that the set of possible future scenarios is fully represented by what happened over a specific historical window. This methodology involves collecting the set of risk factor changes over a historical window: for example, daily changes over the last five years. The set of scenarios thus obtained is assumed to be a good representation of all possibilities that could happen between today and tomorrow. The instruments in the portfolio are then repeatedly re-valued against each of the scenarios. This produces a distribution of portfolio values or, equivalently, a distribution of changes in portfolio value from today's value. Usually, some of these changes will involve profits and some will involve losses. Ordering the changes in portfolio value from worst to best, the 99% VaR, for example, is computed as the loss such that 1% of the profits or losses are below it, and 99% are above it.

The main advantage of historical simulation is that it makes no assumption that risk factor changes come from a particular distribution; the methodology is therefore consistent with the risk factor changes following any distribution. Another important advantage is that historical simulation does not involve the estimation of any statistical parameters, such as variances or covariances, and is consequently exempt from the inevitable estimation errors. It is also a methodology that is easy to explain and defend to a non-technical but important audience, such as a corporate board of directors. However, as is usually the case, this methodology also has some disadvantages. The most obvious is that historical simulation, in its purest form, can be very difficult to accomplish because it requires data on all risk factors to be available over a reasonably long historical period in order to give a good representation of what might happen in the future. Another disadvantage is that, because historical simulation does not involve any distributional assumptions, the scenarios that are used in computing VaR are limited to those that occurred in the historical sample. The next section describes how Monte Carlo simulation can be used to address this limitation of historical simulation.

Monte Carlo simulation techniques are by far the most flexible and powerful, since they are able to take into account all non-linearities of the portfolio value with respect to its underlying risk factors, and to incorporate all desirable distributional properties, such as fat tails and time-varying volatilities. Also, Monte Carlo simulations can be extended to apply over longer holding periods, making it possible to use these techniques for measuring credit risk. However, these techniques are also by far the most expensive computationally. The key difference between historical simulation and Monte Carlo simulation is that the historical simulation model carries out the simulation using the real observed changes in the market place over the last X periods (using historical market price data) to generate Y hypothetical portfolio profits or losses, whereas in Monte Carlo simulation a random number generator is used to produce tens of thousands of hypothetical changes in the market. These are then used to construct thousands of hypothetical profits and losses on the current portfolio, and the subsequent distribution of possible portfolio profit or loss. Finally, the VaR is determined from this distribution according to the parameters set (e.g. a 95% confidence level). The method is summarized in the figure.
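Before turning to Monte Carlo, here is a minimal sketch of the full-valuation historical procedure described above: a two-position portfolio is re-valued against every historical scenario and the 99% VaR is read off the ordered P&L distribution. The positions and the five-year price history are hypothetical, and relative (rather than absolute) period-to-period changes are used, which is one common variant:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical five-year window of daily prices for two risk factors.
n_days = 1250
prices = 100 * np.cumprod(1 + rng.normal(0, 0.015, (n_days, 2)), axis=0)

holdings = np.array([1000.0, 500.0])   # shares held in each stock
today = prices[-1]
base_value = holdings @ today          # full valuation at the base case

# Period-to-period relative changes in the risk factors ...
changes = prices[1:] / prices[:-1]
# ... applied to the base case give the simulated market states.
scenario_prices = today * changes
# Re-value the whole portfolio against every scenario (full valuation).
scenario_values = scenario_prices @ holdings
pnl = scenario_values - base_value

# Ordering the changes in portfolio value from worst to best, the 99%
# VaR is the loss with 1% of outcomes below it.
var_99 = -np.quantile(pnl, 0.01)
print(f"One-day 99% VaR: {var_99:,.0f}")
```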

Monte Carlo Method

The RiskMetrics Monte Carlo methodology consists of three major steps [14]:

- Scenario generation: using the volatility and correlation estimates for the underlying assets in our portfolio, we produce a large number of future price scenarios in accordance with the lognormal models.
- For each scenario, we compute a portfolio value.
- We report the results of the simulation, either as a portfolio distribution or as a particular risk measure.
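A minimal sketch of these three steps, assuming lognormal one-day price dynamics; the volatilities, correlation and positions are illustrative assumptions, not RiskMetrics estimates:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical inputs: two assets, current prices, holdings, daily
# volatilities and their correlation (estimated in practice).
spot = np.array([100.0, 50.0])
holdings = np.array([200.0, 400.0])
vols = np.array([0.02, 0.015])
corr = np.array([[1.0, 0.3],
                 [0.3, 1.0]])
cov = np.outer(vols, vols) * corr

# Step 1: scenario generation -- correlated normal log-returns give
# lognormal price scenarios.
n_scenarios = 50_000
log_returns = rng.multivariate_normal(np.zeros(2), cov, size=n_scenarios)
scenario_prices = spot * np.exp(log_returns)

# Step 2: compute a portfolio value for each scenario.
scenario_values = scenario_prices @ holdings
pnl = scenario_values - spot @ holdings

# Step 3: report a risk measure from the simulated distribution.
print(f"One-day 95% VaR: {-np.quantile(pnl, 0.05):,.0f}")
```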

Other Monte Carlo methods may vary the first step by creating returns from (possibly quite involved) modelled distributions, using pseudo-random or quasi-random numbers to draw a sample from the distribution. The next two steps are as above. The calculation of VaR then proceeds as for the historical simulation method. The advances of other Monte Carlo methods over RiskMetrics Monte Carlo are in the creation of the distributions. However, creating experiments using a Monte Carlo method is fraught with dangers. Each market variable has to be modelled according to an estimated distribution, as do the relationships between distributions (such as correlation or less obvious non-linear relationships, for which copulas are becoming prominent). Using the Monte Carlo approach means one is committed to the use of such distributions and the estimations one makes. These distributions can become inappropriate, possibly in an insidious manner. Building and keeping current a Monte Carlo risk management system requires continual re-estimation, a good reserve of analytic and statistical skills, and non-automatic decisions. One of these advanced methods uses dynamic risk factor modelling [15]. Dynamic risk factor modelling enables risk management practitioners to efficiently fit many risk factor models simultaneously. Risk factor models can be fitted using different distributional specifications, including non-normal distributions. This multivariate simulation process captures and maintains the dependence structure of the risk factors modelled separately. To accomplish this, the simulation engine uses a framework based on the statistical concept of a copula. A copula is a function that combines marginal distributions of the variables (risk factors) into a specific multivariate distribution in which all of the one-dimensional marginals are the cumulative distribution functions (CDFs) of the risk factors. This dynamic approach to risk factor modelling is appropriate both for market and for credit risk applications.
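As a hedged illustration of the copula idea, the sketch below uses a Gaussian copula to impose a dependence structure on fat-tailed Student-t marginals; these distributions and parameters are arbitrary choices for the example, not the ones used by any particular simulation engine:

```python
import numpy as np
from scipy.stats import norm, t

rng = np.random.default_rng(3)

# Dependence structure: a Gaussian copula with a chosen correlation.
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])
z = rng.multivariate_normal(np.zeros(2), corr, size=100_000)
u = norm.cdf(z)   # uniform marginals carrying the dependence structure

# Non-normal marginals for the two risk factors: fat-tailed Student-t
# daily returns, which would be fitted separately in a real application.
r1 = t.ppf(u[:, 0], df=4) * 0.01
r2 = t.ppf(u[:, 1], df=6) * 0.015

# Equal-weighted portfolio return and its 99% VaR.
port = 0.5 * r1 + 0.5 * r2
print(f"One-day 99% VaR: {-np.quantile(port, 0.01):.2%}")
```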
Delta-normal method

The delta-normal method is a parametric, analytic technique where the distributional assumption made is that the daily geometric returns of the market variables are multivariate normally distributed with mean return zero. Historical data are used to estimate the major parameters: means, standard deviations and correlations. When the market value of the portfolio is a linear function of the underlying parameters, the distribution of the profits is normal as well. VaR is computed from the vector of first derivatives of the portfolio value with respect to the risk factor variables (the "deltas") and the covariance matrix of the risk factor returns: the portfolio variance is the quadratic form of the deltas with the covariance matrix, and VaR is its square root multiplied by a factor that depends on the normal distribution quantile for the confidence level at which VaR is being computed (1.65 standard deviations below the mean gives the 5% level, 2.33 gives the 1% level). This method was introduced by J.P. Morgan's RiskMetrics system. A detailed description of this method can be found in [JOR00, pp. 206-221] or [ENG03, pp. 18-20]. Figure 2 details the steps involved in this approach. The advantages of this method include its speed and simplicity, and the fact that the distribution of returns need not be assumed to be stationary through time, since volatility updating is incorporated into the parameter estimation. The delta-normal method is, however, subject to a number of criticisms. A first problem is the existence of fat tails in the distribution of returns on most financial assets. The distribution of daily returns of any risk factor would in reality typically show a significant amount of positive kurtosis. This leads to fatter tails and to extreme outcomes occurring much more frequently than would be predicted by the normal distribution assumption, which leads to an underestimation of VaR (since VaR is concerned with the tails of the distribution). Another problem is that the method inadequately measures the risk of nonlinear instruments, such as options or mortgages.
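A minimal sketch of this calculation, with hypothetical deltas and an illustrative covariance matrix of daily risk factor returns:

```python
import numpy as np

# Hypothetical inputs: sensitivities of the portfolio value to each
# risk factor (the "deltas") and the covariance matrix of daily
# risk factor returns, which would be estimated from historical data.
deltas = np.array([5000.0, -2000.0, 3000.0])
cov = np.array([[4.0e-4, 1.0e-4, 5.0e-5],
                [1.0e-4, 2.3e-4, 8.0e-5],
                [5.0e-5, 8.0e-5, 3.0e-4]])

# With linear positions, portfolio P&L is normal with standard
# deviation sqrt(delta' * Sigma * delta).
portfolio_sigma = np.sqrt(deltas @ cov @ deltas)

for level, z in (("5%", 1.65), ("1%", 2.33)):
    # z is the normal quantile from the text: 1.65 for the 5% level,
    # 2.33 for the 1% level.
    print(f"{level} one-day VaR: {z * portfolio_sigma:,.0f}")
```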

