The RiskMetrics variance model (also known as exponential smoother) was first established in 1989, when Sir Dennis Weatherstone, the new chairman of J.P. Morgan, asked for a daily report measuring and explaining the risks of his firm. Nearly four years later in 1992, J.P. Morgan launched the RiskMetrics methodology to the marketplace, making the substantive research and analysis that satisfied Sir Dennis Weatherstone's request freely available to all market participants.
In 1998, as client demand for the group's risk management expertise exceeded the firm's internal risk management resources, the Corporate Risk Management Department was spun off from J.P. Morgan as RiskMetrics Group with 23 founding employees. The RiskMetrics technical document was revised in 1996. In 2001, it was revised again in ''Return to RiskMetrics''. In 2006, a new method for modeling risk factor returns was introduced (RM2006). On 25 January 2008, RiskMetrics Group listed on the New York Stock Exchange (NYSE: RISK). In June 2010, RiskMetrics was acquired by MSCI for $1.55 billion.
Risk measurement process
Portfolio risk measurement can be broken down into steps. The first is modeling the market that drives changes in the portfolio's value. The market model must be sufficiently specified so that the portfolio can be revalued using information from the market model. Risk measurements are then extracted from the probability distribution of the changes in portfolio value. The change in value of the portfolio is typically referred to by portfolio managers as profit and loss, or P&L.
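These steps can be sketched in a few lines. The following minimal sketch (in Python, assuming numpy and a hypothetical revaluation function `price_portfolio` supplied by a pricing library) revalues the portfolio under each market scenario and collects the resulting P&L values; risk measures such as those described below are then computed from the returned vector.

```python
import numpy as np

def pnl_distribution(scenarios, positions, current_factors, price_portfolio):
    """Revalue the portfolio under each risk factor scenario and return the
    vector of changes in portfolio value (P&L) relative to today's value."""
    base_value = price_portfolio(positions, current_factors)
    return np.array([price_portfolio(positions, s) - base_value for s in scenarios])
```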
Risk factors
Risk management systems are based on models that describe potential changes in the factors affecting portfolio value. These risk factors are the building blocks for all pricing functions. In general, the factors driving the prices of financial securities are equity prices, foreign exchange rates, commodity prices, interest rates, correlation and volatility. By generating future scenarios for each risk factor, we can infer changes in portfolio value and reprice the portfolio for different "states of the world".
Portfolio risk measures
Standard deviation
The first widely used portfolio risk measure was the standard deviation of portfolio value, as described by Harry Markowitz. While comparatively easy to calculate, standard deviation is not an ideal risk measure since it penalizes profits as well as losses.
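As a minimal illustration of that symmetry (in Python, assuming numpy; the P&L figures are illustrative), standard deviation assigns the same value to a P&L distribution and to its mirror image, so a large gain is "penalized" exactly as much as an equally large loss:

```python
import numpy as np

pnl = np.array([-30.0, -10.0, 5.0, 12.0, 25.0])  # illustrative P&L scenarios

# Standard deviation is symmetric: flipping the sign of every scenario,
# which turns all gains into losses and vice versa, leaves it unchanged.
print(np.std(pnl, ddof=1))
print(np.std(-pnl, ddof=1))
```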
Value at risk
The 1994 technical document popularized VaR as the risk measure of choice among investment banks looking to measure their portfolio risk for the benefit of banking regulators. VaR is a downside risk measure, meaning that it typically focuses on losses.
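A minimal sketch of a VaR estimate from a vector of P&L scenarios (in Python, assuming numpy; the 95% level and the convention of reporting VaR as a positive loss are illustrative choices, not prescribed by the source):

```python
import numpy as np

def value_at_risk(pnl, level=0.95):
    """Loss that is exceeded with probability (1 - level), as a positive number."""
    return -np.quantile(pnl, 1.0 - level)

# Illustrative P&L distribution: normal with standard deviation 1,000
pnl = np.random.default_rng(0).normal(scale=1_000.0, size=10_000)
print(value_at_risk(pnl, 0.95))  # close to 1.645 * 1,000 for a normal distribution
```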
Expected shortfall
A third commonly used risk measure is expected shortfall, also known variously as expected tail loss, XLoss, conditional VaR, or CVaR.
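Expected shortfall at a given level is the average loss in the scenarios worse than the VaR threshold. A minimal sketch continuing the VaR example above (same assumptions):

```python
import numpy as np

def expected_shortfall(pnl, level=0.95):
    """Average loss in the worst (1 - level) fraction of scenarios, as a positive number."""
    var = -np.quantile(pnl, 1.0 - level)
    tail_losses = pnl[pnl <= -var]
    return -tail_losses.mean()

pnl = np.random.default_rng(0).normal(scale=1_000.0, size=10_000)
print(expected_shortfall(pnl, 0.95))  # larger than the 95% VaR of the same P&L
```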
Marginal VaR
The Marginal VaR of a position with respect to a portfolio can be thought of as the amount of risk that the position is adding to the portfolio. It can be formally defined as the difference between the VaR of the total portfolio and the VaR of the portfolio without the position.
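Following that definition, a minimal sketch (in Python, assuming numpy and a matrix of per-position P&L scenarios; the data are illustrative):

```python
import numpy as np

def value_at_risk(pnl, level=0.95):
    return -np.quantile(pnl, 1.0 - level)

def marginal_var(position_pnls, i, level=0.95):
    """VaR of the full portfolio minus VaR of the portfolio without position i.

    position_pnls has shape (n_positions, n_scenarios): the P&L of each
    position under each scenario.
    """
    total = position_pnls.sum(axis=0)      # portfolio P&L per scenario
    without_i = total - position_pnls[i]   # portfolio P&L excluding position i
    return value_at_risk(total, level) - value_at_risk(without_i, level)

# Illustrative data: three positions, 10,000 scenarios
rng = np.random.default_rng(0)
position_pnls = rng.normal(size=(3, 10_000)) * np.array([[500.0], [300.0], [200.0]])
print([marginal_var(position_pnls, i) for i in range(3)])
```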
Incremental risk
Incremental risk statistics provide information regarding the sensitivity of portfolio risk to changes in the position holding sizes in the portfolio.
An important property of incremental risk is additivity: the sum of the incremental risks of the positions in a portfolio equals the total risk of the portfolio. This property has important applications in the allocation of risk to different units, where the goal is to keep the sum of the risks equal to the total risk.
Since there are three risk measures covered by RiskMetrics, there are three incremental risk measures: Incremental VaR (IVaR), Incremental Expected Shortfall (IES), and Incremental Standard Deviation (ISD).
Incremental statistics also have applications to portfolio optimization. A portfolio with minimum risk will have incremental risk equal to zero for all positions. Conversely, if the incremental risk is zero for all positions, the portfolio is guaranteed to have minimum risk only if the risk measure is subadditive.
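As an illustration of the additivity property, a minimal sketch in the delta-normal setting (in Python, assuming numpy; positions are linear in normally distributed returns, so portfolio VaR is z·sqrt(w'Σw), the incremental VaR of a position is its holding times the partial derivative of VaR with respect to that holding, and the incremental VaRs sum exactly to the portfolio VaR by Euler's theorem):

```python
import numpy as np

z = 1.645                                  # approximate 95% normal quantile
w = np.array([1.0, 2.0, -0.5])             # holdings
cov = np.array([[0.04, 0.01, 0.00],        # covariance of position returns (illustrative)
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])

sigma = np.sqrt(w @ cov @ w)               # portfolio standard deviation
total_var = z * sigma                      # delta-normal portfolio VaR

# Incremental VaR of each position: w_i * dVaR/dw_i (Euler allocation)
ivar = w * z * (cov @ w) / sigma

print(total_var, ivar.sum())               # the two agree: incremental VaRs add up
```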
Coherent risk measures
A coherent risk measure satisfies the following four properties:
1. Subadditivity
A risk measure is subadditive if for any portfolios A and B, the risk of A+B is never greater than the risk of A plus the risk of B. In other words, the risk of the sum of subportfolios is smaller than or equal to the sum of their individual risks.
Standard deviation and expected shortfall are subadditive, while VaR is not; a numerical illustration follows the list of properties below.
Subadditivity is required in connection with aggregation of risks across desks, business units, accounts, or subsidiary companies. This property is important when different business units calculate their risks independently and we want to get an idea of the total risk involved. Lack of subadditivity could also be a matter of concern for regulators, where firms might be motivated to break up into affiliates to satisfy capital requirements.
2. Translation invariance
Adding cash to the portfolio decreases its risk by the same amount.
3. Positive homogeneity of degree 1
If we double the size of every position in a portfolio, the risk of the portfolio will be twice as large.
4. Monotonicity
If losses in portfolio A are larger than losses in portfolio B for all possible risk factor return scenarios, then the risk of portfolio A is higher than the risk of portfolio B.
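As a numerical illustration of the subadditivity remark under property 1, consider a stylized, hypothetical example: two independent positions that each lose 100 with probability 4% and otherwise break even. At the 95% level, each position's VaR on its own is zero, yet the VaR of the combined portfolio is 100, which exceeds the sum of the individual VaRs (in Python, assuming numpy):

```python
import numpy as np

def value_at_risk(pnl, level=0.95):
    return -np.quantile(pnl, 1.0 - level)

rng = np.random.default_rng(0)
n = 1_000_000
# Two independent positions, each losing 100 with probability 4%
pnl_a = np.where(rng.random(n) < 0.04, -100.0, 0.0)
pnl_b = np.where(rng.random(n) < 0.04, -100.0, 0.0)

print(value_at_risk(pnl_a))          # 0: a 4% loss event lies inside the 95% quantile
print(value_at_risk(pnl_b))          # 0
print(value_at_risk(pnl_a + pnl_b))  # 100: at least one loss occurs with ~7.8% probability
```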
Assessing risk measures
The estimation process of any risk measure can be wrong by a considerable margin. If the imprecise estimate gives no sense of what the true value could be, then the estimate is virtually worthless. Good practice is therefore to supplement any estimated risk measure with an indicator of its precision, or of the size of its error.
There are various ways to quantify the error of such estimates. One approach is to estimate a confidence interval for the risk measurement.
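One such approach, as a minimal sketch (in Python, assuming numpy), is a bootstrap confidence interval: resample the P&L scenarios with replacement, recompute the VaR on each resample, and take percentiles of the resulting estimates.

```python
import numpy as np

def bootstrap_var_interval(pnl, level=0.95, n_boot=2_000, ci=0.90, seed=0):
    """Bootstrap confidence interval for a VaR estimate computed from scenarios."""
    rng = np.random.default_rng(seed)
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(pnl, size=pnl.size, replace=True)
        estimates[b] = -np.quantile(resample, 1.0 - level)
    return tuple(np.quantile(estimates, [(1 - ci) / 2, 1 - (1 - ci) / 2]))

pnl = np.random.default_rng(1).normal(scale=1_000.0, size=500)
print(bootstrap_var_interval(pnl))  # an interval around the point VaR estimate
```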
Market models
RiskMetrics describes three models for modeling the risk factors that define financial markets.
Covariance approach
The first is very similar to the mean-covariance approach of Markowitz. Markowitz assumed that the asset covariance matrix can be observed. The covariance matrix can be used to compute portfolio variance. RiskMetrics assumes that the market is driven by risk factors with observable covariance. The risk factors are represented by time series of prices or levels of stocks, currencies, commodities, and interest rates. Instruments are evaluated from these risk factors via various pricing models. The portfolio itself is assumed to be some linear combination of these instruments.
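A minimal sketch of this approach for a portfolio that is linear in the risk factors (in Python, assuming numpy; the covariance matrix and exposures are illustrative):

```python
import numpy as np

# Illustrative covariance matrix of daily risk factor returns
cov = np.array([[1.0e-4, 2.0e-5],
                [2.0e-5, 4.0e-4]])
# Illustrative portfolio exposures: value change per unit return of each factor
exposures = np.array([1_000_000.0, 250_000.0])

portfolio_variance = exposures @ cov @ exposures
portfolio_std = np.sqrt(portfolio_variance)

z = 1.645  # approximate 95% normal quantile
print(portfolio_std, z * portfolio_std)  # portfolio standard deviation and parametric VaR
```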
Historical simulation
The second market model assumes that the market only has finitely many possible changes, drawn from a risk factor return sample of a defined historical period. Typically one performs a historical simulation by sampling from past day-on-day risk factor changes, and applying them to the current level of the risk factors to obtain risk factor price scenarios. These perturbed risk factor price scenarios are used to generate a profit (loss) distribution for the portfolio.
This method has the advantage of simplicity, but as a model, it is slow to adapt to changing market conditions. It also suffers from simulation error, as the number of simulations is limited by the historical period (typically between 250 and 500 business days).
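A minimal sketch of the scenario-generation step (in Python, assuming numpy and a matrix of historical risk factor levels; the use of simple one-day relative changes is an illustrative choice):

```python
import numpy as np

def historical_scenarios(factor_levels):
    """Apply each past day-on-day relative change to today's factor levels.

    factor_levels: shape (n_days, n_factors), oldest day first.
    Returns shape (n_days - 1, n_factors): one perturbed scenario per historical day.
    """
    daily_change = factor_levels[1:] / factor_levels[:-1]
    today = factor_levels[-1]
    return today * daily_change

# Illustrative history: 250 business days of two risk factor levels
rng = np.random.default_rng(2)
levels = 100.0 * np.exp(np.cumsum(rng.normal(scale=0.01, size=(250, 2)), axis=0))
print(historical_scenarios(levels).shape)  # (249, 2)
```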
Monte Carlo simulation
The third market model assumes that the logarithm of the return, or log-return, of any risk factor typically follows a normal distribution. Collectively, the log-returns of the risk factors are multivariate normal. Monte Carlo simulation generates random market scenarios drawn from that multivariate normal distribution. For each scenario, the profit (loss) of the portfolio is computed. This collection of profit (loss) scenarios provides a sampling of the profit (loss) distribution from which one can compute the risk measures of choice.
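A minimal sketch of the scenario-generation step under this assumption (in Python, assuming numpy; the current levels, mean vector, and covariance matrix of the log-returns are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

current_levels = np.array([100.0, 1.25, 50.0])   # current risk factor levels
mean = np.zeros(3)                               # mean of daily log-returns
cov = np.array([[1.0e-4, 2.0e-5, 0.0],
                [2.0e-5, 4.0e-5, 1.0e-5],
                [0.0,    1.0e-5, 9.0e-4]])       # covariance of daily log-returns

# Draw multivariate normal log-returns and exponentiate to obtain factor scenarios
log_returns = rng.multivariate_normal(mean, cov, size=10_000)
scenarios = current_levels * np.exp(log_returns)
print(scenarios.shape)  # (10000, 3): one set of risk factor levels per scenario
```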
Criticism
Nassim Taleb in his book ''The Black Swan'' (2007) wrote:

Banks are now more vulnerable to the Black Swan than ever before with "scientists" among their staff taking care of exposures. The giant firm J. P. Morgan put the entire world at risk by introducing in the nineties RiskMetrics, a phony method aiming at managing people's risks. A related method called "Value-at-Risk," which relies on the quantitative measurement of risk, has been spreading.
References
*Harry Markowitz, "Portfolio Selection", ''Journal of Finance'', March 1952.
*Peter Zangari, ''RiskMetrics Technical Document'', 1996.
*Matthew Pritzker, Board of Governors of the Federal Reserve System, ''Finance and Economics Discussion Series'', 2001.
*Jeremy Berkowitz and James O'Brien, "How Accurate Are Value-at-Risk Models at Commercial Banks?", ''Journal of Finance'', Vol. 57, No. 3 (June 2002), pp. 1093–1111.
*Jorge Mina and Jerry Xiao, ''Return to RiskMetrics'', 2001.
*Chris Finger, "How historical simulation made me lazy", ''RiskMetrics Research Monthly'', April 2006.
*Gilles Zumbach, RiskMetrics Working Paper, November 2006.
*Alan Laubsch, 1999.