**What is Value At Risk?**

In the aftermath of the 2008 financial crisis, the myriad factors leading to the calamity were extensively examined by various public and private entities. It became apparent that some factors had played more of a role than others. Among the critical ones were subprime mortgages guaranteed by Fannie Mae and Freddie Mac, various forms of collateralized debt obligations and mortgage-backed securities, and faulty credit ratings. However, one very crucial factor that wasn't extensively examined was the risk management of the banking firms themselves; more specifically, their use of value at risk (VaR). VaR is a financial tool used to measure the risk of loss on a portfolio of financial assets. The measure looks at three variables: the potential loss on a portfolio, the probability of that loss occurring, and the time horizon over which it can occur. For example, if a firm determines that it has a 2% one-month VaR of $200,000,000 on a specific investment portfolio, this means there is a 2% chance the firm will lose at least $200,000,000 on that portfolio in any given month; on average, it should expect such a loss about once every fifty months[1].
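The definition is easy to state in code. The sketch below is a minimal historical-simulation VaR, not any particular bank's model; the function name, the simulated return series, and the portfolio value are all made up for illustration:

```python
import numpy as np

def historical_var(returns, confidence=0.98):
    """Loss fraction exceeded with probability (1 - confidence).

    returns: array of past portfolio returns (e.g. monthly), as fractions.
    A 2% one-month VaR corresponds to confidence=0.98: the 2nd percentile
    of the return distribution, expressed as a positive loss.
    """
    return -np.percentile(returns, 100 * (1 - confidence))

# Made-up monthly returns, purely for illustration.
rng = np.random.default_rng(0)
monthly_returns = rng.normal(0.01, 0.04, size=600)

portfolio_value = 10_000_000_000  # hypothetical $10bn portfolio
var_fraction = historical_var(monthly_returns, confidence=0.98)
print(f"2% one-month VaR: ${portfolio_value * var_fraction:,.0f}")
```

Note that the number produced is only as good as the historical window fed in, which is precisely the weakness discussed below.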

**Faulty Assumptions**

The problem with VaR is rooted in how the probability variable is calculated. To understand this problem, we must take a step back and look at financial economics as a whole; after all, VaR is the apotheosis of over 50 years of financial economics. Financial economic theory rests on two important tenets: that markets are efficient, and that risk in financial markets is normally distributed.

Efficient market theory claims that all investors act in rational ways so as to maximize their wealth; thus, as soon as new information is released into the market, it is factored into the prices of various assets immediately. Since the market adjusts all prices instantaneously, future prices are essentially unpredictable, because news that has yet to be released cannot be known. This implies that investors can only 'beat' the market by luck alone, as information becomes a non-factor.

The theory that financial risk is normally distributed places all financial events bearing any price effect on a bell curve. The middle, where the curve is at its highest, denotes events of low severity that occur with high frequency. The tails of the curve denote rare events such as stock market crashes, runs on banks, and bursting asset bubbles (hence the term 'tail risk'). According to the model, such 'rare events' almost never occur, since the height of the curve at the tails is practically zero[2].
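Just how small the normal model makes these tail probabilities can be computed directly from the standard normal distribution (the 1987 one-day crash, for instance, is often described as a twenty-plus-sigma event under such assumptions):

```python
from math import erfc, sqrt

def normal_tail_prob(k):
    """P(X < -k standard deviations) for a normally distributed X."""
    return 0.5 * erfc(k / sqrt(2))

# Under normality, large moves are assigned essentially zero probability.
for k in (1, 3, 5, 10):
    print(f"{k:>2} sigma: {normal_tail_prob(k):.2e}")
```

At ten standard deviations the assigned probability is on the order of 10^-24, i.e. the model says such a move should effectively never happen in the lifetime of the universe, let alone several times per century.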

The majority of award-winning financial economics theories (especially Nobel Prize-winning ones) are based on the assumptions of normally distributed risk and efficient markets. The problem is that these assumptions do not reflect the real world. Numerous studies have shown that markets aren't efficient. For example, mathematician Benoit Mandelbrot showed that future prices are in fact partly dependent on historical prices, not only on new information. Experiments by Amos Tversky and his colleagues showed that investors tend to behave in irrational ways: for example, they are more apt to avoid a loss than to seek an equivalent gain, even when the expected value of both situations is the same (loss aversion). These and numerous other studies by social scientists have shown that efficient markets and rational actors are misconceptions.

The idea that risk is normally distributed, with the chance of rare events being practically zero, is a complete fallacy. This is highlighted by historical events such as multiple stock market crashes and multiple burst asset bubbles (the most recent being the housing bubble). According to the normal distribution, such events should almost never occur, let alone multiple times. Despite the fallacies of these assumptions, many financial tools based on them are used in modern finance and have been an integral part of many financial disasters. Value at risk is a perfect example.

VaR, as mentioned before, is based on three variables, and the most problematic of the three is the probability of a loss occurring. Since VaR is built on the faulty assumptions of efficient markets and normally distributed risk, the probability variable is also faulty. Depending on the specific calculation used (there are three main ones), VaR usually assumes that future relationships between the prices of various assets in a portfolio will be the same as they were in the past, no matter what. So if the historical data were taken during a period of stability, the model will assume that future prices will reflect this stability and that the probability of severe deviations from projected prices is essentially zero. It also assumes that asset price variations are random, with no reasons behind them.

To make matters worse, VaR treats risk as the net of long minus short positions, not the gross. For example, a short position in a volatile stock can be offset by a long position in 10-year US Treasuries; according to VaR, the net risk of this pairing is less than the risk of either investment on its own. Realistically, the risks of the two investments should be added together, not netted. The math becomes extremely complicated when the risks of multiple investments in a portfolio are offset against each other and multiple relationships between the various investments are created. The accumulation of these erroneous assumptions makes VaR a very misleading indicator of risk: according to VaR, the probability of a huge loss on a given portfolio may be relatively low, while in actuality it is quite high. An appropriate analogy would be a furnace thermostat reading 24 degrees when the room is actually close to 60.
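The netting effect criticized above is easy to see in a textbook parametric (variance-covariance) VaR calculation. All the numbers below, including the positions, volatilities, and the correlation between the stock and the Treasuries, are hypothetical:

```python
import numpy as np

# Hypothetical positions: short a volatile stock, long 10-year Treasuries.
positions = np.array([-5_000_000.0, 5_000_000.0])  # dollar exposures
vols = np.array([0.12, 0.02])                      # monthly return std devs
corr = np.array([[1.0, 0.3],
                 [0.3, 1.0]])                      # assumed correlation

z = 2.054  # 98th-percentile z-score, matching a 2% VaR

# Standalone VaR of each position, summed (the "gross" view).
standalone = z * np.abs(positions) * vols

# Parametric portfolio VaR: exposures netted through the covariance matrix.
cov = corr * np.outer(vols, vols)
netted = z * np.sqrt(positions @ cov @ positions)

print(f"Gross (summed) VaR:   ${standalone.sum():,.0f}")
print(f"Netted portfolio VaR: ${netted:,.0f}")
```

The netted figure is always smaller than the gross sum whenever correlation is below 1, which is exactly why the approach flatters leveraged books: the reported risk shrinks as long as the assumed historical correlations hold, and those are the first thing to break down in a crisis.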

**Regulations**

After the 1920s, various regulatory reforms attempted to keep banking practices sound and ensure the solvency of these firms. Capital requirements, however, were a special case. Although the percentage of reserve capital is set by a regulatory agency (in the US, the Board of Governors of the Federal Reserve System), the required amount of capital is based on the risk-weighted value of a bank's assets. The banks, however, are able to use their own risk models to determine the amount of capital held against the risks taken on by various investments. VaR is very appealing to the banks because it allows for higher amounts of leverage, and therefore higher profits, so long as the various positions taken are largely offset against one another to minimize net risk. In case of a loss, the banks are in a sense insured that the taxpayers will be left holding the bag: guarantees of mortgage payments by government-sponsored enterprises such as Fannie Mae and Freddie Mac, and bank account guarantees in the form of Federal Deposit Insurance, ensure that the federal government will step in and bail out the banks to 'save the consumers'. This only encourages risky bets in the pursuit of higher profits.

**The Future of VaR and Other Flawed Risk Measures**

Since 2008, a few individuals have come to understand the role that VaR played in the crisis. Some have argued that the use of VaR as a measure of risk should be banned. Is this the way to go? The answer is no. In a free market, banks should be allowed to use whatever risk models they want, no matter how faulty. The key term here is 'free market': if banks desire the perks of a free market, they should also be subject to all of its consequences. This means no taxpayer-funded bailouts and no government guarantees in any shape or form. Many would object that guarantees like Federal Deposit Insurance protect the consumers. On a superficial level they do protect depositors, but on a more fundamental level they allow banks to engage in fractional-reserve lending (but that's a whole other article). If guarantees such as FDI were eliminated, banks would have to use safer financial models or face the risk of going bankrupt. Over time, as a result of 'market evolution' (analogous to Darwinian evolution), only the banks using the safest and soundest practices, including risk models, would remain. There would be no need for governmental guarantees of any sort, and more importantly, awareness of the possibility of financial catastrophe would greatly increase.

Going forward, governmental regulators must ask themselves whether the answer is more regulation or less; whether governmental guarantees protect the people or the banks; and whether banking should be unconditionally subject to the free market, or losses socialized while profits are privatized. Such questions are of the utmost importance to the financial future of our society.

[1] Strictly, this is an expected frequency: a 2% chance of losing $Y per month × 50 months = 1 expected loss event per 50 months. The probability of at least one such loss in 50 months is 1 − 0.98^50 ≈ 64%, not 100%.
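The distinction between the expected number of loss months and the probability of at least one loss can be checked with a few lines:

```python
# Probability of at least one 2%-tail loss in 50 independent months,
# versus the expected number of loss months over the same horizon.
p_loss = 0.02
months = 50

p_at_least_one = 1 - (1 - p_loss) ** months
expected_events = p_loss * months

print(f"P(at least one loss in {months} months): {p_at_least_one:.0%}")
print(f"Expected number of loss months: {expected_events:.1f}")
```

The first figure comes out to roughly 64%, assuming the monthly outcomes are independent, which VaR's own framework implies.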

[2] However, the curve never actually touches the x-axis; it approaches a horizontal asymptote at y = 0.

Twitter: @g__mehra

Email: gmehra@uwo.ca

Tags: Bubble, Federal Reserve, Financial Crisis, Housing Bubble, Value at Risk, VaR

A good article! But note: your statement "2% chance of loss of $Y per month X 50 = 100% chance of loss of $Y every 50 months" is wrong. Actually, there is a 36% chance that you don't lose $Y in 50 months (0.98^50). Did you mean the expected loss?

Great contributors in this area are Ludwig von Mises's brother, Richard von Mises in the book "Probability, Statistics and Truth" in which he makes the distinction between Case and Class probability, and Frank Knight in the book "Risk, Uncertainty, and Profit," where he distinguishes risk from uncertainty.

Both authors come to a similar conclusion: it is impossible to know the outcome of a market event from probability distributions (whether estimated from historical data or assumed). This is in contrast to knowing the limiting frequency, or probability, of getting "heads" or "tails" when flipping a coin, which can be deemed a class event and not a case event.

Highly recommend both books (note that Knight is not an Austrian), as well as Ludwig von Mises's chapter on this in Human Action. I could not include all the authors' finer points in this post.

http://books.google.ca/books/about/Probability_St… http://mises.org/humanaction/chap6sec4.asp http://mises.org/humanaction/chap6sec3.asp http://www.amazon.com/Risk-Uncertainty-Profit-Fra…

Wonderful article!

So much of the financial world has been built up around faulty assumptions. Another example would be the “risk free rate”, used in a number of financial calculations like the Black-Scholes Options Pricing Model as well as the Capital Asset Pricing Model.

Since when is there a guaranteed rate of return?

I see your point of view there, and you were referring to how Greece almost went bankrupt, but it didn't... I've never seen a treasury bond default (which brings us to the risk-free rate).

Actually, the Greek government gave two haircuts, in 2011 and 2012, of about 50% each time to private holders of its debt. The second haircut alone shaved about 100bn euros off its outstanding debt total.