Economics Interactive Tutorial (Instructions)
Risk
Copyright © 1999-2000 Samuel L. Baker
No, this is not a strategy guide to the Parker Brothers conquer-the-world game. Rather, it is a discussion of the concept of risk as used in economics. Risk is important to insurance, the evaluation of securities, and really any economic decision involving the future.
Definition: A situation has risk if the future state or outcome of the situation is not known for certain.
By that definition, most situations involve risk.
A risky situation can have more than one possible outcome. For each possible outcome (or range of outcomes) one can assess the probability that the particular outcome will happen.
Frank H. Knight, in his 1921 book Risk, Uncertainty, and Profit, made this distinction between what he called "risk" and "uncertainty":
- Risk: The probabilities of the various possible outcomes are known.
- Uncertainty: The probabilities of the various possible outcomes are not known.
By that definition, most real-life situations involve uncertainty. You usually don't know the probabilities of the different things that can happen in real life.
Uncertainty, though, is hard to analyze, so instead we usually assume that we do know the probabilities of all of the outcomes. Games of chance are the situations that come closest to having known probabilities for all outcomes. Insurance, particularly life insurance and property/casualty insurance (for example, insurance against damage from a natural disaster), comes close to having known probabilities, based on past experience with people and nature.
Securities (stocks and bonds) are often analyzed using the concept of risk. Some "hedge" funds in the mid-1990s developed elaborate strategies for investment that hedged -- counter-balanced -- the risks. The funds bought some securities and simultaneously borrowed ("shorted") related securities when the difference between their prices strayed from its historical norm. The probability, as they calculated it, of making a profit was a near-certainty. Two economists won a Nobel Prize in 1997 for working out the theory of this. Those economists lent their names and expertise to one hedge fund, which was hugely successful until an unexpected event (financial crises in Asian countries in 1997 and then Russia's bond default in 1998) changed the probabilities. Investors, including these economists, got a very expensive reminder that there is uncertainty in securities. The diagram in the Wikipedia article on "Long Term Capital Management" tells the story vividly. (To be fair to those economists, it appears that the firm, Long Term Capital Management, had taken a long position in Russian bonds, not hedged according to the economists' theory.)
This lesson was pretty much ignored, however. Shortly thereafter, banks and insurance companies greatly expanded trading in securities that were elaborately designed to segregate risks in lending money to buy real estate. When real estate prices, which had been rising since 1998, leveled off in 2006 and fell precipitously in 2007, the probabilities changed and many of these securities became worthless. That included securities with an AAA rating, meaning that they were supposed to have a negligible risk of failure. In 2008, some major financial institutions failed, while others were saved from failure only by the United States government. Risk was revealed as uncertainty again. (And, some prominent economists who write editorials for the Wall Street Journal were revealed as fools again, even though none of them admit it yet.)
But, as I said, uncertainty is hard to analyze, so let's look at risk.
Since risk is defined as a situation in which the probabilities of possible outcomes are known, let's spend a moment reviewing the concept of probability.
Probability
(I adapted this section from my brief review of statistical concepts for HSPM J716.)
The probability of an event is a fraction between 0 and 1.
1 is the probability of an event that is sure to happen. The probability that the sun will rise tomorrow is 1.
0 is the probability of an event that cannot happen. The probability that the Gamecocks will win a football national championship is 0. (Ha! Ha! Just a joke. Maybe Spurrier can do it!)
Here's how you get probabilities between 0 and 1: Suppose there are a number of equally likely events. Some events represent a "win" and some a "loss." The probability of winning is the number of winning events divided by the total number of events. Such well-defined situations arise in practice only in games.
For example, consider a coin with a "heads" side and a "tails" side. If we assume that the two sides are equally likely to be up when the coin is tossed and lands on a table, then the probability of "heads" is 1/2 and the probability of "tails" is 1/2.
For another example, an American roulette wheel has 38 spaces. Each space has a number and a color. The numbers are 1, 2, 3, and so forth up to 36, plus a 0 and a 00. Eighteen of the spaces are red. Eighteen are black. Two, the 0 and the 00, are green. If you bet that the ball will fall in a red space, your probability of winning is 18/38.
A deck of playing cards has 52 different cards (if we remove the Jokers). I draw a card at random from that deck. What is the probability that the card will be the Queen of Diamonds?
The Law of Large Numbers
If a trial (such as a play of a game of chance) is repeated many times, then the more times the trial is repeated, the more likely it is that the frequency of any particular event will be close to the probability of that event. For example, if we flip a fair coin many times, the more times we flip it, the more likely it is that the number of "heads" divided by the total number of tosses will be close to 1/2. This may be taken as the definition of probability, or it can be taken as a theorem, in which case it is called the Law of Large Numbers.
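The Law of Large Numbers is easy to see empirically. Here is a quick Python sketch (the function name and seed are my own, chosen for illustration) that flips a simulated fair coin and reports the frequency of heads:

```python
import random

def heads_frequency(n_flips, seed=0):
    """Flip a simulated fair coin n_flips times; return the fraction of heads."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips

# The more flips, the more likely the observed frequency is close to 1/2.
for n in (10, 1000, 100000):
    print(n, heads_frequency(n))
```

With only 10 flips, the frequency can easily be 0.3 or 0.7; by 100,000 flips it is almost certainly within a percentage point of 0.5.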
Evaluating a Game of Chance
A casino with a roulette wheel will typically pay "1 to 1" if you bet that a red number will come up and a red number does come up. "1 to 1" means that if you win they give you an amount of money equal to what you bet. If you bet $1 and win, you get to keep your original $1, plus they give you $1 more. If a black or green number comes up, you lose your $1 to the casino.
To figure out whether this kind of gamble is likely to make you money, you can use the concept of expected value.
Expected Value
The expected value is calculated by taking each outcome and multiplying its value by the probability of that outcome, then adding all the products up.
For the roulette wheel, the expected value of betting $1 on red at 1-1 odds is:
- The probability of red times the payoff (gain) when a red number comes up, plus
- the probability of black times the payoff (loss, in this case) when a black number comes up, plus
- the probability of green times the payoff (also a loss, in this case) when a green number comes up.
The numbers for this are:
$18/38 plus $-18/38 plus $-2/38, which adds up to $-2/38, or approximately $-0.0526.
For each $1 you bet, your expected loss (it's a loss because the sign is negative) is just over 5 cents.
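As a check on the arithmetic, here is the same expected-value calculation as a Python sketch (the variable names are mine):

```python
# Expected value of a $1 bet on red at 1-to-1 odds, American roulette:
# 18 red, 18 black, and 2 green spaces out of 38.
p_red, p_black, p_green = 18/38, 18/38, 2/38
ev_red = p_red * 1 + p_black * (-1) + p_green * (-1)
print(round(ev_red, 4))  # prints -0.0526
```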
Because, for roulette games, casinos set all of the payoff odds as if the two green numbers were not there, all roulette bets have the same expectation, $-0.0526 per dollar bet.
For example, if you bet on a single number, the payoff is 35 to 1, as if there were 35 losing numbers and 1 winning number. Really, though, there are 37 losing numbers, so your expected value is:
- 1/38 times $35 (for when you win), plus
- 37/38 times $-1 (for when you lose),
which equals ($35 - $37)/38 = $-2/38, or approximately $-0.0526.
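The same calculation for the single-number bet, again as a Python sketch, confirms that it has the same expectation as the red bet:

```python
# A $1 bet on one number pays 35 to 1, but 37 of the 38 numbers lose.
ev_single = (1/38) * 35 + (37/38) * (-1)
print(round(ev_single, 4))  # prints -0.0526
```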
This is how the casino makes money, even though the wheel itself is fair, in the sense that the casino doesn't manipulate it to make you lose. The wheel is fair, but the game is not.
A Fair Game
A fair game has an expected value of $0. Roulette has an expected value of $-0.0526, approximately, so it's not "fair" in this sense.
The casino could make roulette fair by adjusting the payoffs.
For example, if betting on red or black and winning paid 20 to 18, rather than 1 to 1, then the game would be fair. Paying 20 to 18 would mean that if you bet $1 and won, the casino would give you about $1.11.
Your expected value would then be:
- 18/38 times $20/18 (for when your color comes up), plus
- 18/38 times $-1 (for when the other color comes up), plus
- 2/38 times $-1 (for when green comes up),
which equals ($20 - $18 - $2)/38 = $0/38 = $0.
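A Python check of the fair-payoff arithmetic (a sketch, with the variable name mine):

```python
# With a 20-to-18 payoff on red or black, the game's expected value is zero.
ev_fair = (18/38) * (20/18) + (18/38) * (-1) + (2/38) * (-1)
print(abs(ev_fair) < 1e-9)  # prints True: the game is fair
```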
Later, we'll see that there is an analogous concept for insurance: The actuarially fair premium is the premium that exactly equals the expected insurance payoff. But let's not get ahead of ourselves.
The Expected Value and the Law of Large Numbers
If you go into a casino with $10 and start betting $1 at a time at roulette, sooner or later you'll lose all your $10. Why is this? If your expected loss per $1 bet is $-0.0526, shouldn't you expect to lose just $0.526 when you bet $10?
The better explanation is the Law of Large Numbers, which implies: the more times you play, the more likely it is that your net gain, divided by the total amount you have bet, will be close to the game's expected value.
We can illustrate this with a roulette simulation.
Roulette Simulation
Below, after this explanation, is a roulette simulation. Before you try it, let me tell you how it works and what it is supposed to show.
We will pretend that we are playing roulette. We will bet on Odd. This means we will win when the little ball on the roulette wheel lands on an odd number. We lose if the ball finishes on an even number. We also lose if the ball finishes on 0 or 00 because, in roulette, 0 and 00 are neither even nor odd.
The simulation does not have a little ball or a wheel. Instead, it just shows the numbers that come up.
If we bet on Odd, there are 18 numbers that win for us: 1, 3, 5, 7, 9, ..., up through 35. Twenty numbers lose for us: 2, 4, 6, 8, ..., through 36, plus 0 and 00. Winning pays 1-1, meaning that we get $1 if we win and lose $1 if we don't. Our expected loss per bet is (18-20)/38 = $-2/38, or approximately $-0.0526 per dollar bet.
To operate the simulator, click on the rectangular Start button. The imaginary roulette wheel spins, and the number it chooses appears. Winning numbers are placed on the green-background row. Losing numbers go on the pink row. After each spin, the simulator adds up how many times you have won, how many times you have lost, how much you are ahead (gain is positive) or behind (gain is negative), and your average gain per dollar bet. That ratio should get closer and closer to -0.0526 as the simulation runs.
You can adjust the speed. The simulator pauses after you have lost $10, so you can see how long you would have lasted if you had started with $10 in your pocket. By that time, the total amount you have bet will be much more than $10, unless you have been very unlucky. You will have bet each of your ten dollars over and over again. If you have average luck, your $10 will last for about 190 spins, and you will have bet $190. $190 is approximately $10 divided by 0.0526.
The applet will pause again after you lose $100, and again after you lose $1000.
What is supposed to happen: As mentioned, the longer this roulette simulation runs, the closer the net gain per dollar bet should come to -0.0526.
You'll notice considerable fluctuation around -0.0526, though. It may take thousands of spins before the fraction gets close to the theoretical value. That is because there is no such thing as a "law of averages." The "law of averages" is the idea that, for instance, if you have some bad luck then you'll have some good luck later to make up for it. The Law of Large Numbers says that even if you've had some bad luck in the past, your luck starting now is most likely to be average. Keep playing, and the results from average luck will eventually swamp the effects of any particular run of bad luck or good luck.
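The simulation applet itself is not reproduced here, but its core logic can be sketched in a few lines of Python (the function name, seed, and the way the win probability is encoded are my own choices, not the applet's):

```python
import random

def spins_until_broke(bankroll=10, seed=42):
    """Bet $1 on Odd each spin until the bankroll runs out.
    18 of the 38 spaces win for us; 0 and 00 count as losses."""
    rng = random.Random(seed)
    spins = 0
    while bankroll > 0:
        spins += 1
        if rng.randrange(38) < 18:   # a winning (odd) number comes up
            bankroll += 1            # 1-to-1 payoff
        else:
            bankroll -= 1
    return spins

# With average luck, $10 lasts roughly 10 / 0.0526, i.e. about 190 spins,
# though any single run can be much shorter or much longer.
print(spins_until_broke())
```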
To further test your understanding of probability concepts, we have another roulette simulation. This one has the bettor use a doubling system -- a "sure-fire" way to beat the odds. Or is it?
Evaluating Securities
The application of this roulette model to evaluating financial securities is almost straightforward.
I say "almost" because of the important phenomenon of risk aversion. A risk averse person is one who is willing to pay money to avoid taking a risk. We discuss that in detail in another interactive tutorial, but I recommend working through the rest of this tutorial first. Leaving risk aversion aside, the application of the roulette model is straightforward.
Suppose you have the opportunity to buy a treasury bill from some country.
A treasury bill is a simple I.O.U., a promise to pay a certain amount of money at a certain time. A typical treasury bill might promise to pay $10,000 in six months, one-half of a year. When you buy such a treasury bill, you are lending money, in effect. You give the country some money today. They will give you $10,000 in six months, or so they promise.
If there is no risk that the country won't pay off the bill, then the value of the bill today depends on current interest rates. Investors considering buying the bill will offer an amount of money that is less than $10,000, enough less so that the rate of return is at least as much as what they could get from lending money to some other trusted borrower. For the $10,000 six month bill, an investor will offer at most the amount of money $X such that $X would grow to $10,000 in six months if invested at current interest rates. In other words, the investor will offer the present value of a sure $10,000 six months from now.
Let's say current interest rates are 2.5% for six months. For this no-risk treasury bill, you'll offer at most the amount of money $X such that $X times 1.025 equals $10,000. X equals 10000/1.025 = $9756.10, approximately. (For more on this type of calculation, see the interactive tutorial on discounting future income.) This is why it is said that treasury bills sell "at a discount." You give the treasury $9756.10. Six months from now they give you $10,000. You have earned 2.5% interest.
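In Python, this present-value calculation (a sketch) is one line:

```python
# Present value of a sure $10,000 due in six months, at 2.5% per six months.
pv = 10000 / 1.025
print(round(pv, 2))  # prints 9756.1
```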
Now let's suppose that another country, one with shaky finances, is selling $10,000 six-month treasury bills. You estimate that the probability that the country will actually pay the $10,000 when the six months are up is 90%. There is a 10% probability that it will pay $0.
What is the expected value of this risky treasury bill? Assume that the current interest rate on sure money is 2.5%, as before. Do the calculations (you may need some scratch paper) before reading on.
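When you have your answer, you can check it against this Python sketch of the arithmetic (the variable names are mine; the tutorial's original interactive answer box is not reproduced):

```python
# Expected payoff of the risky bill, then its present value at 2.5%.
expected_payoff = 0.90 * 10000 + 0.10 * 0   # 90% chance of $10,000, 10% of $0
price = expected_payoff / 1.025             # discount at the sure rate
print(round(expected_payoff, 2), round(price, 2))  # prints 9000.0 8780.49
```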
To set this example up, I had to posit the probability that the country would pay off the bill. How would I make that judgment in practice? Some economists argue that the market can tell us what the probabilities are. By comparing a security's market price with the market price of a bond that will surely be paid off, one can calculate what "the market thinks" the probability of default is.
We can show how this is supposed to work by turning our current example around.
Suppose 6-month $10,000 treasury bills issued by the U.S. (and therefore sure to pay off) are selling for $9756. Country X's 6-month $10,000 treasury bills are selling for $8780. What does the market think is the probability that Country X will actually pay? (Again, we are still ignoring risk aversion, which will be discussed in the next interactive tutorial.)
The answer is 90%. The probability we want is the number p such that 9756 times p equals 8780. That number p, as we have seen, is about 0.9, or 90%.
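The inference runs the discounting backwards; as a Python sketch:

```python
# The implied probability of payment is the ratio of the risky bill's
# market price to the price of a sure bill with the same face value.
sure_price, risky_price = 9756, 8780
p = risky_price / sure_price
print(round(p, 3))  # prints 0.9
```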
Doing this in real life could be more complicated. In reality, there are more possible outcomes than just paying in full or paying nothing. Payment can be partial, or it can be deferred for a while. Economists argue that the market price, set by the interplay of supply and demand, reflects the probabilities of all possible outcomes.
The more basic problem with inferring probabilities from market prices is that the market reflects what people think. What people think can be wrong, and it can change rapidly. In the summer of 1998, after Russia temporarily stopped paying off its bonds, market prices fell sharply for Brazil's bills and bonds. The connections between the Russian and Brazilian economies were too thin for there to have been any direct causation. What happened was that investors reassessed the probabilities of defaults for all countries, particularly those outside North America and Western Europe. Defaults now seemed more likely, so the prices of the securities fell as their expected values fell.
In July 2011, the European Union announced a deal that amounted to a controlled partial default for Greece. Greek bonds would be paid at 80% of their face value. For example, a bank that had lent 1,000,000 Euros to Greece will get 800,000 Euros, plus interest payments based on 800,000 Euros rather than 1,000,000 Euros. Greek bonds went up in price after the announcement. They had been selling at a discount that was greater than 20%.
By the way, sometimes commentators say that investors have gotten "risk averse," and that is why risky securities lose market value. This is not what risk averse means. What happened in 1998 and 2011 and some years in between was that investors changed their assessment of how big the risks were. You may have a certain attitude about gambling on a security with a 90% probability of paying off. Some news event may cause you to think that the probability of paying off has dropped to 50%. Your attitude toward a 90%-sure bet doesn't necessarily change. What changes is that the security no longer offers a 90%-sure bet. So the price you would pay for that security will be much lower, even without any change in your attitude toward risk, which is what risk aversion is about.
Effective Interest Rates Vary According to Risk
In the example above, Country X is, in effect, paying a higher interest rate because its bills are risky. The interest it is paying can be calculated like this:
- We want the interest rate i such that (1 + i) = $10000/$8780.
- i = $10000/$8780 - 1.
- i = 0.139, approximately, or 13.9% for the six months.
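That arithmetic as a Python sketch:

```python
# Effective six-month interest rate implied by paying $8,780 now
# for a promised $10,000 in six months.
i = 10000 / 8780 - 1
print(round(i, 3))  # prints 0.139
```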
A dramatic example of changing the assessment of risk happened during August 2007. Starting around 2002, banks and similar financial institutions took home mortgages, assembled them into groups, and then sold them to investors. What the banks were selling was the right to collect homeowners' payments on their mortgages.
[Table: Standard & Poor's risk ratings with example bond issuers, from the top grades (e.g., the United States Treasury) down through junk grades; examples included the Palmetto Health Alliance.]
The material after this paragraph was written a few years ago. I'm thinking now that the interest rate differences are a better measure of risk than the ratings from the ratings companies. See the chart on Nate Silver's blog showing how poorly the ratings correlate with the risk of default. For that chart, the measure of risk, on the Y-axis, is how much you have to pay to buy insurance that will pay you if the country defaults. See also The Activist Ratings Agencies and Their Poor Public Sector Predictions.
Assessing the risk of these mortgage-backed securities was a job for ratings firms, particularly Moody's Investors Service and Standard & Poor's. These are well-established companies that have been in the ratings business for many years. (John Moody's 1904 book argued that monopolies -- then the darlings of Wall Street but the bane of Progressives -- were good things, in Moody's view. Hmm ... they haven't changed much!)
Moody's and Standard & Poor's have alphanumeric designations for different levels of risk. The table above shows the Standard & Poor's categories with some examples. Borrowers high on that list, like the United States Treasury, are considered low risk. These borrowers pay the lowest interest rates. Interest rates are higher for borrowers lower on the list. As of August 2007, the best junk bonds (BB+) pay about 4½ percentage points more than the best investment grade bonds.
The differences can change, though. According to the Wall Street Journal, August 20, 2007, the difference between what A borrowers pay and what the U.S. Treasury pays widened from about ¼ percentage point in July to more than 2 percentage points in mid-August. Ordinarily, the value of a bond depends on the expected value of its future payments. That is what this tutorial teaches. The widening interest rate difference from July to August 2007 suggested that something else was going on: Investors were panicking.
They were avoiding buying corporations' bonds because other investors were doing the same, and the investors were afraid that they would not be able to resell the bonds if they bought them. One day, the demand for commercial paper -- corporations' I.O.U.s -- almost disappeared. Sales of commercial paper that usually took an hour were taking most of the day. It was like a stampede. If you are in the middle of one, you have no choice but to run with the crowd. This was scary, because a corporation that cannot "roll over" its bonds is in the same mess as a homeowner who has a balloon payment due and cannot refinance. Central banks in Europe and the U.S. stopped the stampede by loaning money at lowered interest rates to banks that bought corporate bonds. That seems to have partly worked. Commercial paper is getting bought and sold, but the interest rate spread is still high.
(Some commentators wonder why governments could not similarly encourage bridge loans to hard-pressed homeowners. That is a topic for another time!)
The mortgages in the mortgage-backed securities were risky to various degrees, depending on such things as how big the down payment was, what the income of the homebuyer family was, and how much credit card or other debt the family was carrying. "Prime" mortgages are home loans with big down payments to families with enough income to meet standards and little or no other debt. "Sub-prime" mortgages are home loans with looser requirements. There are lots of gradations of risk within the sub-prime category. Standard & Poor's, Moody's, and other rating companies reviewed the mortgages and estimated, based on recent experience, the probability that each mortgage would be paid. Banks bundled higher probability mortgages together and sold them as packages -- mortgage-backed securities -- to investors. Standard & Poor's and other raters gave these securities investment-grade ratings (see table). Those securities could be sold to pension funds and other institutions looking for good returns with low risk. Meantime, the low-probability mortgages were bundled into junk-rated securities. Those went to buyers able and willing to take more risk.
In ordinary times, this can work fine. It did work, up until mid-2006. Homeowners occasionally fall behind on their mortgages for individual reasons, like job loss, divorce, or expensive medical care. These independent random events were expected. Pooling mortgages into groups spreads risk, just like insurance, and the higher interest rates on the riskier securities covered the losses, just as insurance premiums ordinarily do.
Then the housing bubble started to deflate. Banks and mortgage companies had been lending to riskier and riskier people to buy houses. By late 2006, this was backfiring. A growing number of homebuyers were failing to make even their first mortgage payment. Banks pulled back on making risky new loans. This reduced the flow of homebuyers. House prices stopped rising, and started falling in some areas. Homeowners who were counting on refinancing their mortgages could not do so, because they could not get a new mortgage loan as big as the old one. Their houses were not worth as much. Defaults were no longer independent events. Lots of homebuyers were defaulting at once, being stuck with loans that they could not refinance and houses that they could not sell except at a loss.
Ratings companies realized that the past experience that they had been using to assess risk no longer applied. In March 2007, Standard & Poor's was predicting that housing prices would be flat in 2007 and then rise in 2008. By July 2007, Standard & Poor's was predicting an 8% fall in housing prices into early 2008. That month, Moody's and Standard & Poor's lowered the ratings on about a thousand securities that were based on sub-prime mortgage loans. Big international investors got worried about lending money to financial corporations that were counting on income from these mortgage-backed securities. Then the big investors got worried that other big investors were worried about lending to financial corporations held a lot of mortgage-backed securities. No one wants to be the last person to lend money to a loser. That sparked the panic described above.
The ratings companies defend their risk-assessment methodology. They say that they are used to being blamed when things go sour.
Sept. 2008 update: In mid-September, 2008, after the U.S. Government declined to bail out the failing Lehman Brothers investment bank, the perceived difference in risk between the U.S. Treasury and big banks widened considerably. On one morning (Sept. 18), the "TED" spread, which is the difference between the interest rate on three-month U.S. Treasury bills (3-month I.O.U.'s) and the London Interbank Offered Rate (what big banks charge other big banks for overnight loans), jumped to 3 percentage points. In calmer times (like March 2008), the interest rate difference was about 1/4 of a percentage point. Banks with spare cash flocked to U.S. Treasury bills, driving the interest rate on 3-month bills down to 0.05%. No, that is not a typographical error. Banks were so eager to lend their money to a safe borrower that the U.S. was borrowing at an interest rate of 1/20th of a percent. Lending among banks was almost nil, until the U.S. and other governments announced measures to pump money into the world's banking system.
Review questions:
- A fair game is a game that ...
- If investors think that the probability has gone up that a certain security will not pay off, what happens to the price of the security?
- Can you separate risk from uncertainty?
Thanks for participating! The follow-up tutorial is about risk aversion.