A lot of investing is based on using standard deviation as a measure of risk. It shows up in things like the Sharpe Ratio and Mean-Variance Optimization (the “efficient frontier” from Modern Portfolio Theory).

Standard deviation is just a measure of volatility. You start with a list of numbers and then you calculate how far, on average, they stray from their mean.

`2, 4, 4, 4, 5, 5, 7, 9`

The average of those numbers is 5 and the sample standard deviation is about 2.14.

If the data were normally distributed:

- About 68% of the values would fall between (5 − 2.14) and (5 + 2.14). That is, between 2.86 and 7.14. In our data, 2 and 9 fall outside of that range.
- About 95% of the values would fall between (5 − (2 × 2.14)) and (5 + (2 × 2.14)). That is, between 0.72 and 9.28. All of our sample data fits in that range, but we only have a few values.
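Here's a quick sketch of that calculation in Python, using only the standard library. (Note that `statistics.stdev` computes the *sample* standard deviation; `statistics.pstdev` would give the population version, which is exactly 2 for this data.)

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]

mean = statistics.mean(data)  # 5
sd = statistics.stdev(data)   # sample standard deviation, about 2.14

# How many values fall within one standard deviation of the mean?
within_one = [x for x in data if mean - sd <= x <= mean + sd]
print(mean, round(sd, 2), len(within_one))  # 5, 2.14, 6
```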

There are some issues with using standard deviation alone. It doesn’t tell you much about *skewness.* That is, does the distribution of returns “lean” in one direction or another? Are you more likely to get a gain or a loss? Stocks tend to deliver frequent modest gains punctuated by occasional large losses, which gives their returns “negative skew” (a long left tail).

It also doesn’t tell you anything about *kurtosis*. That is, how tall is the peak of the curve and how fat are the tails?
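To make those two ideas concrete, here is a minimal Python sketch that computes skewness and excess kurtosis from first principles, using the same toy data as above (the function names are my own, not a standard API):

```python
import statistics

def central_moment(xs, k):
    """Average of (x - mean) ** k over the data."""
    m = statistics.mean(xs)
    return sum((x - m) ** k for x in xs) / len(xs)

def skewness(xs):
    """Positive = long right tail, negative = long left tail."""
    return central_moment(xs, 3) / central_moment(xs, 2) ** 1.5

def excess_kurtosis(xs):
    """0 for a normal distribution; positive = fatter tails."""
    return central_moment(xs, 4) / central_moment(xs, 2) ** 2 - 3

data = [2, 4, 4, 4, 5, 5, 7, 9]
print(round(skewness(data), 2))         # 0.66 -- this sample leans right
print(round(excess_kurtosis(data), 2))  # -0.22 -- slightly thinner tails than normal
```

Two datasets can share the same mean and standard deviation while having very different skewness and kurtosis, which is exactly why standard deviation alone is an incomplete picture.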

But an even bigger problem is that when we use standard deviation as a measure of risk, then we are saying that *any* variance is bad. Even when our returns are *more* than the average, that is still considered bad.

The standard deviation of equity returns in Japan between 1950 and 1959 was 43%. As a point of comparison, the standard deviation of US equity returns from 1871–2015 was 18% and the standard deviation of US government bond returns from 1900–2015 was 4.3%. So this is a **lot** higher.

But is it actually *riskier*? After all, the worst-case scenario was that you lost 5%. Mostly the standard deviation is saying, “You’re going to have great returns but I don’t know how much…could be 10% or could be 130%.”

The high standard deviation is caused (in part) by gaining 138.45% in a single year. Is that really a bad thing?

There’s another way we can measure things: use the **semideviation**. The semideviation starts with a simple observation: people don’t care about accidentally earning too much money; they only care about when returns are *less* than expected.

You calculate the average. Then you look only at the values that fall below the average and measure how *far* below the average they were, combining those shortfalls the same way a standard deviation would (square them, average, take the square root). And…that’s it really.
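As a sketch, here is one common convention in Python. (Definitions vary: some divide by the count of below-average observations rather than all observations, and some measure shortfall from a target return instead of the mean.)

```python
import statistics

def semideviation(returns):
    """Like standard deviation, but only shortfalls below the mean count."""
    mean = statistics.mean(returns)
    shortfalls = [(mean - r) ** 2 for r in returns if r < mean]
    # Divide by the total number of observations, then take the square root.
    return (sum(shortfalls) / len(returns)) ** 0.5

data = [2, 4, 4, 4, 5, 5, 7, 9]
print(round(semideviation(data), 2))  # 1.22, versus a standard deviation of ~2.14
```

For our toy data the semideviation (about 1.22) is much lower than the standard deviation (about 2.14) because the big outlier, 9, sits *above* the mean and is simply ignored.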

When we used standard deviation, Japanese stocks in 1950–1959 looked crazy risky (43%) compared to US stocks (20%). But when we look at the semideviation, Japanese stocks in that period look substantially safer: 27% for Japanese stocks versus 18% for US stocks.

That feels closer to how I think most people actually react. If you think stocks are supposed to return 6% a year, you don’t get upset when they return 9% one year. But you *do* get upset when they return 3%.

If semideviation is so great, why isn’t it used more often? Partly because standard deviation isn’t totally useless. It does tell us *something* about risk, even if we could argue there are better choices. Partly because of inertia: when Harry Markowitz introduced Modern Portfolio Theory, he used standard deviation. I can’t find a quote, but apparently he claimed that he only used standard deviation because it was well-known, not because he thought it was the best choice.