
By Ron Piccinini, PhD
Director of Product Development

How can you tell if someone went to Harvard? They will tell you within five minutes of meeting them, as the popular joke goes. Similarly, ask any freshly minted finance MBA or CFA candidate about portfolio construction, and chances are high that you will hear about ‘beta’ in short order. As most advisors know, beta is the key statistic in Modern Portfolio Theory (MPT), and it describes the volatility of a stock or asset class relative to the market. According to the theory, the expected return of a stock depends solely on its sensitivity to the equity risk premium, a.k.a. its beta. If your client needs a higher expected return on her portfolio, you should increase the allocation to high-beta assets. Conversely, you should increase the portion of low-beta assets for a hypothetical client seeking lower expected returns. In an informationally efficient market, high expected returns should not be available without taking high levels of risk, so beta also became a measure of risk along the way. If the only source of expected returns is an asset’s sensitivity to the market (beta), then only stocks with betas greater than 1.00 will deliver returns greater than the market’s, and since risk and return go hand in hand, it follows that high-beta stocks are the riskier ones. Pretty simple, isn’t it?
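In formula terms, that claim is the capital asset pricing model: expected return equals the risk-free rate plus beta times the equity risk premium. Here is a minimal sketch of that relationship in Python; the rates and betas are purely illustrative assumptions, not estimates for any real security.

```python
# Minimal sketch of the CAPM expected-return relationship described above.
# All inputs are illustrative assumptions, not estimates for any real security.

def capm_expected_return(beta: float, risk_free: float, expected_market: float) -> float:
    """Expected return = risk-free rate + beta * (expected market return - risk-free rate)."""
    return risk_free + beta * (expected_market - risk_free)

risk_free = 0.03        # hypothetical risk-free rate
expected_market = 0.08  # hypothetical expected market return

for label, b in [("low beta", 0.6), ("market beta", 1.0), ("high beta", 1.4)]:
    expected = capm_expected_return(b, risk_free, expected_market)
    print(f"{label} ({b:.1f}): expected return = {expected:.1%}")
```

Under these assumed inputs, the low-beta asset is expected to earn less than the market and the high-beta asset more, which is the entire logic behind dialing risk up or down with beta.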

And yet, I have never heard an advisor mention beta. When I have asked about the mighty MPT risk statistic so often printed on their documentation, the most common answer I receive is, “We don’t talk about that with clients.” Could it be that financial advisors lack the sophistication to understand such a simple yet powerful concept? Or is there something deeper going on?

To illustrate the problem, let’s take a look at Berkshire Hathaway (BRK.A), Vanguard’s International Growth Fund (VWIGX), and Citigroup (C). These are not penny stocks; they are storied names with long market histories and a vast institutional following. Surely beta “works” for these securities.

If we divide the last 30 years into 10 segments of three years each and compute beta for each of the periods, we get the results in Table 1:

Table 1: Realized Values of Beta through Time

Symbol | 1987-1990 | 1990-1993 | 1993-1996 | 1996-1999 | 1999-2002 | 2002-2005 | 2005-2008 | 2008-2011 | 2011-2014 | 2014-2017
BRK.A  |      0.65 |      0.79 |      0.61 |      0.55 |      0.39 |      0.25 |      0.24 |      0.60 |      0.83 |      0.75
VWIGX  |      0.16 |      0.30 |      0.14 |      0.32 |      0.40 |      0.44 |      0.92 |      0.92 |      0.99 |      0.86
C      |      0.97 |      1.37 |      1.73 |      1.62 |      1.25 |      1.27 |      1.01 |      1.98 |      1.70 |      1.37

Beta is designed to explain long-term returns, yet its value appears highly unstable, even at short time horizons. Every security in Table 1 has seen its beta at least double from one three-year period to the next. Telling a client, “Mr. and Mrs. Client, I have constructed this portfolio of low-beta stocks, which will put you on a successful path to retirement. Hopefully the risk doesn’t double any time soon,” does not inspire much confidence.
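For readers who want to check this kind of instability themselves, rolling betas like those in Table 1 can be estimated in a few lines of Python. The sketch below is a generic illustration, not the exact methodology behind the table: it assumes you already have daily price series (indexed by date) for the security and for a market benchmark, and it defines beta over each three-year window as the covariance of the two return series divided by the variance of the benchmark’s returns.

```python
import pandas as pd

def beta(asset_returns: pd.Series, market_returns: pd.Series) -> float:
    """Beta = Cov(asset, market) / Var(market), computed over the dates both series share."""
    joined = pd.concat([asset_returns, market_returns], axis=1, join="inner").dropna()
    return joined.iloc[:, 0].cov(joined.iloc[:, 1]) / joined.iloc[:, 1].var()

def three_year_betas(asset_prices: pd.Series, market_prices: pd.Series) -> pd.Series:
    """Compute beta over consecutive three-year windows, as in Table 1."""
    asset_ret = asset_prices.pct_change().dropna()
    market_ret = market_prices.pct_change().dropna()
    results = {}
    start = asset_ret.index.min()
    last = asset_ret.index.max()
    while start < last:
        stop = start + pd.DateOffset(years=3)
        results[f"{start.year}-{stop.year}"] = beta(
            asset_ret.loc[start:stop], market_ret.loc[start:stop]
        )
        start = stop
    return pd.Series(results)
```

The exact values you get will differ from Table 1 depending on the data source, the benchmark chosen, and whether excess returns over a risk-free rate are used; the point is that the window-to-window instability shows up regardless of those choices.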

What if, instead of looking at beta in three-year periods, we looked at the entirety of available data? Perhaps beta fluctuates around a long-term average value?

Using 30 years of daily data to estimate beta, we get the following results: BRK.A: 0.56, VWIGX: 0.61, C: 1.50. Using quarterly data, however, we get different numbers: BRK.A: 0.75, VWIGX: 0.97, C: 1.90. As Table 2 below shows, even with 30 years to “average out,” the value of beta is highly sensitive to the frequency of the returns used to estimate it.

Table 2: Effect of Data Frequency on the Value of Beta

Symbol | Daily Data | Weekly Data | Monthly Data | Quarterly Data
BRK.A  |       0.56 |        0.61 |         0.64 |           0.75
VWIGX  |       0.61 |        0.78 |         0.92 |           0.97
C      |       1.50 |        1.80 |         1.82 |           1.90

The value of beta is highly dependent on how it is estimated. Should we use daily or weekly data? Why not microsecond data? As of this writing, Morningstar’s published betas for these three symbols were BRK.A: 0.85, VWIGX: 1.09, and C: 1.58, respectively; somewhat different from my numbers, but with the same rank order. So which is the right answer? Unfortunately, there is no right answer. It all depends on how far back we decide to go, what benchmark we use, how the ‘risk-free rate’ is defined, and the frequency of the data.
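To see the frequency effect for yourself, you can take one daily price history, resample it to weekly, monthly, and quarterly bars, and re-run the same covariance-over-variance calculation on each. A sketch, again assuming daily prices indexed by date and using the same definition of beta as above:

```python
import pandas as pd

def beta_from_prices(asset_prices: pd.Series, market_prices: pd.Series) -> float:
    """Beta = Cov(asset, market) / Var(market), computed on simple returns."""
    returns = pd.concat(
        [asset_prices.pct_change(), market_prices.pct_change()], axis=1, join="inner"
    ).dropna()
    return returns.iloc[:, 0].cov(returns.iloc[:, 1]) / returns.iloc[:, 1].var()

def beta_by_frequency(asset_prices: pd.Series, market_prices: pd.Series) -> pd.Series:
    """Estimate beta from one daily price history sampled at several frequencies."""
    estimates = {"Daily": beta_from_prices(asset_prices, market_prices)}
    # "W", "M", "Q" are pandas week-end, month-end, and quarter-end resampling rules.
    for label, rule in [("Weekly", "W"), ("Monthly", "M"), ("Quarterly", "Q")]:
        estimates[label] = beta_from_prices(
            asset_prices.resample(rule).last(), market_prices.resample(rule).last()
        )
    return pd.Series(estimates)
```

Run against the same 30-year history, the four estimates will generally disagree, as Table 2 shows: same security, same data, different betas.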

One final thought experiment: suppose you had just graduated with your finance MBA in 1987, and the “MPT Fairy” revealed to you that over the next 30 years the realized betas would be BRK.A: 0.61, VWIGX: 0.78, and C: 1.80. Based on MPT, which of these three securities would you have expected to generate the lowest return? The highest?

BRK.A returned over 8,000%, VWIGX returned about 240%, and C returned about 170%: the exact opposite of the ranking the theory predicted.

I’m starting to think that advisors are right to leave beta out of the discussion.

In conclusion, Warren Buffett famously wrote that “The riskiness of an investment is not measured by beta (a Wall Street term encompassing volatility and often used in measuring risk) but rather by the probability -- the reasoned probability -- of that investment causing its owner a loss of purchasing power over his contemplated holding period.”

Perhaps advisors could focus their client conversations on “downside dollars” instead; that is what’s ultimately relevant to their clients. Candid conversations about what a client’s statement could look like in a reasoned, bad scenario help prepare them not to be shocked into a mistake when the market turns.

Want to improve your conversations with clients and help set proper downside expectations? SmartRisk makes it easy. Try it free now. 

Get a 10-day free trial