Flirting with Models

The Research Library of Newfound Research

Are Market Implied Probabilities Useful?

This post is available as a PDF download here.

Summary

  • Using historical data from the options market along with realized subsequent returns, we can translate risk-neutral probabilities into real-world probabilities.
  • Market implied probabilities are risk-neutral probabilities derived from the derivatives market. They incorporate both the probability of an event happening and the equilibrium cost associated with it.
  • Since investors have the flexibility of designing their own portfolios, real-world probabilities of event occurrences are more relevant to individuals than are risk-neutral probabilities.
  • With a better handle on the real-world probabilities, investors can make portfolio decisions that are in line with their own goals and risk tolerances, leaving the aggregate decision making to the policy makers.

Market-implied probabilities are just as the name sounds: weights that the market is assigning to an event based upon current prices of financial instruments. By deriving these probabilities, we can gain an understanding of the market’s aggregate forecast for certain events. Fortunately, the Federal Reserve Bank of Minneapolis provides a very nice tool for visualizing market-implied probabilities without us having to derive them.[1]

For example, say that I am concerned about inflation over the next 5 years. I can see how the probability of a large increase has been falling over time and how the probability of a large decrease has fallen recently, with both currently hovering around 15%.

Historical Market Implied Probabilities of Large Moves in Inflation

Source: Minneapolis Federal Reserve

I can also look at the underlying probability distributions for these predictions, which are derived from the derivatives market, and compare the changes over time.

Market Implied Probability Distributions for Moves in Inflation

Source: Minneapolis Federal Reserve

From this example, we can judge that not only has the market’s implied inflation forecast increased, but the precision has also increased (i.e. lower standard deviation) and the probabilities have been skewed to the left with fatter tails (i.e. higher kurtosis).

Inflation is only one of many variables analyzed.

Also available is implied probability data for the S&P 500, short and long-term interest rates, major currencies versus the U.S. dollar, commodities (energy, metal, and agricultural), and a selection of the largest U.S. banks.

With all the recent talk about low volatility, the data for the S&P 500 over the next 12 months is likely to be particularly intriguing to investors and advisors alike.

Historical Market Implied Probabilities of Large Moves in the S&P 500

Source: Minneapolis Federal Reserve

The current market implied probabilities for both large increases and decreases (i.e. greater than a 20% move) are the lowest they have been since 2007.

Interpreting Market Implied Probabilities

A qualitative assessment of probability is generally difficult unless the difference is large. We can ask ourselves, for example, how we would react if the probability of a large loss jumped from 10% to 15%. We know that the latter case is riskier, but how does that translate into action?

The first step is understanding what the probability actually means.

Probability forecasts in weather are a good example of this necessity. Precipitation forecasts are a combination of forecaster certainty and coverage of the likely precipitation.[2] For example, if there is a 40% chance of rain, it could mean that the forecaster is 100% certain that it will rain in 40% of the area. Or it could mean that they are 40% certain that it will rain in 100% of the area.  Or it could mean that they are 80% certain that it will rain in 50% of the area.
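This decomposition is just multiplication: the probability of precipitation is forecaster certainty times expected areal coverage. A quick sketch using the pairs from the example:

```python
# Probability of precipitation = forecaster certainty x areal coverage.
# Very different forecasts can produce the same headline number.
def prob_of_precipitation(certainty: float, coverage: float) -> float:
    return certainty * coverage

print(prob_of_precipitation(1.00, 0.40))  # certain, but only 40% of the area
print(prob_of_precipitation(0.40, 1.00))  # 40% certain, everywhere
print(prob_of_precipitation(0.80, 0.50))  # 80% certain, half the area
```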

Once you know what the probability even represents, you can have a better grasp on whether you should carry an umbrella.

In the case of market implied probabilities, what we have is the risk-neutral probability. These are the probabilities of an event given that investors are risk neutral; they factor in both the likelihood of an event and the cost in the given state of the world. These are not the real-world probabilities of the market moving by a given amount. In fact, they can change over time even if the real-world probabilities do not.

To illustrate these differences between a risk-neutral probability and a real-world probability, consider a simple coin flip game. The coin is flipped one time. If it lands on heads, you make $1, and if it lands on tails, you lose $1.

The coin is fair, so the probability for the coin flip is 50%. How much would you pay to play this game?

If your answer is nothing, then you are risk neutral, and the risk neutral probability is also 50%.

However, risk averse players would say, “you have to pay me to play that game.” In this case, the risk neutral probability of a tails is greater than 50% because of the downside that players ascribe to that event.

Now consider a scenario where a tails still loses $1, but a heads pays out $100.  Chances are that even very risk-averse players would pay close to $1 to play this game.

In this case, the risk neutral probability of a heads would be much less than 50%, since a very low price relative to the large upside implies that the market is acting as if the good state is unlikely.

But in all cases, the actual likelihoods of heads and tails never changed; they still had a 50% real-world probability of occurring.
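A minimal sketch of backing out risk-neutral probabilities from the game prices above (the specific prices are illustrative assumptions):

```python
def risk_neutral_prob_heads(price: float, heads_pays: float, tails_pays: float) -> float:
    """Solve price = q * heads_pays + (1 - q) * tails_pays for q."""
    return (price - tails_pays) / (heads_pays - tails_pays)

# Risk-neutral player pays $0 for the +$1/-$1 game: q = 50%, matching the real world.
print(risk_neutral_prob_heads(0.0, 1.0, -1.0))          # 0.5

# A risk-averse player demands $0.10 to play (price = -$0.10):
# the implied probability of tails rises above 50%.
print(1 - risk_neutral_prob_heads(-0.10, 1.0, -1.0))

# +$100/-$1 game priced at only $1: the implied probability of heads collapses,
# even though the coin is still fair.
print(risk_neutral_prob_heads(1.0, 100.0, -1.0))
```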

As with the game, investors who operate in the real world are generally risk averse. We pay premiums for insurance-like investments to protect in the states of the world we dread the most. As such, we would expect the risk-neutral probability of a “bad” event (e.g. the market down more than 20%) to be higher than the real-world probability.

Likewise, we would expect the risk-neutral probability of a “good” event (e.g. the market up more than 20%) to be lower than the real-world probability.

How Market Implied Probabilities Are Calculated

Note (or Warning): This section contains some calculus. If that is not of interest, feel free to skip to the next section; you won’t miss anything. For those interested, we will briefly cover how these probabilities are calculated to see what (or who), exactly, in the market implies them.

The options market contains call and put options over a wide array of strike prices and maturities. If we assume that the market is free from arbitrage, we can transform the price of put options into call options through put-call parity.[3]

In theory, if we knew the price of a call option for every strike price, we could calculate the risk-neutral probability distribution, f_RN, as the (discounted) second derivative of the call price with respect to the strike price:

f_RN(K) = e^(r(T−t)) ∂²C/∂K²

where r is the risk-free rate, C is the price of a call option, K is the strike price, and T−t is the time to option maturity.

Since options do not exist at every strike price, a curve is fit to the data to make it a continuous function that can be differentiated to yield the probability distribution.
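As a numerical illustration of this procedure, we can price a grid of calls under Black-Scholes (our own illustrative choice of model and parameters, not the Fed's), difference them twice in strike, and recover a proper probability density:

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, r, sigma, tau):
    """Black-Scholes price of a European call."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)

S, r, sigma, tau = 100.0, 0.02, 0.20, 1.0
K = np.linspace(50.0, 200.0, 1501)   # dense, evenly spaced strike grid
C = bs_call(S, K, r, sigma, tau)

# f_RN(K) = e^{r(T-t)} * d2C/dK2, computed with finite differences
dK = K[1] - K[0]
f_rn = np.exp(r * tau) * np.gradient(np.gradient(C, dK), dK)

print(f_rn.sum() * dK)        # integrates to ~1, as a density should
print((K * f_rn).sum() * dK)  # mean ~ forward price S * e^{r * tau} ~ 102
```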

Immediately, we see that the probabilities are set by the options market.

Are Market Implied Probabilities Useful?

Feldman et al. (2015), from the Minneapolis Fed, assert that market-based probabilities are a useful tool for policy makers.[4] Their argument centers on the fact that risk-neutral probabilities encapsulate both the probability of an event occurring – the real-world probability – and the cost/benefit of the event.

Assuming broad access to the options market, households or those acting on behalf of households can express their views on the chances of the event happening and the willingness to exchange cash flows in different states of the world by trading the appropriate options.

In the paper, the authors admit two main pitfalls:

  1. Participation – An issue can arise here since the people whose welfare the policy-makers are trying to consider may not be participating. Others outside the U.S. may also be influencing the probabilities.
  2. Illiquidity – Options do not always trade frequently enough in the fringes of the distribution where investors are usually most concerned. Because of this, any extrapolation must be robust.

However, they also refute many common arguments against using risk-neutral probabilities.

  1. These are not “true” probabilities – The fact that these market implied probabilities are model-independent and derived from household preferences rather than from a statistician’s model, with its own biased assumptions, is beneficial, especially since these market probabilities account for resource availability.
  2. No Household is “Typical” – In equilibrium, all households should be willing to rearrange their cash flows in different states of the world as long as the market is complete. Therefore, a policy-maker aligns their beliefs with those of the households in aggregate by using the market-based probabilities.

We have covered how policymakers often do not forecast very well themselves[5], which Ellison and Sargent argue may be intentional, stating that the FOMC may purposefully forecast incorrectly in order to form policy that is robust to model misspecification.[6]

Where a problem could arise is when an individual investor (i.e. a household) makes a decision for their own portfolio based on these risk-neutral probabilities.

We agree that having a financial market makes a “typical” investor more relevant than the “average fighter pilot” example in our previous commentary.[7]  But what a central governing body uses to make decisions is different from what may be relevant to an individual.

The ability to be flexible is key. In this case, an investor can construct their own portfolio.  It would be like a pilot constructing their own plane.

Getting to Real World Probabilities

Using the method outlined in Vincent-Humphreys and Noss (2012), we can transform risk-neutral probabilities into real-world probabilities, assuming that investor risk preferences are stable over time.[8]

Without getting too deep into the mathematical framework, the basic premise is that if we have a probability density function (PDF) for the risk-neutral probability, f_RN, with cumulative distribution function (CDF) F_RN, we can multiply it by a calibration function c, evaluated at the risk-neutral CDF, to obtain the real-world probability density function: f_RW(x) = c(F_RN(x)) · f_RN(x).

The beta distribution is a suitable choice for the calibration function.[9] Using a beta distribution balances parsimony – it has only two parameters – with flexibility, since it can either preserve the risk-neutral distribution exactly (when both parameters equal 1) or shift the mean and adjust the variance, skew, and kurtosis.
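A sketch of this recalibration step, using a toy normal risk-neutral density and placeholder beta parameters (both are our own illustrative assumptions, not values from the paper):

```python
import numpy as np
from scipy.stats import beta, norm

def real_world_pdf(x, rn_pdf, rn_cdf, j, k):
    """f_RW(x) = beta.pdf(F_RN(x); j, k) * f_RN(x)."""
    return beta.pdf(rn_cdf(x), j, k) * rn_pdf(x)

rn = norm(0.0, 0.15)              # toy risk-neutral 12-month return distribution
x = np.linspace(-0.6, 0.6, 2001)
dx = x[1] - x[0]

f_rw = real_world_pdf(x, rn.pdf, rn.cdf, j=1.6, k=1.0)

print((f_rw * dx).sum())      # still integrates to ~1
print((x * f_rw * dx).sum())  # mean pulled to the right of the risk-neutral 0%
```

With the second parameter fixed at 1 and the first above 1, the calibration density is increasing on [0, 1], which is what tilts probability mass toward better outcomes.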

The beta distribution parameters are calculated using the realized values of the variable that the market implied probability describes (e.g. the change in the S&P 500, interest rates, commodity prices, etc.) over each subsequent time period.

Deriving the Real-World Probability for a Large Move in the S&P 500 

We have now covered what market-implied probabilities are and how they are calculated and discussed their usefulness for policy makers.

But individual investors price risk differently based on their own situations and preferences. Because of this, it is helpful to strip off the market-implied costs that are baked into the risk-neutral probabilities. The real-world probabilities could then be used to weigh stress testing scenarios or evaluate the cost of other risk management techniques that align more with investor goals than option strategies.

Using the framework outlined above, we can go through an example of transforming the market implied probabilities of large moves in the S&P 500 into their real-world probabilities.

Statistical aside: The options data starts in 2007, and with 10 years of data, we only have 10 non-overlapping data points, which reduces the power of the maximum likelihood estimate used to fit the beta distribution. However, with options expiring twice a month, we have 24 separate data sets to use for calculating standard errors. Since we are concerned more about the potential differences between the risk-neutral and real-world distributions, we could use the rolling 12-month periods and still see the same trends. As with any analysis with overlapping periods, there can be significant autocorrelation to deal with. By using the 6-month distribution data from the Minneapolis Fed, we could double the number of observations.

Since the Minneapolis Fed reports the market implied (risk-neutral) probability distribution and its summary statistics numerically, we must first translate the distribution into a functional form to extend the analysis. Based on the data and the summary statistics, the distribution is neither normal nor log-normal. It is left-skewed and fat-tailed most of the time.

Market Implied Probability Distributions for Moves in the S&P 500

Source: Minneapolis Federal Reserve

We will assume that the distribution can be parameterized using a skewed generalized t-distribution, which allows for these properties and also encompasses a variety of other distributions including the normal and t-distributions.[10]  It has 5 parameters, which we will fit by matching the moments (mean, variance, skewness and kurtosis) of the distribution along with the 90th percentile value, since that tail of the distribution is generally the smaller of the two.[11]
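Python's scipy does not ship a skewed generalized t, so as a simplified stand-in here is the same moment-matching idea with the three-parameter skew-normal (the target statistics are hypothetical, not the Fed's reported values):

```python
import numpy as np
from scipy import optimize, stats

# Hypothetical reported summary statistics for a left-skewed return distribution
target_mean, target_std, target_skew = 0.01, 0.15, -0.60

def moment_errors(params):
    a, loc, scale = params
    if scale <= 0:
        return [1e6, 1e6, 1e6]   # keep the optimizer in the valid region
    mean, var, skew = stats.skewnorm.stats(a, loc=loc, scale=scale, moments="mvs")
    return [mean - target_mean, np.sqrt(var) - target_std, skew - target_skew]

sol = optimize.least_squares(moment_errors, x0=[-2.0, 0.05, 0.15])
a, loc, scale = sol.x
print(stats.skewnorm.stats(a, loc=loc, scale=scale, moments="mvs"))
```

In the full five-parameter case, the 90th percentile match would simply be appended as additional residuals alongside the moments.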

We can check the fits using the reported median and the 10th percentile values to see how well they match.

Fit Percentile Values vs. Reported Values

Source: Minneapolis Fed.  Calculations by Newfound Research. 

There are instances where the reported distribution is bi-modal and would not be as accurately represented by the skewed generalized t-distribution, but, as the above graph shows, the quantiles where our interest is focused line up decently well.

Now that we have our parameterized risk-neutral distribution for all time periods, the next step is to input the subsequent 12-month S&P 500 return into the CDF calculated at each point in time. While we don’t expect this risk-neutral distribution to necessarily produce a good forecast of the market return, this step produces the data needed to calibrate the beta function.
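The calibration data and beta fit can be sketched with synthetic inputs (the distributions and sample size here are stand-ins, not the actual options data):

```python
import numpy as np
from scipy.stats import beta, norm

rng = np.random.default_rng(0)

# Stand-in: the risk-neutral CDF says N(0%, 15%), but realized 12-month returns
# behave like N(6%, 15%) -- i.e., the market "overpays" for downside protection.
rn_cdf = norm(0.0, 0.15).cdf
realized_returns = rng.normal(0.06, 0.15, size=240)

pit = rn_cdf(realized_returns)    # CDF evaluated at subsequent realized returns
j, k, _, _ = beta.fit(pit, floc=0, fscale=1)   # MLE with support pinned to [0, 1]
print(j, k)   # j > k: the calibration function tilts density to the right
```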

The graph below shows this CDF result over the rolling periods.

Cumulative Probabilities of Realized 12-month S&P 500 Returns using the Risk-Neutral Distribution from the Beginning of Each Period

Source: Minneapolis Fed and CSI.  Calculations by Newfound Research. 

The persistence of high and low values is evidence of the autocorrelation issue we discussed previously since the periods are overlapping.

The beta distribution function used to transition from the risk-neutral distribution to the real-world distribution has parameters j = 1.64 and k = 1.00 with standard errors of 0.09 and 0.05, respectively.

We can see how this function changes at the endpoints of the 95% confidence intervals for each parameter as a way to assess the uncertainty in the calibration.

Estimated Calibration Functions for 12-month S&P 500 Returns

Source: Minneapolis Fed and CSI.  Calculations by Newfound Research. Data from Jan 2007 to Nov 2017.

When we transform the risk-neutral distribution into the real-world distribution, the calibration function values that are less than 1 in the left tail reduce the probabilities of large market losses.

In the right tail, the calibration estimates show that real-world probabilities could be higher or lower than the risk-neutral probabilities depending on the second parameter’s value in the beta distribution (this corresponds to k being either greater than or less than 1).

With the risk-neutral distribution and the calibrated beta distribution, we now have all the pieces to calculate the real-world distribution at any point in the options data set.

The graph below shows how these functions affect the risk-neutral probability density using the most recent option data. As expected, much more of the density is centered around the mode, and the distribution is skewed to the right, even using the bounds of the confidence intervals (CI) for the beta distribution parameters. 

Risk Neutral and Real-World Probability Densities

Source: Minneapolis Fed and CSI.  Calculations by Newfound Research. Data as of 11/15/17. Past performance is no guarantee of future results.

 

                                      Risk Neutral   Real-World   Real-World (Range Based on Calibration)
Mean Return                           0.1%           6.4%         4.2% – 8.3%
Probability of a large loss (>20%)    10.6%          2.6%         1.5% – 4.3%
Probability of a large gain (>20%)    1.7%           2.8%         1.7% – 4.4%

 Source: Minneapolis Fed and CSI.  Calculations by Newfound Research. Data as of 11/15/17. Past performance is no guarantee of future results.

Based on this analysis, we see some interesting things occurring.

  • The mean return is considerably higher than the values suggested by many large firms, such as JP Morgan, BlackRock, and Research Affiliates.[12] Those estimates are generally for 7-10 years, so this doesn’t rule out having a good 2018, which is what the options market is showing.
  • The real-world probability of a large loss is considerably lower than the 10.6% risk-neutral probability. When firms like GMO are indicating that 10% levels are already unreasonably low, their assessment of complacency in the market would only get stronger.[13]
  • The lower end of the range for the real-world probability of a large gain is in line with the risk-neutral probability, suggesting that investors are seeking out risks (lower risk aversion) in a market with depressed yields on fixed income and low current volatility.

This also shows how looking at market implied probabilities can paint a skewed picture of the chances of an event occurring.

However, we must keep in mind that these real-world probabilities are still derived from the market-implied probabilities. In an efficient market world, all risks would correctly be priced into the market. But we know from the experience during the Financial Crisis that that is not always the case.

Our recommendation is to take all market probabilities with a grain of salt. Just because having a coin land on heads five times in a row has a probability of less than 4% doesn’t mean we should be surprised if it happens once. And coin flipping is something that we know the probability for.

Whether the market probabilities we use are risk-neutral or real-world, there are a lot of assumptions that go into calculating them, and the consequences of being wrong can have a large impact on portfolios. Risk management is important if the event occurs regardless of how likely it is to occur.

As with the weather, a 10% chance of a large loss versus a 4% chance is not a big difference in absolute terms, but a large portfolio loss is likely more devastating than getting rained on a bit should you decide not to bring an umbrella.

Conclusion

Market implied probabilities are risk-neutral probabilities derived from the derivatives market. If we assume that the market is efficient and that there is sufficient investor participation in these markets, then these probabilities can serve as a tool for governing organizations to adjust policy going forward.

However, these probabilities factor in both the actual probability of an event and the perceived cost to investors. Individual investors will attribute their own costs to such events (e.g. a retiree could be much more concerned about a 20% market drop than someone in the beginning of their career).

If individuals want to assess the probability of the event actually happening in order to make portfolio decisions, then they have to focus on the real-world probabilities.  Ultimately, an investor’s cost function associated with market events depends more on life circumstances. While a bad state of the world for an investor can coincide with a bad state of the world for the market (e.g. losing a job when the market tanks), risk in an individual’s portfolio should be managed for the individual, not the “typical household”.

While the real-world probability of an event is typically dependent on an economic or statistical model, we have presented a way to translate the market implied probabilities into real-world probabilities.

With a better handle on the real-world probabilities, investors can make portfolio decisions that are in line with their own goals and risk tolerances.

[1] https://www.minneapolisfed.org/banking/mpd

[2] https://www.weather.gov/ffc/pop

[3] https://en.wikipedia.org/wiki/Put%E2%80%93call_parity

[4] https://www.minneapolisfed.org/~/media/files/banking/mpd/optimal_outlooks_dec22.pdf

[5] https://blog.thinknewfound.com/2015/03/weekly-commentary-folly-forecasting/

[6] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2160157

[7] https://blog.thinknewfound.com/2017/09/the-lie-of-averages/

[8] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2093397

[9] The beta distribution takes arguments between 0 and 1, inclusive, and has a non-decreasing CDF. It was also used in Fackler and King (1990) – https://www.jstor.org/stable/1243146.

[10] https://cran.r-project.org/web/packages/sgt/vignettes/sgt.pdf

[11] Since we have 5 unknown parameters, we have to add in this fifth constraint. We could also have used the 10th percentile value or the median. Whichever, we use, we can see how well the other two align with the reported values.

[12] https://interactive.researchaffiliates.com/asset-allocation/

[13] https://www.gmo.com/docs/default-source/research-and-commentary/strategies/asset-allocation/the-s-p-500-just-say-no.pdf

It’s Long/Short Portfolios All The Way Down

There is a PDF version of this post available for download here.

Summary

  • Long/short portfolios are helpful tools for quantifying the value-add of portfolio changes, especially for active strategies.
  • In the context of fees, we can isolate the implicit fee of the manager’s active decisions (active share) relative to a benchmark and ask ourselves whether we think that hurdle is attainable.
  • Bar-belling low fee beta with high active share, higher fee managers may actually be cheaper to incorporate than those managers found in the middle of the road.
  • However, as long as investors still review their portfolios on an itemized basis, this approach runs the risk of introducing greater behavioral foibles than a more moderated – yet ultimately more expensive – approach.

After a lecture on cosmology and the structure of the solar system, William James was accosted by a little old lady.

“Your theory that the sun is the centre of the solar system, and the earth is a ball which rotates around it has a very convincing ring to it, Mr. James, but it’s wrong. I’ve got a better theory,” said the little old lady.

“And what is that, madam?” inquired James politely.

“That we live on a crust of earth which is on the back of a giant turtle.”

Not wishing to demolish this absurd little theory by bringing to bear the masses of scientific evidence he had at his command, James decided to gently dissuade his opponent by making her see some of the inadequacies of her position.

“If your theory is correct, madam,” he asked, “what does this turtle stand on?”

“You’re a very clever man, Mr. James, and that’s a very good question,” replied the little old lady, “but I have an answer to it. And it is this: The first turtle stands on the back of a second, far larger, turtle, who stands directly under him.”

“But what does this second turtle stand on?” persisted James patiently.

To this the little old lady crowed triumphantly. “It’s no use, Mr. James – it’s turtles all the way down.”

— J. R. Ross, Constraints on Variables in Syntax 1967

The Importance of Long/Short Portfolios

Anybody who has read our commentaries for some time has likely found that we have a strong preference for simple models.  Justin, for example, has a knack for turning just about everything into a conversation about coin flips and their associated probabilities.  I, on the other hand, tend to lean towards more hand-waving, philosophical arguments (e.g. The Frustrating Law of Active Management[1] or that every strategy is comprised of a systematic and an idiosyncratic component[2]).

While not necessarily 100% accurate, the power of simplifying mental models is that it allows us to explore concepts to their – sometimes absurd – logical conclusion.

One such model that we use frequently is that the difference between any two portfolios can be expressed as a dollar-neutral long/short portfolio.  For us, it’s long/short portfolios all the way down.

This may sound like philosophical gibberish, but let’s consider a simple example.

You currently hold Portfolio A, which is 100% invested in the S&P 500 Index.  You are thinking about taking that money and investing it entirely into Portfolio B, which is 100% invested in the Barclay’s U.S. Aggregate Bond Index.  How can you think through the implications of such a change?

One way of thinking through such changes is to recognize that there is some transformation that takes us from Portfolio A to Portfolio B, i.e. Portfolio A + X = Portfolio B.

We can simply solve for X by taking the difference between Portfolio B and Portfolio A.  In this case, that difference would be a portfolio that is 100% long the Barclay’s U.S. Aggregate Bond Index and 100% short the S&P 500 Index.

Thus, instead of saying, “we’re going to hold Portfolio B,” we can simply say, “we’re going to continue to hold Portfolio A, but now overlay this dollar-neutral long/short portfolio.”

This may seem like an unnecessary complication at first, until we realize that any differences between Portfolio A and B are entirely captured by X.  Focusing exclusively on the properties of X allows us to isolate and explore the impact of these changes on our portfolio, and it generalizes to cases where our allocation to X differs from 100%.

Re-Thinking Fees with Long/Short Portfolios

Perhaps most relevant, today, is the use of this framework in the context of fees.

To explore, let’s consider the topic in the form of an example.  The iShares S&P 500 Value ETF (IVE) costs 0.18%, while the iShares S&P 500 ETF (IVV) is offered at 0.04%.  Is it worth paying that extra 0.14%?

Or, put another way, does IVE stand a chance to make up the fee gap?

Using the long/short framework, one way of thinking about IVE is that IVE = IVV + X, where X is the long/short portfolio of active bets.

But are those active bets worth an extra 0.14%?

First, we have to ask, “how much of the 0.18% fee is actually going toward IVV and how much is going toward X?”  We can answer this using a concept called active share, which measures how much of IVE is made up of IVV and how much is made up of X.

Active share can be easily explained with an example.[3]  Consider having a portfolio that is 50% stocks and 50% bonds, and you want to transition it to a portfolio that is 60% stocks and 40% bonds.

In essence, your second portfolio is equal to your first plus a portfolio that is 10% long stocks and 10% short bonds.  Or, equivalently, we can think of the second portfolio as equal to the first plus a 10% position in a portfolio that is 100% long stocks and 100% short bonds.

Through this second lens, that 10% number is our active share.
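The 10% figure drops out of the standard active share formula: one half the sum of absolute weight differences. A quick sketch:

```python
def active_share(portfolio: dict, benchmark: dict) -> float:
    """Active share = 0.5 * sum of |portfolio weight - benchmark weight|."""
    assets = set(portfolio) | set(benchmark)
    return 0.5 * sum(abs(portfolio.get(a, 0.0) - benchmark.get(a, 0.0)) for a in assets)

first  = {"stocks": 0.50, "bonds": 0.50}
second = {"stocks": 0.60, "bonds": 0.40}
print(active_share(second, first))   # ~0.10, the 10% from the example
```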

Returning to our main example, IVE has a reported active share of 42% against the S&P 500[4].

Hence, we can say that IVE = 100% IVV + 42% X.  This also means that 0.14% of the 0.18% fee is associated with our active bets, X.  (We calculate this as 0.18% – 0.04% × 100%, i.e. the total fee less the fee on the full IVV position.)

If we take 0.14% and divide it by 42%, we get the implicit fee that we are paying for our active bets.  In this case, 0.333%.

So now we have to ask ourselves, “do we think that a long/short equity portfolio can return at least 0.333%?”  We might want to dive more into exactly what that long/short portfolio looks like (i.e. what are the actual active bets being made by IVE versus IVV), but it does not seem so outrageous.  It passes the sniff test.

What if IVE were actually 0.5% instead?  Now we would say that 0.46% of the 0.5% is going towards our 42% position in X.  And, therefore, the implicit amount we’re paying for X is actually 1.09%.

Am I confident that an equity long/short value portfolio can clear a hurdle of 1.09% with consistency?  Much less so.  Plus, the fee now eats a much more significant part of any active return generated.  E.g. If we think the alpha from the pure long/short portfolio is 3%, now 1/3rd of that is going towards fees.
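The fee arithmetic in these examples generalizes to a one-line function (fees and active share as quoted in the text):

```python
def implied_active_fee(total_fee: float, benchmark_fee: float, active_share: float) -> float:
    """Fee attributable to the active bets, per unit of active share."""
    return (total_fee - benchmark_fee) / active_share

print(implied_active_fee(0.0018, 0.0004, 0.42))   # IVE vs. IVV: ~0.333%
print(implied_active_fee(0.0050, 0.0004, 0.42))   # at a hypothetical 0.50% fee: ~1.1%
```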

With this framework in mind, it is no surprise that active managers have historically struggled to beat their benchmarks.  Consider that, according to Morningstar[5], the dollar-weighted average fee paid to passive index funds was 0.25% in 2000, whereas it was 1% for active funds.

If we assume a very generous 50% active share for those active funds, we can use the same math as before to find that we were, in essence, paying a 1.50% fee for the active bets ((1.00% – 0.25%) / 50%).  That’s a high hurdle for anyone to overcome.

And the closet indexers?  Let’s be generous and assume they had an active share of 20% (which, candidly, is probably high if we’re calling them closet indexers).  This puts the implied fee at 3.75%!  No wonder they struggled…

Today, the dollar-weighted average expense ratio for passive funds is 0.17% and for active funds it’s 0.75%.  To have an implied active fee of less than 1%, active funds at those fee levels need an active share of at least 58%.[6]

Conclusion

As the ETF fee wars rage on and fees for standard benchmark exposure plummet on a near-daily basis, the only way an active manager can continue to justify a high fee is with an exceptionally high active share.

We would argue that those managers caught in between – with average fees and average active share – are the most at risk of being disintermediated.  Most investors would actually be better off splitting the exposure into cheaper beta solutions and more expensive, high active share solutions.  Bar-belling low-fee beta with high active share, higher-fee managers may actually be cheaper to incorporate than those found in the middle of the road.

The largest problem with this approach, in our minds, is behavioral.  High active share should mean high tracking error, which means significant year-to-year deviation from a benchmark.  So long as investors still review their portfolios on an itemized basis, this approach runs the risk of introducing greater behavioral foibles than a more moderated – yet ultimately more expensive – approach.


[1] https://blog.thinknewfound.com/2017/10/frustrating-law-active-management/

[2] https://twitter.com/choffstein/status/880207624540749824

[3] Perhaps it is “examples” all the way down.

[4] See https://tools.alphaarchitect.com

[5] https://corporate1.morningstar.com/ResearchLibrary/article/810041/us-fund-fee-study–average-fund-fees-paid-by-investors-continued-to-decline-in-2016/

[6] We are not saying that we need a high active share to predict outperformance (https://www.aqr.com/library/journal-articles/deactivating-active-share). Rather, a higher active share reduces the implicit fee we are paying for the active bets.

The Frustrating Law of Active Management

A PDF version of this post is available for download here.

Summary

  • In an ideal world, all investors would outperform their benchmarks. In reality, outperformance is a zero-sum game: for one investor to outperform, another must underperform.
  • If achieving outperformance with a certain strategy is perceived as being “easy,” enough investors will pursue that strategy such that its edge is driven towards zero.
  • Rather, for a strategy to outperform in the long run, it has to be hard enough to stick with in the short run that it causes investors to “fold,” passing the alpha to those with the fortitude to “hold.”
  • In other words, for a strategy to outperform in the long run, it must underperform in the short run. We call this The Frustrating Law of Active Management.

A few weeks ago, AQR published a piece titled Craftsmanship Alpha: An Application to Style Investing[1], to which Cliff Asness wrote a further perspective piece titled Little Things Mean a Lot[2].

We’ll admit that we are partial to the title “craftsmanship alpha” because portfolio craftsmanship is a concept we spend a lot of time thinking about.  In fact, we have a whole section titled Portfolio Craftsmanship on the Investment Philosophy section of our main website.[3]  We further agree with Cliff: little things do mean a lot.  We even wrote a commentary about it in May titled Big Little Details[4].

But there was one quote from Cliff, in particular, that inspires this week’s commentary:

Let’s just make up an example. Imagine there are ten independent (uncorrelated) sources of “craftsmanship alpha” and that each adds 2 basis points of expected return at the cost of 20 basis points of tracking error from each (against some idea of a super simple “non-crafted” alternative.)  Each is thus a 0.10 Sharpe ratio viewed alone. Together they are expected to add 20 basis points to the overall factor implementation inducing 63 basis points of tracking error (20 basis points times the square-root of ten). That’s a Sharpe ratio of 0.32 from the collective craftsmanship (in addition to the basic factor returns).

[…]

But, as many have noted in other contexts, a Sharpe ratio like 0.32 can be hard to live with. Its chance of subtracting from your performance in a given year is about 37%. Its chance of subtracting over five years is about 24%. And, wait for it… over twenty years the chance it subtracts is still about 8%. That’s right. There’s a non-trivial chance your craftsmanship is every bit as good as you think, and it subtracts over two full decades, perhaps the lion’s share of your career. Such is the unforgiving, uncaring math.
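Cliff's arithmetic is easy to reproduce.  Below is a minimal sketch, under the standard assumption that annual alpha is normally distributed and independent across years (so cumulative alpha over T years has Sharpe ratio scaled by the square root of T):

```python
from math import erf, sqrt

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

alpha = 10 * 0.0002           # ten sources at 2bp each = 20bp of expected return
te = 0.0020 * sqrt(10)        # 20bp x sqrt(10) ~ 63bp of tracking error
sharpe = alpha / te           # ~0.32

# probability that cumulative alpha is negative after T years
prob_negative = {T: norm_cdf(-sharpe * sqrt(T)) for T in (1, 5, 20)}
```

Running this recovers the quoted figures: roughly 37% over one year, 24% over five, and 8% over twenty.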

Whether it is structural alpha, style premia, or craftsmanship alpha: we believe that the very uncertainty and risk that manifests as (expected) tracking error is a necessary component for the alpha to exist in the first place.

The “unforgiving, uncaring math” that is a result – the fact that you can do everything right and still get everything wrong – is a concept that in the past we have titled The Frustrating Law[5] of Active Management.

Defining The Frustrating Law of Active Management

We define The Frustrating Law of Active Management as:

For any disciplined[6] investment approach to outperform over the long run, it must experience periods of underperformance in the short run.

As if that were not frustrating enough a concept – that even if we do everything right, we still have to underperform from time-to-time – we add this corollary:

For any disciplined investment approach to underperform over the long run, it must experience periods of outperformance in the short run.

In other words, even if a competing manager does everything wrong, they should still be rewarded with outperformance at some point.  Talk about adding insult to injury.

For the sake of brevity, we will only explore the first half of the law in this commentary.  Note, however, that the corollary is simply the inverse case of the law itself.  After all, if we found an investment strategy that consistently underperformed, we could merely invert its signals and have a strategy that consistently outperforms.  If the latter is impossible, so must be the former.

For it to work, it has to be hard

Let’s say we approach you with a new investment strategy.  We’ve discovered the holy grail: a strategy that always outperforms.  It returns an extra 2% over the market, consistently, every year, after fees.

Ignoring reasonable skepticism for a moment, would you invest?  Of course you would.  This is free money we’re talking about here!

In fact, everyone we pitch to would invest.  Who wouldn’t want to be invested in such a strategy?  And here, we hit a roadblock.

Everyone can’t invest.  Relative performance is, after all, zero sum: for some to outperform, others must underperform.  Our extra return has to come from somewhere.

If we do continue to accept money into our strategy, we will begin to approach and eventually exceed capacity.  As we put money to work, we will create impact and inform the market, driving prices away from us.  As we try to buy, prices will be driven up and as we try to sell, prices will be driven down.  By chasing price, our outperformance will deteriorate.

And it needn’t even be us trading the strategy.  Once people learn about what we are doing – and how easy it is to make money – others will begin to employ the same approach.  Increasing capital flow will continue to erode the efficacy of the edge as more and more money chases the same, limited opportunities.  The growth is likely to be exponential, quickly grinding our money machine to a halt.

So, the only hope of keeping a consistent edge is in a mixture of: (1) keeping the methodology secret, (2) keeping our deployed capital well below capacity, and (3) having a structural moat (e.g. first-mover advantage, relationship-driven flow, regulatory edge, non-profit-seeking counter-party, etc).

While we believe that all asset managers have the duty to ensure #2 remains true (we highly recommend reading Alpha or Assets by Patrick O’Shaughnessy[7]), #1 pretty much precludes any manager actually trying to raise assets (with, perhaps, a few limited exceptions in the hedge fund world that can raise assets on brand alone).

The takeaway here is that if an edge is perceived as being easy to implement (i.e. not case #3 above) and easy to achieve, enough people will do it to the point that the edge is driven to zero.

Therefore, if an edge is known by many (e.g. most style premia like value, momentum, carry, defensive, trend, etc), then for it to persist over the long run, the outperformance must be difficult to capture.  Remember: for outperformance to exist, weak hands must at some point “fold” (be it for behavioral or risk-based reasons), passing the alpha to strong hands that can “hold.”

This is not just a case of perception, either.  Financial theory tells us that a strategy cannot always outperform its benchmark with certainty.  After all, if it did, we would have an arbitrage: we could go long the strategy, short the benchmark, and lock in certain profit.  As markets loathe (or, perhaps, love) arbitrage, such an opportunity should be rapidly chased away.  Thus, for a disciplined strategy to generate alpha over the long run, it must go through periods of underperformance in the short-run.

Can We Diversify Away Difficulty?

Math tells us that we should be able to stack the benefits of multiple, independent alpha sources on top of each other and simultaneously benefit from potentially reduced tracking error due to diversification.

Indeed, mathematically, this is true.  It is why diversification is known as the only free lunch in finance.

This certainly holds for beta, which derives its value from economic activity.  In theory, everyone can hold the Sharpe ratio optimal portfolio and introduce cash or leverage to hit their appropriate risk target.

Alpha, on the other hand, is explicitly captured from the hands of other investors.  Contrary to the Sharpe optimal portfolio, everyone cannot hold the Information ratio optimal portfolio at the same time.[8]  Someone needs to be on the other side of the trade.

Consider three strategies that all outperform over the long run: strategy A, strategy B, and strategy C.  Does our logic change if we learn that strategy C is simply 50% strategy A plus 50% strategy B?  Of course not!  For C to continue to outperform over the long run, it must remain sufficiently difficult to stick with in the short-run that it causes weak hands to fold.

Conclusion

For a strategy to outperform in the long run, it has to be perceived as hard: hard to implement or hard to hold.  For public, liquid investment styles that most investors have access to, it is usually a case of the latter.

This law is underpinned by two facts.  First, relative performance is zero-sum, requiring some investors to underperform for others to outperform.  Second, consistent outperformance violates basic no-arbitrage principles.

While coined somewhat tongue-in-cheek, we think this law provides an important reminder to investors about reasonable expectations.  As it turns out, the proof is not always in the eating of the pudding.  In fact, track records can be entirely misleading as validators of an investment process.  As Cliff pointed out, even if our alpha source has a Sharpe ratio of 0.32, there is an 8% chance that it subtracts from performance over the next 20 years.

Conversely, even negative alpha sources can show beneficial performance by chance.  An alpha source with a Sharpe ratio of -0.32 has an 8% chance that it adds to performance over the next 20 years.

And that’s why we call it The Frustrating Law of Active Management.  For investors and asset managers alike, there is little more frustrating than knowing that to continue working over the long run, good strategies have to do poorly, and poor strategies have to do well over shorter timeframes.

 


 

[1] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3034472

[2] https://www.aqr.com/cliffs-perspective/little-things-mean-a-lot

[3] https://www.thinknewfound.com/investment-philosophy

[4] https://blog.thinknewfound.com/2017/05/big-little-details/

[5] To be clear, we don’t mean a “law” in the sense of an inviolable, self-evident axiom.  In truth, our “law” is much closer to a “theory.”

[6] The disciplined component here is very important.  By this, we mean a strategy that applies a consistent set of rules.  We do not mean, here, a bifurcation of systematic versus discretionary.  Over the years, we’ve met a large number of discretionary managers who apply a highly disciplined approach.  Rather, we mean those aspects of an investment strategy that can be codified and turned into a set of systematically applied rules.

Thus, even a discretionary manager can be thought of as a systematic manager plus a number of idiosyncratic deviations from those rules.  The deviations must be idiosyncratic, by nature.  If there was a consistent reason for making the deviations, after all, the reason could be codified itself.  Thus, true discretion only applies to unique, special, and non-repeatable situations.

Note that the discipline does not preclude randomness.  You could, for example, flip a coin and use the result to make an investment decision every month.  So long as the same set of rules is consistently applied, we believe The Frustrating Law of Active Management applies.

[7] http://investorfieldguide.com/alpha-or-assets/

[8] Well, technically they can if everyone is a passive investor.  In this case, however, the information ratio would be undefined, with zero excess expected return and zero tracking error.

 

Addressing Low Return Forecasts in Retirement with Tactical Allocation

This post is available for download as a PDF here.

Summary­­

  • The current return expectations for core U.S. equities and bonds paint a grim picture for the success of the 4% rule in retirement portfolios.
  • While varying the allocation to equities throughout the retirement horizon can provide better results, employing tactical strategies to systematically allocate to equities can more effectively reduce the risk that the sequence of market returns is unfavorable to a portfolio.
  • When a tactical strategy is combined with other incremental planning and portfolio improvements, such as prudent diversification, more accurate spending assessments, tax efficient asset location, and fee-conscious investing, a modest allocation can greatly boost likely retirement success and comfort.

Over the past few weeks, we have written a number of posts on retirement withdrawal planning.

The first was about the potential impact that high core asset valuations – and the associated muted forward return expectations – may have on retirement.

The second was about the surprisingly large impact that small changes in assumptions can have on retirement success, akin to the Butterfly Effect in chaos theory. Retirement portfolios can be very sensitive to assumed long-term average returns and assumptions about how a retiree’s spending will evolve over time.

In the first post, we presented a visualization like the following:

Historical Wealth Paths for a 4% Withdrawal Rate and 60/40 Stock/Bond Allocation
Source: Shiller Data Library.  Calculations by Newfound Research. Analysis uses real returns and assumes the reinvestment of dividends.  Returns are hypothetical index returns and are gross of all fees and expenses.  Results may differ slightly from similar studies due to the data sources and calculation methodologies used for stock and bond returns.

 

The horizontal (x-axis) represents the year when retirement starts.  The vertical (y-axis) represents the years post-retirement.  The coloring of each cell represents the savings balance at a given point in time.  The meaning of each color is as follows:

  • Green: Current account value greater than or equal to initial account value (e.g. an investor starting retirement with $1,000,000 has a current account balance that is at least $1,000,000).
  • Yellow: Current account value is between 75% and 100% of the initial account value.
  • Orange: Current account value is between 50% and 75% of the initial account value.
  • Red: Current account value is between 25% and 50% of the initial account value.
  • Dark Red: Current account value is between 0% and 25% of initial account value.
  • Black: Current account value is zero; the investor has run out of money.

We then recreated the visualization, but with one key modification: we adjusted the historical stock and bond returns downward so that the long-term averages are in line with realistic future return expectations[1] given current valuation levels.  We did this by subtracting the difference between the actual average log return and the forward-looking long-term return from each year’s return.  With this technique, we capture the effect of subdued average returns while retaining realistic behavior for shorter-term returns.
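This adjustment can be sketched in a few lines (the function name is our own; we shift each year's log return by a constant so that the series' average matches the forward-looking estimate, leaving year-to-year variation intact):

```python
import numpy as np

def adjust_to_expectation(annual_returns, expected_return):
    """Shift every year's log return by the same constant so that the
    average log return matches the forward-looking estimate, leaving the
    year-to-year variation of the historical series intact."""
    log_r = np.log1p(np.asarray(annual_returns, dtype=float))
    shift = log_r.mean() - np.log1p(expected_return)
    return np.expm1(log_r - shift)
```

For example, `adjust_to_expectation(historical_returns, 0.02)` produces a series whose average log return corresponds to 2% per year but whose year-over-year swings match the historical record.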

 

Historical Wealth Paths for a 4% Withdrawal Rate and 60/40 Stock/Bond Allocation with Current Return Expectations

Source: Shiller Data Library.  Calculations by Newfound Research. Analysis uses real returns and assumes the reinvestment of dividends.  Returns are hypothetical index returns and are gross of all fees and expenses.  Results may differ slightly from similar studies due to the data sources and calculation methodologies used for stock and bond returns.

 

One downside of the above visualizations is that they only consider one withdrawal rate / portfolio composition combination.  If we want to see results for withdrawal rates ranging from 1% to 10% in 1% increments and portfolio combinations ranging from 0/100 stocks/bonds to 100/0 stocks/bonds in 20% increments, we would need sixty graphs!

To distill things a bit more, we looked at the historical “success” of various investment and withdrawal strategies.  We evaluated success on three metrics:

  1. Absolute Success Rate (“ASR”): The historical probability that an individual or couple will not run out of money before their retirement horizon ends.
  2. Comfortable Success Rate (“CSR”): The historical probability that an individual or couple will have at least the same amount of money, in real terms, at the end of their retirement horizon compared to what they started with.
  3. Ulcer Index (“UI”): The average pain of the wealth path over the retirement horizon where pain is measured as the severity and duration of wealth drawdowns relative to starting wealth. [2]
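These three metrics can be codified as a short sketch.  The function below is our own reading of the definitions above – in particular, we interpret the Ulcer Index as the root-mean-square drawdown relative to starting wealth – applied to wealth paths expressed relative to the starting balance:

```python
import numpy as np

def retirement_metrics(wealth_paths):
    """wealth_paths: (scenarios x years) array of wealth relative to the
    starting balance (first column equal to 1.0, zeros after depletion).
    Returns (ASR, CSR, Ulcer Index), each expressed out of 100."""
    w = np.asarray(wealth_paths, dtype=float)
    asr = 100.0 * np.mean(w[:, -1] > 0)        # never ran out of money
    csr = 100.0 * np.mean(w[:, -1] >= 1.0)     # ended at least where it began
    drawdown = np.clip(1.0 - w, 0.0, None)     # shortfall vs. starting wealth
    ulcer = 100.0 * np.mean(np.sqrt((drawdown ** 2).mean(axis=1)))
    return asr, csr, ulcer
```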

As a quick refresher, below we present the ASR for various withdrawal rate / risk profile combinations over a 30-year retirement horizon, first using historical returns and then using historical returns adjusted to reflect current valuation levels.  The CSR and Ulcer Index tables illustrate similar effects.

Absolute Success Rate for Various Combinations of Withdrawal Rate and Portfolio Composition – 30 Yr. Horizon

Absolute Success Rate for Various Combinations of Withdrawal Rate and Portfolio Composition with Average Stock and Bond Returns Equal to Current Expectations – 30 Yr. Horizon

Source: Shiller Data Library.  Calculations by Newfound Research.  Analysis uses real returns and assumes the reinvestment of dividends.  Returns are hypothetical index returns and are gross of all fees and expenses.  Results may differ slightly from similar studies due to the data sources and calculation methodologies used for stock and bond returns.

 

Overall, our analysis suggested that retirement withdrawal rates that were once safe may now deliver success rates that are no better – or even worse – than a coin flip.

The combined conclusion of these two posts is that the near future looks pretty grim for retirees and that an assumption that is slightly off can make the outcome even worse.

Now, we are going to explore a topic that can both mitigate low growth expectations and adapt a retirement portfolio to reduce the risk of a bad planning assumption. But first, some history.

 

How the 4% Rule Started

In 1994, Larry Bierwirth proposed the 4% rule, and William Bengen expanded on the research in the same year.[3], [4]

In the original research, the 4% rule was derived assuming that the investor held a 50/50 stock/bond portfolio, rebalanced annually, withdrew a certain percentage of the initial balance, and increased withdrawals in line with inflation.  4% is the highest rate that could be withdrawn without ever running out of money over any historical 30-year retirement horizon.
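A Bengen-style calculation can be sketched directly.  Because ending wealth is linear in the withdrawal rate, the rate that exactly depletes the portfolio over a given historical return path has a closed form (a sketch under our own assumptions: real returns and start-of-year withdrawals that are constant in real terms):

```python
import numpy as np

def max_withdrawal_rate(real_returns):
    """Highest initial withdrawal rate, constant in real terms and taken at
    the start of each year, that exactly depletes the portfolio by the end
    of the horizon.  Ending wealth is linear in the rate, so the zero has
    a closed form."""
    r = np.asarray(real_returns, dtype=float)
    growth = np.cumprod(1.0 + r)
    # growth factor applied to the withdrawal taken at the start of year t
    factors = growth[-1] / np.concatenate(([1.0], growth[:-1]))
    return growth[-1] / factors.sum()
```

The safe withdrawal rate is then the minimum of this quantity across all historical starting years.  As a sanity check, a 30-year path of zero real returns supports exactly 1/30 per year.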

Graphically, the 4% rule is the minimum value shown below.

Maximum Inflation Indexed Withdrawal to Deplete a 60/40 Portfolio Over a 30 Yr. Horizon

Source: Shiller Data Library.  Calculations by Newfound Research.  Analysis uses real returns and assumes the reinvestment of dividends.  Returns are hypothetical index returns and are gross of all fees and expenses.  Results may differ slightly from similar studies due to the data sources and calculation methodologies used for stock and bond returns.

 

Since its publication, the rule has become common knowledge to nearly all people in the field of finance and many people outside it. While it is a good rule-of-thumb and starting point for retirement analysis, we have two major issues with its broad application:

  1. It assumes that not running out of money is the only goal in retirement without considering implications of ending surpluses, return paths that differ from historical values, or evolving spending needs.
  2. It provides a false sense of security: just because 4% withdrawals never ran out of money in the past, that is not a 100% guarantee that they won’t in the future.

 

For example, if we adjust the stock and bond historical returns using the estimates from Research Affiliates (discussed previously) and replicate the analysis Bengen-style, the safe withdrawal rate is a paltry 2.6%.

 

Maximum Inflation Indexed Withdrawal to Deplete a 60/40 Portfolio Over a 30 Yr. Horizon using Current Return Estimates

Source: Shiller Data Library and Research Affiliates.  Calculations by Newfound Research.  Analysis uses real returns and assumes the reinvestment of dividends.  Returns are hypothetical index returns and are gross of all fees and expenses.  Results may differ slightly from similar studies due to the data sources and calculation methodologies used for stock and bond returns.

 

While this paints a grim picture for retirement planning, it’s not likely how one would plan their financial future. If you were to base your retirement planning solely on this figure, you would have to save 54% more for retirement to generate the same amount of annual income as with the 4% rule, holding everything else constant.

In reality, even with the low estimates of forward returns, many of the scenarios had safe withdrawal rates closer to 4%. By putting a multi-faceted plan in place to reduce the risk of the “bad” scenarios, investors can hope for the best while still planning for the worst.

One aspect of a retirement plan can be a time-varying asset allocation scheme.

 

Temporal Risk in Retirement

Conventional wisdom says that equity risk should be reduced as one progresses through retirement. This is what is employed in many “through”-type target date funds that adjust equity exposure beyond the retirement age.

If we heed the “own your age in bonds” rule, then a retiree would decrease their equity exposure from 35% at age 65 to 5% at the end of a 30-year plan horizon.

Unfortunately, this thinking is flawed.

When a newly-minted retiree begins retirement, their success is highly dependent on the first few years of returns because that is when their account value is largest.  As withdrawals reduce the account value, the dollar impact of a large drawdown becomes less severe.  This is known as sequence risk.

As a simple example, consider three portfolio paths:

  • Portfolio A: -30% return in Year 1 and 6% returns for every year from Year 2 – Year 30.
  • Portfolio B: 6% returns for every year except for Year 15, in which there is a -30% return.
  • Portfolio C: 6% returns for every year from Year 1 – Year 29 and a -30% return in Year 30.

These returns work out to approximately the expected return on a 60/40 portfolio using Research Affiliates’ Yield & Growth expectations, and the drawdown is approximately in line with the drawdown on a 60/40 portfolio over the past decade.  We will assume 4% annual withdrawals and 2% annual inflation, with the withdrawals indexed to inflation.

 

3 Portfolios with Identical Annualized Returns that Occur in Different Orders

Portfolio C fares the best, ending the 30-year period with 12% more wealth than it began with. Portfolio B makes it through, not as comfortably as Portfolio C but still with 61% of its starting wealth. Portfolio A, however, starts off stressful for the retiree and runs out of money in year 27.
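To make the arithmetic concrete, here is a minimal sketch of the three paths.  Our assumed convention is that withdrawals are taken at the start of each year and indexed to inflation; the exact ending values are sensitive to such conventions, but Portfolio A's failure in year 27 and the ordering of the three outcomes are robust:

```python
def simulate(returns, rate=0.04, inflation=0.02):
    """Withdraw an inflation-indexed amount at the start of each year,
    then apply that year's return.  Wealth is expressed relative to the
    starting balance; the path ends at zero if the money runs out."""
    wealth, path = 1.0, []
    for year, r in enumerate(returns):
        wealth -= rate * (1 + inflation) ** year
        if wealth <= 0:
            path.append(0.0)
            break
        wealth *= 1 + r
        path.append(wealth)
    return path

A = [-0.30] + [0.06] * 29                # crash in the first year
B = [0.06] * 14 + [-0.30] + [0.06] * 15  # crash in year 15
C = [0.06] * 29 + [-0.30]                # crash in the final year
paths = {name: simulate(r) for name, r in [("A", A), ("B", B), ("C", C)]}
```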

Sequence risk is a big issue that retirement portfolios face, so how does one combat it with dynamic allocations?

 

The Rising Glide Path in Retirement

Kitces and Pfau (2012) proposed the rising glide path in retirement as a method to reduce sequence risk.[5]  They argued that since retirement portfolios are most exposed to market risk at the beginning of the retirement period, they should start with the lowest equity risk and ramp up as retirement progresses.

Based on Monte Carlo simulations using both capital market assumptions in line with historical values and reduced return assumptions for the current environment, the paper showed that investors can maximize their success rate and minimize their shortfall in bad (5th percentile) scenarios by starting with equity allocations of between 20% and 40% and increasing to 60% to 80% equity allocations through retirement.

We can replicate their analysis using the reduced historical return data, using the same metrics from before (ASR, CSR, and the Ulcer Index) to measure success, comfort, and stress, respectively.

 

Absolute Success Rate for Various Equity Glide Paths with Average Stock and Bond Returns Equal to Current Expectations – 30 Yr. Horizon with a 4% Initial Withdrawal Rate

Comfortable Success Rate for Various Equity Glide Paths with Average Stock and Bond Returns Equal to Current Expectations – 30 Yr. Horizon with a 4% Initial Withdrawal Rate

Ulcer Index for Various Equity Glide Paths with Average Stock and Bond Returns Equal to Current Expectations – 30 Yr. Horizon with a 4% Initial Withdrawal Rate

Source: Shiller Data Library and Research Affiliates.  Calculations by Newfound Research.  Analysis uses real returns and assumes the reinvestment of dividends.  Returns are hypothetical index returns and are gross of all fees and expenses.  Results may differ slightly from similar studies due to the data sources and calculation methodologies used for stock and bond returns.

 

Note that the main diagonal in the chart represents static allocations, above the main diagonal represents the decreasing glide paths, and below the main diagonal represents increasing glide paths.

Since these returns are derived from the historical returns for stocks and bonds (again, accounting for a depressed forward outlook), they capture both the sequence of returns and shifting correlations between stocks and bonds better than Monte Carlo simulation.  On the other hand, the sample size is limited: we only have about four non-overlapping 30-year periods.

Nevertheless, these data show that there was not a huge benefit or detriment to using either an increasing or decreasing equity glide path in retirement based on these metrics. If we instead look at minimizing expected shortfall in the bottom 10% of scenarios, similar to Kitces and Pfau, we find that a glide path starting at 40% rising to around 80% performs the best.

However, it will still be tough to rest easy with a plan that has an ASR of around 60, a CSR of around 30, and an expected shortfall of 10 years of income.

With these unconvincing results, what can investors do to improve their retirement outcomes through prudent asset allocation?

 

Beyond a Static Glide Path

There is no reason to constrain portfolios to static glide paths. We have said before that the risk of a static allocation varies considerably over time. Simply dictating an equity allocation based on your age does not always make sense regardless of whether that allocation is increasing or decreasing.

If the market has a large drawdown, an investor should want to avoid it regardless of where they are in the retirement journey.  Avoiding drawdowns is beneficial as long as enough of the subsequent upside is captured.

In recent papers, Clare et al. (2017 and 2017) showed that trend following can boost safe withdrawal rates in retirement portfolios by managing sequence risk. [6],[7]

The million-dollar question is, “how tactical should we be?”

The following charts show the ASR, CSR, and Ulcer Index values for static allocations to stocks, bonds, and a simple tactical strategy that invests in stocks when they are above their 10-month simple moving average (SMA) and in bonds otherwise.

The charts are organized by the minimum and maximum equity exposures along the rows and columns. The charts are symmetric across the main diagonal so that they can be compared to both increasing and decreasing equity glide paths.

The equity allocation is the minimum of the row and column headings, the tactical strategy allocation is the absolute difference between the headings, and the bond allocation is what’s needed to bring the total allocation to 100%.

For example, the 20% and 50% column is a portfolio of 20% equities, 30% tactical strategy, and 50% bonds.  It has an ASR of 75, a CSR of 40, and an Ulcer Index of 22.
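For reference, the tactical sleeve itself can be sketched as follows, assuming a monthly total-return index for stocks (the function name and data layout are our own; the signal observed at one month-end determines the next month's holding, avoiding look-ahead):

```python
import numpy as np

def sma_tactical_returns(stock_index, stock_returns, bond_returns, window=10):
    """Hold stocks when the stock index sits at or above its trailing
    `window`-month simple moving average at the prior month-end, and bonds
    otherwise.  `stock_index` holds month-end levels and has one more
    observation than the monthly return series."""
    idx = np.asarray(stock_index, dtype=float)
    # signal formed at month-end t governs the return earned from t to t+1
    signal = np.array([
        idx[t] >= idx[t - window + 1 : t + 1].mean()
        for t in range(window - 1, len(idx) - 1)
    ])
    stocks = np.asarray(stock_returns, dtype=float)[window - 1 :]
    bonds = np.asarray(bond_returns, dtype=float)[window - 1 :]
    return np.where(signal, stocks, bonds)
```

In a steady uptrend the sleeve earns the stock return; in a steady downtrend it earns the bond return.  The whipsaw cost discussed below comes from markets that oscillate around the moving average.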

 

Absolute Success Rate for Various Tactical Allocation Bounds Paths with Average Stock and Bond Returns Equal to Current Expectations – 30 Yr. Horizon with a 4% Initial Withdrawal Rate

Comfortable Success Rate for Various Tactical Allocation Bounds with Average Stock and Bond Returns Equal to Current Expectations – 30 Yr. Horizon with a 4% Initial Withdrawal Rate

Ulcer Index for Various Tactical Allocation Bounds with Average Stock and Bond Returns Equal to Current Expectations – 30 Yr. Horizon with a 4% Initial Withdrawal Rate

Source: Shiller Data Library and Research Affiliates.  Calculations by Newfound Research.  Analysis uses real returns and assumes the reinvestment of dividends.  Returns are hypothetical index returns and are gross of all fees and expenses.  Results may differ slightly from similar studies due to the data sources and calculation methodologies used for stock and bond returns.

 

These charts show that being tactical is extremely beneficial under these muted return expectations and that being highly tactical is even better than being moderately tactical.

So, what’s stopping us from going whole hog with the 100% tactical portfolio?

Well, this is a case where a tactical strategy can reduce the risk of not making it through the 30-year retirement at the cost of greatly increasing the potential ending surplus.  It may sound counterintuitive to say that ending with too much extra money is a risk, but when our goal is to make it through retirement comfortably, taking undue risk comes at a cost.

For instance, we know that while the tactical strategy may perform well over a 30-year time horizon, it can go through periods of significant underperformance in the short-term, which can lead to stress and questioning of the investment plan. For example, in 1939 and 1940, the tactical strategy underperformed a 50/50 portfolio by 16% and 11%, respectively.

These times can be trying for investors, especially those who check their portfolios frequently.[8] Even the best-laid plan is not worth much if it cannot be adhered to.

Being tactical enough to manage the risk of having to make a major adjustment in retirement while keeping whipsaw, tracking error, and the cost of surpluses in check is key.

 

Sizing a Tactical Sleeve

If the goal is to have the smallest tactical sleeve that boosts the ASR and CSR and reduces the Ulcer Index to acceptable levels in a low expected return environment, we can turn back to the expected shortfall in the bad (10th percentile) scenarios to determine how large a tactical sleeve to include in the portfolio.  The analysis in the previous section showed that being tactical could yield ASRs and CSRs in the 80s and 90s (dark green).  This, however, requires a tactical sleeve between 50% and 70%, depending on the static equity allocation.

Thankfully, we do not have to put the entire burden on being tactical: we can diversify our approaches.  In the previous commentaries mentioned earlier, we covered a number of topics that can improve retirement results in a low expected return environment.

  • Thoroughly examine and define planning factors such as taxes and the evolution of spending throughout retirement.
  • Be strategic, not static: Have a thoughtful, forward-looking outlook when developing a strategic asset allocation. This means having a willingness to diversify U.S. stocks and bonds with the ever-expanding palette of complementary asset classes and strategies.
  • Utilize a hybrid active/passive approach for core exposures given the increasing availability of evidence-based, factor-driven investment strategies.
  • Be fee-conscious, not fee-centric. For many exposures (e.g. passive and long-only core stock and bond exposure), minimizing cost is certainly appropriate. However, do not let cost considerations preclude the consideration of strategies or asset classes that can bring unique return generating or risk mitigating characteristics to the portfolio.
  • Look beyond fixed income for risk management given low interest rates.
  • Recognize that the whole can be more than the sum of its parts by embracing not only asset class diversification, but also strategy/process diversification.

While each modification might only result in a small, incremental improvement in retirement outcomes, the compounding effect can be very beneficial.

The chart below shows the required tactical sleeve size needed to minimize shortfalls/surpluses for a given improvement in the annual returns (0bp through 150bps).

 

Tactical Allocation Strategy Size Needed to Minimize 10% Expected Shortfall/Surplus with Average Stock and Bond Returns Equal to Current Expectations for a Range of Annualized Return Improvements  – 30 Yr. Horizon with a 4% Initial Withdrawal Rate

Source: Shiller Data Library and Research Affiliates.  Calculations by Newfound Research.  Analysis uses real returns and assumes the reinvestment of dividends.  Returns are hypothetical index returns and are gross of all fees and expenses.  Results may differ slightly from similar studies due to the data sources and calculation methodologies used for stock and bond returns.

 

For a return improvement of 125bps per year over the current forecasts for static U.S. equity and bond portfolios, a portfolio with a static equity allocation of 50% and a tactical sleeve of 20% minimizes the shortfall/surplus.

This portfolio essentially pivots around a static 60/40 portfolio, and we can compare the two, giving the same 125bps bonus to the returns for the static 60/40 portfolio.

 

Comparison of a Tactical Allocation Enhanced Portfolio with a Static 60/40 Portfolio with Average Stock and Bond Returns Equal to Current Expectations + 125bps per year   – 30 Yr. Horizon with a 4% Initial Withdrawal Rate

Source: Shiller Data Library and Research Affiliates.  Calculations by Newfound Research.  Analysis uses real returns and assumes the reinvestment of dividends.  Returns are hypothetical index returns and are gross of all fees and expenses.  Results may differ slightly from similar studies due to the data sources and calculation methodologies used for stock and bond returns.

 

In addition to the much more favorable statistics, the tactically enhanced portfolio only has a downside tracking error of 1.1% to the static 60/40 portfolio.
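Downside tracking error here refers to the volatility of underperformance only, ignoring periods when the portfolio beats the benchmark. A minimal sketch of one common way to compute it, using hypothetical monthly return series (the series and annualization convention are our assumptions, not the exact methodology used above):

```python
import numpy as np

def downside_tracking_error(portfolio, benchmark, periods_per_year=12):
    """Annualized volatility of negative active returns only.

    One common definition: zero out periods of outperformance, then
    take the root-mean-square of the remaining active returns.
    """
    active = np.asarray(portfolio) - np.asarray(benchmark)
    downside = np.minimum(active, 0.0)
    return np.sqrt(np.mean(downside ** 2) * periods_per_year)

# Hypothetical monthly return series, purely for illustration
rng = np.random.default_rng(0)
bench = rng.normal(0.005, 0.02, 360)
port = bench + rng.normal(0.0005, 0.003, 360)
print(f"{downside_tracking_error(port, bench):.2%}")
```

A small downside tracking error means the enhanced portfolio rarely lags the static 60/40 by much, even while its overall statistics differ.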

 

Conclusion: Being Dynamic in Retirement

From this historical analysis, high valuations of core assets in the U.S. suggest a grim outlook for the 4% rule. Predetermined dynamic allocation paths through retirement can help somewhat, but merely specifying an equity allocation based on one’s age loses sight of the changing risk of a given market environment.

The sequence of market returns can have a large impact on retirement portfolios. If a drawdown happens early in retirement, subsequent returns may not be enough to provide the tailwind that they have in the past.

Investors who are able to be fee/expense/tax-conscious and adhere to prudent diversification may be able to incrementally improve their retirement outlook to the point where a modest allocation to a sleeve of tactical investment strategies can get their portfolio back to a comfortable success rate.

Striking a balance between shortfall/surplus risk and the expected experience during the retirement period along with a thorough assessment of risk tolerance in terms of maximum and minimum equity exposure can help dictate how flexible a portfolio should be.

In our QuBe Model Portfolios, we pair allocations to tactically managed solutions with systematic, factor-based strategies to implement these ideas.

While long-term capital market assumptions are a valuable input in an investment process, adapting to shorter-term market movements to reduce sequence risk may be a crucial way to combat market environments where the low return expectations come to fruition.


[1] Specifically, we use the “Yield & Growth” capital market assumptions from Research Affiliates.  These capital market assumptions assume that there is no valuation mean reversion (i.e. valuations stay the same going forward).  The adjusted average nominal returns for U.S. equities and 10-year U.S. Treasuries are 5.3% and 3.1%, respectively, compared to the historical values of 9.0% and 5.3%.

[2] Normally, the Ulcer Index would be measured using true drawdown from peak, however, we believe that using starting wealth as the reference point may lead to a more accurate gauge of pain.

[3] Bierwirth, Larry. 1994. Investing for Retirement: Using the Past to Model the Future. Journal of Financial Planning, Vol. 7, no. 1 (January): 14-24.

[4] Bengen, William P. 1994. “Determining Withdrawal Rates Using Historical Data.” Journal of Financial Planning, vol. 7, no. 4 (October): 171-180.

[5] Pfau, Wade D. and Kitces, Michael E., Reducing Retirement Risk with a Rising Equity Glide-Path (September 12, 2013). Available at SSRN: https://ssrn.com/abstract=2324930

[6] Clare, A. and Seaton, J. and Smith, P. N. and Thomas, S. (2017). Can Sustainable Withdrawal Rates Be Enhanced by Trend Following? Available at SSRN: https://ssrn.com/abstract=3019089

[7] Clare, A. and Seaton, J. and Smith, P. N. and Thomas, S. (2017) Reducing Sequence Risk Using Trend Following and the CAPE Ratio. Financial Analysts Journal, Forthcoming. Available at SSRN: https://ssrn.com/abstract=2764933

[8] https://blog.thinknewfound.com/2017/03/visualizing-anxiety-active-strategies/

Tax-Managed Models & Asset Location

This post is available for download as a PDF here.

Summary­­

  • In a world of anemic asset returns, tax management may contribute significantly to improving portfolio returns.
  • Ideally, asset location decisions would be made with full investor information, including goals, risk tolerances, tax rates, and distribution of wealth among account types.
  • Without perfect information, we believe it is helpful to have both tax-deferred and tax-managed model portfolios available.
  • We explore how tax-adjusted expected returns can be created, and how adjusting for taxes affects an optimized portfolio given today’s market outlook.

Before we begin, please note that we are not Certified Public Accountants, Tax Attorneys, nor do we specialize in tax management.  Tax law is complicated and this commentary will employ sweeping generalizations and assumptions that will certainly not apply to every individual’s specific situation.  This commentary is not meant as advice, simply research.  Before making any tax-related changes to your investment process, please consult an expert.

Tax-Managed Thinking

We’ve been writing a lot, recently, about the difficulties investors face going forward.[1][2][3]  It is our perspective that the combination of higher-than-average valuations in U.S. stocks and low interest rates in core U.S. bonds indicates a muted return environment for traditionally allocated investors going forward.

There is no silver bullet to this problem.  Our perspective is that investors will likely have to work hard to make many marginal, but compounding, improvements.  Improvements may include reducing fees, thinking outside of traditional asset classes, saving more, and, for investors in retirement, enacting a dynamic withdrawal plan.

Another potential opportunity is in tax management.

I once heard Dan Egan, Director of Behavioral Finance at Betterment, explain tax management as an orthogonal improvement: i.e. one which could seek to add value regardless of how the underlying portfolio performed.  I like this description for two reasons.

First, it fits nicely into our framework of compounding marginal improvements that do not necessarily require just “investing better.”  Second, Dan is the only person, besides me, to use the word “orthogonal” outside of a math class.

Two popular tax management techniques are tax-loss harvesting and asset location.  While we expect that tax-loss harvesting is well known to most (selling investments at a loss to offset gains taken), asset location may be less familiar.  Simply put, asset location is how investments are divided among different accounts (taxable, tax-deferred, and tax-exempt) in an effort to maximize post-tax returns.

Asset Location in a Perfect World

Taxes are a highly personal subject.  In a perfect world, asset location optimization would be applied to each investor individually, taking into account:

  • State tax rates
  • Federal tax rates
  • Percentage of total assets invested in each account type

Such information would allow us to run a very simple portfolio optimization that could take into account asset location.

Simply, for each asset, we would have three sets of expected returns: an after-tax expected return, a tax-deferred expected return, and a tax-exempt expected return.  For all intents and purposes, the optimizer would treat these three sets of returns as completely different asset classes.

So, as a simple example, let’s assume we only want to build a portfolio of U.S. stocks and bonds.  For each, we would create three “versions”: Taxable, Tax-Deferred, and Tax-Exempt.  We would calculate expected returns for U.S. Stocks – Taxable, U.S. Stocks – Tax-Deferred, and U.S. Stocks – Tax-Exempt.  We would do the same for bonds.

We would then run a portfolio optimization.  To the optimizer, it would look like six asset classes instead of two (since there are three versions of stocks and bonds).  We would add the constraint that the sum of the weights to Taxable, Tax-Deferred, and Tax-Exempt groups could not exceed the percentage of our wealth in each respective account type.  For example, if we only have 10% of our wealth in Tax-Exempt accounts, then U.S. Stocks – Tax Exempt + U.S. Bonds – Tax Exempt must be equal to 10%.
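A stylized version of this optimization can be sketched with scipy. Everything numerical below — the expected returns per wrapper, the risk model, and the account split — is a hypothetical placeholder, not our actual capital market assumptions; the point is the constraint structure:

```python
import numpy as np
from scipy.optimize import minimize

# Two asset classes, three account "versions" each -> six optimizer assets.
# Expected returns are hypothetical, already tax-adjusted per wrapper.
assets = ["Stk-Taxable", "Stk-Deferred", "Stk-Exempt",
          "Bnd-Taxable", "Bnd-Deferred", "Bnd-Exempt"]
mu = np.array([0.052, 0.060, 0.060, 0.022, 0.031, 0.031])
vol = np.array([0.15, 0.15, 0.15, 0.05, 0.05, 0.05])

# The same underlying asset is perfectly correlated across wrappers;
# stocks vs. bonds assumed 0.2 correlated (an illustrative choice).
rho = np.full((6, 6), 0.2)
rho[:3, :3] = 1.0
rho[3:, 3:] = 1.0
cov = np.outer(vol, vol) * rho

# Share of total wealth held in each account type (hypothetical).
account_share = {"Taxable": 0.50, "Deferred": 0.40, "Exempt": 0.10}
account_index = {"Taxable": (0, 3), "Deferred": (1, 4), "Exempt": (2, 5)}

gamma = 4.0  # risk-aversion coefficient

def neg_utility(w):
    # Standard mean-variance utility, negated for the minimizer
    return -(w @ mu - 0.5 * gamma * w @ cov @ w)

# Weights in each account type must sum to that account's wealth share.
constraints = [
    {"type": "eq", "fun": lambda w, js=js, c=c: sum(w[j] for j in js) - c}
    for js, c in ((account_index[k], account_share[k]) for k in account_share)
]

w0 = np.array([0.25, 0.20, 0.05, 0.25, 0.20, 0.05])
res = minimize(neg_utility, w0, bounds=[(0, 1)] * 6,
               constraints=constraints, method="SLSQP")
for name, w in zip(assets, res.x):
    print(f"{name}: {w:.1%}")
```

Because the per-account sums are pinned to the wealth shares, the total automatically sums to 100%, and the optimizer decides which assets land in which wrapper.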

Such an approach allows for the explicit consideration of an individual’s tax rates (which are taken into account in the adjustment of expected returns) as well as the distribution of their wealth among different account types.

Case closed.[4]

Asset Location in a Less Than Perfect World

Unfortunately, the technology – and expertise – required to enable such an optimization is not readily available for many investors.

As an industry, the division of labor can significantly limit the availability of important information.  While financial advisors may have access to an investor’s goals, risk tolerances, specific tax situation, and asset location break-down, asset managers do not.  Therefore, asset managers are often left to make sweeping assumptions, like infinite investment horizons, defined and constant risk tolerances, and tax indifference.

Indeed, we currently make these very assumptions within our QuBe model portfolios. Yet, we think we can do better.

For example, consider investors at either end of the spectrum of asset location.  On the one end, we have investors with the vast majority of their assets in tax-deferred accounts.  On the other, investors with the vast majority of their wealth in taxable accounts.  Even if two investors at opposite ends of the spectrum have an identical risk tolerance, their optimal portfolios are likely different.  Painting with broad strokes, the tax-deferred investor can afford to have a larger percentage of their assets in tax-inefficient asset classes, like fixed income and futures-based alternative strategies.  The taxable investor will likely have to rely more heavily on tax-efficient investments, like indexed equities (or active equities, if they are in an ETF wrapper).

Things get much messier in the middle of the spectrum.  We believe investors have two primary options:

  1. Create an optimal tax-deferred portfolio and try to shift tax-inefficient assets into tax-deferred accounts and tax-efficient assets into taxable accounts. Investor liquidity needs must be carefully considered here, as this often means that taxable accounts will be tilted more heavily towards volatile equities while bonds will fall into tax-deferred accounts.
  2. Create an optimal tax-deferred portfolio and an optimal taxable portfolio, and invest in each account accordingly. This is, decidedly, sub-optimal to asset location in a perfect world, and in most scenarios should even be sub-optimal to Option #1, but it should be preferable to simply ignoring taxes.  Furthermore, it may be easier from an implementation perspective, depending on the rebalancing technology available to you.

With all this in mind, we have begun to develop tax-managed versions of our QuBe model portfolios, and expect them to be available at the beginning of Q4.

Adjusting Expected Returns for Taxes

To keep this commentary to a reasonable length (as if that has ever stopped us before…), we’re going to use a fairly simple model of tax impact.

At the highest level, we need to break down our annual expected return into three categories: unrealized, externally realized, and internally realized.

  • Unrealized: The percentage of the total return that remains un-taxed. For example, the expected return of a stock that is bought and never sold would be 100% unrealized (ignoring, for a moment, dividends and end-of-period liquidation).
  • Externally Realized: The percentage of total return that is taxed due to asset allocation turnover. For example, if we re-optimize our portfolio annually and incur 20% turnover, causing us to sell positions, we would say that 20% of expected return is externally realized.
  • Internally Realized: The percentage of total return that comes from internal turnover, or income generated, within our investment. For example, the expected return from a bond may be 100% internally realized.  Similarly, a very active hedge fund strategy may have a significant amount of internal turnover that realizes gains.
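These three buckets can be captured in a small helper that also blends the realized categories into a single expected tax rate. The hedge fund figures below come from the discussion that follows; the 28% short-term and 15% long-term rates are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class ReturnBreakdown:
    unrealized: float           # fraction of return left un-taxed each year
    externally_realized: float  # fraction taxed due to allocation turnover
    internally_realized: float  # fraction taxed by turnover inside the fund

    def __post_init__(self):
        total = (self.unrealized + self.externally_realized
                 + self.internally_realized)
        assert abs(total - 1.0) < 1e-9, "categories must sum to 100% of return"

    def blended_tax_rate(self, external_rate, internal_rate):
        """Expected tax rate applied to the non-dividend portion of return."""
        return (self.externally_realized * external_rate
                + self.internally_realized * internal_rate)

# Hypothetical hedge fund: 35% unrealized, 15% externally realized at
# long-term rates, 50% internally realized split 40% short / 60% long.
hf = ReturnBreakdown(0.35, 0.15, 0.50)
rate = hf.blended_tax_rate(external_rate=0.15,
                           internal_rate=0.40 * 0.28 + 0.60 * 0.15)
print(f"{rate:.2%}")
```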

Using this information, we can fill out a table breaking down, for each asset class, where we expect returns to come from and, within each category, what tax rate we expect to apply.  For example:

For example, in the table above we are saying we expect 70% of our annual U.S. equity returns to be unrealized while 30% of them will be realized at a long-term capital gains rate.  Note that we also explicitly estimate what we will be receiving in qualified dividends.

On the other hand, we expect only 35% of our hedge fund returns to be unrealized, while 15% will be realized from turnover (all at a long-term capital gains rate) and the remaining 50% will be internally realized by trading within the fund, split 40% short-term capital gains and 60% long-term capital gains.

Obviously, there is a bit of art in these assumptions.  How much the portfolio turns over within a year must be estimated.  The types of investments you are making will also have an impact.  For example, if you are investing in ETFs, even very active equity strategies can be highly tax efficient.  Mutual funds, on the other hand, are potentially less so.  Whether a holding like gold gets taxed at the collectibles rate or as a split between short- and long-term capital gains will depend on the fund structure.

Using this table, we can then adjust the expected return for each asset class using the following equation:

R_PostTax = R_PreTax – (R_PreTax – d) × t_Blended – d × t_Dividend

Where R_PreTax is the pre-tax expected return, d is the expected qualified dividend yield, t_Blended is the blended tax rate applied to externally and internally realized gains, and t_Dividend is the qualified dividend tax rate.

In English,

  • Take the pre-tax return and subtract out the amount we expect to come from qualified dividend yield.
  • Take the remainder and multiply it by the total blended tax rate we expect from externally and internally realized gains.
  • Add back in the qualified dividend yield, after adjusting it for taxes.

As a simple example, let’s assume U.S. equities have a 6% expected return.  We’ll assume a 15% qualified dividend rate and a 15% long-term capital gains rate.  We’ll ignore state taxes for simplicity.

Our post-tax expected return is, therefore, 6% – (6% – 2%)*(30%*15%) – 2%*15% = 5.52%.
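The arithmetic above can be verified directly; this is simply the worked example expressed in code:

```python
def post_tax_return(pre_tax, div_yield, blended_rate, div_rate):
    """Strip out qualified dividends, tax the remainder at the blended
    rate, and add dividends back after their own tax."""
    return pre_tax - (pre_tax - div_yield) * blended_rate - div_yield * div_rate

# U.S. equities: 6% expected return, 2% qualified dividend yield,
# 30% of the remainder realized at a 15% long-term rate, 15% dividend rate.
r = post_tax_return(0.06, 0.02, 0.30 * 0.15, 0.15)
print(f"{r:.2%}")  # 5.52%
```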

We can follow the same broad steps for all asset classes, making some assumptions about tax rates and expected sources of realized returns.

(For those looking to take a deeper dive, we recommend Betterment’s Tax-Coordinated Portfolio whitepaper[5], Ashraf Al Zaman’s Tax Adjusted Portfolio Optimization and Asset Location presentation[6], and Geddes, Goldberg, and Bianchi’s What Would Yale Do If It Were Taxable? paper[7].)

 

How Big of a Difference Does Tax Management Make?

So how much of a difference does taking taxes into account really make in the final recommended portfolio?

We explore this question by – as we have so many times in the past – relying on J.P. Morgan’s capital market assumptions.  The first portfolio is constructed using the same method we have used in the past: a simulation-based mean-variance optimization that targets the same risk level as a 60% stock / 40% bond portfolio mix.

For the second portfolio, we run the same optimization, but adjust the expected return[8] for each asset class.

We make the following assumptions about the source of realized returns and tax rates for each asset class (note that we have compressed the above table by combining rates together after multiplying for the amount realized by that category; e.g. realized short below represents externally and internally realized short-term capital gains).

Again, the construction of the below table is as much art as it is science, with many assumptions embedded about the type of turnover the portfolio will have and the strategies that will be used to implement it.

 

| Asset Class | Collectible | Ordinary Income | Realized Short | Realized Long | Unrealized | Dividend |
|---|---|---|---|---|---|---|
| Alternative – Commodities | 0% | 0% | 10% | 20% | 70% | 0% |
| Alternative – Event Driven | 0% | 0% | 26% | 53% | 21% | 0% |
| Alternative – Gold | 30% | 0% | 0% | 0% | 70% | 0% |
| Alternative – Long Bias | 0% | 0% | 26% | 53% | 21% | 1% |
| Alternative – Macro | 0% | 0% | 26% | 53% | 21% | 0% |
| Alternative – Relative Value | 0% | 0% | 26% | 53% | 21% | 0% |
| Alternative – TIPS | 0% | 100% | 0% | 0% | 0% | 0% |
| Bond – Cash | 0% | 100% | 0% | 0% | 0% | 0% |
| Bond – Govt (Hedged) ex US | 0% | 100% | 0% | 0% | 0% | 0% |
| Bond – Govt (Not Hedged) ex US | 0% | 100% | 0% | 0% | 0% | 0% |
| Bond – INT Treasuries | 0% | 100% | 0% | 0% | 0% | 0% |
| Bond – Investment Grade | 0% | 100% | 0% | 0% | 0% | 0% |
| Bond – LT Treasuries | 0% | 100% | 0% | 0% | 0% | 0% |
| Bond – US Aggregate | 0% | 100% | 0% | 0% | 0% | 0% |
| Credit – EM Debt | 0% | 100% | 0% | 0% | 0% | 0% |
| Credit – EM Debt (Local) | 0% | 100% | 0% | 0% | 0% | 0% |
| Credit – High Yield | 0% | 100% | 0% | 0% | 0% | 0% |
| Credit – Levered Loans | 0% | 100% | 0% | 0% | 0% | 0% |
| Credit – REITs | 0% | 100% | 0% | 0% | 0% | 0% |
| Equity – EAFE | 0% | 0% | 10% | 20% | 70% | 2% |
| Equity – EM | 0% | 0% | 10% | 20% | 70% | 2% |
| Equity – US Large | 0% | 0% | 10% | 20% | 70% | 2% |
| Equity – US Small | 0% | 0% | 10% | 20% | 70% | 2% |
We also make the following tax rate assumptions:

  • Ordinary Income: 28%
  • Short-Term Capital Gains: 28%
  • Long-Term Capital Gains: 15%
  • Qualified Dividend: 15%
  • Collectibles: 28%
  • Ignore state-level taxes.
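Combining the realization table with these rate assumptions gives each asset class a blended tax drag. The sketch below applies the post-tax return formula to two rows of the table; the 6% pre-tax return is a hypothetical stand-in (not the actual J.P. Morgan assumption) used to isolate the tax effect, and the 15% long-term rate follows the earlier worked example:

```python
# Rows condensed from the realization table (fractions of annual return)
table = {
    "Equity - US Large":   {"ordinary": 0.00, "short": 0.10, "long": 0.20,
                            "dividend": 0.02},
    "Credit - High Yield": {"ordinary": 1.00, "short": 0.00, "long": 0.00,
                            "dividend": 0.00},
}
# Tax rate assumptions (illustrative)
rates = {"ordinary": 0.28, "short": 0.28, "long": 0.15, "dividend": 0.15}

def tax_adjust(pre_tax, row):
    """Blend the realized categories into one rate and apply the
    post-tax return formula from the previous section."""
    blended = sum(row[k] * rates[k] for k in ("ordinary", "short", "long"))
    d = row["dividend"]
    return pre_tax - (pre_tax - d) * blended - d * rates["dividend"]

for name, row in table.items():
    print(f"{name}: {tax_adjust(0.06, row):.2%}")
```

At the same pre-tax return, the mostly-unrealized equity exposure loses far less to taxes than the fully-ordinary-income credit exposure, which is exactly the tilt that shows up in the optimized weights below.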

The results of both optimizations can be seen in the table below.

 

| | Tax-Deferred | Tax-Managed |
|---|---|---|
| Equity – US Large | 3.9% | 5.3% |
| Equity – US Small | 5.9% | 7.0% |
| Equity – EAFE | 3.3% | 4.8% |
| Equity – Emerging Markets | 11.1% | 12.0% |
| Sum | 24.2% | 29.1% |
| Bond – US Aggregate | 0.1% | 0.1% |
| Bond – Int US Treasuries | 0.6% | 0.4% |
| Bond – LT US Treasuries | 12.4% | 12.2% |
| Bond – Investment Grade | 0.0% | 0.0% |
| Bond – Govt (Hedged) ex US | 0.3% | 0.1% |
| Bond – Govt (Not Hedged) ex US | 0.3% | 0.2% |
| Sum | 13.8% | 13.1% |
| Credit – High Yield | 6.2% | 3.9% |
| Credit – Levered Loans | 11.8% | 8.9% |
| Credit – EM Debt | 4.2% | 2.7% |
| Credit – EM Debt (Local) | 5.2% | 3.5% |
| Credit – REITs | 8.6% | 8.1% |
| Sum | 36.0% | 27.1% |
| Alternative – Commodities | 4.0% | 3.9% |
| Alternative – Gold | 11.3% | 13.9% |
| Alternative – Macro | 6.8% | 8.6% |
| Alternative – Long Bias | 0.1% | 0.1% |
| Alternative – Event Driven | 1.6% | 2.2% |
| Alternative – Relative Value | 0.5% | 1.3% |
| Alternative – TIPS | 1.6% | 0.8% |
| Sum | 26.0% | 30.8% |
 

Broadly speaking, we see a shift away from credit-based asset classes (though they still command a significant 27% of the portfolio) and towards equities and alternatives.

We would expect that if the outlook for equities improved, or we reduced the expected turnover within the portfolio, this shift would be even more material.

It is important to note that at least some of this difference can be attributed to the simulation-based optimization engine.  Percentages can be misleading in their precision: the basis point differences between assets within the bond category, for example, are not statistically significant changes.

And how much difference does all this work make?  Using our tax-adjusted expected returns, we estimate a 0.20% increase in expected return between tax-managed and tax-deferred versions right now.  As we said: no silver bullets, just marginal improvements.

What About Municipal Bonds?

You may have noticed municipal bonds are missing from the above example.  What gives?

Part of the answer is theoretical.  Consider the following situation.  You have two portfolios that are identical in every which way (e.g. duration, credit risk, liquidity risk, et cetera), except one is comprised of municipal bonds and one of corporate bonds.  Which one do you choose?

The one with the higher post-tax yield, right?

This hypothetical highlights two important considerations.  First, the idea that municipal bonds are for taxable accounts and corporate bonds are for tax-deferred accounts overlooks the fact that investors should be looking to maximize post-tax return regardless of asset location.  If municipal bonds offer a better return, then put them in both accounts!  Similarly, if corporate bonds offer a more attractive return after taxes, then they should be held in taxable accounts.

For example, right now the iShares iBoxx $ Investment Grade Corporate Bond ETF (LQD) has a 30-day SEC yield of 3.16%.  The VanEck Vectors AMT-Free Intermediate Municipal Index ETF (ITM) has a 30-day SEC yield of just 1.9%.  For an investor in the top 39.6% tax bracket, however, that tax-exempt yield is equivalent to a 3.15% taxable yield.

In other words, LQD and ITM offer a nearly identical return within a taxable account for an investor in the highest tax bracket.  Lower tax brackets imply lower taxable-equivalent returns, meaning that LQD may be a superior investment for those investors.  (Of course, we should note that municipal bonds are not corporate bonds.  They are often less liquid, but of higher credit quality.)
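The bracket-dependence is easy to see with the standard taxable-equivalent yield formula:

```python
def taxable_equivalent_yield(muni_yield, marginal_rate):
    """Yield a fully taxable bond must offer to match a tax-exempt
    municipal yield after taxes."""
    return muni_yield / (1.0 - marginal_rate)

# ITM's 1.9% tax-exempt yield at the top 39.6% bracket vs. a 15% bracket
print(f"{taxable_equivalent_yield(0.019, 0.396):.2%}")  # ~3.15%
print(f"{taxable_equivalent_yield(0.019, 0.150):.2%}")  # ~2.24%
```

At a 15% marginal rate, the muni's taxable-equivalent yield falls well short of LQD's 3.16%, flipping the comparison.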

Which brings up our second point: taxes are highly personal.  For a wealthy investor, an ordinary income tax of 35% could make municipal bonds far more attractive than they are for an investor only paying a 15% ordinary income tax rate.

Simply put: solving the when and where of municipal bonds is not always straightforward.  We believe the best approach is to account for them as a standalone asset class within the optimization, letting the optimizer figure out how to maximize post-tax returns.

Conclusion

We believe that a low-return world means that many investors will have a tough road ahead when it comes to achieving their financial goals.  We see no silver bullet to this problem.  We do see, however, many small steps that can be taken that can compound upon each other to have a significant impact.  We believe that asset location provides one such opportunity and is therefore a topic that deserves far more attention in a low-return environment.

 


 

[1] See The Impact of High Equity Valuations on Safe Withdrawal Rates –   https://blog.thinknewfound.com/2017/08/impact-high-equity-valuations-safe-retirement-withdrawal-rates/

[2] See Portfolios in Wonderland & The Weird Portfolio – https://blog.thinknewfound.com/2017/08/portfolios-wonderland-weird-portfolio/

[3] See The Butterfly Effect in Retirement Planning – https://blog.thinknewfound.com/2017/09/butterfly-effect-retirement-planning/

[4] Clearly this glosses over some very important details.  For example, an investor that has significant withdrawal needs in the near future, but has the majority of their assets tied up in tax-deferred accounts, would significantly complicate this optimization.  The optimizer will likely put tax-efficient assets (e.g. equity ETFs) in taxable accounts, while less tax-efficient assets (e.g. corporate bonds) would end up in tax-deferred accounts.  Unfortunately, this would put the investor’s liquidity needs at significant risk.  This could be potentially addressed by adding expected drawdown constraints on the taxable account.

[5] https://www.betterment.com/resources/research/tax-coordinated-portfolio-white-paper/

[6] http://www.northinfo.com/documents/337.pdf

[7] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2447403

[8] We adjust volatility as well.
