
Summary

  • Constructing an asset allocation that never lost money over given rolling periods leads to unsettling allocations: large positions in small-caps, long-term U.S. Treasuries, and precious metals.
  • In many investment analyses, past results may be a downright misleading guide to the future because one realization of historical data leads to a result that is overfit.
  • To combat this, a common approach is to look at other geographies or time periods. But we do not always have this luxury.
  • By introducing some randomness into the portfolio construction process, we can generate a more intuitive and robust result that is not tailored to certain artifacts in the data.
  • Adding uncertainty to a certain past creates more certainty about an uncertain future.

Is it possible to create an asset allocation that has never lost money over rolling 6-month periods?

Sure, just about anything is possible if we torture the data enough.  Whether it is meaningful or not is another question entirely.

Yet there may be interesting lessons to learn in such an exercise.  After all, many investors have future liabilities they must account for.  If, for example, an investor knows they plan on buying a house in 5 years and will need to put $100,000 down, how might they consider investing that $100,000 today?

One answer is to construct our own indexed annuity by purchasing $100,000 of 5-year U.S. Treasuries at par value and investing the coupons into 6-month call options on the S&P 500.  Barring a default in the U.S. government, this would provide $100,000 at maturity and allow participation in U.S. equity growth.1
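To make the arithmetic concrete, here is a minimal sketch of those cash flows; the 2.5% coupon rate is an assumed figure for illustration, not a quoted rate.

```python
# A minimal sketch of the DIY indexed annuity described above.
# The coupon rate is an illustrative assumption, not a market quote.
principal = 100_000   # par value of the 5-year U.S. Treasury position
coupon_rate = 0.025   # assumed annual coupon, paid semi-annually

semiannual_coupon = principal * coupon_rate / 2
print(f"Each 6-month coupon available for call options: ${semiannual_coupon:,.2f}")

# Worst case: every 6-month S&P 500 call expires worthless, but the
# Treasury still redeems at par, returning the full $100,000 principal.
# Best case: the calls add equity upside on top of the protected principal.
```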

For investors less interested in an engineered solution and more interested in a simple asset allocation, we may be able to turn to history as a guide. Specifically, using a cross-section of asset classes going back to 1974, we can construct portfolios that maximize annualized returns subject to a constraint that the portfolio never has an end-of-period drawdown.

For example, we can construct a portfolio that would maximize full-period returns subject to the constraint that it never had a loss in any rolling 12-month period.
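As a concrete illustration, here is a minimal Python sketch of what such an optimization might look like. The code and names are ours, not the optimizer actually used to produce the results below; `returns` is assumed to be a T-by-N matrix of monthly asset class returns.

```python
# A sketch of the constrained optimization described above.  `returns` is
# assumed to be a (T x N) NumPy array of monthly asset class returns.
import numpy as np
from scipy.optimize import minimize

def rolling_returns(returns, weights, horizon):
    """Compound portfolio return over every rolling `horizon`-month window."""
    growth = np.concatenate(([1.0], np.cumprod(1.0 + returns @ weights)))
    return growth[horizon:] / growth[:-horizon] - 1.0

def max_return_no_loss(returns, horizon, floor=0.0):
    """Maximize annualized return s.t. no rolling window loses more than `floor`."""
    n_months, n_assets = returns.shape
    neg_ann_return = lambda w: -(np.prod(1.0 + returns @ w) ** (12.0 / n_months) - 1.0)
    constraints = [
        {"type": "eq", "fun": lambda w: w.sum() - 1.0},   # fully invested
        {"type": "ineq",                                   # worst window >= -floor
         "fun": lambda w: rolling_returns(returns, w, horizon).min() + floor},
    ]
    result = minimize(neg_ann_return, np.full(n_assets, 1.0 / n_assets),
                      bounds=[(0.0, 1.0)] * n_assets,
                      constraints=constraints, method="SLSQP")
    return result.x if result.success else None  # None if no feasible portfolio
```

Setting `floor` to 0.10, 0.20, or 0.30 would yield the -10%, -20%, and -30% loss-constraint variants discussed later.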

What would such a portfolio look like for a 3-month horizon?  A 12-month horizon?  A 5-year horizon?  Before looking at the results plotted below, picture in your mind the general stock/bond mix you imagine would have historically satisfied these constraints. What allocation to stocks could you hold in order to not lose money over a rolling period of a given length?

What we see on the left end of the graph is the allocation for portfolios that seek to maximize returns subject to not having any losses over short rolling periods. If your horizon is less than 10 months, you’re probably best served by simply investing all your capital in 30-day U.S. Treasury bills.  This is unsurprising: the cost of certainty is opportunity.

In the middle of the graph we see that a 3-year (36-month) rolling period introduces Small-Cap equities into the mix, offset by exposures to Precious Metals, Intermediate-Term U.S. Treasuries, and Long-Term U.S. Treasuries.

Finally, at the far end of the graph we see the portfolio is dominated (approximately 57%) by exposure to Small-Cap equities, with the remainder being invested in Long-Term U.S. Treasuries (approximately 27%) and Precious Metals (approximately 16%).

While the left end of the graph seems rather intuitive, the right end of the graph deserves a curious raise of our eyebrows.  If you built a portfolio that was 60% small-cap, 30% long-term U.S. Treasuries, and 10% precious metals, how certain would you feel that you would at least have your starting capital back five years from today?

We certainly wouldn’t bet on it.

Yet history tells us that over our 45-year sample period, this is precisely the case! We should feel confident!  Are we merely letting our natural biases and risk aversion get the better of us?  Or, is our gut telling us something our brain has yet to figure out?

At the risk of stating the obvious, the future is far less certain than the past. Here’s what we know about the past:

  • Small-caps did exceptionally well, outpacing their large-cap peers by over 300 basis points per year.
  • U.S. Treasuries and Precious Metals were excellent diversifiers to U.S. equities.
  • Precious metals had exceptional early-period returns during the inflationary regime of the 1970s and early 1980s.

Yet history is just a sample size of 1.  Here is what we do not know about the future:

  • Whether the size premium is real.
  • Whether U.S. Treasuries will continue to diversify U.S. equities (e.g. monthly correlation from 1973-1985 between long-term U.S. Treasuries and Large-Cap equities was 0.37).
  • Whether an inflationary regime will manifest and whether precious metals will again serve as a hedge.

Our result of 60% small-cap equities, 30% long-term U.S. Treasuries, and 10% precious metals is definitively data-mined on this sample.  While 45 years may appear to be a sufficiently long horizon, in reality there are just a handful of meaningfully different economic regimes. A single outlier event (e.g. small-cap outperformance) can completely dominate the results.  But what if that outlier was noise, not signal?

Using Randomness to Create Certainty

Is there a way to improve our answer?  An obvious step would be to gather more data, either over time or across different geographies.  But what if we do not have any more data?

One potential answer is subset resampling, which averages together a large number of optimizations, each performed over a randomly selected subset of the investable universe.  In this case, we utilized the following approach (a Python sketch follows the list):

  1. Randomly select 4 of the 8 investable assets.
  2. Optimize for the portfolio that maximizes annualized returns, subject to the end-of-period loss constraint.
  3. If the solution is infeasible (e.g. avoiding losses over 3-month periods when T-Bills are not in the selected subset), throw the answer away.
  4. Repeat 1-3 until 1000 solutions have been found.
  5. Average the 1000 solutions together.
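Below is a minimal Python sketch of this loop.  It reuses the hypothetical `max_return_no_loss` optimizer from the earlier snippet; only the 8-asset universe, 4-asset subsets, and 1,000 solutions come from the text.

```python
# A sketch of the subset resampling procedure above, reusing the
# hypothetical max_return_no_loss() from the earlier snippet.
import numpy as np

def subset_resample(returns, horizon, floor=0.0,
                    n_solutions=1000, subset_size=4, seed=0):
    """Average optimizations over random asset subsets (steps 1-5 above)."""
    rng = np.random.default_rng(seed)
    n_assets = returns.shape[1]          # 8 investable assets in the text
    total = np.zeros(n_assets)
    found = 0
    while found < n_solutions:
        subset = rng.choice(n_assets, size=subset_size, replace=False)     # step 1
        w_subset = max_return_no_loss(returns[:, subset], horizon, floor)  # step 2
        if w_subset is None:             # step 3: infeasible subset, discard
            continue
        weights = np.zeros(n_assets)
        weights[subset] = w_subset
        total += weights
        found += 1                       # step 4: repeat until 1000 solutions
    return total / n_solutions           # step 5: average the solutions
```

Because any single optimization only ever sees four assets, no one asset's historical quirks can dominate the final, averaged allocation.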

The intuition behind this approach is that each individual optimization forgoes diversification to reduce estimation error.  Specifically, in our case, we are not reducing estimation error per se, but trying to reduce the impact of noise that may exist in historical returns (e.g. the magnitude of the realized size premium).  Averaging the results together is a naïve application of ensemble techniques (specifically, bagging) that can help decrease variance and avoid overfitting.

We plot the results below and further include graphs capturing end-of-period loss constraints of -10%, -20%, and -30%.

If we look at the result representing no losses over rolling 5-year periods (the far right side of the top-left graph), we see a much more diversified portfolio than before.  Equities have been pared back to just 40% and are composed of U.S. large-cap, small-cap, and World ex-US equities.  Bonds are no longer dominated by just long-duration exposure, but now include a mix of short- and intermediate-term Treasuries as well as corporates.  The precious metals allocation is cut in half.

Are we more confident that a diversified, 40/60 portfolio is much more likely to avoid losses over a 5-year period than a concentrated 55/45?  Absolutely.

Ironically, we had to introduce randomness into our process to create confidence: adding uncertainty to a certain past created more certainty about an uncertain future.

Conclusion

In this commentary, we began with a simple question: how should we construct a portfolio if we want to be confident we will not experience any losses after a certain length of time?

We turned towards history as a guide and found ourselves uneasy with the results. While short-horizon results were intuitive, longer-horizon results were significantly overweight small-cap equities. This result arose from the sample period in question, during which the realized size premium was significant.  It led to the construction of a portfolio that implicitly assumed such a premium would persist going forward.

Unfortunately, the certainty of history can, in many circumstances, be a poor guide to an uncertain future.

To combat the risk that past realized returns are the result of significant embedded noise, and that our process was unintentionally maximizing exposure to this noise, we introduced more randomness.  Specifically, we employed a process called subset resampling, which randomly selects a subset of the investable universe and optimizes over that subset in an effort to reduce estimation error.

We would argue the results align much more closely with our intuition.  For example, over rolling 5-year periods, subset resampling created a far more diversified portfolio and reduced exposure to equities by 15%.

While there are many lessons to be learned about balancing certainty and opportunity, we would argue that the biggest lesson of this commentary is simple: past results may be a downright misleading guide to the future.


  1. Similarly, with a bit of financial engineering an investor could construct their own zero-coupon bond by purchasing a 5-year U.S. Treasury and selling short the appropriate amount of 1-, 2-, 3-, and 4-year Treasuries in order to offset intermediate coupons.  The extra capital up front could then be used to invest in other assets.
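As a hypothetical illustration of this footnote, the short positions can be sized by working backwards from the longest maturity; annual coupon payments and all rates below are our assumptions, not figures from the text.

```python
# A hypothetical sketch of footnote 1: sizing shorts in 1- to 4-year
# Treasuries so their cash flows exactly offset the 5-year bond's coupons.
# Annual coupon payments and all rates below are illustrative assumptions.
c5 = 2.5                                      # 5-year bond coupon, per $100 face
coupons = {1: 2.0, 2: 2.1, 3: 2.2, 4: 2.3}    # assumed coupons, per $100 face

# At each year t, the redemption of the t-year short (face plus final coupon)
# plus coupons owed on still-outstanding longer-dated shorts must equal the
# c5 received that year, so we solve backwards from t = 4 down to t = 1.
short_face = {}
for t in (4, 3, 2, 1):
    owed = sum(short_face[k] * coupons[k] / 100 for k in short_face)  # only k > t so far
    short_face[t] = (c5 - owed) / (1 + coupons[t] / 100)

print({t: round(f, 4) for t, f in sorted(short_face.items())})
# Net result: zero intermediate cash flows, par plus the final coupon at
# year 5, and upfront short-sale proceeds that free capital for other assets.
```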

Corey is co-founder and Chief Investment Officer of Newfound Research, a quantitative asset manager offering a suite of separately managed accounts and mutual funds. At Newfound, Corey is responsible for portfolio management, investment research, strategy development, and communication of the firm's views to clients.

Prior to offering asset management services, Newfound licensed research from the quantitative investment models developed by Corey. At peak, this research helped steer the tactical allocation decisions for upwards of $10bn.

Corey is a frequent speaker on industry panels and contributes to ETF.com, ETF Trends, and Forbes.com’s Great Speculations blog. He was named a 2014 ETF All Star by ETF.com.

Corey holds a Master of Science in Computational Finance from Carnegie Mellon University and a Bachelor of Science in Computer Science, cum laude, from Cornell University.

You can connect with Corey on LinkedIn or Twitter.
