
Machine Learning, Subset Resampling, and Portfolio Optimization

This post is available as a PDF download here

Summary

  • Portfolio optimization research can be challenging due to the plethora of factors that can influence results, making it hard to generalize results outside of the specific cases tested.
  • That being said, building a robust portfolio optimization engine requires a diligent focus on estimation risk. Estimation risk is the risk that the inputs to the portfolio optimization process (i.e. expected returns, volatilities, correlations) are imprecisely estimated by sampling from the historical data, leading to suboptimal allocations.
  • We summarize the results from two recent papers we’ve reviewed on the topic of managing estimation risk. The first paper relies on techniques from machine learning while the second paper uses a form of simulation called subset resampling.
  • Both papers report that their methodologies outperform various heuristic and optimization-based benchmarks.
  • We perform our own tests by building minimum variance portfolios using the 49 Fama/French industry portfolios.  We find that while both outperform equal-weighting on a risk-adjusted basis, the results are not statistically significant at the 5% level.

 

This week, we are going to review a couple of recent papers we’ve come across on the topic of reducing estimation risk in portfolio optimization.

Before we get started, we want to point out that while there are many fascinating papers on portfolio optimization, it is also one of the most frustrating areas to study in our opinion.  Why?  Because ultimately portfolio optimization is a very, very complex topic.  The results will be impacted in significant ways by a number of factors like:

  • What is the investment universe studied?
  • Over what time period?
  • How are the parameters estimated?
  • What are the lookback periods used to estimate parameters?
  • And so on…

Say that you find a paper that argues for the superiority of equal-weighted portfolios over mean-variance optimization by testing on a universe of large-cap U.S. equities. Does this mean that equal-weighting is superior to mean-variance optimization in general?  We tend to believe not.  Rather, we should take the study at face value: equal-weighting was superior to the particular style of mean-variance in this specific test.

In addition, the result in and of itself says nothing about why the outperformance occurred.  It could be that equal-weighting is a superior portfolio construction technique.

But maybe the equal-weighted stock portfolio just happens by chance to be close to the true Sharpe optimal portfolio.  If I have a number of asset classes that have reasonably similar returns, risks, and correlations, it is very likely that equal-weighting does a decent job of getting close to the Sharpe optimal solution.  On the other hand, consider an investment universe that consists of 9 equity sectors and U.S. Treasuries.  In this case, equal-weighting is much less likely to be close to optimal and we would find it more probable that optimization approaches could outperform.

Maybe equal-weighting exposes the stock portfolio to risk-premia like the value and size factors that improve performance.  I suspect that to some extent the outperformance of minimum variance portfolios in a number of studies is at least partially explained by the exposures that these portfolios have to the defensive or low beta factor (the tendency of low risk exposures to outperform high risk exposures on a risk-adjusted basis).

Maybe the mean estimates in the mean-variance optimization are just terrible and the results are less an indictment on MVO than on the particular mean estimation technique used.  To some extent, the difficulty of estimating means is a major part of the argument for equal-weighting or other heuristic or shrinkage-based approaches.  At the same time, we see a number of studies that estimate expected returns using sample means with long (i.e. 5 or 10 year) lookbacks.  These long-term horizons are exactly the period over which returns tend to mean revert and so the evidence would suggest these are precisely the types of mean estimates you wouldn’t want to use.  To properly test mean-variance, we should at least use mean estimates that have a chance of succeeding.

All this is a long-winded way of saying that it can be difficult to use the results from research papers to build a robust, general purpose portfolio optimizer.  The results may have limited value outside of the very specific circumstances explored in that particular paper.

That being said, this does not give us an excuse to stop trying.  With that preamble out of the way, we’ll return to our regularly scheduled programming.

 

Estimation Risk in Portfolio Optimization

Estimation risk is the risk that the inputs to the portfolio optimization process (i.e. expected returns, volatilities, correlations) are imprecisely estimated by sampling from the historical data, leading to suboptimal allocations.

One popular approach to dealing with estimation risk is to simply ignore parameters that are hard to estimate.  For example, the naïve 1/N portfolio, which allocates an equal amount of capital to each investment in the universe, completely forgoes using any information about the distribution of returns.  DeMiguel, Garlappi, and Uppal (2007)[1] tested fourteen variations of sample-based mean-variance optimization on seven different datasets and concluded that “…none is consistently better than the 1/N rule in terms of Sharpe Ratio, certainty-equivalent return, or turnover, which indicates that, out of sample, the gain from optimal diversification is more than offset by estimator error.”

Another popular approach is to employ “shrinkage estimators” for key inputs.  For example, Ledoit and Wolf (2004)[2] propose shrinking the sample correlation matrix towards (a fancy way of saying “averaging it with”) the constant correlation matrix.  The constant correlation matrix is simply the correlation matrix where each off-diagonal element is equal to the average pairwise correlation across all assets (the diagonal elements remain equal to one).

Generally speaking, shrinkage involves blending an “unstructured estimator” like the sample correlation matrix with a “structured estimator” like the constant correlation matrix that tries to represent the data with few free parameters. Shrinkage tends to limit extreme observations, thereby reducing the unwanted impact that such observations can have on the optimization result.
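As a concrete illustration, below is a minimal sketch of this type of shrinkage: blending the sample correlation matrix with the constant-correlation target.  The shrinkage intensity is left as a free parameter (Ledoit and Wolf derive a data-driven estimate of it); the function name and the fixed intensity are our own simplifications.

```python
import numpy as np

def constant_correlation_shrinkage(returns, delta=0.5):
    """Shrink the sample correlation matrix toward the constant-correlation target
    and return the implied covariance matrix.  returns: T x N matrix; delta in [0, 1]."""
    sample_cov = np.cov(returns, rowvar=False)
    vols = np.sqrt(np.diag(sample_cov))
    sample_corr = sample_cov / np.outer(vols, vols)

    # average pairwise correlation across the off-diagonal elements
    n = sample_corr.shape[0]
    avg_corr = (sample_corr.sum() - n) / (n * (n - 1))

    # constant-correlation target: ones on the diagonal, avg_corr everywhere else
    target = np.full((n, n), avg_corr)
    np.fill_diagonal(target, 1.0)

    shrunk_corr = delta * target + (1.0 - delta) * sample_corr
    return shrunk_corr * np.outer(vols, vols)   # convert back to a covariance matrix
```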

Interestingly, the common practice of imposing a short-sale constraint when performing mean-variance optimization or minimum variance optimization is equivalent to shrinking the expected return estimates[3] and the covariance estimates[4], respectively.

Both papers that we’ll discuss here propose alternate ways of performing shrinkage.

Applying Machine Learning to Reduce Estimation Risk

The first paper, Reducing Estimation Risk in Mean-Variance Portfolios with Machine Learning by Daniel Kinn (2018)[5], explores using a standard machine learning approach to reduce estimation risk in portfolio optimization.

Kinn’s approach recognizes that estimation error can be decomposed into two sources: bias and variance.  Both bias and variance result in suboptimal results, but in very different ways.  Bias results from the model doing a poor job of capturing the pertinent features of the data.  Variance, on the other hand, results from the model being sensitive to the data used to train the model.

To get a better intuitive sense of bias vs. variance, consider two weather forecasters, Mr. Bias and Ms. Variance.  Both Mr. Bias and Ms. Variance work in a town where the average temperature is 50 degrees.  Mr. Bias is very stubborn and set in his ways.  He forecasts that the temperature will be 75 degrees each and every day.  Ms. Variance, however, is known for having forecasts that jump up and down.  Half of the time she forecasts a temperature of 75 degrees and half of the time she forecasts a temperature of 25 degrees.

Both forecasters have roughly the same amount of forecast error, but the nature of their errors are very different.  Mr. Bias is consistent but has way too rosy of a picture of the town’s weather.  Ms. Variance on the other hand, actually has the right idea when it comes to long-term weather trends, but her volatile forecasts still leave much to be desired.

The following graphic from EliteDataScience.com gives another take on explaining the difference between the two concepts.

Source: https://elitedatascience.com/bias-variance-tradeoff

 

When it comes to portfolio construction, some popular techniques can be neatly classified into one of these two categories.  The 1/N portfolio, for example, has no variance (weights will be the same every period), but may have quite a bit of bias if it is far from the true optimal portfolio.  Sample-based mean-variance optimization, on the other hand, should have no bias (assuming the underlying distributions of asset class returns do not change over time), but can be highly sensitive to parameter estimates and therefore exhibit high variance.  At the end of the day, we are interested in minimizing total estimation error, which will generally involve a trade-off between bias and variance.

Source: https://elitedatascience.com/bias-variance-tradeoff

 

Finding where this optimal trade-off lies is exactly what Kinn sets out to accomplish with the machine learning algorithm described in this paper.  The general outline of the algorithm is pretty straightforward (a rough code sketch follows the list):

  1. Identify the historical data to be used in calculating the sample moments (expected returns, volatilities, and correlations).
  2. Add a penalty function to the function that we are going to optimize. The paper discusses a number of different penalty functions including Ridge, Lasso, Elastic Net, and Principal Component Regression.  These penalty functions will effectively shrink the estimated parameters with the exact nature of the shrinkage dependent on the penalty function being used.  By doing so we introduce some bias, but hopefully with the benefit of reducing variance even further and as a result reducing overall estimation error.
  3. Use K-fold cross-validation to fit the parameter(s) of the penalty function. Cross-validation is a machine learning technique where the training data is divided into various sets of in-sample and out-of-sample data.  The parameter(s) chosen will be those that produce the lowest estimation error in the out-of-sample data.
  4. Using the optimized parameters from #3, fit the model on the entire training set. The result will be the optimized portfolio weights for the next holding period.
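To make the workflow concrete, here is a rough sketch of steps 2 through 4 using our own simplification: an L2 (Ridge-style) penalty added to a minimum-variance objective, with the penalty strength chosen by K-fold cross-validation on out-of-fold portfolio variance.  Kinn frames the problem as a penalized regression, so this is only meant to illustrate the cross-validation mechanics, not to reproduce his exact estimator; the function names and the candidate penalty grid are ours.

```python
import numpy as np

def ridge_min_variance_weights(returns, lam):
    """Closed-form, fully-invested minimum-variance weights with an L2 penalty."""
    n = returns.shape[1]
    cov = np.cov(returns, rowvar=False) + lam * np.eye(n)
    w = np.linalg.solve(cov, np.ones(n))
    return w / w.sum()

def cross_validated_lambda(returns, lambdas, k=5):
    """Step 3: pick the penalty that minimizes average out-of-fold portfolio variance."""
    t = returns.shape[0]
    folds = np.array_split(np.arange(t), k)
    scores = []
    for lam in lambdas:
        oos_var = []
        for fold in folds:
            train = np.setdiff1d(np.arange(t), fold)
            w = ridge_min_variance_weights(returns[train], lam)
            oos_var.append(np.var(returns[fold] @ w))
        scores.append(np.mean(oos_var))
    return lambdas[int(np.argmin(scores))]

# Step 4: refit on the full training window with the selected penalty
# rets = ...  # T x N matrix of historical returns
# best_lam = cross_validated_lambda(rets, np.logspace(-4, 0, 20))
# weights = ridge_min_variance_weights(rets, best_lam)
```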

Kinn tests three versions of the algorithm (one using a Ridge penalty function, one using a Lasso penalty function, and one using principal component regression) on the following real-world data sets.

  • 20 randomly selected stocks from the S&P 500 (covers January 1990 to November 2017)
  • 50 randomly selected stocks from the S&P 500 (covers January 1990 to November 2017)
  • 30 industry portfolios using stocks listed on the NYSE, AMEX, and NASDAQ (covers January 1990 to January 2018)
  • 49 industry portfolios using stocks listed on the NYSE, AMEX, and NASDAQ (covers January 1990 to January 2018)
  • 200 largest cryptocurrencies by market value as of the end of 2017 (if there was ever a sign that a paper on portfolio optimization was written in 2018, it has to be that one of the datasets relates to crypto)
  • 1200 cryptocurrencies observed from September 2013 to December 2017

As benchmarks, Kinn uses traditional sample-based mean-variance, sample-based mean-variance with no short selling, minimum variance, and 1/N.

The results are pretty impressive with the machine learning algorithms delivering statistically significant risk-adjusted outperformance.

Here are a few thoughts/comments we had when implementing the paper ourselves:

  1. The specific algorithm, as outlined in the paper, is a bit inflexible in the sense that it only works for mean-variance optimization where the means and covariances are estimated from the sample. In other words, we couldn’t use the algorithm to compute a minimum variance portfolio or a mean-variance portfolio where we want to substitute in our own return estimates.  That being said, we think there are some relatively straightforward tweaks that can make the process applicable in these scenarios.
  2. In our tests, the parameter optimization for the penalty functions was a bit unstable. For example, when using the principal component regression, we might identify two principal components as being worth keeping in one month and then ten principal components being worth keeping in the next month.  This can in turn lead to instability in the allocations.  While this is a concern, it could be dealt with by smoothing the parameters over a number of months (although this introduces more questions like how exactly to smooth and over what period).
  3. The results tend to be biased towards having significantly fewer holdings than the 1/N benchmark. For example, see the right-hand chart in the exhibit below.  While this is by design, we do tend to get wary of results showing such concentrated portfolios to be optimal, especially when in the real world we know that asset class distributions are far from well-behaved.

 

Applying Subset Resampling to Reduce Estimation Error

The second paper, Portfolio Selection via Subset Resampling by Shen and Wang (2017)[6], uses a technique called subset resampling.  This approach works as follows:

  1. Select a random subset of the securities in the universe (e.g. if there are 30 commodity contracts, you could pick ten of them).
  2. Perform the portfolio optimization on the subset selected in #1.
  3. Repeat steps #1 and #2 many times.
  4. Average the resulting allocations together to get the final portfolio.

The table below shows an example of how this would work for three asset classes and three simulations with two asset classes selected in each subset.

One way we can try to get intuition around subset resampling is by thinking about the extremes.  If we resampled using subsets of size 1, then we would end up with the 1/N portfolio.  If we resampled using subsets that were the same size as the universe, we would just have the standard portfolio optimized over the entire universe.  With subset sizes greater than 1 and less than the size of the whole universe, we end up with some type of blend between 1/N and the traditionally optimized portfolio.

The only parameter we need to select is the size of the subsets.  The authors suggest a subset size equal to n^0.8, where n is the number of securities in the universe.  For the S&P 500, this would correspond to a subset size of 144.
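A minimal sketch of the full procedure, using a closed-form minimum-variance optimization on each subset, might look as follows.  The function names, the number of simulations, and the choice of minimum variance as the inner optimizer are our own assumptions for illustration.

```python
import numpy as np

def min_variance_weights(cov):
    """Closed-form, fully-invested (unconstrained) minimum-variance weights."""
    w = np.linalg.solve(cov, np.ones(cov.shape[0]))
    return w / w.sum()

def subset_resampled_weights(returns, n_sims=1000, seed=0):
    """Steps 1-4: optimize over many random subsets and average the allocations."""
    rng = np.random.default_rng(seed)
    t, n = returns.shape
    subset_size = max(2, int(round(n ** 0.8)))        # the authors' suggested subset size
    weights = np.zeros(n)
    for _ in range(n_sims):
        subset = rng.choice(n, size=subset_size, replace=False)   # step 1
        cov = np.cov(returns[:, subset], rowvar=False)
        weights[subset] += min_variance_weights(cov)              # step 2
    return weights / n_sims                           # step 4: average of subset allocations
```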

The authors test subset resampling on the following real-world data sets.

  • FF100: 100 Fama and French portfolios spanning July 1963 to December 2004
  • ETF139: 139 ETFs spanning January 2008 to October 2012
  • EQ181:  Individual equities from the Russell Top 200 Index (excluding those stocks with missing data) spanning January 2008 to October 2012
  • SP434:  Individual equities from the S&P 500 Index (excluding those stocks with missing data) spanning September 2001 to August 2013.

As benchmarks, the authors use:

  • 1/N (EW);
  • value-weighted (VW);
  • minimum-variance (MV);
  • resampled efficiency (RES) from Michaud (1989)[7];
  • the two-fund portfolio (TZT) from Tu and Zhou (2011)[8], which blends 1/N and classic mean-variance;
  • the three-fund portfolio (KZT) from Kan and Zhou (2007)[9], which blends the risk-free asset, classic mean-variance, and minimum variance;
  • the four-fund portfolio (TZF) from Tu and Zhou (2011), which blends KZT and 1/N;
  • mean-variance using the shrinkage estimator from Ledoit and Wolf (2004) (SKC); and
  • on-line passive aggressive mean reversion (PAMR) from Li (2012)[10].

Similar to the machine learning algorithm, subset resampling does very well in terms of risk-adjusted performance.  On three of the four data sets, the Sharpe Ratio of subset resampling is better than that of 1/N by a statistically significant margin.  Additionally, subset resampling has the lowest maximum drawdown in three of the four data sets.  From a practical standpoint, it is also positive to see that the turnover for subset resampling is significantly lower than many of the competing strategies.

 

As we did with the first paper, here are some thoughts that came to mind in reading and re-implementing the subset resampling paper:

  1. As presented, the subset resampling algorithm will be sensitive to the number and types of asset classes in an undesirable way. What do we mean by this?  Suppose we had three uncorrelated asset classes with identical means and standard deviations.  We use subset resampling with subsets of size two to compute a mean-variance portfolio.  The result will be approximately 1/3 of the portfolio in each asset class, which happens to match the true mean-variance optimal portfolio.  Now we add a fourth asset class that also has the same mean and standard deviation but is perfectly correlated to the third asset class.  With this setup, the third and fourth asset classes are one and the same.  As a result, the true mean-variance optimal portfolio will have 1/3 in each of the first and second asset classes and 1/6 in each of the third and fourth asset classes (in reality, any solution in which the allocations to the third and fourth asset classes sum to 1/3 is optimal).  However, subset resampling will produce a portfolio that is 25% in each of the four asset classes, an incorrect result.  Note that this is a problem with many heuristic solutions, including the 1/N portfolio.
  2. There are ways that we could deal with the above issue by not sampling uniformly, but this will introduce some more complexity into the approach.
  3. In a mean-variance setting, subset resampling will dilute the value of our mean estimates. This should be expected when using any shrinkage-like approach, but it is something to at least be aware of. Dilution will be more severe the smaller the size of the subsets.
  4. In terms of computational burden, it can be very helpful to use some “smart” resampling that is able to get a representative sampling with fewer iterations than a naïve approach. Otherwise, subset resampling can take quite a while to run due to the sheer number of optimizations that must be performed.

Performing Our Own Tests

In this section, we perform our own tests using what we learned from the two papers.  Initially, we performed the test using mean-variance as our optimization of choice with 12-month return as the mean estimate.  We found, however, that the impact of the mean estimate swamped that of the optimization methods.  As a result, we repeated the tests, this time building minimum variance portfolios.  This isolates the estimation error relating to the covariance matrix, which we think is more relevant anyway since few practitioners use sample-based estimates of expected returns. Note that we used the principal component regression version of the machine learning algorithm.

Our dataset was the 49 industry portfolios provided in the Fama and French data library. We tested the following optimization approaches (a brief code sketch of two of the shrinkage targets follows the list):

  • EW: 1/N equally-weighted portfolio
  • NRP: naïve risk parity where positions are weighted inversely to their volatility, correlations are ignored
  • MV: minimum variance using the sample covariance matrix
  • ZERO: minimum variance using sample covariance matrix shrunk using a shrinkage target where all correlations are assumed to be zero
  • CONSTANT: minimum variance using the sample covariance matrix shrunk using a shrinkage target where all correlations are set equal to the average sample pairwise correlation across all assets in the universe
  • PCA: minimum variance using sample covariance matrix shrunk using a shrinkage target that only keeps the top 10% of eigenvectors by variance explained
  • SSR: subset resampling
  • ML: machine learning with principal component regression
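For reference, below is a rough sketch of how the ZERO and PCA shrinkage targets can be constructed, along with the resulting minimum variance weights.  The shrinkage intensity and the handling of the residual variance in the PCA target are simplifications of our own rather than a description of our exact implementation.

```python
import numpy as np

def zero_correlation_target(cov):
    """ZERO target: keep the sample variances, set all correlations to zero."""
    return np.diag(np.diag(cov))

def pca_target(cov, keep_frac=0.10):
    """PCA target: keep the top eigenvectors by variance explained, then restore
    the discarded variance to the diagonal so total variance is preserved."""
    vals, vecs = np.linalg.eigh(cov)                  # eigenvalues in ascending order
    k = max(1, int(np.ceil(keep_frac * len(vals))))
    top = slice(len(vals) - k, len(vals))
    target = vecs[:, top] @ np.diag(vals[top]) @ vecs[:, top].T
    target += np.diag(np.diag(cov) - np.diag(target))
    return target

def shrunk_min_variance(returns, target_fn, delta=0.5):
    """Blend the sample covariance with a shrinkage target and solve for
    fully-invested minimum-variance weights."""
    cov = np.cov(returns, rowvar=False)
    shrunk = delta * target_fn(cov) + (1 - delta) * cov
    w = np.linalg.solve(shrunk, np.ones(shrunk.shape[0]))
    return w / w.sum()
```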

The results are presented below:

Results are hypothetical and backtested and do not reflect any fees or expenses. Returns include the reinvestment of dividends. Results cover the period from 1936 to 2018. Past performance does not guarantee future results.

 

All of the minimum variance strategies deliver lower risk than EW and NRP and outperform on a risk-adjusted basis, although none of the Sharpe Ratio differences are statistically significant at the 5% level. Of the strategies, ZERO (shrinking with a covariance matrix that assumes zero correlation) and SSR (subset resampling) delivered the highest Sharpe Ratios.
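For readers who want to replicate the significance check, below is a simple bootstrap sketch for testing the difference in Sharpe ratios between a strategy and the equal-weight benchmark.  It is an illustration rather than a description of our exact test; more careful procedures (e.g. studentized or block bootstraps) exist.

```python
import numpy as np

def sharpe(r, periods_per_year=12):
    """Annualized Sharpe ratio of a series of periodic (excess) returns."""
    return np.mean(r) / np.std(r, ddof=1) * np.sqrt(periods_per_year)

def bootstrap_sharpe_diff_pvalue(strat, bench, n_boot=10_000, seed=0):
    """Two-sided p-value for the null of equal Sharpe ratios, resampling months jointly."""
    rng = np.random.default_rng(seed)
    strat, bench = np.asarray(strat), np.asarray(bench)
    t = len(strat)
    observed = sharpe(strat) - sharpe(bench)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, t, size=t)          # resample months with replacement, jointly
        diffs[i] = sharpe(strat[idx]) - sharpe(bench[idx])
    # center the bootstrap distribution to approximate the null of no difference
    return float(np.mean(np.abs(diffs - diffs.mean()) >= abs(observed)))
```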

 

Conclusion

Portfolio optimization research can be challenging due to the plethora of factors that can influence results, making it hard to generalize results outside of the specific cases tested.  It can be difficult to ascertain whether the conclusions are truly attributable to the optimization processes being tested or some other factors.

That being said, building a robust portfolio optimization engine requires a diligent focus on estimation risk.  Estimation risk is the risk that the inputs to the portfolio optimization process (i.e. expected returns, volatilities, correlations) are imprecisely estimated by sampling from the historical data, leading to suboptimal allocations.

We summarize the results from two recent papers we’ve reviewed on the topic of managing estimation risk.  The first paper relies on techniques from machine learning to find the optimal shrinkage parameters that minimize estimation error by acknowledging the trade-off between bias and variance.  The second paper uses a form of simulation called subset resampling.  In this approach, we repeatedly select a random subset of the universe, optimize over that subset, and then blend the subset results to get the final result.

Both papers report that their methodologies outperform various heuristic and optimization-based benchmarks.  We feel that both the machine learning and subset resampling approaches have merit after making some minor tweaks to deal with real world complexities.

We perform our own tests by building minimum variance portfolios using the 49 Fama/French industry portfolios.  We find that while both outperform equal-weighting on a risk-adjusted basis, the results are not statistically significant at the 5% level.  While this highlights that research results may not translate out of sample, this certainly does not disqualify either method as potentially being useful as tools to manage estimation risk.

 

 

[1] Paper can be found here: http://faculty.london.edu/avmiguel/DeMiguel-Garlappi-Uppal-RFS.pdf.

[2] Paper can be found here: http://www.ledoit.net/honey.pdf

[3] DeMiguel, Garlappi, and Uppal (2007)

[4] Jagannathan and Ma (2003), “Risk reduction in large portfolios: Why imposing the wrong constraints helps.”

[5] Paper can be found here: https://arxiv.org/pdf/1804.01764.pdf.

[6] Paper can be found here: https://aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/14443

[7] Paper can be found here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2387669

[8] Paper can be found here: https://ink.library.smu.edu.sg/cgi/viewcontent.cgi?article=2104&context=lkcsb_research

[9] Paper can be found here: https://www.cambridge.org/core/journals/journal-of-financial-and-quantitative-analysis/article/optimal-portfolio-choice-with-parameter-uncertainty/A0E9F31F3B3E0873109AD8B2C8563393

[10] Paper can be found here: http://research.larc.smu.edu.sg/mlg/papers/PAMR_ML_final.pdf

 

Momentum’s Magic Number

This post is available as a PDF download here.

Summary

  • In HIMCO’s May 2018 Quantitative Insight, they publish a figure that suggests the optimal holding length of a momentum strategy is a function of the formation period.
  • Specifically, the result suggests that the optimal holding period is one selected such that the formation period plus the holding period is equal to 14-to-18 months: a somewhat “magic” result that makes little intuitive, statistical, or economic sense.
  • To investigate this result, we construct momentum strategies for country indices as well as industry groups.
  • We find similar results, with performance peaking when the formation period plus the holding period is equal to 12-to-14 months.
  • While lacking a specific reason why this effect exists, it suggests that investors looking to leverage shorter-term momentum signals may benefit from longer investment horizons, particularly when costs are considered.

A few weeks ago, we came across a study published by HIMCO on momentum investing1.  Contained within this research note was a particularly intriguing exhibit.

Source: HIMCO Quantitative Insights, May 2018

What this figure demonstrates is that the excess cumulative return for U.S. equity momentum strategies peaks as a function of both formation period and holding period.  Specifically, the returns appear to peak when the sum of the formation and holding period is between 14-18 months.

For example, if you were to form a portfolio based upon trailing 6-1 momentum – i.e. ranking on the prior 6-month total returns and skipping the most recent month (labeled in the figure above as “2_6”) – this evidence suggests that you would want to hold such a portfolio for 8-to-12 months (labeled in the figure above as 14-to-18 months since the beginning of the uptrend).

Which is a rather odd conclusion.  Firstly, we would intuitively expect that we should employ holding periods that are shorter than our formation periods.  The notion here is that we want to use enough data to harvest information that will be stationary over the next, smaller time-step.  So, for example, we might use 36 months of returns to create a covariance matrix that we might hold constant for the next month (i.e. a 36-month formation period with a 1-month hold).  Given that correlations are non-stable, we would likely find the idea of using 1-month of data to form a correlation matrix we hold for the next 36-months rather ludicrous.

And, yet, here we are in a similar situation, finding that if we use a formation period of 5 months, we should hold our portfolio steady for the next 8-to-10 months.  And this is particularly weird in the world of momentum, which we typically expect to be a high turnover strategy.  How in the world can having a holding period longer than our formation period make sense when we expect information to quickly decay in value?

Perhaps the oddest thing of all is the fact that all these results center around 14-18 months.  It would be one thing if the conclusion was simply, “holding for six months after formation is optimal”; here the conclusion is that the optimal holding period is a function of formation period.  Nor is the conclusion something intuitive, like “the holding period should be half the formation period.”

Rather, the result – that the holding period should be 14-to-18 months minus the length of the formation period – makes little intuitive, statistical, or economic sense.

Out-of-Sample Testing with Countries and Sectors

In effort to explore this result further, we wanted to determine whether similar results were found when cross-sectional momentum was applied to country indices and industry groups.

Specifically, we ran three tests.

In the first, we constructed momentum portfolios using developed country index returns (U.S. dollar denominated; net of withholding taxes) from MSCI.  The countries included in the test are: Australia, Austria, Belgium, Canada, Denmark, Finland, France, Germany, Hong Kong, Ireland, Israel, Italy, Japan, Netherlands, New Zealand, Norway, Portugal, Singapore, Spain, Sweden, Switzerland, the United Kingdom, and the United States of America.  The data extends back to 12/1969.

In the second, we constructed momentum portfolios using the 12 industry group data set from the Kenneth French Data Library.  The data extends back to 7/1926.

In the third, we constructed momentum portfolios using the 49 industry group data set from the Kenneth French Data Library.  The data extends back to 7/1926.

For each data set, we ran the same test:

  • Vary formation periods from 5-1 to 12-1 months.
  • Vary holding periods from 1-to-26 months.
  • Using this data, construct dollar-neutral long/short portfolios that go long, in equal-weight, the top third ranking holdings and go short, in equal-weight, the bottom third.

Note that for holding periods exceeding 1 month, we employed an overlapping portfolio construction process.
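A rough sketch of this construction is below, assuming a pandas DataFrame of monthly total returns with one column per country or industry group.  The exact skip-month convention and the monthly rebalancing of each overlapping tranche back to its original weights are simplifying assumptions on our part.

```python
import numpy as np
import pandas as pd

def momentum_weights(returns, formation):
    """Dollar-neutral weights: long the top third and short the bottom third, ranked on
    cumulative return over the prior `formation` months, skipping the most recent month."""
    signal = (1 + returns).rolling(formation).apply(np.prod, raw=True) - 1
    signal = signal.shift(1)                          # skip the most recent month
    ranks = signal.rank(axis=1, pct=True)
    long = (ranks >= 2 / 3).astype(float)
    short = (ranks <= 1 / 3).astype(float)
    return long.div(long.sum(axis=1), axis=0) - short.div(short.sum(axis=1), axis=0)

def overlapping_ls_returns(returns, formation, holding):
    """Average the `holding` most recent tranches and compute next-month portfolio returns."""
    base = momentum_weights(returns, formation)
    weights = sum(base.shift(lag) for lag in range(holding)) / holding
    return (weights.shift(1) * returns).sum(axis=1)   # weights set at month-end, earn next month
```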

Below we plot the results.

Source: MSCI and Kenneth French Data Library. Calculations by Newfound Research. Past performance is not a predictor of future results.  All information is backtested and hypothetical and does not reflect the actual strategy managed by Newfound Research.  Performance is net of all fees except for underlying ETF expense ratios.  Returns assume the reinvestment of all dividends, capital gains, and other earnings.

 

While the results are not as clear as those published by HIMCO, we still see an intriguing effect: returns peak as a function of both formation and holding period. For the country strategy, performance appears to peak when the sum of the formation and holding periods is between 12-14 months, indicating that an investor using 5-1 month signals would want to hold for 7 months while an investor using 12-1 signals would only want to hold for 1 month.

For the industry data, the results are less clear.  Where the HIMCO and country results exhibited a clear “peak,” the industry results simply seem to “decay slower.”  In particular, we can see in the results for the 12-industry group test that almost all strategies peak with a 1-month holding period.  However, they all appear to fall off rapidly, and uniformly, after the time where formation plus holding period exceeds 16 months.

While the industry results are less pronounced, it is worth pointing out that they are achieved without the consideration of trading costs or taxes.  So, while the 5-1, 12-industry group strategy's return may peak with a 1-month hold, we can see that it later forms a second peak at a 9-month hold (“14 months since beginning uptrend”).  Given that we would expect a nine-month hold to exhibit considerably less trading, analysis that includes trading cost estimates may exhibit even greater peakedness in the results.

Does the Effect Persist for Long-Only Portfolios?

In analyzing factors, it is often important to try to determine whether a given result is arising from an effect found in the long leg or the short leg.  After all, most investors implement strategies in a long-only capacity.  While long-only strategies are, technically, equal to a benchmark plus a dollar-neutral long/short portfolio2, the long/short portfolio rarely reflects the true factor definition.

Therefore, we want to evaluate long-only construction to determine whether the same result holds, or whether it is a feature of the short-leg.

Source: MSCI and Kenneth French Data Library. Calculations by Newfound Research. Past performance is not a predictor of future results.  All information is backtested and hypothetical and does not reflect the actual strategy managed by Newfound Research.  Performance is net of all fees except for underlying ETF expense ratios.  Returns assume the reinvestment of all dividends, capital gains, and other earnings.

We find incredibly similar results.  Again, country indices appear to peak between 12-to-14 months after the beginning of the uptrend.  Industry group results, while not as strong as country results, still appear to offer fairly flat results until 12-to-14 months after the beginning of the uptrend.  Taken together, it appears that this result is sustained for long-only portfolio implementations as well.

Conclusion

Traditionally, momentum is considered a high turnover factor.  Relative ranking of recent returns can vary substantially over time and our intuition would lead us to expect that the shorter the horizon we use to measure returns, the shorter the time we expect the relative ranking to persist.

Yet recent research published by HIMCO finds this intuition may not be true.  Rather, they find that momentum portfolio performance tends to peak 14-to-18 months after the beginning of the measured uptrend. In other words, a portfolio formed on prior 5-month returns should be held for 9-to-13 months, while a portfolio formed on the prior 12-months of returns should only be held for 2-to-6 months.

This result is rather counter-intuitive, as we would expect that shorter formation periods would require shorter holding periods.

We test this result out-of-sample, constructing momentum portfolios using country indices, 12-industry group indices, and 49-industry group indices. We find a similar result in this data. We then further test whether the result is an artifact found only in long/short implementations or whether this information is useful for long-only investors as well.  Indeed, we find very similar results for long-only implementations.

Precisely why this result exists is still up in the air.  One argument may be that the trade-off is ultimately centered around win rate versus the size of winners.  If relative momentum tends to persist for only 12-to-18 months total, then using a 12-month formation period may give us a higher win rate but reduce the size of the winners we pick.  Conversely, using a shorter formation period may reduce the number of winners we pick correctly (i.e. lower win rate), but those we pick have further to run. Selecting a formation period and a holding period such that their sum equals approximately 14 months may simply be a hack to find the balance of win rate and win size that maximizes return.

 


 

The New Glide Path

This post is available as a PDF download here.

Summary

  • In practice, investors and institutions alike have spending patterns that make the sequence of market returns a relevant risk factor.
  • All else held equal, investors would prefer to make contributions before large returns and withdrawals before large declines.
  • For retirees making constant withdrawals, sustained declines in portfolio value represent a significant risk. Trend-following has demonstrated historical success in helping reduce the risk of these types of losses.
  • Traditionally, stock/bond glide paths have been used to control sequence risk. However, trend-following may be able to serve as a valuable hybrid between equities and bonds and provide a means to diversify our diversifiers.
  • Using backward induction and a number of simplifying assumptions, we generate a glide path based upon investor age and level of wealth.
  • We find that trend-following receives a significant allocation – largely in lieu of equity exposure – for investors early in retirement and whose initial consumption rate closely reflects the 4% level.

In past commentaries, we have written at length about investor sequence risk. Summarized simply, sequence risk is the sensitivity of investor goals to the sequence of market returns.  In finance, we traditionally assume the sequence of returns does not matter.  However, for investors and institutions that are constantly making contributions and withdrawals, the sequence can be incredibly important.

Consider for example, an investor who retires with $1,000,000 and uses the traditional 4% spending rule to allocate a $40,000 annual withdrawal to themselves. Suddenly, in the first year, their portfolio craters to $500,000.  That $40,000 no longer represents just 4%, but now it represents 8%.

Significant drawdowns and fixed withdrawals mix like oil and water.

Sequence risk is the exact reason why traditional glide paths have investors de-risk their portfolios over time from growth-focused, higher volatility assets like equities to traditionally less volatile assets, like short-duration investment grade fixed income.

Bonds, however, are not the only way investors can manage risk.  There are a variety of other methods, and frequent readers will know that we are strong advocates for the incorporation of trend-following techniques.

But how much trend-following should investors use?  And when?

That is exactly what this commentary aims to explore.

Building a New Glidepath

In many ways, this is a very open-ended question.  As a starting point, we will create some constraints that simplify our approach:

  1. The assets we will be limited to are broad U.S. equities, a trend-following strategy applied to U.S. equities, a 10-year U.S. Treasury index, and a U.S. Treasury Bill index.
  2. In any simulations we perform, we will use resampled historical returns.
  3. We assume an annual spend rate of $40,000 growing at 3.5% per year (the historical rate of annualized inflation over the period).
  4. We assume our investor retires at 60.
  5. We assume a male investor and use the Social Security Administration’s 2014 Actuarial Life Table to estimate the probability of death.

Source: St. Louis Federal Reserve and Kenneth French Database.  Past performance is hypothetical and backtested.  Trend Strategy is a simple 200-day moving average cross-over strategy that invests in U.S. equities when the price of U.S. equities is above its 200-day moving average and in U.S. T-Bills otherwise.  Returns are gross of all fees and assume the reinvestment of all dividends.  None of the equity curves presented here represent a strategy managed by Newfound Research. 

To generate our glide path, we will use a process of backwards induction similar to that proposed by Gordon Irlam in his article Portfolio Size Matters (Journal of Personal Finance, Vol 13 Issue 2). The process works as follows (a condensed code sketch appears after the list):

  1. Starting at age 100, assume a success rate of 100% for all wealth levels except for $0, which has a 0% success rate.
  2. Move back in time 1 year and generate 10,000 1-year return simulations.
  3. For each possible wealth level and each possible portfolio configuration of the four assets, use the 10,000 simulations to generate 10,000 possible future wealth levels, subtracting the inflation-adjusted annual spend.
  4. For a given simulation, use standard mortality tables to determine if the investor died during the year. If he did, set the success rate to 100% for that simulation. Otherwise, set the success rate to the success rate of the wealth bucket the simulation falls into at T+1.
  5. For the given portfolio configuration, set the success rate as the average success rate across all simulations.
  6. For the given wealth level, select the portfolio configuration that maximizes success rate.
  7. Return to step 2.
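A condensed sketch of these seven steps is shown below.  The function signature, the wealth grid, and the use of interpolation (rather than discrete wealth buckets) to look up next-year success rates are simplifications of our own and not the exact code behind the exhibits.

```python
import numpy as np

def build_glide_path(sim_returns, candidate_allocs, wealth_grid, death_prob,
                     start_age=60, end_age=100, spend=40_000, inflation=0.035):
    """
    sim_returns: dict age -> (n_sims x n_assets) array of simulated gross returns for that year
    candidate_allocs: (n_candidates x n_assets) array of allocations, each row summing to one
    wealth_grid: increasing 1-d array of wealth levels (including 0)
    death_prob: dict age -> probability of dying during that year
    Returns a dict age -> array holding the index of the best allocation per wealth level.
    """
    wealth_grid = np.asarray(wealth_grid, dtype=float)
    success = np.where(wealth_grid > 0, 1.0, 0.0)        # step 1: boundary condition at age 100
    best = {}
    for age in range(end_age - 1, start_age - 1, -1):    # step 2: walk backwards in time
        rets = np.asarray(sim_returns[age])              # n_sims x n_assets gross returns
        spend_t = spend * (1 + inflation) ** (age - start_age)
        new_success = np.zeros(len(wealth_grid))
        best[age] = np.zeros(len(wealth_grid), dtype=int)
        for i, w0 in enumerate(wealth_grid):
            rates = []
            for alloc in candidate_allocs:               # step 3: each portfolio configuration
                w1 = w0 * (rets @ alloc) - spend_t       # simulated wealth one year ahead
                alive = np.interp(w1, wealth_grid, success)   # success rate if he survives
                alive[w1 <= 0] = 0.0                     # running out of money is failure
                # steps 4/5: death counts as success; otherwise use next year's success rate
                rates.append(death_prob[age] + (1 - death_prob[age]) * alive.mean())
            best[age][i] = int(np.argmax(rates))         # step 6: best configuration
            new_success[i] = max(rates)
        success = new_success                            # step 7: repeat for the prior year
    return best
```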

As a technical side-note, we should mention that exploring all possible portfolio configurations is a computationally taxing exercise, as would be an optimization-based approach.  To circumvent this, we employ a quasi-random low-discrepancy sequence generator known as a Sobol sequence.  This process allows us to generate 100 samples that efficiently span the space of a 4-dimensional unit hypercube.  We can then normalize these samples and use them as our sample allocations.

If that all sounded like gibberish, the main thrust is this: we’re not really checking every single portfolio configuration, but trying to use a large enough sample to capture most of them.
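For those curious, generating the candidate allocations might look something like the snippet below (we draw 128 points rather than 100 here, since Sobol sequences are balanced at powers of two).

```python
import numpy as np
from scipy.stats import qmc

# Quasi-random points on the 4-dimensional unit hypercube, normalized so that each
# row can be read as a long-only allocation across the four assets.
sampler = qmc.Sobol(d=4, scramble=True, seed=42)
points = sampler.random(128)                               # Sobol prefers powers of two
allocations = points / points.sum(axis=1, keepdims=True)   # each row sums to one
```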

By working backwards, we can tackle what would be an otherwise computationally intractable problem.  In effect, we are saying, “if we know the optimal decision at time T+1, we can use that knowledge to guide our decision at time T.”

This methodology also allows us to recognize that the level of wealth relative to the level of spending is important.  For example, having $2,000,000 at age 70 with a $40,000 real spending rate is very different than having $500,000, and we would expect the optimal allocations to differ.

Consider the two extremes.  The first extreme is that we have an excess of wealth.  In this case, since we are optimizing to maximize the probability of success, the result will be to take no risk and hold a significant amount of T-Bills.  If, however, we had optimized to acknowledge a desire to bequeath wealth to the next generation, we would likely see the opposite extreme: with little risk of failure, we can load up on stocks to try to maximize growth.

The second extreme is having a significant dearth of wealth.   In this case, we would expect to see the optimizer recommend a significant amount of stocks, since the safer assets will likely guarantee failure while the risky assets provide a lottery’s chance of success.

The Results

To plot the results both over time as well as over the different wealth levels, we have to plot each asset individually, which we do below.  As an example of how to read these graphs, below we can see that in the table for U.S. equities, at age 74 and a $1,600,000 wealth level, the glide path would recommend an 11% allocation to U.S. equities.

A few features we can identify:

  • When there is little chance of success, the glide path tilts towards equities as a potential lottery ticket.
  • When there is a near guarantee of success, the glide path completely de-risks.
  • While we would expect a smooth transition in these glide paths, there are a few artifacts in the table (e.g. U.S. equities with $200,000 wealth at age 78). This may be due to a particular set of return samples that cascade through the tables.  Or, because the trend following strategy can exhibit nearly identical returns to U.S. equities over a number of periods, we can see periods where the trend strategy received weight instead of equities (e.g. $400,000 wealth level at age 96 or $200,000 at 70).

Ignoring the data artifacts, we can broadly see that trend following seems to receive a fairly healthy weight in the earlier years of retirement and at wealth levels where capital preservation is critical, but growth cannot be entirely sacrificed.  For example, we can see that an investor with $1,000,000 at age 60 would allocate approximately 30% of their portfolio to a trend following strategy.

Note that the initially assumed $40,000 consumption level aligns with the generally recommended 4% withdrawal assumption.  In other words, the levels here are less important than their size relative to desired spending.

It is also worth pointing out again that this analysis uses historical returns.  Hence, we see a large allocation to T-Bills which, once upon a time, offered a reasonable rate of return.  This may not be the case going forward.

Conclusion

Financial theory generally assumes that the order of returns is not important to investors. Any investor contributing or withdrawing from their investment portfolio, however, is dramatically affected by the order of returns.  It is much better to save before a large gain or spend before a large loss.

For investors in retirement who are making frequent and consistent withdrawals from their portfolios, sequence manifests itself in the presence of large and prolonged drawdowns.  Strategies that can help avoid these losses are, therefore, potentially very valuable.

This is the basis of the traditional glidepath.  By de-risking the portfolio over time, investors become less sensitive to sequence risk.  However, as bond yields remain low and investor life expectancy increases, investors may need to rely more heavily on higher volatility growth assets to avoid running out of money.

To explore these concepts, we have built our own glide path using four assets: broad U.S. equities, 10-year U.S. Treasuries, U.S. T-Bills, and a trend following strategy. Not surprisingly, we find that trend following commands a significant allocation, particularly in the years and wealth levels where sequence risk is highest, and often is allocated to in lieu of equities themselves.

Beyond recognizing the potential value-add of trend following, however, an important second takeaway may be that there is room for significant value-add in going beyond traditional target-date-based glide paths for investors.

Factor Fimbulwinter

This post is available as a PDF download here.

Summary

  • Value investing continues to experience a trough of sorrow. In particular, the traditional price-to-book factor has failed to establish new highs since December 2006 and sits in a 25% drawdown.
  • While price-to-book has been the academic measure of choice for 25+ years, many practitioners have begun to question its value (pun intended).
  • We have also witnessed the turning of the tides against the size premium, with many practitioners no longer considering it to be a valid stand-alone anomaly. This comes 35+ years after being first published.
  • With this in mind, we explore the evidence that would be required for us to dismiss other, already established anomalies.  Using past returns to establish prior beliefs, we simulate out forward environments and use Bayesian inference to adjust our beliefs over time, recording how long it would take for us to finally dismiss a factor.
  • We find that for most factors, we would have to live through several careers to finally witness enough evidence to dismiss them outright.
  • Thus, while factors may be established upon a foundation of evidence, their forward use requires a bit of faith.

In Norse mythology, Fimbulvetr (commonly referred to in English as “Fimbulwinter”) is a great and seemingly never-ending winter.  It continues for three seasons – long, horribly cold years that stretch on longer than normal – with no intervening summers.  It is a time of bitterly cold, sunless days where hope is abandoned and discord reigns.

This winter-to-end-all-winters is eventually punctuated by Ragnarok, a series of events leading up to a great battle that results in the ultimate death of the major gods, destruction of the cosmos, and subsequent rebirth of the world.

Investment mythology is littered with Ragnarok-styled blow-ups and we often assume the failure of a strategy will manifest as sudden catastrophe.  In most cases, however, failure may more likely resemble Fimbulwinter: a seemingly never-ending winter in performance with returns blown to-and-fro by the harsh winds of randomness.

Value investors can attest to this.  In particular, the disciples of price-to-book have suffered greatly as of late, with “expensive” stocks having outperformed “cheap” stocks for over a decade.  The academic interpretation of the factor sits nearly 25% below its prior high-water mark seen in December 2006.

Expectedly, a large number of articles have been written about the death of the value factor.  Some question the factor itself, while others simply argue that price-to-book is a broken implementation.

But are these simply retrospective narratives, driven by a desire to have an explanation for a result that has defied our expectations?  Consider: if price-to-book had exhibited positive returns over the last decade, would we be hearing from nearly as large a number of investors explaining why it is no longer a relevant metric?

To be clear, we believe that many of the arguments proposed for why price-to-book is no longer a relevant metric are quite sound. The team at O’Shaughnessy Asset Management, for example, wrote a particularly compelling piece that explores how changes to accounting rules have led book value to become a less relevant metric in recent decades.1

Nevertheless, we think it is worth taking a step back, considering an alternate course of history, and asking ourselves how it would impact our current thinking.  Often, we look back on history as if it were the obvious course.  “If only we had better prior information,” we say to ourselves, “we would have predicted the path!”2  Rather, we find it more useful to look at the past as just one realized path of many that could have happened, none of which were preordained.  Randomness happens.

With this line of thinking, the poor performance of price-to-book can just as easily be explained by a poor roll of the dice as it can be by a fundamental break in applicability.  In fact, we see several potential truths based upon performance over the last decade:

  1. This is all normal course performance variance for the factor.
  2. The value factor works, but the price-to-book measure itself is broken.
  3. The price-to-book measure is over-crowded in use, and thus the “troughs of sorrow” will need to be deeper than ever to get weak hands to fold and pass the alpha to those with the fortitude to hold.
  4. The value factor never existed in the first place; it was an unfortunate false positive that saturated the investing literature and broad narrative.

The problem at hand is two-fold: (1) the statistical evidence supporting most factors is considerable and (2) the decade-to-decade variance in factor performance is substantial.  Taken together, you run into a situation where a mere decade of underperformance likely cannot undo the previously established significance.  Just as frustrating is the opposite scenario. Consider that these two statements are not mutually exclusive: (1) price-to-book is broken, and (2) price-to-book generates positive excess return over the next decade.

In investing, factor return variance is large enough that the proof is not in the eating of the short-term return pudding.

The small-cap premium is an excellent example of the difficulty in discerning, in real time, the integrity of an established factor.  The anomaly has failed to establish a meaningful new high since it was originally published in 1981.  Only in the last decade – nearly 30 years later – have the tides of the industry finally seemed to turn against it as an established anomaly and potential source of excess return.

Thirty years.

The remaining broadly accepted factors – e.g. value, momentum, carry, defensive, and trend – have all been demonstrated to generate excess risk-adjusted returns across a variety of economic regimes, geographies, and asset classes, creating a great depth of evidence supporting their existence. What evidence, then, would make us abandon faith from the Church of Factors?

To explore this question, we ran a simple experiment for each factor.  Our goal was to estimate how long it would take to determine that a factor was no longer statistically significant.

Our assumption is that the salient features of each factor’s return pattern will remain the same (i.e. autocorrelation, conditional heteroskedasticity, skewness, kurtosis, et cetera), but the forward average annualized return will be zero since the factor no longer “works.”

Towards this end, we ran the following experiment: 

  1. Take the full history for the factor and calculate prior estimates for mean annualized return and standard error of the mean.
  2. De-mean the time-series.
  3. Randomly select a 12-month chunk of returns from the time series and use the data to perform a Bayesian update to our mean annualized return.
  4. Repeat step 3 until the annualized return is no longer statistically non-zero at a 99% confidence threshold.

For each factor, we ran this test 10,000 times, creating a distribution that tells us how many years into the future we would have to wait until we were certain, from a statistical perspective, that the factor is no longer significant.
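For a single factor, the simulation can be sketched as follows, using a normal-normal Bayesian update with the factor's long-run volatility treated as known.  The exact likelihood and stopping details in our test differ slightly, but this captures the mechanics.

```python
import numpy as np

def years_until_failure(monthly_returns, z=2.576, max_years=1000, seed=0):
    """How many years of zero-mean returns are needed before the factor's annualized
    return is no longer statistically non-zero at a 99% confidence threshold."""
    rng = np.random.default_rng(seed)
    r = np.asarray(monthly_returns, dtype=float)
    n_months = len(r)
    ann_vol = r.std(ddof=1) * np.sqrt(12)

    # Step 1: priors from the full history (annualized mean and its standard error)
    post_mean = 12 * r.mean()
    post_var = ann_vol ** 2 / (n_months / 12)     # squared standard error of the mean
    chunk_var = ann_vol ** 2                      # variance of a single year's mean estimate

    # Step 2: de-mean the series so the "true" forward annualized return is zero
    demeaned = r - r.mean()

    for year in range(1, max_years + 1):
        # Step 3: draw a random 12-month chunk and perform a conjugate normal update
        start = rng.integers(0, n_months - 12)
        chunk_mean = 12 * demeaned[start:start + 12].mean()
        precision = 1 / post_var + 1 / chunk_var
        post_mean = (post_mean / post_var + chunk_mean / chunk_var) / precision
        post_var = 1 / precision
        # Step 4: stop once the mean is no longer significantly non-zero at 99%
        if abs(post_mean) / np.sqrt(post_var) < z:
            return year
    return max_years

# Repeating this 10,000 times (with different seeds) per factor gives the distribution
# whose medians are summarized in the table below.
```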

Sixty-seven years.

Based upon this experiment, sixty-seven years is the median number of years we will have to wait until we officially declare price-to-book (“HML,” as it is known in the literature) to be dead.3  At the risk of being morbid, we’re far more likely to die before the industry finally sticks a fork in price-to-book.

We perform this experiment for a number of other factors – including size (“SMB” – “small-minus-big”), quality (“QMJ” – “quality-minus-junk”), low-volatility (“BAB” – “betting-against-beta”), and momentum (“UMD” – “up-minus-down”) – and see much the same result.  It will take decades before sufficient evidence mounts to dethrone these factors.

                               HML    SMB    QMJ    BAB    UMD
Median Years-until-Failure      67     43    132    284    339

 

Now, it is worth pointing out that these figures for a factor like momentum (“UMD”) might be a bit skewed due to the design of the test.  If we examine the long-run returns, we see a fairly docile return profile punctuated by sudden and significant drawdowns (often called “momentum crashes”).

Since a large proportion of the cumulative losses are contained in these short but pronounced drawdown periods, demeaning the time-series ultimately means that the majority of 12-month periods actually exhibit positive returns.  In other words, by selecting random 12-month samples, we actually expect a high frequency of those samples to have a positive return.

For example, using this process, 49.1%, 47.6%, 46.7%, 48.8% of rolling 12-month periods are positive for HML, SMB, QMJ, and BAB factors respectively.  For UMD, that number is 54.7%.  Furthermore, if you drop the worst 5% of rolling 12-month periods for UMD, the average positive period is 1.4x larger than the average negative period.  Taken together, not only are you more likely to select a positive 12-month period, but those positive periods are, on average, 1.4x larger than the negative periods you will pick, except for the rare (<5%) cases.

The process of the test was selected to incorporate the salient features of each factor.  However, in the case of momentum, it may lead to somewhat outlandish results.

Conclusion

While an evidence-based investor should be swayed by the weight of the data, the simple fact is that most factors are so well established that the majority of current practitioners will likely go their entire careers without experiencing evidence substantial enough to dismiss any of the anomalies.

Therefore, in many ways, there is a certain faith required to use them going forward. Yes, these are ideas and concepts derived from the data.  Yes, we have done our best to test their robustness out-of-sample across time, geographies, and asset classes.  Yet we must also admit that there is a non-zero probability, however small it is, that these are false positives: a fact we may not have sufficient evidence to address until several decades hence.

And so a bit of humility is warranted.  Factors will not suddenly stand up and declare themselves broken.  And those that are broken will still appear to work from time-to-time.

Indeed, the death of a factor will be more Fimbulwinter than Ragnarok: not so violent as to be the end of days, but enough to cause pain and frustration among investors.

 

Addendum

We have received a large number of inbound notes about this commentary, which fall along two primary lines of questioning.  We want to address these points.

How were the tests impacted by the Bayesian inference process?

The results of the tests within this commentary are rather astounding.  We did seek to address some of the potential flaws of the methodology we employed, but by and large we feel the overarching conclusion remains on a solid foundation.

While we only presented the results of the Bayesian inference approach in this commentary, as a check we actually tested two other approaches:

  1. A Bayesian inference approach assuming that forward returns would be a random walk with constant variance (based upon historical variance) and zero mean.
  2. Forward returns were simulated using the same bootstrap approach, but the factor was being discovered for the first time and the entire history was being evaluated for its significance.

The two tests were in an effort to isolate the effects of the different components of our test.

What we found was that while the reported figures changed, the overall magnitude did not.  In other words, the median death-date of HML may not have been 67 years, but the order of magnitude remained much the same: decades.

Stepping back, these results were somewhat a foregone conclusion.  We would not expect an effect that has been determined to be statistically significant over a hundred year period to unravel in a few years.  Furthermore, we would expect a number of scenarios that continue to bolster the statistical strength just due to randomness alone.

Why are we defending price-to-book?

The point of this commentary was not to defend price-to-book as a measure.  Rather, it was to bring up a larger point.

As a community, quantitative investors often leverage statistical significance as a defense for the way we invest.

We think that is a good thing.  We should look at the weight of the evidence.  We should be data driven.  We should try to find ideas that have proven to be robust over decades of time and when applied in different markets or with different asset classes.  We should want to find strategies that are robust to small changes in parameterization.

Many quants (ourselves included) would argue, however, that there also needs to be a why.  Why does this factor work?  Without the why, we run the risk of glorified data mining.  With the why, we can choose for ourselves whether we believe the effect will continue going forward.

Of course, there is nothing that prevents the why from being pure narrative fallacy.  Perhaps we have simply weaved a story into a pattern of facts.

With price-to-book, one might argue we have done the exact opposite.  The effect, technically, remains statistically significant and yet plenty of ink has been spilled as to why it shouldn’t work in the future.

The question we must answer, then, is, “When does statistical significance apply and when does it not?”  How can we use it as a justification in one place and completely ignore it in others?

Furthermore, if we are going to rely on hundreds of years of data to establish significance, how can we determine when something is “broken” if the statistical evidence does not support it?

Price-to-book may very well be broken.  But that is not the point of this commentary.  The point is simply that the same tools we use to establish and defend factors may prevent us from tearing them down.

 

How to Benchmark Trend-Following

This post is available as a PDF download here.

Summary

  • Benchmarking a trend-following strategy can be a difficult exercise in managing behavioral biases.
  • While the natural tendency is often to benchmark equity trend-following to all-equities (e.g. the S&P 500), this does not accurately give the strategy credit for choosing to be invested when the market is going up.
  • A 50/50 portfolio of equities and cash is generally an appropriate benchmark for long/flat trend-following strategies, both for setting expectations and for gauging current relative performance.
  • If we acknowledge that for a strategy to outperform over the long run, it must undergo shorter periods of underperformance, using this symmetric benchmark can isolate the market environments in which underperformance should be expected.
  • Diversifying risk-management approaches (e.g. pairing strategic allocation with tactical trend-following) can manage events that are unfavorable to one strategy, and benchmarking is a tool to set expectations around the level of risk management necessary in different market environments.

Any strategy that deviates from the most basic approach will inevitably be compared to a benchmark. But how do you choose an appropriate benchmark?

The complicated nature of benchmarking can be easily seen by considering something as simple as a value stock strategy.

You may pit the concentrated value manager you currently use against the more diversified value manager you used previously. At that time, you may have compared that value manager to a systematic smart-beta ETF like the iShares S&P 500 Value ETF (ticker: IVE). And if you were invested in that ETF, you might compare its performance to the S&P 500.

What prevents you from benchmarking them all to the S&P 500? Or from benchmarking the concentrated value strategy to all of the other three?

Benchmark choices are not unique and are highly dependent on what aspect of performance you wish to measure.

Benchmarking is one of the most frequently abused facets of investing. It can be extremely useful when applied in the correct manner, but most of the time, it is simply a hurdle to sticking with an investment plan.

In an ideal world, the only benchmark for an investor would be whether or not they are on track for hitting their financial goals. However, in an industry obsessed with relative performance, choosing a benchmark is a necessary exercise.

This commentary will explore some of the important considerations when choosing a benchmark for trend-following strategies.

The Purpose of a Trend-Following Benchmark

As an investment manager, our goal with benchmarking is to check that a strategy’s performance is in line with our expectations. Performance versus a benchmark can answer questions such as:

  • Is the out- or underperformance appropriate for the given market environment?
  • Is the magnitude of out- or underperformance typical?
  • How is the strategy behaving in the context of other ways of managing risk?

With long/flat trend-following strategies, the appropriate benchmark should gauge when the manager is making correct or incorrect calls in either direction.

Unfortunately, we frequently see long/flat equity trend-following strategies benchmarked to an all-equity index like the S&P 500. This is similar to the coinflip game we outlined in our previous commentary about protecting and participating with trend-following.[1]

The behavioral implications of this kind of benchmarking are summarized in the table below.

The two cases with wrong calls – to move to cash when the market goes up or remain invested when the market goes down – are appropriately labeled, as is the correct call to move to cash when the market is going down. However, when the market is going up and the strategy is invested, it is merely keeping up with its benchmark even though it is behaving just as one would want it to.

To reward the strategy in either correct call case, the benchmark should consist of allocations to both equity and cash.

A benchmark like this can provide objective answers to the questions outlined above.

Deriving a Trend-Following Benchmark

Sticking with the trend-following strategy example we outlined in our previous commentary[2], we can look at some of the consequences of choosing different benchmarks in terms of how much the trend-following strategy deviates from them over time.

The chart below shows the annualized tracking error of the strategy to static benchmarks spanning the range of equity and cash proportions.

Source: Kenneth French Data Library. Data from July 1926 – February 2018. Calculations by Newfound Research. Returns are gross of all fees, including transaction fees, taxes, and any management fees.  Returns assume the reinvestment of all distributions.  This document does not reflect the actual performance results of any Newfound investment strategy or index.  All returns are backtested and hypothetical.  Past performance is not a guarantee of future results.

The benchmark that minimizes the tracking error is a 47% allocation to equities and 53% to cash. This 0.47 is also the beta of the trend-following strategy, so we can think of this benchmark as accounting for the risk profile of the strategy over the entire 92-year period.
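As a rough illustration of this calculation, the sketch below grid-searches the static equity/cash blend that minimizes annualized tracking error and compares it to an OLS beta estimate.  It assumes monthly return Series named `strategy`, `equity`, and `cash` built from the Kenneth French data; it is a sketch of the idea, not our production code.

```python
# A minimal sketch: find the static equity/cash blend minimizing tracking
# error, and compare it to the strategy's equity beta.
import numpy as np
import pandas as pd

def min_te_equity_weight(strategy: pd.Series, equity: pd.Series, cash: pd.Series) -> float:
    """Grid-search the equity weight whose static blend minimizes annualized tracking error."""
    weights = np.linspace(0.0, 1.0, 101)
    tracking_errors = [
        (strategy - (w * equity + (1 - w) * cash)).std() * np.sqrt(12)  # annualized
        for w in weights
    ]
    return float(weights[int(np.argmin(tracking_errors))])

def equity_beta(strategy: pd.Series, equity: pd.Series, cash: pd.Series) -> float:
    """OLS beta of strategy excess returns to equity excess returns."""
    x = equity - cash
    y = strategy - cash
    return float(np.cov(y, x, ddof=1)[0, 1] / x.var(ddof=1))
```

For a long/flat strategy, these two numbers should land in the same neighborhood, roughly 0.47 in the full-sample figures above.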

But what if we took a narrower view by constraining this analysis to recent performance?

The chart below shows the equity allocation of the benchmark that minimizes the tracking error to the trend-following strategy over rolling 1-year periods.

Source: Kenneth French Data Library. Data from July 1926 – February 2018. Calculations by Newfound Research. Returns are gross of all fees, including transaction fees, taxes, and any management fees.  Returns assume the reinvestment of all distributions.  This document does not reflect the actual performance results of any Newfound investment strategy or index.  All returns are backtested and hypothetical.  Past performance is not a guarantee of future results.

A couple of features stand out here.

First, if we constrain our lookback period to one year, a time-period over which many investors exhibit anchoring bias, then the “benchmark” that we may think we will closely track – the one we are mentally tied to – might be the one that we deviate the most from over the next year.

And secondly, the approximately 50/50 benchmark calculated using the entire history of the strategy is rarely the one that minimizes tracking error over the short term.

The median equity allocation in these benchmarks is 80%, the average is 67%, and the data is highly clustered at the extremes of 100% equity and 100% cash.

Source: Kenneth French Data Library. Data from July 1926 – February 2018. Calculations by Newfound Research. Returns are gross of all fees, including transaction fees, taxes, and any management fees.  Returns assume the reinvestment of all distributions. This document does not reflect the actual performance results of any Newfound investment strategy or index.  All returns are backtested and hypothetical.  Past performance is not a guarantee of future results.
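The rolling analysis is the same exercise applied to trailing 12-month windows.  The sketch below reuses `min_te_equity_weight` and the `strategy`, `equity`, and `cash` Series from the previous sketch (again, an illustration rather than the exact calculation), assuming the three Series share a common monthly index.

```python
# Rolling version of the tracking-error minimization sketched above.
import pandas as pd

rolling_weights = pd.Series({
    strategy.index[i - 1]: min_te_equity_weight(
        strategy.iloc[i - 12:i], equity.iloc[i - 12:i], cash.iloc[i - 12:i]
    )
    for i in range(12, len(strategy) + 1)
})

# Summary statistics comparable to those quoted above (actual figures will
# depend on the data and strategy specification used):
median_weight = rolling_weights.median()
mean_weight = rolling_weights.mean()
```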

The Intuitive Trend-Following Benchmark

Is there a problem in determining a benchmark using the tracking error over the entire period?

One issue is that it is being calculated with the benefit of hindsight. If you had started a trend-following strategy back in the 1930s, you would have arrived at a different equity allocation for the benchmark based on this analysis given the available data (e.g. using data up until the end of 1935 yields an equity allocation of 37%).

To remove this reliance on having a sufficiently long backtest, our preference is to rely more on the strategy’s rules and how we would use it in a portfolio to determine our trend-following benchmarks.

For a trend-following strategy that pivots between stocks and cash, a 50/50 benchmark is a natural choice.

It is broad enough to include the assets in the trend-following strategy’s investment universe while being neutral to the calls to be long or flat.

Seeing a roughly 50/50 portfolio emerge as the answer to the tracking-error minimization problem over the entire dataset simply provides empirical evidence for its use.

One argument against using a 50/50 blend could focus on the fact that the market is generally up more frequently than it is down, at least historically. While this is true, the magnitude of down moves has often been larger than the magnitude of up moves. Since this strategy is explicitly meant as a risk management tool, accounting for both the magnitude and the frequency is prudent.
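As a quick sanity check of that frequency-versus-magnitude argument, the snippet below computes the share of positive months and the ratio of the average down-move to the average up-move, assuming `equity` is the monthly equity return series used above; the exact figures will depend on the data employed.

```python
# A quick sketch of the frequency-versus-magnitude trade-off.
# Assumption: `equity` is a pandas Series of monthly equity returns.
up_months = equity[equity > 0]
down_months = equity[equity < 0]

frequency_up = len(up_months) / len(equity)               # share of positive months
size_ratio = down_months.abs().mean() / up_months.mean()  # avg down-move vs. avg up-move size
```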

Another argument against its use could be the belief that we are entering a different market environment where history will not be an accurate guide going forward. However, given the random nature of market moves coupled with the behavioral tendencies of investors to overreact, herd, and anchor, a benchmark close to a 50/50 is likely still a fitting choice.

Setting Expectations with a Trend-Following Benchmark

Now that we have a benchmark to use, how do we use it to set our expectations?

Neglecting the historical data for the moment, from the ex-ante perspective, it is helpful to decompose a typical market cycle into four different segments and assess how we expect trend-following to behave:

  • Initial decline – Equity markets begin to sell off, and the fully invested trend-following strategy underperforms the 50/50 benchmark.
  • Prolonged drawdown – The trend-following strategy adapts to the decline and moves to cash. The trend-following strategy outperforms.
  • Initial recovery – The trend-following strategy is still in cash and lags the benchmark as prices rebound off the bottom.
  • Sustained recovery – The trend-following strategy reinvests and captures more of the upside than the benchmark.

Of course, this is a somewhat ideal scenario that rarely plays out perfectly. Whipsaw events occur as prices recover (decline) before declining (recovering) again.

But it is important to note how the level of risk relative to this 50/50 benchmark varies over time.

Contrast this with something like an all-equity strategy benchmarked to the S&P 500, where the relative risk is likely to be similar during most market environments.

Now, if we look at the historical data, we can see this borne out in the graph of the drawdowns for trend-following and the 50/50 benchmark.

Source: Kenneth French Data Library. Data from July 1926 – February 2018. Calculations by Newfound Research. Returns are gross of all fees, including transaction fees, taxes, and any management fees.  Returns assume the reinvestment of all distributions.  This document does not reflect the actual performance results of any Newfound investment strategy or index.  All returns are backtested and hypothetical.  Past performance is not a guarantee of future results.

In most prolonged and major (>20%) drawdowns, trend-following first underperforms the benchmark, then outperforms, then lags as equities improve, and then outperforms again.

Using the most recent example of the Financial Crisis, we can see the capture ratios versus the benchmark in each regime.

Source: Kenneth French Data Library. Data from October 2007 – February 2018. Calculations by Newfound Research. Returns are gross of all fees, including transaction fees, taxes, and any management fees.  Returns assume the reinvestment of all distributions.  This document does not reflect the actual performance results of any Newfound investment strategy or index.  All returns are backtested and hypothetical.  Past performance is not a guarantee of future results.

The underperformance of the trend-following strategy versus the benchmark is in line with expectations based on how the strategy is designed to work.
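For those curious how such capture ratios can be computed, the sketch below shows one common convention: cumulative strategy return divided by cumulative benchmark return within a regime.  The series names and dates are placeholders, not the exact regime breaks used in the chart.

```python
# One common convention for capture versus a benchmark within a regime.
# Assumption: `strategy` and `bench_50_50` are pandas Series of monthly returns.
import pandas as pd

def capture_ratio(strategy: pd.Series, benchmark: pd.Series, start: str, end: str) -> float:
    """Cumulative strategy return divided by cumulative benchmark return over [start, end]."""
    s = (1 + strategy.loc[start:end]).prod() - 1
    b = (1 + benchmark.loc[start:end]).prod() - 1
    return s / b

# Example with a hypothetical regime window during the Financial Crisis:
# capture_ratio(strategy, bench_50_50, "2007-10", "2009-02")
```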

Another way to use the benchmark to set expectations is to look at rolling returns historically. This gives context for the current out- or underperformance relative to the benchmark.

From this, we can see which percentile the current relative return falls into or check how many standard deviations it is away from the average level of relative performance.

Source: Kenneth French Data Library. Data from July 1926 – February 2018. Calculations by Newfound Research. Returns are gross of all fees, including transaction fees, taxes, and any management fees.  Returns assume the reinvestment of all distributions.  This document does not reflect the actual performance results of any Newfound investment strategy or index.  All returns are backtested and hypothetical.  Past performance is not a guarantee of future results.
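The sketch below illustrates this kind of context-setting, assuming monthly return Series `strategy` and `bench_50_50` (the names are placeholders): it computes the rolling 12-month relative return, the historical percentile of the latest value, and its distance from the average in standard deviations.

```python
# A sketch of the rolling relative-performance check described above.
import pandas as pd

def relative_performance_context(strategy: pd.Series, benchmark: pd.Series, window: int = 12):
    """Where the latest rolling relative return sits versus its own history."""
    def rolling_cum(series: pd.Series) -> pd.Series:
        return (1 + series).rolling(window).apply(lambda x: x.prod(), raw=True)

    relative = (rolling_cum(strategy) - rolling_cum(benchmark)).dropna()
    current = relative.iloc[-1]

    percentile = (relative <= current).mean()               # historical percentile of today's value
    z_score = (current - relative.mean()) / relative.std()  # std devs from the average
    return percentile, z_score
```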

In all this, there are a few important points to keep in mind:

  • Price moves that occur faster than the trend-following measurement can react to are one source of the largest underperformance events.
  • In a similar vein, whipsaw is a key risk of trend-following. Highly oscillatory markets will not be favorable to trend-following. In these scenarios, trend-following can underperform even fully invested equities.
  • With percentile analysis, there is always a first time for anything. Having a rich data history covering a variety of market scenarios mitigates this, but setting new percentiles, either on the low end or the high end, is always possible.
  • Sometimes a strategy is expected to lag its benchmark in a given market environment. A primary goal of benchmarking is to accurately set these expectations for the potential magnitude of relative performance and design the portfolio accordingly.

Conclusion

Benchmarking a trend-following strategy can be a difficult exercise in managing behavioral biases. With the tendency to benchmark all equity-based strategies to an all-equity index, investors often set themselves up for a let-down in a bull market with trend-following.

With benchmarking, the focus is often on lagging the benchmark by “too much.” This is what an all-equity benchmark can do to trend-following. However, the issue is symmetric: beating the benchmark by “too much” can also signal either an issue with the strategy or with the benchmark choice. This is why we would not benchmark a long/flat trend-following strategy to cash.

A 50/50 portfolio of equities and cash is generally an appropriate benchmark for long/flat trend-following strategies. This benchmark allows us to measure the strategy’s ability to allocate correctly when equities are either increasing or decreasing.

Too often, investors use benchmarking solely to see which strategy is beating the benchmark by the most. While this can be useful for very similar strategies (e.g. a set of different value managers), we must always be careful not to compare apples to oranges.

A benchmark should not conjure up an image of a dog race where the set of investment strategies are the dogs and the benchmark is the bunny out ahead, always leading the way.

We must always acknowledge that for a strategy to outperform over the long-run, it must undergo shorter periods of underperformance. Diversifying approaches can manage events that are unfavorable to one strategy, and benchmarking is a tool to set expectations around the level of risk management necessary in different market environments.

 

[1] https://blog.thinknewfound.com/2018/05/leverage-and-trend-following/

[2] https://blog.thinknewfound.com/2018/03/protect-participate-managing-drawdowns-with-trend-following/
