The Research Library of Newfound Research

Category: Weekly Commentary

Timing Trend Model Specification with Momentum

A PDF version of this post is available here.

Summary

  • Over the last several years, we have written several research notes demonstrating the potential benefits of diversifying “specification risk.”
  • Specification risk occurs when an investment strategy is overly sensitive to the outcome of a single investment process or parameter choice.
  • Adopting an ensemble approach is akin to creating a virtual fund-of-funds of stylistically similar managers, exhibiting many of the same advantages of traditional multi-manager diversification.
  • In this piece, we briefly explore whether model specification choices can be timed using momentum within the context of a naïve trend strategy.
  • We find little evidence that momentum-based parameter specification leads to meaningful or consistent improvements beyond a naively diversified approach.

Over the last several years, we’ve advocated on numerous occasions for a more holistic view of diversification: one that goes beyond just what we invest in, but also considers how those decisions are made and when they are made.

We believe that this style of thinking can be applied “all the way down” our process.  For example, how-based diversification would advocate for the inclusion of both value and momentum processes, as well as for different approaches to capturing value and momentum.

Unlike correlation-based what diversification, how-based diversification often does little for traditional portfolio risk metrics.  For example, in Is Multi-Manager Diversification Worth It? we demonstrated that within most equity categories, allocating across multiple managers does almost nothing to reduce portfolio volatility.  It does, however, have a profound impact on the dispersion of terminal wealth that is achieved, often by avoiding manager-specific tail-risks.  In other words, our certainty of achieving a given outcome may be dramatically improved by taking a multi-manager approach.

Ensemble techniques to portfolio construction can be thought of as adopting this same multi-manager approach by creating a set of virtual managers to allocate across.

In late 2018, we wrote two notes that touched upon this:  When Simplicity Met Fragility and What Do Portfolios and Teacups Have in Common?  In both studies we injected a bit of randomness into asset returns to measure the stability of trend-following strategies.  We found that highly simplistic models tended to exhibit significant deviations in results with just slightly modified inputs, suggesting that they are highly fragile.  Increasing diversification across what, how, and when axes led to a significant improvement in outcome stability.

As empirical evidence, we studied the real-time results of the popular Dual Momentum GEM strategy in our piece Fragility Case Study: Dual Momentum GEM, finding that slight deviations in model specification lead to significantly different allocation conclusions and therefore meaningfully different performance results.  This was particularly pronounced over short horizons.

Tying trend-following to option theory, we then demonstrated how an ensemble of trend following models and specifications could be used to increase outcome certainty in Tightening the Uncertain Payout of Trend-Following.

Yet while more diversification appears to make portfolios more consistent in the outcomes they achieve, empirical evidence also suggests that certain specifications can lead to superior results for prolonged periods of time.  For example, slower trend following signals appear to have performed much, much better than fast trend following signals over the last two decades.

One of the benefits of being a quant is that it is easy to create thousands of virtual managers, all of whom may follow the same style (e.g. “trend”) but implement with a different model (e.g. prior total return, price-minus-moving-average, etc) and specification (e.g. 10 month, 200 day, 13 week / 34 week cross, etc).  An ancillary benefit is that it is also easy to re-allocate capital among these virtual managers.

Given this ease, and knowing that certain specifications can go through prolonged periods of out-performance, we might ask: can we time specification choices with momentum?

Timing Trend Specification

In this research note, we will explore whether momentum signals can help us time specification choices for a simple long/flat U.S. trend equity strategy.

Using data from the Kenneth French library, our strategy will hold broad U.S. equities when the trend signal is positive and shift to the risk-free asset when trends are negative.  We will develop 1023 different strategies by employing three different models – prior total return, price-minus-moving-average, and dual-moving-average-cross-over – with lookback choices spanning from 20-to-360 days in length.
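For readers who want to experiment, the three model families can be sketched in a few lines of Python.  This is our own illustrative code – not the code used in the study – and it assumes daily closing prices arrive as a pandas Series:

```python
import numpy as np
import pandas as pd

def prior_total_return_signal(prices: pd.Series, lookback: int) -> pd.Series:
    """Trend is positive when the trailing total return over `lookback` days is positive."""
    return (prices / prices.shift(lookback) - 1) > 0

def price_minus_ma_signal(prices: pd.Series, lookback: int) -> pd.Series:
    """Trend is positive when price sits above its `lookback`-day moving average."""
    return prices > prices.rolling(lookback).mean()

def dual_ma_cross_signal(prices: pd.Series, fast: int, slow: int) -> pd.Series:
    """Trend is positive when the fast moving average sits above the slow one."""
    return prices.rolling(fast).mean() > prices.rolling(slow).mean()
```

Sweeping the lookback parameters of each family across the 20-to-360-day range is what generates the large menu of virtual managers.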

After constructing the 1023 different strategies, we will then apply a momentum model that ranks the models based upon prior returns and equally-weights our portfolio across the top 10%.  These choices are made daily and implemented with 21 overlapping portfolios to reduce the impact of rebalance timing luck.

It should be noted that because the underlying strategies are only allocating between U.S. equities and a risk-free asset, they can go through prolonged periods where they have identical returns or where more than 10% of models share the highest prior return.  In these cases, we select all models that have returns equal-to-or-greater-than the model identified at the 10th percentile.
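That tie-handling rule is easy to express in code.  The sketch below is our own illustration (the naming is ours), using the 90th percentile of prior returns as the "top 10%" cutoff:

```python
import numpy as np

def select_top_decile(prior_returns: np.ndarray) -> np.ndarray:
    """Boolean mask of selected strategies, honoring ties at the cutoff.

    Every strategy whose prior return is equal-to-or-greater-than the
    strategy at the 10th percentile from the top is selected, so more
    than 10% of strategies may be chosen when returns are tied."""
    cutoff = np.quantile(prior_returns, 0.90)
    return prior_returns >= cutoff
```

The portfolio then equally weights the strategies in the mask; when all 1023 strategies share the same prior return, the mask selects all of them and the result collapses to the naïve diversified approach.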

Before comparing performance results, we think it is worthwhile to take a quick look under the hood to see whether the momentum-based approach is actually creating meaningful tilts in specification selection.  Below we plot both aggregate model and lookback weights for the 126-day momentum strategy.

Source: Kenneth French Data Library. Calculations by Newfound Research.

We can see that while the model selection remains largely balanced, with the exception of a few periods, the lookback horizon selection is far more volatile.  On average, the strategy preferred intermediate-to-long-term signals (i.e. 181-to-360 day), but we can see intermittent periods where short-term models were favored.

Did this extra effort generate value, though?  Below we plot the ratio of the momentum strategies’ equity curves versus the naïve diversified approach.

We see little consistency in relative performance and four of the five strategies end up flat-to-worse.  Only the 252-day momentum strategy out-performs by the end of the testing period and this is only due to a stretch of performance from 1950-1964.  In fact, since 1965 the relative performance of the 252-day momentum model has been negative versus the naively diversified approach.

Source: Kenneth French Data Library. Calculations by Newfound Research. Past performance is not an indicator of future results. Performance is backtested and hypothetical. Performance figures are gross of all fees, including, but not limited to, manager fees, transaction costs, and taxes.  Performance assumes the reinvestment of all distributions.

This analysis suggests that naïve, momentum-based specification selection does not appear to have much merit against a diversified approach for our simple trend equity strategy.

The Potential Benefits of Virtual Rebalancing

One potential benefit of an ensemble approach is that rebalancing across virtual managers can generate growth under certain market conditions.  Similar to a strategically rebalanced portfolio, we find that when returns across virtual managers are expected to be similar, consistent rebalancing can harvest excess returns above a buy-and-hold approach.

The trade-off, of course, is that when there is autocorrelation in specification performance, rebalancing creates a drag.  However, given that the evidence above suggests that relative performance between specifications is not persistent, we might expect that continuously rebalancing across our ensemble of virtual managers may actually allow us to harvest returns above and beyond what might be possible with just selecting an individual manager.
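A toy example – ours, not drawn from the study – makes the rebalancing premium concrete: two virtual managers with identical terminal wealth but offsetting period-by-period returns.

```python
import numpy as np

up, dn = 1.25, 0.80  # each manager alternates +25% and -20%; 1.25 * 0.80 = 1.0
a = np.array([up, dn] * 10)  # manager A: up, down, up, down, ...
b = np.array([dn, up] * 10)  # manager B: the mirror image

# Buy-and-hold a 50/50 split: each manager ends where it started.
buy_and_hold = 0.5 * a.prod() + 0.5 * b.prod()

# Rebalance to 50/50 every period: earn the average gross return each period.
rebalanced = ((a + b) / 2).prod()

assert np.isclose(buy_and_hold, 1.0)
assert rebalanced > buy_and_hold  # each period compounds at 1.025
```

Neither manager grows wealth on its own, yet the continuously rebalanced mix compounds at 2.5% per period – exactly the kind of "harvested" return the ensemble approach can capture when no specification persistently dominates.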

Source: Kenneth French Data Library. Calculations by Newfound Research. Past performance is not an indicator of future results. Performance is backtested and hypothetical. Performance figures are gross of all fees, including, but not limited to, manager fees, transaction costs, and taxes.  Performance assumes the reinvestment of all distributions.

Conclusion

In this study, we explored whether we could time model specification choices in a simple trend equity strategy using momentum signals.

Testing different lookback horizons of 21-through-378 days, we found little evidence of meaningful persistence in the returns of different model specifications.  In fact, four of the five momentum models we studied actually under-performed a naïve, diversified approach.  The one model that did out-perform only seemed to do so due to strong performance realized over the 1950-1964 period, and it has actually under-performed on a relative basis ever since.

While this evidence suggests that timing specification with momentum may not be a fruitful approach, it does suggest that the lack of return persistence may benefit diversification for a second reason: rebalancing.  Indeed, barring any belief that one specification would necessarily do better than another, consistently re-pooling and distributing resources through rebalancing may actually lead to the growth-optimal solution.1 This potentially implies an even higher hurdle rate for specification-timers to overcome.

 


 

Re-specifying the Fama French 3-Factor Model

This post is available as a PDF download here.

Summary

  • The Fama French three-factor model provides a powerful tool for assessing exposures to equity risk premia in investment strategies.
  • In this note, we explore alternative specifications of the value (HML) and size (SMB) factors using price-to-earnings, price-to-cash flow, and dividend yield.
  • Running factor regressions using these alternate specifications on a suite of value ETFs and Newfound’s Systematic Value strategy leads to a wide array of results, both numerically and directionally.
  • While many investors consider the uncertainty of the parameter estimates from the regression using the three-factor model, most do not consider the uncertainty that comes from the assumption of how you construct the equity factors in the first place.
  • Understanding this additional uncertainty is crucial for managers and investors, who must consider what risks they are trying to measure and control when using tools like factor regression and make sure their assumptions align with their goals.

In their 1992 paper, The Cross-Section of Expected Stock Returns, Eugene Fama and Kenneth French outlined their three-factor model to explain stock returns.

While the Capital Asset Pricing Model (CAPM) describes asset returns only in relation to their exposure to the market’s excess return through the stock’s beta – identifying any return beyond that as alpha – Fama and French’s three-factor model reattributed some of that supposed alpha to exposures to a value factor (high-minus-low, or HML) based on returns stratified by price-to-book ratios and a size factor (small-minus-big, or SMB) based on returns stratified by market capitalization.

This gave investors a tool to judge investment strategies based on the loadings to these risk factors. A manager with a seemingly high alpha may have simply been investing in value and small-cap stocks historically.

The notion of compensated risk premia has also opened the floodgates to many additional factors from other researchers (such as momentum, quality, low beta, etc.) and even two more factors from Fama and French (investment and profitability).

A richer factor universe opens up a wide realm of possibilities for analysis and attribution. However, setting further developments aside and going back to the original three-factor model, we would be remiss if we didn’t dive a bit further into its specification.

At the highest level, we agree with treating “value” and “size” as risk factors, but there is more than one way to skin a factor.

What is “value”?

Fama and French define it using the price-to-book ratio of a stock. This seems legitimate for a broad swath of stocks, especially those that are very capital intensive – such as energy, manufacturing, and financial firms – but what about industries that have structurally lower book values and may have other potential price drivers? For example, a technology company might have significant intangible intellectual property and some utility companies might employ leverage, which decreases their book value substantially.

To determine value in these sectors, we might utilize ratios that account for sales, dividends, or earnings. But then if we analyzed these strategies using the Fama French three-factor model as it is specified, we might misjudge the loading on the value factor.

“Size” seems more straightforward. Companies with low market capitalizations are small. However, when we consider how the size factor is defined based on the value factor, there might even be some differences in SMB using different value metrics.

In this commentary, we will explore what happens when we alter the definition of value for the value factor (and hence the size factor) and see how this affects factor regressions of a sample of value ETFs along with our Systematic Value strategy.

HML Factor Definitions

In the standard version of the Fama French 3-factor model, HML is constructed as a self-financing long/short portfolio using a 2×3 sort on size and value. The investment universe is split in half based on market capitalization and into three parts (30%/40%/30%) based on valuation – in the base case, the price-to-book ratio.
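The 2×3 sort can be sketched as follows.  This is our own simplified construction – the column names (`mktcap`, `value`, `ret`) are hypothetical, and the actual Fama French methodology includes additional details (e.g. NYSE breakpoints, value-weighting) omitted here:

```python
import pandas as pd

def hml_return(df: pd.DataFrame) -> float:
    """HML = 1/2 (Small Value + Big Value) - 1/2 (Small Growth + Big Growth).

    Expects one row per stock with a size proxy ('mktcap'), a valuation
    metric ('value', higher = cheaper), and next-period returns ('ret')."""
    big = df["mktcap"] > df["mktcap"].median()       # split in half on size
    lo, hi = df["value"].quantile([0.30, 0.70])      # 30%/40%/30% value sort
    value = df["value"] >= hi                        # cheapest 30%
    growth = df["value"] <= lo                       # most expensive 30%
    sv = df.loc[~big & value, "ret"].mean()          # Small Value
    bv = df.loc[big & value, "ret"].mean()           # Big Value
    sg = df.loc[~big & growth, "ret"].mean()         # Small Growth
    bg = df.loc[big & growth, "ret"].mean()          # Big Growth
    return 0.5 * (sv + bv) - 0.5 * (sg + bg)
```

Swapping the `value` column from book-to-price to earnings yield, cash-flow yield, or dividend yield is all it takes to generate the alternate HML specifications below.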

Using additional data from the Kenneth French Data Library and the same methodology, we will construct HML factors using sorts based on size and:

  • Price-to-earnings ratios
  • Price-to-cash flow ratios
  • Dividend yields

The common inception date for all the factors is June 1951.

The chart below shows the growth of each of the four value factor portfolios.

Source: Kenneth French Data Library. Calculations by Newfound Research. Past performance is not an indicator of future results. Performance is backtested and hypothetical. Performance figures are gross of all fees, including, but not limited to, manager fees, transaction costs, and taxes.  Performance assumes the reinvestment of all distributions. 

Over the entire time period – and for many shorter time horizons – the standard HML factor using price-to-book does not even have the most attractive returns. Price-to-earnings and price-to-cash flow often beat it out.

On the other hand, the HML factor formed using dividend yields doesn’t look so hot.

One of the reasons behind this is that the small, low dividend yield companies performed much better than the small companies that were ranked poorly by the other value factors. We can see this effect borne out in the SMB chart for each factor, as the SMB factor for dividend yield performed the best.

(Recall that we mentioned previously how the Fama French way of defining the size factor is dependent on which value metric we use.)

Source: Kenneth French Data Library. Calculations by Newfound Research. Past performance is not an indicator of future results. Performance is backtested and hypothetical. Performance figures are gross of all fees, including, but not limited to, manager fees, transaction costs, and taxes.  Performance assumes the reinvestment of all distributions.

Looking at the statistical significance of each factor through its t-statistic, we can see that Price-to-Earnings and Price-to-Cash Flow yielded higher significance for the HML factor than Price-to-Book. And those two along with Dividend Yield all eclipsed the Price-to-Book construction of the SMB factor.

T-Statistics for HML and SMB Using Various Value Metrics

                  Price-to-Book   Dividend Yield   Price-to-Earnings   Price-to-Cash Flow
HML               2.9             0.0              3.7                 3.4
SMB               1.0             2.4              1.6                 1.9
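These t-statistics can be computed as the mean monthly factor return divided by its standard error; a minimal sketch:

```python
import numpy as np

def factor_tstat(monthly_returns: np.ndarray) -> float:
    """t-statistic of the hypothesis that the factor's mean return is zero."""
    r = np.asarray(monthly_returns, dtype=float)
    standard_error = r.std(ddof=1) / np.sqrt(len(r))
    return r.mean() / standard_error
```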

Assuming that we do consider all metrics to be appropriate ways to assess the value of companies, even if possibly under different circumstances, how do different variants of the Fama French three-factor model change for each scenario with regression analysis?

The Impact on Factor Regressions

Using a sample of U.S. value ETFs and our Systematic Value strategy, we plot the loadings for the different versions of HML. The regressions are carried out using the trailing three years of monthly data ending on October 2019.
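A regression of this form can be sketched with ordinary least squares.  The code below is our own illustration – not our production attribution code – using numpy’s least-squares solver on the excess-return and factor series:

```python
import numpy as np

def three_factor_loadings(excess_ret, mkt, hml, smb):
    """OLS fit of excess returns on market, HML, and SMB factor returns.

    Returns (alpha, beta_mkt, beta_hml, beta_smb)."""
    X = np.column_stack([np.ones_like(mkt), mkt, hml, smb])
    coefs, *_ = np.linalg.lstsq(X, excess_ret, rcond=None)
    return coefs
```

Re-running this fit with each alternate HML/SMB pair – while holding the strategy’s returns fixed – is exactly how the dispersion in loadings below is generated.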

Source: Tiingo, Kenneth French Data Library. Calculations by Newfound Research. Past performance is not an indicator of future results. Returns represent live strategy results. Returns for the Newfound Systematic Value strategy are gross of all management fees and taxes, but net of execution fees.  Returns for ETFs included in study are gross of any management fees, but net of underlying ETF expense ratios.  Returns assume the reinvestment of all distributions.

For each different specification of HML, the differences in the loading between investments is generally directionally consistent. For instance, DVP has higher loadings than FTA for all forms of HML.

However, sometimes this is not the case.

VLUE looks more attractive than VTV based on price-to-cash flow but not dividend yield. FTA is roughly equivalent to QVAL in terms of loading when price-to-book is used for HML, but it varies wildly when other metrics are used.

The tightest range for the four models for any of the investments is 0.09 (PWV) and the widest is 0.52 (QVAL). When we factor in that these estimates each have their own uncertainty, distinguishing which investment has the better value characteristic is tough. Decisions are commonly made on much smaller differences.

We see similar dispersion in the SMB loadings for the various constructions.

Source: Tiingo, Kenneth French Data Library. Calculations by Newfound Research. Past performance is not an indicator of future results. Returns represent live strategy results. Returns for the Newfound Systematic Value strategy are gross of all management fees and taxes, but net of execution fees.  Returns for ETFs included in study are gross of any management fees, but net of underlying ETF expense ratios.  Returns assume the reinvestment of all distributions.

Many of these values are not statistically significant from zero, so someone who has a thorough understanding of uncertainty in regression would likely not draw a strict comparison between most of these investments.

However, one implication of this is that if a metric is chosen that does ascribe significant size exposure to one of these investments, an investor may make a decision based on not wanting to bear that risk in what they desire to be a large-cap investment.

Can We Blend Our Way Out?

One way we often mitigate model specification risk is by blending a number of models together into one.

By averaging all of our HML and SMB factors, respectively, we arrive at blended factors for the three-factor model.
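Mechanically, the blend is just an equal-weight average of the competing factor return series, computed before the regression is run; a minimal sketch (naming ours):

```python
import numpy as np

def blend_factors(*factor_series: np.ndarray) -> np.ndarray:
    """Equal-weight average of several versions of the same factor's returns."""
    return np.mean(np.vstack(factor_series), axis=0)
```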

Source: Tiingo, Kenneth French Data Library. Calculations by Newfound Research. Past performance is not an indicator of future results. Returns represent live strategy results. Returns for the Newfound Systematic Value strategy are gross of all management fees and taxes, but net of execution fees.  Returns for ETFs included in study are gross of any management fees, but net of underlying ETF expense ratios.  Returns assume the reinvestment of all distributions.

All of the investments now have HML loadings in the top of their range of the individual model loadings, and many (FTA, PWV, RPV, SPVU, VTV, and the Systematic Value strategy) have loadings to the blended HML factor that exceed the loadings for all of the individual models.

The opposite is the case for the blended SMB factor: the loadings are in the low-end of the range of the individual model loadings.

Source: Tiingo, Kenneth French Data Library. Calculations by Newfound Research. Past performance is not an indicator of future results. Returns represent live strategy results. Returns for the Newfound Systematic Value strategy are gross of all management fees and taxes, but net of execution fees.  Returns for ETFs included in study are gross of any management fees, but net of underlying ETF expense ratios.  Returns assume the reinvestment of all distributions.

So which is the correct method?

That’s a good question.

For some investments, it is situation-specific. If a strategy only uses price-to-earnings as its value metric, then putting it up against a three-factor model using the P/E ratio to construct the factors is appropriate for judging the efficacy of harvesting that factor.

However, if we are concerned more generally about the abstract concept of “value”, then the blended model may be the best way to go.

Conclusion

In this study, we have explored the impact of model specification for the value and size factor in the Fama French three-factor model.

We empirically tested this impact by designing a variety of HML and SMB factors based on three additional value metrics (price-to-earnings, price-to-cash flow, and dividend yield). These factors were constructed using the same rules as for the standard method using price-to-book ratios.

Each factor, with the possible exception of the dividend yield-based HML, has performance that could make it a legitimate specification for the three-factor model over the time that common data is available.

Running factor regressions using these alternate specifications on a suite of value ETFs and Newfound’s Systematic Value strategy led to a wide array of results, both numerically and directionally.

While many investors consider the uncertainty of the parameter estimates from the regression using the three-factor model, most do not consider the uncertainty that comes from the assumption of how you construct the equity factors in the first place.

Understanding the additional uncertainty is crucial for decision-making. Managers and investors alike must consider what risks they are trying to measure and control by using tools like factor regression and make sure their assumptions align with their goals.

“Value” is in the eye of the beholder, and blindly applying two different value factors may leave us seeing double: two conflicting conclusions about the same investment.

Diversification: More Than “What”


 

The Dumb (Timing) Luck of Smart Beta

This post is available as a PDF download here.

Summary

  • In past research notes we have explored the impact of rebalance timing luck on strategic and tactical portfolios, even using our own Systematic Value methodology as a case study.
  • In this note, we generate empirical timing luck estimates for a variety of specifications for simplified value, momentum, low volatility, and quality style portfolios.
  • Relative results align nicely with intuition: higher concentration and less frequent rebalancing leads to increasing levels of realized timing luck.
  • For more reasonable specifications – e.g. 100 stock portfolios rebalanced semi-annually – timing luck ranges between 100 and 400 basis points depending upon the style under investigation, suggesting a significant risk of performance dispersion due only to when a portfolio is rebalanced and nothing else.
  • The large magnitude of timing luck suggests that any conclusions drawn from performance comparisons between smart beta ETFs or against a standard style index may be spurious.

We’ve written about the concept of rebalance timing luck a lot.  It’s a cowbell we’ve been beating for over half a decade, with our first article going back to August 7th, 2013.

As a reminder, rebalance timing luck is the performance dispersion that arises from the choice of a particular rebalance date (e.g. semi-annual rebalances that occur in June and December versus March and September).

We’ve empirically explored the impact of rebalance timing luck as it relates to strategic asset allocation, tactical asset allocation, and even used our own Systematic Value strategy as a case study for smart beta.  All of our results suggest that it has a highly non-trivial impact upon performance.

This summer we published a paper in the Journal of Index Investing that proposed a simple solution to the timing luck problem: diversification.  If, for example, we believe that our momentum portfolio should be rebalanced every quarter – perhaps as an optimal balance of cost and signal freshness – then we proposed splitting our capital across the three portfolios that spanned different three-month rebalance periods (e.g. JAN-APR-JUL-OCT, FEB-MAY-AUG-NOV, MAR-JUN-SEP-DEC).  This solution is referred to either as “tranching” or “overlapping portfolios.”
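The overlapping-portfolios idea can be sketched as follows.  This is our own simplified illustration: `weights_fn(month)` is a hypothetical function returning the target weights the strategy would set if it rebalanced in that month.

```python
import numpy as np

def tranched_weights(weights_fn, month: int, frequency: int = 3) -> np.ndarray:
    """Average the target weights across `frequency` offset rebalance schedules.

    Schedule k (k = 0..frequency-1) last rebalanced on the most recent month
    congruent to k modulo `frequency`, so each tranche holds weights of a
    different vintage and no single rebalance date dominates."""
    tranches = []
    for k in range(frequency):
        last_rebalance = month - ((month - k) % frequency)
        tranches.append(weights_fn(last_rebalance))
    return np.mean(tranches, axis=0)
```

For a quarterly strategy, this is precisely the equal split across the JAN-APR-JUL-OCT, FEB-MAY-AUG-NOV, and MAR-JUN-SEP-DEC schedules described above.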

The paper also derived a formula for estimating timing luck ex-ante, with a simplified representation of:

L = (T / F) × S

Where L is the timing luck measure, T is the turnover rate of the strategy, F is how many times per year the strategy rebalances, and S is the volatility of a long/short portfolio that captures the difference of what a strategy is currently invested in versus what it could be invested in if the portfolio was reconstructed at that point in time.

Without numbers, this equation still informs some general conclusions:

  • Higher turnover strategies have higher timing luck.
  • Strategies that rebalance more frequently have lower timing luck.
  • Strategies with a less constrained universe will have higher timing luck.
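These bullets follow directly if timing luck scales as L = (T / F) × S, one simplified form consistent with the variable definitions above (our reconstruction; the exact expression in the paper carries additional refinements).  The numbers below are hypothetical, chosen only to illustrate the scaling:

```python
def timing_luck(turnover: float, rebalances_per_year: float, spread_vol: float) -> float:
    """Ex-ante timing luck estimate: L = (T / F) * S.

    Rises with turnover (T) and opportunity-set volatility (S);
    falls with rebalance frequency (F)."""
    return turnover / rebalances_per_year * spread_vol

base = timing_luck(0.5, 2, 0.04)             # 50% turnover, semi-annual, 4% spread vol
assert timing_luck(1.0, 2, 0.04) == 2 * base  # doubling turnover doubles timing luck
assert timing_luck(0.5, 4, 0.04) == base / 2  # doubling frequency halves it
```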

Bullet points 1 and 3 may seem similar but capture subtly different effects.  This is likely best illustrated with two examples on different extremes.  First consider a very high turnover strategy that trades within a universe of highly correlated securities.  Now consider a very low turnover strategy that is either 100% long or 100% short U.S. equities.  In the first case, the highly correlated nature of the universe means that differences in specific holdings may not matter as much, whereas in the second case the perfect inverse correlation means that small portfolio differences lead to meaningfully different performance.

L, in and of itself, is a bit tricky to interpret, but effectively attempts to capture the potential dispersion in performance between a particular rebalance implementation choice (e.g. JAN-APR-JUL-OCT) versus a timing-luck-neutral benchmark.

After half a decade, you’d think we’ve spilled enough ink on this subject.

But given that just about every single major index still does not address this issue, and since our passion for the subject clearly verges on fever pitch, here comes some more cowbell.

Equity Style Portfolio Definitions

In this note, we will explore timing luck as it applies to four simplified smart beta portfolios based upon holdings of the S&P 500 from 2000-2019:

  • Value: Sort on earnings yield.
  • Momentum: Sort on prior 12-1 month returns.
  • Low Volatility: Sort on realized 12-month volatility.
  • Quality: Sort on average rank-score of ROE, accruals ratio, and leverage ratio.

Quality is a bit more complicated only because the quality factor has far less consistency in accepted definition.  Therefore, we adopted the signals utilized by the S&P 500 Quality Index.

For each of these equity styles, we construct portfolios that vary across two dimensions:

  • Number of Holdings: 50, 100, 150, 200, 250, 300, 350, and 400.
  • Frequency of Rebalance: Quarterly, Semi-Annually, and Annually.

For the different rebalance frequencies, we also generate portfolios that represent each possible rebalance variation of that mix.  For example, Momentum portfolios with 50 stocks that rebalance annually have 12 possible variations: a January rebalance, February rebalance, et cetera.  Similarly, there are 12 possible variations of Momentum portfolios with 100 stocks that rebalance annually.

By explicitly calculating the rebalance date variations of each Style x Holding x Frequency combination, we can construct an overlapping portfolios solution.  To estimate empirical annualized timing luck, we calculate the standard deviation of monthly return dispersion between the different rebalance date variations of the overlapping portfolio solution and annualize the result.
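Our interpretation of that calculation, as a sketch: given a months-by-variations array of monthly returns, treat the overlapping-portfolio solution as the equal-weight average of the variations and measure dispersion around it.

```python
import numpy as np

def empirical_timing_luck(variation_returns: np.ndarray) -> float:
    """Annualized std. dev. of monthly dispersion of each rebalance-date
    variation around the overlapping-portfolio (equal-weight) solution.

    `variation_returns` is shaped (months, variations)."""
    overlap = variation_returns.mean(axis=1, keepdims=True)
    dispersion = variation_returns - overlap
    return float(dispersion.std(ddof=1) * np.sqrt(12))
```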

Empirical Timing Luck Results

Before looking at the results plotted below, we would encourage readers to hypothesize as to what they expect to see.  Perhaps not in absolute magnitude, but at least in relative magnitude.

For example, based upon our understanding of the variables affecting timing luck, would we expect an annually rebalanced portfolio to have more or less timing luck than a quarterly rebalanced one?

Should a more concentrated portfolio have more or less timing luck than a less concentrated variation?

Which factor has the greatest risk of exhibiting timing luck?

Source: Sharadar.  Calculations by Newfound Research.

To create a sense of scale across the styles, below we isolate the results for semi-annual rebalancing for each style and plot it.

Source: Sharadar.  Calculations by Newfound Research.

In relative terms, there is no great surprise in these results:

  • More frequent rebalancing limits the risk of portfolios changing significantly between rebalance dates, thereby decreasing the impact of timing luck.
  • More concentrated portfolios exhibit larger timing luck.
  • Faster-moving signals (e.g. momentum) tend to exhibit more timing luck than more stable, slower-moving signals (e.g. low volatility).

What is perhaps the most surprising is the sheer magnitude of timing luck.  Consider that the S&P 500 Enhanced Value, Momentum, Low Volatility, and Quality portfolios all hold 100 securities and are rebalanced semi-annually.  Our study suggests that timing luck for such approaches may be as large as 2.5%, 4.4%, 1.1%, and 2.0% respectively.

But what does that really mean?  Consider the realized performance dispersion of different rebalance date variations of a Momentum portfolio that holds the top 100 securities in equal weight and is rebalanced on a semi-annual basis.

Source: Sharadar.  Calculations by Newfound Research.  Past performance is not an indicator of future results. Performance is backtested and hypothetical. Performance figures are gross of all fees, including, but not limited to, manager fees, transaction costs, and taxes.  Performance assumes the reinvestment of all distributions. 

The 4.4% estimate of annualized timing luck is a measure of dispersion between each underlying variation and the overlapping portfolio solution.  If we isolate two sub-portfolios and calculate rolling 12-month performance dispersion, we can see that the difference can be far larger, as one might exhibit positive timing luck while the other exhibits negative timing luck.  Below we do precisely this for the APR-OCT and MAY-NOV rebalance variations.

Source: Sharadar.  Calculations by Newfound Research.  Past performance is not an indicator of future results. Performance is backtested and hypothetical. Performance figures are gross of all fees, including, but not limited to, manager fees, transaction costs, and taxes.  Performance assumes the reinvestment of all distributions. 

In fact, since these variations are identical in every which way except for the date on which they rebalance, a portfolio that is long the APR-OCT variation and short the MAY-NOV variation would explicitly capture the effects of rebalance timing luck.  If we assume the rebalance timing luck realized by these two portfolios is independent (which our research suggests it is), then the volatility of this long/short is approximately the rebalance timing luck estimated above scaled by the square-root of two.

Derivation: For variations vi and vj and overlapping-portfolio solution V, assuming the timing luck of each variation relative to V is independent:

Var(vi − vj) = Var((vi − V) − (vj − V)) = Var(vi − V) + Var(vj − V) = 2L²

and therefore the volatility of the long/short portfolio is √2 × L.
Thus, if we are comparing two identically-managed 100-stock momentum portfolios that rebalance semi-annually, our 95% confidence interval for performance dispersion due to timing luck is +/- 12.4% (2 x SQRT(2) x 4.4%).

Even for more diversified, lower turnover portfolios, this remains an issue.  Consider a 400-stock low-volatility portfolio that is rebalanced quarterly.  Empirical timing luck is still 0.5%, suggesting a 95% confidence interval of 1.4%.
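The back-of-the-envelope arithmetic above can be sketched as follows.  This is a minimal illustration of the square-root-of-two scaling, not of the estimation procedure itself; the function name is our own.

```python
import math

def dispersion_ci(timing_luck: float, z: float = 2.0) -> float:
    """Approximate 95% confidence band (+/-) for annual performance
    dispersion between two independent rebalance variations.

    Each variation's tracking error to the overlapping portfolio is
    `timing_luck`; independence implies the long/short volatility is
    timing_luck * sqrt(2), so the band is z * sqrt(2) * timing_luck."""
    return z * math.sqrt(2) * timing_luck

# 100-stock, semi-annually rebalanced momentum: 4.4% timing luck
print(f"+/- {dispersion_ci(0.044):.1%}")  # +/- 12.4%

# 400-stock, quarterly rebalanced low volatility: 0.5% timing luck
print(f"+/- {dispersion_ci(0.005):.1%}")  # +/- 1.4%
```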

S&P 500 Style Index Examples

One critique of the above analysis is that it is purely hypothetical: the portfolios studied above aren’t really those offered in the market today.

We take our analysis one step further and replicate (to the best of our ability) the S&P 500 Enhanced Value, Momentum, Low Volatility, and Quality indices.  We then construct the different rebalance-schedule variations of each.  Note that the S&P 500 Low Volatility index rebalances quarterly, so there are only three possible rebalance variations to compute.

Source: Sharadar.  Calculations by Newfound Research.  Past performance is not an indicator of future results. Performance is backtested and hypothetical. Performance figures are gross of all fees, including, but not limited to, manager fees, transaction costs, and taxes.  Performance assumes the reinvestment of all distributions. 

We see meaningful dispersion in terminal wealth levels, even for the S&P 500 Low Volatility index, which at first glance appears in the graph to be barely affected by timing luck.

                  Minimum Terminal Wealth    Maximum Terminal Wealth
Enhanced Value    $4.45                      $5.45
Momentum          $3.07                      $4.99
Low Volatility    $6.16                      $6.41
Quality           $4.19                      $5.25

We should further note that there does not appear to be one set of rebalance dates that does significantly better than the others.  For Value, FEB-AUG looks best while JUN-DEC looks the worst; for Momentum it’s almost precisely the opposite.

Furthermore, we can see that even seemingly closely related rebalance schedules can have significant dispersion: consider MAY-NOV and JUN-DEC for Momentum.  Here is a real doozy of a statistic: at one point, the MAY-NOV implementation of Momentum is down 50.3% while the JUN-DEC variation is down just 13.8%.

These differences are even more evident if we plot the annual returns for each strategy’s rebalance variations.   Note, in particular, the extreme differences in Value in 2009, Momentum in 2017, and Quality in 2003.

Source: Sharadar.  Calculations by Newfound Research.  Past performance is not an indicator of future results. Performance is backtested and hypothetical. Performance figures are gross of all fees, including, but not limited to, manager fees, transaction costs, and taxes.  Performance assumes the reinvestment of all distributions. 

Conclusion

In this study, we have explored the impact of rebalance timing luck on the results of smart beta / equity style portfolios.

We empirically tested this impact by designing a variety of portfolio specifications for four different equity styles (Value, Momentum, Low Volatility, and Quality).  The specifications varied by concentration as well as rebalance frequency.  We then constructed all possible rebalance variations of each specification to calculate the realized impact of rebalance timing luck over the test period (2000-2019).

In line with our mathematical model, we generally find that those strategies with higher turnover have higher timing luck and those that rebalance more frequently have less timing luck.

The sheer magnitude of timing luck, however, may come as a surprise to many.  For reasonably concentrated portfolios (100 stocks) with semi-annual rebalance frequencies (common in many index definitions), annual timing luck ranged from 1% to 4%, which translated to a 95% confidence interval in annual performance dispersion of about +/-1.5% to +/-12.5%.

The sheer magnitude of timing luck calls into question our ability to draw meaningful relative performance conclusions between two strategies.

We then explored more concrete examples, replicating the S&P 500 Enhanced Value, Momentum, Low Volatility, and Quality indices.  In line with expectations, we find that Momentum (a high turnover strategy) exhibits significantly higher realized timing luck than a lower turnover strategy rebalanced more frequently (i.e. Low Volatility).

For these four indices, the amount of rebalance timing luck leads to a staggering level of dispersion in realized terminal wealth.

“But Corey,” you say, “this only has to do with systematic factor managers, right?”

Consider that most of the major equity style benchmarks are managed with annual or semi-annual rebalance schedules.  Good luck to anyone trying to identify manager skill when your benchmark might be realizing hundreds of basis points of positive or negative performance luck a year.

 

The Limit of Factor Timing

This post is available as a PDF download here.

Summary

  • We have shown previously that it is possible to time factors using value and momentum but that the benefit is not large.
  • By constructing a simple model for factor timing, we examine what accuracy would be required to do better than a momentum-based timing strategy.
  • While the accuracy required is not high, finding the system that achieves that accuracy may be difficult.
  • For investors focused on managing the risks of underperformance – both in magnitude and frequency – a diversified factor portfolio may be the best choice.
  • Investors seeking outperformance will have to bear more concentration risk and may be open to more model risk as they forego the diversification among factors.

A few years ago, we began researching factor timing – moving among value, momentum, low volatility, quality, size etc. – with the hope of earning returns in excess not only of the equity market, but also of buy-and-hold factor strategies.

To time the factors, our natural first course of action was to exploit the behavioral biases that may create the factors themselves. We examined value and momentum across the factors and used these metrics to allocate to factors that we expected to outperform in the future.

The results were positive. However, taking into account transaction costs led to the conclusion that investors were likely better off simply holding a diversified factor portfolio.

We then looked at ways to time the factors using the business cycle.

The results in this case were even less convincing and were a bit too similar to a data-mined optimal solution to instill much faith going forward.

But this evidence does not necessarily remove the temptation to take a stab at timing the factors, especially since explicit transaction costs have been slashed for many investors accessing long-only factors through ETFs.

Source: Kenneth French Data Library, AQR. Calculations by Newfound Research. Past performance is not an indicator of future results. Performance is backtested and hypothetical. Performance figures are gross of all fees, including, but not limited to, manager fees, transaction costs, and taxes.  Performance assumes the reinvestment of all distributions. 

After all, there is a lot to gain by choosing the right factors. For example, in the first 9 months of 2019, the spread between the best (Quality) and worst (Value) performing factors was nearly 1,000 basis points (“bps”). One month prior, that spread had been double!

In this research note, we will move away from devising a systematic approach to timing the factors (as AQR asserts, this is deceptively difficult) and instead focus on what a given method would have to overcome to achieve consistent outperformance.

Benchmarking Factor Timing

With all equity factor strategies, the goal is usually to outperform the market-cap weighted equity benchmark.

Since all factor portfolios can be thought of as a market cap weighted benchmark plus a long/short component that captures the isolated factor performance, we can focus our study solely on the long/short portfolio.

Using the common definitions of the factors (from Kenneth French and AQR), we can look at periods over which these self-financing factor portfolios generate positive returns to see if overlaying them on a market-cap benchmark would have added value over different lengths of time.1

We will also include the performance of an equally weighted basket of the four factors (“Blend”).

Source: Kenneth French Data Library, AQR. Calculations by Newfound Research. Past performance is not an indicator of future results. Performance is backtested and hypothetical. Performance figures are gross of all fees, including, but not limited to, manager fees, transaction costs, and taxes.  Performance assumes the reinvestment of all distributions. Data from July 1957 – September 2019.

The persistence of factor outperformance over one-month periods is transient. If the goal is to outperform the most often, then the blended portfolio satisfies this requirement, and any timing strategy would have to be accurate enough to overcome this already existing spread.
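The hit-rate comparison between the individual factors and the blend can be sketched as follows.  The factor names and the randomly generated sample data are hypothetical placeholders for the French/AQR long/short series.

```python
import numpy as np
import pandas as pd

def hit_rates(factor_returns: pd.DataFrame, window: int = 1) -> pd.Series:
    """Fraction of rolling `window`-month periods with a positive
    return, for each factor and an equal-weight blend."""
    df = factor_returns.copy()
    df["Blend"] = factor_returns.mean(axis=1)  # naive equal weighting
    rolled = (1 + df).rolling(window).apply(np.prod, raw=True) - 1
    return (rolled.dropna() > 0).mean()

# Hypothetical monthly long/short factor returns
rng = np.random.default_rng(0)
factors = pd.DataFrame(
    rng.normal(0.002, 0.02, (240, 4)),
    columns=["Value", "Momentum", "Quality", "Size"],
)
print(hit_rates(factors, window=12))
```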

Source: Kenneth French Data Library, AQR. Calculations by Newfound Research. Past performance is not an indicator of future results. Performance is backtested and hypothetical. Performance figures are gross of all fees, including, but not limited to, manager fees, transaction costs, and taxes.  Performance assumes the reinvestment of all distributions. Data from July 1957 – September 2019.

The results for the blended portfolio are so much better than those of the stand-alone factors because the factors exhibit much lower correlations to one another than most asset classes do, allowing even naïve diversification to add tremendous value.

The blended portfolio also cuts downside risk in terms of returns. If the timing strategy is wrong, and chooses, for example, momentum in an underperforming month, then it could take longer for the strategy to climb back to even. But investors are used to short periods of underperformance and often (we hope) realize that some short-term pain is necessary for long-term gains.

Looking at the same analysis over rolling 1-year periods, we do see some longer periods of factor outperformance. Some examples are quality in the 1980s, value in the mid-2000s, momentum in the 1960s and 1990s, and size in the late-1970s.

However, there are also decent stretches where the factors underperform. For example, the recent decade for value, quality in the early 2010s, momentum sporadically in the 2000s, and size in the 1980s and 1990s. If the timing strategy gets stuck in these periods, then there can be a risk of abandoning it.

Source: Kenneth French Data Library, AQR. Calculations by Newfound Research. Past performance is not an indicator of future results. Performance is backtested and hypothetical. Performance figures are gross of all fees, including, but not limited to, manager fees, transaction costs, and taxes.  Performance assumes the reinvestment of all distributions. Data from July 1957 – September 2019.

Again, a blended portfolio would have addressed many of these underperforming periods, giving up some of the upside with the benefit of reducing the risk of choosing the wrong factor in periods of underperformance.

Source: Kenneth French Data Library, AQR. Calculations by Newfound Research. Past performance is not an indicator of future results. Performance is backtested and hypothetical. Performance figures are gross of all fees, including, but not limited to, manager fees, transaction costs, and taxes.  Performance assumes the reinvestment of all distributions. Data from July 1957 – September 2019.

And finally, if we extend our holding period to three years, which may suit a slower-moving signal based on either value or the business cycle, we see that the diversified portfolio still outperforms over the largest share of rolling periods and maintains a strong ratio of upside to downside.

Source: Kenneth French Data Library, AQR. Calculations by Newfound Research. Past performance is not an indicator of future results. Performance is backtested and hypothetical. Performance figures are gross of all fees, including, but not limited to, manager fees, transaction costs, and taxes.  Performance assumes the reinvestment of all distributions. Data from July 1957 – September 2019.

The diversified portfolio stands up to scrutiny against the individual factors but could a generalized model that can time the factors with a certain degree of accuracy lead to better outcomes?

Generic Factor Timing

To construct a generic factor timing model, we will consider a strategy that decides to hold each factor or not with a certain degree of accuracy.

For example, if the accuracy is 50%, then the strategy would essentially flip a coin for each factor. Heads and that factor is included in the portfolio; tails and it is left out. If the accuracy is 55%, then the strategy will hold the factor with a 55% probability when the factor return is positive and not hold the factor with the same probability when the factor return is negative. Just to be clear, this strategy is constructed with look-ahead bias as a tool for evaluation.

All factors included in the portfolio are equally weighted, and if no factors are included, then the return is zero for that period.
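A minimal sketch of this look-ahead toy model is below.  The function and variable names are our own, and the sample data is hypothetical; the actual study ran 1,000 simulations against the four factor series.

```python
import numpy as np

def timed_returns(factor_returns: np.ndarray, accuracy: float,
                  rng: np.random.Generator) -> np.ndarray:
    """Look-ahead toy timing model: hold each factor with probability
    `accuracy` when its return is positive, and with probability
    (1 - accuracy) when it is negative.  Held factors are equally
    weighted; if none are held, the period return is zero."""
    correct = rng.random(factor_returns.shape) < accuracy
    # Hold on a "correct call" for winners, on an "incorrect call" for losers
    hold = np.where(factor_returns > 0, correct, ~correct)
    n_held = hold.sum(axis=1)
    gross = (factor_returns * hold).sum(axis=1)
    out = np.zeros(len(factor_returns))
    mask = n_held > 0
    out[mask] = gross[mask] / n_held[mask]  # equal weight among held
    return out

# accuracy = 0.5 reduces to a coin flip per factor per month
rng = np.random.default_rng(1)
sample = rng.normal(0.002, 0.02, (240, 4))  # hypothetical factor returns
print(timed_returns(sample, accuracy=0.55, rng=rng).mean())
```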

This toy model will allow us to construct distributions to see where the blended portfolio of all the factors falls in terms of frequency of outperformance (hit rate), average outperformance, and average underperformance. The following charts show the percentiles of the diversified portfolio for the different metrics and model accuracies using 1,000 simulations.2

Source: Kenneth French Data Library, AQR. Calculations by Newfound Research. Past performance is not an indicator of future results. Performance is backtested and hypothetical. Performance figures are gross of all fees, including, but not limited to, manager fees, transaction costs, and taxes.  Performance assumes the reinvestment of all distributions. Data from July 1957 – September 2019.

In terms of hit rate, the diversified portfolio performs in the top tier of the models over all time periods for accuracies up to about 57%. Even against a model that is 60% accurate, the diversified portfolio was still above the median.

Source: Kenneth French Data Library, AQR. Calculations by Newfound Research. Past performance is not an indicator of future results. Performance is backtested and hypothetical. Performance figures are gross of all fees, including, but not limited to, manager fees, transaction costs, and taxes.  Performance assumes the reinvestment of all distributions. Data from July 1957 – September 2019.

For average underperformance, the diversified portfolio also did very well in the context of these factor timing models. The low correlation between the factors leads to opportunities for the blended portfolio to limit the downside of individual factors.

Source: Kenneth French Data Library, AQR. Calculations by Newfound Research. Past performance is not an indicator of future results. Performance is backtested and hypothetical. Performance figures are gross of all fees, including, but not limited to, manager fees, transaction costs, and taxes.  Performance assumes the reinvestment of all distributions. Data from July 1957 – September 2019.

For average outperformance, the diversified portfolio did much worse than the timing model over all time horizons. We can attribute this also to the low correlation between the factors, as choosing only a subset of factors and equally weighting them often leads to more extreme returns.

Overall, the diversified portfolio manages the risks of underperformance, both in magnitude and in frequency, at the expense of sacrificing outperformance potential. We saw this in the first section when we compared the diversified portfolio to the individual factors.

But if we want to have increased return potential, we will have to introduce some model risk to time the factors.

Checking in on Momentum

Momentum is one model-based way to time the factors. Under our definition of accuracy in the toy model, a 12-1 momentum strategy on the factors has an accuracy of about 56%. While the diversified portfolio exhibited some metrics in line with strategies that were even more accurate than this, it never bore concentration risk: it always held all four factors.
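Under the toy model's definition of accuracy, the 12-1 momentum signal can be scored roughly as follows.  This is a sketch under assumptions: the exact lookback convention (an 11-month compound return skipping the most recent month) and the trending sample data are ours, not necessarily the study's implementation.

```python
import numpy as np
import pandas as pd

def momentum_accuracy(factor_returns: pd.DataFrame) -> float:
    """Share of (factor, month) calls where 12-1 momentum -- the
    trailing twelve-month return skipping the most recent month --
    has the same sign as the next month's factor return."""
    # 11-month compound return ending two months ago (skips month t-1)
    mom = (1 + factor_returns).rolling(11).apply(np.prod, raw=True).shift(2) - 1
    signal = mom > 0
    realized = factor_returns > 0
    valid = mom.notna()
    hits = ((signal == realized) & valid).to_numpy().sum()
    return hits / valid.to_numpy().sum()

# In a steadily trending sample, the signal is always "right"
trend = pd.DataFrame({"Momentum": [0.01] * 36})
print(momentum_accuracy(trend))  # 1.0
```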

Source: Kenneth French Data Library, AQR. Calculations by Newfound Research. Past performance is not an indicator of future results. Performance is backtested and hypothetical. Performance figures are gross of all fees, including, but not limited to, manager fees, transaction costs, and taxes.  Performance assumes the reinvestment of all distributions. Data from July 1957 – September 2019.

For the hit rate percentiles of the momentum strategy, we see a more subdued response. Momentum does not win as much as the diversified portfolio over the different time periods.

But not winning as much can be fine if you win bigger when you do win.

The charts below show that momentum does indeed have a higher outperformance percentile, but at the cost of a worse underperformance percentile, especially over 1-month periods, likely due to whipsaw from mean reversion.

Source: Kenneth French Data Library, AQR. Calculations by Newfound Research. Past performance is not an indicator of future results. Performance is backtested and hypothetical. Performance figures are gross of all fees, including, but not limited to, manager fees, transaction costs, and taxes.  Performance assumes the reinvestment of all distributions. Data from July 1957 – September 2019.

While momentum is definitely not the only way to time the factors, it is a good baseline to see what is required for higher average outperformance.

Now, turning back to our generic factor timing model, what accuracy would you need to beat momentum?

Sharpening our Signal

The answer is: not a whole lot. Most of the time, we only need to be about 53% accurate to beat the momentum-based factor timing.

Source: Kenneth French Data Library, AQR. Calculations by Newfound Research. Past performance is not an indicator of future results. Performance is backtested and hypothetical. Performance figures are gross of all fees, including, but not limited to, manager fees, transaction costs, and taxes.  Performance assumes the reinvestment of all distributions. 

The caveat is that this is the median performance of the simulations. The accuracy figure climbs closer to 60% if we use the 25th percentile as our target.

While these may not seem like extremely high requirements for running a successful factor timing strategy, it is important to observe that not many investors are doing this. True accuracy may be hard to discover, and sticking with the system may be even harder when the true accuracy can never be known.

Conclusion

If you made it this far looking for some rosy news on factor timing or the Holy Grail of how to do it skillfully, you may be disappointed.

However, for most investors looking to generate some modest benefits relative to market-cap equity, there is good news. Any signal for timing factors does not have to be highly accurate to perform well, and in the absence of a signal for timing, a diversified portfolio of the factors can lead to successful results by the metrics of average underperformance and frequency of underperformance.

For those investors looking for higher outperformance, concentration risk will be necessary.

Any timing strategy on low correlation investments will generally forego significant diversification in the pursuit of higher returns.

While this may be the goal when constructing the strategy, we should always pause and determine whether the potential benefits outweigh the costs. Transaction costs may be lower now. However, there are still operational burdens and the potential stress caused by underperformance when a system is not automated or when results are tracked too frequently.

Factor timing may be possible, but timing and tactical rotation may be better suited to scenarios where some of the model risk can be mitigated.
