The Research Library of Newfound Research

Author: Corey Hoffstein

Corey is co-founder and Chief Investment Officer of Newfound Research.

Corey holds a Master of Science in Computational Finance from Carnegie Mellon University and a Bachelor of Science in Computer Science, cum laude, from Cornell University.

You can connect with Corey on LinkedIn or Twitter.

Using PMI to Trade Cyclicals vs Defensives

This blog post is available as a PDF download here.

Summary

  • After stumbling across a set of old research notes from 2009 and 2012, we attempt to implement a Cyclicals versus Defensives sector trade out-of-sample.
  • Post-2012 returns prove unconvincing and we find little evidence supporting the notion that PMI changes can be used for constructing this trade.
  • Using data from the Kenneth French website, we extend the study to 1948, and similarly find that changes in PMI (regardless of lookback period) are not an effective signal for trading Cyclical versus Defensive sectors.

I love coming across old research because it allows for truly out-of-sample testing.

Earlier this week, I stumbled across a research note from 2009 and a follow-up note from 2012, both exploring the use of macro-based signals for constructing dollar-neutral long/short sector trades.  Specifically, the pieces focused on using manufacturing Purchasing Manager Indices (PMIs) as a predictor for Cyclical versus Defensive sectors.1

The strategy outlined is simple: when the prior month change in manufacturing PMI is positive, the strategy is long Cyclicals and short Defensives; when the change is negative, the strategy is long Defensives and short Cyclicals.  The intuition behind this signal is that PMIs provide a guide to hard economic activity.
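As a rough sketch, the signal logic translates into a few lines of pandas (the original notes provide no code, so the function and input names here are our own; inputs are monthly series):

```python
import numpy as np
import pandas as pd

def cyclicals_vs_defensives(pmi: pd.Series,
                            cyclicals: pd.Series,
                            defensives: pd.Series,
                            lookback: int = 1) -> pd.Series:
    """Monthly returns of the dollar-neutral PMI trade.

    A positive change in PMI over `lookback` months puts us long
    Cyclicals / short Defensives; a negative change flips the trade.
    The signal is lagged one month so we only trade on information
    available at the prior month-end.
    """
    signal = np.sign(pmi.diff(lookback)).shift(1)
    return signal * (cyclicals - defensives)
```

Note the one-month lag on the signal: without it, the backtest would trade on a PMI print it could not yet have observed.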

The sample period for the initial test is from 1998 to 2009, a period over which the strategy performed quite well on a global basis and even better when using the more forward-looking ratio of new orders to inventory.

Red flags start to go up, however, when we read the second note from 2012.  “It appears that the new orders-to-inventory ratio has lost its ability to forecast the output index.”  “In addition, the optimal lookback period … has shifted from one to two months.”

At this point, we can believe one of a few things:

  • The initial strategy works, has simply hit a rough patch in the three years after publishing, and will work again in the future.
  • The initial strategy worked but has broken since publishing.
  • The initial strategy never worked and was an artifact of datamining.

I won’t even bother addressing the whole “one-month versus two-month” comment. Long-time readers know where we come down on ensembles versus parameter specification…

Fortunately, we do not have to pass qualitative judgement: we can let the numbers speak for themselves.

While the initial notes focused on global implementation, we can rebuild the strategy using U.S. equity sectors and U.S. manufacturing PMI as the driving signal. This serves both as an out-of-sample test across assets and provides approximately seven more years of out-of-sample returns to evaluate.

Below we plot the results of this strategy for both 1-month and 2-month lookback periods, highlighting the in-sample and out-of-sample periods for each specification based upon the date the original research notes were published.  We use the State Street SPDR Sector Select ETFs as our implementation vehicles, with the exception of the iShares Dow Jones US Telecom ETF.

Source: CSI Data; Quandl. Calculations by Newfound Research.  Results are hypothetical.  Results assume the reinvestment of all distributions. Results are gross of all fees, including, but not limited to, manager fees, transaction costs, and taxes. Past performance is not an indicator of future results.  

 

The first thing we notice is that the original 1-month implementation – which appeared to work on a global scale – does not seem particularly robust when implemented with U.S. sectors.  Post-publication results do not fare much better.

The 2-month specification, however, does appear to work reasonably well both in- and out-of-sample.

But is there something inherently magical about that two-month specification?  We are hard-pressed to find a narrative explanation.

If we plot lookback specifications from 3- to 12-months, we see that the 2-month specification proves to be a significant outlier. Given the high correlation between all the other specifications, it is more likely that the 2-month lookback was the beneficiary of luck rather than capturing a particular edge.

Source: CSI Data; Quandl. Calculations by Newfound Research.  Results are hypothetical.  Results assume the reinvestment of all distributions. Results are gross of all fees, including, but not limited to, manager fees, transaction costs, and taxes. Past performance is not an indicator of future results.  

 

Perhaps we’re not giving this idea enough breathing room.  After all, were we to evaluate most value strategies in the most recent decades, we’d likely declare them insignificant as well.

With manufacturing PMI data extending back to 1948, we can use sector index data from the Kenneth French website to reconstruct this strategy.

Unfortunately, the Kenneth French definitions do not match GICS sectors perfectly, so we have to change the definition of Cyclicals and Defensives slightly.  Using the Kenneth French data, we will define Cyclicals to be Manufacturing, Non-Durables, Technology, and Shops. Defensives are defined to be Durables, Telecom, Health Care, and Utilities.

We use the same strategy as before, going long Cyclicals and short Defensives when changes in PMI are positive, and short Cyclicals and long Defensives when changes to PMI are negative.  We again vary the lookback period from 1- to 12-months.
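A sketch of that lookback sweep, assuming we already have monthly composite returns for the Cyclical and Defensive baskets (names and annualization convention are our own):

```python
import numpy as np
import pandas as pd

def sweep_lookbacks(pmi: pd.Series, cyclicals: pd.Series,
                    defensives: pd.Series,
                    lookbacks=range(1, 13)) -> pd.Series:
    """Annualized return of the PMI trade for each lookback period."""
    out = {}
    for lb in lookbacks:
        signal = np.sign(pmi.diff(lb)).shift(1)        # lagged signal
        rets = (signal * (cyclicals - defensives)).dropna()
        out[lb] = (1 + rets).prod() ** (12 / len(rets)) - 1
    return pd.Series(out, name="annualized_return")
```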

Source: Kenneth French Data Library; Quandl. Calculations by Newfound Research. Results are hypothetical.  Results assume the reinvestment of all distributions. Results are gross of all fees, including, but not limited to, manager fees, transaction costs, and taxes. Past performance is not an indicator of future results.  

 

The results are less than convincing.  Not only do we see significant dispersion across implementations, but there is also no consistency in those implementations that do well versus those that do not.

Perhaps worse, the best performing variation only returned a paltry 1.40% annualized gross of any implementation costs.  Once we start accounting for transaction costs, slippage, and management fees, this figure deflates towards zero rather quickly.

Source: Kenneth French Data Library; Quandl. Calculations by Newfound Research. Results are hypothetical.  Results assume the reinvestment of all distributions. Results are gross of all fees, including, but not limited to, manager fees, transaction costs, and taxes. Past performance is not an indicator of future results.  

Conclusion

There is no shortage of quantitative research in the market and the research can be particularly compelling when it seems to fit a pre-existing narrative.

Cyclicals versus Defensives are a perfect example.  Their very names imply the regimes during which they are supposed to add value, but actually translating this notion into a robust strategy proves to be less than easy.

I would make the philosophical argument that it quite simply cannot be easy.  Consider the two pieces of information we need to believe for this strategy to work:

  • Cyclicals outperform Defensives in an economic expansion and Defensives outperform Cyclicals in an economic contraction.
  • We can forecast economic expansions and contractions before it is priced into the market.

If we have very high confidence in both statements, it effectively implies an arbitrage.

Therefore, if we have very high confidence in the truth of the first statement, then for markets to be reasonably efficient, we must have little confidence in the second statement.

Similarly, if we have high confidence in the truth of the second statement, then for markets to be reasonably efficient, we must have little confidence in the first statement.

Thus, a more reasonable expectation might be that Cyclicals tend to outperform Defensives during an expansion, and Defensives tend to outperform Cyclicals in a contraction, but there may be meaningful exceptions depending upon the particular cycle.

Furthermore, we may believe we have an edge in forecasting expansions and contractions (perhaps not with just PMI, though), but there will be many false positives and false negatives along the way.

Taken together, we might believe we can construct such a strategy, but errors in both assumptions will lead to periods of frustration.  However, we should recognize that for such an “open secret” strategy to work in the long run, there have to be troughs of sorrow deep enough to avoid permanent crowding.

In this case, we believe there is little evidence to suggest that level changes in PMI provide particular insight into Cyclicals versus Defensives, but that does not mean there are no macro signals that might.

 


 

Your Style-age May Vary

This post is available as PDF download here.

Summary

  • New research from Axioma suggests that tilting less – through lower target tracking error – can actually create more academically pure factor implementation in long-only portfolios.
  • This research highlights an important question: how should long-only investors think about factor exposure in their portfolios?  Is measuring against an academically-constructed long/short portfolio really appropriate?
  • We return to the question of style versus specification, plotting year-to-date excess returns for long-only factor ETFs.  While the general style serves as an anchor, we find significant specification-driven performance dispersion.
  • We believe that the “right answer” to this dispersion problem largely depends upon the investor.

When quants speak about factor and style returns, we often do so with some sweeping generalizations.  Typically, we’re talking about some long/short specification, but precisely how that portfolio is formed can vary.

For example, one firm might look at deciles while another looks at quartiles.  One shop might equal-weight the holdings while another value-weights them.  Some might include mid- and small-caps, while others may work on a more realistic liquidity-screened universe.

More often than not, the precision does not matter a great deal (with the exception of liquidity-screening) because the general conclusion is the same.

But for investors who are actually realizing these returns, the precision matters quite a bit.  This is particularly true for long-only investors, who have adopted smart-beta ETFs to tap into the factor research.

As we have discussed in the past, any active portfolio can be decomposed into its benchmark plus a dollar-neutral long/short portfolio that encapsulates the active bets.   The active bets, then, can actually approach the true long/short implementation.

To a point, at least.  The “shorts” will ultimately be constrained by the amount the portfolio can under-weight a given security.
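A toy numerical example of this decomposition (the five-stock weights are made up for illustration):

```python
import numpy as np

# Hypothetical five-stock benchmark and a long-only tilted portfolio.
benchmark = np.array([0.40, 0.25, 0.15, 0.12, 0.08])
portfolio = np.array([0.55, 0.30, 0.15, 0.00, 0.00])

# The active bets are the dollar-neutral long/short overlay:
# every over-weight is funded by an equal-sized under-weight.
active = portfolio - benchmark

assert abs(active.sum()) < 1e-12   # longs fund the shorts exactly

# In a long-only portfolio the "short" side is capped: we can never
# underweight a name by more than its benchmark weight.
assert (active >= -benchmark).all()
```

Here the two zero-weighted names are already at their maximum possible "short" of -12% and -8%: the constraint this section describes.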

For long-only portfolios, increasing active share often means having to lean more heavily into the highest quintile or decile holdings.  This is not a problem in an idealized world where factor scores have a monotonically increasing relationship with excess returns.  In this perfect world, increasing our allocation to high-ranking stocks creates just as much excess return as shorting low-ranking stocks does.

Unfortunately, we do not live in a perfect world and for some factors the premium found in long/short portfolios is mostly found on the short side.1  For example, consider the Profitability Factor.  The annualized spread between the top- and bottom-quintile portfolios is 410 basis points.  The difference between the top quintile portfolio and the market, though, is just 154 basis points.  Nothing to scoff at, but when appropriately discounted for data-mining risk, transaction costs, and management costs, there is not necessarily a whole lot left over.

Which leads to some interesting results for portfolio construction, at least according to a recent study by Axioma.2  For factors where the majority of the premium arises from the short side, tilting less might mean achieving more.

For example, Axioma found that a portfolio optimized to maximize exposure to the profitability factor while targeting a tracking error to the market of just 10 basis points had excess returns with a meaningfully higher correlation to the long/short factor than those of a long-only portfolio that simply bought the top quintile.  In fact, the excess returns of the top quintile portfolio had zero correlation to the long/short factor returns.  Let’s repeat that: the active returns of the top quintile portfolio had zero correlation to the returns of the profitability factor.  Makes us sort of wonder what we’re actually buying…

Source: Kenneth French Data Library; Calculations by Newfound Research.

 

Cumulative Active Returns of Long-Only Portfolios

So, what does it actually mean for long-only investors when we plot long/short equity factor returns?  When we see that the Betting-Against-Beta (“BAB”) factor is up 3% on the year, what does that imply for our low-volatility factor ETF?  Momentum (“UMD”) was down nearly 10% earlier this year; were long-only momentum ETFs really under-performing by that much?

And what does this all mean for the results in those fancy factor decomposition reports the nice consultants from the big asset management firms have been running for me over the last couple of years?

Source: AQR. Calculations by Newfound Research.

We find ourselves back to a theme we’ve circled many times over the last few years: style versus specification.  Choices such as how characteristics are measured, portfolio concentration, the existence or absence of position- and industry/sector-level constraints, weighting methodology, and rebalance frequency (and even date!) can have a profound impact on realized results.  The little details compound to matter quite a bit.

To highlight this disparity, below we have plotted the excess return of an equally-weighted portfolio of long-only style ETFs versus the S&P 500 as well as a standard deviation cone for individual style ETF performance.

While most of the ETFs are ultimately anchored to their style, we can see that short-term performance can meaningfully deviate.

Source: CSI Analytics.  Calculations by Newfound Research.  Results are hypothetical.  Results assume the reinvestment of all distributions.  Results are gross of all fees, including, but not limited to, manager fees, transaction costs, and taxes, with the exception of underlying ETF expense ratios.  Past performance is not an indicator of future results.  Year-to-Date returns are computed by assuming an equal-weight allocation to representative long-only ETFs for each style.  Returns are net of underlying ETF expense ratios.  Returns are calculated in excess of the SPDR S&P 500 ETF (“SPY”).  The ETFs used for each style are (in alphabetical order): Value: FVAL, IWD, JVAL, OVLU, QVAL, RPV, VLU, VLUE; Size: IJR, IWM, OSIZ; Momentum: FDMO, JMOM, MMTM, MTUM, OMOM, QMOM, SPMO; Low Volatility: FDLO, JMIN, LGLV, OVOL, SPLV, SPMV, USLB, USMV; Quality: FQAL, JQUA, OQAL, QUAL, SPHQ; Yield: DVY, FDVV, JDIV, OYLD, SYLD, VYM; Growth: CACG, IWF, QGRO, RPG, SCHG, SPGP, SPYG; Trend: BEMO, FVC, LFEQ, PTLC.  Newfound may hold positions in any of the above securities.

 

Conclusion

In our opinion, the research and data outlined in this commentary suggests a few potential courses of action for investors.

  • For certain styles, we might consider embracing smaller tilts for purer factor exposure.
  • To avoid specification risk, we might embrace the potential benefits of multi-manager diversification.
  • Or, if there is a particular approach we prefer, simply acknowledge that it may not behave anything like the academic long/short definition – or even other long-only implementations – in the short-term.

Academically, we might be able to argue for one approach over another.  Practically, the appropriate solution is whatever is most suitable for the investor and the approach that they will be able to stick with.

If a client measures their active returns with respect to academic factors, then understanding how portfolio construction choices deviate from the factor definitions will be critical.

An advisor trying to access a style but not wanting to risk choosing the wrong ETF might consider asking themselves, “why choose?”  Buying a basket of a few ETFs will do wonders to reduce specification risk.

On the other hand, if an investor is simply trying to maximize their compound annualized return and nothing else, then a concentrated approach may very well be warranted.

Whatever the approach taken, it is important to remember that results between two strategies that claim to implement the same style can and will deviate significantly, especially in the short run.

 


 

Timing Luck and Systematic Value

This post is available as a PDF download here.

Summary

  • We have shown many times that timing luck – when a portfolio chooses to rebalance – can have a large impact on the performance of tactical strategies.
  • However, fundamental strategies like value portfolios are susceptible to timing luck, as well.
  • Once the rebalance frequency of a strategy is set, we can mitigate the risk of choosing a poor rebalance date by diversifying across all potential variations.
  • In many cases, this mitigates the risk of realizing poor performance from an unfortunate choice of rebalance date while achieving a risk profile similar to the top tier of potential strategy variations.
  • By utilizing strategies that manage timing luck, investors can more accurately assess performance differences arising from luck and skill.

On August 7th, 2013 we wrote a short blog post titled The Luck of Rebalance Timing.  That means we have been prattling on about the impact of timing luck for over six years now (with apologies to our compliance department…).

(For those still unfamiliar with the idea of timing luck, we will point you to a recent publication from Spring Valley Asset Management that provides a very approachable introduction to the topic.1)

While most of our earliest studies related to the impact of timing luck in tactical strategies, over time we realized that timing luck could have a profound impact on just about any strategy that rebalances on a fixed frequency.  We found that even a simple fixed-mix allocation of stocks and bonds could see annual performance spreads exceeding 700bp due only to the choice of when they rebalanced in a given year.

In seeking to generalize the concept, we derived a formula that would estimate how much timing luck a strategy might have.  The details of the derivation can be found in our paper recently published in the Journal of Index Investing, but the basic formula is:

L ≈ (T / F) × S

Here, T is strategy turnover, F is how many times per year the strategy rebalances, and S is the volatility of a long/short portfolio capturing the difference between what the strategy is currently invested in versus what it could be invested in.

We’re biased, but we think the intuition here works out fairly nicely:

  • The higher a strategy’s turnover, the greater the impact of our choice of rebalance dates. For example, if we have a value strategy that has 50% turnover per year, an implementation that rebalances in January versus one that rebalances in July might end up holding very different securities.  On the other hand, if the strategy has just 1% turnover per year, we don’t expect the differences in holdings to be very large and therefore timing luck impact would be minimal.
  • The more frequently we rebalance, the lower the timing luck. Again, this makes sense as more frequent rebalancing limits the potential difference in holdings of different implementation dates.  Again, consider a value strategy with 50% turnover.  If our portfolio rebalances every other month, there are two potential implementations: one that rebalances January, March, May, etc. and one that rebalances February, April, June, etc. We would expect the difference in portfolio holdings to be much more limited than in the case where we rebalance only annually.2
  • The last term, S, is most easily explained with an example. If we have a portfolio that can hold either the Russell 1000 or the S&P 500, we do not expect there to be a large amount of performance dispersion regardless of when we rebalance or how frequently we do so.  The volatility of a portfolio that is long the Russell 1000 and short the S&P 500 is so small, it drives timing luck near zero.  On the other hand, if a portfolio can hold the Russell 1000 or be short the S&P 500, differences in holdings due to different rebalance dates can lead to massive performance dispersion. Generally speaking, S is larger for more highly concentrated strategies with large performance dispersion in their investable universe.
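In code, the estimate reads as a one-liner (this is our shorthand for the formula; see the Journal of Index Investing paper for the full derivation):

```python
def timing_luck(turnover: float, rebalances_per_year: float,
                swap_volatility: float) -> float:
    """Estimated timing luck: (turnover / rebalance frequency) * swap vol.

    swap_volatility is the volatility of the long/short portfolio between
    what the strategy currently holds and what it could otherwise hold.
    """
    return (turnover / rebalances_per_year) * swap_volatility
```

For example, a strategy with 50% annual turnover, one rebalance per year, and a 10% swap volatility carries roughly 5% of annualized performance dispersion attributable purely to the choice of rebalance date; rebalancing monthly instead shrinks that by a factor of twelve.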

Timing Luck in Smart Beta

To date, we have not meaningfully tested timing luck in the realm of systematic equity strategies.3  In this commentary, we aim to provide a concrete example of the potential impact.

A few weeks ago, however, we introduced our Systematic Value portfolio, which seeks to deliver concentrated exposure to the value style while avoiding unintended process and timing luck bets.

To achieve this, we implement an overlapping portfolio process.  Each month we construct a concentrated deep value portfolio, selecting just 50 stocks from the S&P 500.  However, because we believe the evidence suggests that value is a slow-moving signal, we aim for a holding period of 3-to-5 years.  To that end, our capital is divided across the prior 60 months of portfolios.4

Which all means that we have monthly snapshots of deep value5 portfolios going back to November 2012, providing us data to construct all sorts of rebalance variations.

The Luck of Annual Rebalancing

Given our portfolio snapshots, we will create annually rebalanced portfolios.  With monthly portfolios, there are twelve variations we can construct: a portfolio that reconstitutes each January; one that reconstitutes each February; a portfolio that reconstitutes each March; et cetera.

Below we plot the equity curves for these twelve variations.

Source: CSI Analytics.  Calculations by Newfound Research.  Results are hypothetical.  Results assume the reinvestment of all distributions.   Results are gross of all fees, including, but not limited to, manager fees, transaction costs, and taxes.  Past performance is not an indicator of future results.  

We cannot stress enough that these portfolios are all implemented using a completely identical process.  The only difference is when they run that process.  The annualized returns range from 9.6% to 12.2%.  And those two portfolios with the largest disparity rebalanced just a month apart: January and February.

To avoid timing luck, we want to diversify when we rebalance.  The simplest way of achieving this goal is through overlapping portfolios.  For example, we can build portfolios that rebalance annually, but allocate to two different dates.  One portfolio could place 50% of its capital in the January rebalance index and 50% in the July rebalance index.
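Mechanically, an overlapping portfolio is just an equal-weight blend of the underlying variations. A minimal sketch, treating the blend as rebalanced back to equal weight across tranches each month (a simplification of the actual tranche accounting; names are our own):

```python
import pandas as pd

def overlap(variations: dict, members: list) -> pd.Series:
    """Equal-weight blend of several rebalance-date variations.

    `variations` maps a rebalance month (e.g. "Jan") to that variation's
    monthly return series; the overlapping portfolio averages them.
    """
    return pd.concat([variations[m] for m in members], axis=1).mean(axis=1)

# e.g. the January/July semi-annual overlapping portfolio:
# jan_jul = overlap(variations, ["Jan", "Jul"])
```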

Another variation could place 50% of its capital in the February index and 50% in the August index.6  There are six possible variations, which we plot below.

The best performing variation (January and July) returned 11.7% annualized, while the worst (February and August) returned 9.7%.  While the spread has narrowed, it would be dangerous to confuse 200bp annualized for alpha instead of rebalancing luck.

Source: CSI Analytics.  Calculations by Newfound Research.  Results are hypothetical.  Results assume the reinvestment of all distributions.   Results are gross of all fees, including, but not limited to, manager fees, transaction costs, and taxes.  Past performance is not an indicator of future results.  

We can go beyond just two overlapping portfolios, though.  Below we plot the three variations that contain four overlapping portfolios (January-April-July-October, February-May-August-November, and March-June-September-December).  The best variation now returns 10.9% annualized while the worst returns 10.1% annualized.  We can see how overlapping portfolios are shrinking the variation in returns.

Finally, we can plot the variation that employs 12 overlapping portfolios.  This variation returns 10.6% annualized; almost perfectly in line with the average annualized return of the underlying 12 variations.  No surprise: diversification has neutralized timing luck.

Source: CSI Analytics.  Calculations by Newfound Research.  Results are hypothetical.  Results assume the reinvestment of all distributions.   Results are gross of all fees, including, but not limited to, manager fees, transaction costs, and taxes.  Past performance is not an indicator of future results.  

Source: CSI Analytics.  Calculations by Newfound Research.  Results are hypothetical.  Results assume the reinvestment of all distributions.   Results are gross of all fees, including, but not limited to, manager fees, transaction costs, and taxes.  Past performance is not an indicator of future results.  

But besides being “average by design,” how can we measure the benefits of diversification?

As with most ensemble approaches, we see a reduction in realized risk metrics.  For example, below we plot the maximum realized drawdown for the annual variations, semi-annual variations, quarterly variations, and the monthly variation.  While the dispersion is limited to just a few hundred basis points, we can see that the diversification embedded in the monthly variation is able to reduce the bad luck of choosing an unfortunate rebalance date.

Source: CSI Analytics.  Calculations by Newfound Research.  Results are hypothetical.  Results assume the reinvestment of all distributions.   Results are gross of all fees, including, but not limited to, manager fees, transaction costs, and taxes.  Past performance is not an indicator of future results.  

Just Rebalance more Frequently?

One of the major levers in the timing luck equation is how frequently the portfolio is rebalanced.  However, we firmly believe that while rebalancing frequency impacts timing luck, timing luck should not be a driving factor in our choice of rebalance frequency.

Rather, rebalance frequency choices should be a function of the speed at which our signal decays (e.g. fast-changing signals such as momentum versus slow-changing signals like value) versus implementation costs (e.g. explicit trading costs, market impact, and taxes).  Only after this choice is made should we seek to limit timing luck.

Nevertheless, we can ask the question, “how does rebalancing more frequently impact timing luck in this case?”

To answer this question, we will evaluate quarterly-rebalanced portfolios.  The distinction here from the quarterly overlapping portfolios above is that the entire portfolio is rebalanced each quarter rather than only a quarter of the portfolio.  Below, we plot the equity curves for the three possible variations.

Source: CSI Analytics.  Calculations by Newfound Research.  Results are hypothetical.  Results assume the reinvestment of all distributions.   Results are gross of all fees, including, but not limited to, manager fees, transaction costs, and taxes.  Past performance is not an indicator of future results.  

The best performing variation returns 11.7% annualized while the worst returns 9.7% annualized, for a spread of 200 basis points.  This is actually larger than the spread we saw with the three quarterly overlapping portfolio variations, likely because turnover within the portfolios increased meaningfully.

While we can see that increasing the frequency of rebalancing can help, in our opinion the choice of rebalance frequency should be distinct from the choice of managing timing luck.

Conclusion

In our opinion, there are at least two meaningful conclusions here:

The first is for product manufacturers (e.g. index issuers) and is rather simple: if you’re going to have a fixed rebalance schedule, please implement overlapping portfolios.  It isn’t hard.  It is literally just averaging.  We’re all better off for it.

The second is for product users: realize that performance dispersion between similarly-described systematic strategies can be heavily influenced by when they rebalance. The excess return may really just be a phantom of luck, not skill.

The solution to this problem, in our opinion, is to either: (1) pick an approach and just stick to it regardless of perceived dispersion, accepting the impact of timing luck; (2) hold multiple approaches that rebalance on different days; or (3) implement an approach that accounts for timing luck.

We believe the first approach is easier said than done.  And without a framework for distinguishing between timing luck and alpha, we’re largely making arbitrary choices.

The second approach is certainly feasible but has the potential downside of requiring more holdings as well as potentially forcing an investor to purchase an approach they are less comfortable with.  For example, while blending IWD (Russell 1000 Value), RPV (S&P 500 Pure Value), VLUE (MSCI U.S. Enhanced Value), and QVAL (Alpha Architect U.S. Quantitative Value) may create a portfolio that rebalances on many different dates (annual in May; annual in December; semi-annual in May and November; and quarterly, respectively), it also introduces significant process differences.  That said, research suggests that investors may benefit from further manager/process diversification.

For investors with conviction in a single strategy implementation, the last approach is certainly the best.  Unfortunately, as far as we are aware, there are only a few firms who actively implement overlapping portfolios (including Newfound Research, O’Shaughnessy Asset Management, AQR, and Research Affiliates). Until more firms adopt this approach, timing luck will continue to loom large.

 


 

Ensemble Multi-Asset Momentum

This post is available as a PDF download here.

Summary

  • We explore a representative multi-asset momentum model that is similar to many bank-based indexes behind structured products and market-linked CDs.
  • With a monthly rebalance cycle, we find substantial timing luck risk.
  • Using the same basic framework, we build a simple ensemble approach, diversifying both process and rebalance timing risk.
  • We find that the virtual strategy-of-strategies is able to harvest diversification benefits, realizing a top-quartile Sharpe ratio with a bottom-quartile maximum drawdown.

Early in the 2010s, a suite of index-linked products came to market that raised billions of dollars.  These products – offered by just about every major bank – sought to simultaneously exploit the diversification benefits of modern portfolio theory and the potential for excess returns from the momentum anomaly.

While each index has its own bells and whistles, they generally follow the same approach:

  • A global, multi-asset universe covering equities, fixed income, and commodities.
  • Implemented using highly liquid ETFs.
  • Asset class and position-level allocation limits.
  • A monthly rebalance schedule.
  • A portfolio optimization that seeks to maximize weighted prior returns (e.g. prior 6 month returns) while limiting portfolio volatility to some maximum threshold (e.g. 5%).

And despite their differences, we can see in plotting their returns below that these indices generally share a common return pattern, indicating a common, driving style.

Source: Bloomberg.

Frequent readers will know that “monthly rebalance” is an immediate red flag for us here at Newfound: an indicator that timing luck is likely lurking nearby.

Replicating Multi-Asset Momentum

To test the impact of timing luck, we replicate a simple multi-asset momentum strategy based upon available index descriptions.

We rebalance the portfolio at the end of each month.  Our optimization process seeks to identify the portfolio with a realized volatility less than 5% that would have maximized returns over the prior six months, subject to a number of position and asset-level limits.  If the 5% volatility target is not achievable, the target is increased by 1% until a portfolio can be constructed that satisfies our constraints.

We use the following ETFs and asset class limits:

As a naïve test for timing luck, rather than assuming the index rebalances at the end of each month, we will simply assume the index rebalances every 21 trading days. In doing so, we can construct 21 different variations of the index, each representing the results from selecting a different rebalance date.
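The 21-variation construction above can be sketched as follows.  This is a hypothetical illustration only: `momentum_weights` is a toy stand-in signal, not the banks' actual optimization, and the 126-day window is an assumed parameter.

```python
# Sketch: measuring timing luck by shifting the rebalance date.
# `momentum_weights` is a toy stand-in for the index's real optimization.
import numpy as np
import pandas as pd

def momentum_weights(prices: pd.DataFrame) -> pd.Series:
    """Toy signal: weight assets proportionally to trailing total return."""
    ret = prices.iloc[-1] / prices.iloc[0] - 1.0
    w = ret.clip(lower=0.0)
    # Fall back to equal weight if no asset has positive momentum
    return w / w.sum() if w.sum() > 0 else pd.Series(1.0 / len(w), index=w.index)

def variation_returns(prices: pd.DataFrame, offset: int, cycle: int = 21) -> pd.Series:
    """Daily strategy returns, rebalancing every `cycle` trading days at `offset`."""
    daily = prices.pct_change().dropna()
    weights = pd.Series(1.0 / prices.shape[1], index=prices.columns)
    out = []
    for i, (_, r) in enumerate(daily.iterrows()):
        if i >= cycle and (i - offset) % cycle == 0:
            window = prices.iloc[max(0, i - 126): i]   # assumed 126-day lookback
            weights = momentum_weights(window)
        out.append(float((weights * r).sum()))
    return pd.Series(out, index=daily.index)
```

Running `variation_returns(prices, k)` for `k` in 0 through 20 produces the 21 rebalance-date variations whose dispersion is the timing-luck estimate.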

Source: CSI Analytics; Calculations by Newfound Research.  Results are backtested and hypothetical.  Results assume the reinvestment of all distributions.  Results are gross of all fees, including, but not limited to, manager fees, transaction costs, and taxes, with the exception of underlying ETF expense ratios.  Past performance is not an indicator of future results. 

As expected, the choice of rebalance date has a meaningful impact.  Annualized returns range from 4.7% to 5.5%, Sharpe ratios range from 0.6 to 0.9, and maximum drawdowns range from 9.9% to 20.8%.

On a year-by-year basis, the only thing that is consistent is the large spread between the worst and best-performing rebalance date.  On average, the yearly spread exceeds 400 basis points.

Year        Min       Max
2008*      -9.91%     0.85%
2009        2.36%     4.59%
2010        6.46%     9.65%
2011        3.31%    10.15%
2012        6.76%    10.83%
2013        3.42%     6.13%
2014        5.98%    10.60%
2015       -5.93%    -2.51%
2016        4.18%     8.45%
2017        9.60%    11.62%
2018       -6.00%    -2.53%
2019 YTD    5.93%    10.01%

* Partial year starting 7/22/2008

We’ve said it in the past and we’ll say it again: timing luck can be the difference between being hired and fired.  And while we’d rather be on the side of good luck, the lack of control means we’d rather just avoid this risk altogether.

If it isn’t nailed down for a reason, diversify it

The choice of when to rebalance is certainly not the only free variable of our multi-asset momentum strategy.  Without an explicit view as to why a choice is made, our preference is always to diversify so as to avoid specification risk.

We will leave the constraints (e.g. volatility target and weight constraints) well enough alone in this example, but we should consider the process by which we’re measuring past returns as well as the horizon over which we’re measuring it.  There is plenty of historical efficacy to using prior 6-month total returns for momentum, but no lack of evidence supporting other lookback horizons or measurements.

Therefore, we will use three models of momentum: prior total return, the distance of price from its moving average, and the distance of a short-term moving average from a longer-term moving average.  We will vary the parameterization of these signals to cover horizons ranging from 3- to 15-months in length.

We will also vary which day of the month the portfolio rebalances on.

By varying the signal, the lookback horizon, and the rebalance date, we can generate hundreds of different portfolios, all supported by the same theoretical evidence but having slightly different realized results due to their particular specification.

Our robust portfolio emerges by calculating the weights for all these different variations and averaging them together, in many ways creating a virtual strategy-of-strategies.
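The three momentum measures and the averaging step can be sketched as below.  The specific signal definitions, the long-only signal-proportional weighting, and the 63/126/252-day lookbacks (roughly 3, 6, and 12 months, a subset of the 3-to-15-month range described above) are all illustrative assumptions, not the exact rules of any bank index.

```python
# Sketch of the ensemble: three illustrative momentum measures, several
# lookbacks, averaged into a single weight vector.
import numpy as np
import pandas as pd

def total_return(prices, n):
    """Prior n-day total return."""
    return prices.iloc[-1] / prices.iloc[-n] - 1.0

def price_minus_ma(prices, n):
    """Distance of price from its n-day moving average."""
    return prices.iloc[-1] / prices.iloc[-n:].mean() - 1.0

def ma_cross(prices, short, long):
    """Distance of a short-term moving average from a longer-term one."""
    return prices.iloc[-short:].mean() / prices.iloc[-long:].mean() - 1.0

def ensemble_weights(prices: pd.DataFrame, lookbacks=(63, 126, 252)) -> pd.Series:
    """Average long-only, signal-proportional weights across all specifications."""
    signals = []
    for n in lookbacks:
        signals.append(total_return(prices, n))
        signals.append(price_minus_ma(prices, n))
        signals.append(ma_cross(prices, n // 4, n))   # assumed short/long ratio
    weights = []
    for s in signals:
        w = s.clip(lower=0.0)
        weights.append(w / w.sum() if w.sum() > 0 else pd.Series(0.0, index=s.index))
    return sum(weights) / len(weights)   # the virtual strategy-of-strategies
```

In the full ensemble, each (signal, lookback, rebalance-date) triple would produce its own weight vector, and the robust portfolio simply averages them all.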

Below we plot the result of this ensemble approach as compared to a random sample of the underlying specifications.  We can see that while there are specifications that do much better, there are also those that do much worse.  By employing an ensemble approach, we forgo the opportunity for good luck and avoid the risk of bad luck.  Along the way, though, we may pick up some diversification benefits: the Sharpe ratio of the ensemble approach fell in the top quartile of specifications and its maximum drawdown was in the bottom quartile (i.e. lower drawdown).

Source: CSI Analytics; Calculations by Newfound Research.  Results are backtested and hypothetical.  Results assume the reinvestment of all distributions.  Results are gross of all fees, including, but not limited to, manager fees, transaction costs, and taxes, with the exception of underlying ETF expense ratios.  Past performance is not an indicator of future results.

Conclusion

In this commentary, we again demonstrate the potential risk of needless specification and the potential power of diversification.

Using a popular multi-asset momentum model as our example, we again find a significant amount of timing luck lurking in a monthly rebalance specification.  By building a virtual strategy-of-strategies, we are able to manage this risk by partially rebalancing our portfolio on different days.

We go a step further, acknowledging that process represents another axis of risk.  Specifically, we vary both how we measure momentum and the horizon over which it is measured.  Through the variation of rebalance days, model specifications, and lookback horizons, we generate over 500 different strategy specifications and combine them into a virtual strategy-of-strategies to generate our robust multi-asset momentum model.

As with prior commentaries, we find that the robust model is able to effectively reduce the risk of both specification and timing luck.  But perhaps most importantly, it was able to harvest the benefits of diversification, realizing a Sharpe ratio in the top quartile of specifications and a maximum drawdown in the lowest quartile.

Decomposing the Credit Curve

This post is available as a PDF download here.

Summary

  • In this research note, we continue our exploration of credit.
  • Rather than test a quantitative signal, we explore credit changes through the lens of statistical decomposition.
  • As with the Treasury yield curve, we find that changes in the credit spread curve can be largely explained by Level, Slope, and Curvature (so long as we adjust for relative volatility levels).
  • We construct stylized portfolios to reflect these factors, adjusting position weights such that they contribute an equal amount of credit risk. We then neutralize interest rate exposure such that the return of these portfolios represents credit-specific information.
  • We find that the Level trade suggests little-to-no realized credit premium over the last 25 years, and Slope suggests no realized premium of junk-minus-quality within credit either. However, results may be largely affected by idiosyncratic events (e.g.  LTCM in 1998) or unhedged risks (e.g. sector differences in credit indices).

In this week’s research note, we continue our exploration of credit with a statistical decomposition of the credit spread curve.  Just as the U.S. Treasury yield curve plots yields versus maturity, the credit spread curve plots excess yield versus credit quality, providing us insight into how much extra return we demand for the risks of declining credit quality.

Source: Federal Reserve of St. Louis; Bloomberg.  Calculations by Newfound Research. 

Our goal in analyzing the credit spread curve is to gain a deeper understanding of the principal drivers behind its changes.  In doing so, we hope to potentially gain intuition and ideas for trading signals between low- and high-quality credit.

To begin our analysis, we must first construct our credit spread curve.  We will use the following index data to represent our different credit qualities.

  • Aaa: Bloomberg U.S. Corporate Aaa Index (LCA3TRUU)
  • Aa: Bloomberg U.S. Corporate Aa Index (LCA2TRUU)
  • A: Bloomberg U.S. Corporate A Index (LCA1TRUU)
  • Baa: Bloomberg U.S. Corporate Baa Index (LCB1TRUU)
  • Ba: Bloomberg U.S. Corporate HY Ba Index (BCBATRUU)
  • B: Bloomberg U.S. Corporate HY B Index (BCBHTRUU)
  • Caa: Bloomberg U.S. Corporate HY Caa Index (BCAUTRUU)

Unfortunately, we cannot simply plot the yield-to-worst for each index, as a spread is meant to capture only the excess yield.  Which raises the question: excess to what?  As we want to isolate the credit component of the yield, we need to remove the duration-equivalent Treasury rate.

Plotting the duration of each credit index over time, we can immediately see why incorporating this duration data will be important.  Not only do durations vary meaningfully over time (e.g. Aaa durations varying between 4.95 and 11.13), but they also deviate across quality (e.g. Caa durations currently sit near 3.3 while Aaa durations are north of 11.1).

Source: Bloomberg.

To calculate our credit spread curve, we must first calculate the duration-equivalent Treasury bond yield for each index at each point in time.  For each credit index at each point in time, we use the historical Treasury yield curve to numerically solve for the Treasury maturity that matches the credit index’s duration.  We then subtract that matching rate from the credit index’s reported yield-to-worst to estimate the credit spread.
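A minimal sketch of this spread calculation is below.  As a simplifying assumption, we read the constant-maturity Treasury curve directly at the credit index's duration via linear interpolation, rather than numerically solving for a true duration-matched coupon bond; the function names are ours, not from any library.

```python
# Sketch: estimate a credit spread by subtracting a duration-matched
# Treasury rate.  Simplification: interpolate the curve at the index's
# duration rather than solving for a duration-matched par bond.
import numpy as np

def duration_matched_rate(curve_maturities, curve_yields, index_duration):
    """Linearly interpolate the Treasury curve at the credit index's duration."""
    return float(np.interp(index_duration, curve_maturities, curve_yields))

def credit_spread(index_ytw, curve_maturities, curve_yields, index_duration):
    """Yield-to-worst minus the duration-matched Treasury rate."""
    return index_ytw - duration_matched_rate(curve_maturities, curve_yields, index_duration)
```

For example, with a curve quoted at the standard constant maturities, a 5-year-duration index yielding 5.0% against a 2.0% matched Treasury rate carries a 3.0% spread.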

We plot the spreads over time below.

Source: Federal Reserve of St. Louis; Bloomberg.  Calculations by Newfound Research.

Statistical Decomposition: Eigen Portfolios

With our credit spreads in hand, we can now attempt to extract the statistical drivers of change within the curve.  One method of achieving this is to:

  • Calculate month-to-month differences in the curve.
  • Calculate the correlation matrix of the differences.
  • Calculate an eigenvalue decomposition of the correlation matrix.
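The three steps above can be sketched with NumPy: difference the curve, correlate the differences, and eigen-decompose the correlation matrix.  Column labels and the synthetic-data test are our own assumptions.

```python
# Sketch: eigen decomposition of the correlation matrix of credit
# spread changes, returning variance explained and factor portfolios.
import numpy as np
import pandas as pd

def eigen_factor_portfolios(spreads: pd.DataFrame):
    """Return (variance_explained, factor_weights) from monthly spread levels."""
    changes = spreads.diff().dropna()             # step 1: month-to-month differences
    corr = changes.corr().values                  # step 2: correlation matrix
    evals, evecs = np.linalg.eigh(corr)           # step 3: eigen decomposition
    order = np.argsort(evals)[::-1]               # sort factors by variance explained
    evals, evecs = evals[order], evecs[:, order]
    var_explained = evals / evals.sum()
    weights = pd.DataFrame(evecs, index=spreads.columns,
                           columns=[f"F{i+1}" for i in range(len(evals))])
    return var_explained, weights
```

The columns of `weights` are the eigen portfolios: each is a combination of the credit indices, and the corresponding entry of `var_explained` tells us how much of the total variance in spread changes that portfolio captures.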

Stopping after just the first two steps, we can begin to see some interesting visual patterns emerge in the correlation matrix.

  • There is not a monotonic decline in correlation between credit qualities. For example, Aaa is no more highly correlated to Aa than it is to Ba, and A is more correlated to B than it is to Aa.
  • Aaa appears to behave rather uniquely.
  • Baa, Ba, B, and to a lesser extent Caa, appear to visually cluster in behavior.
  • Ba, B, and Caa do appear to have more intuitive correlation behavior, with correlations increasing as credit qualities get closer.

Step 3 might seem foreign for those unfamiliar with the technique, but in this context eigenvalue decomposition has an easy interpretation.   The process will take our universe of credit indices and return a universe of statistically independent factor portfolios, where each portfolio is made up of a combination of credit indices.

As our eigenvalue decomposition was applied to the correlation matrix of credit spread changes, the factors will explain the principal vectors of variance in credit spread changes.  We plot the weights of the first three factors below.

Source: Federal Reserve of St. Louis; Bloomberg.  Calculations by Newfound Research.

For anyone who has performed an eigenvalue decomposition on the yield curve before, three familiar components emerge.

We can see that Factor #1 applies nearly equal-weights across all the credit indices. Therefore, we label this factor “level” as it represents a level shift across the entire curve.

Factor #2 declines in weight from Aaa through Caa.  Therefore, we label this factor “slope,” as it controls steepening and flattening of the credit curve.

Factor #3 appears as a barbell: negative weights in the wings and positive weights in the belly.  Therefore, we call this factor “curvature,” as it will capture convexity changes in the curve.

Together, these three factors explain 80% of the variance in credit spread changes. Interestingly, the 4th factor – which brings variance explained up to 87.5% – also looks very much like a curvature trade, but places zero weight on Aaa and barbells Aa/Caa against A/Baa.  We believe this serves as further evidence as to the unique behavior of Aaa credit.

Tracking Credit Eigen Portfolios

As we mentioned, each factor is constructed as a combination of exposure to our Aaa-Caa credit universe; in other words, they are portfolios!  This means we can track their performance over time and see how these different trades behave in different market regimes.

To avoid overfitting and estimation risk, we decided to simplify the factor portfolios into more stylized trades, whose weights are plotted below (though ignore, for a moment, the actual weights, as they are meant only to represent relative weighting within the portfolio and not absolute level).  Note that the Level trade has a cumulative positive weight while the Slope and Curvature trades sum to zero.

To actually implement these trades, we need to account for the fact that each credit index will have a different level of credit duration.

Akin to duration, which measures a bond’s sensitivity to interest rate changes, credit spread duration measures a bond’s sensitivity to changes in its credit spread. As with Treasuries, we need to adjust the weights of our trades to account for this difference in credit spread durations across our indices.

For example, if we want to place a trade that profits in a steepening of the Treasury yield curve, we might sell 10-year US Treasuries and buy 2-year US Treasuries. However, we would not buy and sell the same notional amount, as that would leave us with a significantly negative duration position.  Rather, we would scale each leg such that their durations offset.  In the end, this causes us to buy significantly more 2s than we sell 10s.
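The 2s10s sizing arithmetic above reduces to a one-liner: scale each leg so their dollar durations offset.  The durations below are illustrative assumptions, not current market values.

```python
# Sketch: sizing a duration-neutral 2s10s steepener.
# Assumed modified durations (illustrative only).
d2, d10 = 1.9, 8.8

short_10s = -1.0                        # sell $1 notional of 10-year Treasuries
long_2s = -short_10s * d10 / d2         # buy ~$4.63 notional of 2-year Treasuries

# The legs' duration contributions cancel, leaving a pure curve trade.
net_duration = long_2s * d2 + short_10s * d10
```

Note how the duration ratio forces us to buy roughly 4.6x as much notional in 2s as we sell in 10s, exactly the "buy significantly more 2s" effect described above.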

To continue, therefore, we must calculate credit spread durations.

Without this data on hand, we employ a statistical approach.  Specifically, we take monthly total return data and subtract yield return and impact from interest rate changes (employing the duration-matched rates we calculated above).  What is left over is an estimate of return due to changes in credit spreads. We then regress these returns against changes in credit spreads to calculate credit spread durations, which we plot below.
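The regression step can be sketched as below.  This assumes the credit-specific return series has already been isolated as described; the helper name is ours.

```python
# Sketch: estimate credit spread duration by regressing credit-specific
# returns on changes in the credit spread.
import numpy as np

def credit_spread_duration(excess_returns, spread_changes):
    """OLS slope of credit returns on spread changes, sign-flipped."""
    x = np.asarray(spread_changes)
    y = np.asarray(excess_returns)
    beta = np.cov(y, x, ddof=1)[0, 1] / np.var(x, ddof=1)
    # Returns fall when spreads widen, so duration is the negated slope.
    return -beta
```

As a sanity check: a series whose returns are exactly -5x its spread changes has a credit spread duration of 5.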

Source: Federal Reserve of St. Louis; Bloomberg.  Calculations by Newfound Research.

The results are a bit of a head scratcher.  Unlike duration, which typically increases monotonically across maturities, we get a very different effect here.  Aaa credit spread duration is 10.7 today while Caa credit spread duration is 2.8.  How is that possible?  Why is lower-quality credit not more sensitive to credit changes than higher-quality credit?

Here we run into a very interesting empirical result in credit spreads: spread change is proportional to spread level.  Thus, a true “level shift” rarely occurs in the credit space; e.g. a 1bp change in the front-end of the credit spread curve may actually manifest as a 10bp change in the back end.  Therefore, the lower credit spread duration of the back end of the curve is offset by larger changes.

There is some common-sense intuition to this effect.  Credit has a highly non-linear return component: defaults.  If we enter an economic environment where we expect an increase in default rates, it tends to happen in a non-linear fashion across the curve.  To offset the larger increase in defaults in lower quality credit, investors will demand larger corresponding credit spreads.

(Side note: this is why we saw that the Baa–Aaa  spread did not appear to mean-revert as cleanly as the log-difference of spreads did in last week’s commentary, Value and the Credit Spread.)

While our credit spread durations may be correct, we still face a problem: weighting such that each index contributes equal credit spread duration will create an outsized weight to the Caa index.

DTS Scaling

Fortunately, some very smart folks thought about this problem many years ago. Recognizing the stability of relative spread changes, Dor, Dynkin, Hyman, Houweling, van Leeuwen, and Penninga (2007) recommend the measure of duration times spread (“DTS”) for credit risk.

With a more appropriate measure of credit sensitivity, we can now scale our stylized factor portfolio weights such that each position contributes an equal level of DTS.  This will have two effects: (1) the relative weights in the portfolios will change over time, and (2) the notional size of the portfolios will change over time.

We scale each position such that (1) they contribute an equal level of DTS to the portfolio and (2) each leg of the portfolio has a total DTS of 500bps.  The Level trade, therefore, represents a constant 500bps of DTS risk over time, while the Slope and Curvature trades represent 0bps, as the longs and short legs net out.
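The DTS scaling above can be sketched as follows.  The function name, the per-leg allocation scheme, and the example durations and spreads are illustrative assumptions; the mechanics – sizing each position as a DTS target divided by its own DTS – follow the text.

```python
# Sketch: scale each position so it contributes an equal share of a
# fixed per-leg DTS budget.  Illustrative numbers only.
import numpy as np
import pandas as pd

def dts_scaled_weights(signs: pd.Series, spread_dur: pd.Series,
                       spreads: pd.Series, leg_dts: float = 0.05) -> pd.Series:
    """Notional weights so each name contributes leg_dts / n_leg of DTS."""
    dts = spread_dur * spreads              # DTS per unit of notional
    n_long = (signs > 0).sum()
    n_short = (signs < 0).sum()
    per_name = signs.map(lambda s: leg_dts / n_long if s > 0
                         else (-leg_dts / n_short if s < 0 else 0.0))
    return per_name / dts                   # notional = DTS target / DTS per unit
```

Because tight-spread, high-quality names have low DTS per unit of notional, this scaling naturally produces the large high-quality notionals (and time-varying gross exposure) described below.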

One problem still remains: interest rate risk.  As we plotted earlier in this piece, the credit indices have time-varying – and sometimes substantial – interest rate exposure.  This creates an unintended bet within our portfolios.

Fortunately, unlike the credit curve, true level shift does empirically apply in the Treasury yield curve.  Therefore, to simplify matters, we construct a 5-year zero-coupon bond, which provides us with a constant duration instrument.  At each point in time, we calculate the net duration of our credit trades and use the 5-year ZCB to neutralize the interest rate risk.  For example, if the Level portfolio has a duration of 1, we would take a -20% notional position in the 5-year ZCB.
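The hedge sizing in the example above is simple arithmetic: divide the trade's net duration by the duration of the hedging instrument (a 5-year zero-coupon bond has a duration of 5).

```python
# Sketch: neutralizing residual interest rate risk with a 5-year ZCB.
zcb_duration = 5.0          # a zero-coupon bond's duration equals its maturity
portfolio_duration = 1.0    # net duration of the Level trade (example from text)

# Short enough ZCB notional to cancel the portfolio's duration.
hedge_notional = -portfolio_duration / zcb_duration   # -20% notional
```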

Source: Federal Reserve of St. Louis; Bloomberg.  Calculations by Newfound Research.

Some things we note when evaluating the portfolios over time:

  • In all three portfolios, notional exposure to higher credit qualities is substantially larger than lower credit qualities. This captures the meaningfully higher exposure that lower credit quality indices have to credit risk than higher quality indices.
  • The total notional exposure of each portfolio varies dramatically over time as market regimes change. In tight spread environments, DTS is low, and therefore notional exposures increase. In wide spread environments – like 2008 – DTS levels expand dramatically and therefore only a little exposure is necessary to achieve the same risk target.
  • 2014 highlights a potential problem with our approach: as Aaa spreads reached just 5bps, DTS dipped as low as 41bps, causing a significant swing in notional exposure to maintain the same DTS contribution.

Conclusion

The fruit of all our labor is the graph plotted below, which shows the growth of $1 in our constant-DTS, stylized credit factor portfolios.

What can we see?

First and foremost, constant credit exposure provided little in the way of premium for most of the last 25 years.  It would appear that investors did not demand a high enough premium for the risks that were realized over the period, which include the 1998 LTCM blow-up, the burst of the dot-com bubble, and the 2008 recession.

From 12/31/2008 lows through Q1 2019, however, a constant 500bps DTS exposure generated a 2.0% annualized return with 2.4% annualized volatility, reflecting a nice annual premium for investors willing to bear the credit risk.

Slope captures the high-versus-low-quality trade.  We can see that junk meaningfully out-performed quality in the 1990s, after which there really did not appear to be a meaningful difference in performance until 2013 when oil prices plummeted and high yield bond prices collapsed.  This result does highlight a potential problem in our analysis: the difference in sector composition of the underlying indices. High yield bonds had an outsized reaction compared to higher quality investment grade credit due to more substantial exposure to the energy sector, leading to a lop-sided reaction.

What is also interesting about the Slope trade is that the market did not seem to price a meaningful premium for holding low-quality credit over high-quality credit.

Finally, we can see that the Curvature (“barbell versus belly”) trade was rather profitable for the first decade, before deflating pre-2008 and going on a mostly-random walk ever since.  However, as mentioned when the curvature trade was initially introduced, the 4th factor in our decomposition also appeared to reflect a similar trade, but shorts Aa and Caa versus a long position in A and Baa.  This trade has been a fairly consistent money-loser since the early 2000s, indicating that a barbell of high quality (just not Aaa) and junk might do better than the belly of the curve.

It is worth pointing out that these trades represent a significant amount of compounding estimation – from duration-matching Treasury rates to credit spread durations – which also means a significant risk of compounding estimation error.  Nevertheless, we believe there are a few takeaways worth exploring further:

  • The Level trade appears highly regime dependent (in positive and negative economic environments), suggesting a potential opportunity for on/off credit trades.
  • The 4th factor is a consistent loser, suggesting a potential structural tilt that can be made by investors by holding quality and junk (e.g. QLTA + HYG) rather than the belly of the curve (LQD).  Implementing this in a long-only fashion would require more substantial analysis of duration trade-offs, as well as a better intuition as to why the returns are emerging as they are.
  • Finally, a recognition that maintaining a constant credit risk level requires reducing notional exposure as spreads widen, as spread changes are proportional to spread levels. This is an important consideration for strategic asset allocation.

 
