The Research Library of Newfound Research

Category: Risk & Style Premia

Anatomy of a Bull Market

This blog post is available for download as a PDF here.

Summary

  • Long-term average stock returns smooth over the bull and bear markets that investors experience, and no two market cycles ever unfold the exact same way. Bull and bear markets can vary significantly in both duration and magnitude.
  • But there are other characteristics of bull markets that can also differ in meaningful ways, such as velocity, sources of return, and investor experience.
  • When it comes to analyzing bull markets, inflation, interest rates, equity valuations, earnings, and dividends all play a part.
  • Assessing the current economic environment in the context of historical U.S. and international bull markets can help set better expectations and reduce the risk of surprises that can lead to emotional decisions.

A few days back, we found this “History of U.S. Bear & Bull Markets Since 1926” one-pager from First Trust.  In our opinion, the graph is a nice visualization of market expansions and contractions over the last 90 years.

We’ve recreated the graph below. There are some slight differences in what we show vs. the First Trust data since we use a different data source[1] and stick to monthly data.  We also go back to the beginning of the first bull market of the 20th century.

Over the period from 1903 to 2016, there were 12 bull markets in the S&P 500.  The average bull market lasted 8.1 years with a total return of 387%.  The average bear market lasted 1.5 years with a total loss of 35%.

The current bull market, which began in March 2009, is the 7th longest and the 6th strongest.  For it to be the longest ever, it would have to continue through the fourth quarter of 2023.  For it to be the largest ever, the S&P would have to return another 665%.

Data Source: Robert Shiller’s data library. Calculations by Newfound Research. Bull markets are defined from the lowest close reached after the market has fallen 20% or more to the next market high.  Bear markets are defined from the last market high prior to the market closing down at least 20% to the lowest close after it’s down 20% or more.  Monthly data is used to make these calculations.  Past performance does not guarantee future results. 
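As a sketch, the cycle-labeling rule in the note above can be operationalized on monthly closes roughly as follows. This is a simplified version — a regime flips once price moves 20% away from its running extreme, and the prior cycle is dated back to that extreme — not the post's exact code, and the price series here is synthetic:

```python
import pandas as pd

def label_market_cycles(prices: pd.Series, threshold: float = 0.20):
    """Split monthly closes into bull/bear cycles with a 20% rule.

    Simplified sketch of the definition in the note above: a regime
    flips once price moves `threshold` away from its running extreme;
    the prior cycle is then dated from its start to that extreme.
    Returns a list of (kind, start, end) index labels.
    """
    cycles = []
    state = "bull"                      # assume the series opens in a bull
    start = extreme = prices.index[0]
    for ts in prices.index:
        p, ext = prices[ts], prices[extreme]
        if state == "bull":
            if p >= ext:
                extreme = ts            # new bull-market high
            elif p <= ext * (1 - threshold):
                cycles.append(("bull", start, extreme))
                state, start, extreme = "bear", extreme, ts
        else:
            if p <= ext:
                extreme = ts            # new bear-market low
            elif p >= ext * (1 + threshold):
                cycles.append(("bear", start, extreme))
                state, start, extreme = "bull", extreme, ts
    cycles.append((state, start, prices.index[-1]))   # ongoing cycle
    return cycles

# Tiny synthetic example: rally, 33% crash, recovery
idx = pd.period_range("2000-01", periods=9, freq="M")
px = pd.Series([100, 110, 120, 100, 84, 80, 100, 110, 125], index=idx)
for kind, s, e in label_market_cycles(px):
    print(kind, s, e, round(px[e] / px[s] - 1, 2))
```

Note that a bear market's low is only known in hindsight, which is why each cycle is dated retroactively once the next regime is confirmed.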

While this analysis is informative, it’s still an incomplete picture of the anatomy of bull (and bear) markets.  Below, we will examine this same data from four other perspectives:

  1. Velocity: How fast do bull and bear markets unfold?
  2. Sources of return: How much of bull market returns are composed of inflation? Dividend yield?  Earnings growth?  Valuation changes?
  3. Experience: What was the experience of an investor using a balanced 50/50 asset allocation during these bull and bear markets?
  4. Context: How does the experience of bull and bear markets in the U.S. compare to other markets around the world?

 

Velocity: How fast do bull and bear markets unfold?

More often than not, market cycle analysis focuses on duration and magnitude.  We can change the focus to velocity by graphing the annualized return experienced in each bull and bear market.

Data Source: Robert Shiller’s data library.  Calculations by Newfound Research. Bull markets are defined from the lowest close reached after the market has fallen 20% or more to the next market high.  Bear markets are defined from the last market high prior to the market closing down at least 20% to the lowest close after it’s down 20% or more.  Monthly data is used to make these calculations.  Returns are not annualized for market cycles that lasted less than one year.  Past performance does not guarantee future results. 
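The velocity measure — including the footnote's rule of leaving cycles shorter than a year unannualized — can be sketched in a few lines. The input figures below are the averages quoted in the text, used purely for illustration:

```python
def cycle_velocity(total_return: float, years: float) -> float:
    """Annualized (geometric) return of a market cycle; per the note
    above, cycles shorter than one year are left unannualized."""
    if years < 1:
        return total_return
    return (1 + total_return) ** (1 / years) - 1

# The average bull market from the text: +387% over 8.1 years
print(round(cycle_velocity(3.87, 8.1), 3))
```

(Annualizing the *average* cycle gives a slightly different number than averaging each cycle's annualized return, which is what the chart reports.)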

This snapshot highlights three important characteristics of the historical behavior of U.S. equity markets.

First, we don’t experience the average.  Over the 113+ year period we considered, the U.S. equity market returned an annualized 9.8%.  Yet, the path of returns has been defined by thrilling bull markets and crushing bear markets.

Consider this: since 1903, there has not been a market cycle with a single digit annualized return.

Ten of the twelve bull markets had annualized gains greater than 15%.  Similarly, annualized losses exceeded 15% in ten of the eleven bear markets.

Second, bear markets typically unfold more rapidly than bull markets.  The average annualized returns for bull and bear markets are 19% and -25%, respectively.

Third, the current bull market is slow by historical standards.  It ranks 17th in velocity out of the 23 market cycles that we studied. This same phenomenon occurred in the bull market that followed the Great Depression, the only bear market that was more severe than the Financial Crisis. 

 

Sources of Return: How much of a given bull market can be attributed to inflation?  Dividend yield?  Earnings growth?  Valuation changes?  

Equity returns can be decomposed into four components:

  • Inflation
  • Dividend Yield
  • Earnings Growth
  • Valuation Changes

Using this framework, it quickly becomes clear that not all bull markets are created equal.
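A multiplicative version of this four-part decomposition can be sketched as follows. The inputs are hypothetical, and the exact attribution methodology behind the charts is not published in the post — this is one standard way to do the split with Shiller-style data:

```python
def decompose_bull_market(cpi0, cpi1, eps0, eps1, pe0, pe1, total_return):
    """Multiplicative split of a cycle's total return into inflation,
    real earnings growth, valuation change, and a dividend residual.

    Price = CPI * real EPS * (P/E), so the price return factors
    cleanly; whatever total return remains is attributed to dividends.
    This is one standard way to do the split -- the post's exact
    attribution methodology is not published.
    """
    inflation = cpi1 / cpi0
    real_earnings = (eps1 / cpi1) / (eps0 / cpi0)
    valuation = pe1 / pe0
    price_factor = inflation * real_earnings * valuation
    dividends = (1 + total_return) / price_factor
    return {"inflation": inflation - 1,
            "earnings_growth": real_earnings - 1,
            "valuation_change": valuation - 1,
            "dividend_yield": dividends - 1}

# Hypothetical cycle: CPI 100 -> 110, EPS 50 -> 66, P/E 12 -> 18, +160% total
parts = decompose_bull_market(100, 110, 50, 66, 12, 18, 1.60)
print({k: round(v, 3) for k, v in parts.items()})
```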



Data Source: Robert Shiller’s data library.  Calculations by Newfound Research. Bull markets are defined from the lowest close reached after the market has fallen 20% or more to the next market high.  Bear markets are defined from the last market high prior to the market closing down at least 20% to the lowest close after it’s down 20% or more.  Monthly data is used to make these calculations. Past performance does not guarantee future results. 

For example, the bull market of the 70s and 80s was driven largely by inflation.  On a nominal, or pre-inflation, basis, this was the second largest bull market of all time.  On a real, or post-inflation, basis, however, it drops to just the fifth largest.

The two most recent bull markets are each unique in their own right.

The pre-global financial crisis bull market – lasting from February 2003 to October 2007 – had the largest share of return driven by earnings growth at just north of 30%.  The current bull market is only the second instance of a large (greater than 100%) bull market where more than half the gains have come from expanding valuation multiples.

The contribution from valuation expansion is larger than even the buildup of the tech bubble.

Going beyond headline shock and awe, however, we recognize that classifying all valuation changes into a single bucket is probably painting with too broad of a brush.  Valuations returning to normal after a market crash is not the same as valuations expanding from historical averages to all-time highs.  We can address this by modifying the previous graphic.  Specifically, we break the “Valuation Changes” category into two parts[2]:

  • “Valuation Normalization”: Valuations increasing from historically low levels to the long-term median.
  • “Valuation Expansion”: Valuations increasing from the long-term median to higher levels.
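The split described in the two bullets above can be sketched as a function of the starting and ending multiples versus the long-term median. The numbers here are hypothetical, and the sketch assumes a bull market where the multiple rises over the cycle:

```python
def split_valuation_change(pe_start, pe_end, pe_median):
    """Split a cycle's (multiplicative) valuation change into
    'normalization' (recovering up to the long-term median) and
    'expansion' (rising beyond it).  The two factors multiply back
    to the full change, pe_end / pe_start.  Assumes pe_end > pe_start.
    """
    if pe_start >= pe_median:    # started at/above median: all expansion
        return 1.0, pe_end / pe_start
    if pe_end <= pe_median:      # never reached the median: all normalization
        return pe_end / pe_start, 1.0
    return pe_median / pe_start, pe_end / pe_median

# E.g. a multiple rising from 13x to 27x against a long-term median of 16x
norm, expn = split_valuation_change(13, 27, 16)
print(round(norm - 1, 3), round(expn - 1, 3))
```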

When all valuation changes are lumped together, the five most valuation-centric bull markets of the nine in the graphic are:

  1. August 1921 to September 1929 (79%)
  2. March 2009 to December 2016 (59%)
  3. December 1987 to August 2000 (53%)
  4. June 1932 to May 1946 (48%)
  5. February 2003 to October 2007 (37%)

When we focus, however, on only “Valuation Expansion,” the top five changes to:

  1. December 1987 to August 2000 (43%)
  2. February 2003 to October 2007 (37%)
  3. June 1962 to December 1968 (32%)
  4. August 1921 to September 1929 (32%)
  5. March 2009 to December 2016 (27%)

When we ignore “Valuation Normalization,” the current bull market drops from the 2nd most valuation-centric to the 5th most valuation-centric.  The majority of the valuation gains in this cycle were the result of the recovery from the bottom of the financial crisis.



Data Source: Robert Shiller’s data library.  Calculations by Newfound Research. Bull markets are defined from the lowest close reached after the market has fallen 20% or more to the next market high.  Bear markets are defined from the last market high prior to the market closing down at least 20% to the lowest close after it’s down 20% or more.  Monthly data is used to make these calculations. Past performance does not guarantee future results. 

 

Experience: How did balanced investors fare during historical equity bull markets?

Many investors do not hold 100% stock allocations.  As a result, their experience during equity bull markets will also depend on bond returns.  The chart below shows the upside capture for a 50/50 stock/bond investor during the twelve equity bull markets since 1903.

Data Source: Robert Shiller’s data library.  Calculations by Newfound Research. Bull markets are defined from the lowest close reached after the market has fallen 20% or more to the next market high.  Bear markets are defined from the last market high prior to the market closing down at least 20% to the lowest close after it’s down 20% or more.  Monthly data is used to make these calculations.  Returns are not annualized for market cycles that lasted less than one year.  The 50% bond allocation is a hypothetical index created using the interest rate data from Shiller’s data library.  The balanced portfolio is rebalanced annually.  Past performance does not guarantee future results.
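Upside capture is simply the balanced portfolio's total cycle return divided by the all-equity return over the same cycle. A minimal sketch with hypothetical annual returns (the post uses Shiller's actual stock and interest rate data):

```python
def balanced_cycle_return(stock_rets, bond_rets, w_stock=0.5):
    """Total return of an annually rebalanced stock/bond mix over a
    cycle, given per-year returns for each sleeve."""
    growth = 1.0
    for rs, rb in zip(stock_rets, bond_rets):
        growth *= 1 + w_stock * rs + (1 - w_stock) * rb
    return growth - 1

def up_capture(portfolio_return, benchmark_return):
    """Upside capture: the portfolio's cycle return as a fraction of
    the all-equity benchmark's cycle return."""
    return portfolio_return / benchmark_return

# Hypothetical 3-year bull market: stocks +20%/yr, bonds +4%/yr
bal = balanced_cycle_return([0.20] * 3, [0.04] * 3)
eq = 1.20 ** 3 - 1
print(round(bal, 3), round(eq, 3), round(up_capture(bal, eq), 3))
```

Note that the annual rebalance matters: compounding the blended return each year is not the same as blending the two sleeves' compounded returns.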

Despite the continued secular decline in interest rates, the last two bull markets (February 2003 to October 2007 and March 2009 to December 2016) have actually been below average for balanced investors.

Why?  Because the relative performance of a balanced investor vs. a stock investor depends not only on the path of interest rates (i.e. whether rates increase or decrease), but also on the average interest rate over the period.

For ideal bull market up capture, balanced investors should hope for high and declining interest rates.  Recently, we’ve had the latter, but not the former.

Data Source: Robert Shiller’s data library.  Calculations by Newfound Research. Bull markets are defined from the lowest close reached after the market has fallen 20% or more to the next market high.  Bear markets are defined from the last market high prior to the market closing down at least 20% to the lowest close after it’s down 20% or more.  Monthly data is used to make these calculations.  Returns are not annualized for market cycles that lasted less than one year.  The 50% bond allocation is a hypothetical index created using the interest rate data from Shiller’s data library.  The balanced portfolio is rebalanced annually.  Past performance does not guarantee future results.

Going forward, we may move toward the bottom right-hand corner, which has historically had the lowest up-capture.

 

Context: How does the experience of bull and bear markets in the U.S. compare to other markets around the world?  

In the following pages, we recreate the First Trust graph for Japan, the United Kingdom, Europe ex-UK, and Asia ex-Japan.

Looking beyond the United States can be a useful reminder that the future behavior of the S&P 500 is not constrained by past experiences.

It’s possible to have larger bull markets than what we have seen in the U.S., as evidenced by the 1970s and 1980s in Japan and the UK.

It’s also possible for bear markets to drag on for years. The longest bear market in the U.S. since 1903 lasted slightly less than three years.  Japan, on the other hand, saw a 20+ year bear market that lasted the entirety of the 1990s and 2000s.

Data Source: MSCI.  Calculations by Newfound Research. Bull markets are defined from the lowest close reached after the market has fallen 20% or more to the next market high.  Bear markets are defined from the last market high prior to the market closing down at least 20% to the lowest close after it’s down 20% or more.  Monthly data is used to make these calculations.  Past performance does not guarantee future results. 

Data Source: MSCI.  Calculations by Newfound Research. Bull markets are defined from the lowest close reached after the market has fallen 20% or more to the next market high.  Bear markets are defined from the last market high prior to the market closing down at least 20% to the lowest close after it’s down 20% or more.  Monthly data is used to make these calculations.  Past performance does not guarantee future results. 

Data Source: MSCI.  Calculations by Newfound Research. Bull markets are defined from the lowest close reached after the market has fallen 20% or more to the next market high.  Bear markets are defined from the last market high prior to the market closing down at least 20% to the lowest close after it’s down 20% or more.  Monthly data is used to make these calculations.  Past performance does not guarantee future results. 

Data Source: MSCI.  Calculations by Newfound Research. Bull markets are defined from the lowest close reached after the market has fallen 20% or more to the next market high.  Bear markets are defined from the last market high prior to the market closing down at least 20% to the lowest close after it’s down 20% or more.  Monthly data is used to make these calculations.  Past performance does not guarantee future results. 

Conclusion

While long-term average stock returns have been high, they smooth over the bull and bear markets that investors experience along the way.

These large directional swings have many characteristics that make them unique, including their durations and magnitudes.  Velocity, sources of return, and investor experience have also shown significant variation across market cycles.

This current bull market has been slow by historical standards and has largely been driven by normalization of equity valuations following the financial crisis.  Balanced investors have benefitted from declining interest rates, but saw muted up-capture because rates were declining from an already low level.

Putting the current market environment into context by considering other geographies can lead to a more thorough understanding of how to position our portfolios and develop a plan that can be adhered to regardless of how a given market cycle unfolds.

 

[1] We use data from Robert Shiller’s website.  This data was used in Shiller’s book, Irrational Exuberance.  Shiller presents monthly data.  Prior to January 2000, price data is the average of the S&P 500’s (or a predecessor’s) daily closes for that month.

[2] To avoid hindsight bias when calculating the historical median, we used rolling 50-year periods.

Capital Efficiency in Multi-factor Portfolios

This blog post is available as a PDF here.

Summary

  • The debate over the best way to build a multi-factor portfolio – mixed or integrated – rages on.
  • Last week we explored whether the argument held that integrated portfolios are more capital efficient than mixed portfolios in realized return data for several multi-factor ETFs.
  • This week we explore whether integrated portfolios are more capital efficient than mixed portfolios in theory.  We find that for some broad assumptions, they definitively are.
  • We find that for specific implementations, mixed portfolios can be more efficient, but it requires a higher degree of concentration in security selection.

This commentary is highly technical, relying on both probability theory and calculus, and requires rendering a significant number of equations.  Therefore, it is only available as a PDF download.

For those less inclined to read through mathematical proofs, the important takeaway is this: for some broad assumptions, integrated multi-factor portfolios are provably more capital efficient (e.g. more factor exposure for your dollar) than mixed approaches.

Is That Leverage in My Multi-Factor ETF?

This blog post is available as a PDF here.

Summary

  • The debate over the best way to build a multi-factor portfolio – mixed or integrated – rages on.
  • FTSE Russell published a video supporting their choice of an integrated approach, arguing that by using the same dollar to target multiple factors at once, their portfolio makes more efficient use of capital than a mixed approach.
  • We decompose the returns of several mixed and integrated multi-factor portfolios and find that integrated portfolios do not necessarily create more capital efficient allocations to factor exposures than their mixed peers.

 

A colleague sent us a video this week from FTSE Russell, titled Factor Indexing: Avoiding exposure to nothing.

In the video, FTSE Russell outlines their argument for why they prefer an integrated – or composite – multi-factor index construction methodology over a mixed one.

As a reminder, a mixed approach is one in which a portfolio is built for each factor individually, and those portfolios are combined as sleeves to create a multi-factor portfolio.  An integrated approach is one in which securities are selected that have high scores across multiple factors, simultaneously.

The primary argument held forth by integration advocates is that in a mixed approach, securities selected for one factor may have negative loadings on another, effectively diluting factor exposures.

For example, the momentum stock sleeve in a mixed approach may, unintentionally, have a negative loading on the value factor.  So, when combined with the value sleeve, it dilutes the portfolio’s overall value exposure.

This is a topic we’ve written about many, many times before, and we think the argument ignores a few key points.

FTSE Russell did, however, put forth an interesting new argument.  The argument was this: an integrated approach is more capital efficient because the same dollar can be utilized for exposure to multiple factors.

 

$1, Two Exposures

To explain what FTSE Russell means, we’ll use a very simple example.

Consider the recently launched REX Gold Hedged S&P 500 ETF (GHS) from REX Shares.  The idea behind this ETF is to provide more capital efficient exposure to gold for investors.

Previously, to include gold, most retail investors would have to explicitly carve out a slice of their portfolio and allocate to a gold fund.  So, for example, an investor who held 100% in the SPDR S&P 500 ETF (“SPY”) could carve out 5% and buy the SPDR Gold Trust ETF (“GLD”).

The “problem” with this approach is that while it introduces gold, it also dilutes our equity exposure.

GHS overlays the equity exposure with gold futures, providing exposure to both.  So now instead of carving out 5% for GLD, an investor can carve out 5% for GHS.  In theory, they retain their 100% notional exposure to the S&P 500, but get an additional 5% exposure to gold (well, gold futures, at least).

So does it work?

One way to check is by trying to regress the returns of GHS onto the returns of SPY and GLD.  In effect, this tries to find the portfolio of SPY and GLD that best explains the returns of GHS.
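A sketch of that regression using numpy's least squares. The return series here are simulated stand-ins with the loadings baked in — the post uses actual GHS, SPY, and GLD returns from Yahoo! Finance:

```python
import numpy as np

# Regress fund returns on candidate exposures to recover implied weights.
# The series below are simulated stand-ins with the loadings baked in --
# the post uses actual GHS, SPY, and GLD returns from Yahoo! Finance.
rng = np.random.default_rng(0)
n = 250
spy = rng.normal(0.0004, 0.010, n)                        # "SPY" daily returns
gld = rng.normal(0.0002, 0.009, n)                        # "GLD" daily returns
ghs = 0.75 * spy + 0.88 * gld + rng.normal(0, 0.001, n)   # "GHS" plus noise

X = np.column_stack([np.ones(n), spy, gld])               # intercept, SPY, GLD
beta, *_ = np.linalg.lstsq(X, ghs, rcond=None)
print(np.round(beta[1:], 2))        # estimated units of SPY and GLD
print(round(beta[1] + beta[2], 2))  # total notional exposure per $1
```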

[Figure: GHS factor regression results]

Source: Yahoo! Finance.  Calculations by Newfound Research.

 

What we see is that the portfolio that best describes the returns of GHS is 0.75 units of SPY and 0.88 units of GLD.

So not necessarily the perfect 1:1 we were hoping for, but a single dollar invested in GHS is like having a $1.63 portfolio in SPY and GLD.

Note: This is the same math that goes into currency-hedged equity portfolios, which is why we do not generally advocate using them unless you have a view on the currency.  For example, $1 invested in a currency-hedged European equity ETF is effectively the same as having $1 invested in un-hedged European equities and shorting $1 notional exposure in EURUSD.  You’re effectively layering a second, highly volatile, bet on top of your existing equity exposure.

This is the argument that FTSE Russell is making for an integrated approach.  By looking for stocks that have simultaneously strong exposure to multiple factors at once, the same dollar can tap into multiple excess return streams.  Furthermore, theoretically, the more factors included in a mixed portfolio, the less capital efficient it becomes.

Does it hold true, though?

 

The Capital Efficiency of Mixed and Integrated Multi-Factor Approaches 

Fortunately, there is a reasonably easy way to test the veracity of this claim: run the same regression we did on GHS, but on multi-factor ETFs using a variety of explanatory factor indices.

Here is a quick outline of the Factors we will utilize:

Factor      | Source      | Description
Market – RF | Fama/French | Total U.S. stock market return, minus t-bills
HML Devil   | AQR         | Value premium
SMB         | Fama/French | Small-cap premium
UMD         | AQR         | Momentum premium
QMJ         | AQR         | Quality premium
BAB         | AQR         | Anti-beta premium
LV-HB       | Newfound    | Low-volatility premium

Note: Academics and practitioners have yet to settle on whether there is an anti-beta premium (where stocks with low betas outperform those with high betas) or a low-volatility premium (where stocks with low volatilities outperform those with high volatilities).   While similar, these are different factors.  However, as far as we are aware, there are no reported long-short low-volatility factors that are publicly available.  We did our best to construct one using a portfolio that is long one share of SPLV and short one share of SPHB, rebalanced monthly.

We will test a number of mixed-approach ETFs and a number of integrated-approach ETFs as well.

Of those in the mixed group, we will use Global X’s Scientific Beta U.S. ETF (“SCIU”) and Goldman Sachs’ ActiveBeta US Equity ETF (“GSLC”).

In the integrated group, we will use John Hancock’s Multifactor Large Cap ETF (“JHML”), JPMorgan’s Diversified Return US Equity ETF (“JPUS”), iShares’ Edge MSCI Multifactor USA ETF (“LRGF”), and FlexShares’ Morningstar U.S. Market Factor Tilt ETF (“TILT”).

We’ll also show the factor loadings for the SPDR S&P 500 ETF (“SPY”).

If the argument from FTSE Russell holds true, we would expect to see that the factor loadings for the mixed approach portfolios should be significantly lower than the integrated approach portfolios.  Since SCIU and GSLC each target four unique factors under the hood, and NFFPI targets five, we would expect their loadings to be 1/5th to 1/4th of those found in the integrated approaches.

The results:

[Figure: Multi-factor ETF factor loadings]

Source: AQR, Kenneth French Data Library, and Yahoo! Finance.  Calculations by Newfound Research.

 

Before we dig into these, it is worth pointing out two things:

  • Factor loadings should be thought of on both an absolute and a relative basis. For example, while GSLC has almost no loading on the size premium (SMB), the S&P 500 has a negative loading on that factor.  So compared to the large-cap benchmark, GSLC has a significantly higher relative loading.
  • Not all of these loadings are statistically significant at a 95% level.

So do integrated approaches actually create more internal leverage?  Let’s look at the total notional factor exposure for each ETF:

[Figure: Total notional factor exposure by ETF]

Source: AQR, Kenneth French Data Library, and Yahoo! Finance.  Calculations by Newfound Research.

 

It does, indeed, look like the integrated approaches have more absolute notional factor exposure.  Only SCIU appears to keep up – and it was the mixed ETF that had the most statistically non-significant loadings!

But, digging deeper, we see that not all factor exposure is good factor exposure.  For example, JPUS has significantly negative loadings on UMD and QMJ, which we would expect to be a performance drag.
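The distinction between gross notional exposure and the net sum of loadings can be sketched in a few lines. The loadings here are hypothetical, chosen only to illustrate how a negative momentum or quality loading inflates the first measure while dragging down the second:

```python
# Two ways to aggregate a fund's estimated factor loadings: gross
# notional exposure counts any loading, positive or negative, while the
# net sum penalizes negative loadings on rewarded factors.
loadings = {"HML": 0.15, "SMB": 0.10, "UMD": -0.20,      # hypothetical
            "QMJ": -0.05, "BAB": 0.25}                   # loadings only
gross = sum(abs(b) for b in loadings.values())           # "notional" exposure
net = sum(loadings.values())                             # net exposure
print(round(gross, 2), round(net, 2))
```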

Looking at the sum of factor exposures, we get a different picture.

[Figure: Total (net) factor exposure by ETF]

Source: AQR, Kenneth French Data Library, and Yahoo! Finance.  Calculations by Newfound Research.

 

Suddenly the picture is not so clear.  Only TILT seems to be the runaway winner, and that may be because it holds a simpler multi-factor mandate of only small-cap and value tilts.

 

Conclusion

The theory behind FTSE Russell’s preference for an integrated multi-factor approach makes sense: by trying to target multiple factors with the same stock, we can theoretically create implicit leverage with our money.

Unfortunately, this theory did not hold up in the numbers.

Why?  We believe there are two potential reasons.

  • First, selecting for a factor in a mixed approach does not mean avoiding other factors. For example, while unintentional, a sleeve selecting for value could contain a small-cap bias or a quality bias.
  • In an integrated approach, preferring securities with high loadings on multiple factors simultaneously may avoid securities with extremely high factor loadings on a single factor. This may create a dilutive effect that offsets the benefit of capital efficiency.

In addition, we have concerns as to whether the integrated approach may degrade some of the very significant diversification benefits that can be harvested by combining factors.

Ultimately, while an interesting theoretical argument, we do not believe that capital efficiency is a justified reason for preferring the opaque complexity of an integrated approach over the simplicity of a mixed one.

 

Client Talking Points

  • At the cutting edge of investment research, there is often disagreement on the best way to build portfolios.
  • While a strongly grounded theoretical argument is necessary, it does not suffice: results must also be evident in empirical data.
  • To date, the argument that an integrated approach of building a multi-factor portfolio is more capital efficient than the simpler mixed approach does not prove out in the data.

Multi-Factor: Mix or Integrate?

This blog post is available as a PDF here.

Summary

  • Recently a paper was published by AQR where the authors advocate for an integrated approach to multi-factor portfolios, preferring securities that exhibit strong characteristics across all desired factors instead of a mixed approach, where securities are selected based upon extreme exposure to a single characteristic.
  • We believe the integrated approach fails to acknowledge the impact of the varying lengths over which different factors mature, ultimately leading to a portfolio more heavily influenced by higher turnover factors.

The Importance of Factor Maturity
Cliff Asness, founder of AQR, recently published a paper titled My Factor Philippic.  This paper was written in response to the recently popularized article How Can “Smart Beta” Go Horribly Wrong? which was co-authored by Robert Arnott, co-founder of Research Affiliates.

Arnott argues that many popular factors are currently historically overvalued and, furthermore, that the historical excess return offered by some recently popularized factors can be entirely explained by rising valuation trends in the last 30 years.
Caveat emptor, warns Arnott: valuations always matter.

Much to our delight (after all, who doesn’t like to see two titans of industry go at it?), Asness disagrees.

One of the primary arguments laid out by Asness is that valuation is a meaningless predictor for factors with high turnover.

The intuition behind this argument is simple: while valuations may be a decent predictor of forward annualized returns for broad markets over the next 5-to-10 years, the approach only works because the basket of securities stays mostly constant.  For example, valuations for U.S. equities may be a good predictor because we expect the vast majority of the basket of U.S. equities to stay constant over the next 5-to-10 years.

The same is not true for many factors.  For example, let’s consider a high turnover factor like momentum.

Valuations of a momentum basket today are a poor predictor of annualized returns of a momentum strategy over the next 5-to-10 years because the basket of securities held could be 100% different three months from now.

Unless the same securities are held in the basket, valuation headwinds or tailwinds will not necessarily be realized.

For the same reason, valuation is also poor as an explanatory variable of factor returns.  Asness argues that Arnott’s warning of valuation being the secret driver of factor returns is unwarranted in high turnover factors.

Multi-Factor: Mix or Integrate?
On July 2nd, Fitzgibbons, Friedman, Pomorski, and Serban (FFPS) – again from AQR – published a paper titled Long-Only Style Investing: Don’t Just Mix, Integrate.  

The paper attempts to conclude the current debate about the best way to build multi-factor portfolios.  The first approach is to mix, where a portfolio is built by combining stand-alone factor portfolios.  The second approach is to integrate, where a portfolio is built by selecting securities that have simultaneously strong exposure to multiple factors at once.

A figure from the paper does a good job of illustrating the difference.  Below, a hypothetical set of stocks is plotted based upon their current valuation and momentum characteristics.

[Figure: AQR paper scatter plots of the four portfolio construction approaches]

In the top left, a portfolio of deep value stocks is selected.  In the top right, the mix approach is demonstrated, where the deepest value and the highest momentum stocks are selected.

In the bottom left, the integrated approach is demonstrated, where the securities simultaneously exhibiting strong valuation and momentum characteristics are selected.

Finally, in the bottom right we can see how these two approaches differ: with yellow securities being those only found in the mix portfolio and blue securities being found only in the integrated portfolio.

It is worth noting that the ETF industry has yet to make up its mind on the right approach.

GlobalX and Goldman Sachs prefer the mix approach in their ETFs (SCIU / GSLC) while JPMorgan and iShares prefer the integrate approach (JPUS / LRGF).

The argument made by those taking the integrated approach is that they are looking for securities with well-rounded exposures rather than those with extreme singular exposures.  Integrators argue that this approach helps them avoid holding securities that might cancel each other out.  If we look back towards the mix example above (top right), we can see that many securities selected due to strength in one factor are actually quite poor in the other.

Integrators claim that this inefficiency can create a drag in the mix portfolio.  Why hold something with strong momentum if it has a very poor valuation score that is only going to offset it?

We find it somewhat ironic that FFPS and Asness both publish for AQR, because we would argue that Asness’s argument points out the fundamental flaw in the theory outlined by integrators.  Namely: the horizons over which the premia mature differ.

Therefore, a strong positive loading in a factor like momentum is not necessarily offset by a strong negative loading in a factor like value.  Furthermore, by integrating we run the risk of the highest turnover factor actually dominating the integrated selection process.

Data
In the rest of this commentary, we will be using industry data from the Kenneth French data library.  For momentum scores, we calculate trailing 12-1 month total returns (the twelve-month return, skipping the most recent month) and then compute cross-sector z-scores[1].  For valuation scores, we calculate a normalized 5-year dividend yield score and then calculate cross-sector z-scores.[2]
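The cross-sector z-score step can be sketched as follows. The industry momentum figures here are hypothetical placeholders, not the French data library values:

```python
import pandas as pd

def cross_sector_zscores(scores: pd.Series) -> pd.Series:
    """Standardize a raw score (e.g. a trailing return, or a yield-based
    value measure) across sectors at a single point in time."""
    return (scores - scores.mean()) / scores.std(ddof=0)

# Hypothetical trailing momentum returns by industry
mom = pd.Series({"NoDur": 0.08, "Durbl": 0.22, "Manuf": 0.05,
                 "Enrgy": -0.10, "HiTec": 0.15})
z = cross_sector_zscores(mom)
print(z.round(2))
```

By construction, the z-scores at each date have zero mean across sectors, so they measure *relative* momentum or cheapness, exactly as the commentary intends.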

Do Factor Premia Actually Mature at Different Time Periods?
In his paper, Asness referenced the turnover of a factor portfolio as an important variable.  We prefer to think of high turnover factors as factors whose premium matures more quickly.

For example, if we buy a stock because it has high relative momentum, our expectation is that we will likely hold it for longer than a day, but likely much shorter than a year.  Therefore, a strategy built off relative momentum will likely have high turnover because the premium matures quickly.

On the other hand, if we buy a value stock, our expectation is that we will have to hold it for up to several years for valuations to adequately reverse.  This means that the value premium takes longer to mature – and the strategy will likely have lower turnover.

We can see this difference in action by looking at how valuation and momentum scores change over time.

Z-Score Changes NoDur

We see similar pictures for other industries.  Yet, looks can be deceiving and the human brain is excellent at finding patterns where there are none (especially when we want to see those patterns).  Can we actually quantify this difference?

One way is to try to build a model that incorporates both the randomness of movement and how fast these scores mean-revert.  Fitting our data to this model would tell us about how quickly each premium matures.

One such model is called an Ornstein-Uhlenbeck process (“OU process”).  An OU process is described by the following stochastic differential equation:

dz_t = θ(μ − z_t) dt + σ dW_t

where W_t is a standard Brownian motion (the source of the randomness).

To translate this into English using an example: the change in value z-score from one period to the next can be estimated as a “magnetism” back to fair value plus some randomness.  In the equation, theta tells us how strong this magnetism is, mu tells us what fair value is, and sigma tells us how much influence the randomness has.

For our momentum and valuation z-scores, we would expect mu to be near-zero, as over the long-run we would not expect a given sector to exhibit significantly more or less relative momentum or relative cheapness/richness than peer sectors.

Given that we also believe that the momentum premium is realized over a shorter horizon, we would also expect that theta – the strength of the magnetism, also called the speed of mean reversion – will be higher.  Since that strength of magnetism is higher, we will also need sigma – the influence of randomness – to be larger to overcome it.
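To make the role of θ concrete, here is a minimal simulation sketch of a discretized OU process.  The parameter pairs are illustrative caricatures of a fast-reverting momentum score versus a slow-reverting value score, not fitted values:

```python
import math
import random

def simulate_ou(theta, mu, sigma, z0, years=5, steps_per_year=12, seed=0):
    """Euler-Maruyama discretization of dz = theta * (mu - z) dt + sigma dW."""
    rng = random.Random(seed)
    dt = 1.0 / steps_per_year
    z, path = z0, [z0]
    for _ in range(int(years * steps_per_year)):
        # "magnetism" back toward mu, plus scaled Gaussian noise
        z += theta * (mu - z) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(z)
    return path

# Start both scores well above fair value (z = 2) and watch them decay.
fast = simulate_ou(theta=1.10, mu=0.0, sigma=1.36, z0=2.0)  # momentum-like
slow = simulate_ou(theta=0.08, mu=0.0, sigma=0.38, z0=2.0)  # value-like
```

With a high θ, the deviation from μ decays quickly (by roughly a factor of e^(−θ) per year on average); with a low θ, it lingers for years.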

So how do the numbers play out?[3]

For the momentum z-scores:

          Theta     Mu   Sigma
NoDur      0.97   0.02    1.00
Durbl      1.00   0.03    1.63
Manuf      1.22  -0.03    0.96
Enrgy      0.98   0.06    1.69
HiTec      1.04   0.03    1.49
Telcm      1.15  -0.07    1.52
Shops      1.22   0.03    1.24
Hlth       0.84   0.11    1.39
Utils      1.48  -0.09    1.61
Other      1.18  -0.09    1.13
Average    1.10   0.00    1.36

For the valuation z-scores:

          Theta     Mu   Sigma
NoDur      0.11  -0.20    0.34
Durbl      0.08   0.58    0.49
Manuf      0.13   0.01    0.37
Enrgy      0.07   0.19    0.40
HiTec      0.09   0.23    0.33
Telcm      0.07   0.03    0.38
Shops      0.11  -0.15    0.36
Hlth       0.05  -0.47    0.36
Utils      0.06  -0.35    0.40
Other      0.11  -0.01    0.37
Average    0.08  -0.01    0.38

We can see results that echo our expectations: the speed of mean-reversion is significantly lower for value than momentum.  In fact, the average theta for value is less than 1/10th of the average theta for momentum.

The math behind an OU-process also lets us calculate the half-life of the mean-reversion, allowing us to translate the speed of mean reversion to a more interpretable measure: time.
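Concretely, for an OU process the half-life follows from θ alone, measured in the same time units in which θ was estimated:

t½ = ln(2) / θ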

The half-life for momentum z-scores is 0.27 years, or about 3.28 months.  The half-life for valuation z-scores is 3.76 years, or about 45 months.  These values more or less line up with our intuition about turnover in momentum versus value portfolios: we expect to hold momentum stocks for a few months but value stocks for a few years.

Another way to analyze this data is by looking at how long the relative ranking of a given industry group stays consistent in its valuation or momentum metric.  Based upon our data, we find that valuation ranks stayed constant for an average of approximately 120 trading days, while the average length of time an industry group held a consistent momentum rank was only just over 50 days.

Maturity’s Influence on Integration
The scatter plots drawn by FFPS are deceiving because they only show a single point in time.  What they fail to show is how the locations of the dots change over time.

With the expectation that momentum scores will change more rapidly than valuation scores, we would expect to see points move more rapidly up and down along the Y-axis than we would see them move left and right along the X-axis.
Given this, our hypothesis is that changes in our inclusion score are driven more significantly by changes in our momentum score.

To explore this, we create an integration score, which is simply the sum of the valuation and momentum z-scores.  Those industries in the top 30% of integration scores at any time are held by the integrated portfolio.

To distill the overall impact of momentum score changes versus valuation score changes, we need to examine the absolute value of these changes.  For example, if the momentum score change was +0.5 and the valuation score change was -0.5, the overall integration score change is 0.  Both momentum and value, in this case, contributed equally (50% each) to the overall score change.

So a simple formula for measuring the relative percentage contribution to score change is:

Momentum Contribution = |Δ Momentum Score| / (|Δ Momentum Score| + |Δ Valuation Score|)

If value and momentum score changes contributed equally, we would expect the average contribution to equal 50%.
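A minimal sketch of this measure (the function name is ours, not from the original note):

```python
def momentum_contribution(mom_change: float, val_change: float) -> float:
    """Share of an integration-score change attributable to momentum.

    Absolute values are used so that offsetting moves still register:
    a +0.5 momentum change against a -0.5 value change nets to a zero
    integration-score change, yet each factor contributed 50%.
    """
    total = abs(mom_change) + abs(val_change)
    if total == 0.0:
        return 0.5  # neither score moved; call it an even split
    return abs(mom_change) / total
```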

The average contribution based upon our analysis is 72.18% (with a standard error of 0.24%).  The interquartile range is 59.02% to 91.19% and the median value is 79.47%.

Put simply: momentum score changes are a much more significant contributor to integration score changes than valuation score changes are.

We find that this effect is increased when we examine only periods when an industry is added or deleted from the integrated portfolio.  In these periods, the average contribution climbs to 78.46% (with a standard error of 0.69%), with an interquartile range of 70.28% to 94.46% and a median value of 85.57%.

Changes in the momentum score contribute much more significantly than value score changes.

Integration: More Screen than Tilt?
The objective of the integrated portfolio approach is to find securities with the best blend of characteristics.

In reality, because one set of characteristics changes much more slowly, certain securities can be sidelined for prolonged periods of time.

Let’s consider a simplified example.  Every year, the 10 industry groups are assigned a random, but unique, value score between 1 and 10.

Similarly, every month, the 10 industry groups are assigned a random, but unique, momentum score between 1 and 10.

The integration score for each industry group is calculated as the sum of these two scores.  Each month, the top 3 scoring industry groups are held in the integrated portfolio.

What is the probability of an industry group being in the integrated portfolio, in any given month, if it has a value score of 1?  What about 2?  What about 10?
Numerical simulation gives us the following probabilities:

Probability of Inclusion Monthly
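A minimal Monte Carlo sketch of this toy model (tie handling is our assumption, since the setup above doesn't specify it):

```python
import random

def inclusion_probability(value_score, trials=50_000, seed=0):
    """P(an industry lands in the top-3 integration scores | its value score).

    Our industry holds `value_score`; the other nine hold the remaining
    unique value scores from 1-10.  Each trial draws a fresh, unique
    1-10 momentum assignment.  Ties are broken at random (an assumption).
    """
    rng = random.Random(seed)
    other_values = [v for v in range(1, 11) if v != value_score]
    hits = 0
    for _ in range(trials):
        momentum = list(range(1, 11))
        rng.shuffle(momentum)
        scores = [value_score + momentum[0]]                  # our industry
        scores += [v + m for v, m in zip(other_values, momentum[1:])]
        order = sorted(range(10), reverse=True,
                       key=lambda i: (scores[i], rng.random()))
        if 0 in order[:3]:
            hits += 1
    return hits / trials
```

Given a monthly inclusion probability p for a particular value score, the chance of never being selected during that year is (1 − p)^12, since the twelve monthly momentum draws are independent.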

So if these are the probabilities of an industry group being selected in a given month given a certain value score, what is the probability of an industry group not being selected into the integrated portfolio at all during the year it has a given value score?

Probability of Inclusion Annual

If an industry group starts the year with a value score of 1, there is a 99.1% probability that it will never be selected into the integrated portfolio all year.

Conclusion
While we believe this topic deserves a significantly deeper dive (one which we plan to perform over the coming months), this cursory analysis highlights a very important point: an integrated approach runs a significant risk of being more heavily influenced by higher turnover factors.  While FFPS believe there are first order benefits to the integrated approach, we think the jury is still out and that those first order effects may actually be due simply to an increased exposure to higher turnover factors.  Until a more substantial understanding of the integrated approach is established, we continue to believe that a mixed approach is prudent.  After all, if we don’t understand how a portfolio is built and the source of the returns it generates, how can we expect to manage risk?


[1] Z-scoring standardizes, on a relative basis, what would otherwise be arbitrary values.
[2] We use yield versus historical as our measure for valuation as a matter of convenience.  However, there are two theoretical arguments justifying this choice.  First, the most common measure of value is book-to-market (B/M), which assumes that fair valuation of a company is its book value.  Another such model is the dividend discount model.  If we assume a constant growth rate of dividends and a constant cost of capital for the company, then book value should be proportional to total dividends, or, equivalently, book-to-market proportional to dividend yield.  Similarly, if you assume a constant long-term payout ratio, dividends per share are proportional to earnings per share, which makes yield inversely proportional to price-to-earnings, a popular valuation ratio.
[3] We used maximum likelihood estimation to calculate these figures.

A Closer Look At Growth and Value Indices

In a commentary a few weeks ago entitled Growth Is Not “Not Value,” we discussed a problem in the index construction industry in which growth and value are often treated as polar opposites. This treatment can lead to unexpected portfolio holdings in growth and value portfolios. Specifically, we may end up tilting more toward shrinking, expensive companies in both growth and value indices.

The picture of what we are really getting looks like this:

2D Quadrants - What we're really getting

The picture of what we want for each index looks more like this:

2D Quadrants - What we want

The overlap is not a bad thing; it simply acknowledges that a company can be cheap and growing, arguably a very good set of characteristics.

A common way of combining growth and value scores into a single metric is to divide growth ranks by value ranks. As we showed in the previous commentary, many index providers do something similar to this.

Essentially this means that low growth gets lumped in with high value and vice versa.

But how much does this affect the index allocations? Maybe there just are not many companies that get included or excluded based on this process.

Let’s play index provider for a moment.

Using data from Morningstar and Yahoo! Finance at the end of 2015, we can construct growth and value scores for each company in the S&P 500 and see where they fall in the growth/value planes shown above.

To calculate the scores, we will use an approach similar to the one in last commentary where the composite growth score is the average of the normalized scores for EPS growth, sales growth, and ROA, and the composite value score is the average of the normalized scores for P/B, P/S, and P/E ratios.
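A sketch of both scoring schemes follows.  Column names, tickers, and the normalization choice (cross-sectional z-scores) are illustrative assumptions, not the actual Morningstar/Yahoo! Finance fields or index-provider methodology:

```python
import pandas as pd

def composite_scores(df: pd.DataFrame) -> pd.DataFrame:
    """Composite growth and value scores from fundamental columns.

    Higher is better for both composites, so the valuation ratios are
    negated before normalizing (cheap = high value score).
    """
    def z(s: pd.Series) -> pd.Series:
        return (s - s.mean()) / s.std()

    growth = (z(df["eps_growth"]) + z(df["sales_growth"]) + z(df["roa"])) / 3
    value = (z(-df["pb"]) + z(-df["ps"]) + z(-df["pe"])) / 3
    return pd.DataFrame({"growth": growth, "value": value})

def independent_top_third(scores: pd.DataFrame):
    """Independent sort: top third by growth and top third by value (may overlap)."""
    n = max(1, len(scores) // 3)
    return (set(scores["growth"].nlargest(n).index),
            set(scores["value"].nlargest(n).index))

def ratio_top_third(scores: pd.DataFrame):
    """Combined sort: ratio of growth rank to value rank as a single metric."""
    ratio = scores["growth"].rank() / scores["value"].rank()
    n = max(1, len(scores) // 3)
    return (set(ratio.nlargest(n).index),    # high growth rank / low value rank
            set(ratio.nsmallest(n).index))   # low growth rank / high value rank
```

On toy data where one company is both the cheapest and the fastest grower, the independent sort puts it in both indices, while the ratio-of-ranks sort can exclude it from both.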

The chart below shows the classification when we take an independent approach to selecting growth and value companies based on those in the top third of the ranks.

2D Sort Growth and Value

In each class, 87% of the companies were identified as only being growth or value while 13% of companies were included in both growth and value.

The next chart shows the classifications when we use the ratio of growth to value ranks as a composite score and again select the top third.

1D Sort Growth and Value

Relative to what we saw previously, growth and value now extend further into the non-value (expensive) and non-growth (shrinking) realms of the graph, respectively.

There is also no overlap between the two categories, but we are now missing 16% of the companies that we had identified as good growth or value candidates before. On the flip side, 16% of the companies we now include were not identified as growth or value previously in our independent sort.

If we trust our independent growth and value ranking methodologies, the combined growth and value metric leaves out over a third of the companies that were classified as both growth and value. These companies did not appear in either index under the combined scoring scheme.

With the level of diversification in some of these indices, a few companies may not make or break the performance, but leaving out the top ones defeats the purpose of our initial ranking system. As with the NCAA March Madness tournament (won by Corey with a second place finish by Justin), having a high seed may not guarantee superior performance, but it is often a good predictor (since 1979, the champion has only been lower than a 3 seed 5 times).

Based on this analysis, we can borrow the final warning to buyers from the previous commentary:

“when you’re buying value and growth products tracking any of these indices, you’re probably not getting what you expect – or likely want.”

… and say that the words “probably” and “likely” are definitely an understatement for those seeking the best growth and value companies based on this ranking.
