
Summary

  • Diversification is a cornerstone of portfolio construction. It provides investors with the important ability to invest in the face of uncertainty.  Because it can reduce risk without necessarily sacrificing potential reward, it is known as the only free lunch on Wall Street.
  • Yet, we believe that many investors underutilize a less well-known type of diversification: process diversification.
  • When it comes to asset allocation, we regularly run across a “high conviction” bias whereby investors with their own approach to asset allocation are hesitant to incorporate complementary allocation strategies.
  • The reason for this behavior seems to be a fear that adding managers with different approaches to asset allocation will dilute the investor’s own views.
  • We believe that investors should blend a set of complementary, evidence-based asset allocation approaches (e.g. combining a strategic allocation with a momentum-based approach, a value-based approach, and a risk parity strategy), even if these different approaches regularly hold conflicting views about various asset classes. Doing so can capture the upside of each approach with less extreme allocation swings, better asset class diversification, and lower tracking error (an important consideration in managing client expectations).

In a July 2016 research piece titled “Fooled by Conviction,” S&P argues against the idea that “…managers should focus exclusively on their best ideas, holding more concentrated portfolios of securities in which they have the highest confidence.”  The authors lay out the case that “high conviction” equity portfolios:

  1. Are riskier than more diversified counterparts
  2. Cloud the investor’s ability to evaluate manager skill
  3. Increase trading costs
  4. Increase the probability of underperformance

We have some disagreements with the specific arguments in the article.

For example, the authors compare stock picking to a coin-flipping game when arguing point #2.  Specifically, their example assumes that a manager makes a yes or no call on whether or not a stock will outperform the benchmark.

A skillful manager will get this call right more often than not.

With this setup, the authors reach the correct conclusion that the more stocks considered, the better.

The coin-flipping example, however, is a poor model for evidence-based active management where excess returns are typically compensation for bearing systematic risk and/or the result of exploiting behavioral biases.

Take the value factor as an example.  If a portfolio of deep value stocks is too concentrated, it will fail to diversify away the idiosyncratic risk we are not compensated for bearing.  On the other hand, if the portfolio is too diversified, the value exposure will be diluted, decreasing the potential to capture value-related excess returns.

All this being said, we agree strongly with the overall message of the piece.  Concentration for concentration’s sake is not something we should strive for.

“High conviction” bias is not only a stock-picking phenomenon: we see it in the asset allocation world as well.  Investors with their own approach to asset allocation can be hesitant to incorporate outside managers with alternative allocation approaches.

For example, a value-based allocator looking to find asset classes at bargain prices may be reluctant to allocate a sleeve of their portfolio to a momentum manager.

More often than not, the rationale we hear cited for this viewpoint is that introducing other allocation approaches will “dilute,” or even contradict, the investor’s own allocation decisions.

To explore this reasoning, let’s use a toy example with the following assumptions:

  1. We live in a two-asset-class world. We assume that returns for the two asset classes are independent of one another and normally distributed with an annualized expected return of 8% and an annualized standard deviation of 16%.
  2. There is an unlimited number of asset allocators making independent investment decisions.
  3. Each asset allocator makes a binary decision once per year, investing 100% of their portfolio in one of the two asset classes.
  4. Each asset allocator has skill (i.e. will get the asset class call correct more than 50% of the time).

For simplicity, we will assume that the accuracy of each manager is 55%.  While this is admittedly an arbitrary choice, the conclusions that follow do not hinge on it, and it equates to a reasonable expected annualized outperformance of roughly 1%.
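As a rough sanity check on that 1% figure, here is a minimal Monte Carlo sketch of the toy setup (the function name, seed, and simulation count are our own choices, not part of the original example):

```python
import random
import statistics

def expected_outperformance(accuracy=0.55, n_sims=200_000, seed=42):
    """Estimate annual outperformance vs. a 50/50 benchmark for one
    allocator who picks the better of two assets with the given accuracy.
    Both assets: independent, normal, 8% mean, 16% std (annualized)."""
    rng = random.Random(seed)
    excess = []
    for _ in range(n_sims):
        r_a = rng.gauss(0.08, 0.16)
        r_b = rng.gauss(0.08, 0.16)
        better, worse = max(r_a, r_b), min(r_a, r_b)
        # A skilled allocator lands on the better asset 55% of the time
        pick = better if rng.random() < accuracy else worse
        excess.append(pick - 0.5 * (r_a + r_b))
    return statistics.mean(excess)

print(round(expected_outperformance(), 4))  # ≈ 0.009, i.e. roughly 1% per year
```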

So back to our question: Does combining more and more of these allocators together in a single portfolio dilute the results?

The answer will depend on what exactly we mean by dilute.

In one sense, yes.  Adding more allocators will reduce the probability of large bets on either asset class.

In our hypothetical example, using a single allocator will always lead to a 100% allocation to one of the two asset classes in the universe.  Incorporating a second allocator cuts the probability of our portfolio holding 100% in a single asset class roughly in half: about half of the time the two allocators will agree, leaving the portfolio 100% invested in that asset class, and the rest of the time they will disagree, leading to a 50/50 allocation.

As more and more allocators are added, the likelihood of being invested entirely in one asset class drops rapidly.  In fact, each additional allocator will reduce this probability by a factor of around two.  With five allocators, we would expect to see a 100/0 or 0/100 allocation less than once a decade.
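This halving pattern can be checked in closed form: with accuracy p, the chance that all n independent allocators land on the same asset class is p^n + (1 − p)^n.  A short sketch (the function name is ours):

```python
def prob_all_in_one_asset(n_allocators, accuracy=0.55):
    """Probability that every allocator independently picks the same
    asset class, leaving the portfolio at 100/0 or 0/100."""
    p = accuracy
    return p ** n_allocators + (1 - p) ** n_allocators

for n in range(1, 6):
    # n=1 gives 1.0; n=2 gives 0.505; by n=5 the probability is ~0.069,
    # i.e. an all-in allocation less than one year in ten.
    print(n, round(prob_all_in_one_asset(n), 3))
```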

[Figure omitted.  Calculations by Newfound Research.  This is a hypothetical example that does not reflect any actual investments or asset classes.]

A byproduct of fewer extreme bets is that the likelihood of large outperformance decreases.

For example, with only one allocator, annualized tracking error to a 50/50 benchmark exceeds 10%.  We would expect to see outperformance of more than 5% once every three years, more than 10% once every five years, and more than 15% once every ten years.

With five allocators, on the other hand, tracking error falls below 5% and we would expect to see outperformance of more than 5% once every seven years, outperformance of more than 10% once every 26 years, and outperformance of more than 15% once every 90 years.
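These tracking error figures can be approximated by simulating the same toy model (a sketch; function and parameter names are ours, and the exact outputs are subject to simulation noise):

```python
import random
import statistics

def tracking_error(n_allocators, accuracy=0.55, n_sims=100_000, seed=7):
    """Simulated annualized tracking error vs. a 50/50 benchmark when
    n independent allocators each pick the better asset with the given
    accuracy.  Assets: independent N(8%, 16%), as in the toy example."""
    rng = random.Random(seed)
    excess = []
    for _ in range(n_sims):
        r_a = rng.gauss(0.08, 0.16)
        r_b = rng.gauss(0.08, 0.16)
        better, worse = max(r_a, r_b), min(r_a, r_b)
        # Count how many allocators call the better asset correctly
        n_right = sum(rng.random() < accuracy for _ in range(n_allocators))
        w = n_right / n_allocators  # weight in the better-performing asset
        excess.append(w * better + (1 - w) * worse - 0.5 * (r_a + r_b))
    return statistics.stdev(excess)

print(round(tracking_error(1), 3))  # ~0.11: above 10% with one allocator
print(round(tracking_error(5), 3))  # ~0.05 with five allocators
```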

[Figure omitted.  Calculations by Newfound Research.  This is a hypothetical example that does not reflect any actual investments or asset classes.]

Anecdotally, this is the line of thought we see crop up when investors who do their own asset allocation balk at including complementary allocation approaches.  The fear is that using other allocation approaches, managers, or strategies will offset their own allocation decisions, watering down outperformance when the original asset allocations are correct.

The large outperformance that disappears, however, is not a dilution of skill.  Rather, it is a dilution of positive variance.  Or, put another way, we’re sacrificing good luck.  However, the dilution is symmetric: you also lose bad luck.

Investors, however, love upside variance.  This is often why diversification seems destined to disappoint: positive surprises are always welcome, and diversification dilutes that opportunity.

The potential to have positive surprises, however, comes at the cost of potentially realizing negative ones as well.

So using multiple allocators will reduce your odds of winning the allocation lottery (i.e. achieving big outperformance in a given year), but doing so does not reduce the expected amount of outperformance.  If all allocators have the same amount of skill, then expected outperformance will be the same whether we use one allocator or one million allocators.

So if expected outperformance remains unchanged, why bother?  Incorporating complementary processes reduces risk in two important ways.

First, overall portfolio volatility will be lower.  Why?  Because as we saw previously, using multiple allocation processes reduces the likelihood of large bets on a subset of asset classes.  This allows the overall portfolio to more consistently harvest available diversification opportunities.
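The volatility reduction can be simulated the same way (again a sketch under the toy model's assumptions; names and seeds are ours):

```python
import random
import statistics

def portfolio_volatility(n_allocators, accuracy=0.55, n_sims=100_000, seed=11):
    """Simulated annualized volatility of a portfolio run by n independent
    allocators in the two-asset toy model (each asset: N(8%, 16%))."""
    rng = random.Random(seed)
    returns = []
    for _ in range(n_sims):
        r_a = rng.gauss(0.08, 0.16)
        r_b = rng.gauss(0.08, 0.16)
        better, worse = max(r_a, r_b), min(r_a, r_b)
        n_right = sum(rng.random() < accuracy for _ in range(n_allocators))
        w = n_right / n_allocators  # weight in the better-performing asset
        returns.append(w * better + (1 - w) * worse)
    return statistics.stdev(returns)

print(round(portfolio_volatility(1), 3))  # ~0.16: always all-in on one asset
print(round(portfolio_volatility(5), 3))  # ~0.12: nearer the 50/50 mix's ~0.113
```

With one allocator the portfolio is always fully invested in a single asset class, so it inherits that asset's full 16% volatility; with five, mixed allocations pull volatility toward the diversified 50/50 level.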

[Figure omitted.  Calculations by Newfound Research.  This is a hypothetical example that does not reflect any actual investments or asset classes.]

Second, tracking error will be lower.  While this will indeed lower the chances for large outperformance, it will simultaneously lower the odds of large underperformance.  In our view, managing this downside tracking error is crucial as it helps to control behavioral biases that can interfere with long-term investment success.

With a single allocator, we would expect to experience one year every decade where the portfolio underperforms the 50/50 benchmark by at least 13.8%.  With five independent allocators, this number falls to 3.9%.

[Figure omitted.  Calculations by Newfound Research.  This is a hypothetical example that does not reflect any actual investments or asset classes.]

While this example is highly simplified, it does a nice job of laying out the case for why we should embrace a diverse set of views when it comes to asset allocation, even if these views sometimes conflict with each other.

Reality is of course a lot more complicated than the simple example we’ve used so far.

First, there isn’t an infinite number of independent allocation approaches supported by academic and practitioner evidence.  In fact, the list of evidence-based approaches is usually limited to some combination of the following:

  • Strategic (e.g. balanced portfolios, 1/N, maximum Sharpe, etc.)
  • Momentum
  • Value
  • Defensive
  • Carry

Second, no allocation approach has an equal ex ante probability of success across all market environments.  Evidence-based investment processes that are successful over the long-run can still go in and out of favor over extended periods of time.  Take momentum as an example.  It has worked phenomenally well on average over the long-run.  Yet the path to this success contains both periods of great success (e.g. 2008, 2009, 2013) and significant struggle (e.g. 2011, 2015).

With these nuances in mind, let’s slightly modify the initial example by:

  1. Limiting the number of allocation strategies.
  2. Assuming that while each approach has an average accuracy of 55%, this accuracy will evolve over time. [Note: Each year there is a 10% chance that a given allocator’s accuracy changes; when it does, the new accuracy is drawn from a uniform distribution with a lower bound of 10% and an upper bound of 100%.]
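The stochastic-skill process in assumption #2 can be sketched directly (illustrative only; the starting accuracy, function name, and seed are our own choices):

```python
import random
import statistics

def simulate_accuracy_path(n_years, start=0.55, p_switch=0.10,
                           lo=0.10, hi=1.00, seed=3):
    """Each year accuracy stays put with probability 90%; with probability
    10% it jumps to a fresh draw from Uniform(10%, 100%)."""
    rng = random.Random(seed)
    accuracy = start
    path = []
    for _ in range(n_years):
        if rng.random() < p_switch:
            accuracy = rng.uniform(lo, hi)
        path.append(accuracy)
    return path

path = simulate_accuracy_path(100_000)
print(round(statistics.mean(path), 3))  # long-run average accuracy ≈ 0.55
```

Because Uniform(10%, 100%) averages to 55%, skill still averages out to 55% in the long run, but any individual allocator can spend multi-year stretches well below it.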

We find that allowing allocator skill to be random increases the likelihood of significant negative tracking error over rolling five-year periods.  For example, assume we use one allocator with constant accuracy of 55%.  In this case, the 10th percentile for five-year relative performance to the 50/50 benchmark is -27%.  In other words, there is a one in ten chance that the portfolio will trail the benchmark by 27% or more over a given five-year period.  If we instead use one allocator with random accuracy that averages to 55%, this number jumps to 34%.

In our hypothetical world, adding randomness to allocator skill generally means we need one additional allocator to achieve similar limits to downside tracking error as in the constant skill case (i.e. 2 allocators with constant skill will experience roughly the same degree of downside tracking error as will 3 allocators with stochastic skill).

[Figure omitted.  Calculations by Newfound Research.  This is a hypothetical example that does not reflect any actual investments or asset classes.]

Conclusion

Diversification is a cornerstone of portfolio construction.  It provides investors with the important ability to invest in the face of uncertainty.  Because it can reduce risk without necessarily sacrificing potential reward, it is known as the only free lunch on Wall Street.

Yet, we believe that many investors underutilize a less well-known type of diversification: process diversification.  When it comes to asset allocation, we regularly run across a “high conviction” bias whereby investors with their own approach to asset allocation are hesitant to incorporate complementary third-party allocation strategies.

The reason for this behavior seems to be a fear that adding managers with a different approach to asset allocation will dilute the investor’s own views.  For example, a value-based allocator looking to find asset classes at bargain prices may be reluctant to allocate a sleeve of their portfolio to a momentum manager who will buy upward trending asset classes even if they are overvalued.

Yet, we believe that combining a handful of complementary asset allocation processes can be a powerful tool, both for dealing with future uncertainty and addressing common behavioral biases.

Justin is a Managing Director and Portfolio Manager at Newfound Research, a quantitative asset manager offering a suite of separately managed accounts and mutual funds. At Newfound, Justin is responsible for portfolio management, investment research, strategy development, and communication of the firm's views to clients.

Justin is a frequent speaker on industry panels and is a contributor to ETF Trends.

Prior to Newfound, Justin worked for J.P. Morgan and Deutsche Bank. At J.P. Morgan, he structured and syndicated ABS transactions while also managing risk on a proprietary ABS portfolio. At Deutsche Bank, Justin spent time on the event‐driven, high‐yield debt, and mortgage derivative trading desks.

Justin holds a Master of Science in Computational Finance and a Master of Business Administration from Carnegie Mellon University as well as a BBA in Mathematics and Finance from the University of Notre Dame.