
Summary

  • Mean-variance optimization assumes that you can fully describe the risks and returns of assets in a few simple numbers. 
  • Extreme market events often cause volatilities and correlations to spike dramatically, but stress testing on an individual asset basis can allow our own biases and oversights to creep into the process.
  • By decomposing the risk structure into independent sources of risks and systematically stress testing those components, we can arrive at a solution that models the fat-tailed nature of crises and is entirely rules-based and bias-free.

One of the things that makes wealth management so difficult is that even the topics that seem easy in theory are devilishly difficult in practice.

Consider portfolio construction.  The objective is simple: we just want to divide our wealth among a variety of asset classes and investments to balance risk with reward.

As a foundation, we have modern portfolio theory, which provides an elegant solution for deriving an optimal portfolio allocation, assuming that asset class returns can be completely modeled using a multivariate normal distribution.

Reality is, of course, far more complicated than can be captured by these assumptions.  Not only do we have to deal with an ever-changing set of investor preferences and objectives, but we have to deal with the empirical evidence of dynamic correlation structures and latent risks that manifest as devastating black swan events.

We believe the balance in pragmatic portfolio construction is finding ways to incorporate and account for these complexities without adding unnecessary bells and whistles into the process. 

This week, we wanted to explore one idea for incorporating fat-tails – whereby we empirically see more extreme market moves than would be expected by normal models – into the portfolio construction process.

 

Crashing Correlations 

Let’s consider a very simple two-asset example: a portfolio built using only U.S. stocks and Emerging Market stocks. 

In “normal” markets, let’s assume they have the following volatility and correlation structure:

    σ = [ 17%  22% ]

    ρ = [ 1.00  0.85
          0.85  1.00 ]

As a quick refresher on the matrix notation: the first entry of the volatility vector (17%) is the volatility of U.S. stocks, while the second (22%) is the volatility of Emerging Market stocks.  The correlation between the two asset classes is 0.85, and the ones on the diagonal represent the correlation of each asset with itself (think of them as placeholders that make the math simpler).
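For readers who want to follow along, this volatility and correlation system can be assembled into a covariance matrix in a few lines of numpy.  This is just a sketch using the hypothetical figures above:

```python
import numpy as np

# Hypothetical "normal market" parameters from the text
vols = np.array([0.17, 0.22])            # U.S. and Emerging Market volatilities
corr = np.array([[1.00, 0.85],
                 [0.85, 1.00]])          # correlation matrix

# Covariance = D @ C @ D, where D is the diagonal matrix of volatilities
cov = np.diag(vols) @ corr @ np.diag(vols)
```

The diagonal of `cov` holds the variances (volatilities squared) and the off-diagonal holds the covariance between the two assets.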

In an “extreme” market, however, we might see the volatility of each asset spike and/or the diversification benefit between them disappear (i.e. correlations “crash towards 1”).   

[Figure: an "extreme" market volatility and correlation structure, with elevated volatilities and correlations near 1]

These spiking volatility levels, coupled with decreasing cross-asset diversification opportunities, can cause significant and sudden losses in our portfolios.  Think of what happened during the global financial crisis, when nearly every equity market dropped dramatically.

If these extreme scenarios go unaccounted for in our portfolio construction process, we will likely take on too much risk.  If we over-account for them, however, our portfolios will likely be too conservative, leaving too much return potential on the table.

 

Capturing Covariance Structures: Principal Portfolios

One way many investors look to incorporate this information is through scenario-based stress testing.  In this practice, an investor invents a variety of “worst case” scenarios and attempts to capture their effects upon volatility and correlation structures.  For example, an investor may invent a “market crash” scenario whereby equity correlations jump by 20% and volatility spikes by 30%.

While the benefit of this method is that it allows investors to model very specific events, we think there are two drawbacks:

  1. It forces investors to explicitly model how the assets will react in a scenario, which is difficult to do.
  2. It fails to account for risks we haven’t thought of yet.

In our opinion, a better method is to look at the portfolio not through the lens of what happens to assets but through the lens of principal portfolios, which allows us to examine how joint reactions can occur.

Principal portfolios are a way of breaking down a portfolio into a set of unique sub-portfolios that exhibit statistically independent return structures.

What does that actually mean?  Let’s explore a simple example. 

Below we plot a scatter series of (demeaned) hypothetical returns for U.S. stocks and Emerging Market stocks using the first volatility and correlation structure example above.

[Figure: scatter plot of hypothetical (demeaned) U.S. and Emerging Market stock returns]

To find the principal portfolios that describe this relationship, we perform what is known as an eigenvalue decomposition.  We can think of this mathematical process as identifying the best-fitting ellipse (or, in the case of more than two assets, ellipsoid or hyper-ellipsoid) to the data and then finding the vectors that describe that ellipse.  It is essentially like rotating the x- and y-axes to better fit our data.

[Figure: best-fitting ellipse overlaid on the return scatter]

[Figure: eigenvectors (the principal axes of the fitted ellipse) overlaid on the return scatter]

These vectors (the orange arrows) will have the dual features of both describing the primary axes of variance as well as being perpendicular to one another.  In this case, we end up with the following (unit) vectors and lengths:

    p1 = [0.59, 0.84], length 0.071

    p2 = [0.84, -0.59], length 0.005

The first principal portfolio (p1), in this case, is a portfolio that holds 0.59 units of U.S. stocks and 0.84 units of Emerging Market stocks.  If we were to try to label this portfolio, we might think of it as the portfolio that captures the shared “global equity beta” exposure between the two assets.

The second principal portfolio (p2) holds 0.84 units of U.S. stocks and -0.59 units of Emerging Market stocks.  This is a long/short portfolio that captures the relative return between the two asset classes.

The lengths of the vectors are important because they tell us how much variance is explained by each principal portfolio.  So, in this case, the first principal portfolio explains 93% of the overall system variance (0.071 / (0.071 + 0.005)).  This means that the beta factor more-or-less overwhelms the relative performance factor in terms of explaining overall system volatility.
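The decomposition above can be reproduced with numpy's symmetric eigen-solver.  This is a sketch using the hypothetical parameters from earlier; because of rounding, the computed values will differ slightly from the rounded figures quoted in the text:

```python
import numpy as np

vols = np.array([0.17, 0.22])
corr = np.array([[1.00, 0.85],
                 [0.85, 1.00]])
cov = np.diag(vols) @ corr @ np.diag(vols)

# Eigenvalue decomposition of the symmetric covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)      # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]           # re-order largest first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Fix the sign convention so the first element of each vector is non-negative
eigvecs = eigvecs * np.sign(eigvecs[0, :])

# Share of total variance explained by each principal portfolio
explained = eigvals / eigvals.sum()
```

Here `eigvecs[:, 0]` holds the weights of the first principal portfolio (the "beta" portfolio) and `explained[0]` comes out near 93%.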

So now instead of thinking of our portfolio as a blend of U.S. stocks and Emerging Market stocks, we can think of it as a combination of global equity beta and a relative long/short portfolio.

But why is this transformation useful?

 

A Shock to the System

The benefit of looking at the relationship of assets through this structure is it allows us to very easily create random market scenarios and explore their effect on our asset-based covariance system.

For example, instead of saying, “a crisis increases all equity correlations by 20%,” we can just say, “a crisis doubles the impact of the beta factor.”  To do this, we would just double the length of the beta factor.

[Figure: the first eigenvector extended to twice its original length]

Using these principal portfolios and the new lengths, we can re-create our covariance matrix and simulate new data.

What we can see is that the volatility of both assets, as well as the correlation between them, has increased.  In other words, by positively shocking the first principal portfolio, we have increased the effect that portfolio has on system returns, effectively creating a “fat tail” in scenarios related to that principal portfolio.
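As a sketch of that round trip in numpy, doubling the largest eigenvalue and rebuilding the covariance matrix demonstrates the effect directly (the hypothetical parameters are the ones from earlier in the post):

```python
import numpy as np

vols = np.array([0.17, 0.22])
corr = np.array([[1.00, 0.85],
                 [0.85, 1.00]])
cov = np.diag(vols) @ corr @ np.diag(vols)

eigvals, eigvecs = np.linalg.eigh(cov)

# "Crisis" scenario: double the variance of the largest principal portfolio
shocked = eigvals.copy()
shocked[-1] *= 2.0                          # eigh sorts ascending; last = largest

# Rebuild the covariance matrix from the shocked eigen-system
new_cov = eigvecs @ np.diag(shocked) @ eigvecs.T

new_vols = np.sqrt(np.diag(new_cov))
new_corr = new_cov[0, 1] / (new_vols[0] * new_vols[1])
```

Both volatilities rise above their original 17% and 22% levels, and the correlation climbs from 0.85 toward 1.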

[Figure: scatter plot of returns simulated from the shocked covariance matrix]

Guided by Randomness

The benefit of this model is that you can allow randomness to be your guide in creating market scenarios.  While we shocked the beta factor to increase volatility and correlations, there is no reason we could not have reduced it in length to capture a period when markets are calm and cross-asset correlations have relaxed.

Or we could shock the second principal portfolio, leaving the beta factor alone while increasing the effects of relative risks.

Or we can let randomness be our guide, randomly selecting a principal portfolio to shock and how much to shock it by. 

Consider the following pseudo-algorithm:

  1. Perform an eigenvalue decomposition to identify the principal portfolios and their explained variances: Σ = V Λ Vᵀ, where the columns of V are the principal portfolios and the diagonal entries of Λ are their variances (the eigenvalues).
  2. Generate a standard random normal draw: z ~ N(0, 1).
  3. Select the index, k, of a random principal portfolio, with the probability of selecting each principal portfolio proportional to the percentage of total variance it explains.
  4. Perturb the kth eigenvalue: λ_k → λ_k · e^z.
  5. Re-compute the covariance matrix using the new eigenvalues: Σ_shocked = V Λ_shocked Vᵀ.

 

(Math-free side note: Those steps in a nutshell can be thought of as “randomly shock the primary independent sources of risk in proportion to how much they contribute to the total risk.”)
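The pseudo-algorithm above can be sketched in a short numpy function.  Note that the multiplicative e^z form of the eigenvalue perturbation is one plausible reading of the "exponential scaling" described below, not a definitive specification:

```python
import numpy as np

def random_shock(cov, rng):
    """Randomly shock one principal portfolio of a covariance matrix."""
    eigvals, eigvecs = np.linalg.eigh(cov)

    # Probability of picking each principal portfolio is proportional
    # to the share of total variance it explains
    probs = eigvals / eigvals.sum()

    # A standard normal draw governs the size and sign of the shock
    z = rng.standard_normal()

    # Pick which principal portfolio to shock
    k = rng.choice(len(eigvals), p=probs)

    # Perturb the k-th eigenvalue multiplicatively (z > 0 inflates, z < 0 dampens)
    shocked = eigvals.copy()
    shocked[k] *= np.exp(z)

    # Re-compute the covariance matrix from the shocked eigen-system
    return eigvecs @ np.diag(shocked) @ eigvecs.T

rng = np.random.default_rng(0)
vols = np.array([0.17, 0.22])
corr = np.array([[1.00, 0.85],
                 [0.85, 1.00]])
cov = np.diag(vols) @ corr @ np.diag(vols)
shocked_cov = random_shock(cov, rng)
```

Because the shock only rescales eigenvalues, the result is always a valid (symmetric, positive-definite) covariance matrix that can be fed straight into a resampled optimization.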

This process has the added benefit that it can create both positive and negative shocks (depending on whether z is above or below 0).  Furthermore, because those shocks are governed by random normal draws, we will only create extreme moves a very low percentage of the time, and the exponential scaling skews the largest shocks toward more extreme scenarios.  Finally, while this method will have a higher likelihood of shocking those principal portfolios that explain the vast majority of the system’s variance (which, for equities, will largely be shared beta exposure), it can still end up shocking other principal portfolios as well, creating market structures that have crises in idiosyncratic factors.

 

An Example

Below we show the difference in recommended weights for a balanced risk portfolio between a resampled mean-variance optimization process that shocks the covariance matrix and one that does not.

[Figure: differences in recommended portfolio weights between the shocked and un-shocked optimizations]

Source: JPMorgan, BNY Mellon, Morgan Stanley.  Calculations by Newfound Research.

 

What we can see is that asset classes that had been recommended in the standard MVO as diversifiers – things like alternatives (event driven, macro, commodities, and gold) as well as some credit (high yield and EM debt) – are sacrificed in favor of traditional US large cap equities, cash, and long-term US Treasuries.

This is likely due to the fact that shocks to the primary factor highlighted diversification weakness in assets like high yield and EM debt, and favored things like cash and long-term US Treasuries.  Shocks to more exotic betas likely highlighted the risks in some alternative assets, ultimately reducing their exposure. 

Ultimately, we believe that it is important to incorporate some sort of estimate of fat tails into our portfolio construction process.  While scenario-based designs may be appealing from a narrative standpoint, we believe the stochastic, principal portfolio-based method we have outlined herein is not only more scalable, but also helps reduce the risk that we underestimate certain risks due to our own biases.

 

Client Talking Points

  • Acknowledging that some risks are very hard to measure can be a good thing when thinking about your portfolio allocation.
  • At the same time, we can’t be so fearful that we underallocate to potential sources of return.
  • By balancing these two facts in our process, we hope to arrive at a robust solution regardless of what future market environments have in store for our portfolio.

Corey is co-founder and Chief Investment Officer of Newfound Research, a quantitative asset manager offering a suite of separately managed accounts and mutual funds. At Newfound, Corey is responsible for portfolio management, investment research, strategy development, and communication of the firm's views to clients.

Prior to offering asset management services, Newfound licensed research from the quantitative investment models developed by Corey. At peak, this research helped steer the tactical allocation decisions for upwards of $10bn.

Corey is a frequent speaker on industry panels and contributes to ETF.com, ETF Trends, and Forbes.com’s Great Speculations blog. He was named a 2014 ETF All Star by ETF.com.

Corey holds a Master of Science in Computational Finance from Carnegie Mellon University and a Bachelor of Science in Computer Science, cum laude, from Cornell University.

You can connect with Corey on LinkedIn or Twitter.