I recently read an interesting AllianceBernstein blog post titled "Tail-Risk Parity: The Quest for a Crash-Proof Portfolio," which led me to their white paper, "An Introduction to Tail-Risk Parity."  AllianceBernstein has released their own spin on risk parity -- a topic that has captured a rapidly increasing share of the quant blogosphere and twittersphere -- and I think it is one of the more interesting takes.

While we've commented on risk parity many times before, for the uninitiated: risk parity is the concept of allocating based on balancing risk contributions -- with risk most often measured as volatility.  The critique of a standard 60/40 portfolio is that roughly 90% of its volatility is driven by the equity allocation, making its risk profile decisively equity-driven.  After growing in popularity over the last few years, risk parity has come under major focus, and critique, lately for being so heavily dollar-weighted towards fixed-income instruments, causing portfolio dislocations during the market's most recent taper tantrum.
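As a rough illustration of that 90% figure, here is a minimal sketch of the risk-contribution math, using assumed (illustrative, not estimated) volatility and correlation numbers:

```python
import numpy as np

# Illustrative assumptions: 16% equity vol, 5% bond vol, 0.1 correlation
w = np.array([0.60, 0.40])                    # 60/40 dollar weights
vols = np.array([0.16, 0.05])
corr = np.array([[1.0, 0.1],
                 [0.1, 1.0]])
cov = np.outer(vols, vols) * corr             # covariance matrix

port_vol = np.sqrt(w @ cov @ w)
risk_contrib = w * (cov @ w) / port_vol       # each asset's contribution to portfolio vol
pct_contrib = risk_contrib / port_vol

print(f"Portfolio volatility:  {port_vol:.2%}")
print(f"Equity share of risk:  {pct_contrib[0]:.1%}")   # ~94% with these inputs
```

With those inputs, equities drive roughly 94% of portfolio volatility despite being only 60% of the dollar allocation.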

In their implementation, AllianceBernstein has chosen to measure risk via implied Expected Tail Loss (ETL), extracting information from the options market to determine return skewness.  Why not use volatility?

  1. Volatility is a symmetric measure, and therefore punishes positive skew in our returns (which we want)
  2. Volatility spikes lead to position size reduction (i.e. position liquidation), which often drives prices down and volatility further up, leading to further size reduction; i.e. a volatility vortex
  3. Portfolio and asset returns are heavily driven by the "fat tails" and asymmetric profiles of return distributions -- an area volatility does not cover

AllianceBernstein defines Expected Tail Loss:

Expected tail loss measures a portfolio’s expected return over a pre-specified horizon in the “worst possible” scenarios, where “worst” is defined by a user-defined percentile. For example, a 5% monthly ETL corresponds to the expected average of the 5% worst monthly returns. ETL captures the risk of large losses or drawdowns that traditional measures that rely on a normal distribution underestimate.

(Note: ETL is also known as Conditional Value-at-Risk (CVaR) and Expected Shortfall in the industry)
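For concreteness, here is a minimal sketch of the historical version of that calculation -- historical rather than option-implied, and run on simulated data, purely for illustration:

```python
import numpy as np

def expected_tail_loss(returns, alpha=0.05):
    """Average of the worst alpha-fraction of returns (historical ETL / CVaR)."""
    returns = np.asarray(returns)
    var_cutoff = np.quantile(returns, alpha)       # the Value-at-Risk threshold
    return returns[returns <= var_cutoff].mean()   # average of the "worst possible" scenarios

# Example on simulated data: 20 years of monthly returns (illustrative only)
rng = np.random.default_rng(0)
monthly_returns = rng.normal(0.007, 0.04, size=240)
print(f"5% monthly ETL: {expected_tail_loss(monthly_returns):.2%}")
```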

What Tail-Risk Parity seeks to do is reduce exposure to assets with a high option-implied Expected Tail Loss.  In other words, when the options market is saying that the cost of hedging downside losses is high -- an indication that the implied probability of downside losses has gone up -- the strategy reduces its exposure.  (Note: I don't know exactly how AllianceBernstein is computing their option-implied Expected Tail Loss.  I presume that they are either (a) backing out the implied probability distribution of returns and computing ETL from there or (b) backing out option-implied skewness and using a modified ETL calculation)
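Purely as a sketch of what approach (a) might look like -- and with no claim that this is what AllianceBernstein actually does -- one could apply the Breeden-Litzenberger relationship, which recovers a risk-neutral density from the second derivative of call prices with respect to strike, and then average the implied returns in the worst tail.  The function below is hypothetical and assumes a clean, dense strike grid with no discounting:

```python
import numpy as np

def implied_etl_from_calls(strikes, call_prices, spot, alpha=0.05):
    """Hypothetical sketch: back out a risk-neutral density from call prices
    (Breeden-Litzenberger) and average the implied returns in the worst
    alpha-fraction of outcomes.  Ignores discounting and interpolation issues."""
    strikes = np.asarray(strikes, dtype=float)
    calls = np.asarray(call_prices, dtype=float)

    # Risk-neutral density is (approximately) the second derivative of the
    # call price with respect to strike
    density = np.gradient(np.gradient(calls, strikes), strikes)
    density = np.clip(density, 0.0, None)
    density /= np.trapz(density, strikes)          # normalize to a proper density

    returns = strikes / spot - 1.0                 # terminal return at each strike
    cdf = np.cumsum(density * np.gradient(strikes))
    var_index = np.searchsorted(cdf, alpha)        # index of the alpha-quantile (VaR)

    tail = np.arange(len(strikes)) <= max(var_index, 1)
    tail_prob = np.trapz(density[tail], strikes[tail])
    # Probability-weighted average return in the tail = option-implied ETL
    return np.trapz(returns[tail] * density[tail], strikes[tail]) / tail_prob
```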

In my opinion, one of the more interesting arguments that AllianceBernstein makes is that by focusing on tail risk, they are implicitly assuming the failure of diversification in their portfolio -- as it is in portfolio "tail" scenarios that correlations, by definition, have crashed to 1.  Therefore, the Tail-Risk Parity portfolio is constructed for "crisis" scenarios.  By focusing only on volatility, a naïve risk parity implementation may ignore the creeping risk of rising correlations.

Just like Risk Parity is the Mean-Variance optimal solution when securities have a correlation structure of 1 and equivalent Sharpe ratios, Tail-Risk Parity is the Mean-CVaR optimal solution when securities have a correlation structure of 1 and equivalent return-to-CVaR ratios.
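In that equal-correlation, equal-ratio limit, both solutions collapse to weighting each asset inversely to its risk measure; a minimal sketch with purely illustrative numbers:

```python
import numpy as np

# Illustrative per-asset risk estimates (not actual figures)
vol = np.array([0.16, 0.05, 0.10])    # volatilities
etl = np.array([0.22, 0.06, 0.15])    # expected tail losses, as positive magnitudes

# With a correlation structure of 1 and equal reward-to-risk ratios, the optimal
# portfolio simply weights each asset inversely to its risk measure
risk_parity_w      = (1 / vol) / (1 / vol).sum()
tail_risk_parity_w = (1 / etl) / (1 / etl).sum()

print("Risk parity weights:     ", np.round(risk_parity_w, 3))
print("Tail-risk parity weights:", np.round(tail_risk_parity_w, 3))
```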

Now, a brief tangent...

Value-at-Risk is the threshold that defines the "worst possible" scenarios in the above ETL definition.  At first blush, VaR, and VaR-related measures, might seem like better measures to use in portfolio construction than volatility because they more intuitively align with our definition of risk: downside loss.

A simple critique of VaR is that just because an asset has exhibited a low historical VaR, has a low simulated VaR, or has a low implied VaR does not mean it will necessarily have a low VaR in the future.  So optimizing a portfolio to overweight assets based on minimizing VaR is sort of like hiring only people who have never made mistakes before: it doesn't mean they won't make a mistake in the future -- it just means you have no idea what they are going to do when they do make a mistake.  However, since volatility and VaR are traditionally very highly related, the same argument could be made for low volatility versus high volatility assets.  It's Taleb's turkey.

There is a more subtle critique, however, tied to statistics.  Because VaR is normally associated with a low probability level, like 1% or 5%, we can also define VaR as the cutoff beyond which we do not have a whole lot of historical information.  Or any information, really.  In fact, VaR can also be interpreted as the boundary beyond which normal statistics can no longer be applied -- we no longer have enough data or information to make good statistical estimates.  Thar be dragons there.  So how in the world are we supposed to estimate the average of the data in that region?

In that light, estimating ETL is a statistical nightmare because we don't have the observations to actually calculate it with any accuracy.
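A quick sketch makes the point: bootstrap-resample twenty years of fat-tailed monthly returns (simulated here, purely for illustration) and look at how much the historical 5% ETL estimate bounces around when it rests on only about a dozen observations:

```python
import numpy as np

rng = np.random.default_rng(1)
returns = rng.standard_t(df=4, size=240) * 0.03     # 20 years of fat-tailed monthly returns

def etl(r, alpha=0.05):
    cutoff = np.quantile(r, alpha)                   # VaR threshold
    return r[r <= cutoff].mean()                     # average of the worst 5%

# Each resampled estimate is driven by only ~12 tail observations
estimates = [etl(rng.choice(returns, size=returns.size, replace=True))
             for _ in range(2000)]
print(f"Point estimate of 5% ETL:              {etl(returns):.2%}")
print(f"Bootstrap std dev of the ETL estimate: {np.std(estimates):.2%}")
```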

And that's my problem with the Tail-Risk Parity idea.  It's not that we're underweighting assets that the options market is saying have fat left tails -- it's that we're relying on an estimate that, almost by definition, is going to be bad.  The argument that we're relying on the insurance market to accurately price these risks and probabilities doesn't make it any better.  In fact, if we know that selling insurance (volatility) has, historically, been a winning bet, then by definition we can expect that the implied volatility on these options is overstated relative to realized volatility, meaning that we are going to overstate our left-tail risk.  For low-frequency events with unknown, and possibly massive, magnitudes, we can presume that insurers would require an even greater premium -- which translates to an overstated implied probability of large-magnitude events.  This potentially overstated size of the tail will cause us to take a sub-optimal amount of risk during normal markets (when risks are not realized), leading to risk-adjusted, and likely total, return underperformance.

Ignoring correlations has the same effect.  I applaud AllianceBernstein for their simplified approach, but assuming a constant correlation matrix of 1 ignores the diversification benefits present in normal markets.

Furthermore, the entire structure only accounts for known unknowns -- the risks that the insurance market is appropriately (or, inappropriately) pricing for.  It ignores the unknown unknowns that the options market cannot appropriately price -- because they don't even know what they are.

The strategy also suffers from some of the same liquidation-vortex risks that risk parity portfolios suffer from because ETL and volatility tend to be highly correlated; as volatility picks up, VaR typically rises, leading to increased CVaR readings, leading to position size reduction.

Now, with all that said, I generally like the direction that AllianceBernstein is headed.  I think utilizing skew, kurtosis, and investor utility information can do wonders to align product performance with client expectations.  I just think this backtest looks particularly rosy because of the 30-year bull market and several fully realized high-skew / fat-tail risks in equity markets.

Apropos of nothing -- I think page 13 of the white paper is absolutely brilliant and is absolutely the correct way to measure the merits of any investment solution focused on risk management: by asking "how much would the same protection have cost me in the insurance (options) market?"  Good stuff right there.

Corey is co-founder and Chief Investment Officer of Newfound Research, a quantitative asset manager offering a suite of separately managed accounts and mutual funds. At Newfound, Corey is responsible for portfolio management, investment research, strategy development, and communication of the firm's views to clients.

Prior to offering asset management services, Newfound licensed research from the quantitative investment models developed by Corey. At peak, this research helped steer the tactical allocation decisions for upwards of $10bn.

Corey is a frequent speaker on industry panels and contributes to ETF.com, ETF Trends, and Forbes.com’s Great Speculations blog. He was named a 2014 ETF All Star by ETF.com.

Corey holds a Master of Science in Computational Finance from Carnegie Mellon University and a Bachelor of Science in Computer Science, cum laude, from Cornell University.

You can connect with Corey on LinkedIn or Twitter.