The vast majority of people will actually allocate, during their lifetime, to a strategy with negative expected excess return — and “happily” do so!  What is this strategy?  Insurance.  Why do we choose to allocate to such a strategy?  Risk aversion implies that we’ll take a small shift left in our return distribution for the non-linear tail protection afforded to us.

Even with an example as common as insurance, a positive excess return still seems to be a prerequisite for a strategy's inclusion in a portfolio. Few people sit around hoping that their house burns down so they can collect insurance, but that's how we treat strategies focused on protecting in the left tail: we want risks (like recessions) to be realized so that we don't end up with negative realized returns.

Non-linearities can be very hard to quantify in portfolio construction, especially under a traditional mean-variance or $latex \alpha-\beta$ framework. But we can start to account for skewness, kurtosis, and discontinuities with a more flexible utility function. Specifically, we will look to the family of constant relative risk aversion (CRRA) utility functions. These functions rely on only a single input – terminal wealth – making them useful in scenario analysis. For scenario set $latex \Omega$, we will want to maximize our expected utility:

$latex \sum_{\Gamma \in \Omega}{\pi_{\Gamma}U(\Gamma)}$

where $latex \pi_{\Gamma}$ is the probability of scenario $latex \Gamma$, $latex \Gamma$ is the terminal value of the portfolio in that scenario, and we define:

$latex U(\Gamma) = \frac{\Gamma^{1-\gamma}}{(1-\gamma)}$

Since terminal wealth is a function of our weight in each asset class and the return of that asset class, we can re-write $latex \Gamma$ as $latex w^{T}\vec{r}+1$ (assuming initial wealth $latex w_{0}=1$), where $latex w$ is a vector of portfolio weights and $latex \vec{r}$ is a vector of asset returns.
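As a rough sketch (not from the original article), the objective above can be implemented directly. The helper names `crra_utility` and `expected_utility` are hypothetical:

```python
import math

# A minimal sketch of the scenario-based objective: expected CRRA utility
# of terminal wealth over a discrete scenario set.

def crra_utility(wealth, gamma):
    """U(W) = W^(1 - gamma) / (1 - gamma); log utility in the gamma -> 1 limit."""
    if gamma == 1.0:
        return math.log(wealth)
    return wealth ** (1.0 - gamma) / (1.0 - gamma)

def expected_utility(weights, scenario_returns, probabilities, gamma):
    """Sum over scenarios of pi * U(w'r + 1), with initial wealth of 1."""
    eu = 0.0
    for pi, returns in zip(probabilities, scenario_returns):
        terminal_wealth = 1.0 + sum(w * r for w, r in zip(weights, returns))
        eu += pi * crra_utility(terminal_wealth, gamma)
    return eu
```

For example, `expected_utility([0.6, 0.4], [[0.10, 0.02], [-0.20, 0.05]], [0.5, 0.5], 3.0)` evaluates a 60/40 split across two hypothetical assets in two equally likely scenarios.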

As an example, we will consider a two-asset, two-state scenario with a 0% risk-free rate. Our first asset is the market portfolio, which has an 8% return in our up state and a -40% return in our down state. Our second asset – our insurance – has a -1% return in our up state and a 40% return in our down state. Our up state has a probability of 98% and our down state has a probability of 2%. This gives our market portfolio an expected return of 7.04% and our insurance an expected return of -0.18%.
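The expected returns quoted above are just the probability-weighted state returns, which we can verify:

```python
# Checking the example's arithmetic: probability-weighted expected returns
# for the market and insurance assets under the two states.

p_up, p_down = 0.98, 0.02
market = {"up": 0.08, "down": -0.40}
insurance = {"up": -0.01, "down": 0.40}

exp_market = p_up * market["up"] + p_down * market["down"]
exp_insurance = p_up * insurance["up"] + p_down * insurance["down"]

print(f"market: {exp_market:.2%}, insurance: {exp_insurance:.2%}")
# market: 7.04%, insurance: -0.18%
```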

For different risk aversion levels – from nearly risk-neutral at 0.1 to highly risk averse at 10 – we can plot how much we hold in the market portfolio versus how much we hold in insurance. We can see that through the lens of this framework, for a given risk aversion level, it is entirely rational to hold an asset class with a negative expected excess return.
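A rough sketch of that allocation exercise, using a simple grid search over the market/insurance split in the two-state example. The fully-invested constraint (weights sum to one) is my simplifying assumption, not stated in the article:

```python
# For each risk-aversion level gamma (gamma != 1), grid-search the
# fully-invested market/insurance split that maximizes expected CRRA
# utility in the two-state example (up: 98%, down: 2%).

def optimal_market_weight(gamma, steps=1000):
    p_up, p_down = 0.98, 0.02
    best_w, best_eu = None, float("-inf")
    for i in range(steps + 1):
        w = i / steps  # weight in the market; (1 - w) goes to insurance
        wealth_up = 1.0 + w * 0.08 + (1.0 - w) * -0.01
        wealth_down = 1.0 + w * -0.40 + (1.0 - w) * 0.40
        eu = sum(p * wealth ** (1.0 - gamma) / (1.0 - gamma)
                 for p, wealth in ((p_up, wealth_up), (p_down, wealth_down)))
        if eu > best_eu:
            best_w, best_eu = w, eu
    return best_w

for gamma in (0.1, 2.0, 5.0, 10.0):
    w = optimal_market_weight(gamma)
    print(f"gamma={gamma:>4}: market {w:.1%}, insurance {1 - w:.1%}")
```

Under these assumptions, low risk aversion allocates nothing to insurance, while the insurance allocation grows steadily as gamma increases.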

Frequently reported statistics that break down expected excess return are $latex \alpha$ and $latex \beta$.  Few negative $latex \alpha$ strategies make it past the screening stage, but that may not be entirely rational.  $latex \alpha$ is the expected excess return relative to a beta-adjusted benchmark.  So, technically, a strategy can exhibit positive $latex \alpha$ while still underperforming its benchmark from a total return perspective and, conversely, a strategy can exhibit negative $latex \alpha$ while outperforming its benchmark from a total return perspective:

| Beta | Alpha | Expected Excess Return | Relative to Benchmark |
|------|-------|------------------------|-----------------------|
| 0.5  | 3%    | 8%                     | Underperform          |
| 1.5  | -3%   | 12%                    | Outperform            |
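The rows are consistent with the standard decomposition, alpha = expected excess return − beta × benchmark excess return, if we assume a benchmark excess return of 10% (my inference from the numbers; the article does not state it explicitly):

```python
# The table rows under an assumed benchmark excess return of 10%:
# alpha = expected excess return - beta * benchmark excess return.

benchmark_excess = 0.10

for beta, expected_excess in ((0.5, 0.08), (1.5, 0.12)):
    alpha = expected_excess - beta * benchmark_excess
    relative = "Outperform" if expected_excess > benchmark_excess else "Underperform"
    print(f"beta={beta}: alpha={alpha:+.0%}, {relative}")
```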

$latex \alpha$ will simply measure whether the return is in excess of the risks borne.
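As an illustration of how different this can look from total return, we can compute the probability-weighted $latex \alpha$ and $latex \beta$ of the insurance asset in the earlier two-state example (with a 0% risk-free rate, excess return equals total return). This calculation is mine, not from the article:

```python
# Illustrative only: probability-weighted CAPM alpha/beta for the insurance
# asset from the earlier two-state example.

p = [0.98, 0.02]
r_mkt = [0.08, -0.40]
r_ins = [-0.01, 0.40]

e_mkt = sum(pi * r for pi, r in zip(p, r_mkt))
e_ins = sum(pi * r for pi, r in zip(p, r_ins))
cov = sum(pi * (rm - e_mkt) * (ri - e_ins) for pi, rm, ri in zip(p, r_mkt, r_ins))
var = sum(pi * (rm - e_mkt) ** 2 for pi, rm in zip(p, r_mkt))

beta = cov / var              # about -0.85: the insurance hedges the market
alpha = e_ins - beta * e_mkt  # positive, despite a -0.18% expected return
print(f"beta={beta:.3f}, alpha={alpha:.2%}")
```

Despite its negative expected return, the insurance asset carries a strongly negative beta and therefore a positive alpha: alpha and total return tell very different stories.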

But there are a couple of issues at hand.  The first – and most important – is that we are placing linear performance metrics on non-linear functions of price (the Procrustean bed of performance reporting).  The non-linearity makes $latex \alpha$ and $latex \beta$ nonsense numbers.  The second is that we have non-linear risk-return preferences, which means that even if our returns were a simple, linear function of price, the application of linear performance metrics remains somewhat non-intuitive.  The third is that we commonly estimate the $latex \alpha$-$latex \beta$ framework from historical data, and the empirical data may not appropriately reflect the magnitude or frequency of the risks we are pricing against.

Ultimately, these topics are highly relevant to us as tactical asset allocators because the $latex \alpha$-$latex \beta$ framework does not accurately capture our non-linear and discontinuous return stream, making it difficult to fit into a traditional mean-variance utility framework.  A linear regression just does not cut it when investors are risk averse and care about higher moments and tail events.  However, in a more scenario-based framework, it becomes clear that even if a tactical manager underperforms his stated benchmark on both a total return and an alpha basis, an investor may still rationally choose to invest once risk aversion is taken into account.  Much like evaluating insurance, understanding the premium you pay (how much the manager may underperform the benchmark, and when) for the protection you get (the confidence in the downside protection) is more important than $latex \alpha$ and $latex \beta$.

### Corey Hoffstein

Corey is co-founder and Chief Investment Officer of Newfound Research, a quantitative asset manager offering a suite of separately managed accounts and mutual funds. At Newfound, Corey is responsible for portfolio management, investment research, strategy development, and communication of the firm's views to clients. Prior to offering asset management services, Newfound licensed research from the quantitative investment models developed by Corey. At peak, this research helped steer the tactical allocation decisions for upwards of \$10bn. Corey holds a Master of Science in Computational Finance from Carnegie Mellon University and a Bachelor of Science in Computer Science, cum laude, from Cornell University. You can connect with Corey on LinkedIn or Twitter.