The Research Library of Newfound Research

Month: July 2017

Building an Unconstrained Sleeve

We’re often asked about how to build an unconstrained sleeve in a portfolio.

Our view is that your mileage will largely vary by where you are trying to go.  With that in mind, we focus on three objectives:

  • Sleeves that seek to hedge equity losses.
  • Sleeves that seek significant equity upside capture while reducing downside.
  • Sleeves that seek an absolute return profile.

We explore how these sleeves can be built using common strategies such as tactical equity, minimum volatility equity, managed futures, risk parity, global contrarian, alternative income, and traditional U.S. Treasuries.

You can find the full presentation below.

 


 

Combining Tactical Views with Black-Litterman and Entropy Pooling

This post is available as a PDF download here.

Summary

  • In last week’s commentary, we outlined a number of problems faced by tactical asset allocators in actually implementing their views.
  • This week, we explore popular methods for translating a combination of strategic views and tactical views into a single, comprehensive set of views that can be used as the foundation of portfolio construction.
  • We explore Black-Litterman, which can be used to implement views on returns as well as the more recently introduced Entropy Pooling methodology of Meucci, which allows for more flexible views.
  • For practitioners looking to implement tactical views into a number of portfolios in a coherent manner, the creation of posterior capital market assumptions via these methods may be an attractive process.

Note: Last week’s commentary was fairly qualitative – and hopefully applicable for practitioners and non-practitioners alike.  This week’s is going to be a bit wonkier and is primarily aimed at those looking to express tactical views in an asset allocation framework.  We’ll try to keep the equations to a minimum, but if the question, “how do I create a posterior joint return distribution from a prior and a rank view of expected asset class returns?” has never crossed your mind, this might be a good week to skip.

In last week’s commentary, we touched upon some of the important details that can make the actual implementation and management of tactical asset allocation a difficult proposition.[1]  Specifically, we noted that:

  1. Establishing consistent measures across assets is hard (e.g. “what is fair value for a bond index and how does it compare to equities?”);
  2. There often are fewer bets being made, so position sizing is critical;
  3. Cross-asset dynamics create changing risk profiles for bets placed;
  4. Tactical decisions often explicitly forego diversification, increasing the hurdle rate.

We’ll even add a fifth, sixth, and seventh:

  5. Many attractive style premia trades (e.g. momentum, value, carry, and trend) require leverage or shorting. Many other tactical views (e.g. a change in yield curve curvature or a change in credit spreads) can require leverage and shorting to neutralize latent risk factors and allocate risk properly.
  6. Combining (potentially conflicting) tactical views is not always straightforward.
  7. Incorporating tactical views into a preexisting policy portfolio – which may include long-term strategic views or constraints – is not obvious.

This week, we want to address how points #2-7 can be addressed with a single comprehensive framework.[2]

What is Tactical Asset Allocation?

As we hinted in last week’s commentary, we’re currently smack dab in the middle of writing a book on systematic tactical asset allocation.

When we sat down to write, we thought we’d start at an obvious beginning: defining “what is tactical asset allocation?”

Or, at least, that was the plan.

As soon as we sat down to write, we got a case of serious writer’s block.  Which, candidly, gave us deep pause.  After all, if we struggled to even write down a succinct definition for what tactical asset allocation is, how in the world are we qualified to write a book about it?

Fortunately, we were eventually able to put digital ink to digital paper.  While our editor would not let us get away with a two-sentence chapter, our thesis can be more or less boiled down to:

Strategic asset allocation is the policy you would choose if you thought risk premia were constant; tactical asset allocation is the changes you would make if you believe risk premia are time-varying.[3]

We bring this up because it provides us a mental framework for thinking about how to address problems #2 – 7.

Specifically, given prior market views (e.g. expected returns and covariances) that serve as the foundation to our strategic asset allocation, can our tactical views be used to create a posterior view that can then serve as the basis of our portfolio construction process? 

Enter Black-Litterman

Fortunately, we’re not the first to consider this question.  We missed that boat by about 27 years or so.

In 1990, Fischer Black and Robert Litterman developed the Black-Litterman model while working at Goldman Sachs. The model provides asset allocators with a framework to embed opinions and views about asset class returns into a prior set of return assumptions to arrive at a bespoke asset allocation.

Part of what makes the Black-Litterman model unique is that it does not ask the allocator to necessarily come up with a prior set of expected returns.  Rather, it relies on equilibrium returns – or the “market clearing returns” – that serve as a neutral starting point.  To find these returns, a reverse optimization method is utilized.

Here, R = cSw, where R is our set of equilibrium returns, c is a risk aversion coefficient, S is the covariance matrix of assets, and w is the vector of market-capitalization weights of those assets.

The notion is that in the absence of explicit views, investors should hold the market-capitalization weighted portfolio (or the “market portfolio”).  Hence, the return views implied by the market-capitalization weights should be our starting point.

Going about actually calculating the global market portfolio weights is no small feat.  Plenty of ink has been spilled on the topic.[4]  For the sake of brevity, we’re going to conveniently ignore this step and just assume we have a starting set of expected returns.
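As a sketch of this reverse optimization, the snippet below computes implied equilibrium returns from a covariance matrix and a set of market-capitalization weights.  All of the numbers here are purely illustrative assumptions, not the actual global market portfolio.

```python
import numpy as np

# Illustrative three-asset example: covariance matrix (S),
# market-capitalization weights (w), and risk-aversion coefficient (c).
S = np.array([
    [0.04,  0.012,  0.002],
    [0.012, 0.0225, 0.003],
    [0.002, 0.003,  0.0025],
])
w = np.array([0.5, 0.3, 0.2])  # market-capitalization weights
c = 2.5                        # risk-aversion coefficient

# Reverse optimization: the returns implied by holding the market
# portfolio are R = c * S * w.
R = c * S.dot(w)
```

Note that the riskier, more heavily weighted assets come out with the higher implied returns, which is exactly the intuition: the market-clearing returns are those that justify the weights investors already hold.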

The idea behind Black-Litterman is to then use a Bayesian approach to combine our subjective views with these prior equilibrium views to create a posterior set of capital market assumptions.

Specifically, Black-Litterman gives us the flexibility to define:

  • Absolute asset class return views (e.g. “I expect U.S. equities to return 4%”)
  • Relative asset class return views (e.g. “I expect international equities to outperform U.S. equities by 2%”)
  • The confidence in our views

Implementing Black-Litterman

We implement the Black-Litterman approach by constructing a number of special matrices.

  • P: Our “pick matrix.” Each row tells us which asset classes we are expressing a view on.  We can think of each row as a portfolio.
  • Q: Our “view vector.” Each row tells us what our return view is for the corresponding row in the pick matrix.
  • O: Our “error matrix.” A diagonal matrix that represents the uncertainty in each of our views.

Given these matrices, our posterior set of expected returns is:

E[R] = ((tS)-1 + P’O-1P)-1((tS)-1R + P’O-1Q)

If you don’t know matrix math, this might be a bit daunting.

At the highest level, our results will be a weighted average of our prior expected returns (R) and our views (Q).  How do we compute the weights?  Let’s walk through it.

  • t is a scalar. Generally, small.  We’ll come back to this in a moment.
  • S is the prior covariance matrix. Now, the covariance matrix represents the scale of our return distribution: i.e. how far away from the expectation we believe our realized returns could fall. What we need, however, is some measure of uncertainty in our actual expected returns.  E.g., if our extracted equilibrium expected return for stocks is 5%, how certain are we it isn’t actually supposed to be 4.9% or 5.1%? This is where t comes back.  We use a small t (generally between 0.01 and 0.05) to scale S to create our uncertainty estimate around the expected return. (tS)-1, therefore, is our certainty, or confidence, in our prior equilibrium returns.
  • If O is the uncertainty in our view on that portfolio, O-1 can be thought of as our certainty, or confidence, in each view.
    Each row of P is the portfolio corresponding to our view. P’O-1P, therefore, can be thought of as the transformation that turns view uncertainty into asset class return certainty.
  • Using our prior intuition of (tS)-1, (tS)-1R can be thought of as certainty-scaled prior expected returns.
  • Q represents our views (a vector of returns). O-1Q, therefore, can be thought of as certainty-scaled views.  P’O-1Q takes each certainty-scaled view and translates it into cumulative asset-class views, scaled for the certainty of each view.

With this interpretation, the second term – (tS)-1R + P’O-1Q – is a weighted average of our prior expected returns and our views.  The problem is that we need the sum of the weights to be equal to 1.  To achieve this, we need to normalize.

That’s where the first term comes in.  (tS)-1 + P’O-1P is the sum of our weights.  Multiplying the second term by ((tS)-1 + P’O-1P)-1 is effectively like dividing by the sum of weights, which normalizes our values.
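Putting the matrix math above into code, the sketch below computes the posterior expected returns for a toy two-asset example with a single absolute view.  The inputs – including t = 0.025 – are illustrative assumptions.

```python
import numpy as np

def black_litterman_posterior(R, S, P, Q, O, t=0.025):
    """Posterior expected returns:
    E[R] = ((tS)^-1 + P' O^-1 P)^-1 ((tS)^-1 R + P' O^-1 Q)."""
    tS_inv = np.linalg.inv(t * S)
    O_inv = np.linalg.inv(O)
    A = tS_inv + P.T @ O_inv @ P      # "sum of weights" (first term)
    b = tS_inv @ R + P.T @ O_inv @ Q  # certainty-scaled priors + views
    return np.linalg.solve(A, b)

# Toy two-asset example with one absolute view: "asset 1 returns 4%."
R = np.array([0.05, 0.03])                 # prior expected returns
S = np.array([[0.04, 0.01], [0.01, 0.0225]])
P = np.array([[1.0, 0.0]])                 # pick matrix: view on asset 1
Q = np.array([0.04])                       # view vector
O = np.array([[0.001]])                    # view uncertainty
post = black_litterman_posterior(R, S, P, Q, O)
```

Note that even though the view only mentions asset 1, asset 2’s posterior return moves as well: the positive covariance between the two assets drags it down alongside the view.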

Similar math has been derived for the posterior covariance matrix as well, but for the sake of brevity, we’re going to skip it.  A Step-by-Step Guide to the Black-Litterman Model by Thomas Idzorek is an excellent resource for those looking for a deeper dive.

Black-Litterman as a Solution to Tactical Asset Allocation Problems

So how does Black-Litterman help us address problems #2-7 with tactical asset allocation?

Let’s consider a very simple example.  Let’s assume we want to build a long-only bond portfolio blending short-, intermediate-, and long-term bonds.

For convenience, we’re going to make a number of assumptions:

  1. Constant durations of 2, 5, and 10 for each of the bond portfolios.
  2. Use current yield-to-worst of SHY, IEI, and IEF ETFs as forward expected returns.
  3. Use the prior 60 months of returns to construct the covariance matrix.

This gives us a prior expected return of:

       E[R]
SHY   1.38%
IEI   1.85%
IEF   2.26%

And a prior covariance matrix,

        SHY       IEI       IEF
SHY  0.000050  0.000177  0.000297
IEI  0.000177  0.000799  0.001448
IEF  0.000297  0.001448  0.002795

In this example, we want to express a view that the curvature of the yield curve is going to change.  We define the curvature as:

Curvature = y5-Year – ½(y2-Year + y10-Year)

Increasing curvature implies the 5-year rate will go up and/or the 2-year and 10-year rates will go down.  Decreasing curvature implies the opposite.

To implement this trade with bonds, however, we want to neutralize duration exposure to limit our exposure to changes in yield curve level and slope.  The portfolio we will use to implement our curvature views is the following: -2.5 units of SHY, +2 units of IEI, and -0.5 units of IEF.  These weights net duration exposure to zero (2 × -2.5 + 5 × 2 + 10 × -0.5 = 0), leaving the portfolio’s return approximately equal to -10-times the change in curvature.

We also need to note that bond returns have an inverse relationship with rate change.  Thus, to implement an increasing curvature trade, we would want to short the 5-year bond and go long the 2- and 10-year bonds.

Let’s now assume we have a view that the curvature of the yield curve is going to increase by 50bps over the next year.  We take no specific view as to how this curvature increase will unfold (e.g. the 5-year rate rising by 50bps, or the 5-year rate rising by 25bps and each of the 2-year and 10-year rates falling by 25bps, etc.).  This implies that the curvature bond portfolio has an expected return of negative 5%.

Implementing this trade in the Black-Litterman framework, and assuming a 50% certainty of our trade, we end up with a posterior distribution of:

       E[R]
SHY   1.34%
IEI   1.68%
IEF   1.97%

And a posterior covariance matrix,

        SHY       IEI       IEF
SHY  0.000049  0.000182  0.000304
IEI  0.000182  0.000819  0.001483
IEF  0.000304  0.001483  0.002864

We can see that while the expected return for SHY did not change much, the expected return for IEF dropped by 0.29%.
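To make the bond example concrete, here is a sketch of the curvature trade in code.  Two caveats: the pick-portfolio weights (-2.5, +2, -0.5) are our reconstruction from the stated durations, and the mapping from “50% certainty” to the view variance O (here, set equal to the prior variance of the view portfolio) is our assumption – so the posterior only approximately matches the tables above.

```python
import numpy as np

# Prior expected returns (yield-to-worst) and covariance from the text.
R = np.array([0.0138, 0.0185, 0.0226])  # SHY, IEI, IEF
S = np.array([[0.000050, 0.000177, 0.000297],
              [0.000177, 0.000799, 0.001448],
              [0.000297, 0.001448, 0.002795]])

# Duration-neutral curvature portfolio (reconstructed from durations
# 2 / 5 / 10): its return is roughly -10x the change in curvature,
# so a +50bp curvature view implies a -5% view return.
P = np.array([[-2.5, 2.0, -0.5]])
Q = np.array([-0.05])

# Assumption: "50% certainty" encoded by setting the view's variance
# equal to the prior variance of the view portfolio.
O = P @ S @ P.T
t = 0.025

# Posterior (update form): R + tS P' (P tS P' + O)^-1 (Q - P R)
adj = (t * S) @ P.T @ np.linalg.solve(P @ (t * S) @ P.T + O, Q - P @ R)
posterior = R + adj
```

Under these assumptions, all three posterior expected returns fall, with IEF (the longest duration, and most exposed to the view portfolio) falling the most – the same pattern as the tables above.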

The use of this model, then, is that we can explicitly use views about trades we might not be able to make (due to leverage or shorting constraints) to alter our capital market assumptions, and then use our capital market assumptions to build our portfolio.

For global tactical style premia – like value, momentum, carry, and trend – we no longer need to explicitly implement the trades.  With Black-Litterman, we can implement them as views, create a posterior return distribution, and use that distribution to create a portfolio that still satisfies our policy constraints.

The Limitations of Black-Litterman

Black-Litterman is a hugely powerful tool.  It does, however, have a number of limitations.  Most glaringly,

  • Returns are assumed to be normally distributed.
  • Expressed views can only be on returns.

To highlight the latter limitation, consider a momentum portfolio that ranks asset classes based on prior returns.  The expectation with such a strategy is that each asset class will outperform the asset class ranked below it.  A rank view, however, is inexpressible in a Black-Litterman framework.

Enter Flexible Views with Entropy Pooling

While a massive step forward for those looking to incorporate a variety of views, the Black-Litterman approach remains limited.

In a paper titled Fully Flexible Views: Theory and Practice[5], Attilio Meucci introduced the idea of leveraging entropy pooling to incorporate almost any view a practitioner could imagine.  Some examples include,

  • A prior that need not be normally distributed – or even be returns at all.
  • Non-linear functions and factors.
  • Views on the return distribution, expected returns, median returns, return ranks, volatilities, correlations, and even tail behavior.

Sounds great!  How does it work?

The basic concept is to use the prior distribution to create a large number of simulations.  By definition, each of these simulations occurs with equal probability.

The probability of each scenario is then adjusted such that all views are satisfied.  As there may be a number of such solutions, the optimal solution is the one that minimizes the relative entropy between the new distribution and the prior distribution.

How is this helpful?  Consider the rank problem we discussed in the last section.  To implement this with Meucci’s entropy pooling, we merely need to adjust the scenario probabilities until the following view is satisfied: under the new probabilities, the expected return of each asset class is greater than or equal to the expected return of the asset class ranked below it.

Again, our views need not be returns based.  For example, we could say that we believe the volatility of asset A will be higher than asset B.  We would then just adjust the probabilities of the simulations until that is the case.

Of course, the accuracy of our solution will depend on whether we have enough simulations to accurately capture the distribution.  A naïve numerical implementation that seeks to optimize over the probabilities would be intractable.  Fortunately, Meucci shows that the problem can be re-written such that the number of variables is equal to the number of views.[6]
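As a minimal sketch of the idea for a single rank-type view (Meucci’s full algorithm handles many simultaneous views via a dual optimization), we can tilt the scenario probabilities exponentially – the minimum relative-entropy adjustment – until the view holds.  The two simulated assets below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical prior: simulate scenarios where asset B has the higher
# expected return; our view is that A's expected return should be >= B's.
n = 100_000
r_a = rng.normal(0.04, 0.15, n)
r_b = rng.normal(0.06, 0.10, n)
q = np.full(n, 1.0 / n)   # equal prior scenario probabilities
d = r_a - r_b             # view statistic: we want E_p[d] >= 0

def tilted(lam):
    # Exponential tilting p_i ~ q_i * exp(lam * d_i) is the minimum
    # relative-entropy distribution that shifts E[d] upward.
    w = q * np.exp(lam * d)
    return w / w.sum()

# Bisect for the smallest lam >= 0 that satisfies the view.
lo, hi = 0.0, 50.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if tilted(mid) @ d >= 0.0:
        hi = mid
    else:
        lo = mid
p = tilted(hi)  # posterior scenario probabilities
```

With these probabilities in hand, posterior expected returns, volatilities, correlations, and so on are just probability-weighted statistics over the very same scenarios.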

A Simple Entropy Pooling Example

To see entropy-pooling in play, let’s consider a simple example.  We’re going to use J.P. Morgan’s 2017 capital market assumptions as our inputs.

In this toy example, we’re going to have the following view: we expect high yield bonds to outperform US small-caps, US small-caps to outperform intermediate-term US Treasuries, intermediate-term US Treasuries to outperform REITs, and REITs to outperform gold.  Exactly how much we expect them to outperform by is unknown.  So, this is a rank view.

We will also assume that we are 100% confident in our view.

The prior, and resulting posterior expected returns are plotted below.

We can see that our rank views were respected in the posterior.  That said, since the optimizer seeks a posterior that is as “close” as possible to the prior, we find that the expected returns of intermediate-term US Treasuries, REITs, and gold are all equal at 3%.

Nevertheless, we can see how our views altered the structure of other expected returns.  For example, our view on US small-caps significantly altered the expected returns of other equity exposures.  Furthermore, for high yield to outperform US small-caps, asset class expectations were lowered across the board.

Conclusion

Tactical views in multi-asset portfolios can be difficult to implement for a variety of reasons.  In this commentary, we show how methods like Black-Litterman and Entropy Pooling can be utilized by asset allocators to express a variety of views and incorporate these views in a cohesive manner.

Once the views have been translated back into capital market assumptions, these assumptions can be leveraged to construct a variety of portfolios based upon policy constraints.  In this manner, the same tactical views can be embedded consistently across a variety of portfolios while still acknowledging the unique objectives of each portfolio constructed.


[1] https://blog.thinknewfound.com/2017/07/four-important-details-tactical-asset-allocation/

[2] For clarity, we’re using “addressed” here in the loose sense of the word.  As in, “this is one potential solution to the problem.”  As is frequently the case, the solution comes with its own set of assumptions and embedded problems.  As always, there is no holy grail.

[3] By risk premia, we mean things like the Equity Risk Premium, the Bond Risk Premium (i.e. the Term Premium), the Credit Risk Premium, the Liquidity Risk Premium, et cetera.  Active Premia – like relative value – confuse this notion a bit, so we’re going to conveniently ignore them for this discussion.

[4] For example, see: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2352932

[5] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1213325

[6] Those looking to implement can find Meucci’s MatLab code (https://www.mathworks.com/matlabcentral/fileexchange/21307-fully-flexible-views-and-stress-testing) and public R code (https://r-forge.r-project.org/scm/viewvc.php/pkg/Meucci/R/EntropyProg.R?view=markup&root=returnanalytics) available.  We have a Python version we can likely open-source if there is enough interest.

Growth Optimal Portfolios

This post is available as a PDF download here.

Summary

  • Traditional portfolio management focuses explicitly on the trade-off between risk and return.
  • Anecdotally, investors often care more about the growth of their wealth. Due to compounding effects, wealth is a convex function of realized returns.
  • Within, we explore geometric mean maximization, an alternative to the traditional Sharpe ratio maximization that seeks to maximize the long-term growth rate of a portfolio.
  • Due to compounding effects, volatility plays a critical role in the growth of wealth. Seemingly lower return portfolios may actually lead to higher expected terminal wealth if volatility is low enough.
  • Maximizing for long-term growth rates may be incompatible with short-term investor needs. More explicit accounting for horizon risk may be prudent.

In 1956, J.L. Kelly published “A New Interpretation of Information Rate,” a seminal paper in betting theory that built off the work of Claude Shannon.  Within, Kelly derived an optimal betting strategy (called the Kelly criterion) for maximizing the long-term growth rate of a gambler’s wealth over a sequence of bets.  Key in this breakthrough was the acknowledgement of cumulative effects: the gambler would be reinvesting gains and losses, such that too large a bet would lead to ruin before any probabilistic advantage was ever realized.

Around the same time, Markowitz was laying the foundations of Modern Portfolio Theory, which relied upon mean and variance for the selection of portfolios.  Later work by Sharpe and others would identify the notion of the tangency portfolio: the portfolio that maximizes excess return per unit of risk.

Without leverage, however, investors cannot “eat” risk-adjusted returns.  Nor do they, anecdotally, really seem to care about it.  We, for example, have never heard of anyone opening their statement to look at their Sharpe ratio.

More academically, part of the problem with Markowitz’s work, as identified by Henry Latane in 1959, was that it did not provide an objective measure for selecting a portfolio along the efficient frontier.  Latane argued that for an investor looking to maximize terminal wealth (assuming a sequence of uncertain and compounding choices), one optimal strategy was to select the portfolio that maximized geometric mean return.

 

The Math Behind Growth-Optimal Portfolios

We start with the idea that the geometric mean return, g, of a portfolio – the value we want to maximize – will be equal to the annualized compound return:

1 + g = [ (1 + r1)(1 + r2)…(1 + rT) ]^(1/T)

With some slight manipulation, we find:

ln(1 + g) = (1/T)[ ln(1 + r1) + ln(1 + r2) + … + ln(1 + rT) ]

For linear returns with mean µ[1], we can use a Taylor expansion to approximate the log returns around their mean:

ln(1 + rt) ≈ ln(1 + µ) + (rt – µ)/(1 + µ) – (rt – µ)²/(2(1 + µ)²) + …

Dropping higher order terms and taking the expected value of both sides, we get:

E[ln(1 + rt)] ≈ ln(1 + µ) – σ²/(2(1 + µ)²)

Which can be expressed using the geometric mean return as:

ln(1 + g) ≈ ln(1 + µ) – σ²/(2(1 + µ)²)

Where σ is the volatility of the linear returns.
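A quick simulation – with an illustrative mean and volatility of our own choosing – confirms the approximation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Compare a simulated geometric mean return against the approximation
# ln(1 + g) ~ ln(1 + mu) - sigma^2 / (2 * (1 + mu)^2).
mu, sigma, T = 0.07, 0.16, 1_000_000
r = rng.normal(mu, sigma, T)  # linear returns

g_realized = np.exp(np.log1p(r).mean()) - 1
g_approx = np.exp(np.log1p(mu) - sigma**2 / (2 * (1 + mu) ** 2)) - 1
```

Both numbers land meaningfully below the 7% arithmetic mean: the variance term is exactly the drag discussed in the next section.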

 

Multi-Period Investing: Volatility is a Drag

At the end of the last section, we found that the geometric mean return is a function of the arithmetic mean return and variance, with variance reducing the growth rate.  This relationship may already be familiar to some under the notion of volatility drag.[2]

Volatility drag is the idea that the arithmetic mean return is greater than the geometric mean return – with the difference being due to volatility. Consider this simple, albeit extreme, example: on the first day, you make 100%; on the second day you lose 50%.

The arithmetic mean of these two returns is 25%, yet after both periods, your true compound return is 0%.

For less extreme examples, a larger number of periods is required.  Nevertheless, the effect remains: “volatility” causes a divergence between the arithmetic and geometric mean.
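In code, the two-period example looks like this:

```python
# The +100% / -50% example: the arithmetic mean return is 25%, but
# compounding the two periods leaves wealth exactly where it started.
returns = [1.00, -0.50]

arithmetic_mean = sum(returns) / len(returns)
wealth = 1.0
for r in returns:
    wealth *= 1 + r  # 2.0 after the gain, 1.0 after the loss
geometric_mean = wealth ** (1 / len(returns)) - 1
```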

From a pure definition perspective, this is true for returns.  It is, perhaps, somewhat misleading when it comes to thinking about wealth.

Note that in finance, we often assume that wealth is log-normally distributed (implying that the log returns are normally distributed).  This is important, as wealth should only vary between [0, ∞) while returns can technically vary between (-∞, ∞).

If we hold this assumption, we can say that the compounded return over T periods (assuming constant expected returns and volatilities) is[3]:

1 + R0,T = exp[ (µ – σ²/2)T + σ(ε1 + ε2 + … + εT) ]

Where εt is the random return shock – a standard normal random variable – at time t.

Using this framework, for large T, the median compounded return is:

Median[1 + R0,T] = exp[ (µ – σ²/2)T ]

What about the mean compounded return?  We can re-write our above framework as:

1 + R0,T = exp[ (µ – σ²/2)T ] × exp[ σ(ε1 + ε2 + … + εT) ]

Note that the second term is a log-normal random variable, the two terms are independent of one another, and that

E[ exp( σ(ε1 + ε2 + … + εT) ) ] = exp( σ²T/2 )

Thus,

E[ 1 + R0,T ] = exp[ (µ – σ²/2)T ] × exp( σ²T/2 ) = exp( µT )

The important takeaway here is that volatility does not affect our expected level of wealth.  It does, however, drive the mean and median further apart.

The intuition here is that while returns are generally assumed to be symmetric, wealth is highly skewed: we can only lose 100% of our money but can theoretically make an infinite amount.  Therefore, the mean is pushed upwards by the return shocks.

Over the long run, however, the annualized compound return does not approach the mean: rather, it approaches the median.  Consider that the annualized compounded return can be written:

(1 + R0,T)^(1/T) = exp( µ – σ²/2 ) × exp[ (σ/T)(ε1 + ε2 + … + εT) ]

Taking the limit as T goes to infinity, the second term approaches 1, leaving only:

exp( µ – σ²/2 )
Which is the annualized median compounded return.  Hence, over the long run, over one single realized return path, the investor’s growth rate should approach the median, not the mean, meaning that volatility plays a crucial role in long-term wealth levels.
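A simulation (with illustrative parameters of our own choosing) makes the mean/median divergence concrete:

```python
import numpy as np

rng = np.random.default_rng(1)

# Terminal wealth under the log-normal model: the mean grows at
# exp(mu * T) regardless of volatility, while the median grows at the
# slower rate exp((mu - sigma^2 / 2) * T).
mu, sigma, T, n_paths = 0.07, 0.20, 30, 200_000
shocks = rng.standard_normal(n_paths)
log_wealth = (mu - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * shocks
wealth = np.exp(log_wealth)

mean_wealth = wealth.mean()
median_wealth = np.median(wealth)
```

Even though both statistics are computed from the very same paths, the mean is pulled far above the median by a small number of extremely good outcomes – the skew discussed above.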

 

The Many Benefits of Growth-Optimal Portfolios

The works of Markowitz et al. and Latane have subtle differences.

  • Sharpe Ratio Maximization (“SRM”) is a single-period framework; Geometric Mean Maximization (“GMM”) is a multi-period framework.
  • SRM maximizes the expected utility of terminal wealth; GMM maximizes the expected level of terminal wealth.

Over time, a number of attributes regarding GMM have been proved.

  • Breiman (1961) – GMM minimizes the expected time to reach a pre-assigned monetary target V asymptotically as V tends to infinity.
  • Hakansson (1971) – GMM is myopic; the current composition depends only on the distribution of returns over the next rebalancing period.
  • Hakansson and Miller (1975) – GMM investors never risk ruin.
  • Algoet and Cover (1988) – Assumptions requiring the independence of returns between periods can be relaxed.
  • Ethier (2004) – GMM maximizes the median of an investor’s fortune.
  • Dempster et al. (2008) – GMM can create value even in the case where every tradeable asset becomes almost surely worthless.

With all these provable benefits, it would seem that for any investor with a sufficiently long investment horizon, the GMM strategy is superior.  Even Markowitz was an early supporter, dedicating an entire chapter of his book, Portfolio Selection: Efficient Diversification of Investments, to it.

Why, then, has GMM largely been ignored in favor of SRM?

 

A Theoretical Debate

The most significant early challenger to GMM was Paul Samuelson who argued that maximizing geometric mean return was not necessarily consistent with maximizing an investor’s expected utility.  This is an important distinction, as financial theory generally requires decision making be based on expected utility maximization.  If care is not taken, the maximization of other objective functions can lead to irrational decision making: a violation of basic finance principles.

 

Practical Issues with GMM

Just because the GMM strategy provably dominates the value of any other portfolio over a long horizon does not mean that it is “better” for investors over all horizons.

We use quotation marks around better because the definition is largely subjective – though economists would have us believe our preferences can be packaged nicely into utility functions.  Regardless,

  • Estrada (2010) shows that GMM portfolios are empirically less diversified and more volatile than SRM portfolios.
  • Rubinstein (1991) shows that it may take 208 years to be 95% confident that a Kelly strategy beats an all-cash strategy, and 4700 years to be 95% sure that it beats an all-stock strategy.

A horizon of 208 years, and especially 4700 years, has little applicability to nearly all investors.  For finite horizons, however, maximizing the long-term geometric growth rate may not be equivalent to maximizing the expected geometric return.

Consider a simple case with an asset that returns either 100% or -50% for a given year.  Below we plot the expected geometric growth rate of our portfolio, depending on how many years we hold the asset.

We can see that for finite periods, the expected geometric return is not zero, but rather asymptotically approaches zero as the number of years increases.
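The expectation behind that plot can be computed exactly from the binomial distribution of “up” years:

```python
from math import comb

# Exact expected annualized geometric return over T years for an asset
# returning +100% or -50% each year with equal probability.
def expected_geometric_return(T):
    total = 0.0
    for k in range(T + 1):                    # k = number of +100% years
        prob = comb(T, k) * 0.5 ** T          # binomial probability
        terminal = 2.0 ** k * 0.5 ** (T - k)  # terminal wealth multiple
        total += prob * terminal ** (1 / T)   # annualized gross return
    return total - 1

g1, g10, g50 = (expected_geometric_return(T) for T in (1, 10, 50))
```

At T = 1 the expected geometric return equals the 25% arithmetic mean; by T = 50 it has decayed to well under 1%, on its way to zero.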

 

Finite Period Growth-Optimal Portfolios

Since most investors do not have 4,700 years to wait, a more explicit acknowledgement of holding period may be useful.  There are a variety of approximations available to describe the distribution of geometric returns over a finite period (with complexity trading off with accuracy); one such approximation is:

E[g] ≈ µ – σ²/2 and V[g] ≈ σ²/T

Rujeerapaiboon, Kuhn, Wiesemann (2014)[4] propose a “robust” solution for fixed-mix portfolios (i.e. those that rebalance back to a fixed set of weights at the end of each period) and finite horizons.  Specifically, they seek to maximize the worst-case geometric growth rate (where “worst case” is defined by some probability threshold), under all probability distributions (consistent with an investor’s prior information).

If we simplify a bit and assume a single distribution for asset returns, then for a variety of worst-case probability thresholds, we can solve for the maximum growth rate.
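As a simplified sketch of this idea (not the authors’ actual optimization), we can grid-search a two-asset stock/bond mix – with hypothetical capital market assumptions – for the weight that maximizes the worst-case growth rate, using normal approximations for E[g] and V[g] (our assumption).

```python
import numpy as np
from statistics import NormalDist

# Illustrative two-asset ("stocks" / "bonds") capital market assumptions.
mu = np.array([0.065, 0.030])
vol = np.array([0.16, 0.05])
rho = 0.1
cov = np.array([[vol[0] ** 2, rho * vol[0] * vol[1]],
                [rho * vol[0] * vol[1], vol[1] ** 2]])
T = 5  # holding period in years

def worst_case_growth(w, eps):
    """Growth rate exceeded with probability 1 - eps, using the
    approximations E[g] = mu_p - var_p / 2 and V[g] = var_p / T."""
    mu_p = w @ mu
    var_p = w @ cov @ w
    z = NormalDist().inv_cdf(1 - eps)
    return mu_p - 0.5 * var_p - z * np.sqrt(var_p / T)

# For each worst-case threshold, find the stock weight that maximizes
# the worst-case growth rate.
grid = np.linspace(0.0, 1.0, 101)
optimal_stock_weight = {}
for eps in (0.5, 0.2, 0.05):
    rates = [worst_case_growth(np.array([s, 1.0 - s]), eps) for s in grid]
    optimal_stock_weight[eps] = float(grid[int(np.argmax(rates))])
```

Under these toy assumptions, tightening the threshold pulls the optimal stock weight down sharply – the risk-aversion behavior described below.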

As we would expect, the more certain we need to be of our returns, the lower our growth rate will be.  Thus, our uncertainty parameter – the worst-case probability threshold – can serve, in a way, as a risk-aversion parameter.

As an example, we can employ J.P. Morgan’s current capital market assumptions, our simulation-based optimizer, the above estimates for E[g] and V[g], and vary the probability threshold to find “robust” growth-optimal portfolios.  We will assume a 5-year holding period.

Source: Capital market assumptions from J.P. Morgan.  Optimization performed by Newfound Research using a simulation-based process to account for parameter uncertainty.  Certain asset classes listed in J.P. Morgan’s capital market assumptions were not considered because they were either (i) redundant due to other asset classes that were included or (ii) difficult to access outside of private or non-liquid investment vehicles. 

 

To make interpretation easier, we have color coded the categories, with equities in blue, fixed income in green, credit in orange, and alternatives in yellow.

We can see that even with our uncertainty constraints relaxed to 20% (i.e. our growth rate will only beat the worst-case growth rate 80% of the time), the portfolio remains fairly diversified, with large exposures to credit, alternatives, and even long-dated Treasuries largely used to offset equity risk from emerging markets.

While this is partly due to the generally bearish view most firms have on traditional equities, this also highlights the important role that volatility plays in dampening geometric return expectations.

 

Low Volatility: A Geometric Mean Anomaly?

By now, most investors are aware of the low volatility anomaly, whereby strategies that focus on low volatility or low beta securities persistently outperform expectations given by models like CAPM.

To date, there have been three behavioral arguments:

  1. Asset managers prefer to buy higher risk stocks in an effort to beat the benchmark on an absolute basis;
  2. Investors are constrained (either legally or preferentially) from using leverage, and therefore buy higher risk stocks;
  3. Investors have a deep-seated preference for lottery-type payoffs, and so buy riskier stocks.

In all three cases, investors overbid higher risk stocks and leave low-risk stocks underbid.

In Low Volatility Equity Investing: Anomaly or Algebraic Artifact, Dan diBartolomeo offers another possibility.[5]  He notes that while the CAPM says there is a linear relationship between systematic risk (beta) and reward, the CAPM is a single-period model.  In a multi-period model, there would be a convex relationship between geometric return and systematic risk.

Assuming the CAPM holds, diBartolomeo seeks to solve for the optimal beta that maximizes the geometric growth rate of a portfolio.  In doing so, he addresses several differences between theory and reality:

  • The traditional market portfolio consists of all risky assets, not just stocks. Therefore, an all stock portfolio likely has a very high relative beta.
  • The true market portfolio would contain a number of illiquid assets. In adjusting volatility for this illiquidity – which in some cases can triple risk values – the optimal beta would likely go down.
  • In adjusting for skew and kurtosis exhibited by financial time series, the optimal beta would likely go down.
  • In general, investors tend to be more risk averse than they are growth optimal, which may further cause a lower optimal beta level.
  • Beta and market volatility are estimated, not known. This causes an increase in measured asset class volatility and further reduces the optimal beta value.

With these adjustments, the compound growth rate of low volatility securities may not be an anomaly at all: rather, perception of outperformance may be simply due to a poor interpretation of the CAPM.

This is both good and bad news.  The bad news is that if the performance of low volatility is entirely rational, it’s hard for a manager to demand compensation for it.  The good news is that if this is the case, and there is no anomaly, then the performance cannot be arbitraged away.

 

Conclusion: Volatility Matters for Wealth Accumulation

While traditional portfolio theory leads to an explicit trade-off of risk and return, the realized multi-period wealth of an investor will have a non-linear response – i.e. compounding – to the single-period realizations.

For investors who care about the maximization of terminal wealth, a reduction of volatility, even at the expense of a lower expected return, can lead to a higher level of wealth accumulation.

This can be non-intuitive.  After all, how can a lower expected return lead to a higher level of wealth?  To invoke Nassim Taleb, in non-linear systems, volatility matters more than expected return.  Since wealth is a convex function of return, a single bad, outlier return can be disastrous.  A 100% gain is great, but a 100% loss puts you out of business.

With compounding, slow and steady may truly win the race.

It is worth noting, however, that the portfolio that maximizes long-run return may not necessarily best meet an investor’s needs (e.g. liabilities).  In many cases, short-run stability may be preferred at the expense of both long-run average returns and long-term wealth.


[1] Note that we are using µ here to represent the mean of the linear returns. In Geometric Brownian Motion, µ is the mean of the log returns.

[2] For those well-versed in pure mathematics, this is an example of the AM-GM inequality.

[3] For a more general derivation with time-varying expected returns and volatilities, please see http://investmentmath.com/finance/2014/03/04/volatility-drag.html.

[4] https://doi.org/10.1287/mnsc.2015.2228

[5] http://www.northinfo.com/documents/559.pdf
