
Summary

  • Calculating an optimal portfolio from a set of capital market assumptions (CMAs) is a straightforward quantitative exercise, but the results are highly dependent on the assumptions holding in the future.
  • Any portfolio that is initially assumed to be optimal will be sub-optimal if any single assumed parameter turns out to be different.
  • By utilizing multiple sets of capital market assumptions, we aim to mitigate the cost of being wrong.
  • Rather than looking for the optimal portfolio, finding a robust portfolio that is close to optimal may be a better long-run investment.

Risk can never be destroyed, only transformed.[1] If we reduce one risk, chances are that we are increasing another.

This is not as gloomy as it sounds. These trade-offs give us the ability to choose which risks we are exposed to based on our preferences and risk tolerance.

Consider homeowner’s insurance. The risk of a catastrophic loss, while unlikely, is worth insuring against for a premium. In the case of insurance, we know the downside we are trading away and the cost we are paying to do so. There is value in the smoothness.

The danger occurs when we trade a known risk for an unknown one. Before the financial crisis, someone insuring their home likely did not acknowledge the risk of the insurance company going bankrupt the week before their house burnt down. We generally stick with insurance companies that have good credit ratings and long histories so that this risk is negligible. But it is not zero.

Our strategic portfolios, recently updated for the quarter end, are a prime example of this smoothing and transference of risk. If you have read our commentaries in the past, you have likely seen some of the ways we account for a variety of risks.

In our portfolio construction process, we utilize three sets of capital market assumptions (CMAs). For each set of expected returns, volatilities, and correlations, we calculate the optimal portfolios along the efficient frontier. Then we blend the corresponding portfolios along the efficient frontier to arrive at the final result for the sleeve.[2]

We acknowledge that every set of capital market assumptions is likely to be wrong. What is calculated as an “optimal” portfolio ex-ante will almost certainly not be the optimal portfolio ex-post.

How does our blending process help smooth out this risk?

Three Different Frontiers

In mean-variance portfolio optimization, the goal is to invest in the portfolio along the efficient frontier with the maximum Sharpe ratio. By pairing this portfolio with cash or using leverage, the risk can be tailored to match any specified target.

In practice, we can forgo the use of cash or leverage and simply invest in a portfolio along the efficient frontier.
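
As a concrete sketch of this step, the snippet below computes the tangency (maximum Sharpe ratio) portfolio in closed form and scales it to a volatility target with cash or leverage. All numbers are hypothetical placeholders, not our capital market assumptions.

```python
import numpy as np

# Hypothetical two-asset capital market assumptions (illustrative only)
mu = np.array([0.06, 0.03])    # expected returns: stocks, bonds
vols = np.array([0.16, 0.05])  # volatilities
corr = 0.2
cov = np.outer(vols, vols) * np.array([[1.0, corr], [corr, 1.0]])
rf = 0.02                      # assumed risk-free rate

# Tangency (maximum Sharpe ratio) portfolio: weights proportional to
# the inverse covariance matrix times excess returns
w = np.linalg.solve(cov, mu - rf)
w_tangency = w / w.sum()

# Pair with cash (scale < 1) or leverage (scale > 1) to hit a 10% vol target
sigma = np.sqrt(w_tangency @ cov @ w_tangency)
scale = 0.10 / sigma
print("tangency weights:", w_tangency, "| risky allocation:", round(scale, 2))
```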

In an ideal world, our assumed expected returns, volatilities, and correlations would be valid ex-post, and our realized results would coincide with our predicted results.

However, this is rarely the case.

A more realistic scenario is that the realized parameters differ from the values used in the optimization. These differing parameters allow us to calculate three different frontiers (a code sketch follows the list):

  1. The Predicted Efficient Frontier – calculated as the optimal portfolios given the ex-ante expected parameters.
  2. The Realized Frontier – calculated using the portfolio weights from the predicted efficient frontier and the ex-post parameters. This represents the actual portfolio results.
  3. The Realized Efficient Frontier – calculated as the optimal portfolios given the ex-post parameters. This is what we would have obtained if we had known the ex-post parameters.
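
To make the three calculations concrete, the sketch below builds them from hypothetical predicted and realized parameters (not the values from our tables) and, for simplicity, assumes the covariances are estimated correctly so that only the expected returns differ.

```python
import numpy as np
from scipy.optimize import minimize

def frontier_weights(mu, cov, targets):
    """Long-only minimum-variance weights for a grid of target returns."""
    n = len(mu)
    out = []
    for t in targets:
        cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0},
                {"type": "eq", "fun": lambda w, t=t: w @ mu - t}]
        res = minimize(lambda w: w @ cov @ w, np.full(n, 1.0 / n),
                       bounds=[(0.0, 1.0)] * n, constraints=cons)
        out.append(res.x)
    return np.array(out)

def risk_return(weights, mu, cov):
    """Volatility and return of each weight vector under given parameters."""
    rets = weights @ mu
    vols = np.sqrt(np.einsum("ij,jk,ik->i", weights, cov, weights))
    return vols, rets

# Hypothetical predicted vs. realized expected returns (illustrative only)
mu_pred = np.array([0.060, 0.055, 0.020, 0.030])
mu_real = np.array([0.040, 0.070, 0.025, 0.020])
vols = np.array([0.16, 0.18, 0.04, 0.06])
corr = np.array([[ 1.0,  0.8, -0.2,  0.2],
                 [ 0.8,  1.0, -0.2,  0.2],
                 [-0.2, -0.2,  1.0,  0.5],
                 [ 0.2,  0.2,  0.5,  1.0]])
cov = np.outer(vols, vols) * corr  # covariances assumed well estimated

w_pred = frontier_weights(mu_pred, cov,
                          np.linspace(mu_pred.min(), mu_pred.max(), 20))
w_real = frontier_weights(mu_real, cov,
                          np.linspace(mu_real.min(), mu_real.max(), 20))

predicted_efficient = risk_return(w_pred, mu_pred, cov)  # frontier #1
realized            = risk_return(w_pred, mu_real, cov)  # frontier #2
realized_efficient  = risk_return(w_real, mu_real, cov)  # frontier #3
```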

In a two-asset world, if we assume that volatilities and correlations can be estimated with decent accuracy, then the allocations along the realized frontier are the same as those along the realized efficient frontier, as long as the rank ordering of the returns does not flip. If you invested in what you thought was the optimal portfolio for a given level of risk, the realized portfolio was likely still optimal, just different from what you expected.[3]

The potential differences become larger with more asset classes. For example, if U.S. Large-Cap equities, EAFE equities, Intermediate U.S. Treasuries, and corporate bonds have the expected returns, volatilities, and correlations shown in the table below, we can construct the predicted efficient frontier (#1). Assuming a set of realized parameters, also shown below, we can also calculate the other two frontiers.[4]

Calculations by Newfound Research. All results are hypothetical.

Mathematically, the realized frontier (#2) must lie at or below the realized efficient frontier (#3) at each level of risk because the weights from the predicted efficient frontier (#1) can only be optimal if the predicted parameters are actually realized. Any other realization of the parameters renders those weights sub-optimal.
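
In symbols, a predicted-frontier portfolio $w_{\text{pred}}$ is merely one feasible choice under the realized parameters $\mu_{\text{real}}$ and $\Sigma_{\text{real}}$, so its realized return cannot exceed that of the best portfolio at the same realized risk:

```latex
\left(w_{\text{pred}}\right)^{\top}\mu_{\text{real}}
\;\le\;
\max_{w}\left\{\, w^{\top}\mu_{\text{real}} \;:\;
w^{\top}\Sigma_{\text{real}}\,w \,\le\,
\left(w_{\text{pred}}\right)^{\top}\Sigma_{\text{real}}\,w_{\text{pred}} \,\right\}
```

Equality holds only in the lucky case where the predicted weights remain optimal under the realized parameters.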

This puts simplistic portfolio optimizers in a tough place.

If we invest in what we think is the optimal portfolio, we expose ourselves to the risk of being disappointed when the realized results are nowhere close to our predictions. And as the number of estimated parameters increases, our realized results are likely to land even further from what would have been optimal had we known the true parameters.

Smoothing out this risk is valuable not only for long-term results but also over shorter horizons, where large disappointments can tempt investors to abandon the process altogether.

Quantifying Sub-Optimality

Comparing the results of two portfolios must take both return and risk into account. Since we invest in the one portfolio that is optimal given our assumptions, one way to evaluate the realized result is to compare it against the portfolio in the realized set that would have targeted the same goal.

This goal could be a specified return or risk level. In the case of the QuBe model portfolios, the goal is a risk profile similar to a reference stock/bond allocation.

To compare these portfolios, we can look at the same type of portfolio on the realized frontier and move along its capital allocation line to the portfolio that has the same amount of risk as the predicted portfolio. The difference between the returns gives us a metric for comparison.

Graphically, this is the vertical distance of the portfolio from the capital allocation line that passes through the realized portfolio and the risk-free rate.
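
A minimal sketch of this metric, assuming we already know the realized risk and return of both the portfolio we held and the comparable portfolio on the realized frontier (all numbers below are hypothetical):

```python
def cal_deviation(sigma_held, mu_held, sigma_opt, mu_opt, rf=0.02):
    """Vertical distance of the held portfolio from the capital allocation
    line (CAL) through the comparable realized-frontier portfolio.

    The CAL's slope is that portfolio's realized Sharpe ratio; sliding
    along it to the held portfolio's risk level gives the return we could
    have earned, and the shortfall is the sub-optimality metric."""
    sharpe_opt = (mu_opt - rf) / sigma_opt
    return (rf + sharpe_opt * sigma_held) - mu_held

# Hypothetical example: we realized 8% vol and 4.5% return, while the
# comparable realized-frontier portfolio sat at 7.5% vol and 5% return.
print(cal_deviation(0.08, 0.045, 0.075, 0.05))  # ~0.007, a 0.7% give-up
```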

The chart below shows an example using the frontiers presented earlier and a risk-free rate of 2%.

Calculations by Newfound Research. All results are hypothetical.

The implication of comparing portfolios this way is that if the frontier is merely shifted along the capital allocation line, there is no measured difference between the portfolios. In that case, we essentially achieved the relative risk profile we predicted and were compensated with returns in proportion to the amount of risk we realized.

The Benefit of Blending

Within the optimizations for each set of capital market assumptions, we employ a robust method to reduce estimation error. 

In essence, the simulation approach supposes that returns evolve in a variety of different ways that are anchored to the capital market assumptions. This built-in uncertainty treats the capital market assumptions as an estimate of the simulated “true” scenarios. We want the portfolio that performs well in each “true” scenario.
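
We do not reproduce our exact simulation machinery here, but a stylized sketch in the spirit of resampled optimization, perturbing expected returns around the CMA anchor and averaging the per-scenario optimal weights, might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def robust_weights(mu, cov, rf=0.02, n_scenarios=500, shock=0.01):
    """Stylized resampling: treat the CMAs as an anchor, draw simulated
    "true" expected returns around that anchor, find the long-only
    tangency-style weights in each scenario, and average them."""
    total, used = np.zeros(len(mu)), 0
    for _ in range(n_scenarios):
        mu_true = rng.multivariate_normal(mu, shock**2 * np.eye(len(mu)))
        w = np.clip(np.linalg.solve(cov, mu_true - rf), 0.0, None)
        if w.sum() > 0:  # skip degenerate draws with no asset above cash
            total += w / w.sum()
            used += 1
    return total / used

# Hypothetical two-asset anchor (same placeholder numbers as before)
mu = np.array([0.06, 0.03])
cov = np.array([[0.0256, 0.0016], [0.0016, 0.0025]])
print(robust_weights(mu, cov))
```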

If the goal is to minimize the risk of being far away from the predicted results, then we must trade optimality, or at least the guise of optimality, for robustness. Utilizing three different sets of capital market assumptions provides three different anchor points for the true scenarios.

With no prior belief on how accurate each set of capital market assumptions is, we opt for simply averaging across the individual results from each data set.
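
Concretely, for each risk profile the blend is just an equal-weight average of the corresponding optimal portfolios. The weight vectors below are hypothetical placeholders:

```python
import numpy as np

# Hypothetical optimal weights for the same risk profile under each of
# three providers' CMAs (columns: the four assets from the earlier example)
w_provider_a = np.array([0.30, 0.20, 0.35, 0.15])
w_provider_b = np.array([0.25, 0.30, 0.30, 0.15])
w_provider_c = np.array([0.40, 0.10, 0.40, 0.10])

# No prior belief about which provider is most accurate: equal-weight blend
w_blend = np.mean([w_provider_a, w_provider_b, w_provider_c], axis=0)
```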

The charts below show the performance of each individual frontier calculated using a single set of capital market assumptions along with the blended result. In each graph, we assume a different provider of capital market assumptions is completely accurate. These graphs answer the question, “If the capital market assumptions from Provider A are right, how will my portfolio calculated using either the capital market assumptions from Provider B or a blend of all portfolios perform?”

Source: BlackRock, BNY Mellon, and J.P. Morgan. Calculations by Newfound Research. All results are hypothetical. Data as of June 30, 2017.

Source: BlackRock, BNY Mellon, and J.P. Morgan. Calculations by Newfound Research. All results are hypothetical. Data as of June 30, 2017.

Source: BlackRock, BNY Mellon, and J.P. Morgan. Calculations by Newfound Research. All results are hypothetical. Data as of June 30, 2017.

In each case, the data vendor who is assumed to have the accurate forecast has the best frontier since that one is optimal under those assumptions.

The frontiers calculated using the assumptions from the other vendors vary by different amounts along the frontier. The average deviations for each risk profile are shown below.

As an example, the BlackRock 0/100 bar is the average of that portfolio's deviations under each scenario: J.P. Morgan being right, BNY Mellon being right, and BlackRock being right (which contributes zero in this case, since the BlackRock portfolio is optimal under its own assumptions). The blend actually gets penalized more than the others here, since it never has a zero contribution from any set of CMAs.

Source: BlackRock, BNY Mellon, and J.P. Morgan. Calculations by Newfound Research. All results are hypothetical. Data as of June 30, 2017.

If we care more about large deviations from any given data source, we can look at the root mean squared deviation, which penalizes large divergences from zero (basically a standard deviation assuming a mean of zero).
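
Both aggregation schemes are simple to compute from the per-scenario deviations (hypothetical values shown):

```python
import numpy as np

# Hypothetical return deviations of one portfolio from the efficient
# frontier under each "this provider is right" scenario
deviations = np.array([0.004, 0.006, 0.003])

mean_dev = deviations.mean()            # simple average deviation
rmsd = np.sqrt(np.mean(deviations**2))  # penalizes large divergences more
```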

Source: BlackRock, BNY Mellon, and J.P. Morgan. Calculations by Newfound Research. All results are hypothetical. Data as of June 30, 2017.

As the risk profile gets more aggressive, the deviations, both simply averaged and squared, generally decrease. Under both schemes, the blended portfolio has the lowest or second lowest average for most of the risk profiles.

This is partly because the results from J.P. Morgan and BNY Mellon are relatively close. With three vendors, blending two similar ones with one that is different should be biased toward the majority.

But what if BlackRock is right?

Having a little of each is the most robust.

Under the BlackRock scenario, the blend portfolio beats all of the others; under the BNY Mellon scenario, the blend portfolio beats all but the most conservative J.P. Morgan portfolio; and under the J.P. Morgan scenario, the blend portfolio still beats all of the BlackRock portfolios and two of the six BNY Mellon portfolios.

Similar Results, Different Method

Having two data vendors with very similar results might seem redundant, but the important differences lie in the underlying allocations. Blending two portfolios that have nearly identical ex-ante risk and return characteristics can still have large benefits once the results are realized.

The charts below show how different even the most conservative and aggressive portfolios are for each set of capital market assumptions.

Source: BlackRock, BNY Mellon, and J.P. Morgan. Calculations by Newfound Research. All results are hypothetical. Data as of June 30, 2017.

Source: BlackRock, BNY Mellon, and J.P. Morgan. Calculations by Newfound Research. All results are hypothetical. Data as of June 30, 2017.

Conclusion

Capital market assumptions will rarely be exactly right, so relying on multiple sets is a way to reduce the risk of choosing the wrong one.

This still does not remove the risk that the market evolves in a manner wildly different from any of the capital market assumptions, one not captured by any simulated path, or at least not by enough paths to have an appreciable impact on the results. However, the fact that we can articulate this risk means that we can anticipate it.

Using multiple sets of capital market assumptions and blending the results reduces the likelihood of this happening by increasing the coverage over the parameter space.

In the case of our QuBe models, the inclusion of risk parity and the way that asset class allocations are implemented at the security level (e.g. risk-managed strategies) are ways to further manage the risk inherent to relying on a set of parameters.

Big institutions pour a lot of time and effort into deriving capital market assumptions. As the public availability of these data sets extends over time, we will be able to assess the accuracy of these assumptions more thoroughly.

Based on the currently available data, these assumptions can serve as good starting points for portfolio development when paired with a robust allocation process that acknowledges uncertainty. Still, we must not hinge our portfolios solely on a limited set of assumptions, because even the most rigorous models cannot fix “garbage in, garbage out”.


[1] https://blog.thinknewfound.com/2015/12/risk-cannot-destroyed/

[2] The other two sleeves consist of a reference point optimization and a risk parity model.

[3] If you were aiming to invest in the tangency portfolio, this could have changed, substantially.

[4] These realized parameters were generated using a single simulated 120-month GBM path with annual covariance shocks for each asset class.

Nathan is a Vice President at Newfound Research, a quantitative asset manager offering a suite of separately managed accounts and mutual funds. At Newfound, Nathan is responsible for investment research, strategy development, and supporting the portfolio management team.

Prior to joining Newfound, he was a chemical engineer at URS, a global engineering firm in the oil, natural gas, and biofuels industry where he was responsible for process simulation development, project economic analysis, and the creation of in-house software.

Nathan holds a Master of Science in Computational Finance from Carnegie Mellon University and graduated summa cum laude from Case Western Reserve University with a Bachelor of Science in Chemical Engineering and a minor in Mathematics.