*There is a PDF version of this post available for download here.*

**Summary**

- Long/short portfolios are helpful tools for quantifying the value-add of portfolio changes, especially for active strategies.
- In the context of fees, we can isolate the implicit fee of the manager’s active decisions (active share) relative to a benchmark and ask ourselves whether we think that hurdle is attainable.
- Barbelling low-fee beta with high-active-share, higher-fee managers may actually be cheaper to incorporate than holding managers found in the middle of the road.
- However, as long as investors still review their portfolios on an itemized basis, this approach runs the risk of introducing greater behavioral foibles than a more moderated – yet ultimately more expensive – approach.

After a lecture on cosmology and the structure of the solar system, William James was accosted by a little old lady.

“Your theory that the sun is the centre of the solar system, and the earth is a ball which rotates around it has a very convincing ring to it, Mr. James, but it’s wrong. I’ve got a better theory,” said the little old lady.

“And what is that, madam?” inquired James politely.

“That we live on a crust of earth which is on the back of a giant turtle.”

Not wishing to demolish this absurd little theory by bringing to bear the masses of scientific evidence he had at his command, James decided to gently dissuade his opponent by making her see some of the inadequacies of her position.

“If your theory is correct, madam,” he asked, “what does this turtle stand on?”

“You’re a very clever man, Mr. James, and that’s a very good question,” replied the little old lady, “but I have an answer to it. And it is this: The first turtle stands on the back of a second, far larger, turtle, who stands directly under him.”

“But what does this second turtle stand on?” persisted James patiently.

To this the little old lady crowed triumphantly. “It’s no use, Mr. James – it’s turtles all the way down.”

— J. R. Ross, *Constraints on Variables in Syntax*, 1967

**The Importance of Long/Short Portfolios**

Anybody who has read our commentaries for some time has likely found that we have a strong preference for simple models. Justin, for example, has a knack for turning just about everything into a conversation about coin flips and their associated probabilities. I, on the other hand, tend to lean towards more hand-waving, philosophical arguments (e.g. The Frustrating Law of Active Management[1] or that every strategy is comprised of a systematic and an idiosyncratic component[2]).

While not necessarily 100% accurate, the power of simplifying mental models is that it allows us to explore concepts to their – sometimes absurd – logical conclusion.

One such model that we use frequently is that the *difference between any two portfolios can be expressed as a dollar-neutral long/short portfolio.* For us, it’s long/short portfolios all the way down.

This may sound like philosophical gibberish, but let’s consider a simple example.

You currently hold Portfolio A, which is 100% invested in the S&P 500 Index. You are thinking about taking that money and investing it entirely into Portfolio B, which is 100% invested in the Barclays U.S. Aggregate Bond Index. How can you think through the implications of such a change?

One way of thinking through such changes is to recognize that there is some *transformation* that takes us from Portfolio A to Portfolio B, i.e. Portfolio A + X = Portfolio B.

We can simply solve for X by taking the difference between Portfolio B and Portfolio A. In this case, that difference would be a portfolio that is 100% long the Barclays U.S. Aggregate Bond Index and 100% short the S&P 500 Index.

Thus, instead of saying, “we’re going to hold Portfolio B,” we can simply say, “we’re going to continue to hold Portfolio A, but now *overlay* this dollar-neutral long/short portfolio.”

This may seem like an unnecessary complication at first, until we realize that any differences between Portfolios A and B are entirely captured by X. Focusing exclusively on the properties of X allows us to isolate and explore the impact of these changes on our portfolio, and it allows us to generalize to cases where we hold an allocation to X that is different than 100%.
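As a quick sketch, the decomposition is just element-wise subtraction of portfolio weights (the tickers and weights below are illustrative stand-ins for the two indices):

```python
# Portfolio A: 100% S&P 500. Portfolio B: 100% U.S. Aggregate Bonds.
a = {"S&P 500": 1.00, "US Agg Bonds": 0.00}
b = {"S&P 500": 0.00, "US Agg Bonds": 1.00}

# X = B - A: the long/short overlay that transforms A into B.
x = {asset: b[asset] - a[asset] for asset in a}

# X is dollar-neutral: the longs and shorts net to zero.
net_exposure = sum(x.values())
```

The same subtraction works for any pair of portfolios over their union of holdings, which is what lets us say "it's long/short portfolios all the way down."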

**Re-Thinking Fees with Long/Short Portfolios**

Perhaps most relevant, today, is the use of this framework in the context of fees.

To explore, let’s consider the topic in the form of an example. The iShares S&P 500 Value ETF (IVE) costs 0.18%, while the iShares Core S&P 500 ETF (IVV) is offered at 0.04%. Is it worth paying that extra 0.14%?

Or, put another way, does IVE stand a chance to make up the fee gap?

Using the long/short framework, one way of thinking about IVE is that IVE = IVV + X, where X is the long/short portfolio of active bets.

But are those active bets worth an extra 0.14%?

First, we have to ask, “how much of the 0.18% fee is actually going towards IVV and how much is going towards X?” We can answer this by using a concept called *active share*, which explicitly measures how much of IVE is made up of IVV and how much is made up of X.

*Active share* can be easily explained with an example.[3] Consider a portfolio that is 50% stocks and 50% bonds, which you want to transition to a portfolio that is 60% stocks and 40% bonds.

In essence, your second portfolio is equal to your first plus a portfolio that is 10% long stocks and 10% short bonds. Or, equivalently, we can think of the second portfolio as equal to the first plus a 10% position in a portfolio that is 100% long stocks and 100% short bonds.

Through this second lens, that 10% number is our active share.
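That active share figure can be computed as half the sum of absolute weight differences between the two portfolios (the standard Cremers–Petajisto formulation); a minimal sketch of the 50/50-to-60/40 example above:

```python
def active_share(weights_a, weights_b):
    # Half the sum of absolute weight differences between the two portfolios.
    assets = set(weights_a) | set(weights_b)
    return 0.5 * sum(abs(weights_b.get(k, 0.0) - weights_a.get(k, 0.0))
                     for k in assets)

before = {"stocks": 0.50, "bonds": 0.50}
after = {"stocks": 0.60, "bonds": 0.40}
share = active_share(before, after)  # 0.10: a 10% position in the 100/100 long/short
```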

Returning to our main example, IVE has a reported active share of 42% against the S&P 500[4].

Hence, we can say that IVE = 100% IVV + 42% X. This also means that 0.14% of the 0.18% fee is associated with our active bets, X. (We calculate this as 0.18% – 0.04% × 100% = 0.14%.)

If we take 0.14% and divide it by 42%, we get the implicit fee that we are paying for our active bets. In this case, 0.333%.
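The implied-fee arithmetic above is simple enough to capture in a few lines; a sketch, with fees expressed as decimals:

```python
def implied_active_fee(total_fee, passive_fee, active_share):
    # Fee left over after paying for the passive sleeve,
    # scaled up by the size of the active position.
    return (total_fee - passive_fee) / active_share

# IVE example from the text: 0.18% total, 0.04% passive core, 42% active share.
fee = implied_active_fee(0.0018, 0.0004, 0.42)  # ~0.00333, i.e. 0.333%
```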

So now we have to ask ourselves, “do we think that a long/short equity portfolio can return *at least* 0.333%?” We might want to dive more into exactly what that long/short portfolio looks like (i.e. what are the actual active bets being made by IVE versus IVV), but it does not seem so outrageous. It passes the sniff test.

What if IVE were actually 0.5% instead? Now we would say that 0.46% of the 0.5% is going towards our 42% position in X. And, therefore, the implicit amount we’re paying for X is actually 1.09%.

Am I confident that an equity long/short value portfolio can clear a hurdle of 1.09% with consistency? Much less so. Plus, the fee now eats a much more significant part of any active return generated. For example, if we think the alpha from the pure long/short portfolio is 3%, now roughly one-third of that is going towards fees.

With this framework in mind, it is no surprise that active managers have historically struggled so greatly to beat their benchmarks. Consider that, according to Morningstar[5], the dollar-weighted average fee paid to passive index funds was 0.25% in 2000, whereas it was 1% for active funds.

If we assume a *very generous* 50% active share for those active funds, we can use the same math as before to find that we were, in essence, paying a 1.50% fee for the active bets ((1.00% – 0.25%) / 50%). That’s a high hurdle for anyone to overcome.

And the closet indexers? Let’s be generous and assume they had an active share of 20% (which, candidly, is probably high if we’re calling them closet indexers). This puts the implied fee at *3.75%*! No wonder they struggled…

Today, the dollar-weighted average expense ratio for passive funds is 0.17% and for active funds, it’s 0.75%. To have an implied active fee of less than 1%, active funds at that level will have to have an active share of *at least* 58%.[6]

**Conclusion**

As the ETF fee wars rage on, with the fees for standard benchmarks plummeting on a near-daily basis, the only way an active manager can continue to justify a high fee is with an exceptionally high active share.

We would argue that those managers caught in-between – with average fees and average active share – are those most at risk of being disintermediated. Most investors would actually be better off splitting their exposure into cheaper beta solutions and more expensive, high-active-share solutions. Barbelling low-fee beta with high-active-share, higher-fee managers may actually be cheaper to incorporate than holding those found in the middle of the road.

The largest problem with this approach, in our minds, is behavioral. High active share should mean high tracking error, which means significant year-to-year deviation from a benchmark. So long as investors still review their portfolios on an itemized basis, this approach runs the risk of introducing greater behavioral foibles than a more moderated – yet ultimately more expensive – approach.

[1] https://blog.thinknewfound.com/2017/10/frustrating-law-active-management/

[2] https://twitter.com/choffstein/status/880207624540749824

[3] Perhaps it is “examples” all the way down.

[4] See https://tools.alphaarchitect.com

[5] https://corporate1.morningstar.com/ResearchLibrary/article/810041/us-fund-fee-study–average-fund-fees-paid-by-investors-continued-to-decline-in-2016/

[6] We are not saying that we need a high active share to predict outperformance (https://www.aqr.com/library/journal-articles/deactivating-active-share). Rather, a higher active share reduces the implicit fee we are paying for the active bets.

## Are Market Implied Probabilities Useful?

By Nathan Faber

On November 27, 2017

In Risk & Style Premia, Weekly Commentary

*This post is available as a PDF download here.*

**Summary**

Market-implied probabilities are just as the name sounds: weights that the market is assigning to an event based upon current prices of financial instruments. By deriving these probabilities, we can gain an understanding of the market’s aggregate forecast for certain events. Fortunately, the Federal Reserve Bank of Minneapolis provides a very nice tool for visualizing market-implied probabilities without us having to derive them.[1]

For example, say that I am concerned about inflation over the next 5 years. I can see how the probability of a large increase has been falling over time and how the probability of a large decrease has fallen recently, with both currently hovering around 15%.

*Historical Market Implied Probabilities of Large Moves in Inflation. Source: Minneapolis Federal Reserve.*

I can also look at the underlying probability distributions for these predictions, which are derived from the derivatives market, and compare the changes over time.

*Market Implied Probability Distributions for Moves in Inflation. Source: Minneapolis Federal Reserve.*

From this example, we can judge that not only has the market’s implied inflation forecast increased, but the precision has also increased (i.e. lower standard deviation) and the probabilities have been skewed to the left with fatter tails (i.e. higher kurtosis).

Inflation is only one of many variables analyzed.

Also available is implied probability data for the S&P 500, short and long-term interest rates, major currencies versus the U.S. dollar, commodities (energy, metal, and agricultural), and a selection of the largest U.S. banks.

With all the recent talk about low volatility, the data for the S&P 500 over the next 12 months is likely to be particularly intriguing to investors and advisors alike.

*Historical Market Implied Probabilities of Large Moves in the S&P 500. Source: Minneapolis Federal Reserve.*

The current market implied probabilities for both large increases and decreases (i.e. greater than a 20% move) are the lowest they have been since 2007.

**Interpreting Market Implied Probabilities**

A qualitative assessment of probability is generally difficult unless the difference is large. We can ask ourselves, for example, how we would react if the probability of a large loss jumped from 10% to 15%. We know that the latter case is riskier, but how does that translate into action?

The first step is understanding what the probability actually means.

Probability forecasts in weather are a good example of this necessity. Precipitation forecasts are a combination of forecaster certainty and coverage of the likely precipitation.[2] For example, if there is a 40% chance of rain, it could mean that the forecaster is 100% certain that it will rain in 40% of the area. Or it could mean that they are 40% certain that it will rain in 100% of the area. Or it could mean that they are 80% certain that it will rain in 50% of the area.

Once you know what the probability even represents, you can have a better grasp on whether you should carry an umbrella.

In the case of market-implied probabilities, what we have is the *risk-neutral* probability. These are the probabilities of an event given that investors are risk neutral; these probabilities factor in both the likelihood of an event and the cost in the given state of the world. These are not the real-world probabilities of the market moving by a given amount. In fact, they can change over time even if the real-world probabilities do not.

To illustrate these differences between a risk-neutral probability and a real-world probability, consider a simple coin flip game. The coin is flipped one time. If it lands on heads, you make $1, and if it lands on tails, you lose $1.

The coin is fair, so the probability for the coin flip is 50%. How much would you pay to play this game?

If your answer is *nothing*, then you are risk neutral, and the risk-neutral probability is also 50%.

However, risk-averse players would say, “you have to pay *me* to play that game.” In this case, the risk-neutral probability of a tails is *greater* than 50% because of the downside that players ascribe to that event.

Now consider a scenario where a tails still loses $1, but a heads pays out $100. Chances are that even very risk-averse players would pay close to $1 to play this game.

In this case, the risk-neutral probability of a heads would be much *less* than 50%: paying only $1 for a shot at $100 means the market is pricing the good state as if it were quite unlikely.

But in all cases, the actual likelihoods of heads and tails never changed; they still had a 50% real-world probability of occurring.
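Assuming zero interest rates and a single payoff date, the risk-neutral probability is simply whatever probability makes the game’s price equal its expected payoff; a sketch of the coin-flip examples:

```python
def risk_neutral_prob(price, gain, loss):
    # Solve price = q * gain + (1 - q) * loss for q, the risk-neutral
    # probability of the "gain" state (discounting ignored).
    return (price - loss) / (gain - loss)

q_fair = risk_neutral_prob(0.0, 1.0, -1.0)      # risk-neutral player: q = 0.5
q_averse = risk_neutral_prob(-0.20, 1.0, -1.0)  # paid $0.20 to play: heads q = 0.4, tails > 50%
q_big = risk_neutral_prob(1.0, 100.0, -1.0)     # $1 for the $100 game: q is tiny (~2%)
```

In every case the real-world probability of heads stays at 50%; only the prices, and hence the risk-neutral weights, change.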

As with the game, investors who operate in the real world are generally risk averse. We pay premiums for insurance-like investments to protect in the states of the world we dread the most. As such, we would expect the risk-neutral probability of a “bad” event (e.g. the market down more than 20%) to be higher than the real-world probability.

Likewise, we would expect the risk-neutral probability of a “good” event (e.g. the market up more than 20%) to be lower than the real-world probability.

**How Market Implied Probabilities Are Calculated**

*Note (or Warning): This section contains some calculus. If that is not of interest, feel free to skip to the next section; you won’t miss anything.*

For those interested, we will briefly cover how these probabilities are calculated to see what (or who), exactly, in the market implies them. The options market contains call and put options over a wide array of strike prices and maturities. If we assume that the market is free from arbitrage, we can transform the price of put options into call options through put-call parity.[3]

In theory, if we knew the price of a call option for every strike price, we could calculate the risk-neutral probability distribution, *f^{RN}*, as the second derivative with respect to the strike price:

f^{RN}(K) = e^{r(T–t)} ∂²C/∂K²

where *r* is the risk-free rate, *C* is the price of a call option, *K* is the strike price, and *T – t* is the time to option maturity. Since options do not exist at every strike price, a curve is fit to the data to make it a continuous function that can be differentiated to yield the probability distribution.
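To see the mechanics, the sketch below generates call prices from a toy Black-Scholes model (an illustrative assumption; in practice the prices come from the options market via a fitted curve) and recovers the risk-neutral density with a finite-difference second derivative:

```python
import math

def norm_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def bs_call(S, K, r, sigma, T):
    # Toy Black-Scholes call price, standing in for market-quoted prices.
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def rn_density(S, r, sigma, T, K, dK=0.01):
    # Breeden-Litzenberger: f(K) = e^{r(T-t)} * d^2C/dK^2, via finite differences.
    second_deriv = (bs_call(S, K + dK, r, sigma, T)
                    - 2 * bs_call(S, K, r, sigma, T)
                    + bs_call(S, K - dK, r, sigma, T)) / dK ** 2
    return math.exp(r * T) * second_deriv

# Sanity check: the recovered density should integrate to ~1 across strikes.
S, r, sigma, T = 100.0, 0.02, 0.20, 1.0
total = sum(rn_density(S, r, sigma, T, 50.0 + 0.5 * i) * 0.5 for i in range(301))
```

With real quotes, the curve fit across sparse strikes matters far more than the differencing scheme, particularly in the tails.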

Immediately, we see that the probabilities are set by the options market.

**Are Market Implied Probabilities Useful?**

Feldman et al. (2015), from the Minneapolis Fed, assert that market-based probabilities are a useful tool for policy makers.[4] Their argument centers on the fact that risk-neutral probabilities encapsulate both the probability of an event occurring – the real-world probability – and the cost/benefit of the event.

Assuming broad access to the options market, households or those acting on behalf of households can express their views on the chances of the event happening and the willingness to exchange cash flows in different states of the world by trading the appropriate options.

In the paper, the authors admit two main pitfalls:

- *Participation* – An issue can arise here since the people whose welfare the policy-makers are trying to consider may not be participating. Others outside the U.S. may also be influencing the probabilities.
- *Illiquidity* – Options do not always trade frequently enough in the fringes of the distribution where investors are usually most concerned. Because of this, any extrapolation must be robust.

However, they also refute many common arguments against using risk-neutral probabilities.

- *These are not “true” probabilities* – The fact that these market implied probabilities are model-independent and derived from household preferences, rather than from a statistician’s model with its own biased assumptions, is beneficial, especially since these market probabilities account for resource availability.
- *No household is “typical”* – In equilibrium, all households should be willing to rearrange their cash flows in different states of the world as long as the market is complete. Therefore, a policy-maker aligns their beliefs with those of the households in aggregate by using the market-based probabilities.

We have covered how policymakers often do not forecast very well themselves[5], which Ellison and Sargent argue may be intentional, stating that the FOMC may purposefully forecast incorrectly in order to form policy that is robust to model misspecification.[6]

Where a problem could arise is when an individual investor (i.e. a household) makes a decision for their own portfolio based on these risk-neutral probabilities.

We agree that having a financial market makes a “typical” investor more relevant than the “average fighter pilot” example in our previous commentary.[7] But what a central governing body uses to make decisions is different from what may be relevant to an individual.

The ability to be flexible is key. In this case, an investor can construct their own portfolio – the equivalent of each pilot constructing their own plane.

**Getting to Real World Probabilities**

Using the method outlined in Vincent-Humphreys and Noss (2012), we can transform risk-neutral probabilities into real-world probabilities, assuming that investor risk preferences are stable over time.[8]

Without getting too deep into the mathematical framework, the basic premise is that if we have a probability density function (PDF) for the risk-neutral probability, *f^{RN}*, with a cumulative density function (CDF), *F^{RN}*, we can multiply it by a calibration function, *C*, to obtain the real-world probability density function, *f^{RW}*; that is, f^{RW}(x) = C(F^{RN}(x)) × f^{RN}(x).

The beta distribution is a suitable choice for the calibration function.[9] Using a beta distribution balances being parsimonious – it only has two parameters – with flexibility, since it allows for preserving the risk-neutral probability distribution by simply shifting the mean and adjusting the variance, skew, and kurtosis.
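A sketch of that recalibration, using a normal density as a stand-in for the risk-neutral distribution (the actual inputs are option-implied) and a hand-rolled beta PDF as the calibration function; note that the reweighted density still integrates to one:

```python
import math

def beta_pdf(u, j, k):
    # Beta(j, k) density on [0, 1]; this plays the role of the calibration function.
    norm = math.gamma(j) * math.gamma(k) / math.gamma(j + k)
    return u ** (j - 1) * (1 - u) ** (k - 1) / norm

def norm_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def norm_cdf(x, mu, sigma):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def real_world_pdf(x, mu, sigma, j, k):
    # f_RW(x) = C(F_RN(x)) * f_RN(x): reweight the risk-neutral density.
    return beta_pdf(norm_cdf(x, mu, sigma), j, k) * norm_pdf(x, mu, sigma)

# Hypothetical risk-neutral distribution of 12-month returns, with
# illustrative beta calibration parameters.
mu, sigma, j, k = 0.05, 0.15, 1.64, 1.00
total = sum(real_world_pdf(-1.0 + 0.001 * i, mu, sigma, j, k) * 0.001
            for i in range(2001))
```

Substituting u = F^{RN}(x) shows why the total is preserved: the integral collapses to the beta density’s own integral over [0, 1].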

The beta distribution parameters are calculated using the realized value that the market implied probability represents (e.g. change in the S&P 500, interest rates, commodity prices, etc.) over the subsequent time period.

**Deriving the Real-World Probability for a Large Move in the S&P 500**

We have now covered what market-implied probabilities are and how they are calculated, and discussed their usefulness for policy makers.

But individual investors price risk differently based on their own situations and preferences. Because of this, it is helpful to strip off the market-implied costs that are baked into the risk-neutral probabilities. The real-world probabilities could then be used to weigh stress testing scenarios or evaluate the cost of other risk management techniques that align more with investor goals than option strategies.

Using the framework outlined above, we can go through an example of transforming the market implied probabilities of large moves in the S&P 500 into their real-world probabilities.

*Statistical aside: The options data starts in 2007, and with 10 years of data, we only have 10 non-overlapping data points, which reduces the power of the maximum likelihood estimate used to fit the beta distribution. However, with options expiring twice a month, we have 24 separate data sets to use for calculating standard errors. Since we are concerned more about the potential differences between the risk-neutral and real-world distributions, we could use the rolling 12-month periods and still see the same trends. As with any analysis with overlapping periods, there can be significant autocorrelation to deal with. By using the 6-month distribution data from the Minneapolis Fed, we could double the number of observations.*

Since the Minneapolis Fed calculates the market implied (risk-neutral) probability distribution and the summary statistics numerically, we must first translate it into a functional form to extend the analysis. Based on the data and the summary statistics, the distribution is neither normal nor log-normal. It is left-skewed and has fat tails most of the time.

*Market Implied Probability Distributions for Moves in the S&P 500. Source: Minneapolis Federal Reserve.*

We will assume that the distribution can be parameterized using a skewed generalized t-distribution, which allows for these properties and also encompasses a variety of other distributions, including the normal and t-distributions.[10] It has 5 parameters, which we will fit by matching the moments (mean, variance, skewness, and kurtosis) of the distribution along with the 90^{th} percentile value, since that tail of the distribution is generally the smaller of the two.[11]

We can check the fits using the reported median and the 10^{th} percentile values to see how well they match.

*Fit Percentile Values vs. Reported Values. Source: Minneapolis Fed. Calculations by Newfound Research.*

There are instances where the reported distribution is bi-modal and would not be as accurately represented by the generalized skewed t-distribution, but, as the above graph shows, the quantiles where our interest is focused line up decently well.

Now that we have our parameterized risk-neutral distribution for all time periods, the next step is to input the subsequent 12-month S&P 500 return into the CDF calculated at each point in time. While we don’t expect this risk-neutral distribution to necessarily produce a good forecast of the market return, this step produces the data needed to calibrate the beta function.
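This step is a probability integral transform: each realized return is run through the risk-neutral CDF that prevailed at the start of its period. A sketch with a normal stand-in CDF and hypothetical numbers (the actual analysis uses the fitted skewed generalized t-distributions):

```python
import math

def norm_cdf(x, mu, sigma):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Hypothetical (mean, stdev) of each period's fitted risk-neutral distribution,
# paired with that period's realized 12-month S&P 500 return.
periods = [(0.06, 0.15, 0.12), (0.04, 0.22, -0.38), (0.05, 0.14, 0.09)]

# Probability integral transform: well-calibrated forecasts would make these
# values look uniform on [0, 1]; they are the inputs to the beta-function fit.
pit = [norm_cdf(realized, mu, sigma) for mu, sigma, realized in periods]
```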

The graph below shows this CDF result over the rolling periods.

*Cumulative Probabilities of Realized 12-month S&P 500 Returns using the Risk-Neutral Distribution from the Beginning of Each Period. Source: Minneapolis Fed and CSI. Calculations by Newfound Research.*

The persistence of high and low values is evidence of the autocorrelation issue we discussed previously, since the periods are overlapping.

The beta distribution function used to transition from the risk-neutral distribution to the real-world distribution has parameters *j* = 1.64 and *k* = 1.00, with standard errors of 0.09 and 0.05, respectively.

We can see how this function changes at the endpoints of the 95% confidence intervals for each parameter as a way to assess the uncertainty in the calibration.

*Estimated Calibration Functions for 12-month S&P 500 Returns. Source: Minneapolis Fed and CSI. Calculations by Newfound Research. Data from Jan 2007 to Nov 2017.*

When we transform the risk-neutral distribution into the real-world distribution, the calibration function values that are less than 1 in the left tail reduce the probabilities of large market losses.

In the right tail, the calibration estimates show that real-world probabilities could be higher or lower than the risk-neutral probabilities, depending on the second parameter’s value in the beta distribution (this corresponds to *k* being either greater than or less than 1).

With the risk-neutral distribution and the calibrated beta distribution, we now have all the pieces to calculate the real-world distribution at any point in the options data set.

The graph below shows how these functions affect the risk-neutral probability density using the most recent option data. As expected, much more of the density is centered around the mode, and the distribution is skewed to the right, even using the bounds of the confidence intervals (CI) for the beta distribution parameters.

*Risk Neutral and Real-World Probability Densities. Source: Minneapolis Fed and CSI. Calculations by Newfound Research. Data as of 11/15/17. Past performance is no guarantee of future results.*

Based on this analysis, we see some interesting things occurring.

This also shows how looking at market implied probabilities can paint a skewed picture of the chances of an event occurring.

However, we must keep in mind that these real-world probabilities are still derived from the market-implied probabilities. In an efficient market world, all risks would correctly be priced into the market. But we know from the experience during the Financial Crisis that that is not always the case.

Our recommendation is to take all market probabilities with a grain of salt. Just because having a coin land on heads five times in a row has a probability of less than 4% doesn’t mean we should be surprised if it happens once. And coin flipping is something that we *know* the probability for.

Whether the market probabilities we use are risk-neutral or real-world, there are a lot of assumptions that go into calculating them, and the consequences of being wrong can have a large impact on portfolios. Risk management is important if the event occurs, regardless of how likely it is to occur.

As with the weather, a 10% chance of a large loss versus a 4% chance is not a big difference in absolute terms, but a large portfolio loss is likely more devastating than getting rained on a bit should you decide not to bring an umbrella.

**Conclusion**

Market implied probabilities are risk-neutral probabilities derived from the derivatives market. If we assume that the market is efficient and that there is sufficient investor participation in these markets, then these probabilities can serve as a tool for governing organizations to adjust policy going forward.

However, these probabilities factor in both the actual probability of an event *and* the perceived cost to investors. Individual investors will attribute their own costs to such events (e.g. a retiree could be much more concerned about a 20% market drop than someone at the beginning of their career).

If individuals want to assess the probability of the event actually happening in order to make portfolio decisions, then they have to focus on the real-world probabilities. Ultimately, an investor’s cost function associated with market events depends more on life circumstances than on aggregate market preferences. While a bad state of the world for an investor can coincide with a bad state of the world for the market (e.g. losing a job when the market tanks), risk in an individual’s portfolio should be managed for the individual, not the “typical household”.

While the real-world probability of an event is typically dependent on an economic or statistical model, we have presented a way to translate the market implied probabilities into real-world probabilities.

With a better handle on the real-world probabilities, investors can make portfolio decisions that are in line with their own goals and risk tolerances.

[1] https://www.minneapolisfed.org/banking/mpd

[2] https://www.weather.gov/ffc/pop

[3] https://en.wikipedia.org/wiki/Put%E2%80%93call_parity

[4] https://www.minneapolisfed.org/~/media/files/banking/mpd/optimal_outlooks_dec22.pdf

[5] https://blog.thinknewfound.com/2015/03/weekly-commentary-folly-forecasting/

[6] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2160157

[7] https://blog.thinknewfound.com/2017/09/the-lie-of-averages/

[8] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2093397

[9] The beta distribution takes arguments between 0 and 1, inclusive, and has a non-decreasing CDF. It was also used in Fackler and King (1990) – https://www.jstor.org/stable/1243146.

[10] https://cran.r-project.org/web/packages/sgt/vignettes/sgt.pdf

[11] Since we have 5 unknown parameters, we have to add in this fifth constraint. We could also have used the 10^{th} percentile value or the median. Whichever we use, we can see how well the other two align with the reported values.

[12] https://interactive.researchaffiliates.com/asset-allocation/

[13] https://www.gmo.com/docs/default-source/research-and-commentary/strategies/asset-allocation/the-s-p-500-just-say-no.pdf