At Newfound, we often use the phrase “model risk,” which captures both the uncertainty in the accuracy, applicability, or assumptions of our models and the probability of outright model failure.  In this rather long-winded post, we explore an example of how other market participants account for model uncertainty and what we consider responsible model usage.

In a recent blog post, Jared Woodward of Condor Options provided a quantitative argument for why he believes Nassim Nicholas Taleb is wrong: despite evidence that we may dramatically underestimate the probability of “fat tails,” we don’t necessarily mis-price the risk of them.  Whether Taleb or Woodward is correct is not the point of this post; rather, it is the line of argument that brings up an interesting detail we would like to highlight.

Jared’s evidence that Taleb may be philosophically correct but pragmatically wrong is that the long-run cost of insuring against low-probability, high-impact events actually exceeds the cost of the events themselves.  To prove his point, he demonstrates that insurance purchasers (option buyers) pay a premium outsized to the protection they receive.  Purchasing options is a “long convexity” (gamma or vega) position.  Convexity tells us that the valuation of the security depends on the variance of its underlying; securities with positive convexity are “long volatility” because, all else held equal, an increase in volatility increases the price of the security (which is why options are quoted in volatility: they are effectively instruments whose value changes as volatility does).  The simplest metaphor for being long convexity is purchasing a lottery ticket: our downside is fixed (the cost of the ticket) and our upside is variable (hopefully millions), so volatility is good for us.  On the other hand, if we are selling lottery tickets, we are short convexity: we collect many small, fixed premiums, but there is a chance we will have to pay out a much larger cost.  When you buy insurance, you are long convexity; when you sell it, you are short.
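The lottery-ticket intuition can be made concrete with a toy simulation.  The sketch below is illustrative only (it is not from the original post, and it uses normally distributed terminal prices purely for simplicity): the expected payoff of a call-like claim, max(S − K, 0), rises with volatility even though the average price does not move.

```python
import random

def expected_call_payoff(vol, spot=100.0, strike=100.0, n=100_000, seed=1):
    """Average payoff of max(S - K, 0) when the terminal price S is
    drawn from a normal distribution centered at `spot`.  A toy model
    chosen only to illustrate convexity, not to price real options."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        s = rng.gauss(spot, vol)
        total += max(s - strike, 0.0)
    return total / n

low  = expected_call_payoff(vol=10.0)   # calm underlying
high = expected_call_payoff(vol=30.0)   # volatile underlying

# The mean of S is 100 in both cases, yet the option-like payoff is
# worth roughly three times as much in the high-volatility case:
# positive convexity ("long volatility") at work.
```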

If we were an insurance agency, we’d likely develop a series of models for estimating the risks associated with the policies we offer, such as fire, flood, or tornado coverage.  If we estimate the true probability of a flood in a given region at 2% a year, we can equivalently say that we expect to pay out on the policy about once every 50 years.  If we price fairly, in line with this probability, our expected profit is zero, and if several low-probability events happen in succession, we go out of business.  More likely, we’d take our base probability and multiply it a few times over, so that even if several low-probability events happened back-to-back, the capital buffer we had built would keep us solvent.
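That reasoning can be sketched with a quick Monte Carlo (all figures here are made up for illustration, not calibrated to any real insurer's book): marking up the actuarially fair premium sharply reduces the probability of ruin.

```python
import random

def ruin_probability(annual_prob=0.02, premium_markup=1.0, years=50,
                     payout=1.0, trials=20_000, seed=42):
    """Estimate the chance an insurer goes bust over `years`, charging
    `premium_markup` times the actuarially fair premium.  Illustrative
    numbers only: one policy, one flood risk, no investment income."""
    rng = random.Random(seed)
    fair_premium = annual_prob * payout          # expected annual loss
    ruins = 0
    for _ in range(trials):
        capital = 0.0
        for _ in range(years):
            capital += premium_markup * fair_premium  # collect premium
            if rng.random() < annual_prob:            # flood occurs
                capital -= payout
            if capital < 0:                           # buffer exhausted
                ruins += 1
                break
    return ruins / trials

# Fairly priced insurance leaves no buffer; a markup builds one.
fair   = ruin_probability(premium_markup=1.0)
padded = ruin_probability(premium_markup=3.0)
```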

So why wouldn’t we expect the same from those selling options?  In fact, that is exactly what we see.  Below, we re-create a graph from Jared’s site, plotting the VIX index (implied volatility) against the realized volatility over the same period.

[Figure: VIX (implied volatility) vs. realized volatility]

We can see consistent evidence that option purchasers are paying a volatility premium (again, all else held equal, higher volatility levels imply higher option premiums).  Below, we plot the daily premium spread as well as the cumulative spread over time.


[Figure: Cumulative premium]
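The daily and cumulative spread series are simple to compute.  The sketch below uses invented numbers standing in for the implied (VIX) and subsequently realized volatility series:

```python
# Illustrative only: made-up implied (VIX-style) quotes and the
# volatility subsequently realized over each quote's horizon, in %.
implied  = [20.0, 22.5, 19.8, 31.0, 25.4]
realized = [15.2, 16.0, 14.9, 38.5, 20.1]

# Daily premium spread: positive when buyers overpaid for volatility
daily_spread = [i - r for i, r in zip(implied, realized)]

# Cumulative spread: running sum of the premium paid (or earned)
cumulative = []
running = 0.0
for s in daily_spread:
    running += s
    cumulative.append(running)

# Note how the single large negative spread (a 2008-style spike in
# realized vol) barely dents the steadily accumulating premium.
```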

Despite the large positive spike in the realized-versus-implied spread in 2008 (indicating that option purchasers “underpaid” for the volatility that actually occurred), 2008 is barely a blip on the cumulative cost radar.  This is only an evaluation of the volatility premium, however; how would it look if we actually implemented the trade by buying an at-the-money straddle, which is an effective play on the level of volatility?  Given that we know the volatility premium is not in our favor as option buyers, we can predict that the results are not pretty.  And they are not:

[Figure: Strategy returns]

On a cumulative basis, again we see that 2008 is a blip in our overall losses.

[Figure: Cumulative returns]

Ultimately, 2008 doesn’t even matter because by 1995, our portfolio is worth less than 1% of its original value.

[Figure: Portfolio value]

Of course, this is an extreme case: we are implementing the trades every day and compounding our losses very quickly.  But the lesson here is that insurance providers (option sellers, who are short convexity) seem to be dramatically over-charging for the insurance they provide.  In fact, if we use the “estimation error” between implied and realized volatility over the last 60 days to scale our implied volatility estimate, we go from historically over-estimating volatility by 36.5% on average to over-estimating it by only 1%.
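We read that adjustment as something like the following sketch.  The exact scaling is not spelled out here, so treat `scaled_implied` and its trailing-ratio logic as a hypothetical reconstruction:

```python
def scaled_implied(implied_hist, realized_hist, implied_today, window=60):
    """Scale today's implied volatility by the trailing average ratio of
    realized to implied volatility.  This scheme is our guess at the
    'estimation error' adjustment described in the text, not a quote of
    the original methodology."""
    pairs = list(zip(implied_hist, realized_hist))[-window:]
    avg_ratio = sum(r / i for i, r in pairs) / len(pairs)
    return implied_today * avg_ratio

# If implied vol has run at 20% while only 15% was realized, the
# adjustment shrinks today's 20% quote toward what has been realized.
adjusted = scaled_implied([20.0] * 60, [15.0] * 60, implied_today=20.0)
```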

What we would like to highlight here, however, is not an argument for whether Woodward or Taleb is correct, but rather the common-sense approach to model usage that it reveals.  One of the naive criticisms of the Black-Scholes model is that it assumes equity (log) returns follow an i.i.d. normal distribution, when everyone knows that equity returns are certainly not normally distributed.  Yet if market participants were using Black-Scholes at face value to price options, we would see the exact opposite result: selling insurance would be a losing business.
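For reference, a bare-bones Black-Scholes call pricer (the standard formula, stdlib only) makes the mechanics concrete: the volatility input is the only place a premium can hide, so padding it raises the price directly.  The 36.5% markup below reuses the figure quoted earlier; the other inputs are arbitrary.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call price; assumes i.i.d. normal
    log-returns -- exactly the 'faulty' assumption discussed above."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# A 30-day at-the-money call, priced at a "true" vol estimate of 15%
# and at the same estimate marked up by 36.5%:
fair_price   = bs_call(100, 100, 30 / 365, 0.02, 0.15)
marked_price = bs_call(100, 100, 30 / 365, 0.02, 0.15 * 1.365)
```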

What we see in the options market is that, to counter this “faulty” assumption (as well as to provide a cushion for the replication error of insurance sellers who hedge discretely), insurance providers charge a premium of roughly 36.5%, on average, over their true volatility estimate.  In other words, if a normal distribution says that the probability of a 3-sigma loss is 0.13%, our insurance providers are pricing as if the probability of that same loss were 1.4%: more than a 10-fold increase.  In terms of how frequently these monthly events will occur, the premium shrinks the odds from a once-every-65-years event (using accurate volatility estimates and normality assumptions) to a once-every-6-years event (puffing up volatility estimates to make up for the normality assumption).  Options traders assume a crash is around every corner.
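The arithmetic behind those figures can be checked directly under the normality assumption (stdlib only; 36.5% is the markup quoted earlier):

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Probability of a loss at least 3 sigma below the mean under normality
p_true = norm_cdf(-3.0)                    # ~0.13%

# With volatility marked up 36.5%, the same absolute loss sits only
# 3 / 1.365 ~ 2.2 "marked-up sigmas" out, so its implied probability is
p_marked = norm_cdf(-3.0 / 1.365)          # ~1.4%

# Translate monthly event probabilities into "once every N years"
years_true   = 1.0 / p_true / 12.0         # on the order of 60+ years
years_marked = 1.0 / p_marked / 12.0       # on the order of 6 years
```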

The point is that insurers are “responsible” model users.  They recognize a degree of uncertainty in their model estimates (the “vol-of-vol”), as well as potentially unrealistic assumptions (normality of returns), and price in a way that accounts for both.  If we knew with certainty what realized volatility would be over the next thirty days, then there would be a single fair price for an option at which neither seller nor buyer would profit over the long run.  Convexity sellers would demand a premium to this level, convexity buyers would demand a discount, and those who crossed the bid-ask spread would be considered insane.

But we don’t know anything with certainty, and as we look further out in the future, the picture only gets hazier.  Responsible model usage is similar to packing for a trip: we may look at the weather forecast as a general guideline for what to expect, but we’ll likely also pack an umbrella, “just in case,” to prepare for uncertain conditions.

We call this “pricing for model risk”: recognizing that there is a degree of uncertainty and a probability of failure in a model, then assuming that this uncertainty will go against you and adjusting your pricing or model usage appropriately.

Even non-quantitative value investors implicitly do this, looking for a “margin of safety” in their purchases to make up for uncertainty in their estimates of earnings, growth, and economic factors.  In other words, fair value is hard to estimate accurately, so we should always assume we are wrong.  The approach is common sense: why pay a price that requires our model to be correct 75% of the time if we can pay a price that only requires it to be correct 50% of the time, even if historically our model IS correct 75% of the time?

At Newfound, our use of models in portfolio development parallels the behavior of option sellers and value investors: we always try to build in a margin of safety, or an “uncertainty” factor around the accuracy of our models.  We strive to build our portfolios in a manner that assumes time-to-time model failure in excess of historic levels.  We consider this to be the only responsible manner to use models in portfolio development.

Corey is co-founder and Chief Investment Officer of Newfound Research, a quantitative asset manager offering a suite of separately managed accounts and mutual funds. At Newfound, Corey is responsible for portfolio management, investment research, strategy development, and communication of the firm's views to clients. Prior to offering asset management services, Newfound licensed research from the quantitative investment models developed by Corey. At peak, this research helped steer the tactical allocation decisions for upwards of $10bn. Corey holds a Master of Science in Computational Finance from Carnegie Mellon University and a Bachelor of Science in Computer Science, cum laude, from Cornell University. You can connect with Corey on LinkedIn or Twitter.