I was reading one of the recent Nautilus articles this morning, "Revisiting Moneyball with Paul DePodesta" -- an interview with the 2003 assistant general manager of the Oakland A's credited with developing a new way to interpret baseball player statistics that upended how players were scouted.  There was one particular quote by DePodesta that really resonated with me:

Through our analysis, we can get a real handle on what we ought to expect. But reality only happens once.

A background in probability and statistics allows you to see the future not as a thread of certainty, but rather as an infinite set of possibilities, some with higher likelihood than others.  This has several implications for quantitative investors.

Firstly, it highlights what quantitative investing seeks to do.  Quantitative investors use mathematics, statistics, and computer modeling to create a systematic, repeatable process that constantly puts them on the correct side of "lucky."  In other words, they see the future as a distribution of possible events and try to give themselves a positive expected value across those events.  In the events where they lose, they try to minimize the loss.  At the end of the day, in the words of DePodesta,

We know we’re still going to be wrong often, but we’re at least trying to give ourselves positive odds.

We're playing a distribution.  We're trying to skew the odds in our favor; the stylized sketch below puts rough numbers on that idea.  But we have to recognize that we have no control over how the future will play out: only over how we position ourselves (and our portfolios) within it.  This also means that while we may position our portfolio for gain in the vast majority of possible scenarios, it is possible that a succession of low-probability events occurs and turns into realized losses.  Without a view of probability, this normally leads to questioning along the lines of, "how did you not see that coming?"  Instead, we should ask ourselves, "is it possible that we attributed too low a probability to that event?"  I think Aaron Brown puts it best: "It is small harm if you assign a nonzero probability to a scenario that is in fact impossible. You might give up a little profit, but that's survivable.  It can be fatal to assign a zero probability to a scenario that is in fact possible."
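To put rough numbers on that idea, the Python sketch below treats the future as a handful of discrete scenarios, each with an assumed probability and payoff.  The scenarios, probabilities, and payoffs are entirely hypothetical; the point is only that a position can have a positive expected value across the distribution while still containing scenarios in which we lose.

```python
# A stylized sketch: the future as a discrete set of scenarios, each with a
# probability and a payoff.  All numbers are hypothetical and for
# illustration only.

scenarios = [
    # (description, probability, payoff in %)
    ("strong rally",    0.25,  12.0),
    ("modest gain",     0.40,   5.0),
    ("flat",            0.20,   0.0),
    ("modest loss",     0.10,  -4.0),
    ("severe drawdown", 0.05, -20.0),
]

expected_value = sum(prob * payoff for _, prob, payoff in scenarios)
worst_case = min(payoff for _, _, payoff in scenarios)

print(f"Expected value across scenarios: {expected_value:+.2f}%")  # +3.60%
print(f"Worst realized outcome:          {worst_case:+.2f}%")      # -20.00%
```

The expected value is positive, but in 15% of these made-up scenarios we still lose money -- and reality only happens once.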

Consider, for example, a "tail-hedge" fund that bleeds several hundred basis points a year, on top of management fees.  If the right risks are never realized, the hedge never pays off, and the fund will simply die a death of 1,000 paper cuts.  The bleed is like paying an insurance premium: if a hurricane or flood never happens, we never collect.  On the other hand, if the risks are realized again and again, the manager can look like a genius.  She was not necessarily smarter than her peers -- she simply put herself in a position to collect a large payoff if certain future events should come to fruition -- and they did.  (Of course, this is an oversimplification: there is a certain art in determining whether we are over- or underpaying for a certain amount of insurance based on our view of future probabilities versus the market's view.)
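A back-of-the-envelope version of that insurance arithmetic, with purely hypothetical numbers for the annual bleed, the tail payoff, and the probability we assign to the tail event:

```python
# Back-of-the-envelope sketch of the tail-hedge trade-off.  All numbers are
# hypothetical; the point is that the hedge's expected value hinges on the
# probability we assign to the tail event versus what we pay for protection.

annual_bleed = 0.03   # ~300 bps of premium paid per year
tail_payoff  = 0.40   # assumed payoff if the tail event is realized
p_tail       = 0.05   # our assumed annual probability of the tail event

expected_value = p_tail * tail_payoff - (1 - p_tail) * annual_bleed
breakeven_prob = annual_bleed / (annual_bleed + tail_payoff)

print(f"Expected annual value of the hedge: {expected_value:+.2%}")  # -0.85%
print(f"Break-even tail probability:        {breakeven_prob:.2%}")   # ~6.98%
```

Under these made-up numbers, the hedge only pays for itself if the true probability of the tail event is above roughly 7% -- which is exactly the "art" of deciding whether we are over- or underpaying relative to the market's implied view.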

Secondly, we have to recognize that the past is just one of an infinite number of possible pasts that we may have had.  This plays into how we utilize data to look at the future.  Just because something did happen does not mean it was certain, a priori, that it would happen.  Consider, for example, that the long-term expected equity risk premium is ~2.4% (see this report): this is the average excess return investors can expect, over all possible future scenarios, for bearing excess risk.  In some scenarios, the realized premium may be higher; in others, the risks we are being compensated for actually materialize -- as they did from 2000-2010 -- and the realized premium may even end up being negative.
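A quick simulation makes the point.  Assuming, purely for illustration, a true expected annual premium of 2.4% and 18% annual volatility of excess returns, the realized premium over any single decade can stray a long way from the expectation:

```python
# Hypothetical simulation: if the true expected annual equity risk premium is
# ~2.4%, how often does a single decade still deliver a negative realized
# premium?  The 18% annual volatility is an assumption for illustration only.

import random

random.seed(42)

EXPECTED_PREMIUM = 0.024   # assumed long-term expected annual excess return
ANNUAL_VOL       = 0.18    # assumed annual volatility of the excess return
YEARS            = 10
TRIALS           = 10_000

negative_decades = 0
for _ in range(TRIALS):
    # average realized excess return over one simulated decade
    realized = sum(random.gauss(EXPECTED_PREMIUM, ANNUAL_VOL)
                   for _ in range(YEARS)) / YEARS
    if realized < 0:
        negative_decades += 1

print(f"Simulated decades with a negative realized premium: "
      f"{negative_decades / TRIALS:.1%}")
```

In this toy setup, roughly a third of simulated decades end with a negative realized premium, even though the expected premium was positive the entire time.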

So as we derive our data from historical samples and build our backtests, we always have to keep in the back of our minds that just because a realized path did happen does not mean it was certain to.  In fact, a realized path could be made up entirely of low-probability events.

Finally, things change.  When the public became more aware of the role of performance-enhancing drugs in baseball, it forced DePodesta to re-evaluate his system.  Did PEDs make the last 15-20 years of statistics meaningless?  If you are using historical information to calibrate your model, your implicit assumption is that the near-term future looks like the past.  But how do you adjust your model, or your data, to account for such a dramatic change?

In quantitative investing, this is an open question: we're playing a distribution, but the distribution is always changing.  Sometimes just a little and sometimes a lot.  In baseball, the increased usage of PEDs changed the relevancy of the historical information being used.  The future no longer looked like the near-past.  I draw a direct parallel from this example to the dynamic window that drives our momentum models.  The example we often give as to why our models use a dynamic look-back period is that information flowing into the market can change the relevancy of historical data.  In 2006, the past several years of earnings information were relevant; in late 2008, only the previous several weeks were.
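As an illustration of the idea only -- this is a toy example, not our actual model -- a dynamic look-back can be sketched as follows: when recent volatility spikes relative to longer-term volatility, older data loses relevancy and the momentum window contracts.

```python
# A toy sketch of a dynamic look-back window (an illustration of the idea,
# not Newfound's actual model).  When short-term volatility rises relative to
# long-term volatility, older data loses relevancy and the window contracts.

from statistics import pstdev

def dynamic_lookback(returns, base_window=252, min_window=21,
                     short_span=21, long_span=252):
    """Shrink the look-back window when recent volatility exceeds
    longer-term volatility (all parameter choices are illustrative)."""
    short_vol = pstdev(returns[-short_span:])
    long_vol = pstdev(returns[-long_span:])
    ratio = long_vol / short_vol if short_vol > 0 else 1.0
    return max(min_window, int(base_window * min(ratio, 1.0)))

def momentum_signal(prices, returns):
    """+1 if price is above its level one look-back window ago, else -1.
    Assumes len(prices) is greater than the chosen window."""
    window = dynamic_lookback(returns)
    return 1 if prices[-1] > prices[-1 - window] else -1
```

In a calm 2006-style market the window stays long and years of history inform the signal; in a late-2008-style market the volatility ratio collapses the window and only the most recent weeks drive it.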

Baseball and quantitative asset management may not be the same, but DePodesta's methodology shares a lot of similarities with ours and highlights many of the same problems.  At Newfound, we believe that the four pillars of Quantitative Integrity that each model and portfolio is built on help address these shared issues.

When a model is designed to be simple, it is less likely to exploit quirks of historical data ("curve fitting" or "data mining") that only work in the rear-view mirror and fail to recognize that we saw only one realized path of many potential paths.

We design our models and portfolios to be robust across time-horizons, asset classes, and geographies.  This gives us more certainty that as the distribution changes, our model changes with it.  The example we frequently give is that if the S&P 500 suddenly started behaving like Gold, many models would break; because we utilize the same models on equities, fixed income instruments, and commodities, we have greater certainty in their robustness.

Our models must be adaptive to recognize that events can change the relevancy of historical information; the future may not look like the recent past, and therefore we have to adapt as quickly as possible to calibrate to the new market environment.

Our models are designed to be reactive.  Seeing the future as an infinite set of possibilities highlights the impractical nature of predicting.  By designing our models and portfolios around edges that can be exploited in a reactive manner, like momentum, we can take advantage of changing future probability distributions.  Prediction relies on the future looking like the past: a bet we are not willing to make.

Corey is co-founder and Chief Investment Officer of Newfound Research, a quantitative asset manager offering a suite of separately managed accounts and mutual funds. At Newfound, Corey is responsible for portfolio management, investment research, strategy development, and communication of the firm's views to clients.

Prior to offering asset management services, Newfound licensed research from the quantitative investment models developed by Corey. At peak, this research helped steer the tactical allocation decisions for upwards of $10bn.

Corey is a frequent speaker on industry panels and contributes to ETF.com, ETF Trends, and Forbes.com’s Great Speculations blog. He was named a 2014 ETF All Star by ETF.com.

Corey holds a Master of Science in Computational Finance from Carnegie Mellon University and a Bachelor of Science in Computer Science, cum laude, from Cornell University.

You can connect with Corey on LinkedIn or Twitter.
