At Newfound, we have always held the strict belief that financial markets cannot be predicted. Noisy data and incomplete theory guarantee model failure, and the dynamic, non-linear nature of economies and markets means that failure can have a catastrophic effect on portfolios. In other words, portfolios that rely heavily on the accuracy of predictions are short model risk. Very short. (In this sense, "short" means "hurt by"; if we were "long" model risk, we would benefit from it.)

The dictionary typically defines prediction and forecasting as synonyms, but we consider them to be different actions. A prediction is a specific statement about whether an event will or will not occur. A forecast defines the odds, providing a probability distribution over the event's possible outcomes.

In other words, if prediction tells us whether a given state of the world will or will not occur, forecasting tells us the probability of all potential states of the world.

Prediction with 100% accuracy provides perfect foresight into the future (which would make us very happy investors). A less accurate prediction model means we must begin to diversify our portfolio across multiple independent predictions (or "bets"). As we all found out in 2008, however, constructing a portfolio of independent bets is easier said than done.
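The value of diversifying across independent bets can be seen with a simple binomial calculation. The sketch below is purely illustrative (it is not Newfound's methodology, and the 55% accuracy figure is an assumption for the example): with per-bet accuracy only modestly above a coin flip, the probability that a majority of independent bets pay off rises steadily with the number of bets.

```python
from math import comb

def prob_majority_correct(p: float, n: int) -> float:
    """Probability that more than half of n independent bets,
    each correct with probability p, turn out correct."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# A 55%-accurate signal is barely better than a coin flip on one bet,
# but the edge compounds across many truly independent bets.
for n in (1, 11, 101):
    print(n, round(prob_majority_correct(0.55, n), 3))
```

The catch, as 2008 demonstrated, is the word "independent": when correlations spike, the bets stop being independent and the compounding of the edge disappears.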

Furthermore, constructing a portfolio of independent predictions is not what many portfolio managers do. Rather, they construct a macroeconomic prediction for the future and skew their portfolio towards that view. Pundits on CNBC discuss their predictions daily with no discussion of probability: their degree of certainty is 100%. While this makes good television (who wants to listen to someone saying that the probability of the Fiscal Cliff being resolved in time is 50%?), it makes for poor asset allocation decisions.

Our portfolio should be designed around forecasts. Forecasts give us the ability to design a portfolio that will provide adequate returns in all possible market states. To quote Aaron Brown in his book Red-Blooded Risk:

It is small harm if you assign a nonzero probability to a scenario that is in fact impossible. You might give up a little profit, but that's survivable. It can be fatal to assign a zero probability to a scenario that is in fact possible.

That is the danger of prediction: by definition, it assigns a zero probability to every possible state of the world other than the predicted one. Forecasts, in comparison, assign at least *some* probability to all events; even a low probability is better than zero.

This is where "black swans" come in. Black swan theory, as developed by Nassim Nicholas Taleb, claims that major events are often inappropriately rationalized after the fact, misleading us into believing they were, in fact, predictable. Rather, black swans fall into the category of *unknown unknowns*, as described by Donald Rumsfeld:

[T]here are known knowns; there are things we know that we know.

There are known unknowns; that is to say, there are things that we now know we don't know.

But there are also unknown unknowns – there are things we do not know we don't know.

By building a portfolio based on prediction, we are guaranteed to be blindsided by black swans. Since we cannot, by definition, put a probability on such an event, can a portfolio based on forecasting be robust to this error? One of the benefits of forecasting is that it need not describe the exact circumstances of the world, just the results. For example, in our portfolio construction we might consider a 0.01% probability of a simultaneous daily loss of 6-times volatility in every asset we hold. We need not describe what *causes* this event -- which may be a black swan -- but we can explore the *implications* of the event. Ray Dalio says it more eloquently: "Make sure that the probability of the unacceptable (i.e., the risk of ruin) is nil" (Principle #197). We may not know what the catalyst is, but we certainly can define what "unacceptable" is.
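A scenario check like the one described above can be sketched in a few lines. This is a minimal illustration, not an actual risk system: the weights, volatilities, and ruin threshold below are all assumptions invented for the example.

```python
# Illustrative sketch: check that a portfolio survives a hypothetical
# tail scenario without ever specifying what causes it.
# All numbers are assumptions for the example, not real portfolio data.

weights = {"stocks": 0.5, "bonds": 0.3, "commodities": 0.2}
daily_vol = {"stocks": 0.012, "bonds": 0.004, "commodities": 0.010}

TAIL_PROB = 0.0001       # 0.01% chance of the scenario on any given day
SIGMA_MULTIPLE = 6       # every asset loses 6x its daily volatility at once
RUIN_THRESHOLD = -0.25   # the one-day loss we define as "unacceptable"

# Portfolio loss if every asset draws down simultaneously
scenario_loss = -sum(w * SIGMA_MULTIPLE * daily_vol[a]
                     for a, w in weights.items())

# The forecast-based check: because the scenario carries a non-zero
# probability, the portfolio must be built to survive it.
survives = scenario_loss > RUIN_THRESHOLD
print(f"scenario loss: {scenario_loss:.2%}, survives: {survives}")
```

The point is that the cause of the scenario never appears in the code; only its implications do, which is exactly what lets the check cover events we cannot enumerate in advance.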

At Newfound, we embrace forecast-based strategy development and risk management. By definition, we cannot predict or even enumerate possible black swans -- the moment we describe them, they enter the realm of *known unknowns*. Just as we explored a scenario above, ascribing a non-zero probability to an unacceptable event even though we did not know its cause, we can do the same with regard to model failure.

For example, we know that our Information Mapping Technology can have difficulty calibrating when a time series is trend-less but volatility is increasing. If this condition persists, our more tactical strategies may incur whipsaw and trading costs. We don't have to predict the cause of such a scenario to understand its ramifications, and there is little cost to ascribing a non-zero probability to it when constructing a portfolio from such strategies.

In fact, our frequent follow-up question is, "What sister strategy can we develop that would benefit in such a scenario?" And our research continues...