In his book "The Signal and the Noise: Why So Many Predictions Fail -- but Some Don't", Nate Silver discusses the many subtleties of developing and utilizing predictive models. Two of his examples highlight the important difference between prediction and forecasting.

Silver first introduces the field of meteorology, where advances in computational power and data collection have led to increased short-term modeling accuracy. While we have a strong understanding of the mathematics behind the natural processes that govern weather down to the molecular level (fluid dynamics), our long-term prediction accuracy is limited by the fact that weather is a chaotic system. A chaotic system is one whose underlying process is (1) dynamic (outputs become inputs) and (2) non-linear. One implication of a chaotic system is that even the tiniest errors in data collection can lead to dramatically different outputs, a sensitivity more commonly known as the "butterfly effect." This makes long-term prediction difficult, as tiny, random errors in the initial data grow exponentially the further forward a simulation runs.
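
To make the butterfly effect concrete, here is a minimal sketch (my illustration, not an example from the book) using the logistic map, a textbook chaotic system: two simulations that start one part in a million apart soon bear no resemblance to each other.

```python
# Sensitivity to initial conditions in the logistic map (a stand-in for a
# chaotic weather model): two trajectories that begin a hair apart diverge.

def logistic_map(x, r=4.0):
    """One step of the logistic map; r = 4 puts it in the chaotic regime."""
    return r * x * (1.0 - x)

x_a, x_b = 0.400000, 0.400001   # "measurements" differing by one part in a million
for step in range(50):
    x_a, x_b = logistic_map(x_a), logistic_map(x_b)

print(f"after 50 steps: {x_a:.4f} vs {x_b:.4f}")
# The tiny initial error has compounded until the two paths are unrelated,
# which is why short-term forecasts can be sharp while long-term ones decay.
```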

We are then introduced to the field of seismology and earthquake modeling, a field that unfortunately has not seen the same increase in predictive accuracy as meteorology, though not for lack of effort. One of the fundamental problems with earthquake prediction is that, unlike weather patterns, we still do not fully understand how to model the stresses along fault lines and their branches and tributaries. To make matters worse, seismologists cannot even get an accurate reading of the stress levels that would help them develop these models; unlike meteorologists, who can "just look up," seismologists have to look kilometers beneath the Earth's surface.

Throughout the book, Silver stresses that models should be based upon a fundamental understanding of a process; the "story" a model tells should not be about the data, but rather be grounded in sound theory. To paraphrase Silver, throwing a lot of potential explanatory variables into a blender does not make "haute cuisine." Seismologists, unfortunately, have both underdeveloped theories and noisy data.

While we are getting no closer to predicting earthquakes (e.g. an earthquake will hit Los Angeles on December 28th), strides have been made in forecasting them (e.g. there is an 80% chance of an earthquake in Los Angeles in the next twenty years). Enter the Gutenberg-Richter law, a model suggesting that the relationship between earthquake magnitude and frequency follows a power law. In other words, for a given region, an earthquake measuring between 6.0 and 6.9 should occur roughly ten times as frequently as one between 7.0 and 7.9, which in turn should occur roughly ten times as frequently as one between 8.0 and 8.9.
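
As a rough illustration (the parameters here are hypothetical, not fitted to any real region), the Gutenberg-Richter relation is usually written log10(N) = a - b*M, where N is the annual rate of quakes at or above magnitude M; with b near 1, a common empirical value, each additional unit of magnitude is about ten times rarer.

```python
# A sketch of the Gutenberg-Richter relation, log10(N) = a - b*M, with
# illustrative parameters; N is the expected number of quakes per year
# at or above magnitude M.

a, b = 5.0, 1.0   # hypothetical regional parameters, not real estimates

def annual_rate(magnitude):
    """Expected quakes per year at or above `magnitude` under the power law."""
    return 10 ** (a - b * magnitude)

for m in (6.0, 7.0, 8.0):
    print(f"M >= {m}: about {annual_rate(m):.3f} per year")
# Each step up in magnitude cuts the expected frequency by a factor of ten.
```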

While this model does not tell us when an earthquake will occur, it can still provide us with a model for managing risk. If we live in a region where 5.0 quakes occur once every 3 years, the Gutenberg-Richter law tells us that 6.0 quakes should occur roughly once every 30 years and 7.0 quakes roughly once every 300 years. Even if there has never been a recorded 7.0 in the region, it would be prudent to consider the possibility when discussing building codes, emergency supplies, and community drills, especially since a 7.0 is exponentially more damaging than a 6.0, which is exponentially more damaging than a 5.0.
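
One way to put such recurrence intervals to work, sketched below under the simplifying assumption (mine, not the text's) that quakes arrive as a Poisson process, is to convert them into the chance of seeing at least one event over a planning horizon.

```python
import math

def chance_of_at_least_one(recurrence_years, horizon_years):
    """P(at least one event within the horizon) under a Poisson arrival model."""
    rate = 1.0 / recurrence_years
    return 1.0 - math.exp(-rate * horizon_years)

for recurrence in (3, 30, 300):   # the 5.0 / 6.0 / 7.0 intervals above
    p = chance_of_at_least_one(recurrence, horizon_years=50)
    print(f"once-every-{recurrence}-year quake, 50-year horizon: {p:.0%}")
# Even the never-recorded "once every 300 years" quake carries roughly a 15%
# chance over 50 years, too large to ignore when writing building codes.
```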

On October 29, 2012, Felix Salmon wrote a piece for Reuters titled "How Goldman Sachs protects itself from a hundred-year storm". Attached to the piece is a picture of a man stacking sandbags in front of Goldman Sachs' headquarters at 200 West in preparation for "Frankenstorm" Sandy. Mr. Salmon points out that the sandbags are likely a wasted gesture: the probability that the storm is strong enough to reach 200 West but not strong enough to render a couple of sandbags useless is extremely slim. The issue, as Mr. Salmon points out, is that once we developed the computational capability and the intellectual hubris, we began to design by model: "[t]he architects did lots of clever mathematics, or the actuaries did lots of clever sums, and soon there were dozens of huge buildings in the Manhattan flood zone." No amount of clever modeling can serve as protection when the tail event hits, and it was buildings like Trinity Church, built upon nothing more than the common sense of higher ground some 150 years before 200 West, that were at far less risk of flooding during Hurricane Sandy.

When incorporating models into our decision-making process, the greatest risk is believing we are meteorologists when we are really seismologists.

In asset management, our data is noisy, economies are complex and chaotic, and human behavior is often irrational: we are, without a doubt, seismologists. But just because our ability to predict accurately is of limited use does not mean our ability to forecast is useless. Coupled with "common sense" heuristics, it is upon forecasts that we can begin to construct long-term, stable financial plans. Much like using the Gutenberg-Richter law to model the frequency of catastrophic earthquakes and then designing building codes and public safety procedures accordingly, we can model loss scenarios and develop portfolio policies that ensure we remain within our long-term risk thresholds.
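
As a deliberately oversimplified illustration of that last idea, and not a description of Newfound's actual methodology, a loss-scenario model might simulate fat-tailed returns, estimate how often an annual loss breaches a threshold, and compare that frequency against the investor's stated tolerance. Every parameter below is an assumption chosen for the sketch.

```python
import math
import random

def fat_tailed_monthly_return():
    """Hypothetical monthly return: 0.5% mean, 4% scale, Student-t(4) tails."""
    z = random.gauss(0.0, 1.0)
    chi2_over_df = random.gammavariate(2.0, 1.0) / 2.0   # chi-square(4) / 4
    return 0.005 + 0.04 * z / math.sqrt(chi2_over_df)

def annual_loss_frequency(loss_threshold=-0.20, years=10_000):
    """Estimate how often a simulated year loses more than `loss_threshold`."""
    bad_years = 0
    for _ in range(years):
        growth = 1.0
        for _ in range(12):
            growth *= 1.0 + fat_tailed_monthly_return()
        if growth - 1.0 < loss_threshold:
            bad_years += 1
    return bad_years / years

freq = annual_loss_frequency()
print(f"modeled chance of a >20% loss in any given year: {freq:.1%}")
# If that frequency exceeds the stated risk threshold, it is the portfolio
# policy (allocation, hedging, rebalancing rules) that gets adjusted,
# not the forecast.
```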

At Newfound, we work to constantly remind ourselves of our position as seismologists. We actively avoid prediction, always attempt to reduce model complexity, and design our strategies for model failure. Our active models are reactive and have more in common with "early warning systems" than with predictive models. Early warning systems are used to detect moderate-to-large earthquakes, tsunamis, and other unpredictable natural disasters very quickly and to send warnings ahead of the disaster's arrival, allowing protective action to be taken and triggering automatic responses that limit human casualties and damage to critical infrastructure. Our goal is to reduce the long-term costs of false positives without reducing our ability to identify true positives. While this approach will always underperform one that employs a highly accurate predictive model (if such a model were to exist), we believe that when the tail event eventually hits, we would rather be sitting in Trinity Church than stacking sandbags at 200 West.