Summary

  • The world is awash with new data. Satellite imagery, shipping manifests, agricultural sensors, and more can provide untapped insights.
  • To understand how investors might benefit, we decompose investment strategies into three pieces: systematic rules, idiosyncratic decisions, and randomness.
  • We explore how new and unique data might help investors enhance decision making in each of these categories.
  • Ultimately, we believe that more data does not necessarily mean more meaningful data.  For most investors, financial planning problems are not solved by alpha, and their efforts are likely better spent managing beta risk, which will dominate returns.

This week I have the privilege of speaking at a number of events in New York City.

On Monday, I’ll be discussing factor investing, from a practitioner’s perspective, at Invesco’s Factor Council event.  On the panel with me is Scott Lavalle, Director of Investment Advisor Research at PNC Asset Management.

On Tuesday, I’ll be participating in a Barron’s roundtable discussion with Barry Ritholtz (CIO of Ritholtz Wealth Management), Dave Nadig (CEO of ETF.com), and Ben Fulton (CEO of Elkhorn Investments) about the current and future ETF landscape.

On Wednesday, I’ll be talking about the many uses (and potential abuses) of capital market assumptions at J.P. Morgan’s ETF symposium.  I’ll be sitting with John Bilton, Head of Global Strategy, Multi-Asset Solutions at J.P. Morgan.  He will be unveiling the updated 2018 outlook – so I’m excited for a front-row seat!

Finally, on Thursday, I’ll be at the Evidence-Based Investing Conference, where my panel is The Next Frontier: As Quants Tackle New Data Sets, Which Will Work and Which Are Just Gimmicks?  Ben Carlson will be moderating and other panelists include Kevin Quigg (Chief Strategist, ACSI Funds & Exponential ETFs), Leigh Drogen (CEO & Founder, Estimize), and Patrick O’Shaughnessy (Portfolio Manager, O’Shaughnessy Asset Management).

If you happen to be at any of these conferences, please come say hello!

While I hope to do a recap commentary of all these panels, I wanted to offer some ex-ante thoughts related to my panel at EBI.

The Next Frontier of Quant?

The panel description asks, “With satellite imagery and other non-traditional metrics now being exploited by professional investors, have we moved beyond the era of price-to-book and PE ratio into a new frontier?”

Without a doubt, this is a dense topic.  Yet, we believe it is a timely one.  Most of the low-hanging fruit, as far as systematic investment factors are concerned, has been commoditized and packaged into low-cost ETFs.[1]  Investment news seems to have turned its eye towards a new wave of funds, driven by the promise that machine learning (or even “AI”) techniques can discover profitable edges in unique and untapped data sources (e.g. satellite imagery, consumer patterns, digital sensors in agriculture, shipping manifests, et cetera).

We fully expect the next wave of investment products to embrace this approach.  Or, at the very least, leverage it for marketing spin.

I do my best to find a balance between an open mind and a healthy degree of skepticism when it comes to claims of innovation, particularly those promising excess returns.

To help maintain that balance, I’ve adopted a framework for thinking about investment strategies in general: one that helps me understand where new advances can be adopted.

The Simple Framework: Systematic Rules + Idiosyncratic Decisions + Randomness

I believe the performance of any investment strategy can be decomposed into the result of three pieces: systematic rules, idiosyncratic decisions, and randomness.

Systematic rules are the parts of the investment strategy that can be codified and turned into a set of consistently applied decisions.  For example, we can define a value strategy as one which buys equities exhibiting low price-to-book characteristics relative to their peers.
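As a minimal sketch of what such codification might look like (the five-ticker universe, the price-to-book values, and the quintile cutoff below are all hypothetical, purely for illustration):

```python
import pandas as pd

# Hypothetical universe with price-to-book ratios; in practice, these
# would come from a point-in-time fundamental database.
universe = pd.DataFrame({
    "ticker": ["AAA", "BBB", "CCC", "DDD", "EEE"],
    "price_to_book": [0.8, 1.2, 3.5, 0.6, 2.4],
})

# Systematic rule: buy the cheapest quintile by price-to-book,
# equally weighted.  The same rule is applied at every rebalance.
n_holdings = max(1, len(universe) // 5)
portfolio = universe.nsmallest(n_holdings, "price_to_book").copy()
portfolio["weight"] = 1.0 / len(portfolio)

print(portfolio)
```

The point is not the specific metric, but that every decision – universe, ranking, selection, weighting – is written down and applied consistently.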

Idiosyncratic decisions are the unique, special situations that cause a deviation from the systematic rules.  By definition, these deviations must be non-repeatable.  After all, if there were a consistent reason for deviating from the systematic rules, that reason could, in turn, be codified into a systematic rule.  As an example, Warren Buffett’s ability to buy warrants from Goldman Sachs was idiosyncratic due both to the environment (the 2008 crisis) and the person (I don’t know about you, but I certainly wasn’t offered this deal).

Finally, randomness is due to exogenous forces – both micro (e.g. size of bid/ask spread when we trade) and macro (e.g. political event) in nature – that are explicitly not considered in our decision-making process.[2]  I’m going to largely ignore this component, as it affects strategy performance, but not decision making.

This breakdown applies equally to quant strategies and fully discretionary managers.  While discretionary managers may eschew the idea that they can be replaced by a set of rules, the core of their strategy often can be.  Indeed, that is the entire notion behind factor investing and, more broadly, benchmarking in general.  Their value-add, then, is in knowing when to throw the rule book out.

When we discuss the next frontier of quant – in both new techniques and new data sets – this is the framework I apply.

Enhancing Systematic Rules with Better Data

Performance of the classic price-to-book value factor has waned over the last decade, causing many to ask whether mass adoption of the approach has finally eroded its efficacy.  While other value metrics have continued to shine – including price-to-earnings and enterprise-value-to-EBITDA – they remain largely driven by calculations based upon financial statements.

Are there new data sets that would give us better insight into the true financial stability of a company, or allow us to better estimate financial metrics before they are released?

Using satellite imagery to count shipping containers and estimate sales volume certainly tells a good story.  And, if I am being less skeptical, I can imagine that certain data sets may allow for better cross-geographic normalization of financial reporting (particularly in those countries with more poorly enforced standards).  At least one example I am aware of in this area is the notion of using resource efficiency metrics as a proxy for firm value.[3]

The growth of new information for us to pore over has been exponential.  In May 2013, it was reported that 90% of the world’s information had been generated over just the prior two years.[4]  While this new data may contain untapped and unique sources of information, it also poses two problems.

First, it’s new.  One of the many benefits of systematic rules is that we can backtest them to gain a better understanding of their statistical properties.  We can establish their robustness across time, asset classes, and geographies.  The application of truly new data will either be limited to higher-frequency domains (where statistical robustness can be checked) or rest largely upon a theoretical connection.
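To put rough numbers on the history problem, a common rule of thumb is that the t-statistic of a strategy’s mean return is approximately its annualized Sharpe ratio multiplied by the square root of the years observed.  A minimal sketch (the Sharpe ratios are illustrative assumptions):

```python
# Rule of thumb: t-stat ≈ annualized Sharpe ratio × sqrt(years observed).
# Solve t = 2 (a conventional significance bar) for the history required.
for sharpe in [0.25, 0.50, 1.00]:  # illustrative, assumed Sharpe ratios
    years_needed = (2.0 / sharpe) ** 2
    print(f"Sharpe {sharpe:.2f}: ~{years_needed:.0f} years for a t-stat of 2")

# Sharpe 0.25 needs ~64 years; 0.50 needs ~16; 1.00 needs ~4.  A truly new
# data set with only a few years of history cannot clear this bar at a
# monthly frequency, which is why higher-frequency domains are more testable.
```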

Second, not all new information is necessarily valuable.  Read the comments section of almost any YouTube video and you’ll immediately understand what I mean.  Even if we assume the signal-to-noise ratio remains constant over time, increasing the size of the data we have does not mean a sudden increase in our ability to identify meaningful data.  A tour through the Factor Zoo – with hundreds of systematic investment strategies on display – highlights this problem.  There may be truly new factors lurking in the data, but there are plenty of false ones as well.
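A minimal simulation makes the false-discovery problem concrete (every “factor” below is pure noise by construction):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# 300 candidate "factors," each a 10-year monthly return series of pure noise
n_factors, n_months = 300, 120
returns = rng.normal(loc=0.0, scale=0.04, size=(n_factors, n_months))

# t-statistic of each factor's mean monthly return
t_stats = returns.mean(axis=1) / (returns.std(axis=1, ddof=1) / np.sqrt(n_months))

# At a 5% two-sided significance level, roughly 5% of pure-noise factors
# will look "significant" purely by chance.
discoveries = int(np.sum(np.abs(t_stats) > 1.96))
print(f"{discoveries} of {n_factors} pure-noise factors cleared the bar")
```

Test enough candidate signals and some will always look good in-sample; only theory and out-of-sample evidence separate the real from the spurious.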

Ultimately, however, any value-add for systematic rules will be governed by the Frustrating Law of Active Management.[5]  If new data does prove useful, then for it to be exploited consistently, it will have to be kept secret and applied well below capacity.  Otherwise, for it to continue to work in the long run, it will have to be hard in the short run.  The alpha may be there, but there will be plenty of painful tracking error as well.

Making Better Idiosyncratic Decisions

As much as it pains me to say as a quant, the best area for improvement may be in the realm of idiosyncratic decisions.

Consider a few examples of how data may be used:

  • Semantic analysis of Twitter data may provide greater insight into consumer confidence than survey data during the holiday season.
  • Satellite imagery may provide a better view into post-hurricane damage, and therefore better express the risks realized by insurance companies.
  • Agricultural sensors may clue us into whether a harvest will be better or worse than expected.
  • On-the-ground reconnaissance of speculative real estate markets may be used to establish that mortgage brokers are selling risky mortgages to Wall Street banks, which are re-packaging them into collateralized debt obligations with significantly higher ratings.

The skeptic in me, however, still sees plenty of problems in this arena.  First, a manager has to conceptualize an idea.  Second, they must identify the data that would be relevant.  Third, they have to extract and clean it.  Fourth, they must establish meaning in the data.  Finally, they must place a correct trade upon it.

The last piece is especially important.  Consider that knowing Donald Trump was going to win the election was not necessarily the same as knowing how markets around the globe were going to react.

However, the optimist in me believes that increased access to information may help investors make better-informed decisions.  Like viewing the market through a kaleidoscope, each data set may paint a slightly different story.  Perhaps, in aggregate, better-informed decisions will lead to more efficient markets.

Nevertheless, even if we believe that managers can exploit unique data sources to enhance their idiosyncratic decision making, it may be hard for us, as investors, to capture that benefit.

First, we have to identify which manager we want to invest with.  This requires establishing confidence in a manager’s ability to make these idiosyncratic decisions, which is particularly difficult because each decision is, by nature, independent and unique.

Then, even if we believe a manager has the ability to identify them, are we willing to give them the flexibility to exploit them?  Would we be comfortable with the same manager buying credit default swaps one year and weather derivatives the next?  We shouldn’t forget that Michael Burry – whose Scion Capital profited handsomely from his purchase of credit default swaps – had to fight an investor revolt and freeze redemptions before his bet proved correct.

Conclusion

Alternative data sources offer the allure of untapped alpha.  However, we believe that extracting signal from noise will continue to prove difficult.  In particular, without a rich history, establishing statistical certainty in a new investment factor will be difficult.  Enhancing systematic decision making with new data may be a faith-based endeavor.

Idiosyncratic decisions may fare better, but establishing which managers will be able to exploit this benefit will remain a challenge.

In the grand scheme, for most investors, alternative data will likely be another glittering marketing distraction from what really matters: sound financial planning and risk management.

Alpha will not solve the financial planning needs of the vast majority of investors.  While there is nothing wrong with trying to better tilt the odds in our favor with alpha, beta risk will continue to dominate the returns of most investors.

[1] We will withhold judgement, for now, as to the quality of these implementations.

[2] Note that you can have systematic rules that include randomness.  For example, a rule that says, “I flip a coin at the end of each month and invest depending on whether it lands on Heads or Tails” is a random strategy, but an entirely systematic rule.
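A two-line sketch of such a rule (purely illustrative):

```python
import random

# A fully systematic rule whose outcome is random: each month, flip a
# fair coin and invest only if it lands on Heads.
invest_this_month = random.random() < 0.5  # True on "Heads"
```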

[3] https://www.osmosisim.com/us/research/#moreResearch

[4] https://www.sciencedaily.com/releases/2013/05/130522085217.htm

[5] https://blog.thinknewfound.com/2017/10/frustrating-law-active-management/

Corey is co-founder and Chief Investment Officer of Newfound Research, a quantitative asset manager offering a suite of separately managed accounts and mutual funds. At Newfound, Corey is responsible for portfolio management, investment research, strategy development, and communication of the firm's views to clients.

Prior to offering asset management services, Newfound licensed research from the quantitative investment models developed by Corey. At peak, this research helped steer the tactical allocation decisions for upwards of $10bn.

Corey is a frequent speaker on industry panels and contributes to ETF.com, ETF Trends, and Forbes.com’s Great Speculations blog. He was named a 2014 ETF All Star by ETF.com.

Corey holds a Master of Science in Computational Finance from Carnegie Mellon University and a Bachelor of Science in Computer Science, cum laude, from Cornell University.

You can connect with Corey on LinkedIn or Twitter.
