The Research Library of Newfound Research


15 Ideas, Frameworks, and Lessons from 15 Years

Today, August 28th, 2023, my company Newfound Research turns 15.  It feels kind of absurd saying that.  I know I’ve told this story before, but I never actually expected this company to turn into anything.  I started the company while I was still in undergrad, naming it Newfound Research after a lake my family used to visit in New Hampshire.  I fully expected the company to be shut down within a year, after which I would just go on to a career on Wall Street.

But here we are, 15 years later.  I’m not sure why, but this milestone feels larger than any recent birthday I can remember.  I’m so incredibly grateful for what this company has given me.  I’m grateful to my business partner, Tom.  I’m grateful to employees – both past and present – who dedicated part of their lives and careers to work here.  I’m grateful to our clients who supported this business.  I’m grateful for all the friends in the industry that I’ve made.  And I’m grateful to people like you who have given me a bit of a platform to explore the ideas I’m passionate about.

Coming up on this anniversary, I reflected quite a bit on my career.  And one of the things I thought about was all the lessons I’ve learned over the years.  And I thought that a fun way to celebrate would be to take the time and write down some of those ideas and lessons that have come to influence my thinking.

So, without further ado, here are 15 lessons, ideas, and frameworks from 15 years.

1.     Risk cannot be destroyed, only transformed.

For graduate school, I pursued my MS in Computational Finance at Carnegie Mellon University.  This financial engineering program is a cross-disciplinary collaboration between the finance, mathematics, statistics, and computer-science departments.

In practice, it was a study on the theoretical and practical considerations of pricing financial derivatives.

I don’t recall quite when it struck me, but at some point I recognized a broader pattern at play in every assignment.  The instruments we were pricing were always about the transference of risk in some capacity.  Our goal was to identify that risk, figure out how to isolate and extract it, package it into the appropriate product type, and then price it for sale.

Risk was driving the entire equation.  Pricing was all about understanding distribution of the potential payoffs and trying to identify “fair compensation” for the variety of risks and assumptions we were making.

For every buyer, there is a seller and vice versa and, at the end of the day, sellers who did not want risk would have to compensate buyers to bear it.

Ultimately, when you build a portfolio of financial assets, or even strategies, you’re expressing a view as to the risks you’re willing to bear.

I’ve come to visualize portfolio risk like a ball of play-doh.  As you diversify your portfolio, the play-doh is getting smeared over risk space.  For example, if you move from an all equity to an equity/bond portfolio, you might reduce your exposure to economic contractions but increase your exposure to inflation risk.

The play-doh doesn’t disappear – it just gets spread out.  And in doing so, you become sensitive to more risks, but less sensitive to any single risk in particular.

I’ll add that the idea of the conservation of risk is by no means unique to me.  For example, Chris Cole has said, on a number of occasions, that “volatility is never created or destroyed, only transmuted.”  In 2008, James Saft wrote in Reuters that “economic volatility, a bit like energy, cannot be destroyed, only switched from one form to another.”  In 2007, Swasti Kartikaningtyas wrote on the role of central counterparties in Indonesian markets, stating, “a simple entropy law for finance is that risks cannot be destroyed, only shifted among parties.”  In his 2006 book “Precautionary Risk Management,” Mark Jablonowski stated, “risk cannot be destroyed, it can only be divided up.”  In 1999, Clarke and Varma, writing on long-run strategic planning for enterprises, said, “like matter, risk cannot be destroyed.”

My point here is only that this idea is not novel or unique to me by any means.  But that does not make it any less important.

2.     “No pain, no premium”

The philosophy of “no pain, no premium” is just a reminder that over the long run, we get paid to bear risk.  And, eventually, risk is likely going to manifest and create losses in our portfolio.  After all, if there were no risk of losses, then why would we expect to earn anything above the risk-free rate?

Modern finance is largely based upon the principle that the more risk you take, the higher your expected reward.  And most people seem to inherently understand this idea when they buy stocks and bonds.

But we can generally expect the same to be true for many investment strategies.  Value investors, for example, are arguably getting paid to bear increased bankruptcy risk in the stocks they buy.

What about strategies that are not necessarily risk-based?  What about strategies that have a more behavioral explanation, like momentum?

At a meta level, we need the strategy to be sufficiently difficult to stick with to prevent the premium from being arbed away.  If an investment approach is viewed as easy money, enough people will adopt it that the inflows will drive out the excess return.

So, almost by definition, certain strategies – especially low frequency ones – need to be difficult to stick with for any premium to exist.  The pain is, ultimately, what keeps the strategy from getting crowded and allows the premium to exist.

3.     Diversifying, cheap beta is worth just as much as equally diversifying, expensive alpha.

I’ll put this lesson in the category of, “things that are obvious but might need to be said anyway.”

Our industry is obsessed with finding alpha.  But, for the most part, a portfolio doesn’t actually care whether something is alpha or beta.

If you have a portfolio and can introduce a novel source of diversifying beta, it’s not only likely to be cheaper than any alpha you can access, but you can probably ascribe a much higher degree of confidence to its risk premium.

For example, if you invest only in stocks, finding a way to thoughtfully introduce bonds may do much, much more for your portfolio over the long run, with a higher degree of confidence, than trying to figure out a way to pick better stocks.

For most portfolios, beta will drive the majority of returns over the long run.  As such, it will be far more fruitful to first exhaust sources of beta before searching for novel sources of alpha.

By the way, I’m pretty sure I stole this lesson title from someone, but I can’t find the original person who said it.  If it’s you, my apologies.

4.     Diversification has multiple forms.

In 2007, Meb Faber published his paper A Quantitative Approach to Tactical Asset Allocation where he explored the application of a 10-month moving average as a timing model on a variety of asset classes.

It will likely go down in history as one of the most well-timed papers in finance given the 2008 crisis that immediately followed and how well the simple 10-month moving average model would have done in protecting your capital through that event.  It’s likely the paper that launched one-thousand tactical asset allocation models.

In 2013, I wrote a blog post where I showed that the performance of this model was highly sensitive to the choice of rebalance date.  Meb had originally written the paper using an end-of-month rebalance schedule.  In theory, there was nothing stopping someone from running the same model and rebalancing on the 10th trading day of every month.  In the post, I showed the performance of the strategy when applied on every single possible trading day variation, from the 1st to the last trading day of each month.  The short-term dispersion between the strategies was astounding even though the long-run returns were statistically indistinguishable.  
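As a sketch of what that test looks like, here is a minimal, illustrative version in Python.  The prices are synthetic, the 210-day lookback is a rough stand-in for a 10-month moving average, and the `strategy_returns` helper is hypothetical – this is not the original study’s code, just the mechanics of varying the rebalance day:

```python
# Sketch of rebalance timing luck on synthetic data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_days = 252 * 20
daily_returns = rng.normal(0.0003, 0.01, n_days)  # synthetic market returns
prices = 100 * np.cumprod(1 + daily_returns)

def strategy_returns(offset, lookback=210, days_per_month=21):
    """Hold the market when price > trailing moving average, evaluated
    only on one specific trading day of each 'month' (the offset)."""
    position = 0.0
    out = np.zeros(n_days)
    for t in range(lookback, n_days):
        if t % days_per_month == offset:  # our rebalance day this month
            ma = prices[t - lookback:t].mean()
            position = 1.0 if prices[t] > ma else 0.0
        out[t] = position * daily_returns[t]
    return out

# One strategy variant per possible rebalance day of the month
variants = np.array([strategy_returns(k) for k in range(21)])
annualized = (1 + variants).prod(axis=1) ** (252 / n_days) - 1
print(f"dispersion across rebalance days: {annualized.max() - annualized.min():.2%}")
```

Even with identical rules and identical data, the only difference between the 21 variants is the day of the month they trade – and that alone produces measurable dispersion.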

And my obsession with rebalance timing luck was born.

Shortly thereafter my good friend Adam Butler pointed out to me that the choice of a 10-month moving average was just as arbitrary.  Why not 9?  Why not 11?  Why not 200 days?  Why a simple moving average and not an exponential moving average or simple time-series momentum?  Just like what I saw with rebalancing schedule, the long-run returns were statistically indistinguishable but the short-run returns had significant dispersion.

The sort of dispersion that put managers out of business.

Ultimately, I developed my view that diversification was three dimensional: what, how, and when.

What is the traditional form of diversification almost everyone is familiar with.  This is the diversification across securities or assets.  It’s the what you’re invested in.

How is the process by which investment decisions are made.  This includes diversification across different investment styles – such as value versus momentum – but also within a style.  For example, how are we measuring value?  Or what trend model and speed are we using?

When is the rebalance schedule.

Just as traditional portfolio theory tells us that we should diversify what we invest in because we are not compensated for bearing idiosyncratic risk, I believe the same is true across the how and when axes.

Our aim should be to diversify all uncompensated bets with extreme prejudice.

5.     The philosophical limits of diversification: if you diversify away all the risk, you shouldn’t expect any reward.

One of the most common due diligence questions is, “when doesn’t this strategy work?”  It’s an important question to ask for making sure you understand the nature of any strategy.

But the fact that a strategy doesn’t work in certain environments is not a critique.  It should be expected.  If a strategy worked all the time, everyone would do it and it would stop working.

Similarly, if you’re building a portfolio, you need to take some risk.  Whether that risk is some economic risk or process risk or path dependency risk, it doesn’t matter – it should be there, lurking in the background.

If you want a portfolio that has absolutely no scenario risk, you’re basically asking for a true arbitrage or an expensive way of replicating the risk-free rate.

In other words, if you diversify away all the risk in your portfolio – again, think of this as smearing the ball of play-doh really, really, really thin across a very large plane of risk scenarios – return should just converge to the risk-free rate.

If it doesn’t, you’d have an arbitrage: just borrow at the risk-free rate and buy your riskless, diversified portfolio.

But arbitrages don’t come around easy.  Especially for low-frequency strategies and combinations of low-Sharpe asset classes.  There is no magical combination of assets and strategies that will eliminate downside risk in all future states of the world.

A corollary to this point is what I call the frustrating law of active management.  The basic idea is that if an investment idea is perceived both to have alpha and to be “easy”, investors will allocate to it and erode the associated premium.  That’s just basic market efficiency.

So how can a strategy be “hard”?  Well, a manager might have a substantial informational or analytical edge.  Or a manager might have a structural moat, accessing trades others do not have the opportunity to pursue.

But for most major low-frequency edges, “hard” is going to be behavioral.  The strategy has to be hard enough to hold on to that it does not get arbitraged away.

Which means that for any disciplined investment approach to outperform over the long run, it must experience periods of underperformance in the short run.

But we can also invert the statement and say that for any disciplined investment approach to underperform over the long run, it must experience periods of outperformance in the short run.

For active managers, the frustration is that not only does their investment approach have to under-perform from time-to-time, but bad strategies will have to out-perform.  The latter may seem confusing, but consider that a purposefully bad strategy could simply be inverted – or traded short – to create a purposefully good one.

6.     It’s usually the unintended bets that blow you up.

I once read a comic – I think it was The Far Side, but I haven’t been able to find it – that joked that the end of the world would come right after a bunch of scientists in a lab said, “Neat, it worked!”

It’s very rarely the things we intend to do that blow us up.  Rather, it’s the unintended bets that sneak into our portfolio – those things we’re not aware of until it’s too late.

As an example, in the mid-2010s, it became common to say how cheap European equities were versus U.S. equities.  Investors who dove headlong into European equities, however, were punished.

Simply swapping US for foreign equities introduces a significant currency bet.  Europe may have been unjustifiably cheap, but given that valuation reversions typically play out over years, any analysis of this trade should have included either the cost of hedging the currency exposure or, at the very least, an opinion for why being implicitly short the dollar was a bet worth making.

But it could be argued that the analysis itself was simply wrong.  Lawrence Hamtil wrote on this topic many times, pointing out that both cross-country and time-series analysis of valuation ratios can be meaningfully skewed by sector differences.  For example, U.S. equity indices tend to have more exposure to Tech while European indices have more exposure to Consumer Staples.  When normalized for sector differences, the valuation gap narrowed significantly.

People who took the Europe versus US trade were intending to make a valuation bet.  Unless they were careful, they were also taking a currency and sector discrepancy bet.  

Rarely is it the intended bets that blow you up.

7.     It’s long/short portfolios all the way down.

I don’t remember when this one came to me, but it’s one of my favorite mental models.  The phrase is a play off of the “Turtles all the way down” expression.

Every portfolio, and every portfolio decision, can be decomposed into being long something and short something else.

It sounds trivial, but it’s incredibly powerful.  Here’s a few examples:

1.     You’re evaluating a new long-only, active fund.  To isolate what the manager is doing, you can take the fund’s holdings and subtract the holdings of their benchmark.  The result is a dollar-neutral long/short portfolio that reflects the manager’s active bets – it’s long the stuff they’re overweight and short the stuff they’re underweight.  This can help you determine what types of bets they’re making, how big the bets are, and whether the bets are even large enough to stand a chance at covering their fee.

2.     If you’re contemplating selling one exposure to buy another in your portfolio, the trade is equivalent to holding your existing portfolio and overlaying a long/short trade: long the thing you’d buy and short the thing you’d sell.  This allows you to look at the properties of the trade as a whole (both what you’re adding and what you’re subtracting).

3.     If you want to understand how different steps of your portfolio construction process contribute to risk or return, you can treat the changes, stepwise, as long/short portfolios.  For example, for a portfolio that’s equal-weight 50 stocks from the S&P 500, you might compare: (1) Equal-Weight S&P 500 minus S&P 500, and then (2) Equal-Weight 50 Stocks minus Equal-Weight S&P 500.  Isolating each step of your portfolio construction as a long/short allows you to understand the return properties created by that step.

In all of these cases, evaluating the portfolio through the lens of the long/short framework provides meaningful insight.
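The first example is simple enough to show in a few lines of code.  The tickers and weights below are entirely hypothetical; the point is only the subtraction:

```python
# Sketch of example 1: decomposing a long-only fund into its benchmark
# plus a dollar-neutral long/short overlay of active bets.
fund =      {"AAPL": 0.30, "MSFT": 0.25, "XOM": 0.25, "JNJ": 0.20}
benchmark = {"AAPL": 0.25, "MSFT": 0.25, "XOM": 0.15, "JNJ": 0.15, "PG": 0.20}

tickers = sorted(set(fund) | set(benchmark))
active = {t: fund.get(t, 0.0) - benchmark.get(t, 0.0) for t in tickers}

longs  = {t: w for t, w in active.items() if w > 0}   # the overweights
shorts = {t: w for t, w in active.items() if w < 0}   # the underweights

# The active bets net to zero: the overlay is dollar-neutral.
print(longs, shorts, round(sum(active.values()), 10))
```

Note that the manager’s zero-weight in PG shows up as an explicit short in the active portfolio – an active bet that’s easy to miss when looking only at what the fund holds.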

8.     The more diversified a portfolio is, the higher the hurdle rate for market timing.

Market timing is probably finance’s most alluring siren’s song.  It sounds so simple.  Whether it’s market beta or some investment strategy, we all want to say: “just don’t do the thing when it’s not a good time to do it.”

After all the equity factors were popularized in the 2010s, factor timing came into vogue.  I read a number of papers that suggested that you could buy certain factors at certain parts of the economic cycle.  There was one paper that used a slew of easily tracked economic indicators to contemporaneously define where you were in the cycle, and then rotated across factors depending upon the regime.

And the performance was just ridiculous.

So, to test the idea, I decided to run the counterfactuals.  What if I kept the economic regime definitions the same, but totally randomized the basket of factors I bought in each part of the cycle?  With just a handful of factors, four regimes, and buying a basket of three factors per regime, you could pretty much brute force your way through all the potential combinations and create a distribution of their return results.

No surprise, the paper result was right up there in the top percentiles.  Know what else was?  Just a naïve, equally-weighted portfolio of the factors.  And that’s when you have to ask yourself, “what’s my confidence in this methodology?”

Because the evidence suggests it is really, really hard to beat naïve diversification.
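That counterfactual is straightforward to sketch.  Below, the factor returns and regime labels are random placeholders and `strategy_mean` is a hypothetical helper – the point is only the mechanics of comparing a regime-to-basket mapping against the distribution of randomized alternatives:

```python
# Counterfactual sketch: keep the regime definitions fixed, randomize which
# 3-factor basket is held in each regime, and build a return distribution.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(42)
n_months, n_factors, n_regimes = 360, 6, 4
factor_rets = rng.normal(0.004, 0.03, (n_months, n_factors))  # placeholders
regimes = rng.integers(0, n_regimes, n_months)  # stand-in economic cycle

baskets = list(combinations(range(n_factors), 3))  # all 3-of-6 baskets

def strategy_mean(assignment):
    """Mean monthly return when regime r holds basket assignment[r]."""
    rets = np.array([factor_rets[t, list(assignment[regimes[t]])].mean()
                     for t in range(n_months)])
    return rets.mean()

# Brute-force a sample of regime -> basket assignments
sample = [tuple(baskets[i] for i in rng.integers(0, len(baskets), n_regimes))
          for _ in range(500)]
dist = np.array([strategy_mean(a) for a in sample])

naive = factor_rets.mean()  # equal weight across all factors, all the time
pct = (dist < naive).mean()
print(f"naive equal weight beats {pct:.0%} of the randomized regime strategies")
```

Against real data, the published methodology’s result can be dropped into this same distribution to see whether it – or simple equal weighting – sits in the top percentiles.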

There are a few ways you can get a sense for this, but one of my favorites is just by explicitly looking into the future and asking, “how accurate would I have to be to beat a well-diversified portfolio?”  This isn’t a hard simulation to run, and for reasonable levels of diversification, accuracy numbers creep up quite quickly.
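Here is one illustrative way to run that simulation, using synthetic data and a made-up timer that picks the single best of N assets with probability p (and a random one otherwise), compared against equal weight on a Sharpe basis.  The numbers are placeholders, not a calibrated study:

```python
# Sketch of the foresight-accuracy hurdle versus a diversified portfolio.
import numpy as np

rng = np.random.default_rng(7)
n_periods, n_assets = 1200, 5
rets = rng.normal(0.005, 0.04, (n_periods, n_assets))  # synthetic assets

ew = rets.mean(axis=1)              # the naively diversified benchmark
ew_sharpe = ew.mean() / ew.std()

def timer_sharpe(p, trials=200):
    """Average Sharpe of a timer with per-period accuracy p."""
    sharpes = []
    for _ in range(trials):
        lucky = rng.random(n_periods) < p                  # correct calls
        best = rets.max(axis=1)                            # perfect pick
        rand = rets[np.arange(n_periods),
                    rng.integers(0, n_assets, n_periods)]  # blind pick
        path = np.where(lucky, best, rand)
        sharpes.append(path.mean() / path.std())
    return float(np.mean(sharpes))

accuracies = np.linspace(0.0, 1.0, 11)
sharpe_by_acc = np.array([timer_sharpe(p) for p in accuracies])
hurdle = accuracies[np.argmax(sharpe_by_acc > ew_sharpe)]
print(f"accuracy needed to match equal weight's Sharpe: ~{hurdle:.0%}")
```

At zero accuracy the timer is just a concentrated coin flip and loses to diversification on a risk-adjusted basis; the required accuracy grows as the benchmark becomes better diversified.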

Ultimately, timing is a very low breadth exercise.  To quote Michele Aghassi from AQR, “you’re comparing now versus usual.”  And being wrong compounds forever.

In almost all cases, it’s a lot easier to find something that can diversify your returns than it is to increase your accuracy in forecasting returns.

As a corollary to this lesson, I’ll add that the more predictable a thing is, the less you should be able to profit from it.

For example, let’s say I have a system that allows me to forecast the economic regime we’re in and I have a model for which assets should do well in that economic regime.

If I can forecast the economic regime with certainty, and if the market is reasonably efficient, I probably shouldn’t be able to know which assets will do well in which regime.  Conversely, if I know with perfect certainty which assets will do well in which regime, then I probably shouldn’t be able to forecast the regimes with much accuracy.

If markets are even reasonably efficient, the more easily predictable the thing, the less I should be able to profit from it.

9.     Certain signals are only valuable at extremes.

I was sent a chart recently with a plot of valuations for U.S. large-cap, mid-cap, and small-cap stocks.  The valuations were represented as an average composite of price-to-earnings, price-to-book, and price-to-sales z-scores.  The average z-score of large-caps sat at +1 while the average z-score for both mid- and small-caps sat at -1.

The implication of the chart was that a rotation to small- and mid-caps might be prudent based upon these relative valuations.

Lesson #6 about unintended bets immediately comes to mind.

For example, are historical measures even relevant today?  Before 2008 large-cap equities had a healthy share of financials and energy.  Today, the index is dominated by tech and communication services.  And we went through an entire decade with a zero interest rate policy regime.  How do rates at 5% plus today impact the refinancing opportunities in small-caps versus large-caps?  What about the industry differences between large-caps and small-caps?  Or the profit margins?  Or exposure to foreign revenue sources?  How are negative earners being treated in this analysis?  Is price-to-sales even a useful metric when sales accrue to the entire enterprise but price reflects only the equity?

You might be able to sharpen your analysis and adjust your numbers to account for many of these points.  But there may be many others you simply don’t think of.  And that’s the noise.

Just about every signal has noise.

The question is, “how much noise?”  The more noise we believe a signal to have, the stronger we need the signal to be to believe it has any efficacy.  While we may be comfortable trading precisely measured signals at a single standard deviation, we may only have confidence in coarsely measured signals at much higher significance.
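One standard way to formalize this – a Bayesian shrinkage argument, my own framing rather than anything stated above – is to shrink an observed z-score by the signal’s reliability.  The helper names below are hypothetical:

```python
# Toy sketch: with a normal prior on the true value and additive noise,
# the best guess of the truth shrinks the observed z-score by the
# signal's reliability (its signal-to-noise ratio).
def implied_true_z(observed_z, reliability):
    """reliability = var(true) / (var(true) + var(noise)), in [0, 1]."""
    return reliability * observed_z

def required_reading(target_true_z, reliability):
    """How extreme the observed score must be to imply target_true_z."""
    return target_true_z / reliability

# A precisely measured signal (reliability 0.9) vs a coarse one (0.3):
print(required_reading(1.0, 0.9))  # a reading just above 1 sigma suffices
print(required_reading(1.0, 0.3))  # a reading above 3 sigma is needed
```

The noisier the measurement, the further into the extremes the raw reading must sit before it implies anything meaningful about the underlying truth.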

10.  Under strong uncertainty, “halvsies” can be an optimal decision.

During the factor wars of the mid-2010s, a war raged between firms as to what the best portfolio construction approach was: mixed or integrated.

The mixed approach said that each factor should be constructed in isolation and held in its own sleeve.

The integrated approach said that stocks should be scored on all the factors simultaneously, and the stocks with the best aggregate scores should be selected.

There were powerhouses on both sides of the argument.  Goldman Sachs supported mixed while AQR supported integrated.

I spent months agonizing over the right way to do things.  I read papers.  I did empirical analysis.  I even took pen to paper to derive the expected factor efficiency in each approach.

At the end of the day, I could not convince myself one way or another.  So, what did I do?  Halvsies.

Half the portfolio was managed in a mixed manner and half was managed in an integrated manner.

Really, this is just diversification for decision making.  Whenever I’ve had a choice with a large degree of uncertainty, I’ve often found myself falling back on “halvsies.” 

When I’ve debated whether to use one option structure versus another, with no clear winner, I’ve done halvsies.

When I’ve debated two distinctly different methods of modeling something, with neither approach being the clear winner, I’ve done halvsies.

Halvsies provides at least one step in the gradient of decision making and implicitly creates diversification to help hedge against uncertainty.

11.  Always ask: “What’s the trade?”

In July 2019, Greek 10-Year Bonds were trading with a yield that was nearly identical to US 10-Year Bonds.

By December, the yield on Greek 10-year bonds was 40 basis points under US 10-year bonds.  How could that make any sense?  How could a country like Greece make U.S. debt look like it was high yield?

When something seems absurd, ask this simple question: what’s the trade?  If it’s so absurd, how do we profit from it?

In this case, we might consider going long the U.S. 10-year and short the Greek 10-year in a convergence trade.  But we quickly realize an important factor: you don’t actually get paid in percentage points, you get paid in currency.  And that’s where the trade suddenly goes awry.  In this case, you’d receive dollars and owe euros.  And if you tried to hedge that currency risk away up front via a cross-currency basis swap, any yield difference largely melted away.

A more relevant financial figure would perhaps have been the spread between 10-year Greek and German bonds, which traded between 150-275bps in the 2nd half of 2019.  Not wholly unreasonable anymore.

When financial pundits talk about things in the market being absurd, ask “what’s the trade?”  Working through how to actually profit from the absurdity often shines a light on why the analysis is wrong.

12.  The trade-off between Type I and Type II errors is asymmetric

Academic finance is obsessed with Type I errors.  The literature is littered with strategies exhibiting alphas significant at a 5% level.  The literature wants to avoid reporting false positives.

In practice, however, there is an asymmetry that has to be considered.

What is the cost of a false positive?  Unless the strategy is adversely selected, the performance of trading a false positive should just be noise minus trading costs.  (And the opportunity cost of capital.)

What is the cost of a false negative?  We miss alpha.

Now consider how a focus on Type I errors can bias the strategies you select.  Are they more likely to be data-mined?  Are they more likely to be crowded?  Are they less likely to incorporate novel market features without meaningful history?

Once we acknowledge this asymmetry, it may actually be prudent to reduce the statistical requirements on the strategies we deploy.

13.  Behavioral Time is decades longer than Statistical Time

I recently stole this one from Cliff Asness.  This point has less to do with practical portfolio construction or useful mental models.  It simply acknowledges that managing money in real life is very, very, very different from managing money in a research environment.

It is easy, in a backtest, to look at the multi-year drawdown of a low-Sharpe strategy and say, “I could live through that.”  When it’s a multi-decade simulation, a few years looks like a small blip – just a statistical eventuality on the path.  You live that multi-year drawdown in just a few seconds in your head as your eye wanders the equity curve from the bottom left to the upper right.

In the real world, however, a multi-year drawdown feels like a multi-decade drawdown.  Saying, “this performance is within standard confidence bands for a strategy given our expected Sharpe ratio and we cannot find any evidence that our process is broken,” is little comfort to those who have allocated to you.  Clients will ask you for attribution.  Clients will ask you whether you’ve considered X explanation or Y.  Sales will come screeching to a halt.  Clients will redeem.

For anyone considering a career in managing money, it is important to get comfortable living in behavioral time.

14. Jensen’s Inequality

Jensen’s inequality basically says, “a function applied to a mean does not necessarily equal the mean applied after the function.”

What does that mean and how is it useful?  Consider this example.

You’re building a simple momentum portfolio.  You start with the S&P 500, rank its constituents by their momentum score, select the top 100, and then equally weight them.

But you remember Lesson #4 and decide to use multiple momentum signals to diversify your how risk.

Here’s the question: do you average all the momentum scores together and then pick the top 100, or do you use each momentum score to create a portfolio and then average those portfolios together?

Jensen’s inequality tells us these approaches will lead to different results.  This is basically the mixed versus integrated debates from Lesson #10.  And the more convex the function is, the more different the results will likely be.  Imagine if instead of picking the top 100 we pick the top 20 or just the top 5.  It’s easy to imagine how different those portfolios could become with different momentum signals.
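A tiny worked example makes the wedge concrete.  The six stocks, two momentum scores, and top-2 cutoff below are all made up for illustration:

```python
# Average-then-select vs select-then-average with two hypothetical
# momentum scores over six stocks, picking a top-2 equal-weight portfolio.
stocks = ["A", "B", "C", "D", "E", "F"]
signal_1 = {"A": 0.9, "B": 0.8, "C": 0.1, "D": 0.0, "E": -0.5, "F": -0.9}
signal_2 = {"A": -0.8, "B": 0.2, "C": 0.7, "D": 0.9, "E": 0.3, "F": -0.2}

def top_n_equal_weight(scores, n=2):
    picks = sorted(scores, key=scores.get, reverse=True)[:n]
    return {s: (1.0 / n if s in picks else 0.0) for s in scores}

# Integrated: average the scores, then select.
avg_scores = {s: (signal_1[s] + signal_2[s]) / 2 for s in stocks}
integrated = top_n_equal_weight(avg_scores)

# Mixed: select per signal, then average the portfolios.
p1, p2 = top_n_equal_weight(signal_1), top_n_equal_weight(signal_2)
mixed = {s: (p1[s] + p2[s]) / 2 for s in stocks}

print(integrated)  # holds only B and D
print(mixed)       # holds A, B, C, and D
```

Same signals, same selection rule, different order of operations – and two genuinely different portfolios, because ranking-and-truncating is a non-linear function.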

Here’s another trivial example.  You have 10 buy/sell signals.  Your function is to be long an asset if the signals are positive and short if the signals are negative.

If you average your signals first, your position is binary: always on or off.  But if you apply your function to each signal, and then average the results, you end up with a gradient of weights, the distribution of which will be a function of how correlated your signals are with one another.
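This second example is even easier to see in code.  The ten signal values below are made up:

```python
# Averaging signals before applying the sign function gives a binary
# position; applying sign first, then averaging, gives a graded position.
signals = [0.9, 0.4, 0.1, -0.2, -0.3, 0.05, 0.2, -0.1, 0.6, -0.4]

sign = lambda x: 1.0 if x > 0 else -1.0

# Integrated: average, then take the sign -> always fully long or short.
integrated_position = sign(sum(signals) / len(signals))

# Mixed: take each sign, then average -> a weight between -1 and +1.
mixed_position = sum(sign(s) for s in signals) / len(signals)

print(integrated_position)  # 1.0 (all-in long)
print(mixed_position)       # 0.2 (6 longs, 4 shorts out of 10)
```

The integrated position can only ever be -1 or +1, while the mixed position scales with how much the signals agree.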

You can see how Jensen’s inequality plays a huge role in portfolio construction.  Why?  Because non-linearities show up everywhere.  Portfolio optimization?  Non-linear.  Maximum or minimum position sizes?  Non-linear.  Rank-based cut-offs?  Non-linear.

And the more non-linear the function, the greater the wedge. But this also helps us understand how certain portfolio construction constraints can help us reduce the size of this wedge.

Ultimately, Jensen’s inequality tells us that averaging things together in the name of diversification before a convex step in your process, versus after it, will lead to dramatically different portfolio results.

15. A backtest is just a single draw of a stochastic process.

As the saying goes, nobody has ever seen a bad backtest.

And our industry, as a whole, has every right to be skeptical about backtests.  Just about every seasoned quant can tell you a story about naively running backtests in their youth, overfitting and overoptimizing in desperate search of the holy grail strategy.

Less sophisticated actors may even take these backtests and launch products based on them, marketing the backtests to prospective investors.

And most investors would be right to ignore them outright.  I might even be in favor of regulation that prevents them from being shown in the first place.

But that doesn’t mean backtests are ultimately futile.  We should simply acknowledge that when we run a single backtest, it’s just a single draw of a larger stochastic process.  Historical prices and data are, after all, just a record of what happened, not a full picture of what could have happened.

Our job, as researchers, is to use backtesting to try to learn about what the underlying stochastic process looks like.

For example, what happens if we change the parameters of our process?  What happens if we change our entry or exit timing?  Or change our slippage and impact assumptions?

One of my favorite techniques is to change the investable universe, randomly removing chunks of the universe to see how sensitive the process is.  Similarly, I like to randomly remove periods of time from the backtest to test regime sensitivities.
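Both resampling techniques can be sketched in a few lines.  The `run_backtest` function below is a hypothetical stand-in for an actual backtesting engine, and the return data is a random placeholder:

```python
# Sketch of universe and time resampling around a backtest.
import numpy as np

rng = np.random.default_rng(1)
n_periods, n_assets = 500, 50
returns = rng.normal(0.001, 0.02, (n_periods, n_assets))  # placeholder data

def run_backtest(rets):
    """Stand-in strategy engine: equal-weight whatever it is given."""
    return (1 + rets.mean(axis=1)).prod() - 1

baseline = run_backtest(returns)

# Randomly drop 20% of the universe, many times
universe_draws = []
for _ in range(200):
    keep = rng.choice(n_assets, size=int(n_assets * 0.8), replace=False)
    universe_draws.append(run_backtest(returns[:, keep]))

# Randomly drop 20% of the time periods, many times
time_draws = []
for _ in range(200):
    keep = np.sort(rng.choice(n_periods, size=int(n_periods * 0.8), replace=False))
    time_draws.append(run_backtest(returns[keep, :]))

pct_rank = (np.array(universe_draws) < baseline).mean()
print(f"baseline sits at the {pct_rank:.0%} percentile of universe draws")
```

If the baseline backtest sits in the far tail of its own resampled distribution, that is a warning sign the headline result depends on a handful of names or a handful of periods.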

Injecting this randomness into the backtest process can tell us how much of an outlier our singular backtest really is.

Another fantastic technique is to purposefully introduce lookahead bias into your process.  By explicitly using a crystal ball, we can find the theoretical upper limits of achievable results and develop confidence bands for what our results should look like with more reasonable accuracy assumptions.

Backtesting done poorly is worse than not backtesting.  You’d be better off with pen and paper just trying to reason about your process.  But backtesting done well, in my opinion, can teach you quite a bit about the nature of your process, which is ultimately what we want to learn about.

16.  The Market is Usually Right

Did I say 15 ideas and lessons?  Here’s a bonus lesson that’s taken me far longer to learn than I’d care to admit.

The market is, for the most part, usually right.  It took me applying Lesson #11 – “What’s the Trade” – over and over to realize that most things that seem absurd probably aren’t.

That isn’t to say there aren’t exceptions.  If we see $20 on the ground, we might as well pick it up.  The 2021 cash & carry trade in crypto comes to mind immediately.  With limited institutional capacity and a nearly insatiable appetite for leverage from retail investors, the implied financing rates in perps and futures hit 20%+ for highly liquid tokens such as Bitcoin and Ethereum.  I suspect that’s as close to free money as I’ll ever get.

But that’s usually the exception.

This final lesson is about a mental switch for me.  Instead of seeing something and immediately saying, “the market is wrong,” I begin with the assumption that the market is right and I’m the one who is missing something.  This forces me to develop a list of potential reasons I might be missing or overlooking and exhaust those explanations before I can build my confidence that the market is, indeed, wrong.


If you made it this far, thank you.  I appreciate the generosity of your time.  I hope some of these ideas or lessons resonated with you and I hope you enjoyed reading as much as I enjoyed reflecting upon these concepts and putting together this list.  It will be fun for me to look back in another 15 and see how many of these stood the test of time.

Until then, happy investing.

Is Managed Futures Value-able?

In Return Stacking™: Strategies for Overcoming a Low Return Environment, we advocated for the addition of managed futures to traditionally allocated portfolios.  We argued that managed futures’ low empirical correlation to both equities and bonds and its historically positive average returns make it an attractive diversifier.  More specifically, we recommended implementing managed futures as an overlay to a portfolio to avoid sacrificing exposure to core stocks and bonds.

The luxury of writing research is that we work in a “clean slate” environment.  In the real world, however, investors and allocators must contemplate changes in the context of their existing portfolios.  Investors rarely just hold pure beta exposure, and we must consider, therefore, not only how a managed futures overlay might interact with stocks and bonds, but also how it might interact with existing active tilts.

The most common portfolio tilt we see is towards value stocks (and, often, quality-screened value).  With this in mind, we want to briefly explore whether stacking managed futures remains attractive in the presence of an existing value tilt.

Diversifying Value

If we are already allocated to value, one of our first concerns might be whether an allocation to managed futures actually provides a diversifying return stream.  One of our primary arguments for including managed futures into a traditional stock/bond portfolio is its potential to hedge against inflationary pressures.  However, there are arguments that value stocks do much of the same, acting as “low duration” stocks compared to their growth peers.  For example, in 2022, the Russell 1000 Value outperformed the broader Russell 1000 by 1,145 basis points, offering a significant buoy during the throes of the largest bout of inflation volatility in recent history.

However, broader empirical evidence does not actually support the narrative that value hedges inflation (see, e.g., Baltussen, et al. (2022), Investing in Deflation, Inflation, and Stagflation Regimes) and we can see in Figure 1 that the long-term empirical correlations between managed futures and value is near-zero.

(Note that when we measure value in this piece, we will look at the returns of long-only value strategies minus the returns of broad equities to isolate the impact of the value tilt.  As we recently wrote, a long-only value tilt can be effectively thought of as long exposure to the market plus a portfolio that is long the over-weight positions and short the under-weight positions1.  By subtracting the market return from long-only value, we isolate the returns of the active bets the tilt is actually taking.)

Figure 1: Excess Return Correlation

Source: Kenneth French Data Library, BarclayHedge. Calculations by Newfound Research. Performance is backtested and hypothetical.  Performance is gross of all costs (including, but not limited to, advisor fees, manager fees, taxes, and transaction costs) unless explicitly stated otherwise.  Performance assumes the reinvestment of all dividends.  Past performance is not indicative of future results.  See Appendix A for index definitions.

Correlations, however, do not tell us about the tails.  Therefore, we might also ask, “how have managed futures performed historically conditional upon value being in a drawdown?” As the past decade has shown, underperformance of value-oriented strategies relative to the broad market can make sticking to the strategy equally difficult.

Figure 2 shows the performance of the various value tilts as well as managed futures during periods when the value tilts realized a 10% or greater drawdown2.

Figure 2: Value Relative Drawdowns Greater than 10%

Source: Kenneth French Data Library, BarclayHedge. Calculations by Newfound Research. Performance is backtested and hypothetical.  Performance is gross of all costs (including, but not limited to, advisor fees, manager fees, taxes, and transaction costs) unless explicitly stated otherwise.  Performance assumes the reinvestment of all dividends.  Past performance is not indicative of future results.  See Appendix A for index definitions.

We can see that while managed futures may not have explicitly hedged the drawdown in value, its performance remained largely independent and accretive to the portfolio as a whole.

To drive the point of independence home, we can calculate the univariate regression coefficients between value implementations and managed futures.  We find that the relationship between the strategies is statistically insignificant in almost all cases. Figure 3 shows the results of such a regression.

Figure 3: Univariate Regression Coefficients

Source: Kenneth French Data Library, BarclayHedge. Calculations by Newfound Research. *, **, and *** indicate statistical significance at the 0.05, 0.01, and 0.001 level. Performance is backtested and hypothetical.  Performance is gross of all costs (including, but not limited to, advisor fees, manager fees, taxes, and transaction costs) unless explicitly stated otherwise.  Performance assumes the reinvestment of all dividends.  Past performance is not indicative of future results.  See Appendix A for index definitions.

But How Much?

As our previous figures demonstrate, managed futures has historically provided a positively diversifying benefit in relation to value; but how can we thoughtfully integrate an overlay into a portfolio that wants to retain an existing value tilt?

To find a robust solution to this question, we can employ simulation techniques.  Specifically, we block bootstrap 100,000 ten-year simulated returns from three-month blocks to find the robust information ratios and MAR ratios (CAGR divided by maximum drawdown) of the value-tilt strategies when paired with managed futures.
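The block bootstrap described above can be sketched as follows.  This is a minimal illustration with synthetic monthly returns, not our actual simulation code; the path count is reduced for speed, and the `block_bootstrap_paths` and `mar_ratio` helpers are hypothetical names.

```python
import numpy as np

def block_bootstrap_paths(returns, n_paths=500, years=10, block=3, seed=42):
    """Resample monthly returns in contiguous blocks (default: 3 months)
    to build simulated multi-year return paths."""
    rng = np.random.default_rng(seed)
    months = years * 12
    n_blocks = months // block
    starts = rng.integers(0, len(returns) - block + 1, size=(n_paths, n_blocks))
    return np.stack([
        np.concatenate([returns[s:s + block] for s in row]) for row in starts
    ])

def mar_ratio(path):
    """CAGR divided by maximum drawdown for a path of monthly returns."""
    wealth = np.cumprod(1.0 + path)
    cagr = wealth[-1] ** (12.0 / len(path)) - 1.0
    drawdown = 1.0 - wealth / np.maximum.accumulate(wealth)
    return cagr / drawdown.max()

# Synthetic monthly returns standing in for a value/managed futures blend
rets = np.random.default_rng(0).normal(0.006, 0.03, 360)
paths = block_bootstrap_paths(rets)
mars = np.array([mar_ratio(p) for p in paths])
```

Sampling in three-month blocks, rather than month by month, preserves the short-term autocorrelation structure that trend-following returns, in particular, tend to exhibit.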

Figure 4 shows the information ratio frontier of these portfolios, and Figure 5 shows the MAR ratio frontiers.

Figure 4: Information Ratio Frontier

Source: Kenneth French Data Library, BarclayHedge. Calculations by Newfound Research. Performance is backtested and hypothetical.  Performance is gross of all costs (including, but not limited to, advisor fees, manager fees, taxes, and transaction costs) unless explicitly stated otherwise.  Performance assumes the reinvestment of all dividends.  Past performance is not indicative of future results.  See Appendix A for index definitions.

Figure 5: MAR Ratio Frontier

Source: Kenneth French Data Library, BarclayHedge. Calculations by Newfound Research. Performance is backtested and hypothetical.  Performance is gross of all costs (including, but not limited to, advisor fees, manager fees, taxes, and transaction costs) unless explicitly stated otherwise.  Performance assumes the reinvestment of all dividends.  Past performance is not indicative of future results.  See Appendix A for index definitions.

Under both metrics it becomes clear that a 100% tilt to either value or managed futures is not prudent. In fact, the optimal mix, as measured by either the information ratio or the MAR ratio, appears to sit consistently around the 40/60 mark. Figure 6 shows the blends of value and managed futures that maximize each metric.

Figure 6: Max Information and MAR Ratios

Source: Kenneth French Data Library, BarclayHedge. Calculations by Newfound Research. Performance is backtested and hypothetical.  Performance is gross of all costs (including, but not limited to, advisor fees, manager fees, taxes, and transaction costs) unless explicitly stated otherwise.  Performance assumes the reinvestment of all dividends.  Past performance is not indicative of future results.  See Appendix A for index definitions.

In Figure 7 we plot the backtest of a 40% value / 60% managed futures portfolio for the different value implementations.

Figure 7: 40/60 Portfolios of Long/Short Value and Managed Futures

Source: Kenneth French Data Library, BarclayHedge. Calculations by Newfound Research. Performance is backtested and hypothetical.  Performance is gross of all costs (including, but not limited to, advisor fees, manager fees, taxes, and transaction costs) unless explicitly stated otherwise.  Performance assumes the reinvestment of all dividends.  Past performance is not indicative of future results.  See Appendix A for index definitions.

These numbers suggest that an investor who currently tilts their equity exposure towards value may be better off by only tilting a portion of their equity towards value and introducing a managed futures overlay onto their portfolio.  For example, if an investor has a 60% stock and 40% bond portfolio and the 60% stock exposure is currently all value, they might consider moving 36% of it into passive equity exposure and introducing a 36% managed futures overlay.

Depending on how averse a client is to tracking error, we can plot how the tracking error changes depending on the degree of portfolio tilt. Figure 8 shows the estimated tracking error when introducing varying allocations to the 40/60 value/managed futures overlay.

Figure 8: Relationship between Value/Managed Futures Tilt and Tracking Error

Source: Kenneth French Data Library, BarclayHedge. Calculations by Newfound Research. Performance is backtested and hypothetical.  Performance is gross of all costs (including, but not limited to, advisor fees, manager fees, taxes, and transaction costs) unless explicitly stated otherwise.  Performance assumes the reinvestment of all dividends.  Past performance is not indicative of future results.  See Appendix A for index definitions.

For example, if we wanted to implement a tilt to a quality value strategy but wanted a maximum tracking error of 3%, the portfolio might add an approximate 46% allocation to the 40/60 value/managed futures overlay.  In other words, 18% of the portfolio would be put into quality-value stocks and a 28% overlay to managed futures would be introduced.

Using the same example of a 60% equity / 40% bond portfolio as before, the 3% tracking error portfolio would hold 42% in passive equities, 18% in quality-value, 40% in bonds, and 28% in a managed futures overlay.
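The allocation arithmetic above can be sketched with a small helper.  The `stacked_allocation` function is a hypothetical illustration, assuming a 60/40 base portfolio and the 40/60 value/managed-futures split.

```python
def stacked_allocation(stack_frac, equity=0.60, bonds=0.40,
                       value_share=0.40, mf_share=0.60):
    """Carve a value tilt and managed futures overlay out of a base
    stock/bond portfolio, given a total allocation to the 40/60
    value/managed-futures 'stack'."""
    value = stack_frac * value_share        # long-only value sleeve
    mf_overlay = stack_frac * mf_share      # managed futures overlay (notional)
    passive = equity - value                # remaining passive equity
    return {"passive_equity": passive, "value": value,
            "bonds": bonds, "mf_overlay": mf_overlay}

# The 3% tracking error example: a 46% allocation to the stack
alloc = stacked_allocation(0.46)
```

Running this for a 46% stack allocation recovers the 42% passive equity / 18% value / 40% bond / 28% managed futures overlay mix described above.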

What About Other Factors?

At this point, it should be of no surprise that these results extend to the other popular equity factors. Figures 9 and 10 show the efficient information ratio and MAR ratio frontiers for portfolios tilted towards the Profitability, Momentum, Size, and Investment factors.

Figure 9: Information Ratio Frontier for Profitability, Momentum, Size, and Investment Tilts

Source: Kenneth French Data Library, BarclayHedge. Calculations by Newfound Research. Performance is backtested and hypothetical.  Performance is gross of all costs (including, but not limited to, advisor fees, manager fees, taxes, and transaction costs) unless explicitly stated otherwise.  Performance assumes the reinvestment of all dividends.  Past performance is not indicative of future results.  See Appendix A for index definitions. 

Figure 10: MAR Ratio Frontier for Profitability, Momentum, Size, and Investment Tilts

Source: Kenneth French Data Library, BarclayHedge. Calculations by Newfound Research. Performance is backtested and hypothetical.  Performance is gross of all costs (including, but not limited to, advisor fees, manager fees, taxes, and transaction costs) unless explicitly stated otherwise.  Performance assumes the reinvestment of all dividends.  Past performance is not indicative of future results.  See Appendix A for index definitions.

Figure 11: Max Information and MAR Ratios for Profitability, Momentum, Size, and Investment Tilts

Source: Kenneth French Data Library, BarclayHedge. Calculations by Newfound Research. Performance is backtested and hypothetical.  Performance is gross of all costs (including, but not limited to, advisor fees, manager fees, taxes, and transaction costs) unless explicitly stated otherwise.  Performance assumes the reinvestment of all dividends.  Past performance is not indicative of future results.  See Appendix A for index definitions.

Once again, a 40/60 split emerges as a surprisingly robust solution, suggesting that managed futures has historically offered a unique, diversifying return to all equity factors.


Our analysis highlights the considerations surrounding the use of managed futures as a complement to a traditional portfolio with a value tilt. While value investing remains justifiably popular in real-world portfolios, our findings indicate that managed futures can offer a diversifying return stream that complements such strategies. The potential for managed futures to act as a hedge against inflationary pressures, while also offering a diversifying exposure during relative value drawdowns, strengthens our advocacy for their inclusion through a return stacking™ framework.

Our examination of the correlation between managed futures and value reveals a near-zero relationship, suggesting that managed futures can provide distinct benefits beyond those offered by a value-oriented approach alone. Moreover, our analysis demonstrates that a more conservative tilt to value, coupled with a managed futures overlay, may be a prudent choice for investors averse to tracking error. This combination offers the potential to navigate unfavorable market environments and may hold more portfolio benefit than a singular focus on value.

Appendix A: Index Definitions

Book to Market – Equal-Weighted HiBM Returns for U.S. Equities (Kenneth French Data Library)

Profitability – Equal-Weighted HiOP Returns for U.S. Equities (Kenneth French Data Library)

Momentum – Equal-Weighted Hi PRIOR Returns for U.S. Equities (Kenneth French Data Library)

Size – Equal-Weighted SIZE Lo 30 Returns for U.S. Equities (Kenneth French Data Library)

Investment – Equal-Weighted INV Lo 30 Returns for U.S. Equities (Kenneth French Data Library)

Earnings Yield – Equal-Weighted E/P Hi 10 Returns for U.S. Equities (Kenneth French Data Library)

Cash Flow Yield – Equal-Weighted CF/P Hi 10 Returns for U.S. Equities (Kenneth French Data Library)

Dividend Yield – Equal-Weighted D/P Hi 10 Returns for U.S. Equities (Kenneth French Data Library)

Quality Value – Equal-Weighted blend of BIG HiBM HiOP, ME2 BM4 OP3, ME2 BM3 OP3, and ME2 BM3 OP4 Returns for U.S. Equities (Kenneth French Data Library)

Value Blend – An equal-weighted blend of Book to Market, Earnings Yield, Cash Flow Yield, and Dividend Yield returns for U.S. Equities (Kenneth French Data Library)

Passive Equities (Market, Mkt) – U.S. total equity market return data from Kenneth French Library.

Managed Futures – BTOP50 Index (BarclayHedge). The BTOP50 Index seeks to replicate the overall composition of the managed futures industry with regard to trading style and overall market exposure. The BTOP50 employs a top-down approach in selecting its constituents. The largest investable trading advisor programs, as measured by assets under management, are selected for inclusion in the BTOP50. In each calendar year the selected trading advisors represent, in aggregate, no less than 50% of the investable assets of the Barclay CTA Universe.

Index Funds Reimagined?

I recently had the privilege to serve as a discussant at the Democratize Quant 2023 conference to review Research Affiliates’ new paper, Reimagining Index Funds.  The post below is a summary of my presentation.


In Reimagining Index Funds (Arnott, Brightman, Liu and Nguyen 2023), the authors propose a new methodology for forming an index fund, designed to avoid the “buy high, sell low” behavior that can emerge in traditional index funds while retaining the depth of liquidity and capacity.  Specifically, they propose selecting securities based upon the underlying “economic footprint” of the business.

By using fundamental measures of size, the authors argue that the index will not be subject to sentiment-driven turnover.  In other words, it will avoid those additions and deletions that have primarily been driven by changes in valuation rather than changes in fundamentals.  Furthermore, the index will not arbitrarily avoid securities due to committee bias.  The authors estimate that total turnover is reduced by 20%.

The added benefit to this approach, the authors further argue, is that index trading costs are actually quite large.  While well-telegraphed additions and deletions allow index fund managers to execute market-on-close orders and keep their tracking error low, it also allows other market participants to front run these changes.  The authors’ research suggests that these hidden costs could be upwards of 20 basis points per year, creating a meaningful source of negative alpha.

Methodology & Results

The proposed index construction methodology is fairly simple:

Footnote #3 in the paper further expands upon the four fundamental measures:

The results of this rather simple approach are impressive.

  • Tracking error to the S&P 500 comparable to that of the Russell 1000.
  • Lower turnover than the S&P 500 or the Russell 1000.
  • Statistically meaningful Fama-French-Carhart 4-Factor alpha.

But What Is It?

One of the most curious results of the paper is that despite having a stated value tilt, the realized value factor loading in the Fama-French-Carhart regression is almost non-existent.  This might suggest that the alpha emerges from avoiding the telegraphed front-running of index additions and deletions.

However, many equity quants may notice familiar patterns in the cumulative alpha streams of the strategies.  Specifically, the early years look similar to the results we would expect from a value tilt, whereas the latter years look similar to the results we might expect from a growth tilt.

With far less rigor, we can create a strategy that holds the Russell 1000 Value for the first half of the time period and switches to the Russell 1000 Growth for the second half.  Plotting that strategy versus the Russell 1000 results in a very familiar return pattern. Furthermore, such a strategy would load positively on the value factor for the first half of its life and negatively for the second half of its life, leading a full-period factor regression to conclude zero exposure.

But how could such a dynamic emerge from such a simple strategy?

“Economic Footprint” is a Multi-Factor Tilt

The Economic Footprint variable is described as being an equal-weight metric of four fundamental measures: book value, sales, cash flow, and dividends, all measured as a percentage of all publicly-traded U.S. listed companies.  With a little math (inspired by this presentation from Cliff Asness), we will show that Economic Footprint is actually a multi-factor screen on both Value and Market Capitalization.

Define the weight of a security in the market-capitalization weighted index as its market capitalization divided by the total market capitalization of the universe.

If we divide both sides of the Economic Footprint equation by the weight of the security, we find:

EconomicFootprint(i) / w(i) = (F(i) / F(market)) / (MktCap(i) / MktCap(market))

where F(i) is the security’s fundamental measure (e.g., book value) and F(market) is that measure aggregated across the universe.  Some subtle re-arrangements leave us with:

EconomicFootprint(i) = w(i) × [ (F(i) / MktCap(i)) / (F(market) / MktCap(market)) ] = w(i) × ValueTilt(i)

The value tilt effectively looks at each security’s value metric (e.g. book-to-price) relative to the aggregate market’s value metric.  When the metric is cheaper, the value tilt will be above 1; when the metric is more expensive, the value tilt will be less than 1.  This value tilt then effectively scales the market capitalization weight.

Importantly, economic footprint does not break the link to market capitalization.
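The identity that economic footprint equals market-cap weight times a value tilt can be verified numerically.  The fundamentals and market caps below are hypothetical figures for a three-stock universe, chosen purely for illustration.

```python
# Hypothetical fundamentals (e.g., book value) and market caps for a
# three-stock universe; the units are arbitrary.
fundamentals = {"A": 50.0, "B": 30.0, "C": 20.0}
mcaps = {"A": 800.0, "B": 150.0, "C": 50.0}

tot_f = sum(fundamentals.values())   # aggregate fundamental measure
tot_m = sum(mcaps.values())          # aggregate market capitalization

for ticker in fundamentals:
    footprint = fundamentals[ticker] / tot_f     # share of aggregate fundamentals
    weight = mcaps[ticker] / tot_m               # market-cap weight
    # Value tilt: the stock's fundamental-to-price ratio relative to the market's
    value_tilt = (fundamentals[ticker] / mcaps[ticker]) / (tot_f / tot_m)
    # Economic footprint is exactly the cap weight scaled by the value tilt
    assert abs(footprint - weight * value_tilt) < 1e-12
```

Note that the cheap small stock (“C”) carries a value tilt well above 1 while the expensive large stock (“A”) carries a tilt below 1, yet each footprint remains anchored to the market-cap weight.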

Breaking economic footprint into two constituent parts allows us to get a visual intuition as to how the strategy operates.

In the graphs below, I take the largest 1000 U.S. companies by market capitalization and plot them based upon their market capitalization weight (x-axis) and their value tilt (y-axis).

(To be clear, I have no doubt that my value tilt scores are precisely wrong if compared against Research Affiliates’s, but I have no doubt they are directionally correct.  Furthermore, the precision does not change the logic of the forthcoming argument.)

If we were constructing a capitalization weighted index of the top 500 companies, the dots would be bisected vertically.

As a multi-factor tilt, however, economic footprint leads to a diagonal bisection.

The difference between these two graphs tells us what we are buying and what we are selling in the strategy relative to the naive capitalization-weighted benchmark.

We can clearly see that the strategy sells larg(er) glamour stocks and buys small(er) value stocks.  In fact, by definition, every stock bought will be both (1) smaller and (2) “more value” than any stock sold.

This is, definitionally, a size-value tilt.  Why, then, are the factor loadings for size and value so small?

The Crucial Third Step

Recall the third step of the investment methodology: after selecting the companies by economic footprint, they are re-weighted by their market capitalization.  Now consider an important fact we stated above: every company we screen out is, by definition, larger than any company we buy.

That means, in aggregate, the cohort we screen out will have a larger aggregate market cap than the cohort we buy.

Which further means that the cohort we don’t screen out will, definitionally, become proportionally larger.

For example, at the end of April 2023, I estimate that screening on economic footprint would lead to the sale of a cohort of securities with an aggregate market capitalization of $4 trillion and the purchase of a cohort of securities with an aggregate market capitalization of $1.3 trillion.

The cohort that remains – which was $39.5 trillion in aggregate market capitalization – would grow proportionally from being 91% of the underlying benchmark to 97% of our new index.  Mega-cap growth names like Amazon, Google, Microsoft, and Apple would actually get larger under this methodology, increasing their collective weights by 120 basis points.
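The proportional growth of the retained cohort follows directly from the arithmetic, using the aggregate market capitalizations estimated above:

```python
# Aggregate market capitalizations (in trillions of dollars) estimated
# in the text for April 2023; illustrative figures
sold = 4.0      # cohort screened out of the benchmark
bought = 1.3    # cohort added by the economic footprint screen
kept = 39.5     # cohort retained in both indices

benchmark_total = kept + sold      # $43.5T capitalization-weighted benchmark
new_index_total = kept + bought    # $40.8T footprint-screened index

kept_weight_benchmark = kept / benchmark_total   # ~91% of the benchmark
kept_weight_new_index = kept / new_index_total   # ~97% of the new index
```

Because the cohort sold is always larger in aggregate than the cohort bought, the denominator shrinks and every retained name mechanically gains weight.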

Just as importantly, this overweight to mega-cap tech would be a persistent artifact throughout the 2010s, suggesting why the relative returns may have looked like a growth tilt.

Why Value in 1999?

How, then, does the strategy create value-like results in the dot-com bubble?  The answer appears to lie in two important variables:

  1. What percentage of the capitalization-weighted index is being replaced?
  2. How strongly do the remaining securities lean into a value tilt?

Consider the scatter graph below, which estimates how the strategy may have looked in 1999.  We can see that 40% of the capitalization-weighted benchmark is being screened out, and 64% of the securities that remain have a positive value tilt.  (Note that these figures are based upon numerical count; it would likely be more informative to measure these figures weighted by market capitalization.)

By comparison, in 2023 only 20% of the underlying benchmark names are replaced and of the securities that remain, just 30% have a tilt towards value. These graphics suggest that while a screen on economic footprint creates a definitive size/value tilt, the re-weighting based upon relative market capitalization can lead to dynamic style drift over time.


The authors propose a new approach to index construction that aims to maintain a low tracking error to traditional capitalization-weighted benchmarks, reduce turnover costs, and avoid “buy high, sell low” behavior.  By selecting securities based upon the economic footprint of their respective businesses, the authors find that they are able to produce meaningful Fama-French-Carhart four-factor alpha while reducing portfolio turnover by 20%.

In this post I find that economic footprint is, as defined by the authors, actually a multi-factor tilt based on both value and market capitalization.  By screening for companies with a high economic footprint, the proposed method introduces a value and size tilt relative to the underlying market-capitalization-weighted benchmark.

However, the third step of the proposed process, which then re-weights the selected securities based upon their relative market capitalization, will always increase the weight of the securities of the benchmark that were not screened out.  This step creates the potential for meaningful style drift within the strategy over time.

I would argue the reason the factor regression exhibited little-to-no loading on value is that the strategy exhibited a positive value tilt over the first half of its lifetime and a negative value tilt over the second half, effectively cancelling out when evaluated over the full period.  The alpha that emerges, then, may actually be style timing alpha.

While the authors argue that their construction methodology should lead to the avoidance of “buy high, sell low” behavior, I would argue that the third step of the investment process has the potential to lead to just that (or, at the very least, buy high).  We can clearly see that in certain environments, portfolio construction choices can actually swamp intended factor bets.

Whether this methodology actually provides a useful form of style timing, or whether it is an unintended bet in the process that led to a fortunate, positive ex-post result, is an exercise left to other researchers.

What Is Managed Futures?


  • Much like in 2008, managed futures as an investment strategy had an impressive year in 2022. With most traditional asset classes struggling to navigate the inflationary macroeconomic environment, managed futures has been drawing interest as a potential diversifier.
  • Managed futures is a hedge fund category that uses futures contracts as their primary investment vehicle. Managed futures managers can engage in many different investment strategies, but trend following is the most common.
  • Trend following as an investment strategy has a substantial amount of empirical evidence promoting its efficacy as an investment strategy. There also exist several behavioral arguments for why this anomaly exists, and why we might expect it to continue.
  • As a diversifier, multi-asset trend following has provided diversification benefits when compared to both stocks and bonds. Additionally, trend following has posted positive returns in the four major drawdowns in equities since 2000.

Cut short your losses, and let your winners run. – David Ricardo, 1838

What is Managed Futures?

Managed futures is a hedge fund category originating in the 1980s, named for its ability to trade (both long and short) global equity, bond, commodity, and currency futures contracts. Today, these strategies are available to investors in both mutual fund and ETF wrappers. The predominant strategy of most managed futures managers is trend following, so much so that the terms are often used synonymously.

While trend following is by far the largest and most pronounced strategy in the category, it is not the only strategy used in the space.1 Managed futures managers can engage in trend following, momentum trading, mean reversion, carry-focused strategies, relative value trading, macro-driven strategies, or any combination thereof, though any individual manager may have a bias towards one of them2.

Figure 1: The Taxonomy of Managed Futures

Adapted from Kaminski (2014). The most common characteristics are highlighted in orange.

What is Trend Following?

Simply put, trend following is a strategy that buys (‘goes long’) assets that have been rising in price and sells (‘goes short’) assets that have been decreasing in price, based on the premise that this trend will continue. The precise method of measuring trends varies widely, but each primarily relies on the difference between an asset’s price today and the price of the same asset previously. Some common methods of measuring trends include total return measurements, moving averages, and regression lines. These different approaches are all mathematically linked, and empirical evidence does not suggest that one method is necessarily better than another3.

Trend following has a rich history in financial markets, with centuries of evidence supporting the idea that markets tend to trend. The obvious question to then ask is: why? Over the past few decades, academic research has focused largely on theories such as the Efficient Market Hypothesis and on explanatory market factors (such as value and size), leaving comparatively little research on trend following itself.

Figure 2: The Life Cycle of a Trend

Adapted from AQR. For illustrative purposes only.

The classification of trend following as an anomaly, however, has not left it without theories for why it works. There are a number of generally accepted explanations for why trend following works, and more importantly, why the anomaly might continue to persist.

Anchoring Bias: When new data enters the marketplace, investors can overly rely on historical data, thereby underreacting to the new information. This can be seen in Figure 3 where, after the catalyst of new information enters the market, the price of a security will directionally follow the fair value of the asset, but not with a large enough magnitude to match the fair value precisely.

Disposition Effect: Investors have a tendency to take gains on their winning positions too early and hold onto their losing positions too long.

Herding: After a noticeable trend has been established, investors “bandwagon” into the trade, prolonging the directional trend, and potentially pushing the price past the asset’s fair value4.

Confirmation Bias: Investors tend to ignore information that is contrary to their beliefs. A positive (or negative) signal will be ignored if the investor has a differing view, extending the time frame for the convergence of an asset’s price to its fair value.

Rational Inattention Bias: Investors cannot immediately digest all information due to a lack of information processing resources (or mental capacity). Consequently, prices move towards fair value more slowly as the information is processed by all investors.

As previously mentioned, methodologies may vary widely when analyzing an asset’s trend, but the general theme is to view an asset’s current price relative to some measure of its recent history. For example, one common example of this is to observe an asset’s current price versus its 200-day moving average: initiating a long position when the price is above its moving average or a short position when it is below. Extending Figure 2, we can graphically depict the trade cycle attempting to take advantage of such a trend.

Figure 3: The Life Cycle of a Trade

Source: Newfound Research, AQR. For illustrative purposes only
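The 200-day moving average rule described above can be sketched as follows.  This is a simplified illustration; production trend models typically blend multiple lookbacks and scale positions by volatility.

```python
import numpy as np

def trend_signal(prices, lookback=200):
    """+1 (long) when price is above its trailing moving average,
    -1 (short) when below, and 0 until enough history has accrued."""
    prices = np.asarray(prices, dtype=float)
    signal = np.zeros(len(prices))
    for t in range(lookback, len(prices)):
        moving_average = prices[t - lookback:t].mean()
        signal[t] = 1.0 if prices[t] > moving_average else -1.0
    return signal

# A steadily rising price series produces a persistent long signal
# once the 200-day history is available
prices = np.linspace(100.0, 150.0, 400)
signal = trend_signal(prices)
```

As the text notes, total return, moving average, and regression-based signals are mathematically linked, so the specific choice of rule matters less than the discipline of following it.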

Of course, such an idealized description of a trend is not typically what is found in the market, which leads to many false starts. The risk-management decisions made to reduce the impact of these false starts begin to highlight part of the attractiveness of the strategy as a diversifier.

Consider that the fair value of an asset is generally never known with a high degree of certainty. A trend following manager is thus reliant on the perceived direction of the trend at any given time and must make decisions based on how that trend evolves.

Figure 4: Heads I Trend, Tails I Don’t

Adapted from Michael Covel. For illustrative purposes only.

When the model indicates that a trend has formed, the manager will initiate a position in the direction of the indicated trend (either short or long – blue line in Figure 4). As long as the trend continues, the strategy will hold that position, and only exit when the signal indicates that the trend no longer exists. At that time, the manager will remove the position, potentially taking the opposite position5.

The second case (red line in Figure 4) is one in which the trend reverses shortly after a position has been initiated. After establishing a position in the asset, the price of the asset reverts to its previous levels, possibly completely reversing in direction. In such a case, the signal will indicate that the trend no longer exists and recommend that the position be removed.

Historically, by quickly cutting losers and letting winning trades run, trend following has created a positively skewed return profile. Managed futures strategies also tend to trade many different markets and underlying assets, which minimizes the impact of any single failed trend while increasing the probability of capturing an outlier trend in an asset that might be outside the scope of a traditional portfolio.

Kaminski (2014) refers to this characteristic as divergent risk taking6, where a divergent investor “profess[es] their own ignorance to the true structure of potential risks/benefits with some level of skepticism for what is knowable or is not dependable”.

This divergent risk taking results in a positively skewed return distribution: the strategy never risks too much on a single trade, removes a position that moves against it, and lets a trade run while it is winning7.
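To see how this asymmetry generates positive skew, consider a toy simulation (not any manager’s actual process): per-trade outcomes are drawn from a perfectly symmetric distribution, but each losing trade is cut at a fixed stop while winners are left to run:

```python
import numpy as np

rng = np.random.default_rng(42)

# Symmetric (zero-skew) underlying trade outcomes...
moves = rng.normal(0.0, 1.0, size=100_000)
# ...but losses are cut at a small fixed stop, while winners run uncapped.
stop_loss = -0.25
pnl = np.maximum(moves, stop_loss)

# Sample skewness: third central moment over cubed standard deviation.
skew = ((pnl - pnl.mean()) ** 3).mean() / pnl.std() ** 3
print(round(skew, 2))  # positive, despite the symmetric underlying moves
```

The capped left tail and uncapped right tail are what push the distribution’s skew positive – the same mechanism, in stylized form, as cutting losers and letting winners run.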

The structural nature of trend following limits the size of any single bet and quickly eliminates a position if the bet is not paying off. By diversifying across many markets, asset classes, and economic goods without directional bias, the strategy maintains staying power: it avoids swinging for the fences while staying with a time-proven approach8.

Using Managed Futures as a Diversifier

The traditional investor portfolio has typically been dominated by two assets: stocks and bonds. In recent history, investors have even been able to use fixed income to buffer equity risk, as high-quality bonds have exhibited flight-to-safety characteristics in times of extreme market turmoil. In the first two decades of the 2000s, this pairing worked extremely well given that interest rates declined over the period, inflation remained low, and bonds proved resilient during the fallout of the tech bubble and the Great Financial Crisis.

In Figure 5, we chart the relationship between the year-over-year change in the Consumer Price Index for All Urban Consumers (“CPIAUCSL”) and the 12-month correlation between U.S. Stocks and 10-Year U.S. Treasuries9. We can see that negative correlation is most pronounced when inflation is low. Positive-correlation regimes, on the other hand, have historically occurred across all realized ranges of CPI changes, with the most striking episodes occurring when inflation was extraordinarily high.
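The trailing 12-month correlation measure used here can be sketched as follows – with hypothetical random return series standing in for the actual stock and Treasury data, purely to show the computation:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical monthly returns standing in for U.S. stocks and 10-Year Treasuries.
stocks = pd.Series(rng.normal(0.007, 0.040, 240))
bonds = pd.Series(rng.normal(0.003, 0.015, 240))

# Trailing 12-month stock/bond correlation, as plotted in Figure 5.
rolling_corr = stocks.rolling(12).corr(bonds)
print(rolling_corr.dropna().head())
```

With real data, each correlation observation would then be paired with the contemporaneous year-over-year CPI change to form the scatter in Figure 5.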

Figure 5: The Relationship Between Inflation and Equity-Bond Correlation

Source: FRED, Kenneth French Data Library, Tiingo. For illustrative purposes only.

Since trend following can hold both long and short positions, it has the potential to trade price trends in any direction that may emerge from increasing inflation risks. This was highlighted by the performance of trend following in 2022: the year-to-date real returns of U.S. equities10, 10-Year U.S. Treasuries, and the SG CTA Trend Index as of December 31, 2022, were -19.5%, -16.5%, and +27.4%, respectively. During 2022, trend following strategies were generally long the U.S. Dollar, short fixed income securities, and short equity indices. Additionally, managers tended to hold mixed positions in the commodity space, taking long and short positions in the individual commodity contracts exhibiting positive and negative trends, respectively.

Importantly, different economic regimes (such as monetary inflation versus supply/demand inflation) will unfold differently, so the positions that were profitable in 2022 will not necessarily be profitable in all environments. Trend following is dynamic in nature and will adjust positioning as trends emerge and fade, regardless of the economic regime.

In addition to historically providing a ballast in inflationary regimes, one of managed futures’ claims to fame stems from the strategy’s ability to provide negative correlation in times of financial stress, specifically, in equity crises. The net result of including an allocation to trend following strategies during these periods has been a reduction in portfolio drawdowns and portfolio volatility.

Though managed futures have been in existence since the 1980s, the strategy garnered its popularity coming out of the Great Financial Crisis, as it was one of the few investment strategies to provide a positive return over that period. While this event shot the strategy to prominence, it was not an isolated incident; the relationship has repeated frequently throughout history.

Table 1 shows the cumulative nominal returns of stocks, bonds, and managed futures when the equity market realized a greater-than 20% drawdown.

Table 1: Nominal Return of Equities, Bonds, and Managed Futures During Equity Crises

Source: FRED, Kenneth French Data Library, BarclayHedge. Calculations by Newfound Research. Time period is based on data availability. Performance is gross of all costs (including, but not limited to, advisor fees, manager fees, taxes, and transaction costs) unless explicitly stated otherwise. Past performance is not a reliable indicator of future performance.

Since the inception of the SG CTA Trend Index11, bonds have provided diversification benefits in three of the four large drawdowns. 2022, however, was the first of these periods in which inflation was a significant market concern, and U.S. Treasuries proved insufficient to reduce risk in a traditional portfolio.

We can see, though, that the SG CTA Trend Index provided similar diversification benefits during the drawdowns of the first two decades of the century and also proved capable when inflation shocks rose to prominence in 2022.

Figure 6: Performance From 1999 to 2022

Source: BarclayHedge, Tiingo. 60/40 Portfolio is the Vanguard Balanced Index Fund (“VBINX”) and returns presented are net of the fund’s management fee. Time period is based on data availability. Performance is gross of all costs (including, but not limited to, advisor fees, manager fees, taxes, and transaction costs) unless explicitly stated otherwise. Past performance is not a reliable indicator of future performance.


Traditional portfolios consisting of equity and fixed income exposure have enjoyed two decades of strong performance due to favorable economic tailwinds. With the changing economic regime and uncertainty facing markets ahead, however, investors have begun searching for potential additions to their portfolios to protect against inflation and to provide diversifying exposure to other macroeconomic headwinds.

Trend following as a strategy has extensive empirical evidence supporting both its standalone performance, as well as the diversifying benefits in relation to traditional asset classes such as stocks and bonds. In addition, trend following is mechanically convex in that it can provide positive returns in both bull and bear markets.

Managed futures is a strong contender as an addition to a stock-and-bond heavy portfolio. Finding its roots in the 1980s, the strategy has a tenured history in the investment landscape with a demonstrated history of providing diversifying exposure in times of equity crisis.

In this paper, we have shown that trend following is a robust trading strategy with behavioral underpinnings, suggesting that the strategy has staying power in the long-run, as well as desirable characteristics due to the mechanical nature of the strategy.

As a potential addition to a traditional investment portfolio, managed futures provides a source of diversification beyond that of mainstream asset classes, as well as strong absolute returns on a standalone basis.


A trend following strategy can benefit from both positive and negative price trends. If prices are increasing, a long position can be initiated; if prices are decreasing, a short position can be initiated. Said differently: a trend following strategy can potentially profit from both increases and decreases in price.

This characteristic is immediately reminiscent of a long position in an option straddle, in which a put and a call option are purchased with the same strike price. This option position benefits if the price moves significantly in either direction12.

Figure A1: Long Straddle Payoff Profile

Source: Newfound Research. For illustrative purposes only.

Empirically, these strategies have in fact performed remarkably similarly. To illustrate this, we will create two simple strategies.

The first strategy is a simple trend following strategy that takes a long position in the S&P 500 when its prior 12-month return is positive and a short position when it is negative.

The second strategy attempts to replicate the delta of a straddle expiring in one month, struck at the closing price of the S&P 500 twelve months ago. We compute the delta of this position using the Black-Scholes model13 and take a position in the S&P 500 equal to the computed delta. For example, if the price of the S&P 500 twelve months ago was $3,000, we would calculate the delta of a straddle struck at $3,000. Since the delta of this position ranges between -1 and 1, the strategy uses it as an allocation to the S&P 500.
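A minimal sketch of this delta computation – assuming a zero interest rate and a hypothetical fixed volatility input, where the paper’s exact parameterization may differ – follows from the Black-Scholes deltas of the call (N(d1)) and the put (N(d1) − 1):

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def straddle_delta(spot: float, strike: float, vol: float, t: float, rate: float = 0.0) -> float:
    """Black-Scholes delta of a long straddle: N(d1) + (N(d1) - 1) = 2*N(d1) - 1."""
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol**2) * t) / (vol * math.sqrt(t))
    return 2.0 * norm_cdf(d1) - 1.0

# If the index sits well above its level of 12 months ago (the strike), the
# straddle delta approaches +1 -- a near-fully long position, mirroring a
# positive trend signal.
print(round(straddle_delta(spot=3600, strike=3000, vol=0.15, t=1 / 12), 2))  # 1.0
```

Near the strike the delta sits close to zero, so the replication naturally de-risks exactly when the 12-month trend is ambiguous – the source of the “smoother” transitions relative to the binary trend rule.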

Figure A2: Replicating Trend Following with Straddles

Source: Tiingo. Calculations by Newfound Research. Returns assume the reinvestment of all dividends. The S&P 500 is represented by the Vanguard 500 Index Fund Investor Shares (“VFINX”). For illustrative purposes only. Past performance is not a reliable indicator of future performance.

For both strategies, we will assume that any excess capital is held in cash, returning 0%. Figure A2 plots the growth of $1 invested in each strategy.

As we can see, the option strategy and the trend following strategy provide roughly equivalent return profiles. In fact, if we compare the quarterly returns of the two strategies to those of the S&P 500, an important pattern emerges: both strategies exhibit a convex relationship to the S&P 500.

Figure A3: Trend Following Relationship to the Underlying

Source: Newfound Research. For illustrative purposes only.

Figure A4: Straddle Replication Relationship to the Underlying

Source: Newfound Research. For illustrative purposes only.

APPENDIX B: Index Definitions

U.S. Stocks: U.S. total equity market return data from Kenneth French Library. Performance is gross of all costs (including, but not limited to, advisor fees, manager fees, taxes, and transaction costs) unless explicitly stated otherwise. Performance assumes the reinvestment of all dividends.

10-Year U.S. Treasuries: The 10-Year U.S. Treasury index is a constant maturity index calculated by assuming that a 10-year bond is purchased at the beginning of every month and sold at the end of that month to purchase a new bond at par at the beginning of the next month. You cannot invest directly in an index, and unmanaged index returns do not reflect any fees, expenses, or sales charges. The referenced index is shown for general market comparison and is not meant to represent any Newfound index or strategy. Data for 10-Year U.S. Treasury yields come from the Federal Reserve of St. Louis economic database (“FRED”).
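As an illustrative sketch of this constant-maturity construction – assuming semiannual coupons and, for simplicity, repricing at an unchanged 10-year maturity rather than 10 years less one month – the monthly return might be computed as:

```python
def bond_price(face: float, coupon_rate: float, yield_rate: float, years: float, freq: int = 2) -> float:
    """Price a fixed-coupon bond by discounting its periodic cash flows."""
    n = int(round(years * freq))
    coupon = face * coupon_rate / freq
    y = yield_rate / freq
    return sum(coupon / (1 + y) ** k for k in range(1, n + 1)) + face / (1 + y) ** n

def constant_maturity_return(y_start: float, y_end: float, years: float = 10.0) -> float:
    """One-month return: buy a 10-year par bond at y_start, sell it at y_end a month later."""
    # At purchase the bond trades at par: its coupon equals the prevailing yield.
    start_price = 100.0
    # Approximate month-end value: reprice at the new yield (maturity treated as
    # unchanged for simplicity) plus one month of accrued coupon.
    end_price = bond_price(100.0, y_start, y_end, years) + 100.0 * y_start / 12.0
    return end_price / start_price - 1.0

# Rising yields over the month produce a capital loss on the bond.
print(constant_maturity_return(0.04, 0.05) < 0)
```

If yields are unchanged over the month, the return reduces to one month of coupon carry (y/12), which is a useful sanity check on the construction.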

SG Trend Index: The SG Trend Index is designed to track the 10 largest CTAs (by AUM) and be representative of the managed futures trend-following space.


The Hidden Cost in Costless Put-Spread Collars: Rebalance Timing Luck

We have published a new paper on the topic of rebalance timing luck in option strategies: The Hidden Cost in Costless Put-Spread Collars: Rebalance Timing Luck.

Prior research and empirical investment results demonstrate that strategy performance can be highly sensitive to rebalance schedules, an effect called rebalance timing luck (“RTL”). In this paper we extend the empirical analysis to option-based strategies. As a case study, we replicate a popular strategy – the self-financing, three-month put-spread collar – with three implementations that vary only in their rebalance schedule. We find that the annualized tracking error between any two implementations is in excess of 400 basis points. We also decompose the empirically-derived rebalance timing luck for this strategy into its linear and non-linear components. Finally, we provide intuition for the driving causes of rebalance timing luck in option-based strategies.
