
# Summary

- Trend following is “mechanically convex,” meaning that the convexity profile it generates is driven by the rules that govern the strategy.
- While the convexity can be measured analytically, the unknown nature of future price dynamics makes it difficult to say anything specific about expected behavior.
- Using simulation techniques, we aim to explore how different trend speed models behave for different drawdown sizes, durations, and volatility levels.
- We find that shallow drawdowns are difficult for almost all models to exploit, that faster drawdowns generally require faster models, and that lower levels of price volatility tend to make all models more effective.
- Finally, we perform historical scenario analysis on U.S. equities to determine if our derived expectations align with historical performance.

We like to use the phrase “mechanically convex” when it comes to trend following. It implies a transparent and deterministic “if-this-then-that” relationship between the price dynamics of an asset, the rules of a trend following strategy, and the performance the strategy achieves.

Of course, nobody knows how an asset’s future price dynamics will play out. Nevertheless, the deterministic nature of trend following rules should, at least, allow us to set semi-reasonable expectations about the outcomes we are trying to achieve.

A January 2018 paper from OneRiver Asset Management titled *The Interplay Between Trend Following and Volatility in an Evolving “Crisis Alpha” Industry* touches precisely upon this mechanical nature. Rather than trying to draw conclusions analytically, the paper employs numerical simulation to explore how certain trend speeds react to different drawdown profiles.

Specifically, the authors simulate 5 years of daily equity returns by assuming a geometric Brownian motion with 13% drift and 13% volatility. They then simulate drawdowns of different magnitudes occurring over different time horizons by assuming a Brownian bridge process with 35% volatility.

The authors then construct trend following strategies of varying speeds to be run on these simulations and calculate the median performance.

Below we re-create this test. Specifically,

- We generate 10,000 5-year simulations assuming a geometric Brownian motion with 13% drift and 13% volatility.
- To the end of each simulation, we attach a 20% drawdown simulation, occurring over T days, assuming a geometric Brownian bridge with 35% volatility.
- We then calculate the performance of different NxM moving-average-cross-over strategies, assuming all trades are executed at the next day’s closing price. When the short moving average (N periods) is above the long moving average (M periods), the strategy is long, and when the short moving average is below the long moving average, the strategy is short.
- For a given T-day drawdown period and NxM trend strategy, we report the median performance across the 10,000 simulations over the drawdown period.

By varying T and the NxM models, we can attempt to get a sense as to how different trend speeds should behave in different drawdown profiles.
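The simulation pipeline described above can be sketched in Python. This is a minimal illustration, not the paper's (or Newfound's) actual code: the function names, the random seed, and the 126-day (~6-month) drawdown horizon are our own choices, and we implement the "trade at the next day's closing price" rule by applying each signal to the return two closes after it is computed (signal at close t, executed at close t+1, so the position first earns day t+2's return).

```python
import numpy as np

rng = np.random.default_rng(42)
TRADING_DAYS = 252

def simulate_gbm(years, mu, sigma, s0=100.0):
    """Daily geometric Brownian motion path (mu, sigma annualized)."""
    n = int(years * TRADING_DAYS)
    dt = 1.0 / TRADING_DAYS
    z = rng.standard_normal(n)
    log_rets = (mu - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z
    return s0 * np.exp(np.cumsum(log_rets))

def simulate_drawdown(s0, depth, t_days, sigma):
    """Log-price Brownian bridge from s0 down to s0 * (1 - depth) over t_days."""
    dt = 1.0 / TRADING_DAYS
    a, b = np.log(s0), np.log(s0 * (1.0 - depth))
    w = np.concatenate(([0.0], np.cumsum(rng.standard_normal(t_days) * np.sqrt(dt))))
    t = np.arange(t_days + 1) / t_days
    x = a + (b - a) * t + sigma * (w - t * w[-1])  # bridge: pinned at both ends
    return np.exp(x[1:])                            # drop the anchor (equals s0)

def trend_return_over_drawdown(prices, n_fast, m_slow, dd_start):
    """Long/short NxM moving-average crossover; compounded return over
    prices[dd_start:]. Long when the fast MA is at or above the slow MA,
    short otherwise; trades execute at the next day's close."""
    p = np.asarray(prices)
    fast = np.convolve(p, np.ones(n_fast) / n_fast, mode="valid")
    slow = np.convolve(p, np.ones(m_slow) / m_slow, mode="valid")
    fast = fast[len(fast) - len(slow):]            # align both to the same end dates
    sig = np.where(fast >= slow, 1.0, -1.0)        # +1 long, -1 short
    rets = p[m_slow:] / p[m_slow - 1:-1] - 1.0     # daily returns after MA warm-up
    strat = sig[:-2] * rets[1:]                    # two-close lag: signal t earns day t+2
    return np.prod(1.0 + strat[dd_start - m_slow - 1:]) - 1.0

# One simulation: 5-year burn-in, then a 20% drawdown over ~6 months at 35% vol
history = simulate_gbm(5, 0.13, 0.13)
dd_path = simulate_drawdown(history[-1], 0.20, 126, 0.35)
full_path = np.concatenate([history, dd_path])
result = trend_return_over_drawdown(full_path, 20, 60, len(history))
print(f"20x60 long/short return over the drawdown window: {result:+.1%}")
```

Repeating the last five lines 10,000 times and taking the median of `result` reproduces one cell of the tables; varying the drawdown horizon and the NxM parameters fills in the rest.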

Note that the generated tables report on the median performance of the trend following strategy over only the drawdown period. The initial five years of positive expected returns are essentially treated as a burn-in period for the trend signal. Thus, if we are looking at a drawdown of 20% and an entry in the table reads -20%, it implies that the trend model was exposed to the full drawdown without regard to what happened in the years prior to the drawdown. The loss of a trend following strategy over the drawdown period can be larger than the drawdown itself because of whipsaw and because the underlying equity can be down more than 20% at points during the period.

Furthermore, these results are for *long/short* implementations. Recall that a long/flat strategy can be thought of as 50% exposure to equity plus 50% exposure to a long/short strategy. Thus, the results of long/flat implementations can be approximated by halving the reported result and adding half the drawdown profile. For example, in the table below, the 20×60 trend system on the 6-month drawdown horizon is reported to have a drawdown of -4.3%. This would imply that a long/flat implementation of this strategy would have a drawdown of approximately -12.2%.
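As a quick arithmetic check of that approximation, using the -4.3% long/short figure and the -20% drawdown cited above:

```python
ls_return = -0.043   # reported long/short 20x60 return over the 6-month, 20% drawdown
underlying = -0.20   # the underlying equity drawdown itself
# long/flat ≈ 50% equity + 50% long/short
long_flat = 0.5 * ls_return + 0.5 * underlying
print(f"{long_flat:.4f}")  # about -12.2%, matching the text
```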

*Calculations by Newfound Research. Results are hypothetical. Returns are gross of all fees, including manager fees, transaction costs, and taxes.*

There are several potential conclusions we can draw from this table:

- None of the trend models are able to avoid an immediate 1-day loss.
- Very fast (10×30 to 10×50) and fast (20×60 and 20×100) trend models are able to limit losses for week-long drawdowns, and several are even able to profit during month-long drawdowns, though they begin to degrade for drawdowns that take over a year.
- Intermediate (50×150 to 50×250) and slow (75×225 to 75×375) trend models appear to do best for drawdowns in the 3-month to 1-year range.
- Very slow (100×300 to 200×400) trend models provide little protection for drawdowns over any timeframe.

Note that these results align with results found in earlier research commentaries about the relationship between measured convexity and trend speed. Namely, faster trends appear to exhibit convexity when measured over shorter horizons, whereas slower trend speeds require longer measurement horizons.

But what happens if we change the drawdown profile from 20%?

**Varying Drawdown Size**

*Calculations by Newfound Research. Results are hypothetical. Returns are gross of all fees, including manager fees, transaction costs, and taxes.*

We can see some interesting patterns emerge.

First, for shallower drawdowns, slower trend models struggle over almost all drawdown horizons. On the one hand, a 10% drawdown occurring over a month will be too fast to capture. On the other hand, a 10% drawdown occurring over several years will be swamped by the 35% volatility profile we simulated; there is too much noise and too little signal.

We can see that as the drawdowns become larger and the duration of the drawdown is extended, slower models begin to perform much better and faster models begin to degrade in relative performance.

Thus, if our goal is to protect against large losses over sustained periods (e.g. 20%+ over 6+ months), intermediate-to-slow trend models may be better suited to our needs.

However, if we want to try to avoid more rapid, but shallow drawdowns (e.g. Q4 2018), faster trend models will likely have to be employed.

**Varying Volatility**

In our test, we specified that the drawdown periods would be simulated with an intrinsic volatility of 35%. As we have explored briefly in the past, we expect that the optimal trend speed would be a function of both the dynamics of the trend process and the dynamics of the price process. In simplified models (i.e. constant trend), we might assume the model speed is proportional to the trend speed relative to the price volatility. For a more complex model, others have proposed that model speed should be proportional to the volatility of the trend process relative to the volatility of the price process.
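One way to formalize the simplified constant-trend intuition (our sketch, not a derivation from the commentary): if log-prices follow a Brownian motion with drift $\mu$ and volatility $\sigma$, the signal-to-noise ratio of the average return measured over a horizon $h$ is

```latex
% mean return over horizon h is roughly mu*h; its standard deviation is sigma*sqrt(h)
\frac{\mu h}{\sigma \sqrt{h}} = \frac{\mu}{\sigma}\sqrt{h}
% so the horizon required for a fixed signal-to-noise scales as (sigma / mu)^2
```

which suggests that the measurement horizon needed to detect a trend of a given strength grows with the square of the volatility-to-drift ratio, consistent with slower models being appropriate when noise is high relative to trend.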

Therefore, we also want to ask the question, “what happens if the volatility profile changes?” Below, we re-create tables for a 20% and 40% drawdown, but now assume a 20% volatility level, about half of what was previously used.

*Calculations by Newfound Research. Results are hypothetical. Returns are gross of all fees, including manager fees, transaction costs, and taxes.*

We can see that results are improved almost without exception.^{1}

Not only do faster models now perform better over longer drawdown horizons, but intermediate and slow models are now much more effective at horizons where they had previously not been. For example, the classic 50×200 model saw an increase in its median return from -23.1% to -5.3% for 20% drawdowns occurring over 1.5 years.

It is worth acknowledging, however, that even with a reduced volatility profile, a shallower drawdown over a long horizon is still difficult for trend models to exploit. We can see this in the last three rows of the 20% drawdown / 20% volatility table where none of the trend models exhibit a positive median return, despite having the ability to profit from shorting during a negative trend.

**Conclusion**

The transparent, “if-this-then-that” nature of trend following makes it well suited for scenario analysis. However, the uncertainty of how price dynamics may evolve can make it difficult to say anything about the future with a high degree of precision.

In this commentary, we sought to evaluate how trend speed, drawdown size, drawdown speed, and asset volatility relate to a trend following system's ability to perform in drawdown scenarios. We generally find that:

- The effectiveness of trend speed appears to be positively correlated with drawdown speed. Intuitively, faster drawdowns require faster trend models.
- Trend models struggle to capture shallow drawdowns (e.g. 10%). Faster trend models appear to be effective in capturing relatively shallow drawdowns (~20%), so long as they happen with sufficient speed (<6 months). Slower models appear relatively ineffective against this class of drawdowns over all horizons, unless they occur with very little volatility.
- Intermediate-to-slow trend models are most effective for larger, more prolonged drawdowns (e.g. 30%+ over 6+ months).
- Lower intrinsic asset volatility appears to make trend models effective over longer drawdown horizons.

From peak-to-trough, the dot-com bubble imploded over about 2.5 years, with a drawdown of about -50% and a volatility of 24%. The market meltdown in 2008, on the other hand, unraveled in 1.4 years, but had a -55% drawdown with 37% volatility. Knowing this, we might expect a slower model to have performed better in early 2000, while an intermediate model might have performed best in 2008.

If only reality were that simple!

While our tests may have told us something about the *expected* performance, we only live through one realization. The precise and idiosyncratic nature of how each drawdown unfolds will ultimately determine which trend models are successful and which are not. Nevertheless, evaluating the historical periods of large U.S. equity drawdowns, we do see some common patterns emerge.

The sudden drawdown of 1987, for example, remains elusive for most of the models. The dot-com bust and the Great Recession were periods where intermediate-to-slow models did best. But we can also see that trend is not a panacea: the 1946-1949 drawdown was very difficult for most trend models to navigate successfully.

Our conclusion is two-fold. First, we should ensure that the trend model we select is in-line with the sorts of drawdown profiles we are looking to create convexity against. Second, given the unknown nature of how drawdowns might evolve, it may be prudent to employ a variety of trend following models.
