This post is available as a PDF download here.
Summary
- Information does not flow into the market at a constant frequency or with constant magnitude.
- By sampling data using a constant time horizon (e.g. “200-day simple moving average”), we may over-sample during calm market environments and under-sample in chaotic ones.
- As an example, we introduce a highly simplified price model and demonstrate that trend following lookback periods should be a dynamic function of trend and volatility in the time domain.
- By changing the sampling domain slightly, we are able to completely eliminate the need for the dynamic lookback period.
- Finally, we demonstrate a more complicated model that samples market prices based upon cumulative log differences, creating a dynamic moving average in the time domain.
- We believe that there are other interesting applications of this line of thinking, many of which may already be in use today by investors who may not be aware of it (e.g. tracking-error-based rebalancing techniques).
In the 2014 film Interstellar, Earth has been plagued by crop blights and dust storms that threaten the survival of mankind. Unknown, interstellar beings have opened a wormhole near Saturn, creating a path to a distant galaxy and the potential of a new home for humanity.
Twelve volunteers travel into the wormhole to explore twelve potentially hospitable planets, all located near a massive black hole named Gargantua. Of the twelve, only three reported back positive results.
With confirmation in hand, the crew of the spaceship Endurance sets out from Earth with 5,000 frozen human embryos, intent on colonizing the new planets.
After traversing the wormhole, the crew sets down upon the first planet – an ocean world – and quickly discovers that it is actually inhospitable. A gigantic tidal wave kills one member of the crew and severely delays the lander’s departure.
The planet’s close proximity to the gravitational forces of the supermassive black hole causes extreme time dilation. The positive beacon that had been tracked had perhaps been triggered just minutes prior on the planet. For the crew, the three hours spent on the planet amounted to over 23 years on Earth. The crew can only watch, devastated, as their loved ones age before their eyes in the video messages received – and never responded to – in their multi-decade absence.
Our lives revolve around the clock, though we do not often stop to reflect upon the nature of time.
Some aspects of time tie to corresponding natural events. A day is simply reckoned from one midnight to the next, reflecting the Earth’s full rotation about its axis. A year, which reflects the length of time it takes for the Earth to make a full revolution around the Sun, will also correspond to a full set of seasons.
Others, however, are seemingly more arbitrary. The twenty-four-hour day is derived from the ancient Egyptians, who divided day-time into 10 hours, bookended by twilight hours. The division of an hour into sixty minutes comes from the Babylonians, who used a sexagesimal counting system.
We impose the governance of the clock upon our financial system as well. Public companies prepare quarterly and annual reports. Economic data is released at a scheduled monthly or quarterly pace. Trading days for U.S. equity markets are defined as between the hours of 9:30am and 4:00pm ET.
In many ways, our imposition of the clock upon markets creates a natural cadence for the flow of information.
Yet, despite our best efforts to impose order, information most certainly does not flow into the market in a constant or steady manner.
New innovations, geopolitical frictions, and errant tweets all represent idiosyncratic events that can reshape our views in an instant. A single event can be of greater import than all the cumulative economic news that came before it; just consider the collapse of Lehman Brothers.
And much like the time dilation experienced by the crew of the Endurance, a few harrowing days of 2008 may have felt longer than the entirety of a tranquil year like 2017.
One way of trying to visualize this concept is by looking at the cumulative variance of returns. Given the clustered nature of volatility, we would expect to see periods where the variance accumulates slowly (“calm markets”) and periods where the variance accumulates rapidly (“chaotic markets”).
When we perform this exercise – by simply summing squared daily returns for the S&P 500 over time – we see precisely this. During market environments that exhibit stable economic growth and little market uncertainty, we see very slow and steady accumulation of variance. During periods when markets are seeking to rapidly reprice risk (e.g. 2008), we see rapid jumps.
Source: CSI Data. Calculations by Newfound Research.
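As a rough sketch, such a series can be computed by simply accumulating squared daily returns (the price series name and data source below are hypothetical):

```python
import pandas as pd

def cumulative_variance(prices: pd.Series) -> pd.Series:
    """Running sum of squared daily returns, a crude proxy for cumulative realized variance."""
    daily_returns = prices.pct_change().dropna()
    return (daily_returns ** 2).cumsum()

# Hypothetical usage:
# sp500 = pd.read_csv("sp500.csv", index_col=0, parse_dates=True)["close"]
# cumulative_variance(sp500).plot()   # flat stretches = calm markets; steep jumps = chaotic ones
```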
If we believe that information flow is not static and constant, then sampling data on a constant, fixed interval will mean that during calm markets we might be over-sampling our data and during chaotic markets we might be under-sampling.
Let’s make this a bit more concrete.
Below we plot the adjusted closing price of the S&P 500 and its 200-day simple moving average. Here, the simple moving average aims to estimate the trend component of price. We can see that during the 2005-2007 period, it estimates the underlying trend well, while in 2008 it dramatically lags the price decline.
Source: CSI Data. Calculations by Newfound Research.
The question we might want to ask ourselves is, why are we looking at the prior 200 days? Or, more specifically, why is a day a meaningful unit of measure? We already demonstrated above that it very well may not be: one day might be packed with economically-relevant information and another entirely devoid of it.
Perhaps there are other ways in which we might think about sampling data. We could, for example, sample data on fixed intervals of cumulative volume. Another approach might be to sample on a fixed number of cumulative ticks or trades. Yet another might be to sample on a fixed amount of cumulative volatility or variance.
As a firm which makes heavy use of trend-following techniques, we are particularly partial to the latter approach, as the volatility of an asset’s price relative to its trend should inform the trend lookback horizon. If we think of trend following as the trading strategy that replicates the payoff profile of a straddle, increased volatility levels will decrease the delta of the option positions, and therefore decrease our position size. An interpretation of this effect is that the increased volatility decreases our certainty of where price will fall at expiration, and therefore we need to decrease our sensitivity to price movements.
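For readers comfortable with options, a minimal numerical sketch of this effect is shown below, using the Black-Scholes delta of a straddle struck at a prior price level; the spot, strike, and volatility values are purely illustrative choices of our own.

```python
from math import erf, log, sqrt

def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def straddle_delta(spot: float, strike: float, vol: float, t: float, r: float = 0.0) -> float:
    """Black-Scholes delta of a long straddle: call delta plus put delta = 2*N(d1) - 1."""
    d1 = (log(spot / strike) + (r + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    return 2.0 * norm_cdf(d1) - 1.0

# Spot 5% above the strike, one year to expiration, moderate volatility levels:
for vol in (0.10, 0.20, 0.30):
    print(f"vol = {vol:.0%}: straddle delta = {straddle_delta(105, 100, vol, 1.0):.2f}")
# Higher volatility produces a smaller delta here, i.e. a smaller implied position size.
```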
If that all sounds like Greek, consider this simple example. Assume that price follows a highly simplified model as a function of time:

$$P(t) = m t + a \sin(t)$$

There are two components of this model: the linear trend, $m t$, and the noise, $a \sin(t)$.

Now let’s assume we are attempting to identify whether the linear trend is positive or negative by using a simple moving average (“SMA”) of price:

$$\text{SMA}_n(t) = \frac{1}{n} \sum_{i=0}^{n-1} P(t - i)$$

To determine if there is a positive or a negative trend, we simply ask if our current SMA value is greater or less than the prior SMA value. For a positive trend, we require:

$$\text{SMA}_n(t) > \text{SMA}_n(t - 1)$$

Substituting our above definition of the simple moving average:

$$\frac{1}{n} \sum_{i=0}^{n-1} P(t - i) > \frac{1}{n} \sum_{i=0}^{n-1} P(t - 1 - i)$$

When we recognize that most of the terms on the left also appear on the right, we can re-write the whole comparison as the new price entering the SMA being greater than the old price dropping out of the SMA:

$$P(t) > P(t - n)$$

Which, through substitution of our original definition, leaves us with:

$$m t + a \sin(t) > m (t - n) + a \sin(t - n)$$

Re-arranging a bit, we get:

$$m n > a \left( \sin(t - n) - \sin(t) \right)$$

Here we use the fact that sin(x) is bounded between -1 and 1, meaning that:

$$\sin(t - n) - \sin(t) \le 2$$

Assuming a positive trend (m > 0), we can replace the right-hand side with our worst-case scenario, leaving us with:

$$n > \frac{2a}{m}$$

To quickly test this result, we can construct a simple time series where we assume a = 3 and m = 0.5, which implies that our SMA length should be greater than 2a/m = 12. We plot the time series and SMA below. Note that the SMA is always increasing.
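A minimal sketch of this check is shown below; the parameters follow the example above, while the specific SMA length of 13 is simply one choice of ours that satisfies the bound.

```python
import numpy as np
import pandas as pd

a, m = 3.0, 0.5                 # noise amplitude and trend slope from the example above
n = 13                          # any SMA length satisfying n > 2a/m = 12

t = np.arange(0, 250)
price = m * t + a * np.sin(t)   # simplified model: linear trend plus sinusoidal noise

sma = pd.Series(price, index=t).rolling(n).mean().dropna()
print((sma.diff().dropna() > 0).all())   # True: the SMA increases at every step
```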
Despite being a highly simplified model, it illuminates that our lookback length should be a function of noise versus trend strength. The higher the ratio of noise to trend, the longer the lookback required to smooth out the noise. On the other hand, when the trend is very strong and the noise is weak, the lookback can be quite short.1
Thus, if trend and noise change over time (which we would expect them to), the optimal lookback will be a dynamic function: when trend is much weaker than noise, our lookback period will be extended; when trend is much stronger than noise, the lookback period shrinks.
But what if we transform the sampling domain? Rather than sampling price every time step, what if we sample price as a function of cumulative noise? For example, using our simple model, we could sample when cumulative noise sums back to zero (which, in this example, will be the equivalent of sampling every 2π time-steps).2
Sampling at that frequency, how many data points would we need to estimate our trend? We need not even work out the math as before; a bit of analytical logic will suffice. In this case, because we know the cumulative noise equals zero, we know that a point-to-point comparison will be affected only by the trend component. Thus, we only need n = 1 in this new domain.
And this is true regardless of the parameterization of trend or noise. Goodbye, dynamic lookback function.
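A quick numerical check of this claim under the same toy model, sampling only at t = 2πk, where the sinusoidal noise vanishes (again, our own sketch for illustration):

```python
import numpy as np

a, m = 3.0, 0.5
k = np.arange(0, 25)
sample_times = 2 * np.pi * k                    # sample whenever cumulative noise returns to zero
samples = m * sample_times + a * np.sin(sample_times)

point_to_point = np.diff(samples)
print(np.allclose(point_to_point, 2 * np.pi * m))   # True: every change is pure trend, for any a
```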
Of course, this is a purely hypothetical – and dramatically over-simplified – model. Nevertheless, it may illuminate why time-based sampling may not be the most efficient practice if we do not believe that information flow is constant.
Below, we again plot the S&P 500 as well as a standard 200-day simple moving average.
We also sample prices of the S&P 500 based upon the cumulative magnitude of log differences, approximating a cumulative 2.5% volatility move. When the market exhibits low volatility levels, the process samples price less frequently. When the market exhibits high volatility, it samples more frequently. Finally, we plot a 200-period moving average based upon these samples.
We can see that by sampling in a different domain – in this case, the log difference space – we can generate a process that reacts dynamically in the time domain. During the calm markets of 2006 and early 2007, the 200-period moving average behaves like the 200-day simple moving average, whereas during the 2008 crisis it adapts to the changing price level far more quickly.
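One rough way such a sampling scheme could be implemented is sketched below. The 2.5% threshold follows the text, but the function and variable names are our own and this is not necessarily the exact methodology used to produce the chart.

```python
import numpy as np
import pandas as pd

def sample_on_cumulative_move(prices: pd.Series, threshold: float = 0.025) -> pd.Series:
    """Keep a price only once cumulative absolute log differences since the last
    sample exceed the threshold; calm markets are therefore sampled less often."""
    log_prices = np.log(prices)
    sampled = [prices.index[0]]
    cum_move = 0.0
    for prev, curr in zip(prices.index[:-1], prices.index[1:]):
        cum_move += abs(log_prices.loc[curr] - log_prices.loc[prev])
        if cum_move >= threshold:
            sampled.append(curr)
            cum_move = 0.0
    return prices.loc[sampled]

# Hypothetical usage with a daily S&P 500 price series:
# samples = sample_on_cumulative_move(sp500_prices, threshold=0.025)
# vol_domain_ma = samples.rolling(200).mean()                 # 200 samples, not 200 days
# time_domain_ma = vol_domain_ma.reindex(sp500_prices.index).ffill()
```

Mapped back into calendar time, such an average updates slowly when few samples are generated and quickly when many are.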
By changing the domain in which we sample, we may be able to create a model that is dynamic in the time domain, avoiding the time-dilation effects of information flow.
Conclusion
Each morning the sun rises and each evening it sets. Every year the Earth travels in orbit around the sun. What occurs during those time spans, however, varies dramatically day-by-day and year-by-year. Yet in finance – and especially quantitative finance – we often find ourselves using time as a measuring stick.
We find the notion of time almost everywhere in portfolio construction. Factors, for example, are often defined by measurements over a certain lookback horizon and reformed based upon the decay speed of the signal.
Even strategic portfolios are often rebalanced based upon the calendar. As we demonstrated in our paper Rebalance Timing Luck: The Difference Between Hired and Fired, fixed-schedule rebalancing can invite tremendous random impact in our portfolios.
Information does not flow into the market at a constant rate. While time may be a convenient measure, it may actually cause us to sample too frequently in some market environments and not frequently enough in others.
One answer may be to transform our measurements into a different domain. Rather than sampling price based upon the market close of each day, we might sample price based upon a fixed amount of cumulative volume, trades, or even variance. In doing so, we might find that our measures now represent a more consistent amount of information flow, despite representing a dynamic amount of data in the time domain.
The Speed Limit of Trend
By Corey Hoffstein
On April 15, 2019
In Trend, Weekly Commentary
This post is available as a PDF download here.
Summary
We like to use the phrase “mechanically convex” when it comes to trend following. It implies a transparent and deterministic “if-this-then-that” relationship between the price dynamics of an asset, the rules of a trend following strategy, and the performance achieved by that strategy.
Of course, nobody knows how an asset’s future price dynamics will play out. Nevertheless, the deterministic nature of the rules of trend following should, at least, allow us to set semi-reasonable expectations about the outcomes we are trying to achieve.
A January 2018 paper from One River Asset Management titled The Interplay Between Trend Following and Volatility in an Evolving “Crisis Alpha” Industry touches precisely upon this mechanical nature. Rather than trying to draw conclusions analytically, the paper employs numerical simulation to explore how certain trend speeds react to different drawdown profiles.
Specifically, the authors simulate 5-years of daily equity returns by assuming a geometric Brownian motion with 13% drift and 13% volatility. They then simulate drawdowns of different magnitudes occurring over different time horizons by assuming a Brownian bridge process with 35% volatility.
The authors then construct trend following strategies of varying speeds to be run on these simulations and calculate the median performance.
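A rough sketch of how a single scenario in this sort of test might be simulated is shown below. The drift, volatility, and bridge volatility follow the description above, while the drawdown size, horizon, trend model, and number of trials are our own illustrative choices and not necessarily those used for the tables that follow.

```python
import numpy as np

def simulate_path(drawdown=0.20, horizon_days=126, vol_dd=0.35,
                  drift=0.13, vol=0.13, burn_in_days=5 * 252, seed=None):
    """Five years of GBM (13% drift, 13% vol) followed by a Brownian-bridge drawdown leg."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / 252.0

    # Burn-in leg: geometric Brownian motion in log space.
    gbm_steps = (drift - 0.5 * vol ** 2) * dt + vol * np.sqrt(dt) * rng.standard_normal(burn_in_days)
    log_price = np.concatenate([[0.0], np.cumsum(gbm_steps)])

    # Drawdown leg: Brownian bridge pinned to end `drawdown` below the starting level.
    frac = np.linspace(0.0, 1.0, horizon_days + 1)
    w = np.concatenate([[0.0], np.cumsum(vol_dd * np.sqrt(dt) * rng.standard_normal(horizon_days))])
    bridge = w - frac * w[-1]
    dd_leg = log_price[-1] + frac * np.log(1.0 - drawdown) + bridge

    return np.exp(np.concatenate([log_price, dd_leg[1:]]))

def trend_return(prices, fast=50, slow=200, dd_days=126):
    """Return of a long/short fast-x-slow moving-average-crossover system over the final dd_days."""
    slow_ma = np.convolve(prices, np.ones(slow) / slow, mode="valid")
    fast_ma = np.convolve(prices, np.ones(fast) / fast, mode="valid")[-len(slow_ma):]
    signal = np.where(fast_ma > slow_ma, 1.0, -1.0)[:-1]          # trade on the prior day's signal
    rets = np.diff(prices[slow - 1:]) / prices[slow - 1:-1]
    dd_rets = (signal * rets)[-dd_days:]
    return np.prod(1.0 + dd_rets) - 1.0

# Median performance of a 50x200 system in a 20% drawdown over roughly six months:
results = [trend_return(simulate_path(seed=i)) for i in range(500)]
print(np.median(results))
```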
Below we re-create this test. By varying the drawdown horizon T and the NxM trend models, we can attempt to get a sense as to how different trend speeds should behave in different drawdown profiles.
Note that the generated tables report on the median performance of the trend following strategy over only the drawdown period. The initial five years of positive expected returns are essentially treated as a burn-in period for the trend signal. Thus, if we are looking at a drawdown of 20% and an entry in the table reads -20%, it implies that the trend model was exposed to the full drawdown without regard to what happened in the years prior to the drawdown. The return of the trend following strategies over the drawdown period can be larger than the drawdown because of whipsaw and the fact that the underlying equity can be down more than 20% at points during the period.
Furthermore, these results are for long/short implementations. Recall that a long/flat strategy can be thought of as 50% exposure to equity plus 50% exposure to a long/short strategy. Thus, the results of long/flat implementations can be approximated by halving the reported result and adding half the drawdown profile. For example, in the table below, the 20×60 trend system on the 6-month drawdown horizon is reported to have a drawdown of -4.3%. This would imply that a long/flat implementation of this strategy would have a drawdown of approximately -12.2%.
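Using the figures from that example, the approximation is simply:

$$0.5 \times (-4.3\%) + 0.5 \times (-20\%) = -12.15\% \approx -12.2\%$$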
There are several potential conclusions we can draw from this table:
Note that these results align with results found in earlier research commentaries about the relationship between measured convexity and trend speed. Namely, faster trends appear to exhibit convexity when measured over shorter horizons, whereas slower trend speeds require longer measurement horizons.
But what happens if we change the drawdown profile from 20%?
Varying Drawdown Size
Calculations by Newfound Research. Results are hypothetical. Returns are gross of all fees, including manager fees, transaction costs, and taxes.
We can see some interesting patterns emerge.
First, for more shallow drawdowns, slower trend models struggle over almost all drawdown horizons. On the one hand, a 10% drawdown occurring over a month will be too fast to capture. On the other hand, a 10% drawdown occurring over several years will be swamped by the 35% volatility profile we simulated; there is too much noise and too little signal.
We can see that as the drawdowns become larger and the duration of the drawdown is extended, slower models begin to perform much better and faster models begin to degrade in relative performance.
Thus, if our goal is to protect against large losses over sustained periods (e.g. 20%+ over 6+ months), intermediate-to-slow trend models may be better suited to our needs.
However, if we want to try to avoid more rapid, but shallow drawdowns (e.g. Q4 2018), faster trend models will likely have to be employed.
Varying Volatility
In our test, we specified that the drawdown periods would be simulated with an intrinsic volatility of 35%. As we have explored briefly in the past, we expect that the optimal trend speed would be a function of both the dynamics of the trend process and the dynamics of the price process. In simplified models (i.e. constant trend), we might assume the model speed is proportional to the trend speed relative to the price volatility. For a more complex model, others have proposed that model speed should be proportional to the volatility of the trend process relative to the volatility of the price process.
Therefore, we also want to ask the question, “what happens if the volatility profile changes?” Below, we re-create tables for a 20% and 40% drawdown, but now assume a 20% volatility level, about half of what was previously used.
Calculations by Newfound Research. Results are hypothetical. Returns are gross of all fees, including manager fees, transaction costs, and taxes.
We can see that results are improved almost without exception.1
Not only do faster models now perform better over longer drawdown horizons, but intermediate and slow models are now much more effective at horizons where they had previously not been. For example, the classic 50×200 model saw an increase in its median return from -23.1% to -5.3% for 20% drawdowns occurring over 1.5 years.
It is worth acknowledging, however, that even with a reduced volatility profile, a shallower drawdown over a long horizon is still difficult for trend models to exploit. We can see this in the last three rows of the 20% drawdown / 20% volatility table where none of the trend models exhibit a positive median return, despite having the ability to profit from shorting during a negative trend.
Conclusion
The transparent, “if-this-then-that” nature of trend following makes it well suited for scenario analysis. However, the uncertainty of how price dynamics may evolve can make it difficult to say anything about the future with a high degree of precision.
In this commentary, we sought to evaluate the relationship between trend speed, drawdown size, drawdown speed, and asset volatility on the one hand, and a trend following system’s ability to perform in drawdown scenarios on the other. We generally find that:
From peak-to-trough, the dot-com bubble imploded over about 2.5 years, with a drawdown of about -50% and a volatility of 24%. The market meltdown in 2008, on the other hand, unraveled in 1.4 years, but had a -55% drawdown with 37% volatility. Knowing this, we might expect a slower model to have performed better in early 2000, while an intermediate model might have performed best in 2008.
If only reality were that simple!
While our tests may have told us something about the expected performance, we only live through one realization. The precise and idiosyncratic nature of how each drawdown unfolds will ultimately determine which trend models are successful and which are not. Nevertheless, evaluating the historical periods of large U.S. equity drawdowns, we do see some common patterns emerge.
Calculations by Newfound Research. Results are hypothetical. Returns are gross of all fees, including manager fees, transaction costs, and taxes.
The sudden drawdown of 1987, for example, remains elusive for most of the models. The dot-com and Great Recession were periods where intermediate-to-slow models did best. But we can also see that trend is not a panacea: the 1946-1949 drawdown was very difficult for most trend models to navigate successfully.
Our conclusion is two-fold. First, we should ensure that the trend model we select is in line with the sorts of drawdown profiles we are looking to create convexity against. Second, given the unknown nature of how drawdowns might evolve, it may be prudent to employ a variety of trend following models.