When we released our paper, Being Strategic about Tactical Allocation, we proposed an inlay methodology, as opposed to an overlay, as an avenue for future research.  In the original paper, trading was constrained to occur only when the underlying strategy rebalanced.  In other words, if the underlying strategy went 6 months without trading, the overlay would also go 6 months without trading.

For those of you who have forgotten the paper (or didn't read it at all), here is a brief refresher: 100 random allocation files were generated from a suite of mutual funds going back to 1996.  Completely random (long-only) allocations were generated for each security approximately every 3 months (technically, the offset of the next rebalance date was distributed as a Normal(63, 10) in trading days).
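As a concrete illustration, the simulation setup described above could be sketched as follows. The use of a Dirichlet draw for the random long-only allocations and the exact date arithmetic are our assumptions for illustration, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def rebalance_schedule(total_days=252 * 18):
    """Rebalance dates (in trading days) with offsets drawn from Normal(63, 10)."""
    day, days = 0, []
    while day < total_days:
        days.append(day)
        # Offset to the next rebalance ~ Normal(63, 10), floored at 1 day
        day += max(1, int(round(rng.normal(63, 10))))
    return days

def random_allocation(n_assets):
    """One random long-only allocation summing to 1 (Dirichlet is an assumption)."""
    return rng.dirichlet(np.ones(n_assets))

schedule = rebalance_schedule()
weights = random_allocation(5)
```

Repeating `random_allocation` at each date in `schedule` yields one random allocation file; the paper's test generated 100 of them.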

We then applied the following algorithm at each rebalance point:

let w be the original portfolio weights
for each set of rebalance weights: 
   let t be our target portfolio allocations
   let m be a trailing N-day correlation matrix
   let E be the eigenvectors of m

   Find w' such that we minimize || f(b_t - b_g) || + K*|| w' - w ||
   where
      b_g is our current guess bets, E^(-1) w'
      b_t is our target bets, E^(-1) t
      f is a function that weights differences by eigenvalue significance
      K is the relative cost of turnover versus tracking error

   w = w'

The optimization attempts to simultaneously minimize turnover and tracking error, with w' being pulled towards t through the effective bets we are making and w' being pulled towards w through the cost of turnover.
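A minimal sketch of this optimization for a single rebalance, using numpy and scipy. The choice of eigenvalue-proportional weights for f, the solver, and all the numbers are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np
from scipy.optimize import minimize

def overlay_step(w, t, corr, K=1.0):
    """One rebalance: pull w' toward the target bets while penalizing turnover."""
    evals, E = np.linalg.eigh(corr)     # eigendecomposition of the correlation matrix
    E_inv = E.T                         # eigenvectors are orthonormal, so E^(-1) = E^T
    f = evals / evals.sum()             # weight differences by eigenvalue significance (an assumption)
    b_t = E_inv @ t                     # target bets

    def objective(w_new):
        b_g = E_inv @ w_new             # current guess bets
        tracking = np.linalg.norm(f * (b_t - b_g))
        turnover = np.linalg.norm(w_new - w)
        return tracking + K * turnover

    # Long-only, fully-invested portfolio
    cons = ({'type': 'eq', 'fun': lambda x: x.sum() - 1.0},)
    bounds = [(0.0, 1.0)] * len(w)
    return minimize(objective, w, bounds=bounds, constraints=cons).x

# Illustrative three-asset example
corr = np.array([[1.0, 0.6, 0.2],
                 [0.6, 1.0, 0.4],
                 [0.2, 0.4, 1.0]])
w = np.array([0.5, 0.3, 0.2])   # current portfolio weights
t = np.array([0.2, 0.4, 0.4])   # new target weights
w_prime = overlay_step(w, t, corr, K=1.0)
```

Raising K pulls `w_prime` toward the held weights `w` (less turnover); lowering it pulls `w_prime` toward the target `t` (less tracking error).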

Our hypothesis for improvement was that if the co-movement relationships between securities materially changed within re-allocation periods, the asset allocation we utilized as a proxy for the targeted allocation may no longer apply.  We felt that through an inlay process, we could retain our reduced turnover and further reduce tracking error.

So we re-wrote the algorithm as follows:

let w be the original portfolio weights
for each date we currently have:
   let e be our current effective weights
   if today is a rebalance day:
      let t be the rebalance weights
   else:
      let t be our effective weights had we implemented the last rebalance

   let m be a trailing N-day correlation matrix
   let E be the eigenvectors of m
   Find w' such that we minimize || f(b_t - b_g) || + K*|| w' - e ||
   where
      b_g is our current guess bets, E^(-1) w'
      b_t is our target bets, E^(-1) t
      f is a function that weights differences by eigenvalue significance
      K is the relative cost of turnover versus tracking error

   if || b_g - b_t || > threshold:
      w = w'

The major changes here are: (1) running the optimization every day, (2) running calculations with effective weights, and (3) the introduction of a threshold variable at the end.  These changes mean that the algorithm can now trigger trades between rebalance dates if the allocations we had previously selected are no longer representative of the original target allocations due to changes in security co-movement relationships.
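The threshold mechanic can be sketched as follows. Here the "bet gap" comes purely from weight drift between the held portfolio and a shadow portfolio that fully implemented the last rebalance; the helper names, correlation matrix, and returns are illustrative assumptions:

```python
import numpy as np

def drift(weights, daily_returns):
    """Effective weights after one day of market moves with no trading."""
    grown = weights * (1.0 + daily_returns)
    return grown / grown.sum()

corr = np.array([[1.0, 0.5, 0.1],
                 [0.5, 1.0, 0.3],
                 [0.1, 0.3, 1.0]])
_, E = np.linalg.eigh(corr)
E_inv = E.T                              # orthonormal eigenvectors: E^(-1) = E^T

w = np.array([0.45, 0.35, 0.20])         # held (turnover-reduced) weights
shadow = np.array([0.40, 0.40, 0.20])    # had we fully implemented the last rebalance
threshold = 0.05
traded = []

for day_returns in [np.array([0.01, -0.02, 0.00]),
                    np.array([-0.03, 0.04, 0.01])]:
    w = drift(w, day_returns)
    shadow = drift(shadow, day_returns)
    # Compare bets, not raw weights: differences are measured in eigen-space
    gap = np.linalg.norm(E_inv @ w - E_inv @ shadow)
    if gap > threshold:
        w = shadow.copy()                # trade only when bets have drifted too far
        traded.append(True)
    else:
        traded.append(False)
```

In this toy run the first day's gap exceeds the 5% threshold, triggering a trade; once the portfolios match, subsequent identical drift keeps the gap at zero and no further trades fire.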

So what were the results?  Setting the threshold variable at 5% and K=1, mean tracking error is significantly lower (at a 99% significance level) than the overlay method, but the mean turnover is significantly greater (at a 99% significance level).

[Figure: Inlay Distribution of Annualized Tracking Errors]

[Figures: Inlay Distribution of Total Return Difference; Inlay Distribution of Turnover Reduction; Inlay Naive v Smart Total Turnover]

So where does that leave us? Well, there are a lot of free-floating variables to consider. In the algorithm, we should consider the implications of the look-back length, the threshold boundary, and the relative importance of turnover vs. tracking error. In the test itself, we should consider whether our methodology adequately captures the differences between these two methods. Do we expect the co-movement relationships of our asset classes to change significantly during the (on average) 63-trading-day re-allocation gap?

Obviously, more research needs to be performed, and the optimal algorithm parameters will likely be a function of the portfolio securities in question. Nevertheless, we think that these methods are a promising step forward for tactical managers like ourselves that need to balance the value of the tactical bets we are making against the cost of placing them.

Corey is co-founder and Chief Investment Officer of Newfound Research, a quantitative asset manager offering a suite of separately managed accounts and mutual funds. At Newfound, Corey is responsible for portfolio management, investment research, strategy development, and communication of the firm's views to clients.

Prior to offering asset management services, Newfound licensed research from the quantitative investment models developed by Corey. At peak, this research helped steer the tactical allocation decisions for upwards of $10bn.

Corey is a frequent speaker on industry panels and contributes to ETF.com, ETF Trends, and Forbes.com’s Great Speculations blog. He was named a 2014 ETF All Star by ETF.com.

Corey holds a Master of Science in Computational Finance from Carnegie Mellon University and a Bachelor of Science in Computer Science, cum laude, from Cornell University.

You can connect with Corey on LinkedIn or Twitter.
