# Month: August 2018

Trade optimization is a more technical topic than we usually cover in our published research.  Therefore, this note relies heavily on mathematical notation and assumes readers have a basic understanding of optimization.  Accompanying the commentary is code written in Python, meant to provide concrete examples of how these ideas can be implemented.  The Python code leverages the PuLP optimization library.

Readers not proficient in these areas may still benefit from reading the Introduction and evaluating the example outlined in Section 5.

## Summary

• In practice, portfolio managers must account for the real-world implementation costs – both explicit (e.g. commission) and implicit (e.g. bid/ask spread and impact) – associated with trading portfolios.
• Managers often implement trade paring constraints that may limit the number of allowed securities, the number of executed trades, the size of a trade, or the size of holdings. These constraints can turn a well-formed convex optimization into a discrete problem.
• In this note, we explore how to formulate trade optimization as a Mixed-Integer Linear Programming (“MILP”) problem and implement an example in Python.

## 0. Initialize Python Libraries

import pandas
import numpy

from pulp import *

import scipy.optimize

## 1. Introduction

In the context of portfolio construction, trade optimization is the process of managing the transactions necessary to move from one set of portfolio weights to another. These optimizations can play an important role both in the case of rebalancing and in the case of a cash infusion or withdrawal.  The reason for controlling these trades is to try to minimize the explicit (e.g. commission) and implicit (e.g. bid/ask spread and impact) costs associated with trading.

Two approaches are often taken to trade optimization:

1. Trading costs and constraints are explicitly considered within portfolio construction. For example, a portfolio optimization that seeks to maximize exposure to some alpha source may incorporate explicit measures of transaction costs or constrain the number of trades that are allowed to occur at any given rebalance.
2. Portfolio construction and trade optimization occur in a two step process. For example, a portfolio optimization may take place that creates the “ideal” portfolio, ignoring consideration of trading constraints and costs. Trade optimization would then occur as a second step, seeking to identify the trades that would move the current portfolio “as close as possible” to the target portfolio while minimizing costs or respecting trade constraints.

These two approaches will not necessarily arrive at the same result. At Newfound, we prefer the latter approach, as we believe it creates more transparency in portfolio construction. Embedding trade optimization within portfolio optimization can also introduce complicated constraints that risk making the optimization infeasible.  Furthermore, the separation of portfolio optimization and trade optimization allows us to target the same model portfolio across all strategy implementations, but vary when and how different portfolios trade depending upon account size and costs.

For example, a highly tactical strategy implemented as a pooled vehicle with a large asset base and penny-per-share commissions can likely afford to execute a much higher number of trades than an investor trying to implement the same strategy with $250,000 and $7.99 ticket charges. While implicit and explicit trading costs will create a fixed drag upon strategy returns, failing to implement each trade as dictated by a hypothetical model will create tracking error.

Ultimately, the goal is to minimize the fixed costs while staying within an acceptable distance (e.g. turnover distance or tracking error) of our target portfolio. Often, this goal is expressed by a portfolio manager with a number of semi-ad-hoc constraints or optimization targets. For example:

• Asset Paring. A constraint that specifies the minimum or maximum number of securities that can be held by the portfolio.
• Trade Paring. A constraint that specifies the minimum or maximum number of trades that may be executed.
• Level Paring. A constraint that establishes a minimum level threshold for securities (e.g. securities must be at least 1% of the portfolio) or trades (e.g. all trades must be larger than 0.5%).

Unfortunately, these constraints often turn the portfolio optimization problem from continuous to discrete, which makes the process of optimization more difficult.

## 2. The Discreteness Problem

Consider the following simplified scenario. Given our current, drifted portfolio weights $w_{old}$ and a new set of target model weights $w_{target}$, we want to minimize the number of trades we need to execute to bring our portfolio within some acceptable turnover threshold level, $\theta$. We can define this as the optimization problem:

\begin{aligned} & \text{minimize} & & \sum\limits_{i} 1_{|t_i| > 0} \\ & \text{subject to} & & \sum\limits_{i} |w_{target, i} - (w_{old, i} + t_i)| \le 2 * \theta \\ & & & \sum\limits_{i} t_i = 0 \\ & \text{and} & & t_i \ge -w_{old,i} \end{aligned}

Unfortunately, as we will see below, simply trying to throw this problem into an off-the-shelf convex optimizer, as is, will lead to some potentially odd results. And we have not even introduced any complex paring constraints!

#### 2.1 Example Data

# setup some sample data
tickers = "amj bkln bwx cwb emlc hyg idv lqd \
pbp pcy pff rem shy tlt vnq vnqi vym".split()

w_target = pandas.Series([float(x) for x in "0.04095391 0.206519656 0 \
0.061190655 0.049414401 0.105442705 0.038080766 \
0.07004622 0.045115708 0.08508047 0.115974239 \
0.076953702 0 0.005797291 0.008955226 0.050530852 \
0.0399442".split()], index = tickers)

w_old = pandas.Series([float(x) for x in \
"0.058788745 0.25 0 0.098132817 \
0 0.134293993 0.06144967 0.102295438 \
0.074200473 0 0 0.118318536 0 0 \
0.04774768 0 0.054772649".split()], \
index = tickers)

n = len(tickers)

w_diff = w_target - w_old

#### 2.2 Applying a Naive Convex Optimizer

The example below demonstrates the numerical issues associated with attempting to solve discrete problems with traditional convex optimizers.  Using the portfolio and target weights established above, we run a naive optimization that seeks to minimize the number of trades necessary to bring our holdings within a 5% turnover threshold from the target weights.

# Try a naive optimization with SLSQP

theta = 0.05
theta_hat = theta + w_diff.abs().sum() / 2.

def _fmin(t):
    # objective: the number of non-zero trades
    return numpy.sum(numpy.abs(t) > 1e-8)

def _distance_constraint(t):
    # inequality constraint (must be non-negative):
    # total trading must stay within the theta_hat budget
    return theta_hat - numpy.sum(numpy.abs(t)) / 2.

def _sums_to_zero(t):
    # equality constraint: trades must be self-financing
    return numpy.sum(t)

t0 = w_diff.copy()

bounds = [(-w_old[i], 1) for i in range(0, n)]

result = scipy.optimize.fmin_slsqp(_fmin, t0, bounds = bounds, \
                                   eqcons = [_sums_to_zero], \
                                   ieqcons = [_distance_constraint], \
                                   disp = -1)

result = pandas.Series(result, index = tickers)

Note that the trades we received are simply $w_{target} - w_{old}$, which was our initial guess for the optimization.  The optimizer didn’t optimize.

What’s going on? Well, many off-the-shelf optimizers – such as the Sequential Least Squares Programming (SLSQP) approach applied here – will attempt to solve this problem by first estimating the gradient of the objective to decide which direction to move in search of the optimal solution. To achieve this numerically, small perturbations are made to the input vector and their influence on the resulting output is calculated.

In this case, small perturbations have no effect on the objective we are trying to minimize. Whether a trade is 5% or 5.0001% makes no difference to the *number* of trades executed. So the estimated first derivative will appear to be zero and the optimizer will exit.
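We can verify this flatness directly. Below is a quick sketch (re-using the `_fmin` function and the `w_diff` sample data from the code above) showing that a small perturbation of the candidate trade vector leaves the objective unchanged, so any finite-difference gradient estimate will be zero:

# scale every candidate trade by a tiny amount: the *number* of non-zero
# trades -- and therefore the objective -- does not change
eps = 1e-6
print(_fmin(w_diff.values))
print(_fmin(w_diff.values * (1 + eps)))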

Fortunately, with a bit of elbow grease, we can turn this problem into a mixed-integer linear programming (“MILP”) problem, which has its own set of efficient optimization tools (in this article, we will use the PuLP library for the Python programming language). A MILP is a category of optimization problems that take the standard form:

\begin{aligned} & \text{minimize} & & c^{T}x + h^{T}y \\ & \text{subject to} & & Ax + Gy \le b \\ & \text{and} & & x \in \mathbb{Z}^{n} \end{aligned}

Here b is a vector and A and G are matrices. Don’t worry too much about the form.

The important takeaway is that we need: (1) to express our minimization problem as a linear function and (2) express our constraints as a set of linear inequalities.

But first, for us to take advantage of linear programming tools, we need to eliminate our absolute values and indicator functions and somehow transform them into linear constraints.

## 3. Linear Programming Transformation Techniques

#### 3.1 Absolute Values

Consider an optimization of the form:

\begin{aligned} & \text{minimize} & & \sum\limits_{i} |x_i| \\ & \text{subject to} & & ... \end{aligned}

To get rid of the absolute value function, we can rewrite the function as a minimization of a new variable, $\psi$.

\begin{aligned} & \text{minimize} & & \sum\limits_{i} \psi_i \\ & \text{subject to} & & \psi_i \ge x_i \\ & & & \psi_i \ge -x_i \\ & \text{and} & & ... \end{aligned}

The combination of constraints makes it such that $\psi_i \ge |x_i|$. When $x_i$ is positive, $\psi_i$ is bound by the first constraint, and when $x_i$ is negative, it is bound by the second. Since the optimization seeks to minimize the sum of the $\psi_i$, and we know each $\psi_i$ will be non-negative, the optimizer will reduce each $\psi_i$ to equal $|x_i|$, its minimum possible value.

Below is an example of this trick in action. Our goal is to minimize the absolute value of some variables $x_i$. We apply bounds on each $x_i$ to allow the problem to converge on a solution.

lp_problem = LpProblem("Absolute Values", LpMinimize)

x_vars = []
psi_vars = []

bounds = [[1, 7], [-10, 0], [-9, -1], [-1, 5], [6, 9]]

print("Bounds for x: ")
print(pandas.DataFrame(bounds, columns = ["Left", "Right"]))

for i in range(5):
    x_i = LpVariable("x_" + str(i), None, None)
    x_vars.append(x_i)

    psi_i = LpVariable("psi_" + str(i), None, None)
    psi_vars.append(psi_i)

# minimize the sum of the psi variables
lp_problem += lpSum(psi_vars), "Objective"

for i in range(5):
    # together these force psi_i >= |x_i|
    lp_problem += psi_vars[i] >= -x_vars[i]
    lp_problem += psi_vars[i] >= x_vars[i]

    # apply the bounds on x_i
    lp_problem += x_vars[i] >= bounds[i][0]
    lp_problem += x_vars[i] <= bounds[i][1]

lp_problem.solve()

print("\nx variables")
print(pandas.Series([x_i.value() for x_i in x_vars]))

print("\npsi Variables (|x|):")
print(pandas.Series([psi_i.value() for psi_i in psi_vars]))
Bounds for x:
Left  Right
0     1      7
1   -10      0
2    -9     -1
3    -1      5
4     6      9

x variables
0    1.0
1    0.0
2   -1.0
3    0.0
4    6.0
dtype: float64

psi Variables (|x|):
0    1.0
1    0.0
2    1.0
3    0.0
4    6.0
dtype: float64

#### 3.2 Indicator Functions

Consider an optimization problem of the form:

\begin{aligned} & \text{minimize} & & \sum\limits_{i} 1_{x_i > 0} \\ & \text{subject to} & & ... \end{aligned}

We can re-write this problem by introducing a new variable, $y_i$, and adding a set of linear constraints:

\begin{aligned} & \text{minimize} & & \sum\limits_{i} y_i \\ & \text{subject to} & & x_i \le A*y_i\\ & & & y_i \ge 0 \\& & & y_i \le 1 \\ & & & y_i \in \mathbb{Z} \\ & \text{and} & & ... \end{aligned}

Note that the last three constraints, when taken together, tell us that $y_i \in \{0, 1\}$. The constant A should be larger than any value $x_i$ can take; for example, we can assume $A = \max_i(x_i) + 1$.

Let’s first consider what happens when $x_i \le 0$. In such a case, $y_i$ can be set to zero without violating any constraints. When $x_i$ is positive, however, for $x_i \le A*y_i$ to be true, it must be the case that $y_i = 1$.

What prevents $y_i$ from equalling 1 in the case where $x_i \le 0$ is the goal of minimizing the sum of $y_i$, which will force $y_i$ to be 0 whenever possible.

Below is a sample problem demonstrating this trick, similar to the example described in the prior section.

lp_problem = LpProblem("Indicator Function", LpMinimize)

x_vars = []
y_vars = []

bounds = [[-4, 1], [-3, 5], [-6, 1], [1, 7], [-5, 9]]

A = 11

print("Bounds for x: ")
print(pandas.DataFrame(bounds, columns = ["Left", "Right"]))

for i in range(5):
    x_i = LpVariable("x_" + str(i), None, None)
    x_vars.append(x_i)

    y_i = LpVariable("ind_" + str(i), 0, 1, LpInteger)
    y_vars.append(y_i)

# minimize the number of indicators that are "on"
lp_problem += lpSum(y_vars), "Objective"

for i in range(5):
    # apply the bounds on x_i
    lp_problem += x_vars[i] >= bounds[i][0]
    lp_problem += x_vars[i] <= bounds[i][1]

    # force y_i = 1 whenever x_i > 0
    lp_problem += x_vars[i] <= A * y_vars[i]

lp_problem.solve()

print("\nx variables")
print(pandas.Series([x_i.value() for x_i in x_vars]))

print("\ny Variables (Indicator):")
print(pandas.Series([y_i.value() for y_i in y_vars]))
Bounds for x:
Left  Right
0    -4      1
1    -3      5
2    -6      1
3     1      7
4    -5      9

x variables
0   -4.0
1   -3.0
2   -6.0
3    1.0
4   -5.0
dtype: float64

y Variables (Indicator):
0    0.0
1    0.0
2    0.0
3    1.0
4    0.0
dtype: float64

#### 3.3 Tying the Tricks Together

A problem arises when we try to tie these two tricks together, as both tricks rely upon the minimization function itself. The $\psi_i$ are dragged to the absolute value of $x_i$ because we minimize over them. Similarly, $y_i$ is dragged to zero when the indicator should be off because we are minimizing over it.

What happens, however, if we want to solve a problem of the form:

\begin{aligned} & \text{minimize} & & \sum\limits_{i} 1_{|x_i| > 0} \\ & \text{subject to} & & ... \end{aligned}

One way of trying to solve this problem is by using our tricks and then combining the objectives into a single sum.

\begin{aligned} & \text{minimize} & & \sum\limits_{i} y_i + \psi_i \\ & \text{subject to} & & \psi_i \ge x_i \\ & & & \psi_i \ge -x_i \\ & & & x_i \le A*y_i\\ & & & y_i \ge 0 \\ & & & y_i \le 1 \\ & & & y_i \in \mathbb{Z} \\ & \text{and} & & .. \end{aligned}

By minimizing over the sum of both variables, $\psi_i$ is forced towards $|x_i|$ and $y_i$ is forced to zero when $\psi_i = 0$.

Below is an example demonstrating this solution, again similar to the examples discussed in prior sections.

lp_problem = LpProblem("Absolute Values", LpMinimize)

x_vars = []
psi_vars = []
y_vars = []

bounds = [[-7, 3], [7, 8], [5, 9], [1, 4], [-6, 2]]

A = 11

print("Bounds for x: ")
print(pandas.DataFrame(bounds, columns = ["Left", "Right"]))

for i in range(5):
    x_i = LpVariable("x_" + str(i), None, None)
    x_vars.append(x_i)

    psi_i = LpVariable("psi_" + str(i), None, None)
    psi_vars.append(psi_i)

    y_i = LpVariable("ind_" + str(i), 0, 1, LpInteger)
    y_vars.append(y_i)

# minimize the indicators plus the absolute values
lp_problem += lpSum(y_vars) + lpSum(psi_vars), "Objective"

for i in range(5):
    # apply the bounds on x_i
    lp_problem += x_vars[i] >= bounds[i][0]
    lp_problem += x_vars[i] <= bounds[i][1]

for i in range(5):
    # together these force psi_i >= |x_i|
    lp_problem += psi_vars[i] >= -x_vars[i]
    lp_problem += psi_vars[i] >= x_vars[i]

    # force y_i = 1 whenever psi_i > 0
    lp_problem += psi_vars[i] <= A * y_vars[i]

lp_problem.solve()

print("\nx variables")
print(pandas.Series([x_i.value() for x_i in x_vars]))

print("\npsi Variables (|x|):")
print(pandas.Series([psi_i.value() for psi_i in psi_vars]))

print("\ny Variables (Indicator):")
print(pandas.Series([y_i.value() for y_i in y_vars]))
Bounds for x:
Left  Right
0    -7      3
1     7      8
2     5      9
3     1      4
4    -6      2

x variables
0    0.0
1    7.0
2    5.0
3    1.0
4    0.0
dtype: float64

psi Variables (|x|):
0    0.0
1    7.0
2    5.0
3    1.0
4    0.0
dtype: float64

y Variables (Indicator):
0    0.0
1    1.0
2    1.0
3    1.0
4    0.0
dtype: float64

## 4. Building a Trade Minimization Model

Returning to our original problem,

\begin{aligned} & \text{minimize} & & \sum\limits_{i} 1_{|t_i| > 0} \\ & \text{subject to} & & \sum\limits_{i} |w_{target, i} - (w_{old, i} + t_i)| \le 2 * \theta \\ & & & \sum\limits_{i} t_i = 0 \\ & \text{and} & & t_i \ge -w_{old,i} \end{aligned}

We can now use the tricks we have established above to re-write this problem as:

\begin{aligned} & \text{minimize} & & \sum\limits_{i} (\phi_i + \psi_i + y_i) \\ & \text{subject to} & & \psi_i \ge t_i \\ & & & \psi_i \ge -t_i \\ & & & \psi_i \le A*y_i \\ & & & \phi_i \ge (w_{target,i} - (w_{old,i} + t_i))\\ & & & \phi_i \ge -(w_{target,i} - (w_{old,i} + t_i)) \\ & & & \sum\limits_{i} \phi_i \le 2 * \theta \\ & & & \sum\limits_{i} t_i = 0 \\ & \text{and} & & t_i \ge -w_{old,i} \end{aligned}

While there are a large number of constraints present, in reality there are just a few key steps going on. First, our key variable in question is $t_i$. We then use our absolute value trick to create $\psi_i = |t_i|$. Next, we use the indicator function trick to create $y_i$, which tells us whether each position is traded or not. Ultimately, this is the variable we are trying to minimize.

Next, we have to deal with our turnover constraint. Again, we invoke the absolute value trick to create $\phi_i$, and replace our turnover constraint as a sum of $\phi$‘s.

Et voila?

As it turns out, not quite.

Consider a simple two-asset portfolio. The current weights are [0.25, 0.75] and we want to get these weights within 0.05 of [0.5, 0.5] (using the L^1 norm – i.e. the sum of absolute values – as our definition of “distance”).

Let’s consider the solution [0.475, 0.525]. At this point, $\phi = [0.025, 0.025]$ and $\psi = [0.225, 0.225]$. Is this solution “better” than [0.5, 0.5]? At [0.5, 0.5], $\phi = [0.0, 0.0]$ and $\psi = [0.25, 0.25]$. From the optimizer’s viewpoint, these are equivalent solutions. Within this region, there are an infinite number of possible solutions.
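As a quick numeric check of this equivalence, the sketch below evaluates both candidate trade vectors for the hypothetical two-asset example (the `_2` variables are illustrative and separate from the sample data above):

w_old_2 = numpy.array([0.25, 0.75])
w_target_2 = numpy.array([0.50, 0.50])

# candidate 1 stops just inside the turnover threshold; candidate 2 trades
# all the way back to the target weights
for t in [numpy.array([0.225, -0.225]), numpy.array([0.25, -0.25])]:
    psi = numpy.abs(t)                           # |t_i|
    phi = numpy.abs(w_target_2 - (w_old_2 + t))  # remaining distance to target
    print("t = " + str(t) + ", objective = " + str(numpy.sum(psi + phi)))

Both candidates produce an objective value of 0.5, so the optimizer has no reason to prefer one over the other.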

Yet if we are willing to let some of our tricks “fail,” we can find a solution. If we want to get as close as possible, we effectively want to minimize the sum of $\psi$‘s. The infinite solutions problem arises when we simultaneously try to minimize the sum of $\psi$‘s and $\phi$‘s, which offset each other.

Do we actually need the values of $\psi$ to be correct? As it turns out: no. All we really need is that $\psi_i$ is positive when $t_i$ is non-zero, which will then force $y_i$ to be 1. By minimizing on $y_i$, $\psi_i$ will still be forced to 0 when $t_i = 0$.

So if we simply remove $\psi_i$ from the minimization, we will end up reducing the number of trades as far as possible and then reducing the distance to the target model as much as possible given that trade level.

\begin{aligned} & \text{minimize} & & \sum\limits_{i} (\phi_i + y_i) \\ & \text{subject to} & & \psi_i \ge t_i \\ & & & \psi_i \ge -t_i \\ & & & \psi_i \le A*y_i \\ & & & \phi_i \ge (w_{target,i} - (w_{old,i} + t_i))\\ & & & \phi_i \ge -(w_{target,i} - (w_{old,i} + t_i)) \\ & & & \sum\limits_{i} \phi_i \le 2 * \theta \\ & & & \sum\limits_{i} t_i = 0 \\ & \text{and} & & t_i \ge -w_{old,i} \end{aligned}

As a side note, because the sum of $\phi$‘s will be at most 2 and the sum of $y$‘s can equal the number of assets in the portfolio, the optimizer will get more minimization bang for its buck by focusing on reducing the number of trades first before reducing the distance to the target model. This priority can be adjusted by multiplying $\phi_i$ by a sufficiently large scalar in our objective.

theta = 0.05

trading_model = LpProblem("Trade Minimization", LpMinimize)

t_vars = []
psi_vars = []
phi_vars = []
y_vars = []

A = 2

for i in range(n):
    t = LpVariable("t_" + str(i), -w_old[i], 1 - w_old[i])
    t_vars.append(t)

    psi = LpVariable("psi_" + str(i), None, None)
    psi_vars.append(psi)

    phi = LpVariable("phi_" + str(i), None, None)
    phi_vars.append(phi)

    y = LpVariable("y_" + str(i), 0, 1, LpInteger) #set y in {0, 1}
    y_vars.append(y)

# add our objective: the number of trades (y) plus the distance to target (phi)
trading_model += lpSum(phi_vars) + lpSum(y_vars), "Objective"

for i in range(n):
    # psi_i >= |t_i|, and y_i is forced to 1 whenever psi_i > 0
    trading_model += psi_vars[i] >= -t_vars[i]
    trading_model += psi_vars[i] >= t_vars[i]
    trading_model += psi_vars[i] <= A * y_vars[i]

for i in range(n):
    # phi_i >= |w_diff_i - t_i|, the remaining distance to the target weight
    trading_model += phi_vars[i] >= -(w_diff[i] - t_vars[i])
    trading_model += phi_vars[i] >= (w_diff[i] - t_vars[i])

# Make sure our trades sum to zero
trading_model += (lpSum(t_vars) == 0.)

# Keep the resulting portfolio within the turnover threshold
trading_model += (lpSum(phi_vars) / 2. <= theta)

trading_model.solve()

results = pandas.Series([t_i.value() for t_i in t_vars], index = tickers)

print("Number of trades: " + str(sum([y_i.value() for y_i in y_vars])))

print("Turnover distance: " + str((w_target - (w_old + results)).abs().sum() / 2.))
Number of trades: 12.0
Turnover distance: 0.032663284500000014

## 5. A Sector Rotation Example

As an example of applying trade paring, we construct a sample sector rotation strategy.  The investment universe consists of nine sector ETFs (XLB, XLE, XLF, XLI, XLK, XLP, XLU, XLV, and XLY).  The sectors are ranked by their 12-1 month total returns and the portfolio holds the four top-ranking ETFs in equal weight.  To reduce timing luck, we apply a four-week tranching process.

We construct three versions of the strategy.

• Naive: A version which rebalances back to hypothetical model weights on a weekly basis.
• Filtered: A version that rebalances back to hypothetical model weights when drifted portfolio weights exceed a 5% turnover distance from target weights.
• Trade Pared: A version that applies trade paring to rebalance back to within a 1% turnover distance from target weights when drifted weights exceed a 5% turnover distance from target weights.

The equity curves and per-year trade counts are plotted for each version below.  Note that the equity curves do not account for any implicit or explicit trading costs.

Data Source: CSI. Calculations by Newfound Research. Past performance does not guarantee future results.  All returns are hypothetical index returns. You cannot invest directly in an index and unmanaged index returns do not reflect any fees, expenses, sales charges, or trading expenses.  Index returns include the reinvestment of dividends.  No index is meant to measure any strategy that is or ever has been managed by Newfound Research.   The indices were constructed by Newfound in August 2018 for purposes of this analysis and are therefore entirely backtested and not investment strategies that are currently managed and offered by Newfound.

For the reporting period covering full years (2001 – 2017), the trade filtering process alone reduced the average number of annual trades by 40.6% (from 255.7 to 151.7).  The added trade paring process reduced the number of trades another 50.9% (from 151.7 to 74.5), for a total reduction of 70.9%.

## 6. Possible Extensions & Limitations

There are a number of extensions that can be made to this model, including:

• Forcing accuracy. There may be positions for which greater drift can be permitted and others where drift is less desirable. This can be achieved by adding specific constraints to our $\phi_i$ variables, as sketched below.
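As a rough sketch of what this might look like, the snippet below caps the distance-to-target variables for specific positions (re-using the `tickers`, `phi_vars`, and `trading_model` objects from Section 4; the `max_drift` limits are hypothetical and would need to be added before solving):

# hypothetical per-asset drift limits: allow hyg to drift at most 0.5%
# from its target weight, while tlt may drift up to 2%
max_drift = {"hyg": 0.005, "tlt": 0.02}

for ticker, limit in max_drift.items():
    i = tickers.index(ticker)
    # cap the absolute distance-to-target variable for this asset
    trading_model += phi_vars[i] <= limit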

Unfortunately, there are also a number of limitations. The first set is due to the fact that we are formulating our optimization as a linear program, which means that quadratic constraints or objectives, such as tracking error constraints, are forbidden. The second set is due to the complexity of the optimization problem itself: while a problem may be technically solvable, instances with a large number of securities and constraints may not be solvable in a practical amount of time.

#### 6.1 Non-Linear Constraints

In the former case, we can choose to move to a mixed-integer quadratic programming framework. Alternatively, we can employ multi-step heuristic methods to find feasible, though potentially non-optimal, solutions.

For example, consider the case where we wish our optimized portfolio to fall within a certain tracking error constraint of our target portfolio. Prior to optimization, the marginal contribution to tracking error can be calculated for each asset and the total current tracking error can be calculated. A constraint can then be added such that the current tracking error minus the sum of weighted marginal contributions must be less than the tracking error target. After the optimization is complete, we can determine whether our solution meets the tracking error constraint.

If it does not, we can use our solution as our new $w_{old}$, re-calculate our tracking error and marginal contribution figures, and re-optimize. This iterative procedure behaves much like gradient descent.

In the example below, we introduce a covariance matrix and seek to target a solution whose tracking error is less than 0.25%.

covariance_matrix = [[ 3.62767735e-02,  2.18757921e-03,  2.88389154e-05,
7.34489308e-03,  1.96701876e-03,  4.42465667e-03,
1.12579361e-02,  1.65860525e-03,  5.64030644e-03,
2.76645571e-03,  3.63015800e-04,  3.74241173e-03,
-1.35199744e-04, -2.19000672e-03,  6.80914121e-03,
8.41701096e-03,  1.07504229e-02],
[ 2.18757921e-03,  5.40346050e-04,  5.52196510e-04,
9.03853792e-04,  1.26047511e-03,  6.54178355e-04,
1.72005989e-03,  3.60920296e-04,  4.32241813e-04,
6.55664695e-04,  1.60990263e-04,  6.64729334e-04,
-1.34505970e-05, -3.61651337e-04,  6.56663689e-04,
1.55184724e-03,  1.06451898e-03],
[ 2.88389154e-05,  5.52196510e-04,  4.73857357e-03,
1.55701811e-03,  6.22138578e-03,  8.13498400e-04,
3.36654245e-03,  1.54941008e-03,  6.19861236e-05,
2.93028853e-03,  8.70115005e-04,  4.90113403e-04,
1.22200026e-04,  2.34074752e-03,  1.39606650e-03,
5.31970717e-03,  8.86435533e-04],
[ 7.34489308e-03,  9.03853792e-04,  1.55701811e-03,
4.70643696e-03,  2.36059044e-03,  1.45119740e-03,
4.46141908e-03,  8.06488179e-04,  2.09341490e-03,
1.54107719e-03,  6.99000273e-04,  1.31596059e-03,
-2.52039718e-05, -5.18390335e-04,  2.41334278e-03,
5.14806453e-03,  3.76769305e-03],
[ 1.96701876e-03,  1.26047511e-03,  6.22138578e-03,
2.36059044e-03,  1.26644146e-02,  2.00358907e-03,
8.04023724e-03,  2.30076077e-03,  5.70077091e-04,
5.65049374e-03,  9.76571021e-04,  1.85279450e-03,
2.56652171e-05,  1.19266940e-03,  5.84713900e-04,
9.29778319e-03,  2.84300900e-03],
[ 4.42465667e-03,  6.54178355e-04,  8.13498400e-04,
1.45119740e-03,  2.00358907e-03,  1.52522064e-03,
2.91651452e-03,  8.70569737e-04,  1.09752760e-03,
1.66762294e-03,  5.36854007e-04,  1.75343988e-03,
1.29714019e-05,  9.11071171e-05,  1.68043070e-03,
2.42628131e-03,  1.90713194e-03],
[ 1.12579361e-02,  1.72005989e-03,  3.36654245e-03,
4.46141908e-03,  8.04023724e-03,  2.91651452e-03,
1.19931947e-02,  1.61222907e-03,  2.75699780e-03,
4.16113427e-03,  6.25609018e-04,  2.91008175e-03,
-1.92908806e-04, -1.57151126e-03,  3.25855486e-03,
1.06990068e-02,  6.05007409e-03],
[ 1.65860525e-03,  3.60920296e-04,  1.54941008e-03,
8.06488179e-04,  2.30076077e-03,  8.70569737e-04,
1.61222907e-03,  1.90797844e-03,  6.04486114e-04,
2.47501106e-03,  8.57227194e-04,  2.42587888e-03,
1.85623409e-04,  2.91479004e-03,  3.33754926e-03,
2.61280946e-03,  1.16461350e-03],
[ 5.64030644e-03,  4.32241813e-04,  6.19861236e-05,
2.09341490e-03,  5.70077091e-04,  1.09752760e-03,
2.75699780e-03,  6.04486114e-04,  2.53455649e-03,
9.66091919e-04,  3.91053383e-04,  1.83120456e-03,
-4.91230334e-05, -5.60316891e-04,  2.28627416e-03,
2.40776877e-03,  3.15907037e-03],
[ 2.76645571e-03,  6.55664695e-04,  2.93028853e-03,
1.54107719e-03,  5.65049374e-03,  1.66762294e-03,
4.16113427e-03,  2.47501106e-03,  9.66091919e-04,
4.81734656e-03,  1.14396535e-03,  3.23711266e-03,
1.69157413e-04,  3.03445975e-03,  3.09323955e-03,
5.27456576e-03,  2.11317800e-03],
[ 3.63015800e-04,  1.60990263e-04,  8.70115005e-04,
6.99000273e-04,  9.76571021e-04,  5.36854007e-04,
6.25609018e-04,  8.57227194e-04,  3.91053383e-04,
1.14396535e-03,  1.39905835e-03,  2.01826986e-03,
1.04811491e-04,  1.67653296e-03,  2.59598793e-03,
1.01532651e-03,  2.60716967e-04],
[ 3.74241173e-03,  6.64729334e-04,  4.90113403e-04,
1.31596059e-03,  1.85279450e-03,  1.75343988e-03,
2.91008175e-03,  2.42587888e-03,  1.83120456e-03,
3.23711266e-03,  2.01826986e-03,  1.16861730e-02,
2.24795908e-04,  3.46679680e-03,  8.38606091e-03,
3.65575720e-03,  1.80220367e-03],
[-1.35199744e-04, -1.34505970e-05,  1.22200026e-04,
-2.52039718e-05,  2.56652171e-05,  1.29714019e-05,
-1.92908806e-04,  1.85623409e-04, -4.91230334e-05,
1.69157413e-04,  1.04811491e-04,  2.24795908e-04,
5.49990619e-05,  5.01897963e-04,  3.74856789e-04,
-8.63113243e-06, -1.51400879e-04],
[-2.19000672e-03, -3.61651337e-04,  2.34074752e-03,
-5.18390335e-04,  1.19266940e-03,  9.11071171e-05,
-1.57151126e-03,  2.91479004e-03, -5.60316891e-04,
3.03445975e-03,  1.67653296e-03,  3.46679680e-03,
5.01897963e-04,  8.74709395e-03,  6.37760454e-03,
1.74349274e-03, -1.26348683e-03],
[ 6.80914121e-03,  6.56663689e-04,  1.39606650e-03,
2.41334278e-03,  5.84713900e-04,  1.68043070e-03,
3.25855486e-03,  3.33754926e-03,  2.28627416e-03,
3.09323955e-03,  2.59598793e-03,  8.38606091e-03,
3.74856789e-04,  6.37760454e-03,  1.55034038e-02,
5.20888498e-03,  4.17926704e-03],
[ 8.41701096e-03,  1.55184724e-03,  5.31970717e-03,
5.14806453e-03,  9.29778319e-03,  2.42628131e-03,
1.06990068e-02,  2.61280946e-03,  2.40776877e-03,
5.27456576e-03,  1.01532651e-03,  3.65575720e-03,
-8.63113243e-06,  1.74349274e-03,  5.20888498e-03,
1.35424275e-02,  5.49882762e-03],
[ 1.07504229e-02,  1.06451898e-03,  8.86435533e-04,
3.76769305e-03,  2.84300900e-03,  1.90713194e-03,
6.05007409e-03,  1.16461350e-03,  3.15907037e-03,
2.11317800e-03,  2.60716967e-04,  1.80220367e-03,
-1.51400879e-04, -1.26348683e-03,  4.17926704e-03,
5.49882762e-03,  7.08734925e-03]]

covariance_matrix = pandas.DataFrame(covariance_matrix, \
index = tickers, \
columns = tickers)
theta = 0.05
target_te = 0.0025

w_old_prime = w_old.copy()

# calculate the difference from the target portfolio
# and use this difference to estimate tracking error
# and marginal contribution to tracking error ("mcte")
z = (w_old_prime - w_target)
te = numpy.sqrt(z.dot(covariance_matrix).dot(z))
mcte = (z.dot(covariance_matrix)) / te

while True:
    w_diff_prime = w_target - w_old_prime

    trading_model = LpProblem("Trade Minimization", LpMinimize)

    t_vars = []
    psi_vars = []
    phi_vars = []
    y_vars = []

    A = 2

    for i in range(n):
        t = LpVariable("t_" + str(i), -w_old_prime[i], 1 - w_old_prime[i])
        t_vars.append(t)

        psi = LpVariable("psi_" + str(i), None, None)
        psi_vars.append(psi)

        phi = LpVariable("phi_" + str(i), None, None)
        phi_vars.append(phi)

        y = LpVariable("y_" + str(i), 0, 1, LpInteger) #set y in {0, 1}
        y_vars.append(y)

    # add our objective: the number of trades (y) plus the distance to target (phi)
    trading_model += lpSum(phi_vars) + lpSum(y_vars), "Objective"

    for i in range(n):
        # psi_i >= |t_i|, and y_i is forced to 1 whenever psi_i > 0
        trading_model += psi_vars[i] >= -t_vars[i]
        trading_model += psi_vars[i] >= t_vars[i]
        trading_model += psi_vars[i] <= A * y_vars[i]

    for i in range(n):
        # phi_i >= |w_diff'_i - t_i|, the remaining distance to the target weight
        trading_model += phi_vars[i] >= -(w_diff_prime[i] - t_vars[i])
        trading_model += phi_vars[i] >= (w_diff_prime[i] - t_vars[i])

    # Make sure our trades sum to zero
    trading_model += (lpSum(t_vars) == 0.)

    # Set tracking error limit
    #    delta(te) = mcte * delta(z)
    #              = mcte * ((w_old_prime + t - w_target) -
    #                        (w_old_prime - w_target))
    #              = mcte * t
    #    te + delta(te) <= target_te
    #    ==> delta(te) <= target_te - te
    trading_model += (lpSum([mcte.iloc[i] * t_vars[i] for i in range(n)]) \
                      <= (target_te - te))

    # Keep the resulting portfolio within the turnover threshold
    trading_model += (lpSum(phi_vars) / 2. <= theta)

    trading_model.solve()

    # update our w_old' with the current trades
    results = pandas.Series([t_i.value() for t_i in t_vars], index = tickers)
    w_old_prime = (w_old_prime + results)

    z = (w_old_prime - w_target)
    te = numpy.sqrt(z.dot(covariance_matrix).dot(z))
    mcte = (z.dot(covariance_matrix)) / te

    if te < target_te:
        break

print("Tracking error: " + str(te))

# since w_old' was updated iteratively, the trades found in the final pass are
# relative to the prior w_old'.  Thus, we need to calculate the total trades
# relative to the original starting portfolio.
results = (w_old_prime - w_old)

print("Turnover distance: " + str((w_target - (w_old + results)).abs().sum() / 2.))
Tracking error: 0.0016583319880074485
Turnover distance: 0.01624453350000001

#### 6.2 Time Constraints

For time feasibility, heuristic approaches can be employed in effort to rapidly converge upon a “close enough” solution. For example, Rong and Liu (2011) discuss “build-up” and “pare-down” heuristics.

The basic algorithm of “pare-down” is:

1. Begin with a trade list containing all tradable securities.
2. Solve the optimization problem in its unconstrained format, allowing trades to occur only for securities in the trade list.
3. If the solution meets the necessary constraints (e.g. maximum number of trades, trade size thresholds, tracking error constraints, etc), terminate the optimization.
4. Eliminate from the trade list a subset of securities based upon some measure of trade utility (e.g. violation of constraints, contribution to tracking error, etc).
5. Go to step 2.

The basic algorithm of “build-up” is:

1. Begin with an empty trade list.
2. Add a subset of securities to the trade list based upon some measure of trade utility.
3. Solve the optimization problem in its unconstrained format, allowing trades to occur only for securities in the trade list.
4. If the solution meets the necessary constraints (e.g. maximum number of trades, trade size thresholds, tracking error constraints, etc), terminate the optimization.
5. Go to step 2.

These two heuristics can even be combined in an integrated fashion. For example, a binary search approach can be employed, where the initial trade list is filled with 50% of the tradable securities. Depending upon the success or failure of the resulting optimization, a pare-down or build-up approach can be taken to either prune or expand the trade list.
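A rough sketch of such an integrated heuristic is below. Here `trade_utility`, `solve_restricted`, and `is_feasible` are hypothetical helper functions (a per-security utility score, a version of the optimization restricted to a given trade list, and a check of the paring constraints, respectively), and we assume feasibility is monotone in the size of the trade list:

def heuristic_trade_list(candidates, trade_utility, solve_restricted, is_feasible):
    # rank candidates so the most useful trades enter the list first
    ranked = sorted(candidates, key = trade_utility, reverse = True)

    # binary search on the size of the trade list: if the restricted problem
    # is feasible, pare down (try fewer names); otherwise, build up
    lo, hi = 1, len(ranked)
    while lo < hi:
        mid = (lo + hi) // 2
        if is_feasible(solve_restricted(ranked[:mid])):
            hi = mid
        else:
            lo = mid + 1

    # solve one final time with the smallest feasible trade list found
    return solve_restricted(ranked[:lo])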

## 7. Conclusion

In this research note we have explored the practice of trade optimization, which seeks to implement portfolio changes in as few trades as possible.  While a rarely discussed detail of portfolio management, trade optimization has the potential to eliminate unnecessary trading costs – both explicit and implicit – that can be a drag on realized investor performance.

Constraints within the practice of trade optimization typically fall into one of three categories: asset paring, trade paring, and level paring.  Asset paring restricts the number of securities the portfolio can hold, trade paring restricts the number of trades that can be made, and level paring restricts the size of positions and trades.  Introducing these constraints often turns an optimization into a discrete problem, making it much more difficult to solve for traditional convex optimizations.

With this in mind, we introduced mixed-integer linear programming (“MILP”) and explored a few techniques that can be utilized to transform non-linear functions into a set of linear constraints.  We then combined these transformations to develop a simple trade optimization framework that can be solved using MILP optimizers.

To offer numerical support in the discussion, we created a simple momentum-based sector rotation strategy.  We found that naive turnover filtering alone reduced the number of trades executed by roughly 40%, while explicit trade optimization reduced the number of trades by more than 70%.

Finally, we explored how our simplified framework could be further extended to account for both non-linear functional constraints (e.g. tracking error) and operational constraints (e.g. managing execution time).

The paring constraints introduced by trade optimization often lead to problems that are difficult to solve.  However, when we consider that the cost of trading is a very real drag on the results realized by investors, we believe that the solutions are worth pursuing.

# Summary

• We compare and contrast different approaches to risk managing equity exposure; including fixed income, risk parity, managed futures, tactical equity, and options-based strategies; over the last 20 years.
• We find that all eight strategies studied successfully reduce risk, while six of the eight strategies improve risk-adjusted returns. The lone exceptions are two options-based strategies that involve being long volatility and therefore are on the wrong side of the volatility risk premium.
• Over time, performance of the risk management strategies varies significantly both relative to the S&P 500 and compared to the other strategies. Generally, risk-managed strategies tend to behave like insurance, underperforming on the upside and outperforming on the downside.
• Diversifying your diversifiers by blending a number of complementary risk-managed strategies together can be a powerful method of improving long-term outcomes. The diversified approach to risk management shows promise in terms of reducing sequence risk for those investors nearing or in retirement.

I was perusing Twitter the other day and came across this tweet from Jim O’Shaughnessy, legendary investor and author of What Works on Wall Street.

As always, Jim’s wisdom is invaluable.  But what does this idea mean for Newfound as a firm?  Our first focus is on managing risk.  As a result, one of the questions that we MUST know the answer to is how to get more investors comfortable with sticking to a risk management plan through a full market cycle.

Unfortunately, performance chasing seems to us to be just as prevalent in risk management as it is in investing as a whole.  The benefits of giving up some upside participation in exchange for downside protection seemed like a no brainer in March of 2009.  After 8+ years of strong equity market returns (although it hasn’t always been as smooth of a ride as the market commentators may make you think), the juice may not quite seem worth the squeeze.

While we certainly don’t profess to know the answer to our burning question from above, we do think the first step towards finding one is a thorough understanding of the risk management landscape.  In that vein, this week we will update our State of Risk Management presentation from early 2016.

We examine eight strategies that roughly fit into four categories:

• Diversification Strategies: strategic 60/40 stock/bond mix1 and risk parity2
• Options Strategies: equity collar3, protective put4, and put-write5
• Equity Strategies: long-only defensive equity that blends a minimum volatility strategy6, a quality strategy7, and a dividend growth strategy8 in equal weights
• Trend-Following Strategies: managed futures9 and tactical equity10

# The Historical Record

We find that over the period studied (December 1997 to July 2018) six of the eight strategies outperform the S&P 500 on a risk-adjusted basis both when we define risk as volatility and when we define risk as maximum drawdown.  The two exceptions are the equity collar strategy and the protective put strategy.  Both of these strategies are net long options and therefore are forced to pay the volatility risk premium.  This return drag more than offsets the reduction of losses on the downside.

Data Source: Bloomberg, CSI. Calculations by Newfound Research. Past performance does not guarantee future results. Volatility is a statistical measure of the amount of variation around the average returns for a security or strategy. All returns are hypothetical index returns. You cannot invest directly in an index and unmanaged index returns do not reflect any fees, expenses, sales charges, or trading expenses. Index returns include the reinvestment of dividends. No index is meant to measure any strategy that is or ever has been managed by Newfound Research. The Tactical Equity strategy was constructed by Newfound in August 2018 for purposes of this analysis and is therefore entirely backtested and not an investment strategy that is currently managed and offered by Newfound.

Data Source: Bloomberg, CSI. Calculations by Newfound Research. Past performance does not guarantee future results. Drawdown is a statistical measure of the losses experienced by a security or strategy relative to its historical maximum. The maximum drawdown is the largest drawdown over the security or strategy’s history. All returns are hypothetical index returns. You cannot invest directly in an index and unmanaged index returns do not reflect any fees, expenses, sales charges, or trading expenses. Index returns include the reinvestment of dividends. No index is meant to measure any strategy that is or ever has been managed by Newfound Research. The Tactical Equity strategy was constructed by Newfound in August 2018 for purposes of this analysis and is therefore entirely backtested and not an investment strategy that is currently managed and offered by Newfound.

# Not Always a Smooth Ride

While it would be nice if this outperformance accrued steadily over time, reality is quite a bit messier.  All eight strategies exhibit significant variation in their rolling one-year returns vs. the S&P 500.  Interestingly, the two strategies with the widest ranges of historical one-year performance vs. the S&P 500 are also the two strategies that have delivered the most downside protection (as measured by maximum drawdown).  Yet another reminder that there is no free lunch in investing.  The more aggressively you wish to reduce downside capture, the more short-term tracking error you must endure.

Relative 1-Year Performance vs. S&P 500 (December 1997 to July 2018)

Data Source: Bloomberg, CSI. Calculations by Newfound Research. Past performance does not guarantee future results. All returns are hypothetical index returns. You cannot invest directly in an index and unmanaged index returns do not reflect any fees, expenses, sales charges, or trading expenses. Index returns include the reinvestment of dividends. No index is meant to measure any strategy that is or ever has been managed by Newfound Research. The Tactical Equity strategy was constructed by Newfound in August 2018 for purposes of this analysis and is therefore entirely backtested and not an investment strategy that is currently managed and offered by Newfound.

# Thinking of Risk Management as (Uncertain) Portfolio Insurance

When we examine this performance dispersion across different market environments, we find a totally intuitive result: risk management strategies generally underperform the S&P 500 when stocks advance and outperform the S&P 500 when stocks decline.  The hit rate for the risk management strategies relative to the S&P 500 is 81.2% in the four years that the S&P 500 was down (2000, 2001, 2002, and 2008) and 19.8% in the seventeen years that the S&P was up.

In this way, risk management strategies are akin to insurance.  A premium, in the form of upside capture ratios less than 100%, is paid in exchange for a (hopeful) reduction in downside capture.

With this perspective, it’s totally unsurprising that these strategies have underperformed since the market bottomed during the global financial crisis.  Seven of the eight strategies (with the long-only defensive equity strategy being the lone exception) underperformed the S&P 500 on an absolute return basis, and six of the eight strategies (with defensive equity and the 60/40 stock/bond blend being the exceptions) underperformed on a risk-adjusted basis.

Annual Out/Underperformance Relative to S&P 500 (December 1997 to July 2018)

Data Source: Bloomberg, CSI. Calculations by Newfound Research. Past performance does not guarantee future results. All returns are hypothetical index returns. You cannot invest directly in an index and unmanaged index returns do not reflect any fees, expenses, sales charges, or trading expenses. Index returns include the reinvestment of dividends. No index is meant to measure any strategy that is or ever has been managed by Newfound Research. The Tactical Equity strategy was constructed by Newfound in August 2018 for purposes of this analysis and is therefore entirely backtested and not an investment strategy that is currently managed and offered by Newfound.

The good news is that there is significant year-to-year variation in the performance across strategies, as evidenced by the periodic table of returns above, suggesting there are diversification benefits to be harvested by allocating to multiple risk management strategies.  The average annual performance differential between the best performing strategy and the worst performing strategy is 20.0%.  This spread was less than 10% in only 3 of the 21 years studied.

We see the power of diversifying your diversifiers when we test simple equal-weight blends of the risk management strategies.  Both blends have higher Sharpe ratios than seven of the eight individual strategies and higher excess return to drawdown ratios than six of the eight individual strategies.

This is a very powerful result, indicating that naïve diversification is nearly as good as being able to pick the best individual strategies with perfect foresight.

Data Source: Bloomberg, CSI. Calculations by Newfound Research. Past performance does not guarantee future results. All returns are hypothetical index returns. You cannot invest directly in an index and unmanaged index returns do not reflect any fees, expenses, sales charges, or trading expenses. Index returns include the reinvestment of dividends. No index is meant to measure any strategy that is or ever has been managed by Newfound Research. The Tactical Equity strategy was constructed by Newfound in August 2018 for purposes of this analysis and is therefore entirely backtested and not an investment strategy that is currently managed and offered by Newfound.

# Why Bother with Risk Management in the First Place?

As we’ve written about previously, we believe that for most investors investing “failure” means not meeting one’s financial objectives.  In the portfolio management context, failure comes in two flavors.  “Slow” failure results from taking too little risk, while “fast” failure results from taking too much risk.

In his book, Red-Blooded Risk, Aaron Brown summed up this idea nicely: “Taking less risk than is optimal is not safer; it just locks in a worse outcome.  Taking more risk than is optimal also results in a worse outcome, and often leads to complete disaster.”

Risk management is not synonymous with risk reduction.  It is about taking the right amount of risk, not too much or too little.

Having a pre-defined risk management plan in place before a crisis can help investors avoid panicked decisions that can turn a bad, but survivable event into catastrophe (e.g. the retiree that sells all of his equity exposure in early 2009 and then stays out of the market for the next five years).

It’s also important to remember that individuals are not institutions.  They have a finite investment horizon.  Those that are at or near retirement are exposed to sequence risk, the risk of experiencing a bad investment outcome at the wrong time.

We can explore sequence risk using Monte Carlo simulation.  We start by assessing the S&P 500 with no risk management overlay and assume a 30-year retirement horizon.  The simulation process works as follows:

1. Randomly choose a sequence of 30 annual returns from the set of actual annual returns over the period we studied (December 1998 to July 2018).
2. For the sequence of returns chosen, calculate the perfect withdrawal rate (PWR). Clare et al. (2016) define the PWR as “the withdrawal rate that effectively exhausts wealth at death (or at the end of a fixed, known period) if one had perfect foresight of all returns over the period.”11 A sketch of this calculation is provided below.
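A minimal sketch of this simulation follows. The `annual_returns` array is a placeholder for the actual annual return history of the strategy being tested, and the perfect withdrawal rate is computed assuming a constant withdrawal (as a fraction of starting wealth) taken at the start of each year; the precise withdrawal convention in Clare et al. may differ.

import numpy

def perfect_withdrawal_rate(returns):
    # with perfect foresight, the constant withdrawal that exactly exhausts
    # wealth at the end of the horizon is
    #     w = prod(1 + r_i) / sum_k prod_{i >= k} (1 + r_i)
    growth = numpy.cumprod((1. + returns)[::-1])[::-1]  # prod_{i >= k} (1 + r_i)
    return growth[0] / growth.sum()

# placeholder: substitute the actual annual returns over the studied period
annual_returns = numpy.array([0.10, -0.05, 0.22, 0.07, -0.15, 0.12, 0.03])

n_sims, horizon = 10000, 30
pwrs = numpy.array([
    perfect_withdrawal_rate(numpy.random.choice(annual_returns, size = horizon, replace = True))
    for _ in range(n_sims)
])

print("Average PWR: " + str(pwrs.mean()))
print("PWR percentile range (2.5% to 97.5%): " + str(numpy.percentile(pwrs, [2.5, 97.5])))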

We plot the distribution of PWRs for the S&P 500 below.  While the average PWR is a respectable 5.7%, the range of outcomes is very wide (0.6% to 14.7%).  The 95 percent confidence interval around the mean is 2.0% to 10.3%.  This is sequence risk.  Unfortunately, investors do not have the luxury of experiencing the average; they only see one draw.  Get lucky and you may get to fund a better lifestyle than you could have imagined with little to no financial stress.  Get unlucky and you may have trouble paying the bills and will be sweating every market move.

Calculations by Newfound Research. Past performance does not guarantee future results. All returns are hypothetical index returns. You cannot invest directly in an index and unmanaged index returns do not reflect any fees, expenses, sales charges, or trading expenses. Index returns include the reinvestment of dividends.

Next, we repeat the simulation, replacing the pure S&P 500 exposure with the equal-weight blend of risk management strategies excluding the equity collar and the protective put.  We see quite a different result.  The average PWR is similar (6.2% vs. 5.7%), but the range of outcomes is much smaller (95% confidence interval from 4.4% to 8.1%).  At its very core, this is what implementing a risk management plan is all about: reducing the role of investment luck in financial planning.  We give up some of the best outcomes (in the right tail of the S&P 500 distribution) in exchange for reducing the probability of the very worst outcomes (in the left tail).

Calculations by Newfound Research. Past performance does not guarantee future results. All returns are hypothetical index returns. You cannot invest directly in an index and unmanaged index returns do not reflect any fees, expenses, sales charges, or trading expenses. Index returns include the reinvestment of dividends.

# Conclusion

There is no holy grail when it comes to risk management.  While a number of approaches have historically delivered strong results, each comes with its own pros and cons.

In an uncertain world where we cannot predict exactly what the next crisis will look like, diversifying your diversifiers by combining a number of complementary risk-managed strategies may be a prudent course of action. We believe that this type of balanced approach has the potential to deliver compelling results over a full market cycle while managing the idiosyncratic risk of any one manager or strategy.

Diversification can also help to increase the odds of an investor sticking with their risk management plan as the short-term performance lows won’t be quite as low as they would be with a single strategy (conversely, the highs won’t be as high either).

That being said, having the discipline to stick with a risk management plan also requires being realistic.  While it would be great to build a strategy with 100% upside and 0% downside, such an outcome is unrealistic.  Risk-managed strategies tend to behave a lot like uncertain insurance for the portfolio.  A premium, in the form of upside capture ratios less than 100%, is paid in exchange for a (hopeful) reduction in downside capture.  This upside underperformance is a feature, not a bug.  Trying too hard to correct it may lead to overfit strategies that fail to deliver adequate protection on the downside.