I recently had the privilege of serving as a discussant at the Democratize Quant 2023 conference, reviewing Research Affiliates's new paper, Reimagining Index Funds. The post below is a summary of my presentation.
Introduction
In Reimagining Index Funds (Arnott, Brightman, Liu and Nguyen 2023), the authors propose a new methodology for forming an index fund, designed to avoid the “buy high, sell low” behavior that can emerge in traditional index funds while retaining the depth of liquidity and capacity. Specifically, they propose selecting securities based upon the underlying “economic footprint” of the business.
By using fundamental measures of size, the authors argue that the index will not be subject to sentiment-driven turnover. In other words, it will avoid those additions and deletions that have primarily been driven by changes in valuation rather than changes in fundamentals. Furthermore, the index will not arbitrarily avoid securities due to committee bias. The authors estimate that total turnover is reduced by 20%.
An added benefit of this approach, the authors further argue, is that it sidesteps index trading costs, which are actually quite large. While well-telegraphed additions and deletions allow index fund managers to execute market-on-close orders and keep their tracking error low, they also allow other market participants to front-run these changes. The authors’ research suggests that these hidden costs could be upwards of 20 basis points per year, creating a meaningful source of negative alpha.
Methodology & Results
The proposed index construction methodology is fairly simple:
- Measure each company’s “economic footprint” as an equal-weighted average of four fundamental measures of size.
- Select the largest companies by economic footprint.
- Re-weight the selected companies by their market capitalization.
Footnote #3 in the paper further expands upon the four fundamental measures: book value, sales, cash flow, and dividends.
The results of this rather simple approach are impressive:
- Tracking error to the S&P 500 comparable to that of the Russell 1000.
- Lower turnover than the S&P 500 or the Russell 1000.
- Statistically meaningful Fama-French-Carhart 4-Factor alpha.
But What Is It?
One of the most curious results of the paper is that despite having a stated value tilt, the realized value factor loading in the Fama-French-Carhart regression is almost non-existent. This might suggest that the alpha emerges from avoiding the telegraphed front-running of index additions and deletions.
However, many equity quants may notice familiar patterns in the cumulative alpha streams of the strategies. Specifically, the early years look similar to the results we would expect from a value tilt, whereas the latter years look similar to the results we might expect from a growth tilt.
With far less rigor, we can create a strategy that holds the Russell 1000 Value for the first half of the time period and switches to the Russell 1000 Growth for the second half. Plotting that strategy versus the Russell 1000 results in a very familiar return pattern. Furthermore, such a strategy would load positively on the value factor for the first half of its life and negatively for the second half, leading a full-period factor regression to conclude zero exposure.
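As a rough illustration of that cancellation, below is a minimal sketch that splices the two indices at the midpoint of the sample and regresses the active return on the value factor. The file names, column labels (`R1000`, `R1000_Value`, `R1000_Growth`, `HML`), and monthly frequency are assumptions for illustration only, not the data used in my presentation.

```python
# Sketch: splice Russell 1000 Value (first half) and Growth (second half),
# then run a full-period value-factor regression on the spliced active returns.
# Assumes monthly return series and a value factor have been loaded; column
# names and file names are hypothetical.
import pandas as pd
import statsmodels.api as sm

returns = pd.read_csv("index_returns.csv", index_col=0, parse_dates=True)  # hypothetical file
factors = pd.read_csv("ff_factors.csv", index_col=0, parse_dates=True)     # hypothetical file

midpoint = returns.index[len(returns) // 2]
spliced = pd.concat([
    returns.loc[:midpoint, "R1000_Value"],
    returns.loc[midpoint:, "R1000_Growth"].iloc[1:],  # drop the duplicated midpoint
]).rename("spliced")

# Active return versus the Russell 1000 benchmark
active = spliced - returns["R1000"]

# Full-period regression on the value factor (HML): the positive loading from
# the first half and the negative loading from the second half largely offset.
X = sm.add_constant(factors.loc[active.index, ["HML"]])
print(sm.OLS(active, X).fit().params)
```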
But how could such a dynamic emerge from such a simple strategy?
“Economic Footprint” is a Multi-Factor Tilt
The Economic Footprint variable is described as an equal-weighted metric of four fundamental measures: book value, sales, cash flow, and dividends, each measured as a percentage of the aggregate across all publicly-traded U.S. companies. With a little math (inspired by this presentation from Cliff Asness), we will show that Economic Footprint is actually a multi-factor screen on both Value and Market Capitalization.
Define the weight of security $i$ in the market-capitalization weighted index as its market capitalization divided by the total market capitalization of the universe:

$$w_i = \frac{MC_i}{\sum_j MC_j}$$

Economic footprint, as described above, is the equal-weighted average of the four fundamental measures, each expressed as a share of the universe aggregate:

$$EF_i = \frac{1}{4}\sum_{m \in \{B,\,S,\,CF,\,D\}} \frac{m_i}{\sum_j m_j}$$

where $m$ ranges over book value ($B$), sales ($S$), cash flow ($CF$), and dividends ($D$), and $j$ indexes every company in the universe. If we divide both sides of the Economic Footprint equation by the weight of the security, we find:

$$\frac{EF_i}{w_i} = \frac{1}{4}\sum_{m} \frac{m_i \,/\, \sum_j m_j}{MC_i \,/\, \sum_j MC_j}$$

Some subtle re-arrangements leave us with:

$$EF_i = w_i \times \underbrace{\frac{1}{4}\sum_{m} \frac{m_i \,/\, MC_i}{\sum_j m_j \,/\, \sum_j MC_j}}_{\text{Value Tilt}_i}$$

The value tilt effectively looks at each security’s value metric (e.g. book-to-price) relative to the aggregate market’s value metric. When the security’s metric is cheaper than the market’s, the value tilt will be above 1; when it is more expensive, the value tilt will be less than 1. This value tilt then effectively scales the market capitalization weight.
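To make the decomposition concrete, here is a minimal Python sketch of the identity above. The DataFrame `df` and its column names (`book_value`, `sales`, `cash_flow`, `dividends`, `market_cap`) are assumptions for illustration, not the authors’ data or code.

```python
# Sketch: decompose economic footprint into (market-cap weight) x (value tilt).
# `df` is assumed to hold one row per company with its fundamentals and
# market capitalization; column names are illustrative only.
import pandas as pd

def economic_footprint(df: pd.DataFrame) -> pd.DataFrame:
    fundamentals = ["book_value", "sales", "cash_flow", "dividends"]
    out = pd.DataFrame(index=df.index)

    # Market-cap weight: each company's share of total market capitalization.
    out["cap_weight"] = df["market_cap"] / df["market_cap"].sum()

    # Economic footprint: equal-weighted average of each fundamental's share
    # of the universe total.
    shares = df[fundamentals].div(df[fundamentals].sum(axis=0), axis=1)
    out["footprint"] = shares.mean(axis=1)

    # Value tilt: footprint divided by cap weight, i.e. the average of the
    # company's value ratios (e.g. book-to-price) relative to the market's.
    out["value_tilt"] = out["footprint"] / out["cap_weight"]
    return out
```

By construction, the `footprint` column equals `cap_weight` times `value_tilt` for every row.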
Importantly, economic footprint does not break the link to market capitalization.
Breaking economic footprint into two constituent parts allows us to get a visual intuition as to how the strategy operates.
In the graphs below, I take the largest 1000 U.S. companies by market capitalization and plot them based upon their market capitalization weight (x-axis) and their value tilt (y-axis).
(To be clear, I have no doubt that my value tilt scores are precisely wrong if compared against Research Affiliates’s, but I have no doubt they are directionally correct. Furthermore, the precision does not change the logic of the forthcoming argument.)
If we were constructing a capitalization weighted index of the top 500 companies, the dots would be bisected vertically.
As a multi-factor tilt, however, economic footprint leads to a diagonal bisection.
The difference between these two graphs tells us what we are buying and what we are selling in the strategy relative to the naive capitalization-weighted benchmark.
We can clearly see that the strategy sells larg(er) glamour stocks and buys small(er) value stocks. In fact, by definition, all of the stocks bought will be both (1) smaller and (2) “more value” than any of the stocks sold.
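Continuing the earlier sketch, we can check this directly by comparing a top-500 selection by market capitalization against a top-500 selection by economic footprint and inspecting the symmetric difference. The cutoff of 500 names and the `economic_footprint` helper are carried over from the previous sketch and remain illustrative assumptions.

```python
# Sketch: compare selection by market cap against selection by economic
# footprint, using the `economic_footprint` helper sketched above.
N = 500
scores = economic_footprint(df)  # `df` as before: fundamentals + market_cap

by_cap = set(scores.nlargest(N, "cap_weight").index)
by_footprint = set(scores.nlargest(N, "footprint").index)

bought = scores.loc[list(by_footprint - by_cap)]  # added by the footprint screen
sold = scores.loc[list(by_cap - by_footprint)]    # screened out of the benchmark

# Barring exact ties, every name bought is smaller and has a higher value tilt
# than every name sold.
assert bought["cap_weight"].max() < sold["cap_weight"].min()
assert bought["value_tilt"].min() > sold["value_tilt"].max()
```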
This is, definitionally, a size-value tilt. Why, then, are the factor loadings for size and value so small?
The Crucial Third Step
Recall the third step of the investment methodology: after selecting the companies by economic footprint, they are re-weighted by their market capitalization. Now consider an important fact we stated above: every company we screen out is, by definition, larger than any company we buy.
That means, in aggregate, the cohort we screen out will have a larger aggregate market cap than the cohort we buy.
This further means that the cohort we don’t screen out will, definitionally, make up a proportionally larger share of the new index.
For example, at the end of April 2023, I estimate that screening on economic footprint would lead to the sale of a cohort of securities with an aggregate market capitalization of $4 trillion and the purchase of a cohort of securities with an aggregate market capitalization of $1.3 trillion.
The cohort that remains – which was $39.5 trillion in aggregate market capitalization – would grow proportionally from being 91% of the underlying benchmark to 97% of our new index. Mega-cap growth names like Amazon, Google, Microsoft, and Apple would actually receive larger weights under this methodology, increasing their collective weight by 120 basis points.
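The arithmetic is simple enough to verify directly; the sketch below just re-uses the aggregate market capitalizations estimated above (all figures are my own estimates, in trillions of dollars).

```python
# Sketch: how the re-weighting step inflates the surviving cohort's weight,
# using the April 2023 estimates quoted above (figures in $ trillions).
remaining = 39.5  # aggregate cap of names held in both the benchmark and the new index
sold = 4.0        # aggregate cap of names screened out of the benchmark
bought = 1.3      # aggregate cap of names added by the economic footprint screen

weight_in_benchmark = remaining / (remaining + sold)    # ~0.91
weight_in_new_index = remaining / (remaining + bought)  # ~0.97

print(f"{weight_in_benchmark:.1%} -> {weight_in_new_index:.1%}")
```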
Just as importantly, this overweight to mega-cap tech would be a persistent artifact throughout the 2010s, suggesting why the relative returns may have looked like a growth tilt.
Why Value in 1999?
How, then, does the strategy create value-like results in the dot-com bubble? The answer appears to lie in two important variables:
- What percentage of the capitalization-weighted index is being replaced?
- How strongly do the remaining securities lean into a value tilt?
Consider the scatter graph below, which estimates how the strategy may have looked in 1999. We can see that 40% of the capitalization-weighted benchmark is being screened out, and 64% of the securities that remain have a positive value tilt. (Note that these figures are based upon numerical count; it would likely be more informative to measure these figures weighted by market capitalization.)
By comparison, in 2023 only 20% of the underlying benchmark names are replaced and of the securities that remain, just 30% have a tilt towards value. These graphics suggest that while a screen on economic footprint creates a definitive size/value tilt, the re-weighting based upon relative market capitalization can lead to dynamic style drift over time.
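For completeness, these two count-based diagnostics are straightforward to compute from the same top-500 selections sketched earlier; the figures above come from my own estimates, so treat this purely as an illustration.

```python
# Sketch: the two (count-based) diagnostics discussed above, computed from the
# `by_cap`, `by_footprint`, and `scores` objects in the earlier sketch.
survivors = by_cap & by_footprint

pct_replaced = 1 - len(survivors) / N
# A value tilt above 1 means the name is cheaper than the aggregate market.
pct_value_tilted = (scores.loc[list(survivors), "value_tilt"] > 1).mean()

print(f"Benchmark names replaced:          {pct_replaced:.0%}")
print(f"Survivors tilted toward value:     {pct_value_tilted:.0%}")
```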
Conclusion
The authors propose a new approach to index construction that aims to maintain a low tracking error to traditional capitalization-weighted benchmarks, reduce turnover costs, and avoid “buy high, sell low” behavior. By selecting securities based upon the economic footprint of their respective businesses, the authors find that they are able to produce meaningful Fama-French-Carhart four-factor alpha while reducing portfolio turnover by 20%.
In this post, I find that economic footprint, as defined by the authors, is actually a multi-factor tilt based on both value and market capitalization. By screening for companies with a high economic footprint, the proposed method introduces a value and size tilt relative to the underlying capitalization-weighted benchmark.
However, the third step of the proposed process, which re-weights the selected securities based upon their relative market capitalization, will always increase the weights of the benchmark securities that were not screened out. This step creates the potential for meaningful style drift within the strategy over time.
I would argue the reason the factor regression exhibited little-to-no loading on value is that the strategy exhibited a positive value tilt over the first half of its lifetime and a negative value tilt over the second half, effectively cancelling out when evaluated over the full period. The alpha that emerges, then, may actually be style timing alpha.
While the authors argue that their construction methodology should lead to the avoidance of “buy high, sell low” behavior, I would argue that the third step of the investment process has the potential to lead to just that (or, at the very least, buy high). We can clearly see that in certain environments, portfolio construction choices can actually swamp intended factor bets.
Whether this methodology actually provides a useful form of style timing, or whether it is an unintended bet in the process that led to a fortunate, positive ex-post result, is an exercise left to other researchers.