Today, August 28th, 2023, my company Newfound Research turns 15.  It feels kind of absurd saying that.  I know I’ve told this story before, but I never actually expected this company to turn into anything.  I started the company while I was still in undergrad and I named it Newfound Research after a lake my family used to visit in New Hampshire.  I fully expected the company to be shut down within a year and just go on to a career on Wall Street.

But here we are, 15 years later.  I’m not sure why, but this milestone feels larger than any recent birthday I can remember.  I’m so incredibly grateful for what this company has given me.  I’m grateful to my business partner, Tom.  I’m grateful to employees – both past and present – who dedicated part of their lives and careers to work here.  I’m grateful to our clients who supported this business.  I’m grateful for all the friends in the industry that I’ve made.  And I’m grateful to people like you who have given me a bit of a platform to explore the ideas I’m passionate about.

Coming up on this anniversary, I reflected quite a bit on my career.  And one of the things I thought about was all the lessons I’ve learned over the years.  And I thought that a fun way to celebrate would be to take the time and write down some of those ideas and lessons that have come to influence my thinking.

So, without further ado, here are 15 lessons, ideas, and frameworks from 15 years.

1.     Risk cannot be destroyed, only transformed.

For graduate school, I pursued my MS in Computational Finance at Carnegie Mellon University.  This financial engineering program is a cross-disciplinary collaboration between the finance, mathematics, statistics, and computer-science departments.

In practice, it was a study on the theoretical and practical considerations of pricing financial derivatives.

I don’t recall quite when it struck me, but at some point I recognized a broader pattern at play in every assignment.  The instruments we were pricing were always about the transference of risk in some capacity.  Our goal was to identify that risk, figure out how to isolate and extract it, package it into the appropriate product type, and then price it for sale.

Risk was driving the entire equation.  Pricing was all about understanding the distribution of potential payoffs and trying to identify “fair compensation” for the variety of risks and assumptions we were making.

For every buyer there is a seller, and vice versa, and, at the end of the day, those who did not want to hold a risk had to compensate those who were willing to bear it.

Ultimately, when you build a portfolio of financial assets, or even strategies, you’re expressing a view as to the risks you’re willing to bear.

I’ve come to visualize portfolio risk like a ball of play-doh.  As you diversify your portfolio, the play-doh gets smeared over risk space.  For example, if you move from an all-equity portfolio to an equity/bond portfolio, you might reduce your exposure to economic contractions but increase your exposure to inflation risk.

The play-doh doesn’t disappear – it just gets spread out.  And in doing so, you become sensitive to more risks, but less sensitive to any single risk in particular.

I’ll add that the idea of the conservation of risk is by no means unique to me.  For example, Chris Cole has said, on a number of occasions, that “volatility is never created or destroyed, only transmuted.”  In 2008, James Saft wrote in Reuters that “economic volatility, a bit like energy, cannot be destroyed, only switched from one form to another.”  In 2007, Swasti Kartikaningtyas wrote on the role of central counterparties in Indonesian markets, stating, “a simple entropy law for finance is that risks cannot be destroyed, only shifted among parties.”  In his 2006 book “Precautionary Risk Management,” Mark Jablonowski stated, “risk cannot be destroyed, it can only be divided up.”  In 1999, Clarke and Varma, writing on long-run strategic planning for enterprises, said, “like matter, risk cannot be destroyed.”

My point here is only that this idea is not novel or unique to me by any means.  But that does not make it any less important.

2.     “No pain, no premium”

The philosophy of “no pain, no premium” is just a reminder that over the long run, we get paid to bear risk.  And, eventually, risk is likely going to manifest and create losses in our portfolio.  After all, if there were no risk of losses, then why would we expect to earn anything above the risk-free rate?

Modern finance is largely based upon the principle that the more risk you take, the higher your expected reward.  And most people seem to inherently understand this idea when they buy stocks and bonds.

But we can generally expect the same to be true for many investment strategies.  Value investors, for example, are arguably getting paid to bear increased bankruptcy risk in the stocks they buy.

What about strategies that are not necessarily risk-based?  What about strategies that have a more behavioral explanation, like momentum?

At a meta level, we need the strategy to be sufficiently difficult to stick with to prevent the premium from being arbed away.  If an investment approach is viewed as easy money, enough people will adopt it that the inflows will drive out the excess return.

So, almost by definition, certain strategies – especially low frequency ones – need to be difficult to stick with for any premium to exist.  The pain is, ultimately, what keeps the strategy from getting crowded and allows the premium to exist.

3.     Diversifying, cheap beta is worth just as much as equally diversifying, expensive alpha.

I’ll put this lesson in the category of, “things that are obvious but might need to be said anyway.”

Our industry is obsessed with finding alpha.  But, for the most part, a portfolio doesn’t actually care whether something is alpha or beta.

If you have a portfolio and can introduce a novel source of diversifying beta, it’s not only likely to be cheaper than any alpha you can access, but you can probably ascribe a much higher degree of confidence to its risk premium.

For example, if you invest only in stocks, finding a way to thoughtfully introduce bonds may do much, much more for your portfolio over the long run, with a higher degree of confidence, than trying to figure out a way to pick better stocks.

For most portfolios, beta will drive the majority of returns over the long run.  As such, it will be far more fruitful to first exhaust sources of beta before searching for novel sources of alpha.
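As a rough illustration (with entirely hypothetical return, volatility, and correlation assumptions), here is a sketch comparing the Sharpe ratio of a stock portfolio after adding either a cheap, lowly correlated beta or an identical return stream that is highly correlated to what we already own:

```python
import numpy as np

def portfolio_sharpe(mu, vol, corr, weights):
    """Sharpe ratio of a two-asset portfolio of excess returns (risk-free rate netted out)."""
    mu, vol, w = np.asarray(mu), np.asarray(vol), np.asarray(weights)
    cov = np.outer(vol, vol) * corr
    return (w @ mu) / np.sqrt(w @ cov @ w)

# Hypothetical excess returns (net of fees) and volatilities.
stocks = {"mu": 0.05, "vol": 0.16}
additions = {
    "cheap, diversifying beta": {"mu": 0.02, "vol": 0.07, "corr": 0.0},
    "expensive, correlated alpha": {"mu": 0.02, "vol": 0.07, "corr": 0.7},
}

print("stocks alone: Sharpe ≈", round(stocks["mu"] / stocks["vol"], 2))
for name, other in additions.items():
    corr = np.array([[1.0, other["corr"]], [other["corr"], 1.0]])
    sr = portfolio_sharpe([stocks["mu"], other["mu"]],
                          [stocks["vol"], other["vol"]], corr, [0.7, 0.3])
    print(f"70/30 stocks + {name}: Sharpe ≈ {sr:.2f}")
```

In this toy example, the two additions have identical stand-alone returns and volatilities; the uncorrelated one simply does meaningfully more for the portfolio’s Sharpe ratio, before fees even enter the picture.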

By the way, I’m pretty sure I stole this lesson title from someone, but I can’t find the original person who said it.  If it’s you, my apologies.

4.     Diversification has multiple forms.

In 2007, Meb Faber published his paper A Quantitative Approach to Tactical Asset Allocation where he explored the application of a 10-month moving average as a timing model on a variety of asset classes.

It will likely go down in history as one of the most well-timed papers in finance given the 2008 crisis that immediately followed and how well the simple 10-month moving average model would have done in protecting your capital through that event.  It’s likely the paper that launched one-thousand tactical asset allocation models.

In 2013, I wrote a blog post where I showed that the performance of this model was highly sensitive to the choice of rebalance date.  Meb had originally written the paper using an end-of-month rebalance schedule.  In theory, there was nothing stopping someone from running the same model and rebalancing on the 10th trading day of every month.  In the post, I showed the performance of the strategy when applied on every single possible trading day variation, from the 1st to the last trading day of each month.  The short-term dispersion between the strategies was astounding even though the long-run returns were statistically indistinguishable.  
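A minimal sketch of that experiment, using a random-walk price series as a stand-in for the actual asset-class data, might look something like this (the offsets play the role of “which trading day of the month do we rebalance on”):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical daily price series (a random-walk stand-in for a real asset class).
n_days = 252 * 20
returns = pd.Series(rng.normal(0.0003, 0.01, n_days))
prices = (1 + returns).cumprod()
sma = prices.rolling(200).mean()  # ~10-month simple moving average

def trend_strategy(offset, days_per_month=21):
    """Long when price > SMA at the decision date, else flat; decisions are made
    every `days_per_month` days, starting at a given day-of-month offset."""
    position = pd.Series(np.nan, index=prices.index)
    for d in range(200 + offset, n_days, days_per_month):
        position.iloc[d] = 1.0 if prices.iloc[d] > sma.iloc[d] else 0.0
    # Hold each decision until the next one; implement at the following day's close.
    position = position.ffill().shift(1).fillna(0.0)
    return (1 + position * returns).cumprod().iloc[-1]

terminal_wealth = {offset: trend_strategy(offset) for offset in range(21)}
print(pd.Series(terminal_wealth).describe())
```

Even on a random walk, where the signal has no real efficacy, terminal wealth varies across the 21 rebalance offsets purely because of when the decisions happen to be made.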

And my obsession with rebalance timing luck was born.

Shortly thereafter my good friend Adam Butler pointed out to me that the choice of a 10-month moving average was just as arbitrary.  Why not 9?  Why not 11?  Why not 200 days?  Why a simple moving average and not an exponential moving average or simple time-series momentum?  Just like what I saw with rebalancing schedule, the long-run returns were statistically indistinguishable but the short-run returns had significant dispersion.

The sort of dispersion that put managers out of business.

Ultimately, I developed my view that diversification was three dimensional: what, how, and when.

What is the traditional form of diversification that almost everyone is familiar with.  This is diversification across securities or assets.  It’s the what you’re invested in.

How is the process by which investment decisions are made.  This includes diversification across different investment styles – such as value versus momentum – but also within a style.  For example, how are we measuring value?  Or what trend model and speed are we using?

When is the rebalance schedule.

Just as traditional portfolio theory tells us that we should diversify what we invest in because we are not compensated for bearing idiosyncratic risk, I believe the same is true across the how and when axes.

Our aim should be to diversify all uncompensated bets with extreme prejudice.

5.     The philosophical limits of diversification: if you diversify away all the risk, you shouldn’t expect any reward.

One of the most common due diligence questions is, “when doesn’t this strategy work?”  It’s an important question to ask to make sure you understand the nature of any strategy.

But the fact that a strategy doesn’t work in certain environments is not a critique.  It should be expected.  If a strategy worked all the time, everyone would do it and it would stop working.

Similarly, if you’re building a portfolio, you need to take some risk.  Whether that risk is some economic risk or process risk or path dependency risk, it doesn’t matter – it should be there, lurking in the background.

If you want a portfolio that has absolutely no scenario risk, you’re basically asking for a true arbitrage or an expensive way of replicating the risk-free rate.

In other words, if you diversify away all the risk in your portfolio – again, think of this as smearing the ball of play-doh really, really, really thin across a very large plane of risk scenarios – return should just converge to the risk-free rate.

If it doesn’t, you’d have an arbitrage: just borrow at the risk-free rate and buy your riskless, diversified portfolio.

But arbitrages don’t come around easy.  Especially for low-frequency strategies and combinations of low-Sharpe asset classes.  There is no magical combination of assets and strategies that will eliminate downside risk in all future states of the world.

A corollary to this point is what I call the frustrating law of active management.  The basic idea is that if an investment idea is perceived both to have alpha and to be “easy”, investors will allocate to it and erode the associated premium.  That’s just basic market efficiency.

So how can a strategy be “hard”?  Well, a manager might have a substantial informational or analytical edge.  Or a manager might have a structural moat, accessing trades others do not have the opportunity to pursue.

But for most major low-frequency edges, “hard” is going to be behavioral.  The strategy has to be hard enough to hold on to that it does not get arbitraged away.

Which means that for any disciplined investment approach to outperform over the long run, it must experience periods of underperformance in the short run.

But we can also invert the statement and say that for any disciplined investment approach to underperform over the long run, it must experience periods of outperformance in the short run.

For active managers, the frustration is that not only does their investment approach have to underperform from time to time, but bad strategies will have to outperform.  The latter may seem confusing, but consider that a purposefully bad strategy could simply be inverted – or traded short – to create a purposefully good one.

6.     It’s usually the unintended bets that blow you up.

I once read a comic – I think it was The Far Side, but I haven’t been able to find it – that joked that the end of the world would come right after a bunch of scientists in a lab said, “Neat, it worked!”

It’s very rarely the things we intend to do that blow us up.  Rather, it’s the unintended bets that sneak into our portfolio – those things we’re not aware of until it’s too late.

As an example, in the mid-2010s, it became common to say how cheap European equities were versus U.S. equities.  Investors who dove headlong into European equities, however, were punished.

Simply swapping US for foreign equities introduces a significant currency bet.  Europe may have been unjustifiably cheap, but given that valuation reversions typically play out over years, any analysis of this trade should have included either the cost of hedging the currency exposure or, at the very least, an opinion as to why being implicitly short the dollar was a bet worth making.
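In rough terms, the dollar return on an unhedged foreign position compounds the local market return with the currency return:

$$ (1 + r_{\text{USD}}) = (1 + r_{\text{local}}) \times (1 + r_{\text{FX}}) \quad \Rightarrow \quad r_{\text{USD}} \approx r_{\text{local}} + r_{\text{FX}} $$

so the currency leg rides along on the full notional of the position, whether or not it was part of the thesis.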

But it could be argued that the analysis itself was simply wrong.  Lawrence Hamtil wrote on this topic many times, pointing out that both cross-country and time-series analysis of valuation ratios can be meaningfully skewed by sector differences.  For example, U.S. equity indices tend to have more exposure to Tech while European indices have more exposure to Consumer Staples.  When normalized for sector differences, the valuation gap narrowed significantly.

People who took the Europe versus US trade were intending to make a valuation bet.  Unless they were careful, they were also taking a currency and sector discrepancy bet.  

Rarely is it the intended bets that blow you up.

7.     It’s long/short portfolios all the way down.

I don’t remember when this one came to me, but it’s one of my favorite mental models.  The phrase is a play on the “Turtles all the way down” expression.

Every portfolio, and every portfolio decision, can be decomposed into being long something and short something else.

It sounds trivial, but it’s incredibly powerful.  Here’s a few examples:

1.     You’re evaluating a new long-only, active fund.  To isolate what the manager is doing, you can take the fund’s holdings and subtract the holdings of their benchmark.  The result is a dollar-neutral long/short portfolio that reflects the manager’s active bets – it’s long the stuff they’re overweight and short the stuff they’re underweight.  This can help you determine what types of bets they’re making, how big the bets are, and whether the bets are even large enough to stand a chance of covering their fee.

2.     If you’re contemplating selling one exposure to buy another in your portfolio, the trade is equivalent to holding your existing portfolio and overlaying a long/short trade: long the thing you’d buy and short the thing you’d sell.  This allows you to look at the properties of the trade as a whole (both what you’re adding and what you’re subtracting).

3.     If you want to understand how different steps of your portfolio construction process contribute to risk or return, you can treat the changes, stepwise, as long/short portfolios.  For example, for a portfolio that’s equal-weight 50 stocks from the S&P 500, you might compare: (1) Equal-Weight S&P 500 minus S&P 500, and then (2) Equal-Weight 50 Stocks minus Equal-Weight S&P 500.  Isolating each step of your portfolio construction as a long/short allows you to understand the return properties created by that step.

In all of these cases, evaluating the portfolio through the lens of the long/short framework provides meaningful insight.
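As a simple sketch of the first example above (with entirely made-up weights; in practice these would come from the fund’s and the index’s holdings files), the active bets are just the difference between the fund’s weights and the benchmark’s weights:

```python
import pandas as pd

# Hypothetical weights for a tiny five-stock universe.
fund = pd.Series({"AAPL": 0.30, "MSFT": 0.25, "XOM": 0.20, "JNJ": 0.05, "NVDA": 0.20})
benchmark = pd.Series({"AAPL": 0.25, "MSFT": 0.25, "XOM": 0.15, "JNJ": 0.20, "NVDA": 0.15})

# Active weights: overweights are implicit longs, underweights are implicit shorts.
active = fund.sub(benchmark, fill_value=0.0)

print(active.sort_values(ascending=False))
print("Sums to:", round(active.sum(), 10))      # ~0: the active portfolio is dollar neutral
print("Active share:", active.abs().sum() / 2)  # how "different" the fund really is
```

Because both sets of weights sum to one, the differences sum to zero: a dollar-neutral long/short portfolio of the manager’s overweights against their underweights.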

8.     The more diversified a portfolio is, the higher the hurdle rate for market timing.

Market timing is probably finance’s most alluring siren’s song.  It sounds so simple.  Whether it’s market beta or some investment strategy, we all want to say: “just don’t do the thing when it’s not a good time to do it.”

After all the equity factors were popularized in the 2010s, factor timing came into vogue.  I read a number of papers that suggested that you could buy certain factors at certain parts of the economic cycle.  There was one paper that used a slew of easily tracked economic indicators to contemporaneously define where you were in the cycle, and then rotated across factors depending upon the regime.

And the performance was just ridiculous.

So, to test the idea, I decided to run the counterfactuals.  What if I kept the economic regime definitions the same, but totally randomized the basket of factors I bought in each part of the cycle?  With just a handful of factors, four regimes, and buying a basket of three factors per regime, you could pretty much brute force your way through all the potential combinations and create a distribution of their return results.
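A minimal sketch of that kind of counterfactual test, using randomly generated factor returns and regime labels purely as placeholders (and randomly sampling combinations rather than exhausting all of them, just to keep it quick), might look like:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

n_months, n_factors, n_regimes, basket_size = 360, 8, 4, 3
factor_returns = rng.normal(0.003, 0.02, (n_months, n_factors))  # placeholder data
regimes = rng.integers(0, n_regimes, n_months)                   # placeholder regime labels

def strategy_return(baskets):
    """Each month, equally weight the 3-factor basket assigned to that month's regime."""
    monthly = [factor_returns[t, list(baskets[regimes[t]])].mean() for t in range(n_months)]
    return np.prod(1 + np.array(monthly)) - 1

# Randomly sample assignments of a 3-factor basket to each of the 4 regimes.
all_baskets = list(combinations(range(n_factors), basket_size))
assignments = {tuple(rng.choice(len(all_baskets), n_regimes)) for _ in range(2000)}
results = [strategy_return([all_baskets[i] for i in idx]) for idx in assignments]

# Compare against just holding every factor, equally weighted, all the time.
naive = np.prod(1 + factor_returns.mean(axis=1)) - 1
print("naive equal-weight percentile vs. randomized rotations:",
      round(100 * np.mean(np.array(results) < naive), 1))
```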

No surprise, the paper result was right up there in the top percentiles.  Know what else was?  Just a naïve, equally-weighted portfolio of the factors.  And that’s when you have to ask yourself, “what’s my confidence in this methodology?”

Because the evidence suggests it is really, really hard to just beat naïve diversification.

There are a few ways you can get a sense for this, but one of my favorites is just by explicitly looking into the future and asking, “how accurate would I have to be to beat a well-diversified portfolio?”  This isn’t a hard simulation to run, and for reasonable levels of diversification, the required accuracy creeps up quite quickly.
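Here is a rough version of that simulation, with made-up return assumptions: an in-or-out timer is given some hit rate at calling whether the portfolio will beat cash next month, and we search for the hit rate at which its median terminal wealth catches up to simply buying and holding.  The only thing that changes between the two cases is the portfolio’s volatility, a crude proxy for how diversified it is:

```python
import numpy as np

rng = np.random.default_rng(7)
n_periods, n_trials, cash = 240, 4000, 0.002  # monthly; hypothetical 0.2%/month cash rate

def breakeven_accuracy(mu, vol):
    """Smallest hit rate at which an in-or-out timer's median terminal wealth
    matches simply buying and holding the portfolio."""
    rets = rng.normal(mu, vol, (n_trials, n_periods))
    hold = np.median(np.prod(1 + rets, axis=1))
    for acc in np.arange(0.50, 0.91, 0.01):
        correct = rng.random((n_trials, n_periods)) < acc
        in_market = np.where(rets > cash, correct, ~correct)  # a correct call = in when excess > 0
        timed = np.where(in_market, rets, cash)
        if np.median(np.prod(1 + timed, axis=1)) >= hold:
            return acc
    return np.nan

# Same premium over cash, but the "diversified" portfolio has half the volatility.
print("concentrated (4.5%/mo vol):", breakeven_accuracy(0.005, 0.045))
print("diversified  (2.0%/mo vol):", breakeven_accuracy(0.005, 0.020))
```

In this toy setup, halving the volatility for the same premium pushes the required hit rate meaningfully higher: the smoother the ride you already have, the better your timing has to be to justify stepping away from it.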

Ultimately, timing is a very low breadth exercise.  To quote Michele Aghassi from AQR, “you’re comparing now versus usual.”  And being wrong compounds forever.

In almost all cases, it’s a lot easier to find something that can diversify your returns than it is to increase your accuracy in forecasting returns.

As a corollary to this lesson, I’ll add that the more predictable a thing is, the less you should be able to profit from it.

For example, let’s say I have a system that allows me to forecast the economic regime we’re in and I have a model for which assets should do well in that economic regime.

If I can forecast the economic regime with certainty, and if the market is reasonably efficient, I probably shouldn’t be able to know which assets will do well in which regime.  Conversely, if I know with perfect certainty which assets will do well in which regime, then I probably shouldn’t be able to forecast the regimes with much accuracy.

If markets are even reasonably efficient, the more easily predictable the thing, the less I should be able to profit from it.

9.     Certain signals are only valuable at extremes.

I was sent a chart recently with a plot of valuations for U.S. large-cap, mid-cap, and small-cap stocks.  The valuations were represented as an average composite of price-to-earnings, price-to-book, and price-to-sales z-scores.  The average z-score of large-caps sat at +1 while the average z-score for both mid- and small-caps sat at -1.

The implication of the chart was that a rotation to small- and mid-caps might be prudent based upon these relative valuations.

Lesson #6 about unintended bets immediately comes to mind.

For example, are historical measures even relevant today?  Before 2008, large-cap equities had a healthy share of financials and energy.  Today, the index is dominated by tech and communication services.  And we went through an entire decade with a zero interest rate policy regime.  How do rates at 5% plus today impact the refinancing opportunities in small-caps versus large-caps?  What about the industry differences between large-caps and small-caps?  Or the profit margins?  Or exposure to foreign revenue sources?  How are negative earners being treated in this analysis?  Is price-to-sales even a useful metric when sales are generated by the entire enterprise but price reflects only the equity?

You might be able to sharpen your analysis and adjust your numbers to account for many of these points.  But there may be many others you simply don’t think of.  And that’s the noise.

Just about every signal has noise.

The question is, “how much noise?”  The more noise we believe a signal to have, the stronger we need the signal to be to believe it has any efficacy.  While we may be comfortable trading precisely measured signals at a single standard deviation, we may only have confidence in coarsely measured signals at much higher significance.
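One simple way to formalize this intuition is a toy model in which the score we observe equals the true signal plus independent measurement noise; the noisier the measurement, the further out the observed score has to be before our best guess of the true signal is even one standard deviation from normal:

```python
def required_observed_score(target_true_z, noise_to_signal_vol):
    """If observed = true + noise (both mean-zero and normal), the best estimate of
    the true score is the observed score shrunk by signal_var / (signal_var + noise_var).
    This inverts that shrinkage: how extreme (in units of the true score's standard
    deviation) must the observed score be before we believe the true score is at
    `target_true_z`?"""
    shrinkage = 1.0 / (1.0 + noise_to_signal_vol ** 2)
    return target_true_z / shrinkage

for noise_ratio in [0.0, 0.5, 1.0, 2.0]:
    z = required_observed_score(1.0, noise_ratio)
    print(f"noise vol = {noise_ratio:.1f}x signal vol: act only beyond an observed score of {z:.2f}")
```

With no noise, you can act at one standard deviation; with noise as large as the signal, you need a two-standard-deviation reading; with noise at twice the signal’s size, five.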

10.  Under strong uncertainty, “halvsies” can be an optimal decision.

During the factor wars of the mid-2010s, firms battled over the right portfolio construction approach: mixed or integrated.

The mixed approach said that each factor should be constructed in isolation and held in its own sleeve.

The integrated approach said that stocks should be scored on all the factors simultaneously, and the stocks with the best aggregate scores should be selected.

There were powerhouses on both sides of the argument.  Goldman Sachs supported mixed while AQR supported integrated.

I spent months agonizing over the right way to do things.  I read papers.  I did empirical analysis.  I even took pen to paper to derive the expected factor efficiency in each approach.

At the end of the day, I could not convince myself one way or another.  So, what did I do?  Halvsies.

Half the portfolio was managed in a mixed manner and half was managed in an integrated manner.

Really, this is just diversification for decision making.  Whenever I’ve had a choice with a large degree of uncertainty, I’ve often found myself falling back on “halvsies.” 

When I’ve debated whether to use one option structure versus another, with no clear winner, I’ve done halvsies.

When I’ve debated two distinctly different methods of modeling something, with neither approach being the clear winner, I’ve done halvsies.

Halvsies provides at least one step in the gradient of decision making and implicitly creates diversification to help hedge against uncertainty.

11.  Always ask: “What’s the trade?”

In July 2019, Greek 10-Year Bonds were trading with a yield that was nearly identical to US 10-Year Bonds.

By December, the yield on Greek 10-year bonds was 40 basis points under US 10-year bonds.  How could that make any sense?  How could a country like Greece make U.S. debt look like it was high yield?

When something seems absurd, ask this simple question: what’s the trade?  If it’s so absurd, how do we profit from it?

In this case, we might consider going long the U.S. 10-year and short the Greek 10-year in a convergence trade.  But we quickly run into an important detail: you don’t actually get paid in percentage points, you get paid in currency.  And that’s where the trade suddenly goes awry.  Here, you’d receive dollars and owe euros.  And if you tried to explicitly hedge that currency exposure up front via a cross-currency basis swap, any yield difference largely melted away.
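The intuition follows from a rough covered-interest-parity approximation: hedging a euro-denominated yield back into dollars picks up roughly the short-rate differential, so, ignoring the cross-currency basis,

$$ y_{\text{GR}}^{\text{hedged to USD}} \approx y_{\text{GR}}^{\text{EUR}} + \left( r_{\text{USD}} - r_{\text{EUR}} \right), $$

and with dollar short rates sitting a couple of percentage points above (negative) euro short rates in 2019, the hedged comparison looked nothing like the headline one.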

A more relevant financial figure would perhaps have been the spread between 10-year Greek and German bonds, which traded between 150 and 275 basis points in the second half of 2019.  Not wholly unreasonable anymore.

When financial pundits talk about things in the market being absurd, ask “what’s the trade?”  Working through how to actually profit from the absurdity often shines a light on why the analysis is wrong.

12.  The trade-off between Type I and Type II errors is asymmetric

Academic finance is obsessed with Type I errors.  The literature is littered with strategies exhibiting alphas significant at the 5% level; above all else, it wants to avoid reporting false positives.

In practice, however, there is an asymmetry that has to be considered.

What is the cost of a false positive?  Unless the strategy is adversely selected, the performance of trading a false positive should just be noise minus trading costs.  (And the opportunity cost of capital.)

What is the cost of a false negative?  We miss alpha.

Now consider how a focus on Type I errors can bias the strategies you select.  Are they more likely to be data-mined?  Are they more likely to be crowded?  Are they less likely to incorporate novel market features without meaningful history?

Once we acknowledge this asymmetry, it may actually be prudent to reduce the statistical requirements on the strategies we deploy.
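A stylized comparison, with entirely hypothetical numbers, makes the asymmetry concrete:

```python
# Entirely hypothetical numbers, purely for illustration.
alpha = 0.02           # annual alpha of a genuinely good strategy we might reject
trading_costs = 0.003  # annual drag from trading a "strategy" that is really just noise
sleeve = 0.10          # fraction of the portfolio the candidate strategy would run

cost_of_false_positive = sleeve * trading_costs  # we trade noise and pay the costs
cost_of_false_negative = sleeve * alpha          # we pass on real alpha

print(f"false positive costs ~{cost_of_false_positive:.2%} of portfolio return per year")
print(f"false negative costs ~{cost_of_false_negative:.2%} of portfolio return per year")
```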

13.  Behavioral Time is decades longer than Statistical Time

I recently stole this one from Cliff Asness.  This point is less about practical portfolio construction or useful mental models.  It’s simply an acknowledgement that managing money in real life is very, very, very different than managing money in a research environment.

It is easy, in a backtest, to look at the multi-year drawdown of a low-Sharpe strategy and say, “I could live through that.”  When it’s a multi-decade simulation, a few years looks like a small blip – just a statistical eventuality on the path.  You live that multi-year drawdown in just a few seconds in your head as your eye wanders the equity curve from the bottom left to the upper right.

In the real world, however, a multi-year drawdown feels like a multi-decade drawdown.  Saying, “this performance is within standard confidence bands for a strategy given our expected Sharpe ratio and we cannot find any evidence that our process is broken,” is little comfort to those who have allocated to you.  Clients will ask you for attribution.  Clients will ask you whether you’ve considered X explanation or Y.  Sales will come screeching to a halt.  Clients will redeem.

For anyone considering a career in managing money, it is important to get comfortable living in behavioral time.

14. Jensen’s Inequality

Jensen’s inequality basically says, “a function of an average does not necessarily equal the average of the function.”  More formally, for a convex function f and a random variable X, E[f(X)] ≥ f(E[X]).

What does that mean and how is it useful?  Consider this example.

You’re building a simple momentum portfolio.  You start with the constituents of the S&P 500 and rank them by their momentum scores, selecting the top 100 and then equally weighting them.

But you remember Lesson #4 and decide to use multiple momentum signals to diversify your how risk.

Here’s the question: do you average all the momentum scores together and then pick the top 100, or do you use each momentum score to create its own portfolio and then average those portfolios together?

Jensen’s inequality tells us these approaches will lead to different results.  This is basically the mixed versus integrated debate from Lesson #10.  And the more convex the function is, the more different the results will likely be.  Imagine if, instead of picking the top 100, we picked the top 20 or just the top 5.  It’s easy to imagine how different those portfolios could become with different momentum signals.
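A toy sketch makes the wedge concrete.  Below, random numbers stand in for three correlated momentum signals, and the top-100 selection is the non-linear step; averaging the scores before selecting (integrated) and selecting before averaging (mixed) produce noticeably different portfolios:

```python
import numpy as np

rng = np.random.default_rng(3)
n_stocks, n_signals, top_n = 500, 3, 100

# Random, partially correlated scores standing in for real momentum signals.
base = rng.normal(size=n_stocks)
scores = np.column_stack([0.7 * base + 0.3 * rng.normal(size=n_stocks)
                          for _ in range(n_signals)])

def top(ranked_by):
    return set(np.argsort(ranked_by)[-top_n:])

# "Integrated": average the scores, then select the top 100.
integrated = top(scores.mean(axis=1))

# "Mixed": select a top 100 per signal, then average the sleeves together.
mixed_weights = {}
for k in range(n_signals):
    for stock in top(scores[:, k]):
        mixed_weights[stock] = mixed_weights.get(stock, 0) + 1 / (n_signals * top_n)

print("names in integrated portfolio:", len(integrated))
print("names in mixed portfolio:     ", len(mixed_weights))
print("overlap:", len(integrated & set(mixed_weights)))
```

Even built from exactly the same information, the two portfolios hold different numbers of names with different weights.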

Here’s another trivial example.  You have 10 buy/sell signals.  Your function is to be long an asset when a signal is positive and short when it is negative.

If you average your signals first and then apply the function, your position is binary: fully long or fully short.  But if you apply the function to each signal, and then average the results, you end up with a gradient of weights, the distribution of which will be a function of how correlated your signals are with one another.
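In a few lines, with random numbers standing in for the ten signals:

```python
import numpy as np

rng = np.random.default_rng(5)
signals = rng.normal(size=10)  # placeholder values for the ten buy/sell signals

average_then_sign = np.sign(signals.mean())  # always +1 or -1: fully long or fully short
sign_then_average = np.sign(signals).mean()  # a graded position anywhere between -1 and +1

print("average the signals, then trade the sign:", average_then_sign)
print("trade each signal's sign, then average:  ", sign_then_average)
```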

You can see how Jensen’s inequality plays a huge role in portfolio construction.  Why?  Because non-linearities show up everywhere.  Portfolio optimization?  Non-linear.  Maximum or minimum position sizes?  Non-linear.  Rank-based cut-offs?  Non-linear.

And the more non-linear the function, the greater the wedge. But this also helps us understand how certain portfolio construction constraints can help us reduce the size of this wedge.

Ultimately, Jensen’s inequality tells us that averaging things together in the name of diversification before a convex step in your process versus after it can lead to dramatically different portfolio results.

15. A backtest is just a single draw of a stochastic process.

As the saying goes, nobody has ever seen a bad backtest.

And our industry, as a whole, has every right to be skeptical about backtests.  Just about every seasoned quant can tell you a story about naively running backtests in their youth, overfitting and overoptimizing in desperate search of the holy grail strategy.

Less sophisticated actors may even take these backtests and launch products based on them, marketing the backtests to prospective investors.

And most investors would be right to ignore them outright.  I might even be in favor of regulation that prevents them from being shown in the first place.

That doesn’t mean backtests are ultimately futile.  But we should acknowledge that when we run a single backtest, it’s just a single draw of a larger stochastic process.  Historical prices and data are, after all, just a record of what happened, not a full picture of what could have happened.

Our job, as researchers, is to use backtesting to try to learn about what the underlying stochastic process looks like.

For example, what happens if we change the parameters of our process?  What happens if we change our entry or exit timing?  Or change our slippage and impact assumptions?

One of my favorite techniques is to change the investable universe, randomly removing chunks of it to see how sensitive the process is.  Similarly, I like to randomly remove periods of time from the backtest to test regime sensitivities.
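A rough sketch of that kind of robustness check, with random data and a stand-in strategy in place of a real backtest engine, might look like:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(11)

# Placeholder inputs: in practice, `returns` would be real security-level data and
# `run_backtest` would be your actual strategy code.
returns = pd.DataFrame(rng.normal(0.005, 0.05, (240, 50)))  # 240 months x 50 securities

def run_backtest(rets):
    """Stand-in strategy: each month, hold last month's top-decile performers, equally weighted."""
    winners = rets.shift(1).rank(axis=1, pct=True) > 0.9
    strat = (rets * winners).sum(axis=1) / winners.sum(axis=1).clip(lower=1)
    return (1 + strat).prod() - 1

baseline = run_backtest(returns)

# Re-run the same backtest on random subsets of the universe and of history.
universe_draws = [run_backtest(returns.sample(frac=0.7, axis=1, random_state=i))
                  for i in range(200)]
period_draws = [run_backtest(returns.sample(frac=0.7, axis=0, random_state=i).sort_index())
                for i in range(200)]

lo_u, hi_u = np.percentile(universe_draws, [5, 95])
lo_p, hi_p = np.percentile(period_draws, [5, 95])
print(f"baseline total return:          {baseline:.0%}")
print(f"universe subsamples (5th-95th): {lo_u:.0%} to {hi_u:.0%}")
print(f"history subsamples (5th-95th):  {lo_p:.0%} to {hi_p:.0%}")
```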

Injecting this randomness into the backtest process can tell us how much of an outlier our singular backtest really is.

Another fantastic technique is to purposefully introduce lookahead bias into your process.  By explicitly using a crystal ball, we can find the theoretical upper limits of achievable results and develop confidence bands for what our results should look like with more reasonable accuracy assumptions.
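For example, a crystal-ball version of a monthly rotation (again on placeholder data) bounds what any real signal could have achieved, and degrading the crystal ball traces out what more realistic accuracy would deliver:

```python
import numpy as np

rng = np.random.default_rng(13)
asset_returns = rng.normal(0.005, 0.04, (240, 5))  # placeholder monthly asset-class returns

# Crystal ball: each month, hold the asset that is about to perform best (pure lookahead).
perfect_foresight = np.prod(1 + asset_returns.max(axis=1)) - 1

def degraded_crystal_ball(accuracy, trials=1000):
    """Median terminal return when the right asset is picked with probability `accuracy`
    and a random asset is picked otherwise."""
    n, k = asset_returns.shape
    wealth = []
    for _ in range(trials):
        best = asset_returns.argmax(axis=1)
        guess = rng.integers(0, k, n)
        pick = np.where(rng.random(n) < accuracy, best, guess)
        wealth.append(np.prod(1 + asset_returns[np.arange(n), pick]) - 1)
    return np.median(wealth)

print(f"perfect foresight: {perfect_foresight:,.0%}")
for p in [0.4, 0.6, 0.8]:
    print(f"accuracy of {p:.0%}:    {degraded_crystal_ball(p):,.0%}")
```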

Backtesting done poorly is worse than not backtesting.  You’d be better off with pen and paper just trying to reason about your process.  But backtesting done well, in my opinion, can teach you quite a bit about the nature of your process, which is ultimately what we want to learn about.

16.  The Market is Usually Right

Did I say 15 ideas and lessons?  Here’s a bonus lesson that’s taken me far longer to learn than I’d care to admit.

The market is, for the most part, usually right.  It took me applying Lesson #11 – “What’s the Trade” – over and over to realize that most things that seem absurd probably aren’t.

That isn’t to say there aren’t exceptions.  If we see $20 on the ground, we might as well pick it up.  The 2021 cash & carry trade in crypto comes to mind immediately.  With limited institutional capacity and a nearly insatiable appetite for leverage from retail investors, the implied financing rates in perps and futures hit 20%+ for highly liquid tokens such as Bitcoin and Ethereum.  I suspect that’s as close to free money as I’ll ever get.
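For context, the implied financing rate in that trade falls straight out of the futures basis.  With made-up but era-plausible numbers:

```python
# Hypothetical numbers, for illustration: spot BTC at $50,000, a 3-month future at $52,500.
spot, future, days_to_expiry = 50_000, 52_500, 90

# Buy spot, short the future, and the basis converges to you by expiration.
implied_financing = (future / spot - 1) * (365 / days_to_expiry)  # simple annualization
print(f"implied annualized financing rate: {implied_financing:.1%}")  # roughly 20%
```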

But that’s usually the exception.

This final lesson is about a mental switch for me.  Instead of seeing something and immediately saying, “the market is wrong,” I begin with the assumption that the market is right and I’m the one who is missing something.  This forces me to develop a list of the things I might be missing or overlooking and to exhaust those explanations before I can build my confidence that the market is, indeed, wrong.

Conclusion

If you made it this far, thank you.  I appreciate the generosity of your time.  I hope some of these ideas or lessons resonated with you and I hope you enjoyed reading as much as I enjoyed reflecting upon these concepts and putting together this list.  It will be fun for me to look back in another 15 years and see how many of these stood the test of time.

Until then, happy investing.