One of the first questions I typically receive when I discuss our dynamic, volatility-adjusted momentum models is whether the dynamic window driving those models simply contracts when market volatility increases and expands when it declines.  I suspect the question comes up because most people associate "volatility" with "risk."

Volatility and risk are not quite the same thing -- a fact that has been repeated ad nauseam in the quant blogosphere, but still bears repeating.  Volatility is a statistical measure of dispersion; risk is an intrinsic characteristic of the investment itself.  We often use the former as a proxy for the latter, but the two are not interchangeable.

If we consider standard financial pricing theory, the current stock price should represent the probability-weighted, discounted future cash flows under all future scenarios.  This means that the current stock price should take into account things like the probability of the CEO being hit by a bus (a real risk) and the effect that the event would have on future cash flows.  Therefore, large changes in price without significant changes in information (in my opinion, this part is critical) mean that the market is having difficulty reaching agreement on a forecast.  So volatility doesn't mean more risk: it just means the market can't agree as to what the future looks like.  A lot of people interpret this as market uncertainty, but it does not necessarily imply uncertainty: it may just be several very certain parties in disagreement.  Consider Carl Icahn and Bill Ackman's recent public battle over the future of Herbalife: both parties were staunchly committed to entirely opposite viewpoints.
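To make that intuition concrete, here is a toy example (with entirely made-up scenario values and probabilities) of how two investors who are each perfectly certain, but who assign different probabilities to the same two scenarios, arrive at very different fair values for the same stock -- exactly the kind of disagreement that shows up as volatility:

```python
# Toy illustration of price as a probability-weighted value across scenarios.
# The per-share scenario values and the probabilities below are made-up numbers.
good_value, bad_value = 120.0, 60.0  # discounted per-share value in each scenario

def fair_value(p_good):
    """Probability-weighted value across the two scenarios."""
    return p_good * good_value + (1.0 - p_good) * bad_value

print(fair_value(0.80))  # a bullish investor's fair value: 108.0
print(fair_value(0.30))  # a bearish investor's fair value:  78.0
```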

Volatility can often serve as a proxy for risk -- especially in rank-ordering securities -- but it does not necessarily capture all risks.  As an example, consider a firm that is entertaining a buyout offer.  The price will typically spike towards the buyout price and volatility will dry up.  The uncertainty of the deal will not be measured by volatility, but rather by the discount at which the security trades relative to the offer price.  Despite low levels of volatility, the security still carries considerable jump risk from the possibility of the deal falling through.  The lack of volatility simply means that the market agrees on the future event probabilities and their impacts.
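As a stylized illustration -- with hypothetical prices, and ignoring discounting and time to close -- the deal-break probability the market is pricing in can be backed out from that discount rather than from volatility:

```python
# Hypothetical merger-arbitrage example: back out the market-implied
# probability that the deal closes from the discount to the offer price.
offer_price = 50.0     # announced buyout price per share (assumption)
fallback_price = 40.0  # assumed price if the deal breaks (assumption)
market_price = 48.0    # where the stock currently trades (assumption)

# market_price ~= p_close * offer_price + (1 - p_close) * fallback_price
p_close = (market_price - fallback_price) / (offer_price - fallback_price)
print(f"Implied probability the deal closes: {p_close:.0%}")      # 80%
print(f"Implied probability the deal breaks: {1 - p_close:.0%}")  # 20%
```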

From a mathematical perspective, high volatility can mean a higher probability of loss.  If we return to our meaning of volatility under standard pricing theory, buying without an informed view as to whether a security is trading at a discount or premium means that we are taking a gamble that the market will ultimately conclude that the outlook for the company is rosier than it was when we bought it.  A historically high level of dispersion in the security's price means that we stand to either make or lose a lot in that gamble.  Modeling the stock price as a continuous geometric Brownian motion with (positive) drift μ and volatility σ, we can approximate the probability of ever experiencing a drawdown greater than D as p = (1 - D)^Γ, where Γ = 2μ/σ².  It should not come as a surprise that as volatility increases, so does the probability of larger and larger drawdowns.
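As a quick sanity check of that approximation, here is a small sketch -- the drift and volatility values are illustrative assumptions -- showing how the probability of a 30% drawdown grows as volatility rises:

```python
# Sketch of the drawdown approximation above: for a geometric Brownian motion
# with positive drift mu and volatility sigma, the probability of ever
# suffering a drawdown deeper than D is roughly (1 - D) ** (2 * mu / sigma**2).
def drawdown_probability(depth, mu=0.07, sigma=0.20):
    gamma = 2.0 * mu / sigma ** 2
    return (1.0 - depth) ** gamma

# Illustrative drift of 7% with volatilities of 10%, 20%, and 40%.
for sigma in (0.10, 0.20, 0.40):
    p = drawdown_probability(0.30, mu=0.07, sigma=sigma)
    print(f"sigma = {sigma:.0%}: P(drawdown > 30%) ~ {p:.1%}")
```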

To return to the original question that sparked this post, our models do not tighten up as volatility increases and loosen up as volatility decreases.  The reason is that our models are trying to distill the true underlying trend (i.e. the risk premium being paid).  Increasing volatility does not imply greater risk, just greater uncertainty or disagreement about those risks.  Therefore, tightening our models under these scenarios would likely lead to more whipsaw.  Instead, our dynamic window is driven by a more nuanced process: the ratio of signal (trendiness) to noise (volatility) in the marketplace.  If volatility increases without a commensurate increase in trendiness, our window will actually expand to help average out the impact of the noise and surface the underlying trend.  Likewise, if trendiness increases without a commensurate increase in volatility, our window will contract, allowing our models to react more quickly to trend changes when we can track them with greater certainty.  Because our momentum models care about the one realized path and not the infinite possible paths a stock price could take, our interpretation of volatility is only to help determine whether a change in the realized path can be attributed to trend or noise.
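For illustration only, here is a minimal sketch of that general idea -- this is not our actual model, and the "trendiness" proxy, reference signal-to-noise level, and window bounds are all arbitrary assumptions:

```python
import numpy as np

def adaptive_window(prices, base_window=60, min_window=20, max_window=250,
                    reference_snr=0.1):
    """Toy signal-to-noise-driven lookback: more noise per unit of trend
    lengthens the window; more trend per unit of noise shortens it."""
    returns = np.diff(np.log(prices))
    signal = abs(returns.mean())   # crude proxy for "trendiness"
    noise = returns.std(ddof=1)    # volatility as "noise"
    snr = signal / noise if noise > 0 else 0.0
    window = base_window * reference_snr / max(snr, 1e-6)
    return int(np.clip(window, min_window, max_window))
```

The only design point the sketch is meant to convey is that the window scales with noise relative to signal, rather than with volatility alone.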

Where volatility, as a pure measure, does frequently enter our process is as a sizing and positioning guide during portfolio construction.  We always consider the case where the models driving our strategies fail to provide any informational advantage, or worse, put us at a disadvantage.  From that perspective, we can return to the notion of putting on positions as a pure mathematical "gamble" and determine how much exposure we want for a given bet.  Without an informational advantage, putting on large positions in several high-volatility securities can be a great way to rapidly wipe out our capital base.
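A minimal sketch of that idea -- plain inverse-volatility sizing with made-up volatility numbers, not our actual portfolio-construction process -- looks something like this:

```python
import numpy as np

def inverse_vol_weights(vols):
    """Allocate less capital to higher-volatility positions."""
    inv = 1.0 / np.asarray(vols, dtype=float)
    return inv / inv.sum()

# Example: annualized volatilities of 10%, 20%, and 40% (made-up numbers).
print(np.round(inverse_vol_weights([0.10, 0.20, 0.40]), 3))  # [0.571 0.286 0.143]
```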

So is volatility the same thing as risk?  In one part of our process, the answer is very much no: volatility is uncertainty and we treat it as such.  In another part of our process, the answer is very much yes: volatility is a proxy for potential loss of capital.  A contradiction?  I don't think so.  When using volatility in your models, I think it is paramount to understand why you are using it and what property you are actually modeling: market participant uncertainty about the future, or dispersion in the distribution of returns.

Corey is co-founder and Chief Investment Officer of Newfound Research, a quantitative asset manager offering a suite of separately managed accounts and mutual funds. At Newfound, Corey is responsible for portfolio management, investment research, strategy development, and communication of the firm's views to clients.

Prior to offering asset management services, Newfound licensed research from the quantitative investment models developed by Corey. At peak, this research helped steer the tactical allocation decisions for upwards of $10bn.

Corey is a frequent speaker on industry panels and contributes to ETF.com, ETF Trends, and Forbes.com’s Great Speculations blog. He was named a 2014 ETF All Star by ETF.com.

Corey holds a Master of Science in Computational Finance from Carnegie Mellon University and a Bachelor of Science in Computer Science, cum laude, from Cornell University.

You can connect with Corey on LinkedIn or Twitter.
