Investors have been no strangers to persistent and growing concerns of global market uncertainty. Sovereign legislators have failed to take the steps necessary to quell market concerns of fiscal imprudence (both in the United States and abroad), assets have been driven far from fundamental valuations by central banks’ uncoordinated liquidity injections, and mounting surpluses have engendered a savings rate that continues to dampen global growth expectations. Such sustained deviations from historical norms have had a marked impact on asset management thought leaders: George Soros has hung up his trading shoes, Paulson’s Advantage Plus Fund lost more than 20% in 2012, and the seemingly omniscient minds at SAC Capital Advisors continue to surface in insider trading investigations. If you don’t think this is a challenging environment in which to make portfolio allocation decisions, you’re not paying attention.

Fortunately, there is one cog in the transmission of the market that still appears to be functioning normally: the relative volatility of stocks. Relative volatility simply refers to the volatility of an asset relative to another, rather than on an absolute basis. So even though an investor might not be able to determine the volatility of stock XYZ next month based on its volatility this month, she may be able to glean how the volatility of stock XYZ will compare to its peers by looking at historical data.


To examine the persistence of relative volatility in this blog post, we use stock prices for roughly 1,500 stocks over the period from July 31st, 2002 to October 22nd, 2012 (the same data set used to illustrate the positive skew of stock picking). To provide the reader a contrasting relative statistic with no persistence, we also look at a stock’s relative return. To determine the persistence in a stock’s relative return and/or volatility, the following recipe is used for weekly, monthly, quarterly, and semi-annual time windows:

  1. Calculate the historical return and volatility for all stocks for the given window
  2. Bin each statistic into a quartile (four equal-sized groups, where the 1st quartile is the 1/4 of the sample with the lowest values and the 4th quartile is the 1/4 of the sample with the highest values)
  3. Calculate the future return and volatility for all stocks for the given window
  4. Bin the future stock statistic into quartiles again
  5. Compare the historical and future quartile for each of the stocks
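The five steps above can be sketched in a few lines of Python. This is a minimal illustration assuming a pandas DataFrame of periodic returns; `quartile_persistence` is a name of my own, not code from the original study:

```python
import numpy as np
import pandas as pd

def quartile_persistence(returns, window):
    """Fraction of stocks that change volatility quartile between one
    window and the next. `returns` has rows = dates, columns = tickers;
    `window` is the number of rows per window."""
    hist = returns.iloc[:window]
    futr = returns.iloc[window:2 * window]
    # Steps 1 & 3: historical and future volatility for each stock
    hist_vol = hist.std()
    futr_vol = futr.std()
    # Steps 2 & 4: bin each statistic into quartiles (labels 1 through 4)
    hist_q = pd.qcut(hist_vol, 4, labels=False) + 1
    futr_q = pd.qcut(futr_vol, 4, labels=False) + 1
    # Step 5: compare quartiles and return the switching fraction
    return (hist_q != futr_q).mean()
```

The same function works for the return statistic by swapping `.std()` for a cumulative-return calculation.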

The following graphic illustrates the relative quartile calculation for each security.

[Figure: window illustration]

If there is no persistence in a given stock’s return or volatility (i.e. the historical quartile is more or less independent of the realized quartile over the next window), we should observe the 1,500 stocks changing quartiles about 75% of the time, on average. Why? Read on…

What to Expect

A simple thought experiment should clear up any confusion around why stock statistics would switch quartiles 75% of the time if they happened randomly. Suppose a four-sided die was rolled for each stock to determine its historical quartile. If it came up 1, that stock went into the first quartile; rolling a 2 placed the stock into the second quartile, and so on. Then, for each stock, the die was rolled again, this time to determine the future quartile. If the same number was rolled the second time as the first, the stock “stayed in the same quartile.” If a 1 was rolled for Stock A the first time, there is a 25% chance of rolling a 1 again, or a (1 – 25%) = 75% probability of rolling a different number. Therefore, if relative stock statistics between historical and future time periods were random, we would expect quartiles to change approximately 75% of the time.
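The dice experiment is easy to verify by simulation. A short sketch (variable names are my own):

```python
import numpy as np

rng = np.random.default_rng(42)
n_stocks, n_trials = 1500, 1000

# Roll the four-sided die twice per stock, repeated across many trials
hist_q = rng.integers(1, 5, size=(n_trials, n_stocks))
futr_q = rng.integers(1, 5, size=(n_trials, n_stocks))

# Fraction of stocks that change quartile in each trial
switch_frac = (hist_q != futr_q).mean(axis=1)

print(switch_frac.mean())  # close to 0.75
```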

Below is a histogram where historical and future quartiles were randomly generated using our “dice” example, where the dotted line is our 75% value.

[Figure: Randomized Example]

Clearly, the frequency distributions center around 0.75, the fraction of stocks expected to change quartiles in future periods when the data is random. The reader should also note that the distributions get fatter when fewer stocks are used, and thinner (more tightly centered around the mean) when more stocks are used. So for our analysis of almost 1,500 stocks, we should expect a fairly tight distribution around the mean if the relative statistic is in fact random.
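The tightening of the distribution as the stock count grows follows from the usual square-root law: the spread of the switching fraction shrinks roughly like 1/sqrt(n). A quick illustrative check (names are my own):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 2000

def switch_std(n_stocks):
    """Standard deviation of the quartile-switching fraction across
    trials when quartiles are assigned completely at random."""
    h = rng.integers(1, 5, size=(n_trials, n_stocks))
    f = rng.integers(1, 5, size=(n_trials, n_stocks))
    return (h != f).mean(axis=1).std()

# Spread shrinks roughly like 1/sqrt(n_stocks)
for n in (100, 400, 1600):
    print(n, switch_std(n))
```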

Sample Results

Below are the results of our analysis for weekly, monthly, quarterly, and semi-annual windows. A dashed line is drawn at 0.75 to show where quartile switching would center, were the relative statistic random.



Returns: No persistence from historical to future quartiles, implying that relative return is approximately random from one weekly period to the next.

Volatility: Persistence from historical to future quartiles, with quartile switching well below the threshold for random values. A skewed left tail further implies a greater degree of relative volatility persistence.



Returns: Again, no persistence from historical to future quartiles, implying that relative return is approximately random from one monthly period to the next.

Volatility: Persistence from historical to future quartiles, with quartile switching well below the threshold for random values. Compared to the weekly values, monthly relative persistence appears even greater, with a greater degree of left skew, implying that monthly relative volatility persists even more strongly than weekly.



Returns: Again, no persistence from historical to future quartiles.

Volatility: Relative volatility persistence continues to be evident, although fatter right tails now accompany the fatter left tails observed in the monthly histogram.



Returns: No persistence from historical to future quartiles.

Volatility: Continued relative volatility persistence.

Relative Statistics Under the “New Normal”

At the outset of this blog post, I claimed that “one cog in the transmission of the market still appears to be functioning normally.” The object of reference was relative volatility; therefore, if relative volatility persistence has continued to function much as it did over the 2002 – 2008 period, we should see a similarly shaped histogram to the ones above.

The graphic below shows relative statistics of return and volatility from January 2010 to October 22nd, 2012, with a window size of one week (larger windows provided fewer observations and obscured the shape of the distribution).



In later blog posts, relative volatility persistence will be examined further by exploring:

  • Whether low or high quartiles have a greater persistence
  • Risk adjusted returns of low relative volatility stocks
  • Methodologies to construct portfolios of low-relative volatility stocks

For now, the illustrations above should provide investors ample fodder to conclude that relative returns follow a random process, whereas relative volatility has some underlying, non-random behavior. The takeaway is that no information regarding a stock’s future relative performance can be gleaned from the past.1 However, the same cannot be said of relative volatility. The relative volatility that occurred in the past most certainly holds bread crumbs of insight about future relative volatility expectations.

Why is this information useful? Firstly, portfolio managers are often given mandates to stay fully invested above some threshold, and therefore must try to reduce the relative impact of market dislocations through stock selection; relative volatility is a natural addition that can aid in that process. Furthermore, as we will see in later posts, simple optimization procedures can incorporate estimates of relative volatility to more effectively manage the macro-behavior of intra-asset class stocks. In these cases and many others that center around stock selection, deeper analysis of relative volatility can provide an enhanced framework for decision making, especially in financial markets where “the new normal” seems to look little like the old.

©Newfound Research LLC, 2013

  1. For those curious about the academic implications, this blog post supports Paul Samuelson’s view of the Efficient Market Hypothesis that markets are largely “micro-efficient” but not “macro-efficient.”

Benjamin is a Managing Director in Newfound’s Product Development and Quantitative Strategies group, where he is responsible for the ongoing research and development of new intellectual property and strategies. Specifically, Benjamin’s focus is in the area of exploring model applications to fundamental, economic and systemic market variables. Drawing on his years of experience in the financial services industry, he helps to ensure that Newfound’s products and messaging effectively meet the needs of investors and portfolio managers. He also plays a critical role in developing new business and client relationships.