As we saw in a previous post, detecting jumps in asset prices can be relevant in tactical asset allocation models, especially those that rely on volatility estimates.  Suppose we have identified a jump through some means – what do we do with it?

At this point, I like to ask myself two questions:

  1. How confident am I in my assertion?
  2. How do I translate this confidence into action?

The example we saw before with Stock X and Stock Y is a good illustration of why this is important.

[Figure: Jump Effect – simulated price paths of Stock X and Stock Y]

Both stocks jumped up by 15% on Day 200.  Stock X then continued along its GBM-generated price path at the same volatility, while Stock Y continued at twice its previous volatility, which had started out equal to that of Stock X.
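To make the setup concrete, here is a minimal sketch of how two such paths could be simulated.  The drift, volatility levels, horizon, starting price, and seeds are illustrative assumptions, not the parameters behind the original chart.

```python
import numpy as np

TRADING_DAYS = 252

def simulate_jump_path(n_days=400, jump_day=200, jump_size=0.15,
                       sigma_before=0.20, sigma_after=0.20,
                       mu=0.05, s0=100.0, seed=0):
    """Simulate a daily GBM price path with a single multiplicative jump.

    Volatilities and drift are annualized; sigma_after applies from the
    jump day onward, so doubling it mimics the Stock Y scenario.
    """
    rng = np.random.default_rng(seed)
    dt = 1.0 / TRADING_DAYS
    sigmas = np.where(np.arange(n_days) < jump_day, sigma_before, sigma_after)
    log_returns = ((mu - 0.5 * sigmas**2) * dt
                   + sigmas * np.sqrt(dt) * rng.standard_normal(n_days))
    # Apply the jump as an extra log return on the jump day
    log_returns[jump_day] += np.log(1 + jump_size)
    return s0 * np.exp(np.cumsum(log_returns))

# Stock X: volatility unchanged after the jump
stock_x = simulate_jump_path(sigma_before=0.20, sigma_after=0.20, seed=1)
# Stock Y: volatility doubles after the jump
stock_y = simulate_jump_path(sigma_before=0.20, sigma_after=0.40, seed=2)
```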

On Day 200 (or even for a few days following), I really don’t know what the future has in store.  Even if I know what happens next because I am testing how a strategy would have performed in 2003, for instance, my model can’t know the future.  If it reacted to future information, it would be a bad model, no matter how good the results looked.

This is where “thinking like a Bayesian”, as author Aaron Brown puts it, is useful for answering the questions posed earlier.  Say I have some preconceived notion of how often jumps occur in stock prices (for instance, a 15% change occurs about once every 4 years assuming that log returns follow a t-distribution with 4 degrees of freedom, mean 0, and volatility of 30%).  This is my prior estimate.  Now on Day 201 I observe a small price change.  What is my new belief that Day 200 was in fact a jump based on this new information?  This is my posterior estimate.
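As a rough sketch of that update, the snippet below applies Bayes’ rule to two competing explanations for the Day 200 move: a one-off jump, after which returns keep the original volatility, versus a lasting shift to a higher-volatility regime.  The prior, the regime volatilities, and the t-distributed likelihoods are assumptions chosen for illustration, not the exact model described above.

```python
import numpy as np
from scipy import stats

TRADING_DAYS = 252

def t_scale(annual_vol, df=4):
    """Scale parameter giving a t-distribution the target daily volatility.

    For df > 2, the variance of a scaled t is scale**2 * df / (df - 2).
    """
    daily_vol = annual_vol / np.sqrt(TRADING_DAYS)
    return daily_vol * np.sqrt((df - 2) / df)

def posterior_jump_prob(returns_after, prior_jump=0.5,
                        vol_if_jump=0.30, vol_if_shift=0.60, df=4):
    """P(Day 200 was a one-off jump | returns observed afterward).

    "Jump" hypothesis: subsequent log returns keep the original 30% vol.
    "Regime shift" hypothesis: subsequent returns run at a higher vol.
    All parameter choices are assumptions made for illustration.
    """
    like_jump = stats.t.pdf(returns_after, df, scale=t_scale(vol_if_jump, df)).prod()
    like_shift = stats.t.pdf(returns_after, df, scale=t_scale(vol_if_shift, df)).prod()
    numerator = like_jump * prior_jump
    return numerator / (numerator + like_shift * (1 - prior_jump))

# A single small return on Day 201 nudges belief toward the one-off jump story
print(posterior_jump_prob(np.array([0.002])))
```

Passing a longer array of post-jump returns simply multiplies in more likelihood terms, which is the repeated updating described next.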

If we continue this process, as we observe more data after our supposed jump, we will refine our estimate of the probability that Day 200 was actually a jump.  This degree of belief can then be used to filter the jump.  Rather than removing it totally from our analysis, our model could account for a portion of it, weighted by that probability.
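One simple way to act on that degree of belief is to pass through only the portion of the move we think was not a jump.  The linear weighting below is just one hypothetical choice of filter.

```python
def filter_jump(observed_return, p_jump):
    """Scale the suspected jump return by the probability it was NOT a jump,
    so downstream volatility or momentum estimates see only a partial effect."""
    return observed_return * (1 - p_jump)

# If we are 80% confident Day 200 was a jump, only 20% of the 15% move
# flows through to the model's inputs.
print(filter_jump(0.15, 0.80))  # 0.03
```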

Predicting these jumps is an entirely different animal, but once they occur, we would like to react to them in a robust, adaptive manner.  By doing so, we can ensure that our model avoids whipsaw, accurately measures information flow, and ultimately behaves in a way that agrees with Newfound’s philosophies of quantitative integrity and sound risk management.

Nathan is a Vice President at Newfound Research, a quantitative asset manager offering a suite of separately managed accounts and mutual funds. At Newfound, Nathan is responsible for investment research, strategy development, and supporting the portfolio management team.

Prior to joining Newfound, he was a chemical engineer at URS, a global engineering firm in the oil, natural gas, and biofuels industry where he was responsible for process simulation development, project economic analysis, and the creation of in-house software.

Nathan holds a Master of Science in Computational Finance from Carnegie Mellon University and graduated summa cum laude from Case Western Reserve University with a Bachelor of Science in Chemical Engineering and a minor in Mathematics.