SPC for Strategy Tracking 1/2

In this blog I have introduced three trading strategies so far. It is great to have them, and I am sure I will add more later. For now I am using these three to test-drive my trading business and to improve my process and tools setup. So I am trading these strategies live, learning from my experience and from my mistakes :-). One topic that has been on my mind lately is the question of when my live strategy results start to deviate from the original results achieved during backtesting.

I just finished reading Evidence-Based Technical Analysis by David Aronson. In his book, David discusses in great depth various statistical methods for analysing strategy test results. On his blog, Jez Liberty discusses the bootstrap method and Monte Carlo analysis. Both methods can be used to estimate the distribution of the population of a strategy's returns from the sample returns produced in a backtest. With this distribution one can determine a confidence interval for the strategy derived from the sample returns. Jez is a great guy and has even provided free sample code/tools to run your own analysis.
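To make the bootstrap idea concrete, here is a minimal sketch of a percentile-bootstrap confidence interval for the mean per-trade return. This is my own illustration, not Jez's code: the function name and the plain-list input are assumptions, and a real analysis would use far richer diagnostics.

```python
import random

def bootstrap_ci(returns, n_resamples=10_000, alpha=0.05, seed=42):
    """Percentile-bootstrap confidence interval for the mean trade return.

    Resamples the backtested returns with replacement many times,
    computes the mean of each resample, and reads off the empirical
    (alpha/2, 1 - alpha/2) percentiles of those means.
    """
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    n = len(returns)
    means = sorted(
        sum(rng.choice(returns) for _ in range(n)) / n
        for _ in range(n_resamples)
    )
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi
```

With 10,000 resamples the interval stabilises quickly; the percentile method is the simplest variant, and Aronson's book covers its limitations (bias, small samples) in depth.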

The next step after analysing the backtested returns is monitoring the live results to see whether they match historical performance. Having a day job in Supply Chain Management, I have come across a good tool for this: Statistical Process Control (SPC) charts.

In production management these charts are used to monitor the performance of a process by measuring a tracking statistic on samples taken from the process output. The individual data points are plotted on a chart together with so-called Upper/Lower Control Limits (UCL/LCL). These limits are derived from the historical variation of the process when it was running acceptably. When data points violate the control limits, the process is considered out of control and an intervention is made to correct it. There are various sites that discuss the theory of SPC, and a good starting point is here.

In applying SPC charts to trading strategies I have used the following approach. The process to be monitored is the trading strategy. The strategy produces a return per trade, and this is the tracking statistic. I use the backtested results to calculate the UCL/LCL and the mean of the tracking statistic. Live results are then plotted in the control chart. When studying the literature, one will find various types of control charts depending on the type of tracking statistic (variable/attribute data) and the sample size. Here is a good site on control chart selection, which contains the following decision tool.
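The monitoring step itself can be sketched in a few lines: given control limits computed from the backtest, flag any live trade whose return falls outside them. The function name and list-of-returns input are my own assumptions for illustration, not part of the original setup.

```python
def out_of_control(live_returns, ucl, lcl):
    """Return (trade index, return) pairs that violate the control limits.

    A non-empty result suggests the live strategy has drifted from the
    backtested process and warrants investigation.
    """
    return [(i, r) for i, r in enumerate(live_returns) if r > ucl or r < lcl]
```

In practice one would also apply the usual run rules (e.g. several consecutive points on one side of the mean), not just single-point violations.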

With the help of the selection tool, I have decided to use X-MR (aka I-MR) control charts. In the next post I will go into detail and explain how I created the I-MR control charts for my strategies and how I automated the creation of the charts with Amibroker/Excel.
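As a preview of how X-MR limits are computed, here is a minimal sketch using the standard I-MR constants (2.66 = 3/d2 and 3.267 = D4 for a moving range of size 2). This is my own Python illustration of the textbook formulas; the actual automation in the next post uses Amibroker/Excel, and the function name and dict layout are assumptions.

```python
def imr_limits(backtest_returns):
    """Compute X-MR (individuals / moving-range) control limits
    from a backtest's per-trade returns.
    """
    x_bar = sum(backtest_returns) / len(backtest_returns)
    # moving ranges: absolute differences between consecutive trade returns
    mrs = [abs(b - a) for a, b in zip(backtest_returns, backtest_returns[1:])]
    mr_bar = sum(mrs) / len(mrs)
    return {
        "x_ucl": x_bar + 2.66 * mr_bar,  # individuals chart upper limit
        "x_cl": x_bar,                   # centre line = mean trade return
        "x_lcl": x_bar - 2.66 * mr_bar,  # individuals chart lower limit
        "mr_ucl": 3.267 * mr_bar,        # moving-range chart upper limit
        "mr_cl": mr_bar,                 # moving-range centre line
    }
```

Feeding the backtested trade list through this gives the limits against which each new live trade is plotted.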



1 thought on “SPC for Strategy Tracking 1/2”

  1. Interesting concept – I have always thought that you need to check the performance of your trading system once you’ve gone live vs. the back-testing results to identify when the system might be broken. Looking forward to the next posts for more detail…

    The other aspect which is probably important to monitor is the performance of the live trading vs. performance of back-testing system on live data (running in parallel). Any divergence in results might indicate bad assumptions such as slippage in the historical back-test… But I’m not sure if this warrants a complicated monitoring process…

    Thanks for the mention and nice words!
