Fossicking for Divergent Strategies using EA Studio

It’s time to get our hands dirty and enter the world of data mining. In this post we demonstrate a workflow process we use at ATS to create a vast array of divergent systematic trading strategies with high positive skew, applicable to Forex and CFD instruments spanning asset classes.

This blog post is fairly technical in nature and will not necessarily appeal to systematic trend traders who build their trading strategies solely from logical design principles. It has been written to assist those who use data mining to generate suitable strategies for systematic trend following and momentum models. If you are interested in data mining techniques then read on… but you have been warned. 🙂

There is a range of very powerful data mining programs available in the marketplace, such as Expert Advisor Studio (EA Studio), Strategy Quant X and Adaptrade Builder, which allow the user to quickly generate thousands of algorithms using random and genetic generation methods. Each of these technologies has particular strengths and weaknesses, and it is not the intent of this post to make a personal recommendation about which to use. In fact we use multiple solutions ourselves, as any of these options is excellent value for money and each provides some unique extra bells and whistles which you may want to use to augment your own workflow processes and take data mining to the next level. For the purposes of this post, however, we will be showcasing EA Studio.

It is important to note from the outset that we engage a different workflow process than is traditionally used with data mining models. The reason for this departure from more traditional data mining methods, and in particular the robustness testing measures adopted, is our desire to target divergent as opposed to convergent strategies in our data mining efforts.

This selective preference towards divergence requires us to substantially depart from the traditional workflow methods used by data mining solutions, which are typically biased towards techniques that derive predictive ‘convergent’ solutions with smooth linear equity curves. As a result, we do not use all the recommended workflow components, such as walk forward optimisation and validation, as these techniques do not assist in identifying suitable divergent models.

We have written at length in our blog about our preference for avoiding ‘predictive’ convergent models with negative skew, given their relatively short shelf-life and critical dependence on the maintenance of stable (predictable) market conditions.

Our preference is to data mine those ‘price following’ strategy solutions (such as trend following and momentum breakout models) that respond to more exotic divergent market conditions. These solutions have a far lower trade frequency than convergent models because they avoid trading during the everyday ‘normal’ market churn and only respond to those relatively rare divergent conditions when markets are more likely to be trending in nature.

This therefore necessitates that our processes seek individual trading strategies that trade relatively infrequently, have relatively low sample sizes and visually display very stepped and volatile equity curve profiles representative of divergent equity curve signatures. The stepped nature of the divergent equity curve can be attributed to occasional rapid growth during divergent market conditions, interspersed with long periods of inactivity between divergent phases which incur building drawdowns and extended stagnation.

The diagram below displays the differences between convergent and divergent approaches to data mining.

Defining the Logic Space within which strategies are data mined

To ensure that our data mining operations are directed towards finding suitable divergent strategies, we first need to strictly define the logic space within which strategies are randomly or genetically generated. This first step of the workflow process is essential, as this is where the broad fundamental principles of divergence are established and embedded in every strategy. These principles, common to all divergent strategies, are as follows:

  1. Cut losses short and let profits run;
  2. Trade during divergent market conditions and avoid trading during normal noisy or convergent market conditions;
  3. Use simple models with a preference for fewer variables and avoid prescriptive complex models that significantly reduce degrees of freedom;
  4. Use as much data as you can to enhance strategy robustness;
  5. Remain within the higher timeframes such as H4 and above to reduce frictional costs (such as spread and slippage).

This is how we apply these broad design principles using EA Studio.

Cut Losses Short and Let Profits Run

Within the Strategy Properties of the Reactor settings we always apply a stop loss and a trailing stop condition, and no profit target condition.

The minimum and maximum pips defined for the stop and trail conditions are broadly set by eyeballing the chart for the appropriate instrument and timeframe in MT4 and using the cross-hair tool to define a realistic trailing stop range.

Establishing an open profit condition associated with a trailing stop helps to select those strategies in the data mining process that possess positive skew. We eliminate all generated strategies that possess negative skew, given their ability to significantly compromise the risk exposure of the consolidated portfolio.
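The positive-skew screen can be sketched in code. Below is a minimal illustration (not EA Studio's own filter — the function names and the zero-skew threshold are our assumptions for demonstration) that computes the skewness of a strategy's per-trade returns and rejects negative-skew candidates:

```python
import statistics

def skewness(trades):
    """Fisher-Pearson skewness of a list of per-trade returns."""
    n = len(trades)
    mean = sum(trades) / n
    sd = statistics.pstdev(trades)
    if sd == 0:
        return 0.0
    return sum(((t - mean) / sd) ** 3 for t in trades) / n

def passes_skew_filter(trades, min_skew=0.0):
    """Keep only strategies whose trade distribution is positively skewed."""
    return skewness(trades) > min_skew

# Divergent profile: many small losses cut short, a few large wins left to run.
divergent = [-1, -1, -1, -1, 8, -1, -1, 12, -1, -1]
# Convergent profile: many small wins, occasional large losses (negative skew).
convergent = [1, 1, 1, 1, -8, 1, 1, 1, -12, 1]

print(passes_skew_filter(divergent))   # True
print(passes_skew_filter(convergent))  # False
```

Note how the trailing stop shapes the first distribution: losses are capped near the stop distance while the winners are allowed to grow, which is exactly what produces the positive skew the filter looks for.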

Trade during Divergent Market Conditions

We use a preset entry rule, applied in the Reactor for all strategies, to ensure that we are more likely to be entering our trades within divergent market phases. In this example, for the D1 timeframe on any instrument, we use a 200 SMA condition that must be met by all strategies.

This preset entry rule does not need to be restricted to a 200 period SMA; a vast array of indicators could be used, such as longer-term lookback EMAs, Donchian channels, MA crossovers etc. The intent of this preset feature is simply to restrict trade activity to more exotic and unpredictable divergent market phases.
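As a sketch of what such a preset regime filter does — the 200 SMA long condition described above, expressed in illustrative Python rather than EA Studio's own rule syntax:

```python
def sma(prices, period):
    """Simple moving average of the last `period` closes."""
    if len(prices) < period:
        return None
    return sum(prices[-period:]) / period

def long_regime_ok(prices, period=200):
    """Preset entry condition (a sketch): only allow longs while the close
    sits above its 200-period SMA, i.e. a potentially divergent (trending)
    phase rather than everyday market churn."""
    avg = sma(prices, period)
    return avg is not None and prices[-1] > avg

# 250 synthetic closes drifting upward: the last close sits above its SMA.
uptrend = [100 + 0.1 * i for i in range(250)]
print(long_regime_ok(uptrend))  # True
```

A symmetrical short-side rule would simply require the close to sit below the average; swapping in an EMA or a Donchian channel boundary changes only the `sma` helper.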

Use Simple Models with a Preference for Fewer Variables

In addition to the preset entry condition we need at least one additional entry rule and one exit rule. Ideally we are after a maximum of 2-3 entry conditions (including the preset SMA condition) and a single exit condition for symmetrical long and short strategies. Strategies with too many variables are significantly less robust to variable market conditions and tend to be specifically configured to a single market condition. We therefore opt for strategies possessing fewer variables.

We restrict the indicators used for data mining divergence to traditional trend following and momentum indicators and signals that are easily understood and commonly applied by trend traders, such as Average True Range, Donchian Channels, Moving Averages, Moving Average Crossovers and MACD signals.

Here is a screenshot of the selection of indicators used for entry and exit signals. We have simply excluded those indicators which we feel do not significantly add value in detecting trend following/momentum breakout strategies.

Below is a table reflecting the acceptance criteria used to validate data mined strategies.

Given that we are looking for a vast number of strategies with only a weak edge, we use low threshold acceptance criteria for strategy validation purposes.

For example:

  • We are looking for strategies that are only active during divergent market phases, so the required trade count needs to be sufficiently small to capture these rare events. We therefore reduce the minimum to 50 trades over a 15+ year test range period (approximately 3-4 trades per year) per strategy.
  • We expect volatile return profiles from this type of divergent strategy, hence a minimum return/drawdown ratio of 0.5 over a 15+ year period allows significant volatility in the return stream.
  • A minimum profit factor of 1.1 across the time series ensures that we capture a large number of strategies with a weak edge. If you set this value too high, you will bias your results towards negatively skewed convergent options with higher trade frequency.
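The acceptance criteria above can be expressed as a simple screening function. This is an illustrative sketch using the post's thresholds (50 trade minimum, return/drawdown ratio of at least 0.5, profit factor of at least 1.1) — the helper names are our own, not EA Studio's:

```python
def profit_factor(trades):
    """Gross profits divided by gross losses."""
    gross_win = sum(t for t in trades if t > 0)
    gross_loss = -sum(t for t in trades if t < 0)
    return float("inf") if gross_loss == 0 else gross_win / gross_loss

def max_drawdown(equity):
    """Largest peak-to-trough decline of an equity curve."""
    peak, dd = equity[0], 0.0
    for e in equity:
        peak = max(peak, e)
        dd = max(dd, peak - e)
    return dd

def accept(trades, min_trades=50, min_ret_dd=0.5, min_pf=1.1):
    """Low-threshold acceptance screen for a weak divergent edge,
    intended for a 15+ year test range as described in the post."""
    if len(trades) < min_trades:
        return False
    equity, total = [0.0], 0.0
    for t in trades:
        total += t
        equity.append(total)
    dd = max_drawdown(equity)
    ret_dd = float("inf") if dd == 0 else total / dd
    return ret_dd >= min_ret_dd and profit_factor(trades) >= min_pf

# 56 trades of small losses and occasional large wins passes the screen...
print(accept([-1, -1, -1, 10] * 14))  # True
# ...while a 20-trade sample fails on the minimum trade count alone.
print(accept([-1, 10] * 10))          # False
```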

Use as Much Available Data as Possible to Enhance Strategy Robustness

When selecting a data horizon for mining strategies for a particular instrument and timeframe, we always select the greatest available date range. For robustness purposes, the intent is to ensure that all validated strategies have navigated as many different market conditions as possible.

Preferably you would also test strategies using the multi-market feature of the data mining platform; however, this is only possible if individual instrument volatility is standardised using measures such as fixed fractional position sizing based on the ATR. Unfortunately, EA Studio is currently limited to a standard lot size per instrument, which restricts our ability to use the multi-market feature.

When using data mining programs, multi-market testing is a very powerful feature to assist robustness testing. For data mining purposes, different instruments simply represent different market conditions. It is all just data. As a result, if you have the ability to test across multiple markets using volatility-adjusted position sizing, it is strongly advised, as this effectively extends the data horizon for testing purposes.
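For readers whose platform does support volatility standardisation, here is a minimal sketch of the ATR-based fixed fractional sizing mentioned above. All parameter values (1% risk, a 2-ATR stop, a point value of 1.0) are purely illustrative assumptions:

```python
def atr(highs, lows, closes, period=14):
    """Average True Range over the last `period` bars (simple average)."""
    trs = []
    for i in range(1, len(closes)):
        tr = max(highs[i] - lows[i],
                 abs(highs[i] - closes[i - 1]),
                 abs(lows[i] - closes[i - 1]))
        trs.append(tr)
    return sum(trs[-period:]) / period

def position_size(equity, risk_fraction, atr_value, atr_multiple=2.0, point_value=1.0):
    """Fixed fractional sizing: risk `risk_fraction` of equity against a
    stop placed `atr_multiple` ATRs away from entry."""
    risk_capital = equity * risk_fraction
    stop_distance = atr_multiple * atr_value
    return risk_capital / (stop_distance * point_value)

# Two instruments with very different volatility end up risking the same capital:
size_quiet = position_size(10_000, 0.01, atr_value=0.5)  # 100.0 units
size_wild = position_size(10_000, 0.01, atr_value=2.0)   # 25.0 units
print(size_quiet, size_wild)
```

Because each position risks the same fraction of equity, the return streams of different instruments become comparable, which is what makes multi-market testing meaningful.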

We also ensure that all testing is conducted on the broker data source from which you intend to live trade your strategies. Within the market maker environment of CFDs and Forex, different broker data sources have their own characteristics and nuances, such as the GMT offset used and differences in spread, SWAP and slippage. While operating on the H4 and D1 timeframes helps reduce these material variations in data quality, we strongly recommend that your data mining results are derived from the broker source you intend to trade with.

Furthermore, given that we monitor live trade results against test results as an ongoing workflow process once trading live, using a common broker data source helps ensure that material differences between live and walk-forward results arising from the data source are eliminated or significantly reduced.

Remain within the Higher Timeframes such as H4 and above to Reduce Frictional Costs of Trading

Given that we are realistically targeting divergent strategies with only a weak edge, it is essential that we reduce, where possible, the frictional costs of trading such as spread and slippage. The material impact of spread and slippage on the performance of a trading strategy is significantly higher as you progress to smaller timeframes, as trade frequency increases with scale variance. To reduce this impact we stick to the higher timeframes of H4 and above.

Over-trading a single instrument is a significant obstacle for the systematic trend/momentum trader, as divergent market conditions are few and far between. To overcome this impediment we elect to diversify across many different instruments, which allows us to increase total trade frequency at the global portfolio level. With say 3-5 trades per year per instrument, trading a portfolio of say 200 discrete return streams lifts total trade frequency at the portfolio level to roughly 600-1,000 trades per year. Diversification is therefore essential when data mining for infrequent and unpredictable divergent market conditions.

Restricted Use of Data Mining Modules

For the purposes of data mining for divergent strategies, the modules we will be focused on for strategy generation are:

  1. Strategy generation using a 30% Out of Sample (OOS) component and adopting the common acceptance criteria discussed earlier;
  2. A 20-step optimisation process retaining the 30% OOS; and
  3. A 90% Monte Carlo validation process that randomises indicator parameters (30% indicator change probability).

We do not use Walk Forward Testing and Optimisation, given the stepped nature of divergent strategies with extended periods of stagnation. Under divergent data mining techniques, underlying strategy performance directly responds to market conditions. If market conditions are divergent in nature, then generated strategies should perform well, whereas during convergent or noisy market conditions, strategies should either be inactive or stagnate without too much deterioration of capital.

Given the unknown extent of either convergent or divergent market conditions, we do not assume a better strategy is one that performs well across segmented market data…..hence it would be inappropriate to use walk forward testing and optimisation under these assumptions.

We have attached a set file for EURUSD D1 that is applied to the majority of instruments in our trading universe. The only differences in this generic set file as applied to each instrument relate to the chosen min and max pips used for the stop and trailing stop definition, based on individual instrument volatility. Note that this set file relates to the date range used for Pepperstone data between 1st Jan 2000 and 31st Dec 2015 and includes a 30% OOS component.

1002 EA Studio Settings EURUSD D1 2019-04-21 (to 31 Dec 2015)

Diversification Methods

We use two broad forms of diversification in our data mining workflow processes that comprise:

  • System Diversification; and
  • Market Diversification.

One of the significant benefits afforded by EA Studio is its ability to quickly derive hundreds of different robust strategies that fall within the established design logic. The strategies generated by this software are also automatically filtered to remove correlated systems, by virtue of their correlation statistic over the time series and their degree of system design similarity.

Having such a powerful array of relatively uncorrelated divergent systems facilitates excellent system diversification at the portfolio level.

Now that we are armed with a virtually unlimited swathe of different divergent systems, we also use market diversification as a means to spread our systems far and wide across asset classes to search for divergent market conditions.

Diversifying across 25 markets with approximately 30 unique divergent systems per instrument leaves the portfolio heavily diversified, with over 750 unique long-range return streams to compile at the portfolio level.

There is ample diversification provided through these two methods without having to further diversify across different timeframes… though for those who seek additional diversification, that opportunity is always available.

Workflow Process

Now that we have defined the design logic space and chosen a sufficiently broad universe of instruments spanning asset classes, we are ready to outline the workflow processes undertaken in our data mining venture.

One of the critical traps we do not want to fall into is selection bias. Using data mining techniques it is easy to generate magnificent portfolio results from optimised, in-sample solutions… however, we need to pay special attention to ensuring that our workflow process eliminates selection bias and curve-fit results.

Let’s say we go back in time to early 2016 and we want to generate our first multi-instrument and multi-strategy portfolio. These are the processes we use to generate our first portfolio which has been data mined using long range data up to 31 Dec 2015.

Step 1. Set the date horizon for the data mining activities from the earliest available data per instrument on D1 to 31 Dec 2015. For example, our Pepperstone data for most instruments on D1 is available from 1 Jan 2000 to 31 Dec 2015 (16 years of data). It is important to span as many market conditions as possible in your data range, and in particular the GFC.

Step 2. Ensure that the tail 30% of that data is preserved for OOS testing and not used for strategy generation or optimisation. Strategy validation results use all available data… therefore, for a strategy to pass the validation criteria it needs to pass on both the IS and OOS components.

Step 3. Ensure that strategy optimisation is restricted to the same 70% IS component of data and excludes optimisation over the 30% OOS component.
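Steps 2 and 3 amount to a strict chronological split: generation and optimisation see only the leading 70% of the data, while the trailing 30% is held back. A minimal sketch (the 30% figure matches the post; the function itself is illustrative, not part of EA Studio):

```python
def split_is_oos(bars, oos_fraction=0.30):
    """Chronological split: optimisation sees only the leading in-sample
    portion; the trailing `oos_fraction` is reserved for out-of-sample
    validation and never touched during generation or optimisation."""
    cut = int(len(bars) * (1 - oos_fraction))
    return bars[:cut], bars[cut:]

# 16 yearly "bars", 2000..2015, matching the date range used in the post.
bars = list(range(2000, 2016))
in_sample, out_of_sample = split_is_oos(bars)
print(in_sample[-1], out_of_sample[0])  # 2010 2011
```

The key point is that the split is by time, never random: shuffling bars before splitting would leak future market conditions into the in-sample set.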

Step 4. Do not use Walk Forward Testing or Walk Forward Optimisation (as discussed previously).

Step 5. Apply 90% validation on Monte Carlo testing for the entire data range using ‘randomise indicator parameters’. This test is predominantly used to eliminate curve-fit results from the optimisation phase and ensure that each simple strategy is robust across its indicator parameters. Note that we do not apply other MC tests that are more relevant to convergent styles. Tests such as randomise history data, skip position exit and backtest starting bar are only relevant to strategies possessing a linear equity curve where trade frequency is relatively evenly spaced throughout the test period. With divergent models, trade frequency is sporadic and predominantly confined to divergent market phases.
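The ‘randomise indicator parameters’ idea can be sketched as follows. This is a toy illustration only — perturb each parameter with a 30% probability, re-run the backtest, and require 90% of runs to remain profitable. The `backtest` callable and the ±20% jitter are stand-in assumptions, not EA Studio's internals:

```python
import random

def randomise_params(params, change_prob=0.30, jitter=0.2, rng=random):
    """Perturb each indicator parameter with probability `change_prob`
    by up to +/- `jitter` of its value (the post uses a 30% indicator
    change probability in EA Studio)."""
    out = {}
    for name, value in params.items():
        if rng.random() < change_prob:
            value = value * (1 + rng.uniform(-jitter, jitter))
        out[name] = value
    return out

def monte_carlo_pass(params, backtest, runs=200, required=0.90, rng=random):
    """Pass if at least `required` of the perturbed runs stay profitable.
    `backtest` is a stand-in for a full strategy backtest returning net profit."""
    ok = sum(1 for _ in range(runs) if backtest(randomise_params(params, rng=rng)) > 0)
    return ok / runs >= required

# Toy backtest: profitable as long as the SMA period stays in a wide band,
# i.e. the strategy is not curve-fit to one exact parameter value.
robust = lambda p: 1.0 if 150 <= p["sma"] <= 250 else -1.0
print(monte_carlo_pass({"sma": 200}, robust))  # True
```

A curve-fit strategy, profitable only in a narrow band around its optimised value, would fail the same test.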

Step 6. Save the set file developed for the Reactor workflow process and use it for 4 instances of testing for each instrument. The browser-based EA Studio can run multiple instances of the Reactor settings at the same time. More powerful Threadripper servers etc. can help ramp up the processing power, but for us, using 2 workstations with the Chrome browser, this means we always have 8 instances of EA Studio (for 2 separate instruments) chugging away. Leave the 4-instance Reactor generation running for 24 to 48 hours to generate as many strategies in your collection as possible. A maximum of 400 strategies is available (100 per instance) for each session.

Step 7. Save each collection into a file directory and then compile the 4 collections into a single collection of up to 100 strategies using the Validator. This process removes correlated strategies and also allows another rerun of the entire collection through MC testing at 90% validation. You should therefore be left with a validated set of up to 100 strategies in a single collection (derived by validating 4 separate collections for the same instrument).
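The correlation filtering performed here could be sketched as a greedy pass over equity curves — keep a strategy only if it is sufficiently uncorrelated with everything already kept. EA Studio's actual rule and threshold may differ; the 0.8 cut-off is an assumption:

```python
def pearson(xs, ys):
    """Pearson correlation of two equally sized equity curves."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return 0.0 if vx == 0 or vy == 0 else cov / (vx * vy)

def decorrelate(curves, max_corr=0.8):
    """Greedy filter: keep a strategy only if its equity curve's absolute
    correlation with every already-kept curve stays below `max_corr`."""
    kept = []
    for i, curve in enumerate(curves):
        if all(abs(pearson(curve, curves[j])) < max_corr for j in kept):
            kept.append(i)
    return kept

a = [1, 2, 3, 4, 5]
b = [2, 4, 6, 8, 10]  # perfectly correlated with a -> dropped
c = [5, 1, 4, 2, 3]   # weakly correlated -> kept
print(decorrelate([a, b, c]))  # [0, 2]
```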

Step 8. We want to restrict this list to approximately 20-30 divergent strategies of relatively low trade sample size, which we then compile into a single set of strategies used for live trading purposes. So eliminate those strategies with:

  1. A relatively large sample size. For example, over a 15 year period remove those strategies with a trade count of >125, as they are likely to be convergent in nature or possibly curve fit to the data.
  2. Extreme stagnation or very few trades. Clearly there is a limit to the term ‘extreme’ here. We recognise that stagnation is likely in our strategies, but extreme stagnation is not desirable over a 15 year period.
  3. Relatively low return/drawdown ratios. Remove those strategies with lower return-to-drawdown ratios across the collection, as the relatively higher drawdowns contributing to these ratios suggest the strategies may not be ideal for protecting capital; and
  4. Equity curves that are not representative of overall market conditions for the period. For example, if the market chart clearly reflects divergent conditions during, say, 2003 to 2006 and there is no equivalent equity curve growth for a particular strategy, then drop the strategy. Refer to the method we use to map the equity curve to the market condition, described later in this post. We do this at the single strategy level as well as the sub-portfolio level.

For example, your validation process will restrict an initial input list of, say, 400 strategies to approximately 20-30 strategies that pass the tests above.
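The ‘extreme stagnation’ screen needs a concrete measure to act on. One simple proxy — our own illustration, not an EA Studio statistic — is the longest stretch of the equity curve without a new high:

```python
def longest_stagnation(equity):
    """Longest run of curve points spent at or below the prior equity
    peak, i.e. the longest wait between new equity highs."""
    peak = equity[0]
    longest = current = 0
    for e in equity[1:]:
        if e > peak:
            peak = e
            current = 0
        else:
            current += 1
            longest = max(longest, current)
    return longest

# Stepped divergent-style curve: growth bursts separated by flat spells.
curve = [0, 5, 5, 5, 5, 12, 12, 12, 12, 12, 12, 20]
print(longest_stagnation(curve))  # 5 points without a new high
```

A strategy would then be dropped when this statistic exceeds whatever tolerance you judge reasonable for a 15 year divergent test, since some stagnation is expected but an extreme flat spell is not.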

Step 9. Keep a copy of the ensuing validated collection for future testing at a later date and compile these validated strategies into a single expert advisor (portfolio collection) for the instrument. The single algorithm will therefore contain approximately 30 different strategies which, at a holistic level, we treat as a single divergent automated solution for a single instrument. Within EA Studio the compiled algorithm is referred to as a portfolio. To avoid confusion, this compiled algorithm, representing a collection of strategies compiled into a single EA, is referred to as a sub-portfolio in this blog.

Step 10. Load the sub-portfolio for the compiled instrument into your broker’s demo account and run a backtest over the available data range. The intent of this step is to ensure that the compiled collection generates a suitable result within your broker’s MT4 environment and includes realistic frictional costs of slippage and SWAP in your local currency. Results from this backtest are loaded into the ATS Portfolio Compiler (available in the shop) for further testing and future portfolio compilation purposes.

Step 11. Map the sub-portfolio equity curve to the market condition to ensure that the sub-portfolio produces positive results during divergent market conditions and has not been curve fit.

In this mapping process we align the date axis of the equity curve with the date axis of the market chart and then map those periods of strong performance against the market condition, confirming that during these periods the market was divergent in nature. We also take note of unfavourable market conditions and observe their impact on the sub-portfolio’s equity curve.
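Once the divergent windows have been identified by eye on the chart, the mapping check itself can be approximated in code. A minimal sketch — the dates, equity points and windows below are illustrative only:

```python
from datetime import date

def growth_in_window(equity_by_date, start, end):
    """Equity change between the first and last observations inside a window."""
    points = sorted((d, e) for d, e in equity_by_date.items() if start <= d <= end)
    return 0.0 if len(points) < 2 else points[-1][1] - points[0][1]

def mapped_to_divergence(equity_by_date, divergent_windows, min_growth=0.0):
    """Require positive equity growth inside every period the chart
    identifies as divergent (windows are judged by eye in the post;
    here they are passed in explicitly)."""
    return all(growth_in_window(equity_by_date, s, e) > min_growth
               for s, e in divergent_windows)

# Illustrative equity points and one hand-identified divergent window.
curve = {date(2003, 1, 1): 100, date(2006, 1, 1): 160, date(2008, 1, 1): 158}
windows = [(date(2003, 1, 1), date(2006, 12, 31))]
print(mapped_to_divergence(curve, windows))  # True: the curve grew in the window
```

A sub-portfolio whose equity failed to grow inside a clearly divergent window would be flagged, which is exactly the drop condition described in the filter list above.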

The charts below for USDJPY D1 and EURUSD D1 provide examples of how to conduct this mapping process.

This mapping process is an essential task of the workflow that ensures an appropriate underlying divergent logic is represented in the sub-portfolio. It will also provide confidence in the ability of your compiled sub-portfolio to outperform during divergent market conditions and preserve capital during non-divergent conditions. Note that we conduct this mapping at the single strategy level when selecting validated collections, and also at the sub-portfolio level.

Step 12. Run this workflow process across your entire universe of instruments (in this instance 25 instruments) and capture MT4 results in your ATS Compiler database. We have prepared a two-part demonstration series on how to capture this information in your ATS Portfolio Compiler. For further information on this process, please refer to the YouTube videos linked below.

The ATS Portfolio Compiler Suite of Tools – Part 1 of a 2 Part Series

The ATS Portfolio Compiler Suite of Tools – Part 2 of a 2 Part Series

Step 13. Generate a risk-weighted portfolio for your 25 instruments for the date period up to 31 Dec 2015 in the Compiler.
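The post does not specify how the Compiler weights risk, but one common scheme — offered here purely as an assumption for illustration — is inverse-volatility weighting of the instrument return streams:

```python
import statistics

def inverse_vol_weights(return_streams):
    """One common risk-weighting scheme (an assumption -- the post does not
    specify the Compiler's method): weight each instrument's return stream
    by the inverse of its volatility, normalised to sum to 1."""
    inv = [1 / statistics.pstdev(r) for r in return_streams]
    total = sum(inv)
    return [w / total for w in inv]

quiet = [0.1, -0.1, 0.1, -0.1]  # low-volatility stream
wild = [0.4, -0.4, 0.4, -0.4]   # 4x the volatility -> a quarter of the raw weight
weights = inverse_vol_weights([quiet, wild])
print([round(w, 2) for w in weights])  # [0.8, 0.2]
```

Whatever the exact scheme, the goal is the same: no single volatile instrument should dominate the risk of the 25-instrument portfolio.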

Step 14: Compare the risk-weighted portfolio result against the Trend Following Index or an appropriate industry benchmark to evaluate the result.

For example, below are the results of the 25-instrument risk-weighted portfolio up to 31 March 2019, produced from data mining efforts using 30% OOS data for the period 1st Jan 2000 to 31 Dec 2015.

The portfolio displayed strong performance up to 31 October 2016 and thereafter stagnated. It is tempting to conclude that results post 31 October 2016 were symptomatic of weak data mined strategies that failed to live up to expectations; however, such assumptions need to be compared against industry benchmarks of trend following performance over the same period.

The graph above compares the data mined portfolio against the TF Index, which comprises over 50 divergent fund programs and is used for benchmarking purposes. Results post 31 October 2016 were similar across the entire index, suggesting that periods of divergence during this date range, across a broad array of asset classes, were few and far between. Furthermore, we see that the data mined portfolio favourably protected investment capital during this period.

Step 15: At yearly intervals we conduct the same exercise, following steps 1-14, extending the data range a further year and re-validating existing strategies over the extended range. The re-validation process retains solid performers and removes strategies that no longer meet the validation criteria. In addition to retained strategies, we also data mine for new strategies arising from the extended date range and include them in the revised portfolio. This way, the portfolio progressively adapts over time.


Well, that about wraps things up regarding the workflow processes we undertake to generate portfolios of divergent strategies using EA Studio. We hope this post has been helpful for those trying to find their way.

It is still very early days in our assessment of the efficacy of this data mining approach, but we are hopeful of a positive result over time that mirrors the performance of similar divergent systematic fund programs in the industry. 

Trade well and prosper

Rich B

2 thoughts on “Fossicking for Divergent Strategies using EA Studio”

  1. Thank you for this beautiful post. I tried your method even though I have always built convergent portfolios.

    Unfortunately, out of 100 strategies, none had fewer than 125 trades over 10 years (my available data).

    1. Hi Edu. Thanks for the kind words. Most of the strategies in my divergent portfolio trade fewer than 100 times over a 10-15 year period. The mapping process is incredibly important, as with small trade sample sizes you need this method to ensure that your strategies are robust and only trade during strongly trending periods. Best of luck with your data mining efforts. 🙂 Regards Rich
