In the Beginning there was Trend Following – A Primer – Part 17

Trend Following Primer Series – The Recency Phase – There is no Permanence, Only Change – Part 17

Primer Series Contents

There is no Permanence, Only Change

In our prior Primer we undertook a process in our Workflow that sought to identify robust strategies from a large list of possible trend following systems that had been logically designed by our Engineers. We found plenty of them: say 4,000 solutions that met our validation criteria (about 40 markets x 100 solutions).

Chart 39 below provides an example of 40 return streams, drawn from a 50 year historical data set, that have passed the prior Robustness Phase. They comprise both breakout models and various other forms of trend following model. You will notice that there are discrete points in the time series where positively correlated step-ups occur. These are the moments when our models experience significant 'outlier' moves. Between these step-ups there are non-correlated periods of stagnation or slight drawdown, where noise and mean reversion contribute to drawdowns.

Chart 39: Example of 40 return Streams for EURUSD brought forward from the Robustness Phase for further Testing
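To make the 'correlated step-up' observation concrete, here is a minimal sketch (not our production code) that flags those moments, assuming the 40 return streams are held as daily non-compounded returns in a pandas DataFrame called streams, one column per system; the 3-sigma threshold is purely an illustrative choice.

```python
import pandas as pd

def flag_correlated_step_ups(streams: pd.DataFrame, z_threshold: float = 3.0) -> pd.Series:
    """Flag days where the cross-sectional average daily return is unusually large,
    i.e. many systems step up together on an 'outlier' trend day."""
    cross_mean = streams.mean(axis=1)                      # average return across all systems per day
    z = (cross_mean - cross_mean.mean()) / cross_mean.std()
    return z > z_threshold                                 # True on correlated step-up days

# Usage (assuming 'streams' has been loaded from your backtest output):
# step_ups = flag_correlated_step_ups(streams)
# print(step_ups.sum(), "correlated outlier days identified")
```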

Robustness expresses a return stream's ability to survive the adverse market conditions of the past, but the term doesn't end there. There is more to it.

Robustness is also a statement of 'responsiveness to change'. It relates to how well a strategy is configured to (or correlated with) its current environment. The prior test demonstrates survival across a very long data history in which many unfavourable market conditions contributed to adverse drawdowns. That is great to know, but it is not sufficient. If any of those unfavourable conditions arise again, we can be confident that our systems will navigate them, but such a long snapshot of past environments does not tell us how fit these strategies are right now in responding to current adversity.

Robustness is more than just a Historical Test. It is also a Statement of Fitness

So if we imagine an environment that dynamically changes over time, we can also imagine that our robust strategies need to be periodically refreshed with newer risk mitigation methods that respond to this environmental change. In other words, our systems need to adapt to that changing state through 'selection processes'.

You see, when we look at other complex systems such as natural systems, our perception of what it means to be a robust species is the ability of that species to respond to the changing environment around it. The environment changes, and a robust species needs to cope with that change. Robustness is therefore tied to the 'close' relationship (or the correlation) between the environment and a species. If a species significantly lags in its coping state compared with the altered state of the environment, it is exposed to far greater risk in the current environment. A species needs to be strong and carry only limited survival risk in the current environment so that it is capable of absorbing new risk relating to future uncertainty.

We need a similar coping strategy for our trend following systems: a way to periodically refresh them to minimise the current risk in their architecture. Our trend following systems need to be strongly positively correlated with current trending conditions. The current configuration must not be warehousing risk associated with the current state of the market, so that it can fully absorb future risk.

Trends in any complex system relate to transitions (or change events) in that complex system, and the nature of the trending condition is also subject to change over time. We therefore find that the trends of yesteryear are significantly different to current trends, yet the trends of yesterday are more similar to the trends of tomorrow.

So our robustness statement for our systems also needs to reflect this 'correlated' relationship: the trend following systems of yesteryear are significantly different from current trending systems, yet the trending systems of yesterday are more similar to the trend following systems of tomorrow.

This principle of correlation between agent and system state is found in all complex adaptive systems and relates to how 'selection processes' work in a complex system. It relates to how 'fit' an agent currently is in that system to meet the challenges of tomorrow.

Our Progress So Far

So, in our workflow process so far, we have say 40 markets x 100 solutions sitting in front of us that have passed the prior step. Their robustness has so far been assessed using a historical data set comprising an account of what is known. Now we want a process that is adaptive in nature, whereby more recent data that has an adverse effect on trend form, characterised as whipsaws (expressed by current volatility, noise and mean reversion), plays a stronger role in the selection process.

This phase of robustness testing, referred to as 'The Recency Phase', provides an adaptive component to our Workflow, whereby our process of system selection 'adapts' over time in response to new trending information that arrives as 'future data' and is injected into the workflow process.

Our prior 'Robustness Phase' was undertaken using historic data, so the conclusions reached about each system's robustness were statements about that system's overall response to the entire data set. However, we know that this statement of robustness is going to be challenged in the future as new, previously unseen data dilutes this 'historic robustness statement'.

We therefore need to undertake this entire workflow process at annual intervals. Each time we run the workflow, a year's worth of extra data for our universe is injected into the process, including any new information that can assist in keeping our systems fit by releasing adverse risk. By replacing our tired systems with newly vitalised versions we keep the portfolio sharp.
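As a rough sketch of that annual cadence (the phase functions here are placeholders for processes described elsewhere in this series, not actual ATS code):

```python
import pandas as pd

def annual_refresh(price_history: pd.DataFrame, as_of_year: int,
                   robustness_phase, recency_phase):
    """Re-run the workflow on data extended by one more year and return the
    new generation of systems that will replace the tired ones."""
    data = price_history[price_history.index.year <= as_of_year]  # inject the newest year of data
    survivors = robustness_phase(data)                            # long-horizon survival test (pre-cost)
    return recency_phase(survivors, data, lookback_years=5)       # current-fitness test (post-cost)
```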

We want a Conditional Outcome of both Historic Robustness and Current Fitness

So, to allow for an adaptive element in our workflow, we not only want to select systems that are historically robust (they have already passed the prior Robustness Phase); from this selection of say 40 markets offering 100 solutions each, we then want to select those historically robust systems that also demonstrate current fitness. This additional requirement reduces the selection to say 40 markets offering 50 solutions each (2,000 or so total solutions).

Both the prior Robustness Phase AND this Recency Phase of our workflow are contingent processes.

In our processes here at ATS, we adopt a 'recency window' using a 5-year look-back, but you can set this window to any desired look-back. We do not want it to be too 'adaptive'; we simply want to ensure that a degree of change is imparted into our process.
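A minimal sketch of this contingent filter, assuming robust_streams is a DataFrame of daily non-compounded return streams (one column per system, date-indexed) that have already passed the Robustness Phase; the positive-net-result rule used here is just a stand-in, as the actual pass criterion (positive post-cost MAR) is covered later in this Primer:

```python
import pandas as pd

def recency_filter(robust_streams: pd.DataFrame, lookback_years: int = 5) -> pd.DataFrame:
    """Keep only systems that passed the Robustness Phase AND look fit over the
    most recent window - a contingent, two-stage selection."""
    cutoff = robust_streams.index.max() - pd.DateOffset(years=lookback_years)
    recent = robust_streams.loc[robust_streams.index > cutoff]   # the 'recency window'
    fit = recent.sum() > 0                                       # stand-in rule: positive net result over the window
    return robust_streams.loc[:, fit]                            # robust AND currently fit
```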

So as trending conditions change over time, from say shorter term trends to longer term trends, or less volatile trends to more volatile trends, we will clearly lag in our response, but not by far. We don't want to be adapting to noise, but rather to the slower changes in trend form.

We view this process of 'sharpening our portfolios' as a method that ensures we do not attempt to 'time' the market. By keeping the portfolio razor sharp through an annual 're-balancing process' we avoid having to make decisions such as 'is the portfolio in too much drawdown?' and 'when should I turn off under-performing systems?'. We integrate a process into our Workflow that ensures these decisions are made for us, without us having to interfere with discretion. It is the discretionary over-ride that is frequently the reason for long term poor performance.

Think of it this way. The moment a predictive mindset wants to 'change' is usually the worst time, when drawdowns have already been incurred that compromise geometric returns. Avoid that impact by never letting your portfolios become severely blunt in the first place.

Bacteria and viruses are perhaps the most adept creatures at responding to change. They don't predict change (attempt to 'time' change). They already have mutated strains sitting in their populations, ready for it at all times. If they waited to change, the slow response time would probably be too late to perpetuate their existence.

This Recency Phase is likened to processes of Natural Selection whereby we reduce our selection of robust systems to those that are more fit for immediate purpose. They are more likely to carry less warehoused risk and possess system characteristics that are suited for the near future.

Each year that we undertake the workflow process, we therefore replace all our portfolio systems with a new generation of fitter candidates, but we allow systems with existing active trades to play out before they are deleted from our portfolios.

Our historical robustness test used a 30-plus year look-back, and our recency test uses a 5-year look-back drawn from a component of this same historical data set. So we are reusing a portion of this data. This poses a problem of 'reuse of data' which we need to understand, as it can introduce a bias into our selection process.

A Word on the Dangers of the Reuse of Data and Why we Insist on Using Large Data Sets for Our Decision Making Processes

We need to take a slight detour now to explain the potentially dangerous bias that arises from the reuse of data for testing, and more specifically from any process of deciding between alternatives when randomness has a say in the matter. The danger is a subtle selection bias that creeps in when we pick the best candidates from amongst such alternatives.

Unlike alternative data mining methods, we do not break our data set into In Sample and Out of Sample segments to design and then validate our systems. We need all the data we can get our hands on to test the robustness of our systems, given the low trade sample size per system. We avoided the need for In Sample data by designing our systems from scratch, without needing data to establish their efficacy.

This is great, as it avoids our propensity to over-fit our solutions through any form of optimisation process, but using an entire data set to test for robustness can lead to problems when we then re-use that data more than once. We do in fact need to reuse 5 years' worth of our historical data for our Recency Phase.

The bias that unfolds from the reuse of data is a 'selection bias that creeps into our decision making process' whenever there is a component of randomness in the results. A random distribution of returns can produce a range of possible paths, from positive equity curves to negative equity curves and anything in between. If there is randomness in our 5-year look-back that influences the Recency Phase (and there is likely to be), then by selecting the equity curves with a positive result and eliminating the other random curves that are unfavourable, we are biasing the result with our selection process.

You see, our decision has now introduced a Bayesian Bias into the selection process….just like the Monty Hall problem. Warning…if you want to get your head messed up then go ahead and click on that Monty Hall Problem.

A Visual Way of Understanding the Problem of Selection Bias

Let's understand this better visually. We all now understand that an equity curve, being a derivative expression of underlying market data, comprises elements of non-randomness (signal) and elements of randomness (noise). In a 'mostly efficient' market, the randomness in the equity curve is considerable and should never be overlooked. For trend followers this is particularly pertinent, as a great deal of our long term equity curve is a feature of noise, with only a few moments where 'outlier anomalies' reside in an otherwise noisy curve.

In Primer 3 we showed an example of an equity curve of a trend following program derived from a trade sample size of 500 trades (refer to Chart 40 below). The Trend Following Program (in red) was found to have a weak edge, but only when examining that equity curve over a far greater trade sample size (refer to Chart 41 below, where we included this weak-edge Trend Following system in a sea of otherwise random equity curves).

Chart 40: Trend Following Program embedded in a sea of Random Equity Curves (Trade sample size 500)

Chart 41: Same Trend Following Program embedded in a sea of Random Equity Curves (Trade sample size 5000)

So let us assume that we only had a small trade sample size of 500 trades (as in Chart 40) and had to decide which equity curves to choose for our Trend Following Portfolio from amongst these alternatives. Let us say that we could only pick 5 from this sea of alternatives, and we decided to use profit over the time series as our prime selection criterion. In Chart 42 we therefore select only the top 5 from this list of available alternatives.

Chart 42: Selection of Top 5 Equity Curves from a Trade Sample of 500

If you refer to Chart 42, having selected the top 5 return streams, you will note that they do not include the return stream of the Trend Following system with an edge that was displayed in red in Charts 40 and 41. You have selected only random return streams.

This is why you find that when you take these selected candidates to the live trading environment in your Diversified Portfolio, they immediately fall off a cliff once implemented. They only ever were random equity curves, and such curves have no enduring projective power into an uncertain future.

However, if we had made this decision with a trade sample size of 5000 trades (Chart 43), we would have avoided that problem. We would then have selected the system with the weak edge (red), albeit alongside a few random candidates as well.

Chart 43: Selection of Top 5 Equity Curves from a Trade Sample of 5000

So hopefully you now agree that using large trade sample sizes is always preferable when making decisions between alternatives. In fact, as you can now imagine, we can evaluate the power of our edge by comparing our Trend Following equity curves against random equity curves over large trade samples.
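Here is a small, self-contained simulation in the same spirit as the Chart 42/43 experiment (the numbers are made up purely for illustration): many zero-edge random trade streams plus one weak-edge stream, with the top 5 picked on total profit. At 500 trades the weak-edge system is rarely found; at 5000 trades it usually is.

```python
import numpy as np

rng = np.random.default_rng(42)

def top5_finds_the_edge(n_random=200, n_trades=500, edge=0.05, k=5) -> bool:
    """Generate n_random zero-edge trade streams plus one weak-edge stream,
    pick the top-k by total profit, and report whether the edge stream was picked."""
    random_curves = rng.normal(0.0, 1.0, size=(n_random, n_trades))  # pure noise, no edge
    edge_curve = rng.normal(edge, 1.0, size=n_trades)                # small positive expectancy per trade
    totals = np.vstack([random_curves, edge_curve]).sum(axis=1)      # profit over the whole sample
    top_k = np.argsort(totals)[-k:]                                  # indices of the k most profitable curves
    return n_random in top_k                                         # the edge stream is the last row

for n_trades in (500, 5000):
    hits = sum(top5_finds_the_edge(n_trades=n_trades) for _ in range(100))
    print(f"{n_trades} trades: weak-edge system in the top 5 in {hits}/100 runs")
```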

Back to our Recency Test

Okay so now that we have made this slight detour, we return to our description of the Recency Phase. Have we adopted a slight Bayesian bias in our process by reusing 5 years of data for our recency test?

Well yes, slightly, but this Bayesian selection bias is reduced by our use of a fairly long look-back (5 years), where randomness has less say, and we feel it is worthwhile to take this slight statistical risk. Furthermore, when considering our selection process including the Recency Phase, you will note that we are using the entire long-term series of equity curves in our decision making. In any case, this form of bias is inevitable in any selection process, and ultimately, given our finite capital, we must make selection decisions between alternatives.

Fortunately, the statistical bias we have recognised in our process is significantly lower than the more pronounced selection bias arising within alternative data mining processes: the dreaded 'curve fitting'.

It is exceptionally difficult to avoid statistical biases in data mining, but we do our best.

Being aware of the problem is important as it allows us to identify it, consider ways to reduce it and note any residual possible bias in our assumptions. Many traders who adopt powerful 3rd party data mining software are blissfully unaware of this issue.

Inclusion of Brokers' Trading Costs

In our prior historical Robustness Phase we undertook our testing on a pre-cost basis, but now, with our 5-year Recency Phase, we introduce brokers' trading costs into our assessment, such as commissions, interest holding costs (e.g. swap) and slippage assumptions. This ensures that realism starts to enter our process.

We conservatively apply these costs to deliberately understate results, so that we are unlikely to get any surprises from our broker's claws when we enter the live trading environment.

By including trading costs in our assessment, we effectively lift the positive expectancy threshold that strategies must clear to pass this phase. Given that our solutions only offer a weak edge, this post-cost hurdle is essential: it ensures that whatever weak edge resides in our systems is sufficient to accommodate these additional conservative costs.
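A hedged sketch of the idea, with illustrative cost figures only (the commission, swap and slippage numbers below are placeholders expressed in the same fractional-return units as the trades, not ATS's actual broker costs):

```python
def apply_trading_costs(gross_trade_returns, commission=0.0001, swap=0.00005, slippage=0.0002):
    """Deduct a conservative, fixed per-trade cost estimate from each gross trade return."""
    cost_per_trade = commission + swap + slippage
    return [r - cost_per_trade for r in gross_trade_returns]

def passes_post_cost_expectancy(gross_trade_returns, **cost_kwargs) -> bool:
    """A weak pre-cost edge only survives if average expectancy stays positive after costs."""
    net = apply_trading_costs(gross_trade_returns, **cost_kwargs)
    return sum(net) / len(net) > 0

# Example: a weak 0.05% average edge per trade still clears illustrative costs of ~0.035%
# print(passes_post_cost_expectancy([0.002, -0.001, 0.0005, 0.0015, -0.0005]))
```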

Validation Method Used by this Process

So having tested all our robust systems during this recency phase we are now at the pointy end of this exercise where we need to select suitable candidates to move to the next phase of the workflow process.

We once again base our selection of suitable candidates from the Recency test on MAR as our preferred metric. The recency test is non-compounded and normalised like our Robustness Phase, but this time we also include brokers' trading costs in the 5-year return streams that are generated.

Chart 44: Example of 20 non Compounded Return Streams for EURUSD that have passed the Robustness Test AND Recency Test

Those that pass this test with a positive MAR then move on to the next phase of the process.
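A minimal sketch of the pass metric, assuming each 5-year return stream is a daily, non-compounded (additive), post-cost pandas Series. Note that MAR is conventionally CAGR divided by maximum drawdown; this additive variant (annualised average return over the maximum drawdown of the summed curve) is an assumption made to match the non-compounded treatment described above:

```python
import numpy as np
import pandas as pd

def mar_ratio(daily_returns: pd.Series, periods_per_year: int = 252) -> float:
    """Annualised average return divided by the maximum drawdown of the
    non-compounded (cumulative-sum) equity curve."""
    equity = daily_returns.cumsum()              # additive equity curve
    max_dd = (equity.cummax() - equity).max()    # deepest peak-to-trough giveback
    annual_return = daily_returns.mean() * periods_per_year
    if max_dd == 0:
        return np.inf if annual_return > 0 else 0.0
    return annual_return / max_dd

# Pass rule used in this phase: keep return streams where mar_ratio(...) > 0
```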

If you look closely at Chart 44, you will notice that not only do these return streams meet the long term robustness test, but the last 5 years of each return stream also offer very good recent performance with minimal drawdown. Yes, they have been selected for their recency attributes and are hence 'curve fit' for recency, but we are not expecting them to continue on this recency trajectory in the future. We don't know the future, but at least each of these return streams is now fully 'fit' for function to navigate an uncertain future.

So let's say, for the purposes of example, that we are now left with 40 markets each with 50 systems (2,000 total return streams) that are robust and fit for purpose (aka responsive to change). We are now ready to progress to the compilation phase of the Workflow, where we use correlation methods to blend return streams together, first by individual market as Sub-Portfolios and then across markets as a Portfolio.

Well, my head hurts now. This was a tough Primer and it took many attempts at putting pen to paper for the sake of coherence, but you will be pleased to know that it gets back to good old simple trend following from here on in, and the fun stuff of sub-portfolio compilation commences. This is where we see the fruits of all this philosophy and applied practice unfold.

So stay tuned to this series.

Trade well and prosper

The ATS mob
