In the Beginning there was Trend Following – A Primer – Part 15

Trend Following Primer Series – Put Your Helmets On, It’s Time To Go Mining – Part 15


Put Your Helmets On, It’s Time To Go Mining

In our prior Primer, we introduced readers to the data mining method we deploy at ATS, which undertakes a five-step workflow process to interrogate market data using long/short trend following systems that target outliers, which are known to reside in the left and right tails of the distribution of market returns.

The workflow process represents the procedural steps in our Data Mining enterprise that we use to test our hypothesis.

Our five-step workflow is described below:

Defining the Scope of Our Experiment

We can only commence the workflow process after we have defined the scope of our experimental test and obtained the stockpile of resources (market data) from which we will be undertaking our processes. So, in this preliminary phase, before we get the wheels of our workflow process whirring, we need to collect data, and I mean lots of it, spanning vast stretches of time, geography and asset classes.

This data set, upon which we will be conducting our experiment, therefore defines the scope of the testing universe from which we want to draw conclusions. Now remember that this is not the actual reality of our financial markets, as we must restrict our universal scope to what can realistically be assessed with the limited resources available to our factory. However, our stockpile needs to be sufficiently representative of ‘the reality out there’ so we can make pertinent conclusions about that larger reality.

Now does this mean that we need to define, up-front, the universe we will actually be trading with our diversified portfolios? Not at this stage. This just describes the pool of data we draw from. Ideally that pool is diversified across geographies, asset classes and timeframes, so that our workflow can derive a portfolio from part of this spectral diversification based on how the return streams of this vast array of possibilities consolidate together, letting the ‘data do all the talking’. However, our hypothesis also demands that we stretch far and wide.

Our hypothesis states that our method of extracting alpha from fat tailed markets is applicable to ANY liquid market, which is an unusual call to make for a predictive mindset that likes to target a specific market by virtue of its own unique price action signature. But it makes sense if we are talking about fat tailed market environments, which by their very definition are unusual, unpredictable events.

We need all the data we can get our hands on, within our resource constraints, as we are targeting the possible causal relationship between unpredictable rare events (fat tailed environments) and the success of our trend following models. We know fat tailed environments exist, for in Primer 4 we saw them explicitly recorded in a vast array of markets spanning asset classes and timeframes.

But we especially need very large data sets spanning a wide universe of liquid markets, because our hypothesis is couched in terms of the benefits that trend followers can receive from specifically exploiting these anomalous phenomena. This puts a different slant on our whole data mining exercise compared with the conventional logic applied to ‘predictive data mining methods’ and entirely flavours the way we attack our problem as trend followers.

Conventional practice in quantitative science suggests that we remove anomalous data from our data sets before we interrogate them, given the material impact these outliers can have on the outcomes of an experiment. But here we are demanding that we include them, as we believe they are a trend follower’s ‘bread and butter’.

So, you can now understand why we get feisty and say…”hey, you quant guys, you are deliberately diluting the potential power of the entire trend following process by excluding the impact of outliers from your data sets. No wonder you have trouble seeing or addressing these anomalies.”

In fact, this entire Primer series so far has been preparing you, or priming you, to think differently from conventional logic and applied practice. For good reason. Conventional wisdom fails to consider the impact that outliers have on the trading endeavour. This Primer series has been setting your mind up to throw away the conventional quantitative tools and metrics that are simply inappropriate for the non-linear, exotic world we predate in.

It is critical that our workflow process is honed for our specific purpose, and that we do not include any redundant or worthless process that does not assist our validation effort. So, you will not find reference to the processes of Data Sample Treatment, Optimisation, Monte Carlo Testing or Walk Forward Testing in our factory. You also will not find the use of validation metrics such as Sharpe or Sortino in our instrument panels.

Our workflow is specifically configured to respond to our hypothesis and to ensure that any causal linkage between fat tailed market data and our trend following systems/portfolios is not diluted or eliminated by our processing method.

Having a purpose-built factory, we then organise our processes in a systematic and logical manner, undertaking a sequence of steps that progressively evaluates how each of our trend following systems responds to our defined universe of data and then, like a manufacturing plant, creates a global portfolio which can hopefully validate our hypothesis.

So, we have a defined universe of market data (our resources) waiting in stockpiles outside our factory, and we have some clever engineers constructing their elegant trend following models inside the factory (which is the first step of the workflow). We will then be rigorously testing this ensemble of designs using our ‘data stockpile’ to see how they stack up and what pops out at the end of the data driven testing process.

Each step in the workflow process progressively narrows in on the potential edge that exists within the ensemble created during the initial design phase.

Process 1 – Design Phase

Now our engineering division comprises strange folk with weird eyes and crazy expressions and we let them do their stuff. We occasionally throw them a bone or two to keep them going on into the night, but generally we leave them to their own devices with only the strictest ‘Golden Rules’ forged from the scriptures that they must apply to every ingenious monster they make.

….and then we leave these strange denizens of the dark on their own while they conspire and create their fiendish devices.

They then come back to us with an ensemble of coded system designs which have been configured to meet these golden rules of trend following and can be classed into two different groups defined by their entry method.

  • Simple Breakout Models; and
  • Other Simple Trend Following Models

We need to classify the models into these groupings because the intent of each grouping is to respond to trend following conditions in a different way.

Simple Breakout Models are used to ensure we capture any significant trending condition that may become an outlier, so that we do not miss any opportunity. However, the issue with breakout models is that a momentum breakout is typically an explosive and volatile affair, where the design needs to give lots of room to avoid ‘the whipsaw effect’ during these turbulent times.

The breathing room allowed for in the design unfortunately dilutes the reward to risk relationship of the model. Yes, we catch all trends using breakouts, but with diluted models.

But our creative engineers have provided us with many different breakout designs, drawing on a vast array of entry breakout techniques such as Donchian Channels, consolidation indicators and so on.

Other Simple Trend Following Models target an array of trend segments aside from breakouts and, depending on their architecture, can capture many different forms of trending condition, some of which provide exceptional reward to risk relationships.

Once again, the diversity of trend following entry design is evident in the array of entry techniques provided. Our clever engineers have used a variety of different classic trend following indicators such as moving average crossovers, standard deviation channels etc.
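
To make the two groupings a little more concrete, here is a minimal sketch (in Python with pandas) of one entry from each camp: a Donchian channel breakout and a moving average crossover. The lookback values are illustrative assumptions only, not the settings our engineers actually use, and for brevity the Donchian version works from closing prices rather than the classic highs and lows.

```python
import pandas as pd

def donchian_breakout_entry(close: pd.Series, lookback: int = 50) -> pd.Series:
    """Breakout grouping: go long when price closes above the highest close of the
    prior `lookback` bars, short when it closes below the lowest. Returns +1/-1/0."""
    upper = close.rolling(lookback).max().shift(1)   # prior channel top
    lower = close.rolling(lookback).min().shift(1)   # prior channel bottom
    signal = pd.Series(0, index=close.index)
    signal[close > upper] = 1
    signal[close < lower] = -1
    return signal

def ma_crossover_entry(close: pd.Series, fast: int = 50, slow: int = 200) -> pd.Series:
    """Other trend following grouping: long while the fast moving average sits above
    the slow one, short while it sits below. Returns +1/-1/0."""
    fast_ma = close.rolling(fast).mean()
    slow_ma = close.rolling(slow).mean()
    signal = pd.Series(0, index=close.index)
    signal[fast_ma > slow_ma] = 1
    signal[fast_ma < slow_ma] = -1
    return signal
```

Both entries are directionally agnostic and purely reactive: they take whatever direction the market has already moved in. Where the groupings differ is in how early, and how noisily, they declare that a material trend is underway.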

Diversification of Entry Benefits

Diversification of entry condition is a specific objective we are seeking for a number of reasons:

  • We are uncertain of the exact form a trend will take in a fat tailed environment and we need to respond to a myriad of possible forms and volatility profiles;
  • Diversification of entry allows us to have many different systems attacking different aspects of a trending condition. This therefore allows us to progressively scale up in our committed position size as a trend matures; and
  • Diversification of entry can offer correlation and cointegration benefits when we compile our ensemble of systems into a diversified portfolio.

Outliers are unpredictable. Given their anomalous nature we have so few examples of them that we just can’t neatly classify them into types. They can take any form and any volatility.

Chart 32 below demonstrates a diversified suite of 15 simple trend following designs applied to USDCAD between 1992 and 1995. During this clear period of long trending bias in the underlying market data you can see how each of the varying designs has targeted different aspects of the trending data series. I apologise for the colour scheme of Chart 32 as it makes it hard to see the discrete system trade trajectories….but squint hard.

Having a single trending solution significantly narrows your prospects of extracting the juice out of trending environments.

Chart 32: Diversified Suite of 15 Simple Trend Following Systems used to Capture various segments of a Trending Condition

Trends can vary considerably in form and character over the course of a trending data series and, depending on the granularity of your method, having a single solution to capture a trending condition is woefully inadequate. When markets exhibit trending conditions, we need to take full advantage of all their trending behaviour and squeeze the trend lemon dry. Diversification of system entry, whereby each system has a different way to capture an element of a trending condition, is the way we achieve this outcome.

It takes only a small degree of adverse price movement to trigger a tight trailing stop. The market trend may then simply continue on its merry way after this small retracement, but we are left stranded and have to work out a re-entry; without diversification of entry, we only have one solution to address it.

A diversified approach to system design, whereby we deploy many different forms of simple trend following system that target different aspects of a trending condition, therefore allows us to capitalise on a trend’s ability to significantly vary in form over the trending price series.

When it comes to the portfolio compilation phase of our workflow process, where we start to compile the discrete return streams of each of our trend following systems into a united front to tackle fat tailed trending conditions, we need to mitigate adverse volatility in our portfolio and capitalise on any favourable volatility to take advantage of the compounding effect.

The correlation and cointegration benefits offered by system diversification, and in particular by diversification of entry condition, are a major tool at our disposal for generating superior geometric returns.
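
As a deliberately stylised illustration of that compounding point, the sketch below (Python, with invented numbers) combines two hypothetical return streams that have identical arithmetic means but perfectly offsetting volatility, and compares the terminal growth of each stream on its own against a 50/50 blend rebalanced each period.

```python
# Two hypothetical return streams with the same arithmetic mean (+5% per period)
# but perfectly out of phase with each other (correlation of -1).
stream_a = [0.20, -0.10] * 10          # +20%, -10%, repeated over 20 periods
stream_b = [-0.10, 0.20] * 10          # the mirror image of stream A

def compound(returns):
    """Terminal growth of $1 compounded through a return stream."""
    growth = 1.0
    for r in returns:
        growth *= 1.0 + r
    return growth

# A 50/50 blend, rebalanced each period, earns the average of the two returns each period.
blend = [(a + b) / 2 for a, b in zip(stream_a, stream_b)]

print(f"stream A alone : {compound(stream_a):.2f}")   # about 2.16 (x1.08 every two periods)
print(f"stream B alone : {compound(stream_b):.2f}")   # about 2.16
print(f"50/50 blend    : {compound(blend):.2f}")      # about 2.65 (x1.05 every period)
```

Each stream earns the same arithmetic return, but the blend’s lower volatility means less of that return is lost to variance drag when compounding, which is precisely the geometric benefit we chase when we combine lowly correlated return streams.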

Standardised Design Features to Manage Risk in Fat Tailed Environments

So now we have a diverse ensemble of entry methods, but what about the features of risk management that are embedded in the architecture of each design?

We find that every entry design has a standardised logic encoded within it that can be applied to any market or timeframe, and that risk is treated in the same way by all of them.

We find that each design adopts the Average True Range (ATR) to define an initial stop, a trailing stop, AND a standardised method to define risk in dollars. So it does not matter what market or timeframe we apply our systems to: they all allow for a standard dollar risk bet per design solution. In their application within a diverse portfolio, they are all configured equally in terms of their risk contribution to the portfolio. That is great and just what we need.

We also find, tucked away in the diverse range of designs, measures that specifically respond to the uncertainty of fat tailed market environments:

  • No profit targets are applied in the solutions, allowing for potentially unlimited reward under fat-tailed conditions;
  • The lookbacks used by the entry indicators are generous, ensuring that the solutions avoid the propensity to over-trade during noisy or mean reverting conditions and only enter a trending condition once it becomes material in nature;
  • ATR based risk management methods allow a wide range of volatility settings to be applied, capturing the many different volatility profiles a trending condition can take (a sketch of this ATR-based standardisation follows below).
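
To illustrate the ATR-based standardisation described above, here is a minimal sketch of how an initial stop and a fixed dollar risk per trade might be normalised with the ATR. The 20-bar lookback, the 3 ATR stop multiple and the $1,000 risk figure are placeholder assumptions for illustration, not the values used in our actual designs.

```python
import pandas as pd

def average_true_range(high: pd.Series, low: pd.Series, close: pd.Series,
                       lookback: int = 20) -> pd.Series:
    """Classic ATR: the rolling mean of the true range."""
    prev_close = close.shift(1)
    true_range = pd.concat([high - low,
                            (high - prev_close).abs(),
                            (low - prev_close).abs()], axis=1).max(axis=1)
    return true_range.rolling(lookback).mean()

def size_long_trade(entry_price: float, atr: float, dollar_risk: float = 1_000.0,
                    stop_atr_multiple: float = 3.0, point_value: float = 1.0):
    """Place the initial stop a fixed ATR multiple below entry and size the position so
    that being stopped out loses roughly the same dollar amount on any market or timeframe."""
    stop_distance = stop_atr_multiple * atr
    initial_stop = entry_price - stop_distance
    units = int(dollar_risk / (stop_distance * point_value))
    return initial_stop, units
```

A quiet market with a small ATR receives a larger position and a tighter stop in price terms, while a volatile market receives a smaller position and a wider stop, yet the dollar loss if the initial stop is hit is roughly the same in both cases. That is what lets every design contribute an equal risk bet to the portfolio.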

They are crafty and clever engineers after all so we will feed them.

Our Systems Need to be Causally Connected to the Market

You see, our engineers have realised that it is the market that dictates our fate, and our system needs to be aligned with it. So if we observe a market that is trending or a market that is not trending, then our system performance needs to express that condition. System performance is a derivative expression of the market condition.

A trading system is simply a method we apply to capture a prevailing market condition. Change the market condition and that system will flounder.

Our engineers recognised that they needed to integrate very simple causal logic into their designs to be able to capture trending environments. As it turns out, they had a simple task to perform: design very simple systems that are directionally agnostic (either long or short), are only active during trending market conditions, have a tight stop to minimise the risk of adverse price moves, and have a trailing stop which progressively and snugly follows the overall trend direction.

That is all there really is to the overall design logic of trend following systems.

Chart 33 below provides a brief description of the core design features around which our engineers devised their devilish devices.

Chart 33: Simple Trend Following Core Design Logic

Just four simple design principles to diversify around:

  1. Entry Condition – Diverse simple entry conditions allow us to capture a variety of different material trend forms across any liquid market;
  2. Initial Stop – A normalised initial stop method that allows application across any liquid market, is used for standardised risk allocation and provides a risk release if we are wrong in our trade entry;
  3. Trailing Stop – A normalised trailing stop that cuts losses short at all times during the trade event, from entry through to exit;
  4. Exit Condition – A simple exit condition that provides a signal for that particular system design when it deems ‘the trend to end’.

As you can see these four simple design principles provide the constraints that dictate our success or failure in capturing trending conditions.

All our engineers did was think of a system as a designed container within which price needs to reside for the system to be profitable. If price decides to move outside the constraining parameters of the system we design, then we have a losing trade. While price remains within the envelope of these applied constraints, we remain in the trade.
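
A minimal sketch of that ‘container’ idea is shown below, walking a single long trade forward bar by bar under the four principles. The entry is assumed to have already fired at `entry_index`, and the ATR multiples are illustrative assumptions; here the ratcheting trailing stop also doubles as the exit condition, which is a simplification of principle four.

```python
import pandas as pd

def run_long_trade(low: pd.Series, close: pd.Series, atr: pd.Series, entry_index: int,
                   initial_stop_mult: float = 3.0, trail_mult: float = 4.0):
    """Manage one long trade: exit when price leaves the container defined by the
    initial stop and the ratcheting ATR trailing stop. Returns (exit bar, P/L per unit)."""
    entry_price = close.iloc[entry_index]
    stop = entry_price - initial_stop_mult * atr.iloc[entry_index]   # initial stop (risk release)
    for i in range(entry_index + 1, len(close)):
        trail = close.iloc[i] - trail_mult * atr.iloc[i]
        stop = max(stop, trail)              # the trailing stop only ever ratchets up
        if low.iloc[i] <= stop:              # price has left the container: exit the trade
            return i, stop - entry_price
    return len(close) - 1, close.iloc[-1] - entry_price   # trade still open at end of data
```

Notice that nothing in the loop predicts anything. The trade simply stays open for as long as price remains inside the envelope, and the further the trend carries price, the higher the envelope ratchets.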

If markets do not trend, then our systems do not perform. If our engineers have done their job well, our portfolios will stagnate rather than enter drawdowns during unfavourable conditions, as the designs reduce our propensity to trade during non-trending conditions. But that is by-the-by. No trending condition equals poor or no performance from our correlated systems. In fact, if you observe your performance during unfavourable market regimes and find that you are achieving great results, this is a sure sign that your system is not capturing trends but simply capturing the spoils of the price data that has been presented to it.

This is a symptom that your system has been over-optimised, and curve fit to historical data as opposed to being designed to respond to trending market environments.

It is essential when developing a trading system that you understand the constraining variables of your system and how they ‘map to the market condition’. There is no system that can respond to all market regimes….so having an under-performing system is to be expected when conditions are not favourable to your system variables.

Avoidance of Curve Fitting through a Design First Logic

The design phase is a very important step when using computer power to undertake data mining.

Today there are many 3rd party data mining platforms that make it so easy to generate trading strategies that ‘fit the data’. I call these ‘convenient quantitative disasters waiting to happen’. There is so much frustration among the traders that invest in these systems when they deploy their ‘convenient’ coded algorithmic expressions expecting them to deliver on those eager hopes….and then the strategies simply fall off a cliff when taken into the live trading environment.

So much money is spent on these ‘shiny’ quantitative behemoths, and they still can’t deliver on their promises.

You see, all that the user has done is ‘curve fit’ a solution to data noise, and that solution has no enduring potential when the ‘curve fit’ algorithm is then tasked with navigating an uncertain future.

Here is how the problem starts. Some data mining methods start with no a priori design assumption and randomly generate strategies (or use genetic algorithms) to identify solutions that meet specific performance criteria. For example, the practitioner simply enters the desired return to drawdown ratio, profit factor, minimum number of trades and so on….and then lets the workflow process generate ‘any’ possible solution that meets these criteria. Often solutions are generated that make no intuitive design sense yet can pass the performance validation criteria.

The problem arising from this ‘generic’ method is that there are myriad ways a solution can be generated to meet these performance criteria by simply ‘curve fitting’ to past data, without there necessarily being a ‘causal relationship’ between the generated design and that past data. The solutions may simply be curve fitting to noise where no signal is present. Primer 3 discusses this dreaded term ‘noise’ in more detail.

So how do we avoid this ‘curve fitting’ debacle? We use a workflow method that is far narrower in definition, specifically targeting a particular market condition (tail events), with a design configuration that responds to that condition and ‘causally’ links the solution to it.

The Further Tragedy of Curve Fitting and Why We Need to Keep Systems Simple

Now that we understand that it is the constraints imposed by the system that ultimately decide your fate when trading these markets, you can imagine that the more variables you include in your trading system design, the more prescriptive you become in choosing the specific market conditions you address with your trading system.

Imagine we have the following trending series of daily closing price points of market data.

Chart 34: Hypothetical Plot of Trending Market Data

Now we want to design a trend following system that captures those trending points of data.

We could design a trend following trading system that exactly responded to that past market data, with no error, capturing all the possible profit that existed in that price series. The curve would look like this.

Chart 35: A Curve Fit Response to Trending Data

Chart 35 above is clearly a curve fit response to past market data. If the future exactly plays out in accordance with the plot of past market data, we will achieve a perfect trading result with maximum profit and no drawdown.

What is important to note is that the system plot above does not represent a simple trading system with few variables. If we plotted this curve algebraically, it would represent a complex polynomial function with many coefficients, such as:

y = ix⁵ + jx⁴ + kx³ + lx² + mx + c

This system therefore has six coefficients that need to be fitted to the past data (i, j, k, l, m and c).

So, as we increase the number of variables used in our system design, we more prescriptively ‘map’ our system trading performance to mimic past market data. While this produces a higher correlation between the past market data and our trading performance, it also exposes us to a greater standard error if future market data does not exactly mimic the price action of this historic data set.

If we refer to Chart 36 below, we have plotted a possible future trending series against the historic trending series we used to design our ‘curve fit system’.

The curve fit nature of our trending system designed to closely match past market data, now creates significant error differences when you compare the deviation of results between the two different trending series.

Chart 36 shows the standard error between a single data point of the possible future trending series and the red curve fit result derived from our historical trending series. The error between data points in these different trending series is significant. This error magnifies across the time series when there is not a perfect match between the two discrete time series.

Chart 36: Standard Error between single data points of two trending series

So let us see what happens when we simplify our system so that it responds more broadly, and less prescriptively, to the past market data, as seen in Chart 37.

The simplest algebraic function we can apply that loosely represents the plot of historic market data is a straight line, in this case a regression line representing the line of best fit through the data. Clearly this does not exactly match the plot of past market data, but it does provide a simple representation of many potential trending series. This will be very useful if future conditions do not exactly represent the past.

Chart 37: Simple Representation of a Historical Trending Series

The algebraic expression that represents this curve is familiar to many of us who have studied algebra at school and is represented by the line equation of

y = mx + c

Notice how we now have only two coefficients, m and c, that describe this trending model. With simplicity come fewer variables and greater robustness.

While we lose specificity in our design with simpler models, we significantly increase the range of possible future outcomes within which a simpler model can perform.

So, if I now plot the future data against this simple model (refer to Chart 38), we find that the straight line is still a fairly good representation of the future market data, and the standard error that arises from this new trending data series is significantly lower than that of the curve fit trajectory.

Chart 38: Application of the Simple Linear Model to a Future Trending Series

So, the previous examples clearly demonstrate why it is imperative that we deploy simple models with few data variables to have the greatest chance of navigating future trends that may vary from historic trending data.
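
To put some rough numbers behind those charts, the sketch below fits a fifth-degree polynomial (the ‘curve fit system’) and a straight line (the ‘simple system’) to a synthetic noisy trend, then measures each fit against the underlying trend that actually persists into the future. The data is invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.arange(30, dtype=float)

trend = 100 + 1.0 * x                              # the underlying trend that persists
historical = trend + rng.normal(0, 3.0, x.size)    # the noisy history we design against

# Fit the complex 'curve fit' model and the simple straight line to the same history.
poly_fit   = np.polyval(np.polyfit(x, historical, deg=5), x)
linear_fit = np.polyval(np.polyfit(x, historical, deg=1), x)

def rmse(target, fitted):
    """Root mean square error between a target series and a fitted curve."""
    return float(np.sqrt(np.mean((target - fitted) ** 2)))

print("model                        vs noisy history   vs persisting trend")
print(f"degree-5 polynomial                {rmse(historical, poly_fit):5.2f}              {rmse(trend, poly_fit):5.2f}")
print(f"straight line (y = mx + c)         {rmse(historical, linear_fit):5.2f}              {rmse(trend, linear_fit):5.2f}")
```

The polynomial always hugs the history it was fitted to more closely than the straight line does, but the noise it has memorised becomes pure error against the component of the series that actually endures. The straight line, having memorised far less, degrades far more gracefully, which is exactly the robustness that fewer variables buy.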

Trends come in all shapes and sizes, from parabolic trends to linear trends, and from trends with significant volatility embedded in them to trends with smooth trajectories.

Given that the future is uncertain, future trending environments are likely to adopt a variety of possible forms. We therefore want to avoid overly prescriptive models that conform to a particular class of trend, and instead adopt very simple trend following methods that can capture a broad class of different trending conditions. Having a simple core design significantly improves the robustness of our trend following models in navigating an uncertain future.

Well, that’s it for this Shift. Time to go home, clean up and get ready for your next Shift, where we will be reviewing the Robustness Stage of the Workflow Process.

This is where we take these engineered systems for a workout and blast them with market data to see where any risk weakness resides in these trend following contraptions.

So stay tuned to this series.

Trade well and prosper

The ATS mob
