Trend Following Primer Series – Building a Diversified, Systematic, Trend Following Model – Part 13
Primer Series Contents
- An Introduction – Part 1
- Care Less about Trend Form and More about the Bias within it – Part 2
- Divergence, Convergence and Noise – Part 3
- Revealing Non-Randomness through the Market Distribution of Returns – Part 4
- Characteristics of Complex Adaptive Markets – Part 5
- The Search for Sustainable Trading Models – Part 6
- The Need for an Enduring Edge – Part 7
- Compounding, Path Dependence and Positive Skew – Part 8
- A Risk Adjusted Approach to Maximise Geometric Returns – Part 9
- Diversification is Never Enough…for Trend Followers – Part 10
- Correlation Between Return Streams – Where all the Wiggling Matters – Part 11
- The Pain Arbitrage of Trend Following – Part 12
- Building a Diversified, Systematic, Trend Following Model – Part 13
- A Systematic Workflow Process Applied to Data Mining – Part 14
- Put Your Helmets On, It’s Time to Go Mining – Part 15
- The Robustness Phase – T’is But a Scratch – Part 16
- There is no Permanence, Only Change – Part 17
- Compiling a Sub Portfolio: A First Glimpse of our Creation – Part 18
- The Court Verdict: A Lesson In Hubris – Part 19
- Conclusion: All Things Come to an End, Even Trends – Part 20
Building a Diversified, Systematic, Trend Following Model
So here we are. So far in this trend following story, I have laid out a philosophy about how a certain Trend Follower believes a liquid market behaves. Now I am at the pointy end of the exercise, where I need to validate my ‘hypothesis’ using a rigorous systematic process that assesses what the Market thinks about this ‘crazy idea’.
I want to examine how close my trend following interpretation is to how the market might behave. While digging into the weeds using quantitative methods to assess the degree to which this interpretation may be correct, I need to keep in the back of my mind that the market does not care what I think. The market is far bigger than my interpretation. It is an emergent expression arising from what all participants think.
So, I need to face the ugly truth that I am never going to get it all right. A model is only an interpretation, not the reality itself, so any model I build will carry a large rounding error. But by committing to a diligent scientific process, I plan to keep that rounding error between my model and the reality as small as possible.
If I get a close match between my model and the reality, then I can be happy with my interpretation, and what is more, if this model can demonstrate that I can extract alpha from the market, then I could use this model to build my nest-egg.
So, to convert this process into one that is familiar for a trader, I am seeking to test the validity of my trend trading philosophy in a real market using market data.
Given that my philosophy extends to any liquid market, the validity of the model needs to be tested under diversification. If this process shows that my model has merit when applied to historical market data, then I can convert it into a diversified systematic trend following portfolio that at least ‘fairly well’ describes how the model has fared over history.
So here is the Hypothesis we will be assessing through this exercise.
“That we can construct a Diversified Systematic Trend Following Portfolio that targets the tails of the distribution of market returns (both left and right), is adaptive in nature, and through its application across liquid markets can deliver powerful risk adjusted returns over the long term in a sustainable manner.”
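To make the ‘tails of the distribution’ idea concrete, here is a minimal Python sketch. All the return figures are invented for illustration: it simply splits a return stream into left-tail, right-tail and mid-distribution buckets, the kind of breakdown where a positively skewed trend follower would expect right-tail gains to outweigh left-tail losses.

```python
# Hypothetical daily returns for a trend following strategy (illustrative only).
returns = [0.002, -0.004, 0.031, -0.003, 0.001, -0.005, 0.045,
           -0.002, 0.003, -0.004, 0.052, -0.003, 0.002, -0.006]

def tail_summary(rets, threshold=0.01):
    """Split a return stream into left-tail, right-tail and 'noise' buckets."""
    left = [r for r in rets if r <= -threshold]
    right = [r for r in rets if r >= threshold]
    middle = [r for r in rets if -threshold < r < threshold]
    return {"left_tail": left, "right_tail": right, "middle": middle}

summary = tail_summary(returns)

# A positively skewed trend follower wants the right tail to dominate.
right_sum = sum(summary["right_tail"])
left_sum = sum(summary["left_tail"])
print(right_sum > abs(left_sum))
```

The threshold of 1% is itself a discretionary choice, which rather neatly foreshadows the point made later in this Primer about discretion hiding inside every ‘systematic’ method.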
Now remember that this hypothesis (model) is silent about the ability to extend this framework into an uncertain future, as we have no future data to test it on; we are limited to historical data sets.
We therefore need to undertake this iterative process again and again, reassessing its validity as new market data arrives. At least this process ensures that we keep our models up to date…or our portfolios razor sharp.
In fact, this model for trend following is only part of a bigger model within which we apply constraints, or model limitations, to narrow our experimental focus, so there is always room for making better models at many different scales. Revisions in one model will require revisions in other models, and so on.
So, you end up with a model that specifically responds to the narrow domain of Trend Following, which resides in a more comprehensive model described by complexity science. It is just a complex nested system of models within models, and there is always room for improvement as we expand our domain of understanding.
The reality we face in interrogating this complex system we call the financial markets, is that this modelling we undertake is a never-ending exercise of ‘building better models’ that more faithfully describe the reality out there.
Just when you gift wrap your latest model that most faithfully interprets your reality, then it is back to this process driven exercise to do it all again with additional insights you have obtained from prior modelling.
In our complex adaptive markets, you need to continuously create better models. Our work is never done. But we strive to do the best that we can do.
Why the Scientific Method?
So, I now need to put this discretionary model of how I feel the market behaves to the test, and develop a diversified systematic trend following model that can evaluate ‘how suitable’ this interpretation may be in describing the reality out there.
I now need to describe a mathematical process called Lambda Calculus…
I love being dramatic. We don’t have to go that far…so come back, all you mathematics haters.
But we do need to apply a rigorous systematic process under a ‘Sciencey’ method that can evaluate our hypothesis without letting that ‘pesky brain’ possibly bias what the Market wants to say about this model.
“What does Science have to do with trading?”, I hear from the bleachers. Well, it provides a well-worn, process driven method that:
- outlines the central problem which the model seeks to address (or the hypothesis);
- provides a scope that limits the context within defined bounds (defines the assumptions); and
- then evaluates with rigor the validity of the hypothesis by putting grist to the mill and seeing what the data has to say about the matter.
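The three steps above can be sketched in miniature. Everything in this example is hypothetical: the breakout rule, the synthetic price series, and the frictionless assumptions are stand-ins chosen only to show the shape of the workflow, not the actual method used in this series.

```python
# 1. Hypothesis: a simple breakout rule captures trends in a price series.
def breakout_signal(prices, lookback=3):
    """Long (1) when price makes a new lookback-period high, flat (0) otherwise."""
    signals = []
    for i in range(len(prices)):
        if i >= lookback and prices[i] > max(prices[i - lookback:i]):
            signals.append(1)
        else:
            signals.append(0)
    return signals

# 2. Scope/assumptions: one synthetic trending series, no costs or slippage.
prices = [100, 101, 103, 102, 105, 108, 107, 111, 115, 114, 118]

# 3. Evaluation: put grist to the mill and see what the data says.
signals = breakout_signal(prices)
pnl = sum(signals[i] * (prices[i + 1] - prices[i]) for i in range(len(prices) - 1))
print(pnl > 0)
```

A real validation would replace step 2 with a defined universe of liquid markets and a declared set of assumptions, and step 3 with out-of-sample testing rather than a single in-sample tally.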
Discretion is the Basis for All Models – As Much as We Think Otherwise
Now while our ‘hypothesis testing’ seeks to apply quantitative systematic rigor to the testing process, there is inevitably a degree of discretion in any method undertaken. The most obvious discretionary judgement is the hypothesis itself, which is a human interpretation of how a system behaves (a possible slice of the reality) and not the actual reality.
But there are many others that we will bump into along the way as we undertake our validation process of our hypothesis, such as our choice of universe to test (the limited selection of liquid markets we examine), the parameters we use in our system design etc. The list just goes on and on. It is ‘literally’ littered with discretionary decision making.
So as much as we try to objectify our method using systematic processes, to avoid any propensity to ‘steer or nudge’ the testing result into agreeing with our hypothesis, we unfortunately can’t avoid discretion entirely. This is a lesser problem than that faced by discretionary traders, as systematic application significantly reduces this propensity. But in our method we must declare any discretion applied, stating these decisions as assumptions, so that we can revisit them another day if needed and evaluate any bias that may have influenced the overall result.
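One practical way to ‘declare the discretion applied’ is to record every discretionary choice as an explicit, immutable assumptions object rather than burying parameters in code. The sketch below is only an illustration: every field name and value is a hypothetical example, not the actual configuration used in this series.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TestAssumptions:
    """Discretionary choices declared up front, so they can be revisited later."""
    universe: tuple     # which liquid markets we chose to test
    lookback_days: int  # a design parameter we picked, not one derived from data
    start_date: str     # the history window tested is itself a choice
    notes: str = ""

assumptions = TestAssumptions(
    universe=("ES", "GC", "CL"),  # hypothetical futures symbols
    lookback_days=100,
    start_date="2000-01-01",
    notes="Universe limited to markets with continuous data from 2000.",
)
print(assumptions)
```

Freezing the dataclass means the assumptions cannot be quietly mutated mid-test; any change forces a new, visible declaration, which is exactly the audit trail needed when later hunting for bias in the result.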
There are also major issues when applying quantitative methods to data mining processes where we can easily ‘fit our conclusions’ to erroneous data. This can complicate our ability to validate our hypothesis. Worse still it can render our intensive validation completely worthless. We will be closely looking at these issues of ‘adverse curve fitting’ in a future Primer and highlight the methods we use to reduce the impact of this curly problem for quantitative traders.
In our next Primer we will introduce you to a Workflow Process that we use here at ATS as a methodology to validate our trend following hypothesis.
But before I leave you after this brief introduction to future Primers, as the cows need milking, I thought I would leave you with an inspiring short 12-minute video from Winton Capital about how they adopt the scientific method to validate their hypothesis about how the market behaves.
So stay tuned to this series.
Trade well and prosper
The ATS mob