In the Beginning there was Trend Following – A Primer – Part 19

Trend Following Primer Series – The Court Verdict: A Lesson in Hubris – Part 19


The Court Verdict: A Lesson in Hubris

Finally, we are at the end of our Workflow Process. This is it. This is where we compile our global portfolio from our available sub-portfolios and run one final test: the ultimate test of long-term historic portfolio performance, through which we can at last validate our hypothesis.

In the previous Primer we started to get excited and provided an inspiring walkthrough of ‘the first glimpse of our creation – the Sub-Portfolio’, but wait, there is more wonder to be had. “Wait till you see the second glimpse of our Masterpiece – the Portfolio!”

We are nervous, of course, as we approach our day in Court, where we can prove our hypothesis to one and all. Fortunately, in the short time left before this auspicious moment, we can speed through this last phase of the Workflow Process, compiling our Portfolio, as it is punchy and brief.

We have our collection of Sub-Portfolios to work with (comprising say 40 separate markets), and, without the need for repetition, we simply follow the exact same process we adopted when creating those Sub-Portfolios. In this final phase we merely replace the term ‘return stream’ with the term ‘sub-portfolio’.

To briefly summarize:

1. Commence with say 40 uncompounded sub-portfolios of equal risk$ weighting;

2. Use as many of these sub-portfolios as you can to compile your Global Portfolio within your finite capital restrictions. Ideally, we would use all we have available to us, as market diversification, particularly across asset classes, is a big deal in delivering diversification benefit. Don’t worry about equal representation across asset classes, as we let the data speak for itself through this iteration process. With say 50 years of data across say 30 to 40 markets, there is more than enough information embedded in each sub-portfolio’s correlated relationship with the balance of the portfolio without the need for discretionary judgement;

3. Iterate to build the optimal risk-weighted Portfolio, adopting the same process as we did for the constituent Sub-Portfolios;

4. Return to your PCs and select the best non-compounded portfolio composite using the MAR metric (a minimal sketch of this selection loop follows the list);

5. Compound the result using your desired trade risk %, which applies compounding treatment and leverage to your chosen solution. Adjust this trade risk % up or down to achieve a drawdown that is half your tolerance… and then

6. Run the final portfolio test.
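To make steps 2 to 5 a little more concrete, here is a minimal Python sketch of the greedy selection loop, under stated assumptions: every name in it (`build_portfolio`, `mar_score`, `streams`) is a hypothetical illustration rather than our actual tooling, each sub-portfolio arrives as an equal-length, risk$-normalised daily P&L array, and the MAR of the uncompounded composite is proxied as average annual return divided by maximum drawdown.

```python
import numpy as np

def max_drawdown(equity):
    """Largest peak-to-trough decline of an equity curve, as a fraction of the peak."""
    peaks = np.maximum.accumulate(equity)
    return float(np.max((peaks - equity) / peaks))

def mar_score(daily_pnl, base=100_000.0, years=50.0):
    """MAR-style score on an uncompounded curve: mean annual return / max drawdown."""
    equity = base + np.cumsum(daily_pnl)
    annual_return = (equity[-1] - base) / base / years
    dd = max_drawdown(equity)
    return annual_return / dd if dd > 0 else float("inf")

def build_portfolio(streams, capacity):
    """Greedily add the sub-portfolio that most improves the composite's MAR,
    stopping at the capital limit or when no candidate helps."""
    chosen, composite = [], None
    while len(chosen) < capacity:
        best = None
        for name, pnl in streams.items():
            if name in chosen:
                continue
            trial = pnl if composite is None else composite + pnl
            score = mar_score(trial)
            if best is None or score > best[0]:
                best = (score, name, trial)
        if best is None:
            break
        if composite is not None and best[0] <= mar_score(composite):
            break  # no remaining sub-portfolio improves the risk-adjusted composite
        _, name, composite = best
        chosen.append(name)
    return chosen, composite
```

Step 5 then reduces to a one-dimensional search: re-run the compounded backtest at different trade risk % settings until the historical maximum drawdown sits near half your stated tolerance.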

…and there we have it. Simple, logical and straightforward. Wait till they get a load of this!

We are now ready to present our findings to the court. Now all we have to do is find an even better musical accompaniment to our final Masterpiece, to truly do justice to this achievement of Modern Man.

The Proceeding

The courtroom was austere and Rumpole (the Chief Prosecutor) was deep in a cryptic crossword and simply didn’t pay any attention to me.

“I am going to teach him a lesson, and not just him, all of them,” I thought to myself, as I scanned around the courtroom.

I confidently unpacked my LightPro, attached it to my laptop to present my findings to the court, and straightened my tie. I was ready to prove the Hypothesis of my Workflow.

The Judge then entered the court and bid us all to take a seat, as he announced the intent of the proceedings.

“We are here today to validate a hypothesis that seeks to empirically demonstrate the power of a quantitative approach to Trend Following (specifically targeting the tails of the distribution of market returns) that can deliver sustainable long-term returns.”

The LightPro illuminated the bold Hypothesis for all to read.

The Judge continued…

“The author of this experimental test will now do the court a courtesy and present his argument.”

I stood up, bowed slightly to His Honour, and introduced the final portfolio.

“Thank you, your honour. Here I have a brief presentation of the fruits of my workflow process, where I demonstrate my hypothesis using a backtest of a small but diverse systematic portfolio spanning 22 separate markets with 10 diversified trend following systems applied to each. The test is conducted over a 50-year backtest starting with an initial deposit of $50,000, using a 1% trade risk of equity to allow for compounding.”

I pressed the play button on my presentation, which was going to blow the Court’s socks off, but was it the right music to truly represent this Masterpiece?

Rumpole, however, was still deeply engaged in his crossword puzzle.

The Barrister….and the Twist

The silence in the Courtroom was deafening.

“That music was the perfect accompaniment after all,” I said to my colleague. “Sufficiently grand to inspire.”

Rumpole stopped his crossword and the Courtroom was held in suspense as he deliberated.

“Thank you for your presentation,” he grumbled. “Most impressive. I even liked the elevating music, or should I say, elevator music.”

The Courtroom sniggered. Was this going to be another hammer blow by the adept Barrister?

“I almost even believed you had the Holy Grail in your hands for a brief moment… but I see a slight snag in your argument.”

Guffaws were heard in the Courtroom and my heart started to spasm.

“You see, you missed one small detail in your process logic. You presented to us an exhibit of an equity curve arising from 22 separate sub-portfolios over a 50-year history that attempts to validate your claim that this curve is proof of the robustness of your hypothesis.”

Exhibit A: Proof of Robustness

Rumpole continued…

“You go on to state that you undertake this entire process annually to refresh your systems and keep your portfolio razor sharp, but I see no evidence of this annual replacement in the equity curve that you have supplied with this exhibit.

“You have simply presented us with the backtest results of your current selection of systems in your portfolio, which is only valid for one year into the future. Your demonstration is not yet complete. In fact, it is woefully deficient.”

A Shattered Ego and the Need for a ‘Humble Mind’ Reset

Rumpole was right of course. I stood defeated as I recognised the deficiency in my presentation. How could the moment change so suddenly from sublime awe to shattered delusion?

The answer lay in that brain of mine, which I was so sure played no role in this elegant systematic process.

Somehow that quantitative process I had become so attached to was found wanting… and Rumpole knew it, as he had come across thousands of supremely confident quantitative traders before. He just needed to find a simple flaw in the systematic logic that would shatter the argument and demonstrate this hubris for all to see.

He was, like the Market, going to teach me a lesson that all quantitative traders need to heed.

There was an essential step missing in the workflow: a final process just beyond Step 6. A piece of the process was missing.

  1. Commence with say 40 uncompounded sub-portfolios of equal risk$ weighting;
  2. Use as many of these sub-portfolios as you can to compile your Global Portfolio within your finite capital restrictions. Ideally, we would use all we have available to us, as market diversification, particularly across asset classes, is a big deal in delivering diversification benefit. Don’t worry about equal representation across asset classes, as we let the data speak for itself through this iteration process. With say 50 years of data across say 30 to 40 markets, there is more than enough information embedded in each sub-portfolio’s correlated relationship with the balance of the portfolio without the need for discretionary judgement;
  3. Iterate to build the optimal risk-weighted Portfolio, adopting the same process as we did for the constituent Sub-Portfolios;
  4. Return to your PCs and select the best non-compounded portfolio composite using the MAR metric;
  5. Compound the result using your desired trade risk %, which applies compounding treatment and leverage to your chosen solution. Adjust this trade risk % up or down to achieve a drawdown that is half your tolerance… and then
  6. Run the final portfolio test.

Here was the fatal missing step:

  7. Undertake this process annually to retire and replace systems developed through this workflow process. Allow active trades to naturally close, but do not take new trades from the tired sequence. New trades are only allowed through the new ‘refreshed’ sequence of data-mined systems (a minimal sketch of this rotation follows).
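In code terms, this rotation might look something like the following Python sketch. It is purely illustrative and every name in it (`SystemSlot`, `annual_rotation`, `may_open_new_trade`) is hypothetical: retired systems remain in the book only to run off their open trades, while entry signals are honoured solely for the freshly mined set.

```python
from dataclasses import dataclass, field

@dataclass
class SystemSlot:
    """One mined system in the live portfolio (hypothetical structure)."""
    system_id: str
    retired: bool = False                  # retired slots manage run-off only
    open_trades: list = field(default_factory=list)

def annual_rotation(active_slots, refreshed_ids):
    """Retire last year's systems and hand new entries to the refreshed set.
    Retired slots stay in the book until their open trades close naturally."""
    for slot in active_slots:
        slot.retired = True                # no new trades from the tired sequence
    survivors = [s for s in active_slots if s.open_trades]   # still running off
    return survivors + [SystemSlot(system_id=i) for i in refreshed_ids]

def may_open_new_trade(slot: SystemSlot) -> bool:
    """Entry signals are honoured only for the current, refreshed systems."""
    return not slot.retired
```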

The ego overrode the moment and, in my haste to present my Masterpiece to the court, I forgot the smallest matter: this one tiny step in the process that makes all the difference. I forgot to annually revisit the strategy and undertake this process to project the performance results for the next year.

As systematic and non-discretionary as I thought my processes were, my brain had the last laugh and set me up for a lesson in hubris.

Fortunately, the mistake, as it turns out, is actually an under-estimation, as opposed to an over-estimation, of the power of the Workflow described in this Primer series.

By rotating systems each year to replace ‘blunt systems’ with new ones, the process lifts, as opposed to lowers, the equity curve. As we progressively compound the results, the impact of this annual ‘sharpening of the portfolio’ lifts the curve over annual intervals, and we no longer see the obvious ‘recency’ acceleration in the curve depicted in Exhibit A above. Rather, we see a far more stable equity curve over its lifetime.

Now why am I not presenting this effect to you to demonstrate what I mean? It would make the proof so much more powerful. The performance metrics would rival the benchmarks of the Professional Funds. It could even make us famous!

Because the lesson this Primer is teaching is a necessary lesson in humility. The Primer was building a momentum of excitement and ego that was challenging the efficacy of the process.

Rumpole saw it for what it was. In fact, he had seen it again and again from a legion of quantitative traders attached to their models.

Not a rigorous proof of concept, but rather a statement of hubris. He was literally more interested in a cryptic crossword than in what I was planning to say, after all. He knew his craft. I, unfortunately, did not know mine. I failed to recognise the role a brain actually plays in a systematic process.

The Real Reason for the Missing Step

The real reason for the essential missing step in the process is that undertaking the annual rotation of systems at both the sub-portfolio and portfolio level over a 50-year backtest would have my PCs working for many years.

I simply do not have the time or the inclination to direct my PC resources to this endeavour to further validate the hypothesis of this Workflow process. I am suitably convinced by what I can demonstrate now with a single generation of systems from a rigorous 50-year workout, without having to go that extra ‘exhaustive step’ to prove it.

There comes a time when you have to decide to step into a live environment and take your ‘possibly slightly’ deficient models with you into the fray. There is always modelling risk lurking somewhere. Learn to live with it, otherwise you could become a victim of ‘paralysis by analysis’.

As it currently stands, it takes about a month to undertake this Workflow process each year, and this only gives us a year’s worth of life out of the output results. However, I hope that I might have convinced you in this series so far that the Workflow process is worth the effort.

Another reason for not demonstrating a proof of concept over the past 50 years is that I don’t have a 100-year data set from which to calculate the first annual rotation of a 50-year test series. I only have 50 years of data in my universe, which at best would allow a rigorous result to be generated over a 25-year horizon, since each annual rotation must be mined from a long stretch of data that precedes it.

Furthermore, a backtest on historical data only provides limited information to assist in navigating an uncertain future. It is a bit like asking, at 57 years old, “will I live till I am 100?” using my 57-year history to make that assessment. History only has limited value in the information it provides when embedded in an uncertain future.

From the point of this ‘Now’, there is only one historical record behind you, but there is an infinite array of possible future paths ahead of you. A backtest simply doesn’t pick this nugget of wisdom up.

The Brain Versus the Machine – A Last Word on Quantitative Workflow Processes

The moral of the narrative in this Primer’s fictional Court is that any quantitative developer needs to remove any pre-conceived bias they may have in their models. Having this bias will only lead to failure with your systematic methods. Despite the efforts adopted by quants to remove the interfering bias of the brain from the process, it is always lurking within the assumptions of the models, such as the decisions made regarding which process-driven steps we choose for our Workflow.

You simply cannot cover enough bases in a simulated systematic environment to validate any proof statement. Backtests, as part of this simulated environment, are simply useful tools to help you assess what the past has possibly delivered to your systems. In fact, there is still considerable material variation between a simulated environment and a live trading environment. The psychology of the reality, the slippage of illiquidity and the errors of simply being human all contribute to this material difference.

Have no doubt: always opt for a live track record rather than a simulated one when evaluating system performance. The difference is chalk and cheese.

However, on an even deeper level, whether simulated or live, no one, and I mean no one, can predict the outcomes of the next day, let alone the most profitable path into an uncertain future.

So here we are, after the humiliation of facing our own Hubris. We humbly accept that while we can come very far in our modelling processes using the privilege of hindsight (‘look-back’ methods), we will never know how far or how close we are to the actual reality.

But is a systematic workflow process that lets the data ‘make all the major decisions’ better than a more discretionary path to portfolio development?

It certainly appears to offer sound logic in the process that avoids most biases lurking in a human brain, so surely it must be a better way to model this complex system we call the Market?

Well, unfortunately, we find that all models have their weaknesses. There is this thing called reality out there, but there are only ever models we can use to describe it.

I have spent the last six Primers leading you through a quantitative workflow process that seeks to prove our trend following hypothesis, but however close we may feel we are to an answer about reality using some heavy computer power, quantitative methods can be as guilty as discretionary methods in their assumptions about any reality associated with a complex system.

You see, there are no fully non-discretionary quantitative methods. They are all ultimately steered by the fallibility of a brain. The biases that arise from this fallible organ inevitably contaminate our models to some degree.

The heuristic limitations of the brain that create illusory bias in our discretionary model making (further described in Primer 2 of this series) have now led to the statistical biases of data mining and curve fitting in our quantitative methods, which stifle our ability to correctly match any interpretation with the reality.

People often accuse me of being a ‘brain bully’, seeing my intent as ridiculing that grey, soft, squishy stuff that lies in our skull, but ‘au contraire’ my friends. This amazing organ, crafted over deep time within a natural environment, is beautifully aligned with that environment. It has been perfectly ‘curve fit’ for purpose through eons of applied selection processes. Give me a brain anytime as opposed to an artificial intelligence (AI) that seeks to replace its amazing capabilities. AI has such a long way to go before it can challenge the master at interpreting the universe that surrounds us.

Now, provided that the human mind remains the dominant contributor to the machinations of these financial markets, there is a lot of merit in discretionary approaches, provided that any bias residing in that human brain of ours, and its impact on our financial markets, is understood.

We believe there is always a necessary role played by a brain in the development of a workflow process. So much so, that we leave the hard stuff of process design to the brain itself. The brain decides what the steps should be and why, and it decides what limitations to place around the experimental method and the assumptions we make in our model making.

The quant stuff embedded in this workflow process simply harnesses what a computer, processing binary bits, can do extremely well, taking much of the heavy lifting away from the brain in those areas where it simply cannot compete.

So, we view our workflow processes as the combination of what the brain does best and what our information processing systems can do best. We feel that you simply cannot get better than that. It is going to take a long time before anyone can exactly copy what we do. Any small edge that resides in our workflow process is protected for a time.

However, this final word on ‘brain versus machine’ has a caveat. We touched on some of the incredibly important biases in our statistical treatment, introduced by our machines, that our brain may not see and to which we can therefore be blind: the bias of over-fitting and the bias of the selection processes used in the workflow itself. Over-fitting is a curse to the machine that seeks to investigate the reality of complex financial markets, as is the selection bias arising from decisions made when choosing between competing return streams.

The question is, do these known biases that creep into our statistical treatment outweigh the benefits of moving towards this more systematic path of portfolio development using quantitative methods? If everything were simply left to the brain, as limited as it is, are we any better off with the workflow method? I mean, Einstein was able to crack relativity without the use of a machine, and the machines today still cannot work out what the brain can magnificently simplify.

Well, we believe there is still a small edge in our workflow that allows us to better distinguish ‘the Signal within the Noise’, provided we let the brain steer the process and take all measures to identify and avoid any biases that may arise in our statistical treatment. In tandem, we believe the brain plus the machine can simply make better models to interpret the reality out there.

But here are a few guiding pointers to help protect you from the adverse biases that may arise when using statistical methods to find that damned elusive ‘signal’ through quantitative methods such as those delivered by this workflow.

  • Adopt Design-First logic using the brain, as opposed to ‘hocus-pocus’ optimisation methods. Configure your system through Design Principles to deliberately respond to a market condition you are targeting. Do not let an automated system generator decide for you. It will over-fit to the data, which is ‘mostly noise’.
  • Avoid, where possible, choosing which systems to use based on the most profitable equity curve. If there is randomness in a portfolio equity curve, which we know there is in a ‘mostly efficient market’, you are only exacerbating ‘fictional’ performance, as your selection process strips out the adverse random elements that make up the equity curve.
  • Most Monte Carlo methods applied to trade results themselves (as opposed to market data) tell you nothing, as they disrupt the serial correlation in the equity curve (related to the market signal) and not the noise element of the equity curve (related to the randomness in the market data). They turn the entire equity curve into noise. So don’t use them as methods to test for robustness (see the sketch after this list).
  • Walk Forward should only be used for predictive techniques that seek to capture a repeatable market condition. It is not applicable to trend following or momentum methods. A nice straight equity curve for a single system can only be maintained if market conditions remain favourable over the entire extent of that equity curve. That is only relevant to predictive methods for the period of time that a predictable market condition persists, and that can be very short. Every equity curve must display periods of performing and periods of under-performing, as we know that no system can address all market conditions. Be suspicious of a straight equity curve for a single return stream.
  • The market condition determines your fate. A good system simply allows you to extract the elusive signal from otherwise noisy market data. Bad systems… well, they are just bad in every shape and form.
  • Straight equity curves over the long term are the result of a portfolio comprising different successful systems attacking many different conditions. Trend followers’ equity curves are superior to ‘predictive equity curves’ as they avoid the cost of failed predictions.
  • Always data mine using the most data you can get your hands on. This is the only way to reduce the impact of randomness in your overall performance results. Data mining over a few years of data is asking for trouble.
  • Avoid getting too attached to your Models. Your brain, and the way it likes to assign causality to a reality, is the ultimate weak spot in your quantitative models, despite the feeling that you have objectively eliminated this weak spot through your systematic processes. Always treat any outcome derived from your quantitative processes with scepticism. There are always better models out there… so keep modelling.
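To illustrate the Monte Carlo point in the list above, here is a small self-contained Python demonstration on purely synthetic, hypothetical numbers: trade results are generated with deliberate serial correlation (the clustering of wins that a captured trend produces), and a single Monte Carlo style reshuffle wipes that structure out.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic trade P&L with deliberate serial correlation (AR(1)):
# when a trend is being harvested, winning trades tend to cluster.
n, phi = 2000, 0.3
eps = rng.normal(0.0, 1.0, n)
pnl = np.empty(n)
pnl[0] = eps[0]
for t in range(1, n):
    pnl[t] = phi * pnl[t - 1] + eps[t]

def lag1_autocorr(x):
    """Lag-1 serial correlation of a series."""
    x = x - x.mean()
    return float(x[:-1] @ x[1:] / (x @ x))

shuffled = rng.permutation(pnl)   # a Monte Carlo style reshuffle of trade results

print(f"original lag-1 autocorrelation: {lag1_autocorr(pnl):+.3f}")      # roughly +0.30
print(f"shuffled lag-1 autocorrelation: {lag1_autocorr(shuffled):+.3f}") # roughly 0.00
```

The shuffled series keeps the same distribution of trade outcomes but destroys the sequence information, which is precisely the part of the equity curve where a trend follower’s edge lives.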

I have now demonstrated how we here at ATS use Data Mining to apply our Trend trading philosophy. Some of the systematic workflow processes we apply may be new to many trend traders who have a different way of interpreting their craft, but when you dig down into the weeds of this process, you can probably see that the core logic of trend following safely resides within it.

We are coming to the end of this Primer series… and I am quite literally running out of things to say that might assist a trader who wants to pursue the Trend Following path.

Just the ‘Conclusion’ to go, which ends this series…

Stay tuned for our next and final installment in this Primer Series.

Trade well and prosper

The ATS mob

 
