
# Beware the naive application of sample statistics - A Great Primer on the Central Limit Theorem and its implications/limitations.

(@copernicus)
Joined: 1 year ago
Posts: 236
23/11/2019 12:12 pm

The key takeaway here is that even if the input variables are not normally distributed, the sampling distribution of the mean will approximate a normal distribution, provided the underlying population has a finite variance. That finite-variance assumption is baked into normal sampling methods.
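A minimal sketch of this in action (all numbers are illustrative, not from any market data): the exponential population below is heavily right-skewed, yet the distribution of its sample means comes out nearly symmetric and normal-looking, because the population variance is finite.

```python
import random
import statistics

random.seed(42)

# Draws from an exponential population: heavily right-skewed, clearly
# non-normal, but with finite variance -- so the CLT applies.
def sample_mean(n):
    return statistics.fmean(random.expovariate(1.0) for _ in range(n))

# 5000 sample means, each from a sample of size 100.
means = [sample_mean(100) for _ in range(5000)]

# The population mean and sd are both 1.0, so the sample means should
# cluster tightly around 1.0 with spread near 1/sqrt(100) = 0.1.
print(round(statistics.fmean(means), 2))
print(round(statistics.stdev(means), 2))
```

The point of the sketch is that the near-normality of the sample means says nothing about the shape of the population they came from.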

This is a very important issue to consider when dealing with fat-tailed distributions, as we like to do as trend followers. You simply cannot make inferences about the nature of the fat tail when applying theorems built around sampling methods that will always bias the series towards a normal distribution. The reason is that the 'real' distribution in a fat-tailed environment does not possess a central tendency (a single mean and a single standard deviation around that single mean). The reality may be one of many different means, or indeed no central tendency at all. Non-normal distributions in these important but 'exotic' zones require different treatments to make any sense of them. The bottom line is that notions solidly built around premises of normality, such as standard deviation and derived measures like the Sharpe ratio, lose efficacy in environments where distributions are clearly not normal in nature.
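To see where the CLT actually breaks, here is a minimal sketch using the standard Cauchy distribution, the textbook case of a distribution with no mean and infinite variance. Averaging more draws never stabilises the sample mean the way it does for a finite-variance population:

```python
import math
import random

random.seed(0)

def cauchy():
    # Standard Cauchy via inverse-CDF sampling: tan(pi * (U - 1/2)).
    # No defined mean, infinite variance -- the CLT's assumptions fail.
    return math.tan(math.pi * (random.random() - 0.5))

# The running sample mean keeps jumping around even at a million draws,
# because rare enormous observations dominate every average.
for n in (100, 10_000, 1_000_000):
    draws = [cauchy() for _ in range(n)]
    print(n, round(sum(draws) / n, 3))
```

Real market returns are not literally Cauchy, but the sketch shows the mechanism: when single observations can be as large as the sum of all the others, averaging stops converging.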

Always closely look at the core assumptions embedded in any statistical inference. Do not blindly adopt a statistical treatment without carefully unpacking the limitations embedded in its core assumptions. Statistics is a method for taking statements derived from sample data and using them to make inferences about the entire population. If you start with a faulty premise...your result may not reflect the underlying reality, and may simply be an artefact of the particular statistical method chosen.

Now we all know that markets are complex systems and they adapt over time. Markets are non-stationary...so using ideas such as standard deviation, or statistical terminology associated with a normal distribution, has very little value if you want to understand what is really going on. They may apply nicely to 95% of the population...but where the 5% matters...such as the case where outliers exert a dominant influence on the data set...the failure to correctly address these instances means that your conclusions vastly under-rate their overall importance.
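A toy illustration of that 5% problem (the series below is invented for the sketch, not real returns): a return stream that is quiet 99% of the time with rare large jumps. A single standard deviation computed over the whole series is dominated by the handful of jump days it was never designed to describe.

```python
import random
import statistics

random.seed(7)

# Hypothetical return series: ~99% quiet days (sd 0.5), ~1% jump days (sd 10).
returns = [
    random.gauss(0, 0.5) if random.random() > 0.01 else random.gauss(0, 10.0)
    for _ in range(10_000)
]

full_sd = statistics.pstdev(returns)
# Drop just the 100 largest absolute moves (1% of the data).
trimmed_sd = statistics.pstdev(sorted(returns, key=abs)[:-100])

# The measured volatility collapses once the tail is removed -- the
# single summary statistic was mostly describing the outliers.
print(round(full_sd / trimmed_sd, 2))
```

Read the other way round: if your model was fitted to the trimmed 99%, it vastly under-rates the days that actually drive the result.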

The limiting assumption might make calculations far easier...but this defeats the whole purpose of understanding how complex systems work. It is the inter-relationships that exist in complex systems that create the non-normal distributions and non-linear tendencies...so we certainly do not want to simplify our statistical models to avoid dealing with this non-normality, as doing so would prevent us from understanding what actually gives rise to the fat tails in the first place.

That is why we here at ATS tend to prefer visual methods of identifying the 'big ideas' from very large sample sizes, as opposed to more conventional statistical methods that assign meaning to a single statistic derived from a small sample data series.
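One simple way to 'look at the tails' directly with a large sample (a hypothetical sketch, not the ATS toolkit itself) is to tabulate the empirical survival probability P(|X| > k) at increasing thresholds, comparing a normal against a fat-tailed Student-t with 3 degrees of freedom. The divergence in the far tail is obvious in a way no single standard deviation can show:

```python
import math
import random

random.seed(1)

def student_t3():
    # Student-t with 3 df, built from standard normals:
    # t = Z / sqrt(chi-squared_3 / 3). Fat-tailed, finite variance.
    z = random.gauss(0, 1)
    chi2 = sum(random.gauss(0, 1) ** 2 for _ in range(3))
    return z / math.sqrt(chi2 / 3)

N = 200_000
normal = [random.gauss(0, 1) for _ in range(N)]
fat = [student_t3() for _ in range(N)]

# Empirical survival function at increasing thresholds: the normal's
# tail probabilities vanish while the t-distribution's persist.
for k in (2, 4, 6):
    p_n = sum(abs(x) > k for x in normal) / N
    p_f = sum(abs(x) > k for x in fat) / N
    print(f"k={k}: normal {p_n:.5f}  fat-tailed {p_f:.5f}")
```

Plotting these survival probabilities on log-log axes is the usual next step: power-law-like tails show up as a straight line, while normal tails dive to zero.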

For deeper ideas on better models of the underlying reality, the works of Nassim Taleb and Benoit Mandelbrot are highly recommended. Their models do justice to the underlying complexity that exists in non-linear environments such as these financial markets.


Quidquid latine dictum, altum videtur
