The Trader and the Three Bears: Overfit, Underfit and Optimally Fit


This article is specifically tailored for trend traders, as it delves into the crucial concepts of overfitting and underfitting in trading models. While these terms are important to understand for any trading strategy, it is essential to consider them in the context of the specific form of trading model we use to navigate the markets. Trend traders, in particular, face unique challenges and requirements when it comes to effectively addressing overfitting and underfitting.

It is worth noting that some of the methods outlined in this article, while valid and effective for trend followers, may not be applicable or suitable for alternative trading models. Trend trading involves identifying and capitalizing on directional trends in price series, which requires a distinct approach compared to other trading strategies. Therefore, trend traders must carefully choose the strategies and techniques that align with their specific trading style.

Understanding overfitting and underfitting is imperative for trend traders. Overfitting occurs when a trading model becomes overly tailored to historical data, potentially losing its effectiveness in predicting future market movements. On the other hand, underfitting refers to models that fail to capture the full potential of signals, resulting in missed trading opportunities. Striking the right balance between overfitting and underfitting is crucial for developing robust and reliable trend-following models.

Throughout this article, we will explore various strategies and methods specifically designed to address overfitting and underfitting in trend trading. It is important to recognize that these approaches may not be applicable to all types of trading models, as different strategies require tailored techniques. By understanding these concepts and implementing appropriate strategies, trend traders can enhance the reliability and effectiveness of their trading models while navigating the dynamic and ever-changing markets.

Understanding Overfitting and Underfitting in Trading Models

In the realm of quantitative trading, model development plays a pivotal role. However, it is crucial for traders to grasp the concepts of “overfitting” and “underfitting,” as they have a profound impact on the effectiveness of trading models.

Developing trading models is akin to embarking on an exhilarating scavenger hunt, where historical price data serves as the map, and the desired outcome is a promising trade. However, it is vital to strike a balance in the complexity of the model, avoiding both excessive intricacy and oversimplification. This delicate equilibrium lies at the core of understanding overfitting and underfitting.

Imagine crafting a model that perfectly fits the historical price data, capturing even the smallest fluctuations and idiosyncrasies. It may initially seem like a remarkable achievement, but beware, as this could be an instance of overfitting. While the model appears flawless, it may be too finely tuned to past data, compromising its ability to predict future market behaviour. The model’s allure might fade when faced with new data, as it struggles to accommodate the inherent uncertainties of the market.

Conversely, underfitting occurs when a model is too generic and fails to incorporate crucial aspects of the data. This approach oversimplifies the complexities of the market, akin to predicting the weather based solely on the season while disregarding factors such as humidity, wind direction, and atmospheric pressure.

To safeguard our testing process and ensure the efficacy of our trading models, it is imperative to prevent these negative consequences from infiltrating our approach. By comprehending the distinction between signals and noise, we can navigate this challenge more effectively. The “signal” represents the underlying trend revealed by the data—a whisper of the market’s future movements. In contrast, “noise” signifies the random fluctuations and erratic ups and downs that do not contribute meaningful information to our analysis.

Developing a clear understanding of these concepts enables us to identify instances when our model is overly influenced by noise (overfitting) or fails to capture the essential signals (underfitting). By striking the right balance and staying attuned to the valuable signals amidst the noise, we enhance the reliability and predictive power of our trading models.

Decoding the Language of Signal and Noise in Price Data

Let us delve deeper into the realm of trading signals and noise, which form the foundation of any trading model. Understanding these concepts is akin to learning the language of the market, enabling us to interpret its messages and identify potential trades.

Attuning to the Signal

In trading, a “signal” is akin to a secret code—a pattern within the price data that aligns with our trading systems. Visualize it as breadcrumbs leading us towards profitable trades. The specific signal we seek depends on the type of trader we are.

For trend followers, the prize is a signal revealing a directionally trending price series—whether an upward or downward trend. Simple trend-following models are deployed to capitalize on these price signals. It is akin to spotting a rising tide and riding the wave at the perfect moment to reach our desired outcome.

On the flip side, mean reverters seek signals indicating a repeating oscillation of prices around a balance point or a convergent price series. It’s akin to a pendulum swinging back and forth around its equilibrium. Mean-reverting models are employed to profit from these oscillations.

The key takeaway is that the signal we pursue directly aligns with the type of trading model we employ. The objective is to identify opportunities within these price patterns, utilizing optimized models tailored to exploit them.

Suppressing the Noise

Then there is “noise”—the nemesis of our signal. It encompasses all other price features that clutter our data, obscuring the precious signal we aim to uncover. However, what may be considered noise for one trader could be a signal for another!

For trend followers, any convergent price patterns are deemed noise as they disrupt the effectiveness of trend-following systems in generating profits. It’s akin to a squall disturbing the smooth sail on a trending tide.

Similarly, for mean reverters, divergent price series—desirable to trend followers—are perceived as noise. Once again, the critical aspect to remember is the relationship between the price data patterns and the nature of the systems we employ to extract trading opportunities.

Striking the Right Balance: Signal-to-Noise Ratio

So how do we triumph in the battle between signal and noise? The answer lies in the Signal-to-Noise Ratio (SNR). A high SNR in the price data indicates that the trending price series—or signals—are more prevalent compared to background noise. It’s akin to having a clear map to the treasure, devoid of distractions. Armed with optimally fitted models, trend followers can exploit these signals to generate more profits than losses.

However, caution is warranted, as trading models are likely to underperform when applied to price data with a low SNR. It’s akin to navigating through a fog—the noise obscures the signal, making it challenging to discern profitable opportunities.

Remember, not every trend within price data represents a genuine signal. Sometimes, it may be a randomly constructed trend devoid of any real bias or momentum. By directing our models to participate solely in trending signals, we effectively reduce noise in our trade results. This approach enables us to more accurately identify real opportunities and maximize profits.
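To make the idea concrete, here is one simple way to estimate a signal-to-noise ratio for a price series: treat the fitted linear trend as the signal and the residuals around it as noise. This is an illustrative proxy under my own assumptions, not a definitive SNR formula.

```python
import numpy as np

def trend_snr(prices):
    """Rough SNR proxy: variance of the fitted linear trend (the signal)
    divided by the variance of the residuals around it (the noise)."""
    t = np.arange(len(prices))
    slope, intercept = np.polyfit(t, prices, 1)
    trend = slope * t + intercept
    residuals = prices - trend
    return np.var(trend) / np.var(residuals)

rng = np.random.default_rng(0)
trending = np.cumsum(rng.normal(0.5, 1.0, 500))  # persistent upward drift
choppy = rng.normal(0.0, 1.0, 500)               # pure noise, no drift

# A strongly trending series scores far higher than a flat, noisy one.
```

A high ratio says the map to the treasure is clear; a low ratio says we are navigating through fog.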

Unveiling Profits Amidst Market Noise

Let’s further explore the interplay of signals and noise in trading models, unravelling the distinction between random and genuine trends, emphasizing the significance of the signal-to-noise ratio in trade performance, and recognizing that our edge lies within the price data signals rather than the trading models themselves.

Filtering Out the Noise: Key Role of Trending Signals

In trading, we often encounter both random and real trends within price series. Our primary objective is to target and capitalize on trending signals—trades that align with directionally trending price series. While this approach may reduce our potential trade sample size, it simultaneously amplifies the signal-to-noise ratio, offering a cleaner and clearer path towards potential profits.

To visualize the concept of signal and noise, consider the analogy of the film “Contact,” inspired by Carl Sagan’s novel. In the movie, Jodie Foster uses an array of radio telescopes to search for extraterrestrial life. Amidst the static and noise, she eventually detects a repeating signal—a valuable piece of information that enables humanity to explore distant galaxies. Similarly, in trading, amidst the market “noise,” we search for recurring “signals”—patterns that lead us to profitable trades.

Unearthing Profits: The Power of Signals

An essential principle to grasp is that our edge lies in the price data signals, not in the trading models themselves. The models we utilize are tools employed to extract enduring and exploitable signals from the price data.

If there is no exploitable signal present, our trading systems may stagnate or deteriorate over time. It is crucial to acknowledge that profitable outcomes in trading rely on effectively capitalizing on an enduring signal within the price data. Many traders desire constant profitability from their models, but the reality is that a price series typically encompasses both signal and noise. Long-term profitability can only be achieved when our trading models successfully exploit these enduring signals.

Understanding this principle also aids in identifying whether our trading systems are overfit or underfit. If a price series lacks significant trends, we should not expect our systems to consistently generate profitable outcomes. If they do, it serves as a red flag indicating potential overfitting.

Lastly, we must confront a harsh truth: It is highly improbable to generate long-term profits when trading a noisy price series. Believing that sustained profits can be extracted from noisy data is akin to placing faith in perpetual motion machines. Without an enduring and repeatable signal within the price data, our fortunes become reliant on mere luck. A genuine trading edge emerges from an enduring signal concealed within the price data.

By recognizing the significance of signals, differentiating between noise and meaningful trends, and understanding the limitations of trading systems when confronted with noise, we can better navigate the complexities of the market and uncover profitable opportunities.

Unravelling the Tom Basso Experiment: The Nature of Randomness

A fascinating case study in the trading world revolves around the renowned Tom Basso experiment, where random entries were employed in a trend-following model, resulting in long-term profitability. This experiment, also conducted by David Harding, challenges conventional notions regarding the significance of exploitable signals within price data.

However, it is important to recognize that Basso’s experiment was not purely random. Despite utilizing random entry points within a trend, he implemented specific rules for trade exits, including an initial stop and a trailing stop. These exit rules were not randomly generated, and it was the presence of these asymmetrical rules that created an exploitable advantage within a trending price series.

Even though the entry points were random, Basso’s experiment incorporated parameters that were not randomly defined. It was these non-random factors, such as the initial stop and trailing stop, which correlated with trending price signals and successfully harnessed a profitable edge from the price data.

The key takeaway is that despite the random entry condition, other elements within the system biased the outcome towards long-term profitability. The price series itself possessed sufficient characteristics for a trend-following model to capitalize on the opportunities presented by trending price data, irrespective of the random entry method employed.
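A toy reconstruction of the idea can be sketched as follows (this is not Basso’s actual test, data, or parameters—the drift, stop width, and trade counts are arbitrary illustrative values): entries are pure coin flips, and the only non-random element is a trailing-stop exit that cuts losses short and lets profits run. On a strongly trending series, that asymmetry alone biases results positive.

```python
import numpy as np

def random_entry_trailing_stop(prices, stop_width, n_trades, rng):
    """Coin-flip entries; the only structure is the asymmetric exit:
    a trailing stop `stop_width` below the best price seen in the trade."""
    pnl = []
    for _ in range(n_trades):
        i = int(rng.integers(0, len(prices) - 2))
        direction = int(rng.choice([-1, 1]))   # random long or short
        entry = direction * prices[i]
        best = entry
        exit_px = direction * prices[-1]       # fallback: exit at data end
        for j in range(i + 1, len(prices)):
            px = direction * prices[j]
            best = max(best, px)
            if px < best - stop_width:         # trailing stop hit
                exit_px = px
                break
        pnl.append(exit_px - entry)
    return np.array(pnl)

rng = np.random.default_rng(42)
trending = np.cumsum(rng.normal(0.3, 1.0, 5000))  # strong persistent trend
results = random_entry_trailing_stop(trending, stop_width=4.0,
                                     n_trades=400, rng=rng)
# Despite random entries, the asymmetric exit harvests the trend:
# losing trades are cut near the stop width, winners ride the drift.
```

The losing half of the coin flips is capped near the stop width, while the winning half rides the drift, which is exactly the bias described above.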

Having developed a clear understanding of the interplay between signal and noise in price data, we can now delve deeper into the essential concepts of overfitting, optimal fitting, and underfitting in trading models. By exploring these concepts, we can further refine our understanding of how to develop robust and effective trading strategies.

Understanding the Terms Overfitting, Optimal Fitting, and Underfitting

To gain a comprehensive understanding of these terms, it is crucial to recognize the two components that influence the performance of any trading strategy:

  1. Intrinsic Power of the System: This refers to the system’s ability to capitalize on an enduring and repeatable edge present within the market data. Essentially, it measures how effectively the strategy is optimized to extract signals from the market data.
  2. Luck: This factor accounts for the role of random fluctuations and noise in trading performance. At times, the top-performing return stream generated by a backtest may be purely due to luck. Overfitting a trading strategy to historical price data often leads to outcomes driven by luck rather than the system’s actual power.

Let’s delve deeper into each type of “fit”:

Optimal Fitting: An optimally fit trading system responds favourably to the signals present in the price data while disregarding the noise. This is the desired state, as when applied to future market data, the system can effectively exploit the majority of the signals.

Overfitting: An overfit system extracts both signal and noise from the price series, reacting to all price patterns that result in profitable outcomes. However, the issue with overfit systems lies in their inability to distinguish valid signals from noise. Over-optimized models often deteriorate swiftly when applied to future time series since the majority of historic price patterns (attributed to noise) that the system was trained and optimized on may not reoccur in the future.

Underfitting: An underfit model fails to optimally extract the signal from a price series, leaving a significant portion of the signal unexploited. While underfit systems may not necessarily degrade in the future, their effectiveness in extracting a sufficient edge becomes questionable. Nonetheless, having an underfit model is still preferable to having an overfit model when developing trend-following systems.

This notion is particularly relevant when it comes to Outliers—unique and unpredictable events. A trading model specifically designed to exploit opportunities from historic trends may not perform as well when confronted with future trends that have not occurred in the historical record. Due to the unpredictable nature of Outliers, a trading system that is optimally fit for the future will often appear underfit when measured against historic price data.

Understanding the dynamics of overfitting, optimal fitting, and underfitting is essential in developing robust trading strategies that can effectively extract signals from the market data while minimizing the influence of noise.
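The distinction can be demonstrated with a deliberately simple curve-fitting analogy, with polynomials standing in for trading models of varying complexity (all numbers here are illustrative): the complex model wins in-sample but falls apart on unseen data.

```python
import numpy as np

rng = np.random.default_rng(7)

# A genuinely trending series: linear signal plus noise.
t = np.linspace(0.0, 1.0, 100)
prices = 5.0 * t + rng.normal(0.0, 0.5, t.size)

train_t, test_t = t[:70], t[70:]          # "historical" vs "future" data
train_p, test_p = prices[:70], prices[70:]

def rms_errors(degree):
    """In-sample and out-of-sample RMS error of a polynomial model."""
    coeffs = np.polyfit(train_t, train_p, degree)
    in_err = np.sqrt(np.mean((np.polyval(coeffs, train_t) - train_p) ** 2))
    out_err = np.sqrt(np.mean((np.polyval(coeffs, test_t) - test_p) ** 2))
    return in_err, out_err

simple_in, simple_out = rms_errors(1)     # fits the signal only
complex_in, complex_out = rms_errors(12)  # fits signal *and* noise

# The overfit model looks better on history but degrades on new data.
```

The degree-12 model chases every wiggle of noise in the training window, which is precisely why its forecasts deteriorate the moment the future deviates from the past.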

The Goldilocks Principle in Trading

In the context of trading, we strive to find a balance where our models are neither too overfit (too hot) nor too underfit (too cold), but just right (optimally fit). Let’s explore this principle further:

  1. Overfitting (Too Hot): An overfit trading model has been overly tuned to historical data, including both signal and noise. If the future deviates even slightly from the past, this model will quickly deteriorate, much like porridge that is too hot.
  2. Underfitting (Too Cold): Conversely, an underfit strategy is comparable to porridge that is too cold. Such models fail to extract a sufficient exploitable edge from the historical market data. While an underfit strategy may not degrade in the future, there is considerable uncertainty regarding its effectiveness in extracting a sufficient edge, especially if the markets trend in the future.
  3. Optimal Fitting (Just Right): Our goal is to develop a model that is optimally fit to the historical market data, similar to porridge that is just right. We aim to exploit the signals in the price data that our systems are designed to target. However, it is important to consider that Outliers are unique in nature. Therefore, our definition of “sufficiently fit” is more flexible compared to other investment methods. Ideally, our solutions should have “loose pants,” meaning they should be adaptable enough to accommodate new market conditions.

In essence, we are seeking a middle ground where the model is warm, avoiding extremes of being too hot or too cold. Our objective is to tailor our models to effectively capture the trending signals in the market data, as these signals hold the greatest potential for representing Outliers.

By achieving optimal fitting, we increase the likelihood of our trading models generating favourable outcomes while minimizing the risks associated with overfitting and underfitting. Striking the right balance ensures that our models are robust and adaptable to different market conditions, setting us on a path towards long-term success.

Sampling Bias in Trading

Before delving deeper into overfitting, underfitting, and optimal fitting, it is important to address another critical issue related to overfitting: sampling bias, often called selection bias. It creeps into our strategy development process whenever we are compelled to choose between different models. Let’s examine this concept further through an example:

Imagine two traders, Sam and Joe, who are tasked with creating trend-following systems using the same 30-year historical data sample. However, they are not provided access to the most recent five years of data, which is reserved for testing purposes.

After developing their respective models, Sam and Joe apply them to the most recent five years of data. Sam’s model outperforms Joe’s model, leading us to the unbiased conclusion that Sam’s model is superior.

However, if we decide to trade Sam’s model going forward and discard Joe’s model, we introduce a selection bias into the process. This bias arises from choosing one strategy over another, and it is challenging, if not impossible, to eliminate entirely.

The performance outcomes of our trading strategies are influenced by two key components:

  1. The outcome derived from the real edge present in the price data.
  2. The outcome influenced by the noise in the price data, which can lead to wins or losses by chance.

The issue with selection bias lies in the fact that when we are forced to choose from different options, we may inadvertently select a strategy that performs better due to the presence of noise or “luck” in the system. Therefore, it is crucial to find a way to eliminate or reduce the impact of luck in our performance results. An appealing equity curve may be the result of overfitting rather than the outcome of effectively extracting an enduring signal from the price data.
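The role of luck in selection can be demonstrated with a quick simulation (illustrative, with arbitrary strategy and trade counts): generate many candidate strategies that are pure coin flips, pick the best backtest, and watch its apparent edge fail to carry over to fresh data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_strategies, n_trades = 200, 250

# Every candidate "strategy" is pure noise: zero real edge by construction.
backtest = rng.normal(0.0, 1.0, (n_strategies, n_trades))
live = rng.normal(0.0, 1.0, (n_strategies, n_trades))

best = int(np.argmax(backtest.mean(axis=1)))    # pick the backtest winner

selected_backtest_edge = backtest[best].mean()  # looks like a real edge
selected_live_edge = live[best].mean()          # typically reverts to zero
```

With two hundred coin-flip strategies, the best backtest will almost always show a seemingly attractive mean return, even though every candidate has zero edge—that apparent edge is the luck component that selection smuggles into our results.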

By recognizing and addressing sampling bias, we can strive to develop trading strategies that are robust and not solely dependent on chance outcomes. It is important to approach strategy selection and evaluation with caution, ensuring that the chosen models are truly optimized to capture the underlying signals in the market data, rather than being driven by random noise.

Reducing Overfitting in Trading Models

Overfitting in trading models is a hidden danger that is often overlooked when traders solely rely on appealing backtest results and performance metrics. However, it is crucial to assess whether these outcomes stem from a genuine market edge or mere chance. Failure to make this distinction can lead to the adoption of overfitted models that lack reliability and predictability.

To combat the issue of overfitting, it is important to recognize that different trading techniques require tailored approaches to mitigate this problem. While Monte Carlo methods can effectively reduce overfitting in convergent models, they may not be well suited to trend-following models.

The unique characteristics of trending price series play a role in understanding why certain techniques are more suitable for reducing overfitting in trend-following models. Unlike convergent signals that exhibit consistent frequencies and amplitudes, trending price series possess an unpredictable pattern. They manifest sporadically, without clear repetition or consistency. As a result, techniques like Monte Carlo, which rely on consistent signal frequencies over time, are less effective for trend-following systems.

To address the challenge of overfitting in trend-following models, specific strategies should be employed. Here are some key methods:

  1. Design First Logic: Adopting a design-first approach, trading models are developed with predefined design rules that exploit opportunities presented by trending price series. This ensures that the system’s design aligns with the objective of capturing trends effectively.
  2. Simple Models with Few Variables: Simplicity is key in trend-following models. By using models with a limited number of variables, the risk of overcomplicating the system and overfitting is reduced. Simpler models enhance adaptability and robustness across different market conditions.
  3. Visual Mapping Process: Evaluating all trading strategies through visual mapping allows for an assessment of their ability to capture the characteristics of trending price conditions. This evaluation validates the models’ effectiveness and their alignment with desired trading objectives.
  4. Extensive Multimarket and Timeframe Testing: Conducting thorough testing across various markets and timeframes significantly increases the sample size and provides confidence in the models’ performance. Diversifying the testing scope offers a broader perspective on how the models perform in different scenarios, reducing the risk of overfitting to specific market conditions.
  5. Utilize All Available Data: Instead of reserving data for out-of-sample testing, maximize the use of all available data for comprehensive analysis. This approach ensures a thorough assessment of the models’ performance and robustness without sacrificing valuable information.

By implementing these tailored methods for reducing overfitting in trend-following models, traders can better ensure the reliability and effectiveness of their strategies. This comprehensive approach acknowledges the distinct characteristics of trending price series and provides a solid foundation for successful trading outcomes.

Design First Logic in System Development

Designing a trading system with a “design first” logic is a crucial step in effective system development. By applying the Golden Rules as the initial framework, we ensure that the system’s design is capable of harnessing the desired edge we seek. This approach focuses on developing a system that fully exploits trending price series, rather than relying solely on data mining processes that lack predefined design principles.

Data mining processes aim to discover patterns within data, but they often suffer from overfitting when no design principles are established from the outset. Without a predefined system design, the data mining engine tends to fit the model too closely to historical data, making it challenging to discern the underlying design logic and accurately evaluate its performance. Simply specifying performance objectives and allowing the data mining software to generate strategies to meet those objectives can result in an overfit outcome that utilizes noise and random signals to achieve the desired performance.

To avoid this potential pitfall, adopting a design-first objective within the data mining process becomes crucial. This approach emphasizes the inclusion of Golden Rules for trend following in our models before applying data mining to optimize the variables.

The idea is to identify the edge or signal we want to extract from the market data and then develop Golden Rules that embody the logic capable of capturing that edge within the price data. These Golden Rules may involve principles such as cutting losses short and letting profits run, applying equal bets to all return streams, and striving for low correlation among various return streams in our portfolios. These design principles serve as the guiding constraints for how our models will operate on unseen price data.

Once we have integrated these Golden Rules into each of our trading systems, we can then utilize data mining to assign adjustable parameters that align with these foundational principles. This process allows for flexibility in adapting the models to different market conditions while maintaining consistency with the overarching design logic.

By adopting a design-first approach and incorporating the Golden Rules of trend following into our models, we establish a strong foundation for system development. This approach enables us to strike a balance between capturing the desired edge and avoiding overfitting tendencies, ultimately leading to more robust and effective trading systems.
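As a sketch of what “design first” can look like in code (the class, rule names, and numbers here are my own illustrative assumptions, not a canonical specification), the structural rules are fixed up front and only the numeric parameters are left for later optimisation:

```python
from dataclasses import dataclass

@dataclass
class GoldenRuleSystem:
    """Structure is fixed by design; only the numbers are tunable."""
    entry_lookback: int = 50       # breakout window (tunable parameter)
    trail_atr_mult: float = 3.0    # trailing-stop width in ATRs (tunable)
    risk_per_trade: float = 0.01   # fixed rule: equal bet per return stream

    def position_size(self, equity: float, atr: float) -> float:
        """Equal-risk sizing: a full stop-out costs the same fraction
        of equity in every market, regardless of its volatility."""
        return (equity * self.risk_per_trade) / (self.trail_atr_mult * atr)

system = GoldenRuleSystem()
size = system.position_size(equity=100_000, atr=2.0)
```

Data mining would then search over `entry_lookback` and `trail_atr_mult`, but it could never remove the asymmetric exit or the equal-bet rule, because those live in the structure rather than in the parameter set.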

Simplicity and Flexibility in Trend-Following Models

Adopting simple models with a limited number of parameters is advantageous in mitigating over-optimization to historical market data. These models have the ability to capture a broader range of potential trend formations, increasing the opportunities for profitable trades.

When focusing on outliers, which lack a precise definition and can assume various forms, the importance of simplicity in our models becomes evident. Rather than striving for precision, we prioritize simplicity. Complex models, with an abundance of parameters, often lead to overfitting and restrict our ability to exploit a diverse set of signals within the market data. Such precise models tend to lose their effectiveness in capturing alternative patterns that may arise in trending price series. To overcome this limitation, we intentionally opt for models that are “underfit” to any single trend form, allowing us to capitalize on a wider array of potential trends.

By utilizing simple models with few parameters, we tap into a larger sample size and maintain robustness in the face of varying market conditions. We refer to these models as “loose pants” models, symbolizing the flexibility they provide in capturing different forms of trending price series. Incorporating few parameters and embracing simple trend-following designs unlock the ability to exploit a diverse range of potential trend formations. This strategy significantly increases the sample size, enhancing the reliability and adaptability of our models when seeking to exploit a multitude of trends in the future.
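A classic example of such a “loose pants” model is a channel breakout with a single lookback parameter (a generic sketch of the Donchian-style idea, not the author’s exact rules):

```python
import numpy as np

def breakout_positions(prices, lookback=50):
    """Long above the prior `lookback`-bar high, short below the prior
    low, otherwise hold the previous position. One tunable parameter."""
    pos = np.zeros(len(prices))
    for i in range(lookback, len(prices)):
        window = prices[i - lookback:i]
        if prices[i] > window.max():
            pos[i] = 1.0         # upside breakout: go (or stay) long
        elif prices[i] < window.min():
            pos[i] = -1.0        # downside breakout: go (or stay) short
        else:
            pos[i] = pos[i - 1]  # no breakout: keep the prior position
    return pos
```

Because the model asserts nothing about how a trend must unfold beyond “price made a new extreme,” it stays deliberately underfit to any single trend form while still riding whichever shape the trend takes.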

The utilization of simple models with loose pants ensures that our trading strategies remain effective when targeting outliers, maximizing our chances of capitalizing on directional anomalies. Traders who opt for complex models often find their strategies overfit to noise, limiting their ability to extract signals of genuine market trends. In contrast, our emphasis on simplicity and flexibility improves the signal-to-noise ratio, enabling us to leverage a broader range of enduring trending signals within historical data.

With a focus on simplicity, flexibility, and maximizing sample size, we develop robust trend-following models that effectively navigate various market conditions and capture profitable opportunities.

Visual Mapping for System Evaluation

After implementing a design-first logic and developing simple models with few parameters, the next step is to utilize visual mapping for evaluating the trade outcomes derived from the design logic in relation to the characteristics of the price data. This process helps assess whether the systems are overfitting to the data or genuinely exploiting meaningful signals.

Understanding the nature of our trading models and their expected performance is crucial. When employing the Golden Rules in a trend-following model, we anticipate strong performance during trending market conditions and potential underperformance in non-trending markets. It is important to avoid selecting solely on the basis of optimal performance results. Instead, evaluating the performance outcomes of all candidate models against the nature of the price data is essential before making informed trading decisions.

Identifying significant trends or outliers in hindsight is relatively straightforward. If we observe that the models are actively participating during historical outliers while exhibiting lower activity during non-outlier periods, it provides confidence that the strategy is effectively capturing genuine signals rather than operating purely in noisy market conditions.

The purpose of visual mapping is to mitigate selection bias and avoid favouring the best-performing model alone. By linking trade outcomes to the characteristics of the price data through visual mapping, we significantly reduce the risk of overfitting our trading systems to noise. It’s important to note that this process is not a purely quantitative statistical method of selection; instead, it establishes a meaningful connection between system performance and design logic.

By utilizing visual mapping for evaluating trading systems, we gain valuable insights into their effectiveness and their alignment with the characteristics of the price data. This approach strengthens our decision-making process, allowing us to develop robust strategies capable of navigating diverse market conditions and capitalizing on genuine signals for successful trading outcomes.

Multi-Market Evaluation and Increased Sample Size

To minimize the risk of overfitting trading models to historical data, adopting a vast sample size through multi-market evaluation is crucial. Trend followers face challenges when dealing with outliers—unpredictable anomalies. Within a single market and timeframe, the occurrence of outliers in a 30-year historical record is limited. This results in a small sample size per return stream for capturing this specific signal.

To overcome this challenge, extensive multi-market and multi-timeframe testing is conducted using universal trend-following models. By diversifying across numerous markets and timeframes while using consistent models, parameter sets, and variable settings, the sample size is significantly increased. This expansion allows for thousands of outlier instances to be included in the analysis.

Adopting universal models that can be applied to various return streams reduces the risk of overfitting to a single market’s historical data. Outliers are a universal feature found in all liquid markets and are not tied to specific patterns associated with individual market histories. Therefore, it is important to remain underfit to any single market to achieve optimal alignment with the characteristics of all markets.

Market data can be normalized using techniques like Average True Range (ATR)-based methods, which treat each market’s history as just another potential market history. Through multi-market testing, a wide range of alternative past histories is leveraged to simulate and approximate potential future market conditions.
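One common way to implement that normalisation is sketched below (simplified: Wilder’s original ATR uses exponential-style smoothing, while a plain moving average is used here for brevity):

```python
import numpy as np

def atr(high, low, close, n=20):
    """Average True Range: an n-bar moving average of the true range."""
    prev_close = np.concatenate(([close[0]], close[:-1]))
    true_range = np.maximum.reduce([
        high - low,
        np.abs(high - prev_close),
        np.abs(low - prev_close),
    ])
    return np.convolve(true_range, np.ones(n) / n, mode="valid")

def atr_normalised_moves(close, high, low, n=20):
    """Bar-to-bar price changes in ATR units, making every market's
    history comparable regardless of price level or volatility."""
    a = atr(high, low, close, n)
    moves = np.diff(close[n - 1:])  # moves aligned with the ATR series
    return moves / a[:-1]           # each move scaled by the prior ATR

# Toy data: a steadily rising market with a constant 2-point true range.
close = np.arange(100.0)
high, low = close + 1.0, close - 1.0
normalised = atr_normalised_moves(close, high, low)
```

In ATR units, a five-point move in a quiet market and a fifty-point move in a volatile one can register as the same size of event, which is what allows one universal model to be tested across every market history as if it were just another potential history.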

By treating all markets equally and avoiding excessive tailoring to a single market’s characteristics, trend followers enhance the robustness of their trading systems. This approach, combined with a significantly increased sample size, validates the strategies and improves the reliability of the trading process.

Multi-market evaluation and the pursuit of a substantial sample size contribute to reducing overfitting risks and enhancing the effectiveness of trend-following strategies.

Utilizing the Full Data Set

In many trading methodologies, Out of Sample (OOS) tests are commonly used to validate the effectiveness of models. For trend following, however, where outliers are irregular and non-repeating signals, OOS testing is not recommended: every scrap of available data carries information a trend follower cannot afford to discard.

The methods outlined earlier are sufficient to significantly reduce the risk of overfitting in trend-following strategies, eliminating the need to set aside Out of Sample data for validation. Every piece of data is valuable to trend followers, and it is crucial to use as much of it as possible to thoroughly test the robustness of our approach.

By incorporating design-first logic, developing simple models with few parameters, conducting visual mapping, and embracing multi-market evaluation, trend followers can effectively mitigate overfitting without sacrificing the use of their valuable data. Utilizing the entire data set allows for comprehensive analysis and enhances the ability to assess the performance and reliability of trend-following strategies.

Recognizing the importance of maximizing the use of available data, trend followers can ensure the resilience and effectiveness of their trading approach. Utilizing the full data set provides a broader perspective and facilitates the identification of robust trends and patterns that can lead to successful trading outcomes.


Conclusion

In this article, we have explored the critical concepts of overfitting and underfitting in trading models as they relate to trend trading, and discussed various strategies to address them. The key points include:

  1. Understanding Overfitting and Underfitting: Overfitting occurs when models are too finely tuned to historical data, while underfitting refers to models that fail to capture the full potential of signals. Both can lead to unreliable and ineffective trading strategies.
  2. Tailored Approaches: Different trading techniques require specific methods to mitigate overfitting. For convergent models, techniques such as Monte Carlo simulation can be effective, while trend-following models require different strategies due to the unique characteristics of trending price series.
  3. Design First Logic: Adopting a design-first approach ensures that trading models are built on predefined design rules that exploit opportunities in trending price series. This helps in avoiding data mining processes that often lead to overfitting.
  4. Simplicity and Flexibility: Adopting simple models with few parameters reduces the risk of over-optimization and enhances adaptability across different market conditions. Emphasizing simplicity over precision allows for capturing a wider range of potential trends.
  5. Visual Mapping for Evaluation: Visual mapping is a valuable tool for evaluating trade outcomes and assessing if models are genuinely capturing meaningful signals. This process reduces the risk of overfitting by linking performance results to the characteristics of the price data.
  6. Multi-Market Evaluation and Sample Size: Extensive testing across multiple markets and timeframes increases the sample size and reduces the risk of overfitting to specific market conditions. Diversifying the testing scope provides a broader perspective and enhances the reliability of trading models.
  7. Utilizing the Full Data Set: In trend following, where outliers are irregular and non-repeating, the use of Out of Sample (OOS) testing is not recommended. Instead, utilizing the full data set allows for comprehensive analysis and robustness testing of trading strategies.

By incorporating these key points into our trading approach, we can mitigate the risks of overfitting and underfitting, ensuring the reliability and effectiveness of our strategies. Striking the right balance between capturing the desired edge and avoiding overfitting tendencies is essential for achieving long-term success in trading. With a comprehensive and tailored approach, we can navigate the challenges posed by overfitting and underfitting, maximizing our chances of consistent profitability in the dynamic world of trading.


Trade well and Prosper

The ATS mob
