Backtest is an advanced tool and needs to be used with extreme care. This article lists a few common mistakes and how to avoid them. The pitfalls are similar to the backtesting pitfalls known from the financial markets.
Overfitting
Overfitting occurs when the filter settings are specifically selected to match the historical data. Overfitted filter settings usually differ per level and market, and most of the time they are narrow.
Backtest results then usually look good, but the tips may perform poorly in real life.
You can avoid overfitting by using generic settings (like 80% minimum probability and 5% minimum expected value), as in the sketch below. Making sure you don't use in-sample testing is a good way to avoid it too.
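As an illustration of what generic settings look like in practice, here is a minimal Python sketch. The tip fields (probability, expected_value) are hypothetical placeholders, not the tool's actual schema.

```python
# A minimal sketch of filtering tips with generic settings.
# The tip fields below are hypothetical placeholders, not the
# backtest tool's actual schema.

MIN_PROBABILITY = 0.80     # generic: at least 80% estimated probability
MIN_EXPECTED_VALUE = 0.05  # generic: at least 5% expected value

def passes_generic_filter(tip: dict) -> bool:
    """Keep a tip only if it meets the broad, market-agnostic thresholds."""
    return (tip["probability"] >= MIN_PROBABILITY
            and tip["expected_value"] >= MIN_EXPECTED_VALUE)

tips = [
    {"probability": 0.83, "expected_value": 0.07},  # kept
    {"probability": 0.91, "expected_value": 0.02},  # dropped: EV too low
]
selected = [t for t in tips if passes_generic_filter(t)]
print(f"{len(selected)} of {len(tips)} tips pass the generic filter")
```

The point is that the thresholds are the same for every level and market, so they cannot be quietly bent to fit one slice of the history.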
In-Sample Testing
When the filters are tuned on the same time interval (for example, the whole history) as they are evaluated on, the results may be overly optimistic.
You can avoid in-sample testing by splitting the data into halves: you select your preferred settings on one half and evaluate the performance on the other.
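A minimal sketch of this two-halves split, assuming the tips are sorted by date. The field names, the toy data, and the evaluate_profit helper are illustrative assumptions, not part of the backtest tool itself.

```python
# Avoiding in-sample testing by splitting the tip history into halves.
# All names and data below are illustrative stand-ins.

all_tips = [  # toy history: date, estimated probability, profit per unit staked
    {"date": 1, "probability": 0.82, "profit": 0.9},
    {"date": 2, "probability": 0.78, "profit": -1.0},
    {"date": 3, "probability": 0.85, "profit": 0.8},
    {"date": 4, "probability": 0.90, "profit": -1.0},
]

candidate_settings = [{"min_prob": p / 100} for p in range(75, 96, 5)]

def evaluate_profit(tips, settings):
    """Total profit of the tips that pass the given filter settings."""
    return sum(t["profit"] for t in tips if t["probability"] >= settings["min_prob"])

tips = sorted(all_tips, key=lambda t: t["date"])
mid = len(tips) // 2
tuning_half, evaluation_half = tips[:mid], tips[mid:]

# Select settings on the first half only...
best = max(candidate_settings, key=lambda s: evaluate_profit(tuning_half, s))
# ...then judge them once on the unseen second half.
print("chosen settings:", best)
print("out-of-sample profit:", evaluate_profit(evaluation_half, best))
```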
Not enough data
If you select a narrow date range or use very specific filter settings, you may end up with a low number of tips that does not represent the performance you can expect.
Make sure to use filters that yield at least 200 tips. Also, always keep the p-value low, as it estimates how likely it is that the results are due to pure luck alone.
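One way to estimate such a p-value is a Monte Carlo simulation, sketched below. Under the null hypothesis of a fair market with no edge, a tip at decimal odds o wins with probability 1/o, so the expected profit is zero; the p-value is the share of simulated no-edge runs that do at least as well as the backtest. This is an illustration with made-up numbers, not necessarily the exact formula the tool uses.

```python
# Rough Monte Carlo p-value for a backtest profit. The odds list and
# observed profit are hypothetical example values.
import random

odds = [1.25, 1.30, 1.20, 1.28, 1.22]  # decimal odds of the backtested tips
observed_profit = 0.75                 # profit of the backtest, in units staked

def simulate_no_edge_profit(odds):
    """Profit of one run where each tip wins exactly at its implied probability."""
    return sum((o - 1) if random.random() < 1 / o else -1 for o in odds)

runs = 100_000
at_least_as_good = sum(simulate_no_edge_profit(odds) >= observed_profit
                       for _ in range(runs))
print("p-value ~", at_least_as_good / runs)
```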
Look Ahead Bias
Even if you make sure to avoid the pitfalls above, you may end up with a losing set of filters. So you start over. If the new filters still lose, you start over again. Repeating this too many times ultimately means the backtesting is effectively performed on the validation set, and you end up overfitting to the validation set.
Using generic settings and not repeating the same process many times helps to avoid look-ahead bias.
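One safeguard, sketched below, is to reserve a final hold-out set that is scored exactly once, after all rounds of tuning and retuning are finished. The three-way split, the toy data, and the helper are illustrative assumptions, not features of the backtest tool.

```python
# Limiting validation-set overfitting with a once-only hold-out set.
# All names and data below are illustrative stand-ins.

def three_way_split(tips):
    """Split date-sorted tips into tuning, validation and hold-out thirds."""
    a, b = len(tips) // 3, 2 * len(tips) // 3
    return tips[:a], tips[a:b], tips[b:]

tips = [{"date": d, "profit": p} for d, p in
        enumerate([0.8, -1.0, 0.9, -1.0, 0.7, 0.8, -1.0, 0.9, 0.6])]
tuning, validation, hold_out = three_way_split(tips)

# Tune on `tuning` and compare candidates on `validation`, remembering
# that every extra retry against `validation` leaks information.
final_settings = {"min_prob": 0.80}  # whatever survived tuning and validation

# Score the hold-out exactly once, as the very last step.
print("final verdict:", sum(t["profit"] for t in hold_out))
```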