It wasn't quite the question I was expecting to be answered either (and not the method, as you already mentioned). It was quite an interesting approach, though.
But I was more expecting you to ask them how to safely select the best parameters for a trading model, and/or how to compensate a significance test for the fact that you had backfitted the best input parameters. (Surely the answer isn't that you don't have to?!)
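For what it's worth, the crudest way to compensate for having searched over several parameter values is a Bonferroni-style correction: if you tried k variants and kept the best, multiply its raw p-value by k (equivalently, test against alpha/k). It's conservative, but it's a sanity check. A minimal sketch in Python, with made-up numbers:

```python
def bonferroni_adjust(p_value, n_variants):
    """Adjust a raw p-value for having searched over n_variants settings.
    Bonferroni: multiply by the number of tests, capped at 1."""
    return min(1.0, p_value * n_variants)

# Hypothetical example: the best of 20 parameter tweaks looks
# significant at p = 0.03 on its own...
raw_p = 0.03
adjusted = bonferroni_adjust(raw_p, 20)
print(adjusted)  # 0.6 -- nowhere near significant once the search is accounted for
```

Bonferroni assumes the worst case (independent tests), so for highly correlated parameter variants it over-penalises; the fancier approaches (e.g. White's Reality Check style bootstraps) exist precisely to do better than this, which is presumably why the three methods found online all disagree.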
Maybe that question is just impossible to know the answer to, since you can never know how much you have overfitted. Perhaps the age-old method of using out-of-sample data for cross-validation is the way to go. But even that is fitting to the data to an extent.

marksmeets302 wrote:
Right now I'm looking at the following question: I have a system that works, and I can make it even better by tweaking one of the variables. By testing it on historical data I can pick the best value. I know that this can lead to overfitting, so if I do a test for significance again I have to compensate for the fact that I already picked the best variation. From the internet I've already found 3 approaches on how to do that and they all give different results...
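To make the overfitting point concrete, here's a toy Python sketch (everything hypothetical: random noise stands in for returns, and the "strategy" is deliberately silly). You pick the threshold that looks best in-sample, then evaluate it on held-out data; since the data is pure noise, the in-sample edge is entirely backfitted:

```python
import random

random.seed(1)
# Fake daily "returns" -- pure noise, so any apparent edge is overfitting.
returns = [random.gauss(0.0, 1.0) for _ in range(1000)]
in_sample, out_sample = returns[:500], returns[500:]

def strategy_pnl(data, threshold):
    """Hypothetical strategy: trade only on days whose 'signal'
    (here just the previous day's return) exceeds threshold."""
    return sum(r for prev, r in zip(data, data[1:]) if prev > threshold)

# The backfitting step: sweep thresholds and keep the in-sample winner.
candidates = [t / 10 for t in range(-10, 11)]
best = max(candidates, key=lambda t: strategy_pnl(in_sample, t))

# The honest step: score the same threshold on unseen data.
print(strategy_pnl(in_sample, best), strategy_pnl(out_sample, best))
```

On noise like this the out-of-sample number is typically far worse than the in-sample one, which is exactly the gap a proper significance correction has to account for. And as you say, once you start peeking at the out-of-sample result to choose between models, you're fitting to that data too.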
I'm actually warming to the method the students used now. I can see why they did it. It's just very dependent on your time-series model and relies on what's happened in the past (which is obviously a big problem). And it's not very helpful for our sports scenarios, as you already mentioned.