I object to your second sentence. The idea that one's complete plan of data analysis should be determined in advance is unjustified, even when the goal is to confirm a preexisting scientific hypothesis. On the contrary, any decent data analysis requires some attention to the data actually acquired. Researchers who believe otherwise are generally those who treat significance testing as the beginning and end of data analysis, with little or no role for descriptive statistics, plots, estimation, prediction, model selection, and so on. In that setting, the requirement to fix one's analytic plan in advance makes more sense, because the conventional calculation of p-values requires that the sample size and the tests to be conducted be decided before seeing any data. This requirement hamstrings the analyst, and is thus one of many good reasons not to use significance tests.
You might object that letting the analyst choose what to do after seeing the data invites overfitting. It does, but a good analyst will report all the analyses they conducted, say explicitly what information in the data was used to make analytic decisions, and use methods such as cross-validation appropriately. For example, it is generally fine to recode variables based on the obtained distribution of values, but choosing the 3 predictors out of 100 with the strongest observed association to the dependent variable means that the resulting estimates of association will be positively biased, by the principle of regression to the mean. If you want to do variable selection in a predictive context, you need to select variables inside your cross-validation folds, or using only the training data, as the sketch below illustrates.
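To make that last point concrete, here is a minimal sketch of the difference, assuming scikit-learn and NumPy are available. The data are simulated pure noise (no true association between predictors and outcome), so any apparent predictive skill from selecting predictors before cross-validation is exactly the bias described above; the 3-of-100 choice simply mirrors the example.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n, p = 50, 100
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)  # pure noise: no real signal to find

# Wrong: pick the 3 "best" predictors on the full data, then cross-validate.
# The selection step has already seen the held-out folds, so the CV score
# inherits the selection bias.
X_selected = SelectKBest(f_regression, k=3).fit_transform(X, y)
outside = cross_val_score(LinearRegression(), X_selected, y,
                          cv=5, scoring="r2").mean()

# Right: put selection inside a pipeline, so it is refit on each training
# fold and the held-out fold never informs which predictors survive.
pipe = make_pipeline(SelectKBest(f_regression, k=3), LinearRegression())
inside = cross_val_score(pipe, X, y, cv=5, scoring="r2").mean()

print(f"selection outside CV: mean R^2 = {outside:.2f}")  # spuriously high
print(f"selection inside CV:  mean R^2 = {inside:.2f}")   # near or below 0
```

Run on noise like this, the "outside" version will typically report a positive mean R^2 despite there being nothing to predict, while the pipeline version correctly reports a score near or below zero.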