Why not find all the reasons that might require a fork some day and fork just once, instead of several times?
Not all of the potential reasons have actionable solutions yet. Take quantum computing, for example: at some point in the future, probably many years from now, the encryption will almost certainly need to be changed to make it quantum-proof. We know this, but a quantum-proof algorithm hasn't been settled on yet, so we can't implement it in a fork until someone makes that breakthrough. There may also be events we haven't foreseen at all, which by definition we can't have a solution for in advance.

Obviously it's best to keep forks to a minimum, though, which is why I don't understand the fixation people seem to have on static blocksizes. The limit is clearly something that should be flexible and algorithmically determined, so it only has to be forked once rather than every time it needs changing.
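Just to illustrate what "algorithmically determined" could mean, here's a minimal sketch of one possible approach (not a concrete proposal; the window size, multiplier, and floor are arbitrary values I've picked for the example): derive the next block's size cap from the median size of recent blocks.

```python
# Sketch of an adaptive block size limit: cap the next block at some
# multiple of the median size of recent blocks, with a floor so the
# limit never shrinks below 1 MB. Window/multiplier/floor are
# illustrative assumptions, not a concrete proposal.
from statistics import median

WINDOW = 144          # look back roughly one day of blocks (assumed)
MULTIPLIER = 2        # allow up to 2x the recent median (assumed)
FLOOR = 1_000_000     # never drop below 1 MB (assumed)

def next_block_size_limit(recent_block_sizes):
    """Return the size limit for the next block, in bytes."""
    window = recent_block_sizes[-WINDOW:]
    return max(FLOOR, int(MULTIPLIER * median(window)))

# Example: if the median recent block is 800 kB, the next limit is 1.6 MB.
print(next_block_size_limit([800_000] * 144))  # -> 1600000
```

The point isn't this particular formula, just that once the limit is a function of observed demand, it adjusts on its own and never needs another fork to change the number.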
I would prefer we use soft forks where possible and move towards something like what Core has done with SegWit.
It should be strongly emphasized, though, that this won't always be an option. A soft fork works when adding new network rules or tightening existing ones. If we ever need to loosen or remove a rule, that can only take the form of a hard fork.
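A toy way to see why (purely illustrative; real consensus validation is far more involved, and the 1 MB / 2 MB figures are just placeholders): old nodes keep running the old validation rules, so a change only stays compatible if everything valid under the new rules is still valid under the old ones.

```python
# Toy illustration of why tightening rules can be a soft fork while
# loosening them must be a hard fork. Numbers are placeholders only.

OLD_LIMIT = 1_000_000  # old rule: blocks up to 1 MB

def old_rules_valid(block_size):
    return block_size <= OLD_LIMIT

def tightened_rules_valid(block_size):   # stricter subset of the old rules
    return block_size <= OLD_LIMIT // 2

def loosened_rules_valid(block_size):    # superset of the old rules
    return block_size <= OLD_LIMIT * 2

# Every block valid under the tightened rules is also valid to old nodes,
# so un-upgraded nodes follow the new chain without noticing (soft fork).
assert all(old_rules_valid(s)
           for s in range(0, OLD_LIMIT // 2, 50_000)
           if tightened_rules_valid(s))

# A 1.5 MB block is valid under the loosened rules but rejected by old
# nodes, which splits them onto a different chain (hard fork).
assert loosened_rules_valid(1_500_000) and not old_rules_valid(1_500_000)
```

That's the whole distinction in a nutshell: new rules that are a subset of the old ones can be rolled out as a soft fork, while anything that makes previously invalid blocks valid forces every node to upgrade.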